Title: Discovering Adaptable Symbolic Algorithms from Scratch

URL Source: https://arxiv.org/html/2307.16890

Markdown Content:
I Introduction
II Related Work
III Methods
III-A Algorithm Representation
III-B Evolutionary Search
III-B1 Multi-Objective Search
III-B2 Single-Objective Search
III-C Algorithm Configurations and Baselines
IV Non-Stationary Task Domains
IV-A Quadruped Robot
IV-B Cataclysmic Cartpole Environment
V Results
V-A Quadruped Leg-Breaking
V-A1 Comparison with Baselines
V-A2 On Simplicity and Interpretability
V-B Cataclysmic Cartpole
V-B1 Comparison with Baselines
V-B2 On Simplicity and Interpretability
VI Conclusion and Discussion
S1 Methods Additional Details
S1-A Baseline Details
S1-B Quadruped Tasks
S1-C Cataclysmic Cartpole Tasks
S2 Additional Experiments: Cataclysmic Cartpole
S2-A Adaptation Gap
S2-B Adapting to Unseen Dynamics in Cataclysmic Cartpole
S3 Cartpole Algorithm Analysis
S4 Complexity Comparison
S4-A Baselines
S4-B Quadruped Leg-Breaking
S4-C Cataclysmic Cartpole
S4-D Discussion
S5 Search Space Additional Details
Discovering Adaptable Symbolic Algorithms from Scratch

Stephen Kelly (1, 4), Daniel S. Park (1), Xingyou Song (1, 2), Mitchell McIntire (3), Pranav Nashikkar (3), Ritam Guha (5), Wolfgang Banzhaf (5), Kalyanmoy Deb (5), Vishnu Naresh Boddeti (5), Jie Tan (1, 2), Esteban Real (1, 2)

(1) Google Research, (2) Google DeepMind, (3) Google, (4) McMaster University, (5) Michigan State University
Abstract

Autonomous robots deployed in the real world will need control policies that rapidly adapt to environmental changes. To this end, we propose AutoRobotics-Zero (ARZ), a method based on AutoML-Zero that discovers zero-shot adaptable policies from scratch. In contrast to neural network adaptation policies, where only model parameters are optimized, ARZ can build control algorithms with the full expressive power of a linear register machine. We evolve modular policies that tune their model parameters and alter their inference algorithm on-the-fly to adapt to sudden environmental changes. We demonstrate our method on a realistic simulated quadruped robot, for which we evolve safe control policies that avoid falling when individual limbs suddenly break. This is a challenging task in which two popular neural network baselines fail. Finally, we conduct a detailed analysis of our method on a novel and challenging non-stationary control task dubbed Cataclysmic Cartpole. Results confirm our findings that ARZ is significantly more robust to sudden environmental changes and can build simple, interpretable control policies.
Published and Best Overall Paper Finalist at IROS 2023.
Videos: https://youtu.be/sEFP1Hay4nE
Correspondence: spkelly@mcmaster.ca
I Introduction

Robots deployed in the real world will inevitably face many environmental changes. For example, robots' internal conditions, such as battery levels and physical wear-and-tear, and external conditions, such as new terrain or obstacles, imply that the system's dynamics are non-stationary. In these situations, a static controller that always maps the same state to the same action is rarely optimal. Robots must be capable of continuously adapting their control policy in response to the changing environment. To achieve this capability, they must recognize a change in the environment without an external cue, purely by observing how actions change the system state over time, and update their control in response. Recurrent deep neural networks are a popular policy representation to support fast adaptation. However, they are often (1) monolithic, which leads to the distraction dilemma when attempting to learn policies that are robust to multiple dissimilar environmental physics [1, 2]; (2) overparameterized, which can lead to poor generalization and long inference time; and (3) difficult to interpret. Ideally, we would like to find a policy that can express multiple modes of behavior while still being simple and interpretable.
```python
# wX: vector memory at address X.
def f(x, v, i):
  w0 = copy(v)
  w0[i] = 0
  w1 = abs(v)
  w1[0] = -0.858343 * norm(w2)
  w2 = w0 * w0
  return log(x), w1

# sX: scalar memory at address X.
# vX: vector memory at address X.
# obs, action: observation and action vectors.
def GetAction(obs, action):
  if s13 < s15: s5 = -0.920261 * s15
  if s15 < s12: s8, v14, i13 = 0, min(v8, sqrt(min(0, v3))), -1
  if s1 < s7: s7, action = f(s12, v0, i8)
  action = heaviside(v12)
  if s13 < s2: s15, v3 = f(s10, v7, i2)
  if s2 < s0: s11, v9, i13 = 0, 0, -1
  s7 = arcsin(s15)
  if s1 < s13: s3 = -0.920261 * s13
  s12 = dot(v3, obs)
  s1, s3, s15 = maximum(s3, s5), cos(s3), 0.947679 * s2
  if s2 < s8: s5, v13, i5 = 0, min(v3, sqrt(min(0, v13))), -1
  if s6 < s0: s15, v9, i11 = 0, 0, -1
  if s2 < s3: s2, v7 = f3(s8, v12, i1)
  if s1 < s6: s13, v14, i3 = 0, min(v8, sqrt(min(0, v0))), -1
  if s13 < s2: s7 = -0.920261 * s2
  if s0 < s1: s3 = -0.920261 * s1
  if s7 < s1: s8, action = f(s5, v15, i3)
  if s0 < s13: s5, v7 = f(s15, v7, i15)
  s2 = s10 + s3
  if s7 < s12: s11, v13 = f(s9, v15, i5)
  if s4 < s11: s0, v9, i13 = 0, 0, -1
  s10, action[i5] = sqrt(s7), s6
  if s7 < s9: s15 = 0
  if s14 < s11: s3 = -0.920261 * s11
  if s8 < s5: s10, v15, i1 = 0, min(v13, sqrt(min(0, v0))), -1
  return action
```
Figure 1: Automatically discovered Python code representing an adaptable policy for a realistic quadruped robot simulator (top-right inset). This evolved policy outperforms MLP and LSTM baselines when a random leg is suddenly broken at a random time. (Lines in red will be discussed in the text).
We propose AutoRobotics-Zero (ARZ), a new framework based on AutoML-Zero (AMLZ) [3] to specifically support the evolution of dynamic, self-modifying control policies in a realistic quadruped robot adaptation task. We represent these policies as programs instead of neural networks and demonstrate how the adaptable policy and its initial parameters can be evolved from scratch using only basic mathematical operations as building blocks. Evolution can discover control programs that use their sensory-motor experience to fine-tune their policy parameters or alter their control logic on-the-fly while interacting with the environment. This enables the adaptive behaviors necessary to maintain near-optimal performance under changing environmental conditions. Unlike the original AMLZ, we go beyond toy tasks by tackling the simulator for the actual Laikago robot [4]. To facilitate this, we shifted away from the supervised learning paradigm of AMLZ. We show that evolved programs can adapt during their lifetime without explicitly receiving any supervised input (such as a reward signal). Furthermore, while AMLZ relied on the hand-crafted application of three discovered functions, we allow the number of functions used in the evolved programs to be determined by the evolutionary process itself. To do this, we use conditional automatically defined functions (CADFs) and demonstrate their impact. With this approach, we find that evolved adaptable policies are significantly simpler than state-of-the-art solutions from the literature because evolutionary search begins with minimal programs and incrementally adds complexity through interaction with the task domain. Their behavior is highly interpretable as a result.
In the quadruped robot, ARZ is able to evolve adaptable policies that maintain forward locomotion and avoid falling, even when all motors on a randomly selected leg fail to generate any torque, effectively turning the leg into a passive double pendulum. In contrast, despite comprehensive hyperparameter tuning and being trained with state-of-the-art reinforcement learning methods, MLP and LSTM baselines are unable to learn robust behaviors under such challenging conditions.
While the quadruped is a realistic complex task, simulating the real robot is time-consuming. Due to the lack of efficient yet challenging benchmarks for adaptive control, we created a toy adaptation task dubbed Cataclysmic Cartpole and repeated our analysis on this task with similar findings. In both cases, we provide a detailed analysis of evolved control programs to explain how they work, something notoriously difficult with black box neural network representations.
In summary, this paper develops an evolutionary method for the automated discovery of adaptable robotic policies from scratch. We applied the method to two tasks in which adaptation is critical, Quadruped Leg-Breaking and Cataclysmic Cartpole. On each task, the resulting policies:
- surpass carefully-trained MLP and LSTM baselines;
- are represented as interpretable, symbolic programs; and
- use fewer parameters and operations than the baselines.
These points are demonstrated for each task in Section V.
II Related Work

Early demonstrations of Genetic Programming (GP) established its power to evolve optimal nonlinear control policies from scratch that were also simple and interpretable [5]. More recently, GP has been used to distill the behavior of complex neural network policies developed with Deep Reinforcement Learning into interpretable and explainable programs without sacrificing control quality [6]. In this work, we extend these methods to evolve programs that can change their behavior in response to a changing environment.
We demonstrate how to automatically discover a controller that can context switch between distinct behavior modes when it encounters diverse tasks, thus avoiding trade-offs associated with generalization across diverse environmental physics. If we can anticipate the nature of the environmental change a robot is likely to encounter, we can simulate environments similar to the expected changes and focus on building multitask control policies [2, 7]. In this case, some form of domain randomization [8] is typically employed to expose candidate policies to a breadth of task dynamics. However, policies trained with domain randomization often trade optimality in any particular environment dynamics for generality across a breadth of dynamics. This is the problem we aim to address with ARZ. Unlike previous studies in learning quadruped locomotion in the presence of non-stationary morphologies (e.g., [9]), we are specifically interested in how controllers can be automatically built from scratch without requiring any prior task decomposition or curriculum learning. This alleviates some burden on robotics engineers and reduces researcher bias toward known machine learning algorithms, opening the possibility for a complex adaptive system to discover something new.
In addition to anticipated non-stationary dynamics, another important class of adaptation tasks in robotics is sim-to-real transfer [11], where the robot needs to adapt policies trained in simulation to unanticipated characteristics of the real world. Successful approaches to learn adaptive policies can be categorized by three broad areas of innovation: (1) New adaptation operators that allow policies to quickly tune their model parameters within a small number of interactions [10, 11, 12, 13]; (2) Modular policy structures that separate the policy from the adaptation algorithm and/or world model, allowing both to be learned [14, 15, 16, 17]; and (3) Hierarchical methods that allow a diverse set of complete or partial behaviors to be dynamically switched in and out of use at run-time, adapting by selecting the best strategy for the current environmental situation [9, 2, 18]. These algorithmic models of behavioral plasticity, modular structure, and hierarchical representations reflect the fundamental properties of meta-learning. In nature, these properties emerged through adaptation at two timescales (evolution and lifetime learning) [19]. ARZ makes these two timescales explicit by implementing an evolutionary search loop that acts on a "genome" of code, and an evaluation that steps through an episode, which is analogous to the "lifetime" of the robot.
III Methods
III-A Algorithm Representation

As in the original AutoML-Zero [3], policies are represented as linear register machines that act on virtual memory [20]. In this work, we support four types of memory: scalar, vector, matrix, and index (e.g. s1, v1, m1, i1). Scalar, vector, and matrix memory are floating-point, while index memory stores integers. Algorithms are composed of two core functions: StartEpisode() and GetAction(). StartEpisode() runs once at the start of each episode of interaction with the environment. Its sole purpose is to initialize the contents of virtual memory with evolved constants. The content of these memories at any point in time can be characterized as the control program's state. Our goal is to discover algorithms that can adapt by tuning their memory state or altering their control code on-the-fly while interacting with their environment. This adaptation, as well as the algorithm's decision-making policy, are implemented by the GetAction() function, in which each instruction executes a single operation (e.g. s0=s7*s1 or s3=v1[i2]). We define a large library of operations (Table S2) and place no bounds on the complexity of programs. Evolutionary search is employed to discover what sequence of operations and associated memory addresses appear in the GetAction() function.
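As an illustration of this style of representation, the sketch below shows a tiny interpreter executing a list of register-machine instructions against typed virtual memory. This is a minimal toy, not the ARZ system: the instruction set, addresses, and output convention are invented for illustration.

```python
import numpy as np

N_ADDR, DIM = 4, 3  # invented sizes for this toy

def run_program(program, obs):
    # Typed virtual memory: scalar, vector, and index registers.
    s = np.zeros(N_ADDR)             # s0..s3
    v = np.zeros((N_ADDR, DIM))      # v0..v3
    i = np.zeros(N_ADDR, dtype=int)  # i0..i3
    v[1] = obs                       # observation copied into a fixed address
    for op, dst, a, b in program:
        if op == "mul":     s[dst] = s[a] * s[b]   # scalar product
        elif op == "dot":   s[dst] = v[a] @ v[b]   # vector dot product
        elif op == "index": s[dst] = v[a][i[b]]    # indexed vector read
        elif op == "scale": v[dst] = s[a] * v[b]   # scalar-vector scaling
    return s[3]  # action read from a fixed scalar address

# A three-instruction "program" acting on the observation in v1.
program = [
    ("dot", 0, 1, 1),    # s0 = dot(v1, v1)
    ("scale", 2, 0, 1),  # v2 = s0 * v1
    ("dot", 3, 2, 1),    # s3 = dot(v2, v1)
]
print(run_program(program, np.array([1.0, 0.0, 2.0])))  # -> 25.0
```

Evolutionary search over such a representation amounts to editing the instruction list and its addresses, which is what the mutation operators in Section III-B act on.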
Conditional Automatically Defined Functions: In addition to StartEpisode() and GetAction(), up to 6 Conditionally-invoked Automatically Defined Functions [21] (CADFs) may be generated in an algorithm. Each CADF represents an additional function block, itself automatically discovered, which is callable from GetAction(). Since each CADF is conditionally invoked, the sequence of CADFs executed at each timestep throughout an episode is dynamic. This property is advantageous for multi-task learning and adaptation because programs that can switch control code in and out of the execution path on-the-fly are able to dynamically integrate general, reusable code for related tasks and specialized code for disjoint tasks. We demonstrate in Section IV how this improves performance for the quadruped task. Each CADF receives 4 scalars, 2 vectors, and 2 indices as input, and execution of the function is conditional on a < comparison of the first 2 scalars (a configuration chosen for simplicity). The set of operations available is identical to GetAction() except that CADFs may not call each other, to avoid infinite recursion. Each CADF uses its own local memory of the same size and dimensionality as the main memory used by StartEpisode() and GetAction(). Their memory is initialized to zero at the start of each episode and is persistent across timesteps, allowing functions to integrate variables over time. Post-execution, the CADF returns the single most recently written index, scalar, and vector from its local memory.
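A minimal sketch of these CADF semantics (the class, its body, and all names are invented for illustration; this is not code from the paper): invocation is gated on a < comparison of the first two scalar arguments, local memory persists across timesteps, and the function returns its most recently written scalar, vector, and index.

```python
import numpy as np

class CADF:
    def __init__(self, n_addr=4, dim=2):
        # Local memory: zeroed at episode start, persistent across steps.
        self.s = np.zeros(n_addr)
        self.v = np.zeros((n_addr, dim))
        self.i = np.zeros(n_addr, dtype=int)

    def __call__(self, s0, s1, s2, s3, v0, v1, i0, i1):
        if not (s0 < s1):   # conditional invocation gate
            return None     # skipped: local memory is left untouched
        self.s[0] += s2     # persistent memory can integrate inputs over time
        self.v[0] = v0 + v1
        self.i[0] = i0
        # Return the most recently written scalar, vector, and index.
        return self.s[0], self.v[0], self.i[0]

f = CADF()
print(f(0.0, 1.0, 2.5, 0.0, np.ones(2), np.ones(2), 3, 0))  # gate open
print(f(1.0, 0.0, 2.5, 0.0, np.ones(2), np.ones(2), 3, 0))  # gate closed -> None
```

Because the gate depends on evolving scalar state, which CADFs run at a given timestep can change over the episode, which is what makes the execution path dynamic.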
The policy-environment interface and evaluation procedure are illustrated in Fig. 2. Sections V-A and V-B provide examples of evolved programs in this representation for the quadruped robot and Cataclysmic Cartpole task, respectively.
```python
# StartEpisode = initialization code.
# GetAction = control algorithm.
# Sim = simulation environment.
# episodes = number of evaluation episodes.
# sX/vX/mX/iX: scalar/vector/matrix/index memory
# at address X.
def EvaluateFitness(StartEpisode, GetAction):
  sum_reward = 0
  sum_steps = 0  # (initialization missing in the original listing)
  for e in episodes:
    reward = 0
    steps = 0
    # Initialize sX/vX/mX with evolved parameters.
    # iX is initialized to zero.
    StartEpisode()
    # Set environment initial conditions.
    state = Sim.Reset()
    while (!Sim.Terminal()):
      # Copy state to memory, will be accessible
      # to GetAction.
      v1 = state
      # Execute action-prediction instructions.
      GetAction(state)
      if Sim.NumAction() > 1:
        action = v4
      else:
        action = s3
      state = Sim.Update(action)
      reward += Reward(state, action)
      steps += 1
    sum_reward += reward
    sum_steps += steps
  return sum_reward/episodes, sum_steps/episodes
```
Figure 2: Evaluation process for an evolved control algorithm. The single-objective evolutionary search uses the mean episodic reward as the algorithm's fitness, while the multi-objective search optimizes two fitness metrics: mean reward (first return value) and mean steps per episode (second return value).
III-B Evolutionary Search

Two evolutionary algorithms are employed in this work: multi-objective search with the Nondominated Sorting Genetic Algorithm II (NSGA-II) [22] and single-objective search with Regularized Evolution (RegEvo) [23, 3]. Both search algorithms iteratively update a population of candidate control programs using an algorithmic model of the Darwinian principle of natural selection. The generic steps for evolutionary search are:

1. Initialize a population of random control programs.
2. Evaluate each program in the task (Fig. 2).
3. Select promising programs using a task-specific fitness metric (see Fig. 2 caption).
4. Modify selected individuals through crossover and then mutation (Fig. S1).
5. Insert new programs into the population, replacing some proportion of existing individuals.
6. Go to step 2.
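The loop above can be sketched end-to-end in a few lines. This toy version is not the paper's implementation: the "program" is a stand-in list of numbers with an invented fitness function, selection is a simple tournament in the style of regularized evolution, and step 5 replaces the oldest individual.

```python
import random

def fitness(program):
    # Invented stand-in objective: closer to 0.5 per gene is better.
    return -sum((x - 0.5) ** 2 for x in program)

def mutate(program):
    # Step 4 (mutation only here): overwrite one random gene.
    child = list(program)
    child[random.randrange(len(child))] = random.random()
    return child

def evolve(pop_size=20, generations=200, seed=0):
    random.seed(seed)
    # Step 1: initialize a population of random "programs".
    pop = [[random.random() for _ in range(4)] for _ in range(pop_size)]
    for _ in range(generations):
        # Steps 2-3: evaluate and select a promising parent (tournament).
        parent = max(random.sample(pop, 5), key=fitness)
        child = mutate(parent)
        # Step 5: insert the child, replacing the oldest individual.
        pop.pop(0)
        pop.append(child)
    # Step 6 is the loop itself; return the best survivor.
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

Swapping the stand-in list for a register-machine program, and the tournament for NSGA-II's nondominated sorting, recovers the two concrete configurations described next.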
For the purposes of this study, the most significant difference between NSGA-II and RegEvo is their selection method. NSGA-II identifies promising individuals using multiple fitness metrics (e.g., forward motion and stability) while RegEvo selects based on a single metric (forward motion). Both search methods simultaneously evolve: (1) Initial algorithm parameters (i.e. initial values in floating-point memory sX, vX, mX), which are set by StartEpisode(); and (2) Program content of the GetAction() function and CADFs.
III-B1 Multi-Objective Search

In the Quadruped robot tasks, the goal is to build a controller that continuously walks at a desired pace in the presence of motor malfunctions. It is critical that real-world robots avoid damage associated with falling, and the simplest way for a robot to achieve this is by standing relatively still and not attempting to move forward after it detects damage. As such, this domain is well suited to multi-objective search because walking in the presence of unpredictable dynamics while maintaining stability are conflicting objectives that must be optimized simultaneously. In this work, we show how NSGA-II maintains a diverse population of control algorithms covering a spectrum of trade-offs between forward motion and stability. From this diverse population of partial solutions, or building blocks, evolutionary search operators (mutation and crossover) can build policies that are competent in both objectives. NSGA-II objective functions and constraints for the quadruped robot task are discussed in Section IV.
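The Pareto-dominance test that NSGA-II's selection builds on can be illustrated directly. In the sketch below (not from the paper), candidates are scored on two objectives, forward motion and stability, with values invented for illustration; the nondominated set is the spectrum of trade-offs the population retains.

```python
def dominates(a, b):
    # a dominates b if it is no worse in every objective and strictly
    # better in at least one (both objectives maximized here).
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def nondominated(points):
    # The Pareto front: points not dominated by any other point.
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (forward_motion, stability) scores, invented for illustration.
candidates = [(0.9, 0.2), (0.5, 0.8), (0.4, 0.4), (0.9, 0.8)]
print(nondominated(candidates))  # -> [(0.9, 0.8)]
```

When no single candidate dominates (e.g. one walks fast but falls, another stands stably but barely moves), both survive on the front, which is how the population keeps the partial solutions the text describes.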
241
+
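To illustrate the selection idea (a sketch of Pareto dominance, not the authors' NSGA-II implementation), the non-dominated front over (forward motion, stability) score pairs can be computed as:

```python
def dominates(a, b):
    """True if point a is at least as good as b in every objective and strictly better in one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of (reward, steps) objective pairs."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]
```

For example, a policy with scores (300, 500) is dominated by one with (570, 900) and is dropped, while policies trading reward against survival steps all remain on the front.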
III-B2 Single-Objective Search

The Cataclysmic Cartpole task provides a challenging adaptation benchmark environment without the safety constraints and simulation overhead of the real-world robotics task. To further simplify our study of adaptation and reduce experiment time in this task, we adopt the RegEvo search algorithm and optimize it for fast experimentation. Unlike NSGA-II, asynchronous parallel workers in RegEvo also perform selection, which eliminates the bottleneck of waiting for the entire population to be evaluated before ranking, selecting, and modifying individuals.

Crossover and Mutation Operators: We use a simple crossover operator that swaps a randomly selected CADF between two parent algorithms. Since all CADFs have the same argument list and return value format, no signature matching is required to select crossover points. If either parent algorithm contains no CADFs, one randomly selected parent is returned. Post-crossover, the child program is subject to stochastic mutation, which adds, removes, or modifies code using the operators listed in Table S1.

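The CADF-swap crossover just described might be sketched as follows; representing a program as a dict with a `cadfs` list is a hypothetical simplification for illustration.

```python
import random

def crossover(parent_a, parent_b, rng=random):
    """Swap one randomly selected CADF between two parents (sketch).

    Since all CADFs share the same signature, any pair can be swapped
    without signature matching. If either parent has no CADFs, a copy of
    a randomly selected parent is returned instead.
    """
    if not parent_a["cadfs"] or not parent_b["cadfs"]:
        return dict(rng.choice([parent_a, parent_b]))
    child = {k: list(v) for k, v in parent_a.items()}
    i = rng.randrange(len(child["cadfs"]))
    j = rng.randrange(len(parent_b["cadfs"]))
    child["cadfs"][i] = parent_b["cadfs"][j]
    return child
```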
III-C Algorithm Configurations and Baselines

Temporal memory is the primary mental system that allows an organism to change, learn, or adapt during its lifetime. To predict the best action for a given situation in a dynamic environment, the policy must be able to compare the current situation with past situations and actions. This is because generating an appropriate action depends on the current state and a prediction of how the environment is changing. Our evolved algorithms are able to adapt partly because they are stateful: the contents of their memory (sX, vX, mX, and iX) persist across the timesteps of an episode.

We compare ARZ against stateless and stateful baselines. These policy architectures consist, respectively, of multilayer perceptrons (MLP) and long short-term memory (LSTM) networks whose optimized parameters are purely continuous. We therefore train them with Augmented Random Search (ARS) [24], a state-of-the-art continuous optimizer that has been shown to be particularly effective in learning robot locomotion tasks [12, 25]. In comparison, Proximal Policy Optimization [26] underperformed significantly; we omit those results and leave further investigation to future work. All methods were allowed to train until convergence; details are in Supplement S1-A.

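For reference, a basic ARS-style update (after Mania et al. [24]) can be sketched as below. This is our simplified sketch, not the baseline's exact configuration; `rollout` is an assumed black box returning an episode return for a parameter vector.

```python
import numpy as np

def ars_step(theta, rollout, n_dirs=8, top_k=4, noise=0.03, lr=0.02, rng=None):
    """One ARS (V1-t style) update on continuous policy parameters theta."""
    rng = np.random.default_rng() if rng is None else rng
    deltas = [rng.standard_normal(theta.shape) for _ in range(n_dirs)]
    # Evaluate symmetric perturbations theta +/- noise * delta.
    evals = [(rollout(theta + noise * d), rollout(theta - noise * d), d) for d in deltas]
    # Keep only the top_k directions, ranked by max(r_plus, r_minus).
    evals.sort(key=lambda e: max(e[0], e[1]), reverse=True)
    top = evals[:top_k]
    sigma = np.std([r for rp, rm, _ in top for r in (rp, rm)]) + 1e-8
    # Step along the reward-difference-weighted sum of directions.
    grad = sum((rp - rm) * d for rp, rm, d in top) / (top_k * sigma)
    return theta + lr * grad
```

On a toy quadratic "return", repeated updates move the parameters toward the optimum.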
IV Non-Stationary Task Domains

We consider two different environments: a realistic simulator for a quadruped robot and the novel Cataclysmic Cartpole. In both cases, policies must handle changes in the environment’s transition function that would normally impede their proper function. These changes may be sudden or gradual, and no sensor input is provided to indicate when a change is occurring or how the environment is changing.

IV-A Quadruped Robot

We use the Tiny Differentiable Simulator [27] to simulate the Unitree Laikago robot [4], a quadruped with 3 actuated degrees of freedom per leg. The action space thus consists of 12 real values corresponding to desired motor angles, which a Proportional-Derivative controller tracks. The observation space includes 37 real values describing the angle and velocity of each joint as well as the position, orientation, and velocity of the robot body. Each episode begins with the robot in a stable upright position and continues for a maximum of 1000 timesteps (10 seconds). Each action suggested by the policy is repeated for 10 consecutive steps.

The goal of the non-stationary quadruped task is to move forward (along the x-axis) at 1.0 meters/second. Adaptation must handle sudden leg breaking, in which all joints on a single, randomly selected leg suddenly become passive at a random time within each episode. The leg effectively becomes a double pendulum for the remainder of the episode. The episode terminates early if the robot falls, which results in lower return. We design the following reward function:

$$r(t) = 1.0 - 2\,\lvert v(t) - \bar{v}\rvert - \lVert \vec{a}(t) - \vec{a}(t-1) \rVert_{2}\,, \qquad (1)$$

where the first term 1.0 is the survival bonus, $\bar{v}$ is the target forward velocity of 1 m/s, $v(t)$ is the robot’s current forward velocity, and $\vec{a}(t)$ and $\vec{a}(t-1)$ are the policy’s current and previous action vectors. This reward function is shaped to encourage the robot to walk at a constant speed for as long as possible while alleviating motor stress by minimizing the change in joint acceleration. In the context of multi-objective search, maximizing the mean of Equation 1 over a maximum of 1000 timesteps is Objective 1. To discourage behaviors that deviate too much along the y-axis, we terminate an episode if the robot’s y-axis location exceeds $\pm 3.0$ meters. Objective 2 is simply the number of timesteps the robot was able to survive without falling or reaching this y-axis threshold. Importantly, we are not interested in policies that simply stand still. Thus, if Objective 2 is greater than 400 and Objective 1 is less than 50, both fitnesses are set to 0. As shown in Fig. S2, these fitness constraints eliminate policies that would otherwise persist in the population without contributing to progress on the forward motion objective.

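Equation 1 and the anti-standing fitness constraint can be sketched directly; the function signatures and constant names below are illustrative, not the simulator's actual API.

```python
import numpy as np

TARGET_V = 1.0  # target forward velocity in m/s

def reward(v_t, action_t, action_prev):
    """Per-step reward of Equation 1: survival bonus minus velocity-tracking
    and action-smoothness penalties."""
    a_t = np.asarray(action_t, dtype=float)
    a_prev = np.asarray(action_prev, dtype=float)
    return 1.0 - 2.0 * abs(v_t - TARGET_V) - float(np.linalg.norm(a_t - a_prev))

def constrained_fitness(obj1_reward, obj2_steps):
    """Zero out both objectives for policies that survive by standing still."""
    if obj2_steps > 400 and obj1_reward < 50:
        return 0.0, 0.0
    return obj1_reward, obj2_steps
```

At the target velocity with an unchanged action vector the step reward is exactly the survival bonus; a long-lived but near-motionless policy gets both objectives zeroed.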
IV-B Cataclysmic Cartpole Environment

To study the nature of adaptation in more detail, we introduce a new, highly challenging but computationally simple domain called Cataclysmic Cartpole, in which multiple aspects of the classic Cartpole physics [28] are made dynamic. Adaptation must handle the following non-stationary properties:

∙ Track Angle: The track tilts to a random angle at a random time. Because the robot’s frame of reference for the pole angle $\theta$ is relative to the cart, it must figure out the new direction of gravity and the desired value of $\theta$ to maintain balance, and respond quickly enough to keep the pole balanced. The track angle varies in [-15, 15] degrees. This simulates a change in the external environment.

∙ Force: A force multiplier $f$ is applied to the policy’s action such that its actuator strength may increase or decrease over time. The policy’s effective action is $f \times \text{action}$, where $f$ changes over time within the range [0.5, 2]. This simulates, for example, a drop in actuator strength due to a low battery.

∙ Damping: A damping factor $D$ simulates variable joint friction by modifying the joint torque as $\tau_D = -D\,\dot{q_r}$, where $\dot{q_r}$ is the joint velocity (see Eqns. 2.81 and 2.83 in [29]). $D$ changes over time in the range [0.0, 0.15]. This simulates joint wear and tear.

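The three perturbations reduce to one-line formulas, restated here as a sketch with assumed names; the gravity-component helper is our own illustration of the track-tilt effect, not an equation stated by the benchmark.

```python
import math

def gravity_along_track(angle_deg, g=9.8):
    """Component of gravity along a track tilted by angle_deg in [-15, 15] degrees."""
    return g * math.sin(math.radians(angle_deg))

def effective_action(action, f):
    """Actuator-strength change: the policy's action is scaled by f in [0.5, 2]."""
    return f * action

def damping_torque(q_r_dot, D):
    """Variable joint friction: tau_D = -D * q_r_dot, with D in [0.0, 0.15]."""
    return -D * q_r_dot
```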
Each type of change is controlled by a single parameter. We investigate two schedules for how these parameters might change during an episode, illustrated in Fig. S4.

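The two schedules can be sketched as parameter trajectories over an episode; the timing constants in the usage example are illustrative, not the paper's exact values.

```python
def sudden_schedule(t, t_change, before, after):
    """Parameter jumps from `before` to `after` at step t_change."""
    return before if t < t_change else after

def continuous_schedule(t, t_start, t_end, before, after):
    """Parameter ramps linearly from `before` to `after` over [t_start, t_end]."""
    if t <= t_start:
        return before
    if t >= t_end:
        return after
    frac = (t - t_start) / (t_end - t_start)
    return before + frac * (after - before)
```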
V Results

V-A Quadruped Leg-Breaking

V-A1 Comparison with Baselines

ARZ, with the inclusion of CADFs, is the only method that produced a viable control policy in the leg-breaking task. This problem is exceedingly difficult: finding a policy that maintains smooth locomotion and is robust to leg breaking requires 20 evolution experiment repetitions (Fitness > 600 in Fig. 2(a)). In Fig. 2(a), training fitness between 500 and 600 typically indicates either (1) a viable forward gait that is robust to only 3 of the 4 legs breaking or (2) a policy robust to any leg breaking but operating at a frequency too high for a real robot, with its reward significantly penalized by fitness shaping as a result. Within the single best repeat, the NSGA-II search algorithm produces a variety of policies with performance trade-offs between smooth forward locomotion (reward objective) and stability (steps objective), Fig. 2(b). From this final set of individuals, we select a single policy to compare with the single best policy from each baseline. Due to practical wall-clock time limits, we were only able to train the ARS+MLP and ARS+LSTM policies up to 10^6 trials in total, but found that under this sample limit even the best ARS policy only achieved a reward of 360, much lower than the 570 found by the best ARZ policy, suggesting that ARZ can be more sample-efficient than standard neural network baselines.

(a) Evolution progress  (b) Best Pareto fronts

Figure 3: CADFs speed up evolution on average and produced the best final result. (a) shows ARZ search data recorded over 20 independent repeats with and without the use of CADFs. The horizontal axis shows the total number of individual programs evaluated, while the vertical axis shows the mean return (Equation 1) over 32 episodes for the single best individual discovered so far. (b) shows Pareto fronts for the single repeats with max reward from each experiment. Each point in (b) represents the bi-objective fitness of one control program.

Fig. 4 confirms that ARZ is the only method capable of building a controller that is robust to multiple different legs breaking mid-episode. We plot post-training test results for one champion ARZ policy in comparison with the single best controller discovered by ARS+MLP and ARS+LSTM. ARZ’s adaptation quality (as measured by mean reward) is superior to the baselines for each individual leg, and its performance on the stationary task (see "None" in Fig. 4) is significantly better than any other method’s. Interestingly, Fig. 4 indicates that the MLP also learned a policy that is robust to the specific case of the back-right leg breaking; unlike ARZ, however, it is unable to generalize this adaptation to any other leg. Finally, while the LSTM policy performed better than the MLP on the stationary task, it fails to adapt to any of the leg-breaking scenarios.

Figure 4: ARZ discovers the only policy that can adapt to any leg breaking. The plot shows test results for the single best policy from ARZ and the ARS baselines (MLP and LSTM) in the mid-episode leg-breaking task. For each leg, bars show mean reward over 100 episodes in which that leg is broken at a randomly selected timestep. A reward < 400 in any column indicates that the majority of test episodes for that leg ended with a fall.

Visualizing trajectories for a sample of 5 test episodes from Fig. 4 confirms that the ARZ policy is the only controller that avoids falling in all scenarios, although in the case of the front-left leg breaking it has trouble maintaining forward motion (Fig. 5). This is reflected in its relatively weak test reward for the front-left leg (see Fig. 4). The MLP policy manages to keep walking with a broken back-right leg but falls in all other dynamic tasks. The LSTM, finally, is only able to avoid falling in the stationary task, in which all legs are reliable.

(a) ARZ  (b) MLP  (c) LSTM

Figure 5: ARZ discovers the only policy that consistently avoids falling. The plot shows sample trajectories in each leg-breaking task. The vertical bar indicates the change point (step 500); ▲ indicates that the robot fell over. Each plot shows 4 test episodes in which a unique leg breaks. From top to bottom, the affected legs are: None, Back-Left, Back-Right, Front-Left, Front-Right.

V-A2 On Simplicity and Interpretability

The policy for the Quadruped Leg-Breaking task discovered by evolutionary search is presented in Fig. 1. This algorithm uses 608 parameters and can be expressed in fewer than 40 lines of code, executing at most 2080 floating point operations (FLOPs) per step. This should be contrasted with the number of parameters and FLOPs expended by the baseline MLP/LSTM models, which use more than 2.5k/9k parameters and 5k/18k FLOPs per step, respectively. A detailed account of how these numbers were obtained can be found in Section S4. We note that each function possesses its own variables and memory, which persist throughout the run. The initialization values of the variables are tuned for the GetAction function and thus counted as parameters, while they are all set to zero for f.

Here we provide an initial analysis of the ARZ policy, leaving a full analysis and interpretation of the algorithm to future work. The key feature of the algorithm is that it discretizes the input into four states, and the action of the quadruped is completely determined by its internal state and this discrete label. The temporal transitions of the discretized states show a stable periodic motion when no leg is broken, and leg breaking introduces a clear disruption in this pattern, as shown in Fig. 6. Since this is a stateful algorithm, with multiple variables accumulating and preserving values from previous steps, we conjecture that the temporal pattern of the discrete states serves as a signal for the adaptive behavior of the quadruped.

Figure 6: State trajectories for various leg-breaking patterns. The leg-breaking event is marked by a vertical red line. Note that different leg-breaking patterns result in different state trajectories. We conjecture that these trajectories serve as signals that trigger the adaptive response in the algorithm.

We now expand upon how the continuous input signal is discretized in the ARZ algorithm presented in Fig. 1. We first observe that the only way the incoming observation vector interacts with the rest of the algorithm is by forming the scalar s12, obtained by taking an inner product with a dynamical vector v3 (the second of the three red-colored lines of code). The scalar s12 affects the action only through the two if statements colored in red. Thus the effect of the input observation on the action is entirely determined by the position of the scalar s12 relative to the two decision boundaries set by the scalars s15 and s7. In other words, the external input of the observation to the system is effectively discretized into four states: 0 (s12 ≤ s15, s7), 1 (s15, s7 < s12), 2 (s7 < s12 ≤ s15), or 3 (s15 < s12 ≤ s7).

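The four-state discretization just described can be written out explicitly. Register names follow the description above, with s12 assumed to come from the inner product of v3 with the observation; this is our restatement, not the evolved program's literal code.

```python
def discretize(s12, s15, s7):
    """Map the projected observation s12 to one of four states using the
    decision boundaries s15 and s7."""
    if s12 <= s15 and s12 <= s7:
        return 0  # s12 at or below both boundaries
    if s12 > s15 and s12 > s7:
        return 1  # s12 above both boundaries
    if s7 < s12 <= s15:
        return 2
    return 3      # s15 < s12 <= s7
```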
Thus external changes in the environment, such as leg breaking, can be accurately detected by the change in the pattern of the state trajectory, because the variables s7 and s15 defining the decision boundaries of the states form stable periodic functions in time. We demonstrate this in Fig. 7, where we plot the values of the three scalars s12, s15, and s7 for front-leg breaking, whose occurrence is marked by the vertical red line. Despite the marked change in the behavior of the input s12 after leg breaking, the behavior of the two scalars s7 and s15 is only marginally affected. Intriguingly, the behavior of the scalar registers s7 and s15 resembles that of central pattern generators in biological circuits responsible for generating rhythmic movements [30].

Figure 7: The scalar values s12, s15, and s7 of the quadruped during front-leg breaking. Note the consistent periodic behavior of the scalars s15 and s7 despite leg breaking, marked by the vertical red line. The same periodicity is observed for all leg-breaking scenarios analyzed.

The policy’s ability to quickly identify and adapt to multiple unique failure conditions is clear in Fig. 8(a), which plots the controller’s actions one second before and after a leg breaks. We see a clear, instantaneous change in behavior when a leg fails: this policy is able to identify when a change has occurred and rapidly adapt. Fig. 8(b) shows the particular sequence of CADFs executed at each timestep before and after the change, indicating that CADFs do play a role in the policy’s ability to rapidly adjust its behavior. Indeed, only evolutionary runs that included CADFs were able to discover a policy robust to any leg breaking.

(a) Actions  (b) CADF call sequences

Figure 8: ARZ policy behavior changes when the Front-Left leg breaks mid-episode (step 500), as shown by the dynamics of the actions and the program control flow due to CADFs.

V-B Cataclysmic Cartpole

Introducing a novel benchmark adaptation task is an informative addition to results in the realistic quadruped simulator because we can empirically adjust the nature of the benchmark dynamics until they are significant enough to create an adaptation gap: stateless policies (i.e., MLP generalists) fail to perform well because they cannot adapt their control policy in the non-stationary environment (see Section S2 for details). Having confirmed that Cataclysmic Cartpole requires adaptation, we examine only stateful policies in this task.

V-B1 Comparison with Baselines

In Cataclysmic Cartpole, we confirm that ARZ produces superior control relative to the (stateful) ARS+LSTM baseline in tasks with a sudden, dramatic change. Figs. 9 and 10 show testing that was done after the search is complete. A fitness score of 800 indicates the policy managed to balance the pole for ≈800 timesteps, surviving up to the last point in an episode with any active dynamics (see Fig. S4). "Stationary" is the standard Cartpole task, while "Force", "Damping", and "Track Angle" refer to Cartpole with sudden or continuous change in that parameter only (see Section IV-B). "All" is the case where all change parameters are potentially changing simultaneously. Legends indicate the policy type and the corresponding task type used during evolution. First, note that strong adaptable policies do not emerge from ARZ or ARS+LSTM evolved in the stationary task alone (see ARZ [Stationary] and LSTM [Stationary]), implying that proficiency in the stationary task does not directly transfer to any non-stationary configuration. However, when exposed to non-stationary properties during the search, ARZ and ARS+LSTM discover policies that adapt to all sudden and continuous non-stationary tasks. ARZ is significantly more proficient in the sudden-change tasks (Fig. 10), achieving near-perfect scores of ≈1000 in all tasks. In continuous change, the single best LSTM policy achieves the best multitasking performance, with a stronger score than ARZ on the Track Angle problem, and it is at least as proficient as ARZ on all other tasks. However, unlike the LSTM network, ARZ policies are uniquely interpretable.

Figure 9: Post-evolution test results in the Cataclysmic Cartpole continuous-change task. Legend indicates policy type and search task. [All] marks policies exposed to all tasks during evolution. ARZ and LSTM both solve this adaptation task, and no direct transfer from stationary tasks to dynamic tasks is observed. The best 5 policies from each experiment are shown.

Figure 10: Post-evolution test results in the Cataclysmic Cartpole sudden-change task. [All] marks policies exposed to all tasks during evolution. ARZ discovers the only policy that adapts to all sudden-change Cataclysmic Cartpole tasks. The best 5 policies from each experiment are shown.

V-B2 On Simplicity and Interpretability

Here we decompose an ARZ policy to provide a detailed explanation of how it integrates state observations over time to compute optimal actions in a changing environment. An example of an algorithm discovered in the ARZ [All] setting of Fig. 9 is presented in Fig. 11. Note that CADFs were not required to solve this task and have thus been omitted from the search space in order to simplify program analysis. What we find are three accumulators that collect the history of observation and action values, from which the current action can be inferred.

```python
# sX: scalar memory at address X.
# obs: vector [x, theta, x_dot, theta_dot].
# a, b, c: fixed scalar parameters.
# V, W: 4-dimensional vector parameters.
def GetAction(obs, action):
    s0 = a * s2 + action
    s1 = s0 + s1 + b * action + dot(V, obs)
    s2 = s0 + c * s1
    action = s0 + dot(obs, W)
    return action
```

Figure 11: Sample stateful action function evolved on the task where all parameters are subject to continuous change (ARZ [All] in Fig. 9). Code shown in Python.

This algorithm uses 11 variables and executes 25 FLOPs per step. Meanwhile, the MLP and LSTM counterparts use more than 1k and 4.5k parameters, expending more than 2k and 9k FLOPs per step, respectively. More details on this computation are presented in Section S4.

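For concreteness, a runnable version of the Fig. 11 policy with its persistent state made explicit might look as follows; the parameter values in the usage example are placeholders, not the evolved values.

```python
import numpy as np

class EvolvedPolicy:
    """Stateful policy of Fig. 11: s0, s1, s2 persist across timesteps."""

    def __init__(self, a, b, c, V, W):
        self.a, self.b, self.c = a, b, c
        self.V = np.asarray(V, dtype=float)
        self.W = np.asarray(W, dtype=float)
        self.s0 = self.s1 = self.s2 = 0.0  # scalar memory, zero at episode start

    def get_action(self, obs, action):
        obs = np.asarray(obs, dtype=float)
        self.s0 = self.a * self.s2 + action
        self.s1 = self.s0 + self.s1 + self.b * action + float(self.V @ obs)
        self.s2 = self.s0 + self.c * self.s1
        return self.s0 + float(obs @ self.W)
```

Feeding the previous action back in on each call reproduces the accumulator behavior described in the text.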
There are two useful ways to view this algorithm. First, by organizing the values of s0, s1, and s2 at step $n$ into a vector $Z_n$, which can be interpreted as a vector in a latent space of $d = 3$ dimensions, we find that the algorithm can be expressed in the form

$$s_{n+1} = \mathrm{concat}(\mathrm{obs}_{n+1}, \mathrm{act}_n); \qquad Z_{n+1} = \tilde{U} \cdot Z_n + \tilde{P} \cdot s_{n+1}; \qquad \mathrm{act}_{n+1} = \tilde{A}^{T} \cdot Z_{n+1} + \tilde{W}^{T} \cdot s_{n+1},$$

with a projection matrix $\tilde{P}$ that projects the state vector into the latent space and a $d \times d$ evolution matrix $\tilde{U}$. This is a linear recurrent neural network with internal state $Z_n$. The second way to view the algorithm is to interpret it as a generalization of a proportional–integral–derivative (PID) controller. This can be done by explicitly solving the recurrent equations above and taking the continuous limit. Introducing a single five-dimensional state vector $s(t) = [x(t), \theta(t), \dot{x}(t), \dot{\theta}(t), \mathrm{act}(t)]$, $d$-dimensional vectors $u$, $v$, and $w$, a five-dimensional vector $p$, and a constant term $c$, the algorithm in the continuous-time limit can be written in the form

$$\mathrm{act}(t) = c + w^{T} \cdot U^{t} \cdot u + p^{T} \cdot s(t) + v^{T} \cdot \int_{0}^{t} d\tau \, U^{t-\tau} \cdot P \cdot s(\tau),$$

where $P$ and $U$ are the continuous-time versions of $\tilde{P}$ and $\tilde{U}$. In our particular discovered algorithm (Fig. 11), $d$ happens to be 3. Notice that, unlike in a conventional PID controller, the integrand now carries a time-dependent weight factor $U^{t-\tau}$. Further derivations, discussions, and interpretations of this algorithm are presented in the supplementary material.

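The linear-RNN view can be checked numerically: unrolling the recurrence gives the discrete analogue of the weighted integral, $Z_N = \sum_i U^{N-1-i} P s_i$. A sketch with arbitrary illustrative matrices (not the evolved parameters):

```python
import numpy as np

# Unroll Z_{n+1} = U @ Z_n + P @ s_{n+1} and compare with the closed form,
# the discrete analogue of the integral term in the continuous-time expression.
rng = np.random.default_rng(0)
d, k = 3, 5                       # latent dimension, input dimension
U = 0.9 * np.eye(d)               # evolution matrix (stable)
P = rng.standard_normal((d, k))   # projection into the latent space
inputs = [rng.standard_normal(k) for _ in range(4)]  # s_n = concat(obs, act)

Z = np.zeros(d)
for s in inputs:
    Z = U @ Z + P @ s             # recurrent update

# Closed form: Z_N = sum_i U^(N-1-i) @ P @ s_i.
closed_form = sum(np.linalg.matrix_power(U, 3 - i) @ P @ inputs[i] for i in range(4))
```

The two computations agree, showing the accumulators carry an exponentially weighted history of inputs.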
VI Conclusion and Discussion

We have shown that using ARZ to search simultaneously in program space and parameter space produces proficient, simple, and interpretable control algorithms that can perform zero-shot adaptation, rapidly changing their behavior to maintain near-optimal control in environments that undergo radical change. In the remainder of this section, we briefly motivate and speculate about future work.

CADFs and the Distraction Dilemma. In the quadruped robot domain, we have observed that including Conditionally invoked Automatically Defined Functions (CADFs) in our search space improves the expressiveness of evolved control algorithms. In the single best policy, CADFs have been used to discretize the observation space into four states. The action is then completely determined by the internal state of the system and this discretized observation. One interpretation is that this discretization helps the policy define a switching behavior that can overcome the distraction dilemma: the challenge for a multi-task policy to balance the reward of excelling at multiple different tasks against the ultimate goal of achieving generalization [1]. By contrast, searching only in the parameter space of a hand-designed MLP or LSTM network did not produce policies that can adapt to more than one unique change event (i.e., a single leg breaking). A deeper study of modular/hierarchical policies and their impact on the distraction dilemma is left to future work.

The Cataclysmic Cartpole Task. Given the computationally intensive nature of simulating a real robot, we felt compelled to also include a more manageable toy task where adaptation matters. This led to the Cataclysmic Cartpole task. We found it useful for quick experiments and for emphasizing the power and interpretability of ARZ results. We hope that it may also provide an easily reproducible environment for use in further research.

Adapting to Unseen Task Dynamics. Looking to the future, we have included detailed supplementary material that raises an open and ambitious question: how can we build adaptive control policies without any prior knowledge of what type of environmental change may occur in the future? Surprisingly, preliminary results with ARZ on the Cataclysmic Cartpole task suggest that injecting partial observability and dynamic actuator noise during evolution (training) can act as a general surrogate for non-stationary task dynamics (Section S2). In preliminary work, we found this to support the emergence of policies that can adapt to novel task dynamics that were not experienced during search (evolution); this was not possible for our LSTM baselines. If true, this would be significant because it implies that we might be able to evolve proficient control policies without complete prior knowledge of their task environment dynamics, thus relaxing the need for an accurate physics simulator. Future work may investigate the robustness of this preliminary finding.

Author Contributions

SK and ER led the project. ER and JT conceived the project and acted as principal advisors. All authors contributed to the methodology. SK, MM, PN, and DP ran the evolution experiments. XS ran the baselines. MM and DP analysed the algorithms. SK, DP, and MM wrote the paper. All authors edited the paper.

Acknowledgements

We would like to thank Wenhao Yu, Chen Liang, Sehoon Ha, James Lee, and the Google Brain Evolution and AutoML groups for technical discussions; Erwin Coumans for physics simulations advice; Erwin Coumans, Kevin Yap, Jacob Budzis, Heng Li, Kaiyuan Wang, and Ryan Gillard for code contributions; and Quoc V. Le, Vincent Vanhoucke, Ed Chi, and Erik Goodman for guidance and support.

References

[1] M. Hessel, H. Soyer, L. Espeholt, W. Czarnecki, S. Schmitt, and H. van Hasselt, “Multi-Task Deep Reinforcement […],” AAAI, 2019.
[2] S. Kelly, T. Voegerl, W. Banzhaf, and C. Gondro, “Evolving hierarchical memory-prediction […],” Genet. Program. Evolvable Mach., 2021.
[3] E. Real, C. Liang, D. R. So, and Q. V. Le, “AutoML-Zero: Evolving Machine Learning Algorithms From Scratch,” ICML, 2020.
[4] “Unitree Robotics.” [Online]. Available: http://www.unitree.cc/
[5] J. R. Koza and M. A. Keane, “Genetic breeding of non-linear optimal control strategies […],” in Analysis and Optimization of Systems, 1990.
[6] Y. Dhebar, K. Deb, S. Nageshrao, L. Zhu, and D. Filev, “Toward Interpretable-AI Policies […],” IEEE Trans. Cybern., 2022.
[7] W. Yu, J. Tan, C. K. Liu, and G. Turk, “Preparing for the unknown: Learning a universal policy […],” in RSS, 2017.
[8] J. Tobin, R. Fong, A. Ray, J. Schneider, W. Zaremba, and P. Abbeel, “Domain randomization for transferring […],” CoRR, 2017.
[9] A. Cully, J. Clune, D. Tarapore, and J.-B. Mouret, “Robots that can adapt like animals,” Nature, 2015.
[10] X. Song, Y. Yang, K. Choromanski, K. Caluwaerts, W. Gao, C. Finn, and J. Tan, “Rapidly Adaptable Legged Robots […],” in IROS, 2020.
[11] X. Song, W. Gao, Y. Yang, K. Choromanski, A. Pacchiano, and Y. Tang, “ES-MAML: simple Hessian-free meta learning,” in ICLR, 2020.
[12] W. Yu, J. Tan, Y. Bai, E. Coumans, and S. Ha, “Learning fast adaptation with meta strategy optimization,” IEEE Robot. Autom. Lett., 2020.
[13] C. Finn, P. Abbeel, and S. Levine, “Model-agnostic meta-learning for fast adaptation of deep networks,” in ICML, 2017.
[14] A. Kumar, Z. Fu, D. Pathak, and J. Malik, “RMA: rapid motor adaptation for legged robots,” CoRR, 2021.
[15] E. Najarro and S. Risi, “Meta-Learning through Hebbian Plasticity in Random Networks,” CoRR, 2020.
[16] D. Floreano and J. Urzelai, “Evolutionary robots with on-line self-organization and behavioral fitness,” Neural Networks, 2000.
[17] T. Anne, J. Wilkinson, and Z. Li, “Meta-learning for fast adaptive locomotion with uncertainties […],” in IROS, 2021.
[18] A. Li, C. Florensa, I. Clavera, and P. Abbeel, “Sub-policy adaptation for hierarchical reinforcement learning,” in ICLR, 2020.
[19] J. X. Wang, “Meta-learning in natural and artificial intelligence,” Current Opinion in Behavioral Sciences, 2021.
[20] M. Brameier and W. Banzhaf, Linear Genetic Programming. Springer, 2007.
[21] J. R. Koza, Genetic Programming II: Automatic Discovery of Reusable Programs. Cambridge, MA, USA: MIT Press, 1994.
[22] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic […],” IEEE Trans. Evol. Comput., 2002.
[23] E. Real, A. Aggarwal, Y. Huang, and Q. V. Le, “Regularized evolution for image classifier architecture search,” AAAI, 2019.
[24] H. Mania, A. Guy, and B. Recht, “Simple random search of static linear policies is competitive […],” in NeurIPS, 2018.
[25] K.-H. Lee, O. Nachum, T. Zhang, S. Guadarrama, J. Tan, and W. Yu, “PI-ARS: Accelerating Evolution-Learned […],” in IROS, 2022.
[26] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov, “Proximal policy optimization algorithms,” CoRR, 2017.
[27] E. Heiden, D. Millard, E. Coumans, Y. Sheng, and G. S. Sukhatme, “NeuralSim: Augmenting Differentiable […],” in ICRA, 2021.
[28] R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction. Cambridge, MA, USA: A Bradford Book, 2018.
[29] S. Sueda, “Analytically differentiable articulated […],” 2021. [Online]. Available: https://github.com/sueda/redmax/blob/master/notes.pdf
[30] E. Marder and D. Bucher, “Central pattern generators and the control of rhythmic movements,” Current Biology, 2001.
[31] R. Gillard, S. Jonany, Y. Miao, M. Munn, C. de Souza, J. Dungay, C. Liang, D. R. So, Q. V. Le, and E. Real, “Unified functional hashing in automatic machine learning,” arXiv, 2023.
[32] K. O. Stanley and R. Miikkulainen, “Evolving neural networks through augmenting topologies,” Evolutionary Computation, 2002.
[33] J. Bergstra and Y. Bengio, “Random search for hyper-parameter optimization,” JMLR, 2012.
[34] D. Hafner, J. Davidson, and V. Vanhoucke, “TensorFlow Agents: Efficient batched reinforcement learning in TensorFlow,” CoRR, 2017.
[35] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba, “OpenAI Gym,” 2016.

738
Supplementary Material

S1 Methods Additional Details

Supplementary Figure S1: Simplified example of a population of algorithms, modified via crossover and mutation to produce a new population. The complete list of mutation operators is provided in Table S1.
| Operator | Allowed Functions | Prob | Description |
|---|---|---|---|
| Insert Instruction | GetAction(), CADF() | 0.5 | Insert a randomly generated instruction at a uniformly sampled line number |
| Delete Instruction | GetAction(), CADF() | 1.0 | Delete the instruction at a uniformly sampled line number |
| Randomize Instruction | GetAction(), CADF() | 1.0 | Randomize the instruction at a uniformly sampled line number |
| Randomize Function | GetAction(), CADF() | 0.1 | Randomly shuffle all lines of code |
| Randomize Constants | StartEpisode() | 0.5 | Modify a fraction (0.2) of uniformly sampled constants in a uniformly sampled instruction. For each constant, add noise sampled from $\mathcal{N}(0, 0.05^2)$. |
| Randomize Parameter | GetAction(), CADF() | 0.5 | Randomize a uniformly sampled parameter in a uniformly sampled instruction |
| Randomize Dim Indices | GetAction(), CADF() | 0.5 | Randomize a fraction (0.2) of uniformly sampled dim indices in a uniformly sampled instruction. Each chosen dim index is set to a new integer uniformly sampled from $[0, dim)$, where $dim$ is the size of the memory structure being referenced. |

Supplementary Table S1: Mutation operators. The Prob column lists the relative probability of applying each operation. For example, the Delete Instruction op will be applied twice as often as the Insert Instruction op.
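The Prob values above are relative weights rather than normalized probabilities. A minimal sketch of drawing one mutation operator in proportion to these weights (the operator names and weights come from Table S1; the sampling helper itself is our illustrative assumption):

```python
import random

# Relative weights from Table S1 (not normalized).
MUTATION_WEIGHTS = {
    "insert_instruction": 0.5,
    "delete_instruction": 1.0,
    "randomize_instruction": 1.0,
    "randomize_function": 0.1,
    "randomize_constants": 0.5,
    "randomize_parameter": 0.5,
    "randomize_dim_indices": 0.5,
}

def sample_mutation(rng: random.Random) -> str:
    """Draw one mutation operator with probability proportional to its weight."""
    ops = list(MUTATION_WEIGHTS)
    weights = [MUTATION_WEIGHTS[op] for op in ops]
    return rng.choices(ops, weights=weights, k=1)[0]

rng = random.Random(0)
counts = {op: 0 for op in MUTATION_WEIGHTS}
for _ in range(10000):
    counts[sample_mutation(rng)] += 1
```

With these weights, Delete Instruction is drawn roughly twice as often as Insert Instruction, matching the caption.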
S1-A Baseline Details

Augmented Random Search (ARS): We used a standard implementation from [24] and tuned hyperparameters over the cross product of:

- learning rate: [0.001, 0.005, 0.01, 0.05, 0.1, 0.5]
- Gaussian standard deviation: [0.001, 0.005, 0.01, 0.05, 0.1, 0.5]

and used a 2-layer MLP with hidden layer sizes (32, 32) and Tanh non-linearity, along with an LSTM of size 32.
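The cross product above yields a 36-point hyperparameter grid, which matches the 36-repeat ARS sweeps described later in this supplement. A minimal sketch:

```python
import itertools

learning_rates = [0.001, 0.005, 0.01, 0.05, 0.1, 0.5]
gaussian_stds = [0.001, 0.005, 0.01, 0.05, 0.1, 0.5]

# Each grid point is one (learning rate, std) configuration for an ARS repeat.
grid = list(itertools.product(learning_rates, gaussian_stds))
```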
915
+
916
+ Proximal Policy Optimization (PPO): We used a standard implementation from TF-Agents [34], which we verified to reproduce standard Mujoco results from [26]. We varied the following hyperparameters:
917
+
918
+ βˆ™
919
+
920
+ nsteps ("collect sequence length"): [256, 1024]
921
+
922
+ βˆ™
923
+
924
+ learning rate: [5e-5, 1e-4, 5e-4, 1e-3, 5e-3]
925
+
926
+ βˆ™
927
+
928
+ entropy regularization: [0.0, 0.05, 0.1, 0.5]
929
+
930
+ and due to the use of a shared value network, we used a 2-layer MLP of hidden layer sizes (256, 256) with ReLU nonlinearity alongside an LSTM of size 256. Since PPO significantly underperformed (e.g., obtaining only
931
+ β‰ˆ
932
+ 100 reward on quadruped tasks), we omitted its results in this paper to save space.
S1-B Quadruped Tasks

We perform 20 independent repeats for each method with unique random seeds. All repeats are allowed to train until convergence. NSGA-II uses parent and child population sizes of 100 and 1000, respectively. No search restarts or FEC are enabled. The set of operations available for inclusion in any program is listed in Table S2. For ARS experiments, we run a hyperparameter sweep consisting of 36 repeats with unique hyperparameters. We then run an additional 20 repeats using the best hyperparameter configuration.
Supplementary Figure S2: A typical Pareto front early in NSGA-II search. The dashed box shows policies that are effectively eliminated through fitness constraints.

S1-C Cataclysmic Cartpole Tasks
Cartpole [28, 35] is a classic control task in which a pole is attached by an un-actuated joint to a cart that moves left or right along a frictionless track (Figure S3). The observable state of the system at each timestep, $\vec{s}(t)$, is described by 4 variables: the cart position ($x$), cart velocity ($\dot{x}$), pole angle relative to the cart ($\theta$), and pole angular velocity ($\dot{\theta}$). We use a continuous-action version of the problem in which the system is controlled by applying a force $\in [-1, 1]$ to the cart. The pole starts nearly upright, and the goal is to prevent it from falling over. An episode ends when the pole is more than 12 degrees from vertical, the cart moves more than 2.4 units from the center, or a time constraint is reached (1000 timesteps). A reward of $(1 - |\theta_{vert}|/12)^2$ is provided for every timestep that the pole remains upright, where $\theta_{vert}$ is a fixed reference for the angle of the pole relative to the vertical plane. As such, the objective is to balance the pole close to vertical for as long as possible.
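The reward and termination rules above can be sketched directly in Python (function names are ours; the 12-degree, 2.4-unit, and 1000-step thresholds are those stated in the text):

```python
def step_reward(theta_vert_deg: float) -> float:
    """Per-timestep reward (1 - |theta_vert|/12)^2 while the pole is upright."""
    return (1.0 - abs(theta_vert_deg) / 12.0) ** 2

def episode_done(x: float, theta_vert_deg: float, t: int) -> bool:
    """End the episode beyond 12 degrees, beyond 2.4 units from center, or at 1000 steps."""
    return abs(theta_vert_deg) > 12.0 or abs(x) > 2.4 or t >= 1000
```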
Supplementary Figure S3: Illustration of a track angle change in the Cataclysmic Cartpole task with the 4 variables in the state observation $\vec{s}(t)$. Note that $\theta$ always represents the angle between the pole and the line running perpendicular to the track and cart; thus the desired value of $\theta$ to maintain balance ($\theta_{vert} = 0$) changes with the track angle and is not directly observable to the policy.
1. Sudden: A sudden change in each change parameter occurs at a unique random timestep in [200, 800], Figure 3(a).

2. Continuous: Each parameter changes over a window with random, independently chosen start and stop timesteps in [200, 800], Figure 3(b).

Supplementary Figure S4: A typical randomly-created change schedule. (a) Sudden. (b) Continuous.
For the ARZ methods, we execute 10 repeats of each experiment with unique random seeds. For ARS, we run a hyperparameter sweep consisting of 36 repeats with unique hyperparameters. In each case, we select the 5 repeats with the best search fitness and test the single best policy from each. Plots show mean fitness over 100 episodes for each policy in each task.
S2 Additional Experiments: Cataclysmic Cartpole

S2-A Adaptation Gap

In this section we use stateless policies (ARZ and MLP) to confirm that Cataclysmic Cartpole dynamics are significant enough to create an adaptation gap: stateless policies (i.e., generalists) fail to perform well because they cannot adapt their control policy in the non-stationary environment. As mentioned in Section III-C, our evolved algorithms are able to adapt partly because they are stateful: the contents of their memory (sX, vX, mX, and iX) are persistent across timesteps of an episode. The representation can easily support stateless algorithms simply by forcing the policy to wipe its memory content and re-initialize constants at the beginning of the GetAction() function (see Figure 2).
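The stateless variant described above can be expressed as a thin wrapper that clears memory before every call. This is a sketch under assumptions: the `reset_memory()`/`init_constants()` interface and the toy `CountingPolicy` are ours, used only to illustrate that wiping memory removes cross-timestep state:

```python
class StatelessWrapper:
    """Force a stateful policy to behave statelessly.

    Memory is wiped and constants re-initialized at the start of every
    get_action() call, so no information persists across timesteps.
    """

    def __init__(self, policy):
        self.policy = policy

    def get_action(self, obs):
        self.policy.reset_memory()    # wipe memory (assumed method)
        self.policy.init_constants()  # re-run constant setup (assumed method)
        return self.policy.get_action(obs)

class CountingPolicy:
    """Toy stateful policy: its scalar memory accumulates across calls."""
    def __init__(self):
        self.s0 = 0.0
    def reset_memory(self):
        self.s0 = 0.0
    def init_constants(self):
        pass
    def get_action(self, obs):
        self.s0 += obs
        return self.s0

stateful = CountingPolicy()
stateless = StatelessWrapper(CountingPolicy())
```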
Fig. S5 indicates that, in the continuous change environment, the stateless baselines (MLP and ARZ stateless) fail to achieve sufficient fitness (≈800) when all types of change occur simultaneously (ALL). This confirms that the continuous change paradigm provides a suitably challenging non-stationary problem environment in which to study adaptation and life-long learning. In the sudden change task (Figure S6), the MLP baseline still fails. Surprisingly, ARZ can discover stateless policies that succeed under this type of non-stationarity.
Supplementary Figure S5: Stateless baselines fail to achieve sufficient fitness (≈800) when all types of change occur simultaneously (ALL). The plot shows test results for stateless baselines in the Cataclysmic Cartpole continuous change tasks. The legend indicates policy type and search task. "Stationary" is the standard Cartpole task, while "Force", "Damping", and "Track Angle" refer to Cartpole with continuous change in these parameters only (see Section IV-B). "All" is the case where all change parameters are potentially changing simultaneously. The Y-axis is the average reward over 100 episodes in each task. See Section S2-A for discussion.

Supplementary Figure S6: ARZ can discover stateless policies that succeed in the sudden change tasks. The plot shows test results for stateless baselines in the Cartpole sudden change tasks. The legend indicates policy type and search task. "Stationary" is the standard Cartpole task, while "Force", "Damping", and "Track Angle" refer to Cartpole with sudden change in these parameters only (see Section IV-B). "All" is the case where all change parameters are potentially changing simultaneously. The Y-axis is the average reward over 100 episodes in each task. See Section S2-A for discussion.
S2-B Adapting to Unseen Dynamics in Cataclysmic Cartpole

How can we build adaptive control policies without any prior knowledge about what type of environmental change might occur? Surprisingly, for ARZ, we find that injecting partial observability and dynamic actuator noise during evolution (training) can act as a general surrogate for non-stationary task dynamics, supporting the emergence of policies that can adapt to novel task dynamics that were not experienced during evolution. This was not possible for our LSTM baselines. It is a significant finding that deserves more attention in future work because it implies we can potentially evolve proficient control policies without complete prior knowledge of their task environment dynamics, thus relaxing the need for an accurate physics simulator.

If we assume that no simulator is available for any of the non-stationary tasks in Cataclysmic Cartpole (Force, Damping, Track Angle), can we still build policies that cope with these changes? From a policy's perspective, changes to the physics of the environment will (1) change the meaning of its sensor observations (e.g., the pole angle sensor value ($\theta$) corresponding to vertical suddenly changes); and/or (2) change the effect of its actions (e.g., a particular actuator value suddenly has a much greater effect on the cart's trajectory). To prepare policies for these uncertainties, we evolve them with non-stationary noise applied to their actions and introduce a partially-observable observation space. Specifically, we modify the task to add:
- Actuator Noise: Each action value $v$ is modified such that $v = v + n$, where $n$ is sampled from a Gaussian distribution whose mean varies in [-2, 2] following the continuous change schedule in Figure 3(b).

- Partial Observability: Positional state variables (the cart position ($x$) and the pole angle relative to the cart ($\theta$)) are set to zero prior to passing the state observation to the policy.
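A minimal sketch of these two task modifications. The drifting noise mean is passed in as a fixed value here rather than following the full change schedule, and the noise standard deviation is our assumption (the text only specifies the mean's range):

```python
import random

def noisy_action(v, noise_mean, rng, noise_std=1.0):
    """Actuator noise: v -> v + n, n ~ N(noise_mean, noise_std^2).

    noise_mean is assumed to drift within [-2, 2] over the episode;
    noise_std is an illustrative assumption.
    """
    return v + rng.gauss(noise_mean, noise_std)

def partially_observe(obs):
    """Zero the positional variables; obs layout assumed [x, theta, x_dot, theta_dot]."""
    x, theta, x_dot, theta_dot = obs
    return [0.0, 0.0, x_dot, theta_dot]

rng = random.Random(0)
sample = noisy_action(0.5, 0.0, rng)
```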
Our hypothesis is that this will encourage policies to rely less on their observations and actions; as a result, they might build a stronger, more dynamic internal world-model to predict how their actions will affect future states. That is, there is more pressure to model the environment's dynamic transition function. In Figure S7, ARZ [PO + Act Noise] shows test results for an ARZ experiment that uses the stationary task simulator during evolution (i.e., the unmodified Cartpole environment) but applies actuator noise and partial observability as described above. Remarkably, these evolved policies are able to adapt reasonably well under all non-stationary tasks in the Cataclysmic Cartpole environment, achieving an average reward of ≥700 in all tasks. Using the same search configuration, ARS does not discover parameters for an LSTM network that support adaptation to all non-stationary tasks (LSTM [PO + Act Noise]).
Supplementary Figure S7: ARZ can discover policies that adapt to unseen tasks. The plot shows post-evolution test results for adapting policies in the Cartpole continuous change tasks. The legend indicates policy type and search task. [All] indicates policies were exposed to all tasks during evolution. [PO + Act Noise] indicates policies were evolved with partial observability and action noise on the stationary task, while the dynamic change tasks were unseen until test. The Y-axis is the average reward over 100 episodes in each task. See Section S2-B for discussion.

In summary, preliminary data presented in this section suggests that adding partial observability and actuator noise to the stationary Cartpole task during search allows ARZ to discover policies that can adapt to unseen non-stationary tasks, a methodology that does not work for ARS with LSTM networks. We leave comprehensive analysis of these findings to future work.
S3 Cartpole Algorithm Analysis

Here we analyze the algorithm presented in Figure 11:

```python
# sX: scalar memory at address X.
# obs: vector [x, theta, x_dot, theta_dot].
# a, b, c: fixed scalar parameters.
# V, W: 4-dimensional vector parameters.
def GetAction(obs, action):
    s0 = a * s2 + action
    s1 = s0 + s1 + b * action + dot(V, obs)
    s2 = s0 + c * s1
    action = s0 + dot(obs, W)
    return action
```

Supplementary Figure S8: Sample stateful action function evolved on the Cataclysmic Cartpole task where all parameters are subject to continuous change (ARZ [All] in Fig. 9). Code shown in Python. This figure is a repeat of Figure 11.
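Using the numerical parameter values reported later in this section ($a = -0.549$, $b = -0.673$, $c = 0.082$, and the vectors $V$ and $W$), the policy of Figure S8 can be run as a small stateful Python class. The class wrapper and the zero-initialized memory are our assumptions; the update equations are exactly those of Figure S8:

```python
# Parameter values reported in Section S3.
A, B, C = -0.549, -0.673, 0.082
V = [-1.960, -0.7422, 0.7373, -5.284]
W = [0.0, 0.365, 2.878, 2.799]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

class EvolvedCartpolePolicy:
    """Stateful policy of Figure S8; s0, s1, s2 persist across timesteps."""

    def __init__(self):
        self.s0 = self.s1 = self.s2 = 0.0  # assumed zero-initialized memory
        self.action = 0.0

    def get_action(self, obs):
        self.s0 = A * self.s2 + self.action
        self.s1 = self.s0 + self.s1 + B * self.action + dot(V, obs)
        self.s2 = self.s0 + C * self.s1
        self.action = self.s0 + dot(obs, W)
        return self.action

policy = EvolvedCartpolePolicy()
first = policy.get_action([0.0, 0.1, 0.0, 0.0])   # 0.1 * W[1] on the first call
second = policy.get_action([0.0, 0.1, 0.0, 0.0])  # now depends on memory
```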
Organizing the values of $\mu = \texttt{s0}$, $\nu = \texttt{s1}$ and $\xi = \texttt{s2}$ at step $n$ into a vector:

$$Z_n = (\mu_n, \nu_{n+1}, \xi_n)^T,$$

and concatenating the observation vector at step $n+1$ and the action at step $n$ into a state vector $s_{n+1}$:

$$s_{n+1} = (x_{n+1}, \theta_{n+1}, \dot{x}_{n+1}, \dot{\theta}_{n+1}, \text{act}_n)^T,$$

we can re-write the value of these accumulators at step $n$ in the following way:

$$s_{n+1} = \text{concat}(\text{obs}_{n+1}, \text{act}_n)$$

$$Z_{n+1} = \tilde{U} \cdot Z_n + \tilde{P} \cdot s_{n+1}, \qquad (2)$$

$$\text{act}_{n+1} = \tilde{A}^T \cdot Z_{n+1} + \tilde{W}^T \cdot s_{n+1}.$$

The particular variables used in this formula map to the parameters of Figure 11 as follows:
$$\tilde{U} = \begin{pmatrix} 0 & 0 & a \\ 0 & 1 & a \\ 0 & c & a(1+c) \end{pmatrix}, \qquad (6)$$

$$\tilde{P} = \begin{pmatrix} 0 & 0 & 0 & 0 & 1 \\ V_1 & V_2 & V_3 & V_4 & b+1 \\ cV_1 & cV_2 & cV_3 & cV_4 & a + bc + c \end{pmatrix}, \qquad (10)$$

$$\tilde{A} = (1, 0, 0)^T, \qquad \tilde{W} = (W_1, W_2, W_3, W_4, 0)^T.$$
The numerical values of the parameters of the model found are given by

$$a = -0.549, \qquad b = -0.673, \qquad c = 0.082,$$

$$V = (-1.960, -0.7422, 0.7373, -5.284)^T,$$

$$W = (0.0, 0.365, 2.878, 2.799)^T.$$
Equation (2) can be viewed as a linear recurrent model, where $Z_n$ is the internal state of the model. The action at the $n$-th step is obtained as a linear function of the internal state, the observation vector and the action value at the previous step. An interesting aspect of the particular model found is that the matrix $\tilde{U}$ by construction has eigenvalues $0$, $1$ and $a(1+c) \approx -0.594$.
Equation (2) being a simple linear model, we may write $\text{act}_{n+1}$ explicitly as a sum:

$$\text{act}_n = \tilde{A}^T \cdot \tilde{U}^n \cdot Z_0 + \tilde{W}^T \cdot s_n + \tilde{A}^T \cdot \sum_{i=0}^{n} \tilde{U}^{n-i} \cdot \tilde{P} \cdot s_i.$$
When taking the continuous limit of this expression, there is a subtlety in that the $s_{n+1}$ vector is obtained by composing the observation vector at time step $n+1$ and the action value at time step $n$. We can nevertheless be careful to redefine $s$ to be made up of concurrent components and still arrive at an expression which, in the continuous limit, takes the form:
$$\text{act}(t) = c + w^T \cdot U^t \cdot u + p^T \cdot s(t) + v^T \cdot \int_0^t du\, U^{t-u} \cdot P \cdot s(u). \qquad (11)$$
We note that when we set $U = \mathrm{Id}$, this expression straightforwardly reduces to a PID controller-like model:

$$\text{act}(t) = (c + w^T \cdot u) + p^T \cdot s(t) + (v^T \cdot P) \cdot \int_0^t du\, s(u).$$
An instructive way of re-writing Equation (11) is to explicitly use the eigenvalues $e^{-\omega_k}$ of $U$. The equation can be re-parameterized as

$$\text{act}(t) = c + \sum_{k=1}^{d} c_k e^{-\omega_k t} + p^T \cdot s(t) + \sum_{k=1}^{d} v_k^T \cdot \int_0^t du\, e^{-\omega_k (t-u)} s(u).$$
Here it is clear that the expression is a straightforward generalization of the PID controller, in which only the weight-one cumulant of the history is used to compute the action; here, a multitude of cumulants with distinct decay rates can be utilized.
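In discrete time, each such cumulant is a leaky integrator with its own decay rate $e^{-\omega_k}$. A minimal sketch (function and variable names are ours), showing that the $\omega_k = 0$ case recovers the plain running sum of a PID integral term:

```python
import math

def run_cumulants(signal, omegas):
    """Accumulate exponentially-decayed cumulants I_k[t] = e^{-w_k} * I_k[t-1] + s[t]."""
    decays = [math.exp(-w) for w in omegas]
    integrals = [0.0] * len(omegas)
    for s in signal:
        integrals = [d * i + s for d, i in zip(decays, integrals)]
    return integrals

# omega = 0 gives the plain running sum; omega = 1 forgets the past exponentially.
plain, leaky = run_cumulants([1.0, 1.0, 1.0, 1.0], [0.0, 1.0])
```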
S4 Complexity Comparison

S4-A Baselines

As noted in Section S1, MLP and LSTM networks have been trained with ARS as baselines for the adaptation tasks in the paper. We can estimate a lower bound for the number of parameters and floating point operations required for each model by counting only the matrix variables for the parameter count and only the matrix multiplications for the floating point operations. This neglects the bias variables and non-matrix-multiplication ops such as the application of non-linearities or vector component-wise multiplications.
Given the input dimension $d_{\mathrm{in}}$, the output dimension $d_{\mathrm{out}}$ and the internal dimension $d$, we find that the number of parameters and the floating point operations per step for the MLP and LSTM models are given by:

$$\mathrm{FLOPS}_{\mathrm{MLP}} \approx 2 \times \mathrm{Params}_{\mathrm{MLP}} > 2d(d_{\mathrm{in}} + d + d_{\mathrm{out}}) \qquad (12)$$

$$\mathrm{FLOPS}_{\mathrm{LSTM}} \approx 2 \times \mathrm{Params}_{\mathrm{LSTM}} > 2d(4d_{\mathrm{in}} + 4d + d_{\mathrm{out}}) \qquad (13)$$
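These lower bounds can be evaluated directly for the two task settings discussed below (helper names are ours; the formulas are those of Equations (12) and (13)):

```python
def mlp_params(d_in, d, d_out):
    # Matrix variables only: d_in -> d, d -> d, d -> d_out.
    return d * (d_in + d + d_out)

def lstm_params(d_in, d, d_out):
    # Four gate matrices on inputs and on recurrent state, plus the output head.
    return d * (4 * d_in + 4 * d + d_out)

def flops(params):
    # One multiply and one add per matrix weight per step.
    return 2 * params

# Quadruped: d_in=37, d_out=12, d=32; Cartpole: d_in=4, d_out=1, d=32.
quad_mlp = mlp_params(37, 32, 12)
quad_lstm = lstm_params(37, 32, 12)
cart_mlp = mlp_params(4, 32, 1)
cart_lstm = lstm_params(4, 32, 1)
```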
S4-B Quadruped Leg-Breaking

The algorithm presented in Figure 1 contains $16 + 16 \times 37 = 608$ parameters and executes a maximum of $54 \times 37 + 82 = 2080$ floating point ops per step, where we have counted all operations acting on floats or pairs of floats, assuming that all of the "if" statements pass. The input and output dimensions of the task are 37 and 12, while the ARS-trained models have internal dimension $d = 32$. Using the formulae above, we see that the MLP model contains over 2592 parameters and uses more than 5184 FLOPs. Meanwhile, the LSTM model uses more than 9216 parameters and 18432 FLOPs.
S4-C Cataclysmic Cartpole

The algorithm presented in Figure 9 contains 11 parameters and executes 25 floating point ops per step. The input and output dimensions of the task are 4 and 1, with internal dimension $d = 32$ for the neural networks. The MLP model contains over 1184 parameters and uses more than 2368 FLOPs. The LSTM model uses more than 4640 parameters and 9280 FLOPs.
S4-D Discussion

The efficiency of ARZ policies stems from two characteristics of the system. First, like many genetic programming methods, ARZ builds policies starting from simple algorithms and incrementally adds complexity through interaction with the task environment (e.g., [5, 20]). This implies that the computational cost of action inference is low early in evolution and increases only as more complex structures provide fitness gains. In other words, the search is bound by incremental growth. Second, in ARZ, mutation is twice as likely to remove an instruction as to insert one (see Table S1), which has been found to have a regularizing effect on the population [3].
S5 Search Space Additional Details

Supplementary Table S2 describes the set of operations in our search space. Note that no matrix operations were used for the quadruped robot domain.

Supplementary Table S2: Ops vocabulary. $s$, $\vec{v}$ and $M$ denote a scalar, vector, and matrix, resp. Early-alphabet letters ($a$, $b$, etc.) denote memory addresses. Mid-alphabet letters (e.g. $i$, $j$, etc.) denote vector/matrix indexes ("Index" column). Greek letters denote constants ("Consts." column). $\mathcal{U}(\alpha, \beta)$ denotes a sample from a uniform distribution in $[\alpha, \beta]$. $\mathcal{N}(\mu, \sigma)$ is analogous for a normal distribution with mean $\mu$ and standard deviation $\sigma$. $\mathbb{1}_X$ is the indicator function for set $X$. Example: "$M_a(i,j) = \mathcal{U}(\alpha, \beta)$" describes the operation "assign to the $i,j$-th entry of the matrix at address $a$ a value sampled from a uniform random distribution in $[\alpha, \beta]$".
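As an illustration of how instructions over this vocabulary read and write memory, here is a toy interpreter for two of the scalar ops (OP2 addition and OP6 absolute value). The dispatch structure is our assumption; only the op semantics come from Table S2:

```python
def execute(op, scalars, a, b, c=None):
    """Execute one instruction on scalar memory (a list indexed by address)."""
    if op == "OP2":      # s_c = s_a + s_b
        scalars[c] = scalars[a] + scalars[b]
    elif op == "OP6":    # s_b = |s_a|
        scalars[b] = abs(scalars[a])
    else:
        raise ValueError(f"unsupported op: {op}")
    return scalars

mem = [0.0, -2.5, 0.0, 1.5]
execute("OP2", mem, a=1, b=3, c=0)  # s0 = s1 + s3
execute("OP6", mem, a=0, b=2)       # s2 = |s0|
```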
| Op ID | Code Example | Input Addresses / types | Input Consts. | Output Address / type | Output Index | Description |
|---|---|---|---|---|---|---|
| OP1 | no_op | – | – | – | – | – |
| OP2 | s2=s3+s0 | $a$, $b$ / scalars | – | $c$ / scalar | – | $s_c = s_a + s_b$ |
| OP3 | s4=s0-s1 | $a$, $b$ / scalars | – | $c$ / scalar | – | $s_c = s_a - s_b$ |
| OP4 | s8=s5*s5 | $a$, $b$ / scalars | – | $c$ / scalar | – | $s_c = s_a s_b$ |
| OP5 | s7=s5/s2 | $a$, $b$ / scalars | – | $c$ / scalar | – | $s_c = s_a / s_b$ |
| OP6 | s8=abs(s0) | $a$ / scalar | – | $b$ / scalar | – | $s_b = \lvert s_a \rvert$ |
| OP7 | s4=1/s8 | $a$ / scalar | – | $b$ / scalar | – | $s_b = 1 / s_a$ |
| OP8 | s5=sin(s4) | $a$ / scalar | – | $b$ / scalar | – | $s_b = \sin(s_a)$ |
| OP9 | s1=cos(s4) | $a$ / scalar | – | $b$ / scalar | – | $s_b = \cos(s_a)$ |
| OP10 | s3=tan(s3) | $a$ / scalar | – | $b$ / scalar | – | $s_b = \tan(s_a)$ |
| OP11 | s0=arcsin(s4) | $a$ / scalar | – | $b$ / scalar | – | $s_b = \arcsin(s_a)$ |
| OP12 | s2=arccos(s0) | $a$ / scalar | – | $b$ / scalar | – | $s_b = \arccos(s_a)$ |
| OP13 | s4=arctan(s0) | $a$ / scalar | – | $b$ / scalar | – | $s_b = \arctan(s_a)$ |
| OP14 | s1=exp(s2) | $a$ / scalar | – | $b$ / scalar | – | $s_b = e^{s_a}$ |
| OP15 | s0=log(s3) | $a$ / scalar | – | $b$ / scalar | – | $s_b = \log s_a$ |
| OP16 | s3=heaviside(s0) | $a$ / scalar | – | $b$ / scalar | – | $s_b = \mathbb{1}_{\mathbb{R}^+}(s_a)$ |
| OP17 | v2=heaviside(v2) | $a$ / vector | | | | |
2603
+
2604
+
2605
+ –
2606
+
2607
+
2608
+
2609
+ 𝑏
2610
+ / vector
2611
+
2612
+
2613
+
2614
+ –
2615
+
2616
+
2617
+
2618
+ 𝑣
2619
+ β†’
2620
+ 𝑏
2621
+ (
2622
+ 𝑖
2623
+ )
2624
+ =
2625
+ πŸ™
2626
+ ℝ
2627
+ +
2628
+ ⁒
2629
+ (
2630
+ 𝑣
2631
+ β†’
2632
+ π‘Ž
2633
+ (
2634
+ 𝑖
2635
+ )
2636
+ )
2637
+ ⁒
2638
+ βˆ€
2639
+ 𝑖
2640
+
2641
+
2642
+
2643
+
2644
+ OP18
2645
+
2646
+
2647
+
2648
+ m7=heaviside(m3)
2649
+
2650
+
2651
+
2652
+ π‘Ž
2653
+ / matrix
2654
+
2655
+
2656
+
2657
+ –
2658
+
2659
+
2660
+
2661
+ 𝑏
2662
+ / matrix
2663
+
2664
+
2665
+
2666
+ –
2667
+
2668
+
2669
+
2670
+ 𝑀
2671
+ 𝑏
2672
+ (
2673
+ 𝑖
2674
+ ,
2675
+ 𝑗
2676
+ )
2677
+ =
2678
+ πŸ™
2679
+ ℝ
2680
+ +
2681
+ ⁒
2682
+ (
2683
+ 𝑀
2684
+ π‘Ž
2685
+ (
2686
+ 𝑖
2687
+ ,
2688
+ 𝑗
2689
+ )
2690
+ )
2691
+ ⁒
2692
+ βˆ€
2693
+ 𝑖
2694
+ ,
2695
+ 𝑗
2696
+
2697
+
2698
+
2699
+
2700
+ OP19
2701
+
2702
+
2703
+
2704
+ v1=s7*v1
2705
+
2706
+
2707
+
2708
+ π‘Ž
2709
+ ,
2710
+ 𝑏
2711
+ / sc,vec
2712
+
2713
+
2714
+
2715
+ –
2716
+
2717
+
2718
+
2719
+ 𝑐
2720
+ / vector
2721
+
2722
+
2723
+
2724
+ –
2725
+
2726
+
2727
+
2728
+ 𝑣
2729
+ β†’
2730
+ 𝑐
2731
+ =
2732
+ 𝑠
2733
+ π‘Ž
2734
+ ⁒
2735
+ 𝑣
2736
+ β†’
2737
+ 𝑏
2738
+
2739
+
2740
+
2741
+
2742
+ OP20
2743
+
2744
+
2745
+
2746
+ v1=bcast(s3)
2747
+
2748
+
2749
+
2750
+ π‘Ž
2751
+ / scalar
2752
+
2753
+
2754
+
2755
+ –
2756
+
2757
+
2758
+
2759
+ 𝑏
2760
+ / vector
2761
+
2762
+
2763
+
2764
+ –
2765
+
2766
+
2767
+
2768
+ 𝑣
2769
+ β†’
2770
+ 𝑏
2771
+ (
2772
+ 𝑖
2773
+ )
2774
+ =
2775
+ 𝑠
2776
+ π‘Ž
2777
+ ⁒
2778
+ βˆ€
2779
+ 𝑖
2780
+
2781
+
2782
+
2783
+
2784
+ OP21
2785
+
2786
+
2787
+
2788
+ v5=1/v7
2789
+
2790
+
2791
+
2792
+ π‘Ž
2793
+ / vector
2794
+
2795
+
2796
+
2797
+ –
2798
+
2799
+
2800
+
2801
+ 𝑏
2802
+ / vector
2803
+
2804
+
2805
+
2806
+ –
2807
+
2808
+
2809
+
2810
+ 𝑣
2811
+ β†’
2812
+ 𝑏
2813
+ (
2814
+ 𝑖
2815
+ )
2816
+ =
2817
+ 1
2818
+ /
2819
+ 𝑣
2820
+ β†’
2821
+ π‘Ž
2822
+ (
2823
+ 𝑖
2824
+ )
2825
+ ⁒
2826
+ βˆ€
2827
+ 𝑖
2828
+
2829
+
2830
+
2831
+
2832
+ OP22
2833
+
2834
+
2835
+
2836
+ s0=norm(v3)
2837
+
2838
+
2839
+
2840
+ π‘Ž
2841
+ / scalar
2842
+
2843
+
2844
+
2845
+ –
2846
+
2847
+
2848
+
2849
+ 𝑏
2850
+ / vector
2851
+
2852
+
2853
+
2854
+ –
2855
+
2856
+
2857
+
2858
+ 𝑠
2859
+ 𝑏
2860
+ =
2861
+ |
2862
+ 𝑣
2863
+ β†’
2864
+ π‘Ž
2865
+ |
2866
+
2867
+
2868
+
2869
+
2870
+ OP23
2871
+
2872
+
2873
+
2874
+ v3=abs(v3)
2875
+
2876
+
2877
+
2878
+ π‘Ž
2879
+ / vector
2880
+
2881
+
2882
+
2883
+ –
2884
+
2885
+
2886
+
2887
+ 𝑏
2888
+ / vector
2889
+
2890
+
2891
+
2892
+ –
2893
+
2894
+
2895
+
2896
+ 𝑣
2897
+ β†’
2898
+ 𝑏
2899
+ (
2900
+ 𝑖
2901
+ )
2902
+ =
2903
+ |
2904
+ 𝑣
2905
+ β†’
2906
+ π‘Ž
2907
+ (
2908
+ 𝑖
2909
+ )
2910
+ |
2911
+ ⁒
2912
+ βˆ€
2913
+ 𝑖
2914
+
2915
+
2916
+
2917
+
2918
+ OP24
2919
+
2920
+
2921
+
2922
+ v5=v0+v9
2923
+
2924
+
2925
+
2926
+ π‘Ž
2927
+ ,
2928
+ 𝑏
2929
+ / vectors
2930
+
2931
+
2932
+
2933
+ –
2934
+
2935
+
2936
+
2937
+ 𝑐
2938
+ / vector
2939
+
2940
+
2941
+
2942
+ –
2943
+
2944
+
2945
+
2946
+ 𝑣
2947
+ β†’
2948
+ 𝑐
2949
+ =
2950
+ 𝑣
2951
+ β†’
2952
+ π‘Ž
2953
+ +
2954
+ 𝑣
2955
+ β†’
2956
+ 𝑏
2957
+
2958
+
2959
+
2960
+
2961
+ OP25
2962
+
2963
+
2964
+
2965
+ v1=v0-v9
2966
+
2967
+
2968
+
2969
+ π‘Ž
2970
+ ,
2971
+ 𝑏
2972
+ / vectors
2973
+
2974
+
2975
+
2976
+ –
2977
+
2978
+
2979
+
2980
+ 𝑐
2981
+ / vector
2982
+
2983
+
2984
+
2985
+ –
2986
+
2987
+
2988
+
2989
+ 𝑣
2990
+ β†’
2991
+ 𝑐
2992
+ =
2993
+ 𝑣
2994
+ β†’
2995
+ π‘Ž
2996
+ βˆ’
2997
+ 𝑣
2998
+ β†’
2999
+ 𝑏
3000
+
3001
+
3002
+
3003
+
3004
+ OP26
3005
+
3006
+
3007
+
3008
+ v8=v1*v9
3009
+
3010
+
3011
+
3012
+ π‘Ž
3013
+ ,
3014
+ 𝑏
3015
+ / vectors
3016
+
3017
+
3018
+
3019
+ –
3020
+
3021
+
3022
+
3023
+ 𝑐
3024
+ / vector
3025
+
3026
+
3027
+
3028
+ –
3029
+
3030
+
3031
+
3032
+ 𝑣
3033
+ β†’
3034
+ 𝑐
3035
+ (
3036
+ 𝑖
3037
+ )
3038
+ =
3039
+ 𝑣
3040
+ β†’
3041
+ π‘Ž
3042
+ (
3043
+ 𝑖
3044
+ )
3045
+ ⁒
3046
+ 𝑣
3047
+ β†’
3048
+ 𝑏
3049
+ (
3050
+ 𝑖
3051
+ )
3052
+ ⁒
3053
+ βˆ€
3054
+ 𝑖
3055
+
3056
+
3057
+
3058
+
3059
+ OP27
3060
+
3061
+
3062
+
3063
+ v9=v8/v2
3064
+
3065
+
3066
+
3067
+ π‘Ž
3068
+ ,
3069
+ 𝑏
3070
+ / vectors
3071
+
3072
+
3073
+
3074
+ –
3075
+
3076
+
3077
+
3078
+ 𝑐
3079
+ / vector
3080
+
3081
+
3082
+
3083
+ –
3084
+
3085
+
3086
+
3087
+ 𝑣
3088
+ β†’
3089
+ 𝑐
3090
+ (
3091
+ 𝑖
3092
+ )
3093
+ =
3094
+ 𝑣
3095
+ β†’
3096
+ π‘Ž
3097
+ (
3098
+ 𝑖
3099
+ )
3100
+ /
3101
+ 𝑣
3102
+ β†’
3103
+ 𝑏
3104
+ (
3105
+ 𝑖
3106
+ )
3107
+ ⁒
3108
+ βˆ€
3109
+ 𝑖
3110
+
3111
+
3112
+
3113
+
3114
+ OP28
3115
+
3116
+
3117
+
3118
+ s6=dot(v1,v5)
3119
+
3120
+
3121
+
3122
+ π‘Ž
3123
+ ,
3124
+ 𝑏
3125
+ / vectors
3126
+
3127
+
3128
+
3129
+ –
3130
+
3131
+
3132
+
3133
+ 𝑐
3134
+ / scalar
3135
+
3136
+
3137
+
3138
+ –
3139
+
3140
+
3141
+
3142
+ 𝑠
3143
+ 𝑐
3144
+ =
3145
+ 𝑣
3146
+ β†’
3147
+ π‘Ž
3148
+ 𝑇
3149
+ ⁒
3150
+ 𝑣
3151
+ β†’
3152
+ 𝑏
3153
+
3154
+
3155
+
3156
+
3157
+ OP29
3158
+
3159
+
3160
+
3161
+ m1=outer(v6,v5)
3162
+
3163
+
3164
+
3165
+ π‘Ž
3166
+ ,
3167
+ 𝑏
3168
+ / vectors
3169
+
3170
+
3171
+
3172
+ –
3173
+
3174
+
3175
+
3176
+ 𝑐
3177
+ / matrix
3178
+
3179
+
3180
+
3181
+ –
3182
+
3183
+
3184
+
3185
+ 𝑀
3186
+ 𝑐
3187
+ =
3188
+ 𝑣
3189
+ β†’
3190
+ π‘Ž
3191
+ ⁒
3192
+ 𝑣
3193
+ β†’
3194
+ 𝑏
3195
+ 𝑇
3196
+
3197
+
3198
+
3199
+
3200
+ OP30
3201
+
3202
+
3203
+
3204
+ m1=s4*m2
3205
+
3206
+
3207
+
3208
+ π‘Ž
3209
+ ,
3210
+ 𝑏
3211
+ / sc/mat
3212
+
3213
+
3214
+
3215
+ –
3216
+
3217
+
3218
+
3219
+ 𝑐
3220
+ / matrix
3221
+
3222
+
3223
+
3224
+ –
3225
+
3226
+
3227
+
3228
+ 𝑀
3229
+ 𝑐
3230
+ =
3231
+ 𝑠
3232
+ π‘Ž
3233
+ ⁒
3234
+ 𝑀
3235
+ 𝑏
3236
+
3237
+
3238
+
3239
+
3240
+ OP31
3241
+
3242
+
3243
+
3244
+ m3=1/m0
3245
+
3246
+
3247
+
3248
+ π‘Ž
3249
+ / matrix
3250
+
3251
+
3252
+
3253
+ –
3254
+
3255
+
3256
+
3257
+ 𝑏
3258
+ / matrix
3259
+
3260
+
3261
+
3262
+ –
3263
+
3264
+
3265
+
3266
+ 𝑀
3267
+ 𝑏
3268
+ (
3269
+ 𝑖
3270
+ ,
3271
+ 𝑗
3272
+ )
3273
+ =
3274
+ 1
3275
+ /
3276
+ 𝑀
3277
+ π‘Ž
3278
+ (
3279
+ 𝑖
3280
+ ,
3281
+ 𝑗
3282
+ )
3283
+ ⁒
3284
+ βˆ€
3285
+ 𝑖
3286
+ ,
3287
+ 𝑗
3288
+
3289
+
3290
+
3291
+
3292
+ OP32
3293
+
3294
+
3295
+
3296
+ v6=dot(m1,v0)
3297
+
3298
+
3299
+
3300
+ π‘Ž
3301
+ ,
3302
+ 𝑏
3303
+ / mat/vec
3304
+
3305
+
3306
+
3307
+ –
3308
+
3309
+
3310
+
3311
+ 𝑐
3312
+ / vector
3313
+
3314
+
3315
+
3316
+ –
3317
+
3318
+
3319
+
3320
+ 𝑣
3321
+ β†’
3322
+ 𝑐
3323
+ =
3324
+ 𝑀
3325
+ π‘Ž
3326
+ ⁒
3327
+ 𝑣
3328
+ β†’
3329
+ 𝑏
3330
+
3331
+
3332
+
3333
+
3334
+ OP33
3335
+
3336
+
3337
+
3338
+ m2=bcast(v0,axis=0)
3339
+
3340
+
3341
+
3342
+ π‘Ž
3343
+ / vector
3344
+
3345
+
3346
+
3347
+ –
3348
+
3349
+
3350
+
3351
+ 𝑏
3352
+ / matrix
3353
+
3354
+
3355
+
3356
+ –
3357
+
3358
+
3359
+
3360
+ 𝑀
3361
+ 𝑏
3362
+ (
3363
+ 𝑖
3364
+ ,
3365
+ 𝑗
3366
+ )
3367
+ =
3368
+ 𝑣
3369
+ β†’
3370
+ π‘Ž
3371
+ (
3372
+ 𝑖
3373
+ )
3374
+ ⁒
3375
+ βˆ€
3376
+ 𝑖
3377
+ ,
3378
+ 𝑗
3379
+
3380
+
3381
+
3382
+
3383
+ OP34
3384
+
3385
+
3386
+
3387
+ m2=bcast(v0,axis=1)
3388
+
3389
+
3390
+
3391
+ π‘Ž
3392
+ / vector
3393
+
3394
+
3395
+
3396
+ –
3397
+
3398
+
3399
+
3400
+ 𝑏
3401
+ / matrix
3402
+
3403
+
3404
+
3405
+ –
3406
+
3407
+
3408
+
3409
+ 𝑀
3410
+ 𝑏
3411
+ (
3412
+ 𝑗
3413
+ ,
3414
+ 𝑖
3415
+ )
3416
+ =
3417
+ 𝑣
3418
+ β†’
3419
+ π‘Ž
3420
+ (
3421
+ 𝑖
3422
+ )
3423
+ ⁒
3424
+ βˆ€
3425
+ 𝑖
3426
+ ,
3427
+ 𝑗
3428
+
3429
+
3430
+
3431
+
3432
+ OP35
3433
+
3434
+
3435
+
3436
+ s2=norm(m1)
3437
+
3438
+
3439
+
3440
+ π‘Ž
3441
+ / matrix
3442
+
3443
+
3444
+
3445
+ –
3446
+
3447
+
3448
+
3449
+ 𝑏
3450
+ / scalar
3451
+
3452
+
3453
+
3454
+ –
3455
+
3456
+
3457
+
3458
+ 𝑠
3459
+ 𝑏
3460
+ =
3461
+ β€–
3462
+ 𝑀
3463
+ π‘Ž
3464
+ β€–
3465
+
3466
+
3467
+
3468
+
3469
+ OP36
3470
+
3471
+
3472
+
3473
+ v4=norm(m7,axis=0)
3474
+
3475
+
3476
+
3477
+ π‘Ž
3478
+ / matrix
3479
+
3480
+
3481
+
3482
+ –
3483
+
3484
+
3485
+
3486
+ 𝑏
3487
+ / vector
3488
+
3489
+
3490
+
3491
+ –
3492
+
3493
+
3494
+
3495
+ 𝑣
3496
+ β†’
3497
+ 𝑏
3498
+ (
3499
+ 𝑖
3500
+ )
3501
+ =
3502
+ |
3503
+ 𝑀
3504
+ π‘Ž
3505
+ (
3506
+ 𝑖
3507
+ ,
3508
+ β‹…
3509
+ )
3510
+ |
3511
+ ⁒
3512
+ βˆ€
3513
+ 𝑖
3514
+
3515
+
3516
+
3517
+
3518
+ OP37
3519
+
3520
+
3521
+
3522
+ v4=norm(m7,axis=1)
3523
+
3524
+
3525
+
3526
+ π‘Ž
3527
+ / matrix
3528
+
3529
+
3530
+
3531
+ –
3532
+
3533
+
3534
+
3535
+ 𝑏
3536
+ / vector
3537
+
3538
+
3539
+
3540
+ –
3541
+
3542
+
3543
+
3544
+ 𝑣
3545
+ β†’
3546
+ 𝑏
3547
+ (
3548
+ 𝑗
3549
+ )
3550
+ =
3551
+ |
3552
+ 𝑀
3553
+ π‘Ž
3554
+ (
3555
+ β‹…
3556
+ ,
3557
+ 𝑗
3558
+ )
3559
+ |
3560
+ ⁒
3561
+ βˆ€
3562
+ 𝑗
3563
+
3564
+
3565
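The scalar rows above (OP6–OP16) are single-argument register updates. As a minimal illustrative sketch (not the paper's implementation; the op names and register-file representation here are assumptions for illustration), they can be interpreted over a dictionary of scalar registers:

```python
import math

# Hypothetical sketch of the scalar ops OP6-OP16 from Table S2,
# interpreted over a register file s0..s9 held in a dict.
SCALAR_OPS = {
    "abs": abs,                                     # OP6:  s_b = |s_a|
    "inv": lambda x: 1.0 / x,                       # OP7:  s_b = 1 / s_a
    "sin": math.sin,                                # OP8
    "cos": math.cos,                                # OP9
    "tan": math.tan,                                # OP10
    "arcsin": math.asin,                            # OP11
    "arccos": math.acos,                            # OP12
    "arctan": math.atan,                            # OP13
    "exp": math.exp,                                # OP14: s_b = e^{s_a}
    "log": math.log,                                # OP15
    "heaviside": lambda x: 1.0 if x > 0 else 0.0,   # OP16: indicator of R+
}

def run(program, regs):
    """Apply ops of the form (dst, op_name, src) to a scalar register file."""
    for dst, op, src in program:
        regs[dst] = SCALAR_OPS[op](regs[src])
    return regs

# e.g. the table's examples chained: s8=abs(s0); s4=1/s8; s5=sin(s4)
regs = run([("s8", "abs", "s0"), ("s4", "inv", "s8"), ("s5", "sin", "s4")],
           {"s0": -2.0})
```

Because every op writes exactly one output address, a program is just a linear sequence of such triples, which is what makes the vocabulary easy to sample and mutate.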
[Table continues on the next page.]

Supplementary Table S2: Ops vocabulary (continued)

| Op ID | Code example | Input args (addresses / types) | Input consts | Output arg (address / type) | Output index | Description |
|---|---|---|---|---|---|---|
| OP38 | `m9=transpose(m3)` | $a$ / matrix | – | $b$ / matrix | – | $M_b = M_a^{T}$ |
| OP39 | `m1=abs(m8)` | $a$ / matrix | – | $b$ / matrix | – | $M_b(i,j) = \lvert M_a(i,j) \rvert \ \forall i,j$ |
| OP40 | `m2=m2+m0` | $a,b$ / matrices | – | $c$ / matrix | – | $M_c = M_a + M_b$ |
| OP41 | `m2=m3-m1` | $a,b$ / matrices | – | $c$ / matrix | – | $M_c = M_a - M_b$ |
| OP42 | `m3=m2*m3` | $a,b$ / matrices | – | $c$ / matrix | – | $M_c(i,j) = M_a(i,j)\,M_b(i,j) \ \forall i,j$ |
| OP43 | `m4=m2/m4` | $a,b$ / matrices | – | $c$ / matrix | – | $M_c(i,j) = M_a(i,j)/M_b(i,j) \ \forall i,j$ |
| OP44 | `m5=matmul(m5,m7)` | $a,b$ / matrices | – | $c$ / matrix | – | $M_c = M_a M_b$ |
| OP45 | `s1=minimum(s2,s3)` | $a,b$ / scalars | – | $c$ / scalar | – | $s_c = \min(s_a, s_b)$ |
| OP46 | `v4=minimum(v3,v9)` | $a,b$ / vectors | – | $c$ / vector | – | $\vec{v}_c(i) = \min(\vec{v}_a(i), \vec{v}_b(i)) \ \forall i$ |
| OP47 | `m2=minimum(m2,m1)` | $a,b$ / matrices | – | $c$ / matrix | – | $M_c(i,j) = \min(M_a(i,j), M_b(i,j)) \ \forall i,j$ |
| OP48 | `s8=maximum(s3,s0)` | $a,b$ / scalars | – | $c$ / scalar | – | $s_c = \max(s_a, s_b)$ |
| OP49 | `v7=maximum(v3,v6)` | $a,b$ / vectors | – | $c$ / vector | – | $\vec{v}_c(i) = \max(\vec{v}_a(i), \vec{v}_b(i)) \ \forall i$ |
| OP50 | `m7=maximum(m1,m0)` | $a,b$ / matrices | – | $c$ / matrix | – | $M_c(i,j) = \max(M_a(i,j), M_b(i,j)) \ \forall i,j$ |
| OP51 | `s2=mean(v2)` | $a$ / vector | – | $b$ / scalar | – | $s_b = \operatorname{mean}(\vec{v}_a)$ |
| OP52 | `s2=mean(m8)` | $a$ / matrix | – | $b$ / scalar | – | $s_b = \operatorname{mean}(M_a)$ |
| OP53 | `v1=mean(m2,axis=0)` | $a$ / matrix | – | $b$ / vector | – | $\vec{v}_b(i) = \operatorname{mean}(M_a(i,\cdot)) \ \forall i$ |
| OP54 | `v3=std(m2,axis=0)` | $a$ / matrix | – | $b$ / vector | – | $\vec{v}_b(i) = \operatorname{stdev}(M_a(i,\cdot)) \ \forall i$ |
| OP55 | `s3=std(v3)` | $a$ / vector | – | $b$ / scalar | – | $s_b = \operatorname{stdev}(\vec{v}_a)$ |
| OP56 | `s4=std(m0)` | $a$ / matrix | – | $b$ / scalar | – | $s_b = \operatorname{stdev}(M_a)$ |
| OP57 | `s2=C1` | – | $\gamma$ | $a$ / scalar | – | $s_a = \gamma$ |
| OP58 | `v3[5]=C2` | – | $\gamma$ | $a$ / vector | $i$ | $\vec{v}_a(i) = \gamma$ |
| OP59 | `m2[5,1]=C1` | – | $\gamma$ | $a$ / matrix | $i,j$ | $M_a(i,j) = \gamma$ |
| OP60 | `s4=uniform(C2,C3)` | – | $\alpha,\beta$ | $a$ / scalar | – | $s_a = \mathcal{U}(\alpha,\beta)$ |
| OP61 | `m2=m4` | $a$ / matrix | – | $b$ / matrix | – | $M_b = M_a$ |
| OP62 | `v2=v4` | $a$ / vector | – | $b$ / vector | – | $\vec{v}_b = \vec{v}_a$ |
| OP63 | `i2=i4` | $a$ / index | – | $b$ / index | – | $i_b = i_a$ |
| OP64 | `v2=power(v5,v3)` | $a,b$ / vectors | – | $c$ / vector | – | $\vec{v}_c(i) = \operatorname{power}(\vec{v}_a(i), \vec{v}_b(i)) \ \forall i$ |
| OP65 | `v3=m2[:,1]` | $a,b$ / matrix, index | – | $c$ / vector | – | $\vec{v}_c = M_a(\cdot, j_b)$ |
| OP66 | `v3=m2[1,:]` | $a,b$ / matrix, index | – | $c$ / vector | – | $\vec{v}_c = M_a(i_b, \cdot)$ |
| OP67 | `s3=m2[1,5]` | $a,b,c$ / m, i, i | – | $d$ / scalar | – | $s_d = M_a(i_b, j_c)$ |
| OP68 | `s3=v2[5]` | $a,b$ / vector, index | – | $c$ / scalar | – | $s_c = \vec{v}_a(i_b)$ |
| OP69 | `v3=0` | – | – | $a$ / vector | – | $\vec{v}_a = 0$ |
| OP70 | `s5=0` | – | – | $a$ / scalar | – | $s_a = 0$ |
| OP71 | `i2=0` | – | – | $a$ / index | – | $i_a = 0$ |
| OP72 | `v2=sqrt(v5)` | $a$ / vector | – | $b$ / vector | – | $\vec{v}_b(i) = \sqrt{\vec{v}_a(i)} \ \forall i$ |
| OP73 | `v2=power(v5,2)` | $a$ / vector | – | $b$ / vector | – | $\vec{v}_b(i) = \operatorname{power}(\vec{v}_a(i), 2) \ \forall i$ |
| OP74 | `s1=sum(v5)` | $a$ / vector | – | $b$ / scalar | – | $s_b = \sum_i \vec{v}_a(i)$ |
| OP75 | `s5=sqrt(s1)` | $a$ / scalar | – | $b$ / scalar | – | $s_b = \sqrt{s_a}$ |
| OP76 | `s3=s0*s2+s5` | $a,b,c$ / scalars | – | $d$ / scalar | – | $s_d = s_a s_b + s_c$ |
| OP77 | `s2=s4*C1` | $a$ / scalar | $\gamma$ | $b$ / scalar | – | $s_b = \gamma\, s_a$ |
| OP78 | `m2[1,:]=v3` | $a$ / vector | – | $b$ / matrix | $i$ | $M_b(i,\cdot) = \vec{v}_a$ |
| OP79 | `m2[:,1]=v3` | $a$ / vector | – | $b$ / matrix | $j$ | $M_b(\cdot,j) = \vec{v}_a$ |
| OP80 | `i3 = size(m1, axis=0) - 1` | $a$ / matrix | – | $b$ / index | – | $i_b = \operatorname{size}(M_a(i,\cdot)) - 1$ |
| OP81 | `i3 = size(m1, axis=1) - 1` | $a$ / matrix | – | $b$ / index | – | $i_b = \operatorname{size}(M_a(\cdot,j)) - 1$ |
| OP82 | `i3 = len(v1) - 1` | $a$ / vector | – | $b$ / index | – | $i_b = \operatorname{len}(\vec{v}_a) - 1$ |
| OP83 | `s1 = v0[3] * v1[3] + s0` | $a,b,c,d$ / v, v, s, i | – | $e$ / scalar | – | $s_e = \vec{v}_a(i_d)\,\vec{v}_b(i_d) + s_c$ |
| OP84 | `s3=dot(v0[:5],v1[:5])` | $a,b,c$ / v, v, i | – | $d$ / scalar | – | $s_d = \vec{v}_a^{\,T}(:i_c{+}1)\,\vec{v}_b(:i_c{+}1)$ |
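As with the scalar ops, the vector and matrix rows above can be read as simple array rules. Below is a hedged, list-based sketch in plain Python of three representative ops; the function names are illustrative only and not taken from the paper:

```python
import math

# Illustrative semantics of three Table S2 ops, on plain Python lists.

def bcast_axis0(v, ncols):
    # OP33: M_b(i, j) = v_a(i) for all i, j  (row i is constant v_a(i))
    return [[x] * ncols for x in v]

def dot(u, v):
    # OP28: s_c = v_a^T v_b
    return sum(x * y for x, y in zip(u, v))

def norm_axis0(m):
    # OP36: v_b(i) = |M_a(i, .)| for all i  (Euclidean norm of each row)
    return [math.sqrt(sum(x * x for x in row)) for row in m]

m = bcast_axis0([1.0, 2.0], ncols=3)       # [[1, 1, 1], [2, 2, 2]]
row_norms = norm_axis0(m)
s = dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])  # 4 + 10 + 18 = 32
```

The `axis` convention matches the table: `axis=0` operates over (or replicates along) columns so that one value per row remains, and `axis=1` is the transposed case.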