2022-05-10 21:34:15.530691 PDT | [0] Epoch -1000 finished
----------------------------------  ----------------
epoch  -1000
replay_buffer/size  999996
trainer/num train calls  1000
trainer/Policy Loss  2.08057
trainer/Log Pis Mean  -2.0242
trainer/Log Pis Std  0.383452
trainer/Log Pis Max  -0.890336
trainer/Log Pis Min  -2.74687
trainer/policy/mean Mean  -0.000151047
trainer/policy/mean Std  0.000143057
trainer/policy/mean Max  0.000153419
trainer/policy/mean Min  -0.00108394
trainer/policy/normal/std Mean  0.999966
trainer/policy/normal/std Std  0.000372336
trainer/policy/normal/std Max  1.00056
trainer/policy/normal/std Min  0.99901
trainer/policy/normal/log_std Mean  -3.403e-05
trainer/policy/normal/log_std Std  0.000372359
trainer/policy/normal/log_std Max  0.000563939
trainer/policy/normal/log_std Min  -0.000990106
eval/num steps total  885
eval/num paths total  6
eval/path length Mean  147.5
eval/path length Std  12.052
eval/path length Max  169
eval/path length Min  133
eval/Rewards Mean  0.947688
eval/Rewards Std  0.0751563
eval/Rewards Max  1.00097
eval/Rewards Min  0.511679
eval/Returns Mean  139.784
eval/Returns Std  9.02147
eval/Returns Max  153.885
eval/Returns Min  128.037
eval/Actions Mean  -0.000132344
eval/Actions Std  0.000132035
eval/Actions Max  4.98875e-05
eval/Actions Min  -0.000350481
eval/Num Paths  6
eval/Average Returns  139.784
eval/normalized_score  4.91789
time/evaluation sampling (s)  1.82279
time/logging (s)  0.00368053
time/sampling batch (s)  0.265311
time/saving (s)  0.00365946
time/training (s)  4.1785
time/epoch (s)  6.27394
time/total (s)  30230.9
Epoch  -1000
----------------------------------  ----------------

2022-05-10 21:34:20.943695 PDT | [0] Epoch -999 finished
----------------------------------  ----------------
epoch  -999
replay_buffer/size  999996
trainer/num train calls  2000
trainer/Policy Loss  -0.700571
trainer/Log Pis Mean  0.513774
trainer/Log Pis Std  2.21618
trainer/Log Pis Max  7.9459
trainer/Log Pis Min  -5.84089
trainer/policy/mean Mean  0.146481
trainer/policy/mean Std  0.551998
trainer/policy/mean Max  0.98412
trainer/policy/mean Min  -0.998778
trainer/policy/normal/std Mean  0.61127
trainer/policy/normal/std Std  0.266687
trainer/policy/normal/std Max  1.52626
trainer/policy/normal/std Min  0.140468
trainer/policy/normal/log_std Mean  -0.603638
trainer/policy/normal/log_std Std  0.50195
trainer/policy/normal/log_std Max  0.422823
trainer/policy/normal/log_std Min  -1.96278
eval/num steps total  1407
eval/num paths total  7
eval/path length Mean  522
eval/path length Std  0
eval/path length Max  522
eval/path length Min  522
eval/Rewards Mean  3.22738
eval/Rewards Std  0.921924
eval/Rewards Max  5.96004
eval/Rewards Min  1.00967
eval/Returns Mean  1684.69
eval/Returns Std  0
eval/Returns Max  1684.69
eval/Returns Min  1684.69
eval/Actions Mean  0.181825
eval/Actions Std  0.531462
eval/Actions Max  0.987061
eval/Actions Min  -0.999021
eval/Num Paths  1
eval/Average Returns  1684.69
eval/normalized_score  52.3868
time/evaluation sampling (s)  0.929678
time/logging (s)  0.00236416
time/sampling batch (s)  0.265568
time/saving (s)  0.00314158
time/training (s)  4.19152
time/epoch (s)  5.39227
time/total (s)  30236.3
Epoch  -999
----------------------------------  ----------------

2022-05-10 21:34:26.341472 PDT | [0] Epoch -998 finished
----------------------------------  ----------------
epoch  -998
replay_buffer/size  999996
trainer/num train calls  3000
trainer/Policy Loss  -1.05486
trainer/Log Pis Mean  0.729826
trainer/Log Pis Std  2.23988
trainer/Log Pis Max  7.05894
trainer/Log Pis Min  -6.12047
trainer/policy/mean Mean  0.1495
trainer/policy/mean Std  0.567816
trainer/policy/mean Max  0.993031
trainer/policy/mean Min  -0.997876
trainer/policy/normal/std Mean  0.543871
trainer/policy/normal/std Std  0.248469
trainer/policy/normal/std Max  1.49734
trainer/policy/normal/std Min  0.102636
trainer/policy/normal/log_std Mean  -0.734978
trainer/policy/normal/log_std Std  0.537459
trainer/policy/normal/log_std Max  0.403693
trainer/policy/normal/log_std Min  -2.27657
eval/num steps total  2041
eval/num paths total  8
eval/path length Mean  634
eval/path length Std  0
eval/path length Max  634
eval/path length Min  634
eval/Rewards Mean  3.19066
eval/Rewards Std  0.751917
eval/Rewards Max  4.71267
eval/Rewards Min  0.998178
eval/Returns Mean  2022.88
eval/Returns Std  0
eval/Returns Max  2022.88
eval/Returns Min  2022.88
eval/Actions Mean  0.150154
eval/Actions Std  0.56476
eval/Actions Max  0.990176
eval/Actions Min  -0.99764
eval/Num Paths  1
eval/Average Returns  2022.88
eval/normalized_score  62.7779
time/evaluation sampling (s)  0.909844
time/logging (s)  0.0029005
time/sampling batch (s)  0.268022
time/saving (s)  0.00311116
time/training (s)  4.19571
time/epoch (s)  5.37959
time/total (s)  30241.7
Epoch  -998
----------------------------------  ----------------

2022-05-10 21:34:31.743884 PDT | [0] Epoch -997 finished
----------------------------------  ----------------
epoch  -997
replay_buffer/size  999996
trainer/num train calls  4000
trainer/Policy Loss  -1.17194
trainer/Log Pis Mean  1.16795
trainer/Log Pis Std  2.59182
trainer/Log Pis Max  9.10201
trainer/Log Pis Min  -5.99112
trainer/policy/mean Mean  0.160753
trainer/policy/mean Std  0.582256
trainer/policy/mean Max  0.996697
trainer/policy/mean Min  -0.997121
trainer/policy/normal/std Mean  0.503135
trainer/policy/normal/std Std  0.221239
trainer/policy/normal/std Max  1.36354
trainer/policy/normal/std Min  0.0970104
trainer/policy/normal/log_std Mean  -0.80629
trainer/policy/normal/log_std Std  0.52572
trainer/policy/normal/log_std Max  0.310084
trainer/policy/normal/log_std Min  -2.33294
eval/num steps total  2608
eval/num paths total  9
eval/path length Mean  567
eval/path length Std  0
eval/path length Max  567
eval/path length Min  567
eval/Rewards Mean  3.1729
eval/Rewards Std  0.808466
eval/Rewards Max  4.81279
eval/Rewards Min  0.996069
eval/Returns Mean  1799.04
eval/Returns Std  0
eval/Returns Max  1799.04
eval/Returns Min  1799.04
eval/Actions Mean  0.166009
eval/Actions Std  0.57491
eval/Actions Max  0.995015
eval/Actions Min  -0.998197
eval/Num Paths  1
eval/Average Returns  1799.04
eval/normalized_score  55.9001
time/evaluation sampling (s)  0.911769
time/logging (s)  0.00250041
time/sampling batch (s)  0.26461
time/saving (s)  0.00333515
time/training (s)  4.20108
time/epoch (s)  5.3833
time/total (s)  30247.1
Epoch  -997
----------------------------------  ----------------

2022-05-10 21:34:37.127706 PDT | [0] Epoch -996 finished
----------------------------------  ----------------
epoch  -996
replay_buffer/size  999996
trainer/num train calls  5000
trainer/Policy Loss  -1.41551
trainer/Log Pis Mean  1.20499
trainer/Log Pis Std  2.57778
trainer/Log Pis Max  8.4039
trainer/Log Pis Min  -6.67808
trainer/policy/mean Mean  0.153682
trainer/policy/mean Std  0.593476
trainer/policy/mean Max  0.995461
trainer/policy/mean Min  -0.997094
trainer/policy/normal/std Mean  0.484164
trainer/policy/normal/std Std  0.21521
trainer/policy/normal/std Max  1.26594
trainer/policy/normal/std Min  0.0933585
trainer/policy/normal/log_std Mean  -0.848085
trainer/policy/normal/log_std Std  0.534045
trainer/policy/normal/log_std Max  0.235812
trainer/policy/normal/log_std Min  -2.37131
eval/num steps total  3445
eval/num paths total  11
eval/path length Mean  418.5
eval/path length Std  162.5
eval/path length Max  581
eval/path length Min  256
eval/Rewards Mean  3.10445
eval/Rewards Std  0.914124
eval/Rewards Max  5.45895
eval/Rewards Min  1.01109
eval/Returns Mean  1299.21
eval/Returns Std  523.879
eval/Returns Max  1823.09
eval/Returns Min  775.334
eval/Actions Mean  0.146925
eval/Actions Std  0.549452
eval/Actions Max  0.99539
eval/Actions Min  -0.998572
eval/Num Paths  2
eval/Average Returns  1299.21
eval/normalized_score  40.5425
time/evaluation sampling (s)  0.899775
time/logging (s)  0.00329606
time/sampling batch (s)  0.266378
time/saving (s)  0.00301561
time/training (s)  4.19336
time/epoch (s)  5.36583
time/total (s)  30252.4
Epoch  -996
----------------------------------  ----------------

2022-05-10 21:34:42.515149 PDT | [0] Epoch -995 finished
----------------------------------  ----------------
epoch  -995
replay_buffer/size  999996
trainer/num train calls  6000
trainer/Policy Loss  -1.51747
trainer/Log Pis Mean  1.34678
trainer/Log Pis Std  2.49795
trainer/Log Pis Max  8.64417
trainer/Log Pis Min  -5.45719
trainer/policy/mean Mean  0.12376
trainer/policy/mean Std  0.604951
trainer/policy/mean Max  0.996843
trainer/policy/mean Min  -0.997543
trainer/policy/normal/std Mean  0.47817
trainer/policy/normal/std Std  0.218963
trainer/policy/normal/std Max  1.27467
trainer/policy/normal/std Min  0.091223
trainer/policy/normal/log_std Mean  -0.864875
trainer/policy/normal/log_std Std  0.540167
trainer/policy/normal/log_std Max  0.242691
trainer/policy/normal/log_std Min  -2.39445
eval/num steps total  3992
eval/num paths total  13
eval/path length Mean  273.5
eval/path length Std  40.5
eval/path length Max  314
eval/path length Min  233
eval/Rewards Mean  2.98659
eval/Rewards Std  1.05888
eval/Rewards Max  5.79806
eval/Rewards Min  0.991397
eval/Returns Mean  816.833
eval/Returns Std  124.868
eval/Returns Max  941.7
eval/Returns Min  691.965
eval/Actions Mean  0.112069
eval/Actions Std  0.519299
eval/Actions Max  0.99418
eval/Actions Min  -0.998396
eval/Num Paths  2
eval/Average Returns  816.833
eval/normalized_score  25.7209
time/evaluation sampling (s)  0.876783
time/logging (s)  0.00291957
time/sampling batch (s)  0.266703
time/saving (s)  0.00365275
time/training (s)  4.2182
time/epoch (s)  5.36825
time/total (s)  30257.8
Epoch  -995
----------------------------------  ----------------

2022-05-10 21:34:48.163819 PDT | [0] Epoch -994 finished
----------------------------------  ----------------
epoch  -994
replay_buffer/size  999996
trainer/num train calls  7000
trainer/Policy Loss  -1.50547
trainer/Log Pis Mean  1.61391
trainer/Log Pis Std  2.51888
trainer/Log Pis Max  9.01529
trainer/Log Pis Min  -4.49173
trainer/policy/mean Mean  0.140123
trainer/policy/mean Std  0.596783
trainer/policy/mean Max  0.994998
trainer/policy/mean Min  -0.997691
trainer/policy/normal/std Mean  0.453148
trainer/policy/normal/std Std  0.203629
trainer/policy/normal/std Max  1.43024
trainer/policy/normal/std Min  0.0881281
trainer/policy/normal/log_std Mean  -0.917325
trainer/policy/normal/log_std Std  0.539918
trainer/policy/normal/log_std Max  0.35784
trainer/policy/normal/log_std Min  -2.42896
eval/num steps total  4729
eval/num paths total  15
eval/path length Mean  368.5
eval/path length Std  57.5
eval/path length Max  426
eval/path length Min  311
eval/Rewards Mean  3.07704
eval/Rewards Std  0.915543
eval/Rewards Max  5.33842
eval/Rewards Min  0.985001
eval/Returns Mean  1133.89
eval/Returns Std  207.658
eval/Returns Max  1341.55
eval/Returns Min  926.233
eval/Actions Mean  0.140104
eval/Actions Std  0.548999
eval/Actions Max  0.995733
eval/Actions Min  -0.997654
eval/Num Paths  2
eval/Average Returns  1133.89
eval/normalized_score  35.4628
time/evaluation sampling (s)  0.883767
time/logging (s)  0.00359877
time/sampling batch (s)  0.268445
time/saving (s)  0.00367861
time/training (s)  4.47026
time/epoch (s)  5.62975
time/total (s)  30263.4
Epoch  -994
----------------------------------  ----------------

2022-05-10 21:34:53.595813 PDT | [0] Epoch -993 finished
----------------------------------  ----------------
epoch  -993
replay_buffer/size  999996
trainer/num train calls  8000
trainer/Policy Loss  -1.5613
trainer/Log Pis Mean  1.53224
trainer/Log Pis Std  2.56625
trainer/Log Pis Max  8.95176
trainer/Log Pis Min  -4.5517
trainer/policy/mean Mean  0.144737
trainer/policy/mean Std  0.600415
trainer/policy/mean Max  0.996616
trainer/policy/mean Min  -0.997785
trainer/policy/normal/std Mean  0.446521
trainer/policy/normal/std Std  0.205955
trainer/policy/normal/std Max  1.17463
trainer/policy/normal/std Min  0.0825134
trainer/policy/normal/log_std Mean  -0.934574
trainer/policy/normal/log_std Std  0.540795
trainer/policy/normal/log_std Max  0.160955
trainer/policy/normal/log_std Min  -2.49479
eval/num steps total  5403
eval/num paths total  16
eval/path length Mean  674
eval/path length Std  0
eval/path length Max  674
eval/path length Min  674
eval/Rewards Mean  3.17771
eval/Rewards Std  0.765205
eval/Rewards Max  5.45375
eval/Rewards Min  1.00872
eval/Returns Mean  2141.78
eval/Returns Std  0
eval/Returns Max  2141.78
eval/Returns Min  2141.78
eval/Actions Mean  0.159567
eval/Actions Std  0.575784
eval/Actions Max  0.998313
eval/Actions Min  -0.997582
eval/Num Paths  1
eval/Average Returns  2141.78
eval/normalized_score  66.4311
time/evaluation sampling (s)  0.880706
time/logging (s)  0.00319664
time/sampling batch (s)  0.271416
time/saving (s)  0.00354973
time/training (s)  4.2529
time/epoch (s)  5.41177
time/total (s)  30268.8
Epoch  -993
----------------------------------  ----------------

2022-05-10 21:34:58.935557 PDT | [0] Epoch -992 finished
----------------------------------  ----------------
epoch  -992
replay_buffer/size  999996
trainer/num train calls  9000
trainer/Policy Loss  -1.6247
trainer/Log Pis Mean  1.70016
trainer/Log Pis Std  2.47353
trainer/Log Pis Max  12.8762
trainer/Log Pis Min  -8.5564
trainer/policy/mean Mean  0.160201
trainer/policy/mean Std  0.602372
trainer/policy/mean Max  0.997472
trainer/policy/mean Min  -0.998624
trainer/policy/normal/std Mean  0.450403
trainer/policy/normal/std Std  0.208295
trainer/policy/normal/std Max  1.21948
trainer/policy/normal/std Min  0.0902308
trainer/policy/normal/log_std Mean  -0.92872
trainer/policy/normal/log_std Std  0.54904
trainer/policy/normal/log_std Max  0.198421
trainer/policy/normal/log_std Min  -2.40538
eval/num steps total  5878
eval/num paths total  17
eval/path length Mean  475
eval/path length Std  0
eval/path length Max  475
eval/path length Min  475
eval/Rewards Mean  3.00487
eval/Rewards Std  0.797224
eval/Rewards Max  4.61876
eval/Rewards Min  0.996949
eval/Returns Mean  1427.31
eval/Returns Std  0
eval/Returns Max  1427.31
eval/Returns Min  1427.31
eval/Actions Mean  0.164467
eval/Actions Std  0.56626
eval/Actions Max  0.995034
eval/Actions Min  -0.996805
eval/Num Paths  1
eval/Average Returns  1427.31
eval/normalized_score  44.4785
time/evaluation sampling (s)  0.885036
time/logging (s)  0.00252047
time/sampling batch (s)  0.263373
time/saving (s)  0.00353027
time/training (s)  4.16535
time/epoch (s)  5.31981
time/total (s)  30274.2
Epoch  -992
----------------------------------  ----------------

2022-05-10 21:35:04.349961 PDT | [0] Epoch -991 finished
----------------------------------  ----------------
epoch  -991
replay_buffer/size  999996
trainer/num train calls  10000
trainer/Policy Loss  -1.80125
trainer/Log Pis Mean  1.69482
trainer/Log Pis Std  2.5156
trainer/Log Pis Max  10.0044
trainer/Log Pis Min  -5.41238
trainer/policy/mean Mean  0.117403
trainer/policy/mean Std  0.612831
trainer/policy/mean Max  0.99879
trainer/policy/mean Min  -0.998115
trainer/policy/normal/std Mean  0.437679
trainer/policy/normal/std Std  0.19509
trainer/policy/normal/std Max  1.21849
trainer/policy/normal/std Min  0.0876592
trainer/policy/normal/log_std Mean  -0.94663
trainer/policy/normal/log_std Std  0.52415
trainer/policy/normal/log_std Max  0.197609
trainer/policy/normal/log_std Min  -2.4343
eval/num steps total  6536
eval/num paths total  18
eval/path length Mean  658
eval/path length Std  0
eval/path length Max  658
eval/path length Min  658
eval/Rewards Mean  3.13949
eval/Rewards Std  0.74963
eval/Rewards Max  5.25057
eval/Rewards Min  0.981266
eval/Returns Mean  2065.79
eval/Returns Std  0
eval/Returns Max  2065.79
eval/Returns Min  2065.79
eval/Actions Mean  0.141591
eval/Actions Std  0.571286
eval/Actions Max  0.996664
eval/Actions Min  -0.998259
eval/Num Paths  1
eval/Average Returns  2065.79
eval/normalized_score  64.0962
time/evaluation sampling (s)  0.882341
time/logging (s)  0.0026716
time/sampling batch (s)  0.269649
time/saving (s)  0.00310039
time/training (s)  4.23735
time/epoch (s)  5.39511
time/total (s)  30279.6
Epoch  -991
----------------------------------  ----------------

2022-05-10 21:35:09.735138 PDT | [0] Epoch -990 finished
----------------------------------  ----------------
epoch  -990
replay_buffer/size  999996
trainer/num train calls  11000
trainer/Policy Loss  -1.91356
trainer/Log Pis Mean  1.79674
trainer/Log Pis Std  2.61765
trainer/Log Pis Max  9.72437
trainer/Log Pis Min  -5.41835
trainer/policy/mean Mean  0.160773
trainer/policy/mean Std  0.617057
trainer/policy/mean Max  0.997721
trainer/policy/mean Min  -0.997312
trainer/policy/normal/std Mean  0.443212
trainer/policy/normal/std Std  0.202026
trainer/policy/normal/std Max  1.16164
trainer/policy/normal/std Min  0.0869694
trainer/policy/normal/log_std Mean  -0.941032
trainer/policy/normal/log_std Std  0.540399
trainer/policy/normal/log_std Max  0.149832
trainer/policy/normal/log_std Min  -2.4422
eval/num steps total  7166
eval/num paths total  19
eval/path length Mean  630
eval/path length Std  0
eval/path length Max  630
eval/path length Min  630
eval/Rewards Mean  3.18878
eval/Rewards Std  0.777351
eval/Rewards Max  5.09152
eval/Rewards Min  1.0133
eval/Returns Mean  2008.93
eval/Returns Std  0
eval/Returns Max  2008.93
eval/Returns Min  2008.93
eval/Actions Mean  0.157601
eval/Actions Std  0.572799
eval/Actions Max  0.998295
eval/Actions Min  -0.997752
eval/Num Paths  1
eval/Average Returns  2008.93
eval/normalized_score  62.3493
time/evaluation sampling (s)  0.878816
time/logging (s)  0.00259028
time/sampling batch (s)  0.268441
time/saving (s)  0.00299111
time/training (s)  4.21327
time/epoch (s)  5.36611
time/total (s)  30284.9
Epoch  -990
----------------------------------  ----------------

2022-05-10 21:35:15.170931 PDT | [0] Epoch -989 finished
----------------------------------  ----------------
epoch  -989
replay_buffer/size  999996
trainer/num train calls  12000
trainer/Policy Loss  -1.69727
trainer/Log Pis Mean  1.65522
trainer/Log Pis Std  2.55384
trainer/Log Pis Max  9.52865
trainer/Log Pis Min  -4.63646
trainer/policy/mean Mean  0.182372
trainer/policy/mean Std  0.597805
trainer/policy/mean Max  0.997547
trainer/policy/mean Min  -0.998671
trainer/policy/normal/std Mean  0.439578
trainer/policy/normal/std Std  0.20224
trainer/policy/normal/std Max  1.13818
trainer/policy/normal/std Min  0.0908809
trainer/policy/normal/log_std Mean  -0.952548
trainer/policy/normal/log_std Std  0.548236
trainer/policy/normal/log_std Max  0.129427
trainer/policy/normal/log_std Min  -2.39821
eval/num steps total  7653
eval/num paths total  20
eval/path length Mean  487
eval/path length Std  0
eval/path length Max  487
eval/path length Min  487
eval/Rewards Mean  3.01388
eval/Rewards Std  0.772666
eval/Rewards Max  4.48321
eval/Rewards Min  1.01388
eval/Returns Mean  1467.76
eval/Returns Std  0
eval/Returns Max  1467.76
eval/Returns Min  1467.76
eval/Actions Mean  0.156867
eval/Actions Std  0.569739
eval/Actions Max  0.993915
eval/Actions Min  -0.998119
eval/Num Paths  1
eval/Average Returns  1467.76
eval/normalized_score  45.7213
time/evaluation sampling (s)  0.892543
time/logging (s)  0.00268973
time/sampling batch (s)  0.27154
time/saving (s)  0.00346208
time/training (s)  4.24666
time/epoch (s)  5.41689
time/total (s)  30290.4
Epoch  -989
----------------------------------  ----------------

2022-05-10 21:35:20.583206 PDT | [0] Epoch -988 finished
----------------------------------  ----------------
epoch  -988
replay_buffer/size  999996
trainer/num train calls  13000
trainer/Policy Loss  -1.98808
trainer/Log Pis Mean  2.03736
trainer/Log Pis Std  2.65539
trainer/Log Pis Max  10.3639
trainer/Log Pis Min  -5.21487
trainer/policy/mean Mean  0.130417
trainer/policy/mean Std  0.621989
trainer/policy/mean Max  0.997698
trainer/policy/mean Min  -0.997239
trainer/policy/normal/std Mean  0.438291
trainer/policy/normal/std Std  0.199856
trainer/policy/normal/std Max  1.20882
trainer/policy/normal/std Min  0.0846448
trainer/policy/normal/log_std Mean  -0.954134
trainer/policy/normal/log_std Std  0.546979
trainer/policy/normal/log_std Max  0.189643
trainer/policy/normal/log_std Min  -2.46929
eval/num steps total  8239
eval/num paths total  21
eval/path length Mean  586
eval/path length Std  0
eval/path length Max  586
eval/path length Min  586
eval/Rewards Mean  3.06355
eval/Rewards Std  0.718299
eval/Rewards Max  4.61323
eval/Rewards Min  0.98533
eval/Returns Mean  1795.24
eval/Returns Std  0
eval/Returns Max  1795.24
eval/Returns Min  1795.24
eval/Actions Mean  0.16756
eval/Actions Std  0.582936
eval/Actions Max  0.998551
eval/Actions Min  -0.997226
eval/Num Paths  1
eval/Average Returns  1795.24
eval/normalized_score  55.7834
time/evaluation sampling (s)  0.909489
time/logging (s)  0.00251031
time/sampling batch (s)  0.273559
time/saving (s)  0.00297472
time/training (s)  4.20443
time/epoch (s)  5.39296
time/total (s)  30295.7
Epoch  -988
----------------------------------  ----------------

2022-05-10 21:35:26.046164 PDT | [0] Epoch -987 finished
----------------------------------  ----------------
epoch  -987
replay_buffer/size  999996
trainer/num train calls  14000
trainer/Policy Loss  -1.69246
trainer/Log Pis Mean  1.74908
trainer/Log Pis Std  2.58691
trainer/Log Pis Max  10.8051
trainer/Log Pis Min  -4.8459
trainer/policy/mean Mean  0.142625
trainer/policy/mean Std  0.598942
trainer/policy/mean Max  0.99792
trainer/policy/mean Min  -0.997276
trainer/policy/normal/std Mean  0.422287
trainer/policy/normal/std Std  0.18898
trainer/policy/normal/std Max  1.09132
trainer/policy/normal/std Min  0.0870902
trainer/policy/normal/log_std Mean  -0.987612
trainer/policy/normal/log_std Std  0.538453
trainer/policy/normal/log_std Max  0.0873846
trainer/policy/normal/log_std Min  -2.44081
eval/num steps total  8771
eval/num paths total  22
eval/path length Mean  532
eval/path length Std  0
eval/path length Max  532
eval/path length Min  532
eval/Rewards Mean  3.14299
eval/Rewards Std  0.842835
eval/Rewards Max  5.47758
eval/Rewards Min  0.986163
eval/Returns Mean  1672.07
eval/Returns Std  0
eval/Returns Max  1672.07
eval/Returns Min  1672.07
eval/Actions Mean  0.161277
eval/Actions Std  0.572904
eval/Actions Max  0.998207
eval/Actions Min  -0.998276
eval/Num Paths  1
eval/Average Returns  1672.07
eval/normalized_score  51.999
time/evaluation sampling (s)  0.924463
time/logging (s)  0.002889
time/sampling batch (s)  0.270435
time/saving (s)  0.00376225
time/training (s)  4.24281
time/epoch (s)  5.44436
time/total (s)  30301.2
Epoch  -987
----------------------------------  ----------------

2022-05-10 21:35:31.431559 PDT | [0] Epoch -986 finished
----------------------------------  ----------------
epoch  -986
replay_buffer/size  999996
trainer/num train calls  15000
trainer/Policy Loss  -1.75315
trainer/Log Pis Mean  1.82224
trainer/Log Pis Std  2.40833
trainer/Log Pis Max  15.8687
trainer/Log Pis Min  -4.05178
trainer/policy/mean Mean  0.153436
trainer/policy/mean Std  0.604187
trainer/policy/mean Max  0.997227
trainer/policy/mean Min  -0.998948
trainer/policy/normal/std Mean  0.421499
trainer/policy/normal/std Std  0.196809
trainer/policy/normal/std Max  1.14728
trainer/policy/normal/std Min  0.0824994
trainer/policy/normal/log_std Mean  -1.00087
trainer/policy/normal/log_std Std  0.563927
trainer/policy/normal/log_std Max  0.137392
trainer/policy/normal/log_std Min  -2.49496
eval/num steps total  9446
eval/num paths total  23
eval/path length Mean  675
eval/path length Std  0
eval/path length Max  675
eval/path length Min  675
eval/Rewards Mean  3.15982
eval/Rewards Std  0.708465
eval/Rewards Max  5.19248
eval/Rewards Min  0.995696
eval/Returns Mean  2132.88
eval/Returns Std  0
eval/Returns Max  2132.88
eval/Returns Min  2132.88
eval/Actions Mean  0.166434
eval/Actions Std  0.593784
eval/Actions Max  0.998191
eval/Actions Min  -0.998831
eval/Num Paths  1
eval/Average Returns  2132.88
eval/normalized_score  66.1578
time/evaluation sampling (s)  0.905544
time/logging (s)  0.00328205
time/sampling batch (s)  0.265064
time/saving (s)  0.00362855
time/training (s)  4.18884
time/epoch (s)  5.36636
time/total (s)  30306.6
Epoch  -986
----------------------------------  ----------------

2022-05-10 21:35:36.900197 PDT | [0] Epoch -985 finished
----------------------------------  ----------------
epoch  -985
replay_buffer/size  999996
trainer/num train calls  16000
trainer/Policy Loss  -1.67275
trainer/Log Pis Mean  1.57626
trainer/Log Pis Std  2.52012
trainer/Log Pis Max  12.9807
trainer/Log Pis Min  -4.35165
trainer/policy/mean Mean  0.151012
trainer/policy/mean Std  0.591224
trainer/policy/mean Max  0.998059
trainer/policy/mean Min  -0.998403
trainer/policy/normal/std Mean  0.424924
trainer/policy/normal/std Std  0.193685
trainer/policy/normal/std Max  1.12305
trainer/policy/normal/std Min  0.0895257
trainer/policy/normal/log_std Mean  -0.984305
trainer/policy/normal/log_std Std  0.543398
trainer/policy/normal/log_std Max  0.116046
trainer/policy/normal/log_std Min  -2.41323
eval/num steps total  10018
eval/num paths total  24
eval/path length Mean  572
eval/path length Std  0
eval/path length Max  572
eval/path length Min  572
eval/Rewards Mean  3.17473
eval/Rewards Std  0.710237
eval/Rewards Max  4.46691
eval/Rewards Min  0.979054
eval/Returns Mean  1815.95
eval/Returns Std  0
eval/Returns Max  1815.95
eval/Returns Min  1815.95
eval/Actions Mean  0.144233
eval/Actions Std  0.595336
eval/Actions Max  0.998978
eval/Actions Min  -0.99898
eval/Num Paths  1
eval/Average Returns  1815.95
eval/normalized_score  56.4197
time/evaluation sampling (s)  0.986927
time/logging (s)  0.00301739
time/sampling batch (s)  0.265568
time/saving (s)  0.00379745
time/training (s)  4.18975
time/epoch (s)  5.44906
time/total (s)  30312
Epoch  -985
----------------------------------  ----------------

2022-05-10 21:35:42.338145 PDT | [0] Epoch -984 finished
----------------------------------  ----------------
epoch  -984
replay_buffer/size  999996
trainer/num train calls  17000
trainer/Policy Loss  -1.83126
trainer/Log Pis Mean  1.66844
trainer/Log Pis Std  2.62125
trainer/Log Pis Max  10.4734
trainer/Log Pis Min  -6.20913
trainer/policy/mean Mean  0.154615
trainer/policy/mean Std  0.608745
trainer/policy/mean Max  0.996938
trainer/policy/mean Min  -0.998108
trainer/policy/normal/std Mean  0.430466
trainer/policy/normal/std Std  0.195566
trainer/policy/normal/std Max  1.16487
trainer/policy/normal/std Min  0.0859499
trainer/policy/normal/log_std Mean  -0.971636
trainer/policy/normal/log_std Std  0.544979
trainer/policy/normal/log_std Max  0.152607
trainer/policy/normal/log_std Min  -2.45399
eval/num steps total  10521
eval/num paths total  25
eval/path length Mean  503
eval/path length Std  0
eval/path length Max  503
eval/path length Min  503
eval/Rewards Mean  3.09003
eval/Rewards Std  0.760564
eval/Rewards Max  4.71898
eval/Rewards Min  0.983903
eval/Returns Mean  1554.28
eval/Returns Std  0
eval/Returns Max  1554.28
eval/Returns Min  1554.28
eval/Actions Mean  0.162734
eval/Actions Std  0.579517
eval/Actions Max  0.998887
eval/Actions Min  -0.998206
eval/Num Paths  1
eval/Average Returns  1554.28
eval/normalized_score  48.3798
time/evaluation sampling (s)  0.903749
time/logging (s)  0.00233088
time/sampling batch (s)  0.26805
time/saving (s)  0.00310277
time/training (s)  4.24057
time/epoch (s)  5.41781
time/total (s)  30317.4
Epoch  -984
----------------------------------  ----------------

2022-05-10 21:35:47.706815 PDT | [0] Epoch -983 finished
----------------------------------  ----------------
epoch  -983
replay_buffer/size  999996
trainer/num train calls  18000
trainer/Policy Loss  -1.75163
trainer/Log Pis Mean  1.67896
trainer/Log Pis Std  2.64514
trainer/Log Pis Max  12.902
trainer/Log Pis Min  -5.98316
trainer/policy/mean Mean  0.140409
trainer/policy/mean Std  0.599443
trainer/policy/mean Max  0.996444
trainer/policy/mean Min  -0.997979
trainer/policy/normal/std Mean  0.424753
trainer/policy/normal/std Std  0.196103
trainer/policy/normal/std Max  1.07076
trainer/policy/normal/std Min  0.08519
trainer/policy/normal/log_std Mean  -0.987878
trainer/policy/normal/log_std Std  0.549997
trainer/policy/normal/log_std Max  0.0683646
trainer/policy/normal/log_std Min  -2.46287
eval/num steps total  11500
eval/num paths total  27
eval/path length Mean  489.5
eval/path length Std  6.5
eval/path length Max  496
eval/path length Min  483
eval/Rewards Mean  3.02459
eval/Rewards Std  0.781403
eval/Rewards Max  4.76336
eval/Rewards Min  0.982451
eval/Returns Mean  1480.54
eval/Returns Std  29.0447
eval/Returns Max  1509.58
eval/Returns Min  1451.49
eval/Actions Mean  0.150271
eval/Actions Std  0.568845
eval/Actions Max  0.998716
eval/Actions Min  -0.998683
eval/Num Paths  2
eval/Average Returns  1480.54
eval/normalized_score  46.1139
time/evaluation sampling (s)  0.874355
time/logging (s)  0.00419892
time/sampling batch (s)  0.266433
time/saving (s)  0.00360431
time/training (s)  4.20276
time/epoch (s)  5.35135
time/total (s)  30322.8
Epoch  -983
----------------------------------  ----------------

2022-05-10 21:35:53.080355 PDT | [0] Epoch -982 finished
----------------------------------  ----------------
epoch  -982
replay_buffer/size  999996
trainer/num train calls  19000
trainer/Policy Loss  -1.82231
trainer/Log Pis Mean  1.83122
trainer/Log Pis Std  2.53163
trainer/Log Pis Max  8.63397
trainer/Log Pis Min  -6.40184
trainer/policy/mean Mean  0.130885
trainer/policy/mean Std  0.599861
trainer/policy/mean Max  0.995944
trainer/policy/mean Min  -0.997645
trainer/policy/normal/std Mean  0.41741
trainer/policy/normal/std Std  0.190865
trainer/policy/normal/std Max  0.999836
trainer/policy/normal/std Min  0.0877618
trainer/policy/normal/log_std Mean  -1.00135
trainer/policy/normal/log_std Std  0.53908
trainer/policy/normal/log_std Max  -0.000163807
trainer/policy/normal/log_std Min  -2.43313
eval/num steps total  12275
eval/num paths total  28
eval/path length Mean  775
eval/path length Std  0
eval/path length Max  775
eval/path length Min  775
eval/Rewards Mean  3.32297
eval/Rewards Std  0.704363
eval/Rewards Max  5.37942
eval/Rewards Min  0.980153
eval/Returns Mean  2575.3
eval/Returns Std  0
eval/Returns Max  2575.3
eval/Returns Min  2575.3
eval/Actions Mean  0.145638
eval/Actions Std  0.597855
eval/Actions Max  0.998696
eval/Actions Min  -0.998741
eval/Num Paths  1
eval/Average Returns  2575.3
eval/normalized_score  79.7516
time/evaluation sampling (s)  0.884536
time/logging (s)  0.00300405
time/sampling batch (s)  0.264636
time/saving (s)  0.00304116
time/training (s)  4.19741
time/epoch (s)  5.35263
time/total (s)  30328.2
Epoch  -982
----------------------------------  ----------------

2022-05-10 21:35:58.470511 PDT | [0] Epoch -981 finished
----------------------------------  ----------------
epoch  -981
replay_buffer/size  999996
trainer/num train calls  20000
trainer/Policy Loss  -1.84771
trainer/Log Pis Mean  1.81452
trainer/Log Pis Std  2.53811
trainer/Log Pis Max  8.78603
trainer/Log Pis Min  -4.71758
trainer/policy/mean Mean  0.131404
trainer/policy/mean Std  0.607434
trainer/policy/mean Max  0.998794
trainer/policy/mean Min  -0.998159
trainer/policy/normal/std Mean  0.416917
trainer/policy/normal/std Std  0.191081
trainer/policy/normal/std Max  1.07305
trainer/policy/normal/std Min  0.0859782
trainer/policy/normal/log_std Mean  -1.00513
trainer/policy/normal/log_std Std  0.547044
trainer/policy/normal/log_std Max  0.0705032
trainer/policy/normal/log_std Min  -2.45366
eval/num steps total  12770
eval/num paths total  29
eval/path length Mean  495
eval/path length Std  0
eval/path length Max  495
eval/path length Min  495
eval/Rewards Mean  3.08889
eval/Rewards Std  0.771839
eval/Rewards Max  4.66859
eval/Rewards Min  0.981101
eval/Returns Mean  1529
eval/Returns Std  0
eval/Returns Max  1529
eval/Returns Min  1529
eval/Actions Mean  0.157274
eval/Actions Std  0.578174
eval/Actions Max  0.996555
eval/Actions Min  -0.998799
eval/Num Paths  1
eval/Average Returns  1529
eval/normalized_score  47.6029
time/evaluation sampling (s)  0.871546
time/logging (s)  0.00217019
time/sampling batch (s)  0.265672
time/saving (s)  0.00297397
time/training (s)  4.22769
time/epoch (s)  5.37005
time/total (s)  30333.5
Epoch  -981
----------------------------------  ----------------

2022-05-10 21:36:03.848877 PDT | [0] Epoch -980 finished
----------------------------------  ----------------
epoch  -980
replay_buffer/size  999996
trainer/num train calls  21000
trainer/Policy Loss  -1.93923
trainer/Log Pis Mean  1.97851
trainer/Log Pis Std  2.65885
trainer/Log Pis Max  8.80393
trainer/Log Pis Min  -11.4297
trainer/policy/mean Mean  0.170829
trainer/policy/mean Std  0.609231
trainer/policy/mean Max  0.998594
trainer/policy/mean Min  -0.998116
trainer/policy/normal/std Mean  0.42415
trainer/policy/normal/std Std  0.200296
trainer/policy/normal/std Max  1.07384
trainer/policy/normal/std Min  0.0850416
trainer/policy/normal/log_std Mean  -0.995536
trainer/policy/normal/log_std Std  0.562083
trainer/policy/normal/log_std Max  0.0712414
trainer/policy/normal/log_std Min  -2.46461
eval/num steps total  13338
eval/num paths total  30
eval/path length Mean  568
eval/path length Std  0
eval/path length Max  568
eval/path length Min  568
eval/Rewards Mean  3.18491
eval/Rewards Std  0.794137
eval/Rewards Max  4.77485
eval/Rewards Min  0.983782
eval/Returns Mean  1809.03
eval/Returns Std  0
eval/Returns Max  1809.03
eval/Returns Min  1809.03
eval/Actions Mean  0.160136
eval/Actions Std  0.590965
eval/Actions Max  0.998408
eval/Actions Min  -0.998736
eval/Num Paths  1
eval/Average Returns  1809.03
eval/normalized_score  56.2072
time/evaluation sampling (s)  0.871068
time/logging (s)  0.00245843
time/sampling batch (s)  0.265573
time/saving (s)  0.00296852
time/training (s)  4.21773
time/epoch (s)  5.3598
time/total (s)  30338.9
Epoch  -980
----------------------------------  ----------------

2022-05-10 21:36:09.223342 PDT | [0] Epoch -979 finished
----------------------------------  ----------------
epoch  -979
replay_buffer/size  999996
trainer/num train calls  22000
trainer/Policy Loss  -1.71901
trainer/Log Pis Mean  1.7885
trainer/Log Pis Std  2.48254
trainer/Log Pis Max  9.33542
trainer/Log Pis Min  -4.13135
trainer/policy/mean Mean  0.17962
trainer/policy/mean Std  0.598194
trainer/policy/mean Max  0.995958
trainer/policy/mean Min  -0.997603
trainer/policy/normal/std Mean  0.419066
trainer/policy/normal/std Std  0.196295
trainer/policy/normal/std Max  1.18349
trainer/policy/normal/std Min  0.0827858
trainer/policy/normal/log_std Mean  -1.00725
trainer/policy/normal/log_std Std  0.564218
trainer/policy/normal/log_std Max  0.168472
trainer/policy/normal/log_std Min  -2.4915
eval/num steps total  13912
eval/num paths total  31
eval/path length Mean  574
eval/path length Std  0
eval/path length Max  574
eval/path length Min  574
eval/Rewards Mean  3.16596
eval/Rewards Std  0.795853
eval/Rewards Max  4.94182
eval/Rewards Min  0.979002
eval/Returns Mean  1817.26
eval/Returns Std  0
eval/Returns Max  1817.26
eval/Returns Min  1817.26
eval/Actions Mean  0.174934
eval/Actions Std  0.582561
eval/Actions Max  0.997014
eval/Actions Min  -0.999034
eval/Num Paths  1
eval/Average Returns  1817.26
eval/normalized_score  56.46
time/evaluation sampling (s)  0.881476
time/logging (s)  0.00236534
time/sampling batch (s)  0.263193
time/saving (s)  0.00293467
time/training (s)  4.20565
time/epoch (s)  5.35562
time/total (s)  30344.2
Epoch  -979
----------------------------------  ----------------

2022-05-10 21:36:14.650957 PDT | [0] Epoch -978 finished
----------------------------------  ----------------
epoch  -978
replay_buffer/size  999996
trainer/num train calls  23000
trainer/Policy Loss  -1.69926
trainer/Log Pis Mean  1.78186
trainer/Log Pis Std  2.51326
trainer/Log Pis Max  8.91996
trainer/Log Pis Min  -5.30229
trainer/policy/mean Mean  0.150122
trainer/policy/mean Std  0.601016
trainer/policy/mean Max  0.997112
trainer/policy/mean Min  -0.99764
trainer/policy/normal/std Mean  0.416824
trainer/policy/normal/std Std  0.188867
trainer/policy/normal/std Max  1.08886
trainer/policy/normal/std Min  0.078996
trainer/policy/normal/log_std Mean  -1.00528
trainer/policy/normal/log_std Std  0.549592
trainer/policy/normal/log_std Max  0.0851271
trainer/policy/normal/log_std Min  -2.53836
eval/num steps total  14571
eval/num paths total  32
eval/path length Mean  659
eval/path length Std  0
eval/path length Max  659
eval/path length Min  659
eval/Rewards Mean  3.19452
eval/Rewards Std  0.769292
eval/Rewards Max  4.79419
eval/Rewards Min  1.01381
eval/Returns Mean  2105.19
eval/Returns Std  0
eval/Returns Max  2105.19
eval/Returns Min  2105.19
eval/Actions Mean  0.17143
eval/Actions Std  0.590727
eval/Actions Max  0.998537
eval/Actions Min  -0.998732
eval/Num Paths  1
eval/Average Returns  2105.19
eval/normalized_score  65.3069
time/evaluation sampling (s)  0.872076
time/logging (s)  0.0030485
time/sampling batch (s)  0.265905
time/saving (s)  0.00347731
time/training (s)  4.26477
time/epoch (s)  5.40927
time/total (s)  30349.7
Epoch  -978
----------------------------------  ----------------

2022-05-10 21:36:20.009450 PDT | [0] Epoch -977 finished
----------------------------------  ----------------
epoch  -977
replay_buffer/size  999996
trainer/num train calls  24000
trainer/Policy Loss  -1.94765
trainer/Log Pis Mean  1.9477
trainer/Log Pis Std  2.57917
trainer/Log Pis Max  8.67049
trainer/Log Pis Min  -4.49015
trainer/policy/mean Mean  0.159724
trainer/policy/mean Std  0.619252
trainer/policy/mean Max  0.996771
trainer/policy/mean Min  -0.997566
trainer/policy/normal/std Mean  0.423696
trainer/policy/normal/std Std  0.193855
trainer/policy/normal/std Max  1.07933
trainer/policy/normal/std Min  0.0905117
trainer/policy/normal/log_std Mean  -0.988422
trainer/policy/normal/log_std Std  0.545834
trainer/policy/normal/log_std Max  0.07634
trainer/policy/normal/log_std Min  -2.40228
eval/num steps total  15104
eval/num paths total  33
eval/path length Mean  533
eval/path length Std  0
eval/path length Max  533
eval/path length Min  533
eval/Rewards Mean  3.16547
eval/Rewards Std  0.843706
eval/Rewards Max  5.36648
eval/Rewards Min  1.00872
eval/Returns Mean  1687.2
eval/Returns Std  0
eval/Returns Max  1687.2
eval/Returns Min  1687.2
eval/Actions Mean  0.155481
eval/Actions Std  0.5712
eval/Actions Max  0.998248
eval/Actions Min  -0.998349
eval/Num Paths  1
eval/Average Returns  1687.2
eval/normalized_score  52.4637
time/evaluation sampling (s)  0.872013
time/logging (s)  0.00244325
time/sampling batch (s)  0.264607
time/saving (s)  0.00304254
time/training (s)  4.19662
time/epoch (s)  5.33872
time/total (s)  30355
Epoch  -977
----------------------------------  ----------------

2022-05-10 21:36:25.364274 PDT | [0] Epoch -976 finished
----------------------------------  ----------------
epoch  -976
replay_buffer/size  999996
trainer/num train calls  25000
trainer/Policy Loss -1.78098 trainer/Log Pis Mean 1.62933 trainer/Log Pis Std 2.38296 trainer/Log Pis Max 8.63878 trainer/Log Pis Min -5.05951 trainer/policy/mean Mean 0.133365 trainer/policy/mean Std 0.600899 trainer/policy/mean Max 0.998476 trainer/policy/mean Min -0.99785 trainer/policy/normal/std Mean 0.429283 trainer/policy/normal/std Std 0.203662 trainer/policy/normal/std Max 1.58992 trainer/policy/normal/std Min 0.0901352 trainer/policy/normal/log_std Mean -0.980305 trainer/policy/normal/log_std Std 0.552299 trainer/policy/normal/log_std Max 0.463685 trainer/policy/normal/log_std Min -2.40644 eval/num steps total 15639 eval/num paths total 34 eval/path length Mean 535 eval/path length Std 0 eval/path length Max 535 eval/path length Min 535 eval/Rewards Mean 3.17878 eval/Rewards Std 0.844257 eval/Rewards Max 5.53906 eval/Rewards Min 0.977045 eval/Returns Mean 1700.65 eval/Returns Std 0 eval/Returns Max 1700.65 eval/Returns Min 1700.65 eval/Actions Mean 0.165781 eval/Actions Std 0.580564 eval/Actions Max 0.998243 eval/Actions Min -0.998431 eval/Num Paths 1 eval/Average Returns 1700.65 eval/normalized_score 52.877 time/evaluation sampling (s) 0.894551 time/logging (s) 0.00237123 time/sampling batch (s) 0.264546 time/saving (s) 0.00310259 time/training (s) 4.17114 time/epoch (s) 5.33571 time/total (s) 30360.3 Epoch -976 ---------------------------------- --------------- 2022-05-10 21:36:30.748943 PDT | [0] Epoch -975 finished ---------------------------------- --------------- epoch -975 replay_buffer/size 999996 trainer/num train calls 26000 trainer/Policy Loss -2.04966 trainer/Log Pis Mean 1.97108 trainer/Log Pis Std 2.5666 trainer/Log Pis Max 10.7818 trainer/Log Pis Min -5.7106 trainer/policy/mean Mean 0.128208 trainer/policy/mean Std 0.617331 trainer/policy/mean Max 0.999123 trainer/policy/mean Min -0.99781 trainer/policy/normal/std Mean 0.423902 trainer/policy/normal/std Std 0.199637 trainer/policy/normal/std Max 1.13133 trainer/policy/normal/std Min 
0.0786556 trainer/policy/normal/log_std Mean -0.996398 trainer/policy/normal/log_std Std 0.563939 trainer/policy/normal/log_std Max 0.123398 trainer/policy/normal/log_std Min -2.54268 eval/num steps total 16205 eval/num paths total 35 eval/path length Mean 566 eval/path length Std 0 eval/path length Max 566 eval/path length Min 566 eval/Rewards Mean 3.20399 eval/Rewards Std 0.822809 eval/Rewards Max 4.72614 eval/Rewards Min 0.985689 eval/Returns Mean 1813.46 eval/Returns Std 0 eval/Returns Max 1813.46 eval/Returns Min 1813.46 eval/Actions Mean 0.163953 eval/Actions Std 0.586853 eval/Actions Max 0.998373 eval/Actions Min -0.998729 eval/Num Paths 1 eval/Average Returns 1813.46 eval/normalized_score 56.3433 time/evaluation sampling (s) 0.907965 time/logging (s) 0.00256448 time/sampling batch (s) 0.264705 time/saving (s) 0.00311143 time/training (s) 4.18765 time/epoch (s) 5.366 time/total (s) 30365.7 Epoch -975 ---------------------------------- --------------- 2022-05-10 21:36:36.133809 PDT | [0] Epoch -974 finished ---------------------------------- --------------- epoch -974 replay_buffer/size 999996 trainer/num train calls 27000 trainer/Policy Loss -1.90855 trainer/Log Pis Mean 1.73933 trainer/Log Pis Std 2.30257 trainer/Log Pis Max 8.10579 trainer/Log Pis Min -3.96292 trainer/policy/mean Mean 0.116122 trainer/policy/mean Std 0.599031 trainer/policy/mean Max 0.998795 trainer/policy/mean Min -0.998662 trainer/policy/normal/std Mean 0.404428 trainer/policy/normal/std Std 0.189591 trainer/policy/normal/std Max 1.08213 trainer/policy/normal/std Min 0.0830608 trainer/policy/normal/log_std Mean -1.03917 trainer/policy/normal/log_std Std 0.552174 trainer/policy/normal/log_std Max 0.0789274 trainer/policy/normal/log_std Min -2.48818 eval/num steps total 16775 eval/num paths total 36 eval/path length Mean 570 eval/path length Std 0 eval/path length Max 570 eval/path length Min 570 eval/Rewards Mean 3.22123 eval/Rewards Std 0.768955 eval/Rewards Max 4.78367 eval/Rewards Min 
0.97755 eval/Returns Mean 1836.1 eval/Returns Std 0 eval/Returns Max 1836.1 eval/Returns Min 1836.1 eval/Actions Mean 0.14766 eval/Actions Std 0.594147 eval/Actions Max 0.998421 eval/Actions Min -0.998999 eval/Num Paths 1 eval/Average Returns 1836.1 eval/normalized_score 57.039 time/evaluation sampling (s) 0.912586 time/logging (s) 0.00258371 time/sampling batch (s) 0.264955 time/saving (s) 0.00299059 time/training (s) 4.18287 time/epoch (s) 5.36599 time/total (s) 30371.1 Epoch -974 ---------------------------------- --------------- 2022-05-10 21:36:41.491777 PDT | [0] Epoch -973 finished ---------------------------------- --------------- epoch -973 replay_buffer/size 999996 trainer/num train calls 28000 trainer/Policy Loss -1.97436 trainer/Log Pis Mean 2.12692 trainer/Log Pis Std 2.54208 trainer/Log Pis Max 10.2994 trainer/Log Pis Min -5.72743 trainer/policy/mean Mean 0.107064 trainer/policy/mean Std 0.61902 trainer/policy/mean Max 0.996584 trainer/policy/mean Min -0.998301 trainer/policy/normal/std Mean 0.41142 trainer/policy/normal/std Std 0.187658 trainer/policy/normal/std Max 1.19561 trainer/policy/normal/std Min 0.0860483 trainer/policy/normal/log_std Mean -1.01583 trainer/policy/normal/log_std Std 0.540302 trainer/policy/normal/log_std Max 0.178655 trainer/policy/normal/log_std Min -2.45285 eval/num steps total 17279 eval/num paths total 37 eval/path length Mean 504 eval/path length Std 0 eval/path length Max 504 eval/path length Min 504 eval/Rewards Mean 3.10545 eval/Rewards Std 0.778876 eval/Rewards Max 4.80763 eval/Rewards Min 1.01254 eval/Returns Mean 1565.15 eval/Returns Std 0 eval/Returns Max 1565.15 eval/Returns Min 1565.15 eval/Actions Mean 0.168197 eval/Actions Std 0.585289 eval/Actions Max 0.998694 eval/Actions Min -0.999086 eval/Num Paths 1 eval/Average Returns 1565.15 eval/normalized_score 48.7136 time/evaluation sampling (s) 0.902831 time/logging (s) 0.00241571 time/sampling batch (s) 0.263197 time/saving (s) 0.00315836 time/training (s) 4.16718 
time/epoch (s) 5.33878 time/total (s) 30376.4 Epoch -973 ---------------------------------- --------------- 2022-05-10 21:36:46.880930 PDT | [0] Epoch -972 finished ---------------------------------- --------------- epoch -972 replay_buffer/size 999996 trainer/num train calls 29000 trainer/Policy Loss -1.85426 trainer/Log Pis Mean 1.81028 trainer/Log Pis Std 2.63417 trainer/Log Pis Max 9.89343 trainer/Log Pis Min -6.92771 trainer/policy/mean Mean 0.111503 trainer/policy/mean Std 0.609174 trainer/policy/mean Max 0.997634 trainer/policy/mean Min -0.996941 trainer/policy/normal/std Mean 0.408463 trainer/policy/normal/std Std 0.18464 trainer/policy/normal/std Max 1.03353 trainer/policy/normal/std Min 0.0782091 trainer/policy/normal/log_std Mean -1.02241 trainer/policy/normal/log_std Std 0.539373 trainer/policy/normal/log_std Max 0.032978 trainer/policy/normal/log_std Min -2.54837 eval/num steps total 17815 eval/num paths total 38 eval/path length Mean 536 eval/path length Std 0 eval/path length Max 536 eval/path length Min 536 eval/Rewards Mean 3.15421 eval/Rewards Std 0.849619 eval/Rewards Max 5.37192 eval/Rewards Min 1.01308 eval/Returns Mean 1690.66 eval/Returns Std 0 eval/Returns Max 1690.66 eval/Returns Min 1690.66 eval/Actions Mean 0.160559 eval/Actions Std 0.571171 eval/Actions Max 0.997622 eval/Actions Min -0.998241 eval/Num Paths 1 eval/Average Returns 1690.66 eval/normalized_score 52.5701 time/evaluation sampling (s) 0.89331 time/logging (s) 0.00249647 time/sampling batch (s) 0.267254 time/saving (s) 0.00318901 time/training (s) 4.20395 time/epoch (s) 5.3702 time/total (s) 30381.8 Epoch -972 ---------------------------------- --------------- 2022-05-10 21:36:52.253546 PDT | [0] Epoch -971 finished ---------------------------------- --------------- epoch -971 replay_buffer/size 999996 trainer/num train calls 30000 trainer/Policy Loss -2.07156 trainer/Log Pis Mean 1.85123 trainer/Log Pis Std 2.51427 trainer/Log Pis Max 8.8648 trainer/Log Pis Min -4.257 
trainer/policy/mean Mean 0.133926 trainer/policy/mean Std 0.610864 trainer/policy/mean Max 0.996787 trainer/policy/mean Min -0.997041 trainer/policy/normal/std Mean 0.409905 trainer/policy/normal/std Std 0.190664 trainer/policy/normal/std Max 0.965126 trainer/policy/normal/std Min 0.0834681 trainer/policy/normal/log_std Mean -1.02631 trainer/policy/normal/log_std Std 0.555815 trainer/policy/normal/log_std Max -0.0354966 trainer/policy/normal/log_std Min -2.48329 eval/num steps total 18745 eval/num paths total 40 eval/path length Mean 465 eval/path length Std 84 eval/path length Max 549 eval/path length Min 381 eval/Rewards Mean 3.14451 eval/Rewards Std 0.846575 eval/Rewards Max 4.80536 eval/Rewards Min 0.98052 eval/Returns Mean 1462.2 eval/Returns Std 299.968 eval/Returns Max 1762.17 eval/Returns Min 1162.23 eval/Actions Mean 0.146298 eval/Actions Std 0.577689 eval/Actions Max 0.997726 eval/Actions Min -0.998325 eval/Num Paths 2 eval/Average Returns 1462.2 eval/normalized_score 45.5504 time/evaluation sampling (s) 0.893055 time/logging (s) 0.00342828 time/sampling batch (s) 0.265035 time/saving (s) 0.00299342 time/training (s) 4.18999 time/epoch (s) 5.3545 time/total (s) 30387.2 Epoch -971 ---------------------------------- --------------- 2022-05-10 21:36:57.613004 PDT | [0] Epoch -970 finished ---------------------------------- --------------- epoch -970 replay_buffer/size 999996 trainer/num train calls 31000 trainer/Policy Loss -2.05519 trainer/Log Pis Mean 2.06655 trainer/Log Pis Std 2.48444 trainer/Log Pis Max 13.7471 trainer/Log Pis Min -5.62947 trainer/policy/mean Mean 0.164048 trainer/policy/mean Std 0.615938 trainer/policy/mean Max 0.997495 trainer/policy/mean Min -0.997964 trainer/policy/normal/std Mean 0.413917 trainer/policy/normal/std Std 0.196309 trainer/policy/normal/std Max 1.21033 trainer/policy/normal/std Min 0.0810413 trainer/policy/normal/log_std Mean -1.02093 trainer/policy/normal/log_std Std 0.564112 trainer/policy/normal/log_std Max 0.190897 
trainer/policy/normal/log_std Min -2.5128 eval/num steps total 19719 eval/num paths total 42 eval/path length Mean 487 eval/path length Std 10 eval/path length Max 497 eval/path length Min 477 eval/Rewards Mean 3.03226 eval/Rewards Std 0.78557 eval/Rewards Max 4.86656 eval/Rewards Min 0.984383 eval/Returns Mean 1476.71 eval/Returns Std 34.6022 eval/Returns Max 1511.31 eval/Returns Min 1442.11 eval/Actions Mean 0.152505 eval/Actions Std 0.568651 eval/Actions Max 0.998407 eval/Actions Min -0.997593 eval/Num Paths 2 eval/Average Returns 1476.71 eval/normalized_score 45.9963 time/evaluation sampling (s) 0.884513 time/logging (s) 0.00365738 time/sampling batch (s) 0.265831 time/saving (s) 0.00302052 time/training (s) 4.18365 time/epoch (s) 5.34067 time/total (s) 30392.5 Epoch -970 ---------------------------------- --------------- 2022-05-10 21:37:02.997310 PDT | [0] Epoch -969 finished ---------------------------------- --------------- epoch -969 replay_buffer/size 999996 trainer/num train calls 32000 trainer/Policy Loss -1.87556 trainer/Log Pis Mean 1.71449 trainer/Log Pis Std 2.80212 trainer/Log Pis Max 15.7883 trainer/Log Pis Min -6.26314 trainer/policy/mean Mean 0.14948 trainer/policy/mean Std 0.60823 trainer/policy/mean Max 0.998003 trainer/policy/mean Min -0.999246 trainer/policy/normal/std Mean 0.423875 trainer/policy/normal/std Std 0.193863 trainer/policy/normal/std Max 1.02461 trainer/policy/normal/std Min 0.0922913 trainer/policy/normal/log_std Mean -0.985702 trainer/policy/normal/log_std Std 0.538444 trainer/policy/normal/log_std Max 0.024315 trainer/policy/normal/log_std Min -2.3828 eval/num steps total 20293 eval/num paths total 43 eval/path length Mean 574 eval/path length Std 0 eval/path length Max 574 eval/path length Min 574 eval/Rewards Mean 3.1131 eval/Rewards Std 0.722994 eval/Rewards Max 4.67297 eval/Rewards Min 0.98491 eval/Returns Mean 1786.92 eval/Returns Std 0 eval/Returns Max 1786.92 eval/Returns Min 1786.92 eval/Actions Mean 0.154863 
eval/Actions Std 0.60122 eval/Actions Max 0.998599 eval/Actions Min -0.997864 eval/Num Paths 1 eval/Average Returns 1786.92 eval/normalized_score 55.5277 time/evaluation sampling (s) 0.870368 time/logging (s) 0.00266459 time/sampling batch (s) 0.266935 time/saving (s) 0.0031952 time/training (s) 4.22107 time/epoch (s) 5.36423 time/total (s) 30397.9 Epoch -969 ---------------------------------- --------------- 2022-05-10 21:37:08.399867 PDT | [0] Epoch -968 finished ---------------------------------- --------------- epoch -968 replay_buffer/size 999996 trainer/num train calls 33000 trainer/Policy Loss -1.96062 trainer/Log Pis Mean 1.82939 trainer/Log Pis Std 2.61615 trainer/Log Pis Max 13.0157 trainer/Log Pis Min -4.88207 trainer/policy/mean Mean 0.127341 trainer/policy/mean Std 0.60915 trainer/policy/mean Max 0.997009 trainer/policy/mean Min -0.998219 trainer/policy/normal/std Mean 0.415081 trainer/policy/normal/std Std 0.194216 trainer/policy/normal/std Max 1.09875 trainer/policy/normal/std Min 0.0883098 trainer/policy/normal/log_std Mean -1.01568 trainer/policy/normal/log_std Std 0.560294 trainer/policy/normal/log_std Max 0.0941696 trainer/policy/normal/log_std Min -2.4269 eval/num steps total 20819 eval/num paths total 44 eval/path length Mean 526 eval/path length Std 0 eval/path length Max 526 eval/path length Min 526 eval/Rewards Mean 3.17802 eval/Rewards Std 0.834295 eval/Rewards Max 5.45241 eval/Rewards Min 0.982918 eval/Returns Mean 1671.64 eval/Returns Std 0 eval/Returns Max 1671.64 eval/Returns Min 1671.64 eval/Actions Mean 0.159338 eval/Actions Std 0.588239 eval/Actions Max 0.998231 eval/Actions Min -0.99744 eval/Num Paths 1 eval/Average Returns 1671.64 eval/normalized_score 51.9857 time/evaluation sampling (s) 0.899097 time/logging (s) 0.00263433 time/sampling batch (s) 0.267461 time/saving (s) 0.00335894 time/training (s) 4.21041 time/epoch (s) 5.38296 time/total (s) 30403.3 Epoch -968 ---------------------------------- --------------- 2022-05-10 
21:37:13.830861 PDT | [0] Epoch -967 finished ---------------------------------- --------------- epoch -967 replay_buffer/size 999996 trainer/num train calls 34000 trainer/Policy Loss -1.90522 trainer/Log Pis Mean 1.82552 trainer/Log Pis Std 2.46 trainer/Log Pis Max 8.93632 trainer/Log Pis Min -3.43946 trainer/policy/mean Mean 0.159428 trainer/policy/mean Std 0.60352 trainer/policy/mean Max 0.996446 trainer/policy/mean Min -0.996897 trainer/policy/normal/std Mean 0.41849 trainer/policy/normal/std Std 0.192362 trainer/policy/normal/std Max 1.00974 trainer/policy/normal/std Min 0.0788205 trainer/policy/normal/log_std Mean -1.00305 trainer/policy/normal/log_std Std 0.552114 trainer/policy/normal/log_std Max 0.00969015 trainer/policy/normal/log_std Min -2.54058 eval/num steps total 21539 eval/num paths total 46 eval/path length Mean 360 eval/path length Std 55 eval/path length Max 415 eval/path length Min 305 eval/Rewards Mean 3.06027 eval/Rewards Std 0.932292 eval/Rewards Max 4.70033 eval/Rewards Min 0.978205 eval/Returns Mean 1101.7 eval/Returns Std 222.279 eval/Returns Max 1323.98 eval/Returns Min 879.418 eval/Actions Mean 0.127566 eval/Actions Std 0.566431 eval/Actions Max 0.997976 eval/Actions Min -0.998592 eval/Num Paths 2 eval/Average Returns 1101.7 eval/normalized_score 34.4736 time/evaluation sampling (s) 0.877948 time/logging (s) 0.00302337 time/sampling batch (s) 0.270131 time/saving (s) 0.00311948 time/training (s) 4.25714 time/epoch (s) 5.41136 time/total (s) 30408.7 Epoch -967 ---------------------------------- --------------- 2022-05-10 21:37:19.235782 PDT | [0] Epoch -966 finished ---------------------------------- --------------- epoch -966 replay_buffer/size 999996 trainer/num train calls 35000 trainer/Policy Loss -1.92932 trainer/Log Pis Mean 2.09652 trainer/Log Pis Std 2.69539 trainer/Log Pis Max 9.5597 trainer/Log Pis Min -4.87186 trainer/policy/mean Mean 0.12167 trainer/policy/mean Std 0.616939 trainer/policy/mean Max 0.998002 trainer/policy/mean 
Min -0.997349 trainer/policy/normal/std Mean 0.410459 trainer/policy/normal/std Std 0.187212 trainer/policy/normal/std Max 0.994876 trainer/policy/normal/std Min 0.0822762 trainer/policy/normal/log_std Mean -1.02245 trainer/policy/normal/log_std Std 0.553228 trainer/policy/normal/log_std Max -0.00513691 trainer/policy/normal/log_std Min -2.49767 eval/num steps total 22271 eval/num paths total 47 eval/path length Mean 732 eval/path length Std 0 eval/path length Max 732 eval/path length Min 732 eval/Rewards Mean 3.20833 eval/Rewards Std 0.708572 eval/Rewards Max 4.74253 eval/Rewards Min 1.00376 eval/Returns Mean 2348.5 eval/Returns Std 0 eval/Returns Max 2348.5 eval/Returns Min 2348.5 eval/Actions Mean 0.158021 eval/Actions Std 0.599464 eval/Actions Max 0.997091 eval/Actions Min -0.997565 eval/Num Paths 1 eval/Average Returns 2348.5 eval/normalized_score 72.7829 time/evaluation sampling (s) 0.889613 time/logging (s) 0.0029222 time/sampling batch (s) 0.268965 time/saving (s) 0.0030669 time/training (s) 4.22077 time/epoch (s) 5.38534 time/total (s) 30414.1 Epoch -966 ---------------------------------- --------------- 2022-05-10 21:37:24.694669 PDT | [0] Epoch -965 finished ---------------------------------- --------------- epoch -965 replay_buffer/size 999996 trainer/num train calls 36000 trainer/Policy Loss -1.8256 trainer/Log Pis Mean 1.88752 trainer/Log Pis Std 2.49378 trainer/Log Pis Max 8.32548 trainer/Log Pis Min -5.74369 trainer/policy/mean Mean 0.167462 trainer/policy/mean Std 0.609958 trainer/policy/mean Max 0.998453 trainer/policy/mean Min -0.997603 trainer/policy/normal/std Mean 0.415821 trainer/policy/normal/std Std 0.191523 trainer/policy/normal/std Max 1.17324 trainer/policy/normal/std Min 0.082807 trainer/policy/normal/log_std Mean -1.0116 trainer/policy/normal/log_std Std 0.55829 trainer/policy/normal/log_std Max 0.15977 trainer/policy/normal/log_std Min -2.49124 eval/num steps total 23183 eval/num paths total 48 eval/path length Mean 912 eval/path 
length Std 0 eval/path length Max 912 eval/path length Min 912 eval/Rewards Mean 3.24822 eval/Rewards Std 0.618188 eval/Rewards Max 4.89735 eval/Rewards Min 0.983912 eval/Returns Mean 2962.38 eval/Returns Std 0 eval/Returns Max 2962.38 eval/Returns Min 2962.38 eval/Actions Mean 0.1695 eval/Actions Std 0.613207 eval/Actions Max 0.998092 eval/Actions Min -0.997458 eval/Num Paths 1 eval/Average Returns 2962.38 eval/normalized_score 91.645 time/evaluation sampling (s) 0.887911 time/logging (s) 0.00336294 time/sampling batch (s) 0.27063 time/saving (s) 0.0029887 time/training (s) 4.275 time/epoch (s) 5.43989 time/total (s) 30419.5 Epoch -965 ---------------------------------- --------------- 2022-05-10 21:37:30.161316 PDT | [0] Epoch -964 finished ---------------------------------- --------------- epoch -964 replay_buffer/size 999996 trainer/num train calls 37000 trainer/Policy Loss -2.10569 trainer/Log Pis Mean 2.05618 trainer/Log Pis Std 2.63345 trainer/Log Pis Max 14.4573 trainer/Log Pis Min -5.78858 trainer/policy/mean Mean 0.133977 trainer/policy/mean Std 0.621105 trainer/policy/mean Max 0.998717 trainer/policy/mean Min -0.999134 trainer/policy/normal/std Mean 0.406934 trainer/policy/normal/std Std 0.195887 trainer/policy/normal/std Max 1.437 trainer/policy/normal/std Min 0.0799171 trainer/policy/normal/log_std Mean -1.04057 trainer/policy/normal/log_std Std 0.569293 trainer/policy/normal/log_std Max 0.362559 trainer/policy/normal/log_std Min -2.52677 eval/num steps total 23732 eval/num paths total 49 eval/path length Mean 549 eval/path length Std 0 eval/path length Max 549 eval/path length Min 549 eval/Rewards Mean 3.13397 eval/Rewards Std 0.832525 eval/Rewards Max 4.68044 eval/Rewards Min 0.979416 eval/Returns Mean 1720.55 eval/Returns Std 0 eval/Returns Max 1720.55 eval/Returns Min 1720.55 eval/Actions Mean 0.152715 eval/Actions Std 0.556066 eval/Actions Max 0.997811 eval/Actions Min -0.997993 eval/Num Paths 1 eval/Average Returns 1720.55 eval/normalized_score 
53.4886 time/evaluation sampling (s) 0.955578 time/logging (s) 0.00243211 time/sampling batch (s) 0.267355 time/saving (s) 0.0030082 time/training (s) 4.21806 time/epoch (s) 5.44643 time/total (s) 30424.9 Epoch -964 ---------------------------------- --------------- 2022-05-10 21:37:35.600805 PDT | [0] Epoch -963 finished ---------------------------------- --------------- epoch -963 replay_buffer/size 999996 trainer/num train calls 38000 trainer/Policy Loss -1.82931 trainer/Log Pis Mean 1.78927 trainer/Log Pis Std 2.43954 trainer/Log Pis Max 9.45866 trainer/Log Pis Min -3.99243 trainer/policy/mean Mean 0.135919 trainer/policy/mean Std 0.595222 trainer/policy/mean Max 0.996636 trainer/policy/mean Min -0.996492 trainer/policy/normal/std Mean 0.408682 trainer/policy/normal/std Std 0.192477 trainer/policy/normal/std Max 1.06608 trainer/policy/normal/std Min 0.0794827 trainer/policy/normal/log_std Mean -1.03219 trainer/policy/normal/log_std Std 0.561272 trainer/policy/normal/log_std Max 0.063991 trainer/policy/normal/log_std Min -2.53222 eval/num steps total 24309 eval/num paths total 50 eval/path length Mean 577 eval/path length Std 0 eval/path length Max 577 eval/path length Min 577 eval/Rewards Mean 3.19391 eval/Rewards Std 0.736808 eval/Rewards Max 5.15619 eval/Rewards Min 1.00835 eval/Returns Mean 1842.88 eval/Returns Std 0 eval/Returns Max 1842.88 eval/Returns Min 1842.88 eval/Actions Mean 0.148694 eval/Actions Std 0.599288 eval/Actions Max 0.997137 eval/Actions Min -0.99702 eval/Num Paths 1 eval/Average Returns 1842.88 eval/normalized_score 57.2474 time/evaluation sampling (s) 0.91124 time/logging (s) 0.00260645 time/sampling batch (s) 0.268656 time/saving (s) 0.0032081 time/training (s) 4.23478 time/epoch (s) 5.42049 time/total (s) 30430.4 Epoch -963 ---------------------------------- --------------- 2022-05-10 21:37:41.068508 PDT | [0] Epoch -962 finished ---------------------------------- -------------- epoch -962 replay_buffer/size 999996 trainer/num train 
calls 39000 trainer/Policy Loss -1.92443 trainer/Log Pis Mean 1.80697 trainer/Log Pis Std 2.47643 trainer/Log Pis Max 9.77966 trainer/Log Pis Min -4.99797 trainer/policy/mean Mean 0.150852 trainer/policy/mean Std 0.607296 trainer/policy/mean Max 0.997376 trainer/policy/mean Min -0.997491 trainer/policy/normal/std Mean 0.410974 trainer/policy/normal/std Std 0.188226 trainer/policy/normal/std Max 1.05167 trainer/policy/normal/std Min 0.0802579 trainer/policy/normal/log_std Mean -1.02173 trainer/policy/normal/log_std Std 0.553925 trainer/policy/normal/log_std Max 0.050377 trainer/policy/normal/log_std Min -2.52251 eval/num steps total 24883 eval/num paths total 51 eval/path length Mean 574 eval/path length Std 0 eval/path length Max 574 eval/path length Min 574 eval/Rewards Mean 3.11662 eval/Rewards Std 0.711834 eval/Rewards Max 4.47918 eval/Rewards Min 1.00815 eval/Returns Mean 1788.94 eval/Returns Std 0 eval/Returns Max 1788.94 eval/Returns Min 1788.94 eval/Actions Mean 0.153059 eval/Actions Std 0.60358 eval/Actions Max 0.998286 eval/Actions Min -0.998435 eval/Num Paths 1 eval/Average Returns 1788.94 eval/normalized_score 55.5898 time/evaluation sampling (s) 0.928753 time/logging (s) 0.0026327 time/sampling batch (s) 0.2694 time/saving (s) 0.0032202 time/training (s) 4.2442 time/epoch (s) 5.44821 time/total (s) 30435.8 Epoch -962 ---------------------------------- -------------- 2022-05-10 21:37:46.487322 PDT | [0] Epoch -961 finished ---------------------------------- --------------- epoch -961 replay_buffer/size 999996 trainer/num train calls 40000 trainer/Policy Loss -2.01676 trainer/Log Pis Mean 1.99564 trainer/Log Pis Std 2.51398 trainer/Log Pis Max 9.3387 trainer/Log Pis Min -5.69652 trainer/policy/mean Mean 0.161276 trainer/policy/mean Std 0.609946 trainer/policy/mean Max 0.995263 trainer/policy/mean Min -0.998061 trainer/policy/normal/std Mean 0.411159 trainer/policy/normal/std Std 0.189647 trainer/policy/normal/std Max 1.08503 trainer/policy/normal/std Min 
0.0811495 trainer/policy/normal/log_std Mean -1.01978 trainer/policy/normal/log_std Std 0.547985 trainer/policy/normal/log_std Max 0.0816105 trainer/policy/normal/log_std Min -2.51146 eval/num steps total 25455 eval/num paths total 52 eval/path length Mean 572 eval/path length Std 0 eval/path length Max 572 eval/path length Min 572 eval/Rewards Mean 3.15032 eval/Rewards Std 0.709572 eval/Rewards Max 4.50556 eval/Rewards Min 0.994473 eval/Returns Mean 1801.98 eval/Returns Std 0 eval/Returns Max 1801.98 eval/Returns Min 1801.98 eval/Actions Mean 0.155551 eval/Actions Std 0.609991 eval/Actions Max 0.998652 eval/Actions Min -0.998522 eval/Num Paths 1 eval/Average Returns 1801.98 eval/normalized_score 55.9906 time/evaluation sampling (s) 0.892991 time/logging (s) 0.00257649 time/sampling batch (s) 0.268754 time/saving (s) 0.00319483 time/training (s) 4.23183 time/epoch (s) 5.39934 time/total (s) 30441.2 Epoch -961 ---------------------------------- --------------- 2022-05-10 21:37:51.933707 PDT | [0] Epoch -960 finished ---------------------------------- --------------- epoch -960 replay_buffer/size 999996 trainer/num train calls 41000 trainer/Policy Loss -1.77786 trainer/Log Pis Mean 1.72998 trainer/Log Pis Std 2.58518 trainer/Log Pis Max 9.38031 trainer/Log Pis Min -5.22384 trainer/policy/mean Mean 0.0997379 trainer/policy/mean Std 0.597326 trainer/policy/mean Max 0.997961 trainer/policy/mean Min -0.998287 trainer/policy/normal/std Mean 0.411452 trainer/policy/normal/std Std 0.189972 trainer/policy/normal/std Max 1.02536 trainer/policy/normal/std Min 0.0793417 trainer/policy/normal/log_std Mean -1.0204 trainer/policy/normal/log_std Std 0.551174 trainer/policy/normal/log_std Max 0.0250407 trainer/policy/normal/log_std Min -2.53399 eval/num steps total 25956 eval/num paths total 53 eval/path length Mean 501 eval/path length Std 0 eval/path length Max 501 eval/path length Min 501 eval/Rewards Mean 3.12229 eval/Rewards Std 0.749293 eval/Rewards Max 4.72357 eval/Rewards 
Min 1.00535
eval/Returns  Mean 1564.27  Std 0  Max 1564.27  Min 1564.27
eval/Actions  Mean 0.138369  Std 0.583416  Max 0.997876  Min -0.998833
eval/Num Paths 1   eval/Average Returns 1564.27   eval/normalized_score 48.6865
time/evaluation sampling (s) 0.906414   time/logging (s) 0.00239612   time/sampling batch (s) 0.269634   time/saving (s) 0.00322205   time/training (s) 4.24521   time/epoch (s) 5.42687   time/total (s) 30446.7
Epoch -960
----------------------------------  ---------------
2022-05-10 21:37:57.316849 PDT | [0] Epoch -959 finished
----------------------------------  ---------------
epoch -959   replay_buffer/size 999996   trainer/num train calls 42000   trainer/Policy Loss -2.16568
trainer/Log Pis  Mean 2.04104  Std 2.52964  Max 11.352  Min -6.30909
trainer/policy/mean  Mean 0.144721  Std 0.615544  Max 0.995163  Min -0.999432
trainer/policy/normal/std  Mean 0.408702  Std 0.186225  Max 1.06302  Min 0.0826625
trainer/policy/normal/log_std  Mean -1.0245  Std 0.546557  Max 0.0611167  Min -2.49299
eval/num steps total 26600   eval/num paths total 54
eval/path length  Mean 644  Std 0  Max 644  Min 644
eval/Rewards  Mean 3.26454  Std 0.770345  Max 4.78231  Min 0.985243
eval/Returns  Mean 2102.36  Std 0  Max 2102.36  Min 2102.36
eval/Actions  Mean 0.161842  Std 0.57943  Max 0.996645  Min -0.998883
eval/Num Paths 1   eval/Average Returns 2102.36   eval/normalized_score 65.2201
time/evaluation sampling (s) 0.897185   time/logging (s) 0.0026832   time/sampling batch (s) 0.267304   time/saving (s) 0.00299714   time/training (s) 4.1938   time/epoch (s) 5.36396   time/total (s) 30452
Epoch -959
----------------------------------  ---------------
2022-05-10 21:38:02.728820 PDT | [0] Epoch -958 finished
----------------------------------  ---------------
epoch -958   replay_buffer/size 999996   trainer/num train calls 43000   trainer/Policy Loss -1.92129
trainer/Log Pis  Mean 1.74294  Std 2.35037  Max 9.39926  Min -4.95963
trainer/policy/mean  Mean 0.127444  Std 0.599375  Max 0.997875  Min -0.998538
trainer/policy/normal/std  Mean 0.41482  Std 0.196476  Max 1.04181  Min 0.0786577
trainer/policy/normal/log_std  Mean -1.01798  Std 0.561778  Max 0.0409568  Min -2.54265
eval/num steps total 27348   eval/num paths total 55
eval/path length  Mean 748  Std 0  Max 748  Min 748
eval/Rewards  Mean 3.1995  Std 0.659714  Max 4.86006  Min 1.00284
eval/Returns  Mean 2393.23  Std 0  Max 2393.23  Min 2393.23
eval/Actions  Mean 0.160611  Std 0.604965  Max 0.99821  Min -0.997354
eval/Num Paths 1   eval/Average Returns 2393.23   eval/normalized_score 74.1572
time/evaluation sampling (s) 0.879373   time/logging (s) 0.00297968   time/sampling batch (s) 0.269588   time/saving (s) 0.00305823   time/training (s) 4.23789   time/epoch (s) 5.39288   time/total (s) 30457.4
Epoch -958
----------------------------------  ---------------
2022-05-10 21:38:08.138187 PDT | [0] Epoch -957 finished
----------------------------------  ---------------
epoch -957   replay_buffer/size 999996   trainer/num train calls 44000   trainer/Policy Loss -2.08058
trainer/Log Pis  Mean 2.03384  Std 2.52983  Max 12.8704  Min -6.06051
trainer/policy/mean  Mean 0.147602  Std 0.613677  Max 0.996923  Min -0.999029
trainer/policy/normal/std  Mean 0.402056  Std 0.189413  Max 1.53456  Min 0.0798836
trainer/policy/normal/log_std  Mean -1.04851  Std 0.561556  Max 0.428243  Min -2.52718
eval/num steps total 27855   eval/num paths total 56
eval/path length  Mean 507  Std 0  Max 507  Min 507
eval/Rewards  Mean 3.09741  Std 0.796033  Max 5.12482  Min 0.980958
eval/Returns  Mean 1570.39  Std 0  Max 1570.39  Min 1570.39
eval/Actions  Mean 0.164424  Std 0.579027  Max 0.997921  Min -0.99763
eval/Num Paths 1   eval/Average Returns 1570.39   eval/normalized_score 48.8746
time/evaluation sampling (s) 0.901168   time/logging (s) 0.00264824   time/sampling batch (s) 0.267304   time/saving (s) 0.00335102   time/training (s) 4.21528   time/epoch (s) 5.38975   time/total (s) 30462.8
Epoch -957
----------------------------------  ---------------
2022-05-10 21:38:13.529413 PDT | [0] Epoch -956 finished
----------------------------------  ---------------
epoch -956   replay_buffer/size 999996   trainer/num train calls 45000   trainer/Policy Loss -2.0301
trainer/Log Pis  Mean 2.0828  Std 2.65067  Max 12.0571  Min -4.75702
trainer/policy/mean  Mean 0.131637  Std 0.611874  Max 0.998583  Min -0.99813
trainer/policy/normal/std  Mean 0.400653  Std 0.188708  Max 0.913709  Min 0.0816755
trainer/policy/normal/log_std  Mean -1.0525  Std 0.561772  Max -0.0902427  Min -2.505
eval/num steps total 28329   eval/num paths total 57
eval/path length  Mean 474  Std 0  Max 474  Min 474
eval/Rewards  Mean 3.18538  Std 0.857356  Max 4.75176  Min 0.987578
eval/Returns  Mean 1509.87  Std 0  Max 1509.87  Min 1509.87
eval/Actions  Mean 0.151419  Std 0.593668  Max 0.998225  Min -0.998477
eval/Num Paths 1   eval/Average Returns 1509.87   eval/normalized_score 47.0152
time/evaluation sampling (s) 0.880995   time/logging (s) 0.00223317   time/sampling batch (s) 0.268861   time/saving (s) 0.00312043   time/training (s) 4.21606   time/epoch (s) 5.37127   time/total (s) 30468.2
Epoch -956
----------------------------------  ---------------
2022-05-10 21:38:18.904788 PDT | [0] Epoch -955 finished
----------------------------------  ---------------
epoch -955   replay_buffer/size 999996   trainer/num train calls 46000   trainer/Policy Loss -1.98512
trainer/Log Pis  Mean 1.94097  Std 2.49692  Max 8.78232  Min -5.40431
trainer/policy/mean  Mean 0.103015  Std 0.629136  Max 0.997169  Min -0.998236
trainer/policy/normal/std  Mean 0.408335  Std 0.189134  Max 1.11377  Min 0.0772626
trainer/policy/normal/log_std  Mean -1.03222  Std 0.563119  Max 0.107752  Min -2.56054
eval/num steps total 28842   eval/num paths total 58
eval/path length  Mean 513  Std 0  Max 513  Min 513
eval/Rewards  Mean 3.14523  Std 0.873236  Max 5.52516  Min 1.00167
eval/Returns  Mean 1613.5  Std 0  Max 1613.5  Min 1613.5
eval/Actions  Mean 0.150507  Std 0.567703  Max 0.996517  Min -0.998145
eval/Num Paths 1   eval/Average Returns 1613.5   eval/normalized_score 50.1994
time/evaluation sampling (s) 0.870886   time/logging (s) 0.00235545   time/sampling batch (s) 0.267542   time/saving (s) 0.00302587   time/training (s) 4.21257   time/epoch (s) 5.35638   time/total (s) 30473.5
Epoch -955
----------------------------------  ---------------
2022-05-10 21:38:24.265843 PDT | [0] Epoch -954 finished
----------------------------------  ---------------
epoch -954   replay_buffer/size 999996   trainer/num train calls 47000   trainer/Policy Loss -2.09891
trainer/Log Pis  Mean 2.01062  Std 2.67806  Max 9.27904  Min -4.87019
trainer/policy/mean  Mean 0.123554  Std 0.612587  Max 0.997531  Min -0.998389
trainer/policy/normal/std  Mean 0.402559  Std 0.189522  Max 0.99778  Min 0.0784461
trainer/policy/normal/log_std  Mean -1.04709  Std 0.560916  Max -0.00222202  Min -2.54534
eval/num steps total 29738   eval/num paths total 59
eval/path length  Mean 896  Std 0  Max 896  Min 896
eval/Rewards  Mean 3.27847  Std 0.710324  Max 4.7855  Min 0.980926
eval/Returns  Mean 2937.51  Std 0  Max 2937.51  Min 2937.51
eval/Actions  Mean 0.172072  Std 0.594203  Max 0.997011  Min -0.998808
eval/Num Paths 1   eval/Average Returns 2937.51   eval/normalized_score 90.8807
time/evaluation sampling (s) 0.873891   time/logging (s) 0.00340017   time/sampling batch (s) 0.265928   time/saving (s) 0.00303984   time/training (s) 4.19667   time/epoch (s) 5.34293   time/total (s) 30478.9
Epoch -954
----------------------------------  ---------------
2022-05-10
21:38:29.748215 PDT | [0] Epoch -953 finished
----------------------------------  ---------------
epoch -953   replay_buffer/size 999996   trainer/num train calls 48000   trainer/Policy Loss -1.95456
trainer/Log Pis  Mean 2.02503  Std 2.63868  Max 11.2429  Min -6.2224
trainer/policy/mean  Mean 0.140211  Std 0.621027  Max 0.998716  Min -0.998287
trainer/policy/normal/std  Mean 0.410941  Std 0.19667  Max 0.970655  Min 0.0833182
trainer/policy/normal/log_std  Mean -1.03264  Std 0.574974  Max -0.0297844  Min -2.48509
eval/num steps total 30283   eval/num paths total 60
eval/path length  Mean 545  Std 0  Max 545  Min 545
eval/Rewards  Mean 3.16671  Std 0.862108  Max 5.22171  Min 1.00458
eval/Returns  Mean 1725.86  Std 0  Max 1725.86  Min 1725.86
eval/Actions  Mean 0.15207  Std 0.573547  Max 0.998138  Min -0.999248
eval/Num Paths 1   eval/Average Returns 1725.86   eval/normalized_score 53.6516
time/evaluation sampling (s) 0.899575   time/logging (s) 0.00246336   time/sampling batch (s) 0.27456   time/saving (s) 0.00310106   time/training (s) 4.28173   time/epoch (s) 5.46143   time/total (s) 30484.4
Epoch -953
----------------------------------  ---------------
2022-05-10 21:38:35.180510 PDT | [0] Epoch -952 finished
----------------------------------  ---------------
epoch -952   replay_buffer/size 999996   trainer/num train calls 49000   trainer/Policy Loss -1.97908
trainer/Log Pis  Mean 1.95887  Std 2.52434  Max 9.76102  Min -5.31884
trainer/policy/mean  Mean 0.153668  Std 0.604424  Max 0.99673  Min -0.99768
trainer/policy/normal/std  Mean 0.402022  Std 0.190513  Max 0.975306  Min 0.0758315
trainer/policy/normal/log_std  Mean -1.05216  Std 0.570182  Max -0.025004  Min -2.57924
eval/num steps total 30776   eval/num paths total 61
eval/path length  Mean 493  Std 0  Max 493  Min 493
eval/Rewards  Mean 3.04869  Std 0.797086  Max 4.72787  Min 0.983098
eval/Returns  Mean 1503  Std 0  Max 1503  Min 1503
eval/Actions  Mean 0.153613  Std 0.564798  Max 0.997905  Min -0.995248
eval/Num Paths 1   eval/Average Returns 1503   eval/normalized_score 46.8041
time/evaluation sampling (s) 0.900376   time/logging (s) 0.00227109   time/sampling batch (s) 0.276911   time/saving (s) 0.00300472   time/training (s) 4.22999   time/epoch (s) 5.41256   time/total (s) 30489.8
Epoch -952
----------------------------------  ---------------
2022-05-10 21:38:40.637283 PDT | [0] Epoch -951 finished
----------------------------------  ---------------
epoch -951   replay_buffer/size 999996   trainer/num train calls 50000   trainer/Policy Loss -1.96137
trainer/Log Pis  Mean 1.87036  Std 2.55295  Max 14.8967  Min -5.65917
trainer/policy/mean  Mean 0.13616  Std 0.600823  Max 0.997436  Min -0.998415
trainer/policy/normal/std  Mean 0.401403  Std 0.186056  Max 1.02686  Min 0.0806219
trainer/policy/normal/log_std  Mean -1.04885  Std 0.56074  Max 0.0265014  Min -2.51798
eval/num steps total 31441   eval/num paths total 62
eval/path length  Mean 665  Std 0  Max 665  Min 665
eval/Rewards  Mean 3.17198  Std 0.698698  Max 4.64391  Min 0.996754
eval/Returns  Mean 2109.37  Std 0  Max 2109.37  Min 2109.37
eval/Actions  Mean 0.166117  Std 0.60058  Max 0.99884  Min -0.996655
eval/Num Paths 1   eval/Average Returns 2109.37   eval/normalized_score 65.4353
time/evaluation sampling (s) 0.917442   time/logging (s) 0.00274719   time/sampling batch (s) 0.272175   time/saving (s) 0.00320268   time/training (s) 4.24194   time/epoch (s) 5.43751   time/total (s) 30495.2
Epoch -951
----------------------------------  ---------------
2022-05-10 21:38:46.087684 PDT | [0] Epoch -950 finished
----------------------------------  ---------------
epoch -950   replay_buffer/size 999996   trainer/num train calls 51000   trainer/Policy Loss -2.04507
trainer/Log Pis  Mean 2.08687  Std 2.53388  Max 9.59618  Min -5.58714
trainer/policy/mean  Mean 0.151217  Std 0.61437  Max 0.998388  Min -0.998735
trainer/policy/normal/std  Mean 0.403336  Std 0.190123  Max 1.2645  Min 0.0796699
trainer/policy/normal/log_std  Mean -1.04629  Std 0.564245  Max 0.234676  Min -2.52986
eval/num steps total 32271   eval/num paths total 63
eval/path length  Mean 830  Std 0  Max 830  Min 830
eval/Rewards  Mean 3.27369  Std 0.692169  Max 4.78351  Min 0.985245
eval/Returns  Mean 2717.16  Std 0  Max 2717.16  Min 2717.16
eval/Actions  Mean 0.167505  Std 0.606933  Max 0.997734  Min -0.998513
eval/Num Paths 1   eval/Average Returns 2717.16   eval/normalized_score 84.1104
time/evaluation sampling (s) 0.91209   time/logging (s) 0.00332081   time/sampling batch (s) 0.273132   time/saving (s) 0.00299114   time/training (s) 4.23974   time/epoch (s) 5.43127   time/total (s) 30500.6
Epoch -950
----------------------------------  ---------------
2022-05-10 21:38:51.554486 PDT | [0] Epoch -949 finished
----------------------------------  ---------------
epoch -949   replay_buffer/size 999996   trainer/num train calls 52000   trainer/Policy Loss -1.78689
trainer/Log Pis  Mean 1.85056  Std 2.65203  Max 9.99964  Min -7.02866
trainer/policy/mean  Mean 0.110522  Std 0.607629  Max 0.997943  Min -0.998292
trainer/policy/normal/std  Mean 0.411551  Std 0.191569  Max 0.98576  Min 0.076341
trainer/policy/normal/log_std  Mean -1.02345  Std 0.559214  Max -0.0143419  Min -2.57255
eval/num steps total 32793   eval/num paths total 64
eval/path length  Mean 522  Std 0  Max 522  Min 522
eval/Rewards  Mean 3.12109  Std 0.841627  Max 5.40662  Min 0.995711
eval/Returns  Mean 1629.21  Std 0  Max 1629.21  Min 1629.21
eval/Actions  Mean 0.14732  Std 0.569334  Max 0.997889  Min -0.997906
eval/Num Paths 1   eval/Average Returns 1629.21   eval/normalized_score 50.682
time/evaluation sampling (s) 0.923371   time/logging (s) 0.00241232   time/sampling batch (s) 0.272883   time/saving (s) 0.0031264   time/training (s) 4.24439   time/epoch (s) 5.44619   time/total (s) 30506.1
Epoch -949
----------------------------------  ---------------
2022-05-10 21:38:57.003308 PDT | [0] Epoch -948 finished
----------------------------------  ---------------
epoch -948   replay_buffer/size 999996   trainer/num train calls
53000   trainer/Policy Loss -1.97744
trainer/Log Pis  Mean 1.95074  Std 2.59341  Max 10.0232  Min -7.94822
trainer/policy/mean  Mean 0.13974  Std 0.615814  Max 0.998383  Min -0.996984
trainer/policy/normal/std  Mean 0.405638  Std 0.190253  Max 0.930152  Min 0.0756655
trainer/policy/normal/log_std  Mean -1.03898  Std 0.559458  Max -0.0724074  Min -2.58143
eval/num steps total 33386   eval/num paths total 65
eval/path length  Mean 593  Std 0  Max 593  Min 593
eval/Rewards  Mean 3.12631  Std 0.746611  Max 4.86827  Min 1.00269
eval/Returns  Mean 1853.9  Std 0  Max 1853.9  Min 1853.9
eval/Actions  Mean 0.163702  Std 0.587256  Max 0.997937  Min -0.998978
eval/Num Paths 1   eval/Average Returns 1853.9   eval/normalized_score 57.5858
time/evaluation sampling (s) 0.916401   time/logging (s) 0.00259228   time/sampling batch (s) 0.274047   time/saving (s) 0.00313905   time/training (s) 4.23311   time/epoch (s) 5.42929   time/total (s) 30511.5
Epoch -948
----------------------------------  ---------------
2022-05-10 21:39:02.476709 PDT | [0] Epoch -947 finished
----------------------------------  ---------------
epoch -947   replay_buffer/size 999996   trainer/num train calls 54000   trainer/Policy Loss -1.97531
trainer/Log Pis  Mean 1.91018  Std 2.57123  Max 10.6045  Min -4.86067
trainer/policy/mean  Mean 0.155936  Std 0.607762  Max 0.99737  Min -0.997949
trainer/policy/normal/std  Mean 0.402218  Std 0.190485  Max 0.990943  Min 0.0772257
trainer/policy/normal/log_std  Mean -1.05085  Std 0.567539  Max -0.00909777  Min -2.56102
eval/num steps total 33959   eval/num paths total 66
eval/path length  Mean 573  Std 0  Max 573  Min 573
eval/Rewards  Mean 3.17364  Std 0.79518  Max 4.74819  Min 0.98906
eval/Returns  Mean 1818.49  Std 0  Max 1818.49  Min 1818.49
eval/Actions  Mean 0.16265  Std 0.584794  Max 0.997273  Min -0.998378
eval/Num Paths 1   eval/Average Returns 1818.49   eval/normalized_score 56.4979
time/evaluation sampling (s) 0.902903   time/logging (s) 0.00274007   time/sampling batch (s) 0.273697   time/saving (s) 0.00335924   time/training (s) 4.27096   time/epoch (s) 5.45366   time/total (s) 30517
Epoch -947
----------------------------------  ---------------
2022-05-10 21:39:07.905423 PDT | [0] Epoch -946 finished
----------------------------------  ---------------
epoch -946   replay_buffer/size 999996   trainer/num train calls 55000   trainer/Policy Loss -1.97057
trainer/Log Pis  Mean 1.96469  Std 2.48568  Max 8.62782  Min -5.65272
trainer/policy/mean  Mean 0.138022  Std 0.602839  Max 0.999155  Min -0.998257
trainer/policy/normal/std  Mean 0.392593  Std 0.181379  Max 0.944881  Min 0.0749846
trainer/policy/normal/log_std  Mean -1.06827  Std 0.553818  Max -0.0566966  Min -2.59047
eval/num steps total 34940   eval/num paths total 68
eval/path length  Mean 490.5  Std 3.5  Max 494  Min 487
eval/Rewards  Mean 3.08174  Std 0.777836  Max 4.72578  Min 0.979021
eval/Returns  Mean 1511.59  Std 36.0116  Max 1547.6  Min 1475.58
eval/Actions  Mean 0.157376  Std 0.584525  Max 0.998518  Min -0.997731
eval/Num Paths 2   eval/Average Returns 1511.59   eval/normalized_score 47.068
time/evaluation sampling (s) 0.895504   time/logging (s) 0.00391126   time/sampling batch (s) 0.273775   time/saving (s) 0.00334012   time/training (s) 4.23345   time/epoch (s) 5.40998   time/total (s) 30522.4
Epoch -946
----------------------------------  ---------------
2022-05-10 21:39:13.449824 PDT | [0] Epoch -945 finished
----------------------------------  ---------------
epoch -945   replay_buffer/size 999996   trainer/num train calls 56000   trainer/Policy Loss -2.04528
trainer/Log Pis  Mean 2.15528  Std 2.5196  Max 8.69704  Min -4.87161
trainer/policy/mean  Mean 0.175541  Std 0.603593  Max 0.996679  Min -0.997981
trainer/policy/normal/std  Mean 0.395638  Std 0.187281  Max 0.953092  Min 0.0776967
trainer/policy/normal/log_std  Mean -1.07021  Std 0.575801  Max -0.0480439  Min -2.55494
eval/num steps total 35588   eval/num paths total 69
eval/path length  Mean 648  Std 0  Max 648  Min 648
eval/Rewards  Mean 3.19906  Std 0.757695  Max 4.68018  Min 0.992062
eval/Returns  Mean 2072.99  Std 0  Max 2072.99  Min 2072.99
eval/Actions  Mean 0.160617  Std 0.601137  Max 0.997681  Min -0.997988
eval/Num Paths 1   eval/Average Returns 2072.99   eval/normalized_score 64.3177
time/evaluation sampling (s) 0.970482   time/logging (s) 0.00273239   time/sampling batch (s) 0.274705   time/saving (s) 0.0030902   time/training (s) 4.27219   time/epoch (s) 5.5232   time/total (s) 30527.9
Epoch -945
----------------------------------  ---------------
2022-05-10 21:39:18.911348 PDT | [0] Epoch -944 finished
----------------------------------  ---------------
epoch -944   replay_buffer/size 999996   trainer/num train calls 57000   trainer/Policy Loss -2.24684
trainer/Log Pis  Mean 1.98505  Std 2.61712  Max 12.0048  Min -7.18669
trainer/policy/mean  Mean 0.114437  Std 0.623619  Max 0.998158  Min -0.998674
trainer/policy/normal/std  Mean 0.402351  Std 0.192116  Max 0.965581  Min 0.074242
trainer/policy/normal/log_std  Mean -1.0539  Std 0.575623  Max -0.0350252  Min -2.60043
eval/num steps total 36570   eval/num paths total 71
eval/path length  Mean 491  Std 19  Max 510  Min 472
eval/Rewards  Mean 3.10601  Std 0.851536  Max 5.33738  Min 0.982776
eval/Returns  Mean 1525.05  Std 67.8448  Max 1592.9  Min 1457.21
eval/Actions  Mean 0.148074  Std 0.570326  Max 0.997858  Min -0.997969
eval/Num Paths 2   eval/Average Returns 1525.05   eval/normalized_score 47.4817
time/evaluation sampling (s) 0.887872   time/logging (s) 0.00363372   time/sampling batch (s) 0.274655   time/saving (s) 0.00302639   time/training (s) 4.2733   time/epoch (s) 5.44248   time/total (s) 30533.4
Epoch -944
----------------------------------  ---------------
2022-05-10 21:39:24.363966 PDT | [0] Epoch -943 finished
----------------------------------  ---------------
epoch -943   replay_buffer/size 999996   trainer/num train calls 58000   trainer/Policy Loss -2.11726
trainer/Log Pis  Mean 1.97889  Std 2.59552  Max 9.55029
trainer/Log Pis  Min -4.91405
trainer/policy/mean  Mean 0.162235  Std 0.609647  Max 0.99776  Min -0.998867
trainer/policy/normal/std  Mean 0.405931  Std 0.193291  Max 0.973517  Min 0.0758125
trainer/policy/normal/log_std  Mean -1.04639  Std 0.580214  Max -0.0268397  Min -2.57949
eval/num steps total 37109   eval/num paths total 72
eval/path length  Mean 539  Std 0  Max 539  Min 539
eval/Rewards  Mean 3.14303  Std 0.87552  Max 5.38508  Min 1.00273
eval/Returns  Mean 1694.09  Std 0  Max 1694.09  Min 1694.09
eval/Actions  Mean 0.158956  Std 0.572232  Max 0.99634  Min -0.99883
eval/Num Paths 1   eval/Average Returns 1694.09   eval/normalized_score 52.6756
time/evaluation sampling (s) 0.901267   time/logging (s) 0.0024423   time/sampling batch (s) 0.272141   time/saving (s) 0.00301729   time/training (s) 4.25277   time/epoch (s) 5.43163   time/total (s) 30538.8
Epoch -943
----------------------------------  ---------------
2022-05-10 21:39:29.827562 PDT | [0] Epoch -942 finished
----------------------------------  ---------------
epoch -942   replay_buffer/size 999996   trainer/num train calls 59000   trainer/Policy Loss -1.96562
trainer/Log Pis  Mean 2.00372  Std 2.46157  Max 8.40132  Min -4.68123
trainer/policy/mean  Mean 0.143817  Std 0.606491  Max 0.995972  Min -0.998292
trainer/policy/normal/std  Mean 0.401268  Std 0.188113  Max 0.9364  Min 0.0829323
trainer/policy/normal/log_std  Mean -1.04803  Std 0.553774  Max -0.0657125  Min -2.48973
eval/num steps total 38050   eval/num paths total 74
eval/path length  Mean 470.5  Std 65.5  Max 536  Min 405
eval/Rewards  Mean 3.1718  Std 0.793594  Max 5.01904  Min 0.995327
eval/Returns  Mean 1492.33  Std 242.748  Max 1735.08  Min 1249.58
eval/Actions  Mean 0.125714  Std 0.576599  Max 0.998706  Min -0.99826
eval/Num Paths 2   eval/Average Returns 1492.33   eval/normalized_score 46.4762
time/evaluation sampling (s) 0.896624   time/logging (s) 0.00355815   time/sampling batch (s) 0.275108   time/saving (s) 0.00299419   time/training (s) 4.26669   time/epoch (s) 5.44498   time/total (s) 30544.3
Epoch -942
----------------------------------  ---------------
2022-05-10 21:39:35.297449 PDT | [0] Epoch -941 finished
----------------------------------  ---------------
epoch -941   replay_buffer/size 999996   trainer/num train calls 60000   trainer/Policy Loss -2.19041
trainer/Log Pis  Mean 2.04077  Std 2.6392  Max 14.9331  Min -5.83975
trainer/policy/mean  Mean 0.143201  Std 0.620418  Max 0.997016  Min -0.998541
trainer/policy/normal/std  Mean 0.404784  Std 0.193763  Max 0.985417  Min 0.0792176
trainer/policy/normal/log_std  Mean -1.04695  Std 0.572589  Max -0.0146901  Min -2.53556
eval/num steps total 38508   eval/num paths total 75
eval/path length  Mean 458  Std 0  Max 458  Min 458
eval/Rewards  Mean 3.17557  Std 0.910039  Max 5.16055  Min 0.984418
eval/Returns  Mean 1454.41  Std 0  Max 1454.41  Min 1454.41
eval/Actions  Mean 0.135531  Std 0.568611  Max 0.997562  Min -0.998771
eval/Num Paths 1   eval/Average Returns 1454.41   eval/normalized_score 45.3111
time/evaluation sampling (s) 0.904206   time/logging (s) 0.00221132   time/sampling batch (s) 0.273853   time/saving (s) 0.00310816   time/training (s) 4.26539   time/epoch (s) 5.44877   time/total (s) 30549.7
Epoch -941
----------------------------------  ---------------
2022-05-10 21:39:40.780407 PDT | [0] Epoch -940 finished
----------------------------------  ---------------
epoch -940   replay_buffer/size 999996   trainer/num train calls 61000   trainer/Policy Loss -2.02847
trainer/Log Pis  Mean 1.87307  Std 2.50011  Max 8.00644  Min -5.07477
trainer/policy/mean  Mean 0.160192  Std 0.602556  Max 0.995295  Min -0.997587
trainer/policy/normal/std  Mean 0.395202  Std 0.183872  Max 1.00271  Min 0.0782791
trainer/policy/normal/log_std  Mean -1.06261  Std 0.554612  Max 0.00270513  Min -2.54747
eval/num steps total 39059   eval/num paths total 76
eval/path length  Mean 551  Std 0  Max 551  Min 551
eval/Rewards  Mean 3.18292  Std 0.841054  Max 4.94635  Min 0.983508
eval/Returns  Mean 1753.79  Std 0  Max 1753.79  Min 1753.79
eval/Actions  Mean 0.15444  Std 0.568532  Max 0.99662  Min -0.998602
eval/Num Paths 1   eval/Average Returns 1753.79   eval/normalized_score 54.5098
time/evaluation sampling (s) 0.940133   time/logging (s) 0.00250882   time/sampling batch (s) 0.273303   time/saving (s) 0.00322289   time/training (s) 4.24431   time/epoch (s) 5.46348   time/total (s) 30555.2
Epoch -940
----------------------------------  ---------------
2022-05-10 21:39:46.274410 PDT | [0] Epoch -939 finished
----------------------------------  ---------------
epoch -939   replay_buffer/size 999996   trainer/num train calls 62000   trainer/Policy Loss -2.04705
trainer/Log Pis  Mean 2.08531  Std 2.62323  Max 14.3581  Min -6.06838
trainer/policy/mean  Mean 0.11498  Std 0.623598  Max 0.99765  Min -0.997849
trainer/policy/normal/std  Mean 0.39713  Std 0.183526  Max 0.989421  Min 0.0810034
trainer/policy/normal/log_std  Mean -1.05885  Std 0.560235  Max -0.0106356  Min -2.51326
eval/num steps total 40033   eval/num paths total 78
eval/path length  Mean 487  Std 76  Max 563  Min 411
eval/Rewards  Mean 3.1596  Std 0.777212  Max 4.74534  Min 0.977869
eval/Returns  Mean 1538.73  Std 282.471  Max 1821.2  Min 1256.26
eval/Actions  Mean 0.14987  Std 0.59505  Max 0.998554  Min -0.998435
eval/Num Paths 2   eval/Average Returns 1538.73   eval/normalized_score 47.9018
time/evaluation sampling (s) 0.943554   time/logging (s) 0.0036638   time/sampling batch (s) 0.27374   time/saving (s) 0.00309411   time/training (s) 4.25125   time/epoch (s) 5.4753   time/total (s) 30560.7
Epoch -939
----------------------------------  ---------------
2022-05-10 21:39:51.772262 PDT | [0] Epoch -938 finished
----------------------------------  ---------------
epoch -938   replay_buffer/size 999996   trainer/num train calls 63000   trainer/Policy Loss -2.35837
trainer/Log Pis  Mean 2.14958  Std 2.64668  Max 9.82825  Min -5.56848
trainer/policy/mean  Mean 0.139229  Std
0.622001
trainer/policy/mean  Max 0.997178  Min -0.998659
trainer/policy/normal/std  Mean 0.397738  Std 0.186102  Max 0.916673  Min 0.0803614
trainer/policy/normal/log_std  Mean -1.05664  Std 0.554545  Max -0.0870047  Min -2.52122
eval/num steps total 40530   eval/num paths total 79
eval/path length  Mean 497  Std 0  Max 497  Min 497
eval/Rewards  Mean 3.07572  Std 0.786195  Max 4.75979  Min 0.977872
eval/Returns  Mean 1528.63  Std 0  Max 1528.63  Min 1528.63
eval/Actions  Mean 0.154325  Std 0.577941  Max 0.997776  Min -0.998988
eval/Num Paths 1   eval/Average Returns 1528.63   eval/normalized_score 47.5916
time/evaluation sampling (s) 0.930442   time/logging (s) 0.00235175   time/sampling batch (s) 0.273759   time/saving (s) 0.00315443   time/training (s) 4.26687   time/epoch (s) 5.47657   time/total (s) 30566.1
Epoch -938
----------------------------------  ---------------
2022-05-10 21:39:57.250005 PDT | [0] Epoch -937 finished
----------------------------------  ---------------
epoch -937   replay_buffer/size 999996   trainer/num train calls 64000   trainer/Policy Loss -1.95773
trainer/Log Pis  Mean 1.98766  Std 2.46347  Max 10.1866  Min -3.64057
trainer/policy/mean  Mean 0.162178  Std 0.609787  Max 0.99523  Min -0.99859
trainer/policy/normal/std  Mean 0.401597  Std 0.189207  Max 0.969169  Min 0.0773585
trainer/policy/normal/log_std  Mean -1.05275  Std 0.569335  Max -0.0313164  Min -2.55931
eval/num steps total 41185   eval/num paths total 80
eval/path length  Mean 655  Std 0  Max 655  Min 655
eval/Rewards  Mean 3.23959  Std 0.682364  Max 4.89083  Min 0.997829
eval/Returns  Mean 2121.93  Std 0  Max 2121.93  Min 2121.93
eval/Actions  Mean 0.147647  Std 0.598945  Max 0.995825  Min -0.997954
eval/Num Paths 1   eval/Average Returns 2121.93   eval/normalized_score 65.8214
time/evaluation sampling (s) 0.914358   time/logging (s) 0.00275858   time/sampling batch (s) 0.272598   time/saving (s) 0.00319702   time/training (s) 4.26545   time/epoch (s) 5.45836   time/total (s) 30571.6
Epoch -937
----------------------------------  ---------------
2022-05-10 21:40:02.709510 PDT | [0] Epoch -936 finished
----------------------------------  ---------------
epoch -936   replay_buffer/size 999996   trainer/num train calls 65000   trainer/Policy Loss -2.14865
trainer/Log Pis  Mean 2.14586  Std 2.7275  Max 10.1448  Min -5.83599
trainer/policy/mean  Mean 0.149014  Std 0.621526  Max 0.99797  Min -0.99797
trainer/policy/normal/std  Mean 0.403351  Std 0.188846  Max 0.97149  Min 0.0761114
trainer/policy/normal/log_std  Mean -1.04668  Std 0.567578  Max -0.0289247  Min -2.57556
eval/num steps total 41723   eval/num paths total 81
eval/path length  Mean 538  Std 0  Max 538  Min 538
eval/Rewards  Mean 3.22842  Std 0.840937  Max 5.473  Min 0.994028
eval/Returns  Mean 1736.89  Std 0  Max 1736.89  Min 1736.89
eval/Actions  Mean 0.158271  Std 0.583369  Max 0.998245  Min -0.998244
eval/Num Paths 1   eval/Average Returns 1736.89   eval/normalized_score 53.9906
time/evaluation sampling (s) 0.894478   time/logging (s) 0.00275098   time/sampling batch (s) 0.273724   time/saving (s) 0.00337904   time/training (s) 4.26527   time/epoch (s) 5.4396   time/total (s) 30577
Epoch -936
----------------------------------  ---------------
2022-05-10 21:40:08.163317 PDT | [0] Epoch -935 finished
----------------------------------  ---------------
epoch -935   replay_buffer/size 999996   trainer/num train calls 66000   trainer/Policy Loss -2.16567
trainer/Log Pis  Mean 2.09659  Std 2.61497  Max 14.1477  Min -4.14031
trainer/policy/mean  Mean 0.140458  Std 0.617689  Max 0.995893  Min -0.998708
trainer/policy/normal/std  Mean 0.399915  Std 0.183587  Max 0.948483  Min 0.0828135
trainer/policy/normal/log_std  Mean -1.04877  Std 0.551651  Max -0.0528914  Min -2.49116
eval/num steps total 42271   eval/num paths total 82
eval/path length  Mean 548  Std 0  Max 548  Min 548
eval/Rewards  Mean 3.16171  Std 0.85356  Max 4.913  Min 0.982621
eval/Returns  Mean 1732.61  Std 0  Max 1732.61  Min 1732.61
eval/Actions  Mean 0.149004  Std 0.555762  Max 0.997396  Min -0.998157
eval/Num Paths 1   eval/Average Returns 1732.61   eval/normalized_score 53.8592
time/evaluation sampling (s) 0.897857   time/logging (s) 0.00243995   time/sampling batch (s) 0.273155   time/saving (s) 0.00326512   time/training (s) 4.2565   time/epoch (s) 5.43322   time/total (s) 30582.5
Epoch -935
----------------------------------  ---------------
2022-05-10 21:40:13.610003 PDT | [0] Epoch -934 finished
----------------------------------  ---------------
epoch -934   replay_buffer/size 999996   trainer/num train calls 67000   trainer/Policy Loss -2.12841
trainer/Log Pis  Mean 2.26754  Std 2.69893  Max 14.621  Min -5.11456
trainer/policy/mean  Mean 0.131175  Std 0.625969  Max 0.997216  Min -0.999482
trainer/policy/normal/std  Mean 0.401184  Std 0.184655  Max 0.97764  Min 0.0819474
trainer/policy/normal/log_std  Mean -1.04696  Std 0.555058  Max -0.0226139  Min -2.50168
eval/num steps total 42773   eval/num paths total 83
eval/path length  Mean 502  Std 0  Max 502  Min 502
eval/Rewards  Mean 3.16588  Std 0.739653  Max 4.69328  Min 0.978302
eval/Returns  Mean 1589.27  Std 0  Max 1589.27  Min 1589.27
eval/Actions  Mean 0.155484  Std 0.594539  Max 0.997235  Min -0.998006
eval/Num Paths 1   eval/Average Returns 1589.27   eval/normalized_score 49.4549
time/evaluation sampling (s) 0.899204   time/logging (s) 0.00233719   time/sampling batch (s) 0.272977   time/saving (s) 0.00309309   time/training (s) 4.24895   time/epoch (s) 5.42656   time/total (s) 30587.9
Epoch -934
----------------------------------  ---------------
2022-05-10 21:40:19.073314 PDT | [0] Epoch -933 finished
----------------------------------  ---------------
epoch -933   replay_buffer/size 999996   trainer/num train calls 68000   trainer/Policy Loss -2.13139
trainer/Log Pis  Mean 2.26286  Std 2.46837  Max 9.73113  Min -4.86874
trainer/policy/mean  Mean 0.129608  Std 0.625207  Max 0.997709  Min -0.997572
trainer/policy/normal/std  Mean
0.399789 trainer/policy/normal/std Std 0.193837 trainer/policy/normal/std Max 0.987438 trainer/policy/normal/std Min 0.080852 trainer/policy/normal/log_std Mean -1.06202 trainer/policy/normal/log_std Std 0.576176 trainer/policy/normal/log_std Max -0.0126418 trainer/policy/normal/log_std Min -2.51513 eval/num steps total 43706 eval/num paths total 85 eval/path length Mean 466.5 eval/path length Std 129.5 eval/path length Max 596 eval/path length Min 337 eval/Rewards Mean 3.05717 eval/Rewards Std 0.940846 eval/Rewards Max 4.71646 eval/Rewards Min 0.990302 eval/Returns Mean 1426.17 eval/Returns Std 401.493 eval/Returns Max 1827.66 eval/Returns Min 1024.68 eval/Actions Mean 0.130528 eval/Actions Std 0.549937 eval/Actions Max 0.998162 eval/Actions Min -0.998392 eval/Num Paths 2 eval/Average Returns 1426.17 eval/normalized_score 44.4434 time/evaluation sampling (s) 0.890632 time/logging (s) 0.00349152 time/sampling batch (s) 0.273687 time/saving (s) 0.00298153 time/training (s) 4.27383 time/epoch (s) 5.44462 time/total (s) 30593.4 Epoch -933 ---------------------------------- --------------- 2022-05-10 21:40:24.536563 PDT | [0] Epoch -932 finished ---------------------------------- --------------- epoch -932 replay_buffer/size 999996 trainer/num train calls 69000 trainer/Policy Loss -2.07323 trainer/Log Pis Mean 2.00574 trainer/Log Pis Std 2.47186 trainer/Log Pis Max 9.22553 trainer/Log Pis Min -4.1912 trainer/policy/mean Mean 0.117397 trainer/policy/mean Std 0.623235 trainer/policy/mean Max 0.996616 trainer/policy/mean Min -0.99642 trainer/policy/normal/std Mean 0.404211 trainer/policy/normal/std Std 0.194077 trainer/policy/normal/std Max 1.19961 trainer/policy/normal/std Min 0.0815272 trainer/policy/normal/log_std Mean -1.05085 trainer/policy/normal/log_std Std 0.579025 trainer/policy/normal/log_std Max 0.181996 trainer/policy/normal/log_std Min -2.50682 eval/num steps total 44584 eval/num paths total 87 eval/path length Mean 439 eval/path length Std 33 eval/path 
length Max 472 eval/path length Min 406 eval/Rewards Mean 3.17474 eval/Rewards Std 0.827182 eval/Rewards Max 4.79178 eval/Rewards Min 0.9783 eval/Returns Mean 1393.71 eval/Returns Std 136.096 eval/Returns Max 1529.81 eval/Returns Min 1257.61 eval/Actions Mean 0.13315 eval/Actions Std 0.575993 eval/Actions Max 0.997409 eval/Actions Min -0.999444 eval/Num Paths 2 eval/Average Returns 1393.71 eval/normalized_score 43.446 time/evaluation sampling (s) 0.896174 time/logging (s) 0.00340093 time/sampling batch (s) 0.273238 time/saving (s) 0.00304741 time/training (s) 4.26754 time/epoch (s) 5.4434 time/total (s) 30598.8 Epoch -932 ---------------------------------- --------------- 2022-05-10 21:40:30.016542 PDT | [0] Epoch -931 finished ---------------------------------- --------------- epoch -931 replay_buffer/size 999996 trainer/num train calls 70000 trainer/Policy Loss -2.18591 trainer/Log Pis Mean 2.15398 trainer/Log Pis Std 2.69554 trainer/Log Pis Max 11.0866 trainer/Log Pis Min -5.35523 trainer/policy/mean Mean 0.138109 trainer/policy/mean Std 0.629334 trainer/policy/mean Max 0.997737 trainer/policy/mean Min -0.998683 trainer/policy/normal/std Mean 0.406416 trainer/policy/normal/std Std 0.188367 trainer/policy/normal/std Max 1.03113 trainer/policy/normal/std Min 0.0789944 trainer/policy/normal/log_std Mean -1.03671 trainer/policy/normal/log_std Std 0.56227 trainer/policy/normal/log_std Max 0.0306561 trainer/policy/normal/log_std Min -2.53838 eval/num steps total 45173 eval/num paths total 88 eval/path length Mean 589 eval/path length Std 0 eval/path length Max 589 eval/path length Min 589 eval/Rewards Mean 3.1807 eval/Rewards Std 0.750105 eval/Rewards Max 5.27862 eval/Rewards Min 0.999043 eval/Returns Mean 1873.43 eval/Returns Std 0 eval/Returns Max 1873.43 eval/Returns Min 1873.43 eval/Actions Mean 0.148197 eval/Actions Std 0.601887 eval/Actions Max 0.996897 eval/Actions Min -0.997584 eval/Num Paths 1 eval/Average Returns 1873.43 eval/normalized_score 58.186 
time/evaluation sampling (s) 0.89052 time/logging (s) 0.00248548 time/sampling batch (s) 0.274069 time/saving (s) 0.00303099 time/training (s) 4.28903 time/epoch (s) 5.45914 time/total (s) 30604.3 Epoch -931 ---------------------------------- --------------- 2022-05-10 21:40:35.506206 PDT | [0] Epoch -930 finished ---------------------------------- --------------- epoch -930 replay_buffer/size 999996 trainer/num train calls 71000 trainer/Policy Loss -2.07044 trainer/Log Pis Mean 2.16744 trainer/Log Pis Std 2.55795 trainer/Log Pis Max 9.25153 trainer/Log Pis Min -4.58273 trainer/policy/mean Mean 0.120309 trainer/policy/mean Std 0.623396 trainer/policy/mean Max 0.998063 trainer/policy/mean Min -0.99696 trainer/policy/normal/std Mean 0.39971 trainer/policy/normal/std Std 0.186552 trainer/policy/normal/std Max 0.991336 trainer/policy/normal/std Min 0.079559 trainer/policy/normal/log_std Mean -1.05229 trainer/policy/normal/log_std Std 0.556889 trainer/policy/normal/log_std Max -0.00870164 trainer/policy/normal/log_std Min -2.53126 eval/num steps total 45704 eval/num paths total 89 eval/path length Mean 531 eval/path length Std 0 eval/path length Max 531 eval/path length Min 531 eval/Rewards Mean 3.20762 eval/Rewards Std 0.855898 eval/Rewards Max 5.39106 eval/Rewards Min 0.990922 eval/Returns Mean 1703.25 eval/Returns Std 0 eval/Returns Max 1703.25 eval/Returns Min 1703.25 eval/Actions Mean 0.148252 eval/Actions Std 0.582026 eval/Actions Max 0.998538 eval/Actions Min -0.998344 eval/Num Paths 1 eval/Average Returns 1703.25 eval/normalized_score 52.9568 time/evaluation sampling (s) 0.913941 time/logging (s) 0.00246324 time/sampling batch (s) 0.273629 time/saving (s) 0.00321486 time/training (s) 4.27648 time/epoch (s) 5.46973 time/total (s) 30609.7 Epoch -930 ---------------------------------- --------------- 2022-05-10 21:40:41.010441 PDT | [0] Epoch -929 finished ---------------------------------- --------------- epoch -929 replay_buffer/size 999996 trainer/num train 
calls 72000 trainer/Policy Loss -2.13593 trainer/Log Pis Mean 2.27398 trainer/Log Pis Std 2.62518 trainer/Log Pis Max 9.24153 trainer/Log Pis Min -4.42554 trainer/policy/mean Mean 0.126481 trainer/policy/mean Std 0.617989 trainer/policy/mean Max 0.997779 trainer/policy/mean Min -0.997748 trainer/policy/normal/std Mean 0.398966 trainer/policy/normal/std Std 0.194114 trainer/policy/normal/std Max 0.965692 trainer/policy/normal/std Min 0.0745066 trainer/policy/normal/log_std Mean -1.06641 trainer/policy/normal/log_std Std 0.582653 trainer/policy/normal/log_std Max -0.0349102 trainer/policy/normal/log_std Min -2.59687 eval/num steps total 46294 eval/num paths total 90 eval/path length Mean 590 eval/path length Std 0 eval/path length Max 590 eval/path length Min 590 eval/Rewards Mean 3.21368 eval/Rewards Std 0.746862 eval/Rewards Max 5.06183 eval/Rewards Min 0.982794 eval/Returns Mean 1896.07 eval/Returns Std 0 eval/Returns Max 1896.07 eval/Returns Min 1896.07 eval/Actions Mean 0.164753 eval/Actions Std 0.597661 eval/Actions Max 0.997761 eval/Actions Min -0.998585 eval/Num Paths 1 eval/Average Returns 1896.07 eval/normalized_score 58.8816 time/evaluation sampling (s) 0.936743 time/logging (s) 0.00258143 time/sampling batch (s) 0.273034 time/saving (s) 0.00300704 time/training (s) 4.26923 time/epoch (s) 5.48459 time/total (s) 30615.2 Epoch -929 ---------------------------------- --------------- 2022-05-10 21:40:46.490714 PDT | [0] Epoch -928 finished ---------------------------------- --------------- epoch -928 replay_buffer/size 999996 trainer/num train calls 73000 trainer/Policy Loss -2.14597 trainer/Log Pis Mean 2.07779 trainer/Log Pis Std 2.58066 trainer/Log Pis Max 10.5047 trainer/Log Pis Min -5.49193 trainer/policy/mean Mean 0.128833 trainer/policy/mean Std 0.618445 trainer/policy/mean Max 0.996229 trainer/policy/mean Min -0.998703 trainer/policy/normal/std Mean 0.398029 trainer/policy/normal/std Std 0.188957 trainer/policy/normal/std Max 1.66048 
trainer/policy/normal/std Min 0.0766982 trainer/policy/normal/log_std Mean -1.06022 trainer/policy/normal/log_std Std 0.564582 trainer/policy/normal/log_std Max 0.507106 trainer/policy/normal/log_std Min -2.56788 eval/num steps total 46875 eval/num paths total 91 eval/path length Mean 581 eval/path length Std 0 eval/path length Max 581 eval/path length Min 581 eval/Rewards Mean 3.14784 eval/Rewards Std 0.768222 eval/Rewards Max 4.90691 eval/Rewards Min 0.9814 eval/Returns Mean 1828.89 eval/Returns Std 0 eval/Returns Max 1828.89 eval/Returns Min 1828.89 eval/Actions Mean 0.16526 eval/Actions Std 0.587469 eval/Actions Max 0.997713 eval/Actions Min -0.998217 eval/Num Paths 1 eval/Average Returns 1828.89 eval/normalized_score 56.8174 time/evaluation sampling (s) 0.918036 time/logging (s) 0.00257074 time/sampling batch (s) 0.272786 time/saving (s) 0.00303469 time/training (s) 4.26398 time/epoch (s) 5.46041 time/total (s) 30620.7 Epoch -928 ---------------------------------- --------------- 2022-05-10 21:40:51.922419 PDT | [0] Epoch -927 finished ---------------------------------- -------------- epoch -927 replay_buffer/size 999996 trainer/num train calls 74000 trainer/Policy Loss -2.19166 trainer/Log Pis Mean 2.00882 trainer/Log Pis Std 2.55514 trainer/Log Pis Max 10.4577 trainer/Log Pis Min -4.87914 trainer/policy/mean Mean 0.168229 trainer/policy/mean Std 0.612007 trainer/policy/mean Max 0.997954 trainer/policy/mean Min -0.996953 trainer/policy/normal/std Mean 0.404059 trainer/policy/normal/std Std 0.185814 trainer/policy/normal/std Max 1.2094 trainer/policy/normal/std Min 0.0784644 trainer/policy/normal/log_std Mean -1.03956 trainer/policy/normal/log_std Std 0.555196 trainer/policy/normal/log_std Max 0.190127 trainer/policy/normal/log_std Min -2.54511 eval/num steps total 47354 eval/num paths total 92 eval/path length Mean 479 eval/path length Std 0 eval/path length Max 479 eval/path length Min 479 eval/Rewards Mean 3.08254 eval/Rewards Std 0.837268 eval/Rewards Max 
4.91009 eval/Rewards Min 0.99285 eval/Returns Mean 1476.54 eval/Returns Std 0 eval/Returns Max 1476.54 eval/Returns Min 1476.54 eval/Actions Mean 0.147755 eval/Actions Std 0.57492 eval/Actions Max 0.997833 eval/Actions Min -0.998213 eval/Num Paths 1 eval/Average Returns 1476.54 eval/normalized_score 45.9909 time/evaluation sampling (s) 0.915489 time/logging (s) 0.0022223 time/sampling batch (s) 0.267212 time/saving (s) 0.0029766 time/training (s) 4.22428 time/epoch (s) 5.41218 time/total (s) 30626.1 Epoch -927 ---------------------------------- -------------- 2022-05-10 21:40:57.313605 PDT | [0] Epoch -926 finished ---------------------------------- --------------- epoch -926 replay_buffer/size 999996 trainer/num train calls 75000 trainer/Policy Loss -1.97802 trainer/Log Pis Mean 2.00088 trainer/Log Pis Std 2.34684 trainer/Log Pis Max 8.34379 trainer/Log Pis Min -4.40534 trainer/policy/mean Mean 0.172937 trainer/policy/mean Std 0.614618 trainer/policy/mean Max 0.996297 trainer/policy/mean Min -0.998296 trainer/policy/normal/std Mean 0.4035 trainer/policy/normal/std Std 0.185083 trainer/policy/normal/std Max 0.906886 trainer/policy/normal/std Min 0.0800978 trainer/policy/normal/log_std Mean -1.04201 trainer/policy/normal/log_std Std 0.558455 trainer/policy/normal/log_std Max -0.0977384 trainer/policy/normal/log_std Min -2.52451 eval/num steps total 48019 eval/num paths total 93 eval/path length Mean 665 eval/path length Std 0 eval/path length Max 665 eval/path length Min 665 eval/Rewards Mean 3.15237 eval/Rewards Std 0.703598 eval/Rewards Max 4.7453 eval/Rewards Min 0.978333 eval/Returns Mean 2096.32 eval/Returns Std 0 eval/Returns Max 2096.32 eval/Returns Min 2096.32 eval/Actions Mean 0.154541 eval/Actions Std 0.59255 eval/Actions Max 0.99738 eval/Actions Min -0.999033 eval/Num Paths 1 eval/Average Returns 2096.32 eval/normalized_score 65.0345 time/evaluation sampling (s) 0.873869 time/logging (s) 0.0031373 time/sampling batch (s) 0.267072 time/saving (s) 
0.00319921 time/training (s) 4.22571 time/epoch (s) 5.37299 time/total (s) 30631.5 Epoch -926 ---------------------------------- --------------- 2022-05-10 21:41:02.863260 PDT | [0] Epoch -925 finished ---------------------------------- --------------- epoch -925 replay_buffer/size 999996 trainer/num train calls 76000 trainer/Policy Loss -1.90968 trainer/Log Pis Mean 2.02256 trainer/Log Pis Std 2.48989 trainer/Log Pis Max 8.78817 trainer/Log Pis Min -4.76043 trainer/policy/mean Mean 0.161449 trainer/policy/mean Std 0.607039 trainer/policy/mean Max 0.997644 trainer/policy/mean Min -0.997462 trainer/policy/normal/std Mean 0.401256 trainer/policy/normal/std Std 0.18588 trainer/policy/normal/std Max 0.934951 trainer/policy/normal/std Min 0.0785548 trainer/policy/normal/log_std Mean -1.04775 trainer/policy/normal/log_std Std 0.556465 trainer/policy/normal/log_std Max -0.067261 trainer/policy/normal/log_std Min -2.54396 eval/num steps total 48747 eval/num paths total 94 eval/path length Mean 728 eval/path length Std 0 eval/path length Max 728 eval/path length Min 728 eval/Rewards Mean 3.15878 eval/Rewards Std 0.657383 eval/Rewards Max 4.37513 eval/Rewards Min 0.988885 eval/Returns Mean 2299.59 eval/Returns Std 0 eval/Returns Max 2299.59 eval/Returns Min 2299.59 eval/Actions Mean 0.163592 eval/Actions Std 0.605188 eval/Actions Max 0.998159 eval/Actions Min -0.998987 eval/Num Paths 1 eval/Average Returns 2299.59 eval/normalized_score 71.2801 time/evaluation sampling (s) 0.962102 time/logging (s) 0.00318599 time/sampling batch (s) 0.274264 time/saving (s) 0.00329377 time/training (s) 4.28686 time/epoch (s) 5.5297 time/total (s) 30637 Epoch -925 ---------------------------------- --------------- 2022-05-10 21:41:08.311651 PDT | [0] Epoch -924 finished ---------------------------------- --------------- epoch -924 replay_buffer/size 999996 trainer/num train calls 77000 trainer/Policy Loss -1.93006 trainer/Log Pis Mean 1.83705 trainer/Log Pis Std 2.5619 trainer/Log Pis Max 
9.98635 trainer/Log Pis Min -6.02498 trainer/policy/mean Mean 0.14154 trainer/policy/mean Std 0.610317 trainer/policy/mean Max 0.996966 trainer/policy/mean Min -0.997663 trainer/policy/normal/std Mean 0.400076 trainer/policy/normal/std Std 0.187293 trainer/policy/normal/std Max 1.02106 trainer/policy/normal/std Min 0.0742091 trainer/policy/normal/log_std Mean -1.05431 trainer/policy/normal/log_std Std 0.56515 trainer/policy/normal/log_std Max 0.0208384 trainer/policy/normal/log_std Min -2.60087 eval/num steps total 49316 eval/num paths total 95 eval/path length Mean 569 eval/path length Std 0 eval/path length Max 569 eval/path length Min 569 eval/Rewards Mean 3.12366 eval/Rewards Std 0.746071 eval/Rewards Max 4.58426 eval/Rewards Min 0.989374 eval/Returns Mean 1777.36 eval/Returns Std 0 eval/Returns Max 1777.36 eval/Returns Min 1777.36 eval/Actions Mean 0.156012 eval/Actions Std 0.59866 eval/Actions Max 0.997679 eval/Actions Min -0.997879 eval/Num Paths 1 eval/Average Returns 1777.36 eval/normalized_score 55.2341 time/evaluation sampling (s) 0.890566 time/logging (s) 0.00244187 time/sampling batch (s) 0.27145 time/saving (s) 0.00307436 time/training (s) 4.2604 time/epoch (s) 5.42793 time/total (s) 30642.4 Epoch -924 ---------------------------------- --------------- 2022-05-10 21:41:13.765163 PDT | [0] Epoch -923 finished ---------------------------------- --------------- epoch -923 replay_buffer/size 999996 trainer/num train calls 78000 trainer/Policy Loss -1.84959 trainer/Log Pis Mean 1.78354 trainer/Log Pis Std 2.64313 trainer/Log Pis Max 14.8877 trainer/Log Pis Min -6.03554 trainer/policy/mean Mean 0.146193 trainer/policy/mean Std 0.609072 trainer/policy/mean Max 0.996835 trainer/policy/mean Min -0.99813 trainer/policy/normal/std Mean 0.402227 trainer/policy/normal/std Std 0.185683 trainer/policy/normal/std Max 0.949428 trainer/policy/normal/std Min 0.0768681 trainer/policy/normal/log_std Mean -1.04563 trainer/policy/normal/log_std Std 0.558777 
trainer/policy/normal/log_std Max -0.0518955 trainer/policy/normal/log_std Min -2.56566 eval/num steps total 49820 eval/num paths total 96 eval/path length Mean 504 eval/path length Std 0 eval/path length Max 504 eval/path length Min 504 eval/Rewards Mean 3.11573 eval/Rewards Std 0.761696 eval/Rewards Max 4.78294 eval/Rewards Min 0.99411 eval/Returns Mean 1570.33 eval/Returns Std 0 eval/Returns Max 1570.33 eval/Returns Min 1570.33 eval/Actions Mean 0.159647 eval/Actions Std 0.586116 eval/Actions Max 0.997846 eval/Actions Min -0.996146 eval/Num Paths 1 eval/Average Returns 1570.33 eval/normalized_score 48.8728 time/evaluation sampling (s) 0.895789 time/logging (s) 0.00234171 time/sampling batch (s) 0.272094 time/saving (s) 0.00304378 time/training (s) 4.26056 time/epoch (s) 5.43382 time/total (s) 30647.9 Epoch -923 ---------------------------------- --------------- 2022-05-10 21:41:19.225614 PDT | [0] Epoch -922 finished ---------------------------------- --------------- epoch -922 replay_buffer/size 999996 trainer/num train calls 79000 trainer/Policy Loss -1.98997 trainer/Log Pis Mean 2.07048 trainer/Log Pis Std 2.56752 trainer/Log Pis Max 13.5502 trainer/Log Pis Min -4.62351 trainer/policy/mean Mean 0.150286 trainer/policy/mean Std 0.60363 trainer/policy/mean Max 0.998789 trainer/policy/mean Min -0.998076 trainer/policy/normal/std Mean 0.393078 trainer/policy/normal/std Std 0.186241 trainer/policy/normal/std Max 1.07152 trainer/policy/normal/std Min 0.0713278 trainer/policy/normal/log_std Mean -1.07565 trainer/policy/normal/log_std Std 0.573419 trainer/policy/normal/log_std Max 0.0690768 trainer/policy/normal/log_std Min -2.64047 eval/num steps total 50502 eval/num paths total 97 eval/path length Mean 682 eval/path length Std 0 eval/path length Max 682 eval/path length Min 682 eval/Rewards Mean 3.26446 eval/Rewards Std 0.72138 eval/Rewards Max 5.45031 eval/Rewards Min 0.985435 eval/Returns Mean 2226.36 eval/Returns Std 0 eval/Returns Max 2226.36 eval/Returns Min 
2226.36 eval/Actions Mean 0.142765 eval/Actions Std 0.595044 eval/Actions Max 0.997891 eval/Actions Min -0.997415 eval/Num Paths 1 eval/Average Returns 2226.36 eval/normalized_score 69.0301 time/evaluation sampling (s) 0.885704 time/logging (s) 0.00272701 time/sampling batch (s) 0.278307 time/saving (s) 0.00296232 time/training (s) 4.27157 time/epoch (s) 5.44127 time/total (s) 30653.3 Epoch -922 ---------------------------------- --------------- 2022-05-10 21:41:24.708240 PDT | [0] Epoch -921 finished ---------------------------------- --------------- epoch -921 replay_buffer/size 999996 trainer/num train calls 80000 trainer/Policy Loss -2.10536 trainer/Log Pis Mean 2.26249 trainer/Log Pis Std 2.62275 trainer/Log Pis Max 9.98795 trainer/Log Pis Min -5.09859 trainer/policy/mean Mean 0.14396 trainer/policy/mean Std 0.623886 trainer/policy/mean Max 0.997385 trainer/policy/mean Min -0.995992 trainer/policy/normal/std Mean 0.395044 trainer/policy/normal/std Std 0.182674 trainer/policy/normal/std Max 1.00235 trainer/policy/normal/std Min 0.0730996 trainer/policy/normal/log_std Mean -1.06357 trainer/policy/normal/log_std Std 0.557877 trainer/policy/normal/log_std Max 0.0023471 trainer/policy/normal/log_std Min -2.61593 eval/num steps total 51033 eval/num paths total 98 eval/path length Mean 531 eval/path length Std 0 eval/path length Max 531 eval/path length Min 531 eval/Rewards Mean 3.21998 eval/Rewards Std 0.840317 eval/Rewards Max 5.40345 eval/Rewards Min 0.988376 eval/Returns Mean 1709.81 eval/Returns Std 0 eval/Returns Max 1709.81 eval/Returns Min 1709.81 eval/Actions Mean 0.153722 eval/Actions Std 0.586464 eval/Actions Max 0.997259 eval/Actions Min -0.998418 eval/Num Paths 1 eval/Average Returns 1709.81 eval/normalized_score 53.1585 time/evaluation sampling (s) 0.9003 time/logging (s) 0.00234602 time/sampling batch (s) 0.274019 time/saving (s) 0.00301279 time/training (s) 4.28282 time/epoch (s) 5.46249 time/total (s) 30658.8 Epoch -921 
---------------------------------- --------------- 2022-05-10 21:41:30.130460 PDT | [0] Epoch -920 finished ---------------------------------- --------------- epoch -920 replay_buffer/size 999996 trainer/num train calls 81000 trainer/Policy Loss -2.05387 trainer/Log Pis Mean 1.8999 trainer/Log Pis Std 2.50144 trainer/Log Pis Max 8.87919 trainer/Log Pis Min -5.71606 trainer/policy/mean Mean 0.121462 trainer/policy/mean Std 0.624866 trainer/policy/mean Max 0.997605 trainer/policy/mean Min -0.996811 trainer/policy/normal/std Mean 0.404716 trainer/policy/normal/std Std 0.190055 trainer/policy/normal/std Max 0.961185 trainer/policy/normal/std Min 0.081282 trainer/policy/normal/log_std Mean -1.04246 trainer/policy/normal/log_std Std 0.563088 trainer/policy/normal/log_std Max -0.0395884 trainer/policy/normal/log_std Min -2.50983 eval/num steps total 52033 eval/num paths total 99 eval/path length Mean 1000 eval/path length Std 0 eval/path length Max 1000 eval/path length Min 1000 eval/Rewards Mean 3.22305 eval/Rewards Std 0.561684 eval/Rewards Max 4.3488 eval/Rewards Min 0.982294 eval/Returns Mean 3223.05 eval/Returns Std 0 eval/Returns Max 3223.05 eval/Returns Min 3223.05 eval/Actions Mean 0.172595 eval/Actions Std 0.618993 eval/Actions Max 0.997299 eval/Actions Min -0.99851 eval/Num Paths 1 eval/Average Returns 3223.05 eval/normalized_score 99.6543 time/evaluation sampling (s) 0.884855 time/logging (s) 0.00362541 time/sampling batch (s) 0.271434 time/saving (s) 0.00299494 time/training (s) 4.24147 time/epoch (s) 5.40438 time/total (s) 30664.2 Epoch -920 ---------------------------------- --------------- 2022-05-10 21:41:35.529651 PDT | [0] Epoch -919 finished ---------------------------------- --------------- epoch -919 replay_buffer/size 999996 trainer/num train calls 82000 trainer/Policy Loss -2.02209 trainer/Log Pis Mean 1.95662 trainer/Log Pis Std 2.51092 trainer/Log Pis Max 9.58727 trainer/Log Pis Min -5.35518 trainer/policy/mean Mean 0.129221 trainer/policy/mean 
Std 0.611199 trainer/policy/mean Max 0.997728 trainer/policy/mean Min -0.997496 trainer/policy/normal/std Mean 0.394755 trainer/policy/normal/std Std 0.188171 trainer/policy/normal/std Max 0.950771 trainer/policy/normal/std Min 0.0727383 trainer/policy/normal/log_std Mean -1.07572 trainer/policy/normal/log_std Std 0.583126 trainer/policy/normal/log_std Max -0.0504823 trainer/policy/normal/log_std Min -2.62089 eval/num steps total 52989 eval/num paths total 101 eval/path length Mean 478 eval/path length Std 5 eval/path length Max 483 eval/path length Min 473 eval/Rewards Mean 3.18132 eval/Rewards Std 0.845671 eval/Rewards Max 4.77277 eval/Rewards Min 0.989129 eval/Returns Mean 1520.67 eval/Returns Std 8.32255 eval/Returns Max 1528.99 eval/Returns Min 1512.35 eval/Actions Mean 0.156266 eval/Actions Std 0.584767 eval/Actions Max 0.997948 eval/Actions Min -0.998482 eval/Num Paths 2 eval/Average Returns 1520.67 eval/normalized_score 47.347 time/evaluation sampling (s) 0.901835 time/logging (s) 0.00356711 time/sampling batch (s) 0.268583 time/saving (s) 0.00299351 time/training (s) 4.20258 time/epoch (s) 5.37956 time/total (s) 30669.6 Epoch -919 ---------------------------------- --------------- 2022-05-10 21:41:40.965001 PDT | [0] Epoch -918 finished ---------------------------------- --------------- epoch -918 replay_buffer/size 999996 trainer/num train calls 83000 trainer/Policy Loss -2.07852 trainer/Log Pis Mean 2.09115 trainer/Log Pis Std 2.59333 trainer/Log Pis Max 11.6139 trainer/Log Pis Min -5.88419 trainer/policy/mean Mean 0.161697 trainer/policy/mean Std 0.613314 trainer/policy/mean Max 0.997798 trainer/policy/mean Min -0.998747 trainer/policy/normal/std Mean 0.392393 trainer/policy/normal/std Std 0.184396 trainer/policy/normal/std Max 0.922959 trainer/policy/normal/std Min 0.0804874 trainer/policy/normal/log_std Mean -1.07688 trainer/policy/normal/log_std Std 0.572493 trainer/policy/normal/log_std Max -0.0801707 trainer/policy/normal/log_std Min -2.51965 
eval/num steps total 53481 eval/num paths total 102 eval/path length Mean 492 eval/path length Std 0 eval/path length Max 492 eval/path length Min 492 eval/Rewards Mean 3.06396 eval/Rewards Std 0.788278 eval/Rewards Max 4.68897 eval/Rewards Min 0.991599 eval/Returns Mean 1507.47 eval/Returns Std 0 eval/Returns Max 1507.47 eval/Returns Min 1507.47 eval/Actions Mean 0.146791 eval/Actions Std 0.574581 eval/Actions Max 0.996578 eval/Actions Min -0.998037 eval/Num Paths 1 eval/Average Returns 1507.47 eval/normalized_score 46.9413 time/evaluation sampling (s) 0.923675 time/logging (s) 0.00228329 time/sampling batch (s) 0.267752 time/saving (s) 0.00313715 time/training (s) 4.21819 time/epoch (s) 5.41503 time/total (s) 30675 Epoch -918 ---------------------------------- --------------- 2022-05-10 21:41:46.340633 PDT | [0] Epoch -917 finished ---------------------------------- --------------- epoch -917 replay_buffer/size 999996 trainer/num train calls 84000 trainer/Policy Loss -2.15425 trainer/Log Pis Mean 2.02861 trainer/Log Pis Std 2.44533 trainer/Log Pis Max 8.45276 trainer/Log Pis Min -4.39644 trainer/policy/mean Mean 0.160862 trainer/policy/mean Std 0.612949 trainer/policy/mean Max 0.996443 trainer/policy/mean Min -0.998441 trainer/policy/normal/std Mean 0.395411 trainer/policy/normal/std Std 0.180994 trainer/policy/normal/std Max 0.913322 trainer/policy/normal/std Min 0.0739363 trainer/policy/normal/log_std Mean -1.06316 trainer/policy/normal/log_std Std 0.561348 trainer/policy/normal/log_std Max -0.0906671 trainer/policy/normal/log_std Min -2.60455 eval/num steps total 54285 eval/num paths total 104 eval/path length Mean 402 eval/path length Std 98 eval/path length Max 500 eval/path length Min 304 eval/Rewards Mean 3.09563 eval/Rewards Std 0.870476 eval/Rewards Max 5.10596 eval/Rewards Min 0.981287 eval/Returns Mean 1244.44 eval/Returns Std 303.168 eval/Returns Max 1547.61 eval/Returns Min 941.274 eval/Actions Mean 0.117114 eval/Actions Std 0.563339 eval/Actions Max 
0.99888 eval/Actions Min -0.998122 eval/Num Paths 2 eval/Average Returns 1244.44 eval/normalized_score 38.8596 time/evaluation sampling (s) 0.917771 time/logging (s) 0.00313685 time/sampling batch (s) 0.264763 time/saving (s) 0.00295911 time/training (s) 4.16881 time/epoch (s) 5.35744 time/total (s) 30680.4 Epoch -917 ---------------------------------- --------------- 2022-05-10 21:41:51.733951 PDT | [0] Epoch -916 finished ---------------------------------- --------------- epoch -916 replay_buffer/size 999996 trainer/num train calls 85000 trainer/Policy Loss -1.99738 trainer/Log Pis Mean 2.05624 trainer/Log Pis Std 2.50326 trainer/Log Pis Max 11.0975 trainer/Log Pis Min -4.45488 trainer/policy/mean Mean 0.153856 trainer/policy/mean Std 0.605919 trainer/policy/mean Max 0.996623 trainer/policy/mean Min -0.997185 trainer/policy/normal/std Mean 0.394187 trainer/policy/normal/std Std 0.189416 trainer/policy/normal/std Max 1.0249 trainer/policy/normal/std Min 0.0787011 trainer/policy/normal/log_std Mean -1.07476 trainer/policy/normal/log_std Std 0.574913 trainer/policy/normal/log_std Max 0.0245954 trainer/policy/normal/log_std Min -2.5421 eval/num steps total 54839 eval/num paths total 105 eval/path length Mean 554 eval/path length Std 0 eval/path length Max 554 eval/path length Min 554 eval/Rewards Mean 3.21828 eval/Rewards Std 0.833025 eval/Rewards Max 5.09969 eval/Rewards Min 0.987074 eval/Returns Mean 1782.93 eval/Returns Std 0 eval/Returns Max 1782.93 eval/Returns Min 1782.93 eval/Actions Mean 0.14829 eval/Actions Std 0.579041 eval/Actions Max 0.997639 eval/Actions Min -0.998468 eval/Num Paths 1 eval/Average Returns 1782.93 eval/normalized_score 55.4052 time/evaluation sampling (s) 0.907258 time/logging (s) 0.00251865 time/sampling batch (s) 0.265443 time/saving (s) 0.00313755 time/training (s) 4.1952 time/epoch (s) 5.37355 time/total (s) 30685.7 Epoch -916 ---------------------------------- --------------- 2022-05-10 21:41:57.202043 PDT | [0] Epoch -915 finished 
----------------------------------  ---------------
epoch  -915
replay_buffer/size  999996
trainer/num train calls  86000
trainer/Policy Loss  -2.05592
trainer/Log Pis Mean  1.98994
trainer/Log Pis Std  2.61349
trainer/Log Pis Max  10.2534
trainer/Log Pis Min  -5.84446
trainer/policy/mean Mean  0.131409
trainer/policy/mean Std  0.616025
trainer/policy/mean Max  0.997586
trainer/policy/mean Min  -0.997964
trainer/policy/normal/std Mean  0.397999
trainer/policy/normal/std Std  0.188066
trainer/policy/normal/std Max  0.992224
trainer/policy/normal/std Min  0.0757473
trainer/policy/normal/log_std Mean  -1.06157
trainer/policy/normal/log_std Std  0.567924
trainer/policy/normal/log_std Max  -0.00780611
trainer/policy/normal/log_std Min  -2.58035
eval/num steps total  55418
eval/num paths total  106
eval/path length Mean  579
eval/path length Std  0
eval/path length Max  579
eval/path length Min  579
eval/Rewards Mean  3.16858
eval/Rewards Std  0.796265
eval/Rewards Max  4.90088
eval/Rewards Min  0.989061
eval/Returns Mean  1834.61
eval/Returns Std  0
eval/Returns Max  1834.61
eval/Returns Min  1834.61
eval/Actions Mean  0.167757
eval/Actions Std  0.58328
eval/Actions Max  0.997254
eval/Actions Min  -0.99887
eval/Num Paths  1
eval/Average Returns  1834.61
eval/normalized_score  56.993
time/evaluation sampling (s)  0.882245
time/logging (s)  0.00279071
time/sampling batch (s)  0.272972
time/saving (s)  0.00337541
time/training (s)  4.28775
time/epoch (s)  5.44913
time/total (s)  30691.2
Epoch -915
----------------------------------  ---------------
2022-05-10 21:42:02.564375 PDT | [0] Epoch -914 finished
----------------------------------  ---------------
epoch  -914
replay_buffer/size  999996
trainer/num train calls  87000
trainer/Policy Loss  -1.97795
trainer/Log Pis Mean  1.91438
trainer/Log Pis Std  2.42644
trainer/Log Pis Max  9.22827
trainer/Log Pis Min  -7.32082
trainer/policy/mean Mean  0.143209
trainer/policy/mean Std  0.614463
trainer/policy/mean Max  0.996666
trainer/policy/mean Min  -0.998199
trainer/policy/normal/std Mean  0.398382
trainer/policy/normal/std Std  0.181281
trainer/policy/normal/std Max  0.951002
trainer/policy/normal/std Min  0.0786492
trainer/policy/normal/log_std Mean  -1.05201
trainer/policy/normal/log_std Std  0.550922
trainer/policy/normal/log_std Max  -0.0502392
trainer/policy/normal/log_std Min  -2.54276
eval/num steps total  55989
eval/num paths total  107
eval/path length Mean  571
eval/path length Std  0
eval/path length Max  571
eval/path length Min  571
eval/Rewards Mean  3.17557
eval/Rewards Std  0.701881
eval/Rewards Max  4.45103
eval/Rewards Min  0.988122
eval/Returns Mean  1813.25
eval/Returns Std  0
eval/Returns Max  1813.25
eval/Returns Min  1813.25
eval/Actions Mean  0.148802
eval/Actions Std  0.609397
eval/Actions Max  0.996731
eval/Actions Min  -0.998087
eval/Num Paths  1
eval/Average Returns  1813.25
eval/normalized_score  56.3368
time/evaluation sampling (s)  0.875899
time/logging (s)  0.00246037
time/sampling batch (s)  0.266752
time/saving (s)  0.00310902
time/training (s)  4.19451
time/epoch (s)  5.34273
time/total (s)  30696.5
Epoch -914
----------------------------------  ---------------
2022-05-10 21:42:07.896392 PDT | [0] Epoch -913 finished
----------------------------------  ---------------
epoch  -913
replay_buffer/size  999996
trainer/num train calls  88000
trainer/Policy Loss  -2.06738
trainer/Log Pis Mean  2.05307
trainer/Log Pis Std  2.60556
trainer/Log Pis Max  9.68705
trainer/Log Pis Min  -5.50033
trainer/policy/mean Mean  0.117168
trainer/policy/mean Std  0.624839
trainer/policy/mean Max  0.99757
trainer/policy/mean Min  -0.997605
trainer/policy/normal/std Mean  0.381826
trainer/policy/normal/std Std  0.177773
trainer/policy/normal/std Max  0.929771
trainer/policy/normal/std Min  0.0751489
trainer/policy/normal/log_std Mean  -1.09606
trainer/policy/normal/log_std Std  0.551634
trainer/policy/normal/log_std Max  -0.0728167
trainer/policy/normal/log_std Min  -2.58828
eval/num steps total  56422
eval/num paths total  108
eval/path length Mean  433
eval/path length Std  0
eval/path length Max  433
eval/path length Min  433
eval/Rewards Mean  3.16248
eval/Rewards Std  0.921199
eval/Rewards Max  5.03667
eval/Rewards Min  0.983911
eval/Returns Mean  1369.36
eval/Returns Std  0
eval/Returns Max  1369.36
eval/Returns Min  1369.36
eval/Actions Mean  0.126048
eval/Actions Std  0.557193
eval/Actions Max  0.992586
eval/Actions Min  -0.99827
eval/Num Paths  1
eval/Average Returns  1369.36
eval/normalized_score  42.6977
time/evaluation sampling (s)  0.882936
time/logging (s)  0.00207538
time/sampling batch (s)  0.264233
time/saving (s)  0.0029567
time/training (s)  4.16043
time/epoch (s)  5.31263
time/total (s)  30701.8
Epoch -913
----------------------------------  ---------------
2022-05-10 21:42:13.228757 PDT | [0] Epoch -912 finished
----------------------------------  ---------------
epoch  -912
replay_buffer/size  999996
trainer/num train calls  89000
trainer/Policy Loss  -2.14764
trainer/Log Pis Mean  2.25698
trainer/Log Pis Std  2.67729
trainer/Log Pis Max  17.2668
trainer/Log Pis Min  -4.48575
trainer/policy/mean Mean  0.144302
trainer/policy/mean Std  0.612391
trainer/policy/mean Max  0.998187
trainer/policy/mean Min  -0.999382
trainer/policy/normal/std Mean  0.38713
trainer/policy/normal/std Std  0.186989
trainer/policy/normal/std Max  0.924854
trainer/policy/normal/std Min  0.0761056
trainer/policy/normal/log_std Mean  -1.09894
trainer/policy/normal/log_std Std  0.59083
trainer/policy/normal/log_std Max  -0.0781195
trainer/policy/normal/log_std Min  -2.57563
eval/num steps total  57080
eval/num paths total  109
eval/path length Mean  658
eval/path length Std  0
eval/path length Max  658
eval/path length Min  658
eval/Rewards Mean  3.22998
eval/Rewards Std  0.741245
eval/Rewards Max  4.79262
eval/Rewards Min  0.995793
eval/Returns Mean  2125.33
eval/Returns Std  0
eval/Returns Max  2125.33
eval/Returns Min  2125.33
eval/Actions Mean  0.165608
eval/Actions Std  0.600179
eval/Actions Max  0.997637
eval/Actions Min  -0.99929
eval/Num Paths  1
eval/Average Returns  2125.33
eval/normalized_score  65.9258
time/evaluation sampling (s)  0.876788
time/logging (s)  0.00264978
time/sampling batch (s)  0.262444
time/saving (s)  0.00300257
time/training (s)  4.16919
time/epoch (s)  5.31407
time/total (s)  30707.2
Epoch -912
----------------------------------  ---------------
2022-05-10 21:42:18.595650 PDT | [0] Epoch -911 finished
----------------------------------  ---------------
epoch  -911
replay_buffer/size  999996
trainer/num train calls  90000
trainer/Policy Loss  -2.01585
trainer/Log Pis Mean  2.0217
trainer/Log Pis Std  2.76924
trainer/Log Pis Max  16.7377
trainer/Log Pis Min  -5.13388
trainer/policy/mean Mean  0.116713
trainer/policy/mean Std  0.616461
trainer/policy/mean Max  0.995784
trainer/policy/mean Min  -0.999512
trainer/policy/normal/std Mean  0.394658
trainer/policy/normal/std Std  0.183226
trainer/policy/normal/std Max  0.933196
trainer/policy/normal/std Min  0.0773266
trainer/policy/normal/log_std Mean  -1.06691
trainer/policy/normal/log_std Std  0.563989
trainer/policy/normal/log_std Max  -0.0691399
trainer/policy/normal/log_std Min  -2.55972
eval/num steps total  57712
eval/num paths total  110
eval/path length Mean  632
eval/path length Std  0
eval/path length Max  632
eval/path length Min  632
eval/Rewards Mean  3.17819
eval/Rewards Std  0.763582
eval/Rewards Max  4.73153
eval/Rewards Min  0.990265
eval/Returns Mean  2008.62
eval/Returns Std  0
eval/Returns Max  2008.62
eval/Returns Min  2008.62
eval/Actions Mean  0.142833
eval/Actions Std  0.591048
eval/Actions Max  0.995157
eval/Actions Min  -0.998669
eval/Num Paths  1
eval/Average Returns  2008.62
eval/normalized_score  62.3396
time/evaluation sampling (s)  0.87739
time/logging (s)  0.00277507
time/sampling batch (s)  0.267314
time/saving (s)  0.00300244
time/training (s)  4.19719
time/epoch (s)  5.34767
time/total (s)  30712.5
Epoch -911
----------------------------------  ---------------
2022-05-10 21:42:23.946796 PDT | [0] Epoch -910 finished
----------------------------------  ---------------
epoch  -910
replay_buffer/size  999996
trainer/num train calls  91000
trainer/Policy Loss  -2.0998
trainer/Log Pis Mean  2.0347
trainer/Log Pis Std  2.48694
trainer/Log Pis Max  15.5141
trainer/Log Pis Min  -4.59568
trainer/policy/mean Mean  0.116764
trainer/policy/mean Std  0.620246
trainer/policy/mean Max  0.996063
trainer/policy/mean Min  -0.999365
trainer/policy/normal/std Mean  0.394462
trainer/policy/normal/std Std  0.184336
trainer/policy/normal/std Max  0.996943
trainer/policy/normal/std Min  0.0732861
trainer/policy/normal/log_std Mean  -1.06559
trainer/policy/normal/log_std Std  0.556246
trainer/policy/normal/log_std Max  -0.0030618
trainer/policy/normal/log_std Min  -2.61338
eval/num steps total  58212
eval/num paths total  111
eval/path length Mean  500
eval/path length Std  0
eval/path length Max  500
eval/path length Min  500
eval/Rewards Mean  3.06974
eval/Rewards Std  0.781195
eval/Rewards Max  4.75074
eval/Rewards Min  0.983629
eval/Returns Mean  1534.87
eval/Returns Std  0
eval/Returns Max  1534.87
eval/Returns Min  1534.87
eval/Actions Mean  0.163835
eval/Actions Std  0.586415
eval/Actions Max  0.997271
eval/Actions Min  -0.99634
eval/Num Paths  1
eval/Average Returns  1534.87
eval/normalized_score  47.7833
time/evaluation sampling (s)  0.88754
time/logging (s)  0.0022544
time/sampling batch (s)  0.264614
time/saving (s)  0.00295925
time/training (s)  4.17408
time/epoch (s)  5.33145
time/total (s)  30717.9
Epoch -910
----------------------------------  ---------------
2022-05-10 21:42:29.280972 PDT | [0] Epoch -909 finished
----------------------------------  ---------------
epoch  -909
replay_buffer/size  999996
trainer/num train calls  92000
trainer/Policy Loss  -2.20511
trainer/Log Pis Mean  2.16005
trainer/Log Pis Std  2.62192
trainer/Log Pis Max  12.7015
trainer/Log Pis Min  -4.69914
trainer/policy/mean Mean  0.154113
trainer/policy/mean Std  0.626917
trainer/policy/mean Max  0.997534
trainer/policy/mean Min  -0.997619
trainer/policy/normal/std Mean  0.397563
trainer/policy/normal/std Std  0.186724
trainer/policy/normal/std Max  0.960257
trainer/policy/normal/std Min  0.077156
trainer/policy/normal/log_std Mean  -1.06318
trainer/policy/normal/log_std Std  0.571629
trainer/policy/normal/log_std Max  -0.0405544
trainer/policy/normal/log_std Min  -2.56193
eval/num steps total  59118
eval/num paths total  113
eval/path length Mean  453
eval/path length Std  45
eval/path length Max  498
eval/path length Min  408
eval/Rewards Mean  3.11313
eval/Rewards Std  0.788538
eval/Rewards Max  4.91568
eval/Rewards Min  0.98497
eval/Returns Mean  1410.25
eval/Returns Std  159.774
eval/Returns Max  1570.02
eval/Returns Min  1250.47
eval/Actions Mean  0.156327
eval/Actions Std  0.592643
eval/Actions Max  0.997502
eval/Actions Min  -0.999246
eval/Num Paths  2
eval/Average Returns  1410.25
eval/normalized_score  43.9542
time/evaluation sampling (s)  0.87829
time/logging (s)  0.00338421
time/sampling batch (s)  0.265177
time/saving (s)  0.0029098
time/training (s)  4.16627
time/epoch (s)  5.31603
time/total (s)  30723.2
Epoch -909
----------------------------------  ---------------
2022-05-10 21:42:34.609779 PDT | [0] Epoch -908 finished
----------------------------------  ---------------
epoch  -908
replay_buffer/size  999996
trainer/num train calls  93000
trainer/Policy Loss  -2.19315
trainer/Log Pis Mean  2.04593
trainer/Log Pis Std  2.58425
trainer/Log Pis Max  9.78226
trainer/Log Pis Min  -5.92236
trainer/policy/mean Mean  0.173254
trainer/policy/mean Std  0.603574
trainer/policy/mean Max  0.998018
trainer/policy/mean Min  -0.99822
trainer/policy/normal/std Mean  0.392715
trainer/policy/normal/std Std  0.187001
trainer/policy/normal/std Max  0.982202
trainer/policy/normal/std Min  0.0802339
trainer/policy/normal/log_std Mean  -1.07475
trainer/policy/normal/log_std Std  0.565976
trainer/policy/normal/log_std Max  -0.0179578
trainer/policy/normal/log_std Min  -2.52281
eval/num steps total  59913
eval/num paths total  115
eval/path length Mean  397.5
eval/path length Std  79.5
eval/path length Max  477
eval/path length Min  318
eval/Rewards Mean  3.06007
eval/Rewards Std  0.900967
eval/Rewards Max  5.16351
eval/Rewards Min  0.984415
eval/Returns Mean  1216.38
eval/Returns Std  265.659
eval/Returns Max  1482.03
eval/Returns Min  950.718
eval/Actions Mean  0.143565
eval/Actions Std  0.567465
eval/Actions Max  0.997217
eval/Actions Min  -0.999585
eval/Num Paths  2
eval/Average Returns  1216.38
eval/normalized_score  37.9973
time/evaluation sampling (s)  0.878917
time/logging (s)  0.0032219
time/sampling batch (s)  0.262889
time/saving (s)  0.00296932
time/training (s)  4.16186
time/epoch (s)  5.30986
time/total (s)  30728.5
Epoch -908
----------------------------------  ---------------
2022-05-10 21:42:39.976975 PDT | [0] Epoch -907 finished
----------------------------------  ---------------
epoch  -907
replay_buffer/size  999996
trainer/num train calls  94000
trainer/Policy Loss  -1.93266
trainer/Log Pis Mean  2.10884
trainer/Log Pis Std  2.50571
trainer/Log Pis Max  10.4211
trainer/Log Pis Min  -4.72256
trainer/policy/mean Mean  0.124317
trainer/policy/mean Std  0.608889
trainer/policy/mean Max  0.996599
trainer/policy/mean Min  -0.997371
trainer/policy/normal/std Mean  0.397758
trainer/policy/normal/std Std  0.184021
trainer/policy/normal/std Max  0.990292
trainer/policy/normal/std Min  0.0810747
trainer/policy/normal/log_std Mean  -1.05525
trainer/policy/normal/log_std Std  0.553113
trainer/policy/normal/log_std Max  -0.009755
trainer/policy/normal/log_std Min  -2.51239
eval/num steps total  60559
eval/num paths total  116
eval/path length Mean  646
eval/path length Std  0
eval/path length Max  646
eval/path length Min  646
eval/Rewards Mean  3.17377
eval/Rewards Std  0.74822
eval/Rewards Max  4.76584
eval/Rewards Min  0.984064
eval/Returns Mean  2050.25
eval/Returns Std  0
eval/Returns Max  2050.25
eval/Returns Min  2050.25
eval/Actions Mean  0.14825
eval/Actions Std  0.590361
eval/Actions Max  0.998117
eval/Actions Min  -0.999008
eval/Num Paths  1
eval/Average Returns  2050.25
eval/normalized_score  63.619
time/evaluation sampling (s)  0.883432
time/logging (s)  0.00269801
time/sampling batch (s)  0.266246
time/saving (s)  0.00294288
time/training (s)  4.19192
time/epoch (s)  5.34724
time/total (s)  30733.8
Epoch -907
----------------------------------  ---------------
2022-05-10 21:42:45.415408 PDT | [0] Epoch -906 finished
----------------------------------  ---------------
epoch  -906
replay_buffer/size  999996
trainer/num train calls  95000
trainer/Policy Loss  -2.16052
trainer/Log Pis Mean  2.23302
trainer/Log Pis Std  2.59283
trainer/Log Pis Max  11.8479
trainer/Log Pis Min  -4.46336
trainer/policy/mean Mean  0.149722
trainer/policy/mean Std  0.616928
trainer/policy/mean Max  0.996211
trainer/policy/mean Min  -0.997997
trainer/policy/normal/std Mean  0.39189
trainer/policy/normal/std Std  0.18566
trainer/policy/normal/std Max  0.956703
trainer/policy/normal/std Min  0.0720207
trainer/policy/normal/log_std Mean  -1.08025
trainer/policy/normal/log_std Std  0.577556
trainer/policy/normal/log_std Max  -0.0442622
trainer/policy/normal/log_std Min  -2.6308
eval/num steps total  61070
eval/num paths total  117
eval/path length Mean  511
eval/path length Std  0
eval/path length Max  511
eval/path length Min  511
eval/Rewards Mean  3.15474
eval/Rewards Std  0.789513
eval/Rewards Max  5.15232
eval/Rewards Min  0.98788
eval/Returns Mean  1612.07
eval/Returns Std  0
eval/Returns Max  1612.07
eval/Returns Min  1612.07
eval/Actions Mean  0.156026
eval/Actions Std  0.588807
eval/Actions Max  0.997622
eval/Actions Min  -0.99711
eval/Num Paths  1
eval/Average Returns  1612.07
eval/normalized_score  50.1555
time/evaluation sampling (s)  0.970072
time/logging (s)  0.00228489
time/sampling batch (s)  0.26564
time/saving (s)  0.00298917
time/training (s)  4.17768
time/epoch (s)  5.41867
time/total (s)  30739.3
Epoch -906
----------------------------------  ---------------
2022-05-10 21:42:50.792272 PDT | [0] Epoch -905 finished
----------------------------------  ---------------
epoch  -905
replay_buffer/size  999996
trainer/num train calls  96000
trainer/Policy Loss  -2.03372
trainer/Log Pis Mean  1.88487
trainer/Log Pis Std  2.49037
trainer/Log Pis Max  9.95403
trainer/Log Pis Min  -5.6134
trainer/policy/mean Mean  0.144262
trainer/policy/mean Std  0.610764
trainer/policy/mean Max  0.997814
trainer/policy/mean Min  -0.99654
trainer/policy/normal/std Mean  0.396763
trainer/policy/normal/std Std  0.185189
trainer/policy/normal/std Max  0.971369
trainer/policy/normal/std Min  0.0828679
trainer/policy/normal/log_std Mean  -1.05807
trainer/policy/normal/log_std Std  0.551477
trainer/policy/normal/log_std Max  -0.0290493
trainer/policy/normal/log_std Min  -2.49051
eval/num steps total  61566
eval/num paths total  118
eval/path length Mean  496
eval/path length Std  0
eval/path length Max  496
eval/path length Min  496
eval/Rewards Mean  3.04508
eval/Rewards Std  0.797656
eval/Rewards Max  4.75743
eval/Rewards Min  0.98253
eval/Returns Mean  1510.36
eval/Returns Std  0
eval/Returns Max  1510.36
eval/Returns Min  1510.36
eval/Actions Mean  0.151463
eval/Actions Std  0.574287
eval/Actions Max  0.995399
eval/Actions Min  -0.99658
eval/Num Paths  1
eval/Average Returns  1510.36
eval/normalized_score  47.0301
time/evaluation sampling (s)  0.891702
time/logging (s)  0.00252859
time/sampling batch (s)  0.264313
time/saving (s)  0.00339709
time/training (s)  4.19602
time/epoch (s)  5.35796
time/total (s)  30744.6
Epoch -905
----------------------------------  ---------------
2022-05-10 21:42:56.150676 PDT | [0] Epoch -904 finished
----------------------------------  ---------------
epoch  -904
replay_buffer/size  999996
trainer/num train calls  97000
trainer/Policy Loss  -1.99156
trainer/Log Pis Mean  2.08205
trainer/Log Pis Std  2.59841
trainer/Log Pis Max  12.1007
trainer/Log Pis Min  -6.04347
trainer/policy/mean Mean  0.110806
trainer/policy/mean Std  0.6198
trainer/policy/mean Max  0.995949
trainer/policy/mean Min  -0.998943
trainer/policy/normal/std Mean  0.398354
trainer/policy/normal/std Std  0.185858
trainer/policy/normal/std Max  0.944261
trainer/policy/normal/std Min  0.0717979
trainer/policy/normal/log_std Mean  -1.05449
trainer/policy/normal/log_std Std  0.553272
trainer/policy/normal/log_std Max  -0.0573524
trainer/policy/normal/log_std Min  -2.6339
eval/num steps total  62065
eval/num paths total  119
eval/path length Mean  499
eval/path length Std  0
eval/path length Max  499
eval/path length Min  499
eval/Rewards Mean  3.07991
eval/Rewards Std  0.771661
eval/Rewards Max  4.71718
eval/Rewards Min  0.981579
eval/Returns Mean  1536.88
eval/Returns Std  0
eval/Returns Max  1536.88
eval/Returns Min  1536.88
eval/Actions Mean  0.153106
eval/Actions Std  0.58717
eval/Actions Max  0.997872
eval/Actions Min  -0.998332
eval/Num Paths  1
eval/Average Returns  1536.88
eval/normalized_score  47.845
time/evaluation sampling (s)  0.907775
time/logging (s)  0.00252827
time/sampling batch (s)  0.263548
time/saving (s)  0.00338871
time/training (s)  4.16161
time/epoch (s)  5.33885
time/total (s)  30750
Epoch -904
----------------------------------  ---------------
2022-05-10 21:43:01.582277 PDT | [0] Epoch -903 finished
----------------------------------  ---------------
epoch  -903
replay_buffer/size  999996
trainer/num train calls  98000
trainer/Policy Loss  -2.12313
trainer/Log Pis Mean  2.08771
trainer/Log Pis Std  2.67943
trainer/Log Pis Max  10.1983
trainer/Log Pis Min  -6.30174
trainer/policy/mean Mean  0.142775
trainer/policy/mean Std  0.60231
trainer/policy/mean Max  0.997196
trainer/policy/mean Min  -0.998131
trainer/policy/normal/std Mean  0.385262
trainer/policy/normal/std Std  0.185625
trainer/policy/normal/std Max  0.9629
trainer/policy/normal/std Min  0.0786136
trainer/policy/normal/log_std Mean  -1.09636
trainer/policy/normal/log_std Std  0.569325
trainer/policy/normal/log_std Max  -0.0378052
trainer/policy/normal/log_std Min  -2.54321
eval/num steps total  62718
eval/num paths total  120
eval/path length Mean  653
eval/path length Std  0
eval/path length Max  653
eval/path length Min  653
eval/Rewards Mean  3.15274
eval/Rewards Std  0.735484
eval/Rewards Max  5.17172
eval/Rewards Min  0.988152
eval/Returns Mean  2058.74
eval/Returns Std  0
eval/Returns Max  2058.74
eval/Returns Min  2058.74
eval/Actions Mean  0.149819
eval/Actions Std  0.58517
eval/Actions Max  0.99732
eval/Actions Min  -0.997424
eval/Num Paths  1
eval/Average Returns  2058.74
eval/normalized_score  63.8797
time/evaluation sampling (s)  0.889444
time/logging (s)  0.00273947
time/sampling batch (s)  0.27036
time/saving (s)  0.00326516
time/training (s)  4.24613
time/epoch (s)  5.41193
time/total (s)  30755.4
Epoch -903
----------------------------------  ---------------
2022-05-10 21:43:06.975236 PDT | [0] Epoch -902 finished
----------------------------------  ---------------
epoch  -902
replay_buffer/size  999996
trainer/num train calls  99000
trainer/Policy Loss  -1.97404
trainer/Log Pis Mean  2.05054
trainer/Log Pis Std  2.54219
trainer/Log Pis Max  9.37886
trainer/Log Pis Min  -4.94958
trainer/policy/mean Mean  0.147976
trainer/policy/mean Std  0.611558
trainer/policy/mean Max  0.998552
trainer/policy/mean Min  -0.998303
trainer/policy/normal/std Mean  0.394346
trainer/policy/normal/std Std  0.18436
trainer/policy/normal/std Max  0.99593
trainer/policy/normal/std Min  0.0720386
trainer/policy/normal/log_std Mean  -1.07159
trainer/policy/normal/log_std Std  0.574068
trainer/policy/normal/log_std Max  -0.00407793
trainer/policy/normal/log_std Min  -2.63055
eval/num steps total  63215
eval/num paths total  121
eval/path length Mean  497
eval/path length Std  0
eval/path length Max  497
eval/path length Min  497
eval/Rewards Mean  3.09165
eval/Rewards Std  0.747708
eval/Rewards Max  4.61123
eval/Rewards Min  0.986237
eval/Returns Mean  1536.55
eval/Returns Std  0
eval/Returns Max  1536.55
eval/Returns Min  1536.55
eval/Actions Mean  0.148961
eval/Actions Std  0.591665
eval/Actions Max  0.998023
eval/Actions Min  -0.996335
eval/Num Paths  1
eval/Average Returns  1536.55
eval/normalized_score  47.835
time/evaluation sampling (s)  0.892687
time/logging (s)  0.00236662
time/sampling batch (s)  0.268555
time/saving (s)  0.00323087
time/training (s)  4.20611
time/epoch (s)  5.37295
time/total (s)  30760.8
Epoch -902
----------------------------------  ---------------
2022-05-10 21:43:12.430464 PDT | [0] Epoch -901 finished
----------------------------------  ---------------
epoch  -901
replay_buffer/size  999996
trainer/num train calls  100000
trainer/Policy Loss  -2.09212
trainer/Log Pis Mean  2.11277
trainer/Log Pis Std  2.54947
trainer/Log Pis Max  10.603
trainer/Log Pis Min  -3.47318
trainer/policy/mean Mean  0.160435
trainer/policy/mean Std  0.604342
trainer/policy/mean Max  0.996618
trainer/policy/mean Min  -0.998254
trainer/policy/normal/std Mean  0.39275
trainer/policy/normal/std Std  0.183954
trainer/policy/normal/std Max  1.01285
trainer/policy/normal/std Min  0.0755116
trainer/policy/normal/log_std Mean  -1.07534
trainer/policy/normal/log_std Std  0.572935
trainer/policy/normal/log_std Max  0.0127644
trainer/policy/normal/log_std Min  -2.58347
eval/num steps total  63791
eval/num paths total  122
eval/path length Mean  576
eval/path length Std  0
eval/path length Max  576
eval/path length Min  576
eval/Rewards Mean  3.12921
eval/Rewards Std  0.756154
eval/Rewards Max  4.82683
eval/Rewards Min  0.991822
eval/Returns Mean  1802.42
eval/Returns Std  0
eval/Returns Max  1802.42
eval/Returns Min  1802.42
eval/Actions Mean  0.163514
eval/Actions Std  0.581838
eval/Actions Max  0.997951
eval/Actions Min  -0.998136
eval/Num Paths  1
eval/Average Returns  1802.42
eval/normalized_score  56.0041
time/evaluation sampling (s)  0.907636
time/logging (s)  0.00259703
time/sampling batch (s)  0.270428
time/saving (s)  0.00548157
time/training (s)  4.24969
time/epoch (s)  5.43583
time/total (s)  30766.2
Epoch -901
----------------------------------  ---------------
2022-05-10 21:43:17.838058 PDT | [0] Epoch -900 finished
----------------------------------  ---------------
epoch  -900
replay_buffer/size  999996
trainer/num train calls  101000
trainer/Policy Loss  -2.02713
trainer/Log Pis Mean  2.20915
trainer/Log Pis Std  2.59197
trainer/Log Pis Max  11.7487
trainer/Log Pis Min  -4.34128
trainer/policy/mean Mean  0.168478
trainer/policy/mean Std  0.6121
trainer/policy/mean Max  0.997647
trainer/policy/mean Min  -0.997778
trainer/policy/normal/std Mean  0.398624
trainer/policy/normal/std Std  0.186956
trainer/policy/normal/std Max  0.974352
trainer/policy/normal/std Min  0.0779981
trainer/policy/normal/log_std Mean  -1.05943
trainer/policy/normal/log_std Std  0.568419
trainer/policy/normal/log_std Max  -0.0259827
trainer/policy/normal/log_std Min  -2.55107
eval/num steps total  64680
eval/num paths total  124
eval/path length Mean  444.5
eval/path length Std  42.5
eval/path length Max  487
eval/path length Min  402
eval/Rewards Mean  3.14056
eval/Rewards Std  0.839963
eval/Rewards Max  4.80507
eval/Rewards Min  0.984165
eval/Returns Mean  1395.98
eval/Returns Std  166.928
eval/Returns Max  1562.9
eval/Returns Min  1229.05
eval/Actions Mean  0.136146
eval/Actions Std  0.583038
eval/Actions Max  0.995197
eval/Actions Min  -0.998571
eval/Num Paths  2
eval/Average Returns  1395.98
eval/normalized_score  43.5157
time/evaluation sampling (s)  0.892562
time/logging (s)  0.00346745
time/sampling batch (s)  0.270142
time/saving (s)  0.00296657
time/training (s)  4.21977
time/epoch (s)  5.38891
time/total (s)  30771.6
Epoch -900
----------------------------------  ---------------
2022-05-10 21:43:23.299325 PDT | [0] Epoch -899 finished
----------------------------------  ---------------
epoch  -899
replay_buffer/size  999996
trainer/num train calls  102000
trainer/Policy Loss  -2.00891
trainer/Log Pis Mean  2.11512
trainer/Log Pis Std  2.4567
trainer/Log Pis Max  8.90584
trainer/Log Pis Min  -3.77225
trainer/policy/mean Mean  0.12957
trainer/policy/mean Std  0.620232
trainer/policy/mean Max  0.996944
trainer/policy/mean Min  -0.996816
trainer/policy/normal/std Mean  0.390539
trainer/policy/normal/std Std  0.18253
trainer/policy/normal/std Max  1.03257
trainer/policy/normal/std Min  0.0778205
trainer/policy/normal/log_std Mean  -1.07462
trainer/policy/normal/log_std Std  0.553095
trainer/policy/normal/log_std Max  0.0320467
trainer/policy/normal/log_std Min  -2.55335
eval/num steps total  65673
eval/num paths total  126
eval/path length Mean  496.5
eval/path length Std  28.5
eval/path length Max  525
eval/path length Min  468
eval/Rewards Mean  3.1763
eval/Rewards Std  0.847047
eval/Rewards Max  5.43096
eval/Rewards Min  0.982509
eval/Returns Mean  1577.03
eval/Returns Std  85.9147
eval/Returns Max  1662.95
eval/Returns Min  1491.12
eval/Actions Mean  0.156635
eval/Actions Std  0.583151
eval/Actions Max  0.997615
eval/Actions Min  -0.998738
eval/Num Paths  2
eval/Average Returns  1577.03
eval/normalized_score  49.0788
time/evaluation sampling (s)  0.890145
time/logging (s)  0.00374573
time/sampling batch (s)  0.271669
time/saving (s)  0.00306319
time/training (s)  4.27326
time/epoch (s)  5.44188
time/total (s)  30777
Epoch -899
----------------------------------  ---------------
2022-05-10 21:43:28.841500 PDT | [0] Epoch -898 finished
----------------------------------  ---------------
epoch  -898
replay_buffer/size  999996
trainer/num train calls  103000
trainer/Policy Loss  -1.84948
trainer/Log Pis Mean  2.10719
trainer/Log Pis Std  2.47106
trainer/Log Pis Max  8.94739
trainer/Log Pis Min  -4.52651
trainer/policy/mean Mean  0.166025
trainer/policy/mean Std  0.605736
trainer/policy/mean Max  0.995009
trainer/policy/mean Min  -0.997616
trainer/policy/normal/std Mean  0.399598
trainer/policy/normal/std Std  0.190158
trainer/policy/normal/std Max  0.961566
trainer/policy/normal/std Min  0.0744532
trainer/policy/normal/log_std Mean  -1.06152
trainer/policy/normal/log_std Std  0.578871
trainer/policy/normal/log_std Max  -0.0391919
trainer/policy/normal/log_std Min  -2.59758
eval/num steps total  66209
eval/num paths total  127
eval/path length Mean  536
eval/path length Std  0
eval/path length Max  536
eval/path length Min  536
eval/Rewards Mean  3.20165
eval/Rewards Std  0.824016
eval/Rewards Max  5.48039
eval/Rewards Min  0.988246
eval/Returns Mean  1716.09
eval/Returns Std  0
eval/Returns Max  1716.09
eval/Returns Min  1716.09
eval/Actions Mean  0.160649
eval/Actions Std  0.586439
eval/Actions Max  0.997792
eval/Actions Min  -0.997622
eval/Num Paths  1
eval/Average Returns  1716.09
eval/normalized_score  53.3514
time/evaluation sampling (s)  0.919792
time/logging (s)  0.0024281
time/sampling batch (s)  0.273967
time/saving (s)  0.00304306
time/training (s)  4.3217
time/epoch (s)  5.52093
time/total (s)  30782.6
Epoch -898
----------------------------------  ---------------
2022-05-10 21:43:34.291045 PDT | [0] Epoch -897 finished
----------------------------------  ---------------
epoch  -897
replay_buffer/size  999996
trainer/num train calls  104000
trainer/Policy Loss  -1.95543
trainer/Log Pis Mean  2.0922
trainer/Log Pis Std  2.42419
trainer/Log Pis Max  9.46247
trainer/Log Pis Min  -4.36841
trainer/policy/mean Mean  0.156945
trainer/policy/mean Std  0.604117
trainer/policy/mean Max  0.997446
trainer/policy/mean Min  -0.997438
trainer/policy/normal/std Mean  0.394999
trainer/policy/normal/std Std  0.184713
trainer/policy/normal/std Max  0.937794
trainer/policy/normal/std Min  0.0785229
trainer/policy/normal/log_std Mean  -1.0669
trainer/policy/normal/log_std Std  0.564294
trainer/policy/normal/log_std Max  -0.0642249
trainer/policy/normal/log_std Min  -2.54437
eval/num steps total  66715
eval/num paths total  128
eval/path length Mean  506
eval/path length Std  0
eval/path length Max  506
eval/path length Min  506
eval/Rewards Mean  3.10451
eval/Rewards Std  0.773069
eval/Rewards Max  4.80017
eval/Rewards Min  0.985124
eval/Returns Mean  1570.88
eval/Returns Std  0
eval/Returns Max  1570.88
eval/Returns Min  1570.88
eval/Actions Mean  0.153254
eval/Actions Std  0.588099
eval/Actions Max  0.997336
eval/Actions Min  -0.999148
eval/Num Paths  1
eval/Average Returns  1570.88
eval/normalized_score  48.8898
time/evaluation sampling (s)  0.909713
time/logging (s)  0.00229456
time/sampling batch (s)  0.270102
time/saving (s)  0.00304835
time/training (s)  4.24494
time/epoch (s)  5.4301
time/total (s)  30788
Epoch -897
----------------------------------  ---------------
2022-05-10 21:43:39.764763 PDT | [0] Epoch -896 finished
----------------------------------  ---------------
epoch  -896
replay_buffer/size  999996
trainer/num train calls  105000
trainer/Policy Loss  -2.0803
trainer/Log Pis Mean  2.09557
trainer/Log Pis Std  2.52958
trainer/Log Pis Max  9.12936
trainer/Log Pis Min  -4.75501
trainer/policy/mean Mean  0.146378
trainer/policy/mean Std  0.607352
trainer/policy/mean Max  0.996874
trainer/policy/mean Min  -0.99812
trainer/policy/normal/std Mean  0.386208
trainer/policy/normal/std Std  0.182596
trainer/policy/normal/std Max  0.934357
trainer/policy/normal/std Min  0.0731269
trainer/policy/normal/log_std Mean  -1.09604
trainer/policy/normal/log_std Std  0.581646
trainer/policy/normal/log_std Max  -0.0678966
trainer/policy/normal/log_std Min  -2.61556
eval/num steps total  67257
eval/num paths total  129
eval/path length Mean  542
eval/path length Std  0
eval/path length Max  542
eval/path length Min  542
eval/Rewards Mean  3.18764
eval/Rewards Std  0.849285
eval/Rewards Max  5.30091
eval/Rewards Min  0.980242
eval/Returns Mean  1727.7
eval/Returns Std  0
eval/Returns Max  1727.7
eval/Returns Min  1727.7
eval/Actions Mean  0.147618
eval/Actions Std  0.575173
eval/Actions Max  0.995401
eval/Actions Min  -0.998637
eval/Num Paths  1
eval/Average Returns  1727.7
eval/normalized_score  53.7082
time/evaluation sampling (s)  0.906251
time/logging (s)  0.00255926
time/sampling batch (s)  0.271891
time/saving (s)  0.00315356
time/training (s)  4.27036
time/epoch (s)  5.45421
time/total (s)  30793.4
Epoch -896
----------------------------------  ---------------
2022-05-10 21:43:45.163614 PDT | [0] Epoch -895 finished
----------------------------------  ---------------
epoch  -895
replay_buffer/size  999996
trainer/num train calls  106000
trainer/Policy Loss  -2.16559
trainer/Log Pis Mean  2.21387
trainer/Log Pis Std  2.54226
trainer/Log Pis Max  10.7725
trainer/Log Pis Min  -5.4391
trainer/policy/mean Mean  0.0990978
trainer/policy/mean Std  0.628039
trainer/policy/mean Max  0.998741
trainer/policy/mean Min  -0.997838
trainer/policy/normal/std Mean  0.396941
trainer/policy/normal/std Std  0.186666
trainer/policy/normal/std Max  1.35666
trainer/policy/normal/std Min  0.0754899
trainer/policy/normal/log_std Mean  -1.0617
trainer/policy/normal/log_std Std  0.563463
trainer/policy/normal/log_std Max  0.305022
trainer/policy/normal/log_std Min  -2.58376
eval/num steps total  67842
eval/num paths total  130
eval/path length Mean  585
eval/path length Std  0
eval/path length Max  585
eval/path length Min  585
eval/Rewards Mean  3.18747
eval/Rewards Std  0.75182
eval/Rewards Max  4.87264
eval/Rewards Min  0.990841
eval/Returns Mean  1864.67
eval/Returns Std  0
eval/Returns Max  1864.67
eval/Returns Min  1864.67
eval/Actions Mean  0.172033
eval/Actions Std  0.597082
eval/Actions Max  0.997972
eval/Actions Min  -0.998635
eval/Num Paths  1
eval/Average Returns  1864.67
eval/normalized_score  57.9167
time/evaluation sampling (s)  0.877435
time/logging (s)  0.00264083
time/sampling batch (s)  0.266328
time/saving (s)  0.00324454
time/training (s)  4.2298
time/epoch (s)  5.37945
time/total (s)  30798.8
Epoch -895
----------------------------------  ---------------
2022-05-10 21:43:50.477710 PDT | [0] Epoch -894 finished
----------------------------------  ---------------
epoch  -894
replay_buffer/size  999996
trainer/num train calls  107000
trainer/Policy Loss  -2.25201
trainer/Log Pis Mean  2.34801
trainer/Log Pis Std  2.63423
trainer/Log Pis Max  12.828
trainer/Log Pis Min  -5.17421
trainer/policy/mean Mean  0.152672
trainer/policy/mean Std  0.625572
trainer/policy/mean Max  0.998111
trainer/policy/mean Min  -0.999117
trainer/policy/normal/std Mean  0.391733
trainer/policy/normal/std Std  0.185421
trainer/policy/normal/std Max  0.943657
trainer/policy/normal/std Min  0.0751269
trainer/policy/normal/log_std Mean  -1.08169
trainer/policy/normal/log_std Std  0.580233
trainer/policy/normal/log_std Max  -0.0579927
trainer/policy/normal/log_std Min  -2.58858
eval/num steps total  68820
eval/num paths total  132
eval/path length Mean  489
eval/path length Std  0
eval/path length Max  489
eval/path length Min  489
eval/Rewards Mean  3.19316
eval/Rewards Std  0.807906
eval/Rewards Max  4.87411
eval/Rewards Min  0.987411
eval/Returns Mean  1561.45
eval/Returns Std  1.8032
eval/Returns Max  1563.26
eval/Returns Min  1559.65
eval/Actions Mean  0.144667
eval/Actions Std  0.59354
eval/Actions Max  0.996152
eval/Actions Min  -0.998734
eval/Num Paths  2
eval/Average Returns  1561.45
eval/normalized_score  48.6001
time/evaluation sampling (s)  0.888679
time/logging (s)  0.00399908
time/sampling batch (s)  0.260224
time/saving (s)  0.00310307
time/training (s)  4.14025
time/epoch (s)  5.29626
time/total (s)  30804.1
Epoch -894
----------------------------------  ---------------
2022-05-10 21:43:55.877353 PDT | [0] Epoch -893 finished
----------------------------------  ---------------
epoch  -893
replay_buffer/size  999996
trainer/num train calls  108000
trainer/Policy Loss  -2.17066
trainer/Log Pis Mean  2.21712
trainer/Log Pis Std  2.57811
trainer/Log Pis Max  10.9624
trainer/Log Pis Min  -6.06275
trainer/policy/mean Mean  0.107772
trainer/policy/mean Std  0.62105
trainer/policy/mean Max  0.996851
trainer/policy/mean Min  -0.996815
trainer/policy/normal/std Mean  0.389503
trainer/policy/normal/std Std  0.187901
trainer/policy/normal/std Max  1.03413
trainer/policy/normal/std Min  0.0690379
trainer/policy/normal/log_std Mean  -1.09026
trainer/policy/normal/log_std Std  0.582886
trainer/policy/normal/log_std Max  0.0335606
trainer/policy/normal/log_std Min  -2.6731
eval/num steps total  69817
eval/num paths total  134
eval/path length Mean  498.5
eval/path length Std  7.5
eval/path length Max  506
eval/path length Min  491
eval/Rewards Mean  3.09828
eval/Rewards Std  0.794419
eval/Rewards Max  5.03234
eval/Rewards Min  0.981192
eval/Returns Mean  1544.49
eval/Returns Std  30.1959
eval/Returns Max  1574.69
eval/Returns Min  1514.3
eval/Actions Mean  0.151542
eval/Actions Std  0.583243
eval/Actions Max  0.998165
eval/Actions Min  -0.998483
eval/Num Paths  2
eval/Average Returns  1544.49
eval/normalized_score  48.079
time/evaluation sampling (s)  0.917008
time/logging (s)  0.00394741
time/sampling batch (s)  0.265721
time/saving (s)  0.00353863
time/training (s)  4.18965
time/epoch (s)  5.37987
time/total (s)  30809.5
Epoch -893
----------------------------------  ---------------
2022-05-10 21:44:01.265295 PDT | [0] Epoch -892 finished
----------------------------------  ---------------
epoch  -892
replay_buffer/size  999996
trainer/num train calls  109000
trainer/Policy Loss  -2.15838
trainer/Log Pis Mean  2.10254
trainer/Log Pis Std  2.47781
trainer/Log Pis Max  12.3141
trainer/Log Pis Min  -5.61161
trainer/policy/mean Mean  0.150644
trainer/policy/mean Std  0.612864
trainer/policy/mean Max  0.998057
trainer/policy/mean Min  -0.998226
trainer/policy/normal/std Mean  0.395855
trainer/policy/normal/std Std  0.187478
trainer/policy/normal/std Max  0.905888
trainer/policy/normal/std Min  0.0743947
trainer/policy/normal/log_std Mean  -1.06688
trainer/policy/normal/log_std Std  0.567076
trainer/policy/normal/log_std Max  -0.0988395
trainer/policy/normal/log_std Min  -2.59837
eval/num steps total  70316
eval/num paths total  135
eval/path length Mean  499
eval/path length Std  0
eval/path length Max  499
eval/path length Min  499
eval/Rewards Mean  3.09491
eval/Rewards Std  0.788922
eval/Rewards Max  4.76351
eval/Rewards Min  0.987455
eval/Returns Mean  1544.36
eval/Returns Std  0
eval/Returns Max  1544.36
eval/Returns Min  1544.36
eval/Actions Mean  0.162461
eval/Actions Std  0.581196
eval/Actions Max  0.998485
eval/Actions Min  -0.996527
eval/Num Paths  1
eval/Average Returns  1544.36
eval/normalized_score  48.075
time/evaluation sampling (s)  0.898691
time/logging (s)  0.00229826
time/sampling batch (s)  0.264992
time/saving (s)  0.00299665
time/training (s)  4.19741
time/epoch (s)  5.36639
time/total (s)  30814.9
Epoch -892
----------------------------------  ---------------
2022-05-10 21:44:06.664040 PDT | [0] Epoch -891 finished
----------------------------------  ---------------
epoch  -891
replay_buffer/size  999996
trainer/num train calls  110000
trainer/Policy Loss  -2.04331
trainer/Log Pis Mean  1.97581
trainer/Log Pis Std  2.55411
trainer/Log Pis Max  8.81573
trainer/Log Pis Min  -4.95535
trainer/policy/mean Mean  0.132862
trainer/policy/mean Std  0.607306
trainer/policy/mean Max  0.997408
trainer/policy/mean Min  -0.997863
trainer/policy/normal/std Mean  0.395088
trainer/policy/normal/std Std  0.189997
trainer/policy/normal/std Max  1.0153
trainer/policy/normal/std Min  0.0764865
trainer/policy/normal/log_std Mean  -1.07397
trainer/policy/normal/log_std Std  0.578728
trainer/policy/normal/log_std Max  0.0151887
trainer/policy/normal/log_std Min  -2.57064
eval/num steps total  70888
eval/num paths total  136
eval/path length Mean  572
eval/path length Std  0
eval/path length Max  572
eval/path length Min  572
eval/Rewards Mean  3.13017
eval/Rewards Std  0.754512
eval/Rewards Max  4.80368
eval/Rewards Min  0.985344
eval/Returns Mean  1790.46
eval/Returns Std  0
eval/Returns Max  1790.46
eval/Returns Min  1790.46
eval/Actions Mean  0.158168
eval/Actions Std  0.585571
eval/Actions Max  0.996435
eval/Actions Min  -0.997914
eval/Num Paths  1
eval/Average Returns  1790.46
eval/normalized_score  55.6365
time/evaluation sampling (s)  0.901777
time/logging (s)  0.00252568
time/sampling batch (s)  0.266109
time/saving (s)  0.00302739
time/training (s)  4.20609
time/epoch (s)  5.37953
time/total (s)  30820.3
Epoch -891
----------------------------------  ---------------
2022-05-10 21:44:12.070451 PDT | [0] Epoch -890 finished
----------------------------------  ---------------
epoch  -890
replay_buffer/size  999996
trainer/num train calls  111000
trainer/Policy Loss  -2.03101
trainer/Log Pis Mean  2.21071
trainer/Log Pis Std  2.65574
trainer/Log Pis Max  9.11358
trainer/Log Pis Min  -6.91465
trainer/policy/mean Mean  0.103839
trainer/policy/mean Std  0.621331
trainer/policy/mean Max  0.99856
trainer/policy/mean Min  -0.998032
trainer/policy/normal/std Mean  0.390403
trainer/policy/normal/std Std  0.186007
trainer/policy/normal/std Max 0.973472 trainer/policy/normal/std Min 0.0714676 trainer/policy/normal/log_std Mean -1.08427 trainer/policy/normal/log_std Std 0.576635 trainer/policy/normal/log_std Max -0.0268863 trainer/policy/normal/log_std Min -2.63851 eval/num steps total 71390 eval/num paths total 137 eval/path length Mean 502 eval/path length Std 0 eval/path length Max 502 eval/path length Min 502 eval/Rewards Mean 3.08592 eval/Rewards Std 0.77873 eval/Rewards Max 4.72367 eval/Rewards Min 0.984333 eval/Returns Mean 1549.13 eval/Returns Std 0 eval/Returns Max 1549.13 eval/Returns Min 1549.13 eval/Actions Mean 0.163203 eval/Actions Std 0.582954 eval/Actions Max 0.99771 eval/Actions Min -0.999188 eval/Num Paths 1 eval/Average Returns 1549.13 eval/normalized_score 48.2215 time/evaluation sampling (s) 0.884525 time/logging (s) 0.00226736 time/sampling batch (s) 0.267037 time/saving (s) 0.00302535 time/training (s) 4.22977 time/epoch (s) 5.38662 time/total (s) 30825.7 Epoch -890 ---------------------------------- --------------- 2022-05-10 21:44:17.480599 PDT | [0] Epoch -889 finished ---------------------------------- --------------- epoch -889 replay_buffer/size 999996 trainer/num train calls 112000 trainer/Policy Loss -2.22586 trainer/Log Pis Mean 2.10032 trainer/Log Pis Std 2.6564 trainer/Log Pis Max 12.0768 trainer/Log Pis Min -5.45811 trainer/policy/mean Mean 0.15465 trainer/policy/mean Std 0.622993 trainer/policy/mean Max 0.996781 trainer/policy/mean Min -0.999026 trainer/policy/normal/std Mean 0.39826 trainer/policy/normal/std Std 0.186171 trainer/policy/normal/std Max 0.999433 trainer/policy/normal/std Min 0.0735752 trainer/policy/normal/log_std Mean -1.05828 trainer/policy/normal/log_std Std 0.563704 trainer/policy/normal/log_std Max -0.00056712 trainer/policy/normal/log_std Min -2.60945 eval/num steps total 71961 eval/num paths total 138 eval/path length Mean 571 eval/path length Std 0 eval/path length Max 571 eval/path length Min 571 eval/Rewards Mean 
3.16855 eval/Rewards Std 0.814936 eval/Rewards Max 4.80471 eval/Rewards Min 0.989524 eval/Returns Mean 1809.24 eval/Returns Std 0 eval/Returns Max 1809.24 eval/Returns Min 1809.24 eval/Actions Mean 0.16666 eval/Actions Std 0.597058 eval/Actions Max 0.997934 eval/Actions Min -0.998755 eval/Num Paths 1 eval/Average Returns 1809.24 eval/normalized_score 56.2137 time/evaluation sampling (s) 0.874208 time/logging (s) 0.00245573 time/sampling batch (s) 0.267791 time/saving (s) 0.00298847 time/training (s) 4.24333 time/epoch (s) 5.39078 time/total (s) 30831 Epoch -889 ---------------------------------- --------------- 2022-05-10 21:44:22.874013 PDT | [0] Epoch -888 finished ---------------------------------- --------------- epoch -888 replay_buffer/size 999996 trainer/num train calls 113000 trainer/Policy Loss -1.94675 trainer/Log Pis Mean 1.98295 trainer/Log Pis Std 2.54638 trainer/Log Pis Max 9.26118 trainer/Log Pis Min -8.83078 trainer/policy/mean Mean 0.116624 trainer/policy/mean Std 0.606979 trainer/policy/mean Max 0.998027 trainer/policy/mean Min -0.998169 trainer/policy/normal/std Mean 0.399603 trainer/policy/normal/std Std 0.186724 trainer/policy/normal/std Max 0.950798 trainer/policy/normal/std Min 0.0703728 trainer/policy/normal/log_std Mean -1.05337 trainer/policy/normal/log_std Std 0.559117 trainer/policy/normal/log_std Max -0.0504534 trainer/policy/normal/log_std Min -2.65395 eval/num steps total 72549 eval/num paths total 139 eval/path length Mean 588 eval/path length Std 0 eval/path length Max 588 eval/path length Min 588 eval/Rewards Mean 3.20937 eval/Rewards Std 0.754465 eval/Rewards Max 5.25911 eval/Rewards Min 0.991248 eval/Returns Mean 1887.11 eval/Returns Std 0 eval/Returns Max 1887.11 eval/Returns Min 1887.11 eval/Actions Mean 0.152773 eval/Actions Std 0.598273 eval/Actions Max 0.996508 eval/Actions Min -0.997435 eval/Num Paths 1 eval/Average Returns 1887.11 eval/normalized_score 58.6062 time/evaluation sampling (s) 0.89446 time/logging (s) 
0.00262819 time/sampling batch (s) 0.265948 time/saving (s) 0.00307052 time/training (s) 4.20821 time/epoch (s) 5.37432 time/total (s) 30836.4 Epoch -888 ---------------------------------- --------------- 2022-05-10 21:44:28.357371 PDT | [0] Epoch -887 finished ---------------------------------- --------------- epoch -887 replay_buffer/size 999996 trainer/num train calls 114000 trainer/Policy Loss -2.17231 trainer/Log Pis Mean 2.08374 trainer/Log Pis Std 2.53769 trainer/Log Pis Max 9.18573 trainer/Log Pis Min -4.13664 trainer/policy/mean Mean 0.142389 trainer/policy/mean Std 0.611824 trainer/policy/mean Max 0.997705 trainer/policy/mean Min -0.998361 trainer/policy/normal/std Mean 0.39333 trainer/policy/normal/std Std 0.183583 trainer/policy/normal/std Max 0.935774 trainer/policy/normal/std Min 0.077872 trainer/policy/normal/log_std Mean -1.07198 trainer/policy/normal/log_std Std 0.567459 trainer/policy/normal/log_std Max -0.0663809 trainer/policy/normal/log_std Min -2.55269 eval/num steps total 73051 eval/num paths total 140 eval/path length Mean 502 eval/path length Std 0 eval/path length Max 502 eval/path length Min 502 eval/Rewards Mean 3.1332 eval/Rewards Std 0.796405 eval/Rewards Max 4.75751 eval/Rewards Min 0.984578 eval/Returns Mean 1572.87 eval/Returns Std 0 eval/Returns Max 1572.87 eval/Returns Min 1572.87 eval/Actions Mean 0.163252 eval/Actions Std 0.595184 eval/Actions Max 0.998309 eval/Actions Min -0.99897 eval/Num Paths 1 eval/Average Returns 1572.87 eval/normalized_score 48.9508 time/evaluation sampling (s) 0.953929 time/logging (s) 0.00233442 time/sampling batch (s) 0.268396 time/saving (s) 0.00305533 time/training (s) 4.23584 time/epoch (s) 5.46356 time/total (s) 30841.9 Epoch -887 ---------------------------------- --------------- 2022-05-10 21:44:33.818036 PDT | [0] Epoch -886 finished ---------------------------------- --------------- epoch -886 replay_buffer/size 999996 trainer/num train calls 115000 trainer/Policy Loss -2.02791 trainer/Log Pis 
Mean 2.1006 trainer/Log Pis Std 2.5844 trainer/Log Pis Max 10.8494 trainer/Log Pis Min -6.81491 trainer/policy/mean Mean 0.137777 trainer/policy/mean Std 0.602917 trainer/policy/mean Max 0.998389 trainer/policy/mean Min -0.998726 trainer/policy/normal/std Mean 0.395379 trainer/policy/normal/std Std 0.186205 trainer/policy/normal/std Max 0.947369 trainer/policy/normal/std Min 0.0769334 trainer/policy/normal/log_std Mean -1.06716 trainer/policy/normal/log_std Std 0.56595 trainer/policy/normal/log_std Max -0.0540663 trainer/policy/normal/log_std Min -2.56482 eval/num steps total 74039 eval/num paths total 142 eval/path length Mean 494 eval/path length Std 227 eval/path length Max 721 eval/path length Min 267 eval/Rewards Mean 3.14464 eval/Rewards Std 0.809904 eval/Rewards Max 5.53153 eval/Rewards Min 0.98399 eval/Returns Mean 1553.45 eval/Returns Std 740.68 eval/Returns Max 2294.13 eval/Returns Min 812.77 eval/Actions Mean 0.128169 eval/Actions Std 0.576684 eval/Actions Max 0.999015 eval/Actions Min -0.999101 eval/Num Paths 2 eval/Average Returns 1553.45 eval/normalized_score 48.3542 time/evaluation sampling (s) 0.898971 time/logging (s) 0.00377359 time/sampling batch (s) 0.271359 time/saving (s) 0.00296552 time/training (s) 4.26506 time/epoch (s) 5.44213 time/total (s) 30847.3 Epoch -886 ---------------------------------- --------------- 2022-05-10 21:44:39.232403 PDT | [0] Epoch -885 finished ---------------------------------- --------------- epoch -885 replay_buffer/size 999996 trainer/num train calls 116000 trainer/Policy Loss -2.05956 trainer/Log Pis Mean 2.09783 trainer/Log Pis Std 2.6167 trainer/Log Pis Max 9.25821 trainer/Log Pis Min -4.83968 trainer/policy/mean Mean 0.14334 trainer/policy/mean Std 0.610799 trainer/policy/mean Max 0.998068 trainer/policy/mean Min -0.997492 trainer/policy/normal/std Mean 0.392991 trainer/policy/normal/std Std 0.181571 trainer/policy/normal/std Max 0.923047 trainer/policy/normal/std Min 0.0790435 trainer/policy/normal/log_std 
Mean -1.06529 trainer/policy/normal/log_std Std 0.547709 trainer/policy/normal/log_std Max -0.080075 trainer/policy/normal/log_std Min -2.53776 eval/num steps total 74540 eval/num paths total 143 eval/path length Mean 501 eval/path length Std 0 eval/path length Max 501 eval/path length Min 501 eval/Rewards Mean 3.10315 eval/Rewards Std 0.761603 eval/Rewards Max 4.6936 eval/Rewards Min 0.981609 eval/Returns Mean 1554.68 eval/Returns Std 0 eval/Returns Max 1554.68 eval/Returns Min 1554.68 eval/Actions Mean 0.150989 eval/Actions Std 0.588224 eval/Actions Max 0.997382 eval/Actions Min -0.99622 eval/Num Paths 1 eval/Average Returns 1554.68 eval/normalized_score 48.392 time/evaluation sampling (s) 0.885507 time/logging (s) 0.00230485 time/sampling batch (s) 0.269408 time/saving (s) 0.00300895 time/training (s) 4.23286 time/epoch (s) 5.39309 time/total (s) 30852.7 Epoch -885 ---------------------------------- --------------- 2022-05-10 21:44:44.661211 PDT | [0] Epoch -884 finished ---------------------------------- --------------- epoch -884 replay_buffer/size 999996 trainer/num train calls 117000 trainer/Policy Loss -2.04779 trainer/Log Pis Mean 1.91758 trainer/Log Pis Std 2.64032 trainer/Log Pis Max 9.83694 trainer/Log Pis Min -5.83029 trainer/policy/mean Mean 0.0963208 trainer/policy/mean Std 0.61611 trainer/policy/mean Max 0.997456 trainer/policy/mean Min -0.998304 trainer/policy/normal/std Mean 0.394801 trainer/policy/normal/std Std 0.183179 trainer/policy/normal/std Max 0.955331 trainer/policy/normal/std Min 0.067812 trainer/policy/normal/log_std Mean -1.06393 trainer/policy/normal/log_std Std 0.556449 trainer/policy/normal/log_std Max -0.0456974 trainer/policy/normal/log_std Min -2.69102 eval/num steps total 75024 eval/num paths total 144 eval/path length Mean 484 eval/path length Std 0 eval/path length Max 484 eval/path length Min 484 eval/Rewards Mean 3.16932 eval/Rewards Std 0.836431 eval/Rewards Max 4.78677 eval/Rewards Min 0.989233 eval/Returns Mean 1533.95 
eval/Returns Std 0 eval/Returns Max 1533.95 eval/Returns Min 1533.95 eval/Actions Mean 0.14464 eval/Actions Std 0.596464 eval/Actions Max 0.995688 eval/Actions Min -0.998661 eval/Num Paths 1 eval/Average Returns 1533.95 eval/normalized_score 47.7551 time/evaluation sampling (s) 0.894217 time/logging (s) 0.00240789 time/sampling batch (s) 0.269592 time/saving (s) 0.00320145 time/training (s) 4.23915 time/epoch (s) 5.40857 time/total (s) 30858.1 Epoch -884 ---------------------------------- --------------- 2022-05-10 21:44:50.082665 PDT | [0] Epoch -883 finished ---------------------------------- --------------- epoch -883 replay_buffer/size 999996 trainer/num train calls 118000 trainer/Policy Loss -2.16202 trainer/Log Pis Mean 1.92827 trainer/Log Pis Std 2.62506 trainer/Log Pis Max 8.79455 trainer/Log Pis Min -8.42623 trainer/policy/mean Mean 0.15097 trainer/policy/mean Std 0.615481 trainer/policy/mean Max 0.99751 trainer/policy/mean Min -0.997283 trainer/policy/normal/std Mean 0.393801 trainer/policy/normal/std Std 0.182708 trainer/policy/normal/std Max 0.919743 trainer/policy/normal/std Min 0.0732001 trainer/policy/normal/log_std Mean -1.06856 trainer/policy/normal/log_std Std 0.561842 trainer/policy/normal/log_std Max -0.0836611 trainer/policy/normal/log_std Min -2.61456 eval/num steps total 75611 eval/num paths total 145 eval/path length Mean 587 eval/path length Std 0 eval/path length Max 587 eval/path length Min 587 eval/Rewards Mean 3.20943 eval/Rewards Std 0.741558 eval/Rewards Max 5.34482 eval/Rewards Min 0.987147 eval/Returns Mean 1883.93 eval/Returns Std 0 eval/Returns Max 1883.93 eval/Returns Min 1883.93 eval/Actions Mean 0.151953 eval/Actions Std 0.593328 eval/Actions Max 0.997949 eval/Actions Min -0.996288 eval/Num Paths 1 eval/Average Returns 1883.93 eval/normalized_score 58.5086 time/evaluation sampling (s) 0.891978 time/logging (s) 0.00269101 time/sampling batch (s) 0.269797 time/saving (s) 0.00329099 time/training (s) 4.2339 time/epoch (s) 5.40166 
time/total (s) 30863.5 Epoch -883 ---------------------------------- --------------- 2022-05-10 21:44:55.536377 PDT | [0] Epoch -882 finished ---------------------------------- --------------- epoch -882 replay_buffer/size 999996 trainer/num train calls 119000 trainer/Policy Loss -2.26403 trainer/Log Pis Mean 2.1292 trainer/Log Pis Std 2.69178 trainer/Log Pis Max 14.9857 trainer/Log Pis Min -7.12493 trainer/policy/mean Mean 0.145432 trainer/policy/mean Std 0.616594 trainer/policy/mean Max 0.997294 trainer/policy/mean Min -0.998164 trainer/policy/normal/std Mean 0.392716 trainer/policy/normal/std Std 0.189561 trainer/policy/normal/std Max 0.967067 trainer/policy/normal/std Min 0.0744529 trainer/policy/normal/log_std Mean -1.08223 trainer/policy/normal/log_std Std 0.583799 trainer/policy/normal/log_std Max -0.0334874 trainer/policy/normal/log_std Min -2.59759 eval/num steps total 76149 eval/num paths total 146 eval/path length Mean 538 eval/path length Std 0 eval/path length Max 538 eval/path length Min 538 eval/Rewards Mean 3.1917 eval/Rewards Std 0.866266 eval/Rewards Max 5.42986 eval/Rewards Min 0.986106 eval/Returns Mean 1717.13 eval/Returns Std 0 eval/Returns Max 1717.13 eval/Returns Min 1717.13 eval/Actions Mean 0.153505 eval/Actions Std 0.584842 eval/Actions Max 0.998045 eval/Actions Min -0.998679 eval/Num Paths 1 eval/Average Returns 1717.13 eval/normalized_score 53.3836 time/evaluation sampling (s) 0.905586 time/logging (s) 0.00233574 time/sampling batch (s) 0.270738 time/saving (s) 0.00314393 time/training (s) 4.25125 time/epoch (s) 5.43306 time/total (s) 30869 Epoch -882 ---------------------------------- --------------- 2022-05-10 21:45:00.987622 PDT | [0] Epoch -881 finished ---------------------------------- --------------- epoch -881 replay_buffer/size 999996 trainer/num train calls 120000 trainer/Policy Loss -2.12232 trainer/Log Pis Mean 1.94728 trainer/Log Pis Std 2.58324 trainer/Log Pis Max 15.5346 trainer/Log Pis Min -6.35904 trainer/policy/mean 
Mean 0.148116 trainer/policy/mean Std 0.604086 trainer/policy/mean Max 0.998308 trainer/policy/mean Min -0.999422 trainer/policy/normal/std Mean 0.396363 trainer/policy/normal/std Std 0.188584 trainer/policy/normal/std Max 0.91133 trainer/policy/normal/std Min 0.0761803 trainer/policy/normal/log_std Mean -1.06733 trainer/policy/normal/log_std Std 0.570919 trainer/policy/normal/log_std Max -0.0928507 trainer/policy/normal/log_std Min -2.57465 eval/num steps total 76695 eval/num paths total 147 eval/path length Mean 546 eval/path length Std 0 eval/path length Max 546 eval/path length Min 546 eval/Rewards Mean 3.21483 eval/Rewards Std 0.77347 eval/Rewards Max 4.63079 eval/Rewards Min 0.98593 eval/Returns Mean 1755.3 eval/Returns Std 0 eval/Returns Max 1755.3 eval/Returns Min 1755.3 eval/Actions Mean 0.136547 eval/Actions Std 0.581866 eval/Actions Max 0.994162 eval/Actions Min -0.998332 eval/Num Paths 1 eval/Average Returns 1755.3 eval/normalized_score 54.5561 time/evaluation sampling (s) 0.931524 time/logging (s) 0.00237917 time/sampling batch (s) 0.269712 time/saving (s) 0.00345399 time/training (s) 4.22419 time/epoch (s) 5.43126 time/total (s) 30874.4 Epoch -881 ---------------------------------- --------------- 2022-05-10 21:45:06.439273 PDT | [0] Epoch -880 finished ---------------------------------- --------------- epoch -880 replay_buffer/size 999996 trainer/num train calls 121000 trainer/Policy Loss -2.18157 trainer/Log Pis Mean 2.21001 trainer/Log Pis Std 2.51668 trainer/Log Pis Max 9.46343 trainer/Log Pis Min -5.64426 trainer/policy/mean Mean 0.149955 trainer/policy/mean Std 0.619748 trainer/policy/mean Max 0.998309 trainer/policy/mean Min -0.997956 trainer/policy/normal/std Mean 0.40144 trainer/policy/normal/std Std 0.191613 trainer/policy/normal/std Max 0.985419 trainer/policy/normal/std Min 0.0725231 trainer/policy/normal/log_std Mean -1.05642 trainer/policy/normal/log_std Std 0.576146 trainer/policy/normal/log_std Max -0.0146886 
trainer/policy/normal/log_std Min -2.62385 eval/num steps total 77249 eval/num paths total 148 eval/path length Mean 554 eval/path length Std 0 eval/path length Max 554 eval/path length Min 554 eval/Rewards Mean 3.16228 eval/Rewards Std 0.839899 eval/Rewards Max 4.95473 eval/Rewards Min 0.994725 eval/Returns Mean 1751.9 eval/Returns Std 0 eval/Returns Max 1751.9 eval/Returns Min 1751.9 eval/Actions Mean 0.160277 eval/Actions Std 0.572822 eval/Actions Max 0.997405 eval/Actions Min -0.998417 eval/Num Paths 1 eval/Average Returns 1751.9 eval/normalized_score 54.4518 time/evaluation sampling (s) 0.926193 time/logging (s) 0.00250435 time/sampling batch (s) 0.26927 time/saving (s) 0.00323839 time/training (s) 4.23088 time/epoch (s) 5.43209 time/total (s) 30879.9 Epoch -880 ---------------------------------- --------------- 2022-05-10 21:45:11.898880 PDT | [0] Epoch -879 finished ---------------------------------- --------------- epoch -879 replay_buffer/size 999996 trainer/num train calls 122000 trainer/Policy Loss -2.10392 trainer/Log Pis Mean 2.08592 trainer/Log Pis Std 2.52623 trainer/Log Pis Max 10.0989 trainer/Log Pis Min -5.69888 trainer/policy/mean Mean 0.151456 trainer/policy/mean Std 0.614176 trainer/policy/mean Max 0.996578 trainer/policy/mean Min -0.998472 trainer/policy/normal/std Mean 0.396812 trainer/policy/normal/std Std 0.187581 trainer/policy/normal/std Max 0.955256 trainer/policy/normal/std Min 0.0666925 trainer/policy/normal/log_std Mean -1.06701 trainer/policy/normal/log_std Std 0.576145 trainer/policy/normal/log_std Max -0.0457759 trainer/policy/normal/log_std Min -2.70766 eval/num steps total 77885 eval/num paths total 149 eval/path length Mean 636 eval/path length Std 0 eval/path length Max 636 eval/path length Min 636 eval/Rewards Mean 3.1847 eval/Rewards Std 0.79204 eval/Rewards Max 4.75926 eval/Rewards Min 0.990298 eval/Returns Mean 2025.47 eval/Returns Std 0 eval/Returns Max 2025.47 eval/Returns Min 2025.47 eval/Actions Mean 0.140726 
eval/Actions Std 0.566645 eval/Actions Max 0.997451 eval/Actions Min -0.998194 eval/Num Paths 1 eval/Average Returns 2025.47 eval/normalized_score 62.8574 time/evaluation sampling (s) 0.909349 time/logging (s) 0.00273301 time/sampling batch (s) 0.270521 time/saving (s) 0.00320193 time/training (s) 4.25412 time/epoch (s) 5.43993 time/total (s) 30885.3 Epoch -879 ---------------------------------- --------------- 2022-05-10 21:45:17.330348 PDT | [0] Epoch -878 finished ---------------------------------- --------------- epoch -878 replay_buffer/size 999996 trainer/num train calls 123000 trainer/Policy Loss -2.05585 trainer/Log Pis Mean 2.09753 trainer/Log Pis Std 2.73243 trainer/Log Pis Max 12.5448 trainer/Log Pis Min -4.94632 trainer/policy/mean Mean 0.142131 trainer/policy/mean Std 0.610477 trainer/policy/mean Max 0.997763 trainer/policy/mean Min -0.998619 trainer/policy/normal/std Mean 0.389039 trainer/policy/normal/std Std 0.187942 trainer/policy/normal/std Max 0.9766 trainer/policy/normal/std Min 0.074567 trainer/policy/normal/log_std Mean -1.09056 trainer/policy/normal/log_std Std 0.58036 trainer/policy/normal/log_std Max -0.0236776 trainer/policy/normal/log_std Min -2.59606 eval/num steps total 78537 eval/num paths total 150 eval/path length Mean 652 eval/path length Std 0 eval/path length Max 652 eval/path length Min 652 eval/Rewards Mean 3.2015 eval/Rewards Std 0.758712 eval/Rewards Max 4.76962 eval/Rewards Min 0.991564 eval/Returns Mean 2087.38 eval/Returns Std 0 eval/Returns Max 2087.38 eval/Returns Min 2087.38 eval/Actions Mean 0.164116 eval/Actions Std 0.599771 eval/Actions Max 0.996894 eval/Actions Min -0.998148 eval/Num Paths 1 eval/Average Returns 2087.38 eval/normalized_score 64.7596 time/evaluation sampling (s) 0.879947 time/logging (s) 0.00270019 time/sampling batch (s) 0.270604 time/saving (s) 0.00295433 time/training (s) 4.25518 time/epoch (s) 5.41138 time/total (s) 30890.7 Epoch -878 ---------------------------------- --------------- 2022-05-10 
21:45:22.784224 PDT | [0] Epoch -877 finished ---------------------------------- --------------- epoch -877 replay_buffer/size 999996 trainer/num train calls 124000 trainer/Policy Loss -2.278 trainer/Log Pis Mean 2.36374 trainer/Log Pis Std 2.72282 trainer/Log Pis Max 13.7833 trainer/Log Pis Min -4.46064 trainer/policy/mean Mean 0.109092 trainer/policy/mean Std 0.634034 trainer/policy/mean Max 0.99828 trainer/policy/mean Min -0.996655 trainer/policy/normal/std Mean 0.392378 trainer/policy/normal/std Std 0.18781 trainer/policy/normal/std Max 1.00781 trainer/policy/normal/std Min 0.0756669 trainer/policy/normal/log_std Mean -1.08094 trainer/policy/normal/log_std Std 0.580323 trainer/policy/normal/log_std Max 0.00777918 trainer/policy/normal/log_std Min -2.58141 eval/num steps total 79039 eval/num paths total 151 eval/path length Mean 502 eval/path length Std 0 eval/path length Max 502 eval/path length Min 502 eval/Rewards Mean 3.10514 eval/Rewards Std 0.764112 eval/Rewards Max 4.6733 eval/Rewards Min 0.98469 eval/Returns Mean 1558.78 eval/Returns Std 0 eval/Returns Max 1558.78 eval/Returns Min 1558.78 eval/Actions Mean 0.157407 eval/Actions Std 0.590079 eval/Actions Max 0.998149 eval/Actions Min -0.99673 eval/Num Paths 1 eval/Average Returns 1558.78 eval/normalized_score 48.518 time/evaluation sampling (s) 0.891909 time/logging (s) 0.0023799 time/sampling batch (s) 0.271732 time/saving (s) 0.00311592 time/training (s) 4.26443 time/epoch (s) 5.43356 time/total (s) 30896.1 Epoch -877 ---------------------------------- --------------- 2022-05-10 21:45:28.229357 PDT | [0] Epoch -876 finished ---------------------------------- --------------- epoch -876 replay_buffer/size 999996 trainer/num train calls 125000 trainer/Policy Loss -2.0045 trainer/Log Pis Mean 2.00711 trainer/Log Pis Std 2.41259 trainer/Log Pis Max 9.65265 trainer/Log Pis Min -4.00138 trainer/policy/mean Mean 0.12673 trainer/policy/mean Std 0.60367 trainer/policy/mean Max 0.998018 trainer/policy/mean Min 
-0.998045 trainer/policy/normal/std Mean 0.395839 trainer/policy/normal/std Std 0.185454 trainer/policy/normal/std Max 0.969942 trainer/policy/normal/std Min 0.0745682 trainer/policy/normal/log_std Mean -1.06375 trainer/policy/normal/log_std Std 0.560967 trainer/policy/normal/log_std Max -0.030519 trainer/policy/normal/log_std Min -2.59604 eval/num steps total 79864 eval/num paths total 153 eval/path length Mean 412.5 eval/path length Std 75.5 eval/path length Max 488 eval/path length Min 337 eval/Rewards Mean 3.0951 eval/Rewards Std 0.903571 eval/Rewards Max 5.37726 eval/Rewards Min 0.990608 eval/Returns Mean 1276.73 eval/Returns Std 243.245 eval/Returns Max 1519.97 eval/Returns Min 1033.48 eval/Actions Mean 0.143895 eval/Actions Std 0.5664 eval/Actions Max 0.997604 eval/Actions Min -0.998214 eval/Num Paths 2 eval/Average Returns 1276.73 eval/normalized_score 39.8516 time/evaluation sampling (s) 0.88996 time/logging (s) 0.00321281 time/sampling batch (s) 0.270649 time/saving (s) 0.00295944 time/training (s) 4.25922 time/epoch (s) 5.426 time/total (s) 30901.6 Epoch -876 ---------------------------------- --------------- 2022-05-10 21:45:33.673243 PDT | [0] Epoch -875 finished ---------------------------------- --------------- epoch -875 replay_buffer/size 999996 trainer/num train calls 126000 trainer/Policy Loss -2.16131 trainer/Log Pis Mean 2.19077 trainer/Log Pis Std 2.54054 trainer/Log Pis Max 10.47 trainer/Log Pis Min -4.44819 trainer/policy/mean Mean 0.135662 trainer/policy/mean Std 0.617908 trainer/policy/mean Max 0.997419 trainer/policy/mean Min -0.996693 trainer/policy/normal/std Mean 0.394361 trainer/policy/normal/std Std 0.193522 trainer/policy/normal/std Max 1.05055 trainer/policy/normal/std Min 0.068287 trainer/policy/normal/log_std Mean -1.08176 trainer/policy/normal/log_std Std 0.590497 trainer/policy/normal/log_std Max 0.0493139 trainer/policy/normal/log_std Min -2.68404 eval/num steps total 80862 eval/num paths total 155 eval/path length Mean 499 
eval/path length Std 42 eval/path length Max 541 eval/path length Min 457 eval/Rewards Mean 3.07101 eval/Rewards Std 0.868892 eval/Rewards Max 5.25703 eval/Rewards Min 0.98961 eval/Returns Mean 1532.44 eval/Returns Std 157.527 eval/Returns Max 1689.96 eval/Returns Min 1374.91 eval/Actions Mean 0.143184 eval/Actions Std 0.587548 eval/Actions Max 0.996741 eval/Actions Min -0.997288 eval/Num Paths 2 eval/Average Returns 1532.44 eval/normalized_score 47.7085 time/evaluation sampling (s) 0.889926 time/logging (s) 0.00363368 time/sampling batch (s) 0.27103 time/saving (s) 0.00295484 time/training (s) 4.25683 time/epoch (s) 5.42438 time/total (s) 30907 Epoch -875 ---------------------------------- --------------- 2022-05-10 21:45:39.108540 PDT | [0] Epoch -874 finished ---------------------------------- --------------- epoch -874 replay_buffer/size 999996 trainer/num train calls 127000 trainer/Policy Loss -2.20346 trainer/Log Pis Mean 2.13069 trainer/Log Pis Std 2.63191 trainer/Log Pis Max 12.4347 trainer/Log Pis Min -5.93022 trainer/policy/mean Mean 0.158347 trainer/policy/mean Std 0.620324 trainer/policy/mean Max 0.997011 trainer/policy/mean Min -0.999678 trainer/policy/normal/std Mean 0.389487 trainer/policy/normal/std Std 0.186875 trainer/policy/normal/std Max 1.03702 trainer/policy/normal/std Min 0.0718673 trainer/policy/normal/log_std Mean -1.08835 trainer/policy/normal/log_std Std 0.581209 trainer/policy/normal/log_std Max 0.0363558 trainer/policy/normal/log_std Min -2.63293 eval/num steps total 81855 eval/num paths total 157 eval/path length Mean 496.5 eval/path length Std 5.5 eval/path length Max 502 eval/path length Min 491 eval/Rewards Mean 3.10403 eval/Rewards Std 0.768157 eval/Rewards Max 4.76717 eval/Rewards Min 0.983953 eval/Returns Mean 1541.15 eval/Returns Std 34.5969 eval/Returns Max 1575.75 eval/Returns Min 1506.56 eval/Actions Mean 0.154763 eval/Actions Std 0.588193 eval/Actions Max 0.998043 eval/Actions Min -0.997655 eval/Num Paths 2 eval/Average 
Returns  1541.15
eval/normalized_score  47.9763
time/evaluation sampling (s)  0.884195
time/logging (s)  0.00370289
time/sampling batch (s)  0.276652
time/saving (s)  0.00299973
time/training (s)  4.24786
time/epoch (s)  5.41541
time/total (s)  30912.4
Epoch -874
----------------------------------  ----------------
2022-05-10 21:45:44.499786 PDT | [0] Epoch -873 finished
----------------------------------  ----------------
epoch  -873
replay_buffer/size  999996
trainer/num train calls  128000
trainer/Policy Loss  -2.13556
trainer/Log Pis Mean  2.15669
trainer/Log Pis Std  2.55374
trainer/Log Pis Max  11.6519
trainer/Log Pis Min  -4.53425
trainer/policy/mean Mean  0.135024
trainer/policy/mean Std  0.61449
trainer/policy/mean Max  0.99805
trainer/policy/mean Min  -0.997951
trainer/policy/normal/std Mean  0.385339
trainer/policy/normal/std Std  0.182465
trainer/policy/normal/std Max  0.901792
trainer/policy/normal/std Min  0.0711985
trainer/policy/normal/log_std Mean  -1.09639
trainer/policy/normal/log_std Std  0.575358
trainer/policy/normal/log_std Max  -0.103372
trainer/policy/normal/log_std Min  -2.64228
eval/num steps total  82841
eval/num paths total  159
eval/path length Mean  493
eval/path length Std  1
eval/path length Max  494
eval/path length Min  492
eval/Rewards Mean  3.11885
eval/Rewards Std  0.796839
eval/Rewards Max  4.87521
eval/Rewards Min  0.987108
eval/Returns Mean  1537.59
eval/Returns Std  26.8762
eval/Returns Max  1564.47
eval/Returns Min  1510.72
eval/Actions Mean  0.156648
eval/Actions Std  0.58389
eval/Actions Max  0.998544
eval/Actions Min  -0.998781
eval/Num Paths  2
eval/Average Returns  1537.59
eval/normalized_score  47.8669
time/evaluation sampling (s)  0.906571
time/logging (s)  0.003838
time/sampling batch (s)  0.26686
time/saving (s)  0.00321207
time/training (s)  4.19105
time/epoch (s)  5.37153
time/total (s)  30917.8
Epoch -873
----------------------------------  ----------------
2022-05-10 21:45:49.896295 PDT | [0] Epoch -872 finished
----------------------------------  ----------------
epoch  -872
replay_buffer/size  999996
trainer/num train calls  129000
trainer/Policy Loss  -2.10927
trainer/Log Pis Mean  2.00903
trainer/Log Pis Std  2.52267
trainer/Log Pis Max  11.4436
trainer/Log Pis Min  -6.09359
trainer/policy/mean Mean  0.149796
trainer/policy/mean Std  0.618099
trainer/policy/mean Max  0.997648
trainer/policy/mean Min  -0.99739
trainer/policy/normal/std Mean  0.395711
trainer/policy/normal/std Std  0.189459
trainer/policy/normal/std Max  0.959703
trainer/policy/normal/std Min  0.0766065
trainer/policy/normal/log_std Mean  -1.07196
trainer/policy/normal/log_std Std  0.578464
trainer/policy/normal/log_std Max  -0.0411311
trainer/policy/normal/log_std Min  -2.56907
eval/num steps total  83831
eval/num paths total  161
eval/path length Mean  495
eval/path length Std  1
eval/path length Max  496
eval/path length Min  494
eval/Rewards Mean  3.10759
eval/Rewards Std  0.787559
eval/Rewards Max  4.99471
eval/Rewards Min  0.983549
eval/Returns Mean  1538.26
eval/Returns Std  12.6706
eval/Returns Max  1550.93
eval/Returns Min  1525.59
eval/Actions Mean  0.157132
eval/Actions Std  0.588669
eval/Actions Max  0.998166
eval/Actions Min  -0.999427
eval/Num Paths  2
eval/Average Returns  1538.26
eval/normalized_score  47.8873
time/evaluation sampling (s)  0.879747
time/logging (s)  0.00409009
time/sampling batch (s)  0.26916
time/saving (s)  0.0032796
time/training (s)  4.2205
time/epoch (s)  5.37678
time/total (s)  30923.2
Epoch -872
----------------------------------  ----------------
2022-05-10 21:45:55.279604 PDT | [0] Epoch -871 finished
----------------------------------  ----------------
epoch  -871
replay_buffer/size  999996
trainer/num train calls  130000
trainer/Policy Loss  -2.30484
trainer/Log Pis Mean  2.27672
trainer/Log Pis Std  2.43961
trainer/Log Pis Max  9.8461
trainer/Log Pis Min  -3.95924
trainer/policy/mean Mean  0.15535
trainer/policy/mean Std  0.616192
trainer/policy/mean Max  0.997087
trainer/policy/mean Min  -0.998248
trainer/policy/normal/std Mean  0.384809
trainer/policy/normal/std Std  0.181549
trainer/policy/normal/std Max  0.964493
trainer/policy/normal/std Min  0.0735422
trainer/policy/normal/log_std Mean  -1.09658
trainer/policy/normal/log_std Std  0.572043
trainer/policy/normal/log_std Max  -0.0361526
trainer/policy/normal/log_std Min  -2.6099
eval/num steps total  84359
eval/num paths total  162
eval/path length Mean  528
eval/path length Std  0
eval/path length Max  528
eval/path length Min  528
eval/Rewards Mean  3.2059
eval/Rewards Std  0.839589
eval/Rewards Max  5.52258
eval/Rewards Min  0.988399
eval/Returns Mean  1692.71
eval/Returns Std  0
eval/Returns Max  1692.71
eval/Returns Min  1692.71
eval/Actions Mean  0.144414
eval/Actions Std  0.578687
eval/Actions Max  0.997905
eval/Actions Min  -0.998468
eval/Num Paths  1
eval/Average Returns  1692.71
eval/normalized_score  52.6332
time/evaluation sampling (s)  0.88946
time/logging (s)  0.00224103
time/sampling batch (s)  0.267461
time/saving (s)  0.00315891
time/training (s)  4.19918
time/epoch (s)  5.36151
time/total (s)  30928.5
Epoch -871
----------------------------------  ----------------
2022-05-10 21:46:00.631025 PDT | [0] Epoch -870 finished
----------------------------------  ----------------
epoch  -870
replay_buffer/size  999996
trainer/num train calls  131000
trainer/Policy Loss  -2.17408
trainer/Log Pis Mean  2.18209
trainer/Log Pis Std  2.52316
trainer/Log Pis Max  11.9467
trainer/Log Pis Min  -6.22386
trainer/policy/mean Mean  0.175035
trainer/policy/mean Std  0.607679
trainer/policy/mean Max  0.998188
trainer/policy/mean Min  -0.998185
trainer/policy/normal/std Mean  0.390943
trainer/policy/normal/std Std  0.186517
trainer/policy/normal/std Max  0.933978
trainer/policy/normal/std Min  0.0673347
trainer/policy/normal/log_std Mean  -1.08286
trainer/policy/normal/log_std Std  0.575916
trainer/policy/normal/log_std Max  -0.0683029
trainer/policy/normal/log_std Min  -2.69808
eval/num steps total  85024
eval/num paths total  163
eval/path length Mean  665
eval/path length Std  0
eval/path length Max  665
eval/path length Min  665
eval/Rewards Mean  3.1983
eval/Rewards Std  0.68076
eval/Rewards Max  4.6103
eval/Rewards Min  0.986045
eval/Returns Mean  2126.87
eval/Returns Std  0
eval/Returns Max  2126.87
eval/Returns Min  2126.87
eval/Actions Mean  0.155792
eval/Actions Std  0.608487
eval/Actions Max  0.997677
eval/Actions Min  -0.998151
eval/Num Paths  1
eval/Average Returns  2126.87
eval/normalized_score  65.973
time/evaluation sampling (s)  0.901452
time/logging (s)  0.00272213
time/sampling batch (s)  0.264749
time/saving (s)  0.00297084
time/training (s)  4.16041
time/epoch (s)  5.33231
time/total (s)  30933.9
Epoch -870
----------------------------------  ----------------
2022-05-10 21:46:06.047960 PDT | [0] Epoch -869 finished
----------------------------------  ----------------
epoch  -869
replay_buffer/size  999996
trainer/num train calls  132000
trainer/Policy Loss  -2.23302
trainer/Log Pis Mean  2.18421
trainer/Log Pis Std  2.58704
trainer/Log Pis Max  12.2179
trainer/Log Pis Min  -5.15282
trainer/policy/mean Mean  0.129319
trainer/policy/mean Std  0.620926
trainer/policy/mean Max  0.997138
trainer/policy/mean Min  -0.998047
trainer/policy/normal/std Mean  0.395239
trainer/policy/normal/std Std  0.186789
trainer/policy/normal/std Max  0.957044
trainer/policy/normal/std Min  0.0718485
trainer/policy/normal/log_std Mean  -1.06984
trainer/policy/normal/log_std Std  0.572472
trainer/policy/normal/log_std Max  -0.0439057
trainer/policy/normal/log_std Min  -2.6332
eval/num steps total  85558
eval/num paths total  164
eval/path length Mean  534
eval/path length Std  0
eval/path length Max  534
eval/path length Min  534
eval/Rewards Mean  3.16035
eval/Rewards Std  0.872796
eval/Rewards Max  5.56035
eval/Rewards Min  0.989137
eval/Returns Mean  1687.63
eval/Returns Std  0
eval/Returns Max  1687.63
eval/Returns Min  1687.63
eval/Actions Mean  0.160234
eval/Actions Std  0.575281
eval/Actions Max  0.997766
eval/Actions Min  -0.998421
eval/Num Paths  1
eval/Average Returns  1687.63
eval/normalized_score  52.4769
time/evaluation sampling (s)  0.922799
time/logging (s)  0.00235864
time/sampling batch (s)  0.271178
time/saving (s)  0.00297396
time/training (s)  4.19784
time/epoch (s)  5.39715
time/total (s)  30939.3
Epoch -869
----------------------------------  ----------------
2022-05-10 21:46:11.497325 PDT | [0] Epoch -868 finished
----------------------------------  ----------------
epoch  -868
replay_buffer/size  999996
trainer/num train calls  133000
trainer/Policy Loss  -2.01294
trainer/Log Pis Mean  2.01284
trainer/Log Pis Std  2.51528
trainer/Log Pis Max  11.2604
trainer/Log Pis Min  -5.43306
trainer/policy/mean Mean  0.134935
trainer/policy/mean Std  0.608876
trainer/policy/mean Max  0.997045
trainer/policy/mean Min  -0.996883
trainer/policy/normal/std Mean  0.388142
trainer/policy/normal/std Std  0.184673
trainer/policy/normal/std Max  0.936155
trainer/policy/normal/std Min  0.0748375
trainer/policy/normal/log_std Mean  -1.08885
trainer/policy/normal/log_std Std  0.573077
trainer/policy/normal/log_std Max  -0.0659742
trainer/policy/normal/log_std Min  -2.59244
eval/num steps total  86103
eval/num paths total  165
eval/path length Mean  545
eval/path length Std  0
eval/path length Max  545
eval/path length Min  545
eval/Rewards Mean  3.16183
eval/Rewards Std  0.838251
eval/Rewards Max  4.91502
eval/Rewards Min  0.98608
eval/Returns Mean  1723.2
eval/Returns Std  0
eval/Returns Max  1723.2
eval/Returns Min  1723.2
eval/Actions Mean  0.153298
eval/Actions Std  0.565924
eval/Actions Max  0.995327
eval/Actions Min  -0.998099
eval/Num Paths  1
eval/Average Returns  1723.2
eval/normalized_score  53.5698
time/evaluation sampling (s)  0.972299
time/logging (s)  0.00243993
time/sampling batch (s)  0.26489
time/saving (s)  0.00314978
time/training (s)  4.1871
time/epoch (s)  5.42988
time/total (s)  30944.7
Epoch -868
----------------------------------  ----------------
2022-05-10 21:46:16.882891 PDT | [0] Epoch -867 finished
----------------------------------  ----------------
epoch  -867
replay_buffer/size  999996
trainer/num train calls  134000
trainer/Policy Loss  -2.15115
trainer/Log Pis Mean  2.00544
trainer/Log Pis Std  2.55887
trainer/Log Pis Max  9.61475
trainer/Log Pis Min  -7.0578
trainer/policy/mean Mean  0.090169
trainer/policy/mean Std  0.613667
trainer/policy/mean Max  0.995188
trainer/policy/mean Min  -0.99864
trainer/policy/normal/std Mean  0.385265
trainer/policy/normal/std Std  0.182619
trainer/policy/normal/std Max  0.965031
trainer/policy/normal/std Min  0.0721441
trainer/policy/normal/log_std Mean  -1.09543
trainer/policy/normal/log_std Std  0.57106
trainer/policy/normal/log_std Max  -0.0355947
trainer/policy/normal/log_std Min  -2.62909
eval/num steps total  87075
eval/num paths total  167
eval/path length Mean  486
eval/path length Std  5
eval/path length Max  491
eval/path length Min  481
eval/Rewards Mean  3.16273
eval/Rewards Std  0.812497
eval/Rewards Max  4.87796
eval/Rewards Min  0.98637
eval/Returns Mean  1537.09
eval/Returns Std  1.34538
eval/Returns Max  1538.43
eval/Returns Min  1535.74
eval/Actions Mean  0.149188
eval/Actions Std  0.590429
eval/Actions Max  0.997743
eval/Actions Min  -0.998201
eval/Num Paths  2
eval/Average Returns  1537.09
eval/normalized_score  47.8515
time/evaluation sampling (s)  0.906781
time/logging (s)  0.00370333
time/sampling batch (s)  0.26626
time/saving (s)  0.00306091
time/training (s)  4.1876
time/epoch (s)  5.36741
time/total (s)  30950.1
Epoch -867
----------------------------------  ----------------
2022-05-10 21:46:22.237044 PDT | [0] Epoch -866 finished
----------------------------------  ----------------
epoch  -866
replay_buffer/size  999996
trainer/num train calls  135000
trainer/Policy Loss  -2.13986
trainer/Log Pis Mean  2.20589
trainer/Log Pis Std  2.75471
trainer/Log Pis Max  16.5176
trainer/Log Pis Min  -7.01962
trainer/policy/mean Mean  0.131803
trainer/policy/mean Std  0.624071
trainer/policy/mean Max  0.998561
trainer/policy/mean Min  -0.999163
trainer/policy/normal/std Mean  0.395338
trainer/policy/normal/std Std  0.192916
trainer/policy/normal/std Max  0.980756
trainer/policy/normal/std Min  0.0706181
trainer/policy/normal/log_std Mean  -1.08039
trainer/policy/normal/log_std Std  0.594414
trainer/policy/normal/log_std Max  -0.0194318
trainer/policy/normal/log_std Min  -2.65047
eval/num steps total  87658
eval/num paths total  168
eval/path length Mean  583
eval/path length Std  0
eval/path length Max  583
eval/path length Min  583
eval/Rewards Mean  3.19285
eval/Rewards Std  0.770571
eval/Rewards Max  5.27327
eval/Rewards Min  0.987727
eval/Returns Mean  1861.43
eval/Returns Std  0
eval/Returns Max  1861.43
eval/Returns Min  1861.43
eval/Actions Mean  0.149337
eval/Actions Std  0.592302
eval/Actions Max  0.998491
eval/Actions Min  -0.998385
eval/Num Paths  1
eval/Average Returns  1861.43
eval/normalized_score  57.8172
time/evaluation sampling (s)  0.882882
time/logging (s)  0.00241771
time/sampling batch (s)  0.265771
time/saving (s)  0.0029497
time/training (s)  4.17911
time/epoch (s)  5.33313
time/total (s)  30955.4
Epoch -866
----------------------------------  ----------------
2022-05-10 21:46:27.629791 PDT | [0] Epoch -865 finished
----------------------------------  ----------------
epoch  -865
replay_buffer/size  999996
trainer/num train calls  136000
trainer/Policy Loss  -2.4467
trainer/Log Pis Mean  2.13019
trainer/Log Pis Std  2.57377
trainer/Log Pis Max  11.3925
trainer/Log Pis Min  -4.54694
trainer/policy/mean Mean  0.118033
trainer/policy/mean Std  0.6346
trainer/policy/mean Max  0.997585
trainer/policy/mean Min  -0.997671
trainer/policy/normal/std Mean  0.387034
trainer/policy/normal/std Std  0.188399
trainer/policy/normal/std Max  0.981025
trainer/policy/normal/std Min  0.067484
trainer/policy/normal/log_std Mean  -1.10135
trainer/policy/normal/log_std Std  0.594752
trainer/policy/normal/log_std Max  -0.0191572
trainer/policy/normal/log_std Min  -2.69586
eval/num steps total  88551
eval/num paths total  170
eval/path length Mean  446.5
eval/path length Std  93.5
eval/path length Max  540
eval/path length Min  353
eval/Rewards Mean  3.11067
eval/Rewards Std  0.904517
eval/Rewards Max  5.31518
eval/Rewards Min  0.991198
eval/Returns Mean  1388.91
eval/Returns Std  320.041
eval/Returns Max  1708.95
eval/Returns Min  1068.87
eval/Actions Mean  0.12955
eval/Actions Std  0.565504
eval/Actions Max  0.99869
eval/Actions Min  -0.998161
eval/Num Paths  2
eval/Average Returns  1388.91
eval/normalized_score  43.2987
time/evaluation sampling (s)  0.879538
time/logging (s)  0.00328641
time/sampling batch (s)  0.268551
time/saving (s)  0.00297001
time/training (s)  4.21968
time/epoch (s)  5.37403
time/total (s)  30960.8
Epoch -865
----------------------------------  ----------------
2022-05-10 21:46:33.014344 PDT | [0] Epoch -864 finished
----------------------------------  ----------------
epoch  -864
replay_buffer/size  999996
trainer/num train calls  137000
trainer/Policy Loss  -2.20524
trainer/Log Pis Mean  2.07943
trainer/Log Pis Std  2.47925
trainer/Log Pis Max  9.11946
trainer/Log Pis Min  -5.85606
trainer/policy/mean Mean  0.155122
trainer/policy/mean Std  0.610899
trainer/policy/mean Max  0.997717
trainer/policy/mean Min  -0.998184
trainer/policy/normal/std Mean  0.39653
trainer/policy/normal/std Std  0.189583
trainer/policy/normal/std Max  1.02848
trainer/policy/normal/std Min  0.0684603
trainer/policy/normal/log_std Mean  -1.06985
trainer/policy/normal/log_std Std  0.579761
trainer/policy/normal/log_std Max  0.0280837
trainer/policy/normal/log_std Min  -2.6815
eval/num steps total  89362
eval/num paths total  171
eval/path length Mean  811
eval/path length Std  0
eval/path length Max  811
eval/path length Min  811
eval/Rewards Mean  3.25106
eval/Rewards Std  0.620546
eval/Rewards Max  4.73891
eval/Rewards Min  0.987024
eval/Returns Mean  2636.61
eval/Returns Std  0
eval/Returns Max  2636.61
eval/Returns Min  2636.61
eval/Actions Mean  0.146613
eval/Actions Std  0.603863
eval/Actions Max  0.997833
eval/Actions Min  -0.998104
eval/Num Paths  1
eval/Average Returns  2636.61
eval/normalized_score  81.6354
time/evaluation sampling (s)  0.879155
time/logging (s)  0.00305588
time/sampling batch (s)  0.267272
time/saving (s)  0.00296974
time/training (s)  4.21197
time/epoch (s)  5.36442
time/total (s)  30966.2
Epoch -864
----------------------------------  ----------------
2022-05-10 21:46:38.375707 PDT | [0] Epoch -863 finished
----------------------------------  ----------------
epoch  -863
replay_buffer/size  999996
trainer/num train calls  138000
trainer/Policy Loss  -2.24875
trainer/Log Pis Mean  2.2473
trainer/Log Pis Std  2.72614
trainer/Log Pis Max  12.3061
trainer/Log Pis Min  -5.9804
trainer/policy/mean Mean  0.130515
trainer/policy/mean Std  0.620451
trainer/policy/mean Max  0.998726
trainer/policy/mean Min  -0.999392
trainer/policy/normal/std Mean  0.390021
trainer/policy/normal/std Std  0.185192
trainer/policy/normal/std Max  0.936281
trainer/policy/normal/std Min  0.0708837
trainer/policy/normal/log_std Mean  -1.08675
trainer/policy/normal/log_std Std  0.581712
trainer/policy/normal/log_std Max  -0.0658392
trainer/policy/normal/log_std Min  -2.64672
eval/num steps total  89950
eval/num paths total  172
eval/path length Mean  588
eval/path length Std  0
eval/path length Max  588
eval/path length Min  588
eval/Rewards Mean  3.16832
eval/Rewards Std  0.714945
eval/Rewards Max  5.06196
eval/Rewards Min  0.987633
eval/Returns Mean  1862.97
eval/Returns Std  0
eval/Returns Max  1862.97
eval/Returns Min  1862.97
eval/Actions Mean  0.156003
eval/Actions Std  0.604181
eval/Actions Max  0.99657
eval/Actions Min  -0.997065
eval/Num Paths  1
eval/Average Returns  1862.97
eval/normalized_score  57.8646
time/evaluation sampling (s)  0.884182
time/logging (s)  0.00247799
time/sampling batch (s)  0.265918
time/saving (s)  0.00296495
time/training (s)  4.18536
time/epoch (s)  5.34091
time/total (s)  30971.5
Epoch -863
----------------------------------  ----------------
2022-05-10 21:46:43.791786 PDT | [0] Epoch -862 finished
----------------------------------  ----------------
epoch  -862
replay_buffer/size  999996
trainer/num train calls  139000
trainer/Policy Loss  -1.86939
trainer/Log Pis Mean  1.88605
trainer/Log Pis Std  2.59979
trainer/Log Pis Max  10.1155
trainer/Log Pis Min  -7.15906
trainer/policy/mean Mean  0.130752
trainer/policy/mean Std  0.602008
trainer/policy/mean Max  0.996454
trainer/policy/mean Min  -0.998225
trainer/policy/normal/std Mean  0.385149
trainer/policy/normal/std Std  0.181577
trainer/policy/normal/std Max  0.966742
trainer/policy/normal/std Min  0.0776785
trainer/policy/normal/log_std Mean  -1.09205
trainer/policy/normal/log_std Std  0.562132
trainer/policy/normal/log_std Max  -0.0338241
trainer/policy/normal/log_std Min  -2.55518
eval/num steps total  90922
eval/num paths total  174
eval/path length Mean  486
eval/path length Std  10
eval/path length Max  496
eval/path length Min  476
eval/Rewards Mean  3.1028
eval/Rewards Std  0.808932
eval/Rewards Max  4.7626
eval/Rewards Min  0.982312
eval/Returns Mean  1507.96
eval/Returns Std  21.9339
eval/Returns Max  1529.9
eval/Returns Min  1486.03
eval/Actions Mean  0.147241
eval/Actions Std  0.582328
eval/Actions Max  0.997372
eval/Actions Min  -0.998548
eval/Num Paths  2
eval/Average Returns  1507.96
eval/normalized_score  46.9565
time/evaluation sampling (s)  0.882285
time/logging (s)  0.00373673
time/sampling batch (s)  0.268742
time/saving (s)  0.00308947
time/training (s)  4.23964
time/epoch (s)  5.39749
time/total (s)  30976.9
Epoch -862
----------------------------------  ----------------
2022-05-10 21:46:49.221203 PDT | [0] Epoch -861 finished
----------------------------------  ----------------
epoch  -861
replay_buffer/size  999996
trainer/num train calls  140000
trainer/Policy Loss  -2.0564
trainer/Log Pis Mean  2.0801
trainer/Log Pis Std  2.54546
trainer/Log Pis Max  9.7321
trainer/Log Pis Min  -5.28115
trainer/policy/mean Mean  0.113069
trainer/policy/mean Std  0.6175
trainer/policy/mean Max  0.99722
trainer/policy/mean Min  -0.998164
trainer/policy/normal/std Mean  0.388764
trainer/policy/normal/std Std  0.181406
trainer/policy/normal/std Max  0.959287
trainer/policy/normal/std Min  0.0752832
trainer/policy/normal/log_std Mean  -1.08132
trainer/policy/normal/log_std Std  0.559875
trainer/policy/normal/log_std Max  -0.0415653
trainer/policy/normal/log_std Min  -2.5865
eval/num steps total  91753
eval/num paths total  175
eval/path length Mean  831
eval/path length Std  0
eval/path length Max  831
eval/path length Min  831
eval/Rewards Mean  3.22449
eval/Rewards Std  0.703751
eval/Rewards Max  5.41022
eval/Rewards Min  0.98858
eval/Returns Mean  2679.56
eval/Returns Std  0
eval/Returns Max  2679.56
eval/Returns Min  2679.56
eval/Actions Mean  0.148013
eval/Actions Std  0.599054
eval/Actions Max  0.997861
eval/Actions Min  -0.997943
eval/Num Paths  1
eval/Average Returns  2679.56
eval/normalized_score  82.9549
time/evaluation sampling (s)  0.88408
time/logging (s)  0.00355688
time/sampling batch (s)  0.270113
time/saving (s)  0.00358637
time/training (s)  4.24758
time/epoch (s)  5.40892
time/total (s)  30982.3
Epoch -861
----------------------------------  ----------------
2022-05-10 21:46:54.761631 PDT | [0] Epoch -860 finished
----------------------------------  ----------------
epoch  -860
replay_buffer/size  999996
trainer/num train calls  141000
trainer/Policy Loss  -1.99351
trainer/Log Pis Mean  1.98354
trainer/Log Pis Std  2.53043
trainer/Log Pis Max  9.22395
trainer/Log Pis Min  -6.18355
trainer/policy/mean Mean  0.15907
trainer/policy/mean Std  0.599242
trainer/policy/mean Max  0.998657
trainer/policy/mean Min  -0.997646
trainer/policy/normal/std Mean  0.392883
trainer/policy/normal/std Std  0.183823
trainer/policy/normal/std Max  0.93948
trainer/policy/normal/std Min  0.078624
trainer/policy/normal/log_std Mean  -1.07043
trainer/policy/normal/log_std Std  0.558746
trainer/policy/normal/log_std Max  -0.0624289
trainer/policy/normal/log_std Min  -2.54308
eval/num steps total  92253
eval/num paths total  176
eval/path length Mean  500
eval/path length Std  0
eval/path length Max  500
eval/path length Min  500
eval/Rewards Mean  3.07829
eval/Rewards Std  0.749597
eval/Rewards Max  4.63518
eval/Rewards Min  0.988459
eval/Returns Mean  1539.15
eval/Returns Std  0
eval/Returns Max  1539.15
eval/Returns Min  1539.15
eval/Actions Mean  0.159665
eval/Actions Std  0.589636
eval/Actions Max  0.997439
eval/Actions Min  -0.997923
eval/Num Paths  1
eval/Average Returns  1539.15
eval/normalized_score  47.9147
time/evaluation sampling (s)  0.914297
time/logging (s)  0.00237588
time/sampling batch (s)  0.27494
time/saving (s)  0.00315497
time/training (s)  4.32396
time/epoch (s)  5.51873
time/total (s)  30987.8
Epoch -860
----------------------------------  ----------------
2022-05-10 21:47:00.223496 PDT | [0] Epoch -859 finished
----------------------------------  ----------------
epoch  -859
replay_buffer/size  999996
trainer/num train calls  142000
trainer/Policy Loss  -2.19777
trainer/Log Pis Mean  2.1362
trainer/Log Pis Std  2.63489
trainer/Log Pis Max  10.496
trainer/Log Pis Min  -5.27734
trainer/policy/mean Mean  0.115396
trainer/policy/mean Std  0.620706
trainer/policy/mean Max  0.998159
trainer/policy/mean Min  -0.998368
trainer/policy/normal/std Mean  0.394078
trainer/policy/normal/std Std  0.186233
trainer/policy/normal/std Max  0.979585
trainer/policy/normal/std Min  0.0694915
trainer/policy/normal/log_std Mean  -1.07061
trainer/policy/normal/log_std Std  0.56584
trainer/policy/normal/log_std Max  -0.0206267
trainer/policy/normal/log_std Min  -2.66655
eval/num steps total  92980
eval/num paths total  177
eval/path length Mean  727
eval/path length Std  0
eval/path length Max  727
eval/path length Min  727
eval/Rewards Mean  3.18532
eval/Rewards Std  0.652653
eval/Rewards Max  4.71611
eval/Rewards Min  0.986334
eval/Returns Mean  2315.73
eval/Returns Std  0
eval/Returns Max  2315.73
eval/Returns Min  2315.73
eval/Actions Mean  0.146072
eval/Actions Std  0.59694
eval/Actions Max  0.998569
eval/Actions Min  -0.997474
eval/Num Paths  1
eval/Average Returns  2315.73
eval/normalized_score  71.776
time/evaluation sampling (s)  0.901546
time/logging (s)  0.00290755
time/sampling batch (s)  0.277339
time/saving (s)  0.00297547
time/training (s)  4.25767
time/epoch (s)  5.44243
time/total (s)  30993.3
Epoch -859
----------------------------------  ----------------
2022-05-10 21:47:05.672518 PDT | [0] Epoch -858 finished
----------------------------------  ----------------
epoch  -858
replay_buffer/size  999996
trainer/num train calls  143000
trainer/Policy Loss  -2.11748
trainer/Log Pis Mean  2.03152
trainer/Log Pis Std  2.61775
trainer/Log Pis Max  13.9997
trainer/Log Pis Min  -6.81669
trainer/policy/mean Mean  0.123913
trainer/policy/mean Std  0.613415
trainer/policy/mean Max  0.995778
trainer/policy/mean Min  -0.997964
trainer/policy/normal/std Mean  0.392893
trainer/policy/normal/std Std  0.186801
trainer/policy/normal/std Max  0.944757
trainer/policy/normal/std Min  0.0678943
trainer/policy/normal/log_std Mean  -1.07826
trainer/policy/normal/log_std Std  0.577789
trainer/policy/normal/log_std Max  -0.0568272
trainer/policy/normal/log_std Min  -2.6898
eval/num steps total  93572
eval/num paths total  178
eval/path length Mean  592
eval/path length Std  0
eval/path length Max  592
eval/path length Min  592
eval/Rewards Mean  3.13211
eval/Rewards Std  0.722818
eval/Rewards Max  4.69335
eval/Rewards Min  0.987583
eval/Returns Mean  1854.21
eval/Returns Std  0
eval/Returns Max  1854.21
eval/Returns Min  1854.21
eval/Actions Mean  0.165082
eval/Actions Std  0.597387
eval/Actions Max  0.9971
eval/Actions Min  -0.999463
eval/Num Paths  1
eval/Average Returns  1854.21
eval/normalized_score  57.5953
time/evaluation sampling (s)  0.914785
time/logging (s)  0.00257646
time/sampling batch (s)  0.269731
time/saving (s)  0.00317247
time/training (s)  4.23832
time/epoch (s)  5.42859
time/total (s)  30998.7
Epoch -858
----------------------------------  ----------------
2022-05-10 21:47:11.145419 PDT | [0] Epoch -857 finished
----------------------------------  ----------------
epoch  -857
replay_buffer/size  999996
trainer/num train calls  144000
trainer/Policy Loss  -2.15149
trainer/Log Pis Mean  2.21848
trainer/Log Pis Std  2.59993
trainer/Log Pis Max  15.4359
trainer/Log Pis Min  -3.78524
trainer/policy/mean Mean  0.175759
trainer/policy/mean Std  0.61256
trainer/policy/mean Max  0.998747
trainer/policy/mean Min  -0.999443
trainer/policy/normal/std Mean  0.387106
trainer/policy/normal/std Std  0.18433
trainer/policy/normal/std Max  0.992679
trainer/policy/normal/std Min  0.0722582
trainer/policy/normal/log_std Mean  -1.0929
trainer/policy/normal/log_std Std  0.576709
trainer/policy/normal/log_std Max  -0.00734775
trainer/policy/normal/log_std Min  -2.62751
eval/num steps total  94147
eval/num paths total  179
eval/path length Mean  575
eval/path length Std  0
eval/path length Max  575
eval/path length Min  575
eval/Rewards Mean  3.15411
eval/Rewards Std  0.687635
eval/Rewards Max  4.51602
eval/Rewards Min  0.986532
eval/Returns Mean  1813.61
eval/Returns Std  0
eval/Returns Max  1813.61
eval/Returns Min  1813.61
eval/Actions Mean  0.153094
eval/Actions Std  0.60799
eval/Actions Max  0.998349
eval/Actions Min  -0.997702
eval/Num Paths  1
eval/Average Returns  1813.61
eval/normalized_score  56.348
time/evaluation sampling (s)  0.937098
time/logging (s)  0.00293871
time/sampling batch (s)  0.270596
time/saving (s)  0.00301315
time/training (s)  4.23996
time/epoch (s)  5.45361
time/total (s)  31004.2
Epoch -857
----------------------------------  ----------------
2022-05-10 21:47:16.580202 PDT | [0] Epoch -856 finished
----------------------------------  ----------------
epoch  -856
replay_buffer/size  999996
trainer/num train calls  145000
trainer/Policy Loss  -2.27389
trainer/Log Pis Mean  2.2947
trainer/Log Pis Std  2.62381
trainer/Log Pis Max  11.8953
trainer/Log Pis Min  -5.57773
trainer/policy/mean Mean  0.122271
trainer/policy/mean Std  0.629012
trainer/policy/mean Max  0.996114
trainer/policy/mean Min  -0.998491
trainer/policy/normal/std Mean  0.389949
trainer/policy/normal/std Std  0.189346
trainer/policy/normal/std Max  0.936614
trainer/policy/normal/std Min  0.0773863
trainer/policy/normal/log_std Mean  -1.08934
trainer/policy/normal/log_std Std  0.58171
trainer/policy/normal/log_std Max  -0.065484
trainer/policy/normal/log_std Min  -2.55895
eval/num steps total  94654
eval/num paths total  180
eval/path length Mean  507
eval/path length Std  0
eval/path length Max  507
eval/path length Min  507
eval/Rewards Mean  3.08864
eval/Rewards Std  0.726216
eval/Rewards Max  4.63341
eval/Rewards Min  0.984067
eval/Returns Mean  1565.94
eval/Returns Std  0
eval/Returns Max  1565.94
eval/Returns Min  1565.94
eval/Actions Mean  0.164119
eval/Actions Std  0.589555
eval/Actions Max  0.997395
eval/Actions Min  -0.99683
eval/Num Paths  1
eval/Average Returns  1565.94
eval/normalized_score  48.738
time/evaluation sampling (s)  0.911179
time/logging (s)  0.00238161
time/sampling batch (s)  0.269959
time/saving (s)  0.0031311
time/training (s)  4.22756
time/epoch (s)  5.41422
time/total (s)  31009.6
Epoch -856
----------------------------------  ----------------
2022-05-10 21:47:22.037484 PDT | [0] Epoch -855 finished
----------------------------------  ----------------
epoch  -855
replay_buffer/size  999996
trainer/num train calls  146000
trainer/Policy Loss  -2.20168
trainer/Log Pis Mean  2.24868
trainer/Log Pis Std  2.49497
trainer/Log Pis Max  9.71261
trainer/Log Pis Min  -5.90896
trainer/policy/mean Mean  0.119305
trainer/policy/mean Std  0.617982
trainer/policy/mean Max  0.99612
trainer/policy/mean Min  -0.998107
trainer/policy/normal/std Mean  0.391311
trainer/policy/normal/std Std  0.183126
trainer/policy/normal/std Max  1.02684
trainer/policy/normal/std Min  0.0759591
trainer/policy/normal/log_std Mean  -1.07386
trainer/policy/normal/log_std Std  0.556603
trainer/policy/normal/log_std Max  0.0264855
trainer/policy/normal/log_std Min  -2.57756
eval/num steps total  95302
eval/num paths total  182
eval/path length Mean  324
eval/path length Std  28
eval/path length Max  352
eval/path length Min  296
eval/Rewards Mean  3.00507
eval/Rewards Std  1.02208
eval/Rewards Max  5.53422
eval/Rewards Min  0.990818
eval/Returns Mean  973.643
eval/Returns Std  92.3662
eval/Returns Max  1066.01
eval/Returns Min  881.277
eval/Actions Mean  0.110952
eval/Actions Std  0.547415
eval/Actions Max  0.999616
eval/Actions Min  -0.997838
eval/Num Paths  2
eval/Average Returns  973.643
eval/normalized_score  30.5391
time/evaluation sampling (s)  0.905613
time/logging (s)  0.00274267
time/sampling batch (s)  0.270562
time/saving (s)  0.00302628
time/training (s)  4.25587
time/epoch (s)  5.43782
time/total (s)  31015
Epoch -855
----------------------------------  ----------------
2022-05-10 21:47:27.517662 PDT | [0] Epoch -854 finished
----------------------------------  ----------------
epoch  -854
replay_buffer/size  999996
trainer/num train calls  147000
trainer/Policy Loss  -2.10488
trainer/Log Pis Mean  1.97887
trainer/Log Pis Std  2.58572
trainer/Log Pis Max  10.115
trainer/Log Pis Min  -9.24532
trainer/policy/mean Mean  0.125031
trainer/policy/mean Std  0.616077
trainer/policy/mean Max  0.995877
trainer/policy/mean Min  -0.997556
trainer/policy/normal/std Mean  0.388368
trainer/policy/normal/std Std  0.178515
trainer/policy/normal/std Max  0.911441
trainer/policy/normal/std Min  0.0743587
trainer/policy/normal/log_std Mean  -1.07739
trainer/policy/normal/log_std Std  0.549214
trainer/policy/normal/log_std Max  -0.0927282
trainer/policy/normal/log_std Min  -2.59885
eval/num steps total  96195
eval/num paths total  184
eval/path length Mean  446.5
eval/path length Std  45.5
eval/path length Max  492
eval/path length Min  401
eval/Rewards Mean  3.12195
eval/Rewards Std  0.853223
eval/Rewards Max  4.74446
eval/Rewards Min  0.988607
eval/Returns Mean  1393.95
eval/Returns Std  103.537
eval/Returns Max  1497.49
eval/Returns Min  1290.41
eval/Actions Mean  0.127477
eval/Actions Std  0.565549
eval/Actions Max  0.998602
eval/Actions Min  -0.999029
eval/Num Paths  2
eval/Average Returns  1393.95
eval/normalized_score  43.4534
time/evaluation sampling (s)  0.89155
time/logging (s)  0.0034982
time/sampling batch (s)  0.273046
time/saving (s)  0.0029616
time/training (s)  4.28972
time/epoch (s)  5.46078
time/total (s)  31020.5
Epoch -854
----------------------------------  ----------------
2022-05-10 21:47:32.958256 PDT | [0] Epoch -853 finished
----------------------------------  ----------------
epoch  -853
replay_buffer/size  999996
trainer/num train calls  148000
trainer/Policy Loss  -2.13748
trainer/Log Pis Mean  2.06701
trainer/Log Pis Std  2.56269
trainer/Log Pis Max  8.29063
trainer/Log Pis Min  -5.32038
trainer/policy/mean Mean  0.138195
trainer/policy/mean Std  0.616121
trainer/policy/mean Max  0.996184
trainer/policy/mean Min  -0.997471
trainer/policy/normal/std Mean  0.393718
trainer/policy/normal/std Std  0.181553
trainer/policy/normal/std Max  0.932858
trainer/policy/normal/std Min  0.0766545
trainer/policy/normal/log_std Mean  -1.06936
trainer/policy/normal/log_std Std  0.565503
trainer/policy/normal/log_std Max  -0.0695027
trainer/policy/normal/log_std Min  -2.56845
eval/num steps total  96747
eval/num paths total  185
eval/path length Mean  552
eval/path length Std  0
eval/path length Max  552
eval/path length Min  552
eval/Rewards Mean  3.19594
eval/Rewards Std  0.822077
eval/Rewards Max  4.76393
eval/Rewards Min  0.987987
eval/Returns Mean  1764.16
eval/Returns Std  0
eval/Returns Max  1764.16
eval/Returns Min  1764.16
eval/Actions Mean  0.148358
eval/Actions Std  0.570533
eval/Actions Max  0.998104
eval/Actions Min  -0.99838
eval/Num Paths  1
eval/Average Returns  1764.16
eval/normalized_score  54.8284
time/evaluation sampling (s)  0.889173
time/logging (s)  0.00247698
time/sampling batch (s)  0.270238
time/saving (s)  0.00309924
time/training (s)  4.25458
time/epoch (s)  5.41957
time/total (s)  31025.9
Epoch -853
----------------------------------  ----------------
2022-05-10 21:47:38.348656 PDT | [0] Epoch -852 finished
----------------------------------  ----------------
epoch  -852
replay_buffer/size  999996
trainer/num train calls  149000
trainer/Policy Loss  -2.21563
trainer/Log Pis Mean  2.15088
trainer/Log Pis Std  2.58015
trainer/Log Pis Max  14.1803
trainer/Log Pis Min  -4.59676
trainer/policy/mean Mean  0.143866
trainer/policy/mean Std  0.606384
trainer/policy/mean Max  0.998261
trainer/policy/mean Min  -0.99956
trainer/policy/normal/std Mean  0.384469
trainer/policy/normal/std Std  0.185807
trainer/policy/normal/std Max  0.923119
trainer/policy/normal/std Min  0.0718312
trainer/policy/normal/log_std Mean  -1.10194
trainer/policy/normal/log_std Std  0.578366
trainer/policy/normal/log_std Max  -0.0799973
trainer/policy/normal/log_std Min  -2.63344
eval/num steps total  97606
eval/num paths total  187
eval/path length Mean  429.5
eval/path length Std  61.5
eval/path length Max  491
eval/path length Min  368
eval/Rewards Mean  3.04398
eval/Rewards Std  0.845818
eval/Rewards Max  5.32145
eval/Rewards Min  0.984502
eval/Returns Mean  1307.39
eval/Returns Std  164.96
eval/Returns Max  1472.35
eval/Returns Min  1142.43
eval/Actions Mean  0.126347
eval/Actions Std  0.561126
eval/Actions Max  0.997683
eval/Actions Min  -0.997569
eval/Num Paths  2
eval/Average Returns  1307.39
eval/normalized_score  40.7938
time/evaluation sampling (s)  0.888969
time/logging (s)  0.00324155
time/sampling batch (s)  0.268742
time/saving (s)  0.00298465
time/training (s)  4.20742
time/epoch (s)  5.37136
time/total (s)  31031.3
Epoch -852
----------------------------------  ----------------
2022-05-10 21:47:43.796046 PDT | [0] Epoch -851 finished
----------------------------------  ----------------
epoch  -851
replay_buffer/size  999996
trainer/num train calls  150000
trainer/Policy Loss  -1.80246
trainer/Log Pis Mean  1.86354
trainer/Log Pis Std  2.51236
trainer/Log Pis Max  10.0601
trainer/Log Pis Min  -6.42215
trainer/policy/mean Mean  0.130499
trainer/policy/mean Std  0.606836
trainer/policy/mean Max  0.997262
trainer/policy/mean Min  -0.997798
trainer/policy/normal/std Mean  0.395676
trainer/policy/normal/std Std  0.187088
trainer/policy/normal/std Max  1.03267
trainer/policy/normal/std Min  0.0730575
trainer/policy/normal/log_std Mean  -1.06758
trainer/policy/normal/log_std Std  0.569317
trainer/policy/normal/log_std Max  0.0321438
trainer/policy/normal/log_std Min  -2.61651
eval/num steps total  98527
eval/num paths total  189
eval/path length Mean  460.5
eval/path length Std  29.5
eval/path length Max  490
eval/path length Min  431
eval/Rewards Mean  3.1678
eval/Rewards Std  0.840725
eval/Rewards Max  5.43176
eval/Rewards Min  0.991389
eval/Returns Mean  1458.77
eval/Returns Std  109.517
eval/Returns Max  1568.29
eval/Returns Min  1349.26
eval/Actions Mean  0.153503
eval/Actions Std  0.587959
eval/Actions Max  0.998064
eval/Actions Min  -0.998317
eval/Num Paths  2
eval/Average Returns  1458.77
eval/normalized_score  45.4451
time/evaluation sampling (s)  0.887149
time/logging (s)  0.00368939
time/sampling batch (s)  0.270721
time/saving (s)  0.0032303
time/training (s)  4.26308
time/epoch (s)  5.42787
time/total (s)  31036.7
Epoch -851
----------------------------------  ----------------
2022-05-10 21:47:49.247884 PDT | [0] Epoch -850 finished
----------------------------------  ----------------
epoch  -850
replay_buffer/size  999996
trainer/num train calls  151000
trainer/Policy Loss  -2.13426
trainer/Log Pis Mean  2.05844
trainer/Log Pis Std  2.62745
trainer/Log Pis Max  12.1247
trainer/Log Pis Min  -5.2685
trainer/policy/mean Mean  0.128299
trainer/policy/mean Std  0.613504
trainer/policy/mean Max  0.997939
trainer/policy/mean Min  -0.997841
trainer/policy/normal/std Mean  0.386226
trainer/policy/normal/std Std  0.180451
trainer/policy/normal/std Max  0.95674
trainer/policy/normal/std Min  0.0750269
trainer/policy/normal/log_std Mean  -1.089
trainer/policy/normal/log_std Std  0.563897
trainer/policy/normal/log_std Max  -0.0442232
trainer/policy/normal/log_std Min  -2.58991
eval/num steps total  99085
eval/num paths total  190
eval/path length Mean  558
eval/path length Std  0
eval/path length Max  558
eval/path length Min  558
eval/Rewards Mean  3.20739
eval/Rewards Std  0.791059
eval/Rewards Max  4.68737
eval/Rewards Min  0.983904
eval/Returns Mean  1789.72
eval/Returns Std  0
eval/Returns Max  1789.72
eval/Returns Min  1789.72
eval/Actions Mean  0.144681
eval/Actions Std  0.600824
eval/Actions Max  0.997871
eval/Actions Min  -0.998465
eval/Num Paths  1
eval/Average Returns  1789.72
eval/normalized_score  55.614
time/evaluation sampling (s)  0.892564
time/logging (s)  0.00270134
time/sampling batch (s)  0.272284
time/saving (s)  0.00333689
time/training (s)  4.2596
time/epoch (s)  5.43048
time/total (s)  31042.2
Epoch -850
----------------------------------  ----------------
2022-05-10 21:47:54.677818 PDT | [0] Epoch -849 finished
----------------------------------  ----------------
epoch  -849
replay_buffer/size  999996
trainer/num train calls  152000
trainer/Policy Loss  -2.06093
trainer/Log Pis Mean  2.03567
trainer/Log Pis Std  2.62976
trainer/Log Pis Max  9.11287
trainer/Log Pis Min  -7.42401
trainer/policy/mean Mean  0.159052
trainer/policy/mean Std  0.607046
trainer/policy/mean Max  0.997914
trainer/policy/mean Min  -0.999082
trainer/policy/normal/std Mean  0.386847
trainer/policy/normal/std Std  0.180041
trainer/policy/normal/std Max  0.922502
trainer/policy/normal/std Min  0.070584
trainer/policy/normal/log_std Mean  -1.0904
trainer/policy/normal/log_std Std  0.573751
trainer/policy/normal/log_std Max  -0.080666
trainer/policy/normal/log_std Min  -2.65095
eval/num steps total  99648
eval/num paths total  191
eval/path length Mean  563
eval/path length Std  0
eval/path length Max  563
eval/path length Min  563
eval/Rewards Mean  3.20099
eval/Rewards Std  0.777394
eval/Rewards Max  4.89427
eval/Rewards Min  0.98542
eval/Returns Mean  1802.16
eval/Returns Std  0
eval/Returns Max  1802.16
eval/Returns Min  1802.16
eval/Actions Mean  0.152674
eval/Actions Std  0.593185
eval/Actions Max  0.997531
eval/Actions Min  -0.997971
eval/Num Paths  1
eval/Average Returns  1802.16
eval/normalized_score  55.996
time/evaluation sampling (s)  0.90282
time/logging (s)  0.00241416
time/sampling batch (s)  0.267587
time/saving (s)  0.00295177
time/training (s)  4.23386
time/epoch (s)  5.40963
time/total (s)  31047.6
Epoch -849
----------------------------------  ----------------
2022-05-10 21:48:00.248280 PDT | [0] Epoch -848 finished
----------------------------------  ----------------
epoch  -848
replay_buffer/size  999996
trainer/num train calls  153000
trainer/Policy Loss  -2.07869
trainer/Log Pis Mean  1.97369
trainer/Log Pis Std  2.56209
trainer/Log Pis Max  12.4966
trainer/Log Pis Min  -5.93762
trainer/policy/mean Mean  0.151171
trainer/policy/mean Std  0.605697
trainer/policy/mean Max  0.997896
trainer/policy/mean Min  -0.999212
trainer/policy/normal/std Mean  0.386662
trainer/policy/normal/std Std  0.180269
trainer/policy/normal/std Max  0.894763
trainer/policy/normal/std Min  0.0733197
trainer/policy/normal/log_std Mean  -1.08857
trainer/policy/normal/log_std Std  0.565648
trainer/policy/normal/log_std Max  -0.111196
trainer/policy/normal/log_std Min  -2.61293
eval/num steps total  100376
eval/num paths total  192
eval/path length Mean  728
eval/path length Std  0
eval/path length Max  728
eval/path length Min  728
eval/Rewards Mean  3.22752
eval/Rewards Std  0.700835
eval/Rewards Max  4.82174
eval/Rewards Min  0.985867
eval/Returns Mean  2349.64
eval/Returns Std  0
eval/Returns Max  2349.64
eval/Returns Min  2349.64
eval/Actions Mean  0.152432
eval/Actions Std  0.600207
eval/Actions Max  0.997452
eval/Actions Min  -0.99823
eval/Num Paths  1
eval/Average Returns  2349.64
eval/normalized_score  72.8178
time/evaluation sampling (s)  0.970973
time/logging (s)  0.0029429
time/sampling batch (s)  0.279364
time/saving (s)  0.00304456
time/training (s)  4.29466
time/epoch (s)  5.55098
time/total (s)  31053.1
Epoch -848
----------------------------------  ----------------
2022-05-10 21:48:05.678797 PDT | [0] Epoch -847 finished
----------------------------------  ----------------
epoch  -847
replay_buffer/size  999996
trainer/num train calls  154000
trainer/Policy Loss  -2.2742
trainer/Log Pis Mean  2.30059
trainer/Log Pis Std  2.57588
trainer/Log Pis Max  9.37905
trainer/Log Pis Min  -4.60508
trainer/policy/mean Mean  0.120819
trainer/policy/mean Std  0.628391
trainer/policy/mean Max  0.998913
trainer/policy/mean Min  -0.998173
trainer/policy/normal/std Mean  0.397814
trainer/policy/normal/std Std  0.190704
trainer/policy/normal/std Max  1.00421
trainer/policy/normal/std Min  0.0748568
trainer/policy/normal/log_std Mean -1.06422 trainer/policy/normal/log_std Std 0.571125 trainer/policy/normal/log_std Max 0.00419997 trainer/policy/normal/log_std Min -2.59218 eval/num steps total 101267 eval/num paths total 194 eval/path length Mean 445.5 eval/path length Std 43.5 eval/path length Max 489 eval/path length Min 402 eval/Rewards Mean 3.19216 eval/Rewards Std 0.808088 eval/Rewards Max 4.89038 eval/Rewards Min 0.982786 eval/Returns Mean 1422.11 eval/Returns Std 159.713 eval/Returns Max 1581.82 eval/Returns Min 1262.4 eval/Actions Mean 0.158049 eval/Actions Std 0.589445 eval/Actions Max 0.997178 eval/Actions Min -0.998446 eval/Num Paths 2 eval/Average Returns 1422.11 eval/normalized_score 44.3186 time/evaluation sampling (s) 0.925891 time/logging (s) 0.00347141 time/sampling batch (s) 0.267527 time/saving (s) 0.00318908 time/training (s) 4.21074 time/epoch (s) 5.41082 time/total (s) 31058.5 Epoch -847 ---------------------------------- --------------- 2022-05-10 21:48:11.133031 PDT | [0] Epoch -846 finished ---------------------------------- --------------- epoch -846 replay_buffer/size 999996 trainer/num train calls 155000 trainer/Policy Loss -2.02958 trainer/Log Pis Mean 1.92492 trainer/Log Pis Std 2.52308 trainer/Log Pis Max 9.50552 trainer/Log Pis Min -6.62212 trainer/policy/mean Mean 0.109199 trainer/policy/mean Std 0.610822 trainer/policy/mean Max 0.996538 trainer/policy/mean Min -0.998005 trainer/policy/normal/std Mean 0.388394 trainer/policy/normal/std Std 0.180295 trainer/policy/normal/std Max 0.992781 trainer/policy/normal/std Min 0.0743489 trainer/policy/normal/log_std Mean -1.08181 trainer/policy/normal/log_std Std 0.56033 trainer/policy/normal/log_std Max -0.00724544 trainer/policy/normal/log_std Min -2.59899 eval/num steps total 101842 eval/num paths total 195 eval/path length Mean 575 eval/path length Std 0 eval/path length Max 575 eval/path length Min 575 eval/Rewards Mean 3.14132 eval/Rewards Std 0.768788 eval/Rewards Max 4.85832 
eval/Rewards Min 0.98921 eval/Returns Mean 1806.26 eval/Returns Std 0 eval/Returns Max 1806.26 eval/Returns Min 1806.26 eval/Actions Mean 0.160807 eval/Actions Std 0.585383 eval/Actions Max 0.996085 eval/Actions Min -0.998216 eval/Num Paths 1 eval/Average Returns 1806.26 eval/normalized_score 56.1221 time/evaluation sampling (s) 0.91514 time/logging (s) 0.0025334 time/sampling batch (s) 0.269236 time/saving (s) 0.00319655 time/training (s) 4.24317 time/epoch (s) 5.43328 time/total (s) 31064 Epoch -846 ---------------------------------- --------------- 2022-05-10 21:48:16.579264 PDT | [0] Epoch -845 finished ---------------------------------- --------------- epoch -845 replay_buffer/size 999996 trainer/num train calls 156000 trainer/Policy Loss -2.25988 trainer/Log Pis Mean 2.32454 trainer/Log Pis Std 2.76609 trainer/Log Pis Max 11.852 trainer/Log Pis Min -6.20368 trainer/policy/mean Mean 0.14684 trainer/policy/mean Std 0.620838 trainer/policy/mean Max 0.996952 trainer/policy/mean Min -0.997043 trainer/policy/normal/std Mean 0.389494 trainer/policy/normal/std Std 0.186626 trainer/policy/normal/std Max 0.958843 trainer/policy/normal/std Min 0.0694169 trainer/policy/normal/log_std Mean -1.09056 trainer/policy/normal/log_std Std 0.587208 trainer/policy/normal/log_std Max -0.0420278 trainer/policy/normal/log_std Min -2.66763 eval/num steps total 102354 eval/num paths total 196 eval/path length Mean 512 eval/path length Std 0 eval/path length Max 512 eval/path length Min 512 eval/Rewards Mean 3.13844 eval/Rewards Std 0.780413 eval/Rewards Max 5.10714 eval/Rewards Min 0.990514 eval/Returns Mean 1606.88 eval/Returns Std 0 eval/Returns Max 1606.88 eval/Returns Min 1606.88 eval/Actions Mean 0.142312 eval/Actions Std 0.588791 eval/Actions Max 0.996901 eval/Actions Min -0.996828 eval/Num Paths 1 eval/Average Returns 1606.88 eval/normalized_score 49.9959 time/evaluation sampling (s) 0.921509 time/logging (s) 0.00245516 time/sampling batch (s) 0.269502 time/saving (s) 0.00317531 
time/training (s) 4.22961 time/epoch (s) 5.42625 time/total (s) 31069.4 Epoch -845 ---------------------------------- --------------- 2022-05-10 21:48:22.019712 PDT | [0] Epoch -844 finished ---------------------------------- --------------- epoch -844 replay_buffer/size 999996 trainer/num train calls 157000 trainer/Policy Loss -2.24162 trainer/Log Pis Mean 2.02042 trainer/Log Pis Std 2.7029 trainer/Log Pis Max 9.93581 trainer/Log Pis Min -5.92037 trainer/policy/mean Mean 0.155062 trainer/policy/mean Std 0.611443 trainer/policy/mean Max 0.997887 trainer/policy/mean Min -0.998376 trainer/policy/normal/std Mean 0.386767 trainer/policy/normal/std Std 0.179139 trainer/policy/normal/std Max 0.911543 trainer/policy/normal/std Min 0.0703277 trainer/policy/normal/log_std Mean -1.08804 trainer/policy/normal/log_std Std 0.567815 trainer/policy/normal/log_std Max -0.0926166 trainer/policy/normal/log_std Min -2.65459 eval/num steps total 102917 eval/num paths total 197 eval/path length Mean 563 eval/path length Std 0 eval/path length Max 563 eval/path length Min 563 eval/Rewards Mean 3.22172 eval/Rewards Std 0.824242 eval/Rewards Max 4.68405 eval/Rewards Min 0.983626 eval/Returns Mean 1813.83 eval/Returns Std 0 eval/Returns Max 1813.83 eval/Returns Min 1813.83 eval/Actions Mean 0.162554 eval/Actions Std 0.596537 eval/Actions Max 0.998319 eval/Actions Min -0.99844 eval/Num Paths 1 eval/Average Returns 1813.83 eval/normalized_score 56.3546 time/evaluation sampling (s) 0.896441 time/logging (s) 0.00255271 time/sampling batch (s) 0.269852 time/saving (s) 0.00307076 time/training (s) 4.24853 time/epoch (s) 5.42044 time/total (s) 31074.8 Epoch -844 ---------------------------------- --------------- 2022-05-10 21:48:27.444513 PDT | [0] Epoch -843 finished ---------------------------------- --------------- epoch -843 replay_buffer/size 999996 trainer/num train calls 158000 trainer/Policy Loss -2.19277 trainer/Log Pis Mean 2.21253 trainer/Log Pis Std 2.65577 trainer/Log Pis Max 12.7489 
trainer/Log Pis Min -3.40248 trainer/policy/mean Mean 0.123493 trainer/policy/mean Std 0.615351 trainer/policy/mean Max 0.996965 trainer/policy/mean Min -0.998189 trainer/policy/normal/std Mean 0.388558 trainer/policy/normal/std Std 0.190032 trainer/policy/normal/std Max 0.97756 trainer/policy/normal/std Min 0.0736367 trainer/policy/normal/log_std Mean -1.09315 trainer/policy/normal/log_std Std 0.58117 trainer/policy/normal/log_std Max -0.0226954 trainer/policy/normal/log_std Min -2.60861 eval/num steps total 103911 eval/num paths total 199 eval/path length Mean 497 eval/path length Std 2 eval/path length Max 499 eval/path length Min 495 eval/Rewards Mean 3.07322 eval/Rewards Std 0.768859 eval/Rewards Max 4.69508 eval/Rewards Min 0.982175 eval/Returns Mean 1527.39 eval/Returns Std 11.8044 eval/Returns Max 1539.19 eval/Returns Min 1515.59 eval/Actions Mean 0.150652 eval/Actions Std 0.587416 eval/Actions Max 0.997752 eval/Actions Min -0.997685 eval/Num Paths 2 eval/Average Returns 1527.39 eval/normalized_score 47.5535 time/evaluation sampling (s) 0.877428 time/logging (s) 0.00351842 time/sampling batch (s) 0.269798 time/saving (s) 0.00305156 time/training (s) 4.25207 time/epoch (s) 5.40587 time/total (s) 31080.2 Epoch -843 ---------------------------------- --------------- 2022-05-10 21:48:32.890249 PDT | [0] Epoch -842 finished ---------------------------------- --------------- epoch -842 replay_buffer/size 999996 trainer/num train calls 159000 trainer/Policy Loss -2.17088 trainer/Log Pis Mean 2.09635 trainer/Log Pis Std 2.62326 trainer/Log Pis Max 10.0857 trainer/Log Pis Min -6.72415 trainer/policy/mean Mean 0.1654 trainer/policy/mean Std 0.610655 trainer/policy/mean Max 0.99675 trainer/policy/mean Min -0.99785 trainer/policy/normal/std Mean 0.388679 trainer/policy/normal/std Std 0.183398 trainer/policy/normal/std Max 0.9274 trainer/policy/normal/std Min 0.0710114 trainer/policy/normal/log_std Mean -1.08877 trainer/policy/normal/log_std Std 0.579159 
trainer/policy/normal/log_std Max -0.0753701 trainer/policy/normal/log_std Min -2.64491 eval/num steps total 104444 eval/num paths total 200 eval/path length Mean 533 eval/path length Std 0 eval/path length Max 533 eval/path length Min 533 eval/Rewards Mean 3.21264 eval/Rewards Std 0.81126 eval/Rewards Max 5.19786 eval/Rewards Min 0.990483 eval/Returns Mean 1712.34 eval/Returns Std 0 eval/Returns Max 1712.34 eval/Returns Min 1712.34 eval/Actions Mean 0.135176 eval/Actions Std 0.585806 eval/Actions Max 0.994346 eval/Actions Min -0.998142 eval/Num Paths 1 eval/Average Returns 1712.34 eval/normalized_score 53.2361 time/evaluation sampling (s) 0.893851 time/logging (s) 0.00240443 time/sampling batch (s) 0.271063 time/saving (s) 0.00296235 time/training (s) 4.25414 time/epoch (s) 5.42442 time/total (s) 31085.7 Epoch -842 ---------------------------------- --------------- 2022-05-10 21:48:38.312777 PDT | [0] Epoch -841 finished ---------------------------------- --------------- epoch -841 replay_buffer/size 999996 trainer/num train calls 160000 trainer/Policy Loss -2.0293 trainer/Log Pis Mean 2.11688 trainer/Log Pis Std 2.51456 trainer/Log Pis Max 9.96021 trainer/Log Pis Min -3.5244 trainer/policy/mean Mean 0.146393 trainer/policy/mean Std 0.604072 trainer/policy/mean Max 0.997351 trainer/policy/mean Min -0.998001 trainer/policy/normal/std Mean 0.393922 trainer/policy/normal/std Std 0.186248 trainer/policy/normal/std Max 0.981558 trainer/policy/normal/std Min 0.0739405 trainer/policy/normal/log_std Mean -1.07569 trainer/policy/normal/log_std Std 0.57979 trainer/policy/normal/log_std Max -0.0186141 trainer/policy/normal/log_std Min -2.6045 eval/num steps total 105414 eval/num paths total 202 eval/path length Mean 485 eval/path length Std 10 eval/path length Max 495 eval/path length Min 475 eval/Rewards Mean 3.07 eval/Rewards Std 0.832321 eval/Rewards Max 4.80106 eval/Rewards Min 0.989552 eval/Returns Mean 1488.95 eval/Returns Std 18.2805 eval/Returns Max 1507.23 
eval/Returns Min 1470.67 eval/Actions Mean 0.15588 eval/Actions Std 0.578594 eval/Actions Max 0.99815 eval/Actions Min -0.998787 eval/Num Paths 2 eval/Average Returns 1488.95 eval/normalized_score 46.3724 time/evaluation sampling (s) 0.882324 time/logging (s) 0.0039793 time/sampling batch (s) 0.269573 time/saving (s) 0.00314074 time/training (s) 4.24445 time/epoch (s) 5.40347 time/total (s) 31091.1 Epoch -841 ---------------------------------- --------------- 2022-05-10 21:48:43.739500 PDT | [0] Epoch -840 finished ---------------------------------- --------------- epoch -840 replay_buffer/size 999996 trainer/num train calls 161000 trainer/Policy Loss -2.3347 trainer/Log Pis Mean 2.3907 trainer/Log Pis Std 2.71473 trainer/Log Pis Max 16.8885 trainer/Log Pis Min -4.253 trainer/policy/mean Mean 0.129191 trainer/policy/mean Std 0.622169 trainer/policy/mean Max 0.996953 trainer/policy/mean Min -0.999671 trainer/policy/normal/std Mean 0.37921 trainer/policy/normal/std Std 0.181765 trainer/policy/normal/std Max 0.96439 trainer/policy/normal/std Min 0.0747288 trainer/policy/normal/log_std Mean -1.11574 trainer/policy/normal/log_std Std 0.581532 trainer/policy/normal/log_std Max -0.0362597 trainer/policy/normal/log_std Min -2.59389 eval/num steps total 105976 eval/num paths total 203 eval/path length Mean 562 eval/path length Std 0 eval/path length Max 562 eval/path length Min 562 eval/Rewards Mean 3.18271 eval/Rewards Std 0.781198 eval/Rewards Max 4.7628 eval/Rewards Min 0.993842 eval/Returns Mean 1788.68 eval/Returns Std 0 eval/Returns Max 1788.68 eval/Returns Min 1788.68 eval/Actions Mean 0.146801 eval/Actions Std 0.598174 eval/Actions Max 0.996353 eval/Actions Min -0.998307 eval/Num Paths 1 eval/Average Returns 1788.68 eval/normalized_score 55.5819 time/evaluation sampling (s) 0.878862 time/logging (s) 0.00296745 time/sampling batch (s) 0.270227 time/saving (s) 0.00326285 time/training (s) 4.25 time/epoch (s) 5.40532 time/total (s) 31096.5 Epoch -840 
---------------------------------- --------------- 2022-05-10 21:48:49.186381 PDT | [0] Epoch -839 finished ---------------------------------- --------------- epoch -839 replay_buffer/size 999996 trainer/num train calls 162000 trainer/Policy Loss -2.10785 trainer/Log Pis Mean 2.23753 trainer/Log Pis Std 2.4746 trainer/Log Pis Max 9.68052 trainer/Log Pis Min -3.88947 trainer/policy/mean Mean 0.152585 trainer/policy/mean Std 0.613559 trainer/policy/mean Max 0.997372 trainer/policy/mean Min -0.997651 trainer/policy/normal/std Mean 0.387411 trainer/policy/normal/std Std 0.181459 trainer/policy/normal/std Max 1.05134 trainer/policy/normal/std Min 0.0767374 trainer/policy/normal/log_std Mean -1.08614 trainer/policy/normal/log_std Std 0.563051 trainer/policy/normal/log_std Max 0.0500659 trainer/policy/normal/log_std Min -2.56737 eval/num steps total 106971 eval/num paths total 205 eval/path length Mean 497.5 eval/path length Std 21.5 eval/path length Max 519 eval/path length Min 476 eval/Rewards Mean 3.18153 eval/Rewards Std 0.826446 eval/Rewards Max 5.3012 eval/Rewards Min 0.992406 eval/Returns Mean 1582.81 eval/Returns Std 63.5119 eval/Returns Max 1646.32 eval/Returns Min 1519.3 eval/Actions Mean 0.156406 eval/Actions Std 0.592337 eval/Actions Max 0.99781 eval/Actions Min -0.998406 eval/Num Paths 2 eval/Average Returns 1582.81 eval/normalized_score 49.2563 time/evaluation sampling (s) 0.887306 time/logging (s) 0.00369187 time/sampling batch (s) 0.271677 time/saving (s) 0.00302978 time/training (s) 4.26165 time/epoch (s) 5.42736 time/total (s) 31101.9 Epoch -839 ---------------------------------- --------------- 2022-05-10 21:48:54.631906 PDT | [0] Epoch -838 finished ---------------------------------- --------------- epoch -838 replay_buffer/size 999996 trainer/num train calls 163000 trainer/Policy Loss -1.94268 trainer/Log Pis Mean 2.07626 trainer/Log Pis Std 2.56032 trainer/Log Pis Max 9.15669 trainer/Log Pis Min -6.70537 trainer/policy/mean Mean 0.15404 
trainer/policy/mean Std 0.604337 trainer/policy/mean Max 0.996881 trainer/policy/mean Min -0.998391 trainer/policy/normal/std Mean 0.389358 trainer/policy/normal/std Std 0.187907 trainer/policy/normal/std Max 0.901882 trainer/policy/normal/std Min 0.0736615 trainer/policy/normal/log_std Mean -1.09041 trainer/policy/normal/log_std Std 0.583414 trainer/policy/normal/log_std Max -0.103271 trainer/policy/normal/log_std Min -2.60827 eval/num steps total 107584 eval/num paths total 206 eval/path length Mean 613 eval/path length Std 0 eval/path length Max 613 eval/path length Min 613 eval/Rewards Mean 3.19666 eval/Rewards Std 0.834888 eval/Rewards Max 5.40412 eval/Rewards Min 0.98475 eval/Returns Mean 1959.55 eval/Returns Std 0 eval/Returns Max 1959.55 eval/Returns Min 1959.55 eval/Actions Mean 0.151792 eval/Actions Std 0.578701 eval/Actions Max 0.997339 eval/Actions Min -0.998441 eval/Num Paths 1 eval/Average Returns 1959.55 eval/normalized_score 60.8322 time/evaluation sampling (s) 0.900529 time/logging (s) 0.0025625 time/sampling batch (s) 0.270706 time/saving (s) 0.00300567 time/training (s) 4.24753 time/epoch (s) 5.42434 time/total (s) 31107.3 Epoch -838 ---------------------------------- --------------- 2022-05-10 21:49:00.102238 PDT | [0] Epoch -837 finished ---------------------------------- --------------- epoch -837 replay_buffer/size 999996 trainer/num train calls 164000 trainer/Policy Loss -2.06263 trainer/Log Pis Mean 2.09692 trainer/Log Pis Std 2.49627 trainer/Log Pis Max 12.1367 trainer/Log Pis Min -5.00219 trainer/policy/mean Mean 0.165973 trainer/policy/mean Std 0.612249 trainer/policy/mean Max 0.997413 trainer/policy/mean Min -0.997389 trainer/policy/normal/std Mean 0.392953 trainer/policy/normal/std Std 0.18534 trainer/policy/normal/std Max 0.985583 trainer/policy/normal/std Min 0.0725394 trainer/policy/normal/log_std Mean -1.07911 trainer/policy/normal/log_std Std 0.583748 trainer/policy/normal/log_std Max -0.0145216 trainer/policy/normal/log_std Min 
-2.62363 eval/num steps total 108100 eval/num paths total 207 eval/path length Mean 516 eval/path length Std 0 eval/path length Max 516 eval/path length Min 516 eval/Rewards Mean 3.19371 eval/Rewards Std 0.802912 eval/Rewards Max 5.31047 eval/Rewards Min 0.988487 eval/Returns Mean 1647.95 eval/Returns Std 0 eval/Returns Max 1647.95 eval/Returns Min 1647.95 eval/Actions Mean 0.157891 eval/Actions Std 0.595241 eval/Actions Max 0.997859 eval/Actions Min -0.997032 eval/Num Paths 1 eval/Average Returns 1647.95 eval/normalized_score 51.258 time/evaluation sampling (s) 0.901708 time/logging (s) 0.0023929 time/sampling batch (s) 0.273335 time/saving (s) 0.00309171 time/training (s) 4.26921 time/epoch (s) 5.44974 time/total (s) 31112.8 Epoch -837 ---------------------------------- --------------- 2022-05-10 21:49:05.560738 PDT | [0] Epoch -836 finished ---------------------------------- --------------- epoch -836 replay_buffer/size 999996 trainer/num train calls 165000 trainer/Policy Loss -1.95245 trainer/Log Pis Mean 1.98872 trainer/Log Pis Std 2.69402 trainer/Log Pis Max 10.0804 trainer/Log Pis Min -5.5638 trainer/policy/mean Mean 0.105821 trainer/policy/mean Std 0.617419 trainer/policy/mean Max 0.995216 trainer/policy/mean Min -0.998494 trainer/policy/normal/std Mean 0.386053 trainer/policy/normal/std Std 0.180345 trainer/policy/normal/std Max 0.961571 trainer/policy/normal/std Min 0.0684378 trainer/policy/normal/log_std Mean -1.08953 trainer/policy/normal/log_std Std 0.562567 trainer/policy/normal/log_std Max -0.0391871 trainer/policy/normal/log_std Min -2.68183 eval/num steps total 108767 eval/num paths total 208 eval/path length Mean 667 eval/path length Std 0 eval/path length Max 667 eval/path length Min 667 eval/Rewards Mean 3.17169 eval/Rewards Std 0.690996 eval/Rewards Max 4.74675 eval/Rewards Min 0.991008 eval/Returns Mean 2115.52 eval/Returns Std 0 eval/Returns Max 2115.52 eval/Returns Min 2115.52 eval/Actions Mean 0.166607 eval/Actions Std 0.601393 eval/Actions 
Max 0.997668 eval/Actions Min -0.998006 eval/Num Paths 1 eval/Average Returns 2115.52 eval/normalized_score 65.6243 time/evaluation sampling (s) 0.907079 time/logging (s) 0.0027067 time/sampling batch (s) 0.272167 time/saving (s) 0.00297749 time/training (s) 4.25358 time/epoch (s) 5.43851 time/total (s) 31118.2 Epoch -836 ---------------------------------- --------------- 2022-05-10 21:49:11.068190 PDT | [0] Epoch -835 finished ---------------------------------- --------------- epoch -835 replay_buffer/size 999996 trainer/num train calls 166000 trainer/Policy Loss -2.18438 trainer/Log Pis Mean 2.15012 trainer/Log Pis Std 2.62469 trainer/Log Pis Max 9.9613 trainer/Log Pis Min -5.98478 trainer/policy/mean Mean 0.152811 trainer/policy/mean Std 0.611698 trainer/policy/mean Max 0.996397 trainer/policy/mean Min -0.998569 trainer/policy/normal/std Mean 0.381908 trainer/policy/normal/std Std 0.185873 trainer/policy/normal/std Max 1.01126 trainer/policy/normal/std Min 0.0665066 trainer/policy/normal/log_std Mean -1.11419 trainer/policy/normal/log_std Std 0.593595 trainer/policy/normal/log_std Max 0.0111944 trainer/policy/normal/log_std Min -2.71045 eval/num steps total 109751 eval/num paths total 210 eval/path length Mean 492 eval/path length Std 1 eval/path length Max 493 eval/path length Min 491 eval/Rewards Mean 3.0999 eval/Rewards Std 0.7927 eval/Rewards Max 4.73354 eval/Rewards Min 0.989425 eval/Returns Mean 1525.15 eval/Returns Std 20.4895 eval/Returns Max 1545.64 eval/Returns Min 1504.66 eval/Actions Mean 0.15475 eval/Actions Std 0.579973 eval/Actions Max 0.99776 eval/Actions Min -0.997382 eval/Num Paths 2 eval/Average Returns 1525.15 eval/normalized_score 47.4847 time/evaluation sampling (s) 0.949335 time/logging (s) 0.00369372 time/sampling batch (s) 0.272446 time/saving (s) 0.00317871 time/training (s) 4.25935 time/epoch (s) 5.488 time/total (s) 31123.7 Epoch -835 ---------------------------------- --------------- 2022-05-10 21:49:16.527088 PDT | [0] Epoch -834 
finished ---------------------------------- --------------- epoch -834 replay_buffer/size 999996 trainer/num train calls 167000 trainer/Policy Loss -2.43921 trainer/Log Pis Mean 2.34557 trainer/Log Pis Std 2.66879 trainer/Log Pis Max 9.25892 trainer/Log Pis Min -5.47488 trainer/policy/mean Mean 0.155982 trainer/policy/mean Std 0.62836 trainer/policy/mean Max 0.998263 trainer/policy/mean Min -0.996249 trainer/policy/normal/std Mean 0.381278 trainer/policy/normal/std Std 0.180824 trainer/policy/normal/std Max 0.90737 trainer/policy/normal/std Min 0.0781126 trainer/policy/normal/log_std Mean -1.10371 trainer/policy/normal/log_std Std 0.5648 trainer/policy/normal/log_std Max -0.0972054 trainer/policy/normal/log_std Min -2.5496 eval/num steps total 110336 eval/num paths total 211 eval/path length Mean 585 eval/path length Std 0 eval/path length Max 585 eval/path length Min 585 eval/Rewards Mean 3.16378 eval/Rewards Std 0.752763 eval/Rewards Max 4.85946 eval/Rewards Min 0.993024 eval/Returns Mean 1850.81 eval/Returns Std 0 eval/Returns Max 1850.81 eval/Returns Min 1850.81 eval/Actions Mean 0.181227 eval/Actions Std 0.595238 eval/Actions Max 0.997576 eval/Actions Min -0.998603 eval/Num Paths 1 eval/Average Returns 1850.81 eval/normalized_score 57.491 time/evaluation sampling (s) 0.931114 time/logging (s) 0.00254433 time/sampling batch (s) 0.270419 time/saving (s) 0.00315189 time/training (s) 4.23029 time/epoch (s) 5.43752 time/total (s) 31129.2 Epoch -834 ---------------------------------- --------------- 2022-05-10 21:49:21.981513 PDT | [0] Epoch -833 finished ---------------------------------- --------------- epoch -833 replay_buffer/size 999996 trainer/num train calls 168000 trainer/Policy Loss -2.12878 trainer/Log Pis Mean 1.93311 trainer/Log Pis Std 2.54315 trainer/Log Pis Max 9.22239 trainer/Log Pis Min -4.77779 trainer/policy/mean Mean 0.125773 trainer/policy/mean Std 0.614799 trainer/policy/mean Max 0.995876 trainer/policy/mean Min -0.996611 
trainer/policy/normal/std Mean 0.386666 trainer/policy/normal/std Std 0.184128 trainer/policy/normal/std Max 0.99351 trainer/policy/normal/std Min 0.0755978 trainer/policy/normal/log_std Mean -1.09205 trainer/policy/normal/log_std Std 0.570673 trainer/policy/normal/log_std Max -0.0065112 trainer/policy/normal/log_std Min -2.58233 eval/num steps total 110824 eval/num paths total 212 eval/path length Mean 488 eval/path length Std 0 eval/path length Max 488 eval/path length Min 488 eval/Rewards Mean 3.11367 eval/Rewards Std 0.816476 eval/Rewards Max 4.84559 eval/Rewards Min 0.989195 eval/Returns Mean 1519.47 eval/Returns Std 0 eval/Returns Max 1519.47 eval/Returns Min 1519.47 eval/Actions Mean 0.148026 eval/Actions Std 0.587651 eval/Actions Max 0.998867 eval/Actions Min -0.997994 eval/Num Paths 1 eval/Average Returns 1519.47 eval/normalized_score 47.3101 time/evaluation sampling (s) 0.899574 time/logging (s) 0.002313 time/sampling batch (s) 0.271303 time/saving (s) 0.00307835 time/training (s) 4.25774 time/epoch (s) 5.434 time/total (s) 31134.6 Epoch -833 ---------------------------------- --------------- 2022-05-10 21:49:27.444415 PDT | [0] Epoch -832 finished ---------------------------------- --------------- epoch -832 replay_buffer/size 999996 trainer/num train calls 169000 trainer/Policy Loss -2.19363 trainer/Log Pis Mean 2.10056 trainer/Log Pis Std 2.66167 trainer/Log Pis Max 14.0604 trainer/Log Pis Min -6.20538 trainer/policy/mean Mean 0.156842 trainer/policy/mean Std 0.611814 trainer/policy/mean Max 0.998296 trainer/policy/mean Min -0.999232 trainer/policy/normal/std Mean 0.393969 trainer/policy/normal/std Std 0.185691 trainer/policy/normal/std Max 0.954774 trainer/policy/normal/std Min 0.0704408 trainer/policy/normal/log_std Mean -1.07545 trainer/policy/normal/log_std Std 0.580202 trainer/policy/normal/log_std Max -0.0462812 trainer/policy/normal/log_std Min -2.65298 eval/num steps total 111365 eval/num paths total 213 eval/path length Mean 541 eval/path 
length Std 0 eval/path length Max 541 eval/path length Min 541 eval/Rewards Mean 3.20929 eval/Rewards Std 0.80627 eval/Rewards Max 5.18049 eval/Rewards Min 0.987675 eval/Returns Mean 1736.23 eval/Returns Std 0 eval/Returns Max 1736.23 eval/Returns Min 1736.23 eval/Actions Mean 0.140912 eval/Actions Std 0.587237 eval/Actions Max 0.997296 eval/Actions Min -0.998333 eval/Num Paths 1 eval/Average Returns 1736.23 eval/normalized_score 53.9702 time/evaluation sampling (s) 0.900182 time/logging (s) 0.00243318 time/sampling batch (s) 0.271731 time/saving (s) 0.00307302 time/training (s) 4.26549 time/epoch (s) 5.44291 time/total (s) 31140.1 Epoch -832 ---------------------------------- --------------- 2022-05-10 21:49:32.898697 PDT | [0] Epoch -831 finished ---------------------------------- --------------- epoch -831 replay_buffer/size 999996 trainer/num train calls 170000 trainer/Policy Loss -2.22604 trainer/Log Pis Mean 2.1394 trainer/Log Pis Std 2.50829 trainer/Log Pis Max 9.4969 trainer/Log Pis Min -5.79407 trainer/policy/mean Mean 0.141777 trainer/policy/mean Std 0.612431 trainer/policy/mean Max 0.997295 trainer/policy/mean Min -0.998232 trainer/policy/normal/std Mean 0.391144 trainer/policy/normal/std Std 0.190441 trainer/policy/normal/std Max 0.969805 trainer/policy/normal/std Min 0.0771807 trainer/policy/normal/log_std Mean -1.08409 trainer/policy/normal/log_std Std 0.575291 trainer/policy/normal/log_std Max -0.0306606 trainer/policy/normal/log_std Min -2.56161 eval/num steps total 112013 eval/num paths total 214 eval/path length Mean 648 eval/path length Std 0 eval/path length Max 648 eval/path length Min 648 eval/Rewards Mean 3.22233 eval/Rewards Std 0.763978 eval/Rewards Max 4.7678 eval/Rewards Min 0.989099 eval/Returns Mean 2088.07 eval/Returns Std 0 eval/Returns Max 2088.07 eval/Returns Min 2088.07 eval/Actions Mean 0.157799 eval/Actions Std 0.601212 eval/Actions Max 0.998324 eval/Actions Min -0.998506 eval/Num Paths 1 eval/Average Returns 2088.07 
eval/normalized_score 64.7809 time/evaluation sampling (s) 0.891097 time/logging (s) 0.00271259 time/sampling batch (s) 0.272771 time/saving (s) 0.0029637 time/training (s) 4.26487 time/epoch (s) 5.43441 time/total (s) 31145.5 Epoch -831 ---------------------------------- --------------- 2022-05-10 21:49:38.346783 PDT | [0] Epoch -830 finished ---------------------------------- --------------- epoch -830 replay_buffer/size 999996 trainer/num train calls 171000 trainer/Policy Loss -2.33799 trainer/Log Pis Mean 2.42904 trainer/Log Pis Std 2.6096 trainer/Log Pis Max 12.8252 trainer/Log Pis Min -5.00261 trainer/policy/mean Mean 0.17103 trainer/policy/mean Std 0.618186 trainer/policy/mean Max 0.996695 trainer/policy/mean Min -0.999435 trainer/policy/normal/std Mean 0.390747 trainer/policy/normal/std Std 0.189596 trainer/policy/normal/std Max 0.966309 trainer/policy/normal/std Min 0.0730311 trainer/policy/normal/log_std Mean -1.08977 trainer/policy/normal/log_std Std 0.590726 trainer/policy/normal/log_std Max -0.0342715 trainer/policy/normal/log_std Min -2.61687 eval/num steps total 112513 eval/num paths total 215 eval/path length Mean 500 eval/path length Std 0 eval/path length Max 500 eval/path length Min 500 eval/Rewards Mean 3.13562 eval/Rewards Std 0.765392 eval/Rewards Max 4.66118 eval/Rewards Min 0.983189 eval/Returns Mean 1567.81 eval/Returns Std 0 eval/Returns Max 1567.81 eval/Returns Min 1567.81 eval/Actions Mean 0.152336 eval/Actions Std 0.594846 eval/Actions Max 0.998325 eval/Actions Min -0.99934 eval/Num Paths 1 eval/Average Returns 1567.81 eval/normalized_score 48.7954 time/evaluation sampling (s) 0.88563 time/logging (s) 0.00230135 time/sampling batch (s) 0.271501 time/saving (s) 0.00298717 time/training (s) 4.26514 time/epoch (s) 5.42756 time/total (s) 31150.9 Epoch -830 ---------------------------------- --------------- 2022-05-10 21:49:43.776002 PDT | [0] Epoch -829 finished ---------------------------------- --------------- epoch -829 
replay_buffer/size 999996 trainer/num train calls 172000 trainer/Policy Loss -1.99986 trainer/Log Pis Mean 2.04158 trainer/Log Pis Std 2.39929 trainer/Log Pis Max 11.0141 trainer/Log Pis Min -3.30932 trainer/policy/mean Mean 0.137067 trainer/policy/mean Std 0.598304 trainer/policy/mean Max 0.996705 trainer/policy/mean Min -0.998058 trainer/policy/normal/std Mean 0.386082 trainer/policy/normal/std Std 0.183326 trainer/policy/normal/std Max 0.932418 trainer/policy/normal/std Min 0.069504 trainer/policy/normal/log_std Mean -1.09251 trainer/policy/normal/log_std Std 0.569038 trainer/policy/normal/log_std Max -0.0699739 trainer/policy/normal/log_std Min -2.66637 eval/num steps total 113513 eval/num paths total 216 eval/path length Mean 1000 eval/path length Std 0 eval/path length Max 1000 eval/path length Min 1000 eval/Rewards Mean 3.23871 eval/Rewards Std 0.551625 eval/Rewards Max 4.21417 eval/Rewards Min 0.989585 eval/Returns Mean 3238.71 eval/Returns Std 0 eval/Returns Max 3238.71 eval/Returns Min 3238.71 eval/Actions Mean 0.169944 eval/Actions Std 0.616657 eval/Actions Max 0.998053 eval/Actions Min -0.998257 eval/Num Paths 1 eval/Average Returns 3238.71 eval/normalized_score 100.136 time/evaluation sampling (s) 0.881793 time/logging (s) 0.00364638 time/sampling batch (s) 0.27206 time/saving (s) 0.00310609 time/training (s) 4.24979 time/epoch (s) 5.41039 time/total (s) 31156.3 Epoch -829 ---------------------------------- --------------- 2022-05-10 21:49:49.208748 PDT | [0] Epoch -828 finished ---------------------------------- --------------- epoch -828 replay_buffer/size 999996 trainer/num train calls 173000 trainer/Policy Loss -2.05378 trainer/Log Pis Mean 2.15502 trainer/Log Pis Std 2.51604 trainer/Log Pis Max 9.52671 trainer/Log Pis Min -3.67879 trainer/policy/mean Mean 0.152696 trainer/policy/mean Std 0.618697 trainer/policy/mean Max 0.997401 trainer/policy/mean Min -0.996624 trainer/policy/normal/std Mean 0.394352 trainer/policy/normal/std Std 0.179658 
trainer/policy/normal/std Max 0.905749 trainer/policy/normal/std Min 0.0747693
trainer/policy/normal/log_std Mean -1.06012 trainer/policy/normal/log_std Std 0.545584 trainer/policy/normal/log_std Max -0.0989928 trainer/policy/normal/log_std Min -2.59335
eval/num steps total 114154 eval/num paths total 217
eval/path length Mean 641 eval/path length Std 0 eval/path length Max 641 eval/path length Min 641
eval/Rewards Mean 3.27266 eval/Rewards Std 0.739247 eval/Rewards Max 4.83224 eval/Rewards Min 0.985221
eval/Returns Mean 2097.78 eval/Returns Std 0 eval/Returns Max 2097.78 eval/Returns Min 2097.78
eval/Actions Mean 0.153203 eval/Actions Std 0.584678 eval/Actions Max 0.997884 eval/Actions Min -0.998543
eval/Num Paths 1 eval/Average Returns 2097.78 eval/normalized_score 65.0792
time/evaluation sampling (s) 0.887143 time/logging (s) 0.00264416 time/sampling batch (s) 0.271436 time/saving (s) 0.00306747 time/training (s) 4.24727 time/epoch (s) 5.41156 time/total (s) 31161.8
Epoch -828
---------------------------------- ---------------
2022-05-10 21:49:54.715002 PDT | [0] Epoch -827 finished
---------------------------------- ---------------
epoch -827
replay_buffer/size 999996
trainer/num train calls 174000
trainer/Policy Loss -1.79732
trainer/Log Pis Mean 2.05884 trainer/Log Pis Std 2.45953 trainer/Log Pis Max 10.2954 trainer/Log Pis Min -4.02319
trainer/policy/mean Mean 0.155802 trainer/policy/mean Std 0.598255 trainer/policy/mean Max 0.997704 trainer/policy/mean Min -0.996886
trainer/policy/normal/std Mean 0.382314 trainer/policy/normal/std Std 0.180331 trainer/policy/normal/std Max 0.945551 trainer/policy/normal/std Min 0.0682966
trainer/policy/normal/log_std Mean -1.10307 trainer/policy/normal/log_std Std 0.572897 trainer/policy/normal/log_std Max -0.055987 trainer/policy/normal/log_std Min -2.68389
eval/num steps total 114655 eval/num paths total 218
eval/path length Mean 501 eval/path length Std 0 eval/path length Max 501 eval/path length Min 501
eval/Rewards
Mean 3.13534 eval/Rewards Std 0.743269 eval/Rewards Max 4.6942 eval/Rewards Min 0.983147
eval/Returns Mean 1570.81 eval/Returns Std 0 eval/Returns Max 1570.81 eval/Returns Min 1570.81
eval/Actions Mean 0.145623 eval/Actions Std 0.592206 eval/Actions Max 0.998022 eval/Actions Min -0.996624
eval/Num Paths 1 eval/Average Returns 1570.81 eval/normalized_score 48.8875
time/evaluation sampling (s) 0.970423 time/logging (s) 0.00232257 time/sampling batch (s) 0.270375 time/saving (s) 0.00296242 time/training (s) 4.23962 time/epoch (s) 5.4857 time/total (s) 31167.2
Epoch -827
---------------------------------- ---------------
2022-05-10 21:50:00.151403 PDT | [0] Epoch -826 finished
---------------------------------- ---------------
epoch -826
replay_buffer/size 999996
trainer/num train calls 175000
trainer/Policy Loss -2.35195
trainer/Log Pis Mean 2.12624 trainer/Log Pis Std 2.47448 trainer/Log Pis Max 10.0157 trainer/Log Pis Min -5.13097
trainer/policy/mean Mean 0.141538 trainer/policy/mean Std 0.616888 trainer/policy/mean Max 0.997848 trainer/policy/mean Min -0.99812
trainer/policy/normal/std Mean 0.391759 trainer/policy/normal/std Std 0.187968 trainer/policy/normal/std Max 0.950697 trainer/policy/normal/std Min 0.0706515
trainer/policy/normal/log_std Mean -1.08443 trainer/policy/normal/log_std Std 0.584916 trainer/policy/normal/log_std Max -0.0505603 trainer/policy/normal/log_std Min -2.65
eval/num steps total 115642 eval/num paths total 220
eval/path length Mean 493.5 eval/path length Std 0.5 eval/path length Max 494 eval/path length Min 493
eval/Rewards Mean 3.06883 eval/Rewards Std 0.781043 eval/Rewards Max 4.68828 eval/Rewards Min 0.986732
eval/Returns Mean 1514.47 eval/Returns Std 1.30849 eval/Returns Max 1515.77 eval/Returns Min 1513.16
eval/Actions Mean 0.151884 eval/Actions Std 0.582341 eval/Actions Max 0.996607 eval/Actions Min -0.999143
eval/Num Paths 2 eval/Average Returns 1514.47 eval/normalized_score 47.1564
time/evaluation sampling (s) 0.890761 time/logging
(s) 0.00369652 time/sampling batch (s) 0.271622 time/saving (s) 0.00305556 time/training (s) 4.24835 time/epoch (s) 5.41748 time/total (s) 31172.7
Epoch -826
---------------------------------- ---------------
2022-05-10 21:50:05.575384 PDT | [0] Epoch -825 finished
---------------------------------- ---------------
epoch -825
replay_buffer/size 999996
trainer/num train calls 176000
trainer/Policy Loss -2.08548
trainer/Log Pis Mean 2.18965 trainer/Log Pis Std 2.62269 trainer/Log Pis Max 9.55256 trainer/Log Pis Min -3.44674
trainer/policy/mean Mean 0.123749 trainer/policy/mean Std 0.614254 trainer/policy/mean Max 0.997816 trainer/policy/mean Min -0.997792
trainer/policy/normal/std Mean 0.382871 trainer/policy/normal/std Std 0.18232 trainer/policy/normal/std Max 0.994664 trainer/policy/normal/std Min 0.0692699
trainer/policy/normal/log_std Mean -1.10382 trainer/policy/normal/log_std Std 0.576439 trainer/policy/normal/log_std Max -0.00535033 trainer/policy/normal/log_std Min -2.66975
eval/num steps total 116370 eval/num paths total 221
eval/path length Mean 728 eval/path length Std 0 eval/path length Max 728 eval/path length Min 728
eval/Rewards Mean 3.23928 eval/Rewards Std 0.720222 eval/Rewards Max 4.78296 eval/Rewards Min 0.993896
eval/Returns Mean 2358.19 eval/Returns Std 0 eval/Returns Max 2358.19 eval/Returns Min 2358.19
eval/Actions Mean 0.160403 eval/Actions Std 0.602509 eval/Actions Max 0.998234 eval/Actions Min -0.998244
eval/Num Paths 1 eval/Average Returns 2358.19 eval/normalized_score 73.0807
time/evaluation sampling (s) 0.905667 time/logging (s) 0.00306791 time/sampling batch (s) 0.270971 time/saving (s) 0.0031427 time/training (s) 4.22016 time/epoch (s) 5.403 time/total (s) 31178.1
Epoch -825
---------------------------------- ---------------
2022-05-10 21:50:11.065648 PDT | [0] Epoch -824 finished
---------------------------------- ---------------
epoch -824
replay_buffer/size 999996
trainer/num train calls 177000
trainer/Policy Loss -2.08216
trainer/Log Pis Mean 1.90308 trainer/Log Pis Std 2.54719 trainer/Log Pis Max 14.4923 trainer/Log Pis Min -4.95854
trainer/policy/mean Mean 0.149985 trainer/policy/mean Std 0.611475 trainer/policy/mean Max 0.996564 trainer/policy/mean Min -0.998337
trainer/policy/normal/std Mean 0.391462 trainer/policy/normal/std Std 0.179347 trainer/policy/normal/std Max 0.882796 trainer/policy/normal/std Min 0.0715421
trainer/policy/normal/log_std Mean -1.07209 trainer/policy/normal/log_std Std 0.558115 trainer/policy/normal/log_std Max -0.124661 trainer/policy/normal/log_std Min -2.63747
eval/num steps total 116900 eval/num paths total 222
eval/path length Mean 530 eval/path length Std 0 eval/path length Max 530 eval/path length Min 530
eval/Rewards Mean 3.21637 eval/Rewards Std 0.853328 eval/Rewards Max 5.48316 eval/Rewards Min 0.993302
eval/Returns Mean 1704.67 eval/Returns Std 0 eval/Returns Max 1704.67 eval/Returns Min 1704.67
eval/Actions Mean 0.15221 eval/Actions Std 0.580276 eval/Actions Max 0.997901 eval/Actions Min -0.998005
eval/Num Paths 1 eval/Average Returns 1704.67 eval/normalized_score 53.0007
time/evaluation sampling (s) 0.928399 time/logging (s) 0.00247276 time/sampling batch (s) 0.272698 time/saving (s) 0.00315418 time/training (s) 4.26259 time/epoch (s) 5.46932 time/total (s) 31183.5
Epoch -824
---------------------------------- ---------------
2022-05-10 21:50:16.546735 PDT | [0] Epoch -823 finished
---------------------------------- ---------------
epoch -823
replay_buffer/size 999996
trainer/num train calls 178000
trainer/Policy Loss -2.03191
trainer/Log Pis Mean 2.04977 trainer/Log Pis Std 2.53279 trainer/Log Pis Max 13.8588 trainer/Log Pis Min -5.08786
trainer/policy/mean Mean 0.125556 trainer/policy/mean Std 0.61289 trainer/policy/mean Max 0.994776 trainer/policy/mean Min -0.999109
trainer/policy/normal/std Mean 0.382029 trainer/policy/normal/std Std 0.177487 trainer/policy/normal/std Max 0.939999 trainer/policy/normal/std Min 0.0704742
trainer/policy/normal/log_std Mean -1.09821 trainer/policy/normal/log_std Std 0.558791 trainer/policy/normal/log_std Max -0.0618767 trainer/policy/normal/log_std Min -2.65251
eval/num steps total 117397 eval/num paths total 223
eval/path length Mean 497 eval/path length Std 0 eval/path length Max 497 eval/path length Min 497
eval/Rewards Mean 3.08068 eval/Rewards Std 0.784762 eval/Rewards Max 4.74145 eval/Rewards Min 0.986721
eval/Returns Mean 1531.1 eval/Returns Std 0 eval/Returns Max 1531.1 eval/Returns Min 1531.1
eval/Actions Mean 0.15998 eval/Actions Std 0.58216 eval/Actions Max 0.995898 eval/Actions Min -0.999335
eval/Num Paths 1 eval/Average Returns 1531.1 eval/normalized_score 47.6675
time/evaluation sampling (s) 0.918685 time/logging (s) 0.00246063 time/sampling batch (s) 0.271667 time/saving (s) 0.00333975 time/training (s) 4.26452 time/epoch (s) 5.46068 time/total (s) 31189
Epoch -823
---------------------------------- ---------------
2022-05-10 21:50:21.945498 PDT | [0] Epoch -822 finished
---------------------------------- ---------------
epoch -822
replay_buffer/size 999996
trainer/num train calls 179000
trainer/Policy Loss -2.26196
trainer/Log Pis Mean 2.15563 trainer/Log Pis Std 2.65016 trainer/Log Pis Max 10.2888 trainer/Log Pis Min -4.53327
trainer/policy/mean Mean 0.125778 trainer/policy/mean Std 0.621413 trainer/policy/mean Max 0.999018 trainer/policy/mean Min -0.998134
trainer/policy/normal/std Mean 0.391161 trainer/policy/normal/std Std 0.184553 trainer/policy/normal/std Max 0.951588 trainer/policy/normal/std Min 0.0689137
trainer/policy/normal/log_std Mean -1.0777 trainer/policy/normal/log_std Std 0.565294 trainer/policy/normal/log_std Max -0.0496232 trainer/policy/normal/log_std Min -2.6749
eval/num steps total 117795 eval/num paths total 224
eval/path length Mean 398 eval/path length Std 0 eval/path length Max 398 eval/path length Min 398
eval/Rewards Mean 3.06222 eval/Rewards Std 0.860696 eval/Rewards Max 4.65597 eval/Rewards Min 0.983573
eval/Returns Mean 1218.76 eval/Returns Std 0 eval/Returns Max 1218.76 eval/Returns Min 1218.76
eval/Actions Mean 0.14393 eval/Actions Std 0.58156 eval/Actions Max 0.992692 eval/Actions Min -0.999065
eval/Num Paths 1 eval/Average Returns 1218.76 eval/normalized_score 38.0706
time/evaluation sampling (s) 0.898401 time/logging (s) 0.00207137 time/sampling batch (s) 0.268326 time/saving (s) 0.00307871 time/training (s) 4.20619 time/epoch (s) 5.37807 time/total (s) 31194.4
Epoch -822
---------------------------------- ---------------
2022-05-10 21:50:27.371190 PDT | [0] Epoch -821 finished
---------------------------------- ---------------
epoch -821
replay_buffer/size 999996
trainer/num train calls 180000
trainer/Policy Loss -2.13124
trainer/Log Pis Mean 2.09437 trainer/Log Pis Std 2.54507 trainer/Log Pis Max 10.3609 trainer/Log Pis Min -7.23486
trainer/policy/mean Mean 0.0961639 trainer/policy/mean Std 0.622318 trainer/policy/mean Max 0.996872 trainer/policy/mean Min -0.998374
trainer/policy/normal/std Mean 0.384817 trainer/policy/normal/std Std 0.184983 trainer/policy/normal/std Max 0.990676 trainer/policy/normal/std Min 0.0705562
trainer/policy/normal/log_std Mean -1.09827 trainer/policy/normal/log_std Std 0.572828 trainer/policy/normal/log_std Max -0.00936721 trainer/policy/normal/log_std Min -2.65135
eval/num steps total 118754 eval/num paths total 226
eval/path length Mean 479.5 eval/path length Std 1.5 eval/path length Max 481 eval/path length Min 478
eval/Rewards Mean 3.08249 eval/Rewards Std 0.837078 eval/Rewards Max 4.92225 eval/Rewards Min 0.983076
eval/Returns Mean 1478.05 eval/Returns Std 5.66245 eval/Returns Max 1483.72 eval/Returns Min 1472.39
eval/Actions Mean 0.138572 eval/Actions Std 0.573891 eval/Actions Max 0.997609 eval/Actions Min -0.998506
eval/Num Paths 2 eval/Average Returns 1478.05 eval/normalized_score 46.0376
time/evaluation sampling (s) 0.889546 time/logging (s) 0.00357279 time/sampling batch (s) 0.270949 time/saving (s) 0.00297676
time/training (s) 4.23998 time/epoch (s) 5.40703 time/total (s) 31199.8
Epoch -821
---------------------------------- ---------------
2022-05-10 21:50:32.775677 PDT | [0] Epoch -820 finished
---------------------------------- ---------------
epoch -820
replay_buffer/size 999996
trainer/num train calls 181000
trainer/Policy Loss -2.0307
trainer/Log Pis Mean 2.13956 trainer/Log Pis Std 2.46325 trainer/Log Pis Max 11.6426 trainer/Log Pis Min -5.00983
trainer/policy/mean Mean 0.143941 trainer/policy/mean Std 0.608664 trainer/policy/mean Max 0.995691 trainer/policy/mean Min -0.99813
trainer/policy/normal/std Mean 0.385674 trainer/policy/normal/std Std 0.183712 trainer/policy/normal/std Max 0.968804 trainer/policy/normal/std Min 0.071304
trainer/policy/normal/log_std Mean -1.09857 trainer/policy/normal/log_std Std 0.58335 trainer/policy/normal/log_std Max -0.0316925 trainer/policy/normal/log_std Min -2.6408
eval/num steps total 119714 eval/num paths total 229
eval/path length Mean 320 eval/path length Std 18.3848 eval/path length Max 342 eval/path length Min 297
eval/Rewards Mean 2.97128 eval/Rewards Std 1.02467 eval/Rewards Max 5.22273 eval/Rewards Min 0.98771
eval/Returns Mean 950.809 eval/Returns Std 34.556 eval/Returns Max 989.404 eval/Returns Min 905.55
eval/Actions Mean 0.0892056 eval/Actions Std 0.523779 eval/Actions Max 0.999412 eval/Actions Min -0.999752
eval/Num Paths 3 eval/Average Returns 950.809 eval/normalized_score 29.8375
time/evaluation sampling (s) 0.896269 time/logging (s) 0.00363507 time/sampling batch (s) 0.270009 time/saving (s) 0.00297027 time/training (s) 4.21142 time/epoch (s) 5.3843 time/total (s) 31205.2
Epoch -820
---------------------------------- ---------------
2022-05-10 21:50:38.188632 PDT | [0] Epoch -819 finished
---------------------------------- ---------------
epoch -819
replay_buffer/size 999996
trainer/num train calls 182000
trainer/Policy Loss -2.08116
trainer/Log Pis Mean 2.19627 trainer/Log Pis Std 2.55979 trainer/Log Pis Max
9.81025 trainer/Log Pis Min -4.97472
trainer/policy/mean Mean 0.166484 trainer/policy/mean Std 0.612116 trainer/policy/mean Max 0.999014 trainer/policy/mean Min -0.99618
trainer/policy/normal/std Mean 0.381348 trainer/policy/normal/std Std 0.182426 trainer/policy/normal/std Max 0.941938 trainer/policy/normal/std Min 0.0753187
trainer/policy/normal/log_std Mean -1.1064 trainer/policy/normal/log_std Std 0.571462 trainer/policy/normal/log_std Max -0.0598155 trainer/policy/normal/log_std Min -2.58603
eval/num steps total 120346 eval/num paths total 230
eval/path length Mean 632 eval/path length Std 0 eval/path length Max 632 eval/path length Min 632
eval/Rewards Mean 3.21697 eval/Rewards Std 0.765902 eval/Rewards Max 4.72082 eval/Rewards Min 0.991241
eval/Returns Mean 2033.13 eval/Returns Std 0 eval/Returns Max 2033.13 eval/Returns Min 2033.13
eval/Actions Mean 0.153396 eval/Actions Std 0.602994 eval/Actions Max 0.998405 eval/Actions Min -0.998608
eval/Num Paths 1 eval/Average Returns 2033.13 eval/normalized_score 63.0927
time/evaluation sampling (s) 0.881239 time/logging (s) 0.00268854 time/sampling batch (s) 0.270759 time/saving (s) 0.00311518 time/training (s) 4.23397 time/epoch (s) 5.39177 time/total (s) 31210.6
Epoch -819
---------------------------------- ---------------
2022-05-10 21:50:43.631917 PDT | [0] Epoch -818 finished
---------------------------------- ---------------
epoch -818
replay_buffer/size 999996
trainer/num train calls 183000
trainer/Policy Loss -2.03195
trainer/Log Pis Mean 1.99753 trainer/Log Pis Std 2.54338 trainer/Log Pis Max 9.30498 trainer/Log Pis Min -4.19748
trainer/policy/mean Mean 0.169445 trainer/policy/mean Std 0.598249 trainer/policy/mean Max 0.998267 trainer/policy/mean Min -0.996936
trainer/policy/normal/std Mean 0.388952 trainer/policy/normal/std Std 0.184853 trainer/policy/normal/std Max 0.937246 trainer/policy/normal/std Min 0.0639083
trainer/policy/normal/log_std Mean -1.08944 trainer/policy/normal/log_std Std 0.582148
trainer/policy/normal/log_std Max -0.0648096 trainer/policy/normal/log_std Min -2.75031
eval/num steps total 120912 eval/num paths total 231
eval/path length Mean 566 eval/path length Std 0 eval/path length Max 566 eval/path length Min 566
eval/Rewards Mean 3.22058 eval/Rewards Std 0.746088 eval/Rewards Max 4.74606 eval/Rewards Min 0.986395
eval/Returns Mean 1822.85 eval/Returns Std 0 eval/Returns Max 1822.85 eval/Returns Min 1822.85
eval/Actions Mean 0.159849 eval/Actions Std 0.604786 eval/Actions Max 0.997069 eval/Actions Min -0.998136
eval/Num Paths 1 eval/Average Returns 1822.85 eval/normalized_score 56.6317
time/evaluation sampling (s) 0.880746 time/logging (s) 0.00252766 time/sampling batch (s) 0.271955 time/saving (s) 0.00298233 time/training (s) 4.26462 time/epoch (s) 5.42284 time/total (s) 31216
Epoch -818
---------------------------------- ---------------
2022-05-10 21:50:49.074480 PDT | [0] Epoch -817 finished
---------------------------------- ---------------
epoch -817
replay_buffer/size 999996
trainer/num train calls 184000
trainer/Policy Loss -2.17849
trainer/Log Pis Mean 2.21565 trainer/Log Pis Std 2.61816 trainer/Log Pis Max 9.75435 trainer/Log Pis Min -5.0247
trainer/policy/mean Mean 0.17374 trainer/policy/mean Std 0.621471 trainer/policy/mean Max 0.998028 trainer/policy/mean Min -0.997381
trainer/policy/normal/std Mean 0.387071 trainer/policy/normal/std Std 0.185072 trainer/policy/normal/std Max 0.951847 trainer/policy/normal/std Min 0.0760614
trainer/policy/normal/log_std Mean -1.09324 trainer/policy/normal/log_std Std 0.576673 trainer/policy/normal/log_std Max -0.0493509 trainer/policy/normal/log_std Min -2.57621
eval/num steps total 121800 eval/num paths total 233
eval/path length Mean 444 eval/path length Std 7 eval/path length Max 451 eval/path length Min 437
eval/Rewards Mean 3.18971 eval/Rewards Std 0.888367 eval/Rewards Max 5.51061 eval/Rewards Min 0.983349
eval/Returns Mean 1416.23 eval/Returns Std 25.9885 eval/Returns Max 1442.22
eval/Returns Min 1390.24
eval/Actions Mean 0.130834 eval/Actions Std 0.5784 eval/Actions Max 0.997421 eval/Actions Min -0.998339
eval/Num Paths 2 eval/Average Returns 1416.23 eval/normalized_score 44.138
time/evaluation sampling (s) 0.89451 time/logging (s) 0.0034344 time/sampling batch (s) 0.270944 time/saving (s) 0.00299765 time/training (s) 4.25139 time/epoch (s) 5.42328 time/total (s) 31221.4
Epoch -817
---------------------------------- ---------------
2022-05-10 21:50:54.528842 PDT | [0] Epoch -816 finished
---------------------------------- ---------------
epoch -816
replay_buffer/size 999996
trainer/num train calls 185000
trainer/Policy Loss -2.3921
trainer/Log Pis Mean 2.41158 trainer/Log Pis Std 2.65801 trainer/Log Pis Max 10.8582 trainer/Log Pis Min -3.83288
trainer/policy/mean Mean 0.106051 trainer/policy/mean Std 0.632365 trainer/policy/mean Max 0.998478 trainer/policy/mean Min -0.998426
trainer/policy/normal/std Mean 0.384071 trainer/policy/normal/std Std 0.183163 trainer/policy/normal/std Max 0.97856 trainer/policy/normal/std Min 0.0702086
trainer/policy/normal/log_std Mean -1.10093 trainer/policy/normal/log_std Std 0.577631 trainer/policy/normal/log_std Max -0.0216731 trainer/policy/normal/log_std Min -2.65628
eval/num steps total 122357 eval/num paths total 234
eval/path length Mean 557 eval/path length Std 0 eval/path length Max 557 eval/path length Min 557
eval/Rewards Mean 3.16495 eval/Rewards Std 0.856645 eval/Rewards Max 4.94112 eval/Rewards Min 0.985559
eval/Returns Mean 1762.88 eval/Returns Std 0 eval/Returns Max 1762.88 eval/Returns Min 1762.88
eval/Actions Mean 0.158509 eval/Actions Std 0.574721 eval/Actions Max 0.997073 eval/Actions Min -0.998703
eval/Num Paths 1 eval/Average Returns 1762.88 eval/normalized_score 54.7891
time/evaluation sampling (s) 0.897287 time/logging (s) 0.0024719 time/sampling batch (s) 0.277191 time/saving (s) 0.00299324 time/training (s) 4.25299 time/epoch (s) 5.43294 time/total (s) 31226.9
Epoch -816
---------------------------------- ---------------
2022-05-10 21:50:59.982268 PDT | [0] Epoch -815 finished
---------------------------------- ---------------
epoch -815
replay_buffer/size 999996
trainer/num train calls 186000
trainer/Policy Loss -2.23653
trainer/Log Pis Mean 2.2494 trainer/Log Pis Std 2.70018 trainer/Log Pis Max 11.0907 trainer/Log Pis Min -8.05067
trainer/policy/mean Mean 0.164216 trainer/policy/mean Std 0.620873 trainer/policy/mean Max 0.995886 trainer/policy/mean Min -0.998832
trainer/policy/normal/std Mean 0.389795 trainer/policy/normal/std Std 0.181716 trainer/policy/normal/std Max 0.991088 trainer/policy/normal/std Min 0.0706473
trainer/policy/normal/log_std Mean -1.08371 trainer/policy/normal/log_std Std 0.575632 trainer/policy/normal/log_std Max -0.00895168 trainer/policy/normal/log_std Min -2.65006
eval/num steps total 122979 eval/num paths total 235
eval/path length Mean 622 eval/path length Std 0 eval/path length Max 622 eval/path length Min 622
eval/Rewards Mean 3.22173 eval/Rewards Std 0.768897 eval/Rewards Max 4.85963 eval/Rewards Min 0.992265
eval/Returns Mean 2003.92 eval/Returns Std 0 eval/Returns Max 2003.92 eval/Returns Min 2003.92
eval/Actions Mean 0.142882 eval/Actions Std 0.577523 eval/Actions Max 0.998182 eval/Actions Min -0.998749
eval/Num Paths 1 eval/Average Returns 2003.92 eval/normalized_score 62.1952
time/evaluation sampling (s) 0.894337 time/logging (s) 0.00266593 time/sampling batch (s) 0.270821 time/saving (s) 0.00300799 time/training (s) 4.26239 time/epoch (s) 5.43322 time/total (s) 31232.3
Epoch -815
---------------------------------- ---------------
2022-05-10 21:51:05.394519 PDT | [0] Epoch -814 finished
---------------------------------- ---------------
epoch -814
replay_buffer/size 999996
trainer/num train calls 187000
trainer/Policy Loss -2.17061
trainer/Log Pis Mean 2.24586 trainer/Log Pis Std 2.60412 trainer/Log Pis Max 10.2502 trainer/Log Pis Min -4.72414
trainer/policy/mean Mean 0.169607
trainer/policy/mean Std 0.619307 trainer/policy/mean Max 0.997938 trainer/policy/mean Min -0.997676
trainer/policy/normal/std Mean 0.391533 trainer/policy/normal/std Std 0.186133 trainer/policy/normal/std Max 1.03193 trainer/policy/normal/std Min 0.0726981
trainer/policy/normal/log_std Mean -1.08172 trainer/policy/normal/log_std Std 0.578205 trainer/policy/normal/log_std Max 0.0314349 trainer/policy/normal/log_std Min -2.62144
eval/num steps total 123479 eval/num paths total 236
eval/path length Mean 500 eval/path length Std 0 eval/path length Max 500 eval/path length Min 500
eval/Rewards Mean 3.1015 eval/Rewards Std 0.754805 eval/Rewards Max 4.71335 eval/Rewards Min 0.99025
eval/Returns Mean 1550.75 eval/Returns Std 0 eval/Returns Max 1550.75 eval/Returns Min 1550.75
eval/Actions Mean 0.154404 eval/Actions Std 0.585866 eval/Actions Max 0.997664 eval/Actions Min -0.996601
eval/Num Paths 1 eval/Average Returns 1550.75 eval/normalized_score 48.2712
time/evaluation sampling (s) 0.897286 time/logging (s) 0.00226207 time/sampling batch (s) 0.270145 time/saving (s) 0.00295038 time/training (s) 4.21888 time/epoch (s) 5.39152 time/total (s) 31237.7
Epoch -814
---------------------------------- ---------------
2022-05-10 21:51:10.806407 PDT | [0] Epoch -813 finished
---------------------------------- ---------------
epoch -813
replay_buffer/size 999996
trainer/num train calls 188000
trainer/Policy Loss -2.33403
trainer/Log Pis Mean 2.20941 trainer/Log Pis Std 2.69782 trainer/Log Pis Max 11.0142 trainer/Log Pis Min -4.70519
trainer/policy/mean Mean 0.128561 trainer/policy/mean Std 0.627496 trainer/policy/mean Max 0.996367 trainer/policy/mean Min -0.998237
trainer/policy/normal/std Mean 0.395513 trainer/policy/normal/std Std 0.189998 trainer/policy/normal/std Max 1.05987 trainer/policy/normal/std Min 0.0713588
trainer/policy/normal/log_std Mean -1.0728 trainer/policy/normal/log_std Std 0.579047 trainer/policy/normal/log_std Max 0.058145 trainer/policy/normal/log_std Min
-2.64003
eval/num steps total 124432 eval/num paths total 238
eval/path length Mean 476.5 eval/path length Std 37.5 eval/path length Max 514 eval/path length Min 439
eval/Rewards Mean 3.18933 eval/Rewards Std 0.842419 eval/Rewards Max 4.89224 eval/Rewards Min 0.98973
eval/Returns Mean 1519.71 eval/Returns Std 118.462 eval/Returns Max 1638.18 eval/Returns Min 1401.25
eval/Actions Mean 0.143424 eval/Actions Std 0.583888 eval/Actions Max 0.997156 eval/Actions Min -0.998622
eval/Num Paths 2 eval/Average Returns 1519.71 eval/normalized_score 47.3176
time/evaluation sampling (s) 0.912694 time/logging (s) 0.00362458 time/sampling batch (s) 0.268382 time/saving (s) 0.00323595 time/training (s) 4.20536 time/epoch (s) 5.3933 time/total (s) 31243.1
Epoch -813
---------------------------------- ---------------
2022-05-10 21:51:16.257205 PDT | [0] Epoch -812 finished
---------------------------------- ---------------
epoch -812
replay_buffer/size 999996
trainer/num train calls 189000
trainer/Policy Loss -2.13834
trainer/Log Pis Mean 2.30903 trainer/Log Pis Std 2.71394 trainer/Log Pis Max 10.0186 trainer/Log Pis Min -5.33593
trainer/policy/mean Mean 0.157308 trainer/policy/mean Std 0.624552 trainer/policy/mean Max 0.998328 trainer/policy/mean Min -0.997567
trainer/policy/normal/std Mean 0.387599 trainer/policy/normal/std Std 0.187809 trainer/policy/normal/std Max 0.955039 trainer/policy/normal/std Min 0.0721337
trainer/policy/normal/log_std Mean -1.09444 trainer/policy/normal/log_std Std 0.58005 trainer/policy/normal/log_std Max -0.0460033 trainer/policy/normal/log_std Min -2.62923
eval/num steps total 124878 eval/num paths total 239
eval/path length Mean 446 eval/path length Std 0 eval/path length Max 446 eval/path length Min 446
eval/Rewards Mean 3.19523 eval/Rewards Std 0.911112 eval/Rewards Max 5.36716 eval/Rewards Min 0.986595
eval/Returns Mean 1425.07 eval/Returns Std 0 eval/Returns Max 1425.07 eval/Returns Min 1425.07
eval/Actions Mean 0.136558 eval/Actions Std 0.576204
eval/Actions Max 0.995758 eval/Actions Min -0.998597
eval/Num Paths 1 eval/Average Returns 1425.07 eval/normalized_score 44.4097
time/evaluation sampling (s) 0.925024 time/logging (s) 0.0022358 time/sampling batch (s) 0.269216 time/saving (s) 0.00305977 time/training (s) 4.22946 time/epoch (s) 5.42899 time/total (s) 31248.5
Epoch -812
---------------------------------- ---------------
2022-05-10 21:51:21.711959 PDT | [0] Epoch -811 finished
---------------------------------- ---------------
epoch -811
replay_buffer/size 999996
trainer/num train calls 190000
trainer/Policy Loss -2.1155
trainer/Log Pis Mean 2.21719 trainer/Log Pis Std 2.64715 trainer/Log Pis Max 15.6927 trainer/Log Pis Min -4.28826
trainer/policy/mean Mean 0.12998 trainer/policy/mean Std 0.620982 trainer/policy/mean Max 0.998361 trainer/policy/mean Min -0.998847
trainer/policy/normal/std Mean 0.402332 trainer/policy/normal/std Std 0.194318 trainer/policy/normal/std Max 1.01269 trainer/policy/normal/std Min 0.0656412
trainer/policy/normal/log_std Mean -1.05783 trainer/policy/normal/log_std Std 0.584276 trainer/policy/normal/log_std Max 0.0126127 trainer/policy/normal/log_std Min -2.72355
eval/num steps total 125454 eval/num paths total 240
eval/path length Mean 576 eval/path length Std 0 eval/path length Max 576 eval/path length Min 576
eval/Rewards Mean 3.19581 eval/Rewards Std 0.711243 eval/Rewards Max 4.82521 eval/Rewards Min 0.987985
eval/Returns Mean 1840.79 eval/Returns Std 0 eval/Returns Max 1840.79 eval/Returns Min 1840.79
eval/Actions Mean 0.145809 eval/Actions Std 0.598237 eval/Actions Max 0.998032 eval/Actions Min -0.99813
eval/Num Paths 1 eval/Average Returns 1840.79 eval/normalized_score 57.183
time/evaluation sampling (s) 0.909223 time/logging (s) 0.00246227 time/sampling batch (s) 0.270288 time/saving (s) 0.00310867 time/training (s) 4.24979 time/epoch (s) 5.43488 time/total (s) 31254
Epoch -811
---------------------------------- ---------------
2022-05-10 21:51:27.151707 PDT | [0]
Epoch -810 finished
---------------------------------- ---------------
epoch -810
replay_buffer/size 999996
trainer/num train calls 191000
trainer/Policy Loss -2.12575
trainer/Log Pis Mean 2.19709 trainer/Log Pis Std 2.64589 trainer/Log Pis Max 12.9752 trainer/Log Pis Min -4.31988
trainer/policy/mean Mean 0.134946 trainer/policy/mean Std 0.620744 trainer/policy/mean Max 0.997821 trainer/policy/mean Min -0.996277
trainer/policy/normal/std Mean 0.38267 trainer/policy/normal/std Std 0.182189 trainer/policy/normal/std Max 0.998128 trainer/policy/normal/std Min 0.0711936
trainer/policy/normal/log_std Mean -1.10376 trainer/policy/normal/log_std Std 0.574082 trainer/policy/normal/log_std Max -0.00187382 trainer/policy/normal/log_std Min -2.64235
eval/num steps total 125925 eval/num paths total 241
eval/path length Mean 471 eval/path length Std 0 eval/path length Max 471 eval/path length Min 471
eval/Rewards Mean 3.08267 eval/Rewards Std 0.859147 eval/Rewards Max 4.69982 eval/Rewards Min 0.992259
eval/Returns Mean 1451.94 eval/Returns Std 0 eval/Returns Max 1451.94 eval/Returns Min 1451.94
eval/Actions Mean 0.138036 eval/Actions Std 0.573149 eval/Actions Max 0.998075 eval/Actions Min -0.998637
eval/Num Paths 1 eval/Average Returns 1451.94 eval/normalized_score 45.2351
time/evaluation sampling (s) 0.911406 time/logging (s) 0.00223997 time/sampling batch (s) 0.270185 time/saving (s) 0.00313955 time/training (s) 4.23211 time/epoch (s) 5.41908 time/total (s) 31259.4
Epoch -810
---------------------------------- ---------------
2022-05-10 21:51:32.584898 PDT | [0] Epoch -809 finished
---------------------------------- ---------------
epoch -809
replay_buffer/size 999996
trainer/num train calls 192000
trainer/Policy Loss -2.04021
trainer/Log Pis Mean 2.07924 trainer/Log Pis Std 2.42972 trainer/Log Pis Max 9.37289 trainer/Log Pis Min -3.2713
trainer/policy/mean Mean 0.130651 trainer/policy/mean Std 0.61266 trainer/policy/mean Max 0.997625 trainer/policy/mean Min -0.997685
trainer/policy/normal/std Mean 0.385737 trainer/policy/normal/std Std 0.184209 trainer/policy/normal/std Max 0.985698 trainer/policy/normal/std Min 0.0718656
trainer/policy/normal/log_std Mean -1.09823 trainer/policy/normal/log_std Std 0.581843 trainer/policy/normal/log_std Max -0.0144055 trainer/policy/normal/log_std Min -2.63296
eval/num steps total 126446 eval/num paths total 242
eval/path length Mean 521 eval/path length Std 0 eval/path length Max 521 eval/path length Min 521
eval/Rewards Mean 3.0969 eval/Rewards Std 0.83796 eval/Rewards Max 5.30541 eval/Rewards Min 0.989289
eval/Returns Mean 1613.48 eval/Returns Std 0 eval/Returns Max 1613.48 eval/Returns Min 1613.48
eval/Actions Mean 0.134553 eval/Actions Std 0.568491 eval/Actions Max 0.997276 eval/Actions Min -0.997946
eval/Num Paths 1 eval/Average Returns 1613.48 eval/normalized_score 50.1988
time/evaluation sampling (s) 0.88685 time/logging (s) 0.00232834 time/sampling batch (s) 0.270126 time/saving (s) 0.00294946 time/training (s) 4.25075 time/epoch (s) 5.413 time/total (s) 31264.8
Epoch -809
---------------------------------- ---------------
2022-05-10 21:51:38.020121 PDT | [0] Epoch -808 finished
---------------------------------- ---------------
epoch -808
replay_buffer/size 999996
trainer/num train calls 193000
trainer/Policy Loss -2.07282
trainer/Log Pis Mean 2.07289 trainer/Log Pis Std 2.65389 trainer/Log Pis Max 13.0328 trainer/Log Pis Min -3.62437
trainer/policy/mean Mean 0.137434 trainer/policy/mean Std 0.615423 trainer/policy/mean Max 0.997115 trainer/policy/mean Min -0.99819
trainer/policy/normal/std Mean 0.390949 trainer/policy/normal/std Std 0.182585 trainer/policy/normal/std Max 0.948135 trainer/policy/normal/std Min 0.076319
trainer/policy/normal/log_std Mean -1.07822 trainer/policy/normal/log_std Std 0.567735 trainer/policy/normal/log_std Max -0.0532581 trainer/policy/normal/log_std Min -2.57283
eval/num steps total 127025 eval/num paths total 243
eval/path length Mean 579 eval/path length
Std 0 eval/path length Max 579 eval/path length Min 579
eval/Rewards Mean 3.16026 eval/Rewards Std 0.720718 eval/Rewards Max 4.77124 eval/Rewards Min 0.987926
eval/Returns Mean 1829.79 eval/Returns Std 0 eval/Returns Max 1829.79 eval/Returns Min 1829.79
eval/Actions Mean 0.155273 eval/Actions Std 0.601629 eval/Actions Max 0.996887 eval/Actions Min -0.99812
eval/Num Paths 1 eval/Average Returns 1829.79 eval/normalized_score 56.8451
time/evaluation sampling (s) 0.953193 time/logging (s) 0.00252679 time/sampling batch (s) 0.266696 time/saving (s) 0.00297877 time/training (s) 4.18985 time/epoch (s) 5.41525 time/total (s) 31270.2
Epoch -808
---------------------------------- ---------------
2022-05-10 21:51:43.395473 PDT | [0] Epoch -807 finished
---------------------------------- ---------------
epoch -807
replay_buffer/size 999996
trainer/num train calls 194000
trainer/Policy Loss -2.09349
trainer/Log Pis Mean 2.01781 trainer/Log Pis Std 2.64045 trainer/Log Pis Max 9.87148 trainer/Log Pis Min -6.95808
trainer/policy/mean Mean 0.125103 trainer/policy/mean Std 0.620792 trainer/policy/mean Max 0.998235 trainer/policy/mean Min -0.99663
trainer/policy/normal/std Mean 0.388232 trainer/policy/normal/std Std 0.183307 trainer/policy/normal/std Max 0.956115 trainer/policy/normal/std Min 0.0733763
trainer/policy/normal/log_std Mean -1.0877 trainer/policy/normal/log_std Std 0.573051 trainer/policy/normal/log_std Max -0.044877 trainer/policy/normal/log_std Min -2.61216
eval/num steps total 127941 eval/num paths total 245
eval/path length Mean 458 eval/path length Std 197 eval/path length Max 655 eval/path length Min 261
eval/Rewards Mean 3.13818 eval/Rewards Std 0.805365 eval/Rewards Max 5.19979 eval/Rewards Min 0.985032
eval/Returns Mean 1437.29 eval/Returns Std 657.787 eval/Returns Max 2095.07 eval/Returns Min 779.5
eval/Actions Mean 0.131325 eval/Actions Std 0.579045 eval/Actions Max 0.99878 eval/Actions Min -0.997909
eval/Num Paths 2 eval/Average Returns 1437.29
eval/normalized_score 44.785 time/evaluation sampling (s) 0.883815 time/logging (s) 0.00360912 time/sampling batch (s) 0.26775 time/saving (s) 0.00309194 time/training (s) 4.19805 time/epoch (s) 5.35632 time/total (s) 31275.6 Epoch -807 ---------------------------------- --------------- 2022-05-10 21:51:48.754561 PDT | [0] Epoch -806 finished ---------------------------------- --------------- epoch -806 replay_buffer/size 999996 trainer/num train calls 195000 trainer/Policy Loss -2.24941 trainer/Log Pis Mean 2.12676 trainer/Log Pis Std 2.58488 trainer/Log Pis Max 12.0907 trainer/Log Pis Min -4.96779 trainer/policy/mean Mean 0.129222 trainer/policy/mean Std 0.619546 trainer/policy/mean Max 0.998196 trainer/policy/mean Min -0.998583 trainer/policy/normal/std Mean 0.397663 trainer/policy/normal/std Std 0.192915 trainer/policy/normal/std Max 0.979613 trainer/policy/normal/std Min 0.0683548 trainer/policy/normal/log_std Mean -1.07246 trainer/policy/normal/log_std Std 0.591585 trainer/policy/normal/log_std Max -0.0205974 trainer/policy/normal/log_std Min -2.68304 eval/num steps total 128440 eval/num paths total 246 eval/path length Mean 499 eval/path length Std 0 eval/path length Max 499 eval/path length Min 499 eval/Rewards Mean 3.15691 eval/Rewards Std 0.751143 eval/Rewards Max 4.73491 eval/Rewards Min 0.985895 eval/Returns Mean 1575.3 eval/Returns Std 0 eval/Returns Max 1575.3 eval/Returns Min 1575.3 eval/Actions Mean 0.151999 eval/Actions Std 0.597203 eval/Actions Max 0.998386 eval/Actions Min -0.996491 eval/Num Paths 1 eval/Average Returns 1575.3 eval/normalized_score 49.0255 time/evaluation sampling (s) 0.878265 time/logging (s) 0.00244581 time/sampling batch (s) 0.266386 time/saving (s) 0.00327447 time/training (s) 4.18738 time/epoch (s) 5.33775 time/total (s) 31280.9 Epoch -806 ---------------------------------- --------------- 2022-05-10 21:51:54.128782 PDT | [0] Epoch -805 finished ---------------------------------- --------------- epoch -805 replay_buffer/size 
999996 trainer/num train calls 196000 trainer/Policy Loss -2.1477 trainer/Log Pis Mean 2.17407 trainer/Log Pis Std 2.566 trainer/Log Pis Max 15.2527 trainer/Log Pis Min -3.23834 trainer/policy/mean Mean 0.159358 trainer/policy/mean Std 0.6123 trainer/policy/mean Max 0.997991 trainer/policy/mean Min -0.998953 trainer/policy/normal/std Mean 0.383962 trainer/policy/normal/std Std 0.181289 trainer/policy/normal/std Max 0.965859 trainer/policy/normal/std Min 0.0763414 trainer/policy/normal/log_std Mean -1.09913 trainer/policy/normal/log_std Std 0.572931 trainer/policy/normal/log_std Max -0.0347374 trainer/policy/normal/log_std Min -2.57254 eval/num steps total 129023 eval/num paths total 247 eval/path length Mean 583 eval/path length Std 0 eval/path length Max 583 eval/path length Min 583 eval/Rewards Mean 3.20191 eval/Rewards Std 0.75904 eval/Rewards Max 4.94752 eval/Rewards Min 0.988022 eval/Returns Mean 1866.72 eval/Returns Std 0 eval/Returns Max 1866.72 eval/Returns Min 1866.72 eval/Actions Mean 0.159813 eval/Actions Std 0.591744 eval/Actions Max 0.996292 eval/Actions Min -0.998207 eval/Num Paths 1 eval/Average Returns 1866.72 eval/normalized_score 57.9796 time/evaluation sampling (s) 0.880307 time/logging (s) 0.00248898 time/sampling batch (s) 0.266363 time/saving (s) 0.00307324 time/training (s) 4.20177 time/epoch (s) 5.354 time/total (s) 31286.3 Epoch -805 ---------------------------------- --------------- 2022-05-10 21:51:59.502556 PDT | [0] Epoch -804 finished ---------------------------------- --------------- epoch -804 replay_buffer/size 999996 trainer/num train calls 197000 trainer/Policy Loss -2.08907 trainer/Log Pis Mean 2.0412 trainer/Log Pis Std 2.42139 trainer/Log Pis Max 8.90245 trainer/Log Pis Min -3.68415 trainer/policy/mean Mean 0.160968 trainer/policy/mean Std 0.616125 trainer/policy/mean Max 0.998194 trainer/policy/mean Min -0.996841 trainer/policy/normal/std Mean 0.402595 trainer/policy/normal/std Std 0.185399 trainer/policy/normal/std Max 
0.956094 trainer/policy/normal/std Min 0.0760126 trainer/policy/normal/log_std Mean -1.04467 trainer/policy/normal/log_std Std 0.558526 trainer/policy/normal/log_std Max -0.044899 trainer/policy/normal/log_std Min -2.57686 eval/num steps total 129620 eval/num paths total 248 eval/path length Mean 597 eval/path length Std 0 eval/path length Max 597 eval/path length Min 597 eval/Rewards Mean 3.25711 eval/Rewards Std 0.755472 eval/Rewards Max 5.29894 eval/Rewards Min 0.988704 eval/Returns Mean 1944.49 eval/Returns Std 0 eval/Returns Max 1944.49 eval/Returns Min 1944.49 eval/Actions Mean 0.160861 eval/Actions Std 0.600282 eval/Actions Max 0.998016 eval/Actions Min -0.998024 eval/Num Paths 1 eval/Average Returns 1944.49 eval/normalized_score 60.3694 time/evaluation sampling (s) 0.885325 time/logging (s) 0.00261136 time/sampling batch (s) 0.267163 time/saving (s) 0.00313197 time/training (s) 4.19545 time/epoch (s) 5.35368 time/total (s) 31291.6 Epoch -804 ---------------------------------- --------------- 2022-05-10 21:52:04.854032 PDT | [0] Epoch -803 finished ---------------------------------- --------------- epoch -803 replay_buffer/size 999996 trainer/num train calls 198000 trainer/Policy Loss -1.99357 trainer/Log Pis Mean 1.98986 trainer/Log Pis Std 2.53682 trainer/Log Pis Max 9.5299 trainer/Log Pis Min -4.70758 trainer/policy/mean Mean 0.131634 trainer/policy/mean Std 0.599648 trainer/policy/mean Max 0.9952 trainer/policy/mean Min -0.997613 trainer/policy/normal/std Mean 0.387439 trainer/policy/normal/std Std 0.185647 trainer/policy/normal/std Max 0.953662 trainer/policy/normal/std Min 0.0743881 trainer/policy/normal/log_std Mean -1.0946 trainer/policy/normal/log_std Std 0.582569 trainer/policy/normal/log_std Max -0.0474456 trainer/policy/normal/log_std Min -2.59846 eval/num steps total 130197 eval/num paths total 249 eval/path length Mean 577 eval/path length Std 0 eval/path length Max 577 eval/path length Min 577 eval/Rewards Mean 3.16974 eval/Rewards Std 
0.774832 eval/Rewards Max 4.86011 eval/Rewards Min 0.98494 eval/Returns Mean 1828.94 eval/Returns Std 0 eval/Returns Max 1828.94 eval/Returns Min 1828.94 eval/Actions Mean 0.160836 eval/Actions Std 0.59146 eval/Actions Max 0.996464 eval/Actions Min -0.998302 eval/Num Paths 1 eval/Average Returns 1828.94 eval/normalized_score 56.8189 time/evaluation sampling (s) 0.881344 time/logging (s) 0.00250696 time/sampling batch (s) 0.265167 time/saving (s) 0.0029701 time/training (s) 4.17927 time/epoch (s) 5.33126 time/total (s) 31297 Epoch -803 ---------------------------------- --------------- 2022-05-10 21:52:10.221425 PDT | [0] Epoch -802 finished ---------------------------------- --------------- epoch -802 replay_buffer/size 999996 trainer/num train calls 199000 trainer/Policy Loss -1.90215 trainer/Log Pis Mean 2.13959 trainer/Log Pis Std 2.41244 trainer/Log Pis Max 9.29756 trainer/Log Pis Min -5.9172 trainer/policy/mean Mean 0.154385 trainer/policy/mean Std 0.606222 trainer/policy/mean Max 0.996367 trainer/policy/mean Min -0.996602 trainer/policy/normal/std Mean 0.383295 trainer/policy/normal/std Std 0.184639 trainer/policy/normal/std Max 0.921439 trainer/policy/normal/std Min 0.0648892 trainer/policy/normal/log_std Mean -1.1077 trainer/policy/normal/log_std Std 0.588315 trainer/policy/normal/log_std Max -0.0818187 trainer/policy/normal/log_std Min -2.73507 eval/num steps total 130826 eval/num paths total 250 eval/path length Mean 629 eval/path length Std 0 eval/path length Max 629 eval/path length Min 629 eval/Rewards Mean 3.20934 eval/Rewards Std 0.775157 eval/Rewards Max 4.6788 eval/Rewards Min 0.987141 eval/Returns Mean 2018.68 eval/Returns Std 0 eval/Returns Max 2018.68 eval/Returns Min 2018.68 eval/Actions Mean 0.148638 eval/Actions Std 0.577369 eval/Actions Max 0.996758 eval/Actions Min -0.998501 eval/Num Paths 1 eval/Average Returns 2018.68 eval/normalized_score 62.6487 time/evaluation sampling (s) 0.883203 time/logging (s) 0.00266594 time/sampling batch (s) 
0.26621 time/saving (s) 0.00294627 time/training (s) 4.1924 time/epoch (s) 5.34743 time/total (s) 31302.3 Epoch -802 ---------------------------------- --------------- 2022-05-10 21:52:15.577688 PDT | [0] Epoch -801 finished ---------------------------------- --------------- epoch -801 replay_buffer/size 999996 trainer/num train calls 200000 trainer/Policy Loss -2.21689 trainer/Log Pis Mean 2.28059 trainer/Log Pis Std 2.49372 trainer/Log Pis Max 10.4853 trainer/Log Pis Min -7.40564 trainer/policy/mean Mean 0.176417 trainer/policy/mean Std 0.618269 trainer/policy/mean Max 0.996333 trainer/policy/mean Min -0.996381 trainer/policy/normal/std Mean 0.382187 trainer/policy/normal/std Std 0.181178 trainer/policy/normal/std Max 0.917283 trainer/policy/normal/std Min 0.0719806 trainer/policy/normal/log_std Mean -1.10609 trainer/policy/normal/log_std Std 0.579066 trainer/policy/normal/log_std Max -0.0863395 trainer/policy/normal/log_std Min -2.63136 eval/num steps total 131384 eval/num paths total 251 eval/path length Mean 558 eval/path length Std 0 eval/path length Max 558 eval/path length Min 558 eval/Rewards Mean 3.14811 eval/Rewards Std 0.83722 eval/Rewards Max 4.70239 eval/Rewards Min 0.985193 eval/Returns Mean 1756.64 eval/Returns Std 0 eval/Returns Max 1756.64 eval/Returns Min 1756.64 eval/Actions Mean 0.154728 eval/Actions Std 0.583848 eval/Actions Max 0.996911 eval/Actions Min -0.998552 eval/Num Paths 1 eval/Average Returns 1756.64 eval/normalized_score 54.5975 time/evaluation sampling (s) 0.891834 time/logging (s) 0.00245908 time/sampling batch (s) 0.265508 time/saving (s) 0.00545679 time/training (s) 4.17066 time/epoch (s) 5.33592 time/total (s) 31307.7 Epoch -801 ---------------------------------- --------------- 2022-05-10 21:52:20.957043 PDT | [0] Epoch -800 finished ---------------------------------- --------------- epoch -800 replay_buffer/size 999996 trainer/num train calls 201000 trainer/Policy Loss -2.08164 trainer/Log Pis Mean 2.05481 trainer/Log Pis Std 
2.68492 trainer/Log Pis Max 13.1372 trainer/Log Pis Min -7.62772 trainer/policy/mean Mean 0.140144 trainer/policy/mean Std 0.613159 trainer/policy/mean Max 0.997723 trainer/policy/mean Min -0.998454 trainer/policy/normal/std Mean 0.393715 trainer/policy/normal/std Std 0.192023 trainer/policy/normal/std Max 1.01252 trainer/policy/normal/std Min 0.0717015 trainer/policy/normal/log_std Mean -1.08183 trainer/policy/normal/log_std Std 0.587273 trainer/policy/normal/log_std Max 0.012444 trainer/policy/normal/log_std Min -2.63524 eval/num steps total 131924 eval/num paths total 252 eval/path length Mean 540 eval/path length Std 0 eval/path length Max 540 eval/path length Min 540 eval/Rewards Mean 3.15665 eval/Rewards Std 0.862726 eval/Rewards Max 5.25523 eval/Rewards Min 0.989013 eval/Returns Mean 1704.59 eval/Returns Std 0 eval/Returns Max 1704.59 eval/Returns Min 1704.59 eval/Actions Mean 0.141119 eval/Actions Std 0.564067 eval/Actions Max 0.995934 eval/Actions Min -0.998645 eval/Num Paths 1 eval/Average Returns 1704.59 eval/normalized_score 52.9982 time/evaluation sampling (s) 0.924966 time/logging (s) 0.00250039 time/sampling batch (s) 0.264505 time/saving (s) 0.00295017 time/training (s) 4.1647 time/epoch (s) 5.35963 time/total (s) 31313 Epoch -800 ---------------------------------- --------------- 2022-05-10 21:52:26.330139 PDT | [0] Epoch -799 finished ---------------------------------- --------------- epoch -799 replay_buffer/size 999996 trainer/num train calls 202000 trainer/Policy Loss -2.27652 trainer/Log Pis Mean 2.21725 trainer/Log Pis Std 2.81276 trainer/Log Pis Max 9.9755 trainer/Log Pis Min -7.88165 trainer/policy/mean Mean 0.129782 trainer/policy/mean Std 0.632787 trainer/policy/mean Max 0.997962 trainer/policy/mean Min -0.998173 trainer/policy/normal/std Mean 0.39859 trainer/policy/normal/std Std 0.183517 trainer/policy/normal/std Max 0.915478 trainer/policy/normal/std Min 0.0681627 trainer/policy/normal/log_std Mean -1.05797 
trainer/policy/normal/log_std Std 0.569068 trainer/policy/normal/log_std Max -0.0883086 trainer/policy/normal/log_std Min -2.68586 eval/num steps total 132328 eval/num paths total 253 eval/path length Mean 404 eval/path length Std 0 eval/path length Max 404 eval/path length Min 404 eval/Rewards Mean 3.04027 eval/Rewards Std 0.779981 eval/Rewards Max 4.4732 eval/Rewards Min 0.982313 eval/Returns Mean 1228.27 eval/Returns Std 0 eval/Returns Max 1228.27 eval/Returns Min 1228.27 eval/Actions Mean 0.140795 eval/Actions Std 0.589667 eval/Actions Max 0.996014 eval/Actions Min -0.997225 eval/Num Paths 1 eval/Average Returns 1228.27 eval/normalized_score 38.3626 time/evaluation sampling (s) 0.896479 time/logging (s) 0.00208086 time/sampling batch (s) 0.265811 time/saving (s) 0.00299166 time/training (s) 4.18537 time/epoch (s) 5.35274 time/total (s) 31318.4 Epoch -799 ---------------------------------- --------------- 2022-05-10 21:52:31.708670 PDT | [0] Epoch -798 finished ---------------------------------- --------------- epoch -798 replay_buffer/size 999996 trainer/num train calls 203000 trainer/Policy Loss -2.20267 trainer/Log Pis Mean 2.33332 trainer/Log Pis Std 2.52948 trainer/Log Pis Max 13.0032 trainer/Log Pis Min -4.00331 trainer/policy/mean Mean 0.169408 trainer/policy/mean Std 0.617489 trainer/policy/mean Max 0.998622 trainer/policy/mean Min -0.998455 trainer/policy/normal/std Mean 0.387111 trainer/policy/normal/std Std 0.184467 trainer/policy/normal/std Max 0.970161 trainer/policy/normal/std Min 0.0730213 trainer/policy/normal/log_std Mean -1.09419 trainer/policy/normal/log_std Std 0.580784 trainer/policy/normal/log_std Max -0.0302937 trainer/policy/normal/log_std Min -2.617 eval/num steps total 132929 eval/num paths total 254 eval/path length Mean 601 eval/path length Std 0 eval/path length Max 601 eval/path length Min 601 eval/Rewards Mean 3.15286 eval/Rewards Std 0.797854 eval/Rewards Max 5.40902 eval/Rewards Min 0.985224 eval/Returns Mean 1894.87 eval/Returns 
Std 0 eval/Returns Max 1894.87 eval/Returns Min 1894.87 eval/Actions Mean 0.139775 eval/Actions Std 0.578137 eval/Actions Max 0.997491 eval/Actions Min -0.997738 eval/Num Paths 1 eval/Average Returns 1894.87 eval/normalized_score 58.8446 time/evaluation sampling (s) 0.89338 time/logging (s) 0.00260026 time/sampling batch (s) 0.266019 time/saving (s) 0.00306019 time/training (s) 4.19396 time/epoch (s) 5.35902 time/total (s) 31323.7 Epoch -798 ---------------------------------- --------------- 2022-05-10 21:52:37.054446 PDT | [0] Epoch -797 finished ---------------------------------- --------------- epoch -797 replay_buffer/size 999996 trainer/num train calls 204000 trainer/Policy Loss -2.17569 trainer/Log Pis Mean 2.13528 trainer/Log Pis Std 2.60507 trainer/Log Pis Max 8.63315 trainer/Log Pis Min -9.14827 trainer/policy/mean Mean 0.154653 trainer/policy/mean Std 0.612493 trainer/policy/mean Max 0.9977 trainer/policy/mean Min -0.997875 trainer/policy/normal/std Mean 0.391046 trainer/policy/normal/std Std 0.186809 trainer/policy/normal/std Max 0.956367 trainer/policy/normal/std Min 0.07394 trainer/policy/normal/log_std Mean -1.08132 trainer/policy/normal/log_std Std 0.571842 trainer/policy/normal/log_std Max -0.0446137 trainer/policy/normal/log_std Min -2.6045 eval/num steps total 133458 eval/num paths total 255 eval/path length Mean 529 eval/path length Std 0 eval/path length Max 529 eval/path length Min 529 eval/Rewards Mean 3.20739 eval/Rewards Std 0.861847 eval/Rewards Max 5.31481 eval/Rewards Min 0.984066 eval/Returns Mean 1696.71 eval/Returns Std 0 eval/Returns Max 1696.71 eval/Returns Min 1696.71 eval/Actions Mean 0.150531 eval/Actions Std 0.581357 eval/Actions Max 0.998315 eval/Actions Min -0.998515 eval/Num Paths 1 eval/Average Returns 1696.71 eval/normalized_score 52.7561 time/evaluation sampling (s) 0.890143 time/logging (s) 0.00236944 time/sampling batch (s) 0.263672 time/saving (s) 0.00293256 time/training (s) 4.16662 time/epoch (s) 5.32573 time/total (s) 
31329.1 Epoch -797 ---------------------------------- --------------- 2022-05-10 21:52:42.404396 PDT | [0] Epoch -796 finished ---------------------------------- --------------- epoch -796 replay_buffer/size 999996 trainer/num train calls 205000 trainer/Policy Loss -2.18348 trainer/Log Pis Mean 2.1556 trainer/Log Pis Std 2.56984 trainer/Log Pis Max 9.56155 trainer/Log Pis Min -3.36784 trainer/policy/mean Mean 0.162437 trainer/policy/mean Std 0.606757 trainer/policy/mean Max 0.997498 trainer/policy/mean Min -0.997887 trainer/policy/normal/std Mean 0.390653 trainer/policy/normal/std Std 0.184446 trainer/policy/normal/std Max 0.902042 trainer/policy/normal/std Min 0.0719316 trainer/policy/normal/log_std Mean -1.08394 trainer/policy/normal/log_std Std 0.579967 trainer/policy/normal/log_std Max -0.103095 trainer/policy/normal/log_std Min -2.63204 eval/num steps total 134432 eval/num paths total 257 eval/path length Mean 487 eval/path length Std 18 eval/path length Max 505 eval/path length Min 469 eval/Rewards Mean 3.17815 eval/Rewards Std 0.814052 eval/Rewards Max 4.84265 eval/Rewards Min 0.987385 eval/Returns Mean 1547.76 eval/Returns Std 43.4439 eval/Returns Max 1591.2 eval/Returns Min 1504.31 eval/Actions Mean 0.151809 eval/Actions Std 0.585381 eval/Actions Max 0.998434 eval/Actions Min -0.998381 eval/Num Paths 2 eval/Average Returns 1547.76 eval/normalized_score 48.1793 time/evaluation sampling (s) 0.87747 time/logging (s) 0.00383177 time/sampling batch (s) 0.265966 time/saving (s) 0.00314798 time/training (s) 4.18102 time/epoch (s) 5.33144 time/total (s) 31334.4 Epoch -796 ---------------------------------- --------------- 2022-05-10 21:52:47.767254 PDT | [0] Epoch -795 finished ---------------------------------- --------------- epoch -795 replay_buffer/size 999996 trainer/num train calls 206000 trainer/Policy Loss -2.21188 trainer/Log Pis Mean 2.33244 trainer/Log Pis Std 2.58608 trainer/Log Pis Max 9.51717 trainer/Log Pis Min -6.34329 trainer/policy/mean Mean 
0.15705 trainer/policy/mean Std 0.613582 trainer/policy/mean Max 0.998607 trainer/policy/mean Min -0.997491 trainer/policy/normal/std Mean 0.388464 trainer/policy/normal/std Std 0.185499 trainer/policy/normal/std Max 0.995189 trainer/policy/normal/std Min 0.073713 trainer/policy/normal/log_std Mean -1.08891 trainer/policy/normal/log_std Std 0.575056 trainer/policy/normal/log_std Max -0.00482314 trainer/policy/normal/log_std Min -2.60758 eval/num steps total 134926 eval/num paths total 258 eval/path length Mean 494 eval/path length Std 0 eval/path length Max 494 eval/path length Min 494 eval/Rewards Mean 3.06653 eval/Rewards Std 0.802262 eval/Rewards Max 4.74158 eval/Rewards Min 0.986712 eval/Returns Mean 1514.86 eval/Returns Std 0 eval/Returns Max 1514.86 eval/Returns Min 1514.86 eval/Actions Mean 0.158311 eval/Actions Std 0.57403 eval/Actions Max 0.99622 eval/Actions Min -0.999309 eval/Num Paths 1 eval/Average Returns 1514.86 eval/normalized_score 47.1686 time/evaluation sampling (s) 0.875926 time/logging (s) 0.00244997 time/sampling batch (s) 0.267239 time/saving (s) 0.00331702 time/training (s) 4.19228 time/epoch (s) 5.34121 time/total (s) 31339.8 Epoch -795 ---------------------------------- --------------- 2022-05-10 21:52:53.144122 PDT | [0] Epoch -794 finished ---------------------------------- --------------- epoch -794 replay_buffer/size 999996 trainer/num train calls 207000 trainer/Policy Loss -2.17457 trainer/Log Pis Mean 2.19453 trainer/Log Pis Std 2.58375 trainer/Log Pis Max 10.1744 trainer/Log Pis Min -3.88956 trainer/policy/mean Mean 0.159129 trainer/policy/mean Std 0.619505 trainer/policy/mean Max 0.998389 trainer/policy/mean Min -0.998035 trainer/policy/normal/std Mean 0.39103 trainer/policy/normal/std Std 0.182867 trainer/policy/normal/std Max 0.978114 trainer/policy/normal/std Min 0.0711444 trainer/policy/normal/log_std Mean -1.081 trainer/policy/normal/log_std Std 0.57614 trainer/policy/normal/log_std Max -0.0221287 trainer/policy/normal/log_std 
Min -2.64304 eval/num steps total 135870 eval/num paths total 260 eval/path length Mean 472 eval/path length Std 18 eval/path length Max 490 eval/path length Min 454 eval/Rewards Mean 3.15631 eval/Rewards Std 0.852101 eval/Rewards Max 5.2465 eval/Rewards Min 0.990985 eval/Returns Mean 1489.78 eval/Returns Std 39.8473 eval/Returns Max 1529.62 eval/Returns Min 1449.93 eval/Actions Mean 0.161536 eval/Actions Std 0.585265 eval/Actions Max 0.9984 eval/Actions Min -0.998574 eval/Num Paths 2 eval/Average Returns 1489.78 eval/normalized_score 46.3978 time/evaluation sampling (s) 0.886634 time/logging (s) 0.00349625 time/sampling batch (s) 0.266953 time/saving (s) 0.00300712 time/training (s) 4.19761 time/epoch (s) 5.3577 time/total (s) 31345.1 Epoch -794 ---------------------------------- --------------- 2022-05-10 21:52:58.516840 PDT | [0] Epoch -793 finished ---------------------------------- --------------- epoch -793 replay_buffer/size 999996 trainer/num train calls 208000 trainer/Policy Loss -2.26653 trainer/Log Pis Mean 2.09381 trainer/Log Pis Std 2.66138 trainer/Log Pis Max 10.0181 trainer/Log Pis Min -4.47484 trainer/policy/mean Mean 0.125547 trainer/policy/mean Std 0.619147 trainer/policy/mean Max 0.996427 trainer/policy/mean Min -0.999187 trainer/policy/normal/std Mean 0.390016 trainer/policy/normal/std Std 0.186325 trainer/policy/normal/std Max 1.00749 trainer/policy/normal/std Min 0.0740407 trainer/policy/normal/log_std Mean -1.08053 trainer/policy/normal/log_std Std 0.5612 trainer/policy/normal/log_std Max 0.00746378 trainer/policy/normal/log_std Min -2.60314 eval/num steps total 136847 eval/num paths total 262 eval/path length Mean 488.5 eval/path length Std 22.5 eval/path length Max 511 eval/path length Min 466 eval/Rewards Mean 3.18558 eval/Rewards Std 0.822796 eval/Rewards Max 5.01252 eval/Rewards Min 0.982335 eval/Returns Mean 1556.16 eval/Returns Std 60.9182 eval/Returns Max 1617.07 eval/Returns Min 1495.24 eval/Actions Mean 0.158044 eval/Actions Std 
0.582526 eval/Actions Max 0.998761 eval/Actions Min -0.998313 eval/Num Paths 2 eval/Average Returns 1556.16 eval/normalized_score 48.4373 time/evaluation sampling (s) 0.877169 time/logging (s) 0.00362857 time/sampling batch (s) 0.267622 time/saving (s) 0.00297942 time/training (s) 4.20128 time/epoch (s) 5.35268 time/total (s) 31350.5 Epoch -793 ---------------------------------- --------------- 2022-05-10 21:53:03.845771 PDT | [0] Epoch -792 finished ---------------------------------- --------------- epoch -792 replay_buffer/size 999996 trainer/num train calls 209000 trainer/Policy Loss -2.01112 trainer/Log Pis Mean 2.13807 trainer/Log Pis Std 2.5309 trainer/Log Pis Max 9.96801 trainer/Log Pis Min -4.53907 trainer/policy/mean Mean 0.161237 trainer/policy/mean Std 0.614713 trainer/policy/mean Max 0.998103 trainer/policy/mean Min -0.998379 trainer/policy/normal/std Mean 0.393339 trainer/policy/normal/std Std 0.185007 trainer/policy/normal/std Max 0.99439 trainer/policy/normal/std Min 0.0670644 trainer/policy/normal/log_std Mean -1.07374 trainer/policy/normal/log_std Std 0.571094 trainer/policy/normal/log_std Max -0.00562572 trainer/policy/normal/log_std Min -2.7021 eval/num steps total 137473 eval/num paths total 263 eval/path length Mean 626 eval/path length Std 0 eval/path length Max 626 eval/path length Min 626 eval/Rewards Mean 3.20197 eval/Rewards Std 0.768832 eval/Rewards Max 4.86788 eval/Rewards Min 0.988802 eval/Returns Mean 2004.43 eval/Returns Std 0 eval/Returns Max 2004.43 eval/Returns Min 2004.43 eval/Actions Mean 0.148916 eval/Actions Std 0.582247 eval/Actions Max 0.998306 eval/Actions Min -0.998021 eval/Num Paths 1 eval/Average Returns 2004.43 eval/normalized_score 62.2112 time/evaluation sampling (s) 0.879332 time/logging (s) 0.00265956 time/sampling batch (s) 0.264307 time/saving (s) 0.0030598 time/training (s) 4.15838 time/epoch (s) 5.30774 time/total (s) 31355.8 Epoch -792 ---------------------------------- --------------- 2022-05-10 21:53:09.174219 
PDT | [0] Epoch -791 finished ---------------------------------- -------------- epoch -791 replay_buffer/size 999996 trainer/num train calls 210000 trainer/Policy Loss -2.2565 trainer/Log Pis Mean 2.18591 trainer/Log Pis Std 2.58108 trainer/Log Pis Max 10.0442 trainer/Log Pis Min -6.90105 trainer/policy/mean Mean 0.159945 trainer/policy/mean Std 0.613863 trainer/policy/mean Max 0.997881 trainer/policy/mean Min -0.997966 trainer/policy/normal/std Mean 0.389728 trainer/policy/normal/std Std 0.192196 trainer/policy/normal/std Max 0.9787 trainer/policy/normal/std Min 0.0671344 trainer/policy/normal/log_std Mean -1.09334 trainer/policy/normal/log_std Std 0.58879 trainer/policy/normal/log_std Max -0.0215303 trainer/policy/normal/log_std Min -2.70106 eval/num steps total 137930 eval/num paths total 264 eval/path length Mean 457 eval/path length Std 0 eval/path length Max 457 eval/path length Min 457 eval/Rewards Mean 3.08552 eval/Rewards Std 0.893993 eval/Rewards Max 4.93044 eval/Rewards Min 0.982476 eval/Returns Mean 1410.08 eval/Returns Std 0 eval/Returns Max 1410.08 eval/Returns Min 1410.08 eval/Actions Mean 0.137051 eval/Actions Std 0.544569 eval/Actions Max 0.996891 eval/Actions Min -0.998437 eval/Num Paths 1 eval/Average Returns 1410.08 eval/normalized_score 43.9491 time/evaluation sampling (s) 0.881832 time/logging (s) 0.0021333 time/sampling batch (s) 0.263744 time/saving (s) 0.0029649 time/training (s) 4.15747 time/epoch (s) 5.30815 time/total (s) 31361.1 Epoch -791 ---------------------------------- -------------- 2022-05-10 21:53:14.538260 PDT | [0] Epoch -790 finished ---------------------------------- --------------- epoch -790 replay_buffer/size 999996 trainer/num train calls 211000 trainer/Policy Loss -2.25855 trainer/Log Pis Mean 2.23652 trainer/Log Pis Std 2.66376 trainer/Log Pis Max 9.63265 trainer/Log Pis Min -6.18318 trainer/policy/mean Mean 0.186019 trainer/policy/mean Std 0.61485 trainer/policy/mean Max 0.997834 trainer/policy/mean Min -0.998696 
trainer/policy/normal/std Mean 0.388425 trainer/policy/normal/std Std 0.184525 trainer/policy/normal/std Max 0.970312 trainer/policy/normal/std Min 0.0735818 trainer/policy/normal/log_std Mean -1.08775 trainer/policy/normal/log_std Std 0.572052 trainer/policy/normal/log_std Max -0.0301378 trainer/policy/normal/log_std Min -2.60936 eval/num steps total 138486 eval/num paths total 265 eval/path length Mean 556 eval/path length Std 0 eval/path length Max 556 eval/path length Min 556 eval/Rewards Mean 3.16947 eval/Rewards Std 0.835279 eval/Rewards Max 4.87141 eval/Rewards Min 0.988931 eval/Returns Mean 1762.23 eval/Returns Std 0 eval/Returns Max 1762.23 eval/Returns Min 1762.23 eval/Actions Mean 0.158847 eval/Actions Std 0.572353 eval/Actions Max 0.998424 eval/Actions Min -0.998596 eval/Num Paths 1 eval/Average Returns 1762.23 eval/normalized_score 54.7691 time/evaluation sampling (s) 0.87729 time/logging (s) 0.00238492 time/sampling batch (s) 0.266248 time/saving (s) 0.00301334 time/training (s) 4.19525 time/epoch (s) 5.34419 time/total (s) 31366.4 Epoch -790 ---------------------------------- --------------- 2022-05-10 21:53:19.945588 PDT | [0] Epoch -789 finished ---------------------------------- --------------- epoch -789 replay_buffer/size 999996 trainer/num train calls 212000 trainer/Policy Loss -2.21027 trainer/Log Pis Mean 2.183 trainer/Log Pis Std 2.60657 trainer/Log Pis Max 10.9275 trainer/Log Pis Min -5.62831 trainer/policy/mean Mean 0.127244 trainer/policy/mean Std 0.61905 trainer/policy/mean Max 0.99678 trainer/policy/mean Min -0.997965 trainer/policy/normal/std Mean 0.385079 trainer/policy/normal/std Std 0.179444 trainer/policy/normal/std Max 0.974676 trainer/policy/normal/std Min 0.075245 trainer/policy/normal/log_std Mean -1.09197 trainer/policy/normal/log_std Std 0.564048 trainer/policy/normal/log_std Max -0.0256505 trainer/policy/normal/log_std Min -2.58701 eval/num steps total 139090 eval/num paths total 266 eval/path length Mean 604 eval/path 
length Std 0 eval/path length Max 604 eval/path length Min 604 eval/Rewards Mean 3.20607 eval/Rewards Std 0.748042 eval/Rewards Max 5.34573 eval/Rewards Min 0.988812 eval/Returns Mean 1936.47 eval/Returns Std 0 eval/Returns Max 1936.47 eval/Returns Min 1936.47 eval/Actions Mean 0.151771 eval/Actions Std 0.590033 eval/Actions Max 0.997983 eval/Actions Min -0.998198 eval/Num Paths 1 eval/Average Returns 1936.47 eval/normalized_score 60.1228 time/evaluation sampling (s) 0.949633 time/logging (s) 0.00250347 time/sampling batch (s) 0.264543 time/saving (s) 0.00296358 time/training (s) 4.16768 time/epoch (s) 5.38732 time/total (s) 31371.8 Epoch -789 ---------------------------------- --------------- 2022-05-10 21:53:25.288521 PDT | [0] Epoch -788 finished ---------------------------------- --------------- epoch -788 replay_buffer/size 999996 trainer/num train calls 213000 trainer/Policy Loss -2.43171 trainer/Log Pis Mean 2.36226 trainer/Log Pis Std 2.60106 trainer/Log Pis Max 11.607 trainer/Log Pis Min -3.82597 trainer/policy/mean Mean 0.134506 trainer/policy/mean Std 0.631141 trainer/policy/mean Max 0.99656 trainer/policy/mean Min -0.997915 trainer/policy/normal/std Mean 0.392028 trainer/policy/normal/std Std 0.187847 trainer/policy/normal/std Max 0.979126 trainer/policy/normal/std Min 0.0698358 trainer/policy/normal/log_std Mean -1.08025 trainer/policy/normal/log_std Std 0.575398 trainer/policy/normal/log_std Max -0.0210952 trainer/policy/normal/log_std Min -2.66161 eval/num steps total 139757 eval/num paths total 267 eval/path length Mean 667 eval/path length Std 0 eval/path length Max 667 eval/path length Min 667 eval/Rewards Mean 3.21744 eval/Rewards Std 0.725621 eval/Rewards Max 5.43129 eval/Rewards Min 0.990945 eval/Returns Mean 2146.03 eval/Returns Std 0 eval/Returns Max 2146.03 eval/Returns Min 2146.03 eval/Actions Mean 0.150417 eval/Actions Std 0.606554 eval/Actions Max 0.998097 eval/Actions Min -0.998146 eval/Num Paths 1 eval/Average Returns 2146.03 
eval/normalized_score 66.562 time/evaluation sampling (s) 0.888736 time/logging (s) 0.00276191 time/sampling batch (s) 0.266873 time/saving (s) 0.00297844 time/training (s) 4.16178 time/epoch (s) 5.32313 time/total (s) 31377.2 Epoch -788 ---------------------------------- --------------- 2022-05-10 21:53:30.695087 PDT | [0] Epoch -787 finished ---------------------------------- --------------- epoch -787 replay_buffer/size 999996 trainer/num train calls 214000 trainer/Policy Loss -2.26958 trainer/Log Pis Mean 2.22025 trainer/Log Pis Std 2.6712 trainer/Log Pis Max 10.345 trainer/Log Pis Min -3.90933 trainer/policy/mean Mean 0.125251 trainer/policy/mean Std 0.623651 trainer/policy/mean Max 0.998042 trainer/policy/mean Min -0.998344 trainer/policy/normal/std Mean 0.388614 trainer/policy/normal/std Std 0.189272 trainer/policy/normal/std Max 1.0062 trainer/policy/normal/std Min 0.0696342 trainer/policy/normal/log_std Mean -1.09514 trainer/policy/normal/log_std Std 0.589406 trainer/policy/normal/log_std Max 0.00618544 trainer/policy/normal/log_std Min -2.6645 eval/num steps total 140307 eval/num paths total 268 eval/path length Mean 550 eval/path length Std 0 eval/path length Max 550 eval/path length Min 550 eval/Rewards Mean 3.17043 eval/Rewards Std 0.845962 eval/Rewards Max 5.0005 eval/Rewards Min 0.984902 eval/Returns Mean 1743.74 eval/Returns Std 0 eval/Returns Max 1743.74 eval/Returns Min 1743.74 eval/Actions Mean 0.156945 eval/Actions Std 0.569767 eval/Actions Max 0.997196 eval/Actions Min -0.998438 eval/Num Paths 1 eval/Average Returns 1743.74 eval/normalized_score 54.201 time/evaluation sampling (s) 0.883669 time/logging (s) 0.00250945 time/sampling batch (s) 0.268264 time/saving (s) 0.00316974 time/training (s) 4.22829 time/epoch (s) 5.3859 time/total (s) 31382.5 Epoch -787 ---------------------------------- --------------- 2022-05-10 21:53:36.080615 PDT | [0] Epoch -786 finished ---------------------------------- --------------- epoch -786 replay_buffer/size 
999996 trainer/num train calls 215000 trainer/Policy Loss -2.14948 trainer/Log Pis Mean 2.21428 trainer/Log Pis Std 2.49065 trainer/Log Pis Max 11.5548 trainer/Log Pis Min -5.84246 trainer/policy/mean Mean 0.140776 trainer/policy/mean Std 0.616812 trainer/policy/mean Max 0.997709 trainer/policy/mean Min -0.998454 trainer/policy/normal/std Mean 0.389371 trainer/policy/normal/std Std 0.187835 trainer/policy/normal/std Max 1.05158 trainer/policy/normal/std Min 0.0760515 trainer/policy/normal/log_std Mean -1.0882 trainer/policy/normal/log_std Std 0.576634 trainer/policy/normal/log_std Max 0.0502977 trainer/policy/normal/log_std Min -2.57634 eval/num steps total 141091 eval/num paths total 270 eval/path length Mean 392 eval/path length Std 7 eval/path length Max 399 eval/path length Min 385 eval/Rewards Mean 3.1059 eval/Rewards Std 0.907108 eval/Rewards Max 4.80862 eval/Rewards Min 0.983652 eval/Returns Mean 1217.51 eval/Returns Std 13.8342 eval/Returns Max 1231.35 eval/Returns Min 1203.68 eval/Actions Mean 0.142988 eval/Actions Std 0.575954 eval/Actions Max 0.997083 eval/Actions Min -0.999094 eval/Num Paths 2 eval/Average Returns 1217.51 eval/normalized_score 38.0322 time/evaluation sampling (s) 0.913796 time/logging (s) 0.00314793 time/sampling batch (s) 0.26607 time/saving (s) 0.00297839 time/training (s) 4.18006 time/epoch (s) 5.36605 time/total (s) 31387.9 Epoch -786 ---------------------------------- --------------- 2022-05-10 21:53:41.505199 PDT | [0] Epoch -785 finished ---------------------------------- --------------- epoch -785 replay_buffer/size 999996 trainer/num train calls 216000 trainer/Policy Loss -2.01306 trainer/Log Pis Mean 2.05087 trainer/Log Pis Std 2.5022 trainer/Log Pis Max 9.49757 trainer/Log Pis Min -3.43846 trainer/policy/mean Mean 0.166963 trainer/policy/mean Std 0.606107 trainer/policy/mean Max 0.996792 trainer/policy/mean Min -0.996749 trainer/policy/normal/std Mean 0.394075 trainer/policy/normal/std Std 0.185848 trainer/policy/normal/std 
Max 0.950664 trainer/policy/normal/std Min 0.0780851 trainer/policy/normal/log_std Mean -1.07183 trainer/policy/normal/log_std Std 0.570391 trainer/policy/normal/log_std Max -0.0505942 trainer/policy/normal/log_std Min -2.54996 eval/num steps total 141596 eval/num paths total 271 eval/path length Mean 505 eval/path length Std 0 eval/path length Max 505 eval/path length Min 505 eval/Rewards Mean 3.10109 eval/Rewards Std 0.759391 eval/Rewards Max 4.8202 eval/Rewards Min 0.984284 eval/Returns Mean 1566.05 eval/Returns Std 0 eval/Returns Max 1566.05 eval/Returns Min 1566.05 eval/Actions Mean 0.163784 eval/Actions Std 0.587393 eval/Actions Max 0.996822 eval/Actions Min -0.998465 eval/Num Paths 1 eval/Average Returns 1566.05 eval/normalized_score 48.7414 time/evaluation sampling (s) 0.906613 time/logging (s) 0.00273965 time/sampling batch (s) 0.268546 time/saving (s) 0.00350827 time/training (s) 4.22254 time/epoch (s) 5.40395 time/total (s) 31393.3 Epoch -785 ---------------------------------- --------------- 2022-05-10 21:53:46.908185 PDT | [0] Epoch -784 finished ---------------------------------- --------------- epoch -784 replay_buffer/size 999996 trainer/num train calls 217000 trainer/Policy Loss -2.2639 trainer/Log Pis Mean 2.17459 trainer/Log Pis Std 2.50003 trainer/Log Pis Max 10.1666 trainer/Log Pis Min -5.50521 trainer/policy/mean Mean 0.156375 trainer/policy/mean Std 0.61917 trainer/policy/mean Max 0.997117 trainer/policy/mean Min -0.998375 trainer/policy/normal/std Mean 0.380675 trainer/policy/normal/std Std 0.178345 trainer/policy/normal/std Max 0.927972 trainer/policy/normal/std Min 0.07395 trainer/policy/normal/log_std Mean -1.10806 trainer/policy/normal/log_std Std 0.57584 trainer/policy/normal/log_std Max -0.0747542 trainer/policy/normal/log_std Min -2.60437 eval/num steps total 142413 eval/num paths total 273 eval/path length Mean 408.5 eval/path length Std 4.5 eval/path length Max 413 eval/path length Min 404 eval/Rewards Mean 3.06079 eval/Rewards Std 
0.787132 eval/Rewards Max 4.75563 eval/Rewards Min 0.983142 eval/Returns Mean 1250.33 eval/Returns Std 17.6639 eval/Returns Max 1268 eval/Returns Min 1232.67 eval/Actions Mean 0.14874 eval/Actions Std 0.591844 eval/Actions Max 0.995996 eval/Actions Min -0.996756 eval/Num Paths 2 eval/Average Returns 1250.33 eval/normalized_score 39.0406 time/evaluation sampling (s) 0.912979 time/logging (s) 0.00323741 time/sampling batch (s) 0.269264 time/saving (s) 0.00319498 time/training (s) 4.1946 time/epoch (s) 5.38328 time/total (s) 31398.7 Epoch -784 ---------------------------------- --------------- 2022-05-10 21:53:52.291426 PDT | [0] Epoch -783 finished ---------------------------------- --------------- epoch -783 replay_buffer/size 999996 trainer/num train calls 218000 trainer/Policy Loss -2.15781 trainer/Log Pis Mean 2.1253 trainer/Log Pis Std 2.54284 trainer/Log Pis Max 9.18969 trainer/Log Pis Min -7.03955 trainer/policy/mean Mean 0.146153 trainer/policy/mean Std 0.610908 trainer/policy/mean Max 0.997632 trainer/policy/mean Min -0.998593 trainer/policy/normal/std Mean 0.384583 trainer/policy/normal/std Std 0.183596 trainer/policy/normal/std Max 0.986003 trainer/policy/normal/std Min 0.0732005 trainer/policy/normal/log_std Mean -1.09877 trainer/policy/normal/log_std Std 0.574005 trainer/policy/normal/log_std Max -0.0140957 trainer/policy/normal/log_std Min -2.61455 eval/num steps total 142985 eval/num paths total 274 eval/path length Mean 572 eval/path length Std 0 eval/path length Max 572 eval/path length Min 572 eval/Rewards Mean 3.1239 eval/Rewards Std 0.714361 eval/Rewards Max 4.41365 eval/Rewards Min 0.991408 eval/Returns Mean 1786.87 eval/Returns Std 0 eval/Returns Max 1786.87 eval/Returns Min 1786.87 eval/Actions Mean 0.157307 eval/Actions Std 0.604993 eval/Actions Max 0.997061 eval/Actions Min -0.998767 eval/Num Paths 1 eval/Average Returns 1786.87 eval/normalized_score 55.5263 time/evaluation sampling (s) 0.889281 time/logging (s) 0.00249829 time/sampling batch 
(s) 0.26682 time/saving (s) 0.00303381 time/training (s) 4.20058 time/epoch (s) 5.36222 time/total (s) 31404.1 Epoch -783 ---------------------------------- --------------- 2022-05-10 21:53:57.637508 PDT | [0] Epoch -782 finished ---------------------------------- --------------- epoch -782 replay_buffer/size 999996 trainer/num train calls 219000 trainer/Policy Loss -2.20954 trainer/Log Pis Mean 2.15421 trainer/Log Pis Std 2.60782 trainer/Log Pis Max 11.4313 trainer/Log Pis Min -4.9539 trainer/policy/mean Mean 0.114101 trainer/policy/mean Std 0.61398 trainer/policy/mean Max 0.997207 trainer/policy/mean Min -0.997638 trainer/policy/normal/std Mean 0.386852 trainer/policy/normal/std Std 0.18762 trainer/policy/normal/std Max 0.983866 trainer/policy/normal/std Min 0.0690999 trainer/policy/normal/log_std Mean -1.09693 trainer/policy/normal/log_std Std 0.581672 trainer/policy/normal/log_std Max -0.016266 trainer/policy/normal/log_std Min -2.6722 eval/num steps total 143718 eval/num paths total 275 eval/path length Mean 733 eval/path length Std 0 eval/path length Max 733 eval/path length Min 733 eval/Rewards Mean 3.24012 eval/Rewards Std 0.716748 eval/Rewards Max 4.67291 eval/Rewards Min 0.98434 eval/Returns Mean 2375.01 eval/Returns Std 0 eval/Returns Max 2375.01 eval/Returns Min 2375.01 eval/Actions Mean 0.153185 eval/Actions Std 0.607885 eval/Actions Max 0.99723 eval/Actions Min -0.998478 eval/Num Paths 1 eval/Average Returns 2375.01 eval/normalized_score 73.5974 time/evaluation sampling (s) 0.881337 time/logging (s) 0.00301493 time/sampling batch (s) 0.267225 time/saving (s) 0.00307023 time/training (s) 4.17191 time/epoch (s) 5.32655 time/total (s) 31409.4 Epoch -782 ---------------------------------- --------------- 2022-05-10 21:54:03.028420 PDT | [0] Epoch -781 finished ---------------------------------- --------------- epoch -781 replay_buffer/size 999996 trainer/num train calls 220000 trainer/Policy Loss -2.00777 trainer/Log Pis Mean 1.98087 trainer/Log Pis Std 
2.54534 trainer/Log Pis Max 9.70305 trainer/Log Pis Min -3.89492 trainer/policy/mean Mean 0.149704 trainer/policy/mean Std 0.613806 trainer/policy/mean Max 0.998182 trainer/policy/mean Min -0.99745 trainer/policy/normal/std Mean 0.404738 trainer/policy/normal/std Std 0.187793 trainer/policy/normal/std Max 1.01212 trainer/policy/normal/std Min 0.0774578 trainer/policy/normal/log_std Mean -1.04404 trainer/policy/normal/log_std Std 0.571859 trainer/policy/normal/log_std Max 0.0120431 trainer/policy/normal/log_std Min -2.55802 eval/num steps total 144625 eval/num paths total 277 eval/path length Mean 453.5 eval/path length Std 49.5 eval/path length Max 503 eval/path length Min 404 eval/Rewards Mean 3.17491 eval/Rewards Std 0.8578 eval/Rewards Max 5.04088 eval/Rewards Min 0.985732 eval/Returns Mean 1439.82 eval/Returns Std 149.123 eval/Returns Max 1588.95 eval/Returns Min 1290.7 eval/Actions Mean 0.131385 eval/Actions Std 0.577202 eval/Actions Max 0.998102 eval/Actions Min -0.999248 eval/Num Paths 2 eval/Average Returns 1439.82 eval/normalized_score 44.8629 time/evaluation sampling (s) 0.88306 time/logging (s) 0.00350551 time/sampling batch (s) 0.268659 time/saving (s) 0.00314609 time/training (s) 4.21227 time/epoch (s) 5.37064 time/total (s) 31414.8 Epoch -781 ---------------------------------- --------------- 2022-05-10 21:54:08.458978 PDT | [0] Epoch -780 finished ---------------------------------- --------------- epoch -780 replay_buffer/size 999996 trainer/num train calls 221000 trainer/Policy Loss -1.84354 trainer/Log Pis Mean 1.98792 trainer/Log Pis Std 2.62957 trainer/Log Pis Max 8.91547 trainer/Log Pis Min -5.67488 trainer/policy/mean Mean 0.155494 trainer/policy/mean Std 0.598246 trainer/policy/mean Max 0.998102 trainer/policy/mean Min -0.996511 trainer/policy/normal/std Mean 0.381144 trainer/policy/normal/std Std 0.181766 trainer/policy/normal/std Max 1.00723 trainer/policy/normal/std Min 0.0757896 trainer/policy/normal/log_std Mean -1.10755 
trainer/policy/normal/log_std Std 0.574035 trainer/policy/normal/log_std Max 0.00720877 trainer/policy/normal/log_std Min -2.57979 eval/num steps total 145113 eval/num paths total 278 eval/path length Mean 488 eval/path length Std 0 eval/path length Max 488 eval/path length Min 488 eval/Rewards Mean 3.19097 eval/Rewards Std 0.822142 eval/Rewards Max 4.82065 eval/Rewards Min 0.984672 eval/Returns Mean 1557.19 eval/Returns Std 0 eval/Returns Max 1557.19 eval/Returns Min 1557.19 eval/Actions Mean 0.161407 eval/Actions Std 0.598183 eval/Actions Max 0.995927 eval/Actions Min -0.998542 eval/Num Paths 1 eval/Average Returns 1557.19 eval/normalized_score 48.4692 time/evaluation sampling (s) 0.894161 time/logging (s) 0.00226362 time/sampling batch (s) 0.269647 time/saving (s) 0.00295719 time/training (s) 4.23982 time/epoch (s) 5.40885 time/total (s) 31420.2 Epoch -780 ---------------------------------- --------------- 2022-05-10 21:54:13.836870 PDT | [0] Epoch -779 finished ---------------------------------- --------------- epoch -779 replay_buffer/size 999996 trainer/num train calls 222000 trainer/Policy Loss -1.98727 trainer/Log Pis Mean 2.13952 trainer/Log Pis Std 2.68201 trainer/Log Pis Max 9.77824 trainer/Log Pis Min -5.27096 trainer/policy/mean Mean 0.132783 trainer/policy/mean Std 0.612755 trainer/policy/mean Max 0.997305 trainer/policy/mean Min -0.997245 trainer/policy/normal/std Mean 0.388941 trainer/policy/normal/std Std 0.18121 trainer/policy/normal/std Max 0.953352 trainer/policy/normal/std Min 0.0746303 trainer/policy/normal/log_std Mean -1.08176 trainer/policy/normal/log_std Std 0.563448 trainer/policy/normal/log_std Max -0.0477716 trainer/policy/normal/log_std Min -2.59521 eval/num steps total 145888 eval/num paths total 280 eval/path length Mean 387.5 eval/path length Std 24.5 eval/path length Max 412 eval/path length Min 363 eval/Rewards Mean 3.02583 eval/Rewards Std 0.92774 eval/Rewards Max 5.03311 eval/Rewards Min 0.983974 eval/Returns Mean 1172.51 
eval/Returns Std 68.7482 eval/Returns Max 1241.26 eval/Returns Min 1103.76 eval/Actions Mean 0.117663 eval/Actions Std 0.564932 eval/Actions Max 0.995649 eval/Actions Min -0.999291 eval/Num Paths 2 eval/Average Returns 1172.51 eval/normalized_score 36.6494 time/evaluation sampling (s) 0.885445 time/logging (s) 0.00305745 time/sampling batch (s) 0.266844 time/saving (s) 0.00295658 time/training (s) 4.20025 time/epoch (s) 5.35855 time/total (s) 31425.6 Epoch -779 ---------------------------------- --------------- 2022-05-10 21:54:19.212331 PDT | [0] Epoch -778 finished ---------------------------------- --------------- epoch -778 replay_buffer/size 999996 trainer/num train calls 223000 trainer/Policy Loss -2.29688 trainer/Log Pis Mean 2.31914 trainer/Log Pis Std 2.65419 trainer/Log Pis Max 13.77 trainer/Log Pis Min -6.12353 trainer/policy/mean Mean 0.140951 trainer/policy/mean Std 0.621338 trainer/policy/mean Max 0.997259 trainer/policy/mean Min -0.999014 trainer/policy/normal/std Mean 0.38203 trainer/policy/normal/std Std 0.183469 trainer/policy/normal/std Max 0.927009 trainer/policy/normal/std Min 0.0718992 trainer/policy/normal/log_std Mean -1.10893 trainer/policy/normal/log_std Std 0.582921 trainer/policy/normal/log_std Max -0.0757916 trainer/policy/normal/log_std Min -2.63249 eval/num steps total 146796 eval/num paths total 282 eval/path length Mean 454 eval/path length Std 44 eval/path length Max 498 eval/path length Min 410 eval/Rewards Mean 3.06218 eval/Rewards Std 0.780296 eval/Rewards Max 4.69249 eval/Rewards Min 0.981478 eval/Returns Mean 1390.23 eval/Returns Std 136.578 eval/Returns Max 1526.81 eval/Returns Min 1253.65 eval/Actions Mean 0.146536 eval/Actions Std 0.58716 eval/Actions Max 0.996876 eval/Actions Min -0.996447 eval/Num Paths 2 eval/Average Returns 1390.23 eval/normalized_score 43.3391 time/evaluation sampling (s) 0.890098 time/logging (s) 0.0034381 time/sampling batch (s) 0.266592 time/saving (s) 0.00293039 time/training (s) 4.1924 time/epoch 
(s) 5.35546 time/total (s) 31430.9 Epoch -778 ---------------------------------- --------------- 2022-05-10 21:54:24.599382 PDT | [0] Epoch -777 finished ---------------------------------- --------------- epoch -777 replay_buffer/size 999996 trainer/num train calls 224000 trainer/Policy Loss -2.10472 trainer/Log Pis Mean 2.08645 trainer/Log Pis Std 2.56804 trainer/Log Pis Max 11.7979 trainer/Log Pis Min -7.48008 trainer/policy/mean Mean 0.149579 trainer/policy/mean Std 0.616059 trainer/policy/mean Max 0.996248 trainer/policy/mean Min -0.998591 trainer/policy/normal/std Mean 0.388138 trainer/policy/normal/std Std 0.184337 trainer/policy/normal/std Max 0.954183 trainer/policy/normal/std Min 0.06849 trainer/policy/normal/log_std Mean -1.09096 trainer/policy/normal/log_std Std 0.579911 trainer/policy/normal/log_std Max -0.0468994 trainer/policy/normal/log_std Min -2.68107 eval/num steps total 147293 eval/num paths total 283 eval/path length Mean 497 eval/path length Std 0 eval/path length Max 497 eval/path length Min 497 eval/Rewards Mean 3.065 eval/Rewards Std 0.778177 eval/Rewards Max 4.74455 eval/Rewards Min 0.987146 eval/Returns Mean 1523.3 eval/Returns Std 0 eval/Returns Max 1523.3 eval/Returns Min 1523.3 eval/Actions Mean 0.155857 eval/Actions Std 0.58571 eval/Actions Max 0.9972 eval/Actions Min -0.999298 eval/Num Paths 1 eval/Average Returns 1523.3 eval/normalized_score 47.4279 time/evaluation sampling (s) 0.892835 time/logging (s) 0.00224677 time/sampling batch (s) 0.267208 time/saving (s) 0.00295107 time/training (s) 4.20061 time/epoch (s) 5.36585 time/total (s) 31436.3 Epoch -777 ---------------------------------- --------------- 2022-05-10 21:54:29.947090 PDT | [0] Epoch -776 finished ---------------------------------- --------------- epoch -776 replay_buffer/size 999996 trainer/num train calls 225000 trainer/Policy Loss -2.14398 trainer/Log Pis Mean 2.03204 trainer/Log Pis Std 2.51942 trainer/Log Pis Max 9.38606 trainer/Log Pis Min -7.26746 
trainer/policy/mean Mean 0.129847 trainer/policy/mean Std 0.613855 trainer/policy/mean Max 0.995992 trainer/policy/mean Min -0.997185 trainer/policy/normal/std Mean 0.386541 trainer/policy/normal/std Std 0.180363 trainer/policy/normal/std Max 1.13918 trainer/policy/normal/std Min 0.066202 trainer/policy/normal/log_std Mean -1.08935 trainer/policy/normal/log_std Std 0.567557 trainer/policy/normal/log_std Max 0.130307 trainer/policy/normal/log_std Min -2.71505 eval/num steps total 147972 eval/num paths total 284 eval/path length Mean 679 eval/path length Std 0 eval/path length Max 679 eval/path length Min 679 eval/Rewards Mean 3.22925 eval/Rewards Std 0.730397 eval/Rewards Max 5.40124 eval/Rewards Min 0.982988 eval/Returns Mean 2192.66 eval/Returns Std 0 eval/Returns Max 2192.66 eval/Returns Min 2192.66 eval/Actions Mean 0.154066 eval/Actions Std 0.599104 eval/Actions Max 0.997829 eval/Actions Min -0.997985 eval/Num Paths 1 eval/Average Returns 2192.66 eval/normalized_score 67.9946 time/evaluation sampling (s) 0.878478 time/logging (s) 0.00270567 time/sampling batch (s) 0.26555 time/saving (s) 0.00297027 time/training (s) 4.1778 time/epoch (s) 5.3275 time/total (s) 31441.6 Epoch -776 ---------------------------------- --------------- 2022-05-10 21:54:35.352330 PDT | [0] Epoch -775 finished ---------------------------------- --------------- epoch -775 replay_buffer/size 999996 trainer/num train calls 226000 trainer/Policy Loss -2.01288 trainer/Log Pis Mean 2.02403 trainer/Log Pis Std 2.65367 trainer/Log Pis Max 12.0824 trainer/Log Pis Min -7.02833 trainer/policy/mean Mean 0.123508 trainer/policy/mean Std 0.607715 trainer/policy/mean Max 0.997335 trainer/policy/mean Min -0.998686 trainer/policy/normal/std Mean 0.388387 trainer/policy/normal/std Std 0.184848 trainer/policy/normal/std Max 1.0228 trainer/policy/normal/std Min 0.0736291 trainer/policy/normal/log_std Mean -1.08796 trainer/policy/normal/log_std Std 0.572882 trainer/policy/normal/log_std Max 0.0225453 
trainer/policy/normal/log_std Min -2.60871 eval/num steps total 148794 eval/num paths total 286 eval/path length Mean 411 eval/path length Std 37 eval/path length Max 448 eval/path length Min 374 eval/Rewards Mean 2.96238 eval/Rewards Std 0.924862 eval/Rewards Max 4.73175 eval/Rewards Min 0.982034 eval/Returns Mean 1217.54 eval/Returns Std 128.425 eval/Returns Max 1345.96 eval/Returns Min 1089.11 eval/Actions Mean 0.122354 eval/Actions Std 0.573755 eval/Actions Max 0.997796 eval/Actions Min -0.998931 eval/Num Paths 2 eval/Average Returns 1217.54 eval/normalized_score 38.033 time/evaluation sampling (s) 0.903269 time/logging (s) 0.00345834 time/sampling batch (s) 0.268689 time/saving (s) 0.00318122 time/training (s) 4.20719 time/epoch (s) 5.38579 time/total (s) 31447 Epoch -775 ---------------------------------- --------------- 2022-05-10 21:54:40.745804 PDT | [0] Epoch -774 finished ---------------------------------- --------------- epoch -774 replay_buffer/size 999996 trainer/num train calls 227000 trainer/Policy Loss -2.33186 trainer/Log Pis Mean 2.29186 trainer/Log Pis Std 2.58322 trainer/Log Pis Max 12.8964 trainer/Log Pis Min -4.55517 trainer/policy/mean Mean 0.150267 trainer/policy/mean Std 0.624639 trainer/policy/mean Max 0.997983 trainer/policy/mean Min -0.99717 trainer/policy/normal/std Mean 0.391876 trainer/policy/normal/std Std 0.18953 trainer/policy/normal/std Max 1.00588 trainer/policy/normal/std Min 0.0678947 trainer/policy/normal/log_std Mean -1.08748 trainer/policy/normal/log_std Std 0.592905 trainer/policy/normal/log_std Max 0.00586088 trainer/policy/normal/log_std Min -2.6898 eval/num steps total 149458 eval/num paths total 287 eval/path length Mean 664 eval/path length Std 0 eval/path length Max 664 eval/path length Min 664 eval/Rewards Mean 3.20665 eval/Rewards Std 0.691433 eval/Rewards Max 4.60897 eval/Rewards Min 0.99006 eval/Returns Mean 2129.22 eval/Returns Std 0 eval/Returns Max 2129.22 eval/Returns Min 2129.22 eval/Actions Mean 0.158113 
eval/Actions Std 0.608728 eval/Actions Max 0.998417 eval/Actions Min -0.997864 eval/Num Paths 1 eval/Average Returns 2129.22 eval/normalized_score 66.0452 time/evaluation sampling (s) 0.910573 time/logging (s) 0.00330147 time/sampling batch (s) 0.267228 time/saving (s) 0.0036115 time/training (s) 4.18804 time/epoch (s) 5.37276 time/total (s) 31452.4 Epoch -774 ---------------------------------- --------------- 2022-05-10 21:54:46.160585 PDT | [0] Epoch -773 finished ---------------------------------- --------------- epoch -773 replay_buffer/size 999996 trainer/num train calls 228000 trainer/Policy Loss -2.09427 trainer/Log Pis Mean 2.13105 trainer/Log Pis Std 2.53298 trainer/Log Pis Max 9.39858 trainer/Log Pis Min -7.3266 trainer/policy/mean Mean 0.115679 trainer/policy/mean Std 0.617467 trainer/policy/mean Max 0.995419 trainer/policy/mean Min -0.996598 trainer/policy/normal/std Mean 0.385911 trainer/policy/normal/std Std 0.188612 trainer/policy/normal/std Max 1.00692 trainer/policy/normal/std Min 0.0657738 trainer/policy/normal/log_std Mean -1.10121 trainer/policy/normal/log_std Std 0.585229 trainer/policy/normal/log_std Max 0.00689863 trainer/policy/normal/log_std Min -2.72153 eval/num steps total 149930 eval/num paths total 288 eval/path length Mean 472 eval/path length Std 0 eval/path length Max 472 eval/path length Min 472 eval/Rewards Mean 3.15065 eval/Rewards Std 0.870153 eval/Rewards Max 4.73295 eval/Rewards Min 0.98114 eval/Returns Mean 1487.11 eval/Returns Std 0 eval/Returns Max 1487.11 eval/Returns Min 1487.11 eval/Actions Mean 0.143053 eval/Actions Std 0.585539 eval/Actions Max 0.997484 eval/Actions Min -0.998049 eval/Num Paths 1 eval/Average Returns 1487.11 eval/normalized_score 46.3157 time/evaluation sampling (s) 0.915349 time/logging (s) 0.00228823 time/sampling batch (s) 0.267727 time/saving (s) 0.0030666 time/training (s) 4.20465 time/epoch (s) 5.39308 time/total (s) 31457.8 Epoch -773 ---------------------------------- --------------- 2022-05-10 
21:54:51.603489 PDT | [0] Epoch -772 finished ---------------------------------- --------------- epoch -772 replay_buffer/size 999996 trainer/num train calls 229000 trainer/Policy Loss -2.3634 trainer/Log Pis Mean 2.24843 trainer/Log Pis Std 2.73614 trainer/Log Pis Max 14.2234 trainer/Log Pis Min -4.73997 trainer/policy/mean Mean 0.150112 trainer/policy/mean Std 0.622725 trainer/policy/mean Max 0.998343 trainer/policy/mean Min -0.999337 trainer/policy/normal/std Mean 0.391084 trainer/policy/normal/std Std 0.184742 trainer/policy/normal/std Max 0.903633 trainer/policy/normal/std Min 0.0732853 trainer/policy/normal/log_std Mean -1.08061 trainer/policy/normal/log_std Std 0.572742 trainer/policy/normal/log_std Max -0.101332 trainer/policy/normal/log_std Min -2.6134 eval/num steps total 150469 eval/num paths total 289 eval/path length Mean 539 eval/path length Std 0 eval/path length Max 539 eval/path length Min 539 eval/Rewards Mean 3.2137 eval/Rewards Std 0.842404 eval/Rewards Max 5.51874 eval/Rewards Min 0.981857 eval/Returns Mean 1732.18 eval/Returns Std 0 eval/Returns Max 1732.18 eval/Returns Min 1732.18 eval/Actions Mean 0.15594 eval/Actions Std 0.586646 eval/Actions Max 0.997636 eval/Actions Min -0.998744 eval/Num Paths 1 eval/Average Returns 1732.18 eval/normalized_score 53.846 time/evaluation sampling (s) 0.933872 time/logging (s) 0.0024914 time/sampling batch (s) 0.270005 time/saving (s) 0.00305125 time/training (s) 4.21328 time/epoch (s) 5.4227 time/total (s) 31463.2 Epoch -772 ---------------------------------- --------------- 2022-05-10 21:54:57.010125 PDT | [0] Epoch -771 finished ---------------------------------- --------------- epoch -771 replay_buffer/size 999996 trainer/num train calls 230000 trainer/Policy Loss -2.11206 trainer/Log Pis Mean 2.13063 trainer/Log Pis Std 2.39116 trainer/Log Pis Max 8.6001 trainer/Log Pis Min -4.46291 trainer/policy/mean Mean 0.125864 trainer/policy/mean Std 0.615691 trainer/policy/mean Max 0.997918 trainer/policy/mean 
Min -0.996812 trainer/policy/normal/std Mean 0.379064 trainer/policy/normal/std Std 0.178307 trainer/policy/normal/std Max 0.940685 trainer/policy/normal/std Min 0.066082 trainer/policy/normal/log_std Mean -1.1121 trainer/policy/normal/log_std Std 0.575173 trainer/policy/normal/log_std Max -0.0611475 trainer/policy/normal/log_std Min -2.71686 eval/num steps total 151037 eval/num paths total 290 eval/path length Mean 568 eval/path length Std 0 eval/path length Max 568 eval/path length Min 568 eval/Rewards Mean 3.17334 eval/Rewards Std 0.765671 eval/Rewards Max 4.87796 eval/Rewards Min 0.987657 eval/Returns Mean 1802.45 eval/Returns Std 0 eval/Returns Max 1802.45 eval/Returns Min 1802.45 eval/Actions Mean 0.155017 eval/Actions Std 0.583435 eval/Actions Max 0.997791 eval/Actions Min -0.998317 eval/Num Paths 1 eval/Average Returns 1802.45 eval/normalized_score 56.0051 time/evaluation sampling (s) 0.893227 time/logging (s) 0.00251065 time/sampling batch (s) 0.267832 time/saving (s) 0.00317689 time/training (s) 4.21943 time/epoch (s) 5.38618 time/total (s) 31468.6 Epoch -771 ---------------------------------- --------------- 2022-05-10 21:55:03.277767 PDT | [0] Epoch -770 finished ---------------------------------- --------------- epoch -770 replay_buffer/size 999996 trainer/num train calls 231000 trainer/Policy Loss -2.16356 trainer/Log Pis Mean 1.99103 trainer/Log Pis Std 2.54941 trainer/Log Pis Max 9.3184 trainer/Log Pis Min -7.08179 trainer/policy/mean Mean 0.153519 trainer/policy/mean Std 0.604431 trainer/policy/mean Max 0.995775 trainer/policy/mean Min -0.998648 trainer/policy/normal/std Mean 0.385165 trainer/policy/normal/std Std 0.186079 trainer/policy/normal/std Max 0.980247 trainer/policy/normal/std Min 0.0723674 trainer/policy/normal/log_std Mean -1.1044 trainer/policy/normal/log_std Std 0.592167 trainer/policy/normal/log_std Max -0.0199512 trainer/policy/normal/log_std Min -2.626 eval/num steps total 151551 eval/num paths total 291 eval/path length Mean 514 
eval/path length Std 0 eval/path length Max 514 eval/path length Min 514 eval/Rewards Mean 3.11738 eval/Rewards Std 0.793003 eval/Rewards Max 5.19735 eval/Rewards Min 0.988489 eval/Returns Mean 1602.34 eval/Returns Std 0 eval/Returns Max 1602.34 eval/Returns Min 1602.34 eval/Actions Mean 0.164451 eval/Actions Std 0.588835 eval/Actions Max 0.99712 eval/Actions Min -0.996692 eval/Num Paths 1 eval/Average Returns 1602.34 eval/normalized_score 49.8563 time/evaluation sampling (s) 0.909865 time/logging (s) 0.00300223 time/sampling batch (s) 0.274609 time/saving (s) 0.00373505 time/training (s) 5.05624 time/epoch (s) 6.24745 time/total (s) 31474.8 Epoch -770 ---------------------------------- --------------- 2022-05-10 21:55:09.697684 PDT | [0] Epoch -769 finished ---------------------------------- --------------- epoch -769 replay_buffer/size 999996 trainer/num train calls 232000 trainer/Policy Loss -2.04919 trainer/Log Pis Mean 2.0288 trainer/Log Pis Std 2.62779 trainer/Log Pis Max 10.0514 trainer/Log Pis Min -6.47833 trainer/policy/mean Mean 0.159339 trainer/policy/mean Std 0.600131 trainer/policy/mean Max 0.996994 trainer/policy/mean Min -0.998755 trainer/policy/normal/std Mean 0.379591 trainer/policy/normal/std Std 0.183404 trainer/policy/normal/std Max 1.01263 trainer/policy/normal/std Min 0.0720815 trainer/policy/normal/log_std Mean -1.115 trainer/policy/normal/log_std Std 0.580832 trainer/policy/normal/log_std Max 0.0125498 trainer/policy/normal/log_std Min -2.62996 eval/num steps total 152549 eval/num paths total 293 eval/path length Mean 499 eval/path length Std 5 eval/path length Max 504 eval/path length Min 494 eval/Rewards Mean 3.14969 eval/Rewards Std 0.789693 eval/Rewards Max 4.87547 eval/Rewards Min 0.984319 eval/Returns Mean 1571.69 eval/Returns Std 1.14063 eval/Returns Max 1572.83 eval/Returns Min 1570.55 eval/Actions Mean 0.157562 eval/Actions Std 0.591687 eval/Actions Max 0.997711 eval/Actions Min -0.998876 eval/Num Paths 2 eval/Average Returns 
1571.69 eval/normalized_score 48.9147 time/evaluation sampling (s) 0.988656 time/logging (s) 0.00620827 time/sampling batch (s) 0.278653 time/saving (s) 0.00575992 time/training (s) 5.12144 time/epoch (s) 6.40072 time/total (s) 31481.2 Epoch -769 ---------------------------------- --------------- 2022-05-10 21:55:15.962786 PDT | [0] Epoch -768 finished ---------------------------------- --------------- epoch -768 replay_buffer/size 999996 trainer/num train calls 233000 trainer/Policy Loss -2.15559 trainer/Log Pis Mean 2.10374 trainer/Log Pis Std 2.56453 trainer/Log Pis Max 13.4048 trainer/Log Pis Min -5.12681 trainer/policy/mean Mean 0.156746 trainer/policy/mean Std 0.607794 trainer/policy/mean Max 0.997742 trainer/policy/mean Min -0.997363 trainer/policy/normal/std Mean 0.386931 trainer/policy/normal/std Std 0.185864 trainer/policy/normal/std Max 0.927777 trainer/policy/normal/std Min 0.0726858 trainer/policy/normal/log_std Mean -1.1002 trainer/policy/normal/log_std Std 0.594519 trainer/policy/normal/log_std Max -0.0749644 trainer/policy/normal/log_std Min -2.62161 eval/num steps total 153055 eval/num paths total 294 eval/path length Mean 506 eval/path length Std 0 eval/path length Max 506 eval/path length Min 506 eval/Rewards Mean 3.08995 eval/Rewards Std 0.8215 eval/Rewards Max 5.14696 eval/Rewards Min 0.981165 eval/Returns Mean 1563.51 eval/Returns Std 0 eval/Returns Max 1563.51 eval/Returns Min 1563.51 eval/Actions Mean 0.15943 eval/Actions Std 0.581032 eval/Actions Max 0.998008 eval/Actions Min -0.999 eval/Num Paths 1 eval/Average Returns 1563.51 eval/normalized_score 48.6634 time/evaluation sampling (s) 0.923215 time/logging (s) 0.00254326 time/sampling batch (s) 0.27739 time/saving (s) 0.00339827 time/training (s) 5.03048 time/epoch (s) 6.23703 time/total (s) 31487.5 Epoch -768 ---------------------------------- --------------- 2022-05-10 21:55:21.435921 PDT | [0] Epoch -767 finished ---------------------------------- --------------- epoch -767 
replay_buffer/size 999996 trainer/num train calls 234000 trainer/Policy Loss -2.24772 trainer/Log Pis Mean 2.31401 trainer/Log Pis Std 2.58405 trainer/Log Pis Max 17.2442 trainer/Log Pis Min -6.27595 trainer/policy/mean Mean 0.116783 trainer/policy/mean Std 0.628827 trainer/policy/mean Max 0.997434 trainer/policy/mean Min -0.999847 trainer/policy/normal/std Mean 0.388738 trainer/policy/normal/std Std 0.181952 trainer/policy/normal/std Max 0.882402 trainer/policy/normal/std Min 0.0734863 trainer/policy/normal/log_std Mean -1.08536 trainer/policy/normal/log_std Std 0.571502 trainer/policy/normal/log_std Max -0.125108 trainer/policy/normal/log_std Min -2.61066 eval/num steps total 153575 eval/num paths total 295 eval/path length Mean 520 eval/path length Std 0 eval/path length Max 520 eval/path length Min 520 eval/Rewards Mean 3.19036 eval/Rewards Std 0.820142 eval/Rewards Max 5.37669 eval/Rewards Min 0.986286 eval/Returns Mean 1658.99 eval/Returns Std 0 eval/Returns Max 1658.99 eval/Returns Min 1658.99 eval/Actions Mean 0.167544 eval/Actions Std 0.597649 eval/Actions Max 0.998201 eval/Actions Min -0.996893 eval/Num Paths 1 eval/Average Returns 1658.99 eval/normalized_score 51.597 time/evaluation sampling (s) 0.95329 time/logging (s) 0.00304155 time/sampling batch (s) 0.268775 time/saving (s) 0.00412333 time/training (s) 4.22276 time/epoch (s) 5.45199 time/total (s) 31492.9 Epoch -767 ---------------------------------- --------------- 2022-05-10 21:55:26.853275 PDT | [0] Epoch -766 finished ---------------------------------- --------------- epoch -766 replay_buffer/size 999996 trainer/num train calls 235000 trainer/Policy Loss -2.27122 trainer/Log Pis Mean 2.40254 trainer/Log Pis Std 2.5319 trainer/Log Pis Max 10.76 trainer/Log Pis Min -6.92349 trainer/policy/mean Mean 0.169725 trainer/policy/mean Std 0.625812 trainer/policy/mean Max 0.9968 trainer/policy/mean Min -0.997119 trainer/policy/normal/std Mean 0.389995 trainer/policy/normal/std Std 0.188098 
trainer/policy/normal/std Max 0.991036 trainer/policy/normal/std Min 0.0688345 trainer/policy/normal/log_std Mean -1.09096 trainer/policy/normal/log_std Std 0.590263 trainer/policy/normal/log_std Max -0.00900478 trainer/policy/normal/log_std Min -2.67605 eval/num steps total 154143 eval/num paths total 296 eval/path length Mean 568 eval/path length Std 0 eval/path length Max 568 eval/path length Min 568 eval/Rewards Mean 3.1362 eval/Rewards Std 0.761726 eval/Rewards Max 4.9034 eval/Rewards Min 0.988384 eval/Returns Mean 1781.36 eval/Returns Std 0 eval/Returns Max 1781.36 eval/Returns Min 1781.36 eval/Actions Mean 0.149268 eval/Actions Std 0.591552 eval/Actions Max 0.99835 eval/Actions Min -0.997956 eval/Num Paths 1 eval/Average Returns 1781.36 eval/normalized_score 55.357 time/evaluation sampling (s) 0.93339 time/logging (s) 0.00271701 time/sampling batch (s) 0.268954 time/saving (s) 0.00320362 time/training (s) 4.18689 time/epoch (s) 5.39516 time/total (s) 31498.3 Epoch -766 ---------------------------------- --------------- 2022-05-10 21:55:32.339603 PDT | [0] Epoch -765 finished ---------------------------------- --------------- epoch -765 replay_buffer/size 999996 trainer/num train calls 236000 trainer/Policy Loss -2.01189 trainer/Log Pis Mean 2.00368 trainer/Log Pis Std 2.59183 trainer/Log Pis Max 9.53157 trainer/Log Pis Min -5.85944 trainer/policy/mean Mean 0.156344 trainer/policy/mean Std 0.607871 trainer/policy/mean Max 0.996976 trainer/policy/mean Min -0.99785 trainer/policy/normal/std Mean 0.391279 trainer/policy/normal/std Std 0.189493 trainer/policy/normal/std Max 0.953429 trainer/policy/normal/std Min 0.0688564 trainer/policy/normal/log_std Mean -1.08771 trainer/policy/normal/log_std Std 0.5886 trainer/policy/normal/log_std Max -0.0476903 trainer/policy/normal/log_std Min -2.67573 eval/num steps total 154549 eval/num paths total 297 eval/path length Mean 406 eval/path length Std 0 eval/path length Max 406 eval/path length Min 406 eval/Rewards Mean 
3.05868
eval/Rewards Std  0.788312
eval/Rewards Max  4.58469
eval/Rewards Min  0.982134
eval/Returns Mean  1241.83
eval/Returns Std  0
eval/Returns Max  1241.83
eval/Returns Min  1241.83
eval/Actions Mean  0.143363
eval/Actions Std  0.593382
eval/Actions Max  0.997139
eval/Actions Min  -0.999345
eval/Num Paths  1
eval/Average Returns  1241.83
eval/normalized_score  38.7792
time/evaluation sampling (s)  0.905272
time/logging (s)  0.00258509
time/sampling batch (s)  0.27442
time/saving (s)  0.00363454
time/training (s)  4.27903
time/epoch (s)  5.46494
time/total (s)  31503.8
Epoch -765
----------------------------------  ----------------
2022-05-10 21:55:38.084915 PDT | [0] Epoch -764 finished
----------------------------------  ----------------
epoch  -764
replay_buffer/size  999996
trainer/num train calls  237000
trainer/Policy Loss  -2.00766
trainer/Log Pis Mean  1.97638
trainer/Log Pis Std  2.50061
trainer/Log Pis Max  8.86324
trainer/Log Pis Min  -6.05126
trainer/policy/mean Mean  0.155396
trainer/policy/mean Std  0.605243
trainer/policy/mean Max  0.995456
trainer/policy/mean Min  -0.998784
trainer/policy/normal/std Mean  0.382673
trainer/policy/normal/std Std  0.180264
trainer/policy/normal/std Max  0.891001
trainer/policy/normal/std Min  0.0747716
trainer/policy/normal/log_std Mean  -1.10078
trainer/policy/normal/log_std Std  0.56828
trainer/policy/normal/log_std Max  -0.115409
trainer/policy/normal/log_std Min  -2.59332
eval/num steps total  155513
eval/num paths total  299
eval/path length Mean  482
eval/path length Std  132
eval/path length Max  614
eval/path length Min  350
eval/Rewards Mean  3.07643
eval/Rewards Std  0.876407
eval/Rewards Max  5.15851
eval/Rewards Min  0.980987
eval/Returns Mean  1482.84
eval/Returns Std  424.472
eval/Returns Max  1907.31
eval/Returns Min  1058.37
eval/Actions Mean  0.115714
eval/Actions Std  0.561176
eval/Actions Max  0.998859
eval/Actions Min  -0.998675
eval/Num Paths  2
eval/Average Returns  1482.84
eval/normalized_score  46.1846
time/evaluation sampling (s)  0.892223
time/logging (s)  0.00380826
time/sampling batch (s)  0.271471
time/saving (s)  0.00351035
time/training (s)  4.55463
time/epoch (s)  5.72565
time/total (s)  31509.5
Epoch -764
----------------------------------  ----------------
2022-05-10 21:55:44.241646 PDT | [0] Epoch -763 finished
----------------------------------  ----------------
epoch  -763
replay_buffer/size  999996
trainer/num train calls  238000
trainer/Policy Loss  -1.92721
trainer/Log Pis Mean  1.94153
trainer/Log Pis Std  2.43167
trainer/Log Pis Max  9.38153
trainer/Log Pis Min  -4.65649
trainer/policy/mean Mean  0.153573
trainer/policy/mean Std  0.59746
trainer/policy/mean Max  0.997783
trainer/policy/mean Min  -0.99697
trainer/policy/normal/std Mean  0.382917
trainer/policy/normal/std Std  0.182666
trainer/policy/normal/std Max  0.945728
trainer/policy/normal/std Min  0.0688348
trainer/policy/normal/log_std Mean  -1.10605
trainer/policy/normal/log_std Std  0.582683
trainer/policy/normal/log_std Max  -0.0558008
trainer/policy/normal/log_std Min  -2.67605
eval/num steps total  156421
eval/num paths total  301
eval/path length Mean  454
eval/path length Std  44
eval/path length Max  498
eval/path length Min  410
eval/Rewards Mean  3.08355
eval/Rewards Std  0.825333
eval/Rewards Max  4.82121
eval/Rewards Min  0.981309
eval/Returns Mean  1399.93
eval/Returns Std  149.237
eval/Returns Max  1549.17
eval/Returns Min  1250.69
eval/Actions Mean  0.152313
eval/Actions Std  0.574614
eval/Actions Max  0.998374
eval/Actions Min  -0.998493
eval/Num Paths  2
eval/Average Returns  1399.93
eval/normalized_score  43.6371
time/evaluation sampling (s)  0.894767
time/logging (s)  0.00398824
time/sampling batch (s)  0.265965
time/saving (s)  0.00353065
time/training (s)  4.96769
time/epoch (s)  6.13594
time/total (s)  31515.7
Epoch -763
----------------------------------  ----------------
2022-05-10 21:55:49.835895 PDT | [0] Epoch -762 finished
----------------------------------  ----------------
epoch  -762
replay_buffer/size  999996
trainer/num train calls  239000
trainer/Policy Loss  -2.11406
trainer/Log Pis Mean  2.18065
trainer/Log Pis Std  2.52754
trainer/Log Pis Max  9.79787
trainer/Log Pis Min  -6.71426
trainer/policy/mean Mean  0.16653
trainer/policy/mean Std  0.614775
trainer/policy/mean Max  0.997438
trainer/policy/mean Min  -0.997923
trainer/policy/normal/std Mean  0.394536
trainer/policy/normal/std Std  0.18545
trainer/policy/normal/std Max  1.20602
trainer/policy/normal/std Min  0.0727868
trainer/policy/normal/log_std Mean  -1.07085
trainer/policy/normal/log_std Std  0.572123
trainer/policy/normal/log_std Max  0.187322
trainer/policy/normal/log_std Min  -2.62022
eval/num steps total  156956
eval/num paths total  302
eval/path length Mean  535
eval/path length Std  0
eval/path length Max  535
eval/path length Min  535
eval/Rewards Mean  3.23341
eval/Rewards Std  0.858221
eval/Rewards Max  5.48306
eval/Rewards Min  0.987017
eval/Returns Mean  1729.87
eval/Returns Std  0
eval/Returns Max  1729.87
eval/Returns Min  1729.87
eval/Actions Mean  0.15871
eval/Actions Std  0.584711
eval/Actions Max  0.998921
eval/Actions Min  -0.998014
eval/Num Paths  1
eval/Average Returns  1729.87
eval/normalized_score  53.775
time/evaluation sampling (s)  0.899177
time/logging (s)  0.00318073
time/sampling batch (s)  0.278769
time/saving (s)  0.00386214
time/training (s)  4.38422
time/epoch (s)  5.56921
time/total (s)  31521.3
Epoch -762
----------------------------------  ----------------
2022-05-10 21:55:55.384592 PDT | [0] Epoch -761 finished
----------------------------------  ----------------
epoch  -761
replay_buffer/size  999996
trainer/num train calls  240000
trainer/Policy Loss  -2.08745
trainer/Log Pis Mean  2.09554
trainer/Log Pis Std  2.57525
trainer/Log Pis Max  9.84324
trainer/Log Pis Min  -5.69073
trainer/policy/mean Mean  0.14443
trainer/policy/mean Std  0.615722
trainer/policy/mean Max  0.995817
trainer/policy/mean Min  -0.996148
trainer/policy/normal/std Mean  0.387086
trainer/policy/normal/std Std  0.185725
trainer/policy/normal/std Max  0.960757
trainer/policy/normal/std Min  0.0735436
trainer/policy/normal/log_std Mean  -1.09387
trainer/policy/normal/log_std Std  0.577204
trainer/policy/normal/log_std Max  -0.0400335
trainer/policy/normal/log_std Min  -2.60988
eval/num steps total  157489
eval/num paths total  303
eval/path length Mean  533
eval/path length Std  0
eval/path length Max  533
eval/path length Min  533
eval/Rewards Mean  3.22145
eval/Rewards Std  0.816067
eval/Rewards Max  5.53647
eval/Rewards Min  0.986036
eval/Returns Mean  1717.03
eval/Returns Std  0
eval/Returns Max  1717.03
eval/Returns Min  1717.03
eval/Actions Mean  0.154744
eval/Actions Std  0.583486
eval/Actions Max  0.997739
eval/Actions Min  -0.997723
eval/Num Paths  1
eval/Average Returns  1717.03
eval/normalized_score  53.3804
time/evaluation sampling (s)  0.901059
time/logging (s)  0.00427898
time/sampling batch (s)  0.276916
time/saving (s)  0.00449374
time/training (s)  4.33937
time/epoch (s)  5.52612
time/total (s)  31526.8
Epoch -761
----------------------------------  ----------------
2022-05-10 21:56:00.903205 PDT | [0] Epoch -760 finished
----------------------------------  ----------------
epoch  -760
replay_buffer/size  999996
trainer/num train calls  241000
trainer/Policy Loss  -2.30346
trainer/Log Pis Mean  2.38101
trainer/Log Pis Std  2.56216
trainer/Log Pis Max  10.1098
trainer/Log Pis Min  -6.57666
trainer/policy/mean Mean  0.1505
trainer/policy/mean Std  0.626372
trainer/policy/mean Max  0.998805
trainer/policy/mean Min  -0.998875
trainer/policy/normal/std Mean  0.395833
trainer/policy/normal/std Std  0.196622
trainer/policy/normal/std Max  0.957458
trainer/policy/normal/std Min  0.0732374
trainer/policy/normal/log_std Mean  -1.08419
trainer/policy/normal/log_std Std  0.60486
trainer/policy/normal/log_std Max  -0.0434737
trainer/policy/normal/log_std Min  -2.61405
eval/num steps total  158197
eval/num paths total  304
eval/path length Mean  708
eval/path length Std  0
eval/path length Max  708
eval/path length Min  708
eval/Rewards Mean  3.23707
eval/Rewards Std  0.762643
eval/Rewards Max  4.90233
eval/Rewards Min  0.988641
eval/Returns Mean  2291.85
eval/Returns Std  0
eval/Returns Max  2291.85
eval/Returns Min  2291.85
eval/Actions Mean  0.151054
eval/Actions Std  0.580254
eval/Actions Max  0.998525
eval/Actions Min  -0.998838
eval/Num Paths  1
eval/Average Returns  2291.85
eval/normalized_score  71.0422
time/evaluation sampling (s)  0.927509
time/logging (s)  0.00301944
time/sampling batch (s)  0.273996
time/saving (s)  0.00331521
time/training (s)  4.28726
time/epoch (s)  5.4951
time/total (s)  31532.3
Epoch -760
----------------------------------  ----------------
2022-05-10 21:56:06.410842 PDT | [0] Epoch -759 finished
----------------------------------  ----------------
epoch  -759
replay_buffer/size  999996
trainer/num train calls  242000
trainer/Policy Loss  -2.05955
trainer/Log Pis Mean  2.02536
trainer/Log Pis Std  2.55833
trainer/Log Pis Max  17.3182
trainer/Log Pis Min  -6.12409
trainer/policy/mean Mean  0.161332
trainer/policy/mean Std  0.604703
trainer/policy/mean Max  0.998298
trainer/policy/mean Min  -0.99965
trainer/policy/normal/std Mean  0.394225
trainer/policy/normal/std Std  0.187968
trainer/policy/normal/std Max  1.01702
trainer/policy/normal/std Min  0.0723535
trainer/policy/normal/log_std Mean  -1.0774
trainer/policy/normal/log_std Std  0.585223
trainer/policy/normal/log_std Max  0.0168799
trainer/policy/normal/log_std Min  -2.62619
eval/num steps total  158774
eval/num paths total  305
eval/path length Mean  577
eval/path length Std  0
eval/path length Max  577
eval/path length Min  577
eval/Rewards Mean  3.17034
eval/Rewards Std  0.768101
eval/Rewards Max  4.81413
eval/Rewards Min  0.989035
eval/Returns Mean  1829.29
eval/Returns Std  0
eval/Returns Max  1829.29
eval/Returns Min  1829.29
eval/Actions Mean  0.162384
eval/Actions Std  0.594854
eval/Actions Max  0.997952
eval/Actions Min  -0.998669
eval/Num Paths  1
eval/Average Returns  1829.29
eval/normalized_score  56.8296
time/evaluation sampling (s)  0.927304
time/logging (s)  0.0030488
time/sampling batch (s)  0.273144
time/saving (s)  0.00376344
time/training (s)  4.27924
time/epoch (s)  5.4865
time/total (s)  31537.8
Epoch -759
----------------------------------  ----------------
2022-05-10 21:56:12.135927 PDT | [0] Epoch -758 finished
----------------------------------  ----------------
epoch  -758
replay_buffer/size  999996
trainer/num train calls  243000
trainer/Policy Loss  -2.26129
trainer/Log Pis Mean  2.03899
trainer/Log Pis Std  2.57287
trainer/Log Pis Max  10.7991
trainer/Log Pis Min  -3.7909
trainer/policy/mean Mean  0.145383
trainer/policy/mean Std  0.611276
trainer/policy/mean Max  0.997736
trainer/policy/mean Min  -0.998228
trainer/policy/normal/std Mean  0.390953
trainer/policy/normal/std Std  0.187826
trainer/policy/normal/std Max  1.0603
trainer/policy/normal/std Min  0.0692389
trainer/policy/normal/log_std Mean  -1.08652
trainer/policy/normal/log_std Std  0.584672
trainer/policy/normal/log_std Max  0.058554
trainer/policy/normal/log_std Min  -2.67019
eval/num steps total  159661
eval/num paths total  307
eval/path length Mean  443.5
eval/path length Std  43.5
eval/path length Max  487
eval/path length Min  400
eval/Rewards Mean  3.09945
eval/Rewards Std  0.849157
eval/Rewards Max  4.8951
eval/Rewards Min  0.984346
eval/Returns Mean  1374.6
eval/Returns Std  142.914
eval/Returns Max  1517.52
eval/Returns Min  1231.69
eval/Actions Mean  0.151006
eval/Actions Std  0.578685
eval/Actions Max  0.998715
eval/Actions Min  -0.999225
eval/Num Paths  2
eval/Average Returns  1374.6
eval/normalized_score  42.859
time/evaluation sampling (s)  0.932279
time/logging (s)  0.0042235
time/sampling batch (s)  0.274528
time/saving (s)  0.00382641
time/training (s)  4.48947
time/epoch (s)  5.70433
time/total (s)  31543.5
Epoch -758
----------------------------------  ----------------
2022-05-10 21:56:17.631055 PDT | [0] Epoch -757 finished
----------------------------------  ----------------
epoch  -757
replay_buffer/size  999996
trainer/num train calls  244000
trainer/Policy Loss  -2.12004
trainer/Log Pis Mean  2.0409
trainer/Log Pis Std  2.5224
trainer/Log Pis Max  8.89582
trainer/Log Pis Min  -4.98661
trainer/policy/mean Mean  0.133395
trainer/policy/mean Std  0.614349
trainer/policy/mean Max  0.998184
trainer/policy/mean Min  -0.998409
trainer/policy/normal/std Mean  0.381586
trainer/policy/normal/std Std  0.182703
trainer/policy/normal/std Max  0.884462
trainer/policy/normal/std Min  0.0745704
trainer/policy/normal/log_std Mean  -1.10669
trainer/policy/normal/log_std Std  0.573012
trainer/policy/normal/log_std Max  -0.122776
trainer/policy/normal/log_std Min  -2.59601
eval/num steps total  160166
eval/num paths total  308
eval/path length Mean  505
eval/path length Std  0
eval/path length Max  505
eval/path length Min  505
eval/Rewards Mean  3.11089
eval/Rewards Std  0.776242
eval/Rewards Max  4.78722
eval/Rewards Min  0.982779
eval/Returns Mean  1571
eval/Returns Std  0
eval/Returns Max  1571
eval/Returns Min  1571
eval/Actions Mean  0.157245
eval/Actions Std  0.586094
eval/Actions Max  0.997981
eval/Actions Min  -0.999148
eval/Num Paths  1
eval/Average Returns  1571
eval/normalized_score  48.8934
time/evaluation sampling (s)  0.896706
time/logging (s)  0.00238323
time/sampling batch (s)  0.274655
time/saving (s)  0.00297838
time/training (s)  4.29493
time/epoch (s)  5.47165
time/total (s)  31549
Epoch -757
----------------------------------  ----------------
2022-05-10 21:56:23.172258 PDT | [0] Epoch -756 finished
----------------------------------  ----------------
epoch  -756
replay_buffer/size  999996
trainer/num train calls  245000
trainer/Policy Loss  -2.20005
trainer/Log Pis Mean  2.09923
trainer/Log Pis Std  2.52414
trainer/Log Pis Max  9.56943
trainer/Log Pis Min  -4.49947
trainer/policy/mean Mean  0.145692
trainer/policy/mean Std  0.617948
trainer/policy/mean Max  0.998873
trainer/policy/mean Min  -0.998442
trainer/policy/normal/std Mean  0.388743
trainer/policy/normal/std Std  0.189653
trainer/policy/normal/std Max  0.976483
trainer/policy/normal/std Min  0.073713
trainer/policy/normal/log_std Mean  -1.09249
trainer/policy/normal/log_std Std  0.581847
trainer/policy/normal/log_std Max  -0.0237984
trainer/policy/normal/log_std Min  -2.60758
eval/num steps total  160741
eval/num paths total  309
eval/path length Mean  575
eval/path length Std  0
eval/path length Max  575
eval/path length Min  575
eval/Rewards Mean  3.20744
eval/Rewards Std  0.79376
eval/Rewards Max  4.77029
eval/Rewards Min  0.983915
eval/Returns Mean  1844.28
eval/Returns Std  0
eval/Returns Max  1844.28
eval/Returns Min  1844.28
eval/Actions Mean  0.160078
eval/Actions Std  0.602459
eval/Actions Max  0.998649
eval/Actions Min  -0.998585
eval/Num Paths  1
eval/Average Returns  1844.28
eval/normalized_score  57.2903
time/evaluation sampling (s)  0.925482
time/logging (s)  0.00287849
time/sampling batch (s)  0.274751
time/saving (s)  0.00358649
time/training (s)  4.31449
time/epoch (s)  5.52118
time/total (s)  31554.5
Epoch -756
----------------------------------  ----------------
2022-05-10 21:56:28.671761 PDT | [0] Epoch -755 finished
----------------------------------  ----------------
epoch  -755
replay_buffer/size  999996
trainer/num train calls  246000
trainer/Policy Loss  -2.0763
trainer/Log Pis Mean  1.97364
trainer/Log Pis Std  2.51943
trainer/Log Pis Max  9.10541
trainer/Log Pis Min  -4.34917
trainer/policy/mean Mean  0.130641
trainer/policy/mean Std  0.604957
trainer/policy/mean Max  0.999275
trainer/policy/mean Min  -0.997817
trainer/policy/normal/std Mean  0.387888
trainer/policy/normal/std Std  0.187454
trainer/policy/normal/std Max  1.03164
trainer/policy/normal/std Min  0.0799838
trainer/policy/normal/log_std Mean  -1.09103
trainer/policy/normal/log_std Std  0.573198
trainer/policy/normal/log_std Max  0.0311506
trainer/policy/normal/log_std Min  -2.52593
eval/num steps total  161304
eval/num paths total  310
eval/path length Mean  563
eval/path length Std  0
eval/path length Max  563
eval/path length Min  563
eval/Rewards Mean  3.20467
eval/Rewards Std  0.797903
eval/Rewards Max  4.73411
eval/Rewards Min  0.988903
eval/Returns Mean  1804.23
eval/Returns Std  0
eval/Returns Max  1804.23
eval/Returns Min  1804.23
eval/Actions Mean  0.15678
eval/Actions Std  0.597472
eval/Actions Max  0.998552
eval/Actions Min  -0.997826
eval/Num Paths  1
eval/Average Returns  1804.23
eval/normalized_score  56.0597
time/evaluation sampling (s)  0.894531
time/logging (s)  0.00301274
time/sampling batch (s)  0.274318
time/saving (s)  0.00355375
time/training (s)  4.30336
time/epoch (s)  5.47878
time/total (s)  31560
Epoch -755
----------------------------------  ----------------
2022-05-10 21:56:34.207937 PDT | [0] Epoch -754 finished
----------------------------------  ----------------
epoch  -754
replay_buffer/size  999996
trainer/num train calls  247000
trainer/Policy Loss  -2.11961
trainer/Log Pis Mean  2.25401
trainer/Log Pis Std  2.52296
trainer/Log Pis Max  9.7435
trainer/Log Pis Min  -5.17243
trainer/policy/mean Mean  0.130355
trainer/policy/mean Std  0.624842
trainer/policy/mean Max  0.99831
trainer/policy/mean Min  -0.998466
trainer/policy/normal/std Mean  0.396368
trainer/policy/normal/std Std  0.192902
trainer/policy/normal/std Max  1.06194
trainer/policy/normal/std Min  0.0747847
trainer/policy/normal/log_std Mean  -1.07639
trainer/policy/normal/log_std Std  0.592128
trainer/policy/normal/log_std Max  0.0601004
trainer/policy/normal/log_std Min  -2.59314
eval/num steps total  161989
eval/num paths total  311
eval/path length Mean  685
eval/path length Std  0
eval/path length Max  685
eval/path length Min  685
eval/Rewards Mean  3.2172
eval/Rewards Std  0.770037
eval/Rewards Max  5.35493
eval/Rewards Min  0.986667
eval/Returns Mean  2203.78
eval/Returns Std  0
eval/Returns Max  2203.78
eval/Returns Min  2203.78
eval/Actions Mean  0.143386
eval/Actions Std  0.582347
eval/Actions Max  0.998385
eval/Actions Min  -0.998453
eval/Num Paths  1
eval/Average Returns  2203.78
eval/normalized_score  68.3362
time/evaluation sampling (s)  0.898815
time/logging (s)  0.00281364
time/sampling batch (s)  0.276996
time/saving (s)  0.00304029
time/training (s)  4.33301
time/epoch (s)  5.51468
time/total (s)  31565.5
Epoch -754
----------------------------------  ----------------
2022-05-10 21:56:39.700406 PDT | [0] Epoch -753 finished
----------------------------------  ----------------
epoch  -753
replay_buffer/size  999996
trainer/num train calls  248000
trainer/Policy Loss  -2.35799
trainer/Log Pis Mean  2.25093
trainer/Log Pis Std  2.60413
trainer/Log Pis Max  10.1675
trainer/Log Pis Min  -4.46461
trainer/policy/mean Mean  0.165987
trainer/policy/mean Std  0.621475
trainer/policy/mean Max  0.997416
trainer/policy/mean Min  -0.997172
trainer/policy/normal/std Mean  0.376269
trainer/policy/normal/std Std  0.178422
trainer/policy/normal/std Max  0.960763
trainer/policy/normal/std Min  0.0694045
trainer/policy/normal/log_std Mean  -1.11768
trainer/policy/normal/log_std Std  0.566826
trainer/policy/normal/log_std Max  -0.0400274
trainer/policy/normal/log_std Min  -2.6678
eval/num steps total  162566
eval/num paths total  312
eval/path length Mean  577
eval/path length Std  0
eval/path length Max  577
eval/path length Min  577
eval/Rewards Mean  3.14206
eval/Rewards Std  0.729129
eval/Rewards Max  4.89481
eval/Rewards Min  0.987722
eval/Returns Mean  1812.97
eval/Returns Std  0
eval/Returns Max  1812.97
eval/Returns Min  1812.97
eval/Actions Mean  0.153671
eval/Actions Std  0.606089
eval/Actions Max  0.998479
eval/Actions Min  -0.998092
eval/Num Paths  1
eval/Average Returns  1812.97
eval/normalized_score  56.3281
time/evaluation sampling (s)  0.889957
time/logging (s)  0.00300593
time/sampling batch (s)  0.2741
time/saving (s)  0.00337833
time/training (s)  4.30123
time/epoch (s)  5.47167
time/total (s)  31571
Epoch -753
----------------------------------  ----------------
2022-05-10 21:56:45.216797 PDT | [0] Epoch -752 finished
----------------------------------  ----------------
epoch  -752
replay_buffer/size  999996
trainer/num train calls  249000
trainer/Policy Loss  -2.16599
trainer/Log Pis Mean  2.03557
trainer/Log Pis Std  2.57099
trainer/Log Pis Max  15.8415
trainer/Log Pis Min  -5.28005
trainer/policy/mean Mean  0.135095
trainer/policy/mean Std  0.613372
trainer/policy/mean Max  0.998008
trainer/policy/mean Min  -0.999708
trainer/policy/normal/std Mean  0.387344
trainer/policy/normal/std Std  0.182871
trainer/policy/normal/std Max  0.976199
trainer/policy/normal/std Min  0.0749286
trainer/policy/normal/log_std Mean  -1.08865
trainer/policy/normal/log_std Std  0.567866
trainer/policy/normal/log_std Max  -0.0240887
trainer/policy/normal/log_std Min  -2.59122
eval/num steps total  163433
eval/num paths total  314
eval/path length Mean  433.5
eval/path length Std  110.5
eval/path length Max  544
eval/path length Min  323
eval/Rewards Mean  3.12469
eval/Rewards Std  0.915869
eval/Rewards Max  4.85447
eval/Rewards Min  0.990397
eval/Returns Mean  1354.55
eval/Returns Std  353.281
eval/Returns Max  1707.83
eval/Returns Min  1001.27
eval/Actions Mean  0.120351
eval/Actions Std  0.546071
eval/Actions Max  0.998538
eval/Actions Min  -0.999818
eval/Num Paths  2
eval/Average Returns  1354.55
eval/normalized_score  42.2429
time/evaluation sampling (s)  0.893066
time/logging (s)  0.00342348
time/sampling batch (s)  0.274777
time/saving (s)  0.00331811
time/training (s)  4.32058
time/epoch (s)  5.49516
time/total (s)  31576.5
Epoch -752
----------------------------------  ----------------
2022-05-10 21:56:50.719653 PDT | [0] Epoch -751 finished
----------------------------------  ----------------
epoch  -751
replay_buffer/size  999996
trainer/num train calls  250000
trainer/Policy Loss  -2.31739
trainer/Log Pis Mean  2.32966
trainer/Log Pis Std  2.72162
trainer/Log Pis Max  10.8877
trainer/Log Pis Min  -5.08525
trainer/policy/mean Mean  0.104138
trainer/policy/mean Std  0.622112
trainer/policy/mean Max  0.997921
trainer/policy/mean Min  -0.996813
trainer/policy/normal/std Mean  0.382233
trainer/policy/normal/std Std  0.184692
trainer/policy/normal/std Max  0.929987
trainer/policy/normal/std Min  0.0731178
trainer/policy/normal/log_std Mean  -1.10994
trainer/policy/normal/log_std Std  0.585367
trainer/policy/normal/log_std Max  -0.0725842
trainer/policy/normal/log_std Min  -2.61568
eval/num steps total  164005
eval/num paths total  315
eval/path length Mean  572
eval/path length Std  0
eval/path length Max  572
eval/path length Min  572
eval/Rewards Mean  3.15024
eval/Rewards Std  0.745932
eval/Rewards Max  4.61759
eval/Rewards Min  0.984262
eval/Returns Mean  1801.94
eval/Returns Std  0
eval/Returns Max  1801.94
eval/Returns Min  1801.94
eval/Actions Mean  0.156962
eval/Actions Std  0.600958
eval/Actions Max  0.998585
eval/Actions Min  -0.998552
eval/Num Paths  1
eval/Average Returns  1801.94
eval/normalized_score  55.9892
time/evaluation sampling (s)  0.914614
time/logging (s)  0.00264728
time/sampling batch (s)  0.280252
time/saving (s)  0.00320767
time/training (s)  4.28054
time/epoch (s)  5.48126
time/total (s)  31581.9
Epoch -751
----------------------------------  ----------------
2022-05-10 21:56:56.212914 PDT | [0] Epoch -750 finished
----------------------------------  ----------------
epoch  -750
replay_buffer/size  999996
trainer/num train calls  251000
trainer/Policy Loss  -2.271
trainer/Log Pis Mean  2.18261
trainer/Log Pis Std  2.58958
trainer/Log Pis Max  11.7475
trainer/Log Pis Min  -7.1081
trainer/policy/mean Mean  0.122574
trainer/policy/mean Std  0.619198
trainer/policy/mean Max  0.99698
trainer/policy/mean Min  -0.997508
trainer/policy/normal/std Mean  0.383516
trainer/policy/normal/std Std  0.188416
trainer/policy/normal/std Max  1.04612
trainer/policy/normal/std Min  0.0746893
trainer/policy/normal/log_std Mean  -1.1087
trainer/policy/normal/log_std Std  0.587577
trainer/policy/normal/log_std Max  0.0450921
trainer/policy/normal/log_std Min  -2.59442
eval/num steps total  164851
eval/num paths total  317
eval/path length Mean  423
eval/path length Std  27
eval/path length Max  450
eval/path length Min  396
eval/Rewards Mean  3.12839
eval/Rewards Std  0.886441
eval/Rewards Max  4.89335
eval/Rewards Min  0.985559
eval/Returns Mean  1323.31
eval/Returns Std  107.696
eval/Returns Max  1431
eval/Returns Min  1215.61
eval/Actions Mean  0.139933
eval/Actions Std  0.579683
eval/Actions Max  0.998122
eval/Actions Min  -0.999406
eval/Num Paths  2
eval/Average Returns  1323.31
eval/normalized_score  41.2829
time/evaluation sampling (s)  0.929842
time/logging (s)  0.00323764
time/sampling batch (s)  0.272627
time/saving (s)  0.00315159
time/training (s)  4.2643
time/epoch (s)  5.47316
time/total (s)  31587.4
Epoch -750
----------------------------------  ----------------
2022-05-10 21:57:01.805896 PDT | [0] Epoch -749 finished
----------------------------------  ----------------
epoch  -749
replay_buffer/size  999996
trainer/num train calls  252000
trainer/Policy Loss  -2.16735
trainer/Log Pis Mean  2.05973
trainer/Log Pis Std  2.61869
trainer/Log Pis Max  10.9658
trainer/Log Pis Min  -6.36909
trainer/policy/mean Mean  0.13943
trainer/policy/mean Std  0.616514
trainer/policy/mean Max  0.997416
trainer/policy/mean Min  -0.9977
trainer/policy/normal/std Mean  0.387205
trainer/policy/normal/std Std  0.178556
trainer/policy/normal/std Max  0.925671
trainer/policy/normal/std Min  0.0672606
trainer/policy/normal/log_std Mean  -1.08495
trainer/policy/normal/log_std Std  0.562577
trainer/policy/normal/log_std Max  -0.0772365
trainer/policy/normal/log_std Min  -2.69918
eval/num steps total  165582
eval/num paths total  318
eval/path length Mean  731
eval/path length Std  0
eval/path length Max  731
eval/path length Min  731
eval/Rewards Mean  3.21022
eval/Rewards Std  0.729025
eval/Rewards Max  4.74543
eval/Rewards Min  0.980711
eval/Returns Mean  2346.67
eval/Returns Std  0
eval/Returns Max  2346.67
eval/Returns Min  2346.67
eval/Actions Mean  0.165255
eval/Actions Std  0.601266
eval/Actions Max  0.997295
eval/Actions Min  -0.998758
eval/Num Paths  1
eval/Average Returns  2346.67
eval/normalized_score  72.7268
time/evaluation sampling (s)  1.00145
time/logging (s)  0.00305255
time/sampling batch (s)  0.275503
time/saving (s)  0.00359367
time/training (s)  4.28826
time/epoch (s)  5.57186
time/total (s)  31593
Epoch -749
----------------------------------  ----------------
2022-05-10 21:57:07.268205 PDT | [0] Epoch -748 finished
----------------------------------  ----------------
epoch  -748
replay_buffer/size  999996
trainer/num train calls  253000
trainer/Policy Loss  -2.20724
trainer/Log Pis Mean  2.17496
trainer/Log Pis Std  2.50962
trainer/Log Pis Max  9.79483
trainer/Log Pis Min  -4.00669
trainer/policy/mean Mean  0.10519
trainer/policy/mean Std  0.61859
trainer/policy/mean Max  0.998879
trainer/policy/mean Min  -0.99722
trainer/policy/normal/std Mean  0.38452
trainer/policy/normal/std Std  0.183973
trainer/policy/normal/std Max  0.929725
trainer/policy/normal/std Min  0.0691266
trainer/policy/normal/log_std Mean  -1.10058
trainer/policy/normal/log_std Std  0.57863
trainer/policy/normal/log_std Max  -0.0728663
trainer/policy/normal/log_std Min  -2.67182
eval/num steps total  166213
eval/num paths total  319
eval/path length Mean  631
eval/path length Std  0
eval/path length Max  631
eval/path length Min  631
eval/Rewards Mean  3.21721
eval/Rewards Std  0.736255
eval/Rewards Max  4.82695
eval/Rewards Min  0.986909
eval/Returns Mean  2030.06
eval/Returns Std  0
eval/Returns Max  2030.06
eval/Returns Min  2030.06
eval/Actions Mean  0.145183
eval/Actions Std  0.583801
eval/Actions Max  0.997996
eval/Actions Min  -0.998281
eval/Num Paths  1
eval/Average Returns  2030.06
eval/normalized_score  62.9985
time/evaluation sampling (s)  0.885143
time/logging (s)  0.00275467
time/sampling batch (s)  0.272945
time/saving (s)  0.00321121
time/training (s)  4.2771
time/epoch (s)  5.44115
time/total (s)  31598.4
Epoch -748
----------------------------------  ----------------
2022-05-10 21:57:12.757373 PDT | [0] Epoch -747 finished
----------------------------------  ----------------
epoch  -747
replay_buffer/size  999996
trainer/num train calls  254000
trainer/Policy Loss  -2.13807
trainer/Log Pis Mean  2.1812
trainer/Log Pis Std  2.51222
trainer/Log Pis Max  9.06478
trainer/Log Pis Min  -4.15577
trainer/policy/mean Mean  0.153824
trainer/policy/mean Std  0.621934
trainer/policy/mean Max  0.997846
trainer/policy/mean Min  -0.998154
trainer/policy/normal/std Mean  0.385167
trainer/policy/normal/std Std  0.179951
trainer/policy/normal/std Max  0.982967
trainer/policy/normal/std Min  0.0774609
trainer/policy/normal/log_std Mean  -1.09187
trainer/policy/normal/log_std Std  0.563751
trainer/policy/normal/log_std Max  -0.0171801
trainer/policy/normal/log_std Min  -2.55798
eval/num steps total  166885
eval/num paths total  320
eval/path length Mean  672
eval/path length Std  0
eval/path length Max  672
eval/path length Min  672
eval/Rewards Mean  3.1762
eval/Rewards Std  0.69358
eval/Rewards Max  4.7684
eval/Rewards Min  0.981558
eval/Returns Mean  2134.41
eval/Returns Std  0
eval/Returns Max  2134.41
eval/Returns Min  2134.41
eval/Actions Mean  0.158073
eval/Actions Std  0.600758
eval/Actions Max  0.997751
eval/Actions Min  -0.996651
eval/Num Paths  1
eval/Average Returns  2134.41
eval/normalized_score  66.2046
time/evaluation sampling (s)  0.892204
time/logging (s)  0.00295287
time/sampling batch (s)  0.274693
time/saving (s)  0.00348142
time/training (s)  4.29528
time/epoch (s)  5.46861
time/total (s)  31603.9
Epoch -747
----------------------------------  ----------------
2022-05-10 21:57:18.243477 PDT | [0] Epoch -746 finished
----------------------------------  ----------------
epoch  -746
replay_buffer/size  999996
trainer/num train calls  255000
trainer/Policy Loss  -2.32259
trainer/Log Pis Mean  2.41732
trainer/Log Pis Std  2.51571
trainer/Log Pis Max  11.2603
trainer/Log Pis Min  -4.42993
trainer/policy/mean Mean  0.1535
trainer/policy/mean Std  0.623692
trainer/policy/mean Max  0.999391
trainer/policy/mean Min  -0.997963
trainer/policy/normal/std Mean  0.384392
trainer/policy/normal/std Std  0.179031
trainer/policy/normal/std Max  0.889135
trainer/policy/normal/std Min  0.0652097
trainer/policy/normal/log_std Mean  -1.0994
trainer/policy/normal/log_std Std  0.581223
trainer/policy/normal/log_std Max  -0.117506
trainer/policy/normal/log_std Min  -2.73015
eval/num steps total  167443
eval/num paths total  321
eval/path length Mean  558
eval/path length Std  0
eval/path length Max  558
eval/path length Min  558
eval/Rewards Mean  3.20081
eval/Rewards Std  0.822885
eval/Rewards Max  4.80092
eval/Rewards Min  0.988715
eval/Returns Mean  1786.05
eval/Returns Std  0
eval/Returns Max  1786.05
eval/Returns Min  1786.05
eval/Actions Mean  0.161098
eval/Actions Std  0.575487
eval/Actions Max  0.998246
eval/Actions Min  -0.998497
eval/Num Paths  1
eval/Average Returns  1786.05
eval/normalized_score  55.5012
time/evaluation sampling (s)  0.882253
time/logging (s)  0.00260334
time/sampling batch (s)  0.274545
time/saving (s)  0.00317829
time/training (s)  4.30241
time/epoch (s)  5.465
time/total (s)  31609.4
Epoch -746
----------------------------------  ----------------
2022-05-10 21:57:23.762584 PDT | [0] Epoch -745 finished
----------------------------------  ----------------
epoch  -745
replay_buffer/size  999996
trainer/num train calls  256000
trainer/Policy Loss  -2.20829
trainer/Log Pis Mean  2.08777
trainer/Log Pis Std  2.48561
trainer/Log Pis Max  8.49458
trainer/Log Pis Min  -5.75825
trainer/policy/mean Mean  0.134703
trainer/policy/mean Std  0.623375
trainer/policy/mean Max  0.997305
trainer/policy/mean Min  -0.998174
trainer/policy/normal/std Mean  0.394704
trainer/policy/normal/std Std  0.188755
trainer/policy/normal/std Max  0.933717
trainer/policy/normal/std Min  0.0694857
trainer/policy/normal/log_std Mean  -1.07345
trainer/policy/normal/log_std Std  0.576369
trainer/policy/normal/log_std Max  -0.0685822
trainer/policy/normal/log_std Min  -2.66663
eval/num steps total  168436
eval/num paths total  323
eval/path length Mean  496.5
eval/path length Std  11.5
eval/path length Max  508
eval/path length Min  485
eval/Rewards Mean  3.16828
eval/Rewards Std  0.795307
eval/Rewards Max  4.83317
eval/Rewards Min  0.985794
eval/Returns Mean  1573.05
eval/Returns Std  21.5594
eval/Returns Max  1594.61
eval/Returns Min  1551.49
eval/Actions Mean  0.157336
eval/Actions Std  0.596627
eval/Actions Max  0.998171
eval/Actions Min  -0.998809
eval/Num Paths  2
eval/Average Returns  1573.05
eval/normalized_score  48.9564
time/evaluation sampling (s)  0.902372
time/logging (s)  0.00361544
time/sampling batch (s)  0.275181
time/saving (s)  0.0029938
time/training (s)  4.31475
time/epoch (s)  5.49891
time/total (s)  31614.9
Epoch -745
----------------------------------  ----------------
2022-05-10 21:57:29.240396 PDT | [0] Epoch -744 finished
----------------------------------  ----------------
epoch  -744
replay_buffer/size  999996
trainer/num train calls  257000
trainer/Policy Loss  -2.18365
trainer/Log Pis Mean  2.07902
trainer/Log Pis Std  2.51111
trainer/Log Pis Max  10.3646
trainer/Log Pis Min  -4.10818
trainer/policy/mean Mean  0.138686
trainer/policy/mean Std  0.612831
trainer/policy/mean Max  0.998677
trainer/policy/mean Min  -0.998428
trainer/policy/normal/std Mean  0.395158
trainer/policy/normal/std Std  0.186022
trainer/policy/normal/std Max  0.975982
trainer/policy/normal/std Min  0.0740821
trainer/policy/normal/log_std Mean  -1.07042
trainer/policy/normal/log_std Std  0.57408
trainer/policy/normal/log_std Max  -0.0243112
trainer/policy/normal/log_std Min  -2.60258
eval/num steps total  169308
eval/num paths total  325
eval/path length Mean  436
eval/path length Std  31
eval/path length Max  467
eval/path length Min  405
eval/Rewards Mean  3.13143
eval/Rewards Std  0.841496
eval/Rewards Max  5.01268
eval/Rewards Min  0.980387
eval/Returns Mean  1365.3
eval/Returns Std  127.27
eval/Returns Max  1492.57
eval/Returns Min  1238.03
eval/Actions Mean  0.147521
eval/Actions Std  0.585883
eval/Actions Max  0.997677
eval/Actions Min  -0.998332
eval/Num Paths  2
eval/Average Returns  1365.3
eval/normalized_score  42.5732
time/evaluation sampling (s)  0.884847
time/logging (s)  0.00332293
time/sampling batch (s)  0.274944
time/saving (s)  0.00300567
time/training (s)  4.29063
time/epoch (s)  5.45675
time/total (s)  31620.3
Epoch -744
----------------------------------  ----------------
2022-05-10 21:57:34.719947 PDT | [0] Epoch -743 finished
----------------------------------  ----------------
epoch  -743
replay_buffer/size  999996
trainer/num train calls  258000
trainer/Policy Loss  -2.03538
trainer/Log Pis Mean  1.96108
trainer/Log Pis Std  2.40929
trainer/Log Pis Max  9.33358
trainer/Log Pis Min  -7.9593
trainer/policy/mean Mean  0.158135
trainer/policy/mean Std  0.600289
trainer/policy/mean Max  0.996755
trainer/policy/mean Min  -0.998546
trainer/policy/normal/std Mean  0.382998
trainer/policy/normal/std Std  0.182234
trainer/policy/normal/std Max  0.978456
trainer/policy/normal/std Min  0.0707627
trainer/policy/normal/log_std Mean  -1.10047
trainer/policy/normal/log_std Std  0.568555
trainer/policy/normal/log_std Max  -0.0217793
trainer/policy/normal/log_std Min  -2.64842
eval/num steps total  169868
eval/num paths total  326
eval/path length Mean  560
eval/path length Std  0
eval/path length Max  560
eval/path length Min  560
eval/Rewards Mean  3.18481
eval/Rewards Std  0.846614
eval/Rewards Max  4.80597
eval/Rewards Min  0.987665
eval/Returns Mean  1783.5
eval/Returns Std  0
eval/Returns Max  1783.5
eval/Returns Min  1783.5
eval/Actions Mean  0.159788
eval/Actions Std  0.564971
eval/Actions Max  0.998328
eval/Actions Min  -0.998914
eval/Num Paths  1
eval/Average Returns  1783.5
eval/normalized_score  55.4226
time/evaluation sampling (s)  0.917421
time/logging (s)  0.00290253
time/sampling batch (s)  0.272321
time/saving (s)  0.00353035
time/training (s)  4.262
time/epoch (s)  5.45818
time/total (s)  31625.8
Epoch -743
----------------------------------  ----------------
2022-05-10 21:57:40.209221 PDT | [0] Epoch -742 finished
----------------------------------  ----------------
epoch  -742
replay_buffer/size  999996
trainer/num train calls  259000
trainer/Policy Loss  -2.2306
trainer/Log Pis Mean  2.25783
trainer/Log Pis Std  2.65104
trainer/Log Pis Max  11.6613
trainer/Log Pis Min  -5.1929
trainer/policy/mean Mean  0.16396
trainer/policy/mean Std  0.619619
trainer/policy/mean Max  0.998621
trainer/policy/mean Min  -0.998428
trainer/policy/normal/std Mean  0.382741
trainer/policy/normal/std Std  0.184539
trainer/policy/normal/std Max  0.989184
trainer/policy/normal/std Min  0.0667224
trainer/policy/normal/log_std Mean  -1.11109
trainer/policy/normal/log_std Std  0.59461
trainer/policy/normal/log_std Max  -0.0108748
trainer/policy/normal/log_std Min  -2.70721
eval/num steps total  170455
eval/num paths total  327
eval/path length Mean  587
eval/path length Std  0
eval/path length Max  587
eval/path length Min  587
eval/Rewards Mean  3.21995
eval/Rewards Std  0.742618
eval/Rewards Max  4.93156
eval/Rewards Min  0.987925
eval/Returns Mean  1890.11
eval/Returns Std  0
eval/Returns Max  1890.11
eval/Returns Min  1890.11
eval/Actions Mean  0.175524
eval/Actions Std  0.599153
eval/Actions Max  0.998527
eval/Actions Min  -0.998651
eval/Num Paths  1
eval/Average Returns  1890.11
eval/normalized_score  58.6984
time/evaluation sampling (s)  0.881892
time/logging (s)  0.00254786
time/sampling batch (s)  0.275276
time/saving (s)  0.00298566
time/training (s)  4.30504
time/epoch (s)  5.46774
time/total (s)  31631.3
Epoch -742
----------------------------------  ----------------
2022-05-10 21:57:45.695755 PDT | [0] Epoch -741 finished
----------------------------------  ----------------
epoch  -741
replay_buffer/size  999996
trainer/num train calls  260000
trainer/Policy Loss  -2.0783
trainer/Log Pis Mean  2.04672
trainer/Log Pis Std  2.51947
trainer/Log Pis Max  9.46566
trainer/Log Pis Min  -5.79676
trainer/policy/mean Mean  0.151469
trainer/policy/mean Std  0.603726
trainer/policy/mean Max  0.998164
trainer/policy/mean Min  -0.998073
trainer/policy/normal/std Mean  0.390287
trainer/policy/normal/std Std  0.192579
trainer/policy/normal/std Max  1.00064
trainer/policy/normal/std Min  0.0696759
trainer/policy/normal/log_std Mean  -1.09818
trainer/policy/normal/log_std Std  0.606605
trainer/policy/normal/log_std Max  0.000640426
trainer/policy/normal/log_std Min  -2.6639
eval/num steps total  171343
eval/num paths total  329
eval/path length Mean  444
eval/path length Std  39
eval/path length Max  483
eval/path length Min  405
eval/Rewards Mean  3.12783
eval/Rewards Std  0.814763
eval/Rewards Max  4.77994
eval/Rewards Min  0.98048
eval/Returns Mean  1388.76
eval/Returns Std  148.476
eval/Returns Max  1537.23
eval/Returns Min  1240.28
eval/Actions Mean  0.149892
eval/Actions Std  0.594806
eval/Actions Max  0.996798
eval/Actions Min  -0.998233
eval/Num Paths  2
eval/Average Returns  1388.76
eval/normalized_score  43.2939
time/evaluation sampling (s)  0.91186
time/logging (s)  0.00388914
time/sampling batch (s)  0.272346
time/saving (s)  0.00373851
time/training (s)  4.27509
time/epoch (s)  5.46693
time/total (s)  31636.7
Epoch -741
----------------------------------  ----------------
2022-05-10 21:57:51.158137 PDT | [0] Epoch -740 finished
----------------------------------  ----------------
epoch  -740
replay_buffer/size  999996
trainer/num train calls  261000
trainer/Policy Loss  -2.26703
trainer/Log Pis Mean  2.34268
trainer/Log Pis Std  2.65291
trainer/Log Pis Max  9.60896
trainer/Log Pis Min  -5.30434
trainer/policy/mean Mean  0.155936
trainer/policy/mean Std  0.618723
trainer/policy/mean Max  0.997845
trainer/policy/mean Min  -0.99837
trainer/policy/normal/std Mean  0.379683
trainer/policy/normal/std Std  0.186041
trainer/policy/normal/std Max  0.921548
trainer/policy/normal/std Min  0.0660229
trainer/policy/normal/log_std Mean  -1.12444
trainer/policy/normal/log_std Std  0.604889
trainer/policy/normal/log_std Max  -0.0817001
trainer/policy/normal/log_std Min  -2.71775
eval/num steps total  171838
eval/num paths total  330
eval/path length Mean  495
eval/path length Std  0
eval/path length Max  495
eval/path length Min  495
eval/Rewards Mean  3.07044
eval/Rewards Std  0.784807
eval/Rewards Max  4.71091
eval/Rewards Min  0.980865
eval/Returns Mean  1519.87
eval/Returns Std  0
eval/Returns Max  1519.87
eval/Returns Min  1519.87
eval/Actions Mean  0.143125
eval/Actions Std  0.577955
eval/Actions Max  0.997258
eval/Actions Min  -0.997204
eval/Num Paths  1
eval/Average Returns  1519.87
eval/normalized_score  47.3223
time/evaluation sampling (s)  0.907709
time/logging (s)  0.00232368
time/sampling batch (s)  0.272707
time/saving (s)
0.00312154 time/training (s) 4.25399 time/epoch (s) 5.43985 time/total (s) 31642.2 Epoch -740 ---------------------------------- --------------- 2022-05-10 21:57:56.632400 PDT | [0] Epoch -739 finished ---------------------------------- --------------- epoch -739 replay_buffer/size 999996 trainer/num train calls 262000 trainer/Policy Loss -2.0236 trainer/Log Pis Mean 2.17072 trainer/Log Pis Std 2.6161 trainer/Log Pis Max 14.5768 trainer/Log Pis Min -4.57715 trainer/policy/mean Mean 0.116362 trainer/policy/mean Std 0.602046 trainer/policy/mean Max 0.99802 trainer/policy/mean Min -0.998797 trainer/policy/normal/std Mean 0.370702 trainer/policy/normal/std Std 0.179985 trainer/policy/normal/std Max 0.927635 trainer/policy/normal/std Min 0.0674406 trainer/policy/normal/log_std Mean -1.13999 trainer/policy/normal/log_std Std 0.582359 trainer/policy/normal/log_std Max -0.0751171 trainer/policy/normal/log_std Min -2.69651 eval/num steps total 172378 eval/num paths total 331 eval/path length Mean 540 eval/path length Std 0 eval/path length Max 540 eval/path length Min 540 eval/Rewards Mean 3.19937 eval/Rewards Std 0.873507 eval/Rewards Max 5.421 eval/Rewards Min 0.989936 eval/Returns Mean 1727.66 eval/Returns Std 0 eval/Returns Max 1727.66 eval/Returns Min 1727.66 eval/Actions Mean 0.152853 eval/Actions Std 0.575876 eval/Actions Max 0.998035 eval/Actions Min -0.997961 eval/Num Paths 1 eval/Average Returns 1727.66 eval/normalized_score 53.707 time/evaluation sampling (s) 0.921782 time/logging (s) 0.0023858 time/sampling batch (s) 0.270886 time/saving (s) 0.00298395 time/training (s) 4.2557 time/epoch (s) 5.45374 time/total (s) 31647.7 Epoch -739 ---------------------------------- --------------- 2022-05-10 21:58:02.085919 PDT | [0] Epoch -738 finished ---------------------------------- --------------- epoch -738 replay_buffer/size 999996 trainer/num train calls 263000 trainer/Policy Loss -2.08456 trainer/Log Pis Mean 2.04224 trainer/Log Pis Std 2.46586 trainer/Log Pis Max 
10.0277 trainer/Log Pis Min -7.53639 trainer/policy/mean Mean 0.142299 trainer/policy/mean Std 0.601367 trainer/policy/mean Max 0.99772 trainer/policy/mean Min -0.998129 trainer/policy/normal/std Mean 0.377176 trainer/policy/normal/std Std 0.179132 trainer/policy/normal/std Max 0.923583 trainer/policy/normal/std Min 0.0717997 trainer/policy/normal/log_std Mean -1.11736 trainer/policy/normal/log_std Std 0.573147 trainer/policy/normal/log_std Max -0.0794948 trainer/policy/normal/log_std Min -2.63387 eval/num steps total 173358 eval/num paths total 332 eval/path length Mean 980 eval/path length Std 0 eval/path length Max 980 eval/path length Min 980 eval/Rewards Mean 3.23601 eval/Rewards Std 0.595385 eval/Rewards Max 4.47123 eval/Rewards Min 0.985889 eval/Returns Mean 3171.29 eval/Returns Std 0 eval/Returns Max 3171.29 eval/Returns Min 3171.29 eval/Actions Mean 0.161721 eval/Actions Std 0.611522 eval/Actions Max 0.997653 eval/Actions Min -0.997854 eval/Num Paths 1 eval/Average Returns 3171.29 eval/normalized_score 98.064 time/evaluation sampling (s) 0.902824 time/logging (s) 0.00368162 time/sampling batch (s) 0.273615 time/saving (s) 0.00307197 time/training (s) 4.25122 time/epoch (s) 5.43441 time/total (s) 31653.1 Epoch -738 ---------------------------------- --------------- 2022-05-10 21:58:07.556248 PDT | [0] Epoch -737 finished ---------------------------------- --------------- epoch -737 replay_buffer/size 999996 trainer/num train calls 264000 trainer/Policy Loss -2.03413 trainer/Log Pis Mean 1.96731 trainer/Log Pis Std 2.63102 trainer/Log Pis Max 9.91212 trainer/Log Pis Min -5.34764 trainer/policy/mean Mean 0.121526 trainer/policy/mean Std 0.602741 trainer/policy/mean Max 0.997738 trainer/policy/mean Min -0.997695 trainer/policy/normal/std Mean 0.385474 trainer/policy/normal/std Std 0.183296 trainer/policy/normal/std Max 0.92013 trainer/policy/normal/std Min 0.0733879 trainer/policy/normal/log_std Mean -1.09559 trainer/policy/normal/log_std Std 0.572075 
trainer/policy/normal/log_std Max -0.0832402 trainer/policy/normal/log_std Min -2.612 eval/num steps total 173955 eval/num paths total 333 eval/path length Mean 597 eval/path length Std 0 eval/path length Max 597 eval/path length Min 597 eval/Rewards Mean 3.17887 eval/Rewards Std 0.793453 eval/Rewards Max 5.39627 eval/Rewards Min 0.991445 eval/Returns Mean 1897.79 eval/Returns Std 0 eval/Returns Max 1897.79 eval/Returns Min 1897.79 eval/Actions Mean 0.166551 eval/Actions Std 0.59867 eval/Actions Max 0.998251 eval/Actions Min -0.998517 eval/Num Paths 1 eval/Average Returns 1897.79 eval/normalized_score 58.9343 time/evaluation sampling (s) 0.894271 time/logging (s) 0.00310805 time/sampling batch (s) 0.273189 time/saving (s) 0.00356974 time/training (s) 4.27473 time/epoch (s) 5.44886 time/total (s) 31658.5 Epoch -737 ---------------------------------- --------------- 2022-05-10 21:58:13.030234 PDT | [0] Epoch -736 finished ---------------------------------- --------------- epoch -736 replay_buffer/size 999996 trainer/num train calls 265000 trainer/Policy Loss -2.18387 trainer/Log Pis Mean 2.12906 trainer/Log Pis Std 2.53119 trainer/Log Pis Max 10.8366 trainer/Log Pis Min -5.16736 trainer/policy/mean Mean 0.127133 trainer/policy/mean Std 0.617634 trainer/policy/mean Max 0.998524 trainer/policy/mean Min -0.997791 trainer/policy/normal/std Mean 0.391119 trainer/policy/normal/std Std 0.185396 trainer/policy/normal/std Max 0.981263 trainer/policy/normal/std Min 0.0714695 trainer/policy/normal/log_std Mean -1.08231 trainer/policy/normal/log_std Std 0.576872 trainer/policy/normal/log_std Max -0.018915 trainer/policy/normal/log_std Min -2.63848 eval/num steps total 174512 eval/num paths total 334 eval/path length Mean 557 eval/path length Std 0 eval/path length Max 557 eval/path length Min 557 eval/Rewards Mean 3.15899 eval/Rewards Std 0.797981 eval/Rewards Max 4.7669 eval/Rewards Min 0.990017 eval/Returns Mean 1759.56 eval/Returns Std 0 eval/Returns Max 1759.56 eval/Returns 
Min 1759.56 eval/Actions Mean 0.147688 eval/Actions Std 0.582599 eval/Actions Max 0.997325 eval/Actions Min -0.998617 eval/Num Paths 1 eval/Average Returns 1759.56 eval/normalized_score 54.6871 time/evaluation sampling (s) 0.893852 time/logging (s) 0.0025163 time/sampling batch (s) 0.272982 time/saving (s) 0.00310328 time/training (s) 4.27974 time/epoch (s) 5.45219 time/total (s) 31664 Epoch -736 ---------------------------------- --------------- 2022-05-10 21:58:18.489084 PDT | [0] Epoch -735 finished ---------------------------------- --------------- epoch -735 replay_buffer/size 999996 trainer/num train calls 266000 trainer/Policy Loss -2.19895 trainer/Log Pis Mean 2.16067 trainer/Log Pis Std 2.63612 trainer/Log Pis Max 9.71074 trainer/Log Pis Min -4.02134 trainer/policy/mean Mean 0.137539 trainer/policy/mean Std 0.611699 trainer/policy/mean Max 0.996847 trainer/policy/mean Min -0.998274 trainer/policy/normal/std Mean 0.380392 trainer/policy/normal/std Std 0.181387 trainer/policy/normal/std Max 0.995562 trainer/policy/normal/std Min 0.0753464 trainer/policy/normal/log_std Mean -1.10915 trainer/policy/normal/log_std Std 0.572464 trainer/policy/normal/log_std Max -0.00444768 trainer/policy/normal/log_std Min -2.58566 eval/num steps total 175058 eval/num paths total 335 eval/path length Mean 546 eval/path length Std 0 eval/path length Max 546 eval/path length Min 546 eval/Rewards Mean 3.15928 eval/Rewards Std 0.863051 eval/Rewards Max 5.07621 eval/Rewards Min 0.989566 eval/Returns Mean 1724.97 eval/Returns Std 0 eval/Returns Max 1724.97 eval/Returns Min 1724.97 eval/Actions Mean 0.150514 eval/Actions Std 0.569789 eval/Actions Max 0.997169 eval/Actions Min -0.998248 eval/Num Paths 1 eval/Average Returns 1724.97 eval/normalized_score 53.6242 time/evaluation sampling (s) 0.893048 time/logging (s) 0.00241723 time/sampling batch (s) 0.271601 time/saving (s) 0.00300899 time/training (s) 4.2683 time/epoch (s) 5.43838 time/total (s) 31669.4 Epoch -735 
---------------------------------- --------------- 2022-05-10 21:58:23.976621 PDT | [0] Epoch -734 finished ---------------------------------- --------------- epoch -734 replay_buffer/size 999996 trainer/num train calls 267000 trainer/Policy Loss -2.07732 trainer/Log Pis Mean 2.1867 trainer/Log Pis Std 2.69188 trainer/Log Pis Max 9.02544 trainer/Log Pis Min -4.30753 trainer/policy/mean Mean 0.129015 trainer/policy/mean Std 0.61392 trainer/policy/mean Max 0.997396 trainer/policy/mean Min -0.995866 trainer/policy/normal/std Mean 0.383457 trainer/policy/normal/std Std 0.180275 trainer/policy/normal/std Max 0.886003 trainer/policy/normal/std Min 0.0656536 trainer/policy/normal/log_std Mean -1.09783 trainer/policy/normal/log_std Std 0.566979 trainer/policy/normal/log_std Max -0.121035 trainer/policy/normal/log_std Min -2.72336 eval/num steps total 175887 eval/num paths total 337 eval/path length Mean 414.5 eval/path length Std 8.5 eval/path length Max 423 eval/path length Min 406 eval/Rewards Mean 3.09792 eval/Rewards Std 0.861764 eval/Rewards Max 5.30575 eval/Rewards Min 0.982445 eval/Returns Mean 1284.09 eval/Returns Std 27.1003 eval/Returns Max 1311.19 eval/Returns Min 1256.99 eval/Actions Mean 0.154209 eval/Actions Std 0.584868 eval/Actions Max 0.997198 eval/Actions Min -0.999052 eval/Num Paths 2 eval/Average Returns 1284.09 eval/normalized_score 40.0778 time/evaluation sampling (s) 0.897663 time/logging (s) 0.00326961 time/sampling batch (s) 0.273874 time/saving (s) 0.00299712 time/training (s) 4.29008 time/epoch (s) 5.46789 time/total (s) 31674.9 Epoch -734 ---------------------------------- --------------- 2022-05-10 21:58:29.460909 PDT | [0] Epoch -733 finished ---------------------------------- --------------- epoch -733 replay_buffer/size 999996 trainer/num train calls 268000 trainer/Policy Loss -2.15351 trainer/Log Pis Mean 2.21908 trainer/Log Pis Std 2.50573 trainer/Log Pis Max 11.6134 trainer/Log Pis Min -4.07523 trainer/policy/mean Mean 0.146438 
trainer/policy/mean Std 0.612412 trainer/policy/mean Max 0.996728 trainer/policy/mean Min -0.997938 trainer/policy/normal/std Mean 0.3879 trainer/policy/normal/std Std 0.182963 trainer/policy/normal/std Max 1.00323 trainer/policy/normal/std Min 0.0739161 trainer/policy/normal/log_std Mean -1.0883 trainer/policy/normal/log_std Std 0.57165 trainer/policy/normal/log_std Max 0.00322453 trainer/policy/normal/log_std Min -2.60482 eval/num steps total 176453 eval/num paths total 338 eval/path length Mean 566 eval/path length Std 0 eval/path length Max 566 eval/path length Min 566 eval/Rewards Mean 3.18216 eval/Rewards Std 0.762935 eval/Rewards Max 4.75968 eval/Rewards Min 0.987564 eval/Returns Mean 1801.1 eval/Returns Std 0 eval/Returns Max 1801.1 eval/Returns Min 1801.1 eval/Actions Mean 0.151 eval/Actions Std 0.593495 eval/Actions Max 0.996707 eval/Actions Min -0.998158 eval/Num Paths 1 eval/Average Returns 1801.1 eval/normalized_score 55.9636 time/evaluation sampling (s) 0.896298 time/logging (s) 0.00257148 time/sampling batch (s) 0.274243 time/saving (s) 0.00326933 time/training (s) 4.28674 time/epoch (s) 5.46312 time/total (s) 31680.4 Epoch -733 ---------------------------------- --------------- 2022-05-10 21:58:35.432977 PDT | [0] Epoch -732 finished ---------------------------------- --------------- epoch -732 replay_buffer/size 999996 trainer/num train calls 269000 trainer/Policy Loss -2.32529 trainer/Log Pis Mean 2.25939 trainer/Log Pis Std 2.65391 trainer/Log Pis Max 9.80705 trainer/Log Pis Min -6.4293 trainer/policy/mean Mean 0.135529 trainer/policy/mean Std 0.625922 trainer/policy/mean Max 0.998671 trainer/policy/mean Min -0.997598 trainer/policy/normal/std Mean 0.383315 trainer/policy/normal/std Std 0.184871 trainer/policy/normal/std Max 0.977569 trainer/policy/normal/std Min 0.0695408 trainer/policy/normal/log_std Mean -1.10663 trainer/policy/normal/log_std Std 0.584871 trainer/policy/normal/log_std Max -0.0226866 trainer/policy/normal/log_std Min -2.66584 
eval/num steps total 177394 eval/num paths total 340 eval/path length Mean 470.5 eval/path length Std 65.5 eval/path length Max 536 eval/path length Min 405 eval/Rewards Mean 3.15177 eval/Rewards Std 0.825403 eval/Rewards Max 5.52291 eval/Rewards Min 0.986251 eval/Returns Mean 1482.91 eval/Returns Std 241.531 eval/Returns Max 1724.44 eval/Returns Min 1241.38 eval/Actions Mean 0.157615 eval/Actions Std 0.59096 eval/Actions Max 0.998804 eval/Actions Min -0.997596 eval/Num Paths 2 eval/Average Returns 1482.91 eval/normalized_score 46.1868 time/evaluation sampling (s) 0.899401 time/logging (s) 0.00428354 time/sampling batch (s) 0.268886 time/saving (s) 0.00385882 time/training (s) 4.77665 time/epoch (s) 5.95308 time/total (s) 31686.3 Epoch -732 ---------------------------------- --------------- 2022-05-10 21:58:41.695061 PDT | [0] Epoch -731 finished ---------------------------------- --------------- epoch -731 replay_buffer/size 999996 trainer/num train calls 270000 trainer/Policy Loss -2.05278 trainer/Log Pis Mean 2.09298 trainer/Log Pis Std 2.67916 trainer/Log Pis Max 9.91556 trainer/Log Pis Min -5.6078 trainer/policy/mean Mean 0.117794 trainer/policy/mean Std 0.621191 trainer/policy/mean Max 0.998922 trainer/policy/mean Min -0.998404 trainer/policy/normal/std Mean 0.392869 trainer/policy/normal/std Std 0.185947 trainer/policy/normal/std Max 0.988878 trainer/policy/normal/std Min 0.0728171 trainer/policy/normal/log_std Mean -1.07507 trainer/policy/normal/log_std Std 0.570078 trainer/policy/normal/log_std Max -0.0111848 trainer/policy/normal/log_std Min -2.6198 eval/num steps total 178257 eval/num paths total 342 eval/path length Mean 431.5 eval/path length Std 25.5 eval/path length Max 457 eval/path length Min 406 eval/Rewards Mean 3.1294 eval/Rewards Std 0.807722 eval/Rewards Max 4.99111 eval/Rewards Min 0.98473 eval/Returns Mean 1350.34 eval/Returns Std 103.304 eval/Returns Max 1453.64 eval/Returns Min 1247.03 eval/Actions Mean 0.145859 eval/Actions Std 0.579973 
eval/Actions Max 0.998023 eval/Actions Min -0.998458 eval/Num Paths 2 eval/Average Returns 1350.34 eval/normalized_score 42.1134 time/evaluation sampling (s) 0.957945 time/logging (s) 0.00569196 time/sampling batch (s) 0.267598 time/saving (s) 0.00597416 time/training (s) 5.00348 time/epoch (s) 6.24069 time/total (s) 31692.6 Epoch -731 ---------------------------------- --------------- 2022-05-10 21:58:47.965409 PDT | [0] Epoch -730 finished ---------------------------------- --------------- epoch -730 replay_buffer/size 999996 trainer/num train calls 271000 trainer/Policy Loss -2.21047 trainer/Log Pis Mean 2.17158 trainer/Log Pis Std 2.55264 trainer/Log Pis Max 10.0054 trainer/Log Pis Min -4.45811 trainer/policy/mean Mean 0.138826 trainer/policy/mean Std 0.618268 trainer/policy/mean Max 0.996473 trainer/policy/mean Min -0.997141 trainer/policy/normal/std Mean 0.38526 trainer/policy/normal/std Std 0.184593 trainer/policy/normal/std Max 0.974287 trainer/policy/normal/std Min 0.0744583 trainer/policy/normal/log_std Mean -1.09713 trainer/policy/normal/log_std Std 0.57345 trainer/policy/normal/log_std Max -0.026049 trainer/policy/normal/log_std Min -2.59752 eval/num steps total 178941 eval/num paths total 343 eval/path length Mean 684 eval/path length Std 0 eval/path length Max 684 eval/path length Min 684 eval/Rewards Mean 3.2488 eval/Rewards Std 0.775192 eval/Rewards Max 5.4792 eval/Rewards Min 0.982044 eval/Returns Mean 2222.18 eval/Returns Std 0 eval/Returns Max 2222.18 eval/Returns Min 2222.18 eval/Actions Mean 0.148793 eval/Actions Std 0.594116 eval/Actions Max 0.998083 eval/Actions Min -0.998007 eval/Num Paths 1 eval/Average Returns 2222.18 eval/normalized_score 68.9015 time/evaluation sampling (s) 1.02838 time/logging (s) 0.00333327 time/sampling batch (s) 0.271417 time/saving (s) 0.00310652 time/training (s) 4.9374 time/epoch (s) 6.24363 time/total (s) 31698.8 Epoch -730 ---------------------------------- --------------- 2022-05-10 21:58:53.957995 PDT | [0] 
Epoch -729 finished ---------------------------------- --------------- epoch -729 replay_buffer/size 999996 trainer/num train calls 272000 trainer/Policy Loss -2.15135 trainer/Log Pis Mean 2.2062 trainer/Log Pis Std 2.54336 trainer/Log Pis Max 9.19597 trainer/Log Pis Min -4.5296 trainer/policy/mean Mean 0.146165 trainer/policy/mean Std 0.609105 trainer/policy/mean Max 0.999438 trainer/policy/mean Min -0.998064 trainer/policy/normal/std Mean 0.382665 trainer/policy/normal/std Std 0.191722 trainer/policy/normal/std Max 0.967337 trainer/policy/normal/std Min 0.070657 trainer/policy/normal/log_std Mean -1.11797 trainer/policy/normal/log_std Std 0.602622 trainer/policy/normal/log_std Max -0.0332078 trainer/policy/normal/log_std Min -2.64992 eval/num steps total 179500 eval/num paths total 344 eval/path length Mean 559 eval/path length Std 0 eval/path length Max 559 eval/path length Min 559 eval/Rewards Mean 3.16319 eval/Rewards Std 0.789499 eval/Rewards Max 4.75777 eval/Rewards Min 0.991287 eval/Returns Mean 1768.22 eval/Returns Std 0 eval/Returns Max 1768.22 eval/Returns Min 1768.22 eval/Actions Mean 0.150426 eval/Actions Std 0.590506 eval/Actions Max 0.997975 eval/Actions Min -0.998277 eval/Num Paths 1 eval/Average Returns 1768.22 eval/normalized_score 54.9533 time/evaluation sampling (s) 0.901529 time/logging (s) 0.00309854 time/sampling batch (s) 0.270108 time/saving (s) 0.0036243 time/training (s) 4.79247 time/epoch (s) 5.97083 time/total (s) 31704.8 Epoch -729 ---------------------------------- --------------- 2022-05-10 21:58:59.743526 PDT | [0] Epoch -728 finished ---------------------------------- --------------- epoch -728 replay_buffer/size 999996 trainer/num train calls 273000 trainer/Policy Loss -2.30933 trainer/Log Pis Mean 2.0564 trainer/Log Pis Std 2.87195 trainer/Log Pis Max 17.1961 trainer/Log Pis Min -4.68189 trainer/policy/mean Mean 0.132972 trainer/policy/mean Std 0.619085 trainer/policy/mean Max 0.99762 trainer/policy/mean Min -0.999671 
trainer/policy/normal/std Mean 0.389543 trainer/policy/normal/std Std 0.18464 trainer/policy/normal/std Max 1.0027 trainer/policy/normal/std Min 0.06468 trainer/policy/normal/log_std Mean -1.0858 trainer/policy/normal/log_std Std 0.575845 trainer/policy/normal/log_std Max 0.00269419 trainer/policy/normal/log_std Min -2.7383 eval/num steps total 180340 eval/num paths total 345 eval/path length Mean 840 eval/path length Std 0 eval/path length Max 840 eval/path length Min 840 eval/Rewards Mean 3.24099 eval/Rewards Std 0.685005 eval/Rewards Max 5.01396 eval/Rewards Min 0.989785 eval/Returns Mean 2722.43 eval/Returns Std 0 eval/Returns Max 2722.43 eval/Returns Min 2722.43 eval/Actions Mean 0.161083 eval/Actions Std 0.600683 eval/Actions Max 0.998027 eval/Actions Min -0.998494 eval/Num Paths 1 eval/Average Returns 2722.43 eval/normalized_score 84.2724 time/evaluation sampling (s) 0.905522 time/logging (s) 0.00336035 time/sampling batch (s) 0.273927 time/saving (s) 0.00330765 time/training (s) 4.57742 time/epoch (s) 5.76354 time/total (s) 31710.6 Epoch -728 ---------------------------------- --------------- 2022-05-10 21:59:05.514793 PDT | [0] Epoch -727 finished ---------------------------------- --------------- epoch -727 replay_buffer/size 999996 trainer/num train calls 274000 trainer/Policy Loss -2.01191 trainer/Log Pis Mean 2.13638 trainer/Log Pis Std 2.53831 trainer/Log Pis Max 9.72054 trainer/Log Pis Min -4.5073 trainer/policy/mean Mean 0.140973 trainer/policy/mean Std 0.602558 trainer/policy/mean Max 0.996556 trainer/policy/mean Min -0.997846 trainer/policy/normal/std Mean 0.380241 trainer/policy/normal/std Std 0.187554 trainer/policy/normal/std Max 0.967859 trainer/policy/normal/std Min 0.0713265 trainer/policy/normal/log_std Mean -1.12028 trainer/policy/normal/log_std Std 0.594623 trainer/policy/normal/log_std Max -0.0326693 trainer/policy/normal/log_std Min -2.64049 eval/num steps total 180805 eval/num paths total 346 eval/path length Mean 465 eval/path length 
Std 0 eval/path length Max 465 eval/path length Min 465 eval/Rewards Mean 3.16618 eval/Rewards Std 0.898362 eval/Rewards Max 4.77465 eval/Rewards Min 0.988891 eval/Returns Mean 1472.27 eval/Returns Std 0 eval/Returns Max 1472.27 eval/Returns Min 1472.27 eval/Actions Mean 0.154322 eval/Actions Std 0.561438 eval/Actions Max 0.998055 eval/Actions Min -0.998417 eval/Num Paths 1 eval/Average Returns 1472.27 eval/normalized_score 45.8599 time/evaluation sampling (s) 0.91796 time/logging (s) 0.00225171 time/sampling batch (s) 0.2712 time/saving (s) 0.00303545 time/training (s) 4.55425 time/epoch (s) 5.74869 time/total (s) 31716.3 Epoch -727 ---------------------------------- --------------- 2022-05-10 21:59:11.594532 PDT | [0] Epoch -726 finished ---------------------------------- --------------- epoch -726 replay_buffer/size 999996 trainer/num train calls 275000 trainer/Policy Loss -2.2132 trainer/Log Pis Mean 2.2972 trainer/Log Pis Std 2.63766 trainer/Log Pis Max 9.58224 trainer/Log Pis Min -7.09638 trainer/policy/mean Mean 0.151668 trainer/policy/mean Std 0.613488 trainer/policy/mean Max 0.998104 trainer/policy/mean Min -0.997469 trainer/policy/normal/std Mean 0.386017 trainer/policy/normal/std Std 0.191027 trainer/policy/normal/std Max 0.982105 trainer/policy/normal/std Min 0.0693943 trainer/policy/normal/log_std Mean -1.10949 trainer/policy/normal/log_std Std 0.605483 trainer/policy/normal/log_std Max -0.0180571 trainer/policy/normal/log_std Min -2.66795 eval/num steps total 181308 eval/num paths total 347 eval/path length Mean 503 eval/path length Std 0 eval/path length Max 503 eval/path length Min 503 eval/Rewards Mean 3.1 eval/Rewards Std 0.780823 eval/Rewards Max 4.80795 eval/Rewards Min 0.980951 eval/Returns Mean 1559.3 eval/Returns Std 0 eval/Returns Max 1559.3 eval/Returns Min 1559.3 eval/Actions Mean 0.16418 eval/Actions Std 0.591878 eval/Actions Max 0.998271 eval/Actions Min -0.999183 eval/Num Paths 1 eval/Average Returns 1559.3 eval/normalized_score 48.5339 
time/evaluation sampling (s) 0.924176 time/logging (s) 0.00265387 time/sampling batch (s) 0.271892 time/saving (s) 0.00319589 time/training (s) 4.85718 time/epoch (s) 6.05909 time/total (s) 31722.4 Epoch -726 ---------------------------------- --------------- 2022-05-10 21:59:17.887310 PDT | [0] Epoch -725 finished ---------------------------------- --------------- epoch -725 replay_buffer/size 999996 trainer/num train calls 276000 trainer/Policy Loss -2.42278 trainer/Log Pis Mean 2.27938 trainer/Log Pis Std 2.71563 trainer/Log Pis Max 11.5217 trainer/Log Pis Min -5.67719 trainer/policy/mean Mean 0.161081 trainer/policy/mean Std 0.624634 trainer/policy/mean Max 0.997452 trainer/policy/mean Min -0.998798 trainer/policy/normal/std Mean 0.381096 trainer/policy/normal/std Std 0.187019 trainer/policy/normal/std Max 0.993714 trainer/policy/normal/std Min 0.0707111 trainer/policy/normal/log_std Mean -1.11684 trainer/policy/normal/log_std Std 0.592605 trainer/policy/normal/log_std Max -0.00630569 trainer/policy/normal/log_std Min -2.64915 eval/num steps total 181885 eval/num paths total 348 eval/path length Mean 577 eval/path length Std 0 eval/path length Max 577 eval/path length Min 577 eval/Rewards Mean 3.12331 eval/Rewards Std 0.723452 eval/Rewards Max 4.77997 eval/Rewards Min 0.98979 eval/Returns Mean 1802.15 eval/Returns Std 0 eval/Returns Max 1802.15 eval/Returns Min 1802.15 eval/Actions Mean 0.151002 eval/Actions Std 0.598332 eval/Actions Max 0.998147 eval/Actions Min -0.99793 eval/Num Paths 1 eval/Average Returns 1802.15 eval/normalized_score 55.9957 time/evaluation sampling (s) 0.911643 time/logging (s) 0.00313495 time/sampling batch (s) 0.302674 time/saving (s) 0.00379682 time/training (s) 5.05082 time/epoch (s) 6.27207 time/total (s) 31728.7 Epoch -725 ---------------------------------- --------------- 2022-05-10 21:59:24.430589 PDT | [0] Epoch -724 finished ---------------------------------- --------------- epoch -724 replay_buffer/size 999996 trainer/num train 
calls 277000 trainer/Policy Loss -2.23308 trainer/Log Pis Mean 2.06054 trainer/Log Pis Std 2.54828 trainer/Log Pis Max 9.73048 trainer/Log Pis Min -6.32113 trainer/policy/mean Mean 0.121245 trainer/policy/mean Std 0.622663 trainer/policy/mean Max 0.996887 trainer/policy/mean Min -0.997792 trainer/policy/normal/std Mean 0.393368 trainer/policy/normal/std Std 0.187416 trainer/policy/normal/std Max 0.997294 trainer/policy/normal/std Min 0.074747 trainer/policy/normal/log_std Mean -1.07698 trainer/policy/normal/log_std Std 0.577244 trainer/policy/normal/log_std Max -0.0027093 trainer/policy/normal/log_std Min -2.59365 eval/num steps total 182843 eval/num paths total 350 eval/path length Mean 479 eval/path length Std 21 eval/path length Max 500 eval/path length Min 458 eval/Rewards Mean 3.18171 eval/Rewards Std 0.813914 eval/Rewards Max 5.2267 eval/Rewards Min 0.982416 eval/Returns Mean 1524.04 eval/Returns Std 46.2904 eval/Returns Max 1570.33 eval/Returns Min 1477.75 eval/Actions Mean 0.140486 eval/Actions Std 0.585861 eval/Actions Max 0.997871 eval/Actions Min -0.999338 eval/Num Paths 2 eval/Average Returns 1524.04 eval/normalized_score 47.4505 time/evaluation sampling (s) 1.04163 time/logging (s) 0.00425906 time/sampling batch (s) 0.310123 time/saving (s) 0.00395075 time/training (s) 5.16224 time/epoch (s) 6.52221 time/total (s) 31735.2 Epoch -724 ---------------------------------- --------------- 2022-05-10 21:59:31.018070 PDT | [0] Epoch -723 finished ---------------------------------- --------------- epoch -723 replay_buffer/size 999996 trainer/num train calls 278000 trainer/Policy Loss -2.22922 trainer/Log Pis Mean 2.17766 trainer/Log Pis Std 2.69543 trainer/Log Pis Max 10.5189 trainer/Log Pis Min -6.85201 trainer/policy/mean Mean 0.124446 trainer/policy/mean Std 0.623562 trainer/policy/mean Max 0.997233 trainer/policy/mean Min -0.997511 trainer/policy/normal/std Mean 0.391744 trainer/policy/normal/std Std 0.186623 trainer/policy/normal/std Max 0.971296 
trainer/policy/normal/std Min 0.0740012 trainer/policy/normal/log_std Mean -1.07941 trainer/policy/normal/log_std Std 0.572802 trainer/policy/normal/log_std Max -0.0291244 trainer/policy/normal/log_std Min -2.60367 eval/num steps total 183559 eval/num paths total 352 eval/path length Mean 358 eval/path length Std 182 eval/path length Max 540 eval/path length Min 176 eval/Rewards Mean 3.02411 eval/Rewards Std 0.910965 eval/Rewards Max 5.08184 eval/Rewards Min 0.98285 eval/Returns Mean 1082.63 eval/Returns Std 654.525 eval/Returns Max 1737.16 eval/Returns Min 428.108 eval/Actions Mean 0.130811 eval/Actions Std 0.55756 eval/Actions Max 0.998119 eval/Actions Min -0.998766 eval/Num Paths 2 eval/Average Returns 1082.63 eval/normalized_score 33.8879 time/evaluation sampling (s) 1.04873 time/logging (s) 0.00324417 time/sampling batch (s) 0.310814 time/saving (s) 0.00333413 time/training (s) 5.19795 time/epoch (s) 6.56407 time/total (s) 31741.8 Epoch -723 ---------------------------------- --------------- 2022-05-10 21:59:36.494991 PDT | [0] Epoch -722 finished ---------------------------------- --------------- epoch -722 replay_buffer/size 999996 trainer/num train calls 279000 trainer/Policy Loss -2.34653 trainer/Log Pis Mean 2.21548 trainer/Log Pis Std 2.66198 trainer/Log Pis Max 12.4975 trainer/Log Pis Min -4.80877 trainer/policy/mean Mean 0.125505 trainer/policy/mean Std 0.616303 trainer/policy/mean Max 0.997947 trainer/policy/mean Min -0.997526 trainer/policy/normal/std Mean 0.382991 trainer/policy/normal/std Std 0.188103 trainer/policy/normal/std Max 1.02279 trainer/policy/normal/std Min 0.0736675 trainer/policy/normal/log_std Mean -1.10887 trainer/policy/normal/log_std Std 0.584064 trainer/policy/normal/log_std Max 0.0225353 trainer/policy/normal/log_std Min -2.60819 eval/num steps total 184125 eval/num paths total 353 eval/path length Mean 566 eval/path length Std 0 eval/path length Max 566 eval/path length Min 566 eval/Rewards Mean 3.21599 eval/Rewards Std 0.828022 
eval/Rewards Max 4.78035 eval/Rewards Min 0.983665 eval/Returns Mean 1820.25 eval/Returns Std 0 eval/Returns Max 1820.25 eval/Returns Min 1820.25 eval/Actions Mean 0.156823 eval/Actions Std 0.57837 eval/Actions Max 0.996806 eval/Actions Min -0.99845 eval/Num Paths 1 eval/Average Returns 1820.25 eval/normalized_score 56.5519 time/evaluation sampling (s) 0.93178 time/logging (s) 0.00264764 time/sampling batch (s) 0.270946 time/saving (s) 0.00341577 time/training (s) 4.24658 time/epoch (s) 5.45537 time/total (s) 31747.2 Epoch -722 ---------------------------------- --------------- 2022-05-10 21:59:42.000640 PDT | [0] Epoch -721 finished ---------------------------------- --------------- epoch -721 replay_buffer/size 999996 trainer/num train calls 280000 trainer/Policy Loss -1.93367 trainer/Log Pis Mean 2.05595 trainer/Log Pis Std 2.55267 trainer/Log Pis Max 16.3601 trainer/Log Pis Min -5.85376 trainer/policy/mean Mean 0.13651 trainer/policy/mean Std 0.603629 trainer/policy/mean Max 0.99749 trainer/policy/mean Min -0.999777 trainer/policy/normal/std Mean 0.385719 trainer/policy/normal/std Std 0.18092 trainer/policy/normal/std Max 0.971781 trainer/policy/normal/std Min 0.0746083 trainer/policy/normal/log_std Mean -1.09241 trainer/policy/normal/log_std Std 0.568576 trainer/policy/normal/log_std Max -0.0286251 trainer/policy/normal/log_std Min -2.5955 eval/num steps total 185046 eval/num paths total 355 eval/path length Mean 460.5 eval/path length Std 1.5 eval/path length Max 462 eval/path length Min 459 eval/Rewards Mean 3.20569 eval/Rewards Std 0.872869 eval/Rewards Max 5.23321 eval/Rewards Min 0.980752 eval/Returns Mean 1476.22 eval/Returns Std 5.44566 eval/Returns Max 1481.67 eval/Returns Min 1470.77 eval/Actions Mean 0.137179 eval/Actions Std 0.576909 eval/Actions Max 0.99755 eval/Actions Min -0.99834 eval/Num Paths 2 eval/Average Returns 1476.22 eval/normalized_score 45.9812 time/evaluation sampling (s) 0.945746 time/logging (s) 0.00405721 time/sampling batch (s) 
0.271052 time/saving (s) 0.00381528 time/training (s) 4.26091 time/epoch (s) 5.48559 time/total (s) 31752.7 Epoch -721 ---------------------------------- --------------- 2022-05-10 21:59:47.387519 PDT | [0] Epoch -720 finished ---------------------------------- --------------- epoch -720 replay_buffer/size 999996 trainer/num train calls 281000 trainer/Policy Loss -2.13562 trainer/Log Pis Mean 2.03125 trainer/Log Pis Std 2.7089 trainer/Log Pis Max 10.9204 trainer/Log Pis Min -6.55743 trainer/policy/mean Mean 0.126644 trainer/policy/mean Std 0.621032 trainer/policy/mean Max 0.996058 trainer/policy/mean Min -0.99664 trainer/policy/normal/std Mean 0.385594 trainer/policy/normal/std Std 0.181567 trainer/policy/normal/std Max 0.968347 trainer/policy/normal/std Min 0.0753925 trainer/policy/normal/log_std Mean -1.09212 trainer/policy/normal/log_std Std 0.565735 trainer/policy/normal/log_std Max -0.0321647 trainer/policy/normal/log_std Min -2.58505 eval/num steps total 185534 eval/num paths total 356 eval/path length Mean 488 eval/path length Std 0 eval/path length Max 488 eval/path length Min 488 eval/Rewards Mean 3.10252 eval/Rewards Std 0.835359 eval/Rewards Max 4.95181 eval/Rewards Min 0.989448 eval/Returns Mean 1514.03 eval/Returns Std 0 eval/Returns Max 1514.03 eval/Returns Min 1514.03 eval/Actions Mean 0.142983 eval/Actions Std 0.566821 eval/Actions Max 0.997756 eval/Actions Min -0.9985 eval/Num Paths 1 eval/Average Returns 1514.03 eval/normalized_score 47.143 time/evaluation sampling (s) 0.883019 time/logging (s) 0.00238126 time/sampling batch (s) 0.26587 time/saving (s) 0.00315538 time/training (s) 4.20899 time/epoch (s) 5.36341 time/total (s) 31758.1 Epoch -720 ---------------------------------- --------------- 2022-05-10 21:59:52.882927 PDT | [0] Epoch -719 finished ---------------------------------- --------------- epoch -719 replay_buffer/size 999996 trainer/num train calls 282000 trainer/Policy Loss -1.96764 trainer/Log Pis Mean 1.95955 trainer/Log Pis Std 
2.55257 trainer/Log Pis Max 10.1557 trainer/Log Pis Min -4.94773 trainer/policy/mean Mean 0.139108 trainer/policy/mean Std 0.607236 trainer/policy/mean Max 0.999187 trainer/policy/mean Min -0.997736 trainer/policy/normal/std Mean 0.392662 trainer/policy/normal/std Std 0.18328 trainer/policy/normal/std Max 0.970968 trainer/policy/normal/std Min 0.0778559 trainer/policy/normal/log_std Mean -1.07215 trainer/policy/normal/log_std Std 0.563087 trainer/policy/normal/log_std Max -0.0294619 trainer/policy/normal/log_std Min -2.55289 eval/num steps total 186514 eval/num paths total 358 eval/path length Mean 490 eval/path length Std 2 eval/path length Max 492 eval/path length Min 488 eval/Rewards Mean 3.10103 eval/Rewards Std 0.792107 eval/Rewards Max 4.83846 eval/Rewards Min 0.986007 eval/Returns Mean 1519.51 eval/Returns Std 2.97798 eval/Returns Max 1522.48 eval/Returns Min 1516.53 eval/Actions Mean 0.15595 eval/Actions Std 0.585249 eval/Actions Max 0.998648 eval/Actions Min -0.99838 eval/Num Paths 2 eval/Average Returns 1519.51 eval/normalized_score 47.3112 time/evaluation sampling (s) 0.897907 time/logging (s) 0.00361809 time/sampling batch (s) 0.274842 time/saving (s) 0.00319542 time/training (s) 4.29563 time/epoch (s) 5.4752 time/total (s) 31763.5 Epoch -719 ---------------------------------- --------------- 2022-05-10 21:59:58.353476 PDT | [0] Epoch -718 finished ---------------------------------- --------------- epoch -718 replay_buffer/size 999996 trainer/num train calls 283000 trainer/Policy Loss -2.34732 trainer/Log Pis Mean 2.44271 trainer/Log Pis Std 2.56562 trainer/Log Pis Max 13.072 trainer/Log Pis Min -3.21882 trainer/policy/mean Mean 0.154552 trainer/policy/mean Std 0.626043 trainer/policy/mean Max 0.996864 trainer/policy/mean Min -0.998541 trainer/policy/normal/std Mean 0.387904 trainer/policy/normal/std Std 0.185326 trainer/policy/normal/std Max 0.921428 trainer/policy/normal/std Min 0.0684401 trainer/policy/normal/log_std Mean -1.09379 
trainer/policy/normal/log_std Std 0.584074 trainer/policy/normal/log_std Max -0.0818303 trainer/policy/normal/log_std Min -2.6818 eval/num steps total 187013 eval/num paths total 359 eval/path length Mean 499 eval/path length Std 0 eval/path length Max 499 eval/path length Min 499 eval/Rewards Mean 3.07643 eval/Rewards Std 0.782761 eval/Rewards Max 4.7517 eval/Rewards Min 0.981322 eval/Returns Mean 1535.14 eval/Returns Std 0 eval/Returns Max 1535.14 eval/Returns Min 1535.14 eval/Actions Mean 0.158373 eval/Actions Std 0.583296 eval/Actions Max 0.99829 eval/Actions Min -0.996488 eval/Num Paths 1 eval/Average Returns 1535.14 eval/normalized_score 47.7916 time/evaluation sampling (s) 0.901349 time/logging (s) 0.0022801 time/sampling batch (s) 0.273493 time/saving (s) 0.00300661 time/training (s) 4.26769 time/epoch (s) 5.44782 time/total (s) 31769 Epoch -718 ---------------------------------- --------------- 2022-05-10 22:00:03.825116 PDT | [0] Epoch -717 finished ---------------------------------- --------------- epoch -717 replay_buffer/size 999996 trainer/num train calls 284000 trainer/Policy Loss -2.30345 trainer/Log Pis Mean 2.41811 trainer/Log Pis Std 2.68236 trainer/Log Pis Max 9.41856 trainer/Log Pis Min -6.71661 trainer/policy/mean Mean 0.125484 trainer/policy/mean Std 0.62532 trainer/policy/mean Max 0.997392 trainer/policy/mean Min -0.998602 trainer/policy/normal/std Mean 0.380456 trainer/policy/normal/std Std 0.184782 trainer/policy/normal/std Max 0.980883 trainer/policy/normal/std Min 0.0673339 trainer/policy/normal/log_std Mean -1.12087 trainer/policy/normal/log_std Std 0.603201 trainer/policy/normal/log_std Max -0.0193018 trainer/policy/normal/log_std Min -2.69809 eval/num steps total 187837 eval/num paths total 361 eval/path length Mean 412 eval/path length Std 3 eval/path length Max 415 eval/path length Min 409 eval/Rewards Mean 3.0809 eval/Rewards Std 0.785701 eval/Rewards Max 4.77776 eval/Rewards Min 0.981332 eval/Returns Mean 1269.33 eval/Returns Std 
16.6767 eval/Returns Max 1286.01 eval/Returns Min 1252.66 eval/Actions Mean 0.150221 eval/Actions Std 0.593483 eval/Actions Max 0.997316 eval/Actions Min -0.999447 eval/Num Paths 2 eval/Average Returns 1269.33 eval/normalized_score 39.6244 time/evaluation sampling (s) 0.895785 time/logging (s) 0.00316614 time/sampling batch (s) 0.274197 time/saving (s) 0.00299652 time/training (s) 4.2751 time/epoch (s) 5.45124 time/total (s) 31774.5 Epoch -717 ---------------------------------- --------------- 2022-05-10 22:00:09.265904 PDT | [0] Epoch -716 finished ---------------------------------- --------------- epoch -716 replay_buffer/size 999996 trainer/num train calls 285000 trainer/Policy Loss -2.13362 trainer/Log Pis Mean 2.23966 trainer/Log Pis Std 2.65919 trainer/Log Pis Max 8.96804 trainer/Log Pis Min -5.79727 trainer/policy/mean Mean 0.162346 trainer/policy/mean Std 0.612424 trainer/policy/mean Max 0.997638 trainer/policy/mean Min -0.997771 trainer/policy/normal/std Mean 0.385313 trainer/policy/normal/std Std 0.181528 trainer/policy/normal/std Max 1.03508 trainer/policy/normal/std Min 0.0746863 trainer/policy/normal/log_std Mean -1.09292 trainer/policy/normal/log_std Std 0.566308 trainer/policy/normal/log_std Max 0.0344786 trainer/policy/normal/log_std Min -2.59446 eval/num steps total 188739 eval/num paths total 363 eval/path length Mean 451 eval/path length Std 27 eval/path length Max 478 eval/path length Min 424 eval/Rewards Mean 3.17025 eval/Rewards Std 0.855891 eval/Rewards Max 5.3492 eval/Rewards Min 0.989139 eval/Returns Mean 1429.78 eval/Returns Std 106.34 eval/Returns Max 1536.12 eval/Returns Min 1323.44 eval/Actions Mean 0.156255 eval/Actions Std 0.59818 eval/Actions Max 0.998147 eval/Actions Min -0.999363 eval/Num Paths 2 eval/Average Returns 1429.78 eval/normalized_score 44.5544 time/evaluation sampling (s) 0.895757 time/logging (s) 0.00346482 time/sampling batch (s) 0.271882 time/saving (s) 0.00302272 time/training (s) 4.24503 time/epoch (s) 5.41916 
time/total (s) 31779.9 Epoch -716 ---------------------------------- --------------- 2022-05-10 22:00:14.733242 PDT | [0] Epoch -715 finished ---------------------------------- --------------- epoch -715 replay_buffer/size 999996 trainer/num train calls 286000 trainer/Policy Loss -2.1011 trainer/Log Pis Mean 2.20401 trainer/Log Pis Std 2.49218 trainer/Log Pis Max 10.0457 trainer/Log Pis Min -4.56509 trainer/policy/mean Mean 0.129446 trainer/policy/mean Std 0.618634 trainer/policy/mean Max 0.996392 trainer/policy/mean Min -0.998047 trainer/policy/normal/std Mean 0.387606 trainer/policy/normal/std Std 0.187205 trainer/policy/normal/std Max 0.972744 trainer/policy/normal/std Min 0.0684752 trainer/policy/normal/log_std Mean -1.09547 trainer/policy/normal/log_std Std 0.585019 trainer/policy/normal/log_std Max -0.0276347 trainer/policy/normal/log_std Min -2.68128 eval/num steps total 189739 eval/num paths total 365 eval/path length Mean 500 eval/path length Std 1 eval/path length Max 501 eval/path length Min 499 eval/Rewards Mean 3.10555 eval/Rewards Std 0.764334 eval/Rewards Max 4.76545 eval/Rewards Min 0.985393 eval/Returns Mean 1552.78 eval/Returns Std 20.0543 eval/Returns Max 1572.83 eval/Returns Min 1532.72 eval/Actions Mean 0.153004 eval/Actions Std 0.591186 eval/Actions Max 0.997011 eval/Actions Min -0.997895 eval/Num Paths 2 eval/Average Returns 1552.78 eval/normalized_score 48.3335 time/evaluation sampling (s) 0.908068 time/logging (s) 0.0035996 time/sampling batch (s) 0.273216 time/saving (s) 0.00298935 time/training (s) 4.25836 time/epoch (s) 5.44624 time/total (s) 31785.3 Epoch -715 ---------------------------------- --------------- 2022-05-10 22:00:20.216415 PDT | [0] Epoch -714 finished ---------------------------------- --------------- epoch -714 replay_buffer/size 999996 trainer/num train calls 287000 trainer/Policy Loss -2.01373 trainer/Log Pis Mean 2.02235 trainer/Log Pis Std 2.492 trainer/Log Pis Max 9.64872 trainer/Log Pis Min -5.78483 
trainer/policy/mean Mean 0.163319 trainer/policy/mean Std 0.599424 trainer/policy/mean Max 0.997945 trainer/policy/mean Min -0.99801 trainer/policy/normal/std Mean 0.375598 trainer/policy/normal/std Std 0.183433 trainer/policy/normal/std Max 0.990251 trainer/policy/normal/std Min 0.0684161 trainer/policy/normal/log_std Mean -1.12895 trainer/policy/normal/log_std Std 0.587066 trainer/policy/normal/log_std Max -0.00979689 trainer/policy/normal/log_std Min -2.68215 eval/num steps total 190388 eval/num paths total 366 eval/path length Mean 649 eval/path length Std 0 eval/path length Max 649 eval/path length Min 649 eval/Rewards Mean 3.1953 eval/Rewards Std 0.661877 eval/Rewards Max 4.46037 eval/Rewards Min 0.984511 eval/Returns Mean 2073.75 eval/Returns Std 0 eval/Returns Max 2073.75 eval/Returns Min 2073.75 eval/Actions Mean 0.15878 eval/Actions Std 0.611539 eval/Actions Max 0.996459 eval/Actions Min -0.997602 eval/Num Paths 1 eval/Average Returns 2073.75 eval/normalized_score 64.3409 time/evaluation sampling (s) 0.904622 time/logging (s) 0.00325087 time/sampling batch (s) 0.274338 time/saving (s) 0.00353713 time/training (s) 4.27547 time/epoch (s) 5.46121 time/total (s) 31790.8 Epoch -714 ---------------------------------- --------------- 2022-05-10 22:00:25.712728 PDT | [0] Epoch -713 finished ---------------------------------- ---------------- epoch -713 replay_buffer/size 999996 trainer/num train calls 288000 trainer/Policy Loss -2.15471 trainer/Log Pis Mean 2.08019 trainer/Log Pis Std 2.6975 trainer/Log Pis Max 9.12166 trainer/Log Pis Min -7.28054 trainer/policy/mean Mean 0.115895 trainer/policy/mean Std 0.623874 trainer/policy/mean Max 0.996101 trainer/policy/mean Min -0.998814 trainer/policy/normal/std Mean 0.388249 trainer/policy/normal/std Std 0.187534 trainer/policy/normal/std Max 0.999862 trainer/policy/normal/std Min 0.072691 trainer/policy/normal/log_std Mean -1.09245 trainer/policy/normal/log_std Std 0.580987 trainer/policy/normal/log_std Max 
-0.000138054 trainer/policy/normal/log_std Min -2.62154 eval/num steps total 190920 eval/num paths total 367 eval/path length Mean 532 eval/path length Std 0 eval/path length Max 532 eval/path length Min 532 eval/Rewards Mean 3.12635 eval/Rewards Std 0.848385 eval/Rewards Max 5.22025 eval/Rewards Min 0.986825 eval/Returns Mean 1663.22 eval/Returns Std 0 eval/Returns Max 1663.22 eval/Returns Min 1663.22 eval/Actions Mean 0.138858 eval/Actions Std 0.568368 eval/Actions Max 0.996816 eval/Actions Min -0.998433 eval/Num Paths 1 eval/Average Returns 1663.22 eval/normalized_score 51.727 time/evaluation sampling (s) 0.925938 time/logging (s) 0.00257618 time/sampling batch (s) 0.27315 time/saving (s) 0.00326052 time/training (s) 4.26862 time/epoch (s) 5.47355 time/total (s) 31796.3 Epoch -713 ---------------------------------- ---------------- 2022-05-10 22:00:31.203452 PDT | [0] Epoch -712 finished ---------------------------------- --------------- epoch -712 replay_buffer/size 999996 trainer/num train calls 289000 trainer/Policy Loss -2.29932 trainer/Log Pis Mean 2.35803 trainer/Log Pis Std 2.47489 trainer/Log Pis Max 11.6888 trainer/Log Pis Min -3.55128 trainer/policy/mean Mean 0.156219 trainer/policy/mean Std 0.622152 trainer/policy/mean Max 0.999027 trainer/policy/mean Min -0.997856 trainer/policy/normal/std Mean 0.382378 trainer/policy/normal/std Std 0.182056 trainer/policy/normal/std Max 1.00867 trainer/policy/normal/std Min 0.0766348 trainer/policy/normal/log_std Mean -1.10286 trainer/policy/normal/log_std Std 0.570006 trainer/policy/normal/log_std Max 0.00863721 trainer/policy/normal/log_std Min -2.5687 eval/num steps total 191420 eval/num paths total 368 eval/path length Mean 500 eval/path length Std 0 eval/path length Max 500 eval/path length Min 500 eval/Rewards Mean 3.1275 eval/Rewards Std 0.747543 eval/Rewards Max 4.72847 eval/Rewards Min 0.989319 eval/Returns Mean 1563.75 eval/Returns Std 0 eval/Returns Max 1563.75 eval/Returns Min 1563.75 eval/Actions Mean 
0.152838 eval/Actions Std 0.594012 eval/Actions Max 0.998188 eval/Actions Min -0.996402 eval/Num Paths 1 eval/Average Returns 1563.75 eval/normalized_score 48.6707 time/evaluation sampling (s) 0.921833 time/logging (s) 0.00243315 time/sampling batch (s) 0.273112 time/saving (s) 0.00305471 time/training (s) 4.26866 time/epoch (s) 5.46909 time/total (s) 31801.7 Epoch -712 ---------------------------------- --------------- 2022-05-10 22:00:36.796623 PDT | [0] Epoch -711 finished ---------------------------------- --------------- epoch -711 replay_buffer/size 999996 trainer/num train calls 290000 trainer/Policy Loss -2.11597 trainer/Log Pis Mean 2.01878 trainer/Log Pis Std 2.53488 trainer/Log Pis Max 12.1241 trainer/Log Pis Min -5.47708 trainer/policy/mean Mean 0.135213 trainer/policy/mean Std 0.604114 trainer/policy/mean Max 0.997682 trainer/policy/mean Min -0.996698 trainer/policy/normal/std Mean 0.37664 trainer/policy/normal/std Std 0.177608 trainer/policy/normal/std Max 0.904661 trainer/policy/normal/std Min 0.0671093 trainer/policy/normal/log_std Mean -1.11717 trainer/policy/normal/log_std Std 0.571085 trainer/policy/normal/log_std Max -0.100195 trainer/policy/normal/log_std Min -2.70143 eval/num steps total 191916 eval/num paths total 369 eval/path length Mean 496 eval/path length Std 0 eval/path length Max 496 eval/path length Min 496 eval/Rewards Mean 3.08958 eval/Rewards Std 0.767999 eval/Rewards Max 4.65487 eval/Rewards Min 0.982412 eval/Returns Mean 1532.43 eval/Returns Std 0 eval/Returns Max 1532.43 eval/Returns Min 1532.43 eval/Actions Mean 0.152762 eval/Actions Std 0.582904 eval/Actions Max 0.998284 eval/Actions Min -0.996574 eval/Num Paths 1 eval/Average Returns 1532.43 eval/normalized_score 47.7084 time/evaluation sampling (s) 1.01908 time/logging (s) 0.00231749 time/sampling batch (s) 0.274468 time/saving (s) 0.00313179 time/training (s) 4.27264 time/epoch (s) 5.57163 time/total (s) 31807.3 Epoch -711 ---------------------------------- --------------- 
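The `eval/normalized_score` values in this log track `eval/Average Returns` through what looks like D4RL-style score normalization, which maps a raw episode return onto a 0-100 scale between a random-policy return and an expert return. A minimal sketch, assuming the hopper environment's published D4RL reference scores (the environment name is not stated anywhere in this log, so both constants below are assumptions):

```python
# Sketch of D4RL-style score normalization. Both reference scores are
# ASSUMPTIONS (hopper random/expert returns from the D4RL benchmark);
# the environment behind this log is not identified in the log itself.
RANDOM_SCORE = -20.272305  # assumed random-policy return
EXPERT_SCORE = 3234.3      # assumed expert return

def normalized_score(ret: float) -> float:
    """Map a raw episode return onto a 0-100 benchmark scale."""
    return 100.0 * (ret - RANDOM_SCORE) / (EXPERT_SCORE - RANDOM_SCORE)

# Epoch -722 above logs eval/Average Returns 1820.25
# alongside eval/normalized_score 56.5519:
print(round(normalized_score(1820.25), 4))  # close to the logged 56.5519
```

Under these assumed constants the formula reproduces the logged scores for the epochs above to four decimal places (e.g. epoch -714's return of 2073.75 maps to roughly the logged 64.3409), which is consistent with, though not proof of, this normalization being used.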
2022-05-10 22:00:42.282990 PDT | [0] Epoch -710 finished ---------------------------------- --------------- epoch -710 replay_buffer/size 999996 trainer/num train calls 291000 trainer/Policy Loss -2.17383 trainer/Log Pis Mean 2.09539 trainer/Log Pis Std 2.6031 trainer/Log Pis Max 9.59482 trainer/Log Pis Min -8.39682 trainer/policy/mean Mean 0.141433 trainer/policy/mean Std 0.601866 trainer/policy/mean Max 0.997543 trainer/policy/mean Min -0.997197 trainer/policy/normal/std Mean 0.383404 trainer/policy/normal/std Std 0.185165 trainer/policy/normal/std Max 0.93672 trainer/policy/normal/std Min 0.0721926 trainer/policy/normal/log_std Mean -1.10723 trainer/policy/normal/log_std Std 0.58743 trainer/policy/normal/log_std Max -0.0653707 trainer/policy/normal/log_std Min -2.62842 eval/num steps total 192430 eval/num paths total 370 eval/path length Mean 514 eval/path length Std 0 eval/path length Max 514 eval/path length Min 514 eval/Rewards Mean 3.10967 eval/Rewards Std 0.851488 eval/Rewards Max 5.38023 eval/Rewards Min 0.986786 eval/Returns Mean 1598.37 eval/Returns Std 0 eval/Returns Max 1598.37 eval/Returns Min 1598.37 eval/Actions Mean 0.153531 eval/Actions Std 0.577265 eval/Actions Max 0.995665 eval/Actions Min -0.997042 eval/Num Paths 1 eval/Average Returns 1598.37 eval/normalized_score 49.7344 time/evaluation sampling (s) 0.906213 time/logging (s) 0.00242454 time/sampling batch (s) 0.273547 time/saving (s) 0.00301352 time/training (s) 4.27968 time/epoch (s) 5.46488 time/total (s) 31812.8 Epoch -710 ---------------------------------- --------------- 2022-05-10 22:00:47.954506 PDT | [0] Epoch -709 finished ---------------------------------- --------------- epoch -709 replay_buffer/size 999996 trainer/num train calls 292000 trainer/Policy Loss -2.31421 trainer/Log Pis Mean 2.2614 trainer/Log Pis Std 2.57629 trainer/Log Pis Max 8.95058 trainer/Log Pis Min -6.46699 trainer/policy/mean Mean 0.153385 trainer/policy/mean Std 0.621041 trainer/policy/mean Max 0.996674 
trainer/policy/mean Min -0.996579 trainer/policy/normal/std Mean 0.384621 trainer/policy/normal/std Std 0.178806 trainer/policy/normal/std Max 0.990303 trainer/policy/normal/std Min 0.0761593 trainer/policy/normal/log_std Mean -1.09231 trainer/policy/normal/log_std Std 0.561986 trainer/policy/normal/log_std Max -0.00974476 trainer/policy/normal/log_std Min -2.57493 eval/num steps total 193018 eval/num paths total 371 eval/path length Mean 588 eval/path length Std 0 eval/path length Max 588 eval/path length Min 588 eval/Rewards Mean 3.21055 eval/Rewards Std 0.752778 eval/Rewards Max 4.97614 eval/Rewards Min 0.984173 eval/Returns Mean 1887.8 eval/Returns Std 0 eval/Returns Max 1887.8 eval/Returns Min 1887.8 eval/Actions Mean 0.171953 eval/Actions Std 0.597175 eval/Actions Max 0.998254 eval/Actions Min -0.998526 eval/Num Paths 1 eval/Average Returns 1887.8 eval/normalized_score 58.6275 time/evaluation sampling (s) 0.899379 time/logging (s) 0.00272624 time/sampling batch (s) 0.282517 time/saving (s) 0.0031974 time/training (s) 4.46278 time/epoch (s) 5.6506 time/total (s) 31818.4 Epoch -709 ---------------------------------- --------------- 2022-05-10 22:00:53.414725 PDT | [0] Epoch -708 finished ---------------------------------- --------------- epoch -708 replay_buffer/size 999996 trainer/num train calls 293000 trainer/Policy Loss -2.28477 trainer/Log Pis Mean 2.31634 trainer/Log Pis Std 2.67265 trainer/Log Pis Max 13.0211 trainer/Log Pis Min -4.74467 trainer/policy/mean Mean 0.128609 trainer/policy/mean Std 0.616998 trainer/policy/mean Max 0.997179 trainer/policy/mean Min -0.998439 trainer/policy/normal/std Mean 0.386377 trainer/policy/normal/std Std 0.183279 trainer/policy/normal/std Max 0.930764 trainer/policy/normal/std Min 0.0715903 trainer/policy/normal/log_std Mean -1.09479 trainer/policy/normal/log_std Std 0.577986 trainer/policy/normal/log_std Max -0.0717494 trainer/policy/normal/log_std Min -2.6368 eval/num steps total 193576 eval/num paths total 372 
eval/path length Mean 558 eval/path length Std 0 eval/path length Max 558 eval/path length Min 558 eval/Rewards Mean 3.20004 eval/Rewards Std 0.806527 eval/Rewards Max 4.77334 eval/Rewards Min 0.990581 eval/Returns Mean 1785.62 eval/Returns Std 0 eval/Returns Max 1785.62 eval/Returns Min 1785.62 eval/Actions Mean 0.148742 eval/Actions Std 0.595032 eval/Actions Max 0.998681 eval/Actions Min -0.998289 eval/Num Paths 1 eval/Average Returns 1785.62 eval/normalized_score 55.4879 time/evaluation sampling (s) 0.883167 time/logging (s) 0.0024998 time/sampling batch (s) 0.271407 time/saving (s) 0.00301942 time/training (s) 4.2786 time/epoch (s) 5.43869 time/total (s) 31823.9 Epoch -708 ---------------------------------- --------------- 2022-05-10 22:00:58.888076 PDT | [0] Epoch -707 finished ---------------------------------- --------------- epoch -707 replay_buffer/size 999996 trainer/num train calls 294000 trainer/Policy Loss -2.21067 trainer/Log Pis Mean 2.24628 trainer/Log Pis Std 2.64681 trainer/Log Pis Max 9.97401 trainer/Log Pis Min -6.15823 trainer/policy/mean Mean 0.116475 trainer/policy/mean Std 0.623341 trainer/policy/mean Max 0.997435 trainer/policy/mean Min -0.998557 trainer/policy/normal/std Mean 0.383315 trainer/policy/normal/std Std 0.184921 trainer/policy/normal/std Max 0.955217 trainer/policy/normal/std Min 0.0649963 trainer/policy/normal/log_std Mean -1.10963 trainer/policy/normal/log_std Std 0.593922 trainer/policy/normal/log_std Max -0.0458172 trainer/policy/normal/log_std Min -2.73343 eval/num steps total 194149 eval/num paths total 373 eval/path length Mean 573 eval/path length Std 0 eval/path length Max 573 eval/path length Min 573 eval/Rewards Mean 3.09564 eval/Rewards Std 0.751468 eval/Rewards Max 5.01199 eval/Rewards Min 0.980249 eval/Returns Mean 1773.8 eval/Returns Std 0 eval/Returns Max 1773.8 eval/Returns Min 1773.8 eval/Actions Mean 0.152171 eval/Actions Std 0.590182 eval/Actions Max 0.998103 eval/Actions Min -0.997778 eval/Num Paths 1 
eval/Average Returns 1773.8 eval/normalized_score 55.1246 time/evaluation sampling (s) 0.88465 time/logging (s) 0.00265762 time/sampling batch (s) 0.272784 time/saving (s) 0.0030574 time/training (s) 4.28924 time/epoch (s) 5.45239 time/total (s) 31829.3 Epoch -707 ---------------------------------- --------------- 2022-05-10 22:01:04.323965 PDT | [0] Epoch -706 finished ---------------------------------- --------------- epoch -706 replay_buffer/size 999996 trainer/num train calls 295000 trainer/Policy Loss -2.33364 trainer/Log Pis Mean 2.26313 trainer/Log Pis Std 2.57592 trainer/Log Pis Max 8.81868 trainer/Log Pis Min -7.34309 trainer/policy/mean Mean 0.133086 trainer/policy/mean Std 0.62229 trainer/policy/mean Max 0.9991 trainer/policy/mean Min -0.99843 trainer/policy/normal/std Mean 0.38033 trainer/policy/normal/std Std 0.186231 trainer/policy/normal/std Max 0.938348 trainer/policy/normal/std Min 0.0692021 trainer/policy/normal/log_std Mean -1.11876 trainer/policy/normal/log_std Std 0.591719 trainer/policy/normal/log_std Max -0.0636347 trainer/policy/normal/log_std Min -2.67072 eval/num steps total 194674 eval/num paths total 374 eval/path length Mean 525 eval/path length Std 0 eval/path length Max 525 eval/path length Min 525 eval/Rewards Mean 3.12496 eval/Rewards Std 0.814049 eval/Rewards Max 5.42203 eval/Rewards Min 0.987578 eval/Returns Mean 1640.6 eval/Returns Std 0 eval/Returns Max 1640.6 eval/Returns Min 1640.6 eval/Actions Mean 0.157944 eval/Actions Std 0.583425 eval/Actions Max 0.997807 eval/Actions Min -0.997492 eval/Num Paths 1 eval/Average Returns 1640.6 eval/normalized_score 51.0321 time/evaluation sampling (s) 0.89929 time/logging (s) 0.00239842 time/sampling batch (s) 0.270199 time/saving (s) 0.00301706 time/training (s) 4.23967 time/epoch (s) 5.41458 time/total (s) 31834.8 Epoch -706 ---------------------------------- --------------- 2022-05-10 22:01:09.749882 PDT | [0] Epoch -705 finished ---------------------------------- --------------- epoch 
-705 replay_buffer/size 999996 trainer/num train calls 296000 trainer/Policy Loss -2.22963 trainer/Log Pis Mean 2.27251 trainer/Log Pis Std 2.66518 trainer/Log Pis Max 13.4016 trainer/Log Pis Min -7.53517 trainer/policy/mean Mean 0.157931 trainer/policy/mean Std 0.62244 trainer/policy/mean Max 0.99696 trainer/policy/mean Min -0.998617 trainer/policy/normal/std Mean 0.390152 trainer/policy/normal/std Std 0.183051 trainer/policy/normal/std Max 0.950666 trainer/policy/normal/std Min 0.0706528 trainer/policy/normal/log_std Mean -1.08193 trainer/policy/normal/log_std Std 0.571072 trainer/policy/normal/log_std Max -0.0505929 trainer/policy/normal/log_std Min -2.64998 eval/num steps total 195246 eval/num paths total 375 eval/path length Mean 572 eval/path length Std 0 eval/path length Max 572 eval/path length Min 572 eval/Rewards Mean 3.11302 eval/Rewards Std 0.714755 eval/Rewards Max 4.52953 eval/Rewards Min 0.981992 eval/Returns Mean 1780.65 eval/Returns Std 0 eval/Returns Max 1780.65 eval/Returns Min 1780.65 eval/Actions Mean 0.155485 eval/Actions Std 0.601333 eval/Actions Max 0.998512 eval/Actions Min -0.997531 eval/Num Paths 1 eval/Average Returns 1780.65 eval/normalized_score 55.3351 time/evaluation sampling (s) 0.881773 time/logging (s) 0.00256299 time/sampling batch (s) 0.270581 time/saving (s) 0.00309414 time/training (s) 4.24741 time/epoch (s) 5.40542 time/total (s) 31840.2 Epoch -705 ---------------------------------- --------------- 2022-05-10 22:01:15.218871 PDT | [0] Epoch -704 finished ---------------------------------- --------------- epoch -704 replay_buffer/size 999996 trainer/num train calls 297000 trainer/Policy Loss -2.36766 trainer/Log Pis Mean 2.44076 trainer/Log Pis Std 2.62127 trainer/Log Pis Max 10.7416 trainer/Log Pis Min -4.64454 trainer/policy/mean Mean 0.153375 trainer/policy/mean Std 0.633031 trainer/policy/mean Max 0.998782 trainer/policy/mean Min -0.997112 trainer/policy/normal/std Mean 0.378371 trainer/policy/normal/std Std 0.180691 
trainer/policy/normal/std Max 0.87088 trainer/policy/normal/std Min 0.0728493 trainer/policy/normal/log_std Mean -1.11785 trainer/policy/normal/log_std Std 0.582177 trainer/policy/normal/log_std Max -0.138251 trainer/policy/normal/log_std Min -2.61936 eval/num steps total 195674 eval/num paths total 376 eval/path length Mean 428 eval/path length Std 0 eval/path length Max 428 eval/path length Min 428 eval/Rewards Mean 3.12375 eval/Rewards Std 0.895739 eval/Rewards Max 5.4532 eval/Rewards Min 0.989471 eval/Returns Mean 1336.96 eval/Returns Std 0 eval/Returns Max 1336.96 eval/Returns Min 1336.96 eval/Actions Mean 0.145637 eval/Actions Std 0.581075 eval/Actions Max 0.998198 eval/Actions Min -0.997026 eval/Num Paths 1 eval/Average Returns 1336.96 eval/normalized_score 41.7025 time/evaluation sampling (s) 0.883921 time/logging (s) 0.00213464 time/sampling batch (s) 0.27259 time/saving (s) 0.00302755 time/training (s) 4.28604 time/epoch (s) 5.44771 time/total (s) 31845.6 Epoch -704 ---------------------------------- --------------- 2022-05-10 22:01:20.702719 PDT | [0] Epoch -703 finished ---------------------------------- --------------- epoch -703 replay_buffer/size 999996 trainer/num train calls 298000 trainer/Policy Loss -2.4209 trainer/Log Pis Mean 2.28199 trainer/Log Pis Std 2.68188 trainer/Log Pis Max 11.9365 trainer/Log Pis Min -5.0366 trainer/policy/mean Mean 0.146648 trainer/policy/mean Std 0.625764 trainer/policy/mean Max 0.998244 trainer/policy/mean Min -0.998376 trainer/policy/normal/std Mean 0.392031 trainer/policy/normal/std Std 0.187413 trainer/policy/normal/std Max 1.00462 trainer/policy/normal/std Min 0.0697137 trainer/policy/normal/log_std Mean -1.08213 trainer/policy/normal/log_std Std 0.581955 trainer/policy/normal/log_std Max 0.00461205 trainer/policy/normal/log_std Min -2.66336 eval/num steps total 196308 eval/num paths total 377 eval/path length Mean 634 eval/path length Std 0 eval/path length Max 634 eval/path length Min 634 eval/Rewards Mean 
3.25479 eval/Rewards Std 0.719181 eval/Rewards Max 4.65511 eval/Rewards Min 0.986627 eval/Returns Mean 2063.53 eval/Returns Std 0 eval/Returns Max 2063.53 eval/Returns Min 2063.53 eval/Actions Mean 0.136157 eval/Actions Std 0.605441 eval/Actions Max 0.996468 eval/Actions Min -0.997907 eval/Num Paths 1 eval/Average Returns 2063.53 eval/normalized_score 64.0271 time/evaluation sampling (s) 0.916125 time/logging (s) 0.00286887 time/sampling batch (s) 0.270314 time/saving (s) 0.00322039 time/training (s) 4.27123 time/epoch (s) 5.46375 time/total (s) 31851.1 Epoch -703 ---------------------------------- --------------- 2022-05-10 22:01:26.143069 PDT | [0] Epoch -702 finished ---------------------------------- --------------- epoch -702 replay_buffer/size 999996 trainer/num train calls 299000 trainer/Policy Loss -2.22254 trainer/Log Pis Mean 2.24442 trainer/Log Pis Std 2.59758 trainer/Log Pis Max 11.2595 trainer/Log Pis Min -5.02933 trainer/policy/mean Mean 0.144617 trainer/policy/mean Std 0.622407 trainer/policy/mean Max 0.998529 trainer/policy/mean Min -0.99709 trainer/policy/normal/std Mean 0.390756 trainer/policy/normal/std Std 0.182337 trainer/policy/normal/std Max 1.0158 trainer/policy/normal/std Min 0.0718189 trainer/policy/normal/log_std Mean -1.07781 trainer/policy/normal/log_std Std 0.565642 trainer/policy/normal/log_std Max 0.0156751 trainer/policy/normal/log_std Min -2.63361 eval/num steps total 196808 eval/num paths total 378 eval/path length Mean 500 eval/path length Std 0 eval/path length Max 500 eval/path length Min 500 eval/Rewards Mean 3.12373 eval/Rewards Std 0.842898 eval/Rewards Max 5.2914 eval/Rewards Min 0.981814 eval/Returns Mean 1561.87 eval/Returns Std 0 eval/Returns Max 1561.87 eval/Returns Min 1561.87 eval/Actions Mean 0.15836 eval/Actions Std 0.582771 eval/Actions Max 0.997526 eval/Actions Min -0.998241 eval/Num Paths 1 eval/Average Returns 1561.87 eval/normalized_score 48.6128 time/evaluation sampling (s) 0.926312 time/logging (s) 0.00250742 
time/sampling batch (s)  0.268719
time/saving (s)  0.00332337
time/training (s)  4.21798
time/epoch (s)  5.41884
time/total (s)  31856.5
Epoch -702
----------------------------------  ---------------
2022-05-10 22:01:31.627285 PDT | [0] Epoch -701 finished
----------------------------------  ---------------
epoch  -701
replay_buffer/size  999996
trainer/num train calls  300000
trainer/Policy Loss  -2.26742
trainer/Log Pis Mean  2.23046
trainer/Log Pis Std  2.58441
trainer/Log Pis Max  9.60716
trainer/Log Pis Min  -4.43315
trainer/policy/mean Mean  0.15536
trainer/policy/mean Std  0.618855
trainer/policy/mean Max  0.997364
trainer/policy/mean Min  -0.998718
trainer/policy/normal/std Mean  0.393268
trainer/policy/normal/std Std  0.193727
trainer/policy/normal/std Max  0.957075
trainer/policy/normal/std Min  0.0695577
trainer/policy/normal/log_std Mean  -1.08988
trainer/policy/normal/log_std Std  0.604718
trainer/policy/normal/log_std Max  -0.0438736
trainer/policy/normal/log_std Min  -2.6656
eval/num steps total  197310
eval/num paths total  379
eval/path length Mean  502
eval/path length Std  0
eval/path length Max  502
eval/path length Min  502
eval/Rewards Mean  3.11835
eval/Rewards Std  0.748926
eval/Rewards Max  4.73037
eval/Rewards Min  0.986477
eval/Returns Mean  1565.41
eval/Returns Std  0
eval/Returns Max  1565.41
eval/Returns Min  1565.41
eval/Actions Mean  0.152901
eval/Actions Std  0.59333
eval/Actions Max  0.997921
eval/Actions Min  -0.998196
eval/Num Paths  1
eval/Average Returns  1565.41
eval/normalized_score  48.7218
time/evaluation sampling (s)  0.899396
time/logging (s)  0.00311361
time/sampling batch (s)  0.271619
time/saving (s)  0.00616158
time/training (s)  4.28321
time/epoch (s)  5.4635
time/total (s)  31862
Epoch -701
----------------------------------  ---------------
2022-05-10 22:01:37.813512 PDT | [0] Epoch -700 finished
----------------------------------  ---------------
epoch  -700
replay_buffer/size  999996
trainer/num train calls  301000
trainer/Policy Loss  -2.21733
trainer/Log Pis Mean  2.21601
trainer/Log Pis Std  2.58558
trainer/Log Pis Max  10.3672
trainer/Log Pis Min  -4.28352
trainer/policy/mean Mean  0.136844
trainer/policy/mean Std  0.61939
trainer/policy/mean Max  0.997586
trainer/policy/mean Min  -0.998411
trainer/policy/normal/std Mean  0.392304
trainer/policy/normal/std Std  0.191725
trainer/policy/normal/std Max  0.973917
trainer/policy/normal/std Min  0.068038
trainer/policy/normal/log_std Mean  -1.08799
trainer/policy/normal/log_std Std  0.59456
trainer/policy/normal/log_std Max  -0.0264295
trainer/policy/normal/log_std Min  -2.68769
eval/num steps total  197864
eval/num paths total  380
eval/path length Mean  554
eval/path length Std  0
eval/path length Max  554
eval/path length Min  554
eval/Rewards Mean  3.25041
eval/Rewards Std  0.824108
eval/Rewards Max  5.13181
eval/Rewards Min  0.982278
eval/Returns Mean  1800.73
eval/Returns Std  0
eval/Returns Max  1800.73
eval/Returns Min  1800.73
eval/Actions Mean  0.153033
eval/Actions Std  0.583309
eval/Actions Max  0.998202
eval/Actions Min  -0.998168
eval/Num Paths  1
eval/Average Returns  1800.73
eval/normalized_score  55.952
time/evaluation sampling (s)  0.89538
time/logging (s)  0.00282251
time/sampling batch (s)  0.268144
time/saving (s)  0.00337988
time/training (s)  4.99456
time/epoch (s)  6.16428
time/total (s)  31868.1
Epoch -700
----------------------------------  ---------------
2022-05-10 22:01:43.674602 PDT | [0] Epoch -699 finished
----------------------------------  ---------------
epoch  -699
replay_buffer/size  999996
trainer/num train calls  302000
trainer/Policy Loss  -2.00749
trainer/Log Pis Mean  2.22853
trainer/Log Pis Std  2.63268
trainer/Log Pis Max  13.2136
trainer/Log Pis Min  -5.15344
trainer/policy/mean Mean  0.145768
trainer/policy/mean Std  0.617212
trainer/policy/mean Max  0.997401
trainer/policy/mean Min  -0.998823
trainer/policy/normal/std Mean  0.383445
trainer/policy/normal/std Std  0.189094
trainer/policy/normal/std Max  1.06087
trainer/policy/normal/std Min  0.065298
trainer/policy/normal/log_std Mean  -1.11392
trainer/policy/normal/log_std Std  0.600896
trainer/policy/normal/log_std Max  0.0590863
trainer/policy/normal/log_std Min  -2.72879
eval/num steps total  198393
eval/num paths total  381
eval/path length Mean  529
eval/path length Std  0
eval/path length Max  529
eval/path length Min  529
eval/Rewards Mean  3.15006
eval/Rewards Std  0.820488
eval/Rewards Max  5.49121
eval/Rewards Min  0.983308
eval/Returns Mean  1666.38
eval/Returns Std  0
eval/Returns Max  1666.38
eval/Returns Min  1666.38
eval/Actions Mean  0.166698
eval/Actions Std  0.585197
eval/Actions Max  0.998396
eval/Actions Min  -0.997419
eval/Num Paths  1
eval/Average Returns  1666.38
eval/normalized_score  51.8241
time/evaluation sampling (s)  0.90788
time/logging (s)  0.00301768
time/sampling batch (s)  0.280632
time/saving (s)  0.00421431
time/training (s)  4.64368
time/epoch (s)  5.83943
time/total (s)  31874
Epoch -699
----------------------------------  ---------------
2022-05-10 22:01:49.057691 PDT | [0] Epoch -698 finished
----------------------------------  ---------------
epoch  -698
replay_buffer/size  999996
trainer/num train calls  303000
trainer/Policy Loss  -1.92052
trainer/Log Pis Mean  2.00686
trainer/Log Pis Std  2.64084
trainer/Log Pis Max  11.2912
trainer/Log Pis Min  -6.30624
trainer/policy/mean Mean  0.152924
trainer/policy/mean Std  0.607162
trainer/policy/mean Max  0.998949
trainer/policy/mean Min  -0.997955
trainer/policy/normal/std Mean  0.388511
trainer/policy/normal/std Std  0.18093
trainer/policy/normal/std Max  0.905295
trainer/policy/normal/std Min  0.0757511
trainer/policy/normal/log_std Mean  -1.08691
trainer/policy/normal/log_std Std  0.575063
trainer/policy/normal/log_std Max  -0.0994947
trainer/policy/normal/log_std Min  -2.5803
eval/num steps total  199215
eval/num paths total  383
eval/path length Mean  411
eval/path length Std  2
eval/path length Max  413
eval/path length Min  409
eval/Rewards Mean  3.0753
eval/Rewards Std  0.777136
eval/Rewards Max  4.79082
eval/Rewards Min  0.993474
eval/Returns Mean  1263.95
eval/Returns Std  9.54001
eval/Returns Max  1273.49
eval/Returns Min  1254.41
eval/Actions Mean  0.149832
eval/Actions Std  0.591134
eval/Actions Max  0.997297
eval/Actions Min  -0.997328
eval/Num Paths  2
eval/Average Returns  1263.95
eval/normalized_score  39.459
time/evaluation sampling (s)  0.888239
time/logging (s)  0.00329294
time/sampling batch (s)  0.265856
time/saving (s)  0.00297807
time/training (s)  4.20166
time/epoch (s)  5.36203
time/total (s)  31879.3
Epoch -698
----------------------------------  ---------------
2022-05-10 22:01:54.414202 PDT | [0] Epoch -697 finished
----------------------------------  ---------------
epoch  -697
replay_buffer/size  999996
trainer/num train calls  304000
trainer/Policy Loss  -2.09815
trainer/Log Pis Mean  2.06894
trainer/Log Pis Std  2.414
trainer/Log Pis Max  8.59248
trainer/Log Pis Min  -4.39128
trainer/policy/mean Mean  0.142008
trainer/policy/mean Std  0.61419
trainer/policy/mean Max  0.99808
trainer/policy/mean Min  -0.998177
trainer/policy/normal/std Mean  0.390134
trainer/policy/normal/std Std  0.181991
trainer/policy/normal/std Max  0.9035
trainer/policy/normal/std Min  0.0716556
trainer/policy/normal/log_std Mean  -1.08266
trainer/policy/normal/log_std Std  0.575423
trainer/policy/normal/log_std Max  -0.10148
trainer/policy/normal/log_std Min  -2.63588
eval/num steps total  200207
eval/num paths total  385
eval/path length Mean  496
eval/path length Std  89
eval/path length Max  585
eval/path length Min  407
eval/Rewards Mean  3.17052
eval/Rewards Std  0.784813
eval/Rewards Max  5.0408
eval/Rewards Min  0.986454
eval/Returns Mean  1572.58
eval/Returns Std  320.288
eval/Returns Max  1892.87
eval/Returns Min  1252.29
eval/Actions Mean  0.148288
eval/Actions Std  0.593816
eval/Actions Max  0.997776
eval/Actions Min  -0.998568
eval/Num Paths  2
eval/Average Returns  1572.58
eval/normalized_score  48.9419
time/evaluation sampling (s)  0.887597
time/logging (s)  0.00434689
time/sampling batch (s)  0.265792
time/saving (s)  0.00352514
time/training (s)  4.17538
time/epoch (s)  5.33664
time/total (s)  31884.7
Epoch -697
----------------------------------  ---------------
2022-05-10 22:01:59.815867 PDT | [0] Epoch -696 finished
----------------------------------  ---------------
epoch  -696
replay_buffer/size  999996
trainer/num train calls  305000
trainer/Policy Loss  -2.14399
trainer/Log Pis Mean  2.15177
trainer/Log Pis Std  2.53761
trainer/Log Pis Max  11.33
trainer/Log Pis Min  -3.78266
trainer/policy/mean Mean  0.121331
trainer/policy/mean Std  0.625479
trainer/policy/mean Max  0.997431
trainer/policy/mean Min  -0.997465
trainer/policy/normal/std Mean  0.393132
trainer/policy/normal/std Std  0.185814
trainer/policy/normal/std Max  0.926551
trainer/policy/normal/std Min  0.0722364
trainer/policy/normal/log_std Mean  -1.07647
trainer/policy/normal/log_std Std  0.575643
trainer/policy/normal/log_std Max  -0.0762863
trainer/policy/normal/log_std Min  -2.62781
eval/num steps total  200705
eval/num paths total  386
eval/path length Mean  498
eval/path length Std  0
eval/path length Max  498
eval/path length Min  498
eval/Rewards Mean  3.06613
eval/Rewards Std  0.795684
eval/Rewards Max  4.77434
eval/Rewards Min  0.985102
eval/Returns Mean  1526.94
eval/Returns Std  0
eval/Returns Max  1526.94
eval/Returns Min  1526.94
eval/Actions Mean  0.163031
eval/Actions Std  0.58155
eval/Actions Max  0.997919
eval/Actions Min  -0.99684
eval/Num Paths  1
eval/Average Returns  1526.94
eval/normalized_score  47.5395
time/evaluation sampling (s)  0.878993
time/logging (s)  0.00231439
time/sampling batch (s)  0.268358
time/saving (s)  0.00308944
time/training (s)  4.22575
time/epoch (s)  5.3785
time/total (s)  31890.1
Epoch -696
----------------------------------  ---------------
2022-05-10 22:02:05.187176 PDT | [0] Epoch -695 finished
----------------------------------  ---------------
epoch  -695
replay_buffer/size  999996
trainer/num train calls  306000
trainer/Policy Loss  -2.13426
trainer/Log Pis Mean  2.02787
trainer/Log Pis Std  2.37556
trainer/Log Pis Max  8.63576
trainer/Log Pis Min  -5.01165
trainer/policy/mean Mean  0.160311
trainer/policy/mean Std  0.599445
trainer/policy/mean Max  0.998576
trainer/policy/mean Min  -0.997948
trainer/policy/normal/std Mean  0.386048
trainer/policy/normal/std Std  0.190772
trainer/policy/normal/std Max  1.01665
trainer/policy/normal/std Min  0.0707721
trainer/policy/normal/log_std Mean  -1.10311
trainer/policy/normal/log_std Std  0.589909
trainer/policy/normal/log_std Max  0.0165094
trainer/policy/normal/log_std Min  -2.64829
eval/num steps total  201614
eval/num paths total  388
eval/path length Mean  454.5
eval/path length Std  46.5
eval/path length Max  501
eval/path length Min  408
eval/Rewards Mean  3.13558
eval/Rewards Std  0.780766
eval/Rewards Max  5.27952
eval/Rewards Min  0.988508
eval/Returns Mean  1425.12
eval/Returns Std  166.107
eval/Returns Max  1591.23
eval/Returns Min  1259.01
eval/Actions Mean  0.137961
eval/Actions Std  0.589634
eval/Actions Max  0.997772
eval/Actions Min  -0.999444
eval/Num Paths  2
eval/Average Returns  1425.12
eval/normalized_score  44.4112
time/evaluation sampling (s)  0.882938
time/logging (s)  0.0034295
time/sampling batch (s)  0.267506
time/saving (s)  0.00309214
time/training (s)  4.19466
time/epoch (s)  5.35163
time/total (s)  31895.4
Epoch -695
----------------------------------  ---------------
2022-05-10 22:02:10.570090 PDT | [0] Epoch -694 finished
----------------------------------  ---------------
epoch  -694
replay_buffer/size  999996
trainer/num train calls  307000
trainer/Policy Loss  -1.96006
trainer/Log Pis Mean  2.06103
trainer/Log Pis Std  2.58331
trainer/Log Pis Max  13.2401
trainer/Log Pis Min  -7.28505
trainer/policy/mean Mean  0.146513
trainer/policy/mean Std  0.610147
trainer/policy/mean Max  0.995267
trainer/policy/mean Min  -0.998525
trainer/policy/normal/std Mean  0.391819
trainer/policy/normal/std Std  0.185308
trainer/policy/normal/std Max  0.965065
trainer/policy/normal/std Min  0.0664834
trainer/policy/normal/log_std Mean  -1.08143
trainer/policy/normal/log_std Std  0.579703
trainer/policy/normal/log_std Max  -0.0355597
trainer/policy/normal/log_std Min  -2.7108
eval/num steps total  202219
eval/num paths total  389
eval/path length Mean  605
eval/path length Std  0
eval/path length Max  605
eval/path length Min  605
eval/Rewards Mean  3.16364
eval/Rewards Std  0.756597
eval/Rewards Max  5.23971
eval/Rewards Min  0.985336
eval/Returns Mean  1914
eval/Returns Std  0
eval/Returns Max  1914
eval/Returns Min  1914
eval/Actions Mean  0.160647
eval/Actions Std  0.5915
eval/Actions Max  0.998113
eval/Actions Min  -0.997669
eval/Num Paths  1
eval/Average Returns  1914
eval/normalized_score  59.4325
time/evaluation sampling (s)  0.916638
time/logging (s)  0.00259588
time/sampling batch (s)  0.265333
time/saving (s)  0.00295076
time/training (s)  4.1738
time/epoch (s)  5.36132
time/total (s)  31900.8
Epoch -694
----------------------------------  ---------------
2022-05-10 22:02:15.989308 PDT | [0] Epoch -693 finished
----------------------------------  ---------------
epoch  -693
replay_buffer/size  999996
trainer/num train calls  308000
trainer/Policy Loss  -2.43245
trainer/Log Pis Mean  2.40951
trainer/Log Pis Std  2.63005
trainer/Log Pis Max  10.0661
trainer/Log Pis Min  -5.50899
trainer/policy/mean Mean  0.159585
trainer/policy/mean Std  0.621088
trainer/policy/mean Max  0.996359
trainer/policy/mean Min  -0.998165
trainer/policy/normal/std Mean  0.380726
trainer/policy/normal/std Std  0.18693
trainer/policy/normal/std Max  0.931242
trainer/policy/normal/std Min  0.0634209
trainer/policy/normal/log_std Mean  -1.12276
trainer/policy/normal/log_std Std  0.607297
trainer/policy/normal/log_std Max  -0.0712361
trainer/policy/normal/log_std Min  -2.75796
eval/num steps total  202762
eval/num paths total  390
eval/path length Mean  543
eval/path length Std  0
eval/path length Max  543
eval/path length Min  543
eval/Rewards Mean  3.1631
eval/Rewards Std  0.840518
eval/Rewards Max  5.14654
eval/Rewards Min  0.990278
eval/Returns Mean  1717.57
eval/Returns Std  0
eval/Returns Max  1717.57
eval/Returns Min  1717.57
eval/Actions Mean  0.15143
eval/Actions Std  0.574006
eval/Actions Max  0.998483
eval/Actions Min  -0.997918
eval/Num Paths  1
eval/Average Returns  1717.57
eval/normalized_score  53.3968
time/evaluation sampling (s)  0.91886
time/logging (s)  0.00257721
time/sampling batch (s)  0.266452
time/saving (s)  0.00330751
time/training (s)  4.2073
time/epoch (s)  5.39849
time/total (s)  31906.2
Epoch -693
----------------------------------  ---------------
2022-05-10 22:02:21.500224 PDT | [0] Epoch -692 finished
----------------------------------  ---------------
epoch  -692
replay_buffer/size  999996
trainer/num train calls  309000
trainer/Policy Loss  -2.09159
trainer/Log Pis Mean  2.14817
trainer/Log Pis Std  2.63632
trainer/Log Pis Max  10.7484
trainer/Log Pis Min  -5.79848
trainer/policy/mean Mean  0.147416
trainer/policy/mean Std  0.606123
trainer/policy/mean Max  0.997044
trainer/policy/mean Min  -0.998287
trainer/policy/normal/std Mean  0.380766
trainer/policy/normal/std Std  0.187721
trainer/policy/normal/std Max  0.974511
trainer/policy/normal/std Min  0.0666332
trainer/policy/normal/log_std Mean  -1.1201
trainer/policy/normal/log_std Std  0.599429
trainer/policy/normal/log_std Max  -0.0258196
trainer/policy/normal/log_std Min  -2.70855
eval/num steps total  203405
eval/num paths total  391
eval/path length Mean  643
eval/path length Std  0
eval/path length Max  643
eval/path length Min  643
eval/Rewards Mean  3.25321
eval/Rewards Std  0.749529
eval/Rewards Max  4.67598
eval/Rewards Min  0.980705
eval/Returns Mean  2091.81
eval/Returns Std  0
eval/Returns Max  2091.81
eval/Returns Min  2091.81
eval/Actions Mean  0.153803
eval/Actions Std  0.612126
eval/Actions Max  0.99761
eval/Actions Min  -0.9983
eval/Num Paths  1
eval/Average Returns  2091.81
eval/normalized_score  64.896
time/evaluation sampling (s)  1.02409
time/logging (s)  0.00305699
time/sampling batch (s)  0.265766
time/saving (s)  0.00329745
time/training (s)  4.19413
time/epoch (s)  5.49034
time/total (s)  31911.7
Epoch -692
----------------------------------  ---------------
2022-05-10 22:02:26.908931 PDT | [0] Epoch -691 finished
----------------------------------  ---------------
epoch  -691
replay_buffer/size  999996
trainer/num train calls  310000
trainer/Policy Loss  -2.1584
trainer/Log Pis Mean  2.21715
trainer/Log Pis Std  2.60129
trainer/Log Pis Max  9.52754
trainer/Log Pis Min  -5.10945
trainer/policy/mean Mean  0.157908
trainer/policy/mean Std  0.607896
trainer/policy/mean Max  0.99845
trainer/policy/mean Min  -0.996666
trainer/policy/normal/std Mean  0.38149
trainer/policy/normal/std Std  0.183888
trainer/policy/normal/std Max  1.0011
trainer/policy/normal/std Min  0.0733047
trainer/policy/normal/log_std Mean  -1.10936
trainer/policy/normal/log_std Std  0.579634
trainer/policy/normal/log_std Max  0.00109541
trainer/policy/normal/log_std Min  -2.61313
eval/num steps total  203982
eval/num paths total  392
eval/path length Mean  577
eval/path length Std  0
eval/path length Max  577
eval/path length Min  577
eval/Rewards Mean  3.17636
eval/Rewards Std  0.71279
eval/Rewards Max  4.6619
eval/Rewards Min  0.986539
eval/Returns Mean  1832.76
eval/Returns Std  0
eval/Returns Max  1832.76
eval/Returns Min  1832.76
eval/Actions Mean  0.148365
eval/Actions Std  0.605917
eval/Actions Max  0.997546
eval/Actions Min  -0.999455
eval/Num Paths  1
eval/Average Returns  1832.76
eval/normalized_score  56.9363
time/evaluation sampling (s)  0.903903
time/logging (s)  0.00251524
time/sampling batch (s)  0.267513
time/saving (s)  0.00319379
time/training (s)  4.2102
time/epoch (s)  5.38732
time/total (s)  31917.1
Epoch -691
----------------------------------  ---------------
2022-05-10 22:02:32.357769 PDT | [0] Epoch -690 finished
----------------------------------  ---------------
epoch  -690
replay_buffer/size  999996
trainer/num train calls  311000
trainer/Policy Loss  -2.17155
trainer/Log Pis Mean  2.17718
trainer/Log Pis Std  2.68765
trainer/Log Pis Max  9.79824
trainer/Log Pis Min  -6.24388
trainer/policy/mean Mean  0.157264
trainer/policy/mean Std  0.620758
trainer/policy/mean Max  0.998521
trainer/policy/mean Min  -0.998265
trainer/policy/normal/std Mean  0.39301
trainer/policy/normal/std Std  0.185334
trainer/policy/normal/std Max  0.949013
trainer/policy/normal/std Min  0.0697404
trainer/policy/normal/log_std Mean  -1.07748
trainer/policy/normal/log_std Std  0.579284
trainer/policy/normal/log_std Max  -0.0523331
trainer/policy/normal/log_std Min  -2.66298
eval/num steps total  204854
eval/num paths total  394
eval/path length Mean  436
eval/path length Std  27
eval/path length Max  463
eval/path length Min  409
eval/Rewards Mean  3.13796
eval/Rewards Std  0.821354
eval/Rewards Max  4.74302
eval/Rewards Min  0.982211
eval/Returns Mean  1368.15
eval/Returns Std  111.358
eval/Returns Max  1479.51
eval/Returns Min  1256.79
eval/Actions Mean  0.14717
eval/Actions Std  0.585642
eval/Actions Max  0.996286
eval/Actions Min  -0.997757
eval/Num Paths  2
eval/Average Returns  1368.15
eval/normalized_score  42.6607
time/evaluation sampling (s)  0.914115
time/logging (s)  0.00341336
time/sampling batch (s)  0.270956
time/saving (s)  0.00300867
time/training (s)  4.23701
time/epoch (s)  5.4285
time/total (s)  31922.5
Epoch -690
----------------------------------  ---------------
2022-05-10 22:02:37.804628 PDT | [0] Epoch -689 finished
----------------------------------  ---------------
epoch  -689
replay_buffer/size  999996
trainer/num train calls  312000
trainer/Policy Loss  -2.30664
trainer/Log Pis Mean  2.22842
trainer/Log Pis Std  2.6328
trainer/Log Pis Max  9.65452
trainer/Log Pis Min  -5.18167
trainer/policy/mean Mean  0.15197
trainer/policy/mean Std  0.621425
trainer/policy/mean Max  0.996585
trainer/policy/mean Min  -0.997723
trainer/policy/normal/std Mean  0.38924
trainer/policy/normal/std Std  0.185488
trainer/policy/normal/std Max  0.989431
trainer/policy/normal/std Min  0.0759885
trainer/policy/normal/log_std Mean  -1.08895
trainer/policy/normal/log_std Std  0.581541
trainer/policy/normal/log_std Max  -0.0106257
trainer/policy/normal/log_std Min  -2.57717
eval/num steps total  205395
eval/num paths total  395
eval/path length Mean  541
eval/path length Std  0
eval/path length Max  541
eval/path length Min  541
eval/Rewards Mean  3.16467
eval/Rewards Std  0.859763
eval/Rewards Max  5.18783
eval/Rewards Min  0.989298
eval/Returns Mean  1712.09
eval/Returns Std  0
eval/Returns Max  1712.09
eval/Returns Min  1712.09
eval/Actions Mean  0.150156
eval/Actions Std  0.571462
eval/Actions Max  0.995937
eval/Actions Min  -0.998085
eval/Num Paths  1
eval/Average Returns  1712.09
eval/normalized_score  53.2285
time/evaluation sampling (s)  0.884794
time/logging (s)  0.00241766
time/sampling batch (s)  0.270003
time/saving (s)  0.00308925
time/training (s)  4.26459
time/epoch (s)  5.42489
time/total (s)  31927.9
Epoch -689
----------------------------------  ---------------
2022-05-10 22:02:43.191335 PDT | [0] Epoch -688 finished
----------------------------------  ---------------
epoch  -688
replay_buffer/size  999996
trainer/num train calls  313000
trainer/Policy Loss  -2.10118
trainer/Log Pis Mean  2.1889
trainer/Log Pis Std  2.53471
trainer/Log Pis Max  9.11235
trainer/Log Pis Min  -5.11744
trainer/policy/mean Mean  0.110074
trainer/policy/mean Std  0.617183
trainer/policy/mean Max  0.997517
trainer/policy/mean Min  -0.999168
trainer/policy/normal/std Mean  0.388541
trainer/policy/normal/std Std  0.188474
trainer/policy/normal/std Max  1.22861
trainer/policy/normal/std Min  0.0657521
trainer/policy/normal/log_std Mean  -1.09802
trainer/policy/normal/log_std Std  0.59868
trainer/policy/normal/log_std Max  0.20588
trainer/policy/normal/log_std Min  -2.72186
eval/num steps total  206017
eval/num paths total  396
eval/path length Mean  622
eval/path length Std  0
eval/path length Max  622
eval/path length Min  622
eval/Rewards Mean  3.17823
eval/Rewards Std  0.799964
eval/Rewards Max  4.99359
eval/Rewards Min  0.981572
eval/Returns Mean  1976.86
eval/Returns Std  0
eval/Returns Max  1976.86
eval/Returns Min  1976.86
eval/Actions Mean  0.141211
eval/Actions Std  0.576999
eval/Actions Max  0.997127
eval/Actions Min  -0.998329
eval/Num Paths  1
eval/Average Returns  1976.86
eval/normalized_score  61.3639
time/evaluation sampling (s)  0.875789
time/logging (s)  0.00254625
time/sampling batch (s)  0.267281
time/saving (s)  0.00294696
time/training (s)  4.21738
time/epoch (s)  5.36594
time/total (s)  31933.3
Epoch -688
----------------------------------  ---------------
2022-05-10 22:02:48.585805 PDT | [0] Epoch -687 finished
----------------------------------  ---------------
epoch  -687
replay_buffer/size  999996
trainer/num train calls  314000
trainer/Policy Loss  -2.08915
trainer/Log Pis Mean  2.31734
trainer/Log Pis Std  2.53742
trainer/Log Pis Max  8.16292
trainer/Log Pis Min  -6.64123
trainer/policy/mean Mean  0.176577
trainer/policy/mean Std  0.61061
trainer/policy/mean Max  0.998218
trainer/policy/mean Min  -0.998085
trainer/policy/normal/std Mean  0.384267
trainer/policy/normal/std Std  0.18232
trainer/policy/normal/std Max  0.931881
trainer/policy/normal/std Min  0.0693286
trainer/policy/normal/log_std Mean  -1.10129
trainer/policy/normal/log_std Std  0.580674
trainer/policy/normal/log_std Max  -0.07055
trainer/policy/normal/log_std Min  -2.6689
eval/num steps total  206516
eval/num paths total  397
eval/path length Mean  499
eval/path length Std  0
eval/path length Max  499
eval/path length Min  499
eval/Rewards Mean  3.08049
eval/Rewards Std  0.755416
eval/Rewards Max  4.67554
eval/Rewards Min  0.983946
eval/Returns Mean  1537.17
eval/Returns Std  0
eval/Returns Max  1537.17
eval/Returns Min  1537.17
eval/Actions Mean  0.150757
eval/Actions Std  0.588844
eval/Actions Max  0.998032
eval/Actions Min  -0.996593
eval/Num Paths  1
eval/Average Returns  1537.17
eval/normalized_score  47.8539
time/evaluation sampling (s)  0.88697
time/logging (s)  0.0022538
time/sampling batch (s)  0.268738
time/saving (s)  0.00298162
time/training (s)  4.21238
time/epoch (s)  5.37333
time/total (s)  31938.7
Epoch -687
----------------------------------  ---------------
2022-05-10 22:02:53.997168 PDT | [0] Epoch -686 finished
----------------------------------  ---------------
epoch  -686
replay_buffer/size  999996
trainer/num train calls  315000
trainer/Policy Loss  -2.12741
trainer/Log Pis Mean  2.33002
trainer/Log Pis Std  2.65969
trainer/Log Pis Max  10.1814
trainer/Log Pis Min  -5.05057
trainer/policy/mean Mean  0.164892
trainer/policy/mean Std  0.613094
trainer/policy/mean Max  0.997869
trainer/policy/mean Min  -0.997203
trainer/policy/normal/std Mean  0.379954
trainer/policy/normal/std Std  0.184255
trainer/policy/normal/std Max  0.974684
trainer/policy/normal/std Min  0.0731659
trainer/policy/normal/log_std Mean  -1.11794
trainer/policy/normal/log_std Std  0.590649
trainer/policy/normal/log_std Max  -0.0256421
trainer/policy/normal/log_std Min  -2.61503
eval/num steps total  207277
eval/num paths total  398
eval/path length Mean  761
eval/path length Std  0
eval/path length Max  761
eval/path length Min  761
eval/Rewards Mean  3.19952
eval/Rewards Std  0.661317
eval/Rewards Max  4.7953
eval/Rewards Min  0.986234
eval/Returns Mean  2434.84
eval/Returns Std  0
eval/Returns Max  2434.84
eval/Returns Min  2434.84
eval/Actions Mean  0.16655
eval/Actions Std  0.606872
eval/Actions Max  0.998174
eval/Actions Min  -0.998654
eval/Num Paths  1
eval/Average Returns  2434.84
eval/normalized_score  75.4357
time/evaluation sampling (s)  0.877431
time/logging (s)  0.00294121
time/sampling batch (s)  0.269313
time/saving (s)  0.00296593
time/training (s)  4.23854
time/epoch (s)  5.3912
time/total (s)  31944.1
Epoch -686
----------------------------------  ---------------
2022-05-10 22:02:59.388375 PDT | [0] Epoch -685 finished
----------------------------------  ---------------
epoch  -685
replay_buffer/size  999996
trainer/num train calls  316000
trainer/Policy Loss  -2.34614
trainer/Log Pis Mean  2.26981
trainer/Log Pis Std  2.72904
trainer/Log Pis Max  11.3387
trainer/Log Pis Min  -4.21885
trainer/policy/mean Mean  0.130539
trainer/policy/mean Std  0.612229
trainer/policy/mean Max  0.997814
trainer/policy/mean Min  -0.998609
trainer/policy/normal/std Mean  0.376037
trainer/policy/normal/std Std  0.181499
trainer/policy/normal/std Max  0.942805
trainer/policy/normal/std Min  0.0691376
trainer/policy/normal/log_std Mean  -1.12396
trainer/policy/normal/log_std Std  0.578921
trainer/policy/normal/log_std Max  -0.0588963
trainer/policy/normal/log_std Min  -2.67166
eval/num steps total  207864
eval/num paths total  399
eval/path length Mean  587
eval/path length Std  0
eval/path length Max  587
eval/path length Min  587
eval/Rewards Mean  3.13771
eval/Rewards Std  0.778035
eval/Rewards Max  5.22234
eval/Rewards Min  0.981399
eval/Returns Mean  1841.84
eval/Returns Std  0
eval/Returns Max  1841.84
eval/Returns Min  1841.84
eval/Actions Mean  0.14499
eval/Actions Std  0.585476
eval/Actions Max  0.998205
eval/Actions Min  -0.998065
eval/Num Paths  1
eval/Average Returns  1841.84
eval/normalized_score  57.2151
time/evaluation sampling (s)  0.875088
time/logging (s)  0.00261605
time/sampling batch (s)  0.26872
time/saving (s)  0.00299009
time/training (s)  4.22059
time/epoch (s)  5.37
time/total (s)  31949.5
Epoch -685
----------------------------------  ---------------
2022-05-10 22:03:04.778178 PDT | [0] Epoch -684 finished
----------------------------------  ---------------
epoch  -684
replay_buffer/size  999996
trainer/num train calls  317000
trainer/Policy Loss  -2.31185
trainer/Log Pis Mean  2.30677
trainer/Log Pis Std  2.52606
trainer/Log Pis Max  8.89808
trainer/Log Pis Min  -4.91449
trainer/policy/mean Mean  0.14063
trainer/policy/mean Std  0.627496
trainer/policy/mean Max  0.998094
trainer/policy/mean Min  -0.997817
trainer/policy/normal/std Mean  0.383213
trainer/policy/normal/std Std  0.180898
trainer/policy/normal/std Max  0.941233
trainer/policy/normal/std Min  0.0756274
trainer/policy/normal/log_std Mean  -1.10104
trainer/policy/normal/log_std Std  0.572066
trainer/policy/normal/log_std Max  -0.0605644
trainer/policy/normal/log_std Min  -2.58194
eval/num steps total  208863
eval/num paths total  401
eval/path length Mean  499.5
eval/path length Std  45.5
eval/path length Max  545
eval/path length Min  454
eval/Rewards Mean  3.228
eval/Rewards Std  0.848107
eval/Rewards Max  5.31813
eval/Rewards Min  0.979789
eval/Returns Mean  1612.39
eval/Returns Std  149.988
eval/Returns Max  1762.38
eval/Returns Min  1462.4
eval/Actions Mean  0.137517
eval/Actions Std  0.579729
eval/Actions Max  0.998059
eval/Actions Min  -0.998746
eval/Num Paths  2
eval/Average Returns  1612.39
eval/normalized_score  50.1651
time/evaluation sampling (s)  0.90282
time/logging (s)  0.00415192
time/sampling batch (s)  0.26519
time/saving (s)  0.00378713
time/training (s)  4.1947
time/epoch (s)  5.37065
time/total (s)  31954.8
Epoch -684
----------------------------------  ---------------
2022-05-10 22:03:10.170773 PDT | [0] Epoch -683 finished
----------------------------------  ---------------
epoch  -683
replay_buffer/size  999996
trainer/num train calls  318000
trainer/Policy Loss  -2.03396
trainer/Log Pis Mean  2.14025
trainer/Log Pis Std  2.52108
trainer/Log Pis Max  9.52714
trainer/Log Pis Min  -3.66823
trainer/policy/mean Mean  0.162932
trainer/policy/mean Std  0.599389
trainer/policy/mean Max  0.996652
trainer/policy/mean Min  -0.997774
trainer/policy/normal/std Mean  0.388505
trainer/policy/normal/std Std  0.189206
trainer/policy/normal/std Max  1.10688
trainer/policy/normal/std Min  0.0710693
trainer/policy/normal/log_std Mean  -1.09766
trainer/policy/normal/log_std Std  0.595385
trainer/policy/normal/log_std Max  0.101545
trainer/policy/normal/log_std Min  -2.6441
eval/num steps total  209398
eval/num paths total  402
eval/path length Mean  535
eval/path length Std  0
eval/path length Max  535
eval/path length Min  535
eval/Rewards Mean  3.2244
eval/Rewards Std  0.816697
eval/Rewards Max  5.35376
eval/Rewards Min  0.982363
eval/Returns Mean  1725.05
eval/Returns Std  0
eval/Returns Max  1725.05
eval/Returns Min  1725.05
eval/Actions Mean  0.142244
eval/Actions Std  0.590054
eval/Actions Max  0.998128
eval/Actions Min  -0.998512
eval/Num Paths  1
eval/Average Returns  1725.05
eval/normalized_score  53.6269
time/evaluation sampling (s)  0.882825
time/logging (s)  0.00247258
time/sampling batch (s)  0.268235
time/saving (s)  0.00317988
time/training (s)  4.21271
time/epoch (s)  5.36943
time/total (s)  31960.2
Epoch -683
----------------------------------  ---------------
2022-05-10 22:03:15.595741 PDT | [0] Epoch -682 finished
----------------------------------  ---------------
epoch  -682
replay_buffer/size  999996
trainer/num train calls  319000
trainer/Policy Loss  -2.37598
trainer/Log Pis Mean  2.16032
trainer/Log Pis Std  2.68155
trainer/Log Pis Max  9.70047
trainer/Log Pis Min  -4.56439
trainer/policy/mean Mean  0.105126
trainer/policy/mean Std  0.626822
trainer/policy/mean Max  0.996886
trainer/policy/mean Min  -0.998111
trainer/policy/normal/std Mean  0.37306
trainer/policy/normal/std Std  0.172
trainer/policy/normal/std Max  0.927731
trainer/policy/normal/std Min  0.0667407
trainer/policy/normal/log_std Mean  -1.11918
trainer/policy/normal/log_std Std  0.553416
trainer/policy/normal/log_std Max  -0.0750131
trainer/policy/normal/log_std Min  -2.70694
eval/num steps total  210066
eval/num paths total  403
eval/path length Mean  668
eval/path length Std  0
eval/path length Max  668
eval/path length Min  668
eval/Rewards Mean  3.17693
eval/Rewards Std  0.675341
eval/Rewards Max  4.67857
eval/Rewards Min  0.987651
eval/Returns Mean  2122.19
eval/Returns Std  0
eval/Returns Max  2122.19
eval/Returns Min  2122.19
eval/Actions Mean  0.151146
eval/Actions Std  0.602778
eval/Actions Max  0.997836
eval/Actions Min  -0.997259
eval/Num Paths  1
eval/Average Returns  2122.19
eval/normalized_score  65.8293
time/evaluation sampling (s)  0.914245
time/logging (s)  0.00296926
time/sampling batch (s)  0.269382
time/saving (s)  0.00323717
time/training (s)  4.21454
time/epoch (s)  5.40437
time/total (s)  31965.6
Epoch -682
----------------------------------  ---------------
2022-05-10 22:03:21.025700 PDT | [0] Epoch -681 finished
----------------------------------  ---------------
epoch  -681
replay_buffer/size  999996
trainer/num train calls  320000
trainer/Policy Loss  -2.31484
trainer/Log Pis Mean  2.25134
trainer/Log Pis Std  2.5459
trainer/Log Pis Max  9.6019
trainer/Log Pis Min  -3.76671
trainer/policy/mean Mean  0.103468
trainer/policy/mean Std  0.630896
trainer/policy/mean Max  0.998185
trainer/policy/mean Min  -0.997879
trainer/policy/normal/std Mean  0.398659
trainer/policy/normal/std Std  0.191951
trainer/policy/normal/std Max  0.9679
trainer/policy/normal/std Min  0.0703802
trainer/policy/normal/log_std Mean  -1.06855
trainer/policy/normal/log_std Std  0.588858
trainer/policy/normal/log_std Max  -0.032626
trainer/policy/normal/log_std Min  -2.65384
eval/num steps total  210589
eval/num paths total  404
eval/path length Mean  523
eval/path length Std  0
eval/path length Max  523
eval/path length Min  523
eval/Rewards Mean  3.14859
eval/Rewards Std  0.846146
eval/Rewards Max  5.50352
eval/Rewards Min  0.988329
eval/Returns Mean  1646.71
eval/Returns Std  0
eval/Returns Max  1646.71
eval/Returns Min  1646.71
eval/Actions Mean  0.157355
eval/Actions Std  0.581761
eval/Actions Max  0.997556
eval/Actions Min  -0.999125
eval/Num Paths  1
eval/Average Returns  1646.71
eval/normalized_score  51.2198
time/evaluation sampling (s)  0.926002
time/logging (s)  0.00239561
time/sampling batch (s)  0.268293
time/saving (s)  0.0030789
time/training (s)  4.20853
time/epoch (s)  5.4083
time/total (s)  31971
Epoch -681
----------------------------------  ---------------
2022-05-10 22:03:26.420034 PDT | [0] Epoch -680 finished
----------------------------------  ---------------
epoch  -680
replay_buffer/size  999996
trainer/num train calls  321000
trainer/Policy Loss  -2.19207
trainer/Log Pis Mean  2.01132
trainer/Log Pis Std  2.53343
trainer/Log Pis Max  8.88713
trainer/Log Pis Min  -4.38351
trainer/policy/mean Mean  0.167701
trainer/policy/mean Std  0.596764
trainer/policy/mean Max  0.998356
trainer/policy/mean Min  -0.997418
trainer/policy/normal/std Mean  0.384329
trainer/policy/normal/std Std  0.184669
trainer/policy/normal/std Max  0.909814
trainer/policy/normal/std Min  0.0774144
trainer/policy/normal/log_std Mean  -1.10157
trainer/policy/normal/log_std Std  0.578139
trainer/policy/normal/log_std Max  -0.0945149
trainer/policy/normal/log_std Min  -2.55858
eval/num steps total  211170
eval/num paths total  405
eval/path length Mean  581
eval/path length Std  0
eval/path length Max  581
eval/path length Min  581
eval/Rewards Mean  3.14634
eval/Rewards Std  0.743078
eval/Rewards Max  5.18437
eval/Rewards Min  0.981314
eval/Returns Mean  1828.03
eval/Returns Std  0
eval/Returns Max  1828.03
eval/Returns Min  1828.03
eval/Actions Mean  0.15189
eval/Actions Std  0.605022
eval/Actions Max  0.997747
eval/Actions Min  -0.998577
eval/Num Paths  1
eval/Average Returns  1828.03
eval/normalized_score  56.7908
time/evaluation sampling (s)  0.911396
time/logging (s)  0.00255938
time/sampling batch (s)  0.267306
time/saving (s)  0.00314409
time/training (s)  4.18916
time/epoch (s)  5.37357
time/total (s)  31976.4
Epoch -680
----------------------------------  ---------------
2022-05-10 22:03:31.849836 PDT | [0] Epoch -679 finished
----------------------------------  ---------------
epoch  -679
replay_buffer/size  999996
trainer/num train calls  322000
trainer/Policy Loss  -2.21472
trainer/Log Pis Mean  2.13966
trainer/Log Pis Std  2.58251
trainer/Log Pis Max  12.5577
trainer/Log Pis Min  -3.12768
trainer/policy/mean Mean  0.126879
trainer/policy/mean Std  0.615416
trainer/policy/mean Max  0.997606
trainer/policy/mean Min  -0.999134
trainer/policy/normal/std Mean  0.380939
trainer/policy/normal/std Std  0.181987
trainer/policy/normal/std Max  0.907023
trainer/policy/normal/std Min  0.0682119
trainer/policy/normal/log_std Mean  -1.11208
trainer/policy/normal/log_std Std  0.586004
trainer/policy/normal/log_std Max  -0.0975875
trainer/policy/normal/log_std Min  -2.68514
eval/num steps total  211719
eval/num paths total  406
eval/path length Mean  549
eval/path length Std  0
eval/path length Max  549
eval/path length Min  549
eval/Rewards Mean  3.1608
eval/Rewards Std  0.828664
eval/Rewards Max  4.68628
eval/Rewards Min  0.98206
eval/Returns Mean  1735.28
eval/Returns Std  0
eval/Returns Max  1735.28
eval/Returns Min  1735.28
eval/Actions Mean  0.143194
eval/Actions Std  0.585811
eval/Actions Max  0.997848
eval/Actions Min  -0.998441
eval/Num Paths  1
eval/Average Returns  1735.28
eval/normalized_score  53.941
time/evaluation sampling (s)  0.914701
time/logging (s)  0.00251687
time/sampling batch (s)  0.268123
time/saving (s)  0.00317397
time/training (s)  4.22032
time/epoch (s)  5.40884
time/total (s)  31981.8
Epoch -679
----------------------------------  ---------------
2022-05-10 22:03:37.233698 PDT | [0] Epoch -678 finished
----------------------------------  ---------------
epoch  -678
replay_buffer/size  999996
trainer/num train calls  323000
trainer/Policy Loss  -2.39685
trainer/Log Pis Mean  2.32749
trainer/Log Pis Std  2.67007
trainer/Log Pis Max  13.1925
trainer/Log Pis Min  -5.15407
trainer/policy/mean Mean  0.170577
trainer/policy/mean Std  0.633477
trainer/policy/mean Max  0.998627
trainer/policy/mean Min  -0.999626
trainer/policy/normal/std Mean  0.394739
trainer/policy/normal/std Std  0.18941
trainer/policy/normal/std Max  0.982062
trainer/policy/normal/std Min  0.0689408
trainer/policy/normal/log_std Mean  -1.07644
trainer/policy/normal/log_std Std  0.584082
trainer/policy/normal/log_std Max  -0.0181011
trainer/policy/normal/log_std Min  -2.67451
eval/num steps total  212715
eval/num paths total  408
eval/path length Mean  498
eval/path length Std  6
eval/path length Max  504
eval/path length Min  492
eval/Rewards Mean  3.17701
eval/Rewards Std  0.801262
eval/Rewards Max  5.32171
eval/Rewards Min  0.983713
eval/Returns Mean  1582.15
eval/Returns Std  10.4939
eval/Returns Max  1592.65
eval/Returns Min  1571.66
eval/Actions Mean  0.149149
eval/Actions Std  0.593395
eval/Actions Max  0.997073
eval/Actions Min  -0.99871
eval/Num Paths  2
eval/Average Returns  1582.15
eval/normalized_score  49.2361
time/evaluation sampling (s)  0.890473
time/logging (s)  0.00370321
time/sampling batch (s)  0.273597
time/saving (s)  0.00299161
time/training (s)  4.19318
time/epoch (s)  5.36395
time/total (s)  31987.2
Epoch -678
----------------------------------  ---------------
2022-05-10 22:03:42.603791 PDT | [0] Epoch -677 finished
----------------------------------  ---------------
epoch  -677
replay_buffer/size  999996
trainer/num train calls  324000
trainer/Policy Loss  -2.24071
trainer/Log Pis Mean  2.26862
trainer/Log Pis Std  2.60823
trainer/Log Pis Max  12.6031
trainer/Log Pis Min  -3.60789
trainer/policy/mean Mean  0.127259
trainer/policy/mean Std  0.628231
trainer/policy/mean Max  0.997709
trainer/policy/mean Min  -0.999276
trainer/policy/normal/std Mean  0.394622
trainer/policy/normal/std Std  0.183306
trainer/policy/normal/std Max  0.921262
trainer/policy/normal/std Min  0.0723469
trainer/policy/normal/log_std Mean  -1.06845
trainer/policy/normal/log_std Std  0.568312
trainer/policy/normal/log_std Max  -0.0820112
trainer/policy/normal/log_std Min  -2.62628
eval/num steps total  213328
eval/num paths total  409
eval/path length Mean  613
eval/path length Std  0
eval/path length Max  613
eval/path length Min  613
eval/Rewards Mean  3.17423
eval/Rewards Std  0.790339
eval/Rewards Max  5.32686
eval/Rewards Min  0.985867
eval/Returns Mean  1945.8
eval/Returns Std  0
eval/Returns Max  1945.8
eval/Returns Min  1945.8
eval/Actions Mean  0.144768
eval/Actions Std  0.584855
eval/Actions Max  0.997046
eval/Actions Min  -0.998066
eval/Num Paths  1
eval/Average Returns  1945.8
eval/normalized_score  60.4096
time/evaluation sampling (s)  0.880367
time/logging (s)  0.0025115
time/sampling batch (s)  0.266791
time/saving (s)  0.00299112
time/training (s)  4.19541
time/epoch (s)  5.34807
time/total (s)  31992.5
Epoch -677
----------------------------------  ---------------
2022-05-10 22:03:47.984558 PDT | [0] Epoch -676 finished
----------------------------------  ---------------
epoch  -676
replay_buffer/size  999996
trainer/num train calls  325000
trainer/Policy Loss  -2.12282
trainer/Log Pis Mean  2.28439
trainer/Log Pis Std  2.49268
trainer/Log Pis Max  10.1853
trainer/Log Pis Min  -5.00055
trainer/policy/mean Mean  0.180088
trainer/policy/mean Std  0.611792
trainer/policy/mean Max  0.998931
trainer/policy/mean Min  -0.996528
trainer/policy/normal/std Mean  0.382071
trainer/policy/normal/std Std  0.187525
trainer/policy/normal/std Max  0.974953
trainer/policy/normal/std Min  0.0674178
trainer/policy/normal/log_std Mean  -1.11676
trainer/policy/normal/log_std Std  0.600257
trainer/policy/normal/log_std Max  -0.025366
trainer/policy/normal/log_std Min  -2.69685
eval/num steps total  213909
eval/num paths total  410
eval/path length Mean  581
eval/path length Std  0
eval/path length Max  581
eval/path length Min  581
eval/Rewards Mean  3.12834
eval/Rewards Std  0.742108
eval/Rewards Max  4.99859
eval/Rewards Min  0.980626
eval/Returns Mean  1817.57
eval/Returns Std  0
eval/Returns Max  1817.57
eval/Returns Min  1817.57
eval/Actions Mean  0.154165
eval/Actions Std  0.593648
eval/Actions Max  0.997781
eval/Actions Min  -0.997829
eval/Num Paths  1
eval/Average Returns  1817.57
eval/normalized_score  56.4694
time/evaluation sampling (s)  0.874987
time/logging (s)  0.00243896
time/sampling batch (s)  0.267306
time/saving (s)  0.00297616
time/training (s)  4.21208
time/epoch (s)  5.35979
time/total (s)  31997.9
Epoch -676
----------------------------------  ---------------
2022-05-10 22:03:53.391111 PDT | [0] Epoch -675 finished
----------------------------------  ---------------
epoch  -675
replay_buffer/size  999996
trainer/num train calls  326000
trainer/Policy Loss  -2.06091
trainer/Log Pis Mean  2.25246
trainer/Log Pis Std  2.61095
trainer/Log Pis Max  10.2033
trainer/Log Pis Min  -5.88244
trainer/policy/mean Mean  0.183933
trainer/policy/mean Std  0.607975
trainer/policy/mean Max  0.996335
trainer/policy/mean Min  -0.996424
trainer/policy/normal/std Mean  0.387793
trainer/policy/normal/std Std  0.191378
trainer/policy/normal/std Max  0.976944
trainer/policy/normal/std Min  0.0714824
trainer/policy/normal/log_std Mean  -1.1037
trainer/policy/normal/log_std Std  0.603751
trainer/policy/normal/log_std Max  -0.0233256
trainer/policy/normal/log_std Min -2.6383 eval/num steps total 214492 eval/num paths total 411 eval/path length Mean 583 eval/path length Std 0 eval/path length Max 583 eval/path length Min 583 eval/Rewards Mean 3.21159 eval/Rewards Std 0.691806 eval/Rewards Max 4.89578 eval/Rewards Min 0.983795 eval/Returns Mean 1872.36 eval/Returns Std 0 eval/Returns Max 1872.36 eval/Returns Min 1872.36 eval/Actions Mean 0.169025 eval/Actions Std 0.609042 eval/Actions Max 0.998179 eval/Actions Min -0.997201 eval/Num Paths 1 eval/Average Returns 1872.36 eval/normalized_score 58.153 time/evaluation sampling (s) 0.874971 time/logging (s) 0.00251454 time/sampling batch (s) 0.269017 time/saving (s) 0.00298434 time/training (s) 4.23615 time/epoch (s) 5.38564 time/total (s) 32003.3 Epoch -675 ---------------------------------- --------------- 2022-05-10 22:03:58.800791 PDT | [0] Epoch -674 finished ---------------------------------- --------------- epoch -674 replay_buffer/size 999996 trainer/num train calls 327000 trainer/Policy Loss -2.13661 trainer/Log Pis Mean 2.15435 trainer/Log Pis Std 2.57395 trainer/Log Pis Max 9.93538 trainer/Log Pis Min -4.5586 trainer/policy/mean Mean 0.136354 trainer/policy/mean Std 0.617747 trainer/policy/mean Max 0.997057 trainer/policy/mean Min -0.998452 trainer/policy/normal/std Mean 0.38581 trainer/policy/normal/std Std 0.180474 trainer/policy/normal/std Max 0.972408 trainer/policy/normal/std Min 0.0671632 trainer/policy/normal/log_std Mean -1.09192 trainer/policy/normal/log_std Std 0.569243 trainer/policy/normal/log_std Max -0.0279797 trainer/policy/normal/log_std Min -2.70063 eval/num steps total 215040 eval/num paths total 412 eval/path length Mean 548 eval/path length Std 0 eval/path length Max 548 eval/path length Min 548 eval/Rewards Mean 3.19602 eval/Rewards Std 0.826255 eval/Rewards Max 4.79693 eval/Rewards Min 0.987396 eval/Returns Mean 1751.42 eval/Returns Std 0 eval/Returns Max 1751.42 eval/Returns Min 1751.42 eval/Actions Mean 0.145022 
eval/Actions Std 0.568869 eval/Actions Max 0.997181 eval/Actions Min -0.99841 eval/Num Paths 1 eval/Average Returns 1751.42 eval/normalized_score 54.437 time/evaluation sampling (s) 0.883267 time/logging (s) 0.00248642 time/sampling batch (s) 0.268492 time/saving (s) 0.00310096 time/training (s) 4.23135 time/epoch (s) 5.3887 time/total (s) 32008.7 Epoch -674 ---------------------------------- --------------- 2022-05-10 22:04:04.199252 PDT | [0] Epoch -673 finished ---------------------------------- --------------- epoch -673 replay_buffer/size 999996 trainer/num train calls 328000 trainer/Policy Loss -2.16286 trainer/Log Pis Mean 2.21324 trainer/Log Pis Std 2.64152 trainer/Log Pis Max 10.0571 trainer/Log Pis Min -5.30927 trainer/policy/mean Mean 0.117358 trainer/policy/mean Std 0.611548 trainer/policy/mean Max 0.998206 trainer/policy/mean Min -0.997511 trainer/policy/normal/std Mean 0.380774 trainer/policy/normal/std Std 0.182142 trainer/policy/normal/std Max 0.904698 trainer/policy/normal/std Min 0.0685156 trainer/policy/normal/log_std Mean -1.11491 trainer/policy/normal/log_std Std 0.592771 trainer/policy/normal/log_std Max -0.100154 trainer/policy/normal/log_std Min -2.68069 eval/num steps total 215609 eval/num paths total 413 eval/path length Mean 569 eval/path length Std 0 eval/path length Max 569 eval/path length Min 569 eval/Rewards Mean 3.14394 eval/Rewards Std 0.775171 eval/Rewards Max 4.89813 eval/Rewards Min 0.978195 eval/Returns Mean 1788.9 eval/Returns Std 0 eval/Returns Max 1788.9 eval/Returns Min 1788.9 eval/Actions Mean 0.148153 eval/Actions Std 0.585443 eval/Actions Max 0.997911 eval/Actions Min -0.997965 eval/Num Paths 1 eval/Average Returns 1788.9 eval/normalized_score 55.5887 time/evaluation sampling (s) 0.887463 time/logging (s) 0.00257372 time/sampling batch (s) 0.267677 time/saving (s) 0.00315032 time/training (s) 4.21662 time/epoch (s) 5.37748 time/total (s) 32014 Epoch -673 ---------------------------------- --------------- 2022-05-10 
22:04:09.661871 PDT | [0] Epoch -672 finished ---------------------------------- --------------- epoch -672 replay_buffer/size 999996 trainer/num train calls 329000 trainer/Policy Loss -2.24843 trainer/Log Pis Mean 2.18552 trainer/Log Pis Std 2.63683 trainer/Log Pis Max 9.20254 trainer/Log Pis Min -5.09988 trainer/policy/mean Mean 0.17366 trainer/policy/mean Std 0.604904 trainer/policy/mean Max 0.997713 trainer/policy/mean Min -0.997665 trainer/policy/normal/std Mean 0.378871 trainer/policy/normal/std Std 0.184586 trainer/policy/normal/std Max 0.968731 trainer/policy/normal/std Min 0.0654456 trainer/policy/normal/log_std Mean -1.12603 trainer/policy/normal/log_std Std 0.604849 trainer/policy/normal/log_std Max -0.0317688 trainer/policy/normal/log_std Min -2.72654 eval/num steps total 216555 eval/num paths total 415 eval/path length Mean 473 eval/path length Std 13 eval/path length Max 486 eval/path length Min 460 eval/Rewards Mean 3.19017 eval/Rewards Std 0.846041 eval/Rewards Max 5.13688 eval/Rewards Min 0.980708 eval/Returns Mean 1508.95 eval/Returns Std 38.4527 eval/Returns Max 1547.4 eval/Returns Min 1470.5 eval/Actions Mean 0.150719 eval/Actions Std 0.589968 eval/Actions Max 0.99846 eval/Actions Min -0.999139 eval/Num Paths 2 eval/Average Returns 1508.95 eval/normalized_score 46.9869 time/evaluation sampling (s) 0.961413 time/logging (s) 0.00357488 time/sampling batch (s) 0.268286 time/saving (s) 0.00312588 time/training (s) 4.2059 time/epoch (s) 5.4423 time/total (s) 32019.5 Epoch -672 ---------------------------------- --------------- 2022-05-10 22:04:15.061072 PDT | [0] Epoch -671 finished ---------------------------------- --------------- epoch -671 replay_buffer/size 999996 trainer/num train calls 330000 trainer/Policy Loss -1.95842 trainer/Log Pis Mean 1.93283 trainer/Log Pis Std 2.67911 trainer/Log Pis Max 10.6364 trainer/Log Pis Min -4.55727 trainer/policy/mean Mean 0.133854 trainer/policy/mean Std 0.606086 trainer/policy/mean Max 0.997789 
trainer/policy/mean Min -0.998435 trainer/policy/normal/std Mean 0.391436 trainer/policy/normal/std Std 0.187539 trainer/policy/normal/std Max 0.96445 trainer/policy/normal/std Min 0.0741909 trainer/policy/normal/log_std Mean -1.08277 trainer/policy/normal/log_std Std 0.57794 trainer/policy/normal/log_std Max -0.0361969 trainer/policy/normal/log_std Min -2.60111 eval/num steps total 217508 eval/num paths total 417 eval/path length Mean 476.5 eval/path length Std 6.5 eval/path length Max 483 eval/path length Min 470 eval/Rewards Mean 3.07787 eval/Rewards Std 0.83908 eval/Rewards Max 4.81402 eval/Rewards Min 0.984786 eval/Returns Mean 1466.61 eval/Returns Std 21.4319 eval/Returns Max 1488.04 eval/Returns Min 1445.17 eval/Actions Mean 0.147913 eval/Actions Std 0.577733 eval/Actions Max 0.996366 eval/Actions Min -0.998646 eval/Num Paths 2 eval/Average Returns 1466.61 eval/normalized_score 45.6858 time/evaluation sampling (s) 0.883251 time/logging (s) 0.00356111 time/sampling batch (s) 0.26834 time/saving (s) 0.00297245 time/training (s) 4.21974 time/epoch (s) 5.37786 time/total (s) 32024.9 Epoch -671 ---------------------------------- --------------- 2022-05-10 22:04:20.464201 PDT | [0] Epoch -670 finished ---------------------------------- --------------- epoch -670 replay_buffer/size 999996 trainer/num train calls 331000 trainer/Policy Loss -2.12809 trainer/Log Pis Mean 2.13964 trainer/Log Pis Std 2.60851 trainer/Log Pis Max 10.1347 trainer/Log Pis Min -4.30721 trainer/policy/mean Mean 0.172129 trainer/policy/mean Std 0.609731 trainer/policy/mean Max 0.9977 trainer/policy/mean Min -0.997205 trainer/policy/normal/std Mean 0.381884 trainer/policy/normal/std Std 0.184339 trainer/policy/normal/std Max 0.966004 trainer/policy/normal/std Min 0.0739415 trainer/policy/normal/log_std Mean -1.11172 trainer/policy/normal/log_std Std 0.588334 trainer/policy/normal/log_std Max -0.0345872 trainer/policy/normal/log_std Min -2.60448 eval/num steps total 218083 eval/num paths total 
418 eval/path length Mean 575 eval/path length Std 0 eval/path length Max 575 eval/path length Min 575 eval/Rewards Mean 3.17869 eval/Rewards Std 0.740319 eval/Rewards Max 4.84643 eval/Rewards Min 0.98832 eval/Returns Mean 1827.75 eval/Returns Std 0 eval/Returns Max 1827.75 eval/Returns Min 1827.75 eval/Actions Mean 0.159876 eval/Actions Std 0.599293 eval/Actions Max 0.998437 eval/Actions Min -0.998114 eval/Num Paths 1 eval/Average Returns 1827.75 eval/normalized_score 56.7823 time/evaluation sampling (s) 0.896943 time/logging (s) 0.00268866 time/sampling batch (s) 0.267239 time/saving (s) 0.003096 time/training (s) 4.21131 time/epoch (s) 5.38128 time/total (s) 32030.3 Epoch -670 ---------------------------------- --------------- 2022-05-10 22:04:25.879500 PDT | [0] Epoch -669 finished ---------------------------------- --------------- epoch -669 replay_buffer/size 999996 trainer/num train calls 332000 trainer/Policy Loss -1.95526 trainer/Log Pis Mean 1.96666 trainer/Log Pis Std 2.66883 trainer/Log Pis Max 10.055 trainer/Log Pis Min -5.47355 trainer/policy/mean Mean 0.131505 trainer/policy/mean Std 0.609134 trainer/policy/mean Max 0.997965 trainer/policy/mean Min -0.99814 trainer/policy/normal/std Mean 0.390313 trainer/policy/normal/std Std 0.18873 trainer/policy/normal/std Max 0.900433 trainer/policy/normal/std Min 0.070453 trainer/policy/normal/log_std Mean -1.08902 trainer/policy/normal/log_std Std 0.586138 trainer/policy/normal/log_std Max -0.104879 trainer/policy/normal/log_std Min -2.65281 eval/num steps total 218604 eval/num paths total 419 eval/path length Mean 521 eval/path length Std 0 eval/path length Max 521 eval/path length Min 521 eval/Rewards Mean 3.16487 eval/Rewards Std 0.827803 eval/Rewards Max 5.45455 eval/Rewards Min 0.980812 eval/Returns Mean 1648.9 eval/Returns Std 0 eval/Returns Max 1648.9 eval/Returns Min 1648.9 eval/Actions Mean 0.155707 eval/Actions Std 0.587621 eval/Actions Max 0.997688 eval/Actions Min -0.996762 eval/Num Paths 1 
eval/Average Returns 1648.9 eval/normalized_score 51.2869 time/evaluation sampling (s) 0.916301 time/logging (s) 0.00270096 time/sampling batch (s) 0.268336 time/saving (s) 0.00361393 time/training (s) 4.20333 time/epoch (s) 5.39428 time/total (s) 32035.7 Epoch -669 ---------------------------------- --------------- 2022-05-10 22:04:31.285238 PDT | [0] Epoch -668 finished ---------------------------------- --------------- epoch -668 replay_buffer/size 999996 trainer/num train calls 333000 trainer/Policy Loss -1.94099 trainer/Log Pis Mean 2.09212 trainer/Log Pis Std 2.50223 trainer/Log Pis Max 10.2433 trainer/Log Pis Min -4.39182 trainer/policy/mean Mean 0.121788 trainer/policy/mean Std 0.610255 trainer/policy/mean Max 0.998158 trainer/policy/mean Min -0.99818 trainer/policy/normal/std Mean 0.379118 trainer/policy/normal/std Std 0.180117 trainer/policy/normal/std Max 0.87141 trainer/policy/normal/std Min 0.0680347 trainer/policy/normal/log_std Mean -1.11609 trainer/policy/normal/log_std Std 0.584421 trainer/policy/normal/log_std Max -0.137643 trainer/policy/normal/log_std Min -2.68774 eval/num steps total 219170 eval/num paths total 420 eval/path length Mean 566 eval/path length Std 0 eval/path length Max 566 eval/path length Min 566 eval/Rewards Mean 3.14221 eval/Rewards Std 0.779974 eval/Rewards Max 4.83304 eval/Rewards Min 0.981619 eval/Returns Mean 1778.49 eval/Returns Std 0 eval/Returns Max 1778.49 eval/Returns Min 1778.49 eval/Actions Mean 0.150648 eval/Actions Std 0.588448 eval/Actions Max 0.996717 eval/Actions Min -0.998227 eval/Num Paths 1 eval/Average Returns 1778.49 eval/normalized_score 55.2688 time/evaluation sampling (s) 0.917006 time/logging (s) 0.00242788 time/sampling batch (s) 0.267001 time/saving (s) 0.00304703 time/training (s) 4.19484 time/epoch (s) 5.38432 time/total (s) 32041 Epoch -668 ---------------------------------- --------------- 2022-05-10 22:04:36.683601 PDT | [0] Epoch -667 finished ---------------------------------- --------------- 
epoch -667 replay_buffer/size 999996 trainer/num train calls 334000 trainer/Policy Loss -2.10659 trainer/Log Pis Mean 2.11835 trainer/Log Pis Std 2.62742 trainer/Log Pis Max 13.9923 trainer/Log Pis Min -5.52372 trainer/policy/mean Mean 0.153551 trainer/policy/mean Std 0.610847 trainer/policy/mean Max 0.998372 trainer/policy/mean Min -0.997291 trainer/policy/normal/std Mean 0.387234 trainer/policy/normal/std Std 0.189552 trainer/policy/normal/std Max 0.980758 trainer/policy/normal/std Min 0.0641354 trainer/policy/normal/log_std Mean -1.10394 trainer/policy/normal/log_std Std 0.601973 trainer/policy/normal/log_std Max -0.0194291 trainer/policy/normal/log_std Min -2.74676 eval/num steps total 219664 eval/num paths total 421 eval/path length Mean 494 eval/path length Std 0 eval/path length Max 494 eval/path length Min 494 eval/Rewards Mean 3.07834 eval/Rewards Std 0.759063 eval/Rewards Max 4.61658 eval/Rewards Min 0.985273 eval/Returns Mean 1520.7 eval/Returns Std 0 eval/Returns Max 1520.7 eval/Returns Min 1520.7 eval/Actions Mean 0.156381 eval/Actions Std 0.596766 eval/Actions Max 0.998256 eval/Actions Min -0.999355 eval/Num Paths 1 eval/Average Returns 1520.7 eval/normalized_score 47.3479 time/evaluation sampling (s) 0.909131 time/logging (s) 0.00231271 time/sampling batch (s) 0.26768 time/saving (s) 0.00321737 time/training (s) 4.19486 time/epoch (s) 5.3772 time/total (s) 32046.4 Epoch -667 ---------------------------------- --------------- 2022-05-10 22:04:42.108220 PDT | [0] Epoch -666 finished ---------------------------------- --------------- epoch -666 replay_buffer/size 999996 trainer/num train calls 335000 trainer/Policy Loss -2.33894 trainer/Log Pis Mean 2.18214 trainer/Log Pis Std 2.72697 trainer/Log Pis Max 9.72972 trainer/Log Pis Min -5.57522 trainer/policy/mean Mean 0.132851 trainer/policy/mean Std 0.627687 trainer/policy/mean Max 0.998288 trainer/policy/mean Min -0.996784 trainer/policy/normal/std Mean 0.374647 trainer/policy/normal/std Std 0.179052 
trainer/policy/normal/std Max 0.89106 trainer/policy/normal/std Min 0.0715035 trainer/policy/normal/log_std Mean -1.12681 trainer/policy/normal/log_std Std 0.578959 trainer/policy/normal/log_std Max -0.115343 trainer/policy/normal/log_std Min -2.63801 eval/num steps total 220146 eval/num paths total 422 eval/path length Mean 482 eval/path length Std 0 eval/path length Max 482 eval/path length Min 482 eval/Rewards Mean 3.18817 eval/Rewards Std 0.826185 eval/Rewards Max 4.75062 eval/Rewards Min 0.988539 eval/Returns Mean 1536.7 eval/Returns Std 0 eval/Returns Max 1536.7 eval/Returns Min 1536.7 eval/Actions Mean 0.15113 eval/Actions Std 0.602516 eval/Actions Max 0.997658 eval/Actions Min -0.999129 eval/Num Paths 1 eval/Average Returns 1536.7 eval/normalized_score 47.8394 time/evaluation sampling (s) 0.895233 time/logging (s) 0.00226377 time/sampling batch (s) 0.268434 time/saving (s) 0.00302142 time/training (s) 4.23468 time/epoch (s) 5.40363 time/total (s) 32051.8 Epoch -666 ---------------------------------- --------------- 2022-05-10 22:04:47.477438 PDT | [0] Epoch -665 finished ---------------------------------- --------------- epoch -665 replay_buffer/size 999996 trainer/num train calls 336000 trainer/Policy Loss -2.2393 trainer/Log Pis Mean 2.21089 trainer/Log Pis Std 2.53255 trainer/Log Pis Max 9.66992 trainer/Log Pis Min -6.83073 trainer/policy/mean Mean 0.160993 trainer/policy/mean Std 0.620006 trainer/policy/mean Max 0.997801 trainer/policy/mean Min -0.998509 trainer/policy/normal/std Mean 0.385276 trainer/policy/normal/std Std 0.182061 trainer/policy/normal/std Max 0.976886 trainer/policy/normal/std Min 0.0692916 trainer/policy/normal/log_std Mean -1.09804 trainer/policy/normal/log_std Std 0.580228 trainer/policy/normal/log_std Max -0.0233849 trainer/policy/normal/log_std Min -2.66943 eval/num steps total 221137 eval/num paths total 424 eval/path length Mean 495.5 eval/path length Std 5.5 eval/path length Max 501 eval/path length Min 490 eval/Rewards Mean 
3.10631 eval/Rewards Std 0.805412 eval/Rewards Max 4.91418 eval/Rewards Min 0.986198 eval/Returns Mean 1539.18 eval/Returns Std 12.6066 eval/Returns Max 1551.79 eval/Returns Min 1526.57 eval/Actions Mean 0.157275 eval/Actions Std 0.581616 eval/Actions Max 0.998211 eval/Actions Min -0.998152 eval/Num Paths 2 eval/Average Returns 1539.18 eval/normalized_score 47.9157 time/evaluation sampling (s) 0.876165 time/logging (s) 0.00395828 time/sampling batch (s) 0.268212 time/saving (s) 0.00302382 time/training (s) 4.19872 time/epoch (s) 5.35008 time/total (s) 32057.2 Epoch -665 ---------------------------------- --------------- 2022-05-10 22:04:52.933165 PDT | [0] Epoch -664 finished ---------------------------------- --------------- epoch -664 replay_buffer/size 999996 trainer/num train calls 337000 trainer/Policy Loss -2.15609 trainer/Log Pis Mean 2.19113 trainer/Log Pis Std 2.53392 trainer/Log Pis Max 9.06846 trainer/Log Pis Min -4.24004 trainer/policy/mean Mean 0.115077 trainer/policy/mean Std 0.615186 trainer/policy/mean Max 0.996287 trainer/policy/mean Min -0.997553 trainer/policy/normal/std Mean 0.380502 trainer/policy/normal/std Std 0.17941 trainer/policy/normal/std Max 1.02754 trainer/policy/normal/std Min 0.0757672 trainer/policy/normal/log_std Mean -1.10626 trainer/policy/normal/log_std Std 0.567666 trainer/policy/normal/log_std Max 0.02717 trainer/policy/normal/log_std Min -2.58009 eval/num steps total 221656 eval/num paths total 425 eval/path length Mean 519 eval/path length Std 0 eval/path length Max 519 eval/path length Min 519 eval/Rewards Mean 3.17494 eval/Rewards Std 0.863813 eval/Rewards Max 5.54798 eval/Rewards Min 0.98668 eval/Returns Mean 1647.79 eval/Returns Std 0 eval/Returns Max 1647.79 eval/Returns Min 1647.79 eval/Actions Mean 0.145693 eval/Actions Std 0.578075 eval/Actions Max 0.998023 eval/Actions Min -0.997905 eval/Num Paths 1 eval/Average Returns 1647.79 eval/normalized_score 51.253 time/evaluation sampling (s) 0.900327 time/logging (s) 
0.00238866 time/sampling batch (s) 0.273017 time/saving (s) 0.00315182 time/training (s) 4.25378 time/epoch (s) 5.43267 time/total (s) 32062.6 Epoch -664 ---------------------------------- --------------- 2022-05-10 22:04:58.396505 PDT | [0] Epoch -663 finished ---------------------------------- --------------- epoch -663 replay_buffer/size 999996 trainer/num train calls 338000 trainer/Policy Loss -2.04374 trainer/Log Pis Mean 2.05672 trainer/Log Pis Std 2.5039 trainer/Log Pis Max 8.90513 trainer/Log Pis Min -4.0242 trainer/policy/mean Mean 0.116722 trainer/policy/mean Std 0.613475 trainer/policy/mean Max 0.996331 trainer/policy/mean Min -0.998455 trainer/policy/normal/std Mean 0.379433 trainer/policy/normal/std Std 0.183184 trainer/policy/normal/std Max 0.987434 trainer/policy/normal/std Min 0.069367 trainer/policy/normal/log_std Mean -1.11425 trainer/policy/normal/log_std Std 0.578178 trainer/policy/normal/log_std Max -0.012646 trainer/policy/normal/log_std Min -2.66834 eval/num steps total 222150 eval/num paths total 426 eval/path length Mean 494 eval/path length Std 0 eval/path length Max 494 eval/path length Min 494 eval/Rewards Mean 3.05591 eval/Rewards Std 0.787817 eval/Rewards Max 4.73899 eval/Rewards Min 0.986579 eval/Returns Mean 1509.62 eval/Returns Std 0 eval/Returns Max 1509.62 eval/Returns Min 1509.62 eval/Actions Mean 0.158259 eval/Actions Std 0.575516 eval/Actions Max 0.996098 eval/Actions Min -0.997508 eval/Num Paths 1 eval/Average Returns 1509.62 eval/normalized_score 47.0074 time/evaluation sampling (s) 0.890875 time/logging (s) 0.00225218 time/sampling batch (s) 0.27443 time/saving (s) 0.00294824 time/training (s) 4.27115 time/epoch (s) 5.44166 time/total (s) 32068.1 Epoch -663 ---------------------------------- --------------- 2022-05-10 22:05:03.869991 PDT | [0] Epoch -662 finished ---------------------------------- --------------- epoch -662 replay_buffer/size 999996 trainer/num train calls 339000 trainer/Policy Loss -2.28308 trainer/Log Pis 
Mean 2.36544 trainer/Log Pis Std 2.61144 trainer/Log Pis Max 13.7896 trainer/Log Pis Min -4.51663 trainer/policy/mean Mean 0.118762 trainer/policy/mean Std 0.619894 trainer/policy/mean Max 0.996561 trainer/policy/mean Min -0.999318 trainer/policy/normal/std Mean 0.379572 trainer/policy/normal/std Std 0.181723 trainer/policy/normal/std Max 0.977709 trainer/policy/normal/std Min 0.0714139 trainer/policy/normal/log_std Mean -1.11381 trainer/policy/normal/log_std Std 0.578247 trainer/policy/normal/log_std Max -0.0225434 trainer/policy/normal/log_std Min -2.63926 eval/num steps total 222649 eval/num paths total 427 eval/path length Mean 499 eval/path length Std 0 eval/path length Max 499 eval/path length Min 499 eval/Rewards Mean 3.08894 eval/Rewards Std 0.770447 eval/Rewards Max 4.72206 eval/Rewards Min 0.984466 eval/Returns Mean 1541.38 eval/Returns Std 0 eval/Returns Max 1541.38 eval/Returns Min 1541.38 eval/Actions Mean 0.153071 eval/Actions Std 0.588652 eval/Actions Max 0.997991 eval/Actions Min -0.999288 eval/Num Paths 1 eval/Average Returns 1541.38 eval/normalized_score 47.9834 time/evaluation sampling (s) 0.90544 time/logging (s) 0.00236249 time/sampling batch (s) 0.274249 time/saving (s) 0.00307767 time/training (s) 4.26683 time/epoch (s) 5.45196 time/total (s) 32073.5 Epoch -662 ---------------------------------- --------------- 2022-05-10 22:05:09.291727 PDT | [0] Epoch -661 finished ---------------------------------- ---------------- epoch -661 replay_buffer/size 999996 trainer/num train calls 340000 trainer/Policy Loss -2.28403 trainer/Log Pis Mean 2.22959 trainer/Log Pis Std 2.57413 trainer/Log Pis Max 12.4998 trainer/Log Pis Min -6.0875 trainer/policy/mean Mean 0.141194 trainer/policy/mean Std 0.615939 trainer/policy/mean Max 0.998632 trainer/policy/mean Min -0.998755 trainer/policy/normal/std Mean 0.385817 trainer/policy/normal/std Std 0.186613 trainer/policy/normal/std Max 0.999008 trainer/policy/normal/std Min 0.068703 trainer/policy/normal/log_std 
Mean -1.10277 trainer/policy/normal/log_std Std 0.591302 trainer/policy/normal/log_std Max -0.000992612 trainer/policy/normal/log_std Min -2.67796 eval/num steps total 223634 eval/num paths total 430 eval/path length Mean 328.333 eval/path length Std 27.6446 eval/path length Max 367 eval/path length Min 304 eval/Rewards Mean 3.02208 eval/Rewards Std 0.969445 eval/Rewards Max 4.97641 eval/Rewards Min 0.98095 eval/Returns Mean 992.249 eval/Returns Std 116.247 eval/Returns Max 1154.8 eval/Returns Min 889.705 eval/Actions Mean 0.134153 eval/Actions Std 0.556764 eval/Actions Max 0.997188 eval/Actions Min -0.999652 eval/Num Paths 3 eval/Average Returns 992.249 eval/normalized_score 31.1107 time/evaluation sampling (s) 0.900204 time/logging (s) 0.00363844 time/sampling batch (s) 0.272055 time/saving (s) 0.00294037 time/training (s) 4.22274 time/epoch (s) 5.40158 time/total (s) 32078.9 Epoch -661 ---------------------------------- ---------------- 2022-05-10 22:05:14.740128 PDT | [0] Epoch -660 finished ---------------------------------- --------------- epoch -660 replay_buffer/size 999996 trainer/num train calls 341000 trainer/Policy Loss -2.16673 trainer/Log Pis Mean 2.05128 trainer/Log Pis Std 2.54716 trainer/Log Pis Max 8.99415 trainer/Log Pis Min -3.96834 trainer/policy/mean Mean 0.140792 trainer/policy/mean Std 0.6158 trainer/policy/mean Max 0.997798 trainer/policy/mean Min -0.998504 trainer/policy/normal/std Mean 0.390324 trainer/policy/normal/std Std 0.185242 trainer/policy/normal/std Max 0.958286 trainer/policy/normal/std Min 0.0715727 trainer/policy/normal/log_std Mean -1.08659 trainer/policy/normal/log_std Std 0.584049 trainer/policy/normal/log_std Max -0.042609 trainer/policy/normal/log_std Min -2.63704 eval/num steps total 224514 eval/num paths total 432 eval/path length Mean 440 eval/path length Std 34 eval/path length Max 474 eval/path length Min 406 eval/Rewards Mean 3.17252 eval/Rewards Std 0.835163 eval/Rewards Max 4.77532 eval/Rewards Min 0.97977 
eval/Returns Mean 1395.91 eval/Returns Std 139.234 eval/Returns Max 1535.14 eval/Returns Min 1256.68 eval/Actions Mean 0.151074 eval/Actions Std 0.585204 eval/Actions Max 0.998245 eval/Actions Min -0.999234 eval/Num Paths 2 eval/Average Returns 1395.91 eval/normalized_score 43.5136 time/evaluation sampling (s) 0.902462 time/logging (s) 0.00339555 time/sampling batch (s) 0.273443 time/saving (s) 0.00302284 time/training (s) 4.24424 time/epoch (s) 5.42656 time/total (s) 32084.4 Epoch -660 ---------------------------------- --------------- 2022-05-10 22:05:20.215030 PDT | [0] Epoch -659 finished ---------------------------------- --------------- epoch -659 replay_buffer/size 999996 trainer/num train calls 342000 trainer/Policy Loss -2.23023 trainer/Log Pis Mean 2.34725 trainer/Log Pis Std 2.6369 trainer/Log Pis Max 9.61824 trainer/Log Pis Min -4.51364 trainer/policy/mean Mean 0.132695 trainer/policy/mean Std 0.613726 trainer/policy/mean Max 0.997916 trainer/policy/mean Min -0.998704 trainer/policy/normal/std Mean 0.378591 trainer/policy/normal/std Std 0.183176 trainer/policy/normal/std Max 0.964966 trainer/policy/normal/std Min 0.0675933 trainer/policy/normal/log_std Mean -1.12313 trainer/policy/normal/log_std Std 0.595024 trainer/policy/normal/log_std Max -0.0356627 trainer/policy/normal/log_std Min -2.69425 eval/num steps total 225011 eval/num paths total 433 eval/path length Mean 497 eval/path length Std 0 eval/path length Max 497 eval/path length Min 497 eval/Rewards Mean 3.07523 eval/Rewards Std 0.785043 eval/Rewards Max 4.66214 eval/Rewards Min 0.989573 eval/Returns Mean 1528.39 eval/Returns Std 0 eval/Returns Max 1528.39 eval/Returns Min 1528.39 eval/Actions Mean 0.159943 eval/Actions Std 0.588641 eval/Actions Max 0.996623 eval/Actions Min -0.999241 eval/Num Paths 1 eval/Average Returns 1528.39 eval/normalized_score 47.5842 time/evaluation sampling (s) 0.899656 time/logging (s) 0.00231811 time/sampling batch (s) 0.274352 time/saving (s) 0.00304913 time/training 
(s) 4.27287 time/epoch (s) 5.45225 time/total (s) 32089.8 Epoch -659 ---------------------------------- --------------- 2022-05-10 22:05:25.629251 PDT | [0] Epoch -658 finished ---------------------------------- --------------- epoch -658 replay_buffer/size 999996 trainer/num train calls 343000 trainer/Policy Loss -1.99307 trainer/Log Pis Mean 1.99698 trainer/Log Pis Std 2.32536 trainer/Log Pis Max 9.49464 trainer/Log Pis Min -5.55112 trainer/policy/mean Mean 0.171833 trainer/policy/mean Std 0.590963 trainer/policy/mean Max 0.996852 trainer/policy/mean Min -0.99786 trainer/policy/normal/std Mean 0.378727 trainer/policy/normal/std Std 0.18373 trainer/policy/normal/std Max 0.989394 trainer/policy/normal/std Min 0.0703626 trainer/policy/normal/log_std Mean -1.12121 trainer/policy/normal/log_std Std 0.590834 trainer/policy/normal/log_std Max -0.0106626 trainer/policy/normal/log_std Min -2.65409 eval/num steps total 226002 eval/num paths total 435 eval/path length Mean 495.5 eval/path length Std 0.5 eval/path length Max 496 eval/path length Min 495 eval/Rewards Mean 3.06125 eval/Rewards Std 0.790161 eval/Rewards Max 4.76101 eval/Rewards Min 0.982905 eval/Returns Mean 1516.85 eval/Returns Std 2.41016 eval/Returns Max 1519.26 eval/Returns Min 1514.44 eval/Actions Mean 0.161083 eval/Actions Std 0.58288 eval/Actions Max 0.99812 eval/Actions Min -0.99704 eval/Num Paths 2 eval/Average Returns 1516.85 eval/normalized_score 47.2296 time/evaluation sampling (s) 0.901601 time/logging (s) 0.0037218 time/sampling batch (s) 0.271758 time/saving (s) 0.00306124 time/training (s) 4.21425 time/epoch (s) 5.39439 time/total (s) 32095.2 Epoch -658 ---------------------------------- --------------- 2022-05-10 22:05:31.058596 PDT | [0] Epoch -657 finished ---------------------------------- --------------- epoch -657 replay_buffer/size 999996 trainer/num train calls 344000 trainer/Policy Loss -2.28378 trainer/Log Pis Mean 2.25825 trainer/Log Pis Std 2.59409 trainer/Log Pis Max 14.7661 
trainer/Log Pis  Min -5.4211
trainer/policy/mean  Mean 0.167544  Std 0.612893  Max 0.99658  Min -0.999296
trainer/policy/normal/std  Mean 0.375635  Std 0.180645  Max 0.945839  Min 0.0720094
trainer/policy/normal/log_std  Mean -1.12719  Std 0.586466  Max -0.0556829  Min -2.63096
eval/num steps total 226629  eval/num paths total 436
eval/path length  Mean 627  Std 0  Max 627  Min 627
eval/Rewards  Mean 3.26103  Std 0.773172  Max 4.75701  Min 0.987344
eval/Returns  Mean 2044.67  Std 0  Max 2044.67  Min 2044.67
eval/Actions  Mean 0.153928  Std 0.585726  Max 0.998578  Min -0.998023
eval/Num Paths 1  eval/Average Returns 2044.67  eval/normalized_score 63.4474
time/evaluation sampling (s) 0.910754  time/logging (s) 0.00267539  time/sampling batch (s) 0.272421  time/saving (s) 0.00317414  time/training (s) 4.21774  time/epoch (s) 5.40677  time/total (s) 32100.6
Epoch -657
----------------------------------  ----------------
2022-05-10 22:05:36.509036 PDT | [0] Epoch -656 finished
----------------------------------  ----------------
epoch -656  replay_buffer/size 999996  trainer/num train calls 345000  trainer/Policy Loss -2.15987
trainer/Log Pis  Mean 2.16847  Std 2.62094  Max 9.79179  Min -5.87818
trainer/policy/mean  Mean 0.16073  Std 0.606083  Max 0.997341  Min -0.998276
trainer/policy/normal/std  Mean 0.376979  Std 0.178876  Max 0.923562  Min 0.0767447
trainer/policy/normal/log_std  Mean -1.11874  Std 0.574868  Max -0.0795169  Min -2.56727
eval/num steps total 227177  eval/num paths total 437
eval/path length  Mean 548  Std 0  Max 548  Min 548
eval/Rewards  Mean 3.1838  Std 0.825401  Max 4.81657  Min 0.976762
eval/Returns  Mean 1744.72  Std 0  Max 1744.72  Min 1744.72
eval/Actions  Mean 0.145751  Std 0.570961  Max 0.996651  Min -0.998737
eval/Num Paths 1  eval/Average Returns 1744.72  eval/normalized_score 54.2312
time/evaluation sampling (s) 0.919971  time/logging (s) 0.00241473  time/sampling batch (s) 0.272415  time/saving (s) 0.00313525  time/training (s) 4.23087  time/epoch (s) 5.42881  time/total (s) 32106.1
Epoch -656
----------------------------------  ----------------
2022-05-10 22:05:41.977844 PDT | [0] Epoch -655 finished
----------------------------------  ----------------
epoch -655  replay_buffer/size 999996  trainer/num train calls 346000  trainer/Policy Loss -2.15854
trainer/Log Pis  Mean 2.29161  Std 2.4914  Max 9.647  Min -4.5725
trainer/policy/mean  Mean 0.128928  Std 0.621891  Max 0.998351  Min -0.997572
trainer/policy/normal/std  Mean 0.388867  Std 0.187424  Max 0.987462  Min 0.0626007
trainer/policy/normal/log_std  Mean -1.09215  Std 0.584896  Max -0.0126169  Min -2.77098
eval/num steps total 228152  eval/num paths total 439
eval/path length  Mean 487.5  Std 52.5  Max 540  Min 435
eval/Rewards  Mean 3.18876  Std 0.867309  Max 5.47211  Min 0.981928
eval/Returns  Mean 1554.52  Std 189.049  Max 1743.57  Min 1365.47
eval/Actions  Mean 0.153346  Std 0.587088  Max 0.998684  Min -0.998424
eval/Num Paths 2  eval/Average Returns 1554.52  eval/normalized_score 48.387
time/evaluation sampling (s) 0.907911  time/logging (s) 0.00367578  time/sampling batch (s) 0.274048  time/saving (s) 0.00311447  time/training (s) 4.25987  time/epoch (s) 5.44862  time/total (s) 32111.5
Epoch -655
----------------------------------  ----------------
2022-05-10 22:05:47.419342 PDT | [0] Epoch -654 finished
----------------------------------  ----------------
epoch -654  replay_buffer/size 999996  trainer/num train calls 347000  trainer/Policy Loss -2.1259
trainer/Log Pis  Mean 2.16018  Std 2.60263  Max 12.1294  Min -4.70179
trainer/policy/mean  Mean 0.139345  Std 0.605774  Max 0.999046  Min -0.998428
trainer/policy/normal/std  Mean 0.388247  Std 0.184572  Max 0.926936  Min 0.070253
trainer/policy/normal/log_std  Mean -1.08772  Std 0.571249  Max -0.075871  Min -2.65565
eval/num steps total 228643  eval/num paths total 440
eval/path length  Mean 491  Std 0  Max 491  Min 491
eval/Rewards  Mean 3.04361  Std 0.779372  Max 4.61302  Min 0.986635
eval/Returns  Mean 1494.41  Std 0  Max 1494.41  Min 1494.41
eval/Actions  Mean 0.152377  Std 0.583306  Max 0.997888  Min -0.997761
eval/Num Paths 1  eval/Average Returns 1494.41  eval/normalized_score 46.5402
time/evaluation sampling (s) 0.901058  time/logging (s) 0.00224135  time/sampling batch (s) 0.272547  time/saving (s) 0.00295879  time/training (s) 4.23989  time/epoch (s) 5.4187  time/total (s) 32116.9
Epoch -654
----------------------------------  ----------------
2022-05-10 22:05:52.954343 PDT | [0] Epoch -653 finished
----------------------------------  ----------------
epoch -653  replay_buffer/size 999996  trainer/num train calls 348000  trainer/Policy Loss -2.24834
trainer/Log Pis  Mean 2.27862  Std 2.47346  Max 10.0516  Min -5.9874
trainer/policy/mean  Mean 0.164695  Std 0.62027  Max 0.997748  Min -0.99848
trainer/policy/normal/std  Mean 0.378723  Std 0.184761  Max 1.15677  Min 0.0695447
trainer/policy/normal/log_std  Mean -1.12149  Std 0.590384  Max 0.145629  Min -2.66579
eval/num steps total 229209  eval/num paths total 441
eval/path length  Mean 566  Std 0  Max 566  Min 566
eval/Rewards  Mean 3.23783  Std 0.805358  Max 4.75243  Min 0.982179
eval/Returns  Mean 1832.61  Std 0  Max 1832.61  Min 1832.61
eval/Actions  Mean 0.158202  Std 0.576847  Max 0.998028  Min -0.998747
eval/Num Paths 1  eval/Average Returns 1832.61  eval/normalized_score 56.9318
time/evaluation sampling (s) 0.971156  time/logging (s) 0.00251178  time/sampling batch (s) 0.274382  time/saving (s) 0.00303652  time/training (s) 4.26271  time/epoch (s) 5.5138  time/total (s) 32122.4
Epoch -653
----------------------------------  ----------------
2022-05-10 22:05:58.365714 PDT | [0] Epoch -652 finished
----------------------------------  ----------------
epoch -652  replay_buffer/size 999996  trainer/num train calls 349000  trainer/Policy Loss -2.17443
trainer/Log Pis  Mean 2.21028  Std 2.54399  Max 12.8305  Min -5.06863
trainer/policy/mean  Mean 0.149496  Std 0.618901  Max 0.996978  Min -0.999442
trainer/policy/normal/std  Mean 0.381481  Std 0.182194  Max 0.972331  Min 0.0676833
trainer/policy/normal/log_std  Mean -1.1118  Std 0.588667  Max -0.0280585  Min -2.69292
eval/num steps total 230201  eval/num paths total 443
eval/path length  Mean 496  Std 2  Max 498  Min 494
eval/Rewards  Mean 3.09142  Std 0.783536  Max 4.86642  Min 0.979548
eval/Returns  Mean 1533.34  Std 5.3441  Max 1538.69  Min 1528
eval/Actions  Mean 0.152707  Std 0.588891  Max 0.998602  Min -0.998276
eval/Num Paths 2  eval/Average Returns 1533.34  eval/normalized_score 47.7364
time/evaluation sampling (s) 0.891513  time/logging (s) 0.00369688  time/sampling batch (s) 0.271409  time/saving (s) 0.00306292  time/training (s) 4.22134  time/epoch (s) 5.39102  time/total (s) 32127.8
Epoch -652
----------------------------------  ----------------
2022-05-10 22:06:03.815247 PDT | [0] Epoch -651 finished
----------------------------------  ----------------
epoch -651  replay_buffer/size 999996  trainer/num train calls 350000  trainer/Policy Loss -2.17266
trainer/Log Pis  Mean 2.17844  Std 2.60262  Max 9.11212  Min -5.61508
trainer/policy/mean  Mean 0.133677  Std 0.610518  Max 0.997958  Min -0.99816
trainer/policy/normal/std  Mean 0.374995  Std 0.179857  Max 0.860606  Min 0.0649497
trainer/policy/normal/log_std  Mean -1.12822  Std 0.585394  Max -0.150119  Min -2.73414
eval/num steps total 231198  eval/num paths total 445
eval/path length  Mean 498.5  Std 6.5  Max 505  Min 492
eval/Rewards  Mean 3.18577  Std 0.766517  Max 4.89037  Min 0.978115
eval/Returns  Mean 1588.11  Std 16.2225  Max 1604.33  Min 1571.89
eval/Actions  Mean 0.15519  Std 0.594321  Max 0.998699  Min -0.998311
eval/Num Paths 2  eval/Average Returns 1588.11  eval/normalized_score 49.4191
time/evaluation sampling (s) 0.894933  time/logging (s) 0.00371508  time/sampling batch (s) 0.274639  time/saving (s) 0.0031735  time/training (s) 4.2517  time/epoch (s) 5.42816  time/total (s) 32133.3
Epoch -651
----------------------------------  ----------------
2022-05-10 22:06:09.279378 PDT | [0] Epoch -650 finished
----------------------------------  ----------------
epoch -650  replay_buffer/size 999996  trainer/num train calls 351000  trainer/Policy Loss -1.94579
trainer/Log Pis  Mean 1.87532  Std 2.56593  Max 11.2529  Min -5.36478
trainer/policy/mean  Mean 0.14478  Std 0.604079  Max 0.998052  Min -0.998074
trainer/policy/normal/std  Mean 0.389993  Std 0.181445  Max 1.13983  Min 0.0709404
trainer/policy/normal/log_std  Mean -1.07637  Std 0.556043  Max 0.130876  Min -2.64592
eval/num steps total 232140  eval/num paths total 447
eval/path length  Mean 471  Std 3  Max 474  Min 468
eval/Rewards  Mean 3.22217  Std 0.842096  Max 4.92146  Min 0.980679
eval/Returns  Mean 1517.64  Std 0.977977  Max 1518.62  Min 1516.66
eval/Actions  Mean 0.137728  Std 0.585999  Max 0.996543  Min -0.998686
eval/Num Paths 2  eval/Average Returns 1517.64  eval/normalized_score 47.254
time/evaluation sampling (s) 0.89576  time/logging (s) 0.00358018  time/sampling batch (s) 0.274578  time/saving (s) 0.00299951  time/training (s) 4.26533  time/epoch (s) 5.44224  time/total (s) 32138.7
Epoch -650
----------------------------------  ----------------
2022-05-10 22:06:14.717890 PDT | [0] Epoch -649 finished
----------------------------------  ----------------
epoch -649  replay_buffer/size 999996  trainer/num train calls 352000  trainer/Policy Loss -2.09075
trainer/Log Pis  Mean 2.10666  Std 2.59883  Max 9.7977  Min -4.57615
trainer/policy/mean  Mean 0.111749  Std 0.615343  Max 0.996522  Min -0.997444
trainer/policy/normal/std  Mean 0.386556  Std 0.18995  Max 0.929894  Min 0.0676533
trainer/policy/normal/log_std  Mean -1.10282  Std 0.59243  Max -0.0726843  Min -2.69336
eval/num steps total 232624  eval/num paths total 448
eval/path length  Mean 484  Std 0  Max 484  Min 484
eval/Rewards  Mean 3.16146  Std 0.832658  Max 4.78956  Min 0.98364
eval/Returns  Mean 1530.15  Std 0  Max 1530.15  Min 1530.15
eval/Actions  Mean 0.151973  Std 0.594082  Max 0.998627  Min -0.998246
eval/Num Paths 1  eval/Average Returns 1530.15  eval/normalized_score 47.6382
time/evaluation sampling (s) 0.896379  time/logging (s) 0.00226042  time/sampling batch (s) 0.273203  time/saving (s) 0.00299857  time/training (s) 4.24114  time/epoch (s) 5.41598  time/total (s) 32144.1
Epoch -649
----------------------------------  ----------------
2022-05-10 22:06:20.198897 PDT | [0] Epoch -648 finished
----------------------------------  ----------------
epoch -648  replay_buffer/size 999996  trainer/num train calls 353000  trainer/Policy Loss -2.16906
trainer/Log Pis  Mean 2.27793  Std 2.7432  Max 9.22435  Min -5.95655
trainer/policy/mean  Mean 0.137335  Std 0.612382  Max 0.998766  Min -0.998598
trainer/policy/normal/std  Mean 0.383805  Std 0.192622  Max 1.02577  Min 0.0669968
trainer/policy/normal/log_std  Mean -1.11642  Std 0.605408  Max 0.0254395  Min -2.70311
eval/num steps total 233510  eval/num paths total 450
eval/path length  Mean 443  Std 34  Max 477  Min 409
eval/Rewards  Mean 3.14606  Std 0.811683  Max 4.66589  Min 0.979862
eval/Returns  Mean 1393.7  Std 140.177  Max 1533.88  Min 1253.53
eval/Actions  Mean 0.149619  Std 0.597315  Max 0.99729  Min -0.998405
eval/Num Paths 2  eval/Average Returns 1393.7  eval/normalized_score 43.4458
time/evaluation sampling (s) 0.904873  time/logging (s) 0.00351138  time/sampling batch (s) 0.275971  time/saving (s) 0.00317949  time/training (s) 4.27345  time/epoch (s) 5.46098  time/total (s) 32149.6
Epoch -648
----------------------------------  ----------------
2022-05-10 22:06:25.670047 PDT | [0] Epoch -647 finished
----------------------------------  ----------------
epoch -647  replay_buffer/size 999996  trainer/num train calls 354000  trainer/Policy Loss -2.26037
trainer/Log Pis  Mean 2.51424  Std 2.48613  Max 14.6905  Min -3.33079
trainer/policy/mean  Mean 0.135197  Std 0.634342  Max 0.996109  Min -0.999323
trainer/policy/normal/std  Mean 0.381869  Std 0.185107  Max 0.952464  Min 0.0658787
trainer/policy/normal/log_std  Mean -1.11728  Std 0.602865  Max -0.0487026  Min -2.71994
eval/num steps total 234380  eval/num paths total 453
eval/path length  Mean 290  Std 15.5563  Max 312  Min 279
eval/Rewards  Mean 2.94587  Std 1.08849  Max 5.524  Min 0.979628
eval/Returns  Mean 854.302  Std 46.9434  Max 920.689  Min 820.729
eval/Actions  Mean 0.0970234  Std 0.538013  Max 0.997915  Min -0.9999
eval/Num Paths 3  eval/Average Returns 854.302  eval/normalized_score 26.8722
time/evaluation sampling (s) 0.919918  time/logging (s) 0.00342099  time/sampling batch (s) 0.273509  time/saving (s) 0.0030626  time/training (s) 4.24939  time/epoch (s) 5.4493  time/total (s) 32155
Epoch -647
----------------------------------  ----------------
2022-05-10 22:06:31.172855 PDT | [0] Epoch -646 finished
----------------------------------  ----------------
epoch -646  replay_buffer/size 999996  trainer/num train calls 355000  trainer/Policy Loss -2.30091
trainer/Log Pis  Mean 2.2255  Std 2.46846  Max 9.6056  Min -5.14722
trainer/policy/mean  Mean 0.115178  Std 0.625109  Max 0.996435  Min -0.998357
trainer/policy/normal/std  Mean 0.382399  Std 0.184223  Max 1.00804  Min 0.0669205
trainer/policy/normal/log_std  Mean -1.1098  Std 0.587389  Max 0.00800839  Min -2.70425
eval/num steps total 234958  eval/num paths total 454
eval/path length  Mean 578  Std 0  Max 578  Min 578
eval/Rewards  Mean 3.19893  Std 0.778779  Max 4.88529  Min 0.981588
eval/Returns  Mean 1848.98  Std 0  Max 1848.98  Min 1848.98
eval/Actions  Mean 0.165542  Std 0.599335  Max 0.997993  Min -0.998265
eval/Num Paths 1  eval/Average Returns 1848.98  eval/normalized_score 57.4347
time/evaluation sampling (s) 0.928843  time/logging (s) 0.0027423  time/sampling batch (s) 0.276737  time/saving (s) 0.00331584  time/training (s) 4.26874  time/epoch (s) 5.48038  time/total (s) 32160.5
Epoch -646
----------------------------------  ----------------
2022-05-10 22:06:36.652907 PDT | [0] Epoch -645 finished
----------------------------------  ----------------
epoch -645  replay_buffer/size 999996  trainer/num train calls 356000  trainer/Policy Loss -2.32053
trainer/Log Pis  Mean 2.32648  Std 2.60316  Max 10.3439  Min -6.65178
trainer/policy/mean  Mean 0.136178  Std 0.617552  Max 0.99819  Min -0.998479
trainer/policy/normal/std  Mean 0.380428  Std 0.183042  Max 1.0245  Min 0.0692258
trainer/policy/normal/log_std  Mean -1.11322  Std 0.583324  Max 0.0242027  Min -2.67038
eval/num steps total 235607  eval/num paths total 455
eval/path length  Mean 649  Std 0  Max 649  Min 649
eval/Rewards  Mean 3.18615  Std 0.7212  Max 4.91299  Min 0.983698
eval/Returns  Mean 2067.81  Std 0  Max 2067.81  Min 2067.81
eval/Actions  Mean 0.151668  Std 0.599264  Max 0.998523  Min -0.998078
eval/Num Paths 1  eval/Average Returns 2067.81  eval/normalized_score 64.1584
time/evaluation sampling (s) 0.924438  time/logging (s) 0.00271603  time/sampling batch (s) 0.274194  time/saving (s) 0.00295666  time/training (s) 4.254  time/epoch (s) 5.45831  time/total (s) 32166
Epoch -645
----------------------------------  ----------------
2022-05-10 22:06:42.118440 PDT | [0] Epoch -644 finished
----------------------------------  ----------------
epoch -644  replay_buffer/size 999996  trainer/num train calls 357000  trainer/Policy Loss -2.10307
trainer/Log Pis  Mean 2.16347  Std 2.56408  Max 14.3038  Min -7.22073
trainer/policy/mean  Mean 0.132216  Std 0.614684  Max 0.99764  Min -0.998832
trainer/policy/normal/std  Mean 0.37946  Std 0.183626  Max 0.995355  Min 0.0663043
trainer/policy/normal/log_std  Mean -1.11926  Std 0.590256  Max -0.00465551  Min -2.7135
eval/num steps total 236581  eval/num paths total 457
eval/path length  Mean 487  Std 75  Max 562  Min 412
eval/Rewards  Mean 3.15652  Std 0.802987  Max 4.82506  Min 0.984912
eval/Returns  Mean 1537.23  Std 274.051  Max 1811.28  Min 1263.17
eval/Actions  Mean 0.151607  Std 0.599632  Max 0.997979  Min -0.999253
eval/Num Paths 2  eval/Average Returns 1537.23  eval/normalized_score 47.8557
time/evaluation sampling (s) 0.907842  time/logging (s) 0.0036058  time/sampling batch (s) 0.273668  time/saving (s) 0.00302346  time/training (s) 4.25687  time/epoch (s) 5.44501  time/total (s) 32171.4
Epoch -644
----------------------------------  ----------------
2022-05-10 22:06:47.577799 PDT | [0] Epoch -643 finished
----------------------------------  ----------------
epoch -643  replay_buffer/size 999996  trainer/num train calls 358000  trainer/Policy Loss -2.22813
trainer/Log Pis  Mean 2.34103  Std 2.64606  Max 16.1246  Min -4.35152
trainer/policy/mean  Mean 0.155121  Std 0.61837  Max 0.998422  Min -0.999791
trainer/policy/normal/std  Mean 0.379654  Std 0.185753  Max 0.977329  Min 0.0723373
trainer/policy/normal/log_std  Mean -1.11805  Std 0.586775  Max -0.0229322  Min -2.62641
eval/num steps total 237073  eval/num paths total 458
eval/path length  Mean 492  Std 0  Max 492  Min 492
eval/Rewards  Mean 3.19541  Std 0.798529  Max 5.01901  Min 0.981881
eval/Returns  Mean 1572.14  Std 0  Max 1572.14  Min 1572.14
eval/Actions  Mean 0.151154  Std 0.587122  Max 0.998188  Min -0.998552
eval/Num Paths 1  eval/Average Returns 1572.14  eval/normalized_score 48.9285
time/evaluation sampling (s) 0.892535  time/logging (s) 0.00234169  time/sampling batch (s) 0.273477  time/saving (s) 0.00311592  time/training (s) 4.26493  time/epoch (s) 5.4364  time/total (s) 32176.9
Epoch -643
----------------------------------  ----------------
2022-05-10 22:06:53.071975 PDT | [0] Epoch -642 finished
----------------------------------  ----------------
epoch -642  replay_buffer/size 999996  trainer/num train calls 359000  trainer/Policy Loss -2.06105
trainer/Log Pis  Mean 2.10334  Std 2.65269  Max 9.50108  Min -7.64702
trainer/policy/mean  Mean 0.144965  Std 0.604354  Max 0.997053  Min -0.996698
trainer/policy/normal/std  Mean 0.36915  Std 0.178288
trainer/policy/normal/std  Max 0.979412  Min 0.0690435
trainer/policy/normal/log_std  Mean -1.14335  Std 0.582122  Max -0.0208026  Min -2.67302
eval/num steps total 237616  eval/num paths total 459
eval/path length  Mean 543  Std 0  Max 543  Min 543
eval/Rewards  Mean 3.20084  Std 0.835446  Max 4.92961  Min 0.985222
eval/Returns  Mean 1738.06  Std 0  Max 1738.06  Min 1738.06
eval/Actions  Mean 0.14231  Std 0.572238  Max 0.998121  Min -0.998467
eval/Num Paths 1  eval/Average Returns 1738.06  eval/normalized_score 54.0264
time/evaluation sampling (s) 0.905746  time/logging (s) 0.00249554  time/sampling batch (s) 0.275218  time/saving (s) 0.0030666  time/training (s) 4.28596  time/epoch (s) 5.47249  time/total (s) 32182.4
Epoch -642
----------------------------------  ----------------
2022-05-10 22:06:58.527053 PDT | [0] Epoch -641 finished
----------------------------------  ----------------
epoch -641  replay_buffer/size 999996  trainer/num train calls 360000  trainer/Policy Loss -2.08729
trainer/Log Pis  Mean 2.11086  Std 2.51675  Max 9.43597  Min -4.72268
trainer/policy/mean  Mean 0.163575  Std 0.608364  Max 0.998305  Min -0.997811
trainer/policy/normal/std  Mean 0.384061  Std 0.184047  Max 0.934508  Min 0.0723086
trainer/policy/normal/log_std  Mean -1.10372  Std 0.583566  Max -0.0677348  Min -2.62681
eval/num steps total 238187  eval/num paths total 460
eval/path length  Mean 571  Std 0  Max 571  Min 571
eval/Rewards  Mean 3.21744  Std 0.755267  Max 4.77076  Min 0.988885
eval/Returns  Mean 1837.16  Std 0  Max 1837.16  Min 1837.16
eval/Actions  Mean 0.154981  Std 0.600562  Max 0.998075  Min -0.998793
eval/Num Paths 1  eval/Average Returns 1837.16  eval/normalized_score 57.0713
time/evaluation sampling (s) 0.896413  time/logging (s) 0.0024343  time/sampling batch (s) 0.278577  time/saving (s) 0.00308438  time/training (s) 4.2528  time/epoch (s) 5.43331  time/total (s) 32187.8
Epoch -641
----------------------------------  ----------------
2022-05-10 22:07:04.005593 PDT | [0] Epoch -640 finished
----------------------------------  ----------------
epoch -640  replay_buffer/size 999996  trainer/num train calls 361000  trainer/Policy Loss -2.13772
trainer/Log Pis  Mean 2.24704  Std 2.56124  Max 12.2677  Min -4.16584
trainer/policy/mean  Mean 0.165669  Std 0.616107  Max 0.996523  Min -0.997226
trainer/policy/normal/std  Mean 0.381906  Std 0.185151  Max 0.979852  Min 0.0691927
trainer/policy/normal/log_std  Mean -1.11402  Std 0.593533  Max -0.0203539  Min -2.67086
eval/num steps total 239183  eval/num paths total 462
eval/path length  Mean 498  Std 22  Max 520  Min 476
eval/Rewards  Mean 3.1153  Std 0.850728  Max 5.48084  Min 0.983278
eval/Returns  Mean 1551.42  Std 79.6872  Max 1631.11  Min 1471.73
eval/Actions  Mean 0.154041  Std 0.583732  Max 0.998123  Min -0.998486
eval/Num Paths 2  eval/Average Returns 1551.42  eval/normalized_score 48.2918
time/evaluation sampling (s) 0.897308  time/logging (s) 0.00379285  time/sampling batch (s) 0.274978  time/saving (s) 0.00315517  time/training (s) 4.27908  time/epoch (s) 5.45831  time/total (s) 32193.3
Epoch -640
----------------------------------  ----------------
2022-05-10 22:07:09.470883 PDT | [0] Epoch -639 finished
----------------------------------  ----------------
epoch -639  replay_buffer/size 999996  trainer/num train calls 362000  trainer/Policy Loss -2.16891
trainer/Log Pis  Mean 2.24302  Std 2.50212  Max 10.0916  Min -4.67657
trainer/policy/mean  Mean 0.139688  Std 0.615594  Max 0.997216  Min -0.998298
trainer/policy/normal/std  Mean 0.386359  Std 0.180192  Max 0.93858  Min 0.0746411
trainer/policy/normal/log_std  Mean -1.08965  Std 0.566934  Max -0.063387  Min -2.59506
eval/num steps total 239768  eval/num paths total 463
eval/path length  Mean 585  Std 0  Max 585  Min 585
eval/Rewards  Mean 3.16905  Std 0.720306  Max 5.03817  Min 0.982744
eval/Returns  Mean 1853.9  Std 0  Max 1853.9  Min 1853.9
eval/Actions  Mean 0.151294  Std 0.601024  Max 0.997973  Min -0.996846
eval/Num Paths 1  eval/Average Returns 1853.9  eval/normalized_score 57.5857
time/evaluation sampling (s) 0.898106  time/logging (s) 0.00255382  time/sampling batch (s) 0.274419  time/saving (s) 0.00300313  time/training (s) 4.26437  time/epoch (s) 5.44245  time/total (s) 32198.7
Epoch -639
----------------------------------  ----------------
2022-05-10 22:07:14.952583 PDT | [0] Epoch -638 finished
----------------------------------  ----------------
epoch -638  replay_buffer/size 999996  trainer/num train calls 363000  trainer/Policy Loss -2.40665
trainer/Log Pis  Mean 2.476  Std 2.64414  Max 13.6242  Min -4.88944
trainer/policy/mean  Mean 0.143114  Std 0.630583  Max 0.998604  Min -0.999108
trainer/policy/normal/std  Mean 0.379939  Std 0.186619  Max 1.01434  Min 0.0712116
trainer/policy/normal/log_std  Mean -1.12254  Std 0.600015  Max 0.0142429  Min -2.6421
eval/num steps total 240268  eval/num paths total 464
eval/path length  Mean 500  Std 0  Max 500  Min 500
eval/Rewards  Mean 3.08668  Std 0.766533  Max 4.72396  Min 0.986308
eval/Returns  Mean 1543.34  Std 0  Max 1543.34  Min 1543.34
eval/Actions  Mean 0.153847  Std 0.58984  Max 0.997759  Min -0.996662
eval/Num Paths 1  eval/Average Returns 1543.34  eval/normalized_score 48.0436
time/evaluation sampling (s) 0.903399  time/logging (s) 0.0022282  time/sampling batch (s) 0.273973  time/saving (s) 0.00296023  time/training (s) 4.27728  time/epoch (s) 5.45984  time/total (s) 32204.2
Epoch -638
----------------------------------  ----------------
2022-05-10 22:07:20.433729 PDT | [0] Epoch -637 finished
----------------------------------  ----------------
epoch -637  replay_buffer/size 999996  trainer/num train calls 364000  trainer/Policy Loss -2.20473
trainer/Log Pis  Mean 2.18711  Std 2.55722  Max 14.9858  Min -4.69674
trainer/policy/mean  Mean 0.104673  Std 0.623996  Max 0.997502  Min -0.999433
trainer/policy/normal/std  Mean 0.379691  Std 0.183725  Max 0.931775  Min 0.0727844
trainer/policy/normal/log_std  Mean -1.11516  Std 0.581107  Max -0.0706634  Min -2.62025
eval/num steps total 240887  eval/num paths total 465
eval/path length  Mean 619  Std 0  Max 619  Min 619
eval/Rewards  Mean 3.28335  Std 0.776982  Max 5.42737  Min 0.989891
eval/Returns  Mean 2032.39  Std 0  Max 2032.39  Min 2032.39
eval/Actions  Mean 0.155083  Std 0.591892  Max 0.99845  Min -0.998464
eval/Num Paths 1  eval/Average Returns 2032.39  eval/normalized_score 63.0702
time/evaluation sampling (s) 0.915776  time/logging (s) 0.0026133  time/sampling batch (s) 0.274699  time/saving (s) 0.00304165  time/training (s) 4.26382  time/epoch (s) 5.45995  time/total (s) 32209.6
Epoch -637
----------------------------------  ----------------
2022-05-10 22:07:25.871764 PDT | [0] Epoch -636 finished
----------------------------------  ----------------
epoch -636  replay_buffer/size 999996  trainer/num train calls 365000  trainer/Policy Loss -2.11719
trainer/Log Pis  Mean 1.9892  Std 2.56348  Max 10.8118  Min -7.79705
trainer/policy/mean  Mean 0.130134  Std 0.614507  Max 0.998935  Min -0.997599
trainer/policy/normal/std  Mean 0.393  Std 0.188514  Max 0.998531  Min 0.0697912
trainer/policy/normal/log_std  Mean -1.07805  Std 0.575814  Max -0.00147051  Min -2.66225
eval/num steps total 241458  eval/num paths total 466
eval/path length  Mean 571  Std 0  Max 571  Min 571
eval/Rewards  Mean 3.23944  Std 0.80409  Max 4.79569  Min 0.986857
eval/Returns  Mean 1849.72
eval/Returns  Std 0  Max 1849.72  Min 1849.72
eval/Actions  Mean 0.159588  Std 0.607255  Max 0.998127  Min -0.998649
eval/Num Paths 1  eval/Average Returns 1849.72  eval/normalized_score 57.4574
time/evaluation sampling (s) 0.918641  time/logging (s) 0.00247002  time/sampling batch (s) 0.272224  time/saving (s) 0.00297468  time/training (s) 4.22031  time/epoch (s) 5.41662  time/total (s) 32215
Epoch -636
----------------------------------  ----------------
2022-05-10 22:07:31.322088 PDT | [0] Epoch -635 finished
----------------------------------  ----------------
epoch -635  replay_buffer/size 999996  trainer/num train calls 366000  trainer/Policy Loss -2.12694
trainer/Log Pis  Mean 2.06686  Std 2.62478  Max 9.95457  Min -5.44316
trainer/policy/mean  Mean 0.117122  Std 0.614091  Max 0.996196  Min -0.997798
trainer/policy/normal/std  Mean 0.379129  Std 0.182237  Max 0.979427  Min 0.0697718
trainer/policy/normal/log_std  Mean -1.11835  Std 0.588215  Max -0.0207875  Min -2.66253
eval/num steps total 241977  eval/num paths total 467
eval/path length  Mean 519  Std 0  Max 519  Min 519
eval/Rewards  Mean 3.15552  Std 0.788082  Max 5.32447  Min 0.9819
eval/Returns  Mean 1637.71  Std 0  Max 1637.71  Min 1637.71
eval/Actions  Mean 0.153448  Std 0.588459  Max 0.998273  Min -0.997802
eval/Num Paths 1  eval/Average Returns 1637.71  eval/normalized_score 50.9433
time/evaluation sampling (s) 0.91106  time/logging (s) 0.00237289  time/sampling batch (s) 0.273615  time/saving (s) 0.0030495  time/training (s) 4.23859  time/epoch (s) 5.42868  time/total (s) 32220.5
Epoch -635
----------------------------------  ----------------
2022-05-10 22:07:36.875511 PDT | [0] Epoch -634 finished
----------------------------------  ----------------
epoch -634  replay_buffer/size 999996  trainer/num train calls 367000  trainer/Policy Loss -2.18288
trainer/Log Pis  Mean 2.02133  Std 2.62197  Max 11.511  Min -5.07359
trainer/policy/mean  Mean 0.13828  Std 0.617724  Max 0.996583  Min -0.998246
trainer/policy/normal/std  Mean 0.38888  Std 0.187688  Max 0.979981  Min 0.0697048
trainer/policy/normal/log_std  Mean -1.09326  Std 0.588219  Max -0.0202218  Min -2.66349
eval/num steps total 242546  eval/num paths total 468
eval/path length  Mean 569  Std 0  Max 569  Min 569
eval/Rewards  Mean 3.08218  Std 0.731793  Max 4.41033  Min 0.987812
eval/Returns  Mean 1753.76  Std 0  Max 1753.76  Min 1753.76
eval/Actions  Mean 0.15492  Std 0.598051  Max 0.997164  Min -0.998581
eval/Num Paths 1  eval/Average Returns 1753.76  eval/normalized_score 54.5089
time/evaluation sampling (s) 0.997008  time/logging (s) 0.00257041  time/sampling batch (s) 0.273621  time/saving (s) 0.00305195  time/training (s) 4.25575  time/epoch (s) 5.532  time/total (s) 32226
Epoch -634
----------------------------------  ----------------
2022-05-10 22:07:42.349947 PDT | [0] Epoch -633 finished
----------------------------------  ----------------
epoch -633  replay_buffer/size 999996  trainer/num train calls 368000  trainer/Policy Loss -2.20772
trainer/Log Pis  Mean 2.31278  Std 2.55817  Max 12.5761  Min -4.49327
trainer/policy/mean  Mean
0.124198 trainer/policy/mean Std 0.623735 trainer/policy/mean Max 0.99793 trainer/policy/mean Min -0.997265 trainer/policy/normal/std Mean 0.386564 trainer/policy/normal/std Std 0.180184 trainer/policy/normal/std Max 0.958263 trainer/policy/normal/std Min 0.0727973 trainer/policy/normal/log_std Mean -1.08807 trainer/policy/normal/log_std Std 0.564598 trainer/policy/normal/log_std Max -0.0426326 trainer/policy/normal/log_std Min -2.62008 eval/num steps total 243400 eval/num paths total 470 eval/path length Mean 427 eval/path length Std 22 eval/path length Max 449 eval/path length Min 405 eval/Rewards Mean 3.13383 eval/Rewards Std 0.858967 eval/Rewards Max 5.40848 eval/Rewards Min 0.986971 eval/Returns Mean 1338.14 eval/Returns Std 91.4647 eval/Returns Max 1429.61 eval/Returns Min 1246.68 eval/Actions Mean 0.144588 eval/Actions Std 0.586238 eval/Actions Max 0.998282 eval/Actions Min -0.999216 eval/Num Paths 2 eval/Average Returns 1338.14 eval/normalized_score 41.7387 time/evaluation sampling (s) 0.90601 time/logging (s) 0.00338246 time/sampling batch (s) 0.27485 time/saving (s) 0.00306775 time/training (s) 4.26645 time/epoch (s) 5.45375 time/total (s) 32231.5 Epoch -633 ---------------------------------- --------------- 2022-05-10 22:07:47.788697 PDT | [0] Epoch -632 finished ---------------------------------- --------------- epoch -632 replay_buffer/size 999996 trainer/num train calls 369000 trainer/Policy Loss -2.17273 trainer/Log Pis Mean 2.14636 trainer/Log Pis Std 2.56326 trainer/Log Pis Max 10.2482 trainer/Log Pis Min -4.9454 trainer/policy/mean Mean 0.129432 trainer/policy/mean Std 0.613619 trainer/policy/mean Max 0.997867 trainer/policy/mean Min -0.997506 trainer/policy/normal/std Mean 0.38408 trainer/policy/normal/std Std 0.185371 trainer/policy/normal/std Max 0.963966 trainer/policy/normal/std Min 0.0680166 trainer/policy/normal/log_std Mean -1.1017 trainer/policy/normal/log_std Std 0.576459 trainer/policy/normal/log_std Max -0.0366996 
trainer/policy/normal/log_std Min -2.688 eval/num steps total 244216 eval/num paths total 473 eval/path length Mean 272 eval/path length Std 64.3584 eval/path length Max 319 eval/path length Min 181 eval/Rewards Mean 2.77875 eval/Rewards Std 0.926133 eval/Rewards Max 4.8038 eval/Rewards Min 0.98423 eval/Returns Mean 755.821 eval/Returns Std 221.271 eval/Returns Max 914.265 eval/Returns Min 442.904 eval/Actions Mean 0.125708 eval/Actions Std 0.553958 eval/Actions Max 0.998996 eval/Actions Min -0.999508 eval/Num Paths 3 eval/Average Returns 755.821 eval/normalized_score 23.8462 time/evaluation sampling (s) 0.901487 time/logging (s) 0.00321215 time/sampling batch (s) 0.273098 time/saving (s) 0.00298989 time/training (s) 4.2361 time/epoch (s) 5.41689 time/total (s) 32236.9 Epoch -632 ---------------------------------- --------------- 2022-05-10 22:07:53.254584 PDT | [0] Epoch -631 finished ---------------------------------- --------------- epoch -631 replay_buffer/size 999996 trainer/num train calls 370000 trainer/Policy Loss -2.27814 trainer/Log Pis Mean 2.21449 trainer/Log Pis Std 2.65202 trainer/Log Pis Max 9.76224 trainer/Log Pis Min -5.89649 trainer/policy/mean Mean 0.1698 trainer/policy/mean Std 0.615546 trainer/policy/mean Max 0.995821 trainer/policy/mean Min -0.997337 trainer/policy/normal/std Mean 0.384498 trainer/policy/normal/std Std 0.182705 trainer/policy/normal/std Max 0.964842 trainer/policy/normal/std Min 0.0684275 trainer/policy/normal/log_std Mean -1.10225 trainer/policy/normal/log_std Std 0.585993 trainer/policy/normal/log_std Max -0.0357908 trainer/policy/normal/log_std Min -2.68198 eval/num steps total 244776 eval/num paths total 474 eval/path length Mean 560 eval/path length Std 0 eval/path length Max 560 eval/path length Min 560 eval/Rewards Mean 3.21951 eval/Rewards Std 0.815599 eval/Rewards Max 4.84344 eval/Rewards Min 0.986548 eval/Returns Mean 1802.92 eval/Returns Std 0 eval/Returns Max 1802.92 eval/Returns Min 1802.92 eval/Actions Mean 
0.161625 eval/Actions Std 0.575707 eval/Actions Max 0.998315 eval/Actions Min -0.998436 eval/Num Paths 1 eval/Average Returns 1802.92 eval/normalized_score 56.0195 time/evaluation sampling (s) 0.904538 time/logging (s) 0.00246265 time/sampling batch (s) 0.273539 time/saving (s) 0.00301129 time/training (s) 4.26011 time/epoch (s) 5.44366 time/total (s) 32242.3 Epoch -631 ---------------------------------- --------------- 2022-05-10 22:07:58.768940 PDT | [0] Epoch -630 finished ---------------------------------- ---------------- epoch -630 replay_buffer/size 999996 trainer/num train calls 371000 trainer/Policy Loss -2.42448 trainer/Log Pis Mean 2.54399 trainer/Log Pis Std 2.6701 trainer/Log Pis Max 11.3322 trainer/Log Pis Min -6.18023 trainer/policy/mean Mean 0.117783 trainer/policy/mean Std 0.634306 trainer/policy/mean Max 0.99814 trainer/policy/mean Min -0.998338 trainer/policy/normal/std Mean 0.377753 trainer/policy/normal/std Std 0.184523 trainer/policy/normal/std Max 0.999391 trainer/policy/normal/std Min 0.0658615 trainer/policy/normal/log_std Mean -1.12364 trainer/policy/normal/log_std Std 0.587928 trainer/policy/normal/log_std Max -0.000609107 trainer/policy/normal/log_std Min -2.7202 eval/num steps total 245595 eval/num paths total 476 eval/path length Mean 409.5 eval/path length Std 0.5 eval/path length Max 410 eval/path length Min 409 eval/Rewards Mean 3.06714 eval/Rewards Std 0.812072 eval/Rewards Max 4.77652 eval/Rewards Min 0.986048 eval/Returns Mean 1255.99 eval/Returns Std 4.17763 eval/Returns Max 1260.17 eval/Returns Min 1251.81 eval/Actions Mean 0.1353 eval/Actions Std 0.590943 eval/Actions Max 0.99625 eval/Actions Min -0.996967 eval/Num Paths 2 eval/Average Returns 1255.99 eval/normalized_score 39.2145 time/evaluation sampling (s) 0.89754 time/logging (s) 0.0032934 time/sampling batch (s) 0.275388 time/saving (s) 0.00302668 time/training (s) 4.31417 time/epoch (s) 5.49341 time/total (s) 32247.8 Epoch -630 ---------------------------------- 
---------------- 2022-05-10 22:08:04.244795 PDT | [0] Epoch -629 finished ---------------------------------- --------------- epoch -629 replay_buffer/size 999996 trainer/num train calls 372000 trainer/Policy Loss -2.17436 trainer/Log Pis Mean 2.01954 trainer/Log Pis Std 2.49481 trainer/Log Pis Max 9.19138 trainer/Log Pis Min -5.33635 trainer/policy/mean Mean 0.154113 trainer/policy/mean Std 0.607438 trainer/policy/mean Max 0.997008 trainer/policy/mean Min -0.998136 trainer/policy/normal/std Mean 0.389555 trainer/policy/normal/std Std 0.186078 trainer/policy/normal/std Max 1.02766 trainer/policy/normal/std Min 0.0716408 trainer/policy/normal/log_std Mean -1.08646 trainer/policy/normal/log_std Std 0.575737 trainer/policy/normal/log_std Max 0.0272835 trainer/policy/normal/log_std Min -2.63609 eval/num steps total 246103 eval/num paths total 477 eval/path length Mean 508 eval/path length Std 0 eval/path length Max 508 eval/path length Min 508 eval/Rewards Mean 3.15125 eval/Rewards Std 0.790213 eval/Rewards Max 5.11447 eval/Rewards Min 0.987516 eval/Returns Mean 1600.84 eval/Returns Std 0 eval/Returns Max 1600.84 eval/Returns Min 1600.84 eval/Actions Mean 0.157168 eval/Actions Std 0.593143 eval/Actions Max 0.998523 eval/Actions Min -0.997031 eval/Num Paths 1 eval/Average Returns 1600.84 eval/normalized_score 49.8102 time/evaluation sampling (s) 0.90685 time/logging (s) 0.0022995 time/sampling batch (s) 0.274061 time/saving (s) 0.00300346 time/training (s) 4.26697 time/epoch (s) 5.45318 time/total (s) 32253.3 Epoch -629 ---------------------------------- --------------- 2022-05-10 22:08:09.713700 PDT | [0] Epoch -628 finished ---------------------------------- --------------- epoch -628 replay_buffer/size 999996 trainer/num train calls 373000 trainer/Policy Loss -2.31889 trainer/Log Pis Mean 2.46061 trainer/Log Pis Std 2.60513 trainer/Log Pis Max 9.73074 trainer/Log Pis Min -3.77007 trainer/policy/mean Mean 0.14055 trainer/policy/mean Std 0.628282 trainer/policy/mean Max 
0.994323 trainer/policy/mean Min -0.997322 trainer/policy/normal/std Mean 0.375442 trainer/policy/normal/std Std 0.182973 trainer/policy/normal/std Max 0.947788 trainer/policy/normal/std Min 0.0739876 trainer/policy/normal/log_std Mean -1.13002 trainer/policy/normal/log_std Std 0.589317 trainer/policy/normal/log_std Max -0.0536247 trainer/policy/normal/log_std Min -2.60386 eval/num steps total 246594 eval/num paths total 478 eval/path length Mean 491 eval/path length Std 0 eval/path length Max 491 eval/path length Min 491 eval/Rewards Mean 3.06082 eval/Rewards Std 0.774563 eval/Rewards Max 4.62551 eval/Rewards Min 0.981335 eval/Returns Mean 1502.86 eval/Returns Std 0 eval/Returns Max 1502.86 eval/Returns Min 1502.86 eval/Actions Mean 0.149125 eval/Actions Std 0.587479 eval/Actions Max 0.997085 eval/Actions Min -0.998143 eval/Num Paths 1 eval/Average Returns 1502.86 eval/normalized_score 46.7998 time/evaluation sampling (s) 0.895621 time/logging (s) 0.00229933 time/sampling batch (s) 0.279227 time/saving (s) 0.0029706 time/training (s) 4.26721 time/epoch (s) 5.44733 time/total (s) 32258.7 Epoch -628 ---------------------------------- --------------- 2022-05-10 22:08:15.193917 PDT | [0] Epoch -627 finished ---------------------------------- --------------- epoch -627 replay_buffer/size 999996 trainer/num train calls 374000 trainer/Policy Loss -2.09384 trainer/Log Pis Mean 2.19382 trainer/Log Pis Std 2.61132 trainer/Log Pis Max 15.3777 trainer/Log Pis Min -4.25064 trainer/policy/mean Mean 0.154931 trainer/policy/mean Std 0.615768 trainer/policy/mean Max 0.997891 trainer/policy/mean Min -0.999572 trainer/policy/normal/std Mean 0.389096 trainer/policy/normal/std Std 0.186601 trainer/policy/normal/std Max 0.974771 trainer/policy/normal/std Min 0.0722266 trainer/policy/normal/log_std Mean -1.09052 trainer/policy/normal/log_std Std 0.583503 trainer/policy/normal/log_std Max -0.0255529 trainer/policy/normal/log_std Min -2.62795 eval/num steps total 247411 eval/num paths 
total 480 eval/path length Mean 408.5 eval/path length Std 0.5 eval/path length Max 409 eval/path length Min 408 eval/Rewards Mean 3.07435 eval/Rewards Std 0.802617 eval/Rewards Max 4.83465 eval/Rewards Min 0.981472 eval/Returns Mean 1255.87 eval/Returns Std 9.16495 eval/Returns Max 1265.04 eval/Returns Min 1246.71 eval/Actions Mean 0.148367 eval/Actions Std 0.592985 eval/Actions Max 0.998685 eval/Actions Min -0.999364 eval/Num Paths 2 eval/Average Returns 1255.87 eval/normalized_score 39.2108 time/evaluation sampling (s) 0.908143 time/logging (s) 0.00326447 time/sampling batch (s) 0.275615 time/saving (s) 0.00311679 time/training (s) 4.2694 time/epoch (s) 5.45953 time/total (s) 32264.2 Epoch -627 ---------------------------------- --------------- 2022-05-10 22:08:20.686500 PDT | [0] Epoch -626 finished ---------------------------------- --------------- epoch -626 replay_buffer/size 999996 trainer/num train calls 375000 trainer/Policy Loss -2.41026 trainer/Log Pis Mean 2.29384 trainer/Log Pis Std 2.63321 trainer/Log Pis Max 9.87331 trainer/Log Pis Min -4.99677 trainer/policy/mean Mean 0.132843 trainer/policy/mean Std 0.626738 trainer/policy/mean Max 0.998546 trainer/policy/mean Min -0.99715 trainer/policy/normal/std Mean 0.381995 trainer/policy/normal/std Std 0.178699 trainer/policy/normal/std Max 0.960505 trainer/policy/normal/std Min 0.0731673 trainer/policy/normal/log_std Mean -1.10107 trainer/policy/normal/log_std Std 0.565743 trainer/policy/normal/log_std Max -0.0402961 trainer/policy/normal/log_std Min -2.61501 eval/num steps total 247924 eval/num paths total 481 eval/path length Mean 513 eval/path length Std 0 eval/path length Max 513 eval/path length Min 513 eval/Rewards Mean 3.08855 eval/Rewards Std 0.830896 eval/Rewards Max 5.33138 eval/Rewards Min 0.983714 eval/Returns Mean 1584.43 eval/Returns Std 0 eval/Returns Max 1584.43 eval/Returns Min 1584.43 eval/Actions Mean 0.149685 eval/Actions Std 0.570971 eval/Actions Max 0.99756 eval/Actions Min -0.997529 
eval/Num Paths 1 eval/Average Returns 1584.43 eval/normalized_score 49.306 time/evaluation sampling (s) 0.920351 time/logging (s) 0.0023672 time/sampling batch (s) 0.273673 time/saving (s) 0.00300126 time/training (s) 4.27053 time/epoch (s) 5.46992 time/total (s) 32269.7 Epoch -626 ---------------------------------- --------------- 2022-05-10 22:08:26.168678 PDT | [0] Epoch -625 finished ---------------------------------- --------------- epoch -625 replay_buffer/size 999996 trainer/num train calls 376000 trainer/Policy Loss -2.09254 trainer/Log Pis Mean 2.16393 trainer/Log Pis Std 2.48825 trainer/Log Pis Max 9.38455 trainer/Log Pis Min -5.26303 trainer/policy/mean Mean 0.131978 trainer/policy/mean Std 0.613625 trainer/policy/mean Max 0.997664 trainer/policy/mean Min -0.996594 trainer/policy/normal/std Mean 0.38183 trainer/policy/normal/std Std 0.183921 trainer/policy/normal/std Max 0.980195 trainer/policy/normal/std Min 0.075905 trainer/policy/normal/log_std Mean -1.10625 trainer/policy/normal/log_std Std 0.572439 trainer/policy/normal/log_std Max -0.0200036 trainer/policy/normal/log_std Min -2.57827 eval/num steps total 248596 eval/num paths total 483 eval/path length Mean 336 eval/path length Std 24 eval/path length Max 360 eval/path length Min 312 eval/Rewards Mean 3.04895 eval/Rewards Std 0.977365 eval/Rewards Max 4.98154 eval/Rewards Min 0.981795 eval/Returns Mean 1024.45 eval/Returns Std 100.383 eval/Returns Max 1124.83 eval/Returns Min 924.063 eval/Actions Mean 0.131973 eval/Actions Std 0.557622 eval/Actions Max 0.997072 eval/Actions Min -0.999701 eval/Num Paths 2 eval/Average Returns 1024.45 eval/normalized_score 32.1 time/evaluation sampling (s) 0.911946 time/logging (s) 0.0028721 time/sampling batch (s) 0.274403 time/saving (s) 0.00312451 time/training (s) 4.26853 time/epoch (s) 5.46087 time/total (s) 32275.1 Epoch -625 ---------------------------------- --------------- 2022-05-10 22:08:31.640079 PDT | [0] Epoch -624 finished 
---------------------------------- --------------- epoch -624 replay_buffer/size 999996 trainer/num train calls 377000 trainer/Policy Loss -2.25204 trainer/Log Pis Mean 2.26228 trainer/Log Pis Std 2.58748 trainer/Log Pis Max 9.63428 trainer/Log Pis Min -4.20475 trainer/policy/mean Mean 0.164338 trainer/policy/mean Std 0.623596 trainer/policy/mean Max 0.995358 trainer/policy/mean Min -0.998135 trainer/policy/normal/std Mean 0.391368 trainer/policy/normal/std Std 0.189263 trainer/policy/normal/std Max 0.987994 trainer/policy/normal/std Min 0.0679153 trainer/policy/normal/log_std Mean -1.08749 trainer/policy/normal/log_std Std 0.588997 trainer/policy/normal/log_std Max -0.0120783 trainer/policy/normal/log_std Min -2.68949 eval/num steps total 249079 eval/num paths total 484 eval/path length Mean 483 eval/path length Std 0 eval/path length Max 483 eval/path length Min 483 eval/Rewards Mean 3.20018 eval/Rewards Std 0.83639 eval/Rewards Max 4.79052 eval/Rewards Min 0.98234 eval/Returns Mean 1545.68 eval/Returns Std 0 eval/Returns Max 1545.68 eval/Returns Min 1545.68 eval/Actions Mean 0.142826 eval/Actions Std 0.601433 eval/Actions Max 0.997927 eval/Actions Min -0.998421 eval/Num Paths 1 eval/Average Returns 1545.68 eval/normalized_score 48.1156 time/evaluation sampling (s) 0.909278 time/logging (s) 0.00229038 time/sampling batch (s) 0.274803 time/saving (s) 0.00313356 time/training (s) 4.25953 time/epoch (s) 5.44904 time/total (s) 32280.6 Epoch -624 ---------------------------------- --------------- 2022-05-10 22:08:37.110123 PDT | [0] Epoch -623 finished ---------------------------------- --------------- epoch -623 replay_buffer/size 999996 trainer/num train calls 378000 trainer/Policy Loss -2.29083 trainer/Log Pis Mean 2.25409 trainer/Log Pis Std 2.47172 trainer/Log Pis Max 10.1966 trainer/Log Pis Min -3.75393 trainer/policy/mean Mean 0.137984 trainer/policy/mean Std 0.614943 trainer/policy/mean Max 0.997739 trainer/policy/mean Min -0.997656 trainer/policy/normal/std 
Mean 0.38431 trainer/policy/normal/std Std 0.183257 trainer/policy/normal/std Max 0.942641 trainer/policy/normal/std Min 0.0732235 trainer/policy/normal/log_std Mean -1.10228 trainer/policy/normal/log_std Std 0.583349 trainer/policy/normal/log_std Max -0.0590697 trainer/policy/normal/log_std Min -2.61424 eval/num steps total 249725 eval/num paths total 485 eval/path length Mean 646 eval/path length Std 0 eval/path length Max 646 eval/path length Min 646 eval/Rewards Mean 3.20996 eval/Rewards Std 0.745623 eval/Rewards Max 4.75476 eval/Rewards Min 0.980872 eval/Returns Mean 2073.63 eval/Returns Std 0 eval/Returns Max 2073.63 eval/Returns Min 2073.63 eval/Actions Mean 0.148517 eval/Actions Std 0.6037 eval/Actions Max 0.997683 eval/Actions Min -0.998104 eval/Num Paths 1 eval/Average Returns 2073.63 eval/normalized_score 64.3373 time/evaluation sampling (s) 0.928849 time/logging (s) 0.00274165 time/sampling batch (s) 0.273665 time/saving (s) 0.00320447 time/training (s) 4.24036 time/epoch (s) 5.44882 time/total (s) 32286 Epoch -623 ---------------------------------- --------------- 2022-05-10 22:08:42.576836 PDT | [0] Epoch -622 finished ---------------------------------- --------------- epoch -622 replay_buffer/size 999996 trainer/num train calls 379000 trainer/Policy Loss -2.17418 trainer/Log Pis Mean 2.03557 trainer/Log Pis Std 2.61195 trainer/Log Pis Max 11.3619 trainer/Log Pis Min -4.48061 trainer/policy/mean Mean 0.145003 trainer/policy/mean Std 0.608156 trainer/policy/mean Max 0.997905 trainer/policy/mean Min -0.998783 trainer/policy/normal/std Mean 0.378161 trainer/policy/normal/std Std 0.180382 trainer/policy/normal/std Max 1.00109 trainer/policy/normal/std Min 0.0739133 trainer/policy/normal/log_std Mean -1.11611 trainer/policy/normal/log_std Std 0.575157 trainer/policy/normal/log_std Max 0.00108684 trainer/policy/normal/log_std Min -2.60486 eval/num steps total 250687 eval/num paths total 487 eval/path length Mean 481 eval/path length Std 20 eval/path length 
Max 501 eval/path length Min 461 eval/Rewards Mean 3.15019 eval/Rewards Std 0.832875 eval/Rewards Max 5.04145 eval/Rewards Min 0.980037 eval/Returns Mean 1515.24 eval/Returns Std 33.4375 eval/Returns Max 1548.68 eval/Returns Min 1481.8 eval/Actions Mean 0.149826 eval/Actions Std 0.583117 eval/Actions Max 0.997978 eval/Actions Min -0.999524 eval/Num Paths 2 eval/Average Returns 1515.24 eval/normalized_score 47.1801 time/evaluation sampling (s) 0.904869 time/logging (s) 0.00358777 time/sampling batch (s) 0.273499 time/saving (s) 0.00299554 time/training (s) 4.26103 time/epoch (s) 5.44598 time/total (s) 32291.5 Epoch -622 ---------------------------------- --------------- 2022-05-10 22:08:48.053211 PDT | [0] Epoch -621 finished ---------------------------------- --------------- epoch -621 replay_buffer/size 999996 trainer/num train calls 380000 trainer/Policy Loss -2.19194 trainer/Log Pis Mean 2.21705 trainer/Log Pis Std 2.48097 trainer/Log Pis Max 10.6657 trainer/Log Pis Min -4.77804 trainer/policy/mean Mean 0.133336 trainer/policy/mean Std 0.615859 trainer/policy/mean Max 0.995907 trainer/policy/mean Min -0.997908 trainer/policy/normal/std Mean 0.383155 trainer/policy/normal/std Std 0.186961 trainer/policy/normal/std Max 1.03332 trainer/policy/normal/std Min 0.0696545 trainer/policy/normal/log_std Mean -1.10982 trainer/policy/normal/log_std Std 0.589946 trainer/policy/normal/log_std Max 0.0327754 trainer/policy/normal/log_std Min -2.66421 eval/num steps total 251264 eval/num paths total 488 eval/path length Mean 577 eval/path length Std 0 eval/path length Max 577 eval/path length Min 577 eval/Rewards Mean 3.15845 eval/Rewards Std 0.753088 eval/Rewards Max 5.04958 eval/Rewards Min 0.978402 eval/Returns Mean 1822.43 eval/Returns Std 0 eval/Returns Max 1822.43 eval/Returns Min 1822.43 eval/Actions Mean 0.160257 eval/Actions Std 0.606978 eval/Actions Max 0.99878 eval/Actions Min -0.998338 eval/Num Paths 1 eval/Average Returns 1822.43 eval/normalized_score 56.6188 
time/evaluation sampling (s) 0.8969 time/logging (s) 0.00253086 time/sampling batch (s) 0.274712 time/saving (s) 0.00311162 time/training (s) 4.27621 time/epoch (s) 5.45347 time/total (s) 32297 Epoch -621 ---------------------------------- --------------- 2022-05-10 22:08:53.541254 PDT | [0] Epoch -620 finished ---------------------------------- --------------- epoch -620 replay_buffer/size 999996 trainer/num train calls 381000 trainer/Policy Loss -2.26782 trainer/Log Pis Mean 2.11086 trainer/Log Pis Std 2.50089 trainer/Log Pis Max 9.29996 trainer/Log Pis Min -4.65855 trainer/policy/mean Mean 0.149288 trainer/policy/mean Std 0.620533 trainer/policy/mean Max 0.998114 trainer/policy/mean Min -0.996932 trainer/policy/normal/std Mean 0.393338 trainer/policy/normal/std Std 0.189274 trainer/policy/normal/std Max 1.01538 trainer/policy/normal/std Min 0.0697181 trainer/policy/normal/log_std Mean -1.08024 trainer/policy/normal/log_std Std 0.584133 trainer/policy/normal/log_std Max 0.0152654 trainer/policy/normal/log_std Min -2.6633 eval/num steps total 251896 eval/num paths total 489 eval/path length Mean 632 eval/path length Std 0 eval/path length Max 632 eval/path length Min 632 eval/Rewards Mean 3.20849 eval/Rewards Std 0.785704 eval/Rewards Max 4.72777 eval/Rewards Min 0.979119 eval/Returns Mean 2027.77 eval/Returns Std 0 eval/Returns Max 2027.77 eval/Returns Min 2027.77 eval/Actions Mean 0.146998 eval/Actions Std 0.581536 eval/Actions Max 0.997297 eval/Actions Min -0.998255 eval/Num Paths 1 eval/Average Returns 2027.77 eval/normalized_score 62.9281 time/evaluation sampling (s) 0.901561 time/logging (s) 0.00265222 time/sampling batch (s) 0.275162 time/saving (s) 0.00303013 time/training (s) 4.284 time/epoch (s) 5.46641 time/total (s) 32302.4 Epoch -620 ---------------------------------- --------------- 2022-05-10 22:08:59.047428 PDT | [0] Epoch -619 finished ---------------------------------- --------------- epoch -619 replay_buffer/size 999996 trainer/num train calls 
382000 trainer/Policy Loss -2.20909 trainer/Log Pis Mean 2.18497 trainer/Log Pis Std 2.52726 trainer/Log Pis Max 9.55555 trainer/Log Pis Min -5.17984 trainer/policy/mean Mean 0.125027 trainer/policy/mean Std 0.61486 trainer/policy/mean Max 0.999201 trainer/policy/mean Min -0.9982 trainer/policy/normal/std Mean 0.376962 trainer/policy/normal/std Std 0.182317 trainer/policy/normal/std Max 0.951839 trainer/policy/normal/std Min 0.0649376 trainer/policy/normal/log_std Mean -1.12416 trainer/policy/normal/log_std Std 0.585291 trainer/policy/normal/log_std Max -0.0493592 trainer/policy/normal/log_std Min -2.73433 eval/num steps total 252775 eval/num paths total 493 eval/path length Mean 219.75 eval/path length Std 49.464 eval/path length Max 283 eval/path length Min 168 eval/Rewards Mean 2.80765 eval/Rewards Std 0.99497 eval/Rewards Max 5.37165 eval/Rewards Min 0.982005 eval/Returns Mean 616.981 eval/Returns Std 200.419 eval/Returns Max 874.537 eval/Returns Min 411.608 eval/Actions Mean 0.0851156 eval/Actions Std 0.522717 eval/Actions Max 0.999146 eval/Actions Min -0.997854 eval/Num Paths 4 eval/Average Returns 616.981 eval/normalized_score 19.5802 time/evaluation sampling (s) 0.902748 time/logging (s) 0.00345733 time/sampling batch (s) 0.277012 time/saving (s) 0.00302638 time/training (s) 4.29889 time/epoch (s) 5.48513 time/total (s) 32307.9 Epoch -619 ---------------------------------- --------------- 2022-05-10 22:09:04.512041 PDT | [0] Epoch -618 finished ---------------------------------- --------------- epoch -618 replay_buffer/size 999996 trainer/num train calls 383000 trainer/Policy Loss -2.3989 trainer/Log Pis Mean 2.24945 trainer/Log Pis Std 2.76815 trainer/Log Pis Max 13.5298 trainer/Log Pis Min -9.11751 trainer/policy/mean Mean 0.163494 trainer/policy/mean Std 0.619746 trainer/policy/mean Max 0.998367 trainer/policy/mean Min -0.999639 trainer/policy/normal/std Mean 0.383759 trainer/policy/normal/std Std 0.186406 trainer/policy/normal/std Max 0.992491 
trainer/policy/normal/std Min 0.0691683 trainer/policy/normal/log_std Mean -1.10659 trainer/policy/normal/log_std Std 0.586826 trainer/policy/normal/log_std Max -0.00753703 trainer/policy/normal/log_std Min -2.67121 eval/num steps total 253287 eval/num paths total 494 eval/path length Mean 512 eval/path length Std 0 eval/path length Max 512 eval/path length Min 512 eval/Rewards Mean 3.16595 eval/Rewards Std 0.800343 eval/Rewards Max 5.26216 eval/Rewards Min 0.985955 eval/Returns Mean 1620.97 eval/Returns Std 0 eval/Returns Max 1620.97 eval/Returns Min 1620.97 eval/Actions Mean 0.160882 eval/Actions Std 0.592859 eval/Actions Max 0.999082 eval/Actions Min -0.996474 eval/Num Paths 1 eval/Average Returns 1620.97 eval/normalized_score 50.4287 time/evaluation sampling (s) 0.907356 time/logging (s) 0.00235384 time/sampling batch (s) 0.274112 time/saving (s) 0.00302471 time/training (s) 4.25481 time/epoch (s) 5.44166 time/total (s) 32313.4 Epoch -618 ---------------------------------- --------------- 2022-05-10 22:09:10.022303 PDT | [0] Epoch -617 finished ---------------------------------- --------------- epoch -617 replay_buffer/size 999996 trainer/num train calls 384000 trainer/Policy Loss -2.2045 trainer/Log Pis Mean 2.10225 trainer/Log Pis Std 2.65577 trainer/Log Pis Max 14.786 trainer/Log Pis Min -6.58841 trainer/policy/mean Mean 0.120142 trainer/policy/mean Std 0.61449 trainer/policy/mean Max 0.999 trainer/policy/mean Min -0.998188 trainer/policy/normal/std Mean 0.388058 trainer/policy/normal/std Std 0.190125 trainer/policy/normal/std Max 1.02498 trainer/policy/normal/std Min 0.0682868 trainer/policy/normal/log_std Mean -1.09742 trainer/policy/normal/log_std Std 0.59017 trainer/policy/normal/log_std Max 0.0246739 trainer/policy/normal/log_std Min -2.68404 eval/num steps total 254010 eval/num paths total 495 eval/path length Mean 723 eval/path length Std 0 eval/path length Max 723 eval/path length Min 723 eval/Rewards Mean 3.25558 eval/Rewards Std 0.762023 
eval/Rewards Max 4.9963 eval/Rewards Min 0.987673 eval/Returns Mean 2353.78 eval/Returns Std 0 eval/Returns Max 2353.78 eval/Returns Min 2353.78 eval/Actions Mean 0.163576 eval/Actions Std 0.590312 eval/Actions Max 0.998481 eval/Actions Min -0.998565 eval/Num Paths 1 eval/Average Returns 2353.78 eval/normalized_score 72.9452 time/evaluation sampling (s) 0.907791 time/logging (s) 0.00299377 time/sampling batch (s) 0.275513 time/saving (s) 0.00307826 time/training (s) 4.29976 time/epoch (s) 5.48913 time/total (s) 32318.8 Epoch -617 ---------------------------------- --------------- 2022-05-10 22:09:15.616049 PDT | [0] Epoch -616 finished ---------------------------------- --------------- epoch -616 replay_buffer/size 999996 trainer/num train calls 385000 trainer/Policy Loss -2.21 trainer/Log Pis Mean 2.23193 trainer/Log Pis Std 2.63128 trainer/Log Pis Max 9.48011 trainer/Log Pis Min -4.61438 trainer/policy/mean Mean 0.162502 trainer/policy/mean Std 0.606787 trainer/policy/mean Max 0.99659 trainer/policy/mean Min -0.997905 trainer/policy/normal/std Mean 0.383343 trainer/policy/normal/std Std 0.188921 trainer/policy/normal/std Max 0.934206 trainer/policy/normal/std Min 0.0682538 trainer/policy/normal/log_std Mean -1.11732 trainer/policy/normal/log_std Std 0.61014 trainer/policy/normal/log_std Max -0.0680584 trainer/policy/normal/log_std Min -2.68452 eval/num steps total 254597 eval/num paths total 496 eval/path length Mean 587 eval/path length Std 0 eval/path length Max 587 eval/path length Min 587 eval/Rewards Mean 3.1626 eval/Rewards Std 0.750405 eval/Rewards Max 4.99077 eval/Rewards Min 0.9856 eval/Returns Mean 1856.45 eval/Returns Std 0 eval/Returns Max 1856.45 eval/Returns Min 1856.45 eval/Actions Mean 0.163755 eval/Actions Std 0.592147 eval/Actions Max 0.99872 eval/Actions Min -0.99819 eval/Num Paths 1 eval/Average Returns 1856.45 eval/normalized_score 57.6641 time/evaluation sampling (s) 0.92426 time/logging (s) 0.00264906 time/sampling batch (s) 0.274401 
time/saving (s) 0.00318587 time/training (s) 4.3672 time/epoch (s) 5.57169 time/total (s) 32324.4 Epoch -616 ---------------------------------- --------------- 2022-05-10 22:09:21.094444 PDT | [0] Epoch -615 finished ---------------------------------- --------------- epoch -615 replay_buffer/size 999996 trainer/num train calls 386000 trainer/Policy Loss -2.16153 trainer/Log Pis Mean 2.19536 trainer/Log Pis Std 2.59057 trainer/Log Pis Max 11.1442 trainer/Log Pis Min -3.69452 trainer/policy/mean Mean 0.126356 trainer/policy/mean Std 0.621065 trainer/policy/mean Max 0.997652 trainer/policy/mean Min -0.99709 trainer/policy/normal/std Mean 0.385795 trainer/policy/normal/std Std 0.185929 trainer/policy/normal/std Max 0.942382 trainer/policy/normal/std Min 0.0670563 trainer/policy/normal/log_std Mean -1.10024 trainer/policy/normal/log_std Std 0.585275 trainer/policy/normal/log_std Max -0.0593445 trainer/policy/normal/log_std Min -2.70222 eval/num steps total 255160 eval/num paths total 497 eval/path length Mean 563 eval/path length Std 0 eval/path length Max 563 eval/path length Min 563 eval/Rewards Mean 3.23219 eval/Rewards Std 0.804751 eval/Rewards Max 4.76926 eval/Rewards Min 0.983266 eval/Returns Mean 1819.72 eval/Returns Std 0 eval/Returns Max 1819.72 eval/Returns Min 1819.72 eval/Actions Mean 0.162338 eval/Actions Std 0.577776 eval/Actions Max 0.997332 eval/Actions Min -0.998787 eval/Num Paths 1 eval/Average Returns 1819.72 eval/normalized_score 56.5357 time/evaluation sampling (s) 0.955964 time/logging (s) 0.00252539 time/sampling batch (s) 0.27269 time/saving (s) 0.00310224 time/training (s) 4.22213 time/epoch (s) 5.45641 time/total (s) 32329.9 Epoch -615 ---------------------------------- --------------- 2022-05-10 22:09:26.532766 PDT | [0] Epoch -614 finished ---------------------------------- --------------- epoch -614 replay_buffer/size 999996 trainer/num train calls 387000 trainer/Policy Loss -2.22415 trainer/Log Pis Mean 2.34885 trainer/Log Pis Std 2.55372 
trainer/Log Pis Max 9.85093 trainer/Log Pis Min -4.19321 trainer/policy/mean Mean 0.128938 trainer/policy/mean Std 0.614043 trainer/policy/mean Max 0.997975 trainer/policy/mean Min -0.996847 trainer/policy/normal/std Mean 0.364057 trainer/policy/normal/std Std 0.176906 trainer/policy/normal/std Max 0.855722 trainer/policy/normal/std Min 0.0647902 trainer/policy/normal/log_std Mean -1.1608 trainer/policy/normal/log_std Std 0.589898 trainer/policy/normal/log_std Max -0.155809 trainer/policy/normal/log_std Min -2.7366 eval/num steps total 255700 eval/num paths total 498 eval/path length Mean 540 eval/path length Std 0 eval/path length Max 540 eval/path length Min 540 eval/Rewards Mean 3.16347 eval/Rewards Std 0.844869 eval/Rewards Max 5.04173 eval/Rewards Min 0.980893 eval/Returns Mean 1708.27 eval/Returns Std 0 eval/Returns Max 1708.27 eval/Returns Min 1708.27 eval/Actions Mean 0.143175 eval/Actions Std 0.567741 eval/Actions Max 0.99709 eval/Actions Min -0.998681 eval/Num Paths 1 eval/Average Returns 1708.27 eval/normalized_score 53.1113 time/evaluation sampling (s) 0.947224 time/logging (s) 0.00247416 time/sampling batch (s) 0.268517 time/saving (s) 0.00305926 time/training (s) 4.19535 time/epoch (s) 5.41662 time/total (s) 32335.3 Epoch -614 ---------------------------------- --------------- 2022-05-10 22:09:31.919736 PDT | [0] Epoch -613 finished ---------------------------------- --------------- epoch -613 replay_buffer/size 999996 trainer/num train calls 388000 trainer/Policy Loss -2.19657 trainer/Log Pis Mean 2.16243 trainer/Log Pis Std 2.66784 trainer/Log Pis Max 12.4245 trainer/Log Pis Min -4.26461 trainer/policy/mean Mean 0.12479 trainer/policy/mean Std 0.621948 trainer/policy/mean Max 0.99842 trainer/policy/mean Min -0.997425 trainer/policy/normal/std Mean 0.378462 trainer/policy/normal/std Std 0.181606 trainer/policy/normal/std Max 0.984191 trainer/policy/normal/std Min 0.0672099 trainer/policy/normal/log_std Mean -1.11702 trainer/policy/normal/log_std Std 
0.579461 trainer/policy/normal/log_std Max -0.0159355 trainer/policy/normal/log_std Min -2.69994 eval/num steps total 256588 eval/num paths total 500 eval/path length Mean 444 eval/path length Std 38 eval/path length Max 482 eval/path length Min 406 eval/Rewards Mean 3.13793 eval/Rewards Std 0.816513 eval/Rewards Max 4.74594 eval/Rewards Min 0.978803 eval/Returns Mean 1393.24 eval/Returns Std 149.693 eval/Returns Max 1542.93 eval/Returns Min 1243.55 eval/Actions Mean 0.138772 eval/Actions Std 0.594199 eval/Actions Max 0.997038 eval/Actions Min -0.9987 eval/Num Paths 2 eval/Average Returns 1393.24 eval/normalized_score 43.4316 time/evaluation sampling (s) 0.89204 time/logging (s) 0.00349274 time/sampling batch (s) 0.268998 time/saving (s) 0.00315038 time/training (s) 4.19886 time/epoch (s) 5.36654 time/total (s) 32340.7 Epoch -613 ---------------------------------- --------------- 2022-05-10 22:09:37.334729 PDT | [0] Epoch -612 finished ---------------------------------- --------------- epoch -612 replay_buffer/size 999996 trainer/num train calls 389000 trainer/Policy Loss -2.35337 trainer/Log Pis Mean 2.29894 trainer/Log Pis Std 2.75896 trainer/Log Pis Max 11.455 trainer/Log Pis Min -4.82705 trainer/policy/mean Mean 0.12901 trainer/policy/mean Std 0.620314 trainer/policy/mean Max 0.998284 trainer/policy/mean Min -0.997216 trainer/policy/normal/std Mean 0.379459 trainer/policy/normal/std Std 0.185992 trainer/policy/normal/std Max 1.15676 trainer/policy/normal/std Min 0.0731632 trainer/policy/normal/log_std Mean -1.12379 trainer/policy/normal/log_std Std 0.601224 trainer/policy/normal/log_std Max 0.145619 trainer/policy/normal/log_std Min -2.61506 eval/num steps total 257159 eval/num paths total 501 eval/path length Mean 571 eval/path length Std 0 eval/path length Max 571 eval/path length Min 571 eval/Rewards Mean 3.16964 eval/Rewards Std 0.778585 eval/Rewards Max 4.80092 eval/Rewards Min 0.979948 eval/Returns Mean 1809.87 eval/Returns Std 0 eval/Returns Max 1809.87 
eval/Returns Min 1809.87 eval/Actions Mean 0.156916 eval/Actions Std 0.59176 eval/Actions Max 0.997402 eval/Actions Min -0.998705 eval/Num Paths 1 eval/Average Returns 1809.87 eval/normalized_score 56.2328 time/evaluation sampling (s) 0.904884 time/logging (s) 0.00257226 time/sampling batch (s) 0.270635 time/saving (s) 0.00298371 time/training (s) 4.21148 time/epoch (s) 5.39255 time/total (s) 32346.1 Epoch -612 ---------------------------------- --------------- 2022-05-10 22:09:42.728743 PDT | [0] Epoch -611 finished ---------------------------------- --------------- epoch -611 replay_buffer/size 999996 trainer/num train calls 390000 trainer/Policy Loss -2.22349 trainer/Log Pis Mean 2.31189 trainer/Log Pis Std 2.63732 trainer/Log Pis Max 17.4303 trainer/Log Pis Min -5.94768 trainer/policy/mean Mean 0.140963 trainer/policy/mean Std 0.620017 trainer/policy/mean Max 0.996897 trainer/policy/mean Min -0.999424 trainer/policy/normal/std Mean 0.377725 trainer/policy/normal/std Std 0.182752 trainer/policy/normal/std Max 0.89576 trainer/policy/normal/std Min 0.0695004 trainer/policy/normal/log_std Mean -1.12251 trainer/policy/normal/log_std Std 0.586781 trainer/policy/normal/log_std Max -0.110082 trainer/policy/normal/log_std Min -2.66642 eval/num steps total 257654 eval/num paths total 502 eval/path length Mean 495 eval/path length Std 0 eval/path length Max 495 eval/path length Min 495 eval/Rewards Mean 3.05361 eval/Rewards Std 0.786909 eval/Rewards Max 4.71212 eval/Rewards Min 0.986975 eval/Returns Mean 1511.54 eval/Returns Std 0 eval/Returns Max 1511.54 eval/Returns Min 1511.54 eval/Actions Mean 0.159566 eval/Actions Std 0.579952 eval/Actions Max 0.996357 eval/Actions Min -0.997359 eval/Num Paths 1 eval/Average Returns 1511.54 eval/normalized_score 47.0663 time/evaluation sampling (s) 0.883728 time/logging (s) 0.0022549 time/sampling batch (s) 0.269985 time/saving (s) 0.00297125 time/training (s) 4.21312 time/epoch (s) 5.37206 time/total (s) 32351.4 Epoch -611 
---------------------------------- --------------- 2022-05-10 22:09:48.152740 PDT | [0] Epoch -610 finished ---------------------------------- --------------- epoch -610 replay_buffer/size 999996 trainer/num train calls 391000 trainer/Policy Loss -1.94832 trainer/Log Pis Mean 2.06558 trainer/Log Pis Std 2.65684 trainer/Log Pis Max 11.7676 trainer/Log Pis Min -5.31045 trainer/policy/mean Mean 0.162622 trainer/policy/mean Std 0.602403 trainer/policy/mean Max 0.998386 trainer/policy/mean Min -0.99783 trainer/policy/normal/std Mean 0.385007 trainer/policy/normal/std Std 0.182712 trainer/policy/normal/std Max 0.943867 trainer/policy/normal/std Min 0.0673038 trainer/policy/normal/log_std Mean -1.09772 trainer/policy/normal/log_std Std 0.576445 trainer/policy/normal/log_std Max -0.0577698 trainer/policy/normal/log_std Min -2.69854 eval/num steps total 258590 eval/num paths total 504 eval/path length Mean 468 eval/path length Std 35 eval/path length Max 503 eval/path length Min 433 eval/Rewards Mean 3.14276 eval/Rewards Std 0.798266 eval/Rewards Max 5.39628 eval/Rewards Min 0.980859 eval/Returns Mean 1470.81 eval/Returns Std 97.6317 eval/Returns Max 1568.45 eval/Returns Min 1373.18 eval/Actions Mean 0.154579 eval/Actions Std 0.589593 eval/Actions Max 0.997634 eval/Actions Min -0.998363 eval/Num Paths 2 eval/Average Returns 1470.81 eval/normalized_score 45.8151 time/evaluation sampling (s) 0.886576 time/logging (s) 0.00360932 time/sampling batch (s) 0.272144 time/saving (s) 0.0030542 time/training (s) 4.23863 time/epoch (s) 5.40401 time/total (s) 32356.9 Epoch -610 ---------------------------------- --------------- 2022-05-10 22:09:53.398886 PDT | [0] Epoch -609 finished ---------------------------------- --------------- epoch -609 replay_buffer/size 999996 trainer/num train calls 392000 trainer/Policy Loss -2.18478 trainer/Log Pis Mean 2.24276 trainer/Log Pis Std 2.58142 trainer/Log Pis Max 10.9489 trainer/Log Pis Min -6.18656 trainer/policy/mean Mean 0.118935 
trainer/policy/mean Std 0.638101 trainer/policy/mean Max 0.997736 trainer/policy/mean Min -0.996035 trainer/policy/normal/std Mean 0.390348 trainer/policy/normal/std Std 0.183291 trainer/policy/normal/std Max 1.0123 trainer/policy/normal/std Min 0.0694905 trainer/policy/normal/log_std Mean -1.07896 trainer/policy/normal/log_std Std 0.564299 trainer/policy/normal/log_std Max 0.0122253 trainer/policy/normal/log_std Min -2.66656 eval/num steps total 259252 eval/num paths total 505 eval/path length Mean 662 eval/path length Std 0 eval/path length Max 662 eval/path length Min 662 eval/Rewards Mean 3.17539 eval/Rewards Std 0.718007 eval/Rewards Max 4.77002 eval/Rewards Min 0.984648 eval/Returns Mean 2102.11 eval/Returns Std 0 eval/Returns Max 2102.11 eval/Returns Min 2102.11 eval/Actions Mean 0.160766 eval/Actions Std 0.596092 eval/Actions Max 0.99874 eval/Actions Min -0.996627 eval/Num Paths 1 eval/Average Returns 2102.11 eval/normalized_score 65.2122 time/evaluation sampling (s) 0.896756 time/logging (s) 0.00269445 time/sampling batch (s) 0.259632 time/saving (s) 0.00297033 time/training (s) 4.06198 time/epoch (s) 5.22404 time/total (s) 32362.1 Epoch -609 ---------------------------------- --------------- 2022-05-10 22:09:58.612667 PDT | [0] Epoch -608 finished ---------------------------------- --------------- epoch -608 replay_buffer/size 999996 trainer/num train calls 393000 trainer/Policy Loss -2.17952 trainer/Log Pis Mean 2.15312 trainer/Log Pis Std 2.42775 trainer/Log Pis Max 8.47089 trainer/Log Pis Min -6.47285 trainer/policy/mean Mean 0.123662 trainer/policy/mean Std 0.613121 trainer/policy/mean Max 0.99779 trainer/policy/mean Min -0.997911 trainer/policy/normal/std Mean 0.383654 trainer/policy/normal/std Std 0.185166 trainer/policy/normal/std Max 0.988091 trainer/policy/normal/std Min 0.0662032 trainer/policy/normal/log_std Mean -1.10609 trainer/policy/normal/log_std Std 0.586484 trainer/policy/normal/log_std Max -0.0119807 trainer/policy/normal/log_std Min 
-2.71503 eval/num steps total 259952 eval/num paths total 506 eval/path length Mean 700 eval/path length Std 0 eval/path length Max 700 eval/path length Min 700 eval/Rewards Mean 3.2339 eval/Rewards Std 0.782852 eval/Rewards Max 5.43882 eval/Rewards Min 0.979681 eval/Returns Mean 2263.73 eval/Returns Std 0 eval/Returns Max 2263.73 eval/Returns Min 2263.73 eval/Actions Mean 0.164157 eval/Actions Std 0.590907 eval/Actions Max 0.99892 eval/Actions Min -0.998376 eval/Num Paths 1 eval/Average Returns 2263.73 eval/normalized_score 70.1783 time/evaluation sampling (s) 0.877463 time/logging (s) 0.00318985 time/sampling batch (s) 0.258489 time/saving (s) 0.0030846 time/training (s) 4.05107 time/epoch (s) 5.19329 time/total (s) 32367.3 Epoch -608 ---------------------------------- --------------- 2022-05-10 22:10:03.966279 PDT | [0] Epoch -607 finished ---------------------------------- --------------- epoch -607 replay_buffer/size 999996 trainer/num train calls 394000 trainer/Policy Loss -2.54154 trainer/Log Pis Mean 2.3839 trainer/Log Pis Std 2.70569 trainer/Log Pis Max 10.6594 trainer/Log Pis Min -6.42261 trainer/policy/mean Mean 0.103433 trainer/policy/mean Std 0.632724 trainer/policy/mean Max 0.996939 trainer/policy/mean Min -0.997683 trainer/policy/normal/std Mean 0.376122 trainer/policy/normal/std Std 0.180334 trainer/policy/normal/std Max 0.905466 trainer/policy/normal/std Min 0.0690168 trainer/policy/normal/log_std Mean -1.12535 trainer/policy/normal/log_std Std 0.585428 trainer/policy/normal/log_std Max -0.0993058 trainer/policy/normal/log_std Min -2.67341 eval/num steps total 260451 eval/num paths total 507 eval/path length Mean 499 eval/path length Std 0 eval/path length Max 499 eval/path length Min 499 eval/Rewards Mean 3.09158 eval/Rewards Std 0.767658 eval/Rewards Max 4.67328 eval/Rewards Min 0.980349 eval/Returns Mean 1542.7 eval/Returns Std 0 eval/Returns Max 1542.7 eval/Returns Min 1542.7 eval/Actions Mean 0.149839 eval/Actions Std 0.585968 eval/Actions Max 
0.998223 eval/Actions Min -0.996083 eval/Num Paths 1 eval/Average Returns 1542.7 eval/normalized_score 48.0238 time/evaluation sampling (s) 0.872183 time/logging (s) 0.00228531 time/sampling batch (s) 0.265777 time/saving (s) 0.00296226 time/training (s) 4.18845 time/epoch (s) 5.33166 time/total (s) 32372.6 Epoch -607 ---------------------------------- --------------- 2022-05-10 22:10:09.274496 PDT | [0] Epoch -606 finished ---------------------------------- --------------- epoch -606 replay_buffer/size 999996 trainer/num train calls 395000 trainer/Policy Loss -2.40439 trainer/Log Pis Mean 2.27409 trainer/Log Pis Std 2.64973 trainer/Log Pis Max 9.83953 trainer/Log Pis Min -8.3486 trainer/policy/mean Mean 0.151851 trainer/policy/mean Std 0.630859 trainer/policy/mean Max 0.997735 trainer/policy/mean Min -0.99813 trainer/policy/normal/std Mean 0.387752 trainer/policy/normal/std Std 0.182692 trainer/policy/normal/std Max 0.962133 trainer/policy/normal/std Min 0.072358 trainer/policy/normal/log_std Mean -1.08957 trainer/policy/normal/log_std Std 0.574805 trainer/policy/normal/log_std Max -0.0386021 trainer/policy/normal/log_std Min -2.62613 eval/num steps total 261022 eval/num paths total 508 eval/path length Mean 571 eval/path length Std 0 eval/path length Max 571 eval/path length Min 571 eval/Rewards Mean 3.22977 eval/Rewards Std 0.811495 eval/Rewards Max 4.75273 eval/Rewards Min 0.979793 eval/Returns Mean 1844.2 eval/Returns Std 0 eval/Returns Max 1844.2 eval/Returns Min 1844.2 eval/Actions Mean 0.159346 eval/Actions Std 0.605792 eval/Actions Max 0.99868 eval/Actions Min -0.998556 eval/Num Paths 1 eval/Average Returns 1844.2 eval/normalized_score 57.2877 time/evaluation sampling (s) 0.873115 time/logging (s) 0.002502 time/sampling batch (s) 0.262601 time/saving (s) 0.00306929 time/training (s) 4.14631 time/epoch (s) 5.2876 time/total (s) 32377.9 Epoch -606 ---------------------------------- --------------- 2022-05-10 22:10:14.623265 PDT | [0] Epoch -605 finished 
---------------------------------- --------------- epoch -605 replay_buffer/size 999996 trainer/num train calls 396000 trainer/Policy Loss -2.41323 trainer/Log Pis Mean 2.35279 trainer/Log Pis Std 2.54755 trainer/Log Pis Max 9.24646 trainer/Log Pis Min -5.83351 trainer/policy/mean Mean 0.126111 trainer/policy/mean Std 0.622086 trainer/policy/mean Max 0.997641 trainer/policy/mean Min -0.998889 trainer/policy/normal/std Mean 0.371342 trainer/policy/normal/std Std 0.181092 trainer/policy/normal/std Max 0.916746 trainer/policy/normal/std Min 0.064093 trainer/policy/normal/log_std Mean -1.14426 trainer/policy/normal/log_std Std 0.598653 trainer/policy/normal/log_std Max -0.0869245 trainer/policy/normal/log_std Min -2.74742 eval/num steps total 261719 eval/num paths total 509 eval/path length Mean 697 eval/path length Std 0 eval/path length Max 697 eval/path length Min 697 eval/Rewards Mean 3.21412 eval/Rewards Std 0.759906 eval/Rewards Max 5.48716 eval/Rewards Min 0.980664 eval/Returns Mean 2240.24 eval/Returns Std 0 eval/Returns Max 2240.24 eval/Returns Min 2240.24 eval/Actions Mean 0.154866 eval/Actions Std 0.59539 eval/Actions Max 0.998166 eval/Actions Min -0.998178 eval/Num Paths 1 eval/Average Returns 2240.24 eval/normalized_score 69.4565 time/evaluation sampling (s) 0.892728 time/logging (s) 0.00309986 time/sampling batch (s) 0.263735 time/saving (s) 0.00326238 time/training (s) 4.16548 time/epoch (s) 5.3283 time/total (s) 32383.2 Epoch -605 ---------------------------------- --------------- 2022-05-10 22:10:19.966422 PDT | [0] Epoch -604 finished ---------------------------------- --------------- epoch -604 replay_buffer/size 999996 trainer/num train calls 397000 trainer/Policy Loss -2.22817 trainer/Log Pis Mean 2.3854 trainer/Log Pis Std 2.716 trainer/Log Pis Max 11.4555 trainer/Log Pis Min -5.2346 trainer/policy/mean Mean 0.188118 trainer/policy/mean Std 0.611308 trainer/policy/mean Max 0.996039 trainer/policy/mean Min -0.997625 trainer/policy/normal/std Mean 
0.381011 trainer/policy/normal/std Std 0.184655 trainer/policy/normal/std Max 0.904153 trainer/policy/normal/std Min 0.0687094 trainer/policy/normal/log_std Mean -1.11621 trainer/policy/normal/log_std Std 0.594287 trainer/policy/normal/log_std Max -0.100757 trainer/policy/normal/log_std Min -2.67787 eval/num steps total 262211 eval/num paths total 510 eval/path length Mean 492 eval/path length Std 0 eval/path length Max 492 eval/path length Min 492 eval/Rewards Mean 3.05231 eval/Rewards Std 0.788833 eval/Rewards Max 4.70304 eval/Rewards Min 0.981461 eval/Returns Mean 1501.74 eval/Returns Std 0 eval/Returns Max 1501.74 eval/Returns Min 1501.74 eval/Actions Mean 0.155581 eval/Actions Std 0.583446 eval/Actions Max 0.995672 eval/Actions Min -0.996784 eval/Num Paths 1 eval/Average Returns 1501.74 eval/normalized_score 46.7652 time/evaluation sampling (s) 0.871158 time/logging (s) 0.00222001 time/sampling batch (s) 0.263745 time/saving (s) 0.00292863 time/training (s) 4.18109 time/epoch (s) 5.32114 time/total (s) 32388.6 Epoch -604 ---------------------------------- --------------- 2022-05-10 22:10:25.320438 PDT | [0] Epoch -603 finished ---------------------------------- --------------- epoch -603 replay_buffer/size 999996 trainer/num train calls 398000 trainer/Policy Loss -2.00526 trainer/Log Pis Mean 2.09175 trainer/Log Pis Std 2.54871 trainer/Log Pis Max 10.8565 trainer/Log Pis Min -5.8833 trainer/policy/mean Mean 0.125453 trainer/policy/mean Std 0.613532 trainer/policy/mean Max 0.997997 trainer/policy/mean Min -0.998448 trainer/policy/normal/std Mean 0.37903 trainer/policy/normal/std Std 0.184517 trainer/policy/normal/std Max 1.05146 trainer/policy/normal/std Min 0.0682912 trainer/policy/normal/log_std Mean -1.12071 trainer/policy/normal/log_std Std 0.590468 trainer/policy/normal/log_std Max 0.0501779 trainer/policy/normal/log_std Min -2.68397 eval/num steps total 262762 eval/num paths total 511 eval/path length Mean 551 eval/path length Std 0 eval/path length Max 
551 eval/path length Min 551 eval/Rewards Mean 3.15768 eval/Rewards Std 0.828168 eval/Rewards Max 4.70561 eval/Rewards Min 0.980749 eval/Returns Mean 1739.88 eval/Returns Std 0 eval/Returns Max 1739.88 eval/Returns Min 1739.88 eval/Actions Mean 0.146639 eval/Actions Std 0.565358 eval/Actions Max 0.996212 eval/Actions Min -0.998045 eval/Num Paths 1 eval/Average Returns 1739.88 eval/normalized_score 54.0826 time/evaluation sampling (s) 0.891239 time/logging (s) 0.00246582 time/sampling batch (s) 0.26436 time/saving (s) 0.00331032 time/training (s) 4.17185 time/epoch (s) 5.33323 time/total (s) 32393.9 Epoch -603 ---------------------------------- --------------- 2022-05-10 22:10:30.862558 PDT | [0] Epoch -602 finished ---------------------------------- --------------- epoch -602 replay_buffer/size 999996 trainer/num train calls 399000 trainer/Policy Loss -2.12264 trainer/Log Pis Mean 2.25825 trainer/Log Pis Std 2.62556 trainer/Log Pis Max 9.20663 trainer/Log Pis Min -5.65633 trainer/policy/mean Mean 0.164609 trainer/policy/mean Std 0.614007 trainer/policy/mean Max 0.998364 trainer/policy/mean Min -0.998409 trainer/policy/normal/std Mean 0.387312 trainer/policy/normal/std Std 0.191473 trainer/policy/normal/std Max 0.996236 trainer/policy/normal/std Min 0.0661475 trainer/policy/normal/log_std Mean -1.10519 trainer/policy/normal/log_std Std 0.604723 trainer/policy/normal/log_std Max -0.00377084 trainer/policy/normal/log_std Min -2.71587 eval/num steps total 263355 eval/num paths total 512 eval/path length Mean 593 eval/path length Std 0 eval/path length Max 593 eval/path length Min 593 eval/Rewards Mean 3.22606 eval/Rewards Std 0.71495 eval/Rewards Max 5.23781 eval/Rewards Min 0.982458 eval/Returns Mean 1913.06 eval/Returns Std 0 eval/Returns Max 1913.06 eval/Returns Min 1913.06 eval/Actions Mean 0.1621 eval/Actions Std 0.609053 eval/Actions Max 0.998112 eval/Actions Min -0.998206 eval/Num Paths 1 eval/Average Returns 1913.06 eval/normalized_score 59.4035 time/evaluation 
sampling (s) 0.935826 time/logging (s) 0.00257351 time/sampling batch (s) 0.260919 time/saving (s) 0.00312844 time/training (s) 4.3187 time/epoch (s) 5.52115 time/total (s) 32399.4 Epoch -602 ---------------------------------- --------------- 2022-05-10 22:10:36.189266 PDT | [0] Epoch -601 finished ---------------------------------- --------------- epoch -601 replay_buffer/size 999996 trainer/num train calls 400000 trainer/Policy Loss -2.0104 trainer/Log Pis Mean 1.88745 trainer/Log Pis Std 2.6245 trainer/Log Pis Max 10.807 trainer/Log Pis Min -4.38053 trainer/policy/mean Mean 0.119796 trainer/policy/mean Std 0.610576 trainer/policy/mean Max 0.999307 trainer/policy/mean Min -0.99851 trainer/policy/normal/std Mean 0.390282 trainer/policy/normal/std Std 0.186454 trainer/policy/normal/std Max 0.959889 trainer/policy/normal/std Min 0.073733 trainer/policy/normal/log_std Mean -1.08628 trainer/policy/normal/log_std Std 0.580249 trainer/policy/normal/log_std Max -0.0409379 trainer/policy/normal/log_std Min -2.60731 eval/num steps total 264277 eval/num paths total 514 eval/path length Mean 461 eval/path length Std 50 eval/path length Max 511 eval/path length Min 411 eval/Rewards Mean 3.16723 eval/Rewards Std 0.799933 eval/Rewards Max 5.43477 eval/Rewards Min 0.983647 eval/Returns Mean 1460.09 eval/Returns Std 197.733 eval/Returns Max 1657.83 eval/Returns Min 1262.36 eval/Actions Mean 0.14574 eval/Actions Std 0.588307 eval/Actions Max 0.998161 eval/Actions Min -0.998012 eval/Num Paths 2 eval/Average Returns 1460.09 eval/normalized_score 45.4858 time/evaluation sampling (s) 0.892097 time/logging (s) 0.00351884 time/sampling batch (s) 0.261374 time/saving (s) 0.00575609 time/training (s) 4.1439 time/epoch (s) 5.30665 time/total (s) 32404.7 Epoch -601 ---------------------------------- --------------- 2022-05-10 22:10:41.557064 PDT | [0] Epoch -600 finished ---------------------------------- -------------- epoch -600 replay_buffer/size 999996 trainer/num train calls 401000 
trainer/Policy Loss -2.31282 trainer/Log Pis Mean 2.36376 trainer/Log Pis Std 2.654 trainer/Log Pis Max 11.835 trainer/Log Pis Min -3.94599 trainer/policy/mean Mean 0.174314 trainer/policy/mean Std 0.611215 trainer/policy/mean Max 0.997507 trainer/policy/mean Min -0.997498 trainer/policy/normal/std Mean 0.381983 trainer/policy/normal/std Std 0.183313 trainer/policy/normal/std Max 0.967329 trainer/policy/normal/std Min 0.0697187 trainer/policy/normal/log_std Mean -1.11059 trainer/policy/normal/log_std Std 0.587749 trainer/policy/normal/log_std Max -0.0332162 trainer/policy/normal/log_std Min -2.66329 eval/num steps total 265272 eval/num paths total 516 eval/path length Mean 497.5 eval/path length Std 2.5 eval/path length Max 500 eval/path length Min 495 eval/Rewards Mean 3.08143 eval/Rewards Std 0.797801 eval/Rewards Max 4.76752 eval/Rewards Min 0.983705 eval/Returns Mean 1533.01 eval/Returns Std 20.6137 eval/Returns Max 1553.63 eval/Returns Min 1512.4 eval/Actions Mean 0.163808 eval/Actions Std 0.583772 eval/Actions Max 0.998545 eval/Actions Min -0.998518 eval/Num Paths 2 eval/Average Returns 1533.01 eval/normalized_score 47.7262 time/evaluation sampling (s) 0.909381 time/logging (s) 0.0036612 time/sampling batch (s) 0.262606 time/saving (s) 0.0030158 time/training (s) 4.16833 time/epoch (s) 5.34699 time/total (s) 32410.1 Epoch -600 ---------------------------------- -------------- 2022-05-10 22:10:46.871470 PDT | [0] Epoch -599 finished ---------------------------------- --------------- epoch -599 replay_buffer/size 999996 trainer/num train calls 402000 trainer/Policy Loss -2.18231 trainer/Log Pis Mean 2.26099 trainer/Log Pis Std 2.6873 trainer/Log Pis Max 15.0248 trainer/Log Pis Min -5.7461 trainer/policy/mean Mean 0.155183 trainer/policy/mean Std 0.610526 trainer/policy/mean Max 0.998225 trainer/policy/mean Min -0.998965 trainer/policy/normal/std Mean 0.382982 trainer/policy/normal/std Std 0.187978 trainer/policy/normal/std Max 0.939871 trainer/policy/normal/std 
Min 0.0660681 trainer/policy/normal/log_std Mean -1.11625 trainer/policy/normal/log_std Std 0.605239 trainer/policy/normal/log_std Max -0.0620129 trainer/policy/normal/log_std Min -2.71707 eval/num steps total 265994 eval/num paths total 517 eval/path length Mean 722 eval/path length Std 0 eval/path length Max 722 eval/path length Min 722 eval/Rewards Mean 3.23704 eval/Rewards Std 0.767058 eval/Rewards Max 5.08757 eval/Rewards Min 0.987502 eval/Returns Mean 2337.14 eval/Returns Std 0 eval/Returns Max 2337.14 eval/Returns Min 2337.14 eval/Actions Mean 0.156055 eval/Actions Std 0.59217 eval/Actions Max 0.99849 eval/Actions Min -0.998811 eval/Num Paths 1 eval/Average Returns 2337.14 eval/normalized_score 72.4339 time/evaluation sampling (s) 0.886744 time/logging (s) 0.00295728 time/sampling batch (s) 0.26109 time/saving (s) 0.00301487 time/training (s) 4.13893 time/epoch (s) 5.29274 time/total (s) 32415.4 Epoch -599 ---------------------------------- --------------- 2022-05-10 22:10:52.186712 PDT | [0] Epoch -598 finished ---------------------------------- --------------- epoch -598 replay_buffer/size 999996 trainer/num train calls 403000 trainer/Policy Loss -2.08968 trainer/Log Pis Mean 2.118 trainer/Log Pis Std 2.71073 trainer/Log Pis Max 15.2172 trainer/Log Pis Min -6.13968 trainer/policy/mean Mean 0.149981 trainer/policy/mean Std 0.619322 trainer/policy/mean Max 0.998245 trainer/policy/mean Min -0.998274 trainer/policy/normal/std Mean 0.381317 trainer/policy/normal/std Std 0.178705 trainer/policy/normal/std Max 0.915927 trainer/policy/normal/std Min 0.0675286 trainer/policy/normal/log_std Mean -1.10476 trainer/policy/normal/log_std Std 0.572429 trainer/policy/normal/log_std Max -0.0878183 trainer/policy/normal/log_std Min -2.6952 eval/num steps total 266795 eval/num paths total 519 eval/path length Mean 400.5 eval/path length Std 0.5 eval/path length Max 401 eval/path length Min 400 eval/Rewards Mean 3.08684 eval/Rewards Std 0.855268 eval/Rewards Max 4.75125 
eval/Rewards Min 0.98143 eval/Returns Mean 1236.28 eval/Returns Std 1.15319 eval/Returns Max 1237.43 eval/Returns Min 1235.12 eval/Actions Mean 0.146602 eval/Actions Std 0.580476 eval/Actions Max 0.997352 eval/Actions Min -0.99691 eval/Num Paths 2 eval/Average Returns 1236.28 eval/normalized_score 38.6088 time/evaluation sampling (s) 0.877455 time/logging (s) 0.00315249 time/sampling batch (s) 0.262551 time/saving (s) 0.00293162 time/training (s) 4.1484 time/epoch (s) 5.29449 time/total (s) 32420.7 Epoch -598 ---------------------------------- --------------- 2022-05-10 22:10:57.512371 PDT | [0] Epoch -597 finished ---------------------------------- --------------- epoch -597 replay_buffer/size 999996 trainer/num train calls 404000 trainer/Policy Loss -2.17411 trainer/Log Pis Mean 2.1499 trainer/Log Pis Std 2.62486 trainer/Log Pis Max 11.2235 trainer/Log Pis Min -6.43567 trainer/policy/mean Mean 0.141657 trainer/policy/mean Std 0.617009 trainer/policy/mean Max 0.996204 trainer/policy/mean Min -0.998184 trainer/policy/normal/std Mean 0.396024 trainer/policy/normal/std Std 0.190111 trainer/policy/normal/std Max 1.0306 trainer/policy/normal/std Min 0.0738482 trainer/policy/normal/log_std Mean -1.0732 trainer/policy/normal/log_std Std 0.583974 trainer/policy/normal/log_std Max 0.0301401 trainer/policy/normal/log_std Min -2.60574 eval/num steps total 267299 eval/num paths total 520 eval/path length Mean 504 eval/path length Std 0 eval/path length Max 504 eval/path length Min 504 eval/Rewards Mean 3.13814 eval/Rewards Std 0.756937 eval/Rewards Max 4.85685 eval/Rewards Min 0.985784 eval/Returns Mean 1581.62 eval/Returns Std 0 eval/Returns Max 1581.62 eval/Returns Min 1581.62 eval/Actions Mean 0.156913 eval/Actions Std 0.591881 eval/Actions Max 0.998048 eval/Actions Min -0.999158 eval/Num Paths 1 eval/Average Returns 1581.62 eval/normalized_score 49.2198 time/evaluation sampling (s) 0.877665 time/logging (s) 0.00233897 time/sampling batch (s) 0.263782 time/saving (s) 
0.00300928 time/training (s) 4.15717 time/epoch (s) 5.30396 time/total (s) 32426 Epoch -597 ---------------------------------- --------------- 2022-05-10 22:11:02.858646 PDT | [0] Epoch -596 finished ---------------------------------- --------------- epoch -596 replay_buffer/size 999996 trainer/num train calls 405000 trainer/Policy Loss -2.34461 trainer/Log Pis Mean 2.31866 trainer/Log Pis Std 2.63711 trainer/Log Pis Max 10.0042 trainer/Log Pis Min -5.82702 trainer/policy/mean Mean 0.135585 trainer/policy/mean Std 0.622874 trainer/policy/mean Max 0.997162 trainer/policy/mean Min -0.997758 trainer/policy/normal/std Mean 0.383846 trainer/policy/normal/std Std 0.182998 trainer/policy/normal/std Max 0.926242 trainer/policy/normal/std Min 0.0661519 trainer/policy/normal/log_std Mean -1.10459 trainer/policy/normal/log_std Std 0.586839 trainer/policy/normal/log_std Max -0.07662 trainer/policy/normal/log_std Min -2.7158 eval/num steps total 267796 eval/num paths total 521 eval/path length Mean 497 eval/path length Std 0 eval/path length Max 497 eval/path length Min 497 eval/Rewards Mean 3.11628 eval/Rewards Std 0.809275 eval/Rewards Max 5.11906 eval/Rewards Min 0.984029 eval/Returns Mean 1548.79 eval/Returns Std 0 eval/Returns Max 1548.79 eval/Returns Min 1548.79 eval/Actions Mean 0.148348 eval/Actions Std 0.581615 eval/Actions Max 0.996584 eval/Actions Min -0.998306 eval/Num Paths 1 eval/Average Returns 1548.79 eval/normalized_score 48.211 time/evaluation sampling (s) 0.869728 time/logging (s) 0.00229421 time/sampling batch (s) 0.265721 time/saving (s) 0.00304435 time/training (s) 4.18428 time/epoch (s) 5.32507 time/total (s) 32431.3 Epoch -596 ---------------------------------- --------------- 2022-05-10 22:11:08.260689 PDT | [0] Epoch -595 finished ---------------------------------- --------------- epoch -595 replay_buffer/size 999996 trainer/num train calls 406000 trainer/Policy Loss -2.06359 trainer/Log Pis Mean 2.08515 trainer/Log Pis Std 2.55896 trainer/Log Pis Max 
9.75236 trainer/Log Pis Min -4.97216 trainer/policy/mean Mean 0.16078 trainer/policy/mean Std 0.607219 trainer/policy/mean Max 0.997282 trainer/policy/mean Min -0.998276 trainer/policy/normal/std Mean 0.383178 trainer/policy/normal/std Std 0.185749 trainer/policy/normal/std Max 0.914138 trainer/policy/normal/std Min 0.0642872 trainer/policy/normal/log_std Mean -1.11304 trainer/policy/normal/log_std Std 0.601475 trainer/policy/normal/log_std Max -0.0897743 trainer/policy/normal/log_std Min -2.74439 eval/num steps total 268277 eval/num paths total 522 eval/path length Mean 481 eval/path length Std 0 eval/path length Max 481 eval/path length Min 481 eval/Rewards Mean 3.07963 eval/Rewards Std 0.830171 eval/Rewards Max 4.81139 eval/Rewards Min 0.981035 eval/Returns Mean 1481.3 eval/Returns Std 0 eval/Returns Max 1481.3 eval/Returns Min 1481.3 eval/Actions Mean 0.14815 eval/Actions Std 0.575379 eval/Actions Max 0.996792 eval/Actions Min -0.998116 eval/Num Paths 1 eval/Average Returns 1481.3 eval/normalized_score 46.1374 time/evaluation sampling (s) 0.946883 time/logging (s) 0.0021895 time/sampling batch (s) 0.263772 time/saving (s) 0.00311559 time/training (s) 4.16513 time/epoch (s) 5.38109 time/total (s) 32436.7 Epoch -595 ---------------------------------- --------------- 2022-05-10 22:11:13.651228 PDT | [0] Epoch -594 finished ---------------------------------- --------------- epoch -594 replay_buffer/size 999996 trainer/num train calls 407000 trainer/Policy Loss -2.24553 trainer/Log Pis Mean 2.19009 trainer/Log Pis Std 2.78025 trainer/Log Pis Max 12.3184 trainer/Log Pis Min -6.86302 trainer/policy/mean Mean 0.13892 trainer/policy/mean Std 0.612242 trainer/policy/mean Max 0.998014 trainer/policy/mean Min -0.998905 trainer/policy/normal/std Mean 0.380845 trainer/policy/normal/std Std 0.184085 trainer/policy/normal/std Max 1.03109 trainer/policy/normal/std Min 0.066592 trainer/policy/normal/log_std Mean -1.1152 trainer/policy/normal/log_std Std 0.589678 
trainer/policy/normal/log_std Max 0.030618 trainer/policy/normal/log_std Min -2.70917 eval/num steps total 268848 eval/num paths total 523 eval/path length Mean 571 eval/path length Std 0 eval/path length Max 571 eval/path length Min 571 eval/Rewards Mean 3.19112 eval/Rewards Std 0.758107 eval/Rewards Max 4.8847 eval/Rewards Min 0.982217 eval/Returns Mean 1822.13 eval/Returns Std 0 eval/Returns Max 1822.13 eval/Returns Min 1822.13 eval/Actions Mean 0.152973 eval/Actions Std 0.600286 eval/Actions Max 0.998487 eval/Actions Min -0.998285 eval/Num Paths 1 eval/Average Returns 1822.13 eval/normalized_score 56.6097 time/evaluation sampling (s) 0.871094 time/logging (s) 0.00243429 time/sampling batch (s) 0.266737 time/saving (s) 0.00304587 time/training (s) 4.22603 time/epoch (s) 5.36934 time/total (s) 32442.1 Epoch -594 ---------------------------------- --------------- 2022-05-10 22:11:18.999508 PDT | [0] Epoch -593 finished ---------------------------------- --------------- epoch -593 replay_buffer/size 999996 trainer/num train calls 408000 trainer/Policy Loss -2.09395 trainer/Log Pis Mean 2.18982 trainer/Log Pis Std 2.5214 trainer/Log Pis Max 9.89579 trainer/Log Pis Min -4.54 trainer/policy/mean Mean 0.142768 trainer/policy/mean Std 0.612873 trainer/policy/mean Max 0.997824 trainer/policy/mean Min -0.998062 trainer/policy/normal/std Mean 0.38365 trainer/policy/normal/std Std 0.184861 trainer/policy/normal/std Max 0.99733 trainer/policy/normal/std Min 0.0692579 trainer/policy/normal/log_std Mean -1.10436 trainer/policy/normal/log_std Std 0.581351 trainer/policy/normal/log_std Max -0.0026738 trainer/policy/normal/log_std Min -2.66992 eval/num steps total 269791 eval/num paths total 525 eval/path length Mean 471.5 eval/path length Std 58.5 eval/path length Max 530 eval/path length Min 413 eval/Rewards Mean 3.14936 eval/Rewards Std 0.813612 eval/Rewards Max 5.38182 eval/Rewards Min 0.9886 eval/Returns Mean 1484.93 eval/Returns Std 213.551 eval/Returns Max 1698.48 
eval/Returns Min 1271.37 eval/Actions Mean 0.150068 eval/Actions Std 0.58923 eval/Actions Max 0.998521 eval/Actions Min -0.997911 eval/Num Paths 2 eval/Average Returns 1484.93 eval/normalized_score 46.2487 time/evaluation sampling (s) 0.874511 time/logging (s) 0.00343561 time/sampling batch (s) 0.270997 time/saving (s) 0.00290253 time/training (s) 4.17618 time/epoch (s) 5.32802 time/total (s) 32447.4 Epoch -593 ---------------------------------- --------------- 2022-05-10 22:11:24.332422 PDT | [0] Epoch -592 finished ---------------------------------- --------------- epoch -592 replay_buffer/size 999996 trainer/num train calls 409000 trainer/Policy Loss -2.18879 trainer/Log Pis Mean 2.21326 trainer/Log Pis Std 2.57696 trainer/Log Pis Max 9.8359 trainer/Log Pis Min -6.28468 trainer/policy/mean Mean 0.149773 trainer/policy/mean Std 0.614691 trainer/policy/mean Max 0.998225 trainer/policy/mean Min -0.998536 trainer/policy/normal/std Mean 0.382991 trainer/policy/normal/std Std 0.185957 trainer/policy/normal/std Max 0.932954 trainer/policy/normal/std Min 0.0681366 trainer/policy/normal/log_std Mean -1.11181 trainer/policy/normal/log_std Std 0.595 trainer/policy/normal/log_std Max -0.0693991 trainer/policy/normal/log_std Min -2.68624 eval/num steps total 270667 eval/num paths total 527 eval/path length Mean 438 eval/path length Std 31 eval/path length Max 469 eval/path length Min 407 eval/Rewards Mean 3.15276 eval/Rewards Std 0.834541 eval/Rewards Max 4.83773 eval/Rewards Min 0.983519 eval/Returns Mean 1380.91 eval/Returns Std 128.523 eval/Returns Max 1509.43 eval/Returns Min 1252.38 eval/Actions Mean 0.149434 eval/Actions Std 0.585439 eval/Actions Max 0.9977 eval/Actions Min -0.998358 eval/Num Paths 2 eval/Average Returns 1380.91 eval/normalized_score 43.0527 time/evaluation sampling (s) 0.877862 time/logging (s) 0.00338943 time/sampling batch (s) 0.264088 time/saving (s) 0.00306488 time/training (s) 4.16326 time/epoch (s) 5.31166 time/total (s) 32452.7 Epoch -592 
----------------------------------  ---------------
2022-05-10 22:11:29.715150 PDT | [0] Epoch -591 finished
----------------------------------  ---------------
epoch  -591
replay_buffer/size  999996
trainer/num train calls  410000
trainer/Policy Loss  -2.09752
trainer/Log Pis Mean  2.2298
trainer/Log Pis Std  2.61077
trainer/Log Pis Max  10.2306
trainer/Log Pis Min  -4.59423
trainer/policy/mean Mean  0.163893
trainer/policy/mean Std  0.612667
trainer/policy/mean Max  0.998334
trainer/policy/mean Min  -0.997924
trainer/policy/normal/std Mean  0.396791
trainer/policy/normal/std Std  0.189729
trainer/policy/normal/std Max  1.11542
trainer/policy/normal/std Min  0.0696716
trainer/policy/normal/log_std Mean  -1.07139
trainer/policy/normal/log_std Std  0.586087
trainer/policy/normal/log_std Max  0.109228
trainer/policy/normal/log_std Min  -2.66396
eval/num steps total  271284
eval/num paths total  528
eval/path length Mean  617
eval/path length Std  0
eval/path length Max  617
eval/path length Min  617
eval/Rewards Mean  3.1925
eval/Rewards Std  0.796021
eval/Rewards Max  5.20585
eval/Rewards Min  0.983859
eval/Returns Mean  1969.77
eval/Returns Std  0
eval/Returns Max  1969.77
eval/Returns Min  1969.77
eval/Actions Mean  0.138602
eval/Actions Std  0.587223
eval/Actions Max  0.998106
eval/Actions Min  -0.998432
eval/Num Paths  1
eval/Average Returns  1969.77
eval/normalized_score  61.1462
time/evaluation sampling (s)  0.876462
time/logging (s)  0.00261476
time/sampling batch (s)  0.268005
time/saving (s)  0.00303531
time/training (s)  4.21029
time/epoch (s)  5.36041
time/total (s)  32458.1
Epoch -591
----------------------------------  ---------------
2022-05-10 22:11:35.069725 PDT | [0] Epoch -590 finished
----------------------------------  ---------------
epoch  -590
replay_buffer/size  999996
trainer/num train calls  411000
trainer/Policy Loss  -2.23839
trainer/Log Pis Mean  2.22164
trainer/Log Pis Std  2.67105
trainer/Log Pis Max  12.7184
trainer/Log Pis Min  -3.47485
trainer/policy/mean Mean  0.162003
trainer/policy/mean Std  0.617039
trainer/policy/mean Max  0.998259
trainer/policy/mean Min  -0.998478
trainer/policy/normal/std Mean  0.383178
trainer/policy/normal/std Std  0.178568
trainer/policy/normal/std Max  0.934902
trainer/policy/normal/std Min  0.0687407
trainer/policy/normal/log_std Mean  -1.09698
trainer/policy/normal/log_std Std  0.565155
trainer/policy/normal/log_std Max  -0.0673139
trainer/policy/normal/log_std Min  -2.67741
eval/num steps total  271858
eval/num paths total  529
eval/path length Mean  574
eval/path length Std  0
eval/path length Max  574
eval/path length Min  574
eval/Rewards Mean  3.22751
eval/Rewards Std  0.783813
eval/Rewards Max  4.76278
eval/Rewards Min  0.98435
eval/Returns Mean  1852.59
eval/Returns Std  0
eval/Returns Max  1852.59
eval/Returns Min  1852.59
eval/Actions Mean  0.159764
eval/Actions Std  0.603277
eval/Actions Max  0.998241
eval/Actions Min  -0.998457
eval/Num Paths  1
eval/Average Returns  1852.59
eval/normalized_score  57.5456
time/evaluation sampling (s)  0.874706
time/logging (s)  0.0025209
time/sampling batch (s)  0.26572
time/saving (s)  0.00295835
time/training (s)  4.18731
time/epoch (s)  5.33321
time/total (s)  32463.4
Epoch -590
----------------------------------  ---------------
2022-05-10 22:11:40.456081 PDT | [0] Epoch -589 finished
----------------------------------  ---------------
epoch  -589
replay_buffer/size  999996
trainer/num train calls  412000
trainer/Policy Loss  -2.02624
trainer/Log Pis Mean  2.2354
trainer/Log Pis Std  2.56549
trainer/Log Pis Max  9.80519
trainer/Log Pis Min  -4.7746
trainer/policy/mean Mean  0.130088
trainer/policy/mean Std  0.611039
trainer/policy/mean Max  0.995973
trainer/policy/mean Min  -0.998625
trainer/policy/normal/std Mean  0.382046
trainer/policy/normal/std Std  0.180001
trainer/policy/normal/std Max  0.909127
trainer/policy/normal/std Min  0.067628
trainer/policy/normal/log_std Mean  -1.10192
trainer/policy/normal/log_std Std  0.567774
trainer/policy/normal/log_std Max  -0.0952709
trainer/policy/normal/log_std Min  -2.69373
eval/num steps total  272332
eval/num paths total  530
eval/path length Mean  474
eval/path length Std  0
eval/path length Max  474
eval/path length Min  474
eval/Rewards Mean  3.11277
eval/Rewards Std  0.793225
eval/Rewards Max  4.71952
eval/Rewards Min  0.984711
eval/Returns Mean  1475.45
eval/Returns Std  0
eval/Returns Max  1475.45
eval/Returns Min  1475.45
eval/Actions Mean  0.137163
eval/Actions Std  0.593789
eval/Actions Max  0.997605
eval/Actions Min  -0.997987
eval/Num Paths  1
eval/Average Returns  1475.45
eval/normalized_score  45.9576
time/evaluation sampling (s)  0.898776
time/logging (s)  0.00226111
time/sampling batch (s)  0.271926
time/saving (s)  0.0029693
time/training (s)  4.18873
time/epoch (s)  5.36466
time/total (s)  32468.8
Epoch -589
----------------------------------  ---------------
2022-05-10 22:11:45.802654 PDT | [0] Epoch -588 finished
----------------------------------  ---------------
epoch  -588
replay_buffer/size  999996
trainer/num train calls  413000
trainer/Policy Loss  -2.27177
trainer/Log Pis Mean  2.23876
trainer/Log Pis Std  2.66349
trainer/Log Pis Max  10.7389
trainer/Log Pis Min  -4.13314
trainer/policy/mean Mean  0.111788
trainer/policy/mean Std  0.621038
trainer/policy/mean Max  0.997504
trainer/policy/mean Min  -0.997931
trainer/policy/normal/std Mean  0.383476
trainer/policy/normal/std Std  0.182213
trainer/policy/normal/std Max  1.01336
trainer/policy/normal/std Min  0.0697922
trainer/policy/normal/log_std Mean  -1.10058
trainer/policy/normal/log_std Std  0.57215
trainer/policy/normal/log_std Max  0.0132747
trainer/policy/normal/log_std Min  -2.66223
eval/num steps total  273310
eval/num paths total  532
eval/path length Mean  489
eval/path length Std  23
eval/path length Max  512
eval/path length Min  466
eval/Rewards Mean  3.11693
eval/Rewards Std  0.851139
eval/Rewards Max  5.30683
eval/Rewards Min  0.983448
eval/Returns Mean  1524.18
eval/Returns Std  73.6983
eval/Returns Max  1597.88
eval/Returns Min  1450.48
eval/Actions Mean  0.146982
eval/Actions Std  0.568841
eval/Actions Max  0.998499
eval/Actions Min  -0.998358
eval/Num Paths  2
eval/Average Returns  1524.18
eval/normalized_score  47.4548
time/evaluation sampling (s)  0.884618
time/logging (s)  0.0036331
time/sampling batch (s)  0.264703
time/saving (s)  0.00295368
time/training (s)  4.17098
time/epoch (s)  5.32688
time/total (s)  32474.1
Epoch -588
----------------------------------  ---------------
2022-05-10 22:11:51.160171 PDT | [0] Epoch -587 finished
----------------------------------  ---------------
epoch  -587
replay_buffer/size  999996
trainer/num train calls  414000
trainer/Policy Loss  -2.09857
trainer/Log Pis Mean  2.08733
trainer/Log Pis Std  2.45998
trainer/Log Pis Max  8.87405
trainer/Log Pis Min  -5.40791
trainer/policy/mean Mean  0.141968
trainer/policy/mean Std  0.619323
trainer/policy/mean Max  0.996754
trainer/policy/mean Min  -0.996792
trainer/policy/normal/std Mean  0.385946
trainer/policy/normal/std Std  0.181965
trainer/policy/normal/std Max  0.967897
trainer/policy/normal/std Min  0.0682373
trainer/policy/normal/log_std Mean  -1.09384
trainer/policy/normal/log_std Std  0.57369
trainer/policy/normal/log_std Max  -0.03263
trainer/policy/normal/log_std Min  -2.68476
eval/num steps total  273958
eval/num paths total  533
eval/path length Mean  648
eval/path length Std  0
eval/path length Max  648
eval/path length Min  648
eval/Rewards Mean  3.23443
eval/Rewards Std  0.711549
eval/Rewards Max  4.82525
eval/Rewards Min  0.987194
eval/Returns Mean  2095.91
eval/Returns Std  0
eval/Returns Max  2095.91
eval/Returns Min  2095.91
eval/Actions Mean  0.14884
eval/Actions Std  0.604719
eval/Actions Max  0.997289
eval/Actions Min  -0.99817
eval/Num Paths  1
eval/Average Returns  2095.91
eval/normalized_score  65.0218
time/evaluation sampling (s)  0.894112
time/logging (s)  0.00259975
time/sampling batch (s)  0.265437
time/saving (s)  0.00291777
time/training (s)  4.16972
time/epoch (s)  5.33478
time/total (s)  32479.4
Epoch -587
----------------------------------  ---------------
2022-05-10 22:11:56.548798 PDT | [0] Epoch -586 finished
----------------------------------  ---------------
epoch  -586
replay_buffer/size  999996
trainer/num train calls  415000
trainer/Policy Loss  -2.10469
trainer/Log Pis Mean  2.05974
trainer/Log Pis Std  2.61536
trainer/Log Pis Max  13.1913
trainer/Log Pis Min  -4.50036
trainer/policy/mean Mean  0.158087
trainer/policy/mean Std  0.597915
trainer/policy/mean Max  0.995415
trainer/policy/mean Min  -0.999032
trainer/policy/normal/std Mean  0.383498
trainer/policy/normal/std Std  0.189757
trainer/policy/normal/std Max  1.00446
trainer/policy/normal/std Min  0.0668953
trainer/policy/normal/log_std Mean  -1.11912
trainer/policy/normal/log_std Std  0.615071
trainer/policy/normal/log_std Max  0.00444923
trainer/policy/normal/log_std Min  -2.70463
eval/num steps total  274684
eval/num paths total  534
eval/path length Mean  726
eval/path length Std  0
eval/path length Max  726
eval/path length Min  726
eval/Rewards Mean  3.27276
eval/Rewards Std  0.726098
eval/Rewards Max  4.74466
eval/Rewards Min  0.981064
eval/Returns Mean  2376.02
eval/Returns Std  0
eval/Returns Max  2376.02
eval/Returns Min  2376.02
eval/Actions Mean  0.156591
eval/Actions Std  0.595488
eval/Actions Max  0.998639
eval/Actions Min  -0.998918
eval/Num Paths  1
eval/Average Returns  2376.02
eval/normalized_score  73.6286
time/evaluation sampling (s)  0.888955
time/logging (s)  0.00297993
time/sampling batch (s)  0.26705
time/saving (s)  0.00306396
time/training (s)  4.20578
time/epoch (s)  5.36783
time/total (s)  32484.8
Epoch -586
----------------------------------  ---------------
2022-05-10 22:12:01.940671 PDT | [0] Epoch -585 finished
----------------------------------  ---------------
epoch  -585
replay_buffer/size  999996
trainer/num train calls  416000
trainer/Policy Loss  -2.41432
trainer/Log Pis Mean  2.27442
trainer/Log Pis Std  2.50623
trainer/Log Pis Max  8.99836
trainer/Log Pis Min  -5.648
trainer/policy/mean Mean  0.167594
trainer/policy/mean Std  0.614134
trainer/policy/mean Max  0.998487
trainer/policy/mean Min  -0.996932
trainer/policy/normal/std Mean  0.384793
trainer/policy/normal/std Std  0.185388
trainer/policy/normal/std Max  0.964425
trainer/policy/normal/std Min  0.0675506
trainer/policy/normal/log_std Mean  -1.1049
trainer/policy/normal/log_std Std  0.591564
trainer/policy/normal/log_std Max  -0.0362228
trainer/policy/normal/log_std Min  -2.69488
eval/num steps total  275493
eval/num paths total  536
eval/path length Mean  404.5
eval/path length Std  1.5
eval/path length Max  406
eval/path length Min  403
eval/Rewards Mean  3.06249
eval/Rewards Std  0.790805
eval/Rewards Max  4.57994
eval/Rewards Min  0.983585
eval/Returns Mean  1238.78
eval/Returns Std  5.77115
eval/Returns Max  1244.55
eval/Returns Min  1233.01
eval/Actions Mean  0.141417
eval/Actions Std  0.59351
eval/Actions Max  0.997517
eval/Actions Min  -0.999435
eval/Num Paths  2
eval/Average Returns  1238.78
eval/normalized_score  38.6856
time/evaluation sampling (s)  0.903645
time/logging (s)  0.00314212
time/sampling batch (s)  0.266849
time/saving (s)  0.00295503
time/training (s)  4.194
time/epoch (s)  5.37059
time/total (s)  32490.2
Epoch -585
----------------------------------  ---------------
2022-05-10 22:12:07.292624 PDT | [0] Epoch -584 finished
----------------------------------  ---------------
epoch  -584
replay_buffer/size  999996
trainer/num train calls  417000
trainer/Policy Loss  -2.31905
trainer/Log Pis Mean  2.38007
trainer/Log Pis Std  2.63852
trainer/Log Pis Max  11.1245
trainer/Log Pis Min  -4.4102
trainer/policy/mean Mean  0.11861
trainer/policy/mean Std  0.6209
trainer/policy/mean Max  0.996562
trainer/policy/mean Min  -0.99875
trainer/policy/normal/std Mean  0.375924
trainer/policy/normal/std Std  0.179769
trainer/policy/normal/std Max  0.939017
trainer/policy/normal/std Min  0.0633434
trainer/policy/normal/log_std Mean  -1.12805
trainer/policy/normal/log_std Std  0.592672
trainer/policy/normal/log_std Max  -0.0629218
trainer/policy/normal/log_std Min  -2.75918
eval/num steps total  276492
eval/num paths total  538
eval/path length Mean  499.5
eval/path length Std  11.5
eval/path length Max  511
eval/path length Min  488
eval/Rewards Mean  3.14463
eval/Rewards Std  0.800275
eval/Rewards Max  5.19
eval/Rewards Min  0.981462
eval/Returns Mean  1570.74
eval/Returns Std  29.0761
eval/Returns Max  1599.82
eval/Returns Min  1541.67
eval/Actions Mean  0.15132
eval/Actions Std  0.591016
eval/Actions Max  0.99816
eval/Actions Min  -0.99915
eval/Num Paths  2
eval/Average Returns  1570.74
eval/normalized_score  48.8856
time/evaluation sampling (s)  0.873692
time/logging (s)  0.00366293
time/sampling batch (s)  0.266196
time/saving (s)  0.00305013
time/training (s)  4.18469
time/epoch (s)  5.3313
time/total (s)  32495.5
Epoch -584
----------------------------------  ---------------
2022-05-10 22:12:12.679395 PDT | [0] Epoch -583 finished
----------------------------------  ---------------
epoch  -583
replay_buffer/size  999996
trainer/num train calls  418000
trainer/Policy Loss  -2.3033
trainer/Log Pis Mean  2.25304
trainer/Log Pis Std  2.60302
trainer/Log Pis Max  13.6901
trainer/Log Pis Min  -4.2734
trainer/policy/mean Mean  0.119639
trainer/policy/mean Std  0.618379
trainer/policy/mean Max  0.996235
trainer/policy/mean Min  -0.997732
trainer/policy/normal/std Mean  0.378947
trainer/policy/normal/std Std  0.185471
trainer/policy/normal/std Max  0.976808
trainer/policy/normal/std Min  0.0687355
trainer/policy/normal/log_std Mean  -1.123
trainer/policy/normal/log_std Std  0.59451
trainer/policy/normal/log_std Max  -0.0234654
trainer/policy/normal/log_std Min  -2.67749
eval/num steps total  276987
eval/num paths total  539
eval/path length Mean  495
eval/path length Std  0
eval/path length Max  495
eval/path length Min  495
eval/Rewards Mean  3.05308
eval/Rewards Std  0.769138
eval/Rewards Max  4.62958
eval/Rewards Min  0.979068
eval/Returns Mean  1511.28
eval/Returns Std  0
eval/Returns Max  1511.28
eval/Returns Min  1511.28
eval/Actions Mean  0.146131
eval/Actions Std  0.579344
eval/Actions Max  0.99659
eval/Actions Min  -0.997194
eval/Num Paths  1
eval/Average Returns  1511.28
eval/normalized_score  47.0584
time/evaluation sampling (s)  0.873179
time/logging (s)  0.00234898
time/sampling batch (s)  0.267254
time/saving (s)  0.00311847
time/training (s)  4.21783
time/epoch (s)  5.36373
time/total (s)  32500.9
Epoch -583
----------------------------------  ---------------
2022-05-10 22:12:18.075542 PDT | [0] Epoch -582 finished
----------------------------------  ---------------
epoch  -582
replay_buffer/size  999996
trainer/num train calls  419000
trainer/Policy Loss  -2.03484
trainer/Log Pis Mean  2.1313
trainer/Log Pis Std  2.48231
trainer/Log Pis Max  9.53327
trainer/Log Pis Min  -5.1151
trainer/policy/mean Mean  0.143335
trainer/policy/mean Std  0.604461
trainer/policy/mean Max  0.998201
trainer/policy/mean Min  -0.998331
trainer/policy/normal/std Mean  0.37298
trainer/policy/normal/std Std  0.178876
trainer/policy/normal/std Max  0.927761
trainer/policy/normal/std Min  0.0701911
trainer/policy/normal/log_std Mean  -1.13282
trainer/policy/normal/log_std Std  0.583282
trainer/policy/normal/log_std Max  -0.0749814
trainer/policy/normal/log_std Min  -2.65653
eval/num steps total  277473
eval/num paths total  540
eval/path length Mean  486
eval/path length Std  0
eval/path length Max  486
eval/path length Min  486
eval/Rewards Mean  3.21245
eval/Rewards Std  0.810126
eval/Rewards Max  4.86732
eval/Rewards Min  0.98642
eval/Returns Mean  1561.25
eval/Returns Std  0
eval/Returns Max  1561.25
eval/Returns Min  1561.25
eval/Actions Mean  0.143782
eval/Actions Std  0.591191
eval/Actions Max  0.99744
eval/Actions Min  -0.997925
eval/Num Paths  1
eval/Average Returns  1561.25
eval/normalized_score  48.5939
time/evaluation sampling (s)  0.881881
time/logging (s)  0.00225363
time/sampling batch (s)  0.266845
time/saving (s)  0.00297193
time/training (s)  4.22073
time/epoch (s)  5.37468
time/total (s)  32506.3
Epoch -582
----------------------------------  ---------------
2022-05-10 22:12:23.438329 PDT | [0] Epoch -581 finished
----------------------------------  ---------------
epoch  -581
replay_buffer/size  999996
trainer/num train calls  420000
trainer/Policy Loss  -2.13949
trainer/Log Pis Mean  2.04609
trainer/Log Pis Std  2.65705
trainer/Log Pis Max  9.96822
trainer/Log Pis Min  -5.59473
trainer/policy/mean Mean  0.128736
trainer/policy/mean Std  0.612687
trainer/policy/mean Max  0.997591
trainer/policy/mean Min  -0.997516
trainer/policy/normal/std Mean  0.385533
trainer/policy/normal/std Std  0.186567
trainer/policy/normal/std Max  0.969588
trainer/policy/normal/std Min  0.065897
trainer/policy/normal/log_std Mean  -1.1023
trainer/policy/normal/log_std Std  0.587983
trainer/policy/normal/log_std Max  -0.0308837
trainer/policy/normal/log_std Min  -2.71966
eval/num steps total  278367
eval/num paths total  542
eval/path length Mean  447
eval/path length Std  40
eval/path length Max  487
eval/path length Min  407
eval/Rewards Mean  3.13086
eval/Rewards Std  0.796255
eval/Rewards Max  4.85899
eval/Rewards Min  0.98205
eval/Returns Mean  1399.5
eval/Returns Std  147.181
eval/Returns Max  1546.68
eval/Returns Min  1252.31
eval/Actions Mean  0.147572
eval/Actions Std  0.595658
eval/Actions Max  0.997212
eval/Actions Min  -0.998584
eval/Num Paths  2
eval/Average Returns  1399.5
eval/normalized_score  43.6238
time/evaluation sampling (s)  0.875335
time/logging (s)  0.00369785
time/sampling batch (s)  0.265487
time/saving (s)  0.00341606
time/training (s)  4.19509
time/epoch (s)  5.34303
time/total (s)  32511.6
Epoch -581
----------------------------------  ---------------
2022-05-10 22:12:28.837234 PDT | [0] Epoch -580 finished
----------------------------------  ---------------
epoch  -580
replay_buffer/size  999996
trainer/num train calls  421000
trainer/Policy Loss  -2.12929
trainer/Log Pis Mean  2.17556
trainer/Log Pis Std  2.54927
trainer/Log Pis Max  9.21391
trainer/Log Pis Min  -4.06221
trainer/policy/mean Mean  0.121703
trainer/policy/mean Std  0.614682
trainer/policy/mean Max  0.996228
trainer/policy/mean Min  -0.998702
trainer/policy/normal/std Mean  0.377245
trainer/policy/normal/std Std  0.180542
trainer/policy/normal/std Max  0.984617
trainer/policy/normal/std Min  0.073217
trainer/policy/normal/log_std Mean  -1.11954
trainer/policy/normal/log_std Std  0.577887
trainer/policy/normal/log_std Max  -0.0155028
trainer/policy/normal/log_std Min  -2.61433
eval/num steps total  279026
eval/num paths total  543
eval/path length Mean  659
eval/path length Std  0
eval/path length Max  659
eval/path length Min  659
eval/Rewards Mean  3.18933
eval/Rewards Std  0.71351
eval/Rewards Max  5.18692
eval/Rewards Min  0.983177
eval/Returns Mean  2101.77
eval/Returns Std  0
eval/Returns Max  2101.77
eval/Returns Min  2101.77
eval/Actions Mean  0.152992
eval/Actions Std  0.607332
eval/Actions Max  0.998587
eval/Actions Min  -0.99904
eval/Num Paths  1
eval/Average Returns  2101.77
eval/normalized_score  65.2017
time/evaluation sampling (s)  0.873444
time/logging (s)  0.00263928
time/sampling batch (s)  0.266691
time/saving (s)  0.00296326
time/training (s)  4.23029
time/epoch (s)  5.37603
time/total (s)  32517
Epoch -580
----------------------------------  ---------------
2022-05-10 22:12:34.209379 PDT | [0] Epoch -579 finished
----------------------------------  ---------------
epoch  -579
replay_buffer/size  999996
trainer/num train calls  422000
trainer/Policy Loss  -2.20029
trainer/Log Pis Mean  2.13604
trainer/Log Pis Std  2.51537
trainer/Log Pis Max  9.88178
trainer/Log Pis Min  -5.82049
trainer/policy/mean Mean  0.152323
trainer/policy/mean Std  0.608369
trainer/policy/mean Max  0.997152
trainer/policy/mean Min  -0.997338
trainer/policy/normal/std Mean  0.378777
trainer/policy/normal/std Std  0.178629
trainer/policy/normal/std Max  1.00107
trainer/policy/normal/std Min  0.0655564
trainer/policy/normal/log_std Mean  -1.11542
trainer/policy/normal/log_std Std  0.581892
trainer/policy/normal/log_std Max  0.00106814
trainer/policy/normal/log_std Min  -2.72485
eval/num steps total  280016
eval/num paths total  545
eval/path length Mean  495
eval/path length Std  88
eval/path length Max  583
eval/path length Min  407
eval/Rewards Mean  3.11315
eval/Rewards Std  0.757117
eval/Rewards Max  4.80219
eval/Rewards Min  0.980989
eval/Returns Mean  1541.01
eval/Returns Std  297.85
eval/Returns Max  1838.86
eval/Returns Min  1243.16
eval/Actions Mean  0.149918
eval/Actions Std  0.602479
eval/Actions Max  0.997963
eval/Actions Min  -0.999217
eval/Num Paths  2
eval/Average Returns  1541.01
eval/normalized_score  47.9719
time/evaluation sampling (s)  0.881688
time/logging (s)  0.00368995
time/sampling batch (s)  0.265846
time/saving (s)  0.0030292
time/training (s)  4.19791
time/epoch (s)  5.35216
time/total (s)  32522.4
Epoch -579
----------------------------------  ---------------
2022-05-10 22:12:39.582672 PDT | [0] Epoch -578 finished
----------------------------------  ---------------
epoch  -578
replay_buffer/size  999996
trainer/num train calls  423000
trainer/Policy Loss  -2.01121
trainer/Log Pis Mean  1.94138
trainer/Log Pis Std  2.56045
trainer/Log Pis Max  9.41561
trainer/Log Pis Min  -5.2649
trainer/policy/mean Mean  0.145389
trainer/policy/mean Std  0.595401
trainer/policy/mean Max  0.996402
trainer/policy/mean Min  -0.997946
trainer/policy/normal/std Mean  0.375746
trainer/policy/normal/std Std  0.181741
trainer/policy/normal/std Max  0.959606
trainer/policy/normal/std Min  0.068601
trainer/policy/normal/log_std Mean  -1.12915
trainer/policy/normal/log_std Std  0.590602
trainer/policy/normal/log_std Max  -0.0412327
trainer/policy/normal/log_std Min  -2.67945
eval/num steps total  280613
eval/num paths total  546
eval/path length Mean  597
eval/path length Std  0
eval/path length Max  597
eval/path length Min  597
eval/Rewards Mean  3.1811
eval/Rewards Std  0.766136
eval/Rewards Max  5.3574
eval/Rewards Min  0.986285
eval/Returns Mean  1899.12
eval/Returns Std  0
eval/Returns Max  1899.12
eval/Returns Min  1899.12
eval/Actions Mean  0.142094
eval/Actions Std  0.591391
eval/Actions Max  0.997711
eval/Actions Min  -0.997363
eval/Num Paths  1
eval/Average Returns  1899.12
eval/normalized_score  58.9752
time/evaluation sampling (s)  0.878797
time/logging (s)  0.00248184
time/sampling batch (s)  0.26612
time/saving (s)  0.00293113
time/training (s)  4.20049
time/epoch (s)  5.35082
time/total (s)  32527.7
Epoch -578
----------------------------------  ---------------
2022-05-10 22:12:44.950523 PDT | [0] Epoch -577 finished
----------------------------------  ---------------
epoch  -577
replay_buffer/size  999996
trainer/num train calls  424000
trainer/Policy Loss  -2.03664
trainer/Log Pis Mean  2.16247
trainer/Log Pis Std  2.59499
trainer/Log Pis Max  10.6208
trainer/Log Pis Min  -4.85356
trainer/policy/mean Mean  0.126372
trainer/policy/mean Std  0.621121
trainer/policy/mean Max  0.996364
trainer/policy/mean Min  -0.998025
trainer/policy/normal/std Mean  0.383528
trainer/policy/normal/std Std  0.184316
trainer/policy/normal/std Max  0.921286
trainer/policy/normal/std Min  0.0667595
trainer/policy/normal/log_std Mean  -1.10357
trainer/policy/normal/log_std Std  0.578495
trainer/policy/normal/log_std Max  -0.081985
trainer/policy/normal/log_std Min  -2.70666
eval/num steps total  281145
eval/num paths total  547
eval/path length Mean  532
eval/path length Std  0
eval/path length Max  532
eval/path length Min  532
eval/Rewards Mean  3.15588
eval/Rewards Std  0.829371
eval/Rewards Max  5.46376
eval/Rewards Min  0.984754
eval/Returns Mean  1678.93
eval/Returns Std  0
eval/Returns Max  1678.93
eval/Returns Min  1678.93
eval/Actions Mean  0.151398
eval/Actions Std  0.586551
eval/Actions Max  0.997857
eval/Actions Min  -0.998473
eval/Num Paths  1
eval/Average Returns  1678.93
eval/normalized_score  52.2097
time/evaluation sampling (s)  0.871652
time/logging (s)  0.0025081
time/sampling batch (s)  0.266007
time/saving (s)  0.00304291
time/training (s)  4.20317
time/epoch (s)  5.34638
time/total (s)  32533.1
Epoch -577
----------------------------------  ---------------
2022-05-10 22:12:50.474702 PDT | [0] Epoch -576 finished
----------------------------------  ---------------
epoch  -576
replay_buffer/size  999996
trainer/num train calls  425000
trainer/Policy Loss  -2.17525
trainer/Log Pis Mean  2.17261
trainer/Log Pis Std  2.66737
trainer/Log Pis Max  11.3107
trainer/Log Pis Min  -7.13707
trainer/policy/mean Mean  0.134172
trainer/policy/mean Std  0.615314
trainer/policy/mean Max  0.996583
trainer/policy/mean Min  -0.998485
trainer/policy/normal/std Mean  0.38737
trainer/policy/normal/std Std  0.184684
trainer/policy/normal/std Max  0.954315
trainer/policy/normal/std Min  0.0701413
trainer/policy/normal/log_std Mean  -1.09622
trainer/policy/normal/log_std Std  0.588327
trainer/policy/normal/log_std Max  -0.0467612
trainer/policy/normal/log_std Min  -2.65724
eval/num steps total  282143
eval/num paths total  549
eval/path length Mean  499
eval/path length Std  1
eval/path length Max  500
eval/path length Min  498
eval/Rewards Mean  3.14461
eval/Rewards Std  0.750319
eval/Rewards Max  4.73149
eval/Rewards Min  0.984094
eval/Returns Mean  1569.16
eval/Returns Std  4.99058
eval/Returns Max  1574.15
eval/Returns Min  1564.17
eval/Actions Mean  0.150149
eval/Actions Std  0.594717
eval/Actions Max  0.998168
eval/Actions Min  -0.996215
eval/Num Paths  2
eval/Average Returns  1569.16
eval/normalized_score  48.8369
time/evaluation sampling (s)  0.979602
time/logging (s)  0.00378708
time/sampling batch (s)  0.272511
time/saving (s)  0.00312028
time/training (s)  4.24454
time/epoch (s)  5.50356
time/total (s)  32538.6
Epoch -576
----------------------------------  ---------------
2022-05-10 22:12:55.920539 PDT | [0] Epoch -575 finished
----------------------------------  ---------------
epoch  -575
replay_buffer/size  999996
trainer/num train calls  426000
trainer/Policy Loss  -2.12393
trainer/Log Pis Mean  2.29476
trainer/Log Pis Std  2.70713
trainer/Log Pis Max  9.75513
trainer/Log Pis Min  -4.42524
trainer/policy/mean Mean  0.138387
trainer/policy/mean Std  0.610821
trainer/policy/mean Max  0.998518
trainer/policy/mean Min  -0.998179
trainer/policy/normal/std Mean  0.374295
trainer/policy/normal/std Std  0.178105
trainer/policy/normal/std Max  0.965697
trainer/policy/normal/std Min  0.0661577
trainer/policy/normal/log_std Mean  -1.12782
trainer/policy/normal/log_std Std  0.581029
trainer/policy/normal/log_std Max  -0.0349056
trainer/policy/normal/log_std Min  -2.71571
eval/num steps total  282637
eval/num paths total  550
eval/path length Mean  494
eval/path length Std  0
eval/path length Max  494
eval/path length Min  494
eval/Rewards Mean  3.20512
eval/Rewards Std  0.770875
eval/Rewards Max  4.93556
eval/Rewards Min  0.987255
eval/Returns Mean  1583.33
eval/Returns Std  0
eval/Returns Max  1583.33
eval/Returns Min  1583.33
eval/Actions Mean  0.16538
eval/Actions Std  0.593712
eval/Actions Max  0.996954
eval/Actions Min  -0.998021
eval/Num Paths  1
eval/Average Returns  1583.33
eval/normalized_score  49.2722
time/evaluation sampling (s)  0.919078
time/logging (s)  0.00227653
time/sampling batch (s)  0.271418
time/saving (s)  0.00300959
time/training (s)  4.22577
time/epoch (s)  5.42155
time/total (s)  32544
Epoch -575
----------------------------------  ---------------
2022-05-10 22:13:01.362532 PDT | [0] Epoch -574 finished
----------------------------------  ---------------
epoch  -574
replay_buffer/size  999996
trainer/num train calls  427000
trainer/Policy Loss  -2.36065
trainer/Log Pis Mean  2.37675
trainer/Log Pis Std  2.63517
trainer/Log Pis Max  14.814
trainer/Log Pis Min  -4.75726
trainer/policy/mean Mean  0.121759
trainer/policy/mean Std  0.626203
trainer/policy/mean Max  0.998379
trainer/policy/mean Min  -0.999333
trainer/policy/normal/std Mean  0.381947
trainer/policy/normal/std Std  0.189093
trainer/policy/normal/std Max  0.912026
trainer/policy/normal/std Min  0.0735743
trainer/policy/normal/log_std Mean  -1.11927
trainer/policy/normal/log_std Std  0.602746
trainer/policy/normal/log_std Max  -0.0920867
trainer/policy/normal/log_std Min  -2.60946
eval/num steps total  283585
eval/num paths total  552
eval/path length Mean  474
eval/path length Std  5
eval/path length Max  479
eval/path length Min  469
eval/Rewards Mean  3.11701
eval/Rewards Std  0.818415
eval/Rewards Max  4.95283
eval/Rewards Min  0.983354
eval/Returns Mean  1477.46
eval/Returns Std  19.0032
eval/Returns Max  1496.46
eval/Returns Min  1458.46
eval/Actions Mean  0.150351
eval/Actions Std  0.590378
eval/Actions Max  0.998299
eval/Actions Min  -0.997917
eval/Num Paths  2
eval/Average Returns  1477.46
eval/normalized_score  46.0194
time/evaluation sampling (s)  0.917002
time/logging (s)  0.00359419
time/sampling batch (s)  0.27152
time/saving (s)  0.00306648
time/training (s)  4.22648
time/epoch (s)  5.42167
time/total (s)  32549.4
Epoch -574
----------------------------------  ---------------
2022-05-10 22:13:06.844838 PDT | [0] Epoch -573 finished
----------------------------------  ---------------
epoch  -573
replay_buffer/size  999996
trainer/num train calls  428000
trainer/Policy Loss  -2.24606
trainer/Log Pis Mean  2.20478
trainer/Log Pis Std  2.57802
trainer/Log Pis Max  11.9799
trainer/Log Pis Min  -4.92285
trainer/policy/mean Mean  0.148615
trainer/policy/mean Std  0.624963
trainer/policy/mean Max  0.997459
trainer/policy/mean Min  -0.998369
trainer/policy/normal/std Mean  0.390349
trainer/policy/normal/std Std  0.18865
trainer/policy/normal/std Max  1.01351
trainer/policy/normal/std Min  0.0707122
trainer/policy/normal/log_std Mean  -1.09147
trainer/policy/normal/log_std Std  0.593664
trainer/policy/normal/log_std Max  0.0134162
trainer/policy/normal/log_std Min  -2.64914
eval/num steps total  284449
eval/num paths total  554
eval/path length Mean  432
eval/path length Std  70
eval/path length Max  502
eval/path length Min  362
eval/Rewards Mean  3.13512
eval/Rewards Std  0.925204
eval/Rewards Max  5.40679
eval/Rewards Min  0.98223
eval/Returns Mean  1354.37
eval/Returns Std  219.06
eval/Returns Max  1573.43
eval/Returns Min  1135.31
eval/Actions Mean  0.145455
eval/Actions Std  0.566538
eval/Actions Max  0.997831
eval/Actions Min  -0.998163
eval/Num Paths  2
eval/Average Returns  1354.37
eval/normalized_score  42.2374
time/evaluation sampling (s)  0.926886
time/logging (s)  0.00336448
time/sampling batch (s)  0.272367
time/saving (s)  0.00309488
time/training (s)  4.25446
time/epoch (s)  5.46018
time/total (s)  32554.9
Epoch -573
----------------------------------  ---------------
2022-05-10 22:13:12.298804 PDT | [0] Epoch -572 finished
----------------------------------  ---------------
epoch  -572
replay_buffer/size  999996
trainer/num train calls  429000
trainer/Policy Loss  -2.37888
trainer/Log Pis Mean  2.53228
trainer/Log Pis Std  2.70393
trainer/Log Pis Max  13.0347
trainer/Log Pis Min  -5.91635
trainer/policy/mean Mean  0.163262
trainer/policy/mean Std  0.629208
trainer/policy/mean Max  0.997063
trainer/policy/mean Min  -0.997788
trainer/policy/normal/std Mean  0.375929
trainer/policy/normal/std Std  0.186446
trainer/policy/normal/std Max  1.01463
trainer/policy/normal/std Min  0.0677823
trainer/policy/normal/log_std Mean  -1.13384
trainer/policy/normal/log_std Std  0.599579
trainer/policy/normal/log_std Max  0.0145205
trainer/policy/normal/log_std Min  -2.69145
eval/num steps total  284995
eval/num paths total  555
eval/path length Mean  546
eval/path length Std  0
eval/path length Max  546
eval/path length Min  546
eval/Rewards Mean  3.221
eval/Rewards Std  0.84086
eval/Rewards Max  4.96333
eval/Rewards Min  0.978731
eval/Returns Mean  1758.67
eval/Returns Std  0
eval/Returns Max  1758.67
eval/Returns Min  1758.67
eval/Actions Mean  0.146566
eval/Actions Std  0.575392
eval/Actions Max  0.998128
eval/Actions Min  -0.998697
eval/Num Paths  1
eval/Average Returns  1758.67
eval/normalized_score  54.6597
time/evaluation sampling (s)  0.89614
time/logging (s)  0.00248113
time/sampling batch (s)  0.273895
time/saving (s)  0.00300515
time/training (s)  4.25555
time/epoch (s)  5.43107
time/total (s)  32560.3
Epoch -572
----------------------------------  ---------------
2022-05-10 22:13:17.744374 PDT | [0] Epoch -571 finished
----------------------------------  ---------------
epoch  -571
replay_buffer/size  999996
trainer/num train calls  430000
trainer/Policy Loss  -2.14707
trainer/Log Pis Mean  2.01022
trainer/Log Pis Std  2.58524
trainer/Log Pis Max  10.5573
trainer/Log Pis Min  -5.91177
trainer/policy/mean Mean  0.123166
trainer/policy/mean Std  0.608183
trainer/policy/mean Max  0.998905
trainer/policy/mean Min  -0.998473
trainer/policy/normal/std Mean  0.387035
trainer/policy/normal/std Std  0.186115
trainer/policy/normal/std Max  1.02253
trainer/policy/normal/std Min  0.0732743
trainer/policy/normal/log_std Mean  -1.09499
trainer/policy/normal/log_std Std  0.579621
trainer/policy/normal/log_std Max  0.0222848
trainer/policy/normal/log_std Min  -2.61355
eval/num steps total  285571
eval/num paths total  556
eval/path length Mean  576
eval/path length Std  0
eval/path length Max  576
eval/path length Min  576
eval/Rewards Mean  3.21269
eval/Rewards Std  0.799422
eval/Rewards Max  4.79498
eval/Rewards Min  0.987979
eval/Returns Mean  1850.51
eval/Returns Std  0
eval/Returns Max  1850.51
eval/Returns Min  1850.51
eval/Actions Mean  0.163235
eval/Actions Std  0.600996
eval/Actions Max  0.998838
eval/Actions Min  -0.998616
eval/Num Paths  1
eval/Average Returns  1850.51
eval/normalized_score  57.4817
time/evaluation sampling (s)  0.89912
time/logging (s)  0.00253113
time/sampling batch (s)  0.272533
time/saving (s)  0.0029726
time/training (s)  4.24658
time/epoch (s)  5.42373
time/total (s)  32565.7
Epoch -571
----------------------------------  ---------------
2022-05-10 22:13:23.185197 PDT | [0] Epoch -570 finished
----------------------------------  ---------------
epoch  -570
replay_buffer/size  999996
trainer/num train calls  431000
trainer/Policy Loss  -2.32929
trainer/Log Pis Mean  2.19475
trainer/Log Pis Std  2.77145
trainer/Log Pis Max  9.41383
trainer/Log Pis Min  -8.10502
trainer/policy/mean Mean  0.131223
trainer/policy/mean Std  0.62889
trainer/policy/mean Max  0.998297
trainer/policy/mean Min  -0.996985
trainer/policy/normal/std Mean  0.389142
trainer/policy/normal/std Std  0.184579
trainer/policy/normal/std Max  0.992394
trainer/policy/normal/std Min  0.0664461
trainer/policy/normal/log_std Mean  -1.08707
trainer/policy/normal/log_std Std  0.576078
trainer/policy/normal/log_std Max  -0.00763474
trainer/policy/normal/log_std Min -2.71136 eval/num steps total 286292 eval/num paths total 558 eval/path length Mean 360.5 eval/path length Std 131.5 eval/path length Max 492 eval/path length Min 229 eval/Rewards Mean 3.03222 eval/Rewards Std 0.938236 eval/Rewards Max 5.55329 eval/Rewards Min 0.98507 eval/Returns Mean 1093.11 eval/Returns Std 453.824 eval/Returns Max 1546.94 eval/Returns Min 639.29 eval/Actions Mean 0.110513 eval/Actions Std 0.559544 eval/Actions Max 0.997543 eval/Actions Min -0.999528 eval/Num Paths 2 eval/Average Returns 1093.11 eval/normalized_score 34.2099 time/evaluation sampling (s) 0.892474 time/logging (s) 0.00298738 time/sampling batch (s) 0.273309 time/saving (s) 0.00299924 time/training (s) 4.24741 time/epoch (s) 5.41918 time/total (s) 32571.2 Epoch -570 ---------------------------------- --------------- 2022-05-10 22:13:28.641560 PDT | [0] Epoch -569 finished ---------------------------------- --------------- epoch -569 replay_buffer/size 999996 trainer/num train calls 432000 trainer/Policy Loss -2.25685 trainer/Log Pis Mean 2.36469 trainer/Log Pis Std 2.38453 trainer/Log Pis Max 8.85133 trainer/Log Pis Min -7.10171 trainer/policy/mean Mean 0.145518 trainer/policy/mean Std 0.623388 trainer/policy/mean Max 0.997423 trainer/policy/mean Min -0.997245 trainer/policy/normal/std Mean 0.383916 trainer/policy/normal/std Std 0.187385 trainer/policy/normal/std Max 1.00235 trainer/policy/normal/std Min 0.0730039 trainer/policy/normal/log_std Mean -1.10795 trainer/policy/normal/log_std Std 0.589832 trainer/policy/normal/log_std Max 0.00234377 trainer/policy/normal/log_std Min -2.61724 eval/num steps total 287115 eval/num paths total 560 eval/path length Mean 411.5 eval/path length Std 4.5 eval/path length Max 416 eval/path length Min 407 eval/Rewards Mean 3.07224 eval/Rewards Std 0.811255 eval/Rewards Max 4.86514 eval/Rewards Min 0.982327 eval/Returns Mean 1264.23 eval/Returns Std 19.7535 eval/Returns Max 1283.98 eval/Returns Min 1244.47 
eval/Actions Mean 0.152725 eval/Actions Std 0.5944 eval/Actions Max 0.997088 eval/Actions Min -0.996554 eval/Num Paths 2 eval/Average Returns 1264.23 eval/normalized_score 39.4675 time/evaluation sampling (s) 0.895695 time/logging (s) 0.00324873 time/sampling batch (s) 0.279308 time/saving (s) 0.00313439 time/training (s) 4.25339 time/epoch (s) 5.43478 time/total (s) 32576.6 Epoch -569 ---------------------------------- --------------- 2022-05-10 22:13:34.084639 PDT | [0] Epoch -568 finished ---------------------------------- --------------- epoch -568 replay_buffer/size 999996 trainer/num train calls 433000 trainer/Policy Loss -2.29379 trainer/Log Pis Mean 2.35506 trainer/Log Pis Std 2.54995 trainer/Log Pis Max 9.87329 trainer/Log Pis Min -6.8199 trainer/policy/mean Mean 0.148228 trainer/policy/mean Std 0.620785 trainer/policy/mean Max 0.998919 trainer/policy/mean Min -0.998031 trainer/policy/normal/std Mean 0.37896 trainer/policy/normal/std Std 0.187715 trainer/policy/normal/std Max 0.971711 trainer/policy/normal/std Min 0.0667335 trainer/policy/normal/log_std Mean -1.12549 trainer/policy/normal/log_std Std 0.599805 trainer/policy/normal/log_std Max -0.0286967 trainer/policy/normal/log_std Min -2.70705 eval/num steps total 287617 eval/num paths total 561 eval/path length Mean 502 eval/path length Std 0 eval/path length Max 502 eval/path length Min 502 eval/Rewards Mean 3.10185 eval/Rewards Std 0.773765 eval/Rewards Max 4.96018 eval/Rewards Min 0.986098 eval/Returns Mean 1557.13 eval/Returns Std 0 eval/Returns Max 1557.13 eval/Returns Min 1557.13 eval/Actions Mean 0.156338 eval/Actions Std 0.584866 eval/Actions Max 0.998521 eval/Actions Min -0.997606 eval/Num Paths 1 eval/Average Returns 1557.13 eval/normalized_score 48.4672 time/evaluation sampling (s) 0.90572 time/logging (s) 0.00235975 time/sampling batch (s) 0.270895 time/saving (s) 0.00299792 time/training (s) 4.23828 time/epoch (s) 5.42025 time/total (s) 32582 Epoch -568 ---------------------------------- 
--------------- 2022-05-10 22:13:39.523271 PDT | [0] Epoch -567 finished ---------------------------------- --------------- epoch -567 replay_buffer/size 999996 trainer/num train calls 434000 trainer/Policy Loss -2.30338 trainer/Log Pis Mean 2.28888 trainer/Log Pis Std 2.5952 trainer/Log Pis Max 9.83557 trainer/Log Pis Min -6.00677 trainer/policy/mean Mean 0.166857 trainer/policy/mean Std 0.621509 trainer/policy/mean Max 0.998465 trainer/policy/mean Min -0.997623 trainer/policy/normal/std Mean 0.382106 trainer/policy/normal/std Std 0.183624 trainer/policy/normal/std Max 0.930342 trainer/policy/normal/std Min 0.0744341 trainer/policy/normal/log_std Mean -1.10912 trainer/policy/normal/log_std Std 0.583377 trainer/policy/normal/log_std Max -0.0722028 trainer/policy/normal/log_std Min -2.59784 eval/num steps total 288435 eval/num paths total 563 eval/path length Mean 409 eval/path length Std 2 eval/path length Max 411 eval/path length Min 407 eval/Rewards Mean 3.07112 eval/Rewards Std 0.800331 eval/Rewards Max 4.71874 eval/Rewards Min 0.978037 eval/Returns Mean 1256.09 eval/Returns Std 9.41097 eval/Returns Max 1265.5 eval/Returns Min 1246.68 eval/Actions Mean 0.148661 eval/Actions Std 0.595459 eval/Actions Max 0.997282 eval/Actions Min -0.999376 eval/Num Paths 2 eval/Average Returns 1256.09 eval/normalized_score 39.2174 time/evaluation sampling (s) 0.8927 time/logging (s) 0.00325079 time/sampling batch (s) 0.272696 time/saving (s) 0.00313845 time/training (s) 4.24594 time/epoch (s) 5.41772 time/total (s) 32587.4 Epoch -567 ---------------------------------- --------------- 2022-05-10 22:13:44.977750 PDT | [0] Epoch -566 finished ---------------------------------- --------------- epoch -566 replay_buffer/size 999996 trainer/num train calls 435000 trainer/Policy Loss -2.22514 trainer/Log Pis Mean 2.21878 trainer/Log Pis Std 2.49767 trainer/Log Pis Max 9.60732 trainer/Log Pis Min -3.82428 trainer/policy/mean Mean 0.131401 trainer/policy/mean Std 0.623889 
trainer/policy/mean Max 0.998132 trainer/policy/mean Min -0.998636 trainer/policy/normal/std Mean 0.381646 trainer/policy/normal/std Std 0.186002 trainer/policy/normal/std Max 0.948937 trainer/policy/normal/std Min 0.0677271 trainer/policy/normal/log_std Mean -1.11671 trainer/policy/normal/log_std Std 0.597882 trainer/policy/normal/log_std Max -0.0524132 trainer/policy/normal/log_std Min -2.69227 eval/num steps total 289020 eval/num paths total 564 eval/path length Mean 585 eval/path length Std 0 eval/path length Max 585 eval/path length Min 585 eval/Rewards Mean 3.12574 eval/Rewards Std 0.749208 eval/Rewards Max 5.02378 eval/Rewards Min 0.984944 eval/Returns Mean 1828.56 eval/Returns Std 0 eval/Returns Max 1828.56 eval/Returns Min 1828.56 eval/Actions Mean 0.153758 eval/Actions Std 0.594883 eval/Actions Max 0.998173 eval/Actions Min -0.997028 eval/Num Paths 1 eval/Average Returns 1828.56 eval/normalized_score 56.8071 time/evaluation sampling (s) 0.897563 time/logging (s) 0.00252968 time/sampling batch (s) 0.271342 time/saving (s) 0.00293758 time/training (s) 4.25728 time/epoch (s) 5.43165 time/total (s) 32592.9 Epoch -566 ---------------------------------- --------------- 2022-05-10 22:13:50.402617 PDT | [0] Epoch -565 finished ---------------------------------- ---------------- epoch -565 replay_buffer/size 999996 trainer/num train calls 436000 trainer/Policy Loss -2.20647 trainer/Log Pis Mean 2.31008 trainer/Log Pis Std 2.59353 trainer/Log Pis Max 9.71773 trainer/Log Pis Min -3.47997 trainer/policy/mean Mean 0.15719 trainer/policy/mean Std 0.616605 trainer/policy/mean Max 0.995645 trainer/policy/mean Min -0.998278 trainer/policy/normal/std Mean 0.386559 trainer/policy/normal/std Std 0.186372 trainer/policy/normal/std Max 1.00025 trainer/policy/normal/std Min 0.0674936 trainer/policy/normal/log_std Mean -1.10215 trainer/policy/normal/log_std Std 0.59671 trainer/policy/normal/log_std Max 0.000252453 trainer/policy/normal/log_std Min -2.69572 eval/num steps total 
289602 eval/num paths total 565 eval/path length Mean 582 eval/path length Std 0 eval/path length Max 582 eval/path length Min 582 eval/Rewards Mean 3.17316 eval/Rewards Std 0.757406 eval/Rewards Max 4.95982 eval/Rewards Min 0.984779 eval/Returns Mean 1846.78 eval/Returns Std 0 eval/Returns Max 1846.78 eval/Returns Min 1846.78 eval/Actions Mean 0.158671 eval/Actions Std 0.588411 eval/Actions Max 0.996712 eval/Actions Min -0.998513 eval/Num Paths 1 eval/Average Returns 1846.78 eval/normalized_score 57.3671 time/evaluation sampling (s) 0.901578 time/logging (s) 0.00252322 time/sampling batch (s) 0.271591 time/saving (s) 0.00303459 time/training (s) 4.22441 time/epoch (s) 5.40313 time/total (s) 32598.3 Epoch -565 ---------------------------------- ---------------- 2022-05-10 22:13:55.827067 PDT | [0] Epoch -564 finished ---------------------------------- --------------- epoch -564 replay_buffer/size 999996 trainer/num train calls 437000 trainer/Policy Loss -2.13094 trainer/Log Pis Mean 2.11759 trainer/Log Pis Std 2.54613 trainer/Log Pis Max 9.63438 trainer/Log Pis Min -4.70027 trainer/policy/mean Mean 0.129389 trainer/policy/mean Std 0.610227 trainer/policy/mean Max 0.996715 trainer/policy/mean Min -0.998516 trainer/policy/normal/std Mean 0.377475 trainer/policy/normal/std Std 0.176117 trainer/policy/normal/std Max 0.9883 trainer/policy/normal/std Min 0.0672981 trainer/policy/normal/log_std Mean -1.11329 trainer/policy/normal/log_std Std 0.568074 trainer/policy/normal/log_std Max -0.0117691 trainer/policy/normal/log_std Min -2.69862 eval/num steps total 290592 eval/num paths total 567 eval/path length Mean 495 eval/path length Std 12 eval/path length Max 507 eval/path length Min 483 eval/Rewards Mean 3.19096 eval/Rewards Std 0.792943 eval/Rewards Max 4.94362 eval/Rewards Min 0.982322 eval/Returns Mean 1579.53 eval/Returns Std 26.4417 eval/Returns Max 1605.97 eval/Returns Min 1553.08 eval/Actions Mean 0.146681 eval/Actions Std 0.596787 eval/Actions Max 0.998453 
eval/Actions Min -0.998686 eval/Num Paths 2 eval/Average Returns 1579.53 eval/normalized_score 49.1554 time/evaluation sampling (s) 0.918616 time/logging (s) 0.00369396 time/sampling batch (s) 0.269285 time/saving (s) 0.00309433 time/training (s) 4.20918 time/epoch (s) 5.40387 time/total (s) 32603.7 Epoch -564 ---------------------------------- --------------- 2022-05-10 22:14:01.251033 PDT | [0] Epoch -563 finished ---------------------------------- --------------- epoch -563 replay_buffer/size 999996 trainer/num train calls 438000 trainer/Policy Loss -2.25576 trainer/Log Pis Mean 2.22788 trainer/Log Pis Std 2.72628 trainer/Log Pis Max 9.36766 trainer/Log Pis Min -5.45324 trainer/policy/mean Mean 0.130755 trainer/policy/mean Std 0.617734 trainer/policy/mean Max 0.999093 trainer/policy/mean Min -0.997974 trainer/policy/normal/std Mean 0.377341 trainer/policy/normal/std Std 0.181894 trainer/policy/normal/std Max 0.903938 trainer/policy/normal/std Min 0.0644549 trainer/policy/normal/log_std Mean -1.12133 trainer/policy/normal/log_std Std 0.581318 trainer/policy/normal/log_std Max -0.100994 trainer/policy/normal/log_std Min -2.74179 eval/num steps total 291157 eval/num paths total 568 eval/path length Mean 565 eval/path length Std 0 eval/path length Max 565 eval/path length Min 565 eval/Rewards Mean 3.17798 eval/Rewards Std 0.771767 eval/Rewards Max 4.87688 eval/Rewards Min 0.987171 eval/Returns Mean 1795.56 eval/Returns Std 0 eval/Returns Max 1795.56 eval/Returns Min 1795.56 eval/Actions Mean 0.149155 eval/Actions Std 0.595495 eval/Actions Max 0.99724 eval/Actions Min -0.998175 eval/Num Paths 1 eval/Average Returns 1795.56 eval/normalized_score 55.7932 time/evaluation sampling (s) 0.909374 time/logging (s) 0.00250187 time/sampling batch (s) 0.26999 time/saving (s) 0.00308048 time/training (s) 4.21576 time/epoch (s) 5.40071 time/total (s) 32609.1 Epoch -563 ---------------------------------- --------------- 2022-05-10 22:14:06.645520 PDT | [0] Epoch -562 finished 
---------------------------------- --------------- epoch -562 replay_buffer/size 999996 trainer/num train calls 439000 trainer/Policy Loss -2.20203 trainer/Log Pis Mean 2.27793 trainer/Log Pis Std 2.66727 trainer/Log Pis Max 13.2066 trainer/Log Pis Min -6.46049 trainer/policy/mean Mean 0.17239 trainer/policy/mean Std 0.622053 trainer/policy/mean Max 0.998291 trainer/policy/mean Min -0.99787 trainer/policy/normal/std Mean 0.384585 trainer/policy/normal/std Std 0.182179 trainer/policy/normal/std Max 1.20377 trainer/policy/normal/std Min 0.0707482 trainer/policy/normal/log_std Mean -1.09848 trainer/policy/normal/log_std Std 0.576533 trainer/policy/normal/log_std Max 0.185454 trainer/policy/normal/log_std Min -2.64863 eval/num steps total 291643 eval/num paths total 569 eval/path length Mean 486 eval/path length Std 0 eval/path length Max 486 eval/path length Min 486 eval/Rewards Mean 3.07463 eval/Rewards Std 0.814631 eval/Rewards Max 4.96074 eval/Rewards Min 0.985081 eval/Returns Mean 1494.27 eval/Returns Std 0 eval/Returns Max 1494.27 eval/Returns Min 1494.27 eval/Actions Mean 0.153564 eval/Actions Std 0.577539 eval/Actions Max 0.997503 eval/Actions Min -0.997881 eval/Num Paths 1 eval/Average Returns 1494.27 eval/normalized_score 46.5359 time/evaluation sampling (s) 0.8987 time/logging (s) 0.00223924 time/sampling batch (s) 0.26901 time/saving (s) 0.00304334 time/training (s) 4.19982 time/epoch (s) 5.37281 time/total (s) 32614.5 Epoch -562 ---------------------------------- --------------- 2022-05-10 22:14:12.037820 PDT | [0] Epoch -561 finished ---------------------------------- --------------- epoch -561 replay_buffer/size 999996 trainer/num train calls 440000 trainer/Policy Loss -2.26505 trainer/Log Pis Mean 2.14846 trainer/Log Pis Std 2.51799 trainer/Log Pis Max 16.8238 trainer/Log Pis Min -5.06563 trainer/policy/mean Mean 0.173679 trainer/policy/mean Std 0.613478 trainer/policy/mean Max 0.996141 trainer/policy/mean Min -0.999849 trainer/policy/normal/std Mean 
0.383153 trainer/policy/normal/std Std 0.185294 trainer/policy/normal/std Max 0.981118 trainer/policy/normal/std Min 0.0698905 trainer/policy/normal/log_std Mean -1.10863 trainer/policy/normal/log_std Std 0.588785 trainer/policy/normal/log_std Max -0.0190623 trainer/policy/normal/log_std Min -2.66082 eval/num steps total 292150 eval/num paths total 570 eval/path length Mean 507 eval/path length Std 0 eval/path length Max 507 eval/path length Min 507 eval/Rewards Mean 3.06104 eval/Rewards Std 0.822071 eval/Rewards Max 5.26295 eval/Rewards Min 0.981375 eval/Returns Mean 1551.95 eval/Returns Std 0 eval/Returns Max 1551.95 eval/Returns Min 1551.95 eval/Actions Mean 0.150618 eval/Actions Std 0.57736 eval/Actions Max 0.996186 eval/Actions Min -0.997387 eval/Num Paths 1 eval/Average Returns 1551.95 eval/normalized_score 48.308 time/evaluation sampling (s) 0.896355 time/logging (s) 0.00234336 time/sampling batch (s) 0.268998 time/saving (s) 0.00303518 time/training (s) 4.19995 time/epoch (s) 5.37068 time/total (s) 32619.8 Epoch -561 ---------------------------------- --------------- 2022-05-10 22:14:17.445781 PDT | [0] Epoch -560 finished ---------------------------------- --------------- epoch -560 replay_buffer/size 999996 trainer/num train calls 441000 trainer/Policy Loss -2.22033 trainer/Log Pis Mean 2.2009 trainer/Log Pis Std 2.4911 trainer/Log Pis Max 9.1799 trainer/Log Pis Min -5.39344 trainer/policy/mean Mean 0.156872 trainer/policy/mean Std 0.60264 trainer/policy/mean Max 0.997666 trainer/policy/mean Min -0.99755 trainer/policy/normal/std Mean 0.378989 trainer/policy/normal/std Std 0.179078 trainer/policy/normal/std Max 0.984306 trainer/policy/normal/std Min 0.0733314 trainer/policy/normal/log_std Mean -1.11463 trainer/policy/normal/log_std Std 0.580127 trainer/policy/normal/log_std Max -0.0158186 trainer/policy/normal/log_std Min -2.61277 eval/num steps total 292683 eval/num paths total 571 eval/path length Mean 533 eval/path length Std 0 eval/path length Max 533 
eval/path length Min 533 eval/Rewards Mean 3.21035 eval/Rewards Std 0.809994 eval/Rewards Max 5.34057 eval/Rewards Min 0.980379 eval/Returns Mean 1711.12 eval/Returns Std 0 eval/Returns Max 1711.12 eval/Returns Min 1711.12 eval/Actions Mean 0.134992 eval/Actions Std 0.586385 eval/Actions Max 0.997534 eval/Actions Min -0.998283 eval/Num Paths 1 eval/Average Returns 1711.12 eval/normalized_score 53.1987 time/evaluation sampling (s) 0.885261 time/logging (s) 0.00239369 time/sampling batch (s) 0.270069 time/saving (s) 0.00300741 time/training (s) 4.22542 time/epoch (s) 5.38615 time/total (s) 32625.2 Epoch -560 ---------------------------------- --------------- 2022-05-10 22:14:22.830679 PDT | [0] Epoch -559 finished ---------------------------------- --------------- epoch -559 replay_buffer/size 999996 trainer/num train calls 442000 trainer/Policy Loss -2.32615 trainer/Log Pis Mean 2.20444 trainer/Log Pis Std 2.55949 trainer/Log Pis Max 11.5715 trainer/Log Pis Min -6.01621 trainer/policy/mean Mean 0.147211 trainer/policy/mean Std 0.612549 trainer/policy/mean Max 0.996422 trainer/policy/mean Min -0.998784 trainer/policy/normal/std Mean 0.380745 trainer/policy/normal/std Std 0.184123 trainer/policy/normal/std Max 0.941771 trainer/policy/normal/std Min 0.0659239 trainer/policy/normal/log_std Mean -1.11722 trainer/policy/normal/log_std Std 0.59625 trainer/policy/normal/log_std Max -0.0599927 trainer/policy/normal/log_std Min -2.71925 eval/num steps total 293502 eval/num paths total 573 eval/path length Mean 409.5 eval/path length Std 80.5 eval/path length Max 490 eval/path length Min 329 eval/Rewards Mean 3.01055 eval/Rewards Std 0.883401 eval/Rewards Max 5.29694 eval/Rewards Min 0.977905 eval/Returns Mean 1232.82 eval/Returns Std 245.637 eval/Returns Max 1478.46 eval/Returns Min 987.185 eval/Actions Mean 0.124769 eval/Actions Std 0.561122 eval/Actions Max 0.996702 eval/Actions Min -0.999328 eval/Num Paths 2 eval/Average Returns 1232.82 eval/normalized_score 38.5026 
time/evaluation sampling (s) 0.884817 time/logging (s) 0.00332376 time/sampling batch (s) 0.266615 time/saving (s) 0.00293726 time/training (s) 4.20656 time/epoch (s) 5.36425 time/total (s) 32630.6 Epoch -559 ---------------------------------- --------------- 2022-05-10 22:14:28.316905 PDT | [0] Epoch -558 finished ---------------------------------- --------------- epoch -558 replay_buffer/size 999996 trainer/num train calls 443000 trainer/Policy Loss -2.22801 trainer/Log Pis Mean 2.21281 trainer/Log Pis Std 2.4556 trainer/Log Pis Max 10.3378 trainer/Log Pis Min -4.96875 trainer/policy/mean Mean 0.163554 trainer/policy/mean Std 0.611606 trainer/policy/mean Max 0.998259 trainer/policy/mean Min -0.998268 trainer/policy/normal/std Mean 0.383178 trainer/policy/normal/std Std 0.185817 trainer/policy/normal/std Max 0.959876 trainer/policy/normal/std Min 0.0682742 trainer/policy/normal/log_std Mean -1.10939 trainer/policy/normal/log_std Std 0.590001 trainer/policy/normal/log_std Max -0.040951 trainer/policy/normal/log_std Min -2.68422 eval/num steps total 294031 eval/num paths total 574 eval/path length Mean 529 eval/path length Std 0 eval/path length Max 529 eval/path length Min 529 eval/Rewards Mean 3.18242 eval/Rewards Std 0.862808 eval/Rewards Max 5.49439 eval/Rewards Min 0.982143 eval/Returns Mean 1683.5 eval/Returns Std 0 eval/Returns Max 1683.5 eval/Returns Min 1683.5 eval/Actions Mean 0.157336 eval/Actions Std 0.582534 eval/Actions Max 0.998472 eval/Actions Min -0.997975 eval/Num Paths 1 eval/Average Returns 1683.5 eval/normalized_score 52.3501 time/evaluation sampling (s) 0.914322 time/logging (s) 0.00235273 time/sampling batch (s) 0.272541 time/saving (s) 0.00300628 time/training (s) 4.2712 time/epoch (s) 5.46342 time/total (s) 32636.1 Epoch -558 ---------------------------------- --------------- 2022-05-10 22:14:33.747717 PDT | [0] Epoch -557 finished ---------------------------------- --------------- epoch -557 replay_buffer/size 999996 trainer/num train calls 
444000 trainer/Policy Loss -2.09396 trainer/Log Pis Mean 2.06367 trainer/Log Pis Std 2.56135 trainer/Log Pis Max 9.47218 trainer/Log Pis Min -4.83879 trainer/policy/mean Mean 0.153418 trainer/policy/mean Std 0.613142 trainer/policy/mean Max 0.99622 trainer/policy/mean Min -0.995757 trainer/policy/normal/std Mean 0.388225 trainer/policy/normal/std Std 0.18043 trainer/policy/normal/std Max 0.971968 trainer/policy/normal/std Min 0.067788 trainer/policy/normal/log_std Mean -1.08552 trainer/policy/normal/log_std Std 0.569671 trainer/policy/normal/log_std Max -0.028432 trainer/policy/normal/log_std Min -2.69137 eval/num steps total 294688 eval/num paths total 575 eval/path length Mean 657 eval/path length Std 0 eval/path length Max 657 eval/path length Min 657 eval/Rewards Mean 3.19645 eval/Rewards Std 0.710471 eval/Rewards Max 4.88241 eval/Rewards Min 0.986328 eval/Returns Mean 2100.07 eval/Returns Std 0 eval/Returns Max 2100.07 eval/Returns Min 2100.07 eval/Actions Mean 0.154966 eval/Actions Std 0.602245 eval/Actions Max 0.998112 eval/Actions Min -0.997742 eval/Num Paths 1 eval/Average Returns 2100.07 eval/normalized_score 65.1496 time/evaluation sampling (s) 0.915259 time/logging (s) 0.00281168 time/sampling batch (s) 0.269438 time/saving (s) 0.00297003 time/training (s) 4.21904 time/epoch (s) 5.40951 time/total (s) 32641.5 Epoch -557 ---------------------------------- --------------- 2022-05-10 22:14:39.214838 PDT | [0] Epoch -556 finished ---------------------------------- --------------- epoch -556 replay_buffer/size 999996 trainer/num train calls 445000 trainer/Policy Loss -2.23254 trainer/Log Pis Mean 2.15298 trainer/Log Pis Std 2.54413 trainer/Log Pis Max 11.3605 trainer/Log Pis Min -5.08727 trainer/policy/mean Mean 0.0974555 trainer/policy/mean Std 0.618934 trainer/policy/mean Max 0.997907 trainer/policy/mean Min -0.998499 trainer/policy/normal/std Mean 0.374317 trainer/policy/normal/std Std 0.177344 trainer/policy/normal/std Max 0.88722 
trainer/policy/normal/std Min 0.0687081 trainer/policy/normal/log_std Mean -1.1228 trainer/policy/normal/log_std Std 0.567417 trainer/policy/normal/log_std Max -0.119662 trainer/policy/normal/log_std Min -2.67789 eval/num steps total 295182 eval/num paths total 576 eval/path length Mean 494 eval/path length Std 0 eval/path length Max 494 eval/path length Min 494 eval/Rewards Mean 3.06303 eval/Rewards Std 0.782929 eval/Rewards Max 4.68833 eval/Rewards Min 0.986371 eval/Returns Mean 1513.14 eval/Returns Std 0 eval/Returns Max 1513.14 eval/Returns Min 1513.14 eval/Actions Mean 0.151738 eval/Actions Std 0.582824 eval/Actions Max 0.997977 eval/Actions Min -0.996488 eval/Num Paths 1 eval/Average Returns 1513.14 eval/normalized_score 47.1155 time/evaluation sampling (s) 0.946851 time/logging (s) 0.00222883 time/sampling batch (s) 0.270166 time/saving (s) 0.00298363 time/training (s) 4.22232 time/epoch (s) 5.44455 time/total (s) 32646.9 Epoch -556 ---------------------------------- --------------- 2022-05-10 22:14:44.642115 PDT | [0] Epoch -555 finished ---------------------------------- --------------- epoch -555 replay_buffer/size 999996 trainer/num train calls 446000 trainer/Policy Loss -2.37505 trainer/Log Pis Mean 2.36292 trainer/Log Pis Std 2.58013 trainer/Log Pis Max 13.7024 trainer/Log Pis Min -4.99659 trainer/policy/mean Mean 0.139517 trainer/policy/mean Std 0.623229 trainer/policy/mean Max 0.997614 trainer/policy/mean Min -0.999153 trainer/policy/normal/std Mean 0.372923 trainer/policy/normal/std Std 0.181382 trainer/policy/normal/std Max 0.898554 trainer/policy/normal/std Min 0.0704237 trainer/policy/normal/log_std Mean -1.13754 trainer/policy/normal/log_std Std 0.591413 trainer/policy/normal/log_std Max -0.106968 trainer/policy/normal/log_std Min -2.65323 eval/num steps total 295681 eval/num paths total 577 eval/path length Mean 499 eval/path length Std 0 eval/path length Max 499 eval/path length Min 499 eval/Rewards Mean 3.10378 eval/Rewards Std 0.76141 
eval/Rewards Max 4.67698 eval/Rewards Min 0.981169 eval/Returns Mean 1548.79 eval/Returns Std 0 eval/Returns Max 1548.79 eval/Returns Min 1548.79 eval/Actions Mean 0.15431 eval/Actions Std 0.590954 eval/Actions Max 0.99702 eval/Actions Min -0.997219 eval/Num Paths 1 eval/Average Returns 1548.79 eval/normalized_score 48.2109 time/evaluation sampling (s) 0.905664 time/logging (s) 0.00224271 time/sampling batch (s) 0.271154 time/saving (s) 0.00295761 time/training (s) 4.22324 time/epoch (s) 5.40526 time/total (s) 32652.3 Epoch -555 ---------------------------------- --------------- 2022-05-10 22:14:50.047950 PDT | [0] Epoch -554 finished ---------------------------------- --------------- epoch -554 replay_buffer/size 999996 trainer/num train calls 447000 trainer/Policy Loss -2.06956 trainer/Log Pis Mean 2.13086 trainer/Log Pis Std 2.45895 trainer/Log Pis Max 10.4132 trainer/Log Pis Min -4.93408 trainer/policy/mean Mean 0.182398 trainer/policy/mean Std 0.603081 trainer/policy/mean Max 0.998595 trainer/policy/mean Min -0.996942 trainer/policy/normal/std Mean 0.388676 trainer/policy/normal/std Std 0.188648 trainer/policy/normal/std Max 1.01086 trainer/policy/normal/std Min 0.0719116 trainer/policy/normal/log_std Mean -1.09601 trainer/policy/normal/log_std Std 0.592704 trainer/policy/normal/log_std Max 0.0108007 trainer/policy/normal/log_std Min -2.63232 eval/num steps total 296527 eval/num paths total 579 eval/path length Mean 423 eval/path length Std 9 eval/path length Max 432 eval/path length Min 414 eval/Rewards Mean 3.15045 eval/Rewards Std 0.811761 eval/Rewards Max 5.39756 eval/Rewards Min 0.986664 eval/Returns Mean 1332.64 eval/Returns Std 44.9183 eval/Returns Max 1377.56 eval/Returns Min 1287.72 eval/Actions Mean 0.147874 eval/Actions Std 0.583731 eval/Actions Max 0.998415 eval/Actions Min -0.997372 eval/Num Paths 2 eval/Average Returns 1332.64 eval/normalized_score 41.5696 time/evaluation sampling (s) 0.894167 time/logging (s) 0.00313668 time/sampling batch (s) 
0.270079 time/saving (s) 0.00294189 time/training (s) 4.21465 time/epoch (s) 5.38498 time/total (s) 32657.7 Epoch -554 ---------------------------------- --------------- 2022-05-10 22:14:55.462011 PDT | [0] Epoch -553 finished ---------------------------------- --------------- epoch -553 replay_buffer/size 999996 trainer/num train calls 448000 trainer/Policy Loss -2.2737 trainer/Log Pis Mean 2.21661 trainer/Log Pis Std 2.62618 trainer/Log Pis Max 14.9152 trainer/Log Pis Min -4.55041 trainer/policy/mean Mean 0.120913 trainer/policy/mean Std 0.621846 trainer/policy/mean Max 0.997072 trainer/policy/mean Min -0.998363 trainer/policy/normal/std Mean 0.376298 trainer/policy/normal/std Std 0.181824 trainer/policy/normal/std Max 0.901922 trainer/policy/normal/std Min 0.0650266 trainer/policy/normal/log_std Mean -1.12998 trainer/policy/normal/log_std Std 0.598884 trainer/policy/normal/log_std Max -0.103227 trainer/policy/normal/log_std Min -2.73296 eval/num steps total 297377 eval/num paths total 582 eval/path length Mean 283.333 eval/path length Std 78.0185 eval/path length Max 362 eval/path length Min 177 eval/Rewards Mean 2.93004 eval/Rewards Std 1.05055 eval/Rewards Max 5.2821 eval/Rewards Min 0.983274 eval/Returns Mean 830.179 eval/Returns Std 294.318 eval/Returns Max 1122.74 eval/Returns Min 427.497 eval/Actions Mean 0.0831658 eval/Actions Std 0.512251 eval/Actions Max 0.997003 eval/Actions Min -0.997946 eval/Num Paths 3 eval/Average Returns 830.179 eval/normalized_score 26.131 time/evaluation sampling (s) 0.899258 time/logging (s) 0.00334481 time/sampling batch (s) 0.270385 time/saving (s) 0.00309753 time/training (s) 4.21624 time/epoch (s) 5.39232 time/total (s) 32663.1 Epoch -553 ---------------------------------- --------------- 2022-05-10 22:15:00.891385 PDT | [0] Epoch -552 finished ---------------------------------- --------------- epoch -552 replay_buffer/size 999996 trainer/num train calls 449000 trainer/Policy Loss -2.1367 trainer/Log Pis Mean 1.99554 
trainer/Log Pis Std 2.53404 trainer/Log Pis Max 9.43914 trainer/Log Pis Min -4.23265 trainer/policy/mean Mean 0.144276 trainer/policy/mean Std 0.610768 trainer/policy/mean Max 0.998314 trainer/policy/mean Min -0.998599 trainer/policy/normal/std Mean 0.383883 trainer/policy/normal/std Std 0.18577 trainer/policy/normal/std Max 0.901833 trainer/policy/normal/std Min 0.0726581 trainer/policy/normal/log_std Mean -1.10742 trainer/policy/normal/log_std Std 0.590392 trainer/policy/normal/log_std Max -0.103326 trainer/policy/normal/log_std Min -2.62199 eval/num steps total 297952 eval/num paths total 583 eval/path length Mean 575 eval/path length Std 0 eval/path length Max 575 eval/path length Min 575 eval/Rewards Mean 3.1977 eval/Rewards Std 0.791469 eval/Rewards Max 4.74935 eval/Rewards Min 0.985717 eval/Returns Mean 1838.68 eval/Returns Std 0 eval/Returns Max 1838.68 eval/Returns Min 1838.68 eval/Actions Mean 0.160354 eval/Actions Std 0.600486 eval/Actions Max 0.997624 eval/Actions Min -0.998892 eval/Num Paths 1 eval/Average Returns 1838.68 eval/normalized_score 57.118 time/evaluation sampling (s) 0.905099 time/logging (s) 0.00251962 time/sampling batch (s) 0.271121 time/saving (s) 0.00310971 time/training (s) 4.22456 time/epoch (s) 5.40641 time/total (s) 32668.5 Epoch -552 ---------------------------------- --------------- 2022-05-10 22:15:06.341165 PDT | [0] Epoch -551 finished ---------------------------------- --------------- epoch -551 replay_buffer/size 999996 trainer/num train calls 450000 trainer/Policy Loss -2.13073 trainer/Log Pis Mean 2.07389 trainer/Log Pis Std 2.62078 trainer/Log Pis Max 9.72679 trainer/Log Pis Min -4.73357 trainer/policy/mean Mean 0.150808 trainer/policy/mean Std 0.609578 trainer/policy/mean Max 0.998685 trainer/policy/mean Min -0.996459 trainer/policy/normal/std Mean 0.381733 trainer/policy/normal/std Std 0.186481 trainer/policy/normal/std Max 0.941406 trainer/policy/normal/std Min 0.0665284 trainer/policy/normal/log_std Mean -1.11563 
trainer/policy/normal/log_std Std  0.595522
trainer/policy/normal/log_std Max  -0.060381
trainer/policy/normal/log_std Min  -2.71013
eval/num steps total  298693
eval/num paths total  584
eval/path length Mean  741
eval/path length Std  0
eval/path length Max  741
eval/path length Min  741
eval/Rewards Mean  3.24056
eval/Rewards Std  0.682574
eval/Rewards Max  4.86303
eval/Rewards Min  0.982581
eval/Returns Mean  2401.26
eval/Returns Std  0
eval/Returns Max  2401.26
eval/Returns Min  2401.26
eval/Actions Mean  0.165838
eval/Actions Std  0.609988
eval/Actions Max  0.998166
eval/Actions Min  -0.998139
eval/Num Paths  1
eval/Average Returns  2401.26
eval/normalized_score  74.404
time/evaluation sampling (s)  0.925372
time/logging (s)  0.00305408
time/sampling batch (s)  0.272576
time/saving (s)  0.00301053
time/training (s)  4.22422
time/epoch (s)  5.42823
time/total (s)  32674
Epoch -551
----------------------------------  ---------------
2022-05-10 22:15:11.699893 PDT | [0] Epoch -550 finished
----------------------------------  ---------------
epoch  -550
replay_buffer/size  999996
trainer/num train calls  451000
trainer/Policy Loss  -2.21007
trainer/Log Pis Mean  2.24959
trainer/Log Pis Std  2.49967
trainer/Log Pis Max  10.4916
trainer/Log Pis Min  -4.00978
trainer/policy/mean Mean  0.157342
trainer/policy/mean Std  0.618394
trainer/policy/mean Max  0.997356
trainer/policy/mean Min  -0.998636
trainer/policy/normal/std Mean  0.385669
trainer/policy/normal/std Std  0.185492
trainer/policy/normal/std Max  1.0012
trainer/policy/normal/std Min  0.0695903
trainer/policy/normal/log_std Mean  -1.09998
trainer/policy/normal/log_std Std  0.585489
trainer/policy/normal/log_std Max  0.00119555
trainer/policy/normal/log_std Min  -2.66513
eval/num steps total  299193
eval/num paths total  585
eval/path length Mean  500
eval/path length Std  0
eval/path length Max  500
eval/path length Min  500
eval/Rewards Mean  3.12696
eval/Rewards Std  0.751478
eval/Rewards Max  4.68355
eval/Rewards Min  0.987623
eval/Returns Mean  1563.48
eval/Returns Std  0
eval/Returns Max  1563.48
eval/Returns Min  1563.48
eval/Actions Mean  0.149842
eval/Actions Std  0.596383
eval/Actions Max  0.996686
eval/Actions Min  -0.996092
eval/Num Paths  1
eval/Average Returns  1563.48
eval/normalized_score  48.6623
time/evaluation sampling (s)  0.879628
time/logging (s)  0.00224527
time/sampling batch (s)  0.271073
time/saving (s)  0.00301127
time/training (s)  4.18051
time/epoch (s)  5.33647
time/total (s)  32679.3
Epoch -550
----------------------------------  ---------------
2022-05-10 22:15:17.060331 PDT | [0] Epoch -549 finished
----------------------------------  ---------------
epoch  -549
replay_buffer/size  999996
trainer/num train calls  452000
trainer/Policy Loss  -2.27523
trainer/Log Pis Mean  2.18317
trainer/Log Pis Std  2.58307
trainer/Log Pis Max  9.12387
trainer/Log Pis Min  -5.99617
trainer/policy/mean Mean  0.11901
trainer/policy/mean Std  0.620514
trainer/policy/mean Max  0.996802
trainer/policy/mean Min  -0.998283
trainer/policy/normal/std Mean  0.376638
trainer/policy/normal/std Std  0.181466
trainer/policy/normal/std Max  0.906839
trainer/policy/normal/std Min  0.072044
trainer/policy/normal/log_std Mean  -1.12342
trainer/policy/normal/log_std Std  0.583272
trainer/policy/normal/log_std Max  -0.0977907
trainer/policy/normal/log_std Min  -2.63048
eval/num steps total  299965
eval/num paths total  586
eval/path length Mean  772
eval/path length Std  0
eval/path length Max  772
eval/path length Min  772
eval/Rewards Mean  3.26688
eval/Rewards Std  0.702803
eval/Rewards Max  5.50631
eval/Rewards Min  0.979355
eval/Returns Mean  2522.04
eval/Returns Std  0
eval/Returns Max  2522.04
eval/Returns Min  2522.04
eval/Actions Mean  0.15173
eval/Actions Std  0.602378
eval/Actions Max  0.997764
eval/Actions Min  -0.997806
eval/Num Paths  1
eval/Average Returns  2522.04
eval/normalized_score  78.1149
time/evaluation sampling (s)  0.872778
time/logging (s)  0.00298717
time/sampling batch (s)  0.266286
time/saving (s)  0.00293866
time/training (s)  4.1948
time/epoch (s)  5.33979
time/total (s)  32684.6
Epoch -549
----------------------------------  ---------------
2022-05-10 22:15:22.435923 PDT | [0] Epoch -548 finished
----------------------------------  ---------------
epoch  -548
replay_buffer/size  999996
trainer/num train calls  453000
trainer/Policy Loss  -2.43022
trainer/Log Pis Mean  2.2197
trainer/Log Pis Std  2.55536
trainer/Log Pis Max  10.0688
trainer/Log Pis Min  -3.42178
trainer/policy/mean Mean  0.152205
trainer/policy/mean Std  0.619211
trainer/policy/mean Max  0.997175
trainer/policy/mean Min  -0.997639
trainer/policy/normal/std Mean  0.385544
trainer/policy/normal/std Std  0.187408
trainer/policy/normal/std Max  0.947259
trainer/policy/normal/std Min  0.0673831
trainer/policy/normal/log_std Mean  -1.1051
trainer/policy/normal/log_std Std  0.594707
trainer/policy/normal/log_std Max  -0.0541828
trainer/policy/normal/log_std Min  -2.69736
eval/num steps total  300551
eval/num paths total  587
eval/path length Mean  586
eval/path length Std  0
eval/path length Max  586
eval/path length Min  586
eval/Rewards Mean  3.11262
eval/Rewards Std  0.745579
eval/Rewards Max  4.73942
eval/Rewards Min  0.987362
eval/Returns Mean  1823.99
eval/Returns Std  0
eval/Returns Max  1823.99
eval/Returns Min  1823.99
eval/Actions Mean  0.166172
eval/Actions Std  0.589652
eval/Actions Max  0.998295
eval/Actions Min  -0.998994
eval/Num Paths  1
eval/Average Returns  1823.99
eval/normalized_score  56.6669
time/evaluation sampling (s)  0.880699
time/logging (s)  0.00245344
time/sampling batch (s)  0.265583
time/saving (s)  0.00300325
time/training (s)  4.20183
time/epoch (s)  5.35357
time/total (s)  32690
Epoch -548
----------------------------------  ---------------
2022-05-10 22:15:27.842840 PDT | [0] Epoch -547 finished
----------------------------------  ---------------
epoch  -547
replay_buffer/size  999996
trainer/num train calls  454000
trainer/Policy Loss  -2.03466
trainer/Log Pis Mean  2.01327
trainer/Log Pis Std  2.6258
trainer/Log Pis Max  10.1876
trainer/Log Pis Min  -5.56504
trainer/policy/mean Mean  0.160101
trainer/policy/mean Std  0.609306
trainer/policy/mean Max  0.998103
trainer/policy/mean Min  -0.996902
trainer/policy/normal/std Mean  0.380481
trainer/policy/normal/std Std  0.180516
trainer/policy/normal/std Max  0.967706
trainer/policy/normal/std Min  0.0705375
trainer/policy/normal/log_std Mean  -1.11176
trainer/policy/normal/log_std Std  0.582983
trainer/policy/normal/log_std Max  -0.0328267
trainer/policy/normal/log_std Min  -2.65161
eval/num steps total  300960
eval/num paths total  588
eval/path length Mean  409
eval/path length Std  0
eval/path length Max  409
eval/path length Min  409
eval/Rewards Mean  3.07827
eval/Rewards Std  0.781206
eval/Rewards Max  4.66369
eval/Rewards Min  0.986555
eval/Returns Mean  1259.01
eval/Returns Std  0
eval/Returns Max  1259.01
eval/Returns Min  1259.01
eval/Actions Mean  0.137103
eval/Actions Std  0.588028
eval/Actions Max  0.996242
eval/Actions Min  -0.996536
eval/Num Paths  1
eval/Average Returns  1259.01
eval/normalized_score  39.3073
time/evaluation sampling (s)  0.876731
time/logging (s)  0.00195514
time/sampling batch (s)  0.267537
time/saving (s)  0.00298527
time/training (s)  4.23552
time/epoch (s)  5.38473
time/total (s)  32695.4
Epoch -547
----------------------------------  ---------------
2022-05-10 22:15:33.245670 PDT | [0] Epoch -546 finished
----------------------------------  ---------------
epoch  -546
replay_buffer/size  999996
trainer/num train calls  455000
trainer/Policy Loss  -2.3338
trainer/Log Pis Mean  2.21348
trainer/Log Pis Std  2.57805
trainer/Log Pis Max  9.38041
trainer/Log Pis Min  -3.99132
trainer/policy/mean Mean  0.171646
trainer/policy/mean Std  0.63132
trainer/policy/mean Max  0.998225
trainer/policy/mean Min  -0.998028
trainer/policy/normal/std Mean  0.392177
trainer/policy/normal/std Std  0.186912
trainer/policy/normal/std Max  0.991348
trainer/policy/normal/std Min  0.0730661
trainer/policy/normal/log_std Mean  -1.08233
trainer/policy/normal/log_std Std  0.583712
trainer/policy/normal/log_std Max  -0.00868938
trainer/policy/normal/log_std Min  -2.61639
eval/num steps total  301542
eval/num paths total  589
eval/path length Mean  582
eval/path length Std  0
eval/path length Max  582
eval/path length Min  582
eval/Rewards Mean  3.24604
eval/Rewards Std  0.731083
eval/Rewards Max  4.74368
eval/Rewards Min  0.988565
eval/Returns Mean  1889.19
eval/Returns Std  0
eval/Returns Max  1889.19
eval/Returns Min  1889.19
eval/Actions Mean  0.173216
eval/Actions Std  0.603827
eval/Actions Max  0.998112
eval/Actions Min  -0.998426
eval/Num Paths  1
eval/Average Returns  1889.19
eval/normalized_score  58.6702
time/evaluation sampling (s)  0.874727
time/logging (s)  0.00251188
time/sampling batch (s)  0.267271
time/saving (s)  0.00299192
time/training (s)  4.23429
time/epoch (s)  5.38179
time/total (s)  32700.8
Epoch -546
----------------------------------  ---------------
2022-05-10 22:15:38.610090 PDT | [0] Epoch -545 finished
----------------------------------  ---------------
epoch  -545
replay_buffer/size  999996
trainer/num train calls  456000
trainer/Policy Loss  -2.2199
trainer/Log Pis Mean  2.09082
trainer/Log Pis Std  2.81594
trainer/Log Pis Max  17.0589
trainer/Log Pis Min  -8.40406
trainer/policy/mean Mean  0.136744
trainer/policy/mean Std  0.622676
trainer/policy/mean Max  0.998773
trainer/policy/mean Min  -0.999579
trainer/policy/normal/std Mean  0.383016
trainer/policy/normal/std Std  0.186549
trainer/policy/normal/std Max  0.990593
trainer/policy/normal/std Min  0.069713
trainer/policy/normal/log_std Mean  -1.10909
trainer/policy/normal/log_std Std  0.58777
trainer/policy/normal/log_std Max  -0.00945157
trainer/policy/normal/log_std Min  -2.66337
eval/num steps total  302038
eval/num paths total  590
eval/path length Mean  496
eval/path length Std  0
eval/path length Max  496
eval/path length Min  496
eval/Rewards Mean  3.12435
eval/Rewards Std  0.752197
eval/Rewards Max  4.65061
eval/Rewards Min  0.986043
eval/Returns Mean  1549.68
eval/Returns Std  0
eval/Returns Max  1549.68
eval/Returns Min  1549.68
eval/Actions Mean  0.142649
eval/Actions Std  0.593185
eval/Actions Max  0.998047
eval/Actions Min  -0.996868
eval/Num Paths  1
eval/Average Returns  1549.68
eval/normalized_score  48.2383
time/evaluation sampling (s)  0.874851
time/logging (s)  0.00224676
time/sampling batch (s)  0.270047
time/saving (s)  0.00300446
time/training (s)  4.1926
time/epoch (s)  5.34275
time/total (s)  32706.1
Epoch -545
----------------------------------  ---------------
2022-05-10 22:15:43.972400 PDT | [0] Epoch -544 finished
----------------------------------  ---------------
epoch  -544
replay_buffer/size  999996
trainer/num train calls  457000
trainer/Policy Loss  -2.04277
trainer/Log Pis Mean  2.07056
trainer/Log Pis Std  2.64215
trainer/Log Pis Max  14.1262
trainer/Log Pis Min  -4.61108
trainer/policy/mean Mean  0.147806
trainer/policy/mean Std  0.59799
trainer/policy/mean Max  0.996454
trainer/policy/mean Min  -0.997797
trainer/policy/normal/std Mean  0.379667
trainer/policy/normal/std Std  0.186594
trainer/policy/normal/std Max  0.956185
trainer/policy/normal/std Min  0.0761272
trainer/policy/normal/log_std Mean  -1.11975
trainer/policy/normal/log_std Std  0.589396
trainer/policy/normal/log_std Max  -0.0448041
trainer/policy/normal/log_std Min  -2.57535
eval/num steps total  302537
eval/num paths total  591
eval/path length Mean  499
eval/path length Std  0
eval/path length Max  499
eval/path length Min  499
eval/Rewards Mean  3.09588
eval/Rewards Std  0.771458
eval/Rewards Max  4.64344
eval/Rewards Min  0.988872
eval/Returns Mean  1544.84
eval/Returns Std  0
eval/Returns Max  1544.84
eval/Returns Min  1544.84
eval/Actions Mean  0.154307
eval/Actions Std  0.591556
eval/Actions Max  0.996423
eval/Actions Min  -0.998812
eval/Num Paths  1
eval/Average Returns  1544.84
eval/normalized_score  48.0897
time/evaluation sampling (s)  0.879908
time/logging (s)  0.0022227
time/sampling batch (s)  0.264244
time/saving (s)  0.00296107
time/training (s)  4.19151
time/epoch (s)  5.34084
time/total (s)  32711.5
Epoch -544
----------------------------------  ---------------
2022-05-10 22:15:49.304829 PDT | [0] Epoch -543 finished
----------------------------------  ---------------
epoch  -543
replay_buffer/size  999996
trainer/num train calls  458000
trainer/Policy Loss  -2.16693
trainer/Log Pis Mean  2.24796
trainer/Log Pis Std  2.82552
trainer/Log Pis Max  13.7079
trainer/Log Pis Min  -4.84921
trainer/policy/mean Mean  0.112714
trainer/policy/mean Std  0.616689
trainer/policy/mean Max  0.999098
trainer/policy/mean Min  -0.999698
trainer/policy/normal/std Mean  0.381854
trainer/policy/normal/std Std  0.185204
trainer/policy/normal/std Max  0.929687
trainer/policy/normal/std Min  0.0636772
trainer/policy/normal/log_std Mean  -1.11042
trainer/policy/normal/log_std Std  0.583453
trainer/policy/normal/log_std Max  -0.0729075
trainer/policy/normal/log_std Min  -2.75393
eval/num steps total  303104
eval/num paths total  592
eval/path length Mean  567
eval/path length Std  0
eval/path length Max  567
eval/path length Min  567
eval/Rewards Mean  3.18324
eval/Rewards Std  0.813969
eval/Rewards Max  4.74707
eval/Rewards Min  0.988546
eval/Returns Mean  1804.9
eval/Returns Std  0
eval/Returns Max  1804.9
eval/Returns Min  1804.9
eval/Actions Mean  0.160231
eval/Actions Std  0.593989
eval/Actions Max  0.998096
eval/Actions Min  -0.999064
eval/Num Paths  1
eval/Average Returns  1804.9
eval/normalized_score  56.0802
time/evaluation sampling (s)  0.871641
time/logging (s)  0.00242383
time/sampling batch (s)  0.263277
time/saving (s)  0.00293906
time/training (s)  4.17119
time/epoch (s)  5.31147
time/total (s)  32716.8
Epoch -543
----------------------------------  ---------------
2022-05-10 22:15:54.629103 PDT | [0] Epoch -542 finished
----------------------------------  ---------------
epoch  -542
replay_buffer/size  999996
trainer/num train calls  459000
trainer/Policy Loss  -2.26369
trainer/Log Pis Mean  2.32218
trainer/Log Pis Std  2.52477
trainer/Log Pis Max  10.304
trainer/Log Pis Min  -4.49878
trainer/policy/mean Mean  0.151776
trainer/policy/mean Std  0.624949
trainer/policy/mean Max  0.998079
trainer/policy/mean Min  -0.99812
trainer/policy/normal/std Mean  0.371793
trainer/policy/normal/std Std  0.175903
trainer/policy/normal/std Max  0.919824
trainer/policy/normal/std Min  0.0702358
trainer/policy/normal/log_std Mean  -1.13068
trainer/policy/normal/log_std Std  0.571038
trainer/policy/normal/log_std Max  -0.0835725
trainer/policy/normal/log_std Min  -2.6559
eval/num steps total  303512
eval/num paths total  593
eval/path length Mean  408
eval/path length Std  0
eval/path length Max  408
eval/path length Min  408
eval/Rewards Mean  3.0694
eval/Rewards Std  0.794775
eval/Rewards Max  4.66479
eval/Rewards Min  0.984474
eval/Returns Mean  1252.32
eval/Returns Std  0
eval/Returns Max  1252.32
eval/Returns Min  1252.32
eval/Actions Mean  0.144431
eval/Actions Std  0.594164
eval/Actions Max  0.996927
eval/Actions Min  -0.996363
eval/Num Paths  1
eval/Average Returns  1252.32
eval/normalized_score  39.1015
time/evaluation sampling (s)  0.874379
time/logging (s)  0.00206094
time/sampling batch (s)  0.262151
time/saving (s)  0.00310061
time/training (s)  4.16113
time/epoch (s)  5.30282
time/total (s)  32722.1
Epoch -542
----------------------------------  ---------------
2022-05-10 22:15:59.979421 PDT | [0] Epoch -541 finished
----------------------------------  ---------------
epoch  -541
replay_buffer/size  999996
trainer/num train calls  460000
trainer/Policy Loss  -2.11094
trainer/Log Pis Mean  2.21188
trainer/Log Pis Std  2.49431
trainer/Log Pis Max  10.9189
trainer/Log Pis Min  -4.01416
trainer/policy/mean Mean  0.129226
trainer/policy/mean Std  0.616006
trainer/policy/mean Max  0.998019
trainer/policy/mean Min  -0.997632
trainer/policy/normal/std Mean  0.370889
trainer/policy/normal/std Std  0.17596
trainer/policy/normal/std Max  0.912879
trainer/policy/normal/std Min  0.0717708
trainer/policy/normal/log_std Mean  -1.1342
trainer/policy/normal/log_std Std  0.572917
trainer/policy/normal/log_std Max  -0.0911516
trainer/policy/normal/log_std Min  -2.63428
eval/num steps total  304170
eval/num paths total  594
eval/path length Mean  658
eval/path length Std  0
eval/path length Max  658
eval/path length Min  658
eval/Rewards Mean  3.17523
eval/Rewards Std  0.716412
eval/Rewards Max  5.16845
eval/Rewards Min  0.981815
eval/Returns Mean  2089.3
eval/Returns Std  0
eval/Returns Max  2089.3
eval/Returns Min  2089.3
eval/Actions Mean  0.14736
eval/Actions Std  0.592491
eval/Actions Max  0.997233
eval/Actions Min  -0.997925
eval/Num Paths  1
eval/Average Returns  2089.3
eval/normalized_score  64.8188
time/evaluation sampling (s)  0.889544
time/logging (s)  0.0025917
time/sampling batch (s)  0.26249
time/saving (s)  0.00287559
time/training (s)  4.17186
time/epoch (s)  5.32936
time/total (s)  32727.4
Epoch -541
----------------------------------  ---------------
2022-05-10 22:16:05.320760 PDT | [0] Epoch -540 finished
----------------------------------  ---------------
epoch  -540
replay_buffer/size  999996
trainer/num train calls  461000
trainer/Policy Loss  -1.93172
trainer/Log Pis Mean  1.9697
trainer/Log Pis Std  2.53205
trainer/Log Pis Max  9.26646
trainer/Log Pis Min  -5.25796
trainer/policy/mean Mean  0.157759
trainer/policy/mean Std  0.60827
trainer/policy/mean Max  0.994937
trainer/policy/mean Min  -0.997164
trainer/policy/normal/std Mean  0.383942
trainer/policy/normal/std Std  0.181431
trainer/policy/normal/std Max  0.954407
trainer/policy/normal/std Min  0.0711225
trainer/policy/normal/log_std Mean  -1.09991
trainer/policy/normal/log_std Std  0.57493
trainer/policy/normal/log_std Max  -0.0466653
trainer/policy/normal/log_std Min  -2.64335
eval/num steps total  305028
eval/num paths total  596
eval/path length Mean  429
eval/path length Std  19
eval/path length Max  448
eval/path length Min  410
eval/Rewards Mean  3.14079
eval/Rewards Std  0.851146
eval/Rewards Max  5.42019
eval/Rewards Min  0.984318
eval/Returns Mean  1347.4
eval/Returns Std  78.8772
eval/Returns Max  1426.28
eval/Returns Min  1268.52
eval/Actions Mean  0.151434
eval/Actions Std  0.586827
eval/Actions Max  0.997017
eval/Actions Min  -0.998024
eval/Num Paths  2
eval/Average Returns  1347.4
eval/normalized_score  42.0231
time/evaluation sampling (s)  0.891322
time/logging (s)  0.00345429
time/sampling batch (s)  0.262635
time/saving (s)  0.00296451
time/training (s)  4.16057
time/epoch (s)  5.32095
time/total (s)  32732.7
Epoch -540
----------------------------------  ---------------
2022-05-10 22:16:10.687540 PDT | [0] Epoch -539 finished
----------------------------------  ---------------
epoch  -539
replay_buffer/size  999996
trainer/num train calls  462000
trainer/Policy Loss  -2.29743
trainer/Log Pis Mean  2.31415
trainer/Log Pis Std  2.64497
trainer/Log Pis Max  9.40826
trainer/Log Pis Min  -4.32956
trainer/policy/mean Mean  0.141216
trainer/policy/mean Std  0.623837
trainer/policy/mean Max  0.996391
trainer/policy/mean Min  -0.998526
trainer/policy/normal/std Mean  0.379374
trainer/policy/normal/std Std  0.180653
trainer/policy/normal/std Max  0.883321
trainer/policy/normal/std Min  0.0664446
trainer/policy/normal/log_std Mean  -1.11509
trainer/policy/normal/log_std Std  0.58311
trainer/policy/normal/log_std Max  -0.124066
trainer/policy/normal/log_std Min  -2.71139
eval/num steps total  305633
eval/num paths total  597
eval/path length Mean  605
eval/path length Std  0
eval/path length Max  605
eval/path length Min  605
eval/Rewards Mean  3.25233
eval/Rewards Std  0.795734
eval/Rewards Max  5.45532
eval/Rewards Min  0.983588
eval/Returns Mean  1967.66
eval/Returns Std  0
eval/Returns Max  1967.66
eval/Returns Min  1967.66
eval/Actions Mean  0.146638
eval/Actions Std  0.590812
eval/Actions Max  0.995857
eval/Actions Min  -0.997876
eval/Num Paths  1
eval/Average Returns  1967.66
eval/normalized_score  61.0812
time/evaluation sampling (s)  0.887756
time/logging (s)  0.00261288
time/sampling batch (s)  0.264765
time/saving (s)  0.00296201
time/training (s)  4.18617
time/epoch (s)  5.34426
time/total (s)  32738.1
Epoch -539
----------------------------------  ---------------
2022-05-10 22:16:16.061208 PDT | [0] Epoch -538 finished
----------------------------------  ---------------
epoch  -538
replay_buffer/size  999996
trainer/num train calls  463000
trainer/Policy Loss  -2.14357
trainer/Log Pis Mean  2.05751
trainer/Log Pis Std  2.52111
trainer/Log Pis Max  10.2101
trainer/Log Pis Min  -6.38121
trainer/policy/mean Mean  0.136803
trainer/policy/mean Std  0.612235
trainer/policy/mean Max  0.996594
trainer/policy/mean Min  -0.99839
trainer/policy/normal/std Mean  0.385933
trainer/policy/normal/std Std  0.189478
trainer/policy/normal/std Max  1.03756
trainer/policy/normal/std Min  0.0654558
trainer/policy/normal/log_std Mean  -1.10421
trainer/policy/normal/log_std Std  0.593177
trainer/policy/normal/log_std Max  0.0368687
trainer/policy/normal/log_std Min  -2.72638
eval/num steps total  306508
eval/num paths total  599
eval/path length Mean  437.5
eval/path length Std  28.5
eval/path length Max  466
eval/path length Min  409
eval/Rewards Mean  3.15457
eval/Rewards Std  0.835172
eval/Rewards Max  5.04079
eval/Rewards Min  0.97962
eval/Returns Mean  1380.13
eval/Returns Std  124.008
eval/Returns Max  1504.13
eval/Returns Min  1256.12
eval/Actions Mean  0.143567
eval/Actions Std  0.582546
eval/Actions Max  0.998012
eval/Actions Min  -0.999444
eval/Num Paths  2
eval/Average Returns  1380.13
eval/normalized_score  43.0287
time/evaluation sampling (s)  0.895588
time/logging (s)  0.00338684
time/sampling batch (s)  0.266383
time/saving (s)  0.00298958
time/training (s)  4.18448
time/epoch (s)  5.35283
time/total (s)  32743.4
Epoch -538
----------------------------------  ---------------
2022-05-10 22:16:21.491692 PDT | [0] Epoch -537 finished
----------------------------------  ---------------
epoch  -537
replay_buffer/size  999996
trainer/num train calls  464000
trainer/Policy Loss  -2.3214
trainer/Log Pis Mean  2.2261
trainer/Log Pis Std  2.52566
trainer/Log Pis Max  8.5293
trainer/Log Pis Min  -5.39597
trainer/policy/mean Mean  0.168315
trainer/policy/mean Std  0.612467
trainer/policy/mean Max  0.998273
trainer/policy/mean Min  -0.998447
trainer/policy/normal/std Mean  0.386735
trainer/policy/normal/std Std  0.185854
trainer/policy/normal/std Max  0.961071
trainer/policy/normal/std Min  0.0722403
trainer/policy/normal/log_std Mean  -1.09962
trainer/policy/normal/log_std Std  0.5909
trainer/policy/normal/log_std Max  -0.0397075
trainer/policy/normal/log_std Min  -2.62776
eval/num steps total  307494
eval/num paths total  601
eval/path length Mean  493
eval/path length Std  11
eval/path length Max  504
eval/path length Min  482
eval/Rewards Mean  3.17057
eval/Rewards Std  0.800172
eval/Rewards Max  4.80551
eval/Rewards Min  0.985597
eval/Returns Mean  1563.09
eval/Returns Std  15.0088
eval/Returns Max  1578.1
eval/Returns Min  1548.08
eval/Actions Mean  0.15558
eval/Actions Std  0.596489
eval/Actions Max  0.998784
eval/Actions Min  -0.999079
eval/Num Paths  2
eval/Average Returns  1563.09
eval/normalized_score  48.6504
time/evaluation sampling (s)  0.954877
time/logging (s)  0.0038544
time/sampling batch (s)  0.265662
time/saving (s)  0.00329884
time/training (s)  4.18169
time/epoch (s)  5.40938
time/total (s)  32748.9
Epoch -537
----------------------------------  ---------------
2022-05-10 22:16:26.862728 PDT | [0] Epoch -536 finished
----------------------------------  ---------------
epoch  -536
replay_buffer/size  999996
trainer/num train calls  465000
trainer/Policy Loss  -2.22721
trainer/Log Pis Mean  2.22202
trainer/Log Pis Std  2.54066
trainer/Log Pis Max  8.90246
trainer/Log Pis Min  -4.73801
trainer/policy/mean Mean  0.126944
trainer/policy/mean Std  0.623874
trainer/policy/mean Max  0.996856
trainer/policy/mean Min  -0.998254
trainer/policy/normal/std Mean  0.378548
trainer/policy/normal/std Std  0.184709
trainer/policy/normal/std Max  1.02187
trainer/policy/normal/std Min  0.0706154
trainer/policy/normal/log_std Mean  -1.12154
trainer/policy/normal/log_std Std  0.589242
trainer/policy/normal/log_std Max  0.0216348
trainer/policy/normal/log_std Min  -2.65051
eval/num steps total  308316
eval/num paths total  602
eval/path length Mean  822
eval/path length Std  0
eval/path length Max  822
eval/path length Min  822
eval/Rewards Mean  3.20457
eval/Rewards Std  0.645474
eval/Rewards Max  4.48501
eval/Rewards Min  0.983349
eval/Returns Mean  2634.15
eval/Returns Std  0
eval/Returns Max  2634.15
eval/Returns Min  2634.15
eval/Actions Mean  0.161176
eval/Actions Std  0.611686
eval/Actions Max  0.997804
eval/Actions Min  -0.998369
eval/Num Paths  1
eval/Average Returns  2634.15
eval/normalized_score  81.5599
time/evaluation sampling (s)  0.884086
time/logging (s)  0.00325082
time/sampling batch (s)  0.266999
time/saving (s)  0.00310552
time/training (s)  4.1912
time/epoch (s)  5.34864
time/total (s)  32754.2
Epoch -536
----------------------------------  ---------------
2022-05-10 22:16:32.242697 PDT | [0] Epoch -535 finished
----------------------------------  ---------------
epoch  -535
replay_buffer/size  999996
trainer/num train calls  466000
trainer/Policy Loss  -2.1048
trainer/Log Pis Mean  2.14039
trainer/Log Pis Std  2.50056
trainer/Log Pis Max  8.70507
trainer/Log Pis Min  -5.92636
trainer/policy/mean Mean  0.152825
trainer/policy/mean Std  0.613734
trainer/policy/mean Max  0.998572
trainer/policy/mean Min  -0.998249
trainer/policy/normal/std Mean  0.386943
trainer/policy/normal/std Std  0.190932
trainer/policy/normal/std Max  0.958865
trainer/policy/normal/std Min  0.0674959
trainer/policy/normal/log_std Mean  -1.10594
trainer/policy/normal/log_std Std  0.603778
trainer/policy/normal/log_std Max  -0.0420054
trainer/policy/normal/log_std Min  -2.69569
eval/num steps total  309229
eval/num paths total  604
eval/path length Mean  456.5
eval/path length Std  44.5
eval/path length Max  501
eval/path length Min  412
eval/Rewards Mean  3.1184
eval/Rewards Std  0.778649
eval/Rewards Max  4.75504
eval/Rewards Min  0.980225
eval/Returns Mean  1423.55
eval/Returns Std  154.426
eval/Returns Max  1577.98
eval/Returns Min  1269.12
eval/Actions Mean  0.151041
eval/Actions Std  0.591641
eval/Actions Max  0.998818
eval/Actions Min  -0.999383
eval/Num Paths  2
eval/Average Returns  1423.55
eval/normalized_score  44.3629
time/evaluation sampling (s)  0.894776
time/logging (s)  0.00345198
time/sampling batch (s)  0.267364
time/saving (s)  0.00300325
time/training (s)  4.18993
time/epoch (s)  5.35852
time/total (s)  32759.6
Epoch -535
----------------------------------  ---------------
2022-05-10 22:16:37.612860 PDT | [0] Epoch -534 finished
----------------------------------  ---------------
epoch  -534
replay_buffer/size  999996
trainer/num train calls  467000
trainer/Policy Loss  -2.25639
trainer/Log Pis Mean  2.46391
trainer/Log Pis Std  2.60383
trainer/Log Pis Max  11.1
trainer/Log Pis Min  -9.80204
trainer/policy/mean Mean  0.125601
trainer/policy/mean Std  0.627527
trainer/policy/mean Max  0.997118
trainer/policy/mean Min  -0.997608
trainer/policy/normal/std Mean  0.379876
trainer/policy/normal/std Std  0.194226
trainer/policy/normal/std Max  1.04456
trainer/policy/normal/std Min  0.0680744
trainer/policy/normal/log_std Mean  -1.12946
trainer/policy/normal/log_std Std  0.60837
trainer/policy/normal/log_std Max  0.0435982
trainer/policy/normal/log_std Min  -2.68715
eval/num steps total  310042
eval/num paths total  606
eval/path length Mean  406.5
eval/path length Std  0.5
eval/path length Max  407
eval/path length Min  406
eval/Rewards Mean  3.06534
eval/Rewards Std  0.791881
eval/Rewards Max  4.59995
eval/Rewards Min  0.980946
eval/Returns Mean  1246.06
eval/Returns Std  0.460815
eval/Returns Max  1246.52
eval/Returns Min  1245.6
eval/Actions Mean  0.142921
eval/Actions Std  0.600043
eval/Actions Max  0.997613
eval/Actions Min  -0.997227
eval/Num Paths  2
eval/Average Returns  1246.06
eval/normalized_score  38.9094
time/evaluation sampling (s)  0.886218
time/logging (s)  0.00318596
time/sampling batch (s)  0.266353
time/saving (s)  0.00295754
time/training (s)  4.18979
time/epoch (s)  5.3485
time/total (s)  32764.9
Epoch -534
----------------------------------  ---------------
2022-05-10 22:16:43.006465 PDT | [0] Epoch -533 finished
----------------------------------  ---------------
epoch  -533
replay_buffer/size  999996
trainer/num train calls  468000
trainer/Policy Loss  -2.28563
trainer/Log Pis Mean  2.22818
trainer/Log Pis Std  2.66712
trainer/Log Pis Max  11.6126
trainer/Log Pis Min  -5.95752
trainer/policy/mean Mean  0.117803
trainer/policy/mean Std  0.622704
trainer/policy/mean Max  0.997477
trainer/policy/mean Min  -0.998293
trainer/policy/normal/std Mean  0.380513
trainer/policy/normal/std Std  0.182649
trainer/policy/normal/std Max  0.945722
trainer/policy/normal/std Min  0.0628737
trainer/policy/normal/log_std Mean  -1.11378
trainer/policy/normal/log_std Std  0.585414
trainer/policy/normal/log_std Max  -0.0558065
trainer/policy/normal/log_std Min  -2.76663
eval/num steps total  310534
eval/num paths total  607
eval/path length Mean  492
eval/path length Std  0
eval/path length Max  492
eval/path length Min  492
eval/Rewards Mean  3.14236
eval/Rewards Std  0.82804
eval/Rewards Max  5.04132
eval/Rewards Min  0.983843
eval/Returns Mean  1546.04
eval/Returns Std  0
eval/Returns Max  1546.04
eval/Returns Min  1546.04
eval/Actions Mean  0.150399
eval/Actions Std  0.584271
eval/Actions Max  0.998373
eval/Actions Min  -0.998408
eval/Num Paths  1
eval/Average Returns  1546.04
eval/normalized_score  48.1266
time/evaluation sampling (s)  0.881929
time/logging (s)  0.00235079
time/sampling batch (s)  0.267384
time/saving (s)  0.0031996
time/training (s)  4.21613
time/epoch (s)  5.37099
time/total (s)  32770.3
Epoch -533
----------------------------------  ---------------
2022-05-10 22:16:48.367768 PDT | [0] Epoch -532 finished
----------------------------------  ---------------
epoch  -532
replay_buffer/size  999996
trainer/num train calls  469000
trainer/Policy Loss  -2.1107
trainer/Log Pis Mean  2.18413
trainer/Log Pis Std  2.53356
trainer/Log Pis Max  12.4334
trainer/Log Pis Min  -3.93844
trainer/policy/mean Mean  0.131969
trainer/policy/mean Std  0.608126
trainer/policy/mean Max  0.998328
trainer/policy/mean Min  -0.999664
trainer/policy/normal/std Mean  0.38381
trainer/policy/normal/std Std  0.180487
trainer/policy/normal/std Max  0.921141
trainer/policy/normal/std Min  0.0661754
trainer/policy/normal/log_std Mean  -1.09833
trainer/policy/normal/log_std Std  0.571411
trainer/policy/normal/log_std Max  -0.082142
trainer/policy/normal/log_std Min  -2.71545
eval/num steps total  311100
eval/num paths total  608
eval/path length Mean  566
eval/path length Std  0
eval/path length Max  566
eval/path length Min  566
eval/Rewards Mean  3.25287
eval/Rewards Std  0.785952
eval/Rewards Max  4.69378
eval/Rewards Min  0.978378
eval/Returns Mean  1841.12
eval/Returns Std  0
eval/Returns Max  1841.12
eval/Returns Min  1841.12
eval/Actions Mean  0.149668
eval/Actions Std  0.604427
eval/Actions Max  0.998519
eval/Actions Min  -0.998545
eval/Num Paths  1
eval/Average Returns  1841.12
eval/normalized_score  57.1933
time/evaluation sampling (s)  0.889755
time/logging (s)  0.00241444
time/sampling batch (s)  0.266192
time/saving (s)  0.00295123
time/training (s)  4.1783
time/epoch (s)  5.33961
time/total (s)  32775.6
Epoch -532
----------------------------------  ---------------
2022-05-10 22:16:53.757704 PDT | [0] Epoch -531 finished
----------------------------------  ---------------
epoch  -531
replay_buffer/size  999996
trainer/num train calls  470000
trainer/Policy Loss  -2.18934
trainer/Log Pis Mean  2.0775
trainer/Log Pis Std  2.63781
trainer/Log Pis Max  12.4669
trainer/Log Pis Min  -6.79447
trainer/policy/mean Mean  0.157033
trainer/policy/mean Std  0.609004
trainer/policy/mean Max  0.997677
trainer/policy/mean Min  -0.997787
trainer/policy/normal/std Mean  0.384756
trainer/policy/normal/std Std  0.189063
trainer/policy/normal/std Max  1.05125
trainer/policy/normal/std Min  0.063088
trainer/policy/normal/log_std Mean  -1.10714
trainer/policy/normal/log_std Std  0.592149
trainer/policy/normal/log_std Max  0.0499763
trainer/policy/normal/log_std Min  -2.76322
eval/num steps total  311773
eval/num paths total  609
eval/path length Mean  673
eval/path length Std  0
eval/path length Max  673
eval/path length Min  673
eval/Rewards Mean  3.18722
eval/Rewards Std  0.746598
eval/Rewards Max  5.37482
eval/Rewards Min  0.990798
eval/Returns Mean  2145
eval/Returns Std  0
eval/Returns Max  2145
eval/Returns Min  2145
eval/Actions Mean  0.145518
eval/Actions Std  0.589076
eval/Actions Max  0.998495
eval/Actions Min  -0.998008
eval/Num Paths  1
eval/Average Returns  2145
eval/normalized_score  66.5302
time/evaluation sampling (s)  0.882919
time/logging (s)  0.00275592
time/sampling batch (s)  0.267918
time/saving (s)  0.00297316
time/training (s)  4.21187
time/epoch (s)  5.36844
time/total (s)  32781
Epoch -531
----------------------------------  ---------------
2022-05-10 22:16:59.114402 PDT | [0] Epoch -530 finished
----------------------------------  ---------------
epoch  -530
replay_buffer/size  999996
trainer/num train calls  471000
trainer/Policy Loss  -2.20859
trainer/Log Pis Mean  2.28547
trainer/Log Pis Std  2.74379
trainer/Log Pis Max  11.9853
trainer/Log Pis Min  -7.34871
trainer/policy/mean Mean  0.127281
trainer/policy/mean Std  0.626619
trainer/policy/mean Max  0.998621
trainer/policy/mean Min  -0.997728
trainer/policy/normal/std Mean  0.383441
trainer/policy/normal/std Std  0.189823
trainer/policy/normal/std Max  1.47193
trainer/policy/normal/std Min  0.0705609
trainer/policy/normal/log_std Mean  -1.11187
trainer/policy/normal/log_std Std  0.594968
trainer/policy/normal/log_std Max  0.386574
trainer/policy/normal/log_std Min  -2.65128
eval/num steps total  312415
eval/num paths total  610
eval/path length Mean  642
eval/path length Std  0
eval/path length Max  642
eval/path length Min  642
eval/Rewards Mean  3.24097
eval/Rewards Std  0.724487
eval/Rewards Max  4.73675
eval/Rewards Min  0.983865
eval/Returns Mean  2080.7
eval/Returns Std  0
eval/Returns Max  2080.7
eval/Returns Min  2080.7
eval/Actions Mean  0.150162
eval/Actions Std  0.609042
eval/Actions Max  0.998361
eval/Actions Min  -0.9982
eval/Num Paths  1
eval/Average Returns  2080.7
eval/normalized_score  64.5545
time/evaluation sampling (s)  0.87909
time/logging (s)  0.00258935
time/sampling batch (s)  0.266267
time/saving (s)  0.00289871
time/training (s)  4.18423
time/epoch (s)  5.33508
time/total (s)  32786.3
Epoch -530
----------------------------------  ---------------
2022-05-10 22:17:04.495359 PDT | [0] Epoch -529 finished
----------------------------------  ---------------
epoch  -529
replay_buffer/size  999996
trainer/num train calls  472000
trainer/Policy Loss  -2.14611
trainer/Log Pis Mean  2.19581
trainer/Log Pis Std  2.52132
trainer/Log Pis Max  12.7643
trainer/Log Pis Min  -4.85902
trainer/policy/mean Mean  0.132618
trainer/policy/mean Std  0.618589
trainer/policy/mean Max  0.995257
trainer/policy/mean Min  -0.998556
trainer/policy/normal/std Mean  0.381424
trainer/policy/normal/std Std  0.181862
trainer/policy/normal/std Max  0.899815
trainer/policy/normal/std Min  0.063822
trainer/policy/normal/log_std Mean  -1.11117
trainer/policy/normal/log_std Std  0.586741
trainer/policy/normal/log_std Max  -0.105567
trainer/policy/normal/log_std Min  -2.75166
eval/num steps total  312974
eval/num paths total  611
eval/path length Mean  559
eval/path length Std  0
eval/path length Max  559
eval/path length Min  559
eval/Rewards Mean  3.19372
eval/Rewards Std  0.805163
eval/Rewards Max  4.71662
eval/Rewards Min  0.977506
eval/Returns Mean  1785.29
eval/Returns Std  0
eval/Returns Max  1785.29
eval/Returns Min  1785.29
eval/Actions Mean  0.149666
eval/Actions Std  0.597887
eval/Actions Max  0.9987
eval/Actions Min  -0.998674
eval/Num Paths  1
eval/Average Returns  1785.29
eval/normalized_score  55.4776
time/evaluation sampling (s)  0.892935
time/logging (s)  0.00243551
time/sampling batch (s)  0.266686
time/saving (s)  0.00314197
time/training (s)  4.19382
time/epoch (s)  5.35902
time/total (s)  32791.7
Epoch -529
----------------------------------  ---------------
2022-05-10 22:17:09.861847 PDT | [0] Epoch -528 finished
----------------------------------  ---------------
epoch  -528
replay_buffer/size  999996
trainer/num train calls  473000
trainer/Policy Loss  -2.26905
trainer/Log Pis Mean  2.28709
trainer/Log Pis Std  2.68036
trainer/Log Pis Max  9.436
trainer/Log Pis Min  -7.28064
trainer/policy/mean Mean  0.130241
trainer/policy/mean Std  0.622933
trainer/policy/mean Max  0.997981
trainer/policy/mean Min  -0.997532
trainer/policy/normal/std Mean  0.386285
trainer/policy/normal/std Std  0.189077
trainer/policy/normal/std Max  0.978271
trainer/policy/normal/std Min  0.0664363
trainer/policy/normal/log_std Mean  -1.10324
trainer/policy/normal/log_std Std  0.592865
trainer/policy/normal/log_std Max  -0.021969
trainer/policy/normal/log_std Min  -2.71151
eval/num steps total  313602
eval/num paths total  613
eval/path length Mean  314
eval/path length Std  6
eval/path length Max  320
eval/path length Min  308
eval/Rewards Mean  2.99129
eval/Rewards Std  0.956092
eval/Rewards Max  5.22669
eval/Rewards Min  0.983622
eval/Returns Mean  939.266
eval/Returns Std  30.3937
eval/Returns Max  969.66
eval/Returns Min  908.872
eval/Actions Mean  0.135758
eval/Actions Std  0.567307
eval/Actions Max  0.99824
eval/Actions Min  -0.996482
eval/Num Paths  2
eval/Average Returns  939.266
eval/normalized_score  29.4828
time/evaluation sampling (s)  0.884101
time/logging (s)  0.00265291
time/sampling batch (s)  0.265406
time/saving (s)  0.00294584
time/training (s)  4.1897
time/epoch (s)  5.34481
time/total (s)  32797.1
Epoch -528
----------------------------------  ---------------
2022-05-10 22:17:15.243895 PDT | [0] Epoch -527 finished
----------------------------------  ---------------
epoch  -527
replay_buffer/size  999996
trainer/num train calls  474000
trainer/Policy Loss  -2.38653
trainer/Log Pis Mean  2.37497
trainer/Log Pis Std  2.57247
trainer/Log Pis Max  10.6255
trainer/Log Pis Min  -5.55183
trainer/policy/mean Mean  0.153958
trainer/policy/mean Std  0.626497
trainer/policy/mean Max  0.998541
trainer/policy/mean Min  -0.998521
trainer/policy/normal/std Mean  0.382628
trainer/policy/normal/std Std  0.189914
trainer/policy/normal/std Max  1.00008
trainer/policy/normal/std Min  0.0656738
trainer/policy/normal/log_std Mean  -1.1191
trainer/policy/normal/log_std Std  0.607178
trainer/policy/normal/log_std Max  8.32046e-05
trainer/policy/normal/log_std Min  -2.72306
eval/num steps total  314094
eval/num paths total  614
eval/path length Mean  492
eval/path length Std  0
eval/path length Max  492
eval/path length Min  492
eval/Rewards Mean  3.04816
eval/Rewards Std  0.787419
eval/Rewards Max  4.62856
eval/Rewards Min  0.980282
eval/Returns Mean  1499.7
eval/Returns Std  0
eval/Returns Max  1499.7
eval/Returns Min  1499.7
eval/Actions Mean  0.152787
eval/Actions Std  0.583631
eval/Actions Max  0.996463
eval/Actions Min  -0.99777
eval/Num Paths  1
eval/Average Returns  1499.7
eval/normalized_score  46.7026
time/evaluation sampling (s)  0.888589
time/logging (s)  0.00225132
time/sampling batch (s)  0.267242
time/saving (s)  0.00298835
time/training (s)  4.19881
time/epoch (s)  5.35988
time/total (s)  32802.4
Epoch -527
----------------------------------  ---------------
2022-05-10 22:17:20.612023 PDT | [0] Epoch -526 finished
----------------------------------  ---------------
epoch  -526
replay_buffer/size  999996
trainer/num train calls  475000
trainer/Policy Loss  -2.3748
trainer/Log Pis Mean  2.21428
trainer/Log Pis Std  2.46434
trainer/Log Pis Max  9.0289
trainer/Log Pis Min  -4.20692
trainer/policy/mean Mean  0.12769
trainer/policy/mean Std  0.622991
trainer/policy/mean Max  0.998594
trainer/policy/mean Min  -0.998596
trainer/policy/normal/std Mean  0.37441
trainer/policy/normal/std Std  0.178512
trainer/policy/normal/std Max  0.847102
trainer/policy/normal/std Min  0.0679552
trainer/policy/normal/log_std Mean  -1.12908
trainer/policy/normal/log_std Std  0.584721
trainer/policy/normal/log_std Max  -0.165934
trainer/policy/normal/log_std Min  -2.68891
eval/num steps total  314587
eval/num paths total  615
eval/path length Mean  493
eval/path length Std  0
eval/path length Max  493
eval/path length Min  493
eval/Rewards Mean  3.28136
eval/Rewards Std  0.84694
eval/Rewards Max  4.704
eval/Rewards Min  0.980215
eval/Returns Mean  1617.71
eval/Returns Std  0
eval/Returns Max  1617.71
eval/Returns Min  1617.71
eval/Actions Mean  0.145091
eval/Actions Std  0.577774
eval/Actions Max  0.994908
eval/Actions Min  -0.998871
eval/Num Paths  1
eval/Average Returns  1617.71
eval/normalized_score  50.3287
time/evaluation sampling (s)  0.888496
time/logging (s)  0.00230085
time/sampling batch (s)  0.266127
time/saving (s)  0.00318935
time/training (s)  4.18636
time/epoch (s)  5.34647
time/total (s)  32807.8
Epoch -526
----------------------------------  ---------------
2022-05-10 22:17:25.993887 PDT | [0] Epoch -525 finished
----------------------------------  ---------------
epoch  -525
replay_buffer/size  999996
trainer/num train calls  476000
trainer/Policy Loss  -2.2585
trainer/Log Pis Mean  2.2366
trainer/Log Pis Std  2.54804
trainer/Log Pis Max  8.93296
trainer/Log Pis Min  -4.38508
trainer/policy/mean Mean  0.13936
trainer/policy/mean Std  0.619977
trainer/policy/mean Max  0.996979
trainer/policy/mean Min  -0.997866
trainer/policy/normal/std Mean  0.378707
trainer/policy/normal/std Std  0.185363
trainer/policy/normal/std Max  0.935223
trainer/policy/normal/std Min  0.0658322
trainer/policy/normal/log_std Mean  -1.12817
trainer/policy/normal/log_std Std  0.607716
trainer/policy/normal/log_std Max  -0.0669706
trainer/policy/normal/log_std Min  -2.72065
eval/num steps total  315104
eval/num paths total  616
eval/path length Mean  517
eval/path length Std  0
eval/path length Max  517
eval/path length Min  517
eval/Rewards Mean  3.10011
eval/Rewards Std  0.827992
eval/Rewards Max  5.31346
eval/Rewards Min  0.985513
eval/Returns Mean  1602.76
eval/Returns Std  0
eval/Returns Max  1602.76
eval/Returns Min  1602.76
eval/Actions Mean  0.138783
eval/Actions Std  0.568487
eval/Actions Max  0.998478
eval/Actions Min  -0.997953
eval/Num Paths  1
eval/Average Returns  1602.76
eval/normalized_score  49.8692
time/evaluation sampling (s)  0.915675
time/logging (s)  0.00231515
time/sampling batch (s)  0.264845
time/saving (s)  0.00292514
time/training (s)  4.17444
time/epoch (s)  5.3602
time/total (s)  32813.1
Epoch -525
----------------------------------  ---------------
2022-05-10
22:17:31.382416 PDT | [0] Epoch -524 finished ---------------------------------- --------------- epoch -524 replay_buffer/size 999996 trainer/num train calls 477000 trainer/Policy Loss -2.53264 trainer/Log Pis Mean 2.58715 trainer/Log Pis Std 2.62364 trainer/Log Pis Max 13.2532 trainer/Log Pis Min -5.23373 trainer/policy/mean Mean 0.121562 trainer/policy/mean Std 0.634264 trainer/policy/mean Max 0.997004 trainer/policy/mean Min -0.99864 trainer/policy/normal/std Mean 0.370279 trainer/policy/normal/std Std 0.180197 trainer/policy/normal/std Max 0.92013 trainer/policy/normal/std Min 0.0675337 trainer/policy/normal/log_std Mean -1.14753 trainer/policy/normal/log_std Std 0.59987 trainer/policy/normal/log_std Max -0.0832406 trainer/policy/normal/log_std Min -2.69513 eval/num steps total 315562 eval/num paths total 617 eval/path length Mean 458 eval/path length Std 0 eval/path length Max 458 eval/path length Min 458 eval/Rewards Mean 3.1996 eval/Rewards Std 0.888767 eval/Rewards Max 5.17565 eval/Rewards Min 0.988225 eval/Returns Mean 1465.42 eval/Returns Std 0 eval/Returns Max 1465.42 eval/Returns Min 1465.42 eval/Actions Mean 0.135216 eval/Actions Std 0.575978 eval/Actions Max 0.998566 eval/Actions Min -0.998212 eval/Num Paths 1 eval/Average Returns 1465.42 eval/normalized_score 45.6493 time/evaluation sampling (s) 0.885183 time/logging (s) 0.00225284 time/sampling batch (s) 0.264377 time/saving (s) 0.00290555 time/training (s) 4.21231 time/epoch (s) 5.36703 time/total (s) 32818.5 Epoch -524 ---------------------------------- --------------- 2022-05-10 22:17:36.735894 PDT | [0] Epoch -523 finished ---------------------------------- --------------- epoch -523 replay_buffer/size 999996 trainer/num train calls 478000 trainer/Policy Loss -2.29346 trainer/Log Pis Mean 2.30779 trainer/Log Pis Std 2.57096 trainer/Log Pis Max 16.0776 trainer/Log Pis Min -6.74443 trainer/policy/mean Mean 0.145428 trainer/policy/mean Std 0.617378 trainer/policy/mean Max 0.998553 
trainer/policy/mean Min -0.999862 trainer/policy/normal/std Mean 0.379826 trainer/policy/normal/std Std 0.185624 trainer/policy/normal/std Max 0.908869 trainer/policy/normal/std Min 0.0670898 trainer/policy/normal/log_std Mean -1.12448 trainer/policy/normal/log_std Std 0.606248 trainer/policy/normal/log_std Max -0.0955546 trainer/policy/normal/log_std Min -2.70172 eval/num steps total 316110 eval/num paths total 618 eval/path length Mean 548 eval/path length Std 0 eval/path length Max 548 eval/path length Min 548 eval/Rewards Mean 3.1817 eval/Rewards Std 0.825958 eval/Rewards Max 4.9142 eval/Rewards Min 0.982916 eval/Returns Mean 1743.57 eval/Returns Std 0 eval/Returns Max 1743.57 eval/Returns Min 1743.57 eval/Actions Mean 0.145944 eval/Actions Std 0.576016 eval/Actions Max 0.99796 eval/Actions Min -0.998401 eval/Num Paths 1 eval/Average Returns 1743.57 eval/normalized_score 54.1959 time/evaluation sampling (s) 0.900267 time/logging (s) 0.00248084 time/sampling batch (s) 0.262637 time/saving (s) 0.00304815 time/training (s) 4.1637 time/epoch (s) 5.33214 time/total (s) 32823.8 Epoch -523 ---------------------------------- --------------- 2022-05-10 22:17:42.094565 PDT | [0] Epoch -522 finished ---------------------------------- --------------- epoch -522 replay_buffer/size 999996 trainer/num train calls 479000 trainer/Policy Loss -2.2348 trainer/Log Pis Mean 2.20559 trainer/Log Pis Std 2.5214 trainer/Log Pis Max 10.2568 trainer/Log Pis Min -4.68196 trainer/policy/mean Mean 0.151785 trainer/policy/mean Std 0.606088 trainer/policy/mean Max 0.997406 trainer/policy/mean Min -0.997311 trainer/policy/normal/std Mean 0.374469 trainer/policy/normal/std Std 0.183655 trainer/policy/normal/std Max 0.916241 trainer/policy/normal/std Min 0.0718074 trainer/policy/normal/log_std Mean -1.13587 trainer/policy/normal/log_std Std 0.596714 trainer/policy/normal/log_std Max -0.0874761 trainer/policy/normal/log_std Min -2.63377 eval/num steps total 316672 eval/num paths total 619 
eval/path length Mean 562 eval/path length Std 0 eval/path length Max 562 eval/path length Min 562 eval/Rewards Mean 3.2138 eval/Rewards Std 0.819201 eval/Rewards Max 4.87561 eval/Rewards Min 0.989649 eval/Returns Mean 1806.16 eval/Returns Std 0 eval/Returns Max 1806.16 eval/Returns Min 1806.16 eval/Actions Mean 0.15741 eval/Actions Std 0.579051 eval/Actions Max 0.99767 eval/Actions Min -0.998425 eval/Num Paths 1 eval/Average Returns 1806.16 eval/normalized_score 56.1189 time/evaluation sampling (s) 0.884718 time/logging (s) 0.00254082 time/sampling batch (s) 0.264209 time/saving (s) 0.00295705 time/training (s) 4.18262 time/epoch (s) 5.33704 time/total (s) 32829.2 Epoch -522 ---------------------------------- --------------- 2022-05-10 22:17:47.443398 PDT | [0] Epoch -521 finished ---------------------------------- --------------- epoch -521 replay_buffer/size 999996 trainer/num train calls 480000 trainer/Policy Loss -2.08361 trainer/Log Pis Mean 2.3436 trainer/Log Pis Std 2.50081 trainer/Log Pis Max 9.30886 trainer/Log Pis Min -4.2973 trainer/policy/mean Mean 0.163811 trainer/policy/mean Std 0.61395 trainer/policy/mean Max 0.997216 trainer/policy/mean Min -0.997787 trainer/policy/normal/std Mean 0.378515 trainer/policy/normal/std Std 0.182289 trainer/policy/normal/std Max 0.892152 trainer/policy/normal/std Min 0.0710963 trainer/policy/normal/log_std Mean -1.121 trainer/policy/normal/log_std Std 0.589794 trainer/policy/normal/log_std Max -0.114119 trainer/policy/normal/log_std Min -2.64372 eval/num steps total 317593 eval/num paths total 621 eval/path length Mean 460.5 eval/path length Std 21.5 eval/path length Max 482 eval/path length Min 439 eval/Rewards Mean 3.20606 eval/Rewards Std 0.869454 eval/Rewards Max 5.57896 eval/Rewards Min 0.981844 eval/Returns Mean 1476.39 eval/Returns Std 75.2881 eval/Returns Max 1551.68 eval/Returns Min 1401.1 eval/Actions Mean 0.151319 eval/Actions Std 0.594685 eval/Actions Max 0.99731 eval/Actions Min -0.998804 eval/Num Paths 2 
eval/Average Returns 1476.39 eval/normalized_score 45.9865 time/evaluation sampling (s) 0.873364 time/logging (s) 0.00368028 time/sampling batch (s) 0.264001 time/saving (s) 0.00293269 time/training (s) 4.18432 time/epoch (s) 5.3283 time/total (s) 32834.5 Epoch -521 ---------------------------------- --------------- 2022-05-10 22:17:52.806069 PDT | [0] Epoch -520 finished ---------------------------------- --------------- epoch -520 replay_buffer/size 999996 trainer/num train calls 481000 trainer/Policy Loss -2.14017 trainer/Log Pis Mean 2.30726 trainer/Log Pis Std 2.47299 trainer/Log Pis Max 14.912 trainer/Log Pis Min -6.60677 trainer/policy/mean Mean 0.108315 trainer/policy/mean Std 0.616901 trainer/policy/mean Max 0.995263 trainer/policy/mean Min -0.998585 trainer/policy/normal/std Mean 0.365571 trainer/policy/normal/std Std 0.180035 trainer/policy/normal/std Max 0.908746 trainer/policy/normal/std Min 0.0579813 trainer/policy/normal/log_std Mean -1.16099 trainer/policy/normal/log_std Std 0.599235 trainer/policy/normal/log_std Max -0.0956895 trainer/policy/normal/log_std Min -2.84763 eval/num steps total 318260 eval/num paths total 622 eval/path length Mean 667 eval/path length Std 0 eval/path length Max 667 eval/path length Min 667 eval/Rewards Mean 3.12726 eval/Rewards Std 0.712196 eval/Rewards Max 4.93685 eval/Rewards Min 0.984074 eval/Returns Mean 2085.88 eval/Returns Std 0 eval/Returns Max 2085.88 eval/Returns Min 2085.88 eval/Actions Mean 0.15836 eval/Actions Std 0.602109 eval/Actions Max 0.998124 eval/Actions Min -0.998867 eval/Num Paths 1 eval/Average Returns 2085.88 eval/normalized_score 64.7137 time/evaluation sampling (s) 0.88067 time/logging (s) 0.00261499 time/sampling batch (s) 0.266389 time/saving (s) 0.00299174 time/training (s) 4.18727 time/epoch (s) 5.33993 time/total (s) 32839.9 Epoch -520 ---------------------------------- --------------- 2022-05-10 22:17:58.135791 PDT | [0] Epoch -519 finished ---------------------------------- 
--------------- epoch -519 replay_buffer/size 999996 trainer/num train calls 482000 trainer/Policy Loss -2.39892 trainer/Log Pis Mean 2.41237 trainer/Log Pis Std 2.60587 trainer/Log Pis Max 11.0152 trainer/Log Pis Min -3.4176 trainer/policy/mean Mean 0.151643 trainer/policy/mean Std 0.621082 trainer/policy/mean Max 0.997483 trainer/policy/mean Min -0.997205 trainer/policy/normal/std Mean 0.371561 trainer/policy/normal/std Std 0.180759 trainer/policy/normal/std Max 0.894663 trainer/policy/normal/std Min 0.0614893 trainer/policy/normal/log_std Mean -1.14427 trainer/policy/normal/log_std Std 0.601059 trainer/policy/normal/log_std Max -0.111308 trainer/policy/normal/log_std Min -2.78889 eval/num steps total 319066 eval/num paths total 623 eval/path length Mean 806 eval/path length Std 0 eval/path length Max 806 eval/path length Min 806 eval/Rewards Mean 3.22599 eval/Rewards Std 0.692322 eval/Rewards Max 4.63909 eval/Rewards Min 0.981179 eval/Returns Mean 2600.15 eval/Returns Std 0 eval/Returns Max 2600.15 eval/Returns Min 2600.15 eval/Actions Mean 0.148167 eval/Actions Std 0.602034 eval/Actions Max 0.998387 eval/Actions Min -0.998449 eval/Num Paths 1 eval/Average Returns 2600.15 eval/normalized_score 80.5151 time/evaluation sampling (s) 0.869527 time/logging (s) 0.00303947 time/sampling batch (s) 0.263023 time/saving (s) 0.00296984 time/training (s) 4.16449 time/epoch (s) 5.30305 time/total (s) 32845.2 Epoch -519 ---------------------------------- --------------- 2022-05-10 22:18:03.568338 PDT | [0] Epoch -518 finished ---------------------------------- --------------- epoch -518 replay_buffer/size 999996 trainer/num train calls 483000 trainer/Policy Loss -2.2988 trainer/Log Pis Mean 2.21653 trainer/Log Pis Std 2.54358 trainer/Log Pis Max 8.7474 trainer/Log Pis Min -6.33984 trainer/policy/mean Mean 0.163129 trainer/policy/mean Std 0.616016 trainer/policy/mean Max 0.996567 trainer/policy/mean Min -0.997662 trainer/policy/normal/std Mean 0.368189 
trainer/policy/normal/std Std 0.184578 trainer/policy/normal/std Max 0.993971 trainer/policy/normal/std Min 0.0664322 trainer/policy/normal/log_std Mean -1.15767 trainer/policy/normal/log_std Std 0.605094 trainer/policy/normal/log_std Max -0.0060469 trainer/policy/normal/log_std Min -2.71157 eval/num steps total 319565 eval/num paths total 624 eval/path length Mean 499 eval/path length Std 0 eval/path length Max 499 eval/path length Min 499 eval/Rewards Mean 3.09281 eval/Rewards Std 0.766106 eval/Rewards Max 4.703 eval/Rewards Min 0.986466 eval/Returns Mean 1543.31 eval/Returns Std 0 eval/Returns Max 1543.31 eval/Returns Min 1543.31 eval/Actions Mean 0.151658 eval/Actions Std 0.592411 eval/Actions Max 0.997307 eval/Actions Min -0.996219 eval/Num Paths 1 eval/Average Returns 1543.31 eval/normalized_score 48.0428 time/evaluation sampling (s) 0.951853 time/logging (s) 0.00235462 time/sampling batch (s) 0.265476 time/saving (s) 0.00299429 time/training (s) 4.18765 time/epoch (s) 5.41033 time/total (s) 32850.6 Epoch -518 ---------------------------------- --------------- 2022-05-10 22:18:08.926858 PDT | [0] Epoch -517 finished ---------------------------------- --------------- epoch -517 replay_buffer/size 999996 trainer/num train calls 484000 trainer/Policy Loss -2.36171 trainer/Log Pis Mean 2.28354 trainer/Log Pis Std 2.6241 trainer/Log Pis Max 9.67181 trainer/Log Pis Min -4.15949 trainer/policy/mean Mean 0.144655 trainer/policy/mean Std 0.615984 trainer/policy/mean Max 0.998038 trainer/policy/mean Min -0.998212 trainer/policy/normal/std Mean 0.381263 trainer/policy/normal/std Std 0.186679 trainer/policy/normal/std Max 0.966236 trainer/policy/normal/std Min 0.0672089 trainer/policy/normal/log_std Mean -1.11964 trainer/policy/normal/log_std Std 0.602603 trainer/policy/normal/log_std Max -0.0343469 trainer/policy/normal/log_std Min -2.69995 eval/num steps total 320144 eval/num paths total 625 eval/path length Mean 579 eval/path length Std 0 eval/path length Max 579 
eval/path length Min 579 eval/Rewards Mean 3.22019 eval/Rewards Std 0.780963 eval/Rewards Max 4.96672 eval/Rewards Min 0.986524 eval/Returns Mean 1864.49 eval/Returns Std 0 eval/Returns Max 1864.49 eval/Returns Min 1864.49 eval/Actions Mean 0.163609 eval/Actions Std 0.594609 eval/Actions Max 0.997519 eval/Actions Min -0.998543 eval/Num Paths 1 eval/Average Returns 1864.49 eval/normalized_score 57.9112 time/evaluation sampling (s) 0.878284 time/logging (s) 0.00248429 time/sampling batch (s) 0.272 time/saving (s) 0.00296603 time/training (s) 4.18128 time/epoch (s) 5.33701 time/total (s) 32855.9 Epoch -517 ---------------------------------- --------------- 2022-05-10 22:18:14.313280 PDT | [0] Epoch -516 finished ---------------------------------- --------------- epoch -516 replay_buffer/size 999996 trainer/num train calls 485000 trainer/Policy Loss -1.9552 trainer/Log Pis Mean 2.16965 trainer/Log Pis Std 2.50069 trainer/Log Pis Max 9.13598 trainer/Log Pis Min -5.15353 trainer/policy/mean Mean 0.151827 trainer/policy/mean Std 0.595 trainer/policy/mean Max 0.995627 trainer/policy/mean Min -0.996495 trainer/policy/normal/std Mean 0.372646 trainer/policy/normal/std Std 0.181407 trainer/policy/normal/std Max 1.06329 trainer/policy/normal/std Min 0.0669489 trainer/policy/normal/log_std Mean -1.13746 trainer/policy/normal/log_std Std 0.58989 trainer/policy/normal/log_std Max 0.061366 trainer/policy/normal/log_std Min -2.70383 eval/num steps total 321057 eval/num paths total 627 eval/path length Mean 456.5 eval/path length Std 17.5 eval/path length Max 474 eval/path length Min 439 eval/Rewards Mean 3.181 eval/Rewards Std 0.879652 eval/Rewards Max 5.52776 eval/Rewards Min 0.979818 eval/Returns Mean 1452.13 eval/Returns Std 59.056 eval/Returns Max 1511.18 eval/Returns Min 1393.07 eval/Actions Mean 0.145801 eval/Actions Std 0.589622 eval/Actions Max 0.998573 eval/Actions Min -0.999121 eval/Num Paths 2 eval/Average Returns 1452.13 eval/normalized_score 45.2409 time/evaluation 
sampling (s) 0.889545 time/logging (s) 0.00342853 time/sampling batch (s) 0.265405 time/saving (s) 0.00297106 time/training (s) 4.20443 time/epoch (s) 5.36578 time/total (s) 32861.3 Epoch -516 ---------------------------------- --------------- 2022-05-10 22:18:19.672541 PDT | [0] Epoch -515 finished ---------------------------------- --------------- epoch -515 replay_buffer/size 999996 trainer/num train calls 486000 trainer/Policy Loss -2.21292 trainer/Log Pis Mean 2.25868 trainer/Log Pis Std 2.57558 trainer/Log Pis Max 8.92906 trainer/Log Pis Min -7.59396 trainer/policy/mean Mean 0.148273 trainer/policy/mean Std 0.617989 trainer/policy/mean Max 0.996481 trainer/policy/mean Min -0.9979 trainer/policy/normal/std Mean 0.377932 trainer/policy/normal/std Std 0.183902 trainer/policy/normal/std Max 0.976364 trainer/policy/normal/std Min 0.0721214 trainer/policy/normal/log_std Mean -1.12555 trainer/policy/normal/log_std Std 0.595328 trainer/policy/normal/log_std Max -0.02392 trainer/policy/normal/log_std Min -2.6294 eval/num steps total 321564 eval/num paths total 628 eval/path length Mean 507 eval/path length Std 0 eval/path length Max 507 eval/path length Min 507 eval/Rewards Mean 3.04878 eval/Rewards Std 0.78727 eval/Rewards Max 4.936 eval/Rewards Min 0.981997 eval/Returns Mean 1545.73 eval/Returns Std 0 eval/Returns Max 1545.73 eval/Returns Min 1545.73 eval/Actions Mean 0.1651 eval/Actions Std 0.586141 eval/Actions Max 0.997181 eval/Actions Min -0.997131 eval/Num Paths 1 eval/Average Returns 1545.73 eval/normalized_score 48.117 time/evaluation sampling (s) 0.88009 time/logging (s) 0.00235372 time/sampling batch (s) 0.26509 time/saving (s) 0.00310554 time/training (s) 4.18572 time/epoch (s) 5.33635 time/total (s) 32866.6 Epoch -515 ---------------------------------- --------------- 2022-05-10 22:18:25.040327 PDT | [0] Epoch -514 finished ---------------------------------- --------------- epoch -514 replay_buffer/size 999996 trainer/num train calls 487000 trainer/Policy 
Loss -2.21133 trainer/Log Pis Mean 2.18791 trainer/Log Pis Std 2.60604 trainer/Log Pis Max 9.59404 trainer/Log Pis Min -4.86822 trainer/policy/mean Mean 0.123161 trainer/policy/mean Std 0.609838 trainer/policy/mean Max 0.998304 trainer/policy/mean Min -0.996561 trainer/policy/normal/std Mean 0.378674 trainer/policy/normal/std Std 0.184296 trainer/policy/normal/std Max 1.04639 trainer/policy/normal/std Min 0.0734962 trainer/policy/normal/log_std Mean -1.11959 trainer/policy/normal/log_std Std 0.584617 trainer/policy/normal/log_std Max 0.0453456 trainer/policy/normal/log_std Min -2.61052 eval/num steps total 322228 eval/num paths total 629 eval/path length Mean 664 eval/path length Std 0 eval/path length Max 664 eval/path length Min 664 eval/Rewards Mean 3.20029 eval/Rewards Std 0.686014 eval/Rewards Max 4.66061 eval/Rewards Min 0.988223 eval/Returns Mean 2125 eval/Returns Std 0 eval/Returns Max 2125 eval/Returns Min 2125 eval/Actions Mean 0.163905 eval/Actions Std 0.610506 eval/Actions Max 0.997952 eval/Actions Min -0.997552 eval/Num Paths 1 eval/Average Returns 2125 eval/normalized_score 65.9155 time/evaluation sampling (s) 0.881264 time/logging (s) 0.00271934 time/sampling batch (s) 0.266302 time/saving (s) 0.00293834 time/training (s) 4.19307 time/epoch (s) 5.34629 time/total (s) 32872 Epoch -514 ---------------------------------- --------------- 2022-05-10 22:18:30.413175 PDT | [0] Epoch -513 finished ---------------------------------- --------------- epoch -513 replay_buffer/size 999996 trainer/num train calls 488000 trainer/Policy Loss -2.13754 trainer/Log Pis Mean 2.21689 trainer/Log Pis Std 2.56951 trainer/Log Pis Max 10.3209 trainer/Log Pis Min -4.24594 trainer/policy/mean Mean 0.133596 trainer/policy/mean Std 0.622507 trainer/policy/mean Max 0.997914 trainer/policy/mean Min -0.998123 trainer/policy/normal/std Mean 0.392549 trainer/policy/normal/std Std 0.18771 trainer/policy/normal/std Max 1.00675 trainer/policy/normal/std Min 0.0694527 
trainer/policy/normal/log_std Mean -1.0814 trainer/policy/normal/log_std Std 0.583317 trainer/policy/normal/log_std Max 0.00673027 trainer/policy/normal/log_std Min -2.66711 eval/num steps total 322725 eval/num paths total 630 eval/path length Mean 497 eval/path length Std 0 eval/path length Max 497 eval/path length Min 497 eval/Rewards Mean 3.0974 eval/Rewards Std 0.749682 eval/Rewards Max 4.61606 eval/Rewards Min 0.981731 eval/Returns Mean 1539.41 eval/Returns Std 0 eval/Returns Max 1539.41 eval/Returns Min 1539.41 eval/Actions Mean 0.16496 eval/Actions Std 0.599313 eval/Actions Max 0.998313 eval/Actions Min -0.99665 eval/Num Paths 1 eval/Average Returns 1539.41 eval/normalized_score 47.9228 time/evaluation sampling (s) 0.91181 time/logging (s) 0.00228159 time/sampling batch (s) 0.265213 time/saving (s) 0.00300041 time/training (s) 4.16826 time/epoch (s) 5.35056 time/total (s) 32877.3 Epoch -513 ---------------------------------- --------------- 2022-05-10 22:18:35.801172 PDT | [0] Epoch -512 finished ---------------------------------- --------------- epoch -512 replay_buffer/size 999996 trainer/num train calls 489000 trainer/Policy Loss -2.28924 trainer/Log Pis Mean 2.23237 trainer/Log Pis Std 2.65368 trainer/Log Pis Max 10.9372 trainer/Log Pis Min -5.47366 trainer/policy/mean Mean 0.152573 trainer/policy/mean Std 0.622138 trainer/policy/mean Max 0.998738 trainer/policy/mean Min -0.997813 trainer/policy/normal/std Mean 0.381462 trainer/policy/normal/std Std 0.188714 trainer/policy/normal/std Max 0.945644 trainer/policy/normal/std Min 0.0674404 trainer/policy/normal/log_std Mean -1.12351 trainer/policy/normal/log_std Std 0.611795 trainer/policy/normal/log_std Max -0.0558894 trainer/policy/normal/log_std Min -2.69651 eval/num steps total 323250 eval/num paths total 631 eval/path length Mean 525 eval/path length Std 0 eval/path length Max 525 eval/path length Min 525 eval/Rewards Mean 3.18102 eval/Rewards Std 0.855022 eval/Rewards Max 5.47372 eval/Rewards Min 
0.985536 eval/Returns Mean 1670.04 eval/Returns Std 0 eval/Returns Max 1670.04 eval/Returns Min 1670.04 eval/Actions Mean 0.148963 eval/Actions Std 0.583868 eval/Actions Max 0.998127 eval/Actions Min -0.998192 eval/Num Paths 1 eval/Average Returns 1670.04 eval/normalized_score 51.9364 time/evaluation sampling (s) 0.911321 time/logging (s) 0.00238453 time/sampling batch (s) 0.26516 time/saving (s) 0.0029801 time/training (s) 4.18447 time/epoch (s) 5.36632 time/total (s) 32882.7 Epoch -512 ---------------------------------- --------------- 2022-05-10 22:18:41.182578 PDT | [0] Epoch -511 finished ---------------------------------- --------------- epoch -511 replay_buffer/size 999996 trainer/num train calls 490000 trainer/Policy Loss -2.04654 trainer/Log Pis Mean 2.03719 trainer/Log Pis Std 2.46141 trainer/Log Pis Max 10.3517 trainer/Log Pis Min -5.30329 trainer/policy/mean Mean 0.141918 trainer/policy/mean Std 0.600917 trainer/policy/mean Max 0.997298 trainer/policy/mean Min -0.997682 trainer/policy/normal/std Mean 0.38644 trainer/policy/normal/std Std 0.190101 trainer/policy/normal/std Max 0.99211 trainer/policy/normal/std Min 0.0677887 trainer/policy/normal/log_std Mean -1.10247 trainer/policy/normal/log_std Std 0.591232 trainer/policy/normal/log_std Max -0.0079217 trainer/policy/normal/log_std Min -2.69136 eval/num steps total 323752 eval/num paths total 632 eval/path length Mean 502 eval/path length Std 0 eval/path length Max 502 eval/path length Min 502 eval/Rewards Mean 3.21322 eval/Rewards Std 0.801797 eval/Rewards Max 5.26762 eval/Rewards Min 0.987587 eval/Returns Mean 1613.04 eval/Returns Std 0 eval/Returns Max 1613.04 eval/Returns Min 1613.04 eval/Actions Mean 0.146467 eval/Actions Std 0.595179 eval/Actions Max 0.99668 eval/Actions Min -0.998287 eval/Num Paths 1 eval/Average Returns 1613.04 eval/normalized_score 50.185 time/evaluation sampling (s) 0.880077 time/logging (s) 0.00236222 time/sampling batch (s) 0.266935 time/saving (s) 0.00303413 time/training 
(s) 4.20693 time/epoch (s) 5.35934 time/total (s) 32888.1 Epoch -511 ---------------------------------- --------------- 2022-05-10 22:18:46.546965 PDT | [0] Epoch -510 finished ---------------------------------- --------------- epoch -510 replay_buffer/size 999996 trainer/num train calls 491000 trainer/Policy Loss -2.30345 trainer/Log Pis Mean 2.20974 trainer/Log Pis Std 2.63251 trainer/Log Pis Max 9.7393 trainer/Log Pis Min -4.31732 trainer/policy/mean Mean 0.166185 trainer/policy/mean Std 0.610498 trainer/policy/mean Max 0.998349 trainer/policy/mean Min -0.997746 trainer/policy/normal/std Mean 0.384074 trainer/policy/normal/std Std 0.185951 trainer/policy/normal/std Max 0.986885 trainer/policy/normal/std Min 0.0684622 trainer/policy/normal/log_std Mean -1.10822 trainer/policy/normal/log_std Std 0.595091 trainer/policy/normal/log_std Max -0.0132021 trainer/policy/normal/log_std Min -2.68147 eval/num steps total 324253 eval/num paths total 633 eval/path length Mean 501 eval/path length Std 0 eval/path length Max 501 eval/path length Min 501 eval/Rewards Mean 3.09469 eval/Rewards Std 0.790269 eval/Rewards Max 4.85934 eval/Rewards Min 0.984571 eval/Returns Mean 1550.44 eval/Returns Std 0 eval/Returns Max 1550.44 eval/Returns Min 1550.44 eval/Actions Mean 0.160585 eval/Actions Std 0.586793 eval/Actions Max 0.997347 eval/Actions Min -0.998647 eval/Num Paths 1 eval/Average Returns 1550.44 eval/normalized_score 48.2616 time/evaluation sampling (s) 0.907459 time/logging (s) 0.00224286 time/sampling batch (s) 0.263182 time/saving (s) 0.00308247 time/training (s) 4.16651 time/epoch (s) 5.34248 time/total (s) 32893.4 Epoch -510 ---------------------------------- --------------- 2022-05-10 22:18:51.899552 PDT | [0] Epoch -509 finished ---------------------------------- --------------- epoch -509 replay_buffer/size 999996 trainer/num train calls 492000 trainer/Policy Loss -2.20651 trainer/Log Pis Mean 2.29438 trainer/Log Pis Std 2.61946 trainer/Log Pis Max 12.4221 trainer/Log 
Pis Min -4.32733 trainer/policy/mean Mean 0.129944 trainer/policy/mean Std 0.637788 trainer/policy/mean Max 0.997167 trainer/policy/mean Min -0.998725 trainer/policy/normal/std Mean 0.395356 trainer/policy/normal/std Std 0.195001 trainer/policy/normal/std Max 1.0449 trainer/policy/normal/std Min 0.0715472 trainer/policy/normal/log_std Mean -1.08317 trainer/policy/normal/log_std Std 0.601002 trainer/policy/normal/log_std Max 0.0439247 trainer/policy/normal/log_std Min -2.6374 eval/num steps total 324731 eval/num paths total 634 eval/path length Mean 478 eval/path length Std 0 eval/path length Max 478 eval/path length Min 478 eval/Rewards Mean 3.07176 eval/Rewards Std 0.831731 eval/Rewards Max 4.74894 eval/Rewards Min 0.97924 eval/Returns Mean 1468.3 eval/Returns Std 0 eval/Returns Max 1468.3 eval/Returns Min 1468.3 eval/Actions Mean 0.140243 eval/Actions Std 0.572008 eval/Actions Max 0.995941 eval/Actions Min -0.997867 eval/Num Paths 1 eval/Average Returns 1468.3 eval/normalized_score 45.7379 time/evaluation sampling (s) 0.88194 time/logging (s) 0.00222573 time/sampling batch (s) 0.264831 time/saving (s) 0.00297542 time/training (s) 4.17885 time/epoch (s) 5.33083 time/total (s) 32898.7 Epoch -509 ---------------------------------- --------------- 2022-05-10 22:18:57.277696 PDT | [0] Epoch -508 finished ---------------------------------- --------------- epoch -508 replay_buffer/size 999996 trainer/num train calls 493000 trainer/Policy Loss -2.19622 trainer/Log Pis Mean 2.0551 trainer/Log Pis Std 2.53904 trainer/Log Pis Max 10.0417 trainer/Log Pis Min -5.54294 trainer/policy/mean Mean 0.163428 trainer/policy/mean Std 0.595323 trainer/policy/mean Max 0.997856 trainer/policy/mean Min -0.99721 trainer/policy/normal/std Mean 0.381113 trainer/policy/normal/std Std 0.186553 trainer/policy/normal/std Max 1.02029 trainer/policy/normal/std Min 0.0719061 trainer/policy/normal/log_std Mean -1.11759 trainer/policy/normal/log_std Std 0.595599 trainer/policy/normal/log_std Max 
0.0200876
trainer/policy/normal/log_std Min  -2.63239
eval/num steps total  325240
eval/num paths total  635
eval/path length Mean  509
eval/path length Std  0
eval/path length Max  509
eval/path length Min  509
eval/Rewards Mean  3.14266
eval/Rewards Std  0.777232
eval/Rewards Max  4.9558
eval/Rewards Min  0.989936
eval/Returns Mean  1599.61
eval/Returns Std  0
eval/Returns Max  1599.61
eval/Returns Min  1599.61
eval/Actions Mean  0.165817
eval/Actions Std  0.592046
eval/Actions Max  0.998553
eval/Actions Min  -0.999054
eval/Num Paths  1
eval/Average Returns  1599.61
eval/normalized_score  49.7726
time/evaluation sampling (s)  0.877834
time/logging (s)  0.00233781
time/sampling batch (s)  0.26669
time/saving (s)  0.00298435
time/training (s)  4.20659
time/epoch (s)  5.35644
time/total (s)  32904.1
Epoch -508
----------------------------------  ----------------
2022-05-10 22:19:02.663271 PDT | [0] Epoch -507 finished
----------------------------------  ----------------
epoch  -507
replay_buffer/size  999996
trainer/num train calls  494000
trainer/Policy Loss  -2.2175
trainer/Log Pis Mean  2.22123
trainer/Log Pis Std  2.42269
trainer/Log Pis Max  10.7196
trainer/Log Pis Min  -6.08015
trainer/policy/mean Mean  0.129488
trainer/policy/mean Std  0.616842
trainer/policy/mean Max  0.998228
trainer/policy/mean Min  -0.999624
trainer/policy/normal/std Mean  0.378597
trainer/policy/normal/std Std  0.184013
trainer/policy/normal/std Max  1.00026
trainer/policy/normal/std Min  0.0647042
trainer/policy/normal/log_std Mean  -1.1222
trainer/policy/normal/log_std Std  0.592068
trainer/policy/normal/log_std Max  0.000257697
trainer/policy/normal/log_std Min  -2.73793
eval/num steps total  326204
eval/num paths total  637
eval/path length Mean  482
eval/path length Std  6
eval/path length Max  488
eval/path length Min  476
eval/Rewards Mean  3.15515
eval/Rewards Std  0.833461
eval/Rewards Max  4.95565
eval/Rewards Min  0.987684
eval/Returns Mean  1520.78
eval/Returns Std  11.4254
eval/Returns Max  1532.21
eval/Returns Min  1509.36
eval/Actions Mean  0.147569
eval/Actions Std  0.594222
eval/Actions Max  0.997395
eval/Actions Min  -0.999096
eval/Num Paths  2
eval/Average Returns  1520.78
eval/normalized_score  47.3505
time/evaluation sampling (s)  0.893993
time/logging (s)  0.0036388
time/sampling batch (s)  0.267164
time/saving (s)  0.00304741
time/training (s)  4.19737
time/epoch (s)  5.36522
time/total (s)  32909.5
Epoch -507
----------------------------------  ----------------
2022-05-10 22:19:08.047545 PDT | [0] Epoch -506 finished
----------------------------------  ----------------
epoch  -506
replay_buffer/size  999996
trainer/num train calls  495000
trainer/Policy Loss  -2.34402
trainer/Log Pis Mean  2.4197
trainer/Log Pis Std  2.60006
trainer/Log Pis Max  9.73348
trainer/Log Pis Min  -4.49035
trainer/policy/mean Mean  0.123304
trainer/policy/mean Std  0.629685
trainer/policy/mean Max  0.998345
trainer/policy/mean Min  -0.997407
trainer/policy/normal/std Mean  0.380894
trainer/policy/normal/std Std  0.181736
trainer/policy/normal/std Max  0.958041
trainer/policy/normal/std Min  0.0710145
trainer/policy/normal/log_std Mean  -1.10897
trainer/policy/normal/log_std Std  0.575758
trainer/policy/normal/log_std Max  -0.0428644
trainer/policy/normal/log_std Min  -2.64487
eval/num steps total  326729
eval/num paths total  638
eval/path length Mean  525
eval/path length Std  0
eval/path length Max  525
eval/path length Min  525
eval/Rewards Mean  3.19812
eval/Rewards Std  0.829249
eval/Rewards Max  5.50275
eval/Rewards Min  0.987923
eval/Returns Mean  1679.01
eval/Returns Std  0
eval/Returns Max  1679.01
eval/Returns Min  1679.01
eval/Actions Mean  0.162278
eval/Actions Std  0.596834
eval/Actions Max  0.998479
eval/Actions Min  -0.999153
eval/Num Paths  1
eval/Average Returns  1679.01
eval/normalized_score  52.2122
time/evaluation sampling (s)  0.880339
time/logging (s)  0.00229202
time/sampling batch (s)  0.26677
time/saving (s)  0.00295494
time/training (s)  4.20862
time/epoch (s)  5.36098
time/total (s)  32914.8
Epoch -506
----------------------------------  ----------------
2022-05-10 22:19:13.432451 PDT | [0] Epoch -505 finished
----------------------------------  ----------------
epoch  -505
replay_buffer/size  999996
trainer/num train calls  496000
trainer/Policy Loss  -2.30691
trainer/Log Pis Mean  2.13748
trainer/Log Pis Std  2.70848
trainer/Log Pis Max  12.9105
trainer/Log Pis Min  -6.769
trainer/policy/mean Mean  0.139039
trainer/policy/mean Std  0.62189
trainer/policy/mean Max  0.997876
trainer/policy/mean Min  -0.996871
trainer/policy/normal/std Mean  0.383268
trainer/policy/normal/std Std  0.185297
trainer/policy/normal/std Max  0.979981
trainer/policy/normal/std Min  0.0706786
trainer/policy/normal/log_std Mean  -1.10948
trainer/policy/normal/log_std Std  0.592926
trainer/policy/normal/log_std Max  -0.0202226
trainer/policy/normal/log_std Min  -2.64961
eval/num steps total  327672
eval/num paths total  640
eval/path length Mean  471.5
eval/path length Std  30.5
eval/path length Max  502
eval/path length Min  441
eval/Rewards Mean  3.16015
eval/Rewards Std  0.827221
eval/Rewards Max  5.52886
eval/Rewards Min  0.980432
eval/Returns Mean  1490.01
eval/Returns Std  77.4775
eval/Returns Max  1567.49
eval/Returns Min  1412.53
eval/Actions Mean  0.144516
eval/Actions Std  0.582632
eval/Actions Max  0.998482
eval/Actions Min  -0.999263
eval/Num Paths  2
eval/Average Returns  1490.01
eval/normalized_score  46.405
time/evaluation sampling (s)  0.880131
time/logging (s)  0.00363819
time/sampling batch (s)  0.267943
time/saving (s)  0.0030957
time/training (s)  4.20949
time/epoch (s)  5.3643
time/total (s)  32920.2
Epoch -505
----------------------------------  ----------------
2022-05-10 22:19:18.798575 PDT | [0] Epoch -504 finished
----------------------------------  ----------------
epoch  -504
replay_buffer/size  999996
trainer/num train calls  497000
trainer/Policy Loss  -2.21569
trainer/Log Pis Mean  2.24079
trainer/Log Pis Std  2.49007
trainer/Log Pis Max  9.45519
trainer/Log Pis Min  -4.53734
trainer/policy/mean Mean  0.17446
trainer/policy/mean Std  0.613954
trainer/policy/mean Max  0.997894
trainer/policy/mean Min  -0.998173
trainer/policy/normal/std Mean  0.383593
trainer/policy/normal/std Std  0.181728
trainer/policy/normal/std Max  0.958513
trainer/policy/normal/std Min  0.069075
trainer/policy/normal/log_std Mean  -1.10312
trainer/policy/normal/log_std Std  0.580762
trainer/policy/normal/log_std Max  -0.0423726
trainer/policy/normal/log_std Min  -2.67256
eval/num steps total  328570
eval/num paths total  642
eval/path length Mean  449
eval/path length Std  39
eval/path length Max  488
eval/path length Min  410
eval/Rewards Mean  3.14163
eval/Rewards Std  0.798196
eval/Rewards Max  4.90685
eval/Rewards Min  0.989988
eval/Returns Mean  1410.59
eval/Returns Std  142.659
eval/Returns Max  1553.25
eval/Returns Min  1267.94
eval/Actions Mean  0.151121
eval/Actions Std  0.599897
eval/Actions Max  0.997866
eval/Actions Min  -0.998053
eval/Num Paths  2
eval/Average Returns  1410.59
eval/normalized_score  43.9648
time/evaluation sampling (s)  0.88835
time/logging (s)  0.00357522
time/sampling batch (s)  0.262679
time/saving (s)  0.0029765
time/training (s)  4.18689
time/epoch (s)  5.34447
time/total (s)  32925.6
Epoch -504
----------------------------------  ----------------
2022-05-10 22:19:24.144597 PDT | [0] Epoch -503 finished
----------------------------------  ----------------
epoch  -503
replay_buffer/size  999996
trainer/num train calls  498000
trainer/Policy Loss  -2.1554
trainer/Log Pis Mean  2.20951
trainer/Log Pis Std  2.56595
trainer/Log Pis Max  9.42246
trainer/Log Pis Min  -5.88496
trainer/policy/mean Mean  0.113713
trainer/policy/mean Std  0.622373
trainer/policy/mean Max  0.995396
trainer/policy/mean Min  -0.997988
trainer/policy/normal/std Mean  0.379945
trainer/policy/normal/std Std  0.18222
trainer/policy/normal/std Max  0.999511
trainer/policy/normal/std Min  0.0743108
trainer/policy/normal/log_std Mean  -1.10947
trainer/policy/normal/log_std Std  0.568278
trainer/policy/normal/log_std Max  -0.000489235
trainer/policy/normal/log_std Min  -2.5995
eval/num steps total  329483
eval/num paths total  644
eval/path length Mean  456.5
eval/path length Std  18.5
eval/path length Max  475
eval/path length Min  438
eval/Rewards Mean  3.1836
eval/Rewards Std  0.864212
eval/Rewards Max  5.52451
eval/Rewards Min  0.986771
eval/Returns Mean  1453.31
eval/Returns Std  65.2375
eval/Returns Max  1518.55
eval/Returns Min  1388.08
eval/Actions Mean  0.144351
eval/Actions Std  0.59124
eval/Actions Max  0.996334
eval/Actions Min  -0.99837
eval/Num Paths  2
eval/Average Returns  1453.31
eval/normalized_score  45.2774
time/evaluation sampling (s)  0.87101
time/logging (s)  0.00340544
time/sampling batch (s)  0.264259
time/saving (s)  0.00295843
time/training (s)  4.18255
time/epoch (s)  5.32418
time/total (s)  32930.9
Epoch -503
----------------------------------  ----------------
2022-05-10 22:19:29.530701 PDT | [0] Epoch -502 finished
----------------------------------  ----------------
epoch  -502
replay_buffer/size  999996
trainer/num train calls  499000
trainer/Policy Loss  -2.13243
trainer/Log Pis Mean  2.29
trainer/Log Pis Std  2.56949
trainer/Log Pis Max  11.3827
trainer/Log Pis Min  -5.21757
trainer/policy/mean Mean  0.150569
trainer/policy/mean Std  0.619053
trainer/policy/mean Max  0.998157
trainer/policy/mean Min  -0.999112
trainer/policy/normal/std Mean  0.390283
trainer/policy/normal/std Std  0.18598
trainer/policy/normal/std Max  0.957882
trainer/policy/normal/std Min  0.0732545
trainer/policy/normal/log_std Mean  -1.09035
trainer/policy/normal/log_std Std  0.594304
trainer/policy/normal/log_std Max  -0.0430311
trainer/policy/normal/log_std Min  -2.61382
eval/num steps total  330046
eval/num paths total  645
eval/path length Mean  563
eval/path length Std  0
eval/path length Max  563
eval/path length Min  563
eval/Rewards Mean  3.2247
eval/Rewards Std  0.80712
eval/Rewards Max  4.84603
eval/Rewards Min  0.988528
eval/Returns Mean  1815.51
eval/Returns Std  0
eval/Returns Max  1815.51
eval/Returns Min  1815.51
eval/Actions Mean  0.171475
eval/Actions Std  0.583905
eval/Actions Max  0.998471
eval/Actions Min  -0.998416
eval/Num Paths  1
eval/Average Returns  1815.51
eval/normalized_score  56.4062
time/evaluation sampling (s)  0.868393
time/logging (s)  0.00243286
time/sampling batch (s)  0.267458
time/saving (s)  0.00299387
time/training (s)  4.22188
time/epoch (s)  5.36315
time/total (s)  32936.2
Epoch -502
----------------------------------  ----------------
2022-05-10 22:19:35.046495 PDT | [0] Epoch -501 finished
----------------------------------  ----------------
epoch  -501
replay_buffer/size  999996
trainer/num train calls  500000
trainer/Policy Loss  -2.0638
trainer/Log Pis Mean  2.22224
trainer/Log Pis Std  2.65451
trainer/Log Pis Max  17.128
trainer/Log Pis Min  -5.27812
trainer/policy/mean Mean  0.115443
trainer/policy/mean Std  0.626368
trainer/policy/mean Max  0.998377
trainer/policy/mean Min  -0.999618
trainer/policy/normal/std Mean  0.386101
trainer/policy/normal/std Std  0.188469
trainer/policy/normal/std Max  1.00009
trainer/policy/normal/std Min  0.0715583
trainer/policy/normal/log_std Mean  -1.09936
trainer/policy/normal/log_std Std  0.581892
trainer/policy/normal/log_std Max  8.98798e-05
trainer/policy/normal/log_std Min  -2.63724
eval/num steps total  330983
eval/num paths total  647
eval/path length Mean  468.5
eval/path length Std  56.5
eval/path length Max  525
eval/path length Min  412
eval/Rewards Mean  3.13025
eval/Rewards Std  0.813832
eval/Rewards Max  5.45643
eval/Rewards Min  0.989459
eval/Returns Mean  1466.52
eval/Returns Std  198.861
eval/Returns Max  1665.38
eval/Returns Min  1267.66
eval/Actions Mean  0.153085
eval/Actions Std  0.589395
eval/Actions Max  0.997833
eval/Actions Min  -0.997331
eval/Num Paths  2
eval/Average Returns  1466.52
eval/normalized_score  45.6832
time/evaluation sampling (s)  0.887823
time/logging (s)  0.00369407
time/sampling batch (s)  0.265026
time/saving (s)  0.00550133
time/training (s)  4.33321
time/epoch (s)  5.49525
time/total (s)  32941.7
Epoch -501
----------------------------------  ----------------
2022-05-10 22:19:40.413782 PDT | [0] Epoch -500 finished
----------------------------------  ----------------
epoch  -500
replay_buffer/size  999996
trainer/num train calls  501000
trainer/Policy Loss  -2.32888
trainer/Log Pis Mean  2.23964
trainer/Log Pis Std  2.42973
trainer/Log Pis Max  11.0252
trainer/Log Pis Min  -5.74941
trainer/policy/mean Mean  0.154243
trainer/policy/mean Std  0.622582
trainer/policy/mean Max  0.996848
trainer/policy/mean Min  -0.996885
trainer/policy/normal/std Mean  0.379155
trainer/policy/normal/std Std  0.181736
trainer/policy/normal/std Max  1.00878
trainer/policy/normal/std Min  0.0694134
trainer/policy/normal/log_std Mean  -1.1172
trainer/policy/normal/log_std Std  0.585681
trainer/policy/normal/log_std Max  0.00874393
trainer/policy/normal/log_std Min  -2.66768
eval/num steps total  331569
eval/num paths total  648
eval/path length Mean  586
eval/path length Std  0
eval/path length Max  586
eval/path length Min  586
eval/Rewards Mean  3.2152
eval/Rewards Std  0.759197
eval/Rewards Max  4.93725
eval/Rewards Min  0.985369
eval/Returns Mean  1884.11
eval/Returns Std  0
eval/Returns Max  1884.11
eval/Returns Min  1884.11
eval/Actions Mean  0.164207
eval/Actions Std  0.59689
eval/Actions Max  0.998392
eval/Actions Min  -0.998562
eval/Num Paths  1
eval/Average Returns  1884.11
eval/normalized_score  58.514
time/evaluation sampling (s)  0.908483
time/logging (s)  0.0024636
time/sampling batch (s)  0.263689
time/saving (s)  0.00295963
time/training (s)  4.16635
time/epoch (s)  5.34395
time/total (s)  32947.1
Epoch -500
----------------------------------  ----------------
2022-05-10 22:19:45.808658 PDT | [0] Epoch -499 finished
----------------------------------  ----------------
epoch  -499
replay_buffer/size  999996
trainer/num train calls  502000
trainer/Policy Loss  -2.12616
trainer/Log Pis Mean  2.02041
trainer/Log Pis Std  2.62836
trainer/Log Pis Max  9.4079
trainer/Log Pis Min  -6.46412
trainer/policy/mean Mean  0.157946
trainer/policy/mean Std  0.608587
trainer/policy/mean Max  0.998055
trainer/policy/mean Min  -0.998258
trainer/policy/normal/std Mean  0.381126
trainer/policy/normal/std Std  0.185091
trainer/policy/normal/std Max  0.964916
trainer/policy/normal/std Min  0.0658216
trainer/policy/normal/log_std Mean  -1.11744
trainer/policy/normal/log_std Std  0.598496
trainer/policy/normal/log_std Max  -0.0357137
trainer/policy/normal/log_std Min  -2.72081
eval/num steps total  332556
eval/num paths total  650
eval/path length Mean  493.5
eval/path length Std  7.5
eval/path length Max  501
eval/path length Min  486
eval/Rewards Mean  3.15149
eval/Rewards Std  0.792791
eval/Rewards Max  4.7628
eval/Rewards Min  0.985105
eval/Returns Mean  1555.26
eval/Returns Std  6.443
eval/Returns Max  1561.7
eval/Returns Min  1548.82
eval/Actions Mean  0.140753
eval/Actions Std  0.589045
eval/Actions Max  0.99828
eval/Actions Min  -0.999379
eval/Num Paths  2
eval/Average Returns  1555.26
eval/normalized_score  48.4098
time/evaluation sampling (s)  0.906706
time/logging (s)  0.00377454
time/sampling batch (s)  0.264884
time/saving (s)  0.00316103
time/training (s)  4.19577
time/epoch (s)  5.37429
time/total (s)  32952.5
Epoch -499
----------------------------------  ----------------
2022-05-10 22:19:51.241010 PDT | [0] Epoch -498 finished
----------------------------------  ----------------
epoch  -498
replay_buffer/size  999996
trainer/num train calls  503000
trainer/Policy Loss  -2.15191
trainer/Log Pis Mean  2.06207
trainer/Log Pis Std  2.70196
trainer/Log Pis Max  10.9185
trainer/Log Pis Min  -6.1318
trainer/policy/mean Mean  0.130242
trainer/policy/mean Std  0.608688
trainer/policy/mean Max  0.997948
trainer/policy/mean Min  -0.998226
trainer/policy/normal/std Mean  0.377938
trainer/policy/normal/std Std  0.187659
trainer/policy/normal/std Max  1.11796
trainer/policy/normal/std Min  0.0670234
trainer/policy/normal/log_std Mean  -1.1308
trainer/policy/normal/log_std Std  0.606306
trainer/policy/normal/log_std Max  0.111509
trainer/policy/normal/log_std Min  -2.70271
eval/num steps total  333310
eval/num paths total  651
eval/path length Mean  754
eval/path length Std  0
eval/path length Max  754
eval/path length Min  754
eval/Rewards Mean  3.20979
eval/Rewards Std  0.702644
eval/Rewards Max  5.23329
eval/Rewards Min  0.989249
eval/Returns Mean  2420.18
eval/Returns Std  0
eval/Returns Max  2420.18
eval/Returns Min  2420.18
eval/Actions Mean  0.166512
eval/Actions Std  0.603107
eval/Actions Max  0.997887
eval/Actions Min  -0.997846
eval/Num Paths  1
eval/Average Returns  2420.18
eval/normalized_score  74.9854
time/evaluation sampling (s)  0.943576
time/logging (s)  0.00288122
time/sampling batch (s)  0.265074
time/saving (s)  0.00302395
time/training (s)  4.19465
time/epoch (s)  5.40921
time/total (s)  32957.9
Epoch -498
----------------------------------  ----------------
2022-05-10 22:19:56.616360 PDT | [0] Epoch -497 finished
----------------------------------  ----------------
epoch  -497
replay_buffer/size  999996
trainer/num train calls  504000
trainer/Policy Loss  -2.37354
trainer/Log Pis Mean  2.28853
trainer/Log Pis Std  2.70496
trainer/Log Pis Max  14.9736
trainer/Log Pis Min  -5.84019
trainer/policy/mean Mean  0.157182
trainer/policy/mean Std  0.622618
trainer/policy/mean Max  0.997537
trainer/policy/mean Min  -0.999755
trainer/policy/normal/std Mean  0.38571
trainer/policy/normal/std Std  0.184895
trainer/policy/normal/std Max  0.912189
trainer/policy/normal/std Min  0.06736
trainer/policy/normal/log_std Mean  -1.10344
trainer/policy/normal/log_std Std  0.595025
trainer/policy/normal/log_std Max  -0.0919086
trainer/policy/normal/log_std Min  -2.6977
eval/num steps total  333944
eval/num paths total  652
eval/path length Mean  634
eval/path length Std  0
eval/path length Max  634
eval/path length Min  634
eval/Rewards Mean  3.21393
eval/Rewards Std  0.779439
eval/Rewards Max  4.94363
eval/Rewards Min  0.988643
eval/Returns Mean  2037.63
eval/Returns Std  0
eval/Returns Max  2037.63
eval/Returns Min  2037.63
eval/Actions Mean  0.152602
eval/Actions Std  0.580777
eval/Actions Max  0.997574
eval/Actions Min  -0.998733
eval/Num Paths  1
eval/Average Returns  2037.63
eval/normalized_score  63.2311
time/evaluation sampling (s)  0.899139
time/logging (s)  0.00267394
time/sampling batch (s)  0.264545
time/saving (s)  0.00311447
time/training (s)  4.1838
time/epoch (s)  5.35327
time/total (s)  32963.2
Epoch -497
----------------------------------  ----------------
2022-05-10 22:20:02.061924 PDT | [0] Epoch -496 finished
----------------------------------  ----------------
epoch  -496
replay_buffer/size  999996
trainer/num train calls  505000
trainer/Policy Loss  -2.16572
trainer/Log Pis Mean  2.20085
trainer/Log Pis Std  2.65353
trainer/Log Pis Max  9.1614
trainer/Log Pis Min  -4.7943
trainer/policy/mean Mean  0.160632
trainer/policy/mean Std  0.608943
trainer/policy/mean Max  0.998469
trainer/policy/mean Min  -0.997771
trainer/policy/normal/std Mean  0.38967
trainer/policy/normal/std Std  0.188942
trainer/policy/normal/std Max  0.954245
trainer/policy/normal/std Min  0.0733387
trainer/policy/normal/log_std Mean  -1.09231
trainer/policy/normal/log_std Std  0.589047
trainer/policy/normal/log_std Max  -0.046835
trainer/policy/normal/log_std Min  -2.61267
eval/num steps total  334499
eval/num paths total  653
eval/path length Mean  555
eval/path length Std  0
eval/path length Max  555
eval/path length Min  555
eval/Rewards Mean  3.20825
eval/Rewards Std  0.82535
eval/Rewards Max  4.97171
eval/Rewards Min  0.990528
eval/Returns Mean  1780.58
eval/Returns Std  0
eval/Returns Max  1780.58
eval/Returns Min  1780.58
eval/Actions Mean  0.155471
eval/Actions Std  0.580305
eval/Actions Max  0.998601
eval/Actions Min  -0.998783
eval/Num Paths  1
eval/Average Returns  1780.58
eval/normalized_score  55.3329
time/evaluation sampling (s)  0.918584
time/logging (s)  0.00248061
time/sampling batch (s)  0.26797
time/saving (s)  0.00307759
time/training (s)  4.23097
time/epoch (s)  5.42308
time/total (s)  32968.7
Epoch -496
----------------------------------  ----------------
2022-05-10 22:20:07.436206 PDT | [0] Epoch -495 finished
----------------------------------  ----------------
epoch  -495
replay_buffer/size  999996
trainer/num train calls  506000
trainer/Policy Loss  -2.01578
trainer/Log Pis Mean  2.01409
trainer/Log Pis Std  2.5255
trainer/Log Pis Max  9.29344
trainer/Log Pis Min  -5.98373
trainer/policy/mean Mean  0.155825
trainer/policy/mean Std  0.609638
trainer/policy/mean Max  0.997657
trainer/policy/mean Min  -0.996644
trainer/policy/normal/std Mean  0.381727
trainer/policy/normal/std Std  0.183847
trainer/policy/normal/std Max  0.917451
trainer/policy/normal/std Min  0.0722664
trainer/policy/normal/log_std Mean  -1.11266
trainer/policy/normal/log_std Std  0.590736
trainer/policy/normal/log_std Max  -0.0861561
trainer/policy/normal/log_std Min  -2.6274
eval/num steps total  335338
eval/num paths total  655
eval/path length Mean  419.5
eval/path length Std  86.5
eval/path length Max  506
eval/path length Min  333
eval/Rewards Mean  3.08346
eval/Rewards Std  0.874787
eval/Rewards Max  5.38209
eval/Rewards Min  0.986425
eval/Returns Mean  1293.51
eval/Returns Std  282.7
eval/Returns Max  1576.21
eval/Returns Min  1010.81
eval/Actions Mean  0.141567
eval/Actions Std  0.573755
eval/Actions Max  0.998231
eval/Actions Min  -0.997121
eval/Num Paths  2
eval/Average Returns  1293.51
eval/normalized_score  40.3673
time/evaluation sampling (s)  0.883561
time/logging (s)  0.0033176
time/sampling batch (s)  0.265956
time/saving (s)  0.00303256
time/training (s)  4.19746
time/epoch (s)  5.35333
time/total (s)  32974
Epoch -495
----------------------------------  ----------------
2022-05-10 22:20:12.782486 PDT | [0] Epoch -494 finished
----------------------------------  ----------------
epoch  -494
replay_buffer/size  999996
trainer/num train calls  507000
trainer/Policy Loss  -2.21096
trainer/Log Pis Mean  2.12835
trainer/Log Pis Std  2.72857
trainer/Log Pis Max  16.2848
trainer/Log Pis Min  -4.68431
trainer/policy/mean Mean  0.0830195
trainer/policy/mean Std  0.62388
trainer/policy/mean Max  0.997907
trainer/policy/mean Min  -0.999708
trainer/policy/normal/std Mean  0.379517
trainer/policy/normal/std Std  0.188883
trainer/policy/normal/std Max  1.00855
trainer/policy/normal/std Min  0.0696371
trainer/policy/normal/log_std Mean  -1.12416
trainer/policy/normal/log_std Std  0.598463
trainer/policy/normal/log_std Max  0.0085169
trainer/policy/normal/log_std Min  -2.66446
eval/num steps total  335833
eval/num paths total  656
eval/path length Mean  495
eval/path length Std  0
eval/path length Max  495
eval/path length Min  495
eval/Rewards Mean  3.08437
eval/Rewards Std  0.771604
eval/Rewards Max  4.69055
eval/Rewards Min  0.987004
eval/Returns Mean  1526.77
eval/Returns Std  0
eval/Returns Max  1526.77
eval/Returns Min  1526.77
eval/Actions Mean  0.151208
eval/Actions Std  0.580998
eval/Actions Max  0.995943
eval/Actions Min  -0.999276
eval/Num Paths  1
eval/Average Returns  1526.77
eval/normalized_score  47.5343
time/evaluation sampling (s)  0.887476
time/logging (s)  0.00229074
time/sampling batch (s)  0.264185
time/saving (s)  0.00308564
time/training (s)  4.16644
time/epoch (s)  5.32348
time/total (s)  32979.3
Epoch -494
----------------------------------  ----------------
2022-05-10 22:20:18.181039 PDT | [0] Epoch -493 finished
----------------------------------  ----------------
epoch  -493
replay_buffer/size  999996
trainer/num train calls  508000
trainer/Policy Loss  -2.12219
trainer/Log Pis Mean  2.12245
trainer/Log Pis Std  2.54034
trainer/Log Pis Max  8.72678
trainer/Log Pis Min  -4.76501
trainer/policy/mean Mean  0.13814
trainer/policy/mean Std  0.608436
trainer/policy/mean Max  0.998104
trainer/policy/mean Min  -0.998464
trainer/policy/normal/std Mean  0.379333
trainer/policy/normal/std Std  0.183592
trainer/policy/normal/std Max  1.10713
trainer/policy/normal/std Min  0.071212
trainer/policy/normal/log_std Mean  -1.1162
trainer/policy/normal/log_std Std  0.581899
trainer/policy/normal/log_std Max  0.101767
trainer/policy/normal/log_std Min  -2.64209
eval/num steps total  336561
eval/num paths total  657
eval/path length Mean  728
eval/path length Std  0
eval/path length Max  728
eval/path length Min  728
eval/Rewards Mean  3.23046
eval/Rewards Std  0.683765
eval/Rewards Max  4.84503
eval/Rewards Min  0.988555
eval/Returns Mean  2351.78
eval/Returns Std  0
eval/Returns Max  2351.78
eval/Returns Min  2351.78
eval/Actions Mean  0.14801
eval/Actions Std  0.602333
eval/Actions Max  0.99835
eval/Actions Min  -0.997633
eval/Num Paths  1
eval/Average Returns  2351.78
eval/normalized_score  72.8836
time/evaluation sampling (s)  0.877764
time/logging (s)  0.00286114
time/sampling batch (s)  0.266826
time/saving (s)  0.00299782
time/training (s)  4.22671
time/epoch (s)  5.37716
time/total (s)  32984.7
Epoch -493
----------------------------------  ----------------
2022-05-10 22:20:23.571925 PDT | [0] Epoch -492 finished
----------------------------------  ----------------
epoch  -492
replay_buffer/size  999996
trainer/num train calls  509000
trainer/Policy Loss  -2.30413
trainer/Log Pis Mean  2.15883
trainer/Log Pis Std  2.62819
trainer/Log Pis Max  15.369
trainer/Log Pis Min  -4.3508
trainer/policy/mean Mean  0.122953
trainer/policy/mean Std  0.616959
trainer/policy/mean Max  0.996534
trainer/policy/mean Min  -0.999018
trainer/policy/normal/std Mean  0.382573
trainer/policy/normal/std Std  0.184212
trainer/policy/normal/std Max  0.983491
trainer/policy/normal/std Min  0.0679171
trainer/policy/normal/log_std Mean  -1.10906
trainer/policy/normal/log_std Std  0.586983
trainer/policy/normal/log_std Max  -0.0166464
trainer/policy/normal/log_std Min  -2.68947
eval/num steps total  337537
eval/num paths total  659
eval/path length Mean  488
eval/path length Std  32
eval/path length Max  520
eval/path length Min  456
eval/Rewards Mean  3.20755
eval/Rewards Std  0.872333
eval/Rewards Max  5.49794
eval/Rewards Min  0.98422
eval/Returns Mean  1565.28
eval/Returns Std  113.256
eval/Returns Max  1678.54
eval/Returns Min  1452.03
eval/Actions Mean  0.136661
eval/Actions Std  0.585137
eval/Actions Max  0.997781
eval/Actions Min  -0.99827
eval/Num Paths  2
eval/Average Returns  1565.28
eval/normalized_score  48.7178
time/evaluation sampling (s)  0.879783
time/logging (s)  0.00358108
time/sampling batch (s)  0.267499
time/saving (s)  0.0030797
time/training (s)  4.21547
time/epoch (s)  5.36942
time/total (s)  32990.1
Epoch -492
----------------------------------  ----------------
2022-05-10 22:20:28.946920 PDT | [0] Epoch -491 finished
----------------------------------  ----------------
epoch  -491
replay_buffer/size  999996
trainer/num train calls  510000
trainer/Policy Loss  -1.9895
trainer/Log Pis Mean  2.15413
trainer/Log Pis Std  2.58449
trainer/Log Pis Max  10.5683
trainer/Log Pis Min  -5.15139
trainer/policy/mean Mean  0.131073
trainer/policy/mean Std  0.609799
trainer/policy/mean Max  0.998718
trainer/policy/mean Min  -0.998226
trainer/policy/normal/std Mean  0.377662
trainer/policy/normal/std Std  0.185724
trainer/policy/normal/std Max  1.01856
trainer/policy/normal/std Min  0.071713
trainer/policy/normal/log_std Mean  -1.12348
trainer/policy/normal/log_std Std  0.586113
trainer/policy/normal/log_std Max  0.0183898
trainer/policy/normal/log_std Min  -2.63508
eval/num steps total  338159
eval/num paths total  660
eval/path length Mean  622
eval/path length Std  0
eval/path length Max  622
eval/path length Min  622
eval/Rewards Mean  3.08311
eval/Rewards Std  0.812266
eval/Rewards Max  4.87585
eval/Rewards Min  0.990289
eval/Returns Mean  1917.69
eval/Returns Std  0
eval/Returns Max  1917.69
eval/Returns Min  1917.69
eval/Actions Mean  0.154706
eval/Actions Std  0.588025
eval/Actions Max  0.997579
eval/Actions Min  -0.998291
eval/Num Paths  1
eval/Average Returns  1917.69
eval/normalized_score  59.5459
time/evaluation sampling (s)  0.889424
time/logging (s)  0.00251131
time/sampling batch (s)  0.26807
time/saving (s)  0.00296083
time/training (s)  4.18862
time/epoch (s)  5.35159
time/total (s)  32995.5
Epoch -491
----------------------------------  ----------------
2022-05-10 22:20:34.319875 PDT | [0] Epoch -490 finished
----------------------------------  ----------------
epoch  -490
replay_buffer/size  999996
trainer/num train calls  511000
trainer/Policy Loss  -2.02273
trainer/Log Pis Mean  2.07825
trainer/Log Pis Std  2.7176
trainer/Log Pis Max  9.46428
trainer/Log Pis Min  -6.86638
trainer/policy/mean Mean  0.148998
trainer/policy/mean Std  0.612345
trainer/policy/mean Max  0.995911
trainer/policy/mean Min  -0.99818
trainer/policy/normal/std Mean  0.391309
trainer/policy/normal/std Std  0.184919
trainer/policy/normal/std Max  1.01452
trainer/policy/normal/std Min  0.0688653
trainer/policy/normal/log_std Mean  -1.08197
trainer/policy/normal/log_std Std  0.578581
trainer/policy/normal/log_std Max  0.0144158
trainer/policy/normal/log_std Min  -2.6756
eval/num steps total  339096
eval/num paths total  663
eval/path length Mean  312.333
eval/path length Std  116.179
eval/path length Max  432
eval/path length Min  155
eval/Rewards Mean  2.93326
eval/Rewards Std  0.910732
eval/Rewards Max  5.50543
eval/Rewards Min  0.983278
eval/Returns Mean  916.156
eval/Returns Std  432.196
eval/Returns Max  1358.43
eval/Returns Min  329.663
eval/Actions Mean  0.124908
eval/Actions Std  0.565307
eval/Actions Max  0.997525
eval/Actions Min  -0.997343
eval/Num Paths  3
eval/Average Returns  916.156
eval/normalized_score  28.7727
time/evaluation sampling (s)  0.891132
time/logging (s)  0.00332804
time/sampling batch (s)  0.266181
time/saving (s)  0.00302785
time/training (s)  4.18816
time/epoch (s)  5.35182
time/total (s)  33000.8
Epoch -490
----------------------------------  ----------------
2022-05-10 22:20:39.667913 PDT | [0] Epoch -489 finished
----------------------------------  ----------------
epoch  -489
replay_buffer/size  999996
trainer/num train calls  512000
trainer/Policy Loss  -2.18117
trainer/Log Pis Mean  2.08821
trainer/Log Pis Std  2.51877
trainer/Log Pis Max  9.3568
trainer/Log Pis Min  -4.46344
trainer/policy/mean Mean  0.129473
trainer/policy/mean Std  0.612419
trainer/policy/mean Max  0.99772
trainer/policy/mean Min  -0.997257
trainer/policy/normal/std Mean  0.372051
trainer/policy/normal/std Std  0.17578
trainer/policy/normal/std Max  0.956238
trainer/policy/normal/std Min  0.0734016
trainer/policy/normal/log_std Mean  -1.13235
trainer/policy/normal/log_std Std  0.577846
trainer/policy/normal/log_std Max  -0.0447479
trainer/policy/normal/log_std Min  -2.61181
eval/num steps total  339658
eval/num paths total  664
eval/path length Mean  562
eval/path length Std  0
eval/path length Max  562
eval/path length Min  562
eval/Rewards Mean  3.18401
eval/Rewards Std  0.780967
eval/Rewards Max  4.73316
eval/Rewards Min  0.985905
eval/Returns Mean  1789.41
eval/Returns Std  0
eval/Returns Max  1789.41
eval/Returns Min  1789.41
eval/Actions Mean  0.146601
eval/Actions Std  0.602186
eval/Actions Max  0.99825
eval/Actions Min  -0.998212
eval/Num Paths  1
eval/Average Returns  1789.41
eval/normalized_score  55.6044
time/evaluation sampling (s)  0.882548
time/logging (s)  0.00237408
time/sampling batch (s)  0.263453
time/saving (s)  0.00304312
time/training (s)  4.17386
time/epoch (s)  5.32528
time/total (s)  33006.1
Epoch -489
----------------------------------  ----------------
2022-05-10 22:20:45.082873 PDT | [0] Epoch -488 finished
----------------------------------  ----------------
epoch  -488
replay_buffer/size  999996
trainer/num train calls  513000
trainer/Policy Loss  -2.31616
trainer/Log Pis Mean  2.26594
trainer/Log Pis Std  2.7929
trainer/Log Pis Max  12.7162
trainer/Log Pis Min  -4.88407
trainer/policy/mean Mean  0.118094
trainer/policy/mean Std  0.621822
trainer/policy/mean Max  0.998714
trainer/policy/mean Min  -0.997996
trainer/policy/normal/std Mean  0.388988
trainer/policy/normal/std Std  0.186663
trainer/policy/normal/std Max  0.992748
trainer/policy/normal/std Min  0.070589
trainer/policy/normal/log_std Mean  -1.09137
trainer/policy/normal/log_std Std  0.584643
trainer/policy/normal/log_std Max  -0.00727882
trainer/policy/normal/log_std Min  -2.65088
eval/num steps total  340192
eval/num paths total  665
eval/path length Mean  534
eval/path length Std  0
eval/path length Max  534
eval/path length Min  534
eval/Rewards Mean  3.18781
eval/Rewards Std  0.830225
eval/Rewards Max  5.51276
eval/Rewards Min  0.98714
eval/Returns Mean  1702.29
eval/Returns Std  0
eval/Returns Max  1702.29
eval/Returns Min  1702.29
eval/Actions Mean  0.152599
eval/Actions Std  0.57986
eval/Actions Max  0.997992
eval/Actions Min  -0.998202
eval/Num Paths  1
eval/Average Returns  1702.29
eval/normalized_score  52.9275
time/evaluation sampling (s)  0.88805
time/logging (s)  0.00328549
time/sampling batch (s)  0.269023
time/saving (s)  0.0029965
time/training (s)  4.23073
time/epoch (s)  5.39408
time/total (s)  33011.5
Epoch -488
----------------------------------  ----------------
2022-05-10 22:20:50.547507 PDT | [0] Epoch -487 finished
----------------------------------  ----------------
epoch  -487
replay_buffer/size  999996
trainer/num train calls  514000
trainer/Policy Loss  -2.13969
trainer/Log Pis Mean  2.16689
trainer/Log Pis Std  2.64371
trainer/Log Pis Max  9.96045
trainer/Log Pis Min  -5.90832
trainer/policy/mean Mean  0.150496
trainer/policy/mean Std  0.608786
trainer/policy/mean Max  0.997111
trainer/policy/mean Min  -0.997283
trainer/policy/normal/std Mean  0.381363
trainer/policy/normal/std Std  0.186541
trainer/policy/normal/std Max  0.988032
trainer/policy/normal/std Min  0.0720831
trainer/policy/normal/log_std Mean  -1.11821
trainer/policy/normal/log_std Std  0.599424
trainer/policy/normal/log_std Max  -0.0120398
trainer/policy/normal/log_std Min  -2.62994
eval/num steps total  340659
eval/num paths total  666
eval/path length Mean  467
eval/path length Std  0
eval/path length Max  467
eval/path length Min  467
eval/Rewards Mean  3.19719
eval/Rewards Std  0.876411
eval/Rewards Max  5.00561
eval/Rewards Min  0.987698
eval/Returns Mean  1493.09
eval/Returns Std  0
eval/Returns Max  1493.09
eval/Returns Min  1493.09
eval/Actions Mean  0.149489
eval/Actions Std  0.575659
eval/Actions Max  0.997755
eval/Actions Min  -0.998487
eval/Num Paths  1
eval/Average Returns  1493.09
eval/normalized_score  46.4996
time/evaluation sampling (s)  0.903688
time/logging (s)  0.00216536
time/sampling batch (s)  0.271027
time/saving (s)  0.00305438
time/training (s)  4.2595
time/epoch (s)  5.43944
time/total (s)  33017
Epoch -487
----------------------------------  ----------------
2022-05-10 22:20:55.943447 PDT | [0] Epoch -486 finished
----------------------------------  ----------------
epoch  -486
replay_buffer/size  999996
trainer/num train calls  515000
trainer/Policy Loss  -2.18269
trainer/Log Pis Mean  2.19037
trainer/Log Pis Std  2.58658
trainer/Log Pis Max  11.365
trainer/Log Pis Min  -5.06455
trainer/policy/mean Mean  0.127711
trainer/policy/mean Std  0.618204
trainer/policy/mean Max  0.998452
trainer/policy/mean Min  -0.998217
trainer/policy/normal/std Mean  0.383172
trainer/policy/normal/std Std  0.18193
trainer/policy/normal/std Max  0.943878
trainer/policy/normal/std Min  0.0697423
trainer/policy/normal/log_std Mean  -1.10421
trainer/policy/normal/log_std Std  0.581578
trainer/policy/normal/log_std Max  -0.0577582
trainer/policy/normal/log_std Min  -2.66295
eval/num steps total  341154
eval/num paths total  667
eval/path length Mean  495
eval/path length Std  0
eval/path length Max  495
eval/path length Min  495
eval/Rewards Mean  3.03862
eval/Rewards Std  0.765407
eval/Rewards Max  4.63624
eval/Rewards Min  0.985763
eval/Returns Mean  1504.12
eval/Returns Std  0
eval/Returns Max  1504.12
eval/Returns Min  1504.12
eval/Actions Mean  0.149243
eval/Actions Std  0.579553
eval/Actions Max  0.996954
eval/Actions Min  -0.997906
eval/Num Paths  1
eval/Average Returns  1504.12
eval/normalized_score  46.8383
time/evaluation sampling (s)  0.906927
time/logging (s)  0.00229565
time/sampling batch (s)  0.26561
time/saving (s)  0.00316391
time/training (s)  4.19614
time/epoch (s)  5.37413
time/total (s)  33022.4
Epoch -486
----------------------------------  ----------------
2022-05-10 22:21:01.322410 PDT | [0] Epoch -485 finished
----------------------------------  ----------------
epoch  -485
replay_buffer/size  999996
trainer/num train calls  516000
trainer/Policy Loss  -2.22069
trainer/Log Pis Mean  2.1863
trainer/Log Pis Std  2.67543
trainer/Log Pis Max  9.94689
trainer/Log Pis Min  -5.23671
trainer/policy/mean Mean  0.132276
trainer/policy/mean Std  0.616575
trainer/policy/mean Max  0.994926
trainer/policy/mean Min  -0.998523
trainer/policy/normal/std Mean  0.385094
trainer/policy/normal/std Std  0.185447
trainer/policy/normal/std Max  0.950922
trainer/policy/normal/std Min  0.0697888
trainer/policy/normal/log_std Mean  -1.10227
trainer/policy/normal/log_std Std  0.586072
trainer/policy/normal/log_std Max  -0.050323
trainer/policy/normal/log_std Min  -2.66228
eval/num steps total  341696
eval/num paths total  668
eval/path length Mean  542
eval/path length Std  0
eval/path length Max  542
eval/path length Min  542
eval/Rewards Mean  3.14717
eval/Rewards Std  0.83787
eval/Rewards Max  4.88458
eval/Rewards Min  0.989818
eval/Returns Mean  1705.77
eval/Returns Std  0
eval/Returns Max  1705.77
eval/Returns Min  1705.77
eval/Actions Mean  0.143185
eval/Actions Std  0.560411
eval/Actions Max  0.997742
eval/Actions Min  -0.99848
eval/Num Paths  1
eval/Average Returns  1705.77
eval/normalized_score  53.0343
time/evaluation sampling (s)  0.887508
time/logging (s)  0.0023424
time/sampling batch (s)  0.266181
time/saving (s)  0.00294139
time/training (s)  4.19801
time/epoch (s)  5.35699
time/total (s)  33027.7
Epoch -485
----------------------------------  ----------------
2022-05-10 22:21:06.697227 PDT | [0] Epoch -484 finished
----------------------------------  ----------------
epoch  -484
replay_buffer/size  999996
trainer/num train calls  517000
trainer/Policy Loss  -2.12445
trainer/Log Pis Mean  2.15837
trainer/Log Pis Std  2.67454
trainer/Log Pis Max  9.21459
trainer/Log Pis Min  -4.97595
trainer/policy/mean Mean  0.169438
trainer/policy/mean Std  0.607671
trainer/policy/mean Max  0.996725
trainer/policy/mean Min  -0.998892
trainer/policy/normal/std Mean  0.385433
trainer/policy/normal/std Std  0.185938
trainer/policy/normal/std Max  0.990977
trainer/policy/normal/std Min  0.0716428
trainer/policy/normal/log_std Mean  -1.10035
trainer/policy/normal/log_std Std  0.582956
trainer/policy/normal/log_std Max  -0.00906402
trainer/policy/normal/log_std Min  -2.63606
eval/num steps total  342387
eval/num paths total  669
eval/path length Mean  691
eval/path length Std  0
eval/path length Max  691
eval/path length Min  691
eval/Rewards Mean  3.26764
eval/Rewards Std  0.755571
eval/Rewards Max  5.49088
eval/Rewards Min  0.984284
eval/Returns Mean  2257.94
eval/Returns Std  0
eval/Returns Max  2257.94
eval/Returns Min  2257.94
eval/Actions Mean  0.150209
eval/Actions Std  0.601771
eval/Actions Max  0.997377
eval/Actions Min  -0.998173
eval/Num Paths  1
eval/Average Returns  2257.94
eval/normalized_score  70.0004
time/evaluation sampling (s)  0.894883
time/logging (s)  0.00274535
time/sampling batch (s)  0.266729
time/saving (s)  0.00302746
time/training (s)  4.18567
time/epoch (s)  5.35305
time/total (s)  33033.1
Epoch -484
----------------------------------  ----------------
2022-05-10 22:21:12.078916 PDT | [0] Epoch -483 finished
----------------------------------  ----------------
epoch  -483
replay_buffer/size  999996
trainer/num train calls  518000
trainer/Policy Loss  -2.18636
trainer/Log Pis Mean  2.26647
trainer/Log Pis Std  2.60724
trainer/Log Pis Max  10.5244
trainer/Log Pis Min  -5.62226
trainer/policy/mean Mean  0.134274
trainer/policy/mean Std  0.620591
trainer/policy/mean Max  0.997795
trainer/policy/mean Min  -0.998109
trainer/policy/normal/std Mean  0.376928
trainer/policy/normal/std Std  0.182929
trainer/policy/normal/std Max  0.951915
trainer/policy/normal/std Min  0.0671847
trainer/policy/normal/log_std Mean  -1.12311
trainer/policy/normal/log_std Std  0.582234
trainer/policy/normal/log_std Min  -2.70031
eval/num steps total  343032
eval/num paths total  670
eval/path length Mean  645
eval/path length Std  0
eval/path length Max  645
eval/path length Min  645
eval/Rewards Mean  3.24413
eval/Rewards Std  0.746427
eval/Rewards Max  4.739
eval/Rewards Min  0.986157
eval/Returns Mean  2092.46
eval/Returns Std  0
eval/Returns Max  2092.46
eval/Returns Min  2092.46
eval/Actions Mean
0.151582 eval/Actions Std 0.6128 eval/Actions Max 0.998589 eval/Actions Min -0.99866 eval/Num Paths 1 eval/Average Returns 2092.46 eval/normalized_score 64.9159 time/evaluation sampling (s) 0.881952 time/logging (s) 0.00269467 time/sampling batch (s) 0.266831 time/saving (s) 0.00294966 time/training (s) 4.20536 time/epoch (s) 5.35979 time/total (s) 33038.4 Epoch -483 ---------------------------------- --------------- 2022-05-10 22:21:17.488408 PDT | [0] Epoch -482 finished ---------------------------------- --------------- epoch -482 replay_buffer/size 999996 trainer/num train calls 519000 trainer/Policy Loss -2.12893 trainer/Log Pis Mean 2.22175 trainer/Log Pis Std 2.54771 trainer/Log Pis Max 9.19948 trainer/Log Pis Min -4.75046 trainer/policy/mean Mean 0.140137 trainer/policy/mean Std 0.609132 trainer/policy/mean Max 0.998079 trainer/policy/mean Min -0.99775 trainer/policy/normal/std Mean 0.379244 trainer/policy/normal/std Std 0.184275 trainer/policy/normal/std Max 0.957263 trainer/policy/normal/std Min 0.0678307 trainer/policy/normal/log_std Mean -1.12058 trainer/policy/normal/log_std Std 0.592441 trainer/policy/normal/log_std Max -0.0436776 trainer/policy/normal/log_std Min -2.69074 eval/num steps total 343913 eval/num paths total 671 eval/path length Mean 881 eval/path length Std 0 eval/path length Max 881 eval/path length Min 881 eval/Rewards Mean 3.26927 eval/Rewards Std 0.672045 eval/Rewards Max 4.69535 eval/Rewards Min 0.988493 eval/Returns Mean 2880.23 eval/Returns Std 0 eval/Returns Max 2880.23 eval/Returns Min 2880.23 eval/Actions Mean 0.154663 eval/Actions Std 0.608529 eval/Actions Max 0.998348 eval/Actions Min -0.99921 eval/Num Paths 1 eval/Average Returns 2880.23 eval/normalized_score 89.1208 time/evaluation sampling (s) 0.883955 time/logging (s) 0.00332819 time/sampling batch (s) 0.269208 time/saving (s) 0.0029388 time/training (s) 4.22841 time/epoch (s) 5.38784 time/total (s) 33043.8 Epoch -482 ---------------------------------- --------------- 
2022-05-10 22:21:23.077452 PDT | [0] Epoch -481 finished ---------------------------------- --------------- epoch -481 replay_buffer/size 999996 trainer/num train calls 520000 trainer/Policy Loss -2.29812 trainer/Log Pis Mean 2.22131 trainer/Log Pis Std 2.65897 trainer/Log Pis Max 14.3933 trainer/Log Pis Min -5.84446 trainer/policy/mean Mean 0.148587 trainer/policy/mean Std 0.616977 trainer/policy/mean Max 0.998538 trainer/policy/mean Min -0.999481 trainer/policy/normal/std Mean 0.385454 trainer/policy/normal/std Std 0.190574 trainer/policy/normal/std Max 1.12988 trainer/policy/normal/std Min 0.0728425 trainer/policy/normal/log_std Mean -1.10912 trainer/policy/normal/log_std Std 0.601106 trainer/policy/normal/log_std Max 0.122107 trainer/policy/normal/log_std Min -2.61946 eval/num steps total 344565 eval/num paths total 672 eval/path length Mean 652 eval/path length Std 0 eval/path length Max 652 eval/path length Min 652 eval/Rewards Mean 3.20455 eval/Rewards Std 0.735872 eval/Rewards Max 4.7868 eval/Rewards Min 0.985049 eval/Returns Mean 2089.37 eval/Returns Std 0 eval/Returns Max 2089.37 eval/Returns Min 2089.37 eval/Actions Mean 0.151523 eval/Actions Std 0.601734 eval/Actions Max 0.998346 eval/Actions Min -0.997882 eval/Num Paths 1 eval/Average Returns 2089.37 eval/normalized_score 64.8208 time/evaluation sampling (s) 0.918982 time/logging (s) 0.00264856 time/sampling batch (s) 0.267894 time/saving (s) 0.00296953 time/training (s) 4.37364 time/epoch (s) 5.56613 time/total (s) 33049.4 Epoch -481 ---------------------------------- --------------- 2022-05-10 22:21:28.466046 PDT | [0] Epoch -480 finished ---------------------------------- --------------- epoch -480 replay_buffer/size 999996 trainer/num train calls 521000 trainer/Policy Loss -2.10504 trainer/Log Pis Mean 2.13528 trainer/Log Pis Std 2.57813 trainer/Log Pis Max 11.4044 trainer/Log Pis Min -5.50926 trainer/policy/mean Mean 0.131918 trainer/policy/mean Std 0.607414 trainer/policy/mean Max 0.994234 
trainer/policy/mean Min -0.997386 trainer/policy/normal/std Mean 0.375093 trainer/policy/normal/std Std 0.182604 trainer/policy/normal/std Max 1.04946 trainer/policy/normal/std Min 0.067388 trainer/policy/normal/log_std Mean -1.13308 trainer/policy/normal/log_std Std 0.596159 trainer/policy/normal/log_std Max 0.0482752 trainer/policy/normal/log_std Min -2.69729 eval/num steps total 345137 eval/num paths total 673 eval/path length Mean 572 eval/path length Std 0 eval/path length Max 572 eval/path length Min 572 eval/Rewards Mean 3.1498 eval/Rewards Std 0.752107 eval/Rewards Max 4.86211 eval/Rewards Min 0.989265 eval/Returns Mean 1801.69 eval/Returns Std 0 eval/Returns Max 1801.69 eval/Returns Min 1801.69 eval/Actions Mean 0.150685 eval/Actions Std 0.589497 eval/Actions Max 0.996984 eval/Actions Min -0.998151 eval/Num Paths 1 eval/Average Returns 1801.69 eval/normalized_score 55.9815 time/evaluation sampling (s) 0.878879 time/logging (s) 0.00238081 time/sampling batch (s) 0.266301 time/saving (s) 0.0029585 time/training (s) 4.21601 time/epoch (s) 5.36653 time/total (s) 33054.8 Epoch -480 ---------------------------------- --------------- 2022-05-10 22:21:33.942909 PDT | [0] Epoch -479 finished ---------------------------------- --------------- epoch -479 replay_buffer/size 999996 trainer/num train calls 522000 trainer/Policy Loss -2.05836 trainer/Log Pis Mean 2.13347 trainer/Log Pis Std 2.56135 trainer/Log Pis Max 12.2408 trainer/Log Pis Min -4.74679 trainer/policy/mean Mean 0.117188 trainer/policy/mean Std 0.620381 trainer/policy/mean Max 0.997822 trainer/policy/mean Min -0.998131 trainer/policy/normal/std Mean 0.382217 trainer/policy/normal/std Std 0.185724 trainer/policy/normal/std Max 0.965238 trainer/policy/normal/std Min 0.0631337 trainer/policy/normal/log_std Mean -1.11093 trainer/policy/normal/log_std Std 0.587083 trainer/policy/normal/log_std Max -0.0353806 trainer/policy/normal/log_std Min -2.7625 eval/num steps total 345656 eval/num paths total 674 
eval/path length Mean 519 eval/path length Std 0 eval/path length Max 519 eval/path length Min 519 eval/Rewards Mean 3.16537 eval/Rewards Std 0.808789 eval/Rewards Max 5.40062 eval/Rewards Min 0.989046 eval/Returns Mean 1642.82 eval/Returns Std 0 eval/Returns Max 1642.82 eval/Returns Min 1642.82 eval/Actions Mean 0.155565 eval/Actions Std 0.593483 eval/Actions Max 0.997308 eval/Actions Min -0.997063 eval/Num Paths 1 eval/Average Returns 1642.82 eval/normalized_score 51.1003 time/evaluation sampling (s) 0.943584 time/logging (s) 0.00243893 time/sampling batch (s) 0.269595 time/saving (s) 0.00300529 time/training (s) 4.23607 time/epoch (s) 5.4547 time/total (s) 33060.2 Epoch -479 ---------------------------------- --------------- 2022-05-10 22:21:39.371518 PDT | [0] Epoch -478 finished ---------------------------------- --------------- epoch -478 replay_buffer/size 999996 trainer/num train calls 523000 trainer/Policy Loss -2.20015 trainer/Log Pis Mean 2.24959 trainer/Log Pis Std 2.6137 trainer/Log Pis Max 14.5646 trainer/Log Pis Min -5.37826 trainer/policy/mean Mean 0.151015 trainer/policy/mean Std 0.610107 trainer/policy/mean Max 0.998627 trainer/policy/mean Min -0.998642 trainer/policy/normal/std Mean 0.381178 trainer/policy/normal/std Std 0.179818 trainer/policy/normal/std Max 0.972315 trainer/policy/normal/std Min 0.07267 trainer/policy/normal/log_std Mean -1.10519 trainer/policy/normal/log_std Std 0.57139 trainer/policy/normal/log_std Max -0.0280759 trainer/policy/normal/log_std Min -2.62183 eval/num steps total 346627 eval/num paths total 676 eval/path length Mean 485.5 eval/path length Std 76.5 eval/path length Max 562 eval/path length Min 409 eval/Rewards Mean 3.1512 eval/Rewards Std 0.80406 eval/Rewards Max 4.76355 eval/Rewards Min 0.981455 eval/Returns Mean 1529.91 eval/Returns Std 273.909 eval/Returns Max 1803.81 eval/Returns Min 1256 eval/Actions Mean 0.148734 eval/Actions Std 0.596586 eval/Actions Max 0.998781 eval/Actions Min -0.998341 eval/Num Paths 2 
eval/Average Returns 1529.91 eval/normalized_score 47.6308 time/evaluation sampling (s) 0.882295 time/logging (s) 0.00373946 time/sampling batch (s) 0.270434 time/saving (s) 0.00315632 time/training (s) 4.24802 time/epoch (s) 5.40765 time/total (s) 33065.6 Epoch -478 ---------------------------------- --------------- 2022-05-10 22:21:44.766432 PDT | [0] Epoch -477 finished ---------------------------------- --------------- epoch -477 replay_buffer/size 999996 trainer/num train calls 524000 trainer/Policy Loss -2.22689 trainer/Log Pis Mean 2.10812 trainer/Log Pis Std 2.64146 trainer/Log Pis Max 12.0875 trainer/Log Pis Min -4.55095 trainer/policy/mean Mean 0.126934 trainer/policy/mean Std 0.628112 trainer/policy/mean Max 0.998017 trainer/policy/mean Min -0.997862 trainer/policy/normal/std Mean 0.379311 trainer/policy/normal/std Std 0.181282 trainer/policy/normal/std Max 0.981842 trainer/policy/normal/std Min 0.0733329 trainer/policy/normal/log_std Mean -1.11474 trainer/policy/normal/log_std Std 0.58062 trainer/policy/normal/log_std Max -0.0183247 trainer/policy/normal/log_std Min -2.61275 eval/num steps total 347163 eval/num paths total 677 eval/path length Mean 536 eval/path length Std 0 eval/path length Max 536 eval/path length Min 536 eval/Rewards Mean 3.21146 eval/Rewards Std 0.839892 eval/Rewards Max 5.53267 eval/Rewards Min 0.981629 eval/Returns Mean 1721.34 eval/Returns Std 0 eval/Returns Max 1721.34 eval/Returns Min 1721.34 eval/Actions Mean 0.163813 eval/Actions Std 0.588885 eval/Actions Max 0.99808 eval/Actions Min -0.998031 eval/Num Paths 1 eval/Average Returns 1721.34 eval/normalized_score 53.5129 time/evaluation sampling (s) 0.913016 time/logging (s) 0.00234214 time/sampling batch (s) 0.26626 time/saving (s) 0.00293187 time/training (s) 4.18673 time/epoch (s) 5.37128 time/total (s) 33071 Epoch -477 ---------------------------------- --------------- 2022-05-10 22:21:50.170736 PDT | [0] Epoch -476 finished ---------------------------------- --------------- 
epoch -476 replay_buffer/size 999996 trainer/num train calls 525000 trainer/Policy Loss -2.17606 trainer/Log Pis Mean 2.3236 trainer/Log Pis Std 2.65396 trainer/Log Pis Max 9.6123 trainer/Log Pis Min -4.888 trainer/policy/mean Mean 0.162782 trainer/policy/mean Std 0.616617 trainer/policy/mean Max 0.997795 trainer/policy/mean Min -0.997501 trainer/policy/normal/std Mean 0.373299 trainer/policy/normal/std Std 0.180837 trainer/policy/normal/std Max 0.985938 trainer/policy/normal/std Min 0.0677796 trainer/policy/normal/log_std Mean -1.13651 trainer/policy/normal/log_std Std 0.594029 trainer/policy/normal/log_std Max -0.0141614 trainer/policy/normal/log_std Min -2.69149 eval/num steps total 348134 eval/num paths total 679 eval/path length Mean 485.5 eval/path length Std 14.5 eval/path length Max 500 eval/path length Min 471 eval/Rewards Mean 3.16926 eval/Rewards Std 0.804718 eval/Rewards Max 4.75128 eval/Rewards Min 0.986872 eval/Returns Mean 1538.67 eval/Returns Std 31.9033 eval/Returns Max 1570.58 eval/Returns Min 1506.77 eval/Actions Mean 0.149273 eval/Actions Std 0.599448 eval/Actions Max 0.99833 eval/Actions Min -0.999185 eval/Num Paths 2 eval/Average Returns 1538.67 eval/normalized_score 47.9002 time/evaluation sampling (s) 0.884446 time/logging (s) 0.00358911 time/sampling batch (s) 0.274 time/saving (s) 0.00300327 time/training (s) 4.21837 time/epoch (s) 5.38341 time/total (s) 33076.4 Epoch -476 ---------------------------------- --------------- 2022-05-10 22:21:55.530801 PDT | [0] Epoch -475 finished ---------------------------------- --------------- epoch -475 replay_buffer/size 999996 trainer/num train calls 526000 trainer/Policy Loss -2.2218 trainer/Log Pis Mean 2.40105 trainer/Log Pis Std 2.6673 trainer/Log Pis Max 13.4942 trainer/Log Pis Min -3.00905 trainer/policy/mean Mean 0.112402 trainer/policy/mean Std 0.621504 trainer/policy/mean Max 0.997602 trainer/policy/mean Min -0.999517 trainer/policy/normal/std Mean 0.373777 trainer/policy/normal/std Std 
0.178786 trainer/policy/normal/std Max 0.921199 trainer/policy/normal/std Min 0.0713508 trainer/policy/normal/log_std Mean -1.13137 trainer/policy/normal/log_std Std 0.585612 trainer/policy/normal/log_std Max -0.0820794 trainer/policy/normal/log_std Min -2.64015 eval/num steps total 348672 eval/num paths total 680 eval/path length Mean 538 eval/path length Std 0 eval/path length Max 538 eval/path length Min 538 eval/Rewards Mean 3.19092 eval/Rewards Std 0.84519 eval/Rewards Max 5.35304 eval/Rewards Min 0.986973 eval/Returns Mean 1716.71 eval/Returns Std 0 eval/Returns Max 1716.71 eval/Returns Min 1716.71 eval/Actions Mean 0.150618 eval/Actions Std 0.579445 eval/Actions Max 0.998465 eval/Actions Min -0.998284 eval/Num Paths 1 eval/Average Returns 1716.71 eval/normalized_score 53.3706 time/evaluation sampling (s) 0.892399 time/logging (s) 0.00243262 time/sampling batch (s) 0.264516 time/saving (s) 0.00295184 time/training (s) 4.17483 time/epoch (s) 5.33713 time/total (s) 33081.7 Epoch -475 ---------------------------------- --------------- 2022-05-10 22:22:00.938403 PDT | [0] Epoch -474 finished ---------------------------------- --------------- epoch -474 replay_buffer/size 999996 trainer/num train calls 527000 trainer/Policy Loss -2.09165 trainer/Log Pis Mean 2.07648 trainer/Log Pis Std 2.53006 trainer/Log Pis Max 10.4233 trainer/Log Pis Min -5.20411 trainer/policy/mean Mean 0.146957 trainer/policy/mean Std 0.60293 trainer/policy/mean Max 0.999051 trainer/policy/mean Min -0.996394 trainer/policy/normal/std Mean 0.388433 trainer/policy/normal/std Std 0.192064 trainer/policy/normal/std Max 1.02864 trainer/policy/normal/std Min 0.0738226 trainer/policy/normal/log_std Mean -1.10131 trainer/policy/normal/log_std Std 0.601243 trainer/policy/normal/log_std Max 0.0282415 trainer/policy/normal/log_std Min -2.60609 eval/num steps total 349325 eval/num paths total 681 eval/path length Mean 653 eval/path length Std 0 eval/path length Max 653 eval/path length Min 653 
eval/Rewards Mean 3.21309 eval/Rewards Std 0.669496 eval/Rewards Max 4.5582 eval/Rewards Min 0.981888 eval/Returns Mean 2098.15 eval/Returns Std 0 eval/Returns Max 2098.15 eval/Returns Min 2098.15 eval/Actions Mean 0.147528 eval/Actions Std 0.612391 eval/Actions Max 0.997224 eval/Actions Min -0.998988 eval/Num Paths 1 eval/Average Returns 2098.15 eval/normalized_score 65.0906 time/evaluation sampling (s) 0.901313 time/logging (s) 0.00274004 time/sampling batch (s) 0.266209 time/saving (s) 0.00315117 time/training (s) 4.2083 time/epoch (s) 5.38171 time/total (s) 33087.1 Epoch -474 ---------------------------------- --------------- 2022-05-10 22:22:06.310485 PDT | [0] Epoch -473 finished ---------------------------------- --------------- epoch -473 replay_buffer/size 999996 trainer/num train calls 528000 trainer/Policy Loss -2.31009 trainer/Log Pis Mean 2.43809 trainer/Log Pis Std 2.61092 trainer/Log Pis Max 13.6639 trainer/Log Pis Min -4.05781 trainer/policy/mean Mean 0.144561 trainer/policy/mean Std 0.62724 trainer/policy/mean Max 0.994732 trainer/policy/mean Min -0.998269 trainer/policy/normal/std Mean 0.369999 trainer/policy/normal/std Std 0.177958 trainer/policy/normal/std Max 0.913381 trainer/policy/normal/std Min 0.0694682 trainer/policy/normal/log_std Mean -1.13959 trainer/policy/normal/log_std Std 0.578588 trainer/policy/normal/log_std Max -0.0906025 trainer/policy/normal/log_std Min -2.66689 eval/num steps total 349901 eval/num paths total 682 eval/path length Mean 576 eval/path length Std 0 eval/path length Max 576 eval/path length Min 576 eval/Rewards Mean 3.13388 eval/Rewards Std 0.711371 eval/Rewards Max 4.53028 eval/Rewards Min 0.988465 eval/Returns Mean 1805.12 eval/Returns Std 0 eval/Returns Max 1805.12 eval/Returns Min 1805.12 eval/Actions Mean 0.150905 eval/Actions Std 0.604778 eval/Actions Max 0.998008 eval/Actions Min -0.997325 eval/Num Paths 1 eval/Average Returns 1805.12 eval/normalized_score 56.0869 time/evaluation sampling (s) 0.898188 
time/logging (s) 0.00277209 time/sampling batch (s) 0.271649 time/saving (s) 0.00325542 time/training (s) 4.17401 time/epoch (s) 5.34987 time/total (s) 33092.5 Epoch -473 ---------------------------------- --------------- 2022-05-10 22:22:11.707779 PDT | [0] Epoch -472 finished ---------------------------------- --------------- epoch -472 replay_buffer/size 999996 trainer/num train calls 529000 trainer/Policy Loss -2.30036 trainer/Log Pis Mean 2.2834 trainer/Log Pis Std 2.60655 trainer/Log Pis Max 11.4253 trainer/Log Pis Min -4.43314 trainer/policy/mean Mean 0.125248 trainer/policy/mean Std 0.621573 trainer/policy/mean Max 0.998226 trainer/policy/mean Min -0.997854 trainer/policy/normal/std Mean 0.376703 trainer/policy/normal/std Std 0.187533 trainer/policy/normal/std Max 1.04445 trainer/policy/normal/std Min 0.0630211 trainer/policy/normal/log_std Mean -1.13204 trainer/policy/normal/log_std Std 0.59931 trainer/policy/normal/log_std Max 0.0434899 trainer/policy/normal/log_std Min -2.76429 eval/num steps total 350660 eval/num paths total 683 eval/path length Mean 759 eval/path length Std 0 eval/path length Max 759 eval/path length Min 759 eval/Rewards Mean 3.18131 eval/Rewards Std 0.693402 eval/Rewards Max 4.75865 eval/Rewards Min 0.982036 eval/Returns Mean 2414.62 eval/Returns Std 0 eval/Returns Max 2414.62 eval/Returns Min 2414.62 eval/Actions Mean 0.164358 eval/Actions Std 0.598148 eval/Actions Max 0.998404 eval/Actions Min -0.997979 eval/Num Paths 1 eval/Average Returns 2414.62 eval/normalized_score 74.8144 time/evaluation sampling (s) 0.906836 time/logging (s) 0.00296885 time/sampling batch (s) 0.266927 time/saving (s) 0.00298415 time/training (s) 4.19538 time/epoch (s) 5.37509 time/total (s) 33097.9 Epoch -472 ---------------------------------- --------------- 2022-05-10 22:22:17.095251 PDT | [0] Epoch -471 finished ---------------------------------- --------------- epoch -471 replay_buffer/size 999996 trainer/num train calls 530000 trainer/Policy Loss 
-2.19807 trainer/Log Pis Mean 2.13054 trainer/Log Pis Std 2.6174 trainer/Log Pis Max 14.497 trainer/Log Pis Min -3.76409 trainer/policy/mean Mean 0.110057 trainer/policy/mean Std 0.621295 trainer/policy/mean Max 0.998567 trainer/policy/mean Min -0.998446 trainer/policy/normal/std Mean 0.388592 trainer/policy/normal/std Std 0.189588 trainer/policy/normal/std Max 0.960501 trainer/policy/normal/std Min 0.0700275 trainer/policy/normal/log_std Mean -1.09486 trainer/policy/normal/log_std Std 0.586733 trainer/policy/normal/log_std Max -0.0403007 trainer/policy/normal/log_std Min -2.65887 eval/num steps total 351593 eval/num paths total 685 eval/path length Mean 466.5 eval/path length Std 58.5 eval/path length Max 525 eval/path length Min 408 eval/Rewards Mean 3.15514 eval/Rewards Std 0.827851 eval/Rewards Max 5.55418 eval/Rewards Min 0.983183 eval/Returns Mean 1471.87 eval/Returns Std 227.063 eval/Returns Max 1698.94 eval/Returns Min 1244.81 eval/Actions Mean 0.15114 eval/Actions Std 0.592843 eval/Actions Max 0.997216 eval/Actions Min -0.999482 eval/Num Paths 2 eval/Average Returns 1471.87 eval/normalized_score 45.8477 time/evaluation sampling (s) 0.904275 time/logging (s) 0.00375859 time/sampling batch (s) 0.26635 time/saving (s) 0.00294778 time/training (s) 4.18883 time/epoch (s) 5.36617 time/total (s) 33103.2 Epoch -471 ---------------------------------- --------------- 2022-05-10 22:22:22.477385 PDT | [0] Epoch -470 finished ---------------------------------- --------------- epoch -470 replay_buffer/size 999996 trainer/num train calls 531000 trainer/Policy Loss -2.09416 trainer/Log Pis Mean 2.24467 trainer/Log Pis Std 2.6248 trainer/Log Pis Max 15.8977 trainer/Log Pis Min -5.14734 trainer/policy/mean Mean 0.142512 trainer/policy/mean Std 0.611319 trainer/policy/mean Max 0.997563 trainer/policy/mean Min -0.999613 trainer/policy/normal/std Mean 0.372165 trainer/policy/normal/std Std 0.18086 trainer/policy/normal/std Max 0.928021 trainer/policy/normal/std Min 0.0671087 
trainer/policy/normal/log_std Mean -1.14069 trainer/policy/normal/log_std Std 0.595959 trainer/policy/normal/log_std Max -0.0747005 trainer/policy/normal/log_std Min -2.70144 eval/num steps total 352096 eval/num paths total 686 eval/path length Mean 503 eval/path length Std 0 eval/path length Max 503 eval/path length Min 503 eval/Rewards Mean 3.13923 eval/Rewards Std 0.7548 eval/Rewards Max 4.67978 eval/Rewards Min 0.988051 eval/Returns Mean 1579.03 eval/Returns Std 0 eval/Returns Max 1579.03 eval/Returns Min 1579.03 eval/Actions Mean 0.156892 eval/Actions Std 0.597312 eval/Actions Max 0.998414 eval/Actions Min -0.997078 eval/Num Paths 1 eval/Average Returns 1579.03 eval/normalized_score 49.1402 time/evaluation sampling (s) 0.879896 time/logging (s) 0.00222803 time/sampling batch (s) 0.266412 time/saving (s) 0.00292688 time/training (s) 4.20719 time/epoch (s) 5.35866 time/total (s) 33108.6 Epoch -470 ---------------------------------- --------------- 2022-05-10 22:22:27.833756 PDT | [0] Epoch -469 finished ---------------------------------- --------------- epoch -469 replay_buffer/size 999996 trainer/num train calls 532000 trainer/Policy Loss -2.36912 trainer/Log Pis Mean 2.15034 trainer/Log Pis Std 2.52228 trainer/Log Pis Max 9.06527 trainer/Log Pis Min -6.19183 trainer/policy/mean Mean 0.147233 trainer/policy/mean Std 0.619904 trainer/policy/mean Max 0.99882 trainer/policy/mean Min -0.997349 trainer/policy/normal/std Mean 0.387918 trainer/policy/normal/std Std 0.188761 trainer/policy/normal/std Max 1.01926 trainer/policy/normal/std Min 0.0727132 trainer/policy/normal/log_std Mean -1.09764 trainer/policy/normal/log_std Std 0.59077 trainer/policy/normal/log_std Max 0.0190765 trainer/policy/normal/log_std Min -2.62123 eval/num steps total 353086 eval/num paths total 688 eval/path length Mean 495 eval/path length Std 88 eval/path length Max 583 eval/path length Min 407 eval/Rewards Mean 3.12394 eval/Rewards Std 0.766935 eval/Rewards Max 5.16806 eval/Rewards Min 
0.984468 eval/Returns Mean 1546.35 eval/Returns Std 303.891 eval/Returns Max 1850.24 eval/Returns Min 1242.46 eval/Actions Mean 0.151311 eval/Actions Std 0.598865 eval/Actions Max 0.998357 eval/Actions Min -0.998385 eval/Num Paths 2 eval/Average Returns 1546.35 eval/normalized_score 48.136 time/evaluation sampling (s) 0.882011 time/logging (s) 0.00361073 time/sampling batch (s) 0.265993 time/saving (s) 0.00290137 time/training (s) 4.18134 time/epoch (s) 5.33586 time/total (s) 33113.9 Epoch -469 ---------------------------------- --------------- 2022-05-10 22:22:33.205833 PDT | [0] Epoch -468 finished ---------------------------------- --------------- epoch -468 replay_buffer/size 999996 trainer/num train calls 533000 trainer/Policy Loss -2.29715 trainer/Log Pis Mean 2.10692 trainer/Log Pis Std 2.45203 trainer/Log Pis Max 8.62938 trainer/Log Pis Min -5.05694 trainer/policy/mean Mean 0.16016 trainer/policy/mean Std 0.615694 trainer/policy/mean Max 0.997648 trainer/policy/mean Min -0.998571 trainer/policy/normal/std Mean 0.391045 trainer/policy/normal/std Std 0.18496 trainer/policy/normal/std Max 0.950475 trainer/policy/normal/std Min 0.0704019 trainer/policy/normal/log_std Mean -1.08129 trainer/policy/normal/log_std Std 0.574497 trainer/policy/normal/log_std Max -0.0507937 trainer/policy/normal/log_std Min -2.65354 eval/num steps total 353727 eval/num paths total 689 eval/path length Mean 641 eval/path length Std 0 eval/path length Max 641 eval/path length Min 641 eval/Rewards Mean 3.25066 eval/Rewards Std 0.722642 eval/Rewards Max 4.75757 eval/Rewards Min 0.991462 eval/Returns Mean 2083.68 eval/Returns Std 0 eval/Returns Max 2083.68 eval/Returns Min 2083.68 eval/Actions Mean 0.142166 eval/Actions Std 0.607695 eval/Actions Max 0.997026 eval/Actions Min -0.999475 eval/Num Paths 1 eval/Average Returns 2083.68 eval/normalized_score 64.6459 time/evaluation sampling (s) 0.882282 time/logging (s) 0.00268638 time/sampling batch (s) 0.267838 time/saving (s) 0.00304076 
time/training (s) 4.1933 time/epoch (s) 5.34915 time/total (s) 33119.3 Epoch -468 ---------------------------------- --------------- 2022-05-10 22:22:38.539199 PDT | [0] Epoch -467 finished ---------------------------------- --------------- epoch -467 replay_buffer/size 999996 trainer/num train calls 534000 trainer/Policy Loss -2.09549 trainer/Log Pis Mean 2.04514 trainer/Log Pis Std 2.48679 trainer/Log Pis Max 10.1481 trainer/Log Pis Min -5.3836 trainer/policy/mean Mean 0.118568 trainer/policy/mean Std 0.612237 trainer/policy/mean Max 0.998801 trainer/policy/mean Min -0.998819 trainer/policy/normal/std Mean 0.372728 trainer/policy/normal/std Std 0.179818 trainer/policy/normal/std Max 0.909431 trainer/policy/normal/std Min 0.0670056 trainer/policy/normal/log_std Mean -1.13435 trainer/policy/normal/log_std Std 0.584549 trainer/policy/normal/log_std Max -0.0949363 trainer/policy/normal/log_std Min -2.70298 eval/num steps total 354291 eval/num paths total 690 eval/path length Mean 564 eval/path length Std 0 eval/path length Max 564 eval/path length Min 564 eval/Rewards Mean 3.24397 eval/Rewards Std 0.808951 eval/Rewards Max 4.74932 eval/Rewards Min 0.982095 eval/Returns Mean 1829.6 eval/Returns Std 0 eval/Returns Max 1829.6 eval/Returns Min 1829.6 eval/Actions Mean 0.154776 eval/Actions Std 0.577275 eval/Actions Max 0.998037 eval/Actions Min -0.998124 eval/Num Paths 1 eval/Average Returns 1829.6 eval/normalized_score 56.8391 time/evaluation sampling (s) 0.878599 time/logging (s) 0.00243915 time/sampling batch (s) 0.264895 time/saving (s) 0.00294502 time/training (s) 4.16215 time/epoch (s) 5.31103 time/total (s) 33124.6 Epoch -467 ---------------------------------- --------------- 2022-05-10 22:22:43.902267 PDT | [0] Epoch -466 finished ---------------------------------- --------------- epoch -466 replay_buffer/size 999996 trainer/num train calls 535000 trainer/Policy Loss -2.23083 trainer/Log Pis Mean 2.30381 trainer/Log Pis Std 2.59375 trainer/Log Pis Max 10.1545 
trainer/Log Pis Min -4.72038 trainer/policy/mean Mean 0.159789 trainer/policy/mean Std 0.625782 trainer/policy/mean Max 0.997206 trainer/policy/mean Min -0.99805 trainer/policy/normal/std Mean 0.388213 trainer/policy/normal/std Std 0.18822 trainer/policy/normal/std Max 0.957511 trainer/policy/normal/std Min 0.0681752 trainer/policy/normal/log_std Mean -1.09627 trainer/policy/normal/log_std Std 0.590229 trainer/policy/normal/log_std Max -0.0434182 trainer/policy/normal/log_std Min -2.68567 eval/num steps total 354791 eval/num paths total 691 eval/path length Mean 500 eval/path length Std 0 eval/path length Max 500 eval/path length Min 500 eval/Rewards Mean 3.11605 eval/Rewards Std 0.748187 eval/Rewards Max 4.694 eval/Rewards Min 0.981364 eval/Returns Mean 1558.03 eval/Returns Std 0 eval/Returns Max 1558.03 eval/Returns Min 1558.03 eval/Actions Mean 0.151877 eval/Actions Std 0.597964 eval/Actions Max 0.997895 eval/Actions Min -0.997382 eval/Num Paths 1 eval/Average Returns 1558.03 eval/normalized_score 48.4948 time/evaluation sampling (s) 0.870538 time/logging (s) 0.00224116 time/sampling batch (s) 0.266197 time/saving (s) 0.00295954 time/training (s) 4.19912 time/epoch (s) 5.34105 time/total (s) 33129.9 Epoch -466 ---------------------------------- --------------- 2022-05-10 22:22:49.213867 PDT | [0] Epoch -465 finished ---------------------------------- --------------- epoch -465 replay_buffer/size 999996 trainer/num train calls 536000 trainer/Policy Loss -2.41471 trainer/Log Pis Mean 2.16322 trainer/Log Pis Std 2.69097 trainer/Log Pis Max 12.9664 trainer/Log Pis Min -6.79703 trainer/policy/mean Mean 0.128525 trainer/policy/mean Std 0.631209 trainer/policy/mean Max 0.99713 trainer/policy/mean Min -0.998366 trainer/policy/normal/std Mean 0.38787 trainer/policy/normal/std Std 0.192411 trainer/policy/normal/std Max 0.987203 trainer/policy/normal/std Min 0.0681527 trainer/policy/normal/log_std Mean -1.10319 trainer/policy/normal/log_std Std 0.601177 
trainer/policy/normal/log_std Max -0.0128793  trainer/policy/normal/log_std Min -2.686
eval/num steps total 355293  eval/num paths total 692
eval/path length Mean 502  eval/path length Std 0  eval/path length Max 502  eval/path length Min 502
eval/Rewards Mean 3.12507  eval/Rewards Std 0.754604  eval/Rewards Max 4.69691  eval/Rewards Min 0.987859
eval/Returns Mean 1568.79  eval/Returns Std 0  eval/Returns Max 1568.79  eval/Returns Min 1568.79
eval/Actions Mean 0.150636  eval/Actions Std 0.592658  eval/Actions Max 0.998221  eval/Actions Min -0.996993
eval/Num Paths 1  eval/Average Returns 1568.79  eval/normalized_score 48.8254
time/evaluation sampling (s) 0.876163  time/logging (s) 0.00226737  time/sampling batch (s) 0.26275  time/saving (s) 0.00295573  time/training (s) 4.14573  time/epoch (s) 5.28987  time/total (s) 33135.2
Epoch -465
----------------------------------  ---------------

2022-05-10 22:22:54.555807 PDT | [0] Epoch -464 finished
----------------------------------  ---------------
epoch -464  replay_buffer/size 999996  trainer/num train calls 537000
trainer/Policy Loss -2.2607  trainer/Log Pis Mean 2.25545  trainer/Log Pis Std 2.66079  trainer/Log Pis Max 14.9626  trainer/Log Pis Min -5.21181
trainer/policy/mean Mean 0.147051  trainer/policy/mean Std 0.611099  trainer/policy/mean Max 0.9983  trainer/policy/mean Min -0.998995
trainer/policy/normal/std Mean 0.382591  trainer/policy/normal/std Std 0.181016  trainer/policy/normal/std Max 1.02698  trainer/policy/normal/std Min 0.0674513
trainer/policy/normal/log_std Mean -1.10687  trainer/policy/normal/log_std Std 0.586059  trainer/policy/normal/log_std Max 0.0266245  trainer/policy/normal/log_std Min -2.69635
eval/num steps total 355853  eval/num paths total 693
eval/path length Mean 560  eval/path length Std 0  eval/path length Max 560  eval/path length Min 560
eval/Rewards Mean 3.20145  eval/Rewards Std 0.84619  eval/Rewards Max 4.7633  eval/Rewards Min 0.981593
eval/Returns Mean 1792.81  eval/Returns Std 0  eval/Returns Max 1792.81  eval/Returns Min 1792.81
eval/Actions Mean 0.157521  eval/Actions Std 0.571523  eval/Actions Max 0.997914  eval/Actions Min -0.999427
eval/Num Paths 1  eval/Average Returns 1792.81  eval/normalized_score 55.7089
time/evaluation sampling (s) 0.874261  time/logging (s) 0.00239047  time/sampling batch (s) 0.264752  time/saving (s) 0.00294537  time/training (s) 4.17607  time/epoch (s) 5.32042  time/total (s) 33140.6
Epoch -464
----------------------------------  ---------------

2022-05-10 22:22:59.875263 PDT | [0] Epoch -463 finished
----------------------------------  ---------------
epoch -463  replay_buffer/size 999996  trainer/num train calls 538000
trainer/Policy Loss -2.44072  trainer/Log Pis Mean 2.34886  trainer/Log Pis Std 2.60576  trainer/Log Pis Max 9.91348  trainer/Log Pis Min -3.33246
trainer/policy/mean Mean 0.123404  trainer/policy/mean Std 0.62718  trainer/policy/mean Max 0.998343  trainer/policy/mean Min -0.998295
trainer/policy/normal/std Mean 0.391407  trainer/policy/normal/std Std 0.192828  trainer/policy/normal/std Max 0.954644  trainer/policy/normal/std Min 0.0689969
trainer/policy/normal/log_std Mean -1.0933  trainer/policy/normal/log_std Std 0.6018  trainer/policy/normal/log_std Max -0.0464164  trainer/policy/normal/log_std Min -2.67369
eval/num steps total 356514  eval/num paths total 694
eval/path length Mean 661  eval/path length Std 0  eval/path length Max 661  eval/path length Min 661
eval/Rewards Mean 3.21026  eval/Rewards Std 0.731222  eval/Rewards Max 4.98411  eval/Rewards Min 0.98755
eval/Returns Mean 2121.98  eval/Returns Std 0  eval/Returns Max 2121.98  eval/Returns Min 2121.98
eval/Actions Mean 0.150923  eval/Actions Std 0.598245  eval/Actions Max 0.997906  eval/Actions Min -0.998217
eval/Num Paths 1  eval/Average Returns 2121.98  eval/normalized_score 65.823
time/evaluation sampling (s) 0.871917  time/logging (s) 0.00262965  time/sampling batch (s) 0.264075  time/saving (s) 0.00294451  time/training (s) 4.15633  time/epoch (s) 5.2979  time/total (s) 33145.9
Epoch -463
----------------------------------  ---------------

2022-05-10 22:23:05.193793 PDT | [0] Epoch -462 finished
----------------------------------  ---------------
epoch -462  replay_buffer/size 999996  trainer/num train calls 539000
trainer/Policy Loss -2.22441  trainer/Log Pis Mean 2.4143  trainer/Log Pis Std 2.67875  trainer/Log Pis Max 15.0041  trainer/Log Pis Min -4.3587
trainer/policy/mean Mean 0.116797  trainer/policy/mean Std 0.621728  trainer/policy/mean Max 0.99765  trainer/policy/mean Min -0.999676
trainer/policy/normal/std Mean 0.373785  trainer/policy/normal/std Std 0.177495  trainer/policy/normal/std Max 0.954419  trainer/policy/normal/std Min 0.0693277
trainer/policy/normal/log_std Mean -1.12957  trainer/policy/normal/log_std Std 0.582849  trainer/policy/normal/log_std Max -0.0466525  trainer/policy/normal/log_std Min -2.66891
eval/num steps total 357478  eval/num paths total 696
eval/path length Mean 482  eval/path length Std 101  eval/path length Max 583  eval/path length Min 381
eval/Rewards Mean 3.12207  eval/Rewards Std 0.87168  eval/Rewards Max 5.36054  eval/Rewards Min 0.984237
eval/Returns Mean 1504.84  eval/Returns Std 323.715  eval/Returns Max 1828.55  eval/Returns Min 1181.12
eval/Actions Mean 0.123082  eval/Actions Std 0.568215  eval/Actions Max 0.99781  eval/Actions Min -0.998249
eval/Num Paths 2  eval/Average Returns 1504.84  eval/normalized_score 46.8605
time/evaluation sampling (s) 0.884391  time/logging (s) 0.00355043  time/sampling batch (s) 0.262719  time/saving (s) 0.00294367  time/training (s) 4.14407  time/epoch (s) 5.29767  time/total (s) 33151.2
Epoch -462
----------------------------------  ---------------

2022-05-10 22:23:10.534487 PDT | [0] Epoch -461 finished
----------------------------------  ---------------
epoch -461  replay_buffer/size 999996  trainer/num train calls 540000
trainer/Policy Loss -2.31912  trainer/Log Pis Mean 2.35362  trainer/Log Pis Std 2.66713  trainer/Log Pis Max 9.80462  trainer/Log Pis Min -5.68173
trainer/policy/mean Mean 0.126831  trainer/policy/mean Std 0.619813  trainer/policy/mean Max 0.995969  trainer/policy/mean Min -0.997975
trainer/policy/normal/std Mean 0.369593  trainer/policy/normal/std Std 0.179326  trainer/policy/normal/std Max 0.863443  trainer/policy/normal/std Min 0.0734274
trainer/policy/normal/log_std Mean -1.14416  trainer/policy/normal/log_std Std 0.585551  trainer/policy/normal/log_std Max -0.146828  trainer/policy/normal/log_std Min -2.61146
eval/num steps total 358148  eval/num paths total 697
eval/path length Mean 670  eval/path length Std 0  eval/path length Max 670  eval/path length Min 670
eval/Rewards Mean 3.20955  eval/Rewards Std 0.692624  eval/Rewards Max 4.81142  eval/Rewards Min 0.9845
eval/Returns Mean 2150.4  eval/Returns Std 0  eval/Returns Max 2150.4  eval/Returns Min 2150.4
eval/Actions Mean 0.166049  eval/Actions Std 0.613798  eval/Actions Max 0.998052  eval/Actions Min -0.997376
eval/Num Paths 1  eval/Average Returns 2150.4  eval/normalized_score 66.6961
time/evaluation sampling (s) 0.896379  time/logging (s) 0.00266741  time/sampling batch (s) 0.263218  time/saving (s) 0.00291735  time/training (s) 4.15264  time/epoch (s) 5.31782  time/total (s) 33156.5
Epoch -461
----------------------------------  ---------------

2022-05-10 22:23:15.938462 PDT | [0] Epoch -460 finished
----------------------------------  ---------------
epoch -460  replay_buffer/size 999996  trainer/num train calls 541000
trainer/Policy Loss -2.40145  trainer/Log Pis Mean 2.46829  trainer/Log Pis Std 2.75677  trainer/Log Pis Max 12.8707  trainer/Log Pis Min -5.82395
trainer/policy/mean Mean 0.133857  trainer/policy/mean Std 0.626625  trainer/policy/mean Max 0.994633  trainer/policy/mean Min -0.998169
trainer/policy/normal/std Mean 0.374619  trainer/policy/normal/std Std 0.178724  trainer/policy/normal/std Max 0.880194  trainer/policy/normal/std Min 0.0736498
trainer/policy/normal/log_std Mean -1.12571  trainer/policy/normal/log_std Std 0.575861  trainer/policy/normal/log_std Max -0.127613  trainer/policy/normal/log_std Min -2.60843
eval/num steps total 359139  eval/num paths total 699
eval/path length Mean 495.5  eval/path length Std 57.5  eval/path length Max 553  eval/path length Min 438
eval/Rewards Mean 3.22164  eval/Rewards Std 0.857928  eval/Rewards Max 5.51825  eval/Rewards Min 0.981208
eval/Returns Mean 1596.32  eval/Returns Std 199.234  eval/Returns Max 1795.56  eval/Returns Min 1397.09
eval/Actions Mean 0.13988  eval/Actions Std 0.583597  eval/Actions Max 0.998123  eval/Actions Min -0.998458
eval/Num Paths 2  eval/Average Returns 1596.32  eval/normalized_score 49.6716
time/evaluation sampling (s) 0.965667  time/logging (s) 0.00374583  time/sampling batch (s) 0.263231  time/saving (s) 0.00310135  time/training (s) 4.14804  time/epoch (s) 5.38379  time/total (s) 33161.9
Epoch -460
----------------------------------  ---------------

2022-05-10 22:23:21.302131 PDT | [0] Epoch -459 finished
----------------------------------  ---------------
epoch -459  replay_buffer/size 999996  trainer/num train calls 542000
trainer/Policy Loss -2.35882  trainer/Log Pis Mean 2.27781  trainer/Log Pis Std 2.64715  trainer/Log Pis Max 10.3784  trainer/Log Pis Min -6.811
trainer/policy/mean Mean 0.125112  trainer/policy/mean Std 0.622477  trainer/policy/mean Max 0.997574  trainer/policy/mean Min -0.998752
trainer/policy/normal/std Mean 0.378336  trainer/policy/normal/std Std 0.182394  trainer/policy/normal/std Max 0.991887  trainer/policy/normal/std Min 0.0718242
trainer/policy/normal/log_std Mean -1.12009  trainer/policy/normal/log_std Std 0.586691  trainer/policy/normal/log_std Max -0.00814612  trainer/policy/normal/log_std Min -2.63353
eval/num steps total 359641  eval/num paths total 700
eval/path length Mean 502  eval/path length Std 0  eval/path length Max 502  eval/path length Min 502
eval/Rewards Mean 3.07034  eval/Rewards Std 0.803537  eval/Rewards Max 5.17068  eval/Rewards Min 0.989283
eval/Returns Mean 1541.31  eval/Returns Std 0  eval/Returns Max 1541.31  eval/Returns Min 1541.31
eval/Actions Mean 0.149084  eval/Actions Std 0.578671  eval/Actions Max 0.997282  eval/Actions Min -0.998049
eval/Num Paths 1  eval/Average Returns 1541.31  eval/normalized_score 47.9812
time/evaluation sampling (s) 0.892181  time/logging (s) 0.00231796  time/sampling batch (s) 0.26503  time/saving (s) 0.00299455  time/training (s) 4.17779  time/epoch (s) 5.34031  time/total (s) 33167.2
Epoch -459
----------------------------------  ---------------

2022-05-10 22:23:26.644799 PDT | [0] Epoch -458 finished
----------------------------------  ---------------
epoch -458  replay_buffer/size 999996  trainer/num train calls 543000
trainer/Policy Loss -2.31754  trainer/Log Pis Mean 2.29021  trainer/Log Pis Std 2.5968  trainer/Log Pis Max 15.1278  trainer/Log Pis Min -5.49941
trainer/policy/mean Mean 0.110866  trainer/policy/mean Std 0.6174  trainer/policy/mean Max 0.997006  trainer/policy/mean Min -0.998546
trainer/policy/normal/std Mean 0.379554  trainer/policy/normal/std Std 0.184604  trainer/policy/normal/std Max 0.970606  trainer/policy/normal/std Min 0.0689437
trainer/policy/normal/log_std Mean -1.11841  trainer/policy/normal/log_std Std 0.587219  trainer/policy/normal/log_std Max -0.029835  trainer/policy/normal/log_std Min -2.67447
eval/num steps total 360143  eval/num paths total 701
eval/path length Mean 502  eval/path length Std 0  eval/path length Max 502  eval/path length Min 502
eval/Rewards Mean 3.10593  eval/Rewards Std 0.766659  eval/Rewards Max 4.6924  eval/Rewards Min 0.985312
eval/Returns Mean 1559.18  eval/Returns Std 0  eval/Returns Max 1559.18  eval/Returns Min 1559.18
eval/Actions Mean 0.151121  eval/Actions Std 0.589817  eval/Actions Max 0.99895  eval/Actions Min -0.996798
eval/Num Paths 1  eval/Average Returns 1559.18  eval/normalized_score 48.5301
time/evaluation sampling (s) 0.880724  time/logging (s) 0.00232253  time/sampling batch (s) 0.269281  time/saving (s) 0.00314429  time/training (s) 4.16521  time/epoch (s) 5.32068  time/total (s) 33172.5
Epoch -458
----------------------------------  ---------------

2022-05-10 22:23:31.999928 PDT | [0] Epoch -457 finished
----------------------------------  ---------------
epoch -457  replay_buffer/size 999996  trainer/num train calls 544000
trainer/Policy Loss -2.28649  trainer/Log Pis Mean 2.44089  trainer/Log Pis Std 2.5958  trainer/Log Pis Max 9.4055  trainer/Log Pis Min -4.15285
trainer/policy/mean Mean 0.137042  trainer/policy/mean Std 0.623889  trainer/policy/mean Max 0.998362  trainer/policy/mean Min -0.997961
trainer/policy/normal/std Mean 0.375467  trainer/policy/normal/std Std 0.184244  trainer/policy/normal/std Max 1.01404  trainer/policy/normal/std Min 0.0648099
trainer/policy/normal/log_std Mean -1.13565  trainer/policy/normal/log_std Std 0.603985  trainer/policy/normal/log_std Max 0.0139411  trainer/policy/normal/log_std Min -2.7363
eval/num steps total 360764  eval/num paths total 702
eval/path length Mean 621  eval/path length Std 0  eval/path length Max 621  eval/path length Min 621
eval/Rewards Mean 3.21443  eval/Rewards Std 0.762955  eval/Rewards Max 4.98194  eval/Rewards Min 0.988758
eval/Returns Mean 1996.16  eval/Returns Std 0  eval/Returns Max 1996.16  eval/Returns Min 1996.16
eval/Actions Mean 0.139927  eval/Actions Std 0.581648  eval/Actions Max 0.997574  eval/Actions Min -0.998476
eval/Num Paths 1  eval/Average Returns 1996.16  eval/normalized_score 61.9569
time/evaluation sampling (s) 0.877131  time/logging (s) 0.00264312  time/sampling batch (s) 0.265055  time/saving (s) 0.00306774  time/training (s) 4.18536  time/epoch (s) 5.33326  time/total (s) 33177.9
Epoch -457
----------------------------------  ---------------

2022-05-10 22:23:37.336414 PDT | [0] Epoch -456 finished
----------------------------------  ---------------
epoch -456  replay_buffer/size 999996  trainer/num train calls 545000
trainer/Policy Loss -2.27296  trainer/Log Pis Mean 2.07101  trainer/Log Pis Std 2.56372  trainer/Log Pis Max 9.70215  trainer/Log Pis Min -4.90569
trainer/policy/mean Mean 0.153301  trainer/policy/mean Std 0.619145  trainer/policy/mean Max 0.99759  trainer/policy/mean Min -0.997832
trainer/policy/normal/std Mean 0.388688  trainer/policy/normal/std Std 0.181691  trainer/policy/normal/std Max 0.97166  trainer/policy/normal/std Min 0.0736662
trainer/policy/normal/log_std Mean -1.0847  trainer/policy/normal/log_std Std 0.569583  trainer/policy/normal/log_std Max -0.0287499  trainer/policy/normal/log_std Min -2.60821
eval/num steps total 361716  eval/num paths total 704
eval/path length Mean 476  eval/path length Std 11  eval/path length Max 487  eval/path length Min 465
eval/Rewards Mean 3.19539  eval/Rewards Std 0.840984  eval/Rewards Max 4.99262  eval/Rewards Min 0.981318
eval/Returns Mean 1521  eval/Returns Std 28.7542  eval/Returns Max 1549.76  eval/Returns Min 1492.25
eval/Actions Mean 0.150841  eval/Actions Std 0.588897  eval/Actions Max 0.998233  eval/Actions Min -0.998499
eval/Num Paths 2  eval/Average Returns 1521  eval/normalized_score 47.3573
time/evaluation sampling (s) 0.882902  time/logging (s) 0.00349896  time/sampling batch (s) 0.26461  time/saving (s) 0.00296015  time/training (s) 4.16143  time/epoch (s) 5.3154  time/total (s) 33183.2
Epoch -456
----------------------------------  ---------------

2022-05-10 22:23:42.681164 PDT | [0] Epoch -455 finished
----------------------------------  ---------------
epoch -455  replay_buffer/size 999996  trainer/num train calls 546000
trainer/Policy Loss -2.19292  trainer/Log Pis Mean 2.23911  trainer/Log Pis Std 2.61711  trainer/Log Pis Max 9.74107  trainer/Log Pis Min -5.5679
trainer/policy/mean Mean 0.170168  trainer/policy/mean Std 0.610108  trainer/policy/mean Max 0.998884  trainer/policy/mean Min -0.996826
trainer/policy/normal/std Mean 0.38309  trainer/policy/normal/std Std 0.187528  trainer/policy/normal/std Max 1.07705  trainer/policy/normal/std Min 0.0710817
trainer/policy/normal/log_std Mean -1.11484  trainer/policy/normal/log_std Std 0.602285  trainer/policy/normal/log_std Max 0.0742249  trainer/policy/normal/log_std Min -2.64392
eval/num steps total 362699  eval/num paths total 706
eval/path length Mean 491.5  eval/path length Std 87.5  eval/path length Max 579  eval/path length Min 404
eval/Rewards Mean 3.1529  eval/Rewards Std 0.763443  eval/Rewards Max 4.84244  eval/Rewards Min 0.978109
eval/Returns Mean 1549.65  eval/Returns Std 311.582  eval/Returns Max 1861.23  eval/Returns Min 1238.07
eval/Actions Mean 0.143024  eval/Actions Std 0.595508  eval/Actions Max 0.998319  eval/Actions Min -0.998291
eval/Num Paths 2  eval/Average Returns 1549.65  eval/normalized_score 48.2374
time/evaluation sampling (s) 0.874059  time/logging (s) 0.00386713  time/sampling batch (s) 0.269935  time/saving (s) 0.00308629  time/training (s) 4.17246  time/epoch (s) 5.3234  time/total (s) 33188.5
Epoch -455
----------------------------------  ---------------

2022-05-10 22:23:48.004613 PDT | [0] Epoch -454 finished
----------------------------------  ---------------
epoch -454  replay_buffer/size 999996  trainer/num train calls 547000
trainer/Policy Loss -1.98115  trainer/Log Pis Mean 2.01835  trainer/Log Pis Std 2.54252  trainer/Log Pis Max 10.2083  trainer/Log Pis Min -4.559
trainer/policy/mean Mean 0.144644  trainer/policy/mean Std 0.608641  trainer/policy/mean Max 0.998184  trainer/policy/mean Min -0.996596
trainer/policy/normal/std Mean 0.386338  trainer/policy/normal/std Std 0.184454  trainer/policy/normal/std Max 0.984731  trainer/policy/normal/std Min 0.0678499
trainer/policy/normal/log_std Mean -1.09798  trainer/policy/normal/log_std Std 0.586084  trainer/policy/normal/log_std Max -0.0153871  trainer/policy/normal/log_std Min -2.69046
eval/num steps total 363266  eval/num paths total 707
eval/path length Mean 567  eval/path length Std 0  eval/path length Max 567  eval/path length Min 567
eval/Rewards Mean 3.14841  eval/Rewards Std 0.771191  eval/Rewards Max 4.89513  eval/Rewards Min 0.983845
eval/Returns Mean 1785.15  eval/Returns Std 0  eval/Returns Max 1785.15  eval/Returns Min 1785.15
eval/Actions Mean 0.148894  eval/Actions Std 0.583592  eval/Actions Max 0.997063  eval/Actions Min -0.998297
eval/Num Paths 1  eval/Average Returns 1785.15  eval/normalized_score 55.4734
time/evaluation sampling (s) 0.870893  time/logging (s) 0.00244037  time/sampling batch (s) 0.264249  time/saving (s) 0.00297415  time/training (s) 4.15954  time/epoch (s) 5.30009  time/total (s) 33193.8
Epoch -454
----------------------------------  ---------------

2022-05-10 22:23:53.343275 PDT | [0] Epoch -453 finished
----------------------------------  ---------------
epoch -453  replay_buffer/size 999996  trainer/num train calls 548000
trainer/Policy Loss -2.43193  trainer/Log Pis Mean 2.34945  trainer/Log Pis Std 2.73469  trainer/Log Pis Max 15.3217  trainer/Log Pis Min -6.44197
trainer/policy/mean Mean 0.155935  trainer/policy/mean Std 0.626939  trainer/policy/mean Max 0.998415  trainer/policy/mean Min -0.999343
trainer/policy/normal/std Mean 0.382827  trainer/policy/normal/std Std 0.186389  trainer/policy/normal/std Max 0.966186  trainer/policy/normal/std Min 0.0656402
trainer/policy/normal/log_std Mean -1.11147  trainer/policy/normal/log_std Std 0.592348  trainer/policy/normal/log_std Max -0.0343987  trainer/policy/normal/log_std Min -2.72357
eval/num steps total 364080  eval/num paths total 709
eval/path length Mean 407  eval/path length Std 2  eval/path length Max 409  eval/path length Min 405
eval/Rewards Mean 3.07049  eval/Rewards Std 0.782448  eval/Rewards Max 4.68592  eval/Rewards Min 0.983685
eval/Returns Mean 1249.69  eval/Returns Std 4.98868  eval/Returns Max 1254.68  eval/Returns Min 1244.7
eval/Actions Mean 0.149927  eval/Actions Std 0.594788  eval/Actions Max 0.99712  eval/Actions Min -0.99638
eval/Num Paths 2  eval/Average Returns 1249.69  eval/normalized_score 39.0208
time/evaluation sampling (s) 0.876622  time/logging (s) 0.00322678  time/sampling batch (s) 0.263685  time/saving (s) 0.00295738  time/training (s) 4.17087  time/epoch (s) 5.31736  time/total (s) 33199.1
Epoch -453
----------------------------------  ---------------

2022-05-10 22:23:58.648797 PDT | [0] Epoch -452 finished
----------------------------------  ---------------
epoch -452  replay_buffer/size 999996  trainer/num train calls 549000
trainer/Policy Loss -2.11404  trainer/Log Pis Mean 2.18234  trainer/Log Pis Std 2.60893  trainer/Log Pis Max 10.5445  trainer/Log Pis Min -4.35006
trainer/policy/mean Mean 0.169183  trainer/policy/mean Std 0.598074  trainer/policy/mean Max 0.997271  trainer/policy/mean Min -0.997772
trainer/policy/normal/std Mean 0.383351  trainer/policy/normal/std Std 0.189421  trainer/policy/normal/std Max 0.994215  trainer/policy/normal/std Min 0.070573
trainer/policy/normal/log_std Mean -1.11263  trainer/policy/normal/log_std Std 0.596662  trainer/policy/normal/log_std Max -0.00580173  trainer/policy/normal/log_std Min -2.65111
eval/num steps total 365018  eval/num paths total 711
eval/path length Mean 469  eval/path length Std 29  eval/path length Max 498  eval/path length Min 440
eval/Rewards Mean 3.11676  eval/Rewards Std 0.84785  eval/Rewards Max 5.57049  eval/Rewards Min 0.986561
eval/Returns Mean 1461.76  eval/Returns Std 69.6383  eval/Returns Max 1531.4  eval/Returns Min 1392.12
eval/Actions Mean 0.151891  eval/Actions Std 0.586507  eval/Actions Max 0.998282  eval/Actions Min -0.997763
eval/Num Paths 2  eval/Average Returns 1461.76  eval/normalized_score 45.5369
time/evaluation sampling (s) 0.87121  time/logging (s) 0.00349427  time/sampling batch (s) 0.262025  time/saving (s) 0.00298492  time/training (s) 4.14399  time/epoch (s) 5.2837  time/total (s) 33204.4
Epoch -452
----------------------------------  ---------------

2022-05-10 22:24:03.957145 PDT | [0] Epoch -451 finished
----------------------------------  ---------------
epoch -451  replay_buffer/size 999996  trainer/num train calls 550000
trainer/Policy Loss -2.29713  trainer/Log Pis Mean 2.27623  trainer/Log Pis Std 2.66032  trainer/Log Pis Max 11.1144  trainer/Log Pis Min -7.2499
trainer/policy/mean Mean 0.162445  trainer/policy/mean Std 0.616583  trainer/policy/mean Max 0.997061  trainer/policy/mean Min -0.998136
trainer/policy/normal/std Mean 0.37512  trainer/policy/normal/std Std 0.184763  trainer/policy/normal/std Max 0.92872  trainer/policy/normal/std Min 0.0655782
trainer/policy/normal/log_std Mean -1.13533  trainer/policy/normal/log_std Std 0.599317  trainer/policy/normal/log_std Max -0.0739483  trainer/policy/normal/log_std Min -2.72451
eval/num steps total 365605  eval/num paths total 712
eval/path length Mean 587  eval/path length Std 0  eval/path length Max 587  eval/path length Min 587
eval/Rewards Mean 3.13204  eval/Rewards Std 0.778693  eval/Rewards Max 5.37983  eval/Rewards Min 0.982416
eval/Returns Mean 1838.51  eval/Returns Std 0  eval/Returns Max 1838.51  eval/Returns Min 1838.51
eval/Actions Mean 0.143684  eval/Actions Std 0.584053  eval/Actions Max 0.998209  eval/Actions Min -0.997812
eval/Num Paths 1  eval/Average Returns 1838.51  eval/normalized_score 57.1129
time/evaluation sampling (s) 0.876924  time/logging (s) 0.00255229  time/sampling batch (s) 0.266599  time/saving (s) 0.00302008  time/training (s) 4.1366  time/epoch (s) 5.28569  time/total (s) 33209.7
Epoch -451
----------------------------------  ---------------

2022-05-10 22:24:09.260809 PDT | [0] Epoch -450 finished
----------------------------------  ---------------
epoch -450  replay_buffer/size 999996  trainer/num train calls 551000
trainer/Policy Loss -2.0243  trainer/Log Pis Mean 2.2465  trainer/Log Pis Std 2.56538  trainer/Log Pis Max 10.2891  trainer/Log Pis Min -4.86292
trainer/policy/mean Mean 0.122664  trainer/policy/mean Std 0.621411  trainer/policy/mean Max 0.995897  trainer/policy/mean Min -0.99811
trainer/policy/normal/std Mean 0.378007  trainer/policy/normal/std Std 0.183056  trainer/policy/normal/std Max 1.02544  trainer/policy/normal/std Min 0.0708452
trainer/policy/normal/log_std Mean -1.11802  trainer/policy/normal/log_std Std 0.576982  trainer/policy/normal/log_std Max 0.0251239  trainer/policy/normal/log_std Min -2.64726
eval/num steps total 366594  eval/num paths total 714
eval/path length Mean 494.5  eval/path length Std 0.5  eval/path length Max 495  eval/path length Min 494
eval/Rewards Mean 3.07333  eval/Rewards Std 0.762328  eval/Rewards Max 4.62844  eval/Rewards Min 0.981356
eval/Returns Mean 1519.76  eval/Returns Std 0.622588  eval/Returns Max 1520.39  eval/Returns Min 1519.14
eval/Actions Mean 0.15294  eval/Actions Std 0.588809  eval/Actions Max 0.997891  eval/Actions Min -0.996957
eval/Num Paths 2  eval/Average Returns 1519.76  eval/normalized_score 47.3192
time/evaluation sampling (s) 0.874983  time/logging (s) 0.00363029  time/sampling batch (s) 0.263514  time/saving (s) 0.00294149  time/training (s) 4.13746  time/epoch (s) 5.28252  time/total (s) 33215
Epoch -450
----------------------------------  ---------------

2022-05-10 22:24:14.606164 PDT | [0] Epoch -449 finished
----------------------------------  ---------------
epoch -449  replay_buffer/size 999996  trainer/num train calls 552000
trainer/Policy Loss -2.36906  trainer/Log Pis Mean 2.25536  trainer/Log Pis Std 2.6398  trainer/Log Pis Max 10.2019  trainer/Log Pis Min -4.94958
trainer/policy/mean Mean 0.154881  trainer/policy/mean Std 0.610526  trainer/policy/mean Max 0.996982  trainer/policy/mean Min -0.998249
trainer/policy/normal/std Mean 0.376755  trainer/policy/normal/std Std 0.181646  trainer/policy/normal/std Max 0.902192  trainer/policy/normal/std Min 0.0680339
trainer/policy/normal/log_std Mean -1.12779  trainer/policy/normal/log_std Std 0.595767  trainer/policy/normal/log_std Max -0.102928  trainer/policy/normal/log_std Min -2.68775
eval/num steps total 367144  eval/num paths total 715
eval/path length Mean 550  eval/path length Std 0  eval/path length Max 550  eval/path length Min 550
eval/Rewards Mean 3.26986  eval/Rewards Std 0.812126  eval/Rewards Max 4.97203  eval/Rewards Min 0.983051
eval/Returns Mean 1798.42  eval/Returns Std 0  eval/Returns Max 1798.42  eval/Returns Min 1798.42
eval/Actions Mean 0.145095  eval/Actions Std 0.583989  eval/Actions Max 0.998542  eval/Actions Min -0.998463
eval/Num Paths 1  eval/Average Returns 1798.42  eval/normalized_score 55.8813
time/evaluation sampling (s) 0.895878  time/logging (s) 0.00236567  time/sampling batch (s) 0.268662  time/saving (s) 0.00296659  time/training (s) 4.15213  time/epoch (s) 5.322  time/total (s) 33220.3
Epoch -449
----------------------------------  ---------------

2022-05-10 22:24:19.928977 PDT | [0] Epoch -448 finished
----------------------------------  ---------------
epoch -448  replay_buffer/size 999996  trainer/num train calls 553000
trainer/Policy Loss -2.16132  trainer/Log Pis Mean 2.22749  trainer/Log Pis Std 2.59027  trainer/Log Pis Max 10.0684  trainer/Log Pis Min -5.63716
trainer/policy/mean Mean 0.117846  trainer/policy/mean Std 0.620923  trainer/policy/mean Max 0.996658  trainer/policy/mean Min -0.997719
trainer/policy/normal/std Mean 0.374733  trainer/policy/normal/std Std 0.182948  trainer/policy/normal/std Max 0.88503  trainer/policy/normal/std Min 0.0696799
trainer/policy/normal/log_std Mean -1.13238  trainer/policy/normal/log_std Std 0.590183  trainer/policy/normal/log_std Max -0.122133  trainer/policy/normal/log_std Min -2.66384
eval/num steps total 367699  eval/num paths total 716
eval/path length Mean 555  eval/path length Std 0  eval/path length Max 555  eval/path length Min 555
eval/Rewards Mean 3.21022  eval/Rewards Std 0.841545  eval/Rewards Max 4.97654  eval/Rewards Min 0.982605
eval/Returns Mean 1781.67  eval/Returns Std 0  eval/Returns Max 1781.67  eval/Returns Min 1781.67
eval/Actions Mean 0.155911  eval/Actions Std 0.577729  eval/Actions Max 0.997899  eval/Actions Min -0.999183
eval/Num Paths 1  eval/Average Returns 1781.67  eval/normalized_score 55.3665
time/evaluation sampling (s) 0.87127  time/logging (s) 0.00238984  time/sampling batch (s) 0.263042  time/saving (s) 0.00300475  time/training (s) 4.16133  time/epoch (s) 5.30103  time/total (s) 33225.6
Epoch -448
----------------------------------  ---------------

2022-05-10 22:24:25.270189 PDT | [0] Epoch -447 finished
----------------------------------  ---------------
epoch -447  replay_buffer/size 999996  trainer/num train calls 554000
trainer/Policy Loss -2.04169  trainer/Log Pis Mean 2.09729  trainer/Log Pis Std 2.54561  trainer/Log Pis Max 10.0158  trainer/Log Pis Min -4.16695
trainer/policy/mean Mean 0.150225  trainer/policy/mean Std 0.606908  trainer/policy/mean Max 0.998157  trainer/policy/mean Min -0.997236
trainer/policy/normal/std Mean 0.384532  trainer/policy/normal/std Std 0.184986  trainer/policy/normal/std Max 0.966558  trainer/policy/normal/std Min 0.0724839
trainer/policy/normal/log_std Mean -1.1065  trainer/policy/normal/log_std Std 0.594931  trainer/policy/normal/log_std Max -0.0340139  trainer/policy/normal/log_std Min -2.62439
eval/num steps total 368583  eval/num paths total 718
eval/path length Mean 442  eval/path length Std 36  eval/path length Max 478  eval/path length Min 406
eval/Rewards Mean 3.14502  eval/Rewards Std 0.814678  eval/Rewards Max 4.77  eval/Rewards Min 0.979082
eval/Returns Mean 1390.1  eval/Returns Std 144.24  eval/Returns Max 1534.34  eval/Returns Min 1245.86
eval/Actions Mean 0.152835  eval/Actions Std 0.604241  eval/Actions Max 0.997304  eval/Actions Min -0.99836
eval/Num Paths 2  eval/Average Returns 1390.1  eval/normalized_score 43.335
time/evaluation sampling (s) 0.883102  time/logging (s) 0.00333887  time/sampling batch (s) 0.265448  time/saving (s) 0.00293666  time/training (s) 4.1653  time/epoch (s) 5.32012  time/total (s) 33231
Epoch -447
----------------------------------  ---------------

2022-05-10 22:24:30.594846 PDT | [0] Epoch -446 finished
----------------------------------  ---------------
epoch -446  replay_buffer/size 999996  trainer/num train calls 555000
trainer/Policy Loss -2.3102  trainer/Log Pis Mean 2.1719  trainer/Log Pis Std 2.66128  trainer/Log Pis Max 14.6999  trainer/Log Pis Min -6.08607
trainer/policy/mean Mean 0.14807  trainer/policy/mean Std 0.626843  trainer/policy/mean Max 0.995887  trainer/policy/mean Min -0.99906
trainer/policy/normal/std Mean 0.392815  trainer/policy/normal/std Std 0.187465  trainer/policy/normal/std Max 0.988229  trainer/policy/normal/std Min 0.0736949
trainer/policy/normal/log_std Mean -1.07903  trainer/policy/normal/log_std Std 0.578631  trainer/policy/normal/log_std Max -0.0118404  trainer/policy/normal/log_std Min -2.60782
eval/num steps total 369537  eval/num paths total 720
eval/path length Mean 477  eval/path length Std 19  eval/path length Max 496  eval/path length Min 458
eval/Rewards Mean 3.18761  eval/Rewards Std 0.812229  eval/Rewards Max 5.20758  eval/Rewards Min 0.981774
eval/Returns Mean 1520.49  eval/Returns Std 47.3793  eval/Returns Max 1567.87  eval/Returns Min 1473.11
eval/Actions Mean 0.15368  eval/Actions Std 0.588034  eval/Actions Max 0.997744  eval/Actions Min -0.998196
eval/Num Paths 2  eval/Average Returns 1520.49  eval/normalized_score 47.3414
time/evaluation sampling (s) 0.895218  time/logging (s) 0.00359203  time/sampling batch (s) 0.261956  time/saving (s) 0.00314284  time/training (s) 4.13918  time/epoch (s) 5.30309  time/total (s) 33236.3
Epoch -446
----------------------------------  ---------------

2022-05-10 22:24:35.961457 PDT | [0] Epoch -445 finished
----------------------------------  ---------------
epoch -445  replay_buffer/size 999996  trainer/num train calls 556000
trainer/Policy Loss -2.09456  trainer/Log Pis Mean 2.17928  trainer/Log Pis Std 2.54433  trainer/Log Pis Max 9.77656  trainer/Log Pis Min -6.93855
trainer/policy/mean Mean 0.177947  trainer/policy/mean Std 0.607344  trainer/policy/mean Max 0.9973  trainer/policy/mean Min -0.996465
trainer/policy/normal/std Mean 0.384936  trainer/policy/normal/std Std 0.180393  trainer/policy/normal/std Max 0.925975  trainer/policy/normal/std Min 0.0710491
trainer/policy/normal/log_std Mean -1.09501  trainer/policy/normal/log_std Std 0.570584  trainer/policy/normal/log_std Max -0.0769083  trainer/policy/normal/log_std Min -2.64438
eval/num steps total 370424  eval/num paths total 722
eval/path length Mean 443.5  eval/path length Std 32.5  eval/path length Max 476  eval/path length Min 411
eval/Rewards Mean 3.18235  eval/Rewards Std 0.802316  eval/Rewards Max 4.75666  eval/Rewards Min 0.983598
eval/Returns Mean 1411.37  eval/Returns Std 136.53  eval/Returns Max 1547.9  eval/Returns Min 1274.84
eval/Actions Mean 0.16251  eval/Actions Std 0.60054  eval/Actions Max 0.997829  eval/Actions Min -0.99904
eval/Num Paths 2  eval/Average Returns 1411.37  eval/normalized_score 43.9887
time/evaluation sampling (s) 0.893253  time/logging (s) 0.003398  time/sampling batch (s) 0.26415  time/saving (s) 0.00297578  time/training (s) 4.18027  time/epoch (s) 5.34405  time/total (s) 33241.6
Epoch -445
----------------------------------  ---------------

2022-05-10 22:24:41.306930 PDT | [0] Epoch -444 finished
----------------------------------  ---------------
epoch -444  replay_buffer/size 999996  trainer/num train calls 557000
trainer/Policy Loss -2.14303  trainer/Log Pis Mean 2.16106  trainer/Log Pis Std 2.71403  trainer/Log Pis Max 10.0628  trainer/Log Pis Min -4.53118
trainer/policy/mean Mean 0.168375  trainer/policy/mean Std 0.611802  trainer/policy/mean Max 0.99762  trainer/policy/mean Min -0.99858
trainer/policy/normal/std Mean 0.375857  trainer/policy/normal/std Std 0.177352  trainer/policy/normal/std Max 0.933203  trainer/policy/normal/std Min 0.066728
trainer/policy/normal/log_std Mean -1.1231  trainer/policy/normal/log_std Std 0.581339  trainer/policy/normal/log_std Max -0.0691325  trainer/policy/normal/log_std Min -2.70713
eval/num steps total 371010  eval/num paths total 723
eval/path length Mean 586  eval/path length Std 0  eval/path length Max 586  eval/path length Min 586
eval/Rewards Mean 3.2025  eval/Rewards Std 0.746304  eval/Rewards Max 4.93036  eval/Rewards Min 0.981899
eval/Returns Mean 1876.67  eval/Returns Std 0  eval/Returns Max 1876.67  eval/Returns Min 1876.67
eval/Actions Mean 0.170016  eval/Actions Std 0.599089  eval/Actions Max 0.998753  eval/Actions Min -0.998675
eval/Num Paths 1  eval/Average Returns 1876.67  eval/normalized_score 58.2853
time/evaluation sampling (s) 0.910059  time/logging (s) 0.00246269  time/sampling batch (s) 0.262225  time/saving (s) 0.00296468  time/training (s) 4.14468  time/epoch (s) 5.32239  time/total (s) 33246.9
Epoch -444
----------------------------------  ---------------

2022-05-10 22:24:46.644799 PDT | [0] Epoch -443 finished
----------------------------------  ---------------
epoch -443  replay_buffer/size 999996  trainer/num train calls 558000
trainer/Policy Loss -2.05949  trainer/Log Pis Mean 2.04952  trainer/Log Pis Std 2.62273  trainer/Log Pis Max 11.2287  trainer/Log Pis Min -6.53629
trainer/policy/mean Mean 0.128067  trainer/policy/mean Std 0.606433  trainer/policy/mean Max 0.998799  trainer/policy/mean Min -0.997511
trainer/policy/normal/std Mean 0.378701  trainer/policy/normal/std Std 0.179796  trainer/policy/normal/std Max 1.03853  trainer/policy/normal/std Min 0.0744709
trainer/policy/normal/log_std Mean -1.11113  trainer/policy/normal/log_std Std 0.566624  trainer/policy/normal/log_std Max 0.0378057  trainer/policy/normal/log_std Min -2.59735
eval/num steps total 371828  eval/num paths total 725
eval/path length Mean 409  eval/path length Std 4  eval/path length Max 413  eval/path length Min 405
eval/Rewards Mean 3.05866  eval/Rewards Std 0.797717  eval/Rewards Max 4.84053  eval/Rewards Min 0.979699
eval/Returns Mean 1250.99  eval/Returns Std 16.4105  eval/Returns Max 1267.4  eval/Returns Min 1234.58
eval/Actions Mean 0.144445  eval/Actions Std 0.588859  eval/Actions Max 0.997248  eval/Actions Min -0.999455
eval/Num Paths 2  eval/Average Returns 1250.99  eval/normalized_score 39.0608
time/evaluation sampling (s) 0.892887  time/logging (s) 0.00325187  time/sampling batch (s) 0.263819  time/saving (s) 0.00295294  time/training (s) 4.15379  time/epoch (s) 5.3167  time/total (s) 33252.3
Epoch -443
----------------------------------  ---------------

2022-05-10 22:24:52.001780 PDT | [0] Epoch -442 finished
----------------------------------  ---------------
epoch -442  replay_buffer/size 999996  trainer/num train calls 559000
trainer/Policy Loss -2.45887  trainer/Log Pis Mean 2.44022  trainer/Log Pis Std 2.53326  trainer/Log Pis Max 9.12558  trainer/Log Pis Min -4.12586
trainer/policy/mean Mean 0.159357  trainer/policy/mean Std 0.626794  trainer/policy/mean Max 0.996839  trainer/policy/mean Min -0.997676
trainer/policy/normal/std Mean 0.373241  trainer/policy/normal/std Std 0.179458  trainer/policy/normal/std Max 0.927326  trainer/policy/normal/std Min 0.0672719
trainer/policy/normal/log_std Mean -1.13539  trainer/policy/normal/log_std Std 0.592874  trainer/policy/normal/log_std Max -0.0754506  trainer/policy/normal/log_std Min -2.69901
eval/num steps total 372239  eval/num paths total 726
eval/path length Mean 411  eval/path length Std 0  eval/path length Max 411  eval/path length Min 411
eval/Rewards Mean 3.06754  eval/Rewards Std 0.792521  eval/Rewards Max 4.72588  eval/Rewards Min 0.982485
eval/Returns Mean 1260.76  eval/Returns Std 0  eval/Returns Max 1260.76  eval/Returns Min 1260.76
eval/Actions Mean 0.150793  eval/Actions Std 0.593225  eval/Actions Max 0.997278  eval/Actions Min -0.996765
eval/Num Paths 1  eval/Average Returns 1260.76  eval/normalized_score 39.3609
time/evaluation sampling (s) 0.887676  time/logging (s) 0.00207263  time/sampling batch (s) 0.265433  time/saving (s) 0.003118  time/training (s) 4.17548  time/epoch (s) 5.33378  time/total (s) 33257.6
Epoch -442
----------------------------------  ---------------

2022-05-10 22:24:57.336160 PDT | [0] Epoch -441 finished
----------------------------------  ---------------
epoch -441  replay_buffer/size 999996  trainer/num train calls 560000
trainer/Policy Loss -2.08442  trainer/Log Pis Mean 2.241  trainer/Log Pis Std 2.63642  trainer/Log Pis Max 14.3546  trainer/Log Pis Min -5.77367
trainer/policy/mean Mean 0.124117  trainer/policy/mean Std 0.626207  trainer/policy/mean Max 0.99668  trainer/policy/mean Min -0.998451
trainer/policy/normal/std Mean 0.382538  trainer/policy/normal/std Std 0.182067  trainer/policy/normal/std Max 1.02877  trainer/policy/normal/std Min 0.0681502
trainer/policy/normal/log_std Mean -1.10512  trainer/policy/normal/log_std Std 0.578494  trainer/policy/normal/log_std Max 0.0283601  trainer/policy/normal/log_std Min -2.68604
eval/num steps total 373213  eval/num paths total 728
eval/path length Mean 487  eval/path length Std 15  eval/path length Max 502  eval/path length Min 472
eval/Rewards Mean 3.1049  eval/Rewards Std 0.804417  eval/Rewards Max 5.11727  eval/Rewards Min 0.987447
eval/Returns Mean 1512.08  eval/Returns Std 32.385  eval/Returns Max 1544.47  eval/Returns Min 1479.7
eval/Actions Mean 0.146581  eval/Actions Std 0.592775  eval/Actions Max 0.996865  eval/Actions Min -0.999627
eval/Num Paths 2  eval/Average Returns 1512.08  eval/normalized_score 47.0832
time/evaluation sampling (s) 0.883408  time/logging (s) 0.00357578  time/sampling batch (s) 0.264494  time/saving (s) 0.00294054  time/training (s) 4.15936  time/epoch (s) 5.31378  time/total (s) 33262.9
Epoch -441
----------------------------------  ---------------

2022-05-10 22:25:02.724499 PDT | [0] Epoch -440 finished
----------------------------------  ---------------
epoch -440  replay_buffer/size 999996  trainer/num train calls 561000
trainer/Policy Loss -2.25194  trainer/Log Pis Mean 2.29066  trainer/Log Pis Std 2.6282  trainer/Log Pis Max 9.28744  trainer/Log Pis Min -5.47233
trainer/policy/mean Mean 0.126507  trainer/policy/mean Std 0.615587  trainer/policy/mean Max 0.99758  trainer/policy/mean Min -0.996761
trainer/policy/normal/std Mean 0.370674  trainer/policy/normal/std Std 0.180922  trainer/policy/normal/std Max 0.932712  trainer/policy/normal/std Min 0.0661785
trainer/policy/normal/log_std Mean -1.14367  trainer/policy/normal/log_std Std 0.59159  trainer/policy/normal/log_std Max -0.069659  trainer/policy/normal/log_std Min -2.7154
eval/num steps total 373873  eval/num paths total 729
eval/path length Mean 660  eval/path length Std 0  eval/path length Max 660  eval/path length Min 660
eval/Rewards Mean 3.21662  eval/Rewards Std 0.659125  eval/Rewards Max 4.70885  eval/Rewards Min 0.983307
eval/Returns Mean 2122.97  eval/Returns Std 0  eval/Returns Max 2122.97  eval/Returns Min 2122.97
eval/Actions Mean 0.150054  eval/Actions Std 0.611032  eval/Actions Max 0.998825  eval/Actions Min -0.997967
eval/Num Paths 1  eval/Average Returns 2122.97  eval/normalized_score 65.8532
time/evaluation sampling (s) 0.928366  time/logging (s) 0.00265318  time/sampling batch (s) 0.265572  time/saving (s) 0.00296666  time/training (s) 4.16566  time/epoch (s) 5.36521  time/total (s) 33268.3
Epoch -440
----------------------------------  ---------------

2022-05-10 22:25:08.018498 PDT | [0] Epoch -439 finished
----------------------------------  ---------------
epoch -439  replay_buffer/size 999996  trainer/num train calls 562000
trainer/Policy Loss -2.04124  trainer/Log Pis Mean 2.13753  trainer/Log Pis Std 2.78832  trainer/Log Pis Max 13.5953  trainer/Log Pis Min -6.66887
trainer/policy/mean Mean 0.0964654  trainer/policy/mean Std 0.609887  trainer/policy/mean Max 0.99807  trainer/policy/mean Min -0.998833
trainer/policy/normal/std Mean 0.373083  trainer/policy/normal/std Std 0.18083  trainer/policy/normal/std Max 1.01466  trainer/policy/normal/std Min 0.0682931
trainer/policy/normal/log_std Mean -1.13486  trainer/policy/normal/log_std Std 0.586597  trainer/policy/normal/log_std Max 0.0145585  trainer/policy/normal/log_std Min -2.68395
eval/num steps total 374407  eval/num paths total 730
eval/path length Mean 534  eval/path length Std 0  eval/path length Max 534  eval/path length Min 534
eval/Rewards Mean 3.16816  eval/Rewards Std 0.837519  eval/Rewards Max 5.28244  eval/Rewards Min 0.991889
eval/Returns Mean 1691.8  eval/Returns Std 0  eval/Returns Max 1691.8  eval/Returns Min 1691.8
eval/Actions Mean 0.143878  eval/Actions Std 0.58148  eval/Actions Max 0.997959  eval/Actions Min -0.99768
eval/Num Paths 1  eval/Average Returns 1691.8  eval/normalized_score 52.605
time/evaluation sampling (s) 0.873229  time/logging (s) 0.00240381  time/sampling batch (s) 0.262255  time/saving (s) 0.00307697  time/training (s) 4.13067  time/epoch (s) 5.27163  time/total (s) 33273.6
Epoch -439
----------------------------------  ---------------

2022-05-10
22:25:13.366871 PDT | [0] Epoch -438 finished ---------------------------------- --------------- epoch -438 replay_buffer/size 999996 trainer/num train calls 563000 trainer/Policy Loss -2.26191 trainer/Log Pis Mean 2.31415 trainer/Log Pis Std 2.48148 trainer/Log Pis Max 9.30186 trainer/Log Pis Min -4.0934 trainer/policy/mean Mean 0.125405 trainer/policy/mean Std 0.618782 trainer/policy/mean Max 0.998573 trainer/policy/mean Min -0.997732 trainer/policy/normal/std Mean 0.376875 trainer/policy/normal/std Std 0.184101 trainer/policy/normal/std Max 0.958658 trainer/policy/normal/std Min 0.071972 trainer/policy/normal/log_std Mean -1.12478 trainer/policy/normal/log_std Std 0.584893 trainer/policy/normal/log_std Max -0.0422212 trainer/policy/normal/log_std Min -2.63148 eval/num steps total 374934 eval/num paths total 731 eval/path length Mean 527 eval/path length Std 0 eval/path length Max 527 eval/path length Min 527 eval/Rewards Mean 3.15872 eval/Rewards Std 0.863937 eval/Rewards Max 5.5363 eval/Rewards Min 0.98353 eval/Returns Mean 1664.65 eval/Returns Std 0 eval/Returns Max 1664.65 eval/Returns Min 1664.65 eval/Actions Mean 0.157484 eval/Actions Std 0.584706 eval/Actions Max 0.997557 eval/Actions Min -0.998957 eval/Num Paths 1 eval/Average Returns 1664.65 eval/normalized_score 51.7708 time/evaluation sampling (s) 0.878257 time/logging (s) 0.00296907 time/sampling batch (s) 0.264795 time/saving (s) 0.00323876 time/training (s) 4.17757 time/epoch (s) 5.32683 time/total (s) 33278.9 Epoch -438 ---------------------------------- --------------- 2022-05-10 22:25:18.703447 PDT | [0] Epoch -437 finished ---------------------------------- --------------- epoch -437 replay_buffer/size 999996 trainer/num train calls 564000 trainer/Policy Loss -2.35269 trainer/Log Pis Mean 2.24345 trainer/Log Pis Std 2.52455 trainer/Log Pis Max 8.74954 trainer/Log Pis Min -4.99895 trainer/policy/mean Mean 0.138258 trainer/policy/mean Std 0.617569 trainer/policy/mean Max 0.994127 
trainer/policy/mean Min -0.997567 trainer/policy/normal/std Mean 0.374122 trainer/policy/normal/std Std 0.18603 trainer/policy/normal/std Max 0.950111 trainer/policy/normal/std Min 0.0580925 trainer/policy/normal/log_std Mean -1.14012 trainer/policy/normal/log_std Std 0.602946 trainer/policy/normal/log_std Max -0.0511765 trainer/policy/normal/log_std Min -2.84572 eval/num steps total 375408 eval/num paths total 732 eval/path length Mean 474 eval/path length Std 0 eval/path length Max 474 eval/path length Min 474 eval/Rewards Mean 3.21875 eval/Rewards Std 0.870614 eval/Rewards Max 4.80981 eval/Rewards Min 0.98609 eval/Returns Mean 1525.69 eval/Returns Std 0 eval/Returns Max 1525.69 eval/Returns Min 1525.69 eval/Actions Mean 0.141955 eval/Actions Std 0.573056 eval/Actions Max 0.995338 eval/Actions Min -0.998598 eval/Num Paths 1 eval/Average Returns 1525.69 eval/normalized_score 47.5012 time/evaluation sampling (s) 0.869479 time/logging (s) 0.00217469 time/sampling batch (s) 0.264542 time/saving (s) 0.00293101 time/training (s) 4.17462 time/epoch (s) 5.31375 time/total (s) 33284.2 Epoch -437 ---------------------------------- --------------- 2022-05-10 22:25:24.026662 PDT | [0] Epoch -436 finished ---------------------------------- --------------- epoch -436 replay_buffer/size 999996 trainer/num train calls 565000 trainer/Policy Loss -2.18419 trainer/Log Pis Mean 2.16945 trainer/Log Pis Std 2.58888 trainer/Log Pis Max 10.2394 trainer/Log Pis Min -4.23261 trainer/policy/mean Mean 0.131129 trainer/policy/mean Std 0.619631 trainer/policy/mean Max 0.997198 trainer/policy/mean Min -0.997298 trainer/policy/normal/std Mean 0.380135 trainer/policy/normal/std Std 0.180038 trainer/policy/normal/std Max 0.869298 trainer/policy/normal/std Min 0.0651682 trainer/policy/normal/log_std Mean -1.1165 trainer/policy/normal/log_std Std 0.594553 trainer/policy/normal/log_std Max -0.140069 trainer/policy/normal/log_std Min -2.73078 eval/num steps total 376065 eval/num paths total 734 
eval/path length Mean 328.5 eval/path length Std 17.5 eval/path length Max 346 eval/path length Min 311 eval/Rewards Mean 3.01035 eval/Rewards Std 0.897729 eval/Rewards Max 4.76909 eval/Rewards Min 0.980567 eval/Returns Mean 988.898 eval/Returns Std 70.6313 eval/Returns Max 1059.53 eval/Returns Min 918.267 eval/Actions Mean 0.136458 eval/Actions Std 0.577855 eval/Actions Max 0.997172 eval/Actions Min -0.999317 eval/Num Paths 2 eval/Average Returns 988.898 eval/normalized_score 31.0078 time/evaluation sampling (s) 0.878442 time/logging (s) 0.00301638 time/sampling batch (s) 0.263257 time/saving (s) 0.00339739 time/training (s) 4.15394 time/epoch (s) 5.30205 time/total (s) 33289.5 Epoch -436 ---------------------------------- --------------- 2022-05-10 22:25:29.376531 PDT | [0] Epoch -435 finished ---------------------------------- --------------- epoch -435 replay_buffer/size 999996 trainer/num train calls 566000 trainer/Policy Loss -2.32497 trainer/Log Pis Mean 2.33189 trainer/Log Pis Std 2.69813 trainer/Log Pis Max 12.8954 trainer/Log Pis Min -6.58797 trainer/policy/mean Mean 0.142003 trainer/policy/mean Std 0.61939 trainer/policy/mean Max 0.996885 trainer/policy/mean Min -0.997682 trainer/policy/normal/std Mean 0.37312 trainer/policy/normal/std Std 0.178508 trainer/policy/normal/std Max 0.968512 trainer/policy/normal/std Min 0.0676756 trainer/policy/normal/log_std Mean -1.13114 trainer/policy/normal/log_std Std 0.579497 trainer/policy/normal/log_std Max -0.031994 trainer/policy/normal/log_std Min -2.69303 eval/num steps total 376647 eval/num paths total 735 eval/path length Mean 582 eval/path length Std 0 eval/path length Max 582 eval/path length Min 582 eval/Rewards Mean 3.12412 eval/Rewards Std 0.713927 eval/Rewards Max 4.73689 eval/Rewards Min 0.97908 eval/Returns Mean 1818.24 eval/Returns Std 0 eval/Returns Max 1818.24 eval/Returns Min 1818.24 eval/Actions Mean 0.146392 eval/Actions Std 0.596286 eval/Actions Max 0.99705 eval/Actions Min -0.998575 eval/Num 
Paths 1 eval/Average Returns 1818.24 eval/normalized_score 56.4901 time/evaluation sampling (s) 0.878134 time/logging (s) 0.00245399 time/sampling batch (s) 0.265154 time/saving (s) 0.00307983 time/training (s) 4.17807 time/epoch (s) 5.32689 time/total (s) 33294.8 Epoch -435 ---------------------------------- --------------- 2022-05-10 22:25:34.719136 PDT | [0] Epoch -434 finished ---------------------------------- --------------- epoch -434 replay_buffer/size 999996 trainer/num train calls 567000 trainer/Policy Loss -2.33406 trainer/Log Pis Mean 2.31221 trainer/Log Pis Std 2.61258 trainer/Log Pis Max 10.1802 trainer/Log Pis Min -5.67872 trainer/policy/mean Mean 0.141369 trainer/policy/mean Std 0.624635 trainer/policy/mean Max 0.996089 trainer/policy/mean Min -0.997189 trainer/policy/normal/std Mean 0.372017 trainer/policy/normal/std Std 0.180591 trainer/policy/normal/std Max 0.932661 trainer/policy/normal/std Min 0.0666896 trainer/policy/normal/log_std Mean -1.13689 trainer/policy/normal/log_std Std 0.583985 trainer/policy/normal/log_std Max -0.0697136 trainer/policy/normal/log_std Min -2.70771 eval/num steps total 377580 eval/num paths total 737 eval/path length Mean 466.5 eval/path length Std 19.5 eval/path length Max 486 eval/path length Min 447 eval/Rewards Mean 3.18851 eval/Rewards Std 0.869575 eval/Rewards Max 5.44914 eval/Rewards Min 0.983282 eval/Returns Mean 1487.44 eval/Returns Std 63.8788 eval/Returns Max 1551.32 eval/Returns Min 1423.56 eval/Actions Mean 0.14377 eval/Actions Std 0.587271 eval/Actions Max 0.998073 eval/Actions Min -0.998596 eval/Num Paths 2 eval/Average Returns 1487.44 eval/normalized_score 46.3259 time/evaluation sampling (s) 0.874863 time/logging (s) 0.00347247 time/sampling batch (s) 0.264117 time/saving (s) 0.00298122 time/training (s) 4.17637 time/epoch (s) 5.3218 time/total (s) 33300.2 Epoch -434 ---------------------------------- --------------- 2022-05-10 22:25:40.058398 PDT | [0] Epoch -433 finished 
---------------------------------- --------------- epoch -433 replay_buffer/size 999996 trainer/num train calls 568000 trainer/Policy Loss -2.15252 trainer/Log Pis Mean 1.97644 trainer/Log Pis Std 2.54839 trainer/Log Pis Max 9.21751 trainer/Log Pis Min -4.13948 trainer/policy/mean Mean 0.141618 trainer/policy/mean Std 0.607405 trainer/policy/mean Max 0.99647 trainer/policy/mean Min -0.998022 trainer/policy/normal/std Mean 0.39017 trainer/policy/normal/std Std 0.193154 trainer/policy/normal/std Max 0.991659 trainer/policy/normal/std Min 0.071447 trainer/policy/normal/log_std Mean -1.09817 trainer/policy/normal/log_std Std 0.604211 trainer/policy/normal/log_std Max -0.0083763 trainer/policy/normal/log_std Min -2.6388 eval/num steps total 378538 eval/num paths total 739 eval/path length Mean 479 eval/path length Std 72 eval/path length Max 551 eval/path length Min 407 eval/Rewards Mean 3.14657 eval/Rewards Std 0.816434 eval/Rewards Max 5.18748 eval/Rewards Min 0.983149 eval/Returns Mean 1507.21 eval/Returns Std 261.127 eval/Returns Max 1768.33 eval/Returns Min 1246.08 eval/Actions Mean 0.154609 eval/Actions Std 0.590731 eval/Actions Max 0.997362 eval/Actions Min -0.99835 eval/Num Paths 2 eval/Average Returns 1507.21 eval/normalized_score 46.9333 time/evaluation sampling (s) 0.874397 time/logging (s) 0.00359396 time/sampling batch (s) 0.264746 time/saving (s) 0.00310753 time/training (s) 4.17159 time/epoch (s) 5.31743 time/total (s) 33305.5 Epoch -433 ---------------------------------- --------------- 2022-05-10 22:25:45.401706 PDT | [0] Epoch -432 finished ---------------------------------- --------------- epoch -432 replay_buffer/size 999996 trainer/num train calls 569000 trainer/Policy Loss -2.26274 trainer/Log Pis Mean 2.15062 trainer/Log Pis Std 2.6343 trainer/Log Pis Max 10.2377 trainer/Log Pis Min -5.34915 trainer/policy/mean Mean 0.0936257 trainer/policy/mean Std 0.613168 trainer/policy/mean Max 0.995126 trainer/policy/mean Min -0.998106 
trainer/policy/normal/std Mean 0.378503 trainer/policy/normal/std Std 0.183084 trainer/policy/normal/std Max 0.977259 trainer/policy/normal/std Min 0.0703468 trainer/policy/normal/log_std Mean -1.12067 trainer/policy/normal/log_std Std 0.588339 trainer/policy/normal/log_std Max -0.0230039 trainer/policy/normal/log_std Min -2.65432 eval/num steps total 379517 eval/num paths total 741 eval/path length Mean 489.5 eval/path length Std 63.5 eval/path length Max 553 eval/path length Min 426 eval/Rewards Mean 3.17025 eval/Rewards Std 0.868061 eval/Rewards Max 5.4959 eval/Rewards Min 0.987002 eval/Returns Mean 1551.84 eval/Returns Std 216.948 eval/Returns Max 1768.79 eval/Returns Min 1334.89 eval/Actions Mean 0.146272 eval/Actions Std 0.583597 eval/Actions Max 0.99837 eval/Actions Min -0.998361 eval/Num Paths 2 eval/Average Returns 1551.84 eval/normalized_score 48.3047 time/evaluation sampling (s) 0.896834 time/logging (s) 0.00364907 time/sampling batch (s) 0.263974 time/saving (s) 0.0030621 time/training (s) 4.15374 time/epoch (s) 5.32126 time/total (s) 33310.8 Epoch -432 ---------------------------------- --------------- 2022-05-10 22:25:50.730032 PDT | [0] Epoch -431 finished ---------------------------------- --------------- epoch -431 replay_buffer/size 999996 trainer/num train calls 570000 trainer/Policy Loss -2.19822 trainer/Log Pis Mean 2.26718 trainer/Log Pis Std 2.54065 trainer/Log Pis Max 9.77043 trainer/Log Pis Min -5.11454 trainer/policy/mean Mean 0.106877 trainer/policy/mean Std 0.617819 trainer/policy/mean Max 0.997749 trainer/policy/mean Min -0.998291 trainer/policy/normal/std Mean 0.372631 trainer/policy/normal/std Std 0.181623 trainer/policy/normal/std Max 0.949065 trainer/policy/normal/std Min 0.0633249 trainer/policy/normal/log_std Mean -1.1383 trainer/policy/normal/log_std Std 0.591952 trainer/policy/normal/log_std Max -0.0522781 trainer/policy/normal/log_std Min -2.75948 eval/num steps total 380017 eval/num paths total 742 eval/path length Mean 500 
eval/path length Std 0 eval/path length Max 500 eval/path length Min 500 eval/Rewards Mean 3.13438 eval/Rewards Std 0.756054 eval/Rewards Max 4.72374 eval/Rewards Min 0.984212 eval/Returns Mean 1567.19 eval/Returns Std 0 eval/Returns Max 1567.19 eval/Returns Min 1567.19 eval/Actions Mean 0.149915 eval/Actions Std 0.593022 eval/Actions Max 0.998152 eval/Actions Min -0.999236 eval/Num Paths 1 eval/Average Returns 1567.19 eval/normalized_score 48.7764 time/evaluation sampling (s) 0.890196 time/logging (s) 0.00224204 time/sampling batch (s) 0.261857 time/saving (s) 0.00304594 time/training (s) 4.14777 time/epoch (s) 5.30511 time/total (s) 33316.1 Epoch -431 ---------------------------------- --------------- 2022-05-10 22:25:56.060103 PDT | [0] Epoch -430 finished ---------------------------------- --------------- epoch -430 replay_buffer/size 999996 trainer/num train calls 571000 trainer/Policy Loss -2.2787 trainer/Log Pis Mean 2.17286 trainer/Log Pis Std 2.58187 trainer/Log Pis Max 10.9986 trainer/Log Pis Min -6.74429 trainer/policy/mean Mean 0.132523 trainer/policy/mean Std 0.627338 trainer/policy/mean Max 0.996551 trainer/policy/mean Min -0.997996 trainer/policy/normal/std Mean 0.376721 trainer/policy/normal/std Std 0.184463 trainer/policy/normal/std Max 0.97835 trainer/policy/normal/std Min 0.061082 trainer/policy/normal/log_std Mean -1.1255 trainer/policy/normal/log_std Std 0.585453 trainer/policy/normal/log_std Max -0.0218876 trainer/policy/normal/log_std Min -2.79554 eval/num steps total 380672 eval/num paths total 743 eval/path length Mean 655 eval/path length Std 0 eval/path length Max 655 eval/path length Min 655 eval/Rewards Mean 3.23179 eval/Rewards Std 0.746065 eval/Rewards Max 4.75905 eval/Rewards Min 0.980976 eval/Returns Mean 2116.82 eval/Returns Std 0 eval/Returns Max 2116.82 eval/Returns Min 2116.82 eval/Actions Mean 0.160695 eval/Actions Std 0.601739 eval/Actions Max 0.999141 eval/Actions Min -0.998348 eval/Num Paths 1 eval/Average Returns 2116.82 
eval/normalized_score 65.6643 time/evaluation sampling (s) 0.892362 time/logging (s) 0.00275097 time/sampling batch (s) 0.26248 time/saving (s) 0.00318792 time/training (s) 4.14794 time/epoch (s) 5.30872 time/total (s) 33321.4 Epoch -430 ---------------------------------- --------------- 2022-05-10 22:26:01.426027 PDT | [0] Epoch -429 finished ---------------------------------- --------------- epoch -429 replay_buffer/size 999996 trainer/num train calls 572000 trainer/Policy Loss -2.24641 trainer/Log Pis Mean 2.34677 trainer/Log Pis Std 2.52317 trainer/Log Pis Max 9.88663 trainer/Log Pis Min -4.97716 trainer/policy/mean Mean 0.163967 trainer/policy/mean Std 0.615364 trainer/policy/mean Max 0.997125 trainer/policy/mean Min -0.997696 trainer/policy/normal/std Mean 0.378037 trainer/policy/normal/std Std 0.180682 trainer/policy/normal/std Max 0.970703 trainer/policy/normal/std Min 0.069427 trainer/policy/normal/log_std Mean -1.11914 trainer/policy/normal/log_std Std 0.58325 trainer/policy/normal/log_std Max -0.0297344 trainer/policy/normal/log_std Min -2.66748 eval/num steps total 381313 eval/num paths total 744 eval/path length Mean 641 eval/path length Std 0 eval/path length Max 641 eval/path length Min 641 eval/Rewards Mean 3.21998 eval/Rewards Std 0.737385 eval/Rewards Max 4.72055 eval/Rewards Min 0.982032 eval/Returns Mean 2064.01 eval/Returns Std 0 eval/Returns Max 2064.01 eval/Returns Min 2064.01 eval/Actions Mean 0.148787 eval/Actions Std 0.605225 eval/Actions Max 0.998297 eval/Actions Min -0.998107 eval/Num Paths 1 eval/Average Returns 2064.01 eval/normalized_score 64.0415 time/evaluation sampling (s) 0.891896 time/logging (s) 0.00276325 time/sampling batch (s) 0.265313 time/saving (s) 0.00296838 time/training (s) 4.18058 time/epoch (s) 5.34352 time/total (s) 33326.8 Epoch -429 ---------------------------------- --------------- 2022-05-10 22:26:06.785266 PDT | [0] Epoch -428 finished ---------------------------------- --------------- epoch -428 
replay_buffer/size 999996 trainer/num train calls 573000 trainer/Policy Loss -2.24693 trainer/Log Pis Mean 2.19795 trainer/Log Pis Std 2.508 trainer/Log Pis Max 10.2003 trainer/Log Pis Min -4.21329 trainer/policy/mean Mean 0.122961 trainer/policy/mean Std 0.619659 trainer/policy/mean Max 0.997341 trainer/policy/mean Min -0.996716 trainer/policy/normal/std Mean 0.374994 trainer/policy/normal/std Std 0.179361 trainer/policy/normal/std Max 0.950285 trainer/policy/normal/std Min 0.0672919 trainer/policy/normal/log_std Mean -1.12669 trainer/policy/normal/log_std Std 0.581539 trainer/policy/normal/log_std Max -0.0509935 trainer/policy/normal/log_std Min -2.69872 eval/num steps total 382274 eval/num paths total 746 eval/path length Mean 480.5 eval/path length Std 39.5 eval/path length Max 520 eval/path length Min 441 eval/Rewards Mean 3.20417 eval/Rewards Std 0.878419 eval/Rewards Max 5.54705 eval/Rewards Min 0.97825 eval/Returns Mean 1539.6 eval/Returns Std 127.758 eval/Returns Max 1667.36 eval/Returns Min 1411.84 eval/Actions Mean 0.141257 eval/Actions Std 0.579121 eval/Actions Max 0.998429 eval/Actions Min -0.997984 eval/Num Paths 2 eval/Average Returns 1539.6 eval/normalized_score 47.9287 time/evaluation sampling (s) 0.8986 time/logging (s) 0.00364821 time/sampling batch (s) 0.263892 time/saving (s) 0.00304365 time/training (s) 4.16913 time/epoch (s) 5.33831 time/total (s) 33332.1 Epoch -428 ---------------------------------- --------------- 2022-05-10 22:26:12.153447 PDT | [0] Epoch -427 finished ---------------------------------- --------------- epoch -427 replay_buffer/size 999996 trainer/num train calls 574000 trainer/Policy Loss -2.30323 trainer/Log Pis Mean 2.3571 trainer/Log Pis Std 2.70086 trainer/Log Pis Max 10.9734 trainer/Log Pis Min -4.94628 trainer/policy/mean Mean 0.169265 trainer/policy/mean Std 0.617884 trainer/policy/mean Max 0.998619 trainer/policy/mean Min -0.998676 trainer/policy/normal/std Mean 0.387268 trainer/policy/normal/std Std 0.197781 
trainer/policy/normal/std Max 1.06986 trainer/policy/normal/std Min 0.0607462 trainer/policy/normal/log_std Mean -1.11772 trainer/policy/normal/log_std Std 0.629401 trainer/policy/normal/log_std Max 0.0675295 trainer/policy/normal/log_std Min -2.80105 eval/num steps total 382792 eval/num paths total 747 eval/path length Mean 518 eval/path length Std 0 eval/path length Max 518 eval/path length Min 518 eval/Rewards Mean 3.15629 eval/Rewards Std 0.818681 eval/Rewards Max 5.35081 eval/Rewards Min 0.982876 eval/Returns Mean 1634.96 eval/Returns Std 0 eval/Returns Max 1634.96 eval/Returns Min 1634.96 eval/Actions Mean 0.152759 eval/Actions Std 0.592299 eval/Actions Max 0.998209 eval/Actions Min -0.99775 eval/Num Paths 1 eval/Average Returns 1634.96 eval/normalized_score 50.8586 time/evaluation sampling (s) 0.874101 time/logging (s) 0.00241364 time/sampling batch (s) 0.264046 time/saving (s) 0.00298884 time/training (s) 4.20099 time/epoch (s) 5.34454 time/total (s) 33337.5 Epoch -427 ---------------------------------- --------------- 2022-05-10 22:26:17.474522 PDT | [0] Epoch -426 finished ---------------------------------- --------------- epoch -426 replay_buffer/size 999996 trainer/num train calls 575000 trainer/Policy Loss -2.0805 trainer/Log Pis Mean 2.1122 trainer/Log Pis Std 2.562 trainer/Log Pis Max 11.8695 trainer/Log Pis Min -6.51863 trainer/policy/mean Mean 0.146448 trainer/policy/mean Std 0.610878 trainer/policy/mean Max 0.997651 trainer/policy/mean Min -0.998237 trainer/policy/normal/std Mean 0.376296 trainer/policy/normal/std Std 0.17701 trainer/policy/normal/std Max 0.90092 trainer/policy/normal/std Min 0.0645132 trainer/policy/normal/log_std Mean -1.12004 trainer/policy/normal/log_std Std 0.576783 trainer/policy/normal/log_std Max -0.104339 trainer/policy/normal/log_std Min -2.74088 eval/num steps total 383514 eval/num paths total 748 eval/path length Mean 722 eval/path length Std 0 eval/path length Max 722 eval/path length Min 722 eval/Rewards Mean 3.21956 
eval/Rewards Std 0.726467 eval/Rewards Max 4.77529 eval/Rewards Min 0.982914 eval/Returns Mean 2324.52 eval/Returns Std 0 eval/Returns Max 2324.52 eval/Returns Min 2324.52 eval/Actions Mean 0.15435 eval/Actions Std 0.603198 eval/Actions Max 0.99798 eval/Actions Min -0.998734 eval/Num Paths 1 eval/Average Returns 2324.52 eval/normalized_score 72.0462 time/evaluation sampling (s) 0.875931 time/logging (s) 0.00292973 time/sampling batch (s) 0.264058 time/saving (s) 0.0029759 time/training (s) 4.15367 time/epoch (s) 5.29956 time/total (s) 33342.8 Epoch -426 ---------------------------------- --------------- 2022-05-10 22:26:22.859516 PDT | [0] Epoch -425 finished ---------------------------------- --------------- epoch -425 replay_buffer/size 999996 trainer/num train calls 576000 trainer/Policy Loss -2.23394 trainer/Log Pis Mean 2.19104 trainer/Log Pis Std 2.56327 trainer/Log Pis Max 11.2107 trainer/Log Pis Min -6.14151 trainer/policy/mean Mean 0.128744 trainer/policy/mean Std 0.616957 trainer/policy/mean Max 0.997885 trainer/policy/mean Min -0.997534 trainer/policy/normal/std Mean 0.381052 trainer/policy/normal/std Std 0.181178 trainer/policy/normal/std Max 0.893006 trainer/policy/normal/std Min 0.0656792 trainer/policy/normal/log_std Mean -1.11154 trainer/policy/normal/log_std Std 0.585735 trainer/policy/normal/log_std Max -0.113163 trainer/policy/normal/log_std Min -2.72297 eval/num steps total 384057 eval/num paths total 749 eval/path length Mean 543 eval/path length Std 0 eval/path length Max 543 eval/path length Min 543 eval/Rewards Mean 3.20416 eval/Rewards Std 0.837187 eval/Rewards Max 4.93212 eval/Rewards Min 0.982612 eval/Returns Mean 1739.86 eval/Returns Std 0 eval/Returns Max 1739.86 eval/Returns Min 1739.86 eval/Actions Mean 0.145748 eval/Actions Std 0.571385 eval/Actions Max 0.9985 eval/Actions Min -0.998666 eval/Num Paths 1 eval/Average Returns 1739.86 eval/normalized_score 54.0818 time/evaluation sampling (s) 0.871438 time/logging (s) 0.00236647 
time/sampling batch (s) 0.268342 time/saving (s) 0.00293942 time/training (s) 4.21709 time/epoch (s) 5.36217 time/total (s) 33348.1 Epoch -425 ---------------------------------- --------------- 2022-05-10 22:26:28.257839 PDT | [0] Epoch -424 finished ---------------------------------- --------------- epoch -424 replay_buffer/size 999996 trainer/num train calls 577000 trainer/Policy Loss -2.4303 trainer/Log Pis Mean 2.30772 trainer/Log Pis Std 2.58782 trainer/Log Pis Max 12.5251 trainer/Log Pis Min -3.7634 trainer/policy/mean Mean 0.124702 trainer/policy/mean Std 0.633025 trainer/policy/mean Max 0.997831 trainer/policy/mean Min -0.998857 trainer/policy/normal/std Mean 0.373962 trainer/policy/normal/std Std 0.178126 trainer/policy/normal/std Max 0.996099 trainer/policy/normal/std Min 0.0686859 trainer/policy/normal/log_std Mean -1.12909 trainer/policy/normal/log_std Std 0.581409 trainer/policy/normal/log_std Max -0.00390893 trainer/policy/normal/log_std Min -2.67821 eval/num steps total 384485 eval/num paths total 750 eval/path length Mean 428 eval/path length Std 0 eval/path length Max 428 eval/path length Min 428 eval/Rewards Mean 3.16842 eval/Rewards Std 0.87437 eval/Rewards Max 5.44441 eval/Rewards Min 0.987585 eval/Returns Mean 1356.08 eval/Returns Std 0 eval/Returns Max 1356.08 eval/Returns Min 1356.08 eval/Actions Mean 0.143008 eval/Actions Std 0.584026 eval/Actions Max 0.998623 eval/Actions Min -0.997159 eval/Num Paths 1 eval/Average Returns 1356.08 eval/normalized_score 42.2899 time/evaluation sampling (s) 0.878608 time/logging (s) 0.00213774 time/sampling batch (s) 0.26786 time/saving (s) 0.00297498 time/training (s) 4.2241 time/epoch (s) 5.37568 time/total (s) 33353.5 Epoch -424 ---------------------------------- --------------- 2022-05-10 22:26:33.615483 PDT | [0] Epoch -423 finished ---------------------------------- --------------- epoch -423 replay_buffer/size 999996 trainer/num train calls 578000 trainer/Policy Loss -1.97962 trainer/Log Pis Mean 
1.79412 trainer/Log Pis Std 2.5141 trainer/Log Pis Max 9.63845 trainer/Log Pis Min -7.49895 trainer/policy/mean Mean 0.109463 trainer/policy/mean Std 0.604746 trainer/policy/mean Max 0.996897 trainer/policy/mean Min -0.998587 trainer/policy/normal/std Mean 0.372443 trainer/policy/normal/std Std 0.172628 trainer/policy/normal/std Max 0.934863 trainer/policy/normal/std Min 0.0680364 trainer/policy/normal/log_std Mean -1.12158 trainer/policy/normal/log_std Std 0.554873 trainer/policy/normal/log_std Max -0.0673556 trainer/policy/normal/log_std Min -2.68771 eval/num steps total 384944 eval/num paths total 751 eval/path length Mean 459 eval/path length Std 0 eval/path length Max 459 eval/path length Min 459 eval/Rewards Mean 3.20137 eval/Rewards Std 0.888506 eval/Rewards Max 5.06219 eval/Rewards Min 0.984902 eval/Returns Mean 1469.43 eval/Returns Std 0 eval/Returns Max 1469.43 eval/Returns Min 1469.43 eval/Actions Mean 0.130903 eval/Actions Std 0.572581 eval/Actions Max 0.997438 eval/Actions Min -0.998413 eval/Num Paths 1 eval/Average Returns 1469.43 eval/normalized_score 45.7726 time/evaluation sampling (s) 0.87966 time/logging (s) 0.00221128 time/sampling batch (s) 0.266905 time/saving (s) 0.00296149 time/training (s) 4.18349 time/epoch (s) 5.33523 time/total (s) 33358.8 Epoch -423 ---------------------------------- --------------- 2022-05-10 22:26:39.092743 PDT | [0] Epoch -422 finished ---------------------------------- --------------- epoch -422 replay_buffer/size 999996 trainer/num train calls 579000 trainer/Policy Loss -2.1808 trainer/Log Pis Mean 2.36887 trainer/Log Pis Std 2.49266 trainer/Log Pis Max 9.97066 trainer/Log Pis Min -4.58457 trainer/policy/mean Mean 0.145811 trainer/policy/mean Std 0.617232 trainer/policy/mean Max 0.995319 trainer/policy/mean Min -0.998012 trainer/policy/normal/std Mean 0.378982 trainer/policy/normal/std Std 0.183464 trainer/policy/normal/std Max 0.960963 trainer/policy/normal/std Min 0.0660889 trainer/policy/normal/log_std Mean 
-1.11996 trainer/policy/normal/log_std Std 0.589898 trainer/policy/normal/log_std Max -0.0398199 trainer/policy/normal/log_std Min -2.71676 eval/num steps total 385392 eval/num paths total 752 eval/path length Mean 448 eval/path length Std 0 eval/path length Max 448 eval/path length Min 448 eval/Rewards Mean 3.12269 eval/Rewards Std 0.898429 eval/Rewards Max 5.38554 eval/Rewards Min 0.988078 eval/Returns Mean 1398.97 eval/Returns Std 0 eval/Returns Max 1398.97 eval/Returns Min 1398.97 eval/Actions Mean 0.13233 eval/Actions Std 0.571602 eval/Actions Max 0.998182 eval/Actions Min -0.998128 eval/Num Paths 1 eval/Average Returns 1398.97 eval/normalized_score 43.6075 time/evaluation sampling (s) 0.950598 time/logging (s) 0.00218612 time/sampling batch (s) 0.268298 time/saving (s) 0.00305707 time/training (s) 4.23071 time/epoch (s) 5.45485 time/total (s) 33364.3 Epoch -422 ---------------------------------- --------------- 2022-05-10 22:26:44.471074 PDT | [0] Epoch -421 finished ---------------------------------- --------------- epoch -421 replay_buffer/size 999996 trainer/num train calls 580000 trainer/Policy Loss -2.18749 trainer/Log Pis Mean 2.25921 trainer/Log Pis Std 2.67928 trainer/Log Pis Max 13.3385 trainer/Log Pis Min -4.84727 trainer/policy/mean Mean 0.112679 trainer/policy/mean Std 0.624356 trainer/policy/mean Max 0.997106 trainer/policy/mean Min -0.99973 trainer/policy/normal/std Mean 0.373983 trainer/policy/normal/std Std 0.179879 trainer/policy/normal/std Max 1.05325 trainer/policy/normal/std Min 0.064958 trainer/policy/normal/log_std Mean -1.1281 trainer/policy/normal/log_std Std 0.577332 trainer/policy/normal/log_std Max 0.0518774 trainer/policy/normal/log_std Min -2.73401 eval/num steps total 385913 eval/num paths total 753 eval/path length Mean 521 eval/path length Std 0 eval/path length Max 521 eval/path length Min 521 eval/Rewards Mean 3.15024 eval/Rewards Std 0.798541 eval/Rewards Max 5.37647 eval/Rewards Min 0.978973 eval/Returns Mean 1641.27 
eval/Returns Std 0 eval/Returns Max 1641.27 eval/Returns Min 1641.27 eval/Actions Mean 0.158638 eval/Actions Std 0.58828 eval/Actions Max 0.998292 eval/Actions Min -0.996552 eval/Num Paths 1 eval/Average Returns 1641.27 eval/normalized_score 51.0527 time/evaluation sampling (s) 0.882138 time/logging (s) 0.00233867 time/sampling batch (s) 0.267484 time/saving (s) 0.00296935 time/training (s) 4.20114 time/epoch (s) 5.35607 time/total (s) 33369.7 Epoch -421 ---------------------------------- --------------- 2022-05-10 22:26:49.860119 PDT | [0] Epoch -420 finished ---------------------------------- --------------- epoch -420 replay_buffer/size 999996 trainer/num train calls 581000 trainer/Policy Loss -2.21823 trainer/Log Pis Mean 2.2801 trainer/Log Pis Std 2.52474 trainer/Log Pis Max 10.4046 trainer/Log Pis Min -5.97585 trainer/policy/mean Mean 0.160587 trainer/policy/mean Std 0.619758 trainer/policy/mean Max 0.998509 trainer/policy/mean Min -0.997461 trainer/policy/normal/std Mean 0.365637 trainer/policy/normal/std Std 0.174067 trainer/policy/normal/std Max 0.913636 trainer/policy/normal/std Min 0.0693549 trainer/policy/normal/log_std Mean -1.15218 trainer/policy/normal/log_std Std 0.582878 trainer/policy/normal/log_std Max -0.0903231 trainer/policy/normal/log_std Min -2.66852 eval/num steps total 386566 eval/num paths total 754 eval/path length Mean 653 eval/path length Std 0 eval/path length Max 653 eval/path length Min 653 eval/Rewards Mean 3.14181 eval/Rewards Std 0.694434 eval/Rewards Max 4.53977 eval/Rewards Min 0.980828 eval/Returns Mean 2051.6 eval/Returns Std 0 eval/Returns Max 2051.6 eval/Returns Min 2051.6 eval/Actions Mean 0.150719 eval/Actions Std 0.605939 eval/Actions Max 0.998336 eval/Actions Min -0.999021 eval/Num Paths 1 eval/Average Returns 2051.6 eval/normalized_score 63.6604 time/evaluation sampling (s) 0.877161 time/logging (s) 0.00278252 time/sampling batch (s) 0.267091 time/saving (s) 0.00297421 time/training (s) 4.2171 time/epoch (s) 5.36711 
time/total (s)  33375
Epoch -420
----------------------------------  ---------------
2022-05-10 22:26:55.242884 PDT | [0] Epoch -419 finished
----------------------------------  ---------------
epoch  -419
replay_buffer/size  999996
trainer/num train calls  582000
trainer/Policy Loss  -2.25983
trainer/Log Pis Mean  1.90581
trainer/Log Pis Std  2.42162
trainer/Log Pis Max  9.34325
trainer/Log Pis Min  -4.80274
trainer/policy/mean Mean  0.100765
trainer/policy/mean Std  0.612395
trainer/policy/mean Max  0.998756
trainer/policy/mean Min  -0.998084
trainer/policy/normal/std Mean  0.381274
trainer/policy/normal/std Std  0.175259
trainer/policy/normal/std Max  0.991666
trainer/policy/normal/std Min  0.0731273
trainer/policy/normal/log_std Mean  -1.09474
trainer/policy/normal/log_std Std  0.545813
trainer/policy/normal/log_std Max  -0.0083689
trainer/policy/normal/log_std Min  -2.61555
eval/num steps total  387141
eval/num paths total  755
eval/path length Mean  575
eval/path length Std  0
eval/path length Max  575
eval/path length Min  575
eval/Rewards Mean  3.11508
eval/Rewards Std  0.719805
eval/Rewards Max  4.54658
eval/Rewards Min  0.986988
eval/Returns Mean  1791.17
eval/Returns Std  0
eval/Returns Max  1791.17
eval/Returns Min  1791.17
eval/Actions Mean  0.151147
eval/Actions Std  0.601445
eval/Actions Max  0.998076
eval/Actions Min  -0.996135
eval/Num Paths  1
eval/Average Returns  1791.17
eval/normalized_score  55.6584
time/evaluation sampling (s)  0.882924
time/logging (s)  0.00254828
time/sampling batch (s)  0.267378
time/saving (s)  0.00308369
time/training (s)  4.20449
time/epoch (s)  5.36043
time/total (s)  33380.4
Epoch -419
----------------------------------  ---------------
2022-05-10 22:27:00.650722 PDT | [0] Epoch -418 finished
----------------------------------  ---------------
epoch  -418
replay_buffer/size  999996
trainer/num train calls  583000
trainer/Policy Loss  -2.31692
trainer/Log Pis Mean  2.32486
trainer/Log Pis Std  2.65047
trainer/Log Pis Max  14.9221
trainer/Log Pis Min  -6.25686
trainer/policy/mean Mean  0.148986
trainer/policy/mean Std  0.615124
trainer/policy/mean Max  0.997398
trainer/policy/mean Min  -0.999777
trainer/policy/normal/std Mean  0.371423
trainer/policy/normal/std Std  0.185848
trainer/policy/normal/std Max  1.51898
trainer/policy/normal/std Min  0.0696625
trainer/policy/normal/log_std Mean  -1.14727
trainer/policy/normal/log_std Std  0.601783
trainer/policy/normal/log_std Max  0.41804
trainer/policy/normal/log_std Min  -2.66409
eval/num steps total  387667
eval/num paths total  756
eval/path length Mean  526
eval/path length Std  0
eval/path length Max  526
eval/path length Min  526
eval/Rewards Mean  3.17054
eval/Rewards Std  0.808265
eval/Rewards Max  5.46139
eval/Rewards Min  0.984989
eval/Returns Mean  1667.71
eval/Returns Std  0
eval/Returns Max  1667.71
eval/Returns Min  1667.71
eval/Actions Mean  0.149235
eval/Actions Std  0.585544
eval/Actions Max  0.99807
eval/Actions Min  -0.997189
eval/Num Paths  1
eval/Average Returns  1667.71
eval/normalized_score  51.8648
time/evaluation sampling (s)  0.894888
time/logging (s)  0.00240869
time/sampling batch (s)  0.273136
time/saving (s)  0.00312073
time/training (s)  4.21166
time/epoch (s)  5.38522
time/total (s)  33385.8
Epoch -418
----------------------------------  ---------------
2022-05-10 22:27:06.035800 PDT | [0] Epoch -417 finished
----------------------------------  ---------------
epoch  -417
replay_buffer/size  999996
trainer/num train calls  584000
trainer/Policy Loss  -2.31146
trainer/Log Pis Mean  2.41209
trainer/Log Pis Std  2.56789
trainer/Log Pis Max  12.8212
trainer/Log Pis Min  -4.33964
trainer/policy/mean Mean  0.146356
trainer/policy/mean Std  0.618806
trainer/policy/mean Max  0.997203
trainer/policy/mean Min  -0.997879
trainer/policy/normal/std Mean  0.371483
trainer/policy/normal/std Std  0.178389
trainer/policy/normal/std Max  0.973628
trainer/policy/normal/std Min  0.0694706
trainer/policy/normal/log_std Mean  -1.13601
trainer/policy/normal/log_std Std  0.581092
trainer/policy/normal/log_std Max  -0.0267255
trainer/policy/normal/log_std Min  -2.66685
eval/num steps total  388447
eval/num paths total  758
eval/path length Mean  390
eval/path length Std  9
eval/path length Max  399
eval/path length Min  381
eval/Rewards Mean  3.11436
eval/Rewards Std  0.91655
eval/Rewards Max  4.80496
eval/Rewards Min  0.980555
eval/Returns Mean  1214.6
eval/Returns Std  18.6337
eval/Returns Max  1233.23
eval/Returns Min  1195.97
eval/Actions Mean  0.142206
eval/Actions Std  0.584227
eval/Actions Max  0.996245
eval/Actions Min  -0.998754
eval/Num Paths  2
eval/Average Returns  1214.6
eval/normalized_score  37.9427
time/evaluation sampling (s)  0.899038
time/logging (s)  0.00297577
time/sampling batch (s)  0.266893
time/saving (s)  0.00300465
time/training (s)  4.19142
time/epoch (s)  5.36333
time/total (s)  33391.2
Epoch -417
----------------------------------  ---------------
2022-05-10 22:27:11.424460 PDT | [0] Epoch -416 finished
----------------------------------  ---------------
epoch  -416
replay_buffer/size  999996
trainer/num train calls  585000
trainer/Policy Loss  -2.17493
trainer/Log Pis Mean  2.14406
trainer/Log Pis Std  2.53306
trainer/Log Pis Max  9.64303
trainer/Log Pis Min  -6.30256
trainer/policy/mean Mean  0.131226
trainer/policy/mean Std  0.610854
trainer/policy/mean Max  0.996259
trainer/policy/mean Min  -0.998749
trainer/policy/normal/std Mean  0.377588
trainer/policy/normal/std Std  0.183117
trainer/policy/normal/std Max  0.933175
trainer/policy/normal/std Min  0.0657378
trainer/policy/normal/log_std Mean  -1.12669
trainer/policy/normal/log_std Std  0.597529
trainer/policy/normal/log_std Max  -0.0691622
trainer/policy/normal/log_std Min  -2.72208
eval/num steps total  389175
eval/num paths total  759
eval/path length Mean  728
eval/path length Std  0
eval/path length Max  728
eval/path length Min  728
eval/Rewards Mean  3.22898
eval/Rewards Std  0.666841
eval/Rewards Max  4.82084
eval/Rewards Min  0.983257
eval/Returns Mean  2350.69
eval/Returns Std  0
eval/Returns Max  2350.69
eval/Returns Min  2350.69
eval/Actions Mean  0.142059
eval/Actions Std  0.601351
eval/Actions Max  0.997395
eval/Actions Min  -0.998094
eval/Num Paths  1
eval/Average Returns  2350.69
eval/normalized_score  72.8503
time/evaluation sampling (s)  0.904911
time/logging (s)  0.00291455
time/sampling batch (s)  0.267444
time/saving (s)  0.00316463
time/training (s)  4.18776
time/epoch (s)  5.36619
time/total (s)  33396.5
Epoch -416
----------------------------------  ---------------
2022-05-10 22:27:16.801430 PDT | [0] Epoch -415 finished
----------------------------------  ---------------
epoch  -415
replay_buffer/size  999996
trainer/num train calls  586000
trainer/Policy Loss  -2.00468
trainer/Log Pis Mean  1.89809
trainer/Log Pis Std  2.47476
trainer/Log Pis Max  12.4617
trainer/Log Pis Min  -5.56443
trainer/policy/mean Mean  0.117088
trainer/policy/mean Std  0.606224
trainer/policy/mean Max  0.995781
trainer/policy/mean Min  -0.997581
trainer/policy/normal/std Mean  0.37847
trainer/policy/normal/std Std  0.177059
trainer/policy/normal/std Max  0.982242
trainer/policy/normal/std Min  0.0633354
trainer/policy/normal/log_std Mean  -1.10982
trainer/policy/normal/log_std Std  0.564747
trainer/policy/normal/log_std Max  -0.0179172
trainer/policy/normal/log_std Min  -2.75931
eval/num steps total  389841
eval/num paths total  760
eval/path length Mean  666
eval/path length Std  0
eval/path length Max  666
eval/path length Min  666
eval/Rewards Mean  3.21926
eval/Rewards Std  0.675058
eval/Rewards Max  4.70981
eval/Rewards Min  0.989413
eval/Returns Mean  2144.02
eval/Returns Std  0
eval/Returns Max  2144.02
eval/Returns Min  2144.02
eval/Actions Mean  0.149392
eval/Actions Std  0.607682
eval/Actions Max  0.997297
eval/Actions Min  -0.997525
eval/Num Paths  1
eval/Average Returns  2144.02
eval/normalized_score  66.5002
time/evaluation sampling (s)  0.90276
time/logging (s)  0.00279946
time/sampling batch (s)  0.266556
time/saving (s)  0.00317108
time/training (s)  4.17898
time/epoch (s)  5.35427
time/total (s)  33401.9
Epoch -415
----------------------------------  ---------------
2022-05-10 22:27:22.215431 PDT | [0] Epoch -414 finished
----------------------------------  ---------------
epoch  -414
replay_buffer/size  999996
trainer/num train calls  587000
trainer/Policy Loss  -2.35843
trainer/Log Pis Mean  2.15358
trainer/Log Pis Std  2.44646
trainer/Log Pis Max  9.07772
trainer/Log Pis Min  -4.01523
trainer/policy/mean Mean  0.159665
trainer/policy/mean Std  0.615139
trainer/policy/mean Max  0.997554
trainer/policy/mean Min  -0.998208
trainer/policy/normal/std Mean  0.383352
trainer/policy/normal/std Std  0.185044
trainer/policy/normal/std Max  0.91884
trainer/policy/normal/std Min  0.0662164
trainer/policy/normal/log_std Mean  -1.11289
trainer/policy/normal/log_std Std  0.603006
trainer/policy/normal/log_std Max  -0.0846434
trainer/policy/normal/log_std Min  -2.71483
eval/num steps total  390513
eval/num paths total  761
eval/path length Mean  672
eval/path length Std  0
eval/path length Max  672
eval/path length Min  672
eval/Rewards Mean  3.17913
eval/Rewards Std  0.703779
eval/Rewards Max  4.91107
eval/Rewards Min  0.984358
eval/Returns Mean  2136.37
eval/Returns Std  0
eval/Returns Max  2136.37
eval/Returns Min  2136.37
eval/Actions Mean  0.160793
eval/Actions Std  0.602546
eval/Actions Max  0.998512
eval/Actions Min  -0.997714
eval/Num Paths  1
eval/Average Returns  2136.37
eval/normalized_score  66.2651
time/evaluation sampling (s)  0.886648
time/logging (s)  0.00280785
time/sampling batch (s)  0.26928
time/saving (s)  0.00313535
time/training (s)  4.22939
time/epoch (s)  5.39126
time/total (s)  33407.3
Epoch -414
----------------------------------  ---------------
2022-05-10 22:27:27.587791 PDT | [0] Epoch -413 finished
----------------------------------  ---------------
epoch  -413
replay_buffer/size  999996
trainer/num train calls  588000
trainer/Policy Loss  -2.26666
trainer/Log Pis Mean  2.242
trainer/Log Pis Std  2.64469
trainer/Log Pis Max  10.1759
trainer/Log Pis Min  -8.23838
trainer/policy/mean Mean  0.157112
trainer/policy/mean Std  0.620012
trainer/policy/mean Max  0.997515
trainer/policy/mean Min  -0.998492
trainer/policy/normal/std Mean  0.382188
trainer/policy/normal/std Std  0.179364
trainer/policy/normal/std Max  0.89827
trainer/policy/normal/std Min  0.0675767
trainer/policy/normal/log_std Mean  -1.10332
trainer/policy/normal/log_std Std  0.573452
trainer/policy/normal/log_std Max  -0.107285
trainer/policy/normal/log_std Min  -2.69449
eval/num steps total  391158
eval/num paths total  762
eval/path length Mean  645
eval/path length Std  0
eval/path length Max  645
eval/path length Min  645
eval/Rewards Mean  3.24232
eval/Rewards Std  0.751818
eval/Rewards Max  4.86274
eval/Rewards Min  0.984918
eval/Returns Mean  2091.3
eval/Returns Std  0
eval/Returns Max  2091.3
eval/Returns Min  2091.3
eval/Actions Mean  0.160206
eval/Actions Std  0.58895
eval/Actions Max  0.998273
eval/Actions Min  -0.998956
eval/Num Paths  1
eval/Average Returns  2091.3
eval/normalized_score  64.8801
time/evaluation sampling (s)  0.877337
time/logging (s)  0.00267493
time/sampling batch (s)  0.266801
time/saving (s)  0.00300307
time/training (s)  4.19989
time/epoch (s)  5.34971
time/total (s)  33412.6
Epoch -413
----------------------------------  ---------------
2022-05-10 22:27:32.953894 PDT | [0] Epoch -412 finished
----------------------------------  ---------------
epoch  -412
replay_buffer/size  999996
trainer/num train calls  589000
trainer/Policy Loss  -2.31824
trainer/Log Pis Mean  2.32357
trainer/Log Pis Std  2.6501
trainer/Log Pis Max  9.77784
trainer/Log Pis Min  -4.20817
trainer/policy/mean Mean  0.126019
trainer/policy/mean Std  0.624927
trainer/policy/mean Max  0.998951
trainer/policy/mean Min  -0.998002
trainer/policy/normal/std Mean  0.374991
trainer/policy/normal/std Std  0.179862
trainer/policy/normal/std Max  0.982553
trainer/policy/normal/std Min  0.0704257
trainer/policy/normal/log_std Mean  -1.12636
trainer/policy/normal/log_std Std  0.579923
trainer/policy/normal/log_std Max  -0.0176012
trainer/policy/normal/log_std Min  -2.6532
eval/num steps total  391742
eval/num paths total  763
eval/path length Mean  584
eval/path length Std  0
eval/path length Max  584
eval/path length Min  584
eval/Rewards Mean  3.24054
eval/Rewards Std  0.747696
eval/Rewards Max  4.88264
eval/Rewards Min  0.984074
eval/Returns Mean  1892.48
eval/Returns Std  0
eval/Returns Max  1892.48
eval/Returns Min  1892.48
eval/Actions Mean  0.167664
eval/Actions Std  0.600441
eval/Actions Max  0.99882
eval/Actions Min  -0.998538
eval/Num Paths  1
eval/Average Returns  1892.48
eval/normalized_score  58.7711
time/evaluation sampling (s)  0.876407
time/logging (s)  0.00240597
time/sampling batch (s)  0.267243
time/saving (s)  0.0029385
time/training (s)  4.19417
time/epoch (s)  5.34316
time/total (s)  33418
Epoch -412
----------------------------------  ---------------
2022-05-10 22:27:38.349290 PDT | [0] Epoch -411 finished
----------------------------------  ---------------
epoch  -411
replay_buffer/size  999996
trainer/num train calls  590000
trainer/Policy Loss  -2.16766
trainer/Log Pis Mean  2.20412
trainer/Log Pis Std  2.55388
trainer/Log Pis Max  9.54896
trainer/Log Pis Min  -4.57541
trainer/policy/mean Mean  0.166335
trainer/policy/mean Std  0.61549
trainer/policy/mean Max  0.998108
trainer/policy/mean Min  -0.99627
trainer/policy/normal/std Mean  0.389245
trainer/policy/normal/std Std  0.194421
trainer/policy/normal/std Max  1.04065
trainer/policy/normal/std Min  0.0719306
trainer/policy/normal/log_std Mean  -1.10097
trainer/policy/normal/log_std Std  0.603791
trainer/policy/normal/log_std Max  0.0398497
trainer/policy/normal/log_std Min  -2.63205
eval/num steps total  392272
eval/num paths total  764
eval/path length Mean  530
eval/path length Std  0
eval/path length Max  530
eval/path length Min  530
eval/Rewards Mean  3.18489
eval/Rewards Std  0.820365
eval/Rewards Max  5.31713
eval/Rewards Min  0.991344
eval/Returns Mean  1687.99
eval/Returns Std  0
eval/Returns Max  1687.99
eval/Returns Min  1687.99
eval/Actions Mean  0.144578
eval/Actions Std  0.585636
eval/Actions Max  0.998565
eval/Actions Min  -0.997868
eval/Num Paths  1
eval/Average Returns  1687.99
eval/normalized_score  52.4882
time/evaluation sampling (s)  0.880835
time/logging (s)  0.0024656
time/sampling batch (s)  0.267976
time/saving (s)  0.00296835
time/training (s)  4.21853
time/epoch (s)  5.37277
time/total (s)  33423.3
Epoch -411
----------------------------------  ---------------
2022-05-10 22:27:43.690388 PDT | [0] Epoch -410 finished
----------------------------------  ---------------
epoch  -410
replay_buffer/size  999996
trainer/num train calls  591000
trainer/Policy Loss  -2.28871
trainer/Log Pis Mean  2.2591
trainer/Log Pis Std  2.60935
trainer/Log Pis Max  10.8517
trainer/Log Pis Min  -4.5556
trainer/policy/mean Mean  0.117442
trainer/policy/mean Std  0.620389
trainer/policy/mean Max  0.997768
trainer/policy/mean Min  -0.997782
trainer/policy/normal/std Mean  0.379619
trainer/policy/normal/std Std  0.185023
trainer/policy/normal/std Max  0.937191
trainer/policy/normal/std Min  0.0574699
trainer/policy/normal/log_std Mean  -1.12502
trainer/policy/normal/log_std Std  0.606999
trainer/policy/normal/log_std Max  -0.0648681
trainer/policy/normal/log_std Min  -2.85649
eval/num steps total  393234
eval/num paths total  766
eval/path length Mean  481
eval/path length Std  21
eval/path length Max  502
eval/path length Min  460
eval/Rewards Mean  3.18101
eval/Rewards Std  0.820618
eval/Rewards Max  5.1246
eval/Rewards Min  0.98737
eval/Returns Mean  1530.07
eval/Returns Std  52.7318
eval/Returns Max  1582.8
eval/Returns Min  1477.34
eval/Actions Mean  0.147804
eval/Actions Std  0.5851
eval/Actions Max  0.998339
eval/Actions Min  -0.998672
eval/Num Paths  2
eval/Average Returns  1530.07
eval/normalized_score  47.6357
time/evaluation sampling (s)  0.880917
time/logging (s)  0.00355766
time/sampling batch (s)  0.265672
time/saving (s)  0.00292865
time/training (s)  4.1669
time/epoch (s)  5.31997
time/total (s)  33428.7
Epoch -410
----------------------------------  ---------------
2022-05-10 22:27:49.084498 PDT | [0] Epoch -409 finished
----------------------------------  ---------------
epoch  -409
replay_buffer/size  999996
trainer/num train calls  592000
trainer/Policy Loss  -2.13893
trainer/Log Pis Mean  2.10416
trainer/Log Pis Std  2.59416
trainer/Log Pis Max  14.2958
trainer/Log Pis Min  -4.78257
trainer/policy/mean Mean  0.121373
trainer/policy/mean Std  0.612303
trainer/policy/mean Max  0.997298
trainer/policy/mean Min  -0.999101
trainer/policy/normal/std Mean  0.386591
trainer/policy/normal/std Std  0.189154
trainer/policy/normal/std Max  1.06472
trainer/policy/normal/std Min  0.0696463
trainer/policy/normal/log_std Mean  -1.10232
trainer/policy/normal/log_std Std  0.593033
trainer/policy/normal/log_std Max  0.0627137
trainer/policy/normal/log_std Min  -2.66432
eval/num steps total  393803
eval/num paths total  767
eval/path length Mean  569
eval/path length Std  0
eval/path length Max  569
eval/path length Min  569
eval/Rewards Mean  3.25183
eval/Rewards Std  0.791672
eval/Rewards Max  4.7533
eval/Rewards Min  0.986146
eval/Returns Mean  1850.29
eval/Returns Std  0
eval/Returns Max  1850.29
eval/Returns Min  1850.29
eval/Actions Mean  0.153253
eval/Actions Std  0.607697
eval/Actions Max  0.998569
eval/Actions Min  -0.998526
eval/Num Paths  1
eval/Average Returns  1850.29
eval/normalized_score  57.475
time/evaluation sampling (s)  0.878241
time/logging (s)  0.00245088
time/sampling batch (s)  0.267802
time/saving (s)  0.00314554
time/training (s)  4.21893
time/epoch (s)  5.37057
time/total (s)  33434
Epoch -409
----------------------------------  ---------------
2022-05-10 22:27:54.481943 PDT | [0] Epoch -408 finished
----------------------------------  ---------------
epoch  -408
replay_buffer/size  999996
trainer/num train calls  593000
trainer/Policy Loss  -2.20507
trainer/Log Pis Mean  2.14956
trainer/Log Pis Std  2.58083
trainer/Log Pis Max  12.125
trainer/Log Pis Min  -4.91709
trainer/policy/mean Mean  0.135898
trainer/policy/mean Std  0.617373
trainer/policy/mean Max  0.997758
trainer/policy/mean Min  -0.99864
trainer/policy/normal/std Mean  0.378886
trainer/policy/normal/std Std  0.186278
trainer/policy/normal/std Max  0.946991
trainer/policy/normal/std Min  0.0676593
trainer/policy/normal/log_std Mean  -1.12502
trainer/policy/normal/log_std Std  0.598702
trainer/policy/normal/log_std Max  -0.0544653
trainer/policy/normal/log_std Min  -2.69327
eval/num steps total  394766
eval/num paths total  769
eval/path length Mean  481.5
eval/path length Std  17.5
eval/path length Max  499
eval/path length Min  464
eval/Rewards Mean  3.15474
eval/Rewards Std  0.827468
eval/Rewards Max  4.99306
eval/Rewards Min  0.986816
eval/Returns Mean  1519.01
eval/Returns Std  24.5827
eval/Returns Max  1543.59
eval/Returns Min  1494.42
eval/Actions Mean  0.145038
eval/Actions Std  0.582853
eval/Actions Max  0.997892
eval/Actions Min  -0.998655
eval/Num Paths  2
eval/Average Returns  1519.01
eval/normalized_score  47.2959
time/evaluation sampling (s)  0.879554
time/logging (s)  0.0035704
time/sampling batch (s)  0.268479
time/saving (s)  0.00297601
time/training (s)  4.22154
time/epoch (s)  5.37612
time/total (s)  33439.4
Epoch -408
----------------------------------  ---------------
2022-05-10 22:27:59.851456 PDT | [0] Epoch -407 finished
----------------------------------  ---------------
epoch  -407
replay_buffer/size  999996
trainer/num train calls  594000
trainer/Policy Loss  -2.15113
trainer/Log Pis Mean  2.27792
trainer/Log Pis Std  2.51402
trainer/Log Pis Max  10.0442
trainer/Log Pis Min  -4.59911
trainer/policy/mean Mean  0.126573
trainer/policy/mean Std  0.626175
trainer/policy/mean Max  0.998973
trainer/policy/mean Min  -0.998487
trainer/policy/normal/std Mean  0.37097
trainer/policy/normal/std Std  0.177972
trainer/policy/normal/std Max  0.893605
trainer/policy/normal/std Min  0.0714344
trainer/policy/normal/log_std Mean  -1.13543
trainer/policy/normal/log_std Std  0.574189
trainer/policy/normal/log_std Max  -0.112492
trainer/policy/normal/log_std Min  -2.63898
eval/num steps total  395176
eval/num paths total  770
eval/path length Mean  410
eval/path length Std  0
eval/path length Max  410
eval/path length Min  410
eval/Rewards Mean  3.0674
eval/Rewards Std  0.818032
eval/Rewards Max  4.73752
eval/Rewards Min  0.980385
eval/Returns Mean  1257.63
eval/Returns Std  0
eval/Returns Max  1257.63
eval/Returns Min  1257.63
eval/Actions Mean  0.148207
eval/Actions Std  0.596593
eval/Actions Max  0.997351
eval/Actions Min  -0.99662
eval/Num Paths  1
eval/Average Returns  1257.63
eval/normalized_score  39.2649
time/evaluation sampling (s)  0.880865
time/logging (s)  0.00200695
time/sampling batch (s)  0.266482
time/saving (s)  0.00301894
time/training (s)  4.1931
time/epoch (s)  5.34548
time/total (s)  33444.8
Epoch -407
----------------------------------  ---------------
2022-05-10 22:28:05.196906 PDT | [0] Epoch -406 finished
----------------------------------  ---------------
epoch  -406
replay_buffer/size  999996
trainer/num train calls  595000
trainer/Policy Loss  -2.22195
trainer/Log Pis Mean  2.19332
trainer/Log Pis Std  2.52955
trainer/Log Pis Max  8.87859
trainer/Log Pis Min  -6.06823
trainer/policy/mean Mean  0.13417
trainer/policy/mean Std  0.616464
trainer/policy/mean Max  0.997777
trainer/policy/mean Min  -0.998402
trainer/policy/normal/std Mean  0.380809
trainer/policy/normal/std Std  0.187669
trainer/policy/normal/std Max  0.97687
trainer/policy/normal/std Min  0.0670371
trainer/policy/normal/log_std Mean  -1.12187
trainer/policy/normal/log_std Std  0.60348
trainer/policy/normal/log_std Max  -0.0234013
trainer/policy/normal/log_std Min  -2.70251
eval/num steps total  395679
eval/num paths total  771
eval/path length Mean  503
eval/path length Std  0
eval/path length Max  503
eval/path length Min  503
eval/Rewards Mean  3.18169
eval/Rewards Std  0.811314
eval/Rewards Max  5.21346
eval/Rewards Min  0.981627
eval/Returns Mean  1600.39
eval/Returns Std  0
eval/Returns Max  1600.39
eval/Returns Min  1600.39
eval/Actions Mean  0.154885
eval/Actions Std  0.592821
eval/Actions Max  0.998071
eval/Actions Min  -0.99913
eval/Num Paths  1
eval/Average Returns  1600.39
eval/normalized_score  49.7965
time/evaluation sampling (s)  0.877409
time/logging (s)  0.00235678
time/sampling batch (s)  0.26532
time/saving (s)  0.00306727
time/training (s)  4.1755
time/epoch (s)  5.32365
time/total (s)  33450.1
Epoch -406
----------------------------------  ---------------
2022-05-10 22:28:10.567128 PDT | [0] Epoch -405 finished
----------------------------------  ---------------
epoch  -405
replay_buffer/size  999996
trainer/num train calls  596000
trainer/Policy Loss  -2.31539
trainer/Log Pis Mean  2.28003
trainer/Log Pis Std  2.40649
trainer/Log Pis Max  9.57513
trainer/Log Pis Min  -5.58433
trainer/policy/mean Mean  0.127674
trainer/policy/mean Std  0.62191
trainer/policy/mean Max  0.99718
trainer/policy/mean Min  -0.997247
trainer/policy/normal/std Mean  0.377543
trainer/policy/normal/std Std  0.187301
trainer/policy/normal/std Max  1.13093
trainer/policy/normal/std Min  0.0620206
trainer/policy/normal/log_std Mean  -1.13244
trainer/policy/normal/log_std Std  0.608687
trainer/policy/normal/log_std Max  0.123041
trainer/policy/normal/log_std Min  -2.78029
eval/num steps total  396638
eval/num paths total  773
eval/path length Mean  479.5
eval/path length Std  72.5
eval/path length Max  552
eval/path length Min  407
eval/Rewards Mean  3.16472
eval/Rewards Std  0.813398
eval/Rewards Max  5.09131
eval/Rewards Min  0.98532
eval/Returns Mean  1517.48
eval/Returns Std  260.988
eval/Returns Max  1778.47
eval/Returns Min  1256.49
eval/Actions Mean  0.153027
eval/Actions Std  0.586728
eval/Actions Max  0.998402
eval/Actions Min  -0.998828
eval/Num Paths  2
eval/Average Returns  1517.48
eval/normalized_score  47.249
time/evaluation sampling (s)  0.882845
time/logging (s)  0.00356855
time/sampling batch (s)  0.267296
time/saving (s)  0.00297267
time/training (s)  4.19208
time/epoch (s)  5.34876
time/total (s)  33455.5
Epoch -405
----------------------------------  ---------------
2022-05-10 22:28:15.935916 PDT | [0] Epoch -404 finished
----------------------------------  ---------------
epoch  -404
replay_buffer/size  999996
trainer/num train calls  597000
trainer/Policy Loss  -2.09435
trainer/Log Pis Mean  1.96811
trainer/Log Pis Std  2.5558
trainer/Log Pis Max  10.3793
trainer/Log Pis Min  -4.93821
trainer/policy/mean Mean  0.138306
trainer/policy/mean Std  0.609634
trainer/policy/mean Max  0.99864
trainer/policy/mean Min  -0.998264
trainer/policy/normal/std Mean  0.378662
trainer/policy/normal/std Std  0.184385
trainer/policy/normal/std Max  1.02856
trainer/policy/normal/std Min  0.0678348
trainer/policy/normal/log_std Mean  -1.12105
trainer/policy/normal/log_std Std  0.588823
trainer/policy/normal/log_std Max  0.0281639
trainer/policy/normal/log_std Min  -2.69068
eval/num steps total  397132
eval/num paths total  774
eval/path length Mean  494
eval/path length Std  0
eval/path length Max  494
eval/path length Min  494
eval/Rewards Mean  3.19083
eval/Rewards Std  0.789662
eval/Rewards Max  4.94289
eval/Rewards Min  0.981361
eval/Returns Mean  1576.27
eval/Returns Std  0
eval/Returns Max  1576.27
eval/Returns Min  1576.27
eval/Actions Mean  0.158209
eval/Actions Std  0.593209
eval/Actions Max  0.996737
eval/Actions Min  -0.998133
eval/Num Paths  1
eval/Average Returns  1576.27
eval/normalized_score  49.0554
time/evaluation sampling (s)  0.890247
time/logging (s)  0.00230806
time/sampling batch (s)  0.265966
time/saving (s)  0.00298951
time/training (s)  4.18339
time/epoch (s)  5.3449
time/total (s)  33460.8
Epoch -404
----------------------------------  ---------------
2022-05-10 22:28:21.355616 PDT | [0] Epoch -403 finished
----------------------------------  ---------------
epoch  -403
replay_buffer/size  999996
trainer/num train calls  598000
trainer/Policy Loss  -2.35783
trainer/Log Pis Mean  2.25744
trainer/Log Pis Std  2.63365
trainer/Log Pis Max  9.73819
trainer/Log Pis Min  -9.78748
trainer/policy/mean Mean  0.140603
trainer/policy/mean Std  0.615184
trainer/policy/mean Max  0.998511
trainer/policy/mean Min  -0.997514
trainer/policy/normal/std Mean  0.37736
trainer/policy/normal/std Std  0.18392
trainer/policy/normal/std Max  0.943034
trainer/policy/normal/std Min  0.0625647
trainer/policy/normal/log_std Mean  -1.12691
trainer/policy/normal/log_std Std  0.594819
trainer/policy/normal/log_std Max  -0.058653
trainer/policy/normal/log_std Min  -2.77155
eval/num steps total  397667
eval/num paths total  775
eval/path length Mean  535
eval/path length Std  0
eval/path length Max  535
eval/path length Min  535
eval/Rewards Mean  3.13936
eval/Rewards Std  0.845085
eval/Rewards Max  5.06392
eval/Rewards Min  0.981051
eval/Returns Mean  1679.56
eval/Returns Std  0
eval/Returns Max  1679.56
eval/Returns Min  1679.56
eval/Actions Mean  0.144526
eval/Actions Std  0.56349
eval/Actions Max  0.996408
eval/Actions Min  -0.99837
eval/Num Paths  1
eval/Average Returns  1679.56
eval/normalized_score  52.229
time/evaluation sampling (s)  0.945963
time/logging (s)  0.00236592
time/sampling batch (s)  0.266182
time/saving (s)  0.00312955
time/training (s)  4.1796
time/epoch (s)  5.39724
time/total (s)  33466.2
Epoch -403
----------------------------------  ---------------
2022-05-10 22:28:26.709024 PDT | [0] Epoch -402 finished
----------------------------------  ---------------
epoch  -402
replay_buffer/size  999996
trainer/num train calls  599000
trainer/Policy Loss  -2.11849
trainer/Log Pis Mean  2.1236
trainer/Log Pis Std  2.50375
trainer/Log Pis Max  9.96522
trainer/Log Pis Min  -6.99888
trainer/policy/mean Mean  0.125957
trainer/policy/mean Std  0.61301
trainer/policy/mean Max  0.998258
trainer/policy/mean Min  -0.997913
trainer/policy/normal/std Mean  0.375928
trainer/policy/normal/std Std  0.181494
trainer/policy/normal/std Max  0.922138
trainer/policy/normal/std Min  0.0672261
trainer/policy/normal/log_std Mean  -1.1249
trainer/policy/normal/log_std Std  0.581107
trainer/policy/normal/log_std Max  -0.0810602
trainer/policy/normal/log_std Min  -2.69969
eval/num steps total  398168
eval/num paths total  776
eval/path length Mean  501
eval/path length Std  0
eval/path length Max  501
eval/path length Min  501
eval/Rewards Mean  3.07989
eval/Rewards Std  0.765435
eval/Rewards Max  4.74297
eval/Rewards Min  0.985289
eval/Returns Mean  1543.03
eval/Returns Std  0
eval/Returns Max  1543.03
eval/Returns Min  1543.03
eval/Actions Mean  0.152857
eval/Actions Std  0.589886
eval/Actions Max  0.998035
eval/Actions Min  -0.999121
eval/Num Paths  1
eval/Average Returns  1543.03
eval/normalized_score  48.0339
time/evaluation sampling (s)  0.879045
time/logging (s)  0.00226067
time/sampling batch (s)  0.265955
time/saving (s)  0.0030081
time/training (s)  4.18038
time/epoch (s)  5.33065
time/total (s)  33471.5
Epoch -402
----------------------------------  ---------------
2022-05-10 22:28:32.092634 PDT | [0] Epoch -401 finished
----------------------------------  ---------------
epoch  -401
replay_buffer/size  999996
trainer/num train calls  600000
trainer/Policy Loss  -2.27183
trainer/Log Pis Mean  2.29616
trainer/Log Pis Std  2.59892
trainer/Log Pis Max  10.2606
trainer/Log Pis Min  -4.87674
trainer/policy/mean Mean  0.163772
trainer/policy/mean Std  0.607842
trainer/policy/mean Max  0.998172
trainer/policy/mean Min  -0.997185
trainer/policy/normal/std Mean  0.369904
trainer/policy/normal/std Std  0.182332
trainer/policy/normal/std Max  0.996554
trainer/policy/normal/std Min  0.0655579
trainer/policy/normal/log_std Mean  -1.15192
trainer/policy/normal/log_std Std  0.606773
trainer/policy/normal/log_std Max  -0.00345193
trainer/policy/normal/log_std Min  -2.72482
eval/num steps total  399149
eval/num paths total  778
eval/path length Mean  490.5
eval/path length Std  37.5
eval/path length Max  528
eval/path length Min  453
eval/Rewards Mean  3.18499
eval/Rewards Std  0.895667
eval/Rewards Max  5.32577
eval/Rewards Min  0.982214
eval/Returns Mean  1562.24
eval/Returns Std  146.147
eval/Returns Max  1708.38
eval/Returns Min  1416.09
eval/Actions Mean  0.14718
eval/Actions Std  0.568365
eval/Actions Max  0.998521
eval/Actions Min  -0.998502
eval/Num Paths  2
eval/Average Returns  1562.24
eval/normalized_score  48.6241
time/evaluation sampling (s)  0.877096
time/logging (s)  0.00371465
time/sampling batch (s)  0.268257
time/saving (s)  0.00571181
time/training (s)  4.20792
time/epoch (s)  5.3627
time/total (s)  33476.9
Epoch -401
----------------------------------  ---------------
2022-05-10 22:28:37.458079 PDT | [0] Epoch -400 finished
----------------------------------  ---------------
epoch  -400
replay_buffer/size  999996
trainer/num train calls  601000
trainer/Policy Loss  -2.08471
trainer/Log Pis Mean  1.94648
trainer/Log Pis Std  2.40068
trainer/Log Pis Max  9.56037
trainer/Log Pis Min  -7.71732
trainer/policy/mean Mean  0.145741
trainer/policy/mean Std  0.600481
trainer/policy/mean Max  0.998812
trainer/policy/mean Min  -0.998104
trainer/policy/normal/std Mean  0.380574
trainer/policy/normal/std Std  0.18128
trainer/policy/normal/std Max  1.03418
trainer/policy/normal/std Min  0.0757393
trainer/policy/normal/log_std Mean  -1.10866
trainer/policy/normal/log_std Std  0.573431
trainer/policy/normal/log_std Max  0.0336051
trainer/policy/normal/log_std Min  -2.58046
eval/num steps total  400137
eval/num paths total  780
eval/path length Mean  494
eval/path length Std  3
eval/path length Max  497
eval/path length Min  491
eval/Rewards Mean  3.16963
eval/Rewards Std  0.797083
eval/Rewards Max  4.98589
eval/Rewards Min  0.988287
eval/Returns Mean  1565.8
eval/Returns Std  1.36481
eval/Returns Max  1567.16
eval/Returns Min  1564.43
eval/Actions Mean  0.156987
eval/Actions Std  0.59133
eval/Actions Max  0.998062
eval/Actions Min  -0.998953
eval/Num Paths  2
eval/Average Returns  1565.8
eval/normalized_score  48.7335
time/evaluation sampling (s)  0.883823
time/logging (s)  0.00357299
time/sampling batch (s)  0.2663
time/saving (s)  0.00293204
time/training (s)  4.18583
time/epoch (s)  5.34246
time/total (s)  33482.2
Epoch -400
----------------------------------  ---------------
2022-05-10 22:28:42.814742 PDT | [0] Epoch -399 finished
----------------------------------  ---------------
epoch  -399
replay_buffer/size  999996
trainer/num train calls  602000
trainer/Policy Loss  -2.13074
trainer/Log Pis Mean  2.10054
trainer/Log Pis Std  2.44252
trainer/Log Pis Max  9.27868
trainer/Log Pis Min  -4.71472
trainer/policy/mean Mean  0.16058
trainer/policy/mean Std  0.608808
trainer/policy/mean Max  0.998439
trainer/policy/mean Min  -0.997434
trainer/policy/normal/std Mean  0.380475
trainer/policy/normal/std Std  0.185296
trainer/policy/normal/std Max  1.01924
trainer/policy/normal/std Min  0.0701754
trainer/policy/normal/log_std Mean  -1.11727
trainer/policy/normal/log_std Std  0.590682
trainer/policy/normal/log_std Max  0.019057
trainer/policy/normal/log_std Min  -2.65676
eval/num steps total  401067
eval/num paths total  782
eval/path length Mean  465
eval/path length Std  37
eval/path length Max  502
eval/path length Min  428
eval/Rewards Mean  3.11971
eval/Rewards Std  0.828393
eval/Rewards Max  5.45306
eval/Rewards Min  0.98665
eval/Returns Mean  1450.66
eval/Returns Std  113.237
eval/Returns Max  1563.9
eval/Returns Min  1337.43
eval/Actions Mean  0.151618
eval/Actions Std  0.586436
eval/Actions Max  0.997811
eval/Actions Min  -0.999318
eval/Num Paths  2
eval/Average Returns  1450.66
eval/normalized_score  45.196
time/evaluation sampling (s)  0.87775
time/logging (s)  0.00349958
time/sampling batch (s)  0.266352
time/saving (s)  0.00292648
time/training (s)  4.1834
time/epoch (s)  5.33393
time/total (s)  33487.6
Epoch -399
----------------------------------  ---------------
2022-05-10 22:28:48.200413 PDT | [0] Epoch -398 finished
----------------------------------  ---------------
epoch  -398
replay_buffer/size  999996
trainer/num train calls  603000
trainer/Policy Loss  -2.22769
trainer/Log Pis Mean  2.1763
trainer/Log Pis Std  2.5859
trainer/Log Pis Max  12.3783
trainer/Log Pis Min  -4.14051
trainer/policy/mean Mean  0.134102
trainer/policy/mean Std  0.611714
trainer/policy/mean Max  0.998492
trainer/policy/mean Min  -0.998397
trainer/policy/normal/std Mean  0.377696
trainer/policy/normal/std Std  0.181961
trainer/policy/normal/std Max  0.958863
trainer/policy/normal/std Min  0.069677
trainer/policy/normal/log_std Mean  -1.12219
trainer/policy/normal/log_std Std  0.587048
trainer/policy/normal/log_std Max -0.0420066 trainer/policy/normal/log_std Min -2.66389 eval/num steps total 401567 eval/num paths total 783 eval/path length Mean 500 eval/path length Std 0 eval/path length Max 500 eval/path length Min 500 eval/Rewards Mean 3.15617 eval/Rewards Std 0.770989 eval/Rewards Max 4.73674 eval/Rewards Min 0.989825 eval/Returns Mean 1578.08 eval/Returns Std 0 eval/Returns Max 1578.08 eval/Returns Min 1578.08 eval/Actions Mean 0.153194 eval/Actions Std 0.596136 eval/Actions Max 0.998607 eval/Actions Min -0.996934 eval/Num Paths 1 eval/Average Returns 1578.08 eval/normalized_score 49.1111 time/evaluation sampling (s) 0.896558 time/logging (s) 0.00228871 time/sampling batch (s) 0.266836 time/saving (s) 0.0030024 time/training (s) 4.19319 time/epoch (s) 5.36187 time/total (s) 33492.9 Epoch -398 ---------------------------------- --------------- 2022-05-10 22:28:53.554787 PDT | [0] Epoch -397 finished ---------------------------------- --------------- epoch -397 replay_buffer/size 999996 trainer/num train calls 604000 trainer/Policy Loss -2.36581 trainer/Log Pis Mean 2.09418 trainer/Log Pis Std 2.59204 trainer/Log Pis Max 9.71772 trainer/Log Pis Min -5.38076 trainer/policy/mean Mean 0.127213 trainer/policy/mean Std 0.622907 trainer/policy/mean Max 0.997297 trainer/policy/mean Min -0.997666 trainer/policy/normal/std Mean 0.37467 trainer/policy/normal/std Std 0.178408 trainer/policy/normal/std Max 0.972597 trainer/policy/normal/std Min 0.0641989 trainer/policy/normal/log_std Mean -1.129 trainer/policy/normal/log_std Std 0.587726 trainer/policy/normal/log_std Max -0.0277855 trainer/policy/normal/log_std Min -2.74577 eval/num steps total 402112 eval/num paths total 784 eval/path length Mean 545 eval/path length Std 0 eval/path length Max 545 eval/path length Min 545 eval/Rewards Mean 3.18468 eval/Rewards Std 0.849088 eval/Rewards Max 5.14146 eval/Rewards Min 0.984407 eval/Returns Mean 1735.65 eval/Returns Std 0 eval/Returns Max 1735.65 eval/Returns 
Min 1735.65 eval/Actions Mean 0.148193 eval/Actions Std 0.571312 eval/Actions Max 0.997443 eval/Actions Min -0.99839 eval/Num Paths 1 eval/Average Returns 1735.65 eval/normalized_score 53.9525 time/evaluation sampling (s) 0.878672 time/logging (s) 0.00239756 time/sampling batch (s) 0.266908 time/saving (s) 0.00303628 time/training (s) 4.18123 time/epoch (s) 5.33224 time/total (s) 33498.3 Epoch -397 ---------------------------------- --------------- 2022-05-10 22:28:58.984447 PDT | [0] Epoch -396 finished ---------------------------------- --------------- epoch -396 replay_buffer/size 999996 trainer/num train calls 605000 trainer/Policy Loss -2.18471 trainer/Log Pis Mean 2.3076 trainer/Log Pis Std 2.49564 trainer/Log Pis Max 8.92664 trainer/Log Pis Min -4.51679 trainer/policy/mean Mean 0.105691 trainer/policy/mean Std 0.628885 trainer/policy/mean Max 0.997323 trainer/policy/mean Min -0.997834 trainer/policy/normal/std Mean 0.388581 trainer/policy/normal/std Std 0.188879 trainer/policy/normal/std Max 1.03473 trainer/policy/normal/std Min 0.0701498 trainer/policy/normal/log_std Mean -1.0937 trainer/policy/normal/log_std Std 0.584775 trainer/policy/normal/log_std Max 0.0341389 trainer/policy/normal/log_std Min -2.65712 eval/num steps total 402675 eval/num paths total 785 eval/path length Mean 563 eval/path length Std 0 eval/path length Max 563 eval/path length Min 563 eval/Rewards Mean 3.22221 eval/Rewards Std 0.819501 eval/Rewards Max 4.77322 eval/Rewards Min 0.980059 eval/Returns Mean 1814.1 eval/Returns Std 0 eval/Returns Max 1814.1 eval/Returns Min 1814.1 eval/Actions Mean 0.160454 eval/Actions Std 0.582692 eval/Actions Max 0.998569 eval/Actions Min -0.998256 eval/Num Paths 1 eval/Average Returns 1814.1 eval/normalized_score 56.363 time/evaluation sampling (s) 0.885702 time/logging (s) 0.00253708 time/sampling batch (s) 0.271019 time/saving (s) 0.00353743 time/training (s) 4.24418 time/epoch (s) 5.40698 time/total (s) 33503.7 Epoch -396 
---------------------------------- --------------- 2022-05-10 22:29:04.591232 PDT | [0] Epoch -395 finished ---------------------------------- --------------- epoch -395 replay_buffer/size 999996 trainer/num train calls 606000 trainer/Policy Loss -2.15048 trainer/Log Pis Mean 2.17537 trainer/Log Pis Std 2.53488 trainer/Log Pis Max 10.2101 trainer/Log Pis Min -4.66754 trainer/policy/mean Mean 0.125893 trainer/policy/mean Std 0.614282 trainer/policy/mean Max 0.997329 trainer/policy/mean Min -0.997622 trainer/policy/normal/std Mean 0.381146 trainer/policy/normal/std Std 0.183851 trainer/policy/normal/std Max 0.902526 trainer/policy/normal/std Min 0.0691924 trainer/policy/normal/log_std Mean -1.11307 trainer/policy/normal/log_std Std 0.587036 trainer/policy/normal/log_std Max -0.102558 trainer/policy/normal/log_std Min -2.67086 eval/num steps total 403181 eval/num paths total 786 eval/path length Mean 506 eval/path length Std 0 eval/path length Max 506 eval/path length Min 506 eval/Rewards Mean 3.12878 eval/Rewards Std 0.764259 eval/Rewards Max 4.81205 eval/Rewards Min 0.977724 eval/Returns Mean 1583.16 eval/Returns Std 0 eval/Returns Max 1583.16 eval/Returns Min 1583.16 eval/Actions Mean 0.154036 eval/Actions Std 0.588433 eval/Actions Max 0.996934 eval/Actions Min -0.995474 eval/Num Paths 1 eval/Average Returns 1583.16 eval/normalized_score 49.2671 time/evaluation sampling (s) 0.94996 time/logging (s) 0.00225696 time/sampling batch (s) 0.268807 time/saving (s) 0.00298435 time/training (s) 4.35963 time/epoch (s) 5.58364 time/total (s) 33509.3 Epoch -395 ---------------------------------- --------------- 2022-05-10 22:29:10.003747 PDT | [0] Epoch -394 finished ---------------------------------- --------------- epoch -394 replay_buffer/size 999996 trainer/num train calls 607000 trainer/Policy Loss -2.10873 trainer/Log Pis Mean 2.12149 trainer/Log Pis Std 2.62182 trainer/Log Pis Max 10.7434 trainer/Log Pis Min -5.12864 trainer/policy/mean Mean 0.137531 trainer/policy/mean 
Std 0.607062 trainer/policy/mean Max 0.995488 trainer/policy/mean Min -0.998516 trainer/policy/normal/std Mean 0.38416 trainer/policy/normal/std Std 0.183574 trainer/policy/normal/std Max 1.0143 trainer/policy/normal/std Min 0.0730719 trainer/policy/normal/log_std Mean -1.10473 trainer/policy/normal/log_std Std 0.588246 trainer/policy/normal/log_std Max 0.0141966 trainer/policy/normal/log_std Min -2.61631 eval/num steps total 403794 eval/num paths total 787 eval/path length Mean 613 eval/path length Std 0 eval/path length Max 613 eval/path length Min 613 eval/Rewards Mean 3.27811 eval/Rewards Std 0.767515 eval/Rewards Max 5.3618 eval/Rewards Min 0.984458 eval/Returns Mean 2009.48 eval/Returns Std 0 eval/Returns Max 2009.48 eval/Returns Min 2009.48 eval/Actions Mean 0.139263 eval/Actions Std 0.588964 eval/Actions Max 0.998731 eval/Actions Min -0.997909 eval/Num Paths 1 eval/Average Returns 2009.48 eval/normalized_score 62.3663 time/evaluation sampling (s) 0.888116 time/logging (s) 0.0025075 time/sampling batch (s) 0.2693 time/saving (s) 0.00295588 time/training (s) 4.22695 time/epoch (s) 5.38983 time/total (s) 33514.7 Epoch -394 ---------------------------------- --------------- 2022-05-10 22:29:15.395967 PDT | [0] Epoch -393 finished ---------------------------------- --------------- epoch -393 replay_buffer/size 999996 trainer/num train calls 608000 trainer/Policy Loss -2.28755 trainer/Log Pis Mean 2.19097 trainer/Log Pis Std 2.58466 trainer/Log Pis Max 11.697 trainer/Log Pis Min -4.52545 trainer/policy/mean Mean 0.145163 trainer/policy/mean Std 0.615569 trainer/policy/mean Max 0.998727 trainer/policy/mean Min -0.998024 trainer/policy/normal/std Mean 0.384935 trainer/policy/normal/std Std 0.181046 trainer/policy/normal/std Max 0.912438 trainer/policy/normal/std Min 0.070455 trainer/policy/normal/log_std Mean -1.09631 trainer/policy/normal/log_std Std 0.574001 trainer/policy/normal/log_std Max -0.0916348 trainer/policy/normal/log_std Min -2.65278 eval/num steps 
total 404794 eval/num paths total 789 eval/path length Mean 500 eval/path length Std 3 eval/path length Max 503 eval/path length Min 497 eval/Rewards Mean 3.12526 eval/Rewards Std 0.769135 eval/Rewards Max 4.98707 eval/Rewards Min 0.985494 eval/Returns Mean 1562.63 eval/Returns Std 3.30197 eval/Returns Max 1565.93 eval/Returns Min 1559.33 eval/Actions Mean 0.15669 eval/Actions Std 0.592291 eval/Actions Max 0.998883 eval/Actions Min -0.998366 eval/Num Paths 2 eval/Average Returns 1562.63 eval/normalized_score 48.6362 time/evaluation sampling (s) 0.895301 time/logging (s) 0.00391708 time/sampling batch (s) 0.268622 time/saving (s) 0.00322708 time/training (s) 4.19975 time/epoch (s) 5.37081 time/total (s) 33520 Epoch -393 ---------------------------------- --------------- 2022-05-10 22:29:20.807804 PDT | [0] Epoch -392 finished ---------------------------------- --------------- epoch -392 replay_buffer/size 999996 trainer/num train calls 609000 trainer/Policy Loss -2.15615 trainer/Log Pis Mean 2.19567 trainer/Log Pis Std 2.54538 trainer/Log Pis Max 9.80111 trainer/Log Pis Min -7.39666 trainer/policy/mean Mean 0.152913 trainer/policy/mean Std 0.614193 trainer/policy/mean Max 0.997579 trainer/policy/mean Min -0.997078 trainer/policy/normal/std Mean 0.379676 trainer/policy/normal/std Std 0.188709 trainer/policy/normal/std Max 1.03283 trainer/policy/normal/std Min 0.0713446 trainer/policy/normal/log_std Mean -1.12354 trainer/policy/normal/log_std Std 0.597816 trainer/policy/normal/log_std Max 0.032303 trainer/policy/normal/log_std Min -2.64023 eval/num steps total 405748 eval/num paths total 791 eval/path length Mean 477 eval/path length Std 3 eval/path length Max 480 eval/path length Min 474 eval/Rewards Mean 3.19688 eval/Rewards Std 0.836571 eval/Rewards Max 4.7516 eval/Rewards Min 0.982128 eval/Returns Mean 1524.91 eval/Returns Std 8.05765 eval/Returns Max 1532.97 eval/Returns Min 1516.85 eval/Actions Mean 0.139809 eval/Actions Std 0.599589 eval/Actions Max 0.996806 
eval/Actions Min -0.998685 eval/Num Paths 2 eval/Average Returns 1524.91 eval/normalized_score 47.4773 time/evaluation sampling (s) 0.889395 time/logging (s) 0.00359883 time/sampling batch (s) 0.269018 time/saving (s) 0.00310205 time/training (s) 4.22331 time/epoch (s) 5.38842 time/total (s) 33525.4 Epoch -392 ---------------------------------- --------------- 2022-05-10 22:29:26.211503 PDT | [0] Epoch -391 finished ---------------------------------- --------------- epoch -391 replay_buffer/size 999996 trainer/num train calls 610000 trainer/Policy Loss -2.37372 trainer/Log Pis Mean 2.42537 trainer/Log Pis Std 2.55537 trainer/Log Pis Max 9.56781 trainer/Log Pis Min -4.57792 trainer/policy/mean Mean 0.138134 trainer/policy/mean Std 0.617767 trainer/policy/mean Max 0.997884 trainer/policy/mean Min -0.99757 trainer/policy/normal/std Mean 0.3795 trainer/policy/normal/std Std 0.183246 trainer/policy/normal/std Max 0.957939 trainer/policy/normal/std Min 0.0676095 trainer/policy/normal/log_std Mean -1.11515 trainer/policy/normal/log_std Std 0.579837 trainer/policy/normal/log_std Max -0.0429711 trainer/policy/normal/log_std Min -2.69401 eval/num steps total 406739 eval/num paths total 793 eval/path length Mean 495.5 eval/path length Std 80.5 eval/path length Max 576 eval/path length Min 415 eval/Rewards Mean 3.11416 eval/Rewards Std 0.77628 eval/Rewards Max 4.92967 eval/Rewards Min 0.982906 eval/Returns Mean 1543.07 eval/Returns Std 265.808 eval/Returns Max 1808.87 eval/Returns Min 1277.26 eval/Actions Mean 0.150298 eval/Actions Std 0.587813 eval/Actions Max 0.998186 eval/Actions Min -0.999243 eval/Num Paths 2 eval/Average Returns 1543.07 eval/normalized_score 48.0351 time/evaluation sampling (s) 0.894929 time/logging (s) 0.00360978 time/sampling batch (s) 0.268744 time/saving (s) 0.00304456 time/training (s) 4.2103 time/epoch (s) 5.38063 time/total (s) 33530.8 Epoch -391 ---------------------------------- --------------- 2022-05-10 22:29:31.658051 PDT | [0] Epoch -390 
finished ---------------------------------- --------------- epoch -390 replay_buffer/size 999996 trainer/num train calls 611000 trainer/Policy Loss -2.2167 trainer/Log Pis Mean 2.31019 trainer/Log Pis Std 2.65302 trainer/Log Pis Max 9.45458 trainer/Log Pis Min -4.52137 trainer/policy/mean Mean 0.17877 trainer/policy/mean Std 0.609117 trainer/policy/mean Max 0.996381 trainer/policy/mean Min -0.998102 trainer/policy/normal/std Mean 0.377059 trainer/policy/normal/std Std 0.186547 trainer/policy/normal/std Max 0.996029 trainer/policy/normal/std Min 0.0630876 trainer/policy/normal/log_std Mean -1.13209 trainer/policy/normal/log_std Std 0.604422 trainer/policy/normal/log_std Max -0.00397877 trainer/policy/normal/log_std Min -2.76323 eval/num steps total 407267 eval/num paths total 794 eval/path length Mean 528 eval/path length Std 0 eval/path length Max 528 eval/path length Min 528 eval/Rewards Mean 3.13021 eval/Rewards Std 0.842943 eval/Rewards Max 5.30059 eval/Rewards Min 0.987839 eval/Returns Mean 1652.75 eval/Returns Std 0 eval/Returns Max 1652.75 eval/Returns Min 1652.75 eval/Actions Mean 0.147374 eval/Actions Std 0.575822 eval/Actions Max 0.99717 eval/Actions Min -0.99812 eval/Num Paths 1 eval/Average Returns 1652.75 eval/normalized_score 51.4053 time/evaluation sampling (s) 0.919599 time/logging (s) 0.00239696 time/sampling batch (s) 0.270859 time/saving (s) 0.00319577 time/training (s) 4.22639 time/epoch (s) 5.42244 time/total (s) 33536.2 Epoch -390 ---------------------------------- --------------- 2022-05-10 22:29:37.107916 PDT | [0] Epoch -389 finished ---------------------------------- --------------- epoch -389 replay_buffer/size 999996 trainer/num train calls 612000 trainer/Policy Loss -2.38851 trainer/Log Pis Mean 2.38902 trainer/Log Pis Std 2.6036 trainer/Log Pis Max 10.103 trainer/Log Pis Min -7.57972 trainer/policy/mean Mean 0.135937 trainer/policy/mean Std 0.620395 trainer/policy/mean Max 0.997001 trainer/policy/mean Min -0.998309 
trainer/policy/normal/std Mean 0.375442 trainer/policy/normal/std Std 0.18408 trainer/policy/normal/std Max 1.04223 trainer/policy/normal/std Min 0.0643618 trainer/policy/normal/log_std Mean -1.13604 trainer/policy/normal/log_std Std 0.605938 trainer/policy/normal/log_std Max 0.0413588 trainer/policy/normal/log_std Min -2.74324 eval/num steps total 407922 eval/num paths total 795 eval/path length Mean 655 eval/path length Std 0 eval/path length Max 655 eval/path length Min 655 eval/Rewards Mean 3.16748 eval/Rewards Std 0.727559 eval/Rewards Max 4.96934 eval/Rewards Min 0.983387 eval/Returns Mean 2074.7 eval/Returns Std 0 eval/Returns Max 2074.7 eval/Returns Min 2074.7 eval/Actions Mean 0.15147 eval/Actions Std 0.593924 eval/Actions Max 0.996641 eval/Actions Min -0.998144 eval/Num Paths 1 eval/Average Returns 2074.7 eval/normalized_score 64.3702 time/evaluation sampling (s) 0.906781 time/logging (s) 0.00293542 time/sampling batch (s) 0.271403 time/saving (s) 0.00336581 time/training (s) 4.24274 time/epoch (s) 5.42722 time/total (s) 33541.7 Epoch -389 ---------------------------------- --------------- 2022-05-10 22:29:42.523420 PDT | [0] Epoch -388 finished ---------------------------------- --------------- epoch -388 replay_buffer/size 999996 trainer/num train calls 613000 trainer/Policy Loss -2.18509 trainer/Log Pis Mean 2.09481 trainer/Log Pis Std 2.59802 trainer/Log Pis Max 10.0876 trainer/Log Pis Min -3.61431 trainer/policy/mean Mean 0.128694 trainer/policy/mean Std 0.609607 trainer/policy/mean Max 0.99865 trainer/policy/mean Min -0.998592 trainer/policy/normal/std Mean 0.377304 trainer/policy/normal/std Std 0.184565 trainer/policy/normal/std Max 1.02268 trainer/policy/normal/std Min 0.0652506 trainer/policy/normal/log_std Mean -1.12383 trainer/policy/normal/log_std Std 0.585057 trainer/policy/normal/log_std Max 0.0224218 trainer/policy/normal/log_std Min -2.72952 eval/num steps total 408497 eval/num paths total 796 eval/path length Mean 575 eval/path length Std 
0 eval/path length Max 575 eval/path length Min 575 eval/Rewards Mean 3.18517 eval/Rewards Std 0.775776 eval/Rewards Max 4.73877 eval/Rewards Min 0.984463 eval/Returns Mean 1831.48 eval/Returns Std 0 eval/Returns Max 1831.48 eval/Returns Min 1831.48 eval/Actions Mean 0.156019 eval/Actions Std 0.600877 eval/Actions Max 0.998993 eval/Actions Min -0.998659 eval/Num Paths 1 eval/Average Returns 1831.48 eval/normalized_score 56.8968 time/evaluation sampling (s) 0.883548 time/logging (s) 0.00266443 time/sampling batch (s) 0.269875 time/saving (s) 0.00295951 time/training (s) 4.23302 time/epoch (s) 5.39207 time/total (s) 33547.1 Epoch -388 ---------------------------------- --------------- 2022-05-10 22:29:47.937328 PDT | [0] Epoch -387 finished ---------------------------------- --------------- epoch -387 replay_buffer/size 999996 trainer/num train calls 614000 trainer/Policy Loss -2.08153 trainer/Log Pis Mean 2.23449 trainer/Log Pis Std 2.56366 trainer/Log Pis Max 9.67006 trainer/Log Pis Min -4.04724 trainer/policy/mean Mean 0.135289 trainer/policy/mean Std 0.614039 trainer/policy/mean Max 0.996546 trainer/policy/mean Min -0.997755 trainer/policy/normal/std Mean 0.380083 trainer/policy/normal/std Std 0.185495 trainer/policy/normal/std Max 0.989459 trainer/policy/normal/std Min 0.0690046 trainer/policy/normal/log_std Mean -1.11585 trainer/policy/normal/log_std Std 0.584332 trainer/policy/normal/log_std Max -0.0105972 trainer/policy/normal/log_std Min -2.67358 eval/num steps total 409168 eval/num paths total 797 eval/path length Mean 671 eval/path length Std 0 eval/path length Max 671 eval/path length Min 671 eval/Rewards Mean 3.19085 eval/Rewards Std 0.692642 eval/Rewards Max 4.95535 eval/Rewards Min 0.985421 eval/Returns Mean 2141.06 eval/Returns Std 0 eval/Returns Max 2141.06 eval/Returns Min 2141.06 eval/Actions Mean 0.162262 eval/Actions Std 0.608408 eval/Actions Max 0.99769 eval/Actions Min -0.998121 eval/Num Paths 1 eval/Average Returns 2141.06 
eval/normalized_score 66.4092 time/evaluation sampling (s) 0.88808 time/logging (s) 0.00271999 time/sampling batch (s) 0.270297 time/saving (s) 0.00307406 time/training (s) 4.22702 time/epoch (s) 5.39119 time/total (s) 33552.5 Epoch -387 ---------------------------------- --------------- 2022-05-10 22:29:53.394987 PDT | [0] Epoch -386 finished ---------------------------------- --------------- epoch -386 replay_buffer/size 999996 trainer/num train calls 615000 trainer/Policy Loss -2.03255 trainer/Log Pis Mean 2.06903 trainer/Log Pis Std 2.53303 trainer/Log Pis Max 9.69528 trainer/Log Pis Min -4.2763 trainer/policy/mean Mean 0.147529 trainer/policy/mean Std 0.609644 trainer/policy/mean Max 0.995462 trainer/policy/mean Min -0.997353 trainer/policy/normal/std Mean 0.382717 trainer/policy/normal/std Std 0.18719 trainer/policy/normal/std Max 1.02779 trainer/policy/normal/std Min 0.0719422 trainer/policy/normal/log_std Mean -1.11049 trainer/policy/normal/log_std Std 0.587851 trainer/policy/normal/log_std Max 0.0274127 trainer/policy/normal/log_std Min -2.63189 eval/num steps total 410010 eval/num paths total 799 eval/path length Mean 421 eval/path length Std 10 eval/path length Max 431 eval/path length Min 411 eval/Rewards Mean 3.12808 eval/Rewards Std 0.849486 eval/Rewards Max 5.51156 eval/Rewards Min 0.983224 eval/Returns Mean 1316.92 eval/Returns Std 45.6487 eval/Returns Max 1362.57 eval/Returns Min 1271.27 eval/Actions Mean 0.147937 eval/Actions Std 0.589503 eval/Actions Max 0.997691 eval/Actions Min -0.996691 eval/Num Paths 2 eval/Average Returns 1316.92 eval/normalized_score 41.0867 time/evaluation sampling (s) 0.886494 time/logging (s) 0.00332839 time/sampling batch (s) 0.271904 time/saving (s) 0.00311692 time/training (s) 4.27039 time/epoch (s) 5.43523 time/total (s) 33557.9 Epoch -386 ---------------------------------- --------------- 2022-05-10 22:29:58.854715 PDT | [0] Epoch -385 finished ---------------------------------- --------------- epoch -385 
replay_buffer/size 999996 trainer/num train calls 616000 trainer/Policy Loss -2.57346 trainer/Log Pis Mean 2.36497 trainer/Log Pis Std 2.68898 trainer/Log Pis Max 11.2578 trainer/Log Pis Min -4.31829 trainer/policy/mean Mean 0.111229 trainer/policy/mean Std 0.627707 trainer/policy/mean Max 0.996854 trainer/policy/mean Min -0.997506 trainer/policy/normal/std Mean 0.373109 trainer/policy/normal/std Std 0.185197 trainer/policy/normal/std Max 0.961174 trainer/policy/normal/std Min 0.0696119 trainer/policy/normal/log_std Mean -1.14076 trainer/policy/normal/log_std Std 0.597313 trainer/policy/normal/log_std Max -0.0395998 trainer/policy/normal/log_std Min -2.66482 eval/num steps total 410594 eval/num paths total 800 eval/path length Mean 584 eval/path length Std 0 eval/path length Max 584 eval/path length Min 584 eval/Rewards Mean 3.19597 eval/Rewards Std 0.742932 eval/Rewards Max 4.93137 eval/Rewards Min 0.984301 eval/Returns Mean 1866.45 eval/Returns Std 0 eval/Returns Max 1866.45 eval/Returns Min 1866.45 eval/Actions Mean 0.168028 eval/Actions Std 0.596691 eval/Actions Max 0.998031 eval/Actions Min -0.998294 eval/Num Paths 1 eval/Average Returns 1866.45 eval/normalized_score 57.9714 time/evaluation sampling (s) 0.880996 time/logging (s) 0.00251959 time/sampling batch (s) 0.272573 time/saving (s) 0.00300094 time/training (s) 4.27657 time/epoch (s) 5.43566 time/total (s) 33563.3 Epoch -385 ---------------------------------- --------------- 2022-05-10 22:30:04.325808 PDT | [0] Epoch -384 finished ---------------------------------- --------------- epoch -384 replay_buffer/size 999996 trainer/num train calls 617000 trainer/Policy Loss -2.33814 trainer/Log Pis Mean 2.31255 trainer/Log Pis Std 2.69753 trainer/Log Pis Max 10.9944 trainer/Log Pis Min -3.73509 trainer/policy/mean Mean 0.135349 trainer/policy/mean Std 0.618797 trainer/policy/mean Max 0.996134 trainer/policy/mean Min -0.998589 trainer/policy/normal/std Mean 0.372512 trainer/policy/normal/std Std 0.177892 
trainer/policy/normal/std Max 0.875268 trainer/policy/normal/std Min 0.0733243 trainer/policy/normal/log_std Mean -1.13334 trainer/policy/normal/log_std Std 0.582047 trainer/policy/normal/log_std Max -0.133225 trainer/policy/normal/log_std Min -2.61286 eval/num steps total 411485 eval/num paths total 802 eval/path length Mean 445.5 eval/path length Std 34.5 eval/path length Max 480 eval/path length Min 411 eval/Rewards Mean 3.12947 eval/Rewards Std 0.830821 eval/Rewards Max 4.74811 eval/Rewards Min 0.979735 eval/Returns Mean 1394.18 eval/Returns Std 135.702 eval/Returns Max 1529.88 eval/Returns Min 1258.47 eval/Actions Mean 0.147601 eval/Actions Std 0.594361 eval/Actions Max 0.99804 eval/Actions Min -0.998676 eval/Num Paths 2 eval/Average Returns 1394.18 eval/normalized_score 43.4604 time/evaluation sampling (s) 0.962857 time/logging (s) 0.00342826 time/sampling batch (s) 0.26985 time/saving (s) 0.00323191 time/training (s) 4.20976 time/epoch (s) 5.44913 time/total (s) 33568.8 Epoch -384 ---------------------------------- --------------- 2022-05-10 22:30:09.735134 PDT | [0] Epoch -383 finished ---------------------------------- --------------- epoch -383 replay_buffer/size 999996 trainer/num train calls 618000 trainer/Policy Loss -2.47093 trainer/Log Pis Mean 2.29975 trainer/Log Pis Std 2.56657 trainer/Log Pis Max 11.3395 trainer/Log Pis Min -5.5457 trainer/policy/mean Mean 0.143252 trainer/policy/mean Std 0.621434 trainer/policy/mean Max 0.995854 trainer/policy/mean Min -0.998517 trainer/policy/normal/std Mean 0.385941 trainer/policy/normal/std Std 0.191615 trainer/policy/normal/std Max 1.054 trainer/policy/normal/std Min 0.0714246 trainer/policy/normal/log_std Mean -1.11011 trainer/policy/normal/log_std Std 0.606793 trainer/policy/normal/log_std Max 0.0525927 trainer/policy/normal/log_std Min -2.63911 eval/num steps total 412392 eval/num paths total 804 eval/path length Mean 453.5 eval/path length Std 24.5 eval/path length Max 478 eval/path length Min 429 
eval/Rewards Mean 3.18919 eval/Rewards Std 0.903946 eval/Rewards Max 5.42609 eval/Rewards Min 0.981816 eval/Returns Mean 1446.3 eval/Returns Std 83.8388 eval/Returns Max 1530.13 eval/Returns Min 1362.46 eval/Actions Mean 0.144321 eval/Actions Std 0.584928 eval/Actions Max 0.998653 eval/Actions Min -0.998868 eval/Num Paths 2 eval/Average Returns 1446.3 eval/normalized_score 45.0618 time/evaluation sampling (s) 0.889557 time/logging (s) 0.00346389 time/sampling batch (s) 0.27053 time/saving (s) 0.0029292 time/training (s) 4.21965 time/epoch (s) 5.38613 time/total (s) 33574.2 Epoch -383 ---------------------------------- --------------- 2022-05-10 22:30:15.139493 PDT | [0] Epoch -382 finished ---------------------------------- --------------- epoch -382 replay_buffer/size 999996 trainer/num train calls 619000 trainer/Policy Loss -2.23364 trainer/Log Pis Mean 2.32231 trainer/Log Pis Std 2.46304 trainer/Log Pis Max 9.5366 trainer/Log Pis Min -4.01793 trainer/policy/mean Mean 0.129408 trainer/policy/mean Std 0.605923 trainer/policy/mean Max 0.997739 trainer/policy/mean Min -0.998714 trainer/policy/normal/std Mean 0.366839 trainer/policy/normal/std Std 0.181774 trainer/policy/normal/std Max 0.926868 trainer/policy/normal/std Min 0.061857 trainer/policy/normal/log_std Mean -1.16163 trainer/policy/normal/log_std Std 0.607965 trainer/policy/normal/log_std Max -0.0759443 trainer/policy/normal/log_std Min -2.78293 eval/num steps total 412967 eval/num paths total 805 eval/path length Mean 575 eval/path length Std 0 eval/path length Max 575 eval/path length Min 575 eval/Rewards Mean 3.15725 eval/Rewards Std 0.716895 eval/Rewards Max 4.71291 eval/Rewards Min 0.979898 eval/Returns Mean 1815.42 eval/Returns Std 0 eval/Returns Max 1815.42 eval/Returns Min 1815.42 eval/Actions Mean 0.145855 eval/Actions Std 0.610619 eval/Actions Max 0.996781 eval/Actions Min -0.99817 eval/Num Paths 1 eval/Average Returns 1815.42 eval/normalized_score 56.4034 time/evaluation sampling (s) 0.888843 
time/logging (s) 0.0025018 time/sampling batch (s) 0.26893 time/saving (s) 0.00295439 time/training (s) 4.21737 time/epoch (s) 5.3806 time/total (s) 33579.6 Epoch -382 ---------------------------------- --------------- 2022-05-10 22:30:20.560493 PDT | [0] Epoch -381 finished ---------------------------------- --------------- epoch -381 replay_buffer/size 999996 trainer/num train calls 620000 trainer/Policy Loss -2.18018 trainer/Log Pis Mean 2.26685 trainer/Log Pis Std 2.45085 trainer/Log Pis Max 10.6711 trainer/Log Pis Min -4.58895 trainer/policy/mean Mean 0.138668 trainer/policy/mean Std 0.623045 trainer/policy/mean Max 0.997737 trainer/policy/mean Min -0.998249 trainer/policy/normal/std Mean 0.378811 trainer/policy/normal/std Std 0.178929 trainer/policy/normal/std Max 0.975221 trainer/policy/normal/std Min 0.0692846 trainer/policy/normal/log_std Mean -1.11571 trainer/policy/normal/log_std Std 0.582213 trainer/policy/normal/log_std Max -0.0250914 trainer/policy/normal/log_std Min -2.66953 eval/num steps total 413598 eval/num paths total 806 eval/path length Mean 631 eval/path length Std 0 eval/path length Max 631 eval/path length Min 631 eval/Rewards Mean 3.28246 eval/Rewards Std 0.791043 eval/Rewards Max 5.23382 eval/Rewards Min 0.986946 eval/Returns Mean 2071.23 eval/Returns Std 0 eval/Returns Max 2071.23 eval/Returns Min 2071.23 eval/Actions Mean 0.153782 eval/Actions Std 0.59037 eval/Actions Max 0.998506 eval/Actions Min -0.998612 eval/Num Paths 1 eval/Average Returns 2071.23 eval/normalized_score 64.2636 time/evaluation sampling (s) 0.898831 time/logging (s) 0.0026319 time/sampling batch (s) 0.270975 time/saving (s) 0.00308795 time/training (s) 4.22284 time/epoch (s) 5.39836 time/total (s) 33585 Epoch -381 ---------------------------------- --------------- 2022-05-10 22:30:25.996292 PDT | [0] Epoch -380 finished ---------------------------------- --------------- epoch -380 replay_buffer/size 999996 trainer/num train calls 621000 trainer/Policy Loss -2.45989 
trainer/Log Pis Mean 2.48549 trainer/Log Pis Std 2.6562 trainer/Log Pis Max 11.4286 trainer/Log Pis Min -4.40867 trainer/policy/mean Mean 0.0927787 trainer/policy/mean Std 0.637378 trainer/policy/mean Max 0.997635 trainer/policy/mean Min -0.997444 trainer/policy/normal/std Mean 0.38151 trainer/policy/normal/std Std 0.186397 trainer/policy/normal/std Max 0.957848 trainer/policy/normal/std Min 0.0632059 trainer/policy/normal/log_std Mean -1.11661 trainer/policy/normal/log_std Std 0.596846 trainer/policy/normal/log_std Max -0.043066 trainer/policy/normal/log_std Min -2.76136 eval/num steps total 414060 eval/num paths total 807 eval/path length Mean 462 eval/path length Std 0 eval/path length Max 462 eval/path length Min 462 eval/Rewards Mean 3.19602 eval/Rewards Std 0.889609 eval/Rewards Max 5.15191 eval/Rewards Min 0.98175 eval/Returns Mean 1476.56 eval/Returns Std 0 eval/Returns Max 1476.56 eval/Returns Min 1476.56 eval/Actions Mean 0.146167 eval/Actions Std 0.578739 eval/Actions Max 0.995199 eval/Actions Min -0.999469 eval/Num Paths 1 eval/Average Returns 1476.56 eval/normalized_score 45.9917 time/evaluation sampling (s) 0.916263 time/logging (s) 0.0022487 time/sampling batch (s) 0.2694 time/saving (s) 0.00309429 time/training (s) 4.22152 time/epoch (s) 5.41253 time/total (s) 33590.4 Epoch -380 ---------------------------------- --------------- 2022-05-10 22:30:31.443528 PDT | [0] Epoch -379 finished ---------------------------------- --------------- epoch -379 replay_buffer/size 999996 trainer/num train calls 622000 trainer/Policy Loss -2.46374 trainer/Log Pis Mean 2.32676 trainer/Log Pis Std 2.69553 trainer/Log Pis Max 12.3908 trainer/Log Pis Min -5.37832 trainer/policy/mean Mean 0.158126 trainer/policy/mean Std 0.624337 trainer/policy/mean Max 0.996183 trainer/policy/mean Min -0.997855 trainer/policy/normal/std Mean 0.38768 trainer/policy/normal/std Std 0.19031 trainer/policy/normal/std Max 0.95064 trainer/policy/normal/std Min 0.0685653 
trainer/policy/normal/log_std Mean -1.10259 trainer/policy/normal/log_std Std 0.600894 trainer/policy/normal/log_std Max -0.0506196 trainer/policy/normal/log_std Min -2.67997 eval/num steps total 414982 eval/num paths total 809 eval/path length Mean 461 eval/path length Std 8 eval/path length Max 469 eval/path length Min 453 eval/Rewards Mean 3.2284 eval/Rewards Std 0.865568 eval/Rewards Max 5.30916 eval/Rewards Min 0.98197 eval/Returns Mean 1488.29 eval/Returns Std 32.5197 eval/Returns Max 1520.81 eval/Returns Min 1455.77 eval/Actions Mean 0.15056 eval/Actions Std 0.574722 eval/Actions Max 0.998402 eval/Actions Min -0.999044 eval/Num Paths 2 eval/Average Returns 1488.29 eval/normalized_score 46.3521 time/evaluation sampling (s) 0.921064 time/logging (s) 0.00356153 time/sampling batch (s) 0.27027 time/saving (s) 0.00320744 time/training (s) 4.22754 time/epoch (s) 5.42564 time/total (s) 33595.8 Epoch -379 ---------------------------------- --------------- 2022-05-10 22:30:36.899157 PDT | [0] Epoch -378 finished ---------------------------------- --------------- epoch -378 replay_buffer/size 999996 trainer/num train calls 623000 trainer/Policy Loss -2.30615 trainer/Log Pis Mean 2.53643 trainer/Log Pis Std 2.60047 trainer/Log Pis Max 12.2696 trainer/Log Pis Min -3.65237 trainer/policy/mean Mean 0.144222 trainer/policy/mean Std 0.628874 trainer/policy/mean Max 0.994696 trainer/policy/mean Min -0.998144 trainer/policy/normal/std Mean 0.372958 trainer/policy/normal/std Std 0.181456 trainer/policy/normal/std Max 0.936884 trainer/policy/normal/std Min 0.0710819 trainer/policy/normal/log_std Mean -1.13598 trainer/policy/normal/log_std Std 0.587229 trainer/policy/normal/log_std Max -0.0651962 trainer/policy/normal/log_std Min -2.64392 eval/num steps total 415608 eval/num paths total 810 eval/path length Mean 626 eval/path length Std 0 eval/path length Max 626 eval/path length Min 626 eval/Rewards Mean 3.23921 eval/Rewards Std 0.791139 eval/Rewards Max 5.24642 eval/Rewards 
Min 0.977916 eval/Returns Mean 2027.74 eval/Returns Std 0 eval/Returns Max 2027.74 eval/Returns Min 2027.74 eval/Actions Mean 0.155929 eval/Actions Std 0.590436 eval/Actions Max 0.997623 eval/Actions Min -0.998595 eval/Num Paths 1 eval/Average Returns 2027.74 eval/normalized_score 62.9274 time/evaluation sampling (s) 0.906966 time/logging (s) 0.0026578 time/sampling batch (s) 0.271502 time/saving (s) 0.00306766 time/training (s) 4.24738 time/epoch (s) 5.43158 time/total (s) 33601.3 Epoch -378 ---------------------------------- --------------- 2022-05-10 22:30:42.340564 PDT | [0] Epoch -377 finished ---------------------------------- --------------- epoch -377 replay_buffer/size 999996 trainer/num train calls 624000 trainer/Policy Loss -2.25339 trainer/Log Pis Mean 2.23213 trainer/Log Pis Std 2.58891 trainer/Log Pis Max 13.7178 trainer/Log Pis Min -6.44709 trainer/policy/mean Mean 0.135255 trainer/policy/mean Std 0.621698 trainer/policy/mean Max 0.994048 trainer/policy/mean Min -0.998744 trainer/policy/normal/std Mean 0.375572 trainer/policy/normal/std Std 0.183853 trainer/policy/normal/std Max 0.987552 trainer/policy/normal/std Min 0.0620859 trainer/policy/normal/log_std Mean -1.13375 trainer/policy/normal/log_std Std 0.600898 trainer/policy/normal/log_std Max -0.0125266 trainer/policy/normal/log_std Min -2.77924 eval/num steps total 416105 eval/num paths total 811 eval/path length Mean 497 eval/path length Std 0 eval/path length Max 497 eval/path length Min 497 eval/Rewards Mean 3.07567 eval/Rewards Std 0.777345 eval/Rewards Max 4.68609 eval/Rewards Min 0.982471 eval/Returns Mean 1528.61 eval/Returns Std 0 eval/Returns Max 1528.61 eval/Returns Min 1528.61 eval/Actions Mean 0.149491 eval/Actions Std 0.57979 eval/Actions Max 0.998182 eval/Actions Min -0.99761 eval/Num Paths 1 eval/Average Returns 1528.61 eval/normalized_score 47.591 time/evaluation sampling (s) 0.891744 time/logging (s) 0.00232282 time/sampling batch (s) 0.272085 time/saving (s) 0.003075 
time/training (s) 4.24851 time/epoch (s) 5.41774 time/total (s) 33606.7 Epoch -377 ---------------------------------- --------------- 2022-05-10 22:30:47.763661 PDT | [0] Epoch -376 finished ---------------------------------- --------------- epoch -376 replay_buffer/size 999996 trainer/num train calls 625000 trainer/Policy Loss -2.28303 trainer/Log Pis Mean 2.14695 trainer/Log Pis Std 2.53212 trainer/Log Pis Max 9.47793 trainer/Log Pis Min -7.53459 trainer/policy/mean Mean 0.114725 trainer/policy/mean Std 0.621309 trainer/policy/mean Max 0.997743 trainer/policy/mean Min -0.998746 trainer/policy/normal/std Mean 0.376008 trainer/policy/normal/std Std 0.182236 trainer/policy/normal/std Max 0.918156 trainer/policy/normal/std Min 0.0623948 trainer/policy/normal/log_std Mean -1.12731 trainer/policy/normal/log_std Std 0.588335 trainer/policy/normal/log_std Max -0.0853877 trainer/policy/normal/log_std Min -2.77427 eval/num steps total 416623 eval/num paths total 812 eval/path length Mean 518 eval/path length Std 0 eval/path length Max 518 eval/path length Min 518 eval/Rewards Mean 3.1869 eval/Rewards Std 0.841413 eval/Rewards Max 5.54191 eval/Rewards Min 0.984218 eval/Returns Mean 1650.82 eval/Returns Std 0 eval/Returns Max 1650.82 eval/Returns Min 1650.82 eval/Actions Mean 0.152256 eval/Actions Std 0.59238 eval/Actions Max 0.998524 eval/Actions Min -0.998436 eval/Num Paths 1 eval/Average Returns 1650.82 eval/normalized_score 51.3458 time/evaluation sampling (s) 0.879689 time/logging (s) 0.0023381 time/sampling batch (s) 0.270739 time/saving (s) 0.00299096 time/training (s) 4.24445 time/epoch (s) 5.4002 time/total (s) 33612.1 Epoch -376 ---------------------------------- --------------- 2022-05-10 22:30:53.148970 PDT | [0] Epoch -375 finished ---------------------------------- --------------- epoch -375 replay_buffer/size 999996 trainer/num train calls 626000 trainer/Policy Loss -2.24831 trainer/Log Pis Mean 2.17651 trainer/Log Pis Std 2.5682 trainer/Log Pis Max 10.6877 
trainer/Log Pis Min -6.14453 trainer/policy/mean Mean 0.148831 trainer/policy/mean Std 0.614179 trainer/policy/mean Max 0.998042 trainer/policy/mean Min -0.998207 trainer/policy/normal/std Mean 0.372278 trainer/policy/normal/std Std 0.180879 trainer/policy/normal/std Max 1.00793 trainer/policy/normal/std Min 0.0698433 trainer/policy/normal/log_std Mean -1.14083 trainer/policy/normal/log_std Std 0.597701 trainer/policy/normal/log_std Max 0.0078964 trainer/policy/normal/log_std Min -2.6615 eval/num steps total 417125 eval/num paths total 813 eval/path length Mean 502 eval/path length Std 0 eval/path length Max 502 eval/path length Min 502 eval/Rewards Mean 3.11298 eval/Rewards Std 0.755613 eval/Rewards Max 4.73795 eval/Rewards Min 0.985085 eval/Returns Mean 1562.71 eval/Returns Std 0 eval/Returns Max 1562.71 eval/Returns Min 1562.71 eval/Actions Mean 0.158363 eval/Actions Std 0.587048 eval/Actions Max 0.998149 eval/Actions Min -0.996286 eval/Num Paths 1 eval/Average Returns 1562.71 eval/normalized_score 48.6388 time/evaluation sampling (s) 0.883001 time/logging (s) 0.0022971 time/sampling batch (s) 0.268512 time/saving (s) 0.00294908 time/training (s) 4.20581 time/epoch (s) 5.36257 time/total (s) 33617.4 Epoch -375 ---------------------------------- --------------- 2022-05-10 22:30:58.614169 PDT | [0] Epoch -374 finished ---------------------------------- --------------- epoch -374 replay_buffer/size 999996 trainer/num train calls 627000 trainer/Policy Loss -2.15079 trainer/Log Pis Mean 2.18785 trainer/Log Pis Std 2.64393 trainer/Log Pis Max 11.3074 trainer/Log Pis Min -5.01209 trainer/policy/mean Mean 0.152106 trainer/policy/mean Std 0.620295 trainer/policy/mean Max 0.997431 trainer/policy/mean Min -0.998343 trainer/policy/normal/std Mean 0.378791 trainer/policy/normal/std Std 0.188075 trainer/policy/normal/std Max 1.0056 trainer/policy/normal/std Min 0.0682992 trainer/policy/normal/log_std Mean -1.12621 trainer/policy/normal/log_std Std 0.599671 
trainer/policy/normal/log_std Max 0.00558495 trainer/policy/normal/log_std Min -2.68386 eval/num steps total 417649 eval/num paths total 814 eval/path length Mean 524 eval/path length Std 0 eval/path length Max 524 eval/path length Min 524 eval/Rewards Mean 3.22417 eval/Rewards Std 0.842144 eval/Rewards Max 5.51475 eval/Rewards Min 0.982595 eval/Returns Mean 1689.47 eval/Returns Std 0 eval/Returns Max 1689.47 eval/Returns Min 1689.47 eval/Actions Mean 0.150307 eval/Actions Std 0.592317 eval/Actions Max 0.996497 eval/Actions Min -0.997684 eval/Num Paths 1 eval/Average Returns 1689.47 eval/normalized_score 52.5334 time/evaluation sampling (s) 0.893855 time/logging (s) 0.00240843 time/sampling batch (s) 0.265195 time/saving (s) 0.0031106 time/training (s) 4.27788 time/epoch (s) 5.44245 time/total (s) 33622.9 Epoch -374 ---------------------------------- --------------- 2022-05-10 22:31:04.127215 PDT | [0] Epoch -373 finished ---------------------------------- --------------- epoch -373 replay_buffer/size 999996 trainer/num train calls 628000 trainer/Policy Loss -2.23758 trainer/Log Pis Mean 2.27407 trainer/Log Pis Std 2.63394 trainer/Log Pis Max 11.1393 trainer/Log Pis Min -6.07432 trainer/policy/mean Mean 0.139327 trainer/policy/mean Std 0.617605 trainer/policy/mean Max 0.996964 trainer/policy/mean Min -0.998336 trainer/policy/normal/std Mean 0.379489 trainer/policy/normal/std Std 0.188515 trainer/policy/normal/std Max 1.029 trainer/policy/normal/std Min 0.0720101 trainer/policy/normal/log_std Mean -1.12104 trainer/policy/normal/log_std Std 0.589657 trainer/policy/normal/log_std Max 0.0285916 trainer/policy/normal/log_std Min -2.63095 eval/num steps total 418647 eval/num paths total 816 eval/path length Mean 499 eval/path length Std 82 eval/path length Max 581 eval/path length Min 417 eval/Rewards Mean 3.15046 eval/Rewards Std 0.800061 eval/Rewards Max 5.07579 eval/Rewards Min 0.98247 eval/Returns Mean 1572.08 eval/Returns Std 287.583 eval/Returns Max 1859.66 
eval/Returns Min 1284.5 eval/Actions Mean 0.155204 eval/Actions Std 0.591838 eval/Actions Max 0.998111 eval/Actions Min -0.998288 eval/Num Paths 2 eval/Average Returns 1572.08 eval/normalized_score 48.9267 time/evaluation sampling (s) 0.987317 time/logging (s) 0.00363688 time/sampling batch (s) 0.268808 time/saving (s) 0.00301344 time/training (s) 4.22879 time/epoch (s) 5.49157 time/total (s) 33628.4 Epoch -373 ---------------------------------- --------------- 2022-05-10 22:31:09.517628 PDT | [0] Epoch -372 finished ---------------------------------- --------------- epoch -372 replay_buffer/size 999996 trainer/num train calls 629000 trainer/Policy Loss -2.32017 trainer/Log Pis Mean 2.23051 trainer/Log Pis Std 2.59285 trainer/Log Pis Max 15.7298 trainer/Log Pis Min -9.13833 trainer/policy/mean Mean 0.116273 trainer/policy/mean Std 0.618714 trainer/policy/mean Max 0.998044 trainer/policy/mean Min -0.999266 trainer/policy/normal/std Mean 0.37267 trainer/policy/normal/std Std 0.181755 trainer/policy/normal/std Max 1.04492 trainer/policy/normal/std Min 0.0675072 trainer/policy/normal/log_std Mean -1.13916 trainer/policy/normal/log_std Std 0.594409 trainer/policy/normal/log_std Max 0.0439445 trainer/policy/normal/log_std Min -2.69552 eval/num steps total 419143 eval/num paths total 817 eval/path length Mean 496 eval/path length Std 0 eval/path length Max 496 eval/path length Min 496 eval/Rewards Mean 3.08958 eval/Rewards Std 0.778141 eval/Rewards Max 4.74899 eval/Rewards Min 0.988125 eval/Returns Mean 1532.43 eval/Returns Std 0 eval/Returns Max 1532.43 eval/Returns Min 1532.43 eval/Actions Mean 0.151062 eval/Actions Std 0.586093 eval/Actions Max 0.997512 eval/Actions Min -0.999239 eval/Num Paths 1 eval/Average Returns 1532.43 eval/normalized_score 47.7083 time/evaluation sampling (s) 0.876287 time/logging (s) 0.0023406 time/sampling batch (s) 0.267846 time/saving (s) 0.00311479 time/training (s) 4.21684 time/epoch (s) 5.36643 time/total (s) 33633.8 Epoch -372 
---------------------------------- --------------- 2022-05-10 22:31:14.943730 PDT | [0] Epoch -371 finished ---------------------------------- --------------- epoch -371 replay_buffer/size 999996 trainer/num train calls 630000 trainer/Policy Loss -2.05594 trainer/Log Pis Mean 2.03648 trainer/Log Pis Std 2.47762 trainer/Log Pis Max 8.84446 trainer/Log Pis Min -3.89692 trainer/policy/mean Mean 0.148758 trainer/policy/mean Std 0.607761 trainer/policy/mean Max 0.997672 trainer/policy/mean Min -0.997275 trainer/policy/normal/std Mean 0.380607 trainer/policy/normal/std Std 0.187688 trainer/policy/normal/std Max 0.946934 trainer/policy/normal/std Min 0.0672691 trainer/policy/normal/log_std Mean -1.11893 trainer/policy/normal/log_std Std 0.594116 trainer/policy/normal/log_std Max -0.0545255 trainer/policy/normal/log_std Min -2.69905 eval/num steps total 420118 eval/num paths total 819 eval/path length Mean 487.5 eval/path length Std 73.5 eval/path length Max 561 eval/path length Min 414 eval/Rewards Mean 3.15586 eval/Rewards Std 0.789112 eval/Rewards Max 4.85896 eval/Rewards Min 0.98333 eval/Returns Mean 1538.48 eval/Returns Std 264.511 eval/Returns Max 1802.99 eval/Returns Min 1273.97 eval/Actions Mean 0.156025 eval/Actions Std 0.599937 eval/Actions Max 0.998485 eval/Actions Min -0.9983 eval/Num Paths 2 eval/Average Returns 1538.48 eval/normalized_score 47.8943 time/evaluation sampling (s) 0.908034 time/logging (s) 0.00348934 time/sampling batch (s) 0.266988 time/saving (s) 0.00295849 time/training (s) 4.22317 time/epoch (s) 5.40464 time/total (s) 33639.2 Epoch -371 ---------------------------------- --------------- 2022-05-10 22:31:20.361405 PDT | [0] Epoch -370 finished ---------------------------------- --------------- epoch -370 replay_buffer/size 999996 trainer/num train calls 631000 trainer/Policy Loss -2.18557 trainer/Log Pis Mean 2.27393 trainer/Log Pis Std 2.63098 trainer/Log Pis Max 9.89106 trainer/Log Pis Min -3.49546 trainer/policy/mean Mean 0.154462 
trainer/policy/mean Std 0.6057 trainer/policy/mean Max 0.996495 trainer/policy/mean Min -0.997529 trainer/policy/normal/std Mean 0.378286 trainer/policy/normal/std Std 0.18426 trainer/policy/normal/std Max 1.08578 trainer/policy/normal/std Min 0.0649089 trainer/policy/normal/log_std Mean -1.12471 trainer/policy/normal/log_std Std 0.596488 trainer/policy/normal/log_std Max 0.0823006 trainer/policy/normal/log_std Min -2.73477 eval/num steps total 420643 eval/num paths total 820 eval/path length Mean 525 eval/path length Std 0 eval/path length Max 525 eval/path length Min 525 eval/Rewards Mean 3.14877 eval/Rewards Std 0.850912 eval/Rewards Max 5.48185 eval/Rewards Min 0.985985 eval/Returns Mean 1653.1 eval/Returns Std 0 eval/Returns Max 1653.1 eval/Returns Min 1653.1 eval/Actions Mean 0.151451 eval/Actions Std 0.585762 eval/Actions Max 0.998147 eval/Actions Min -0.998488 eval/Num Paths 1 eval/Average Returns 1653.1 eval/normalized_score 51.4162 time/evaluation sampling (s) 0.893894 time/logging (s) 0.0023281 time/sampling batch (s) 0.268638 time/saving (s) 0.00303914 time/training (s) 4.22573 time/epoch (s) 5.39363 time/total (s) 33644.6 Epoch -370 ---------------------------------- --------------- 2022-05-10 22:31:25.736012 PDT | [0] Epoch -369 finished ---------------------------------- --------------- epoch -369 replay_buffer/size 999996 trainer/num train calls 632000 trainer/Policy Loss -2.19232 trainer/Log Pis Mean 2.30354 trainer/Log Pis Std 2.60645 trainer/Log Pis Max 9.94607 trainer/Log Pis Min -4.92439 trainer/policy/mean Mean 0.129882 trainer/policy/mean Std 0.621396 trainer/policy/mean Max 0.996821 trainer/policy/mean Min -0.998174 trainer/policy/normal/std Mean 0.376688 trainer/policy/normal/std Std 0.178978 trainer/policy/normal/std Max 0.898909 trainer/policy/normal/std Min 0.0637058 trainer/policy/normal/log_std Mean -1.12449 trainer/policy/normal/log_std Std 0.589949 trainer/policy/normal/log_std Max -0.106573 trainer/policy/normal/log_std Min -2.75348 
eval/num steps total 421286 eval/num paths total 821 eval/path length Mean 643 eval/path length Std 0 eval/path length Max 643 eval/path length Min 643 eval/Rewards Mean 3.2333 eval/Rewards Std 0.715424 eval/Rewards Max 4.75422 eval/Rewards Min 0.982221 eval/Returns Mean 2079.01 eval/Returns Std 0 eval/Returns Max 2079.01 eval/Returns Min 2079.01 eval/Actions Mean 0.14118 eval/Actions Std 0.604657 eval/Actions Max 0.998389 eval/Actions Min -0.998545 eval/Num Paths 1 eval/Average Returns 2079.01 eval/normalized_score 64.5026 time/evaluation sampling (s) 0.889572 time/logging (s) 0.00272948 time/sampling batch (s) 0.266471 time/saving (s) 0.00305543 time/training (s) 4.19028 time/epoch (s) 5.3521 time/total (s) 33649.9 Epoch -369 ---------------------------------- --------------- 2022-05-10 22:31:31.177494 PDT | [0] Epoch -368 finished ---------------------------------- --------------- epoch -368 replay_buffer/size 999996 trainer/num train calls 633000 trainer/Policy Loss -2.40209 trainer/Log Pis Mean 2.46914 trainer/Log Pis Std 2.49264 trainer/Log Pis Max 11.3874 trainer/Log Pis Min -3.97822 trainer/policy/mean Mean 0.132816 trainer/policy/mean Std 0.631536 trainer/policy/mean Max 0.996792 trainer/policy/mean Min -0.999419 trainer/policy/normal/std Mean 0.38013 trainer/policy/normal/std Std 0.182274 trainer/policy/normal/std Max 0.968047 trainer/policy/normal/std Min 0.0715701 trainer/policy/normal/log_std Mean -1.11579 trainer/policy/normal/log_std Std 0.588848 trainer/policy/normal/log_std Max -0.0324746 trainer/policy/normal/log_std Min -2.63708 eval/num steps total 422275 eval/num paths total 823 eval/path length Mean 494.5 eval/path length Std 3.5 eval/path length Max 498 eval/path length Min 491 eval/Rewards Mean 3.05982 eval/Rewards Std 0.789028 eval/Rewards Max 4.74198 eval/Rewards Min 0.979235 eval/Returns Mean 1513.08 eval/Returns Std 15.0261 eval/Returns Max 1528.11 eval/Returns Min 1498.05 eval/Actions Mean 0.163755 eval/Actions Std 0.584661 eval/Actions 
Max 0.997433 eval/Actions Min -0.999045 eval/Num Paths 2 eval/Average Returns 1513.08 eval/normalized_score 47.1138 time/evaluation sampling (s) 0.945778 time/logging (s) 0.00369582 time/sampling batch (s) 0.267218 time/saving (s) 0.00305259 time/training (s) 4.19999 time/epoch (s) 5.41974 time/total (s) 33655.3 Epoch -368 ---------------------------------- --------------- 2022-05-10 22:31:36.583286 PDT | [0] Epoch -367 finished ---------------------------------- --------------- epoch -367 replay_buffer/size 999996 trainer/num train calls 634000 trainer/Policy Loss -2.19121 trainer/Log Pis Mean 2.2176 trainer/Log Pis Std 2.64119 trainer/Log Pis Max 9.27353 trainer/Log Pis Min -5.97156 trainer/policy/mean Mean 0.155805 trainer/policy/mean Std 0.62586 trainer/policy/mean Max 0.996565 trainer/policy/mean Min -0.997525 trainer/policy/normal/std Mean 0.383505 trainer/policy/normal/std Std 0.179313 trainer/policy/normal/std Max 0.919204 trainer/policy/normal/std Min 0.0668079 trainer/policy/normal/log_std Mean -1.0981 trainer/policy/normal/log_std Std 0.5693 trainer/policy/normal/log_std Max -0.0842475 trainer/policy/normal/log_std Min -2.70593 eval/num steps total 422824 eval/num paths total 824 eval/path length Mean 549 eval/path length Std 0 eval/path length Max 549 eval/path length Min 549 eval/Rewards Mean 3.22937 eval/Rewards Std 0.837497 eval/Rewards Max 5.23407 eval/Rewards Min 0.982404 eval/Returns Mean 1772.92 eval/Returns Std 0 eval/Returns Max 1772.92 eval/Returns Min 1772.92 eval/Actions Mean 0.152503 eval/Actions Std 0.582572 eval/Actions Max 0.998483 eval/Actions Min -0.997858 eval/Num Paths 1 eval/Average Returns 1772.92 eval/normalized_score 55.0977 time/evaluation sampling (s) 0.905849 time/logging (s) 0.00251413 time/sampling batch (s) 0.26736 time/saving (s) 0.0031657 time/training (s) 4.20264 time/epoch (s) 5.38153 time/total (s) 33660.7 Epoch -367 ---------------------------------- --------------- 2022-05-10 22:31:41.984979 PDT | [0] Epoch -366 
finished ---------------------------------- --------------- epoch -366 replay_buffer/size 999996 trainer/num train calls 635000 trainer/Policy Loss -2.23575 trainer/Log Pis Mean 2.02731 trainer/Log Pis Std 2.52481 trainer/Log Pis Max 9.26773 trainer/Log Pis Min -5.61322 trainer/policy/mean Mean 0.156388 trainer/policy/mean Std 0.607346 trainer/policy/mean Max 0.996706 trainer/policy/mean Min -0.998878 trainer/policy/normal/std Mean 0.379255 trainer/policy/normal/std Std 0.180791 trainer/policy/normal/std Max 0.955356 trainer/policy/normal/std Min 0.0677179 trainer/policy/normal/log_std Mean -1.11537 trainer/policy/normal/log_std Std 0.583109 trainer/policy/normal/log_std Max -0.0456708 trainer/policy/normal/log_std Min -2.6924 eval/num steps total 423340 eval/num paths total 825 eval/path length Mean 516 eval/path length Std 0 eval/path length Max 516 eval/path length Min 516 eval/Rewards Mean 3.18551 eval/Rewards Std 0.821389 eval/Rewards Max 5.34451 eval/Rewards Min 0.983955 eval/Returns Mean 1643.73 eval/Returns Std 0 eval/Returns Max 1643.73 eval/Returns Min 1643.73 eval/Actions Mean 0.155545 eval/Actions Std 0.593455 eval/Actions Max 0.998447 eval/Actions Min -0.995755 eval/Num Paths 1 eval/Average Returns 1643.73 eval/normalized_score 51.128 time/evaluation sampling (s) 0.89367 time/logging (s) 0.00232781 time/sampling batch (s) 0.267758 time/saving (s) 0.00302121 time/training (s) 4.2117 time/epoch (s) 5.37848 time/total (s) 33666.1 Epoch -366 ---------------------------------- --------------- 2022-05-10 22:31:47.454160 PDT | [0] Epoch -365 finished ---------------------------------- --------------- epoch -365 replay_buffer/size 999996 trainer/num train calls 636000 trainer/Policy Loss -2.10729 trainer/Log Pis Mean 1.95871 trainer/Log Pis Std 2.63199 trainer/Log Pis Max 9.93005 trainer/Log Pis Min -5.6219 trainer/policy/mean Mean 0.0849741 trainer/policy/mean Std 0.616531 trainer/policy/mean Max 0.998613 trainer/policy/mean Min -0.997888 
trainer/policy/normal/std Mean 0.382222 trainer/policy/normal/std Std 0.180608 trainer/policy/normal/std Max 0.92592 trainer/policy/normal/std Min 0.067395 trainer/policy/normal/log_std Mean -1.10591 trainer/policy/normal/log_std Std 0.579694 trainer/policy/normal/log_std Max -0.0769674 trainer/policy/normal/log_std Min -2.69718 eval/num steps total 423836 eval/num paths total 826 eval/path length Mean 496 eval/path length Std 0 eval/path length Max 496 eval/path length Min 496 eval/Rewards Mean 3.08841 eval/Rewards Std 0.791581 eval/Rewards Max 4.72375 eval/Rewards Min 0.980003 eval/Returns Mean 1531.85 eval/Returns Std 0 eval/Returns Max 1531.85 eval/Returns Min 1531.85 eval/Actions Mean 0.145811 eval/Actions Std 0.577068 eval/Actions Max 0.99807 eval/Actions Min -0.997348 eval/Num Paths 1 eval/Average Returns 1531.85 eval/normalized_score 47.6906 time/evaluation sampling (s) 0.934889 time/logging (s) 0.00236017 time/sampling batch (s) 0.267989 time/saving (s) 0.00298073 time/training (s) 4.23803 time/epoch (s) 5.44625 time/total (s) 33671.6 Epoch -365 ---------------------------------- --------------- 2022-05-10 22:31:52.872447 PDT | [0] Epoch -364 finished ---------------------------------- --------------- epoch -364 replay_buffer/size 999996 trainer/num train calls 637000 trainer/Policy Loss -2.23049 trainer/Log Pis Mean 2.25659 trainer/Log Pis Std 2.57622 trainer/Log Pis Max 9.97079 trainer/Log Pis Min -5.45546 trainer/policy/mean Mean 0.148282 trainer/policy/mean Std 0.615457 trainer/policy/mean Max 0.997161 trainer/policy/mean Min -0.99872 trainer/policy/normal/std Mean 0.384173 trainer/policy/normal/std Std 0.187361 trainer/policy/normal/std Max 0.963434 trainer/policy/normal/std Min 0.0677578 trainer/policy/normal/log_std Mean -1.1095 trainer/policy/normal/log_std Std 0.597008 trainer/policy/normal/log_std Max -0.0372508 trainer/policy/normal/log_std Min -2.69182 eval/num steps total 424332 eval/num paths total 827 eval/path length Mean 496 eval/path 
length Std 0 eval/path length Max 496 eval/path length Min 496 eval/Rewards Mean 3.09199 eval/Rewards Std 0.778147 eval/Rewards Max 4.75083 eval/Rewards Min 0.98367 eval/Returns Mean 1533.63 eval/Returns Std 0 eval/Returns Max 1533.63 eval/Returns Min 1533.63 eval/Actions Mean 0.156121 eval/Actions Std 0.596426 eval/Actions Max 0.997624 eval/Actions Min -0.996319 eval/Num Paths 1 eval/Average Returns 1533.63 eval/normalized_score 47.7452 time/evaluation sampling (s) 0.871866 time/logging (s) 0.00223431 time/sampling batch (s) 0.269573 time/saving (s) 0.00299687 time/training (s) 4.24856 time/epoch (s) 5.39523 time/total (s) 33677 Epoch -364 ---------------------------------- --------------- 2022-05-10 22:31:58.281114 PDT | [0] Epoch -363 finished ---------------------------------- --------------- epoch -363 replay_buffer/size 999996 trainer/num train calls 638000 trainer/Policy Loss -2.2054 trainer/Log Pis Mean 2.13699 trainer/Log Pis Std 2.62827 trainer/Log Pis Max 9.16619 trainer/Log Pis Min -6.10701 trainer/policy/mean Mean 0.138757 trainer/policy/mean Std 0.615118 trainer/policy/mean Max 0.997779 trainer/policy/mean Min -0.998464 trainer/policy/normal/std Mean 0.382569 trainer/policy/normal/std Std 0.182292 trainer/policy/normal/std Max 0.930031 trainer/policy/normal/std Min 0.0697961 trainer/policy/normal/log_std Mean -1.10868 trainer/policy/normal/log_std Std 0.588517 trainer/policy/normal/log_std Max -0.0725377 trainer/policy/normal/log_std Min -2.66218 eval/num steps total 424836 eval/num paths total 828 eval/path length Mean 504 eval/path length Std 0 eval/path length Max 504 eval/path length Min 504 eval/Rewards Mean 3.13347 eval/Rewards Std 0.757726 eval/Rewards Max 4.77811 eval/Rewards Min 0.984024 eval/Returns Mean 1579.27 eval/Returns Std 0 eval/Returns Max 1579.27 eval/Returns Min 1579.27 eval/Actions Mean 0.156089 eval/Actions Std 0.585004 eval/Actions Max 0.997838 eval/Actions Min -0.995621 eval/Num Paths 1 eval/Average Returns 1579.27 
eval/normalized_score 49.1474 time/evaluation sampling (s) 0.880265 time/logging (s) 0.00227631 time/sampling batch (s) 0.268755 time/saving (s) 0.00303529 time/training (s) 4.23152 time/epoch (s) 5.38585 time/total (s) 33682.3 Epoch -363 ---------------------------------- --------------- 2022-05-10 22:32:03.687701 PDT | [0] Epoch -362 finished ---------------------------------- --------------- epoch -362 replay_buffer/size 999996 trainer/num train calls 639000 trainer/Policy Loss -2.3099 trainer/Log Pis Mean 2.37655 trainer/Log Pis Std 2.65347 trainer/Log Pis Max 10.1091 trainer/Log Pis Min -6.77735 trainer/policy/mean Mean 0.136941 trainer/policy/mean Std 0.615826 trainer/policy/mean Max 0.997143 trainer/policy/mean Min -0.997981 trainer/policy/normal/std Mean 0.370593 trainer/policy/normal/std Std 0.175914 trainer/policy/normal/std Max 1.02603 trainer/policy/normal/std Min 0.0602842 trainer/policy/normal/log_std Mean -1.13656 trainer/policy/normal/log_std Std 0.578186 trainer/policy/normal/log_std Max 0.0257006 trainer/policy/normal/log_std Min -2.80869 eval/num steps total 425255 eval/num paths total 829 eval/path length Mean 419 eval/path length Std 0 eval/path length Max 419 eval/path length Min 419 eval/Rewards Mean 3.0839 eval/Rewards Std 0.839924 eval/Rewards Max 5.10092 eval/Rewards Min 0.980788 eval/Returns Mean 1292.16 eval/Returns Std 0 eval/Returns Max 1292.16 eval/Returns Min 1292.16 eval/Actions Mean 0.147177 eval/Actions Std 0.586269 eval/Actions Max 0.996397 eval/Actions Min -0.997233 eval/Num Paths 1 eval/Average Returns 1292.16 eval/normalized_score 40.3257 time/evaluation sampling (s) 0.874252 time/logging (s) 0.00194864 time/sampling batch (s) 0.269803 time/saving (s) 0.00296738 time/training (s) 4.23436 time/epoch (s) 5.38333 time/total (s) 33687.7 Epoch -362 ---------------------------------- --------------- 2022-05-10 22:32:09.075525 PDT | [0] Epoch -361 finished ---------------------------------- --------------- epoch -361 
replay_buffer/size 999996 trainer/num train calls 640000 trainer/Policy Loss -2.41 trainer/Log Pis Mean 2.35895 trainer/Log Pis Std 2.60805 trainer/Log Pis Max 9.63321 trainer/Log Pis Min -6.159 trainer/policy/mean Mean 0.134965 trainer/policy/mean Std 0.620492 trainer/policy/mean Max 0.998048 trainer/policy/mean Min -0.997823 trainer/policy/normal/std Mean 0.383009 trainer/policy/normal/std Std 0.185483 trainer/policy/normal/std Max 1.01266 trainer/policy/normal/std Min 0.0709725 trainer/policy/normal/log_std Mean -1.10877 trainer/policy/normal/log_std Std 0.587571 trainer/policy/normal/log_std Max 0.0125756 trainer/policy/normal/log_std Min -2.64546 eval/num steps total 425907 eval/num paths total 830 eval/path length Mean 652 eval/path length Std 0 eval/path length Max 652 eval/path length Min 652 eval/Rewards Mean 3.16875 eval/Rewards Std 0.691402 eval/Rewards Max 4.81484 eval/Rewards Min 0.982666 eval/Returns Mean 2066.03 eval/Returns Std 0 eval/Returns Max 2066.03 eval/Returns Min 2066.03 eval/Actions Mean 0.152119 eval/Actions Std 0.60655 eval/Actions Max 0.998376 eval/Actions Min -0.998584 eval/Num Paths 1 eval/Average Returns 2066.03 eval/normalized_score 64.1036 time/evaluation sampling (s) 0.878093 time/logging (s) 0.00264207 time/sampling batch (s) 0.268509 time/saving (s) 0.00296473 time/training (s) 4.21364 time/epoch (s) 5.36585 time/total (s) 33693.1 Epoch -361 ---------------------------------- --------------- 2022-05-10 22:32:14.474283 PDT | [0] Epoch -360 finished ---------------------------------- --------------- epoch -360 replay_buffer/size 999996 trainer/num train calls 641000 trainer/Policy Loss -2.00106 trainer/Log Pis Mean 2.05344 trainer/Log Pis Std 2.53072 trainer/Log Pis Max 9.25474 trainer/Log Pis Min -4.07514 trainer/policy/mean Mean 0.11384 trainer/policy/mean Std 0.609089 trainer/policy/mean Max 0.997855 trainer/policy/mean Min -0.997074 trainer/policy/normal/std Mean 0.374821 trainer/policy/normal/std Std 0.180856 
trainer/policy/normal/std Max 0.946836 trainer/policy/normal/std Min 0.066815 trainer/policy/normal/log_std Mean -1.12886 trainer/policy/normal/log_std Std 0.585131 trainer/policy/normal/log_std Max -0.0546298 trainer/policy/normal/log_std Min -2.70583 eval/num steps total 426411 eval/num paths total 831 eval/path length Mean 504 eval/path length Std 0 eval/path length Max 504 eval/path length Min 504 eval/Rewards Mean 3.10837 eval/Rewards Std 0.754592 eval/Rewards Max 4.73759 eval/Rewards Min 0.98639 eval/Returns Mean 1566.62 eval/Returns Std 0 eval/Returns Max 1566.62 eval/Returns Min 1566.62 eval/Actions Mean 0.154991 eval/Actions Std 0.593435 eval/Actions Max 0.998191 eval/Actions Min -0.997873 eval/Num Paths 1 eval/Average Returns 1566.62 eval/normalized_score 48.7589 time/evaluation sampling (s) 0.883206 time/logging (s) 0.00237901 time/sampling batch (s) 0.267432 time/saving (s) 0.0031837 time/training (s) 4.21782 time/epoch (s) 5.37402 time/total (s) 33698.5 Epoch -360 ---------------------------------- --------------- 2022-05-10 22:32:19.858343 PDT | [0] Epoch -359 finished ---------------------------------- --------------- epoch -359 replay_buffer/size 999996 trainer/num train calls 642000 trainer/Policy Loss -2.29508 trainer/Log Pis Mean 2.1365 trainer/Log Pis Std 2.67588 trainer/Log Pis Max 14.6041 trainer/Log Pis Min -4.81867 trainer/policy/mean Mean 0.125937 trainer/policy/mean Std 0.626484 trainer/policy/mean Max 0.997086 trainer/policy/mean Min -0.998991 trainer/policy/normal/std Mean 0.373422 trainer/policy/normal/std Std 0.181418 trainer/policy/normal/std Max 1.0101 trainer/policy/normal/std Min 0.0698158 trainer/policy/normal/log_std Mean -1.1361 trainer/policy/normal/log_std Std 0.591846 trainer/policy/normal/log_std Max 0.010047 trainer/policy/normal/log_std Min -2.66189 eval/num steps total 426898 eval/num paths total 832 eval/path length Mean 487 eval/path length Std 0 eval/path length Max 487 eval/path length Min 487 eval/Rewards Mean 3.1953 
eval/Rewards Std 0.805816 eval/Rewards Max 4.85631 eval/Rewards Min 0.982518 eval/Returns Mean 1556.11 eval/Returns Std 0 eval/Returns Max 1556.11 eval/Returns Min 1556.11 eval/Actions Mean 0.15325 eval/Actions Std 0.600278 eval/Actions Max 0.998025 eval/Actions Min -0.998158 eval/Num Paths 1 eval/Average Returns 1556.11 eval/normalized_score 48.436 time/evaluation sampling (s) 0.875971 time/logging (s) 0.00233733 time/sampling batch (s) 0.272923 time/saving (s) 0.00294871 time/training (s) 4.20695 time/epoch (s) 5.36113 time/total (s) 33703.8 Epoch -359 ---------------------------------- --------------- 2022-05-10 22:32:25.245228 PDT | [0] Epoch -358 finished ---------------------------------- --------------- epoch -358 replay_buffer/size 999996 trainer/num train calls 643000 trainer/Policy Loss -2.34697 trainer/Log Pis Mean 2.29531 trainer/Log Pis Std 2.57111 trainer/Log Pis Max 11.7205 trainer/Log Pis Min -4.86154 trainer/policy/mean Mean 0.119298 trainer/policy/mean Std 0.618012 trainer/policy/mean Max 0.998301 trainer/policy/mean Min -0.997621 trainer/policy/normal/std Mean 0.37173 trainer/policy/normal/std Std 0.181538 trainer/policy/normal/std Max 0.998382 trainer/policy/normal/std Min 0.0680704 trainer/policy/normal/log_std Mean -1.14067 trainer/policy/normal/log_std Std 0.590988 trainer/policy/normal/log_std Max -0.00161934 trainer/policy/normal/log_std Min -2.68721 eval/num steps total 427428 eval/num paths total 833 eval/path length Mean 530 eval/path length Std 0 eval/path length Max 530 eval/path length Min 530 eval/Rewards Mean 3.20906 eval/Rewards Std 0.848956 eval/Rewards Max 5.37464 eval/Rewards Min 0.979296 eval/Returns Mean 1700.8 eval/Returns Std 0 eval/Returns Max 1700.8 eval/Returns Min 1700.8 eval/Actions Mean 0.142286 eval/Actions Std 0.578763 eval/Actions Max 0.998513 eval/Actions Min -0.998512 eval/Num Paths 1 eval/Average Returns 1700.8 eval/normalized_score 52.8817 time/evaluation sampling (s) 0.890157 time/logging (s) 0.00258655 
time/sampling batch (s)  0.26748
time/saving (s)  0.00323649
time/training (s)  4.20103
time/epoch (s)  5.36449
time/total (s)  33709.2
Epoch -358
----------------------------------  ---------------
2022-05-10 22:32:30.727026 PDT | [0] Epoch -357 finished
----------------------------------  ---------------
epoch  -357
replay_buffer/size  999996
trainer/num train calls  644000
trainer/Policy Loss  -2.3731
trainer/Log Pis Mean  2.34712
trainer/Log Pis Std  2.72701
trainer/Log Pis Max  12.17
trainer/Log Pis Min  -5.02297
trainer/policy/mean Mean  0.129804
trainer/policy/mean Std  0.625898
trainer/policy/mean Max  0.995722
trainer/policy/mean Min  -0.998602
trainer/policy/normal/std Mean  0.375808
trainer/policy/normal/std Std  0.18591
trainer/policy/normal/std Max  1.00685
trainer/policy/normal/std Min  0.067475
trainer/policy/normal/log_std Mean  -1.13262
trainer/policy/normal/log_std Std  0.596194
trainer/policy/normal/log_std Max  0.0068315
trainer/policy/normal/log_std Min  -2.696
eval/num steps total  428339
eval/num paths total  835
eval/path length Mean  455.5
eval/path length Std  47.5
eval/path length Max  503
eval/path length Min  408
eval/Rewards Mean  3.11128
eval/Rewards Std  0.775365
eval/Rewards Max  4.6994
eval/Rewards Min  0.983872
eval/Returns Mean  1417.19
eval/Returns Std  164.421
eval/Returns Max  1581.61
eval/Returns Min  1252.77
eval/Actions Mean  0.156899
eval/Actions Std  0.597034
eval/Actions Max  0.998815
eval/Actions Min  -0.999338
eval/Num Paths  2
eval/Average Returns  1417.19
eval/normalized_score  44.1674
time/evaluation sampling (s)  0.925743
time/logging (s)  0.00345005
time/sampling batch (s)  0.27326
time/saving (s)  0.00310149
time/training (s)  4.25374
time/epoch (s)  5.4593
time/total (s)  33714.7
Epoch -357
----------------------------------  ---------------
2022-05-10 22:32:36.151071 PDT | [0] Epoch -356 finished
----------------------------------  ---------------
epoch  -356
replay_buffer/size  999996
trainer/num train calls  645000
trainer/Policy Loss  -2.29768
trainer/Log Pis Mean  2.26196
trainer/Log Pis Std  2.59779
trainer/Log Pis Max  8.88765
trainer/Log Pis Min  -5.76714
trainer/policy/mean Mean  0.140706
trainer/policy/mean Std  0.620747
trainer/policy/mean Max  0.997489
trainer/policy/mean Min  -0.997637
trainer/policy/normal/std Mean  0.383388
trainer/policy/normal/std Std  0.190542
trainer/policy/normal/std Max  0.943859
trainer/policy/normal/std Min  0.0619509
trainer/policy/normal/log_std Mean  -1.11553
trainer/policy/normal/log_std Std  0.603422
trainer/policy/normal/log_std Max  -0.0577785
trainer/policy/normal/log_std Min  -2.78141
eval/num steps total  429234
eval/num paths total  836
eval/path length Mean  895
eval/path length Std  0
eval/path length Max  895
eval/path length Min  895
eval/Rewards Mean  3.23232
eval/Rewards Std  0.628804
eval/Rewards Max  4.86116
eval/Rewards Min  0.986683
eval/Returns Mean  2892.93
eval/Returns Std  0
eval/Returns Max  2892.93
eval/Returns Min  2892.93
eval/Actions Mean  0.153939
eval/Actions Std  0.610261
eval/Actions Max  0.997824
eval/Actions Min  -0.998833
eval/Num Paths  1
eval/Average Returns  2892.93
eval/normalized_score  89.5109
time/evaluation sampling (s)  0.910918
time/logging (s)  0.00353646
time/sampling batch (s)  0.270459
time/saving (s)  0.00312482
time/training (s)  4.21296
time/epoch (s)  5.401
time/total (s)  33720.1
Epoch -356
----------------------------------  ---------------
2022-05-10 22:32:41.616376 PDT | [0] Epoch -355 finished
----------------------------------  ---------------
epoch  -355
replay_buffer/size  999996
trainer/num train calls  646000
trainer/Policy Loss  -2.06071
trainer/Log Pis Mean  2.14151
trainer/Log Pis Std  2.54536
trainer/Log Pis Max  9.99551
trainer/Log Pis Min  -4.33824
trainer/policy/mean Mean  0.131366
trainer/policy/mean Std  0.611962
trainer/policy/mean Max  0.997444
trainer/policy/mean Min  -0.99781
trainer/policy/normal/std Mean  0.387289
trainer/policy/normal/std Std  0.192029
trainer/policy/normal/std Max  1.05544
trainer/policy/normal/std Min  0.0670489
trainer/policy/normal/log_std Mean  -1.10319
trainer/policy/normal/log_std Std  0.597491
trainer/policy/normal/log_std Max  0.0539572
trainer/policy/normal/log_std Min  -2.70233
eval/num steps total  429800
eval/num paths total  837
eval/path length Mean  566
eval/path length Std  0
eval/path length Max  566
eval/path length Min  566
eval/Rewards Mean  3.17422
eval/Rewards Std  0.811671
eval/Rewards Max  4.75913
eval/Rewards Min  0.985579
eval/Returns Mean  1796.61
eval/Returns Std  0
eval/Returns Max  1796.61
eval/Returns Min  1796.61
eval/Actions Mean  0.162644
eval/Actions Std  0.600808
eval/Actions Max  0.998497
eval/Actions Min  -0.998363
eval/Num Paths  1
eval/Average Returns  1796.61
eval/normalized_score  55.8255
time/evaluation sampling (s)  0.925036
time/logging (s)  0.00256021
time/sampling batch (s)  0.270742
time/saving (s)  0.00319686
time/training (s)  4.23953
time/epoch (s)  5.44106
time/total (s)  33725.5
Epoch -355
----------------------------------  ---------------
2022-05-10 22:32:47.067467 PDT | [0] Epoch -354 finished
----------------------------------  ---------------
epoch  -354
replay_buffer/size  999996
trainer/num train calls  647000
trainer/Policy Loss  -2.34978
trainer/Log Pis Mean  2.33909
trainer/Log Pis Std  2.70513
trainer/Log Pis Max  13.5197
trainer/Log Pis Min  -4.42079
trainer/policy/mean Mean  0.140759
trainer/policy/mean Std  0.614856
trainer/policy/mean Max  0.998438
trainer/policy/mean Min  -0.999085
trainer/policy/normal/std Mean  0.373244
trainer/policy/normal/std Std  0.187025
trainer/policy/normal/std Max  1.12282
trainer/policy/normal/std Min  0.0586688
trainer/policy/normal/log_std Mean  -1.14514
trainer/policy/normal/log_std Std  0.609217
trainer/policy/normal/log_std Max  0.115841
trainer/policy/normal/log_std Min  -2.83585
eval/num steps total  430486
eval/num paths total  838
eval/path length Mean  686
eval/path length Std  0
eval/path length Max  686
eval/path length Min  686
eval/Rewards Mean  3.23685
eval/Rewards Std  0.752426
eval/Rewards Max  5.51524
eval/Rewards Min  0.980993
eval/Returns Mean  2220.48
eval/Returns Std  0
eval/Returns Max  2220.48
eval/Returns Min  2220.48
eval/Actions Mean  0.150047
eval/Actions Std  0.599171
eval/Actions Max  0.997946
eval/Actions Min  -0.9989
eval/Num Paths  1
eval/Average Returns  2220.48
eval/normalized_score  68.8494
time/evaluation sampling (s)  0.913512
time/logging (s)  0.00281966
time/sampling batch (s)  0.271502
time/saving (s)  0.00310284
time/training (s)  4.23705
time/epoch (s)  5.42798
time/total (s)  33730.9
Epoch -354
----------------------------------  ---------------
2022-05-10 22:32:52.494603 PDT | [0] Epoch -353 finished
----------------------------------  ---------------
epoch  -353
replay_buffer/size  999996
trainer/num train calls  648000
trainer/Policy Loss  -2.19987
trainer/Log Pis Mean  2.30699
trainer/Log Pis Std  2.6275
trainer/Log Pis Max  10.2483
trainer/Log Pis Min  -8.00018
trainer/policy/mean Mean  0.148389
trainer/policy/mean Std  0.609411
trainer/policy/mean Max  0.998157
trainer/policy/mean Min  -0.998417
trainer/policy/normal/std Mean  0.372953
trainer/policy/normal/std Std  0.181703
trainer/policy/normal/std Max  0.962022
trainer/policy/normal/std Min  0.066652
trainer/policy/normal/log_std Mean  -1.13849
trainer/policy/normal/log_std Std  0.594844
trainer/policy/normal/log_std Max  -0.0387182
trainer/policy/normal/log_std Min  -2.70827
eval/num steps total  431139
eval/num paths total  839
eval/path length Mean  653
eval/path length Std  0
eval/path length Max  653
eval/path length Min  653
eval/Rewards Mean  3.25712
eval/Rewards Std  0.743305
eval/Rewards Max  4.81318
eval/Rewards Min  0.978562
eval/Returns Mean  2126.9
eval/Returns Std  0
eval/Returns Max  2126.9
eval/Returns Min  2126.9
eval/Actions Mean  0.147088
eval/Actions Std  0.610493
eval/Actions Max  0.997635
eval/Actions Min  -0.998539
eval/Num Paths  1
eval/Average Returns  2126.9
eval/normalized_score  65.9739
time/evaluation sampling (s)  0.894597
time/logging (s)  0.00280069
time/sampling batch (s)  0.271553
time/saving (s)  0.00306803
time/training (s)  4.23216
time/epoch (s)  5.40417
time/total (s)  33736.4
Epoch -353
----------------------------------  ---------------
2022-05-10 22:32:57.927901 PDT | [0] Epoch -352 finished
----------------------------------  ---------------
epoch  -352
replay_buffer/size  999996
trainer/num train calls  649000
trainer/Policy Loss  -2.11336
trainer/Log Pis Mean  2.12873
trainer/Log Pis Std  2.48674
trainer/Log Pis Max  9.65735
trainer/Log Pis Min  -6.4919
trainer/policy/mean Mean  0.125619
trainer/policy/mean Std  0.61135
trainer/policy/mean Max  0.99752
trainer/policy/mean Min  -0.997306
trainer/policy/normal/std Mean  0.371533
trainer/policy/normal/std Std  0.182251
trainer/policy/normal/std Max  0.940439
trainer/policy/normal/std Min  0.0665354
trainer/policy/normal/log_std Mean  -1.14366
trainer/policy/normal/log_std Std  0.596487
trainer/policy/normal/log_std Max  -0.0614089
trainer/policy/normal/log_std Min  -2.71002
eval/num steps total  431581
eval/num paths total  840
eval/path length Mean  442
eval/path length Std  0
eval/path length Max  442
eval/path length Min  442
eval/Rewards Mean  3.1963
eval/Rewards Std  0.904935
eval/Rewards Max  5.49089
eval/Rewards Min  0.981925
eval/Returns Mean  1412.76
eval/Returns Std  0
eval/Returns Max  1412.76
eval/Returns Min  1412.76
eval/Actions Mean  0.133071
eval/Actions Std  0.580054
eval/Actions Max  0.998558
eval/Actions Min  -0.99766
eval/Num Paths  1
eval/Average Returns  1412.76
eval/normalized_score  44.0315
time/evaluation sampling (s)  0.884762
time/logging (s)  0.00212799
time/sampling batch (s)  0.270987
time/saving (s)  0.00296036
time/training (s)  4.24858
time/epoch (s)  5.40941
time/total (s)  33741.8
Epoch -352
----------------------------------  ---------------
2022-05-10 22:33:03.371638 PDT | [0] Epoch -351 finished
----------------------------------  ---------------
epoch  -351
replay_buffer/size  999996
trainer/num train calls  650000
trainer/Policy Loss  -2.17652
trainer/Log Pis Mean  2.14341
trainer/Log Pis Std  2.59921
trainer/Log Pis Max  10.14
trainer/Log Pis Min  -5.04338
trainer/policy/mean Mean  0.114788
trainer/policy/mean Std  0.609047
trainer/policy/mean Max  0.997957
trainer/policy/mean Min  -0.998793
trainer/policy/normal/std Mean  0.375366
trainer/policy/normal/std Std  0.181599
trainer/policy/normal/std Max  1.01491
trainer/policy/normal/std Min  0.0666291
trainer/policy/normal/log_std Mean  -1.12819
trainer/policy/normal/log_std Std  0.585646
trainer/policy/normal/log_std Max  0.014796
trainer/policy/normal/log_std Min  -2.70861
eval/num steps total  432007
eval/num paths total  841
eval/path length Mean  426
eval/path length Std  0
eval/path length Max  426
eval/path length Min  426
eval/Rewards Mean  3.11403
eval/Rewards Std  0.893868
eval/Rewards Max  5.39944
eval/Rewards Min  0.983048
eval/Returns Mean  1326.58
eval/Returns Std  0
eval/Returns Max  1326.58
eval/Returns Min  1326.58
eval/Actions Mean  0.142604
eval/Actions Std  0.580864
eval/Actions Max  0.996492
eval/Actions Min  -0.999352
eval/Num Paths  1
eval/Average Returns  1326.58
eval/normalized_score  41.3833
time/evaluation sampling (s)  0.888306
time/logging (s)  0.00209342
time/sampling batch (s)  0.271677
time/saving (s)  0.00296828
time/training (s)  4.25566
time/epoch (s)  5.42071
time/total (s)  33747.2
Epoch -351
----------------------------------  ---------------
2022-05-10 22:33:08.814093 PDT | [0] Epoch -350 finished
----------------------------------  ---------------
epoch  -350
replay_buffer/size  999996
trainer/num train calls  651000
trainer/Policy Loss  -2.06573
trainer/Log Pis Mean  2.11228
trainer/Log Pis Std  2.42272
trainer/Log Pis Max  9.82988
trainer/Log Pis Min  -5.86279
trainer/policy/mean Mean  0.162151
trainer/policy/mean Std  0.601435
trainer/policy/mean Max  0.998842
trainer/policy/mean Min  -0.997435
trainer/policy/normal/std Mean  0.380414
trainer/policy/normal/std Std  0.182349
trainer/policy/normal/std Max  0.984842
trainer/policy/normal/std Min  0.064615
trainer/policy/normal/log_std Mean  -1.11399
trainer/policy/normal/log_std Std  0.586207
trainer/policy/normal/log_std Max  -0.0152742
trainer/policy/normal/log_std Min  -2.73931
eval/num steps total  432665
eval/num paths total  842
eval/path length Mean  658
eval/path length Std  0
eval/path length Max  658
eval/path length Min  658
eval/Rewards Mean  3.24325
eval/Rewards Std  0.727566
eval/Rewards Max  4.76954
eval/Rewards Min  0.984682
eval/Returns Mean  2134.06
eval/Returns Std  0
eval/Returns Max  2134.06
eval/Returns Min  2134.06
eval/Actions Mean  0.162318
eval/Actions Std  0.603998
eval/Actions Max  0.998537
eval/Actions Min  -0.998394
eval/Num Paths  1
eval/Average Returns  2134.06
eval/normalized_score  66.1939
time/evaluation sampling (s)  0.906272
time/logging (s)  0.00271735
time/sampling batch (s)  0.271414
time/saving (s)  0.00304238
time/training (s)  4.23658
time/epoch (s)  5.42003
time/total (s)  33752.6
Epoch -350
----------------------------------  ---------------
2022-05-10 22:33:14.259553 PDT | [0] Epoch -349 finished
----------------------------------  ---------------
epoch  -349
replay_buffer/size  999996
trainer/num train calls  652000
trainer/Policy Loss  -2.08247
trainer/Log Pis Mean  2.11224
trainer/Log Pis Std  2.45373
trainer/Log Pis Max  9.51439
trainer/Log Pis Min  -4.7831
trainer/policy/mean Mean  0.130841
trainer/policy/mean Std  0.612511
trainer/policy/mean Max  0.998385
trainer/policy/mean Min  -0.997963
trainer/policy/normal/std Mean  0.375796
trainer/policy/normal/std Std  0.178547
trainer/policy/normal/std Max  0.919769
trainer/policy/normal/std Min  0.068315
trainer/policy/normal/log_std Mean  -1.12322
trainer/policy/normal/log_std Std  0.579085
trainer/policy/normal/log_std Max  -0.0836326
trainer/policy/normal/log_std Min  -2.68363
eval/num steps total  433227
eval/num paths total  843
eval/path length Mean  562
eval/path length Std  0
eval/path length Max  562
eval/path length Min  562
eval/Rewards Mean  3.21347
eval/Rewards Std  0.774508
eval/Rewards Max  4.66721
eval/Rewards Min  0.981707
eval/Returns Mean  1805.97
eval/Returns Std  0
eval/Returns Max  1805.97
eval/Returns Min  1805.97
eval/Actions Mean  0.142959
eval/Actions Std  0.604991
eval/Actions Max  0.99859
eval/Actions Min  -0.998392
eval/Num Paths  1
eval/Average Returns  1805.97
eval/normalized_score  56.1131
time/evaluation sampling (s)  0.898143
time/logging (s)  0.00245359
time/sampling batch (s)  0.27152
time/saving (s)  0.00304102
time/training (s)  4.24688
time/epoch (s)  5.42204
time/total (s)  33758
Epoch -349
----------------------------------  ---------------
2022-05-10 22:33:19.701725 PDT | [0] Epoch -348 finished
----------------------------------  ---------------
epoch  -348
replay_buffer/size  999996
trainer/num train calls  653000
trainer/Policy Loss  -2.16476
trainer/Log Pis Mean  2.23048
trainer/Log Pis Std  2.63866
trainer/Log Pis Max  11.7944
trainer/Log Pis Min  -5.20225
trainer/policy/mean Mean  0.122466
trainer/policy/mean Std  0.615236
trainer/policy/mean Max  0.997482
trainer/policy/mean Min  -0.998561
trainer/policy/normal/std Mean  0.376634
trainer/policy/normal/std Std  0.176987
trainer/policy/normal/std Max  0.878172
trainer/policy/normal/std Min  0.068305
trainer/policy/normal/log_std Mean  -1.12186
trainer/policy/normal/log_std Std  0.584708
trainer/policy/normal/log_std Max  -0.129912
trainer/policy/normal/log_std Min  -2.68377
eval/num steps total  434180
eval/num paths total  845
eval/path length Mean  476.5
eval/path length Std  3.5
eval/path length Max  480
eval/path length Min  473
eval/Rewards Mean  3.16891
eval/Rewards Std  0.856491
eval/Rewards Max  4.7663
eval/Rewards Min  0.979864
eval/Returns Mean  1509.99
eval/Returns Std  8.56874
eval/Returns Max  1518.55
eval/Returns Min  1501.42
eval/Actions Mean  0.154885
eval/Actions Std  0.592527
eval/Actions Max  0.998396
eval/Actions Min  -0.998706
eval/Num Paths  2
eval/Average Returns  1509.99
eval/normalized_score  47.0187
time/evaluation sampling (s)  0.887993
time/logging (s)  0.00353653
time/sampling batch (s)  0.27191
time/saving (s)  0.00299164
time/training (s)  4.25357
time/epoch (s)  5.42001
time/total (s)  33763.5
Epoch -348
----------------------------------  ---------------
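Epoch records like the ones above can be reduced to a per-epoch series (for example, to plot normalized score over training). A minimal sketch, assuming only that each record contains a lowercase `epoch <n>` entry followed later by an `eval/normalized_score <x>` entry; the helper name `extract_scores` is illustrative, not part of the logger:

```python
import re

def extract_scores(log_text):
    """Pair each `epoch <n>` entry with its `eval/normalized_score` value.

    Relies on the whitespace-separated `key value` layout of this log and
    assumes one normalized_score entry per epoch record. The word boundary
    keeps `time/epoch (s)` and the capitalized `Epoch -348 finished` lines
    from matching.
    """
    epochs = [int(m) for m in re.findall(r"\bepoch\s+(-?\d+)", log_text)]
    scores = [float(m) for m in
              re.findall(r"eval/normalized_score\s+(-?[\d.]+)", log_text)]
    return list(zip(epochs, scores))

sample = "epoch -348 eval/Average Returns 1509.99 eval/normalized_score 47.0187"
print(extract_scores(sample))  # [(-348, 47.0187)]
```

Because it matches key/value pairs rather than line positions, the same sketch works whether the log is pretty-printed one metric per line or flattened into run-on lines.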
2022-05-10 22:33:25.141721 PDT | [0] Epoch -347 finished
----------------------------------  ---------------
epoch  -347
replay_buffer/size  999996
trainer/num train calls  654000
trainer/Policy Loss  -2.10021
trainer/Log Pis Mean  2.15335
trainer/Log Pis Std  2.66274
trainer/Log Pis Max  8.75998
trainer/Log Pis Min  -5.43935
trainer/policy/mean Mean  0.129387
trainer/policy/mean Std  0.606004
trainer/policy/mean Max  0.997972
trainer/policy/mean Min  -0.998472
trainer/policy/normal/std Mean  0.380122
trainer/policy/normal/std Std  0.187503
trainer/policy/normal/std Max  0.947461
trainer/policy/normal/std Min  0.0696473
trainer/policy/normal/log_std Mean  -1.12542
trainer/policy/normal/log_std Std  0.608852
trainer/policy/normal/log_std Max  -0.0539697
trainer/policy/normal/log_std Min  -2.66431
eval/num steps total  434772
eval/num paths total  846
eval/path length Mean  592
eval/path length Std  0
eval/path length Max  592
eval/path length Min  592
eval/Rewards Mean  3.20936
eval/Rewards Std  0.74308
eval/Rewards Max  5.07176
eval/Rewards Min  0.982927
eval/Returns Mean  1899.94
eval/Returns Std  0
eval/Returns Max  1899.94
eval/Returns Min  1899.94
eval/Actions Mean  0.15931
eval/Actions Std  0.59513
eval/Actions Max  0.99799
eval/Actions Min  -0.998676
eval/Num Paths  1
eval/Average Returns  1899.94
eval/normalized_score  59.0004
time/evaluation sampling (s)  0.887773
time/logging (s)  0.00252281
time/sampling batch (s)  0.270714
time/saving (s)  0.002963
time/training (s)  4.25186
time/epoch (s)  5.41584
time/total (s)  33768.9
Epoch -347
----------------------------------  ---------------
2022-05-10 22:33:30.680236 PDT | [0] Epoch -346 finished
----------------------------------  ---------------
epoch  -346
replay_buffer/size  999996
trainer/num train calls  655000
trainer/Policy Loss  -2.22713
trainer/Log Pis Mean  2.17374
trainer/Log Pis Std  2.55268
trainer/Log Pis Max  9.84781
trainer/Log Pis Min  -5.51874
trainer/policy/mean Mean  0.133267
trainer/policy/mean Std  0.611452
trainer/policy/mean Max  0.997523
trainer/policy/mean Min  -0.998213
trainer/policy/normal/std Mean  0.384946
trainer/policy/normal/std Std  0.193875
trainer/policy/normal/std Max  1.4836
trainer/policy/normal/std Min  0.0676961
trainer/policy/normal/log_std Mean  -1.11404
trainer/policy/normal/log_std Std  0.608185
trainer/policy/normal/log_std Max  0.394473
trainer/policy/normal/log_std Min  -2.69273
eval/num steps total  435296
eval/num paths total  847
eval/path length Mean  524
eval/path length Std  0
eval/path length Max  524
eval/path length Min  524
eval/Rewards Mean  3.20466
eval/Rewards Std  0.846097
eval/Rewards Max  5.48942
eval/Rewards Min  0.982125
eval/Returns Mean  1679.24
eval/Returns Std  0
eval/Returns Max  1679.24
eval/Returns Min  1679.24
eval/Actions Mean  0.145597
eval/Actions Std  0.588948
eval/Actions Max  0.998263
eval/Actions Min  -0.997672
eval/Num Paths  1
eval/Average Returns  1679.24
eval/normalized_score  52.2192
time/evaluation sampling (s)  0.993559
time/logging (s)  0.00242632
time/sampling batch (s)  0.271744
time/saving (s)  0.00320433
time/training (s)  4.24445
time/epoch (s)  5.51538
time/total (s)  33774.4
Epoch -346
----------------------------------  ---------------
2022-05-10 22:33:36.145333 PDT | [0] Epoch -345 finished
----------------------------------  ---------------
epoch  -345
replay_buffer/size  999996
trainer/num train calls  656000
trainer/Policy Loss  -2.16414
trainer/Log Pis Mean  2.27362
trainer/Log Pis Std  2.57858
trainer/Log Pis Max  9.0005
trainer/Log Pis Min  -4.07616
trainer/policy/mean Mean  0.141786
trainer/policy/mean Std  0.607834
trainer/policy/mean Max  0.998977
trainer/policy/mean Min  -0.998143
trainer/policy/normal/std Mean  0.38214
trainer/policy/normal/std Std  0.19388
trainer/policy/normal/std Max  1.05598
trainer/policy/normal/std Min  0.0688134
trainer/policy/normal/log_std Mean  -1.1281
trainer/policy/normal/log_std Std  0.621742
trainer/policy/normal/log_std Max  0.0544681
trainer/policy/normal/log_std Min  -2.67636
eval/num steps total  436008
eval/num paths total  848
eval/path length Mean  712
eval/path length Std  0
eval/path length Max  712
eval/path length Min  712
eval/Rewards Mean  3.253
eval/Rewards Std  0.767549
eval/Rewards Max  5.41539
eval/Rewards Min  0.983387
eval/Returns Mean  2316.14
eval/Returns Std  0
eval/Returns Max  2316.14
eval/Returns Min  2316.14
eval/Actions Mean  0.160888
eval/Actions Std  0.599945
eval/Actions Max  0.998721
eval/Actions Min  -0.999408
eval/Num Paths  1
eval/Average Returns  2316.14
eval/normalized_score  71.7885
time/evaluation sampling (s)  0.931459
time/logging (s)  0.00293219
time/sampling batch (s)  0.270885
time/saving (s)  0.00322654
time/training (s)  4.23388
time/epoch (s)  5.44238
time/total (s)  33779.8
Epoch -345
----------------------------------  ---------------
2022-05-10 22:33:41.624823 PDT | [0] Epoch -344 finished
----------------------------------  ---------------
epoch  -344
replay_buffer/size  999996
trainer/num train calls  657000
trainer/Policy Loss  -2.05751
trainer/Log Pis Mean  2.19972
trainer/Log Pis Std  2.55732
trainer/Log Pis Max  9.92459
trainer/Log Pis Min  -5.65929
trainer/policy/mean Mean  0.150925
trainer/policy/mean Std  0.614663
trainer/policy/mean Max  0.998213
trainer/policy/mean Min  -0.998615
trainer/policy/normal/std Mean  0.378916
trainer/policy/normal/std Std  0.188539
trainer/policy/normal/std Max  1.01399
trainer/policy/normal/std Min  0.0637339
trainer/policy/normal/log_std Mean  -1.12962
trainer/policy/normal/log_std Std  0.610317
trainer/policy/normal/log_std Max  0.0138945
trainer/policy/normal/log_std Min  -2.75304
eval/num steps total  436676
eval/num paths total  849
eval/path length Mean  668
eval/path length Std  0
eval/path length Max  668
eval/path length Min  668
eval/Rewards Mean  3.20048
eval/Rewards Std  0.673743
eval/Rewards Max  4.56039
eval/Rewards Min  0.985267
eval/Returns Mean  2137.92
eval/Returns Std  0
eval/Returns Max  2137.92
eval/Returns Min  2137.92
eval/Actions Mean  0.169001
eval/Actions Std  0.611991
eval/Actions Max  0.998537
eval/Actions Min  -0.997334
eval/Num Paths  1
eval/Average Returns  2137.92
eval/normalized_score  66.3127
time/evaluation sampling (s)  0.921521
time/logging (s)  0.00280404
time/sampling batch (s)  0.270005
time/saving (s)  0.00309456
time/training (s)  4.2587
time/epoch (s)  5.45613
time/total (s)  33785.3
Epoch -344
----------------------------------  ---------------
2022-05-10 22:33:47.040636 PDT | [0] Epoch -343 finished
----------------------------------  ---------------
epoch  -343
replay_buffer/size  999996
trainer/num train calls  658000
trainer/Policy Loss  -2.40037
trainer/Log Pis Mean  2.48898
trainer/Log Pis Std  2.68219
trainer/Log Pis Max  15.5412
trainer/Log Pis Min  -4.97369
trainer/policy/mean Mean  0.157435
trainer/policy/mean Std  0.635205
trainer/policy/mean Max  0.998496
trainer/policy/mean Min  -0.999412
trainer/policy/normal/std Mean  0.380343
trainer/policy/normal/std Std  0.18394
trainer/policy/normal/std Max  1.24829
trainer/policy/normal/std Min  0.0718182
trainer/policy/normal/log_std Mean  -1.11659
trainer/policy/normal/log_std Std  0.590109
trainer/policy/normal/log_std Max  0.221771
trainer/policy/normal/log_std Min  -2.63362
eval/num steps total  437310
eval/num paths total  850
eval/path length Mean  634
eval/path length Std  0
eval/path length Max  634
eval/path length Min  634
eval/Rewards Mean  3.20854
eval/Rewards Std  0.780412
eval/Rewards Max  4.86733
eval/Rewards Min  0.988467
eval/Returns Mean  2034.21
eval/Returns Std  0
eval/Returns Max  2034.21
eval/Returns Min  2034.21
eval/Actions Mean  0.154122
eval/Actions Std  0.587287
eval/Actions Max  0.998326
eval/Actions Min  -0.998629
eval/Num Paths  1
eval/Average Returns  2034.21
eval/normalized_score  63.1261
time/evaluation sampling (s)  0.894349
time/logging (s)  0.00271642
time/sampling batch (s)  0.270512
time/saving (s)  0.00295862
time/training (s)  4.22214
time/epoch (s)  5.39267
time/total (s)  33790.7
Epoch -343
----------------------------------  ---------------
2022-05-10 22:33:52.512893 PDT | [0] Epoch -342 finished
----------------------------------  ---------------
epoch  -342
replay_buffer/size  999996
trainer/num train calls  659000
trainer/Policy Loss  -2.13751
trainer/Log Pis Mean  2.06215
trainer/Log Pis Std  2.3786
trainer/Log Pis Max  8.72688
trainer/Log Pis Min  -6.72515
trainer/policy/mean Mean  0.138165
trainer/policy/mean Std  0.620249
trainer/policy/mean Max  0.994806
trainer/policy/mean Min  -0.99622
trainer/policy/normal/std Mean  0.371592
trainer/policy/normal/std Std  0.17936
trainer/policy/normal/std Max  0.917808
trainer/policy/normal/std Min  0.0648871
trainer/policy/normal/log_std Mean  -1.14115
trainer/policy/normal/log_std Std  0.595036
trainer/policy/normal/log_std Max  -0.0857673
trainer/policy/normal/log_std Min  -2.73511
eval/num steps total  437860
eval/num paths total  851
eval/path length Mean  550
eval/path length Std  0
eval/path length Max  550
eval/path length Min  550
eval/Rewards Mean  3.22261
eval/Rewards Std  0.809896
eval/Rewards Max  4.97504
eval/Rewards Min  0.984501
eval/Returns Mean  1772.44
eval/Returns Std  0
eval/Returns Max  1772.44
eval/Returns Min  1772.44
eval/Actions Mean  0.145986
eval/Actions Std  0.586478
eval/Actions Max  0.997852
eval/Actions Min  -0.999102
eval/Num Paths  1
eval/Average Returns  1772.44
eval/normalized_score  55.0828
time/evaluation sampling (s)  0.89442
time/logging (s)  0.00250255
time/sampling batch (s)  0.273398
time/saving (s)  0.00314781
time/training (s)  4.2754
time/epoch (s)  5.44887
time/total (s)  33796.2
Epoch -342
----------------------------------  ---------------
2022-05-10 22:33:57.932090 PDT | [0] Epoch -341 finished
----------------------------------  ---------------
epoch  -341
replay_buffer/size  999996
trainer/num train calls  660000
trainer/Policy Loss  -2.31384
trainer/Log Pis Mean  2.15976
trainer/Log Pis Std  2.60256
trainer/Log Pis Max  9.77225
trainer/Log Pis Min  -5.21885
trainer/policy/mean Mean  0.146047
trainer/policy/mean Std  0.619944
trainer/policy/mean Max  0.997283
trainer/policy/mean Min  -0.998209
trainer/policy/normal/std Mean  0.382233
trainer/policy/normal/std Std  0.184259
trainer/policy/normal/std Max  0.980598
trainer/policy/normal/std Min  0.0665333
trainer/policy/normal/log_std Mean  -1.11071
trainer/policy/normal/log_std Std  0.588447
trainer/policy/normal/log_std Max  -0.019593
trainer/policy/normal/log_std Min  -2.71005
eval/num steps total  438344
eval/num paths total  852
eval/path length Mean  484
eval/path length Std  0
eval/path length Max  484
eval/path length Min  484
eval/Rewards Mean  3.20101
eval/Rewards Std  0.817701
eval/Rewards Max  4.76082
eval/Rewards Min  0.980304
eval/Returns Mean  1549.29
eval/Returns Std  0
eval/Returns Max  1549.29
eval/Returns Min  1549.29
eval/Actions Mean  0.151509
eval/Actions Std  0.601826
eval/Actions Max  0.997417
eval/Actions Min  -0.998762
eval/Num Paths  1
eval/Average Returns  1549.29
eval/normalized_score  48.2264
time/evaluation sampling (s)  0.88709
time/logging (s)  0.00222702
time/sampling batch (s)  0.270595
time/saving (s)  0.00298025
time/training (s)  4.23273
time/epoch (s)  5.39562
time/total (s)  33801.6
Epoch -341
----------------------------------  ---------------
2022-05-10 22:34:03.341987 PDT | [0] Epoch -340 finished
----------------------------------  ---------------
epoch  -340
replay_buffer/size  999996
trainer/num train calls  661000
trainer/Policy Loss  -2.07285
trainer/Log Pis Mean  2.11389
trainer/Log Pis Std  2.64807
trainer/Log Pis Max  10.2163
trainer/Log Pis Min  -4.32514
trainer/policy/mean Mean  0.141155
trainer/policy/mean Std  0.6082
trainer/policy/mean Max  0.999182
trainer/policy/mean Min  -0.998687
trainer/policy/normal/std Mean  0.384184
trainer/policy/normal/std Std  0.186167
trainer/policy/normal/std Max  0.930101
trainer/policy/normal/std Min  0.0676709
trainer/policy/normal/log_std Mean  -1.10589
trainer/policy/normal/log_std Std  0.587781
trainer/policy/normal/log_std Max  -0.0724625
trainer/policy/normal/log_std Min  -2.6931
eval/num steps total  438929
eval/num paths total  853
eval/path length Mean  585
eval/path length Std  0
eval/path length Max  585
eval/path length Min  585
eval/Rewards Mean  3.21825
eval/Rewards Std  0.765731
eval/Rewards Max  4.96044
eval/Rewards Min  0.979321
eval/Returns Mean  1882.68
eval/Returns Std  0
eval/Returns Max  1882.68
eval/Returns Min  1882.68
eval/Actions Mean  0.168731
eval/Actions Std  0.601907
eval/Actions Max  0.998761
eval/Actions Min  -0.998464
eval/Num Paths  1
eval/Average Returns  1882.68
eval/normalized_score  58.47
time/evaluation sampling (s)  0.887867
time/logging (s)  0.00246977
time/sampling batch (s)  0.270891
time/saving (s)  0.00292814
time/training (s)  4.2229
time/epoch (s)  5.38706
time/total (s)  33806.9
Epoch -340
----------------------------------  ---------------
2022-05-10 22:34:08.766648 PDT | [0] Epoch -339 finished
----------------------------------  ---------------
epoch  -339
replay_buffer/size  999996
trainer/num train calls  662000
trainer/Policy Loss  -2.18686
trainer/Log Pis Mean  2.16637
trainer/Log Pis Std  2.62037
trainer/Log Pis Max  9.05828
trainer/Log Pis Min  -5.76145
trainer/policy/mean Mean  0.145299
trainer/policy/mean Std  0.610519
trainer/policy/mean Max  0.998111
trainer/policy/mean Min  -0.997211
trainer/policy/normal/std Mean  0.372979
trainer/policy/normal/std Std  0.185406
trainer/policy/normal/std Max  1.02437
trainer/policy/normal/std Min  0.0642473
trainer/policy/normal/log_std Mean  -1.14696
trainer/policy/normal/log_std Std  0.614324
trainer/policy/normal/log_std Max  0.0240781
trainer/policy/normal/log_std Min  -2.74501
eval/num steps total  439617
eval/num paths total  854
eval/path length Mean  688
eval/path length Std  0
eval/path length Max  688
eval/path length Min  688
eval/Rewards Mean  3.1767
eval/Rewards Std  0.715672
eval/Rewards Max  5.332
eval/Rewards Min  0.982285
eval/Returns Mean  2185.57
eval/Returns Std  0
eval/Returns Max  2185.57
eval/Returns Min  2185.57
eval/Actions Mean  0.154042
eval/Actions Std  0.601298
eval/Actions Max  0.998402
eval/Actions Min  -0.998119
eval/Num Paths  1
eval/Average Returns  2185.57
eval/normalized_score  67.7767
time/evaluation sampling (s)  0.895078
time/logging (s)  0.00279691
time/sampling batch (s)  0.271247
time/saving (s)  0.00295213
time/training (s)  4.22995
time/epoch (s)  5.40202
time/total (s)  33812.4
Epoch -339
----------------------------------  ---------------
2022-05-10 22:34:14.229061 PDT | [0] Epoch -338 finished
----------------------------------  ---------------
epoch  -338
replay_buffer/size  999996
trainer/num train calls  663000
trainer/Policy Loss  -2.28967
trainer/Log Pis Mean  2.37278
trainer/Log Pis Std  2.5146
trainer/Log Pis Max  10.2144
trainer/Log Pis Min  -6.01433
trainer/policy/mean Mean  0.157149
trainer/policy/mean Std  0.618973
trainer/policy/mean Max  0.998672
trainer/policy/mean Min  -0.998569
trainer/policy/normal/std Mean  0.373468
trainer/policy/normal/std Std  0.177441
trainer/policy/normal/std Max  0.908791
trainer/policy/normal/std Min  0.0740004
trainer/policy/normal/log_std Mean  -1.1263
trainer/policy/normal/log_std Std  0.569812
trainer/policy/normal/log_std Max  -0.0956397
trainer/policy/normal/log_std Min  -2.60369
eval/num steps total  440134
eval/num paths total  855
eval/path length Mean  517
eval/path length Std  0
eval/path length Max  517
eval/path length Min  517
eval/Rewards Mean  3.1356
eval/Rewards Std  0.783382
eval/Rewards Max  5.21994
eval/Rewards Min  0.980395
eval/Returns Mean  1621.11
eval/Returns Std  0
eval/Returns Max  1621.11
eval/Returns Min  1621.11
eval/Actions Mean  0.16725
eval/Actions Std  0.593966
eval/Actions Max  0.997829
eval/Actions Min  -0.996384
eval/Num Paths  1
eval/Average Returns  1621.11
eval/normalized_score  50.433
time/evaluation sampling (s)  0.891335
time/logging (s)  0.00249471
time/sampling batch (s)  0.272281
time/saving (s)  0.00316323
time/training (s)  4.26975
time/epoch (s)  5.43902
time/total (s)  33817.8
Epoch -338
----------------------------------  ---------------
2022-05-10 22:34:19.668317 PDT | [0] Epoch -337 finished
----------------------------------  ---------------
epoch  -337
replay_buffer/size  999996
trainer/num train calls  664000
trainer/Policy Loss  -2.42474
trainer/Log Pis Mean  2.40822
trainer/Log Pis Std  2.78549
trainer/Log Pis Max  10.3809
trainer/Log Pis Min  -4.18091
trainer/policy/mean Mean  0.169312
trainer/policy/mean Std  0.619192
trainer/policy/mean Max  0.997869
trainer/policy/mean Min  -0.99884
trainer/policy/normal/std Mean  0.380828
trainer/policy/normal/std Std  0.188938
trainer/policy/normal/std Max  0.959181
trainer/policy/normal/std Min  0.0662987
trainer/policy/normal/log_std Mean  -1.12832
trainer/policy/normal/log_std Std  0.620679
trainer/policy/normal/log_std Max  -0.0416755
trainer/policy/normal/log_std Min  -2.71359
eval/num steps total  440658
eval/num paths total  856
eval/path length Mean  524
eval/path length Std  0
eval/path length Max  524
eval/path length Min  524
eval/Rewards Mean  3.181
eval/Rewards Std  0.832044
eval/Rewards Max  5.45666
eval/Rewards Min  0.980256
eval/Returns Mean  1666.85
eval/Returns Std  0
eval/Returns Max  1666.85
eval/Returns Min  1666.85
eval/Actions Mean  0.161658
eval/Actions Std  0.593598
eval/Actions Max  0.998639
eval/Actions Min  -0.997198
eval/Num Paths  1
eval/Average Returns  1666.85
eval/normalized_score  51.8384
time/evaluation sampling (s)  0.889583
time/logging (s)  0.00238568
time/sampling batch (s)  0.271296
time/saving (s)  0.00303458
time/training (s)  4.25007
time/epoch (s)  5.41637
time/total (s)  33823.2
Epoch -337
----------------------------------  ---------------
2022-05-10 22:34:25.094438 PDT | [0] Epoch -336 finished
----------------------------------  ---------------
epoch  -336
replay_buffer/size  999996
trainer/num train calls  665000
trainer/Policy Loss  -2.25997
trainer/Log Pis Mean  2.33438
trainer/Log Pis Std  2.58133
trainer/Log Pis Max  11.5769
trainer/Log Pis Min  -5.863
trainer/policy/mean Mean  0.159002
trainer/policy/mean Std  0.616125
trainer/policy/mean Max  0.997614
trainer/policy/mean Min  -0.99852
trainer/policy/normal/std Mean  0.381238
trainer/policy/normal/std Std  0.18673
trainer/policy/normal/std Max  0.939712
trainer/policy/normal/std Min  0.0707221
trainer/policy/normal/log_std Mean  -1.11835
trainer/policy/normal/log_std Std  0.598423
trainer/policy/normal/log_std Max  -0.0621818
trainer/policy/normal/log_std Min  -2.649
eval/num steps total  441566
eval/num paths total  858
eval/path length Mean  454
eval/path length Std  34
eval/path length Max  488
eval/path length Min  420
eval/Rewards Mean  3.14414
eval/Rewards Std  0.821657
eval/Rewards Max  5.13904
eval/Rewards Min  0.980906
eval/Returns Mean  1427.44
eval/Returns Std  129.011
eval/Returns Max  1556.45
eval/Returns Min  1298.43
eval/Actions Mean  0.157198
eval/Actions Std  0.598153
eval/Actions Max  0.997978
eval/Actions Min  -0.997945
eval/Num Paths  2
eval/Average Returns  1427.44
eval/normalized_score  44.4823
time/evaluation sampling (s)  0.889449
time/logging (s)  0.00364694
time/sampling batch (s)  0.2708
time/saving (s)  0.00323891
time/training (s)  4.23711
time/epoch (s)  5.40425
time/total (s)  33828.6
Epoch -336
----------------------------------  ---------------
2022-05-10 22:34:30.543386 PDT | [0] Epoch -335 finished
----------------------------------  ---------------
epoch  -335
replay_buffer/size  999996
trainer/num train calls  666000
trainer/Policy Loss  -2.36582
trainer/Log Pis Mean  2.31199
trainer/Log Pis Std  2.76369
trainer/Log Pis Max  13.2463
trainer/Log Pis Min  -8.86012
trainer/policy/mean Mean  0.123554
trainer/policy/mean Std  0.625138
trainer/policy/mean Max  0.997325
trainer/policy/mean Min  -0.998867
trainer/policy/normal/std Mean  0.378907
trainer/policy/normal/std Std  0.184072
trainer/policy/normal/std Max  0.94304
trainer/policy/normal/std Min  0.0662869
trainer/policy/normal/log_std Mean  -1.11921
trainer/policy/normal/log_std Std  0.586126
trainer/policy/normal/log_std Max  -0.0586462
trainer/policy/normal/log_std Min  -2.71376
eval/num steps total  442145
eval/num paths total  859
eval/path length Mean  579
eval/path length Std  0
eval/path length Max  579
eval/path length Min  579
eval/Rewards Mean  3.18387
eval/Rewards Std  0.778383
eval/Rewards Max  4.77942
eval/Rewards Min  0.978431
eval/Returns Mean  1843.46
eval/Returns Std  0
eval/Returns Max  1843.46
eval/Returns Min  1843.46
eval/Actions Mean  0.164753
eval/Actions Std  0.599265
eval/Actions Max  0.996982
eval/Actions Min  -0.998369
eval/Num Paths  1
eval/Average Returns  1843.46
eval/normalized_score  57.2651
time/evaluation sampling (s)  0.901846
time/logging (s)  0.00254288
time/sampling batch (s)  0.270074
time/saving (s)  0.00310117
time/training (s)  4.24696
time/epoch (s)  5.42452
time/total (s)  33834
Epoch -335
----------------------------------  ---------------
2022-05-10 22:34:36.024515 PDT | [0] Epoch -334 finished
----------------------------------  ---------------
epoch  -334
replay_buffer/size  999996
trainer/num train calls  667000
trainer/Policy Loss  -2.11641
trainer/Log Pis Mean  2.19465
trainer/Log Pis Std  2.65729
trainer/Log Pis Max  9.95038
trainer/Log Pis Min  -5.32801
trainer/policy/mean Mean  0.122903
trainer/policy/mean Std  0.611827
trainer/policy/mean Max  0.997958
trainer/policy/mean Min  -0.997876
trainer/policy/normal/std Mean  0.377825
trainer/policy/normal/std Std  0.189612
trainer/policy/normal/std Max  1.05972
trainer/policy/normal/std Min  0.068241
trainer/policy/normal/log_std Mean  -1.13126
trainer/policy/normal/log_std Std  0.604044
trainer/policy/normal/log_std Max  0.058003
trainer/policy/normal/log_std Min  -2.68471
eval/num steps total  442721
eval/num paths total  860
eval/path length Mean  576
eval/path length Std  0
eval/path length Max  576
eval/path length Min  576
eval/Rewards Mean  3.13136
eval/Rewards Std  0.715306
eval/Rewards Max  4.56068
eval/Rewards Min  0.976794
eval/Returns Mean  1803.66
eval/Returns Std  0
eval/Returns Max  1803.66
eval/Returns Min  1803.66
eval/Actions Mean  0.150296
eval/Actions Std  0.607091
eval/Actions Max  0.998699
eval/Actions Min  -0.998329
eval/Num Paths  1
eval/Average Returns  1803.66
eval/normalized_score  56.0422
time/evaluation sampling (s)  0.940275
time/logging (s)  0.00253917
time/sampling batch (s)  0.27033
time/saving (s)  0.00303449
time/training (s)  4.24125
time/epoch (s)  5.45743
time/total (s) 33839.5 Epoch -334 ---------------------------------- --------------- 2022-05-10 22:34:41.494116 PDT | [0] Epoch -333 finished ---------------------------------- --------------- epoch -333 replay_buffer/size 999996 trainer/num train calls 668000 trainer/Policy Loss -2.1977 trainer/Log Pis Mean 2.31097 trainer/Log Pis Std 2.58625 trainer/Log Pis Max 12.1351 trainer/Log Pis Min -4.03772 trainer/policy/mean Mean 0.120729 trainer/policy/mean Std 0.62023 trainer/policy/mean Max 0.998456 trainer/policy/mean Min -0.997557 trainer/policy/normal/std Mean 0.376955 trainer/policy/normal/std Std 0.18048 trainer/policy/normal/std Max 0.858624 trainer/policy/normal/std Min 0.0688377 trainer/policy/normal/log_std Mean -1.12366 trainer/policy/normal/log_std Std 0.588213 trainer/policy/normal/log_std Max -0.152424 trainer/policy/normal/log_std Min -2.676 eval/num steps total 443222 eval/num paths total 861 eval/path length Mean 501 eval/path length Std 0 eval/path length Max 501 eval/path length Min 501 eval/Rewards Mean 3.10963 eval/Rewards Std 0.767818 eval/Rewards Max 4.74513 eval/Rewards Min 0.978876 eval/Returns Mean 1557.92 eval/Returns Std 0 eval/Returns Max 1557.92 eval/Returns Min 1557.92 eval/Actions Mean 0.15049 eval/Actions Std 0.587095 eval/Actions Max 0.99716 eval/Actions Min -0.996143 eval/Num Paths 1 eval/Average Returns 1557.92 eval/normalized_score 48.4917 time/evaluation sampling (s) 0.924315 time/logging (s) 0.00231322 time/sampling batch (s) 0.271602 time/saving (s) 0.00302609 time/training (s) 4.24473 time/epoch (s) 5.44599 time/total (s) 33845 Epoch -333 ---------------------------------- --------------- 2022-05-10 22:34:46.982603 PDT | [0] Epoch -332 finished ---------------------------------- --------------- epoch -332 replay_buffer/size 999996 trainer/num train calls 669000 trainer/Policy Loss -2.14505 trainer/Log Pis Mean 2.13136 trainer/Log Pis Std 2.6537 trainer/Log Pis Max 16.0369 trainer/Log Pis Min -5.11178 trainer/policy/mean Mean 
0.138344 trainer/policy/mean Std 0.611709 trainer/policy/mean Max 0.996832 trainer/policy/mean Min -0.998847 trainer/policy/normal/std Mean 0.3817 trainer/policy/normal/std Std 0.187371 trainer/policy/normal/std Max 0.958483 trainer/policy/normal/std Min 0.0733988 trainer/policy/normal/log_std Mean -1.11713 trainer/policy/normal/log_std Std 0.598001 trainer/policy/normal/log_std Max -0.0424039 trainer/policy/normal/log_std Min -2.61185 eval/num steps total 443770 eval/num paths total 862 eval/path length Mean 548 eval/path length Std 0 eval/path length Max 548 eval/path length Min 548 eval/Rewards Mean 3.26257 eval/Rewards Std 0.814868 eval/Rewards Max 4.92828 eval/Rewards Min 0.98753 eval/Returns Mean 1787.89 eval/Returns Std 0 eval/Returns Max 1787.89 eval/Returns Min 1787.89 eval/Actions Mean 0.152859 eval/Actions Std 0.584543 eval/Actions Max 0.997156 eval/Actions Min -0.998807 eval/Num Paths 1 eval/Average Returns 1787.89 eval/normalized_score 55.5575 time/evaluation sampling (s) 0.933906 time/logging (s) 0.00248977 time/sampling batch (s) 0.272041 time/saving (s) 0.00315937 time/training (s) 4.25378 time/epoch (s) 5.46538 time/total (s) 33850.4 Epoch -332 ---------------------------------- --------------- 2022-05-10 22:34:52.439597 PDT | [0] Epoch -331 finished ---------------------------------- --------------- epoch -331 replay_buffer/size 999996 trainer/num train calls 670000 trainer/Policy Loss -2.24424 trainer/Log Pis Mean 2.17008 trainer/Log Pis Std 2.63045 trainer/Log Pis Max 12.1721 trainer/Log Pis Min -7.45751 trainer/policy/mean Mean 0.134174 trainer/policy/mean Std 0.617083 trainer/policy/mean Max 0.99802 trainer/policy/mean Min -0.998412 trainer/policy/normal/std Mean 0.379923 trainer/policy/normal/std Std 0.184104 trainer/policy/normal/std Max 1.01749 trainer/policy/normal/std Min 0.0667761 trainer/policy/normal/log_std Mean -1.12153 trainer/policy/normal/log_std Std 0.601103 trainer/policy/normal/log_std Max 0.0173435 
trainer/policy/normal/log_std Min -2.70641 eval/num steps total 444605 eval/num paths total 863 eval/path length Mean 835 eval/path length Std 0 eval/path length Max 835 eval/path length Min 835 eval/Rewards Mean 3.2301 eval/Rewards Std 0.634926 eval/Rewards Max 4.54597 eval/Rewards Min 0.97969 eval/Returns Mean 2697.14 eval/Returns Std 0 eval/Returns Max 2697.14 eval/Returns Min 2697.14 eval/Actions Mean 0.159161 eval/Actions Std 0.613985 eval/Actions Max 0.997724 eval/Actions Min -0.997744 eval/Num Paths 1 eval/Average Returns 2697.14 eval/normalized_score 83.4951 time/evaluation sampling (s) 0.900135 time/logging (s) 0.00311448 time/sampling batch (s) 0.272499 time/saving (s) 0.00296606 time/training (s) 4.25567 time/epoch (s) 5.43438 time/total (s) 33855.9 Epoch -331 ---------------------------------- --------------- 2022-05-10 22:34:57.877512 PDT | [0] Epoch -330 finished ---------------------------------- --------------- epoch -330 replay_buffer/size 999996 trainer/num train calls 671000 trainer/Policy Loss -2.4253 trainer/Log Pis Mean 2.25981 trainer/Log Pis Std 2.66659 trainer/Log Pis Max 16.156 trainer/Log Pis Min -4.65143 trainer/policy/mean Mean 0.141368 trainer/policy/mean Std 0.622678 trainer/policy/mean Max 0.999091 trainer/policy/mean Min -0.999616 trainer/policy/normal/std Mean 0.382342 trainer/policy/normal/std Std 0.182911 trainer/policy/normal/std Max 0.956676 trainer/policy/normal/std Min 0.0653356 trainer/policy/normal/log_std Mean -1.11114 trainer/policy/normal/log_std Std 0.593451 trainer/policy/normal/log_std Max -0.0442903 trainer/policy/normal/log_std Min -2.72822 eval/num steps total 445182 eval/num paths total 864 eval/path length Mean 577 eval/path length Std 0 eval/path length Max 577 eval/path length Min 577 eval/Rewards Mean 3.14628 eval/Rewards Std 0.717768 eval/Rewards Max 4.63906 eval/Rewards Min 0.985498 eval/Returns Mean 1815.4 eval/Returns Std 0 eval/Returns Max 1815.4 eval/Returns Min 1815.4 eval/Actions Mean 0.15588 
eval/Actions Std 0.608331 eval/Actions Max 0.998608 eval/Actions Min -0.997256 eval/Num Paths 1 eval/Average Returns 1815.4 eval/normalized_score 56.403 time/evaluation sampling (s) 0.891907 time/logging (s) 0.00251058 time/sampling batch (s) 0.272301 time/saving (s) 0.00300231 time/training (s) 4.24463 time/epoch (s) 5.41435 time/total (s) 33861.3 Epoch -330 ---------------------------------- --------------- 2022-05-10 22:35:03.333201 PDT | [0] Epoch -329 finished ---------------------------------- --------------- epoch -329 replay_buffer/size 999996 trainer/num train calls 672000 trainer/Policy Loss -2.4481 trainer/Log Pis Mean 2.47968 trainer/Log Pis Std 2.93145 trainer/Log Pis Max 15.7343 trainer/Log Pis Min -8.10545 trainer/policy/mean Mean 0.168148 trainer/policy/mean Std 0.630318 trainer/policy/mean Max 0.99831 trainer/policy/mean Min -0.99959 trainer/policy/normal/std Mean 0.388203 trainer/policy/normal/std Std 0.188003 trainer/policy/normal/std Max 1.00299 trainer/policy/normal/std Min 0.0708661 trainer/policy/normal/log_std Mean -1.09563 trainer/policy/normal/log_std Std 0.589466 trainer/policy/normal/log_std Max 0.00298994 trainer/policy/normal/log_std Min -2.64696 eval/num steps total 446106 eval/num paths total 866 eval/path length Mean 462 eval/path length Std 42 eval/path length Max 504 eval/path length Min 420 eval/Rewards Mean 3.12297 eval/Rewards Std 0.806787 eval/Rewards Max 5.1634 eval/Rewards Min 0.980188 eval/Returns Mean 1442.81 eval/Returns Std 141.058 eval/Returns Max 1583.87 eval/Returns Min 1301.75 eval/Actions Mean 0.158519 eval/Actions Std 0.592874 eval/Actions Max 0.998949 eval/Actions Min -0.996974 eval/Num Paths 2 eval/Average Returns 1442.81 eval/normalized_score 44.9547 time/evaluation sampling (s) 0.889836 time/logging (s) 0.00347688 time/sampling batch (s) 0.272373 time/saving (s) 0.00301947 time/training (s) 4.26472 time/epoch (s) 5.43342 time/total (s) 33866.7 Epoch -329 ---------------------------------- --------------- 
2022-05-10 22:35:08.758976 PDT | [0] Epoch -328 finished ---------------------------------- --------------- epoch -328 replay_buffer/size 999996 trainer/num train calls 673000 trainer/Policy Loss -2.13613 trainer/Log Pis Mean 1.98938 trainer/Log Pis Std 2.44653 trainer/Log Pis Max 10.0844 trainer/Log Pis Min -4.62568 trainer/policy/mean Mean 0.13641 trainer/policy/mean Std 0.604989 trainer/policy/mean Max 0.996367 trainer/policy/mean Min -0.996856 trainer/policy/normal/std Mean 0.381777 trainer/policy/normal/std Std 0.189444 trainer/policy/normal/std Max 0.995282 trainer/policy/normal/std Min 0.0680447 trainer/policy/normal/log_std Mean -1.11835 trainer/policy/normal/log_std Std 0.599655 trainer/policy/normal/log_std Max -0.00472893 trainer/policy/normal/log_std Min -2.68759 eval/num steps total 446679 eval/num paths total 867 eval/path length Mean 573 eval/path length Std 0 eval/path length Max 573 eval/path length Min 573 eval/Rewards Mean 3.23642 eval/Rewards Std 0.754124 eval/Rewards Max 4.92054 eval/Rewards Min 0.981405 eval/Returns Mean 1854.47 eval/Returns Std 0 eval/Returns Max 1854.47 eval/Returns Min 1854.47 eval/Actions Mean 0.144747 eval/Actions Std 0.599805 eval/Actions Max 0.996274 eval/Actions Min -0.998216 eval/Num Paths 1 eval/Average Returns 1854.47 eval/normalized_score 57.6034 time/evaluation sampling (s) 0.895903 time/logging (s) 0.00256561 time/sampling batch (s) 0.271672 time/saving (s) 0.00304406 time/training (s) 4.2285 time/epoch (s) 5.40169 time/total (s) 33872.1 Epoch -328 ---------------------------------- --------------- 2022-05-10 22:35:14.227706 PDT | [0] Epoch -327 finished ---------------------------------- --------------- epoch -327 replay_buffer/size 999996 trainer/num train calls 674000 trainer/Policy Loss -1.98947 trainer/Log Pis Mean 2.23839 trainer/Log Pis Std 2.50829 trainer/Log Pis Max 8.9675 trainer/Log Pis Min -3.81488 trainer/policy/mean Mean 0.145302 trainer/policy/mean Std 0.60928 trainer/policy/mean Max 0.996986 
trainer/policy/mean Min -0.998304 trainer/policy/normal/std Mean 0.374453 trainer/policy/normal/std Std 0.184492 trainer/policy/normal/std Max 0.951036 trainer/policy/normal/std Min 0.0691889 trainer/policy/normal/log_std Mean -1.13529 trainer/policy/normal/log_std Std 0.593609 trainer/policy/normal/log_std Max -0.0502039 trainer/policy/normal/log_std Min -2.67091 eval/num steps total 447175 eval/num paths total 868 eval/path length Mean 496 eval/path length Std 0 eval/path length Max 496 eval/path length Min 496 eval/Rewards Mean 3.08117 eval/Rewards Std 0.750233 eval/Rewards Max 4.57285 eval/Rewards Min 0.981135 eval/Returns Mean 1528.26 eval/Returns Std 0 eval/Returns Max 1528.26 eval/Returns Min 1528.26 eval/Actions Mean 0.150326 eval/Actions Std 0.596621 eval/Actions Max 0.997763 eval/Actions Min -0.998395 eval/Num Paths 1 eval/Average Returns 1528.26 eval/normalized_score 47.5802 time/evaluation sampling (s) 0.900808 time/logging (s) 0.00225528 time/sampling batch (s) 0.273052 time/saving (s) 0.003041 time/training (s) 4.26619 time/epoch (s) 5.44535 time/total (s) 33877.6 Epoch -327 ---------------------------------- --------------- 2022-05-10 22:35:19.728560 PDT | [0] Epoch -326 finished ---------------------------------- --------------- epoch -326 replay_buffer/size 999996 trainer/num train calls 675000 trainer/Policy Loss -2.37078 trainer/Log Pis Mean 2.39354 trainer/Log Pis Std 2.62901 trainer/Log Pis Max 16.0513 trainer/Log Pis Min -3.6891 trainer/policy/mean Mean 0.130251 trainer/policy/mean Std 0.627818 trainer/policy/mean Max 0.996278 trainer/policy/mean Min -0.999587 trainer/policy/normal/std Mean 0.376764 trainer/policy/normal/std Std 0.184063 trainer/policy/normal/std Max 1.00888 trainer/policy/normal/std Min 0.0670644 trainer/policy/normal/log_std Mean -1.12757 trainer/policy/normal/log_std Std 0.592337 trainer/policy/normal/log_std Max 0.00883716 trainer/policy/normal/log_std Min -2.7021 eval/num steps total 447736 eval/num paths total 869 
eval/path length Mean 561 eval/path length Std 0 eval/path length Max 561 eval/path length Min 561 eval/Rewards Mean 3.19365 eval/Rewards Std 0.809631 eval/Rewards Max 4.6701 eval/Rewards Min 0.983026 eval/Returns Mean 1791.64 eval/Returns Std 0 eval/Returns Max 1791.64 eval/Returns Min 1791.64 eval/Actions Mean 0.153604 eval/Actions Std 0.599996 eval/Actions Max 0.997967 eval/Actions Min -0.99872 eval/Num Paths 1 eval/Average Returns 1791.64 eval/normalized_score 55.6728 time/evaluation sampling (s) 0.958441 time/logging (s) 0.00249532 time/sampling batch (s) 0.271776 time/saving (s) 0.00297225 time/training (s) 4.24217 time/epoch (s) 5.47785 time/total (s) 33883.1 Epoch -326 ---------------------------------- --------------- 2022-05-10 22:35:25.146008 PDT | [0] Epoch -325 finished ---------------------------------- --------------- epoch -325 replay_buffer/size 999996 trainer/num train calls 676000 trainer/Policy Loss -2.22806 trainer/Log Pis Mean 2.23761 trainer/Log Pis Std 2.56917 trainer/Log Pis Max 11.6349 trainer/Log Pis Min -3.52389 trainer/policy/mean Mean 0.1423 trainer/policy/mean Std 0.616649 trainer/policy/mean Max 0.994676 trainer/policy/mean Min -0.999336 trainer/policy/normal/std Mean 0.383179 trainer/policy/normal/std Std 0.186098 trainer/policy/normal/std Max 0.94809 trainer/policy/normal/std Min 0.0698869 trainer/policy/normal/log_std Mean -1.10656 trainer/policy/normal/log_std Std 0.581708 trainer/policy/normal/log_std Max -0.0533056 trainer/policy/normal/log_std Min -2.66088 eval/num steps total 448319 eval/num paths total 870 eval/path length Mean 583 eval/path length Std 0 eval/path length Max 583 eval/path length Min 583 eval/Rewards Mean 3.18471 eval/Rewards Std 0.759094 eval/Rewards Max 4.95187 eval/Rewards Min 0.982105 eval/Returns Mean 1856.69 eval/Returns Std 0 eval/Returns Max 1856.69 eval/Returns Min 1856.69 eval/Actions Mean 0.166324 eval/Actions Std 0.59396 eval/Actions Max 0.998492 eval/Actions Min -0.998551 eval/Num Paths 1 
eval/Average Returns 1856.69 eval/normalized_score 57.6714 time/evaluation sampling (s) 0.891858 time/logging (s) 0.00249449 time/sampling batch (s) 0.26964 time/saving (s) 0.00293485 time/training (s) 4.22727 time/epoch (s) 5.3942 time/total (s) 33888.4 Epoch -325 ---------------------------------- --------------- 2022-05-10 22:35:30.701012 PDT | [0] Epoch -324 finished ---------------------------------- --------------- epoch -324 replay_buffer/size 999996 trainer/num train calls 677000 trainer/Policy Loss -2.31945 trainer/Log Pis Mean 2.23932 trainer/Log Pis Std 2.60211 trainer/Log Pis Max 11.6336 trainer/Log Pis Min -4.19853 trainer/policy/mean Mean 0.114007 trainer/policy/mean Std 0.622475 trainer/policy/mean Max 0.997884 trainer/policy/mean Min -0.997083 trainer/policy/normal/std Mean 0.376486 trainer/policy/normal/std Std 0.181855 trainer/policy/normal/std Max 0.890524 trainer/policy/normal/std Min 0.0615697 trainer/policy/normal/log_std Mean -1.12523 trainer/policy/normal/log_std Std 0.58642 trainer/policy/normal/log_std Max -0.115945 trainer/policy/normal/log_std Min -2.78759 eval/num steps total 448812 eval/num paths total 871 eval/path length Mean 493 eval/path length Std 0 eval/path length Max 493 eval/path length Min 493 eval/Rewards Mean 3.08814 eval/Rewards Std 0.7583 eval/Rewards Max 4.52875 eval/Rewards Min 0.981104 eval/Returns Mean 1522.45 eval/Returns Std 0 eval/Returns Max 1522.45 eval/Returns Min 1522.45 eval/Actions Mean 0.15611 eval/Actions Std 0.597456 eval/Actions Max 0.998672 eval/Actions Min -0.997109 eval/Num Paths 1 eval/Average Returns 1522.45 eval/normalized_score 47.4018 time/evaluation sampling (s) 0.948846 time/logging (s) 0.00239736 time/sampling batch (s) 0.265515 time/saving (s) 0.00314951 time/training (s) 4.31224 time/epoch (s) 5.53215 time/total (s) 33894 Epoch -324 ---------------------------------- --------------- 2022-05-10 22:35:36.074629 PDT | [0] Epoch -323 finished ---------------------------------- --------------- 
epoch -323 replay_buffer/size 999996 trainer/num train calls 678000 trainer/Policy Loss -2.3486 trainer/Log Pis Mean 2.19504 trainer/Log Pis Std 2.58307 trainer/Log Pis Max 12.1643 trainer/Log Pis Min -5.2199 trainer/policy/mean Mean 0.138445 trainer/policy/mean Std 0.617684 trainer/policy/mean Max 0.995847 trainer/policy/mean Min -0.998203 trainer/policy/normal/std Mean 0.374044 trainer/policy/normal/std Std 0.177724 trainer/policy/normal/std Max 0.970514 trainer/policy/normal/std Min 0.060838 trainer/policy/normal/log_std Mean -1.12885 trainer/policy/normal/log_std Std 0.582899 trainer/policy/normal/log_std Max -0.0299293 trainer/policy/normal/log_std Min -2.79954 eval/num steps total 449301 eval/num paths total 872 eval/path length Mean 489 eval/path length Std 0 eval/path length Max 489 eval/path length Min 489 eval/Rewards Mean 3.04666 eval/Rewards Std 0.778168 eval/Rewards Max 4.60202 eval/Rewards Min 0.982911 eval/Returns Mean 1489.82 eval/Returns Std 0 eval/Returns Max 1489.82 eval/Returns Min 1489.82 eval/Actions Mean 0.146372 eval/Actions Std 0.579907 eval/Actions Max 0.996366 eval/Actions Min -0.99817 eval/Num Paths 1 eval/Average Returns 1489.82 eval/normalized_score 46.399 time/evaluation sampling (s) 0.891644 time/logging (s) 0.0024426 time/sampling batch (s) 0.267029 time/saving (s) 0.00314622 time/training (s) 4.18642 time/epoch (s) 5.35069 time/total (s) 33899.3 Epoch -323 ---------------------------------- --------------- 2022-05-10 22:35:41.523304 PDT | [0] Epoch -322 finished ---------------------------------- --------------- epoch -322 replay_buffer/size 999996 trainer/num train calls 679000 trainer/Policy Loss -2.40229 trainer/Log Pis Mean 2.46869 trainer/Log Pis Std 2.66899 trainer/Log Pis Max 10.6786 trainer/Log Pis Min -3.67675 trainer/policy/mean Mean 0.125187 trainer/policy/mean Std 0.631188 trainer/policy/mean Max 0.995932 trainer/policy/mean Min -0.998612 trainer/policy/normal/std Mean 0.380621 trainer/policy/normal/std Std 0.182065 
trainer/policy/normal/std Max 1.07008 trainer/policy/normal/std Min 0.0676153 trainer/policy/normal/log_std Mean -1.10951 trainer/policy/normal/log_std Std 0.575767 trainer/policy/normal/log_std Max 0.0677368 trainer/policy/normal/log_std Min -2.69392 eval/num steps total 449870 eval/num paths total 873 eval/path length Mean 569 eval/path length Std 0 eval/path length Max 569 eval/path length Min 569 eval/Rewards Mean 3.16803 eval/Rewards Std 0.81438 eval/Rewards Max 4.80393 eval/Rewards Min 0.981713 eval/Returns Mean 1802.61 eval/Returns Std 0 eval/Returns Max 1802.61 eval/Returns Min 1802.61 eval/Actions Mean 0.160559 eval/Actions Std 0.595051 eval/Actions Max 0.996397 eval/Actions Min -0.999272 eval/Num Paths 1 eval/Average Returns 1802.61 eval/normalized_score 56.0098 time/evaluation sampling (s) 0.911638 time/logging (s) 0.00255057 time/sampling batch (s) 0.269904 time/saving (s) 0.00306823 time/training (s) 4.2383 time/epoch (s) 5.42546 time/total (s) 33904.8 Epoch -322 ---------------------------------- --------------- 2022-05-10 22:35:46.960928 PDT | [0] Epoch -321 finished ---------------------------------- --------------- epoch -321 replay_buffer/size 999996 trainer/num train calls 680000 trainer/Policy Loss -2.17822 trainer/Log Pis Mean 2.03168 trainer/Log Pis Std 2.5261 trainer/Log Pis Max 9.89802 trainer/Log Pis Min -4.82782 trainer/policy/mean Mean 0.127418 trainer/policy/mean Std 0.606898 trainer/policy/mean Max 0.997187 trainer/policy/mean Min -0.997834 trainer/policy/normal/std Mean 0.382835 trainer/policy/normal/std Std 0.182312 trainer/policy/normal/std Max 0.98087 trainer/policy/normal/std Min 0.0693075 trainer/policy/normal/log_std Mean -1.10534 trainer/policy/normal/log_std Std 0.581034 trainer/policy/normal/log_std Max -0.0193155 trainer/policy/normal/log_std Min -2.6692 eval/num steps total 450498 eval/num paths total 874 eval/path length Mean 628 eval/path length Std 0 eval/path length Max 628 eval/path length Min 628 eval/Rewards Mean 
3.21015 eval/Rewards Std 0.769325 eval/Rewards Max 4.97173 eval/Rewards Min 0.980891 eval/Returns Mean 2015.97 eval/Returns Std 0 eval/Returns Max 2015.97 eval/Returns Min 2015.97 eval/Actions Mean 0.138661 eval/Actions Std 0.581794 eval/Actions Max 0.997852 eval/Actions Min -0.997981 eval/Num Paths 1 eval/Average Returns 2015.97 eval/normalized_score 62.5657 time/evaluation sampling (s) 0.910458 time/logging (s) 0.00263088 time/sampling batch (s) 0.269763 time/saving (s) 0.00302724 time/training (s) 4.22833 time/epoch (s) 5.41421 time/total (s) 33910.2 Epoch -321 ---------------------------------- --------------- 2022-05-10 22:35:52.412636 PDT | [0] Epoch -320 finished ---------------------------------- --------------- epoch -320 replay_buffer/size 999996 trainer/num train calls 681000 trainer/Policy Loss -2.16883 trainer/Log Pis Mean 2.1712 trainer/Log Pis Std 2.60848 trainer/Log Pis Max 9.17041 trainer/Log Pis Min -4.39339 trainer/policy/mean Mean 0.118167 trainer/policy/mean Std 0.614498 trainer/policy/mean Max 0.999185 trainer/policy/mean Min -0.998016 trainer/policy/normal/std Mean 0.370225 trainer/policy/normal/std Std 0.180084 trainer/policy/normal/std Max 0.897375 trainer/policy/normal/std Min 0.0666952 trainer/policy/normal/log_std Mean -1.14309 trainer/policy/normal/log_std Std 0.586837 trainer/policy/normal/log_std Max -0.108281 trainer/policy/normal/log_std Min -2.70762 eval/num steps total 451039 eval/num paths total 875 eval/path length Mean 541 eval/path length Std 0 eval/path length Max 541 eval/path length Min 541 eval/Rewards Mean 3.20182 eval/Rewards Std 0.8486 eval/Rewards Max 5.35053 eval/Rewards Min 0.98236 eval/Returns Mean 1732.18 eval/Returns Std 0 eval/Returns Max 1732.18 eval/Returns Min 1732.18 eval/Actions Mean 0.15984 eval/Actions Std 0.58183 eval/Actions Max 0.998004 eval/Actions Min -0.997627 eval/Num Paths 1 eval/Average Returns 1732.18 eval/normalized_score 53.8459 time/evaluation sampling (s) 0.891819 time/logging (s) 0.00244473 
time/sampling batch (s) 0.276249 time/saving (s) 0.00315862 time/training (s) 4.25451 time/epoch (s) 5.42818 time/total (s) 33915.6 Epoch -320 ---------------------------------- --------------- 2022-05-10 22:35:57.937810 PDT | [0] Epoch -319 finished ---------------------------------- --------------- epoch -319 replay_buffer/size 999996 trainer/num train calls 682000 trainer/Policy Loss -2.23498 trainer/Log Pis Mean 2.38892 trainer/Log Pis Std 2.66469 trainer/Log Pis Max 9.57899 trainer/Log Pis Min -5.32391 trainer/policy/mean Mean 0.113596 trainer/policy/mean Std 0.618624 trainer/policy/mean Max 0.998603 trainer/policy/mean Min -0.99904 trainer/policy/normal/std Mean 0.378126 trainer/policy/normal/std Std 0.187308 trainer/policy/normal/std Max 1.13604 trainer/policy/normal/std Min 0.0653803 trainer/policy/normal/log_std Mean -1.12848 trainer/policy/normal/log_std Std 0.601589 trainer/policy/normal/log_std Max 0.127547 trainer/policy/normal/log_std Min -2.72753 eval/num steps total 451617 eval/num paths total 876 eval/path length Mean 578 eval/path length Std 0 eval/path length Max 578 eval/path length Min 578 eval/Rewards Mean 3.17741 eval/Rewards Std 0.788179 eval/Rewards Max 4.83421 eval/Rewards Min 0.983676 eval/Returns Mean 1836.54 eval/Returns Std 0 eval/Returns Max 1836.54 eval/Returns Min 1836.54 eval/Actions Mean 0.165523 eval/Actions Std 0.589443 eval/Actions Max 0.997262 eval/Actions Min -0.999368 eval/Num Paths 1 eval/Average Returns 1836.54 eval/normalized_score 57.0525 time/evaluation sampling (s) 0.897273 time/logging (s) 0.0027645 time/sampling batch (s) 0.277138 time/saving (s) 0.00331917 time/training (s) 4.32147 time/epoch (s) 5.50196 time/total (s) 33921.1 Epoch -319 ---------------------------------- --------------- 2022-05-10 22:36:03.432928 PDT | [0] Epoch -318 finished ---------------------------------- --------------- epoch -318 replay_buffer/size 999996 trainer/num train calls 683000 trainer/Policy Loss -2.05927 trainer/Log Pis Mean 
2.08217 trainer/Log Pis Std 2.62656 trainer/Log Pis Max 18.1043 trainer/Log Pis Min -5.67191 trainer/policy/mean Mean 0.167347 trainer/policy/mean Std 0.602474 trainer/policy/mean Max 0.998262 trainer/policy/mean Min -0.999725 trainer/policy/normal/std Mean 0.387188 trainer/policy/normal/std Std 0.184243 trainer/policy/normal/std Max 1.06058 trainer/policy/normal/std Min 0.072995 trainer/policy/normal/log_std Mean -1.09394 trainer/policy/normal/log_std Std 0.582341 trainer/policy/normal/log_std Max 0.0588179 trainer/policy/normal/log_std Min -2.61736 eval/num steps total 452185 eval/num paths total 877 eval/path length Mean 568 eval/path length Std 0 eval/path length Max 568 eval/path length Min 568 eval/Rewards Mean 3.20892 eval/Rewards Std 0.762961 eval/Rewards Max 4.87898 eval/Rewards Min 0.982764 eval/Returns Mean 1822.67 eval/Returns Std 0 eval/Returns Max 1822.67 eval/Returns Min 1822.67 eval/Actions Mean 0.153164 eval/Actions Std 0.599507 eval/Actions Max 0.997573 eval/Actions Min -0.998084 eval/Num Paths 1 eval/Average Returns 1822.67 eval/normalized_score 56.6262 time/evaluation sampling (s) 0.973826 time/logging (s) 0.00252836 time/sampling batch (s) 0.270048 time/saving (s) 0.0030772 time/training (s) 4.22172 time/epoch (s) 5.4712 time/total (s) 33926.6 Epoch -318 ---------------------------------- --------------- 2022-05-10 22:36:08.843990 PDT | [0] Epoch -317 finished ---------------------------------- --------------- epoch -317 replay_buffer/size 999996 trainer/num train calls 684000 trainer/Policy Loss -2.1109 trainer/Log Pis Mean 2.14001 trainer/Log Pis Std 2.66236 trainer/Log Pis Max 13.764 trainer/Log Pis Min -5.96711 trainer/policy/mean Mean 0.10416 trainer/policy/mean Std 0.610913 trainer/policy/mean Max 0.995313 trainer/policy/mean Min -0.998147 trainer/policy/normal/std Mean 0.373456 trainer/policy/normal/std Std 0.182238 trainer/policy/normal/std Max 0.957218 trainer/policy/normal/std Min 0.0622709 trainer/policy/normal/log_std Mean -1.1391 
trainer/policy/normal/log_std Std 0.601018 trainer/policy/normal/log_std Max -0.0437239 trainer/policy/normal/log_std Min -2.77626 eval/num steps total 452700 eval/num paths total 878 eval/path length Mean 515 eval/path length Std 0 eval/path length Max 515 eval/path length Min 515 eval/Rewards Mean 3.11447 eval/Rewards Std 0.841014 eval/Rewards Max 5.41015 eval/Rewards Min 0.987067 eval/Returns Mean 1603.95 eval/Returns Std 0 eval/Returns Max 1603.95 eval/Returns Min 1603.95 eval/Actions Mean 0.146096 eval/Actions Std 0.57873 eval/Actions Max 0.995486 eval/Actions Min -0.99815 eval/Num Paths 1 eval/Average Returns 1603.95 eval/normalized_score 49.9059 time/evaluation sampling (s) 0.893431 time/logging (s) 0.00237592 time/sampling batch (s) 0.269986 time/saving (s) 0.00296351 time/training (s) 4.21905 time/epoch (s) 5.3878 time/total (s) 33932 Epoch -317 ---------------------------------- --------------- 2022-05-10 22:36:14.301141 PDT | [0] Epoch -316 finished ---------------------------------- --------------- epoch -316 replay_buffer/size 999996 trainer/num train calls 685000 trainer/Policy Loss -1.9792 trainer/Log Pis Mean 2.13576 trainer/Log Pis Std 2.49214 trainer/Log Pis Max 8.20429 trainer/Log Pis Min -7.98031 trainer/policy/mean Mean 0.156404 trainer/policy/mean Std 0.597906 trainer/policy/mean Max 0.998872 trainer/policy/mean Min -0.997871 trainer/policy/normal/std Mean 0.373785 trainer/policy/normal/std Std 0.183916 trainer/policy/normal/std Max 0.931605 trainer/policy/normal/std Min 0.0702573 trainer/policy/normal/log_std Mean -1.13716 trainer/policy/normal/log_std Std 0.594849 trainer/policy/normal/log_std Max -0.0708462 trainer/policy/normal/log_std Min -2.65559 eval/num steps total 453179 eval/num paths total 879 eval/path length Mean 479 eval/path length Std 0 eval/path length Max 479 eval/path length Min 479 eval/Rewards Mean 3.18545 eval/Rewards Std 0.841619 eval/Rewards Max 4.7454 eval/Rewards Min 0.980704 eval/Returns Mean 1525.83 eval/Returns Std 
0 eval/Returns Max 1525.83 eval/Returns Min 1525.83 eval/Actions Mean 0.151346 eval/Actions Std 0.599386 eval/Actions Max 0.998523 eval/Actions Min -0.998259 eval/Num Paths 1 eval/Average Returns 1525.83 eval/normalized_score 47.5056 time/evaluation sampling (s) 0.896633 time/logging (s) 0.00225791 time/sampling batch (s) 0.271588 time/saving (s) 0.00303468 time/training (s) 4.26033 time/epoch (s) 5.43385 time/total (s) 33937.4 Epoch -316 ---------------------------------- --------------- 2022-05-10 22:36:19.739741 PDT | [0] Epoch -315 finished ---------------------------------- --------------- epoch -315 replay_buffer/size 999996 trainer/num train calls 686000 trainer/Policy Loss -2.18476 trainer/Log Pis Mean 2.08471 trainer/Log Pis Std 2.56407 trainer/Log Pis Max 9.34356 trainer/Log Pis Min -5.84777 trainer/policy/mean Mean 0.139733 trainer/policy/mean Std 0.611908 trainer/policy/mean Max 0.997595 trainer/policy/mean Min -0.997752 trainer/policy/normal/std Mean 0.384042 trainer/policy/normal/std Std 0.179857 trainer/policy/normal/std Max 0.965533 trainer/policy/normal/std Min 0.0704506 trainer/policy/normal/log_std Mean -1.09889 trainer/policy/normal/log_std Std 0.575704 trainer/policy/normal/log_std Max -0.0350749 trainer/policy/normal/log_std Min -2.65284 eval/num steps total 454080 eval/num paths total 881 eval/path length Mean 450.5 eval/path length Std 38.5 eval/path length Max 489 eval/path length Min 412 eval/Rewards Mean 3.09312 eval/Rewards Std 0.827699 eval/Rewards Max 4.98213 eval/Rewards Min 0.986565 eval/Returns Mean 1393.45 eval/Returns Std 126.692 eval/Returns Max 1520.14 eval/Returns Min 1266.76 eval/Actions Mean 0.154389 eval/Actions Std 0.58116 eval/Actions Max 0.997852 eval/Actions Min -0.998486 eval/Num Paths 2 eval/Average Returns 1393.45 eval/normalized_score 43.438 time/evaluation sampling (s) 0.903527 time/logging (s) 0.00347909 time/sampling batch (s) 0.275254 time/saving (s) 0.00308417 time/training (s) 4.23127 time/epoch (s) 5.41662 
time/total (s) 33942.8 Epoch -315 ---------------------------------- --------------- 2022-05-10 22:36:25.139604 PDT | [0] Epoch -314 finished ---------------------------------- --------------- epoch -314 replay_buffer/size 999996 trainer/num train calls 687000 trainer/Policy Loss -2.18086 trainer/Log Pis Mean 2.18239 trainer/Log Pis Std 2.61686 trainer/Log Pis Max 9.73059 trainer/Log Pis Min -4.6207 trainer/policy/mean Mean 0.135356 trainer/policy/mean Std 0.620432 trainer/policy/mean Max 0.997875 trainer/policy/mean Min -0.997544 trainer/policy/normal/std Mean 0.376563 trainer/policy/normal/std Std 0.182525 trainer/policy/normal/std Max 0.900897 trainer/policy/normal/std Min 0.0718204 trainer/policy/normal/log_std Mean -1.12629 trainer/policy/normal/log_std Std 0.588573 trainer/policy/normal/log_std Max -0.104365 trainer/policy/normal/log_std Min -2.63359 eval/num steps total 455001 eval/num paths total 883 eval/path length Mean 460.5 eval/path length Std 36.5 eval/path length Max 497 eval/path length Min 424 eval/Rewards Mean 3.09122 eval/Rewards Std 0.82166 eval/Rewards Max 5.357 eval/Rewards Min 0.983304 eval/Returns Mean 1423.51 eval/Returns Std 119.725 eval/Returns Max 1543.23 eval/Returns Min 1303.78 eval/Actions Mean 0.150997 eval/Actions Std 0.580277 eval/Actions Max 0.998036 eval/Actions Min -0.997894 eval/Num Paths 2 eval/Average Returns 1423.51 eval/normalized_score 44.3616 time/evaluation sampling (s) 0.892666 time/logging (s) 0.00346161 time/sampling batch (s) 0.269929 time/saving (s) 0.00295506 time/training (s) 4.20779 time/epoch (s) 5.3768 time/total (s) 33948.2 Epoch -314 ---------------------------------- --------------- 2022-05-10 22:36:30.587278 PDT | [0] Epoch -313 finished ---------------------------------- -------------- epoch -313 replay_buffer/size 999996 trainer/num train calls 688000 trainer/Policy Loss -2.44707 trainer/Log Pis Mean 2.36874 trainer/Log Pis Std 2.70489 trainer/Log Pis Max 9.17051 trainer/Log Pis Min -7.13534 
trainer/policy/mean Mean 0.134772 trainer/policy/mean Std 0.627904 trainer/policy/mean Max 0.996166 trainer/policy/mean Min -0.998256 trainer/policy/normal/std Mean 0.368217 trainer/policy/normal/std Std 0.177721 trainer/policy/normal/std Max 0.962479 trainer/policy/normal/std Min 0.0678362 trainer/policy/normal/log_std Mean -1.14825 trainer/policy/normal/log_std Std 0.587862 trainer/policy/normal/log_std Max -0.0382433 trainer/policy/normal/log_std Min -2.69066 eval/num steps total 455408 eval/num paths total 884 eval/path length Mean 407 eval/path length Std 0 eval/path length Max 407 eval/path length Min 407 eval/Rewards Mean 3.06586 eval/Rewards Std 0.785608 eval/Rewards Max 4.69119 eval/Rewards Min 0.984729 eval/Returns Mean 1247.8 eval/Returns Std 0 eval/Returns Max 1247.8 eval/Returns Min 1247.8 eval/Actions Mean 0.143484 eval/Actions Std 0.593058 eval/Actions Max 0.997856 eval/Actions Min -0.996524 eval/Num Paths 1 eval/Average Returns 1247.8 eval/normalized_score 38.9629 time/evaluation sampling (s) 0.91232 time/logging (s) 0.0021126 time/sampling batch (s) 0.2708 time/saving (s) 0.0029359 time/training (s) 4.23502 time/epoch (s) 5.42319 time/total (s) 33953.7 Epoch -313 ---------------------------------- -------------- 2022-05-10 22:36:36.051793 PDT | [0] Epoch -312 finished ---------------------------------- --------------- epoch -312 replay_buffer/size 999996 trainer/num train calls 689000 trainer/Policy Loss -2.22723 trainer/Log Pis Mean 2.31527 trainer/Log Pis Std 2.48208 trainer/Log Pis Max 9.44353 trainer/Log Pis Min -4.40049 trainer/policy/mean Mean 0.136033 trainer/policy/mean Std 0.617367 trainer/policy/mean Max 0.997965 trainer/policy/mean Min -0.998399 trainer/policy/normal/std Mean 0.37454 trainer/policy/normal/std Std 0.182104 trainer/policy/normal/std Max 0.996249 trainer/policy/normal/std Min 0.0669183 trainer/policy/normal/log_std Mean -1.13166 trainer/policy/normal/log_std Std 0.588331 trainer/policy/normal/log_std Max -0.00375827 
trainer/policy/normal/log_std Min -2.70428 eval/num steps total 455925 eval/num paths total 885 eval/path length Mean 517 eval/path length Std 0 eval/path length Max 517 eval/path length Min 517 eval/Rewards Mean 3.16671 eval/Rewards Std 0.794549 eval/Rewards Max 5.2124 eval/Rewards Min 0.978731 eval/Returns Mean 1637.19 eval/Returns Std 0 eval/Returns Max 1637.19 eval/Returns Min 1637.19 eval/Actions Mean 0.162237 eval/Actions Std 0.595318 eval/Actions Max 0.998405 eval/Actions Min -0.995756 eval/Num Paths 1 eval/Average Returns 1637.19 eval/normalized_score 50.9272 time/evaluation sampling (s) 0.937095 time/logging (s) 0.00241687 time/sampling batch (s) 0.270042 time/saving (s) 0.00318064 time/training (s) 4.22867 time/epoch (s) 5.4414 time/total (s) 33959.1 Epoch -312 ---------------------------------- --------------- 2022-05-10 22:36:41.490196 PDT | [0] Epoch -311 finished ---------------------------------- --------------- epoch -311 replay_buffer/size 999996 trainer/num train calls 690000 trainer/Policy Loss -2.04799 trainer/Log Pis Mean 2.14279 trainer/Log Pis Std 2.40904 trainer/Log Pis Max 7.76117 trainer/Log Pis Min -4.83562 trainer/policy/mean Mean 0.146145 trainer/policy/mean Std 0.6085 trainer/policy/mean Max 0.998124 trainer/policy/mean Min -0.997368 trainer/policy/normal/std Mean 0.380996 trainer/policy/normal/std Std 0.185195 trainer/policy/normal/std Max 0.998201 trainer/policy/normal/std Min 0.067667 trainer/policy/normal/log_std Mean -1.11499 trainer/policy/normal/log_std Std 0.590824 trainer/policy/normal/log_std Max -0.00180037 trainer/policy/normal/log_std Min -2.69316 eval/num steps total 456464 eval/num paths total 886 eval/path length Mean 539 eval/path length Std 0 eval/path length Max 539 eval/path length Min 539 eval/Rewards Mean 3.18124 eval/Rewards Std 0.852482 eval/Rewards Max 5.36587 eval/Rewards Min 0.978551 eval/Returns Mean 1714.69 eval/Returns Std 0 eval/Returns Max 1714.69 eval/Returns Min 1714.69 eval/Actions Mean 0.145886 
eval/Actions Std 0.579292 eval/Actions Max 0.997853 eval/Actions Min -0.997865 eval/Num Paths 1 eval/Average Returns 1714.69 eval/normalized_score 53.3084 time/evaluation sampling (s) 0.919366 time/logging (s) 0.00241065 time/sampling batch (s) 0.270325 time/saving (s) 0.00310823 time/training (s) 4.2198 time/epoch (s) 5.41501 time/total (s) 33964.5 Epoch -311 ---------------------------------- --------------- 2022-05-10 22:36:46.899556 PDT | [0] Epoch -310 finished ---------------------------------- --------------- epoch -310 replay_buffer/size 999996 trainer/num train calls 691000 trainer/Policy Loss -2.36794 trainer/Log Pis Mean 2.38571 trainer/Log Pis Std 2.46788 trainer/Log Pis Max 10.341 trainer/Log Pis Min -3.29276 trainer/policy/mean Mean 0.126085 trainer/policy/mean Std 0.619399 trainer/policy/mean Max 0.99862 trainer/policy/mean Min -0.997734 trainer/policy/normal/std Mean 0.370435 trainer/policy/normal/std Std 0.179958 trainer/policy/normal/std Max 0.914129 trainer/policy/normal/std Min 0.0693636 trainer/policy/normal/log_std Mean -1.14505 trainer/policy/normal/log_std Std 0.594766 trainer/policy/normal/log_std Max -0.0897831 trainer/policy/normal/log_std Min -2.66839 eval/num steps total 457042 eval/num paths total 887 eval/path length Mean 578 eval/path length Std 0 eval/path length Max 578 eval/path length Min 578 eval/Rewards Mean 3.21983 eval/Rewards Std 0.746097 eval/Rewards Max 4.89144 eval/Rewards Min 0.977297 eval/Returns Mean 1861.06 eval/Returns Std 0 eval/Returns Max 1861.06 eval/Returns Min 1861.06 eval/Actions Mean 0.154075 eval/Actions Std 0.601972 eval/Actions Max 0.998443 eval/Actions Min -0.997758 eval/Num Paths 1 eval/Average Returns 1861.06 eval/normalized_score 57.8058 time/evaluation sampling (s) 0.91167 time/logging (s) 0.0024224 time/sampling batch (s) 0.268689 time/saving (s) 0.00308096 time/training (s) 4.20006 time/epoch (s) 5.38592 time/total (s) 33969.9 Epoch -310 ---------------------------------- ---------------
2022-05-10 22:36:52.354024 PDT | [0] Epoch -309 finished ---------------------------------- --------------- epoch -309 replay_buffer/size 999996 trainer/num train calls 692000 trainer/Policy Loss -2.25094 trainer/Log Pis Mean 2.22395 trainer/Log Pis Std 2.4674 trainer/Log Pis Max 8.89748 trainer/Log Pis Min -3.92272 trainer/policy/mean Mean 0.121259 trainer/policy/mean Std 0.621362 trainer/policy/mean Max 0.998181 trainer/policy/mean Min -0.998316 trainer/policy/normal/std Mean 0.379248 trainer/policy/normal/std Std 0.177973 trainer/policy/normal/std Max 0.960178 trainer/policy/normal/std Min 0.0635015 trainer/policy/normal/log_std Mean -1.10843 trainer/policy/normal/log_std Std 0.566252 trainer/policy/normal/log_std Max -0.0406367 trainer/policy/normal/log_std Min -2.75669 eval/num steps total 458037 eval/num paths total 889 eval/path length Mean 497.5 eval/path length Std 0.5 eval/path length Max 498 eval/path length Min 497 eval/Rewards Mean 3.0811 eval/Rewards Std 0.759686 eval/Rewards Max 4.73114 eval/Rewards Min 0.976055 eval/Returns Mean 1532.85 eval/Returns Std 6.32607 eval/Returns Max 1539.17 eval/Returns Min 1526.52 eval/Actions Mean 0.143914 eval/Actions Std 0.587812 eval/Actions Max 0.998329 eval/Actions Min -0.996809 eval/Num Paths 2 eval/Average Returns 1532.85 eval/normalized_score 47.7211 time/evaluation sampling (s) 0.884943 time/logging (s) 0.00372953 time/sampling batch (s) 0.271557 time/saving (s) 0.00297327 time/training (s) 4.26879 time/epoch (s) 5.43199 time/total (s) 33975.3 Epoch -309 ---------------------------------- --------------- 2022-05-10 22:36:57.761874 PDT | [0] Epoch -308 finished ---------------------------------- --------------- epoch -308 replay_buffer/size 999996 trainer/num train calls 693000 trainer/Policy Loss -2.19528 trainer/Log Pis Mean 2.16088 trainer/Log Pis Std 2.5342 trainer/Log Pis Max 10.8418 trainer/Log Pis Min -5.66049 trainer/policy/mean Mean 0.145188 trainer/policy/mean Std 0.609768 trainer/policy/mean Max 0.996261
trainer/policy/mean Min -0.998019 trainer/policy/normal/std Mean 0.375164 trainer/policy/normal/std Std 0.181674 trainer/policy/normal/std Max 0.942129 trainer/policy/normal/std Min 0.0654013 trainer/policy/normal/log_std Mean -1.13258 trainer/policy/normal/log_std Std 0.596656 trainer/policy/normal/log_std Max -0.0596131 trainer/policy/normal/log_std Min -2.72721 eval/num steps total 458635 eval/num paths total 890 eval/path length Mean 598 eval/path length Std 0 eval/path length Max 598 eval/path length Min 598 eval/Rewards Mean 3.21091 eval/Rewards Std 0.774555 eval/Rewards Max 5.44257 eval/Rewards Min 0.981989 eval/Returns Mean 1920.13 eval/Returns Std 0 eval/Returns Max 1920.13 eval/Returns Min 1920.13 eval/Actions Mean 0.141693 eval/Actions Std 0.588244 eval/Actions Max 0.997331 eval/Actions Min -0.99769 eval/Num Paths 1 eval/Average Returns 1920.13 eval/normalized_score 59.6207 time/evaluation sampling (s) 0.884076 time/logging (s) 0.00263448 time/sampling batch (s) 0.270087 time/saving (s) 0.00313086 time/training (s) 4.2234 time/epoch (s) 5.38333 time/total (s) 33980.7 Epoch -308 ---------------------------------- --------------- 2022-05-10 22:37:03.294286 PDT | [0] Epoch -307 finished ---------------------------------- --------------- epoch -307 replay_buffer/size 999996 trainer/num train calls 694000 trainer/Policy Loss -2.11515 trainer/Log Pis Mean 2.06511 trainer/Log Pis Std 2.63693 trainer/Log Pis Max 10.3258 trainer/Log Pis Min -6.59055 trainer/policy/mean Mean 0.122994 trainer/policy/mean Std 0.618385 trainer/policy/mean Max 0.997984 trainer/policy/mean Min -0.997846 trainer/policy/normal/std Mean 0.394591 trainer/policy/normal/std Std 0.190911 trainer/policy/normal/std Max 0.966249 trainer/policy/normal/std Min 0.0616844 trainer/policy/normal/log_std Mean -1.08096 trainer/policy/normal/log_std Std 0.594391 trainer/policy/normal/log_std Max -0.034334 trainer/policy/normal/log_std Min -2.78572 eval/num steps total 459180 eval/num paths total 891 
eval/path length Mean 545 eval/path length Std 0 eval/path length Max 545 eval/path length Min 545 eval/Rewards Mean 3.15193 eval/Rewards Std 0.86731 eval/Rewards Max 5.15159 eval/Rewards Min 0.985372 eval/Returns Mean 1717.8 eval/Returns Std 0 eval/Returns Max 1717.8 eval/Returns Min 1717.8 eval/Actions Mean 0.160457 eval/Actions Std 0.577337 eval/Actions Max 0.996756 eval/Actions Min -0.999026 eval/Num Paths 1 eval/Average Returns 1717.8 eval/normalized_score 53.404 time/evaluation sampling (s) 0.966734 time/logging (s) 0.00246476 time/sampling batch (s) 0.271873 time/saving (s) 0.00302777 time/training (s) 4.26481 time/epoch (s) 5.50891 time/total (s) 33986.2 Epoch -307 ---------------------------------- --------------- 2022-05-10 22:37:08.756927 PDT | [0] Epoch -306 finished ---------------------------------- --------------- epoch -306 replay_buffer/size 999996 trainer/num train calls 695000 trainer/Policy Loss -2.39432 trainer/Log Pis Mean 2.30914 trainer/Log Pis Std 2.6508 trainer/Log Pis Max 10.2349 trainer/Log Pis Min -4.81825 trainer/policy/mean Mean 0.156813 trainer/policy/mean Std 0.623565 trainer/policy/mean Max 0.99759 trainer/policy/mean Min -0.998748 trainer/policy/normal/std Mean 0.377151 trainer/policy/normal/std Std 0.18722 trainer/policy/normal/std Max 0.971079 trainer/policy/normal/std Min 0.0670329 trainer/policy/normal/log_std Mean -1.13527 trainer/policy/normal/log_std Std 0.612731 trainer/policy/normal/log_std Max -0.0293474 trainer/policy/normal/log_std Min -2.70257 eval/num steps total 459898 eval/num paths total 892 eval/path length Mean 718 eval/path length Std 0 eval/path length Max 718 eval/path length Min 718 eval/Rewards Mean 3.22156 eval/Rewards Std 0.757844 eval/Rewards Max 4.88705 eval/Rewards Min 0.981325 eval/Returns Mean 2313.08 eval/Returns Std 0 eval/Returns Max 2313.08 eval/Returns Min 2313.08 eval/Actions Mean 0.156212 eval/Actions Std 0.579747 eval/Actions Max 0.997742 eval/Actions Min -0.998403 eval/Num Paths 1 
eval/Average Returns 2313.08 eval/normalized_score 71.6946 time/evaluation sampling (s) 0.890709 time/logging (s) 0.00319072 time/sampling batch (s) 0.271797 time/saving (s) 0.00322615 time/training (s) 4.2713 time/epoch (s) 5.44022 time/total (s) 33991.7 Epoch -306 ---------------------------------- --------------- 2022-05-10 22:37:14.176800 PDT | [0] Epoch -305 finished ---------------------------------- --------------- epoch -305 replay_buffer/size 999996 trainer/num train calls 696000 trainer/Policy Loss -2.34011 trainer/Log Pis Mean 2.43083 trainer/Log Pis Std 2.50635 trainer/Log Pis Max 9.61389 trainer/Log Pis Min -4.81419 trainer/policy/mean Mean 0.116113 trainer/policy/mean Std 0.633953 trainer/policy/mean Max 0.998105 trainer/policy/mean Min -0.998588 trainer/policy/normal/std Mean 0.374597 trainer/policy/normal/std Std 0.17935 trainer/policy/normal/std Max 1.02637 trainer/policy/normal/std Min 0.0718949 trainer/policy/normal/log_std Mean -1.12724 trainer/policy/normal/log_std Std 0.580031 trainer/policy/normal/log_std Max 0.0260245 trainer/policy/normal/log_std Min -2.63255 eval/num steps total 460856 eval/num paths total 894 eval/path length Mean 479 eval/path length Std 66 eval/path length Max 545 eval/path length Min 413 eval/Rewards Mean 3.17281 eval/Rewards Std 0.822343 eval/Rewards Max 5.35176 eval/Rewards Min 0.982569 eval/Returns Mean 1519.78 eval/Returns Std 241.972 eval/Returns Max 1761.75 eval/Returns Min 1277.8 eval/Actions Mean 0.149299 eval/Actions Std 0.587181 eval/Actions Max 0.998368 eval/Actions Min -0.998348 eval/Num Paths 2 eval/Average Returns 1519.78 eval/normalized_score 47.3195 time/evaluation sampling (s) 0.884928 time/logging (s) 0.00374979 time/sampling batch (s) 0.271437 time/saving (s) 0.00301324 time/training (s) 4.23392 time/epoch (s) 5.39705 time/total (s) 33997.1 Epoch -305 ---------------------------------- --------------- 2022-05-10 22:37:19.599475 PDT | [0] Epoch -304 finished ---------------------------------- 
--------------- epoch -304 replay_buffer/size 999996 trainer/num train calls 697000 trainer/Policy Loss -2.25826 trainer/Log Pis Mean 2.15334 trainer/Log Pis Std 2.38694 trainer/Log Pis Max 8.40938 trainer/Log Pis Min -6.10181 trainer/policy/mean Mean 0.154927 trainer/policy/mean Std 0.60727 trainer/policy/mean Max 0.995766 trainer/policy/mean Min -0.997677 trainer/policy/normal/std Mean 0.374982 trainer/policy/normal/std Std 0.174967 trainer/policy/normal/std Max 0.903873 trainer/policy/normal/std Min 0.0693598 trainer/policy/normal/log_std Mean -1.11868 trainer/policy/normal/log_std Std 0.564089 trainer/policy/normal/log_std Max -0.101067 trainer/policy/normal/log_std Min -2.66845 eval/num steps total 461380 eval/num paths total 895 eval/path length Mean 524 eval/path length Std 0 eval/path length Max 524 eval/path length Min 524 eval/Rewards Mean 3.15177 eval/Rewards Std 0.849778 eval/Rewards Max 5.50756 eval/Rewards Min 0.979107 eval/Returns Mean 1651.53 eval/Returns Std 0 eval/Returns Max 1651.53 eval/Returns Min 1651.53 eval/Actions Mean 0.151579 eval/Actions Std 0.584508 eval/Actions Max 0.99771 eval/Actions Min -0.997143 eval/Num Paths 1 eval/Average Returns 1651.53 eval/normalized_score 51.3678 time/evaluation sampling (s) 0.88773 time/logging (s) 0.00251509 time/sampling batch (s) 0.270026 time/saving (s) 0.00339433 time/training (s) 4.23205 time/epoch (s) 5.39571 time/total (s) 34002.5 Epoch -304 ---------------------------------- --------------- 2022-05-10 22:37:25.034387 PDT | [0] Epoch -303 finished ---------------------------------- --------------- epoch -303 replay_buffer/size 999996 trainer/num train calls 698000 trainer/Policy Loss -2.51005 trainer/Log Pis Mean 2.28691 trainer/Log Pis Std 2.58297 trainer/Log Pis Max 14.9915 trainer/Log Pis Min -6.77108 trainer/policy/mean Mean 0.140103 trainer/policy/mean Std 0.625133 trainer/policy/mean Max 0.998211 trainer/policy/mean Min -0.999661 trainer/policy/normal/std Mean 0.370557 
trainer/policy/normal/std Std 0.183003 trainer/policy/normal/std Max 0.956575 trainer/policy/normal/std Min 0.0653576 trainer/policy/normal/log_std Mean -1.14871 trainer/policy/normal/log_std Std 0.601869 trainer/policy/normal/log_std Max -0.0443964 trainer/policy/normal/log_std Min -2.72788 eval/num steps total 462007 eval/num paths total 896 eval/path length Mean 627 eval/path length Std 0 eval/path length Max 627 eval/path length Min 627 eval/Rewards Mean 3.28863 eval/Rewards Std 0.768254 eval/Rewards Max 4.89487 eval/Rewards Min 0.988161 eval/Returns Mean 2061.97 eval/Returns Std 0 eval/Returns Max 2061.97 eval/Returns Min 2061.97 eval/Actions Mean 0.139549 eval/Actions Std 0.589523 eval/Actions Max 0.998771 eval/Actions Min -0.998576 eval/Num Paths 1 eval/Average Returns 2061.97 eval/normalized_score 63.979 time/evaluation sampling (s) 0.894372 time/logging (s) 0.00262033 time/sampling batch (s) 0.27039 time/saving (s) 0.00296247 time/training (s) 4.2413 time/epoch (s) 5.41165 time/total (s) 34007.9 Epoch -303 ---------------------------------- --------------- 2022-05-10 22:37:30.452976 PDT | [0] Epoch -302 finished ---------------------------------- --------------- epoch -302 replay_buffer/size 999996 trainer/num train calls 699000 trainer/Policy Loss -2.40062 trainer/Log Pis Mean 2.17542 trainer/Log Pis Std 2.60762 trainer/Log Pis Max 9.35581 trainer/Log Pis Min -4.77535 trainer/policy/mean Mean 0.158531 trainer/policy/mean Std 0.623261 trainer/policy/mean Max 0.997126 trainer/policy/mean Min -0.998813 trainer/policy/normal/std Mean 0.379583 trainer/policy/normal/std Std 0.184677 trainer/policy/normal/std Max 1.10606 trainer/policy/normal/std Min 0.0680929 trainer/policy/normal/log_std Mean -1.1242 trainer/policy/normal/log_std Std 0.605442 trainer/policy/normal/log_std Max 0.100803 trainer/policy/normal/log_std Min -2.68688 eval/num steps total 462560 eval/num paths total 897 eval/path length Mean 553 eval/path length Std 0 eval/path length Max 553 
eval/path length Min 553 eval/Rewards Mean 3.20098 eval/Rewards Std 0.810037 eval/Rewards Max 4.95033 eval/Rewards Min 0.983489 eval/Returns Mean 1770.14 eval/Returns Std 0 eval/Returns Max 1770.14 eval/Returns Min 1770.14 eval/Actions Mean 0.141993 eval/Actions Std 0.575983 eval/Actions Max 0.997672 eval/Actions Min -0.998692 eval/Num Paths 1 eval/Average Returns 1770.14 eval/normalized_score 55.0122 time/evaluation sampling (s) 0.892094 time/logging (s) 0.00243952 time/sampling batch (s) 0.270497 time/saving (s) 0.00306281 time/training (s) 4.22707 time/epoch (s) 5.39517 time/total (s) 34013.3 Epoch -302 ---------------------------------- --------------- 2022-05-10 22:37:35.913408 PDT | [0] Epoch -301 finished ---------------------------------- --------------- epoch -301 replay_buffer/size 999996 trainer/num train calls 700000 trainer/Policy Loss -2.24647 trainer/Log Pis Mean 2.26944 trainer/Log Pis Std 2.54957 trainer/Log Pis Max 12.0587 trainer/Log Pis Min -7.75399 trainer/policy/mean Mean 0.122521 trainer/policy/mean Std 0.618081 trainer/policy/mean Max 0.997867 trainer/policy/mean Min -0.998803 trainer/policy/normal/std Mean 0.372497 trainer/policy/normal/std Std 0.177529 trainer/policy/normal/std Max 0.964274 trainer/policy/normal/std Min 0.0652623 trainer/policy/normal/log_std Mean -1.13156 trainer/policy/normal/log_std Std 0.577407 trainer/policy/normal/log_std Max -0.0363802 trainer/policy/normal/log_std Min -2.72934 eval/num steps total 463135 eval/num paths total 898 eval/path length Mean 575 eval/path length Std 0 eval/path length Max 575 eval/path length Min 575 eval/Rewards Mean 3.18034 eval/Rewards Std 0.707655 eval/Rewards Max 4.6752 eval/Rewards Min 0.981858 eval/Returns Mean 1828.7 eval/Returns Std 0 eval/Returns Max 1828.7 eval/Returns Min 1828.7 eval/Actions Mean 0.149416 eval/Actions Std 0.607806 eval/Actions Max 0.998631 eval/Actions Min -0.998766 eval/Num Paths 1 eval/Average Returns 1828.7 eval/normalized_score 56.8114
time/evaluation sampling (s) 0.917908 time/logging (s) 0.00255029 time/sampling batch (s) 0.269714 time/saving (s) 0.00587232 time/training (s) 4.24117 time/epoch (s) 5.43722 time/total (s) 34018.7 Epoch -301 ---------------------------------- --------------- 2022-05-10 22:37:41.377651 PDT | [0] Epoch -300 finished ---------------------------------- --------------- epoch -300 replay_buffer/size 999996 trainer/num train calls 701000 trainer/Policy Loss -2.12701 trainer/Log Pis Mean 2.1427 trainer/Log Pis Std 2.50594 trainer/Log Pis Max 11.0549 trainer/Log Pis Min -4.29234 trainer/policy/mean Mean 0.165405 trainer/policy/mean Std 0.61007 trainer/policy/mean Max 0.998443 trainer/policy/mean Min -0.998303 trainer/policy/normal/std Mean 0.387979 trainer/policy/normal/std Std 0.188617 trainer/policy/normal/std Max 1.01302 trainer/policy/normal/std Min 0.0681852 trainer/policy/normal/log_std Mean -1.0994 trainer/policy/normal/log_std Std 0.597323 trainer/policy/normal/log_std Max 0.0129372 trainer/policy/normal/log_std Min -2.68553 eval/num steps total 463621 eval/num paths total 899 eval/path length Mean 486 eval/path length Std 0 eval/path length Max 486 eval/path length Min 486 eval/Rewards Mean 3.21315 eval/Rewards Std 0.814435 eval/Rewards Max 4.84476 eval/Rewards Min 0.977525 eval/Returns Mean 1561.59 eval/Returns Std 0 eval/Returns Max 1561.59 eval/Returns Min 1561.59 eval/Actions Mean 0.149423 eval/Actions Std 0.596452 eval/Actions Max 0.998307 eval/Actions Min -0.998509 eval/Num Paths 1 eval/Average Returns 1561.59 eval/normalized_score 48.6044 time/evaluation sampling (s) 0.927348 time/logging (s) 0.00224915 time/sampling batch (s) 0.270796 time/saving (s) 0.00305221 time/training (s) 4.23694 time/epoch (s) 5.44039 time/total (s) 34024.2 Epoch -300 ---------------------------------- --------------- 2022-05-10 22:37:46.827129 PDT | [0] Epoch -299 finished ---------------------------------- --------------- epoch -299 replay_buffer/size 999996 trainer/num train calls 702000
trainer/Policy Loss -2.10723 trainer/Log Pis Mean 2.10636 trainer/Log Pis Std 2.56827 trainer/Log Pis Max 9.58237 trainer/Log Pis Min -4.83926 trainer/policy/mean Mean 0.142231 trainer/policy/mean Std 0.603685 trainer/policy/mean Max 0.998628 trainer/policy/mean Min -0.998569 trainer/policy/normal/std Mean 0.382466 trainer/policy/normal/std Std 0.183867 trainer/policy/normal/std Max 1.12341 trainer/policy/normal/std Min 0.0690445 trainer/policy/normal/log_std Mean -1.10768 trainer/policy/normal/log_std Std 0.582665 trainer/policy/normal/log_std Max 0.116371 trainer/policy/normal/log_std Min -2.673 eval/num steps total 464586 eval/num paths total 901 eval/path length Mean 482.5 eval/path length Std 1.5 eval/path length Max 484 eval/path length Min 481 eval/Rewards Mean 3.22282 eval/Rewards Std 0.809296 eval/Rewards Max 4.78972 eval/Rewards Min 0.981224 eval/Returns Mean 1555.01 eval/Returns Std 4.40435 eval/Returns Max 1559.41 eval/Returns Min 1550.6 eval/Actions Mean 0.1399 eval/Actions Std 0.598951 eval/Actions Max 0.996947 eval/Actions Min -0.998801 eval/Num Paths 2 eval/Average Returns 1555.01 eval/normalized_score 48.4021 time/evaluation sampling (s) 0.915921 time/logging (s) 0.0036395 time/sampling batch (s) 0.270811 time/saving (s) 0.00320462 time/training (s) 4.23384 time/epoch (s) 5.42742 time/total (s) 34029.6 Epoch -299 ---------------------------------- --------------- 2022-05-10 22:37:52.271141 PDT | [0] Epoch -298 finished ---------------------------------- --------------- epoch -298 replay_buffer/size 999996 trainer/num train calls 703000 trainer/Policy Loss -2.21106 trainer/Log Pis Mean 2.28099 trainer/Log Pis Std 2.59577 trainer/Log Pis Max 14.9393 trainer/Log Pis Min -4.7982 trainer/policy/mean Mean 0.104088 trainer/policy/mean Std 0.616147 trainer/policy/mean Max 0.998952 trainer/policy/mean Min -0.999587 trainer/policy/normal/std Mean 0.378599 trainer/policy/normal/std Std 0.183342 trainer/policy/normal/std Max 1.02079
trainer/policy/normal/std Min 0.0645156 trainer/policy/normal/log_std Mean -1.12066 trainer/policy/normal/log_std Std 0.589501 trainer/policy/normal/log_std Max 0.0205741 trainer/policy/normal/log_std Min -2.74085 eval/num steps total 465063 eval/num paths total 902 eval/path length Mean 477 eval/path length Std 0 eval/path length Max 477 eval/path length Min 477 eval/Rewards Mean 3.09893 eval/Rewards Std 0.83916 eval/Rewards Max 4.76069 eval/Rewards Min 0.98542 eval/Returns Mean 1478.19 eval/Returns Std 0 eval/Returns Max 1478.19 eval/Returns Min 1478.19 eval/Actions Mean 0.138706 eval/Actions Std 0.578297 eval/Actions Max 0.997691 eval/Actions Min -0.998338 eval/Num Paths 1 eval/Average Returns 1478.19 eval/normalized_score 46.0417 time/evaluation sampling (s) 0.882034 time/logging (s) 0.00229048 time/sampling batch (s) 0.271612 time/saving (s) 0.00304849 time/training (s) 4.26017 time/epoch (s) 5.41916 time/total (s) 34035 Epoch -298 ---------------------------------- --------------- 2022-05-10 22:37:58.150269 PDT | [0] Epoch -297 finished ---------------------------------- --------------- epoch -297 replay_buffer/size 999996 trainer/num train calls 704000 trainer/Policy Loss -2.00945 trainer/Log Pis Mean 2.08597 trainer/Log Pis Std 2.4868 trainer/Log Pis Max 14.0993 trainer/Log Pis Min -6.94881 trainer/policy/mean Mean 0.156014 trainer/policy/mean Std 0.607888 trainer/policy/mean Max 0.998072 trainer/policy/mean Min -0.998104 trainer/policy/normal/std Mean 0.380149 trainer/policy/normal/std Std 0.176764 trainer/policy/normal/std Max 0.924317 trainer/policy/normal/std Min 0.0731432 trainer/policy/normal/log_std Mean -1.10801 trainer/policy/normal/log_std Std 0.574572 trainer/policy/normal/log_std Max -0.0787004 trainer/policy/normal/log_std Min -2.61534 eval/num steps total 465949 eval/num paths total 904 eval/path length Mean 443 eval/path length Std 27 eval/path length Max 470 eval/path length Min 416 eval/Rewards Mean 3.16211 eval/Rewards Std 0.841748 eval/Rewards Max 4.98272
eval/Rewards Min 0.986353 eval/Returns Mean 1400.81 eval/Returns Std 116.711 eval/Returns Max 1517.52 eval/Returns Min 1284.1 eval/Actions Mean 0.1496 eval/Actions Std 0.584124 eval/Actions Max 0.998327 eval/Actions Min -0.998418 eval/Num Paths 2 eval/Average Returns 1400.81 eval/normalized_score 43.6643 time/evaluation sampling (s) 0.887341 time/logging (s) 0.00372998 time/sampling batch (s) 0.289835 time/saving (s) 0.00325182 time/training (s) 4.67309 time/epoch (s) 5.85724 time/total (s) 34040.9 Epoch -297 ---------------------------------- --------------- 2022-05-10 22:38:03.570609 PDT | [0] Epoch -296 finished ---------------------------------- --------------- epoch -296 replay_buffer/size 999996 trainer/num train calls 705000 trainer/Policy Loss -2.2238 trainer/Log Pis Mean 2.17287 trainer/Log Pis Std 2.67153 trainer/Log Pis Max 8.98914 trainer/Log Pis Min -5.57296 trainer/policy/mean Mean 0.133992 trainer/policy/mean Std 0.618833 trainer/policy/mean Max 0.997591 trainer/policy/mean Min -0.996785 trainer/policy/normal/std Mean 0.379713 trainer/policy/normal/std Std 0.187777 trainer/policy/normal/std Max 1.0211 trainer/policy/normal/std Min 0.0661415 trainer/policy/normal/log_std Mean -1.12338 trainer/policy/normal/log_std Std 0.599397 trainer/policy/normal/log_std Max 0.0208783 trainer/policy/normal/log_std Min -2.71596 eval/num steps total 466840 eval/num paths total 906 eval/path length Mean 445.5 eval/path length Std 29.5 eval/path length Max 475 eval/path length Min 416 eval/Rewards Mean 3.13528 eval/Rewards Std 0.844478 eval/Rewards Max 4.95731 eval/Rewards Min 0.982607 eval/Returns Mean 1396.77 eval/Returns Std 116.973 eval/Returns Max 1513.74 eval/Returns Min 1279.8 eval/Actions Mean 0.1525 eval/Actions Std 0.596685 eval/Actions Max 0.997309 eval/Actions Min -0.998865 eval/Num Paths 2 eval/Average Returns 1396.77 eval/normalized_score 43.54 time/evaluation sampling (s) 0.886065 time/logging (s) 0.00339858 time/sampling batch (s) 0.2751
time/saving (s) 0.00305277 time/training (s) 4.2292 time/epoch (s) 5.39682 time/total (s) 34046.3 Epoch -296 ---------------------------------- --------------- 2022-05-10 22:38:08.984569 PDT | [0] Epoch -295 finished ---------------------------------- --------------- epoch -295 replay_buffer/size 999996 trainer/num train calls 706000 trainer/Policy Loss -2.1213 trainer/Log Pis Mean 2.26714 trainer/Log Pis Std 2.57607 trainer/Log Pis Max 9.56874 trainer/Log Pis Min -5.33096 trainer/policy/mean Mean 0.153547 trainer/policy/mean Std 0.606563 trainer/policy/mean Max 0.998521 trainer/policy/mean Min -0.99716 trainer/policy/normal/std Mean 0.383047 trainer/policy/normal/std Std 0.191133 trainer/policy/normal/std Max 1.17508 trainer/policy/normal/std Min 0.0597712 trainer/policy/normal/log_std Mean -1.11764 trainer/policy/normal/log_std Std 0.605681 trainer/policy/normal/log_std Max 0.161339 trainer/policy/normal/log_std Min -2.81723 eval/num steps total 467379 eval/num paths total 907 eval/path length Mean 539 eval/path length Std 0 eval/path length Max 539 eval/path length Min 539 eval/Rewards Mean 3.22431 eval/Rewards Std 0.85625 eval/Rewards Max 5.51177 eval/Rewards Min 0.986571 eval/Returns Mean 1737.9 eval/Returns Std 0 eval/Returns Max 1737.9 eval/Returns Min 1737.9 eval/Actions Mean 0.15849 eval/Actions Std 0.585724 eval/Actions Max 0.998829 eval/Actions Min -0.998128 eval/Num Paths 1 eval/Average Returns 1737.9 eval/normalized_score 54.0216 time/evaluation sampling (s) 0.888512 time/logging (s) 0.0023348 time/sampling batch (s) 0.269772 time/saving (s) 0.00301571 time/training (s) 4.22594 time/epoch (s) 5.38957 time/total (s) 34051.7 Epoch -295 ---------------------------------- --------------- 2022-05-10 22:38:14.482534 PDT | [0] Epoch -294 finished ---------------------------------- --------------- epoch -294 replay_buffer/size 999996 trainer/num train calls 707000 trainer/Policy Loss -2.47309 trainer/Log Pis Mean 2.29689 trainer/Log Pis Std 2.4869 trainer/Log Pis Max 8.93391
trainer/Log Pis Min -3.24645 trainer/policy/mean Mean 0.12952 trainer/policy/mean Std 0.623879 trainer/policy/mean Max 0.998478 trainer/policy/mean Min -0.996842 trainer/policy/normal/std Mean 0.378387 trainer/policy/normal/std Std 0.18652 trainer/policy/normal/std Max 0.968162 trainer/policy/normal/std Min 0.0686732 trainer/policy/normal/log_std Mean -1.1267 trainer/policy/normal/log_std Std 0.59916 trainer/policy/normal/log_std Max -0.0323563 trainer/policy/normal/log_std Min -2.6784 eval/num steps total 467882 eval/num paths total 908 eval/path length Mean 503 eval/path length Std 0 eval/path length Max 503 eval/path length Min 503 eval/Rewards Mean 3.12875 eval/Rewards Std 0.775535 eval/Rewards Max 4.76707 eval/Rewards Min 0.982727 eval/Returns Mean 1573.76 eval/Returns Std 0 eval/Returns Max 1573.76 eval/Returns Min 1573.76 eval/Actions Mean 0.155269 eval/Actions Std 0.590545 eval/Actions Max 0.998005 eval/Actions Min -0.998768 eval/Num Paths 1 eval/Average Returns 1573.76 eval/normalized_score 48.9782 time/evaluation sampling (s) 0.89489 time/logging (s) 0.00236577 time/sampling batch (s) 0.269827 time/saving (s) 0.00313832 time/training (s) 4.30461 time/epoch (s) 5.47483 time/total (s) 34057.2 Epoch -294 ---------------------------------- --------------- 2022-05-10 22:38:20.019525 PDT | [0] Epoch -293 finished ---------------------------------- --------------- epoch -293 replay_buffer/size 999996 trainer/num train calls 708000 trainer/Policy Loss -2.31376 trainer/Log Pis Mean 2.14774 trainer/Log Pis Std 2.83462 trainer/Log Pis Max 10.5505 trainer/Log Pis Min -8.587 trainer/policy/mean Mean 0.131514 trainer/policy/mean Std 0.61422 trainer/policy/mean Max 0.997374 trainer/policy/mean Min -0.997345 trainer/policy/normal/std Mean 0.375288 trainer/policy/normal/std Std 0.18309 trainer/policy/normal/std Max 0.952769 trainer/policy/normal/std Min 0.0659959 trainer/policy/normal/log_std Mean -1.13095 trainer/policy/normal/log_std Std 0.591016 
trainer/policy/normal/log_std Max -0.0483826 trainer/policy/normal/log_std Min -2.71816 eval/num steps total 468516 eval/num paths total 909 eval/path length Mean 634 eval/path length Std 0 eval/path length Max 634 eval/path length Min 634 eval/Rewards Mean 3.20417 eval/Rewards Std 0.765674 eval/Rewards Max 4.65769 eval/Rewards Min 0.984218 eval/Returns Mean 2031.44 eval/Returns Std 0 eval/Returns Max 2031.44 eval/Returns Min 2031.44 eval/Actions Mean 0.146178 eval/Actions Std 0.59844 eval/Actions Max 0.99777 eval/Actions Min -0.998356 eval/Num Paths 1 eval/Average Returns 2031.44 eval/normalized_score 63.0409 time/evaluation sampling (s) 0.927918 time/logging (s) 0.00268967 time/sampling batch (s) 0.270069 time/saving (s) 0.00300405 time/training (s) 4.31033 time/epoch (s) 5.51401 time/total (s) 34062.7 Epoch -293 ---------------------------------- --------------- 2022-05-10 22:38:25.420901 PDT | [0] Epoch -292 finished ---------------------------------- --------------- epoch -292 replay_buffer/size 999996 trainer/num train calls 709000 trainer/Policy Loss -2.28481 trainer/Log Pis Mean 2.22484 trainer/Log Pis Std 2.47003 trainer/Log Pis Max 9.14638 trainer/Log Pis Min -3.83103 trainer/policy/mean Mean 0.12832 trainer/policy/mean Std 0.62156 trainer/policy/mean Max 0.99759 trainer/policy/mean Min -0.998411 trainer/policy/normal/std Mean 0.391966 trainer/policy/normal/std Std 0.187309 trainer/policy/normal/std Max 0.967803 trainer/policy/normal/std Min 0.0734817 trainer/policy/normal/log_std Mean -1.08253 trainer/policy/normal/log_std Std 0.581876 trainer/policy/normal/log_std Max -0.0327265 trainer/policy/normal/log_std Min -2.61072 eval/num steps total 469049 eval/num paths total 910 eval/path length Mean 533 eval/path length Std 0 eval/path length Max 533 eval/path length Min 533 eval/Rewards Mean 3.21909 eval/Rewards Std 0.855664 eval/Rewards Max 5.35374 eval/Rewards Min 0.988284 eval/Returns Mean 1715.77 eval/Returns Std 0 eval/Returns Max 1715.77
eval/Returns Min 1715.77 eval/Actions Mean 0.144175 eval/Actions Std 0.586314 eval/Actions Max 0.99868 eval/Actions Min -0.99864 eval/Num Paths 1 eval/Average Returns 1715.77 eval/normalized_score 53.3417 time/evaluation sampling (s) 0.88871 time/logging (s) 0.00233827 time/sampling batch (s) 0.269233 time/saving (s) 0.00299284 time/training (s) 4.21463 time/epoch (s) 5.37791 time/total (s) 34068.1 Epoch -292 ---------------------------------- --------------- 2022-05-10 22:38:30.878073 PDT | [0] Epoch -291 finished ---------------------------------- --------------- epoch -291 replay_buffer/size 999996 trainer/num train calls 710000 trainer/Policy Loss -2.39634 trainer/Log Pis Mean 2.17933 trainer/Log Pis Std 2.44249 trainer/Log Pis Max 8.72014 trainer/Log Pis Min -6.96183 trainer/policy/mean Mean 0.139862 trainer/policy/mean Std 0.618453 trainer/policy/mean Max 0.997728 trainer/policy/mean Min -0.998675 trainer/policy/normal/std Mean 0.382445 trainer/policy/normal/std Std 0.183428 trainer/policy/normal/std Max 1.09264 trainer/policy/normal/std Min 0.0687908 trainer/policy/normal/log_std Mean -1.10952 trainer/policy/normal/log_std Std 0.589186 trainer/policy/normal/log_std Max 0.0885997 trainer/policy/normal/log_std Min -2.67669 eval/num steps total 469594 eval/num paths total 911 eval/path length Mean 545 eval/path length Std 0 eval/path length Max 545 eval/path length Min 545 eval/Rewards Mean 3.1516 eval/Rewards Std 0.826838 eval/Rewards Max 4.92507 eval/Rewards Min 0.986033 eval/Returns Mean 1717.62 eval/Returns Std 0 eval/Returns Max 1717.62 eval/Returns Min 1717.62 eval/Actions Mean 0.145543 eval/Actions Std 0.573805 eval/Actions Max 0.996809 eval/Actions Min -0.998606 eval/Num Paths 1 eval/Average Returns 1717.62 eval/normalized_score 53.3985 time/evaluation sampling (s) 0.915958 time/logging (s) 0.00251336 time/sampling batch (s) 0.269662 time/saving (s) 0.00311694 time/training (s) 4.24285 time/epoch (s) 5.4341 time/total (s) 34073.5 Epoch -291
---------------------------------- --------------- 2022-05-10 22:38:36.329536 PDT | [0] Epoch -290 finished ---------------------------------- --------------- epoch -290 replay_buffer/size 999996 trainer/num train calls 711000 trainer/Policy Loss -2.51869 trainer/Log Pis Mean 2.48652 trainer/Log Pis Std 2.73427 trainer/Log Pis Max 11.4092 trainer/Log Pis Min -6.89972 trainer/policy/mean Mean 0.110509 trainer/policy/mean Std 0.637197 trainer/policy/mean Max 0.997933 trainer/policy/mean Min -0.9974 trainer/policy/normal/std Mean 0.377738 trainer/policy/normal/std Std 0.183727 trainer/policy/normal/std Max 0.938777 trainer/policy/normal/std Min 0.0684308 trainer/policy/normal/log_std Mean -1.12622 trainer/policy/normal/log_std Std 0.596206 trainer/policy/normal/log_std Max -0.0631769 trainer/policy/normal/log_std Min -2.68193 eval/num steps total 470563 eval/num paths total 913 eval/path length Mean 484.5 eval/path length Std 14.5 eval/path length Max 499 eval/path length Min 470 eval/Rewards Mean 3.11017 eval/Rewards Std 0.829625 eval/Rewards Max 4.81705 eval/Rewards Min 0.979115 eval/Returns Mean 1506.88 eval/Returns Std 25.7155 eval/Returns Max 1532.59 eval/Returns Min 1481.16 eval/Actions Mean 0.147326 eval/Actions Std 0.573611 eval/Actions Max 0.998273 eval/Actions Min -0.999288 eval/Num Paths 2 eval/Average Returns 1506.88 eval/normalized_score 46.9232 time/evaluation sampling (s) 0.925985 time/logging (s) 0.00366087 time/sampling batch (s) 0.269065 time/saving (s) 0.00307402 time/training (s) 4.22759 time/epoch (s) 5.42938 time/total (s) 34078.9 Epoch -290 ---------------------------------- --------------- 2022-05-10 22:38:41.761076 PDT | [0] Epoch -289 finished ---------------------------------- --------------- epoch -289 replay_buffer/size 999996 trainer/num train calls 712000 trainer/Policy Loss -2.25148 trainer/Log Pis Mean 2.34648 trainer/Log Pis Std 2.57459 trainer/Log Pis Max 10.401 trainer/Log Pis Min -3.57841 trainer/policy/mean Mean 0.137345 
trainer/policy/mean Std 0.62448 trainer/policy/mean Max 0.997483 trainer/policy/mean Min -0.998244 trainer/policy/normal/std Mean 0.38113 trainer/policy/normal/std Std 0.18633 trainer/policy/normal/std Max 0.932731 trainer/policy/normal/std Min 0.0660793 trainer/policy/normal/log_std Mean -1.12071 trainer/policy/normal/log_std Std 0.606186 trainer/policy/normal/log_std Max -0.0696387 trainer/policy/normal/log_std Min -2.7169 eval/num steps total 471506 eval/num paths total 915 eval/path length Mean 471.5 eval/path length Std 12.5 eval/path length Max 484 eval/path length Min 459 eval/Rewards Mean 3.20381 eval/Rewards Std 0.857756 eval/Rewards Max 5.14243 eval/Rewards Min 0.983557 eval/Returns Mean 1510.59 eval/Returns Std 35.7698 eval/Returns Max 1546.36 eval/Returns Min 1474.82 eval/Actions Mean 0.150396 eval/Actions Std 0.589163 eval/Actions Max 0.997924 eval/Actions Min -0.998337 eval/Num Paths 2 eval/Average Returns 1510.59 eval/normalized_score 47.0374 time/evaluation sampling (s) 0.890553 time/logging (s) 0.00359876 time/sampling batch (s) 0.270811 time/saving (s) 0.00300575 time/training (s) 4.24001 time/epoch (s) 5.40798 time/total (s) 34084.3 Epoch -289 ---------------------------------- --------------- 2022-05-10 22:38:47.238624 PDT | [0] Epoch -288 finished ---------------------------------- --------------- epoch -288 replay_buffer/size 999996 trainer/num train calls 713000 trainer/Policy Loss -2.0948 trainer/Log Pis Mean 2.29214 trainer/Log Pis Std 2.64259 trainer/Log Pis Max 9.55878 trainer/Log Pis Min -5.9573 trainer/policy/mean Mean 0.154514 trainer/policy/mean Std 0.620239 trainer/policy/mean Max 0.998151 trainer/policy/mean Min -0.99763 trainer/policy/normal/std Mean 0.388244 trainer/policy/normal/std Std 0.190816 trainer/policy/normal/std Max 1.01965 trainer/policy/normal/std Min 0.0655004 trainer/policy/normal/log_std Mean -1.10141 trainer/policy/normal/log_std Std 0.603405 trainer/policy/normal/log_std Max 0.0194595 trainer/policy/normal/log_std 
Min -2.7257 eval/num steps total 472495 eval/num paths total 917 eval/path length Mean 494.5 eval/path length Std 84.5 eval/path length Max 579 eval/path length Min 410 eval/Rewards Mean 3.14545 eval/Rewards Std 0.747143 eval/Rewards Max 4.97361 eval/Rewards Min 0.980667 eval/Returns Mean 1555.43 eval/Returns Std 289.51 eval/Returns Max 1844.94 eval/Returns Min 1265.92 eval/Actions Mean 0.158363 eval/Actions Std 0.605064 eval/Actions Max 0.996966 eval/Actions Min -0.99897 eval/Num Paths 2 eval/Average Returns 1555.43 eval/normalized_score 48.4149 time/evaluation sampling (s) 0.908714 time/logging (s) 0.00368268 time/sampling batch (s) 0.271004 time/saving (s) 0.00296167 time/training (s) 4.26773 time/epoch (s) 5.45409 time/total (s) 34089.8 Epoch -288 ---------------------------------- --------------- 2022-05-10 22:38:52.756046 PDT | [0] Epoch -287 finished ---------------------------------- --------------- epoch -287 replay_buffer/size 999996 trainer/num train calls 714000 trainer/Policy Loss -2.26152 trainer/Log Pis Mean 2.28027 trainer/Log Pis Std 2.47478 trainer/Log Pis Max 9.30367 trainer/Log Pis Min -6.89969 trainer/policy/mean Mean 0.142555 trainer/policy/mean Std 0.619388 trainer/policy/mean Max 0.998167 trainer/policy/mean Min -0.997669 trainer/policy/normal/std Mean 0.376918 trainer/policy/normal/std Std 0.186075 trainer/policy/normal/std Max 0.976623 trainer/policy/normal/std Min 0.0626496 trainer/policy/normal/log_std Mean -1.13238 trainer/policy/normal/log_std Std 0.604022 trainer/policy/normal/log_std Max -0.023655 trainer/policy/normal/log_std Min -2.7702 eval/num steps total 473105 eval/num paths total 918 eval/path length Mean 610 eval/path length Std 0 eval/path length Max 610 eval/path length Min 610 eval/Rewards Mean 3.24956 eval/Rewards Std 0.793925 eval/Rewards Max 5.59321 eval/Rewards Min 0.981313 eval/Returns Mean 1982.23 eval/Returns Std 0 eval/Returns Max 1982.23 eval/Returns Min 1982.23 eval/Actions Mean 0.162592 eval/Actions Std 0.598031 
eval/Actions Max 0.998841 eval/Actions Min -0.998445 eval/Num Paths 1 eval/Average Returns 1982.23 eval/normalized_score 61.5289 time/evaluation sampling (s) 0.956597 time/logging (s) 0.00260501 time/sampling batch (s) 0.270964 time/saving (s) 0.00301631 time/training (s) 4.25999 time/epoch (s) 5.49317 time/total (s) 34095.3 Epoch -287 ---------------------------------- --------------- 2022-05-10 22:38:58.195747 PDT | [0] Epoch -286 finished ---------------------------------- --------------- epoch -286 replay_buffer/size 999996 trainer/num train calls 715000 trainer/Policy Loss -2.0911 trainer/Log Pis Mean 2.02132 trainer/Log Pis Std 2.41553 trainer/Log Pis Max 9.99593 trainer/Log Pis Min -6.36194 trainer/policy/mean Mean 0.143024 trainer/policy/mean Std 0.6083 trainer/policy/mean Max 0.997691 trainer/policy/mean Min -0.998141 trainer/policy/normal/std Mean 0.394187 trainer/policy/normal/std Std 0.19037 trainer/policy/normal/std Max 1.0468 trainer/policy/normal/std Min 0.0730709 trainer/policy/normal/log_std Mean -1.08083 trainer/policy/normal/log_std Std 0.5913 trainer/policy/normal/log_std Max 0.0457389 trainer/policy/normal/log_std Min -2.61633 eval/num steps total 474012 eval/num paths total 920 eval/path length Mean 453.5 eval/path length Std 47.5 eval/path length Max 501 eval/path length Min 406 eval/Rewards Mean 3.11159 eval/Rewards Std 0.763801 eval/Rewards Max 4.65582 eval/Rewards Min 0.978462 eval/Returns Mean 1411.11 eval/Returns Std 162.245 eval/Returns Max 1573.35 eval/Returns Min 1248.86 eval/Actions Mean 0.152537 eval/Actions Std 0.597823 eval/Actions Max 0.996542 eval/Actions Min -0.999159 eval/Num Paths 2 eval/Average Returns 1411.11 eval/normalized_score 43.9806 time/evaluation sampling (s) 0.882581 time/logging (s) 0.00345765 time/sampling batch (s) 0.270974 time/saving (s) 0.0030127 time/training (s) 4.25735 time/epoch (s) 5.41738 time/total (s) 34100.7 Epoch -286 ---------------------------------- --------------- 2022-05-10 22:39:03.611714 PDT 
| [0] Epoch -285 finished ---------------------------------- --------------- epoch -285 replay_buffer/size 999996 trainer/num train calls 716000 trainer/Policy Loss -2.15205 trainer/Log Pis Mean 2.206 trainer/Log Pis Std 2.59176 trainer/Log Pis Max 9.53928 trainer/Log Pis Min -4.71409 trainer/policy/mean Mean 0.137867 trainer/policy/mean Std 0.614146 trainer/policy/mean Max 0.99774 trainer/policy/mean Min -0.996884 trainer/policy/normal/std Mean 0.384184 trainer/policy/normal/std Std 0.182003 trainer/policy/normal/std Max 0.94226 trainer/policy/normal/std Min 0.0694292 trainer/policy/normal/log_std Mean -1.0994 trainer/policy/normal/log_std Std 0.575483 trainer/policy/normal/log_std Max -0.059474 trainer/policy/normal/log_std Min -2.66745 eval/num steps total 474516 eval/num paths total 921 eval/path length Mean 504 eval/path length Std 0 eval/path length Max 504 eval/path length Min 504 eval/Rewards Mean 3.06701 eval/Rewards Std 0.776451 eval/Rewards Max 4.77344 eval/Rewards Min 0.986704 eval/Returns Mean 1545.77 eval/Returns Std 0 eval/Returns Max 1545.77 eval/Returns Min 1545.77 eval/Actions Mean 0.16006 eval/Actions Std 0.585865 eval/Actions Max 0.998043 eval/Actions Min -0.995813 eval/Num Paths 1 eval/Average Returns 1545.77 eval/normalized_score 48.1184 time/evaluation sampling (s) 0.883445 time/logging (s) 0.00228846 time/sampling batch (s) 0.269307 time/saving (s) 0.00293343 time/training (s) 4.23376 time/epoch (s) 5.39173 time/total (s) 34106.1 Epoch -285 ---------------------------------- --------------- 2022-05-10 22:39:09.017377 PDT | [0] Epoch -284 finished ---------------------------------- --------------- epoch -284 replay_buffer/size 999996 trainer/num train calls 717000 trainer/Policy Loss -2.18567 trainer/Log Pis Mean 2.23756 trainer/Log Pis Std 2.58301 trainer/Log Pis Max 12.2297 trainer/Log Pis Min -5.20382 trainer/policy/mean Mean 0.123522 trainer/policy/mean Std 0.616241 trainer/policy/mean Max 0.998068 trainer/policy/mean Min -0.998872 
trainer/policy/normal/std Mean 0.383917 trainer/policy/normal/std Std 0.182617 trainer/policy/normal/std Max 0.94093 trainer/policy/normal/std Min 0.0674183 trainer/policy/normal/log_std Mean -1.10175 trainer/policy/normal/log_std Std 0.579677 trainer/policy/normal/log_std Max -0.0608867 trainer/policy/normal/log_std Min -2.69684 eval/num steps total 475512 eval/num paths total 923 eval/path length Mean 498 eval/path length Std 1 eval/path length Max 499 eval/path length Min 497 eval/Rewards Mean 3.10449 eval/Rewards Std 0.770028 eval/Rewards Max 5.00944 eval/Rewards Min 0.974164 eval/Returns Mean 1546.03 eval/Returns Std 19.3073 eval/Returns Max 1565.34 eval/Returns Min 1526.73 eval/Actions Mean 0.150747 eval/Actions Std 0.590466 eval/Actions Max 0.998336 eval/Actions Min -0.99832 eval/Num Paths 2 eval/Average Returns 1546.03 eval/normalized_score 48.1264 time/evaluation sampling (s) 0.889367 time/logging (s) 0.00370126 time/sampling batch (s) 0.269787 time/saving (s) 0.00298971 time/training (s) 4.21792 time/epoch (s) 5.38376 time/total (s) 34111.5 Epoch -284 ---------------------------------- --------------- 2022-05-10 22:39:14.452125 PDT | [0] Epoch -283 finished ---------------------------------- --------------- epoch -283 replay_buffer/size 999996 trainer/num train calls 718000 trainer/Policy Loss -2.24589 trainer/Log Pis Mean 2.34996 trainer/Log Pis Std 2.62687 trainer/Log Pis Max 11.479 trainer/Log Pis Min -7.91762 trainer/policy/mean Mean 0.141899 trainer/policy/mean Std 0.621202 trainer/policy/mean Max 0.994121 trainer/policy/mean Min -0.997963 trainer/policy/normal/std Mean 0.374194 trainer/policy/normal/std Std 0.183101 trainer/policy/normal/std Max 0.947488 trainer/policy/normal/std Min 0.0670606 trainer/policy/normal/log_std Mean -1.13463 trainer/policy/normal/log_std Std 0.592195 trainer/policy/normal/log_std Max -0.0539411 trainer/policy/normal/log_std Min -2.70216 eval/num steps total 475995 eval/num paths total 924 eval/path length Mean 483 
eval/path length Std 0 eval/path length Max 483 eval/path length Min 483 eval/Rewards Mean 3.20999 eval/Rewards Std 0.813127 eval/Rewards Max 4.75142 eval/Rewards Min 0.985281 eval/Returns Mean 1550.42 eval/Returns Std 0 eval/Returns Max 1550.42 eval/Returns Min 1550.42 eval/Actions Mean 0.138811 eval/Actions Std 0.59796 eval/Actions Max 0.996649 eval/Actions Min -0.998802 eval/Num Paths 1 eval/Average Returns 1550.42 eval/normalized_score 48.2612 time/evaluation sampling (s) 0.890459 time/logging (s) 0.00229739 time/sampling batch (s) 0.275145 time/saving (s) 0.00297346 time/training (s) 4.2393 time/epoch (s) 5.41017 time/total (s) 34116.9 Epoch -283 ---------------------------------- --------------- 2022-05-10 22:39:19.925833 PDT | [0] Epoch -282 finished ---------------------------------- --------------- epoch -282 replay_buffer/size 999996 trainer/num train calls 719000 trainer/Policy Loss -2.19999 trainer/Log Pis Mean 2.26918 trainer/Log Pis Std 2.58445 trainer/Log Pis Max 9.14448 trainer/Log Pis Min -3.73268 trainer/policy/mean Mean 0.131107 trainer/policy/mean Std 0.617836 trainer/policy/mean Max 0.995356 trainer/policy/mean Min -0.997148 trainer/policy/normal/std Mean 0.37552 trainer/policy/normal/std Std 0.178334 trainer/policy/normal/std Max 0.977601 trainer/policy/normal/std Min 0.0753642 trainer/policy/normal/log_std Mean -1.12141 trainer/policy/normal/log_std Std 0.571803 trainer/policy/normal/log_std Max -0.0226536 trainer/policy/normal/log_std Min -2.58542 eval/num steps total 476536 eval/num paths total 925 eval/path length Mean 541 eval/path length Std 0 eval/path length Max 541 eval/path length Min 541 eval/Rewards Mean 3.18947 eval/Rewards Std 0.822267 eval/Rewards Max 5.34253 eval/Rewards Min 0.97724 eval/Returns Mean 1725.5 eval/Returns Std 0 eval/Returns Max 1725.5 eval/Returns Min 1725.5 eval/Actions Mean 0.153998 eval/Actions Std 0.586639 eval/Actions Max 0.998456 eval/Actions Min -0.99786 eval/Num Paths 1 eval/Average Returns 1725.5 
eval/normalized_score 53.6407 time/evaluation sampling (s) 0.902298 time/logging (s) 0.00239901 time/sampling batch (s) 0.270703 time/saving (s) 0.00309779 time/training (s) 4.27222 time/epoch (s) 5.45072 time/total (s) 34122.4 Epoch -282 ---------------------------------- --------------- 2022-05-10 22:39:25.361397 PDT | [0] Epoch -281 finished ---------------------------------- --------------- epoch -281 replay_buffer/size 999996 trainer/num train calls 720000 trainer/Policy Loss -2.39168 trainer/Log Pis Mean 2.49761 trainer/Log Pis Std 2.60993 trainer/Log Pis Max 9.93654 trainer/Log Pis Min -4.76816 trainer/policy/mean Mean 0.139019 trainer/policy/mean Std 0.625162 trainer/policy/mean Max 0.997019 trainer/policy/mean Min -0.997521 trainer/policy/normal/std Mean 0.368803 trainer/policy/normal/std Std 0.180884 trainer/policy/normal/std Max 0.993192 trainer/policy/normal/std Min 0.0658533 trainer/policy/normal/log_std Mean -1.14929 trainer/policy/normal/log_std Std 0.591432 trainer/policy/normal/log_std Max -0.0068309 trainer/policy/normal/log_std Min -2.72033 eval/num steps total 477061 eval/num paths total 926 eval/path length Mean 525 eval/path length Std 0 eval/path length Max 525 eval/path length Min 525 eval/Rewards Mean 3.13477 eval/Rewards Std 0.874155 eval/Rewards Max 5.49874 eval/Rewards Min 0.976488 eval/Returns Mean 1645.75 eval/Returns Std 0 eval/Returns Max 1645.75 eval/Returns Min 1645.75 eval/Actions Mean 0.1532 eval/Actions Std 0.570903 eval/Actions Max 0.997697 eval/Actions Min -0.99791 eval/Num Paths 1 eval/Average Returns 1645.75 eval/normalized_score 51.1903 time/evaluation sampling (s) 0.892372 time/logging (s) 0.00235001 time/sampling batch (s) 0.270139 time/saving (s) 0.00300874 time/training (s) 4.24414 time/epoch (s) 5.41201 time/total (s) 34127.8 Epoch -281 ---------------------------------- --------------- 2022-05-10 22:39:30.824276 PDT | [0] Epoch -280 finished ---------------------------------- --------------- epoch -280 
replay_buffer/size 999996 trainer/num train calls 721000 trainer/Policy Loss -2.21633 trainer/Log Pis Mean 2.18449 trainer/Log Pis Std 2.55706 trainer/Log Pis Max 10.7349 trainer/Log Pis Min -4.5394 trainer/policy/mean Mean 0.127425 trainer/policy/mean Std 0.616346 trainer/policy/mean Max 0.997516 trainer/policy/mean Min -0.998142 trainer/policy/normal/std Mean 0.38502 trainer/policy/normal/std Std 0.189279 trainer/policy/normal/std Max 0.99096 trainer/policy/normal/std Min 0.0725327 trainer/policy/normal/log_std Mean -1.10585 trainer/policy/normal/log_std Std 0.589544 trainer/policy/normal/log_std Max -0.00908081 trainer/policy/normal/log_std Min -2.62372 eval/num steps total 477878 eval/num paths total 927 eval/path length Mean 817 eval/path length Std 0 eval/path length Max 817 eval/path length Min 817 eval/Rewards Mean 3.2499 eval/Rewards Std 0.666478 eval/Rewards Max 4.90627 eval/Rewards Min 0.983751 eval/Returns Mean 2655.17 eval/Returns Std 0 eval/Returns Max 2655.17 eval/Returns Min 2655.17 eval/Actions Mean 0.151466 eval/Actions Std 0.610062 eval/Actions Max 0.998448 eval/Actions Min -0.99771 eval/Num Paths 1 eval/Average Returns 2655.17 eval/normalized_score 82.2057 time/evaluation sampling (s) 0.918402 time/logging (s) 0.0032249 time/sampling batch (s) 0.270782 time/saving (s) 0.00314683 time/training (s) 4.24481 time/epoch (s) 5.44037 time/total (s) 34133.2 Epoch -280 ---------------------------------- --------------- 2022-05-10 22:39:36.298272 PDT | [0] Epoch -279 finished ---------------------------------- --------------- epoch -279 replay_buffer/size 999996 trainer/num train calls 722000 trainer/Policy Loss -2.12498 trainer/Log Pis Mean 2.24079 trainer/Log Pis Std 2.57374 trainer/Log Pis Max 11.4379 trainer/Log Pis Min -5.90354 trainer/policy/mean Mean 0.158406 trainer/policy/mean Std 0.613482 trainer/policy/mean Max 0.997745 trainer/policy/mean Min -0.998364 trainer/policy/normal/std Mean 0.379579 trainer/policy/normal/std Std 0.181035 
trainer/policy/normal/std Max 0.963695 trainer/policy/normal/std Min 0.070509 trainer/policy/normal/log_std Mean -1.11402 trainer/policy/normal/log_std Std 0.582502 trainer/policy/normal/log_std Max -0.0369809 trainer/policy/normal/log_std Min -2.65202 eval/num steps total 478848 eval/num paths total 929 eval/path length Mean 485 eval/path length Std 0 eval/path length Max 485 eval/path length Min 485 eval/Rewards Mean 3.1507 eval/Rewards Std 0.835771 eval/Rewards Max 4.81148 eval/Rewards Min 0.976369 eval/Returns Mean 1528.09 eval/Returns Std 6.30578 eval/Returns Max 1534.4 eval/Returns Min 1521.79 eval/Actions Mean 0.155738 eval/Actions Std 0.595431 eval/Actions Max 0.998672 eval/Actions Min -0.998847 eval/Num Paths 2 eval/Average Returns 1528.09 eval/normalized_score 47.575 time/evaluation sampling (s) 0.917826 time/logging (s) 0.00362903 time/sampling batch (s) 0.269895 time/saving (s) 0.00297916 time/training (s) 4.25643 time/epoch (s) 5.45075 time/total (s) 34138.7 Epoch -279 ---------------------------------- --------------- 2022-05-10 22:39:41.709671 PDT | [0] Epoch -278 finished ---------------------------------- --------------- epoch -278 replay_buffer/size 999996 trainer/num train calls 723000 trainer/Policy Loss -2.27561 trainer/Log Pis Mean 2.27768 trainer/Log Pis Std 2.55408 trainer/Log Pis Max 9.93354 trainer/Log Pis Min -4.99205 trainer/policy/mean Mean 0.161634 trainer/policy/mean Std 0.616687 trainer/policy/mean Max 0.99842 trainer/policy/mean Min -0.99719 trainer/policy/normal/std Mean 0.3742 trainer/policy/normal/std Std 0.18083 trainer/policy/normal/std Max 0.926653 trainer/policy/normal/std Min 0.0654542 trainer/policy/normal/log_std Mean -1.13557 trainer/policy/normal/log_std Std 0.597913 trainer/policy/normal/log_std Max -0.0761765 trainer/policy/normal/log_std Min -2.72641 eval/num steps total 479351 eval/num paths total 930 eval/path length Mean 503 eval/path length Std 0 eval/path length Max 503 eval/path length Min 503 eval/Rewards Mean 
3.14768 eval/Rewards Std 0.745301 eval/Rewards Max 4.69193 eval/Rewards Min 0.983738 eval/Returns Mean 1583.28 eval/Returns Std 0 eval/Returns Max 1583.28 eval/Returns Min 1583.28 eval/Actions Mean 0.161334 eval/Actions Std 0.601911 eval/Actions Max 0.998272 eval/Actions Min -0.997134 eval/Num Paths 1 eval/Average Returns 1583.28 eval/normalized_score 49.2708 time/evaluation sampling (s) 0.914662 time/logging (s) 0.00234063 time/sampling batch (s) 0.266941 time/saving (s) 0.00311246 time/training (s) 4.19982 time/epoch (s) 5.38688 time/total (s) 34144.1 Epoch -278 ---------------------------------- --------------- 2022-05-10 22:39:47.073594 PDT | [0] Epoch -277 finished ---------------------------------- --------------- epoch -277 replay_buffer/size 999996 trainer/num train calls 724000 trainer/Policy Loss -2.05633 trainer/Log Pis Mean 2.17663 trainer/Log Pis Std 2.38635 trainer/Log Pis Max 10.4799 trainer/Log Pis Min -3.51378 trainer/policy/mean Mean 0.159249 trainer/policy/mean Std 0.599734 trainer/policy/mean Max 0.995859 trainer/policy/mean Min -0.998022 trainer/policy/normal/std Mean 0.373683 trainer/policy/normal/std Std 0.179644 trainer/policy/normal/std Max 0.976369 trainer/policy/normal/std Min 0.0648202 trainer/policy/normal/log_std Mean -1.13485 trainer/policy/normal/log_std Std 0.594134 trainer/policy/normal/log_std Max -0.023915 trainer/policy/normal/log_std Min -2.73614 eval/num steps total 479860 eval/num paths total 931 eval/path length Mean 509 eval/path length Std 0 eval/path length Max 509 eval/path length Min 509 eval/Rewards Mean 3.10627 eval/Rewards Std 0.760688 eval/Rewards Max 4.82508 eval/Rewards Min 0.977829 eval/Returns Mean 1581.09 eval/Returns Std 0 eval/Returns Max 1581.09 eval/Returns Min 1581.09 eval/Actions Mean 0.156264 eval/Actions Std 0.590496 eval/Actions Max 0.997955 eval/Actions Min -0.998839 eval/Num Paths 1 eval/Average Returns 1581.09 eval/normalized_score 49.2035 time/evaluation sampling (s) 0.898649 time/logging (s) 
0.002273 time/sampling batch (s) 0.265115 time/saving (s) 0.00297141 time/training (s) 4.17172 time/epoch (s) 5.34073 time/total (s) 34149.4 Epoch -277 ---------------------------------- --------------- 2022-05-10 22:39:52.479794 PDT | [0] Epoch -276 finished ---------------------------------- --------------- epoch -276 replay_buffer/size 999996 trainer/num train calls 725000 trainer/Policy Loss -2.25339 trainer/Log Pis Mean 2.10568 trainer/Log Pis Std 2.67579 trainer/Log Pis Max 12.2399 trainer/Log Pis Min -6.48698 trainer/policy/mean Mean 0.171204 trainer/policy/mean Std 0.601161 trainer/policy/mean Max 0.996276 trainer/policy/mean Min -0.998262 trainer/policy/normal/std Mean 0.376381 trainer/policy/normal/std Std 0.187391 trainer/policy/normal/std Max 1.01277 trainer/policy/normal/std Min 0.0658255 trainer/policy/normal/log_std Mean -1.13878 trainer/policy/normal/log_std Std 0.615739 trainer/policy/normal/log_std Max 0.0126929 trainer/policy/normal/log_std Min -2.72075 eval/num steps total 480437 eval/num paths total 932 eval/path length Mean 577 eval/path length Std 0 eval/path length Max 577 eval/path length Min 577 eval/Rewards Mean 3.16428 eval/Rewards Std 0.762629 eval/Rewards Max 4.76696 eval/Rewards Min 0.986125 eval/Returns Mean 1825.79 eval/Returns Std 0 eval/Returns Max 1825.79 eval/Returns Min 1825.79 eval/Actions Mean 0.156136 eval/Actions Std 0.601301 eval/Actions Max 0.998174 eval/Actions Min -0.998468 eval/Num Paths 1 eval/Average Returns 1825.79 eval/normalized_score 56.7221 time/evaluation sampling (s) 0.880114 time/logging (s) 0.00267949 time/sampling batch (s) 0.269418 time/saving (s) 0.00324751 time/training (s) 4.22775 time/epoch (s) 5.38321 time/total (s) 34154.8 Epoch -276 ---------------------------------- --------------- 2022-05-10 22:39:57.910894 PDT | [0] Epoch -275 finished ---------------------------------- --------------- epoch -275 replay_buffer/size 999996 trainer/num train calls 726000 trainer/Policy Loss -2.195 trainer/Log Pis 
Mean 2.28441 trainer/Log Pis Std 2.55272 trainer/Log Pis Max 12.7668 trainer/Log Pis Min -4.7307 trainer/policy/mean Mean 0.138118 trainer/policy/mean Std 0.618547 trainer/policy/mean Max 0.997957 trainer/policy/mean Min -0.999465 trainer/policy/normal/std Mean 0.378841 trainer/policy/normal/std Std 0.184862 trainer/policy/normal/std Max 0.895097 trainer/policy/normal/std Min 0.0659164 trainer/policy/normal/log_std Mean -1.12305 trainer/policy/normal/log_std Std 0.595371 trainer/policy/normal/log_std Max -0.110823 trainer/policy/normal/log_std Min -2.71937 eval/num steps total 481087 eval/num paths total 933 eval/path length Mean 650 eval/path length Std 0 eval/path length Max 650 eval/path length Min 650 eval/Rewards Mean 3.25469 eval/Rewards Std 0.724079 eval/Rewards Max 4.81737 eval/Rewards Min 0.982709 eval/Returns Mean 2115.55 eval/Returns Std 0 eval/Returns Max 2115.55 eval/Returns Min 2115.55 eval/Actions Mean 0.154843 eval/Actions Std 0.612731 eval/Actions Max 0.99706 eval/Actions Min -0.998327 eval/Num Paths 1 eval/Average Returns 2115.55 eval/normalized_score 65.6253 time/evaluation sampling (s) 0.873364 time/logging (s) 0.00265067 time/sampling batch (s) 0.269107 time/saving (s) 0.00305269 time/training (s) 4.25949 time/epoch (s) 5.40766 time/total (s) 34160.2 Epoch -275 ---------------------------------- --------------- 2022-05-10 22:40:03.301145 PDT | [0] Epoch -274 finished ---------------------------------- --------------- epoch -274 replay_buffer/size 999996 trainer/num train calls 727000 trainer/Policy Loss -2.25152 trainer/Log Pis Mean 2.18484 trainer/Log Pis Std 2.59867 trainer/Log Pis Max 9.84306 trainer/Log Pis Min -4.86012 trainer/policy/mean Mean 0.154068 trainer/policy/mean Std 0.616573 trainer/policy/mean Max 0.996298 trainer/policy/mean Min -0.998263 trainer/policy/normal/std Mean 0.383652 trainer/policy/normal/std Std 0.185337 trainer/policy/normal/std Max 0.957952 trainer/policy/normal/std Min 0.0685849 trainer/policy/normal/log_std Mean 
-1.11051 trainer/policy/normal/log_std Std 0.598716 trainer/policy/normal/log_std Max -0.0429578 trainer/policy/normal/log_std Min -2.67968 eval/num steps total 482067 eval/num paths total 935 eval/path length Mean 490 eval/path length Std 82 eval/path length Max 572 eval/path length Min 408 eval/Rewards Mean 3.14143 eval/Rewards Std 0.790344 eval/Rewards Max 4.97991 eval/Rewards Min 0.979629 eval/Returns Mean 1539.3 eval/Returns Std 282.44 eval/Returns Max 1821.74 eval/Returns Min 1256.86 eval/Actions Mean 0.15375 eval/Actions Std 0.594411 eval/Actions Max 0.998046 eval/Actions Min -0.998959 eval/Num Paths 2 eval/Average Returns 1539.3 eval/normalized_score 47.9194 time/evaluation sampling (s) 0.877579 time/logging (s) 0.00363799 time/sampling batch (s) 0.26742 time/saving (s) 0.00302388 time/training (s) 4.21623 time/epoch (s) 5.36789 time/total (s) 34165.6 Epoch -274 ---------------------------------- --------------- 2022-05-10 22:40:08.699785 PDT | [0] Epoch -273 finished ---------------------------------- --------------- epoch -273 replay_buffer/size 999996 trainer/num train calls 728000 trainer/Policy Loss -2.32911 trainer/Log Pis Mean 2.32181 trainer/Log Pis Std 2.57606 trainer/Log Pis Max 10.1679 trainer/Log Pis Min -5.29565 trainer/policy/mean Mean 0.124288 trainer/policy/mean Std 0.619582 trainer/policy/mean Max 0.998005 trainer/policy/mean Min -0.998416 trainer/policy/normal/std Mean 0.376496 trainer/policy/normal/std Std 0.179433 trainer/policy/normal/std Max 0.970008 trainer/policy/normal/std Min 0.0679922 trainer/policy/normal/log_std Mean -1.12261 trainer/policy/normal/log_std Std 0.582286 trainer/policy/normal/log_std Max -0.0304509 trainer/policy/normal/log_std Min -2.68836 eval/num steps total 483041 eval/num paths total 938 eval/path length Mean 324.667 eval/path length Std 97.5921 eval/path length Max 402 eval/path length Min 187 eval/Rewards Mean 2.91552 eval/Rewards Std 0.996493 eval/Rewards Max 5.18796 eval/Rewards Min 0.976857 eval/Returns 
Mean 946.571 eval/Returns Std 360.206 eval/Returns Max 1258.47 eval/Returns Min 441.821 eval/Actions Mean 0.105159 eval/Actions Std 0.542229 eval/Actions Max 0.997786 eval/Actions Min -0.999711 eval/Num Paths 3 eval/Average Returns 946.571 eval/normalized_score 29.7072 time/evaluation sampling (s) 0.878981 time/logging (s) 0.00372097 time/sampling batch (s) 0.267176 time/saving (s) 0.00296365 time/training (s) 4.22268 time/epoch (s) 5.37552 time/total (s) 34171 Epoch -273 ---------------------------------- --------------- 2022-05-10 22:40:14.097340 PDT | [0] Epoch -272 finished ---------------------------------- --------------- epoch -272 replay_buffer/size 999996 trainer/num train calls 729000 trainer/Policy Loss -2.29396 trainer/Log Pis Mean 2.37633 trainer/Log Pis Std 2.52222 trainer/Log Pis Max 9.91372 trainer/Log Pis Min -4.51397 trainer/policy/mean Mean 0.133578 trainer/policy/mean Std 0.627637 trainer/policy/mean Max 0.993306 trainer/policy/mean Min -0.997555 trainer/policy/normal/std Mean 0.377066 trainer/policy/normal/std Std 0.183102 trainer/policy/normal/std Max 1.03076 trainer/policy/normal/std Min 0.0715115 trainer/policy/normal/log_std Mean -1.125 trainer/policy/normal/log_std Std 0.589498 trainer/policy/normal/log_std Max 0.0302996 trainer/policy/normal/log_std Min -2.6379 eval/num steps total 483549 eval/num paths total 939 eval/path length Mean 508 eval/path length Std 0 eval/path length Max 508 eval/path length Min 508 eval/Rewards Mean 3.21797 eval/Rewards Std 0.844596 eval/Rewards Max 5.48691 eval/Rewards Min 0.985742 eval/Returns Mean 1634.73 eval/Returns Std 0 eval/Returns Max 1634.73 eval/Returns Min 1634.73 eval/Actions Mean 0.157626 eval/Actions Std 0.596645 eval/Actions Max 0.997614 eval/Actions Min -0.998136 eval/Num Paths 1 eval/Average Returns 1634.73 eval/normalized_score 50.8516 time/evaluation sampling (s) 0.879031 time/logging (s) 0.00227588 time/sampling batch (s) 0.267622 time/saving (s) 0.00296319 time/training (s) 4.22127 
time/epoch (s)  5.37316
time/total (s)  34176.3
Epoch -272
----------------------------------  ---------------
2022-05-10 22:40:19.467113 PDT | [0] Epoch -271 finished
----------------------------------  ---------------
epoch  -271
replay_buffer/size  999996
trainer/num train calls  730000
trainer/Policy Loss  -2.42403
trainer/Log Pis  Mean 2.27548  Std 2.69445  Max 10.65  Min -6.20446
trainer/policy/mean  Mean 0.149979  Std 0.620362  Max 0.997451  Min -0.998643
trainer/policy/normal/std  Mean 0.378152  Std 0.184758  Max 0.94509  Min 0.0644748
trainer/policy/normal/log_std  Mean -1.12796  Std 0.604027  Max -0.0564751  Min -2.74148
eval/num steps total  484045
eval/num paths total  940
eval/path length  Mean 496  Std 0  Max 496  Min 496
eval/Rewards  Mean 3.08166  Std 0.752115  Max 4.64822  Min 0.98388
eval/Returns  Mean 1528.5  Std 0  Max 1528.5  Min 1528.5
eval/Actions  Mean 0.15116  Std 0.593153  Max 0.997448  Min -0.999046
eval/Num Paths  1
eval/Average Returns  1528.5
eval/normalized_score  47.5877
time/evaluation sampling (s)  0.880088
time/logging (s)  0.002363
time/sampling batch (s)  0.266044
time/saving (s)  0.00315621
time/training (s)  4.19502
time/epoch (s)  5.34667
time/total (s)  34181.7
Epoch -271
----------------------------------  ---------------
2022-05-10 22:40:24.856527 PDT | [0] Epoch -270 finished
----------------------------------  ---------------
epoch  -270
replay_buffer/size  999996
trainer/num train calls  731000
trainer/Policy Loss  -2.20179
trainer/Log Pis  Mean 2.12059  Std 2.51249  Max 9.34069  Min -5.26513
trainer/policy/mean  Mean 0.137075  Std 0.609529  Max 0.998481  Min -0.998044
trainer/policy/normal/std  Mean 0.372164  Std 0.18037  Max 0.97857  Min 0.0679847
trainer/policy/normal/log_std  Mean -1.13769  Std 0.588388  Max -0.0216632  Min -2.68847
eval/num steps total  484573
eval/num paths total  941
eval/path length  Mean 528  Std 0  Max 528  Min 528
eval/Rewards  Mean 3.14545  Std 0.837561  Max 5.5169  Min 0.981787
eval/Returns  Mean 1660.8  Std 0  Max 1660.8  Min 1660.8
eval/Actions  Mean 0.148555  Std 0.58293  Max 0.997713  Min -0.997981
eval/Num Paths  1
eval/Average Returns  1660.8
eval/normalized_score  51.6526
time/evaluation sampling (s)  0.876705
time/logging (s)  0.00230957
time/sampling batch (s)  0.26778
time/saving (s)  0.00289446
time/training (s)  4.21686
time/epoch (s)  5.36655
time/total (s)  34187.1
Epoch -270
----------------------------------  ---------------
2022-05-10 22:40:30.242278 PDT | [0] Epoch -269 finished
----------------------------------  ---------------
epoch  -269
replay_buffer/size  999996
trainer/num train calls  732000
trainer/Policy Loss  -2.32408
trainer/Log Pis  Mean 2.18244  Std 2.59004  Max 9.80979  Min -5.24911
trainer/policy/mean  Mean 0.138193  Std 0.619914  Max 0.997506  Min -0.99756
trainer/policy/normal/std  Mean 0.375822  Std 0.182594  Max 0.99735  Min 0.0708841
trainer/policy/normal/log_std  Mean -1.12853  Std 0.588742  Max -0.00265312  Min -2.64671
eval/num steps total  485409
eval/num paths total  943
eval/path length  Mean 418  Std 9  Max 427  Min 409
eval/Rewards  Mean 3.10766  Std 0.842016  Max 5.34345  Min 0.976706
eval/Returns  Mean 1299  Std 40.912  Max 1339.91  Min 1258.09
eval/Actions  Mean 0.146073  Std 0.592028  Max 0.99785  Min -0.999109
eval/Num Paths  2
eval/Average Returns  1299
eval/normalized_score  40.536
time/evaluation sampling (s)  0.886504
time/logging (s)  0.00317655
time/sampling batch (s)  0.265358
time/saving (s)  0.00289991
time/training (s)  4.20574
time/epoch (s)  5.36368
time/total (s)  34192.4
Epoch -269
----------------------------------  ---------------
2022-05-10 22:40:35.694575 PDT | [0] Epoch -268 finished
----------------------------------  ---------------
epoch  -268
replay_buffer/size  999996
trainer/num train calls  733000
trainer/Policy Loss  -2.2592
trainer/Log Pis  Mean 2.31943  Std 2.72427  Max 17.0157  Min -4.48149
trainer/policy/mean  Mean 0.133834  Std 0.62158  Max 0.998069  Min -0.999605
trainer/policy/normal/std  Mean 0.373472  Std 0.183632  Max 1.02172  Min 0.0700228
trainer/policy/normal/log_std  Mean -1.13961  Std 0.599518  Max 0.0214873  Min -2.65893
eval/num steps total  485948
eval/num paths total  944
eval/path length  Mean 539  Std 0  Max 539  Min 539
eval/Rewards  Mean 3.18  Std 0.84393  Max 5.47062  Min 0.97685
eval/Returns  Mean 1714.02  Std 0  Max 1714.02  Min 1714.02
eval/Actions  Mean 0.156752  Std 0.583274  Max 0.997228  Min -0.997473
eval/Num Paths  1
eval/Average Returns  1714.02
eval/normalized_score  53.2878
time/evaluation sampling (s)  0.96955
time/logging (s)  0.00243546
time/sampling batch (s)  0.265721
time/saving (s)  0.00291996
time/training (s)  4.18767
time/epoch (s)  5.42829
time/total (s)  34197.9
Epoch -268
----------------------------------  ---------------
2022-05-10 22:40:41.079792 PDT | [0] Epoch -267 finished
----------------------------------  ---------------
epoch  -267
replay_buffer/size  999996
trainer/num train calls  734000
trainer/Policy Loss  -2.15059
trainer/Log Pis  Mean 2.2162  Std 2.57743  Max 12.997  Min -8.12021
trainer/policy/mean  Mean 0.14904  Std 0.601255  Max 0.998885  Min -0.999299
trainer/policy/normal/std  Mean 0.370014  Std 0.178046  Max 0.872301  Min 0.0637067
trainer/policy/normal/log_std  Mean -1.14048  Std 0.580928  Max -0.136621  Min -2.75347
eval/num steps total  486556
eval/num paths total  945
eval/path length  Mean 608  Std 0  Max 608  Min 608
eval/Rewards  Mean 3.21541  Std 0.780108  Max 5.4049  Min 0.977569
eval/Returns  Mean 1954.97  Std 0  Max 1954.97  Min 1954.97
eval/Actions  Mean 0.151198  Std 0.592229  Max 0.998721  Min -0.997938
eval/Num Paths  1
eval/Average Returns  1954.97
eval/normalized_score  60.6912
time/evaluation sampling (s)  0.891991
time/logging (s)  0.00254885
time/sampling batch (s)  0.264817
time/saving (s)  0.00305729
time/training (s)  4.20001
time/epoch (s)  5.36242
time/total (s)  34203.2
Epoch -267
----------------------------------  ---------------
2022-05-10 22:40:46.442864 PDT | [0] Epoch -266 finished
----------------------------------  ---------------
epoch  -266
replay_buffer/size  999996
trainer/num train calls  735000
trainer/Policy Loss  -2.2252
trainer/Log Pis  Mean 2.16551  Std 2.67611  Max 15.8878  Min -4.6433
trainer/policy/mean  Mean 0.144449  Std 0.609776  Max 0.99698  Min -0.999786
trainer/policy/normal/std  Mean 0.372446  Std 0.175048  Max 0.91321  Min 0.070748
trainer/policy/normal/log_std  Mean -1.12809  Std 0.570194  Max -0.09079  Min -2.64863
eval/num steps total  487133
eval/num paths total  946
eval/path length  Mean 577  Std 0  Max 577  Min 577
eval/Rewards  Mean 3.13685  Std 0.750283  Max 5.04025  Min 0.981221
eval/Returns  Mean 1809.96  Std 0  Max 1809.96  Min 1809.96
eval/Actions  Mean 0.156356  Std 0.601021  Max 0.998457  Min -0.997838
eval/Num Paths  1
eval/Average Returns  1809.96
eval/normalized_score  56.2358
time/evaluation sampling (s)  0.88598
time/logging (s)  0.0024503
time/sampling batch (s)  0.263683
time/saving (s)  0.00274939
time/training (s)  4.18479
time/epoch (s)  5.33965
time/total (s)  34208.6
Epoch -266
----------------------------------  ---------------
2022-05-10 22:40:51.846442 PDT | [0] Epoch -265 finished
----------------------------------  ---------------
epoch  -265
replay_buffer/size  999996
trainer/num train calls  736000
trainer/Policy Loss  -2.26741
trainer/Log Pis  Mean 2.35494  Std 2.54654  Max 13.726  Min -3.19314
trainer/policy/mean  Mean 0.147919  Std 0.618083  Max 0.9994  Min -0.99811
trainer/policy/normal/std  Mean 0.377593  Std 0.183655  Max 1.01577  Min 0.0675697
trainer/policy/normal/log_std  Mean -1.12631  Std 0.596628  Max 0.0156458  Min -2.6946
eval/num steps total  487642
eval/num paths total  947
eval/path length  Mean 509  Std 0  Max 509  Min 509
eval/Rewards  Mean 3.12389  Std 0.78111  Max 4.98399  Min 0.979358
eval/Returns  Mean 1590.06  Std 0  Max 1590.06  Min 1590.06
eval/Actions  Mean 0.164152  Std 0.586587  Max 0.998497  Min -0.998815
eval/Num Paths  1
eval/Average Returns  1590.06
eval/normalized_score  49.479
time/evaluation sampling (s)  0.878514
time/logging (s)  0.00216786
time/sampling batch (s)  0.268828
time/saving (s)  0.00279187
time/training (s)  4.22748
time/epoch (s)  5.37978
time/total (s)  34213.9
Epoch -265
----------------------------------  ---------------
2022-05-10 22:40:57.280262 PDT | [0] Epoch -264 finished
----------------------------------  ---------------
epoch  -264
replay_buffer/size  999996
trainer/num train calls  737000
trainer/Policy Loss  -2.17814
trainer/Log Pis  Mean 2.32027  Std 2.57648  Max 10.2737  Min -4.18167
trainer/policy/mean  Mean 0.149081  Std 0.61232  Max 0.996956  Min -0.998362
trainer/policy/normal/std  Mean 0.371538  Std 0.179739  Max 0.953431  Min 0.0732523
trainer/policy/normal/log_std  Mean -1.13881  Std 0.586524  Max -0.0476877  Min -2.61385
eval/num steps total  488209
eval/num paths total  948
eval/path length  Mean 567  Std 0  Max 567  Min 567
eval/Rewards  Mean 3.22754  Std 0.760365  Max 4.78511  Min 0.977131
eval/Returns  Mean 1830.01  Std 0  Max 1830.01  Min 1830.01
eval/Actions  Mean 0.15784  Std 0.608279  Max 0.997455  Min -0.998531
eval/Num Paths  1
eval/Average Returns  1830.01
eval/normalized_score  56.8519
time/evaluation sampling (s)  0.896013
time/logging (s)  0.00235412
time/sampling batch (s)  0.27176
time/saving (s)  0.00281598
time/training (s)  4.23734
time/epoch (s)  5.41028
time/total (s)  34219.4
Epoch -264
----------------------------------  ---------------
2022-05-10 22:41:02.738699 PDT | [0] Epoch -263 finished
----------------------------------  ---------------
epoch  -263
replay_buffer/size  999996
trainer/num train calls  738000
trainer/Policy Loss  -2.00165
trainer/Log Pis  Mean 2.11376  Std 2.62519  Max 10.4819  Min -7.39141
trainer/policy/mean  Mean 0.139664  Std 0.606015  Max 0.997858  Min -0.996829
trainer/policy/normal/std  Mean 0.379434  Std 0.188708  Max 0.950211  Min 0.0681377
trainer/policy/normal/log_std  Mean -1.12445  Std 0.598118  Max -0.0510709  Min -2.68623
eval/num steps total  488705
eval/num paths total  949
eval/path length  Mean 496  Std 0  Max 496  Min 496
eval/Rewards  Mean 3.05864  Std 0.76225  Max 4.62641  Min 0.975473
eval/Returns  Mean 1517.09  Std 0  Max 1517.09  Min 1517.09
eval/Actions  Mean 0.152256  Std 0.585521  Max 0.997732  Min -0.996935
eval/Num Paths  1
eval/Average Returns  1517.09
eval/normalized_score  47.2369
time/evaluation sampling (s)  0.889072
time/logging (s)  0.00219212
time/sampling batch (s)  0.27313
time/saving (s)  0.00292445
time/training (s)  4.26742
time/epoch (s)  5.43473
time/total (s)  34224.8
Epoch -263
----------------------------------  ---------------
2022-05-10 22:41:08.192773 PDT | [0] Epoch -262 finished
----------------------------------  ---------------
epoch  -262
replay_buffer/size  999996
trainer/num train calls  739000
trainer/Policy Loss  -2.06507
trainer/Log Pis  Mean 2.01708  Std 2.63749  Max 9.39903  Min -8.13346
trainer/policy/mean  Mean 0.13259  Std 0.60369  Max 0.998133  Min -0.997388
trainer/policy/normal/std  Mean 0.387802  Std 0.186963  Max 1.04971  Min 0.0696119
trainer/policy/normal/log_std  Mean -1.09628  Std 0.58963  Max 0.0485185  Min -2.66482
eval/num steps total  489210
eval/num paths total  950
eval/path length  Mean 505  Std 0  Max 505  Min 505
eval/Rewards  Mean 3.07914  Std 0.813877  Max 5.17385  Min 0.981109
eval/Returns  Mean 1554.97  Std 0  Max 1554.97  Min 1554.97
eval/Actions  Mean 0.160014  Std 0.579399  Max 0.997248  Min -0.996819
eval/Num Paths  1
eval/Average Returns  1554.97
eval/normalized_score  48.4008
time/evaluation sampling (s)  0.890974
time/logging (s)  0.00222788
time/sampling batch (s)  0.272326
time/saving (s)  0.00283855
time/training (s)  4.2622
time/epoch (s)  5.43057
time/total (s)  34230.2
Epoch -262
----------------------------------  ---------------
2022-05-10 22:41:13.631564 PDT | [0] Epoch -261 finished
----------------------------------  ---------------
epoch  -261
replay_buffer/size  999996
trainer/num train calls  740000
trainer/Policy Loss  -2.57633
trainer/Log Pis  Mean 2.35132  Std 2.67483  Max 11.5357  Min -7.73539
trainer/policy/mean  Mean 0.116975  Std 0.639181  Max 0.998007  Min -0.998907
trainer/policy/normal/std  Mean 0.379616  Std 0.184695  Max 0.987012  Min 0.063339
trainer/policy/normal/log_std  Mean -1.12227  Std 0.600733  Max -0.0130731  Min -2.75925
eval/num steps total  489788
eval/num paths total  951
eval/path length  Mean 578  Std 0  Max 578  Min 578
eval/Rewards  Mean 3.19786  Std 0.775617  Max 4.78822  Min 0.982645
eval/Returns  Mean 1848.36  Std 0  Max 1848.36  Min 1848.36
eval/Actions  Mean 0.161692  Std 0.597224  Max 0.997561  Min -0.998306
eval/Num Paths  1
eval/Average Returns  1848.36
eval/normalized_score  57.4157
time/evaluation sampling (s)  0.896142
time/logging (s)  0.0023319
time/sampling batch (s)  0.271502
time/saving (s)  0.00280952
time/training (s)  4.24274
time/epoch (s)  5.41553
time/total (s)  34235.6
Epoch -261
----------------------------------  ---------------
2022-05-10 22:41:19.077384 PDT | [0] Epoch -260 finished
----------------------------------  ---------------
epoch  -260
replay_buffer/size  999996
trainer/num train calls  741000
trainer/Policy Loss  -2.4389
trainer/Log Pis  Mean 2.28502  Std 2.59633  Max 13.7611  Min -5.26993
trainer/policy/mean  Mean 0.159333  Std 0.616504  Max 0.996033  Min -0.998559
trainer/policy/normal/std  Mean 0.37748  Std 0.181449  Max 1.03721  Min 0.0651171
trainer/policy/normal/log_std  Mean -1.12788  Std 0.60417  Max 0.0365385  Min -2.73157
eval/num steps total  490521
eval/num paths total  952
eval/path length  Mean 733  Std 0  Max 733  Min 733
eval/Rewards  Mean 3.21439  Std 0.74477  Max 4.68829  Min 0.980038
eval/Returns  Mean 2356.15  Std 0  Max 2356.15  Min 2356.15
eval/Actions  Mean 0.158181  Std 0.602815  Max 0.998145  Min -0.998723
eval/Num Paths  1
eval/Average Returns  2356.15
eval/normalized_score  73.018
time/evaluation sampling (s)  0.888107
time/logging (s)  0.00292858
time/sampling batch (s)  0.272014
time/saving (s)  0.00309903
time/training (s)  4.25675
time/epoch (s)  5.4229
time/total (s)  34241.1
Epoch -260
----------------------------------  ---------------
2022-05-10 22:41:24.556743 PDT | [0] Epoch -259 finished
----------------------------------  ---------------
epoch  -259
replay_buffer/size  999996
trainer/num train calls  742000
trainer/Policy Loss  -2.28721
trainer/Log Pis  Mean 2.27804  Std 2.61438  Max 12.6936  Min -6.57163
trainer/policy/mean  Mean 0.147891  Std 0.615074  Max 0.998888  Min -0.998273
trainer/policy/normal/std  Mean 0.374089  Std 0.185428  Max 1.01226  Min 0.0689633
trainer/policy/normal/log_std  Mean -1.14084  Std 0.605224  Max 0.0121841  Min -2.67418
eval/num steps total  491087
eval/num paths total  953
eval/path length  Mean 566  Std 0  Max 566  Min 566
eval/Rewards  Mean 3.20378  Std 0.773445  Max 4.74196  Min 0.97855
eval/Returns  Mean 1813.34  Std 0  Max 1813.34  Min 1813.34
eval/Actions  Mean 0.143952  Std 0.601836  Max 0.998923  Min -0.998294
eval/Num Paths  1
eval/Average Returns  1813.34
eval/normalized_score  56.3396
time/evaluation sampling (s)  0.894469
time/logging (s)  0.002403
time/sampling batch (s)  0.273526
time/saving (s)  0.00304293
time/training (s)  4.28191
time/epoch (s)  5.45535
time/total (s)  34246.5
Epoch -259
----------------------------------  ---------------
2022-05-10 22:41:30.029034 PDT | [0] Epoch -258 finished
----------------------------------  ---------------
epoch  -258
replay_buffer/size  999996
trainer/num train calls  743000
trainer/Policy Loss  -2.21329
trainer/Log Pis  Mean 2.32569  Std 2.6017  Max 12.7136  Min -3.7356
trainer/policy/mean  Mean 0.198916  Std 0.604155  Max 0.997657  Min -0.998778
trainer/policy/normal/std  Mean 0.376009  Std 0.179608  Max 0.935323  Min 0.0655463
trainer/policy/normal/log_std  Mean -1.12268  Std 0.578251  Max -0.0668636  Min -2.725
eval/num steps total  491661
eval/num paths total  954
eval/path length  Mean 574  Std 0  Max 574  Min 574
eval/Rewards  Mean 3.16838  Std 0.74173  Max 4.85068  Min 0.985817
eval/Returns  Mean 1818.65  Std 0  Max 1818.65  Min 1818.65
eval/Actions  Mean 0.149509  Std 0.599644  Max 0.998004  Min -0.998059
eval/Num Paths  1
eval/Average Returns  1818.65
eval/normalized_score  56.5028
time/evaluation sampling (s)  0.896531
time/logging (s)  0.0024552
time/sampling batch (s)  0.272531
time/saving (s)  0.00286822
time/training (s)  4.27431
time/epoch (s)  5.4487
time/total (s)  34252
Epoch -258
----------------------------------  ---------------
2022-05-10 22:41:35.499074 PDT | [0] Epoch -257 finished
----------------------------------  ---------------
epoch  -257
replay_buffer/size  999996
trainer/num train calls  744000
trainer/Policy Loss  -2.57713
trainer/Log Pis  Mean 2.46221  Std 2.55449  Max 11.0768  Min -4.02848
trainer/policy/mean  Mean 0.161919  Std 0.624701  Max 0.999177  Min -0.999245
trainer/policy/normal/std  Mean 0.368559  Std 0.180608  Max 0.942392  Min 0.0672837
trainer/policy/normal/log_std  Mean -1.15304  Std 0.600776  Max -0.0593342  Min -2.69884
eval/num steps total  492169
eval/num paths total  955
eval/path length  Mean 508  Std 0  Max 508  Min 508
eval/Rewards  Mean 3.11601  Std 0.767854  Max 4.9884  Min 0.982537
eval/Returns  Mean 1582.93  Std 0  Max 1582.93  Min 1582.93
eval/Actions  Mean 0.155904  Std 0.590041  Max 0.99756  Min -0.998607
eval/Num Paths  1
eval/Average Returns  1582.93
eval/normalized_score  49.2601
time/evaluation sampling (s)  0.911225
time/logging (s)  0.00220558
time/sampling batch (s)  0.271783
time/saving (s)  0.00281178
time/training (s)  4.25821
time/epoch (s)  5.44624
time/total (s)  34257.4
Epoch -257
----------------------------------  ---------------
2022-05-10 22:41:40.987203 PDT | [0] Epoch -256 finished
----------------------------------  ---------------
epoch  -256
replay_buffer/size  999996
trainer/num train calls  745000
trainer/Policy Loss  -2.35594
trainer/Log Pis  Mean 2.31199  Std 2.5875  Max 9.00382  Min -5.95971
trainer/policy/mean  Mean 0.143628  Std 0.629514  Max 0.995875  Min -0.998704
trainer/policy/normal/std  Mean 0.377717  Std 0.178579  Max 1.02348  Min 0.0651978
trainer/policy/normal/log_std  Mean -1.1197  Std 0.585638  Max 0.0232103  Min -2.73033
eval/num steps total  492654
eval/num paths total  956
eval/path length  Mean 485  Std 0  Max 485  Min 485
eval/Rewards  Mean 3.17261  Std 0.819494  Max 4.80703  Min 0.981495
eval/Returns  Mean 1538.71  Std 0  Max 1538.71  Min 1538.71
eval/Actions  Mean 0.151289  Std 0.596722  Max 0.998241  Min -0.998536
eval/Num Paths  1
eval/Average Returns  1538.71
eval/normalized_score  47.9014
time/evaluation sampling (s)  0.925756
time/logging (s)  0.00218148
time/sampling batch (s)  0.271958
time/saving (s)  0.0029843
time/training (s)  4.26158
time/epoch (s)  5.46446
time/total (s)  34262.9
Epoch -256
----------------------------------  ---------------
2022-05-10 22:41:46.453811 PDT | [0] Epoch -255 finished
----------------------------------  ---------------
epoch  -255
replay_buffer/size  999996
trainer/num train calls  746000
trainer/Policy Loss  -2.25986
trainer/Log Pis  Mean 2.23281  Std 2.53016  Max 9.763  Min -5.79417
trainer/policy/mean  Mean 0.131588  Std 0.619918  Max 0.99895  Min -0.998325
trainer/policy/normal/std  Mean 0.375088  Std 0.177546  Max 0.944504  Min 0.0696084
trainer/policy/normal/log_std  Mean -1.12192  Std 0.571279  Max -0.0570958  Min -2.66487
eval/num steps total  493151
eval/num paths total  957
eval/path length  Mean 497  Std 0  Max 497  Min 497
eval/Rewards  Mean 3.06012  Std 0.749407  Max 4.59627  Min 0.982883
eval/Returns  Mean 1520.88  Std 0  Max 1520.88  Min 1520.88
eval/Actions  Mean 0.160221  Std 0.594308  Max 0.998078  Min -0.998454
eval/Num Paths  1
eval/Average Returns  1520.88
eval/normalized_score  47.3535
time/evaluation sampling (s)  0.923307
time/logging (s)  0.00221703
time/sampling batch (s)  0.271673
time/saving (s)  0.00294791
time/training (s)  4.24277
time/epoch (s)  5.44291
time/total (s)  34268.3
Epoch -255
----------------------------------  ---------------
2022-05-10 22:41:51.936724 PDT | [0] Epoch -254 finished
----------------------------------  ---------------
epoch  -254
replay_buffer/size  999996
trainer/num train calls  747000
trainer/Policy Loss  -2.10167
trainer/Log Pis  Mean 2.14203  Std 2.56712  Max 11.2294  Min -5.64286
trainer/policy/mean  Mean 0.114904  Std 0.612349  Max 0.998999  Min -0.996827
trainer/policy/normal/std  Mean 0.38214  Std 0.189343  Max 1.00127  Min 0.0675848
trainer/policy/normal/log_std  Mean -1.11878  Std 0.60385  Max 0.00126473  Min -2.69437
eval/num steps total  494066
eval/num paths total  959
eval/path length  Mean 457.5  Std 19.5  Max 477  Min 438
eval/Rewards  Mean 3.19312  Std 0.861521  Max 5.55655  Min 0.98027
eval/Returns  Mean 1460.85  Std 69.826  Max 1530.68  Min 1391.03
eval/Actions  Mean 0.152097  Std 0.595774  Max 0.997852  Min -0.998525
eval/Num Paths  2
eval/Average Returns  1460.85
eval/normalized_score  45.5091
time/evaluation sampling (s)  0.92028
time/logging (s)  0.00344796
time/sampling batch (s)  0.272945
time/saving (s)  0.00289429
time/training (s)  4.26088
time/epoch (s)  5.46044
time/total (s)  34273.8
Epoch -254
----------------------------------  ---------------
2022-05-10 22:41:57.376850 PDT | [0] Epoch -253 finished
----------------------------------  ---------------
epoch  -253
replay_buffer/size  999996
trainer/num train calls  748000
trainer/Policy Loss  -2.23163
trainer/Log Pis  Mean 2.19787  Std 2.50222  Max 8.95188  Min -5.75971
trainer/policy/mean  Mean 0.149455  Std 0.617252  Max 0.996361  Min -0.998042
trainer/policy/normal/std  Mean 0.376368  Std 0.183519  Max 0.949853  Min 0.0684531
trainer/policy/normal/log_std  Mean -1.13221  Std 0.602599  Max -0.0514476  Min -2.68161
eval/num steps total  495060
eval/num paths total  961
eval/path length  Mean 497  Std 2  Max 499  Min 495
eval/Rewards  Mean 3.09353  Std 0.780736  Max 4.78459  Min 0.978523
eval/Returns  Mean 1537.49  Std 14.6388  Max 1552.13  Min 1522.85
eval/Actions  Mean 0.156655  Std 0.587499  Max 0.998216  Min -0.998508
eval/Num Paths  2
eval/Average Returns  1537.49
eval/normalized_score  47.8637
time/evaluation sampling (s)  0.899359
time/logging (s)  0.00356404
time/sampling batch (s)  0.270809
time/saving (s)  0.0029445
time/training (s)  4.23992
time/epoch (s)  5.4166
time/total (s)  34279.2
Epoch -253
----------------------------------  ---------------
2022-05-10 22:42:02.819572 PDT | [0] Epoch -252 finished
----------------------------------  ---------------
epoch  -252
replay_buffer/size  999996
trainer/num train calls  749000
trainer/Policy Loss  -2.18539
trainer/Log Pis  Mean 2.17244  Std 2.69337  Max 15.7413  Min -4.29045
trainer/policy/mean  Mean 0.17681  Std 0.608917  Max 0.997468  Min -0.999305
trainer/policy/normal/std  Mean 0.380842  Std 0.181655  Max 1.02231  Min 0.0617399
trainer/policy/normal/log_std  Mean -1.11361  Std 0.589651  Max 0.0220651  Min -2.78483
eval/num steps total  495628
eval/num paths total  962
eval/path length  Mean 568  Std 0  Max 568  Min 568
eval/Rewards  Mean 3.14126  Std 0.740752  Max 4.46828  Min 0.98056
eval/Returns  Mean 1784.23  Std 0  Max 1784.23  Min 1784.23
eval/Actions  Mean 0.156951  Std 0.606039  Max 0.998372  Min -0.997988
eval/Num Paths  1
eval/Average Returns  1784.23
eval/normalized_score  55.4452
time/evaluation sampling (s)  0.908883
time/logging (s)  0.00236262
time/sampling batch (s)  0.271589
time/saving (s)  0.00284652
time/training (s)  4.23237
time/epoch (s)  5.41805
time/total (s)  34284.6
Epoch -252
----------------------------------  ---------------
2022-05-10 22:42:08.243763 PDT | [0] Epoch -251 finished
----------------------------------  ---------------
epoch  -251
replay_buffer/size  999996
trainer/num train calls  750000
trainer/Policy Loss  -2.23698
trainer/Log Pis  Mean 2.29806  Std 2.49177  Max 9.93892  Min -5.54314
trainer/policy/mean  Mean 0.166499  Std 0.605829  Max 0.997335  Min -0.998276
trainer/policy/normal/std  Mean 0.374521  Std 0.184122  Max 1.03527  Min 0.062879
trainer/policy/normal/log_std  Mean -1.1408  Std 0.61084  Max 0.0346655  Min -2.76654
eval/num steps total  496140
eval/num paths total  963
eval/path length  Mean 512  Std 0  Max 512  Min 512
eval/Rewards  Mean 3.14823  Std 0.803403  Max 5.20319  Min 0.984537
eval/Returns  Mean 1611.89  Std 0  Max 1611.89  Min 1611.89
eval/Actions  Mean 0.15987  Std 0.596153  Max 0.998069  Min -0.99721
eval/Num Paths  1
eval/Average Returns  1611.89
eval/normalized_score  50.1499
time/evaluation sampling (s)  0.891598
time/logging (s)  0.00220276
time/sampling batch (s)  0.27166
time/saving (s)  0.00284927
time/training (s)  4.23226
time/epoch (s)  5.40057
time/total (s)  34290.1
Epoch -251
----------------------------------  ---------------
2022-05-10 22:42:13.675414 PDT | [0] Epoch -250 finished
----------------------------------  ---------------
epoch  -250
replay_buffer/size  999996
trainer/num train calls  751000
trainer/Policy Loss  -2.2308
trainer/Log Pis  Mean 2.24485  Std 2.5401  Max 8.76262  Min -5.83092
trainer/policy/mean  Mean 0.155532  Std 0.610603  Max 0.997236  Min -0.997689
trainer/policy/normal/std  Mean 0.3791  Std 0.185972  Max 0.98264  Min 0.0700195
trainer/policy/normal/log_std  Mean -1.12365  Std 0.597699  Max -0.0175128  Min -2.65898
eval/num steps total  496654
eval/num paths total  964
eval/path length  Mean 514  Std 0  Max 514  Min 514
eval/Rewards  Mean 3.17534  Std 0.801136  Max 5.23188  Min 0.974984
eval/Returns  Mean 1632.12  Std 0  Max 1632.12  Min 1632.12
eval/Actions  Mean 0.162775  Std 0.597096  Max 0.998195  Min -0.997085
eval/Num Paths  1
eval/Average Returns  1632.12
eval/normalized_score  50.7715
time/evaluation sampling (s)  0.895371
time/logging (s)  0.002217
time/sampling batch (s)  0.272805
time/saving (s)  0.00299502
time/training (s)  4.23483
time/epoch (s)  5.40822
time/total (s)  34295.5
Epoch -250
----------------------------------  ---------------
2022-05-10 22:42:19.194442 PDT | [0] Epoch -249 finished
----------------------------------  ---------------
epoch  -249
replay_buffer/size  999996
trainer/num train calls  752000
trainer/Policy Loss  -2.3327
trainer/Log Pis  Mean 2.42277  Std 2.53907  Max 12.5948  Min -5.94593
trainer/policy/mean  Mean 0.16769  Std 0.623054  Max 0.998458  Min -0.998231
trainer/policy/normal/std  Mean 0.386956  Std 0.18347  Max 0.905932  Min 0.0628572
trainer/policy/normal/log_std  Mean -1.0961  Std 0.586188  Max -0.0987908  Min -2.76689
eval/num steps total  497231
eval/num paths total  965
eval/path length  Mean 577  Std 0  Max 577  Min 577
eval/Rewards  Mean 3.16975  Std 0.702041  Max 4.43126  Min 0.984685
eval/Returns  Mean 1828.95  Std 0  Max 1828.95  Min 1828.95
eval/Actions  Mean 0.159737  Std 0.615071  Max 0.998515  Min -0.998549
eval/Num Paths  1
eval/Average Returns  1828.95
eval/normalized_score  56.8191
time/evaluation sampling (s)  0.969437
time/logging (s)  0.00244255
time/sampling batch (s)  0.278043
time/saving (s)  0.00293578
time/training (s)  4.24284
time/epoch (s)  5.4957
time/total (s)  34301
Epoch -249
----------------------------------  ---------------
2022-05-10 22:42:24.671253 PDT | [0] Epoch -248 finished
----------------------------------  ---------------
epoch  -248
replay_buffer/size  999996
trainer/num train calls  753000
trainer/Policy Loss  -2.2511
trainer/Log Pis  Mean 2.29199  Std 2.57016  Max 10.7014  Min -4.91933
trainer/policy/mean  Mean 0.179968  Std 0.608619  Max 0.997385  Min -0.997766
trainer/policy/normal/std  Mean 0.384087  Std 0.193564  Max 1.00543  Min 0.0667718
trainer/policy/normal/log_std  Mean -1.11912  Std 0.614529  Max 0.00541209  Min -2.70647
eval/num steps total  497881
eval/num paths total  966
eval/path length  Mean 650  Std 0  Max 650  Min 650
eval/Rewards  Mean 3.23363  Std 0.747958  Max 4.67881  Min 0.980068
eval/Returns  Mean 2101.86  Std 0  Max 2101.86  Min 2101.86
eval/Actions  Mean 0.162332  Std 0.610889  Max 0.99834  Min -0.998768
eval/Num Paths  1
eval/Average Returns  2101.86
eval/normalized_score  65.2046
time/evaluation sampling (s)  0.894531
time/logging (s)  0.00255302
time/sampling batch (s)  0.272738
time/saving (s)  0.0028664
time/training (s)  4.2806
time/epoch (s)  5.45329
time/total (s)  34306.4
Epoch -248
----------------------------------  ---------------
2022-05-10 22:42:30.108096 PDT | [0] Epoch -247 finished
----------------------------------  ---------------
epoch  -247
replay_buffer/size  999996
trainer/num train calls  754000
trainer/Policy Loss  -2.18185
trainer/Log Pis  Mean 2.18372  Std 2.53256  Max 11.7879  Min -4.10481
trainer/policy/mean  Mean 0.131509  Std 0.610843  Max 0.998205  Min -0.997868
trainer/policy/normal/std  Mean 0.377582  Std 0.185  Max 0.90005  Min 0.0672339
trainer/policy/normal/log_std  Mean -1.12824  Std 0.59953  Max -0.105305  Min -2.69958
eval/num steps total  498446
eval/num paths total  967
eval/path length  Mean 565  Std 0  Max 565  Min 565
eval/Rewards  Mean 3.21023  Std 0.76633  Max 4.73579  Min 0.981753
eval/Returns  Mean 1813.78  Std 0  Max 1813.78  Min 1813.78
eval/Actions  Mean 0.15157  Std 0.604212  Max 0.997421  Min -0.998163
eval/Num Paths  1
eval/Average Returns  1813.78
eval/normalized_score  56.353
time/evaluation sampling (s)  0.893164
time/logging (s)  0.0022859
time/sampling batch (s)  0.271777
time/saving (s)  0.00282432
time/training (s)  4.24313
time/epoch (s)  5.41318
time/total (s)  34311.8
Epoch -247
----------------------------------  ---------------
2022-05-10 22:42:35.573194 PDT | [0] Epoch -246 finished
----------------------------------  ---------------
epoch  -246
replay_buffer/size  999996
trainer/num train calls  755000
trainer/Policy Loss  -2.49611
trainer/Log Pis  Mean 2.42414  Std 2.52807  Max 11.8864  Min -6.37363
trainer/policy/mean  Mean 0.10595  Std 0.639887  Max 0.998061  Min -0.998891
trainer/policy/normal/std  Mean 0.379484  Std 0.178604  Max 1.03821  Min 0.0688894
trainer/policy/normal/log_std  Mean -1.11114  Std 0.574205  Max 0.0374985  Min -2.67525
eval/num steps total  498944
eval/num paths total  968
eval/path length  Mean 498  Std 0  Max 498  Min 498
eval/Rewards  Mean 3.09739  Std 0.774065  Max 4.70127  Min 0.984965
eval/Returns  Mean 1542.5  Std 0  Max 1542.5  Min 1542.5
eval/Actions  Mean 0.152665  Std 0.593815  Max 0.997457  Min -0.999171
eval/Num Paths  1
eval/Average Returns  1542.5
eval/normalized_score  48.0177
time/evaluation sampling (s)  0.922669
time/logging (s)  0.00273752
time/sampling batch (s)  0.272231
time/saving (s)  0.00358616
time/training (s)  4.24085
time/epoch (s)  5.44207
time/total (s)  34317.3
Epoch -246
----------------------------------  ---------------
2022-05-10 22:42:41.030247 PDT | [0] Epoch -245 finished
----------------------------------  ---------------
epoch  -245
replay_buffer/size  999996
trainer/num train calls  756000
trainer/Policy Loss  -2.23062
trainer/Log Pis  Mean 2.25412  Std 2.59875  Max 13.7505  Min -3.77248
trainer/policy/mean  Mean 0.141726  Std 0.622394  Max 0.998746  Min -0.996905
trainer/policy/normal/std  Mean 0.384496  Std 0.180603  Max 0.960426  Min 0.0660163
trainer/policy/normal/log_std  Mean -1.10023  Std 0.582857  Max -0.0403786  Min -2.71785
eval/num steps total  499504
eval/num paths total  969
eval/path length  Mean 560  Std 0  Max
560 eval/path length Min 560 eval/Rewards Mean 3.1729 eval/Rewards Std 0.79111 eval/Rewards Max 4.7251 eval/Rewards Min 0.977853 eval/Returns Mean 1776.82 eval/Returns Std 0 eval/Returns Max 1776.82 eval/Returns Min 1776.82 eval/Actions Mean 0.149916 eval/Actions Std 0.602602 eval/Actions Max 0.997605 eval/Actions Min -0.998169 eval/Num Paths 1 eval/Average Returns 1776.82 eval/normalized_score 55.2176 time/evaluation sampling (s) 0.908687 time/logging (s) 0.00237824 time/sampling batch (s) 0.272535 time/saving (s) 0.00323728 time/training (s) 4.24571 time/epoch (s) 5.43254 time/total (s) 34322.7 Epoch -245 ---------------------------------- --------------- 2022-05-10 22:42:46.461387 PDT | [0] Epoch -244 finished ---------------------------------- --------------- epoch -244 replay_buffer/size 999996 trainer/num train calls 757000 trainer/Policy Loss -2.13082 trainer/Log Pis Mean 2.08417 trainer/Log Pis Std 2.54023 trainer/Log Pis Max 9.13954 trainer/Log Pis Min -4.71931 trainer/policy/mean Mean 0.140301 trainer/policy/mean Std 0.604431 trainer/policy/mean Max 0.99835 trainer/policy/mean Min -0.998709 trainer/policy/normal/std Mean 0.380908 trainer/policy/normal/std Std 0.185275 trainer/policy/normal/std Max 1.00919 trainer/policy/normal/std Min 0.0648496 trainer/policy/normal/log_std Mean -1.11473 trainer/policy/normal/log_std Std 0.587817 trainer/policy/normal/log_std Max 0.00914941 trainer/policy/normal/log_std Min -2.73568 eval/num steps total 500077 eval/num paths total 970 eval/path length Mean 573 eval/path length Std 0 eval/path length Max 573 eval/path length Min 573 eval/Rewards Mean 3.12057 eval/Rewards Std 0.742694 eval/Rewards Max 4.61334 eval/Rewards Min 0.981806 eval/Returns Mean 1788.09 eval/Returns Std 0 eval/Returns Max 1788.09 eval/Returns Min 1788.09 eval/Actions Mean 0.151529 eval/Actions Std 0.596922 eval/Actions Max 0.99821 eval/Actions Min -0.997321 eval/Num Paths 1 eval/Average Returns 1788.09 eval/normalized_score 55.5637 time/evaluation 
sampling (s) 0.916887 time/logging (s) 0.00245016 time/sampling batch (s) 0.271366 time/saving (s) 0.00301573 time/training (s) 4.21386 time/epoch (s) 5.40758 time/total (s) 34328.1 Epoch -244 ---------------------------------- --------------- 2022-05-10 22:42:51.908720 PDT | [0] Epoch -243 finished ---------------------------------- --------------- epoch -243 replay_buffer/size 999996 trainer/num train calls 758000 trainer/Policy Loss -2.24167 trainer/Log Pis Mean 2.31214 trainer/Log Pis Std 2.66233 trainer/Log Pis Max 9.16486 trainer/Log Pis Min -4.88192 trainer/policy/mean Mean 0.144391 trainer/policy/mean Std 0.616537 trainer/policy/mean Max 0.996843 trainer/policy/mean Min -0.997357 trainer/policy/normal/std Mean 0.365057 trainer/policy/normal/std Std 0.176127 trainer/policy/normal/std Max 0.936085 trainer/policy/normal/std Min 0.0668594 trainer/policy/normal/log_std Mean -1.15686 trainer/policy/normal/log_std Std 0.588574 trainer/policy/normal/log_std Max -0.0660494 trainer/policy/normal/log_std Min -2.70516 eval/num steps total 500633 eval/num paths total 971 eval/path length Mean 556 eval/path length Std 0 eval/path length Max 556 eval/path length Min 556 eval/Rewards Mean 3.18317 eval/Rewards Std 0.848811 eval/Rewards Max 4.85715 eval/Rewards Min 0.981204 eval/Returns Mean 1769.84 eval/Returns Std 0 eval/Returns Max 1769.84 eval/Returns Min 1769.84 eval/Actions Mean 0.163545 eval/Actions Std 0.575286 eval/Actions Max 0.99822 eval/Actions Min -0.999196 eval/Num Paths 1 eval/Average Returns 1769.84 eval/normalized_score 55.003 time/evaluation sampling (s) 0.899396 time/logging (s) 0.00237932 time/sampling batch (s) 0.271538 time/saving (s) 0.00293352 time/training (s) 4.24726 time/epoch (s) 5.42351 time/total (s) 34333.6 Epoch -243 ---------------------------------- --------------- 2022-05-10 22:42:57.337349 PDT | [0] Epoch -242 finished ---------------------------------- --------------- epoch -242 replay_buffer/size 999996 trainer/num train calls 759000 
trainer/Policy Loss -1.9466 trainer/Log Pis Mean 1.98507 trainer/Log Pis Std 2.42256 trainer/Log Pis Max 9.24016 trainer/Log Pis Min -5.12408 trainer/policy/mean Mean 0.16516 trainer/policy/mean Std 0.60287 trainer/policy/mean Max 0.996681 trainer/policy/mean Min -0.997723 trainer/policy/normal/std Mean 0.378863 trainer/policy/normal/std Std 0.183402 trainer/policy/normal/std Max 0.956549 trainer/policy/normal/std Min 0.0663686 trainer/policy/normal/log_std Mean -1.12482 trainer/policy/normal/log_std Std 0.603414 trainer/policy/normal/log_std Max -0.0444233 trainer/policy/normal/log_std Min -2.71253 eval/num steps total 501211 eval/num paths total 972 eval/path length Mean 578 eval/path length Std 0 eval/path length Max 578 eval/path length Min 578 eval/Rewards Mean 3.18067 eval/Rewards Std 0.72957 eval/Rewards Max 4.83689 eval/Rewards Min 0.977061 eval/Returns Mean 1838.43 eval/Returns Std 0 eval/Returns Max 1838.43 eval/Returns Min 1838.43 eval/Actions Mean 0.146951 eval/Actions Std 0.609286 eval/Actions Max 0.997755 eval/Actions Min -0.999144 eval/Num Paths 1 eval/Average Returns 1838.43 eval/normalized_score 57.1105 time/evaluation sampling (s) 0.896573 time/logging (s) 0.00253823 time/sampling batch (s) 0.273206 time/saving (s) 0.0029856 time/training (s) 4.22978 time/epoch (s) 5.40509 time/total (s) 34339 Epoch -242 ---------------------------------- --------------- 2022-05-10 22:43:02.761843 PDT | [0] Epoch -241 finished ---------------------------------- --------------- epoch -241 replay_buffer/size 999996 trainer/num train calls 760000 trainer/Policy Loss -2.28062 trainer/Log Pis Mean 2.43086 trainer/Log Pis Std 2.63057 trainer/Log Pis Max 9.12087 trainer/Log Pis Min -5.70911 trainer/policy/mean Mean 0.14419 trainer/policy/mean Std 0.613085 trainer/policy/mean Max 0.997783 trainer/policy/mean Min -0.997546 trainer/policy/normal/std Mean 0.366849 trainer/policy/normal/std Std 0.178893 trainer/policy/normal/std Max 0.924855 trainer/policy/normal/std Min 
0.0665812 trainer/policy/normal/log_std Mean -1.15749 trainer/policy/normal/log_std Std 0.601558 trainer/policy/normal/log_std Max -0.0781186 trainer/policy/normal/log_std Min -2.70933 eval/num steps total 501746 eval/num paths total 973 eval/path length Mean 535 eval/path length Std 0 eval/path length Max 535 eval/path length Min 535 eval/Rewards Mean 3.24646 eval/Rewards Std 0.842559 eval/Rewards Max 5.59501 eval/Rewards Min 0.982844 eval/Returns Mean 1736.86 eval/Returns Std 0 eval/Returns Max 1736.86 eval/Returns Min 1736.86 eval/Actions Mean 0.167503 eval/Actions Std 0.595581 eval/Actions Max 0.998646 eval/Actions Min -0.998936 eval/Num Paths 1 eval/Average Returns 1736.86 eval/normalized_score 53.9895 time/evaluation sampling (s) 0.896508 time/logging (s) 0.00232842 time/sampling batch (s) 0.271153 time/saving (s) 0.00294805 time/training (s) 4.2277 time/epoch (s) 5.40064 time/total (s) 34344.4 Epoch -241 ---------------------------------- --------------- 2022-05-10 22:43:08.183962 PDT | [0] Epoch -240 finished ---------------------------------- --------------- epoch -240 replay_buffer/size 999996 trainer/num train calls 761000 trainer/Policy Loss -2.20506 trainer/Log Pis Mean 2.17871 trainer/Log Pis Std 2.63894 trainer/Log Pis Max 15.5761 trainer/Log Pis Min -4.52683 trainer/policy/mean Mean 0.172876 trainer/policy/mean Std 0.610517 trainer/policy/mean Max 0.997533 trainer/policy/mean Min -0.999711 trainer/policy/normal/std Mean 0.391213 trainer/policy/normal/std Std 0.190389 trainer/policy/normal/std Max 1.05461 trainer/policy/normal/std Min 0.0689105 trainer/policy/normal/log_std Mean -1.09133 trainer/policy/normal/log_std Std 0.597711 trainer/policy/normal/log_std Max 0.0531729 trainer/policy/normal/log_std Min -2.67495 eval/num steps total 502278 eval/num paths total 974 eval/path length Mean 532 eval/path length Std 0 eval/path length Max 532 eval/path length Min 532 eval/Rewards Mean 3.17622 eval/Rewards Std 0.809753 eval/Rewards Max 5.4562 
eval/Rewards Min 0.985424 eval/Returns Mean 1689.75 eval/Returns Std 0 eval/Returns Max 1689.75 eval/Returns Min 1689.75 eval/Actions Mean 0.148458 eval/Actions Std 0.586328 eval/Actions Max 0.998382 eval/Actions Min -0.997684 eval/Num Paths 1 eval/Average Returns 1689.75 eval/normalized_score 52.5422 time/evaluation sampling (s) 0.890172 time/logging (s) 0.00231611 time/sampling batch (s) 0.271379 time/saving (s) 0.00282938 time/training (s) 4.23163 time/epoch (s) 5.39833 time/total (s) 34349.8 Epoch -240 ---------------------------------- --------------- 2022-05-10 22:43:13.604299 PDT | [0] Epoch -239 finished ---------------------------------- --------------- epoch -239 replay_buffer/size 999996 trainer/num train calls 762000 trainer/Policy Loss -2.13464 trainer/Log Pis Mean 2.14454 trainer/Log Pis Std 2.61865 trainer/Log Pis Max 11.9743 trainer/Log Pis Min -5.47815 trainer/policy/mean Mean 0.14396 trainer/policy/mean Std 0.616008 trainer/policy/mean Max 0.998135 trainer/policy/mean Min -0.998909 trainer/policy/normal/std Mean 0.379883 trainer/policy/normal/std Std 0.184247 trainer/policy/normal/std Max 0.953617 trainer/policy/normal/std Min 0.0706954 trainer/policy/normal/log_std Mean -1.11662 trainer/policy/normal/log_std Std 0.586322 trainer/policy/normal/log_std Max -0.0474931 trainer/policy/normal/log_std Min -2.64938 eval/num steps total 502780 eval/num paths total 975 eval/path length Mean 502 eval/path length Std 0 eval/path length Max 502 eval/path length Min 502 eval/Rewards Mean 3.14808 eval/Rewards Std 0.786183 eval/Rewards Max 4.86181 eval/Rewards Min 0.985363 eval/Returns Mean 1580.34 eval/Returns Std 0 eval/Returns Max 1580.34 eval/Returns Min 1580.34 eval/Actions Mean 0.16491 eval/Actions Std 0.599547 eval/Actions Max 0.998827 eval/Actions Min -0.998531 eval/Num Paths 1 eval/Average Returns 1580.34 eval/normalized_score 49.1804 time/evaluation sampling (s) 0.896507 time/logging (s) 0.00216992 time/sampling batch (s) 0.271259 time/saving (s) 
0.00277636 time/training (s) 4.22388 time/epoch (s) 5.39659 time/total (s) 34355.2 Epoch -239 ---------------------------------- --------------- 2022-05-10 22:43:19.038680 PDT | [0] Epoch -238 finished ---------------------------------- --------------- epoch -238 replay_buffer/size 999996 trainer/num train calls 763000 trainer/Policy Loss -2.12492 trainer/Log Pis Mean 2.27263 trainer/Log Pis Std 2.5718 trainer/Log Pis Max 9.57799 trainer/Log Pis Min -6.79869 trainer/policy/mean Mean 0.1353 trainer/policy/mean Std 0.607081 trainer/policy/mean Max 0.998796 trainer/policy/mean Min -0.997789 trainer/policy/normal/std Mean 0.366994 trainer/policy/normal/std Std 0.178845 trainer/policy/normal/std Max 1.21151 trainer/policy/normal/std Min 0.0674397 trainer/policy/normal/log_std Mean -1.15353 trainer/policy/normal/log_std Std 0.591813 trainer/policy/normal/log_std Max 0.19187 trainer/policy/normal/log_std Min -2.69652 eval/num steps total 503356 eval/num paths total 976 eval/path length Mean 576 eval/path length Std 0 eval/path length Max 576 eval/path length Min 576 eval/Rewards Mean 3.20825 eval/Rewards Std 0.771528 eval/Rewards Max 4.74481 eval/Rewards Min 0.98363 eval/Returns Mean 1847.95 eval/Returns Std 0 eval/Returns Max 1847.95 eval/Returns Min 1847.95 eval/Actions Mean 0.167767 eval/Actions Std 0.608263 eval/Actions Max 0.998651 eval/Actions Min -0.99879 eval/Num Paths 1 eval/Average Returns 1847.95 eval/normalized_score 57.4031 time/evaluation sampling (s) 0.889269 time/logging (s) 0.00247524 time/sampling batch (s) 0.272415 time/saving (s) 0.00298583 time/training (s) 4.24408 time/epoch (s) 5.41122 time/total (s) 34360.6 Epoch -238 ---------------------------------- --------------- 2022-05-10 22:43:24.480656 PDT | [0] Epoch -237 finished ---------------------------------- --------------- epoch -237 replay_buffer/size 999996 trainer/num train calls 764000 trainer/Policy Loss -2.36066 trainer/Log Pis Mean 2.35611 trainer/Log Pis Std 2.53568 trainer/Log Pis Max 
12.356 trainer/Log Pis Min -3.95457 trainer/policy/mean Mean 0.165203 trainer/policy/mean Std 0.617011 trainer/policy/mean Max 0.997188 trainer/policy/mean Min -0.998236 trainer/policy/normal/std Mean 0.380645 trainer/policy/normal/std Std 0.188257 trainer/policy/normal/std Max 1.02116 trainer/policy/normal/std Min 0.0615839 trainer/policy/normal/log_std Mean -1.12472 trainer/policy/normal/log_std Std 0.61 trainer/policy/normal/log_std Max 0.0209348 trainer/policy/normal/log_std Min -2.78735 eval/num steps total 503934 eval/num paths total 977 eval/path length Mean 578 eval/path length Std 0 eval/path length Max 578 eval/path length Min 578 eval/Rewards Mean 3.14551 eval/Rewards Std 0.695005 eval/Rewards Max 4.60989 eval/Rewards Min 0.976458 eval/Returns Mean 1818.11 eval/Returns Std 0 eval/Returns Max 1818.11 eval/Returns Min 1818.11 eval/Actions Mean 0.153237 eval/Actions Std 0.608796 eval/Actions Max 0.998127 eval/Actions Min -0.998328 eval/Num Paths 1 eval/Average Returns 1818.11 eval/normalized_score 56.486 time/evaluation sampling (s) 0.898958 time/logging (s) 0.00243529 time/sampling batch (s) 0.272178 time/saving (s) 0.00303857 time/training (s) 4.2415 time/epoch (s) 5.41811 time/total (s) 34366 Epoch -237 ---------------------------------- --------------- 2022-05-10 22:43:29.943780 PDT | [0] Epoch -236 finished ---------------------------------- --------------- epoch -236 replay_buffer/size 999996 trainer/num train calls 765000 trainer/Policy Loss -2.12873 trainer/Log Pis Mean 2.07161 trainer/Log Pis Std 2.67269 trainer/Log Pis Max 10.905 trainer/Log Pis Min -6.29555 trainer/policy/mean Mean 0.128696 trainer/policy/mean Std 0.616973 trainer/policy/mean Max 0.997572 trainer/policy/mean Min -0.998112 trainer/policy/normal/std Mean 0.389292 trainer/policy/normal/std Std 0.190537 trainer/policy/normal/std Max 1.02718 trainer/policy/normal/std Min 0.0659853 trainer/policy/normal/log_std Mean -1.09708 trainer/policy/normal/log_std Std 0.599222 
trainer/policy/normal/log_std Max 0.0268134 trainer/policy/normal/log_std Min -2.71832 eval/num steps total 504705 eval/num paths total 979 eval/path length Mean 385.5 eval/path length Std 27.5 eval/path length Max 413 eval/path length Min 358 eval/Rewards Mean 3.11459 eval/Rewards Std 0.943002 eval/Rewards Max 4.95197 eval/Rewards Min 0.976026 eval/Returns Mean 1200.67 eval/Returns Std 122.459 eval/Returns Max 1323.13 eval/Returns Min 1078.21 eval/Actions Mean 0.118585 eval/Actions Std 0.561869 eval/Actions Max 0.998009 eval/Actions Min -0.99872 eval/Num Paths 2 eval/Average Returns 1200.67 eval/normalized_score 37.5148 time/evaluation sampling (s) 0.905543 time/logging (s) 0.00306157 time/sampling batch (s) 0.273053 time/saving (s) 0.00298079 time/training (s) 4.2552 time/epoch (s) 5.43984 time/total (s) 34371.4 Epoch -236 ---------------------------------- --------------- 2022-05-10 22:43:35.400381 PDT | [0] Epoch -235 finished ---------------------------------- --------------- epoch -235 replay_buffer/size 999996 trainer/num train calls 766000 trainer/Policy Loss -2.17462 trainer/Log Pis Mean 2.30188 trainer/Log Pis Std 2.6899 trainer/Log Pis Max 12.8759 trainer/Log Pis Min -4.64275 trainer/policy/mean Mean 0.127348 trainer/policy/mean Std 0.616678 trainer/policy/mean Max 0.997723 trainer/policy/mean Min -0.998276 trainer/policy/normal/std Mean 0.380486 trainer/policy/normal/std Std 0.184756 trainer/policy/normal/std Max 0.883621 trainer/policy/normal/std Min 0.0603562 trainer/policy/normal/log_std Mean -1.11856 trainer/policy/normal/log_std Std 0.595885 trainer/policy/normal/log_std Max -0.123727 trainer/policy/normal/log_std Min -2.80749 eval/num steps total 505224 eval/num paths total 980 eval/path length Mean 519 eval/path length Std 0 eval/path length Max 519 eval/path length Min 519 eval/Rewards Mean 3.19256 eval/Rewards Std 0.857943 eval/Rewards Max 5.56556 eval/Rewards Min 0.986577 eval/Returns Mean 1656.94 eval/Returns Std 0 eval/Returns Max 1656.94 
eval/Returns Min 1656.94 eval/Actions Mean 0.149783 eval/Actions Std 0.586085 eval/Actions Max 0.998138 eval/Actions Min -0.998349 eval/Num Paths 1 eval/Average Returns 1656.94 eval/normalized_score 51.534 time/evaluation sampling (s) 0.906677 time/logging (s) 0.00225336 time/sampling batch (s) 0.272221 time/saving (s) 0.00294836 time/training (s) 4.24799 time/epoch (s) 5.43209 time/total (s) 34376.9 Epoch -235 ---------------------------------- --------------- 2022-05-10 22:43:40.846161 PDT | [0] Epoch -234 finished ---------------------------------- --------------- epoch -234 replay_buffer/size 999996 trainer/num train calls 767000 trainer/Policy Loss -2.20369 trainer/Log Pis Mean 2.20992 trainer/Log Pis Std 2.65312 trainer/Log Pis Max 10.0102 trainer/Log Pis Min -4.97868 trainer/policy/mean Mean 0.12674 trainer/policy/mean Std 0.619871 trainer/policy/mean Max 0.998459 trainer/policy/mean Min -0.997591 trainer/policy/normal/std Mean 0.372074 trainer/policy/normal/std Std 0.184442 trainer/policy/normal/std Max 0.954742 trainer/policy/normal/std Min 0.0643347 trainer/policy/normal/log_std Mean -1.14203 trainer/policy/normal/log_std Std 0.593774 trainer/policy/normal/log_std Max -0.0463137 trainer/policy/normal/log_std Min -2.74366 eval/num steps total 505874 eval/num paths total 981 eval/path length Mean 650 eval/path length Std 0 eval/path length Max 650 eval/path length Min 650 eval/Rewards Mean 3.25647 eval/Rewards Std 0.738986 eval/Rewards Max 4.80268 eval/Rewards Min 0.980591 eval/Returns Mean 2116.71 eval/Returns Std 0 eval/Returns Max 2116.71 eval/Returns Min 2116.71 eval/Actions Mean 0.145782 eval/Actions Std 0.610018 eval/Actions Max 0.997464 eval/Actions Min -0.998531 eval/Num Paths 1 eval/Average Returns 2116.71 eval/normalized_score 65.6609 time/evaluation sampling (s) 0.915553 time/logging (s) 0.00261953 time/sampling batch (s) 0.271118 time/saving (s) 0.00287793 time/training (s) 4.22961 time/epoch (s) 5.42178 time/total (s) 34382.3 Epoch -234 
---------------------------------- --------------- 2022-05-10 22:43:46.307548 PDT | [0] Epoch -233 finished ---------------------------------- --------------- epoch -233 replay_buffer/size 999996 trainer/num train calls 768000 trainer/Policy Loss -2.30833 trainer/Log Pis Mean 2.2126 trainer/Log Pis Std 2.66481 trainer/Log Pis Max 10.3826 trainer/Log Pis Min -7.84503 trainer/policy/mean Mean 0.115117 trainer/policy/mean Std 0.626826 trainer/policy/mean Max 0.998762 trainer/policy/mean Min -0.998632 trainer/policy/normal/std Mean 0.383286 trainer/policy/normal/std Std 0.186916 trainer/policy/normal/std Max 1.01212 trainer/policy/normal/std Min 0.0679726 trainer/policy/normal/log_std Mean -1.11077 trainer/policy/normal/log_std Std 0.593556 trainer/policy/normal/log_std Max 0.0120426 trainer/policy/normal/log_std Min -2.68865 eval/num steps total 506712 eval/num paths total 982 eval/path length Mean 838 eval/path length Std 0 eval/path length Max 838 eval/path length Min 838 eval/Rewards Mean 3.25725 eval/Rewards Std 0.623756 eval/Rewards Max 4.83541 eval/Rewards Min 0.984663 eval/Returns Mean 2729.58 eval/Returns Std 0 eval/Returns Max 2729.58 eval/Returns Min 2729.58 eval/Actions Mean 0.16375 eval/Actions Std 0.618695 eval/Actions Max 0.998676 eval/Actions Min -0.998858 eval/Num Paths 1 eval/Average Returns 2729.58 eval/normalized_score 84.4919 time/evaluation sampling (s) 0.902159 time/logging (s) 0.00310382 time/sampling batch (s) 0.273631 time/saving (s) 0.00297988 time/training (s) 4.25629 time/epoch (s) 5.43817 time/total (s) 34387.7 Epoch -233 ---------------------------------- --------------- 2022-05-10 22:43:51.789751 PDT | [0] Epoch -232 finished ---------------------------------- --------------- epoch -232 replay_buffer/size 999996 trainer/num train calls 769000 trainer/Policy Loss -2.0659 trainer/Log Pis Mean 2.17988 trainer/Log Pis Std 2.43641 trainer/Log Pis Max 8.97316 trainer/Log Pis Min -3.53857 trainer/policy/mean Mean 0.133977 trainer/policy/mean 
Std 0.610987 trainer/policy/mean Max 0.997819 trainer/policy/mean Min -0.998559 trainer/policy/normal/std Mean 0.375389 trainer/policy/normal/std Std 0.18199 trainer/policy/normal/std Max 0.962848 trainer/policy/normal/std Min 0.064217 trainer/policy/normal/log_std Mean -1.13113 trainer/policy/normal/log_std Std 0.59459 trainer/policy/normal/log_std Max -0.0378594 trainer/policy/normal/log_std Min -2.74549 eval/num steps total 507255 eval/num paths total 983 eval/path length Mean 543 eval/path length Std 0 eval/path length Max 543 eval/path length Min 543 eval/Rewards Mean 3.15963 eval/Rewards Std 0.841562 eval/Rewards Max 4.86573 eval/Rewards Min 0.978428 eval/Returns Mean 1715.68 eval/Returns Std 0 eval/Returns Max 1715.68 eval/Returns Min 1715.68 eval/Actions Mean 0.147752 eval/Actions Std 0.562859 eval/Actions Max 0.996949 eval/Actions Min -0.998872 eval/Num Paths 1 eval/Average Returns 1715.68 eval/normalized_score 53.3389 time/evaluation sampling (s) 0.936002 time/logging (s) 0.00237752 time/sampling batch (s) 0.27192 time/saving (s) 0.00304911 time/training (s) 4.24461 time/epoch (s) 5.45796 time/total (s) 34393.2 Epoch -232 ---------------------------------- --------------- 2022-05-10 22:43:57.231456 PDT | [0] Epoch -231 finished ---------------------------------- --------------- epoch -231 replay_buffer/size 999996 trainer/num train calls 770000 trainer/Policy Loss -2.14415 trainer/Log Pis Mean 2.15611 trainer/Log Pis Std 2.61444 trainer/Log Pis Max 9.63836 trainer/Log Pis Min -5.1755 trainer/policy/mean Mean 0.140538 trainer/policy/mean Std 0.610995 trainer/policy/mean Max 0.998548 trainer/policy/mean Min -0.997262 trainer/policy/normal/std Mean 0.382811 trainer/policy/normal/std Std 0.190369 trainer/policy/normal/std Max 0.986989 trainer/policy/normal/std Min 0.070846 trainer/policy/normal/log_std Mean -1.11705 trainer/policy/normal/log_std Std 0.602259 trainer/policy/normal/log_std Max -0.0130962 trainer/policy/normal/log_std Min -2.64725 eval/num steps 
total 507802 eval/num paths total 984 eval/path length Mean 547 eval/path length Std 0 eval/path length Max 547 eval/path length Min 547 eval/Rewards Mean 3.18709 eval/Rewards Std 0.842596 eval/Rewards Max 5.18421 eval/Rewards Min 0.978366 eval/Returns Mean 1743.34 eval/Returns Std 0 eval/Returns Max 1743.34 eval/Returns Min 1743.34 eval/Actions Mean 0.152574 eval/Actions Std 0.581153 eval/Actions Max 0.997604 eval/Actions Min -0.998338 eval/Num Paths 1 eval/Average Returns 1743.34 eval/normalized_score 54.1887 time/evaluation sampling (s) 0.907462 time/logging (s) 0.00238006 time/sampling batch (s) 0.271975 time/saving (s) 0.00290999 time/training (s) 4.23283 time/epoch (s) 5.41755 time/total (s) 34398.6 Epoch -231 ---------------------------------- --------------- 2022-05-10 22:44:02.750139 PDT | [0] Epoch -230 finished ---------------------------------- --------------- epoch -230 replay_buffer/size 999996 trainer/num train calls 771000 trainer/Policy Loss -2.32415 trainer/Log Pis Mean 2.22815 trainer/Log Pis Std 2.55855 trainer/Log Pis Max 9.93921 trainer/Log Pis Min -5.67726 trainer/policy/mean Mean 0.138655 trainer/policy/mean Std 0.621414 trainer/policy/mean Max 0.998732 trainer/policy/mean Min -0.997295 trainer/policy/normal/std Mean 0.385297 trainer/policy/normal/std Std 0.191432 trainer/policy/normal/std Max 1.14763 trainer/policy/normal/std Min 0.0662682 trainer/policy/normal/log_std Mean -1.11206 trainer/policy/normal/log_std Std 0.607431 trainer/policy/normal/log_std Max 0.137697 trainer/policy/normal/log_std Min -2.71405 eval/num steps total 508242 eval/num paths total 985 eval/path length Mean 440 eval/path length Std 0 eval/path length Max 440 eval/path length Min 440 eval/Rewards Mean 3.17519 eval/Rewards Std 0.910199 eval/Rewards Max 5.57989 eval/Rewards Min 0.977102 eval/Returns Mean 1397.09 eval/Returns Std 0 eval/Returns Max 1397.09 eval/Returns Min 1397.09 eval/Actions Mean 0.145636 eval/Actions Std 0.58208 eval/Actions Max 0.997301 
eval/Actions Min -0.997629 eval/Num Paths 1 eval/Average Returns 1397.09 eval/normalized_score 43.5497 time/evaluation sampling (s) 0.963942 time/logging (s) 0.00212888 time/sampling batch (s) 0.27284 time/saving (s) 0.00295898 time/training (s) 4.25296 time/epoch (s) 5.49483 time/total (s) 34404.1 Epoch -230 ---------------------------------- --------------- 2022-05-10 22:44:08.190309 PDT | [0] Epoch -229 finished ---------------------------------- --------------- epoch -229 replay_buffer/size 999996 trainer/num train calls 772000 trainer/Policy Loss -2.18182 trainer/Log Pis Mean 2.22356 trainer/Log Pis Std 2.55535 trainer/Log Pis Max 10.1937 trainer/Log Pis Min -3.64204 trainer/policy/mean Mean 0.137609 trainer/policy/mean Std 0.608751 trainer/policy/mean Max 0.997623 trainer/policy/mean Min -0.997643 trainer/policy/normal/std Mean 0.371208 trainer/policy/normal/std Std 0.179446 trainer/policy/normal/std Max 0.886614 trainer/policy/normal/std Min 0.0666152 trainer/policy/normal/log_std Mean -1.14272 trainer/policy/normal/log_std Std 0.595313 trainer/policy/normal/log_std Max -0.120345 trainer/policy/normal/log_std Min -2.70882 eval/num steps total 508997 eval/num paths total 986 eval/path length Mean 755 eval/path length Std 0 eval/path length Max 755 eval/path length Min 755 eval/Rewards Mean 3.20655 eval/Rewards Std 0.746187 eval/Rewards Max 5.46388 eval/Rewards Min 0.976558 eval/Returns Mean 2420.94 eval/Returns Std 0 eval/Returns Max 2420.94 eval/Returns Min 2420.94 eval/Actions Mean 0.144427 eval/Actions Std 0.589563 eval/Actions Max 0.997293 eval/Actions Min -0.998095 eval/Num Paths 1 eval/Average Returns 2420.94 eval/normalized_score 75.0088 time/evaluation sampling (s) 0.894665 time/logging (s) 0.00289888 time/sampling batch (s) 0.272225 time/saving (s) 0.00301222 time/training (s) 4.24454 time/epoch (s) 5.41734 time/total (s) 34409.5 Epoch -229 ---------------------------------- --------------- 2022-05-10 22:44:13.672462 PDT | [0] Epoch -228 finished 
----------------------------------  ---------------
epoch  -228
replay_buffer/size  999996
trainer/num train calls  773000
trainer/Policy Loss  -2.28751
trainer/Log Pis Mean  2.30189
trainer/Log Pis Std  2.52729
trainer/Log Pis Max  9.63475
trainer/Log Pis Min  -5.52953
trainer/policy/mean Mean  0.142031
trainer/policy/mean Std  0.630478
trainer/policy/mean Max  0.99741
trainer/policy/mean Min  -0.997841
trainer/policy/normal/std Mean  0.378862
trainer/policy/normal/std Std  0.180563
trainer/policy/normal/std Max  1.05108
trainer/policy/normal/std Min  0.0697693
trainer/policy/normal/log_std Mean  -1.11509
trainer/policy/normal/log_std Std  0.578615
trainer/policy/normal/log_std Max  0.0498165
trainer/policy/normal/log_std Min  -2.66256
eval/num steps total  509496
eval/num paths total  987
eval/path length Mean  499
eval/path length Std  0
eval/path length Max  499
eval/path length Min  499
eval/Rewards Mean  3.10603
eval/Rewards Std  0.744346
eval/Rewards Max  4.66323
eval/Rewards Min  0.985129
eval/Returns Mean  1549.91
eval/Returns Std  0
eval/Returns Max  1549.91
eval/Returns Min  1549.91
eval/Actions Mean  0.150762
eval/Actions Std  0.591293
eval/Actions Max  0.998491
eval/Actions Min  -0.997196
eval/Num Paths  1
eval/Average Returns  1549.91
eval/normalized_score  48.2454
time/evaluation sampling (s)  0.896271
time/logging (s)  0.0023358
time/sampling batch (s)  0.27415
time/saving (s)  0.00317281
time/training (s)  4.28167
time/epoch (s)  5.4576
time/total (s)  34415
Epoch -228
----------------------------------  ---------------
2022-05-10 22:44:19.149612 PDT | [0] Epoch -227 finished
----------------------------------  ---------------
epoch  -227
replay_buffer/size  999996
trainer/num train calls  774000
trainer/Policy Loss  -2.08597
trainer/Log Pis Mean  2.26899
trainer/Log Pis Std  2.62974
trainer/Log Pis Max  9.3697
trainer/Log Pis Min  -5.67549
trainer/policy/mean Mean  0.126786
trainer/policy/mean Std  0.618073
trainer/policy/mean Max  0.997418
trainer/policy/mean Min  -0.997992
trainer/policy/normal/std Mean  0.38135
trainer/policy/normal/std Std  0.189377
trainer/policy/normal/std Max  0.957286
trainer/policy/normal/std Min  0.063977
trainer/policy/normal/log_std Mean  -1.12145
trainer/policy/normal/log_std Std  0.6051
trainer/policy/normal/log_std Max  -0.043653
trainer/policy/normal/log_std Min  -2.74923
eval/num steps total  509993
eval/num paths total  988
eval/path length Mean  497
eval/path length Std  0
eval/path length Max  497
eval/path length Min  497
eval/Rewards Mean  3.09293
eval/Rewards Std  0.780368
eval/Rewards Max  4.7835
eval/Rewards Min  0.978036
eval/Returns Mean  1537.18
eval/Returns Std  0
eval/Returns Max  1537.18
eval/Returns Min  1537.18
eval/Actions Mean  0.152244
eval/Actions Std  0.588581
eval/Actions Max  0.997984
eval/Actions Min  -0.998939
eval/Num Paths  1
eval/Average Returns  1537.18
eval/normalized_score  47.8544
time/evaluation sampling (s)  0.8908
time/logging (s)  0.00218521
time/sampling batch (s)  0.272895
time/saving (s)  0.00289875
time/training (s)  4.28419
time/epoch (s)  5.45297
time/total (s)  34420.5
Epoch -227
----------------------------------  ---------------
2022-05-10 22:44:24.627525 PDT | [0] Epoch -226 finished
----------------------------------  ---------------
epoch  -226
replay_buffer/size  999996
trainer/num train calls  775000
trainer/Policy Loss  -2.18051
trainer/Log Pis Mean  2.08096
trainer/Log Pis Std  2.67752
trainer/Log Pis Max  14.1623
trainer/Log Pis Min  -7.86507
trainer/policy/mean Mean  0.129175
trainer/policy/mean Std  0.62039
trainer/policy/mean Max  0.997517
trainer/policy/mean Min  -0.999131
trainer/policy/normal/std Mean  0.373324
trainer/policy/normal/std Std  0.178823
trainer/policy/normal/std Max  0.931943
trainer/policy/normal/std Min  0.0725734
trainer/policy/normal/log_std Mean  -1.12817
trainer/policy/normal/log_std Std  0.57222
trainer/policy/normal/log_std Max  -0.0704832
trainer/policy/normal/log_std Min  -2.62316
eval/num steps total  510487
eval/num paths total  989
eval/path length Mean  494
eval/path length Std  0
eval/path length Max  494
eval/path length Min  494
eval/Rewards Mean  3.04753
eval/Rewards Std  0.784505
eval/Rewards Max  4.70437
eval/Rewards Min  0.984761
eval/Returns Mean  1505.48
eval/Returns Std  0
eval/Returns Max  1505.48
eval/Returns Min  1505.48
eval/Actions Mean  0.14902
eval/Actions Std  0.585127
eval/Actions Max  0.996271
eval/Actions Min  -0.996675
eval/Num Paths  1
eval/Average Returns  1505.48
eval/normalized_score  46.8803
time/evaluation sampling (s)  0.898413
time/logging (s)  0.00239247
time/sampling batch (s)  0.279157
time/saving (s)  0.00331469
time/training (s)  4.2707
time/epoch (s)  5.45398
time/total (s)  34425.9
Epoch -226
----------------------------------  ---------------
2022-05-10 22:44:30.064514 PDT | [0] Epoch -225 finished
----------------------------------  ---------------
epoch  -225
replay_buffer/size  999996
trainer/num train calls  776000
trainer/Policy Loss  -2.17504
trainer/Log Pis Mean  2.23815
trainer/Log Pis Std  2.62061
trainer/Log Pis Max  10.0561
trainer/Log Pis Min  -4.39963
trainer/policy/mean Mean  0.12959
trainer/policy/mean Std  0.617721
trainer/policy/mean Max  0.998135
trainer/policy/mean Min  -0.998714
trainer/policy/normal/std Mean  0.378409
trainer/policy/normal/std Std  0.182308
trainer/policy/normal/std Max  0.919099
trainer/policy/normal/std Min  0.0675523
trainer/policy/normal/log_std Mean  -1.12149
trainer/policy/normal/log_std Std  0.591227
trainer/policy/normal/log_std Max  -0.0843618
trainer/policy/normal/log_std Min  -2.69485
eval/num steps total  511072
eval/num paths total  990
eval/path length Mean  585
eval/path length Std  0
eval/path length Max  585
eval/path length Min  585
eval/Rewards Mean  3.19958
eval/Rewards Std  0.747153
eval/Rewards Max  5.34415
eval/Rewards Min  0.982208
eval/Returns Mean  1871.75
eval/Returns Std  0
eval/Returns Max  1871.75
eval/Returns Min  1871.75
eval/Actions Mean  0.152193
eval/Actions Std  0.60183
eval/Actions Max  0.997349
eval/Actions Min  -0.997405
eval/Num Paths  1
eval/Average Returns  1871.75
eval/normalized_score  58.1344
time/evaluation sampling (s)  0.900585
time/logging (s)  0.00243166
time/sampling batch (s)  0.272279
time/saving (s)  0.00295398
time/training (s)  4.23477
time/epoch (s)  5.41302
time/total (s)  34431.3
Epoch -225
----------------------------------  ---------------
2022-05-10 22:44:35.481619 PDT | [0] Epoch -224 finished
----------------------------------  ---------------
epoch  -224
replay_buffer/size  999996
trainer/num train calls  777000
trainer/Policy Loss  -2.08998
trainer/Log Pis Mean  2.20468
trainer/Log Pis Std  2.53658
trainer/Log Pis Max  9.96287
trainer/Log Pis Min  -5.1344
trainer/policy/mean Mean  0.164066
trainer/policy/mean Std  0.597905
trainer/policy/mean Max  0.996987
trainer/policy/mean Min  -0.997442
trainer/policy/normal/std Mean  0.372908
trainer/policy/normal/std Std  0.181569
trainer/policy/normal/std Max  1.01811
trainer/policy/normal/std Min  0.0624121
trainer/policy/normal/log_std Mean  -1.14044
trainer/policy/normal/log_std Std  0.60082
trainer/policy/normal/log_std Max  0.0179495
trainer/policy/normal/log_std Min  -2.774
eval/num steps total  511823
eval/num paths total  991
eval/path length Mean  751
eval/path length Std  0
eval/path length Max  751
eval/path length Min  751
eval/Rewards Mean  3.23758
eval/Rewards Std  0.678261
eval/Rewards Max  5.3029
eval/Rewards Min  0.976711
eval/Returns Mean  2431.42
eval/Returns Std  0
eval/Returns Max  2431.42
eval/Returns Min  2431.42
eval/Actions Mean  0.155978
eval/Actions Std  0.61574
eval/Actions Max  0.996972
eval/Actions Min  -0.998022
eval/Num Paths  1
eval/Average Returns  2431.42
eval/normalized_score  75.3308
time/evaluation sampling (s)  0.910938
time/logging (s)  0.00324802
time/sampling batch (s)  0.270664
time/saving (s)  0.00286302
time/training (s)  4.2066
time/epoch (s)  5.39431
time/total (s)  34436.7
Epoch -224
----------------------------------  ---------------
2022-05-10 22:44:40.929066 PDT | [0] Epoch -223 finished
----------------------------------  ---------------
epoch  -223
replay_buffer/size  999996
trainer/num train calls  778000
trainer/Policy Loss  -2.29848
trainer/Log Pis Mean  2.23938
trainer/Log Pis Std  2.65738
trainer/Log Pis Max  8.95918
trainer/Log Pis Min  -5.1812
trainer/policy/mean Mean  0.141579
trainer/policy/mean Std  0.620443
trainer/policy/mean Max  0.99842
trainer/policy/mean Min  -0.998098
trainer/policy/normal/std Mean  0.374388
trainer/policy/normal/std Std  0.182017
trainer/policy/normal/std Max  1.05071
trainer/policy/normal/std Min  0.0660936
trainer/policy/normal/log_std Mean  -1.13232
trainer/policy/normal/log_std Std  0.59001
trainer/policy/normal/log_std Max  0.0494671
trainer/policy/normal/log_std Min  -2.71668
eval/num steps total  512386
eval/num paths total  992
eval/path length Mean  563
eval/path length Std  0
eval/path length Max  563
eval/path length Min  563
eval/Rewards Mean  3.24686
eval/Rewards Std  0.767301
eval/Rewards Max  4.78501
eval/Rewards Min  0.979403
eval/Returns Mean  1827.98
eval/Returns Std  0
eval/Returns Max  1827.98
eval/Returns Min  1827.98
eval/Actions Mean  0.144465
eval/Actions Std  0.604674
eval/Actions Max  0.997867
eval/Actions Min  -0.998452
eval/Num Paths  1
eval/Average Returns  1827.98
eval/normalized_score  56.7896
time/evaluation sampling (s)  0.926607
time/logging (s)  0.00235243
time/sampling batch (s)  0.270664
time/saving (s)  0.00302393
time/training (s)  4.22031
time/epoch (s)  5.42296
time/total (s)  34442.2
Epoch -223
----------------------------------  ---------------
2022-05-10 22:44:46.434450 PDT | [0] Epoch -222 finished
----------------------------------  ---------------
epoch  -222
replay_buffer/size  999996
trainer/num train calls  779000
trainer/Policy Loss  -2.21569
trainer/Log Pis Mean  2.17767
trainer/Log Pis Std  2.57095
trainer/Log Pis Max  9.67352
trainer/Log Pis Min  -4.90384
trainer/policy/mean Mean  0.151139
trainer/policy/mean Std  0.605861
trainer/policy/mean Max  0.997977
trainer/policy/mean Min  -0.997134
trainer/policy/normal/std Mean  0.374153
trainer/policy/normal/std Std  0.18496
trainer/policy/normal/std Max  1.01114
trainer/policy/normal/std Min  0.0661927
trainer/policy/normal/log_std Mean  -1.13922
trainer/policy/normal/log_std Std  0.602671
trainer/policy/normal/log_std Max  0.011083
trainer/policy/normal/log_std Min  -2.71519
eval/num steps total  512933
eval/num paths total  993
eval/path length Mean  547
eval/path length Std  0
eval/path length Max  547
eval/path length Min  547
eval/Rewards Mean  3.19217
eval/Rewards Std  0.820518
eval/Rewards Max  4.95767
eval/Rewards Min  0.97664
eval/Returns Mean  1746.12
eval/Returns Std  0
eval/Returns Max  1746.12
eval/Returns Min  1746.12
eval/Actions Mean  0.137878
eval/Actions Std  0.57767
eval/Actions Max  0.998011
eval/Actions Min  -0.998423
eval/Num Paths  1
eval/Average Returns  1746.12
eval/normalized_score  54.2741
time/evaluation sampling (s)  0.949413
time/logging (s)  0.00244068
time/sampling batch (s)  0.272805
time/saving (s)  0.00304911
time/training (s)  4.25379
time/epoch (s)  5.4815
time/total (s)  34447.6
Epoch -222
----------------------------------  ---------------
2022-05-10 22:44:51.857843 PDT | [0] Epoch -221 finished
----------------------------------  ---------------
epoch  -221
replay_buffer/size  999996
trainer/num train calls  780000
trainer/Policy Loss  -2.22709
trainer/Log Pis Mean  2.28247
trainer/Log Pis Std  2.54865
trainer/Log Pis Max  11.815
trainer/Log Pis Min  -3.92608
trainer/policy/mean Mean  0.111816
trainer/policy/mean Std  0.625447
trainer/policy/mean Max  0.99815
trainer/policy/mean Min  -0.998247
trainer/policy/normal/std Mean  0.374057
trainer/policy/normal/std Std  0.179619
trainer/policy/normal/std Max  0.972499
trainer/policy/normal/std Min  0.0670233
trainer/policy/normal/log_std Mean  -1.12974
trainer/policy/normal/log_std Std  0.582414
trainer/policy/normal/log_std Max  -0.0278861
trainer/policy/normal/log_std Min  -2.70271
eval/num steps total  513472
eval/num paths total  994
eval/path length Mean  539
eval/path length Std  0
eval/path length Max  539
eval/path length Min  539
eval/Rewards Mean  3.22512
eval/Rewards Std  0.839081
eval/Rewards Max  5.44218
eval/Rewards Min  0.981375
eval/Returns Mean  1738.34
eval/Returns Std  0
eval/Returns Max  1738.34
eval/Returns Min  1738.34
eval/Actions Mean  0.150828
eval/Actions Std  0.587307
eval/Actions Max  0.998043
eval/Actions Min  -0.998061
eval/Num Paths  1
eval/Average Returns  1738.34
eval/normalized_score  54.0351
time/evaluation sampling (s)  0.902954
time/logging (s)  0.00251317
time/sampling batch (s)  0.271303
time/saving (s)  0.00313576
time/training (s)  4.21943
time/epoch (s)  5.39933
time/total (s)  34453
Epoch -221
----------------------------------  ---------------
2022-05-10 22:44:57.456534 PDT | [0] Epoch -220 finished
----------------------------------  ---------------
epoch  -220
replay_buffer/size  999996
trainer/num train calls  781000
trainer/Policy Loss  -2.19133
trainer/Log Pis Mean  2.23585
trainer/Log Pis Std  2.48558
trainer/Log Pis Max  9.50727
trainer/Log Pis Min  -7.95228
trainer/policy/mean Mean  0.138016
trainer/policy/mean Std  0.617722
trainer/policy/mean Max  0.996359
trainer/policy/mean Min  -0.998297
trainer/policy/normal/std Mean  0.37786
trainer/policy/normal/std Std  0.184577
trainer/policy/normal/std Max  0.93331
trainer/policy/normal/std Min  0.071389
trainer/policy/normal/log_std Mean  -1.12792
trainer/policy/normal/log_std Std  0.601592
trainer/policy/normal/log_std Max  -0.0690182
trainer/policy/normal/log_std Min  -2.63961
eval/num steps total  514115
eval/num paths total  995
eval/path length Mean  643
eval/path length Std  0
eval/path length Max  643
eval/path length Min  643
eval/Rewards Mean  3.24954
eval/Rewards Std  0.772518
eval/Rewards Max  4.74552
eval/Rewards Min  0.988503
eval/Returns Mean  2089.45
eval/Returns Std  0
eval/Returns Max  2089.45
eval/Returns Min  2089.45
eval/Actions Mean  0.156149
eval/Actions Std  0.595301
eval/Actions Max  0.998271
eval/Actions Min  -0.999049
eval/Num Paths  1
eval/Average Returns  2089.45
eval/normalized_score  64.8234
time/evaluation sampling (s)  0.905396
time/logging (s)  0.00299258
time/sampling batch (s)  0.272572
time/saving (s)  0.00343061
time/training (s)  4.39079
time/epoch (s)  5.57519
time/total (s)  34458.6
Epoch -220
----------------------------------  ---------------
2022-05-10 22:45:03.062571 PDT | [0] Epoch -219 finished
----------------------------------  ---------------
epoch  -219
replay_buffer/size  999996
trainer/num train calls  782000
trainer/Policy Loss  -2.35712
trainer/Log Pis Mean  2.27897
trainer/Log Pis Std  2.62344
trainer/Log Pis Max  13.5041
trainer/Log Pis Min  -3.42261
trainer/policy/mean Mean  0.115221
trainer/policy/mean Std  0.627579
trainer/policy/mean Max  0.9954
trainer/policy/mean Min  -0.998756
trainer/policy/normal/std Mean  0.380063
trainer/policy/normal/std Std  0.188235
trainer/policy/normal/std Max  0.970888
trainer/policy/normal/std Min  0.0678736
trainer/policy/normal/log_std Mean  -1.12289
trainer/policy/normal/log_std Std  0.599581
trainer/policy/normal/log_std Max  -0.0295444
trainer/policy/normal/log_std Min  -2.69011
eval/num steps total  514861
eval/num paths total  996
eval/path length Mean  746
eval/path length Std  0
eval/path length Max  746
eval/path length Min  746
eval/Rewards Mean  3.24426
eval/Rewards Std  0.653815
eval/Rewards Max  4.7729
eval/Rewards Min  0.976827
eval/Returns Mean  2420.22
eval/Returns Std  0
eval/Returns Max  2420.22
eval/Returns Min  2420.22
eval/Actions Mean  0.15822
eval/Actions Std  0.608348
eval/Actions Max  0.99839
eval/Actions Min  -0.998239
eval/Num Paths  1
eval/Average Returns  2420.22
eval/normalized_score  74.9866
time/evaluation sampling (s)  0.898875
time/logging (s)  0.00307338
time/sampling batch (s)  0.270942
time/saving (s)  0.00328848
time/training (s)  4.40599
time/epoch (s)  5.58217
time/total (s)  34464.2
Epoch -219
----------------------------------  ---------------
2022-05-10 22:45:08.427091 PDT | [0] Epoch -218 finished
----------------------------------  ---------------
epoch  -218
replay_buffer/size  999996
trainer/num train calls  783000
trainer/Policy Loss  -2.22565
trainer/Log Pis Mean  2.04822
trainer/Log Pis Std  2.58612
trainer/Log Pis Max  12.3858
trainer/Log Pis Min  -6.85811
trainer/policy/mean Mean  0.149137
trainer/policy/mean Std  0.607506
trainer/policy/mean Max  0.998359
trainer/policy/mean Min  -0.9974
trainer/policy/normal/std Mean  0.375876
trainer/policy/normal/std Std  0.182242
trainer/policy/normal/std Max  1.01221
trainer/policy/normal/std Min  0.0657014
trainer/policy/normal/log_std Mean  -1.12767
trainer/policy/normal/log_std Std  0.587912
trainer/policy/normal/log_std Max  0.0121381
trainer/policy/normal/log_std Min  -2.72263
eval/num steps total  515605
eval/num paths total  997
eval/path length Mean  744
eval/path length Std  0
eval/path length Max  744
eval/path length Min  744
eval/Rewards Mean  3.23784
eval/Rewards Std  0.634263
eval/Rewards Max  4.67015
eval/Rewards Min  0.982398
eval/Returns Mean  2408.95
eval/Returns Std  0
eval/Returns Max  2408.95
eval/Returns Min  2408.95
eval/Actions Mean  0.160024
eval/Actions Std  0.610525
eval/Actions Max  0.997797
eval/Actions Min  -0.997568
eval/Num Paths  1
eval/Average Returns  2408.95
eval/normalized_score  74.6403
time/evaluation sampling (s)  0.888622
time/logging (s)  0.00319353
time/sampling batch (s)  0.26449
time/saving (s)  0.00339206
time/training (s)  4.18128
time/epoch (s)  5.34098
time/total (s)  34469.6
Epoch -218
----------------------------------  ---------------
2022-05-10 22:45:13.765589 PDT | [0] Epoch -217 finished
----------------------------------  ---------------
epoch  -217
replay_buffer/size  999996
trainer/num train calls  784000
trainer/Policy Loss  -2.16406
trainer/Log Pis Mean  2.29669
trainer/Log Pis Std  2.63214
trainer/Log Pis Max  12.9365
trainer/Log Pis Min  -5.92816
trainer/policy/mean Mean  0.15602
trainer/policy/mean Std  0.61024
trainer/policy/mean Max  0.997488
trainer/policy/mean Min  -0.99786
trainer/policy/normal/std Mean  0.370495
trainer/policy/normal/std Std  0.183149
trainer/policy/normal/std Max  0.945237
trainer/policy/normal/std Min  0.0677108
trainer/policy/normal/log_std Mean  -1.15053
trainer/policy/normal/log_std Std  0.605985
trainer/policy/normal/log_std Max  -0.0563195
trainer/policy/normal/log_std Min  -2.69251
eval/num steps total  516168
eval/num paths total  998
eval/path length Mean  563
eval/path length Std  0
eval/path length Max  563
eval/path length Min  563
eval/Rewards Mean  3.22977
eval/Rewards Std  0.825511
eval/Rewards Max  4.86216
eval/Rewards Min  0.98032
eval/Returns Mean  1818.36
eval/Returns Std  0
eval/Returns Max  1818.36
eval/Returns Min  1818.36
eval/Actions Mean  0.162229
eval/Actions Std  0.581669
eval/Actions Max  0.996673
eval/Actions Min  -0.998476
eval/Num Paths  1
eval/Average Returns  1818.36
eval/normalized_score  56.4938
time/evaluation sampling (s)  0.880357
time/logging (s)  0.00234536
time/sampling batch (s)  0.263148
time/saving (s)  0.00286839
time/training (s)  4.16502
time/epoch (s)  5.31374
time/total (s)  34474.9
Epoch -217
----------------------------------  ---------------
2022-05-10 22:45:19.053572 PDT | [0] Epoch -216 finished
----------------------------------  ---------------
epoch  -216
replay_buffer/size  999996
trainer/num train calls  785000
trainer/Policy Loss  -2.31885
trainer/Log Pis Mean  2.42012
trainer/Log Pis Std  2.58667
trainer/Log Pis Max  10.9378
trainer/Log Pis Min  -3.85833
trainer/policy/mean Mean  0.133773
trainer/policy/mean Std  0.627778
trainer/policy/mean Max  0.998848
trainer/policy/mean Min  -0.997934
trainer/policy/normal/std Mean  0.372293
trainer/policy/normal/std Std  0.181875
trainer/policy/normal/std Max  1.01641
trainer/policy/normal/std Min  0.0694917
trainer/policy/normal/log_std Mean  -1.13963
trainer/policy/normal/log_std Std  0.592668
trainer/policy/normal/log_std Max  0.0162811
trainer/policy/normal/log_std Min  -2.66655
eval/num steps total  516689
eval/num paths total  999
eval/path length Mean  521
eval/path length Std  0
eval/path length Max  521
eval/path length Min  521
eval/Rewards Mean  3.16195
eval/Rewards Std  0.784235
eval/Rewards Max  5.30302
eval/Rewards Min  0.979877
eval/Returns Mean  1647.38
eval/Returns Std  0
eval/Returns Max  1647.38
eval/Returns Min  1647.38
eval/Actions Mean  0.160989
eval/Actions Std  0.59205
eval/Actions Max  0.998151
eval/Actions Min  -0.996696
eval/Num Paths  1
eval/Average Returns  1647.38
eval/normalized_score  51.2402
time/evaluation sampling (s)  0.86264
time/logging (s)  0.0022158
time/sampling batch (s)  0.262956
time/saving (s)  0.00280681
time/training (s)  4.13423
time/epoch (s)  5.26484
time/total (s)  34480.1
Epoch -216
----------------------------------  ---------------
2022-05-10 22:45:24.373534 PDT | [0] Epoch -215 finished
----------------------------------  ---------------
epoch  -215
replay_buffer/size  999996
trainer/num train calls  786000
trainer/Policy Loss  -2.51268
trainer/Log Pis Mean  2.43815
trainer/Log Pis Std  2.64888
trainer/Log Pis Max  15.0347
trainer/Log Pis Min  -5.51964
trainer/policy/mean Mean  0.115795
trainer/policy/mean Std  0.636068
trainer/policy/mean Max  0.996413
trainer/policy/mean Min  -0.998678
trainer/policy/normal/std Mean  0.374713
trainer/policy/normal/std Std  0.182661
trainer/policy/normal/std Max  0.997608
trainer/policy/normal/std Min  0.0677036
trainer/policy/normal/log_std Mean  -1.13075
trainer/policy/normal/log_std Std  0.585191
trainer/policy/normal/log_std Max  -0.00239498
trainer/policy/normal/log_std Min  -2.69262
eval/num steps total  517167
eval/num paths total  1000
eval/path length Mean  478
eval/path length Std  0
eval/path length Max  478
eval/path length Min  478
eval/Rewards Mean  3.19358
eval/Rewards Std  0.838657
eval/Rewards Max  4.7441
eval/Rewards Min  0.981432
eval/Returns Mean  1526.53
eval/Returns Std  0
eval/Returns Max  1526.53
eval/Returns Min  1526.53
eval/Actions Mean  0.141491
eval/Actions Std  0.600941
eval/Actions Max  0.998217
eval/Actions Min  -0.998753
eval/Num Paths  1
eval/Average Returns  1526.53
eval/normalized_score  47.527
time/evaluation sampling (s)  0.867886
time/logging (s)  0.00206071
time/sampling batch (s)  0.262319
time/saving (s)  0.00278522
time/training (s)  4.16181
time/epoch (s)  5.29686
time/total (s)  34485.4
Epoch -215
----------------------------------  ---------------
2022-05-10 22:45:29.704230 PDT | [0] Epoch -214 finished
----------------------------------  ---------------
epoch  -214
replay_buffer/size  999996
trainer/num train calls  787000
trainer/Policy Loss  -2.39845
trainer/Log Pis Mean  2.21679
trainer/Log Pis Std  2.54646
trainer/Log Pis Max  8.81484
trainer/Log Pis Min  -5.40127
trainer/policy/mean Mean  0.139939
trainer/policy/mean Std  0.625031
trainer/policy/mean Max  0.995616
trainer/policy/mean Min  -0.998428
trainer/policy/normal/std Mean  0.375447
trainer/policy/normal/std Std  0.181052
trainer/policy/normal/std Max  0.99354
trainer/policy/normal/std Min  0.0647923
trainer/policy/normal/log_std Mean  -1.13139
trainer/policy/normal/log_std Std  0.596373
trainer/policy/normal/log_std Max  -0.00648115
trainer/policy/normal/log_std Min  -2.73657
eval/num steps total  518074
eval/num paths total  1002
eval/path length Mean  453.5
eval/path length Std  46.5
eval/path length Max  500
eval/path length Min  407
eval/Rewards Mean  3.09976
eval/Rewards Std  0.788904
eval/Rewards Max  4.69612
eval/Rewards Min  0.981357
eval/Returns Mean  1405.74
eval/Returns Std  152.599
eval/Returns Max  1558.34
eval/Returns Min  1253.14
eval/Actions Mean  0.140797
eval/Actions Std  0.59051
eval/Actions Max  0.998393
eval/Actions Min  -0.999019
eval/Num Paths  2
eval/Average Returns  1405.74
eval/normalized_score  43.8157
time/evaluation sampling (s)  0.869092
time/logging (s)  0.00362687
time/sampling batch (s)  0.264813
time/saving (s)  0.0035382
time/training (s)  4.16804
time/epoch (s)  5.30911
time/total (s)  34490.8
Epoch -214
----------------------------------  ---------------
2022-05-10 22:45:35.191236 PDT | [0] Epoch -213 finished
----------------------------------  ---------------
epoch  -213
replay_buffer/size  999996
trainer/num train calls  788000
trainer/Policy Loss  -2.45442
trainer/Log Pis Mean  2.29103
trainer/Log Pis Std  2.66239
trainer/Log Pis Max  11.5801
trainer/Log Pis Min  -5.45392
trainer/policy/mean Mean  0.154716
trainer/policy/mean Std  0.62445
trainer/policy/mean Max  0.997048
trainer/policy/mean Min  -0.999386
trainer/policy/normal/std Mean  0.378399
trainer/policy/normal/std Std  0.178313
trainer/policy/normal/std Max  0.90855
trainer/policy/normal/std Min  0.0690206
trainer/policy/normal/log_std Mean  -1.11611
trainer/policy/normal/log_std Std  0.58138
trainer/policy/normal/log_std Max  -0.0959053
trainer/policy/normal/log_std Min  -2.67335
eval/num steps total  519047
eval/num paths total  1004
eval/path length Mean  486.5
eval/path length Std  4.5
eval/path length Max  491
eval/path length Min  482
eval/Rewards Mean  3.08348
eval/Rewards Std  0.806557
eval/Rewards Max  4.82582
eval/Rewards Min  0.976834
eval/Returns Mean  1500.11
eval/Returns Std  3.43726
eval/Returns Max  1503.55
eval/Returns Min  1496.68
eval/Actions Mean  0.149879
eval/Actions Std  0.583221
eval/Actions Max  0.998227
eval/Actions Min  -0.998275
eval/Num Paths  2
eval/Average Returns  1500.11
eval/normalized_score  46.7154
time/evaluation sampling (s)  0.869808
time/logging (s)  0.00349313
time/sampling batch (s)  0.265243
time/saving (s)  0.00289754
time/training (s)  4.32152
time/epoch (s)  5.46296
time/total (s)  34496.2
Epoch -213
----------------------------------  ---------------
2022-05-10 22:45:40.820890 PDT | [0] Epoch -212 finished
----------------------------------  ---------------
epoch  -212
replay_buffer/size  999996
trainer/num train calls  789000
trainer/Policy Loss  -2.31856
trainer/Log Pis Mean  2.44503
trainer/Log Pis Std  2.52963
trainer/Log Pis Max  11.8458
trainer/Log Pis Min  -4.75626
trainer/policy/mean Mean  0.149602
trainer/policy/mean Std  0.624586
trainer/policy/mean Max  0.99855
trainer/policy/mean Min  -0.998551
trainer/policy/normal/std Mean  0.366995
trainer/policy/normal/std Std  0.181491
trainer/policy/normal/std Max  0.963447
trainer/policy/normal/std Min  0.0636045
trainer/policy/normal/log_std Mean  -1.15733
trainer/policy/normal/log_std Std  0.598834
trainer/policy/normal/log_std Max  -0.0372382
trainer/policy/normal/log_std Min  -2.75507
eval/num steps total  519534
eval/num paths total  1005
eval/path length Mean  487
eval/path length Std  0
eval/path length Max  487
eval/path length Min  487
eval/Rewards Mean  3.19503
eval/Rewards Std  0.797428
eval/Rewards Max  4.76906
eval/Rewards Min  0.980063
eval/Returns Mean  1555.98
eval/Returns Std  0
eval/Returns Max  1555.98
eval/Returns Min  1555.98
eval/Actions Mean  0.15407
eval/Actions Std  0.60058
eval/Actions Max  0.998395
eval/Actions Min  -0.998794
eval/Num Paths  1
eval/Average Returns  1555.98
eval/normalized_score  48.4319
time/evaluation sampling (s)  0.893441
time/logging (s)  0.00231022
time/sampling batch (s)  0.264188
time/saving (s)  0.00311721
time/training (s)  4.44207
time/epoch (s)  5.60512
time/total (s)  34501.8
Epoch -212
----------------------------------  ---------------
2022-05-10 22:45:46.524277 PDT | [0] Epoch -211 finished
----------------------------------  ---------------
epoch  -211
replay_buffer/size  999996
trainer/num train calls  790000
trainer/Policy Loss  -2.19229
trainer/Log Pis Mean  2.34014
trainer/Log Pis Std  2.54919
trainer/Log Pis Max  9.70948
trainer/Log Pis Min  -4.01835
trainer/policy/mean Mean  0.190408
trainer/policy/mean Std  0.612897
trainer/policy/mean Max  0.996892
trainer/policy/mean Min  -0.997923
trainer/policy/normal/std Mean  0.384603
trainer/policy/normal/std Std  0.186503
trainer/policy/normal/std Max  0.97692
trainer/policy/normal/std Min  0.0675594
trainer/policy/normal/log_std Mean  -1.107
trainer/policy/normal/log_std Std  0.594871
trainer/policy/normal/log_std Max  -0.0233506
trainer/policy/normal/log_std Min  -2.69475
eval/num steps total  520219
eval/num paths total  1006
eval/path length Mean  685
eval/path length Std  0
eval/path length Max  685
eval/path length Min  685
eval/Rewards Mean  3.21131
eval/Rewards Std  0.746942
eval/Rewards Max  5.36456
eval/Rewards Min  0.980269
eval/Returns Mean  2199.75
eval/Returns Std  0
eval/Returns Max  2199.75
eval/Returns Min  2199.75
eval/Actions Mean  0.161866
eval/Actions Std  0.602377
eval/Actions Max  0.998397
eval/Actions Min  -0.997684
eval/Num Paths  1
eval/Average Returns  2199.75
eval/normalized_score  68.2124
time/evaluation sampling (s)  0.974128
time/logging (s)  0.00282398
time/sampling batch (s)  0.264745
time/saving (s)  0.00303193
time/training (s)  4.43564
time/epoch (s)  5.68037
time/total (s)  34507.5
Epoch -211
----------------------------------  ---------------
2022-05-10 22:45:52.140104 PDT | [0] Epoch -210 finished
----------------------------------  ---------------
epoch  -210
replay_buffer/size  999996
trainer/num train calls  791000
trainer/Policy Loss  -2.22689
trainer/Log Pis Mean  2.25709
trainer/Log Pis Std  2.61547
trainer/Log Pis Max  12.6771
trainer/Log Pis Min  -8.24558
trainer/policy/mean Mean  0.129988
trainer/policy/mean Std  0.619251
trainer/policy/mean Max  0.996632
trainer/policy/mean Min  -0.99807
trainer/policy/normal/std Mean  0.38676
trainer/policy/normal/std Std  0.191486
trainer/policy/normal/std Max  1.04496
trainer/policy/normal/std Min  0.0684499
trainer/policy/normal/log_std Mean  -1.10744
trainer/policy/normal/log_std Std  0.606145
trainer/policy/normal/log_std Max  0.0439747
trainer/policy/normal/log_std Min  -2.68165
eval/num steps total  520786
eval/num paths total  1007
eval/path length Mean  567
eval/path length Std  0
eval/path length Max  567
eval/path length Min  567
eval/Rewards Mean  3.11713
eval/Rewards Std  0.724043
eval/Rewards Max  4.57371
eval/Rewards Min  0.978447
eval/Returns Mean  1767.42
eval/Returns Std  0
eval/Returns Max  1767.42
eval/Returns Min  1767.42
eval/Actions Mean  0.150746
eval/Actions Std  0.598109
eval/Actions Max  0.997956
eval/Actions Min  -0.998322
eval/Num Paths  1
eval/Average Returns  1767.42
eval/normalized_score  54.9285
time/evaluation sampling (s)  0.881341
time/logging (s)  0.00234394
time/sampling batch (s)  0.264048
time/saving (s)  0.00283251
time/training (s)  4.44123
time/epoch (s)  5.5918
time/total (s)  34513.1
Epoch -210
----------------------------------  ---------------
2022-05-10 22:45:57.720965 PDT | [0] Epoch -209 finished
----------------------------------  ---------------
epoch  -209
replay_buffer/size  999996
trainer/num train calls  792000
trainer/Policy Loss  -2.07255
trainer/Log Pis Mean  2.17158
trainer/Log Pis Std  2.54531
trainer/Log Pis Max  13.9814
trainer/Log Pis Min  -5.42377
trainer/policy/mean Mean  0.134974
trainer/policy/mean Std  0.608586
trainer/policy/mean Max  0.998204
trainer/policy/mean Min  -0.999601
trainer/policy/normal/std Mean  0.377906
trainer/policy/normal/std Std  0.180699
trainer/policy/normal/std Max  0.955931
trainer/policy/normal/std Min  0.0705195
trainer/policy/normal/log_std Mean  -1.12099
trainer/policy/normal/log_std Std  0.58793
trainer/policy/normal/log_std Max  -0.0450698
trainer/policy/normal/log_std Min  -2.65187
eval/num steps total  521364
eval/num paths total  1008
eval/path length Mean  578
eval/path length Std  0
eval/path length Max  578
eval/path length Min  578
eval/Rewards Mean  3.19421
eval/Rewards Std  0.769241
eval/Rewards Max  4.79764
eval/Rewards Min  0.983264
eval/Returns Mean  1846.25
eval/Returns Std  0
eval/Returns Max  1846.25
eval/Returns Min  1846.25
eval/Actions Mean  0.165826
eval/Actions Std  0.599948
eval/Actions Max  0.998678
eval/Actions Min  -0.998329
eval/Num Paths  1
eval/Average Returns  1846.25
eval/normalized_score  57.3509
time/evaluation sampling (s)  0.861385
time/logging (s)  0.00247319
time/sampling batch (s)  0.263884
time/saving (s)  0.00295014
time/training (s)  4.427
time/epoch (s)  5.55769
time/total (s)  34518.7
Epoch -209
----------------------------------  ---------------
2022-05-10 22:46:03.018342 PDT | [0] Epoch -208 finished
----------------------------------  ---------------
epoch  -208
replay_buffer/size  999996
trainer/num train calls  793000
trainer/Policy Loss  -2.28498
trainer/Log Pis Mean  2.38281
trainer/Log Pis Std  2.53585
trainer/Log Pis Max  11.3933
trainer/Log Pis Min  -6.71922
trainer/policy/mean Mean  0.152045
trainer/policy/mean Std  0.609954
trainer/policy/mean Max  0.99745
trainer/policy/mean Min  -0.997384
trainer/policy/normal/std Mean  0.372416
trainer/policy/normal/std Std  0.185542
trainer/policy/normal/std Max  0.959543
trainer/policy/normal/std Min  0.0648618
trainer/policy/normal/log_std Mean  -1.14787
trainer/policy/normal/log_std Std  0.611902
trainer/policy/normal/log_std Max  -0.0412977
trainer/policy/normal/log_std Min  -2.7355
eval/num steps total  522186
eval/num paths total  1009
eval/path length Mean  822
eval/path length Std  0
eval/path length Max  822
eval/path length Min  822
eval/Rewards Mean  3.25574
eval/Rewards Std  0.649834
eval/Rewards Max  4.82539
eval/Rewards Min  0.980192
eval/Returns Mean  2676.22
eval/Returns Std  0
eval/Returns Max  2676.22
eval/Returns Min  2676.22
eval/Actions Mean  0.157316
eval/Actions Std  0.6075
eval/Actions Max  0.998336
eval/Actions Min  -0.998172
eval/Num Paths  1
eval/Average Returns  2676.22
eval/normalized_score  82.8524
time/evaluation sampling (s)  0.864444
time/logging (s)  0.00298769
time/sampling batch (s)  0.261409
time/saving (s)  0.00281242
time/training (s)  4.14308
time/epoch (s)  5.27473
time/total (s)  34523.9
Epoch -208
----------------------------------  ---------------
2022-05-10 22:46:09.994703 PDT | [0] Epoch -207 finished
----------------------------------  ---------------
epoch  -207
replay_buffer/size  999996
trainer/num train calls  794000
trainer/Policy Loss  -2.05951
trainer/Log Pis Mean  2.16405
trainer/Log Pis Std  2.42038
trainer/Log Pis Max  10.4841
trainer/Log Pis Min  -5.14308
trainer/policy/mean Mean  0.121454
trainer/policy/mean Std  0.613311
trainer/policy/mean Max  0.997383
trainer/policy/mean Min  -0.997276
trainer/policy/normal/std Mean  0.376039
trainer/policy/normal/std Std  0.180719
trainer/policy/normal/std Max  1.03239
trainer/policy/normal/std Min  0.0698722
trainer/policy/normal/log_std Mean  -1.12285
trainer/policy/normal/log_std Std  0.578068
trainer/policy/normal/log_std Max  0.0318764
trainer/policy/normal/log_std Min  -2.66109
eval/num steps total  522761
eval/num paths total  1010
eval/path length Mean  575
eval/path length Std  0
eval/path length Max  575
eval/path length Min  575
eval/Rewards Mean  3.1279
eval/Rewards Std  0.738171
eval/Rewards Max  5.07082
eval/Rewards Min  0.981291
eval/Returns Mean  1798.54
eval/Returns Std  0
eval/Returns Max  1798.54
eval/Returns Min  1798.54
eval/Actions Mean  0.149606
eval/Actions Std  0.596369
eval/Actions Max  0.998146
eval/Actions Min  -0.997796
eval/Num Paths  1
eval/Average Returns  1798.54
eval/normalized_score  55.8848
time/evaluation sampling (s)  0.869942
time/logging (s)  0.00245962
time/sampling batch (s)  0.263112
time/saving (s)  0.00297777
time/training (s)  5.81376
time/epoch (s)  6.95225
time/total (s)  34530.9
Epoch -207
----------------------------------  ---------------
2022-05-10 22:46:18.387259 PDT | [0] Epoch -206 finished
----------------------------------  ---------------
epoch  -206
replay_buffer/size  999996
trainer/num train calls  795000
trainer/Policy Loss  -2.44513
trainer/Log Pis Mean  2.49735
trainer/Log Pis Std  2.61766
trainer/Log Pis Max  10.57
trainer/Log Pis Min  -5.03593
trainer/policy/mean Mean  0.151379
trainer/policy/mean Std  0.626206
trainer/policy/mean Max  0.998716
trainer/policy/mean Min  -0.998046
trainer/policy/normal/std Mean  0.377568
trainer/policy/normal/std Std  0.182199
trainer/policy/normal/std Max  1.03099
trainer/policy/normal/std Min  0.0679386
trainer/policy/normal/log_std Mean  -1.12377
trainer/policy/normal/log_std Std  0.590811
trainer/policy/normal/log_std Max  0.0305197
trainer/policy/normal/log_std Min  -2.68915
eval/num steps total  523730
eval/num paths total  1012
eval/path length Mean  484.5
eval/path length Std  21.5
eval/path length Max  506
eval/path length Min  463
eval/Rewards Mean  3.14641
eval/Rewards Std  0.837475
eval/Rewards Max  5.09023
eval/Rewards Min  0.978304
eval/Returns Mean  1524.44
eval/Returns Std  50.6341
eval/Returns Max  1575.07
eval/Returns Min  1473.8
eval/Actions Mean  0.150339
eval/Actions Std  0.583355
eval/Actions Max  0.998099
eval/Actions Min  -0.998648
eval/Num Paths  2
eval/Average Returns  1524.44
eval/normalized_score  47.4627
time/evaluation sampling (s)  1.62814
time/logging (s)  0.00376354
time/sampling batch (s)  0.263367
time/saving (s)  0.00324779
time/training (s)  6.47192
time/epoch (s)  8.37044
time/total (s)  34539.3
Epoch -206
----------------------------------  ---------------
2022-05-10 22:46:26.752704 PDT | [0] Epoch -205 finished
----------------------------------  ---------------
epoch  -205
replay_buffer/size  999996
trainer/num train calls  796000
trainer/Policy Loss  -2.27767
trainer/Log Pis Mean  2.29326
trainer/Log Pis Std  2.58938
trainer/Log Pis Max  10.3198
trainer/Log Pis Min  -3.19907
trainer/policy/mean Mean  0.119165
trainer/policy/mean Std  0.614623
trainer/policy/mean Max  0.997836
trainer/policy/mean Min  -0.998415
trainer/policy/normal/std Mean  0.369276
trainer/policy/normal/std Std  0.179382
trainer/policy/normal/std Max  1.02253
trainer/policy/normal/std Min  0.0667749
trainer/policy/normal/log_std Mean  -1.14748
trainer/policy/normal/log_std Std  0.593107
trainer/policy/normal/log_std Max  0.0222819
trainer/policy/normal/log_std Min  -2.70643
eval/num steps total  524724
eval/num paths total  1014
eval/path length Mean  497
eval/path length Std  1
eval/path length Max  498
eval/path length Min  496
eval/Rewards Mean  3.08203
eval/Rewards Std  0.785626
eval/Rewards Max  4.67665
eval/Rewards Min  0.975115
eval/Returns Mean  1531.77
eval/Returns Std  15.7984
eval/Returns Max  1547.57
eval/Returns Min  1515.97
eval/Actions Mean  0.150463
eval/Actions Std  0.591402
eval/Actions Max  0.998487
eval/Actions Min  -0.998729
eval/Num Paths  2
eval/Average Returns  1531.77
eval/normalized_score  47.6881
time/evaluation sampling (s)  1.64032
time/logging (s)  0.00370536
time/sampling batch (s)  0.263966
time/saving (s)  0.00292517
time/training (s)  6.43095
time/epoch (s)  8.34187
time/total (s)  34547.6
Epoch -205
----------------------------------  ---------------
2022-05-10 22:46:35.115745 PDT | [0] Epoch -204 finished
----------------------------------  ---------------
epoch  -204
replay_buffer/size  999996
trainer/num train calls  797000
trainer/Policy Loss  -2.17422
trainer/Log Pis Mean  2.13598
trainer/Log Pis Std  2.54414
trainer/Log Pis Max  8.83054
trainer/Log Pis Min  -5.55215
trainer/policy/mean Mean  0.158741
trainer/policy/mean Std  0.604811
trainer/policy/mean Max  0.997176
trainer/policy/mean Min  -0.998369
trainer/policy/normal/std Mean  0.382392
trainer/policy/normal/std Std  0.186927
trainer/policy/normal/std Max  1.02067
trainer/policy/normal/std Min  0.0711471
trainer/policy/normal/log_std Mean  -1.1115
trainer/policy/normal/log_std Std  0.588626
trainer/policy/normal/log_std Max  0.0204625
trainer/policy/normal/log_std Min  -2.64301
eval/num steps total  525293
eval/num paths total  1015
eval/path length Mean  569
eval/path length Std  0
eval/path length Max  569
eval/path length Min  569
eval/Rewards Mean  3.24148
eval/Rewards Std  0.755608
eval/Rewards Max  4.75379
eval/Rewards Min  0.979749
eval/Returns Mean  1844.4
eval/Returns Std  0
eval/Returns Max  1844.4
eval/Returns Min  1844.4
eval/Actions Mean  0.145501
eval/Actions Std  0.605635
eval/Actions Max  0.998131
eval/Actions Min  -0.998308
eval/Num Paths  1
eval/Average Returns  1844.4
eval/normalized_score  57.294
time/evaluation sampling (s)  1.62166
time/logging (s)  0.00238507
time/sampling batch (s)  0.261716
time/saving (s)  0.00287002
time/training (s)  6.44932
time/epoch (s)  8.33795
time/total (s)  34556
Epoch -204
----------------------------------  ---------------
2022-05-10 22:46:43.463485 PDT | [0] Epoch -203 finished
----------------------------------  ---------------
epoch  -203
replay_buffer/size  999996
trainer/num train calls  798000
trainer/Policy Loss  -1.94529
trainer/Log Pis Mean  1.96651
trainer/Log Pis Std  2.52977
trainer/Log Pis Max  9.90418
trainer/Log Pis Min  -4.87774
trainer/policy/mean Mean  0.146095
trainer/policy/mean Std  0.597032
trainer/policy/mean Max  0.997438
trainer/policy/mean Min  -0.998429
trainer/policy/normal/std Mean  0.381544
trainer/policy/normal/std Std
0.190336 trainer/policy/normal/std Max 1.03575 trainer/policy/normal/std Min 0.0653963 trainer/policy/normal/log_std Mean -1.12156 trainer/policy/normal/log_std Std 0.605145 trainer/policy/normal/log_std Max 0.0351218 trainer/policy/normal/log_std Min -2.72729 eval/num steps total 526287 eval/num paths total 1017 eval/path length Mean 497 eval/path length Std 0 eval/path length Max 497 eval/path length Min 497 eval/Rewards Mean 3.10186 eval/Rewards Std 0.760253 eval/Rewards Max 4.70858 eval/Rewards Min 0.97936 eval/Returns Mean 1541.62 eval/Returns Std 1.36462 eval/Returns Max 1542.99 eval/Returns Min 1540.26 eval/Actions Mean 0.149296 eval/Actions Std 0.594225 eval/Actions Max 0.998296 eval/Actions Min -0.999366 eval/Num Paths 2 eval/Average Returns 1541.62 eval/normalized_score 47.9909 time/evaluation sampling (s) 1.6177 time/logging (s) 0.00358298 time/sampling batch (s) 0.263034 time/saving (s) 0.00287327 time/training (s) 6.43814 time/epoch (s) 8.32533 time/total (s) 34564.3 Epoch -203 ---------------------------------- --------------- 2022-05-10 22:46:51.804525 PDT | [0] Epoch -202 finished ---------------------------------- --------------- epoch -202 replay_buffer/size 999996 trainer/num train calls 799000 trainer/Policy Loss -2.31402 trainer/Log Pis Mean 2.26028 trainer/Log Pis Std 2.76169 trainer/Log Pis Max 14.1696 trainer/Log Pis Min -5.28132 trainer/policy/mean Mean 0.172102 trainer/policy/mean Std 0.619464 trainer/policy/mean Max 0.997814 trainer/policy/mean Min -0.998629 trainer/policy/normal/std Mean 0.387945 trainer/policy/normal/std Std 0.188933 trainer/policy/normal/std Max 1.02352 trainer/policy/normal/std Min 0.0650546 trainer/policy/normal/log_std Mean -1.09862 trainer/policy/normal/log_std Std 0.594339 trainer/policy/normal/log_std Max 0.0232513 trainer/policy/normal/log_std Min -2.73253 eval/num steps total 527208 eval/num paths total 1019 eval/path length Mean 460.5 eval/path length Std 49.5 eval/path length Max 510 eval/path length Min 411 
eval/Rewards Mean 3.06255 eval/Rewards Std 0.797743 eval/Rewards Max 5.05672 eval/Rewards Min 0.979029 eval/Returns Mean 1410.31 eval/Returns Std 156.46 eval/Returns Max 1566.77 eval/Returns Min 1253.85 eval/Actions Mean 0.159702 eval/Actions Std 0.589837 eval/Actions Max 0.99861 eval/Actions Min -0.998265 eval/Num Paths 2 eval/Average Returns 1410.31 eval/normalized_score 43.9559 time/evaluation sampling (s) 1.61101 time/logging (s) 0.0034849 time/sampling batch (s) 0.263026 time/saving (s) 0.00286199 time/training (s) 6.4372 time/epoch (s) 8.31758 time/total (s) 34572.6 Epoch -202 ---------------------------------- --------------- 2022-05-10 22:46:58.540036 PDT | [0] Epoch -201 finished ---------------------------------- --------------- epoch -201 replay_buffer/size 999996 trainer/num train calls 800000 trainer/Policy Loss -2.30908 trainer/Log Pis Mean 2.36942 trainer/Log Pis Std 2.55328 trainer/Log Pis Max 9.91245 trainer/Log Pis Min -5.34475 trainer/policy/mean Mean 0.136561 trainer/policy/mean Std 0.626473 trainer/policy/mean Max 0.997446 trainer/policy/mean Min -0.998143 trainer/policy/normal/std Mean 0.38218 trainer/policy/normal/std Std 0.187774 trainer/policy/normal/std Max 1.16299 trainer/policy/normal/std Min 0.0629134 trainer/policy/normal/log_std Mean -1.11821 trainer/policy/normal/log_std Std 0.604784 trainer/policy/normal/log_std Max 0.150993 trainer/policy/normal/log_std Min -2.766 eval/num steps total 527755 eval/num paths total 1020 eval/path length Mean 547 eval/path length Std 0 eval/path length Max 547 eval/path length Min 547 eval/Rewards Mean 3.16498 eval/Rewards Std 0.838405 eval/Rewards Max 5.00098 eval/Rewards Min 0.982315 eval/Returns Mean 1731.24 eval/Returns Std 0 eval/Returns Max 1731.24 eval/Returns Min 1731.24 eval/Actions Mean 0.150794 eval/Actions Std 0.570525 eval/Actions Max 0.997187 eval/Actions Min -0.998718 eval/Num Paths 1 eval/Average Returns 1731.24 eval/normalized_score 53.817 time/evaluation sampling (s) 1.60408 
time/logging (s) 0.00233681 time/sampling batch (s) 0.263359 time/saving (s) 0.00548076 time/training (s) 4.83537 time/epoch (s) 6.71062 time/total (s) 34579.3 Epoch -201 ---------------------------------- --------------- 2022-05-10 22:47:03.871744 PDT | [0] Epoch -200 finished ---------------------------------- --------------- epoch -200 replay_buffer/size 999996 trainer/num train calls 801000 trainer/Policy Loss -2.2215 trainer/Log Pis Mean 2.26926 trainer/Log Pis Std 2.51991 trainer/Log Pis Max 9.21419 trainer/Log Pis Min -5.20911 trainer/policy/mean Mean 0.157134 trainer/policy/mean Std 0.61083 trainer/policy/mean Max 0.997183 trainer/policy/mean Min -0.998189 trainer/policy/normal/std Mean 0.383165 trainer/policy/normal/std Std 0.185459 trainer/policy/normal/std Max 0.974852 trainer/policy/normal/std Min 0.0658906 trainer/policy/normal/log_std Mean -1.10997 trainer/policy/normal/log_std Std 0.593183 trainer/policy/normal/log_std Max -0.0254692 trainer/policy/normal/log_std Min -2.71976 eval/num steps total 528433 eval/num paths total 1021 eval/path length Mean 678 eval/path length Std 0 eval/path length Max 678 eval/path length Min 678 eval/Rewards Mean 3.23145 eval/Rewards Std 0.725173 eval/Rewards Max 5.32558 eval/Rewards Min 0.97979 eval/Returns Mean 2190.92 eval/Returns Std 0 eval/Returns Max 2190.92 eval/Returns Min 2190.92 eval/Actions Mean 0.165078 eval/Actions Std 0.611134 eval/Actions Max 0.998601 eval/Actions Min -0.998073 eval/Num Paths 1 eval/Average Returns 2190.92 eval/normalized_score 67.9411 time/evaluation sampling (s) 0.868258 time/logging (s) 0.00260083 time/sampling batch (s) 0.264375 time/saving (s) 0.00281111 time/training (s) 4.17052 time/epoch (s) 5.30857 time/total (s) 34584.6 Epoch -200 ---------------------------------- --------------- 2022-05-10 22:47:09.166615 PDT | [0] Epoch -199 finished ---------------------------------- --------------- epoch -199 replay_buffer/size 999996 trainer/num train calls 802000 trainer/Policy Loss 
-2.37249 trainer/Log Pis Mean 2.39575 trainer/Log Pis Std 2.65289 trainer/Log Pis Max 9.677 trainer/Log Pis Min -4.57026 trainer/policy/mean Mean 0.147955 trainer/policy/mean Std 0.621331 trainer/policy/mean Max 0.997328 trainer/policy/mean Min -0.998415 trainer/policy/normal/std Mean 0.370869 trainer/policy/normal/std Std 0.182485 trainer/policy/normal/std Max 0.926495 trainer/policy/normal/std Min 0.065413 trainer/policy/normal/log_std Mean -1.14606 trainer/policy/normal/log_std Std 0.598047 trainer/policy/normal/log_std Max -0.0763465 trainer/policy/normal/log_std Min -2.72703 eval/num steps total 528969 eval/num paths total 1022 eval/path length Mean 536 eval/path length Std 0 eval/path length Max 536 eval/path length Min 536 eval/Rewards Mean 3.20897 eval/Rewards Std 0.861386 eval/Rewards Max 5.55616 eval/Rewards Min 0.984499 eval/Returns Mean 1720.01 eval/Returns Std 0 eval/Returns Max 1720.01 eval/Returns Min 1720.01 eval/Actions Mean 0.161647 eval/Actions Std 0.588919 eval/Actions Max 0.998311 eval/Actions Min -0.997687 eval/Num Paths 1 eval/Average Returns 1720.01 eval/normalized_score 53.4719 time/evaluation sampling (s) 0.862083 time/logging (s) 0.00224292 time/sampling batch (s) 0.262771 time/saving (s) 0.00275932 time/training (s) 4.14119 time/epoch (s) 5.27105 time/total (s) 34589.9 Epoch -199 ---------------------------------- --------------- 2022-05-10 22:47:14.478856 PDT | [0] Epoch -198 finished ---------------------------------- --------------- epoch -198 replay_buffer/size 999996 trainer/num train calls 803000 trainer/Policy Loss -2.29107 trainer/Log Pis Mean 2.15893 trainer/Log Pis Std 2.61316 trainer/Log Pis Max 10.2689 trainer/Log Pis Min -4.94718 trainer/policy/mean Mean 0.142856 trainer/policy/mean Std 0.617432 trainer/policy/mean Max 0.997244 trainer/policy/mean Min -0.998025 trainer/policy/normal/std Mean 0.376974 trainer/policy/normal/std Std 0.179032 trainer/policy/normal/std Max 0.886299 trainer/policy/normal/std Min 0.0667257 
trainer/policy/normal/log_std Mean -1.12037 trainer/policy/normal/log_std Std 0.580777 trainer/policy/normal/log_std Max -0.120701 trainer/policy/normal/log_std Min -2.70717 eval/num steps total 529626 eval/num paths total 1023 eval/path length Mean 657 eval/path length Std 0 eval/path length Max 657 eval/path length Min 657 eval/Rewards Mean 3.17664 eval/Rewards Std 0.691067 eval/Rewards Max 4.88085 eval/Rewards Min 0.982595 eval/Returns Mean 2087.05 eval/Returns Std 0 eval/Returns Max 2087.05 eval/Returns Min 2087.05 eval/Actions Mean 0.146176 eval/Actions Std 0.599183 eval/Actions Max 0.998283 eval/Actions Min -0.998065 eval/Num Paths 1 eval/Average Returns 2087.05 eval/normalized_score 64.7496 time/evaluation sampling (s) 0.865081 time/logging (s) 0.00259545 time/sampling batch (s) 0.263125 time/saving (s) 0.00278206 time/training (s) 4.1558 time/epoch (s) 5.28938 time/total (s) 34595.2 Epoch -198 ---------------------------------- --------------- 2022-05-10 22:47:19.784782 PDT | [0] Epoch -197 finished ---------------------------------- --------------- epoch -197 replay_buffer/size 999996 trainer/num train calls 804000 trainer/Policy Loss -2.39529 trainer/Log Pis Mean 2.39476 trainer/Log Pis Std 2.74653 trainer/Log Pis Max 12.3282 trainer/Log Pis Min -6.89953 trainer/policy/mean Mean 0.120313 trainer/policy/mean Std 0.621105 trainer/policy/mean Max 0.998244 trainer/policy/mean Min -0.999247 trainer/policy/normal/std Mean 0.37642 trainer/policy/normal/std Std 0.186527 trainer/policy/normal/std Max 0.967218 trainer/policy/normal/std Min 0.0641851 trainer/policy/normal/log_std Mean -1.13345 trainer/policy/normal/log_std Std 0.601941 trainer/policy/normal/log_std Max -0.0333314 trainer/policy/normal/log_std Min -2.74598 eval/num steps total 530144 eval/num paths total 1024 eval/path length Mean 518 eval/path length Std 0 eval/path length Max 518 eval/path length Min 518 eval/Rewards Mean 3.15641 eval/Rewards Std 0.850611 eval/Rewards Max 5.56636 eval/Rewards Min 
0.985888 eval/Returns Mean 1635.02 eval/Returns Std 0 eval/Returns Max 1635.02 eval/Returns Min 1635.02 eval/Actions Mean 0.149874 eval/Actions Std 0.584495 eval/Actions Max 0.997622 eval/Actions Min -0.998031 eval/Num Paths 1 eval/Average Returns 1635.02 eval/normalized_score 50.8605 time/evaluation sampling (s) 0.868075 time/logging (s) 0.00231925 time/sampling batch (s) 0.263203 time/saving (s) 0.00290016 time/training (s) 4.14528 time/epoch (s) 5.28178 time/total (s) 34600.5 Epoch -197 ---------------------------------- --------------- 2022-05-10 22:47:25.081101 PDT | [0] Epoch -196 finished ---------------------------------- --------------- epoch -196 replay_buffer/size 999996 trainer/num train calls 805000 trainer/Policy Loss -2.24118 trainer/Log Pis Mean 2.1582 trainer/Log Pis Std 2.6468 trainer/Log Pis Max 9.47863 trainer/Log Pis Min -5.71842 trainer/policy/mean Mean 0.152721 trainer/policy/mean Std 0.617142 trainer/policy/mean Max 0.997296 trainer/policy/mean Min -0.998438 trainer/policy/normal/std Mean 0.389983 trainer/policy/normal/std Std 0.189815 trainer/policy/normal/std Max 1.05808 trainer/policy/normal/std Min 0.0644191 trainer/policy/normal/log_std Mean -1.09186 trainer/policy/normal/log_std Std 0.589922 trainer/policy/normal/log_std Max 0.0564589 trainer/policy/normal/log_std Min -2.74234 eval/num steps total 530677 eval/num paths total 1025 eval/path length Mean 533 eval/path length Std 0 eval/path length Max 533 eval/path length Min 533 eval/Rewards Mean 3.23899 eval/Rewards Std 0.825823 eval/Rewards Max 5.51565 eval/Rewards Min 0.982956 eval/Returns Mean 1726.38 eval/Returns Std 0 eval/Returns Max 1726.38 eval/Returns Min 1726.38 eval/Actions Mean 0.170815 eval/Actions Std 0.596423 eval/Actions Max 0.999056 eval/Actions Min -0.997584 eval/Num Paths 1 eval/Average Returns 1726.38 eval/normalized_score 53.6678 time/evaluation sampling (s) 0.866514 time/logging (s) 0.00228018 time/sampling batch (s) 0.262004 time/saving (s) 0.00299353 
time/training (s) 4.13921 time/epoch (s) 5.273 time/total (s) 34605.8 Epoch -196 ---------------------------------- --------------- 2022-05-10 22:47:30.407692 PDT | [0] Epoch -195 finished ---------------------------------- --------------- epoch -195 replay_buffer/size 999996 trainer/num train calls 806000 trainer/Policy Loss -2.31047 trainer/Log Pis Mean 2.34334 trainer/Log Pis Std 2.60764 trainer/Log Pis Max 15.742 trainer/Log Pis Min -5.05421 trainer/policy/mean Mean 0.135991 trainer/policy/mean Std 0.616067 trainer/policy/mean Max 0.998224 trainer/policy/mean Min -0.999512 trainer/policy/normal/std Mean 0.365688 trainer/policy/normal/std Std 0.17404 trainer/policy/normal/std Max 1.07508 trainer/policy/normal/std Min 0.0657418 trainer/policy/normal/log_std Mean -1.1513 trainer/policy/normal/log_std Std 0.581427 trainer/policy/normal/log_std Max 0.0723913 trainer/policy/normal/log_std Min -2.72202 eval/num steps total 531173 eval/num paths total 1026 eval/path length Mean 496 eval/path length Std 0 eval/path length Max 496 eval/path length Min 496 eval/Rewards Mean 3.07129 eval/Rewards Std 0.791408 eval/Rewards Max 4.80175 eval/Rewards Min 0.980481 eval/Returns Mean 1523.36 eval/Returns Std 0 eval/Returns Max 1523.36 eval/Returns Min 1523.36 eval/Actions Mean 0.15063 eval/Actions Std 0.581252 eval/Actions Max 0.998033 eval/Actions Min -0.9989 eval/Num Paths 1 eval/Average Returns 1523.36 eval/normalized_score 47.4296 time/evaluation sampling (s) 0.872731 time/logging (s) 0.0025693 time/sampling batch (s) 0.264443 time/saving (s) 0.00336284 time/training (s) 4.16019 time/epoch (s) 5.3033 time/total (s) 34611.1 Epoch -195 ---------------------------------- --------------- 2022-05-10 22:47:35.725783 PDT | [0] Epoch -194 finished ---------------------------------- --------------- epoch -194 replay_buffer/size 999996 trainer/num train calls 807000 trainer/Policy Loss -2.04324 trainer/Log Pis Mean 2.02446 trainer/Log Pis Std 2.45865 trainer/Log Pis Max 8.72288 
trainer/Log Pis Min -5.92918 trainer/policy/mean Mean 0.138869 trainer/policy/mean Std 0.60436 trainer/policy/mean Max 0.998042 trainer/policy/mean Min -0.997942 trainer/policy/normal/std Mean 0.383617 trainer/policy/normal/std Std 0.188304 trainer/policy/normal/std Max 1.06103 trainer/policy/normal/std Min 0.0615731 trainer/policy/normal/log_std Mean -1.1139 trainer/policy/normal/log_std Std 0.604394 trainer/policy/normal/log_std Max 0.0592359 trainer/policy/normal/log_std Min -2.78753 eval/num steps total 532142 eval/num paths total 1028 eval/path length Mean 484.5 eval/path length Std 0.5 eval/path length Max 485 eval/path length Min 484 eval/Rewards Mean 3.20171 eval/Rewards Std 0.81648 eval/Rewards Max 4.82976 eval/Rewards Min 0.980891 eval/Returns Mean 1551.23 eval/Returns Std 4.15124 eval/Returns Max 1555.38 eval/Returns Min 1547.08 eval/Actions Mean 0.145385 eval/Actions Std 0.596624 eval/Actions Max 0.997906 eval/Actions Min -0.998559 eval/Num Paths 2 eval/Average Returns 1551.23 eval/normalized_score 48.2859 time/evaluation sampling (s) 0.871339 time/logging (s) 0.00353003 time/sampling batch (s) 0.263208 time/saving (s) 0.00293735 time/training (s) 4.15403 time/epoch (s) 5.29504 time/total (s) 34616.4 Epoch -194 ---------------------------------- --------------- 2022-05-10 22:47:41.026926 PDT | [0] Epoch -193 finished ---------------------------------- --------------- epoch -193 replay_buffer/size 999996 trainer/num train calls 808000 trainer/Policy Loss -2.15364 trainer/Log Pis Mean 2.01217 trainer/Log Pis Std 2.68654 trainer/Log Pis Max 15.4094 trainer/Log Pis Min -7.17227 trainer/policy/mean Mean 0.117553 trainer/policy/mean Std 0.608484 trainer/policy/mean Max 0.997198 trainer/policy/mean Min -0.999915 trainer/policy/normal/std Mean 0.37007 trainer/policy/normal/std Std 0.180035 trainer/policy/normal/std Max 0.914383 trainer/policy/normal/std Min 0.0667079 trainer/policy/normal/log_std Mean -1.14322 trainer/policy/normal/log_std Std 0.5866 
trainer/policy/normal/log_std Max -0.0895061 trainer/policy/normal/log_std Min -2.70743 eval/num steps total 532684 eval/num paths total 1029 eval/path length Mean 542 eval/path length Std 0 eval/path length Max 542 eval/path length Min 542 eval/Rewards Mean 3.25498 eval/Rewards Std 0.849882 eval/Rewards Max 5.4529 eval/Rewards Min 0.982685 eval/Returns Mean 1764.2 eval/Returns Std 0 eval/Returns Max 1764.2 eval/Returns Min 1764.2 eval/Actions Mean 0.15584 eval/Actions Std 0.587585 eval/Actions Max 0.998701 eval/Actions Min -0.997876 eval/Num Paths 1 eval/Average Returns 1764.2 eval/normalized_score 54.8296 time/evaluation sampling (s) 0.872673 time/logging (s) 0.00228315 time/sampling batch (s) 0.262733 time/saving (s) 0.00284099 time/training (s) 4.13611 time/epoch (s) 5.27664 time/total (s) 34621.6 Epoch -193 ---------------------------------- --------------- 2022-05-10 22:47:46.335179 PDT | [0] Epoch -192 finished ---------------------------------- --------------- epoch -192 replay_buffer/size 999996 trainer/num train calls 809000 trainer/Policy Loss -2.32025 trainer/Log Pis Mean 2.30766 trainer/Log Pis Std 2.61797 trainer/Log Pis Max 13.8093 trainer/Log Pis Min -4.46096 trainer/policy/mean Mean 0.141632 trainer/policy/mean Std 0.629939 trainer/policy/mean Max 0.998538 trainer/policy/mean Min -0.997614 trainer/policy/normal/std Mean 0.384115 trainer/policy/normal/std Std 0.182089 trainer/policy/normal/std Max 1.05828 trainer/policy/normal/std Min 0.0694648 trainer/policy/normal/log_std Mean -1.0997 trainer/policy/normal/log_std Std 0.576136 trainer/policy/normal/log_std Max 0.0566457 trainer/policy/normal/log_std Min -2.66694 eval/num steps total 533265 eval/num paths total 1030 eval/path length Mean 581 eval/path length Std 0 eval/path length Max 581 eval/path length Min 581 eval/Rewards Mean 3.18292 eval/Rewards Std 0.784414 eval/Rewards Max 4.87384 eval/Rewards Min 0.981504 eval/Returns Mean 1849.28 eval/Returns Std 0 eval/Returns Max 1849.28 eval/Returns 
Min 1849.28 eval/Actions Mean 0.160602 eval/Actions Std 0.596241 eval/Actions Max 0.998221 eval/Actions Min -0.998427 eval/Num Paths 1 eval/Average Returns 1849.28 eval/normalized_score 57.4438 time/evaluation sampling (s) 0.883781 time/logging (s) 0.00239701 time/sampling batch (s) 0.262382 time/saving (s) 0.00290145 time/training (s) 4.13379 time/epoch (s) 5.28525 time/total (s) 34626.9 Epoch -192 ---------------------------------- --------------- 2022-05-10 22:47:51.721431 PDT | [0] Epoch -191 finished ---------------------------------- ---------------- epoch -191 replay_buffer/size 999996 trainer/num train calls 810000 trainer/Policy Loss -2.21441 trainer/Log Pis Mean 2.40232 trainer/Log Pis Std 2.63788 trainer/Log Pis Max 12.9289 trainer/Log Pis Min -5.53868 trainer/policy/mean Mean 0.109842 trainer/policy/mean Std 0.627168 trainer/policy/mean Max 0.998029 trainer/policy/mean Min -0.998683 trainer/policy/normal/std Mean 0.372531 trainer/policy/normal/std Std 0.183611 trainer/policy/normal/std Max 1.00042 trainer/policy/normal/std Min 0.0652608 trainer/policy/normal/log_std Mean -1.13877 trainer/policy/normal/log_std Std 0.590458 trainer/policy/normal/log_std Max 0.000421674 trainer/policy/normal/log_std Min -2.72936 eval/num steps total 533817 eval/num paths total 1031 eval/path length Mean 552 eval/path length Std 0 eval/path length Max 552 eval/path length Min 552 eval/Rewards Mean 3.20243 eval/Rewards Std 0.825647 eval/Rewards Max 5.05967 eval/Rewards Min 0.979021 eval/Returns Mean 1767.74 eval/Returns Std 0 eval/Returns Max 1767.74 eval/Returns Min 1767.74 eval/Actions Mean 0.155787 eval/Actions Std 0.578622 eval/Actions Max 0.997734 eval/Actions Min -0.99851 eval/Num Paths 1 eval/Average Returns 1767.74 eval/normalized_score 54.9385 time/evaluation sampling (s) 0.93799 time/logging (s) 0.00277053 time/sampling batch (s) 0.269066 time/saving (s) 0.00306553 time/training (s) 4.15022 time/epoch (s) 5.36311 time/total (s) 34632.3 Epoch -191 
---------------------------------- ---------------- 2022-05-10 22:47:57.070059 PDT | [0] Epoch -190 finished ---------------------------------- --------------- epoch -190 replay_buffer/size 999996 trainer/num train calls 811000 trainer/Policy Loss -2.17961 trainer/Log Pis Mean 2.02828 trainer/Log Pis Std 2.52338 trainer/Log Pis Max 9.26476 trainer/Log Pis Min -5.80319 trainer/policy/mean Mean 0.145206 trainer/policy/mean Std 0.607028 trainer/policy/mean Max 0.998359 trainer/policy/mean Min -0.997207 trainer/policy/normal/std Mean 0.379076 trainer/policy/normal/std Std 0.188449 trainer/policy/normal/std Max 0.94089 trainer/policy/normal/std Min 0.0675889 trainer/policy/normal/log_std Mean -1.1309 trainer/policy/normal/log_std Std 0.614339 trainer/policy/normal/log_std Max -0.0609293 trainer/policy/normal/log_std Min -2.69431 eval/num steps total 534795 eval/num paths total 1033 eval/path length Mean 489 eval/path length Std 8 eval/path length Max 497 eval/path length Min 481 eval/Rewards Mean 3.1416 eval/Rewards Std 0.856701 eval/Rewards Max 5.00514 eval/Rewards Min 0.980825 eval/Returns Mean 1536.24 eval/Returns Std 16.9188 eval/Returns Max 1553.16 eval/Returns Min 1519.32 eval/Actions Mean 0.12333 eval/Actions Std 0.55922 eval/Actions Max 0.998545 eval/Actions Min -0.998463 eval/Num Paths 2 eval/Average Returns 1536.24 eval/normalized_score 47.8255 time/evaluation sampling (s) 0.873353 time/logging (s) 0.00346748 time/sampling batch (s) 0.263978 time/saving (s) 0.00296811 time/training (s) 4.1817 time/epoch (s) 5.32547 time/total (s) 34637.6 Epoch -190 ---------------------------------- --------------- 2022-05-10 22:48:02.376034 PDT | [0] Epoch -189 finished ---------------------------------- --------------- epoch -189 replay_buffer/size 999996 trainer/num train calls 812000 trainer/Policy Loss -2.29777 trainer/Log Pis Mean 2.35039 trainer/Log Pis Std 2.65138 trainer/Log Pis Max 14.283 trainer/Log Pis Min -5.1647 trainer/policy/mean Mean 0.152294 
trainer/policy/mean Std 0.617676 trainer/policy/mean Max 0.998691 trainer/policy/mean Min -0.999888 trainer/policy/normal/std Mean 0.379182 trainer/policy/normal/std Std 0.183182 trainer/policy/normal/std Max 0.971559 trainer/policy/normal/std Min 0.07019 trainer/policy/normal/log_std Mean -1.11747 trainer/policy/normal/log_std Std 0.585277 trainer/policy/normal/log_std Max -0.028853 trainer/policy/normal/log_std Min -2.65655 eval/num steps total 535512 eval/num paths total 1034 eval/path length Mean 717 eval/path length Std 0 eval/path length Max 717 eval/path length Min 717 eval/Rewards Mean 3.25861 eval/Rewards Std 0.736716 eval/Rewards Max 4.83421 eval/Rewards Min 0.987 eval/Returns Mean 2336.42 eval/Returns Std 0 eval/Returns Max 2336.42 eval/Returns Min 2336.42 eval/Actions Mean 0.159385 eval/Actions Std 0.588013 eval/Actions Max 0.998944 eval/Actions Min -0.998641 eval/Num Paths 1 eval/Average Returns 2336.42 eval/normalized_score 72.4119 time/evaluation sampling (s) 0.860887 time/logging (s) 0.00266772 time/sampling batch (s) 0.262932 time/saving (s) 0.00278829 time/training (s) 4.15244 time/epoch (s) 5.28172 time/total (s) 34642.9 Epoch -189 ---------------------------------- --------------- 2022-05-10 22:48:07.688427 PDT | [0] Epoch -188 finished ---------------------------------- --------------- epoch -188 replay_buffer/size 999996 trainer/num train calls 813000 trainer/Policy Loss -2.15354 trainer/Log Pis Mean 2.29077 trainer/Log Pis Std 2.58766 trainer/Log Pis Max 9.23955 trainer/Log Pis Min -5.70584 trainer/policy/mean Mean 0.14274 trainer/policy/mean Std 0.607902 trainer/policy/mean Max 0.997211 trainer/policy/mean Min -0.996571 trainer/policy/normal/std Mean 0.370982 trainer/policy/normal/std Std 0.181151 trainer/policy/normal/std Max 0.954966 trainer/policy/normal/std Min 0.0695214 trainer/policy/normal/log_std Mean -1.14397 trainer/policy/normal/log_std Std 0.594638 trainer/policy/normal/log_std Max -0.0460793 trainer/policy/normal/log_std Min 
-2.66612 eval/num steps total 535996 eval/num paths total 1035 eval/path length Mean 484 eval/path length Std 0 eval/path length Max 484 eval/path length Min 484 eval/Rewards Mean 3.17127 eval/Rewards Std 0.812898 eval/Rewards Max 4.78887 eval/Rewards Min 0.977816 eval/Returns Mean 1534.9 eval/Returns Std 0 eval/Returns Max 1534.9 eval/Returns Min 1534.9 eval/Actions Mean 0.147024 eval/Actions Std 0.601524 eval/Actions Max 0.997009 eval/Actions Min -0.998491 eval/Num Paths 1 eval/Average Returns 1534.9 eval/normalized_score 47.7841 time/evaluation sampling (s) 0.864137 time/logging (s) 0.00223084 time/sampling batch (s) 0.263175 time/saving (s) 0.00291065 time/training (s) 4.15637 time/epoch (s) 5.28882 time/total (s) 34648.2 Epoch -188 ---------------------------------- --------------- 2022-05-10 22:48:12.982657 PDT | [0] Epoch -187 finished ---------------------------------- --------------- epoch -187 replay_buffer/size 999996 trainer/num train calls 814000 trainer/Policy Loss -2.26359 trainer/Log Pis Mean 2.41361 trainer/Log Pis Std 2.59725 trainer/Log Pis Max 9.96481 trainer/Log Pis Min -7.08966 trainer/policy/mean Mean 0.148201 trainer/policy/mean Std 0.622079 trainer/policy/mean Max 0.997436 trainer/policy/mean Min -0.996923 trainer/policy/normal/std Mean 0.37269 trainer/policy/normal/std Std 0.184359 trainer/policy/normal/std Max 1.17319 trainer/policy/normal/std Min 0.0655461 trainer/policy/normal/log_std Mean -1.14363 trainer/policy/normal/log_std Std 0.604294 trainer/policy/normal/log_std Max 0.15973 trainer/policy/normal/log_std Min -2.725 eval/num steps total 536642 eval/num paths total 1036 eval/path length Mean 646 eval/path length Std 0 eval/path length Max 646 eval/path length Min 646 eval/Rewards Mean 3.18824 eval/Rewards Std 0.740808 eval/Rewards Max 4.76105 eval/Rewards Min 0.980132 eval/Returns Mean 2059.6 eval/Returns Std 0 eval/Returns Max 2059.6 eval/Returns Min 2059.6 eval/Actions Mean 0.137466 eval/Actions Std 0.595083 eval/Actions Max 
0.998239 eval/Actions Min -0.998153 eval/Num Paths 1 eval/Average Returns 2059.6 eval/normalized_score 63.9062 time/evaluation sampling (s) 0.86244 time/logging (s) 0.00280268 time/sampling batch (s) 0.262416 time/saving (s) 0.00285894 time/training (s) 4.14129 time/epoch (s) 5.2718 time/total (s) 34653.5 Epoch -187 ---------------------------------- --------------- 2022-05-10 22:48:18.320831 PDT | [0] Epoch -186 finished ---------------------------------- --------------- epoch -186 replay_buffer/size 999996 trainer/num train calls 815000 trainer/Policy Loss -2.15146 trainer/Log Pis Mean 1.97783 trainer/Log Pis Std 2.71413 trainer/Log Pis Max 9.98084 trainer/Log Pis Min -4.4258 trainer/policy/mean Mean 0.115562 trainer/policy/mean Std 0.619229 trainer/policy/mean Max 0.997261 trainer/policy/mean Min -0.997464 trainer/policy/normal/std Mean 0.387507 trainer/policy/normal/std Std 0.183829 trainer/policy/normal/std Max 0.949494 trainer/policy/normal/std Min 0.0695747 trainer/policy/normal/log_std Mean -1.091 trainer/policy/normal/log_std Std 0.576289 trainer/policy/normal/log_std Max -0.0518259 trainer/policy/normal/log_std Min -2.66535 eval/num steps total 537288 eval/num paths total 1037 eval/path length Mean 646 eval/path length Std 0 eval/path length Max 646 eval/path length Min 646 eval/Rewards Mean 3.25803 eval/Rewards Std 0.745418 eval/Rewards Max 4.74564 eval/Rewards Min 0.982064 eval/Returns Mean 2104.69 eval/Returns Std 0 eval/Returns Max 2104.69 eval/Returns Min 2104.69 eval/Actions Mean 0.148859 eval/Actions Std 0.61042 eval/Actions Max 0.99721 eval/Actions Min -0.999152 eval/Num Paths 1 eval/Average Returns 2104.69 eval/normalized_score 65.2916 time/evaluation sampling (s) 0.876509 time/logging (s) 0.00255169 time/sampling batch (s) 0.26595 time/saving (s) 0.00283978 time/training (s) 4.16635 time/epoch (s) 5.3142 time/total (s) 34658.8 Epoch -186 ---------------------------------- --------------- 2022-05-10 22:48:23.673834 PDT | [0] Epoch -185 finished 
---------------------------------- --------------- epoch -185 replay_buffer/size 999996 trainer/num train calls 816000 trainer/Policy Loss -2.26913 trainer/Log Pis Mean 2.16633 trainer/Log Pis Std 2.58076 trainer/Log Pis Max 9.21083 trainer/Log Pis Min -4.32588 trainer/policy/mean Mean 0.122833 trainer/policy/mean Std 0.619012 trainer/policy/mean Max 0.998543 trainer/policy/mean Min -0.997039 trainer/policy/normal/std Mean 0.381364 trainer/policy/normal/std Std 0.177419 trainer/policy/normal/std Max 1.19132 trainer/policy/normal/std Min 0.0674072 trainer/policy/normal/log_std Mean -1.10002 trainer/policy/normal/log_std Std 0.560678 trainer/policy/normal/log_std Max 0.175062 trainer/policy/normal/log_std Min -2.697 eval/num steps total 537860 eval/num paths total 1038 eval/path length Mean 572 eval/path length Std 0 eval/path length Max 572 eval/path length Min 572 eval/Rewards Mean 3.12822 eval/Rewards Std 0.728646 eval/Rewards Max 4.57405 eval/Rewards Min 0.976809 eval/Returns Mean 1789.34 eval/Returns Std 0 eval/Returns Max 1789.34 eval/Returns Min 1789.34 eval/Actions Mean 0.150887 eval/Actions Std 0.604908 eval/Actions Max 0.998609 eval/Actions Min -0.998352 eval/Num Paths 1 eval/Average Returns 1789.34 eval/normalized_score 55.6021 time/evaluation sampling (s) 0.870319 time/logging (s) 0.00249857 time/sampling batch (s) 0.26796 time/saving (s) 0.00292068 time/training (s) 4.18567 time/epoch (s) 5.32936 time/total (s) 34664.1 Epoch -185 ---------------------------------- --------------- 2022-05-10 22:48:29.028659 PDT | [0] Epoch -184 finished ---------------------------------- --------------- epoch -184 replay_buffer/size 999996 trainer/num train calls 817000 trainer/Policy Loss -2.31595 trainer/Log Pis Mean 2.14709 trainer/Log Pis Std 2.55294 trainer/Log Pis Max 8.53767 trainer/Log Pis Min -4.51533 trainer/policy/mean Mean 0.185737 trainer/policy/mean Std 0.615209 trainer/policy/mean Max 0.997267 trainer/policy/mean Min -0.998915 trainer/policy/normal/std Mean 
0.387491 trainer/policy/normal/std Std 0.18902 trainer/policy/normal/std Max 1.03656 trainer/policy/normal/std Min 0.0739593 trainer/policy/normal/log_std Mean -1.09848 trainer/policy/normal/log_std Std 0.589373 trainer/policy/normal/log_std Max 0.0359053 trainer/policy/normal/log_std Min -2.60424 eval/num steps total 538393 eval/num paths total 1039 eval/path length Mean 533 eval/path length Std 0 eval/path length Max 533 eval/path length Min 533 eval/Rewards Mean 3.19724 eval/Rewards Std 0.851327 eval/Rewards Max 5.32314 eval/Rewards Min 0.985514 eval/Returns Mean 1704.13 eval/Returns Std 0 eval/Returns Max 1704.13 eval/Returns Min 1704.13 eval/Actions Mean 0.147328 eval/Actions Std 0.58336 eval/Actions Max 0.998192 eval/Actions Min -0.998379 eval/Num Paths 1 eval/Average Returns 1704.13 eval/normalized_score 52.984 time/evaluation sampling (s) 0.878222 time/logging (s) 0.00229858 time/sampling batch (s) 0.266139 time/saving (s) 0.00289593 time/training (s) 4.18153 time/epoch (s) 5.33108 time/total (s) 34669.5 Epoch -184 ---------------------------------- --------------- 2022-05-10 22:48:34.378857 PDT | [0] Epoch -183 finished ---------------------------------- --------------- epoch -183 replay_buffer/size 999996 trainer/num train calls 818000 trainer/Policy Loss -2.26993 trainer/Log Pis Mean 2.19634 trainer/Log Pis Std 2.64649 trainer/Log Pis Max 9.63493 trainer/Log Pis Min -5.69551 trainer/policy/mean Mean 0.132963 trainer/policy/mean Std 0.616569 trainer/policy/mean Max 0.99815 trainer/policy/mean Min -0.997236 trainer/policy/normal/std Mean 0.375309 trainer/policy/normal/std Std 0.181433 trainer/policy/normal/std Max 0.9403 trainer/policy/normal/std Min 0.065463 trainer/policy/normal/log_std Mean -1.12702 trainer/policy/normal/log_std Std 0.582162 trainer/policy/normal/log_std Max -0.0615564 trainer/policy/normal/log_std Min -2.72627 eval/num steps total 538915 eval/num paths total 1040 eval/path length Mean 522 eval/path length Std 0 eval/path length Max 522 
eval/path length Min  522
eval/Rewards Mean  3.1608
eval/Rewards Std  0.831342
eval/Rewards Max  5.42914
eval/Rewards Min  0.977658
eval/Returns Mean  1649.94
eval/Returns Std  0
eval/Returns Max  1649.94
eval/Returns Min  1649.94
eval/Actions Mean  0.147239
eval/Actions Std  0.585134
eval/Actions Max  0.998055
eval/Actions Min  -0.997687
eval/Num Paths  1
eval/Average Returns  1649.94
eval/normalized_score  51.3189
time/evaluation sampling (s)  0.877662
time/logging (s)  0.00234407
time/sampling batch (s)  0.265374
time/saving (s)  0.00309172
time/training (s)  4.17813
time/epoch (s)  5.3266
time/total (s)  34674.8
Epoch -183
---------------------------------- ---------------
2022-05-10 22:48:39.707716 PDT | [0] Epoch -182 finished
---------------------------------- ---------------
epoch  -182
replay_buffer/size  999996
trainer/num train calls  819000
trainer/Policy Loss  -2.1284
trainer/Log Pis Mean  2.18603
trainer/Log Pis Std  2.59526
trainer/Log Pis Max  9.15774
trainer/Log Pis Min  -4.51496
trainer/policy/mean Mean  0.135337
trainer/policy/mean Std  0.618246
trainer/policy/mean Max  0.996956
trainer/policy/mean Min  -0.99814
trainer/policy/normal/std Mean  0.379961
trainer/policy/normal/std Std  0.190188
trainer/policy/normal/std Max  0.916384
trainer/policy/normal/std Min  0.0623913
trainer/policy/normal/log_std Mean  -1.13098
trainer/policy/normal/log_std Std  0.619198
trainer/policy/normal/log_std Max  -0.0873203
trainer/policy/normal/log_std Min  -2.77433
eval/num steps total  539575
eval/num paths total  1041
eval/path length Mean  660
eval/path length Std  0
eval/path length Max  660
eval/path length Min  660
eval/Rewards Mean  3.17999
eval/Rewards Std  0.705237
eval/Rewards Max  4.65704
eval/Rewards Min  0.976982
eval/Returns Mean  2098.79
eval/Returns Std  0
eval/Returns Max  2098.79
eval/Returns Min  2098.79
eval/Actions Mean  0.159129
eval/Actions Std  0.601874
eval/Actions Max  0.998725
eval/Actions Min  -0.99759
eval/Num Paths  1
eval/Average Returns  2098.79
eval/normalized_score  65.1104
time/evaluation sampling (s)  0.869875
time/logging (s)  0.00258242
time/sampling batch (s)  0.264354
time/saving (s)  0.00289238
time/training (s)  4.16565
time/epoch (s)  5.30535
time/total (s)  34680.1
Epoch -182
---------------------------------- ---------------
2022-05-10 22:48:45.055692 PDT | [0] Epoch -181 finished
---------------------------------- ---------------
epoch  -181
replay_buffer/size  999996
trainer/num train calls  820000
trainer/Policy Loss  -2.3678
trainer/Log Pis Mean  2.21554
trainer/Log Pis Std  2.70181
trainer/Log Pis Max  10.2305
trainer/Log Pis Min  -6.33599
trainer/policy/mean Mean  0.142248
trainer/policy/mean Std  0.61868
trainer/policy/mean Max  0.997016
trainer/policy/mean Min  -0.999341
trainer/policy/normal/std Mean  0.378204
trainer/policy/normal/std Std  0.183193
trainer/policy/normal/std Max  0.955565
trainer/policy/normal/std Min  0.0680394
trainer/policy/normal/log_std Mean  -1.12259
trainer/policy/normal/log_std Std  0.591702
trainer/policy/normal/log_std Max  -0.0454526
trainer/policy/normal/log_std Min  -2.68767
eval/num steps total  540149
eval/num paths total  1042
eval/path length Mean  574
eval/path length Std  0
eval/path length Max  574
eval/path length Min  574
eval/Rewards Mean  3.1279
eval/Rewards Std  0.701885
eval/Rewards Max  4.46345
eval/Rewards Min  0.978585
eval/Returns Mean  1795.41
eval/Returns Std  0
eval/Returns Max  1795.41
eval/Returns Min  1795.41
eval/Actions Mean  0.154703
eval/Actions Std  0.60711
eval/Actions Max  0.998397
eval/Actions Min  -0.996726
eval/Num Paths  1
eval/Average Returns  1795.41
eval/normalized_score  55.7888
time/evaluation sampling (s)  0.87116
time/logging (s)  0.00238471
time/sampling batch (s)  0.265383
time/saving (s)  0.00283984
time/training (s)  4.18244
time/epoch (s)  5.32421
time/total (s)  34685.4
Epoch -181
---------------------------------- ---------------
2022-05-10 22:48:50.384204 PDT | [0] Epoch -180 finished
---------------------------------- ---------------
epoch  -180
replay_buffer/size  999996
trainer/num train calls  821000
trainer/Policy Loss  -2.22675
trainer/Log Pis Mean  2.15817
trainer/Log Pis Std  2.57623
trainer/Log Pis Max  9.65069
trainer/Log Pis Min  -6.72498
trainer/policy/mean Mean  0.13638
trainer/policy/mean Std  0.613242
trainer/policy/mean Max  0.997613
trainer/policy/mean Min  -0.997762
trainer/policy/normal/std Mean  0.382818
trainer/policy/normal/std Std  0.182088
trainer/policy/normal/std Max  0.914847
trainer/policy/normal/std Min  0.066027
trainer/policy/normal/log_std Mean  -1.10582
trainer/policy/normal/log_std Std  0.582503
trainer/policy/normal/log_std Max  -0.0889988
trainer/policy/normal/log_std Min  -2.71769
eval/num steps total  541146
eval/num paths total  1044
eval/path length Mean  498.5
eval/path length Std  0.5
eval/path length Max  499
eval/path length Min  498
eval/Rewards Mean  3.10525
eval/Rewards Std  0.76989
eval/Rewards Max  4.71837
eval/Rewards Min  0.979287
eval/Returns Mean  1547.97
eval/Returns Std  0.712645
eval/Returns Max  1548.68
eval/Returns Min  1547.25
eval/Actions Mean  0.154849
eval/Actions Std  0.591837
eval/Actions Max  0.998691
eval/Actions Min  -0.999268
eval/Num Paths  2
eval/Average Returns  1547.97
eval/normalized_score  48.1857
time/evaluation sampling (s)  0.880872
time/logging (s)  0.00353066
time/sampling batch (s)  0.265411
time/saving (s)  0.00294077
time/training (s)  4.15332
time/epoch (s)  5.30608
time/total (s)  34690.7
Epoch -180
---------------------------------- ---------------
2022-05-10 22:48:55.749157 PDT | [0] Epoch -179 finished
---------------------------------- ---------------
epoch  -179
replay_buffer/size  999996
trainer/num train calls  822000
trainer/Policy Loss  -2.43265
trainer/Log Pis Mean  2.38945
trainer/Log Pis Std  2.58871
trainer/Log Pis Max  9.78009
trainer/Log Pis Min  -5.30852
trainer/policy/mean Mean  0.16467
trainer/policy/mean Std  0.627185
trainer/policy/mean Max  0.996824
trainer/policy/mean Min  -0.998672
trainer/policy/normal/std Mean  0.377575
trainer/policy/normal/std Std  0.187353
trainer/policy/normal/std Max  0.934215
trainer/policy/normal/std Min  0.0676488
trainer/policy/normal/log_std Mean  -1.12955
trainer/policy/normal/log_std Std  0.599418
trainer/policy/normal/log_std Max  -0.068049
trainer/policy/normal/log_std Min  -2.69343
eval/num steps total  541783
eval/num paths total  1045
eval/path length Mean  637
eval/path length Std  0
eval/path length Max  637
eval/path length Min  637
eval/Rewards Mean  3.17863
eval/Rewards Std  0.745461
eval/Rewards Max  4.72711
eval/Rewards Min  0.985665
eval/Returns Mean  2024.79
eval/Returns Std  0
eval/Returns Max  2024.79
eval/Returns Min  2024.79
eval/Actions Mean  0.146188
eval/Actions Std  0.603881
eval/Actions Max  0.997547
eval/Actions Min  -0.998382
eval/Num Paths  1
eval/Average Returns  2024.79
eval/normalized_score  62.8365
time/evaluation sampling (s)  0.903292
time/logging (s)  0.00269824
time/sampling batch (s)  0.265311
time/saving (s)  0.00311976
time/training (s)  4.16575
time/epoch (s)  5.34018
time/total (s)  34696.1
Epoch -179
---------------------------------- ---------------
2022-05-10 22:49:01.101643 PDT | [0] Epoch -178 finished
---------------------------------- ---------------
epoch  -178
replay_buffer/size  999996
trainer/num train calls  823000
trainer/Policy Loss  -2.36945
trainer/Log Pis Mean  2.39868
trainer/Log Pis Std  2.63944
trainer/Log Pis Max  10.2229
trainer/Log Pis Min  -4.87074
trainer/policy/mean Mean  0.132069
trainer/policy/mean Std  0.621954
trainer/policy/mean Max  0.998146
trainer/policy/mean Min  -0.997459
trainer/policy/normal/std Mean  0.374667
trainer/policy/normal/std Std  0.183152
trainer/policy/normal/std Max  1.01413
trainer/policy/normal/std Min  0.0638782
trainer/policy/normal/log_std Mean  -1.13709
trainer/policy/normal/log_std Std  0.602731
trainer/policy/normal/log_std Max  0.014034
trainer/policy/normal/log_std Min  -2.75078
eval/num steps total  542678
eval/num paths total  1047
eval/path length Mean  447.5
eval/path length Std  15.5
eval/path length Max  463
eval/path length Min  432
eval/Rewards Mean  3.18592
eval/Rewards Std  0.889198
eval/Rewards Max  5.53391
eval/Rewards Min  0.97971
eval/Returns Mean  1425.7
eval/Returns Std  64.2595
eval/Returns Max  1489.96
eval/Returns Min  1361.44
eval/Actions Mean  0.150073
eval/Actions Std  0.579255
eval/Actions Max  0.997956
eval/Actions Min  -0.999333
eval/Num Paths  2
eval/Average Returns  1425.7
eval/normalized_score  44.429
time/evaluation sampling (s)  0.909437
time/logging (s)  0.00326762
time/sampling batch (s)  0.264561
time/saving (s)  0.00287185
time/training (s)  4.14897
time/epoch (s)  5.32911
time/total (s)  34701.4
Epoch -178
---------------------------------- ---------------
2022-05-10 22:49:06.444093 PDT | [0] Epoch -177 finished
---------------------------------- ---------------
epoch  -177
replay_buffer/size  999996
trainer/num train calls  824000
trainer/Policy Loss  -2.13365
trainer/Log Pis Mean  2.22909
trainer/Log Pis Std  2.69776
trainer/Log Pis Max  11.3668
trainer/Log Pis Min  -10.2796
trainer/policy/mean Mean  0.137533
trainer/policy/mean Std  0.624887
trainer/policy/mean Max  0.998489
trainer/policy/mean Min  -0.997491
trainer/policy/normal/std Mean  0.376861
trainer/policy/normal/std Std  0.180346
trainer/policy/normal/std Max  1.02203
trainer/policy/normal/std Min  0.0662519
trainer/policy/normal/log_std Mean  -1.12395
trainer/policy/normal/log_std Std  0.588342
trainer/policy/normal/log_std Max  0.0217954
trainer/policy/normal/log_std Min  -2.71429
eval/num steps total  543258
eval/num paths total  1048
eval/path length Mean  580
eval/path length Std  0
eval/path length Max  580
eval/path length Min  580
eval/Rewards Mean  3.17689
eval/Rewards Std  0.711211
eval/Rewards Max  4.7952
eval/Rewards Min  0.980365
eval/Returns Mean  1842.59
eval/Returns Std  0
eval/Returns Max  1842.59
eval/Returns Min  1842.59
eval/Actions Mean  0.155811
eval/Actions Std  0.609185
eval/Actions Max  0.998117
eval/Actions Min  -0.998192
eval/Num Paths  1
eval/Average Returns  1842.59
eval/normalized_score  57.2384
time/evaluation sampling (s)  0.895102
time/logging (s)  0.00253168
time/sampling batch (s)  0.264559
time/saving (s)  0.00308069
time/training (s)  4.15287
time/epoch (s)  5.31815
time/total (s)  34706.7
Epoch -177
---------------------------------- ---------------
2022-05-10 22:49:11.788739 PDT | [0] Epoch -176 finished
---------------------------------- ---------------
epoch  -176
replay_buffer/size  999996
trainer/num train calls  825000
trainer/Policy Loss  -2.17633
trainer/Log Pis Mean  2.23063
trainer/Log Pis Std  2.65952
trainer/Log Pis Max  10.0136
trainer/Log Pis Min  -4.95159
trainer/policy/mean Mean  0.136761
trainer/policy/mean Std  0.614042
trainer/policy/mean Max  0.996872
trainer/policy/mean Min  -0.998496
trainer/policy/normal/std Mean  0.373001
trainer/policy/normal/std Std  0.178523
trainer/policy/normal/std Max  0.948931
trainer/policy/normal/std Min  0.0670286
trainer/policy/normal/log_std Mean  -1.13177
trainer/policy/normal/log_std Std  0.580224
trainer/policy/normal/log_std Max  -0.052419
trainer/policy/normal/log_std Min  -2.70264
eval/num steps total  543790
eval/num paths total  1049
eval/path length Mean  532
eval/path length Std  0
eval/path length Max  532
eval/path length Min  532
eval/Rewards Mean  3.20687
eval/Rewards Std  0.870878
eval/Rewards Max  5.60195
eval/Rewards Min  0.982826
eval/Returns Mean  1706.05
eval/Returns Std  0
eval/Returns Max  1706.05
eval/Returns Min  1706.05
eval/Actions Mean  0.165189
eval/Actions Std  0.586639
eval/Actions Max  0.998761
eval/Actions Min  -0.997673
eval/Num Paths  1
eval/Average Returns  1706.05
eval/normalized_score  53.0431
time/evaluation sampling (s)  0.8889
time/logging (s)  0.00230058
time/sampling batch (s)  0.264892
time/saving (s)  0.00290254
time/training (s)  4.16163
time/epoch (s)  5.32063
time/total (s)  34712.1
Epoch -176
---------------------------------- ---------------
2022-05-10 22:49:17.126961 PDT | [0] Epoch -175 finished
---------------------------------- ---------------
epoch  -175
replay_buffer/size  999996
trainer/num train calls  826000
trainer/Policy Loss  -2.40503
trainer/Log Pis Mean  2.49496
trainer/Log Pis Std  2.60665
trainer/Log Pis Max  10.7362
trainer/Log Pis Min  -4.96034
trainer/policy/mean Mean  0.154712
trainer/policy/mean Std  0.626599
trainer/policy/mean Max  0.995674
trainer/policy/mean Min  -0.998036
trainer/policy/normal/std Mean  0.370776
trainer/policy/normal/std Std  0.183186
trainer/policy/normal/std Max  0.949382
trainer/policy/normal/std Min  0.0636447
trainer/policy/normal/log_std Mean  -1.15197
trainer/policy/normal/log_std Std  0.612197
trainer/policy/normal/log_std Max  -0.0519437
trainer/policy/normal/log_std Min  -2.75444
eval/num steps total  544706
eval/num paths total  1051
eval/path length Mean  458
eval/path length Std  52
eval/path length Max  510
eval/path length Min  406
eval/Rewards Mean  3.11866
eval/Rewards Std  0.80397
eval/Rewards Max  5.34904
eval/Rewards Min  0.97902
eval/Returns Mean  1428.35
eval/Returns Std  189.32
eval/Returns Max  1617.67
eval/Returns Min  1239.03
eval/Actions Mean  0.156661
eval/Actions Std  0.600706
eval/Actions Max  0.997682
eval/Actions Min  -0.998741
eval/Num Paths  2
eval/Average Returns  1428.35
eval/normalized_score  44.5103
time/evaluation sampling (s)  0.878869
time/logging (s)  0.00319322
time/sampling batch (s)  0.265881
time/saving (s)  0.0028486
time/training (s)  4.16432
time/epoch (s)  5.31511
time/total (s)  34717.4
Epoch -175
---------------------------------- ---------------
2022-05-10 22:49:22.476331 PDT | [0] Epoch -174 finished
---------------------------------- ---------------
epoch  -174
replay_buffer/size  999996
trainer/num train calls  827000
trainer/Policy Loss  -2.42708
trainer/Log Pis Mean  2.2609
trainer/Log Pis Std  2.65458
trainer/Log Pis Max  13.5692
trainer/Log Pis Min  -6.99265
trainer/policy/mean Mean  0.164858
trainer/policy/mean Std  0.623894
trainer/policy/mean Max  0.99754
trainer/policy/mean Min  -0.998087
trainer/policy/normal/std Mean  0.381488
trainer/policy/normal/std Std  0.182883
trainer/policy/normal/std Max  0.98804
trainer/policy/normal/std Min  0.0666715
trainer/policy/normal/log_std Mean  -1.1138
trainer/policy/normal/log_std Std  0.594497
trainer/policy/normal/log_std Max  -0.0120323
trainer/policy/normal/log_std Min  -2.70798
eval/num steps total  545697
eval/num paths total  1053
eval/path length Mean  495.5
eval/path length Std  3.5
eval/path length Max  499
eval/path length Min  492
eval/Rewards Mean  3.04667
eval/Rewards Std  0.791117
eval/Rewards Max  4.90268
eval/Rewards Min  0.977093
eval/Returns Mean  1509.62
eval/Returns Std  11.7547
eval/Returns Max  1521.38
eval/Returns Min  1497.87
eval/Actions Mean  0.155261
eval/Actions Std  0.582731
eval/Actions Max  0.996943
eval/Actions Min  -0.998072
eval/Num Paths  2
eval/Average Returns  1509.62
eval/normalized_score  47.0076
time/evaluation sampling (s)  0.874003
time/logging (s)  0.00342434
time/sampling batch (s)  0.265023
time/saving (s)  0.00280591
time/training (s)  4.18049
time/epoch (s)  5.32574
time/total (s)  34722.7
Epoch -174
---------------------------------- ---------------
2022-05-10 22:49:27.781075 PDT | [0] Epoch -173 finished
---------------------------------- ---------------
epoch  -173
replay_buffer/size  999996
trainer/num train calls  828000
trainer/Policy Loss  -2.34515
trainer/Log Pis Mean  2.33299
trainer/Log Pis Std  2.64831
trainer/Log Pis Max  16.8712
trainer/Log Pis Min  -4.09536
trainer/policy/mean Mean  0.161247
trainer/policy/mean Std  0.616407
trainer/policy/mean Max  0.998226
trainer/policy/mean Min  -0.999861
trainer/policy/normal/std Mean  0.369107
trainer/policy/normal/std Std  0.179125
trainer/policy/normal/std Max  1.06048
trainer/policy/normal/std Min  0.0656778
trainer/policy/normal/log_std Mean  -1.15021
trainer/policy/normal/log_std Std  0.600939
trainer/policy/normal/log_std Max  0.0587223
trainer/policy/normal/log_std Min  -2.72299
eval/num steps total  546315
eval/num paths total  1054
eval/path length Mean  618
eval/path length Std  0
eval/path length Max  618
eval/path length Min  618
eval/Rewards Mean  3.20414
eval/Rewards Std  0.77296
eval/Rewards Max  5.28958
eval/Rewards Min  0.98622
eval/Returns Mean  1980.16
eval/Returns Std  0
eval/Returns Max  1980.16
eval/Returns Min  1980.16
eval/Actions Mean  0.139737
eval/Actions Std  0.591389
eval/Actions Max  0.998754
eval/Actions Min  -0.998476
eval/Num Paths  1
eval/Average Returns  1980.16
eval/normalized_score  61.4652
time/evaluation sampling (s)  0.865774
time/logging (s)  0.00250348
time/sampling batch (s)  0.268762
time/saving (s)  0.00284746
time/training (s)  4.13992
time/epoch (s)  5.2798
time/total (s)  34728
Epoch -173
---------------------------------- ---------------
2022-05-10 22:49:33.166886 PDT | [0] Epoch -172 finished
---------------------------------- ---------------
epoch  -172
replay_buffer/size  999996
trainer/num train calls  829000
trainer/Policy Loss  -2.28678
trainer/Log Pis Mean  2.18786
trainer/Log Pis Std  2.47566
trainer/Log Pis Max  16.633
trainer/Log Pis Min  -5.34302
trainer/policy/mean Mean  0.141406
trainer/policy/mean Std  0.610716
trainer/policy/mean Max  0.99848
trainer/policy/mean Min  -0.99948
trainer/policy/normal/std Mean  0.380349
trainer/policy/normal/std Std  0.187267
trainer/policy/normal/std Max  0.95951
trainer/policy/normal/std Min  0.0639676
trainer/policy/normal/log_std Mean  -1.11994
trainer/policy/normal/log_std Std  0.595544
trainer/policy/normal/log_std Max  -0.0413331
trainer/policy/normal/log_std Min  -2.74938
eval/num steps total  546893
eval/num paths total  1055
eval/path length Mean  578
eval/path length Std  0
eval/path length Max  578
eval/path length Min  578
eval/Rewards Mean  3.16945
eval/Rewards Std  0.736403
eval/Rewards Max  5.09687
eval/Rewards Min  0.982139
eval/Returns Mean  1831.94
eval/Returns Std  0
eval/Returns Max  1831.94
eval/Returns Min  1831.94
eval/Actions Mean  0.153606
eval/Actions Std  0.605555
eval/Actions Max  0.997594
eval/Actions Min  -0.997322
eval/Num Paths  1
eval/Average Returns  1831.94
eval/normalized_score  56.9112
time/evaluation sampling (s)  0.927258
time/logging (s)  0.00240311
time/sampling batch (s)  0.263838
time/saving (s)  0.0029167
time/training (s)  4.16545
time/epoch (s)  5.36187
time/total (s)  34733.4
Epoch -172
---------------------------------- ---------------
2022-05-10 22:49:38.488587 PDT | [0] Epoch -171 finished
---------------------------------- ---------------
epoch  -171
replay_buffer/size  999996
trainer/num train calls  830000
trainer/Policy Loss  -2.29302
trainer/Log Pis Mean  2.29225
trainer/Log Pis Std  2.54112
trainer/Log Pis Max  10.2713
trainer/Log Pis Min  -3.90295
trainer/policy/mean Mean  0.133254
trainer/policy/mean Std  0.618758
trainer/policy/mean Max  0.998235
trainer/policy/mean Min  -0.997716
trainer/policy/normal/std Mean  0.370869
trainer/policy/normal/std Std  0.184165
trainer/policy/normal/std Max  0.972304
trainer/policy/normal/std Min  0.0656851
trainer/policy/normal/log_std Mean  -1.15147
trainer/policy/normal/log_std Std  0.610164
trainer/policy/normal/log_std Max  -0.0280871
trainer/policy/normal/log_std Min  -2.72288
eval/num steps total  547885
eval/num paths total  1057
eval/path length Mean  496
eval/path length Std  10
eval/path length Max  506
eval/path length Min  486
eval/Rewards Mean  3.11006
eval/Rewards Std  0.834227
eval/Rewards Max  5.44271
eval/Rewards Min  0.983353
eval/Returns Mean  1542.59
eval/Returns Std  47.0185
eval/Returns Max  1589.61
eval/Returns Min  1495.57
eval/Actions Mean  0.15211
eval/Actions Std  0.579331
eval/Actions Max  0.99721
eval/Actions Min  -0.998235
eval/Num Paths  2
eval/Average Returns  1542.59
eval/normalized_score  48.0205
time/evaluation sampling (s)  0.87458
time/logging (s)  0.00365449
time/sampling batch (s)  0.26443
time/saving (s)  0.0029251
time/training (s)  4.1538
time/epoch (s)  5.29939
time/total (s)  34738.7
Epoch -171
---------------------------------- ---------------
2022-05-10 22:49:43.785381 PDT | [0] Epoch -170 finished
---------------------------------- ---------------
epoch  -170
replay_buffer/size  999996
trainer/num train calls  831000
trainer/Policy Loss  -2.42741
trainer/Log Pis Mean  2.3211
trainer/Log Pis Std  2.61889
trainer/Log Pis Max  9.4431
trainer/Log Pis Min  -3.98251
trainer/policy/mean Mean  0.147189
trainer/policy/mean Std  0.62171
trainer/policy/mean Max  0.997902
trainer/policy/mean Min  -0.997798
trainer/policy/normal/std Mean  0.373814
trainer/policy/normal/std Std  0.179704
trainer/policy/normal/std Max  0.955166
trainer/policy/normal/std Min  0.0708049
trainer/policy/normal/log_std Mean  -1.13213
trainer/policy/normal/log_std Std  0.587137
trainer/policy/normal/log_std Max  -0.0458704
trainer/policy/normal/log_std Min  -2.64783
eval/num steps total  548536
eval/num paths total  1058
eval/path length Mean  651
eval/path length Std  0
eval/path length Max  651
eval/path length Min  651
eval/Rewards Mean  3.24029
eval/Rewards Std  0.743191
eval/Rewards Max  4.65266
eval/Rewards Min  0.980418
eval/Returns Mean  2109.43
eval/Returns Std  0
eval/Returns Max  2109.43
eval/Returns Min  2109.43
eval/Actions Mean  0.156786
eval/Actions Std  0.607158
eval/Actions Max  0.998749
eval/Actions Min  -0.998358
eval/Num Paths  1
eval/Average Returns  2109.43
eval/normalized_score  65.4373
time/evaluation sampling (s)  0.882247
time/logging (s)  0.00266295
time/sampling batch (s)  0.26117
time/saving (s)  0.00294843
time/training (s)  4.12331
time/epoch (s)  5.27234
time/total (s)  34743.9
Epoch -170
---------------------------------- ---------------
2022-05-10 22:49:49.107817 PDT | [0] Epoch -169 finished
---------------------------------- ---------------
epoch  -169
replay_buffer/size  999996
trainer/num train calls  832000
trainer/Policy Loss  -2.18456
trainer/Log Pis Mean  2.20615
trainer/Log Pis Std  2.56069
trainer/Log Pis Max  13.612
trainer/Log Pis Min  -4.95513
trainer/policy/mean Mean  0.13682
trainer/policy/mean Std  0.614606
trainer/policy/mean Max  0.997157
trainer/policy/mean Min  -0.998653
trainer/policy/normal/std Mean  0.381961
trainer/policy/normal/std Std  0.187424
trainer/policy/normal/std Max  1.01381
trainer/policy/normal/std Min  0.0758057
trainer/policy/normal/log_std Mean  -1.1152
trainer/policy/normal/log_std Std  0.594288
trainer/policy/normal/log_std Max  0.0137106
trainer/policy/normal/log_std Min  -2.57958
eval/num steps total  549115
eval/num paths total  1059
eval/path length Mean  579
eval/path length Std  0
eval/path length Max  579
eval/path length Min  579
eval/Rewards Mean  3.19259
eval/Rewards Std  0.759138
eval/Rewards Max  4.87177
eval/Rewards Min  0.985053
eval/Returns Mean  1848.51
eval/Returns Std  0
eval/Returns Max  1848.51
eval/Returns Min  1848.51
eval/Actions Mean  0.152977
eval/Actions Std  0.589752
eval/Actions Max  0.998105
eval/Actions Min  -0.998566
eval/Num Paths  1
eval/Average Returns  1848.51
eval/normalized_score  57.4203
time/evaluation sampling (s)  0.869883
time/logging (s)  0.00238871
time/sampling batch (s)  0.262748
time/saving (s)  0.00295335
time/training (s)  4.16074
time/epoch (s)  5.29871
time/total (s)  34749.2
Epoch -169
---------------------------------- ---------------
2022-05-10 22:49:54.425629 PDT | [0] Epoch -168 finished
---------------------------------- ---------------
epoch  -168
replay_buffer/size  999996
trainer/num train calls  833000
trainer/Policy Loss  -2.28972
trainer/Log Pis Mean  2.20147
trainer/Log Pis Std  2.60677
trainer/Log Pis Max  11.8121
trainer/Log Pis Min  -4.43169
trainer/policy/mean Mean  0.124197
trainer/policy/mean Std  0.614879
trainer/policy/mean Max  0.997273
trainer/policy/mean Min  -0.998401
trainer/policy/normal/std Mean  0.368801
trainer/policy/normal/std Std  0.180474
trainer/policy/normal/std Max  1.02853
trainer/policy/normal/std Min  0.062659
trainer/policy/normal/log_std Mean  -1.14864
trainer/policy/normal/log_std Std  0.591552
trainer/policy/normal/log_std Max  0.0281325
trainer/policy/normal/log_std Min  -2.77005
eval/num steps total  549891
eval/num paths total  1060
eval/path length Mean  776
eval/path length Std  0
eval/path length Max  776
eval/path length Min  776
eval/Rewards Mean  3.25868
eval/Rewards Std  0.732364
eval/Rewards Max  5.18998
eval/Rewards Min  0.979219
eval/Returns Mean  2528.73
eval/Returns Std  0
eval/Returns Max  2528.73
eval/Returns Min  2528.73
eval/Actions Mean  0.140702
eval/Actions Std  0.591784
eval/Actions Max  0.996575
eval/Actions Min  -0.998334
eval/Num Paths  1
eval/Average Returns  2528.73
eval/normalized_score  78.3208
time/evaluation sampling (s)  0.871848
time/logging (s)  0.00286354
time/sampling batch (s)  0.263202
time/saving (s)  0.00284876
time/training (s)  4.15406
time/epoch (s)  5.29482
time/total (s)  34754.5
Epoch -168
---------------------------------- ---------------
2022-05-10 22:49:59.752121 PDT | [0] Epoch -167 finished
---------------------------------- ---------------
epoch  -167
replay_buffer/size  999996
trainer/num train calls  834000
trainer/Policy Loss  -2.38455
trainer/Log Pis Mean  2.26295
trainer/Log Pis Std  2.56576
trainer/Log Pis Max  9.28946
trainer/Log Pis Min  -5.61934
trainer/policy/mean Mean  0.1365
trainer/policy/mean Std  0.61728
trainer/policy/mean Max  0.998003
trainer/policy/mean Min  -0.998479
trainer/policy/normal/std Mean  0.37438
trainer/policy/normal/std Std  0.18475
trainer/policy/normal/std Max  0.93036
trainer/policy/normal/std Min  0.060911
trainer/policy/normal/log_std Mean  -1.13844
trainer/policy/normal/log_std Std  0.601924
trainer/policy/normal/log_std Max  -0.0721835
trainer/policy/normal/log_std Min  -2.79834
eval/num steps total  550552
eval/num paths total  1061
eval/path length Mean  661
eval/path length Std  0
eval/path length Max  661
eval/path length Min  661
eval/Rewards Mean  3.19759
eval/Rewards Std  0.670949
eval/Rewards Max  4.58307
eval/Rewards Min  0.982672
eval/Returns Mean  2113.6
eval/Returns Std  0
eval/Returns Max  2113.6
eval/Returns Min  2113.6
eval/Actions Mean  0.1488
eval/Actions Std  0.609968
eval/Actions Max  0.998093
eval/Actions Min  -0.999301
eval/Num Paths  1
eval/Average Returns  2113.6
eval/normalized_score  65.5655
time/evaluation sampling (s)  0.866642
time/logging (s)  0.00271205
time/sampling batch (s)  0.263975
time/saving (s)  0.00302752
time/training (s)  4.16658
time/epoch (s)  5.30294
time/total (s)  34759.8
Epoch -167
---------------------------------- ---------------
2022-05-10 22:50:05.061789 PDT | [0] Epoch -166 finished
---------------------------------- ---------------
epoch  -166
replay_buffer/size  999996
trainer/num train calls  835000
trainer/Policy Loss  -2.20326
trainer/Log Pis Mean  2.3257
trainer/Log Pis Std  2.6309
trainer/Log Pis Max  10.1679
trainer/Log Pis Min  -6.73904
trainer/policy/mean Mean  0.145093
trainer/policy/mean Std  0.627093
trainer/policy/mean Max  0.999189
trainer/policy/mean Min  -0.997883
trainer/policy/normal/std Mean  0.376129
trainer/policy/normal/std Std  0.181731
trainer/policy/normal/std Max  0.980729
trainer/policy/normal/std Min  0.0659275
trainer/policy/normal/log_std Mean  -1.12721
trainer/policy/normal/log_std Std  0.589214
trainer/policy/normal/log_std Max  -0.0194592
trainer/policy/normal/log_std Min  -2.7192
eval/num steps total  551102
eval/num paths total  1062
eval/path length Mean  550
eval/path length Std  0
eval/path length Max  550
eval/path length Min  550
eval/Rewards Mean  3.25494
eval/Rewards Std  0.820851
eval/Rewards Max  4.96923
eval/Rewards Min  0.980368
eval/Returns Mean  1790.22
eval/Returns Std  0
eval/Returns Max  1790.22
eval/Returns Min  1790.22
eval/Actions Mean  0.151874
eval/Actions Std  0.583666
eval/Actions Max  0.997699
eval/Actions Min  -0.998388
eval/Num Paths  1
eval/Average Returns  1790.22
eval/normalized_score  55.629
time/evaluation sampling (s)  0.87038
time/logging (s)  0.00227772
time/sampling batch (s)  0.262335
time/saving (s)  0.00277688
time/training (s)  4.14803
time/epoch (s)  5.2858
time/total (s)  34765.1
Epoch -166
---------------------------------- ---------------
2022-05-10 22:50:10.388128 PDT | [0] Epoch -165 finished
---------------------------------- ---------------
epoch  -165
replay_buffer/size  999996
trainer/num train calls  836000
trainer/Policy Loss  -2.35838
trainer/Log Pis Mean  2.16748
trainer/Log Pis Std  2.55422
trainer/Log Pis Max  9.20266
trainer/Log Pis Min  -5.8108
trainer/policy/mean Mean  0.150529
trainer/policy/mean Std  0.610883
trainer/policy/mean Max  0.99836
trainer/policy/mean Min  -0.998474
trainer/policy/normal/std Mean  0.380728
trainer/policy/normal/std Std  0.196813
trainer/policy/normal/std Max  1.08586
trainer/policy/normal/std Min  0.0647653
trainer/policy/normal/log_std Mean  -1.13299
trainer/policy/normal/log_std Std  0.621341
trainer/policy/normal/log_std Max  0.0823706
trainer/policy/normal/log_std Min  -2.73699
eval/num steps total  551767
eval/num paths total  1063
eval/path length Mean  665
eval/path length Std  0
eval/path length Max  665
eval/path length Min  665
eval/Rewards Mean  3.25555
eval/Rewards Std  0.713957
eval/Rewards Max  5.29794
eval/Rewards Min  0.986052
eval/Returns Mean  2164.94
eval/Returns Std  0
eval/Returns Max  2164.94
eval/Returns Min  2164.94
eval/Actions Mean  0.14665
eval/Actions Std  0.600012
eval/Actions Max  0.99787
eval/Actions Min  -0.997479
eval/Num Paths  1
eval/Average Returns  2164.94
eval/normalized_score  67.1428
time/evaluation sampling (s)  0.891565
time/logging (s)  0.00294481
time/sampling batch (s)  0.261113
time/saving (s)  0.0033119
time/training (s)  4.14484
time/epoch (s)  5.30377
time/total (s)  34770.4
Epoch -165
---------------------------------- ---------------
2022-05-10 22:50:15.724330 PDT | [0] Epoch -164 finished
---------------------------------- ---------------
epoch  -164
replay_buffer/size  999996
trainer/num train calls  837000
trainer/Policy Loss  -2.20521
trainer/Log Pis Mean  2.19013
trainer/Log Pis Std  2.57598
trainer/Log Pis Max  13.3288
trainer/Log Pis Min  -4.89114
trainer/policy/mean Mean  0.137817
trainer/policy/mean Std  0.614444
trainer/policy/mean Max  0.997739
trainer/policy/mean Min  -0.998466
trainer/policy/normal/std Mean  0.36798
trainer/policy/normal/std Std  0.173951
trainer/policy/normal/std Max  1.06697
trainer/policy/normal/std Min  0.0723024
trainer/policy/normal/log_std Mean  -1.14006
trainer/policy/normal/log_std Std  0.568123
trainer/policy/normal/log_std Max  0.064825
trainer/policy/normal/log_std Min  -2.6269
eval/num steps total  552261
eval/num paths total  1064
eval/path length Mean  494
eval/path length Std  0
eval/path length Max  494
eval/path length Min  494
eval/Rewards Mean  3.16217
eval/Rewards Std  0.778475
eval/Rewards Max  4.86682
eval/Rewards Min  0.979013
eval/Returns Mean  1562.11
eval/Returns Std  0
eval/Returns Max  1562.11
eval/Returns Min  1562.11
eval/Actions Mean  0.156254
eval/Actions Std  0.595391
eval/Actions Max  0.998565
eval/Actions Min  -0.998031
eval/Num Paths  1
eval/Average Returns  1562.11
eval/normalized_score  48.6203
time/evaluation sampling (s)  0.905417
time/logging (s)  0.00218157
time/sampling batch (s)  0.260889
time/saving (s)  0.00321329
time/training (s)  4.13963
time/epoch (s)  5.31134
time/total (s)  34775.8
Epoch -164
---------------------------------- ---------------
2022-05-10 22:50:21.037429 PDT | [0] Epoch -163 finished
---------------------------------- ---------------
epoch  -163
replay_buffer/size  999996
trainer/num train calls  838000
trainer/Policy Loss  -2.38225
trainer/Log Pis Mean  2.21637
trainer/Log Pis Std  2.61573
trainer/Log Pis Max  12.4338
trainer/Log Pis Min  -6.07492
trainer/policy/mean Mean  0.110321
trainer/policy/mean Std  0.629152
trainer/policy/mean Max  0.998858
trainer/policy/mean Min  -0.997689
trainer/policy/normal/std Mean  0.38409
trainer/policy/normal/std Std  0.187605
trainer/policy/normal/std Max  0.913007
trainer/policy/normal/std Min  0.063451
trainer/policy/normal/log_std Mean  -1.10877
trainer/policy/normal/log_std Std  0.593532
trainer/policy/normal/log_std Max  -0.0910123
trainer/policy/normal/log_std Min  -2.75749
eval/num steps total  553250
eval/num paths total  1066
eval/path length Mean  494.5
eval/path length Std  85.5
eval/path length Max  580
eval/path length Min  409
eval/Rewards Mean  3.14075
eval/Rewards Std  0.749569
eval/Rewards Max  4.74936
eval/Rewards Min  0.977016
eval/Returns Mean  1553.1
eval/Returns Std  292.402
eval/Returns Max  1845.5
eval/Returns Min  1260.7
eval/Actions Mean  0.135074
eval/Actions Std  0.599122
eval/Actions Max  0.997969
eval/Actions Min  -0.999037
eval/Num Paths  2
eval/Average Returns  1553.1
eval/normalized_score  48.3435
time/evaluation sampling (s)  0.893272
time/logging (s)  0.00360454
time/sampling batch (s)  0.26158
time/saving (s)  0.00319704
time/training (s)  4.12923
time/epoch (s)  5.29088
time/total (s)  34781
Epoch -163
---------------------------------- ---------------
2022-05-10 22:50:26.378423 PDT | [0] Epoch -162 finished
---------------------------------- ---------------
epoch  -162
replay_buffer/size  999996
trainer/num train calls  839000
trainer/Policy Loss  -2.17542
trainer/Log Pis Mean  2.22649
trainer/Log Pis Std  2.68295
trainer/Log Pis Max  11.2623
trainer/Log Pis Min  -6.18638
trainer/policy/mean Mean  0.145199
trainer/policy/mean Std  0.612449
trainer/policy/mean Max  0.997504
trainer/policy/mean Min  -0.998186
trainer/policy/normal/std Mean  0.377126
trainer/policy/normal/std Std  0.17638
trainer/policy/normal/std Max  0.902782
trainer/policy/normal/std Min  0.0666113
trainer/policy/normal/log_std Mean  -1.11617
trainer/policy/normal/log_std Std  0.573135
trainer/policy/normal/log_std Max  -0.102274
trainer/policy/normal/log_std Min  -2.70888
eval/num steps total  553789
eval/num paths total  1067
eval/path length Mean  539
eval/path length Std  0
eval/path length Max  539
eval/path length Min  539
eval/Rewards Mean  3.1659
eval/Rewards Std  0.86699
eval/Rewards Max  5.36411
eval/Rewards Min  0.979541
eval/Returns Mean  1706.42
eval/Returns Std  0
eval/Returns Max  1706.42
eval/Returns Min  1706.42
eval/Actions Mean  0.157255
eval/Actions Std  0.577535
eval/Actions Max  0.99825
eval/Actions Min  -0.998988
eval/Num Paths  1
eval/Average Returns  1706.42
eval/normalized_score  53.0544
time/evaluation sampling (s)  0.900821
time/logging (s)  0.00241219
time/sampling batch (s)  0.263768
time/saving (s)  0.0029747
time/training (s)  4.14629
time/epoch (s)  5.31627
time/total (s)  34786.4
Epoch -162
---------------------------------- ---------------
2022-05-10 22:50:31.693142 PDT | [0] Epoch -161 finished
---------------------------------- ---------------
epoch  -161
replay_buffer/size 999996 trainer/num train calls 840000 trainer/Policy Loss -2.28322 trainer/Log Pis Mean 2.34463 trainer/Log Pis Std 2.63686 trainer/Log Pis Max 11.2206 trainer/Log Pis Min -5.68761 trainer/policy/mean Mean 0.116041 trainer/policy/mean Std 0.619137 trainer/policy/mean Max 0.997179 trainer/policy/mean Min -0.997745 trainer/policy/normal/std Mean 0.36991 trainer/policy/normal/std Std 0.180999 trainer/policy/normal/std Max 0.981203 trainer/policy/normal/std Min 0.0705154 trainer/policy/normal/log_std Mean -1.14545 trainer/policy/normal/log_std Std 0.589698 trainer/policy/normal/log_std Max -0.0189762 trainer/policy/normal/log_std Min -2.65192 eval/num steps total 554332 eval/num paths total 1068 eval/path length Mean 543 eval/path length Std 0 eval/path length Max 543 eval/path length Min 543 eval/Rewards Mean 3.24548 eval/Rewards Std 0.826523 eval/Rewards Max 5.09126 eval/Rewards Min 0.98492 eval/Returns Mean 1762.3 eval/Returns Std 0 eval/Returns Max 1762.3 eval/Returns Min 1762.3 eval/Actions Mean 0.142646 eval/Actions Std 0.583965 eval/Actions Max 0.998539 eval/Actions Min -0.998759 eval/Num Paths 1 eval/Average Returns 1762.3 eval/normalized_score 54.7712 time/evaluation sampling (s) 0.875908 time/logging (s) 0.00238369 time/sampling batch (s) 0.261552 time/saving (s) 0.00296999 time/training (s) 4.14808 time/epoch (s) 5.29089 time/total (s) 34791.7 Epoch -161 ---------------------------------- --------------- 2022-05-10 22:50:37.006168 PDT | [0] Epoch -160 finished ---------------------------------- --------------- epoch -160 replay_buffer/size 999996 trainer/num train calls 841000 trainer/Policy Loss -2.12832 trainer/Log Pis Mean 2.16501 trainer/Log Pis Std 2.59246 trainer/Log Pis Max 9.23344 trainer/Log Pis Min -5.21023 trainer/policy/mean Mean 0.134779 trainer/policy/mean Std 0.602019 trainer/policy/mean Max 0.996753 trainer/policy/mean Min -0.99744 trainer/policy/normal/std Mean 0.374746 trainer/policy/normal/std Std 0.180947 
trainer/policy/normal/std Max 0.949268 trainer/policy/normal/std Min 0.0632678 trainer/policy/normal/log_std Mean -1.12929 trainer/policy/normal/log_std Std 0.584361 trainer/policy/normal/log_std Max -0.052064 trainer/policy/normal/log_std Min -2.76038 eval/num steps total 554818 eval/num paths total 1069 eval/path length Mean 486 eval/path length Std 0 eval/path length Max 486 eval/path length Min 486 eval/Rewards Mean 3.1948 eval/Rewards Std 0.800172 eval/Rewards Max 4.7374 eval/Rewards Min 0.980268 eval/Returns Mean 1552.67 eval/Returns Std 0 eval/Returns Max 1552.67 eval/Returns Min 1552.67 eval/Actions Mean 0.143821 eval/Actions Std 0.599347 eval/Actions Max 0.997547 eval/Actions Min -0.998451 eval/Num Paths 1 eval/Average Returns 1552.67 eval/normalized_score 48.3304 time/evaluation sampling (s) 0.874772 time/logging (s) 0.00220545 time/sampling batch (s) 0.263866 time/saving (s) 0.00283075 time/training (s) 4.14527 time/epoch (s) 5.28895 time/total (s) 34797 Epoch -160 ---------------------------------- --------------- 2022-05-10 22:50:42.318063 PDT | [0] Epoch -159 finished ---------------------------------- --------------- epoch -159 replay_buffer/size 999996 trainer/num train calls 842000 trainer/Policy Loss -2.13489 trainer/Log Pis Mean 2.21025 trainer/Log Pis Std 2.70406 trainer/Log Pis Max 11.8811 trainer/Log Pis Min -5.0693 trainer/policy/mean Mean 0.16819 trainer/policy/mean Std 0.617866 trainer/policy/mean Max 0.998599 trainer/policy/mean Min -0.997408 trainer/policy/normal/std Mean 0.385875 trainer/policy/normal/std Std 0.181863 trainer/policy/normal/std Max 0.962976 trainer/policy/normal/std Min 0.0675329 trainer/policy/normal/log_std Mean -1.09697 trainer/policy/normal/log_std Std 0.58318 trainer/policy/normal/log_std Max -0.0377272 trainer/policy/normal/log_std Min -2.69514 eval/num steps total 555735 eval/num paths total 1071 eval/path length Mean 458.5 eval/path length Std 48.5 eval/path length Max 507 eval/path length Min 410 eval/Rewards 
Mean 3.10903 eval/Rewards Std 0.786446 eval/Rewards Max 4.78359 eval/Rewards Min 0.978335 eval/Returns Mean 1425.49 eval/Returns Std 165.442 eval/Returns Max 1590.93 eval/Returns Min 1260.05 eval/Actions Mean 0.158677 eval/Actions Std 0.593607 eval/Actions Max 0.998551 eval/Actions Min -0.998857 eval/Num Paths 2 eval/Average Returns 1425.49 eval/normalized_score 44.4225 time/evaluation sampling (s) 0.88088 time/logging (s) 0.00314526 time/sampling batch (s) 0.262585 time/saving (s) 0.0028068 time/training (s) 4.14001 time/epoch (s) 5.28943 time/total (s) 34802.2 Epoch -159 ---------------------------------- --------------- 2022-05-10 22:50:47.637771 PDT | [0] Epoch -158 finished ---------------------------------- --------------- epoch -158 replay_buffer/size 999996 trainer/num train calls 843000 trainer/Policy Loss -2.45406 trainer/Log Pis Mean 2.48166 trainer/Log Pis Std 2.74613 trainer/Log Pis Max 15.0094 trainer/Log Pis Min -6.11096 trainer/policy/mean Mean 0.120457 trainer/policy/mean Std 0.622514 trainer/policy/mean Max 0.998461 trainer/policy/mean Min -0.999965 trainer/policy/normal/std Mean 0.376871 trainer/policy/normal/std Std 0.184933 trainer/policy/normal/std Max 0.931632 trainer/policy/normal/std Min 0.0670394 trainer/policy/normal/log_std Mean -1.12716 trainer/policy/normal/log_std Std 0.590512 trainer/policy/normal/log_std Max -0.0708172 trainer/policy/normal/log_std Min -2.70248 eval/num steps total 556701 eval/num paths total 1074 eval/path length Mean 322 eval/path length Std 137.799 eval/path length Max 509 eval/path length Min 181 eval/Rewards Mean 2.90005 eval/Rewards Std 0.924656 eval/Rewards Max 5.22992 eval/Rewards Min 0.978565 eval/Returns Mean 933.817 eval/Returns Std 466.441 eval/Returns Max 1555.3 eval/Returns Min 431.592 eval/Actions Mean 0.113864 eval/Actions Std 0.549015 eval/Actions Max 0.997588 eval/Actions Min -0.997614 eval/Num Paths 3 eval/Average Returns 933.817 eval/normalized_score 29.3154 time/evaluation sampling (s) 0.869357 
time/logging (s) 0.00349306 time/sampling batch (s) 0.262656 time/saving (s) 0.00283753 time/training (s) 4.15773 time/epoch (s) 5.29608 time/total (s) 34807.5 Epoch -158 ---------------------------------- --------------- 2022-05-10 22:50:52.972242 PDT | [0] Epoch -157 finished ---------------------------------- --------------- epoch -157 replay_buffer/size 999996 trainer/num train calls 844000 trainer/Policy Loss -2.30979 trainer/Log Pis Mean 2.18325 trainer/Log Pis Std 2.71026 trainer/Log Pis Max 10.6674 trainer/Log Pis Min -6.22753 trainer/policy/mean Mean 0.133144 trainer/policy/mean Std 0.62536 trainer/policy/mean Max 0.997077 trainer/policy/mean Min -0.998812 trainer/policy/normal/std Mean 0.378935 trainer/policy/normal/std Std 0.182455 trainer/policy/normal/std Max 0.945601 trainer/policy/normal/std Min 0.0642201 trainer/policy/normal/log_std Mean -1.11865 trainer/policy/normal/log_std Std 0.586962 trainer/policy/normal/log_std Max -0.0559343 trainer/policy/normal/log_std Min -2.74544 eval/num steps total 557615 eval/num paths total 1076 eval/path length Mean 457 eval/path length Std 43 eval/path length Max 500 eval/path length Min 414 eval/Rewards Mean 3.07616 eval/Rewards Std 0.773457 eval/Rewards Max 4.8288 eval/Rewards Min 0.975819 eval/Returns Mean 1405.8 eval/Returns Std 134.098 eval/Returns Max 1539.9 eval/Returns Min 1271.71 eval/Actions Mean 0.14031 eval/Actions Std 0.589591 eval/Actions Max 0.997969 eval/Actions Min -0.99704 eval/Num Paths 2 eval/Average Returns 1405.8 eval/normalized_score 43.8176 time/evaluation sampling (s) 0.869268 time/logging (s) 0.00335859 time/sampling batch (s) 0.263607 time/saving (s) 0.00279031 time/training (s) 4.17185 time/epoch (s) 5.31087 time/total (s) 34812.9 Epoch -157 ---------------------------------- --------------- 2022-05-10 22:50:58.271035 PDT | [0] Epoch -156 finished ---------------------------------- --------------- epoch -156 replay_buffer/size 999996 trainer/num train calls 845000 trainer/Policy Loss 
-2.18858 trainer/Log Pis Mean 2.21976 trainer/Log Pis Std 2.59608 trainer/Log Pis Max 9.55245 trainer/Log Pis Min -4.64043 trainer/policy/mean Mean 0.124251 trainer/policy/mean Std 0.613008 trainer/policy/mean Max 0.997819 trainer/policy/mean Min -0.998565 trainer/policy/normal/std Mean 0.37185 trainer/policy/normal/std Std 0.177867 trainer/policy/normal/std Max 0.884583 trainer/policy/normal/std Min 0.0645511 trainer/policy/normal/log_std Mean -1.13848 trainer/policy/normal/log_std Std 0.591746 trainer/policy/normal/log_std Max -0.122639 trainer/policy/normal/log_std Min -2.7403 eval/num steps total 558502 eval/num paths total 1078 eval/path length Mean 443.5 eval/path length Std 36.5 eval/path length Max 480 eval/path length Min 407 eval/Rewards Mean 3.14521 eval/Rewards Std 0.774681 eval/Rewards Max 4.87672 eval/Rewards Min 0.979343 eval/Returns Mean 1394.9 eval/Returns Std 138.671 eval/Returns Max 1533.57 eval/Returns Min 1256.23 eval/Actions Mean 0.147726 eval/Actions Std 0.594062 eval/Actions Max 0.998128 eval/Actions Min -0.998423 eval/Num Paths 2 eval/Average Returns 1394.9 eval/normalized_score 43.4826 time/evaluation sampling (s) 0.87314 time/logging (s) 0.00320645 time/sampling batch (s) 0.26182 time/saving (s) 0.00277457 time/training (s) 4.13445 time/epoch (s) 5.27539 time/total (s) 34818.1 Epoch -156 ---------------------------------- --------------- 2022-05-10 22:51:03.584971 PDT | [0] Epoch -155 finished ---------------------------------- --------------- epoch -155 replay_buffer/size 999996 trainer/num train calls 846000 trainer/Policy Loss -2.27512 trainer/Log Pis Mean 2.33628 trainer/Log Pis Std 2.54485 trainer/Log Pis Max 11.5204 trainer/Log Pis Min -4.88655 trainer/policy/mean Mean 0.160226 trainer/policy/mean Std 0.62026 trainer/policy/mean Max 0.99648 trainer/policy/mean Min -0.99844 trainer/policy/normal/std Mean 0.386216 trainer/policy/normal/std Std 0.19013 trainer/policy/normal/std Max 1.05008 trainer/policy/normal/std Min 0.06208 
trainer/policy/normal/log_std Mean -1.10549 trainer/policy/normal/log_std Std 0.598456 trainer/policy/normal/log_std Max 0.0488677 trainer/policy/normal/log_std Min -2.77933 eval/num steps total 559105 eval/num paths total 1079 eval/path length Mean 603 eval/path length Std 0 eval/path length Max 603 eval/path length Min 603 eval/Rewards Mean 3.20614 eval/Rewards Std 0.74363 eval/Rewards Max 5.2903 eval/Rewards Min 0.98195 eval/Returns Mean 1933.3 eval/Returns Std 0 eval/Returns Max 1933.3 eval/Returns Min 1933.3 eval/Actions Mean 0.142399 eval/Actions Std 0.600517 eval/Actions Max 0.998135 eval/Actions Min -0.997861 eval/Num Paths 1 eval/Average Returns 1933.3 eval/normalized_score 60.0256 time/evaluation sampling (s) 0.865945 time/logging (s) 0.00253134 time/sampling batch (s) 0.263037 time/saving (s) 0.00279532 time/training (s) 4.15587 time/epoch (s) 5.29018 time/total (s) 34823.4 Epoch -155 ---------------------------------- --------------- 2022-05-10 22:51:08.883398 PDT | [0] Epoch -154 finished ---------------------------------- --------------- epoch -154 replay_buffer/size 999996 trainer/num train calls 847000 trainer/Policy Loss -2.20737 trainer/Log Pis Mean 2.30352 trainer/Log Pis Std 2.58141 trainer/Log Pis Max 10.1045 trainer/Log Pis Min -4.3109 trainer/policy/mean Mean 0.143191 trainer/policy/mean Std 0.624312 trainer/policy/mean Max 0.996617 trainer/policy/mean Min -0.998473 trainer/policy/normal/std Mean 0.380719 trainer/policy/normal/std Std 0.179028 trainer/policy/normal/std Max 0.957224 trainer/policy/normal/std Min 0.063042 trainer/policy/normal/log_std Mean -1.10806 trainer/policy/normal/log_std Std 0.577128 trainer/policy/normal/log_std Max -0.0437174 trainer/policy/normal/log_std Min -2.76395 eval/num steps total 559592 eval/num paths total 1080 eval/path length Mean 487 eval/path length Std 0 eval/path length Max 487 eval/path length Min 487 eval/Rewards Mean 3.22483 eval/Rewards Std 0.798281 eval/Rewards Max 4.87396 eval/Rewards Min 0.977115 
eval/Returns Mean 1570.49 eval/Returns Std 0 eval/Returns Max 1570.49 eval/Returns Min 1570.49 eval/Actions Mean 0.143144 eval/Actions Std 0.595609 eval/Actions Max 0.997462 eval/Actions Min -0.998481 eval/Num Paths 1 eval/Average Returns 1570.49 eval/normalized_score 48.8779 time/evaluation sampling (s) 0.864026 time/logging (s) 0.00211381 time/sampling batch (s) 0.262571 time/saving (s) 0.00278975 time/training (s) 4.14311 time/epoch (s) 5.27461 time/total (s) 34828.7 Epoch -154 ---------------------------------- --------------- 2022-05-10 22:51:14.263651 PDT | [0] Epoch -153 finished ---------------------------------- --------------- epoch -153 replay_buffer/size 999996 trainer/num train calls 848000 trainer/Policy Loss -2.28653 trainer/Log Pis Mean 2.2118 trainer/Log Pis Std 2.66363 trainer/Log Pis Max 8.95605 trainer/Log Pis Min -6.08355 trainer/policy/mean Mean 0.143249 trainer/policy/mean Std 0.616715 trainer/policy/mean Max 0.998595 trainer/policy/mean Min -0.998742 trainer/policy/normal/std Mean 0.382754 trainer/policy/normal/std Std 0.187135 trainer/policy/normal/std Max 1.1016 trainer/policy/normal/std Min 0.0713109 trainer/policy/normal/log_std Mean -1.11385 trainer/policy/normal/log_std Std 0.597683 trainer/policy/normal/log_std Max 0.0967624 trainer/policy/normal/log_std Min -2.64071 eval/num steps total 560092 eval/num paths total 1081 eval/path length Mean 500 eval/path length Std 0 eval/path length Max 500 eval/path length Min 500 eval/Rewards Mean 3.14469 eval/Rewards Std 0.762042 eval/Rewards Max 4.71352 eval/Rewards Min 0.983783 eval/Returns Mean 1572.35 eval/Returns Std 0 eval/Returns Max 1572.35 eval/Returns Min 1572.35 eval/Actions Mean 0.159244 eval/Actions Std 0.60324 eval/Actions Max 0.998316 eval/Actions Min -0.997277 eval/Num Paths 1 eval/Average Returns 1572.35 eval/normalized_score 48.9348 time/evaluation sampling (s) 0.950206 time/logging (s) 0.00222998 time/sampling batch (s) 0.263217 time/saving (s) 0.00290628 time/training (s) 
4.13828 time/epoch (s) 5.35684 time/total (s) 34834.1 Epoch -153 ---------------------------------- --------------- 2022-05-10 22:51:19.560531 PDT | [0] Epoch -152 finished ---------------------------------- --------------- epoch -152 replay_buffer/size 999996 trainer/num train calls 849000 trainer/Policy Loss -2.17624 trainer/Log Pis Mean 2.28158 trainer/Log Pis Std 2.47906 trainer/Log Pis Max 9.47277 trainer/Log Pis Min -5.37696 trainer/policy/mean Mean 0.202447 trainer/policy/mean Std 0.610539 trainer/policy/mean Max 0.998135 trainer/policy/mean Min -0.997035 trainer/policy/normal/std Mean 0.374927 trainer/policy/normal/std Std 0.181291 trainer/policy/normal/std Max 1.05746 trainer/policy/normal/std Min 0.0632029 trainer/policy/normal/log_std Mean -1.13437 trainer/policy/normal/log_std Std 0.600931 trainer/policy/normal/log_std Max 0.0558728 trainer/policy/normal/log_std Min -2.7614 eval/num steps total 560733 eval/num paths total 1082 eval/path length Mean 641 eval/path length Std 0 eval/path length Max 641 eval/path length Min 641 eval/Rewards Mean 3.25471 eval/Rewards Std 0.781688 eval/Rewards Max 4.95133 eval/Rewards Min 0.978378 eval/Returns Mean 2086.27 eval/Returns Std 0 eval/Returns Max 2086.27 eval/Returns Min 2086.27 eval/Actions Mean 0.164617 eval/Actions Std 0.592607 eval/Actions Max 0.998644 eval/Actions Min -0.998706 eval/Num Paths 1 eval/Average Returns 2086.27 eval/normalized_score 64.7255 time/evaluation sampling (s) 0.863507 time/logging (s) 0.00251223 time/sampling batch (s) 0.26217 time/saving (s) 0.00282411 time/training (s) 4.14297 time/epoch (s) 5.27398 time/total (s) 34839.3 Epoch -152 ---------------------------------- --------------- 2022-05-10 22:51:25.895013 PDT | [0] Epoch -151 finished ---------------------------------- --------------- epoch -151 replay_buffer/size 999996 trainer/num train calls 850000 trainer/Policy Loss -2.0912 trainer/Log Pis Mean 2.23399 trainer/Log Pis Std 2.56773 trainer/Log Pis Max 9.48212 trainer/Log Pis Min 
-5.08265 trainer/policy/mean Mean 0.176235 trainer/policy/mean Std 0.607064 trainer/policy/mean Max 0.999133 trainer/policy/mean Min -0.998322 trainer/policy/normal/std Mean 0.377287 trainer/policy/normal/std Std 0.183605 trainer/policy/normal/std Max 0.907078 trainer/policy/normal/std Min 0.0673575 trainer/policy/normal/log_std Mean -1.12708 trainer/policy/normal/log_std Std 0.59514 trainer/policy/normal/log_std Max -0.0975273 trainer/policy/normal/log_std Min -2.69774 eval/num steps total 561453 eval/num paths total 1083 eval/path length Mean 720 eval/path length Std 0 eval/path length Max 720 eval/path length Min 720 eval/Rewards Mean 3.29764 eval/Rewards Std 0.743375 eval/Rewards Max 5.05766 eval/Rewards Min 0.985498 eval/Returns Mean 2374.3 eval/Returns Std 0 eval/Returns Max 2374.3 eval/Returns Min 2374.3 eval/Actions Mean 0.153993 eval/Actions Std 0.59579 eval/Actions Max 0.99756 eval/Actions Min -0.999344 eval/Num Paths 1 eval/Average Returns 2374.3 eval/normalized_score 73.5758 time/evaluation sampling (s) 0.971783 time/logging (s) 0.00290351 time/sampling batch (s) 0.300256 time/saving (s) 0.00326418 time/training (s) 5.03314 time/epoch (s) 6.31135 time/total (s) 34845.7 Epoch -151 ---------------------------------- --------------- 2022-05-10 22:51:32.310532 PDT | [0] Epoch -150 finished ---------------------------------- --------------- epoch -150 replay_buffer/size 999996 trainer/num train calls 851000 trainer/Policy Loss -2.30774 trainer/Log Pis Mean 2.19936 trainer/Log Pis Std 2.64239 trainer/Log Pis Max 10.8387 trainer/Log Pis Min -5.81356 trainer/policy/mean Mean 0.146473 trainer/policy/mean Std 0.616038 trainer/policy/mean Max 0.998687 trainer/policy/mean Min -0.998451 trainer/policy/normal/std Mean 0.373912 trainer/policy/normal/std Std 0.181532 trainer/policy/normal/std Max 0.95056 trainer/policy/normal/std Min 0.0717976 trainer/policy/normal/log_std Mean -1.13358 trainer/policy/normal/log_std Std 0.588693 trainer/policy/normal/log_std Max 
-0.0507038 trainer/policy/normal/log_std Min -2.6339 eval/num steps total 561969 eval/num paths total 1084 eval/path length Mean 516 eval/path length Std 0 eval/path length Max 516 eval/path length Min 516 eval/Rewards Mean 3.12665 eval/Rewards Std 0.801533 eval/Rewards Max 5.24385 eval/Rewards Min 0.986937 eval/Returns Mean 1613.35 eval/Returns Std 0 eval/Returns Max 1613.35 eval/Returns Min 1613.35 eval/Actions Mean 0.15014 eval/Actions Std 0.592374 eval/Actions Max 0.997917 eval/Actions Min -0.997532 eval/Num Paths 1 eval/Average Returns 1613.35 eval/normalized_score 50.1948 time/evaluation sampling (s) 1.04996 time/logging (s) 0.00229009 time/sampling batch (s) 0.30048 time/saving (s) 0.002968 time/training (s) 5.03495 time/epoch (s) 6.39065 time/total (s) 34852.1 Epoch -150 ---------------------------------- --------------- 2022-05-10 22:51:38.734847 PDT | [0] Epoch -149 finished ---------------------------------- --------------- epoch -149 replay_buffer/size 999996 trainer/num train calls 852000 trainer/Policy Loss -2.17823 trainer/Log Pis Mean 2.27368 trainer/Log Pis Std 2.62709 trainer/Log Pis Max 9.63068 trainer/Log Pis Min -5.14002 trainer/policy/mean Mean 0.147446 trainer/policy/mean Std 0.621704 trainer/policy/mean Max 0.998952 trainer/policy/mean Min -0.9984 trainer/policy/normal/std Mean 0.378672 trainer/policy/normal/std Std 0.177815 trainer/policy/normal/std Max 0.912266 trainer/policy/normal/std Min 0.0698263 trainer/policy/normal/log_std Mean -1.11207 trainer/policy/normal/log_std Std 0.572281 trainer/policy/normal/log_std Max -0.0918233 trainer/policy/normal/log_std Min -2.66174 eval/num steps total 562465 eval/num paths total 1085 eval/path length Mean 496 eval/path length Std 0 eval/path length Max 496 eval/path length Min 496 eval/Rewards Mean 3.05068 eval/Rewards Std 0.774944 eval/Rewards Max 4.7234 eval/Rewards Min 0.986258 eval/Returns Mean 1513.14 eval/Returns Std 0 eval/Returns Max 1513.14 eval/Returns Min 1513.14 eval/Actions Mean 
0.157923 eval/Actions Std 0.587437 eval/Actions Max 0.997498 eval/Actions Min -0.997031 eval/Num Paths 1 eval/Average Returns 1513.14 eval/normalized_score 47.1156 time/evaluation sampling (s) 1.0192 time/logging (s) 0.00226775 time/sampling batch (s) 0.301871 time/saving (s) 0.00293323 time/training (s) 5.07398 time/epoch (s) 6.40025 time/total (s) 34858.5 Epoch -149 ---------------------------------- --------------- 2022-05-10 22:51:45.154230 PDT | [0] Epoch -148 finished ---------------------------------- --------------- epoch -148 replay_buffer/size 999996 trainer/num train calls 853000 trainer/Policy Loss -2.24103 trainer/Log Pis Mean 2.33512 trainer/Log Pis Std 2.61623 trainer/Log Pis Max 12.8776 trainer/Log Pis Min -3.66832 trainer/policy/mean Mean 0.134244 trainer/policy/mean Std 0.614696 trainer/policy/mean Max 0.997858 trainer/policy/mean Min -0.998953 trainer/policy/normal/std Mean 0.37645 trainer/policy/normal/std Std 0.182164 trainer/policy/normal/std Max 0.953087 trainer/policy/normal/std Min 0.0629386 trainer/policy/normal/log_std Mean -1.12836 trainer/policy/normal/log_std Std 0.594203 trainer/policy/normal/log_std Max -0.0480494 trainer/policy/normal/log_std Min -2.7656 eval/num steps total 563462 eval/num paths total 1087 eval/path length Mean 498.5 eval/path length Std 17.5 eval/path length Max 516 eval/path length Min 481 eval/Rewards Mean 3.14203 eval/Rewards Std 0.846481 eval/Rewards Max 5.4276 eval/Rewards Min 0.980141 eval/Returns Mean 1566.3 eval/Returns Std 38.9395 eval/Returns Max 1605.24 eval/Returns Min 1527.36 eval/Actions Mean 0.15766 eval/Actions Std 0.591857 eval/Actions Max 0.998344 eval/Actions Min -0.998788 eval/Num Paths 2 eval/Average Returns 1566.3 eval/normalized_score 48.7491 time/evaluation sampling (s) 1.02779 time/logging (s) 0.00356511 time/sampling batch (s) 0.301283 time/saving (s) 0.00300634 time/training (s) 5.06081 time/epoch (s) 6.39645 time/total (s) 34864.9 Epoch -148 ---------------------------------- 
--------------- 2022-05-10 22:51:51.594577 PDT | [0] Epoch -147 finished ---------------------------------- --------------- epoch -147 replay_buffer/size 999996 trainer/num train calls 854000 trainer/Policy Loss -2.02063 trainer/Log Pis Mean 1.78513 trainer/Log Pis Std 2.335 trainer/Log Pis Max 8.10624 trainer/Log Pis Min -5.49079 trainer/policy/mean Mean 0.148055 trainer/policy/mean Std 0.588089 trainer/policy/mean Max 0.998331 trainer/policy/mean Min -0.997482 trainer/policy/normal/std Mean 0.377085 trainer/policy/normal/std Std 0.189216 trainer/policy/normal/std Max 0.963023 trainer/policy/normal/std Min 0.0699244 trainer/policy/normal/log_std Mean -1.13495 trainer/policy/normal/log_std Std 0.60728 trainer/policy/normal/log_std Max -0.0376782 trainer/policy/normal/log_std Min -2.66034 eval/num steps total 564461 eval/num paths total 1089 eval/path length Mean 499.5 eval/path length Std 3.5 eval/path length Max 503 eval/path length Min 496 eval/Rewards Mean 3.09695 eval/Rewards Std 0.76909 eval/Rewards Max 4.76384 eval/Rewards Min 0.981048 eval/Returns Mean 1546.92 eval/Returns Std 23.3163 eval/Returns Max 1570.24 eval/Returns Min 1523.61 eval/Actions Mean 0.151329 eval/Actions Std 0.590158 eval/Actions Max 0.997838 eval/Actions Min -0.998266 eval/Num Paths 2 eval/Average Returns 1546.92 eval/normalized_score 48.1537 time/evaluation sampling (s) 1.04493 time/logging (s) 0.00361662 time/sampling batch (s) 0.301942 time/saving (s) 0.00302074 time/training (s) 5.06269 time/epoch (s) 6.4162 time/total (s) 34871.3 Epoch -147 ---------------------------------- --------------- 2022-05-10 22:51:57.629204 PDT | [0] Epoch -146 finished ---------------------------------- --------------- epoch -146 replay_buffer/size 999996 trainer/num train calls 855000 trainer/Policy Loss -2.45124 trainer/Log Pis Mean 2.41703 trainer/Log Pis Std 2.61932 trainer/Log Pis Max 9.39292 trainer/Log Pis Min -7.26027 trainer/policy/mean Mean 0.116937 trainer/policy/mean Std 0.645178 
trainer/policy/mean Max 0.99458 trainer/policy/mean Min -0.999763 trainer/policy/normal/std Mean 0.377242 trainer/policy/normal/std Std 0.181487 trainer/policy/normal/std Max 0.912629 trainer/policy/normal/std Min 0.0638908 trainer/policy/normal/log_std Mean -1.12572 trainer/policy/normal/log_std Std 0.594829 trainer/policy/normal/log_std Max -0.0914254 trainer/policy/normal/log_std Min -2.75058 eval/num steps total 565020 eval/num paths total 1090 eval/path length Mean 559 eval/path length Std 0 eval/path length Max 559 eval/path length Min 559 eval/Rewards Mean 3.19886 eval/Rewards Std 0.771397 eval/Rewards Max 4.67444 eval/Rewards Min 0.984592 eval/Returns Mean 1788.17 eval/Returns Std 0 eval/Returns Max 1788.17 eval/Returns Min 1788.17 eval/Actions Mean 0.140133 eval/Actions Std 0.605001 eval/Actions Max 0.998544 eval/Actions Min -0.998727 eval/Num Paths 1 eval/Average Returns 1788.17 eval/normalized_score 55.5661 time/evaluation sampling (s) 1.04339 time/logging (s) 0.00245288 time/sampling batch (s) 0.285618 time/saving (s) 0.00300231 time/training (s) 4.67524 time/epoch (s) 6.00971 time/total (s) 34877.3 Epoch -146 ---------------------------------- --------------- 2022-05-10 22:52:02.935305 PDT | [0] Epoch -145 finished ---------------------------------- --------------- epoch -145 replay_buffer/size 999996 trainer/num train calls 856000 trainer/Policy Loss -2.60732 trainer/Log Pis Mean 2.34847 trainer/Log Pis Std 2.59698 trainer/Log Pis Max 10.0384 trainer/Log Pis Min -3.19846 trainer/policy/mean Mean 0.169845 trainer/policy/mean Std 0.620136 trainer/policy/mean Max 0.999271 trainer/policy/mean Min -0.998985 trainer/policy/normal/std Mean 0.380462 trainer/policy/normal/std Std 0.185811 trainer/policy/normal/std Max 1.00175 trainer/policy/normal/std Min 0.0707326 trainer/policy/normal/log_std Mean -1.11933 trainer/policy/normal/log_std Std 0.5964 trainer/policy/normal/log_std Max 0.00174465 trainer/policy/normal/log_std Min -2.64885 eval/num steps total 
565565 eval/num paths total 1091 eval/path length Mean 545 eval/path length Std 0 eval/path length Max 545 eval/path length Min 545 eval/Rewards Mean 3.20712 eval/Rewards Std 0.825049 eval/Rewards Max 5.10848 eval/Rewards Min 0.98472 eval/Returns Mean 1747.88 eval/Returns Std 0 eval/Returns Max 1747.88 eval/Returns Min 1747.88 eval/Actions Mean 0.147918 eval/Actions Std 0.582783 eval/Actions Max 0.998832 eval/Actions Min -0.998212 eval/Num Paths 1 eval/Average Returns 1747.88 eval/normalized_score 54.3283 time/evaluation sampling (s) 0.879541 time/logging (s) 0.0023002 time/sampling batch (s) 0.261337 time/saving (s) 0.00297147 time/training (s) 4.13595 time/epoch (s) 5.2821 time/total (s) 34882.6 Epoch -145 ---------------------------------- --------------- 2022-05-10 22:52:08.223990 PDT | [0] Epoch -144 finished ---------------------------------- --------------- epoch -144 replay_buffer/size 999996 trainer/num train calls 857000 trainer/Policy Loss -2.40886 trainer/Log Pis Mean 2.33929 trainer/Log Pis Std 2.68562 trainer/Log Pis Max 15.1872 trainer/Log Pis Min -5.24874 trainer/policy/mean Mean 0.14972 trainer/policy/mean Std 0.613383 trainer/policy/mean Max 0.999049 trainer/policy/mean Min -0.99983 trainer/policy/normal/std Mean 0.369713 trainer/policy/normal/std Std 0.184156 trainer/policy/normal/std Max 0.965422 trainer/policy/normal/std Min 0.0684354 trainer/policy/normal/log_std Mean -1.15229 trainer/policy/normal/log_std Std 0.60341 trainer/policy/normal/log_std Max -0.0351899 trainer/policy/normal/log_std Min -2.68186 eval/num steps total 566082 eval/num paths total 1092 eval/path length Mean 517 eval/path length Std 0 eval/path length Max 517 eval/path length Min 517 eval/Rewards Mean 3.18266 eval/Rewards Std 0.882581 eval/Rewards Max 5.55674 eval/Rewards Min 0.981927 eval/Returns Mean 1645.43 eval/Returns Std 0 eval/Returns Max 1645.43 eval/Returns Min 1645.43 eval/Actions Mean 0.150443 eval/Actions Std 0.57507 eval/Actions Max 0.998448 eval/Actions Min 
-0.998279 eval/Num Paths 1 eval/Average Returns 1645.43 eval/normalized_score 51.1805 time/evaluation sampling (s) 0.869242 time/logging (s) 0.00222738 time/sampling batch (s) 0.260872 time/saving (s) 0.00283612 time/training (s) 4.1297 time/epoch (s) 5.26487 time/total (s) 34887.8 Epoch -144 ---------------------------------- --------------- 2022-05-10 22:52:13.532663 PDT | [0] Epoch -143 finished ---------------------------------- --------------- epoch -143 replay_buffer/size 999996 trainer/num train calls 858000 trainer/Policy Loss -2.22486 trainer/Log Pis Mean 2.06271 trainer/Log Pis Std 2.46875 trainer/Log Pis Max 10.4661 trainer/Log Pis Min -7.64282 trainer/policy/mean Mean 0.174796 trainer/policy/mean Std 0.61204 trainer/policy/mean Max 0.998589 trainer/policy/mean Min -0.998498 trainer/policy/normal/std Mean 0.385428 trainer/policy/normal/std Std 0.179913 trainer/policy/normal/std Max 0.984793 trainer/policy/normal/std Min 0.0673639 trainer/policy/normal/log_std Mean -1.09566 trainer/policy/normal/log_std Std 0.577858 trainer/policy/normal/log_std Max -0.0153233 trainer/policy/normal/log_std Min -2.69765 eval/num steps total 567065 eval/num paths total 1094 eval/path length Mean 491.5 eval/path length Std 10.5 eval/path length Max 502 eval/path length Min 481 eval/Rewards Mean 3.13528 eval/Rewards Std 0.806435 eval/Rewards Max 4.75261 eval/Rewards Min 0.981199 eval/Returns Mean 1540.99 eval/Returns Std 7.87394 eval/Returns Max 1548.86 eval/Returns Min 1533.12 eval/Actions Mean 0.153711 eval/Actions Std 0.59768 eval/Actions Max 0.997614 eval/Actions Min -0.99863 eval/Num Paths 2 eval/Average Returns 1540.99 eval/normalized_score 47.9714 time/evaluation sampling (s) 0.866523 time/logging (s) 0.00375602 time/sampling batch (s) 0.262468 time/saving (s) 0.00322321 time/training (s) 4.15042 time/epoch (s) 5.28639 time/total (s) 34893.1 Epoch -143 ---------------------------------- --------------- 2022-05-10 22:52:18.816109 PDT | [0] Epoch -142 finished 
---------------------------------- --------------- epoch -142 replay_buffer/size 999996 trainer/num train calls 859000 trainer/Policy Loss -2.0344 trainer/Log Pis Mean 2.06349 trainer/Log Pis Std 2.60373 trainer/Log Pis Max 10.3148 trainer/Log Pis Min -5.85485 trainer/policy/mean Mean 0.150835 trainer/policy/mean Std 0.609442 trainer/policy/mean Max 0.998014 trainer/policy/mean Min -0.997862 trainer/policy/normal/std Mean 0.386482 trainer/policy/normal/std Std 0.186933 trainer/policy/normal/std Max 0.974348 trainer/policy/normal/std Min 0.0680953 trainer/policy/normal/log_std Mean -1.10141 trainer/policy/normal/log_std Std 0.593338 trainer/policy/normal/log_std Max -0.0259865 trainer/policy/normal/log_std Min -2.68685 eval/num steps total 567622 eval/num paths total 1095 eval/path length Mean 557 eval/path length Std 0 eval/path length Max 557 eval/path length Min 557 eval/Rewards Mean 3.19772 eval/Rewards Std 0.813722 eval/Rewards Max 4.9773 eval/Rewards Min 0.98579 eval/Returns Mean 1781.13 eval/Returns Std 0 eval/Returns Max 1781.13 eval/Returns Min 1781.13 eval/Actions Mean 0.161518 eval/Actions Std 0.581497 eval/Actions Max 0.99833 eval/Actions Min -0.998707 eval/Num Paths 1 eval/Average Returns 1781.13 eval/normalized_score 55.3499 time/evaluation sampling (s) 0.876381 time/logging (s) 0.00227453 time/sampling batch (s) 0.26113 time/saving (s) 0.00280908 time/training (s) 4.11539 time/epoch (s) 5.25798 time/total (s) 34898.4 Epoch -142 ---------------------------------- --------------- 2022-05-10 22:52:24.132393 PDT | [0] Epoch -141 finished ---------------------------------- --------------- epoch -141 replay_buffer/size 999996 trainer/num train calls 860000 trainer/Policy Loss -2.21159 trainer/Log Pis Mean 2.27656 trainer/Log Pis Std 2.58532 trainer/Log Pis Max 9.84691 trainer/Log Pis Min -4.06542 trainer/policy/mean Mean 0.141434 trainer/policy/mean Std 0.61646 trainer/policy/mean Max 0.998456 trainer/policy/mean Min -0.997829 trainer/policy/normal/std Mean 
0.374529 trainer/policy/normal/std Std 0.186429 trainer/policy/normal/std Max 0.979232 trainer/policy/normal/std Min 0.0665849 trainer/policy/normal/log_std Mean -1.14078 trainer/policy/normal/log_std Std 0.608465 trainer/policy/normal/log_std Max -0.0209869 trainer/policy/normal/log_std Min -2.70928 eval/num steps total 568192 eval/num paths total 1096 eval/path length Mean 570 eval/path length Std 0 eval/path length Max 570 eval/path length Min 570 eval/Rewards Mean 3.21375 eval/Rewards Std 0.803858 eval/Rewards Max 4.69422 eval/Rewards Min 0.982157 eval/Returns Mean 1831.84 eval/Returns Std 0 eval/Returns Max 1831.84 eval/Returns Min 1831.84 eval/Actions Mean 0.150876 eval/Actions Std 0.59924 eval/Actions Max 0.997486 eval/Actions Min -0.998919 eval/Num Paths 1 eval/Average Returns 1831.84 eval/normalized_score 56.908 time/evaluation sampling (s) 0.869727 time/logging (s) 0.00240789 time/sampling batch (s) 0.262367 time/saving (s) 0.00280713 time/training (s) 4.15549 time/epoch (s) 5.2928 time/total (s) 34903.7 Epoch -141 ---------------------------------- --------------- 2022-05-10 22:52:29.466775 PDT | [0] Epoch -140 finished ---------------------------------- --------------- epoch -140 replay_buffer/size 999996 trainer/num train calls 861000 trainer/Policy Loss -2.26774 trainer/Log Pis Mean 2.27133 trainer/Log Pis Std 2.44591 trainer/Log Pis Max 9.8623 trainer/Log Pis Min -4.78183 trainer/policy/mean Mean 0.197077 trainer/policy/mean Std 0.596265 trainer/policy/mean Max 0.997433 trainer/policy/mean Min -0.996843 trainer/policy/normal/std Mean 0.377852 trainer/policy/normal/std Std 0.186358 trainer/policy/normal/std Max 0.983023 trainer/policy/normal/std Min 0.0657977 trainer/policy/normal/log_std Mean -1.13349 trainer/policy/normal/log_std Std 0.614793 trainer/policy/normal/log_std Max -0.0171226 trainer/policy/normal/log_std Min -2.72117 eval/num steps total 569011 eval/num paths total 1098 eval/path length Mean 409.5 eval/path length Std 2.5 eval/path 
length Max 412 eval/path length Min 407 eval/Rewards Mean 3.07721 eval/Rewards Std 0.821023 eval/Rewards Max 4.81466 eval/Rewards Min 0.988768 eval/Returns Mean 1260.12 eval/Returns Std 8.46491 eval/Returns Max 1268.58 eval/Returns Min 1251.65 eval/Actions Mean 0.149248 eval/Actions Std 0.594522 eval/Actions Max 0.998165 eval/Actions Min -0.999217 eval/Num Paths 2 eval/Average Returns 1260.12 eval/normalized_score 39.3412 time/evaluation sampling (s) 0.862832 time/logging (s) 0.00315819 time/sampling batch (s) 0.264458 time/saving (s) 0.00297641 time/training (s) 4.17797 time/epoch (s) 5.31139 time/total (s) 34909 Epoch -140 ---------------------------------- --------------- 2022-05-10 22:52:34.775815 PDT | [0] Epoch -139 finished ---------------------------------- --------------- epoch -139 replay_buffer/size 999996 trainer/num train calls 862000 trainer/Policy Loss -2.43157 trainer/Log Pis Mean 2.39058 trainer/Log Pis Std 2.60402 trainer/Log Pis Max 9.57221 trainer/Log Pis Min -5.68461 trainer/policy/mean Mean 0.128649 trainer/policy/mean Std 0.6375 trainer/policy/mean Max 0.998965 trainer/policy/mean Min -0.997481 trainer/policy/normal/std Mean 0.377312 trainer/policy/normal/std Std 0.182196 trainer/policy/normal/std Max 0.975147 trainer/policy/normal/std Min 0.068111 trainer/policy/normal/log_std Mean -1.12526 trainer/policy/normal/log_std Std 0.592721 trainer/policy/normal/log_std Max -0.0251674 trainer/policy/normal/log_std Min -2.68662 eval/num steps total 569774 eval/num paths total 1100 eval/path length Mean 381.5 eval/path length Std 41.5 eval/path length Max 423 eval/path length Min 340 eval/Rewards Mean 3.10569 eval/Rewards Std 0.871085 eval/Rewards Max 4.87254 eval/Rewards Min 0.987629 eval/Returns Mean 1184.82 eval/Returns Std 156.14 eval/Returns Max 1340.96 eval/Returns Min 1028.68 eval/Actions Mean 0.113215 eval/Actions Std 0.577559 eval/Actions Max 0.99847 eval/Actions Min -0.999732 eval/Num Paths 2 eval/Average Returns 1184.82 
eval/normalized_score 37.0277 time/evaluation sampling (s) 0.871261 time/logging (s) 0.00275555 time/sampling batch (s) 0.262198 time/saving (s) 0.00270892 time/training (s) 4.146 time/epoch (s) 5.28492 time/total (s) 34914.3 Epoch -139 ---------------------------------- --------------- 2022-05-10 22:52:40.085177 PDT | [0] Epoch -138 finished ---------------------------------- --------------- epoch -138 replay_buffer/size 999996 trainer/num train calls 863000 trainer/Policy Loss -2.26468 trainer/Log Pis Mean 2.31406 trainer/Log Pis Std 2.68175 trainer/Log Pis Max 16.8138 trainer/Log Pis Min -5.94614 trainer/policy/mean Mean 0.131511 trainer/policy/mean Std 0.631753 trainer/policy/mean Max 0.998712 trainer/policy/mean Min -0.999847 trainer/policy/normal/std Mean 0.380221 trainer/policy/normal/std Std 0.184651 trainer/policy/normal/std Max 0.989867 trainer/policy/normal/std Min 0.066003 trainer/policy/normal/log_std Mean -1.1161 trainer/policy/normal/log_std Std 0.588243 trainer/policy/normal/log_std Max -0.010185 trainer/policy/normal/log_std Min -2.71806 eval/num steps total 570644 eval/num paths total 1102 eval/path length Mean 435 eval/path length Std 23 eval/path length Max 458 eval/path length Min 412 eval/Rewards Mean 3.14417 eval/Rewards Std 0.859438 eval/Rewards Max 5.25687 eval/Rewards Min 0.976955 eval/Returns Mean 1367.71 eval/Returns Std 95.6946 eval/Returns Max 1463.41 eval/Returns Min 1272.02 eval/Actions Mean 0.145284 eval/Actions Std 0.581566 eval/Actions Max 0.998233 eval/Actions Min -0.998293 eval/Num Paths 2 eval/Average Returns 1367.71 eval/normalized_score 42.6472 time/evaluation sampling (s) 0.865036 time/logging (s) 0.00316244 time/sampling batch (s) 0.261723 time/saving (s) 0.00278986 time/training (s) 4.15337 time/epoch (s) 5.28608 time/total (s) 34919.6 Epoch -138 ---------------------------------- --------------- 2022-05-10 22:52:45.411560 PDT | [0] Epoch -137 finished ---------------------------------- --------------- epoch -137 
replay_buffer/size 999996 trainer/num train calls 864000 trainer/Policy Loss -2.23722 trainer/Log Pis Mean 2.05356 trainer/Log Pis Std 2.59376 trainer/Log Pis Max 10.6703 trainer/Log Pis Min -9.53875 trainer/policy/mean Mean 0.163846 trainer/policy/mean Std 0.60684 trainer/policy/mean Max 0.997487 trainer/policy/mean Min -0.997923 trainer/policy/normal/std Mean 0.379446 trainer/policy/normal/std Std 0.18393 trainer/policy/normal/std Max 1.05737 trainer/policy/normal/std Min 0.066153 trainer/policy/normal/log_std Mean -1.11963 trainer/policy/normal/log_std Std 0.592461 trainer/policy/normal/log_std Max 0.0557859 trainer/policy/normal/log_std Min -2.71579 eval/num steps total 571076 eval/num paths total 1103 eval/path length Mean 432 eval/path length Std 0 eval/path length Max 432 eval/path length Min 432 eval/Rewards Mean 3.07662 eval/Rewards Std 0.903163 eval/Rewards Max 5.46486 eval/Rewards Min 0.983665 eval/Returns Mean 1329.1 eval/Returns Std 0 eval/Returns Max 1329.1 eval/Returns Min 1329.1 eval/Actions Mean 0.146651 eval/Actions Std 0.568408 eval/Actions Max 0.998636 eval/Actions Min -0.998064 eval/Num Paths 1 eval/Average Returns 1329.1 eval/normalized_score 41.4608 time/evaluation sampling (s) 0.88357 time/logging (s) 0.00193904 time/sampling batch (s) 0.261999 time/saving (s) 0.00276201 time/training (s) 4.15123 time/epoch (s) 5.3015 time/total (s) 34924.9 Epoch -137 ---------------------------------- --------------- 2022-05-10 22:52:50.765491 PDT | [0] Epoch -136 finished ---------------------------------- --------------- epoch -136 replay_buffer/size 999996 trainer/num train calls 865000 trainer/Policy Loss -2.2347 trainer/Log Pis Mean 2.42592 trainer/Log Pis Std 2.64348 trainer/Log Pis Max 9.6011 trainer/Log Pis Min -5.24762 trainer/policy/mean Mean 0.147305 trainer/policy/mean Std 0.61648 trainer/policy/mean Max 0.996065 trainer/policy/mean Min -0.99831 trainer/policy/normal/std Mean 0.380412 trainer/policy/normal/std Std 0.186124 
trainer/policy/normal/std Max 1.03206 trainer/policy/normal/std Min 0.0647038 trainer/policy/normal/log_std Mean -1.12037 trainer/policy/normal/log_std Std 0.599133 trainer/policy/normal/log_std Max 0.03156 trainer/policy/normal/log_std Min -2.73793 eval/num steps total 571607 eval/num paths total 1104 eval/path length Mean 531 eval/path length Std 0 eval/path length Max 531 eval/path length Min 531 eval/Rewards Mean 3.14692 eval/Rewards Std 0.865455 eval/Rewards Max 5.43846 eval/Rewards Min 0.988171 eval/Returns Mean 1671.01 eval/Returns Std 0 eval/Returns Max 1671.01 eval/Returns Min 1671.01 eval/Actions Mean 0.154559 eval/Actions Std 0.571948 eval/Actions Max 0.996467 eval/Actions Min -0.998273 eval/Num Paths 1 eval/Average Returns 1671.01 eval/normalized_score 51.9665 time/evaluation sampling (s) 0.896757 time/logging (s) 0.00239092 time/sampling batch (s) 0.263545 time/saving (s) 0.0029166 time/training (s) 4.16516 time/epoch (s) 5.33077 time/total (s) 34930.2 Epoch -136 ---------------------------------- --------------- 2022-05-10 22:52:56.100442 PDT | [0] Epoch -135 finished ---------------------------------- --------------- epoch -135 replay_buffer/size 999996 trainer/num train calls 866000 trainer/Policy Loss -1.98801 trainer/Log Pis Mean 1.94433 trainer/Log Pis Std 2.36673 trainer/Log Pis Max 9.64497 trainer/Log Pis Min -4.46951 trainer/policy/mean Mean 0.1349 trainer/policy/mean Std 0.596935 trainer/policy/mean Max 0.996268 trainer/policy/mean Min -0.99804 trainer/policy/normal/std Mean 0.378422 trainer/policy/normal/std Std 0.18108 trainer/policy/normal/std Max 1.09113 trainer/policy/normal/std Min 0.064366 trainer/policy/normal/log_std Mean -1.11558 trainer/policy/normal/log_std Std 0.576752 trainer/policy/normal/log_std Max 0.0872169 trainer/policy/normal/log_std Min -2.74317 eval/num steps total 572272 eval/num paths total 1105 eval/path length Mean 665 eval/path length Std 0 eval/path length Max 665 eval/path length Min 665 eval/Rewards Mean 3.19354 
eval/Rewards Std 0.700021 eval/Rewards Max 4.69419 eval/Rewards Min 0.98655 eval/Returns Mean 2123.7 eval/Returns Std 0 eval/Returns Max 2123.7 eval/Returns Min 2123.7 eval/Actions Mean 0.157841 eval/Actions Std 0.603024 eval/Actions Max 0.998025 eval/Actions Min -0.998856 eval/Num Paths 1 eval/Average Returns 2123.7 eval/normalized_score 65.8758 time/evaluation sampling (s) 0.884504 time/logging (s) 0.00266754 time/sampling batch (s) 0.262368 time/saving (s) 0.00287342 time/training (s) 4.15889 time/epoch (s) 5.31131 time/total (s) 34935.5 Epoch -135 ---------------------------------- --------------- 2022-05-10 22:53:01.515592 PDT | [0] Epoch -134 finished ---------------------------------- --------------- epoch -134 replay_buffer/size 999996 trainer/num train calls 867000 trainer/Policy Loss -2.26757 trainer/Log Pis Mean 2.22877 trainer/Log Pis Std 2.56381 trainer/Log Pis Max 11.7589 trainer/Log Pis Min -5.72227 trainer/policy/mean Mean 0.128206 trainer/policy/mean Std 0.621015 trainer/policy/mean Max 0.997255 trainer/policy/mean Min -0.998086 trainer/policy/normal/std Mean 0.373703 trainer/policy/normal/std Std 0.184859 trainer/policy/normal/std Max 0.957907 trainer/policy/normal/std Min 0.0739093 trainer/policy/normal/log_std Mean -1.13921 trainer/policy/normal/log_std Std 0.598491 trainer/policy/normal/log_std Max -0.0430044 trainer/policy/normal/log_std Min -2.60492 eval/num steps total 572852 eval/num paths total 1106 eval/path length Mean 580 eval/path length Std 0 eval/path length Max 580 eval/path length Min 580 eval/Rewards Mean 3.20939 eval/Rewards Std 0.781501 eval/Rewards Max 4.7472 eval/Rewards Min 0.986103 eval/Returns Mean 1861.45 eval/Returns Std 0 eval/Returns Max 1861.45 eval/Returns Min 1861.45 eval/Actions Mean 0.154111 eval/Actions Std 0.603188 eval/Actions Max 0.998481 eval/Actions Min -0.998498 eval/Num Paths 1 eval/Average Returns 1861.45 eval/normalized_score 57.8178 time/evaluation sampling (s) 0.952427 time/logging (s) 0.00249806 
time/sampling batch (s) 0.263236 time/saving (s) 0.00292822 time/training (s) 4.17013 time/epoch (s) 5.39122 time/total (s) 34940.9 Epoch -134 ---------------------------------- --------------- 2022-05-10 22:53:06.855357 PDT | [0] Epoch -133 finished ---------------------------------- --------------- epoch -133 replay_buffer/size 999996 trainer/num train calls 868000 trainer/Policy Loss -2.30901 trainer/Log Pis Mean 2.41157 trainer/Log Pis Std 2.70131 trainer/Log Pis Max 9.8614 trainer/Log Pis Min -6.53085 trainer/policy/mean Mean 0.134429 trainer/policy/mean Std 0.628611 trainer/policy/mean Max 0.998813 trainer/policy/mean Min -0.997601 trainer/policy/normal/std Mean 0.371597 trainer/policy/normal/std Std 0.183507 trainer/policy/normal/std Max 0.97941 trainer/policy/normal/std Min 0.0669424 trainer/policy/normal/log_std Mean -1.14359 trainer/policy/normal/log_std Std 0.595807 trainer/policy/normal/log_std Max -0.0208051 trainer/policy/normal/log_std Min -2.70392 eval/num steps total 573499 eval/num paths total 1107 eval/path length Mean 647 eval/path length Std 0 eval/path length Max 647 eval/path length Min 647 eval/Rewards Mean 3.19607 eval/Rewards Std 0.710738 eval/Rewards Max 4.75686 eval/Rewards Min 0.982296 eval/Returns Mean 2067.86 eval/Returns Std 0 eval/Returns Max 2067.86 eval/Returns Min 2067.86 eval/Actions Mean 0.151937 eval/Actions Std 0.602626 eval/Actions Max 0.998982 eval/Actions Min -0.998072 eval/Num Paths 1 eval/Average Returns 2067.86 eval/normalized_score 64.1599 time/evaluation sampling (s) 0.882452 time/logging (s) 0.00268949 time/sampling batch (s) 0.263843 time/saving (s) 0.00293955 time/training (s) 4.16396 time/epoch (s) 5.31588 time/total (s) 34946.2 Epoch -133 ---------------------------------- --------------- 2022-05-10 22:53:12.202903 PDT | [0] Epoch -132 finished ---------------------------------- --------------- epoch -132 replay_buffer/size 999996 trainer/num train calls 869000 trainer/Policy Loss -2.18804 trainer/Log Pis Mean 
2.17554 trainer/Log Pis Std 2.62096 trainer/Log Pis Max 9.66942 trainer/Log Pis Min -4.41409 trainer/policy/mean Mean 0.15565 trainer/policy/mean Std 0.60627 trainer/policy/mean Max 0.997859 trainer/policy/mean Min -0.998248 trainer/policy/normal/std Mean 0.378177 trainer/policy/normal/std Std 0.183877 trainer/policy/normal/std Max 0.943408 trainer/policy/normal/std Min 0.0667844 trainer/policy/normal/log_std Mean -1.12433 trainer/policy/normal/log_std Std 0.5957 trainer/policy/normal/log_std Max -0.0582568 trainer/policy/normal/log_std Min -2.70629 eval/num steps total 574001 eval/num paths total 1108 eval/path length Mean 502 eval/path length Std 0 eval/path length Max 502 eval/path length Min 502 eval/Rewards Mean 3.08563 eval/Rewards Std 0.766997 eval/Rewards Max 4.78641 eval/Rewards Min 0.97901 eval/Returns Mean 1548.98 eval/Returns Std 0 eval/Returns Max 1548.98 eval/Returns Min 1548.98 eval/Actions Mean 0.151728 eval/Actions Std 0.583546 eval/Actions Max 0.997482 eval/Actions Min -0.999123 eval/Num Paths 1 eval/Average Returns 1548.98 eval/normalized_score 48.217 time/evaluation sampling (s) 0.860917 time/logging (s) 0.00217781 time/sampling batch (s) 0.264322 time/saving (s) 0.00289871 time/training (s) 4.19285 time/epoch (s) 5.32317 time/total (s) 34951.6 Epoch -132 ---------------------------------- --------------- 2022-05-10 22:53:17.537271 PDT | [0] Epoch -131 finished ---------------------------------- --------------- epoch -131 replay_buffer/size 999996 trainer/num train calls 870000 trainer/Policy Loss -2.21533 trainer/Log Pis Mean 2.23875 trainer/Log Pis Std 2.55854 trainer/Log Pis Max 9.08368 trainer/Log Pis Min -4.97502 trainer/policy/mean Mean 0.142235 trainer/policy/mean Std 0.612525 trainer/policy/mean Max 0.997099 trainer/policy/mean Min -0.997431 trainer/policy/normal/std Mean 0.368942 trainer/policy/normal/std Std 0.17841 trainer/policy/normal/std Max 0.956115 trainer/policy/normal/std Min 0.0674475 trainer/policy/normal/log_std Mean 
-1.14419 trainer/policy/normal/log_std Std 0.582651 trainer/policy/normal/log_std Max -0.0448768 trainer/policy/normal/log_std Min -2.69641 eval/num steps total 574969 eval/num paths total 1110 eval/path length Mean 484 eval/path length Std 36 eval/path length Max 520 eval/path length Min 448 eval/Rewards Mean 3.18192 eval/Rewards Std 0.859746 eval/Rewards Max 5.43881 eval/Rewards Min 0.980697 eval/Returns Mean 1540.05 eval/Returns Std 114.895 eval/Returns Max 1654.94 eval/Returns Min 1425.15 eval/Actions Mean 0.151292 eval/Actions Std 0.589155 eval/Actions Max 0.998717 eval/Actions Min -0.997816 eval/Num Paths 2 eval/Average Returns 1540.05 eval/normalized_score 47.9425 time/evaluation sampling (s) 0.862034 time/logging (s) 0.00341124 time/sampling batch (s) 0.26596 time/saving (s) 0.00281923 time/training (s) 4.17745 time/epoch (s) 5.31168 time/total (s) 34956.9 Epoch -131 ---------------------------------- --------------- 2022-05-10 22:53:22.866989 PDT | [0] Epoch -130 finished ---------------------------------- --------------- epoch -130 replay_buffer/size 999996 trainer/num train calls 871000 trainer/Policy Loss -2.19632 trainer/Log Pis Mean 2.13379 trainer/Log Pis Std 2.58769 trainer/Log Pis Max 9.50701 trainer/Log Pis Min -5.37318 trainer/policy/mean Mean 0.129308 trainer/policy/mean Std 0.612695 trainer/policy/mean Max 0.999051 trainer/policy/mean Min -0.998145 trainer/policy/normal/std Mean 0.38029 trainer/policy/normal/std Std 0.187315 trainer/policy/normal/std Max 0.961331 trainer/policy/normal/std Min 0.060004 trainer/policy/normal/log_std Mean -1.12018 trainer/policy/normal/log_std Std 0.595854 trainer/policy/normal/log_std Max -0.0394362 trainer/policy/normal/log_std Min -2.81334 eval/num steps total 575423 eval/num paths total 1111 eval/path length Mean 454 eval/path length Std 0 eval/path length Max 454 eval/path length Min 454 eval/Rewards Mean 3.17715 eval/Rewards Std 0.905386 eval/Rewards Max 5.28121 eval/Rewards Min 0.988201 eval/Returns Mean 
1442.43 eval/Returns Std 0 eval/Returns Max 1442.43 eval/Returns Min 1442.43 eval/Actions Mean 0.140608 eval/Actions Std 0.575473 eval/Actions Max 0.995724 eval/Actions Min -0.998322 eval/Num Paths 1 eval/Average Returns 1442.43 eval/normalized_score 44.9429 time/evaluation sampling (s) 0.867042 time/logging (s) 0.00204283 time/sampling batch (s) 0.264011 time/saving (s) 0.00284178 time/training (s) 4.16837 time/epoch (s) 5.30431 time/total (s) 34962.2 Epoch -130 ---------------------------------- --------------- 2022-05-10 22:53:28.196155 PDT | [0] Epoch -129 finished ---------------------------------- --------------- epoch -129 replay_buffer/size 999996 trainer/num train calls 872000 trainer/Policy Loss -2.34695 trainer/Log Pis Mean 2.40778 trainer/Log Pis Std 2.60811 trainer/Log Pis Max 13.5728 trainer/Log Pis Min -5.22058 trainer/policy/mean Mean 0.128489 trainer/policy/mean Std 0.615716 trainer/policy/mean Max 0.997879 trainer/policy/mean Min -0.998922 trainer/policy/normal/std Mean 0.377681 trainer/policy/normal/std Std 0.18378 trainer/policy/normal/std Max 1.14575 trainer/policy/normal/std Min 0.0661799 trainer/policy/normal/log_std Mean -1.12729 trainer/policy/normal/log_std Std 0.600407 trainer/policy/normal/log_std Max 0.136058 trainer/policy/normal/log_std Min -2.71538 eval/num steps total 576404 eval/num paths total 1113 eval/path length Mean 490.5 eval/path length Std 4.5 eval/path length Max 495 eval/path length Min 486 eval/Rewards Mean 3.1355 eval/Rewards Std 0.792757 eval/Rewards Max 4.796 eval/Rewards Min 0.982255 eval/Returns Mean 1537.96 eval/Returns Std 11.9921 eval/Returns Max 1549.95 eval/Returns Min 1525.97 eval/Actions Mean 0.138819 eval/Actions Std 0.592907 eval/Actions Max 0.998462 eval/Actions Min -0.998517 eval/Num Paths 2 eval/Average Returns 1537.96 eval/normalized_score 47.8783 time/evaluation sampling (s) 0.861874 time/logging (s) 0.00359162 time/sampling batch (s) 0.265016 time/saving (s) 0.00305106 time/training (s) 4.17327 
time/epoch (s) 5.3068 time/total (s) 34967.5 Epoch -129 ---------------------------------- --------------- 2022-05-10 22:53:33.558669 PDT | [0] Epoch -128 finished ---------------------------------- --------------- epoch -128 replay_buffer/size 999996 trainer/num train calls 873000 trainer/Policy Loss -2.43486 trainer/Log Pis Mean 2.48126 trainer/Log Pis Std 2.49191 trainer/Log Pis Max 9.41167 trainer/Log Pis Min -2.72796 trainer/policy/mean Mean 0.130562 trainer/policy/mean Std 0.626908 trainer/policy/mean Max 0.996742 trainer/policy/mean Min -0.997615 trainer/policy/normal/std Mean 0.375492 trainer/policy/normal/std Std 0.183494 trainer/policy/normal/std Max 1.01772 trainer/policy/normal/std Min 0.0670258 trainer/policy/normal/log_std Mean -1.12955 trainer/policy/normal/log_std Std 0.587775 trainer/policy/normal/log_std Max 0.0175624 trainer/policy/normal/log_std Min -2.70268 eval/num steps total 576973 eval/num paths total 1114 eval/path length Mean 569 eval/path length Std 0 eval/path length Max 569 eval/path length Min 569 eval/Rewards Mean 3.19423 eval/Rewards Std 0.800019 eval/Rewards Max 4.67718 eval/Rewards Min 0.982274 eval/Returns Mean 1817.52 eval/Returns Std 0 eval/Returns Max 1817.52 eval/Returns Min 1817.52 eval/Actions Mean 0.156741 eval/Actions Std 0.600429 eval/Actions Max 0.998316 eval/Actions Min -0.998724 eval/Num Paths 1 eval/Average Returns 1817.52 eval/normalized_score 56.468 time/evaluation sampling (s) 0.861077 time/logging (s) 0.00266573 time/sampling batch (s) 0.267166 time/saving (s) 0.00317042 time/training (s) 4.20339 time/epoch (s) 5.33746 time/total (s) 34972.8 Epoch -128 ---------------------------------- --------------- 2022-05-10 22:53:38.909370 PDT | [0] Epoch -127 finished ---------------------------------- --------------- epoch -127 replay_buffer/size 999996 trainer/num train calls 874000 trainer/Policy Loss -2.4656 trainer/Log Pis Mean 2.49675 trainer/Log Pis Std 2.65512 trainer/Log Pis Max 15.016 trainer/Log Pis Min -4.17097 
trainer/policy/mean Mean 0.126322 trainer/policy/mean Std 0.632558 trainer/policy/mean Max 0.996513 trainer/policy/mean Min -0.999808 trainer/policy/normal/std Mean 0.37246 trainer/policy/normal/std Std 0.184705 trainer/policy/normal/std Max 0.974877 trainer/policy/normal/std Min 0.0675141 trainer/policy/normal/log_std Mean -1.14053 trainer/policy/normal/log_std Std 0.591917 trainer/policy/normal/log_std Max -0.0254443 trainer/policy/normal/log_std Min -2.69542 eval/num steps total 577478 eval/num paths total 1115 eval/path length Mean 505 eval/path length Std 0 eval/path length Max 505 eval/path length Min 505 eval/Rewards Mean 3.1216 eval/Rewards Std 0.760762 eval/Rewards Max 4.75012 eval/Rewards Min 0.984457 eval/Returns Mean 1576.41 eval/Returns Std 0 eval/Returns Max 1576.41 eval/Returns Min 1576.41 eval/Actions Mean 0.159907 eval/Actions Std 0.59509 eval/Actions Max 0.998351 eval/Actions Min -0.996092 eval/Num Paths 1 eval/Average Returns 1576.41 eval/normalized_score 49.0597 time/evaluation sampling (s) 0.866215 time/logging (s) 0.00226965 time/sampling batch (s) 0.264674 time/saving (s) 0.00295221 time/training (s) 4.19004 time/epoch (s) 5.32615 time/total (s) 34978.2 Epoch -127 ---------------------------------- --------------- 2022-05-10 22:53:44.271526 PDT | [0] Epoch -126 finished ---------------------------------- --------------- epoch -126 replay_buffer/size 999996 trainer/num train calls 875000 trainer/Policy Loss -2.39079 trainer/Log Pis Mean 2.37542 trainer/Log Pis Std 2.47761 trainer/Log Pis Max 9.7216 trainer/Log Pis Min -5.22274 trainer/policy/mean Mean 0.14797 trainer/policy/mean Std 0.615422 trainer/policy/mean Max 0.996226 trainer/policy/mean Min -0.998332 trainer/policy/normal/std Mean 0.375347 trainer/policy/normal/std Std 0.182051 trainer/policy/normal/std Max 0.934424 trainer/policy/normal/std Min 0.0656581 trainer/policy/normal/log_std Mean -1.13094 trainer/policy/normal/log_std Std 0.592379 trainer/policy/normal/log_std Max -0.0678251 
trainer/policy/normal/log_std Min -2.72329 eval/num steps total 578461 eval/num paths total 1117 eval/path length Mean 491.5 eval/path length Std 1.5 eval/path length Max 493 eval/path length Min 490 eval/Rewards Mean 3.05989 eval/Rewards Std 0.793386 eval/Rewards Max 4.82293 eval/Rewards Min 0.98508 eval/Returns Mean 1503.93 eval/Returns Std 5.70942 eval/Returns Max 1509.64 eval/Returns Min 1498.23 eval/Actions Mean 0.154637 eval/Actions Std 0.582986 eval/Actions Max 0.998652 eval/Actions Min -0.999467 eval/Num Paths 2 eval/Average Returns 1503.93 eval/normalized_score 46.8328 time/evaluation sampling (s) 0.878446 time/logging (s) 0.0038229 time/sampling batch (s) 0.264826 time/saving (s) 0.00291912 time/training (s) 4.1897 time/epoch (s) 5.33971 time/total (s) 34983.5 Epoch -126 ---------------------------------- --------------- 2022-05-10 22:53:49.610395 PDT | [0] Epoch -125 finished ---------------------------------- --------------- epoch -125 replay_buffer/size 999996 trainer/num train calls 876000 trainer/Policy Loss -2.32951 trainer/Log Pis Mean 2.29681 trainer/Log Pis Std 2.64016 trainer/Log Pis Max 14.4536 trainer/Log Pis Min -4.17744 trainer/policy/mean Mean 0.153539 trainer/policy/mean Std 0.615104 trainer/policy/mean Max 0.99771 trainer/policy/mean Min -0.997829 trainer/policy/normal/std Mean 0.375213 trainer/policy/normal/std Std 0.185597 trainer/policy/normal/std Max 0.981637 trainer/policy/normal/std Min 0.0629935 trainer/policy/normal/log_std Mean -1.13698 trainer/policy/normal/log_std Std 0.603849 trainer/policy/normal/log_std Max -0.018534 trainer/policy/normal/log_std Min -2.76472 eval/num steps total 579447 eval/num paths total 1119 eval/path length Mean 493 eval/path length Std 18 eval/path length Max 511 eval/path length Min 475 eval/Rewards Mean 3.17369 eval/Rewards Std 0.799157 eval/Rewards Max 4.98188 eval/Rewards Min 0.981864 eval/Returns Mean 1564.63 eval/Returns Std 47.0291 eval/Returns Max 1611.66 eval/Returns Min 1517.6 eval/Actions 
Mean 0.154726 eval/Actions Std 0.597684 eval/Actions Max 0.99842 eval/Actions Min -0.998486 eval/Num Paths 2 eval/Average Returns 1564.63 eval/normalized_score 48.6976 time/evaluation sampling (s) 0.863086 time/logging (s) 0.00374013 time/sampling batch (s) 0.264895 time/saving (s) 0.00308983 time/training (s) 4.17976 time/epoch (s) 5.31457 time/total (s) 34988.8 Epoch -125 ---------------------------------- --------------- 2022-05-10 22:53:54.943270 PDT | [0] Epoch -124 finished ---------------------------------- --------------- epoch -124 replay_buffer/size 999996 trainer/num train calls 877000 trainer/Policy Loss -2.32696 trainer/Log Pis Mean 2.33747 trainer/Log Pis Std 2.65642 trainer/Log Pis Max 10.2562 trainer/Log Pis Min -5.69879 trainer/policy/mean Mean 0.138846 trainer/policy/mean Std 0.615693 trainer/policy/mean Max 0.998818 trainer/policy/mean Min -0.998417 trainer/policy/normal/std Mean 0.372324 trainer/policy/normal/std Std 0.182407 trainer/policy/normal/std Max 0.927089 trainer/policy/normal/std Min 0.0668909 trainer/policy/normal/log_std Mean -1.14056 trainer/policy/normal/log_std Std 0.594356 trainer/policy/normal/log_std Max -0.0757053 trainer/policy/normal/log_std Min -2.70469 eval/num steps total 580434 eval/num paths total 1121 eval/path length Mean 493.5 eval/path length Std 88.5 eval/path length Max 582 eval/path length Min 405 eval/Rewards Mean 3.13415 eval/Rewards Std 0.785995 eval/Rewards Max 4.95438 eval/Rewards Min 0.980672 eval/Returns Mean 1546.7 eval/Returns Std 299.242 eval/Returns Max 1845.94 eval/Returns Min 1247.46 eval/Actions Mean 0.153193 eval/Actions Std 0.593824 eval/Actions Max 0.997687 eval/Actions Min -0.99856 eval/Num Paths 2 eval/Average Returns 1546.7 eval/normalized_score 48.1469 time/evaluation sampling (s) 0.869543 time/logging (s) 0.00339067 time/sampling batch (s) 0.263729 time/saving (s) 0.00283696 time/training (s) 4.16893 time/epoch (s) 5.30843 time/total (s) 34994.1 Epoch -124 ---------------------------------- 
--------------- 2022-05-10 22:54:00.254206 PDT | [0] Epoch -123 finished ---------------------------------- --------------- epoch -123 replay_buffer/size 999996 trainer/num train calls 878000 trainer/Policy Loss -2.26746 trainer/Log Pis Mean 2.26539 trainer/Log Pis Std 2.5666 trainer/Log Pis Max 9.24876 trainer/Log Pis Min -5.81675 trainer/policy/mean Mean 0.118338 trainer/policy/mean Std 0.620648 trainer/policy/mean Max 0.998943 trainer/policy/mean Min -0.997302 trainer/policy/normal/std Mean 0.375296 trainer/policy/normal/std Std 0.185359 trainer/policy/normal/std Max 1.06469 trainer/policy/normal/std Min 0.0651647 trainer/policy/normal/log_std Mean -1.13377 trainer/policy/normal/log_std Std 0.595436 trainer/policy/normal/log_std Max 0.0626814 trainer/policy/normal/log_std Min -2.73084 eval/num steps total 580975 eval/num paths total 1122 eval/path length Mean 541 eval/path length Std 0 eval/path length Max 541 eval/path length Min 541 eval/Rewards Mean 3.18622 eval/Rewards Std 0.82869 eval/Rewards Max 5.12073 eval/Rewards Min 0.989143 eval/Returns Mean 1723.74 eval/Returns Std 0 eval/Returns Max 1723.74 eval/Returns Min 1723.74 eval/Actions Mean 0.146308 eval/Actions Std 0.578846 eval/Actions Max 0.998313 eval/Actions Min -0.997676 eval/Num Paths 1 eval/Average Returns 1723.74 eval/normalized_score 53.5867 time/evaluation sampling (s) 0.87919 time/logging (s) 0.00243758 time/sampling batch (s) 0.262854 time/saving (s) 0.0028672 time/training (s) 4.13891 time/epoch (s) 5.28626 time/total (s) 34999.4 Epoch -123 ---------------------------------- --------------- 2022-05-10 22:54:05.602258 PDT | [0] Epoch -122 finished ---------------------------------- --------------- epoch -122 replay_buffer/size 999996 trainer/num train calls 879000 trainer/Policy Loss -2.29788 trainer/Log Pis Mean 2.16686 trainer/Log Pis Std 2.65594 trainer/Log Pis Max 9.72838 trainer/Log Pis Min -6.94563 trainer/policy/mean Mean 0.129697 trainer/policy/mean Std 0.616675 trainer/policy/mean Max 
0.995846 trainer/policy/mean Min -0.997699 trainer/policy/normal/std Mean 0.373488 trainer/policy/normal/std Std 0.183953 trainer/policy/normal/std Max 0.974685 trainer/policy/normal/std Min 0.0615553 trainer/policy/normal/log_std Mean -1.13942 trainer/policy/normal/log_std Std 0.599884 trainer/policy/normal/log_std Max -0.0256413 trainer/policy/normal/log_std Min -2.78782 eval/num steps total 581562 eval/num paths total 1123 eval/path length Mean 587 eval/path length Std 0 eval/path length Max 587 eval/path length Min 587 eval/Rewards Mean 3.19574 eval/Rewards Std 0.764151 eval/Rewards Max 5.39524 eval/Rewards Min 0.981403 eval/Returns Mean 1875.9 eval/Returns Std 0 eval/Returns Max 1875.9 eval/Returns Min 1875.9 eval/Actions Mean 0.154546 eval/Actions Std 0.606251 eval/Actions Max 0.997418 eval/Actions Min -0.997765 eval/Num Paths 1 eval/Average Returns 1875.9 eval/normalized_score 58.2618 time/evaluation sampling (s) 0.881185 time/logging (s) 0.00257448 time/sampling batch (s) 0.26463 time/saving (s) 0.0031235 time/training (s) 4.17276 time/epoch (s) 5.32427 time/total (s) 35004.8 Epoch -122 ---------------------------------- --------------- 2022-05-10 22:54:10.948141 PDT | [0] Epoch -121 finished ---------------------------------- --------------- epoch -121 replay_buffer/size 999996 trainer/num train calls 880000 trainer/Policy Loss -2.12051 trainer/Log Pis Mean 2.05337 trainer/Log Pis Std 2.58365 trainer/Log Pis Max 17.6272 trainer/Log Pis Min -5.19333 trainer/policy/mean Mean 0.166552 trainer/policy/mean Std 0.596929 trainer/policy/mean Max 0.995277 trainer/policy/mean Min -0.999977 trainer/policy/normal/std Mean 0.377483 trainer/policy/normal/std Std 0.189077 trainer/policy/normal/std Max 0.979979 trainer/policy/normal/std Min 0.0620449 trainer/policy/normal/log_std Mean -1.13732 trainer/policy/normal/log_std Std 0.618515 trainer/policy/normal/log_std Max -0.0202237 trainer/policy/normal/log_std Min -2.7799 eval/num steps total 582211 eval/num paths total 
1124
eval/path length   Mean 649       Std 0         Max 649       Min 649
eval/Rewards       Mean 3.22566   Std 0.698931  Max 4.73338   Min 0.984396
eval/Returns       Mean 2093.46   Std 0         Max 2093.46   Min 2093.46
eval/Actions       Mean 0.147399  Std 0.608632  Max 0.997503  Min -0.997503
eval/Num Paths 1   eval/Average Returns 2093.46   eval/normalized_score 64.9464
time/evaluation sampling (s) 0.904863   time/logging (s) 0.00269623   time/sampling batch (s) 0.262372   time/saving (s) 0.00303845   time/training (s) 4.14859   time/epoch (s) 5.32156   time/total (s) 35010.1
Epoch -121
----------------------------------  ---------------
2022-05-10 22:54:16.294976 PDT | [0] Epoch -120 finished
----------------------------------  ---------------
epoch -120   replay_buffer/size 999996   trainer/num train calls 881000   trainer/Policy Loss -2.1914
trainer/Log Pis                Mean 2.08092   Std 2.5795     Max 12.5786     Min -5.19296
trainer/policy/mean            Mean 0.131487  Std 0.606671   Max 0.997241    Min -0.998056
trainer/policy/normal/std      Mean 0.375243  Std 0.178726   Max 0.97112     Min 0.0661716
trainer/policy/normal/log_std  Mean -1.12262  Std 0.573372   Max -0.0293053  Min -2.7155
eval/num steps total 582791   eval/num paths total 1125
eval/path length   Mean 580       Std 0         Max 580       Min 580
eval/Rewards       Mean 3.17819   Std 0.770604  Max 4.85326   Min 0.987337
eval/Returns       Mean 1843.35   Std 0         Max 1843.35   Min 1843.35
eval/Actions       Mean 0.159464  Std 0.599352  Max 0.999086  Min -0.99857
eval/Num Paths 1   eval/Average Returns 1843.35   eval/normalized_score 57.2618
time/evaluation sampling (s) 0.898463   time/logging (s) 0.00247582   time/sampling batch (s) 0.262427   time/saving (s) 0.00293706   time/training (s) 4.15625   time/epoch (s) 5.32255   time/total (s) 35015.4
Epoch -120
----------------------------------  ---------------
2022-05-10 22:54:21.658519 PDT | [0] Epoch -119 finished
----------------------------------  ---------------
epoch -119   replay_buffer/size 999996   trainer/num train calls 882000   trainer/Policy Loss -2.05345
trainer/Log Pis                Mean 2.29593   Std 2.64346    Max 11.2046     Min -4.17932
trainer/policy/mean            Mean 0.149315  Std 0.605159   Max 0.995934    Min -0.998027
trainer/policy/normal/std      Mean 0.365276  Std 0.176248   Max 0.923684    Min 0.0677331
trainer/policy/normal/log_std  Mean -1.15522  Std 0.585796   Max -0.0793853  Min -2.69218
eval/num steps total 583341   eval/num paths total 1126
eval/path length   Mean 550       Std 0         Max 550       Min 550
eval/Rewards       Mean 3.23111   Std 0.833673  Max 4.86795   Min 0.989301
eval/Returns       Mean 1777.11   Std 0         Max 1777.11   Min 1777.11
eval/Actions       Mean 0.153334  Std 0.584664  Max 0.997392  Min -0.999024
eval/Num Paths 1   eval/Average Returns 1777.11   eval/normalized_score 55.2263
time/evaluation sampling (s) 0.901855   time/logging (s) 0.0023233   time/sampling batch (s) 0.268677   time/saving (s) 0.00294383   time/training (s) 4.16338   time/epoch (s) 5.33917   time/total (s) 35020.8
Epoch -119
----------------------------------  ---------------
2022-05-10 22:54:26.990764 PDT | [0] Epoch -118 finished
----------------------------------  ---------------
epoch -118   replay_buffer/size 999996   trainer/num train calls 883000   trainer/Policy Loss -2.23474
trainer/Log Pis                Mean 2.1372    Std 2.5447     Max 13.5195     Min -5.53576
trainer/policy/mean            Mean 0.144344  Std 0.613512   Max 0.997795    Min -0.998561
trainer/policy/normal/std      Mean 0.369633  Std 0.179491   Max 0.906095    Min 0.0671158
trainer/policy/normal/log_std  Mean -1.14663  Std 0.593783   Max -0.0986114  Min -2.70134
eval/num steps total 583884   eval/num paths total 1127
eval/path length   Mean 543       Std 0         Max 543       Min 543
eval/Rewards       Mean 3.18852   Std 0.82477   Max 5.19484   Min 0.98521
eval/Returns       Mean 1731.37   Std 0         Max 1731.37   Min 1731.37
eval/Actions       Mean 0.140849  Std 0.574029  Max 0.996603  Min -0.998328
eval/Num Paths 1   eval/Average Returns 1731.37   eval/normalized_score 53.8209
time/evaluation sampling (s) 0.874281   time/logging (s) 0.00248393   time/sampling batch (s) 0.263675   time/saving (s) 0.00317048   time/training (s) 4.16473   time/epoch (s) 5.30834   time/total (s) 35026.1
Epoch -118
----------------------------------  ---------------
2022-05-10 22:54:32.326475 PDT | [0] Epoch -117 finished
----------------------------------  ---------------
epoch -117   replay_buffer/size 999996   trainer/num train calls 884000   trainer/Policy Loss -2.3418
trainer/Log Pis                Mean 2.15762   Std 2.64379    Max 10.0956     Min -6.96772
trainer/policy/mean            Mean 0.129318  Std 0.620942   Max 0.996589    Min -0.999583
trainer/policy/normal/std      Mean 0.379233  Std 0.187708   Max 1.01032     Min 0.0704949
trainer/policy/normal/log_std  Mean -1.12442  Std 0.598692   Max 0.0102711   Min -2.65221
eval/num steps total 584374   eval/num paths total 1128
eval/path length   Mean 490       Std 0         Max 490       Min 490
eval/Rewards       Mean 3.03      Std 0.764469  Max 4.58914   Min 0.980039
eval/Returns       Mean 1484.7    Std 0         Max 1484.7    Min 1484.7
eval/Actions       Mean 0.148826  Std 0.578206  Max 0.996505  Min -0.997501
eval/Num Paths 1   eval/Average Returns 1484.7   eval/normalized_score 46.2418
time/evaluation sampling (s) 0.866195   time/logging (s) 0.00245009   time/sampling batch (s) 0.264262   time/saving (s) 0.00410031   time/training (s) 4.17407   time/epoch (s) 5.31108   time/total (s) 35031.4
Epoch -117
----------------------------------  ---------------
2022-05-10 22:54:37.657966 PDT | [0] Epoch -116 finished
----------------------------------  ---------------
epoch -116   replay_buffer/size 999996   trainer/num train calls 885000   trainer/Policy Loss -2.21695
trainer/Log Pis                Mean 2.3494    Std 2.59469    Max 9.95593     Min -3.99548
trainer/policy/mean            Mean 0.17648   Std 0.607957   Max 0.998911    Min -0.999684
trainer/policy/normal/std      Mean 0.376536  Std 0.184497   Max 0.911652    Min 0.0646127
trainer/policy/normal/log_std  Mean -1.13403  Std 0.607595   Max -0.0924969  Min -2.73934
eval/num steps total 585003   eval/num paths total 1129
eval/path length   Mean 629       Std 0         Max 629       Min 629
eval/Rewards       Mean 3.20838   Std 0.75295   Max 4.84419   Min 0.980752
eval/Returns       Mean 2018.07   Std 0         Max 2018.07   Min 2018.07
eval/Actions       Mean 0.140109  Std 0.582253  Max 0.998232  Min -0.998303
eval/Num Paths 1   eval/Average Returns 2018.07   eval/normalized_score 62.6301
time/evaluation sampling (s) 0.859295   time/logging (s) 0.00253112   time/sampling batch (s) 0.265563   time/saving (s) 0.00284603   time/training (s) 4.17713   time/epoch (s) 5.30736   time/total (s) 35036.7
Epoch -116
----------------------------------  ---------------
2022-05-10 22:54:43.081355 PDT | [0] Epoch -115 finished
----------------------------------  ---------------
epoch -115   replay_buffer/size 999996   trainer/num train calls 886000   trainer/Policy Loss -2.07888
trainer/Log Pis                Mean 2.16031   Std 2.65058    Max 10.7959     Min -6.09649
trainer/policy/mean            Mean 0.134503  Std 0.614147   Max 0.995656    Min -0.998344
trainer/policy/normal/std      Mean 0.37578   Std 0.180883   Max 0.948495    Min 0.063493
trainer/policy/normal/log_std  Mean -1.12702  Std 0.588358   Max -0.0528789  Min -2.75683
eval/num steps total 585538   eval/num paths total 1130
eval/path length   Mean 535       Std 0         Max 535       Min 535
eval/Rewards       Mean 3.14781   Std 0.869568  Max 5.41579   Min 0.984777
eval/Returns       Mean 1684.08   Std 0         Max 1684.08   Min 1684.08
eval/Actions       Mean 0.150019  Std 0.573324  Max 0.997933  Min -0.998505
eval/Num Paths 1   eval/Average Returns 1684.08   eval/normalized_score 52.368
time/evaluation sampling (s) 0.938415   time/logging (s) 0.0023454   time/sampling batch (s) 0.265085   time/saving (s) 0.00305516   time/training (s) 4.19013   time/epoch (s) 5.39903   time/total (s) 35042.1
Epoch -115
----------------------------------  ---------------
2022-05-10 22:54:48.414471 PDT | [0] Epoch -114 finished
----------------------------------  ---------------
epoch -114   replay_buffer/size 999996   trainer/num train calls 887000   trainer/Policy Loss -2.12838
trainer/Log Pis                Mean 2.08214   Std 2.48356    Max 8.94956     Min -7.09701
trainer/policy/mean            Mean 0.156768  Std 0.609443   Max 0.999091    Min -0.997533
trainer/policy/normal/std      Mean 0.373646  Std 0.17889    Max 0.98505     Min 0.0677705
trainer/policy/normal/log_std  Mean -1.13068  Std 0.582936   Max -0.0150631  Min -2.69163
eval/num steps total 586362   eval/num paths total 1132
eval/path length   Mean 412       Std 4         Max 416       Min 408
eval/Rewards       Mean 3.07564   Std 0.825916  Max 5.0359    Min 0.983117
eval/Returns       Mean 1267.16   Std 16.0655   Max 1283.23   Min 1251.1
eval/Actions       Mean 0.15128   Std 0.591794  Max 0.99873   Min -0.99949
eval/Num Paths 2   eval/Average Returns 1267.16   eval/normalized_score 39.5578
time/evaluation sampling (s) 0.865135   time/logging (s) 0.00310142   time/sampling batch (s) 0.264634   time/saving (s) 0.00303108   time/training (s) 4.17406   time/epoch (s) 5.30996   time/total (s) 35047.4
Epoch -114
----------------------------------  ---------------
2022-05-10 22:54:53.750258 PDT | [0] Epoch -113 finished
----------------------------------  ---------------
epoch -113   replay_buffer/size 999996   trainer/num train calls 888000   trainer/Policy Loss -2.2537
trainer/Log Pis                Mean 2.23181   Std 2.56474    Max 12.309      Min -6.25842
trainer/policy/mean            Mean 0.128088  Std 0.614133   Max 0.997771    Min -0.999388
trainer/policy/normal/std      Mean 0.375939  Std 0.179098   Max 0.960193    Min 0.0712758
trainer/policy/normal/log_std  Mean -1.12161  Std 0.574985   Max -0.0406207  Min -2.6412
eval/num steps total 586858   eval/num paths total 1133
eval/path length   Mean 496       Std 0         Max 496       Min 496
eval/Rewards       Mean 3.07877   Std 0.76091   Max 4.67578   Min 0.989382
eval/Returns       Mean 1527.07   Std 0         Max 1527.07   Min 1527.07
eval/Actions       Mean 0.155284  Std 0.588656  Max 0.996524  Min -0.997942
eval/Num Paths 1   eval/Average Returns 1527.07   eval/normalized_score 47.5436
time/evaluation sampling (s) 0.862798   time/logging (s) 0.00218636   time/sampling batch (s) 0.263133   time/saving (s) 0.00277812   time/training (s) 4.18021   time/epoch (s) 5.31111   time/total (s) 35052.7
Epoch -113
----------------------------------  ---------------
2022-05-10 22:54:59.085882 PDT | [0] Epoch -112 finished
----------------------------------  ---------------
epoch -112   replay_buffer/size 999996   trainer/num train calls 889000   trainer/Policy Loss -2.25819
trainer/Log Pis                Mean 2.24009   Std 2.58536    Max 13.4987     Min -3.63171
trainer/policy/mean            Mean 0.10871   Std 0.609909   Max 0.998164    Min -0.998647
trainer/policy/normal/std      Mean 0.378733  Std 0.182065   Max 0.898155    Min 0.065795
trainer/policy/normal/log_std  Mean -1.1175   Std 0.582037   Max -0.107412   Min -2.72121
eval/num steps total 587453   eval/num paths total 1134
eval/path length   Mean 595       Std 0         Max 595       Min 595
eval/Rewards       Mean 3.15347   Std 0.767481  Max 5.34761   Min 0.988944
eval/Returns       Mean 1876.32   Std 0         Max 1876.32   Min 1876.32
eval/Actions       Mean 0.140013  Std 0.584492  Max 0.998213  Min -0.997047
eval/Num Paths 1   eval/Average Returns 1876.32   eval/normalized_score 58.2746
time/evaluation sampling (s) 0.867743   time/logging (s) 0.00245702   time/sampling batch (s) 0.264281   time/saving (s) 0.0028652   time/training (s) 4.17474   time/epoch (s) 5.31209   time/total (s) 35058
Epoch -112
----------------------------------  ---------------
2022-05-10 22:55:04.430036 PDT | [0] Epoch -111 finished
----------------------------------  ---------------
epoch -111   replay_buffer/size 999996   trainer/num train calls 890000   trainer/Policy Loss -2.366
trainer/Log Pis                Mean 2.26431   Std 2.57588    Max 10.2541     Min -4.61908
trainer/policy/mean            Mean 0.122383  Std 0.621531   Max 0.998134    Min -0.997853
trainer/policy/normal/std      Mean 0.382525  Std 0.185663   Max 0.976465    Min 0.0687676
trainer/policy/normal/log_std  Mean -1.11241  Std 0.593161   Max -0.0238166  Min -2.67702
eval/num steps total 587982   eval/num paths total 1135
eval/path length   Mean 529       Std 0         Max 529       Min 529
eval/Rewards       Mean 3.2351    Std 0.851384  Max 5.38847   Min 0.980158
eval/Returns       Mean 1711.37   Std 0         Max 1711.37   Min 1711.37
eval/Actions       Mean 0.142662  Std 0.590265  Max 0.997991  Min -0.998311
eval/Num Paths 1   eval/Average Returns 1711.37   eval/normalized_score 53.2065
time/evaluation sampling (s) 0.869791   time/logging (s) 0.00235457   time/sampling batch (s) 0.264927   time/saving (s) 0.00293769   time/training (s) 4.17986   time/epoch (s) 5.31987   time/total (s) 35063.4
Epoch -111
----------------------------------  ---------------
2022-05-10 22:55:09.764822 PDT | [0] Epoch -110 finished
----------------------------------  ---------------
epoch -110   replay_buffer/size 999996   trainer/num train calls 891000   trainer/Policy Loss -2.1455
trainer/Log Pis                Mean 2.2155    Std 2.61468    Max 10.9635     Min -7.44378
trainer/policy/mean            Mean 0.107575  Std 0.622368   Max 0.998094    Min -0.997649
trainer/policy/normal/std      Mean 0.372901  Std 0.176636   Max 0.883371    Min 0.0643548
trainer/policy/normal/log_std  Mean -1.12852  Std 0.572263   Max -0.12401    Min -2.74334
eval/num steps total 588566   eval/num paths total 1136
eval/path length   Mean 584       Std 0         Max 584       Min 584
eval/Rewards       Mean 3.13831   Std 0.718817  Max 4.88022   Min 0.988365
eval/Returns       Mean 1832.77   Std 0         Max 1832.77   Min 1832.77
eval/Actions       Mean 0.149624  Std 0.599881  Max 0.998084  Min -0.99744
eval/Num Paths 1   eval/Average Returns 1832.77   eval/normalized_score 56.9366
time/evaluation sampling (s) 0.863127   time/logging (s) 0.00238965   time/sampling batch (s) 0.264198   time/saving (s) 0.00283711   time/training (s) 4.17821   time/epoch (s) 5.31076   time/total (s) 35068.7
Epoch -110
----------------------------------  ---------------
2022-05-10 22:55:15.094594 PDT | [0] Epoch -109 finished
----------------------------------  ---------------
epoch -109   replay_buffer/size 999996   trainer/num train calls 892000   trainer/Policy Loss -2.52257
trainer/Log Pis                Mean 2.51501   Std 2.5886     Max 11.4993     Min -4.09317
trainer/policy/mean            Mean 0.110126  Std 0.636359   Max 0.998234    Min -0.998193
trainer/policy/normal/std      Mean 0.380421  Std 0.188499   Max 0.989904    Min 0.0696226
trainer/policy/normal/log_std  Mean -1.12343  Std 0.603767   Max -0.0101478  Min -2.66467
eval/num steps total 589199   eval/num paths total 1137
eval/path length   Mean 633       Std 0         Max 633       Min 633
eval/Rewards       Mean 3.20664   Std 0.751204  Max 4.72284   Min 0.979401
eval/Returns       Mean 2029.8    Std 0         Max 2029.8    Min 2029.8
eval/Actions       Mean 0.145005  Std 0.5978    Max 0.998548  Min -0.998156
eval/Num Paths 1   eval/Average Returns 2029.8   eval/normalized_score 62.9906
time/evaluation sampling (s) 0.867927   time/logging (s) 0.00249938   time/sampling batch (s) 0.263781   time/saving (s) 0.00278406   time/training (s) 4.16865   time/epoch (s) 5.30565   time/total (s) 35074
Epoch -109
----------------------------------  ---------------
2022-05-10 22:55:20.429683 PDT | [0] Epoch -108 finished
----------------------------------  ---------------
epoch -108   replay_buffer/size 999996   trainer/num train calls 893000   trainer/Policy Loss -2.33785
trainer/Log Pis                Mean 2.19826   Std 2.61569    Max 11.1768     Min -5.56778
trainer/policy/mean            Mean 0.138623  Std 0.620163   Max 0.997956    Min -0.998408
trainer/policy/normal/std      Mean 0.372876  Std 0.177505   Max 0.920208    Min 0.0769296
trainer/policy/normal/log_std  Mean -1.13129  Std 0.578824   Max -0.0831558  Min -2.56486
eval/num steps total 589756   eval/num paths total 1138
eval/path length   Mean 557       Std 0         Max 557       Min 557
eval/Rewards       Mean 3.20746   Std 0.809306  Max 4.93491   Min 0.979174
eval/Returns       Mean 1786.56   Std 0         Max 1786.56   Min 1786.56
eval/Actions       Mean 0.149487  Std 0.573012  Max 0.997861  Min -0.998947
eval/Num Paths 1   eval/Average Returns 1786.56   eval/normalized_score 55.5166
time/evaluation sampling (s) 0.890029   time/logging (s) 0.00232075   time/sampling batch (s) 0.262798   time/saving (s) 0.00284041   time/training (s) 4.15292   time/epoch (s) 5.31091   time/total (s) 35079.3
Epoch -108
----------------------------------  ---------------
2022-05-10 22:55:25.785418 PDT | [0] Epoch -107 finished
----------------------------------  ---------------
epoch -107   replay_buffer/size 999996   trainer/num train calls 894000   trainer/Policy Loss -2.33524
trainer/Log Pis                Mean 2.42222   Std 2.57978    Max 11.2845     Min -4.35479
trainer/policy/mean            Mean 0.118877  Std 0.624059   Max 0.997372    Min -0.998305
trainer/policy/normal/std      Mean 0.375634  Std 0.185661   Max 1.02441     Min 0.0721812
trainer/policy/normal/log_std  Mean -1.13514  Std 0.602385   Max 0.0241146   Min -2.62858
eval/num steps total 590681   eval/num paths total 1140
eval/path length   Mean 462.5     Std 49.5      Max 512       Min 413
eval/Rewards       Mean 3.15577   Std 0.82678   Max 5.52153   Min 0.981385
eval/Returns       Mean 1459.55   Std 184.611   Max 1644.16   Min 1274.93
eval/Actions       Mean 0.15044   Std 0.593813  Max 0.998142  Min -0.998546
eval/Num Paths 2   eval/Average Returns 1459.55   eval/normalized_score 45.4689
time/evaluation sampling (s) 0.907693   time/logging (s) 0.00361478   time/sampling batch (s) 0.262316   time/saving (s) 0.00298894   time/training (s) 4.15644   time/epoch (s) 5.33305   time/total (s) 35084.6
Epoch -107
----------------------------------  ---------------
2022-05-10 22:55:31.201913 PDT | [0] Epoch -106 finished
----------------------------------  ---------------
epoch -106   replay_buffer/size 999996   trainer/num train calls 895000   trainer/Policy Loss -2.09356
trainer/Log Pis                Mean 2.10782   Std 2.58825    Max 9.62728     Min -6.76824
trainer/policy/mean            Mean 0.172677  Std 0.606452   Max 0.997969    Min -0.998075
trainer/policy/normal/std      Mean 0.389114  Std 0.188112   Max 1.23648     Min 0.062917
trainer/policy/normal/log_std  Mean -1.09402  Std 0.592628   Max 0.212268    Min -2.76594
eval/num steps total 591182   eval/num paths total 1141
eval/path length   Mean 501       Std 0         Max 501       Min 501
eval/Rewards       Mean 3.07552   Std 0.779772  Max 4.96318   Min 0.98976
eval/Returns       Mean 1540.83   Std 0         Max 1540.83   Min 1540.83
eval/Actions       Mean 0.160411  Std 0.583806  Max 0.997017  Min -0.997874
eval/Num Paths 1   eval/Average Returns 1540.83   eval/normalized_score 47.9666
time/evaluation sampling (s) 0.931165   time/logging (s) 0.00227351   time/sampling batch (s) 0.267882   time/saving (s) 0.00290333   time/training (s) 4.18647   time/epoch (s) 5.3907   time/total (s) 35090
Epoch -106
----------------------------------  ---------------
2022-05-10 22:55:36.613334 PDT | [0] Epoch -105 finished
----------------------------------  ---------------
epoch -105   replay_buffer/size 999996   trainer/num train calls 896000   trainer/Policy Loss -2.13643
trainer/Log Pis                Mean 2.00924   Std 2.50603    Max 8.85741     Min -8.33298
trainer/policy/mean            Mean 0.161509  Std 0.602438   Max 0.997747    Min -0.998847
trainer/policy/normal/std      Mean 0.372977  Std 0.184919   Max 0.963486    Min 0.0638482
trainer/policy/normal/log_std  Mean -1.14299  Std 0.603152   Max -0.0371976  Min -2.75125
eval/num steps total 591777   eval/num paths total 1142
eval/path length   Mean 595       Std 0         Max 595       Min 595
eval/Rewards       Mean 3.15227   Std 0.794838  Max 5.37336   Min 0.985263
eval/Returns       Mean 1875.6    Std 0         Max 1875.6    Min 1875.6
eval/Actions       Mean 0.146189  Std 0.585678  Max 0.997783  Min -0.998432
eval/Num Paths 1   eval/Average Returns 1875.6   eval/normalized_score 58.2525
time/evaluation sampling (s) 0.910336   time/logging (s) 0.0028225   time/sampling batch (s) 0.267558   time/saving (s) 0.00310724   time/training (s) 4.20362   time/epoch (s) 5.38744   time/total (s) 35095.4
Epoch -105
----------------------------------  ---------------
2022-05-10 22:55:42.000808 PDT | [0] Epoch -104 finished
----------------------------------  ---------------
epoch -104   replay_buffer/size 999996   trainer/num train calls 897000   trainer/Policy Loss -2.25724
trainer/Log Pis                Mean 2.21583   Std 2.55125    Max 9.01416     Min -5.919
trainer/policy/mean            Mean 0.118402  Std 0.617442   Max 0.996897    Min -0.996561
trainer/policy/normal/std      Mean 0.375933  Std 0.179327   Max 0.973169    Min 0.0669038
trainer/policy/normal/log_std  Mean -1.12256  Std 0.578148   Max -0.0271977  Min -2.7045
eval/num steps total 592399   eval/num paths total 1143
eval/path length   Mean 622       Std 0         Max 622       Min 622
eval/Rewards       Mean 3.20761   Std 0.793016  Max 5.22246   Min 0.983449
eval/Returns       Mean 1995.14   Std 0         Max 1995.14   Min 1995.14
eval/Actions       Mean 0.145298  Std 0.584023  Max 0.997984  Min -0.998203
eval/Num Paths 1   eval/Average Returns 1995.14   eval/normalized_score 61.9254
time/evaluation sampling (s) 0.89379   time/logging (s) 0.00260186   time/sampling batch (s) 0.267306   time/saving (s) 0.00295265   time/training (s) 4.19583   time/epoch (s) 5.36248   time/total (s) 35100.8
Epoch -104
----------------------------------  ---------------
2022-05-10 22:55:47.374035 PDT | [0] Epoch -103 finished
----------------------------------  ---------------
epoch -103   replay_buffer/size 999996   trainer/num train calls 898000   trainer/Policy Loss -2.38445
trainer/Log Pis                Mean 2.34974   Std 2.77567    Max 13.9551     Min -8.85078
trainer/policy/mean            Mean 0.14744   Std 0.628445   Max 0.997263    Min -0.996772
trainer/policy/normal/std      Mean 0.381874  Std 0.193084   Max 1.06459     Min 0.0676654
trainer/policy/normal/log_std  Mean -1.12452  Std 0.612932   Max 0.0625939   Min -2.69318
eval/num steps total 592904   eval/num paths total 1144
eval/path length   Mean 505       Std 0         Max 505       Min 505
eval/Rewards       Mean 3.10531   Std 0.752449  Max 4.74471   Min 0.982603
eval/Returns       Mean 1568.18   Std 0         Max 1568.18   Min 1568.18
eval/Actions       Mean 0.165311  Std 0.595707  Max 0.99815   Min -0.997234
eval/Num Paths 1   eval/Average Returns 1568.18   eval/normalized_score 48.8068
time/evaluation sampling (s) 0.87925   time/logging (s) 0.00225022   time/sampling batch (s) 0.267152   time/saving (s) 0.00291373   time/training (s) 4.19696   time/epoch (s) 5.34853   time/total (s) 35106.1
Epoch -103
----------------------------------  ---------------
2022-05-10 22:55:52.744859 PDT | [0] Epoch -102 finished
----------------------------------  ---------------
epoch -102   replay_buffer/size 999996   trainer/num train calls 899000   trainer/Policy Loss -2.12094
trainer/Log Pis                Mean 2.14151   Std 2.62356    Max 10.276      Min -6.06561
trainer/policy/mean            Mean 0.124738  Std 0.615377   Max 0.998581    Min -0.99787
trainer/policy/normal/std      Mean 0.374366  Std 0.179237   Max 0.934514    Min 0.0691734
trainer/policy/normal/log_std  Mean -1.12881  Std 0.582639   Max -0.0677282  Min -2.67114
eval/num steps total 593404   eval/num paths total 1145
eval/path length   Mean 500       Std 0         Max 500       Min 500
eval/Rewards       Mean 3.10273   Std 0.753553  Max 4.69705   Min 0.986633
eval/Returns       Mean 1551.36   Std 0         Max 1551.36   Min 1551.36
eval/Actions       Mean 0.151933  Std 0.597982  Max 0.997191  Min -0.996073
eval/Num Paths 1   eval/Average Returns 1551.36   eval/normalized_score 48.2901
time/evaluation sampling (s) 0.873459   time/logging (s) 0.00221114   time/sampling batch (s) 0.267668   time/saving (s) 0.0028884   time/training (s) 4.2003   time/epoch (s) 5.34652   time/total (s) 35111.5
Epoch -102
----------------------------------  ---------------
2022-05-10 22:55:58.144195 PDT | [0] Epoch -101 finished
----------------------------------  ---------------
epoch -101   replay_buffer/size 999996   trainer/num train calls 900000   trainer/Policy Loss -2.30918
trainer/Log Pis                Mean 2.23933   Std 2.57026    Max 15.3626     Min -5.98134
trainer/policy/mean            Mean 0.130414  Std 0.619866   Max 0.997981    Min -0.999249
trainer/policy/normal/std      Mean 0.380686  Std 0.183475   Max 1.00739     Min 0.0661988
trainer/policy/normal/log_std  Mean -1.11513  Std 0.590374   Max 0.00736415  Min -2.71509
eval/num steps total 593975   eval/num paths total 1146
eval/path length   Mean 571       Std 0         Max 571       Min 571
eval/Rewards       Mean 3.11247   Std 0.725812  Max 4.45223   Min 0.980302
eval/Returns       Mean 1777.22   Std 0         Max 1777.22   Min 1777.22
eval/Actions       Mean 0.154463  Std 0.605902  Max 0.998588  Min -0.997799
eval/Num Paths 1   eval/Average Returns 1777.22   eval/normalized_score 55.2297
time/evaluation sampling (s) 0.882044   time/logging (s) 0.00245056   time/sampling batch (s) 0.267751   time/saving (s) 0.00547198   time/training (s) 4.21742   time/epoch (s) 5.37514   time/total (s) 35116.9
Epoch -101
----------------------------------  ---------------
2022-05-10 22:56:03.541462 PDT | [0] Epoch -100 finished
----------------------------------  ---------------
epoch -100   replay_buffer/size 999996   trainer/num train calls 901000   trainer/Policy Loss -2.25735
trainer/Log Pis                Mean 2.37326   Std 2.59443    Max 9.91429     Min -4.7791
trainer/policy/mean            Mean 0.130004  Std 0.619607   Max 0.997996    Min -0.998406
trainer/policy/normal/std      Mean 0.381617  Std 0.184327   Max 0.958525    Min 0.069175
trainer/policy/normal/log_std  Mean -1.11371  Std 0.591485   Max -0.04236    Min -2.67112
eval/num steps total 594526   eval/num paths total 1147
eval/path length   Mean 551       Std 0         Max 551       Min 551
eval/Rewards       Mean 3.20387   Std 0.809018  Max 4.92343   Min 0.984595
eval/Returns       Mean 1765.33   Std 0         Max 1765.33   Min 1765.33
eval/Actions       Mean 0.146055  Std 0.576244  Max 0.998151  Min -0.998806
eval/Num Paths 1   eval/Average Returns 1765.33   eval/normalized_score 54.8645
time/evaluation sampling (s) 0.87484   time/logging (s) 0.0024242   time/sampling batch (s) 0.2686   time/saving (s) 0.00290105   time/training (s) 4.22407   time/epoch (s) 5.37284   time/total (s) 35122.2
Epoch -100
----------------------------------  ---------------
2022-05-10 22:56:08.899064 PDT | [0] Epoch -99 finished
----------------------------------  ---------------
epoch -99   replay_buffer/size 999996   trainer/num train calls 902000   trainer/Policy Loss -2.20112
trainer/Log Pis                Mean 2.23985   Std 2.6323     Max 14.063      Min -5.71613
trainer/policy/mean            Mean 0.1364    Std 0.62024    Max 0.995645    Min -0.999356
trainer/policy/normal/std      Mean 0.37868   Std 0.185104   Max 1.02055     Min 0.0642429
trainer/policy/normal/log_std  Mean -1.12274  Std 0.592626   Max 0.0203449   Min -2.74508
eval/num steps total 595069   eval/num paths total 1148
eval/path length   Mean 543       Std 0         Max 543       Min 543
eval/Rewards       Mean 3.21715   Std 0.811332  Max 5.17976   Min 0.982425
eval/Returns       Mean 1746.91   Std 0         Max 1746.91   Min 1746.91
eval/Actions       Mean 0.148348  Std 0.591395  Max 0.997573  Min -0.99912
eval/Num Paths 1   eval/Average Returns 1746.91   eval/normalized_score 54.2986
time/evaluation sampling (s) 0.874152   time/logging (s) 0.00241438   time/sampling batch (s) 0.266553   time/saving (s) 0.00299116   time/training (s) 4.18727   time/epoch (s) 5.33338   time/total (s) 35127.6
Epoch -99
----------------------------------  ---------------
2022-05-10 22:56:14.299213 PDT | [0] Epoch -98 finished
----------------------------------  ---------------
epoch -98   replay_buffer/size 999996   trainer/num train calls 903000   trainer/Policy Loss -2.21468
trainer/Log Pis                Mean 2.09627   Std 2.56508    Max 9.0342      Min -5.23792
trainer/policy/mean            Mean 0.119675  Std 0.613291   Max 0.996821    Min -0.998415
trainer/policy/normal/std      Mean 0.370148  Std 0.180749   Max 0.973115    Min 0.0667081
trainer/policy/normal/log_std  Mean -1.14687  Std 0.596688   Max -0.0272528  Min -2.70743
eval/num steps total 595586   eval/num paths total 1149
eval/path length   Mean 517       Std 0         Max 517       Min 517
eval/Rewards       Mean 3.18297   Std 0.804545  Max 5.28958   Min 0.988121
eval/Returns       Mean 1645.59   Std 0         Max 1645.59   Min 1645.59
eval/Actions       Mean 0.149884  Std 0.592201  Max 0.998258  Min -0.997095
eval/Num Paths 1   eval/Average Returns 1645.59   eval/normalized_score 51.1854
time/evaluation sampling (s) 0.878751   time/logging (s) 0.00234272   time/sampling batch (s) 0.27351   time/saving (s) 0.00295541   time/training (s) 4.2179   time/epoch (s) 5.37546   time/total (s) 35133
Epoch -98
----------------------------------  ---------------
2022-05-10 22:56:19.696925 PDT | [0] Epoch -97 finished
----------------------------------  ---------------
epoch -97   replay_buffer/size 999996   trainer/num train calls 904000   trainer/Policy Loss -2.3881
trainer/Log Pis                Mean 2.26725   Std 2.68178    Max 9.17098     Min -4.40245
trainer/policy/mean            Mean 0.133647  Std 0.625355   Max 0.998847    Min -0.99777
trainer/policy/normal/std      Mean 0.371828  Std 0.179937   Max 1.02735     Min 0.0730252
trainer/policy/normal/log_std  Mean -1.13669  Std 0.583005   Max 0.0269858   Min -2.61695
eval/num steps total 596181   eval/num paths total 1150
eval/path length   Mean 595       Std 0         Max 595       Min 595
eval/Rewards       Mean 3.12825   Std 0.75127   Max 4.96614   Min 0.983163
eval/Returns       Mean 1861.31   Std 0         Max 1861.31   Min 1861.31
eval/Actions       Mean 0.161771  Std 0.591636  Max 0.998596  Min -0.997476
eval/Num Paths 1   eval/Average Returns 1861.31   eval/normalized_score 57.8135
time/evaluation sampling (s) 0.883006   time/logging (s) 0.00254018   time/sampling batch (s) 0.269154   time/saving (s) 0.00300467   time/training (s) 4.21578   time/epoch (s) 5.37349   time/total (s) 35138.3
Epoch -97
----------------------------------  ---------------
2022-05-10 22:56:25.087897 PDT | [0] Epoch -96 finished
----------------------------------  ---------------
epoch -96   replay_buffer/size 999996   trainer/num train calls 905000   trainer/Policy Loss -2.23721
trainer/Log Pis                Mean 2.32966   Std 2.43465    Max 10.6739     Min -4.23584
trainer/policy/mean            Mean 0.107968  Std 0.623646   Max 0.996883    Min -0.997702
trainer/policy/normal/std      Mean 0.376203  Std 0.183513   Max 0.975662    Min 0.0695334
trainer/policy/normal/log_std  Mean -1.12717  Std 0.587224   Max -0.0246389  Min -2.66595
eval/num steps total 597166   eval/num paths total 1152
eval/path length   Mean 492.5     Std 87.5      Max 580       Min 405
eval/Rewards       Mean 3.18646   Std 0.76485   Max 4.82204   Min 0.982196
eval/Returns       Mean 1569.33   Std 311.107   Max 1880.44   Min 1258.23
eval/Actions       Mean 0.149622  Std 0.598079  Max 0.998192  Min -0.999067
eval/Num Paths 2   eval/Average Returns 1569.33   eval/normalized_score 48.8422
time/evaluation sampling (s) 0.939448   time/logging (s) 0.00390657   time/sampling batch (s) 0.263445   time/saving (s) 0.0031239   time/training (s) 4.15805   time/epoch (s) 5.36798   time/total (s) 35143.7
Epoch -96
----------------------------------  ---------------
2022-05-10 22:56:30.414532 PDT | [0] Epoch -95 finished
----------------------------------  ---------------
epoch -95   replay_buffer/size 999996   trainer/num train calls 906000   trainer/Policy Loss -2.209
trainer/Log Pis                Mean 2.21661   Std 2.66522    Max 13.9144     Min -5.36341
trainer/policy/mean            Mean 0.151064  Std 0.61802    Max 0.998777    Min -0.998849
trainer/policy/normal/std      Mean 0.38209   Std 0.184962   Max 0.978694    Min 0.0674659
trainer/policy/normal/log_std  Mean -1.11259  Std 0.591696   Max -0.0215365  Min -2.69613
eval/num steps total 597821   eval/num paths total 1153
eval/path length   Mean 655       Std 0         Max 655       Min 655
eval/Rewards       Mean 3.15132   Std 0.68465   Max 4.67355   Min 0.98758
eval/Returns       Mean 2064.11   Std 0         Max 2064.11   Min 2064.11
eval/Actions       Mean 0.150485  Std 0.604606  Max 0.99772   Min -0.997942
eval/Num Paths 1   eval/Average Returns 2064.11   eval/normalized_score 64.0448
time/evaluation sampling (s) 0.87452   time/logging (s) 0.00283278   time/sampling batch (s) 0.262901   time/saving (s) 0.00316161   time/training (s) 4.15777   time/epoch (s) 5.30119   time/total (s) 35149
Epoch -95
----------------------------------  ---------------
2022-05-10 22:56:35.746831 PDT | [0] Epoch -94 finished
----------------------------------  ---------------
epoch -94   replay_buffer/size 999996   trainer/num train calls 907000   trainer/Policy Loss -2.07976
trainer/Log Pis Mean
2.23949 trainer/Log Pis Std 2.57259 trainer/Log Pis Max 10.474 trainer/Log Pis Min -5.9308 trainer/policy/mean Mean 0.131042 trainer/policy/mean Std 0.6147 trainer/policy/mean Max 0.998086 trainer/policy/mean Min -0.998155 trainer/policy/normal/std Mean 0.36792 trainer/policy/normal/std Std 0.175751 trainer/policy/normal/std Max 0.971845 trainer/policy/normal/std Min 0.0645813 trainer/policy/normal/log_std Mean -1.14702 trainer/policy/normal/log_std Std 0.585343 trainer/policy/normal/log_std Max -0.0285585 trainer/policy/normal/log_std Min -2.73983 eval/num steps total 598408 eval/num paths total 1154 eval/path length Mean 587 eval/path length Std 0 eval/path length Max 587 eval/path length Min 587 eval/Rewards Mean 3.21681 eval/Rewards Std 0.751654 eval/Rewards Max 4.95922 eval/Rewards Min 0.990145 eval/Returns Mean 1888.27 eval/Returns Std 0 eval/Returns Max 1888.27 eval/Returns Min 1888.27 eval/Actions Mean 0.166165 eval/Actions Std 0.602663 eval/Actions Max 0.998718 eval/Actions Min -0.998676 eval/Num Paths 1 eval/Average Returns 1888.27 eval/normalized_score 58.6419 time/evaluation sampling (s) 0.890593 time/logging (s) 0.00260941 time/sampling batch (s) 0.260683 time/saving (s) 0.00300883 time/training (s) 4.15055 time/epoch (s) 5.30745 time/total (s) 35154.3 Epoch -94 ---------------------------------- --------------- 2022-05-10 22:56:41.095519 PDT | [0] Epoch -93 finished ---------------------------------- --------------- epoch -93 replay_buffer/size 999996 trainer/num train calls 908000 trainer/Policy Loss -2.10709 trainer/Log Pis Mean 2.17384 trainer/Log Pis Std 2.53679 trainer/Log Pis Max 9.8092 trainer/Log Pis Min -7.92718 trainer/policy/mean Mean 0.174204 trainer/policy/mean Std 0.5987 trainer/policy/mean Max 0.997675 trainer/policy/mean Min -0.998077 trainer/policy/normal/std Mean 0.373412 trainer/policy/normal/std Std 0.180683 trainer/policy/normal/std Max 0.957647 trainer/policy/normal/std Min 0.0632493 trainer/policy/normal/log_std Mean -1.13696 
trainer/policy/normal/log_std Std 0.595944 trainer/policy/normal/log_std Max -0.0432763 trainer/policy/normal/log_std Min -2.76067 eval/num steps total 598924 eval/num paths total 1155 eval/path length Mean 516 eval/path length Std 0 eval/path length Max 516 eval/path length Min 516 eval/Rewards Mean 3.20247 eval/Rewards Std 0.846966 eval/Rewards Max 5.57481 eval/Rewards Min 0.987525 eval/Returns Mean 1652.48 eval/Returns Std 0 eval/Returns Max 1652.48 eval/Returns Min 1652.48 eval/Actions Mean 0.147177 eval/Actions Std 0.592894 eval/Actions Max 0.997007 eval/Actions Min -0.998214 eval/Num Paths 1 eval/Average Returns 1652.48 eval/normalized_score 51.3969 time/evaluation sampling (s) 0.896897 time/logging (s) 0.00234309 time/sampling batch (s) 0.261481 time/saving (s) 0.00303441 time/training (s) 4.16039 time/epoch (s) 5.32414 time/total (s) 35159.6 Epoch -93 ---------------------------------- --------------- 2022-05-10 22:56:46.440977 PDT | [0] Epoch -92 finished ---------------------------------- --------------- epoch -92 replay_buffer/size 999996 trainer/num train calls 909000 trainer/Policy Loss -2.17379 trainer/Log Pis Mean 2.46718 trainer/Log Pis Std 2.48337 trainer/Log Pis Max 8.69324 trainer/Log Pis Min -7.13901 trainer/policy/mean Mean 0.128498 trainer/policy/mean Std 0.621746 trainer/policy/mean Max 0.997269 trainer/policy/mean Min -0.997757 trainer/policy/normal/std Mean 0.372406 trainer/policy/normal/std Std 0.180347 trainer/policy/normal/std Max 0.988328 trainer/policy/normal/std Min 0.0671018 trainer/policy/normal/log_std Mean -1.13357 trainer/policy/normal/log_std Std 0.577838 trainer/policy/normal/log_std Max -0.011741 trainer/policy/normal/log_std Min -2.70154 eval/num steps total 599446 eval/num paths total 1156 eval/path length Mean 522 eval/path length Std 0 eval/path length Max 522 eval/path length Min 522 eval/Rewards Mean 3.1184 eval/Rewards Std 0.793822 eval/Rewards Max 5.30296 eval/Rewards Min 0.981069 eval/Returns Mean 1627.8 eval/Returns 
Std 0 eval/Returns Max 1627.8 eval/Returns Min 1627.8 eval/Actions Mean 0.161263 eval/Actions Std 0.591168 eval/Actions Max 0.997659 eval/Actions Min -0.996914 eval/Num Paths 1 eval/Average Returns 1627.8 eval/normalized_score 50.6388 time/evaluation sampling (s) 0.900975 time/logging (s) 0.0023857 time/sampling batch (s) 0.261634 time/saving (s) 0.00302175 time/training (s) 4.1534 time/epoch (s) 5.32141 time/total (s) 35165 Epoch -92 ---------------------------------- --------------- 2022-05-10 22:56:51.762125 PDT | [0] Epoch -91 finished ---------------------------------- --------------- epoch -91 replay_buffer/size 999996 trainer/num train calls 910000 trainer/Policy Loss -2.27482 trainer/Log Pis Mean 2.32846 trainer/Log Pis Std 2.55666 trainer/Log Pis Max 14.289 trainer/Log Pis Min -3.95382 trainer/policy/mean Mean 0.157023 trainer/policy/mean Std 0.612381 trainer/policy/mean Max 0.997153 trainer/policy/mean Min -0.998131 trainer/policy/normal/std Mean 0.381202 trainer/policy/normal/std Std 0.183845 trainer/policy/normal/std Max 0.975694 trainer/policy/normal/std Min 0.0662366 trainer/policy/normal/log_std Mean -1.11614 trainer/policy/normal/log_std Std 0.596543 trainer/policy/normal/log_std Max -0.0246064 trainer/policy/normal/log_std Min -2.71452 eval/num steps total 600016 eval/num paths total 1157 eval/path length Mean 570 eval/path length Std 0 eval/path length Max 570 eval/path length Min 570 eval/Rewards Mean 3.17437 eval/Rewards Std 0.765564 eval/Rewards Max 4.90017 eval/Rewards Min 0.980307 eval/Returns Mean 1809.39 eval/Returns Std 0 eval/Returns Max 1809.39 eval/Returns Min 1809.39 eval/Actions Mean 0.157784 eval/Actions Std 0.598263 eval/Actions Max 0.998319 eval/Actions Min -0.998166 eval/Num Paths 1 eval/Average Returns 1809.39 eval/normalized_score 56.2183 time/evaluation sampling (s) 0.882039 time/logging (s) 0.00245716 time/sampling batch (s) 0.26274 time/saving (s) 0.00296648 time/training (s) 4.14662 time/epoch (s) 5.29682 time/total (s) 
35170.3 Epoch -91 ---------------------------------- --------------- 2022-05-10 22:56:57.109518 PDT | [0] Epoch -90 finished ---------------------------------- --------------- epoch -90 replay_buffer/size 999996 trainer/num train calls 911000 trainer/Policy Loss -2.32792 trainer/Log Pis Mean 2.41158 trainer/Log Pis Std 2.62919 trainer/Log Pis Max 15.3698 trainer/Log Pis Min -5.34796 trainer/policy/mean Mean 0.136939 trainer/policy/mean Std 0.624917 trainer/policy/mean Max 0.996818 trainer/policy/mean Min -0.999206 trainer/policy/normal/std Mean 0.382149 trainer/policy/normal/std Std 0.18975 trainer/policy/normal/std Max 1.09893 trainer/policy/normal/std Min 0.0664995 trainer/policy/normal/log_std Mean -1.1196 trainer/policy/normal/log_std Std 0.606044 trainer/policy/normal/log_std Max 0.094341 trainer/policy/normal/log_std Min -2.71056 eval/num steps total 600586 eval/num paths total 1158 eval/path length Mean 570 eval/path length Std 0 eval/path length Max 570 eval/path length Min 570 eval/Rewards Mean 3.10768 eval/Rewards Std 0.708385 eval/Rewards Max 4.45444 eval/Rewards Min 0.987395 eval/Returns Mean 1771.38 eval/Returns Std 0 eval/Returns Max 1771.38 eval/Returns Min 1771.38 eval/Actions Mean 0.150925 eval/Actions Std 0.608442 eval/Actions Max 0.998099 eval/Actions Min -0.997881 eval/Num Paths 1 eval/Average Returns 1771.38 eval/normalized_score 55.0503 time/evaluation sampling (s) 0.877947 time/logging (s) 0.00246593 time/sampling batch (s) 0.264457 time/saving (s) 0.00286968 time/training (s) 4.17545 time/epoch (s) 5.32319 time/total (s) 35175.6 Epoch -90 ---------------------------------- --------------- 2022-05-10 22:57:02.410392 PDT | [0] Epoch -89 finished ---------------------------------- --------------- epoch -89 replay_buffer/size 999996 trainer/num train calls 912000 trainer/Policy Loss -2.35345 trainer/Log Pis Mean 2.13685 trainer/Log Pis Std 2.63312 trainer/Log Pis Max 10.028 trainer/Log Pis Min -5.41783 trainer/policy/mean Mean 0.185042 
trainer/policy/mean Std 0.611588 trainer/policy/mean Max 0.998119 trainer/policy/mean Min -0.997722 trainer/policy/normal/std Mean 0.374927 trainer/policy/normal/std Std 0.179086 trainer/policy/normal/std Max 1.0052 trainer/policy/normal/std Min 0.0688196 trainer/policy/normal/log_std Mean -1.12455 trainer/policy/normal/log_std Std 0.575322 trainer/policy/normal/log_std Max 0.00518323 trainer/policy/normal/log_std Min -2.67627 eval/num steps total 601134 eval/num paths total 1159 eval/path length Mean 548 eval/path length Std 0 eval/path length Max 548 eval/path length Min 548 eval/Rewards Mean 3.23192 eval/Rewards Std 0.839035 eval/Rewards Max 5.22251 eval/Rewards Min 0.987807 eval/Returns Mean 1771.09 eval/Returns Std 0 eval/Returns Max 1771.09 eval/Returns Min 1771.09 eval/Actions Mean 0.153218 eval/Actions Std 0.582125 eval/Actions Max 0.998431 eval/Actions Min -0.998334 eval/Num Paths 1 eval/Average Returns 1771.09 eval/normalized_score 55.0415 time/evaluation sampling (s) 0.864243 time/logging (s) 0.00247408 time/sampling batch (s) 0.26031 time/saving (s) 0.00285709 time/training (s) 4.1473 time/epoch (s) 5.27719 time/total (s) 35180.9 Epoch -89 ---------------------------------- --------------- 2022-05-10 22:57:07.706968 PDT | [0] Epoch -88 finished ---------------------------------- --------------- epoch -88 replay_buffer/size 999996 trainer/num train calls 913000 trainer/Policy Loss -2.4068 trainer/Log Pis Mean 2.35111 trainer/Log Pis Std 2.66327 trainer/Log Pis Max 11.5192 trainer/Log Pis Min -3.82253 trainer/policy/mean Mean 0.134014 trainer/policy/mean Std 0.626895 trainer/policy/mean Max 0.997819 trainer/policy/mean Min -0.998622 trainer/policy/normal/std Mean 0.367608 trainer/policy/normal/std Std 0.17884 trainer/policy/normal/std Max 1.00665 trainer/policy/normal/std Min 0.0668604 trainer/policy/normal/log_std Mean -1.15041 trainer/policy/normal/log_std Std 0.588211 trainer/policy/normal/log_std Max 0.00662737 trainer/policy/normal/log_std Min 
-2.70515 eval/num steps total 601988 eval/num paths total 1161 eval/path length Mean 427 eval/path length Std 26 eval/path length Max 453 eval/path length Min 401 eval/Rewards Mean 3.10881 eval/Rewards Std 0.899062 eval/Rewards Max 5.17373 eval/Rewards Min 0.985632 eval/Returns Mean 1327.46 eval/Returns Std 94.6615 eval/Returns Max 1422.12 eval/Returns Min 1232.8 eval/Actions Mean 0.146858 eval/Actions Std 0.574922 eval/Actions Max 0.998323 eval/Actions Min -0.998561 eval/Num Paths 2 eval/Average Returns 1327.46 eval/normalized_score 41.4105 time/evaluation sampling (s) 0.872787 time/logging (s) 0.00314155 time/sampling batch (s) 0.26082 time/saving (s) 0.00286586 time/training (s) 4.13363 time/epoch (s) 5.27324 time/total (s) 35186.1 Epoch -88 ---------------------------------- --------------- 2022-05-10 22:57:13.025836 PDT | [0] Epoch -87 finished ---------------------------------- --------------- epoch -87 replay_buffer/size 999996 trainer/num train calls 914000 trainer/Policy Loss -2.27741 trainer/Log Pis Mean 2.39527 trainer/Log Pis Std 2.60747 trainer/Log Pis Max 12.1527 trainer/Log Pis Min -8.51544 trainer/policy/mean Mean 0.181046 trainer/policy/mean Std 0.612938 trainer/policy/mean Max 0.997286 trainer/policy/mean Min -0.998346 trainer/policy/normal/std Mean 0.371928 trainer/policy/normal/std Std 0.177699 trainer/policy/normal/std Max 0.894465 trainer/policy/normal/std Min 0.0650235 trainer/policy/normal/log_std Mean -1.13846 trainer/policy/normal/log_std Std 0.592478 trainer/policy/normal/log_std Max -0.111529 trainer/policy/normal/log_std Min -2.73301 eval/num steps total 602485 eval/num paths total 1162 eval/path length Mean 497 eval/path length Std 0 eval/path length Max 497 eval/path length Min 497 eval/Rewards Mean 3.06032 eval/Rewards Std 0.775573 eval/Rewards Max 4.75222 eval/Rewards Min 0.990248 eval/Returns Mean 1520.98 eval/Returns Std 0 eval/Returns Max 1520.98 eval/Returns Min 1520.98 eval/Actions Mean 0.160102 eval/Actions Std 0.580851 
eval/Actions Max 0.997875 eval/Actions Min -0.997878 eval/Num Paths 1 eval/Average Returns 1520.98 eval/normalized_score 47.3565 time/evaluation sampling (s) 0.879312 time/logging (s) 0.00214394 time/sampling batch (s) 0.263633 time/saving (s) 0.00282715 time/training (s) 4.14619 time/epoch (s) 5.29411 time/total (s) 35191.4 Epoch -87 ---------------------------------- --------------- 2022-05-10 22:57:18.329068 PDT | [0] Epoch -86 finished ---------------------------------- --------------- epoch -86 replay_buffer/size 999996 trainer/num train calls 915000 trainer/Policy Loss -2.4148 trainer/Log Pis Mean 2.51082 trainer/Log Pis Std 2.79564 trainer/Log Pis Max 18.9702 trainer/Log Pis Min -6.30885 trainer/policy/mean Mean 0.0954808 trainer/policy/mean Std 0.633357 trainer/policy/mean Max 0.996427 trainer/policy/mean Min -0.999866 trainer/policy/normal/std Mean 0.376094 trainer/policy/normal/std Std 0.18781 trainer/policy/normal/std Max 0.996888 trainer/policy/normal/std Min 0.0650517 trainer/policy/normal/log_std Mean -1.13685 trainer/policy/normal/log_std Std 0.607733 trainer/policy/normal/log_std Max -0.00311699 trainer/policy/normal/log_std Min -2.73257 eval/num steps total 602990 eval/num paths total 1163 eval/path length Mean 505 eval/path length Std 0 eval/path length Max 505 eval/path length Min 505 eval/Rewards Mean 3.14535 eval/Rewards Std 0.779528 eval/Rewards Max 4.87221 eval/Rewards Min 0.984701 eval/Returns Mean 1588.4 eval/Returns Std 0 eval/Returns Max 1588.4 eval/Returns Min 1588.4 eval/Actions Mean 0.155928 eval/Actions Std 0.596158 eval/Actions Max 0.998823 eval/Actions Min -0.996338 eval/Num Paths 1 eval/Average Returns 1588.4 eval/normalized_score 49.4282 time/evaluation sampling (s) 0.871667 time/logging (s) 0.00222143 time/sampling batch (s) 0.260422 time/saving (s) 0.00285958 time/training (s) 4.14215 time/epoch (s) 5.27933 time/total (s) 35196.7 Epoch -86 ---------------------------------- --------------- 2022-05-10 22:57:23.653824 PDT | [0] 
Epoch -85 finished ---------------------------------- --------------- epoch -85 replay_buffer/size 999996 trainer/num train calls 916000 trainer/Policy Loss -2.19118 trainer/Log Pis Mean 2.2186 trainer/Log Pis Std 2.6021 trainer/Log Pis Max 13.4826 trainer/Log Pis Min -5.95865 trainer/policy/mean Mean 0.143481 trainer/policy/mean Std 0.612708 trainer/policy/mean Max 0.998019 trainer/policy/mean Min -0.998898 trainer/policy/normal/std Mean 0.380081 trainer/policy/normal/std Std 0.186833 trainer/policy/normal/std Max 0.984312 trainer/policy/normal/std Min 0.0605508 trainer/policy/normal/log_std Mean -1.12445 trainer/policy/normal/log_std Std 0.606898 trainer/policy/normal/log_std Max -0.0158125 trainer/policy/normal/log_std Min -2.80427 eval/num steps total 603947 eval/num paths total 1165 eval/path length Mean 478.5 eval/path length Std 2.5 eval/path length Max 481 eval/path length Min 476 eval/Rewards Mean 3.18897 eval/Rewards Std 0.846059 eval/Rewards Max 4.77725 eval/Rewards Min 0.979919 eval/Returns Mean 1525.92 eval/Returns Std 11.2341 eval/Returns Max 1537.16 eval/Returns Min 1514.69 eval/Actions Mean 0.148722 eval/Actions Std 0.602156 eval/Actions Max 0.997168 eval/Actions Min -0.998815 eval/Num Paths 2 eval/Average Returns 1525.92 eval/normalized_score 47.5084 time/evaluation sampling (s) 0.873133 time/logging (s) 0.00350482 time/sampling batch (s) 0.262488 time/saving (s) 0.00288225 time/training (s) 4.16031 time/epoch (s) 5.30232 time/total (s) 35202 Epoch -85 ---------------------------------- --------------- 2022-05-10 22:57:28.946334 PDT | [0] Epoch -84 finished ---------------------------------- --------------- epoch -84 replay_buffer/size 999996 trainer/num train calls 917000 trainer/Policy Loss -2.28945 trainer/Log Pis Mean 2.18552 trainer/Log Pis Std 2.49732 trainer/Log Pis Max 9.57999 trainer/Log Pis Min -5.56717 trainer/policy/mean Mean 0.133669 trainer/policy/mean Std 0.607012 trainer/policy/mean Max 0.99762 trainer/policy/mean Min -0.998666 
trainer/policy/normal/std Mean 0.367531 trainer/policy/normal/std Std 0.178174 trainer/policy/normal/std Max 0.910307 trainer/policy/normal/std Min 0.0657188 trainer/policy/normal/log_std Mean -1.15275 trainer/policy/normal/log_std Std 0.595117 trainer/policy/normal/log_std Max -0.0939738 trainer/policy/normal/log_std Min -2.72237 eval/num steps total 604426 eval/num paths total 1166 eval/path length Mean 479 eval/path length Std 0 eval/path length Max 479 eval/path length Min 479 eval/Rewards Mean 3.10485 eval/Rewards Std 0.825511 eval/Rewards Max 4.82904 eval/Rewards Min 0.985897 eval/Returns Mean 1487.22 eval/Returns Std 0 eval/Returns Max 1487.22 eval/Returns Min 1487.22 eval/Actions Mean 0.146373 eval/Actions Std 0.576661 eval/Actions Max 0.998219 eval/Actions Min -0.998489 eval/Num Paths 1 eval/Average Returns 1487.22 eval/normalized_score 46.3193 time/evaluation sampling (s) 0.873156 time/logging (s) 0.00210145 time/sampling batch (s) 0.260984 time/saving (s) 0.0029011 time/training (s) 4.12811 time/epoch (s) 5.26726 time/total (s) 35207.3 Epoch -84 ---------------------------------- --------------- 2022-05-10 22:57:34.265089 PDT | [0] Epoch -83 finished ---------------------------------- --------------- epoch -83 replay_buffer/size 999996 trainer/num train calls 918000 trainer/Policy Loss -2.26919 trainer/Log Pis Mean 2.36517 trainer/Log Pis Std 2.45232 trainer/Log Pis Max 9.62964 trainer/Log Pis Min -5.3155 trainer/policy/mean Mean 0.137982 trainer/policy/mean Std 0.61961 trainer/policy/mean Max 0.997209 trainer/policy/mean Min -0.99785 trainer/policy/normal/std Mean 0.372286 trainer/policy/normal/std Std 0.184678 trainer/policy/normal/std Max 0.979367 trainer/policy/normal/std Min 0.0572626 trainer/policy/normal/log_std Mean -1.14496 trainer/policy/normal/log_std Std 0.603178 trainer/policy/normal/log_std Max -0.0208484 trainer/policy/normal/log_std Min -2.86011 eval/num steps total 605414 eval/num paths total 1168 eval/path length Mean 494 eval/path 
length Std 4 eval/path length Max 498 eval/path length Min 490 eval/Rewards Mean 3.05754 eval/Rewards Std 0.77406 eval/Rewards Max 4.70681 eval/Rewards Min 0.983465 eval/Returns Mean 1510.42 eval/Returns Std 21.9306 eval/Returns Max 1532.35 eval/Returns Min 1488.49 eval/Actions Mean 0.152652 eval/Actions Std 0.581809 eval/Actions Max 0.997009 eval/Actions Min -0.997648 eval/Num Paths 2 eval/Average Returns 1510.42 eval/normalized_score 47.0321 time/evaluation sampling (s) 0.88162 time/logging (s) 0.00358067 time/sampling batch (s) 0.262379 time/saving (s) 0.00286973 time/training (s) 4.1459 time/epoch (s) 5.29635 time/total (s) 35212.6 Epoch -83 ---------------------------------- --------------- 2022-05-10 22:57:39.591595 PDT | [0] Epoch -82 finished ---------------------------------- --------------- epoch -82 replay_buffer/size 999996 trainer/num train calls 919000 trainer/Policy Loss -2.43092 trainer/Log Pis Mean 2.48362 trainer/Log Pis Std 2.6721 trainer/Log Pis Max 11.3347 trainer/Log Pis Min -3.81099 trainer/policy/mean Mean 0.129246 trainer/policy/mean Std 0.628401 trainer/policy/mean Max 0.997342 trainer/policy/mean Min -0.996971 trainer/policy/normal/std Mean 0.37689 trainer/policy/normal/std Std 0.186021 trainer/policy/normal/std Max 0.947762 trainer/policy/normal/std Min 0.0642111 trainer/policy/normal/log_std Mean -1.1338 trainer/policy/normal/log_std Std 0.608269 trainer/policy/normal/log_std Max -0.0536522 trainer/policy/normal/log_std Min -2.74558 eval/num steps total 605952 eval/num paths total 1169 eval/path length Mean 538 eval/path length Std 0 eval/path length Max 538 eval/path length Min 538 eval/Rewards Mean 3.18687 eval/Rewards Std 0.858849 eval/Rewards Max 5.46849 eval/Rewards Min 0.982483 eval/Returns Mean 1714.53 eval/Returns Std 0 eval/Returns Max 1714.53 eval/Returns Min 1714.53 eval/Actions Mean 0.161865 eval/Actions Std 0.58652 eval/Actions Max 0.99846 eval/Actions Min -0.997851 eval/Num Paths 1 eval/Average Returns 1714.53 
eval/normalized_score 53.3037 time/evaluation sampling (s) 0.870903 time/logging (s) 0.00240031 time/sampling batch (s) 0.263023 time/saving (s) 0.00299613 time/training (s) 4.16203 time/epoch (s) 5.30135 time/total (s) 35217.9 Epoch -82 ---------------------------------- --------------- 2022-05-10 22:57:44.920729 PDT | [0] Epoch -81 finished ---------------------------------- --------------- epoch -81 replay_buffer/size 999996 trainer/num train calls 920000 trainer/Policy Loss -2.19703 trainer/Log Pis Mean 2.25363 trainer/Log Pis Std 2.66707 trainer/Log Pis Max 9.57584 trainer/Log Pis Min -4.69118 trainer/policy/mean Mean 0.156607 trainer/policy/mean Std 0.602492 trainer/policy/mean Max 0.99674 trainer/policy/mean Min -0.997984 trainer/policy/normal/std Mean 0.373342 trainer/policy/normal/std Std 0.185847 trainer/policy/normal/std Max 0.982901 trainer/policy/normal/std Min 0.0661005 trainer/policy/normal/log_std Mean -1.14215 trainer/policy/normal/log_std Std 0.602195 trainer/policy/normal/log_std Max -0.0172468 trainer/policy/normal/log_std Min -2.71658 eval/num steps total 606542 eval/num paths total 1170 eval/path length Mean 590 eval/path length Std 0 eval/path length Max 590 eval/path length Min 590 eval/Rewards Mean 3.15104 eval/Rewards Std 0.724121 eval/Rewards Max 4.75196 eval/Rewards Min 0.983142 eval/Returns Mean 1859.11 eval/Returns Std 0 eval/Returns Max 1859.11 eval/Returns Min 1859.11 eval/Actions Mean 0.163912 eval/Actions Std 0.604255 eval/Actions Max 0.998742 eval/Actions Min -0.998597 eval/Num Paths 1 eval/Average Returns 1859.11 eval/normalized_score 57.746 time/evaluation sampling (s) 0.875668 time/logging (s) 0.0024784 time/sampling batch (s) 0.26233 time/saving (s) 0.00287739 time/training (s) 4.16166 time/epoch (s) 5.30501 time/total (s) 35223.2 Epoch -81 ---------------------------------- --------------- 2022-05-10 22:57:50.200768 PDT | [0] Epoch -80 finished ---------------------------------- --------------- epoch -80 replay_buffer/size 
999996 trainer/num train calls 921000 trainer/Policy Loss -2.47271 trainer/Log Pis Mean 2.43724 trainer/Log Pis Std 2.4541 trainer/Log Pis Max 10.0391 trainer/Log Pis Min -3.65767 trainer/policy/mean Mean 0.162977 trainer/policy/mean Std 0.622172 trainer/policy/mean Max 0.998541 trainer/policy/mean Min -0.997047 trainer/policy/normal/std Mean 0.371441 trainer/policy/normal/std Std 0.178831 trainer/policy/normal/std Max 0.96651 trainer/policy/normal/std Min 0.0661343 trainer/policy/normal/log_std Mean -1.13868 trainer/policy/normal/log_std Std 0.58687 trainer/policy/normal/log_std Max -0.0340641 trainer/policy/normal/log_std Min -2.71607 eval/num steps total 607192 eval/num paths total 1171 eval/path length Mean 650 eval/path length Std 0 eval/path length Max 650 eval/path length Min 650 eval/Rewards Mean 3.18426 eval/Rewards Std 0.743947 eval/Rewards Max 4.83781 eval/Rewards Min 0.986358 eval/Returns Mean 2069.77 eval/Returns Std 0 eval/Returns Max 2069.77 eval/Returns Min 2069.77 eval/Actions Mean 0.153961 eval/Actions Std 0.600406 eval/Actions Max 0.998411 eval/Actions Min -0.998729 eval/Num Paths 1 eval/Average Returns 2069.77 eval/normalized_score 64.2185 time/evaluation sampling (s) 0.879693 time/logging (s) 0.00254779 time/sampling batch (s) 0.259813 time/saving (s) 0.00284111 time/training (s) 4.11095 time/epoch (s) 5.25585 time/total (s) 35228.5 Epoch -80 ---------------------------------- --------------- 2022-05-10 22:57:55.490423 PDT | [0] Epoch -79 finished ---------------------------------- --------------- epoch -79 replay_buffer/size 999996 trainer/num train calls 922000 trainer/Policy Loss -2.16605 trainer/Log Pis Mean 2.22359 trainer/Log Pis Std 2.60273 trainer/Log Pis Max 14.8567 trainer/Log Pis Min -5.34124 trainer/policy/mean Mean 0.139435 trainer/policy/mean Std 0.608201 trainer/policy/mean Max 0.998352 trainer/policy/mean Min -0.99946 trainer/policy/normal/std Mean 0.372965 trainer/policy/normal/std Std 0.177153 trainer/policy/normal/std Max 
0.94915 trainer/policy/normal/std Min 0.0642991 trainer/policy/normal/log_std Mean -1.12947 trainer/policy/normal/log_std Std 0.575624 trainer/policy/normal/log_std Max -0.0521886 trainer/policy/normal/log_std Min -2.74421 eval/num steps total 608167 eval/num paths total 1173 eval/path length Mean 487.5 eval/path length Std 43.5 eval/path length Max 531 eval/path length Min 444 eval/Rewards Mean 3.2091 eval/Rewards Std 0.878028 eval/Rewards Max 5.5491 eval/Rewards Min 0.979617 eval/Returns Mean 1564.43 eval/Returns Std 149.944 eval/Returns Max 1714.38 eval/Returns Min 1414.49 eval/Actions Mean 0.145924 eval/Actions Std 0.583488 eval/Actions Max 0.998573 eval/Actions Min -0.999313 eval/Num Paths 2 eval/Average Returns 1564.43 eval/normalized_score 48.6917 time/evaluation sampling (s) 0.888387 time/logging (s) 0.00360734 time/sampling batch (s) 0.260887 time/saving (s) 0.00291213 time/training (s) 4.11082 time/epoch (s) 5.26661 time/total (s) 35233.7 Epoch -79 ---------------------------------- --------------- 2022-05-10 22:58:00.901639 PDT | [0] Epoch -78 finished ---------------------------------- --------------- epoch -78 replay_buffer/size 999996 trainer/num train calls 923000 trainer/Policy Loss -2.41697 trainer/Log Pis Mean 2.27536 trainer/Log Pis Std 2.68409 trainer/Log Pis Max 11.8726 trainer/Log Pis Min -4.94983 trainer/policy/mean Mean 0.113643 trainer/policy/mean Std 0.627153 trainer/policy/mean Max 0.996341 trainer/policy/mean Min -0.998601 trainer/policy/normal/std Mean 0.376399 trainer/policy/normal/std Std 0.186187 trainer/policy/normal/std Max 1.06769 trainer/policy/normal/std Min 0.0674846 trainer/policy/normal/log_std Mean -1.13292 trainer/policy/normal/log_std Std 0.601375 trainer/policy/normal/log_std Max 0.0654979 trainer/policy/normal/log_std Min -2.69586 eval/num steps total 608857 eval/num paths total 1174 eval/path length Mean 690 eval/path length Std 0 eval/path length Max 690 eval/path length Min 690 eval/Rewards Mean 3.22861 eval/Rewards 
Std 0.765111 eval/Rewards Max 5.54251 eval/Rewards Min 0.984523 eval/Returns Mean 2227.74 eval/Returns Std 0 eval/Returns Max 2227.74 eval/Returns Min 2227.74 eval/Actions Mean 0.158817 eval/Actions Std 0.600099 eval/Actions Max 0.996664 eval/Actions Min -0.999192 eval/Num Paths 1 eval/Average Returns 2227.74 eval/normalized_score 69.0725 time/evaluation sampling (s) 0.917452 time/logging (s) 0.00297547 time/sampling batch (s) 0.266443 time/saving (s) 0.00336848 time/training (s) 4.19612 time/epoch (s) 5.38636 time/total (s) 35239.1 Epoch -78 ---------------------------------- --------------- 2022-05-10 22:58:06.311928 PDT | [0] Epoch -77 finished ---------------------------------- --------------- epoch -77 replay_buffer/size 999996 trainer/num train calls 924000 trainer/Policy Loss -2.38765 trainer/Log Pis Mean 2.33102 trainer/Log Pis Std 2.61538 trainer/Log Pis Max 10.1573 trainer/Log Pis Min -5.17794 trainer/policy/mean Mean 0.0979071 trainer/policy/mean Std 0.629266 trainer/policy/mean Max 0.996521 trainer/policy/mean Min -0.997111 trainer/policy/normal/std Mean 0.367775 trainer/policy/normal/std Std 0.175009 trainer/policy/normal/std Max 1.0026 trainer/policy/normal/std Min 0.0673372 trainer/policy/normal/log_std Mean -1.1449 trainer/policy/normal/log_std Std 0.579277 trainer/policy/normal/log_std Max 0.00259694 trainer/policy/normal/log_std Min -2.69804 eval/num steps total 609483 eval/num paths total 1175 eval/path length Mean 626 eval/path length Std 0 eval/path length Max 626 eval/path length Min 626 eval/Rewards Mean 3.22501 eval/Rewards Std 0.777355 eval/Rewards Max 5.24863 eval/Rewards Min 0.980167 eval/Returns Mean 2018.86 eval/Returns Std 0 eval/Returns Max 2018.86 eval/Returns Min 2018.86 eval/Actions Mean 0.146782 eval/Actions Std 0.589773 eval/Actions Max 0.998502 eval/Actions Min -0.998306 eval/Num Paths 1 eval/Average Returns 2018.86 eval/normalized_score 62.6543 time/evaluation sampling (s) 0.982919 time/logging (s) 0.00288699 time/sampling 
batch (s) 0.26235 time/saving (s) 0.00328218 time/training (s) 4.13404 time/epoch (s) 5.38548 time/total (s) 35244.5 Epoch -77 ---------------------------------- ---------------

2022-05-10 22:58:11.628508 PDT | [0] Epoch -76 finished ---------------------------------- --------------- epoch -76 replay_buffer/size 999996 trainer/num train calls 925000 trainer/Policy Loss -2.19359 trainer/Log Pis Mean 2.33339 trainer/Log Pis Std 2.60397 trainer/Log Pis Max 13.6901 trainer/Log Pis Min -4.06846 trainer/policy/mean Mean 0.119611 trainer/policy/mean Std 0.620206 trainer/policy/mean Max 0.998139 trainer/policy/mean Min -0.997494 trainer/policy/normal/std Mean 0.36669 trainer/policy/normal/std Std 0.17749 trainer/policy/normal/std Max 0.891537 trainer/policy/normal/std Min 0.0695227 trainer/policy/normal/log_std Mean -1.15296 trainer/policy/normal/log_std Std 0.588493 trainer/policy/normal/log_std Max -0.114809 trainer/policy/normal/log_std Min -2.6661 eval/num steps total 610016 eval/num paths total 1176 eval/path length Mean 533 eval/path length Std 0 eval/path length Max 533 eval/path length Min 533 eval/Rewards Mean 3.20182 eval/Rewards Std 0.841882 eval/Rewards Max 5.54571 eval/Rewards Min 0.984117 eval/Returns Mean 1706.57 eval/Returns Std 0 eval/Returns Max 1706.57 eval/Returns Min 1706.57 eval/Actions Mean 0.157755 eval/Actions Std 0.588266 eval/Actions Max 0.996946 eval/Actions Min -0.997759 eval/Num Paths 1 eval/Average Returns 1706.57 eval/normalized_score 53.059 time/evaluation sampling (s) 0.880425 time/logging (s) 0.00232495 time/sampling batch (s) 0.261617 time/saving (s) 0.00295463 time/training (s) 4.144 time/epoch (s) 5.29132 time/total (s) 35249.8 Epoch -76 ---------------------------------- ---------------

2022-05-10 22:58:16.957753 PDT | [0] Epoch -75 finished ---------------------------------- --------------- epoch -75 replay_buffer/size 999996 trainer/num train calls 926000 trainer/Policy Loss -2.27116 trainer/Log Pis Mean 2.25266 trainer/Log Pis Std 2.5962 trainer/Log Pis Max 10.4607 trainer/Log Pis Min -5.74632 trainer/policy/mean Mean 0.132495 trainer/policy/mean Std 0.619569 trainer/policy/mean Max 0.996993 trainer/policy/mean Min -0.998022 trainer/policy/normal/std Mean 0.371496 trainer/policy/normal/std Std 0.176848 trainer/policy/normal/std Max 1.00933 trainer/policy/normal/std Min 0.0653448 trainer/policy/normal/log_std Mean -1.13418 trainer/policy/normal/log_std Std 0.577506 trainer/policy/normal/log_std Max 0.00928961 trainer/policy/normal/log_std Min -2.72808 eval/num steps total 610517 eval/num paths total 1177 eval/path length Mean 501 eval/path length Std 0 eval/path length Max 501 eval/path length Min 501 eval/Rewards Mean 3.1141 eval/Rewards Std 0.781911 eval/Rewards Max 4.80866 eval/Rewards Min 0.980987 eval/Returns Mean 1560.17 eval/Returns Std 0 eval/Returns Max 1560.17 eval/Returns Min 1560.17 eval/Actions Mean 0.160467 eval/Actions Std 0.591739 eval/Actions Max 0.998509 eval/Actions Min -0.996581 eval/Num Paths 1 eval/Average Returns 1560.17 eval/normalized_score 48.5606 time/evaluation sampling (s) 0.891766 time/logging (s) 0.00224565 time/sampling batch (s) 0.261778 time/saving (s) 0.002867 time/training (s) 4.14614 time/epoch (s) 5.3048 time/total (s) 35255.1 Epoch -75 ---------------------------------- ---------------

2022-05-10 22:58:22.277277 PDT | [0] Epoch -74 finished ---------------------------------- --------------- epoch -74 replay_buffer/size 999996 trainer/num train calls 927000 trainer/Policy Loss -2.28615 trainer/Log Pis Mean 2.22462 trainer/Log Pis Std 2.48829 trainer/Log Pis Max 13.415 trainer/Log Pis Min -5.37623 trainer/policy/mean Mean 0.131467 trainer/policy/mean Std 0.617543 trainer/policy/mean Max 0.997896 trainer/policy/mean Min -0.998351 trainer/policy/normal/std Mean 0.369363 trainer/policy/normal/std Std 0.181202 trainer/policy/normal/std Max 0.929424 trainer/policy/normal/std Min 0.0679462 trainer/policy/normal/log_std Mean -1.15018 trainer/policy/normal/log_std Std 0.598233 trainer/policy/normal/log_std Max -0.0731903 trainer/policy/normal/log_std Min -2.68904 eval/num steps total 611061 eval/num paths total 1178 eval/path length Mean 544 eval/path length Std 0 eval/path length Max 544 eval/path length Min 544 eval/Rewards Mean 3.16907 eval/Rewards Std 0.823608 eval/Rewards Max 5.00865 eval/Rewards Min 0.981992 eval/Returns Mean 1723.97 eval/Returns Std 0 eval/Returns Max 1723.97 eval/Returns Min 1723.97 eval/Actions Mean 0.141667 eval/Actions Std 0.578704 eval/Actions Max 0.998534 eval/Actions Min -0.998239 eval/Num Paths 1 eval/Average Returns 1723.97 eval/normalized_score 53.5937 time/evaluation sampling (s) 0.879067 time/logging (s) 0.00235744 time/sampling batch (s) 0.263287 time/saving (s) 0.00298129 time/training (s) 4.14776 time/epoch (s) 5.29545 time/total (s) 35260.4 Epoch -74 ---------------------------------- ---------------

2022-05-10 22:58:27.584157 PDT | [0] Epoch -73 finished ---------------------------------- --------------- epoch -73 replay_buffer/size 999996 trainer/num train calls 928000 trainer/Policy Loss -2.12031 trainer/Log Pis Mean 2.05528 trainer/Log Pis Std 2.5303 trainer/Log Pis Max 8.74994 trainer/Log Pis Min -6.28296 trainer/policy/mean Mean 0.151897 trainer/policy/mean Std 0.605502 trainer/policy/mean Max 0.998866 trainer/policy/mean Min -0.99803 trainer/policy/normal/std Mean 0.378117 trainer/policy/normal/std Std 0.182329 trainer/policy/normal/std Max 0.941465 trainer/policy/normal/std Min 0.0640151 trainer/policy/normal/log_std Mean -1.12203 trainer/policy/normal/log_std Std 0.588975 trainer/policy/normal/log_std Max -0.0603187 trainer/policy/normal/log_std Min -2.74864 eval/num steps total 611563 eval/num paths total 1179 eval/path length Mean 502 eval/path length Std 0 eval/path length Max 502 eval/path length Min 502 eval/Rewards Mean 3.14293 eval/Rewards Std 0.759336 eval/Rewards Max 4.73616 eval/Rewards Min 0.986387 eval/Returns Mean 1577.75 eval/Returns Std 0 eval/Returns Max 1577.75 eval/Returns Min 1577.75 eval/Actions Mean 0.160981 eval/Actions Std 0.598078 eval/Actions Max 0.998011 eval/Actions Min -0.996495 eval/Num Paths 1 eval/Average Returns 1577.75 eval/normalized_score 49.1009 time/evaluation sampling (s) 0.875354 time/logging (s) 0.00224614 time/sampling batch (s) 0.262052 time/saving (s) 0.0028654 time/training (s) 4.13995 time/epoch (s) 5.28247 time/total (s) 35265.7 Epoch -73 ---------------------------------- ---------------

2022-05-10 22:58:32.910713 PDT | [0] Epoch -72 finished ---------------------------------- --------------- epoch -72 replay_buffer/size 999996 trainer/num train calls 929000 trainer/Policy Loss -2.29478 trainer/Log Pis Mean 2.21646 trainer/Log Pis Std 2.4448 trainer/Log Pis Max 10.5135 trainer/Log Pis Min -4.42167 trainer/policy/mean Mean 0.170408 trainer/policy/mean Std 0.615868 trainer/policy/mean Max 0.997388 trainer/policy/mean Min -0.998596 trainer/policy/normal/std Mean 0.375651 trainer/policy/normal/std Std 0.182478 trainer/policy/normal/std Max 0.937809 trainer/policy/normal/std Min 0.0651709 trainer/policy/normal/log_std Mean -1.13156 trainer/policy/normal/log_std Std 0.595965 trainer/policy/normal/log_std Max -0.0642092 trainer/policy/normal/log_std Min -2.73074 eval/num steps total 612385 eval/num paths total 1181 eval/path length Mean 411 eval/path length Std 7 eval/path length Max 418 eval/path length Min 404 eval/Rewards Mean 3.10617 eval/Rewards Std 0.79577 eval/Rewards Max 5.11564 eval/Rewards Min 0.981006 eval/Returns Mean 1276.63 eval/Returns Std 32.6576 eval/Returns Max 1309.29 eval/Returns Min 1243.98 eval/Actions Mean 0.153441 eval/Actions Std 0.59104 eval/Actions Max 0.998636 eval/Actions Min -0.999288 eval/Num Paths 2 eval/Average Returns 1276.63 eval/normalized_score 39.8488 time/evaluation sampling (s) 0.875437 time/logging (s) 0.00315186 time/sampling batch (s) 0.263632 time/saving (s) 0.00289796 time/training (s) 4.15808 time/epoch (s) 5.3032 time/total (s) 35271 Epoch -72 ---------------------------------- ---------------

2022-05-10 22:58:38.233785 PDT | [0] Epoch -71 finished ---------------------------------- --------------- epoch -71 replay_buffer/size 999996 trainer/num train calls 930000 trainer/Policy Loss -2.35174 trainer/Log Pis Mean 2.3912 trainer/Log Pis Std 2.55528 trainer/Log Pis Max 10.7704 trainer/Log Pis Min -4.74568 trainer/policy/mean Mean 0.152905 trainer/policy/mean Std 0.619162 trainer/policy/mean Max 0.996234 trainer/policy/mean Min -0.997253 trainer/policy/normal/std Mean 0.370025 trainer/policy/normal/std Std 0.182682 trainer/policy/normal/std Max 0.968354 trainer/policy/normal/std Min 0.0657245 trainer/policy/normal/log_std Mean -1.15229 trainer/policy/normal/log_std Std 0.607565 trainer/policy/normal/log_std Max -0.0321579 trainer/policy/normal/log_std Min -2.72228 eval/num steps total 613370 eval/num paths total 1183 eval/path length Mean 492.5 eval/path length Std 7.5 eval/path length Max 500 eval/path length Min 485 eval/Rewards Mean 3.08664 eval/Rewards Std 0.803209 eval/Rewards Max 4.78453 eval/Rewards Min 0.98399 eval/Returns Mean 1520.17 eval/Returns Std 23.6023 eval/Returns Max 1543.77 eval/Returns Min 1496.57 eval/Actions Mean 0.15503 eval/Actions Std 0.587081 eval/Actions Max 0.998119 eval/Actions Min -0.998164 eval/Num Paths 2 eval/Average Returns 1520.17 eval/normalized_score 47.3317 time/evaluation sampling (s) 0.873659 time/logging (s) 0.0035392 time/sampling batch (s) 0.264068 time/saving (s) 0.00287761 time/training (s) 4.15537 time/epoch (s) 5.29951 time/total (s) 35276.3 Epoch -71 ---------------------------------- ---------------

2022-05-10 22:58:43.577290 PDT | [0] Epoch -70 finished ---------------------------------- --------------- epoch -70 replay_buffer/size 999996 trainer/num train calls 931000 trainer/Policy Loss -2.06816 trainer/Log Pis Mean 2.30996 trainer/Log Pis Std 2.60915 trainer/Log Pis Max 9.7933 trainer/Log Pis Min -3.58617 trainer/policy/mean Mean 0.145319 trainer/policy/mean Std 0.611992 trainer/policy/mean Max 0.996411 trainer/policy/mean Min -0.998016 trainer/policy/normal/std Mean 0.369129 trainer/policy/normal/std Std 0.181216 trainer/policy/normal/std Max 0.93952 trainer/policy/normal/std Min 0.0694663 trainer/policy/normal/log_std Mean -1.15586 trainer/policy/normal/log_std Std 0.612485 trainer/policy/normal/log_std Max -0.0623864 trainer/policy/normal/log_std Min -2.66691 eval/num steps total 614363 eval/num paths total 1186 eval/path length Mean 331 eval/path length Std 42.9496 eval/path length Max 373 eval/path length Min 272 eval/Rewards Mean 3.03891 eval/Rewards Std 0.961959 eval/Rewards Max 5.35353 eval/Rewards Min 0.980207 eval/Returns Mean 1005.88 eval/Returns Std 146.812 eval/Returns Max 1157.97 eval/Returns Min 807.432 eval/Actions Mean 0.112391 eval/Actions Std 0.564571 eval/Actions Max 0.999103 eval/Actions Min -0.998266 eval/Num Paths 3 eval/Average Returns 1005.88 eval/normalized_score 31.5295 time/evaluation sampling (s) 0.879982 time/logging (s) 0.003571 time/sampling batch (s) 0.265595 time/saving (s) 0.0028376 time/training (s) 4.1676 time/epoch (s) 5.31958 time/total (s) 35281.6 Epoch -70 ---------------------------------- ---------------

2022-05-10 22:58:48.903030 PDT | [0] Epoch -69 finished ---------------------------------- --------------- epoch -69 replay_buffer/size 999996 trainer/num train calls 932000 trainer/Policy Loss -2.36034 trainer/Log Pis Mean 2.38007 trainer/Log Pis Std 2.49069 trainer/Log Pis Max 9.1801 trainer/Log Pis Min -4.94154 trainer/policy/mean Mean 0.142602 trainer/policy/mean Std 0.619049 trainer/policy/mean Max 0.998509 trainer/policy/mean Min -0.997942 trainer/policy/normal/std Mean 0.372999 trainer/policy/normal/std Std 0.1804 trainer/policy/normal/std Max 0.888182 trainer/policy/normal/std Min 0.0737961 trainer/policy/normal/log_std Mean -1.13345 trainer/policy/normal/log_std Std 0.582839 trainer/policy/normal/log_std Max -0.118578 trainer/policy/normal/log_std Min -2.60645 eval/num steps total 614868 eval/num paths total 1187 eval/path length Mean 505 eval/path length Std 0 eval/path length Max 505 eval/path length Min 505 eval/Rewards Mean 3.1002 eval/Rewards Std 0.749664 eval/Rewards Max 4.73065 eval/Rewards Min 0.980844 eval/Returns Mean 1565.6 eval/Returns Std 0 eval/Returns Max 1565.6 eval/Returns Min 1565.6 eval/Actions Mean 0.162091 eval/Actions Std 0.59659 eval/Actions Max 0.998347 eval/Actions Min -0.997594 eval/Num Paths 1 eval/Average Returns 1565.6 eval/normalized_score 48.7276 time/evaluation sampling (s) 0.871448 time/logging (s) 0.00218585 time/sampling batch (s) 0.26446 time/saving (s) 0.00286957 time/training (s) 4.15941 time/epoch (s) 5.30038 time/total (s) 35286.9 Epoch -69 ---------------------------------- ---------------

2022-05-10 22:58:54.202152 PDT | [0] Epoch -68 finished ---------------------------------- --------------- epoch -68 replay_buffer/size 999996 trainer/num train calls 933000 trainer/Policy Loss -2.24964 trainer/Log Pis Mean 2.20768 trainer/Log Pis Std 2.49794 trainer/Log Pis Max 12.2753 trainer/Log Pis Min -5.56527 trainer/policy/mean Mean 0.124384 trainer/policy/mean Std 0.620906 trainer/policy/mean Max 0.997454 trainer/policy/mean Min -0.998188 trainer/policy/normal/std Mean 0.389225 trainer/policy/normal/std Std 0.189983 trainer/policy/normal/std Max 1.05854 trainer/policy/normal/std Min 0.069387 trainer/policy/normal/log_std Mean -1.09589 trainer/policy/normal/log_std Std 0.595254 trainer/policy/normal/log_std Max 0.0568892 trainer/policy/normal/log_std Min -2.66806 eval/num steps total 615514 eval/num paths total 1188 eval/path length Mean 646 eval/path length Std 0 eval/path length Max 646 eval/path length Min 646 eval/Rewards Mean 3.19715 eval/Rewards Std 0.757672 eval/Rewards Max 4.75566 eval/Rewards Min 0.982575 eval/Returns Mean 2065.36 eval/Returns Std 0 eval/Returns Max 2065.36 eval/Returns Min 2065.36 eval/Actions Mean 0.145745 eval/Actions Std 0.605404 eval/Actions Max 0.997855 eval/Actions Min -0.998795 eval/Num Paths 1 eval/Average Returns 2065.36 eval/normalized_score 64.0831 time/evaluation sampling (s) 0.885833 time/logging (s) 0.00262332 time/sampling batch (s) 0.258725 time/saving (s) 0.00280528 time/training (s) 4.12567 time/epoch (s) 5.27566 time/total (s) 35292.2 Epoch -68 ---------------------------------- ---------------

2022-05-10 22:58:59.458829 PDT | [0] Epoch -67 finished ---------------------------------- --------------- epoch -67 replay_buffer/size 999996 trainer/num train calls 934000 trainer/Policy Loss -2.31819 trainer/Log Pis Mean 2.46328 trainer/Log Pis Std 2.5628 trainer/Log Pis Max 9.45606 trainer/Log Pis Min -5.02269 trainer/policy/mean Mean 0.174662 trainer/policy/mean Std 0.621479 trainer/policy/mean Max 0.998829 trainer/policy/mean Min -0.998304 trainer/policy/normal/std Mean 0.378076 trainer/policy/normal/std Std 0.181143 trainer/policy/normal/std Max 1.0296 trainer/policy/normal/std Min 0.0661952 trainer/policy/normal/log_std Mean -1.12022 trainer/policy/normal/log_std Std 0.586419 trainer/policy/normal/log_std Max 0.0291699 trainer/policy/normal/log_std Min -2.71515 eval/num steps total 616066 eval/num paths total 1189 eval/path length Mean 552 eval/path length Std 0 eval/path length Max 552 eval/path length Min 552 eval/Rewards Mean 3.18725 eval/Rewards Std 0.811159 eval/Rewards Max 4.72951 eval/Rewards Min 0.987241 eval/Returns Mean 1759.36 eval/Returns Std 0 eval/Returns Max 1759.36 eval/Returns Min 1759.36 eval/Actions Mean 0.148396 eval/Actions Std 0.578626 eval/Actions Max 0.998416 eval/Actions Min -0.998391 eval/Num Paths 1 eval/Average Returns 1759.36 eval/normalized_score 54.681 time/evaluation sampling (s) 0.85733 time/logging (s) 0.00242961 time/sampling batch (s) 0.258257 time/saving (s) 0.0030033 time/training (s) 4.11198 time/epoch (s) 5.233 time/total (s) 35297.5 Epoch -67 ---------------------------------- ---------------

2022-05-10 22:59:04.730277 PDT | [0] Epoch -66 finished ---------------------------------- --------------- epoch -66 replay_buffer/size 999996 trainer/num train calls 935000 trainer/Policy Loss -2.22045 trainer/Log Pis Mean 2.36101 trainer/Log Pis Std 2.60276 trainer/Log Pis Max 10.0119 trainer/Log Pis Min -5.09224 trainer/policy/mean Mean 0.149294 trainer/policy/mean Std 0.619417 trainer/policy/mean Max 0.997881 trainer/policy/mean Min -0.998726 trainer/policy/normal/std Mean 0.377041 trainer/policy/normal/std Std 0.188814 trainer/policy/normal/std Max 0.998672 trainer/policy/normal/std Min 0.0621482 trainer/policy/normal/log_std Mean -1.13525 trainer/policy/normal/log_std Std 0.609541 trainer/policy/normal/log_std Max -0.00132881 trainer/policy/normal/log_std Min -2.77823 eval/num steps total 616630 eval/num paths total 1190 eval/path length Mean 564 eval/path length Std 0 eval/path length Max 564 eval/path length Min 564 eval/Rewards Mean 3.23242 eval/Rewards Std 0.816137 eval/Rewards Max 4.87655 eval/Rewards Min 0.986073 eval/Returns Mean 1823.09 eval/Returns Std 0 eval/Returns Max 1823.09 eval/Returns Min 1823.09 eval/Actions Mean 0.166034 eval/Actions Std 0.582658 eval/Actions Max 0.998 eval/Actions Min -0.998823 eval/Num Paths 1 eval/Average Returns 1823.09 eval/normalized_score 56.639 time/evaluation sampling (s) 0.858193 time/logging (s) 0.00236723 time/sampling batch (s) 0.259215 time/saving (s) 0.00287722 time/training (s) 4.1247 time/epoch (s) 5.24735 time/total (s) 35302.7 Epoch -66 ---------------------------------- ---------------

2022-05-10 22:59:09.995622 PDT | [0] Epoch -65 finished ---------------------------------- --------------- epoch -65 replay_buffer/size 999996 trainer/num train calls 936000 trainer/Policy Loss -2.17303 trainer/Log Pis Mean 2.10503 trainer/Log Pis Std 2.63386 trainer/Log Pis Max 11.2199 trainer/Log Pis Min -6.16736 trainer/policy/mean Mean 0.0912476 trainer/policy/mean Std 0.621927 trainer/policy/mean Max 0.995566 trainer/policy/mean Min -0.997747 trainer/policy/normal/std Mean 0.373258 trainer/policy/normal/std Std 0.17571 trainer/policy/normal/std Max 1.03564 trainer/policy/normal/std Min 0.0699293 trainer/policy/normal/log_std Mean -1.12453 trainer/policy/normal/log_std Std 0.56619 trainer/policy/normal/log_std Max 0.035022 trainer/policy/normal/log_std Min -2.66027 eval/num steps total 617117 eval/num paths total 1191 eval/path length Mean 487 eval/path length Std 0 eval/path length Max 487 eval/path length Min 487 eval/Rewards Mean 3.07156 eval/Rewards Std 0.806582 eval/Rewards Max 4.9026 eval/Rewards Min 0.981652 eval/Returns Mean 1495.85 eval/Returns Std 0 eval/Returns Max 1495.85 eval/Returns Min 1495.85 eval/Actions Mean 0.145286 eval/Actions Std 0.574391 eval/Actions Max 0.997853 eval/Actions Min -0.99793 eval/Num Paths 1 eval/Average Returns 1495.85 eval/normalized_score 46.5844 time/evaluation sampling (s) 0.854347 time/logging (s) 0.0022185 time/sampling batch (s) 0.264362 time/saving (s) 0.00282289 time/training (s) 4.11794 time/epoch (s) 5.24169 time/total (s) 35308 Epoch -65 ---------------------------------- ---------------

2022-05-10 22:59:15.294923 PDT | [0] Epoch -64 finished ---------------------------------- --------------- epoch -64 replay_buffer/size 999996 trainer/num train calls 937000 trainer/Policy Loss -2.13367 trainer/Log Pis Mean 2.42521 trainer/Log Pis Std 2.60387 trainer/Log Pis Max 9.47155 trainer/Log Pis Min -4.34909 trainer/policy/mean Mean 0.176018 trainer/policy/mean Std 0.610201 trainer/policy/mean Max 0.997377 trainer/policy/mean Min -0.998615 trainer/policy/normal/std Mean 0.37268 trainer/policy/normal/std Std 0.184653 trainer/policy/normal/std Max 1.08164 trainer/policy/normal/std Min 0.0680892 trainer/policy/normal/log_std Mean -1.14264 trainer/policy/normal/log_std Std 0.599889 trainer/policy/normal/log_std Max 0.0784771 trainer/policy/normal/log_std Min -2.68694 eval/num steps total 617702 eval/num paths total 1192 eval/path length Mean 585 eval/path length Std 0 eval/path length Max 585 eval/path length Min 585 eval/Rewards Mean 3.22437 eval/Rewards Std 0.757061 eval/Rewards Max 5.02644 eval/Rewards Min 0.988541 eval/Returns Mean 1886.26 eval/Returns Std 0 eval/Returns Max 1886.26 eval/Returns Min 1886.26 eval/Actions Mean 0.172579 eval/Actions Std 0.597703 eval/Actions Max 0.998567 eval/Actions Min -0.998529 eval/Num Paths 1 eval/Average Returns 1886.26 eval/normalized_score 58.58 time/evaluation sampling (s) 0.880053 time/logging (s) 0.00241714 time/sampling batch (s) 0.262634 time/saving (s) 0.00287026 time/training (s) 4.12723 time/epoch (s) 5.2752 time/total (s) 35313.2 Epoch -64 ---------------------------------- ---------------

2022-05-10 22:59:20.600547 PDT | [0] Epoch -63 finished ---------------------------------- --------------- epoch -63 replay_buffer/size 999996 trainer/num train calls 938000 trainer/Policy Loss -2.33296 trainer/Log Pis Mean 2.31392 trainer/Log Pis Std 2.50856 trainer/Log Pis Max 8.98367 trainer/Log Pis Min -5.45658 trainer/policy/mean Mean 0.150619 trainer/policy/mean Std 0.6187 trainer/policy/mean Max 0.997826 trainer/policy/mean Min -0.996904 trainer/policy/normal/std Mean 0.378771 trainer/policy/normal/std Std 0.186495 trainer/policy/normal/std Max 1.00998 trainer/policy/normal/std Min 0.0682432 trainer/policy/normal/log_std Mean -1.12606 trainer/policy/normal/log_std Std 0.600665 trainer/policy/normal/log_std Max 0.00993403 trainer/policy/normal/log_std Min -2.68468 eval/num steps total 618280 eval/num paths total 1193 eval/path length Mean 578 eval/path length Std 0 eval/path length Max 578 eval/path length Min 578 eval/Rewards Mean 3.14803 eval/Rewards Std 0.713154 eval/Rewards Max 4.63252 eval/Rewards Min 0.979563 eval/Returns Mean 1819.56 eval/Returns Std 0 eval/Returns Max 1819.56 eval/Returns Min 1819.56 eval/Actions Mean 0.154029 eval/Actions Std 0.608642 eval/Actions Max 0.99787 eval/Actions Min -0.9973 eval/Num Paths 1 eval/Average Returns 1819.56 eval/normalized_score 56.5308 time/evaluation sampling (s) 0.884637 time/logging (s) 0.00244775 time/sampling batch (s) 0.261664 time/saving (s) 0.00309576 time/training (s) 4.12969 time/epoch (s) 5.28154 time/total (s) 35318.5 Epoch -63 ---------------------------------- ---------------

2022-05-10 22:59:25.950174 PDT | [0] Epoch -62 finished ---------------------------------- --------------- epoch -62 replay_buffer/size 999996 trainer/num train calls 939000 trainer/Policy Loss -2.19752 trainer/Log Pis Mean 2.28623 trainer/Log Pis Std 2.63794 trainer/Log Pis Max 10.5633 trainer/Log Pis Min -4.46187 trainer/policy/mean Mean 0.161842 trainer/policy/mean Std 0.609509 trainer/policy/mean Max 0.997562 trainer/policy/mean Min -0.998126 trainer/policy/normal/std Mean 0.37615 trainer/policy/normal/std Std 0.185199 trainer/policy/normal/std Max 1.03545 trainer/policy/normal/std Min 0.0694952 trainer/policy/normal/log_std Mean -1.13574 trainer/policy/normal/log_std Std 0.609372 trainer/policy/normal/log_std Max 0.0348371 trainer/policy/normal/log_std Min -2.6665 eval/num steps total 619268 eval/num paths total 1195 eval/path length Mean 494 eval/path length Std 1 eval/path length Max 495 eval/path length Min 493 eval/Rewards Mean 3.07768 eval/Rewards Std 0.773537 eval/Rewards Max 4.64985 eval/Rewards Min 0.986952 eval/Returns Mean 1520.37 eval/Returns Std 2.87937 eval/Returns Max 1523.25 eval/Returns Min 1517.49 eval/Actions Mean 0.160085 eval/Actions Std 0.590383 eval/Actions Max 0.998247 eval/Actions Min -0.996803 eval/Num Paths 2 eval/Average Returns 1520.37 eval/normalized_score 47.3378 time/evaluation sampling (s) 0.909431 time/logging (s) 0.00370356 time/sampling batch (s) 0.263404 time/saving (s) 0.0029504 time/training (s) 4.14697 time/epoch (s) 5.32646 time/total (s) 35323.8 Epoch -62 ---------------------------------- ---------------

2022-05-10 22:59:31.321897 PDT | [0] Epoch -61 finished ---------------------------------- --------------- epoch -61 replay_buffer/size 999996 trainer/num train calls 940000 trainer/Policy Loss -2.25812 trainer/Log Pis Mean 2.2508 trainer/Log Pis Std 2.5963 trainer/Log Pis Max 9.92577 trainer/Log Pis Min -4.36095 trainer/policy/mean Mean 0.146363 trainer/policy/mean Std 0.616331 trainer/policy/mean Max 0.997248 trainer/policy/mean Min -0.998063 trainer/policy/normal/std Mean 0.377851 trainer/policy/normal/std Std 0.177196 trainer/policy/normal/std Max 0.882429 trainer/policy/normal/std Min 0.071601 trainer/policy/normal/log_std Mean -1.1159 trainer/policy/normal/log_std Std 0.577633 trainer/policy/normal/log_std Max -0.125077 trainer/policy/normal/log_std Min -2.63665 eval/num steps total 619821 eval/num paths total 1196 eval/path length Mean 553 eval/path length Std 0 eval/path length Max 553 eval/path length Min 553 eval/Rewards Mean 3.21854 eval/Rewards Std 0.835458 eval/Rewards Max 4.76194 eval/Rewards Min 0.980654 eval/Returns Mean 1779.85 eval/Returns Std 0 eval/Returns Max 1779.85 eval/Returns Min 1779.85 eval/Actions Mean 0.154609 eval/Actions Std 0.579194 eval/Actions Max 0.99896 eval/Actions Min -0.998952 eval/Num Paths 1 eval/Average Returns 1779.85 eval/normalized_score 55.3106 time/evaluation sampling (s) 0.903444 time/logging (s) 0.00246519 time/sampling batch (s) 0.264868 time/saving (s) 0.00298277 time/training (s) 4.17228 time/epoch (s) 5.34604 time/total (s) 35329.2 Epoch -61 ---------------------------------- ---------------

2022-05-10 22:59:36.625608 PDT | [0] Epoch -60 finished ---------------------------------- --------------- epoch -60 replay_buffer/size 999996 trainer/num train calls 941000 trainer/Policy Loss -2.27536 trainer/Log Pis Mean 2.11755 trainer/Log Pis Std 2.59222 trainer/Log Pis Max 10.6777 trainer/Log Pis Min -4.8879 trainer/policy/mean Mean 0.115659 trainer/policy/mean Std 0.623218 trainer/policy/mean Max 0.998197 trainer/policy/mean Min -0.997592 trainer/policy/normal/std Mean 0.373304 trainer/policy/normal/std Std 0.181361 trainer/policy/normal/std Max 0.951925 trainer/policy/normal/std Min 0.0686988 trainer/policy/normal/log_std Mean -1.13639 trainer/policy/normal/log_std Std 0.592776 trainer/policy/normal/log_std Max -0.0492694 trainer/policy/normal/log_std Min -2.67802 eval/num steps total 620321 eval/num paths total 1197 eval/path length Mean 500 eval/path length Std 0 eval/path length Max 500 eval/path length Min 500 eval/Rewards Mean 3.08768 eval/Rewards Std 0.75625 eval/Rewards Max 4.65026 eval/Rewards Min 0.98646 eval/Returns Mean 1543.84 eval/Returns Std 0 eval/Returns Max 1543.84 eval/Returns Min 1543.84 eval/Actions Mean 0.145311 eval/Actions Std 0.595485 eval/Actions Max 0.997024 eval/Actions Min -0.998701 eval/Num Paths 1 eval/Average Returns 1543.84 eval/normalized_score 48.059 time/evaluation sampling (s) 0.892154 time/logging (s) 0.00213946 time/sampling batch (s) 0.260955 time/saving (s) 0.00280504 time/training (s) 4.12101 time/epoch (s) 5.27906 time/total (s) 35334.5 Epoch -60 ---------------------------------- ---------------

2022-05-10 22:59:41.963599 PDT | [0] Epoch -59 finished ---------------------------------- --------------- epoch -59 replay_buffer/size 999996 trainer/num train calls 942000 trainer/Policy Loss -2.32928 trainer/Log Pis Mean 2.24822 trainer/Log Pis Std 2.65278 trainer/Log Pis Max 9.90156 trainer/Log Pis Min -6.00661 trainer/policy/mean Mean 0.151892 trainer/policy/mean Std 0.621348 trainer/policy/mean Max 0.997913 trainer/policy/mean Min -0.998137 trainer/policy/normal/std Mean 0.379542 trainer/policy/normal/std Std 0.188121 trainer/policy/normal/std Max 1.12696 trainer/policy/normal/std Min 0.0660593 trainer/policy/normal/log_std Mean -1.12533 trainer/policy/normal/log_std Std 0.604177 trainer/policy/normal/log_std Max 0.119525 trainer/policy/normal/log_std Min -2.7172 eval/num steps total 620970 eval/num paths total 1198 eval/path length Mean 649 eval/path length Std 0 eval/path length Max 649 eval/path length Min 649 eval/Rewards Mean 3.2352 eval/Rewards Std 0.789317 eval/Rewards Max 4.85743 eval/Rewards Min 0.983647 eval/Returns Mean 2099.64 eval/Returns Std 0 eval/Returns Max 2099.64 eval/Returns Min 2099.64 eval/Actions Mean 0.166595 eval/Actions Std 0.586198 eval/Actions Max 0.998876 eval/Actions Min -0.998717 eval/Num Paths 1 eval/Average Returns 2099.64 eval/normalized_score 65.1365 time/evaluation sampling (s) 0.88192 time/logging (s) 0.00271267 time/sampling batch (s) 0.264273 time/saving (s) 0.00317817 time/training (s) 4.16224 time/epoch (s) 5.31432 time/total (s) 35339.8 Epoch -59 ---------------------------------- ---------------

2022-05-10 22:59:47.264707 PDT | [0] Epoch -58 finished ---------------------------------- --------------- epoch -58 replay_buffer/size 999996 trainer/num train calls 943000 trainer/Policy Loss -2.15863 trainer/Log Pis Mean 2.17006 trainer/Log Pis Std 2.63214 trainer/Log Pis Max 12.6892 trainer/Log Pis Min -5.57863 trainer/policy/mean Mean 0.139331 trainer/policy/mean Std 0.613358 trainer/policy/mean Max 0.998697 trainer/policy/mean Min -0.99719 trainer/policy/normal/std Mean 0.375742 trainer/policy/normal/std Std 0.185359 trainer/policy/normal/std Max 0.981928 trainer/policy/normal/std Min 0.0614171 trainer/policy/normal/log_std Mean -1.13406 trainer/policy/normal/log_std Std 0.601129 trainer/policy/normal/log_std Max -0.0182368 trainer/policy/normal/log_std Min -2.79007 eval/num steps total 621706 eval/num paths total 1199 eval/path length Mean 736 eval/path length Std 0 eval/path length Max 736 eval/path length Min 736 eval/Rewards Mean 3.22321 eval/Rewards Std 0.628327 eval/Rewards Max 4.54924 eval/Rewards Min 0.984684 eval/Returns Mean 2372.28 eval/Returns Std 0 eval/Returns Max 2372.28 eval/Returns Min 2372.28 eval/Actions Mean 0.154003 eval/Actions Std 0.614708 eval/Actions Max 0.99811 eval/Actions Min -0.997818 eval/Num Paths 1 eval/Average Returns 2372.28 eval/normalized_score 73.5137 time/evaluation sampling (s) 0.875288 time/logging (s) 0.00303011 time/sampling batch (s) 0.262486 time/saving (s) 0.00312148 time/training (s) 4.13252 time/epoch (s) 5.27645 time/total (s) 35345.1 Epoch -58 ---------------------------------- ---------------

2022-05-10 22:59:52.609956 PDT | [0] Epoch -57 finished ---------------------------------- --------------- epoch -57 replay_buffer/size 999996 trainer/num train calls 944000 trainer/Policy Loss -2.40116 trainer/Log Pis Mean 2.39058 trainer/Log Pis Std 2.80708 trainer/Log Pis Max 10.6659 trainer/Log Pis Min -5.07648 trainer/policy/mean Mean 0.150512 trainer/policy/mean Std 0.619664 trainer/policy/mean Max 0.99847 trainer/policy/mean Min -0.998264 trainer/policy/normal/std Mean 0.376579 trainer/policy/normal/std Std 0.178677 trainer/policy/normal/std Max 0.954462 trainer/policy/normal/std Min 0.0666072 trainer/policy/normal/log_std Mean -1.12272 trainer/policy/normal/log_std Std 0.584228 trainer/policy/normal/log_std Max -0.0466073 trainer/policy/normal/log_std Min -2.70894 eval/num steps total 622685 eval/num paths total 1201 eval/path length Mean 489.5 eval/path length Std 8.5 eval/path length Max 498 eval/path length Min 481 eval/Rewards Mean 3.15692 eval/Rewards Std 0.788083 eval/Rewards Max 4.78375 eval/Rewards Min 0.982482 eval/Returns Mean 1545.31 eval/Returns Std 5.27569 eval/Returns Max 1550.59 eval/Returns Min 1540.03 eval/Actions Mean 0.153611 eval/Actions Std 0.601575 eval/Actions Max 0.997103 eval/Actions Min -0.998884 eval/Num Paths 2 eval/Average Returns 1545.31 eval/normalized_score 48.1041 time/evaluation sampling (s) 0.869464 time/logging (s) 0.00343863 time/sampling batch (s) 0.266787 time/saving (s) 0.00277894 time/training (s) 4.17727 time/epoch (s) 5.31973 time/total (s) 35350.4 Epoch -57 ---------------------------------- ---------------

2022-05-10 22:59:58.062839 PDT | [0] Epoch -56 finished ---------------------------------- --------------- epoch -56 replay_buffer/size 999996 trainer/num train calls 945000 trainer/Policy Loss -2.30779 trainer/Log Pis Mean 2.34607 trainer/Log Pis Std 2.76014 trainer/Log Pis Max 16.4501 trainer/Log Pis Min -7.90375 trainer/policy/mean Mean 0.130448 trainer/policy/mean Std 0.618105 trainer/policy/mean Max 0.99697 trainer/policy/mean Min -0.99903 trainer/policy/normal/std Mean 0.371824 trainer/policy/normal/std Std 0.183473 trainer/policy/normal/std Max 0.954381 trainer/policy/normal/std Min 0.0687016 trainer/policy/normal/log_std Mean -1.14058 trainer/policy/normal/log_std Std 0.589013 trainer/policy/normal/log_std Max -0.046692 trainer/policy/normal/log_std Min -2.67798 eval/num steps total 623340 eval/num paths total 1202 eval/path length Mean 655 eval/path length Std 0 eval/path length Max 655 eval/path length Min 655 eval/Rewards Mean 3.15017 eval/Rewards Std 0.693115 eval/Rewards Max 4.79239 eval/Rewards Min 0.978141 eval/Returns Mean 2063.36 eval/Returns Std 0 eval/Returns Max 2063.36 eval/Returns Min 2063.36 eval/Actions Mean 0.151463 eval/Actions Std 0.603842 eval/Actions Max 0.998376 eval/Actions Min -0.997963 eval/Num Paths 1 eval/Average Returns 2063.36 eval/normalized_score 64.0218 time/evaluation sampling (s) 0.943104 time/logging (s) 0.00266106 time/sampling batch (s) 0.268925 time/saving (s) 0.00299065 time/training (s) 4.2098 time/epoch (s) 5.42748 time/total (s) 35355.8 Epoch -56 ---------------------------------- ---------------

2022-05-10 23:00:03.491715 PDT | [0] Epoch -55 finished ---------------------------------- --------------- epoch -55 replay_buffer/size 999996 trainer/num train calls 946000 trainer/Policy Loss -2.30069 trainer/Log Pis Mean 2.38134 trainer/Log Pis Std 2.61705 trainer/Log Pis Max 9.6828 trainer/Log Pis Min -3.96815 trainer/policy/mean Mean 0.105935 trainer/policy/mean Std 0.629011 trainer/policy/mean Max 0.99854 trainer/policy/mean Min -0.997368 trainer/policy/normal/std Mean 0.366282 trainer/policy/normal/std Std 0.179039 trainer/policy/normal/std Max 1.01671 trainer/policy/normal/std Min 0.0638696 trainer/policy/normal/log_std Mean -1.1524 trainer/policy/normal/log_std Std 0.582085 trainer/policy/normal/log_std Max 0.0165684 trainer/policy/normal/log_std Min -2.75091 eval/num steps total 623997 eval/num paths total 1203 eval/path length Mean 657 eval/path length Std 0 eval/path length Max 657 eval/path length Min 657 eval/Rewards Mean 3.18575 eval/Rewards Std 0.671443 eval/Rewards Max 4.78334 eval/Rewards Min 0.982348 eval/Returns Mean 2093.03 eval/Returns Std 0 eval/Returns Max 2093.03 eval/Returns Min 2093.03 eval/Actions Mean 0.150745 eval/Actions Std 0.615557 eval/Actions Max 0.998374 eval/Actions Min -0.998902 eval/Num Paths 1 eval/Average Returns 2093.03 eval/normalized_score 64.9335 time/evaluation sampling (s) 0.884487 time/logging (s) 0.00267091 time/sampling batch (s) 0.270948 time/saving (s) 0.00292311 time/training (s) 4.24316 time/epoch (s) 5.40419 time/total (s) 35361.2 Epoch -55 ---------------------------------- ---------------

2022-05-10 23:00:08.880068 PDT | [0] Epoch -54 finished ---------------------------------- --------------- epoch -54 replay_buffer/size 999996 trainer/num train calls 947000 trainer/Policy Loss -2.16509 trainer/Log Pis Mean 2.10981 trainer/Log Pis Std 2.48606 trainer/Log Pis Max 8.6034 trainer/Log Pis Min -5.57344 trainer/policy/mean Mean 0.157052 trainer/policy/mean Std 0.607635 trainer/policy/mean Max 0.997204 trainer/policy/mean Min -0.996844 trainer/policy/normal/std Mean 0.383642 trainer/policy/normal/std Std 0.18803 trainer/policy/normal/std Max 1.01202 trainer/policy/normal/std Min 0.0683862 trainer/policy/normal/log_std Mean -1.11273 trainer/policy/normal/log_std Std 0.600798 trainer/policy/normal/log_std Max 0.011946 trainer/policy/normal/log_std Min -2.68258 eval/num steps total 624990 eval/num paths total 1205 eval/path length Mean 496.5 eval/path length Std 9.5 eval/path length Max 506 eval/path length Min 487 eval/Rewards Mean 3.13079 eval/Rewards Std 0.820682 eval/Rewards Max 5.16221 eval/Rewards Min 0.982304 eval/Returns Mean 1554.44 eval/Returns Std 22.5843 eval/Returns Max 1577.02 eval/Returns Min 1531.85 eval/Actions Mean 0.15134 eval/Actions Std 0.586252 eval/Actions Max 0.998025 eval/Actions Min -0.998382 eval/Num Paths 2 eval/Average Returns 1554.44 eval/normalized_score 48.3846 time/evaluation sampling (s) 0.878615 time/logging (s) 0.00372975 time/sampling batch (s) 0.26917 time/saving (s) 0.00291088 time/training (s) 4.21031 time/epoch (s) 5.36474 time/total (s) 35366.6 Epoch -54 ---------------------------------- ---------------

2022-05-10 23:00:14.242219 PDT | [0] Epoch -53 finished ---------------------------------- --------------- epoch -53 replay_buffer/size 999996 trainer/num train calls 948000 trainer/Policy Loss -2.26041 trainer/Log Pis Mean 2.29362 trainer/Log Pis Std 2.57241 trainer/Log Pis Max 9.75638 trainer/Log Pis Min -4.73354 trainer/policy/mean Mean 0.136237 trainer/policy/mean Std 0.613099 trainer/policy/mean Max 0.99856 trainer/policy/mean Min -0.99713 trainer/policy/normal/std Mean 0.37549 trainer/policy/normal/std Std 0.182021 trainer/policy/normal/std Max 1.0499 trainer/policy/normal/std Min 0.0738754 trainer/policy/normal/log_std Mean -1.12961 trainer/policy/normal/log_std Std 0.590605 trainer/policy/normal/log_std Max 0.0486982 trainer/policy/normal/log_std Min -2.60537 eval/num steps total 625814 eval/num paths total 1206 eval/path length Mean 824 eval/path length Std 0 eval/path length Max 824 eval/path length Min 824 eval/Rewards Mean 3.26583 eval/Rewards Std 0.635397 eval/Rewards Max 4.83415 eval/Rewards Min 0.982384 eval/Returns Mean 2691.05 eval/Returns Std 0 eval/Returns Max 2691.05 eval/Returns Min 2691.05 eval/Actions Mean 0.156469 eval/Actions Std 0.608893 eval/Actions Max 0.998964 eval/Actions Min -0.998122 eval/Num Paths 1 eval/Average Returns 2691.05 eval/normalized_score 83.308 time/evaluation sampling (s) 0.881278 time/logging (s) 0.00311689 time/sampling batch (s) 0.265579 time/saving (s) 0.0028968 time/training (s) 4.18391 time/epoch (s) 5.33678 time/total (s) 35371.9 Epoch -53 ---------------------------------- ---------------
2022-05-10 23:00:19.616220 PDT | [0] Epoch -52 finished ---------------------------------- --------------- epoch -52 replay_buffer/size 999996 trainer/num train calls 949000 trainer/Policy Loss -2.2266 trainer/Log Pis Mean 2.21532 trainer/Log Pis Std 2.54818 trainer/Log Pis Max 9.14148 trainer/Log Pis Min -3.89096 trainer/policy/mean Mean 0.148866 trainer/policy/mean Std 0.609873 trainer/policy/mean Max 0.998672 trainer/policy/mean Min -0.9975 trainer/policy/normal/std Mean 0.375056 trainer/policy/normal/std Std 0.183088 trainer/policy/normal/std Max 0.989355 trainer/policy/normal/std Min 0.064869 trainer/policy/normal/log_std Mean -1.13211 trainer/policy/normal/log_std Std 0.591928 trainer/policy/normal/log_std Max -0.0107022 trainer/policy/normal/log_std Min -2.73539 eval/num steps total 626343 eval/num paths total 1207 eval/path length Mean 529 eval/path length Std 0 eval/path length Max 529 eval/path length Min 529 eval/Rewards Mean 3.2397 eval/Rewards Std 0.825849 eval/Rewards Max 5.55696 eval/Rewards Min 0.989716 eval/Returns Mean 1713.8 eval/Returns Std 0 eval/Returns Max 1713.8 eval/Returns Min 1713.8 eval/Actions Mean 0.15803 eval/Actions Std 0.588667 eval/Actions Max 0.998615 eval/Actions Min -0.996819 eval/Num Paths 1 eval/Average Returns 1713.8 eval/normalized_score 53.2811 time/evaluation sampling (s) 0.873329 time/logging (s) 0.00242417 time/sampling batch (s) 0.266719 time/saving (s) 0.00292208 time/training (s) 4.20342 time/epoch (s) 5.34881 time/total (s) 35377.3 Epoch -52 ---------------------------------- --------------- 2022-05-10 23:00:24.923919 PDT | [0] Epoch -51 finished ---------------------------------- --------------- epoch -51 replay_buffer/size 999996 trainer/num train calls 950000 trainer/Policy Loss -2.55497 trainer/Log Pis Mean 2.34101 trainer/Log Pis Std 2.76197 trainer/Log Pis Max 15.7246 trainer/Log Pis Min -6.67792 trainer/policy/mean Mean 0.107014 trainer/policy/mean Std 0.633772 trainer/policy/mean Max 0.997791 
trainer/policy/mean Min -0.999587 trainer/policy/normal/std Mean 0.373912 trainer/policy/normal/std Std 0.177248 trainer/policy/normal/std Max 1.04494 trainer/policy/normal/std Min 0.0653589 trainer/policy/normal/log_std Mean -1.13032 trainer/policy/normal/log_std Std 0.586537 trainer/policy/normal/log_std Max 0.0439584 trainer/policy/normal/log_std Min -2.72786 eval/num steps total 627303 eval/num paths total 1209 eval/path length Mean 480 eval/path length Std 186 eval/path length Max 666 eval/path length Min 294 eval/Rewards Mean 3.12773 eval/Rewards Std 0.83921 eval/Rewards Max 5.3701 eval/Rewards Min 0.981593 eval/Returns Mean 1501.31 eval/Returns Std 605.081 eval/Returns Max 2106.39 eval/Returns Min 896.227 eval/Actions Mean 0.124838 eval/Actions Std 0.574063 eval/Actions Max 0.998935 eval/Actions Min -0.997875 eval/Num Paths 2 eval/Average Returns 1501.31 eval/normalized_score 46.7521 time/evaluation sampling (s) 0.871081 time/logging (s) 0.00347717 time/sampling batch (s) 0.261021 time/saving (s) 0.00283689 time/training (s) 4.14635 time/epoch (s) 5.28477 time/total (s) 35382.6 Epoch -51 ---------------------------------- --------------- 2022-05-10 23:00:30.246820 PDT | [0] Epoch -50 finished ---------------------------------- --------------- epoch -50 replay_buffer/size 999996 trainer/num train calls 951000 trainer/Policy Loss -2.19496 trainer/Log Pis Mean 2.19636 trainer/Log Pis Std 2.57109 trainer/Log Pis Max 8.86461 trainer/Log Pis Min -4.95996 trainer/policy/mean Mean 0.175575 trainer/policy/mean Std 0.603917 trainer/policy/mean Max 0.999439 trainer/policy/mean Min -0.99814 trainer/policy/normal/std Mean 0.384748 trainer/policy/normal/std Std 0.188499 trainer/policy/normal/std Max 1.21067 trainer/policy/normal/std Min 0.0628333 trainer/policy/normal/log_std Mean -1.11194 trainer/policy/normal/log_std Std 0.607806 trainer/policy/normal/log_std Max 0.191174 trainer/policy/normal/log_std Min -2.76727 eval/num steps total 627870 eval/num paths total 1210 
eval/path length Mean 567 eval/path length Std 0 eval/path length Max 567 eval/path length Min 567 eval/Rewards Mean 3.16235 eval/Rewards Std 0.769177 eval/Rewards Max 4.83679 eval/Rewards Min 0.984439 eval/Returns Mean 1793.05 eval/Returns Std 0 eval/Returns Max 1793.05 eval/Returns Min 1793.05 eval/Actions Mean 0.156644 eval/Actions Std 0.600088 eval/Actions Max 0.998011 eval/Actions Min -0.99811 eval/Num Paths 1 eval/Average Returns 1793.05 eval/normalized_score 55.7162 time/evaluation sampling (s) 0.878193 time/logging (s) 0.00239709 time/sampling batch (s) 0.262337 time/saving (s) 0.00290594 time/training (s) 4.15209 time/epoch (s) 5.29792 time/total (s) 35387.9 Epoch -50 ---------------------------------- --------------- 2022-05-10 23:00:35.570737 PDT | [0] Epoch -49 finished ---------------------------------- --------------- epoch -49 replay_buffer/size 999996 trainer/num train calls 952000 trainer/Policy Loss -2.34058 trainer/Log Pis Mean 2.21805 trainer/Log Pis Std 2.6133 trainer/Log Pis Max 10.6916 trainer/Log Pis Min -4.56057 trainer/policy/mean Mean 0.145945 trainer/policy/mean Std 0.615913 trainer/policy/mean Max 0.998682 trainer/policy/mean Min -0.998367 trainer/policy/normal/std Mean 0.381644 trainer/policy/normal/std Std 0.185331 trainer/policy/normal/std Max 0.951299 trainer/policy/normal/std Min 0.0680935 trainer/policy/normal/log_std Mean -1.11596 trainer/policy/normal/log_std Std 0.598108 trainer/policy/normal/log_std Max -0.049927 trainer/policy/normal/log_std Min -2.68687 eval/num steps total 628704 eval/num paths total 1212 eval/path length Mean 417 eval/path length Std 10 eval/path length Max 427 eval/path length Min 407 eval/Rewards Mean 3.10431 eval/Rewards Std 0.909712 eval/Rewards Max 5.44642 eval/Rewards Min 0.97987 eval/Returns Mean 1294.5 eval/Returns Std 35.1416 eval/Returns Max 1329.64 eval/Returns Min 1259.36 eval/Actions Mean 0.144006 eval/Actions Std 0.571563 eval/Actions Max 0.995795 eval/Actions Min -0.997487 eval/Num Paths 2 
eval/Average Returns 1294.5 eval/normalized_score 40.3976 time/evaluation sampling (s) 0.909591 time/logging (s) 0.00317172 time/sampling batch (s) 0.259473 time/saving (s) 0.00301115 time/training (s) 4.12566 time/epoch (s) 5.3009 time/total (s) 35393.2 Epoch -49 ---------------------------------- --------------- 2022-05-10 23:00:40.893774 PDT | [0] Epoch -48 finished ---------------------------------- --------------- epoch -48 replay_buffer/size 999996 trainer/num train calls 953000 trainer/Policy Loss -2.31492 trainer/Log Pis Mean 2.43146 trainer/Log Pis Std 2.66555 trainer/Log Pis Max 14.3234 trainer/Log Pis Min -6.12639 trainer/policy/mean Mean 0.125487 trainer/policy/mean Std 0.632873 trainer/policy/mean Max 0.997792 trainer/policy/mean Min -0.998315 trainer/policy/normal/std Mean 0.377616 trainer/policy/normal/std Std 0.184305 trainer/policy/normal/std Max 0.995871 trainer/policy/normal/std Min 0.0688644 trainer/policy/normal/log_std Mean -1.12518 trainer/policy/normal/log_std Std 0.591723 trainer/policy/normal/log_std Max -0.00413742 trainer/policy/normal/log_std Min -2.67562 eval/num steps total 629213 eval/num paths total 1213 eval/path length Mean 509 eval/path length Std 0 eval/path length Max 509 eval/path length Min 509 eval/Rewards Mean 3.12645 eval/Rewards Std 0.767362 eval/Rewards Max 4.81546 eval/Rewards Min 0.9854 eval/Returns Mean 1591.36 eval/Returns Std 0 eval/Returns Max 1591.36 eval/Returns Min 1591.36 eval/Actions Mean 0.166227 eval/Actions Std 0.592946 eval/Actions Max 0.998594 eval/Actions Min -0.998473 eval/Num Paths 1 eval/Average Returns 1591.36 eval/normalized_score 49.5191 time/evaluation sampling (s) 0.906775 time/logging (s) 0.00226645 time/sampling batch (s) 0.260301 time/saving (s) 0.00299334 time/training (s) 4.12546 time/epoch (s) 5.2978 time/total (s) 35398.5 Epoch -48 ---------------------------------- --------------- 2022-05-10 23:00:46.216387 PDT | [0] Epoch -47 finished ---------------------------------- --------------- 
epoch -47 replay_buffer/size 999996 trainer/num train calls 954000 trainer/Policy Loss -2.30053 trainer/Log Pis Mean 2.14956 trainer/Log Pis Std 2.62919 trainer/Log Pis Max 11.9272 trainer/Log Pis Min -5.36286 trainer/policy/mean Mean 0.139331 trainer/policy/mean Std 0.616715 trainer/policy/mean Max 0.998465 trainer/policy/mean Min -0.998327 trainer/policy/normal/std Mean 0.3797 trainer/policy/normal/std Std 0.184541 trainer/policy/normal/std Max 0.967371 trainer/policy/normal/std Min 0.0707175 trainer/policy/normal/log_std Mean -1.1195 trainer/policy/normal/log_std Std 0.592435 trainer/policy/normal/log_std Max -0.0331728 trainer/policy/normal/log_std Min -2.64906 eval/num steps total 629719 eval/num paths total 1214 eval/path length Mean 506 eval/path length Std 0 eval/path length Max 506 eval/path length Min 506 eval/Rewards Mean 3.14165 eval/Rewards Std 0.781723 eval/Rewards Max 5.04822 eval/Rewards Min 0.982963 eval/Returns Mean 1589.67 eval/Returns Std 0 eval/Returns Max 1589.67 eval/Returns Min 1589.67 eval/Actions Mean 0.153179 eval/Actions Std 0.590915 eval/Actions Max 0.998806 eval/Actions Min -0.998567 eval/Num Paths 1 eval/Average Returns 1589.67 eval/normalized_score 49.4672 time/evaluation sampling (s) 0.895073 time/logging (s) 0.00221968 time/sampling batch (s) 0.260128 time/saving (s) 0.002939 time/training (s) 4.13755 time/epoch (s) 5.29791 time/total (s) 35403.8 Epoch -47 ---------------------------------- --------------- 2022-05-10 23:00:51.552126 PDT | [0] Epoch -46 finished ---------------------------------- --------------- epoch -46 replay_buffer/size 999996 trainer/num train calls 955000 trainer/Policy Loss -2.37464 trainer/Log Pis Mean 2.31487 trainer/Log Pis Std 2.55991 trainer/Log Pis Max 11.1613 trainer/Log Pis Min -5.23579 trainer/policy/mean Mean 0.139193 trainer/policy/mean Std 0.630994 trainer/policy/mean Max 0.997974 trainer/policy/mean Min -0.997712 trainer/policy/normal/std Mean 0.379823 trainer/policy/normal/std Std 0.185852 
trainer/policy/normal/std Max 0.949357 trainer/policy/normal/std Min 0.0656234 trainer/policy/normal/log_std Mean -1.12295 trainer/policy/normal/log_std Std 0.601912 trainer/policy/normal/log_std Max -0.0519706 trainer/policy/normal/log_std Min -2.72382 eval/num steps total 630640 eval/num paths total 1216 eval/path length Mean 460.5 eval/path length Std 48.5 eval/path length Max 509 eval/path length Min 412 eval/Rewards Mean 3.12555 eval/Rewards Std 0.804047 eval/Rewards Max 5.05233 eval/Rewards Min 0.982585 eval/Returns Mean 1439.31 eval/Returns Std 174.245 eval/Returns Max 1613.56 eval/Returns Min 1265.07 eval/Actions Mean 0.154698 eval/Actions Std 0.594577 eval/Actions Max 0.998386 eval/Actions Min -0.998864 eval/Num Paths 2 eval/Average Returns 1439.31 eval/normalized_score 44.8473 time/evaluation sampling (s) 0.904657 time/logging (s) 0.00340906 time/sampling batch (s) 0.261426 time/saving (s) 0.00310612 time/training (s) 4.13993 time/epoch (s) 5.31253 time/total (s) 35409.1 Epoch -46 ---------------------------------- --------------- 2022-05-10 23:00:56.856154 PDT | [0] Epoch -45 finished ---------------------------------- --------------- epoch -45 replay_buffer/size 999996 trainer/num train calls 956000 trainer/Policy Loss -2.3165 trainer/Log Pis Mean 2.21832 trainer/Log Pis Std 2.63816 trainer/Log Pis Max 9.52438 trainer/Log Pis Min -6.72568 trainer/policy/mean Mean 0.147493 trainer/policy/mean Std 0.613704 trainer/policy/mean Max 0.997183 trainer/policy/mean Min -0.996907 trainer/policy/normal/std Mean 0.376851 trainer/policy/normal/std Std 0.184522 trainer/policy/normal/std Max 0.9629 trainer/policy/normal/std Min 0.0705115 trainer/policy/normal/log_std Mean -1.13 trainer/policy/normal/log_std Std 0.599228 trainer/policy/normal/log_std Max -0.037806 trainer/policy/normal/log_std Min -2.65198 eval/num steps total 631202 eval/num paths total 1217 eval/path length Mean 562 eval/path length Std 0 eval/path length Max 562 eval/path length Min 562 eval/Rewards 
Mean 3.26695 eval/Rewards Std 0.816047 eval/Rewards Max 4.90265 eval/Rewards Min 0.982441 eval/Returns Mean 1836.03 eval/Returns Std 0 eval/Returns Max 1836.03 eval/Returns Min 1836.03 eval/Actions Mean 0.168695 eval/Actions Std 0.586255 eval/Actions Max 0.998262 eval/Actions Min -0.998686 eval/Num Paths 1 eval/Average Returns 1836.03 eval/normalized_score 57.0366 time/evaluation sampling (s) 0.885791 time/logging (s) 0.00239905 time/sampling batch (s) 0.259165 time/saving (s) 0.00299758 time/training (s) 4.12852 time/epoch (s) 5.27887 time/total (s) 35414.4 Epoch -45 ---------------------------------- --------------- 2022-05-10 23:01:02.195508 PDT | [0] Epoch -44 finished ---------------------------------- --------------- epoch -44 replay_buffer/size 999996 trainer/num train calls 957000 trainer/Policy Loss -2.43099 trainer/Log Pis Mean 2.42951 trainer/Log Pis Std 2.73048 trainer/Log Pis Max 10.7162 trainer/Log Pis Min -4.77287 trainer/policy/mean Mean 0.137317 trainer/policy/mean Std 0.62656 trainer/policy/mean Max 0.998855 trainer/policy/mean Min -0.997424 trainer/policy/normal/std Mean 0.380283 trainer/policy/normal/std Std 0.186382 trainer/policy/normal/std Max 0.950621 trainer/policy/normal/std Min 0.0689257 trainer/policy/normal/log_std Mean -1.12207 trainer/policy/normal/log_std Std 0.602079 trainer/policy/normal/log_std Max -0.0506398 trainer/policy/normal/log_std Min -2.67473 eval/num steps total 631787 eval/num paths total 1218 eval/path length Mean 585 eval/path length Std 0 eval/path length Max 585 eval/path length Min 585 eval/Rewards Mean 3.13809 eval/Rewards Std 0.726989 eval/Rewards Max 4.94141 eval/Rewards Min 0.986849 eval/Returns Mean 1835.78 eval/Returns Std 0 eval/Returns Max 1835.78 eval/Returns Min 1835.78 eval/Actions Mean 0.154699 eval/Actions Std 0.59953 eval/Actions Max 0.998087 eval/Actions Min -0.997313 eval/Num Paths 1 eval/Average Returns 1835.78 eval/normalized_score 57.0292 time/evaluation sampling (s) 0.876293 time/logging (s) 
0.00240892 time/sampling batch (s) 0.262761 time/saving (s) 0.00285744 time/training (s) 4.17061 time/epoch (s) 5.31493 time/total (s) 35419.7 Epoch -44 ---------------------------------- --------------- 2022-05-10 23:01:07.514281 PDT | [0] Epoch -43 finished ---------------------------------- --------------- epoch -43 replay_buffer/size 999996 trainer/num train calls 958000 trainer/Policy Loss -2.06497 trainer/Log Pis Mean 2.14971 trainer/Log Pis Std 2.55986 trainer/Log Pis Max 9.42488 trainer/Log Pis Min -8.2814 trainer/policy/mean Mean 0.135157 trainer/policy/mean Std 0.614765 trainer/policy/mean Max 0.99778 trainer/policy/mean Min -0.996508 trainer/policy/normal/std Mean 0.373158 trainer/policy/normal/std Std 0.181284 trainer/policy/normal/std Max 0.91707 trainer/policy/normal/std Min 0.0642602 trainer/policy/normal/log_std Mean -1.13708 trainer/policy/normal/log_std Std 0.593563 trainer/policy/normal/log_std Max -0.0865716 trainer/policy/normal/log_std Min -2.74481 eval/num steps total 632372 eval/num paths total 1219 eval/path length Mean 585 eval/path length Std 0 eval/path length Max 585 eval/path length Min 585 eval/Rewards Mean 3.19843 eval/Rewards Std 0.750023 eval/Rewards Max 4.95687 eval/Rewards Min 0.987635 eval/Returns Mean 1871.08 eval/Returns Std 0 eval/Returns Max 1871.08 eval/Returns Min 1871.08 eval/Actions Mean 0.16842 eval/Actions Std 0.597585 eval/Actions Max 0.99865 eval/Actions Min -0.998275 eval/Num Paths 1 eval/Average Returns 1871.08 eval/normalized_score 58.1138 time/evaluation sampling (s) 0.875741 time/logging (s) 0.00238009 time/sampling batch (s) 0.261853 time/saving (s) 0.00291825 time/training (s) 4.15143 time/epoch (s) 5.29432 time/total (s) 35425 Epoch -43 ---------------------------------- --------------- 2022-05-10 23:01:12.863164 PDT | [0] Epoch -42 finished ---------------------------------- --------------- epoch -42 replay_buffer/size 999996 trainer/num train calls 959000 trainer/Policy Loss -2.00164 trainer/Log Pis Mean 
1.88567 trainer/Log Pis Std 2.45426 trainer/Log Pis Max 8.26066 trainer/Log Pis Min -4.96775 trainer/policy/mean Mean 0.106575 trainer/policy/mean Std 0.600862 trainer/policy/mean Max 0.995857 trainer/policy/mean Min -0.998413 trainer/policy/normal/std Mean 0.376103 trainer/policy/normal/std Std 0.183715 trainer/policy/normal/std Max 0.944983 trainer/policy/normal/std Min 0.0671491 trainer/policy/normal/log_std Mean -1.12883 trainer/policy/normal/log_std Std 0.591566 trainer/policy/normal/log_std Max -0.0565878 trainer/policy/normal/log_std Min -2.70084 eval/num steps total 633209 eval/num paths total 1221 eval/path length Mean 418.5 eval/path length Std 141.5 eval/path length Max 560 eval/path length Min 277 eval/Rewards Mean 3.12603 eval/Rewards Std 0.927875 eval/Rewards Max 5.75492 eval/Rewards Min 0.981993 eval/Returns Mean 1308.24 eval/Returns Std 456.551 eval/Returns Max 1764.8 eval/Returns Min 851.694 eval/Actions Mean 0.123027 eval/Actions Std 0.571862 eval/Actions Max 0.999374 eval/Actions Min -0.998262 eval/Num Paths 2 eval/Average Returns 1308.24 eval/normalized_score 40.82 time/evaluation sampling (s) 0.86869 time/logging (s) 0.00316233 time/sampling batch (s) 0.269064 time/saving (s) 0.00288147 time/training (s) 4.1817 time/epoch (s) 5.32549 time/total (s) 35430.3 Epoch -42 ---------------------------------- --------------- 2022-05-10 23:01:18.186074 PDT | [0] Epoch -41 finished ---------------------------------- --------------- epoch -41 replay_buffer/size 999996 trainer/num train calls 960000 trainer/Policy Loss -2.18158 trainer/Log Pis Mean 2.09068 trainer/Log Pis Std 2.48605 trainer/Log Pis Max 9.99281 trainer/Log Pis Min -4.36815 trainer/policy/mean Mean 0.129708 trainer/policy/mean Std 0.604298 trainer/policy/mean Max 0.998042 trainer/policy/mean Min -0.998343 trainer/policy/normal/std Mean 0.373505 trainer/policy/normal/std Std 0.184419 trainer/policy/normal/std Max 0.940963 trainer/policy/normal/std Min 0.0638505 trainer/policy/normal/log_std 
Mean -1.14335 trainer/policy/normal/log_std Std 0.609409 trainer/policy/normal/log_std Max -0.0608515 trainer/policy/normal/log_std Min -2.75121 eval/num steps total 633775 eval/num paths total 1222 eval/path length Mean 566 eval/path length Std 0 eval/path length Max 566 eval/path length Min 566 eval/Rewards Mean 3.26422 eval/Rewards Std 0.75977 eval/Rewards Max 4.75942 eval/Rewards Min 0.980087 eval/Returns Mean 1847.55 eval/Returns Std 0 eval/Returns Max 1847.55 eval/Returns Min 1847.55 eval/Actions Mean 0.145333 eval/Actions Std 0.606897 eval/Actions Max 0.998065 eval/Actions Min -0.998691 eval/Num Paths 1 eval/Average Returns 1847.55 eval/normalized_score 57.3907 time/evaluation sampling (s) 0.865503 time/logging (s) 0.0024525 time/sampling batch (s) 0.261719 time/saving (s) 0.00313212 time/training (s) 4.16513 time/epoch (s) 5.29794 time/total (s) 35435.6 Epoch -41 ---------------------------------- --------------- 2022-05-10 23:01:23.489478 PDT | [0] Epoch -40 finished ---------------------------------- --------------- epoch -40 replay_buffer/size 999996 trainer/num train calls 961000 trainer/Policy Loss -2.42054 trainer/Log Pis Mean 2.42151 trainer/Log Pis Std 2.58624 trainer/Log Pis Max 11.6008 trainer/Log Pis Min -4.72907 trainer/policy/mean Mean 0.144542 trainer/policy/mean Std 0.618082 trainer/policy/mean Max 0.998843 trainer/policy/mean Min -0.999867 trainer/policy/normal/std Mean 0.372745 trainer/policy/normal/std Std 0.1824 trainer/policy/normal/std Max 1.08477 trainer/policy/normal/std Min 0.0724771 trainer/policy/normal/log_std Mean -1.14002 trainer/policy/normal/log_std Std 0.595816 trainer/policy/normal/log_std Max 0.0813639 trainer/policy/normal/log_std Min -2.62448 eval/num steps total 634720 eval/num paths total 1224 eval/path length Mean 472.5 eval/path length Std 3.5 eval/path length Max 476 eval/path length Min 469 eval/Rewards Mean 3.20328 eval/Rewards Std 0.861757 eval/Rewards Max 4.93533 eval/Rewards Min 0.9812 eval/Returns Mean 1513.55 
eval/Returns Std 2.24347 eval/Returns Max 1515.79 eval/Returns Min 1511.3 eval/Actions Mean 0.152109 eval/Actions Std 0.592708 eval/Actions Max 0.998084 eval/Actions Min -0.998696 eval/Num Paths 2 eval/Average Returns 1513.55 eval/normalized_score 47.1282 time/evaluation sampling (s) 0.865124 time/logging (s) 0.00333541 time/sampling batch (s) 0.261506 time/saving (s) 0.00286116 time/training (s) 4.14719 time/epoch (s) 5.28002 time/total (s) 35440.9 Epoch -40 ---------------------------------- --------------- 2022-05-10 23:01:28.804003 PDT | [0] Epoch -39 finished ---------------------------------- --------------- epoch -39 replay_buffer/size 999996 trainer/num train calls 962000 trainer/Policy Loss -2.25463 trainer/Log Pis Mean 2.30021 trainer/Log Pis Std 2.52154 trainer/Log Pis Max 12.6118 trainer/Log Pis Min -5.74111 trainer/policy/mean Mean 0.147062 trainer/policy/mean Std 0.6097 trainer/policy/mean Max 0.998593 trainer/policy/mean Min -0.997923 trainer/policy/normal/std Mean 0.372234 trainer/policy/normal/std Std 0.182728 trainer/policy/normal/std Max 1.05205 trainer/policy/normal/std Min 0.0671496 trainer/policy/normal/log_std Mean -1.14055 trainer/policy/normal/log_std Std 0.59354 trainer/policy/normal/log_std Max 0.0507392 trainer/policy/normal/log_std Min -2.70083 eval/num steps total 635195 eval/num paths total 1225 eval/path length Mean 475 eval/path length Std 0 eval/path length Max 475 eval/path length Min 475 eval/Rewards Mean 3.17455 eval/Rewards Std 0.862389 eval/Rewards Max 4.71818 eval/Rewards Min 0.98126 eval/Returns Mean 1507.91 eval/Returns Std 0 eval/Returns Max 1507.91 eval/Returns Min 1507.91 eval/Actions Mean 0.149262 eval/Actions Std 0.59873 eval/Actions Max 0.997033 eval/Actions Min -0.998958 eval/Num Paths 1 eval/Average Returns 1507.91 eval/normalized_score 46.955 time/evaluation sampling (s) 0.864257 time/logging (s) 0.00244913 time/sampling batch (s) 0.262837 time/saving (s) 0.00321826 time/training (s) 4.15648 time/epoch (s) 5.28924 
time/total (s) 35446.2 Epoch -39 ---------------------------------- --------------- 2022-05-10 23:01:34.128346 PDT | [0] Epoch -38 finished ---------------------------------- --------------- epoch -38 replay_buffer/size 999996 trainer/num train calls 963000 trainer/Policy Loss -2.22731 trainer/Log Pis Mean 2.53409 trainer/Log Pis Std 2.61779 trainer/Log Pis Max 9.67107 trainer/Log Pis Min -6.62452 trainer/policy/mean Mean 0.160849 trainer/policy/mean Std 0.622068 trainer/policy/mean Max 0.998118 trainer/policy/mean Min -0.998319 trainer/policy/normal/std Mean 0.380374 trainer/policy/normal/std Std 0.188195 trainer/policy/normal/std Max 1.03831 trainer/policy/normal/std Min 0.0642403 trainer/policy/normal/log_std Mean -1.12378 trainer/policy/normal/log_std Std 0.606322 trainer/policy/normal/log_std Max 0.0375957 trainer/policy/normal/log_std Min -2.74512 eval/num steps total 635773 eval/num paths total 1226 eval/path length Mean 578 eval/path length Std 0 eval/path length Max 578 eval/path length Min 578 eval/Rewards Mean 3.19308 eval/Rewards Std 0.715236 eval/Rewards Max 4.79154 eval/Rewards Min 0.985336 eval/Returns Mean 1845.6 eval/Returns Std 0 eval/Returns Max 1845.6 eval/Returns Min 1845.6 eval/Actions Mean 0.157495 eval/Actions Std 0.614438 eval/Actions Max 0.998416 eval/Actions Min -0.998545 eval/Num Paths 1 eval/Average Returns 1845.6 eval/normalized_score 57.3309 time/evaluation sampling (s) 0.869134 time/logging (s) 0.00241857 time/sampling batch (s) 0.262182 time/saving (s) 0.00281807 time/training (s) 4.16292 time/epoch (s) 5.29947 time/total (s) 35451.5 Epoch -38 ---------------------------------- --------------- 2022-05-10 23:01:39.502190 PDT | [0] Epoch -37 finished ---------------------------------- --------------- epoch -37 replay_buffer/size 999996 trainer/num train calls 964000 trainer/Policy Loss -2.22987 trainer/Log Pis Mean 2.27091 trainer/Log Pis Std 2.60473 trainer/Log Pis Max 10.7012 trainer/Log Pis Min -4.9006 trainer/policy/mean Mean 
0.170178 trainer/policy/mean Std 0.620753 trainer/policy/mean Max 0.997872 trainer/policy/mean Min -0.997613 trainer/policy/normal/std Mean 0.388892 trainer/policy/normal/std Std 0.187491 trainer/policy/normal/std Max 0.996022 trainer/policy/normal/std Min 0.0652651 trainer/policy/normal/log_std Mean -1.09362 trainer/policy/normal/log_std Std 0.590402 trainer/policy/normal/log_std Max -0.00398613 trainer/policy/normal/log_std Min -2.7293 eval/num steps total 636677 eval/num paths total 1228 eval/path length Mean 452 eval/path length Std 47 eval/path length Max 499 eval/path length Min 405 eval/Rewards Mean 3.09834 eval/Rewards Std 0.77039 eval/Rewards Max 4.66246 eval/Rewards Min 0.98072 eval/Returns Mean 1400.45 eval/Returns Std 157.399 eval/Returns Max 1557.85 eval/Returns Min 1243.05 eval/Actions Mean 0.149104 eval/Actions Std 0.599117 eval/Actions Max 0.997759 eval/Actions Min -0.997003 eval/Num Paths 2 eval/Average Returns 1400.45 eval/normalized_score 43.6531 time/evaluation sampling (s) 0.936706 time/logging (s) 0.00338653 time/sampling batch (s) 0.261109 time/saving (s) 0.00299882 time/training (s) 4.14661 time/epoch (s) 5.35081 time/total (s) 35456.9 Epoch -37 ---------------------------------- --------------- 2022-05-10 23:01:44.838766 PDT | [0] Epoch -36 finished ---------------------------------- --------------- epoch -36 replay_buffer/size 999996 trainer/num train calls 965000 trainer/Policy Loss -2.34084 trainer/Log Pis Mean 2.38262 trainer/Log Pis Std 2.57261 trainer/Log Pis Max 16.9537 trainer/Log Pis Min -4.0852 trainer/policy/mean Mean 0.172956 trainer/policy/mean Std 0.613569 trainer/policy/mean Max 0.998834 trainer/policy/mean Min -0.999592 trainer/policy/normal/std Mean 0.372203 trainer/policy/normal/std Std 0.181602 trainer/policy/normal/std Max 0.926281 trainer/policy/normal/std Min 0.0636183 trainer/policy/normal/log_std Mean -1.14644 trainer/policy/normal/log_std Std 0.611297 trainer/policy/normal/log_std Max -0.0765776 
trainer/policy/normal/log_std Min -2.75485 eval/num steps total 637155 eval/num paths total 1229 eval/path length Mean 478 eval/path length Std 0 eval/path length Max 478 eval/path length Min 478 eval/Rewards Mean 3.18359 eval/Rewards Std 0.839535 eval/Rewards Max 4.74843 eval/Rewards Min 0.979144 eval/Returns Mean 1521.75 eval/Returns Std 0 eval/Returns Max 1521.75 eval/Returns Min 1521.75 eval/Actions Mean 0.154356 eval/Actions Std 0.606921 eval/Actions Max 0.997609 eval/Actions Min -0.998492 eval/Num Paths 1 eval/Average Returns 1521.75 eval/normalized_score 47.3803 time/evaluation sampling (s) 0.864472 time/logging (s) 0.00211886 time/sampling batch (s) 0.263274 time/saving (s) 0.00283986 time/training (s) 4.17823 time/epoch (s) 5.31093 time/total (s) 35462.2 Epoch -36 ---------------------------------- --------------- 2022-05-10 23:01:50.176032 PDT | [0] Epoch -35 finished ---------------------------------- --------------- epoch -35 replay_buffer/size 999996 trainer/num train calls 966000 trainer/Policy Loss -2.03412 trainer/Log Pis Mean 2.06123 trainer/Log Pis Std 2.37055 trainer/Log Pis Max 9.97119 trainer/Log Pis Min -4.3425 trainer/policy/mean Mean 0.119683 trainer/policy/mean Std 0.601697 trainer/policy/mean Max 0.997798 trainer/policy/mean Min -0.998499 trainer/policy/normal/std Mean 0.374276 trainer/policy/normal/std Std 0.191763 trainer/policy/normal/std Max 1.02633 trainer/policy/normal/std Min 0.0646025 trainer/policy/normal/log_std Mean -1.14747 trainer/policy/normal/log_std Std 0.616651 trainer/policy/normal/log_std Max 0.0259923 trainer/policy/normal/log_std Min -2.7395 eval/num steps total 638147 eval/num paths total 1231 eval/path length Mean 496 eval/path length Std 10 eval/path length Max 506 eval/path length Min 486 eval/Rewards Mean 3.11947 eval/Rewards Std 0.804355 eval/Rewards Max 5.03088 eval/Rewards Min 0.983404 eval/Returns Mean 1547.26 eval/Returns Std 15.9479 eval/Returns Max 1563.2 eval/Returns Min 1531.31 eval/Actions Mean 0.144918 
eval/Actions Std 0.589649 eval/Actions Max 0.998523 eval/Actions Min -0.998695 eval/Num Paths 2 eval/Average Returns 1547.26 eval/normalized_score 48.1639 time/evaluation sampling (s) 0.880416 time/logging (s) 0.00362314 time/sampling batch (s) 0.262443 time/saving (s) 0.00289039 time/training (s) 4.16548 time/epoch (s) 5.31485 time/total (s) 35467.5 Epoch -35 ---------------------------------- --------------- 2022-05-10 23:01:55.490559 PDT | [0] Epoch -34 finished ---------------------------------- --------------- epoch -34 replay_buffer/size 999996 trainer/num train calls 967000 trainer/Policy Loss -2.25856 trainer/Log Pis Mean 2.33097 trainer/Log Pis Std 2.72451 trainer/Log Pis Max 9.34219 trainer/Log Pis Min -8.3512 trainer/policy/mean Mean 0.159763 trainer/policy/mean Std 0.616238 trainer/policy/mean Max 0.997573 trainer/policy/mean Min -0.997716 trainer/policy/normal/std Mean 0.378793 trainer/policy/normal/std Std 0.180102 trainer/policy/normal/std Max 0.959073 trainer/policy/normal/std Min 0.0725967 trainer/policy/normal/log_std Mean -1.11404 trainer/policy/normal/log_std Std 0.576081 trainer/policy/normal/log_std Max -0.0417878 trainer/policy/normal/log_std Min -2.62284 eval/num steps total 638647 eval/num paths total 1232 eval/path length Mean 500 eval/path length Std 0 eval/path length Max 500 eval/path length Min 500 eval/Rewards Mean 3.12959 eval/Rewards Std 0.782859 eval/Rewards Max 4.74509 eval/Rewards Min 0.98258 eval/Returns Mean 1564.8 eval/Returns Std 0 eval/Returns Max 1564.8 eval/Returns Min 1564.8 eval/Actions Mean 0.158652 eval/Actions Std 0.597676 eval/Actions Max 0.998797 eval/Actions Min -0.9966 eval/Num Paths 1 eval/Average Returns 1564.8 eval/normalized_score 48.7028 time/evaluation sampling (s) 0.891919 time/logging (s) 0.0022254 time/sampling batch (s) 0.25956 time/saving (s) 0.00287546 time/training (s) 4.13246 time/epoch (s) 5.28904 time/total (s) 35472.8 Epoch -34 ---------------------------------- --------------- 2022-05-10 
23:02:00.802997 PDT | [0] Epoch -33 finished ---------------------------------- --------------- epoch -33 replay_buffer/size 999996 trainer/num train calls 968000 trainer/Policy Loss -2.30149 trainer/Log Pis Mean 2.435 trainer/Log Pis Std 2.51541 trainer/Log Pis Max 10.2704 trainer/Log Pis Min -4.12058 trainer/policy/mean Mean 0.155072 trainer/policy/mean Std 0.627953 trainer/policy/mean Max 0.998451 trainer/policy/mean Min -0.997937 trainer/policy/normal/std Mean 0.37442 trainer/policy/normal/std Std 0.178906 trainer/policy/normal/std Max 0.91413 trainer/policy/normal/std Min 0.0633547 trainer/policy/normal/log_std Mean -1.13129 trainer/policy/normal/log_std Std 0.591263 trainer/policy/normal/log_std Max -0.0897825 trainer/policy/normal/log_std Min -2.75901 eval/num steps total 639619 eval/num paths total 1234 eval/path length Mean 486 eval/path length Std 6 eval/path length Max 492 eval/path length Min 480 eval/Rewards Mean 3.18389 eval/Rewards Std 0.817702 eval/Rewards Max 4.8358 eval/Rewards Min 0.983577 eval/Returns Mean 1547.37 eval/Returns Std 19.9737 eval/Returns Max 1567.35 eval/Returns Min 1527.4 eval/Actions Mean 0.151094 eval/Actions Std 0.599227 eval/Actions Max 0.998036 eval/Actions Min -0.999067 eval/Num Paths 2 eval/Average Returns 1547.37 eval/normalized_score 48.1675 time/evaluation sampling (s) 0.88494 time/logging (s) 0.00358669 time/sampling batch (s) 0.260396 time/saving (s) 0.00304555 time/training (s) 4.13748 time/epoch (s) 5.28945 time/total (s) 35478.1 Epoch -33 ---------------------------------- --------------- 2022-05-10 23:02:06.152680 PDT | [0] Epoch -32 finished ---------------------------------- --------------- epoch -32 replay_buffer/size 999996 trainer/num train calls 969000 trainer/Policy Loss -2.44631 trainer/Log Pis Mean 2.46682 trainer/Log Pis Std 2.42738 trainer/Log Pis Max 13.677 trainer/Log Pis Min -4.76881 trainer/policy/mean Mean 0.182353 trainer/policy/mean Std 0.622269 trainer/policy/mean Max 0.997392 trainer/policy/mean 
Min -0.998377 trainer/policy/normal/std Mean 0.379415 trainer/policy/normal/std Std 0.189886 trainer/policy/normal/std Max 0.934945 trainer/policy/normal/std Min 0.0539546 trainer/policy/normal/log_std Mean -1.13349 trainer/policy/normal/log_std Std 0.623301 trainer/policy/normal/log_std Max -0.067267 trainer/policy/normal/log_std Min -2.91961 eval/num steps total 640521 eval/num paths total 1236 eval/path length Mean 451 eval/path length Std 44 eval/path length Max 495 eval/path length Min 407 eval/Rewards Mean 3.13681 eval/Rewards Std 0.782778 eval/Rewards Max 4.8813 eval/Rewards Min 0.988391 eval/Returns Mean 1414.7 eval/Returns Std 164.349 eval/Returns Max 1579.05 eval/Returns Min 1250.35 eval/Actions Mean 0.156328 eval/Actions Std 0.599563 eval/Actions Max 0.997008 eval/Actions Min -0.999301 eval/Num Paths 2 eval/Average Returns 1414.7 eval/normalized_score 44.091 time/evaluation sampling (s) 0.91365 time/logging (s) 0.00339721 time/sampling batch (s) 0.263008 time/saving (s) 0.00301898 time/training (s) 4.14193 time/epoch (s) 5.325 time/total (s) 35483.4 Epoch -32 ---------------------------------- --------------- 2022-05-10 23:02:11.487956 PDT | [0] Epoch -31 finished ---------------------------------- --------------- epoch -31 replay_buffer/size 999996 trainer/num train calls 970000 trainer/Policy Loss -2.23651 trainer/Log Pis Mean 2.38199 trainer/Log Pis Std 2.60994 trainer/Log Pis Max 9.96059 trainer/Log Pis Min -4.55846 trainer/policy/mean Mean 0.156256 trainer/policy/mean Std 0.622675 trainer/policy/mean Max 0.998046 trainer/policy/mean Min -0.998382 trainer/policy/normal/std Mean 0.377157 trainer/policy/normal/std Std 0.183492 trainer/policy/normal/std Max 1.01579 trainer/policy/normal/std Min 0.0698318 trainer/policy/normal/log_std Mean -1.12391 trainer/policy/normal/log_std Std 0.585873 trainer/policy/normal/log_std Max 0.0156705 trainer/policy/normal/log_std Min -2.66167 eval/num steps total 641106 eval/num paths total 1237 eval/path length Mean 585 
eval/path length Std 0 eval/path length Max 585 eval/path length Min 585 eval/Rewards Mean 3.14592 eval/Rewards Std 0.748335 eval/Rewards Max 5.21611 eval/Rewards Min 0.984613 eval/Returns Mean 1840.37 eval/Returns Std 0 eval/Returns Max 1840.37 eval/Returns Min 1840.37 eval/Actions Mean 0.150401 eval/Actions Std 0.599821 eval/Actions Max 0.998364 eval/Actions Min -0.997963 eval/Num Paths 1 eval/Average Returns 1840.37 eval/normalized_score 57.17 time/evaluation sampling (s) 0.879989 time/logging (s) 0.00245407 time/sampling batch (s) 0.261679 time/saving (s) 0.00294745 time/training (s) 4.16283 time/epoch (s) 5.3099 time/total (s) 35488.7 Epoch -31 ---------------------------------- --------------- 2022-05-10 23:02:16.818279 PDT | [0] Epoch -30 finished ---------------------------------- --------------- epoch -30 replay_buffer/size 999996 trainer/num train calls 971000 trainer/Policy Loss -2.31404 trainer/Log Pis Mean 2.2651 trainer/Log Pis Std 2.39603 trainer/Log Pis Max 9.22955 trainer/Log Pis Min -4.38824 trainer/policy/mean Mean 0.163328 trainer/policy/mean Std 0.614646 trainer/policy/mean Max 0.995107 trainer/policy/mean Min -0.997056 trainer/policy/normal/std Mean 0.376606 trainer/policy/normal/std Std 0.181322 trainer/policy/normal/std Max 0.990277 trainer/policy/normal/std Min 0.065103 trainer/policy/normal/log_std Mean -1.1295 trainer/policy/normal/log_std Std 0.601028 trainer/policy/normal/log_std Max -0.0097701 trainer/policy/normal/log_std Min -2.73178 eval/num steps total 641620 eval/num paths total 1238 eval/path length Mean 514 eval/path length Std 0 eval/path length Max 514 eval/path length Min 514 eval/Rewards Mean 3.11305 eval/Rewards Std 0.838699 eval/Rewards Max 5.43452 eval/Rewards Min 0.988499 eval/Returns Mean 1600.11 eval/Returns Std 0 eval/Returns Max 1600.11 eval/Returns Min 1600.11 eval/Actions Mean 0.153909 eval/Actions Std 0.580555 eval/Actions Max 0.996688 eval/Actions Min -0.998005 eval/Num Paths 1 eval/Average Returns 1600.11 
eval/normalized_score 49.7878 time/evaluation sampling (s) 0.890553 time/logging (s) 0.00225838 time/sampling batch (s) 0.260399 time/saving (s) 0.0029528 time/training (s) 4.14904 time/epoch (s) 5.3052 time/total (s) 35494 Epoch -30 ---------------------------------- --------------- 2022-05-10 23:02:22.157036 PDT | [0] Epoch -29 finished ---------------------------------- --------------- epoch -29 replay_buffer/size 999996 trainer/num train calls 972000 trainer/Policy Loss -2.53969 trainer/Log Pis Mean 2.54583 trainer/Log Pis Std 2.63211 trainer/Log Pis Max 9.61682 trainer/Log Pis Min -3.94464 trainer/policy/mean Mean 0.135645 trainer/policy/mean Std 0.63757 trainer/policy/mean Max 0.996904 trainer/policy/mean Min -0.997807 trainer/policy/normal/std Mean 0.374398 trainer/policy/normal/std Std 0.18407 trainer/policy/normal/std Max 0.868543 trainer/policy/normal/std Min 0.0636938 trainer/policy/normal/log_std Mean -1.14074 trainer/policy/normal/log_std Std 0.609758 trainer/policy/normal/log_std Max -0.140938 trainer/policy/normal/log_std Min -2.75367 eval/num steps total 642430 eval/num paths total 1239 eval/path length Mean 810 eval/path length Std 0 eval/path length Max 810 eval/path length Min 810 eval/Rewards Mean 3.28045 eval/Rewards Std 0.667373 eval/Rewards Max 4.75383 eval/Rewards Min 0.990024 eval/Returns Mean 2657.17 eval/Returns Std 0 eval/Returns Max 2657.17 eval/Returns Min 2657.17 eval/Actions Mean 0.152169 eval/Actions Std 0.6145 eval/Actions Max 0.997292 eval/Actions Min -0.998817 eval/Num Paths 1 eval/Average Returns 2657.17 eval/normalized_score 82.267 time/evaluation sampling (s) 0.881658 time/logging (s) 0.00300772 time/sampling batch (s) 0.263758 time/saving (s) 0.00287746 time/training (s) 4.16371 time/epoch (s) 5.31501 time/total (s) 35499.4 Epoch -29 ---------------------------------- --------------- 2022-05-10 23:02:27.496992 PDT | [0] Epoch -28 finished ---------------------------------- --------------- epoch -28 replay_buffer/size 999996 
trainer/num train calls 973000 trainer/Policy Loss -2.26032 trainer/Log Pis Mean 2.2892 trainer/Log Pis Std 2.51212 trainer/Log Pis Max 9.53434 trainer/Log Pis Min -5.88905 trainer/policy/mean Mean 0.14651 trainer/policy/mean Std 0.611476 trainer/policy/mean Max 0.996748 trainer/policy/mean Min -0.998843 trainer/policy/normal/std Mean 0.374919 trainer/policy/normal/std Std 0.188808 trainer/policy/normal/std Max 0.99171 trainer/policy/normal/std Min 0.0660798 trainer/policy/normal/log_std Mean -1.14141 trainer/policy/normal/log_std Std 0.608494 trainer/policy/normal/log_std Max -0.00832503 trainer/policy/normal/log_std Min -2.71689 eval/num steps total 642919 eval/num paths total 1240 eval/path length Mean 489 eval/path length Std 0 eval/path length Max 489 eval/path length Min 489 eval/Rewards Mean 3.1115 eval/Rewards Std 0.80448 eval/Rewards Max 4.87019 eval/Rewards Min 0.982206 eval/Returns Mean 1521.52 eval/Returns Std 0 eval/Returns Max 1521.52 eval/Returns Min 1521.52 eval/Actions Mean 0.145446 eval/Actions Std 0.583368 eval/Actions Max 0.998188 eval/Actions Min -0.997994 eval/Num Paths 1 eval/Average Returns 1521.52 eval/normalized_score 47.3732 time/evaluation sampling (s) 0.867476 time/logging (s) 0.00215791 time/sampling batch (s) 0.263081 time/saving (s) 0.00281364 time/training (s) 4.17895 time/epoch (s) 5.31448 time/total (s) 35504.7 Epoch -28 ---------------------------------- --------------- 2022-05-10 23:02:32.766626 PDT | [0] Epoch -27 finished ---------------------------------- --------------- epoch -27 replay_buffer/size 999996 trainer/num train calls 974000 trainer/Policy Loss -2.2381 trainer/Log Pis Mean 2.12918 trainer/Log Pis Std 2.56487 trainer/Log Pis Max 10.6369 trainer/Log Pis Min -9.09268 trainer/policy/mean Mean 0.12828 trainer/policy/mean Std 0.615594 trainer/policy/mean Max 0.996933 trainer/policy/mean Min -0.998312 trainer/policy/normal/std Mean 0.384551 trainer/policy/normal/std Std 0.186445 trainer/policy/normal/std Max 0.986264 
trainer/policy/normal/std Min 0.0664699 trainer/policy/normal/log_std Mean -1.10438 trainer/policy/normal/log_std Std 0.585854 trainer/policy/normal/log_std Max -0.0138309 trainer/policy/normal/log_std Min -2.71101 eval/num steps total 643431 eval/num paths total 1241 eval/path length Mean 512 eval/path length Std 0 eval/path length Max 512 eval/path length Min 512 eval/Rewards Mean 3.12947 eval/Rewards Std 0.795669 eval/Rewards Max 5.12558 eval/Rewards Min 0.981844 eval/Returns Mean 1602.29 eval/Returns Std 0 eval/Returns Max 1602.29 eval/Returns Min 1602.29 eval/Actions Mean 0.164465 eval/Actions Std 0.590991 eval/Actions Max 0.998549 eval/Actions Min -0.996255 eval/Num Paths 1 eval/Average Returns 1602.29 eval/normalized_score 49.8548 time/evaluation sampling (s) 0.856454 time/logging (s) 0.00219115 time/sampling batch (s) 0.257212 time/saving (s) 0.00275915 time/training (s) 4.12683 time/epoch (s) 5.24545 time/total (s) 35509.9 Epoch -27 ---------------------------------- --------------- 2022-05-10 23:02:38.045317 PDT | [0] Epoch -26 finished ---------------------------------- --------------- epoch -26 replay_buffer/size 999996 trainer/num train calls 975000 trainer/Policy Loss -2.42139 trainer/Log Pis Mean 2.22265 trainer/Log Pis Std 2.53779 trainer/Log Pis Max 10.16 trainer/Log Pis Min -5.17459 trainer/policy/mean Mean 0.109432 trainer/policy/mean Std 0.619616 trainer/policy/mean Max 0.997423 trainer/policy/mean Min -0.997764 trainer/policy/normal/std Mean 0.373844 trainer/policy/normal/std Std 0.180944 trainer/policy/normal/std Max 0.972473 trainer/policy/normal/std Min 0.0690529 trainer/policy/normal/log_std Mean -1.13341 trainer/policy/normal/log_std Std 0.589507 trainer/policy/normal/log_std Max -0.0279132 trainer/policy/normal/log_std Min -2.67288 eval/num steps total 643915 eval/num paths total 1242 eval/path length Mean 484 eval/path length Std 0 eval/path length Max 484 eval/path length Min 484 eval/Rewards Mean 3.14562 eval/Rewards Std 0.849909 
eval/Rewards Max 4.81279 eval/Rewards Min 0.986549 eval/Returns Mean 1522.48 eval/Returns Std 0 eval/Returns Max 1522.48 eval/Returns Min 1522.48 eval/Actions Mean 0.146254 eval/Actions Std 0.588852 eval/Actions Max 0.998508 eval/Actions Min -0.998363 eval/Num Paths 1 eval/Average Returns 1522.48 eval/normalized_score 47.4026 time/evaluation sampling (s) 0.86007 time/logging (s) 0.00211585 time/sampling batch (s) 0.258141 time/saving (s) 0.00277604 time/training (s) 4.132 time/epoch (s) 5.2551 time/total (s) 35515.2 Epoch -26 ---------------------------------- --------------- 2022-05-10 23:02:43.285638 PDT | [0] Epoch -25 finished ---------------------------------- --------------- epoch -25 replay_buffer/size 999996 trainer/num train calls 976000 trainer/Policy Loss -2.12277 trainer/Log Pis Mean 2.37985 trainer/Log Pis Std 2.59921 trainer/Log Pis Max 12.8513 trainer/Log Pis Min -5.91603 trainer/policy/mean Mean 0.0904844 trainer/policy/mean Std 0.617565 trainer/policy/mean Max 0.999223 trainer/policy/mean Min -0.998293 trainer/policy/normal/std Mean 0.369398 trainer/policy/normal/std Std 0.184638 trainer/policy/normal/std Max 0.970494 trainer/policy/normal/std Min 0.062722 trainer/policy/normal/log_std Mean -1.1554 trainer/policy/normal/log_std Std 0.60848 trainer/policy/normal/log_std Max -0.02995 trainer/policy/normal/log_std Min -2.76904 eval/num steps total 644575 eval/num paths total 1243 eval/path length Mean 660 eval/path length Std 0 eval/path length Max 660 eval/path length Min 660 eval/Rewards Mean 3.21564 eval/Rewards Std 0.730501 eval/Rewards Max 4.80609 eval/Rewards Min 0.983623 eval/Returns Mean 2122.32 eval/Returns Std 0 eval/Returns Max 2122.32 eval/Returns Min 2122.32 eval/Actions Mean 0.159546 eval/Actions Std 0.608045 eval/Actions Max 0.998787 eval/Actions Min -0.998451 eval/Num Paths 1 eval/Average Returns 2122.32 eval/normalized_score 65.8334 time/evaluation sampling (s) 0.854595 time/logging (s) 0.00267999 time/sampling batch (s) 0.256356 
time/saving (s) 0.00292663 time/training (s) 4.10076 time/epoch (s) 5.21732 time/total (s) 35520.4 Epoch -25 ---------------------------------- --------------- 2022-05-10 23:02:48.565957 PDT | [0] Epoch -24 finished ---------------------------------- --------------- epoch -24 replay_buffer/size 999996 trainer/num train calls 977000 trainer/Policy Loss -2.35457 trainer/Log Pis Mean 2.28131 trainer/Log Pis Std 2.53872 trainer/Log Pis Max 10.6182 trainer/Log Pis Min -4.11852 trainer/policy/mean Mean 0.144077 trainer/policy/mean Std 0.620149 trainer/policy/mean Max 0.998785 trainer/policy/mean Min -0.996948 trainer/policy/normal/std Mean 0.376686 trainer/policy/normal/std Std 0.181264 trainer/policy/normal/std Max 0.965941 trainer/policy/normal/std Min 0.0707865 trainer/policy/normal/log_std Mean -1.12336 trainer/policy/normal/log_std Std 0.583284 trainer/policy/normal/log_std Max -0.0346521 trainer/policy/normal/log_std Min -2.64809 eval/num steps total 645153 eval/num paths total 1244 eval/path length Mean 578 eval/path length Std 0 eval/path length Max 578 eval/path length Min 578 eval/Rewards Mean 3.13615 eval/Rewards Std 0.701203 eval/Rewards Max 4.58787 eval/Rewards Min 0.980119 eval/Returns Mean 1812.7 eval/Returns Std 0 eval/Returns Max 1812.7 eval/Returns Min 1812.7 eval/Actions Mean 0.147289 eval/Actions Std 0.607444 eval/Actions Max 0.998326 eval/Actions Min -0.998417 eval/Num Paths 1 eval/Average Returns 1812.7 eval/normalized_score 56.3198 time/evaluation sampling (s) 0.860116 time/logging (s) 0.00230732 time/sampling batch (s) 0.257692 time/saving (s) 0.00290712 time/training (s) 4.13257 time/epoch (s) 5.2556 time/total (s) 35525.7 Epoch -24 ---------------------------------- --------------- 2022-05-10 23:02:53.817607 PDT | [0] Epoch -23 finished ---------------------------------- ---------------- epoch -23 replay_buffer/size 999996 trainer/num train calls 978000 trainer/Policy Loss -2.27289 trainer/Log Pis Mean 2.1105 trainer/Log Pis Std 2.49363 
trainer/Log Pis Max 9.63877 trainer/Log Pis Min -4.58065 trainer/policy/mean Mean 0.135658 trainer/policy/mean Std 0.613462 trainer/policy/mean Max 0.996371 trainer/policy/mean Min -0.998243 trainer/policy/normal/std Mean 0.385585 trainer/policy/normal/std Std 0.187307 trainer/policy/normal/std Max 0.999137 trainer/policy/normal/std Min 0.070459 trainer/policy/normal/log_std Mean -1.10091 trainer/policy/normal/log_std Std 0.58256 trainer/policy/normal/log_std Max -0.000863448 trainer/policy/normal/log_std Min -2.65272 eval/num steps total 645651 eval/num paths total 1245 eval/path length Mean 498 eval/path length Std 0 eval/path length Max 498 eval/path length Min 498 eval/Rewards Mean 3.09453 eval/Rewards Std 0.769585 eval/Rewards Max 4.75698 eval/Rewards Min 0.980777 eval/Returns Mean 1541.08 eval/Returns Std 0 eval/Returns Max 1541.08 eval/Returns Min 1541.08 eval/Actions Mean 0.155055 eval/Actions Std 0.592679 eval/Actions Max 0.998752 eval/Actions Min -0.998326 eval/Num Paths 1 eval/Average Returns 1541.08 eval/normalized_score 47.9741 time/evaluation sampling (s) 0.866458 time/logging (s) 0.00228448 time/sampling batch (s) 0.256569 time/saving (s) 0.00302854 time/training (s) 4.09949 time/epoch (s) 5.22783 time/total (s) 35530.9 Epoch -23 ---------------------------------- ---------------- 2022-05-10 23:02:59.090736 PDT | [0] Epoch -22 finished ---------------------------------- --------------- epoch -22 replay_buffer/size 999996 trainer/num train calls 979000 trainer/Policy Loss -2.1684 trainer/Log Pis Mean 2.23442 trainer/Log Pis Std 2.54398 trainer/Log Pis Max 10.2136 trainer/Log Pis Min -4.83394 trainer/policy/mean Mean 0.131707 trainer/policy/mean Std 0.617244 trainer/policy/mean Max 0.996925 trainer/policy/mean Min -0.99849 trainer/policy/normal/std Mean 0.376414 trainer/policy/normal/std Std 0.183486 trainer/policy/normal/std Max 0.947579 trainer/policy/normal/std Min 0.0591371 trainer/policy/normal/log_std Mean -1.12865 trainer/policy/normal/log_std 
Std 0.593096 trainer/policy/normal/log_std Max -0.0538449 trainer/policy/normal/log_std Min -2.8279 eval/num steps total 646583 eval/num paths total 1246 eval/path length Mean 932 eval/path length Std 0 eval/path length Max 932 eval/path length Min 932 eval/Rewards Mean 3.26331 eval/Rewards Std 0.660201 eval/Rewards Max 5.35152 eval/Rewards Min 0.979432 eval/Returns Mean 3041.41 eval/Returns Std 0 eval/Returns Max 3041.41 eval/Returns Min 3041.41 eval/Actions Mean 0.162465 eval/Actions Std 0.611813 eval/Actions Max 0.998693 eval/Actions Min -0.997852 eval/Num Paths 1 eval/Average Returns 3041.41 eval/normalized_score 94.0731 time/evaluation sampling (s) 0.855499 time/logging (s) 0.00318391 time/sampling batch (s) 0.259501 time/saving (s) 0.00287249 time/training (s) 4.12932 time/epoch (s) 5.25037 time/total (s) 35536.1 Epoch -22 ---------------------------------- --------------- 2022-05-10 23:03:04.397174 PDT | [0] Epoch -21 finished ---------------------------------- --------------- epoch -21 replay_buffer/size 999996 trainer/num train calls 980000 trainer/Policy Loss -2.1853 trainer/Log Pis Mean 2.23209 trainer/Log Pis Std 2.43004 trainer/Log Pis Max 12.8337 trainer/Log Pis Min -6.02687 trainer/policy/mean Mean 0.165559 trainer/policy/mean Std 0.607849 trainer/policy/mean Max 0.996216 trainer/policy/mean Min -0.99859 trainer/policy/normal/std Mean 0.383076 trainer/policy/normal/std Std 0.190908 trainer/policy/normal/std Max 1.06977 trainer/policy/normal/std Min 0.0645093 trainer/policy/normal/log_std Mean -1.11717 trainer/policy/normal/log_std Std 0.605426 trainer/policy/normal/log_std Max 0.0674438 trainer/policy/normal/log_std Min -2.74095 eval/num steps total 647073 eval/num paths total 1247 eval/path length Mean 490 eval/path length Std 0 eval/path length Max 490 eval/path length Min 490 eval/Rewards Mean 3.05082 eval/Rewards Std 0.776317 eval/Rewards Max 4.66576 eval/Rewards Min 0.984576 eval/Returns Mean 1494.9 eval/Returns Std 0 eval/Returns Max 1494.9 
eval/Returns Min 1494.9 eval/Actions Mean 0.150615 eval/Actions Std 0.581376 eval/Actions Max 0.998436 eval/Actions Min -0.999259 eval/Num Paths 1 eval/Average Returns 1494.9 eval/normalized_score 46.5553 time/evaluation sampling (s) 0.887917 time/logging (s) 0.00213833 time/sampling batch (s) 0.258864 time/saving (s) 0.00291077 time/training (s) 4.12973 time/epoch (s) 5.28156 time/total (s) 35541.4 Epoch -21 ---------------------------------- --------------- 2022-05-10 23:03:09.659687 PDT | [0] Epoch -20 finished ---------------------------------- --------------- epoch -20 replay_buffer/size 999996 trainer/num train calls 981000 trainer/Policy Loss -2.27492 trainer/Log Pis Mean 2.33931 trainer/Log Pis Std 2.59852 trainer/Log Pis Max 9.37253 trainer/Log Pis Min -4.7739 trainer/policy/mean Mean 0.0968716 trainer/policy/mean Std 0.627115 trainer/policy/mean Max 0.996978 trainer/policy/mean Min -0.997727 trainer/policy/normal/std Mean 0.385558 trainer/policy/normal/std Std 0.187122 trainer/policy/normal/std Max 0.971118 trainer/policy/normal/std Min 0.0643499 trainer/policy/normal/log_std Mean -1.10167 trainer/policy/normal/log_std Std 0.586548 trainer/policy/normal/log_std Max -0.029307 trainer/policy/normal/log_std Min -2.74342 eval/num steps total 647651 eval/num paths total 1248 eval/path length Mean 578 eval/path length Std 0 eval/path length Max 578 eval/path length Min 578 eval/Rewards Mean 3.23788 eval/Rewards Std 0.779985 eval/Rewards Max 4.88452 eval/Rewards Min 0.985827 eval/Returns Mean 1871.5 eval/Returns Std 0 eval/Returns Max 1871.5 eval/Returns Min 1871.5 eval/Actions Mean 0.16576 eval/Actions Std 0.608981 eval/Actions Max 0.998474 eval/Actions Min -0.998463 eval/Num Paths 1 eval/Average Returns 1871.5 eval/normalized_score 58.1265 time/evaluation sampling (s) 0.859318 time/logging (s) 0.00246834 time/sampling batch (s) 0.257215 time/saving (s) 0.0028589 time/training (s) 4.11702 time/epoch (s) 5.23888 time/total (s) 35546.7 Epoch -20 
---------------------------------- --------------- 2022-05-10 23:03:14.922553 PDT | [0] Epoch -19 finished ---------------------------------- --------------- epoch -19 replay_buffer/size 999996 trainer/num train calls 982000 trainer/Policy Loss -2.12571 trainer/Log Pis Mean 2.161 trainer/Log Pis Std 2.47091 trainer/Log Pis Max 14.4848 trainer/Log Pis Min -6.21746 trainer/policy/mean Mean 0.153992 trainer/policy/mean Std 0.606324 trainer/policy/mean Max 0.995864 trainer/policy/mean Min -0.999693 trainer/policy/normal/std Mean 0.367207 trainer/policy/normal/std Std 0.177332 trainer/policy/normal/std Max 0.945493 trainer/policy/normal/std Min 0.0652385 trainer/policy/normal/log_std Mean -1.15253 trainer/policy/normal/log_std Std 0.59261 trainer/policy/normal/log_std Max -0.056049 trainer/policy/normal/log_std Min -2.7297 eval/num steps total 648060 eval/num paths total 1249 eval/path length Mean 409 eval/path length Std 0 eval/path length Max 409 eval/path length Min 409 eval/Rewards Mean 3.06504 eval/Rewards Std 0.889128 eval/Rewards Max 5.08436 eval/Rewards Min 0.987273 eval/Returns Mean 1253.6 eval/Returns Std 0 eval/Returns Max 1253.6 eval/Returns Min 1253.6 eval/Actions Mean 0.131649 eval/Actions Std 0.576154 eval/Actions Max 0.996167 eval/Actions Min -0.999577 eval/Num Paths 1 eval/Average Returns 1253.6 eval/normalized_score 39.141 time/evaluation sampling (s) 0.859792 time/logging (s) 0.00193992 time/sampling batch (s) 0.257833 time/saving (s) 0.00285402 time/training (s) 4.11592 time/epoch (s) 5.23834 time/total (s) 35551.9 Epoch -19 ---------------------------------- --------------- 2022-05-10 23:03:20.270415 PDT | [0] Epoch -18 finished ---------------------------------- --------------- epoch -18 replay_buffer/size 999996 trainer/num train calls 983000 trainer/Policy Loss -2.32977 trainer/Log Pis Mean 2.35147 trainer/Log Pis Std 2.60666 trainer/Log Pis Max 9.78392 trainer/Log Pis Min -3.90487 trainer/policy/mean Mean 0.121164 trainer/policy/mean Std 
0.619506 trainer/policy/mean Max 0.997755 trainer/policy/mean Min -0.998251 trainer/policy/normal/std Mean 0.367919 trainer/policy/normal/std Std 0.181425 trainer/policy/normal/std Max 0.899559 trainer/policy/normal/std Min 0.0647938 trainer/policy/normal/log_std Mean -1.15644 trainer/policy/normal/log_std Std 0.603574 trainer/policy/normal/log_std Max -0.10585 trainer/policy/normal/log_std Min -2.73654 eval/num steps total 648580 eval/num paths total 1250 eval/path length Mean 520 eval/path length Std 0 eval/path length Max 520 eval/path length Min 520 eval/Rewards Mean 3.16148 eval/Rewards Std 0.823427 eval/Rewards Max 5.45076 eval/Rewards Min 0.981753 eval/Returns Mean 1643.97 eval/Returns Std 0 eval/Returns Max 1643.97 eval/Returns Min 1643.97 eval/Actions Mean 0.147148 eval/Actions Std 0.587387 eval/Actions Max 0.997573 eval/Actions Min -0.996755 eval/Num Paths 1 eval/Average Returns 1643.97 eval/normalized_score 51.1355 time/evaluation sampling (s) 0.941851 time/logging (s) 0.00224003 time/sampling batch (s) 0.259502 time/saving (s) 0.00286697 time/training (s) 4.11758 time/epoch (s) 5.32404 time/total (s) 35557.2 Epoch -18 ---------------------------------- --------------- 2022-05-10 23:03:25.569827 PDT | [0] Epoch -17 finished ---------------------------------- --------------- epoch -17 replay_buffer/size 999996 trainer/num train calls 984000 trainer/Policy Loss -2.40777 trainer/Log Pis Mean 2.40422 trainer/Log Pis Std 2.73127 trainer/Log Pis Max 9.65418 trainer/Log Pis Min -4.18688 trainer/policy/mean Mean 0.12318 trainer/policy/mean Std 0.631721 trainer/policy/mean Max 0.99862 trainer/policy/mean Min -0.997994 trainer/policy/normal/std Mean 0.375288 trainer/policy/normal/std Std 0.182762 trainer/policy/normal/std Max 1.13255 trainer/policy/normal/std Min 0.0603285 trainer/policy/normal/log_std Mean -1.13378 trainer/policy/normal/log_std Std 0.59907 trainer/policy/normal/log_std Max 0.124473 trainer/policy/normal/log_std Min -2.80795 eval/num steps total 
649083 eval/num paths total 1251 eval/path length Mean 503 eval/path length Std 0 eval/path length Max 503 eval/path length Min 503 eval/Rewards Mean 3.09799 eval/Rewards Std 0.765512 eval/Rewards Max 4.74693 eval/Rewards Min 0.986156 eval/Returns Mean 1558.29 eval/Returns Std 0 eval/Returns Max 1558.29 eval/Returns Min 1558.29 eval/Actions Mean 0.156308 eval/Actions Std 0.591743 eval/Actions Max 0.998604 eval/Actions Min -0.997242 eval/Num Paths 1 eval/Average Returns 1558.29 eval/normalized_score 48.5029 time/evaluation sampling (s) 0.880642 time/logging (s) 0.00215552 time/sampling batch (s) 0.265361 time/saving (s) 0.00276711 time/training (s) 4.12426 time/epoch (s) 5.27519 time/total (s) 35562.5 Epoch -17 ---------------------------------- --------------- 2022-05-10 23:03:30.914651 PDT | [0] Epoch -16 finished ---------------------------------- --------------- epoch -16 replay_buffer/size 999996 trainer/num train calls 985000 trainer/Policy Loss -2.35927 trainer/Log Pis Mean 2.35743 trainer/Log Pis Std 2.53075 trainer/Log Pis Max 9.85133 trainer/Log Pis Min -5.52152 trainer/policy/mean Mean 0.155698 trainer/policy/mean Std 0.61405 trainer/policy/mean Max 0.998539 trainer/policy/mean Min -0.997783 trainer/policy/normal/std Mean 0.367458 trainer/policy/normal/std Std 0.177201 trainer/policy/normal/std Max 0.945739 trainer/policy/normal/std Min 0.0664856 trainer/policy/normal/log_std Mean -1.14966 trainer/policy/normal/log_std Std 0.587822 trainer/policy/normal/log_std Max -0.0557881 trainer/policy/normal/log_std Min -2.71077 eval/num steps total 649598 eval/num paths total 1252 eval/path length Mean 515 eval/path length Std 0 eval/path length Max 515 eval/path length Min 515 eval/Rewards Mean 3.19314 eval/Rewards Std 0.820678 eval/Rewards Max 5.36723 eval/Rewards Min 0.982161 eval/Returns Mean 1644.47 eval/Returns Std 0 eval/Returns Max 1644.47 eval/Returns Min 1644.47 eval/Actions Mean 0.158924 eval/Actions Std 0.591515 eval/Actions Max 0.99854 eval/Actions Min 
-0.995984 eval/Num Paths 1 eval/Average Returns 1644.47 eval/normalized_score 51.1508 time/evaluation sampling (s) 0.883974 time/logging (s) 0.00231134 time/sampling batch (s) 0.261711 time/saving (s) 0.00310927 time/training (s) 4.16969 time/epoch (s) 5.3208 time/total (s) 35567.8 Epoch -16 ---------------------------------- --------------- 2022-05-10 23:03:36.242886 PDT | [0] Epoch -15 finished ---------------------------------- --------------- epoch -15 replay_buffer/size 999996 trainer/num train calls 986000 trainer/Policy Loss -2.26776 trainer/Log Pis Mean 2.25709 trainer/Log Pis Std 2.7236 trainer/Log Pis Max 19.2416 trainer/Log Pis Min -9.76988 trainer/policy/mean Mean 0.158112 trainer/policy/mean Std 0.623672 trainer/policy/mean Max 0.995776 trainer/policy/mean Min -0.999925 trainer/policy/normal/std Mean 0.376657 trainer/policy/normal/std Std 0.180446 trainer/policy/normal/std Max 0.987615 trainer/policy/normal/std Min 0.0606195 trainer/policy/normal/log_std Mean -1.12542 trainer/policy/normal/log_std Std 0.589934 trainer/policy/normal/log_std Max -0.0124622 trainer/policy/normal/log_std Min -2.80314 eval/num steps total 650555 eval/num paths total 1254 eval/path length Mean 478.5 eval/path length Std 5.5 eval/path length Max 484 eval/path length Min 473 eval/Rewards Mean 3.17049 eval/Rewards Std 0.866628 eval/Rewards Max 4.76405 eval/Rewards Min 0.980331 eval/Returns Mean 1517.08 eval/Returns Std 6.16427 eval/Returns Max 1523.24 eval/Returns Min 1510.91 eval/Actions Mean 0.156211 eval/Actions Std 0.594264 eval/Actions Max 0.998705 eval/Actions Min -0.999028 eval/Num Paths 2 eval/Average Returns 1517.08 eval/normalized_score 47.2366 time/evaluation sampling (s) 0.883738 time/logging (s) 0.00355746 time/sampling batch (s) 0.263041 time/saving (s) 0.00295628 time/training (s) 4.15156 time/epoch (s) 5.30486 time/total (s) 35573.1 Epoch -15 ---------------------------------- --------------- 2022-05-10 23:03:41.571999 PDT | [0] Epoch -14 finished 
---------------------------------- --------------- epoch -14 replay_buffer/size 999996 trainer/num train calls 987000 trainer/Policy Loss -2.5864 trainer/Log Pis Mean 2.55585 trainer/Log Pis Std 2.55059 trainer/Log Pis Max 9.92418 trainer/Log Pis Min -5.32991 trainer/policy/mean Mean 0.144262 trainer/policy/mean Std 0.633257 trainer/policy/mean Max 0.997041 trainer/policy/mean Min -0.996814 trainer/policy/normal/std Mean 0.36545 trainer/policy/normal/std Std 0.177743 trainer/policy/normal/std Max 0.935991 trainer/policy/normal/std Min 0.0643978 trainer/policy/normal/log_std Mean -1.15706 trainer/policy/normal/log_std Std 0.590468 trainer/policy/normal/log_std Max -0.0661494 trainer/policy/normal/log_std Min -2.74268 eval/num steps total 651200 eval/num paths total 1255 eval/path length Mean 645 eval/path length Std 0 eval/path length Max 645 eval/path length Min 645 eval/Rewards Mean 3.16793 eval/Rewards Std 0.718321 eval/Rewards Max 4.83389 eval/Rewards Min 0.980151 eval/Returns Mean 2043.31 eval/Returns Std 0 eval/Returns Max 2043.31 eval/Returns Min 2043.31 eval/Actions Mean 0.143847 eval/Actions Std 0.589606 eval/Actions Max 0.998428 eval/Actions Min -0.998062 eval/Num Paths 1 eval/Average Returns 2043.31 eval/normalized_score 63.4057 time/evaluation sampling (s) 0.868969 time/logging (s) 0.00262488 time/sampling batch (s) 0.263426 time/saving (s) 0.00282771 time/training (s) 4.16604 time/epoch (s) 5.30389 time/total (s) 35578.5 Epoch -14 ---------------------------------- --------------- 2022-05-10 23:03:46.912323 PDT | [0] Epoch -13 finished ---------------------------------- --------------- epoch -13 replay_buffer/size 999996 trainer/num train calls 988000 trainer/Policy Loss -2.39667 trainer/Log Pis Mean 2.3275 trainer/Log Pis Std 2.72709 trainer/Log Pis Max 17.8176 trainer/Log Pis Min -6.90393 trainer/policy/mean Mean 0.117423 trainer/policy/mean Std 0.626609 trainer/policy/mean Max 0.997904 trainer/policy/mean Min -0.999855 trainer/policy/normal/std Mean 
0.38171 trainer/policy/normal/std Std 0.183155 trainer/policy/normal/std Max 1.14781 trainer/policy/normal/std Min 0.0603004 trainer/policy/normal/log_std Mean -1.10854 trainer/policy/normal/log_std Std 0.580082 trainer/policy/normal/log_std Max 0.137853 trainer/policy/normal/log_std Min -2.80842 eval/num steps total 651702 eval/num paths total 1256 eval/path length Mean 502 eval/path length Std 0 eval/path length Max 502 eval/path length Min 502 eval/Rewards Mean 3.11995 eval/Rewards Std 0.793058 eval/Rewards Max 4.7781 eval/Rewards Min 0.978036 eval/Returns Mean 1566.21 eval/Returns Std 0 eval/Returns Max 1566.21 eval/Returns Min 1566.21 eval/Actions Mean 0.158549 eval/Actions Std 0.585552 eval/Actions Max 0.998222 eval/Actions Min -0.995935 eval/Num Paths 1 eval/Average Returns 1566.21 eval/normalized_score 48.7464 time/evaluation sampling (s) 0.873789 time/logging (s) 0.00225563 time/sampling batch (s) 0.263828 time/saving (s) 0.00299551 time/training (s) 4.1726 time/epoch (s) 5.31547 time/total (s) 35583.8 Epoch -13 ---------------------------------- --------------- 2022-05-10 23:03:52.219442 PDT | [0] Epoch -12 finished ---------------------------------- --------------- epoch -12 replay_buffer/size 999996 trainer/num train calls 989000 trainer/Policy Loss -2.43547 trainer/Log Pis Mean 2.40243 trainer/Log Pis Std 2.56072 trainer/Log Pis Max 10.9566 trainer/Log Pis Min -4.47005 trainer/policy/mean Mean 0.137835 trainer/policy/mean Std 0.626067 trainer/policy/mean Max 0.997493 trainer/policy/mean Min -0.997585 trainer/policy/normal/std Mean 0.36644 trainer/policy/normal/std Std 0.178184 trainer/policy/normal/std Max 0.935216 trainer/policy/normal/std Min 0.061836 trainer/policy/normal/log_std Mean -1.15406 trainer/policy/normal/log_std Std 0.588951 trainer/policy/normal/log_std Max -0.0669779 trainer/policy/normal/log_std Min -2.78327 eval/num steps total 652357 eval/num paths total 1257 eval/path length Mean 655 eval/path length Std 0 eval/path length Max 655 
eval/path length Min                  655
eval/Rewards Mean                     3.22607
eval/Rewards Std                      0.674698
eval/Rewards Max                      4.80712
eval/Rewards Min                      0.986879
eval/Returns Mean                     2113.07
eval/Returns Std                      0
eval/Returns Max                      2113.07
eval/Returns Min                      2113.07
eval/Actions Mean                     0.138709
eval/Actions Std                      0.605021
eval/Actions Max                      0.997271
eval/Actions Min                      -0.997806
eval/Num Paths                        1
eval/Average Returns                  2113.07
eval/normalized_score                 65.5492
time/evaluation sampling (s)          0.865496
time/logging (s)                      0.00260271
time/sampling batch (s)               0.261784
time/saving (s)                       0.00292223
time/training (s)                     4.15017
time/epoch (s)                        5.28297
time/total (s)                        35589.1
Epoch -12
----------------------------------  ---------------
2022-05-10 23:03:57.603867 PDT | [0] Epoch -11 finished
----------------------------------  ---------------
epoch                                 -11
replay_buffer/size                    999996
trainer/num train calls               990000
trainer/Policy Loss                   -2.46026
trainer/Log Pis Mean                  2.40369
trainer/Log Pis Std                   2.56384
trainer/Log Pis Max                   10.3552
trainer/Log Pis Min                   -4.96593
trainer/policy/mean Mean              0.156389
trainer/policy/mean Std               0.62044
trainer/policy/mean Max               0.998853
trainer/policy/mean Min               -0.998376
trainer/policy/normal/std Mean        0.367463
trainer/policy/normal/std Std         0.176637
trainer/policy/normal/std Max         0.950396
trainer/policy/normal/std Min         0.0633893
trainer/policy/normal/log_std Mean    -1.1508
trainer/policy/normal/log_std Std     0.59072
trainer/policy/normal/log_std Max     -0.0508765
trainer/policy/normal/log_std Min     -2.75846
eval/num steps total                  652844
eval/num paths total                  1258
eval/path length Mean                 487
eval/path length Std                  0
eval/path length Max                  487
eval/path length Min                  487
eval/Rewards Mean                     3.17378
eval/Rewards Std                      0.819587
eval/Rewards Max                      4.78078
eval/Rewards Min                      0.984542
eval/Returns Mean                     1545.63
eval/Returns Std                      0
eval/Returns Max                      1545.63
eval/Returns Min                      1545.63
eval/Actions Mean                     0.152583
eval/Actions Std                      0.599899
eval/Actions Max                      0.996367
eval/Actions Min                      -0.998681
eval/Num Paths                        1
eval/Average Returns                  1545.63
eval/normalized_score                 48.1139
time/evaluation sampling (s)          0.863615
time/logging (s)                      0.00224555
time/sampling batch (s)               0.264859
time/saving (s)                       0.00302689
time/training (s)                     4.2258
time/epoch (s)                        5.35955
time/total (s)                        35594.4
Epoch -11
----------------------------------  ---------------
2022-05-10 23:04:02.906584 PDT | [0] Epoch -10 finished
----------------------------------  ---------------
epoch                                 -10
replay_buffer/size                    999996
trainer/num train calls               991000
trainer/Policy Loss                   -2.30147
trainer/Log Pis Mean                  2.32402
trainer/Log Pis Std                   2.63414
trainer/Log Pis Max                   10.2209
trainer/Log Pis Min                   -5.83453
trainer/policy/mean Mean              0.108386
trainer/policy/mean Std               0.623176
trainer/policy/mean Max               0.995993
trainer/policy/mean Min               -0.998133
trainer/policy/normal/std Mean        0.374867
trainer/policy/normal/std Std         0.182143
trainer/policy/normal/std Max         1.07245
trainer/policy/normal/std Min         0.0691612
trainer/policy/normal/log_std Mean    -1.12874
trainer/policy/normal/log_std Std     0.582943
trainer/policy/normal/log_std Max     0.0699499
trainer/policy/normal/log_std Min     -2.67132
eval/num steps total                  653401
eval/num paths total                  1259
eval/path length Mean                 557
eval/path length Std                  0
eval/path length Max                  557
eval/path length Min                  557
eval/Rewards Mean                     3.23885
eval/Rewards Std                      0.822279
eval/Rewards Max                      4.96428
eval/Rewards Min                      0.984072
eval/Returns Mean                     1804.04
eval/Returns Std                      0
eval/Returns Max                      1804.04
eval/Returns Min                      1804.04
eval/Actions Mean                     0.150867
eval/Actions Std                      0.580205
eval/Actions Max                      0.998215
eval/Actions Min                      -0.998638
eval/Num Paths                        1
eval/Average Returns                  1804.04
eval/normalized_score                 56.0539
time/evaluation sampling (s)          0.862984
time/logging (s)                      0.00239533
time/sampling batch (s)               0.260726
time/saving (s)                       0.00305158
time/training (s)                     4.14942
time/epoch (s)                        5.27858
time/total (s)                        35599.7
Epoch -10
----------------------------------  ---------------
2022-05-10 23:04:08.219529 PDT | [0] Epoch -9 finished
----------------------------------  ---------------
epoch                                 -9
replay_buffer/size                    999996
trainer/num train calls               992000
trainer/Policy Loss                   -2.29999
trainer/Log Pis Mean                  2.28474
trainer/Log Pis Std                   2.64613
trainer/Log Pis Max                   9.83516
trainer/Log Pis Min                   -7.11466
trainer/policy/mean Mean              0.153672
trainer/policy/mean Std               0.617603
trainer/policy/mean Max               0.995461
trainer/policy/mean Min               -0.998939
trainer/policy/normal/std Mean        0.377232
trainer/policy/normal/std Std         0.183228
trainer/policy/normal/std Max         0.997047
trainer/policy/normal/std Min         0.0648764
trainer/policy/normal/log_std Mean    -1.12652
trainer/policy/normal/log_std Std     0.595479
trainer/policy/normal/log_std Max     -0.00295718
trainer/policy/normal/log_std Min     -2.73527
eval/num steps total                  653929
eval/num paths total                  1260
eval/path length Mean                 528
eval/path length Std                  0
eval/path length Max                  528
eval/path length Min                  528
eval/Rewards Mean                     3.21497
eval/Rewards Std                      0.828205
eval/Rewards Max                      5.54444
eval/Rewards Min                      0.986722
eval/Returns Mean                     1697.51
eval/Returns Std                      0
eval/Returns Max                      1697.51
eval/Returns Min                      1697.51
eval/Actions Mean                     0.160757
eval/Actions Std                      0.595119
eval/Actions Max                      0.998261
eval/Actions Min                      -0.997444
eval/Num Paths                        1
eval/Average Returns                  1697.51
eval/normalized_score                 52.7805
time/evaluation sampling (s)          0.868954
time/logging (s)                      0.00228545
time/sampling batch (s)               0.261666
time/saving (s)                       0.0028334
time/training (s)                     4.15254
time/epoch (s)                        5.28828
time/total (s)                        35605
Epoch -9
----------------------------------  ---------------
2022-05-10 23:04:13.530699 PDT | [0] Epoch -8 finished
----------------------------------  ---------------
epoch                                 -8
replay_buffer/size                    999996
trainer/num train calls               993000
trainer/Policy Loss                   -2.09847
trainer/Log Pis Mean                  2.11343
trainer/Log Pis Std                   2.59133
trainer/Log Pis Max                   10.0445
trainer/Log Pis Min                   -5.15845
trainer/policy/mean Mean              0.135156
trainer/policy/mean Std               0.608834
trainer/policy/mean Max               0.997677
trainer/policy/mean Min               -0.997897
trainer/policy/normal/std Mean        0.367006
trainer/policy/normal/std Std         0.178571
trainer/policy/normal/std Max         0.969518
trainer/policy/normal/std Min         0.062306
trainer/policy/normal/log_std Mean    -1.1556
trainer/policy/normal/log_std Std     0.598101
trainer/policy/normal/log_std Max     -0.0309557
trainer/policy/normal/log_std Min     -2.7757
eval/num steps total                  654865
eval/num paths total                  1262
eval/path length Mean                 468
eval/path length Std                  21
eval/path length Max                  489
eval/path length Min                  447
eval/Rewards Mean                     3.12787
eval/Rewards Std                      0.857446
eval/Rewards Max                      5.46309
eval/Rewards Min                      0.984844
eval/Returns Mean                     1463.85
eval/Returns Std                      44.5264
eval/Returns Max                      1508.37
eval/Returns Min                      1419.32
eval/Actions Mean                     0.141975
eval/Actions Std                      0.578042
eval/Actions Max                      0.997598
eval/Actions Min                      -0.998005
eval/Num Paths                        2
eval/Average Returns                  1463.85
eval/normalized_score                 45.601
time/evaluation sampling (s)          0.860029
time/logging (s)                      0.00369619
time/sampling batch (s)               0.262317
time/saving (s)                       0.00292036
time/training (s)                     4.15945
time/epoch (s)                        5.28841
time/total (s)                        35610.3
Epoch -8
----------------------------------  ---------------
2022-05-10 23:04:18.922471 PDT | [0] Epoch -7 finished
----------------------------------  ---------------
epoch                                 -7
replay_buffer/size                    999996
trainer/num train calls               994000
trainer/Policy Loss                   -2.3513
trainer/Log Pis Mean                  2.34342
trainer/Log Pis Std                   2.74848
trainer/Log Pis Max                   9.92374
trainer/Log Pis Min                   -7.35355
trainer/policy/mean Mean              0.169513
trainer/policy/mean Std               0.606228
trainer/policy/mean Max               0.997843
trainer/policy/mean Min               -0.998771
trainer/policy/normal/std Mean        0.370969
trainer/policy/normal/std Std         0.17916
trainer/policy/normal/std Max         0.939191
trainer/policy/normal/std Min         0.0652292
trainer/policy/normal/log_std Mean    -1.14363
trainer/policy/normal/log_std Std     0.59662
trainer/policy/normal/log_std Max     -0.062736
trainer/policy/normal/log_std Min     -2.72985
eval/num steps total                  655857
eval/num paths total                  1264
eval/path length Mean                 496
eval/path length Std                  4
eval/path length Max                  500
eval/path length Min                  492
eval/Rewards Mean                     3.12516
eval/Rewards Std                      0.78745
eval/Rewards Max                      4.96699
eval/Rewards Min                      0.981688
eval/Returns Mean                     1550.08
eval/Returns Std                      5.69603
eval/Returns Max                      1555.78
eval/Returns Min                      1544.39
eval/Actions Mean                     0.152524
eval/Actions Std                      0.593901
eval/Actions Max                      0.998298
eval/Actions Min                      -0.998404
eval/Num Paths                        2
eval/Average Returns                  1550.08
eval/normalized_score                 48.2507
time/evaluation sampling (s)          0.873518
time/logging (s)                      0.00384004
time/sampling batch (s)               0.268412
time/saving (s)                       0.00297184
time/training (s)                     4.2183
time/epoch (s)                        5.36704
time/total (s)                        35615.7
Epoch -7
----------------------------------  ---------------
2022-05-10 23:04:24.271770 PDT | [0] Epoch -6 finished
----------------------------------  ---------------
epoch                                 -6
replay_buffer/size                    999996
trainer/num train calls               995000
trainer/Policy Loss                   -2.09735
trainer/Log Pis Mean                  2.0902
trainer/Log Pis Std                   2.63259
trainer/Log Pis Max                   12.1302
trainer/Log Pis Min                   -6.41766
trainer/policy/mean Mean              0.115668
trainer/policy/mean Std               0.613908
trainer/policy/mean Max               0.997579
trainer/policy/mean Min               -0.997863
trainer/policy/normal/std Mean        0.370366
trainer/policy/normal/std Std         0.179617
trainer/policy/normal/std Max         1.06063
trainer/policy/normal/std Min         0.0676025
trainer/policy/normal/log_std Mean    -1.14445
trainer/policy/normal/log_std Std     0.593131
trainer/policy/normal/log_std Max     0.0588644
trainer/policy/normal/log_std Min     -2.69411
eval/num steps total                  656802
eval/num paths total                  1266
eval/path length Mean                 472.5
eval/path length Std                  5.5
eval/path length Max                  478
eval/path length Min                  467
eval/Rewards Mean                     3.19747
eval/Rewards Std                      0.867713
eval/Rewards Max                      4.85606
eval/Rewards Min                      0.981687
eval/Returns Mean                     1510.8
eval/Returns Std                      12.4946
eval/Returns Max                      1523.3
eval/Returns Min                      1498.31
eval/Actions Mean                     0.140517
eval/Actions Std                      0.587715
eval/Actions Max                      0.996629
eval/Actions Min                      -0.999178
eval/Num Paths                        2
eval/Average Returns                  1510.8
eval/normalized_score                 47.0439
time/evaluation sampling (s)          0.884245
time/logging (s)                      0.00347305
time/sampling batch (s)               0.265703
time/saving (s)                       0.00285762
time/training (s)                     4.1678
time/epoch (s)                        5.32408
time/total (s)                        35621
Epoch -6
----------------------------------  ---------------
2022-05-10 23:04:29.618497 PDT | [0] Epoch -5 finished
----------------------------------  ---------------
epoch                                 -5
replay_buffer/size                    999996
trainer/num train calls               996000
trainer/Policy Loss                   -2.20072
trainer/Log Pis Mean                  2.22915
trainer/Log Pis Std                   2.57154
trainer/Log Pis Max                   10.33
trainer/Log Pis Min                   -5.6632
trainer/policy/mean Mean              0.159424
trainer/policy/mean Std               0.613889
trainer/policy/mean Max               0.997355
trainer/policy/mean Min               -0.996018
trainer/policy/normal/std Mean        0.37731
trainer/policy/normal/std Std         0.179224
trainer/policy/normal/std Max         0.908325
trainer/policy/normal/std Min         0.0654499
trainer/policy/normal/log_std Mean    -1.12115
trainer/policy/normal/log_std Std     0.585207
trainer/policy/normal/log_std Max     -0.096153
trainer/policy/normal/log_std Min     -2.72647
eval/num steps total                  657636
eval/num paths total                  1267
eval/path length Mean                 834
eval/path length Std                  0
eval/path length Max                  834
eval/path length Min                  834
eval/Rewards Mean                     3.26049
eval/Rewards Std                      0.636931
eval/Rewards Max                      4.66922
eval/Rewards Min                      0.983326
eval/Returns Mean                     2719.25
eval/Returns Std                      0
eval/Returns Max                      2719.25
eval/Returns Min                      2719.25
eval/Actions Mean                     0.158209
eval/Actions Std                      0.61474
eval/Actions Max                      0.997723
eval/Actions Min                      -0.999348
eval/Num Paths                        1
eval/Average Returns                  2719.25
eval/normalized_score                 84.1744
time/evaluation sampling (s)          0.87101
time/logging (s)                      0.00306871
time/sampling batch (s)               0.265119
time/saving (s)                       0.00281861
time/training (s)                     4.17956
time/epoch (s)                        5.32157
time/total (s)                        35626.3
Epoch -5
----------------------------------  ---------------
2022-05-10 23:04:34.959642 PDT | [0] Epoch -4 finished
----------------------------------  ---------------
epoch                                 -4
replay_buffer/size                    999996
trainer/num train calls               997000
trainer/Policy Loss                   -2.21116
trainer/Log Pis Mean                  2.19412
trainer/Log Pis Std                   2.57995
trainer/Log Pis Max                   9.71104
trainer/Log Pis Min                   -9.39821
trainer/policy/mean Mean              0.163778
trainer/policy/mean Std               0.601127
trainer/policy/mean Max               0.997678
trainer/policy/mean Min               -0.997804
trainer/policy/normal/std Mean        0.360181
trainer/policy/normal/std Std         0.177452
trainer/policy/normal/std Max         0.88637
trainer/policy/normal/std Min         0.0674324
trainer/policy/normal/log_std Mean    -1.17546
trainer/policy/normal/log_std Std     0.596751
trainer/policy/normal/log_std Max     -0.120621
trainer/policy/normal/log_std Min     -2.69663
eval/num steps total                  658549
eval/num paths total                  1269
eval/path length Mean                 456.5
eval/path length Std                  6.5
eval/path length Max                  463
eval/path length Min                  450
eval/Rewards Mean                     3.22015
eval/Rewards Std                      0.8765
eval/Rewards Max                      5.34916
eval/Rewards Min                      0.984582
eval/Returns Mean                     1470
eval/Returns Std                      26.0787
eval/Returns Max                      1496.08
eval/Returns Min                      1443.92
eval/Actions Mean                     0.13585
eval/Actions Std                      0.571372
eval/Actions Max                      0.997939
eval/Actions Min                      -0.998818
eval/Num Paths                        2
eval/Average Returns                  1470
eval/normalized_score                 45.7901
time/evaluation sampling (s)          0.866929
time/logging (s)                      0.00330811
time/sampling batch (s)               0.264619
time/saving (s)                       0.00283151
time/training (s)                     4.17924
time/epoch (s)                        5.31692
time/total (s)                        35631.6
Epoch -4
----------------------------------  ---------------
2022-05-10 23:04:40.328493 PDT | [0] Epoch -3 finished
----------------------------------  ---------------
epoch                                 -3
replay_buffer/size                    999996
trainer/num train calls               998000
trainer/Policy Loss                   -2.3917
trainer/Log Pis Mean                  2.33089
trainer/Log Pis Std                   2.60162
trainer/Log Pis Max                   9.8431
trainer/Log Pis Min                   -5.32437
trainer/policy/mean Mean              0.131519
trainer/policy/mean Std               0.627797
trainer/policy/mean Max               0.995761
trainer/policy/mean Min               -0.99739
trainer/policy/normal/std Mean        0.382195
trainer/policy/normal/std Std         0.191617
trainer/policy/normal/std Max         1.03556
trainer/policy/normal/std Min         0.068318
trainer/policy/normal/log_std Mean    -1.12144
trainer/policy/normal/log_std Std     0.608267
trainer/policy/normal/log_std Max     0.0349405
trainer/policy/normal/log_std Min     -2.68358
eval/num steps total                  659044
eval/num paths total                  1270
eval/path length Mean                 495
eval/path length Std                  0
eval/path length Max                  495
eval/path length Min                  495
eval/Rewards Mean                     3.09045
eval/Rewards Std                      0.770802
eval/Rewards Max                      4.69435
eval/Rewards Min                      0.985421
eval/Returns Mean                     1529.77
eval/Returns Std                      0
eval/Returns Max                      1529.77
eval/Returns Min                      1529.77
eval/Actions Mean                     0.149561
eval/Actions Std                      0.597016
eval/Actions Max                      0.998443
eval/Actions Min                      -0.999162
eval/Num Paths                        1
eval/Average Returns                  1529.77
eval/normalized_score                 47.6267
time/evaluation sampling (s)          0.879344
time/logging (s)                      0.00218373
time/sampling batch (s)               0.264656
time/saving (s)                       0.00281737
time/training (s)                     4.19451
time/epoch (s)                        5.34351
time/total (s)                        35637
Epoch -3
----------------------------------  ---------------
2022-05-10 23:04:45.664320 PDT | [0] Epoch -2 finished
----------------------------------  ---------------
epoch                                 -2
replay_buffer/size                    999996
trainer/num train calls               999000
trainer/Policy Loss                   -2.18446
trainer/Log Pis Mean                  2.08999
trainer/Log Pis Std                   2.52985
trainer/Log Pis Max                   9.1891
trainer/Log Pis Min                   -4.40858
trainer/policy/mean Mean              0.162549
trainer/policy/mean Std               0.609567
trainer/policy/mean Max               0.998416
trainer/policy/mean Min               -0.99801
trainer/policy/normal/std Mean        0.385507
trainer/policy/normal/std Std         0.187535
trainer/policy/normal/std Max         1.00483
trainer/policy/normal/std Min         0.0603922
trainer/policy/normal/log_std Mean    -1.10502
trainer/policy/normal/log_std Std     0.595098
trainer/policy/normal/log_std Max     0.0048147
trainer/policy/normal/log_std Min     -2.8069
eval/num steps total                  660026
eval/num paths total                  1272
eval/path length Mean                 491
eval/path length Std                  10
eval/path length Max                  501
eval/path length Min                  481
eval/Rewards Mean                     3.14582
eval/Rewards Std                      0.782094
eval/Rewards Max                      4.76306
eval/Rewards Min                      0.981421
eval/Returns Mean                     1544.6
eval/Returns Std                      9.07953
eval/Returns Max                      1553.68
eval/Returns Min                      1535.52
eval/Actions Mean                     0.151866
eval/Actions Std                      0.600839
eval/Actions Max                      0.997831
eval/Actions Min                      -0.999138
eval/Num Paths                        2
eval/Average Returns                  1544.6
eval/normalized_score                 48.0822
time/evaluation sampling (s)          0.896662
time/logging (s)                      0.00345387
time/sampling batch (s)               0.263185
time/saving (s)                       0.00293749
time/training (s)                     4.14616
time/epoch (s)                        5.3124
time/total (s)                        35642.3
Epoch -2
----------------------------------  ---------------
2022-05-10 23:04:51.036795 PDT | [0] Epoch -1 finished
----------------------------------  ---------------
epoch                                 -1
replay_buffer/size                    999996
trainer/num train calls               1e+06
trainer/Policy Loss                   -2.27525
trainer/Log Pis Mean                  2.15702
trainer/Log Pis Std                   2.52771
trainer/Log Pis Max                   8.60468
trainer/Log Pis Min                   -6.53534
trainer/policy/mean Mean              0.1859
trainer/policy/mean Std               0.610519
trainer/policy/mean Max               0.997337
trainer/policy/mean Min               -0.99736
trainer/policy/normal/std Mean        0.378741
trainer/policy/normal/std Std         0.185395
trainer/policy/normal/std Max         0.921401
trainer/policy/normal/std Min         0.066887
trainer/policy/normal/log_std Mean    -1.12529
trainer/policy/normal/log_std Std     0.599462
trainer/policy/normal/log_std Max     -0.0818604
trainer/policy/normal/log_std Min     -2.70475
eval/num steps total                  660677
eval/num paths total                  1273
eval/path length Mean                 651
eval/path length Std                  0
eval/path length Max                  651
eval/path length Min                  651
eval/Rewards Mean                     3.1486
eval/Rewards Std                      0.69761
eval/Rewards Max                      4.61483
eval/Rewards Min                      0.98408
eval/Returns Mean                     2049.74
eval/Returns Std                      0
eval/Returns Max                      2049.74
eval/Returns Min                      2049.74
eval/Actions Mean                     0.155055
eval/Actions Std                      0.601893
eval/Actions Max                      0.998138
eval/Actions Min                      -0.998756
eval/Num Paths                        1
eval/Average Returns                  2049.74
eval/normalized_score                 63.6031
time/evaluation sampling (s)          0.895615
time/logging (s)                      0.00261549
time/sampling batch (s)               0.270805
time/saving (s)                       0.00546209
time/training (s)                     4.17237
time/epoch (s)                        5.34687
time/total (s)                        35647.6
Epoch -1
----------------------------------  ---------------