wintermelontree committed
Commit 7394318 · verified · 1 Parent(s): cd459d8

Upload folder using huggingface_hub

square_low_dim/_2025_12_31_17_38_53.log ADDED
@@ -0,0 +1,134 @@
+ Using robot0_eef_pos with dim 3 for observation
+ Using robot0_eef_quat with dim 4 for observation
+ Using robot0_gripper_qpos with dim 2 for observation
+ Using object with dim 14 for observation
+ Total low-dim observation dim: 23
+ Original action dim: 7
+ Final action dim: 7
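The observation vector is built by concatenating the four low-dim keys reported above. A minimal sketch of the dimension bookkeeping (the key names follow robosuite/robomimic conventions; this is an illustration, not the pipeline's actual code):

```python
# Per-key low-dim observation sizes as reported in the log above
obs_dims = {
    "robot0_eef_pos": 3,       # end-effector position (x, y, z)
    "robot0_eef_quat": 4,      # end-effector orientation quaternion
    "robot0_gripper_qpos": 2,  # gripper joint positions
    "object": 14,              # object state for the square task
}
total = sum(obs_dims.values())
print(total)  # 23
```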
+ ===== Basic stats =====
+ Total transitions: 30154
+ Total trajectories: 200
+ Traj length mean/std: 150.77, 20.31765488436104
+ Traj length min/max: 107, 236
+ obs min: [-0.21917232 -0.04869477 0. 0. -0.74314939 -0.13437093
+ -0.12024449 -0.00132913 -0.04164234 -0.13853475 0. 0.
+ -0.50633821 -0.25425125 -0.99980872 -0.99994538 -0.14878118 -0.24626347
+ -0.05728711 -0.99999732 -0.99996454 -0.28367242 0. ]
+ obs max: [0.26955168 0.31146888 1.08664924 0.99999989 0.74583464 0.18068227
+ 0.13211839 0.04187732 0. 0.26977406 0.2468017 1.06947778
+ 0.21079789 0.49256304 0.99999865 0.99977963 0.14990348 0.18911523
+ 0.21308565 0.99999809 0.99999315 0.25840759 0.75748754]
+ action min: [-1. -1. -1. -0.30129695 -1. -1.
+ -1. ]
+ action max: [1. 1. 1. 0.25129211 0.39740705 1.
+ 1. ]
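The reported mean trajectory length is consistent with the counts above; a quick sanity check using the logged numbers:

```python
# Numbers taken directly from the log's "Basic stats" block
total_transitions = 30154
total_trajectories = 200
mean_len = total_transitions / total_trajectories
print(mean_len)  # 150.77
```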
+ Trajectory demo_0: cumulative reward = 1.0 (length: 127 -> 123)
+ Trajectory demo_1: cumulative reward = 1.0 (length: 123 -> 119)
+ Trajectory demo_2: cumulative reward = 1.0 (length: 124 -> 120)
+ Trajectory demo_3: cumulative reward = 1.0 (length: 149 -> 145)
+ Trajectory demo_4: cumulative reward = 1.0 (length: 154 -> 150)
+ Trajectory demo_5: cumulative reward = 1.0 (length: 144 -> 140)
+ Trajectory demo_6: cumulative reward = 1.0 (length: 144 -> 140)
+ Trajectory demo_7: cumulative reward = 1.0 (length: 150 -> 146)
+ Trajectory demo_8: cumulative reward = 1.0 (length: 160 -> 156)
+ Trajectory demo_9: cumulative reward = 1.0 (length: 152 -> 148)
+ Trajectory demo_10: cumulative reward = 1.0 (length: 134 -> 130)
+ Trajectory demo_11: cumulative reward = 1.0 (length: 156 -> 152)
+ Trajectory demo_12: cumulative reward = 1.0 (length: 133 -> 129)
+ Trajectory demo_13: cumulative reward = 1.0 (length: 153 -> 149)
+ Trajectory demo_14: cumulative reward = 1.0 (length: 152 -> 148)
+ Trajectory demo_15: cumulative reward = 1.0 (length: 141 -> 137)
+ DEBUG Trajectory 16 (demo_16): Original length=174, non-zero rewards=5
+ DEBUG Trajectory 16 (demo_16): Reward sum=5.0, unique rewards=[0. 1.]
+ DEBUG Trajectory 16 (demo_16): Last 20 rewards: [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 1. 1. 1. 1.]
+ DEBUG Trajectory 16 (demo_16): Truncating at index 170, cumsum at that point = 1.0
+ Trajectory demo_16: cumulative reward = 1.0 (length: 174 -> 170)
+ Trajectory demo_17: cumulative reward = 1.0 (length: 174 -> 170)
+ Trajectory demo_18: cumulative reward = 1.0 (length: 127 -> 123)
+ Trajectory demo_19: cumulative reward = 1.0 (length: 165 -> 161)
+ Trajectory demo_20: cumulative reward = 1.0 (length: 145 -> 141)
+ Trajectory demo_21: cumulative reward = 1.0 (length: 179 -> 175)
+ Trajectory demo_22: cumulative reward = 1.0 (length: 130 -> 126)
+ Trajectory demo_23: cumulative reward = 1.0 (length: 176 -> 172)
+ Trajectory demo_24: cumulative reward = 1.0 (length: 155 -> 151)
+ Trajectory demo_25: cumulative reward = 1.0 (length: 158 -> 154)
+ Trajectory demo_26: cumulative reward = 1.0 (length: 182 -> 178)
+ Trajectory demo_27: cumulative reward = 1.0 (length: 180 -> 176)
+ Trajectory demo_28: cumulative reward = 1.0 (length: 136 -> 132)
+ Trajectory demo_29: cumulative reward = 1.0 (length: 183 -> 179)
+ Trajectory demo_30: cumulative reward = 1.0 (length: 134 -> 130)
+ Trajectory demo_31: cumulative reward = 1.0 (length: 139 -> 135)
+ Trajectory demo_32: cumulative reward = 1.0 (length: 142 -> 138)
+ Trajectory demo_33: cumulative reward = 1.0 (length: 120 -> 116)
+ Trajectory demo_34: cumulative reward = 1.0 (length: 129 -> 125)
+ Trajectory demo_35: cumulative reward = 1.0 (length: 174 -> 170)
+ Trajectory demo_36: cumulative reward = 1.0 (length: 147 -> 143)
+ Trajectory demo_37: cumulative reward = 1.0 (length: 160 -> 156)
+ Trajectory demo_38: cumulative reward = 1.0 (length: 114 -> 110)
+ Trajectory demo_39: cumulative reward = 1.0 (length: 128 -> 124)
+ Trajectory demo_40: cumulative reward = 1.0 (length: 140 -> 136)
+ Trajectory demo_41: cumulative reward = 1.0 (length: 134 -> 130)
+ Trajectory demo_42: cumulative reward = 1.0 (length: 186 -> 182)
+ Trajectory demo_43: cumulative reward = 1.0 (length: 122 -> 118)
+ Trajectory demo_44: cumulative reward = 1.0 (length: 125 -> 121)
+ Trajectory demo_45: cumulative reward = 1.0 (length: 180 -> 176)
+ Trajectory demo_46: cumulative reward = 1.0 (length: 132 -> 128)
+ Trajectory demo_47: cumulative reward = 1.0 (length: 148 -> 144)
+ Trajectory demo_48: cumulative reward = 1.0 (length: 170 -> 166)
+ Trajectory demo_49: cumulative reward = 1.0 (length: 167 -> 163)
+ Trajectory demo_50: cumulative reward = 1.0 (length: 171 -> 167)
+ Trajectory demo_51: cumulative reward = 1.0 (length: 127 -> 123)
+ Trajectory demo_52: cumulative reward = 1.0 (length: 111 -> 107)
+ Trajectory demo_53: cumulative reward = 1.0 (length: 170 -> 166)
+ Trajectory demo_54: cumulative reward = 1.0 (length: 219 -> 215)
+ Trajectory demo_55: cumulative reward = 1.0 (length: 158 -> 154)
+ Trajectory demo_56: cumulative reward = 1.0 (length: 129 -> 125)
+ Trajectory demo_57: cumulative reward = 1.0 (length: 114 -> 110)
+ Trajectory demo_58: cumulative reward = 1.0 (length: 161 -> 157)
+ Trajectory demo_59: cumulative reward = 1.0 (length: 148 -> 144)
+ Trajectory demo_60: cumulative reward = 1.0 (length: 133 -> 129)
+ Trajectory demo_61: cumulative reward = 1.0 (length: 152 -> 148)
+ Trajectory demo_62: cumulative reward = 1.0 (length: 154 -> 150)
+ Trajectory demo_63: cumulative reward = 1.0 (length: 150 -> 146)
+ Trajectory demo_64: cumulative reward = 1.0 (length: 171 -> 166)
+ Trajectory demo_65: cumulative reward = 1.0 (length: 155 -> 151)
+ Trajectory demo_66: cumulative reward = 1.0 (length: 137 -> 133)
+ Trajectory demo_67: cumulative reward = 1.0 (length: 153 -> 149)
+ Trajectory demo_68: cumulative reward = 1.0 (length: 158 -> 154)
+ Trajectory demo_69: cumulative reward = 1.0 (length: 184 -> 180)
+ Trajectory demo_70: cumulative reward = 1.0 (length: 172 -> 168)
+ Trajectory demo_71: cumulative reward = 1.0 (length: 136 -> 132)
+ Trajectory demo_72: cumulative reward = 1.0 (length: 149 -> 145)
+ Trajectory demo_73: cumulative reward = 1.0 (length: 159 -> 155)
+ Trajectory demo_74: cumulative reward = 1.0 (length: 129 -> 125)
+ Trajectory demo_75: cumulative reward = 1.0 (length: 151 -> 147)
+ Trajectory demo_76: cumulative reward = 1.0 (length: 162 -> 158)
+ Trajectory demo_77: cumulative reward = 1.0 (length: 236 -> 232)
+ Trajectory demo_78: cumulative reward = 1.0 (length: 170 -> 166)
+ Trajectory demo_79: cumulative reward = 1.0 (length: 150 -> 146)
+ Trajectory demo_80: cumulative reward = 1.0 (length: 167 -> 163)
+ Trajectory demo_81: cumulative reward = 1.0 (length: 140 -> 136)
+ Trajectory demo_82: cumulative reward = 1.0 (length: 136 -> 132)
+ Trajectory demo_83: cumulative reward = 1.0 (length: 171 -> 167)
+ Trajectory demo_84: cumulative reward = 1.0 (length: 142 -> 138)
+ Trajectory demo_85: cumulative reward = 1.0 (length: 134 -> 130)
+ Trajectory demo_86: cumulative reward = 1.0 (length: 160 -> 156)
+ Trajectory demo_87: cumulative reward = 1.0 (length: 177 -> 173)
+ Trajectory demo_88: cumulative reward = 1.0 (length: 165 -> 161)
+ Trajectory demo_89: cumulative reward = 1.0 (length: 159 -> 155)
+ Trajectory demo_90: cumulative reward = 1.0 (length: 127 -> 123)
+ Trajectory demo_91: cumulative reward = 1.0 (length: 107 -> 103)
+ Trajectory demo_92: cumulative reward = 1.0 (length: 162 -> 158)
+ Trajectory demo_93: cumulative reward = 1.0 (length: 171 -> 166)
+ Trajectory demo_94: cumulative reward = 1.0 (length: 148 -> 144)
+ Trajectory demo_95: cumulative reward = 1.0 (length: 142 -> 138)
+ Trajectory demo_96: cumulative reward = 1.0 (length: 145 -> 141)
+ Trajectory demo_97: cumulative reward = 1.0 (length: 158 -> 154)
+ Trajectory demo_98: cumulative reward = 1.0 (length: 129 -> 125)
+ Trajectory demo_99: cumulative reward = 1.0 (length: 162 -> 158)
+ ===== Truncation Statistics =====
+ Original total steps: 30154
+ Truncated total steps: 29348
+ Reduction: 806 steps (2.7%)
+ Train - Trajectories: 200, Transitions: 29348
+ Val - Trajectories: 0, Transitions: 0.0
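The DEBUG lines for demo_16 show the truncation rule: each trajectory is cut at the first step where cumulative reward reaches 1.0, discarding the redundant success steps at the end. A minimal sketch of that behavior; `truncate_at_success` is a hypothetical helper name, not the pipeline's actual function:

```python
import numpy as np

def truncate_at_success(rewards, threshold=1.0):
    """Cut a trajectory at the first step where cumulative reward reaches
    `threshold` (hypothetical helper mirroring the log's DEBUG output)."""
    cumsum = np.cumsum(rewards)
    hit = cumsum >= threshold
    if not hit.any():
        return rewards  # never succeeded: keep the full trajectory
    first = int(np.argmax(hit))  # first index reaching the threshold
    return rewards[: first + 1]

# Mirrors demo_16: length 174 with the last 5 rewards equal to 1.0
rewards = np.zeros(174)
rewards[-5:] = 1.0
print(len(truncate_at_success(rewards)))  # 170

# The overall reduction matches the summary: 30154 - 29348 = 806 steps
print(round(806 / 30154 * 100, 1))  # 2.7
```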
square_low_dim/normalization.npz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b6a2bedc7cd4bc40129c51e531eec12f948339472b4f04e1d00fedf34bf2f4a3
+ size 1136
square_low_dim/train.npz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5a2f472fba0c67af0065a10e6ec9ca07897e6274e5f1f7dc79c5906befce3f22
+ size 5777820
square_low_dim/val.npz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ef423294119533dd4d99cd9e9c6d933cd3cedf7ddf9070a66d603c05a24cffc2
+ size 964