wintermelontree committed
Commit efd7b67 · verified · 1 Parent(s): 2b01767

Upload folder using huggingface_hub

can_low_dim/_2026_01_06_23_34_52.log ADDED
@@ -0,0 +1,134 @@
+ Using robot0_eef_pos with dim 3 for observation
+ Using robot0_eef_quat with dim 4 for observation
+ Using robot0_gripper_qpos with dim 2 for observation
+ Using object with dim 14 for observation
+ Total low-dim observation dim: 23
+ Original action dim: 7
+ Final action dim: 7
+ ===== Basic stats =====
+ Total transitions: 23207
+ Total trajectories: 200
+ Traj length mean/std: 116.035, 13.685896938089225
+ Traj length min/max: 82, 151
+ obs min: [-0.07376085 -0.42253869 0. 0. -0.73516192 -0.22861566
+ -0.34267776 0. -0.0403102 -0.04001666 -0.41950386 0.
+ -0.60131852 -0.55893041 -0.77678055 -0.99984988 -0.12209211 -0.10018459
+ 0. -0.99995351 -0.99995863 -0.79737371 0. ]
+ obs max: [0.29611527 0.41260722 1.22413338 0.99997648 0.47231822 0.13808564
+ 0.08067435 0.04106108 0. 0.28709319 0.4541125 1.21502806
+ 0.64036898 0.70398297 0.99988429 0.99991152 0.27907834 0.34339995
+ 0.35659635 0.99983454 0.99994522 0.59665596 0.83187222]
+ action min: [-1. -1. -1. -0.55634028 -1. -1.
+ -1. ]
+ action max: [1. 1. 1. 0.72973686 0.45003703 0.74534029
+ 1. ]
+ Trajectory demo_0: cumulative reward = 1.0 (length: 118 -> 113)
+ Trajectory demo_1: cumulative reward = 1.0 (length: 118 -> 114)
+ Trajectory demo_2: cumulative reward = 1.0 (length: 113 -> 109)
+ Trajectory demo_3: cumulative reward = 1.0 (length: 98 -> 94)
+ Trajectory demo_4: cumulative reward = 1.0 (length: 102 -> 98)
+ Trajectory demo_5: cumulative reward = 1.0 (length: 134 -> 130)
+ Trajectory demo_6: cumulative reward = 1.0 (length: 115 -> 111)
+ Trajectory demo_7: cumulative reward = 1.0 (length: 115 -> 111)
+ Trajectory demo_8: cumulative reward = 1.0 (length: 127 -> 123)
+ Trajectory demo_9: cumulative reward = 1.0 (length: 120 -> 116)
+ Trajectory demo_10: cumulative reward = 1.0 (length: 129 -> 125)
+ Trajectory demo_11: cumulative reward = 1.0 (length: 94 -> 90)
+ Trajectory demo_12: cumulative reward = 1.0 (length: 115 -> 111)
+ Trajectory demo_13: cumulative reward = 1.0 (length: 129 -> 125)
+ Trajectory demo_14: cumulative reward = 1.0 (length: 113 -> 109)
+ Trajectory demo_15: cumulative reward = 1.0 (length: 95 -> 91)
+ DEBUG Trajectory 16 (demo_16): Original length=136, non-zero rewards=5
+ DEBUG Trajectory 16 (demo_16): Reward sum=5.0, unique rewards=[0. 1.]
+ DEBUG Trajectory 16 (demo_16): Last 20 rewards: [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 1. 1. 1. 1.]
+ DEBUG Trajectory 16 (demo_16): Truncating at index 132, cumsum at that point = 1.0
+ Trajectory demo_16: cumulative reward = 1.0 (length: 136 -> 132)
+ Trajectory demo_17: cumulative reward = 1.0 (length: 108 -> 104)
+ Trajectory demo_18: cumulative reward = 1.0 (length: 94 -> 90)
+ Trajectory demo_19: cumulative reward = 1.0 (length: 111 -> 107)
+ Trajectory demo_20: cumulative reward = 1.0 (length: 92 -> 88)
+ Trajectory demo_21: cumulative reward = 1.0 (length: 100 -> 96)
+ Trajectory demo_22: cumulative reward = 1.0 (length: 102 -> 98)
+ Trajectory demo_23: cumulative reward = 1.0 (length: 142 -> 138)
+ Trajectory demo_24: cumulative reward = 1.0 (length: 108 -> 104)
+ Trajectory demo_25: cumulative reward = 1.0 (length: 131 -> 127)
+ Trajectory demo_26: cumulative reward = 1.0 (length: 112 -> 108)
+ Trajectory demo_27: cumulative reward = 1.0 (length: 124 -> 120)
+ Trajectory demo_28: cumulative reward = 1.0 (length: 141 -> 137)
+ Trajectory demo_29: cumulative reward = 1.0 (length: 107 -> 103)
+ Trajectory demo_30: cumulative reward = 1.0 (length: 127 -> 123)
+ Trajectory demo_31: cumulative reward = 1.0 (length: 96 -> 92)
+ Trajectory demo_32: cumulative reward = 1.0 (length: 110 -> 106)
+ Trajectory demo_33: cumulative reward = 1.0 (length: 115 -> 111)
+ Trajectory demo_34: cumulative reward = 1.0 (length: 106 -> 102)
+ Trajectory demo_35: cumulative reward = 1.0 (length: 111 -> 107)
+ Trajectory demo_36: cumulative reward = 1.0 (length: 103 -> 99)
+ Trajectory demo_37: cumulative reward = 1.0 (length: 116 -> 112)
+ Trajectory demo_38: cumulative reward = 1.0 (length: 107 -> 103)
+ Trajectory demo_39: cumulative reward = 1.0 (length: 113 -> 109)
+ Trajectory demo_40: cumulative reward = 1.0 (length: 117 -> 113)
+ Trajectory demo_41: cumulative reward = 1.0 (length: 136 -> 132)
+ Trajectory demo_42: cumulative reward = 1.0 (length: 117 -> 113)
+ Trajectory demo_43: cumulative reward = 1.0 (length: 106 -> 102)
+ Trajectory demo_44: cumulative reward = 1.0 (length: 121 -> 117)
+ Trajectory demo_45: cumulative reward = 1.0 (length: 127 -> 123)
+ Trajectory demo_46: cumulative reward = 1.0 (length: 106 -> 102)
+ Trajectory demo_47: cumulative reward = 1.0 (length: 129 -> 125)
+ Trajectory demo_48: cumulative reward = 1.0 (length: 113 -> 109)
+ Trajectory demo_49: cumulative reward = 1.0 (length: 106 -> 102)
+ Trajectory demo_50: cumulative reward = 1.0 (length: 112 -> 108)
+ Trajectory demo_51: cumulative reward = 1.0 (length: 115 -> 111)
+ Trajectory demo_52: cumulative reward = 1.0 (length: 119 -> 115)
+ Trajectory demo_53: cumulative reward = 1.0 (length: 115 -> 111)
+ Trajectory demo_54: cumulative reward = 1.0 (length: 109 -> 105)
+ Trajectory demo_55: cumulative reward = 1.0 (length: 101 -> 97)
+ Trajectory demo_56: cumulative reward = 1.0 (length: 101 -> 97)
+ Trajectory demo_57: cumulative reward = 1.0 (length: 119 -> 115)
+ Trajectory demo_58: cumulative reward = 1.0 (length: 125 -> 121)
+ Trajectory demo_59: cumulative reward = 1.0 (length: 113 -> 109)
+ Trajectory demo_60: cumulative reward = 1.0 (length: 97 -> 93)
+ Trajectory demo_61: cumulative reward = 1.0 (length: 113 -> 109)
+ Trajectory demo_62: cumulative reward = 1.0 (length: 121 -> 116)
+ Trajectory demo_63: cumulative reward = 1.0 (length: 130 -> 126)
+ Trajectory demo_64: cumulative reward = 1.0 (length: 103 -> 99)
+ Trajectory demo_65: cumulative reward = 1.0 (length: 112 -> 108)
+ Trajectory demo_66: cumulative reward = 1.0 (length: 131 -> 127)
+ Trajectory demo_67: cumulative reward = 1.0 (length: 133 -> 129)
+ Trajectory demo_68: cumulative reward = 1.0 (length: 103 -> 99)
+ Trajectory demo_69: cumulative reward = 1.0 (length: 125 -> 120)
+ Trajectory demo_70: cumulative reward = 1.0 (length: 101 -> 97)
+ Trajectory demo_71: cumulative reward = 1.0 (length: 109 -> 105)
+ Trajectory demo_72: cumulative reward = 1.0 (length: 92 -> 88)
+ Trajectory demo_73: cumulative reward = 1.0 (length: 119 -> 115)
+ Trajectory demo_74: cumulative reward = 1.0 (length: 114 -> 110)
+ Trajectory demo_75: cumulative reward = 1.0 (length: 103 -> 99)
+ Trajectory demo_76: cumulative reward = 1.0 (length: 100 -> 96)
+ Trajectory demo_77: cumulative reward = 1.0 (length: 121 -> 117)
+ Trajectory demo_78: cumulative reward = 1.0 (length: 99 -> 95)
+ Trajectory demo_79: cumulative reward = 1.0 (length: 93 -> 89)
+ Trajectory demo_80: cumulative reward = 1.0 (length: 110 -> 106)
+ Trajectory demo_81: cumulative reward = 1.0 (length: 150 -> 146)
+ Trajectory demo_82: cumulative reward = 1.0 (length: 94 -> 90)
+ Trajectory demo_83: cumulative reward = 1.0 (length: 108 -> 104)
+ Trajectory demo_84: cumulative reward = 1.0 (length: 105 -> 101)
+ Trajectory demo_85: cumulative reward = 1.0 (length: 115 -> 111)
+ Trajectory demo_86: cumulative reward = 1.0 (length: 126 -> 122)
+ Trajectory demo_87: cumulative reward = 1.0 (length: 88 -> 84)
+ Trajectory demo_88: cumulative reward = 1.0 (length: 115 -> 111)
+ Trajectory demo_89: cumulative reward = 1.0 (length: 115 -> 111)
+ Trajectory demo_90: cumulative reward = 1.0 (length: 131 -> 127)
+ Trajectory demo_91: cumulative reward = 1.0 (length: 114 -> 110)
+ Trajectory demo_92: cumulative reward = 1.0 (length: 109 -> 105)
+ Trajectory demo_93: cumulative reward = 1.0 (length: 96 -> 92)
+ Trajectory demo_94: cumulative reward = 1.0 (length: 127 -> 123)
+ Trajectory demo_95: cumulative reward = 1.0 (length: 121 -> 117)
+ Trajectory demo_96: cumulative reward = 1.0 (length: 140 -> 136)
+ Trajectory demo_97: cumulative reward = 1.0 (length: 104 -> 99)
+ Trajectory demo_98: cumulative reward = 1.0 (length: 110 -> 106)
+ Trajectory demo_99: cumulative reward = 1.0 (length: 113 -> 108)
+ ===== Truncation Statistics =====
+ Original total steps: 23207
+ Truncated total steps: 22400
+ Reduction: 807 steps (3.5%)
+ Train - Trajectories: 200, Transitions: 22400
+ Val - Trajectories: 0, Transitions: 0.0
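
The DEBUG lines for demo_16 reveal the truncation rule: the sparse success reward turns to 1.0 a few steps before the recorded episode ends, and the trajectory is cut at the first step where the cumulative reward reaches 1.0, dropping the redundant trailing success steps. The totals are consistent with this: 23207 - 22400 = 807 dropped steps, and 807 / 23207 ≈ 3.5%. Below is a minimal sketch of that rule as implied by the log; the function name and exact implementation are assumptions, not the pipeline's actual code:

```python
import numpy as np

def truncate_at_first_success(rewards: np.ndarray) -> int:
    """Hypothetical reconstruction: return the truncated length, keeping
    steps up to and including the first step whose cumulative reward
    reaches 1.0 (cf. 'Truncating at index 132, cumsum at that point = 1.0')."""
    cumsum = np.cumsum(rewards)
    if cumsum[-1] < 1.0:
        return len(rewards)  # no success signal: keep the full trajectory
    first_success = int(np.argmax(cumsum >= 1.0))
    return first_success + 1

# demo_16: length 136 with five trailing reward-1 steps
rewards = np.zeros(136)
rewards[-5:] = 1.0
print(truncate_at_first_success(rewards))  # 132, matching 'length: 136 -> 132'
```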
can_low_dim/normalization.npz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fb81411e83124a3072efaba223d23428f8c692a8f4f2699f2156aef8d8605473
+ size 1133
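
Each `.npz` entry in this commit is a Git LFS pointer, so the diff records only the pointer metadata (spec version, sha256 oid, byte size) rather than the arrays themselves. Below is a sketch of fetching and inspecting the real file through the Hub; the repo id is a placeholder for whatever repository this commit belongs to, and the array names inside are not recoverable from the pointer alone:

```python
import numpy as np
from huggingface_hub import hf_hub_download

# Placeholder repo id -- substitute the repository this commit lives in.
path = hf_hub_download(
    repo_id="<user>/<repo>",
    filename="can_low_dim/normalization.npz",
    repo_type="dataset",
)
stats = np.load(path)
print(list(stats.keys()))  # key names are not visible in the LFS pointer
```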
can_low_dim/train.npz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9631a1df9f4b5eb135bfb75685cd7110872921652d9037b66352e2a0362c63b9
+ size 4665227
can_low_dim/val.npz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ef423294119533dd4d99cd9e9c6d933cd3cedf7ddf9070a66d603c05a24cffc2
+ size 964
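
Once the archives are fetched the same way, they can be sanity-checked against the stats logged above: 200 train trajectories totalling 22400 transitions with 23-dim observations and 7-dim actions, and an empty val split (0 trajectories, 0 transitions). The key names below are guesses, since the LFS pointers reveal nothing about the archive layout:

```python
import numpy as np

train = np.load("train.npz")  # fetched via hf_hub_download as sketched above
# Assumed keys; the real archive may name its arrays differently.
obs, actions = train["obs"], train["actions"]
assert obs.shape == (22400, 23)     # truncated total steps x low-dim obs
assert actions.shape == (22400, 7)  # 'Final action dim: 7'
```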