wintermelontree committed on
Commit 23cc9b8 · verified · 1 Parent(s): 418b877

Upload folder using huggingface_hub

can_low_dim/_2026_01_06_23_34_32.log ADDED
@@ -0,0 +1,133 @@
+ Using robot0_eef_pos with dim 3 for observation
+ Using robot0_eef_quat with dim 4 for observation
+ Using robot0_gripper_qpos with dim 2 for observation
+ Using object with dim 14 for observation
+ Total low-dim observation dim: 23
+ Original action dim: 7
+ Final action dim: 7
+ ===== Basic stats =====
+ Total transitions: 23207
+ Total trajectories: 200
+ Traj length mean/std: 116.035, 13.685896938089225
+ Traj length min/max: 82, 151
+ obs min: [-0.07376085 -0.42253869 0. 0. -0.73516192 -0.22861566
+ -0.34267776 0. -0.0403102 -0.04001666 -0.41950386 0.
+ -0.60131852 -0.55893041 -0.77678055 -0.99984988 -0.12209211 -0.10018459
+ 0. -0.99995351 -0.99995863 -0.79737371 0. ]
+ obs max: [0.29611527 0.41260722 1.22413338 0.99997648 0.47231822 0.13808564
+ 0.08067435 0.04106108 0. 0.28709319 0.4541125 1.21502806
+ 0.64036898 0.70398297 0.99988429 0.99991152 0.27907834 0.34339995
+ 0.35659635 0.99983454 0.99994522 0.59665596 0.83187222]
+ action min: [-1. -1. -1. -0.55634028 -1. -1.
+ -1. ]
+ action max: [1. 1. 1. 0.72973686 0.45003703 0.74534029
+ 1. ]
+ Trajectory demo_0: cumulative reward = 6.0 (length: 118 -> 118)
+ Trajectory demo_1: cumulative reward = 5.0 (length: 118 -> 118)
+ Trajectory demo_2: cumulative reward = 5.0 (length: 113 -> 113)
+ Trajectory demo_3: cumulative reward = 5.0 (length: 98 -> 98)
+ Trajectory demo_4: cumulative reward = 5.0 (length: 102 -> 102)
+ Trajectory demo_5: cumulative reward = 5.0 (length: 134 -> 134)
+ Trajectory demo_6: cumulative reward = 5.0 (length: 115 -> 115)
+ Trajectory demo_7: cumulative reward = 5.0 (length: 115 -> 115)
+ Trajectory demo_8: cumulative reward = 5.0 (length: 127 -> 127)
+ Trajectory demo_9: cumulative reward = 5.0 (length: 120 -> 120)
+ Trajectory demo_10: cumulative reward = 5.0 (length: 129 -> 129)
+ Trajectory demo_11: cumulative reward = 5.0 (length: 94 -> 94)
+ Trajectory demo_12: cumulative reward = 5.0 (length: 115 -> 115)
+ Trajectory demo_13: cumulative reward = 5.0 (length: 129 -> 129)
+ Trajectory demo_14: cumulative reward = 5.0 (length: 113 -> 113)
+ Trajectory demo_15: cumulative reward = 5.0 (length: 95 -> 95)
+ DEBUG Trajectory 16 (demo_16): Original length=136, non-zero rewards=5
+ DEBUG Trajectory 16 (demo_16): Reward sum=5.0, unique rewards=[0. 1.]
+ DEBUG Trajectory 16 (demo_16): Last 20 rewards: [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 1. 1. 1. 1.]
+ Trajectory demo_16: cumulative reward = 5.0 (length: 136 -> 136)
+ Trajectory demo_17: cumulative reward = 5.0 (length: 108 -> 108)
+ Trajectory demo_18: cumulative reward = 5.0 (length: 94 -> 94)
+ Trajectory demo_19: cumulative reward = 5.0 (length: 111 -> 111)
+ Trajectory demo_20: cumulative reward = 5.0 (length: 92 -> 92)
+ Trajectory demo_21: cumulative reward = 5.0 (length: 100 -> 100)
+ Trajectory demo_22: cumulative reward = 5.0 (length: 102 -> 102)
+ Trajectory demo_23: cumulative reward = 5.0 (length: 142 -> 142)
+ Trajectory demo_24: cumulative reward = 5.0 (length: 108 -> 108)
+ Trajectory demo_25: cumulative reward = 5.0 (length: 131 -> 131)
+ Trajectory demo_26: cumulative reward = 5.0 (length: 112 -> 112)
+ Trajectory demo_27: cumulative reward = 5.0 (length: 124 -> 124)
+ Trajectory demo_28: cumulative reward = 5.0 (length: 141 -> 141)
+ Trajectory demo_29: cumulative reward = 5.0 (length: 107 -> 107)
+ Trajectory demo_30: cumulative reward = 5.0 (length: 127 -> 127)
+ Trajectory demo_31: cumulative reward = 5.0 (length: 96 -> 96)
+ Trajectory demo_32: cumulative reward = 5.0 (length: 110 -> 110)
+ Trajectory demo_33: cumulative reward = 5.0 (length: 115 -> 115)
+ Trajectory demo_34: cumulative reward = 5.0 (length: 106 -> 106)
+ Trajectory demo_35: cumulative reward = 5.0 (length: 111 -> 111)
+ Trajectory demo_36: cumulative reward = 5.0 (length: 103 -> 103)
+ Trajectory demo_37: cumulative reward = 5.0 (length: 116 -> 116)
+ Trajectory demo_38: cumulative reward = 5.0 (length: 107 -> 107)
+ Trajectory demo_39: cumulative reward = 5.0 (length: 113 -> 113)
+ Trajectory demo_40: cumulative reward = 5.0 (length: 117 -> 117)
+ Trajectory demo_41: cumulative reward = 5.0 (length: 136 -> 136)
+ Trajectory demo_42: cumulative reward = 5.0 (length: 117 -> 117)
+ Trajectory demo_43: cumulative reward = 5.0 (length: 106 -> 106)
+ Trajectory demo_44: cumulative reward = 5.0 (length: 121 -> 121)
+ Trajectory demo_45: cumulative reward = 5.0 (length: 127 -> 127)
+ Trajectory demo_46: cumulative reward = 5.0 (length: 106 -> 106)
+ Trajectory demo_47: cumulative reward = 5.0 (length: 129 -> 129)
+ Trajectory demo_48: cumulative reward = 5.0 (length: 113 -> 113)
+ Trajectory demo_49: cumulative reward = 5.0 (length: 106 -> 106)
+ Trajectory demo_50: cumulative reward = 5.0 (length: 112 -> 112)
+ Trajectory demo_51: cumulative reward = 5.0 (length: 115 -> 115)
+ Trajectory demo_52: cumulative reward = 5.0 (length: 119 -> 119)
+ Trajectory demo_53: cumulative reward = 5.0 (length: 115 -> 115)
+ Trajectory demo_54: cumulative reward = 5.0 (length: 109 -> 109)
+ Trajectory demo_55: cumulative reward = 5.0 (length: 101 -> 101)
+ Trajectory demo_56: cumulative reward = 5.0 (length: 101 -> 101)
+ Trajectory demo_57: cumulative reward = 5.0 (length: 119 -> 119)
+ Trajectory demo_58: cumulative reward = 5.0 (length: 125 -> 125)
+ Trajectory demo_59: cumulative reward = 5.0 (length: 113 -> 113)
+ Trajectory demo_60: cumulative reward = 5.0 (length: 97 -> 97)
+ Trajectory demo_61: cumulative reward = 5.0 (length: 113 -> 113)
+ Trajectory demo_62: cumulative reward = 6.0 (length: 121 -> 121)
+ Trajectory demo_63: cumulative reward = 5.0 (length: 130 -> 130)
+ Trajectory demo_64: cumulative reward = 5.0 (length: 103 -> 103)
+ Trajectory demo_65: cumulative reward = 5.0 (length: 112 -> 112)
+ Trajectory demo_66: cumulative reward = 5.0 (length: 131 -> 131)
+ Trajectory demo_67: cumulative reward = 5.0 (length: 133 -> 133)
+ Trajectory demo_68: cumulative reward = 5.0 (length: 103 -> 103)
+ Trajectory demo_69: cumulative reward = 6.0 (length: 125 -> 125)
+ Trajectory demo_70: cumulative reward = 5.0 (length: 101 -> 101)
+ Trajectory demo_71: cumulative reward = 5.0 (length: 109 -> 109)
+ Trajectory demo_72: cumulative reward = 5.0 (length: 92 -> 92)
+ Trajectory demo_73: cumulative reward = 5.0 (length: 119 -> 119)
+ Trajectory demo_74: cumulative reward = 5.0 (length: 114 -> 114)
+ Trajectory demo_75: cumulative reward = 5.0 (length: 103 -> 103)
+ Trajectory demo_76: cumulative reward = 5.0 (length: 100 -> 100)
+ Trajectory demo_77: cumulative reward = 5.0 (length: 121 -> 121)
+ Trajectory demo_78: cumulative reward = 5.0 (length: 99 -> 99)
+ Trajectory demo_79: cumulative reward = 5.0 (length: 93 -> 93)
+ Trajectory demo_80: cumulative reward = 5.0 (length: 110 -> 110)
+ Trajectory demo_81: cumulative reward = 5.0 (length: 150 -> 150)
+ Trajectory demo_82: cumulative reward = 5.0 (length: 94 -> 94)
+ Trajectory demo_83: cumulative reward = 5.0 (length: 108 -> 108)
+ Trajectory demo_84: cumulative reward = 5.0 (length: 105 -> 105)
+ Trajectory demo_85: cumulative reward = 5.0 (length: 115 -> 115)
+ Trajectory demo_86: cumulative reward = 5.0 (length: 126 -> 126)
+ Trajectory demo_87: cumulative reward = 5.0 (length: 88 -> 88)
+ Trajectory demo_88: cumulative reward = 5.0 (length: 115 -> 115)
+ Trajectory demo_89: cumulative reward = 5.0 (length: 115 -> 115)
+ Trajectory demo_90: cumulative reward = 5.0 (length: 131 -> 131)
+ Trajectory demo_91: cumulative reward = 5.0 (length: 114 -> 114)
+ Trajectory demo_92: cumulative reward = 5.0 (length: 109 -> 109)
+ Trajectory demo_93: cumulative reward = 5.0 (length: 96 -> 96)
+ Trajectory demo_94: cumulative reward = 5.0 (length: 127 -> 127)
+ Trajectory demo_95: cumulative reward = 5.0 (length: 121 -> 121)
+ Trajectory demo_96: cumulative reward = 5.0 (length: 140 -> 140)
+ Trajectory demo_97: cumulative reward = 6.0 (length: 104 -> 104)
+ Trajectory demo_98: cumulative reward = 5.0 (length: 110 -> 110)
+ Trajectory demo_99: cumulative reward = 5.0 (length: 113 -> 113)
+ ===== Truncation Statistics =====
+ Original total steps: 23207
+ Truncated total steps: 23207
+ Reduction: 0 steps (0.0%)
+ Train - Trajectories: 200, Transitions: 23207
+ Val - Trajectories: 0, Transitions: 0.0
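The log's headline numbers are internally consistent: the 23-dimensional low-dim observation is just the concatenation of the four listed keys, and the mean trajectory length follows from the transition and trajectory totals. A minimal sanity-check sketch (the key/dim pairs below are copied from the log; nothing else is assumed):

```python
# Observation keys and dims as reported in the log above.
obs_dims = {
    "robot0_eef_pos": 3,
    "robot0_eef_quat": 4,
    "robot0_gripper_qpos": 2,
    "object": 14,
}

# The low-dim observation is the concatenation of these keys.
total_dim = sum(obs_dims.values())
print(total_dim)  # 23, matching "Total low-dim observation dim: 23"

# Mean trajectory length = total transitions / total trajectories.
print(23207 / 200)  # 116.035, matching "Traj length mean/std"
```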
can_low_dim/normalization.npz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fb81411e83124a3072efaba223d23428f8c692a8f4f2699f2156aef8d8605473
+ size 1133
can_low_dim/train.npz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:457ffe5d312c2ce0cc00ca67420cb6268d7fd0a9247b3766c1ceed74968840c2
+ size 4837523
can_low_dim/val.npz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ef423294119533dd4d99cd9e9c6d933cd3cedf7ddf9070a66d603c05a24cffc2
+ size 964
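The three `.npz` files above are stored as Git LFS pointers: the `oid` is simply the SHA-256 digest of the actual file contents, and `size` is its byte length. After downloading the real files, their integrity can be checked against the pointers. A minimal sketch (the `lfs_oid` helper is hypothetical, not part of any library):

```python
import hashlib

def lfs_oid(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest that Git LFS records as the pointer's oid."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in chunks so large files (e.g. train.npz at ~4.8 MB) don't
        # need to be loaded into memory at once.
        for block in iter(lambda: f.read(chunk_size), b""):
            h.update(block)
    return h.hexdigest()

# Usage: compare against the oid recorded in the pointer, e.g.
#   lfs_oid("can_low_dim/normalization.npz")
# should equal "fb81411e83124a3072efaba223d23428f8c692a8f4f2699f2156aef8d8605473".
```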