rlogh committed (verified) · commit a8bc832 · parent: df3da15

Update README: 17 clean episodes, 2937 timesteps

Files changed: README.md (+177 −0)
# Rock Climb — Grasp-Taxonomy-Aware 3D Diffusion Policy

Trains a DP3-style point cloud diffusion policy, conditioned on grasp type (crimp/sloper/pinch/jug), to autonomously grasp climbing holds with a Franka arm + LEAP Hand.

---

## Quick Start (Training Machine)

### 1. Clone the repo

```bash
git clone https://github.com/rumilog/rock-climb.git tele
cd tele
```

### 2. Create a Python environment

```bash
python3 -m venv venv
source venv/bin/activate
pip install --upgrade pip
```

Install PyTorch with CUDA support (adjust the index URL to match your GPU driver):

```bash
# For CUDA 11.8:
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118

# For CUDA 12.1:
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121
```

Install the remaining dependencies:

```bash
pip install -r requirements.txt
```

### 3. Download the dataset from Hugging Face

```bash
mkdir -p datasets
huggingface-cli download rlogh/climbing-holds-pointcloud --repo-type dataset --local-dir ./datasets/climbing_holds.zarr
```

Verify the download:

```bash
python3 -c "
import zarr
z = zarr.open('datasets/climbing_holds.zarr', 'r')
print('Episodes:', z['meta/episode_ends'].shape[0])
print('Timesteps:', z['data/state'].shape[0])
print('Point cloud shape:', z['data/point_cloud'].shape)
print('Grasp type IDs:', z['meta/grasp_type_id'][:5], '...')
"
```

Expected output:

```
Episodes: 17
Timesteps: 2937
Point cloud shape: (2937, 1024, 3)
Grasp type IDs: [3 3 3 3 3] ...
```

(`grasp_type_id=3` is jug — this is a jug-only pilot dataset collected on hold 0, with clean point clouds (z_min=0.006).)

### 4. Run training

```bash
cd data_collection

python3 train.py \
  --point-cloud \
  --zarr ../datasets/climbing_holds.zarr \
  --ckpt-dir ../checkpoints/pc_pilot \
  --epochs 3000 \
  --batch 128 \
  --augment \
  --good-only \
  --save-every 100
```

Training writes to `../checkpoints/pc_pilot/`:
- `best.pt` — EMA weights with the lowest validation loss (use this for evaluation)
- `epoch_XXXX.pt` — periodic snapshots
- `norm_stats.json` — min-max normalization stats (required by `evaluate.py`)
- `training_status.md` — live progress, updated every 10 epochs

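`best.pt` holds the EMA weights. A minimal sketch of a power-law EMA warmup with power=0.75, as listed under Training Details — the decay ramps from 0 toward a cap as optimizer steps accumulate. The cap value and the exact schedule here are illustrative assumptions; see `train.py` for the real implementation.

```python
import copy

def ema_decay(step: int, power: float = 0.75, max_decay: float = 0.9999) -> float:
    """Power-law warmup: decay starts at 0 and ramps toward max_decay."""
    return min(max_decay, 1.0 - (step + 1) ** (-power))

class EMA:
    """Keeps an exponential moving average of a dict of parameters."""
    def __init__(self, params: dict):
        self.shadow = copy.deepcopy(params)
        self.step = 0

    def update(self, params: dict):
        d = ema_decay(self.step)
        for k, v in params.items():
            # Blend current parameters into the shadow copy.
            self.shadow[k] = d * self.shadow[k] + (1.0 - d) * v
        self.step += 1
```

Early in training the decay is near 0, so the EMA tracks the raw weights closely; later it averages over a long window, which is what makes the EMA checkpoint the better one to evaluate.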
### 5. Monitor training

```bash
cat ../checkpoints/pc_pilot/training_status.md
```

---

## Training Details

| Setting | Value |
|---------|-------|
| Architecture | PointNet encoder + 1D temporal U-Net (DP3-style) |
| PointNet output | 256-d |
| Grasp type conditioning | one-hot(4) → MLP → 64-d, fused with the observation |
| Conditioning vector | 512-d (concat of PointNet 256 + state 128 + grasp type 64, projected by an MLP) |
| U-Net dims | (512, 1024, 2048) |
| Optimizer | AdamW, lr=1e-4 |
| LR schedule | cosine, with 500-step warmup |
| EMA | power-law warmup (power=0.75) |
| Normalization | min-max to [-1, 1] (DP3 convention) |
| Diffusion | 100-step cosine DDPM (train), 10-step DDIM (inference) |
| Obs horizon | 2 timesteps |
| Pred horizon | 16 timesteps |
| Action horizon | 8 timesteps |
| Action dim | 23 (7 arm joints + 16 hand joints) |
| Point cloud | 1024 pts, XYZ only, world frame, FPS-downsampled |
| Dataset | 17 episodes, 2937 timesteps, hold 0 (jug) — clean pilot (z_min=0.006) |

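The min-max normalization in the table maps each state/action dimension into [-1, 1], with the fitted stats saved to `norm_stats.json`. A minimal numpy sketch of the idea (illustrative, not the repo's exact implementation; the epsilon guard for constant dimensions is an assumption):

```python
import numpy as np

def fit_minmax(x: np.ndarray) -> dict:
    """Per-dimension min/max over a dataset of shape (N, D)."""
    return {"min": x.min(axis=0), "max": x.max(axis=0)}

def normalize(x: np.ndarray, stats: dict) -> np.ndarray:
    """Map to [-1, 1]; guard against zero-range (constant) dimensions."""
    rng = np.maximum(stats["max"] - stats["min"], 1e-8)
    return 2.0 * (x - stats["min"]) / rng - 1.0

def denormalize(x: np.ndarray, stats: dict) -> np.ndarray:
    """Invert normalize(); used on predicted actions before execution."""
    rng = np.maximum(stats["max"] - stats["min"], 1e-8)
    return (x + 1.0) * rng / 2.0 + stats["min"]
```

Because evaluation must undo this mapping on predicted actions, the same stats file is required at inference time, which is why `evaluate.py` needs `norm_stats.json`.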
---

## Architecture

```
Point Cloud (1024×3)  → PointNet → 256-d
Robot State (2×23)    → MLP      → 128-d
Grasp Type (one-hot)  → MLP      → 64-d
          Concat → MLP → 512-d conditioning vector
                        ↓
              DDPM 1D Temporal U-Net
                        ↓
            Action chunk (16 × 23-dim)
```

Grasp type IDs: `0=crimp, 1=sloper, 2=pinch, 3=jug`

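The fusion above can be sketched shape-for-shape in numpy. This only demonstrates the dimensions (256 + 128 + 64 = 448, projected to 512); the zero-filled features and weight matrix stand in for the real learned encoders:

```python
import numpy as np

GRASP_TYPES = ["crimp", "sloper", "pinch", "jug"]

def one_hot_grasp(name: str) -> np.ndarray:
    """Encode a grasp type name as a one-hot(4) vector."""
    v = np.zeros(len(GRASP_TYPES), dtype=np.float32)
    v[GRASP_TYPES.index(name)] = 1.0
    return v

# Placeholder features with the sizes from the diagram above.
pc_feat    = np.zeros(256, dtype=np.float32)  # PointNet(point cloud)
state_feat = np.zeros(128, dtype=np.float32)  # MLP(2x23 robot state)
grasp_feat = np.zeros(64,  dtype=np.float32)  # MLP(one_hot_grasp(...))

fused = np.concatenate([pc_feat, state_feat, grasp_feat])  # 448-d
W = np.zeros((512, fused.shape[0]), dtype=np.float32)      # final fusion MLP (stand-in)
cond = W @ fused                                           # 512-d conditioning vector
```

The U-Net then denoises the 16×23 action chunk conditioned on this 512-d vector at every diffusion step.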
---

## Copying Checkpoints Back

After training, copy the checkpoint directory back to the robot machine for evaluation:

```bash
scp -r checkpoints/pc_pilot/ user@robot-machine:/path/to/tele/checkpoints/pc_pilot/
```

Then, on the robot machine:

```bash
source ~/franka/bin/activate
source ~/frankapy/catkin_ws/devel/setup.bash
cd ~/Desktop/tele/data_collection
python3 evaluate.py --checkpoint ../checkpoints/pc_pilot/best.pt --hold 0 --grasp-type jug
```

---

## Dataset Structure (zarr)

```
climbing_holds.zarr/
  data/
    state        (N, 23)       float32 — arm(7) + hand(16) joint positions
    action       (N, 23)       float32 — same layout, shifted +1 timestep
    point_cloud  (N, 1024, 3)  float32 — clean scene scan per episode, repeated per timestep
    timestamps   (N,)          float64
  meta/
    episode_ends   (E,) int64
    hold_id        (E,) int64 — 0=edge_A, 1=edge_B, 2=sloper, 3=pinch, 4=test_edge
    quality        (E,) int64 — 1=good, 0=bad
    grasp_type     (E,) str   — "crimp" | "sloper" | "pinch" | "jug"
    grasp_type_id  (E,) int64 — 0=crimp, 1=sloper, 2=pinch, 3=jug
```

Note: images are NOT included in this dataset — the policy uses point clouds only.
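`episode_ends` stores the cumulative end index of each episode in the flat timestep arrays, and `--good-only` training keeps only episodes with `quality == 1`. A minimal sketch of slicing episodes out of the flat arrays, using small synthetic stand-ins for the real zarr metadata:

```python
import numpy as np

def split_episodes(episode_ends: np.ndarray, quality: np.ndarray):
    """Yield (start, end) timestep ranges for good episodes (quality == 1)."""
    # Each episode i spans [episode_ends[i-1], episode_ends[i]); episode 0 starts at 0.
    starts = np.concatenate([[0], episode_ends[:-1]])
    for s, e, q in zip(starts, episode_ends, quality):
        if q == 1:
            yield int(s), int(e)

# Synthetic stand-ins for meta/episode_ends and meta/quality:
episode_ends = np.array([100, 250, 400])
quality      = np.array([1, 0, 1])
slices = list(split_episodes(episode_ends, quality))
# → [(0, 100), (250, 400)]
```

With the real store, `data/state[s:e]` and `data/point_cloud[s:e]` then give one episode's trajectory and its (repeated) scene scan.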