aliangdw committed · Commit ec9b401 · verified · 1 Parent(s): 01f02fa

Update README.md

Files changed (1)
  1. README.md +51 -12
README.md CHANGED
@@ -2,26 +2,65 @@
 license: apache-2.0
 base_model: Qwen/Qwen3-VL-4B-Instruct
 tags:
-- reward_model
-- rfm
-- preference_comparisons
+- reward model
+- robot learning
+- foundation models
 library_name: transformers
 ---
 
-# aliangdw/rfm_qwen4b_pref_prog_succ_8frames_all_discrete_10bins_part2
-
-## Model Details
-
-- **Base Model**: Qwen/Qwen3-VL-4B-Instruct
-- **Model Type**: qwen3_vl
-
-## Training Run
-
-- **Wandb Run**: [ant_rfm_qwen4b_4gpu_bs16_pref_prog_succ_8_frames_all_discrete_10_bins_part2](https://wandb.ai/clvr/rfm/runs/wydywqsb)
-- **Wandb ID**: `wydywqsb`
-- **Project**: rfm
-- **Notes**: all run with prog_token per frame, qwen 4b, discrete progress, 10 bins
+# Robometer 4B
+
+**Paper:** [arXiv (Coming Soon)](https://arxiv.org/)
+
+**Robometer** is a general-purpose vision-language reward model for robotics. It is trained on [RBM-1M](https://huggingface.co/datasets/) with **Qwen3-VL-4B** to predict **per-frame progress**, **per-frame success**, and **trajectory preferences** from rollout videos. The model combines (1) frame-level progress supervision on expert data and (2) trajectory-comparison preference supervision, so it can learn from both successful and failed rollouts and generalize across diverse robot embodiments and tasks.
+
+Given a **task instruction** and a **rollout video** (or frame sequence), the model predicts:
+
+- **Per-frame progress** — continuous progress values over time (e.g. 0–1 or binned).
+- **Per-frame success** — success probability (or binary) at each timestep.
+- **Preference / ranking** — which of two trajectories is better for the task.
+
+### Usage
+
+For full setup, example scripts, and configs, see the **GitHub repo**: [github.com/aliang8/robometer](https://github.com/aliang8/robometer).
+
+**Option 1 — Run the model locally** (loads this checkpoint from Hugging Face):
+
+```bash
+uv run python scripts/example_inference_local.py \
+    --model-path aliangdw/Robometer-4B \
+    --video /path/to/video.mp4 \
+    --task "your task description"
+```
+
+**Option 2 — Use the evaluation server** (start server, then run client):
+
+```bash
+# Start server
+uv run python robometer/evals/eval_server.py \
+    --config-path=robometer/configs \
+    --config-name=eval_config_server \
+    model_path=aliangdw/Robometer-4B \
+    server_url=0.0.0.0 \
+    server_port=8000
+
+# Client (no robometer dependency)
+uv run python scripts/example_inference.py \
+    --eval-server-url http://localhost:8000 \
+    --video /path/to/video.mp4 \
+    --task "your task description"
+```
 
 ## Citation
 
 If you use this model, please cite:
+
+```bibtex
+@misc{robometer2025,
+  title={Robometer: Scaling General-Purpose Robotic Reward Models via Trajectory Comparisons},
+  author={Anthony Liang* and Yigit Korkmaz* and Jiahui Zhang and Minyoung Hwang and Abrar Anwar and Sidhant Kaushik and Aditya Shah and Alex S. Huang and Luke Zettlemoyer and Dieter Fox and Yu Xiang and Anqi Li and Andreea Bobu and Abhishek Gupta and Stephen Tu† and Erdem B{\i}y{\i}k† and Jesse Zhang†},
+  year={2025},
+  url={https://github.com/aliang8/reward_fm},
+  note={arXiv coming soon}
+}
+```
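
The new card describes per-frame progress (the earlier training notes mention discrete progress with 10 bins) and trajectory preferences. As an illustration only, with hypothetical helpers that are not part of the robometer API, a minimal sketch of how continuous per-frame progress values could be discretized into bins and related to a trajectory-level ranking:

```python
from typing import List

def bin_progress(progress: List[float], num_bins: int = 10) -> List[int]:
    """Discretize continuous progress values in [0, 1] into integer bins.

    Hypothetical helper: mirrors the 'discrete progress, 10 bins' idea from
    the training notes, not the model's actual output head.
    """
    bins = []
    for p in progress:
        p = min(max(p, 0.0), 1.0)                  # clamp to [0, 1]
        bins.append(min(int(p * num_bins), num_bins - 1))  # index 0..num_bins-1
    return bins

def prefer(progress_a: List[float], progress_b: List[float]) -> str:
    """Toy preference rule: the trajectory with higher final progress wins.

    The real model predicts preferences directly from video; this only
    sketches how per-frame progress relates to a pairwise ranking.
    """
    return "A" if progress_a[-1] >= progress_b[-1] else "B"

rollout_a = [0.0, 0.2, 0.5, 0.9]   # steady progress toward success
rollout_b = [0.0, 0.3, 0.4, 0.4]   # stalls partway through
print(bin_progress(rollout_a))      # [0, 2, 5, 9]
print(prefer(rollout_a, rollout_b)) # A
```

Discretizing into bins turns progress regression into a classification target; the pairwise rule above is only the crudest stand-in for the learned preference head.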