---
license: apache-2.0
---
## πŸ”₯ Reproduce Website Demos
1. **[Environment Setup]** Our environment setup is identical to that of [CogVideoX](https://github.com/THUDM/CogVideo); refer to their configuration to complete the setup.
```bash
conda create -n robomaster python=3.10
conda activate robomaster
```
2. Robotic Manipulation on Diverse Out-of-Domain Objects
```bash
python inference_inthewild.py \
--input_path demos/diverse_ood_objs \
--output_path samples/infer_diverse_ood_objs \
--transformer_path ckpts/RoboMaster \
--model_path ckpts/CogVideoX-Fun-V1.5-5b-InP
```
3. Robotic Manipulation with Diverse Skills
```bash
python inference_inthewild.py \
--input_path demos/diverse_skills \
--output_path samples/infer_diverse_skills \
--transformer_path ckpts/RoboMaster \
--model_path ckpts/CogVideoX-Fun-V1.5-5b-InP
```
4. Long Video Generation in Auto-Regressive Manner
```bash
python inference_inthewild.py \
--input_path demos/long_video \
--output_path samples/long_video \
--transformer_path ckpts/RoboMaster \
--model_path ckpts/CogVideoX-Fun-V1.5-5b-InP
```
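The auto-regressive scheme extends video length by conditioning each new clip on the tail of the previous one. An abstract sketch of that rollout loop (the `model` interface here is hypothetical, not the actual RoboMaster API):

```python
def generate_long_video(model, first_frame, num_chunks):
    """Auto-regressive rollout: each chunk is conditioned on the last
    frame of the previous chunk (model interface is hypothetical)."""
    frames, cond = [], first_frame
    for _ in range(num_chunks):
        chunk = model(cond)   # generate one clip from the conditioning frame
        frames.extend(chunk)
        cond = chunk[-1]      # tail frame seeds the next clip
    return frames

# Toy model: each "clip" is two integer "frames" counting up from the condition.
video = generate_long_video(lambda c: [c + 1, c + 2], 0, 3)
print(video)  # [1, 2, 3, 4, 5, 6]
```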
## πŸš€ Benchmark Evaluation (Reproduce Paper Results)
```
├── RoboMaster
├── eval_metrics
│   ├── VBench
│   ├── common_metrics_on_video_quality
│   ├── eval_traj
│   └── results
│       ├── bridge_eval_gt
│       ├── bridge_eval_ours
│       └── bridge_eval_ours_tracking
```
**(1) Inference on Benchmark & Prepare Evaluation Files**
1. Generate `bridge_eval_ours`. (Results may vary slightly across computing machines, even with the same seed; reference files are provided under `eval_metrics/results`.)
```bash
cd RoboMaster/
python inference_eval.py
```
2. Generate `bridge_eval_ours_tracking`: install [CoTracker3](https://github.com/facebookresearch/co-tracker), then estimate tracking points with a grid size of 30 on the videos in `bridge_eval_ours`.
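A grid size of 30 means the tracker is seeded with a uniform 30×30 grid of query points on the first frame. A minimal sketch of such a grid (an illustrative helper, not the repo's or CoTracker's actual code):

```python
def make_query_grid(height, width, grid_size=30):
    """Uniform 2-D grid of (x, y) query points, as a grid size of 30 would seed."""
    ys = [i * (height - 1) / (grid_size - 1) for i in range(grid_size)]
    xs = [j * (width - 1) / (grid_size - 1) for j in range(grid_size)]
    return [(x, y) for y in ys for x in xs]

points = make_query_grid(480, 640)
print(len(points))  # 900 query points (30 x 30) tracked per video
```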
**(2) Evaluation on Visual Quality**
1. Evaluation of VBench metrics.
```bash
cd eval_metrics/VBench
python evaluate.py \
--dimension aesthetic_quality imaging_quality temporal_flickering motion_smoothness subject_consistency background_consistency \
--videos_path ../results/bridge_eval_ours \
--mode=custom_input \
--output_path evaluation_results
```
2. Evaluation of FVD and FID metrics.
```bash
cd eval_metrics/common_metrics_on_video_quality
python calculate.py -v1_f ../results/bridge_eval_ours -v2_f ../results/bridge_eval_gt
python -m pytorch_fid eval_1 eval_2
```
**(3) Evaluation on Trajectory (Robotic Arm & Manipulated Object)**
1. Estimation of TrajError metrics. (Samples listed in `failed_track.txt` are excluded because [CoTracker3](https://github.com/facebookresearch/co-tracker) failed to estimate their tracks.)
```bash
cd eval_metrics/eval_traj
python calculate_traj.py \
--input_path_1 ../results/bridge_eval_ours \
--input_path_2 ../results/bridge_eval_gt \
--tracking_path ../results/bridge_eval_ours_tracking \
--output_path evaluation_results
```
2. Check the visualization videos under `evaluation_results`. The trajectories of the robotic arm and the manipulated object are overlaid throughout the entire video for clearer illustration.
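TrajError compares generated trajectories against ground truth. A minimal sketch of a mean point-wise trajectory error (illustrative only, not the repo's exact metric):

```python
import math

def traj_error(pred, gt):
    """Mean Euclidean distance between corresponding points of two
    trajectories; pred and gt are lists of (x, y) positions, one per frame."""
    assert len(pred) == len(gt)
    return sum(math.dist(p, g) for p, g in zip(pred, gt)) / len(pred)

# Toy check: a track offset by (3, 4) at every frame has error 5.
pred = [(0, 0), (1, 0), (2, 0)]
gt = [(3, 4), (4, 4), (5, 4)]
print(traj_error(pred, gt))  # 5.0
```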