---
license: apache-2.0
---

## πŸ”₯ Reproduce Website Demos

1. **[Environment Set Up]** Our environment setup is identical to that of [CogVideoX](https://github.com/THUDM/CogVideo); refer to their configuration to complete the installation.
    ```bash
    conda create -n robomaster python=3.10
    conda activate robomaster
    ```

2. Robotic Manipulation on Diverse Out-of-Domain Objects
    ```bash
    python inference_inthewild.py \
        --input_path demos/diverse_ood_objs \
        --output_path samples/infer_diverse_ood_objs \
        --transformer_path ckpts/RoboMaster \
        --model_path ckpts/CogVideoX-Fun-V1.5-5b-InP
    ```

3. Robotic Manipulation with Diverse Skills
    ```bash
    python inference_inthewild.py \
        --input_path demos/diverse_skills \
        --output_path samples/infer_diverse_skills \
        --transformer_path ckpts/RoboMaster \
        --model_path ckpts/CogVideoX-Fun-V1.5-5b-InP
    ```

4. Long Video Generation in an Auto-Regressive Manner
    ```bash
    python inference_inthewild.py \
        --input_path demos/long_video \
        --output_path samples/long_video \
        --transformer_path ckpts/RoboMaster \
        --model_path ckpts/CogVideoX-Fun-V1.5-5b-InP
    ```
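The auto-regressive mode above conditions each new clip on the end of the previously generated one. A minimal sketch of that chaining logic (the `generate_clip` callable and clip lengths are assumptions for illustration, not the repository's API):

```python
# Sketch of auto-regressive long-video generation: each call is
# conditioned on the last frame produced so far.
def generate_long_video(first_frame, generate_clip, num_clips=3):
    """generate_clip(cond_frame) -> list of new frames (hypothetical stub)."""
    video = [first_frame]
    for _ in range(num_clips):
        clip = generate_clip(video[-1])  # condition on the latest frame
        video.extend(clip)               # append the newly generated clip
    return video
```

With a stub that emits two frames per call, three calls yield a single continuous sequence, which is the behavior `demos/long_video` exercises.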
    
## πŸš€ Benchmark Evaluation (Reproduce Paper Results)
  ```
β”œβ”€β”€ RoboMaster
    β”œβ”€β”€ eval_metrics
        β”œβ”€β”€ VBench
        β”œβ”€β”€ common_metrics_on_video_quality
        β”œβ”€β”€ eval_traj
        β”œβ”€β”€ results
            β”œβ”€β”€ bridge_eval_gt
            β”œβ”€β”€ bridge_eval_ours
            β”œβ”€β”€ bridge_eval_ours_tracking
  ```
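The `results` folders above must exist before the evaluation scripts read from or write into them. A small sketch that creates the layout up front (purely illustrative; some scripts may create these folders themselves):

```python
import os

# Expected evaluation layout under RoboMaster/, per the tree above.
RESULT_DIRS = [
    "eval_metrics/results/bridge_eval_gt",
    "eval_metrics/results/bridge_eval_ours",
    "eval_metrics/results/bridge_eval_ours_tracking",
]

for d in RESULT_DIRS:
    os.makedirs(d, exist_ok=True)  # no-op if the folder already exists
```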

**(1) Inference on Benchmark & Prepare Evaluation Files**
1. Generate `bridge_eval_ours`. (Note that results may vary slightly across machines, even with the same seed; reference files are provided under `eval_metrics/results`.)
    ```bash
    cd RoboMaster/
    python inference_eval.py
    ```
2. Generate `bridge_eval_ours_tracking`: install [CoTracker3](https://github.com/facebookresearch/co-tracker), then estimate tracking points with grid size 30 on `bridge_eval_ours`.
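"Grid size 30" means tracking is initialized from a regular 30Γ—30 lattice of query points over the first frame. A sketch of that sampling pattern (CoTracker3 builds its grid internally, e.g. via `model(video, grid_size=30)`, and its exact placement may differ, e.g. cell centers):

```python
import numpy as np

def make_grid_queries(height, width, grid_size=30):
    """Regular (x, y) lattice of query points, as implied by grid size 30."""
    ys = np.linspace(0, height - 1, grid_size)
    xs = np.linspace(0, width - 1, grid_size)
    xx, yy = np.meshgrid(xs, ys)
    return np.stack([xx.ravel(), yy.ravel()], axis=1)  # (grid_size**2, 2)
```

For a 640Γ—480 frame this produces 900 query points spanning the full image, one track per point.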

**(2) Evaluation on Visual Quality**

1. Evaluation of VBench metrics.
    ```bash
    cd eval_metrics/VBench
    python evaluate.py \
        --dimension aesthetic_quality imaging_quality temporal_flickering motion_smoothness subject_consistency background_consistency \
        --videos_path ../results/bridge_eval_ours \
        --mode=custom_input \
        --output_path evaluation_results
    ```

2. Evaluation of FVD and FID metrics.
    ```bash
    cd eval_metrics/common_metrics_on_video_quality
    python calculate.py -v1_f ../results/bridge_eval_ours -v2_f ../results/bridge_eval_gt
    python -m pytorch_fid eval_1 eval_2
    ```
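`pytorch_fid` compares two folders of still images, so videos are first flattened into per-frame files. A sketch of that dumping step (the helper name and file layout are assumptions; video loading is omitted; PPM is used because it is pure stdlib to write and among the extensions pytorch-fid accepts):

```python
import os
import numpy as np

def dump_frames(frames, out_dir):
    """Save a (T, H, W, 3) uint8 video array as numbered PPM images."""
    os.makedirs(out_dir, exist_ok=True)
    for i, frame in enumerate(frames):
        h, w, _ = frame.shape
        path = os.path.join(out_dir, f"{i:05d}.ppm")
        with open(path, "wb") as f:
            f.write(b"P6\n%d %d\n255\n" % (w, h))      # binary PPM header
            f.write(frame.astype(np.uint8).tobytes())  # raw RGB payload
```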


**(3) Evaluation on Trajectory (Robotic Arm & Manipulated Object)**

1. Estimation of TrajError metrics. (Note that we exclude the samples listed in `failed_track.txt` due to failed estimation by [CoTracker3](https://github.com/facebookresearch/co-tracker).)
    ```bash
    cd eval_metrics/eval_traj
    python calculate_traj.py \
        --input_path_1 ../results/bridge_eval_ours \
        --input_path_2 ../results/bridge_eval_gt \
        --tracking_path ../results/bridge_eval_ours_tracking \
        --output_path evaluation_results
    ```

2. Check the visualization videos under `evaluation_results`. We blend the trajectories of the robotic arm and the manipulated object over the entire video for better illustration.
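Trajectory error is naturally measured as the mean Euclidean distance between corresponding tracked points in the generated and ground-truth videos. A minimal sketch of that metric (the exact formulation in `calculate_traj.py` may differ, e.g. in normalization or how points are matched):

```python
import numpy as np

def traj_error(tracks_pred, tracks_gt):
    """Mean L2 distance between matched point tracks.

    Both inputs have shape (T, N, 2): T frames, N tracked points, (x, y).
    """
    assert tracks_pred.shape == tracks_gt.shape
    dists = np.linalg.norm(tracks_pred - tracks_gt, axis=-1)  # (T, N)
    return float(dists.mean())
```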