Commit 91d8892
Parent(s): f084cf7
Update model card with description and paper link (#1)
- Update model card with description and paper link (fa6d40a365c5971c63b9f5c1594dc57432f75a94)
Co-authored-by: Niels Rogge <nielsr@users.noreply.huggingface.co>
README.md CHANGED
@@ -1,13 +1,29 @@
 ---
+base_model:
+- Qwen/Qwen2.5-VL-7B-Instruct
 license: apache-2.0
 metrics:
 - mae
 - accuracy
-base_model:
-- Qwen/Qwen2.5-VL-7B-Instruct
 pipeline_tag: video-text-to-text
 ---
 
+# PRIMO R1: Process Reasoning Induced Monitoring
+
+This repository contains the model weights for PRIMO R1, introduced in the paper [From Passive Observer to Active Critic: Reinforcement Learning Elicits Process Reasoning for Robotic Manipulation](https://huggingface.co/papers/2603.15600).
+
+## Model Description
+
+PRIMO R1 is a 7B-parameter framework designed to transform video Multimodal Large Language Models (MLLMs) from passive "Observers" into active "Critics" for long-horizon robotic manipulation. While traditional models often focus on recognizing ongoing events, PRIMO R1 evaluates the current state of a task relative to its final goal.
+
+The model is fine-tuned from [Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) using outcome-based Reinforcement Learning to elicit explicit Chain-of-Thought (CoT) generation for progress estimation. Its architecture incorporates a structured temporal input that anchors the video sequence between the initial and current state images.
+
+## Key Features
+
+- **RL-Induced Reasoning**: Uses outcome-based RL to incentivize the generation of thought processes that evaluate state progress.
+- **State-of-the-Art Performance**: Achieves a 50% reduction in mean absolute error relative to specialized reasoning baselines, outperforming much larger general MLLMs.
+- **Strong Generalization**: Generalizes zero-shot to failure detection tasks, achieving 67.0% accuracy on the RoboFail benchmark and surpassing closed-source models such as OpenAI o1.
+- **Structured Temporal Input**: Explicitly anchors the video sequence between the initial and current state images to provide clear goal-oriented context.
 
 ## Citations
 
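The updated card describes the structured temporal input but does not include a usage snippet. Below is a minimal inference sketch, assuming the released checkpoint keeps the standard Qwen2.5-VL-Instruct interface in `transformers` (with `qwen_vl_utils` for vision preprocessing); the repository id, file paths, prompt wording, and the ordering of initial image, video clip, and current image are illustrative assumptions, not the verified format from the paper or this repository.

```python
# Minimal sketch (assumption: PRIMO R1 keeps the standard Qwen2.5-VL-Instruct
# interface; the repo id, paths, and prompt below are placeholders).
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model_id = "PRIMO-R1/PRIMO-R1-7B"  # hypothetical repository id
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Structured temporal input (illustrative ordering): initial-state image,
# rollout video, current-state image, then the progress-estimation query.
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "file:///path/to/initial_state.jpg"},
        {"type": "video", "video": "file:///path/to/rollout.mp4"},
        {"type": "image", "image": "file:///path/to/current_state.jpg"},
        {"type": "text", "text": "Reason step by step about how far the task "
                                 "has progressed toward the goal, then give a "
                                 "progress estimate between 0 and 100."},
    ],
}]

text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=512)
answer = processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(answer)  # chain-of-thought trace followed by a progress estimate
```

The two anchor images around the video clip mirror the "structured temporal input" described in the card; the exact prompt template and frame-sampling settings used for training are not specified there, so they would need to be taken from the paper or the repository's inference code.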