---
license: apache-2.0
language:
- en
tags:
- video-understanding
- reward-model
- computer-use
- qwen3-vl
- multimodal
base_model: Qwen/Qwen3-VL-8B-Instruct
pipeline_tag: video-text-to-text
library_name: transformers
---

# ExeVRM: Execution Video Reward Model

ExeVRM (Execution Video Reward Model) is a fine-tuned [Qwen3-VL-8B-Instruct](https://huggingface.co/Qwen/Qwen3-VL-8B-Instruct) model that judges whether a computer-use agent's video trajectory successfully completes a given task. Given a screen recording of an agent performing a task and a natural-language instruction, ExeVRM predicts whether the execution is **correct** or **incorrect**.

## Model Summary

| Attribute | Value |
|---|---|
| Base Model | Qwen3-VL-8B-Instruct |
| Parameters | 8.7B |
| Architecture | Qwen3VLForConditionalGeneration |
| Precision | bfloat16 |
| Max Context Length | 128,000 tokens |
| Video Resolution | 720p (1280x720) |
| Max Video Frames | 50 |
| Video FPS | 1.0 |
| Training Data | OSWorld + AgentNet + ScaleCUA |
| Training Loss | 0.046 |
| Eval Accuracy | 84.7% |

## Key Features: STP & TTP

ExeVRM incorporates two token-pruning techniques that enable efficient processing of long execution videos:

- **STP (Spatial Token Pruning)**: Reduces visual tokens within each frame by merging spatially similar patches (e.g., uniform UI backgrounds). Uses connected-component analysis to identify and prune large homogeneous regions.
- **TTP (Temporal Token Pruning)**: Reduces visual tokens across frames by detecting temporally duplicated patches between consecutive frames (e.g., static screen regions between agent actions).

Combined, STP and TTP achieve a 40-60% token reduction while maintaining reward-prediction quality.

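To make the temporal idea concrete, here is a minimal, illustrative sketch (not the ExeVRM implementation; `cosine` and `temporal_prune` are hypothetical helpers): each patch is compared against the patch at the same position in the previous frame, and near-duplicates above a similarity threshold are pruned.

```python
import math


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0


def temporal_prune(frames, threshold=0.9999):
    """Keep a patch only if it differs from the same patch in the previous frame.

    `frames` is a list of frames; each frame is a list of patch vectors.
    Returns, per frame, the indices of the patches that are kept.
    """
    kept = [list(range(len(frames[0])))]  # keep every patch of the first frame
    for prev, cur in zip(frames, frames[1:]):
        kept.append([
            i for i, patch in enumerate(cur)
            if cosine(patch, prev[i]) < threshold  # near-duplicate -> pruned
        ])
    return kept


# Two frames of three patches; only patch 2 changes between frames
frames = [
    [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]],
    [[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]],
]
print(temporal_prune(frames))  # [[0, 1, 2], [2]]
```

In a real model the comparison runs on patch embeddings rather than raw pixel vectors, but the pruning criterion is the same shape as the `ttp_threshold` / `ttp_similarity_metric` settings listed below.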
### Token Pruning Parameters Used in Training

| Parameter | Value |
|---|---|
| `use_stp` | `true` |
| `stp_mode` | `forward_removal` |
| `stp_threshold` | `3.0` |
| `stp_skip_ratio` | `0.0` |
| `stp_large_comp_threshold` | `10` |
| `stp_patch_level` | `true` |
| `use_raw_frames_in_stp` | `true` |
| `use_ttp` | `true` |
| `ttp_threshold` | `0.9999` |
| `ttp_similarity_metric` | `cosine` |

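The `stp_large_comp_threshold` parameter reflects the connected-component step of STP: homogeneous patches are grouped into connected regions, and only sufficiently large regions are pruned. A toy sketch under assumed inputs (`spatial_prune` is a hypothetical helper, not the ExeVRM code; it takes a precomputed boolean "homogeneous" mask rather than real image patches):

```python
from collections import deque


def spatial_prune(flat, width, large_comp_threshold=10):
    """Prune large connected regions of homogeneous patches.

    `flat` is a row-major boolean grid (True = homogeneous patch, e.g. a
    uniform UI background). Connected components of True cells larger than
    `large_comp_threshold` are pruned. Returns indices of kept patches.
    """
    height = len(flat) // width
    pruned, seen = set(), set()
    for start in range(len(flat)):
        if not flat[start] or start in seen:
            continue
        # BFS flood fill over 4-connected homogeneous neighbours
        comp, queue = [], deque([start])
        seen.add(start)
        while queue:
            idx = queue.popleft()
            comp.append(idx)
            r, c = divmod(idx, width)
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                n = nr * width + nc
                if 0 <= nr < height and 0 <= nc < width and flat[n] and n not in seen:
                    seen.add(n)
                    queue.append(n)
        if len(comp) > large_comp_threshold:
            pruned.update(comp)
    return [i for i in range(len(flat)) if i not in pruned]


# 4x4 patch grid: rows 0-2 form one 12-patch uniform region (> threshold of 10)
flat = [True] * 12 + [False] * 4
print(spatial_prune(flat, width=4))  # [12, 13, 14, 15]
```

A region of 8 homogeneous patches would survive under the default threshold of 10; only large uniform areas are dropped.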
## How to Use

### Installation

```bash
pip install transformers torch accelerate
```

For video processing:

```bash
pip install av pillow
```

### Loading the Model from Hugging Face

```python
from transformers import Qwen3VLForConditionalGeneration, AutoProcessor
import torch

model_name = "lime-nlp/ExeVRM-8B"  # Replace with the actual HF repo name

# Load model
model = Qwen3VLForConditionalGeneration.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

# Load processor
processor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True)
```

### Video Preprocessing

ExeVRM expects 720p (1280x720) video frames sampled at 1 FPS, with a maximum of 50 frames. Here is how to preprocess a video:

```python
import av
import math
import numpy as np


def sample_frames_from_video(video_path, fps=1.0, max_frames=50, size=(1280, 720)):
    """
    Sample frames from a video at the specified FPS, up to max_frames.

    Args:
        video_path: Path to the video file (mp4, avi, etc.)
        fps: Target frames per second for sampling (default: 1.0)
        max_frames: Maximum number of frames to sample (default: 50)
        size: Output resolution; ExeVRM expects 720p (default: (1280, 720))

    Returns:
        List of PIL Images
    """
    container = av.open(video_path)
    stream = container.streams.video[0]

    total_frames = stream.frames
    if not total_frames:  # some containers (e.g. webm) do not store a frame count
        total_frames = sum(1 for _ in container.decode(video=0))
        container.seek(0)
    duration_seconds = float(stream.duration * stream.time_base)

    # Compute the number of frames to sample
    num_samples = max(1, math.floor(duration_seconds * fps))
    num_samples = min(total_frames, max_frames, num_samples)

    # Uniformly sample frame indices
    indices = set(np.linspace(0, total_frames - 1, num_samples).astype(int).tolist())
    last_index = max(indices)

    frames = []
    for i, frame in enumerate(container.decode(video=0)):
        if i in indices:
            frames.append(frame.to_image().resize(size))  # PIL Image at 720p
        if i >= last_index:
            break

    container.close()
    return frames
```

### Running Inference

```python
import torch
from qwen_vl_utils import process_vision_info  # pip install qwen-vl-utils

# Prepare the input message
messages = [
    {
        "role": "user",
        "content": [
            {"type": "video", "video": "path/to/agent_trajectory.mp4", "fps": 1.0},
            {"type": "text", "text": (
                "Given a user task and a computer-using video recording, "
                "evaluate whether the user completes the task or not. "
                "Reply your judgement in the \\box{}.\n"
                "If the video correctly completes the task, reply \\box{correct}. "
                "Otherwise, reply \\box{incorrect}.\n\n"
                "# User Task\n"
                "Open Google Chrome and search for 'weather today'\n"
            )},
        ],
    }
]

# Process inputs
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
).to(model.device)

# Generate (greedy decoding is sufficient for a binary judgement)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)

# Decode only the newly generated tokens
generated_ids = output_ids[:, inputs.input_ids.shape[1]:]
response = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
# Expected output: \box{correct} or \box{incorrect}
```

### Using with `qwen_vl_utils` (Recommended)

The inference examples above rely on the Qwen VL utilities:

```bash
pip install qwen-vl-utils
```

This package provides `process_vision_info()`, which handles video frame extraction and formatting automatically, including frame sampling at the specified FPS.

### Manual Frame-by-Frame Inference

If you need more control over frame sampling (e.g., for custom preprocessing):

```python
# 1. Sample frames manually
frames = sample_frames_from_video("path/to/video.mp4", fps=1.0, max_frames=50)

# 2. Build a message that passes the pre-sampled frames as the video
messages = [
    {
        "role": "user",
        "content": [
            {"type": "video", "video": frames},
            {"type": "text", "text": (
                "Given a user task and a computer-using video recording, "
                "evaluate whether the user completes the task or not. "
                "Reply your judgement in the \\box{}.\n"
                "If the video correctly completes the task, reply \\box{correct}. "
                "Otherwise, reply \\box{incorrect}.\n\n"
                "# User Task\n"
                "Your task description here\n"
            )},
        ],
    }
]

# 3. Process and run inference (same as above)
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
).to(model.device)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=256)

generated_ids = output_ids[:, inputs.input_ids.shape[1]:]
response = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```

## Prompt Format

ExeVRM uses the following prompt template for binary reward prediction:

```
Given a user task and a computer-using video recording, evaluate whether the user completes the task or not. Reply your judgement in the \box{}.
If the video correctly completes the task, reply \box{correct}. Otherwise, reply \box{incorrect}.

# User Task
<task description>
```

If you want the model to output a justification, use the following template instead:

```
Given a user task and a computer-using video recording, evaluate whether the user completes the task or not. Reply your judgement in the \box{}.
If the video correctly completes the task, reply \box{correct}. Otherwise, reply \box{incorrect}.
If the video does not complete the task (i.e., incorrect), please provide the timestamp range, i.e., from <[time_start] seconds> to <[time_end] seconds>, of the video that deviates from the user's instruction.

# User Task
<task description>
```

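Since the verdict always arrives inside `\box{...}`, a small helper can extract it from the decoded response (`parse_verdict` is a hypothetical convenience function, not part of the model repository):

```python
import re


def parse_verdict(response: str):
    """Extract the \\box{...} verdict from an ExeVRM reply.

    Returns "correct", "incorrect", or None if no verdict is found.
    """
    match = re.search(r"\\box\{(correct|incorrect)\}", response)
    return match.group(1) if match else None


print(parse_verdict("\\box{correct}"))                     # correct
print(parse_verdict("The agent fails. \\box{incorrect}"))  # incorrect
```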
## Training Details

- **Training Framework**: [ExeVRM](https://github.com/limenlp/ExeVRM) (built on [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory))
- **Fine-tuning**: Full fine-tuning of the language model; the vision tower and multi-modal projector are frozen
- **Optimizer**: AdamW with a cosine learning-rate schedule
- **Learning rate**: 5e-6
- **Warmup ratio**: 0.1
- **Epochs**: 1
- **Batch size**: 1 per device, gradient accumulation steps = 2
- **Precision**: bfloat16
- **Attention**: FlashAttention-2
- **DeepSpeed**: ZeRO Stage 2

## Limitations

- The model is trained on computer-use execution videos (desktop/web/mobile); performance on other video domains is not guaranteed.
- Video inputs should be 720p for best results, matching the training distribution.
- The model outputs binary judgments (`\box{correct}` / `\box{incorrect}`) and is not designed for open-ended video QA.
- STP and TTP token pruning are applied during training. For inference without the ExeVRM framework, the model processes full video tokens (no pruning), which may require more GPU memory for long videos.

## Citation

If this model helps your research, please cite the following paper:

```
@misc{song2026videobasedrewardmodelingcomputeruse,
      title={Video-Based Reward Modeling for Computer-Use Agents},
      author={Linxin Song and Jieyu Zhang and Huanxin Sheng and Taiwei Shi and Gupta Rahul and Yang Liu and Ranjay Krishna and Jian Kang and Jieyu Zhao},
      year={2026},
      eprint={2603.10178},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2603.10178},
}
```

## License

Apache License 2.0