X-VLA Inference & ManiSkill Conflict Evaluation
This repository contains the inference code and ManiSkill conflict evaluation environment for X-VLA (Soft-Prompted Transformer as a Scalable Cross-Embodiment Vision-Language-Action Model), evaluated on out-of-distribution (OOD) conflict experiments.
Note: X-VLA achieved 0% task success on all 10 conflict experiments (color_object evaluated; the gripper retracted without interacting with objects ~90% of the time). This repo is provided for reproducibility and as a reference implementation. See the companion repos for models that achieved higher success: genie-inference-maniskill.
Repository Structure
```
xvla-inference/
├── xvla/                                # X-VLA inference code
│   ├── main.py                          # Main rollout script (conflict eval)
│   ├── deploy.py                        # X-VLA FastAPI inference server launcher
│   ├── run_ood_experiment_inference.sh  # Batch OOD sweep script
│   ├── vlm_eval_xvla_conflicts.py       # VLM eval (Gemini-2.5-Flash) of rollout videos
│   ├── models/                          # Model architecture (Florence2 + X-VLA)
│   ├── deploy/                          # X-VLA-Pt base pretrained model config
│   ├── conflict_stats.json              # Action/state normalization stats
│   ├── requirements.txt                 # Python dependencies
│   └── finetune_readme.md               # Fine-tuning guide
│
└── maniskill_conflict/                  # ManiSkill conflict environment
    ├── mani_skill/                      # Modified ManiSkill package (VerbObjectColor-v1)
    ├── conflict_experiment/             # Pair generation & experiment utilities
    └── collection_strategy/             # Training vocabulary & task language
```
Action Space
X-VLA outputs actions of shape (30, 20), i.e. 30 steps × 20 dims, but only the first 8 dims are meaningful joint-space targets:
| Component | Dims | Description |
|---|---|---|
| Joint targets | 7 | q0..q6 absolute joint positions |
| Gripper | 1 | Gripper open/close target |
| Padding | 12 | Zero-padded (training artifact) |
| Total | 20 | per action step |
- Control mode: `pd_joint_pos` (absolute joint position)
- The server outputs z-score normalized actions; `main.py` unnormalizes them using `conflict_stats.json` (see the sketch below)
- State input: 8-dim joint state, z-score normalized and zero-padded to 20 dims before being sent to the server
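As an illustration of this normalization convention, here is a minimal sketch of the client-side pre/post-processing. The layout and key names of `conflict_stats.json`, as well as the helper names, are assumptions for illustration, not taken from the repo:

```python
import json
import numpy as np

# Hypothetical layout of conflict_stats.json: per-dimension mean/std for the
# 8-dim state and the 8 meaningful action dims. The real key names may differ.
with open("xvla/conflict_stats.json") as f:
    stats = json.load(f)

state_mean  = np.asarray(stats["state"]["mean"])   # assumed shape (8,)
state_std   = np.asarray(stats["state"]["std"])    # assumed shape (8,)
action_mean = np.asarray(stats["action"]["mean"])  # assumed shape (8,)
action_std  = np.asarray(stats["action"]["std"])   # assumed shape (8,)

def prepare_state(joint_state_8d: np.ndarray) -> np.ndarray:
    """z-score normalize the 8-dim joint state and zero-pad to 20 dims."""
    normed = (joint_state_8d - state_mean) / state_std
    padded = np.zeros(20, dtype=np.float32)
    padded[:8] = normed
    return padded

def unnormalize_actions(action_chunk: np.ndarray) -> np.ndarray:
    """Undo z-score normalization on the first 8 dims of a (30, 20) chunk.

    The remaining 12 dims are zero padding and are discarded.
    Returns (30, 8) absolute pd_joint_pos targets (7 joints + gripper).
    """
    return action_chunk[:, :8] * action_std + action_mean
```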
Conflict Experiments
Ten pairwise factor conflict experiments were evaluated (each with 400 OOD episodes):
| Experiment | Factor 1 | Factor 2 |
|---|---|---|
| color_object | color | shape |
| color_size | color | size |
| color_spatial | color | spatial |
| size_object | size | shape |
| spatial_object | spatial | shape |
| spatial_size | spatial | size |
| verb_color | verb | color |
| verb_object | verb | shape |
| verb_size | verb | size |
| verb_spatial | verb | spatial |
X-VLA result: 0% success across all experiments. The policy (fine-tuned from the X-VLA-Pt checkpoint at step 30000) did not develop directional grasping behavior in the conflict environment; the gripper retracted without interacting with objects ~90% of the time.
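For reference, the ten experiments are exactly the unordered pairs of the five conflict factors (with `object` denoting the shape factor). A quick enumeration reproduces the experiment names in the table above:

```python
from itertools import combinations

# The five conflict factors; "object" stands for the shape factor.
factors = ["verb", "color", "spatial", "size", "object"]

# All unordered pairs -> the 10 pairwise conflict experiments.
pairs = list(combinations(factors, 2))
assert len(pairs) == 10
for f1, f2 in pairs:
    print(f"{f1}_{f2}")  # e.g. verb_color, color_object, size_object, ...
```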
Setup
1. Install dependencies
```bash
git clone https://huggingface.co/datasets/yqi19/xvla-inference
cd xvla-inference

# Install ManiSkill conflict environment
pip install -e maniskill_conflict/

# Install X-VLA dependencies
pip install -r xvla/requirements.txt
```
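As an optional sanity check, you can verify the conflict environment was installed. This sketch assumes the modified package registers `VerbObjectColor-v1` with gymnasium on `import mani_skill.envs`, as standard ManiSkill environments are:

```python
import gymnasium as gym
import mani_skill.envs  # noqa: F401  # importing registers ManiSkill environments

# The modified package should expose the conflict task as VerbObjectColor-v1.
assert "VerbObjectColor-v1" in gym.registry, "conflict environment not registered"
print("ManiSkill conflict environment installed correctly")
```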
2. Download checkpoint
Download the X-VLA fine-tuned checkpoint from yqi19/xvla:
```python
from huggingface_hub import snapshot_download

snapshot_download(repo_id="yqi19/xvla", local_dir="./xvla_checkpoint")
```
Place the checkpoint at xvla/checkpoints/color_object/ckpt-30000/.
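Alternatively, if the checkpoint repository keeps `ckpt-30000/` at its top level (an assumption; check the actual layout of yqi19/xvla), you can point `local_dir` directly at the expected location:

```python
from huggingface_hub import snapshot_download

# Assumes ckpt-30000/ sits at the root of yqi19/xvla; adjust local_dir if the layout differs.
snapshot_download(repo_id="yqi19/xvla", local_dir="./xvla/checkpoints/color_object")
```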
3. Launch inference server
```bash
cd xvla
python deploy.py --model_path checkpoints/color_object/ckpt-30000 --port 8010
```
4. Run conflict evaluation
```bash
# Single pair
python xvla/main.py \
    --host localhost --port 8010 \
    --experiment color_object --pair-i 0 --pair-j 1 --run-type color \
    --num-episodes 20

# Full sweep (all pairs for one experiment)
bash xvla/run_ood_experiment_inference.sh color_object 8010
```
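To cover all ten experiments rather than one, a small driver like the following (not part of the repo) can call the provided sweep script once per experiment; it assumes the inference server on port 8010 is already running:

```python
import subprocess

# The ten conflict experiments listed in the table above.
EXPERIMENTS = [
    "color_object", "color_size", "color_spatial",
    "size_object", "spatial_object", "spatial_size",
    "verb_color", "verb_object", "verb_size", "verb_spatial",
]

for exp in EXPERIMENTS:
    # Equivalent to: bash xvla/run_ood_experiment_inference.sh <experiment> 8010
    subprocess.run(
        ["bash", "xvla/run_ood_experiment_inference.sh", exp, "8010"],
        check=True,
    )
```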
VLM Evaluation
After collecting rollout videos, evaluate factor dominance using Gemini-2.5-Flash:
```bash
export NVIDIA_API_KEY=<your_api_key>
export XVLA_EXPERIMENTS_DIR=xvla/data/conflict_xvla/experiments
export XVLA_RESULTS_DIR=results
export XVLA_VLM_OUT_DIR=vlm_eval_output

python xvla/vlm_eval_xvla_conflicts.py color_object
```
Note: Since X-VLA had 0% task success, there are no meaningful videos to evaluate; the VLM script is included for completeness.
Citation
```bibtex
@article{zheng2025x,
  title   = {X-VLA: Soft-Prompted Transformer as Scalable Cross-Embodiment Vision-Language-Action Model},
  author  = {Zheng, Jinliang and Li, Jianxiong and Wang, Zhihao and others},
  journal = {arXiv preprint arXiv:2510.10274},
  year    = {2025}
}
```