Tags: Transformers · Safetensors · qwen2_5_vl · reward_model · rbm · preference_comparisons · text-generation-inference
How to use jesbu1/robometer-4b-fft-libero with Transformers:
```python
# Load model directly
from transformers import AutoProcessor, RFM

processor = AutoProcessor.from_pretrained("jesbu1/robometer-4b-fft-libero")
model = RFM.from_pretrained("jesbu1/robometer-4b-fft-libero")
```
jesbu1/robometer-4b-fft-libero
This Robometer model is fully fine-tuned on the libero_all dataset, which combines LIBERO-90, LIBERO-10, LIBERO-Object, LIBERO-Goal, and LIBERO-Spatial, together with generated failure trajectories from each of these suites.
It is likely to outperform the default Robometer model, since the standard checkpoint is not trained on LIBERO-90.
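Since `RFM` is a repo-specific class, its exact inputs and outputs are not documented here; the sketch below shows one plausible call pattern for scoring task progress from video frames plus an instruction. The `score_frames` helper, the `reward` output field, and the processor keyword arguments are all assumptions, not the confirmed API — check the repository code for the real interface. Dummy stand-ins are used so the flow can be read (and run) without downloading the 4B checkpoint.

```python
from dataclasses import dataclass


def score_frames(model, processor, frames, instruction):
    """Hedged sketch: run a reward model on trajectory frames plus a task
    instruction and return a scalar score. The `reward` field and the
    processor signature are assumptions about this repo's custom RFM class."""
    inputs = processor(text=instruction, images=frames, return_tensors="pt")
    outputs = model(**inputs)
    return float(outputs.reward)  # assumed output attribute name


# Dummy stand-ins so the call pattern runs without the real 4B model.
@dataclass
class DummyOutput:
    reward: float


class DummyProcessor:
    def __call__(self, text, images, return_tensors):
        return {"text": text, "images": images}


class DummyModel:
    def __call__(self, **inputs):
        return DummyOutput(reward=0.5)


print(score_frames(DummyModel(), DummyProcessor(),
                   ["frame_0", "frame_1"], "pick up the bowl"))
# → 0.5
```

With the real model, the stand-ins would be replaced by the `processor` and `model` loaded via `from_pretrained` as shown above.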
Model Details
- Base Model: Qwen/Qwen3-VL-4B-Instruct
- Model Type: qwen2_5_vl
Training Run
- Wandb Run: rbm
- Wandb ID: wj739wca
- Project: robometer
- Notes: training Robometer
Citation
If you use this model, please cite: