UI-TARS: Pioneering Automated GUI Interaction with Native Agents (Paper: arXiv:2501.12326)
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText
processor = AutoProcessor.from_pretrained("chakra-labs/GLADOS-1")
model = AutoModelForImageTextToText.from_pretrained("chakra-labs/GLADOS-1")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"}
        ]
    },
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))

GLADOS-1 is the first computer-use agent (CUA) model post-trained on collective, crowd-sourced trajectories. Leveraging the enormous PANGO dataset (composed primarily of Chrome-based interactions), its purpose is to provide a lens into what is possible with trajectory data at this scale in computer use.
It also represents the first open-source post-training pipeline for UI-TARS, inspired by the existing Qwen2-VL fine-tuning series.
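As a rough sketch of that computer-use setting (not the exact UI-TARS action-space prompt, which is covered by the deployment links below), the snippet below mirrors the transformers example above but swaps in a screenshot and a task instruction. The file name "screenshot.png", the instruction text, and the generation settings are placeholder assumptions, and the "path" image key requires a recent transformers release.

# Minimal computer-use sketch. Assumptions: a local "screenshot.png" exists and the
# plain-text instruction below stands in for the full UI-TARS action-space prompt.
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("chakra-labs/GLADOS-1")
model = AutoModelForImageTextToText.from_pretrained("chakra-labs/GLADOS-1")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "path": "screenshot.png"},       # placeholder: screenshot of the current screen
            {"type": "text", "text": "Open the settings menu."},  # placeholder: the GUI task to perform
        ],
    },
]

inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=128)
# The completion is the model's proposed next step on the screen (e.g. a click or type action).
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))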
This model is designed to:
📕 Release Blog | 🤗 Code | 🔧 Deployment (via UI-TARS) | 🖥️ Running on your own computer (via UI-TARS Desktop)
@misc{chakralabs2025glados-1,
  author = {Chakra Labs},
  title  = {GLADOS-1},
  url    = {https://github.com/Chakra-Network/GLADOS-1},
  year   = {2025}
}
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="chakra-labs/GLADOS-1")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"}
        ]
    },
]
pipe(text=messages)