Update README.md
README.md (CHANGED)
@@ -21,7 +21,7 @@ pipeline_tag: image-text-to-text
 GLADOS-1 is the first computer-use (CUA) model post-trained using **collective, crowd-sourced trajectories**.
 Leveraging the enormous [Pango dataset](https://huggingface.co/datasets/chakra-labs/pango-sample) (primarily Chrome-based interactions), its purpose is to provide a lens into what is possible with enormous trajectory volumes in computer use.
 
-It also represents the first open-sourced post-training pipeline for [UI-TARS](https://arxiv.org/pdf/2501.12326, inspired by the existing [Qwen2VL finetuning series](https://github.com/2U1/Qwen2-VL-Finetune).
+It also represents the first open-sourced post-training pipeline for [UI-TARS](https://arxiv.org/pdf/2501.12326), inspired by the existing [Qwen2VL finetuning series](https://github.com/2U1/Qwen2-VL-Finetune).
 
 This model is designed to:
 - **Be compliant**. It has been taught to rigorously follow directions and output action formats compatible with downstream parsers like PyAutoGUI.
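To make the last bullet concrete, here is a minimal sketch of how a rigidly formatted action string could be dispatched to PyAutoGUI. The `click(x, y)` / `type("...")` grammar and the `execute_action` helper are illustrative assumptions, not GLADOS-1's documented output schema; only the `pyautogui.click` and `pyautogui.write` calls are the library's real API.

```python
import re

import pyautogui  # real library; click() and write() are its actual API

# Hypothetical action grammar for illustration: 'click(x, y)' and
# 'type("text")'. The model's real output format may differ.
ACTION_PATTERNS = {
    "click": re.compile(r"click\((\d+),\s*(\d+)\)"),
    "type": re.compile(r'type\("(.*)"\)'),
}

def execute_action(action: str) -> None:
    """Parse one model-emitted action string and dispatch it to PyAutoGUI."""
    action = action.strip()
    if m := ACTION_PATTERNS["click"].fullmatch(action):
        pyautogui.click(int(m.group(1)), int(m.group(2)))  # left-click at (x, y)
    elif m := ACTION_PATTERNS["type"].fullmatch(action):
        pyautogui.write(m.group(1))  # type the captured text
    else:
        raise ValueError(f"Unparseable action: {action!r}")

execute_action("click(640, 360)")
execute_action('type("hello world")')
```

A strict grammar like this is what makes compliance valuable: any output the parser cannot match is rejected outright rather than guessed at.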