Instructions to use OS-Copilot/OS-Atlas-Pro-7B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use OS-Copilot/OS-Atlas-Pro-7B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="OS-Copilot/OS-Atlas-Pro-7B")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
pipe(text=messages)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("OS-Copilot/OS-Atlas-Pro-7B")
model = AutoModelForImageTextToText.from_pretrained("OS-Copilot/OS-Atlas-Pro-7B")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use OS-Copilot/OS-Atlas-Pro-7B with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "OS-Copilot/OS-Atlas-Pro-7B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "OS-Copilot/OS-Atlas-Pro-7B",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```

Use Docker
```shell
docker model run hf.co/OS-Copilot/OS-Atlas-Pro-7B
```
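Since the vLLM server above exposes an OpenAI-compatible API, the same request can be made from Python instead of curl. This is a minimal sketch using only the standard library, assuming the server from the example is running on `localhost:8000`; the helper names are illustrative, not part of vLLM.

```python
import json
import urllib.request


def build_chat_payload(model: str, text: str, image_url: str) -> dict:
    """Build the same chat-completions body used in the curl example."""
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": text},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }


def send_chat(payload: dict, base_url: str = "http://localhost:8000") -> dict:
    """POST the payload to the OpenAI-compatible endpoint and parse the JSON reply."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Example (requires the server to be running):
# payload = build_chat_payload(
#     "OS-Copilot/OS-Atlas-Pro-7B",
#     "Describe this image in one sentence.",
#     "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg",
# )
# print(send_chat(payload)["choices"][0]["message"]["content"])
```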
- SGLang
How to use OS-Copilot/OS-Atlas-Pro-7B with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "OS-Copilot/OS-Atlas-Pro-7B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "OS-Copilot/OS-Atlas-Pro-7B",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "OS-Copilot/OS-Atlas-Pro-7B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "OS-Copilot/OS-Atlas-Pro-7B",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```

- Docker Model Runner
How to use OS-Copilot/OS-Atlas-Pro-7B with Docker Model Runner:
```shell
docker model run hf.co/OS-Copilot/OS-Atlas-Pro-7B
```
How is the agent supposed to work?
In your prompt you are writing:
{"type": "text", "text": "Task instruction: to allow the user to enter their first name\nHistory: null" },
Would you please clarify what "History" is, then? Should I append every executed instruction to the History, separated by commas (task1, task2, ..., taskn)? Or is it more like appending the assistant role's output to the overall messages list?
But then, there is no "planner", so how does the agent decide what the alternative trajectories might be? Additionally, I have observed that the agent's "thought" does not reflect a good reasoning process. Of course, it's just a 7B model, so I wasn't expecting strong agent behavior. Nice try anyway :)
Hello,
Just wondering how you got the model to work. The starter Transformers code seems to list invalid models and processors, e.g. /nas/shared/NLP_A100/wuzhenyu/ckpt/20240928_finetune_qwen_7b_3m_imgsiz_1024_bs_1024_lr_1e-7_wd_1e-3_mixture. I put OS-Atlas-Pro-7B in its place but am getting some issues. Could you please share your model configuration?
Hello @aswad546 ,
I just implemented a kind of ReAct loop, but as I've written, this model simply can't work very well as an agent due to the lack of reasoning in its training.
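For what it's worth, a ReAct-style loop like the one mentioned could look roughly like this. This is a hypothetical sketch, not the actual implementation: it assumes the "History" field holds the comma-separated executed actions (one plausible reading of the question above), and `call_model` is a stand-in for a real OS-Atlas inference call.

```python
# Hypothetical ReAct-style loop around the model. The prompt format mirrors the
# one quoted in the thread: "Task instruction: ...\nHistory: ...".

def build_prompt(task: str, history: list[str]) -> str:
    # "History: null" before the first step, then the executed actions so far.
    history_str = ", ".join(history) if history else "null"
    return f"Task instruction: {task}\nHistory: {history_str}"


def react_loop(task: str, call_model, max_steps: int = 5) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):
        prompt = build_prompt(task, history)
        # call_model would run OS-Atlas and parse out the predicted action,
        # e.g. 'CLICK <point>[[x, y]]</point>'. "COMPLETE" is an assumed stop token.
        action = call_model(prompt)
        if action == "COMPLETE":
            break
        history.append(action)  # append each executed action to the history
    return history
```

Whether the model was actually trained with this history format is exactly the open question in this thread.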
Thanks @Maverick17 for your response. Do you know of any good open-source agents that don't use paid models like GPT under the hood?
@aswad546 It depends on how difficult the task is for the agent. Currently, the only way to go is to build on top of GPT-4o plus a small language-action model.
Yes, I've noticed people heavily favour GPT for this use case. I was hoping to see some sort of open-source implementation, but I guess models like Llama 3.2 and Qwen VL (even for vision understanding) just aren't there yet. I'm curious to see how GPT o1 does on these tasks, since as per my research the accuracy for task completion is still quite low. I think it will improve with time, and maybe open-source models will be able to compete as well.
@aswad546 True, I've noticed the same thing. There's no way a 7B model, even one specialized (fine-tuned) for mobile task execution, could outperform the "big players" like GPT-4o or Anthropic's models paired with a small language-action model. This is because specialized models often lack strong reasoning abilities, whereas top-tier models from families like Qwen2-VL or InternVL2 excel at understanding both images and tasks more broadly.
I think we’re at least a year away from achieving 90% accuracy in these scenarios.
By the way, you mentioned the o1 model: does it even support vision, or not?
Oops, my bad. It seems the o1 model indeed does not support vision-based tasks yet. I think a year is still optimistic, since the tools currently available aren't reliable enough. But I am very interested to see where these automated agents go; they could have a lot of applications.