Instructions for using TheBlueObserver/Qwen2.5-Coder-0.5B-Instruct-MLX with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use TheBlueObserver/Qwen2.5-Coder-0.5B-Instruct-MLX with Transformers:
```
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="TheBlueObserver/Qwen2.5-Coder-0.5B-Instruct-MLX")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("TheBlueObserver/Qwen2.5-Coder-0.5B-Instruct-MLX")
model = AutoModelForCausalLM.from_pretrained("TheBlueObserver/Qwen2.5-Coder-0.5B-Instruct-MLX")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
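To stream tokens to stdout as they are generated instead of waiting for the full completion, transformers provides a TextStreamer utility; a minimal sketch reusing the model, tokenizer, and inputs from the block above:

```
from transformers import TextStreamer

# Print decoded tokens as they are generated, skipping the echoed prompt
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(**inputs, streamer=streamer, max_new_tokens=40)
```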
- MLX
How to use TheBlueObserver/Qwen2.5-Coder-0.5B-Instruct-MLX with MLX:
```
# Make sure mlx-lm is installed
# pip install --upgrade mlx-lm

# Generate text with mlx-lm
from mlx_lm import load, generate

model, tokenizer = load("TheBlueObserver/Qwen2.5-Coder-0.5B-Instruct-MLX")

prompt = "Write a story about Einstein"
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True
)

text = generate(model, tokenizer, prompt=prompt, verbose=True)
```
- Notebooks
- Google Colab
- Kaggle
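Both notebook environments can run the Transformers example above as-is; a minimal starter cell, assuming a fresh Colab or Kaggle runtime (the leading `!` is notebook shell syntax):

```
# Install dependencies inside the notebook
!pip install --upgrade transformers

# Same high-level pipeline call as in the Transformers section
from transformers import pipeline

pipe = pipeline("text-generation", model="TheBlueObserver/Qwen2.5-Coder-0.5B-Instruct-MLX")
print(pipe([{"role": "user", "content": "Who are you?"}]))
```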
- Local Apps
- LM Studio
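LM Studio is a desktop app, so the model is downloaded through its UI rather than code. Once loaded, LM Studio can serve an OpenAI-compatible API (port 1234 by default); a hedged sketch of calling it, noting that the model identifier LM Studio assigns may differ from the Hugging Face repo name:

```
curl -X POST "http://localhost:1234/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "TheBlueObserver/Qwen2.5-Coder-0.5B-Instruct-MLX",
        "messages": [{"role": "user", "content": "Hello"}]
    }'
```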
- vLLM
How to use TheBlueObserver/Qwen2.5-Coder-0.5B-Instruct-MLX with vLLM:
Install from pip and serve the model
```
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "TheBlueObserver/Qwen2.5-Coder-0.5B-Instruct-MLX"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "TheBlueObserver/Qwen2.5-Coder-0.5B-Instruct-MLX",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
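Because the endpoint is OpenAI-compatible, the official openai Python client can be used instead of curl; a minimal sketch, assuming the server above is listening on localhost:8000 (the api_key value is a placeholder, as vLLM does not check it by default):

```
from openai import OpenAI

# Point the client at the local vLLM server instead of api.openai.com
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="TheBlueObserver/Qwen2.5-Coder-0.5B-Instruct-MLX",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```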
Use Docker

```
docker model run hf.co/TheBlueObserver/Qwen2.5-Coder-0.5B-Instruct-MLX
```
- SGLang
How to use TheBlueObserver/Qwen2.5-Coder-0.5B-Instruct-MLX with SGLang:
Install from pip and serve the model
```
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "TheBlueObserver/Qwen2.5-Coder-0.5B-Instruct-MLX" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "TheBlueObserver/Qwen2.5-Coder-0.5B-Instruct-MLX",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
Use Docker images
```
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "TheBlueObserver/Qwen2.5-Coder-0.5B-Instruct-MLX" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "TheBlueObserver/Qwen2.5-Coder-0.5B-Instruct-MLX",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
- Pi
How to use TheBlueObserver/Qwen2.5-Coder-0.5B-Instruct-MLX with Pi:
Start the MLX server
```
# Install MLX LM:
uv tool install mlx-lm

# Start a local OpenAI-compatible server:
mlx_lm.server --model "TheBlueObserver/Qwen2.5-Coder-0.5B-Instruct-MLX"
```
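Before configuring Pi, it is worth confirming the server responds; a quick check, assuming mlx_lm.server is on its default port 8080 (the same address used in the Pi config below):

```
curl -X POST "http://localhost:8080/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "TheBlueObserver/Qwen2.5-Coder-0.5B-Instruct-MLX",
        "messages": [{"role": "user", "content": "Hello"}]
    }'
```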
Configure the model in Pi
```
# Install Pi:
npm install -g @mariozechner/pi-coding-agent

# Add to ~/.pi/agent/models.json:
{
  "providers": {
    "mlx-lm": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "TheBlueObserver/Qwen2.5-Coder-0.5B-Instruct-MLX" }
      ]
    }
  }
}
```
Run Pi
```
# Start Pi in your project directory:
pi
```
- Hermes Agent
How to use TheBlueObserver/Qwen2.5-Coder-0.5B-Instruct-MLX with Hermes Agent:
Start the MLX server
```
# Install MLX LM:
uv tool install mlx-lm

# Start a local OpenAI-compatible server:
mlx_lm.server --model "TheBlueObserver/Qwen2.5-Coder-0.5B-Instruct-MLX"
```
Configure Hermes
```
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default TheBlueObserver/Qwen2.5-Coder-0.5B-Instruct-MLX
```
Run Hermes
```
hermes
```
- MLX LM
How to use TheBlueObserver/Qwen2.5-Coder-0.5B-Instruct-MLX with MLX LM:
Generate or start a chat session
```
# Install MLX LM
uv tool install mlx-lm

# Interactive chat REPL
mlx_lm.chat --model "TheBlueObserver/Qwen2.5-Coder-0.5B-Instruct-MLX"
```
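For a one-shot completion without the interactive REPL, mlx-lm also provides a generate command; a minimal sketch (the prompt text is only an example):

```
# Single prompt, no chat session
mlx_lm.generate --model "TheBlueObserver/Qwen2.5-Coder-0.5B-Instruct-MLX" \
    --prompt "Write a Python function that reverses a string"
```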
Run an OpenAI-compatible server
```
# Install MLX LM
uv tool install mlx-lm

# Start the server
mlx_lm.server --model "TheBlueObserver/Qwen2.5-Coder-0.5B-Instruct-MLX"

# Calling the OpenAI-compatible server with curl (mlx_lm.server defaults to port 8080)
curl -X POST "http://localhost:8080/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "TheBlueObserver/Qwen2.5-Coder-0.5B-Instruct-MLX",
        "messages": [
            {"role": "user", "content": "Hello"}
        ]
    }'
```
- Docker Model Runner
How to use TheBlueObserver/Qwen2.5-Coder-0.5B-Instruct-MLX with Docker Model Runner:
```
docker model run hf.co/TheBlueObserver/Qwen2.5-Coder-0.5B-Instruct-MLX
```
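Docker Model Runner also exposes an OpenAI-compatible API; a hedged sketch, assuming host-side TCP access is enabled on the default port 12434 (the enable step below applies to Docker Desktop and may vary by version):

```
# Enable host TCP access to the Model Runner (Docker Desktop):
docker desktop enable model-runner --tcp 12434

# Call the OpenAI-compatible endpoint:
curl -X POST "http://localhost:12434/engines/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "hf.co/TheBlueObserver/Qwen2.5-Coder-0.5B-Instruct-MLX",
        "messages": [{"role": "user", "content": "Hello"}]
    }'
```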