Instructions for using TheBlueObserver/gemma-2-2b-it-MLX with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use TheBlueObserver/gemma-2-2b-it-MLX with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="TheBlueObserver/gemma-2-2b-it-MLX")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("TheBlueObserver/gemma-2-2b-it-MLX")
model = AutoModelForCausalLM.from_pretrained("TheBlueObserver/gemma-2-2b-it-MLX")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
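For more control over decoding, parameters can be passed to the pipeline call. A minimal sketch; the dtype/device settings and sampling values below are illustrative assumptions, not part of the original snippet:

```python
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="TheBlueObserver/gemma-2-2b-it-MLX",
    torch_dtype="auto",  # assumption: use the checkpoint's native dtype
    device_map="auto",   # assumption: requires accelerate; places weights on available devices
)
messages = [{"role": "user", "content": "Who are you?"}]
out = pipe(messages, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.9)
# With chat-style input, the pipeline returns the whole conversation;
# the last message is the assistant's reply.
print(out[0]["generated_text"][-1]["content"])
```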
- MLX
How to use TheBlueObserver/gemma-2-2b-it-MLX with MLX:
```python
# Make sure mlx-lm is installed
# pip install --upgrade mlx-lm

# Generate text with mlx-lm
from mlx_lm import load, generate

model, tokenizer = load("TheBlueObserver/gemma-2-2b-it-MLX")

prompt = "Write a story about Einstein"
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True
)

text = generate(model, tokenizer, prompt=prompt, verbose=True)
```
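For incremental output, recent mlx-lm releases also provide stream_generate. A minimal sketch, assuming a version whose response chunks expose a .text field:

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("TheBlueObserver/gemma-2-2b-it-MLX")
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Write a story about Einstein"}],
    add_generation_prompt=True,
)
# Tokens are printed as they are generated instead of all at once.
for chunk in stream_generate(model, tokenizer, prompt, max_tokens=256):
    print(chunk.text, end="", flush=True)
print()
```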
- Notebooks
- Google Colab
- Kaggle
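Either notebook environment only needs its dependencies installed in a first cell. A minimal setup sketch; the package list and the token step are assumptions, not from this page (the upstream gemma-2 weights are gated on the Hub):

```python
# First cell of a Colab/Kaggle notebook (assumes a GPU runtime is selected).
%pip install -q -U transformers accelerate

# Log in if the checkpoint requires an access token.
from huggingface_hub import login
login()  # prompts for a Hugging Face token interactively
```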
- Local Apps
- LM Studio
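LM Studio can pull the model from the Hub and serve it through its OpenAI-compatible local server. A minimal sketch; the default port 1234 and the model identifier LM Studio assigns after download are assumptions:

```python
from openai import OpenAI

# LM Studio's local server speaks the OpenAI API (default: http://localhost:1234/v1).
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
resp = client.chat.completions.create(
    model="TheBlueObserver/gemma-2-2b-it-MLX",  # assumption: use the id LM Studio lists
    messages=[{"role": "user", "content": "Who are you?"}],
)
print(resp.choices[0].message.content)
```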
- vLLM
How to use TheBlueObserver/gemma-2-2b-it-MLX with vLLM:
Install from pip and serve the model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "TheBlueObserver/gemma-2-2b-it-MLX"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "TheBlueObserver/gemma-2-2b-it-MLX",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
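Because the server speaks the OpenAI API, the official openai Python client works as well. A minimal sketch, assuming the server above is running on localhost:8000 (vLLM accepts any placeholder API key by default):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="TheBlueObserver/gemma-2-2b-it-MLX",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(resp.choices[0].message.content)
```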
Use Docker

```sh
docker model run hf.co/TheBlueObserver/gemma-2-2b-it-MLX
```
- SGLang
How to use TheBlueObserver/gemma-2-2b-it-MLX with SGLang:
Install from pip and serve the model
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "TheBlueObserver/gemma-2-2b-it-MLX" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "TheBlueObserver/gemma-2-2b-it-MLX",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
Use Docker images

```sh
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "TheBlueObserver/gemma-2-2b-it-MLX" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "TheBlueObserver/gemma-2-2b-it-MLX",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
- MLX LM
How to use TheBlueObserver/gemma-2-2b-it-MLX with MLX LM:
Generate or start a chat session
```sh
# Install MLX LM
uv tool install mlx-lm

# Interactive chat REPL
mlx_lm.chat --model "TheBlueObserver/gemma-2-2b-it-MLX"
```
Run an OpenAI-compatible server
```sh
# Install MLX LM
uv tool install mlx-lm

# Start the server
mlx_lm.server --model "TheBlueObserver/gemma-2-2b-it-MLX"

# Calling the OpenAI-compatible server with curl
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "TheBlueObserver/gemma-2-2b-it-MLX",
    "messages": [
      { "role": "user", "content": "Hello" }
    ]
  }'
```
- Docker Model Runner
How to use TheBlueObserver/gemma-2-2b-it-MLX with Docker Model Runner:
```sh
docker model run hf.co/TheBlueObserver/gemma-2-2b-it-MLX
```
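Docker Model Runner also exposes an OpenAI-compatible endpoint. A minimal sketch; the TCP port 12434, the enabling step, and the /engines/v1 path are assumptions that may vary by Docker Desktop version:

```python
from openai import OpenAI

# Assumption: host-side TCP access was enabled first, e.g.
#   docker desktop enable model-runner --tcp 12434
client = OpenAI(base_url="http://localhost:12434/engines/v1", api_key="none")
resp = client.chat.completions.create(
    model="hf.co/TheBlueObserver/gemma-2-2b-it-MLX",
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.choices[0].message.content)
```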
Upload folder using huggingface_hub
Multi-commit ID: 61e21251182015aff2bcd74c692c08c6836ec978a11de6ec8e31da97ec227b9b
Scheduled commits:
- Upload 1 file(s) totalling 5.2G (68608e2a37b2a7b5f1ead60268348d7c1ab6f72a93f0f4a558f162308ab580cf)
- Upload 7 file(s) totalling 38.7M (0faacf7155b17ce25cfa665c177a6e6e96f8a93543e3d799751477262cb601dc)
This is a PR opened using the huggingface_hub library in the context of a multi-commit. The PR can be commented on like a usual PR. However, be aware that manually updating the PR description, changing the PR status, or pushing new commits is not recommended, as it might corrupt the commit process. Learn more about multi-commits in this guide.
create_pr=False was passed, so the PR is merged automatically.
This is a comment posted using the huggingface_hub library in the context of a multi-commit. Learn more about multi-commits in this guide.
The multi-commit is now complete! You can ping the repo owner to review the changes. This PR can now be commented on or modified without risk of corrupting it.