Tags: Text Generation · Transformers · Safetensors · Chinese · English · minicpm4 · conversational · custom_code · 4-bit precision
Instructions to use openbmb/MiniCPM4-8B-mlx with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use openbmb/MiniCPM4-8B-mlx with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="openbmb/MiniCPM4-8B-mlx", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("openbmb/MiniCPM4-8B-mlx", trust_remote_code=True, dtype="auto")
```
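The direct-load snippet stops after loading the weights. A minimal generation sketch building on it, assuming the repository ships a chat template and treating `max_new_tokens=128` as an illustrative rather than prescribed setting:

```python
# Minimal sketch: generate a reply with the directly loaded model.
# Assumes the repo bundles a chat template; max_new_tokens is illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("openbmb/MiniCPM4-8B-mlx", trust_remote_code=True, dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("openbmb/MiniCPM4-8B-mlx", trust_remote_code=True)

messages = [{"role": "user", "content": "Who are you?"}]
# Render the chat into model-ready token ids using the bundled template
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output_ids = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, not the prompt
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```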
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use openbmb/MiniCPM4-8B-mlx with vLLM:
Install from pip and serve model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "openbmb/MiniCPM4-8B-mlx"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "openbmb/MiniCPM4-8B-mlx",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
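Because the vLLM server speaks the OpenAI-compatible API shown above, any OpenAI client can call it. A minimal sketch with the `openai` Python package, where the `"EMPTY"` API key is a placeholder convention for local, unauthenticated servers:

```python
# Minimal sketch: query the local vLLM server via its OpenAI-compatible API.
from openai import OpenAI

# vLLM does not check the API key by default; "EMPTY" is a placeholder.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="openbmb/MiniCPM4-8B-mlx",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```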
- SGLang
How to use openbmb/MiniCPM4-8B-mlx with SGLang:
Install from pip and serve model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "openbmb/MiniCPM4-8B-mlx" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "openbmb/MiniCPM4-8B-mlx",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Use Docker images
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "openbmb/MiniCPM4-8B-mlx" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "openbmb/MiniCPM4-8B-mlx",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
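The SGLang server exposes the same OpenAI-compatible endpoints on port 30000, so responses can also be streamed token by token. A sketch using the `openai` package, where the placeholder API key is an assumption for a local, unauthenticated server:

```python
# Minimal sketch: stream tokens from the local SGLang server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

stream = client.chat.completions.create(
    model="openbmb/MiniCPM4-8B-mlx",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    stream=True,  # receive incremental deltas instead of one final message
)
for chunk in stream:
    # Some chunks carry no content delta (e.g. role headers); skip those.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```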
- Docker Model Runner
How to use openbmb/MiniCPM4-8B-mlx with Docker Model Runner:
docker model run hf.co/openbmb/MiniCPM4-8B-mlx
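Docker Model Runner can also serve an OpenAI-compatible API once the model is pulled. The sketch below is hedged: it assumes host-side TCP access is enabled in your Docker settings and uses the commonly documented default port 12434 and `/engines/v1` path, both of which should be verified against your own setup:

```python
# Hedged sketch: call Docker Model Runner's OpenAI-compatible endpoint.
# Port 12434 and the /engines/v1 path are assumptions; check your Docker
# Model Runner configuration before relying on them.
import requests

response = requests.post(
    "http://localhost:12434/engines/v1/chat/completions",
    json={
        "model": "hf.co/openbmb/MiniCPM4-8B-mlx",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
    },
)
print(response.json()["choices"][0]["message"]["content"])
```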
Improve model card: Update paper link, add project page and relevant tags
#1
opened by nielsr
This PR enhances the model card for openbmb/MiniCPM4-8B-mlx by:
- Updating the paper link to the official Hugging Face Papers page: https://huggingface.co/papers/2506.07900. This replaces the direct link to the technical report PDF.
- Adding a link to the project page (Hugging Face collection): https://huggingface.co/collections/openbmb/minicpm4-6841ab29d180257e940baa9b.
- Adding `tool-use` and `long-context` tags to the metadata to improve discoverability, reflecting the model's capabilities in tool calling and long-text understanding as evidenced in the paper abstract and GitHub README.
These changes make the model card more complete, accurate, and aligned with Hugging Face Hub best practices. `library_name: transformers` has been retained, since evidence from `config.json` confirms compatibility with the library.