Instructions to use KORMo-Team/KORMo-10B-base with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use KORMo-Team/KORMo-10B-base with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="KORMo-Team/KORMo-10B-base", trust_remote_code=True)

# Load model directly
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("KORMo-Team/KORMo-10B-base", trust_remote_code=True, dtype="auto")
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use KORMo-Team/KORMo-10B-base with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "KORMo-Team/KORMo-10B-base"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "KORMo-Team/KORMo-10B-base",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker:

```shell
docker model run hf.co/KORMo-Team/KORMo-10B-base
```
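The curl call above can also be made from Python with only the standard library. This is a hedged sketch that assumes a vLLM server is already running on localhost:8000; the snippet only builds the request object and does not send it:

```python
# Build an OpenAI-compatible completion request for a local vLLM server.
# Assumes the server started above is listening on localhost:8000.
import json
import urllib.request

def build_completion_request(prompt: str,
                             model: str = "KORMo-Team/KORMo-10B-base",
                             base_url: str = "http://localhost:8000"):
    payload = {
        "model": model,
        "prompt": prompt,
        "max_tokens": 512,
        "temperature": 0.5,
    }
    return urllib.request.Request(
        f"{base_url}/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_completion_request("Once upon a time,")
# To actually send it (server required): urllib.request.urlopen(req)
print(req.full_url)      # http://localhost:8000/v1/completions
print(req.get_method())  # POST
```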
- SGLang
How to use KORMo-Team/KORMo-10B-base with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "KORMo-Team/KORMo-10B-base" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "KORMo-Team/KORMo-10B-base",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "KORMo-Team/KORMo-10B-base" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "KORMo-Team/KORMo-10B-base",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use KORMo-Team/KORMo-10B-base with Docker Model Runner:
```shell
docker model run hf.co/KORMo-Team/KORMo-10B-base
```
I propose modifying the KORMo modeling code to ensure compatibility with both Transformers 4.57.1 and 5.2.

In RotaryEmbedding, the inv_freq value is computed in __init__ and reused. In Transformers 5.2, the model is loaded on the meta device, so this computation does not take place. To compensate, 5.2 added logic to the _init_weights function that restores inv_freq via an else branch. Because KORMo uses a custom _init_weights function, this logic was never applied, and the RoPE values were not used during inference.
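For context, the inv_freq values in question follow the standard RoPE parameterization. A minimal pure-Python sketch of the default computation (illustrative only, not the KORMo implementation):

```python
# Illustrative only: the default RoPE inverse frequencies that __init__
# computes and that _init_weights must restore after a meta-device load.
# inv_freq[k] = 1 / base^(2k / dim), one entry per pair of head dimensions.
def compute_default_inv_freq(dim: int, base: float = 10000.0) -> list[float]:
    return [1.0 / (base ** (i / dim)) for i in range(0, dim, 2)]

inv_freq = compute_default_inv_freq(8)
print(len(inv_freq))  # 4: one frequency per dimension pair
print(inv_freq[0])    # 1.0: the fastest-rotating (highest-frequency) entry
```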
The following changes have been made to the code:
- Added logic to restore inv_freq in _init_weights of KORMoPreTrainedModel.
- Added the copy function used in _init_weights to the top of the file.
- Resolved an issue where the original_inv_freq key was not registered in _buffers: it is now set by cloning self.inv_freq, which previously returned None because it had not been computed. (RotaryEmbedding)
- Added the compute_default_rope_parameters function, which is missing in version 5.2. (RotaryEmbedding)
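The restore logic described above can be sketched as follows. All names here (RotaryEmbeddingStub, init_weights) are hypothetical stand-ins for illustration, not the actual KORMo or Transformers code:

```python
import copy

class RotaryEmbeddingStub:
    """Hypothetical stand-in for a RotaryEmbedding loaded on the meta
    device, where the __init__-time inv_freq computation was skipped."""
    def __init__(self):
        self.inv_freq = None
        self.original_inv_freq = None

def init_weights(module, dim=8, base=10000.0):
    # Restore inv_freq only when the meta-device load left it unset.
    if isinstance(module, RotaryEmbeddingStub) and module.inv_freq is None:
        # Default RoPE parameters: inv_freq[k] = 1 / base^(2k / dim).
        module.inv_freq = [1.0 / (base ** (i / dim)) for i in range(0, dim, 2)]
        # Clone before registering so original_inv_freq holds a real
        # value instead of None.
        module.original_inv_freq = copy.deepcopy(module.inv_freq)

rope = RotaryEmbeddingStub()
init_weights(rope)
```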
The code is now compatible with both version 4.57.1 and version 5.2.
Thank you.