Instructions for using LumiOpen/Viking-7B with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use LumiOpen/Viking-7B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="LumiOpen/Viking-7B")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("LumiOpen/Viking-7B")
model = AutoModelForCausalLM.from_pretrained("LumiOpen/Viking-7B")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use LumiOpen/Viking-7B with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "LumiOpen/Viking-7B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "LumiOpen/Viking-7B",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker:

```shell
docker model run hf.co/LumiOpen/Viking-7B
```
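The curl call above can also be made from Python. Below is a minimal sketch using only the standard library; the endpoint URL, model name, and sampling values mirror the curl example, and the helper names (`build_completion_request`, `complete`) are illustrative, not part of any official client:

```python
import json
import urllib.request

# Endpoint for the vLLM server started above (SGLang defaults to port 30000).
VLLM_URL = "http://localhost:8000/v1/completions"

def build_completion_request(model, prompt, max_tokens=512, temperature=0.5):
    """Assemble the JSON payload for an OpenAI-compatible /v1/completions call."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

def complete(prompt, url=VLLM_URL, model="LumiOpen/Viking-7B"):
    """POST a completion request; requires the server above to be running."""
    payload = build_completion_request(model, prompt)
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-style responses put the generated text under choices[0].text.
    return body["choices"][0]["text"]
```

With the server running, `complete("Once upon a time,")` returns the generated continuation as a string; the same function works against the SGLang server by passing its URL.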
- SGLang
How to use LumiOpen/Viking-7B with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "LumiOpen/Viking-7B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "LumiOpen/Viking-7B",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images:
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "LumiOpen/Viking-7B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "LumiOpen/Viking-7B",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use LumiOpen/Viking-7B with Docker Model Runner:
```shell
docker model run hf.co/LumiOpen/Viking-7B
```
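Viking-7B is a base model rather than an instruction-tuned chat model, so with any of the servers above you typically steer it with few-shot prompts: show a few input/output examples, then leave the final answer for the model to continue. A hypothetical helper for assembling such a prompt (the Q/A format here is illustrative, not something the model card prescribes):

```python
def build_few_shot_prompt(examples, query):
    """Join (question, answer) demonstration pairs and a final query into one
    prompt string. A base model continues the text, so the pattern set by the
    examples is what steers the completion."""
    lines = []
    for question, answer in examples:
        lines.append(f"Q: {question}\nA: {answer}")
    # Leave the final answer open for the model to fill in.
    lines.append(f"Q: {query}\nA:")
    return "\n\n".join(lines)

# Two demonstrations, then the question we actually want answered.
prompt = build_few_shot_prompt(
    [("What is the capital of Finland?", "Helsinki"),
     ("What is the capital of Sweden?", "Stockholm")],
    "What is the capital of Norway?",
)
```

The resulting string can be passed directly as the `prompt` field of the completion requests shown in the vLLM and SGLang sections above.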
Commit History

All commits by jonabur:

- minor update (6e93406)
- update for final checkpoint (21dbecf)
- 2000B (17b86f2)
- 1900B (2dd6c2a)
- 1800B (5d4195a)
- 1700B (75f7af1)
- 1600B (352f674)
- 1500B (887c840)
- 1400B (32370a5)
- 1300B (6c19b8d)
- add README.md (084dfbe)
- 1000B (c1e7853)
- update readme (673653b)
- 900B (bbfb1fe)
- 800B (480b578)
- 700B (999953f)
- 600B (8475766)
- 500B (648504c)
- 400B (53f8564)
- 300B (961b5d1)
- 200B (3300bd4)
- 100B (d211bc8)