Instructions for using starble-dev/Starlight-V3-12B with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use starble-dev/Starlight-V3-12B with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="starble-dev/Starlight-V3-12B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("starble-dev/Starlight-V3-12B")
model = AutoModelForCausalLM.from_pretrained("starble-dev/Starlight-V3-12B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
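If GPU memory is tight (this is a 12B-parameter model), one optional variant of the load above is to request bfloat16 weights and automatic device placement. This is a minimal sketch, assuming the accelerate package is installed for device_map="auto":

# Optional variant: load in bfloat16 with automatic device placement (requires `accelerate`).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("starble-dev/Starlight-V3-12B")
model = AutoModelForCausalLM.from_pretrained(
    "starble-dev/Starlight-V3-12B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)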
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use starble-dev/Starlight-V3-12B with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "starble-dev/Starlight-V3-12B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "starble-dev/Starlight-V3-12B",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'

Use Docker
docker model run hf.co/starble-dev/Starlight-V3-12B
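Once the `vllm serve` command above is running, any OpenAI-compatible client can call it. As a minimal sketch with the official openai Python package, assuming the default endpoint at localhost:8000:

# Call the local vLLM server through its OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # vLLM accepts any key by default
response = client.chat.completions.create(
    model="starble-dev/Starlight-V3-12B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)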
- SGLang
How to use starble-dev/Starlight-V3-12B with SGLang:
Install from pip and serve model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "starble-dev/Starlight-V3-12B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "starble-dev/Starlight-V3-12B",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
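The same server can also be driven from Python. The sketch below uses SGLang's frontend DSL against the endpoint started above; the exact frontend API may differ between SGLang releases, so treat this as illustrative rather than definitive:

import sglang as sgl

# Point the frontend at the SGLang server started above.
sgl.set_default_backend(sgl.RuntimeEndpoint("http://localhost:30000"))

@sgl.function
def ask(s, question):
    s += sgl.user(question)
    s += sgl.assistant(sgl.gen("answer", max_tokens=64))

state = ask.run(question="What is the capital of France?")
print(state["answer"])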
Use Docker images

docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "starble-dev/Starlight-V3-12B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "starble-dev/Starlight-V3-12B",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
- Docker Model Runner
How to use starble-dev/Starlight-V3-12B with Docker Model Runner:
docker model run hf.co/starble-dev/Starlight-V3-12B
General Use Sampling:
Mistral-Nemo-12B is very sensitive to the temperature sampler; try values near 0.3 at first or you will get some weird results. MistralAI mentions this in the Transformers section of their model card.
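For example, continuing from the Transformers snippet in the Libraries section above (with `model`, `tokenizer`, and `inputs` already prepared), a conservative sampling pass might look like this; the exact values are illustrative:

# Conservative sampling, close to the temperature suggested for base Mistral-Nemo.
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.3,
    max_new_tokens=128,
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))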
Best Samplers:
I found the best success using the following for Starlight-V3-12B (see the sketch after this list for one way to apply them):
- Temperature: 0.7-1.2 (additional stopping strings will be necessary as you increase the temperature)
- Top K: -1
- Min P: 0.05
- Rep Penalty: 1.03-1.1
Why Version 3?
The other versions produced results bad enough that I didn't upload them; the version number is just the internal version.
Goal
The idea is to keep the strengths of anthracite-org/magnum-12b-v2 while adding some of the creativity that seems to be lacking in that model. Mistral-Nemo by itself behaves less sporadically because of the low temperature it needs, but that makes it a bit repetitive, although it's still the best model I've used so far.
Results
I am not entirely pleased with the result of the merge, but it seems okay, though base anthracite-org/magnum-12b-v2 might just be better by itself. However, I'll still experiment with different merge methods. Leaking of the training data used in both models seems a bit more apparent at higher temperature values, especially the use of author notes in the system prompt. Generally I'd advise creating a stopping string for "```" to avoid generating pieces of the training data.
Original Models:
- UsernameJustAnother/Nemo-12B-Marlin-v5 (Thank you so much for your work ♥)
- anthracite-org/magnum-12b-v2 (Thank you so much for your work ♥)
GGUF Quants
- starble-dev/Starlight-V3-12B-GGUF
- mradermacher/Starlight-V3-12B-GGUF
- mradermacher/Starlight-V3-12B-i1-GGUF (imatrix)
Original Model Licenses & This Model License: Apache 2.0
Starlight-V3-12B
This is a merge of pre-trained language models created using mergekit.
Merge Details
Merge Method
This model was merged using the TIES merge method, with anthracite-org/magnum-12b-v2 as the base model.
Models Merged
The following models were included in the merge:
- UsernameJustAnother/Nemo-12B-Marlin-v5
- anthracite-org/magnum-12b-v2
Configuration
The following YAML configuration was used to produce this model:
models:
  - model: anthracite-org/magnum-12b-v2
    parameters:
      density: 0.3
      weight: 0.5
  - model: UsernameJustAnother/Nemo-12B-Marlin-v5
    parameters:
      density: 0.7
      weight: 0.5
merge_method: ties
base_model: anthracite-org/magnum-12b-v2
parameters:
  normalize: true
  int8_mask: true
dtype: bfloat16
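For reference, a config like this is normally run through mergekit's command-line entry point; the config filename and output directory below are placeholders:

# Assumes mergekit is installed (pip install mergekit); paths are placeholders.
mergekit-yaml starlight-v3.yml ./Starlight-V3-12B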