Text Generation
Transformers
Safetensors
English
Korean
solar_open
upstage
solar
Mixture of Experts
100b
llm
conversational
Instructions to use upstage/Solar-Open-100B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use upstage/Solar-Open-100B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="upstage/Solar-Open-100B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("upstage/Solar-Open-100B")
model = AutoModelForCausalLM.from_pretrained("upstage/Solar-Open-100B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
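A practical note before trying the snippets above in Colab or Kaggle: MoE routing keeps per-token compute low (12B active parameters), but all 102.6B total parameters listed on the model card still have to fit in memory. A rough back-of-envelope check, assuming bf16 weight storage:

```python
import math

# Rough memory footprint of Solar-Open-100B weights alone, assuming
# bf16 storage (2 bytes per parameter). KV cache and activations add more.
TOTAL_PARAMS = 102.6e9  # total parameter count from the model card
weight_gb = TOTAL_PARAMS * 2 / 1e9
print(round(weight_gb))  # ~205 GB just for the weights

# The weights alone span three 80 GB cards; runtime overhead is why the
# model card lists 4x A100 (80GB) as the minimum.
print(math.ceil(weight_gb / 80))  # 3
```

So free notebook tiers will not hold the model; the multi-GPU local-app options below are the realistic path.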
- Local Apps
- vLLM
How to use upstage/Solar-Open-100B with vLLM:
Install from pip and serve the model

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "upstage/Solar-Open-100B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "upstage/Solar-Open-100B",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker
```shell
docker model run hf.co/upstage/Solar-Open-100B
```
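The curl call above can also be reproduced from Python with only the standard library. A minimal sketch that builds (but does not send) the same OpenAI-compatible request; the send step is left as a comment because it assumes a vLLM server is already listening on localhost:8000:

```python
import json
from urllib import request

def build_chat_request(base_url: str, prompt: str) -> request.Request:
    """Build the same chat-completions request the curl example sends."""
    payload = {
        "model": "upstage/Solar-Open-100B",
        "messages": [{"role": "user", "content": prompt}],
    }
    return request.Request(
        base_url.rstrip("/") + "/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("http://localhost:8000", "What is the capital of France?")
print(req.full_url)  # http://localhost:8000/v1/chat/completions

# With the vLLM server running:
#   with request.urlopen(req) as resp:
#       body = json.loads(resp.read())
#       print(body["choices"][0]["message"]["content"])
```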
- SGLang
How to use upstage/Solar-Open-100B with SGLang:
Install from pip and serve the model

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "upstage/Solar-Open-100B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "upstage/Solar-Open-100B",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "upstage/Solar-Open-100B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "upstage/Solar-Open-100B",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

- Docker Model Runner
How to use upstage/Solar-Open-100B with Docker Model Runner:
```shell
docker model run hf.co/upstage/Solar-Open-100B
```
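All three server options above (vLLM, SGLang, Docker Model Runner) return the same OpenAI-compatible response shape, so extracting the assistant text is identical in each case. A sketch using a hand-written mock of that shape (not real server output; real responses also carry fields like `id` and `usage`):

```python
import json

# Mock of the OpenAI-compatible JSON the servers above return.
raw = '''{
  "model": "upstage/Solar-Open-100B",
  "choices": [
    {"index": 0,
     "finish_reason": "stop",
     "message": {"role": "assistant", "content": "The capital of France is Paris."}}
  ]
}'''

body = json.loads(raw)
# The generated text lives at choices[0].message.content.
answer = body["choices"][0]["message"]["content"]
print(answer)  # The capital of France is Paris.
```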
add solar open technical report. add benchmark results.
#19
by keunwooupstage - opened
- .gitattributes +1 -0
- README.md +44 -3
- solar-open-technical-report.pdf +3 -0
.gitattributes
CHANGED

```diff
@@ -17,6 +17,7 @@
 *.ot filter=lfs diff=lfs merge=lfs -text
 *.parquet filter=lfs diff=lfs merge=lfs -text
 *.pb filter=lfs diff=lfs merge=lfs -text
+*.pdf filter=lfs diff=lfs merge=lfs -text
 *.pickle filter=lfs diff=lfs merge=lfs -text
 *.pkl filter=lfs diff=lfs merge=lfs -text
 *.pt filter=lfs diff=lfs merge=lfs -text
```
README.md
CHANGED

```diff
@@ -28,7 +28,7 @@ tags:
 ## Model Overview
 
 * **Model Name:** Solar Open 100B
-* **Hugging Face ID:** Upstage/Solar-Open-100B
+* **Hugging Face ID:** `Upstage/Solar-Open-100B`
 * **Architecture:** Mixture-of-Experts (MoE)
 * **Total Parameters:** 102.6B
 * **Active Parameters:** 12B (per token)
@@ -36,10 +36,12 @@ tags:
 * **Pre-training Tokens:** 19.7 Trillion
 * **Context Length:** 128k
 * **Training Hardware:** NVIDIA B200 GPUs
-* **License:** **Solar-Apache License 2.0** (See [LICENSE](
+* **License:** **Solar-Apache License 2.0** (See [LICENSE](#license))
 * **Hardware Requirements:**
   * **Minimum:** 4x NVIDIA A100 (80GB)
 
+For more details, please refer to [Solar Open Technical Report](solar-open-technical-report.pdf).
+
 ## License
 This repository contains both model weights and code,
 which are licensed under different terms:
@@ -54,7 +56,46 @@ which are licensed under different terms:
 
 ## Performance
 
-
+### Korean Benchmarks
+
+| Category | Benchmarks | Solar-Open-100B (102B) | gpt-oss-120b (117B, high) | gpt-oss-120b (117B, medium) | GLM-4.5-Air (110B) |
+| :--- | :--- | :---: | :---: | :---: | :---: |
+| **General** | KMMLU | 73.0 | 72.7 | 70.3 | 70.2 |
+| | KMMLU-Pro | 64.0 | 62.6 | 60.5 | 60.7 |
+| | CLIcK | 78.9 | 77.2 | 72.9 | 48.3 |
+| | HAE-RAE v1.1 | 73.3 | 70.8 | 69.6 | 42.6 |
+| | KoBALT | 44.3 | 52.6 | 45.0 | 40.3 |
+| **Finance** | KBankMMLU (in-house) | 65.5 | 62.5 | 61.5 | 64.7 |
+| **Law** | KBL | 65.5 | 62.8 | 60.1 | 60.6 |
+| **Medical** | KorMedMCQA | 84.4 | 75.8 | 76.3 | 80.5 |
+| **Math** | Ko-AIME 2024 (in-house) | 80.3 | 90.0 | 76.7 | 80.0 |
+| | Ko-AIME 2025 (in-house) | 80.0 | 90.0 | 70.0 | 83.3 |
+| | HRM8K | 87.6 | 89.5 | 84.8 | 86.0 |
+| **IF** | Ko-IFEval | 87.5 | 93.2 | 86.7 | 79.5 |
+| **Preference** | Ko Arena Hard v2 (in-house) | 79.9 | 79.5 | 73.8 | 60.4 |
+
+
+### English Benchmarks
+
+| Category | Benchmarks | Solar-Open-100B (102B) | gpt-oss-120b (117B, high) | gpt-oss-120b (117B, medium) | GLM-4.5-Air (110B) |
+| :--- | :--- | :---: | :---: | :---: | :---: |
+| **General** | MMLU | 88.2 | 88.6 | 87.9 | 83.3 |
+| | MMLU-Pro | 80.4 | 80.4 | 78.6 | 81.4 |
+| | GPQA-Diamond | 68.1 | 78.0 | 69.4 | 75.8 |
+| | HLE (text only) | 10.5 | 18.4 | 7.23 | 10.8 |
+| **Math** | AIME 2024 | 91.7 | 94.3 | 77.7 | 88.7 |
+| | AIME 2025 | 84.3 | 91.7 | 75.0 | 82.7 |
+| | HMMT 2025 (Feb) | 73.3 | 80.0 | 63.3 | 66.7 |
+| | HMMT 2025 (Nov) | 80.0 | 73.3 | 66.7 | 70.0 |
+| **Code** | LiveCodeBench (v1–v6 cumul) | 74.2 | 89.9 | 82.8 | 71.9 |
+| **IF** | IFBench | 53.7 | 70.8 | 61.2 | 37.8 |
+| | IFEval | 88.0 | 91.4 | 86.5 | 86.5 |
+| **Preference** | Arena Hard v2 | 74.8 | 79.6 | 72.7 | 62.5 |
+| | Writing Bench | 7.51 | 6.61 | 6.55 | 7.40 |
+| **Agent** | Tau² Airline | 52.4 | 56.0 | 52.8 | 60.8 |
+| | Tau² Telecom | 55.6 | 57.7 | 47.4 | 28.1 |
+| | Tau² Retail | 59.3 | 76.5 | 68.4 | 71.9 |
+| **Long** | AA-LCR | 35.0 | 48.3 | 45.0 | 37.3 |
 
 ## Inference Quickstart
 
```
solar-open-technical-report.pdf
ADDED

```diff
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:05d6664d644f12a4eff2deaa1d061e377aa11fce81e679c2837a8bfdecf509cd
+size 366668
```