Akicou committed on
Commit b975fa7 · verified · 1 Parent(s): 341700f

Update README.md

Files changed (1)
  1. README.md +39 -148

README.md CHANGED
@@ -8,191 +8,82 @@ tags:
  - upstage
  - solar
  - moe
- - 100b
  - llm
  base_model:
  - upstage/Solar-Open-100B
  ---

  <p align="center">
- <img src="./Solar-Open-69B-REAP.png" alt="Solar Open Model" width="100%">
  </p>

- # **Solar Open to 69B Reap**

- **Solar Open** is Upstage's flagship **102B-parameter** and has been REAP'ed to a 69B Model using a modified Repository of Cerebras's REAP Repo on github

- ## Links to Quants:

- - [Solar Open 69B REAP GGUF](https://huggingface.co/Akicou/Solar-Open-69B-REAP-GGUF)
-
- # **Solar Open**
-
- **Solar Open** is Upstage's flagship **102B-parameter** large language model, trained **entirely from scratch** and released under the **Solar-Apache License 2.0** (see [LICENSE](#license) for details). As a **Mixture-of-Experts (MoE)** architecture, it delivers enterprise-grade performance in reasoning, instruction-following, and agentic capabilities—all while prioritizing transparency and customization for the open-source community.
-
- ## Highlights

- * **MoE Architecture (102B / 12B):** Built on a Mixture-of-Experts architecture with **102B total / 12B active parameters**. This design delivers the knowledge depth of a massive model with the inference speed and cost-efficiency of a much smaller model.
- * **Massive Training Scale:** Pre-trained on **19.7 trillion tokens**, ensuring broad knowledge coverage and robust reasoning capabilities across various domains.
-
- ## Model Overview
-
- * **Model Name:** Solar Open 100B
- * **Hugging Face ID:** Upstage/Solar-Open-100B
- * **Architecture:** Mixture-of-Experts (MoE)
- * **Total Parameters:** 102.6B
- * **Active Parameters:** 12B (per token)
- * **Experts:** 129 Experts (top 8 among 128 Routed + 1 Shared)
- * **Pre-training Tokens:** 19.7 Trillion
- * **Context Length:** 128k
- * **Training Hardware:** NVIDIA B200 GPUs
- * **License:** **Solar-Apache License 2.0** (See [LICENSE](./LICENSE))
- * **Hardware Requirements:**
-   * **Minimum:** 4x NVIDIA A100 (80GB)

- ## License
- This repository contains both model weights and code,
- which are licensed under different terms:

- 1. MODEL WEIGHTS (*.safetensors)
-    Licensed under **Solar-Apache License 2.0**
-    See: https://huggingface.co/upstage/Solar-Open-100B/blob/main/LICENSE

- 2. CODE (*.py, *.json, *.jinja files)
-    Licensed under **Apache License 2.0**
-    See: https://www.apache.org/licenses/LICENSE-2.0

- ## Performance

- TBA

- ## Inference Quickstart

- We recommend using the following generation parameters:

- ```
- temperature=0.8
- top_p=0.95
- top_k=50
- ```

  ### Transformers
-
- Install the required dependencies:
-
- ```bash
- pip install -U transformers kernels torch accelerate
- ```
-
- Run inference with the following code:

  ```python
  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer

- MODEL_ID = "upstage/Solar-Open-100B"

- # Load model and tokenizer
  tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
-
  model = AutoModelForCausalLM.from_pretrained(
-     pretrained_model_name_or_path=MODEL_ID,
      torch_dtype=torch.bfloat16,
      device_map="auto",
-     trust_remote_code=True,
- )
-
- # Prepare input
- messages = [{"role": "user", "content": "who are you?"}]
- inputs = tokenizer.apply_chat_template(
-     messages,
-     tokenize=True,
-     add_generation_prompt=True,
-     return_dict=True,
-     return_tensors="pt",
  )
- inputs = inputs.to(model.device)
-
- # Generate response
- generated_ids = model.generate(
-     **inputs,
-     max_new_tokens=4096,
-     temperature=0.8,
-     top_p=0.95,
-     top_k=50,
-     do_sample=True,
- )
- generated_text = tokenizer.decode(generated_ids[0][inputs.input_ids.shape[1] :])
- print(generated_text)
- ```

- ### vLLM
-
- #### Option 1: Using Docker (Highly Recommended)
- Docker is the **recommended deployment method** for running `Solar-Open-100B`.
-
- ```bash
- # For 8 GPUs
- docker run --gpus all \
-     --ipc=host \
-     -p 8000:8000 \
-     upstage/vllm-solar-open:latest \
-     upstage/Solar-Open-100B \
-     --trust-remote-code \
-     --enable-auto-tool-choice \
-     --tool-call-parser solar_open \
-     --reasoning-parser solar_open \
-     --logits-processors vllm.model_executor.models.parallel_tool_call_logits_processor:ParallelToolCallLogitsProcessor \
-     --logits-processors vllm.model_executor.models.solar_open_logits_processor:SolarOpenTemplateLogitsProcessor \
-     --tensor-parallel-size 8
- ```

- #### Option 2: Installing from Source
- For development, debugging, custom modifications or offline inference, Solar Open can also be run
- using a source installation of vLLM. We recommend using **[uv](https://docs.astral.sh/uv/)** for environment
- management and dependency resolution.

- Create and activate a Python virtual environment
- ```bash
- uv venv --python 3.12 --seed
- source .venv/bin/activate
  ```

- Install Solar Open's optimized vLLM
- ```bash
- VLLM_PRECOMPILED_WHEEL_LOCATION="https://github.com/vllm-project/vllm/releases/download/v0.12.0/vllm-0.12.0-cp38-abi3-manylinux_2_31_x86_64.whl" \
- VLLM_USE_PRECOMPILED=1 \
- uv pip install git+https://github.com/UpstageAI/vllm.git@v0.12.0-solar-open
- ```
-
- Start the vLLM server (For 8 GPUs)
- ```bash
- vllm serve upstage/Solar-Open-100B \
-     --trust-remote-code \
-     --enable-auto-tool-choice \
-     --tool-call-parser solar_open \
-     --reasoning-parser solar_open \
-     --logits-processors vllm.model_executor.models.parallel_tool_call_logits_processor:ParallelToolCallLogitsProcessor \
-     --logits-processors vllm.model_executor.models.solar_open_logits_processor:SolarOpenTemplateLogitsProcessor \
-     --tensor-parallel-size 8
- ```
-
- ## Public API Access
-
- The official API service for Solar Open is scheduled to launch publicly on **January**.
-
- * **Access:** Upstage Console (TBA)
- * **Documentation:** Upstage Console (TBA)
-
- ## Citation

- If you use Solar Open in your research, please cite:

- ```bibtex
- @misc{solar-open-2025,
-   title={Solar Open: Scaling Upstage's LLM Capabilities with MoE},
-   author={Upstage AI},
-   year={2025},
-   url={https://huggingface.co/Upstage/Solar-Open-100B}
- }
- ```
  - upstage
  - solar
  - moe
  - llm
+ - pruning
+ - reap
  base_model:
  - upstage/Solar-Open-100B
  ---

  <p align="center">
+ <img src="./Solar-Open-69B-REAP.png" alt="Solar Open 69B REAP" width="100%">
  </p>

+ # Solar Open to 69B REAP

+ This repository contains a pruned version of Upstage's **Solar-Open-100B**. Using **REAP (Router-weighted Expert Activation Pruning)**, the model has been compressed from 102B total parameters to a more efficient **69B-parameter** model.

+ ## Model Highlights

+ * **Pruning Method:** REAP (Router-weighted Expert Activation Pruning), based on the [Cerebras Research REAP implementation](https://github.com/CerebrasResearch/reap).
+ * **Calibration Data:** Pruned using ~100 samples from the `nickrosh/Evol-Instruct-Code-80k-v1` dataset.
+ * **Hardware Used:** 4x NVIDIA A100 SXM.
+ * **Custom Chat Template:** Includes a specialized chat template designed to limit reasoning length and prevent "non-stop yapping".

+ ## Links to Quants
+ - [Solar Open 69B REAP GGUF](https://huggingface.co/Akicou/Solar-Open-69B-REAP-GGUF)

+ ---

+ ## Technical Details & Pruning

+ This model was created with a modified clone of the Cerebras REAP repository. The goal was to reduce the memory and compute overhead of the 102B MoE architecture while maintaining high performance on core tasks.
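
The idea behind REAP-style pruning is to score each routed expert by how strongly the router actually activates it on calibration data, then drop the lowest-scoring experts. The sketch below is a simplified, hypothetical illustration of that saliency computation (the function names, data layout, and exact scoring formula are assumptions for exposition, not the Cerebras implementation):

```python
from collections import defaultdict

def reap_saliency(routing_records, num_experts):
    """Score each routed expert by the average of (router gate weight *
    expert output norm) over the calibration tokens routed to it.
    Experts the router rarely uses, or barely weights, score near zero."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for expert_id, gate_weight, output_norm in routing_records:
        totals[expert_id] += gate_weight * output_norm
        counts[expert_id] += 1
    return [totals[e] / counts[e] if counts[e] else 0.0 for e in range(num_experts)]

def prune_experts(saliency, keep_ratio):
    """Keep the top `keep_ratio` fraction of experts by saliency score."""
    keep = max(1, int(len(saliency) * keep_ratio))
    ranked = sorted(range(len(saliency)), key=lambda e: saliency[e], reverse=True)
    return sorted(ranked[:keep])

# Toy calibration trace: (expert_id, gate_weight, output_norm) per routed token.
records = [
    (0, 0.9, 2.0), (0, 0.8, 1.5),   # expert 0: heavily used
    (1, 0.1, 0.5),                  # expert 1: barely used
    (2, 0.6, 1.0), (2, 0.5, 1.2),   # expert 2: moderately used
    (3, 0.05, 0.2),                 # expert 3: nearly dead
]
scores = reap_saliency(records, num_experts=4)
kept = prune_experts(scores, keep_ratio=0.5)  # drop half the experts
print(kept)  # -> [0, 2]
```

In the real pipeline the surviving experts' weights are copied into a smaller MoE and the router is re-indexed accordingly; the ~100 Evol-Instruct-Code samples mentioned above play the role of `records` here.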
 
 

+ ### Acknowledgments
+ Special thanks to **[Barney Greenway](https://huggingface.co/McG-221)** for identifying the "infinite reasoning / non-stop yapping" issue found in earlier iterations.

+ ### Chat Template & Behavior
+ To address the long-winded reasoning issues, I implemented a custom `chat_template` that prioritizes concise outputs.
+ > [!IMPORTANT]
+ > While this model is more efficient for general instructions and coding, it is currently **not optimized for math**.
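
To make the goal of that template concrete, here is a hypothetical post-processing guard that caps runaway reasoning. The `<think>...</think>` markers and the word budget are assumptions for illustration; the actual fix in this repo lives in the chat template itself, not in Python:

```python
import re

def cap_reasoning(text, max_reason_words=50):
    """Clip an overlong reasoning block to a word budget; keep the answer intact.
    Assumes (hypothetically) that reasoning is delimited by <think>...</think>."""
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if not match:
        return text  # no reasoning block -> nothing to clip
    words = match.group(1).split()
    if len(words) <= max_reason_words:
        return text
    clipped = " ".join(words[:max_reason_words]) + " ..."
    return text[:match.start()] + "<think>" + clipped + "</think>" + text[match.end():]

raw = "<think>" + "step " * 200 + "</think>The answer is 42."
capped = cap_reasoning(raw, max_reason_words=10)
# The 200-word reasoning block is clipped to 10 words; the answer survives.
```

A template-level fix is preferable to a guard like this because it steers the model while it generates instead of throwing tokens away afterwards.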

+ ### Future Plans
+ Future REAP uploads to this profile will include specialized experts for:
+ * Advanced Mathematics
+ * Function-calling
+ * SWE-environment (Software Engineering)

+ ---

+ ## Usage

  ### Transformers
+ You will need `transformers`, `accelerate`, and `torch`.

  ```python
  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer

+ MODEL_ID = "Akicou/Solar-Open-69B-REAP"

  tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
  model = AutoModelForCausalLM.from_pretrained(
+     MODEL_ID,
      torch_dtype=torch.bfloat16,
      device_map="auto",
+     trust_remote_code=True,
  )

+ # The model uses a custom chat template to keep reasoning concise
+ messages = [{"role": "user", "content": "Explain how REAP pruning works."}]
+ inputs = tokenizer.apply_chat_template(
+     messages, add_generation_prompt=True, return_tensors="pt"
+ ).to(model.device)

+ outputs = model.generate(inputs, max_new_tokens=1024, temperature=0.7, do_sample=True)
+ # Decode only the newly generated tokens
+ print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))

  ```

+ ## License

+ The model weights are licensed under the **Solar-Apache License 2.0**, following the base model's licensing requirements from Upstage.