Solar-Open-100B-GGUF

Solar Open Model

Description

This repository contains GGUF format model files for Upstage's Solar-Open-100B.

Solar Open is a 102B-parameter Mixture-of-Experts (MoE) model trained from scratch on 19.7 trillion tokens. Despite its large total size, only 12B parameters are active during inference, combining the knowledge capacity of a large model with the generation speed of a much smaller one.
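To put the MoE efficiency in perspective, the fraction of parameters that participate in each forward pass can be computed directly from the figures above:

```python
total_params = 102e9   # total parameters (102B, from the model card)
active_params = 12e9   # active parameters per token (12B)

fraction = active_params / total_params
print(f"{fraction:.1%} of parameters are active per token")  # → 11.8%
```

In other words, each token is processed by roughly one eighth of the model, which is why generation speed is closer to that of a 12B dense model than a 100B one.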

Note: Please check the specific file sizes in the "Files and versions" tab.

How to Run (llama.cpp)

Recommended Parameters: Upstage suggests the following sampling settings for Solar Open:

  • Temperature: 0.8
  • Top-P: 0.95
  • Top-K: 50

CLI Example

./llama-cli -m Solar-Open-100B.Q4_K_M.gguf \
  -c 8192 \
  --temp 0.8 \
  --top-p 0.95 \
  --top-k 50 \
  -p "User: Who are you?\nAssistant:" \
  -cnv

Server Example

./llama-server -m Solar-Open-100B.Q4_K_M.gguf \
  --port 8080 \
  --host 0.0.0.0 \
  -c 8192 \
  -ngl 99

License

The model weights are licensed under the Solar-Apache License 2.0. Please review the full license terms here: LICENSE

Citation

If you use Solar Open in your research, please cite:

@misc{solar-open-2025,
    title={Solar Open: Scaling Upstage's LLM Capabilities with MoE},
    author={Upstage AI},
    year={2025},
    url={https://huggingface.co/Upstage/Solar-Open-100B}
}
Model Details

  • Format: GGUF
  • Model size: 103B params
  • Architecture: glm4moe
  • Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit