How to use from llama.cpp
Install from Homebrew (macOS/Linux)
brew install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf cortexso/simplescaling-s1
# Run inference directly in the terminal:
llama-cli -hf cortexso/simplescaling-s1
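llama-server exposes an OpenAI-compatible HTTP API, so any OpenAI-style client can talk to it. A minimal sketch with curl, assuming the server's default port 8080 (the prompt is illustrative):
# Query the local server's OpenAI-compatible chat endpoint (default port 8080):
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Explain test-time scaling in one sentence."}]}'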
Install from WinGet (Windows)
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf cortexso/simplescaling-s1
# Run inference directly in the terminal:
llama-cli -hf cortexso/simplescaling-s1
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf cortexso/simplescaling-s1
# Run inference directly in the terminal:
./llama-cli -hf cortexso/simplescaling-s1
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf cortexso/simplescaling-s1
# Run inference directly in the terminal:
./build/bin/llama-cli -hf cortexso/simplescaling-s1
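The built binaries accept the usual llama.cpp runtime flags; the values below are illustrative, so adjust them to your hardware:
# Serve with an explicit port, a 4096-token context, and full GPU offload
# (lower -ngl if VRAM is tight; omit it for CPU-only inference):
./build/bin/llama-server -hf cortexso/simplescaling-s1 --port 8080 -c 4096 -ngl 99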
Use Docker
docker model run hf.co/cortexso/simplescaling-s1
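This opens an interactive chat session by default; Docker Model Runner also accepts a one-shot prompt as a trailing argument, as in this sketch (the prompt is illustrative):
# One-shot generation instead of an interactive chat:
docker model run hf.co/cortexso/simplescaling-s1 "Summarize the idea of test-time scaling in one sentence."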

Overview

The simplescaling-s1 model is a refined version of simplescaling/s1-32B, designed to enhance scalability and streamline tasks in AI applications. It focuses on managing resource allocation efficiently while maintaining high performance across a range of workloads, and it is particularly effective for text generation, summarization, and conversational AI, where it balances speed and accuracy. Users can build scalable applications on it that process large datasets or generate content quickly, and it achieves strong results with reduced computational overhead, making it suitable for both research and production deployments.
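
To try these use cases from the terminal, llama-cli's -p flag runs a single prompt; a minimal summarization sketch, with an illustrative prompt:
# One-off summarization from the terminal; replace the bracketed text with your own:
llama-cli -hf cortexso/simplescaling-s1 -p "Summarize the following in two sentences: [text]"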

Variants

No  Variant               Cortex CLI command
1   simplescaling-s1-32b  cortex run simplescaling-s1:32b

Use it with Jan (UI)

  1. Install Jan using Quickstart
  2. Search for it in the Jan Model Hub:
    cortexso/simplescaling-s1
    

Use it with Cortex (CLI)

  1. Install Cortex using Quickstart
  2. Run the model with the command:
    cortex run simplescaling-s1
    

Credits

Original model: simplescaling/s1-32B

Model details

Format: GGUF
Model size: 33B params
Architecture: qwen2

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit

