Instructions for using OrionLLM/Nebula with libraries, inference providers, notebooks, and local apps.
- Libraries
- Transformers
How to use OrionLLM/Nebula with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="OrionLLM/Nebula")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("OrionLLM/Nebula")
model = AutoModelForCausalLM.from_pretrained("OrionLLM/Nebula")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use OrionLLM/Nebula with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "OrionLLM/Nebula"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "OrionLLM/Nebula",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```

Use Docker
```shell
docker model run hf.co/OrionLLM/Nebula
```
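The OpenAI-compatible endpoint can also be called from Python instead of curl. A minimal sketch, assuming the vLLM server started above is running on `localhost:8000` and the `openai` client package is installed:

```python
# Minimal sketch: call the locally served OpenAI-compatible endpoint.
# Assumes `vllm serve "OrionLLM/Nebula"` is running on localhost:8000.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="OrionLLM/Nebula",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```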
- SGLang
How to use OrionLLM/Nebula with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "OrionLLM/Nebula" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "OrionLLM/Nebula",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```

Use Docker images
```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "OrionLLM/Nebula" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "OrionLLM/Nebula",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```

- Docker Model Runner
How to use OrionLLM/Nebula with Docker Model Runner:
```shell
docker model run hf.co/OrionLLM/Nebula
```
---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
tags:
- nebula
- reasoning
- text-generation
- transformers
---
# Nebula
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/685ea8ff7b4139b6845ce395/YF0kEDYMGJhcM3Lbl2EOD.png" alt="Nebula logo" width="100">
</p>
## 1. Introduction
**Nebula** is a **320M-parameter** generalist Small Reasoning Model trained on **200B+ tokens**, designed for edge AI and on-device deployment.
Nebula aims to deliver an unusually strong balance of **memory**, **general reasoning**, **math**, and **retrieval-friendly behavior** for its size class, and to outperform many small models in a similar parameter range on non-code, industry-style benchmarks.
## 2. Reasoning style
Nebula’s reasoning traces use an intentionally compact style with **dense, short, frequently non-verbal sentences**, optimized for efficiency under limited model capacity.
Traces use the following stenographic notation integrated into special tokens:
### Logical markers
| Token | Meaning | Usage |
| ----- | ------- | ----- |
| **→** | derivation / implication | For very short causal/logical flow |
| **↺** | iterative return / refinement loop | For backtracking, reconsidering priors, RAG re-querying |
| **?** | uncertainty/questions to resolve | Can be appended to short expressions/words, not only interrogatives |
| **!/※** | insight/breakthroughs | Emphatic mark for knowledge discovery |
| **≈** | approximation/estimates | For intermediary hypothesis / uncertain preliminary statements |
| **∴** | therefore / final step | Use sparingly to mark stable conclusions |
### Uncertainty
| Token | Meaning | Usage |
| ----- | ------- | ----- |
| **●** | high confidence | well-supported empirical/theoretical ground; “anchor points.” |
| **◐** | medium/partial confidence | incomplete data; plausible but unverified links |
| **○** | low confidence | speculation, missing context, weak inference chain |
| **⚠** | bias/premise risk | domain mismatch, cultural assumptions, language-switch artifacts |
| **?maybe?** | soft speculation | marks tentative ideas, branches that might collapse later |
### Verification process
| Token | Meaning | Usage |
| ----- | ------- | ----- |
| **☐** | unverified hypothesis | raw claim, no cross-check yet |
| **☑** | intermediate verification | one source/argument supports it |
| **✓** | confirmed/validated | multiple independent supports (●-level) |
This reasoning format is designed to remain expressive while being lightweight enough for a small model.
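For illustration only (a hand-written sketch combining the markers above, not sampled model output), a trace for a simple factual question might read along these lines:

```
France capital? → Paris ● common knowledge
☐ alt: Lyon? ○ → ↺ recheck → admin capital = Paris ☑
∴ Paris ✓
```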
## 3. Fine-Tuning/RL
Nebula has been successfully fine-tuned for a variety of tasks.
Because Nebula is a reasoning-oriented model, it is expected to train well with reinforcement learning methods such as **GRPO**, both for **verifiable tasks** (with objective rewards) and for subjective tasks using an **LLM-as-a-judge**.
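As a rough sketch of what that could look like in practice (not an official recipe: the dataset, reward function, and hyperparameters below are illustrative assumptions), GRPO on a verifiable task with TRL might be set up along these lines:

```python
# Illustrative sketch only: GRPO on a toy verifiable task with TRL's GRPOTrainer.
# The dataset, reward, and hyperparameters are assumptions, not part of the Nebula release.
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# Tiny toy dataset with a verifiable target per prompt.
train_dataset = Dataset.from_dict({
    "prompt": [
        "What is 2 + 3? Answer with a number only.",
        "What is 7 + 5? Answer with a number only.",
    ],
    "answer": ["5", "12"],
})

def exact_match_reward(completions, answer, **kwargs):
    # Objective reward: 1.0 if the expected answer appears in the completion, else 0.0.
    return [1.0 if a in c else 0.0 for c, a in zip(completions, answer)]

trainer = GRPOTrainer(
    model="OrionLLM/Nebula",
    reward_funcs=exact_match_reward,
    args=GRPOConfig(
        output_dir="nebula-grpo",
        num_generations=4,
        per_device_train_batch_size=4,
        max_completion_length=128,
    ),
    train_dataset=train_dataset,
)
trainer.train()
```

For subjective tasks, the same setup can swap the rule-based reward for a score produced by a judge model, as noted above.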
## 4. Benchmarks
| Model | MMLU |
|------|-----:|
| **Nebula** | **40.0** |
| SmolLM2-360M | 35.8 |
| Gemma 3 270M (IT) | 26.5 |
| Granite-4.0-H-350M | 36.21 |