---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
tags:
- nebula
- reasoning
- text-generation
- transformers
---

# Nebula

<p align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/685ea8ff7b4139b6845ce395/YF0kEDYMGJhcM3Lbl2EOD.png" alt="Nebula logo" width="100">
</p>

## 1. Introduction
**Nebula** is a **320M-parameter** generalist Small Reasoning Model trained on **200B+ tokens**, designed for edge AI and on-device deployment.

For its size class, Nebula delivers an unusually strong balance of **memory**, **general reasoning**, **math**, and **retrieval-friendly behavior**, aiming to outperform many small models in a similar parameter range on non-code, industry-style benchmarks.

## 2. Reasoning style
Nebula’s reasoning traces use an intentionally compact style: **dense, short, often symbol-heavy sentences**, optimized for efficiency under limited model capacity.

Traces use the following shorthand notation, integrated into the tokenizer as special tokens:

### Logical markers

| Token | Meaning | Usage |
| ----- | ------- | ----- |
| **→** | derivation / implication | For very short causal/logical flow |
| **↺** | iterative return / refinement loop | For backtracking, reconsidering priors, RAG re-querying |
| **?** | uncertainty/questions to resolve | Can be appended to short expressions/words, not only interrogatives |
| **!/※** | insight/breakthroughs | Emphatic mark for knowledge discovery |
| **≈** | approximation/estimates | For intermediate hypotheses / uncertain preliminary statements |
| **∴** | therefore / final step | Use sparingly to mark stable conclusions |

### Uncertainty

| Token | Meaning | Usage |
| ----- | ------- | ----- |
| **●** | high confidence | well-supported empirical/theoretical ground; “anchor points.” |
| **◐** | medium/partial confidence | incomplete data; plausible but unverified links |
| **○** | low confidence | speculation, missing context, weak inference chain |
| **⚠** | bias/premise risk | domain mismatch, cultural assumptions, language-switch artifacts |
| **?maybe?** | soft speculation | marks tentative ideas, branches that might collapse later |

### Verification process

| Token | Meaning | Usage |
| ----- | ------- | ----- |
| **☐** | unverified hypothesis | raw claim, no cross-check yet |
| **☑** | intermediate verification | one source/argument supports it |
| **✓** | confirmed/validated | multiple independent supports (●-level) |

This reasoning format is designed to remain expressive while being lightweight enough for a small model.
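For downstream tooling, the marker tables above can be scanned programmatically. Below is a minimal sketch; the marker set comes from the tables, while the function names and the sample trace are illustrative only:

```python
# Minimal sketch: scan a Nebula reasoning trace for its shorthand markers.
# Marker set taken from the tables above; function names are illustrative.
from collections import Counter

# Confidence markers and their meanings, per the Uncertainty table.
CONFIDENCE = {"●": "high", "◐": "medium", "○": "low"}
# Logical-flow, bias, and verification markers from the other tables.
# Note: a bare "?" also matches ordinary question marks in this sketch.
MARKERS = set(CONFIDENCE) | {"→", "↺", "?", "!", "※", "≈", "∴", "⚠", "☐", "☑", "✓"}

def count_markers(trace: str) -> Counter:
    """Count every shorthand marker occurring in a reasoning trace."""
    return Counter(ch for ch in trace if ch in MARKERS)

def overall_confidence(trace: str) -> str:
    """Return the dominant confidence level, or 'unmarked' if none present."""
    counts = Counter(ch for ch in trace if ch in CONFIDENCE)
    if not counts:
        return "unmarked"
    return CONFIDENCE[counts.most_common(1)[0][0]]

trace = "light absorbed ● → glucose produced ● ≈ rough yield ◐ ∴ confirmed ✓"
print(count_markers(trace))
print(overall_confidence(trace))  # → high
```

A parser like this could, for example, gate retrieval re-queries on `↺` markers or surface `⚠` flags to the user.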

## 3. Fine-Tuning/RL
Nebula has been successfully fine-tuned for a variety of downstream tasks.

Because Nebula is a reasoning-oriented model, it is expected to train well with reinforcement learning methods such as **GRPO**, both for **verifiable tasks** (with objective rewards) and for subjective tasks using an **LLM-as-a-judge**.
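As a sketch of the verifiable-reward setup mentioned above, a reward function for math tasks might score a completion by its final `∴` conclusion. This is illustrative only, not Nebula's released training code, and the fallback behavior is an assumption:

```python
# Sketch of a verifiable reward for GRPO-style RL on math tasks.
# Illustrative only; not Nebula's released training code.
import re

def math_reward(completion: str, gold_answer: str) -> float:
    """Return 1.0 if the final answer matches the gold answer, else 0.0.

    Assumes the trace ends with '∴ <answer>' per the notation above;
    falls back to the last number anywhere in the completion.
    """
    # Prefer the text after the final '∴' conclusion marker.
    tail = completion.rsplit("∴", 1)[1] if "∴" in completion else completion
    numbers = re.findall(r"-?\d+(?:\.\d+)?", tail)
    if not numbers:
        return 0.0
    return 1.0 if numbers[-1] == gold_answer else 0.0

print(math_reward("2+2 → 4 ● ∴ 4", "4"))  # 1.0
print(math_reward("2+2 ≈ 5 ◐ ∴ 5", "4"))  # 0.0
```

Such a function plugs directly into GRPO-style trainers as a per-completion reward; subjective tasks would swap it for an LLM-as-a-judge score.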

## 4. Benchmarks

| Model | MMLU |
|------|-----:|
| **Nebula** | **40.0** |
| SmolLM2-360M | 35.8 |
| Gemma 3 270M (IT) | 26.5 |
| Granite-4.0-H-350M | 36.2 |