Update README.md
library_name: transformers
---

### 🚀 Introducing DeepBrainz-R1 — Reasoning-First Small Language Models for Agentic Systems

Today we’re releasing **DeepBrainz-R1**, a family of **reasoning-first Small Language Models (SLMs)** designed for **agentic AI systems in real-world production**.

Agentic systems don’t ask once — they reason repeatedly. Tool calls, verification loops, schema-constrained outputs, retries, and long-context planning fundamentally change the economics and reliability requirements of language models. LLM-only stacks struggle under this load.
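That repeated reason/verify/retry shape can be sketched in a few lines. This is an illustrative pattern only, with a stubbed `call_model` standing in for whatever inference backend you use (vLLM, SGLang, or local); it is not a DeepBrainz-R1 API:

```python
import json

def call_model(prompt, attempt):
    # Stub standing in for any inference backend. It returns malformed
    # output on the first try so the retry path below is exercised.
    if attempt == 0:
        return "Sure! Here is the answer: 42"  # not valid JSON
    return json.dumps({"answer": 42, "confidence": 0.9})

def constrained_generate(prompt, required_keys=("answer",), max_retries=3):
    """One step of an agentic loop: generate, verify against a schema,
    retry on failure. Each retry is another full model call, which is
    what drives agentic inference costs."""
    for attempt in range(max_retries):
        raw = call_model(prompt, attempt)
        try:
            parsed = json.loads(raw)          # verification step
        except json.JSONDecodeError:
            continue                          # retry on malformed output
        if all(key in parsed for key in required_keys):
            return parsed
    raise RuntimeError("model never produced schema-valid output")

result = constrained_generate("Return JSON with an 'answer' field.")
print(result["answer"])  # prints 42
```

Every extra retry multiplies cost and latency, which is why low-variance, schema-reliable generation matters more for agents than one-shot chat quality.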
DeepBrainz-R1 is built from the opposite premise:

> **Reasoning is a trained behavior, not an emergent side-effect of scale.**

#### What DeepBrainz-R1 is designed for

* **Repeatable multi-step reasoning**, not one-shot chat
* **Agent-compatible behavior**: tool use, structured outputs, low-variance reasoning
* **Production economics**: lower latency, predictable cost, deployability
* **Inference-time scalability**: compute where needed, not everywhere

#### The R1 lineup

* **[DeepBrainz-R1-4B](https://huggingface.co/DeepBrainz/DeepBrainz-R1-4B)** — *Flagship production model*
  Best starting point for reliable agentic systems.
* **[DeepBrainz-R1-2B](https://huggingface.co/DeepBrainz/DeepBrainz-R1-2B)** — *Balanced production model*
  Strong reasoning with lower cost and latency.
* **[DeepBrainz-R1-0.6B-v2](https://huggingface.co/DeepBrainz/DeepBrainz-R1-0.6B-v2)** — *Canonical small model*
  Cost-efficient baseline for small-model agent workloads.
* **[Long-context variants (16K / 40K)](https://huggingface.co/collections/DeepBrainz/deepbrainz-r1-reasoning-first-slms-for-agentic-systems)** — early and experimental
* **[Research checkpoints](https://huggingface.co/collections/DeepBrainz/deepbrainz-r1-research-checkpoints)** — raw artifacts for ablation and evaluation
* **[Community quantizations (GGUF, low-bit)](https://huggingface.co/collections/DeepBrainz/deepbrainz-r1-community-quantizations-gguf-and-low-bit)** — community-maintained, not officially supported

We publish **supported releases, experimental variants, and research checkpoints separately** to keep expectations clear for builders, enterprises, and researchers.

#### Why now

2026 is the year agentic AI stops being a demo and starts becoming infrastructure. Infrastructure cannot rely on LLM-only economics or LLM-only reliability.
**Reasoning-first SLMs are the only viable path to scaling agents sustainably.**

— **DeepBrainz AI & Labs**

---

# DeepBrainz-R1-2B

**DeepBrainz-R1-2B** is a compact, high-performance reasoning model engineered by **DeepBrainz AI & Labs**. It is part of the **DeepBrainz-R1 Series**, designed to deliver frontier-class reasoning capabilities in cost-effective parameter sizes.

- **Context Window:** 32,768 tokens
- **Specialization:** STEM Reasoning, Logic, Code Analysis
- **Architecture:** Optimized Dense Transformer
- **Deployment:** Ready for vLLM, SGLang, and local inference

---
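As a back-of-the-envelope illustration of what the 32,768-token window means for a tool-using agent, the sketch below budgets how many reasoning rounds fit in one session. The context-window figure comes from this card; every per-round token count is an invented assumption for illustration, not a measurement of DeepBrainz-R1-2B:

```python
# Context window stated on this model card.
CONTEXT_WINDOW = 32_768

# Illustrative, invented token budgets for one agent session.
SYSTEM_PROMPT = 1_200   # instructions + tool schemas
FINAL_ANSWER = 800      # reserved for the final response
PER_ROUND = 900         # reasoning step + tool call + tool result

# How many reason -> tool-call -> verify rounds fit in the window.
rounds = (CONTEXT_WINDOW - SYSTEM_PROMPT - FINAL_ANSWER) // PER_ROUND
print(rounds)  # prints 34
```

Under these (assumed) budgets a few dozen full tool-use rounds fit in a single context, which is the kind of headroom multi-step agent plans need without resorting to aggressive history truncation.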