---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- deepbrainz
- reasoning
- mathematics
- code
- enterprise
- 2b
- long-context
library_name: transformers
---
### Introducing DeepBrainz-R1: Reasoning-First Small Language Models for Agentic Systems
Today we're releasing **DeepBrainz-R1**, a family of **reasoning-first Small Language Models (SLMs)** designed for **agentic AI systems in real-world production**.
Agentic systems don't ask once; they reason repeatedly. Tool calls, verification loops, schema-constrained outputs, retries, and long-context planning fundamentally change the economics and reliability requirements of language models. LLM-only stacks struggle under this load.
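To make that load pattern concrete, here is a minimal sketch of one step of such a loop; the retry policy and the JSON-only convention are illustrative assumptions, not a DeepBrainz-R1 interface. Note that every retry is a full additional inference call, which is exactly where LLM-only economics break down.

```python
import json

def run_agent_step(generate, prompt, max_retries=3):
    """One hypothetical agent step: generate, validate, retry on failure.

    `generate` is a placeholder for any prompt -> text callable
    (a local pipeline, an API client, etc.).
    """
    for _ in range(max_retries):
        raw = generate(prompt)
        try:
            # Schema-constrained output check: here, simply "is it valid JSON?".
            return json.loads(raw)
        except json.JSONDecodeError:
            # Feed the failure back so the model can repair its own output.
            prompt = f"{prompt}\n\nYour last output was not valid JSON:\n{raw}\nReturn valid JSON only."
    raise RuntimeError(f"no valid output after {max_retries} attempts")
```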
DeepBrainz-R1 is built from the opposite premise:
> **Reasoning is a trained behavior, not an emergent side-effect of scale.**
#### What DeepBrainz-R1 is designed for
* **Repeatable multi-step reasoning**, not one-shot chat
* **Agent-compatible behavior**: tool use, structured outputs, low-variance reasoning (sketched after this list)
* **Production economics**: lower latency, predictable cost, deployability
* **Inference-time scalability**: compute where needed, not everywhere
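As a concrete illustration of the agent-compatible behavior above, the sketch below shows a minimal tool-call dispatch; the tool registry and call format are hypothetical, not a documented DeepBrainz-R1 protocol.

```python
import json

# Hypothetical tool registry; real deployments wire these to live systems.
TOOLS = {
    "search_docs": lambda query: f"top hits for {query!r}",
    "run_tests": lambda target: f"tests passed for {target!r}",
}

def dispatch_tool_call(model_output: str) -> str:
    """Execute a model-emitted call like {"tool": "search_docs", "args": {...}}."""
    call = json.loads(model_output)
    return TOOLS[call["tool"]](**call["args"])

print(dispatch_tool_call('{"tool": "search_docs", "args": {"query": "context window"}}'))
```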
#### The R1 lineup
* **[DeepBrainz-R1-4B](https://huggingface.co/DeepBrainz/DeepBrainz-R1-4B)** – *Flagship production model*
Best starting point for reliable agentic systems.
* **[DeepBrainz-R1-2B](https://huggingface.co/DeepBrainz/DeepBrainz-R1-2B)** – *Balanced production model*
Strong reasoning with lower cost and latency.
* **[DeepBrainz-R1-0.6B-v2](https://huggingface.co/DeepBrainz/DeepBrainz-R1-0.6B-v2)** – *Canonical small model*
Cost-efficient baseline for small-model agent workloads.
* **[Long-context variants (16K / 40K)](https://huggingface.co/collections/DeepBrainz/deepbrainz-r1-reasoning-first-slms-for-agentic-systems)** – early and experimental
* **[Research checkpoints](https://huggingface.co/collections/DeepBrainz/deepbrainz-r1-research-checkpoints)** – raw artifacts for ablation and evaluation
* **[Community quantizations (GGUF, low-bit)](https://huggingface.co/collections/DeepBrainz/deepbrainz-r1-community-quantizations-gguf-and-low-bit)** – community-maintained, not officially supported
We publish **supported releases, experimental variants, and research checkpoints separately** to keep expectations clear for builders, enterprises, and researchers.
#### Why now
2026 is the year agentic AI stops being a demo and starts becoming infrastructure. Infrastructure cannot rely on LLM-only economics or LLM-only reliability.
**Reasoning-first SLMs are the only viable path to scaling agents sustainably.**
– **DeepBrainz AI & Labs**
---
# DeepBrainz-R1-2B
**DeepBrainz-R1-2B** is a compact, high-performance reasoning model engineered by **DeepBrainz AI & Labs**. It is part of the **DeepBrainz-R1 Series**, designed to deliver frontier-class reasoning at cost-effective parameter scales.
This variant features a **32,768-token context window**, optimized for processing medium-to-long documents and codebases.
---
## Model Highlights
- **Parameter Count:** ~2B
- **Context Window:** 32,768 tokens
- **Specialization:** STEM Reasoning, Logic, Code Analysis
- **Architecture:** Optimized Dense Transformer
- **Deployment:** Ready for vLLM, SGLang, and local inference
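
For serving, a minimal offline vLLM sketch looks like the following; the sampling values are illustrative defaults, not tuned recommendations, and `max_model_len` simply mirrors the 32,768-token window above.

```python
from vllm import LLM, SamplingParams

# Illustrative settings; max_model_len mirrors the 32,768-token window.
llm = LLM(model="DeepBrainz/DeepBrainz-R1-2B", max_model_len=32768)
params = SamplingParams(temperature=0.6, max_tokens=512)

outputs = llm.generate(["Prove that the sum of two even integers is even."], params)
print(outputs[0].outputs[0].text)
```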
---
## Intended Use Cases
- **Agentic Workflows:** Reliable execution of multi-step planning tasks.
- **Math & Science:** Solving complex word problems and equations.
- **Code Generation:** Writing and debugging algorithms.
- **Structured Data Extraction:** Parsing and reasoning over unstructured text (see the sketch after the note below).
> **Note:** This model is post-trained for reasoning and agentic reliability.
> For conversational chat, additional instruction tuning is recommended.
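
For the structured-extraction use case, one workable pattern is to pin the schema in the prompt and parse the first JSON object out of the completion; the schema and example text below are invented for illustration.

```python
import json

# Invented schema and example text, for illustration only.
prompt = (
    'Extract {"company": str, "quarter": str, "revenue_usd": float} '
    "as JSON from the text below. Respond with JSON only.\n\n"
    "Text: Acme Corp reported $12.5M revenue in Q3 2025.\nJSON:"
)

def parse_extraction(model_text: str) -> dict:
    """Parse the first {...} span in the model output."""
    start, end = model_text.find("{"), model_text.rfind("}") + 1
    return json.loads(model_text[start:end])
```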
---
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DeepBrainz/DeepBrainz-R1-2B"

# Load the tokenizer and weights; device_map="auto" places the model on
# the available GPU(s) and falls back to CPU.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="bfloat16",
    device_map="auto",
)

# Give the model an actual algorithm to analyze.
prompt = (
    "Analyze the time complexity of the following algorithm:\n"
    "def search(xs, t):\n"
    "    for i, x in enumerate(xs):\n"
    "        if x == t:\n"
    "            return i\n"
    "    return -1"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
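
Continuing from the snippet above: if the checkpoint ships a chat template (check `tokenizer.chat_template`), routing prompts through it is generally more reliable than raw strings. This is the generic Transformers pattern, not a documented requirement of this model.

```python
# Only applicable if the tokenizer ships a chat template.
messages = [{"role": "user", "content": "Analyze the time complexity of binary search."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```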
---
## Technical Summary
The model has undergone **post-training** to enhance reasoning quality, stability, and agentic reliability.
*Detailed post-training recipes and dataset compositions are not fully disclosed.*
---
## License
This model is released under the **Apache 2.0** license, allowing for academic and commercial use.
---
<div align="center">
<b>DeepBrainz AI & Labs</b><br>
<i>Advancing General Intelligence through Scalable Reasoning</i>
</div>