Update README.md

README.md CHANGED

@@ -11,9 +11,9 @@ tags:
 - small-model
 ---
 
-# DeepBrainz
+# DeepBrainz-R1-0.6B
 
-**DeepBrainz
+**DeepBrainz-R1-0.6B** is a compact, reasoning-focused language model designed for efficient problem-solving in **mathematics, logic, and code-related tasks**.
 
 Despite its small size, R1-0.6B emphasizes **structured reasoning**, **stepwise problem decomposition**, and **stable generation behavior**, making it well-suited for research, education, and lightweight deployment scenarios.
 
@@ -45,7 +45,7 @@ It is **not** intended as a general-purpose chat replacement for larger frontier
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-model_id = "DeepBrainz/
+model_id = "DeepBrainz/DeepBrainz-R1-0.6B"
 
 tokenizer = AutoTokenizer.from_pretrained(model_id)
 model = AutoModelForCausalLM.from_pretrained(model_id)