ArunkumarVR committed on
Commit 55ced14 · verified · 1 Parent(s): bb75a7c

Update README.md

Files changed (1): README.md +5 -5
README.md CHANGED
@@ -11,11 +11,11 @@ tags:
 - small-model
 ---
 
-# DeepBrainz-R1-0.6B
+# DeepBrainz-R1-0.6B-Exp
 
-**DeepBrainz-R1-0.6B** is a compact, reasoning-focused language model designed for efficient problem-solving in **mathematics, logic, and code-related tasks**.
+**DeepBrainz-R1-0.6B-Exp** is a compact, reasoning-focused language model designed for efficient problem-solving in **mathematics, logic, and code-related tasks**.
 
-Despite its small size, R1-0.6B emphasizes **structured reasoning**, **stepwise problem decomposition**, and **stable generation behavior**, making it well-suited for research, education, and lightweight deployment scenarios.
+Despite its small size, R1-0.6B-Exp emphasizes experimental **structured reasoning**, **stepwise problem decomposition**, and **stable generation behavior**, making it well-suited for research, education, and lightweight deployment scenarios.
 
 ---
 
@@ -45,7 +45,7 @@ It is **not** intended as a general-purpose chat replacement for larger frontier
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-model_id = "DeepBrainz/DeepBrainz-R1-0.6B"
+model_id = "DeepBrainz/DeepBrainz-R1-0.6B-Exp"
 
 tokenizer = AutoTokenizer.from_pretrained(model_id)
 model = AutoModelForCausalLM.from_pretrained(model_id)
@@ -68,7 +68,7 @@ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
 
 ## Training & Alignment
 
-R1-0.6B was trained using modern post-training techniques emphasizing reasoning quality and generation stability.
+R1-0.6B-Exp was trained using modern post-training techniques emphasizing reasoning quality and generation stability.
 Specific training details are intentionally abstracted in this public-facing release.
 
 ---
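
The README's usage section is only partially visible in these hunks: one hunk shows the updated `model_id` and the load calls, and the `@@ -68` hunk header shows the decoding line `print(tokenizer.decode(outputs[0], skip_special_tokens=True))`. For context, a minimal end-to-end sketch of that usage after the rename; the prompt template and generation parameters are illustrative assumptions, not part of the diff:

```python
model_id = "DeepBrainz/DeepBrainz-R1-0.6B-Exp"  # as updated in this commit


def build_prompt(question: str) -> str:
    """Wrap a question in a simple instruction template (assumed format,
    not shown in the diff)."""
    return f"Question: {question}\nAnswer:"


def main() -> None:
    # Imported lazily so the heavy model download only happens when run
    # as a script.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    # Tokenize, generate, and decode (generation settings are assumptions).
    inputs = tokenizer(build_prompt("What is 12 * 7?"), return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))


if __name__ == "__main__":
    main()
```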