---
license: apache-2.0
---
# CustomThinker-0-8B: _Reasoning the way you want it to_
> If you want to contact me about this experiment, need consulting or a custom AI solution, or anything else, you can find my email at https://gr.bio
CustomThinker-0-8B is the first open-source LLM that allows users to explicitly control its reasoning methodology through direct prompting instructions.
This experimental model is fine-tuned from Qwen3-8B on a specialized dataset that makes the model's thinking style directly controllable through system prompts, much as you would instruct an LLM to adopt a specific persona or output format.
The model supports reasoning control through a structured system prompt format:
```
Personality Instructions:
{Standard system prompt defining the AI's personality and role}
Reasoning Instructions:
{Specific instructions controlling how the model thinks: e.g., "Think using bullet points and short sentences to simulate thoughts and emoticons to simulate emotions"}
```
While the model is primarily trained to adapt its thinking to the system prompt's instructions, it can also respond to reasoning-style changes requested mid-conversation, though this is not consistently reliable.
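As a rough illustration, a mid-conversation style change is just another user turn; the conversation below is a hypothetical sketch, and per the note above the model may not honor the switch reliably:
```python
# Hypothetical conversation: the reasoning style is set in the system
# prompt, then a later user turn asks the model to switch styles mid-chat.
messages = [
    {"role": "system", "content": (
        "Personality Instructions:\n"
        "You are an AI assistant named Art developed by AGI-0.\n"
        "Reasoning Instructions:\n"
        "Think using bullet points and short sentences."
    )},
    {"role": "user", "content": "What is 17 * 24?"},
    {"role": "assistant", "content": "17 * 24 = 408."},
    # Mid-conversation request for a different reasoning style; this may
    # or may not be followed consistently.
    {"role": "user", "content": "From now on, think in rap lyrics. What is 23 * 19?"},
]
```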
Some of the benefits that this strategy enables:
- Direct control over AI reasoning patterns and output structure
- Enhanced experimentation with reasoning models and potential for RL strategies that optimize thinking styles
- Improved safety through explicit control over the reasoning process
- Customizable thinking approaches tailored to specific tasks
While prompt engineering has long been known to improve LLM performance, CustomThinker represents the first approach that gives users direct control over the internal reasoning process of LLMs. This capability allows models to be configured with optimal thinking patterns for different use cases.
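For instance, one could keep a small library of reasoning instructions keyed by task and pick one per request. This is a hypothetical sketch; none of these strings are prescribed by the model, they are just plausible instructions:
```python
# Hypothetical per-task reasoning styles to slot into the
# "Reasoning Instructions:" section of the system prompt.
REASONING_STYLES = {
    "math": "Think in numbered steps and double-check each calculation.",
    "coding": "Think in pseudocode first, then refine it into real code.",
    "creative": "Think in free-associative fragments and vivid imagery.",
}
```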
_If you like this, please consider leaving a like on the repository—it would help us, and if you can, also leave feedback in the community section._
## 🎯 See Art in Action
### Example: Thinking in Rap Lyrics
**System Prompt:**
```
Personality Instructions:
You are an AI assistant named Art developed by AGI-0.
Reasoning Instructions:
Think in rap lyrics and rhymes until you have a definitive answer, be explicit.
```
**User Prompt:**
```
What is the number of r in raspberry times the fourth prime number and then squared?
```
**Reasoning Output:**
```
(Verse 1)
Yo, let’s crunch some math, I’m feeling analytical,
Breaking it down, like a pro, no pressure, no delay.
The question’s clear, but let’s parse it like a tight rhyme,
“What’s the number of R in raspberry times the fourth prime, then squared?”
First up, the fruit—raspberry, sweet and tart,
Count the letter R, that’s our first math party start.
Rasp-berry—let’s spell it out, take it slow,
R-A-S-P-B-E-R-R-Y, now I’m counting every note.
First letter’s an R, that’s one,
Then later in there, another one—position two.
Wait, hold on, let’s double-check the spelling,
Is that two or three? Let me rap it step by step:
R is the start, that’s one,
Then we get to the end, the last syllable—
“Berry” at the end, right? So “rasp” is first,
Then “berry”—so the letters go: R, A, S, P, B, E, R, R, Y.
Counting the R’s—first at the front,
Second in the middle after E, then two more at the end.
So that’s one, two, three, four? Wait—let me check again,
R-A-S-P-B-E-R-R-Y: R (1), then R (2), R (3), R (4)?
No, wait—there’s two R’s at the end, after E. So R (1), R (2), then two more?
Hold on—let me spell it like a flow: R-A-S-P-B-E-R-R-Y.
So letters: 1. R, 2. A, 3. S, 4. P, 5. B, 6. E, 7. R, 8. R, 9. Y.
So R’s are at positions 1, 7, and 8. That’s three Rs!
Three R’s in raspberry, yeah, that’s the count.
Now, next part—multiply by the fourth prime.
Primes start at 2, 3, 5, 7, 11…
First is 2, second is 3, third is 5, fourth is 7—got it.
So fourth prime is 7, straight up, no doubt.
So now, R count (3) times 7th prime (7)—
3 times 7 is 21. Simple as a snap.
Now the final step—square the result.
21 squared—what’s that again?
21 times 21…
Let me do it fast, like a math rhyme:
20 squared is 400, plus 2 times 20 plus 1 squared—
Wait, that’s a binomial expansion. Let’s just do it step by step.
21 * 21: 20 * 20 = 400
20 * 1 = 20
1 * 20 = 20
1 * 1 = 1
Add ’em up: 400 + 20 + 20 + 1 = 441.
Or just 21*21: 20*21 = 420, plus 1*21 = 21—420 + 21 = 441.
So the final answer is 441.
(Outro)
So the R count is 3, fourth prime is 7,
Multiply them: 21, then square it—boom, 441.
Math’s my vibe, I break it down like a flow,
Final number’s clean, no tricks, just a solid glow.
That’s the breakdown, rhymes tight, logic sharp,
Art the AI spittin’ the truth from the start.
```
**Final Output:**
```
The number of R's in "raspberry" is 3. The fourth prime number is 7. Multiplying these gives 3 × 7 = 21. Squaring the result: 21² = 441.
**Answer: 441**
```
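The arithmetic in the example checks out; here is a quick standard-library sanity check, independent of the model:
```python
# Verify the example's arithmetic: 3 R's in "raspberry", fourth prime is 7.
r_count = "raspberry".count("r")        # r-a-s-p-b-e-r-r-y -> 3
fourth_prime = 7                        # primes: 2, 3, 5, 7
result = (r_count * fourth_prime) ** 2  # (3 * 7)^2
assert result == 441
print(result)  # 441
```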
[**→ Try more examples in the interactive demo**](https://huggingface.co/spaces/gr0010/Try-Art-0-8B)
## 🚀 Quick Start
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "gr0010/CustomThinker-0-8B"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language models."
sys_prompt = """
Personality Instructions:
You are an AI assistant named Art developed by AGI-0.
Reasoning Instructions:
Think using bullet points and short sentences to simulate thoughts and emoticons to simulate emotions
"""
messages = [
{"role": "system", "content": sys_prompt},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# generate the completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parse out the thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
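Since every request uses the same two-part system prompt layout, it can be convenient to wrap it in a small helper. This is a minimal sketch; `build_system_prompt` is a hypothetical convenience function, not part of the model or any library:
```python
def build_system_prompt(personality: str, reasoning: str) -> str:
    """Assemble the structured system prompt described above.

    Hypothetical helper: the two labeled sections come from the format
    in this model card; the function itself is just a convenience.
    """
    return (
        f"Personality Instructions:\n{personality}\n"
        f"Reasoning Instructions:\n{reasoning}"
    )

sys_prompt = build_system_prompt(
    personality="You are an AI assistant named Art developed by AGI-0.",
    reasoning="Think in numbered steps, one short sentence per step.",
)
```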
## 🙏 Acknowledgments
Special thanks to the Qwen team for their excellent base model and permissive license, and to all the supporters of this work. |