GlorryLlama-3B-Reasoning
- Developed by: whitelotus0
- License: Apache-2.0
- Finetuned from model: unsloth/Llama-3.2-3B-Instruct
Model Description
GlorryLlama-3B is a fine-tuned version of Llama 3.2 3B, specialized in reasoning and stream-of-consciousness thinking.
It was trained on the ServiceNow-AI/R1-Distill-SFT dataset, which encourages the model to "think" before it answers. The model mimics a reflective assistant that explores, doubts, and refines its own logic before providing a final solution.
This model was trained 2x faster with Unsloth and Hugging Face's TRL library.
Intended Use & Prompt Format
To elicit the reasoning behavior (chain of thought), wrap your question in <problem> tags and use the system prompt below.
System / Instruction Prompt
You are a reflective assistant engaging in thorough, iterative reasoning, mimicking human stream-of-consciousness thinking. Your approach emphasizes exploration, self-doubt, and continuous refinement before coming up with an answer.
<problem>
{YOUR QUESTION HERE}
</problem>
How to Use
Using Unsloth (Recommended for speed)
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "whitelotus0/glorryllama",
    max_seq_length = 2048,
    dtype = None,
    load_in_4bit = True,
)
FastLanguageModel.for_inference(model)

# Define the prompt structure
sys_prompt = """You are a reflective assistant engaging in thorough, iterative reasoning, mimicking human stream-of-consciousness thinking. Your approach emphasizes exploration, self-doubt, and continuous refinement before coming up with an answer.
<problem>
{}
</problem>
"""

# Format the query
query = "How many 'r's are present in 'strawberry'?"
formatted_message = sys_prompt.format(query)
messages = [
    {"role": "user", "content": formatted_message},
]

inputs = tokenizer.apply_chat_template(
    messages,
    tokenize = True,
    add_generation_prompt = True,
    return_tensors = "pt",
).to("cuda")

outputs = model.generate(
    input_ids = inputs,
    max_new_tokens = 1024,
    use_cache = True,
    temperature = 1.5,
    min_p = 0.1,
)
print(tokenizer.batch_decode(outputs)[0])
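If you want tokens to appear as they are generated (useful for watching the chain of thought unfold), you can pass a TextStreamer from transformers. A minimal sketch reusing model, tokenizer, and inputs from the snippet above:

from transformers import TextStreamer

# Stream decoded tokens to stdout as they are generated, skipping the echoed prompt
streamer = TextStreamer(tokenizer, skip_prompt = True)
_ = model.generate(
    input_ids = inputs,
    streamer = streamer,
    max_new_tokens = 1024,
    use_cache = True,
    temperature = 1.5,
    min_p = 0.1,
)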
Using Transformers
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("whitelotus0/glorryllama")
model = AutoModelForCausalLM.from_pretrained("whitelotus0/glorryllama", device_map="auto")

prompt_template = """You are a reflective assistant engaging in thorough, iterative reasoning, mimicking human stream-of-consciousness thinking. Your approach emphasizes exploration, self-doubt, and continuous refinement before coming up with an answer.
<problem>
{}
</problem>
"""

# Apply the Llama 3.2 chat template, as in the Unsloth example above
messages = [{"role": "user", "content": prompt_template.format("Explain logic clearly.")}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(input_ids=inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
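Since the adapter was trained with 4-bit QLoRA, the model can also be loaded in 4-bit through bitsandbytes to cut memory use. A sketch; the NF4 settings below are common defaults, not values stated on this card:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit NF4 quantization with bfloat16 compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained("whitelotus0/glorryllama")
model = AutoModelForCausalLM.from_pretrained(
    "whitelotus0/glorryllama",
    quantization_config=bnb_config,
    device_map="auto",
)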
Training Details
Hyperparameters
- Training Steps: 100
- Learning Rate: 1e-4
- Batch Size: 2 (per device) with 4 gradient accumulation steps (effective batch size 8)
- Optimizer: AdamW 8-bit
- Precision: bfloat16
- Quantization: 4-bit (QLoRA)
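For orientation, here is a minimal sketch of how these hyperparameters map onto Unsloth's LoRA wrapper and TRL's SFTTrainer. The numeric values come from the list above; the LoRA rank, alpha, and target modules are assumptions (the card does not list them), as is the exact trainer wiring (newer TRL versions name the tokenizer argument processing_class):

from unsloth import FastLanguageModel
from trl import SFTTrainer, SFTConfig

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Llama-3.2-3B-Instruct",
    max_seq_length = 2048,
    load_in_4bit = True,  # 4-bit base weights (QLoRA)
)
# Attach LoRA adapters; rank/alpha/targets are assumed, not stated on the card
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    lora_alpha = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
)

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,  # processing_class in newer TRL versions
    train_dataset = dataset,  # R1-Distill-SFT rows formatted into the <problem> template
    args = SFTConfig(
        max_steps = 100,
        learning_rate = 1e-4,
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,  # effective batch size 8
        optim = "adamw_8bit",
        bf16 = True,
        output_dir = "outputs",
    ),
)
trainer.train()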
Dataset
Trained on the ServiceNow-AI/R1-Distill-SFT dataset, which contains reasoning traces generated by DeepSeek-R1.
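The dataset can be pulled from the Hugging Face Hub with the datasets library. A quick sketch; the config and split names are assumptions, so check the dataset page for what is actually available:

from datasets import load_dataset

# Config name "v1" is an assumption; see the dataset page for available configs
dataset = load_dataset("ServiceNow-AI/R1-Distill-SFT", "v1", split = "train")
print(dataset[0])  # inspect one reasoning trace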
Model tree for whitelotus0/glorryllama
- Base model: meta-llama/Llama-3.2-3B-Instruct
- Finetuned from: unsloth/Llama-3.2-3B-Instruct
