# Duchifat-2-Thinking 🛰️
Duchifat-2-Thinking is a lightweight, efficient language model (136M parameters) fine-tuned specifically for reasoning and instruction following. It uses a Triple-Prompt architecture (Instruction-Thought-Output) to produce focused, logical, high-quality responses.
## Model Details
- Developed by: Raziel / TopAI
- Model type: Causal Language Model (Transformer)
- Language(s): English (Primary), Hebrew (Identity)
- License: Apache 2.0
- Base Model: Duchifat-2 (136M)
- Training Technique: SFT (Supervised Fine-Tuning) with Chain-of-Thought Alignment.
## Key Features
- Triple-Prompt Architecture: Designed to process an internal "Thought" block before generating the final output.
- Efficient Reasoning: Optimized for CPU and low-resource environments without sacrificing logical consistency.
- Clean Output: Significantly reduced hallucination and "word salad" compared to standard small models.
## Prompt Format
To get the best results, use the following structured prompt:
```text
### instruction:
{Your Question}

### thought:
{The logic or reasoning the model should follow}

### output:
```
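The three sections can also be assembled programmatically. A minimal sketch; the `build_prompt` helper is illustrative, not part of the model's tooling:

```python
def build_prompt(instruction: str, thought: str) -> str:
    """Assemble the Triple-Prompt (Instruction-Thought-Output) string."""
    return (
        f"### instruction:\n{instruction}\n\n"
        f"### thought:\n{thought}\n\n"
        f"### output:\n"
    )

# Example: a guided-reasoning prompt for a simple question.
prompt = build_prompt(
    "What is 2 + 2?",
    "This is simple arithmetic; I should answer with the number 4.",
)
print(prompt)
```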
## Usage Example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "razielAI/Duchifat-2-Instruct-Thinking"  # Update with your exact HF path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float32)

instruction = "Who are you?"
thought = "The user is asking for my identity. I should state I am Duchifat-2 developed by TopAI."
prompt = f"### instruction:\n{instruction}\n\n### thought:\n{thought}\n\n### output:\n"

inputs = tokenizer(prompt, return_tensors="pt")
# do_sample=True is required for temperature to take effect.
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.1)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
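Because the model continues the structured prompt, the decoded text contains the full instruction/thought/output scaffold. One way to slice out just the final answer; the helper and the sample string below are illustrative, not actual model output:

```python
def extract_output(generated: str) -> str:
    """Return only the text after the last '### output:' marker."""
    marker = "### output:"
    _, _, answer = generated.rpartition(marker)
    return answer.strip()

# Illustrative decoded text, not real model output.
sample = (
    "### instruction:\nWho are you?\n\n"
    "### thought:\nThe user is asking for my identity.\n\n"
    "### output:\nI am Duchifat-2, developed by TopAI."
)
print(extract_output(sample))  # → I am Duchifat-2, developed by TopAI.
```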
## Performance & Limitations
Duchifat-2-Instruct-Thinking is a **Small Language Model (SLM)**. While it excels at structured tasks and guided reasoning:

- It may require a guided thought block for highly complex logic.
- It works best at a low temperature (0.1–0.3) for factual consistency.
## Citation
If you use this model in your research or project, please cite:
```bibtex
@misc{duchifat2thinking2026,
  author    = {Raziel, TopAI},
  title     = {Duchifat-2-Thinking: A Lightweight Reasoning Model},
  year      = {2026},
  publisher = {Hugging Face},
  journal   = {Hugging Face Model Hub}
}
```
Base model: `Raziel1234/Duchifat-2`