🌑 Shadow 0.7B (Reasoning + Coding Edition)

Shadow 0.7B is a specialized Small Language Model (SLM) optimized for logical reasoning, competitive programming, and chain-of-thought processing.

Built on the Qwen3 0.6B architecture and fine-tuned using Unsloth, Shadow delivers surprising reasoning depth and "thinking-first" responses uncommon for a model of this size.


Key Features

  • 🧠 Structured Reasoning: Uses <think>-style internal reasoning patterns to improve answer quality (a parsing sketch follows this list).
  • 💻 Coding Specialist: Excels at Python, C++, and algorithmic problem-solving.
  • ⚡ Ultra-Lightweight: Runs on CPU, T4, mobile, or even low-VRAM consumer GPUs.
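
The <think> block can be separated from the final answer in post-processing. Below is a minimal sketch, assuming the model emits a single <think>...</think> block before its answer; the tag format follows the feature list above, not a verified output spec.

import re

def split_reasoning(response: str):
    # Split a decoded response into (reasoning, answer), assuming one
    # <think>...</think> block precedes the answer (an assumption based
    # on the card's description, not a guaranteed format).
    match = re.search(r"<think>(.*?)</think>", response, flags=re.DOTALL)
    if match is None:
        return None, response.strip()
    return match.group(1).strip(), response[match.end():].strip()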

💻 Quick Start (Python)

# Requires transformers and accelerate (for device_map="auto")
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Redhanuman/Shadow-0.7B"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Write a Python script to check for palindromes. Explain your logic."
messages = [
    {"role": "user", "content": prompt}
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **inputs,
    max_new_tokens=1024
)

# Slice off the prompt tokens so only the model's reply is printed
output_ids = generated_ids[0][inputs.input_ids.shape[1]:]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
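
For the low-VRAM GPUs mentioned under Key Features, the model can also be loaded in 4-bit. A minimal sketch, assuming bitsandbytes and accelerate are installed and a CUDA GPU is available; the quantization settings are illustrative, not taken from the model card:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit NF4 weights with BF16 compute (illustrative settings)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "Redhanuman/Shadow-0.7B",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Redhanuman/Shadow-0.7B")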

๐Ÿ› ๏ธ Training Details

  • Creator: Aman Kumar Pandey (LPU)
  • Framework: Unsloth (2× faster training)
  • Base Model: Qwen3-0.6B
  • Method: QLoRA fine-tuning with Chain-of-Draft (CoD) reasoning data
  • Datasets: GSM8K, DeepSeek R1 distilled reasoning samples
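
For reference, a QLoRA pass in Unsloth typically looks like the sketch below. This is illustrative only: the LoRA rank, alpha, target modules, and sequence length are assumptions, not the actual training configuration.

from unsloth import FastLanguageModel

# Load the base model in 4-bit for QLoRA
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen3-0.6B",
    max_seq_length=2048,       # illustrative, not the training value
    load_in_4bit=True,
)

# Attach LoRA adapters; module list follows common Qwen-style setups
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                      # illustrative rank
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)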