Qwen3-Space.Agent_DASD-Uncensored-4B
A SLERP-merged Qwen3-4B combining creative writing, deep reasoning, and uncensored agentic capabilities.
Model Composition
Sequential Spherical Linear Interpolation (SLERP):
- Step 1: bowen-upenn/Qwen3-4B-CreativeWriting-SFT + Alibaba-Apsara/DASD-4B-Thinking @ t=0.5
- Step 2: the result of Step 1 + WithinUsAI/Qwen3-Space.Agent.Claude.Uncensored-4B @ t=0.5
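For reference, each SLERP step interpolates a pair of parent weight tensors along the shortest arc on a hypersphere rather than averaging them linearly, which preserves weight scale better than a plain average. Below is a minimal per-tensor sketch in PyTorch; the function name and flatten-then-reshape handling are illustrative, not the exact merge script used for this checkpoint.
import torch
def slerp(w_a, w_b, t=0.5, eps=1e-8):
    # Work on flattened, normalized copies to measure the angle between the two tensors
    a, b = w_a.flatten().float(), w_b.flatten().float()
    a_n, b_n = a / (a.norm() + eps), b / (b.norm() + eps)
    omega = torch.acos(torch.clamp(torch.dot(a_n, b_n), -1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:
        # Nearly parallel tensors: fall back to linear interpolation
        return (1 - t) * w_a + t * w_b
    # Weight each parent by the sine of its share of the arc
    out = (torch.sin((1 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
    return out.reshape(w_a.shape).to(w_a.dtype)
With t=0.5, both parents contribute equally at each of the two merge steps listed above.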
Parent Models
- Creative Writing SFT: Storytelling, narrative, character development, stylistic prose.
- DASD-4B-Thinking: Long chain-of-thought reasoning, complex problem solving.
- Claude-Uncensored Agent: Agentic behavior, tool use, reduced refusals, Claude-like personality.
Key Strengths
- Balanced creativity + deep reasoning
- Strong agentic / tool-use capabilities
- Reduced censorship compared to typical aligned models
- Handles long-context creative and reasoning tasks well (within the 32k-token window)
Usage
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_id = "GODsStrongestSoldier/Qwen3-Space.Agent_DASD-Uncensored-4B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")
messages = [{"role": "user", "content": "Write a thoughtful sci-fi story with internal monologue."}]
# add_generation_prompt=True appends the assistant turn so the model starts its reply immediately
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=1200, temperature=0.75, do_sample=True)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
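The DASD-4B-Thinking parent is tuned for long chain-of-thought, and standard Qwen3 tokenizers expose an enable_thinking switch in their chat template. Assuming the merged checkpoint kept that template (worth verifying against the bundled tokenizer_config), the reasoning blocks can be toggled per request:
# Assumption: the Qwen3 chat template's enable_thinking flag survived the merge
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    enable_thinking=True,  # set to False to suppress <think> reasoning blocks
    return_tensors="pt",
).to(model.device)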
Recommended Settings
- Temperature: 0.7-0.9 for creative tasks
- Use system prompts for agent behavior
- Encourage step-by-step thinking for complex problems
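Putting the last two recommendations together, here is a sketch of an agent-style request that reuses the model and tokenizer loaded in the Usage section; the system prompt wording and sampling values are illustrative suggestions, not tuned defaults.
# Illustrative system prompt combining agent behavior with step-by-step reasoning
messages = [
    {"role": "system", "content": "You are an autonomous assistant with access to tools. Think step by step and state your plan before acting."},
    {"role": "user", "content": "Outline a plan to triage a failing CI pipeline, then list the commands you would run."},
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=800, temperature=0.8, top_p=0.95, do_sample=True)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))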
Technical Specs
- Base: Qwen3-4B (36 layers, GQA, 32k context)
- Merge: Sequential SLERP (no additional training)
- Size: ~4B parameters
- Precision: FP16 (safetensors)
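To confirm the layer count and context window on your own install, the config can be read straight from the hub entry; the field names below are the usual Qwen3 keys and assume no custom config class.
from transformers import AutoConfig
cfg = AutoConfig.from_pretrained("GODsStrongestSoldier/Qwen3-Space.Agent_DASD-Uncensored-4B")
# Expect 36 hidden layers, GQA (num_key_value_heads < num_attention_heads), and a 32k position window
print(cfg.num_hidden_layers, cfg.num_attention_heads, cfg.num_key_value_heads, cfg.max_position_embeddings)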
Acknowledgments
- bowen-upenn, Alibaba-Apsara, WithinUsAI, and the Qwen team.
Merged on Kaggle via sequential SLERP — May 2026