# 🧠 ThoughtSwitch V1 1.7B Instruct — A Mode-Adaptive Reasoning Language Model

> **Model ID**: [`BrainWave-ML/ThoughtSwitch-V1-1.7b-Instruct`](https://huggingface.co/BrainWave-ML/ThoughtSwitch-V1-1.7b-Instruct)
> **Architecture**: Decoder-only transformer (GPT-style)
> **Parameters**: 1.7 billion
> **Capabilities**: Dynamic "Thinking" vs. "Non-Thinking" mode switching
> **Fine-tuned for**: Instruction following

---

## 🚀 Overview

**ThoughtSwitch V1** is an instruction-tuned language model built around a new paradigm for text generation: **autonomous cognitive mode switching**. It interprets the user's prompt and switches between two distinct modes of behavior:

- 🧠 **Thinking Mode**: Deep reasoning and step-by-step solutions; slower but deliberate output.
- 💬 **Non-Thinking Mode**: Quick completions, casual replies, storytelling, and chat-like fluency.

Whether you're building reasoning agents, fast assistants, or multi-step chain-of-thought applications, ThoughtSwitch adapts to the prompt, so **you don't have to force a mode through prompt engineering**.

---

## 🧠 Key Features

- ✅ **Autonomous Mode Switching**
  Decides when to think deeply and when to generate fluently, based on prompt phrasing.
- ✅ **Instruction Tuned**
  Trained to follow human instructions and align closely with user intent.
- ✅ **1.7B Parameters**
  Small enough for efficient inference, yet capable of sophisticated reasoning.
- ✅ **Open Weights**
  Fully accessible under a permissive license (see the license field on the Hugging Face model card).

---

## ✨ Example Prompts

Prompt (Thinking Mode):

> "Think step by step to solve this math problem: What is 17 multiplied by 23?"

→ Reasoned output with intermediate steps and justification.

Prompt (Non-Thinking Mode):

> "Write a quick sci-fi story about a robot discovering love."

→ Smooth, creative storytelling without unnecessary reasoning.

---

## 🔧 Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("BrainWave-ML/ThoughtSwitch-V1-1.7b-Instruct")
model = AutoModelForCausalLM.from_pretrained("BrainWave-ML/ThoughtSwitch-V1-1.7b-Instruct")

prompt = "Think step by step: Why does ice float on water?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

---

## 🧪 Intended Use Cases

- 🧠 **Reasoning Agents** — For multi-hop question answering, logical puzzles, or decision support.
- 📚 **Tutoring & Education** — Adaptive explanations whose depth varies with the student's prompt.
- 🤖 **Conversational AI** — More natural, flexible interactions with variable "thinking effort".
- ✍️ **Creative Writing** — Generate stories, poems, and ideas with or without deep reasoning.

---

## ⚠️ Limitations

- Like all LLMs, it may hallucinate or generate biased content.
- Mode switching is **probabilistic**, not guaranteed; prompt explicitly for best results (a comparison sketch follows this section).
- Performance may degrade outside English or in unfamiliar domains.
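---

## 🔬 Prompting the Two Modes (Sketch)

Because mode selection is inferred from phrasing rather than set by an explicit flag, the most reliable way to steer it is to compare an explicitly reasoning-style prompt against a casual one. Below is a minimal sketch assuming only the public `transformers` API from the Usage section above; the prompts and generation settings are illustrative, not an official recipe.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "BrainWave-ML/ThoughtSwitch-V1-1.7b-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompts = {
    # Explicit "think step by step" phrasing nudges the model toward Thinking Mode.
    "thinking": "Think step by step to solve this math problem: What is 17 multiplied by 23?",
    # Casual phrasing leaves it in fast, fluent Non-Thinking Mode.
    "non-thinking": "Write a quick sci-fi story about a robot discovering love.",
}

for mode, prompt in prompts.items():
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=200,  # illustrative budget; raise it for longer reasoning chains
        do_sample=False,     # greedy decoding keeps the two runs comparable
    )
    print(f"--- {mode} ---")
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Greedy decoding is used here only to make the side-by-side comparison deterministic; for production chat use you would typically re-enable sampling.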
---

## 📈 Performance (Unofficial Benchmarks)

| Task                   | Performance        |
|------------------------|--------------------|
| Commonsense Reasoning  | ✅ Strong          |
| Instruction Following  | ✅ Strong          |
| Fast Casual Generation | ✅ Very Strong     |
| Math (Step-by-Step)    | ⚠️ Moderate        |
| Factual QA             | ⚠️ May hallucinate |

---

## 🛠️ Model Details

- **Architecture**: GPT-style decoder (causal LM)
- **Training**: Custom pretraining on a hybrid reasoning/non-reasoning dataset
- **Instruction Fine-Tuning**: Yes, using curated prompt-response pairs
- **Context Length**: 2048 tokens (extendable with RoPE scaling)

---

## 🔍 Quantized Version

Looking for fast inference? Check out the GGUF-quantized version (by @mradermacher) for compatibility with llama.cpp, KoboldAI, and other lightweight runtimes; a quick-start sketch is included in the appendix at the end of this card.

---

## 📄 Citation

If you use this model in your research or application, please cite it as:

```bibtex
@misc{thoughtswitch2025,
  title        = {ThoughtSwitch V1 1.7B Instruct: A Mode-Adaptive Reasoning Language Model},
  author       = {BrainWave-ML},
  year         = {2025},
  howpublished = {\url{https://huggingface.co/BrainWave-ML/ThoughtSwitch-V1-1.7b-Instruct}}
}
```

---

## 💬 Contact

For issues, feedback, or collaboration:

- 🤖 Hugging Face Page: https://huggingface.co/BrainWave-ML/ThoughtSwitch-V1-1.7b-Instruct
- 📧 Email: *[YourContact@domain.com]*
- 🌐 Website: *[https://brainwave-ml.ai]* (optional)
- 💬 Discord or Community: *Coming Soon*

---

## 🙌 Acknowledgments

Developed by the team at **BrainWave-ML**. Inspired by the question: *"What if language models could choose when to think?"*

---

> **ThoughtSwitch**: Think when you need to. Generate when you don't.
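---

## 📦 Appendix: GGUF Quick Start (Sketch)

For the GGUF route mentioned in the Quantized Version section, here is a minimal sketch using the `llama-cpp-python` bindings for llama.cpp. The local file path and the `Q4_K_M` quantization level are placeholders (this card does not list the exact quantized repo or filename), so adjust them to whatever build you download.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Placeholder path: download a GGUF build of this model first; the exact
# @mradermacher repo/filename is not listed on this card.
llm = Llama(
    model_path="./ThoughtSwitch-V1-1.7b-Instruct.Q4_K_M.gguf",
    n_ctx=2048,  # matches the model's native context length
)

out = llm(
    "Think step by step: Why does ice float on water?",
    max_tokens=150,
)
print(out["choices"][0]["text"])
```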