---
library_name: transformers
tags:
- text-generation-inference
- code
- reinforcement-learning
- math
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen3-1.7B
pipeline_tag: text-generation
---

![78.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/l1J-T76goSfuIfoKX_uL5.png)

# **Wolf-Rayet-2B-Prime3**

> **Wolf-Rayet-2B-Prime3** is a compact, coding-optimized language model built on the **Qwen3 1.7B architecture**, fine-tuned for high-accuracy **code generation**, **debugging**, and **technical reasoning**. With approximately **2 billion effective parameters**, it offers a strong balance between performance and deployability, making it well suited for developers, educators, and engineers working in resource-constrained or latency-sensitive environments.

> [!note]
> GGUF: [https://huggingface.co/prithivMLmods/Wolf-Rayet-2B-Prime3-GGUF](https://huggingface.co/prithivMLmods/Wolf-Rayet-2B-Prime3-GGUF)

---

## **Key Features**

1. **Qwen3 Architecture Core**
   Built on the modern, efficient **Qwen3 1.7B** transformer backbone, offering improved context handling and token efficiency for both single-turn and multi-turn programming tasks.

2. **Code-First Fine-Tuning**
   Trained extensively on diverse code datasets covering Python, JavaScript, C++, and Bash, with auxiliary tuning on software documentation, APIs, and debugging dialogues.

3. **Multi-Step Technical Reasoning**
   Able to deconstruct complex programming problems, explain logic, refactor code, and correct errors, which is particularly useful for students, engineers, and coding educators.

4. **Structured Output Proficiency**
   Generates accurate structured formats such as JSON, YAML, Markdown, and fenced code blocks, ready to plug into developer tools, notebooks, and documentation pipelines.

5. **Compact Yet Capable**
   At a ~2B parameter scale, it delivers competitive performance without the high resource requirements of larger models, and deploys easily on modern GPUs or high-end CPUs.

6. **Multilingual Coding Support**
   Generates and understands code in 10+ programming languages, with a focus on real-world use cases, automation scripts, and algorithmic solutions.

---

## **Quickstart with Transformers**

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Wolf-Rayet-2B-Prime3"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Write a Python function to check if a number is prime."

messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": prompt}
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```

---

## **Intended Use**

* Code generation, refactoring, and cross-language translation
* Programming education and tutoring
* Technical documentation and boilerplate generation
* Debugging assistance and bug-fix suggestions
* Lightweight integration into IDEs, developer tools, and offline environments

---

## **Limitations**

* Context length is shorter than that of larger (>7B) models
* May require prompt engineering for complex or deeply nested code
* Limited general natural-language conversation capabilities
* Not intended for creative writing or non-technical tasks

---

## **References**
1. [Qwen3 (1.7B) Model Overview](https://huggingface.co/Qwen/Qwen3-1.7B)
2. [YaRN: Efficient Context Window Extension of Large Language Models](https://arxiv.org/pdf/2309.00071)
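
---

## **Example: Consuming Structured Output**

The structured-output support described under **Key Features** is easiest to use when the model's reply is validated before it enters a pipeline. The sketch below parses a JSON block out of a response string such as the one returned by the quickstart above; the `extract_json_block` helper and the sample `reply` are illustrative assumptions, not part of the model or the `transformers` API.

```python
import json
import re

def extract_json_block(response: str) -> dict:
    """Pull the first fenced JSON block (or bare JSON object) out of a
    model response and parse it, raising ValueError if none is found."""
    # Prefer a ```json fenced block, since the model is tuned to emit one.
    match = re.search(r"```json\s*(.*?)\s*```", response, re.DOTALL)
    candidate = match.group(1) if match else response
    # Fall back to the outermost braces for unfenced replies.
    start, end = candidate.find("{"), candidate.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in response")
    return json.loads(candidate[start:end + 1])

# Hypothetical model reply:
reply = 'Here is the config:\n```json\n{"language": "python", "tests": true}\n```'
config = extract_json_block(reply)
print(config["language"])  # python
```

Validating in this way keeps downstream tools robust to the conversational text the model may wrap around its JSON, YAML, or Markdown output.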