# Blueprint-9B
Blueprint-9B is a specialized fine-tune of the Qwen 3.5-9B architecture, optimized for logical reasoning and interpreting unstructured technical instructions. It is designed to act as a project architect, prioritizing structural logic over simple syntax generation.
### Performance Benchmarks
| Metric | Blueprint-9B | Qwen 3.5 9B (Base) |
|---|---|---|
| Logic (GSM8K) | 83.8% | 81.0% |
| Code (HumanEval) | 62.2% | 81.7% |
| Scripts (MBPP) | 62.0% | 82.0% |
Note: This model trades raw coding benchmark accuracy for higher reasoning accuracy, making it better suited to planning complex projects from informal notes than to direct code generation.
### Licensing & Support

Blueprint-9B is released under the Creative Commons Attribution-ShareAlike 3.0 Unported (CC BY-SA 3.0) license.

**Technical Support & Professional Inquiries:** For support, please use the Discussions tab or contact blueprintai.help1@gmail.com.
### Official Credits & Legal Acknowledgments

To ensure full compliance with open-source standards and respect for foundational work, we provide the following credits. This model is a derivative work distributed under the same license as the training data:

- **Base Architecture:** Developed by the Alibaba Cloud Qwen Team. We credit them for the high-performance Qwen 3.5-9B model foundation.
- **Primary Dataset:** Databricks, Inc. for the databricks-dolly-15k dataset (Copyright 2023 Databricks, Inc.). This model strictly adheres to the CC BY-SA 3.0 requirements as mandated by the Dolly dataset.
- **Coding Data:** Credits to the BigCode Project for the StarCoder instruct datasets and the m-a-p team for CodeFeedback.
- **Instruction Tuning:** Recognition to Tarun Sharma for the Alpaca-based Python instruction sets.
### Implementation

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "BlueprintLabs/Blueprint-9B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",           # spread layers across available devices
    torch_dtype=torch.bfloat16,  # bf16 halves memory use vs. fp32
)
```
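Once the model and tokenizer are loaded, a typical workflow is to wrap informal project notes in a chat-format prompt. The sketch below is illustrative and not part of the official model card: the `build_messages` helper and its system-prompt wording are assumptions, and the commented-out generation call shows the standard `transformers` chat-template pattern.

```python
# Hedged usage sketch: turning informal notes into a prompt for Blueprint-9B.
# The helper name and system prompt are hypothetical; adjust to your use case.

def build_messages(notes: str) -> list[dict]:
    """Wrap informal project notes in a chat-format message list."""
    return [
        {
            "role": "system",
            "content": (
                "You are a project architect. Turn the user's informal "
                "notes into a structured, step-by-step project plan."
            ),
        },
        {"role": "user", "content": notes},
    ]

messages = build_messages("CLI tool, Python, reads CSVs, prints summary stats")

# With `model` and `tokenizer` loaded as above, generation would look like:
#   inputs = tokenizer.apply_chat_template(
#       messages, add_generation_prompt=True, return_tensors="pt"
#   ).to(model.device)
#   out = model.generate(inputs, max_new_tokens=512)
#   print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```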