Upload folder using huggingface_hub

Files changed:
- .gitattributes (+1, -0)
- README.md (+168, -0)
- added_tokens.json (+28, -0)
- chat_template.jinja (+97, -0)
- config.json (+77, -0)
- merges.txt (+0, -0)
- model.safetensors (+3, -0)
- special_tokens_map.json (+31, -0)
- tokenizer.json (+3, -0)
- tokenizer_config.json (+244, -0)
- vocab.json (+0, -0)
.gitattributes
CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,168 @@
---
base_model:
- Daemontatox/Tiny-OR1-Rust
tags:
- bnb-my-repo
- text-generation-inference
- transformers
- unsloth
- qwen3
- rust
- code
- idiomatic rust
- reasoning
license: apache-2.0
language:
- en
datasets:
- Tesslate/Rust_Dataset
library_name: transformers
---

# Daemontatox/Tiny-OR1-Rust (Quantized)

## Description

This model is a quantized version of the original model [`Daemontatox/Tiny-OR1-Rust`](https://huggingface.co/Daemontatox/Tiny-OR1-Rust).

It was quantized to 4-bit with the BitsAndBytes library using the [bnb-my-repo](https://huggingface.co/spaces/bnb-community/bnb-my-repo) space.

## Quantization Details

- **Quantization Type**: int4
- **bnb_4bit_quant_type**: nf4
- **bnb_4bit_use_double_quant**: True
- **bnb_4bit_compute_dtype**: bfloat16
- **bnb_4bit_quant_storage**: uint8
# 📄 Original Model Information

# Tiny-OR1-Rust

**A lightweight Rust code assistant model for code generation, completion, and explanation.**

## Model Description

Tiny-OR1-Rust is a specialized language model fine-tuned from Qwen3-1.7B for Rust programming tasks. Built on the efficient Qwen3 architecture, this 1.7B-parameter model provides effective code generation, completion, and explanation capabilities tailored to the Rust programming language while maintaining a compact footprint.

## Model Details

- **Model Name**: Tiny-OR1-Rust
- **Developer**: Daemontatox
- **Model Type**: Code Generation / Text-to-Code
- **Language**: Rust
- **Architecture**: Qwen3-based Transformer
- **Parameters**: 1.7B
- **Base Model**: Qwen3-1.7B
- **Training Dataset**: Tesslate/Rust_Dataset

## Intended Use

### Primary Use Cases

- **Code Generation**: Generate Rust code from natural language descriptions
- **Code Completion**: Complete partial Rust code snippets
- **Code Explanation**: Explain Rust code functionality and concepts
- **Learning Assistant**: Help developers learn Rust programming patterns and best practices

### Intended Users

- Rust developers and learners
- Students studying systems programming
- Developers transitioning to Rust from other languages
- Code editors and IDEs integrating Rust assistance
## How to Use

### Basic Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("Daemontatox/Tiny-OR1-Rust")
model = AutoModelForCausalLM.from_pretrained("Daemontatox/Tiny-OR1-Rust")

# Example prompt
prompt = "Write a Rust function to calculate factorial:"

# Generate code (do_sample=True so the temperature setting takes effect;
# max_new_tokens counts only generated tokens, unlike max_length)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=150,
    do_sample=True,
    temperature=0.7,
    pad_token_id=tokenizer.eos_token_id,
)
generated_code = tokenizer.decode(outputs[0], skip_special_tokens=True)

print(generated_code)
```
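This card is tagged for reasoning, and the tokenizer ships `<think>`/`</think>` as added tokens, so generations may carry a reasoning block before the final code. A small post-processing helper (our sketch, not part of the original card) to separate the two:

```python
# Sketch: split a generation into (reasoning, answer) using the <think> tags
# this tokenizer defines in added_tokens.json.
def split_reasoning(text: str) -> tuple[str, str]:
    """Return (reasoning, answer); reasoning is "" when no think block exists."""
    if "</think>" not in text:
        return "", text.strip()
    head, _, tail = text.partition("</think>")
    reasoning = head.split("<think>")[-1].strip()
    return reasoning, tail.strip()

reasoning, answer = split_reasoning(
    "<think>handle the n == 0 base case</think>\n"
    "fn factorial(n: u64) -> u64 { if n == 0 { 1 } else { n * factorial(n - 1) } }"
)
```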
### Prompt Examples

**Code Generation:**
```
"Write a Rust function that reads a file and counts the number of lines:"
"Create a Rust struct for a binary tree with insert and search methods:"
"Implement a thread-safe counter using Arc and Mutex in Rust:"
```

**Code Explanation:**
```
"Explain this Rust code: fn main() { let x = vec![1, 2, 3]; }"
"What does the ? operator do in Rust error handling?"
```

## Training Data

The model was trained on the **Tesslate/Rust_Dataset**, which contains:
- Rust source code from various projects
- Code documentation and comments
- Rust programming examples and tutorials
- Community-contributed Rust code snippets

## Performance

The model demonstrates strong performance in:
- Generating syntactically correct Rust code
- Understanding Rust-specific concepts (ownership, borrowing, lifetimes)
- Providing contextually appropriate code completions
- Explaining Rust programming patterns

## Limitations

- **Domain Specificity**: Optimized for Rust code; may not perform well on other programming languages
- **Model Size**: As a "tiny" model, it may struggle with very complex code generation tasks
- **Context Length**: The limited context window may affect performance on very long code sequences
- **Specialized Knowledge**: May not have extensive knowledge of very recent Rust features or niche crates

## Ethical Considerations

- The model generates code based on training data patterns and may reproduce coding practices from the dataset
- Users should review and test generated code before using it in production environments
- The model should not be used as a substitute for understanding fundamental programming concepts

## License

Apache 2.0 (as declared in this model card's metadata).

## Citation

```bibtex
@misc{tiny-or1-rust,
  title={Tiny-OR1-Rust: A Lightweight Rust Code Assistant Based on Qwen3},
  author={Daemontatox},
  year={2024},
  howpublished={\url{https://huggingface.co/Daemontatox/Tiny-OR1-Rust}},
  note={Fine-tuned from Qwen3-1.7B on Tesslate/Rust_Dataset}
}
```

## Contact

For questions, issues, or contributions, please contact [your contact information or GitHub profile].

## Acknowledgments

- Thanks to the Tesslate team for providing the Rust dataset
- Built upon the excellent Qwen3-1.7B foundation model by Alibaba Cloud
- Special recognition to the Rust community for their contributions to open-source Rust code

---

*This model is part of ongoing efforts to make Rust programming more accessible through AI assistance.*
added_tokens.json
ADDED
@@ -0,0 +1,28 @@
{
  "</think>": 151668,
  "</tool_call>": 151658,
  "</tool_response>": 151666,
  "<think>": 151667,
  "<tool_call>": 151657,
  "<tool_response>": 151665,
  "<|box_end|>": 151649,
  "<|box_start|>": 151648,
  "<|endoftext|>": 151643,
  "<|file_sep|>": 151664,
  "<|fim_middle|>": 151660,
  "<|fim_pad|>": 151662,
  "<|fim_prefix|>": 151659,
  "<|fim_suffix|>": 151661,
  "<|im_end|>": 151645,
  "<|im_start|>": 151644,
  "<|image_pad|>": 151655,
  "<|object_ref_end|>": 151647,
  "<|object_ref_start|>": 151646,
  "<|quad_end|>": 151651,
  "<|quad_start|>": 151650,
  "<|repo_name|>": 151663,
  "<|video_pad|>": 151656,
  "<|vision_end|>": 151653,
  "<|vision_pad|>": 151654,
  "<|vision_start|>": 151652
}
chat_template.jinja
ADDED
@@ -0,0 +1,97 @@
{%- if tools %}
{{- '<|im_start|>system\n' }}
{%- if messages[0].role == 'system' %}
{{- messages[0].content + '\n\n' }}
{%- endif %}
{{- "# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
{%- for tool in tools %}
{{- "\n" }}
{{- tool | tojson }}
{%- endfor %}
{{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
{%- else %}
{%- if messages[0].role == 'system' %}
{{- '<|im_start|>system\n' + messages[0].content + '<|im_end|>\n' }}
{%- endif %}
{%- endif %}
{%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}
{%- for forward_message in messages %}
{%- set index = (messages|length - 1) - loop.index0 %}
{%- set message = messages[index] %}
{%- set tool_start = '<tool_response>' %}
{%- set tool_start_length = tool_start|length %}
{%- set start_of_message = message.content[:tool_start_length] %}
{%- set tool_end = '</tool_response>' %}
{%- set tool_end_length = tool_end|length %}
{%- set start_pos = (message.content|length) - tool_end_length %}
{%- if start_pos < 0 %}
{%- set start_pos = 0 %}
{%- endif %}
{%- set end_of_message = message.content[start_pos:] %}
{%- if ns.multi_step_tool and message.role == "user" and not(start_of_message == tool_start and end_of_message == tool_end) %}
{%- set ns.multi_step_tool = false %}
{%- set ns.last_query_index = index %}
{%- endif %}
{%- endfor %}
{%- for message in messages %}
{%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
{{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
{%- elif message.role == "assistant" %}
{%- set content = message.content %}
{%- set reasoning_content = '' %}
{%- if message.reasoning_content is defined and message.reasoning_content is not none %}
{%- set reasoning_content = message.reasoning_content %}
{%- else %}
{%- if '</think>' in message.content %}
{%- set content = (message.content.split('</think>')|last).lstrip('\n') %}
{%- set reasoning_content = (message.content.split('</think>')|first).rstrip('\n') %}
{%- set reasoning_content = (reasoning_content.split('<think>')|last).lstrip('\n') %}
{%- endif %}
{%- endif %}
{%- if loop.index0 > ns.last_query_index %}
{%- if loop.last or (not loop.last and reasoning_content) %}
{{- '<|im_start|>' + message.role + '\n<think>\n' + reasoning_content.strip('\n') + '\n</think>\n\n' + content.lstrip('\n') }}
{%- else %}
{{- '<|im_start|>' + message.role + '\n' + content }}
{%- endif %}
{%- else %}
{{- '<|im_start|>' + message.role + '\n' + content }}
{%- endif %}
{%- if message.tool_calls %}
{%- for tool_call in message.tool_calls %}
{%- if (loop.first and content) or (not loop.first) %}
{{- '\n' }}
{%- endif %}
{%- if tool_call.function %}
{%- set tool_call = tool_call.function %}
{%- endif %}
{{- '<tool_call>\n{"name": "' }}
{{- tool_call.name }}
{{- '", "arguments": ' }}
{%- if tool_call.arguments is string %}
{{- tool_call.arguments }}
{%- else %}
{{- tool_call.arguments | tojson }}
{%- endif %}
{{- '}\n</tool_call>' }}
{%- endfor %}
{%- endif %}
{{- '<|im_end|>\n' }}
{%- elif message.role == "tool" %}
{%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
{{- '<|im_start|>user' }}
{%- endif %}
{{- '\n<tool_response>\n' }}
{{- message.content }}
{{- '\n</tool_response>' }}
{%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
{{- '<|im_end|>\n' }}
{%- endif %}
{%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
{{- '<|im_start|>assistant\n' }}
{%- if enable_thinking is defined and enable_thinking is false %}
{{- '<think>\n\n</think>\n\n' }}
{%- endif %}
{%- endif %}
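For a plain user turn — no tools, thinking enabled, `add_generation_prompt` set — the template above reduces to a simple ChatML layout. This standalone sketch (the helper name is ours) reproduces that reduced case without loading the tokenizer:

```python
# Sketch: the ChatML prompt this chat template emits for a single user message
# (no tools; the system block is included only when a system message is given).
def build_prompt(user_msg: str, system_msg: str = "") -> str:
    parts = []
    if system_msg:
        parts.append(f"<|im_start|>system\n{system_msg}<|im_end|>\n")
    parts.append(f"<|im_start|>user\n{user_msg}<|im_end|>\n")
    parts.append("<|im_start|>assistant\n")  # add_generation_prompt
    return "".join(parts)

prompt = build_prompt("Write a Rust function to calculate factorial:")
```

With `enable_thinking=false`, the template additionally appends an empty `<think>\n\n</think>\n\n` block after the assistant header.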
config.json
ADDED
@@ -0,0 +1,77 @@
{
  "architectures": [
    "Qwen3Model"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "eos_token_id": 151645,
  "head_dim": 128,
  "hidden_act": "silu",
  "hidden_size": 2048,
  "initializer_range": 0.02,
  "intermediate_size": 6144,
  "layer_types": [
    "full_attention", "full_attention", "full_attention", "full_attention",
    "full_attention", "full_attention", "full_attention", "full_attention",
    "full_attention", "full_attention", "full_attention", "full_attention",
    "full_attention", "full_attention", "full_attention", "full_attention",
    "full_attention", "full_attention", "full_attention", "full_attention",
    "full_attention", "full_attention", "full_attention", "full_attention",
    "full_attention", "full_attention", "full_attention", "full_attention"
  ],
  "max_position_embeddings": 40960,
  "max_window_layers": 28,
  "model_type": "qwen3",
  "num_attention_heads": 16,
  "num_hidden_layers": 28,
  "num_key_value_heads": 8,
  "pad_token_id": 151654,
  "quantization_config": {
    "_load_in_4bit": true,
    "_load_in_8bit": false,
    "bnb_4bit_compute_dtype": "bfloat16",
    "bnb_4bit_quant_storage": "uint8",
    "bnb_4bit_quant_type": "nf4",
    "bnb_4bit_use_double_quant": true,
    "llm_int8_enable_fp32_cpu_offload": false,
    "llm_int8_has_fp16_weight": false,
    "llm_int8_skip_modules": null,
    "llm_int8_threshold": 6.0,
    "load_in_4bit": true,
    "load_in_8bit": false,
    "quant_method": "bitsandbytes"
  },
  "rms_norm_eps": 1e-06,
  "rope_scaling": null,
  "rope_theta": 1000000,
  "sliding_window": null,
  "tie_word_embeddings": true,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.53.1",
  "unsloth_fixed": true,
  "unsloth_version": "2025.6.8",
  "use_cache": true,
  "use_sliding_window": false,
  "vocab_size": 151936
}
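A few quantities follow directly from the config above; this quick sketch spells out the grouped-query attention geometry (16 query heads sharing 8 key/value heads):

```python
# Sketch: attention shapes implied by config.json.
hidden_size = 2048
num_attention_heads = 16
num_key_value_heads = 8
head_dim = 128

q_proj_out = num_attention_heads * head_dim     # width of the query projection
kv_proj_out = num_key_value_heads * head_dim    # width of each of K and V
queries_per_kv = num_attention_heads // num_key_value_heads

print(q_proj_out, kv_proj_out, queries_per_kv)  # 2048 1024 2
```

Each pair of query heads attends through a shared K/V head, which halves the KV-cache size relative to full multi-head attention.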
merges.txt
ADDED
The diff for this file is too large to render. See raw diff.
model.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:adfea0905cf252a57f85d9cbd52834db313a09515f9d7e1bcff4024ab7183c65
size 1349975746
special_tokens_map.json
ADDED
@@ -0,0 +1,31 @@
{
  "additional_special_tokens": [
    "<|im_start|>",
    "<|im_end|>",
    "<|object_ref_start|>",
    "<|object_ref_end|>",
    "<|box_start|>",
    "<|box_end|>",
    "<|quad_start|>",
    "<|quad_end|>",
    "<|vision_start|>",
    "<|vision_end|>",
    "<|vision_pad|>",
    "<|image_pad|>",
    "<|video_pad|>"
  ],
  "eos_token": {
    "content": "<|im_end|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<|vision_pad|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ae1a036a9837df9caeebb840d09d80e8feef0f6d2bae982970d1ad34f5946aff
size 11422753
tokenizer_config.json
ADDED
@@ -0,0 +1,244 @@
{
  "add_bos_token": false,
  "add_prefix_space": false,
  "added_tokens_decoder": {
    "151643": {"content": "<|endoftext|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151644": {"content": "<|im_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151645": {"content": "<|im_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151646": {"content": "<|object_ref_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151647": {"content": "<|object_ref_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151648": {"content": "<|box_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151649": {"content": "<|box_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151650": {"content": "<|quad_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151651": {"content": "<|quad_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151652": {"content": "<|vision_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151653": {"content": "<|vision_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151654": {"content": "<|vision_pad|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151655": {"content": "<|image_pad|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151656": {"content": "<|video_pad|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "151657": {"content": "<tool_call>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151658": {"content": "</tool_call>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151659": {"content": "<|fim_prefix|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151660": {"content": "<|fim_middle|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151661": {"content": "<|fim_suffix|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151662": {"content": "<|fim_pad|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151663": {"content": "<|repo_name|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151664": {"content": "<|file_sep|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151665": {"content": "<tool_response>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151666": {"content": "</tool_response>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151667": {"content": "<think>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false},
    "151668": {"content": "</think>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": false}
  },
  "additional_special_tokens": [
    "<|im_start|>",
    "<|im_end|>",
    "<|object_ref_start|>",
    "<|object_ref_end|>",
    "<|box_start|>",
    "<|box_end|>",
    "<|quad_start|>",
    "<|quad_end|>",
    "<|vision_start|>",
    "<|vision_end|>",
    "<|vision_pad|>",
    "<|image_pad|>",
    "<|video_pad|>"
  ],
  "bos_token": null,
  "clean_up_tokenization_spaces": false,
  "eos_token": "<|im_end|>",
  "errors": "replace",
  "extra_special_tokens": {},
  "max_length": 1024,
  "model_max_length": 40960,
  "pad_token": "<|vision_pad|>",
  "padding_side": "right",
  "split_special_tokens": false,
  "stride": 0,
  "tokenizer_class": "Qwen2Tokenizer",
  "truncation_side": "right",
  "truncation_strategy": "longest_first",
  "unk_token": null
}
vocab.json
ADDED
The diff for this file is too large to render. See raw diff.