PursuitOfDataScience committed on
Commit 353cb75 · verified · 1 Parent(s): ebdd4a5

Upload Qwen3.5-0.8B-thinking (CoT-SFT on 0.5M-thinking)
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,165 @@
---
language:
- en
license: apache-2.0
base_model: Qwen/Qwen3.5-0.8B-Base
datasets:
- PursuitOfDataScience/0.5M-thinking
tags:
- qwen3.5
- chain-of-thought
- reasoning
- math
- sft
pipeline_tag: text-generation
---

# Qwen3.5-0.8B-thinking

A **Chain-of-Thought fine-tuned** version of [Qwen/Qwen3.5-0.8B-Base](https://huggingface.co/Qwen/Qwen3.5-0.8B-Base), trained to reason step by step inside `<think>` tags before producing a final answer.

---

## Model Details

| Attribute | Value |
|---|---|
| **Base model** | [Qwen/Qwen3.5-0.8B-Base](https://huggingface.co/Qwen/Qwen3.5-0.8B-Base) |
| **Architecture** | Qwen3_5ForCausalLM (hybrid linear / full attention) |
| **Parameters** | ~0.8B |
| **Context window** | 4096 tokens during training (the base config allows up to 262,144 positions) |
| **Hidden size** | 1024 |
| **Layers** | 24 |
| **Attention heads** | 8 (2 KV heads) |
| **Vocabulary** | 248,320 tokens |
| **Precision** | bfloat16 |

---

## Training Details

### Data
Fine-tuned on [PursuitOfDataScience/0.5M-thinking](https://huggingface.co/datasets/PursuitOfDataScience/0.5M-thinking), a dataset of ~500K examples with structured chain-of-thought reasoning wrapped in `<think>` / `</think>` tags, followed by a clean final answer.

After filtering out examples that exceed the 4096-token context window, **244,997 examples** were used for training.

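The length-filtering step described above can be sketched as follows. This is illustrative only: `filter_by_length` and the toy whitespace tokenizer stand in for the author's actual preprocessing code, which is not included in this repository.

```python
def filter_by_length(examples, tokenizer, max_len=4096):
    """Keep only examples whose tokenized length fits in the context window."""
    kept = []
    for ex in examples:
        if len(tokenizer.encode(ex)) <= max_len:
            kept.append(ex)
    return kept

class ToyTokenizer:
    """Toy whitespace tokenizer, just to make the sketch runnable."""
    def encode(self, text):
        return text.split()

examples = ["short example", "word " * 5000]  # second one exceeds 4096 tokens
print(len(filter_by_length(examples, ToyTokenizer())))  # 1
```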
### Procedure

The model was trained with **supervised fine-tuning (SFT)** using the Hugging Face `Trainer`:

| Hyperparameter | Value |
|---|---|
| Epochs | 1 |
| Per-device batch size | 4 |
| Gradient accumulation steps | 8 |
| **Effective batch size** | **32** |
| Learning rate | 2e-5 |
| LR schedule | Linear with warmup |
| Warmup steps | 100 |
| Max sequence length | 4096 |
| Total optimizer steps | 7,657 |
| Hardware | 1× H100 GPU |
| Precision | bfloat16 |
| Attention | SDPA (scaled dot-product attention) |

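As a sanity check, the effective batch size and total optimizer steps in the table follow directly from the other hyperparameters:

```python
import math

# Effective batch size = per-device batch × gradient accumulation × num GPUs.
per_device_batch = 4
grad_accum = 8
num_gpus = 1
effective_batch = per_device_batch * grad_accum * num_gpus
print(effective_batch)  # 32

# One epoch over the 244,997 filtered examples:
num_examples = 244_997
steps = math.ceil(num_examples / effective_batch)
print(steps)  # 7657
```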

**Prompt format used during training:**

```
user: <question>
assistant: <think>
<step-by-step reasoning>
</think>
<final answer>
```

The `<think>` tag is hardcoded into the prompt prefix, so the model always learns to emit structured reasoning before the answer.

**Label masking:** only the assistant response (starting after `<think>`) contributes to the cross-entropy loss; the prompt tokens are masked with `-100`.

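A minimal sketch of this label-masking scheme, using toy token IDs in place of real tokenizer output (the helper name is illustrative, not the author's training code):

```python
IGNORE_INDEX = -100  # PyTorch cross-entropy skips positions labeled -100

def build_labels(input_ids, prompt_len):
    """Copy the input IDs, then mask the first prompt_len positions so
    only the assistant response tokens contribute to the loss."""
    labels = list(input_ids)
    for i in range(prompt_len):
        labels[i] = IGNORE_INDEX
    return labels

# Toy example: 5 prompt tokens (through "<think>") + 4 response tokens.
input_ids = [101, 102, 103, 104, 105, 201, 202, 203, 204]
labels = build_labels(input_ids, prompt_len=5)
print(labels)  # [-100, -100, -100, -100, -100, 201, 202, 203, 204]
```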
---

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "PursuitOfDataScience/Qwen3.5-0.8B-thinking"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

question = "If Alice has 3 apples and buys 5 more, how many apples does she have?"

prompt = (
    "user: Solve this math problem step by step. "
    "Show your reasoning, then give the final answer after ####.\n\n"
    f"Question: {question}\n"
    "assistant: <think>\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=1024,
    temperature=0.6,
    top_p=0.9,
    do_sample=True,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

---

## GSM8K Benchmark Results (Pass@1)

Evaluated on the GSM8K test set (1,319 examples) with nucleus sampling (temperature=0.6, top_p=0.9, max_new_tokens=4096).

| Model | Examples | Accuracy |
|---|---|---|
| Qwen3.5-0.8B-Base (with `<think>`) | 1,319 | 58.23% |
| Qwen3.5-0.8B-Base (no `<think>`) | 1,319 | 51.40% |
| checkpoint-500 | 1,319 | 57.32% |
| checkpoint-1000 | 1,319 | 59.97% |
| checkpoint-1500 | 1,319 | 63.53% |
| checkpoint-2000 | 1,319 | 60.20% |
| checkpoint-2500 | 1,319 | 59.21% |
| checkpoint-3000 | 1,319 | 60.73% |
| checkpoint-3500 | 1,319 | 60.58% |
| checkpoint-4000 | 1,319 | 60.35% |
| checkpoint-4500 | 1,319 | 61.11% |
| checkpoint-5000 | 1,319 | 58.61% |
| checkpoint-5500 | 1,319 | 62.62% |
| checkpoint-6000 | 1,319 | 62.17% |
| checkpoint-6500 | 1,319 | 61.11% |
| checkpoint-7000 | 1,319 | TBD |
| checkpoint-7500 | 1,319 | TBD |
| checkpoint-7657 | 1,319 | TBD |
| **final_model** | 1,319 | TBD |

> Results for checkpoint-7000 through final_model are pending evaluation.
> This table will be updated once those runs complete.
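GSM8K accuracy is conventionally scored by comparing the number after the final `####` marker in the generation against the reference answer. The evaluation script is not part of this repository, so the routine below is an illustrative assumption about how that extraction might work, not the actual scorer:

```python
import re

def extract_answer(text):
    """Return the last number after the final '####' marker (commas stripped),
    or None if no number is found."""
    tail = text.split("####")[-1] if "####" in text else text
    numbers = re.findall(r"-?\d[\d,]*\.?\d*", tail)
    if not numbers:
        return None
    return numbers[-1].replace(",", "").rstrip(".")

generation = "<think>3 + 5 = 8</think>\nAlice has 8 apples. #### 8"
print(extract_answer(generation))  # 8
```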

---

## Acknowledgements

- Base model: [Qwen/Qwen3.5-0.8B-Base](https://huggingface.co/Qwen/Qwen3.5-0.8B-Base) by the Qwen Team (Alibaba Cloud)
- Training data: [PursuitOfDataScience/0.5M-thinking](https://huggingface.co/datasets/PursuitOfDataScience/0.5M-thinking)

---

## License

Apache 2.0, the same license as the base model.
chat_template.jinja ADDED
@@ -0,0 +1,154 @@
{%- set image_count = namespace(value=0) %}
{%- set video_count = namespace(value=0) %}
{%- macro render_content(content, do_vision_count, is_system_content=false) %}
{%- if content is string %}
{{- content }}
{%- elif content is iterable and content is not mapping %}
{%- for item in content %}
{%- if 'image' in item or 'image_url' in item or item.type == 'image' %}
{%- if is_system_content %}
{{- raise_exception('System message cannot contain images.') }}
{%- endif %}
{%- if do_vision_count %}
{%- set image_count.value = image_count.value + 1 %}
{%- endif %}
{%- if add_vision_id %}
{{- 'Picture ' ~ image_count.value ~ ': ' }}
{%- endif %}
{{- '<|vision_start|><|image_pad|><|vision_end|>' }}
{%- elif 'video' in item or item.type == 'video' %}
{%- if is_system_content %}
{{- raise_exception('System message cannot contain videos.') }}
{%- endif %}
{%- if do_vision_count %}
{%- set video_count.value = video_count.value + 1 %}
{%- endif %}
{%- if add_vision_id %}
{{- 'Video ' ~ video_count.value ~ ': ' }}
{%- endif %}
{{- '<|vision_start|><|video_pad|><|vision_end|>' }}
{%- elif 'text' in item %}
{{- item.text }}
{%- else %}
{{- raise_exception('Unexpected item type in content.') }}
{%- endif %}
{%- endfor %}
{%- elif content is none or content is undefined %}
{{- '' }}
{%- else %}
{{- raise_exception('Unexpected content type.') }}
{%- endif %}
{%- endmacro %}
{%- if not messages %}
{{- raise_exception('No messages provided.') }}
{%- endif %}
{%- if tools and tools is iterable and tools is not mapping %}
{{- '<|im_start|>system\n' }}
{{- "# Tools\n\nYou have access to the following functions:\n\n<tools>" }}
{%- for tool in tools %}
{{- "\n" }}
{{- tool | tojson }}
{%- endfor %}
{{- "\n</tools>" }}
{{- '\n\nIf you choose to call a function ONLY reply in the following format with NO suffix:\n\n<tool_call>\n<function=example_function_name>\n<parameter=example_parameter_1>\nvalue_1\n</parameter>\n<parameter=example_parameter_2>\nThis is the value for the second parameter\nthat can span\nmultiple lines\n</parameter>\n</function>\n</tool_call>\n\n<IMPORTANT>\nReminder:\n- Function calls MUST follow the specified format: an inner <function=...></function> block must be nested within <tool_call></tool_call> XML tags\n- Required parameters MUST be specified\n- You may provide optional reasoning for your function call in natural language BEFORE the function call, but NOT after\n- If there is no function call available, answer the question like normal with your current knowledge and do not tell the user about function calls\n</IMPORTANT>' }}
{%- if messages[0].role == 'system' %}
{%- set content = render_content(messages[0].content, false, true)|trim %}
{%- if content %}
{{- '\n\n' + content }}
{%- endif %}
{%- endif %}
{{- '<|im_end|>\n' }}
{%- else %}
{%- if messages[0].role == 'system' %}
{%- set content = render_content(messages[0].content, false, true)|trim %}
{{- '<|im_start|>system\n' + content + '<|im_end|>\n' }}
{%- endif %}
{%- endif %}
{%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}
{%- for message in messages[::-1] %}
{%- set index = (messages|length - 1) - loop.index0 %}
{%- if ns.multi_step_tool and message.role == "user" %}
{%- set content = render_content(message.content, false)|trim %}
{%- if not(content.startswith('<tool_response>') and content.endswith('</tool_response>')) %}
{%- set ns.multi_step_tool = false %}
{%- set ns.last_query_index = index %}
{%- endif %}
{%- endif %}
{%- endfor %}
{%- if ns.multi_step_tool %}
{{- raise_exception('No user query found in messages.') }}
{%- endif %}
{%- for message in messages %}
{%- set content = render_content(message.content, true)|trim %}
{%- if message.role == "system" %}
{%- if not loop.first %}
{{- raise_exception('System message must be at the beginning.') }}
{%- endif %}
{%- elif message.role == "user" %}
{{- '<|im_start|>' + message.role + '\n' + content + '<|im_end|>' + '\n' }}
{%- elif message.role == "assistant" %}
{%- set reasoning_content = '' %}
{%- if message.reasoning_content is string %}
{%- set reasoning_content = message.reasoning_content %}
{%- else %}
{%- if '</think>' in content %}
{%- set reasoning_content = content.split('</think>')[0].rstrip('\n').split('<think>')[-1].lstrip('\n') %}
{%- set content = content.split('</think>')[-1].lstrip('\n') %}
{%- endif %}
{%- endif %}
{%- set reasoning_content = reasoning_content|trim %}
{%- if loop.index0 > ns.last_query_index %}
{{- '<|im_start|>' + message.role + '\n<think>\n' + reasoning_content + '\n</think>\n\n' + content }}
{%- else %}
{{- '<|im_start|>' + message.role + '\n' + content }}
{%- endif %}
{%- if message.tool_calls and message.tool_calls is iterable and message.tool_calls is not mapping %}
{%- for tool_call in message.tool_calls %}
{%- if tool_call.function is defined %}
{%- set tool_call = tool_call.function %}
{%- endif %}
{%- if loop.first %}
{%- if content|trim %}
{{- '\n\n<tool_call>\n<function=' + tool_call.name + '>\n' }}
{%- else %}
{{- '<tool_call>\n<function=' + tool_call.name + '>\n' }}
{%- endif %}
{%- else %}
{{- '\n<tool_call>\n<function=' + tool_call.name + '>\n' }}
{%- endif %}
{%- if tool_call.arguments is defined %}
{%- for args_name, args_value in tool_call.arguments|items %}
{{- '<parameter=' + args_name + '>\n' }}
{%- set args_value = args_value | tojson | safe if args_value is mapping or (args_value is sequence and args_value is not string) else args_value | string %}
{{- args_value }}
{{- '\n</parameter>\n' }}
{%- endfor %}
{%- endif %}
{{- '</function>\n</tool_call>' }}
{%- endfor %}
{%- endif %}
{{- '<|im_end|>\n' }}
{%- elif message.role == "tool" %}
{%- if loop.previtem and loop.previtem.role != "tool" %}
{{- '<|im_start|>user' }}
{%- endif %}
{{- '\n<tool_response>\n' }}
{{- content }}
{{- '\n</tool_response>' }}
{%- if not loop.last and loop.nextitem.role != "tool" %}
{{- '<|im_end|>\n' }}
{%- elif loop.last %}
{{- '<|im_end|>\n' }}
{%- endif %}
{%- else %}
{{- raise_exception('Unexpected message role.') }}
{%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
{{- '<|im_start|>assistant\n' }}
{%- if enable_thinking is defined and enable_thinking is true %}
{{- '<think>\n' }}
{%- else %}
{{- '<think>\n\n</think>\n\n' }}
{%- endif %}
{%- endif %}
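The `add_generation_prompt` branch at the end of the template decides whether generation starts inside an open `<think>` block. Transcribed into plain Python for illustration (this is a sketch of the template's behavior, not code shipped with the model):

```python
def generation_prompt(enable_thinking):
    """Mirror of the template's add_generation_prompt branch: thinking mode
    leaves <think> open for the model to fill; otherwise an empty think
    block is pre-closed so the model answers directly."""
    out = "<|im_start|>assistant\n"
    if enable_thinking:
        out += "<think>\n"
    else:
        out += "<think>\n\n</think>\n\n"
    return out

print(repr(generation_prompt(True)))
print(repr(generation_prompt(False)))
```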
config.json ADDED
@@ -0,0 +1,75 @@
{
  "architectures": [
    "Qwen3_5ForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "attn_output_gate": true,
  "bos_token_id": null,
  "dtype": "bfloat16",
  "eos_token_id": 248044,
  "full_attention_interval": 4,
  "head_dim": 256,
  "hidden_act": "silu",
  "hidden_size": 1024,
  "initializer_range": 0.02,
  "intermediate_size": 3584,
  "layer_types": [
    "linear_attention",
    "linear_attention",
    "linear_attention",
    "full_attention",
    "linear_attention",
    "linear_attention",
    "linear_attention",
    "full_attention",
    "linear_attention",
    "linear_attention",
    "linear_attention",
    "full_attention",
    "linear_attention",
    "linear_attention",
    "linear_attention",
    "full_attention",
    "linear_attention",
    "linear_attention",
    "linear_attention",
    "full_attention",
    "linear_attention",
    "linear_attention",
    "linear_attention",
    "full_attention"
  ],
  "linear_conv_kernel_dim": 4,
  "linear_key_head_dim": 128,
  "linear_num_key_heads": 16,
  "linear_num_value_heads": 16,
  "linear_value_head_dim": 128,
  "mamba_ssm_dtype": "float32",
  "max_position_embeddings": 262144,
  "mlp_only_layers": [],
  "model_type": "qwen3_5_text",
  "mtp_num_hidden_layers": 1,
  "mtp_use_dedicated_embeddings": false,
  "num_attention_heads": 8,
  "num_hidden_layers": 24,
  "num_key_value_heads": 2,
  "pad_token_id": null,
  "partial_rotary_factor": 0.25,
  "rms_norm_eps": 1e-06,
  "rope_parameters": {
    "mrope_interleaved": true,
    "mrope_section": [
      11,
      11,
      10
    ],
    "partial_rotary_factor": 0.25,
    "rope_theta": 10000000,
    "rope_type": "default"
  },
  "tie_word_embeddings": true,
  "transformers_version": "5.3.0.dev0",
  "use_cache": false,
  "vocab_size": 248320
}
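The `layer_types` list is consistent with `full_attention_interval: 4` over 24 layers (every fourth layer uses full attention, the rest linear attention); a quick check:

```python
num_layers = 24
interval = 4  # full_attention_interval from the config

# Every interval-th layer (1-indexed) is full attention; the rest are linear.
layer_types = [
    "full_attention" if (i + 1) % interval == 0 else "linear_attention"
    for i in range(num_layers)
]
print(layer_types.count("full_attention"))  # 6
print(layer_types[3])  # full_attention
```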
generation_config.json ADDED
@@ -0,0 +1,6 @@
{
  "_from_model_config": true,
  "eos_token_id": 248044,
  "transformers_version": "5.3.0.dev0",
  "use_cache": true
}
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e4acb5c49ee552c9102b728e08e693827fdf2bf1bb5e6c5eedb516e192b1f167
size 1504827608
tokenizer.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:87a7830d63fcf43bf241c3c5242e96e62dd3fdc29224ca26fed8ea333db72de4
size 19989343
tokenizer_config.json ADDED
@@ -0,0 +1,31 @@
{
  "add_prefix_space": false,
  "audio_bos_token": "<|audio_start|>",
  "audio_eos_token": "<|audio_end|>",
  "audio_token": "<|audio_pad|>",
  "backend": "tokenizers",
  "bos_token": null,
  "clean_up_tokenization_spaces": false,
  "eos_token": "<|endoftext|>",
  "errors": "replace",
  "image_token": "<|image_pad|>",
  "is_local": true,
  "model_max_length": 262144,
  "model_specific_special_tokens": {
    "audio_bos_token": "<|audio_start|>",
    "audio_eos_token": "<|audio_end|>",
    "audio_token": "<|audio_pad|>",
    "image_token": "<|image_pad|>",
    "video_token": "<|video_pad|>",
    "vision_bos_token": "<|vision_start|>",
    "vision_eos_token": "<|vision_end|>"
  },
  "pad_token": "<|endoftext|>",
  "pretokenize_regex": "(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?[\\p{L}\\p{M}]+|\\p{N}| ?[^\\s\\p{L}\\p{M}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+",
  "split_special_tokens": false,
  "tokenizer_class": "TokenizersBackend",
  "unk_token": null,
  "video_token": "<|video_pad|>",
  "vision_bos_token": "<|vision_start|>",
  "vision_eos_token": "<|vision_end|>"
}
vocab.json ADDED
The diff for this file is too large to render. See raw diff