fabian-maincode committed
Commit 525736b · verified · 1 Parent(s): 157caa8

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,189 @@
1
+ ---
2
+ license: apache-2.0
3
+ language:
4
+ - en
5
+ library_name: transformers
6
+ tags:
7
+ - code
8
+ - python
9
+ - maincoder
10
+ - code-generation
11
+ - reinforcement-learning
12
+ - mcpo
13
+ pipeline_tag: text-generation
14
+ base_model: Maincode/Maincoder-1B
15
+ ---
16
+ <img src="https://huggingface.co/datasets/Maincode/assets/resolve/e51154e034201be1a5dad0e9c8de31d8b9f17643/maincoder_logo.png" alt="" width="1250">
17
+
18
+ [**Maincoder-1B**](https://maincode.com/maincoder/) is a code-focused language model optimized for code generation and completion tasks. The model achieves strong performance on coding benchmarks while maintaining a compact size suitable for local deployment.
19
+
20
+ # Key Features
21
+
22
+ - **Code Generation**: Optimized for Python code completion and generation tasks.
23
+ - **Compact Size**: 1 billion parameters, lightweight enough to run on consumer hardware.
24
+ - **Deep Architecture**: Modern transformer architecture with RoPE embeddings, grouped-query attention, QK normalization and high depth-to-width ratio.
25
+ - **Advanced Data Mixing**: Pre-trained and mid-trained on custom data mixes developed for high-performance coding.
26
+ - **MCPO Algorithm**: Fine-tuned with a specialised reinforcement-learning policy-optimisation algorithm to improve training stability and accelerate convergence.
27
+ - **SOTA Performance**: State-of-the-art performance on Python coding benchmarks HumanEval, HumanEval+ and MBPP+.
28
+
29
+ # Benchmark Results
30
+
31
+ <img src="https://huggingface.co/datasets/Maincode/assets/resolve/main/performance_h.png" alt="Benchmark Performance Across Baseline LLMs" width="1050">
32
+
33
+ | Model | HumanEval | HumanEval+ | MBPP+ | MMLU | GSM8K |
34
+ |---|---:|---:|---:|---:|---:|
35
+ | [Maincode/Maincoder-1B](https://huggingface.co/Maincode/Maincoder-1B) | **0.7622** | **0.7256** | **0.7090** | 0.3054 | 0.2976 |
36
+ | [deepseek-ai/deepseek-coder-1.3b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-instruct) | 0.5610 | 0.5305 | 0.6217 | 0.2705 | 0.0413 |
37
+ | [HuggingFaceTB/SmolLM3-3B](https://huggingface.co/HuggingFaceTB/SmolLM3-3B) | 0.5366 | 0.5000 | 0.6799 | **0.5928** | 0.5505 |
38
+ | [Qwen/Qwen2.5-Coder-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B-Instruct) | 0.4634 | 0.4451 | 0.6561 | 0.4984 | 0.4944 |
39
+ | [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) | 0.4024 | 0.3780 | 0.5582 | 0.5571 | **0.6865** |
40
+
41
+ # Model Overview
42
+
43
+ Maincoder uses a modern transformer decoder architecture with:
44
+
45
+ - **Rotary Position Embeddings**: With a theta (base period) of 1,000,000.
46
+ - **RMSNorm**: Pre-normalization for stable training.
47
+ - **Grouped Query Attention**: 4:1 ratio of query to key-value heads.
48
+ - **QK Normalization**: RMSNorm applied to attention queries and keys.
49
+ - **SwiGLU MLP**: Gated linear units with SiLU activation.
50
+
51
+ | Attribute | Value |
52
+ |-----------|-------|
53
+ | Parameters | 1B |
54
+ | Hidden Size | 1536 |
55
+ | Layers | 32 |
56
+ | Attention Heads | 16 (4 KV heads) |
57
+ | Head Dimension | 96 |
58
+ | Vocabulary Size | 151,936 |
59
+ | Context Length | 2,048 |
60
+ | Precision | bfloat16 |
61
+
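+ The attribute table above can be cross-checked against the `config.json` shipped in this repo. A minimal inspection sketch (illustrative only; the field names are the actual config keys, and the commented values are those listed in the table):
+
+ ```python
+ from transformers import AutoConfig
+
+ # Inspect the architecture attributes listed in the table above
+ config = AutoConfig.from_pretrained("Maincode/Maincoder-1B", trust_remote_code=True)
+ print(config.hidden_size)                # 1536
+ print(config.num_hidden_layers)          # 32
+ print(config.num_attention_heads)        # 16
+ print(config.num_key_value_heads)        # 4  -> 4:1 grouped-query attention
+ print(config.head_dim)                   # 96
+ print(config.rope_theta)                 # 1000000.0
+ print(config.max_position_embeddings)    # 2048
+ ```
+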
62
+ # Usage
63
+
64
+ ### Installation
65
+
66
+ ```bash
67
+ pip install transformers torch
68
+ ```
69
+
70
+ ### Quick Start
71
+
72
+ ```python
73
+ from transformers import AutoModelForCausalLM, AutoTokenizer
74
+
75
+ model = AutoModelForCausalLM.from_pretrained(
76
+ "Maincode/Maincoder-1B",
77
+ torch_dtype="auto",
78
+ device_map="auto",
79
+ trust_remote_code=True,
80
+ )
81
+ tokenizer = AutoTokenizer.from_pretrained(
82
+ "Maincode/Maincoder-1B",
83
+ trust_remote_code=True,
84
+ )
85
+
86
+ # Code completion example
87
+ prompt = '''def fibonacci(n: int) -> int:
88
+ """Return the n-th Fibonacci number."""
89
+ '''
90
+
91
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
92
+ outputs = model.generate(
93
+ **inputs,
94
+ max_new_tokens=256,
95
+ temperature=0.2,
96
+ do_sample=True,
97
+ )
98
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
99
+ ```
100
+
101
+ ### Code Completion
102
+
103
+ ```python
104
+ # Function completion
105
+ prompt = '''def quicksort(arr: list) -> list:
106
+ """Sort a list using the quicksort algorithm."""
107
+ '''
108
+
109
+ # Class completion
110
+ prompt = '''class BinarySearchTree:
111
+ """A binary search tree implementation."""
112
+
113
+ def __init__(self):
114
+ '''
115
+
116
+ # Algorithm implementation
117
+ prompt = '''def dijkstra(graph: dict, start: str, end: str) -> tuple:
118
+ """Find the shortest path using Dijkstra's algorithm.
119
+
120
+ Args:
121
+ graph: Adjacency list representation of the graph
122
+ start: Starting node
123
+ end: Target node
124
+
125
+ Returns:
126
+ Tuple of (distance, path)
127
+ """
128
+ '''
129
+ ```
130
+
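+ Any of these prompts can be run through the same `generate` call shown in Quick Start. A small helper sketch (the `complete` function is illustrative and reuses the `model` and `tokenizer` objects loaded above):
+
+ ```python
+ def complete(prompt: str, max_new_tokens: int = 256) -> str:
+     """Generate a completion for a code prompt using the already-loaded model."""
+     inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+     outputs = model.generate(
+         **inputs,
+         max_new_tokens=max_new_tokens,
+         temperature=0.2,
+         do_sample=True,
+     )
+     return tokenizer.decode(outputs[0], skip_special_tokens=True)
+
+ print(complete(prompt))
+ ```
+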
131
+ # Additional Notes
132
+
133
+ ## Reproducibility
134
+
135
+ <details>
136
+ <summary>Model evaluations were run on 8 AMD MI355X GPUs using the <a href="https://github.com/EleutherAI/lm-evaluation-harness">EleutherAI lm-evaluation-harness</a>.</summary>
137
+
138
+ ```bash
139
+ docker run --rm -it \
140
+ --device=/dev/kfd --device=/dev/dri --group-add=video \
141
+ --ipc=host --security-opt seccomp=unconfined \
142
+ -v $(pwd):/workspace -w /workspace \
143
+ -e HF_TOKEN \
144
+ -e PYTHONHASHSEED=0 \
145
+ -e TORCH_DETERMINISTIC=1 \
146
+ -e ROCBLAS_ATOMICS_MODE="0" \
147
+ -e MIOPEN_FIND_MODE="1" \
148
+ -e CUBLAS_WORKSPACE_CONFIG=":4096:8" \
149
+ -e HF_ALLOW_CODE_EVAL="1" \
150
+ rocm/pytorch:rocm7.1.1_ubuntu24.04_py3.12_pytorch_release_2.9.1 \
151
+ bash -c 'pip install "lm_eval[hf]" && \
152
+ accelerate launch -m lm_eval \
153
+ --model hf --model_args "pretrained=Maincode/Maincoder-1B,trust_remote_code=True,dtype=float32" \
154
+ --tasks humaneval,humaneval_plus,mbpp_plus,mmlu,gsm8k \
155
+ --device cuda:0 --batch_size 32 --seed 42 \
156
+ --confirm_run_unsafe_code'
157
+ ```
158
+
159
+ </details>
160
+
161
+ ## Limitations
162
+
163
+ - Context length is limited to 2,048 tokens (see the input-truncation sketch below)
164
+ - Primarily optimized for Python; performance may vary on other languages
165
+ - May generate code with bugs or security issues; always review generated code
166
+
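+ Because the context window is 2,048 tokens, long prompts should be truncated before generation. A minimal sketch using standard tokenizer arguments (not a Maincoder-specific API):
+
+ ```python
+ inputs = tokenizer(
+     prompt,
+     return_tensors="pt",
+     truncation=True,  # drop tokens beyond the context window
+     max_length=model.config.max_position_embeddings,  # 2048 for Maincoder-1B
+ ).to(model.device)
+ ```
+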
167
+ <div style="margin-left:14px; border-left:4px solid #3b82f6; background:rgba(59,130,246,0.08); padding:8px 10px; border-radius:8px; font-size:0.92em; margin:10px 0;">
168
+ <strong>Disclaimer</strong>: This model has <strong>not</strong> undergone any alignment or safety tuning (e.g., RLHF/RLAIF, DPO, or safety fine-tuning). Outputs may be unsafe or biased. Please use appropriate safeguards and evaluate carefully for your use case.
169
+ </div>
170
+
171
+ ## License
172
+
173
+ This model is released under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
174
+
175
+ ## Citation
176
+
177
+ ```bibtex
178
+ @misc{maincoder2025,
179
+ title = {Maincoder-1B: A High-Performance 1B Parameter Coding Model},
180
+ author = {Maincode Team},
181
+ year = {2025},
182
+ organization = {Maincode},
183
+ howpublished = {\url{https://huggingface.co/Maincode/Maincoder-1B}}
184
+ }
185
+ ```
186
+
187
+ ## Contact
188
+
189
+ For questions, issues, or collaboration inquiries, please visit [Maincode](https://maincode.com).
__init__.py ADDED
@@ -0,0 +1,30 @@
1
+ # coding=utf-8
2
+ # Copyright 2025 Maincode. All rights reserved.
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+
16
+ from .configuration_maincoder import MaincoderConfig
17
+ from .modelling_maincoder import (
18
+ MaincoderForCausalLM,
19
+ MaincoderModel,
20
+ MaincoderPreTrainedModel,
21
+ )
22
+
23
+
24
+ __all__ = [
25
+ "MaincoderConfig",
26
+ "MaincoderPreTrainedModel",
27
+ "MaincoderModel",
28
+ "MaincoderForCausalLM",
29
+ ]
30
+
added_tokens.json ADDED
@@ -0,0 +1,24 @@
1
+ {
2
+ "</tool_call>": 151658,
3
+ "<tool_call>": 151657,
4
+ "<|box_end|>": 151649,
5
+ "<|box_start|>": 151648,
6
+ "<|endoftext|>": 151643,
7
+ "<|file_sep|>": 151664,
8
+ "<|fim_middle|>": 151660,
9
+ "<|fim_pad|>": 151662,
10
+ "<|fim_prefix|>": 151659,
11
+ "<|fim_suffix|>": 151661,
12
+ "<|im_end|>": 151645,
13
+ "<|im_start|>": 151644,
14
+ "<|image_pad|>": 151655,
15
+ "<|object_ref_end|>": 151647,
16
+ "<|object_ref_start|>": 151646,
17
+ "<|quad_end|>": 151651,
18
+ "<|quad_start|>": 151650,
19
+ "<|repo_name|>": 151663,
20
+ "<|video_pad|>": 151656,
21
+ "<|vision_end|>": 151653,
22
+ "<|vision_pad|>": 151654,
23
+ "<|vision_start|>": 151652
24
+ }
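
These added tokens follow the special-token layout of the Qwen2 tokenizer family (tokenizer_config.json below sets `"tokenizer_class": "Qwen2Tokenizer"`): ChatML markers, vision placeholders, tool-call tags, and FIM tokens. A minimal sketch to confirm the mapping once the tokenizer is loaded from this repo (IDs taken from the file above):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Maincode/Maincoder-1B", trust_remote_code=True)
# IDs below come from added_tokens.json
assert tok.convert_tokens_to_ids("<|endoftext|>") == 151643
assert tok.convert_tokens_to_ids("<|im_start|>") == 151644
assert tok.convert_tokens_to_ids("<|im_end|>") == 151645
```
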
chat_template.jinja ADDED
@@ -0,0 +1,54 @@
1
+ {%- if tools %}
2
+ {{- '<|im_start|>system\n' }}
3
+ {%- if messages[0]['role'] == 'system' %}
4
+ {{- messages[0]['content'] }}
5
+ {%- else %}
6
+ {{- 'You are a helpful assistant.' }}
7
+ {%- endif %}
8
+ {{- "\n\n# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
9
+ {%- for tool in tools %}
10
+ {{- "\n" }}
11
+ {{- tool | tojson }}
12
+ {%- endfor %}
13
+ {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
14
+ {%- else %}
15
+ {%- if messages[0]['role'] == 'system' %}
16
+ {{- '<|im_start|>system\n' + messages[0]['content'] + '<|im_end|>\n' }}
17
+ {%- else %}
18
+ {{- '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n' }}
19
+ {%- endif %}
20
+ {%- endif %}
21
+ {%- for message in messages %}
22
+ {%- if (message.role == "user") or (message.role == "system" and not loop.first) or (message.role == "assistant" and not message.tool_calls) %}
23
+ {{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
24
+ {%- elif message.role == "assistant" %}
25
+ {{- '<|im_start|>' + message.role }}
26
+ {%- if message.content %}
27
+ {{- '\n' + message.content }}
28
+ {%- endif %}
29
+ {%- for tool_call in message.tool_calls %}
30
+ {%- if tool_call.function is defined %}
31
+ {%- set tool_call = tool_call.function %}
32
+ {%- endif %}
33
+ {{- '\n<tool_call>\n{"name": "' }}
34
+ {{- tool_call.name }}
35
+ {{- '", "arguments": ' }}
36
+ {{- tool_call.arguments | tojson }}
37
+ {{- '}\n</tool_call>' }}
38
+ {%- endfor %}
39
+ {{- '<|im_end|>\n' }}
40
+ {%- elif message.role == "tool" %}
41
+ {%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != "tool") %}
42
+ {{- '<|im_start|>user' }}
43
+ {%- endif %}
44
+ {{- '\n<tool_response>\n' }}
45
+ {{- message.content }}
46
+ {{- '\n</tool_response>' }}
47
+ {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
48
+ {{- '<|im_end|>\n' }}
49
+ {%- endif %}
50
+ {%- endif %}
51
+ {%- endfor %}
52
+ {%- if add_generation_prompt %}
53
+ {{- '<|im_start|>assistant\n' }}
54
+ {%- endif %}
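
The template above renders ChatML-style turns, with an optional `<tools>` block for function signatures and `<tool_call>`/`<tool_response>` wrappers for tool traffic. A minimal sketch of applying it through the standard `apply_chat_template` API (assuming the tokenizer in this repo picks up chat_template.jinja):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Maincode/Maincoder-1B", trust_remote_code=True)
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a Python function that reverses a string."},
]
# Renders <|im_start|>/<|im_end|> turns and appends the assistant header
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```
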
config.json ADDED
@@ -0,0 +1,34 @@
1
+ {
2
+ "architectures": [
3
+ "MaincoderForCausalLM"
4
+ ],
5
+ "attention_dropout": 0.0,
6
+ "auto_map": {
7
+ "AutoConfig": "configuration_maincoder.MaincoderConfig",
8
+ "AutoModel": "modelling_maincoder.MaincoderForCausalLM",
9
+ "AutoModelForCausalLM": "modelling_maincoder.MaincoderForCausalLM"
10
+ },
11
+ "bos_token_id": null,
12
+ "eos_token_id": 151643,
13
+ "head_dim": 96,
14
+ "hidden_act": "silu",
15
+ "hidden_size": 1536,
16
+ "initializer_range": 0.02,
17
+ "intermediate_size": 4096,
18
+ "intermediate_size_mlp": 4096,
19
+ "max_position_embeddings": 2048,
20
+ "model_type": "maincoder",
21
+ "num_attention_heads": 16,
22
+ "num_hidden_layers": 32,
23
+ "num_key_value_heads": 4,
24
+ "pad_token_id": 151643,
25
+ "rms_norm_eps": 1e-05,
26
+ "rope_scaling": null,
27
+ "rope_theta": 1000000.0,
28
+ "tie_word_embeddings": true,
29
+ "torch_dtype": "bfloat16",
30
+ "transformers_version": "4.57.3",
31
+ "use_cache": true,
32
+ "use_qk_norm": true,
33
+ "vocab_size": 151936
34
+ }
configuration_maincoder.py ADDED
@@ -0,0 +1,141 @@
1
+ # coding=utf-8
2
+ # Copyright 2025 Maincode. All rights reserved.
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+ """Maincoder model configuration."""
16
+
17
+ from typing import Optional
18
+
19
+ from transformers.configuration_utils import PretrainedConfig
20
+ from transformers.utils import logging
21
+
22
+
23
+ logger = logging.get_logger(__name__)
24
+
25
+
26
+ class MaincoderConfig(PretrainedConfig):
27
+ r"""
28
+ Configuration class for Maincoder model.
29
+
30
+ Args:
31
+ vocab_size (`int`, *optional*, defaults to 151936):
32
+ Vocabulary size of the Maincoder model.
33
+ hidden_size (`int`, *optional*, defaults to 1536):
34
+ Dimension of the hidden representations.
35
+ intermediate_size (`int`, *optional*, defaults to 4096):
36
+ Dimension of the MLP intermediate representations.
37
+ intermediate_size_mlp (`int`, *optional*, defaults to 4096):
38
+ Dimension of the MLP representations (same as intermediate_size for dense models).
39
+ num_hidden_layers (`int`, *optional*, defaults to 32):
40
+ Number of hidden layers in the Transformer decoder.
41
+ num_attention_heads (`int`, *optional*, defaults to 16):
42
+ Number of attention heads for each attention layer.
43
+ num_key_value_heads (`int`, *optional*, defaults to 4):
44
+ Number of key-value heads for Grouped Query Attention (GQA).
45
+ head_dim (`int`, *optional*, defaults to 96):
46
+ Dimension of each attention head.
47
+ hidden_act (`str`, *optional*, defaults to `"silu"`):
48
+ The activation function in the MLP.
49
+ max_position_embeddings (`int`, *optional*, defaults to 2048):
50
+ Maximum sequence length the model can handle.
51
+ initializer_range (`float`, *optional*, defaults to 0.02):
52
+ Standard deviation for weight initialization.
53
+ rms_norm_eps (`float`, *optional*, defaults to 1e-05):
54
+ Epsilon for RMS normalization layers.
55
+ use_cache (`bool`, *optional*, defaults to `True`):
56
+ Whether to use key-value cache for generation.
57
+ pad_token_id (`int`, *optional*, defaults to 151643):
58
+ Padding token id.
59
+ bos_token_id (`int`, *optional*):
60
+ Beginning of sequence token id.
61
+ eos_token_id (`int`, *optional*, defaults to 151643):
62
+ End of sequence token id.
63
+ tie_word_embeddings (`bool`, *optional*, defaults to `True`):
64
+ Whether to tie input and output embeddings.
65
+ rope_theta (`float`, *optional*, defaults to 1000000.0):
66
+ Base period for RoPE embeddings.
67
+ rope_scaling (`Dict`, *optional*):
68
+ RoPE scaling configuration for extended context.
69
+ attention_dropout (`float`, *optional*, defaults to 0.0):
70
+ Dropout probability for attention weights.
71
+ use_qk_norm (`bool`, *optional*, defaults to `True`):
72
+ Whether to apply RMS normalization to query and key.
73
+
74
+ Example:
75
+ ```python
76
+ >>> from configuration_maincoder import MaincoderConfig
77
+ >>> from modelling_maincoder import MaincoderForCausalLM
78
+
79
+ >>> config = MaincoderConfig()
80
+ >>> model = MaincoderForCausalLM(config)
81
+ ```
82
+ """
83
+
84
+ model_type = "maincoder"
85
+ keys_to_ignore_at_inference = ["past_key_values"]
86
+
87
+ def __init__(
88
+ self,
89
+ vocab_size: int = 151936,
90
+ hidden_size: int = 1536,
91
+ intermediate_size: int = 4096,
92
+ intermediate_size_mlp: int = 4096,
93
+ num_hidden_layers: int = 32,
94
+ num_attention_heads: int = 16,
95
+ num_key_value_heads: Optional[int] = 4,
96
+ head_dim: Optional[int] = 96,
97
+ hidden_act: str = "silu",
98
+ max_position_embeddings: int = 2048,
99
+ initializer_range: float = 0.02,
100
+ rms_norm_eps: float = 1e-5,
101
+ use_cache: bool = True,
102
+ pad_token_id: Optional[int] = 151643,
103
+ bos_token_id: Optional[int] = None,
104
+ eos_token_id: int = 151643,
105
+ tie_word_embeddings: bool = True,
106
+ rope_theta: float = 1000000.0,
107
+ rope_scaling: Optional[dict] = None,
108
+ attention_dropout: float = 0.0,
109
+ use_qk_norm: bool = True,
110
+ **kwargs,
111
+ ):
112
+ self.vocab_size = vocab_size
113
+ self.hidden_size = hidden_size
114
+ self.intermediate_size = intermediate_size
115
+ self.intermediate_size_mlp = intermediate_size_mlp
116
+ self.num_hidden_layers = num_hidden_layers
117
+ self.num_attention_heads = num_attention_heads
118
+ self.max_position_embeddings = max_position_embeddings
119
+ self.initializer_range = initializer_range
120
+ self.rms_norm_eps = rms_norm_eps
121
+ self.use_cache = use_cache
122
+ self.rope_theta = rope_theta
123
+ self.rope_scaling = rope_scaling
124
+ self.attention_dropout = attention_dropout
125
+ self.use_qk_norm = use_qk_norm
126
+ self.hidden_act = hidden_act
127
+
128
+ # GQA configuration
129
+ self.num_key_value_heads = num_key_value_heads if num_key_value_heads is not None else num_attention_heads
130
+ self.head_dim = head_dim if head_dim is not None else self.hidden_size // self.num_attention_heads
131
+
132
+ super().__init__(
133
+ pad_token_id=pad_token_id,
134
+ bos_token_id=bos_token_id,
135
+ eos_token_id=eos_token_id,
136
+ tie_word_embeddings=tie_word_embeddings,
137
+ **kwargs,
138
+ )
139
+
140
+
141
+ __all__ = ["MaincoderConfig"]
generation_config.json ADDED
@@ -0,0 +1,11 @@
1
+ {
2
+ "_from_model_config": true,
3
+ "eos_token_id": [
4
+ 151643,
5
+ 128001,
6
+ 128008,
7
+ 128009
8
+ ],
9
+ "pad_token_id": 151643,
10
+ "transformers_version": "4.57.3"
11
+ }
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fde6820b5be360f5ebc2d3136b3ced4d0c11976c58a4770fe61b8f11912cfac9
3
+ size 2052447608
modelling_maincoder.py ADDED
@@ -0,0 +1,487 @@
1
+ # coding=utf-8
2
+ # Copyright 2025 Maincode. All rights reserved.
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+ """Maincoder model implementation."""
16
+
17
+ from typing import Callable, Optional, Union
18
+
19
+ import torch
20
+ import torch.nn as nn
21
+
22
+ from transformers.activations import ACT2FN
23
+ from transformers.cache_utils import Cache, DynamicCache
24
+ from transformers.generation import GenerationMixin
25
+ from transformers.masking_utils import create_causal_mask
26
+ from transformers.modeling_flash_attention_utils import FlashAttentionKwargs
27
+ from transformers.modeling_layers import GradientCheckpointingLayer
28
+ from transformers.modeling_outputs import BaseModelOutputWithPast, CausalLMOutputWithPast
29
+ from transformers.modeling_rope_utils import ROPE_INIT_FUNCTIONS, dynamic_rope_update
30
+ from transformers.modeling_utils import ALL_ATTENTION_FUNCTIONS, PreTrainedModel
31
+ from transformers.processing_utils import Unpack
32
+ from transformers.utils import TransformersKwargs, auto_docstring, can_return_tuple, logging
33
+
34
+ from .configuration_maincoder import MaincoderConfig
35
+
36
+
37
+ logger = logging.get_logger(__name__)
38
+
39
+
40
+ class MaincoderRMSNorm(nn.Module):
41
+ """RMSNorm implementation equivalent to T5LayerNorm."""
42
+
43
+ def __init__(self, hidden_size, eps=1e-5):
44
+ """
45
+ MatildaPlusRMSNorm is equivalent to T5LayerNorm
46
+ """
47
+ super().__init__()
48
+ self.eps = eps
49
+ self.weight = nn.Parameter(torch.ones(hidden_size))
50
+
51
+ def _norm(self, x):
52
+ return x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps)
53
+
54
+ def forward(self, x):
55
+ output = self._norm(x.float()).type_as(x)
56
+ return output * self.weight
57
+
58
+ def extra_repr(self):
59
+ return f"{tuple(self.weight.shape)}, eps={self.eps}"
60
+
61
+
62
+ class MaincoderMLP(nn.Module):
63
+ """SwiGLU-style MLP."""
64
+
65
+ def __init__(self, config: MaincoderConfig):
66
+ super().__init__()
67
+ self.hidden_size = config.hidden_size
68
+ self.intermediate_size = config.intermediate_size_mlp
69
+
70
+ self.gate_proj = nn.Linear(self.hidden_size, self.intermediate_size, bias=False)
71
+ self.up_proj = nn.Linear(self.hidden_size, self.intermediate_size, bias=False)
72
+ self.down_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=False)
73
+ self.act_fn = ACT2FN[config.hidden_act]
74
+
75
+ def forward(self, x: torch.Tensor) -> torch.Tensor:
76
+ return self.down_proj(self.act_fn(self.gate_proj(x)) * self.up_proj(x))
77
+
78
+
79
+ class MaincoderRotaryEmbedding(nn.Module):
80
+ """Rotary Position Embedding."""
81
+
82
+ def __init__(self, config: MaincoderConfig, device=None):
83
+ super().__init__()
84
+ self.rope_type = "llama3" if config.rope_scaling is not None else "default"
85
+ self.config = config
86
+ self.rope_init_fn = ROPE_INIT_FUNCTIONS[self.rope_type]
87
+
88
+ inv_freq, self.attention_scaling = self.rope_init_fn(self.config, device)
89
+ self.register_buffer("inv_freq", inv_freq, persistent=False)
90
+
91
+ @torch.no_grad()
92
+ @dynamic_rope_update
93
+ def forward(self, x: torch.Tensor, position_ids: torch.Tensor) -> torch.Tensor:
94
+ inv_freq_expanded = self.inv_freq[None, :, None].float().expand(position_ids.shape[0], -1, 1)
95
+ position_ids_expanded = position_ids[:, None, :].float()
96
+
97
+ device_type = x.device.type if isinstance(x.device.type, str) and x.device.type != "mps" else "cpu"
98
+ with torch.autocast(device_type=device_type, enabled=False):
99
+ freqs = (inv_freq_expanded.to(x.device) @ position_ids_expanded).transpose(1, 2)
100
+ freqs_cis = torch.polar(torch.ones_like(freqs), freqs)
101
+ freqs_cis = freqs_cis * self.attention_scaling
102
+
103
+ return freqs_cis
104
+
105
+
106
+ def apply_rotary_emb(
107
+ xq: torch.Tensor,
108
+ xk: torch.Tensor,
109
+ freqs_cis: torch.Tensor,
110
+ ) -> tuple[torch.Tensor, torch.Tensor]:
111
+ """Apply rotary embeddings to query and key tensors."""
112
+ xq_ = torch.view_as_complex(xq.float().reshape(*xq.shape[:-1], -1, 2))
113
+ xk_ = torch.view_as_complex(xk.float().reshape(*xk.shape[:-1], -1, 2))
114
+
115
+ # Broadcast freqs_cis
116
+ freqs_cis = freqs_cis[:, :, None, :]
117
+
118
+ xq_out = torch.view_as_real(xq_ * freqs_cis).flatten(3)
119
+ xk_out = torch.view_as_real(xk_ * freqs_cis).flatten(3)
120
+
121
+ return xq_out.type_as(xq), xk_out.type_as(xk)
122
+
123
+
124
+ def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
125
+ """Repeat key/value heads to match query heads for GQA."""
126
+ if n_rep == 1:
127
+ return hidden_states
128
+ batch, num_kv_heads, slen, head_dim = hidden_states.shape
129
+ hidden_states = hidden_states[:, :, None, :, :].expand(batch, num_kv_heads, n_rep, slen, head_dim)
130
+ return hidden_states.reshape(batch, num_kv_heads * n_rep, slen, head_dim)
131
+
132
+
133
+ def eager_attention_forward(
134
+ module: nn.Module,
135
+ query: torch.Tensor,
136
+ key: torch.Tensor,
137
+ value: torch.Tensor,
138
+ attention_mask: Optional[torch.Tensor],
139
+ scaling: float,
140
+ dropout: float = 0.0,
141
+ **kwargs,
142
+ ) -> tuple[torch.Tensor, torch.Tensor]:
143
+ """Eager attention implementation."""
144
+ key_states = repeat_kv(key, module.num_key_value_groups)
145
+ value_states = repeat_kv(value, module.num_key_value_groups)
146
+
147
+ attn_weights = torch.matmul(query, key_states.transpose(2, 3)) * scaling
148
+
149
+ if attention_mask is not None:
150
+ attn_weights = attn_weights + attention_mask[:, :, :, : key_states.shape[-2]]
151
+
152
+ attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query.dtype)
153
+ attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training)
154
+
155
+ attn_output = torch.matmul(attn_weights, value_states)
156
+ attn_output = attn_output.transpose(1, 2).contiguous()
157
+
158
+ return attn_output, attn_weights
159
+
160
+
161
+ class MaincoderAttention(nn.Module):
162
+ """Multi-headed attention with Grouped Query Attention (GQA) and RoPE."""
163
+
164
+ def __init__(self, config: MaincoderConfig, layer_idx: int):
165
+ super().__init__()
166
+ self.config = config
167
+ self.layer_idx = layer_idx
168
+ self.head_dim = config.head_dim
169
+ self.num_attention_heads = config.num_attention_heads
170
+ self.num_key_value_heads = config.num_key_value_heads
171
+ self.num_key_value_groups = self.num_attention_heads // self.num_key_value_heads
172
+ self.scaling = self.head_dim**-0.5
173
+ self.attention_dropout = config.attention_dropout
174
+
175
+ self.q_proj = nn.Linear(config.hidden_size, self.num_attention_heads * self.head_dim, bias=False)
176
+ self.k_proj = nn.Linear(config.hidden_size, self.num_key_value_heads * self.head_dim, bias=False)
177
+ self.v_proj = nn.Linear(config.hidden_size, self.num_key_value_heads * self.head_dim, bias=False)
178
+ self.o_proj = nn.Linear(self.num_attention_heads * self.head_dim, config.hidden_size, bias=False)
179
+
180
+ # QK normalization
181
+ if config.use_qk_norm:
182
+ self.q_norm = MaincoderRMSNorm(self.head_dim, eps=config.rms_norm_eps)
183
+ self.k_norm = MaincoderRMSNorm(self.head_dim, eps=config.rms_norm_eps)
184
+
185
+ def forward(
186
+ self,
187
+ hidden_states: torch.Tensor,
188
+ position_embeddings: torch.Tensor,
189
+ attention_mask: Optional[torch.Tensor] = None,
190
+ past_key_values: Optional[Cache] = None,
191
+ cache_position: Optional[torch.LongTensor] = None,
192
+ **kwargs: Unpack[FlashAttentionKwargs],
193
+ ) -> tuple[torch.Tensor, Optional[torch.Tensor]]:
194
+ batch_size, seq_len, _ = hidden_states.shape
195
+
196
+ query_states = self.q_proj(hidden_states).view(batch_size, seq_len, self.num_attention_heads, self.head_dim)
197
+ key_states = self.k_proj(hidden_states).view(batch_size, seq_len, self.num_key_value_heads, self.head_dim)
198
+ value_states = self.v_proj(hidden_states).view(batch_size, seq_len, self.num_key_value_heads, self.head_dim)
199
+
200
+ # Apply RoPE
201
+ query_states, key_states = apply_rotary_emb(query_states, key_states, position_embeddings)
202
+
203
+ # Apply QK normalization
204
+ if hasattr(self, "q_norm"):
205
+ query_states = self.q_norm(query_states)
206
+ key_states = self.k_norm(key_states)
207
+
208
+ # Transpose for attention: (batch, heads, seq, head_dim)
209
+ query_states = query_states.transpose(1, 2)
210
+ key_states = key_states.transpose(1, 2)
211
+ value_states = value_states.transpose(1, 2)
212
+
213
+ # Update KV cache
214
+ if past_key_values is not None:
215
+ cache_kwargs = {"cache_position": cache_position}
216
+ key_states, value_states = past_key_values.update(key_states, value_states, self.layer_idx, cache_kwargs)
217
+
218
+ # Attention
219
+ attention_fn: Callable = eager_attention_forward
220
+ if self.config._attn_implementation != "eager":
221
+ attention_fn = ALL_ATTENTION_FUNCTIONS[self.config._attn_implementation]
222
+
223
+ attn_output, attn_weights = attention_fn(
224
+ self,
225
+ query_states,
226
+ key_states,
227
+ value_states,
228
+ attention_mask,
229
+ dropout=0.0 if not self.training else self.attention_dropout,
230
+ scaling=self.scaling,
231
+ **kwargs,
232
+ )
233
+
234
+ attn_output = attn_output.reshape(batch_size, seq_len, -1)
235
+ attn_output = self.o_proj(attn_output)
236
+
237
+ return attn_output, attn_weights
238
+
239
+
240
+ class MaincoderDecoderLayer(GradientCheckpointingLayer):
241
+ """Transformer decoder layer with pre-norm architecture."""
242
+
243
+ def __init__(self, config: MaincoderConfig, layer_idx: int):
244
+ super().__init__()
245
+ self.self_attn = MaincoderAttention(config, layer_idx)
246
+ self.feed_forward = MaincoderMLP(config)
247
+ self.input_layernorm = MaincoderRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
248
+ self.post_attention_layernorm = MaincoderRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
249
+
250
+ def forward(
251
+ self,
252
+ hidden_states: torch.Tensor,
253
+ attention_mask: Optional[torch.Tensor] = None,
254
+ position_embeddings: Optional[torch.Tensor] = None,
255
+ past_key_values: Optional[Cache] = None,
256
+ cache_position: Optional[torch.LongTensor] = None,
257
+ **kwargs: Unpack[FlashAttentionKwargs],
258
+ ) -> torch.Tensor:
259
+ # Self Attention
260
+ residual = hidden_states
261
+ hidden_states = self.input_layernorm(hidden_states)
262
+ hidden_states, _ = self.self_attn(
263
+ hidden_states=hidden_states,
264
+ position_embeddings=position_embeddings,
265
+ attention_mask=attention_mask,
266
+ past_key_values=past_key_values,
267
+ cache_position=cache_position,
268
+ **kwargs,
269
+ )
270
+ hidden_states = residual + hidden_states
271
+
272
+ # Feed Forward
273
+ residual = hidden_states
274
+ hidden_states = self.post_attention_layernorm(hidden_states)
275
+ hidden_states = self.feed_forward(hidden_states)
276
+ hidden_states = residual + hidden_states
277
+
278
+ return hidden_states
279
+
280
+
281
+ @auto_docstring
282
+ class MaincoderPreTrainedModel(PreTrainedModel):
283
+ """Base class for Maincoder models."""
284
+
285
+ config_class = MaincoderConfig
286
+ base_model_prefix = "model"
287
+ supports_gradient_checkpointing = True
288
+ _no_split_modules = ["MaincoderDecoderLayer"]
289
+ _skip_keys_device_placement = ["past_key_values"]
290
+ _supports_sdpa = True
291
+ _supports_flex_attn = True
292
+
293
+ def _init_weights(self, module: nn.Module):
294
+ std = self.config.initializer_range
295
+ if isinstance(module, nn.Linear):
296
+ module.weight.data.normal_(mean=0.0, std=std)
297
+ if module.bias is not None:
298
+ module.bias.data.zero_()
299
+ elif isinstance(module, nn.Embedding):
300
+ module.weight.data.normal_(mean=0.0, std=std)
301
+ if module.padding_idx is not None:
302
+ module.weight.data[module.padding_idx].zero_()
303
+ elif isinstance(module, MaincoderRMSNorm):
304
+ module.weight.data.fill_(1.0)
305
+
306
+
307
+ @auto_docstring
308
+ class MaincoderModel(MaincoderPreTrainedModel):
309
+ """Maincoder transformer model outputting raw hidden states."""
310
+
311
+ def __init__(self, config: MaincoderConfig):
312
+ super().__init__(config)
313
+ self.padding_idx = config.pad_token_id
314
+ self.vocab_size = config.vocab_size
315
+
316
+ self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
317
+ self.layers = nn.ModuleList(
318
+ [MaincoderDecoderLayer(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]
319
+ )
320
+ self.norm = MaincoderRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
321
+ self.rotary_emb = MaincoderRotaryEmbedding(config)
322
+
323
+ self.post_init()
324
+
325
+ @can_return_tuple
326
+ @auto_docstring
327
+ def forward(
328
+ self,
329
+ input_ids: Optional[torch.LongTensor] = None,
330
+ attention_mask: Optional[torch.Tensor] = None,
331
+ position_ids: Optional[torch.LongTensor] = None,
332
+ past_key_values: Optional[Cache] = None,
333
+ inputs_embeds: Optional[torch.FloatTensor] = None,
334
+ use_cache: Optional[bool] = None,
335
+ cache_position: Optional[torch.LongTensor] = None,
336
+ **kwargs: Unpack[TransformersKwargs],
337
+ ) -> Union[tuple, BaseModelOutputWithPast]:
338
+ if (input_ids is None) ^ (inputs_embeds is not None):
339
+ raise ValueError("You must specify exactly one of input_ids or inputs_embeds")
340
+
341
+ if inputs_embeds is None:
342
+ inputs_embeds = self.embed_tokens(input_ids)
343
+
344
+ if use_cache and past_key_values is None:
345
+ past_key_values = DynamicCache()
346
+
347
+ if cache_position is None:
348
+ past_seen_tokens = past_key_values.get_seq_length() if past_key_values is not None else 0
349
+ cache_position = torch.arange(
350
+ past_seen_tokens,
351
+ past_seen_tokens + inputs_embeds.shape[1],
352
+ device=inputs_embeds.device,
353
+ )
354
+
355
+ if position_ids is None:
356
+ position_ids = cache_position.unsqueeze(0)
357
+
358
+ # Create causal mask
359
+ causal_mask = create_causal_mask(
360
+ config=self.config,
361
+ input_embeds=inputs_embeds,
362
+ attention_mask=attention_mask,
363
+ cache_position=cache_position,
364
+ past_key_values=past_key_values,
365
+ )
366
+
367
+ # Position embeddings
368
+ position_embeddings = self.rotary_emb(inputs_embeds, position_ids)
369
+
370
+ hidden_states = inputs_embeds
371
+ for layer in self.layers:
372
+ hidden_states = layer(
373
+ hidden_states,
374
+ attention_mask=causal_mask,
375
+ position_embeddings=position_embeddings,
376
+ past_key_values=past_key_values,
377
+ cache_position=cache_position,
378
+ **kwargs,
379
+ )
380
+
381
+ hidden_states = self.norm(hidden_states)
382
+
383
+ return BaseModelOutputWithPast(
384
+ last_hidden_state=hidden_states,
385
+ past_key_values=past_key_values if use_cache else None,
386
+ )
387
+
388
+
389
+ class MaincoderForCausalLM(MaincoderPreTrainedModel, GenerationMixin):
390
+ """Maincoder model with a causal language modeling head."""
391
+
392
+ _tied_weights_keys = ["lm_head.weight"]
393
+
394
+ def __init__(self, config: MaincoderConfig):
395
+ super().__init__(config)
396
+ self.model = MaincoderModel(config)
397
+ self.vocab_size = config.vocab_size
398
+ self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
399
+
400
+ self.post_init()
401
+
402
+ def get_input_embeddings(self) -> nn.Embedding:
403
+ return self.model.embed_tokens
404
+
405
+ def set_input_embeddings(self, value: nn.Embedding):
406
+ self.model.embed_tokens = value
407
+
408
+ def get_output_embeddings(self) -> nn.Linear:
409
+ return self.lm_head
410
+
411
+ def set_output_embeddings(self, new_embeddings: nn.Linear):
412
+ self.lm_head = new_embeddings
413
+
414
+ @can_return_tuple
415
+ @auto_docstring
416
+ def forward(
417
+ self,
418
+ input_ids: Optional[torch.LongTensor] = None,
419
+ attention_mask: Optional[torch.Tensor] = None,
420
+ position_ids: Optional[torch.LongTensor] = None,
421
+ past_key_values: Optional[Cache] = None,
422
+ inputs_embeds: Optional[torch.FloatTensor] = None,
423
+ labels: Optional[torch.LongTensor] = None,
424
+ use_cache: Optional[bool] = None,
425
+ cache_position: Optional[torch.LongTensor] = None,
426
+ logits_to_keep: Union[int, torch.Tensor] = 0,
427
+ **kwargs: Unpack[TransformersKwargs],
428
+ ) -> Union[tuple, CausalLMOutputWithPast]:
429
+ r"""
430
+ labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
431
+ Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
432
+ config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
433
+ (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
434
+
435
+ Example:
436
+
437
+ ```python
438
+ >>> from transformers import AutoTokenizer
439
+ >>> from modelling_maincoder import MaincoderForCausalLM
440
+
441
+ >>> model = MaincoderForCausalLM.from_pretrained("maincoder/maincoder")
442
+ >>> tokenizer = AutoTokenizer.from_pretrained("maincoder/maincoder")
443
+
444
+ >>> prompt = "def hello_world():"
445
+ >>> inputs = tokenizer(prompt, return_tensors="pt")
446
+
447
+ >>> generate_ids = model.generate(inputs.input_ids, max_length=50)
448
+ >>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True)[0]
449
+ ```"""
450
+ outputs = self.model(
451
+ input_ids=input_ids,
452
+ attention_mask=attention_mask,
453
+ position_ids=position_ids,
454
+ past_key_values=past_key_values,
455
+ inputs_embeds=inputs_embeds,
456
+ use_cache=use_cache,
457
+ cache_position=cache_position,
458
+ **kwargs,
459
+ )
460
+
461
+ hidden_states = outputs.last_hidden_state
462
+
463
+ # Only compute logits for tokens we need
464
+ if isinstance(logits_to_keep, int) and logits_to_keep > 0:
465
+ hidden_states = hidden_states[:, -logits_to_keep:, :]
466
+
467
+ logits = self.lm_head(hidden_states)
468
+
469
+ loss = None
470
+ if labels is not None:
471
+ loss = self.loss_function(logits=logits, labels=labels, vocab_size=self.config.vocab_size, **kwargs)
472
+
473
+ return CausalLMOutputWithPast(
474
+ loss=loss,
475
+ logits=logits,
476
+ past_key_values=outputs.past_key_values,
477
+ hidden_states=outputs.hidden_states,
478
+ attentions=outputs.attentions,
479
+ )
480
+
481
+
482
+ __all__ = [
483
+ "MaincoderConfig",
484
+ "MaincoderPreTrainedModel",
485
+ "MaincoderModel",
486
+ "MaincoderForCausalLM",
487
+ ]
special_tokens_map.json ADDED
@@ -0,0 +1,31 @@
1
+ {
2
+ "additional_special_tokens": [
3
+ "<|im_start|>",
4
+ "<|im_end|>",
5
+ "<|object_ref_start|>",
6
+ "<|object_ref_end|>",
7
+ "<|box_start|>",
8
+ "<|box_end|>",
9
+ "<|quad_start|>",
10
+ "<|quad_end|>",
11
+ "<|vision_start|>",
12
+ "<|vision_end|>",
13
+ "<|vision_pad|>",
14
+ "<|image_pad|>",
15
+ "<|video_pad|>"
16
+ ],
17
+ "eos_token": {
18
+ "content": "<|endoftext|>",
19
+ "lstrip": false,
20
+ "normalized": false,
21
+ "rstrip": false,
22
+ "single_word": false
23
+ },
24
+ "pad_token": {
25
+ "content": "<|endoftext|>",
26
+ "lstrip": false,
27
+ "normalized": false,
28
+ "rstrip": false,
29
+ "single_word": false
30
+ }
31
+ }
tokenizer.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9c5ae00e602b8860cbd784ba82a8aa14e8feecec692e7076590d014d7b7fdafa
3
+ size 11421896
tokenizer_config.json ADDED
@@ -0,0 +1,207 @@
1
+ {
2
+ "add_bos_token": false,
3
+ "add_prefix_space": false,
4
+ "added_tokens_decoder": {
5
+ "151643": {
6
+ "content": "<|endoftext|>",
7
+ "lstrip": false,
8
+ "normalized": false,
9
+ "rstrip": false,
10
+ "single_word": false,
11
+ "special": true
12
+ },
13
+ "151644": {
14
+ "content": "<|im_start|>",
15
+ "lstrip": false,
16
+ "normalized": false,
17
+ "rstrip": false,
18
+ "single_word": false,
19
+ "special": true
20
+ },
21
+ "151645": {
22
+ "content": "<|im_end|>",
23
+ "lstrip": false,
24
+ "normalized": false,
25
+ "rstrip": false,
26
+ "single_word": false,
27
+ "special": true
28
+ },
29
+ "151646": {
30
+ "content": "<|object_ref_start|>",
31
+ "lstrip": false,
32
+ "normalized": false,
33
+ "rstrip": false,
34
+ "single_word": false,
35
+ "special": true
36
+ },
37
+ "151647": {
38
+ "content": "<|object_ref_end|>",
39
+ "lstrip": false,
40
+ "normalized": false,
41
+ "rstrip": false,
42
+ "single_word": false,
43
+ "special": true
44
+ },
45
+ "151648": {
46
+ "content": "<|box_start|>",
47
+ "lstrip": false,
48
+ "normalized": false,
49
+ "rstrip": false,
50
+ "single_word": false,
51
+ "special": true
52
+ },
53
+ "151649": {
54
+ "content": "<|box_end|>",
55
+ "lstrip": false,
56
+ "normalized": false,
57
+ "rstrip": false,
58
+ "single_word": false,
59
+ "special": true
60
+ },
61
+ "151650": {
62
+ "content": "<|quad_start|>",
63
+ "lstrip": false,
64
+ "normalized": false,
65
+ "rstrip": false,
66
+ "single_word": false,
67
+ "special": true
68
+ },
69
+ "151651": {
70
+ "content": "<|quad_end|>",
71
+ "lstrip": false,
72
+ "normalized": false,
73
+ "rstrip": false,
74
+ "single_word": false,
75
+ "special": true
76
+ },
77
+ "151652": {
78
+ "content": "<|vision_start|>",
79
+ "lstrip": false,
80
+ "normalized": false,
81
+ "rstrip": false,
82
+ "single_word": false,
83
+ "special": true
84
+ },
85
+ "151653": {
86
+ "content": "<|vision_end|>",
87
+ "lstrip": false,
88
+ "normalized": false,
89
+ "rstrip": false,
90
+ "single_word": false,
91
+ "special": true
92
+ },
93
+ "151654": {
94
+ "content": "<|vision_pad|>",
95
+ "lstrip": false,
96
+ "normalized": false,
97
+ "rstrip": false,
98
+ "single_word": false,
99
+ "special": true
100
+ },
101
+ "151655": {
102
+ "content": "<|image_pad|>",
103
+ "lstrip": false,
104
+ "normalized": false,
105
+ "rstrip": false,
106
+ "single_word": false,
107
+ "special": true
108
+ },
109
+ "151656": {
110
+ "content": "<|video_pad|>",
111
+ "lstrip": false,
112
+ "normalized": false,
113
+ "rstrip": false,
114
+ "single_word": false,
115
+ "special": true
116
+ },
117
+ "151657": {
118
+ "content": "<tool_call>",
119
+ "lstrip": false,
120
+ "normalized": false,
121
+ "rstrip": false,
122
+ "single_word": false,
123
+ "special": false
124
+ },
125
+ "151658": {
126
+ "content": "</tool_call>",
127
+ "lstrip": false,
128
+ "normalized": false,
129
+ "rstrip": false,
130
+ "single_word": false,
131
+ "special": false
132
+ },
133
+ "151659": {
134
+ "content": "<|fim_prefix|>",
135
+ "lstrip": false,
136
+ "normalized": false,
137
+ "rstrip": false,
138
+ "single_word": false,
139
+ "special": false
140
+ },
141
+ "151660": {
142
+ "content": "<|fim_middle|>",
143
+ "lstrip": false,
144
+ "normalized": false,
145
+ "rstrip": false,
146
+ "single_word": false,
147
+ "special": false
148
+ },
149
+ "151661": {
150
+ "content": "<|fim_suffix|>",
151
+ "lstrip": false,
152
+ "normalized": false,
153
+ "rstrip": false,
154
+ "single_word": false,
155
+ "special": false
156
+ },
157
+ "151662": {
158
+ "content": "<|fim_pad|>",
159
+ "lstrip": false,
160
+ "normalized": false,
161
+ "rstrip": false,
162
+ "single_word": false,
163
+ "special": false
164
+ },
165
+ "151663": {
166
+ "content": "<|repo_name|>",
167
+ "lstrip": false,
168
+ "normalized": false,
169
+ "rstrip": false,
170
+ "single_word": false,
171
+ "special": false
172
+ },
173
+ "151664": {
174
+ "content": "<|file_sep|>",
175
+ "lstrip": false,
176
+ "normalized": false,
177
+ "rstrip": false,
178
+ "single_word": false,
179
+ "special": false
180
+ }
181
+ },
182
+ "additional_special_tokens": [
183
+ "<|im_start|>",
184
+ "<|im_end|>",
185
+ "<|object_ref_start|>",
186
+ "<|object_ref_end|>",
187
+ "<|box_start|>",
188
+ "<|box_end|>",
189
+ "<|quad_start|>",
190
+ "<|quad_end|>",
191
+ "<|vision_start|>",
192
+ "<|vision_end|>",
193
+ "<|vision_pad|>",
194
+ "<|image_pad|>",
195
+ "<|video_pad|>"
196
+ ],
197
+ "bos_token": null,
198
+ "clean_up_tokenization_spaces": false,
199
+ "eos_token": "<|endoftext|>",
200
+ "errors": "replace",
201
+ "extra_special_tokens": {},
202
+ "model_max_length": 32768,
203
+ "pad_token": "<|endoftext|>",
204
+ "split_special_tokens": false,
205
+ "tokenizer_class": "Qwen2Tokenizer",
206
+ "unk_token": null
207
+ }
vocab.json ADDED
The diff for this file is too large to render. See raw diff