servantofares zhanghanxiao committed
Commit b6fc5a5 · 0 Parent(s)

Duplicate from inclusionAI/Ring-mini-2.0

Co-authored-by: zhanghanxiao <zhanghanxiao@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,35 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
LICENSE ADDED
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2025 inclusionAI
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
README.md ADDED
@@ -0,0 +1,113 @@
+ ---
+ base_model:
+ - inclusionAI/Ling-mini-base-2.0-20T
+ library_name: transformers
+ license: mit
+ pipeline_tag: text-generation
+ ---
+
+ # Ring-mini-2.0
+
+ <p align="center">
+ <img src="https://mdn.alipayobjects.com/huamei_qa8qxu/afts/img/A*4QxcQrBlTiAAAAAAQXAAAAgAemJ7AQ/original" width="100"/>
+ </p>
+
+ <p align="center">🤗 <a href="https://huggingface.co/inclusionAI">Hugging Face</a>&nbsp;&nbsp; | &nbsp;&nbsp;🤖 <a href="https://modelscope.cn/organization/inclusionAI">ModelScope</a>
+ | &nbsp;&nbsp;🐙 <a href="https://zenmux.ai/inclusionai/ring-mini-2.0">Experience Now</a></p>
+
+ This model is presented in the paper [Every Step Evolves: Scaling Reinforcement Learning for Trillion-Scale Thinking Model](https://huggingface.co/papers/2510.18855).
+
+ Today, we officially release Ring-mini-2.0, a high-performance reasoning-oriented MoE model deeply optimized on top of the Ling 2.0 architecture. With only 16B total parameters, of which 1.4B are activated per token, it achieves comprehensive reasoning capabilities comparable to dense models below the 10B scale. It excels particularly in logical reasoning, code generation, and mathematical tasks, while supporting 128K long-context processing and 300+ tokens/s high-speed generation.
+
+ ## Enhanced Reasoning: Joint Training with SFT + RLVR + RLHF
+ Built upon Ling-mini-2.0-base, Ring-mini-2.0 is further trained with Long-CoT SFT, a more stable and continuous RLVR stage, and joint RLHF optimization, significantly improving the stability and generalization of complex reasoning. On multiple challenging benchmarks (LiveCodeBench, AIME 2025, GPQA, ARC-AGI-v1, etc.), it outperforms dense models below 10B and even rivals larger MoE models (e.g., gpt-oss-20B-medium) at comparable output lengths, excelling particularly in logical reasoning.
+
+ <p align="center">
+ <img src="https://mdn.alipayobjects.com/huamei_d2byvp/afts/img/O2YKQqkdEvAAAAAASzAAAAgADod9AQFr/original" width="1000"/>
+ </p>
+
+ ## High Sparsity, High-Speed Generation
+ Inheriting the efficient MoE design of the Ling 2.0 series, Ring-mini-2.0 activates only 1.4B parameters yet achieves performance equivalent to 7–8B dense models, thanks to architectural optimizations such as a 1/32 expert activation ratio and MTP (multi-token prediction) layers. Owing to this low-activation, high-sparsity design, Ring-mini-2.0 delivers a throughput of 300+ tokens/s when deployed on an H20 GPU; with the Expert Dual Streaming inference optimization, this can be further boosted to 500+ tokens/s, significantly reducing inference costs for high-concurrency thinking-model scenarios. Additionally, with YaRN extrapolation it supports 128K long-context processing, achieving a relative speedup of up to 7x in long-output scenarios (a configuration sketch follows the figures below).
+ <p align="center">
+ <img src="https://mdn.alipayobjects.com/huamei_d2byvp/afts/img/gjJKSpFVphEAAAAAgdAAAAgADod9AQFr/original" width="1000"/>
+ </p>
+
+ <p align="center">
+ <img src="https://mdn.alipayobjects.com/huamei_d2byvp/afts/img/o-vGQadCF_4AAAAAgLAAAAgADod9AQFr/original" width="1000"/>
+ </p>
+
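+ The 128K figure above relies on YaRN extrapolation; the shipped `config.json` sets `max_position_embeddings` to 32768 and leaves `rope_scaling` unset. A minimal sketch of a load-time YaRN override is shown below; the factor of 4 (4 × 32768 = 131072) is our own assumption for illustration, not an official recommendation, so check the model card or the GitHub repo for the vendor-endorsed settings.
+
+ ```python
+ from transformers import AutoModelForCausalLM
+
+ # Hypothetical YaRN override (illustration only): config.json ships
+ # max_position_embeddings=32768 and rope_scaling=null, so a ~128K window
+ # presumably corresponds to a factor-4 YaRN scaling.
+ model = AutoModelForCausalLM.from_pretrained(
+     "inclusionAI/Ring-mini-2.0",
+     torch_dtype="auto",
+     device_map="auto",
+     trust_remote_code=True,
+     rope_scaling={"rope_type": "yarn", "factor": 4.0, "original_max_position_embeddings": 32768},
+     max_position_embeddings=131072,
+ )
+ ```
+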
+ ## Model Downloads
+
+ <div align="center">
+
+ | **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
+ | :----------------: | :---------------: | :-------------------: | :----------------: | :----------: |
+ | Ring-mini-2.0 | 16.8B | 1.4B | 128K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ring-mini-2.0) <br>[🤖 Modelscope](https://modelscope.cn/models/inclusionAI/Ring-mini-2.0)|
+ </div>
+
+ ## Quickstart
+
+ ### 🤗 Hugging Face Transformers
+
+ Here is a code snippet showing how to use the chat model with `transformers`:
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_name = "inclusionAI/Ring-mini-2.0"
+
+ model = AutoModelForCausalLM.from_pretrained(
+     model_name,
+     torch_dtype="auto",
+     device_map="auto",
+     trust_remote_code=True
+ )
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+
+ prompt = "Give me a short introduction to large language models."
+ messages = [
+     {"role": "system", "content": "You are Ring, an assistant created by inclusionAI"},
+     {"role": "user", "content": prompt}
+ ]
+ text = tokenizer.apply_chat_template(
+     messages,
+     tokenize=False,
+     add_generation_prompt=True,
+     enable_thinking=True
+ )
+ model_inputs = tokenizer([text], return_tensors="pt", return_token_type_ids=False).to(model.device)
+
+ generated_ids = model.generate(
+     **model_inputs,
+     max_new_tokens=8192
+ )
+ generated_ids = [
+     output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
+ ]
+
+ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
+ print(response)
+ ```
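+
+ To watch tokens as they stream out (handy for informally gauging the 300+ tokens/s figure above), the same setup can be adapted with `transformers`' `TextStreamer`. This is a minimal sketch reusing `model`, `tokenizer`, and `model_inputs` from the snippet above:
+
+ ```python
+ from transformers import TextStreamer
+
+ # Print decoded tokens to stdout as they are generated.
+ streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
+ model.generate(**model_inputs, max_new_tokens=8192, streamer=streamer)
+ ```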
+
+ ## License
+ This code repository is licensed under [the MIT License](https://huggingface.co/inclusionAI/Ring-mini-2.0/blob/main/LICENSE).
+
+ ## Project Page
+ Access the demo and experience the model at: [https://zenmux.ai/inclusionai/ring-mini-2.0](https://zenmux.ai/inclusionai/ring-mini-2.0)
+
+ ## Code
+ The full code repository for this model can be found on GitHub: [https://github.com/inclusionAI/Ring-V2](https://github.com/inclusionAI/Ring-V2)
+
+ ## Citation
+ If you find our work helpful, please consider citing it:
+ ```bibtex
+ @article{lingteam2025every,
+   title={Every Step Evolves: Scaling Reinforcement Learning for Trillion-Scale Thinking Model},
+   author={Ling Team and Anqi Shen and Baihui Li and Bin Hu and Bin Jing and Cai Chen and Chao Huang and Chao Zhang and Chaokun Yang and Cheng Lin and Chengyao Wen and Congqi Li and Deng Zhao and Dingbo Yuan and Donghai You and Fagui Mao and Fanzhuang Meng and Feng Xu and Guojie Li and Guowei Wang and Hao Dai and Haonan Zheng and Hong Liu and Jia Guo and Jiaming Liu and Jian Liu and Jianhao Fu and Jiannan Shi and Jianwen Wang and Jianxin Lai and Jin Yang and Jun Mei and Jun Zhou and Junbo Zhao and Junping Zhao and Kuan Xu and Le Su and Lei Chen and Li Tang and Liang Jiang and Liangcheng Fu and Lianhao Xu and Linfeng Shi and Lisha Liao and Longfei Zheng and Meng Li and Mingchun Chen and Qi Zuo and Qiang Cheng and Qianggang Cao and Qitao Shi and Quanrui Guo and Senlin Zhu and Shaofei Wang and Shaomian Zheng and Shuaicheng Li and Shuwei Gu and Siba Chen and Tao Wu and Tao Zhang and Tianyu Zhang and Tianyu Zhou and Tiwei Bie and Tongkai Yang and Wang Hong and Wang Ren and Weihua Chen and Wenbo Yu and Wengang Zheng and Xiangchun Wang and Xiaodong Yan and Xiaopei Wan and Xin Zhao and Xinyu Kong and Xinyu Tang and Xudong Han and Xudong Wang and Xuemin Yang and Xueyu Hu and Yalin Zhang and Yan Sun and Yicheng Shan and Yilong Wang and Yingying Xu and Yongkang Liu and Yongzhen Guo and Yuanyuan Wang and Yuchen Yan and Yuefan Wang and Yuhong Guo and Zehuan Li and Zhankai Xu and Zhe Li and Zhenduo Zhang and Zhengke Gui and Zhenxuan Pan and Zhenyu Huang and Zhenzhong Lan and Zhiqiang Ding and Zhiqiang Zhang and Zhixun Li and Zhizhen Liu and Zihao Wang and Zujie Wen},
+   year={2025},
+   eprint={2510.18855},
+   archivePrefix={arXiv},
+   primaryClass={cs.AI}
+ }
+ ```
config.json ADDED
@@ -0,0 +1,50 @@
+ {
+   "architectures": [
+     "BailingMoeV2ForCausalLM"
+   ],
+   "attention_dropout": 0.0,
+   "auto_map": {
+     "AutoConfig": "configuration_bailing_moe_v2.BailingMoeV2Config",
+     "AutoModel": "modeling_bailing_moe_v2.BailingMoeV2Model",
+     "AutoModelForCausalLM": "modeling_bailing_moe_v2.BailingMoeV2ForCausalLM"
+   },
+   "num_hidden_layers": 20,
+   "hidden_size": 2048,
+   "intermediate_size": 5120,
+   "eos_token_id": 156892,
+   "pad_token_id": 156892,
+   "first_k_dense_replace": 1,
+   "hidden_act": "silu",
+   "max_position_embeddings": 32768,
+   "model_type": "bailing_moe",
+   "moe_intermediate_size": 512,
+   "norm_topk_prob": true,
+   "num_experts_per_tok": 8,
+   "num_attention_heads": 16,
+   "num_experts": 256,
+   "num_key_value_heads": 4,
+   "rope_theta": 600000,
+   "rope_scaling": null,
+   "tie_word_embeddings": false,
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.52.3",
+   "use_bias": false,
+   "use_rmsnorm": true,
+   "rms_norm_eps": 1e-06,
+   "head_dim": 128,
+   "num_shared_experts": 1,
+   "use_cache": true,
+   "use_qkv_bias": false,
+   "embedding_dropout": 0.0,
+   "output_dropout": 0.0,
+   "vocab_size": 157184,
+   "partial_rotary_factor": 0.5,
+   "router_dtype": "fp32",
+   "moe_router_enable_expert_bias": true,
+   "routed_scaling_factor": 2.5,
+   "n_group": 8,
+   "topk_group": 4,
+   "use_qk_norm": true,
+   "score_function": "sigmoid",
+   "moe_shared_expert_intermediate_size": 512
+ }
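
As a quick sanity check (illustrative only, not an official parameter accounting), the routing fields above reproduce the "1/32 expert activation ratio" quoted in the README: each token is routed to 8 of 256 experts, plus one always-on shared expert.

```python
# Illustrative arithmetic from the config values above.
num_experts = 256
num_experts_per_tok = 8   # routed experts per token
num_shared_experts = 1    # always active, on top of the routed ones
print(num_experts_per_tok / num_experts)  # 0.03125 == 1/32
```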
configuration_bailing_moe_v2.py ADDED
@@ -0,0 +1,84 @@
+ """Bailing MoE V2 model configuration"""
+
+ from transformers.configuration_utils import PretrainedConfig
+
+
+ class BailingMoeV2Config(PretrainedConfig):
+
+     def __init__(
+         self,
+         vocab_size=157184,
+         hidden_size=2048,
+         intermediate_size=5120,
+         num_hidden_layers=20,
+         num_attention_heads=16,
+         num_key_value_heads=4,
+         hidden_act="silu",
+         use_qkv_bias=False,  # bailing only
+         use_bias=False,  # bailing only
+         rms_norm_eps=1e-06,
+         tie_word_embeddings=False,  # PretrainedConfig key, here change default value.
+         embedding_dropout=0.0,
+         attention_dropout=0.0,
+         output_dropout=0.0,
+         initializer_range=0.02,
+         max_position_embeddings=32768,
+         rope_theta=600000.0,
+         use_cache=True,
+         max_window_layers=20,
+         rope_scaling=None,
+         pad_token_id=156892,
+         eos_token_id=156892,
+         num_experts=256,
+         num_shared_experts=1,
+         num_experts_per_tok=8,
+         n_group=8,
+         topk_group=4,
+         moe_intermediate_size=512,
+         first_k_dense_replace=1,
+         head_dim=128,
+         output_router_logits=False,
+         use_qk_norm=True,
+         num_nextn_predict_layers=0,
+         mtp_loss_scaling_factor=0,
+         moe_router_enable_expert_bias=True,
+         routed_scaling_factor=1.0,
+         **kwargs,
+     ):
+         self.num_hidden_layers = num_hidden_layers
+         self.vocab_size = vocab_size
+         self.hidden_size = hidden_size
+         self.intermediate_size = intermediate_size
+         self.num_attention_heads = num_attention_heads
+         self.num_key_value_heads = num_key_value_heads
+         self.hidden_act = hidden_act
+         self.use_qkv_bias = use_qkv_bias
+         self.use_bias = use_bias
+         self.rms_norm_eps = rms_norm_eps
+         self.embedding_dropout = embedding_dropout
+         self.attention_dropout = attention_dropout
+         self.output_dropout = output_dropout
+         self.num_nextn_predict_layers = num_nextn_predict_layers
+         self.mtp_loss_scaling_factor = mtp_loss_scaling_factor
+         self.initializer_range = initializer_range
+         self.max_position_embeddings = max_position_embeddings
+         self.rope_theta = rope_theta
+         self.use_cache = use_cache
+         self.max_window_layers = max_window_layers
+         self.head_dim = head_dim or self.hidden_size // self.num_attention_heads
+         self.rope_scaling = rope_scaling
+         self.use_qk_norm = use_qk_norm
+         self.moe_router_enable_expert_bias = moe_router_enable_expert_bias
+         self.routed_scaling_factor = routed_scaling_factor
+
+         # MoE configs
+         self.num_experts = num_experts
+         self.num_shared_experts = num_shared_experts
+         self.num_experts_per_tok = num_experts_per_tok
+         self.n_group = n_group
+         self.topk_group = topk_group
+         self.moe_intermediate_size = moe_intermediate_size
+         self.first_k_dense_replace = first_k_dense_replace
+         self.output_router_logits = output_router_logits
+
+         super().__init__(pad_token_id=pad_token_id, eos_token_id=eos_token_id, tie_word_embeddings=tie_word_embeddings, **kwargs)
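
This configuration class is wired up through the `auto_map` entry in `config.json`, so it is loaded as remote code. A minimal sketch of inspecting it without loading any weights (field names as defined above):

```python
from transformers import AutoConfig

# trust_remote_code=True is required because BailingMoeV2Config ships with the checkpoint.
config = AutoConfig.from_pretrained("inclusionAI/Ring-mini-2.0", trust_remote_code=True)
print(config.num_experts, config.num_experts_per_tok, config.num_shared_experts)  # 256 8 1
```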
generation_config.json ADDED
@@ -0,0 +1,9 @@
+ {
+   "bos_token_id": 156891,
+   "eos_token_id": [
+     156892,
+     156895
+   ],
+   "pad_token_id": 156892,
+   "transformers_version": "4.52.3"
+ }
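
Note that `eos_token_id` is a list of two stop ids here, so generation halts on either one. A quick way to confirm what `generate()` will use (values taken from the file above):

```python
from transformers import GenerationConfig

gen_config = GenerationConfig.from_pretrained("inclusionAI/Ring-mini-2.0")
print(gen_config.eos_token_id)  # [156892, 156895]
```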
model-00001-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:835f1b7ee15389f110d8be35c0d21b6e5d5af321fa5182a6a2cb1889fc7fd5f5
+ size 9999812472
model-00002-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cd837407ff9d544e58d49d0fc04566cdccda73c0e0e8da94b91b50a764d1eb31
+ size 9999813848
model-00003-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f104defce6e603bca7a62448ac8404b47c360fb87ffde31c2d6ba550d7bee98b
+ size 9999878728
model-00004-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d3426631ac28f1ff8a4b870c9de3b64acc899268d846f3b56f1a0b66e8b5923a
+ size 2513629968
model.safetensors.index.json ADDED
The diff for this file is too large to render. See raw diff
 
modeling_bailing_moe_v2.py ADDED
@@ -0,0 +1,1533 @@
1
+ # coding=utf-8
2
+ # Copyright 2025 Antgroup and The HuggingFace Inc. team. All rights reserved.
3
+ #
4
+ # This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
5
+ # and OPT implementations in this library. It has been modified from its
6
+ # original forms to accommodate minor architectural differences compared
7
+ # to GPT-NeoX and OPT used by the Meta AI team that trained the model.
8
+ #
9
+ # Licensed under the Apache License, Version 2.0 (the "License");
10
+ # you may not use this file except in compliance with the License.
11
+ # You may obtain a copy of the License at
12
+ #
13
+ # http://www.apache.org/licenses/LICENSE-2.0
14
+ #
15
+ # Unless required by applicable law or agreed to in writing, software
16
+ # distributed under the License is distributed on an "AS IS" BASIS,
17
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
18
+ # See the License for the specific language governing permissions and
19
+ # limitations under the License.
20
+ """PyTorch BailingMoE model."""
21
+
22
+ import math
23
+ import warnings
24
+ from typing import List, Optional, Tuple, Union
25
+
26
+ import torch
27
+ import torch.nn.functional as F
28
+ from torch import nn
29
+
30
+ from transformers.activations import ACT2FN
31
+ from transformers.cache_utils import Cache, DynamicCache
32
+ from transformers.modeling_attn_mask_utils import (
33
+ AttentionMaskConverter,
34
+ _prepare_4d_attention_mask,
35
+ _prepare_4d_causal_attention_mask,
36
+ _prepare_4d_causal_attention_mask_for_sdpa,
37
+ )
38
+ from transformers.modeling_outputs import MoeModelOutputWithPast
39
+ from transformers.modeling_rope_utils import ROPE_INIT_FUNCTIONS, dynamic_rope_update
40
+ from transformers.modeling_utils import PreTrainedModel
41
+ from transformers.pytorch_utils import ALL_LAYERNORM_LAYERS, is_torch_greater_or_equal_than_1_13
42
+ from transformers.utils import (
43
+ add_start_docstrings,
44
+ add_start_docstrings_to_model_forward,
45
+ is_flash_attn_2_available,
46
+ is_flash_attn_greater_or_equal_2_10,
47
+ logging,
48
+ replace_return_docstrings,
49
+ )
50
+ from transformers.utils.import_utils import is_torch_fx_available
51
+ from .configuration_bailing_moe_v2 import BailingMoeV2Config
52
+ from transformers.generation.utils import GenerationMixin
53
+ from dataclasses import dataclass
54
+ from transformers.utils import ModelOutput
55
+
56
+
57
+ if is_flash_attn_2_available():
58
+ from flash_attn import flash_attn_func, flash_attn_varlen_func
59
+ from flash_attn.bert_padding import index_first_axis, pad_input, unpad_input # noqa
60
+
61
+
62
+ # This makes `_prepare_4d_causal_attention_mask` a leaf function in the FX graph.
63
+ # It means that the function will not be traced through and simply appear as a node in the graph.
64
+ if is_torch_fx_available():
65
+ if not is_torch_greater_or_equal_than_1_13:
66
+ import torch.fx
67
+
68
+ _prepare_4d_causal_attention_mask = torch.fx.wrap(_prepare_4d_causal_attention_mask)
69
+
70
+
71
+ logger = logging.get_logger(__name__)
72
+
73
+ _CONFIG_FOR_DOC = "BailingMoeV2Config"
74
+
75
+
76
+ def roll_tensor(tensor, shifts=-1, dims=-1, fill_value=0):
77
+ """Roll the tensor input along the given dimension(s).
78
+ Inserted elements are set to be 0.0.
79
+ """
80
+ rolled_tensor = torch.roll(tensor, shifts=shifts, dims=dims)
81
+ rolled_tensor.select(dims, shifts).fill_(fill_value)
82
+ return rolled_tensor, rolled_tensor.sum()
83
+
84
+
85
+ @dataclass
86
+ class MoEV2CausalLMOutputWithPast(ModelOutput):
87
+ """
88
+ Base class for causal language model (or autoregressive) outputs as well as Mixture of Expert's router hidden
89
+ states terms, to train a MoE model.
90
+
91
+ Args:
92
+ loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
93
+ Language modeling loss (for next-token prediction).
94
+ logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`):
95
+ Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
96
+ past_key_values (`Cache`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
97
+ It is a [`~cache_utils.Cache`] instance. For more details, see our [kv cache guide](https://huggingface.co/docs/transformers/en/kv_cache).
98
+
99
+ Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see
100
+ `past_key_values` input) to speed up sequential decoding.
101
+ hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
102
+ Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
103
+ one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
104
+
105
+ Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
106
+ attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
107
+ Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
108
+ sequence_length)`.
109
+
110
+ Attention weights after the attention softmax, used to compute the weighted average in the self-attention
111
+ heads.
112
+ z_loss (`torch.FloatTensor`, *optional*, returned when `labels` is provided):
113
+ z_loss for the sparse modules.
114
+ aux_loss (`torch.FloatTensor`, *optional*, returned when `labels` is provided):
115
+ aux_loss for the sparse modules.
116
+ router_logits (`tuple(torch.FloatTensor)`, *optional*, returned when `output_router_logits=True` is passed or when `config.add_router_probs=True`):
117
+ Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, sequence_length, num_experts)`.
118
+
119
+ Router logits of the encoder model, useful to compute the auxiliary loss and the z_loss for the sparse
120
+ modules.
121
+ """
122
+
123
+ loss: Optional[torch.FloatTensor] = None
124
+ logits: Optional[torch.FloatTensor] = None
125
+ past_key_values: Optional[Cache] = None
126
+ hidden_states: Optional[tuple[torch.FloatTensor, ...]] = None
127
+ attentions: Optional[tuple[torch.FloatTensor, ...]] = None
128
+ z_loss: Optional[torch.FloatTensor] = None
129
+ aux_loss: Optional[torch.FloatTensor] = None
130
+ router_logits: Optional[tuple[torch.FloatTensor]] = None
131
+ mtp_loss: Optional[torch.FloatTensor] = None
132
+ mtp_logits: Optional[tuple[torch.FloatTensor, ...]] = None
133
+
134
+
135
+ class MoeV2ModelOutputWithPast(MoeModelOutputWithPast):
136
+
137
+ def __init__(self, mtp_hidden_states=None, **kwargs):
138
+ super().__init__(**kwargs)
139
+ self.mtp_hidden_states = mtp_hidden_states
140
+
141
+
142
+ def _get_unpad_data(attention_mask):
143
+ seqlens_in_batch = attention_mask.sum(dim=-1, dtype=torch.int32)
144
+ indices = torch.nonzero(attention_mask.flatten(), as_tuple=False).flatten()
145
+ max_seqlen_in_batch = seqlens_in_batch.max().item()
146
+ cu_seqlens = F.pad(torch.cumsum(seqlens_in_batch, dim=0, dtype=torch.int32), (1, 0))
147
+ return (
148
+ indices,
149
+ cu_seqlens,
150
+ max_seqlen_in_batch,
151
+ )
152
+
153
+
154
+ def _expand_mask(mask: torch.Tensor, dtype: torch.dtype, tgt_len: Optional[int] = None):
155
+ warnings.warn(
156
+ "Calling `transformers.models.BailingMoeV2.modeling_BailingMoeV2._prepare_4d_attention_mask` is deprecated and will be removed in v4.37. Use `transformers.modeling_attn_mask_utils._prepare_4d_attention_mask"
157
+ )
158
+ return _prepare_4d_attention_mask(mask=mask, dtype=dtype, tgt_len=tgt_len)
159
+
160
+
161
+ def _make_causal_mask(
162
+ input_ids_shape: torch.Size, dtype: torch.dtype, device: torch.device, past_key_values_length: int = 0
163
+ ):
164
+ warnings.warn(
165
+ "Calling `transformers.models.BailingMoeV2.modeling_BailingMoeV2._make_causal_mask` is deprecated and will be removed in v4.37. Use `transformers.models.BailingMoeV2.modeling_BailingMoeV2.AttentionMaskConverter._make_causal_mask"
166
+ )
167
+ return AttentionMaskConverter._make_causal_mask(
168
+ input_ids_shape=input_ids_shape, dtype=dtype, device=device, past_key_values_length=past_key_values_length
169
+ )
170
+
171
+
172
+ class BailingMoeV2RMSNorm(nn.Module):
173
+ def __init__(self, hidden_size, eps=1e-6):
174
+ """
175
+ BailingMoeV2RMSNorm is equivalent to T5LayerNorm
176
+ """
177
+ super().__init__()
178
+ self.weight = nn.Parameter(torch.ones(hidden_size))
179
+ self.variance_epsilon = eps
180
+
181
+ def forward(self, hidden_states):
182
+ input_dtype = hidden_states.dtype
183
+ hidden_states = hidden_states.to(torch.float32)
184
+ variance = hidden_states.pow(2).mean(-1, keepdim=True)
185
+ hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
186
+ return self.weight * hidden_states.to(input_dtype)
187
+
188
+
189
+ ALL_LAYERNORM_LAYERS.append(BailingMoeV2RMSNorm)
190
+
191
+
192
+ class BailingMoeV2RotaryEmbedding(nn.Module):
193
+ def __init__(self, config: BailingMoeV2Config, device=None):
194
+ super().__init__()
195
+ # BC: "rope_type" was originally "type"
196
+ if hasattr(config, "rope_scaling") and config.rope_scaling is not None:
197
+ self.rope_type = config.rope_scaling.get("rope_type", config.rope_scaling.get("type"))
198
+ else:
199
+ self.rope_type = "default"
200
+ self.max_seq_len_cached = config.max_position_embeddings
201
+ self.original_max_seq_len = config.max_position_embeddings
202
+
203
+ self.config = config
204
+ self.rope_init_fn = ROPE_INIT_FUNCTIONS[self.rope_type]
205
+
206
+ inv_freq, self.attention_scaling = self.rope_init_fn(self.config, device)
207
+ self.register_buffer("inv_freq", inv_freq, persistent=False)
208
+ self.original_inv_freq = self.inv_freq
209
+
210
+ @torch.no_grad()
211
+ @dynamic_rope_update # power user: used with advanced RoPE types (e.g. dynamic rope)
212
+ def forward(self, x, position_ids):
213
+ inv_freq_expanded = self.inv_freq[None, :, None].float().expand(position_ids.shape[0], -1, 1).to(x.device)
214
+ position_ids_expanded = position_ids[:, None, :].float()
215
+
216
+ device_type = x.device.type if isinstance(x.device.type, str) and x.device.type != "mps" else "cpu"
217
+ with torch.autocast(device_type=device_type, enabled=False): # Force float32
218
+ freqs = (inv_freq_expanded.float() @ position_ids_expanded.float()).transpose(1, 2)
219
+ emb = torch.cat((freqs, freqs), dim=-1)
220
+ cos = emb.cos() * self.attention_scaling
221
+ sin = emb.sin() * self.attention_scaling
222
+
223
+ return cos.to(dtype=x.dtype), sin.to(dtype=x.dtype)
224
+
225
+
226
+ # Copied from transformers.models.llama.modeling_llama.rotate_half
227
+ def rotate_half(x):
228
+ """Rotates half the hidden dims of the input."""
229
+ x1 = x[..., : x.shape[-1] // 2]
230
+ x2 = x[..., x.shape[-1] // 2 :]
231
+ return torch.cat((-x2, x1), dim=-1)
232
+
233
+
234
+ # Copied from transformers.models.llama.modeling_llama.apply_rotary_pos_emb
235
+ def apply_rotary_pos_emb(q, k, cos, sin, unsqueeze_dim=1):
236
+ """Applies Rotary Position Embedding to the query and key tensors.
237
+
238
+ Args:
239
+ q (`torch.Tensor`): The query tensor.
240
+ k (`torch.Tensor`): The key tensor.
241
+ cos (`torch.Tensor`): The cosine part of the rotary embedding.
242
+ sin (`torch.Tensor`): The sine part of the rotary embedding.
243
+ unsqueeze_dim (`int`, *optional*, defaults to 1):
244
+ The 'unsqueeze_dim' argument specifies the dimension along which to unsqueeze cos[position_ids] and
245
+ sin[position_ids] so that they can be properly broadcasted to the dimensions of q and k. For example, note
246
+ that cos[position_ids] and sin[position_ids] have the shape [batch_size, seq_len, head_dim]. Then, if q and
247
+ k have the shape [batch_size, heads, seq_len, head_dim], then setting unsqueeze_dim=1 makes
248
+ cos[position_ids] and sin[position_ids] broadcastable to the shapes of q and k. Similarly, if q and k have
249
+ the shape [batch_size, seq_len, heads, head_dim], then set unsqueeze_dim=2.
250
+ Returns:
251
+ `tuple(torch.Tensor)` comprising the query and key tensors rotated using the Rotary Position Embedding.
252
+ """
253
+ cos = cos.unsqueeze(unsqueeze_dim)
254
+ sin = sin.unsqueeze(unsqueeze_dim)
255
+
256
+ # Keep half or full tensor for later concatenation
257
+ rotary_dim = cos.shape[-1]
258
+ q_rot, q_pass = q[..., :rotary_dim], q[..., rotary_dim:]
259
+ k_rot, k_pass = k[..., :rotary_dim], k[..., rotary_dim:]
260
+
261
+ # Apply rotary embeddings on the first half or full tensor
262
+ q_embed = (q_rot * cos) + (rotate_half(q_rot) * sin)
263
+ k_embed = (k_rot * cos) + (rotate_half(k_rot) * sin)
264
+
265
+ # Concatenate back to full shape
266
+ q_embed = torch.cat([q_embed, q_pass], dim=-1)
267
+ k_embed = torch.cat([k_embed, k_pass], dim=-1)
268
+ return q_embed, k_embed
269
+
270
+
271
+ class BailingMoeV2MLP(nn.Module):
272
+ def __init__(self, config: BailingMoeV2Config, intermediate_size: int):
273
+ super().__init__()
274
+ self.config = config
275
+ self.hidden_size = config.hidden_size
276
+ self.intermediate_size = intermediate_size
277
+
278
+ self.gate_proj = nn.Linear(self.hidden_size, self.intermediate_size, bias=False)
279
+ self.up_proj = nn.Linear(self.hidden_size, self.intermediate_size, bias=False)
280
+ self.down_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=False)
281
+ self.act_fn = ACT2FN[config.hidden_act]
282
+
283
+ def forward(self, x):
284
+ return self.down_proj(self.act_fn(self.gate_proj(x)) * self.up_proj(x))
285
+
286
+
287
+ class BailingMoeV2Gate(nn.Module):
288
+ def __init__(self, config):
289
+ super().__init__()
290
+ self.config = config
291
+ self.top_k = config.num_experts_per_tok
292
+ self.num_experts = config.num_experts
293
+
294
+ self.n_group = config.n_group
295
+ self.topk_group = config.topk_group
296
+
297
+ # topk selection algorithm
298
+ self.gating_dim = config.hidden_size
299
+ self.weight = nn.Parameter(torch.empty((self.num_experts, self.gating_dim)))
300
+ self.routed_scaling_factor = config.routed_scaling_factor
301
+
302
+ self.register_buffer("expert_bias", torch.zeros((self.num_experts)))
303
+ self.reset_parameters()
304
+
305
+ def reset_parameters(self) -> None:
306
+ import torch.nn.init as init
307
+
308
+ init.kaiming_uniform_(self.weight, a=math.sqrt(5))
309
+
310
+ def group_limited_topk(
311
+ self,
312
+ scores: torch.Tensor,
313
+ ):
314
+ num_tokens, _ = scores.size()
315
+ # Organize the experts into groups
316
+ group_scores = scores.view(num_tokens, self.n_group, -1).topk(2, dim=-1)[0].sum(dim=-1)
317
+ group_idx = torch.topk(group_scores, k=self.topk_group, dim=-1, sorted=False)[1]
318
+ group_mask = torch.zeros_like(group_scores)
319
+ group_mask.scatter_(1, group_idx, 1)
320
+
321
+ # Mask the experts based on selection groups
322
+ score_mask = (
323
+ group_mask.unsqueeze(-1)
324
+ .expand(num_tokens, self.n_group, self.num_experts // self.n_group)
325
+ .reshape(num_tokens, -1)
326
+ )
327
+
328
+ masked_scores = scores.masked_fill(~score_mask.bool(), float('-inf'))
329
+ probs, top_indices = torch.topk(masked_scores, k=self.top_k, dim=-1)
330
+
331
+ return probs, top_indices
332
+
333
+ def forward(self, hidden_states):
334
+ # compute gating score
335
+ hidden_states = hidden_states.view(-1, hidden_states.shape[-1])
336
+ logits = F.linear(hidden_states.type(torch.float32), self.weight.type(torch.float32))
337
+
338
+ scores = torch.sigmoid(logits.float()).type_as(logits)
339
+
340
+ scores_for_routing = scores + self.expert_bias
341
+ _, topk_idx = self.group_limited_topk(scores_for_routing)
342
+
343
+ scores = torch.gather(scores, dim=1, index=topk_idx).type_as(logits)
344
+
345
+ topk_weight = scores / (scores.sum(dim=-1, keepdim=True) + 1e-20) if self.top_k > 1 else scores
346
+ topk_weight = topk_weight * self.routed_scaling_factor
347
+
348
+ return topk_idx, topk_weight, logits
349
+
350
+
351
+ class BailingMoeV2SparseMoeBlock(nn.Module):
352
+ """
353
+ A mixed expert module containing shared experts.
354
+ """
355
+
356
+ def __init__(self, config: BailingMoeV2Config):
357
+ super().__init__()
358
+ self.config = config
359
+ self.num_experts_per_tok = config.num_experts_per_tok
360
+ self._setup_experts()
361
+ self.gate = BailingMoeV2Gate(config)
362
+ if config.num_shared_experts is not None:
363
+ self.shared_experts = BailingMoeV2MLP(
364
+ config=config, intermediate_size=config.moe_intermediate_size * config.num_shared_experts
365
+ )
366
+
367
+ def _setup_experts(self):
368
+ self.experts = nn.ModuleList(
369
+ [
370
+ BailingMoeV2MLP(config=self.config, intermediate_size=self.config.moe_intermediate_size)
371
+ for _ in range(self.config.num_experts)
372
+ ]
373
+ )
374
+
375
+ def forward(self, hidden_states):
376
+ identity = hidden_states
377
+ bsz, seq_len, h = hidden_states.shape
378
+ topk_idx, topk_weight, router_logits = self.gate(hidden_states)
379
+ hidden_states = hidden_states.view(-1, hidden_states.shape[-1])
380
+ flat_topk_idx = topk_idx.view(-1)
381
+ if self.training:
382
+ hidden_states = hidden_states.repeat_interleave(self.num_experts_per_tok, dim=0)
383
+ y = torch.empty_like(hidden_states)
384
+ for i, expert in enumerate(self.experts):
385
+ y[flat_topk_idx == i] = expert(hidden_states[flat_topk_idx == i])
386
+ y = (y.view(*topk_weight.shape, -1) * topk_weight.unsqueeze(-1)).sum(dim=1)
387
+ y = y.to(hidden_states.dtype).view(bsz, seq_len, h)
388
+ else:
389
+ y = self.moe_infer(hidden_states, topk_idx, topk_weight).view(bsz, seq_len, h)
390
+ if self.config.num_shared_experts is not None:
391
+ y = y + self.shared_experts(identity)
392
+ return y, (router_logits.view(bsz, seq_len, -1), topk_idx.view(bsz, seq_len, -1))
393
+
394
+ @torch.no_grad()
395
+ def moe_infer(self, x, topk_ids, topk_weight):
396
+ cnts = topk_ids.new_zeros((topk_ids.shape[0], len(self.experts)))
397
+ cnts.scatter_(1, topk_ids, 1)
398
+ tokens_per_expert = cnts.sum(dim=0)
399
+ idxs = topk_ids.view(-1).argsort()
400
+ sorted_tokens = x[idxs // topk_ids.shape[1]]
401
+ tokens_per_expert = tokens_per_expert.cpu().numpy()
402
+ outputs = []
403
+ start_idx = 0
404
+ for i, num_tokens in enumerate(tokens_per_expert):
405
+ end_idx = start_idx + num_tokens
406
+ if num_tokens == 0:
407
+ continue
408
+ expert = self.experts[i]
409
+ tokens_for_this_expert = sorted_tokens[start_idx:end_idx]
410
+ expert_out = expert(tokens_for_this_expert)
411
+ outputs.append(expert_out.to(x.device))
412
+ start_idx = end_idx
413
+
414
+ outs = torch.cat(outputs, dim=0) if len(outputs) else sorted_tokens.new_empty(0)
415
+ new_x = torch.empty_like(outs)
416
+ new_x[idxs] = outs
417
+ final_out = (
418
+ new_x.view(*topk_ids.shape, -1)
419
+ .type(topk_weight.dtype)
420
+ .mul_(topk_weight.unsqueeze(dim=-1))
421
+ .sum(dim=1)
422
+ .type(new_x.dtype)
423
+ )
424
+ return final_out
425
+
426
+
427
+ # Copied from transformers.models.llama.modeling_llama.repeat_kv
428
+ def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
429
+ """
430
+ This is the equivalent of torch.repeat_interleave(x, dim=1, repeats=n_rep). The hidden states go from (batch,
431
+ num_key_value_heads, seqlen, head_dim) to (batch, num_attention_heads, seqlen, head_dim)
432
+ """
433
+ batch, num_key_value_heads, slen, head_dim = hidden_states.shape
434
+ if n_rep == 1:
435
+ return hidden_states
436
+ hidden_states = hidden_states[:, :, None, :, :].expand(batch, num_key_value_heads, n_rep, slen, head_dim)
437
+ return hidden_states.reshape(batch, num_key_value_heads * n_rep, slen, head_dim)
438
+
439
+
440
+ # Copied from transformers.models.llama.modeling_llama.LlamaAttention with Llama->BailingMoeV2
441
+ class BailingMoeV2Attention(nn.Module):
442
+ """Multi-headed attention from 'Attention Is All You Need' paper"""
443
+
444
+ def __init__(self, config: BailingMoeV2Config, layer_idx: Optional[int] = None):
445
+ super().__init__()
446
+ self.config = config
447
+ self.layer_idx = layer_idx
448
+ if layer_idx is None:
449
+ logger.warning_once(
450
+ f"Instantiating {self.__class__.__name__} without passing `layer_idx` is not recommended and will "
451
+ "to errors during the forward call, if caching is used. Please make sure to provide a `layer_idx` "
452
+ "when creating this class."
453
+ )
454
+
455
+ self.attention_dropout = config.attention_dropout
456
+ self.hidden_size = config.hidden_size
457
+ self.num_heads = config.num_attention_heads
458
+ self.head_dim = config.head_dim or self.hidden_size // self.num_heads
459
+ partial_rotary_factor = config.partial_rotary_factor if hasattr(config, "partial_rotary_factor") else 1.0
460
+ self.rope_dim = int(self.head_dim * partial_rotary_factor)
461
+ self.num_key_value_heads = config.num_key_value_heads
462
+ self.num_key_value_groups = self.num_heads // self.num_key_value_heads
463
+ self.max_position_embeddings = config.max_position_embeddings
464
+ self.rope_theta = config.rope_theta
465
+ self.is_causal = True
466
+
467
+ self.query_key_value = nn.Linear(
468
+ self.hidden_size,
469
+ (self.num_heads + 2 * self.num_key_value_heads) * self.head_dim,
470
+ bias=config.use_qkv_bias,
471
+ )
472
+
473
+ if self.config.use_qk_norm:
474
+ self.query_layernorm = BailingMoeV2RMSNorm(self.head_dim, eps=config.rms_norm_eps)
475
+ self.key_layernorm = BailingMoeV2RMSNorm(self.head_dim, eps=config.rms_norm_eps)
476
+ self.dense = nn.Linear(self.num_heads * self.head_dim, self.hidden_size, bias=config.use_bias)
477
+
478
+ def _shape(self, tensor: torch.Tensor, seq_len: int, bsz: int):
479
+ return tensor.view(bsz, seq_len, self.num_heads, self.head_dim).transpose(1, 2).contiguous()
480
+
481
+ def forward(
482
+ self,
483
+ hidden_states: torch.Tensor,
484
+ attention_mask: Optional[torch.Tensor] = None,
485
+ position_ids: Optional[torch.LongTensor] = None,
486
+ past_key_value: Optional[Cache] = None,
487
+ output_attentions: bool = False,
488
+ use_cache: bool = False,
489
+ position_embeddings: Optional[Tuple[torch.Tensor, torch.Tensor]] = None, # necessary, but kept here for BC
490
+ **kwargs,
491
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
492
+
493
+ bsz, q_len, _ = hidden_states.size()
494
+
495
+ qkv = self.query_key_value(hidden_states)
496
+ qkv = qkv.view(bsz, q_len, self.num_heads + 2 * self.num_key_value_heads, self.head_dim)
497
+
498
+ query_states, key_states, value_states = qkv.split(
499
+ [self.num_heads, self.num_key_value_heads, self.num_key_value_heads], dim=-2
500
+ )
501
+ query_states = query_states.transpose(1, 2)
502
+ key_states = key_states.transpose(1, 2)
503
+ value_states = value_states.transpose(1, 2)
504
+
505
+ if self.config.use_qk_norm:
506
+ query_states = self.query_layernorm(query_states)
507
+ key_states = self.key_layernorm(key_states)
508
+
509
+ cos, sin = position_embeddings
510
+ query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin)
511
+
512
+ if past_key_value is not None:
513
+ if self.layer_idx is None:
514
+ raise ValueError(
515
+ f"The cache structure has changed since version v4.36. If you are using {self.__class__.__name__} "
516
+ "for auto-regressive decoding with k/v caching, please make sure to initialize the attention class "
517
+ "with a layer index."
518
+ )
519
+ cache_kwargs = {"sin": sin, "cos": cos}
520
+ key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)
521
+
522
+ key_states = repeat_kv(key_states, self.num_key_value_groups)
523
+ value_states = repeat_kv(value_states, self.num_key_value_groups)
524
+
525
+ attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) / math.sqrt(self.head_dim)
526
+
527
+ kv_seq_len = key_states.shape[-2]
528
+ if attn_weights.size() != (bsz, self.num_heads, q_len, kv_seq_len):
529
+ raise ValueError(
530
+ f"Attention weights should be of size {(bsz, self.num_heads, q_len, kv_seq_len)}, but is"
531
+ f" {attn_weights.size()}"
532
+ )
533
+
534
+ if attention_mask is not None:
535
+ if attention_mask.size() != (bsz, 1, q_len, kv_seq_len):
536
+ raise ValueError(
537
+ f"Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is {attention_mask.size()}"
538
+ )
539
+ attn_weights = attn_weights + attention_mask
540
+
541
+ # upcast attention to fp32
542
+ attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype)
543
+ attn_weights = nn.functional.dropout(attn_weights, p=self.attention_dropout, training=self.training)
544
+ attn_output = torch.matmul(attn_weights, value_states)
545
+
546
+ if attn_output.size() != (bsz, self.num_heads, q_len, self.head_dim):
547
+ raise ValueError(
548
+ f"`attn_output` should be of size {(bsz, self.num_heads, q_len, self.head_dim)}, but is"
549
+ f" {attn_output.size()}"
550
+ )
551
+
552
+ attn_output = attn_output.transpose(1, 2).contiguous()
553
+
554
+ attn_output = attn_output.reshape(bsz, q_len, -1)
555
+
556
+ attn_output = self.dense(attn_output)
557
+
558
+ if not output_attentions:
559
+ attn_weights = None
560
+
561
+ return attn_output, attn_weights, past_key_value
562
+
563
+
564
+ # Copied from transformers.models.llama.modeling_llama.LlamaFlashAttention2 with Llama->BailingMoeV2
565
+ class BailingMoeV2FlashAttention2(BailingMoeV2Attention):
566
+ """
567
+ BailingMoeV2 flash attention module. This module inherits from `BailingMoeV2Attention` as the weights of the module stays
568
+ untouched. The only required change would be on the forward pass where it needs to correctly call the public API of
569
+ flash attention and deal with padding tokens in case the input contains any of them.
570
+ """
571
+
572
+ def __init__(self, *args, **kwargs):
573
+ super().__init__(*args, **kwargs)
574
+
575
+ # TODO: Should be removed once Flash Attention for RoCm is bumped to 2.1.
576
+ # flash_attn<2.1 generates top-left aligned causal mask, while what is needed here is bottom-right alignement, that was made default for flash_attn>=2.1. This attribute is used to handle this difference. Reference: https://github.com/Dao-AILab/flash-attention/releases/tag/v2.1.0.
577
+ # Beware that with flash_attn<2.1, using q_seqlen != k_seqlen (except for the case q_seqlen == 1) produces a wrong mask (top-left).
578
+ self._flash_attn_uses_top_left_mask = not is_flash_attn_greater_or_equal_2_10()
579
+
580
+ def forward(
581
+ self,
582
+ hidden_states: torch.Tensor,
583
+ attention_mask: Optional[torch.LongTensor] = None,
584
+ position_ids: Optional[torch.LongTensor] = None,
585
+ past_key_value: Optional[Cache] = None,
586
+ output_attentions: bool = False,
587
+ use_cache: bool = False,
588
+ position_embeddings: Optional[Tuple[torch.Tensor, torch.Tensor]] = None, # necessary, but kept here for BC
589
+ **kwargs,
590
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
591
+ # BailingMoeV2FlashAttention2 attention does not support output_attentions
592
+ output_attentions = False
593
+
594
+ bsz, q_len, _ = hidden_states.size()
595
+
596
+ # Flash attention requires the input to have the shape
597
+ # batch_size x seq_length x head_dim x hidden_dim
598
+ # therefore we just need to keep the original shape
599
+
600
+ qkv = self.query_key_value(hidden_states)
601
+ qkv = qkv.view(bsz, q_len, self.num_heads + 2 * self.num_key_value_heads, self.head_dim)
602
+
603
+ query_states, key_states, value_states = qkv.split(
604
+ [self.num_heads, self.num_key_value_heads, self.num_key_value_heads], dim=-2
605
+ )
606
+ query_states = query_states.transpose(1, 2)
607
+ key_states = key_states.transpose(1, 2)
608
+ value_states = value_states.transpose(1, 2)
609
+
610
+ if self.config.use_qk_norm:
611
+ query_states = self.query_layernorm(query_states)
612
+ key_states = self.key_layernorm(key_states)
613
+
614
+ cos, sin = position_embeddings
615
+ query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin)
616
+
617
+ if past_key_value is not None:
618
+ cache_kwargs = {"sin": sin, "cos": cos}
619
+ key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)
620
+
621
+ # TODO: These transpose are quite inefficient but Flash Attention requires the layout [batch_size, sequence_length, num_heads, head_dim]. We would need to refactor the KV cache
622
+ # to be able to avoid many of these transpose/reshape/view.
623
+ query_states = query_states.transpose(1, 2)
624
+ key_states = key_states.transpose(1, 2)
625
+ value_states = value_states.transpose(1, 2)
626
+
627
+ dropout_rate = self.attention_dropout if self.training else 0.0
628
+
629
+ # In PEFT, usually we cast the layer norms in float32 for training stability reasons
630
+ # therefore the input hidden states gets silently cast in float32. Hence, we need
631
+ # cast them back in the correct dtype just to be sure everything works as expected.
632
+ # This might slow down training & inference so it is recommended to not cast the LayerNorms
633
+ # in fp32. (BailingMoeV2RMSNorm handles it correctly)
634
+
635
+ input_dtype = query_states.dtype
636
+ if input_dtype == torch.float32:
637
+ # Handle the case where the model is quantized
638
+ if hasattr(self.config, "_pre_quantization_dtype"):
639
+ target_dtype = self.config._pre_quantization_dtype
640
+ elif torch.is_autocast_enabled():
641
+ target_dtype = torch.get_autocast_gpu_dtype()
642
+ else:
643
+ target_dtype = self.query_key_value.weight.dtype
644
+
645
+ logger.warning_once(
646
+ f"The input hidden states seems to be silently casted in float32, this might be related to"
647
+ f" the fact you have upcasted embedding or layer norm layers in float32. We will cast back the input in"
648
+ f" {target_dtype}."
649
+ )
650
+
651
+ query_states = query_states.to(target_dtype)
652
+ key_states = key_states.to(target_dtype)
653
+ value_states = value_states.to(target_dtype)
654
+
655
+ attn_output = self._flash_attention_forward(
656
+ query_states, key_states, value_states, attention_mask, q_len, dropout=dropout_rate
657
+ )
658
+
659
+ attn_output = attn_output.reshape(bsz, q_len, -1).contiguous()
660
+ attn_output = self.dense(attn_output)
661
+
662
+ if not output_attentions:
663
+ attn_weights = None
664
+
665
+ return attn_output, attn_weights, past_key_value
666
+
667
+ def _flash_attention_forward(
668
+ self, query_states, key_states, value_states, attention_mask, query_length, dropout=0.0, softmax_scale=None
669
+ ):
670
+ """
671
+ Calls the forward method of Flash Attention - if the input hidden states contain at least one padding token
672
+ first unpad the input, then computes the attention scores and pad the final attention scores.
673
+
674
+ Args:
675
+ query_states (`torch.Tensor`):
676
+ Input query states to be passed to Flash Attention API
677
+ key_states (`torch.Tensor`):
678
+ Input key states to be passed to Flash Attention API
679
+ value_states (`torch.Tensor`):
680
+ Input value states to be passed to Flash Attention API
681
+ attention_mask (`torch.Tensor`):
682
+ The padding mask - corresponds to a tensor of size `(batch_size, seq_len)` where 0 stands for the
683
+ position of padding tokens and 1 for the position of non-padding tokens.
684
+ dropout (`int`, *optional*):
685
+ Attention dropout
686
+ softmax_scale (`float`, *optional*):
687
+ The scaling of QK^T before applying softmax. Default to 1 / sqrt(head_dim)
688
+ query_length (`int`):
689
+ The length of the query sequence in terms of tokens. This represents the number of tokens in the
690
+ `query_states` tensor along the sequence dimension. It is used to determine the effective sequence
691
+ length for attention computations.
692
+ """
693
+ if not self._flash_attn_uses_top_left_mask:
694
+ causal = self.is_causal
695
+ else:
696
+ # TODO: Remove the `query_length != 1` check once Flash Attention for RoCm is bumped to 2.1. For details, please see the comment in BailingMoeV2FlashAttention2 __init__.
697
+ causal = self.is_causal and query_length != 1
698
+
699
+ # Contains at least one padding token in the sequence
700
+ if attention_mask is not None:
701
+ batch_size = query_states.shape[0]
702
+ query_states, key_states, value_states, indices_q, cu_seq_lens, max_seq_lens = self._upad_input(
703
+ query_states, key_states, value_states, attention_mask, query_length
704
+ )
705
+
706
+ cu_seqlens_q, cu_seqlens_k = cu_seq_lens
707
+ max_seqlen_in_batch_q, max_seqlen_in_batch_k = max_seq_lens
708
+
709
+ attn_output_unpad = flash_attn_varlen_func(
710
+ query_states,
711
+ key_states,
712
+ value_states,
713
+ cu_seqlens_q=cu_seqlens_q,
714
+ cu_seqlens_k=cu_seqlens_k,
715
+ max_seqlen_q=max_seqlen_in_batch_q,
716
+ max_seqlen_k=max_seqlen_in_batch_k,
717
+ dropout_p=dropout,
718
+ softmax_scale=softmax_scale,
719
+ causal=causal,
720
+ )
721
+
722
+ attn_output = pad_input(attn_output_unpad, indices_q, batch_size, query_length)
723
+ else:
724
+ attn_output = flash_attn_func(
725
+ query_states, key_states, value_states, dropout, softmax_scale=softmax_scale, causal=causal
726
+ )
727
+
728
+ return attn_output
729
+
730
+ def _upad_input(self, query_layer, key_layer, value_layer, attention_mask, query_length):
731
+ indices_k, cu_seqlens_k, max_seqlen_in_batch_k = _get_unpad_data(attention_mask)
732
+ batch_size, kv_seq_len, num_key_value_heads, head_dim = key_layer.shape
733
+
734
+ key_layer = index_first_axis(
735
+ key_layer.reshape(batch_size * kv_seq_len, num_key_value_heads, head_dim), indices_k
736
+ )
737
+ value_layer = index_first_axis(
738
+ value_layer.reshape(batch_size * kv_seq_len, num_key_value_heads, head_dim), indices_k
739
+ )
740
+ if query_length == kv_seq_len:
741
+ query_layer = index_first_axis(
742
+ query_layer.reshape(batch_size * kv_seq_len, self.num_heads, head_dim), indices_k
743
+ )
744
+ cu_seqlens_q = cu_seqlens_k
745
+ max_seqlen_in_batch_q = max_seqlen_in_batch_k
746
+ indices_q = indices_k
747
+ elif query_length == 1:
748
+ max_seqlen_in_batch_q = 1
749
+ cu_seqlens_q = torch.arange(
750
+ batch_size + 1, dtype=torch.int32, device=query_layer.device
751
+ ) # There is a memcpy here, that is very bad.
752
+ indices_q = cu_seqlens_q[:-1]
753
+ query_layer = query_layer.squeeze(1)
754
+ else:
755
+ # The -q_len: slice assumes left padding.
756
+ attention_mask = attention_mask[:, -query_length:]
757
+ query_layer, indices_q, cu_seqlens_q, max_seqlen_in_batch_q = unpad_input(query_layer, attention_mask)
758
+
759
+ return (
760
+ query_layer,
761
+ key_layer,
762
+ value_layer,
763
+ indices_q,
764
+ (cu_seqlens_q, cu_seqlens_k),
765
+ (max_seqlen_in_batch_q, max_seqlen_in_batch_k),
766
+ )
767
+
768
+
769
+ # Copied from transformers.models.llama.modeling_llama.LlamaSdpaAttention with Llama->BailingMoeV2
770
+ class BailingMoeV2SdpaAttention(BailingMoeV2Attention):
771
+ """
772
+ BailingMoeV2 attention module using torch.nn.functional.scaled_dot_product_attention. This module inherits from
773
+ `BailingMoeV2Attention` as the weights of the module stays untouched. The only changes are on the forward pass to adapt to
774
+ SDPA API.
775
+ """
776
+
777
+ # Adapted from BailingMoeV2Attention.forward
778
+ def forward(
779
+ self,
780
+ hidden_states: torch.Tensor,
781
+ attention_mask: Optional[torch.Tensor] = None,
782
+ position_ids: Optional[torch.LongTensor] = None,
783
+ past_key_value: Optional[Cache] = None,
784
+ output_attentions: bool = False,
785
+ use_cache: bool = False,
786
+ position_embeddings: Optional[Tuple[torch.Tensor, torch.Tensor]] = None, # necessary, but kept here for BC
787
+ **kwargs,
788
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
789
+ if output_attentions:
790
+ # TODO: Improve this warning with e.g. `model.config.attn_implementation = "manual"` once this is implemented.
791
+ logger.warning_once(
792
+ "BailingMoeV2Model is using BailingMoeV2SdpaAttention, but `torch.nn.functional.scaled_dot_product_attention` does not support `output_attentions=True`. Falling back to the manual attention implementation, "
793
+ 'but specifying the manual implementation will be required from Transformers version v5.0.0 onwards. This warning can be removed using the argument `attn_implementation="eager"` when loading the model.'
794
+ )
795
+ return super().forward(
796
+ hidden_states=hidden_states,
797
+ attention_mask=attention_mask,
798
+ position_ids=position_ids,
799
+ past_key_value=past_key_value,
800
+ output_attentions=output_attentions,
801
+ use_cache=use_cache,
802
+ )
803
+
804
+ bsz, q_len, _ = hidden_states.size()
805
+
806
+ qkv = self.query_key_value(hidden_states)
807
+ qkv = qkv.view(bsz, q_len, self.num_heads + 2 * self.num_key_value_heads, self.head_dim)
808
+
809
+ query_states, key_states, value_states = qkv.split(
810
+ [self.num_heads, self.num_key_value_heads, self.num_key_value_heads], dim=-2
811
+ )
812
+ query_states = query_states.transpose(1, 2)
813
+ key_states = key_states.transpose(1, 2)
814
+ value_states = value_states.transpose(1, 2)
815
+
816
+ if self.config.use_qk_norm:
817
+ query_states = self.query_layernorm(query_states)
818
+ key_states = self.key_layernorm(key_states)
819
+
820
+ cos, sin = position_embeddings
821
+ query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin)
822
+
823
+ if past_key_value is not None:
824
+ cache_kwargs = {"sin": sin, "cos": cos}
825
+ key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)
826
+
827
+ key_states = repeat_kv(key_states, self.num_key_value_groups)
828
+ value_states = repeat_kv(value_states, self.num_key_value_groups)
829
+
830
+ if attention_mask is not None:
831
+ kv_seq_len = key_states.shape[-2]
832
+ if attention_mask.size() != (bsz, 1, q_len, kv_seq_len):
833
+ raise ValueError(
834
+ f"Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is {attention_mask.size()}"
835
+ )
836
+
837
+ # SDPA with memory-efficient backend is currently (torch==2.1.2) bugged with non-contiguous inputs with custom attn_mask,
838
+ # Reference: https://github.com/pytorch/pytorch/issues/112577.
839
+ if query_states.device.type == "cuda" and attention_mask is not None:
840
+ query_states = query_states.contiguous()
841
+ key_states = key_states.contiguous()
842
+ value_states = value_states.contiguous()
843
+
844
+ attn_output = torch.nn.functional.scaled_dot_product_attention(
845
+ query_states,
846
+ key_states,
847
+ value_states,
848
+ attn_mask=attention_mask,
849
+ dropout_p=self.attention_dropout if self.training else 0.0,
850
+ # The q_len > 1 is necessary to match with AttentionMaskConverter.to_causal_4d that does not create a causal mask in case q_len == 1.
851
+ is_causal=self.is_causal and attention_mask is None and q_len > 1,
852
+ )
853
+
854
+ attn_output = attn_output.transpose(1, 2).contiguous()
855
+ attn_output = attn_output.reshape(bsz, q_len, -1)
856
+
857
+ attn_output = self.dense(attn_output)
858
+
859
+ return attn_output, None, past_key_value
860
+
861
+
862
+ ATTENTION_CLASSES = {
863
+ "eager": BailingMoeV2Attention,
864
+ "flash_attention_2": BailingMoeV2FlashAttention2,
865
+ "sdpa": BailingMoeV2SdpaAttention,
866
+ }
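The mapping above is what `config._attn_implementation` is looked up against when each decoder layer builds its attention module. A minimal loading sketch, assuming the checkpoint directory ships this file as custom code; `"path/to/checkpoint"` is a placeholder, and `attn_implementation` may be any key of `ATTENTION_CLASSES` supported by your environment:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder path; pick "eager", "sdpa", or "flash_attention_2".
model = AutoModelForCausalLM.from_pretrained(
    "path/to/checkpoint",
    attn_implementation="sdpa",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("path/to/checkpoint", trust_remote_code=True)
```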
867
+
868
+
869
+ class BailingMoeV2MTPLayer(nn.Module):
870
+ def __init__(self, config: BailingMoeV2Config, layer_idx: int):
871
+ super().__init__()
872
+ self.layer_idx = layer_idx
873
+ self.input_layernorm = BailingMoeV2RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
874
+ self.enorm = BailingMoeV2RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
875
+
876
+ self.eh_proj = nn.Linear(config.hidden_size * 2, config.hidden_size, bias=False)
877
+ self.post_attention_layernorm = BailingMoeV2RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
878
+ self.attention = ATTENTION_CLASSES[config._attn_implementation](config=config, layer_idx=layer_idx)
879
+ self.mlp = BailingMoeV2SparseMoeBlock(config)
880
+
881
+ self.hnorm = BailingMoeV2RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
882
+ self.final_layernorm = BailingMoeV2RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
883
+
884
+ def forward(
885
+ self,
886
+ input_embeds,
887
+ hidden_states: torch.Tensor,
888
+ attention_mask: Optional[torch.Tensor] = None,
889
+ position_ids: Optional[torch.LongTensor] = None,
890
+ past_key_value: Optional[Tuple[torch.Tensor]] = None,
891
+ output_attentions: Optional[bool] = False,
892
+ output_router_logits: Optional[bool] = False,
893
+ use_cache: Optional[bool] = False,
894
+ position_embeddings: Optional[Tuple[torch.Tensor, torch.Tensor]] = None, # necessary, but kept here for BC
895
+ **kwargs,
896
+ ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]:
897
+ input_embeds = self.enorm(input_embeds)
898
+ hidden_states = self.hnorm(hidden_states)
899
+ hidden_states = self.eh_proj(torch.cat([input_embeds, hidden_states], dim=-1))
900
+ residual = hidden_states
901
+
902
+ hidden_states = self.input_layernorm(hidden_states)
903
+
904
+ # Self Attention
905
+ hidden_states, self_attn_weights, present_key_value = self.attention(
906
+ hidden_states=hidden_states,
907
+ attention_mask=attention_mask,
908
+ position_ids=position_ids,
909
+ past_key_value=past_key_value,
910
+ output_attentions=output_attentions,
911
+ position_embeddings=position_embeddings,
912
+ use_cache=use_cache,
913
+ )
914
+ hidden_states = residual + hidden_states
915
+
916
+ # Fully Connected
917
+ residual = hidden_states
918
+ hidden_states = self.post_attention_layernorm(hidden_states)
919
+ hidden_states = self.mlp(hidden_states)
920
+ if isinstance(hidden_states, tuple):
921
+ hidden_states, router_logits = hidden_states
922
+ else:
923
+ router_logits = None
924
+ hidden_states = residual + hidden_states.to(residual.device)
925
+ hidden_states = self.final_layernorm(hidden_states)
926
+
927
+ outputs = (hidden_states,)
928
+
929
+ if output_attentions:
930
+ outputs += (self_attn_weights,)
931
+
932
+ if use_cache:
933
+ outputs += (present_key_value,)
934
+
935
+ if output_router_logits:
936
+ outputs += (router_logits,)
937
+
938
+ return outputs
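`BailingMoeV2MTPLayer` above fuses the embedding of the one-step-shifted input token with the hidden state coming out of the main stack: both are RMS-normalized, concatenated, and projected back to `hidden_size` by `eh_proj` before the usual attention/MoE sub-layers run. A toy sketch of that combine step with made-up dimensions, using `torch.nn.RMSNorm` (PyTorch >= 2.4) as a stand-in for `BailingMoeV2RMSNorm`; this is illustrative, not the repo's API:

```python
import torch
import torch.nn as nn

hidden_size = 8
enorm = nn.RMSNorm(hidden_size)                 # stand-in for BailingMoeV2RMSNorm
hnorm = nn.RMSNorm(hidden_size)
eh_proj = nn.Linear(2 * hidden_size, hidden_size, bias=False)

input_embeds = torch.randn(1, 4, hidden_size)   # embeddings of the shifted tokens
prev_hidden = torch.randn(1, 4, hidden_size)    # output of the main decoder stack
fused = eh_proj(torch.cat([enorm(input_embeds), hnorm(prev_hidden)], dim=-1))
print(fused.shape)                              # torch.Size([1, 4, 8])
```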
939
+
940
+
941
+ class BailingMoeV2DecoderLayer(nn.Module):
942
+ def __init__(self, config: BailingMoeV2Config, layer_idx: int):
943
+ super().__init__()
944
+ self.hidden_size = config.hidden_size
945
+
946
+ self.attention = ATTENTION_CLASSES[config._attn_implementation](config=config, layer_idx=layer_idx)
947
+
948
+ self.mlp = (
949
+ BailingMoeV2SparseMoeBlock(config)
950
+ if (config.num_experts is not None and layer_idx >= config.first_k_dense_replace)
951
+ else BailingMoeV2MLP(config=config, intermediate_size=config.intermediate_size)
952
+ )
953
+ self.input_layernorm = BailingMoeV2RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
954
+ self.post_attention_layernorm = BailingMoeV2RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
955
+
956
+ def forward(
957
+ self,
958
+ hidden_states: torch.Tensor,
959
+ attention_mask: Optional[torch.Tensor] = None,
960
+ position_ids: Optional[torch.LongTensor] = None,
961
+ past_key_value: Optional[Tuple[torch.Tensor]] = None,
962
+ output_attentions: Optional[bool] = False,
963
+ output_router_logits: Optional[bool] = False,
964
+ use_cache: Optional[bool] = False,
965
+ position_embeddings: Optional[Tuple[torch.Tensor, torch.Tensor]] = None, # necessary, but kept here for BC
966
+ **kwargs,
967
+ ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]:
968
+ """
969
+ Args:
970
+ hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
971
+ attention_mask (`torch.FloatTensor`, *optional*):
972
+ attention mask of size `(batch_size, sequence_length)` if flash attention is used or `(batch_size, 1,
973
+ query_sequence_length, key_sequence_length)` if default attention is used.
974
+ position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
975
+ Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0,
976
+ config.n_positions - 1]`.
977
+ past_key_value (`Tuple(torch.FloatTensor)`, *optional*):
978
+ cached past key and value projection states
979
+ output_attentions (`bool`, *optional*):
980
+ Whether to return the attentions tensors of all attention layers. See `attentions` under
981
+ returned tensors for more detail.
982
+ output_router_logits (`bool`, *optional*):
983
+ Whether or not to return the logits of all the routers. They are useful for computing the router loss,
984
+ and should not be returned during inference.
985
+ use_cache (`bool`, *optional*):
986
+ If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding
987
+ (see `past_key_values`).
988
+ """
989
+ residual = hidden_states
990
+
991
+ hidden_states = self.input_layernorm(hidden_states)
992
+
993
+ # Self Attention
994
+ hidden_states, self_attn_weights, present_key_value = self.attention(
995
+ hidden_states=hidden_states,
996
+ attention_mask=attention_mask,
997
+ position_ids=position_ids,
998
+ past_key_value=past_key_value,
999
+ output_attentions=output_attentions,
1000
+ position_embeddings=position_embeddings,
1001
+ use_cache=use_cache,
1002
+ )
1003
+ hidden_states = residual + hidden_states
1004
+
1005
+ # Fully Connected
1006
+ residual = hidden_states
1007
+ hidden_states = self.post_attention_layernorm(hidden_states)
1008
+ hidden_states = self.mlp(hidden_states)
1009
+ if isinstance(hidden_states, tuple):
1010
+ hidden_states, router_logits = hidden_states
1011
+ else:
1012
+ router_logits = None
1013
+ hidden_states = residual + hidden_states.to(residual.device)
1014
+
1015
+ outputs = (hidden_states,)
1016
+
1017
+ if output_attentions:
1018
+ outputs += (self_attn_weights,)
1019
+
1020
+ if use_cache:
1021
+ outputs += (present_key_value,)
1022
+
1023
+ if output_router_logits:
1024
+ outputs += (router_logits,)
1025
+
1026
+ return outputs
1027
+
1028
+
1029
+ BAILINGMOEV2_START_DOCSTRING = r"""
1030
+ This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
1031
+ library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
1032
+ etc.)
1033
+
1034
+ This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
1035
+ Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
1036
+ and behavior.
1037
+
1038
+ Parameters:
1039
+ config ([`BailingMoeV2Config`]):
1040
+ Model configuration class with all the parameters of the model. Initializing with a config file does not
1041
+ load the weights associated with the model, only the configuration. Check out the
1042
+ [`~PreTrainedModel.from_pretrained`] method to load the model weights.
1043
+ """
1044
+
1045
+
1046
+ @add_start_docstrings(
1047
+ "The bare BailingMoeV2 Model outputting raw hidden-states without any specific head on top.",
1048
+ BAILINGMOEV2_START_DOCSTRING,
1049
+ )
1050
+ class BailingMoeV2PreTrainedModel(PreTrainedModel):
1051
+ config_class = BailingMoeV2Config
1052
+ base_model_prefix = "model"
1053
+ supports_gradient_checkpointing = True
1054
+ _no_split_modules = ["BailingMoeV2DecoderLayer"]
1055
+ _skip_keys_device_placement = "past_key_values"
1056
+ _supports_flash_attn_2 = True
1057
+ _supports_sdpa = True
1058
+ _supports_cache_class = True
1059
+
1060
+ def _init_weights(self, module):
1061
+ std = self.config.initializer_range
1062
+ if isinstance(module, nn.Linear):
1063
+ module.weight.data.normal_(mean=0.0, std=std)
1064
+ if module.bias is not None:
1065
+ module.bias.data.zero_()
1066
+ elif isinstance(module, nn.Embedding):
1067
+ module.weight.data.normal_(mean=0.0, std=std)
1068
+ if module.padding_idx is not None:
1069
+ module.weight.data[module.padding_idx].zero_()
1070
+
1071
+
1072
+ BAILINGMOEV2_INPUTS_DOCSTRING = r"""
1073
+ Args:
1074
+ input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
1075
+ Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
1076
+ it.
1077
+
1078
+ Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
1079
+ [`PreTrainedTokenizer.__call__`] for details.
1080
+
1081
+ [What are input IDs?](../glossary#input-ids)
1082
+ attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
1083
+ Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
1084
+
1085
+ - 1 for tokens that are **not masked**,
1086
+ - 0 for tokens that are **masked**.
1087
+
1088
+ [What are attention masks?](../glossary#attention-mask)
1089
+
1090
+ Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
1091
+ [`PreTrainedTokenizer.__call__`] for details.
1092
+
1093
+ If `past_key_values` is used, optionally only the last `input_ids` have to be input (see
1094
+ `past_key_values`).
1095
+
1096
+ If you want to change padding behavior, you should read [`modeling_opt._prepare_decoder_attention_mask`]
1097
+ and modify to your needs. See diagram 1 in [the paper](https://arxiv.org/abs/1910.13461) for more
1098
+ information on the default strategy.
1099
+
1100
+ - 1 indicates the head is **not masked**,
1101
+ - 0 indicates the head is **masked**.
1102
+ position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
1103
+ Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0,
1104
+ config.n_positions - 1]`.
1105
+
1106
+ [What are position IDs?](../glossary#position-ids)
1107
+ past_key_values (`Cache` or `tuple(tuple(torch.FloatTensor))`, *optional*):
1108
+ Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
1109
+ blocks) that can be used to speed up sequential decoding. This typically consists in the `past_key_values`
1110
+ returned by the model at a previous stage of decoding, when `use_cache=True` or `config.use_cache=True`.
1111
+
1112
+ Two formats are allowed:
1113
+ - a [`~cache_utils.Cache`] instance;
1114
+ - Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of
1115
+ shape `(batch_size, num_heads, sequence_length, embed_size_per_head)`). This is also known as the legacy
1116
+ cache format.
1117
+
1118
+ The model will output the same cache format that is fed as input. If no `past_key_values` are passed, the
1119
+ legacy cache format will be returned.
1120
+
1121
+ If `past_key_values` are used, the user can optionally input only the last `input_ids` (those that don't
1122
+ have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `input_ids`
1123
+ of shape `(batch_size, sequence_length)`.
1124
+ inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
1125
+ Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This
1126
+ is useful if you want more control over how to convert `input_ids` indices into associated vectors than the
1127
+ model's internal embedding lookup matrix.
1128
+ use_cache (`bool`, *optional*):
1129
+ If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see
1130
+ `past_key_values`).
1131
+ output_attentions (`bool`, *optional*):
1132
+ Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
1133
+ tensors for more detail.
1134
+ output_hidden_states (`bool`, *optional*):
1135
+ Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
1136
+ more detail.
1137
+ return_dict (`bool`, *optional*):
1138
+ Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
1139
+ """
1140
+
1141
+
1142
+ @add_start_docstrings(
1143
+ "The bare BailingMoeV2 Model outputting raw hidden-states without any specific head on top.",
1144
+ BAILINGMOEV2_START_DOCSTRING,
1145
+ )
1146
+ class BailingMoeV2Model(BailingMoeV2PreTrainedModel):
1147
+ """
1148
+ Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`BailingMoeV2DecoderLayer`]
1149
+
1150
+ Args:
1151
+ config: BailingMoeV2Config
1152
+ """
1153
+
1154
+ def __init__(self, config: BailingMoeV2Config):
1155
+ super().__init__(config)
1156
+ self.padding_idx = config.pad_token_id
1157
+ self.vocab_size = config.vocab_size
1158
+ self.num_nextn_predict_layers = config.num_nextn_predict_layers
1159
+
1160
+ self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
1161
+ self.layers = []
1162
+ for layer_idx in range(config.num_hidden_layers + config.num_nextn_predict_layers):
1163
+ layer_cls = BailingMoeV2DecoderLayer if layer_idx < config.num_hidden_layers else BailingMoeV2MTPLayer
1164
+ self.layers.append(layer_cls(config, layer_idx))
1165
+
1166
+ self.layers = nn.ModuleList(self.layers)
1167
+
1168
+ self._use_sdpa = config._attn_implementation == "sdpa"
1169
+ self._use_flash_attention_2 = config._attn_implementation == "flash_attention_2"
1170
+ self.norm = BailingMoeV2RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
1171
+ self.rotary_emb = BailingMoeV2RotaryEmbedding(config=config)
1172
+ self.gradient_checkpointing = False
1173
+ # Initialize weights and apply final processing
1174
+ self.post_init()
1175
+
1176
+ def get_input_embeddings(self):
1177
+ return self.word_embeddings
1178
+
1179
+ def set_input_embeddings(self, value):
1180
+ self.word_embeddings = value
1181
+
1182
+ @add_start_docstrings_to_model_forward(BAILINGMOEV2_INPUTS_DOCSTRING)
1183
+ def forward(
1184
+ self,
1185
+ input_ids: torch.LongTensor = None,
1186
+ attention_mask: Optional[torch.Tensor] = None,
1187
+ position_ids: Optional[torch.LongTensor] = None,
1188
+ past_key_values: Optional[List[torch.FloatTensor]] = None,
1189
+ inputs_embeds: Optional[torch.FloatTensor] = None,
1190
+ use_cache: Optional[bool] = None,
1191
+ output_attentions: Optional[bool] = None,
1192
+ output_hidden_states: Optional[bool] = None,
1193
+ output_router_logits: Optional[bool] = None,
1194
+ return_dict: Optional[bool] = None,
1195
+ **kwargs,
1196
+ ) -> Union[Tuple, MoeV2ModelOutputWithPast]:
1197
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
1198
+ output_hidden_states = (
1199
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
1200
+ )
1201
+ output_router_logits = (
1202
+ output_router_logits if output_router_logits is not None else self.config.output_router_logits
1203
+ )
1204
+ use_cache = use_cache if use_cache is not None else self.config.use_cache
1205
+
1206
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1207
+
1208
+ # retrieve input_ids and inputs_embeds
1209
+ if input_ids is not None and inputs_embeds is not None:
1210
+ raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
1211
+ elif input_ids is not None:
1212
+ batch_size, seq_length = input_ids.shape[:2]
1213
+ elif inputs_embeds is not None:
1214
+ batch_size, seq_length = inputs_embeds.shape[:2]
1215
+ else:
1216
+ raise ValueError("You have to specify either input_ids or inputs_embeds")
1217
+
1218
+ if self.gradient_checkpointing and self.training:
1219
+ if use_cache:
1220
+ logger.warning_once(
1221
+ "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`transformers."
1222
+ )
1223
+ use_cache = False
1224
+
1225
+ if use_cache and past_key_values is None:
1226
+ past_key_values = DynamicCache()
1227
+
1228
+ if inputs_embeds is None:
1229
+ inputs_embeds = self.word_embeddings(input_ids)
1230
+
1231
+ past_seen_tokens = past_key_values.get_seq_length() if past_key_values is not None else 0
1232
+
1233
+ if position_ids is None:
1234
+ position_ids = torch.arange(
1235
+ past_seen_tokens, past_seen_tokens + inputs_embeds.shape[1], device=inputs_embeds.device
1236
+ )
1237
+ position_ids = position_ids.unsqueeze(0)
1238
+
1239
+ if self._use_flash_attention_2:
1240
+ # 2d mask is passed through the layers
1241
+ attention_mask = attention_mask if (attention_mask is not None and 0 in attention_mask) else None
1242
+ elif self._use_sdpa and not output_attentions:
1243
+ # output_attentions=True can not be supported when using SDPA, and we fall back on
1244
+ # the manual implementation that requires a 4D causal mask in all cases.
1245
+ attention_mask = _prepare_4d_causal_attention_mask_for_sdpa(
1246
+ attention_mask,
1247
+ (batch_size, seq_length),
1248
+ inputs_embeds,
1249
+ past_seen_tokens,
1250
+ )
1251
+ else:
1252
+ # 4d mask is passed through the layers
1253
+ attention_mask = _prepare_4d_causal_attention_mask(
1254
+ attention_mask, (batch_size, seq_length), inputs_embeds, past_seen_tokens
1255
+ )
1256
+
1257
+ # embed positions
1258
+ hidden_states = inputs_embeds
1259
+
1260
+ # create position embeddings to be shared across the decoder layers
1261
+ position_embeddings = self.rotary_emb(hidden_states, position_ids)
1262
+
1263
+ # decoder layers
1264
+ all_hidden_states = () if output_hidden_states else None
1265
+ all_self_attns = () if output_attentions else None
1266
+ all_router_logits = () if output_router_logits else None
1267
+ next_decoder_cache = None
1268
+ layers = self.layers[: -self.num_nextn_predict_layers] if self.num_nextn_predict_layers > 0 else self.layers
1269
+ mtp_layers = self.layers[-self.num_nextn_predict_layers :] if self.num_nextn_predict_layers > 0 else None
1270
+
1271
+ for decoder_layer in layers:
1272
+ if output_hidden_states:
1273
+ all_hidden_states += (hidden_states,)
1274
+
1275
+ if self.gradient_checkpointing and self.training:
1276
+ layer_outputs = self._gradient_checkpointing_func(
1277
+ decoder_layer.__call__,
1278
+ hidden_states,
1279
+ attention_mask,
1280
+ position_ids,
1281
+ past_key_values,
1282
+ output_attentions,
1283
+ output_router_logits,
1284
+ use_cache,
1285
+ position_embeddings,
1286
+ )
1287
+ else:
1288
+ layer_outputs = decoder_layer(
1289
+ hidden_states,
1290
+ attention_mask=attention_mask,
1291
+ position_ids=position_ids,
1292
+ past_key_value=past_key_values,
1293
+ output_attentions=output_attentions,
1294
+ output_router_logits=output_router_logits,
1295
+ use_cache=use_cache,
1296
+ position_embeddings=position_embeddings,
1297
+ )
1298
+ hidden_states = layer_outputs[0]
1299
+
1300
+ if use_cache:
1301
+ next_decoder_cache = layer_outputs[2 if output_attentions else 1]
1302
+
1303
+ if output_attentions:
1304
+ all_self_attns += (layer_outputs[1],)
1305
+
1306
+ if output_router_logits and layer_outputs[-1] is not None:
1307
+ all_router_logits += (layer_outputs[-1],)
1308
+
1309
+ hidden_states = self.norm(hidden_states)
1310
+ main_hidden_states = hidden_states
1311
+
1312
+ # add hidden states from the last decoder layer
1313
+ if output_hidden_states:
1314
+ all_hidden_states += (main_hidden_states,)
1315
+
1316
+ mtp_hidden_states = None
1317
+
1318
+ if mtp_layers:
1319
+ for decoder_layer in mtp_layers:
1320
+ input_ids, _ = roll_tensor(input_ids, shifts=-1, dims=-1)
1321
+ inputs_embeds = self.word_embeddings(input_ids)
1322
+
1323
+ if self.gradient_checkpointing and self.training:
1324
+ layer_outputs = self._gradient_checkpointing_func(
1325
+ decoder_layer.__call__,
1326
+ inputs_embeds,
1327
+ hidden_states,
1328
+ attention_mask,
1329
+ position_ids,
1330
+ past_key_values,
1331
+ output_attentions,
1332
+ output_router_logits,
1333
+ use_cache,
1334
+ position_embeddings,
1335
+ )
1336
+ else:
1337
+ layer_outputs = decoder_layer(
1338
+ inputs_embeds,
1339
+ hidden_states,
1340
+ attention_mask=attention_mask,
1341
+ position_ids=position_ids,
1342
+ past_key_value=past_key_values,
1343
+ output_attentions=output_attentions,
1344
+ output_router_logits=output_router_logits,
1345
+ use_cache=use_cache,
1346
+ position_embeddings=position_embeddings,
1347
+ )
1348
+ if mtp_hidden_states is None:
1349
+ mtp_hidden_states = []
1350
+ hidden_states = layer_outputs[0]
1351
+ mtp_hidden_states.append(hidden_states)
1352
+
1353
+ if output_hidden_states:
1354
+ all_hidden_states += (hidden_states,)
1355
+
1356
+ if use_cache:
1357
+ next_decoder_cache = layer_outputs[2 if output_attentions else 1]
1358
+
1359
+ if output_attentions:
1360
+ all_self_attns += (layer_outputs[1],)
1361
+
1362
+ if output_router_logits and layer_outputs[-1] is not None:
1363
+ all_router_logits += (layer_outputs[-1],)
1364
+
1365
+ next_cache = None
1366
+ if use_cache:
1367
+ next_cache = next_decoder_cache
1368
+ if not return_dict:
1369
+ return tuple(
1370
+ v
1371
+ for v in [main_hidden_states, next_cache, all_hidden_states, all_self_attns, all_router_logits]
1372
+ if v is not None
1373
+ )
1374
+ return MoeV2ModelOutputWithPast(
1375
+ last_hidden_state=main_hidden_states,
1376
+ past_key_values=next_cache,
1377
+ hidden_states=all_hidden_states,
1378
+ mtp_hidden_states=mtp_hidden_states,
1379
+ attentions=all_self_attns,
1380
+ router_logits=all_router_logits,
1381
+ )
1382
+
1383
+
1384
+ class BailingMoeV2ForCausalLM(BailingMoeV2PreTrainedModel, GenerationMixin):
1385
+ _tied_weights_keys = ["lm_head.weight"]
1386
+
1387
+ def __init__(self, config: BailingMoeV2Config):
1388
+ super().__init__(config)
1389
+ self.model = BailingMoeV2Model(config)
1390
+ self.vocab_size = config.vocab_size
1391
+ self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
1392
+ self.num_nextn_predict_layers = config.num_nextn_predict_layers
1393
+ self.mtp_loss_scaling_factor = config.mtp_loss_scaling_factor
1394
+
1395
+ # Initialize weights and apply final processing
1396
+ self.post_init()
1397
+
1398
+ def get_input_embeddings(self):
1399
+ return self.model.word_embeddings
1400
+
1401
+ def set_input_embeddings(self, value):
1402
+ self.model.word_embeddings = value
1403
+
1404
+ def get_output_embeddings(self):
1405
+ return self.lm_head
1406
+
1407
+ def set_output_embeddings(self, new_embeddings):
1408
+ self.lm_head = new_embeddings
1409
+
1410
+ def set_decoder(self, decoder):
1411
+ self.model = decoder
1412
+
1413
+ def get_decoder(self):
1414
+ return self.model
1415
+
1416
+ @add_start_docstrings_to_model_forward(BAILINGMOEV2_INPUTS_DOCSTRING)
1417
+ @replace_return_docstrings(output_type=MoEV2CausalLMOutputWithPast, config_class=_CONFIG_FOR_DOC)
1418
+ def forward(
1419
+ self,
1420
+ input_ids: torch.LongTensor = None,
1421
+ attention_mask: Optional[torch.Tensor] = None,
1422
+ position_ids: Optional[torch.LongTensor] = None,
1423
+ past_key_values: Optional[List[torch.FloatTensor]] = None,
1424
+ inputs_embeds: Optional[torch.FloatTensor] = None,
1425
+ labels: Optional[torch.LongTensor] = None,
1426
+ use_cache: Optional[bool] = None,
1427
+ output_attentions: Optional[bool] = None,
1428
+ output_hidden_states: Optional[bool] = None,
1429
+ output_router_logits: Optional[bool] = None,
1430
+ return_dict: Optional[bool] = None,
1431
+ **kwargs,
1432
+ ) -> Union[Tuple, MoEV2CausalLMOutputWithPast]:
1433
+ r"""
1434
+ Args:
1435
+ labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
1436
+ Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
1437
+ config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
1438
+ (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
1439
+
1440
+ Returns:
1441
+
1442
+ Example:
1443
+
1444
+ ```python
1445
+ >>> from transformers import AutoTokenizer, AutoModelForCausalLM
1446
+
1447
+ >>> model = AutoModelForCausalLM.from_pretrained(PATH_TO_CONVERTED_WEIGHTS, trust_remote_code=True)
1448
+ >>> tokenizer = AutoTokenizer.from_pretrained(PATH_TO_CONVERTED_TOKENIZER)
1449
+
1450
+ >>> prompt = "Hey, are you conscious? Can you talk to me?"
1451
+ >>> inputs = tokenizer(prompt, return_tensors="pt")
1452
+
1453
+ >>> # Generate
1454
+ >>> generate_ids = model.generate(inputs.input_ids, max_length=30)
1455
+ >>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
1456
+ "Hey, are you conscious? Can you talk to me?\nI'm not conscious, but I can talk to you."
1457
+ ```"""
1458
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
1459
+ output_hidden_states = (
1460
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
1461
+ )
1462
+ output_router_logits = (
1463
+ output_router_logits if output_router_logits is not None else self.config.output_router_logits
1464
+ )
1465
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1466
+ # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
1467
+ outputs = self.model(
1468
+ input_ids=input_ids,
1469
+ attention_mask=attention_mask,
1470
+ position_ids=position_ids,
1471
+ past_key_values=past_key_values,
1472
+ inputs_embeds=inputs_embeds,
1473
+ use_cache=use_cache,
1474
+ output_attentions=output_attentions,
1475
+ output_hidden_states=output_hidden_states,
1476
+ output_router_logits=output_router_logits,
1477
+ return_dict=return_dict,
1478
+ **kwargs,
1479
+ )
1480
+
1481
+ loss = None
1482
+ all_mtp_loss = None
1483
+ aux_loss = None
1484
+ hidden_states = outputs[0]
1485
+ logits = self.lm_head(hidden_states)
1486
+ logits = logits.float()
1487
+
1488
+ if labels is not None:
1489
+ loss = self.loss_function(logits, labels, self.config.vocab_size, **kwargs)
1490
+
1491
+ all_mtp_logits = None
1492
+ if self.num_nextn_predict_layers > 0:
1493
+ mtp_hidden_states = outputs.mtp_hidden_states
1494
+ shift_labels_mtp = None
1495
+ for i in range(self.num_nextn_predict_layers):
1496
+ mtp_hidden_states_i = mtp_hidden_states[i]  # hidden states produced by the i-th MTP layer
1497
+ mtp_logits = self.lm_head(mtp_hidden_states_i).float()
1498
+ if all_mtp_logits is None:
1499
+ all_mtp_logits = []
1500
+ all_mtp_logits.append(mtp_logits)
1501
+ if labels is not None:
1502
+ if shift_labels_mtp is None:
1503
+ shift_labels_mtp = labels.clone()
1504
+ shift_labels_mtp, _ = roll_tensor(shift_labels_mtp, shifts=-1, dims=-1, fill_value=-100)
1505
+ mtp_logits_ = mtp_logits.view(-1, self.config.vocab_size)
1506
+ mtp_loss = self.loss_function(mtp_logits_, shift_labels_mtp.to(mtp_logits_.device).view(-1), self.config.vocab_size, **kwargs)
1507
+ if loss is not None:
1508
+ loss += self.mtp_loss_scaling_factor * mtp_loss
1509
+ else:
1510
+ loss = self.mtp_loss_scaling_factor * mtp_loss
1511
+
1512
+ if all_mtp_loss is None:
1513
+ all_mtp_loss = []
1514
+ all_mtp_loss.append(mtp_loss)
1515
+
1516
+ if not return_dict:
1517
+ output = (logits,) + outputs[1:]
1518
+ if output_router_logits:
1519
+ output = (aux_loss,) + output
1520
+ return (loss,) + output if loss is not None else output
1521
+
1522
+ return MoEV2CausalLMOutputWithPast(
1523
+ loss=loss,
1524
+ mtp_loss=all_mtp_loss,
1525
+ aux_loss=aux_loss,
1526
+ logits=logits,
1527
+ mtp_logits=all_mtp_logits,
1528
+ past_key_values=outputs.past_key_values,
1529
+ hidden_states=outputs.hidden_states,
1530
+ attentions=outputs.attentions,
1531
+ router_logits=outputs.router_logits,
1532
+ )
1533
+
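For readers tracing the training loss assembled in `BailingMoeV2ForCausalLM.forward` above: the main next-token loss and each MTP head's loss are combined additively, with the extra heads weighted by `mtp_loss_scaling_factor` and their labels rolled one step further than the main labels. A toy sketch of that bookkeeping with made-up sizes and an assumed scaling factor; it simplifies the exact shifting done by `roll_tensor` and `loss_function`:

```python
import torch
import torch.nn.functional as F

vocab_size, seq_len = 32, 6
logits = torch.randn(1, seq_len, vocab_size)       # main head: predicts token t+1 at position t
mtp_logits = torch.randn(1, seq_len, vocab_size)   # MTP head: predicts token t+2 at position t
labels = torch.randint(0, vocab_size, (1, seq_len))

main_loss = F.cross_entropy(logits[:, :-1].reshape(-1, vocab_size), labels[:, 1:].reshape(-1))

mtp_labels = torch.roll(labels, shifts=-1, dims=-1)
mtp_labels[:, -1] = -100                           # rolled-in position is ignored
mtp_loss = F.cross_entropy(
    mtp_logits[:, :-1].reshape(-1, vocab_size),
    mtp_labels[:, 1:].reshape(-1),
    ignore_index=-100,
)

mtp_loss_scaling_factor = 0.1                      # assumed value; set by the config in practice
total_loss = main_loss + mtp_loss_scaling_factor * mtp_loss
```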
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
 
 
 
 
 
 
 
 
1
+ {
2
+ "bos_token": "<|startoftext|>",
3
+ "cls_token": "[CLS]",
4
+ "eos_token": "<|endoftext|>",
5
+ "gmask_token": "[gMASK]",
6
+ "pad_token": "<|endoftext|>"
7
+ }
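Note that the pad token reuses the eos token in this map. A quick check, assuming the files above sit in a local checkpoint directory (placeholder path):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("path/to/checkpoint", trust_remote_code=True)
print(tok.bos_token, tok.eos_token, tok.pad_token)  # <|startoftext|> <|endoftext|> <|endoftext|>
assert tok.pad_token_id == tok.eos_token_id         # pad is mapped onto eos per special_tokens_map.json
```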
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,17 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "add_bos_token": false,
3
+ "add_eos_token": false,
4
+ "bos_token": "<|startoftext|>",
5
+ "chat_template": "{% for message in messages %}{% set role = message['role'] | lower %}{% if role == 'user' %}{% set role = 'HUMAN' %}{% endif %}{% set role = role | upper %}{{ '<role>' + role + '</role>' + message['content'] }}{% endfor %}{% if add_generation_prompt %}{{ '<role>ASSISTANT</role><think>\n' }}{% endif %}",
6
+ "clean_up_tokenization_spaces": false,
7
+ "cls_token": "[CLS]",
8
+ "eos_token": "<|endoftext|>",
9
+ "fast_tokenizer": true,
10
+ "gmask_token": "[gMASK]",
11
+ "merges_file": null,
12
+ "model_max_length": 1000000000000000019884624838656,
13
+ "pad_token": "<|endoftext|>",
14
+ "tokenizer_class": "PreTrainedTokenizerFast",
15
+ "trust_remote_code": true,
16
+ "vocab_file": null
17
+ }
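The `chat_template` above lower-cases each message role, maps `user` to `HUMAN`, upper-cases the result, wraps it in `<role>...</role>` tags, and appends `<role>ASSISTANT</role><think>\n` when a generation prompt is requested. A minimal sketch of what `apply_chat_template` should render under that template; the checkpoint path is a placeholder, and the expected string is derived by reading the template, not from running the model:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("path/to/checkpoint", trust_remote_code=True)
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is 17 * 24?"},
]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
# <role>SYSTEM</role>You are a helpful assistant.<role>HUMAN</role>What is 17 * 24?<role>ASSISTANT</role><think>
```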