haijunlv committed
Commit 7e66b3f · verified · 1 parent: 3f987f9

Add files using upload-large-folder tool

Files changed (50)
  1. 0092638_seism.npy +3 -0
  2. LICENSE.txt +201 -0
  3. README.md +305 -0
  4. added_tokens.json +53 -0
  5. chat_template.jinja +112 -0
  6. config.json +97 -0
  7. configuration_interns1_pro.py +225 -0
  8. deployment_guide.md +167 -0
  9. generation_config.json +14 -0
  10. merges.txt +0 -0
  11. model-00002-of-00153.safetensors +3 -0
  12. model-00006-of-00153.safetensors +3 -0
  13. model-00011-of-00153.safetensors +3 -0
  14. model-00014-of-00153.safetensors +3 -0
  15. model-00015-of-00153.safetensors +3 -0
  16. model-00021-of-00153.safetensors +3 -0
  17. model-00023-of-00153.safetensors +3 -0
  18. model-00024-of-00153.safetensors +3 -0
  19. model-00027-of-00153.safetensors +3 -0
  20. model-00029-of-00153.safetensors +3 -0
  21. model-00030-of-00153.safetensors +3 -0
  22. model-00031-of-00153.safetensors +3 -0
  23. model-00035-of-00153.safetensors +3 -0
  24. model-00036-of-00153.safetensors +3 -0
  25. model-00037-of-00153.safetensors +3 -0
  26. model-00038-of-00153.safetensors +3 -0
  27. model-00039-of-00153.safetensors +3 -0
  28. model-00041-of-00153.safetensors +3 -0
  29. model-00042-of-00153.safetensors +3 -0
  30. model-00049-of-00153.safetensors +3 -0
  31. model-00050-of-00153.safetensors +3 -0
  32. model-00052-of-00153.safetensors +3 -0
  33. model-00053-of-00153.safetensors +3 -0
  34. model-00054-of-00153.safetensors +3 -0
  35. model-00056-of-00153.safetensors +3 -0
  36. model-00058-of-00153.safetensors +3 -0
  37. model-00063-of-00153.safetensors +3 -0
  38. model-00071-of-00153.safetensors +3 -0
  39. model-00072-of-00153.safetensors +3 -0
  40. model-00077-of-00153.safetensors +3 -0
  41. model-00080-of-00153.safetensors +3 -0
  42. model-00083-of-00153.safetensors +3 -0
  43. model-00091-of-00153.safetensors +3 -0
  44. model-00096-of-00153.safetensors +3 -0
  45. model-00108-of-00153.safetensors +3 -0
  46. model-00123-of-00153.safetensors +3 -0
  47. model-00125-of-00153.safetensors +3 -0
  48. model-00149-of-00153.safetensors +3 -0
  49. modeling_interns1_pro.py +0 -0
  50. preprocessor_config.json +23 -0
0092638_seism.npy ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c2b94653c6964b630038897a27cb6d276ff866d9ecd1f6419358b9407f0df62e
+ size 72128
LICENSE.txt ADDED
@@ -0,0 +1,201 @@
1
+ Apache License
2
+ Version 2.0, January 2004
3
+ http://www.apache.org/licenses/
4
+
5
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
6
+
7
+ 1. Definitions.
8
+
9
+ "License" shall mean the terms and conditions for use, reproduction,
10
+ and distribution as defined by Sections 1 through 9 of this document.
11
+
12
+ "Licensor" shall mean the copyright owner or entity authorized by
13
+ the copyright owner that is granting the License.
14
+
15
+ "Legal Entity" shall mean the union of the acting entity and all
16
+ other entities that control, are controlled by, or are under common
17
+ control with that entity. For the purposes of this definition,
18
+ "control" means (i) the power, direct or indirect, to cause the
19
+ direction or management of such entity, whether by contract or
20
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
21
+ outstanding shares, or (iii) beneficial ownership of such entity.
22
+
23
+ "You" (or "Your") shall mean an individual or Legal Entity
24
+ exercising permissions granted by this License.
25
+
26
+ "Source" form shall mean the preferred form for making modifications,
27
+ including but not limited to software source code, documentation
28
+ source, and configuration files.
29
+
30
+ "Object" form shall mean any form resulting from mechanical
31
+ transformation or translation of a Source form, including but
32
+ not limited to compiled object code, generated documentation,
33
+ and conversions to other media types.
34
+
35
+ "Work" shall mean the work of authorship, whether in Source or
36
+ Object form, made available under the License, as indicated by a
37
+ copyright notice that is included in or attached to the work
38
+ (an example is provided in the Appendix below).
39
+
40
+ "Derivative Works" shall mean any work, whether in Source or Object
41
+ form, that is based on (or derived from) the Work and for which the
42
+ editorial revisions, annotations, elaborations, or other modifications
43
+ represent, as a whole, an original work of authorship. For the purposes
44
+ of this License, Derivative Works shall not include works that remain
45
+ separable from, or merely link (or bind by name) to the interfaces of,
46
+ the Work and Derivative Works thereof.
47
+
48
+ "Contribution" shall mean any work of authorship, including
49
+ the original version of the Work and any modifications or additions
50
+ to that Work or Derivative Works thereof, that is intentionally
51
+ submitted to Licensor for inclusion in the Work by the copyright owner
52
+ or by an individual or Legal Entity authorized to submit on behalf of
53
+ the copyright owner. For the purposes of this definition, "submitted"
54
+ means any form of electronic, verbal, or written communication sent
55
+ to the Licensor or its representatives, including but not limited to
56
+ communication on electronic mailing lists, source code control systems,
57
+ and issue tracking systems that are managed by, or on behalf of, the
58
+ Licensor for the purpose of discussing and improving the Work, but
59
+ excluding communication that is conspicuously marked or otherwise
60
+ designated in writing by the copyright owner as "Not a Contribution."
61
+
62
+ "Contributor" shall mean Licensor and any individual or Legal Entity
63
+ on behalf of whom a Contribution has been received by Licensor and
64
+ subsequently incorporated within the Work.
65
+
66
+ 2. Grant of Copyright License. Subject to the terms and conditions of
67
+ this License, each Contributor hereby grants to You a perpetual,
68
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
69
+ copyright license to reproduce, prepare Derivative Works of,
70
+ publicly display, publicly perform, sublicense, and distribute the
71
+ Work and such Derivative Works in Source or Object form.
72
+
73
+ 3. Grant of Patent License. Subject to the terms and conditions of
74
+ this License, each Contributor hereby grants to You a perpetual,
75
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
76
+ (except as stated in this section) patent license to make, have made,
77
+ use, offer to sell, sell, import, and otherwise transfer the Work,
78
+ where such license applies only to those patent claims licensable
79
+ by such Contributor that are necessarily infringed by their
80
+ Contribution(s) alone or by combination of their Contribution(s)
81
+ with the Work to which such Contribution(s) was submitted. If You
82
+ institute patent litigation against any entity (including a
83
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
84
+ or a Contribution incorporated within the Work constitutes direct
85
+ or contributory patent infringement, then any patent licenses
86
+ granted to You under this License for that Work shall terminate
87
+ as of the date such litigation is filed.
88
+
89
+ 4. Redistribution. You may reproduce and distribute copies of the
90
+ Work or Derivative Works thereof in any medium, with or without
91
+ modifications, and in Source or Object form, provided that You
92
+ meet the following conditions:
93
+
94
+ (a) You must give any other recipients of the Work or
95
+ Derivative Works a copy of this License; and
96
+
97
+ (b) You must cause any modified files to carry prominent notices
98
+ stating that You changed the files; and
99
+
100
+ (c) You must retain, in the Source form of any Derivative Works
101
+ that You distribute, all copyright, patent, trademark, and
102
+ attribution notices from the Source form of the Work,
103
+ excluding those notices that do not pertain to any part of
104
+ the Derivative Works; and
105
+
106
+ (d) If the Work includes a "NOTICE" text file as part of its
107
+ distribution, then any Derivative Works that You distribute must
108
+ include a readable copy of the attribution notices contained
109
+ within such NOTICE file, excluding those notices that do not
110
+ pertain to any part of the Derivative Works, in at least one
111
+ of the following places: within a NOTICE text file distributed
112
+ as part of the Derivative Works; within the Source form or
113
+ documentation, if provided along with the Derivative Works; or,
114
+ within a display generated by the Derivative Works, if and
115
+ wherever such third-party notices normally appear. The contents
116
+ of the NOTICE file are for informational purposes only and
117
+ do not modify the License. You may add Your own attribution
118
+ notices within Derivative Works that You distribute, alongside
119
+ or as an addendum to the NOTICE text from the Work, provided
120
+ that such additional attribution notices cannot be construed
121
+ as modifying the License.
122
+
123
+ You may add Your own copyright statement to Your modifications and
124
+ may provide additional or different license terms and conditions
125
+ for use, reproduction, or distribution of Your modifications, or
126
+ for any such Derivative Works as a whole, provided Your use,
127
+ reproduction, and distribution of the Work otherwise complies with
128
+ the conditions stated in this License.
129
+
130
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
131
+ any Contribution intentionally submitted for inclusion in the Work
132
+ by You to the Licensor shall be under the terms and conditions of
133
+ this License, without any additional terms or conditions.
134
+ Notwithstanding the above, nothing herein shall supersede or modify
135
+ the terms of any separate license agreement you may have executed
136
+ with Licensor regarding such Contributions.
137
+
138
+ 6. Trademarks. This License does not grant permission to use the trade
139
+ names, trademarks, service marks, or product names of the Licensor,
140
+ except as required for reasonable and customary use in describing the
141
+ origin of the Work and reproducing the content of the NOTICE file.
142
+
143
+ 7. Disclaimer of Warranty. Unless required by applicable law or
144
+ agreed to in writing, Licensor provides the Work (and each
145
+ Contributor provides its Contributions) on an "AS IS" BASIS,
146
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
147
+ implied, including, without limitation, any warranties or conditions
148
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
149
+ PARTICULAR PURPOSE. You are solely responsible for determining the
150
+ appropriateness of using or redistributing the Work and assume any
151
+ risks associated with Your exercise of permissions under this License.
152
+
153
+ 8. Limitation of Liability. In no event and under no legal theory,
154
+ whether in tort (including negligence), contract, or otherwise,
155
+ unless required by applicable law (such as deliberate and grossly
156
+ negligent acts) or agreed to in writing, shall any Contributor be
157
+ liable to You for damages, including any direct, indirect, special,
158
+ incidental, or consequential damages of any character arising as a
159
+ result of this License or out of the use or inability to use the
160
+ Work (including but not limited to damages for loss of goodwill,
161
+ work stoppage, computer failure or malfunction, or any and all
162
+ other commercial damages or losses), even if such Contributor
163
+ has been advised of the possibility of such damages.
164
+
165
+ 9. Accepting Warranty or Additional Liability. While redistributing
166
+ the Work or Derivative Works thereof, You may choose to offer,
167
+ and charge a fee for, acceptance of support, warranty, indemnity,
168
+ or other liability obligations and/or rights consistent with this
169
+ License. However, in accepting such obligations, You may act only
170
+ on Your own behalf and on Your sole responsibility, not on behalf
171
+ of any other Contributor, and only if You agree to indemnify,
172
+ defend, and hold each Contributor harmless for any liability
173
+ incurred by, or claims asserted against, such Contributor by reason
174
+ of your accepting any such warranty or additional liability.
175
+
176
+ END OF TERMS AND CONDITIONS
177
+
178
+ APPENDIX: How to apply the Apache License to your work.
179
+
180
+ To apply the Apache License to your work, attach the following
181
+ boilerplate notice, with the fields enclosed by brackets "[]"
182
+ replaced with your own identifying information. (Don't include
183
+ the brackets!) The text should be enclosed in the appropriate
184
+ comment syntax for the file format. We also recommend that a
185
+ file or class name and description of purpose be included on the
186
+ same "printed page" as the copyright notice for easier
187
+ identification within third-party archives.
188
+
189
+ Copyright 2025-2026 Shanghai AI Laboratory
190
+
191
+ Licensed under the Apache License, Version 2.0 (the "License");
192
+ you may not use this file except in compliance with the License.
193
+ You may obtain a copy of the License at
194
+
195
+ http://www.apache.org/licenses/LICENSE-2.0
196
+
197
+ Unless required by applicable law or agreed to in writing, software
198
+ distributed under the License is distributed on an "AS IS" BASIS,
199
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
200
+ See the License for the specific language governing permissions and
201
+ limitations under the License.
README.md ADDED
@@ -0,0 +1,305 @@
+ ---
+ license: apache-2.0
+ pipeline_tag: image-text-to-text
+ library_name: transformers
+ ---
+
+ ## Intern-S1-Pro
+
+ <div align="center">
+ <img src="./figs/title.png" />
+
+ <div>&nbsp;</div>
+
+ [💻Github Repo](https://github.com/InternLM/Intern-S1) • [🤗Model Collections](https://huggingface.co/collections/internlm/intern-s1-6882e325e8ac1c58ba108aa5) • [📜Technical Report](https://arxiv.org/abs/2508.15763) • [💬Online Chat](https://chat.intern-ai.org.cn/)
+
+ </div>
+
+ <p align="center">
+ 👋 Join us on <a href="https://discord.gg/xa29JuW87d" target="_blank">Discord</a> and <a href="https://cdn.vansin.top/intern-s1.jpg" target="_blank">WeChat</a>
+ </p>
+
+ ## Introduction
+
+ We introduce **Intern-S1-Pro**, a trillion-scale MoE multimodal scientific reasoning model. Intern-S1-Pro scales to 1T total parameters with 512 experts, activating 8 experts per token (22B activated parameters). The model delivers top-tier performance on advanced reasoning benchmarks and achieves leading results across key AI4Science domains (chemistry, materials, life science, earth science, etc.), while maintaining strong general multimodal and text capabilities.
+
+ ### Features
+
+ - **State-of-the-art scientific reasoning**, competitive with leading closed-source models across AI4Science tasks.
+ - **Strong general multimodal performance** on various benchmarks.
+ - **Trillion-scale MoE training efficiency** with **STE** routing (dense gradients for router training) and **grouped routing** for stable convergence and balanced expert parallelism.
+ - **Fourier Position Encoding (FoPE) + upgraded time-series modeling** for better physical-signal representation; supports long, heterogeneous time series (10^0–10^6 points).
+
+ ### Performance
+
+ We evaluate Intern-S1-Pro on a variety of benchmarks, covering both general and scientific datasets, and report the comparison with recent VLMs and LLMs below.
+
+ ![performance](./figs/performance.jpeg)
+
+ > **Note**: <u>Underline</u> marks the best performance among open-source models; **Bold** indicates the best performance among all models.
+
+ We use [OpenCompass](https://github.com/open-compass/OpenCompass/) and [VLMEvalKit](https://github.com/open-compass/vlmevalkit) to evaluate all models.
+
+ ## Quick Start
+
+ ### Sampling Parameters
+
+ We recommend the following sampling hyperparameters for better results:
+
+ ```python
+ top_p = 0.95
+ top_k = 50
+ min_p = 0.0
+ temperature = 0.8
+ ```
+
+ ### Serving
+
+ > [!IMPORTANT]
+ > Running a trillion-parameter model through the native Hugging Face forward pass is challenging. We strongly recommend using an LLM inference engine (such as LMDeploy, vLLM, or SGLang) to host Intern-S1-Pro and accessing the model via its API.
+
+ Intern-S1-Pro can be deployed with any of the following LLM inference frameworks:
+
+ - LMDeploy
+ - vLLM
+ - SGLang
+
+ Detailed deployment examples for these frameworks are available in the [Model Deployment Guide](./deployment_guide.md).
+
+ > Deployment support for the time-series module is under optimization and will be released soon.
+
+ ## Advanced Usage
+
+ ### Tool Calling
+
+ Many Large Language Models (LLMs) now feature tool calling, a powerful capability that lets them extend their functionality by interacting with external tools and APIs. This enables models to perform tasks such as fetching up-to-the-minute information, running code, or calling functions within other applications.
+
+ A key advantage for developers is that a growing number of open-source LLMs are designed to be compatible with the OpenAI API. This means you can use the same familiar syntax and structure from the OpenAI library to implement tool calling with these open-source models. As a result, the code demonstrated in this tutorial is versatile: it works not just with OpenAI models, but with any model that follows the same interface standard.
+
+ To illustrate how this works, let's walk through a practical code example that uses tool calling to get the latest weather forecast (based on an LMDeploy API server).
+
+ ```python
+ from openai import OpenAI
+ import json
+
+
+ def get_current_temperature(location: str, unit: str = "celsius"):
+     """Get current temperature at a location.
+
+     Args:
+         location: The location to get the temperature for, in the format "City, State, Country".
+         unit: The unit to return the temperature in. Defaults to "celsius". (choices: ["celsius", "fahrenheit"])
+
+     Returns:
+         the temperature, the location, and the unit in a dict
+     """
+     return {
+         "temperature": 26.1,
+         "location": location,
+         "unit": unit,
+     }
+
+
+ def get_temperature_date(location: str, date: str, unit: str = "celsius"):
+     """Get temperature at a location and date.
+
+     Args:
+         location: The location to get the temperature for, in the format "City, State, Country".
+         date: The date to get the temperature for, in the format "Year-Month-Day".
+         unit: The unit to return the temperature in. Defaults to "celsius". (choices: ["celsius", "fahrenheit"])
+
+     Returns:
+         the temperature, the location, the date and the unit in a dict
+     """
+     return {
+         "temperature": 25.9,
+         "location": location,
+         "date": date,
+         "unit": unit,
+     }
+
+
+ def get_function_by_name(name):
+     if name == "get_current_temperature":
+         return get_current_temperature
+     if name == "get_temperature_date":
+         return get_temperature_date
+
+
+ tools = [{
+     'type': 'function',
+     'function': {
+         'name': 'get_current_temperature',
+         'description': 'Get current temperature at a location.',
+         'parameters': {
+             'type': 'object',
+             'properties': {
+                 'location': {
+                     'type': 'string',
+                     'description': 'The location to get the temperature for, in the format \'City, State, Country\'.'
+                 },
+                 'unit': {
+                     'type': 'string',
+                     'enum': [
+                         'celsius',
+                         'fahrenheit'
+                     ],
+                     'description': 'The unit to return the temperature in. Defaults to \'celsius\'.'
+                 }
+             },
+             'required': [
+                 'location'
+             ]
+         }
+     }
+ }, {
+     'type': 'function',
+     'function': {
+         'name': 'get_temperature_date',
+         'description': 'Get temperature at a location and date.',
+         'parameters': {
+             'type': 'object',
+             'properties': {
+                 'location': {
+                     'type': 'string',
+                     'description': 'The location to get the temperature for, in the format \'City, State, Country\'.'
+                 },
+                 'date': {
+                     'type': 'string',
+                     'description': 'The date to get the temperature for, in the format \'Year-Month-Day\'.'
+                 },
+                 'unit': {
+                     'type': 'string',
+                     'enum': [
+                         'celsius',
+                         'fahrenheit'
+                     ],
+                     'description': 'The unit to return the temperature in. Defaults to \'celsius\'.'
+                 }
+             },
+             'required': [
+                 'location',
+                 'date'
+             ]
+         }
+     }
+ }]
+
+ messages = [
+     {'role': 'user', 'content': 'Today is 2024-11-14, What\'s the temperature in San Francisco now? How about tomorrow?'}
+ ]
+
+ openai_api_key = "EMPTY"
+ openai_api_base = "http://0.0.0.0:23333/v1"
+ client = OpenAI(
+     api_key=openai_api_key,
+     base_url=openai_api_base,
+ )
+ model_name = client.models.list().data[0].id
+ response = client.chat.completions.create(
+     model=model_name,
+     messages=messages,
+     max_tokens=32768,
+     temperature=0.8,
+     top_p=0.95,
+     extra_body=dict(spaces_between_special_tokens=False),
+     tools=tools)
+ print(response.choices[0].message)
+ messages.append(response.choices[0].message)
+
+ for tool_call in response.choices[0].message.tool_calls:
+     tool_call_args = json.loads(tool_call.function.arguments)
+     tool_call_result = get_function_by_name(tool_call.function.name)(**tool_call_args)
+     tool_call_result = json.dumps(tool_call_result, ensure_ascii=False)
+     messages.append({
+         'role': 'tool',
+         'name': tool_call.function.name,
+         'content': tool_call_result,
+         'tool_call_id': tool_call.id
+     })
+
+ response = client.chat.completions.create(
+     model=model_name,
+     messages=messages,
+     temperature=0.8,
+     top_p=0.95,
+     extra_body=dict(spaces_between_special_tokens=False),
+     tools=tools)
+ print(response.choices[0].message)
+ ```
+
+ ### Switching Between Thinking and Non-Thinking Modes
+
+ Intern-S1-Pro enables thinking mode by default, enhancing the model's reasoning capabilities to generate higher-quality responses. This feature can be disabled by setting `enable_thinking=False` in `tokenizer.apply_chat_template`:
+
+ ```python
+ text = tokenizer.apply_chat_template(
+     messages,
+     tokenize=False,
+     add_generation_prompt=True,
+     enable_thinking=False  # disable thinking mode
+ )
+ ```
+
+ When serving Intern-S1-Pro, you can dynamically control the thinking mode by adjusting the `enable_thinking` parameter in your requests:
+
+ ```python
+ from openai import OpenAI
+ import json
+
+ messages = [
+     {
+         'role': 'user',
+         'content': 'who are you'
+     }, {
+         'role': 'assistant',
+         'content': 'I am an AI'
+     }, {
+         'role': 'user',
+         'content': 'AGI is?'
+     }]
+
+ openai_api_key = "EMPTY"
+ openai_api_base = "http://0.0.0.0:23333/v1"
+ client = OpenAI(
+     api_key=openai_api_key,
+     base_url=openai_api_base,
+ )
+ model_name = client.models.list().data[0].id
+
+ response = client.chat.completions.create(
+     model=model_name,
+     messages=messages,
+     temperature=0.8,
+     top_p=0.95,
+     max_tokens=2048,
+     extra_body={
+         "chat_template_kwargs": {"enable_thinking": False}
+     }
+ )
+ print(json.dumps(response.model_dump(), indent=2, ensure_ascii=False))
+ ```
+
+ ## Citation
+
+ If you find this work useful, please consider citing it:
+
+ ```
+ @misc{bai2025interns1scientificmultimodalfoundation,
+ title={Intern-S1: A Scientific Multimodal Foundation Model},
+ author={Lei Bai and Zhongrui Cai and Maosong Cao and Weihan Cao and Chiyu Chen and Haojiong Chen and Kai Chen and Pengcheng Chen and Ying Chen and Yongkang Chen and Yu Cheng and Yu Cheng and Pei Chu and Tao Chu and Erfei Cui and Ganqu Cui and Long Cui and Ziyun Cui and Nianchen Deng and Ning Ding and Nanqin Dong and Peijie Dong and Shihan Dou and Sinan Du and Haodong Duan and Caihua Fan and Ben Gao and Changjiang Gao and Jianfei Gao and Songyang Gao and Yang Gao and Zhangwei Gao and Jiaye Ge and Qiming Ge and Lixin Gu and Yuzhe Gu and Aijia Guo and Qipeng Guo and Xu Guo and Conghui He and Junjun He and Yili Hong and Siyuan Hou and Caiyu Hu and Hanglei Hu and Jucheng Hu and Ming Hu and Zhouqi Hua and Haian Huang and Junhao Huang and Xu Huang and Zixian Huang and Zhe Jiang and Lingkai Kong and Linyang Li and Peiji Li and Pengze Li and Shuaibin Li and Tianbin Li and Wei Li and Yuqiang Li and Dahua Lin and Junyao Lin and Tianyi Lin and Zhishan Lin and Hongwei Liu and Jiangning Liu and Jiyao Liu and Junnan Liu and Kai Liu and Kaiwen Liu and Kuikun Liu and Shichun Liu and Shudong Liu and Wei Liu and Xinyao Liu and Yuhong Liu and Zhan Liu and Yinquan Lu and Haijun Lv and Hongxia Lv and Huijie Lv and Qidang Lv and Ying Lv and Chengqi Lyu and Chenglong Ma and Jianpeng Ma and Ren Ma and Runmin Ma and Runyuan Ma and Xinzhu Ma and Yichuan Ma and Zihan Ma and Sixuan Mi and Junzhi Ning and Wenchang Ning and Xinle Pang and Jiahui Peng and Runyu Peng and Yu Qiao and Jiantao Qiu and Xiaoye Qu and Yuan Qu and Yuchen Ren and Fukai Shang and Wenqi Shao and Junhao Shen and Shuaike Shen and Chunfeng Song and Demin Song and Diping Song and Chenlin Su and Weijie Su and Weigao Sun and Yu Sun and Qian Tan and Cheng Tang and Huanze Tang and Kexian Tang and Shixiang Tang and Jian Tong and Aoran Wang and Bin Wang and Dong Wang and Lintao Wang and Rui Wang and Weiyun Wang and Wenhai Wang and Yi Wang and Ziyi Wang and Ling-I Wu and Wen Wu and Yue Wu and Zijian Wu and Linchen Xiao and Shuhao Xing and Chao Xu and Huihui Xu and Jun Xu and Ruiliang Xu and Wanghan Xu and GanLin Yang and Yuming Yang and Haochen Ye and Jin Ye and Shenglong Ye and Jia Yu and Jiashuo Yu and Jing Yu and Fei Yuan and Bo Zhang and Chao Zhang and Chen Zhang and Hongjie Zhang and Jin Zhang and Qiaosheng Zhang and Qiuyinzhe Zhang and Songyang Zhang and Taolin Zhang and Wenlong Zhang and Wenwei Zhang and Yechen Zhang and Ziyang Zhang and Haiteng Zhao and Qian Zhao and Xiangyu Zhao and Xiangyu Zhao and Bowen Zhou and Dongzhan Zhou and Peiheng Zhou and Yuhao Zhou and Yunhua Zhou and Dongsheng Zhu and Lin Zhu and Yicheng Zou},
+ year={2025},
+ eprint={2508.15763},
+ archivePrefix={arXiv},
+ primaryClass={cs.LG},
+ url={https://arxiv.org/abs/2508.15763},
+ }
+ ```
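Editor's note: the Quick Start above recommends `top_k` and `min_p`, which are not part of the standard OpenAI request schema. Below is a minimal sketch (my addition, not from the model card) of passing the full recommended sampling configuration to an OpenAI-compatible endpoint such as those in the deployment guide; whether `extra_body` fields are honoured depends on the serving engine, so treat that part as an assumption to verify.

```python
# Sketch: applying the recommended sampling parameters through an
# OpenAI-compatible server; the address is taken from the examples above.
from openai import OpenAI

client = OpenAI(api_key="EMPTY", base_url="http://0.0.0.0:23333/v1")
model_name = client.models.list().data[0].id

response = client.chat.completions.create(
    model=model_name,
    messages=[{"role": "user", "content": "Explain catalytic cracking in one paragraph."}],
    temperature=0.8,
    top_p=0.95,
    # top_k / min_p are engine-specific extensions, not OpenAI-standard fields.
    extra_body={"top_k": 50, "min_p": 0.0},
)
print(response.choices[0].message.content)
```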
added_tokens.json ADDED
@@ -0,0 +1,53 @@
+ {
+   "</SMILES>": 151687,
+   "</box>": 151677,
+   "</dna>": 151691,
+   "</img>": 151671,
+   "</protein>": 151689,
+   "</quad>": 151673,
+   "</ref>": 151675,
+   "</rna>": 151693,
+   "</think>": 151668,
+   "</tool_call>": 151658,
+   "</tool_response>": 151666,
+   "<IMG_CONTEXT>": 151669,
+   "<SMILES>": 151686,
+   "<TS_CONTEXT>": 151685,
+   "<box>": 151676,
+   "<dna>": 151690,
+   "<img>": 151670,
+   "<protein>": 151688,
+   "<quad>": 151672,
+   "<ref>": 151674,
+   "<rna>": 151692,
+   "<think>": 151667,
+   "<tool_call>": 151657,
+   "<tool_response>": 151665,
+   "<video>": 151682,
+   "<|/ts|>": 151684,
+   "<|action_end|>": 151679,
+   "<|action_start|>": 151678,
+   "<|box_end|>": 151649,
+   "<|box_start|>": 151648,
+   "<|endoftext|>": 151643,
+   "<|file_sep|>": 151664,
+   "<|fim_middle|>": 151660,
+   "<|fim_pad|>": 151662,
+   "<|fim_prefix|>": 151659,
+   "<|fim_suffix|>": 151661,
+   "<|im_end|>": 151645,
+   "<|im_start|>": 151644,
+   "<|image_pad|>": 151655,
+   "<|interpreter|>": 151680,
+   "<|object_ref_end|>": 151647,
+   "<|object_ref_start|>": 151646,
+   "<|plugin|>": 151681,
+   "<|quad_end|>": 151651,
+   "<|quad_start|>": 151650,
+   "<|repo_name|>": 151663,
+   "<|ts|>": 151683,
+   "<|video_pad|>": 151656,
+   "<|vision_end|>": 151653,
+   "<|vision_pad|>": 151654,
+   "<|vision_start|>": 151652
+ }
chat_template.jinja ADDED
@@ -0,0 +1,112 @@
+ {%- set image_count = namespace(value=0) %}
+ {%- set video_count = namespace(value=0) %}
+ {%- macro render_content(content, do_vision_count) %}
+ {%- if content is string %}
+ {{- content }}
+ {%- else %}
+ {%- for item in content %}
+ {%- if 'image' in item or 'image_url' in item or item.type == 'image' %}
+ {%- if do_vision_count %}
+ {%- set image_count.value = image_count.value + 1 %}
+ {%- endif %}
+ {{- 'Picture ' + image_count.value|string + ': <|vision_start|><|image_pad|><|vision_end|>'-}}
+ {%- elif 'video' in item or item.type == 'video' %}
+ {%- if do_vision_count %}
+ {%- set video_count.value = video_count.value + 1 %}
+ {%- endif %}
+ {{- 'Video ' + video_count.value|string + ': <|vision_start|><|video_pad|><|vision_end|>'-}}
+ {%- elif 'text' in item %}
+ {{- item.text }}
+ {%- elif 'time_series' in item or item.type == 'time_series' %}
+ {{- '<|ts|><TS_CONTEXT><|/ts|>'-}}
+ {%- endif %}
+ {%- endfor %}
+ {%- endif %}
+ {%- endmacro %}
+ {%- if tools %}
+ {{- '<|im_start|>system\n' }}
+ {%- if messages[0].role == 'system' %}
+ {{- render_content(messages[0].content, false) + '\n\n' }}
+ {%- endif %}
+ {{- "# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
+ {%- for tool in tools %}
+ {{- "\n" }}
+ {{- tool | tojson }}
+ {%- endfor %}
+ {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
+ {%- else %}
+ {%- if messages[0].role == 'system' %}
+ {{- '<|im_start|>system\n' + render_content(messages[0].content, false) + '<|im_end|>\n' }}
+ {%- endif %}
+ {%- endif %}
+ {%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}
+ {%- for message in messages[::-1] %}
+ {%- set index = (messages|length - 1) - loop.index0 %}
+ {%- if ns.multi_step_tool and message.role == "user" %}
+ {%- set content = render_content(message.content, false) %}
+ {%- if not(content.startswith('<tool_response>') and content.endswith('</tool_response>')) %}
+ {%- set ns.multi_step_tool = false %}
+ {%- set ns.last_query_index = index %}
+ {%- endif %}
+ {%- endif %}
+ {%- endfor %}
+ {%- for message in messages %}
+ {%- set content = render_content(message.content, True) %}
+ {%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
+ {{- '<|im_start|>' + message.role + '\n' + content + '<|im_end|>' + '\n' }}
+ {%- elif message.role == "assistant" %}
+ {%- set reasoning_content = '' %}
+ {%- if message.reasoning_content is string %}
+ {%- set reasoning_content = message.reasoning_content %}
+ {%- else %}
+ {%- if '</think>' in content %}
+ {%- set reasoning_content = content.split('</think>')[0].rstrip('\n').split('<think>')[-1].lstrip('\n') %}
+ {%- set content = content.split('</think>')[-1].lstrip('\n') %}
+ {%- endif %}
+ {%- endif %}
+ {%- if loop.index0 > ns.last_query_index %}
+ {%- if loop.last or (not loop.last and reasoning_content) %}
+ {{- '<|im_start|>' + message.role + '\n<think>' + reasoning_content.strip('\n') + '</think>\n\n' + content.lstrip('\n') }}
+ {%- else %}
+ {{- '<|im_start|>' + message.role + '\n' + content }}
+ {%- endif %}
+ {%- else %}
+ {{- '<|im_start|>' + message.role + '\n' + content }}
+ {%- endif %}
+ {%- if message.tool_calls %}
+ {%- for tool_call in message.tool_calls %}
+ {%- if (loop.first and content) or (not loop.first) %}
+ {{- '\n' }}
+ {%- endif %}
+ {%- if tool_call.function %}
+ {%- set tool_call = tool_call.function %}
+ {%- endif %}
+ {{- '<tool_call>\n{"name": "' }}
+ {{- tool_call.name }}
+ {{- '", "arguments": ' }}
+ {%- if tool_call.arguments is string %}
+ {{- tool_call.arguments }}
+ {%- else %}
+ {{- tool_call.arguments | tojson }}
+ {%- endif %}
+ {{- '}\n</tool_call>' }}
+ {%- endfor %}
+ {%- endif %}
+ {{- '<|im_end|>\n' }}
+ {%- elif message.role == "tool" %}
+ {%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
+ {{- '<|im_start|>user' }}
+ {%- endif %}
+ {{- '\n<tool_response>\n' }}
+ {{- content }}
+ {{- '\n</tool_response>' }}
+ {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
+ {{- '<|im_end|>\n' }}
+ {%- endif %}
+ {%- endif %}
+ {%- endfor %}
+ {%- if add_generation_prompt %}
+ {{- '<|im_start|>assistant\n' }}
+ {%- if enable_thinking is defined and not enable_thinking %}{{- '<think></think>\n\n'-}}{% endif %}
+ {%- if enable_thinking is not defined or enable_thinking %}{{- '<think>'-}}{% endif %}
+ {%- endif %}
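Editor's note: a small usage sketch (my addition, not part of the repo) of how this template is exercised through `tokenizer.apply_chat_template`. The repo id and the dict-style content items are assumptions inferred from the template's own branches (image, video, text, time_series); a `time_series` item is rendered as the `<|ts|><TS_CONTEXT><|/ts|>` placeholder.

```python
# Sketch: rendering a multimodal prompt with the chat template above.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("internlm/Intern-S1-Pro", trust_remote_code=True)

messages = [
    {"role": "user", "content": [
        {"type": "text", "text": "Describe the anomaly in this signal."},
        {"type": "time_series"},  # expands to <|ts|><TS_CONTEXT><|/ts|>
    ]},
]

prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,   # appends <|im_start|>assistant and the <think> opener
    enable_thinking=True,
)
print(prompt)
```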
config.json ADDED
@@ -0,0 +1,97 @@
+ {
+   "architectures": [
+     "InternS1ProForConditionalGeneration"
+   ],
+   "image_token_id": 151655,
+   "model_type": "interns1_pro",
+   "text_config": {
+     "attention_bias": false,
+     "attention_dropout": 0.0,
+     "bos_token_id": 151643,
+     "decoder_sparse_step": 1,
+     "dtype": "bfloat16",
+     "eos_token_id": 151645,
+     "head_dim": 128,
+     "hidden_act": "silu",
+     "hidden_size": 4096,
+     "initializer_range": 0.02,
+     "intermediate_size": 12288,
+     "max_position_embeddings": 262144,
+     "mlp_only_layers": [],
+     "model_type": "interns1_pro_text",
+     "moe_intermediate_size": 1536,
+     "norm_topk_prob": true,
+     "num_attention_heads": 64,
+     "num_experts": 512,
+     "num_experts_per_tok": 8,
+     "num_hidden_layers": 94,
+     "num_key_value_heads": 4,
+     "rms_norm_eps": 1e-06,
+     "rope_scaling": {
+       "rope_type": "default",
+       "fope_init_factor": 0.5,
+       "fope_sep_head": true,
+       "num_inv_freq": null
+     },
+     "rope_theta": 5000000,
+     "router_n_groups": 8,
+     "use_cache": true,
+     "vocab_size": 155008
+   },
+   "tie_word_embeddings": false,
+   "transformers_version": "4.57.0.dev0",
+   "video_token_id": 151656,
+   "vision_config": {
+     "depth": 24,
+     "hidden_act": "gelu_pytorch_tanh",
+     "hidden_size": 1024,
+     "in_channels": 3,
+     "initializer_range": 0.02,
+     "intermediate_size": 4096,
+     "model_type": "interns1_pro_vision",
+     "num_heads": 16,
+     "num_position_embeddings": 2304,
+     "out_hidden_size": 4096,
+     "patch_size": 16,
+     "spatial_merge_size": 2,
+     "temporal_patch_size": 2
+   },
+   "vision_end_token_id": 151653,
+   "vision_start_token_id": 151652,
+   "ts_config": {
+     "auto_map": {
+       "AutoConfig": "configuration_interns1_pro.InternS1ProTimeSeriesConfig",
+       "AutoModel": "modeling_interns1_pro.InternS1ProTimeSeriesModel"
+     },
+     "activation_dropout": 0.0,
+     "activation_function": "gelu",
+     "architectures": [
+       "InternS1TimeSeriesModel"
+     ],
+     "attention_dropout": 0.0,
+     "d_model": 768,
+     "dropout": 0.0,
+     "dtype": "bfloat16",
+     "encoder_attention_heads": 8,
+     "encoder_ffn_dim": 3072,
+     "encoder_layerdrop": 0.0,
+     "encoder_layers": 17,
+     "model_type": "interns1_pro_time_series",
+     "max_source_positions": 1500,
+     "num_mel_bins": 80,
+     "out_hidden_size": 4096,
+     "scale_embedding": false,
+     "ts_adapt_in_dim": 256,
+     "ts_adapt_out_dim": 1024,
+     "use_cache": true,
+     "attn_implementation": "eager"
+   },
+   "ts_end_id": 151684,
+   "ts_start_id": 151683,
+   "ts_token_id": 151685,
+   "auto_map": {
+     "AutoConfig": "configuration_interns1_pro.InternS1ProConfig",
+     "AutoModel": "modeling_interns1_pro.InternS1ProModel",
+     "AutoModelForCausalLM": "modeling_interns1_pro.InternS1ProForConditionalGeneration"
+   }
+ }
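Editor's note: a back-of-envelope sketch (my own, not an official breakdown) checking that the `text_config` above is consistent with the "22B activated parameters" figure quoted in the README: per token, 8 of the 512 routed experts fire in each of the 94 layers. Norms, the router, and the vision/time-series towers are ignored.

```python
# Rough activated-parameter estimate from the config values above.
hidden, moe_inter = 4096, 1536
heads, kv_heads, head_dim = 64, 4, 128
layers, experts_per_tok = 94, 8
vocab = 155008

attn = hidden * heads * head_dim * 2 + hidden * kv_heads * head_dim * 2  # q/o plus k/v projections
experts = experts_per_tok * 3 * hidden * moe_inter                       # gate/up/down per routed expert
per_layer = attn + experts
embeddings = 2 * vocab * hidden                                          # input embeddings + untied LM head

activated = per_layer * layers + embeddings
print(f"~{activated / 1e9:.1f}B activated parameters per token")         # prints roughly 22B
```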
configuration_interns1_pro.py ADDED
@@ -0,0 +1,225 @@
1
+ # coding=utf-8
2
+ # Copyright 2025 HuggingFace Inc. team. All rights reserved.
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+
16
+ from transformers.configuration_utils import PretrainedConfig
17
+ from transformers.modeling_rope_utils import rope_config_validation
18
+ from transformers import WhisperConfig
19
+
20
+
21
+ class InternS1ProTextConfig(PretrainedConfig):
22
+ model_type = "interns1_pro_text"
23
+ base_config_key = "text_config"
24
+ keys_to_ignore_at_inference = ["past_key_values"]
25
+ base_model_tp_plan = {
26
+ "layers.*.self_attn.q_proj": "colwise",
27
+ "layers.*.self_attn.k_proj": "colwise",
28
+ "layers.*.self_attn.v_proj": "colwise",
29
+ "layers.*.self_attn.o_proj": "rowwise",
30
+ "layers.*.mlp.experts.*.gate_proj": "colwise",
31
+ "layers.*.mlp.experts.*.up_proj": "colwise",
32
+ "layers.*.mlp.experts.*.down_proj": "rowwise",
33
+ "layers.*.mlp.gate_proj": "colwise",
34
+ "layers.*.mlp.up_proj": "colwise",
35
+ "layers.*.mlp.down_proj": "rowwise",
36
+ }
37
+ base_model_pp_plan = {
38
+ "embed_tokens": (["input_ids"], ["inputs_embeds"]),
39
+ "layers": (["hidden_states", "attention_mask"], ["hidden_states"]),
40
+ "norm": (["hidden_states"], ["hidden_states"]),
41
+ }
42
+
43
+ def __init__(
44
+ self,
45
+ vocab_size=151936,
46
+ hidden_size=2048,
47
+ intermediate_size=5632,
48
+ num_hidden_layers=24,
49
+ num_attention_heads=16,
50
+ num_key_value_heads=16,
51
+ hidden_act="silu",
52
+ max_position_embeddings=128000,
53
+ initializer_range=0.02,
54
+ rms_norm_eps=1e-6,
55
+ use_cache=True,
56
+ tie_word_embeddings=False,
57
+ rope_theta=5000000.0,
58
+ attention_bias=False,
59
+ attention_dropout=0.0,
60
+ decoder_sparse_step=1,
61
+ moe_intermediate_size=1408,
62
+ num_experts_per_tok=4,
63
+ num_experts=60,
64
+ norm_topk_prob=True,
65
+ router_aux_loss_coef=0.001,
66
+ mlp_only_layers=None,
67
+ rope_scaling=None,
68
+ head_dim=None,
69
+ **kwargs,
70
+ ):
71
+ self.vocab_size = vocab_size
72
+ self.max_position_embeddings = max_position_embeddings
73
+ self.hidden_size = hidden_size
74
+ self.intermediate_size = intermediate_size
75
+ self.num_hidden_layers = num_hidden_layers
76
+ self.num_attention_heads = num_attention_heads
77
+
78
+ # for backward compatibility
79
+ if num_key_value_heads is None:
80
+ num_key_value_heads = num_attention_heads
81
+
82
+ self.num_key_value_heads = num_key_value_heads
83
+ self.hidden_act = hidden_act
84
+ self.initializer_range = initializer_range
85
+ self.rms_norm_eps = rms_norm_eps
86
+ self.use_cache = use_cache
87
+ self.rope_theta = rope_theta
88
+ self.attention_bias = attention_bias
89
+ self.attention_dropout = attention_dropout
90
+ self.rope_scaling = rope_scaling
91
+ self.head_dim = head_dim or hidden_size // num_attention_heads
92
+
93
+ rope_config_validation(self, ignore_keys={"fope_init_factor", "fope_sep_head", "num_inv_freq"})
94
+
95
+ # MoE arguments
96
+ self.decoder_sparse_step = decoder_sparse_step
97
+ self.moe_intermediate_size = moe_intermediate_size
98
+ self.num_experts_per_tok = num_experts_per_tok
99
+ self.num_experts = num_experts
100
+ self.norm_topk_prob = norm_topk_prob
101
+ self.router_aux_loss_coef = router_aux_loss_coef
102
+ self.mlp_only_layers = [] if mlp_only_layers is None else mlp_only_layers
103
+
104
+ super().__init__(tie_word_embeddings=tie_word_embeddings, **kwargs)
105
+
106
+
107
+ class InternS1ProVisionConfig(PretrainedConfig):
108
+ model_type = "interns1_pro_vision"
109
+ base_config_key = "vision_config"
110
+
111
+ def __init__(
112
+ self,
113
+ depth=27,
114
+ hidden_size=1152,
115
+ hidden_act="gelu_pytorch_tanh",
116
+ intermediate_size=4304,
117
+ num_heads=16,
118
+ in_channels=3,
119
+ patch_size=16,
120
+ spatial_merge_size=2,
121
+ temporal_patch_size=2,
122
+ out_hidden_size=3584,
123
+ num_position_embeddings=2304,
124
+ initializer_range=0.02,
125
+ **kwargs,
126
+ ):
127
+ super().__init__(**kwargs)
128
+
129
+ self.depth = depth
130
+ self.hidden_size = hidden_size
131
+ self.hidden_act = hidden_act
132
+ self.intermediate_size = intermediate_size
133
+ self.num_heads = num_heads
134
+ self.in_channels = in_channels
135
+ self.patch_size = patch_size
136
+ self.spatial_merge_size = spatial_merge_size
137
+ self.temporal_patch_size = temporal_patch_size
138
+ self.out_hidden_size = out_hidden_size
139
+ self.num_position_embeddings = num_position_embeddings
140
+ self.initializer_range = initializer_range
141
+
142
+ class InternS1ProTimeSeriesConfig(WhisperConfig):
143
+
144
+ model_type = "interns1_pro_time_series"
145
+ base_config_key = "ts_config"
146
+
147
+ def __init__(
148
+ self,
149
+ ts_adapt_in_dim: int=256,
150
+ ts_adapt_out_dim: int=1024,
151
+ ts_hidden_dim: int=1024,
152
+ ts_cnn_channels: list[int]=[1, 32, 64, 128, 128],
153
+ ts_cnn_kernel_sizes: list[int]=[3, 5, 5, 5],
154
+ ts_cnn_strides: list[int]=[2, 4, 4, 5],
155
+ ts_cnn_paddings: list[int]=[1, 2, 2, 2],
156
+ ts_concat_subsampling_in_channels: int=128,
157
+ ts_concat_subsampling_concat_size: int=2,
158
+ use_flash_attn: bool=False,
159
+ **kwargs
160
+ ):
161
+ super().__init__(**kwargs)
162
+
163
+ self.ts_cnn_channels = ts_cnn_channels
164
+ self.ts_cnn_kernel_sizes = ts_cnn_kernel_sizes
165
+ self.ts_cnn_strides = ts_cnn_strides
166
+ self.ts_cnn_paddings = ts_cnn_paddings
167
+ self.ts_concat_subsampling_in_channels = ts_concat_subsampling_in_channels
168
+ self.ts_concat_subsampling_concat_size = ts_concat_subsampling_concat_size
169
+
170
+ self.ts_adapt_in_dim = ts_adapt_in_dim
171
+ self.ts_adapt_out_dim = ts_adapt_out_dim
172
+
173
+ self.ts_hidden_dim = ts_hidden_dim
174
+ self.use_flash_attn = use_flash_attn
175
+
176
+ assert self.ts_adapt_out_dim == self.ts_hidden_dim, "ts_adapt_out_dim should be equal to ts_hidden_dim"
177
+ assert self.ts_concat_subsampling_in_channels == self.ts_cnn_channels[-1], "ts_concat_subsampling_in_channels should be equal to the out_channel of the last cnn layer"
178
+
179
+
180
+ class InternS1ProConfig(PretrainedConfig):
181
+ model_type = "interns1_pro"
182
+ sub_configs = {"vision_config": InternS1ProVisionConfig, "text_config": InternS1ProTextConfig, 'ts_config':InternS1ProTimeSeriesConfig}
183
+ keys_to_ignore_at_inference = ["past_key_values"]
184
+
185
+ def __init__(
186
+ self,
187
+ text_config=None,
188
+ vision_config=None,
189
+ ts_config=None,
190
+ image_token_id=151655,
191
+ video_token_id=151656,
192
+ vision_start_token_id=151652,
193
+ vision_end_token_id=151653,
194
+ ts_token_id=151685,
195
+ ts_start_id=151683,
196
+ ts_end_id=151684,
197
+ tie_word_embeddings=False,
198
+ **kwargs,
199
+ ):
200
+ if isinstance(vision_config, dict):
201
+ self.vision_config = self.sub_configs["vision_config"](**vision_config)
202
+ elif vision_config is None:
203
+ self.vision_config = self.sub_configs["vision_config"]()
204
+
205
+ if isinstance(text_config, dict):
206
+ self.text_config = self.sub_configs["text_config"](**text_config)
207
+ elif text_config is None:
208
+ self.text_config = self.sub_configs["text_config"]()
209
+
210
+ if isinstance(ts_config, dict):
211
+ self.ts_config = self.sub_configs["ts_config"](**ts_config)
212
+ elif ts_config is None:
213
+ self.ts_config = self.sub_configs["ts_config"]()
214
+
215
+ self.image_token_id = image_token_id
216
+ self.video_token_id = video_token_id
217
+ self.vision_start_token_id = vision_start_token_id
218
+ self.vision_end_token_id = vision_end_token_id
219
+ self.ts_token_id = ts_token_id
220
+ self.ts_start_id = ts_start_id
221
+ self.ts_end_id = ts_end_id
222
+ super().__init__(**kwargs, tie_word_embeddings=tie_word_embeddings)
223
+
224
+
225
+ __all__ = ["InternS1ProConfig", "InternS1ProTextConfig", "InternS1ProVisionConfig", "InternS1ProTimeSeriesConfig"]
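Editor's note: a short loading sketch (my addition). Because `config.json` maps `AutoConfig` to `InternS1ProConfig` via `auto_map`, the composite config and its sub-configs defined above can be inspected directly; the repo id is assumed from the deployment examples.

```python
# Sketch: loading the composite config through the auto_map entry and
# reading a few MoE and time-series fields defined in this file.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("internlm/Intern-S1-Pro", trust_remote_code=True)
print(type(config).__name__)                   # InternS1ProConfig
print(config.text_config.num_experts)          # 512 routed experts
print(config.text_config.num_experts_per_tok)  # 8 experts active per token
print(config.ts_config.ts_adapt_out_dim)       # 1024, asserted equal to ts_hidden_dim above
```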
deployment_guide.md ADDED
@@ -0,0 +1,167 @@
+ # Intern-S1-Pro Deployment Guide
+
+ The Intern-S1-Pro release is a 1T parameter model stored in FP8 format. Deployment requires at least **two 8-GPU H200** nodes, with either of the following configurations:
+
+ - Tensor Parallelism (TP)
+ - Data Parallelism (DP) + Expert Parallelism (EP)
+
+ > NOTE: The deployment examples in this guide are provided for reference only and may not represent the latest or most optimized configurations. Inference frameworks are under active development; always consult the official documentation from each framework's maintainers to ensure peak performance and compatibility.
+
+ ## LMDeploy
+
+ Required version: `lmdeploy>=0.12.0`
+
+ - Tensor Parallelism
+
+ ```bash
+ # start ray on node 0 and node 1
+
+ # node 0
+ lmdeploy serve api_server internlm/Intern-S1-Pro --backend pytorch --tp 16
+ ```
+
+ - Data Parallelism + Expert Parallelism
+
+ ```bash
+ # node 0, proxy server
+ lmdeploy serve proxy --server-name ${proxy_server_ip} --server-port ${proxy_server_port} --routing-strategy 'min_expected_latency' --serving-strategy Hybrid
+
+ # node 0
+ export LMDEPLOY_DP_MASTER_ADDR=${node0_ip}
+ export LMDEPLOY_DP_MASTER_PORT=29555
+ lmdeploy serve api_server \
+     internlm/Intern-S1-Pro \
+     --backend pytorch \
+     --tp 1 \
+     --dp 16 \
+     --ep 16 \
+     --proxy-url http://${proxy_server_ip}:${proxy_server_port} \
+     --nnodes 2 \
+     --node-rank 0 \
+     --reasoning-parser intern-s1 \
+     --tool-call-parser qwen3
+
+ # node 1
+ export LMDEPLOY_DP_MASTER_ADDR=${node0_ip}
+ export LMDEPLOY_DP_MASTER_PORT=29555
+ lmdeploy serve api_server \
+     internlm/Intern-S1-Pro \
+     --backend pytorch \
+     --tp 1 \
+     --dp 16 \
+     --ep 16 \
+     --proxy-url http://${proxy_server_ip}:${proxy_server_port} \
+     --nnodes 2 \
+     --node-rank 1 \
+     --reasoning-parser intern-s1 \
+     --tool-call-parser qwen3
+ ```
+
+ ## vLLM
+
+ - Tensor Parallelism + Expert Parallelism
+
+ ```bash
+ # start ray on node 0 and node 1
+
+ # node 0
+ export VLLM_ENGINE_READY_TIMEOUT_S=10000
+ vllm serve internlm/Intern-S1-Pro \
+     --tensor-parallel-size 16 \
+     --enable-expert-parallel \
+     --distributed-executor-backend ray \
+     --max-model-len 65536 \
+     --trust-remote-code \
+     --reasoning-parser deepseek_r1 \
+     --enable-auto-tool-choice \
+     --tool-call-parser hermes
+ ```
+
+ - Data Parallelism + Expert Parallelism
+
+ ```bash
+ # node 0
+ export VLLM_ENGINE_READY_TIMEOUT_S=10000
+ vllm serve internlm/Intern-S1-Pro \
+     --all2all-backend deepep_low_latency \
+     --tensor-parallel-size 1 \
+     --enable-expert-parallel \
+     --data-parallel-size 16 \
+     --data-parallel-size-local 8 \
+     --data-parallel-address ${node0_ip} \
+     --data-parallel-rpc-port 13345 \
+     --gpu_memory_utilization 0.8 \
+     --mm_processor_cache_gb=0 \
+     --media-io-kwargs '{"video": {"num_frames": 768, "fps": 2}}' \
+     --max-model-len 65536 \
+     --trust-remote-code \
+     --api-server-count=8 \
+     --reasoning-parser deepseek_r1 \
+     --enable-auto-tool-choice \
+     --tool-call-parser hermes
+
+ # node 1
+ export VLLM_ENGINE_READY_TIMEOUT_S=10000
+ vllm serve internlm/Intern-S1-Pro \
+     --all2all-backend deepep_low_latency \
+     --tensor-parallel-size 1 \
+     --enable-expert-parallel \
+     --data-parallel-size 16 \
+     --data-parallel-size-local 8 \
+     --data-parallel-start-rank 8 \
+     --data-parallel-address ${node0_ip} \
+     --data-parallel-rpc-port 13345 \
+     --gpu_memory_utilization 0.8 \
+     --mm_processor_cache_gb=0 \
+     --media-io-kwargs '{"video": {"num_frames": 768, "fps": 2}}' \
+     --max-model-len 65536 \
+     --trust-remote-code \
+     --headless \
+     --reasoning-parser deepseek_r1 \
+     --enable-auto-tool-choice \
+     --tool-call-parser hermes
+ ```
+
+ > NOTE: To prevent out-of-memory (OOM) errors, we limit the context length using `--max-model-len 65536`. For datasets requiring longer responses, you may increase this value as needed. Additionally, video inference can consume substantial memory in vLLM API server processes; we therefore recommend setting `--media-io-kwargs '{"video": {"num_frames": 768, "fps": 2}}'` to constrain preprocessing memory usage during video benchmarking.
+
+ ## SGLang
+
+ You can use the docker image `lmsysorg/sglang:dev` to deploy. Refer to [using-docker](https://docs.sglang.io/get_started/install.html#method-3-using-docker) for more details.
+
+ - Tensor Parallelism + Expert Parallelism
+
+ ```bash
+ export DIST_ADDR=${master_node_ip}:${master_node_port}
+
+ # node 0
+ python3 -m sglang.launch_server \
+     --model-path internlm/Intern-S1-Pro \
+     --tp 16 \
+     --ep 16 \
+     --mem-fraction-static 0.85 \
+     --trust-remote-code \
+     --dist-init-addr ${DIST_ADDR} \
+     --nnodes 2 \
+     --attention-backend fa3 \
+     --mm-attention-backend fa3 \
+     --keep-mm-feature-on-device \
+     --node-rank 0 \
+     --reasoning-parser qwen3 \
+     --tool-call-parser qwen
+
+ # node 1
+ python3 -m sglang.launch_server \
+     --model-path internlm/Intern-S1-Pro \
+     --tp 16 \
+     --ep 16 \
+     --mem-fraction-static 0.85 \
+     --trust-remote-code \
+     --dist-init-addr ${DIST_ADDR} \
+     --nnodes 2 \
+     --attention-backend fa3 \
+     --mm-attention-backend fa3 \
+     --keep-mm-feature-on-device \
+     --node-rank 1 \
+     --reasoning-parser qwen3 \
+     --tool-call-parser qwen
+ ```
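Editor's note: a rough capacity sanity check (my own sketch) for why two 8-GPU H200 nodes are stated as the floor, assuming FP8 weights take roughly one byte per parameter and each H200 provides about 141 GB of HBM.

```python
# Sketch: FP8 weight footprint vs. available HBM on 2 x 8 H200 GPUs.
total_params = 1.0e12            # ~1T parameters (model card)
weight_bytes = total_params * 1  # FP8 ~ 1 byte/param, ignoring scales and BF16 towers
h200_mem_gb = 141                # HBM per H200
gpus = 2 * 8

weights_gb = weight_bytes / 1e9
print(f"weights ~{weights_gb:.0f} GB vs {gpus * h200_mem_gb} GB of HBM across {gpus} GPUs")
# ~1000 GB of weights; the remaining headroom goes to KV cache and activations.
```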
generation_config.json ADDED
@@ -0,0 +1,14 @@
+ {
+   "bos_token_id": 151643,
+   "pad_token_id": 151643,
+   "do_sample": true,
+   "eos_token_id": [
+     151645,
+     151643
+   ],
+   "top_p": 0.95,
+   "top_k": 50,
+   "temperature": 0.8,
+   "repetition_penalty": 1.0,
+   "transformers_version": "4.56.0"
+ }
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model-00002-of-00153.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5f161e5952037d91be89c584acf61776ceb4b1d1c26f927fe0625984665ee021
3
+ size 11990981720
model-00006-of-00153.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:65a158cd7e12ccde2d97fbbff5cc0236f2738f3980ba2df861316cde212d69db
3
+ size 11999090192
model-00011-of-00153.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ae883500c89ac568318fe8817758671c99d8499dd7019af93e8bf4f465aadf0d
3
+ size 11991649048
model-00014-of-00153.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6c070dd07c311611e753db399531c63d4cc118e15774dd885dd7bc5d231eb391
3
+ size 11991648816
model-00015-of-00153.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6501163ec8b9e4f33ca8d28a9c2f9640eca60ded1f882c03cf47c51df47fb4f7
3
+ size 11991648704
model-00021-of-00153.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c0cc18178dec2687c9bd3b70c9e9c218b3053db7c51867accd00085f083c5a9b
3
+ size 11991649912
model-00023-of-00153.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:13081a7e38a9ad371cf045bb76a65fa969eb61609a63b1422ba8dc0f8426a720
3
+ size 11991649704
model-00024-of-00153.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2747931201049fd63fb74592b9ac1fc6ef64253558b974e3dd5315969ec6005b
3
+ size 11991649968
model-00027-of-00153.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7fd7acf4f423975a6d227905669efd06a1723cf0381b2e976b8a8942d33a777f
3
+ size 11991649848
model-00029-of-00153.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:95c8ccb8fd3c6aaea3c5e97e6e0e0fd391efa4aea21fd50dfdf0853c2e930640
3
+ size 11991649912
model-00030-of-00153.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d839d5dd50663199838bfd5385e357c910c76115949180a6fb69746671bf5e43
3
+ size 11991649672
model-00031-of-00153.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b39c62b5014c8e0d39b98428e116288631bf3195ed61abb7283cb863e34e2455
3
+ size 11991649768
model-00035-of-00153.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2f51caad96bcc3dc6066715a6f2425097606a9007e3496026d6fd6c973bcf5e8
3
+ size 11991649792
model-00036-of-00153.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1a7201f1959adb214b037b532bb38e076414e4c2ee7a0b08a42ff227d5a12a65
3
+ size 11991649632
model-00037-of-00153.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b27f54fb2c25ac8e2cd4794a7f7c55a770ded36329f25b64bb1ffe173018cc0e
3
+ size 11991649912
model-00038-of-00153.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cad45f8f5dca766fdb151b5aa98cb39b754b091c8ee1c65a6a22303f5caa4129
3
+ size 11991649616
model-00039-of-00153.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e7a66e760a0baf469bf25a9546623532d2bb872b5747b07268e9a49b2eecdf71
3
+ size 11991649856
model-00041-of-00153.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b6344b14cc766690b260be5b302d50abfa5366b4cda9bab2d8068525e5303de1
3
+ size 11991649480
model-00042-of-00153.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fe67a72d3524b28fbdf629a2c03e859abf9eca7419435f59be529133856830e7
3
+ size 11991649912
model-00049-of-00153.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:835d927ef257c922f42bb4a326a2da783b43c028cd4891ce6c54adff263cd377
3
+ size 11991649536
model-00050-of-00153.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e7fd66850d587e1ae0d8647dd02eaa855a106dc38032c9dba9cd35dc2843b903
3
+ size 11991649912
model-00052-of-00153.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f66623b8e61f818f6818836520a434d1de117e48d40c447641007235cd660489
3
+ size 11991649712
model-00053-of-00153.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e3109663635d12acb0a52f8b2e056a1fdcd7a2a092bf42b932822be72a4544ad
3
+ size 11991649984
model-00054-of-00153.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:296f52e951cea46e8a7a7bf8421ed55a72b6577613e2f0895a1c64b43acf403a
3
+ size 11991649512
model-00056-of-00153.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:71468408509af66a9d2c475d1bec98b899d954445b268923271b8441771ebbca
3
+ size 11991649832
model-00058-of-00153.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5a6da7bc64e9193e0ac026e09c82e8d7f8974ab38056adc1d4a10863329ed83e
3
+ size 11991649912
model-00063-of-00153.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:428a8a4e7bbc6de5b28d51d543c5515a7e6e6ab795c41faa06db59c92f659d61
3
+ size 11991649912
model-00071-of-00153.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ed7461d7e029422916f5961a14ff9b7e6392c5130dcc2260297d32619b315a6b
3
+ size 11991649912
model-00072-of-00153.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:67ae90a91f5cb39867d107ccf52d91e2f2c43c68b4c27b9be4ce0ceb9501aa84
3
+ size 11991649760
model-00077-of-00153.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:86e04015c419e4962cd7eb42170849f295533b944b94d05c3140dabc1bc28f6b
3
+ size 11991649880
model-00080-of-00153.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:980169207447458e182b6353f9a43ff25eb1b6737504243ee5269bd94129b7d9
3
+ size 11991649704
model-00083-of-00153.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9f327e0e48e907b9add20449b3c3b8ece6fd4040b210153ca9f910a8e05034ae
3
+ size 11991649496
model-00091-of-00153.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bc8616c05ba7f1f36d7712309c3edc32d93da7dd938e915aa38e964223d33e68
3
+ size 11991649448
model-00096-of-00153.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:20db8c7779cef5b1488064b3eed8520028ff6d3f503a5a29610e86aa68a83e2c
3
+ size 11991649592
model-00108-of-00153.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c908f5a4880e2b12a1e2f2695983492e0565af8719f6f5a1a6ce9b7cd4a27368
3
+ size 11991649912
model-00123-of-00153.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e19c56b53cb7fe5bcff8826e094fec28c16d6fcb9e13a96b59b16f4300e7ac31
3
+ size 11991649656
model-00125-of-00153.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0a279928072f0259a8ee9bbad76043fae7da21f0f13db24df23b0f7cdafa38f4
3
+ size 11991649576
model-00149-of-00153.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9f5a196d86d034e3d4b1e2184c7bbc7f5fe638d18059204128609cd52dded1ce
3
+ size 11991649464
modeling_interns1_pro.py ADDED
The diff for this file is too large to render. See raw diff
 
preprocessor_config.json ADDED
@@ -0,0 +1,23 @@
+ {
+   "size": {
+     "longest_edge": 16777216,
+     "shortest_edge": 65536
+   },
+   "patch_size": 16,
+   "temporal_patch_size": 2,
+   "merge_size": 2,
+   "image_mean": [
+     0.5,
+     0.5,
+     0.5
+   ],
+   "image_std": [
+     0.5,
+     0.5,
+     0.5
+   ],
+   "image_processor_type": "Qwen2VLImageProcessorFast",
+   "auto_map": {
+     "AutoProcessor": "processing_interns1_pro.InternS1ProProcessor"
+   }
+ }
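Editor's note: a small sketch (assumptions mine) of what these settings imply for visual token counts, assuming `shortest_edge`/`longest_edge` act as the Qwen2-VL-style minimum/maximum pixel budgets rather than edge lengths; with `patch_size=16` and `merge_size=2`, each visual token then covers roughly a 32x32 pixel area.

```python
# Sketch: estimating visual tokens per image from the processor settings above.
patch, merge = 16, 2
min_pixels, max_pixels = 65536, 16777216

def visual_tokens(width: int, height: int) -> int:
    pixels = min(max(width * height, min_pixels), max_pixels)  # clamp to the pixel budget
    return pixels // (patch * patch * merge * merge)           # one token per 32x32 area

print(visual_tokens(1280, 720))  # ~900 tokens for a 720p frame
```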