YummyYum committed
Commit 6367152 · verified · 1 Parent(s): f811601

Upload folder using huggingface_hub

Files changed (46)
  1. .msc +0 -0
  2. .mv +1 -0
  3. LICENSE +39 -0
  4. README.md +153 -0
  5. config.json +36 -0
  6. configuration.json +1 -0
  7. pytorch_model-00000-TP-common.safetensors +3 -0
  8. pytorch_model-00001-TP-common.safetensors +3 -0
  9. pytorch_model-00002-TP-common.safetensors +3 -0
  10. pytorch_model-00003-TP-common.safetensors +3 -0
  11. pytorch_model-00004-TP-common.safetensors +3 -0
  12. pytorch_model-00005-TP-common.safetensors +3 -0
  13. pytorch_model-00006-TP-000.safetensors +3 -0
  14. pytorch_model-00006-TP-001.safetensors +3 -0
  15. pytorch_model-00006-TP-002.safetensors +3 -0
  16. pytorch_model-00006-TP-003.safetensors +3 -0
  17. pytorch_model-00006-TP-004.safetensors +3 -0
  18. pytorch_model-00006-TP-005.safetensors +3 -0
  19. pytorch_model-00006-TP-006.safetensors +3 -0
  20. pytorch_model-00006-TP-007.safetensors +3 -0
  21. pytorch_model-00007-TP-000.safetensors +3 -0
  22. pytorch_model-00007-TP-001.safetensors +3 -0
  23. pytorch_model-00007-TP-002.safetensors +3 -0
  24. pytorch_model-00007-TP-003.safetensors +3 -0
  25. pytorch_model-00007-TP-004.safetensors +3 -0
  26. pytorch_model-00007-TP-005.safetensors +3 -0
  27. pytorch_model-00007-TP-006.safetensors +3 -0
  28. pytorch_model-00007-TP-007.safetensors +3 -0
  29. pytorch_model-00008-TP-000.safetensors +3 -0
  30. pytorch_model-00008-TP-001.safetensors +3 -0
  31. pytorch_model-00008-TP-002.safetensors +3 -0
  32. pytorch_model-00008-TP-003.safetensors +3 -0
  33. pytorch_model-00008-TP-004.safetensors +3 -0
  34. pytorch_model-00008-TP-005.safetensors +3 -0
  35. pytorch_model-00008-TP-006.safetensors +3 -0
  36. pytorch_model-00008-TP-007.safetensors +3 -0
  37. pytorch_model-00009-TP-common.safetensors +3 -0
  38. pytorch_model-00010-TP-common.safetensors +3 -0
  39. pytorch_model-00011-TP-common.safetensors +3 -0
  40. pytorch_model-00012-TP-common.safetensors +3 -0
  41. pytorch_model-00013-TP-common.safetensors +3 -0
  42. pytorch_model-00014-TP-common.safetensors +3 -0
  43. pytorch_model-00015-TP-common.safetensors +3 -0
  44. pytorch_model-00016-TP-common.safetensors +3 -0
  45. pytorch_model-00017-TP-common.safetensors +3 -0
  46. tokenizer.tok.json +0 -0
.msc ADDED
Binary file (4 kB).
 
.mv ADDED
@@ -0,0 +1 @@
+ Revision:master,CreatedAt:1756021687
LICENSE ADDED
@@ -0,0 +1,39 @@
+ Grok 2 Community License Agreement
+ Last Updated: August 23, 2025
+
+ 1. Background and Definitions
+ By downloading, accessing, or using the Materials (as defined below) relating to Grok 2 provided by X.AI LLC (“xAI”), you (“Licensee” or “you”) agree to the terms of this agreement (“Agreement”). If you accept this Agreement for or on behalf of an entity, you represent that you have the authority to bind that entity. As used in this Agreement, “Materials” means the Grok 2 materials provided to you by xAI under this Agreement, consisting of: (1) one or more machine learning models (including architecture and parameters); and (2) related artifacts (including associated data, documentation, and software) that are provided to you hereunder.
+
+ 2. License Grant & Scope
+ a. Permitted Uses: xAI grants you a non-exclusive, worldwide, revocable license to use, reproduce, distribute, and modify the Materials:
+ • For non-commercial and research purposes; and
+ • For commercial use solely if you and your affiliates abide by all of the guardrails provided in xAI's Acceptable Use Policy (https://x.ai/legal/acceptable-use-policy), including 1. Comply with the law, 2. Do not harm people or property, and 3. Respect guardrails and don't mislead.
+ b. Restrictions:
+ • You may not use the Materials, derivatives, or outputs (including generated data) to train, create, or improve any foundational, large language, or general-purpose AI models, except for modifications or fine-tuning of Grok 2 permitted under and in accordance with the terms of this Agreement.
+ • No right to use xAI’s trademarks is granted, except as required for attribution (see below).
+
+ 3. Distribution & Attribution
+ If you distribute the Materials, derivatives, or products/services incorporating them:
+ • Include this Agreement and a notice stating: “This product includes materials licensed under the xAI Community License. Copyright © 2025 xAI. All rights reserved.”
+ • Prominently display “Powered by xAI” in related materials or interfaces.
+
+ 4. Ownership & Outputs
+ xAI retains all rights in the Materials. This Agreement does not impose any restrictions or obligations with respect to any use, modification, or sharing of any outputs generated by using the Materials. If you provide feedback, suggestions, or ideas, you grant xAI a perpetual, worldwide, irrevocable, royalty-free license to use and incorporate that feedback without restriction.
+
+ 5. Acceptable Use
+ You are responsible for implementing appropriate safety measures, including filters and human oversight, suitable for your use case. You must comply with xAI’s Acceptable Use Policy (AUP), as well as all applicable laws. You may not use the Materials for illegal, harmful, or abusive activities.
+
+ 6. Warranty Disclaimer & Limitation of Liability
+ THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE, NONINFRINGEMENT, ACCURACY, OR THE ABSENCE OF LATENT OR OTHER DEFECTS OR ERRORS, WHETHER OR NOT DISCOVERABLE, ALL TO THE GREATEST EXTENT PERMISSIBLE UNDER APPLICABLE LAW.
+ YOU ARE SOLELY RESPONSIBLE FOR (1) CLEARING RIGHTS OF OTHER PERSONS THAT MAY APPLY TO THE MATERIALS OR ANY USE THEREOF, INCLUDING WITHOUT LIMITATION ANY PERSON'S COPYRIGHTS OR OTHER RIGHTS INCLUDED OR EMBODIED IN THE MATERIALS; (2) OBTAINING ANY NECESSARY CONSENTS, PERMISSIONS OR OTHER RIGHTS REQUIRED FOR ANY USE OF THE MATERIALS; OR (3) PERFORMING ANY DUE DILIGENCE OR UNDERTAKING ANY OTHER INVESTIGATIONS INTO THE MATERIALS OR ANYTHING INCORPORATED OR EMBODIED THEREIN.
+ IN NO EVENT SHALL XAI BE LIABLE FOR ANY CLAIM, DAMAGES, OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT, OR OTHERWISE, ARISING FROM, OUT OF, OR IN CONNECTION WITH THE MATERIALS, THE USE THEREOF, OR OTHER DEALINGS THEREIN. TO THE MAXIMUM EXTENT PERMITTED BY LAW, XAI WILL NOT BE LIABLE FOR ANY INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR FOR AGGREGATE LIABILITY EXCEEDING $100, REGARDLESS OF THE LEGAL THEORY.
+
+ 7. Termination
+ This license terminates immediately upon your breach or if you exceed the permitted commercial threshold. Upon termination, you must cease all use and delete all copies of the Materials and derivatives.
+ Additionally, if you file, maintain, or voluntarily participate in a lawsuit against any person or entity alleging that the Materials, or any part thereof, directly or indirectly infringe any patent, then your license under this Agreement shall immediately terminate. This does not apply to a lawsuit brought in response to a corresponding lawsuit first filed against you.
+
+ 8. Governing Law
+ The laws of Texas govern this Agreement, and any dispute shall be resolved exclusively in the courts located in Tarrant County, Texas.
+
+ 9. Miscellaneous
+ This Agreement is the entire agreement between the parties on this subject. Failure to enforce any provision is not a waiver. If any provision is unenforceable, the remainder remains in effect. xAI may assign this Agreement, including in connection with a merger or acquisition. Licensee may not assign this Agreement without xAI’s prior written consent. This Agreement creates no third-party beneficiaries. You must comply with all applicable export control, trade compliance, and sanctions laws.
README.md ADDED
@@ -0,0 +1,153 @@
+ # Introduction
+
+ **FlagOS** is a unified heterogeneous computing software stack for large models, co-developed with leading global chip manufacturers. With core technologies such as the **FlagScale** distributed training/inference framework, the **FlagGems** universal operator library, the **FlagCX** communication library, and the **FlagTree** unified compiler, the **FlagRelease** platform leverages the FlagOS stack to automatically produce and release various combinations of <chip + open-source model>. This enables efficient and automated model migration across diverse chips, opening a new chapter for large model deployment and application.
+
+ Based on this, the **grok-2-FlagOS** model is adapted for NVIDIA chips using the FlagOS software stack, enabling:
+
+ ### Integrated Deployment
+
+ - Deep integration with the open-source [FlagScale framework](https://github.com/FlagOpen/FlagScale)
+ - Out-of-the-box inference scripts with pre-configured hardware and software parameters
+ - A released **FlagOS** container image supporting deployment within minutes
+
+ ### Consistency Validation
+
+ - Rigorously evaluated through benchmark testing: performance and results from the FlagOS software stack are compared against native stacks on multiple public benchmarks.
+
+ # Technical Overview
+
+ ## **FlagScale Distributed Training and Inference Framework**
+
+ FlagScale is an end-to-end framework for large models across heterogeneous computing resources, maximizing computational efficiency and ensuring model validity through core technologies. Its key advantages include:
+
+ - **Unified Deployment Interface:** Standardized command-line tools support one-click service deployment across multiple hardware platforms, significantly reducing adaptation costs in heterogeneous environments.
+ - **Intelligent Parallel Optimization:** Automatically generates optimal distributed parallel strategies based on chip computing characteristics, achieving dynamic load balancing of computation/communication resources.
+ - **Seamless Operator Switching:** Deep integration with the FlagGems operator library allows high-performance operators to be invoked via environment variables without modifying model code.
+
+ ## **FlagGems Universal Large-Model Operator Library**
+
+ FlagGems is a Triton-based, cross-architecture operator library collaboratively developed with industry partners. Its core strengths include:
+
+ - **Full-stack Coverage**: Over 100 operators, with a broader range of operator types than competing libraries.
+ - **Ecosystem Compatibility**: Supports 7 accelerator backends. Ongoing optimizations have significantly improved performance.
+ - **High Efficiency**: Employs unique code generation and runtime optimization techniques for faster secondary development and better runtime performance compared to alternatives.
+
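The operator switch described above can also be driven from Python. A minimal sketch, assuming the `flag_gems` package shipped in the FlagOS container exposes the `enable()` entry point documented in the FlagGems project README (treat both the package availability and the call as assumptions for your environment):

```python
def enable_flaggems() -> bool:
    """Try to route eligible PyTorch ops through FlagGems Triton kernels.

    Returns True when flag_gems is importable and enabled, False otherwise,
    so the same script still runs on hosts without the FlagOS stack.
    """
    try:
        import flag_gems  # provided by the FlagOS container image (assumption)
    except ImportError:
        return False
    flag_gems.enable()  # FlagGems' documented global switch-on entry point
    return True
```

On a plain host without FlagGems installed this simply returns False and the model falls back to the stock operators.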
+ ## **FlagEval Evaluation Framework**
+
+ **FlagEval (Libra)** is a comprehensive evaluation system and open platform for large models launched in 2023. It aims to establish scientific, fair, and open benchmarks, methodologies, and tools to help researchers assess model and training algorithm performance. It features:
+
+ - **Multi-dimensional Evaluation**: Supports 800+ model evaluations across NLP, CV, Audio, and Multimodal fields, covering 20+ downstream tasks including language understanding and image-text generation.
+ - **Industry-Grade Use Cases**: Has completed horizontal evaluations of mainstream large models, providing authoritative benchmarks for chip-model performance validation.
+
+ # Evaluation Results
+
+ ## Benchmark Results
+
+ | Metrics                  | grok-2-H100-CUDA | grok-2-FlagOS |
+ |--------------------------|------------------|---------------|
+ | AIME_0fewshot_@avg1      | 0.200            | 0.100         |
+ | GPQA_0fewshot_@avg1      | 0.466            | 0.480         |
+ | LiveBench-0fewshot_@avg1 | 0.451            | 0.437         |
+ | MMLU_5fewshot_@avg1      | 0.747            | 0.747         |
+ | MUSR_0fewshot_@avg       | 0.606            | 0.619         |
+
+ # User Guide
+
+ **Environment Setup**
+
+ | Item             | Version               |
+ |------------------|-----------------------|
+ | Docker           | 28.1.0, build 4d8c241 |
+ | Operating System | Ubuntu 22.04.5 LTS    |
+ | FlagScale        | 0.8.0                 |
+ | FlagGems         | 3.0                   |
+
+ ## Operation Steps
+
+ ### Download Open-source Model Weights
+
+ ```bash
+ pip install modelscope
+ modelscope download --model xai-org/grok-2 --local_dir /share/grok-2
+ ```
+
+ ### Download the FlagOS Image
+
+ ```bash
+ docker pull harbor.baai.ac.cn/flagrelease-public/flagrelease_nvidia_grok2
+ ```
+
+ ### Start the Inference Service
+
+ ```bash
+ # Container startup
+ docker run --rm --init --detach --net=host --uts=host --ipc=host --security-opt=seccomp=unconfined --privileged=true --ulimit stack=67108864 --ulimit memlock=-1 --ulimit nofile=1048576:1048576 --shm-size=32G -v /share:/share --gpus all --name flagos harbor.baai.ac.cn/flagrelease-public/flagrelease_nvidia_grok2 sleep infinity
+ ```
+
+ ### Serve
+
+ ```bash
+ flagscale serve grok2
+ ```
+
+ ## Service Invocation
+
+ ### API-based Invocation Script
+
+ ```python
+ import openai
+
+ openai.api_key = "EMPTY"
+ openai.base_url = "http://<server_ip>:9010/v1/"
+
+ model = "grok-2-nvidia-flagos"
+ messages = [
+     {"role": "system", "content": "You are a helpful assistant."},
+     {"role": "user", "content": "What's the weather like today?"}
+ ]
+ response = openai.chat.completions.create(
+     model=model,
+     messages=messages,
+     stream=False,
+ )
+ # With stream=False the call returns a single completion object,
+ # so read the message directly instead of iterating over chunks.
+ print(response.choices[0].message.content)
+ ```
+
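Because the service speaks the OpenAI-compatible protocol, the same request can be issued with plain HTTP and no client library. The sketch below builds the JSON body the endpoint expects; the `/chat/completions` route, port 9010, and model name are taken from the script above, while `build_payload` and `chat` are hypothetical helper names introduced here:

```python
import json
from urllib import request

BASE_URL = "http://<server_ip>:9010/v1"  # replace <server_ip> with your host


def build_payload(messages, model="grok-2-nvidia-flagos", stream=False):
    """Assemble the body of an OpenAI-compatible chat completion request."""
    return {"model": model, "messages": messages, "stream": stream}


def chat(messages):
    """POST the payload and return the first completion's text."""
    req = request.Request(
        BASE_URL + "/chat/completions",
        data=json.dumps(build_payload(messages)).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer EMPTY"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```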
+ ### AnythingLLM Integration Guide
+
+ #### 1. Download & Install
+
+ - Visit the official site: https://anythingllm.com/
+ - Choose the appropriate version for your OS (Windows/macOS/Linux)
+ - Follow the installation wizard to complete the setup
+
+ #### 2. Configuration
+
+ - Launch AnythingLLM
+ - Open settings (bottom left, fourth tab)
+ - Configure core LLM parameters
+ - Click "Save Settings" to apply changes
+
+ #### 3. Model Interaction
+
+ - After model loading is complete:
+   - Click **"New Conversation"**
+   - Enter your question (e.g., "Explain the basics of quantum computing")
+   - Click the send button to get a response
+
+ # Contributing
+
+ We warmly welcome global developers to join us:
+
+ 1. Submit Issues to report problems
+ 2. Create Pull Requests to contribute code
+ 3. Improve technical documentation
+ 4. Expand hardware adaptation support
+
+ # License
+
+ The weights of this model come from xai-org/grok-2 and are open-sourced under the Apache 2.0 license (https://www.apache.org/licenses/LICENSE-2.0.txt).
+
config.json ADDED
@@ -0,0 +1,36 @@
+ {
+   "architectures": [
+     "Grok1ForCausalLM"
+   ],
+   "embedding_multiplier_scale": 90.50966799187809,
+   "output_multiplier_scale": 0.5,
+   "vocab_size": 131072,
+   "hidden_size": 8192,
+   "intermediate_size": 32768,
+   "moe_intermediate_size": 16384,
+   "max_position_embeddings": 131072,
+   "num_experts_per_tok": 2,
+   "num_local_experts": 8,
+   "residual_moe": true,
+   "num_attention_heads": 64,
+   "num_key_value_heads": 8,
+   "num_hidden_layers": 64,
+   "head_dim": 128,
+   "rms_norm_eps": 1e-05,
+   "final_logit_softcapping": 50,
+   "attn_logit_softcapping": 30.0,
+   "router_logit_softcapping": 30.0,
+   "rope_theta": 208533496,
+   "attn_temperature_len": 1024,
+   "sliding_window_size": -1,
+   "global_attn_every_n": 1,
+   "model_type": "git",
+   "torch_dtype": "bfloat16",
+   "rope_type": "original",
+   "original_max_position_embeddings": 8192,
+   "scaling_factor": 16.0,
+   "extrapolation_factor": 1.0,
+   "attn_factor": 1.0,
+   "beta_fast": 8,
+   "beta_slow": 1
+ }
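A few deployment-relevant numbers fall straight out of these fields. As a quick sanity check (standard KV-cache arithmetic, not part of the original config): with grouped-query attention of 8 KV heads of dimension 128 over 64 layers in bfloat16, the KV cache costs 256 KiB per token of context.

```python
# Values copied from config.json above.
num_hidden_layers = 64
num_key_value_heads = 8
head_dim = 128
bytes_per_value = 2  # bfloat16

# K and V are each cached per layer, per KV head, per head_dim element.
kv_cache_bytes_per_token = (
    2 * num_hidden_layers * num_key_value_heads * head_dim * bytes_per_value
)
print(kv_cache_bytes_per_token)          # 262144 bytes
print(kv_cache_bytes_per_token // 1024)  # 256 KiB
```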
configuration.json ADDED
@@ -0,0 +1 @@
+ {"framework": "pytorch", "task": "text-generation", "allow_remote": true}
pytorch_model-00000-TP-common.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:315d2eb626636477b35a19c60b481ae43b70fc7b9d24645255ded00cecf2a6bb
+ size 2147483760
pytorch_model-00001-TP-common.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3f8001b2d7baaee0bd5bce3ad41d117148c591e485dc8a838b8abccb650ef9bb
+ size 2147483744
pytorch_model-00002-TP-common.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e41eeadefc5da657bac925e752dab11672d7ccee729ed60df8c32df60c6fcd6e
+ size 16472
pytorch_model-00003-TP-common.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5117eeb28c1447403df924fa93c62a9ea09d2d6fa773e27367fc5e6687b10b99
+ size 34359745872
pytorch_model-00004-TP-common.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6921cc3380cfbd783a3b52bccf81bc5c7c645548f9f2f1b86279649edbba3cf0
+ size 34359745872
pytorch_model-00005-TP-common.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:90bb1c9d09d31b58becb2cf13f1694166708a52cd97315185c6a7ddbcc64f1ca
+ size 34359745744
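Each of the `.safetensors` entries in this commit is a Git LFS pointer file rather than the weight shard itself; the three `version`/`oid`/`size` lines are the pointer's entire contents. A small sketch of reading one, using the pointer for `pytorch_model-00000-TP-common.safetensors` above (`parse_lfs_pointer` is a hypothetical helper name):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a 'key value' Git LFS pointer file into a dict."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    fields["size"] = int(fields["size"])  # size of the real file, in bytes
    return fields

# Pointer contents copied verbatim from the diff above.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:315d2eb626636477b35a19c60b481ae43b70fc7b9d24645255ded00cecf2a6bb
size 2147483760
"""
info = parse_lfs_pointer(pointer)
print(info["oid"])   # sha256 of the actual shard
print(info["size"])  # 2147483760 bytes, roughly 2 GiB
```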
pytorch_model-00006-TP-000.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f5128445f49d89ed53463d6e454ef9f466ca65e2607a006761a558f9fd60754d
+ size 17179936544
pytorch_model-00006-TP-001.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e9f0869351c7a2baac725b2b2903dec79b9afb0f49f2004ce3a1f35763708e40
+ size 17179936544
pytorch_model-00006-TP-002.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:930f397a72ed323783e3f029bc1252beb9f2d2bce69610fa0f094e361547d1dc
+ size 17179936544
pytorch_model-00006-TP-003.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b266d9293692e69cc2decbf1914916cde492de8b485e7ad670447674e5170e49
+ size 17179936544
pytorch_model-00006-TP-004.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:be0f2daffdc5a7c031ce685cc5bd89fe9058eedb14a107855ea45e272b7022c0
+ size 17179936544
pytorch_model-00006-TP-005.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4697b87056edc930ec836528985fb25a40040417226e47a9c483786b81b56a7d
+ size 17179936544
pytorch_model-00006-TP-006.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2d572dc1910937e16be0017118daca90734af892ea019581584c738abd74975d
+ size 17179936544
pytorch_model-00006-TP-007.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cddca91357cc76894e1704d0bba02dcddeb7e063329f30198a26f3eec98f1ca1
+ size 17179936544
pytorch_model-00007-TP-000.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5ca76131b44751d11ba6abafa0567939812c83af63ff3ed91771e5a03b28e395
+ size 17179936544
pytorch_model-00007-TP-001.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6689603a91fec740d69a67512a7d6c3d4176a2068fcc88e08a79646c7bc9a4ec
+ size 17179936544
pytorch_model-00007-TP-002.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:91c5af3bc69e126adf09cf066b5554fd55a804cb246e48c769725044bf6c2482
+ size 17179936544
pytorch_model-00007-TP-003.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1643a8a743dd135e3aa082cb732ade54c624e188e264ce2ca29818eca4579129
+ size 17179936544
pytorch_model-00007-TP-004.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1cce342c135799ea961f7c7335e1bd485190e7ed4511f1a4b25792af856b702b
+ size 17179936544
pytorch_model-00007-TP-005.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ecb1e412e72be6acf5a94300d20b922440710dbc3ff655081e14fde52dead193
+ size 17179936544
pytorch_model-00007-TP-006.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ff8c1dc7ba5222e7e88eeaa8c71f9bcc6c72abb2642cd53708e89c3bb29e9a5e
+ size 17179936544
pytorch_model-00007-TP-007.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d890e53bae6eb64606791a70ec4055a0909eb75d704bc505050f87a8f4304776
+ size 17179936544
pytorch_model-00008-TP-000.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f6023ef14f89644bba6dea2b3cd268ef90a603a724e58e6f25fa4b35e09da805
+ size 17179936544
pytorch_model-00008-TP-001.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:adf26b682c3898af5f1ab6fe106579d29aea00ab661e8d164988b6e8093f6412
+ size 17179936544
pytorch_model-00008-TP-002.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8381e5ff848f33a2a715155aec5303309d1044bb105d0267bbbbd702e2ea4181
+ size 17179936544
pytorch_model-00008-TP-003.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:38471f261ce898dc7d1266fbabbacd922248767d4e909a0f431d2adaf77c9d1f
+ size 17179936544
pytorch_model-00008-TP-004.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:22d33701fa88dff2ecadb388238a12c7a5d63c6de2dae120ae98ffacca109683
+ size 17179936544
pytorch_model-00008-TP-005.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:abdaa52c585fe182cd82918bde1ec3fa2b12033a1c9c11cebd753a2b3db88355
+ size 17179936544
pytorch_model-00008-TP-006.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f50b318ba532e0032bd6f25b59b54852029e6a026ef19852baa7851b6d5d7004
+ size 17179936544
pytorch_model-00008-TP-007.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7846ffc97ad222ca88bb38771a97409e8a80b62ce3573c096378a76f97c54578
+ size 17179936544
pytorch_model-00009-TP-common.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cebb4ddcdb6520cb50dabdb0a5568e83de081dbebb00208e2a44afd82fd3c352
+ size 1073749240
pytorch_model-00010-TP-common.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:db9376f06a12092da29d028c2ebfe1206b1c85c50513d5138e7442f02ede635c
+ size 8589942120
pytorch_model-00011-TP-common.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1306ca738a956652a3d6bea2df5f3899d0c42dd96f22d75c62f72792f236d7a7
+ size 8589942120
pytorch_model-00012-TP-common.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:88ca01f2876be5a7b268f0cd068a8e49bc9daae48426322bb420eacfcf308703
+ size 1073749240
pytorch_model-00013-TP-common.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:924822ee882d55ac56938342ca435bcd2c4e71a7116dba90e6301845ab19f509
+ size 1055096
pytorch_model-00014-TP-common.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:658fddf97d85edcd527861c7785d9d4a7801d09799819f9de85aa7b2d29ecf9e
+ size 1055160
pytorch_model-00015-TP-common.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dc94f4dff86cad956d7eae017ff7219d55e2d902d45378fb5502729020376c30
+ size 1055032
pytorch_model-00016-TP-common.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:63b443a0a63169c76e6342304583351a50a571ce45070f32650ec0da0f8de08b
+ size 1055096
pytorch_model-00017-TP-common.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:49021a8d7c657eedb8bb83de222fe9d8015cb1db2388ee2e02ad017b1d289f32
+ size 8395888
tokenizer.tok.json ADDED
The diff for this file is too large to render.