Upload folder using huggingface_hub
- .gitattributes +1 -0
- README.md +32 -0
- RELEASE +21 -0
- ckpt-0/tensor00087_000 +3 -0
.gitattributes
CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+ckpt-0/tensor* filter=lfs diff=lfs merge=lfs -text
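As an aside on the hunk above: `.gitattributes` patterns decide which paths Git LFS intercepts, and the new `ckpt-0/tensor*` line is what routes the checkpoint shards to LFS. A rough sketch of the matching (Python's `fnmatch` is only an approximation of Git's pattern rules, and the pattern list here is copied from the hunk):

```python
# Approximate gitattributes matching with shell-style globs.
# Note: this is a sketch -- Git's real rules differ in edge cases
# (e.g. how "*" interacts with "/" and how slash-less patterns anchor).
from fnmatch import fnmatch

# Patterns visible in the .gitattributes hunk above.
lfs_patterns = ["*.zip", "*.zst", "*tfevents*", "ckpt-0/tensor*"]

def tracked_by_lfs(path: str) -> bool:
    """Return True if any LFS pattern matches the path."""
    return any(fnmatch(path, p) for p in lfs_patterns)

print(tracked_by_lfs("ckpt-0/tensor00087_000"))  # True: new pattern matches
print(tracked_by_lfs("README.md"))               # False: stored as plain text
```

This is why `ckpt-0/tensor00087_000` below is committed as a small LFS pointer rather than a 1.6 GB blob.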
README.md
ADDED
@@ -0,0 +1,32 @@
+---
+license: apache-2.0
+pipeline_tag: text-generation
+library_name: grok
+tags:
+- grok-1
+---
+# Grok-1
+
+This repository contains the weights of the Grok-1 open-weights model. You can find the code in the [GitHub Repository](https://github.com/xai-org/grok-1/tree/main).
+
+# Download instructions
+Clone the repo and download the `int8` checkpoint into the `checkpoints` directory by running this command in the repo root:
+
+```shell
+git clone https://github.com/xai-org/grok-1.git && cd grok-1
+pip install huggingface_hub[hf_transfer]
+huggingface-cli download xai-org/grok-1 --repo-type model --include ckpt-0/* --local-dir checkpoints --local-dir-use-symlinks False
+```
+
+Then, you can run:
+
+```shell
+pip install -r requirements.txt
+python run.py
+```
+
+You should see output from the language model.
+
+Due to the large size of the model (314B parameters), a multi-GPU machine is required to test it with the example code.
+
+p.s. we're hiring: https://x.ai/careers
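A download of this size is worth verifying: each `ckpt-0/tensor*` shard's git-LFS pointer (one is committed below) carries a `sha256` oid and a byte size, so a downloaded file can be checked against them. A minimal sketch, demonstrated on a throwaway file (for real use you would point it at `checkpoints/ckpt-0/tensor*` and the matching pointer fields):

```python
# Verify a downloaded file against git-LFS pointer metadata:
# the sha256 digest must equal the oid and the byte count must match.
import hashlib
import os
import tempfile

def verify(path: str, oid_hex: str, size: int) -> bool:
    """Stream-hash the file and compare digest and size to the pointer."""
    h = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            h.update(chunk)
    return os.path.getsize(path) == size and h.hexdigest() == oid_hex

# Throwaway stand-in for a checkpoint shard.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"demo bytes")

print(verify(f.name, hashlib.sha256(b"demo bytes").hexdigest(), 10))  # True
```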
RELEASE
ADDED
@@ -0,0 +1,21 @@
+╔═════════════════════════╗
+║                 _______ ║
+║           /\    |_   _| ║
+║ __  __   /  \     | |   ║
+║ \ \/ /  / /\ \    | |   ║
+║  >  <  / ____ \  _| |_  ║
+║ /_/\_\/_/    \_\|_____| ║
+║                         ║
+║ Understand the Universe ║
+╚═════════════════════════╝
+
+╔═══════════════════╗
+║ xAI Grok-1 (314B) ║
+╚═══════════════════╝
+
+╔═════════════════════════════════════════╗
+║ 314B parameter Mixture of Experts model ║
+║ - Base model (not finetuned)            ║
+║ - 8 experts (2 active)                  ║
+║ - 86B active parameters                 ║
+╚═════════════════════════════════════════╝
ckpt-0/tensor00087_000
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f7058977298d4a7cd8aa2471963a11605829823a478a7faf14b51fd7ea113ea7
+size 1611399491
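The committed file is a git-LFS pointer, not the tensor itself: three `key value` lines naming the spec version, the sha256 oid of the real blob, and its byte size. A small sketch parsing exactly the pointer above:

```python
# Parse a git-LFS pointer file: each line is "key value", split once.
POINTER = """\
version https://git-lfs.github.com/spec/v1
oid sha256:f7058977298d4a7cd8aa2471963a11605829823a478a7faf14b51fd7ea113ea7
size 1611399491
"""

def parse_pointer(text: str) -> dict:
    """Return the pointer's fields, with size coerced to an int."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    fields["size"] = int(fields["size"])
    return fields

ptr = parse_pointer(POINTER)
print(ptr["size"] / 1e9)  # ~1.6 GB for this single shard
```

So one of the 770-odd shards alone is about 1.6 GB, which is why the README routes the download through `huggingface-cli` with `hf_transfer` rather than a plain `git clone` of this repo.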