Upload model

README.md CHANGED

```diff
@@ -18,7 +18,6 @@ base_model: bn22/Mistral-7B-Instruct-v0.1-sharded
 
 
 - **Developed by:** [More Information Needed]
-- **Funded by [optional]:** [More Information Needed]
 - **Shared by [optional]:** [More Information Needed]
 - **Model type:** [More Information Needed]
 - **Language(s) (NLP):** [More Information Needed]
@@ -77,7 +76,7 @@ Use the code below to get started with the model.
 
 ### Training Data
 
-<!-- This should link to a
+<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
 
 [More Information Needed]
 
@@ -108,7 +107,7 @@ Use the code below to get started with the model.
 
 #### Testing Data
 
-<!-- This should link to a
+<!-- This should link to a Data Card if possible. -->
 
 [More Information Needed]
 
@@ -212,7 +211,7 @@ The following `bitsandbytes` quantization config was used during training:
 - llm_int8_has_fp16_weight: True
 - bnb_4bit_quant_type: nf4
 - bnb_4bit_use_double_quant: True
-- bnb_4bit_compute_dtype:
+- bnb_4bit_compute_dtype: bfloat16
 
 ### Framework versions
 
```
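The quantization settings listed in the last hunk can be expressed with `transformers`' `BitsAndBytesConfig` — a minimal sketch, assuming `transformers`, `bitsandbytes`, and `torch` are installed. `load_in_4bit=True` is an assumption inferred from the 4-bit options; it is not stated in the model card.

```python
# Sketch: the quantization config from the hunk above, as a
# transformers BitsAndBytesConfig. `load_in_4bit=True` is assumed
# (implied by the 4-bit options), not taken from the model card.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                       # assumed: enables the 4-bit path
    llm_int8_has_fp16_weight=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,   # the value this commit fills in
)

# The config would then be passed to from_pretrained, e.g.:
# AutoModelForCausalLM.from_pretrained(
#     "bn22/Mistral-7B-Instruct-v0.1-sharded",
#     quantization_config=bnb_config,
# )
```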