# md7b-alpha

This model is a fine-tuned version of [epfl-llm/meditron-7b](https://huggingface.co/epfl-llm/meditron-7b) on a set of datasets.
It achieves the following results on the evaluation set:
- Loss: 1.0238
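
Assuming the reported loss is the mean per-token cross-entropy in nats (the usual convention for causal LM training), the corresponding perplexity is its exponential:

```python
import math

# Perplexity = exp(mean cross-entropy loss), assuming the loss is in nats.
eval_loss = 1.0238
perplexity = math.exp(eval_loss)
print(round(perplexity, 2))  # -> 2.78
```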
## Evaluation
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed

The following `bitsandbytes` quantization config was used during training:
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
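
The flags above map onto `transformers`' `BitsAndBytesConfig`. A minimal sketch of the equivalent object — `load_in_4bit=True` is an assumption inferred from the `bnb_4bit_*` settings, not stated in the card:

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the quantization flags listed above; load_in_4bit is assumed
# from the presence of the bnb_4bit_* settings.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    llm_int8_has_fp16_weight=False,
)
```

Such a config is passed as `quantization_config=bnb_config` to `AutoModelForCausalLM.from_pretrained`.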
### Framework versions
- PEFT 0.6.0
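
Since the adapter was trained with PEFT, inference would typically load it on top of the base model. A hedged sketch — the adapter repo id below is a placeholder, not this model's actual Hub path:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("epfl-llm/meditron-7b")
# "user/md7b-alpha" is a placeholder adapter id, not the real repo path.
model = PeftModel.from_pretrained(base, "user/md7b-alpha")
tokenizer = AutoTokenizer.from_pretrained("epfl-llm/meditron-7b")
```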