Update README.md
#6 by philschmid - opened

README.md CHANGED

@@ -10,7 +10,7 @@ language:
 
 This model is an Open-Assistant fine-tuning of Meta's CodeLlama 13B LLM.
 
-**Note**: Due to the new RoPE Theta value (1e6 instead of 1e4), for correct results you must
+**Note**: Due to the new RoPE Theta value (1e6 instead of 1e4), for correct results you must use Huggingface `transformers >= 4.33`.
 
 ## Model Details
 
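The version requirement in the added note can be sketched as a simple guard. This is a minimal sketch, assuming only what the note states (that CodeLlama's RoPE theta of 1e6 is handled correctly from `transformers` 4.33 onward); the helper name is hypothetical and not part of the PR.

```python
# Minimal sketch of the version requirement from the note above.
# CodeLlama uses rope_theta = 1e6 (vs. 1e4 for Llama 2); per the note,
# transformers releases before 4.33 do not apply this value, producing
# incorrect results.

def supports_codellama_rope(installed: str) -> bool:
    """True if a plain release string like "4.33.2" is >= 4.33."""
    major, minor = (int(part) for part in installed.split(".")[:2])
    return (major, minor) >= (4, 33)

# Usage (identifiers below are illustrative, not from the PR):
#   import transformers
#   if not supports_codellama_rope(transformers.__version__):
#       raise RuntimeError("Upgrade transformers to >= 4.33 for CodeLlama")
```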