### Training Data

The model has been trained on a mix of publicly available, permissively licensed data, along with a majority of unique internal datasets that we have created.

Our data includes examples up to 16384 tokens in length, further enhancing the model's long-context capability.

## Evaluation