Update README.md
README.md CHANGED

```diff
@@ -212,8 +212,6 @@ model-index:
 
 The model was pretrained on approximately 400 billion tokens and achieves state-of-the-art performance across several benchmarks designed to evaluate Portuguese language models. **All data, source code, and recipes used to develop the Tucano2 series are open and fully reproducible.**
 
-Read our preprint [here](https://arxiv.org/abs/XXXX.XXXXX) to learn more about the Tucano2 series.
-
 ## Details
 
 - **Architecture:** a Transformer-based model ([`llama`](https://huggingface.co/docs/transformers/main/en/model_doc/llama))
```