# Vocabulary Expanded Models (Collection)

A collection of 72 models exploring vocabulary expansion for low-resource languages. Naming convention: `{base}-{lang}-{samples}-{tokens}`.
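The naming convention can be unpacked mechanically. A minimal sketch (the field names and the `1k`-style sample suffix are assumptions read off the pattern, not an official schema):

```python
def parse_model_name(name: str) -> dict:
    """Split a {base}-{lang}-{samples}-{tokens} model name into its fields.

    The base model id may itself contain hyphens (e.g. "gemma2-2b"),
    so the three trailing fields are peeled off from the right.
    """
    base, lang, samples, tokens = name.rsplit("-", 3)
    return {
        "base": base,
        "lang": lang,  # language code, e.g. "bo" for Tibetan
        "samples": int(samples.rstrip("k")) * (1000 if samples.endswith("k") else 1),
        "tokens": int(tokens),
    }

print(parse_model_name("gemma2-2b-bo-1k-64"))
# {'base': 'gemma2-2b', 'lang': 'bo', 'samples': 1000, 'tokens': 64}
```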
This model is a vocabulary-expanded version of gemma2-2b for Tibetan.
| Parameter | Value |
|---|---|
| Base Model | google/gemma-2-2b |
| Target Language | Tibetan |
| Training Samples | 1,000 |
| Added Tokens | 64 |
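Vocabulary expansion grows the embedding matrix by the number of added tokens. A common initialization, sketched below with NumPy on toy dimensions, is to set each new row to the mean of the existing embeddings; the initialization actually used for this model is not documented here, so treat this purely as an illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

old_vocab, hidden, added = 1000, 64, 64        # toy dimensions for illustration
embed = rng.normal(size=(old_vocab, hidden))   # existing embedding matrix

# Initialize each new token row as the mean of the old embeddings,
# then append the new rows to form the expanded matrix.
new_rows = np.tile(embed.mean(axis=0), (added, 1))
expanded = np.vstack([embed, new_rows])

print(expanded.shape)   # (1064, 64)
```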
### Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the vocabulary-expanded model and its matching tokenizer
model = AutoModelForCausalLM.from_pretrained("Intellexus/gemma2-2b-bo-1k-64")
tokenizer = AutoTokenizer.from_pretrained("Intellexus/gemma2-2b-bo-1k-64")

text = "Your text here"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
### Gemma 2 (Base Model)

```bibtex
@article{gemma2024,
  title   = "Gemma 2: Improving Open Language Models at a Practical Size",
  author  = "{Gemma Team, Google DeepMind}",
  journal = "arXiv preprint arXiv:2408.00118",
  year    = "2024",
  url     = "https://arxiv.org/abs/2408.00118",
}
```
### CC-100 (Training Data)

```bibtex
@inproceedings{conneau-etal-2020-unsupervised,
  title     = "Unsupervised Cross-lingual Representation Learning at Scale",
  author    = "Conneau, Alexis and Khandelwal, Kartikay and Goyal, Naman and Chaudhary, Vishrav and Wenzek, Guillaume and Guzman, Francisco and Grave, Edouard and Ott, Myle and Zettlemoyer, Luke and Stoyanov, Veselin",
  booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
  year      = "2020",
  url       = "https://aclanthology.org/2020.acl-main.747",
}

@inproceedings{wenzek-etal-2020-ccnet,
  title     = "{CCN}et: Extracting High Quality Monolingual Datasets from Web Crawl Data",
  author    = "Wenzek, Guillaume and Lachaux, Marie-Anne and Conneau, Alexis and Chaudhary, Vishrav and Guzman, Francisco and Joulin, Armand and Grave, Edouard",
  booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
  year      = "2020",
  url       = "https://aclanthology.org/2020.lrec-1.494",
}
```
### NLLB-200 (Tibetan Parallel Data)

```bibtex
@inproceedings{schwenk-etal-2021-ccmatrix,
  title     = "{CCM}atrix: Mining Billions of High-Quality Parallel Sentences on the Web",
  author    = "Schwenk, Holger and others",
  booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics",
  year      = "2021",
  url       = "https://aclanthology.org/2021.acl-long.507",
}

@article{heffernan2022bitext,
  title   = "Bitext Mining Using Distilled Sentence Representations for Low-Resource Languages",
  author  = "Heffernan, Kevin and others",
  journal = "arXiv preprint arXiv:2205.12654",
  year    = "2022",
}

@article{nllb2022,
  title   = "No Language Left Behind: Scaling Human-Centered Machine Translation",
  author  = "{NLLB Team}",
  journal = "arXiv preprint arXiv:2207.04672",
  year    = "2022",
}
```
### This Model

```bibtex
@misc{intellexus-gemma2-2b-bo-1k-64,
  author    = "Intellexus",
  title     = "gemma2-2b-bo-1k-64",
  year      = "2025",
  publisher = "HuggingFace",
  url       = "https://huggingface.co/Intellexus/gemma2-2b-bo-1k-64",
}
```