This repository contains a BPE tokenizer trained specifically for the Kirundi language (ISO code: run).
The tokenizer was trained on the Kirundi-English parallel corpus (eligapris/kirundi-english).
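For intuition, BPE training builds its vocabulary by repeatedly merging the most frequent adjacent symbol pair in the corpus. The toy sketch below illustrates one merge step; the words and resulting merges are illustrative only and do not reflect this tokenizer's actual learned vocabulary:

```python
from collections import Counter

def most_frequent_pair(words):
    """Count adjacent symbol pairs over a corpus of {symbol-tuple: frequency}."""
    pairs = Counter()
    for symbols, freq in words.items():
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs.most_common(1)[0][0]

def merge_pair(words, pair):
    """Rewrite every word, replacing occurrences of `pair` with one merged symbol."""
    merged = {}
    for symbols, freq in words.items():
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
                out.append(symbols[i] + symbols[i + 1])
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged[tuple(out)] = freq
    return merged

# Toy corpus: two Kirundi words split into characters, with counts
words = {tuple("imana"): 3, tuple("ijambo"): 2}
pair = most_frequent_pair(words)
words = merge_pair(words, pair)
print("learned merge:", pair)
```

Repeating this step until the vocabulary reaches a target size yields the merge table stored in tokenizer.json.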
You can use this tokenizer in your project by first installing the required dependencies:
pip install transformers
Then load the tokenizer directly from the Hugging Face Hub:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("eligapris/rn-tokenizer")
Or if you have downloaded the tokenizer files locally:
from transformers import PreTrainedTokenizerFast
tokenizer = PreTrainedTokenizerFast(tokenizer_file="tokenizer.json")
# Basic tokenization
text = "ab'umudugudu hafi ya bose bateranira kumva ijambo ry'Imana."
encoded = tokenizer(text)
print(f"Input IDs: {encoded['input_ids']}")
print(f"Tokens: {tokenizer.convert_ids_to_tokens(encoded['input_ids'])}")
# Process multiple sentences at once
texts = [
    "ifumbire mvaruganda.",
    "aba azi gukora kandi afite ubushobozi"
]
encoded = tokenizer(texts, padding=True, truncation=True)
print("Batch encoding:", encoded)
# Add special tokens like [CLS] and [SEP]
encoded = tokenizer(text, add_special_tokens=True)
tokens = tokenizer.convert_ids_to_tokens(encoded['input_ids'])
print(f"Tokens with special tokens: {tokens}")
# Convert token IDs back to text
ids = encoded['input_ids']
decoded_text = tokenizer.decode(ids)
print(f"Decoded text: {decoded_text}")
# Pad or truncate sequences to a specific length
encoded = tokenizer(
    texts,
    padding='max_length',
    max_length=32,
    truncation=True,
    return_tensors='pt'  # Return PyTorch tensors
)
print("Padded sequences:", encoded['input_ids'].shape)
This tokenizer is intended to serve as a foundation for future Kirundi language model development, including potential fine-tuning with techniques like LoRA (Low-Rank Adaptation).
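As background on the LoRA idea mentioned above: LoRA freezes the pretrained weights and trains only a low-rank additive update. The NumPy sketch below illustrates the arithmetic in isolation; the dimensions and values are made up and not tied to any particular model:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2                          # hidden size and low rank (r << d)
W = rng.normal(size=(d, d))          # frozen pretrained weight
A = rng.normal(size=(r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                 # trainable up-projection, zero-initialized

x = rng.normal(size=d)
y = W @ x + B @ (A @ x)              # adapted forward pass

# Zero-initializing B makes the adapter an exact no-op before training starts
assert np.allclose(y, W @ x)
# Only A and B (2*r*d values) are trained, far fewer than W's d*d entries
print(A.size + B.size, "trainable vs", W.size, "frozen")
```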
dependencies = {
    "transformers": ">=4.30.0",
    "tokenizers": ">=0.13.0"
}