Based on the paper "ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators" (arXiv:2003.10555).
Bilingual ELECTRA (Czech-Slovak) is an Electra-small model pretrained on a mixed Czech and Slovak corpus. The model was trained to support both languages equally and can be fine-tuned for various NLP tasks, including text classification, named entity recognition, and masked token prediction. The model is released under the CC BY 4.0 license, which allows commercial use.
The model uses a SentencePiece tokenizer and requires a SentencePiece model file (m.model) for proper tokenization. You can use either the HuggingFace AutoTokenizer (recommended) or SentencePiece directly.
from transformers import AutoTokenizer, ElectraForPreTraining
# Load the tokenizer directly from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("AILabTUL/BiELECTRA-czech-slovak")
# Or load from local directory
# tokenizer = AutoTokenizer.from_pretrained("./CZSK")
# Load the pretrained model
model = ElectraForPreTraining.from_pretrained("AILabTUL/BiELECTRA-czech-slovak")
# Tokenize input text
sentence = "Toto je testovací věta v češtině a slovenčine."
inputs = tokenizer(sentence, return_tensors="pt")
# Run inference
outputs = model(**inputs)
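The pre-training head returns one logit per input token; applying a sigmoid turns each logit into the probability that the discriminator considers that token replaced. A minimal sketch of this post-processing, using a dummy logits tensor in place of real `outputs.logits`:

```python
import torch

# Dummy logits standing in for outputs.logits (shape: batch x seq_len)
logits = torch.tensor([[-2.0, 0.5, 3.0, -1.0]])

# Sigmoid maps each logit to the probability that the token was replaced
probs = torch.sigmoid(logits)

# Threshold at 0.5 for a binary replaced/original decision per token
replaced = (probs > 0.5).long()
print(replaced.tolist())  # [[0, 1, 1, 0]]
```

With real model output, substitute `outputs.logits` for the dummy tensor; positions flagged `1` are the tokens the discriminator believes were replaced.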
from transformers import ElectraForPreTraining
import sentencepiece as spm
import torch
# Load the SentencePiece model
sp = spm.SentencePieceProcessor()
sp.load("m.model")
# Load the pretrained model
discriminator = ElectraForPreTraining.from_pretrained("AILabTUL/BiELECTRA-czech-slovak")
# Tokenize input text (note: input should be lowercase)
sentence = "toto je testovací věta v češtině a slovenčine."
tokens = sp.encode(sentence, out_type=str)  # subword strings (useful for inspection)
token_ids = sp.encode(sentence)             # integer ids fed to the model
# Convert to tensor
input_tensor = torch.tensor([token_ids])
# Run inference
outputs = discriminator(input_tensor)
predictions = torch.sigmoid(outputs.logits).detach().cpu().numpy()
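Since `predictions` holds one score per input position, it can be zipped with the subword tokens to flag suspected replacements. An illustrative pairing, using made-up tokens and scores rather than real model output:

```python
# Hypothetical subword tokens and per-token replaced-probabilities (illustration only)
tokens = ["▁toto", "▁je", "▁testovacia", "▁veta"]
scores = [0.05, 0.10, 0.85, 0.20]

# Flag any token whose replaced-probability exceeds 0.5
flags = [(tok, score > 0.5) for tok, score in zip(tokens, scores)]
print(flags)
```

In practice, `tokens` would come from `sp.encode(sentence, out_type=str)` and `scores` from the `predictions` array above.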
This model was published as part of the research paper:
"Study on Automatic Punctuation Restoration in Bilingual Broadcast Stream"
@inproceedings{polacek-2025-study,
title = "Study on Automatic Punctuation Restoration in Bilingual Broadcast Stream",
author = "Polacek, Martin",
editor = "Velichkov, Boris and
Nikolova-Koleva, Ivelina and
Slavcheva, Milena",
booktitle = "Proceedings of the 9th Student Research Workshop associated with the International Conference Recent Advances in Natural Language Processing",
month = sep,
year = "2025",
address = "Varna, Bulgaria",
publisher = "INCOMA Ltd., Shoumen, Bulgaria",
url = "https://aclanthology.org/2025.ranlp-stud.5/",
pages = "37--43",
doi = "10.26615/issn.2603-2821.2025_005"
}