Tags: Transformers, PyTorch, Safetensors, Turkish, English, t5, text2text-generation, mt5, text-generation-inference, turkish
Model Card
Please see the google/mt5-base model card. This model is a pruned version of mt5-base that works only in Turkish and English. For the methodology, see the Russian counterpart, cointegrated/rut5-base.
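As a rough illustration of what such pruning involves (assuming the same recipe as cointegrated/rut5-base, which this card does not spell out): tokenize a Turkish/English corpus, keep only the token ids that actually occur, and shrink the shared embedding matrix to those rows, remapping the tokenizer's vocabulary accordingly. A minimal numpy sketch with a toy vocabulary standing in for mT5's ~250k-token one:

```python
import numpy as np

# Toy stand-in for mT5's shared embedding matrix (vocab_size x hidden_dim).
vocab_size, hidden_dim = 10, 4
embeddings = np.arange(vocab_size * hidden_dim, dtype=np.float32).reshape(vocab_size, hidden_dim)

# Hypothetical token ids observed when tokenizing a Turkish/English corpus;
# in practice you would count token frequencies over real text and keep
# special tokens plus the most frequent ones.
kept_ids = sorted({0, 1, 2, 5, 7})

# Smaller embedding matrix: new row i corresponds to old row kept_ids[i].
pruned = embeddings[kept_ids]
old_to_new = {old: new for new, old in enumerate(kept_ids)}

print(pruned.shape)   # (5, 4)
print(old_to_new[5])  # 3
```

The tokenizer's vocabulary file must be remapped with the same `old_to_new` table so that token ids stay consistent with the pruned embedding rows.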
Usage
First, import the required libraries:
from transformers import T5ForConditionalGeneration, T5Tokenizer
import torch
To load model:
model = T5ForConditionalGeneration.from_pretrained('bonur/t5-base-tr')
tokenizer = T5Tokenizer.from_pretrained('bonur/t5-base-tr')
To run inference on a given text, you can use the following code:

inputs = tokenizer("Bu hafta hasta olduğum için <extra_id_0> gittim.", return_tensors='pt')
with torch.no_grad():
    hypotheses = model.generate(
        **inputs,
        do_sample=True,
        top_p=0.95,
        num_return_sequences=2,
        repetition_penalty=2.75,
        max_length=32,
    )
for h in hypotheses:
    print(tokenizer.decode(h))
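The `top_p=0.95` argument enables nucleus sampling: at each decoding step only the smallest set of tokens whose cumulative probability reaches 0.95 is kept, and the next token is sampled from that set. A minimal numpy sketch of the filtering step (an illustration, not the library's implementation):

```python
import numpy as np

def top_p_filter(probs: np.ndarray, top_p: float) -> np.ndarray:
    """Zero out tokens outside the nucleus, then renormalize."""
    order = np.argsort(probs)[::-1]       # token indices, most probable first
    cumulative = np.cumsum(probs[order])
    # Keep tokens up to and including the one that crosses top_p.
    cutoff = int(np.searchsorted(cumulative, top_p)) + 1
    kept = order[:cutoff]
    filtered = np.zeros_like(probs)
    filtered[kept] = probs[kept]
    return filtered / filtered.sum()

probs = np.array([0.5, 0.3, 0.15, 0.05])
print(top_p_filter(probs, 0.9))  # zeroes out the 0.05 tail token
```

Lower `top_p` values make generations more conservative; `repetition_penalty` similarly down-weights tokens that already appear in the output.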
You can tune the generation parameters for better results. The model is also ready to be fine-tuned on bilingual (English/Turkish) downstream tasks.
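When fine-tuning a seq2seq model like this, target sequences are typically padded to a common length, and padded label positions are set to -100 so the loss ignores them (the ignore index convention used by PyTorch's cross-entropy and the `transformers` trainers). A minimal sketch with hypothetical token ids, assuming pad id 0 as in mT5:

```python
PAD_ID = 0
IGNORE_INDEX = -100  # positions with this label are skipped by the loss

# Hypothetical tokenized target sequences of uneven length.
targets = [[5, 17, 2, 1], [9, 1]]
max_len = max(len(t) for t in targets)

# Pad to max_len, then mask the padding out of the loss.
labels = [t + [PAD_ID] * (max_len - len(t)) for t in targets]
labels = [[tok if tok != PAD_ID else IGNORE_INDEX for tok in seq] for seq in labels]
print(labels)  # [[5, 17, 2, 1], [9, 1, -100, -100]]
```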