Instructions to use obss/mt5-small-3task-both-tquad2 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers

How to use obss/mt5-small-3task-both-tquad2 with Transformers:

```python
# Use a pipeline as a high-level helper.
# mt5 is a seq2seq model, so the correct task is text2text-generation.
from transformers import pipeline

pipe = pipeline("text2text-generation", model="obss/mt5-small-3task-both-tquad2")

# Or load the model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("obss/mt5-small-3task-both-tquad2")
model = AutoModelForSeq2SeqLM.from_pretrained("obss/mt5-small-3task-both-tquad2")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use obss/mt5-small-3task-both-tquad2 with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "obss/mt5-small-3task-both-tquad2"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "obss/mt5-small-3task-both-tquad2",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker:

```shell
docker model run hf.co/obss/mt5-small-3task-both-tquad2
```
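The curl call above can also be made from Python. Below is a minimal sketch of a client for the OpenAI-compatible `/v1/completions` endpoint, using only the standard library; it assumes the `vllm serve` server from the previous step is already running on localhost:8000 (the function names `build_request` and `complete` are illustrative, not part of vLLM).

```python
import json
import urllib.request

def build_request(prompt, model="obss/mt5-small-3task-both-tquad2",
                  max_tokens=512, temperature=0.5):
    """Build the JSON body for an OpenAI-compatible /v1/completions call."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

def complete(prompt, base_url="http://localhost:8000"):
    """POST the prompt to the server and return the first completion text."""
    body = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/v1/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        payload = json.load(resp)
    return payload["choices"][0]["text"]
```

With the server up, `complete("Once upon a time,")` sends the same request as the curl example.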
- SGLang
How to use obss/mt5-small-3task-both-tquad2 with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "obss/mt5-small-3task-both-tquad2" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "obss/mt5-small-3task-both-tquad2",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "obss/mt5-small-3task-both-tquad2" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "obss/mt5-small-3task-both-tquad2",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use obss/mt5-small-3task-both-tquad2 with Docker Model Runner:
```shell
docker model run hf.co/obss/mt5-small-3task-both-tquad2
```
mt5-small for Turkish Question Generation
Automated question generation and question answering using text-to-text transformers by OBSS AI.
Citation 📜
```bibtex
@article{akyon2022questgen,
  author = {Akyon, Fatih Cagatay and Cavusoglu, Ali Devrim Ekin and Cengiz, Cemil and Altinuc, Sinan Onur and Temizel, Alptekin},
  doi = {10.3906/elk-1300-0632.3914},
  journal = {Turkish Journal of Electrical Engineering and Computer Sciences},
  title = {{Automated question generation and question answering from Turkish texts}},
  url = {https://journals.tubitak.gov.tr/elektrik/vol30/iss5/17/},
  year = {2022}
}
```
Overview ✔️
Language model: mt5-small
Language: Turkish
Downstream-task: Extractive QA/QG, Answer Extraction
Training data: TQuADv2-train
Code: https://github.com/obss/turkish-question-generation
Paper: https://journals.tubitak.gov.tr/elektrik/vol30/iss5/17/
Hyperparameters
```python
batch_size = 256
n_epochs = 15
base_LM_model = "mt5-small"
max_source_length = 512
max_target_length = 64
learning_rate = 1.0e-3
task_list = ["qa", "qg", "ans_ext"]
qg_format = "both"
```
Performance
Refer to paper.
Usage 🔥
```python
from core.api import GenerationAPI

generation_api = GenerationAPI('mt5-small-3task-both-tquad2', qg_format='both')

context = """
Bu modelin eğitiminde, Türkçe soru cevap verileri kullanılmıştır.
Çalışmada sunulan yöntemle, Türkçe metinlerden otomatik olarak soru ve cevap
üretilebilir. Bu proje ile paylaşılan kaynak kodu ile Türkçe Soru Üretme
/ Soru Cevaplama konularında yeni akademik çalışmalar yapılabilir.
Projenin detaylarına paylaşılan Github ve Arxiv linklerinden ulaşılabilir.
"""

# a) Fully Automated Question Generation
generation_api(task='question-generation', context=context)

# b) Question Answering
question = "Bu model ne işe yarar?"
generation_api(task='question-answering', context=context, question=question)

# c) Answer Extraction
generation_api(task='answer-extraction', context=context)
```
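Under the hood, a single text-to-text model multiplexes all three tasks through input prefixes. The exact source-string format is defined in the linked turkish-question-generation repository; the sketch below is an illustration only, with prefixes and the `<hl>` highlight token assumed by analogy with common multitask QG setups, not taken from the repository.

```python
def build_model_input(task, context, question=None, answer=None):
    """Sketch of task-prefixed source strings for a multitask QA/QG model.

    The prefixes and the <hl> answer-highlight token are assumptions for
    illustration; consult the repository for the exact format.
    """
    if task == "question-answering":
        return f"question: {question} context: {context}"
    if task == "answer-extraction":
        return f"extract answers: {context}"
    if task == "question-generation":
        # Highlight the answer span so the model knows what to ask about
        highlighted = context.replace(answer, f"<hl> {answer} <hl>", 1)
        return f"generate question: {highlighted}"
    raise ValueError(f"unknown task: {task}")
```

The tokenized source string is then fed to `model.generate`, and the decoded output is the question, answer, or extracted answer list depending on the prefix.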