## How to use with the Transformers library

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="saeub/bert-stage")
```
```python
# Load the tokenizer and model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("saeub/bert-stage")
model = AutoModelForTokenClassification.from_pretrained("saeub/bert-stage")
```
## Statement Segmentation in German Easy Language (StaGE) submission

This model is our submission to the StaGE shared task at KONVENS 2024 under the team name StaGE FriGHt. It is based on bert-base-multilingual-cased and fine-tuned for binary classification of statement span heads. The training data can be accessed here.
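Since the model classifies each token as either the head of a statement span or not, the token-level predictions still need to be grouped into statement segments. The following is a minimal sketch of that post-processing step, assuming a simple convention in which a positive head flag starts a new segment; the function name and label convention are illustrative, not part of the released code.

```python
def segment_statements(tokens, is_head):
    """Group tokens into statement segments.

    tokens: list of token strings.
    is_head: list of booleans, True where the model predicts
             the token to be the head (start) of a statement span.
    Returns a list of segments, each a list of tokens.
    """
    segments = []
    current = []
    for token, head in zip(tokens, is_head):
        # A predicted head closes the previous segment and opens a new one.
        if head and current:
            segments.append(current)
            current = []
        current.append(token)
    if current:
        segments.append(current)
    return segments
```

For example, `segment_statements(["Das", "ist", "ein", "Satz", "Das", "auch"], [True, False, False, False, True, False])` yields two segments, split at the second predicted head.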

For more information, refer to the corresponding GitHub repository, the paper about our submission (to be published) and the overview paper (to be published).

Model size: 0.2B parameters (F32 tensors, Safetensors format)