# HackathonSomosNLP 22: Winners & Honorable Mentions
How to use somosnlp-hackathon-2022/readability-es-sentences with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="somosnlp-hackathon-2022/readability-es-sentences")
```

```python
# Or load the model and tokenizer directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("somosnlp-hackathon-2022/readability-es-sentences")
model = AutoModelForSequenceClassification.from_pretrained("somosnlp-hackathon-2022/readability-es-sentences")
```

This model is based on the RoBERTa architecture and fine-tuned from BERTIN for readability assessment of Spanish texts.
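When the model is loaded directly (rather than through a pipeline), its forward pass returns raw logits rather than labels. A minimal sketch of turning logits into a class prediction with a plain-Python softmax; the example logits here are made up for illustration, and the real label names would come from `model.config.id2label`:

```python
import math

def softmax(logits):
    """Convert raw classifier logits into probabilities."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for one sentence from the sequence-classification head
logits = [1.2, -0.3]
probs = softmax(logits)

# The predicted class is the index with the highest probability;
# map it to a label name via model.config.id2label in real use.
predicted_id = max(range(len(probs)), key=probs.__getitem__)
print(predicted_id)
```

In practice you would obtain `logits` from `model(**tokenizer(text, return_tensors="pt")).logits` and apply the same argmax.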
This version of the model was trained on a mix of datasets, using sentence-level granularity where possible. The model performs binary classification between two readability classes.
It achieves an F1 macro average score of 0.8923, measured on the validation set.
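For reference, macro-averaged F1 is the unweighted mean of the per-class F1 scores, so both readability classes count equally regardless of their frequency. A small self-contained sketch with toy labels (not the actual validation data):

```python
def f1_macro(y_true, y_pred, labels):
    """Macro F1: the unweighted mean of per-class F1 scores."""
    scores = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        # Per-class F1 is the harmonic mean of precision and recall
        scores.append(2 * precision * recall / (precision + recall)
                      if precision + recall else 0.0)
    return sum(scores) / len(scores)

# Toy binary example: class 0 gets F1 = 0.5, class 1 gets F1 = 2/3
y_true = [0, 0, 1, 1, 1]
y_pred = [0, 1, 1, 1, 0]
print(round(f1_macro(y_true, y_pred, [0, 1]), 4))  # → 0.5833
```

The same figure can be reproduced with `sklearn.metrics.f1_score(y_true, y_pred, average="macro")`.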
Related models in this family:

- readability-es-sentences (this model): two classes, sentence-based dataset.
- readability-es-paragraphs: two classes, paragraph-based dataset.
- readability-es-3class-sentences: three classes, sentence-based dataset.
- readability-es-3class-paragraphs: three classes, paragraph-based dataset.

The training data comes from the readability-es-hackathon-pln-public dataset, itself composed of several sources. Please refer to this training run for full details on hyperparameters and training regime.