Use from the Transformers library
# Use a pipeline as a high-level helper
# Warning: the "translation" pipeline task is no longer supported in transformers v5.
# Either load the model directly (see below) or downgrade to v4.x with:
# pip install "transformers<5.0.0"
from transformers import pipeline

pipe = pipeline("translation", model="Qilex/bart-largeEN-ME")
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Qilex/bart-largeEN-ME")
model = AutoModelForSeq2SeqLM.from_pretrained("Qilex/bart-largeEN-ME")
This is a BART-large model fine-tuned on roughly 58,000 aligned sentence pairs in English and Middle English, collected from the works of Geoffrey Chaucer, John Wycliffe, and the Gawain Poet.
Its output includes special characters such as þ (thorn) and reflects the spelling inconsistencies characteristic of Middle English.
Because the model is trained largely on poetry and some prose, it performs best on those kinds of texts.
Performance can be improved by sentence-tokenizing the input and translating sentence by sentence.
Expanding contractions (hadn't -> had not) also boosts performance.
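The two preprocessing steps above can be sketched with the standard library alone. This is a minimal illustration, not part of the model's official tooling: the contraction map is a small hypothetical sample you would extend for real data, and the sentence splitter is a naive regex (a proper sentence tokenizer such as NLTK's would handle abbreviations better).

```python
import re

# Illustrative contraction map; extend as needed for your data.
CONTRACTIONS = {
    "hadn't": "had not",
    "don't": "do not",
    "can't": "cannot",
    "won't": "will not",
    "it's": "it is",
}

_CONTRACTION_RE = re.compile(
    r"\b(" + "|".join(re.escape(c) for c in CONTRACTIONS) + r")\b",
    re.IGNORECASE,
)

def expand_contractions(text: str) -> str:
    """Replace known contractions, preserving a leading capital letter."""
    def repl(match: re.Match) -> str:
        expanded = CONTRACTIONS[match.group(0).lower()]
        return expanded.capitalize() if match.group(0)[0].isupper() else expanded
    return _CONTRACTION_RE.sub(repl, text)

def split_sentences(text: str) -> list[str]:
    """Naive split on ., !, or ? followed by whitespace."""
    return [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]

def preprocess(text: str) -> list[str]:
    """Sentence-tokenize and expand contractions, ready for per-sentence translation."""
    return [expand_contractions(s) for s in split_sentences(text)]
```

Each string returned by `preprocess` can then be passed to the pipeline or to `model.generate` one sentence at a time.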
