# ms-hardwick-translator
This model is a fine-tuned version of google/flan-t5-base trained to rewrite English text into the style and tone of Melissa Fulmore-Hardwick.
## Model description
The ms-hardwick-translator adapts written English to match Melissa Fulmore-Hardwick’s voice — her choice of words, sentence rhythm, and instructional style.
It is built on the Flan-T5 Base architecture, a sequence-to-sequence transformer model optimized for instruction following and natural language generation.
- Architecture: T5 (encoder-decoder)
- Base model: google/flan-t5-base
- Language: English
- Task: Style translation / text rewriting
- Framework: Hugging Face Transformers
## Intended uses & limitations

### Intended uses
- Convert standard text into Melissa Fulmore-Hardwick’s instructional style for lesson materials.
- Create consistent tone for educational resources or projects that require her voice.
### Limitations
- May not preserve highly technical terms if they were not present in the training set.
- Not intended for literal translation between different human languages.
- Works best for short to medium-length text (1–3 paragraphs).
## Training and evaluation data
- Dataset type: Custom parallel dataset with two columns:
  - `input`: Original English sentence.
  - `target`: Sentence rewritten in Melissa Fulmore-Hardwick's voice.
- Dataset size: [Insert row count]
- Source: Curated examples from transcripts, speeches, and rewritten passages.
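The two-column parallel format described above can be stored as JSON Lines, one `input`/`target` pair per line. A minimal sketch follows; the example pairs are invented for illustration only, since the actual dataset contents are not published.

```python
import json

# Hypothetical rows in the input/target parallel format.
# These pairs are illustrative, not taken from the real dataset.
rows = [
    {
        "input": "Please review the chapter before class.",
        "target": "Take a moment to sit with that chapter before we meet.",
    },
    {
        "input": "Submit your essay by Friday.",
        "target": "Let's have those essays in hand by Friday, shall we?",
    },
]

# Write and read back as JSON Lines, one example per line.
with open("train.jsonl", "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")

with open("train.jsonl") as f:
    loaded = [json.loads(line) for line in f]

print(len(loaded), sorted(loaded[0].keys()))  # → 2 ['input', 'target']
```

A file in this shape can be loaded directly with `datasets.load_dataset("json", data_files="train.jsonl")`.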
## Training procedure

### Preprocessing
- Tokenized with `AutoTokenizer` from `google/flan-t5-base`
- Maximum input length: [Insert max length used]
- Maximum target length: [Insert max length used]
### Training hyperparameters
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW
- epochs: [Insert number if known]
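The hyperparameters above map onto a `Seq2SeqTrainingArguments` configuration roughly as sketched below. The `output_dir` and `num_train_epochs` values are placeholders (the epoch count is not recorded), so treat this as an approximation rather than the exact training setup.

```python
from transformers import Seq2SeqTrainingArguments

# Approximate reconstruction of the listed hyperparameters.
# output_dir and num_train_epochs are assumptions, not recorded values.
training_args = Seq2SeqTrainingArguments(
    output_dir="ms-hardwick-translator",   # hypothetical
    learning_rate=3e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",                   # AdamW optimizer
    num_train_epochs=3,                    # placeholder; actual value unknown
    predict_with_generate=True,            # generate text during evaluation
)
```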
## Example usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("ClergeF/ms-hardwick-translator")
model = AutoModelForSeq2SeqLM.from_pretrained("ClergeF/ms-hardwick-translator")

# Rewrite a sentence in Melissa Fulmore-Hardwick's style
text = "Please complete your homework by tomorrow."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```