# GPT-2 Fine-tuned on Wikitext
This model is a fine-tuned version of gpt2 on the Wikitext dataset.
It was trained for causal language modeling, so it can generate coherent English text from a prompt.
## Task
Language Modeling / Text Generation:
- Predicts the next word in a sequence.
- Can be used for creative writing, story generation, or general text completion.
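The next-word prediction described above can be sketched as a softmax over candidate tokens. The tiny vocabulary and logit values below are purely illustrative, not real GPT-2 outputs:

```python
import math

# Toy illustration: a causal language model assigns a score (logit)
# to every candidate next token given the context so far.
vocab = ["galaxy", "garden", "giraffe"]
logits = [4.2, 1.1, -0.3]  # hypothetical scores, not real model outputs

# Softmax turns logits into a probability distribution over the vocabulary.
exps = [math.exp(l) for l in logits]
total = sum(exps)
probs = [e / total for e in exps]

# Greedy decoding: pick the highest-probability token as the next word.
next_word = vocab[probs.index(max(probs))]
print(next_word)  # the candidate with the largest logit
```

In practice `model.generate` repeats this step token by token, optionally with sampling instead of the greedy argmax.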
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub.
model = AutoModelForCausalLM.from_pretrained("dina1/GPT2_finetuned_with_wikitext")
tokenizer = AutoTokenizer.from_pretrained("dina1/GPT2_finetuned_with_wikitext")

# Encode a prompt and generate a continuation of up to 50 tokens total.
prompt = "Once upon a time in a distant galaxy"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=50, num_return_sequences=1)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Dataset
Wikitext was used to train dina1/GPT2_finetuned_with_wikitext.
## Evaluation results
- Perplexity on Wikitext: 25.4 (self-reported)
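For context, perplexity is the exponential of the mean cross-entropy loss per token. A minimal sketch, using an illustrative loss value chosen only so the result lands near the reported score:

```python
import math

# Hypothetical mean cross-entropy loss (nats per token) on the eval set.
mean_cross_entropy = 3.235

# Perplexity = exp(mean cross-entropy); lower is better.
perplexity = math.exp(mean_cross_entropy)
print(round(perplexity, 1))  # roughly 25.4 for this loss value
```

When evaluating with the `transformers` Trainer, the reported `eval_loss` is this mean cross-entropy, so `math.exp(eval_loss)` gives the perplexity.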