---
language: en
datasets:
- wikitext
metrics:
- perplexity
model-index:
- name: GPT-2 Fine-tuned on Wikitext
  results:
  - task:
      type: text-generation
      name: Language Modeling
    dataset:
      name: Wikitext
      type: wikitext
    metrics:
    - type: perplexity
      value: 25.4
tags:
- gpt2
- language-modeling
- text-generation
license: mit
---
# GPT-2 Fine-tuned on Wikitext
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the Wikitext dataset.
It was trained for causal language modeling, so it can generate coherent English text from a prompt.
## Task
**Language Modeling / Text Generation**

- Predicts the next word in a sequence (see the sketch after this list).
- Can be used for creative writing, story generation, or general text completion.
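
To make "predicts the next word" concrete, here is a minimal sketch that runs one forward pass and prints the five most likely next tokens. The repo ID is the one from the Usage section below; the prompt is an arbitrary example.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dina1/GPT2_finetuned_with_wikitext")
model = AutoModelForCausalLM.from_pretrained("dina1/GPT2_finetuned_with_wikitext")

# Score the next token for an arbitrary example prompt
inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

# Logits at the last position are the distribution over the next token
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    # repr() makes the leading space of BPE tokens visible
    print(f"{tokenizer.decode(int(idx))!r}: {p.item():.3f}")
```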
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("dina1/GPT2_finetuned_with_wikitext")
model = AutoModelForCausalLM.from_pretrained("dina1/GPT2_finetuned_with_wikitext")

prompt = "Once upon a time in a distant galaxy"
inputs = tokenizer(prompt, return_tensors="pt")

# max_length counts the prompt tokens, so this generates at most
# 50 tokens in total, prompt included
outputs = model.generate(**inputs, max_length=50, num_return_sequences=1)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
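
## Evaluation

The metadata above reports a perplexity of 25.4, but the card does not say how it was measured. As a non-authoritative sketch, the snippet below computes perplexity over non-overlapping 1024-token windows of the Wikitext test split, following the common Transformers recipe; the `wikitext-2-raw-v1` config is an assumption, since the card does not name the exact variant.

```python
import math
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dina1/GPT2_finetuned_with_wikitext")
model = AutoModelForCausalLM.from_pretrained("dina1/GPT2_finetuned_with_wikitext")
model.eval()

# Assumption: the card does not state which Wikitext config was used
test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")
encodings = tokenizer("\n\n".join(test["text"]), return_tensors="pt")

max_length = model.config.n_positions  # 1024 for GPT-2
total_tokens = encodings.input_ids.size(1)

nlls = []
for begin in range(0, total_tokens, max_length):
    input_ids = encodings.input_ids[:, begin : begin + max_length]
    with torch.no_grad():
        # Passing labels=input_ids returns the mean next-token loss
        loss = model(input_ids, labels=input_ids).loss
    # Weight each window's mean loss by its length before averaging
    nlls.append(loss * input_ids.size(1))

ppl = math.exp(torch.stack(nlls).sum() / total_tokens)
print(f"perplexity: {ppl:.1f}")
```

Non-overlapping windows slightly overstate perplexity compared to a sliding-window evaluation with a stride, since tokens near window boundaries get little context; the reported 25.4 may therefore differ from what this sketch produces.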