GPT-2 Fine-tuned on Wikitext

This model is a fine-tuned version of gpt2 on the Wikitext dataset.
It was trained with a causal language modeling objective, so it can generate coherent English text from a prompt.

Task

Language Modeling / Text Generation:

  • Predicts the next token in a sequence.
  • Can be used for creative writing, story generation, or general text completion.

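The next-token prediction described above can be inspected directly: instead of calling generate, you can read the model's output logits and look at the probability distribution over the vocabulary for the position after the prompt. This is a minimal sketch using the repo id from the Usage section below; the prompt is only illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "dina1/GPT2_finetuned_with_wikitext"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()

prompt = "The quick brown fox"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

# Distribution over the token that would follow the prompt
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([idx.item()]):>12}  {p.item():.3f}")
```

Sampling one token from this distribution and appending it to the input, repeated in a loop, is exactly what generate does internally.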
Usage

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("dina1/GPT2_finetuned_with_wikitext")
model = AutoModelForCausalLM.from_pretrained("dina1/GPT2_finetuned_with_wikitext")

prompt = "Once upon a time in a distant galaxy"
inputs = tokenizer(prompt, return_tensors="pt")

# max_length counts prompt tokens plus generated tokens
outputs = model.generate(
    **inputs,
    max_length=50,
    num_return_sequences=1,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no dedicated pad token
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
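The snippet above uses greedy decoding, which can produce repetitive text. For creative writing or story generation, sampling usually gives more varied output. A hedged sketch follows; the temperature and top_p values are illustrative defaults, not settings tuned for this model.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "dina1/GPT2_finetuned_with_wikitext"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Once upon a time in a distant galaxy"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=50,        # generate up to 50 tokens after the prompt
    do_sample=True,           # sample instead of picking the argmax token
    temperature=0.8,          # values < 1.0 sharpen the distribution
    top_p=0.95,               # nucleus sampling: keep the top 95% of mass
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no dedicated pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

max_new_tokens counts only freshly generated tokens, so unlike max_length it does not need to account for the prompt length.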
Model details

Model size: 0.1B parameters
Tensor type: F32
Format: Safetensors
