# GPT-2 Fine-tuned Model by ps2program
This is a GPT-2 model fine-tuned on [your dataset name, e.g., TinyStories].
## Model Details
- Base Model: GPT-2 (small)
- Fine-tuning method: LoRA adapters merged into the base model (see the sketch after this list)
- Dataset: [your dataset name, e.g., TinyStories]
- Training epochs: 3
- Batch size: 4
- Framework: Hugging Face Transformers + PEFT
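
For reference, the LoRA-and-merge workflow named above can look like the following with PEFT. This is a minimal sketch, not the exact training script: the LoRA hyperparameters (`r`, `lora_alpha`, `target_modules`) and the output path are illustrative assumptions.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Attach LoRA adapters to GPT-2's attention projection.
# (Rank and alpha are illustrative, not the values used for this model.)
base = AutoModelForCausalLM.from_pretrained("gpt2")
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["c_attn"],  # GPT-2's fused QKV projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)

# ... fine-tune `model` for 3 epochs with batch size 4 (e.g., via Trainer) ...

# Merge the trained adapters back into the base weights so the result
# loads as a plain GPT-2 checkpoint, as published here.
merged = model.merge_and_unload()
merged.save_pretrained("gpt2-finetuned-merged")  # hypothetical output path
```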
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "ps2program/gpt2-finetuned-ps2prahlad"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Sample a short continuation of the prompt.
prompt = "Once upon a time"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=50,  # tokens generated beyond the prompt
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
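
GPT-2 defines no padding token, so generating from several prompts at once requires one extra step: reuse the EOS token for padding. A minimal sketch, continuing the setup above (the prompts are illustrative):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "ps2program/gpt2-finetuned-ps2prahlad"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# GPT-2 has no pad token; reuse EOS so a batch of prompts can be padded.
tokenizer.pad_token = tokenizer.eos_token

prompts = ["Once upon a time", "The little robot"]  # example prompts
batch = tokenizer(prompts, return_tensors="pt", padding=True)
outputs = model.generate(
    **batch,
    max_new_tokens=50,
    do_sample=True,
    temperature=0.7,
    pad_token_id=tokenizer.eos_token_id,
)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))
```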