# PoeticTextGenerator_GPT2
## Overview
This model is a GPT-2 Small variant fine-tuned specifically for the task of unconditional and conditional poetic text generation. It has been trained on a curated corpus of classical and contemporary English poetry, allowing it to generate text that mimics meter, rhyme, and figurative language patterns. The model is configured as a GPT2LMHeadModel for Language Modeling.
## Model Architecture
The model leverages the powerful transformer architecture of the GPT-2 Small base model.
- Base Model: `gpt2` (124M parameters)
- Task: Causal Language Modeling (`GPT2LMHeadModel`)
- Tokenization: Standard GPT-2 Byte Pair Encoding (BPE) tokenizer.
- Training Data: Approximately 20,000 poems spanning multiple centuries and styles (e.g., sonnets, free verse, haikus).
- Hyperparameters: Fine-tuned with a low learning rate to preserve the linguistic capabilities of the base model while acquiring poetic style.
- Key Config: `do_sample=True` and `temperature=0.8` are set as default generation parameters to encourage creative and diverse outputs.
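As a reference point for the figures above, the GPT-2 Small architecture can be instantiated from the default `GPT2Config` (12 layers, 12 heads, 768-dim embeddings) and its parameter count checked; this is a sketch of the base architecture only, not the fine-tuned weights:

```python
from transformers import GPT2Config, GPT2LMHeadModel

# Default GPT2Config corresponds to GPT-2 Small:
# 12 layers, 12 attention heads, 768-dim embeddings, 50257-token vocab.
config = GPT2Config()
model = GPT2LMHeadModel(config)  # randomly initialized, architecture only

n_params = model.num_parameters()
print(f"{n_params / 1e6:.0f}M parameters")  # roughly 124M
```

The language-modeling head shares weights with the input embedding matrix, which is why the count stays near 124M rather than growing by another vocab-sized projection.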
## Intended Use
- Creative Writing Assistance: Providing prompts, completing stanzas, or generating entire poems for writers.
- Artistic Installations: Generating dynamic, ever-changing poetic text for digital art or interactive projects.
- Stylometric Research: Studying the model's ability to imitate different poetic styles by adjusting the prompt or conditioning data.
- Educational Tool: Demonstrating the capabilities of large language models in creative domains.
## How to use
```python
from transformers import pipeline, set_seed

generator = pipeline(
    "text-generation",
    model="[YOUR_HF_USERNAME]/PoeticTextGenerator_GPT2"
)
set_seed(42)

# Conditional generation (prompting a theme)
prompt = "The shadow of the moon fell upon the silent street,"
output = generator(
    prompt,
    max_length=50,
    num_return_sequences=1,
    temperature=0.9,
    top_p=0.95,
    do_sample=True
)
print(output[0]["generated_text"])

# Unconditional generation (starting from a single word)
# output = generator("A", max_length=100, num_return_sequences=1)
```
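The `temperature` and `top_p` arguments above control how the next token is drawn: temperature rescales the logits before softmax (lower values sharpen the distribution), and nucleus sampling keeps only the smallest set of tokens whose cumulative probability reaches `top_p`. A minimal NumPy sketch of that sampling step (the function name and logit values are illustrative, not part of the transformers API):

```python
import numpy as np

def sample_next_token(logits, temperature=0.9, top_p=0.95, rng=None):
    """Temperature + nucleus (top-p) sampling over a single logit vector."""
    rng = rng if rng is not None else np.random.default_rng(0)
    # Temperature: divide logits before softmax; <1 sharpens, >1 flattens.
    scaled = logits / temperature
    probs = np.exp(scaled - np.max(scaled))
    probs /= probs.sum()
    # Nucleus: keep the smallest top set of tokens whose cumulative
    # probability reaches top_p, then renormalize over that set.
    order = np.argsort(probs)[::-1]
    cum = np.cumsum(probs[order])
    cutoff = np.searchsorted(cum, top_p) + 1
    keep = order[:cutoff]
    kept_probs = probs[keep] / probs[keep].sum()
    return int(rng.choice(keep, p=kept_probs))

logits = np.array([2.0, 1.0, 0.5, -1.0])
next_token = sample_next_token(logits)  # low-probability tail is never drawn
```

With these logits and `top_p=0.95`, the lowest-probability token falls outside the nucleus and can never be sampled, which is why top-p is effective at suppressing degenerate completions while keeping diversity.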