Instructions for using Exquisique/Shakespeare_Alike with libraries, inference providers, notebooks, and local apps.
- Libraries
- Transformers
How to use Exquisique/Shakespeare_Alike with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Exquisique/Shakespeare_Alike")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Exquisique/Shakespeare_Alike")
model = AutoModelForCausalLM.from_pretrained("Exquisique/Shakespeare_Alike")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Exquisique/Shakespeare_Alike with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Exquisique/Shakespeare_Alike"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Exquisique/Shakespeare_Alike",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
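The same OpenAI-compatible completions call can be issued from Python. The sketch below uses only the standard library and mirrors the curl payload above; the actual network call is left commented out, since it assumes the vLLM server is already running on localhost:8000.

```python
import json
import urllib.request

# Request body mirroring the curl example above.
payload = {
    "model": "Exquisique/Shakespeare_Alike",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5,
}

# Build the POST request for the OpenAI-compatible completions endpoint.
request = urllib.request.Request(
    "http://localhost:8000/v1/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# With the vLLM server running, send the request and print the generated text:
# with urllib.request.urlopen(request) as response:
#     result = json.load(response)
#     print(result["choices"][0]["text"])
```

Any OpenAI-compatible client (for example the `openai` Python package pointed at `http://localhost:8000/v1`) works the same way against this endpoint.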
- SGLang
How to use Exquisique/Shakespeare_Alike with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Exquisique/Shakespeare_Alike" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Exquisique/Shakespeare_Alike",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Exquisique/Shakespeare_Alike" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Exquisique/Shakespeare_Alike",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use Exquisique/Shakespeare_Alike with Docker Model Runner:
```shell
docker model run hf.co/Exquisique/Shakespeare_Alike
```
Shakespeare_Alike
Model Overview
Shakespeare_Alike is a fine-tuned variant of [meta-llama/Llama-3.2-1B], specifically tailored to generate English sonnets and poetry in the distinctive style of William Shakespeare. This model was trained on the [Exquisique/Shakespeare_Poetry] dataset, a curated corpus derived from the works of Shakespeare. It is designed to emulate Shakespearean language, meter, and poetic forms for creative, educational, and entertainment purposes.
Intended Use
- Primary use case: Generation of Shakespearean sonnets and verses.
- Potential applications: Creative writing, educational demonstrations, literary style transfer, and entertainment.
Model Details
- Base Model: meta-llama/Llama-3.2-1B
- Fine-tuned Dataset: Exquisique/Shakespeare_Poetry (derived from Shakespeare's poetic works)
- Languages: English
- License: Apache-2.0
Example Generation
Prompt:
O gentle moon, whose silver beams do light

Generated:
O gentle moon, whose silver beams do light
The midnight stage where lonely hearts do pine,
Thou watchest all, and hast in patient sight
The weeping stars that in the heavens shine.
Thy ancient glow recalls the poet's art,
In sonnet's form my trembling mind is caught,
For thou and he are kindred, set apart,
By time and muse, by memory and thought.
Inference Recommendations
- Temperature: 0.8–1.0
- Top-p: 0.9
- Max new tokens: 128–256
- For optimal results, provide prompts inspired by classical English poetry or direct lines from Shakespeare.
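Collected as keyword arguments for `model.generate()`, the recommendations above might look like this (a sketch: the concrete values are one choice within the recommended ranges, not settings fixed by the model card; note that `do_sample=True` is required for temperature and top-p to take effect in Transformers):

```python
# Keyword arguments for transformers' model.generate(), matching the
# recommendations above (values chosen within the suggested ranges).
generation_kwargs = dict(
    do_sample=True,      # sampling must be enabled for temperature/top_p to apply
    temperature=0.9,     # recommended range: 0.8-1.0
    top_p=0.9,
    max_new_tokens=256,  # recommended range: 128-256
)
```

These can then be passed through as `model.generate(**inputs, **generation_kwargs)`.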
Usage Example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Exquisique/Shakespeare_Alike"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "O gentle moon, whose silver beams do light"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,  # enable sampling so temperature/top_p take effect
    temperature=0.9,
    top_p=0.95,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
Limitations
While the model generates poetry in the Shakespearean style, some outputs may depart from strict poetic sense or logical consistency. Generated texts should be reviewed before use in critical or educational settings.
Citation
If you use this model in academic or commercial projects, please cite the corresponding Hugging Face repository.