How to use recoilme/insomnia_v1 with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="recoilme/insomnia_v1")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("recoilme/insomnia_v1")
model = AutoModelForCausalLM.from_pretrained("recoilme/insomnia_v1")

How to use recoilme/insomnia_v1 with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "recoilme/insomnia_v1"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "recoilme/insomnia_v1",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
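The same request can be made from Python; a minimal sketch using only the standard library (it mirrors the curl call above and assumes a vLLM server is already running on localhost:8000; the same pattern works for the SGLang server below on port 30000):

```python
import json
import urllib.request

# The OpenAI-compatible completion request from the curl example above
payload = {
    "model": "recoilme/insomnia_v1",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5,
}

def complete(payload, url="http://localhost:8000/v1/completions"):
    """POST the payload to the server and return the first completion text."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["text"]

# text = complete(payload)  # requires the running server from the step above
```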
How to use recoilme/insomnia_v1 with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "recoilme/insomnia_v1" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "recoilme/insomnia_v1",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'

# Or run the SGLang server in Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "recoilme/insomnia_v1" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "recoilme/insomnia_v1",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'

How to use recoilme/insomnia_v1 with Docker Model Runner:
docker model run hf.co/recoilme/insomnia_v1
v2 is out! https://huggingface.co/recoilme/insomnia_v2
Project by https://aiartlab.org/
A GPT-2 model that generates prompts for SDXL and similar models.
Fine-tuned from the GPT-2 small model. Attach a style to shape the render further.
Trained on synthetic prompts generated with Mistral 7B.
Dataset: https://huggingface.co/datasets/recoilme/SyntheticPrompts
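The "attach a style" tip can be as simple as appending a style fragment to the subject before generation; a hypothetical helper (both the helper and the style strings are illustrative, mirroring the "Cat, art by " example below):

```python
def with_style(subject: str, style: str) -> str:
    """Append a style fragment so the generator continues in that style.

    Illustrative only: the model card just says to attach a style
    to the subject text before generating.
    """
    return f"{subject.rstrip(', ')}, {style}"

# These strings can then be fed to the text-generation pipeline below
print(with_style("Cat,", "art by "))
print(with_style("a photo of woman underwater", "cinematic style"))
```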
from transformers import pipeline, GPT2Tokenizer, GPT2LMHeadModel

checkpoint_path = "recoilme/insomnia_v2"
model = GPT2LMHeadModel.from_pretrained(checkpoint_path)
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
text_generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

texts = [
    "Frog in a Harry Potter costume",
    "Cat, art by ",
    "a photo of woman underwater",
    "thunderstorms on the alien planet, very shocking",
    "Standing on the land of a new planet, the Female astronaut dances",
    "The face of the cat woman, a face beautiful, young. The head is adorned with the Egyptian crown of Bastet.",
]

for text in texts:
    print(f"Input: {text}:")
    # do_sample=True is required when asking for more than one sequence
    out = text_generator(text, max_length=150, num_return_sequences=2, do_sample=True, temperature=1.0)
    print(f"Output 1: {out[0]['generated_text']}\n\n")
    print(f"Output 2: {out[1]['generated_text']}")
    print("\n")
Input: Frog in a Harry Potter costume:
Output 1: Frog in a Harry Potter costume, detailed with a touch of magical realism, highlight bulging eyes, slick skin, webbed feet, add atmospheric detail misty breath, dawn's first light at lily-covered pond, end with a nod to Gabriel García Márquez's wizarding world.
Output 2: Frog in a Harry Potter costume, detailed and exact, persona reminiscent of a dragon or wizard duel, setting for a graveyard, atmosphere charged with suspense and anticipation, mystical creatures looming, cinematic style emphasizing amphibious grace.