Instructions to use UnfilteredAI/Promt-generator with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use UnfilteredAI/Promt-generator with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="UnfilteredAI/Promt-generator")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("UnfilteredAI/Promt-generator")
model = AutoModelForCausalLM.from_pretrained("UnfilteredAI/Promt-generator")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use UnfilteredAI/Promt-generator with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "UnfilteredAI/Promt-generator"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "UnfilteredAI/Promt-generator",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```

Use Docker:
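Because vLLM serves an OpenAI-compatible API, the same request can be made from Python. The sketch below is hypothetical (the helper name and its defaults are assumptions, not part of vLLM); it builds the request body used in the curl call, with the actual POST left commented out since it needs a running server at `localhost:8000`:

```python
import json

def build_completion_payload(model, prompt, max_tokens=512, temperature=0.5):
    """Assemble the JSON body expected by the /v1/completions endpoint."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

payload = build_completion_payload("UnfilteredAI/Promt-generator", "Once upon a time,")
print(json.dumps(payload, indent=2))

# To send it against a running vLLM server (stdlib only, no extra deps):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8000/v1/completions",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
```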
```shell
docker model run hf.co/UnfilteredAI/Promt-generator
```
- SGLang
How to use UnfilteredAI/Promt-generator with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "UnfilteredAI/Promt-generator" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "UnfilteredAI/Promt-generator",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```

Use Docker images:

```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "UnfilteredAI/Promt-generator" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "UnfilteredAI/Promt-generator",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```

- Docker Model Runner
How to use UnfilteredAI/Promt-generator with Docker Model Runner:
```shell
docker model run hf.co/UnfilteredAI/Promt-generator
```
Quick Links
Model Card: UnfilteredAI/Promt-generator
Model Overview
UnfilteredAI/Promt-generator is a text-generation model built specifically to write prompts for text-to-image models. It ships as PyTorch weights in the safetensors format, which keeps loading fast and storage compact, so it can be deployed and scaled easily for prompt-generation tasks.
Intended Use
This model is primarily intended for:
- Prompt generation for text-to-image models.
- Creative AI applications where generating high-quality, diverse image descriptions is critical.
- Supporting AI artists and developers working on generative art projects.
How to Use
To generate prompts using this model, follow these steps:
- Load the model in your PyTorch environment.
- Pass a short seed phrase, along with any decoding parameters, to the generation call.
- The model will return text descriptions based on the input, which can then be used with text-to-image models.
Example Code:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("UnfilteredAI/Promt-generator")
model = AutoModelForCausalLM.from_pretrained("UnfilteredAI/Promt-generator")

# Seed phrase the model will expand into a full text-to-image prompt
prompt = "a red car"
inputs = tokenizer(prompt, return_tensors="pt")

# Cap the output length; generate() defaults to a very short continuation
outputs = model.generate(**inputs, max_new_tokens=77)
generated_prompt = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_prompt)
```
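Note that the decoded string includes the seed phrase itself. A small helper (hypothetical, not part of the model card) can strip it before the result is handed to a text-to-image model:

```python
def strip_seed(generated: str, seed: str) -> str:
    """Remove the seed prompt from the front of the generated text, if present."""
    if generated.startswith(seed):
        return generated[len(seed):].lstrip(" ,")
    return generated

print(strip_seed("a red car, glossy paint, studio lighting", "a red car"))
# -> glossy paint, studio lighting
```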