## How to use with SGLang

### Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "UnfilteredAI/Promt-generator" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "UnfilteredAI/Promt-generator",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```
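Because the server exposes an OpenAI-compatible API, the same request can be made from Python. The sketch below uses only the standard library; the payload mirrors the curl call above, and `request_completion` assumes a server is already running on port 30000 (the function names here are illustrative, not part of SGLang's API).

```python
import json
import urllib.request

def build_completion_request(prompt: str, max_tokens: int = 512,
                             temperature: float = 0.5) -> dict:
    """Build a /v1/completions payload matching the curl example above."""
    return {
        "model": "UnfilteredAI/Promt-generator",
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

def request_completion(prompt: str,
                       base_url: str = "http://localhost:30000") -> str:
    """POST the payload to a running SGLang server; return the completion text."""
    data = json.dumps(build_completion_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/v1/completions",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["text"]
```

With the server up, `request_completion("Once upon a time,")` returns the generated continuation as a string.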
### Use Docker images
```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "UnfilteredAI/Promt-generator" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "UnfilteredAI/Promt-generator",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```
## Model Overview

The UnfilteredAI/Promt-generator is a text generation model designed specifically for creating prompts for text-to-image models. It leverages PyTorch and safetensors for optimized performance and storage, ensuring that it can be easily deployed and scaled for prompt generation tasks.

## Intended Use

This model is primarily intended for:

- Prompt generation for text-to-image models.
- Creative AI applications where generating high-quality, diverse image descriptions is critical.
- Supporting AI artists and developers working on generative art projects.

## How to Use

To generate prompts using this model, follow these steps:

  1. Load the model in your PyTorch environment.
  2. Input your desired parameters for the prompt generation task.
  3. The model will return text descriptions based on the input, which can then be used with text-to-image models.

Example code:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model weights from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("UnfilteredAI/Promt-generator")
model = AutoModelForCausalLM.from_pretrained("UnfilteredAI/Promt-generator")

# Encode a short seed phrase to expand into a full image prompt
prompt = "a red car"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate up to 64 new tokens; tune max_new_tokens, temperature, and
# sampling to taste
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True,
                         temperature=0.7)
generated_prompt = tokenizer.decode(outputs[0], skip_special_tokens=True)

print(generated_prompt)
```
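Note that the decoded output echoes the input prompt at the start. A small helper (illustrative only, not part of the model's API) can strip it off before the text is handed to a text-to-image model:

```python
def extract_generated_prompt(full_text: str, input_prompt: str) -> str:
    """Strip the echoed input prompt, plus leading punctuation, from the
    decoded model output, leaving only the newly generated text."""
    if full_text.startswith(input_prompt):
        full_text = full_text[len(input_prompt):]
    return full_text.lstrip(" ,.").strip()

# Example: drop the "a red car" seed from a decoded continuation
print(extract_generated_prompt(
    "a red car, glossy paint, studio lighting", "a red car"))
# glossy paint, studio lighting
```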
**Model size:** 0.6B parameters (F32 tensors, safetensors format)