Instructions for using trollek/SmolImagePromptHelper-135M with libraries, inference providers, notebooks, and local apps.
- Libraries
- Transformers
How to use trollek/SmolImagePromptHelper-135M with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="trollek/SmolImagePromptHelper-135M")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("trollek/SmolImagePromptHelper-135M")
model = AutoModelForCausalLM.from_pretrained("trollek/SmolImagePromptHelper-135M")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use trollek/SmolImagePromptHelper-135M with vLLM:
Install from pip and serve the model:
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "trollek/SmolImagePromptHelper-135M"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "trollek/SmolImagePromptHelper-135M",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
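The server exposes an OpenAI-compatible API, so you can also call it from Python with the openai client. A minimal sketch, assuming the default local endpoint started above; the dummy api_key is only a placeholder, since vLLM does not check it unless you configure one:

```python
# Query the local vLLM server through its OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # placeholder key

response = client.chat.completions.create(
    model="trollek/SmolImagePromptHelper-135M",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```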
- SGLang
How to use trollek/SmolImagePromptHelper-135M with SGLang:
Install from pip and serve the model:
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "trollek/SmolImagePromptHelper-135M" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "trollek/SmolImagePromptHelper-135M",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
Use Docker images
```sh
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "trollek/SmolImagePromptHelper-135M" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "trollek/SmolImagePromptHelper-135M",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
- Docker Model Runner
How to use trollek/SmolImagePromptHelper-135M with Docker Model Runner:
```sh
docker model run hf.co/trollek/SmolImagePromptHelper-135M
```
Smol Image Prompt Helper
This is meant to be a drop-in replacement for my last image prompt helper but with a new trick and a much smaller size. It achieves the following results on the evaluation set:
- Loss: 1.0077
Model description
Let's say you have a node in ComfyUI to parse JSON and send the appropriate prompt to the text encoders. Tadaaa:
```
You are an AI assistant tasked with expanding and formatting image prompts. You are given an input for which you will need to write image prompts for different text encoders.
Always respond with the following format:
{
  "clip_l": "<keywords from image analysis>",
  "clip_g": "<simple descriptions of the image>",
  "t5xxl": "<complex semantically rich description of the image>",
  "negative": "<contrasting keywords for what is not in the image>"
}
```
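Outside ComfyUI you can exercise the same contract directly: generate once, parse the JSON, and route each field to its encoder. A minimal sketch, assuming the system prompt above, a made-up user prompt, and an untuned token budget:

```python
# Generate the JSON prompt bundle and split it per text encoder.
import json

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "trollek/SmolImagePromptHelper-135M"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Condensed form of the system prompt shown above.
system_prompt = (
    "You are an AI assistant tasked with expanding and formatting image prompts. "
    "You are given an input for which you will need to write image prompts for "
    "different text encoders. Always respond with the following format:\n"
    '{"clip_l": "...", "clip_g": "...", "t5xxl": "...", "negative": "..."}'
)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "a lighthouse on a cliff at dusk"},  # made-up input
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=256)  # untuned budget
reply = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)

# Route each field to the matching encoder / negative input.
prompts = json.loads(reply)  # raises ValueError if the reply was truncated mid-JSON
print("CLIP-L:  ", prompts["clip_l"])
print("CLIP-G:  ", prompts["clip_g"])
print("T5-XXL:  ", prompts["t5xxl"])
print("negative:", prompts["negative"])
```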
Intended uses & limitations
Have a look at the dataset that I created, ImagePromptHelper-v02 (CC BY 4.0), and you will see whaaaaat I've doooone.
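If you want to poke at it from code, the datasets library will do. A sketch, assuming the dataset is published on the Hub as trollek/ImagePromptHelper-v02 (the repo id is inferred from the name above):

```python
# Peek at the training data with the datasets library.
from datasets import load_dataset

# Assumed Hub repo id, inferred from the dataset name in the card.
ds = load_dataset("trollek/ImagePromptHelper-v02", split="train")
print(ds)     # features and row count
print(ds[0])  # one example record
```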
Training procedure
I continued the pretraining with SDXL and Flux prompts and then SFT'd it on my own dataset.
Training hyperparameters
The following hyperparameters were used during training (see the TrainingArguments sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 1
- seed: 443
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 3
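Expressed as transformers TrainingArguments, the list above maps roughly to the sketch below. Only the values come from the list; output_dir and the rest of the trainer setup (model, data, collator) are assumptions. Note the arithmetic: 8 per device × 8 accumulation steps = an effective train batch size of 64.

```python
# Hedged sketch: the listed hyperparameters as transformers TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="SmolImagePromptHelper-135M",  # hypothetical
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=1,
    seed=443,
    gradient_accumulation_steps=8,  # 8 * 8 = total_train_batch_size of 64
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    num_train_epochs=3,
)
```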
Training results
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 1.1631 | 0.3966 | 500 | 1.2816 |
| 1.019 | 0.7932 | 1000 | 1.1431 |
| 0.9857 | 1.1896 | 1500 | 1.0818 |
| 1.0436 | 1.5862 | 2000 | 1.0459 |
| 0.9918 | 1.9827 | 2500 | 1.0235 |
| 0.9287 | 2.3791 | 3000 | 1.0114 |
| 0.9205 | 2.7757 | 3500 | 1.0079 |
Framework versions
- Transformers 4.50.0
- Pytorch 2.6.0+cu126
- Datasets 3.4.1
- Tokenizers 0.21.0