Instructions to use QuantFactory/TITPOP-200M-dev-GGUF with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use QuantFactory/TITPOP-200M-dev-GGUF with Transformers:
```
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="QuantFactory/TITPOP-200M-dev-GGUF")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```
```
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("QuantFactory/TITPOP-200M-dev-GGUF", dtype="auto")
```
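Since this repository hosts GGUF quantizations rather than safetensors, Transformers needs to be told which GGUF file to dequantize. A minimal sketch, assuming a recent Transformers release with GGUF support (and the `gguf` package installed), using the Q2_K file name that appears in the llama-cpp-python snippet below:

```
# Load one specific GGUF quant with Transformers; the file is dequantized on load.
# Assumes transformers >= 4.41 and the `gguf` package are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "QuantFactory/TITPOP-200M-dev-GGUF"
gguf_file = "TITPOP-200M-dev.Q2_K.gguf"  # file name from the snippet below

tokenizer = AutoTokenizer.from_pretrained(repo, gguf_file=gguf_file)
model = AutoModelForCausalLM.from_pretrained(repo, gguf_file=gguf_file)

inputs = tokenizer("quality: masterpiece\n", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```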
- llama-cpp-python
How to use QuantFactory/TITPOP-200M-dev-GGUF with llama-cpp-python:
```
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/TITPOP-200M-dev-GGUF",
    filename="TITPOP-200M-dev.Q2_K.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use QuantFactory/TITPOP-200M-dev-GGUF with llama.cpp:
Install from brew
```
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf QuantFactory/TITPOP-200M-dev-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf QuantFactory/TITPOP-200M-dev-GGUF:Q4_K_M
```
Install from WinGet (Windows)
```
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf QuantFactory/TITPOP-200M-dev-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf QuantFactory/TITPOP-200M-dev-GGUF:Q4_K_M
```
Use pre-built binary
```
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf QuantFactory/TITPOP-200M-dev-GGUF:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf QuantFactory/TITPOP-200M-dev-GGUF:Q4_K_M
```
Build from source code
```
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf QuantFactory/TITPOP-200M-dev-GGUF:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf QuantFactory/TITPOP-200M-dev-GGUF:Q4_K_M
```
Use Docker
```
docker model run hf.co/QuantFactory/TITPOP-200M-dev-GGUF:Q4_K_M
```
- LM Studio
- Jan
- vLLM
How to use QuantFactory/TITPOP-200M-dev-GGUF with vLLM:
Install from pip and serve model
```
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "QuantFactory/TITPOP-200M-dev-GGUF"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "QuantFactory/TITPOP-200M-dev-GGUF",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
Use Docker
```
docker model run hf.co/QuantFactory/TITPOP-200M-dev-GGUF:Q4_K_M
```
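Note that vLLM's GGUF support is experimental and usually expects a path to a single local .gguf file plus the original model's tokenizer, rather than a repo id. A hedged sketch of that workflow; the Q4_K_M file name here is an assumption inferred from the quant tags used above, so adjust it to a file that actually exists in the repo:

```
# Download one quant file, then point vLLM at it directly.
huggingface-cli download QuantFactory/TITPOP-200M-dev-GGUF \
  TITPOP-200M-dev.Q4_K_M.gguf --local-dir .

# GGUF support in vLLM is experimental; the base repo supplies the tokenizer.
vllm serve ./TITPOP-200M-dev.Q4_K_M.gguf \
  --tokenizer KBlueLeaf/TITPOP-200M-dev
```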
- SGLang
How to use QuantFactory/TITPOP-200M-dev-GGUF with SGLang:
Install from pip and serve model
```
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "QuantFactory/TITPOP-200M-dev-GGUF" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "QuantFactory/TITPOP-200M-dev-GGUF",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
Use Docker images
```
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "QuantFactory/TITPOP-200M-dev-GGUF" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "QuantFactory/TITPOP-200M-dev-GGUF",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
- Ollama
How to use QuantFactory/TITPOP-200M-dev-GGUF with Ollama:
```
ollama run hf.co/QuantFactory/TITPOP-200M-dev-GGUF:Q4_K_M
```
- Unsloth Studio
How to use QuantFactory/TITPOP-200M-dev-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for QuantFactory/TITPOP-200M-dev-GGUF to start chatting
```
Install Unsloth Studio (Windows)
```
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for QuantFactory/TITPOP-200M-dev-GGUF to start chatting
```
Using HuggingFace Spaces for Unsloth
```
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for QuantFactory/TITPOP-200M-dev-GGUF to start chatting
```
- Docker Model Runner
How to use QuantFactory/TITPOP-200M-dev-GGUF with Docker Model Runner:
```
docker model run hf.co/QuantFactory/TITPOP-200M-dev-GGUF:Q4_K_M
```
- Lemonade
How to use QuantFactory/TITPOP-200M-dev-GGUF with Lemonade:
Pull the model
```
# Download Lemonade from https://lemonade-server.ai/
lemonade pull QuantFactory/TITPOP-200M-dev-GGUF:Q4_K_M
```
Run and chat with the model
```
lemonade run user.TITPOP-200M-dev-GGUF-Q4_K_M
```
List all available models
```
lemonade list
```
QuantFactory/TITPOP-200M-dev-GGUF
This is a quantized version of KBlueLeaf/TITPOP-200M-dev, created using llama.cpp.
Original Model Card
[WIP] TITPOP
What is this
TITPOP is a tool to extend, generate, and refine input prompts for T2I models.
It works on both Danbooru tags and natural language, which means you can use it with almost all existing T2I models.
You can think of it as the "pro max" version of DTG.
Training Details
- Model Arch: LLaMA
- Size: 200M parameters
- Training Data:
- Danbooru Metadata: 7.8M entries
- CC12M/GBC10M: around 11M entries
- Coyo11M: around 11M entries
- Training Procedure:
- Danbooru + cc12m: 5 epochs
- Danbooru: 1 epoch
- Danbooru + cc12m + coyo11m: 3 epochs (currently 2 epochs, still training)
- Tokens Seen: currently 35B tokens
- Training Time: around 2~3 weeks on 4x RTX 3090
How to use this model?
Although the official inference code, with lots of formatting and automatic features, is private for now, you can still build your own inference interface based on the format below (a llama-cpp-python sketch follows the example output):
```
quality: masterpiece
aspect ratio: 1.0
target: <|short|> <|tag_to_long|>
tag: 1girl, solo, dragon girl, dragon horns, dragon tail
```
Then you will get output like:
```
quality: masterpiece
aspect ratio: 1.0
target: <|short|> <|tag_to_long|>
tag: 1girl, solo, dragon girl, dragon horns, dragon tail, smile, ponytail, cleavage cutout, pointy ears, large breasts, black dress, white background, thighhighs, bare shoulders, tail, breasts, clothing cutout, simple background, blonde hair, long hair, blue eyes, looking at viewer, horns,
long: A young woman with blonde hair and cat ears on her head. she is wearing a black outfit with gold accents and has a sword in her right hand. the woman is sitting on top of a large orange snake that is coiled around her body. the snake appears to be attacking her, as if it is attacking her.
```
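Here is the sketch mentioned above: a minimal custom interface built on llama-cpp-python, feeding the documented plain-text format to the model as a raw completion prompt. The GGUF file name comes from this repo; the sampling settings are assumptions, not the official pipeline's:

```
# Minimal TITPOP prompt builder + raw completion via llama-cpp-python.
# Prompt fields mirror the documented format; sampling values are guesses.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/TITPOP-200M-dev-GGUF",
    filename="TITPOP-200M-dev.Q2_K.gguf",
)

prompt = (
    "quality: masterpiece\n"
    "aspect ratio: 1.0\n"
    "target: <|short|> <|tag_to_long|>\n"
    "tag: 1girl, solo, dragon girl, dragon horns, dragon tail"
)

# Raw text completion (not chat): the model continues the tag list and
# then emits the "long:" natural-language caption.
out = llm(prompt, max_tokens=512, temperature=0.8)
print(prompt + out["choices"][0]["text"])
```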
All supported modes are:
```
None        # Tags only, DTG mode
tag_to_long
long_to_tag
short_to_long
short_to_tag
tag_to_short_to_long
short_to_tag_to_long
short_to_long_to_tag
```
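Only the `<|short|>` and `<|tag_to_long|>` target tokens are confirmed by the example above. A hypothetical helper for the other modes, assuming each mode name maps to a `<|mode_name|>` special token in the same way (this naming convention is an assumption, not confirmed by the card):

```
# Hypothetical builder for the "target:" line of the TITPOP format.
# Only <|short|> and <|tag_to_long|> are confirmed; the other <|mode|>
# token names assume the same convention as tag_to_long.
from typing import Optional

MODES = (
    None,                    # tags only, DTG mode
    "tag_to_long",
    "long_to_tag",
    "short_to_long",
    "short_to_tag",
    "tag_to_short_to_long",
    "short_to_tag_to_long",
    "short_to_long_to_tag",
)

def target_line(mode: Optional[str], length: str = "short") -> str:
    """Build a target line such as 'target: <|short|> <|tag_to_long|>'."""
    if mode is None:
        return f"target: <|{length}|>"  # DTG-style tags-only form (assumed)
    return f"target: <|{length}|> <|{mode}|>"

# Reproduces the documented target line:
assert target_line("tag_to_long") == "target: <|short|> <|tag_to_long|>"
```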
Brief Explanation of Possible "Weird" Output
The model is trained on what we used for training T2I models, which is basically captions from VLMs.
Since these VLMs have lots of different hallucinations, this project will also generate some content that "looks like hallucination".
But since the T2I models we want to use were also trained on this kind of data, they can still generate decent images, or even better ones.
For example:
- Lots of animal ear/horn features are captioned as "cat ears" by most VLMs, including GPT-4o and Claude 3.5 Sonnet.
So if you get some weird output that seems to conflict with the tags, try generating an image from it first.
You should treat the natural language part as "different English", since that's what we currently use for T2I...
Why is the inference code private? When will it be open sourced?
- This model/tool is still under development; it is currently an early alpha version.
- I'm doing some research and projects based on it.
- The model is currently released under the CC-BY-NC-ND license. If you are interested, you can implement inference yourself.
- Once the projects/research are done, I will open source all these models and code under the Apache 2.0 license.
Citation
```
@misc{TITPOP2024,
  author       = {Shih-Ying Yeh},
  title        = {TITPOP: Text to Image with Text Presampling for Optimal Prompting},
  howpublished = {\url{https://huggingface.co/KBlueLeaf/TITPOP-200M-dev}},
  year         = {2024},
  note         = {Still under development},
}
```
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit.