Instructions for using CCCCCC/VPO-5B with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use CCCCCC/VPO-5B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="CCCCCC/VPO-5B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("CCCCCC/VPO-5B")
model = AutoModelForCausalLM.from_pretrained("CCCCCC/VPO-5B")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use CCCCCC/VPO-5B with vLLM:
Install from pip and serve the model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "CCCCCC/VPO-5B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "CCCCCC/VPO-5B",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker
```sh
docker model run hf.co/CCCCCC/VPO-5B
```
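Since the vLLM server exposes an OpenAI-compatible API, you can also query it from Python with the official openai client. A minimal sketch (the `api_key` is a dummy value, as the local server does not check it):

```python
# Query the locally served model through the OpenAI-compatible endpoint.
# Assumes `pip install openai` and a vLLM server running on localhost:8000.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="CCCCCC/VPO-5B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```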
- SGLang
How to use CCCCCC/VPO-5B with SGLang:
Install from pip and serve the model
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "CCCCCC/VPO-5B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "CCCCCC/VPO-5B",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker images
```sh
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "CCCCCC/VPO-5B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "CCCCCC/VPO-5B",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
- Docker Model Runner
How to use CCCCCC/VPO-5B with Docker Model Runner:
```sh
docker model run hf.co/CCCCCC/VPO-5B
```
VPO: Aligning Text-to-Video Generation Models with Prompt Optimization
- Repository: https://github.com/thu-coai/VPO
- Paper: VPO: Aligning Text-to-Video Generation Models with Prompt Optimization
- Data: https://huggingface.co/datasets/CCCCCC/VPO
VPO
VPO is a prompt optimization framework grounded in three principles: harmlessness, accuracy, and helpfulness. It employs a two-stage process: it first constructs a supervised fine-tuning dataset guided by safety and alignment, then conducts preference learning with both text-level and video-level feedback. As a result, VPO preserves user intent while enhancing video quality and safety.
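For intuition only, here is a minimal sketch of what such a two-stage recipe can look like using the trl library. The trainer classes stand in for the two stages; the base model, dataset splits, and column layout below are assumptions for illustration, not the authors' actual training code (see the repository for that):

```python
# Hypothetical two-stage sketch (SFT, then DPO-style preference learning).
# Base model, splits, and columns are assumptions, not VPO's actual setup.
from datasets import load_dataset
from transformers import AutoTokenizer
from trl import DPOConfig, DPOTrainer, SFTConfig, SFTTrainer

base_model = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed base model
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Stage 1: supervised fine-tuning on safety- and alignment-curated
# (user query -> video prompt) pairs.
sft_data = load_dataset("CCCCCC/VPO", split="train")  # column names assumed
sft_trainer = SFTTrainer(
    model=base_model,
    train_dataset=sft_data,
    args=SFTConfig(output_dir="vpo-sft"),
)
sft_trainer.train()

# Stage 2: preference learning on chosen/rejected prompt pairs built from
# text-level and video-level feedback.
pref_data = load_dataset("CCCCCC/VPO", split="train")  # preference split assumed
dpo_trainer = DPOTrainer(
    model="vpo-sft",  # start from the stage-1 checkpoint
    args=DPOConfig(output_dir="vpo-dpo"),
    train_dataset=pref_data,
    processing_class=tokenizer,
)
dpo_trainer.train()
```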
Model Details
Video Generation Model
This model is trained to optimize user prompts for CogVideoX-5B. VPO-2B is the corresponding optimizer for CogVideoX-2B.
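For context, a minimal sketch of the intended downstream use: feeding a VPO-optimized prompt into CogVideoX-5B through the diffusers library. The example prompt and generation settings below are illustrative, not tuned values:

```python
# Generate a video from a VPO-optimized prompt with CogVideoX-5B.
# Assumes a recent diffusers release with CogVideoX support.
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16)
pipe.to("cuda")

# In practice this string comes from the VPO-5B prompt optimizer (see Inference code below).
optimized_prompt = "A cute golden retriever puppy bounding across a sunlit grassy meadow..."

video = pipe(prompt=optimized_prompt, num_frames=49, guidance_scale=6.0).frames[0]
export_to_video(video, "output.mp4", fps=8)
```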
Data
Our dataset can be found at https://huggingface.co/datasets/CCCCCC/VPO.
Language
English
Intended Use
Prompt Template
We adopt the following prompt template:
```
In this task, your goal is to expand the user's short query into a detailed and well-structured English prompt for generating short videos.
Please ensure that the generated video prompt adheres to the following principles:
1. **Harmless**: The prompt must be safe, respectful, and free from any harmful, offensive, or unethical content.
2. **Aligned**: The prompt should fully preserve the user's intent, incorporating all relevant details from the original query while ensuring clarity and coherence.
3. **Helpful for High-Quality Video Generation**: The prompt should be descriptive and vivid to facilitate high-quality video creation. Keep the scene feasible and well-suited for a brief duration, avoiding unnecessary complexity or unrealistic elements not mentioned in the query.
User Query:{user prompt}
Video Prompt:
```
Inference code
Here is example code for inference:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = 'CCCCCC/VPO-5B'

prompt_template = """In this task, your goal is to expand the user's short query into a detailed and well-structured English prompt for generating short videos.
Please ensure that the generated video prompt adheres to the following principles:
1. **Harmless**: The prompt must be safe, respectful, and free from any harmful, offensive, or unethical content.
2. **Aligned**: The prompt should fully preserve the user's intent, incorporating all relevant details from the original query while ensuring clarity and coherence.
3. **Helpful for High-Quality Video Generation**: The prompt should be descriptive and vivid to facilitate high-quality video creation. Keep the scene feasible and well-suited for a brief duration, avoiding unnecessary complexity or unrealistic elements not mentioned in the query.
User Query:{}
Video Prompt:"""

device = 'cuda:0'

# Load the model in fp16 for inference
model = AutoModelForCausalLM.from_pretrained(model_path).half().eval().to(device)
# For 8-bit loading instead:
# model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device, load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_path)

text = "a cute dog on the grass"
messages = [{'role': 'user', 'content': prompt_template.format(text)}]
model_inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=True, return_tensors="pt").to(device)

output = model.generate(model_inputs, max_new_tokens=1024, do_sample=True, top_p=1.0, temperature=0.7, num_beams=1)

# Keep only the assistant's reply (Llama-3-style chat markers)
resp = tokenizer.decode(output[0]).split('<|start_header_id|>assistant<|end_header_id|>')[1].split('<|eot_id|>')[0].strip()
print(resp)
```
See our GitHub repo for more detailed usage (e.g., inference with vLLM).
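For reference, here is a minimal sketch of offline inference with vLLM's Python API. Sampling settings mirror the Transformers example above, and `prompt_template` is the template string defined there; consult the repo for the authors' exact setup:

```python
# Offline prompt optimization with vLLM (requires `pip install vllm`).
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

model_path = "CCCCCC/VPO-5B"
tokenizer = AutoTokenizer.from_pretrained(model_path)
llm = LLM(model=model_path)

text = "a cute dog on the grass"
# Render the chat-formatted prompt as a plain string for vLLM.
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt_template.format(text)}],
    add_generation_prompt=True,
    tokenize=False,
)
params = SamplingParams(temperature=0.7, top_p=1.0, max_tokens=1024)
outputs = llm.generate([prompt], params)
print(outputs[0].outputs[0].text.strip())
```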