Instructions for using neoALI/ARC-Hunyuan-Video-7B-Emotion with libraries, notebooks, and local apps. The sections below cover each option.
- Libraries
- PEFT
How to use neoALI/ARC-Hunyuan-Video-7B-Emotion with PEFT:
from peft import PeftModel
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("TencentARC/ARC-Hunyuan-Video-7B")
model = PeftModel.from_pretrained(base_model, "neoALI/ARC-Hunyuan-Video-7B-Emotion")
- Transformers
How to use neoALI/ARC-Hunyuan-Video-7B-Emotion with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="neoALI/ARC-Hunyuan-Video-7B-Emotion")

# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("neoALI/ARC-Hunyuan-Video-7B-Emotion", dtype="auto")
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use neoALI/ARC-Hunyuan-Video-7B-Emotion with vLLM:
Install from pip and serve the model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "neoALI/ARC-Hunyuan-Video-7B-Emotion"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "neoALI/ARC-Hunyuan-Video-7B-Emotion",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
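The same OpenAI-compatible endpoint can also be called from Python. This is a minimal sketch, assuming the openai client package (v1+) is installed and the server started above is listening on localhost:8000; the api_key value is only a placeholder, since vLLM does not require one by default.
from openai import OpenAI

# Point the client at the local vLLM server (OpenAI-compatible API)
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # api_key is a placeholder

completion = client.completions.create(
    model="neoALI/ARC-Hunyuan-Video-7B-Emotion",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)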
- SGLang
How to use neoALI/ARC-Hunyuan-Video-7B-Emotion with SGLang:
Install from pip and serve the model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "neoALI/ARC-Hunyuan-Video-7B-Emotion" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "neoALI/ARC-Hunyuan-Video-7B-Emotion",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
Use Docker images
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "neoALI/ARC-Hunyuan-Video-7B-Emotion" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "neoALI/ARC-Hunyuan-Video-7B-Emotion",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
- Docker Model Runner
How to use neoALI/ARC-Hunyuan-Video-7B-Emotion with Docker Model Runner:
docker model run hf.co/neoALI/ARC-Hunyuan-Video-7B-Emotion
ARC-Hunyuan-Video-7B-Emotion
A fine-tuned version of TencentARC/ARC-Hunyuan-Video-7B specialized for emotion classification in videos.
Model Description
This model is a LoRA adapter fine-tuned on the ARC-Hunyuan-Video-7B base model for emotion classification tasks.
Key Features:
- Task: Video emotion classification
- Base Model: ARC-Hunyuan-Video-7B (7B parameters)
- Training Method: LoRA (Low-Rank Adaptation)
- Special Feature: Trained using LLM-generated feature descriptions of videos, enabling better understanding of emotional content
Model Details
- Developed by: NEOALI
- Model type: Video-language model with LoRA adapter
- Language(s): English and Chinese
- License: MIT
- Fine-tuned from: TencentARC/ARC-Hunyuan-Video-7B
Training Details
- Training regime: LoRA fine-tuning (a configuration sketch follows this list)
- LoRA rank: 8
- LoRA alpha: 8
- Training data: Videos with LLM-generated emotional feature descriptions
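The card only reports the rank and alpha, so the following is a hypothetical sketch of how an adapter with these hyperparameters could be set up with peft; the dropout and target modules are assumptions, not the published training configuration.
from peft import LoraConfig, get_peft_model
from transformers import AutoModel

# Rank and alpha are taken from this card; everything else is an assumption.
lora_config = LoraConfig(
    r=8,                # LoRA rank (from the card)
    lora_alpha=8,       # LoRA alpha (from the card)
    lora_dropout=0.05,  # assumption: not reported
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
)

base_model = AutoModel.from_pretrained("TencentARC/ARC-Hunyuan-Video-7B")
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()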
Usage
Requirements
pip install torch transformers peft
Loading the Model
from transformers import AutoModel, AutoTokenizer
from peft import PeftModel
# Load base model
base_model = AutoModel.from_pretrained("TencentARC/ARC-Hunyuan-Video-7B")
# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "neoALI/ARC-Hunyuan-Video-7B-Emotion")
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained("TencentARC/ARC-Hunyuan-Video-7B")
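Optionally, the LoRA weights can be merged into the base model for standalone inference or re-export. This is a minimal sketch using peft's merge_and_unload on the model loaded above; the output directory name is only an example.
# Merge the adapter into the base weights (returns a plain model without the PEFT wrapper)
merged_model = model.merge_and_unload()

# Example output path; save the merged weights and tokenizer together
merged_model.save_pretrained("arc-hunyuan-video-7b-emotion-merged")
tokenizer.save_pretrained("arc-hunyuan-video-7b-emotion-merged")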
Intended Use
This model is designed for:
- Emotion classification in short videos (up to 5 minutes)
- Understanding emotional content in user-generated videos
- Video content analysis requiring emotional intelligence
Limitations
- Inherits limitations from the base ARC-Hunyuan-Video-7B model
- Best performance on videos up to 5 minutes in length
- Optimized for emotion classification; may require additional fine-tuning for other tasks
Acknowledgements
This model is built upon ARC-Hunyuan-Video-7B by TencentARC. We thank the original authors for their excellent work.