# CommentarySet
We provide both the code and the complete CommentarySet data.
## Data Structure

For training and validation, arrange the dataset and code in the following structure:
```
YOUR_MODEL_ROOT_DIRECTORY
├── data
│   ├── commentary
│   │   ├── atheletics_final.json
│   │   ├── basketball_final.json
│   │   └── ...
│   ├── video
│   │   ├── athletics
│   │   │   ├── 001
│   │   │   │   ├── 5.mp4
│   │   │   │   ├── 7.mp4
│   │   │   │   └── ...
│   │   │   └── ...
│   │   ├── basketball
│   │   │   ├── 001
│   │   │   └── ...
│   │   └── ...
│   ├── train.json
│   └── test.json
├── Video LLMs official code (e.g. VILA)
├── metric_six_dimensional.py
├── metric_traditional_gpt.py
├── metric_traditional.py
├── eval.sh
└── run.sh
```
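Once everything is in place, it can be handy to verify the layout before launching any jobs. The checker below is not part of the released code, just a convenience sketch; it only tests for the top-level paths named in the tree above.

```python
from pathlib import Path

# Top-level paths the training/evaluation code expects, relative to
# YOUR_MODEL_ROOT_DIRECTORY (taken from the directory tree above).
EXPECTED = [
    "data/commentary",
    "data/video",
    "data/train.json",
    "data/test.json",
    "run.sh",
    "eval.sh",
]

def missing_paths(root):
    """Return the expected paths that are absent under `root`."""
    root = Path(root)
    return [p for p in EXPECTED if not (root / p).exists()]

if __name__ == "__main__":
    for p in missing_paths("."):
        print(f"missing: {p}")
```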
## Code

The code includes:

- `run_exp.py` files for 8 baseline models, enabling the use of Video LLMs to generate commentary.
- Three metric scripts, `metric_six_dimensional.py`, `metric_traditional_gpt.py`, and `metric_traditional.py`, which implement the metrics described in our paper.
- Shell scripts `run.sh` and `eval.sh`, which can be used directly for model inference and result evaluation.
Here are some suggestions before running the code.

## Inference Process
1. **Download the official VideoLLM code**: download the official code for each VideoLLM and place the provided `run_exp.py` file at the path specified below. (You can also use other models, but you will need to write their `run_exp.py` yourself.)
   - Chat-UniVi: `./Chat-UniVi/`
   - InternVL 1.5: `./InternVL/VL1_5/`
   - InternVL 2.0: `./InternVL/VL2/`
   - Kangaroo: `./Kangaroo-main/`
   - LLaVA-NeXT: `./LLaVA-NeXT/playground/demo/`
   - LongVA: `./LongVA/`
   - Video-LLaVA: `./Video-LLaVA/videollava/serve/`
   - VILA: `./VILA/`
2. **Download the checkpoints**: download the checkpoints to the path given in each `run_exp.py` file.
3. **Set up the environment**: configure the environment following each model's official documentation.
4. **Set up `run.sh`**: configure the environment variables, `gpu_id`, and `Path_To_Your_Models_run_exp` in `run.sh` accordingly. Then start commentary generation with:

```bash
./run.sh
```
When inference finishes, the commentary generated by the model is saved to `./data/res/model_name/sports_name.json`.
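To see at a glance which models and sports have finished, you can walk the results folder. This helper is a sketch of ours, not part of the release; it assumes only the `./data/res/<model_name>/<sports_name>.json` layout described above.

```python
from pathlib import Path
from collections import defaultdict

def collect_results(res_root="./data/res"):
    """Map each model name to the per-sport result files found under it."""
    results = defaultdict(list)
    # Layout assumed: <res_root>/<model_name>/<sports_name>.json
    for f in sorted(Path(res_root).glob("*/*.json")):
        results[f.parent.name].append(f.name)
    return dict(results)
```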
## Evaluation Process
- **Configure the metrics**: set the `base_url` and `api_key` in the client at the beginning of `metric_six_dimensional.py`, `metric_traditional_gpt.py`, or `metric_traditional.py`.
- In `eval.sh`, lines 6-11 let you choose one of the three metrics by uncommenting the corresponding line.
- Set `model_name` in `eval.sh` to the name of the model you want to test; it must match the folder name under which the inference results are stored.
- Run the following command to start evaluation. The `eval.sh` script automatically evaluates the model commentary based on the list in `test.json`.

```bash
./eval.sh
```
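The actual scoring logic lives in the three metric scripts above. Purely as an illustration of what a "traditional" overlap-based text metric computes (this is not the repo's implementation, and the real metrics may differ substantially), a minimal unigram-precision score between a generated and a reference commentary looks like:

```python
from collections import Counter

def unigram_precision(candidate, reference):
    """Fraction of candidate tokens that also appear in the reference.

    Illustrative only: a clipped unigram precision in the BLEU-1 style,
    not the implementation used by metric_traditional.py.
    """
    cand = candidate.lower().split()
    ref = Counter(reference.lower().split())
    if not cand:
        return 0.0
    hits = 0
    for tok in cand:
        if ref[tok] > 0:  # clip: each reference token may be matched once
            ref[tok] -= 1
            hits += 1
    return hits / len(cand)
```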
## Fine-tuning on CommentarySet

After preparing the dataset in the structure described above, you can fine-tune on our `train.json`.
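Before launching fine-tuning, it is worth sanity-checking that the split file parses. The loader below is a hedged sketch: it assumes only that `train.json` is valid JSON whose top level is either a list of records or a dict of records, since the exact schema is defined by the released files.

```python
import json

def load_split(path):
    """Load a CommentarySet split file and return its entries as a list.

    Schema assumption (not guaranteed): the top level is either a JSON
    list of records or a dict mapping keys to records.
    """
    with open(path, encoding="utf-8") as f:
        data = json.load(f)
    return data if isinstance(data, list) else list(data.values())
```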
## Data

All data is provided in CommentarySet. The `commentary` folder contains the information for each clip, the `video` folder contains all clips in `.mp4` format, and `test.json` and `train.json` record the complete information for the two subsets.

When running the code, place the data in the root directory (`./data`).