CommentarySet

We provide the complete CommentarySet data together with the accompanying code.

Data Structure:

For training and validation, arrange the dataset and code in the following structure:

  • YOUR_MODEL_ROOT_DIRECTORY
    • data
      • commentary
        • atheletics_final.json
        • basketball_final.json
        • ...
      • video
        • athletics
          • 001
            • 5.mp4
            • 7.mp4
            • ...
          • ...
        • basketball
        • ...
      • train.json
      • test.json
    • Video LLM official code (e.g., VILA)
    • metric_six_dimensional.py
    • metric_traditional_gpt.py
    • metric_traditional.py
    • eval.sh
    • run.sh
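
Before training, it can help to verify that the tree above is in place. The snippet below is a hypothetical helper, not part of the released code; the paths mirror the structure shown, and train.json/test.json are assumed to be plain JSON files.

```python
import json
from pathlib import Path

# Paths expected under the model root directory, per the tree above.
EXPECTED = [
    "data/commentary",
    "data/video",
    "data/train.json",
    "data/test.json",
]

def missing_paths(root):
    """Return the expected data paths that are absent under `root`."""
    root = Path(root)
    return [p for p in EXPECTED if not (root / p).exists()]

def load_split(root, split="train"):
    """Load train.json or test.json (assumed to be standard JSON)."""
    with open(Path(root) / "data" / f"{split}.json") as f:
        return json.load(f)
```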

Code

The code includes:

  1. run_exp.py files for 8 baseline models, enabling each VideoLLM to generate commentary.
  2. Three metric scripts, metric_six_dimensional.py, metric_traditional_gpt.py, and metric_traditional.py, which implement the metrics described in our paper.
  3. Shell scripts: run.sh and eval.sh, which can be used directly for model inference and result evaluation.

Here are some suggestions before running the code.

Inference Process

  1. Download the Official VideoLLM Code: Download the official code for the corresponding VideoLLM models, and place the provided run_exp.py file at the path specified below. (You can also use other models, but you will need to create their run_exp.py.)

    • Chat-UniVi: ./Chat-UniVi/
    • InternVL 1.5: ./InternVL/VL1_5/
    • InternVL 2.0: ./InternVL/VL2/
    • Kangaroo: ./Kangaroo-main/
    • LLaVA-NeXT: ./LLaVA-NeXT/playground/demo/
    • LongVA: ./LongVA/
    • Video-LLaVA: ./Video-LLaVA/videollava/serve/
    • VILA: ./VILA/
  2. Download the Checkpoints: Download the checkpoints to the path given in each run_exp.py file.

  3. Set Up the Environment: Configure the environment based on the information provided by the official model documentation.

  4. Set Up run.sh: Configure the environment variables, gpu_id, and Path_To_Your_Models_run_exp in run.sh accordingly.

  5. After that, generate the commentary by running:

./run.sh
  6. When the run completes, the commentary generated by the model will be saved in
./data/res/model_name/sports_name.json.
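
The per-model run_exp.py scripts differ, but the overall loop has roughly the shape sketched below. This is an assumption-laden sketch, not the released code: `generate_fn` stands in for the actual VideoLLM call, and the `sport`/`video` field names are guesses at the test.json schema.

```python
import json
import os

def run_inference(test_list_path, model_name, generate_fn, out_dir="./data/res"):
    """Sketch of a run_exp.py inference loop.

    `generate_fn(video_path)` is a placeholder for the actual VideoLLM call;
    the `sport` and `video` field names are assumptions about test.json.
    """
    with open(test_list_path) as f:
        clips = json.load(f)
    by_sport = {}
    for clip in clips:
        commentary = generate_fn(clip["video"])
        by_sport.setdefault(clip["sport"], []).append(
            {"video": clip["video"], "commentary": commentary}
        )
    # Group results per sport, matching ./data/res/model_name/sports_name.json.
    model_dir = os.path.join(out_dir, model_name)
    os.makedirs(model_dir, exist_ok=True)
    for sport, results in by_sport.items():
        with open(os.path.join(model_dir, f"{sport}.json"), "w") as f:
            json.dump(results, f, indent=2, ensure_ascii=False)
```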

Evaluation Process

  1. Configure Metrics: Set the base_url and api_key in the client at the beginning of each metric script (metric_six_dimensional.py, metric_traditional_gpt.py, metric_traditional.py).
  2. In eval.sh, lines 6-11 let you choose one of the three metrics by uncommenting the corresponding line.
  3. Set model_name in eval.sh to the name of the model you want to test (it should match the name of the folder where the inference results are stored).
  4. Run the following command to start the evaluation process. The eval.sh script will automatically evaluate the model commentary based on the list in test.json.
./eval.sh
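
For reference, the client configuration at the top of each metric script typically looks like the fragment below. This assumes an OpenAI-compatible client; the exact variable names in the released scripts may differ, and both values are placeholders.

```python
# Top of metric_six_dimensional.py / metric_traditional_gpt.py /
# metric_traditional.py: fill in your endpoint and key before running eval.sh.
# (Assumes an OpenAI-compatible client; adapt to the actual script contents.)
from openai import OpenAI

client = OpenAI(
    base_url="https://YOUR_ENDPOINT/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",               # placeholder key
)
```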

Fine-tuning on CommentarySet:

After preparing the dataset in the structure described above, you can fine-tune on our train.json.

Data

We provide all the data in CommentarySet. The commentary folder contains information about each clip, the video folder includes all clips in .mp4 format, and test.json and train.json record the complete information for the two subsets. When running the code, place the data in the root directory (./data).
