
ViCo - Conversational Head Generation Challenge

[Homepage](https://vico-challenge.github.io/)

Data Download

Whole Set: OneDrive

Note: in the Train Set, the data in listening_head.zip is a superset of the data in talking_head.zip.

Guidelines

In the Train Set, the data for each track consists of three parts:

  • videos/*.mp4: all videos without audio track

  • audios/*.wav: all audios

  • *.csv: metadata for all videos/audios

    | Name                    | Type | Description                                  |
    |-------------------------|------|----------------------------------------------|
    | video_id                | str  | ID of the video                              |
    | uuid                    | str  | ID of the video sub-clip                     |
    | speaker_id              | int  | ID of the speaker                            |
    | listener_id (optional)  | int  | ID of the listener (only in listening_head)  |

    Given a uuid, the corresponding audio is audios/{uuid}.wav, the listener's video is videos/{uuid}.listener.mp4, and the speaker's video is videos/{uuid}.speaker.mp4.
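For illustration, here is a minimal sketch (not an official challenge script) of walking one track's metadata CSV and resolving the files for each clip; the extraction directory and CSV filename are assumptions:

```python
# Minimal sketch: resolve the audio/video files for each clip listed in a
# Train Set metadata CSV. Paths below are assumptions, not official names.
import csv
from pathlib import Path

root = Path("train/listening_head")             # hypothetical extraction directory

with open(root / "meta.csv", newline="") as f:  # hypothetical CSV filename
    for row in csv.DictReader(f):
        uuid = row["uuid"]
        audio = root / "audios" / f"{uuid}.wav"
        speaker_video = root / "videos" / f"{uuid}.speaker.mp4"
        listener_video = root / "videos" / f"{uuid}.listener.mp4"
        print(uuid, audio.exists(), speaker_video.exists(), listener_video.exists())
```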

The Validation Set is organized in the same way as the final Test Set, except for the output/ directory.

The inputs consist of these parts:

  • videos/*.mp4: speaker videos, only in listening_head
  • audios/*.wav: all audios
  • first_frames/*.jpg: first frames of expected listener/speaker videos
  • ref_images/(\d+).jpg: reference images, named by person ID
  • *.csv: metadata for all videos/audios, in the same format as the CSVs in the Train Set
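As a sketch of how these inputs fit together for one talking-head sample (the root directory, the sample IDs, and the choice of OpenCV/soundfile are illustrative, not challenge requirements):

```python
# Minimal sketch: load the inputs for one talking-head test sample.
from pathlib import Path

import cv2              # pip install opencv-python
import soundfile as sf  # pip install soundfile

root = Path("test")            # hypothetical extraction directory
uuid, speaker_id = "0001", 3   # hypothetical IDs taken from the CSV

first_frame = cv2.imread(str(root / "first_frames" / f"{uuid}.speaker.jpg"))
ref_image = cv2.imread(str(root / "ref_images" / f"{speaker_id}.jpg"))
audio, sample_rate = sf.read(str(root / "audios" / f"{uuid}.wav"))

print(first_frame.shape, ref_image.shape, audio.shape, sample_rate)
```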

Submission Format

For every sample in the test set, you must generate exactly one human face video.

  • Talking Head Generation: for each audio audios/{uuid}.wav, given the first frame of the result first_frames/{uuid}.speaker.jpg and the speaker's reference image ref_images/{speaker_id}.jpg, generate a head video named talkinghead_test_results/{uuid}.speaker.mp4 (see the sketch below).
  • Listening Head Generation: for each speaker video videos/{uuid}.speaker.mp4, given the first frame of the result first_frames/{uuid}.listener.jpg and the listener's reference image ref_images/{listener_id}.jpg, generate a listener head video named listeninghead_test_results/{uuid}.listener.mp4.
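Here is a minimal sketch of writing a generated clip to the required path for the talking-head track; the uuid, frame rate, frame size, and dummy frames are placeholders:

```python
# Minimal sketch: write generated frames to the required output filename.
from pathlib import Path

import cv2
import numpy as np

uuid = "0001"                                       # placeholder sample ID
frames = [np.zeros((256, 256, 3), np.uint8)] * 50   # placeholder BGR frames

out_dir = Path("talkinghead_test_results")
out_dir.mkdir(exist_ok=True)

writer = cv2.VideoWriter(
    str(out_dir / f"{uuid}.speaker.mp4"),
    cv2.VideoWriter_fourcc(*"mp4v"),   # assumed codec; any mp4-compatible one works
    25,                                # assumed frame rate
    (256, 256),                        # (width, height) of the frames
)
for frame in frames:
    writer.write(frame)
writer.release()
```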

All generated videos should be in .mp4 format and compressed into one [team_name]_(\d+).zip file per track. Due to file storage limitations, competitors are asked to upload their [team_name]_(\d+).zip file to an online storage service (e.g. OneDrive, Google Drive, Dropbox, Baidu Pan) and submit a public download link to our evaluation system.
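A minimal packaging sketch, assuming a hypothetical team name and submission index:

```python
# Minimal sketch: compress one track's results into [team_name]_(\d+).zip.
import zipfile
from pathlib import Path

team_name, submission_id = "myteam", 1  # placeholders

with zipfile.ZipFile(f"{team_name}_{submission_id}.zip", "w",
                     zipfile.ZIP_DEFLATED) as zf:
    for mp4 in sorted(Path("talkinghead_test_results").glob("*.mp4")):
        zf.write(mp4, arcname=f"talkinghead_test_results/{mp4.name}")
```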

Note: the evaluation results will be sent to the team captain via email within a few hours after the download link is submitted, and the leaderboard will be updated every 24 hours.

Evaluation Metrics and Ranking Rules

The quality of the generated videos will be quantitatively evaluated from the following perspectives:

  • generation quality (image level): SSIM, CPBD, PSNR
  • generation quality (feature level): FID
  • identity preservation: cosine similarity (ArcFace)
  • expression: L1 distance of 3DMM expression features
  • head motion: L1 distance of 3DMM angle & translation features
  • lip sync (speaker only): AV offset and AV confidence (SyncNet)
  • lip landmark distance: L1 distance of lip landmarks
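The official evaluation scripts are in the GitHub repo referenced below; as a rough illustration of the image-level metrics only, here is a sketch computing per-frame SSIM and PSNR between a generated video and its ground truth (filenames are placeholders):

```python
# Rough sketch (not the official script): per-frame SSIM and PSNR between
# a generated video and its ground-truth counterpart, on grayscale frames.
import cv2
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def gray_frames(path):
    """Yield grayscale frames from a video file."""
    cap = cv2.VideoCapture(path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        yield cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cap.release()

ssims, psnrs = [], []
for gen, ref in zip(gray_frames("generated.mp4"), gray_frames("ground_truth.mp4")):
    ssims.append(structural_similarity(gen, ref))
    psnrs.append(peak_signal_noise_ratio(ref, gen))

print(f"SSIM: {np.mean(ssims):.4f}  PSNR: {np.mean(psnrs):.2f} dB")
```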

Scripts can be accessed from this GitHub repo. The final ranking is based on the number of first-place finishes across all metrics. The top three teams (teams with the same number of Top-1 metrics are tied at the same place) will receive award certificates. Individuals/teams with top submissions or novel solutions will present their work at the ACM MM 2022 workshop. Besides the quantitative ranking, we will also ask experts (from the production and user-experience development areas) to select one team for the Best Visual Effects award.
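As a toy illustration of the ranking rule (the teams and scores are fabricated, and each metric's direction is marked explicitly):

```python
# Toy illustration of the ranking rule: count first-place finishes per team.
# Scores are fabricated; the boolean marks whether higher is better.
scores = {
    "SSIM": ({"A": 0.91, "B": 0.93, "C": 0.90}, True),
    "FID":  ({"A": 12.0, "B": 10.5, "C": 11.2}, False),
    "PSNR": ({"A": 29.1, "B": 28.7, "C": 29.5}, True),
}

top1 = {}
for by_team, higher_is_better in scores.values():
    pick = max if higher_is_better else min
    best = pick(by_team, key=by_team.get)
    top1[best] = top1.get(best, 0) + 1

# Teams with the same number of Top-1 metrics share the same place.
for team, wins in sorted(top1.items(), key=lambda kv: -kv[1]):
    print(team, wins)
```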

Competition Rules

  • Pre-trained models are allowed in the competition, and they must be publicly available when participants submit their results.
  • Participants are restricted to training their algorithms on the ViCo training set. Collecting additional data for the target identities is not allowed; collecting additional unlabeled data for pretraining is permitted. Please specify any and all external data used for training when uploading results.
  • Additional annotation of the provided training data is fine (e.g., bounding boxes, keypoints, etc.). Teams should specify that they collected additional annotations when submitting results.
  • We ask that all participants respect the spirit of the competition and do not cheat. Hand-crafting results is forbidden.
  • One account per participant. You cannot sign up from multiple accounts and therefore you cannot submit from multiple accounts.
  • No private sharing outside teams. Privately sharing code or data outside of teams is not permitted.

Terms and Conditions

Users of the dataset have requested permission to use the ViCo database. In exchange for such permission, they hereby agree to the following terms and conditions:

  • The database can only be used for non-commercial research and educational purposes.
  • The authors of the database make no representations or warranties regarding the Database, including but not limited to warranties of non-infringement or fitness for a particular purpose.
  • You accept full responsibility for your use of the Database and shall defend and indemnify the authors of ViCo against any and all claims arising from your use of the Database, including but not limited to your use of any copies of copyrighted images that you may create from the Database.
  • You may provide research associates and colleagues with access to the Database provided that they first agree to be bound by these terms and conditions.
  • If you are employed by a for-profit, commercial entity, your employer shall also be bound by these terms and conditions, and you hereby represent that you are authorized to enter into this agreement on behalf of such employer.

Citation

```bibtex
@article{zhou2021responsive,
  title={Responsive Listening Head Generation: A Benchmark Dataset and Baseline},
  author={Zhou, Mohan and Bai, Yalong and Zhang, Wei and Zhao, Tiejun and Mei, Tao},
  journal={arXiv preprint arXiv:2112.13548},
  year={2021}
}
```