---
license: mit
language:
  - en
pretty_name: UniTalk
size_categories:
  - 1M<n<10M
configs:
  - config_name: default
    data_files:
      - split: train
        path:
          - csv/train/*.csv
      - split: test
        path:
          - csv/val/*.csv
---

Data storage for the Active Speaker Detection Dataset: UniTalk

Le Thien Phuc Nguyen*, Zhuoran Yu*, Khoa Cao Quang Nhat, Yuwei Guo, Tu Ho Manh Pham, Tuan Tai Nguyen, Toan Ngo Duc Vo, Lucas Poon, Soochahn Lee, Yong Jae Lee

(* Equal Contribution)

Storage Structure

Since the dataset is large and complex, we zip each video_id folder and store the archives on Hugging Face.

Here is the raw structure on Hugging Face:

root/
├── csv/
│   ├── val/
│   │   ├── video_id1.csv
│   │   └── video_id2.csv
│   └── train/
│       ├── video_id1.csv
│       └── video_id2.csv
├── clips_audios/
│   ├── train/
│   │   ├── <video_id>1.zip
│   │   └── <video_id>2.zip
│   └── val/
│       └── <video_id>.zip
└── clips_videos/
    ├── train/
    │   ├── <video_id>1.zip
    │   └── <video_id>2.zip
    └── val/
        └── <video_id>.zip
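The download script below handles extraction for you, but if you fetch the zips manually, each archive can be expanded with Python's zipfile module. A minimal, self-contained sketch (the video id, entity id, and file contents here are fabricated stand-ins so the example runs without the real data):

```python
import tempfile
import zipfile
from pathlib import Path

root = Path(tempfile.mkdtemp())

# Stand-in for a downloaded clips_audios/train/<video_id>.zip archive.
# The zip contains <video_id>/<entity_id>.wav, as described above.
zip_path = root / "demo_video.zip"
with zipfile.ZipFile(zip_path, "w") as zf:
    zf.writestr("demo_video/demo_video:0.wav", b"fake wav bytes")

# Extract into the split folder so the result matches the local layout:
# clips_audios/train/<video_id>/<entity_id>.wav
out_dir = root / "clips_audios" / "train"
with zipfile.ZipFile(zip_path) as zf:
    zf.extractall(out_dir)

names = sorted(p.name for p in (out_dir / "demo_video").iterdir())
print(names)  # ['demo_video:0.wav']
```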

Download the dataset

You can use the code provided at https://github.com/plnguyen2908/UniTalk-ASD-code/tree/main (the repo's URL is also given in the paper). You just need to clone the repo, install pandas, and run the following, which takes around 800-900 seconds:

python download_dataset.py --save_path /path/to/the/dataset

After running that script, the structure of the dataset in the local machine is:

root/
├── csv/
│   ├── val_orig.csv
│   └── train_orig.csv
├── clips_audios/
│   ├── train/
│   │   └── <video_id>/
│   │       └── <entity_id>.wav
│   └── val/
│       └── <video_id>/
│           └── <entity_id>.wav
└── clips_videos/
    ├── train/
    │   └── <video_id>/
    │       └── <entity_id>/
    │           ├── <time>.jpg (face)
    │           └── <time>.jpg (face)
    └── val/
        └── <video_id>/
            └── <entity_id>/
                ├── <time>.jpg (face)
                └── <time>.jpg (face)
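Once extracted, the layout above is easy to index with pathlib. A minimal sketch that maps each entity_id to its sorted frame timestamps; the directory and file names here are fabricated so the example is self-contained:

```python
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())

# Fabricate a tiny tree matching clips_videos/<split>/<video_id>/<entity_id>/<time>.jpg
for ts in ("0.04", "0.08"):
    p = root / "clips_videos" / "train" / "vid1" / "vid1:0" / f"{ts}.jpg"
    p.parent.mkdir(parents=True, exist_ok=True)
    p.touch()

def index_split(root: Path, split: str) -> dict:
    """Map entity_id -> sorted list of frame timestamps (seconds)."""
    index = {}
    # Two levels below the split folder: <video_id>/<entity_id>
    for entity_dir in (root / "clips_videos" / split).glob("*/*"):
        frames = sorted(float(f.stem) for f in entity_dir.glob("*.jpg"))
        index[entity_dir.name] = frames
    return index

idx = index_split(root, "train")
print(idx)  # {'vid1:0': [0.04, 0.08]}
```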

Exploring the dataset

  • Inside the csv folder, there are 2 csv files, one for training and one for testing. In each csv file, each row represents a face, and there are 10 columns:
    • video_id: the id of the video
    • frame_timestamp: the timestamp of the face in video_id
    • entity_box_x1, entity_box_y1, entity_box_x2, entity_box_y2: the relative coordinates of the face's bounding box
    • label: SPEAKING_AUDIBLE or NOT_SPEAKING
    • entity_id: the id of the face tracks (a set of consecutive faces of the same person) in the format video_id:number
    • label_id: 1 or 0
    • instance_id: a maximal run of consecutive faces of an entity_id that share the same label (all speaking or all not speaking). It is in the format entity_id:number
  • Inside clips_audios, there are 2 folders for the train and val splits. In each split, there is a list of video_id folders, each containing a wav audio file for each entity_id.
  • Inside clips_videos, there are 2 folders for the train and val splits. In each split, there is a list of video_id folders, each containing a list of entity_id folders. Each entity_id folder contains the face crops of that entity_id's person.
  • We sample the videos at 25 fps. So, if you want to use other cues to support the face prediction, we recommend checking the video_list folder, which contains links to the videos we use. You can download them and sample at 25 fps.
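For example, the per-face rows can be grouped back into face tracks with pandas. The column names below follow the schema described above, while the row values are fabricated for illustration:

```python
import pandas as pd

# Toy rows using the 10-column schema described above (values are made up).
df = pd.DataFrame({
    "video_id": ["vid1"] * 4,
    "frame_timestamp": [0.04, 0.08, 0.04, 0.08],
    "entity_box_x1": [0.1] * 4,
    "entity_box_y1": [0.1] * 4,
    "entity_box_x2": [0.2] * 4,
    "entity_box_y2": [0.2] * 4,
    "label": ["SPEAKING_AUDIBLE", "SPEAKING_AUDIBLE",
              "NOT_SPEAKING", "NOT_SPEAKING"],
    "entity_id": ["vid1:0", "vid1:0", "vid1:1", "vid1:1"],
    "label_id": [1, 1, 0, 0],
    "instance_id": ["vid1:0:0", "vid1:0:0", "vid1:1:0", "vid1:1:0"],
})

# One face track per entity_id; count total frames and speaking frames per track.
tracks = df.groupby("entity_id")["label_id"].agg(frames="count", speaking="sum")
print(tracks)
```

With the real dataset you would replace the toy DataFrame with `pd.read_csv` on `train_orig.csv` or `val_orig.csv`.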

Loading each entity_id's information from Hugging Face

We also provide a way to load the information of each entity_id (i.e., face track) through the Hugging Face hub. However, this method is less flexible and cannot be used for models that use multiple face tracks, such as ASDNet or LoCoNet. You just need to run:

from datasets import load_dataset
dataset = load_dataset("plnguyen2908/UniTalk", split = "train|val", trust_remote_code=True)

This method is more memory-efficient. However, it is slow (around 20-40 hours to read all face-track instances) and less flexible than the first method.

For each instance, it will return:

{
    "entity_id": the id of the face track
    "images": list of images of face crops of the face_track
    "audio": the audio that has been read from wavfile.read
    "frame_timestamp": time of each face crop in the video
    "label_id": the label of each face (0 or 1)
}
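A returned instance can then be paired frame by frame. The sketch below uses a mock dictionary in the schema above, since the real one comes from load_dataset and requires downloading the dataset; all values are fabricated, and audio stands in for the (sample_rate, samples) pair that wavfile.read returns:

```python
# Mock instance matching the documented schema (real ones come from load_dataset).
instance = {
    "entity_id": "vid1:0",
    "images": ["<jpg bytes>", "<jpg bytes>"],  # stand-ins for face crops
    "audio": (16000, [0.0] * 16000),           # (sample_rate, samples), as from wavfile.read
    "frame_timestamp": [0.04, 0.08],
    "label_id": [1, 1],
}

# Pair each face crop with its timestamp and label, and compute the
# fraction of speaking frames in this face track.
frames = list(zip(instance["frame_timestamp"],
                  instance["images"],
                  instance["label_id"]))
speaking_ratio = sum(instance["label_id"]) / len(instance["label_id"])
print(len(frames), speaking_ratio)  # 2 1.0
```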

Remarks