Commit 2ec5dea (verified) by cwuau · Parent(s): 23a7318

Rename readme.md to README.md

Files changed (1): readme.md → README.md (+34 -32)
# CMOSE: Comprehensive Multi-Modality Online Student Engagement Dataset with High-Quality Labels

## Video clip name
Each video clip is named videoX_Y_personZ, meaning it is the Yth clip of the Zth subject from coaching session X.

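The naming scheme above can be parsed mechanically; a minimal sketch (the helper name is ours, not part of the dataset):

```python
import re

# Hypothetical helper: extract session, clip index, and subject ID from a
# clip name such as "video3_7_person12" (scheme described in this README).
def parse_clip_name(name: str) -> dict:
    m = re.fullmatch(r"video(\d+)_(\d+)_person(\d+)", name)
    if m is None:
        raise ValueError(f"unexpected clip name: {name}")
    session, clip, person = map(int, m.groups())
    return {"session": session, "clip": clip, "person": person}

print(parse_clip_name("video3_7_person12"))
```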
## OpenFace
We extract second-level features with [OpenFace](https://github.com/TadasBaltrusaitis/OpenFace/wiki/Unix-Installation). The extracted files are stored under "secondfeature/videoX_Y_personZ.csv". These features include:

- **Gaze Direction and Angles**
  - Three coordinates describing the gaze direction of the left and right eyes, respectively
  - Two scalars describing the horizontal and vertical gaze angles

- **Head Position**
  - Three coordinates describing the location of the head relative to the camera

- **Head Rotation**
  - Rotation of the head described with pitch, yaw, and roll

- **Facial Action Units (AUs)**
  - Intensities of 17 AUs represented as scalars
  - Presence of 18 AUs represented as scalars

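As a rough sketch of working with one of these CSVs, OpenFace's standard output names gaze columns `gaze_*`, head pose columns `pose_*`, AU intensities `AU*_r`, and AU presence `AU*_c`; the sample row below is synthetic, not taken from the dataset:

```python
import csv
import io

# Synthetic stand-in for one row of "secondfeature/videoX_Y_personZ.csv",
# using OpenFace's conventional column names (an assumption about the files).
sample_csv = io.StringIO(
    "gaze_0_x,gaze_angle_x,pose_Tx,pose_Rx,AU01_r,AU01_c\n"
    "0.1,-0.05,12.0,0.02,1.3,1\n"
)
row = next(csv.DictReader(sample_csv))

# Group the columns by modality, as listed in the README.
gaze = {k: float(v) for k, v in row.items() if k.startswith("gaze")}
pose = {k: float(v) for k, v in row.items() if k.startswith("pose")}
au_intensity = {k: float(v) for k, v in row.items() if k.endswith("_r")}
au_presence = {k: int(v) for k, v in row.items() if k.endswith("_c")}
print(gaze, pose, au_intensity, au_presence)
```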
## I3D
We use the [I3D Repository](https://github.com/v-iashin/video_features) to extract the I3D vectors. One I3D vector is extracted per clip. The features are stored in "final_data_1.json".

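The exact schema of "final_data_1.json" is not documented here; the sketch below assumes, purely for illustration, a mapping from clip name to a record holding the clip's I3D vector and split:

```python
import json

# Assumed layout of final_data_1.json (clip name -> record); the real file
# may differ, and real I3D vectors are much longer than this toy example.
sample = json.loads(
    '{"video1_1_person1": {"i3d": [0.1, 0.2, 0.3], "split": "train"}}'
)
vec = sample["video1_1_person1"]["i3d"]
print(len(vec))  # one I3D vector per clip
```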
## Acoustics
We use [Parselmouth](https://github.com/YannickJadoul/Parselmouth) to extract the acoustic features, which are stored in "label_results_w_audio_final.json". We also compute high-level features such as the percentage of high/low volume, the percentage of high/low pitch, and the standard deviation of volume/pitch. These are stored in "new_bert_ac_dict.json".
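To illustrate the kind of high-level statistics described above, here is a toy computation over a fake per-frame volume track; the mean-based threshold is our own illustrative choice, not the dataset's actual definition of "high"/"low":

```python
import statistics

# Toy per-frame volume values (dB-like, synthetic).
volume = [52.0, 61.0, 58.0, 70.0, 49.0, 65.0]

# Illustrative threshold: frames above the mean count as "high volume".
threshold = statistics.mean(volume)
pct_high = sum(v > threshold for v in volume) / len(volume)
pct_low = 1.0 - pct_high
vol_std = statistics.stdev(volume)

print(round(pct_high, 2), round(pct_low, 2), round(vol_std, 2))
```

The same pattern would apply to a pitch track extracted with Parselmouth.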
## Narrations
We collect the narrations from the Live Transcript function in Zoom. They are stored in "label_results_w_audio_final.json". We also extract BERT features from the narrations and store them in "new_bert_ac_dict.json".
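A BERT encoder produces one vector per token; a common way to obtain a single narration-level feature is to mean-pool those vectors (whether this dataset uses pooling or the [CLS] vector is not stated here). A toy sketch with fake 3-dimensional "token embeddings" (real BERT vectors are 768-dimensional):

```python
# Fake per-token embeddings for one narration (synthetic, 3-d for brevity).
tokens = [
    [0.2, 0.4, 0.0],
    [0.6, 0.0, 0.3],
    [0.1, 0.2, 0.3],
]

# Mean-pool across tokens to get one fixed-size narration vector.
dim = len(tokens[0])
pooled = [sum(tok[d] for tok in tokens) / len(tokens) for d in range(dim)]
print([round(x, 2) for x in pooled])
```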
## Data split
Split information can be found in "final_data_1.json". The "split" field is one of "train", "unlabel", and "test"; we use the "unlabel" split for validation purposes.
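Partitioning clips by the "split" field could then look like the following sketch (the clip-name-to-record JSON layout is an assumption for illustration):

```python
import json

# Assumed layout of final_data_1.json: clip name -> record with a "split".
data = json.loads(
    '{"video1_1_person1": {"split": "train"},'
    ' "video1_2_person1": {"split": "unlabel"},'
    ' "video2_1_person3": {"split": "test"}}'
)

splits = {"train": [], "unlabel": [], "test": []}
for clip, rec in data.items():
    splits[rec["split"]].append(clip)

# Per the README, the "unlabel" split serves as the validation set.
val_clips = splits["unlabel"]
print(val_clips)
```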