# CMOSE: Comprehensive Multi-Modality Online Student Engagement Dataset with High-Quality Labels

## Video clip name

Each video clip is named videoX_Y_personZ, denoting the Yth clip of subject Z from coaching session X.
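
The naming scheme above can be parsed with a small helper; this is a minimal sketch (the function name `parse_clip_name` is ours, not part of the dataset's tooling):

```python
import re

def parse_clip_name(name):
    """Split a clip name of the form videoX_Y_personZ into (session, clip, subject)."""
    m = re.fullmatch(r"video(\d+)_(\d+)_person(\d+)", name)
    if m is None:
        raise ValueError(f"unexpected clip name: {name}")
    session, clip, subject = map(int, m.groups())
    return session, clip, subject

# e.g. the 2nd clip of subject 5 from coaching session 3
session, clip, subject = parse_clip_name("video3_2_person5")
```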

## OpenFace

We extract second-level features with [OpenFace](https://github.com/TadasBaltrusaitis/OpenFace/wiki/Unix-Installation). The extracted files are stored under "secondfeature/videoX_Y_personZ.csv". These features include:

- **Gaze Direction and Angles**
  - Three coordinates describing the gaze direction of the left and right eyes, respectively
  - Two scalars describing the horizontal and vertical gaze angles
- **Head Position**
  - Three coordinates describing the location of the head relative to the camera
- **Head Rotation**
  - Rotation of the head described with pitch, yaw, and roll
- **Facial Action Units (AUs)**
  - Intensities of 17 AUs, represented as scalars
  - Presence of 18 AUs, represented as binary values
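
The per-clip CSVs can be read with the standard `csv` module. The sketch below parses an inline excerpt with OpenFace-style column names (`gaze_angle_x`, `pose_Rx`, `AU01_r`, ...); the exact columns in the dataset's second-level files may differ, so treat the header here as illustrative:

```python
import csv
import io

# Hypothetical excerpt of "secondfeature/videoX_Y_personZ.csv"; the values
# are made up, and only the OpenFace-style column naming is assumed.
sample = """gaze_angle_x,gaze_angle_y,pose_Rx,pose_Ry,pose_Rz,AU01_r
0.05,-0.12,0.02,0.10,-0.01,1.4
0.07,-0.10,0.03,0.08,0.00,1.1
"""

rows = list(csv.DictReader(io.StringIO(sample)))

# Example aggregation: average AU01 intensity over the clip
mean_au01 = sum(float(r["AU01_r"]) for r in rows) / len(rows)  # 1.25 for this excerpt
```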

## I3D

We use the [I3D Repository](https://github.com/v-iashin/video_features) to extract the I3D vectors. One I3D vector is extracted for each clip. The features are stored in "final_data_1.json".
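
Loading the per-clip vectors is a plain JSON read. The README does not document the internal layout of "final_data_1.json", so the key names below (`"i3d"`, `"split"`) are assumptions for illustration only; only the file name and the one-vector-per-clip fact come from the text above:

```python
import json

# Assumed structure: clip name -> record with an I3D vector and a split tag.
sample = json.loads("""{
  "video1_1_person1": {"i3d": [0.1, 0.2, 0.3], "split": "train"}
}""")

clip = sample["video1_1_person1"]
vector = clip["i3d"]  # one I3D vector per clip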

## Acoustics

We use [Parselmouth](https://github.com/YannickJadoul/Parselmouth) to extract the acoustic features. They are stored in "label_results_w_audio_final.json". We also compute high-level features such as the percentage of high/low volume, the percentage of high/low pitch, and the standard deviation of volume and pitch. These are stored in "new_bert_ac_dict.json".
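
The high-level summaries can be computed from a per-frame volume or pitch track as below. This is a sketch under one loud assumption: the README does not define "high" and "low", so we threshold at one standard deviation from the mean purely for illustration:

```python
import statistics

def summarize(values):
    """Percentage of high/low frames and std, for a volume or pitch track.
    'High'/'low' thresholds (mean +/- one std) are an assumption, not the
    dataset's documented definition."""
    mean = statistics.mean(values)
    std = statistics.pstdev(values)
    pct_high = sum(v > mean + std for v in values) / len(values)
    pct_low = sum(v < mean - std for v in values) / len(values)
    return {"pct_high": pct_high, "pct_low": pct_low, "std": std}

# e.g. an intensity track in dB with one loud and one quiet frame
stats = summarize([60.0, 62.0, 61.0, 75.0, 59.0, 45.0])
```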

## Narrations

We collect the narrations from the Live Transcript function in Zoom. They are stored in "label_results_w_audio_final.json". We also extract BERT features from the narrations and store them in "new_bert_ac_dict.json".

## Data split

Split information can be found in "final_data_1.json". The "split" field of each clip is one of "train", "unlabel", and "test"; we use the "unlabel" portion for validation.
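
Given per-clip records carrying a "split" field, the three partitions can be recovered by a simple grouping pass; the surrounding dict layout here is illustrative, but the split values come from the text above:

```python
from collections import defaultdict

# Toy records standing in for entries of "final_data_1.json"; only the
# "split" field and its three possible values are documented.
data = {
    "video1_1_person1": {"split": "train"},
    "video1_2_person1": {"split": "unlabel"},
    "video2_1_person3": {"split": "test"},
}

splits = defaultdict(list)
for clip_name, record in data.items():
    splits[record["split"]].append(clip_name)

val_clips = splits["unlabel"]  # "unlabel" doubles as the validation set
```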