Add dataset files
- .gitattributes +1 -0
- BaboonLand/README.md +109 -0
- BaboonLand/cvat_templates/behavior.zip +3 -0
- BaboonLand/cvat_templates/tracking.zip +3 -0
- BaboonLand/scripts/charades2video.py +73 -0
- BaboonLand/scripts/charades2visual.py +72 -0
- BaboonLand/scripts/dataset2charades.py +311 -0
- BaboonLand/scripts/dataset2tracking.py +197 -0
- BaboonLand/scripts/requirements.txt +13 -0
- BaboonLand/scripts/tracking2ultralytics.py +144 -0
- BaboonLand/scripts/tracks2mini-scenes.py +183 -0
- BaboonLand/scripts/ultralytics2pyramid.py +172 -0
.gitattributes
CHANGED
@@ -57,3 +57,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 # Video files - compressed
 *.mp4 filter=lfs diff=lfs merge=lfs -text
 *.webm filter=lfs diff=lfs merge=lfs -text
+*.jsonl filter=lfs diff=lfs merge=lfs -text
BaboonLand/README.md
ADDED
@@ -0,0 +1,109 @@
BaboonLand: Tracking Primates in the Wild and Automating Ethograms From Drone Videos

The dataset structure looks as follows:

BaboonLand
    /charades       -> The dataset converted to Charades format to train and evaluate behavior
                       recognition models. You can download the generated dataset from our webpage
                       or you can generate it yourself. See instructions below.
        ...
    /cvat_templates -> You can use these templates to back up projects in CVAT.
                       It will allow you to explore and adjust the annotations in CVAT.
        /behavior.zip
        /tracking.zip
    /dataset        -> The dataset is located here.
        /video_1
            /actions      -> The behavior annotations are located here.
                /0.xml
                /1.xml    -> Annotations of the behavior for the individual with ID=1.
                ...
                /n.xml
            /mini-scenes  -> Mini-scenes generated from video.mp4 and tracks.xml. The name of each
                             mini-scene video matches the ID of the corresponding track in tracks.xml,
                             and it also matches the name of the behavior annotation file in the
                             actions folder. For example, the track with ID=1 is extracted into
                             mini-scenes/1.mp4, and the behavior annotations for this track are
                             located in actions/1.xml.
                /0.mp4
                /1.mp4
                ...
                /n.mp4
            /timeline.jpg -> A timeline of the original video and the corresponding mini-scenes.
                             This file is generated for convenience only. You can use it to look for
                             a mini-scene with a specific length or relative location in the video.
            /tracks.xml   -> This file contains the tracks and bounding boxes of baboons in
                             CVAT for video 1.1 format. Each track has a unique ID, and this
                             number matches the name of a file in the actions folder. For
                             example, if you want the track and corresponding bounding boxes
                             of the baboon with ID=1, read them from tracks.xml; if you want
                             to explore the behavior of the same baboon, read actions/1.xml.
            /video.mp4    -> The original video from a drone.
        /video_2
            /actions
                /0.xml
                /1.xml
                ...
                /n.xml
            /mini-scenes
                /0.mp4
                /1.mp4
                ...
                /n.mp4
            /timeline.jpg
            /tracks.xml
            /video.mp4
        ...
        /video_n
            /actions
                /0.xml
                /1.xml
                ...
                /n.xml
            /mini-scenes
                /0.mp4
                /1.mp4
                ...
                /n.mp4
            /tracks.xml
            /video.mp4
    /scripts
        /requirements.txt        -> Install these requirements to be able to run the scripts.
        /tracks2mini-scenes.py   -> Use this script to generate the mini-scenes from the
                                    video.mp4 and tracks.xml files.
        /dataset2charades.py     -> Use this script to generate a dataset for baboon behavior
                                    recognition in Charades format. The generated dataset can
                                    be used to train a model with the SlowFast framework.
        /charades2video.py       -> Use this script to combine images from the dataset in
                                    Charades format back into videos. These videos can be used
                                    to create demos of the model performance.
        /charades2visual.py      -> Use this script to combine images from the dataset in
                                    Charades format back into videos and visualize the
                                    corresponding behavior annotations.
        /dataset2tracking.py     -> Use this script to generate a data split for training and
                                    evaluating tracking algorithms.
        /tracking2ultralytics.py -> Use this script to generate a baboon detection dataset in
                                    Ultralytics (YOLO) format. The dataset can be used to train
                                    detection models with the Ultralytics (YOLOv8) framework.
        /ultralytics2pyramid.py  -> Use this script to split the original 5.3K images in the
                                    Ultralytics dataset into 2x2, 3x3, and 4x4 tiles. Training
                                    on tiles helps make a model more robust to both small and
                                    large baboons.
    /tracking -> The dataset split into train and test for tracking, with the train split
                 converted to Ultralytics format to train and evaluate detection models. You can
                 download the generated dataset from our webpage or you can generate it yourself.
                 See instructions below.
        ...
    /README.md -> You are reading this file.

How to generate the behavior dataset?
tracks2mini-scenes.py (skip, mini-scenes already generated) -> dataset2charades.py ->
-> charades2video.py (optional) -> charades2visual.py (optional)
Or you can download the generated charades.zip from our webpage.

How to generate the detection and tracking dataset?
tracks2mini-scenes.py (skip, mini-scenes already generated) -> dataset2tracking.py ->
-> tracking2ultralytics.py -> ultralytics2pyramid.py
Or you can download the generated tracking.zip from our webpage.

All ".xml" files are compatible with CVAT and are simplified versions of the CVAT for video 1.1 format.
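As noted above, tracks.xml is a simplified CVAT for video 1.1 file: one `<track id=...>` element per individual, whose `<box>` children carry `frame`, `xtl`, `ytl`, `xbr`, and `ybr` attributes. A minimal sketch of reading the bounding boxes for one track ID follows; it uses the stdlib ElementTree (the repository scripts use lxml with the same calls), and `boxes_for_track` is a hypothetical helper, not part of the repository:

```python
import xml.etree.ElementTree as ET


def boxes_for_track(tracks_xml: str, track_id: int) -> dict:
    """Map frame number -> (xtl, ytl, xbr, ybr) for one track in a tracks.xml file."""
    root = ET.parse(tracks_xml).getroot()
    boxes = {}
    for track in root.iterfind("track"):
        if int(track.attrib["id"]) != track_id:
            continue  # only the requested individual
        for box in track.iter("box"):
            boxes[int(box.attrib["frame"])] = tuple(
                float(box.attrib[key]) for key in ("xtl", "ytl", "xbr", "ybr"))
    return boxes
```

Pair this with actions/&lt;track_id&gt;.xml to look up the behavior label for the same individual.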
BaboonLand/cvat_templates/behavior.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1e389c6c1c49d3bfcf1832461c5ae524b4dbd4842f1f20eb45979cbd680b0946
size 613
BaboonLand/cvat_templates/tracking.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fbe3b1a4109d40cf805916bd46a57e85119ef53fb88a83b1c60f842ae514fd28
size 293
BaboonLand/scripts/charades2video.py
ADDED
@@ -0,0 +1,73 @@
import os
import sys
import json
import cv2
from natsort import natsorted
import pandas as pd
from tqdm import tqdm

if __name__ == "__main__":
    if "scripts" in os.listdir(".."):
        os.chdir("..")

    sys.dont_write_bytecode = True

    path_to_image = "charades/dataset/image"
    path_to_video = "charades/dataset/video"
    annotation_train = "charades/annotation/train.csv"
    annotation_val = "charades/annotation/val.csv"
    classes_json = "charades/annotation/classes.json"
    visual = False

    if not os.path.exists(path_to_video):
        os.makedirs(path_to_video)

    with open(classes_json, "r") as file:
        label2number = json.load(file)

    number2label = {value: key for key, value in label2number.items()}

    df_train = pd.read_csv(annotation_train, sep=" ")
    df_val = pd.read_csv(annotation_val, sep=" ")
    df = pd.concat([df_train, df_val], axis=0)
    folders = natsorted(os.listdir(path_to_image))

    hierarchy = {}

    for folder in folders:
        main = folder.split(".")[0]

        if hierarchy.get(main) is None:
            hierarchy[main] = [folder]
        else:
            hierarchy[main].append(folder)

    for i, folder in tqdm(enumerate(hierarchy.keys()), total=len(hierarchy.keys())):
        vw = cv2.VideoWriter(f"{path_to_video}/{folder}.mp4", cv2.VideoWriter_fourcc("m", "p", "4", "v"), 29.97,
                             (400, 300))

        for segment in hierarchy[folder]:
            mapping = {}

            for index, row in df[df.original_vido_id == segment].iterrows():
                mapping[row["frame_id"]] = number2label[row["labels"]]

            for j, file in enumerate(natsorted(os.listdir(path_to_image + os.sep + segment))):
                image = cv2.imread(f"{path_to_image}/{segment}/{file}")

                if visual:
                    color = (0, 0, 0)
                    label = mapping[j + 1]
                    thickness_in = 1
                    size = 0.7
                    label_length = cv2.getTextSize(label, cv2.FONT_HERSHEY_SIMPLEX, size, thickness_in)
                    copied = image.copy()
                    cv2.rectangle(image, (10, 10), (20 + label_length[0][0], 40), (255, 255, 255), -1)
                    cv2.putText(image, label, (16, 31),
                                cv2.FONT_HERSHEY_SIMPLEX, size, tuple([i - 50 for i in color]), thickness_in,
                                cv2.LINE_AA)
                    image = cv2.addWeighted(image, 0.4, copied, 0.6, 0.0)

                vw.write(image)

        vw.release()
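charades2video.py stitches the per-segment image folders back into one video per clip; the key step is grouping segment folders such as `W0001.1` and `W0001.2` under their parent clip name, i.e. everything before the first dot. That grouping step in isolation (a sketch; `group_segments` is a hypothetical helper, not part of the script):

```python
from collections import defaultdict


def group_segments(folders):
    """Group Charades segment folders (e.g. 'W0001.1') under their parent clip ('W0001')."""
    hierarchy = defaultdict(list)
    for folder in folders:
        hierarchy[folder.split(".")[0]].append(folder)
    return dict(hierarchy)
```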
BaboonLand/scripts/charades2visual.py
ADDED
@@ -0,0 +1,72 @@
import os
import sys
import json
import cv2
from natsort import natsorted
import pandas as pd
from tqdm import tqdm

if __name__ == "__main__":
    if "scripts" in os.listdir(".."):
        os.chdir("..")

    sys.dont_write_bytecode = True

    path_to_image = "charades/dataset/image"
    path_to_video = "charades/dataset/visual"
    annotation_train = "charades/annotation/train.csv"
    annotation_val = "charades/annotation/val.csv"
    classes_json = "charades/annotation/classes.json"
    visual = True

    if not os.path.exists(path_to_video):
        os.makedirs(path_to_video)

    with open(classes_json, "r") as file:
        label2number = json.load(file)

    number2label = {value: key for key, value in label2number.items()}

    df_train = pd.read_csv(annotation_train, sep=" ")
    df_val = pd.read_csv(annotation_val, sep=" ")
    df = pd.concat([df_train, df_val], axis=0)
    folders = natsorted(os.listdir(path_to_image))

    hierarchy = {}

    for folder in folders:
        main = folder.split(".")[0]

        if hierarchy.get(main) is None:
            hierarchy[main] = [folder]
        else:
            hierarchy[main].append(folder)

    for i, folder in tqdm(enumerate(hierarchy.keys()), total=len(hierarchy.keys())):
        vw = cv2.VideoWriter(f"{path_to_video}/{folder}.mp4", cv2.VideoWriter_fourcc("m", "p", "4", "v"), 29.97,
                             (400, 300))

        for segment in hierarchy[folder]:
            mapping = {}

            for index, row in df[df.original_vido_id == segment].iterrows():
                mapping[row["frame_id"]] = number2label[row["labels"]]

            for j, file in enumerate(natsorted(os.listdir(path_to_image + os.sep + segment))):
                image = cv2.imread(f"{path_to_image}/{segment}/{file}")

                if visual:
                    color = (0, 0, 0)
                    label = mapping[j + 1]
                    thickness_in = 1
                    size = 0.7
                    label_length = cv2.getTextSize(label, cv2.FONT_HERSHEY_SIMPLEX, size, thickness_in)
                    copied = image.copy()
                    cv2.rectangle(image, (10, 10), (20 + label_length[0][0], 40), (255, 255, 255), -1)
                    cv2.putText(image, label, (16, 31),
                                cv2.FONT_HERSHEY_SIMPLEX, size, tuple([i - 50 for i in color]), thickness_in, cv2.LINE_AA)
                    image = cv2.addWeighted(image, 0.4, copied, 0.6, 0.0)

                vw.write(image)

        vw.release()
BaboonLand/scripts/dataset2charades.py
ADDED
@@ -0,0 +1,311 @@
import os
import sys
import json
from lxml import etree
from collections import OrderedDict
import pandas as pd
from natsort import natsorted
import cv2
from tqdm import tqdm
from sklearn.utils import shuffle
import shutil

if __name__ == "__main__":
    if "scripts" in os.listdir(".."):
        os.chdir("..")

    sys.dont_write_bytecode = True

    optimize = True
    dataset_path = "dataset"
    charades_path = "charades_"

    label2number = {"Walking/Running": 0,
                    "Sitting/Standing": 1,
                    "Fighting/Playing": 2,
                    "Self-Grooming": 3,
                    "Being Groomed": 4,
                    "Grooming Somebody": 5,
                    "Mutual Grooming": 6,
                    "Infant-Carrying": 7,
                    "Foraging": 8,
                    "Drinking": 9,
                    "Mounting": 10,
                    "Sleeping": 11,
                    "Occluded": 12}

    if not os.path.exists(charades_path):
        os.makedirs(charades_path)

    if not os.path.exists(f"{charades_path}/annotation"):
        os.makedirs(f"{charades_path}/annotation")

    if not os.path.exists(f"{charades_path}/dataset/image"):
        os.makedirs(f"{charades_path}/dataset/image")

    with open(f"{charades_path}/annotation/classes.json", "w") as file:
        json.dump(label2number, file)

    headers = {"original_vido_id": [], "video_id": pd.Series(dtype="int"), "frame_id": pd.Series(dtype="int"),
               "path": [], "labels": []}
    charades_df = pd.DataFrame(data=headers)
    video_id = 1
    folder_name = 1
    flag = True

    for i, folder in enumerate(natsorted(os.listdir(dataset_path))):
        if os.path.exists(f"{dataset_path}/{folder}/actions"):
            for j, file in enumerate(natsorted(os.listdir(f"{dataset_path}/{folder}/actions"))):
                if os.path.splitext(file)[1] == ".xml":
                    annotation_file = f"{dataset_path}/{folder}/actions/{file}"
                    video_file = f"{dataset_path}/{folder}/mini-scenes/{os.path.splitext(file)[0]}.mp4"

                    if not os.path.exists(video_file):
                        print(f"{video_file} does not exist.")
                        continue

                    root = etree.parse(annotation_file).getroot()

                    try:
                        label = next(root.iterfind("track")).attrib["label"]
                    except StopIteration:
                        print(f"SKIPPED: {dataset_path}/{folder}/actions/{file}, EMPTY ANNOTATION")
                        continue

                    annotated = OrderedDict()

                    for track in root.iterfind("track"):
                        for entry in track.iter("points"):
                            frame_id = entry.attrib["frame"]
                            outside = entry.attrib["outside"]

                            if outside == "1":
                                continue

                            behavior = "".join(entry.find("attribute").itertext())

                            if annotated.get(frame_id) is None:
                                annotated[frame_id] = OrderedDict()

                            annotated[frame_id] = behavior

                    counter = 0

                    for value in annotated.values():
                        if value in label2number.keys():
                            counter += 1

                    if counter < 90:
                        print(f"SKIPPED: {dataset_path}/{folder}/actions/{file}, length={counter}<90")
                        continue

                    folder_code = f"{label[0].capitalize()}{folder_name:04d}"
                    folder_name += 1
                    output_folder = f"{charades_path}/dataset/image/{folder_code}"
                    progress = f"{i + 1}/{len(os.listdir(dataset_path))}," \
                               f"{j + 1}/{len(os.listdir(f'{dataset_path}/{folder}/actions'))}:" \
                               f"{dataset_path}/{folder}/actions/{file} -> {output_folder}"
                    print(progress)
                    sys.stdout.flush()

                    index = 0
                    adjusted_index = 1
                    vc = cv2.VideoCapture(video_file)
                    size = int(vc.get(cv2.CAP_PROP_FRAME_COUNT))

                    for k in range(1, size):
                        if annotated.get(str(k)) is None:
                            annotated[str(k)] = annotated[str(k - 1)]

                    while vc.isOpened():
                        if flag is False:
                            if index < size:
                                returned = True
                                frame = None
                            else:
                                returned = False
                                frame = None
                        else:
                            returned, frame = vc.read()

                        if returned:
                            if not os.path.exists(output_folder):
                                os.makedirs(output_folder)

                            behavior = annotated.get(str(index))

                            if behavior in label2number.keys():
                                if flag:
                                    cv2.imwrite(f"{output_folder}/{adjusted_index}.jpg", frame)

                                charades_df.loc[len(charades_df.index)] = [f"{folder_code}",
                                                                           video_id,
                                                                           adjusted_index,
                                                                           f"{folder_code}/{adjusted_index}.jpg",
                                                                           str(label2number[behavior])]

                                adjusted_index += 1

                            index += 1
                        else:
                            break

                    vc.release()
                    video_id += 1

                    if video_id % 10 == 0:
                        charades_df.to_csv(f"{charades_path}/annotation/data.csv", sep=" ", index=False)

    charades_df.to_csv(f"{charades_path}/annotation/data.csv", sep=" ", index=False)
    videos = shuffle(charades_df["original_vido_id"].unique(), random_state=42)
    test = videos[:int(len(videos) / 100 * 25)]

    train_original_vido_id = []
    train_video_id = []
    train_frame_id = []
    train_path = []
    train_labels = []
    test_original_vido_id = []
    test_video_id = []
    test_frame_id = []
    test_path = []
    test_labels = []

    for index, row in tqdm(charades_df.iterrows(), total=charades_df.shape[0]):
        if row["original_vido_id"] in test:
            test_original_vido_id.append(row["original_vido_id"])
            test_video_id.append(row["video_id"])
            test_frame_id.append(row["frame_id"])
            test_path.append(row["path"])
            test_labels.append(str(row["labels"]))
        else:
            train_original_vido_id.append(row["original_vido_id"])
            train_video_id.append(row["video_id"])
            train_frame_id.append(row["frame_id"])
            train_path.append(row["path"])
            train_labels.append(str(row["labels"]))

    train_df = pd.DataFrame(data={"original_vido_id": train_original_vido_id,
                                  "video_id": pd.Series(train_video_id, dtype="int"),
                                  "frame_id": pd.Series(train_frame_id, dtype="int"),
                                  "path": train_path, "labels": train_labels})
    test_df = pd.DataFrame(data={"original_vido_id": test_original_vido_id,
                                 "video_id": pd.Series(test_video_id, dtype="int"),
                                 "frame_id": pd.Series(test_frame_id, dtype="int"),
                                 "path": test_path, "labels": test_labels})

    train_df.to_csv(f"{charades_path}/annotation/train.csv", sep=" ", index=False)
    test_df.to_csv(f"{charades_path}/annotation/val.csv", sep=" ", index=False)

    optimized_path = "charades"

    if optimize:
        if not os.path.exists(optimized_path):
            os.makedirs(optimized_path)

        if not os.path.exists(f"{optimized_path}/annotation"):
            os.makedirs(f"{optimized_path}/annotation")

        if not os.path.exists(f"{optimized_path}/dataset/image"):
            os.makedirs(f"{optimized_path}/dataset/image")

        shutil.copy(f"{charades_path}/annotation/classes.json", f"{optimized_path}/annotation/classes.json")

        train_df = pd.read_csv(f"{charades_path}/annotation/train.csv", sep=" ")
        val_df = pd.read_csv(f"{charades_path}/annotation/val.csv", sep=" ")
        segment_size = 90

        new_df = []
        video_id = 1

        for folder in tqdm(train_df["original_vido_id"].unique()):
            subset = train_df[train_df["original_vido_id"] == folder].reset_index(drop=True)
            number_of_subfolders = len(subset) // segment_size

            for i in range(number_of_subfolders):
                for j in range(segment_size):
                    original_vido_id = subset.loc[i * segment_size + j]["original_vido_id"]
                    new_original_vido_id = f"{original_vido_id}.{i + 1}"
                    new_video_id = video_id
                    new_frame_id = j + 1
                    old_path = subset.loc[i * segment_size + j]["path"]
                    new_path = f"{new_original_vido_id}/{new_frame_id}.jpg"
                    new_labels = subset.loc[i * segment_size + j]["labels"]
                    new_df.append([new_original_vido_id, new_video_id, new_frame_id, new_path, new_labels])

                    if not os.path.exists(f"{optimized_path}/dataset/image/{new_original_vido_id}"):
                        os.makedirs(f"{optimized_path}/dataset/image/{new_original_vido_id}")

                    shutil.copy(f"{charades_path}/dataset/image/{old_path}",
                                f"{optimized_path}/dataset/image/{new_path}")

                video_id += 1

            for i in range(len(subset) % segment_size):
                original_vido_id = subset.loc[number_of_subfolders * segment_size + i]["original_vido_id"]
                new_original_vido_id = f"{original_vido_id}.{number_of_subfolders + 1}"
                new_video_id = video_id
                new_frame_id = i + 1
                old_path = subset.loc[number_of_subfolders * segment_size + i]["path"]
                new_path = f"{new_original_vido_id}/{new_frame_id}.jpg"
                new_labels = subset.loc[number_of_subfolders * segment_size + i]["labels"]
                new_df.append([new_original_vido_id, new_video_id, new_frame_id, new_path, new_labels])

                if not os.path.exists(f"{optimized_path}/dataset/image/{new_original_vido_id}"):
                    os.makedirs(f"{optimized_path}/dataset/image/{new_original_vido_id}")

                shutil.copy(f"{charades_path}/dataset/image/{old_path}", f"{optimized_path}/dataset/image/{new_path}")

            video_id += 1

        train_df = pd.DataFrame(new_df, columns=["original_vido_id", "video_id", "frame_id", "path", "labels"])
        train_df.to_csv(f"{optimized_path}/annotation/train.csv", sep=" ", index=False)

        new_df = []

        for folder in tqdm(val_df["original_vido_id"].unique()):
            subset = val_df[val_df["original_vido_id"] == folder].reset_index(drop=True)
            number_of_subfolders = len(subset) // segment_size

            for i in range(number_of_subfolders):
                for j in range(segment_size):
                    original_vido_id = subset.loc[i * segment_size + j]["original_vido_id"]
                    new_original_vido_id = f"{original_vido_id}.{i + 1}"
                    new_video_id = video_id
                    new_frame_id = j + 1
                    old_path = subset.loc[i * segment_size + j]["path"]
                    new_path = f"{new_original_vido_id}/{new_frame_id}.jpg"
                    new_labels = subset.loc[i * segment_size + j]["labels"]
                    new_df.append([new_original_vido_id, new_video_id, new_frame_id, new_path, new_labels])

                    if not os.path.exists(f"{optimized_path}/dataset/image/{new_original_vido_id}"):
                        os.makedirs(f"{optimized_path}/dataset/image/{new_original_vido_id}")

                    shutil.copy(f"{charades_path}/dataset/image/{old_path}",
                                f"{optimized_path}/dataset/image/{new_path}")

                video_id += 1

            for i in range(len(subset) % segment_size):
                original_vido_id = subset.loc[number_of_subfolders * segment_size + i]["original_vido_id"]
                new_original_vido_id = f"{original_vido_id}.{number_of_subfolders + 1}"
                new_video_id = video_id
                new_frame_id = i + 1
                old_path = subset.loc[number_of_subfolders * segment_size + i]["path"]
                new_path = f"{new_original_vido_id}/{new_frame_id}.jpg"
                new_labels = subset.loc[number_of_subfolders * segment_size + i]["labels"]
                new_df.append([new_original_vido_id, new_video_id, new_frame_id, new_path, new_labels])

                if not os.path.exists(f"{optimized_path}/dataset/image/{new_original_vido_id}"):
                    os.makedirs(f"{optimized_path}/dataset/image/{new_original_vido_id}")

                shutil.copy(f"{charades_path}/dataset/image/{old_path}", f"{optimized_path}/dataset/image/{new_path}")

            video_id += 1

        val_df = pd.DataFrame(new_df, columns=["original_vido_id", "video_id", "frame_id", "path", "labels"])
        val_df.to_csv(f"{optimized_path}/annotation/val.csv", sep=" ", index=False)

        shutil.rmtree(charades_path)
    else:
        shutil.move(charades_path, optimized_path)
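The train/val split in dataset2charades.py happens at the clip level rather than the frame level: the unique `original_vido_id` values are shuffled with `random_state=42` and the first 25% become validation, so no clip contributes frames to both splits. A stdlib sketch of the same idea (the script itself uses `sklearn.utils.shuffle`; `split_videos` is a hypothetical helper, not part of the script):

```python
import random


def split_videos(video_ids, val_fraction=0.25, seed=42):
    """Deterministically shuffle clip IDs, then carve off the first val_fraction for validation."""
    ids = list(video_ids)
    random.Random(seed).shuffle(ids)
    n_val = int(len(ids) * val_fraction)
    return ids[n_val:], ids[:n_val]  # (train, val)
```

Splitting by clip rather than by frame avoids leaking near-identical neighboring frames of one animal into both splits.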
BaboonLand/scripts/dataset2tracking.py
ADDED
|
@@ -0,0 +1,197 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
import os
import sys

import cv2
import json
from lxml import etree
from natsort import natsorted
from collections import OrderedDict

if __name__ == "__main__":
    if "scripts" in os.listdir(".."):
        os.chdir("..")

    sys.dont_write_bytecode = True

    path = "dataset"
    tracking_path = "tracking"
    test_size = 0.25

    if not os.path.exists(tracking_path):
        os.makedirs(tracking_path)

    if not os.path.exists(f"{tracking_path}/train"):
        os.makedirs(f"{tracking_path}/train")

    if not os.path.exists(f"{tracking_path}/test"):
        os.makedirs(f"{tracking_path}/test")

    train_split = {}
    test_split = {}

    for folder in natsorted(os.listdir(path)):
        vc = cv2.VideoCapture(f"{path}/{folder}/video.mp4")
        train_start = 0
        test_end = int(vc.get(cv2.CAP_PROP_FRAME_COUNT)) - 1
        train_end = test_end - int(int(vc.get(cv2.CAP_PROP_FRAME_COUNT)) * test_size)
        test_start = train_end + 1
        vc.release()

        train_split[folder] = {"start": train_start, "end": train_end}
        test_split[folder] = {"start": test_start, "end": test_end}

    with open(f"{tracking_path}/train.json", "w") as file:
        json.dump(train_split, file, indent=4)

    with open(f"{tracking_path}/test.json", "w") as file:
        json.dump(test_split, file, indent=4)

    for folder in natsorted(os.listdir(path)):
        print(f"{path}/{folder} -> {tracking_path}/train/{folder} | {tracking_path}/test/{folder}")

        if not os.path.exists(f"{tracking_path}/train/{folder}"):
            os.makedirs(f"{tracking_path}/train/{folder}")

        if not os.path.exists(f"{tracking_path}/test/{folder}"):
            os.makedirs(f"{tracking_path}/test/{folder}")

        video_path = f"{path}/{folder}/video.mp4"
        annotation_path = f"{path}/{folder}/tracks.xml"

        root = etree.parse(annotation_path).getroot()

        annotated_train = dict()
        annotated_test = dict()

        for track in root.iterfind("track"):
            track_id = int(track.attrib["id"])

            for box in track.iter("box"):
                frame_id = int(box.attrib["frame"])
                keyframe = int(box.attrib["keyframe"])

                if train_split[folder]["start"] <= frame_id <= train_split[folder]["end"]:
                    if annotated_train.get(track_id) is None:
                        annotated_train[track_id] = OrderedDict()

                    if frame_id - train_split[folder]["start"] == 0:
                        keyframe_train = 1
                    else:
                        keyframe_train = keyframe

                    annotated_train[track_id][frame_id - train_split[folder]["start"]] = [int(float(box.attrib["xtl"])),
                                                                                          int(float(box.attrib["ytl"])),
                                                                                          int(float(box.attrib["xbr"])),
                                                                                          int(float(box.attrib["ybr"])),
                                                                                          keyframe_train]

                if test_split[folder]["start"] <= frame_id <= test_split[folder]["end"]:
                    if annotated_test.get(track_id) is None:
                        annotated_test[track_id] = OrderedDict()

                    if frame_id - test_split[folder]["start"] == 0:
                        keyframe_test = 1
                    else:
                        keyframe_test = keyframe

                    annotated_test[track_id][frame_id - test_split[folder]["start"]] = [int(float(box.attrib["xtl"])),
                                                                                        int(float(box.attrib["ytl"])),
                                                                                        int(float(box.attrib["xbr"])),
                                                                                        int(float(box.attrib["ybr"])),
                                                                                        keyframe_test]

        xml_page = etree.Element("annotations")
        etree.SubElement(xml_page, "version").text = "1.1"

        for track_id in annotated_train.keys():
            xml_track = etree.Element("track", id=str(track_id), label="Baboon", source="manual")

            for frame_id in annotated_train[track_id].keys():
                if frame_id == sorted(annotated_train[track_id].keys())[-1]:
                    outside = "1"
                else:
                    outside = "0"

                xml_box = etree.Element("box", frame=str(frame_id), outside=outside, occluded="0",
                                        keyframe=str(annotated_train[track_id][frame_id][4]),
                                        xtl=f"{annotated_train[track_id][frame_id][0]:.2f}",
                                        ytl=f"{annotated_train[track_id][frame_id][1]:.2f}",
                                        xbr=f"{annotated_train[track_id][frame_id][2]:.2f}",
                                        ybr=f"{annotated_train[track_id][frame_id][3]:.2f}", z_order="0")

                xml_track.append(xml_box)

            if len(annotated_train[track_id].keys()) > 0:
                xml_page.append(xml_track)

        xml_document = etree.ElementTree(xml_page)
        xml_document.write(f"{tracking_path}/train/{folder}/tracks.xml", xml_declaration=True, pretty_print=True,
                           encoding="utf-8")

        xml_page = etree.Element("annotations")
        etree.SubElement(xml_page, "version").text = "1.1"

        for track_id in annotated_test.keys():
            xml_track = etree.Element("track", id=str(track_id), label="Baboon", source="manual")

            for frame_id in annotated_test[track_id].keys():
                if frame_id == sorted(annotated_test[track_id].keys())[-1]:
                    outside = "1"
                else:
                    outside = "0"

                xml_box = etree.Element("box", frame=str(frame_id), outside=outside, occluded="0",
                                        keyframe=str(annotated_test[track_id][frame_id][4]),
                                        xtl=f"{annotated_test[track_id][frame_id][0]:.2f}",
                                        ytl=f"{annotated_test[track_id][frame_id][1]:.2f}",
                                        xbr=f"{annotated_test[track_id][frame_id][2]:.2f}",
                                        ybr=f"{annotated_test[track_id][frame_id][3]:.2f}", z_order="0")

                xml_track.append(xml_box)

            if len(annotated_test[track_id].keys()) > 0:
                xml_page.append(xml_track)

        xml_document = etree.ElementTree(xml_page)
        xml_document.write(f"{tracking_path}/test/{folder}/tracks.xml", xml_declaration=True, pretty_print=True,
                           encoding="utf-8")

        vc = cv2.VideoCapture(video_path)
        width, height = int(vc.get(cv2.CAP_PROP_FRAME_WIDTH)), int(vc.get(cv2.CAP_PROP_FRAME_HEIGHT))
        vw = cv2.VideoWriter(f"{tracking_path}/train/{folder}/video.mp4",
                             cv2.VideoWriter_fourcc("m", "p", "4", "v"), 29.97, (width, height))
        index = 0

        while vc.isOpened():
            returned, frame = vc.read()

            if returned:
                if train_split[folder]["start"] <= index <= train_split[folder]["end"]:
                    vw.write(frame)

                index += 1
            else:
                break

        vc.release()
        vw.release()

        vc = cv2.VideoCapture(video_path)
        width, height = int(vc.get(cv2.CAP_PROP_FRAME_WIDTH)), int(vc.get(cv2.CAP_PROP_FRAME_HEIGHT))
        vw = cv2.VideoWriter(f"{tracking_path}/test/{folder}/video.mp4",
                             cv2.VideoWriter_fourcc("m", "p", "4", "v"), 29.97, (width, height))
        index = 0

        while vc.isOpened():
            returned, frame = vc.read()

            if returned:
                if test_split[folder]["start"] <= index <= test_split[folder]["end"]:
                    vw.write(frame)

                index += 1
            else:
                break

        vc.release()
        vw.release()
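The train/test boundary above is purely temporal: the first 75% of each video's frames go to train and the trailing 25% to test. A minimal sketch of that split arithmetic, with `temporal_split` as a hypothetical helper name (not part of the scripts), could look like:

```python
def temporal_split(frame_count, test_size=0.25):
    # Mirrors dataset2tracking.py: the last `test_size` fraction of
    # frames forms the test range; everything before it is train.
    test_end = frame_count - 1
    train_end = test_end - int(frame_count * test_size)
    return ({"start": 0, "end": train_end},
            {"start": train_end + 1, "end": test_end})

train, test = temporal_split(1000)
print(train, test)  # → {'start': 0, 'end': 749} {'start': 750, 'end': 999}
```

Both ranges are inclusive, so together they cover every frame exactly once.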
BaboonLand/scripts/requirements.txt
ADDED
|
@@ -0,0 +1,13 @@
numpy>=1.23.4
opencv-python>=4.7.0.68
scipy>=1.10.0
lxml>=4.9.2
tqdm>=4.64.1
torch>=1.10.0+cu111
natsort>=8.2.0
ruamel.yaml>=0.17.21
ultralytics~=8.0.36
pandas>=1.3.5
matplotlib~=3.7.1
seaborn~=0.12.2
scikit-learn~=1.2.2
BaboonLand/scripts/tracking2ultralytics.py
ADDED
|
@@ -0,0 +1,144 @@
import os
import sys
import cv2
import ruamel.yaml as yaml
from lxml import etree
from collections import OrderedDict
from tqdm import tqdm
import shutil
from natsort import natsorted
from sklearn.utils import shuffle

if __name__ == "__main__":
    if "scripts" in os.listdir(".."):
        os.chdir("..")

    sys.dont_write_bytecode = True

    train_path = "tracking/train"
    dataset = "tracking/ultralytics"
    skip = 32

    if os.path.exists(f"{dataset}"):
        shutil.rmtree(f"{dataset}")

    if not os.path.exists(f"{dataset}/images/train"):
        os.makedirs(f"{dataset}/images/train")
    if not os.path.exists(f"{dataset}/images/val"):
        os.makedirs(f"{dataset}/images/val")
    if not os.path.exists(f"{dataset}/images/test"):
        os.makedirs(f"{dataset}/images/test")
    if not os.path.exists(f"{dataset}/labels/train"):
        os.makedirs(f"{dataset}/labels/train")
    if not os.path.exists(f"{dataset}/labels/val"):
        os.makedirs(f"{dataset}/labels/val")
    if not os.path.exists(f"{dataset}/labels/test"):
        os.makedirs(f"{dataset}/labels/test")

    dataset_file = f"""
path: .
train: images/train
val: images/val
test: images/test

nc: 1
names: ['Baboon']
"""

    with open(f"{dataset}/ultralytics.yaml", "w") as file:
        yaml.dump(yaml.load(dataset_file, Loader=yaml.RoundTripLoader, preserve_quotes=True),
                  file, Dumper=yaml.RoundTripDumper)

    label2index = {
        "Baboon": 0,
    }

    videos = []
    annotations = []

    for folder in os.listdir(train_path):
        videos.append(f"{train_path}/{folder}/video.mp4")
        annotations.append(f"{train_path}/{folder}/tracks.xml")

    for i, (video, annotation) in enumerate(zip(videos, annotations)):
        print(f"{i + 1}/{len(annotations)}:")

        if not os.path.exists(video):
            print(f"Path {video} does not exist.")
            continue

        vc = cv2.VideoCapture(video)
        width = vc.get(cv2.CAP_PROP_FRAME_WIDTH)
        height = vc.get(cv2.CAP_PROP_FRAME_HEIGHT)
        annotated_size = int(vc.get(cv2.CAP_PROP_FRAME_COUNT))

        root = etree.parse(annotation).getroot()
        name = os.path.splitext(video.split("/")[-2])[0]

        annotated = dict()

        for track in root.iterfind("track"):
            track_id = int(track.attrib["id"])
            label = label2index[track.attrib["label"].lower().capitalize()]

            for box in track.iter("box"):
                frame_id = int(box.attrib["frame"])

                if annotated.get(frame_id) is None:
                    annotated[frame_id] = OrderedDict()

                x_start = float(box.attrib["xtl"])
                y_start = float(box.attrib["ytl"])
                x_end = float(box.attrib["xbr"])
                y_end = float(box.attrib["ybr"])
                x_center = (x_start + (x_end - x_start) / 2) / width
                y_center = (y_start + (y_end - y_start) / 2) / height
                w = (x_end - x_start) / width
                h = (y_end - y_start) / height
                annotated[frame_id][track_id] = [label, x_center, y_center, w, h]

        index = 0
        pbar = tqdm(total=annotated_size)

        while vc.isOpened():
            returned, frame = vc.read()
            saved = False

            if returned:
                if annotated.get(index) is not None:
                    if index % skip == 0:
                        for box in annotated[index].values():
                            if not saved:
                                cv2.imwrite(f"{dataset}/images/train/{name}_{index}.jpg", frame)
                                saved = True

                            with open(f"{dataset}/labels/train/{name}_{index}.txt", "a") as file:
                                file.write(f"{box[0]} {box[1]:.6f} {box[2]:.6f} {box[3]:.6f} {box[4]:.6f}\n")

                index += 1
                pbar.update(1)
            else:
                break

        pbar.close()
        vc.release()

    print("Distribute train, val, and test...")
    images = natsorted([file for file in os.listdir(f"{dataset}/images/train") if
                        os.path.isfile(os.path.join(f"{dataset}/images/train", file))])
    labels = natsorted([file for file in os.listdir(f"{dataset}/labels/train") if
                        os.path.isfile(os.path.join(f"{dataset}/labels/train", file))])

    images, labels = shuffle(images, labels, random_state=42)

    for file in tqdm(images[int(len(images) * 0.8):int(len(images) * 0.87)]):
        shutil.move(f"{dataset}/images/train/{file}", f"{dataset}/images/val/{file}")

    for file in tqdm(labels[int(len(labels) * 0.8):int(len(labels) * 0.87)]):
        shutil.move(f"{dataset}/labels/train/{file}", f"{dataset}/labels/val/{file}")

    for file in tqdm(images[int(len(images) * 0.87):]):
        shutil.move(f"{dataset}/images/train/{file}", f"{dataset}/images/test/{file}")

    for file in tqdm(labels[int(len(labels) * 0.87):]):
        shutil.move(f"{dataset}/labels/train/{file}", f"{dataset}/labels/test/{file}")
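The conversion above turns each CVAT corner box (`xtl`, `ytl`, `xbr`, `ybr`, in pixels) into the normalized center/size format that Ultralytics label files expect. A small self-contained sketch of just that arithmetic (the function name `cvat_box_to_yolo` is illustrative, not from the scripts):

```python
def cvat_box_to_yolo(xtl, ytl, xbr, ybr, width, height):
    # Normalized center coordinates and box size, as written to the
    # per-frame .txt label files by tracking2ultralytics.py.
    x_center = (xtl + (xbr - xtl) / 2) / width
    y_center = (ytl + (ybr - ytl) / 2) / height
    w = (xbr - xtl) / width
    h = (ybr - ytl) / height
    return x_center, y_center, w, h

# A 200x100 px box at (100, 50) in a 1000x500 frame:
print(cvat_box_to_yolo(100, 50, 300, 150, 1000, 500))  # → (0.2, 0.2, 0.2, 0.2)
```

All four values lie in [0, 1], so the labels stay valid under any image resizing.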
BaboonLand/scripts/tracks2mini-scenes.py
ADDED
|
@@ -0,0 +1,183 @@
import numpy as np
import os
import sys
from lxml import etree
import cv2
from src.utils import get_scene
from collections import OrderedDict
from tqdm import tqdm


class Object:
    def __init__(self, object_id, centroid, color, attribute=None):
        self.object_id = object_id
        self.centroid = centroid
        self.color = color
        self.attribute = attribute

    def __getattr__(self, name):
        if self.attribute is None:
            return None

        if self.attribute.get(name) is None:
            return None
        else:
            return self.attribute[name]

    @staticmethod
    def object_factory(objects, centroids, colors, attributes):
        entities = []

        for object_id, centroid in objects.items():
            assigned = False

            for i, c in enumerate(centroids):
                if np.array_equal(centroid, c):
                    entities.append(Object(object_id, centroid, colors[object_id], attributes[i]))
                    assigned = True

            if not assigned:
                # Object disappeared for some frames.
                entities.append(Object(object_id, centroid, colors[object_id], None))

        return entities


def generate_timeline_image(folder, timeline, annotated_size):
    timeline_image = np.zeros(shape=(len(timeline["tracks"].keys()) * 100, annotated_size, 3), dtype=np.uint8)

    for i, (key, value) in enumerate(timeline["tracks"].items()):
        if timeline["colors"].get(key) is None:
            color = (127, 127, 127)
        else:
            color = timeline["colors"][key]

        binary = np.array(value, dtype=np.int32)
        binary[binary >= 0] = 1
        binary[binary < 0] = 0
        timeline_image[(i * 100):(i + 1) * 100, 0:annotated_size] = color
        mask = np.repeat(np.array(binary, dtype=np.uint8).reshape(1, -1), repeats=100, axis=0)
        image = timeline_image[(i * 100):(i + 1) * 100, 0:annotated_size]
        timeline_image[(i * 100):(i + 1) * 100, 0:annotated_size] = \
            cv2.bitwise_and(image, image, mask=mask)

    timeline_resized = cv2.resize(timeline_image, (1000, timeline_image.shape[0]))

    for i, (key, value) in enumerate(timeline["tracks"].items()):
        if timeline["colors"].get(key) is None:
            color = (127, 127, 127)
        else:
            color = timeline["colors"][key]

        cv2.putText(timeline_resized, str(key), (30, i * 100 + 50),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, tuple([j - 30 for j in color]), 2, cv2.LINE_AA)

    cv2.imwrite(f"{folder}/timeline.jpg", timeline_resized)


def extract(path):
    video_path = f"{path}/video.mp4"
    annotation_path = f"{path}/tracks.xml"

    root = etree.parse(annotation_path).getroot()

    annotated = dict()

    for track in root.iterfind("track"):
        track_id = int(track.attrib["id"])

        for box in track.iter("box"):
            frame_id = int(box.attrib["frame"])

            if annotated.get(frame_id) is None:
                annotated[frame_id] = OrderedDict()

            annotated[frame_id][track_id] = [int(float(box.attrib["xtl"])),
                                             int(float(box.attrib["ytl"])),
                                             int(float(box.attrib["xbr"])),
                                             int(float(box.attrib["ybr"]))]

    annotated_size = max(annotated.keys()) + 1
    scene_width, scene_height = 400, 300
    vc = cv2.VideoCapture(video_path)
    print(f"{video_path} | {annotation_path} -> {path}/mini-scenes")

    if not os.path.exists(f"{path}/mini-scenes"):
        os.makedirs(f"{path}/mini-scenes")

    index = 0
    tracked_indices = OrderedDict()
    timeline = OrderedDict()
    timeline["tracks"] = OrderedDict()
    timeline["tracks"]["main"] = [-1] * annotated_size
    timeline["colors"] = {}
    vc.set(cv2.CAP_PROP_POS_FRAMES, index)
    tracks_vw = dict()
    pbar = tqdm(total=annotated_size)

    while vc.isOpened():
        returned, frame = vc.read()

        if returned:
            if annotated.get(index) is not None:
                centroids = []
                attributes = []
                objects = OrderedDict()
                colors = OrderedDict()

                for object_id, box in annotated[index].items():
                    attribute = {}

                    centroid = (int(box[0] + (box[2] - box[0]) / 2), int(box[1] + (box[3] - box[1]) / 2))
                    centroids.append(centroid)
                    attribute["box"] = box
                    attributes.append(attribute)

                    objects[object_id] = centroid
                    colors_values = [(170, 105, 63), (58, 61, 189), (149, 91, 107), (69, 65, 127), (65, 174, 213),
                                     (87, 111, 118), (46, 122, 228), (201, 158, 190), (127, 234, 241), (109, 110, 0),
                                     (169, 140, 87), (85, 209, 246), (141, 75, 0), (44, 85, 242), (227, 222, 149),
                                     (194, 205, 237), (117, 49, 206), (71, 114, 90), (149, 176, 207)]
                    colors[object_id] = colors_values[object_id % len(colors_values)]
                    timeline["colors"][object_id] = colors[object_id]

                objects = Object.object_factory(objects, centroids, colors, attributes=attributes)

                for object in objects:
                    if tracks_vw.get(object.object_id) is None:
                        tracks_vw[object.object_id] = cv2.VideoWriter(f"{path}/mini-scenes/{object.object_id}.mp4",
                                                                      cv2.VideoWriter_fourcc("m", "p", "4", "v"),
                                                                      29.97, (scene_width, scene_height))
                        tracked_indices[object.object_id] = 0
                        timeline["tracks"][object.object_id] = [-1] * annotated_size

                for object in objects:
                    scene_frame = frame.copy()
                    scene_frame = get_scene(scene_frame, object, scene_width, scene_height)
                    tracks_vw[object.object_id].write(scene_frame)
                    timeline["tracks"][object.object_id][index] = tracked_indices[object.object_id]
                    tracked_indices[object.object_id] += 1

            timeline["tracks"]["main"][index] = index
            index += 1
            pbar.update(1)
        else:
            break

    for track_key in tracks_vw.keys():
        tracks_vw[track_key].release()

    generate_timeline_image(path, timeline, annotated_size)
    pbar.close()
    vc.release()


if __name__ == "__main__":
    if "scripts" in os.listdir(".."):
        os.chdir("..")

    sys.dont_write_bytecode = True

    for item in os.listdir("dataset"):
        if os.path.isdir(f"dataset/{item}"):
            extract(f"dataset/{item}")
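Each timeline row above stores, per video frame, the mini-scene frame index for that track, or -1 where the track is absent; `generate_timeline_image` then thresholds the row into a presence mask before rendering it as a colored bar. A minimal sketch of that thresholding step:

```python
import numpy as np

# One timeline row from tracks2mini-scenes.py: mini-scene frame index
# per video frame, -1 where the track has no box (illustrative values).
row = [-1, -1, 0, 1, 2, -1, 3]

binary = np.array(row, dtype=np.int32)
binary[binary >= 0] = 1  # frames where the track is present
binary[binary < 0] = 0   # frames where it is absent
print(binary.tolist())   # → [0, 0, 1, 1, 1, 0, 1]
```

Repeating this 0/1 row 100 times vertically and masking a solid-color stripe with it yields one track's band in `timeline.jpg`.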
BaboonLand/scripts/ultralytics2pyramid.py
ADDED
|
@@ -0,0 +1,172 @@
import os
import sys
import cv2
import ruamel.yaml as yaml
from tqdm import tqdm

if __name__ == "__main__":
    if "scripts" in os.listdir(".."):
        os.chdir("..")

    sys.dont_write_bytecode = True

    slices_pyramid = [2, 3, 4]
    ultralytics_path = "tracking/ultralytics"
    pyramid_path = "tracking/pyramid"
    draw_boxes = False

    if not os.path.exists(f"{pyramid_path}/images/train"):
        os.makedirs(f"{pyramid_path}/images/train")
    if not os.path.exists(f"{pyramid_path}/images/val"):
        os.makedirs(f"{pyramid_path}/images/val")
    if not os.path.exists(f"{pyramid_path}/images/test"):
        os.makedirs(f"{pyramid_path}/images/test")
    if not os.path.exists(f"{pyramid_path}/labels/train"):
        os.makedirs(f"{pyramid_path}/labels/train")
    if not os.path.exists(f"{pyramid_path}/labels/val"):
        os.makedirs(f"{pyramid_path}/labels/val")
    if not os.path.exists(f"{pyramid_path}/labels/test"):
        os.makedirs(f"{pyramid_path}/labels/test")

    dataset_file = f"""
path: .
train: images/train
val: images/val
test: images/test

nc: 1
names: ['Baboon']
"""

    with open(f"{pyramid_path}/pyramid.yaml", "w") as file:
        yaml.dump(yaml.load(dataset_file, Loader=yaml.RoundTripLoader, preserve_quotes=True),
                  file, Dumper=yaml.RoundTripDumper)

    images = []
    annotations = []

    for file in os.listdir(f"{ultralytics_path}/images/train"):
        images.append(f"{ultralytics_path}/images/train/{file}")
        annotations.append(f"{ultralytics_path}/labels/train/{os.path.splitext(file)[0]}.txt")

    for file in os.listdir(f"{ultralytics_path}/images/val"):
        images.append(f"{ultralytics_path}/images/val/{file}")
        annotations.append(f"{ultralytics_path}/labels/val/{os.path.splitext(file)[0]}.txt")

    for file in os.listdir(f"{ultralytics_path}/images/test"):
        images.append(f"{ultralytics_path}/images/test/{file}")
        annotations.append(f"{ultralytics_path}/labels/test/{os.path.splitext(file)[0]}.txt")

    for slices in slices_pyramid:
        for image_path, annotation_path in tqdm(zip(images, annotations), total=len(images)):
            name = os.path.splitext(image_path.split("/")[-1])[0]
            image = cv2.imread(image_path)
            image_height, image_width = image.shape[:2]

            with open(annotation_path, "r") as file:
                bounding_boxes_ultralytics = file.readlines()
            bounding_boxes = []

            for bounding_box_ultralytics in bounding_boxes_ultralytics:
                label, x, y, width, height = bounding_box_ultralytics.strip().split(" ")
                label, x, y, width, height = int(label), float(x), float(y), float(width), float(height)
                x *= image_width
                y *= image_height
                width *= image_width
                height *= image_height
                bounding_boxes.append([label,
                                       int(x - width // 2),
                                       int(y - height // 2),
                                       int(x - width // 2 + width),
                                       int(y - height // 2 + height)])

            slice_height = image_height // slices
            slice_width = image_width // slices

            for i in range(slices):
                for j in range(slices):
                    index = (i * slices) + j + 1
                    distribution_folder = image_path.split("/")[-2]
                    y_start = i * slice_height
                    y_end = (i + 1) * slice_height
                    x_start = j * slice_width
                    x_end = (j + 1) * slice_width
                    sliced = image[y_start:y_end, x_start:x_end]
                    tile = [x_start, y_start, x_end, y_end]
                    tile_boxes = []

                    for bounding_box in bounding_boxes:
                        points = [False, False, False, False]

                        if tile[0] <= bounding_box[1] and tile[2] >= bounding_box[1]:
                            if tile[1] <= bounding_box[2] and tile[3] >= bounding_box[2]:
                                points[0] = True
                            if tile[1] <= bounding_box[4] and tile[3] >= bounding_box[4]:
                                points[2] = True

                        if tile[0] <= bounding_box[3] and tile[2] >= bounding_box[3]:
                            if tile[1] <= bounding_box[2] and tile[3] >= bounding_box[2]:
                                points[1] = True
                            if tile[1] <= bounding_box[4] and tile[3] >= bounding_box[4]:
                                points[3] = True

                        if sum(points) == 0:
                            continue
                        elif sum(points) == 4:
                            tile_boxes.append([bounding_box[0], bounding_box[1] - tile[0], bounding_box[2] - tile[1],
                                               bounding_box[3] - tile[0], bounding_box[4] - tile[1]])
                        elif points[0]:
                            if points[1]:
                                tile_boxes.append(
                                    [bounding_box[0], bounding_box[1] - tile[0], bounding_box[2] - tile[1],
                                     bounding_box[3] - tile[0], slice_height - 1])
                            elif points[2]:
                                tile_boxes.append(
                                    [bounding_box[0], bounding_box[1] - tile[0], bounding_box[2] - tile[1],
                                     slice_width - 1, bounding_box[4] - tile[1]])
                            else:
                                tile_boxes.append(
                                    [bounding_box[0], bounding_box[1] - tile[0], bounding_box[2] - tile[1],
                                     slice_width - 1, slice_height - 1])
                        elif points[1]:
                            if points[3]:
                                tile_boxes.append([bounding_box[0], 0, bounding_box[2] - tile[1],
                                                   bounding_box[3] - tile[0], bounding_box[4] - tile[1]])
                            else:
                                tile_boxes.append([bounding_box[0], 0, bounding_box[2] - tile[1],
                                                   bounding_box[3] - tile[0], slice_height - 1])
                        elif points[2]:
                            if points[3]:
                                tile_boxes.append([bounding_box[0], bounding_box[1] - tile[0], 0,
                                                   bounding_box[3] - tile[0], bounding_box[4] - tile[1]])
                            else:
                                tile_boxes.append([bounding_box[0], bounding_box[1] - tile[0], 0,
                                                   slice_width - 1, bounding_box[4] - tile[1]])
                        else:
                            tile_boxes.append(
                                [bounding_box[0], 0, 0, bounding_box[3] - tile[0], bounding_box[4] - tile[1]])

                    if draw_boxes:
                        for tile_box in tile_boxes:
                            cv2.rectangle(sliced, (tile_box[1], tile_box[2]), (tile_box[3], tile_box[4]),
                                          (255, 255, 255))

                    cv2.imwrite(f"{pyramid_path}/images/{distribution_folder}/{name}.slice-{slices}.index-{index}.png",
                                sliced)

                    if len(tile_boxes) > 0:
                        for tile_box in tile_boxes:
                            x_center = (tile_box[1] + (tile_box[3] - tile_box[1]) / 2) / slice_width
                            y_center = (tile_box[2] + (tile_box[4] - tile_box[2]) / 2) / slice_height
                            w = (tile_box[3] - tile_box[1]) / slice_width
                            h = (tile_box[4] - tile_box[2]) / slice_height

                            with open(
                                    f"{pyramid_path}/labels/{distribution_folder}/{name}.slice-{slices}.index-{index}.txt",
                                    "a") as file:
                                file.write(f"{tile_box[0]} {x_center:.6f} {y_center:.6f} {w:.6f} {h:.6f}\n")
                    else:
                        with open(
                                f"{pyramid_path}/labels/{distribution_folder}/{name}.slice-{slices}.index-{index}.txt",
                                "w") as file:
                            pass
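The tile loop above enumerates box corners inside each tile and clamps the box accordingly. The same clamping can be sketched more compactly as a rectangle intersection; this is a simplified illustration (the helper name `clip_box_to_tile` is not from the scripts, and unlike the corner-based logic it also handles a box that spans a tile with no corner inside it):

```python
def clip_box_to_tile(box, tile):
    # box and tile are absolute (x1, y1, x2, y2) rectangles in pixels.
    # Returns the overlap shifted into tile coordinates, or None.
    x1 = max(box[0], tile[0]) - tile[0]
    y1 = max(box[1], tile[1]) - tile[1]
    x2 = min(box[2], tile[2]) - tile[0]
    y2 = min(box[3], tile[3]) - tile[1]

    if x2 <= x1 or y2 <= y1:
        return None  # no overlap with this tile

    return [x1, y1, x2, y2]

# A box hanging over the right/bottom edge of a 200x150 tile at the origin:
print(clip_box_to_tile((90, 40, 260, 180), (0, 0, 200, 150)))  # → [90, 40, 200, 150]
```

The clipped coordinates can then be normalized by the tile size to produce the sliced Ultralytics labels, exactly as the final write loop does.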