## Load in Python
```python
import pandas as pd
from pathlib import Path
import cv2

# load the annotations table
csv_path = "annotations.csv"
df = pd.read_csv(csv_path)

# view the first rows
print(df.head())

# access a clip
row = df.iloc[0]
video_file = Path(row.video_path)

# basic frame iterator
cap = cv2.VideoCapture(str(video_file))
frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame)
cap.release()

print("frames:", len(frames))
print("object_class:", row.object_class)
print("container:", row.container_type)
print("outcome:", row.outcome)
```
## Suggested analyses

- Count clips by `container_type`
- Filter outcomes to find failure clusters
- Group by persistence to test off-frame behavior
- Sample occlusion ranges for tests
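The first three analyses are plain pandas aggregations over the annotations table. A minimal sketch, using a toy DataFrame in place of `annotations.csv` and assuming a boolean `persistence` column and `occlusion_start`/`occlusion_end` frame indices (hypothetical names; check the actual schema):

```python
import pandas as pd

# toy stand-in for pd.read_csv("annotations.csv"); column names are assumed
df = pd.DataFrame({
    "container_type": ["bin", "bin", "tray", "tray"],
    "outcome": ["success", "mis_grip", "success", "mis_grip"],
    "persistence": [True, False, True, False],
    "occlusion_start": [10, 12, 30, 8],
    "occlusion_end": [20, 25, 45, 15],
})

# count clips by container_type
counts = df["container_type"].value_counts()
print(counts)

# filter outcomes to find failure clusters
failures = df[df["outcome"] != "success"]
print(failures[["container_type", "occlusion_start", "occlusion_end"]])

# group by persistence and tabulate outcomes
by_persist = df.groupby("persistence")["outcome"].value_counts()
print(by_persist)
```

Swapping the toy frame for the real `pd.read_csv(csv_path)` result keeps the rest unchanged, since all three steps only touch column names.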
## Open questions

- Does grip outcome correlate with container type?
- Do mis-grips cluster near occlusion?
- Does persistence help reduce false resets?
- Can baseline models handle this without spatial fields?
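The first question can be probed with a contingency table over the annotations. A minimal sketch with toy data (real analysis would use the loaded `df` and, for significance, a chi-squared test):

```python
import pandas as pd

# toy data standing in for the real annotations table
df = pd.DataFrame({
    "container_type": ["bin"] * 4 + ["tray"] * 4,
    "outcome": ["success", "success", "mis_grip", "success",
                "mis_grip", "mis_grip", "success", "mis_grip"],
})

# per-container outcome rates: rows sum to 1.0
table = pd.crosstab(df["container_type"], df["outcome"], normalize="index")
print(table)
```

A large gap between rows (here, mis-grip rates of 0.25 for `bin` vs. 0.75 for `tray`) suggests a correlation worth testing formally, e.g. with `scipy.stats.chi2_contingency` on the unnormalized counts.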