orgn3ai committed
Commit 2e96fed · verified · 1 parent: f8254dc

Upload folder using huggingface_hub

README.md ADDED
---
license: cc-by-nc-nd-4.0
task_categories:
- video-classification
language:
- en
tags:
- egocentric
- embodied-ai
- robotics
- real-world
- computer-vision
- dataset
- sample-dataset
size_categories:
- n<1K
---

# MEAT-CUT-sample

## Overview

This dataset provides a high-quality, multi-view synchronized capture of expert procedural tasks in a professional butchery environment. It focuses on the complex manipulation of **non-rigid and deformable objects** (meat, sausage stuffing, and casings), a significant challenge in current robotics and computer vision research.

<video controls width="100%">
  <source src="medias/mosaic.mp4" type="video/mp4">
  Your browser does not support the video tag.
</video>

## Key Technical Features

* **Synchronized Multi-View FPV & Third Person:** Includes time-aligned egocentric (first-person view) and multiple third-person perspectives.
* **Expert Human Narration:** Each task is accompanied by a human voice-over explaining the **intent, tactile feedback, and professional heuristics** behind every gesture.
* **Non-Rigid Physics:** Captures complex material behaviors such as plasticity, elasticity, and shear during the sausage-making process.
* **Multimodal Grounding:** Provides a direct link between visual actions and expert verbal instructions, ideal for training **Vision-Language Models (VLMs)**.
* **High-Quality Synchronization:** All views are precisely time-aligned to support cross-modal learning.

## Use Cases for Research

* **Embodied AI & World Models:** Training agents to predict the physical consequences of interacting with deformable organic matter.
* **Procedural Task Learning:** Modeling long-form sequential actions where step order and expert intent are critical.
* **Tactile-Visual Inference:** Learning to estimate force and material resistance from visual observation and expert narration.

## Full Dataset Specifications

This Hugging Face repository contains a **5-minute preview sample**. The full professional corpus includes:
* **Total Duration:** 50+ hours of continuous expert operations.
* **Tasks:** Full-cycle sausage production, precise meat cutting, and tool maintenance.
* **Data Quality:** 4K resolution, studio-grade audio, and temporal action annotations.

## Dataset Statistics

This section provides detailed statistics extracted from `dataset_metadata.json`:

### Overall Statistics

- **Dataset Name**: MEAT-CUT-sample
- **Batch ID**: 02
- **Total Clips**: 214
- **Number of Sequences**: 2
- **Number of Streams**: 2
- **Stream Types**: ego, third

### Duration Statistics

- **Total Duration**: 6.42 minutes (385.20 seconds)
- **Average Clip Duration**: 1.80 seconds
- **Min Clip Duration**: 1.80 seconds
- **Max Clip Duration**: 1.80 seconds

### Clip Configuration

- **Base Clip Duration**: 1.00 seconds
- **Clip Duration with Padding**: 1.80 seconds
- **Padding**: 400 ms (per side: 1.00 s + 2 × 0.40 s = 1.80 s)

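As a quick sanity check on the figures above (assuming, as the numbers suggest, that the 400 ms padding is applied both before and after the 1-second base clip):

```python
base_ms = 1000     # base clip duration
padding_ms = 400   # padding, applied before and after the base clip (inferred)
total_clips = 214

clip_ms = base_ms + 2 * padding_ms   # 1000 + 2 * 400 = 1800 ms per clip
total_sec = total_clips * clip_ms / 1000
print(clip_ms, total_sec)  # 1800 385.2
```

This reproduces both the 1.80 s padded clip duration and the 385.20 s total duration reported in the statistics.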
### Statistics by Stream Type

#### Ego

- **Number of clips**: 107
- **Total duration**: 3.21 minutes (192.60 seconds)
- **Average clip duration**: 1.80 seconds
- **Min clip duration**: 1.80 seconds
- **Max clip duration**: 1.80 seconds

#### Third

- **Number of clips**: 107
- **Total duration**: 3.21 minutes (192.60 seconds)
- **Average clip duration**: 1.80 seconds
- **Min clip duration**: 1.80 seconds
- **Max clip duration**: 1.80 seconds

> **Note**: Complete metadata is available in `dataset_metadata.json` in the dataset root directory.
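The per-stream numbers can be cross-checked programmatically. A minimal sketch of reading those statistics from `dataset_metadata.json` (the JSON is inlined here for illustration; in practice, open the file from the dataset root):

```python
import json

# Inlined excerpt of dataset_metadata.json for illustration;
# in practice: meta = json.load(open("dataset_metadata.json"))
metadata_json = """
{
  "dataset_name": "MEAT-CUT-sample",
  "total_clips": 214,
  "clip_duration_ms": {"base": 1000, "with_padding": 1800},
  "duration_sec": {"total": 385.2, "average": 1.8},
  "flux_stats": {
    "ego":   {"num_clips": 107, "duration_sec": {"total": 192.6}},
    "third": {"num_clips": 107, "duration_sec": {"total": 192.6}}
  }
}
"""
meta = json.loads(metadata_json)

# Cross-check the headline numbers against the per-stream breakdown
total_clips = sum(s["num_clips"] for s in meta["flux_stats"].values())
total_sec = sum(s["duration_sec"]["total"] for s in meta["flux_stats"].values())
print(total_clips, round(total_sec, 1))  # 214 385.2
```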

## Dataset Structure

The dataset uses a **unified structure** where each example contains all synchronized video streams:

```
dataset/
├── data-*.arrow            # Dataset files (Arrow format)
├── dataset_info.json       # Dataset metadata
├── dataset_metadata.json   # Complete dataset statistics
├── state.json              # Dataset state
├── README.md               # This file
├── medias/                 # Media files (mosaics, previews, etc.)
│   └── mosaic.mp4          # Mosaic preview video
└── videos/                 # All video clips
    ├── ego/                # Ego video clips
    └── third/              # Third video clips
```

### Dataset Format

The dataset contains **214 synchronized scenes** in a single `train` split. Each example includes:

- **Synchronized video columns**: One column per flux type (here, `ego_video` and `third_video`)
- **Scene metadata**: `scene_id`, `sync_id`, `duration_sec`, `fps`
- **Rich metadata dictionary**: Task, environment, audio info, and synchronization details

All videos in a single example are synchronized and correspond to the same moment in time.

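For reference, a single example row has roughly the following shape. The field names follow the features documented in this README, but every value below is hypothetical, not an actual record:

```python
# Illustrative shape of one example; field names follow this README's
# feature list, but all values are hypothetical placeholders.
example = {
    "scene_id": "01_0000",
    "sync_id": 0,
    "duration_sec": 1.8,
    "fps": 30.0,
    "batch_id": "02",
    "dataset_name": "MEAT-CUT-sample",
    "ego_video": {"path": "videos/ego/0000.mp4"},     # Video feature, decode=False
    "third_video": {"path": "videos/third/0000.mp4"},
    "metadata": {
        "task": "sausage_production",    # hypothetical value
        "environment": "butchery",       # hypothetical value
        "has_audio": True,
        "num_fluxes": 2,
        "flux_names": ["ego", "third"],
        "sequence_ids": ["01_ego", "01_third"],
        "sync_offsets_ms": [
            {"flux_name": "ego", "offset_ms": 0},
            {"flux_name": "third", "offset_ms": 0},
        ],
    },
}
print(example["scene_id"], example["metadata"]["flux_names"])
```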

## Usage

### Load Dataset

```python
from datasets import load_dataset

# Load from the Hugging Face Hub
dataset = load_dataset('orgn3ai/MEAT-CUT-sample')

# The dataset has a single 'train' split
print(f"Available splits: {list(dataset.keys())}")  # ['train']

# Or load from a local directory
# from datasets import load_from_disk
# dataset = load_from_disk('outputs/02/dataset')

# Access synchronized scenes from the train split
example = dataset['train'][0]  # First synchronized scene

# Access all synchronized videos
ego_video = example['ego_video']      # Ego-centric view
third_video = example['third_video']  # Third-person view

# Access metadata
print(f"Scene ID: {example['scene_id']}")
print(f"Duration: {example['duration_sec']}s")
print(f"FPS: {example['fps']}")
print(f"Metadata: {example['metadata']}")

# Get dataset info
print(f"Number of examples in train split: {len(dataset['train'])}")
```

### Access Synchronized Videos

Each example contains all synchronized video streams. Access them directly:

```python
import cv2
from pathlib import Path

# Get a synchronized scene from the 'train' split
example = dataset['train'][0]

# Access video objects (the Video feature stores a path, either as a dict
# {'path': 'videos/ego/0000.mp4', 'bytes': ...} or as a plain string)
ego_video_obj = example.get('ego_video')
third_video_obj = example.get('third_video')

def get_video_path(video_obj):
    """Extract the relative video path from a Video object."""
    if isinstance(video_obj, dict) and 'path' in video_obj:
        return video_obj['path']
    elif isinstance(video_obj, str):
        return video_obj
    else:
        return getattr(video_obj, 'path', str(video_obj))

ego_video_path = get_video_path(ego_video_obj)
third_video_path = get_video_path(third_video_obj)

# Resolve full paths from the dataset cache (when loading from the Hub)
cache_dir = Path(dataset['train'].cache_files[0]['filename']).parent.parent
ego_video_full_path = cache_dir / ego_video_path
third_video_full_path = cache_dir / third_video_path

# Process all synchronized videos together
for example in dataset['train']:
    scene_id = example['scene_id']
    sync_id = example['sync_id']
    metadata = example['metadata']

    print(f"Scene {scene_id}: {metadata['num_fluxes']} synchronized fluxes")
    print(f"Flux names: {metadata['flux_names']}")

    # Resolve video paths for this example
    ego_video_full = cache_dir / get_video_path(example.get('ego_video'))
    third_video_full = cache_dir / get_video_path(example.get('third_video'))

    # Process synchronized videos...
```

### Filter and Process

```python
# Filter by sync_id
scene = dataset['train'].filter(lambda x: x['sync_id'] == 0)[0]

# Filter by metadata
scenes_with_audio = dataset['train'].filter(lambda x: x['metadata']['has_audio'])

# Access metadata fields
for example in dataset['train']:
    task = example['metadata']['task']
    environment = example['metadata']['environment']
    has_audio = example['metadata']['has_audio']
    flux_names = example['metadata']['flux_names']
    sync_offsets = example['metadata']['sync_offsets_ms']
```

### Dataset Features

Each example contains:

- **`scene_id`**: Unique scene identifier (e.g., "01_0000")
- **`sync_id`**: Synchronization ID linking synchronized clips
- **`duration_sec`**: Duration of the synchronized clip in seconds
- **`fps`**: Frames per second (default: 30.0)
- **`batch_id`**: Batch identifier
- **`dataset_name`**: Dataset name from config
- **`ego_video`**: Video object for the ego-centric view (Hugging Face `Video` type with `decode=False`; stores the path)
- **`third_video`**: Video object for the third-person view (Hugging Face `Video` type with `decode=False`; stores the path)
- **`metadata`**: Dictionary containing:
  - `task`: Task identifier
  - `environment`: Environment description
  - `has_audio`: Whether videos contain audio
  - `num_fluxes`: Number of synchronized flux types
  - `flux_names`: List of flux names present
  - `sequence_ids`: List of original sequence IDs
  - `sync_offsets_ms`: List of synchronization offsets (per flux, in milliseconds)

### Access Video Files

```python
import cv2
from pathlib import Path

# Get an example
example = dataset['train'][0]

# Access video paths (stored as relative paths)
ego_video_path = example.get('ego_video')  # e.g., 'videos/ego/0000.mp4'

# Resolve the full path from the dataset cache (when loading from the Hub)
cache_dir = Path(dataset['train'].cache_files[0]['filename']).parent.parent
ego_video_full_path = cache_dir / ego_video_path

# Load the video with OpenCV or another library
cap = cv2.VideoCapture(str(ego_video_full_path))

# Or use a helper function
def get_video_path(dataset, example, flux_name):
    """Get the full path to a video file from a dataset example."""
    cache_dir = Path(dataset['train'].cache_files[0]['filename']).parent.parent
    return cache_dir / example[f'{flux_name}_video']

# Usage
ego_path = get_video_path(dataset, example, 'ego')
cap = cv2.VideoCapture(str(ego_path))
```

## Additional Notes

**Important**: This dataset uses a unified structure where each example contains all synchronized video streams in separate columns. All examples are in the `train` split.

**Synchronization**: Videos in the same example (same index in the `train` split) are synchronized: they share the same `sync_id` and correspond to the same moment in time.

**Video Paths**: Video paths are stored using Hugging Face's `Video` type with `decode=False`. To access the actual file path, extract the `path` attribute from the Video object (see the examples above). Additional per-clip fields recorded in the metadata:

- `clip_index`: Clip index within the flux folder
- `duration_sec`: Clip duration in seconds
- `start_time_sec`: Start time in the source video
- `batch_id`, `dataset_name`, `source_video`, `sync_offset_ms`: Additional metadata

## Commercial Licensing & Contact

The complete dataset is available for commercial licensing and large-scale industrial or academic research. It captures "tacit knowledge" that is otherwise unavailable in public video repositories.

**To discuss full access or custom data collection, please contact: lain@gmail.com**

## License

This dataset is licensed under **cc-by-nc-nd-4.0**.
dataset_dict.json ADDED
{"splits": ["train"]}
dataset_metadata.json ADDED
{
  "dataset_name": "MEAT-CUT-sample",
  "batch_id": "02",
  "total_clips": 214,
  "num_sequences": 2,
  "num_streams": 2,
  "stream_types": ["ego", "third"],
  "padding_ms": 400,
  "default_duration_ms": 1000,
  "clip_duration_ms": {
    "base": 1000,
    "with_padding": 1800
  },
  "duration_ms": {"average": 1800.0, "total": 385200, "min": 1800, "max": 1800},
  "duration_sec": {"average": 1.8, "total": 385.2, "min": 1.8, "max": 1.8},
  "flux_stats": {
    "ego": {
      "num_clips": 107,
      "duration_ms": {"average": 1800.0, "total": 192600, "min": 1800, "max": 1800},
      "duration_sec": {"average": 1.8, "total": 192.6, "min": 1.8, "max": 1.8}
    },
    "third": {
      "num_clips": 107,
      "duration_ms": {"average": 1800.0, "total": 192600, "min": 1800, "max": 1800},
      "duration_sec": {"average": 1.8, "total": 192.6, "min": 1.8, "max": 1.8}
    }
  },
  "sequences": [
    {"id": "01_ego", "flux_name": "ego", "num_clips": 107, "duration_ms": 1000},
    {"id": "01_third", "flux_name": "third", "num_clips": 107, "duration_ms": 1000}
  ]
}
The files below were added as Git LFS pointer files (each begins `version https://git-lfs.github.com/spec/v1`); the object hash and size are listed per file.

medias/mosaic.mp4 ADDED · LFS pointer · oid sha256:635071593d59c4be4b5708f002c2c3f24489d60880ea4fc90177702f36f436c0 · size 1887972
train/data-00000-of-00003.arrow ADDED · LFS pointer · oid sha256:893d502092d73c62819432f51cf06b3145b04821c252a3781bbcd66b6d8effad · size 18296
train/data-00001-of-00003.arrow ADDED · LFS pointer · oid sha256:d0a4469151ff9269ff9848d43af5da8b762896ab456fae2dc018f73319f56085 · size 18272
train/data-00002-of-00003.arrow ADDED · LFS pointer · oid sha256:f60d413538fc435a2d76faf061351334d0244156097aa289a830400f3053b767 · size 17736
train/dataset_info.json ADDED
{
  "citation": "",
  "description": "",
  "features": {
    "scene_id": {"dtype": "string", "_type": "Value"},
    "sync_id": {"dtype": "int32", "_type": "Value"},
    "duration_sec": {"dtype": "float32", "_type": "Value"},
    "fps": {"dtype": "float32", "_type": "Value"},
    "batch_id": {"dtype": "string", "_type": "Value"},
    "dataset_name": {"dtype": "string", "_type": "Value"},
    "ego_video": {"decode": false, "_type": "Video"},
    "third_video": {"decode": false, "_type": "Video"},
    "metadata": {
      "task": {"dtype": "string", "_type": "Value"},
      "environment": {"dtype": "string", "_type": "Value"},
      "has_audio": {"dtype": "bool", "_type": "Value"},
      "num_fluxes": {"dtype": "int32", "_type": "Value"},
      "flux_names": {"feature": {"dtype": "string", "_type": "Value"}, "_type": "List"},
      "sequence_ids": {"feature": {"dtype": "string", "_type": "Value"}, "_type": "List"},
      "sync_offsets_ms": {
        "feature": {
          "flux_name": {"dtype": "string", "_type": "Value"},
          "offset_ms": {"dtype": "int32", "_type": "Value"}
        },
        "_type": "List"
      }
    }
  },
  "homepage": "",
  "license": ""
}
train/state.json ADDED
{
  "_data_files": [
    {"filename": "data-00000-of-00003.arrow"},
    {"filename": "data-00001-of-00003.arrow"},
    {"filename": "data-00002-of-00003.arrow"}
  ],
  "_fingerprint": "8e968cab3177fe01",
  "_format_columns": null,
  "_format_kwargs": {},
  "_format_type": null,
  "_output_all_columns": false,
  "_split": null
}
videos/ego/0000.mp4 ADDED · LFS pointer · oid sha256:71a1f74758131e0a26a8aaf9cbc53d45b4fde7000a35cba9760ffadfb45be67c · size 5783059
videos/ego/0001.mp4 ADDED · LFS pointer · oid sha256:95e928267a200675d1af9f0ed1ec31af852b96774856da975dd8f235bc80ec18 · size 3792297
videos/ego/0002.mp4 ADDED · LFS pointer · oid sha256:b602f05b01e8e0782472664d51651b876de890e8029289452c538e14dde6a47c · size 5322416
videos/ego/0003.mp4 ADDED · LFS pointer · oid sha256:c08956c0b261b9dae1aba02a32b63d0d61c72ec1025bd284609b9d0dfb900da1 · size 7109854
videos/ego/0004.mp4 ADDED · LFS pointer · oid sha256:ec56d42675b6b3b0d3e056f0bdb94bd0ea7fc4546c86a3236292659a3008187d · size 2747206
videos/ego/0005.mp4 ADDED · LFS pointer · oid sha256:82010491f959667383de3853e04fee20e20a4363ab36c9d889eb9d453f970584 · size 4043867
videos/ego/0006.mp4 ADDED · LFS pointer · oid sha256:94efa6fc61ee8722f86158b759344698879c944ed081418c9016bed20cb92e9a · size 6191621
videos/ego/0007.mp4 ADDED · LFS pointer · oid sha256:e1f5feb99140e9a08bd609ecb43decdbb22f5c62da5ca330c51b96e16b9866f0 · size 3304110
videos/ego/0008.mp4 ADDED · LFS pointer · oid sha256:4bf2d39315ede68a8f2324eeb4e856380b4bf43170eaf2415fb6ee9773518afa · size 4696753
videos/ego/0009.mp4 ADDED · LFS pointer · oid sha256:6a2abf07fa7c9002cfc16c5ca4444633c33ffdb917a97a574eb1ed02acd7a3a1 · size 6278517
videos/ego/0010.mp4 ADDED · LFS pointer · oid sha256:342a5cd4a9f048b3df8358a70b52366c08d8e84280cb450b5fa31799861564fd · size 6099057
videos/ego/0011.mp4 ADDED · LFS pointer · oid sha256:a87ed6c327c1dcadd42a2b02c241d2f54b2da16e5396b48c47baff8f81d417c0 · size 7491039
videos/ego/0012.mp4 ADDED · LFS pointer · oid sha256:df8c76cc12e4f1cdb581f57ddebd83cbafb968866e9770c54928c1ab7cd235ab · size 3066803
videos/ego/0013.mp4 ADDED · LFS pointer · oid sha256:65d892300b310fc479b574344933a5a301adbe7726e78987c5efd94b6b6548b8 · size 5421232
videos/ego/0014.mp4 ADDED · LFS pointer · oid sha256:13630ed4c0525fea0069803adf95ca2022e141b701dacc7f3dbafb3d8708715b · size 6871835
videos/ego/0015.mp4 ADDED · LFS pointer · oid sha256:074d0714bdd0e43881758d921d49d28f6cd37ce278b5effda11ce254db950129 · size 2742536
videos/ego/0016.mp4 ADDED · LFS pointer · oid sha256:e66cfdc570dfe33b2b87092c2d13abdd78f7ed4253d47a856d1c7bbd7997fd1e · size 4747091
videos/ego/0017.mp4 ADDED · LFS pointer · oid sha256:a66f68299618c35d9be3f67e71912064fb2a1392e1a9424ac6e6e4e41b5f0d6c · size 6168318
videos/ego/0018.mp4 ADDED · LFS pointer · oid sha256:5a380963829f5e36c8cb738dd49dca64dc6d4415aae7230d05a2d04e8c44deae · size 2207179
videos/ego/0019.mp4 ADDED · LFS pointer · oid sha256:ced3c9f1f734528e67e6795c792dc482c3ef9d5766dd78386462947ff27f1fb1 · size 4373858
videos/ego/0020.mp4 ADDED · LFS pointer · oid sha256:d6cc7741b5501ac42cc4c65076eb8b520084c9c98e7cbba7d496f666e1f4412b · size 6383013
videos/ego/0021.mp4 ADDED · LFS pointer · oid sha256:fb4ba9dec08b196c0d31ebffc8e6641924dd8809237b27aa02e37570a5da1b0d · size 2537749
videos/ego/0022.mp4 ADDED · LFS pointer · oid sha256:bca2dd516a1fc8f7d5c2936fca88d0cb29a32fb4432884fce65a61c3de8607f8 · size 3770489
videos/ego/0023.mp4 ADDED · LFS pointer · oid sha256:d8249395a25f3420b2b3619efec897b2493fab0272a6d6c17c82952ef9eb9f80 · size 2895689
videos/ego/0024.mp4 ADDED · LFS pointer · oid sha256:0733e48eb71a7d94c019713f97a165237bdd9c96dd1075f8c5e86f80e4cfb1e6 · size 5896189
videos/ego/0025.mp4 ADDED · LFS pointer · oid sha256:da7c3681e9799406fe0888708450248d32e23ac69ddde1b0f021bc8a2dbfb7fa · size 2456525
videos/ego/0026.mp4 ADDED · LFS pointer · oid sha256:00bb5e70db48c3634f470e657111123258e2b85155e0c8991d0ac98abb4fbe33 · size 3408393
videos/ego/0027.mp4 ADDED · LFS pointer · oid sha256:958e93939ed9140dd30b42af779e56ada463fe006b41caba64bfb8fefff5abc3 · size 4261978
videos/ego/0028.mp4 ADDED · LFS pointer · oid sha256:ad81d9025521470ab395e36741c70728da713faa19451154b9db5e9ffb2a8209 · size 5532126
videos/ego/0029.mp4 ADDED · LFS pointer · oid sha256:70d9d033a8cddce2db5d97ddc928ea899f8ec81dd61cb00126033a6dc80898a7 · size 4983449
videos/ego/0030.mp4 ADDED · LFS pointer · oid sha256:be0f7a3d87cbd5dba63d46da276dee0b6d3c15b05230a4281d780f0c76f598af · size 3388144
videos/ego/0031.mp4 ADDED · LFS pointer · oid sha256:97ebc89a440838b44019323ffcc034d3b093c37adc4f332ccc6d9e35e8e6bc80 · size 4183953
videos/ego/0032.mp4 ADDED · LFS pointer · oid sha256:5260a827967823d4a698783f7df73eee90f66ca3b78be59f2920bafe0c8402ed · size 5215170
videos/ego/0033.mp4 ADDED · LFS pointer · oid sha256:6d8fb0dbc1c5eb5dd0e0f89b8c7f8b2c2ff15c0fc68cab8ee35a142aeb637de6 · size 3137757
videos/ego/0034.mp4 ADDED · LFS pointer · oid sha256:14f12e55f799c13db466e743fff97c4f273ee62f51f36f69146a89c83f7ff871 · size 3981681
videos/ego/0035.mp4 ADDED · LFS pointer · oid sha256:d61d49c1ab77fbf5ac2645ccb97b54e7ca080b162117330ef676ddd4472b5fc1 · size 8841921
videos/ego/0036.mp4 ADDED · LFS pointer · oid sha256:1c0a7618fa68ec174dc79237fe073f202a73b5888c6ded5d99da8f12724f531e · size 4194789
videos/ego/0037.mp4 ADDED · LFS pointer · oid sha256:41db31cd9b6a1d4821e62e2319fa22eadb0ec69b14774d83dec60880a4a59521 · size 6331357
videos/ego/0038.mp4 ADDED · LFS pointer · oid sha256:287445c86425117f309ca995cdb518a6ee731f1e386639dd2d121690db75e336 · size 2350147
videos/ego/0039.mp4 ADDED · LFS pointer · oid sha256:6481fd1f5635640c5dc1c1167723dbb35a83262b07ba0f30ebe666d168d409ee · size 5134594
videos/ego/0040.mp4 ADDED · LFS pointer · oid sha256:ea4fb1ab8ff2065a2632f8abe6d59ae67c151d94291a22eb871022db80ce27fd · size 7261490