---
license: apache-2.0
language:
- en
tags:
- bridge_data
---

This dataset contains [`BridgeData V1` and `BridgeData V2`](https://rail-berkeley.github.io/bridgedata/), originally downloaded from [this archive](https://rail.eecs.berkeley.edu/datasets/bridge_release/data/demos_8_17.zip) and preprocessed with the following scripts:
- [Preprocess V1](https://github.com/Kiteretsu77/This_and_That_VDM/blob/main/curation_pipeline/match_dataset_v1.py)
- [Preprocess V2](https://github.com/Kiteretsu77/This_and_That_VDM/blob/main/curation_pipeline/match_dataset_v2.py)
- [Train/test split](https://github.com/Kiteretsu77/This_and_That_VDM/blob/main/scripts/train_test_split.py)

After preprocessing, the dataset has the following structure:

| Folder name | Trajectories | Size |
| :--------------------: | :----------: | :----: |
| bridge_data_v1 (train) | 11007 | 30 GB |
| bridge_data_v2 (train) | 16527 | 62 GB |
| bridge_data_v1_test | 1222 | 3.3 GB |
| bridge_data_v2_test | 1836 | 6.9 GB |
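As a minimal sketch of working with the layout above (assuming each split folder contains one subdirectory per trajectory — the exact on-disk layout is an assumption, not confirmed by this card), the per-split trajectory counts can be sanity-checked like this:

```python
import os

# Split folders as listed in the table above; the flat
# one-subdirectory-per-trajectory layout is an assumption.
SPLITS = [
    "bridge_data_v1",
    "bridge_data_v2",
    "bridge_data_v1_test",
    "bridge_data_v2_test",
]


def count_trajectories(root):
    """Count immediate subdirectories of a split folder,
    treating each one as a single trajectory."""
    if not os.path.isdir(root):
        return 0
    return sum(
        os.path.isdir(os.path.join(root, name))
        for name in os.listdir(root)
    )


if __name__ == "__main__":
    for split in SPLITS:
        print(f"{split}: {count_trajectories(split)} trajectories")
```

Comparing the printed counts against the table (e.g. 11007 for `bridge_data_v1`) is a quick way to verify a download completed.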