---
license: cc-by-nc-sa-4.0
task_categories:
- image-classification
size_categories:
- 10B<n<100B
---

> MovieNet-PS is a dataset for person search in movie data.

Check our [GitHub repo](https://github.com/ZhengPeng7/GLCNet) for details.
Thanks to [MovieNet](https://movienet.github.io/), which is the source of raw data.

## Terms of Use

By downloading the dataset, you agree to the following terms:

## Introduction

'./Image': Images collected from the MovieNet dataset.

'./annotation/Images.mat': 1 * 736,835 struct (736,835 images).
Each entry describes the pedestrian information of an image, including the image name (imname), the number, and the locations of pedestrians appearing in it (nAppear and box).

'./annotation/pool.mat': test images.

'./annotation/test/train_test/Train.mat': 5532 query persons for training.
'./annotation/test/train_test/TestG2000-TestG10000.mat': 2900 query persons with gallery sizes varying from 2000 to 10000 for testing.

*Note: The location of each person is stored as (xmin, ymin, width, height), i.e. `crop_im = I(idlocate(2):idlocate(2)+idlocate(4), idlocate(1):idlocate(1)+idlocate(3));`*

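The (xmin, ymin, width, height) convention above can be sketched in Python as well. This is a minimal illustration with a hypothetical `crop_box` helper (not part of the dataset tooling); it mirrors the MATLAB expression in the note, where indexing is 1-based and ranges are inclusive, so the 0-based slice ends at `+h+1` / `+w+1`:

```python
# Hypothetical sketch: crop a pedestrian patch from an image given a box
# stored as (xmin, ymin, width, height), following the MATLAB note:
#   crop_im = I(y:y+h, x:x+w);   % 1-based, inclusive ranges
# Here the image is any 2-D sequence of pixel rows and coords are 0-based.

def crop_box(image, box):
    xmin, ymin, w, h = box
    # MATLAB's y:y+h spans h+1 rows; the 0-based equivalent is [ymin, ymin+h].
    return [row[xmin:xmin + w + 1] for row in image[ymin:ymin + h + 1]]

# Example on a tiny 5x5 "image" of integers:
img = [[10 * r + c for c in range(5)] for r in range(5)]
patch = crop_box(img, (1, 2, 2, 1))  # xmin=1, ymin=2, width=2, height=1
# patch == [[21, 22, 23], [31, 32, 33]]  (rows 2..3, columns 1..3)
```

For real use you would load the images and `.mat` annotations first (e.g. with scipy.io) and index with the same convention.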
## Citation

```
@article{zheng2021glcnet,
  title={Global-local context network for person search},
  author={Zheng, Peng and Qin, Jie and Yan, Yichao and Liao, Shengcai and Ni, Bingbing and Cheng, Xiaogang and Shao, Ling},
  volume={8},
  year={2021}
}
```