---
dataset_info:
  features:
  - name: obs_uid
    dtype: string
  - name: usr_uid
    dtype: string
  - name: caption
    dtype: string
  - name: image
    dtype: image
  - name: clicks_path
    sequence:
      sequence: int32
      length: 2
  - name: clicks_time
    sequence: timestamp[s]
  splits:
  - name: train
    num_bytes: 1611467
    num_examples: 3848
  download_size: 241443505
  dataset_size: 1611467
---

### Dataset Description
CapMIT1003 is a dataset of captions and click-contingent image explorations collected during captioning tasks.
It is based on the same stimuli as the well-known MIT1003 benchmark, for which eye-tracking data under
free-viewing conditions is available; this offers a promising opportunity to concurrently study human attention under both tasks.


### Usage
You can load CapMIT1003 as follows:


```python
from datasets import load_dataset

capmit1003_dataset = load_dataset("azugarini/CapMIT1003", trust_remote_code=True)
print(capmit1003_dataset["train"][0])  # print the first example
```
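Each record stores the click path and the per-click timestamps as parallel sequences of equal length, so they can be zipped together to reconstruct the exploration order. A minimal sketch of this pairing, using illustrative values rather than real data:

```python
# Sketch: pair each (x, y) click with its timestamp for one example.
# The example dict below is illustrative and follows the schema from
# the dataset card above (clicks_path: pairs of int32, clicks_time: seconds).
example = {
    "caption": "a dog on the grass",
    "clicks_path": [[120, 85], [240, 130], [310, 200]],
    "clicks_time": [0, 1, 3],
}

# Zip the parallel sequences into (position, time) pairs.
clicks = list(zip(example["clicks_path"], example["clicks_time"]))
for (x, y), t in clicks:
    print(f"click at ({x}, {y}) after {t}s")
```

The same pattern applies to each element of `capmit1003_dataset["train"]` once the dataset is loaded.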

### Citation Information
If you use this dataset in your research or work, please cite the following paper:

```
@article{zanca2023contrastive,
  title={Contrastive Language-Image Pretrained Models are Zero-Shot Human Scanpath Predictors},
  author={Zanca, Dario and Zugarini, Andrea and Dietz, Simon and Altstidl, Thomas R and Ndjeuha, Mark A Turban and Schwinn, Leo and Eskofier, Bjoern},
  journal={arXiv preprint arXiv:2305.12380},
  year={2023}
}
```