ZinengTang committed
Commit e0a34da · verified · 1 Parent(s): 1425707

Update README.md

Files changed (1):
  1. README.md +6 -50
README.md CHANGED
@@ -1,6 +1,6 @@
  ---
  language: en
- license: apache-2.0
+ license: mit
  size_categories:
  - 1K<n<10K
  task_categories:
@@ -32,35 +32,16 @@ The key feature of this dataset is that it captures communication between two ag
  - **Curated by**: University of California, Berkeley
  - **Time per Task**: Median 33.0s for speakers, 10.5s for listeners

- ## Uses
-
- ### Direct Use
-
- The dataset is designed for:
- - Training and evaluating referring expression generation models
- - Training and evaluating visual question answering systems
- - Studying human spatial language use in multi-perspective scenarios
- - Developing embodied AI systems that can communicate about shared environments
- - Research on perspective-taking in language generation and comprehension
-
- ### Out-of-Scope Use
-
- The dataset should not be used for:
- - Training systems to navigate or manipulate physical environments
- - Training general-purpose vision-language models without consideration of perspective
- - Applications requiring real-time interaction or dialogue (dataset contains single-turn interactions only)
-
  ## Dataset Structure

  Each instance contains:
- - Speaker view image (1280x720 resolution)
- - Listener view image (1280x720 resolution)
- - Natural language referring expression from speaker
+ - Speaker view image (1024x1024 resolution)
+ - Listener view image (1024x1024 resolution)
+ - Natural language referring expression from human speaker
  - Target object location
  - Listener object selection
  - Scene metadata including:
  - Agent positions and orientations
- - Field of view overlap measurements
  - Referent placement method (random vs adversarial)
  - Base environment identifier
@@ -88,32 +69,11 @@ The dataset was created to study how humans and AI systems handle referential co

  #### Who are the source data producers?

- - Base 3D environments: ScanNet++ dataset
+ - Base 3D environments: ScanNet dataset
  - Referring expressions: English-speaking crowdworkers from the United States
  - Quality filtering: Automated GPT-4V system
  - Scene generation: Automated system with physics simulation

- ### Personal and Sensitive Information
-
- The dataset does not contain personally identifiable information. Crowdworker data was checked to exclude private information and offensive content.
-
- ## Bias, Risks, and Limitations
-
- - Limited to indoor environments from ScanNet++
- - English language only
- - Single-turn interactions only (no dialogue)
- - Restricted to specific object types (spheres)
- - May reflect cultural biases in spatial language use
- - Limited demographic diversity of crowdworkers
-
- ### Recommendations
-
- - Consider cultural and linguistic differences in spatial language when using the dataset
- - Account for perspective differences when developing models
- - Evaluate performance across different relative orientations and referent placements
- - Consider expanding to multi-turn dialogue in future work
- - Test for biases in spatial language use across different demographics
-
  ## Citation

  **BibTeX:**
@@ -128,8 +88,4 @@ The dataset does not contain personally identifiable information. Crowdworker da

  ## Dataset Card Contact

- Contact the authors at {terran, lingjun, suhr}@berkeley.edu
-
- ## More Information
-
- Code, models, and dataset available at: https://github.com/zinengtang/MulAgentRef
+ Contact the authors at terran@berkeley.edu
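
For orientation, a minimal sketch of how an instance with the updated schema might be loaded and inspected. The Hub repository id `ZinengTang/MulAgentRef` and every field name below are assumptions inferred from the card's "Dataset Structure" list, not confirmed by this commit:

```python
# Hypothetical sketch only: the repo id and all field names are
# unverified assumptions based on the card's field list.
from datasets import load_dataset

ds = load_dataset("ZinengTang/MulAgentRef", split="train")
example = ds[0]

# Paired 1024x1024 renders of the same scene, one per agent perspective.
speaker_img = example["speaker_view"]    # assumed field name
listener_img = example["listener_view"]  # assumed field name

# The human speaker's referring expression and the trial outcome.
print(example["referring_expression"])                            # assumed
print(example["target_location"], example["listener_selection"])  # assumed

# Scene metadata: agent poses, placement method (random vs adversarial),
# and the base ScanNet environment identifier.
print(example["metadata"])  # assumed
```

If the actual schema differs, `ds.features` lists the real column names.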