vatsal-malaviya committed
Commit e10671b · verified · 1 parent: cc9af71

Update README.md

Files changed (1): README.md (+134, −0)
  data_files:
  - split: train
    path: data/train-*
task_categories:
- text-to-image
language:
- en
tags:
- T2I
- Reasoning
- Action
- Benchmark
size_categories:
- n<1K
---
# AcT2I-Prompts

## What is this?

AcT2I-Prompts is the core prompt set from the AcT2I benchmark. It contains 125 base action-centric prompts that describe interactions between two animal agents (e.g. "a goose competing for dominance with a turkey"), plus three enriched variants per base prompt:

* `spatial_prompt`
* `emotional_prompt`
* `temporal_prompt`

Each base prompt is also labeled along semantic axes:

* `rarity_label` (how biologically common the interaction is)
* `emotional_valence` (aggressive / defensive / affiliative / communicative)
* `spatial_topology` (pursuit vs. physical contact vs. distant interaction)
* `temporal_extent` (instantaneous vs. extended action)

Total rows: 125 (one per base prompt). Each row includes all four textual variants.

This repo intentionally does **not** include generated images or human study results; those are released separately.

---

## Intended use

This dataset is **for evaluation / analysis of text-to-image models**, not for training.

Typical use:

1. For each row, take the `base_prompt` and (optionally) the enriched `spatial_prompt`, `emotional_prompt`, and `temporal_prompt`.
2. Generate images from your T2I model for each variant.
3. Measure whether the model's image actually depicts the described interaction and action.

These prompts are meant to stress-test spatial, temporal, and affective reasoning ("who is doing what to whom, in what posture, with what intent, at what moment").

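The typical-use loop above can be sketched in a few lines. Note the assumptions: `generate_image` and `action_depiction_score` are hypothetical placeholders for your own T2I model and metric, and the enriched texts in the example row are illustrative, not copied from the dataset.

```python
# Sketch of the evaluation loop: one image (and score) per prompt variant.
VARIANTS = ("base_prompt", "spatial_prompt", "emotional_prompt", "temporal_prompt")

def prompts_for_row(row: dict) -> dict:
    """Collect the four textual variants of one scenario, keyed by variant name."""
    return {name: row[name] for name in VARIANTS}

# Example row, abbreviated to the fields used here (enriched texts are illustrative):
row = {
    "id": 1,
    "base_prompt": "a goose competing for dominance with a turkey",
    "spatial_prompt": "a goose competing for dominance with a turkey, chest to chest",
    "emotional_prompt": "an aggressive goose competing for dominance with a wary turkey",
    "temporal_prompt": "a goose mid-lunge while competing for dominance with a turkey",
}

scores = {}
for name, prompt in prompts_for_row(row).items():
    # image = generate_image(prompt)                # placeholder: your T2I model
    # scores[name] = action_depiction_score(image)  # placeholder: your metric
    pass
```

Scoring each variant separately lets you compare how much the spatial, emotional, and temporal enrichments actually help a given model.
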
### Out-of-scope / disallowed use

This dataset is **not** intended for:

* Training or promoting violent / graphic animal content for shock or harassment.
* Generating deceptive media presented as "real" wildlife attacks or staged cruelty.
* Drawing conclusions about human social behavior, human interpersonal violence, or human identity bias. The benchmark is deliberately animal–animal and two-agent focused.

Do not use this dataset to build abusive content pipelines.

---

## Data fields

Each row in `data/prompts.jsonl` represents one base interaction scenario.

* `id` (int)
* `base_prompt` (str)
* `animal1` (str)
* `animal2` (str)
* `action` (str)
* `rarity_label` (str: `frequent` | `rare` | `very_rare`)
* `emotional_valence` (str: `aggressive` | `defensive` | `affiliative` | `communicative`)
* `spatial_topology` (str: `proximal-contact` | `pursuit / avoidance` | `distant interaction`)
* `temporal_extent` (str: `instantaneous` | `extended action`)
* `spatial_prompt` (str)
* `emotional_prompt` (str)
* `temporal_prompt` (str)

There are no train/dev/test splits. All 125 rows are considered the official evaluation set.

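As a sanity check, the categorical fields can be validated against the documented label sets. This is a minimal sketch, not an official tool, and the example row below is illustrative:

```python
# Allowed values for each categorical field, as documented above.
ALLOWED = {
    "rarity_label": {"frequent", "rare", "very_rare"},
    "emotional_valence": {"aggressive", "defensive", "affiliative", "communicative"},
    "spatial_topology": {"proximal-contact", "pursuit / avoidance", "distant interaction"},
    "temporal_extent": {"instantaneous", "extended action"},
}

def validate_row(row: dict) -> list:
    """Return (field, value) pairs that fall outside the documented label sets."""
    return [(f, row.get(f)) for f, allowed in ALLOWED.items() if row.get(f) not in allowed]

# Illustrative row (labels only; other fields omitted):
row = {
    "rarity_label": "rare",
    "emotional_valence": "aggressive",
    "spatial_topology": "pursuit / avoidance",
    "temporal_extent": "instantaneous",
}
print(validate_row(row))  # an empty list means every label is valid
```
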
---

## Dataset creation

### Curation rationale

Most existing "compositional" prompts test simple attribute binding ("a blue cat on a skateboard"). AcT2I instead targets **interaction semantics**: chasing, comforting, retaliating, asserting dominance, surrendering, etc. These require:

* asymmetric roles (one agent acts on the other),
* physically plausible contact / pursuit / restraint poses,
* temporal cues (in the middle of an attack vs. after being struck),
* emotional / intent cues (aggressive vs. affiliative).

We focus on animal–animal interactions (instead of human–human violence or human identity scenarios) to:

1. Reduce sensitive social/ethical risk around representing harm between humans.
2. Get a clearer signal about action depiction instead of immediately running into "the model can't draw human hands" failures.

### How prompts were generated

* We defined pairs of animals and an interaction verb (e.g. "competing for dominance with", "comforting", "chasing", "retaliating against").
* We wrote a concise `base_prompt` for each interaction.
* For each base prompt, we produced three enriched variants:
  * `spatial_prompt`: adds explicit body orientation / physical layout.
  * `emotional_prompt`: adds affect / intent wording.
  * `temporal_prompt`: anchors the scene in a specific moment or phase of the action.
* We assigned semantic labels (`rarity_label`, `emotional_valence`, `spatial_topology`, `temporal_extent`) to each base prompt.

### Who created the data

All prompts, enriched variants, and semantic labels were authored and verified by the AcT2I team. No personal names, locations, or other PII were included.

---

## Bias, risks, and limitations

* **Violence / aggression content:** Many prompts explicitly describe aggression, dominance, pursuit, or threat between animals. This is intentional (models struggle most with these high-contact, asymmetric actions), but it means the dataset can be used to generate violent-looking content. Please use it responsibly.

* **Scope limitations:** The benchmark is animal–animal only and two-agent only. Results should not be overgeneralized to human social interactions, medical scenarios, multi-agent scenes, tool use, etc.

* **Biological plausibility:** Some interactions are biologically rare or borderline impossible. That is deliberate: we care about whether the model can depict the *requested* interaction clearly, not whether the interaction is common in nature.

---

## Citation

If you use AcT2I-Prompts, please cite:

```bibtex
@article{malaviya2025act2i,
  title={AcT2I: Evaluating and Improving Action Depiction in Text-to-Image Models},
  author={Malaviya, Vatsal and Chatterjee, Agneet and Patel, Maitreya and Yang, Yezhou and Baral, Chitta},
  journal={arXiv preprint arXiv:2509.16141},
  year={2025}
}
```