Modalities: Image, Text
Formats: parquet
Size: < 1K
Improve dataset card: add metadata, license, and task descriptions

#1 opened by nielsr (HF Staff)
Files changed (1)
  1. README.md +18 -4
```diff
@@ -1,4 +1,7 @@
 ---
+license: mit
+task_categories:
+- image-text-to-text
 configs:
 - config_name: anchor_recognition
   data_files:
@@ -22,7 +25,6 @@ configs:
     path: cognitive_mapping/test-00000.parquet
 ---
 
-
 <h1 align="center">Communicating about Space: Language-Mediated Spatial Integration Across Partial Views</h1>
 
 <p align="center">
@@ -30,11 +32,23 @@ configs:
 <a href="https://github.com/ankursikarwar/Cosmic"><img src="https://img.shields.io/badge/github-Comm--About--Space-blue?logo=github" alt="Github"/> </a>
 </p>
 
-<div align="justify">
+COSMIC (Collaborative Spatial Communication) is a diagnostic benchmark that tests whether Multimodal Large Language Models (MLLMs) can align distinct egocentric views through multi-turn dialogue to form a coherent, allocentric understanding of a shared 3D environment.
+
+The benchmark places two static agents in the same indoor scene from different egocentric viewpoints. The agents must communicate exclusively through natural language to jointly solve a spatial QA task.
+
+## Benchmark Tasks
+
+COSMIC contains **899 indoor scenes** and **1,250 question–answer pairs** spanning five tasks:
 
-Humans routinely transform local, viewpoint-dependent observations into shared spatial models through language. COSMIC asks whether MLLMs can do the same. The benchmark places two static agents in the same indoor scene from different egocentric viewpoints. The agents must communicate exclusively through natural language to jointly solve a spatial QA task.
+| Task | Description |
+|---|---|
+| **Anchor Recognition** | Establish shared anchor objects across distinct egocentric perspectives |
+| **Global Counting** | Aggregate object counts across two partial views while disambiguating which instances are shared and which are view-exclusive |
+| **Relative Distance** | Estimate which object is metrically closest or farthest from a target, requiring agents to align their partial views and compare distances |
+| **Relative Direction** | Determine the egocentric direction of a target object using cross-view spatial reasoning |
+| **Cognitive Mapping** | Communicate complementary partial observations to build a shared map-like representation of the room, verifying whether a proposed top-down layout is spatially accurate |
 
-</div>
+All tasks use a multiple-choice format (4 options, except Cognitive Mapping which is binary) with carefully constructed distractors.
 
 ## Usage
 
```
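Since the card states that every task is 4-way multiple choice except the binary Cognitive Mapping task, chance-level accuracy differs per task, which matters when reading reported scores. A minimal sketch of the per-task random-guessing baselines follows; note that only `anchor_recognition` and `cognitive_mapping` appear as config names in the card, so the other three config names here are inferred from the task titles and may not match the dataset exactly.

```python
# Chance-level accuracy per COSMIC task, from the card's statement that
# all tasks use 4 answer options except Cognitive Mapping (binary).
# Config names other than anchor_recognition / cognitive_mapping are
# hypothetical, inferred from the task titles.
TASK_OPTIONS = {
    "anchor_recognition": 4,
    "global_counting": 4,
    "relative_distance": 4,
    "relative_direction": 4,
    "cognitive_mapping": 2,
}

def chance_accuracy(n_options: int) -> float:
    """Expected accuracy of uniform random guessing over n options."""
    return 1.0 / n_options

baselines = {task: chance_accuracy(n) for task, n in TASK_OPTIONS.items()}
for task, acc in baselines.items():
    print(f"{task}: {acc:.0%}")
```

A model is only demonstrating cross-view reasoning on a task when its accuracy clears the corresponding baseline (25% for the four-option tasks, 50% for Cognitive Mapping).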