Modalities: Image, Text · Formats: parquet
sikarwarank committed · Commit 57f2e88 · verified · 1 parent: d973ad9

Update README.md

Files changed (1): README.md (+4 −0)
@@ -33,8 +33,12 @@ configs:
   <a href="https://github.com/ankursikarwar/Cosmic"><img src="https://img.shields.io/badge/github-repo-blue?logo=github" alt="Github"/> </a>
   </p>
 
+<div align="justify">
+
 Humans routinely transform local, viewpoint-dependent observations into shared spatial models through language. COSMIC asks whether MLLMs can do the same. The benchmark places two static agents in the same indoor scene from different egocentric viewpoints. The agents must communicate exclusively through natural language to jointly solve a spatial QA task.
 
+</div>
+
 ## Usage
 
 ```python
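For intuition, the two-agent protocol the README paragraph describes can be sketched as a simple alternating dialogue loop. This is an illustrative sketch only: the `Agent` class, its stub policy, and the message format are hypothetical stand-ins, not part of the COSMIC codebase or its actual evaluation harness.

```python
# Hypothetical sketch of the COSMIC-style setup: two static agents, each with
# a private egocentric view, exchanging natural-language messages in turns.
from dataclasses import dataclass


@dataclass
class Agent:
    """A static agent with a private egocentric view of the shared scene."""
    name: str
    view: str  # natural-language description of what this agent sees

    def speak(self, transcript: list[str]) -> str:
        # Stub policy: describe the private view once, then acknowledge.
        # A real MLLM agent would condition on its image and the transcript.
        if not any(m.startswith(f"{self.name}:") for m in transcript):
            return f"{self.name}: I see {self.view}."
        return f"{self.name}: Understood."


def run_dialogue(a: Agent, b: Agent, question: str, turns: int = 2) -> list[str]:
    """Alternate messages between the two agents for a fixed number of turns."""
    transcript = [f"Q: {question}"]
    for _ in range(turns):
        transcript.append(a.speak(transcript))
        transcript.append(b.speak(transcript))
    return transcript


a = Agent("A", "the sofa to my left and a door ahead")
b = Agent("B", "the same sofa on my right and a window behind it")
log = run_dialogue(a, b, "Is the door left or right of the window?")
```

The key constraint the benchmark imposes is visible in the loop: each agent sees only its own `view` plus the shared transcript, so any shared spatial model must be built through language alone.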