---
license: cc-by-4.0
---
# 🧠 Causal3D: A Benchmark for Visual Causal Reasoning

**Causal3D** is a comprehensive benchmark designed to evaluate models' abilities to uncover *latent causal relations* from structured and visual data. The dataset integrates **3D-rendered scenes** with **tabular causal annotations**, providing a unified testbed for advancing *causal discovery*, *causal representation learning*, and *causal reasoning* with **vision-language models (VLMs)** and **large language models (LLMs)**.

---

## 📌 Overview

While recent progress in AI and computer vision has been remarkable, a major gap remains in evaluating causal reasoning over complex visual inputs. **Causal3D** bridges this gap by providing:

- **19 curated 3D-scene datasets** simulating diverse real-world causal phenomena.
- Paired **tabular causal graphs** and **image observations** across multiple views and backgrounds.
- Benchmarks for evaluating models in both **structured** (tabular) and **unstructured** (image) modalities.

---

## 🧩 Dataset Structure

Each sub-dataset (scene) contains:

- `images/`: Rendered images under different camera views and backgrounds.
- `metadata.csv`: Instance-level annotations, including object attributes and positions.
<!-- - `causal_graph.json`: Ground-truth causal structure (as an adjacency matrix or graph).
- `view_info.json`: Camera/viewpoint metadata.
- `split.json`: Recommended train/val/test splits for benchmarking. -->

---
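The per-scene layout above can be read with standard tools. Below is a minimal sketch of parsing instance-level annotations from a `metadata.csv`; the column names (`image_id`, `view`, `mass`, and so on) are illustrative assumptions, not the dataset's actual schema:

```python
import csv
import io

# Hypothetical metadata.csv excerpt -- the column names here are
# illustrative only and may differ from the real Causal3D files.
sample = """image_id,view,background,mass,length,angle
0001_v0.png,0,plain,1.2,0.5,17.3
0001_v1.png,1,textured,1.2,0.5,17.3
"""

# Each row is one rendered image; group its attributes by image file.
rows = list(csv.DictReader(io.StringIO(sample)))
attributes = {
    r["image_id"]: {k: v for k, v in r.items() if k != "image_id"}
    for r in rows
}

print(attributes["0001_v0.png"])
```

In practice one would open the real `metadata.csv` from a scene directory instead of the inline string, and join each row to its image in `images/` via the filename column.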

## 🎯 Evaluation Tasks

Causal3D supports a range of causal reasoning tasks, including:

- **Causal discovery** from image sequences or tables
- **Intervention prediction** under modified object states or backgrounds
- **Counterfactual reasoning** across views
- **VLM-based causal inference** given multimodal prompts

---
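To make the tabular discovery task concrete, here is a toy sketch (not the benchmark's actual pipeline) of the conditional-independence signal that PC-style constraint-based methods exploit: in a chain A → B → C, A and C are marginally correlated but nearly independent given B.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear SCM standing in for a tabular scene: A -> B -> C.
n = 5000
A = rng.normal(size=n)
B = 2.0 * A + rng.normal(size=n)
C = -1.5 * B + rng.normal(size=n)
X = np.column_stack([A, B, C])

def partial_corr(i, j, k, X):
    """Correlation of columns i and j after regressing out column k."""
    def resid(v, z):
        beta = np.dot(z, v) / np.dot(z, z)
        return v - beta * z
    zk = X[:, k] - X[:, k].mean()
    ri = resid(X[:, i] - X[:, i].mean(), zk)
    rj = resid(X[:, j] - X[:, j].mean(), zk)
    return np.corrcoef(ri, rj)[0, 1]

# A and C look strongly dependent marginally...
print(abs(np.corrcoef(X[:, 0], X[:, 2])[0, 1]))  # large
# ...but are (nearly) independent once B is conditioned on.
print(abs(partial_corr(0, 2, 1, X)))  # near zero
```

Constraint-based algorithms such as PC turn exactly these independence tests into edge deletions and orientations; the image-based variants of the task require first extracting such variables from pixels.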

## 📊 Benchmark Results

We evaluate a diverse set of methods:

- **Classical causal discovery**: PC, GES, NOTEARS
- **Causal representation learning**: CausalVAE, ICM-based encoders
- **Vision-language and large language models**: GPT-4V, Claude-3.5, Gemini-1.5

**Key findings**:

- As causal structures grow more complex, **model performance drops significantly** without strong prior assumptions.
- A noticeable performance gap exists between models trained on structured data and those applied directly to visual inputs.

---
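The intervention-prediction task listed earlier can be illustrated on a toy structural model. The pendulum-style mechanism and variable names below are illustrative assumptions, not Causal3D's actual generative process:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy SCM, loosely pendulum-like: length -> period.
# (Illustrative only; not the dataset's real equations.)
def simulate(n, do_length=None):
    # do_length=None draws length observationally; otherwise we apply
    # the hard intervention do(length = do_length).
    if do_length is None:
        length = rng.uniform(0.2, 1.0, n)
    else:
        length = np.full(n, do_length)
    period = 2.0 * np.pi * np.sqrt(length / 9.81) + rng.normal(0.0, 0.01, n)
    return length, period

# Observational data vs. data under do(length = 0.5): the interventional
# distribution follows the mechanism, not the observed correlation.
_, obs_period = simulate(10_000)
_, int_period = simulate(10_000, do_length=0.5)
print(int_period.mean())  # close to 2*pi*sqrt(0.5/9.81) ≈ 1.42
```

A model that has recovered the mechanism predicts the post-intervention mean directly; a purely correlational model fit on the observational data has no guarantee of doing so.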
57
+
58
+ <!-- ## 🔍 Example Use Case
59
+
60
+ ```python
61
+ from causal3d import load_scene_data
62
+
63
+ scene = "SpringPendulum"
64
+ data = load_scene_data(scene, split="train")
65
+ images = data["images"]
66
+ metadata = data["table"]
67
+ graph = data["causal_graph"] -->