yangbh217 committed
Commit 135d9a9 · verified · Parent: e961b6e

Update README.md

Files changed (1): README.md (+25 −3)
@@ -1,7 +1,5 @@
----
-license: apache-2.0
----
 # MMSci_Table
+
 Dataset for the paper "[Does Table Source Matter? Benchmarking and Improving Multimodal Scientific Table Understanding and Reasoning](s)"
 
 # MMSci Dataset Collection
@@ -14,6 +12,14 @@ The MMSci dataset collection consists of three complementary datasets designed f
 - **MMSci-Ins**: An instruction tuning dataset with 12K samples across three table-based tasks
 - **MMSci-Eval**: A benchmark with 3,114 testing samples for numerical reasoning evaluation
 
+## Framework Overview
+![Framework Overview](model.pdf)
+<div class="flex justify-center">
+<img src="https://huggingface.co/datasets/yangbh217/MMSci_Table/resolve/main/model.png" alt="Framework Overview">
+</div>
+
+*Figure 1: Overview of the MMSci framework showing the four key stages: Table Image Generation, Dataset Construction, Table Structure Learning, and Visual Instruction Tuning.*
+
 ## Dataset Details
 
 ### MMSci-Pre
@@ -27,6 +33,9 @@ The MMSci dataset collection consists of three complementary datasets designed f
 - Complex layouts and relationships from scientific papers
 - Focus on tables with significant numerical values
 
+![MMSci-Pre Example](html1.pdf)
+*Figure 2: Example from MMSci-Pre dataset showing the table image and its corresponding HTML representation.*
+
 ### MMSci-Ins
 - **Size**: 12K samples
 - **Format**: Instruction-following samples with reasoning steps
@@ -40,6 +49,10 @@ The MMSci dataset collection consists of three complementary datasets designed f
 - Each table paired with one TQA, TFV, and T2T task
 - Built upon scientific domain tables
 
+
+![MMSci-Ins Example](input1.pdf)
+*Figure 3: Example from MMSci-Ins dataset showing instruction-following samples across different tasks.*
+
 ### MMSci-Eval
 - **Size**: 3,114 samples
 - **Purpose**: Comprehensive evaluation of numerical reasoning capabilities
@@ -65,6 +78,15 @@ The datasets were created through a rigorous process:
 - Evaluating numerical reasoning capabilities in scientific contexts
 - Benchmarking table understanding and reasoning systems
 
+
+#### Table Question Answering (TQA)
+![TQA Example](tqa1.pdf)
+*Figure 4: Example of a TQA task showing the question, reasoning steps, and answer.*
+
+#### Table Fact Verification (TFV)
+![TFV Example](tfv1.pdf)
+*Figure 5: Example of a TFV task showing the statement, verification process, and conclusion.*
+
 ## Citation
 
 If you find this repository or our paper helpful, please cite our paper.
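The diff above describes MMSci-Ins as pairing each table with one TQA, TFV, and T2T sample. A minimal Python sketch of how such records might be organized and checked for per-table task coverage — all field names and example values here are hypothetical illustrations, not the dataset's actual schema:

```python
# Hypothetical MMSci-Ins-style records: each table should appear with one
# TQA, one TFV, and one T2T instruction sample. The keys "table_id", "task",
# "instruction", and "answer" are assumptions for illustration only.

TASKS = {"TQA", "TFV", "T2T"}

def group_by_table(records):
    """Map each table id to the set of task types it appears in."""
    coverage = {}
    for rec in records:
        coverage.setdefault(rec["table_id"], set()).add(rec["task"])
    return coverage

def fully_covered(coverage):
    """Return table ids that carry all three task types, as the README describes."""
    return sorted(tid for tid, tasks in coverage.items() if tasks == TASKS)

records = [
    {"table_id": "tab_001", "task": "TQA", "instruction": "What is the highest F1 score?", "answer": "92.4"},
    {"table_id": "tab_001", "task": "TFV", "instruction": "Verify: model A outperforms model B.", "answer": "supported"},
    {"table_id": "tab_001", "task": "T2T", "instruction": "Summarize the table in one sentence.", "answer": "A short summary."},
    {"table_id": "tab_002", "task": "TQA", "instruction": "Which row has the smallest value?", "answer": "row 3"},
]

cov = group_by_table(records)
print(fully_covered(cov))  # → ['tab_001']: only tab_001 has all three tasks
```

A check like this is how one would verify the "each table paired with one TQA, TFV, and T2T task" property after loading the instruction data, whatever the real field names turn out to be.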