Update task categories to include image-text-to-text
#2 by nielsr (HF Staff), opened
README.md CHANGED

````diff
@@ -5,16 +5,18 @@ language:
 - en
 license: cc-by-4.0
 multilinguality: monolingual
-pretty_name: SciVer
 size_categories:
 - 1K<n<10K
 source_datasets:
 - original
 task_categories:
 - text-classification
+- image-text-to-text
 task_ids:
 - fact-checking
+pretty_name: SciVer
 ---
+
 # SCIVER: A Benchmark for Multimodal Scientific Claim Verification
 
 <p align="center">
@@ -23,7 +25,6 @@ task_ids:
   <a href="https://huggingface.co/datasets/chengyewang/SciVer">🤗 Data</a>
 </p>
 
-
 ## 📰 News
 - [May 15, 2025] SciVer has been accepted by ACL 2025 Main!
 
@@ -47,7 +48,7 @@
   - Analytical
 - Context includes **text paragraphs, multiple tables, and charts**
 - Labels: `Entailed`, `Refuted`
-- Evaluated across **21 leading foundation models**, including GPT-4o, Gemini, Claude 3.5, Qwen2.5-VL, LLaMA-3.2-Vision, etc.
+- Evaluated across **21 leading foundation models**, including o4-mini, GPT-4o, Gemini, Claude 3.5, Qwen2.5-VL, LLaMA-3.2-Vision, etc.
 - ✍️ Includes **step-by-step rationale** and **automated accuracy evaluation**
 
 ------
@@ -65,8 +66,9 @@ Each SCIVER sample includes:
 
 1. **Direct Reasoning** → extract simple facts
 2. **Parallel Reasoning** → synthesize info from multiple sources
-3. **Sequential Reasoning** →
-4. **
+3. **Sequential Reasoning** → synthesize info from multiple sources
+4. **Sequential Reasoning** → perform step-by-step inference
+5. **Analytical Reasoning** → apply domain expertise and logic
 
 ------
 
@@ -153,5 +155,4 @@ If you use our work and are inspired by our work, please consider cite us:
   primaryClass={cs.CL},
   url={https://arxiv.org/abs/2506.15569},
 }
-```
-
+```
````
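The card's `Entailed` / `Refuted` labels together with its "automated accuracy evaluation" bullet imply a simple scoring loop. A minimal sketch, assuming records are dicts with hypothetical `label` and `prediction` keys (not the benchmark's actual schema), in which an unparsable model answer counts as incorrect:

```python
# Sketch of an automated accuracy check over SciVer's two verification labels.
# The label set comes from the dataset card; the record layout ("label" and
# "prediction" keys) is an illustrative assumption, not the real schema.
VALID_LABELS = {"Entailed", "Refuted"}

def accuracy(records):
    """Fraction of records whose prediction matches the gold label.

    Predictions outside the valid label set (e.g. unparsable model output)
    count as incorrect rather than being dropped from the denominator.
    """
    if not records:
        return 0.0
    correct = sum(
        r["prediction"] in VALID_LABELS and r["prediction"] == r["label"]
        for r in records
    )
    return correct / len(records)
```

Keeping invalid predictions in the denominator matters for a two-label benchmark: dropping them would inflate the score of a model that frequently fails to emit a parseable label.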