Table + Text IR Evaluation Collection
An evaluation suite for benchmarking retrieval models on Table+Text retrieval datasets.
Data preview (default config, dev split):

| qid (string) | did (string) | score (int32) |
|---|---|---|
| 1566 | 1635 | 1 |
| 1567 | 1641 | 1 |
| 1568 | 1645 | 1 |
| 1569 | 1648 | 1 |
| 1570 | 1653 | 1 |
| 1571 | 1656 | 1 |
| 1572 | 1654 | 1 |
| 1573 | 1659 | 1 |
| 1574 | 1662 | 1 |
| 1574 | 1663 | 1 |
| 1575 | 1665 | 1 |
| 1576 | 1672 | 1 |
| 1576 | 1673 | 1 |
| 1577 | 1678 | 1 |
| 1578 | 1680 | 1 |
| 1579 | 1685 | 1 |
| 1580 | 1689 | 1 |
| 1582 | 1695 | 1 |
| 1583 | 1699 | 1 |
| 1585 | 1705 | 1 |
| 1586 | 1690 | 1 |
| 1586 | 1692 | 1 |
| 1587 | 1709 | 1 |
| 1588 | 1714 | 1 |
| 1589 | 1716 | 1 |
| 1590 | 1719 | 1 |
| 1591 | 1724 | 1 |
| 1592 | 1726 | 1 |
| 1594 | 1735 | 1 |
| 1595 | 1737 | 1 |
| 1596 | 1740 | 1 |
| 1597 | 1749 | 1 |
| 1598 | 1752 | 1 |
| 1599 | 1756 | 1 |
| 1600 | 1759 | 1 |
| 1601 | 1765 | 1 |
| 1602 | 1717 | 1 |
| 1603 | 1723 | 1 |
| 1603 | 1724 | 1 |
| 1604 | 1769 | 1 |
| 1605 | 1773 | 1 |
| 1606 | 1777 | 1 |
| 1607 | 1780 | 1 |
| 1608 | 1784 | 1 |
| 1609 | 1791 | 1 |
| 1609 | 1793 | 1 |
| 1611 | 1801 | 1 |
| 1612 | 1804 | 1 |
| 1613 | 1800 | 1 |
| 1613 | 1801 | 1 |
| 1614 | 1807 | 1 |
| 1615 | 1635 | 1 |
| 1615 | 1637 | 1 |
| 1616 | 1731 | 1 |
| 1617 | 1809 | 1 |
| 1618 | 1816 | 1 |
| 1619 | 1819 | 1 |
| 1619 | 1820 | 1 |
| 1620 | 1824 | 1 |
| 1621 | 1825 | 1 |
| 1622 | 1829 | 1 |
| 1623 | 1831 | 1 |
| 1624 | 1835 | 1 |
| 1625 | 1838 | 1 |
| 1626 | 1799 | 1 |
| 1626 | 1801 | 1 |
| 1627 | 1843 | 1 |
| 1628 | 1850 | 1 |
| 1629 | 1852 | 1 |
| 1629 | 1854 | 1 |
| 1630 | 1855 | 1 |
| 1631 | 1861 | 1 |
| 1632 | 1866 | 1 |
| 1633 | 1871 | 1 |
| 1634 | 1873 | 1 |
| 1634 | 1875 | 1 |
| 1635 | 1724 | 1 |
| 1636 | 1711 | 1 |
| 1636 | 1714 | 1 |
| 1637 | 1877 | 1 |
| 1638 | 1886 | 1 |
| 1639 | 1891 | 1 |
| 1640 | 1895 | 1 |
| 1641 | 1897 | 1 |
| 1642 | 1901 | 1 |
| 1643 | 1906 | 1 |
| 1644 | 1912 | 1 |
| 1646 | 1917 | 1 |
| 1647 | 1920 | 1 |
| 1649 | 1927 | 1 |
| 1650 | 1931 | 1 |
| 1651 | 1935 | 1 |
| 1652 | 1938 | 1 |
| 1653 | 1942 | 1 |
| 1654 | 1838 | 1 |
| 1654 | 1839 | 1 |
| 1655 | 1945 | 1 |
| 1656 | 1951 | 1 |
| 1657 | 1955 | 1 |
| 1658 | 1958 | 1 |
This dataset is part of a Table + Text retrieval benchmark. It includes queries and relevance judgments for the dev split, along with a plain-text corpus.
| Config | Description | Split(s) |
|---|---|---|
| default | Relevance judgments (qrels): qid, did, score | dev |
| queries | Query IDs and text | dev_queries |
| corpus | Plain text corpus: _id, title, text | corpus |
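For orientation, here is a minimal sketch of loading the three configs with the Hugging Face `datasets` library. The repository id below is a placeholder assumption, not an actual dataset path; substitute one of the datasets in this collection.

```python
from datasets import load_dataset

# Placeholder repo id (assumption) -- substitute a dataset from this collection.
REPO = "ibm-research/your-dataset-name"

# Relevance judgments: one (qid, did, score) row per judged pair.
qrels = load_dataset(REPO, "default", split="dev")

# Queries: query ids and text.
queries = load_dataset(REPO, "queries", split="dev_queries")

# Corpus: plain-text documents with _id, title, text.
corpus = load_dataset(REPO, "corpus", split="corpus")

print(qrels[0], queries[0], corpus[0], sep="\n")
```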
| Dataset | Structured | #Train | #Dev | #Test | #Corpus |
|---|---|---|---|---|---|
| OpenWikiTables | ✓ | 53.8k | 6.6k | 6.6k | 24.7k |
| NQTables | ✓ | 9.6k | 1.1k | 1k | 170k |
| FeTaQA | ✓ | 7.3k | 1k | 2k | 10.3k |
| OTT-QA (small) | ✓ | 41.5k | 2.2k | -- | 8.8k |
| MultiHierTT | ✗ | -- | 929 | -- | 9.9k |
| AIT-QA | ✗ | -- | -- | 515 | 1.9k |
| StatcanRetrieval | ✗ | -- | -- | 870 | 5.9k |
| watsonxDocsQA | ✗ | -- | -- | 30 | 1.1k |
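Because the judgments in the preview above are binary (every row has score = 1), cutoff metrics such as recall@k can be computed in a few lines of plain Python. The sketch below assumes your retriever produces a run as a mapping from query id to a ranked list of document ids; the helper names are illustrative, not part of any released tooling.

```python
from collections import defaultdict

def to_qrel_dict(qrels_rows):
    """Group flat (qid, did, score) rows into {qid: {did: score}}."""
    qrel = defaultdict(dict)
    for row in qrels_rows:
        qrel[row["qid"]][row["did"]] = row["score"]
    return dict(qrel)

def recall_at_k(qrel, run, k=10):
    """Mean per-query fraction of relevant documents found in the top k."""
    per_query = []
    for qid, judged in qrel.items():
        relevant = {did for did, score in judged.items() if score > 0}
        if not relevant:
            continue
        retrieved = set(run.get(qid, [])[:k])
        per_query.append(len(relevant & retrieved) / len(relevant))
    return sum(per_query) / len(per_query) if per_query else 0.0

# Example with rows from the preview: query 1574 has two relevant documents.
qrel = to_qrel_dict([
    {"qid": "1574", "did": "1662", "score": 1},
    {"qid": "1574", "did": "1663", "score": 1},
])
run = {"1574": ["1663", "1700", "1662"]}  # hypothetical ranked run
print(recall_at_k(qrel, run, k=2))  # 0.5 -- one of two relevant docs in top 2
```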
If you use TableIR Eval (the Table-Text IR Evaluation Collection), please cite:
@misc{doshi2026tableir,
  title        = {TableIR Eval: Table-Text IR Evaluation Collection},
  author       = {Doshi, Meet and Boni, Odellia and Kumar, Vishwajeet and Sen, Jaydeep and Joshi, Sachindra},
  year         = {2026},
  institution  = {IBM Research},
  howpublished = {https://huggingface.co/collections/ibm-research/table-text-ir-evaluation},
  note         = {Hugging Face dataset collection}
}
All credit goes to the original dataset authors. Please cite their work:
@inproceedings{zhao-etal-2022-multihiertt,
title = "{M}ulti{H}iertt: Numerical Reasoning over Multi Hierarchical Tabular and Textual Data",
author = "Zhao, Yilun and
Li, Yunxiang and
Li, Chenying and
Zhang, Rui",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.acl-long.454",
pages = "6588--6600",
}