Modalities: Text
Formats: parquet
Languages: English
Libraries: Datasets, pandas

Data preview (test qrels, truncated):

query-id   corpus-id      score
---------  -------------  -----
ifc_318    uniclass_0     1
ifc_318    uniclass_1     1
ifc_220    uniclass_2     1
ifc_220    uniclass_3     1
ifc_220    uniclass_4     1
...        ...            ...
ifc_514    uniclass_55    1
ifc_514    uniclass_56    1
ifc_224    uniclass_57    1

Columns: query-id (string, 334 distinct values), corpus-id (string, 10-13 characters), score (int64, always 1).
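
Because the dataset is stored as Parquet on the Hugging Face Hub, the qrels shown above can be loaded directly with the datasets library. A minimal sketch, assuming the qrels are exposed as the default config with a test split (the actual config and split names on the Hub may differ):

from datasets import load_dataset

# Load the relevance judgments (qrels); the split name is an assumption.
qrels = load_dataset("mteb/BuiltBenchRetrieval", split="test")

# Each row marks one corpus document as relevant (score 1) for a query.
for row in qrels.select(range(3)):
    print(row["query-id"], row["corpus-id"], row["score"])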

BuiltBenchRetrieval

An MTEB dataset
Massive Text Embedding Benchmark

Retrieval of built-asset entity type/class descriptions given a query describing an entity, as represented in well-established industry classification systems such as Uniclass and IFC.

Task category: t2t
Domains: Engineering, Written
Reference: https://arxiv.org/abs/2411.12056

How to evaluate on this task

You can evaluate an embedding model on this dataset using the following code:

import mteb

# Select the task and build the evaluator.
tasks = mteb.get_tasks(tasks=["BuiltBenchRetrieval"])
evaluator = mteb.MTEB(tasks=tasks)

# Replace with the name of the embedding model you want to evaluate,
# e.g. "sentence-transformers/all-MiniLM-L6-v2".
model = mteb.get_model("YOUR_MODEL_NAME")
evaluator.run(model)
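
A further sketch: run() can also persist results to disk and return per-task result objects. The output_folder argument and the attributes below are assumptions based on recent mteb releases and may differ in other versions.

# Persist results and inspect the per-split scores.
results = evaluator.run(model, output_folder="results/")
for res in results:
    print(res.task_name, res.scores)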

To learn more about how to run models on mteb tasks, check out the GitHub repository.

Citation

If you use this dataset, please cite both the dataset and mteb, as this dataset likely includes additional processing as part of the MMTEB contribution.


@article{shahinmoghadam2024benchmarking,
  author = {Shahinmoghadam, Mehrzad and Motamedi, Ali},
  journal = {arXiv preprint arXiv:2411.12056},
  title = {Benchmarking pre-trained text embedding models in aligning built asset information},
  year = {2024},
}


@article{enevoldsen2025mmtebmassivemultilingualtext,
  title = {MMTEB: Massive Multilingual Text Embedding Benchmark},
  author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
  publisher = {arXiv},
  journal = {arXiv preprint arXiv:2502.13595},
  year = {2025},
  url = {https://arxiv.org/abs/2502.13595},
  doi = {10.48550/arXiv.2502.13595},
}

@article{muennighoff2022mteb,
  author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
  title = {MTEB: Massive Text Embedding Benchmark},
  publisher = {arXiv},
  journal = {arXiv preprint arXiv:2210.07316},
  year = {2022},
  url = {https://arxiv.org/abs/2210.07316},
  doi = {10.48550/ARXIV.2210.07316},
}

Dataset Statistics

The descriptive statistics for this task are shown below. They can also be obtained programmatically:

import mteb

task = mteb.get_task("BuiltBenchRetrieval")

# Returns the statistics dictionary shown below.
desc_stats = task.metadata.descriptive_stats
print(desc_stats)
{
    "test": {
        "num_samples": 3095,
        "number_of_characters": 977174,
        "num_documents": 2761,
        "min_document_length": 204,
        "average_document_length": 341.6859833393698,
        "max_document_length": 633,
        "unique_documents": 2761,
        "num_queries": 334,
        "min_query_length": 12,
        "average_query_length": 101.13473053892216,
        "max_query_length": 430,
        "unique_queries": 334,
        "none_queries": 0,
        "num_relevant_docs": 2761,
        "min_relevant_docs_per_query": 1,
        "average_relevant_docs_per_query": 8.266467065868264,
        "max_relevant_docs_per_query": 214,
        "unique_relevant_docs": 2761,
        "num_instructions": null,
        "min_instruction_length": null,
        "average_instruction_length": null,
        "max_instruction_length": null,
        "unique_instructions": null,
        "num_top_ranked": null,
        "min_top_ranked_per_query": null,
        "average_top_ranked_per_query": null,
        "max_top_ranked_per_query": null
    }
}
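
The aggregate figures are internally consistent; as a quick sanity check, plain arithmetic on the numbers above recovers the totals:

# number_of_characters equals document characters plus query characters,
# and average_relevant_docs_per_query follows from the two counts.
print(round(341.6859833393698 * 2761 + 101.13473053892216 * 334))  # 977174
print(2761 / 334)  # 8.266467065868264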

This dataset card was automatically generated using MTEB
