KeyQuestions.csv:

| Question | Design | Criterion |
|---|---|---|
| Q1 | Kitchen Utensil Grip | Lightweight |
| Q2 | Kitchen Utensil Grip | Resistant to Heat |
| Q3 | Kitchen Utensil Grip | Corrosion Resistant |
| Q4 | Kitchen Utensil Grip | High Strength |
| Q5 | Spacecraft Component | Lightweight |
| Q6 | Spacecraft Component | Resistant to Heat |
| Q7 | Spacecraft Component | Corrosion Resistant |
| Q8 | Spacecraft Component | High Strength |
| Q9 | Underwater Component | Lightweight |
| Q10 | Underwater Component | Resistant to Heat |
| Q11 | Underwater Component | Corrosion Resistant |
| Q12 | Underwater Component | High Strength |
| Q13 | Safety Helmet | Lightweight |
| Q14 | Safety Helmet | Resistant to Heat |
| Q15 | Safety Helmet | Corrosion Resistant |
| Q16 | Safety Helmet | High Strength |
MSEval Dataset

A benchmark designed to facilitate the evaluation of foundation models, and the modification of their behavior through existing techniques, in the context of material selection for conceptual design.

The data was collected through a survey of experts in the field of material selection, who were asked the questions listed in KeyQuestions.csv. The dataset can be used to evaluate a language model's performance, and the spread of its responses, against the human expert evaluations.
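A minimal sketch of that comparison, assuming hypothetical column names `Question` and `Response` (check the CSV header for the actual schema):

```python
import pandas as pd

# Expert responses; reading hf:// paths requires the huggingface_hub package.
df = pd.read_csv("hf://datasets/cmudrc/Material_Selection_Eval/CleanResponses.csv")

# Spread of the expert responses per question: mean and standard deviation.
# "Question" and "Response" are assumed names, not the verified schema.
expert_spread = df.groupby("Question")["Response"].agg(["mean", "std"])
print(expert_spread)

# A model's answer to each question could then be compared against the
# expert mean/std, e.g. as a z-score relative to the expert distribution.
```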
For a more detailed explanation, see the paper: https://arxiv.org/abs/2407.09719v1
Overview
We introduce MSEval, a benchmark derived from survey results of experts in the field of material selection.
MSEval consists of three files: AllResponses, CleanResponses, and KeyQuestions. The dataset file tree is shown below:
```
MSEval
│
├── AllResponses.csv
├── CleanResponses.csv
└── KeyQuestions.csv
```
Dataset Usage

An example of using the dataset with the `datasets` library is available at https://github.com/cmudrc/MSEval.

To load the dataset with pandas:

```python
import pandas as pd

# Reading hf:// paths requires the huggingface_hub package.
df = pd.read_csv("hf://datasets/cmudrc/Material_Selection_Eval/AllResponses.csv")
```

Replace AllResponses with CleanResponses or KeyQuestions in the path as required.
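Alternatively, a minimal sketch using the `datasets` library via its generic CSV builder (the repository linked above shows the full example):

```python
from datasets import load_dataset

# Load one CSV file from the Hub through the generic "csv" loader.
ds = load_dataset(
    "csv",
    data_files="hf://datasets/cmudrc/Material_Selection_Eval/AllResponses.csv",
)
print(ds["train"][0])  # first record as a dict
```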
Citation
If you find the dataset useful, please cite:
```bibtex
@misc{jain2024msevaldatasetmaterialselection,
  title={MSEval: A Dataset for Material Selection in Conceptual Design to Evaluate Algorithmic Models},
  author={Yash Patawari Jain and Daniele Grandi and Allin Groom and Brandon Cramer and Christopher McComb},
  year={2024},
  eprint={2407.09719},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2407.09719},
}
```
License: MIT