---
license: mit
language:
- en
size_categories:
- n<1K
tags:
- benchmark
- llm-evaluation
- large-language-models
- large-language-model
- large-multimodal-models
- llm-training
---
The data was collected by conducting a survey of experts in the field of material selection.
This can be used to evaluate a language model's performance and its spread compared to human evaluation.

---
# Overview
We introduce MSEval, a benchmark derived from survey results of experts in the field of material selection.
MSEval consists of three files: `AllResponses`, `CleanResponses`, and `KeyQuestions`. The dataset file tree is shown below:

```
MSEval
│
├── AllResponses.csv
├── CleanResponses.csv
└── KeyQuestions.csv
```
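
Since this card does not document the column layout of the CSV files, here is a minimal loading sketch in Python. It assumes only the file names from the tree above and a local `MSEval/` directory; it prints each file's columns rather than assuming a schema:

```python
# Minimal sketch: load the three MSEval CSV files and inspect them.
# Assumes the files sit in a local MSEval/ directory, as in the tree above;
# column names are printed, not assumed, since the card does not list them.
import pandas as pd

files = {
    "AllResponses": "MSEval/AllResponses.csv",      # all collected expert responses
    "CleanResponses": "MSEval/CleanResponses.csv",  # presumably the cleaned subset
    "KeyQuestions": "MSEval/KeyQuestions.csv",      # presumably the survey questions
}

for name, path in files.items():
    df = pd.read_csv(path)
    print(f"{name}: {df.shape[0]} rows x {df.shape[1]} columns")
    print(list(df.columns))
```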