Description
### Goal of the Competition
Inspired by the OpenBookQA dataset, this competition challenges participants to answer difficult science-based questions written by a Large Language Model. Your work will help researchers better understand the ability of LLMs to test themselves, and the potential of LLMs that can be run in resource-constrained environments.
### Context
As the scope of large language model capabilities expands, a growing area of research is using LLMs to characterize themselves. Because many preexisting NLP benchmarks have been shown to be trivial for state-of-the-art models, there has also been interesting work showing that LLMs can be used to create more challenging tasks to test ever more powerful models. At the same time, methods like quantization and knowledge distillation are being used to effectively shrink language models and run them on more modest hardware. The Kaggle environment provides a unique lens to study this as submissions are subject to both GPU and time limits.
The dataset for this challenge was generated by giving GPT-3.5 snippets of text on a range of scientific topics pulled from Wikipedia, and asking it to write a multiple-choice question (with a known answer), then filtering out easy questions. Right now we estimate that the largest models run on Kaggle are around 10 billion parameters, whereas GPT-3.5 clocks in at 175 billion parameters. If a question-answering model can ace a test written by a question-writing model more than 10 times its size, this would be a genuinely interesting result; on the other hand, if a larger model can effectively stump a smaller one, this has compelling implications on the ability of LLMs to benchmark and test themselves.
This is a Code Competition. Refer to Code Requirements for details.
### Evaluation
Submissions are evaluated according to the Mean Average Precision @ 3 (MAP@3):
\[ \text{MAP@3} = \frac{1}{U} \sum_{u=1}^{U} \sum_{k=1}^{\min(n, 3)} P(k) \times \text{rel}(k) \]
where \( U \) is the number of questions in the test set, \( P(k) \) is the precision at cutoff \( k \), \( n \) is the number of predictions per question, and \( \text{rel}(k) \) is an indicator function equaling 1 if the item at rank \( k \) is a relevant (correct) label, zero otherwise. Once a correct label has been scored for an individual question in the test set, that label is no longer considered relevant for that question, and additional predictions of that label are skipped in the calculation. For example, if the correct label is A for an observation, the following predictions all score an average precision of 1.0:
\[ [A, B, C, D, E] \]
\[ [A, A, A, A, A] \]
\[ [A, B, A, C, A] \]
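The skip rule above means that, with a single correct label per question, each question's average precision reduces to the reciprocal rank of the first correct prediction within the top 3. A minimal sketch of the metric (the function names `apk` and `map_at_3` are illustrative, not an official implementation):

```python
def apk(actual, predicted, k=3):
    """Average precision at k for a single question with one correct label.

    Duplicate predictions of an already-scored label are skipped, per the
    competition's evaluation rules.
    """
    score = 0.0
    hits = 0
    seen = set()
    for i, p in enumerate(predicted[:k]):
        if p == actual and p not in seen:
            hits += 1
            score += hits / (i + 1)  # precision at cutoff i+1
            seen.add(p)
    return score

def map_at_3(actuals, predictions):
    """Mean of per-question average precisions over the test set."""
    return sum(apk(a, p) for a, p in zip(actuals, predictions)) / len(actuals)
```

Applied to the three example prediction lists above (correct label A), each scores 1.0, since the first correct prediction sits at rank 1; a prediction list like `[B, A, C]` would score 0.5.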
### Submission File
For each id in the test set, you may predict up to 3 labels, ordered from most to least confident. The file should contain a header and have the following format:
```
id,prediction
0,A B C
1,B C A
2,C A B
etc.
```
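A submission in this format can be written with the standard library; the `predictions` mapping below is a hypothetical placeholder for your model's ranked outputs:

```python
import csv

# Hypothetical ranked predictions: question id -> top-3 labels, best guess first.
predictions = {0: ["A", "B", "C"], 1: ["B", "C", "A"], 2: ["C", "A", "B"]}

with open("submission.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "prediction"])
    for qid, labels in predictions.items():
        # The three labels are space-separated within a single prediction column.
        writer.writerow([qid, " ".join(labels[:3])])
```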
### Dataset Description
Your challenge in this competition is to answer multiple-choice questions written by an LLM. While the specifics of the process used to generate these questions aren't public, we've included 200 sample questions with answers to show the format, and to give a general sense of the kind of questions in the test set. However, there may be a distributional shift between the sample questions and the test set, so solutions that generalize to a broad set of questions are likely to perform better. Each question consists of a prompt (the question), 5 options labeled `A`, `B`, `C`, `D`, and `E`, and the correct answer labeled `answer` (this holds the label of the most correct answer, as defined by the generating LLM).
This competition uses a hidden test set. When your submitted notebook is scored, the actual test data (including a sample submission) will be made available to your notebook. The test set has the same format as the provided test.csv but has ~4,000 questions that may differ in subject matter.
### Files
- train.csv: a set of 200 questions with the answer column
- test.csv: the test set; your task is to predict the top three most probable answers given the prompt
**NOTE:** The test data you see here is just a copy of the training data without the answers. The unseen re-run test set is comprised of ~4,000 different prompts.
- sample_submission.csv: a sample submission file in the correct format
### Columns
- `prompt`: the text of the question being asked
- `A`: option A; if this option is correct, then `answer` will be `A`
- `B`: option B; if this option is correct, then `answer` will be `B`
- `C`: option C; if this option is correct, then `answer` will be `C`
- `D`: option D; if this option is correct, then `answer` will be `D`
- `E`: option E; if this option is correct, then `answer` will be `E`
- `answer`: the most correct answer, as defined by the generating LLM (one of `A`, `B`, `C`, `D`, or `E`)
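Given per-option scores from any answering model, the ranked top-3 labels for the `prediction` column can be derived by sorting the five option columns by score. A sketch, where the `scores` array is a hypothetical model output (one row per question, columns in A–E order):

```python
import numpy as np

# Option columns in the fixed order used by the dataset.
option_cols = ["A", "B", "C", "D", "E"]

# Hypothetical per-option scores from some answering model, one row per question.
scores = np.array([[0.10, 0.50, 0.20, 0.10, 0.10]])

# Rank options by descending score and keep the top three.
# kind="stable" makes tie-breaking deterministic (the earlier option wins ties).
top3_idx = np.argsort(-scores, axis=1, kind="stable")[:, :3]
top3 = [" ".join(option_cols[i] for i in row) for row in top3_idx]
```

Here the single example row yields the prediction string `"B C A"`: option B has the highest score, then C, and A wins the three-way tie at 0.10 by stable ordering.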