---
license: apache-2.0
task_categories:
- question-answering
tags:
- reasoning
- llm
- benchmark
---
This dataset accompanies the paper *Alice in Wonderland: Simple Tasks Showing Complete Reasoning Breakdown in State-Of-the-Art Large Language Models*. It demonstrates reasoning failures of state-of-the-art Large Language Models (LLMs) on simple common-sense problems, and includes both the prompts and the model responses, allowing researchers to analyze the limitations of current LLMs on reasoning tasks.
## Data Details

The dataset consists of prompts designed to expose weaknesses in reasoning capabilities, together with responses from various LLMs that illustrate the models' performance and confidence levels.
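As a rough illustration (a sketch for orientation, not code taken from the dataset), the paper's core "Alice in Wonderland" (AIW) problem family asks about Alice's siblings; the function names below are hypothetical, and the ground-truth rule is that Alice's brother has all of Alice's sisters plus Alice herself:

```python
def aiw_prompt(n_brothers: int, m_sisters: int) -> str:
    """Build an AIW-style prompt for the given sibling counts.

    Sketch of the problem template described in the paper; the exact
    wording varies across dataset prompt variations.
    """
    return (
        f"Alice has {n_brothers} brothers and she also has "
        f"{m_sisters} sisters. How many sisters does Alice's brother have?"
    )


def aiw_answer(n_brothers: int, m_sisters: int) -> int:
    """Ground-truth answer: Alice's sisters plus Alice herself."""
    return m_sisters + 1


print(aiw_prompt(3, 6))
print(aiw_answer(3, 6))  # 7
```

Despite this arithmetic being trivial, the paper reports that many state-of-the-art LLMs answer such prompts incorrectly while expressing high confidence.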
## Code and Data

The code for reproducing the experiments and the raw experimental data are available at https://github.com/LAION-AI/AIW.