Modalities: Tabular, Text
Formats: csv
Size: < 1K
Libraries: Datasets, pandas
License: apache-2.0
AA-LCR_Dataset.csv CHANGED
The diff for this file is too large to render. See raw diff
 
extracted_text/AA-LCR_extracted-text.zip → AA-LCR_extracted-text.zip RENAMED
File without changes
README.md CHANGED
@@ -1,10 +1,5 @@
 ---
 license: apache-2.0
-configs:
-- config_name: default
-  data_files:
-  - split: test
-    path: "AA-LCR_Dataset.csv"
 ---
 
 # Artificial Analysis Long Context Reasoning (AA-LCR) Dataset
@@ -13,7 +8,7 @@ AA-LCR includes 100 hard text-based questions that require reasoning across mult
 
 ## Dataset Development
 
-AA-LCR was created through a rigorous multi-phase process involving several members of the Artificial Analysis research team and more than a dozen undergraduate students who were engaged on a short-term contract basis to write and/or validate questions.
+AA-LCR was created through a rigorous multi-phase process involving several members of the Artificial Analysis research team and more than a dozen undergraduate students engaged on short-term contracts to write and validate questions.
 
 **Document Curation**: We selected diverse document sets (company reports, government consultations, legal documents, academic papers) averaging ~100,000 tokens each, representing real materials knowledge workers analyze.
 
@@ -31,9 +26,9 @@ This approach validates that AA-LCR tests genuine reasoning capabilities rather
 
 ## Technical Details
 
-AA-LCR comprises 100 questions across 7 types of text-only documents (i.e. Company Reports, Industry Reports, Government Consultations, Academia, Legal, Marketing Materials and Survey Reports). Multiple independent documents, forming a Document Set with a total length of ~100k tokens are passed as context for each question. For instance, the Company Documents topic includes separate document sets containing 2023 and 2024 company reports, respectively.
+AA-LCR comprises 100 questions across 7 types of text-only documents (Company Reports, Industry Reports, Government Consultations, Academia, Legal, Marketing Materials, and Survey Reports). Multiple independent documents, forming a Document Set with a total length of ~100k tokens, are passed as context for each question. For instance, the Company Reports category includes separate document sets containing 2023 and 2024 company reports, respectively.
 
-Each question requires using the Document Set and applying general and mathematical reasoning.
+Each question requires using the Document Set and applying general and mathematical reasoning.
 
 <div class="overflow-x-auto my-6">
 <table class="min-w-full border border-gray-300 bg-white">
@@ -118,11 +113,11 @@ Each question requires using the Document Set and applying general and mathemati
 
 **Sample Question:**
 
-```json
+\`\`\`json
 For the company and quarter where the company reported a 13.5% decline on the prior quarter's operating income, what was their adjusted EBITDA? List the company name and adjusted EBITDA.
 
 Answer: Equinix, $901 million
-```
+\`\`\`
 
 Examples of other types of questions include:
 
@@ -156,53 +151,11 @@ END QUESTION
 
 Reported token counts per question are based on the completed prompt, using the `cl100k_base` tokenizer from `tiktoken`.
 
-The order in which documents are loaded matters: they should be added to the prompt template in the order of the filenames in `data_source_filenames`. Below are code snippets showing how we read the questions and extracted text files from disk.
-
-```python
-import csv
-import os
-
-from huggingface_hub import hf_hub_download
-
-def load_questions(self) -> list[dict]:
-    """Load LCR questions from the HuggingFace dataset"""
-    csv_path = hf_hub_download(
-        repo_id="ArtificialAnalysis/AA-LCR",
-        filename="AA-LCR_Dataset.csv",
-        repo_type="dataset",
-    )
-
-    questions = []
-    with open(csv_path, encoding="utf-8") as f:
-        reader = csv.DictReader(f)
-        for row in reader:
-            # Parse data_source_filenames as an ordered list
-            if "data_source_filenames" in row and isinstance(row["data_source_filenames"], str):
-                row["data_source_filenames"] = row["data_source_filenames"].split(";")
-
-            # Parse answer as a list of semicolon-separated criteria
-            if "answer" in row and isinstance(row["answer"], str):
-                row["answer"] = row["answer"].split(";")
-            questions.append(row)
-
-    return questions
-
-def get_document_set(
-    self, dataset_folder: str, document_category: str, document_set_id: str, data_source_filenames: list[str]
-) -> list[str]:
-    """Get the document set for a question, in the order specified by data_source_filenames"""
-
-    # Documents are extracted to lcr/lcr/{category}/{set_id}/ from the HuggingFace zip
-    document_set_path = os.path.join(dataset_folder, document_category, document_set_id)
-
-    document_texts = []
-    for filename in data_source_filenames:
-        document_path = os.path.join(document_set_path, filename)
-        with open(document_path, encoding="utf-8") as f:
-            document_texts.append(f.read())
-    return document_texts
-```
-
 ## Scoring Approach
 
 We use an LLM-based equality checker to evaluate responses:
 
-```
+\`\`\`
 Assess whether the following CANDIDATE ANSWER is CORRECT or INCORRECT.
 For the CANDIDATE ANSWER to be correct, it must be consistent with the OFFICIAL ANSWER.
 
@@ -212,7 +165,7 @@ CANDIDATE ANSWER TO ASSESS: {candidate_answer}
 
 Reply only with CORRECT or INCORRECT.
 
-```
+\`\`\`
 
 Qwen3 235B A22B 2507 Non-reasoning is used as the equality checker model.
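The equality check above can be sketched as follows. The checker template is reassembled from the fragments visible on this page; the `OFFICIAL ANSWER: {official_answer}` line is an assumption, since the middle of the real template is not shown here. The filled prompt is sent to the checker model (Qwen3 235B A22B 2507 Non-reasoning, as noted above) through whatever inference API serves it.

```python
# Sketch of the LLM-based equality check. The template is reassembled from the
# fragments shown on this page; the OFFICIAL ANSWER line is an assumption.
CHECKER_TEMPLATE = (
    "Assess whether the following CANDIDATE ANSWER is CORRECT or INCORRECT.\n"
    "For the CANDIDATE ANSWER to be correct, it must be consistent with the OFFICIAL ANSWER.\n"
    "\n"
    "OFFICIAL ANSWER: {official_answer}\n"  # assumed field layout
    "\n"
    "CANDIDATE ANSWER TO ASSESS: {candidate_answer}\n"
    "\n"
    "Reply only with CORRECT or INCORRECT.\n"
)

def build_checker_prompt(official_answer: str, candidate_answer: str) -> str:
    """Fill the checker template for one graded response."""
    return CHECKER_TEMPLATE.format(
        official_answer=official_answer, candidate_answer=candidate_answer
    )

def is_correct(checker_reply: str) -> bool:
    """Interpret the checker model's one-word verdict."""
    return checker_reply.strip().upper() == "CORRECT"
```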
 
@@ -222,17 +175,11 @@ The AA-LCR dataset is available at [https://huggingface.co/datasets/ArtificialAn
 
 If you use AA-LCR in your research, please cite:
 
-```json
+\`\`\`json
 @dataset{artificialanalysis2025lcr,
   title={Artificial Analysis Long Context Reasoning Benchmark (LCR)},
   author={Artificial Analysis Team},
   year={2025},
   publisher={Artificial Analysis, Inc.}
 }
-```
-
-## License
-
-**Question set**: Licensed under the Apache License 2.0
-
-**Document set**: Provided as a text representation of documents publicly available at time of dataset creation. We do not claim copyright or place any license over this data.
+\`\`\`
 