---
license: mit
task_categories:
- question-answering
language:
- en
tags:
- context
- qa
- long
- benchmark
- llm
size_categories:
- 1K<n<10K
---

> **Note**: The buckets are chosen to stress-test long-context inference. The exact cutoffs may be implementation-dependent, but each row's `length` field records the precise token count.

## 3. Loading

If this collection is published under a Hugging Face dataset ID (for example, `slinusc/qa_increasing_context_length`), you can load it directly:

```python
from datasets import load_dataset

# Replace with the actual HF dataset ID if different
dataset = load_dataset("slinusc/qa_increasing_context_length")

# Print overall structure and splits
print(dataset)

# Inspect column names in the "train" split
print(dataset["train"].column_names)
```

Expected columns:

```
['context', 'question', 'answer', 'length', 'dataset', 'context_range']
```

## 4. Citation & License

* If you plan to publish results using this dataset, please cite the original LongBench publication (LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding) and note the specific subset(s) from which examples were drawn.
* Check the dataset card on the Hugging Face Hub for detailed licensing information. LongBench subsets typically carry permissive licenses for research use, but always verify at [https://huggingface.co/datasets/…](https://huggingface.co/datasets/…) before redistribution.