---
language:
- en
task_categories:
- question-answering
- summarization
- text-generation
pretty_name: LoopServe Multi-Turn Dialogue Benchmark
tags:
- llm
- kv_cache
configs:
- config_name: conversations
data_files: conversations.jsonl
- config_name: multi_turn_few_shot_learning
data_files: multi_turn/few_shot_learning/*.jsonl
- config_name: multi_turn_needle_in_haystack
data_files: multi_turn/needle_in_haystack/*.jsonl
- config_name: multi_turn_question_answering
data_files: multi_turn/question_answering/*.jsonl
- config_name: multi_turn_summarization
data_files: multi_turn/summarization/*.jsonl
- config_name: single_turn_few_shot_learning
data_files: single_turn/few_shot_learning/*.jsonl
- config_name: single_turn_needle_in_haystack
data_files: single_turn/needle_in_haystack/*.jsonl
- config_name: single_turn_question_answering
data_files: single_turn/question_answering/*.jsonl
- config_name: single_turn_summarization
data_files: single_turn/summarization/*.jsonl
---
This repository contains the benchmark datasets proposed in the paper **[LoopServe: An Adaptive Dual-phase LLM Inference Acceleration System for Multi-Turn Dialogues](https://huggingface.co/papers/2507.13681)**.

The LoopServe benchmark introduces eleven multi-turn datasets designed to evaluate large language models (LLMs) on realistic query positions and conversational dependencies. This is crucial for assessing LLM inference acceleration methods in dynamic, multi-turn dialogue settings common in applications such as chatbots and virtual assistants.

**Paper:** [LoopServe: An Adaptive Dual-phase LLM Inference Acceleration System for Multi-Turn Dialogues](https://huggingface.co/papers/2507.13681)
### Sample Usage
You can load different subsets of the dataset using the `load_dataset` function from the `datasets` library. For example, to load the `multi_turn_question_answering` subset:
```python
from datasets import load_dataset

# Load the multi-turn question-answering subset
dataset_qa_multi = load_dataset("MKV_Cache", "multi_turn_question_answering")
print(dataset_qa_multi)

# Load the single-turn summarization subset
dataset_sum_single = load_dataset("MKV_Cache", "single_turn_summarization")
print(dataset_sum_single)

# Load the base conversations data
dataset_conv = load_dataset("MKV_Cache", "conversations")
print(dataset_conv)
```
### Dataset Structure
The repository contains the following file structure for the benchmark data:
```shell
.
├── README.md
├── conversations.jsonl
├── multi_turn
│   ├── few_shot_learning
│   ├── needle_in_haystack
│   ├── question_answering
│   └── summarization
└── single_turn
    ├── few_shot_learning
    ├── needle_in_haystack
    ├── question_answering
    └── summarization
```
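Since every split is stored as JSON Lines (`.jsonl`), the raw files can also be read directly without the `datasets` library. Below is a minimal sketch of such a reader; the directory path in the commented usage example assumes the layout shown above, and no particular record schema is assumed:

```python
import json
from pathlib import Path


def read_jsonl(path):
    """Yield one parsed record per non-empty line of a JSON Lines file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)


# Example: count records in every multi-turn split
# for path in sorted(Path("multi_turn").rglob("*.jsonl")):
#     n_records = sum(1 for _ in read_jsonl(path))
#     print(path, n_records)
```

This can be handy for quick inspection of individual splits, though `load_dataset` remains the recommended entry point since it handles config resolution and caching for you.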