---
language:
- en
task_categories:
- question-answering
- summarization
- text-generation
pretty_name: LoopServe Multi-Turn Dialogue Benchmark
tags:
- llm
- kv_cache
configs:
- config_name: conversations
  data_files: conversations.jsonl
- config_name: multi_turn_few_shot_learning
  data_files: multi_turn/few_shot_learning/*.jsonl
- config_name: multi_turn_needle_in_haystack
  data_files: multi_turn/needle_in_haystack/*.jsonl
- config_name: multi_turn_question_answering
  data_files: multi_turn/question_answering/*.jsonl
- config_name: multi_turn_summarization
  data_files: multi_turn/summarization/*.jsonl
- config_name: single_turn_few_shot_learning
  data_files: single_turn/few_shot_learning/*.jsonl
- config_name: single_turn_needle_in_haystack
  data_files: single_turn/needle_in_haystack/*.jsonl
- config_name: single_turn_question_answering
  data_files: single_turn/question_answering/*.jsonl
- config_name: single_turn_summarization
  data_files: single_turn/summarization/*.jsonl
---
# LoopServe Multi-Turn Dialogue Benchmark

This repository contains the benchmark datasets proposed in the paper *LoopServe: An Adaptive Dual-phase LLM Inference Acceleration System for Multi-Turn Dialogues*.

The LoopServe benchmark introduces eleven multi-turn datasets designed to evaluate large language models (LLMs) on realistic query positions and conversational dependencies. This is crucial for assessing LLM inference acceleration methods in the dynamic, multi-turn dialogue settings common to applications such as chatbots and virtual assistants.

**Paper:** LoopServe: An Adaptive Dual-phase LLM Inference Acceleration System for Multi-Turn Dialogues
## Sample Usage

You can load each subset of the benchmark with the `load_dataset` function from the `datasets` library. For example, to load the `multi_turn_question_answering` subset:
```python
from datasets import load_dataset

# Load the multi-turn question-answering subset
dataset_qa_multi = load_dataset("MKV_Cache", "multi_turn_question_answering")
print(dataset_qa_multi)

# Load the single-turn summarization subset
dataset_sum_single = load_dataset("MKV_Cache", "single_turn_summarization")
print(dataset_sum_single)

# Load the base conversations data
dataset_conv = load_dataset("MKV_Cache", "conversations")
print(dataset_conv)
```
## Dataset Structure

The repository contains the following file structure for the benchmark data:
```
.
├── README.md
├── conversations.jsonl
├── multi_turn
│   ├── few_shot_learning
│   ├── needle_in_haystack
│   ├── question_answering
│   └── summarization
└── single_turn
    ├── few_shot_learning
    ├── needle_in_haystack
    ├── question_answering
    └── summarization
```
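The layout above is uniform: every task config name is `<setting>_<task>`, and its data files live under `<setting>/<task>/*.jsonl`, with `conversations` as the only special case. As a minimal sketch of that mapping (note: `config_glob` is a hypothetical helper written for illustration, not part of the `datasets` API):

```python
# Hypothetical helper: reconstruct each config's data_files glob
# from the repository layout shown above.
TASKS = ["few_shot_learning", "needle_in_haystack",
         "question_answering", "summarization"]
SETTINGS = ["multi_turn", "single_turn"]

def config_glob(config_name: str) -> str:
    """Map a config name such as 'multi_turn_summarization' to its file glob."""
    if config_name == "conversations":
        return "conversations.jsonl"
    for setting in SETTINGS:
        prefix = setting + "_"
        if config_name.startswith(prefix):
            task = config_name[len(prefix):]
            return f"{setting}/{task}/*.jsonl"
    raise ValueError(f"unknown config: {config_name}")

# All nine config names declared in the card's YAML metadata.
all_configs = ["conversations"] + [f"{s}_{t}" for s in SETTINGS for t in TASKS]
```

This is handy, for example, when scripting benchmark runs over every subset without hard-coding the nine config names.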