---
language:
  - en
task_categories:
  - question-answering
  - summarization
  - text-generation
pretty_name: LoopServe Multi-Turn Dialogue Benchmark
tags:
  - llm
  - kv_cache
configs:
  - config_name: conversations
    data_files: conversations.jsonl
  - config_name: multi_turn_few_shot_learning
    data_files: multi_turn/few_shot_learning/*.jsonl
  - config_name: multi_turn_needle_in_haystack
    data_files: multi_turn/needle_in_haystack/*.jsonl
  - config_name: multi_turn_question_answering
    data_files: multi_turn/question_answering/*.jsonl
  - config_name: multi_turn_summarization
    data_files: multi_turn/summarization/*.jsonl
  - config_name: single_turn_few_shot_learning
    data_files: single_turn/few_shot_learning/*.jsonl
  - config_name: single_turn_needle_in_haystack
    data_files: single_turn/needle_in_haystack/*.jsonl
  - config_name: single_turn_question_answering
    data_files: single_turn/question_answering/*.jsonl
  - config_name: single_turn_summarization
    data_files: single_turn/summarization/*.jsonl
---

# LoopServe Multi-Turn Dialogue Benchmark

This repository contains the benchmark datasets proposed in the paper *LoopServe: An Adaptive Dual-phase LLM Inference Acceleration System for Multi-Turn Dialogues*.

The LoopServe benchmark comprises eleven multi-turn datasets designed to evaluate large language models (LLMs) under realistic query positions and conversational dependencies. This is crucial for assessing LLM inference acceleration methods in the dynamic, multi-turn dialogue settings common to applications such as chatbots and virtual assistants.

Paper: *LoopServe: An Adaptive Dual-phase LLM Inference Acceleration System for Multi-Turn Dialogues*

## Sample Usage

You can load different subsets of the dataset by passing the corresponding config name to the `load_dataset` function from the `datasets` library. For example, to load the `multi_turn_question_answering` subset:

```python
from datasets import load_dataset

# Load the multi-turn question-answering subset
dataset_qa_multi = load_dataset("MKV_Cache", "multi_turn_question_answering")
print(dataset_qa_multi)

# Load the single-turn summarization subset
dataset_sum_single = load_dataset("MKV_Cache", "single_turn_summarization")
print(dataset_sum_single)

# Load the base conversations data
dataset_conv = load_dataset("MKV_Cache", "conversations")
print(dataset_conv)
```

## Dataset Structure

The repository contains the following file structure for the benchmark data:

```
.
├── README.md
├── conversations.jsonl
├── multi_turn
│   ├── few_shot_learning
│   ├── needle_in_haystack
│   ├── question_answering
│   └── summarization
└── single_turn
    ├── few_shot_learning
    ├── needle_in_haystack
    ├── question_answering
    └── summarization
```