Improve dataset card: Add paper link, update name, expand configs, and enhance description

#1
by nielsr (HF Staff) - opened
Files changed (1)
  1. README.md +51 -8
README.md CHANGED
````diff
@@ -1,22 +1,65 @@
 ---
+language:
+- en
 task_categories:
 - question-answering
 - summarization
 - text-generation
-language:
-- en
+pretty_name: LoopServe Multi-Turn Dialogue Benchmark
 tags:
 - llm
 - kv_cache
-pretty_name: MKV_Cache
-
 configs:
-- config_name: question-answering
-  data_files: question-answering/*.jsonl
-- config_name: summarization
-  data_files: summarization/*.jsonl
+- config_name: conversations
+  data_files: conversations.jsonl
+- config_name: multi_turn_few_shot_learning
+  data_files: multi_turn/few_shot_learning/*.jsonl
+- config_name: multi_turn_needle_in_haystack
+  data_files: multi_turn/needle_in_haystack/*.jsonl
+- config_name: multi_turn_question_answering
+  data_files: multi_turn/question_answering/*.jsonl
+- config_name: multi_turn_summarization
+  data_files: multi_turn/summarization/*.jsonl
+- config_name: single_turn_few_shot_learning
+  data_files: single_turn/few_shot_learning/*.jsonl
+- config_name: single_turn_needle_in_haystack
+  data_files: single_turn/needle_in_haystack/*.jsonl
+- config_name: single_turn_question_answering
+  data_files: single_turn/question_answering/*.jsonl
+- config_name: single_turn_summarization
+  data_files: single_turn/summarization/*.jsonl
 ---
 
+This repository contains the benchmark datasets proposed in the paper **[LoopServe: An Adaptive Dual-phase LLM Inference Acceleration System for Multi-Turn Dialogues](https://huggingface.co/papers/2507.13681)**.
+
+The LoopServe benchmark introduces eleven multi-turn datasets designed to evaluate large language models (LLMs) on realistic query positions and conversational dependencies. This is crucial for assessing LLM inference acceleration methods in dynamic, multi-turn dialogue settings common in applications like chatbots and virtual assistants.
+
+**Paper:** [LoopServe: An Adaptive Dual-phase LLM Inference Acceleration System for Multi-Turn Dialogues](https://huggingface.co/papers/2507.13681)
+
+### Sample Usage
+
+You can load different subsets of the dataset using the `load_dataset` function from the `datasets` library. For example, to load the `multi_turn_question_answering` subset:
+
+```python
+from datasets import load_dataset
+
+# Load the multi-turn question-answering subset
+dataset_qa_multi = load_dataset("MKV_Cache", "multi_turn_question_answering")
+print(dataset_qa_multi)
+
+# Load the single-turn summarization subset
+dataset_sum_single = load_dataset("MKV_Cache", "single_turn_summarization")
+print(dataset_sum_single)
+
+# Load the base conversations data
+dataset_conv = load_dataset("MKV_Cache", "conversations")
+print(dataset_conv)
+```
+
+### Dataset Structure
+
+The repository contains the following file structure for the benchmark data:
+
 ``` shell
 .
 ├── README.md
````
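The new `configs` section follows a consistent naming convention: each config name is the data path with the directory separator replaced by an underscore (e.g. `multi_turn/question_answering/*.jsonl` maps to `multi_turn_question_answering`). A minimal sketch of that mapping, for reviewers checking the config list against the file tree — the helper function below is illustrative only and not part of the repository:

```python
# Hypothetical helper mirroring the README's config naming convention:
# strip the trailing glob / .jsonl extension, then join path parts with "_".
def config_name_from_path(data_files: str) -> str:
    path = data_files.rsplit("/*", 1)[0].rsplit(".jsonl", 1)[0]
    return path.replace("/", "_")

# The data_files patterns added in this PR
patterns = [
    "conversations.jsonl",
    "multi_turn/few_shot_learning/*.jsonl",
    "multi_turn/question_answering/*.jsonl",
    "single_turn/summarization/*.jsonl",
]
for p in patterns:
    print(f"{p} -> {config_name_from_path(p)}")
```

Applying this to all nine `data_files` patterns reproduces exactly the nine `config_name` entries in the diff, which is a quick way to confirm none of the configs points at a mistyped directory.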