Duguce committed
Commit
78e70a6
·
1 Parent(s): 54b2a17

feat: add data files config

Files changed (1)
  1. README.md +7 -5
README.md CHANGED
@@ -3,12 +3,14 @@ pretty_name: TurtleBench
 size_categories:
 - 1K<n<10K
 configs:
-- config_name: chinese
+- config_name: default
   data_files:
-  - path: "chinese/zh_data-00000-of-00001.jsonl"
-- config_name: english
-  data_files:
-  - path: "english/zh_data-00000-of-00001.jsonl"
+  - split: chinese
+    path:
+    - "chinese/zh_data-00000-of-00001.jsonl"
+    - "data/def.csv"
+  - split: english
+    path: "english/zh_data-00000-of-00001.jsonl"
 ---
 ## Overview
 TurtleBench is a novel evaluation benchmark designed to assess the reasoning capabilities of large language models (LLMs) using yes/no puzzles (commonly known as "Turtle Soup puzzles"). This dataset is constructed based on user guesses collected from our online Turtle Soup Puzzle platform, providing a dynamic and interactive means of evaluation. Unlike traditional static evaluation benchmarks, TurtleBench focuses on testing models in interactive settings to better capture their logical reasoning performance. The dataset contains real user guesses and annotated responses, enabling a fair and challenging evaluation for modern LLMs.
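After this change, the single `default` config maps each split name to a list of data files, and the `chinese` split draws from two files of different formats (`.jsonl` and `.csv`). A minimal standard-library sketch of how such a mapping resolves into records (the file names come from the config above; the sample rows and their `guess`/`label` fields are invented for illustration, not the dataset's confirmed schema):

```python
import csv
import io
import json

def read_records(name: str, text: str) -> list[dict]:
    """Parse one data file's contents; the config mixes .jsonl and .csv files."""
    if name.endswith(".jsonl"):
        return [json.loads(line) for line in text.splitlines() if line.strip()]
    if name.endswith(".csv"):
        return list(csv.DictReader(io.StringIO(text)))
    raise ValueError(f"unsupported file type: {name}")

# Invented sample contents standing in for the real files.
jsonl_text = '{"guess": "He was blind", "label": "yes"}\n{"guess": "It was dark", "label": "no"}\n'
csv_text = "guess,label\nThe soup was human,yes\n"

# The `chinese` split concatenates records from both files listed under its path.
chinese = read_records("chinese/zh_data-00000-of-00001.jsonl", jsonl_text) \
        + read_records("data/def.csv", csv_text)
english = read_records("english/zh_data-00000-of-00001.jsonl", jsonl_text)
```

In practice the Hub applies this resolution automatically, so consumers would simply call `datasets.load_dataset(...)` with `split="chinese"` or `split="english"` rather than reading files by hand.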