rooneyma756 committed · verified
Commit 578ad34 · Parent(s): 21c98b7

Update README.md

Files changed (1): README.md (+10 −10)
README.md CHANGED

@@ -16,7 +16,7 @@ size_categories:
 
 
 # Dataset Summary
-CCR-Bench is designed to assess LLMs’ ability to follow complex instructions through a progressive and multi-dimensional lens. The construction of CCR-Bench follows a logical progression from simple to complex, and from foundational to application-level scenarios. It contains 174 test cases and comprises three core components: Complex Content-Format Constraints, Logical Workflow Control and Industrial Scenario Application. The goal is to evaluate the practical utility and robustness of LLMs under conditions that approximate real-world industrial deployments. **We recommend reading the [paper]() for more background on task significance.**
+CCR-Bench is designed to assess LLMs’ ability to follow complex instructions through a progressive and multi-dimensional lens. The construction of CCR-Bench follows a logical progression from simple to complex, and from foundational to application-level scenarios. It contains 174 test cases and comprises three core components: Complex Content-Format Constraints, Logical Workflow Control and Industrial Scenario Application. The goal is to evaluate the practical utility and robustness of LLMs under conditions that approximate real-world industrial deployments. **We recommend reading the [paper](http://arxiv.org/abs/2506.18421) for more background on task significance.**
 
 
 # Dataset Description
@@ -27,7 +27,7 @@ CCR_Bench covers 3 core compents:
 
 * **Complex Content-Format Constraints**: Existing benchmarks have improved model performance in generating structured text and adhering to simple constraints. Building upon this, CCR-Bench introduces a set of tightly coupled “content-format” instructions, in which the output format itself constitutes a critical component of the content logic. These instructions require models to generate specific content while strictly adhering to predefined format specifications, where the format itself is an integral component of the content’s logical structure. This dimension aims to rigorously assess models’ precision in complying with complex, multi-layered constraints, especially their ability to integrate formatting requirements into the overall logic of content generation.
 
-* **Logical Workflow Control**: To complement the limited scope of current instruction-following benchmarks in evaluating complex task decomposition, conditional reasoning, stepwise planning, and tool usage, we design tasks that demand multi-turn interaction, procedural planning, and state tracking. This dimension evaluates a model’s capacity to transition from passively following instructions to actively orchestrating and executing complex workflows. This datasets contains complex instruction sets from 9 small scenarios, which are customer service, data flow, flight tickets, maze, online game, printer assistant, real estate, tree painter, world cup simulator.
+* **Logical Workflow Control**: To complement the limited scope of current instruction-following benchmarks in evaluating complex task decomposition, conditional reasoning, stepwise planning, and tool usage, we design tasks that demand multi-turn interaction with the user, procedural planning, and state tracking. This dimension evaluates a model’s capacity to transition from passively following instructions to actively orchestrating and executing complex workflows. This dataset contains complex instruction sets from 9 small scenarios, which are customer service, data flow, flight tickets, maze, online game, printer assistant, real estate, tree painter, and world cup simulator.
 
 * **Industrial Scenario Application**: This dimension synthesizes the capabilities assessed in the previous two components by introducing comprehensive tasks situated in realistic industrial contexts. These tasks involve both
 content-format constraints and logical reasoning, while also being tightly integrated with domain-specific requirements. The goal is to evaluate the practical utility and robustness of LLMs under conditions that approximate real-world industrial deployments. The dataset contains complex instruction sets for medical-industrial scenarios, such as pre-consultation and healthy diet guidance.
@@ -44,9 +44,9 @@ There are three files in the root directory: complex_content_format_constraint,
 
 Among these, both complex_content_format_constraint and industry_scenario_application directory contain only a single file each, namely the test data in JSONL format.
 
-The logical_workflow_control directory, however, has a more complex structure. Within its folder, there are two subdirectories: resources, scenarios_zh, test_cases.
+The logical_workflow_control directory, however, has a more complex structure. Within its folder, there are three subdirectories: resources, scenarios_zh, test_cases.
 
-* The resources folder contains additional reference information required by the model for three scenarios: World Cup, Maze, and Print Trees.
+* The resources folder contains additional reference information required by the model for three scenarios: world cup simulator, maze, and tree painter.
 
 * The scenarios_zh folder stores flowcharts, user dialogue templates, and notes for each respective scenario.
 
@@ -136,12 +136,12 @@ An example of the industry_scenario_application looks as follows:
 ### Data Fields
 
 The data fields on logical_workflow_control are as follows:
-* `idx`: A unique ID for the prompt.
-* `language`: Describes the language of the data.
-* `scenario`: Describes the scenario the data.
-* `user_targets`: Describes the task the model should perform.
-* `extra_info`: Describes the extra_info the model can reference during the response.
-* `possible_function_calls`: Describes the possible function calls the model perform the prompt.
+* `idx`: A unique ID for the test case.
+* `language`: Describes the language of the test case.
+* `scenario`: Describes the scenario used in the test case.
+* `user_targets`: Describes the tasks the user agent needs to finish.
+* `extra_info`: Describes the extra information of the user agent.
+* `possible_function_calls`: Describes the function calls used in the execution process of the golden answer, which is only used during the scoring process.
 
 The data fields on industry_scenario_application are as follows:
 * `idx`: A unique ID for the prompt.
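The updated field list describes the per-line JSONL schema of the logical_workflow_control test file. A minimal sketch of parsing one record with Python's standard library — only the field names come from the README; the sample values and their types are invented for illustration:

```python
import json

# Illustrative record: field names match the logical_workflow_control
# schema from the README; every value below is invented.
sample_line = json.dumps({
    "idx": 0,
    "language": "zh",
    "scenario": "maze",
    "user_targets": ["reach the exit of the maze"],
    "extra_info": "",
    "possible_function_calls": ["move"],
})

# Each line of the JSONL file is one such JSON object.
record = json.loads(sample_line)

# `possible_function_calls` is consulted only during scoring.
print(record["idx"], record["scenario"])  # prints "0 maze"
```

In practice one would iterate over the file line by line, calling `json.loads` on each non-empty line to obtain a test case.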