---
license: mit
task_categories:
- text-generation
- reinforcement-learning
- other
tags:
- crm
- multi-turn-conversation
- tool-calling
- agent-evaluation
- benchmark
- continual-learning
- conversational-ai
- tool-use
- multi-turn-dialogue
size_categories:
- 1K<n<10K
---

# Arc CRM Benchmark Dataset

## Dataset Description

The Arc CRM Benchmark is a synthetic, production-realistic CRM environment for evaluating LLM agents on state-modifying workflows. The dataset provides a comprehensive testbed for measuring agent performance, reliability, and adaptation under continual-learning frameworks.

The dataset contains **1,200 multi-turn conversations** covering diverse CRM workflows with varying complexity. Each conversation simulates realistic user interactions with a CRM system, requiring agents to execute tool calls, manage state, and handle cross-turn references.

### Dataset Summary

- **Total Conversations**: 1,200
- **Format**: JSONL (one conversation per line)
- **Complexity Distribution**:
  - **Simple** (1-3 turns): 280 conversations (~23%)
  - **Medium** (4-6 turns): 625 conversations (~52%)
  - **Complex** (7-10 turns): 295 conversations (~25%)

### Workflow Categories

The dataset spans **seven workflow categories** derived from production CRM task definitions:

1. **Opportunity Management**: Create, modify, search, view details
2. **Quote Generation and Management**
3. **Client and Contact Management**
4. **Document Upload and Management**
5. **Contract Creation and Tracking**
6. **Note and Communication Logging**
7. **Cross-entity workflows** combining multiple operations

### Key Features

- **Production-Realistic CRM Schema**: Full entity model with strict validation, foreign-key relationships, enum constraints, and business logic guards
- **Template References**: Conversations use `{{turn_N.field}}` syntax for cross-turn entity references (see the resolver sketch after this list)
- **Schema Compliance**: All tool arguments validated against production CRM schema
- **Deterministic Generation**: Every conversation can be regenerated from seed data and schema definitions
- **Initial State**: Each conversation includes initial entity state (clients, opportunities, quotes, contracts, documents, notes)
- **Expected Responses**: Ground-truth assistant responses for LLM judge evaluation
- **Success Criteria**: Multiple evaluation modes (all_turns, final_state, both)
- **Failure Scenarios**: Includes conversations with expected failures for robustness testing
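
Because downstream turns frequently depend on entities created earlier, resolving these templates is the first step in replaying a conversation. The sketch below shows one way to do it, assuming the harness stores each turn's tool result in a dict keyed by turn number; the regex and the `turn_results` convention are illustrative assumptions, not the harness's actual resolver.

```python
import re

# Matches {{turn_N.field}} placeholders, e.g. {{turn_1.id}}.
TEMPLATE_RE = re.compile(r"\{\{turn_(\d+)\.(\w+)\}\}")

def resolve_templates(value, turn_results):
    """Replace {{turn_N.field}} placeholders with values from earlier turns.

    `turn_results` is a hypothetical store mapping turn_id -> result dict;
    the real harness may track per-turn state differently.
    """
    if not isinstance(value, str):
        return value
    return TEMPLATE_RE.sub(
        lambda m: str(turn_results[int(m.group(1))][m.group(2)]),
        value,
    )

# Illustrative: turn 1 created an opportunity, turn 2 references its id.
turn_results = {1: {"id": "opp_123"}}
print(resolve_templates("{{turn_1.id}}", turn_results))  # -> opp_123
```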

## Dataset Structure

Each conversation contains the following fields (a schematic example follows the list):

- **`conversation_id`**: Unique identifier for the conversation
- **`workflow_category`**: Category of workflow (e.g., "Opportunity Management", "Client Management")
- **`complexity_level`**: "simple", "medium", or "complex"
- **`turns`**: List of conversation turns, each containing:
  - `turn_id`: Sequential turn number (1-indexed)
  - `user_utterance`: Natural language user input
  - `expected_tool`: Tool name expected to be called
  - `expected_args`: Dictionary of expected arguments (may contain `{{turn_N.field}}` templates)
  - `references_previous_turns`: List of turn IDs this turn references
  - `expect_success`: Whether this turn is expected to succeed
  - `expected_error_substring`: If expect_success=False, substring to match in error message
  - `failure_category`: Category of failure if this is a failure scenario
  - `expected_response`: Structured description of expected assistant reply with evaluation criteria
- **`initial_entities`**: Dictionary of entities that exist before conversation starts (seed_data with Client, Contact, Opportunity, Quote, Contract entities)
- **`final_expected_state`**: Expected state after all turns complete (for validation)
- **`success_criteria`**: How to evaluate success ("all_turns", "final_state", or "both")
- **`contains_failure`**: Whether conversation contains a failure scenario
- **`failure_turn`**: Turn number where failure is expected (if contains_failure=True)
- **`verification_mode`**: How to verify conversation success ("database" or "mock")
- **`chain_id`**: Optional chain identifier if conversation is part of a workflow chain
- **`segment_number`**: Optional segment number within a chain (1-indexed)
- **`segment_boundaries`**: Optional list of turn numbers where segments end (for chained conversations)
- **`expected_outcome`**: Optional expected outcome description
- **`cumulative_context`**: Optional dictionary of context accumulated from previous segments (for chains)
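
As a rough illustration, an abbreviated record might look like the following; all identifiers, tool names, and field values here are hypothetical and are not drawn from the dataset itself:

```python
# Schematic, abbreviated conversation record; every value is illustrative.
example_conversation = {
    "conversation_id": "conv_0001",
    "workflow_category": "Opportunity Management",
    "complexity_level": "simple",
    "turns": [
        {
            "turn_id": 1,
            "user_utterance": "Create an opportunity for Acme Corp.",
            "expected_tool": "create_opportunity",  # hypothetical tool name
            "expected_args": {"client_id": "client_001", "name": "Acme Deal"},
            "references_previous_turns": [],
            "expect_success": True,
        },
        {
            "turn_id": 2,
            "user_utterance": "Attach a note to that opportunity.",
            "expected_tool": "create_note",         # hypothetical tool name
            "expected_args": {"opportunity_id": "{{turn_1.id}}"},
            "references_previous_turns": [1],
            "expect_success": True,
        },
    ],
    "initial_entities": {
        "seed_data": {"Client": {"client_001": {"name": "Acme Corp"}}}
    },
    "success_criteria": "all_turns",
    "contains_failure": False,
    "verification_mode": "mock",
}
```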

## Usage

### Loading the Dataset

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("Arc-Intelligence/arc-crm-benchmark", split="train")

# Get first conversation
conv = dataset[0]
print(f"Conversation ID: {conv['conversation_id']}")
print(f"Complexity: {conv['complexity_level']}")
print(f"Workflow: {conv['workflow_category']}")
print(f"Number of turns: {len(conv['turns'])}")
```

**Note**: The Hugging Face dataset viewer is not available for this dataset because individual conversations are large and deeply nested (each record carries multiple turns, initial entities, and expected responses). The dataset itself is fully functional and loads programmatically with the `datasets` library, as shown above.

### Example: Iterating Through Turns

```python
conversation = dataset[0]

for turn in conversation['turns']:
    print(f"Turn {turn['turn_id']}: {turn['user_utterance']}")
    print(f"  Expected tool: {turn['expected_tool']}")
    print(f"  Expected args: {turn['expected_args']}")
    if turn.get('expected_response'):
        print(f"  Expected response: {turn['expected_response']['text']}")
```
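
### Example: Grouping Chained Conversations

Conversations that belong to a workflow chain can be grouped via the optional `chain_id` field and ordered by `segment_number`. A quick sketch:

```python
from collections import defaultdict

# Group chained conversations; standalone conversations have no chain_id.
chains = defaultdict(list)
for conv in dataset:
    if conv.get("chain_id"):
        chains[conv["chain_id"]].append(conv)

for chain_id, segments in chains.items():
    segments.sort(key=lambda c: c["segment_number"])
    print(f"Chain {chain_id}: {len(segments)} segment(s)")
```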

### Example: Accessing Initial State

```python
conversation = dataset[0]
initial_entities = conversation['initial_entities']['seed_data']

# Access pre-existing clients
if initial_entities and 'Client' in initial_entities:
    for client_id, client_data in initial_entities['Client'].items():
        print(f"Client: {client_data['name']} ({client_id})")
```
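
### Example: Inspecting Complexity and Failure Scenarios

To sanity-check the distribution described above, count records by `complexity_level` and filter for failure scenarios. This sketch uses only the documented top-level fields:

```python
from collections import Counter

# Count conversations per complexity level (expected roughly 280/625/295).
complexity_counts = Counter(conv["complexity_level"] for conv in dataset)
print(complexity_counts)

# Keep only conversations that include an expected failure turn.
failure_convs = dataset.filter(lambda conv: conv["contains_failure"])
print(f"Failure scenarios: {len(failure_convs)}")
```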

## Evaluation

This dataset is designed for evaluating:

- **Tool calling accuracy**: Correct tool selection and argument parsing (see the scoring sketch after this list)
- **Multi-turn conversation handling**: Maintaining context across turns
- **State management**: Tracking and modifying CRM entities correctly
- **Cross-turn reference resolution**: Resolving `{{turn_N.field}}` template references
- **Response quality**: Natural language communication of results
- **Robustness**: Handling failure scenarios and error conditions
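
A minimal per-turn check compares the agent's tool call, normalized to a `{"tool": ..., "args": ...}` dict, against the turn's expectations after template references have been resolved. This is a simplified sketch; the actual harness also performs schema validation and LLM-judge response scoring:

```python
def score_turn(agent_call: dict, turn: dict) -> bool:
    """Exact-match check of one turn's tool call against expectations.

    Assumes `agent_call` is normalized to {"tool": str, "args": dict} and
    that any {{turn_N.field}} templates in expected_args were resolved first.
    """
    return (
        agent_call["tool"] == turn["expected_tool"]
        and agent_call["args"] == turn["expected_args"]
    )

# Illustrative values only.
turn = {"expected_tool": "create_opportunity",
        "expected_args": {"name": "Acme Deal"}}
agent_call = {"tool": "create_opportunity", "args": {"name": "Acme Deal"}}
print(score_turn(agent_call, turn))  # -> True
```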

The dataset is compatible with the [Arc CRM Benchmark evaluation harness](https://github.com/Arc-Computer/arc-crm-benchmark), which provides comprehensive metrics including tool execution validation, response quality assessment via LLM judge, and token usage tracking.

## Related Resources

- **Repository**: [github.com/Arc-Computer/arc-crm-benchmark](https://github.com/Arc-Computer/arc-crm-benchmark)
- **Atlas SDK**: [github.com/Arc-Computer/atlas-sdk](https://github.com/Arc-Computer/atlas-sdk) - Runtime adaptive learning framework
- **Documentation**: [docs.arc.computer](https://docs.arc.computer)

## Citation

If you use this dataset in your research, please cite:

```bibtex
@software{arc_crm_benchmark,
  title = {Arc CRM Benchmark: A Synthetic Environment for LLM Agent Evaluation},
  author = {Arc Intelligence},
  year = {2025},
  url = {https://github.com/Arc-Computer/arc-crm-benchmark},
  version = {1.0}
}
```

## License

This dataset is released under the MIT License. See the [LICENSE](https://github.com/Arc-Computer/arc-crm-benchmark/blob/main/LICENSE) file for details.

## Acknowledgments

This benchmark was developed in collaboration with the Reply Scale AI research team, whose extensive exposure to production CRM systems deployed at large organizations provided critical domain expertise in designing the CRM schema, workflow patterns, and interaction models. Their input ensured the benchmark reflects the API structures, validation constraints, and operational complexity of enterprise production environments, so researchers and practitioners can evaluate agent reliability, efficiency, and adaptation in scenarios that mirror actual deployment conditions.

## Contact

For questions, issues, or collaboration opportunities:
- **GitHub Issues**: [github.com/Arc-Computer/arc-crm-benchmark/issues](https://github.com/Arc-Computer/arc-crm-benchmark/issues)
- **Organization**: [Arc Intelligence](https://huggingface.co/Arc-Intelligence)