---
license: mit
task_categories:
  - text-classification
  - tabular-classification
language:
  - en
tags:
  - sequential-recommendation
  - markov-chain
  - transformer
  - multi-task-learning
  - api-recommendation
  - context-engineering
  - user-behavior
  - simulation
size_categories:
  - 10K<n<100K
pretty_name: Context Engineering V1 - Sequential API Recommendation Dataset
---

# Context Engineering V1: Sequential API Recommendation Dataset

This dataset accompanies the research paper:

> **Rethink Context Engineering Using an Attention-based Architecture**
> Yiqiao Yin — University of Chicago Booth School of Business / Columbia University

It was generated using the open-source **`context-engineer`** Python package:

- **GitHub:** [https://github.com/yiqiao-yin/context-engineer-repo](https://github.com/yiqiao-yin/context-engineer-repo)
- **PyPI:** [https://pypi.org/project/context-engineer/0.1.0/](https://pypi.org/project/context-engineer/0.1.0/)

---

## Dataset Summary

This dataset contains **simulated sequential API usage logs** modeled as Markov chains, designed for training and evaluating multi-task transformer models for sequential API recommendation. The simulation encompasses **2,000 user sessions** totaling **20,000 API calls** across **100 APIs** organized into **10 functional categories**, with **4 distinct session goal types** driving workflow-specific behavioral patterns.

The dataset is split into two files:

| File | Rows | Description |
|---|---|---|
| `user_sessions.parquet` | 2,000 | Full user session sequences with goal labels |
| `training_pairs.parquet` | 18,000 | Supervised input-output pairs for model training |

### Key Statistics

| Metric | Value |
|---|---|
| Total users | 2,000 |
| Total API calls | 20,000 |
| Unique APIs | 100 (across 10 categories) |
| Avg. session length | 10 API calls |
| Session goal types | 4 |
| Training pairs generated | 18,000 |
| Max input sequence length | 6 |
| Random seed | 42 |

---

## Dataset Structure

### `user_sessions.parquet`

Each row represents one complete user session:

| Column | Type | Description |
|---|---|---|
| `user_id` | int | Unique user/session identifier (0–1999) |
| `session_goal_id` | int | Goal type ID (0–3) |
| `session_goal` | string | Goal name: `ml_pipeline`, `data_analysis`, `user_management`, `quick_viz` |
| `sequence_length` | int | Number of API calls in the session |
| `api_sequence` | string (JSON list) | Ordered list of API IDs called during the session |
| `category_sequence` | string (JSON list) | Ordered list of API category names |

### `training_pairs.parquet`

Each row is a supervised training example with multi-task labels:

| Column | Type | Description |
|---|---|---|
| `input_sequence` | string (JSON list) | Context window of preceding API calls (up to 6) |
| `input_length` | int | Number of tokens in the input sequence |
| `target_api` | int | Ground-truth next API ID to predict |
| `target_category` | string | Category name of the target API |
| `session_goal_id` | int | Session goal label (auxiliary task) |
| `session_goal` | string | Session goal name |
| `session_end` | int | Whether this is the last action in the session (0 or 1) |

---

## API Categories

The 100 APIs are organized into 10 functional categories, reflecting typical enterprise platform architecture:

| Category | API Range | Description |
|---|---|---|
| Authentication | 0–9 | Login, session management |
| User Management | 10–19 | Roles, permissions, accounts |
| Data Input | 20–29 | Data ingestion, file upload |
| Data Processing | 30–39 | Transformation, cleaning, feature engineering |
| ML Training | 40–49 | Model training, hyperparameter tuning |
| ML Prediction | 50–59 | Inference, batch prediction |
| Basic Visualization | 60–69 | Charts, basic plots |
| Advanced Visualization | 70–79 | Dashboards, interactive visualizations |
| Export/Share | 80–89 | Export, report generation |
| Administration | 90–99 | System config, monitoring |
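Because each category owns a contiguous block of ten IDs, mapping an API ID to its category is simple integer division. A minimal helper (the function name is ours, not part of the package):

```python
CATEGORIES = [
    "Authentication", "User Management", "Data Input", "Data Processing",
    "ML Training", "ML Prediction", "Basic Visualization",
    "Advanced Visualization", "Export/Share", "Administration",
]

def api_category(api_id: int) -> str:
    """Map an API ID (0-99) to its functional category.

    Each category covers a contiguous block of 10 IDs, so integer
    division by 10 indexes directly into the category list.
    """
    if not 0 <= api_id <= 99:
        raise ValueError(f"api_id out of range: {api_id}")
    return CATEGORIES[api_id // 10]

print(api_category(42))  # ML Training
print(api_category(85))  # Export/Share
```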

## Session Goals

| Goal ID | Goal Name | Distribution | Workflow Adherence |
|---|---|---|---|
| 0 | ML Pipeline | 34.8% | 85% |
| 1 | Data Analysis | 26.1% | 80% |
| 2 | User Management | 24.3% | 90% |
| 3 | Quick Visualization | 14.8% | 75% |

---

## How to Use

### Load with Hugging Face `datasets`

```python
from datasets import load_dataset

# Load both splits
dataset = load_dataset("eagle0504/context-engineering-v1")

# Or load individual files
sessions = load_dataset("eagle0504/context-engineering-v1", data_files="user_sessions.parquet")
pairs = load_dataset("eagle0504/context-engineering-v1", data_files="training_pairs.parquet")
```

### Load with Pandas

```python
import pandas as pd

sessions = pd.read_parquet("hf://datasets/eagle0504/context-engineering-v1/user_sessions.parquet")
pairs = pd.read_parquet("hf://datasets/eagle0504/context-engineering-v1/training_pairs.parquet")
```

### Reproduce with the `context-engineer` Package

You can regenerate this exact dataset (or create your own variant) using the package:

```bash
pip install context-engineer
```

```python
from context_engineer import simulate_multitask_markov_data, create_multitask_training_pairs, set_random_seeds

# Set seed for exact reproducibility
set_random_seeds(42)

# Generate 2000 user sessions (matches this dataset)
sequences, goals = simulate_multitask_markov_data(
    num_users=2000,
    num_apis=100,
    clicks_per_user=10,
)

# Create supervised training pairs
input_seqs, target_apis, goal_labels, session_end_labels = create_multitask_training_pairs(
    sequences, goals, max_seq_len=6
)
```
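The pair counts follow directly from the sliding-window construction: a length-10 session yields one pair per position after the first, so 2,000 sessions produce 2,000 × 9 = 18,000 pairs, matching this dataset. A simplified sketch of that windowing (the package's internal implementation may differ in detail):

```python
def make_pairs(sequence, max_seq_len=6):
    """Sliding-window pairs: predict each API call from its predecessors.

    Each position t >= 1 yields one (context, target) pair, where the
    context is the last min(t, max_seq_len) preceding calls.
    """
    pairs = []
    for t in range(1, len(sequence)):
        context = sequence[max(0, t - max_seq_len):t]
        pairs.append((context, sequence[t]))
    return pairs

session = [3, 21, 34, 42, 41, 55, 63, 71, 82, 91]  # one simulated session
pairs = make_pairs(session)
print(len(pairs))  # 9 pairs from a length-10 session
```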

### Run the Full Training Pipeline

```python
from context_engineer import run_pipeline

# Reproduce the full experiment from the paper
results = run_pipeline(seed=42)

model = results["model"]       # Trained PyTorch model
metrics = results["metrics"]   # ~79.8% top-1 accuracy, 99.97% top-5 hit rate
```

### Generate Custom Datasets via CLI

```bash
# Generate data and save to JSON
context-engineer generate --num-users 5000 --clicks 15 --seed 99 --output my_data.json

# Run the full pipeline
context-engineer run --num-users 1000 --epochs 30
```

---

## Benchmark Results (from the paper)

A multi-task attention-based transformer trained on this dataset achieves:

| Metric | Value |
|---|---|
| API Prediction Accuracy (Top-1) | **79.83%** |
| Mean Reciprocal Rank (MRR) | **0.7983** |
| Top-5 Hit Rate | **99.97%** |
| Top-10 Hit Rate | **100.00%** |
| Goal Prediction Accuracy | **81.6%** |
| Session End Accuracy | **99.3%** |
| Improvement over Markov baseline | **+432%** |
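The ranking metrics above (top-k hit rate, MRR) can be recomputed from any model's score matrix. A minimal NumPy sketch, with a function name of our own choosing:

```python
import numpy as np

def ranking_metrics(scores, targets, ks=(1, 5, 10)):
    """Compute top-k hit rates and MRR from prediction scores.

    scores:  (N, num_apis) array of model logits or probabilities.
    targets: (N,) array of ground-truth API IDs.
    """
    # Rank of each target = number of APIs scored strictly higher, plus one
    target_scores = scores[np.arange(len(targets)), targets]
    ranks = (scores > target_scores[:, None]).sum(axis=1) + 1
    metrics = {f"top{k}": float((ranks <= k).mean()) for k in ks}
    metrics["mrr"] = float((1.0 / ranks).mean())
    return metrics

# Toy check: a model that always ranks the target first scores perfectly
rng = np.random.default_rng(0)
scores = rng.normal(size=(4, 100))
targets = scores.argmax(axis=1)
print(ranking_metrics(scores, targets))  # top1 == top5 == top10 == mrr == 1.0
```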

---

## Citation

If you use this dataset in your research, please cite:

```bibtex
@article{yin2025rethink,
  title={Rethink Context Engineering Using an Attention-based Architecture},
  author={Yin, Yiqiao},
  year={2025}
}
```

---

## Disclaimer

**About the Author.** This dataset and the accompanying `context-engineer` package were created by [Yiqiao Yin](https://www.y-yin.io/), who holds affiliations with the University of Chicago Booth School of Business and the Department of Statistics at Columbia University. The author brings over a decade of professional experience in the SaaS (Software as a Service) and Platform-as-a-Service (PaaS) domain, spanning enterprise software development, API ecosystem design, user behavior analytics, and machine learning infrastructure. The API category taxonomy, workflow patterns, user persona definitions, and transition probability structures encoded in this simulator are informed by that cumulative domain expertise—reflecting realistic patterns observed in production enterprise environments over the course of many years.

**Simulation, Not Real Data.** This dataset is **entirely synthetic**. It was generated programmatically using the open-source [`context-engineer`](https://pypi.org/project/context-engineer/) Python package. **No real user data, proprietary platform logs, personally identifiable information (PII), or third-party datasets of any kind are included in, referenced by, or derived from this release.** The Markov chain transition probabilities, user personas, and session goal distributions are designed to approximate realistic enterprise API usage patterns for research purposes, but they do not represent, reproduce, or leak any actual user behavior from any specific platform or organization.

**Reproducibility.** This dataset is fully reproducible. Running the generation script with `seed=42` and the default parameters (`num_users=2000`, `num_apis=100`, `clicks_per_user=10`) will produce an identical dataset. The source code is publicly available at [github.com/yiqiao-yin/context-engineer-repo](https://github.com/yiqiao-yin/context-engineer-repo).

**License.** This dataset is released under the [MIT License](https://opensource.org/licenses/MIT). You are free to use, modify, and distribute it for academic and commercial purposes with attribution.