---
license: mit
task_categories:
- text-classification
- tabular-classification
language:
- en
tags:
- sequential-recommendation
- markov-chain
- transformer
- multi-task-learning
- api-recommendation
- context-engineering
- user-behavior
- simulation
size_categories:
- 10K<n<100K
pretty_name: Context Engineering V1 - Sequential API Recommendation Dataset
---

# Context Engineering V1: Sequential API Recommendation Dataset

This dataset accompanies the research paper:

> **Rethink Context Engineering Using an Attention-based Architecture**
> Yiqiao Yin — University of Chicago Booth School of Business / Columbia University

It was generated using the open-source **`context-engineer`** Python package:

- **GitHub:** [https://github.com/yiqiao-yin/context-engineer-repo](https://github.com/yiqiao-yin/context-engineer-repo)
- **PyPI:** [https://pypi.org/project/context-engineer/0.1.0/](https://pypi.org/project/context-engineer/0.1.0/)

---

## Dataset Summary

This dataset contains **simulated sequential API usage logs** modeled as Markov chains, designed for training and evaluating multi-task transformer models for sequential API recommendation. The simulation covers **2,000 user sessions** totaling **20,000 API calls** across **100 APIs** organized into **10 functional categories**, with **4 distinct session goal types** driving workflow-specific behavioral patterns.

The dataset is split into two files:

| File | Rows | Description |
|---|---|---|
| `user_sessions.parquet` | 2,000 | Full user session sequences with goal labels |
| `training_pairs.parquet` | 18,000 | Supervised input-output pairs for model training |

### Key Statistics

| Metric | Value |
|---|---|
| Total users | 2,000 |
| Total API calls | 20,000 |
| Unique APIs | 100 (across 10 categories) |
| Avg. session length | 10 API calls |
| Session goal types | 4 |
| Training pairs generated | 18,000 |
| Max input sequence length | 6 |
| Random seed | 42 |

---
61
+
62
+ ## Dataset Structure
63
+
64
+ ### `user_sessions.parquet`
65
+
66
+ Each row represents one complete user session:
67
+
68
+ | Column | Type | Description |
69
+ |---|---|---|
70
+ | `user_id` | int | Unique user/session identifier (0–1999) |
71
+ | `session_goal_id` | int | Goal type ID (0–3) |
72
+ | `session_goal` | string | Goal name: `ml_pipeline`, `data_analysis`, `user_management`, `quick_viz` |
73
+ | `sequence_length` | int | Number of API calls in the session |
74
+ | `api_sequence` | string (JSON list) | Ordered list of API IDs called during the session |
75
+ | `category_sequence` | string (JSON list) | Ordered list of API category names |
76
+
77
+ ### `training_pairs.parquet`
78
+
79
+ Each row is a supervised training example with multi-task labels:
80
+
81
+ | Column | Type | Description |
82
+ |---|---|---|
83
+ | `input_sequence` | string (JSON list) | Context window of preceding API calls (up to 6) |
84
+ | `input_length` | int | Number of tokens in the input sequence |
85
+ | `target_api` | int | Ground-truth next API ID to predict |
86
+ | `target_category` | string | Category name of the target API |
87
+ | `session_goal_id` | int | Session goal label (auxiliary task) |
88
+ | `session_goal` | string | Session goal name |
89
+ | `session_end` | int | Whether this is the last action in the session (0 or 1) |
90
+
91
+ ---
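
The mapping from sessions to training pairs follows a standard sliding-window scheme: every call after the first becomes a prediction target, with the (at most six) preceding calls as context. A minimal sketch of that construction, not the package's exact implementation:

```python
def make_pairs(seq, max_len=6):
    """For each position t >= 1, pair the (at most max_len) preceding
    API calls with the call at position t as the prediction target."""
    pairs = []
    for t in range(1, len(seq)):
        context = seq[max(0, t - max_len):t]
        pairs.append((context, seq[t]))
    return pairs

# A hypothetical 10-call session (API IDs only, for illustration).
session = [3, 21, 34, 42, 55, 61, 72, 83, 84, 91]
pairs = make_pairs(session)
print(len(pairs))  # 9 pairs per 10-call session
```

A 10-call session yields 9 pairs, which is consistent with the 18,000 pairs from 2,000 sessions and the max input sequence length of 6 in the statistics above.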

## API Categories

The 100 APIs are organized into 10 functional categories, reflecting typical enterprise platform architecture:

| Category | API Range | Description |
|---|---|---|
| Authentication | 0–9 | Login, session management |
| User Management | 10–19 | Roles, permissions, accounts |
| Data Input | 20–29 | Data ingestion, file upload |
| Data Processing | 30–39 | Transformation, cleaning, feature engineering |
| ML Training | 40–49 | Model training, hyperparameter tuning |
| ML Prediction | 50–59 | Inference, batch prediction |
| Basic Visualization | 60–69 | Charts, basic plots |
| Advanced Visualization | 70–79 | Dashboards, interactive visualizations |
| Export/Share | 80–89 | Export, report generation |
| Administration | 90–99 | System config, monitoring |

## Session Goals

| Goal ID | Goal Name | Distribution | Workflow Adherence |
|---|---|---|---|
| 0 | ML Pipeline | 34.8% | 85% |
| 1 | Data Analysis | 26.1% | 80% |
| 2 | User Management | 24.3% | 90% |
| 3 | Quick Visualization | 14.8% | 75% |

---
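
The goal distribution can be checked directly against the `session_goal` column. A toy sketch with stand-in labels; the real check would read `user_sessions.parquet` and use its full 2,000-row column:

```python
import pandas as pd

# Stand-in goal labels at roughly the documented proportions.
goals = pd.Series(
    ["ml_pipeline"] * 7
    + ["data_analysis"] * 5
    + ["user_management"] * 5
    + ["quick_viz"] * 3
)

# Empirical distribution, analogous to the table above.
dist = goals.value_counts(normalize=True)
print(dist)
```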

## How to Use

### Load with Hugging Face `datasets`

```python
from datasets import load_dataset

# Load both splits
dataset = load_dataset("eagle0504/context-engineering-v1")

# Or load individual files
sessions = load_dataset("eagle0504/context-engineering-v1", data_files="user_sessions.parquet")
pairs = load_dataset("eagle0504/context-engineering-v1", data_files="training_pairs.parquet")
```

### Load with Pandas

```python
import pandas as pd

sessions = pd.read_parquet("hf://datasets/eagle0504/context-engineering-v1/user_sessions.parquet")
pairs = pd.read_parquet("hf://datasets/eagle0504/context-engineering-v1/training_pairs.parquet")
```
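
Note that `api_sequence`, `category_sequence`, and `input_sequence` are stored as JSON strings, so they need decoding before use. A minimal sketch on a toy frame with the same schema:

```python
import json

import pandas as pd

# Toy frame mimicking the user_sessions schema; list columns arrive as JSON strings.
sessions = pd.DataFrame({
    "user_id": [0],
    "session_goal": ["quick_viz"],
    "api_sequence": ["[2, 61, 63, 80]"],
    "category_sequence": ['["Authentication", "Basic Visualization", "Basic Visualization", "Export/Share"]'],
})

# Decode the JSON-encoded list columns into native Python lists.
for col in ("api_sequence", "category_sequence"):
    sessions[col] = sessions[col].map(json.loads)

print(sessions.loc[0, "api_sequence"])  # [2, 61, 63, 80]
```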

### Reproduce with the `context-engineer` Package

You can regenerate this exact dataset (or create your own variant) using the package:

```bash
pip install context-engineer
```

```python
from context_engineer import (
    create_multitask_training_pairs,
    set_random_seeds,
    simulate_multitask_markov_data,
)

# Set seed for exact reproducibility
set_random_seeds(42)

# Generate 2000 user sessions (matches this dataset)
sequences, goals = simulate_multitask_markov_data(
    num_users=2000,
    num_apis=100,
    clicks_per_user=10,
)

# Create supervised training pairs
input_seqs, target_apis, goal_labels, session_end_labels = create_multitask_training_pairs(
    sequences, goals, max_seq_len=6
)
```

### Run the Full Training Pipeline

```python
from context_engineer import run_pipeline

# Reproduce the full experiment from the paper
results = run_pipeline(seed=42)

model = results["model"]      # Trained PyTorch model
metrics = results["metrics"]  # ~79.8% top-1 accuracy, 99.97% top-5 hit rate
```

### Generate Custom Datasets via CLI

```bash
# Generate data and save to JSON
context-engineer generate --num-users 5000 --clicks 15 --seed 99 --output my_data.json

# Run the full pipeline
context-engineer run --num-users 1000 --epochs 30
```

---

## Benchmark Results (from the paper)

A multi-task attention-based transformer trained on this dataset achieves:

| Metric | Value |
|---|---|
| API Prediction Accuracy (Top-1) | **79.83%** |
| Mean Reciprocal Rank (MRR) | **0.7983** |
| Top-5 Hit Rate | **99.97%** |
| Top-10 Hit Rate | **100.00%** |
| Goal Prediction Accuracy | **81.6%** |
| Session End Accuracy | **99.3%** |
| Improvement over Markov baseline | **+432%** |

---
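
If you evaluate your own model on `training_pairs.parquet`, the ranking metrics above can be computed from a score matrix (one row per example, one column per API). A self-contained sketch, not tied to the package's internals:

```python
import numpy as np

def topk_hit_rate(scores, targets, k):
    """Fraction of examples whose true API appears among the k highest scores."""
    topk = np.argsort(-scores, axis=1)[:, :k]
    return float(np.mean([t in row for t, row in zip(targets, topk)]))

def mrr(scores, targets):
    """Mean reciprocal rank of the true API under the score ordering."""
    order = np.argsort(-scores, axis=1)
    ranks = [int(np.where(row == t)[0][0]) + 1 for row, t in zip(order, targets)]
    return float(np.mean([1.0 / r for r in ranks]))

# Toy example: 2 examples over 3 "APIs"; the true API is ranked 1st, then 2nd.
scores = np.array([[0.1, 0.9, 0.0],
                   [0.8, 0.1, 0.1]])
targets = [1, 1]
print(topk_hit_rate(scores, targets, 1))  # 0.5
print(mrr(scores, targets))               # (1 + 1/2) / 2 = 0.75
```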

## Citation

If you use this dataset in your research, please cite:

```bibtex
@article{yin2025rethink,
  title={Rethink Context Engineering Using an Attention-based Architecture},
  author={Yin, Yiqiao},
  year={2025}
}
```

---

## Disclaimer

**About the Author.** This dataset and the accompanying `context-engineer` package were created by [Yiqiao Yin](https://www.y-yin.io/), who holds affiliations with the University of Chicago Booth School of Business and the Department of Statistics at Columbia University. The author brings over a decade of professional experience in the SaaS (Software as a Service) and Platform-as-a-Service (PaaS) domain, spanning enterprise software development, API ecosystem design, user behavior analytics, and machine learning infrastructure. The API category taxonomy, workflow patterns, user persona definitions, and transition probability structures encoded in this simulator are informed by that cumulative domain expertise, reflecting realistic patterns observed in production enterprise environments over many years.

**Simulation, Not Real Data.** This dataset is **entirely synthetic**. It was generated programmatically using the open-source [`context-engineer`](https://pypi.org/project/context-engineer/) Python package. **No real user data, proprietary platform logs, personally identifiable information (PII), or third-party datasets of any kind are included in, referenced by, or used to derive this release.** The Markov chain transition probabilities, user personas, and session goal distributions are designed to approximate realistic enterprise API usage patterns for research purposes, but they do not represent, reproduce, or leak any actual user behavior from any specific platform or organization.

**Reproducibility.** This dataset is fully reproducible. Running the generation script with `seed=42` and the default parameters (`num_users=2000`, `num_apis=100`, `clicks_per_user=10`) will produce an identical dataset. The source code is publicly available at [github.com/yiqiao-yin/context-engineer-repo](https://github.com/yiqiao-yin/context-engineer-repo).

**License.** This dataset is released under the [MIT License](https://opensource.org/licenses/MIT). You are free to use, modify, and distribute it for academic and commercial purposes with attribution.