---
license: mit
task_categories:
- text-classification
- tabular-classification
language:
- en
tags:
- sequential-recommendation
- markov-chain
- transformer
- multi-task-learning
- api-recommendation
- context-engineering
- user-behavior
- simulation
size_categories:
- 10K<n<100K
---

This dataset accompanies the paper:

> **Rethink Context Engineering Using an Attention-based Architecture**
> Yiqiao Yin — University of Chicago Booth School of Business / Columbia University

It was generated using the open-source **`context-engineer`** Python package:

- **GitHub:** [https://github.com/yiqiao-yin/context-engineer-repo](https://github.com/yiqiao-yin/context-engineer-repo)
- **PyPI:** [https://pypi.org/project/context-engineer/0.1.0/](https://pypi.org/project/context-engineer/0.1.0/)

---

## Dataset Summary

This dataset contains **simulated sequential API usage logs** modeled as Markov chains, designed for training and evaluating multi-task transformer models for sequential API recommendation. The simulation encompasses **2,000 user sessions** totaling **20,000 API calls** across **100 APIs** organized into **10 functional categories**, with **4 distinct session goal types** driving workflow-specific behavioral patterns.

The dataset is split into two files:

| File | Rows | Description |
|---|---|---|
| `user_sessions.parquet` | 2,000 | Full user session sequences with goal labels |
| `training_pairs.parquet` | 18,000 | Supervised input-output pairs for model training |

### Key Statistics

| Metric | Value |
|---|---|
| Total users | 2,000 |
| Total API calls | 20,000 |
| Unique APIs | 100 (across 10 categories) |
| Avg. session length | 10 API calls |
| Session goal types | 4 |
| Training pairs generated | 18,000 |
| Max input sequence length | 6 |
| Random seed | 42 |

---

## Dataset Structure

### `user_sessions.parquet`

Each row represents one complete user session:

| Column | Type | Description |
|---|---|---|
| `user_id` | int | Unique user/session identifier (0–1999) |
| `session_goal_id` | int | Goal type ID (0–3) |
| `session_goal` | string | Goal name: `ml_pipeline`, `data_analysis`, `user_management`, `quick_viz` |
| `sequence_length` | int | Number of API calls in the session |
| `api_sequence` | string (JSON list) | Ordered list of API IDs called during the session |
| `category_sequence` | string (JSON list) | Ordered list of API category names |

### `training_pairs.parquet`

Each row is a supervised training example with multi-task labels:

| Column | Type | Description |
|---|---|---|
| `input_sequence` | string (JSON list) | Context window of preceding API calls (up to 6) |
| `input_length` | int | Number of tokens in the input sequence |
| `target_api` | int | Ground-truth next API ID to predict |
| `target_category` | string | Category name of the target API |
| `session_goal_id` | int | Session goal label (auxiliary task) |
| `session_goal` | string | Session goal name |
| `session_end` | int | Whether this is the last action in the session (0 or 1) |

---

## API Categories

The 100 APIs are organized into 10 functional categories, reflecting typical enterprise platform architecture:

| Category | API Range | Description |
|---|---|---|
| Authentication | 0–9 | Login, session management |
| User Management | 10–19 | Roles, permissions, accounts |
| Data Input | 20–29 | Data ingestion, file upload |
| Data Processing | 30–39 | Transformation, cleaning, feature engineering |
| ML Training | 40–49 | Model training, hyperparameter tuning |
| ML Prediction | 50–59 | Inference, batch prediction |
| Basic Visualization | 60–69 | Charts, basic plots |
| Advanced Visualization | 70–79 | Dashboards, interactive visualizations |
| Export/Share | 80–89 | Export, report generation |
| Administration | 90–99 | System config, monitoring |

---

## Session Goals

| Goal ID | Goal Name | Distribution | Workflow Adherence |
|---|---|---|---|
| 0 | ML Pipeline | 34.8% | 85% |
| 1 | Data Analysis | 26.1% | 80% |
| 2 | User Management | 24.3% | 90% |
| 3 | Quick Visualization | 14.8% | 75% |

---

## How to Use

### Load with Hugging Face `datasets`

```python
from datasets import load_dataset

# Load both splits
dataset = load_dataset("eagle0504/context-engineering-v1")

# Or load individual files
sessions = load_dataset("eagle0504/context-engineering-v1", data_files="user_sessions.parquet")
pairs = load_dataset("eagle0504/context-engineering-v1", data_files="training_pairs.parquet")
```

### Load with Pandas

```python
import pandas as pd

sessions = pd.read_parquet("hf://datasets/eagle0504/context-engineering-v1/user_sessions.parquet")
pairs = pd.read_parquet("hf://datasets/eagle0504/context-engineering-v1/training_pairs.parquet")
```

### Reproduce with the `context-engineer` Package

You can regenerate this exact dataset (or create your own variant) using the package:

```bash
pip install context-engineer
```

```python
from context_engineer import simulate_multitask_markov_data, create_multitask_training_pairs, set_random_seeds

# Set seed for exact reproducibility
set_random_seeds(42)

# Generate 2000 user sessions (matches this dataset)
sequences, goals = simulate_multitask_markov_data(
    num_users=2000,
    num_apis=100,
    clicks_per_user=10,
)

# Create supervised training pairs
input_seqs, target_apis, goal_labels, session_end_labels = create_multitask_training_pairs(
    sequences, goals, max_seq_len=6
)
```

### Run the Full Training Pipeline

```python
from context_engineer import run_pipeline

# Reproduce the full experiment from the paper
results = run_pipeline(seed=42)

model = results["model"]      # Trained PyTorch model
metrics = results["metrics"]  # ~79.8% top-1 accuracy, 99.97% top-5 hit rate
```

### Generate Custom Datasets via CLI

```bash
# Generate data and save to JSON
context-engineer generate --num-users 5000 --clicks 15 --seed 99 --output my_data.json

# Run the full pipeline
context-engineer run --num-users 1000 --epochs 30
```

---

## Benchmark Results (from the paper)

A multi-task attention-based transformer trained on this dataset achieves:

| Metric | Value |
|---|---|
| API Prediction Accuracy (Top-1) | **79.83%** |
| Mean Reciprocal Rank (MRR) | **0.7983** |
| Top-5 Hit Rate | **99.97%** |
| Top-10 Hit Rate | **100.00%** |
| Goal Prediction Accuracy | **81.6%** |
| Session End Accuracy | **99.3%** |
| Improvement over Markov baseline | **+432%** |

---

## Citation

If you use this dataset in your research, please cite:

```bibtex
@article{yin2025rethink,
  title={Rethink Context Engineering Using an Attention-based Architecture},
  author={Yin, Yiqiao},
  year={2025}
}
```

---

## Disclaimer

**About the Author.** This dataset and the accompanying `context-engineer` package were created by [Yiqiao Yin](https://www.y-yin.io/), who holds affiliations with the University of Chicago Booth School of Business and the Department of Statistics at Columbia University. The author brings over a decade of professional experience in the SaaS (Software as a Service) and Platform-as-a-Service (PaaS) domain, spanning enterprise software development, API ecosystem design, user behavior analytics, and machine learning infrastructure. The API category taxonomy, workflow patterns, user persona definitions, and transition probability structures encoded in this simulator are informed by that cumulative domain expertise, reflecting realistic patterns observed in production enterprise environments over the course of many years.

**Simulation, Not Real Data.** This dataset is **entirely synthetic**. It was generated programmatically using the open-source [`context-engineer`](https://pypi.org/project/context-engineer/) Python package.
**No real user data, proprietary platform logs, personally identifiable information (PII), or third-party datasets of any kind were included in, referenced by, or used to derive this release.** The Markov chain transition probabilities, user personas, and session goal distributions are designed to approximate realistic enterprise API usage patterns for research purposes, but they do not represent, reproduce, or leak any actual user behavior from any specific platform or organization.

**Reproducibility.** This dataset is fully reproducible. Running the generation script with `seed=42` and the default parameters (`num_users=2000`, `num_apis=100`, `clicks_per_user=10`) will produce an identical dataset. The source code is publicly available at [github.com/yiqiao-yin/context-engineer-repo](https://github.com/yiqiao-yin/context-engineer-repo).

**License.** This dataset is released under the [MIT License](https://opensource.org/licenses/MIT). You are free to use, modify, and distribute it for academic and commercial purposes with attribution.
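---

## Appendix: Decoding the JSON-Encoded Columns

The `api_sequence`, `category_sequence`, and `input_sequence` columns are stored as JSON-encoded strings rather than native lists, so they need one decoding pass after loading. The sketch below decodes the column and builds a simple first-order Markov transition table of the kind the benchmark's baseline refers to; it runs on toy rows that mimic the `user_sessions.parquet` schema (the row values are illustrative, not taken from the dataset).

```python
import json
from collections import Counter, defaultdict

import pandas as pd

# Toy rows mimicking the user_sessions.parquet schema (the real file has
# 2,000 rows); note that api_sequence arrives as a JSON string, not a list.
sessions = pd.DataFrame({
    "user_id": [0, 1],
    "session_goal": ["ml_pipeline", "quick_viz"],
    "api_sequence": ["[2, 21, 33, 41]", "[5, 62, 70]"],
})

# Decode the JSON-encoded column into Python lists
sessions["api_sequence"] = sessions["api_sequence"].apply(json.loads)

# First-order Markov baseline: count observed api -> next_api transitions
transitions = defaultdict(Counter)
for seq in sessions["api_sequence"]:
    for prev_api, next_api in zip(seq, seq[1:]):
        transitions[prev_api][next_api] += 1

def predict_next(api_id: int):
    """Return the most frequent observed successor of api_id, or None if unseen."""
    if api_id not in transitions:
        return None
    return transitions[api_id].most_common(1)[0][0]

print(predict_next(2))  # most common successor of API 2 in the toy data
```

Swapping the toy frame for `pd.read_parquet("hf://datasets/eagle0504/context-engineering-v1/user_sessions.parquet")` applies the same decoding to the full dataset.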