---
license: mit
task_categories:
- text-classification
tags:
- code
pretty_name: model 0 training dataset
size_categories:
- 1K<n<10K
---
# Dataset Card: Terminal Log Boundary Prediction (Streaming)
### 📋 Dataset Purpose & Model 0 Overview
This dataset is designed to train **"Model 0"** for the Winter 2026 iteration of the **AutoDocs** project.
You can access the official repository here:
[AutoDocs (Winter 2026) Repository](https://github.com/CSC392-CSC492-Building-AI-ML-systems/AutoDocs-Winter2026/tree/main)
#### Objective
The primary objective of the model is the **binary classification of sequential data**.
It is engineered to process continuous, timestamped terminal logs formatted in XML to determine
if a specific line represents a **"Boundary"** between logical events.
#### Methodology: Sliding-Window Approach
Instead of ingesting a massive log file in its entirety, the dataset employs a **sliding-window approach**.
The model analyzes a short historical context to evaluate the **Target Line** (the most recent entry):
* **Pattern Recognition**: The model looks at the previous 14 timesteps (e.g., "The terminal has been downloading packages").
* **Boundary Prediction**: It predicts whether the Target Line breaks that pattern (e.g., "The download finished and the shell prompt returned") or continues the ongoing process.
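The windowing described above can be sketched as follows. This is an illustrative example, not the actual AutoDocs pipeline code; `make_windows` and `WINDOW` are assumed names.

```python
WINDOW = 14  # number of historical timesteps shown to the model (assumption from this card)

def make_windows(chunks):
    """Slice a chronological list of XML chunks into (context, target) pairs.

    Each target line is paired with up to the 14 chunks that precede it.
    """
    examples = []
    for i in range(len(chunks)):
        context = chunks[max(0, i - WINDOW):i]  # up to 14 prior chunks
        target = chunks[i]                      # the line to classify
        examples.append((context, target))
    return examples
```

Early lines simply get a shorter context, so every line in the log can still serve as a training target.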
### 🗂️ Dataset Structure
The dataset is in `JSONL` format; each row contains three primary fields:
* **`instruction`**: The system prompt defining "new" vs. "old" events.
* **`input`**: The sliding-window data, split into:
  * `### CONTEXT`: Up to 14 historical XML chunks (i.e., up to 14 timesteps).
  * `### TARGET LINE`: The 15th chunk (timestep) to be classified.
* **`label / output`**: Formatted as `{timestamp}, {class} event`.
### ✂️ Rules of Truncation
Raw terminal logs (like `apt-get` installations) can easily overflow an LLM's
context window. To prevent this, the data engineering pipeline applies a
strict **Two-Phase Truncation** rule:
#### Phase 1: Intra-Chunk Truncation (Line Limit)
If a single `<system_output>` block contains more than 15 lines of text, it
is sliced. The first 5 lines and the last 5 lines are preserved, and the
middle is replaced with a marker: `... [TRUNCATED X LINES] ...`. Note
that `<user_input>` tags are **never** truncated to preserve
human-interaction signals.
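A minimal sketch of the Phase 1 rule, assuming the line limit and keep counts stated above; the function name and defaults are illustrative, not taken from the actual pipeline:

```python
def truncate_chunk(text, max_lines=15, keep=5):
    """If a <system_output> body exceeds max_lines, keep the first and
    last `keep` lines and replace the middle with a truncation marker."""
    lines = text.splitlines()
    if len(lines) <= max_lines:
        return text
    hidden = len(lines) - 2 * keep
    return "\n".join(
        lines[:keep]
        + [f"... [TRUNCATED {hidden} LINES] ..."]
        + lines[-keep:]
    )
```

Per the rule above, this would only ever run on `<system_output>` bodies; `<user_input>` chunks bypass it entirely.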
#### Phase 2: Window-Level Compression (Context Limit)
If the entire 14-chunk context window exceeds 25 total lines, the window
is compressed:
* The **5 oldest chunks** and the **5 most recent chunks** are kept
fully intact.
* For the chunks in the **middle**, the text is completely stripped out,
leaving only the XML tags (e.g., `<system_output timestamp="X">...
[TRUNCATED TO SAVE SPACE] ...</system_output>`).
* This preserves the chronological timeline and sequence of events
without bloating the token count.
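The Phase 2 compression could be sketched like this, assuming each chunk is a single XML-tagged string; the regex-based body stripping and all names here are assumptions for illustration:

```python
import re

def compress_window(chunks, max_lines=25, keep=5):
    """If the context window exceeds max_lines total lines, keep the
    `keep` oldest and `keep` newest chunks intact and strip the body of
    every middle chunk, preserving only its XML tags."""
    total = sum(len(c.splitlines()) for c in chunks)
    if total <= max_lines:
        return chunks

    def strip_body(chunk):
        # Replace everything between the opening and closing tags with a
        # placeholder; the tags (and their timestamps) survive.
        return re.sub(r">.*<", ">... [TRUNCATED TO SAVE SPACE] ...<",
                      chunk, flags=re.S)

    return (chunks[:keep]
            + [strip_body(c) for c in chunks[keep:-keep]]
            + chunks[-keep:])
```

Because the tags and timestamps are retained, the model still sees the full chronological sequence of events, just without the middle chunks' text.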
### ⚖️ Data Sampling & Balancing
In a typical terminal log, over 95% of the lines are "Old Events," which
would lead the model to simply guess the majority class. To force actual
learning, this dataset uses **Negative Downsampling**:
* **New Events (Positives):** 100% of detected boundaries are kept.
* **Old Events (Negatives):** Downsampled so that there is exactly a
**2:1 ratio** (Two old events for every one new event).
#### Hard Negative Mining
When selecting which "Old Events" to keep for the 2:1 ratio, the algorithm
prioritizes **Hard Negatives**. Specifically, it targets `<user_input>`
tags that contain a newline character (`\n`). This teaches the model the
difficult lesson that a user pressing "Enter" is often just a completion
of an input phase, not necessarily a new logical event.
#### Example Data Row
```json
{
"instruction": "Your task is to analyze terminal XML logs...",
"input": "### CONTEXT (Previous Events):\n<system_output timestamp=\"10.01\">demo@server:~$ apt update</system_output>\n<system_output timestamp=\"10.05\">Reading lists...</system_output>\n\n### TARGET LINE:\n<user_input timestamp=\"12.40\">s</user_input>",
"output": "12.40, old event"
}
```