# Dataset Card: Terminal Log Boundary Prediction (Streaming)

### 📋 Dataset Summary

This dataset is designed to train Large Language Models (LLMs) to detect phase transitions, or **"boundaries,"** within continuous terminal XML logs.

The dataset uses a **sliding-window approach**. Instead of reading a massive log file at once, the model analyzes a short history of events to determine if the **Target Line** (the final entry) marks a new logical event or the continuation of an ongoing process.

---

### 🗂️ Dataset Structure

The dataset is in `JSONL` format, optimized for ChatML instruction-tuning. Each row contains three primary fields:

* **`instruction`**: The system prompt defining "new" vs. "old" events.
* **`input`**: The sliding-window data, split into:
  * `### CONTEXT`: Up to 14 historical XML chunks.
  * `### TARGET LINE`: The 15th chunk to be classified.
* **`label / output`**: Formatted as `{timestamp}, {class} event`.
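As a minimal sketch of working with this schema (the row below is abbreviated, and the parsing regex is our own assumption rather than part of the dataset tooling), a row can be loaded and its label split back into timestamp and class:

```python
import json
import re

# An abbreviated row in the dataset's three-field schema.
row = json.loads(
    '{"instruction": "Analyze terminal XML logs...", '
    '"input": "### CONTEXT (Previous Events):\\n...\\n\\n### TARGET LINE:\\n'
    '<user_input timestamp=\\"12.40\\">s</user_input>", '
    '"output": "12.40, old event"}'
)

# The label strictly follows "{timestamp}, {class} event".
match = re.fullmatch(r"(?P<ts>[\d.]+), (?P<cls>new|old) event", row["output"])
timestamp, event_class = match.group("ts"), match.group("cls")
```

Because the output format is strict, evaluation can be a plain string comparison rather than free-form text matching.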

### 🎯 The Model's Goal

The primary objective of the model is **binary classification of sequential data**. By looking at the historical context (e.g., "The terminal has been downloading packages for the last 14 steps"), the model must predict if the timestamp in the Target Line breaks that pattern and establishes a new boundary (e.g., "The download finished and the shell prompt returned").
---

### ✂️ Rules of Truncation

Raw terminal logs (like `apt-get` installations) can easily overflow an LLM's context window. To prevent this, the data engineering pipeline applies a strict **Two-Phase Truncation** rule:

#### Phase 1: Intra-Chunk Truncation (Line Limit)

If a single `<system_output>` block contains more than 15 lines of text, it is sliced: the first 5 lines and the last 5 lines are preserved, and the middle is replaced with a marker, `... [TRUNCATED X LINES] ...`. Note that `<user_input>` tags are **never** truncated, to preserve human-interaction signals.
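A minimal sketch of this rule (the function name and exact marker spacing are our assumptions, not the pipeline's actual code):

```python
def truncate_chunk(text: str, max_lines: int = 15, keep: int = 5) -> str:
    """Slice an over-long <system_output> body: keep the first and last
    `keep` lines and mark how many lines were removed in between."""
    lines = text.split("\n")
    if len(lines) <= max_lines:
        return text
    hidden = len(lines) - 2 * keep
    return "\n".join(
        lines[:keep] + [f"... [TRUNCATED {hidden} LINES] ..."] + lines[-keep:]
    )
```

Under this rule a 40-line `apt-get` dump collapses to 11 lines: 5 head lines, the marker, and 5 tail lines.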

#### Phase 2: Window-Level Compression (Context Limit)

If the entire 14-chunk context window exceeds 25 total lines, the window is compressed:

* The **5 oldest chunks** and the **5 most recent chunks** are kept fully intact.
* For the chunks in the **middle**, the text is completely stripped out, leaving only the XML tags (e.g., `<system_output timestamp="X">... [TRUNCATED TO SAVE SPACE] ...</system_output>`).
* This preserves the chronological timeline and sequence of events without bloating the token count.
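The steps above can be sketched as follows (the regex-based stripping is our own assumption and presumes each chunk is a single well-formed XML element):

```python
import re

def compress_window(chunks: list[str], max_total_lines: int = 25, keep: int = 5) -> list[str]:
    """Keep the oldest and newest `keep` chunks intact; hollow out the
    middle chunks, leaving only their XML tags as timeline placeholders."""
    total = sum(chunk.count("\n") + 1 for chunk in chunks)
    if total <= max_total_lines or len(chunks) <= 2 * keep:
        return chunks
    middle = [
        re.sub(r">.*<", ">... [TRUNCATED TO SAVE SPACE] ...<", c, count=1, flags=re.S)
        for c in chunks[keep:-keep]
    ]
    return chunks[:keep] + middle + chunks[-keep:]
```

Hollowing out the middle keeps one line per chunk, so even a 14-chunk window with verbose output fits comfortably under the limit while every timestamp survives.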

---

### ⚖️ Data Sampling & Balancing

In a typical terminal log, over 95% of the lines are "Old Events," which would lead the model to simply guess the majority class. To force actual learning, this dataset uses **Negative Downsampling**:

* **New Events (Positives):** 100% of detected boundaries are kept.
* **Old Events (Negatives):** Downsampled so that there is exactly a **2:1 ratio** (two old events for every one new event).

#### Hard Negative Mining

When selecting which "Old Events" to keep for the 2:1 ratio, the algorithm prioritizes **Hard Negatives**. Specifically, it targets `<user_input>` tags that contain a newline character (`\n`). This teaches the model the difficult lesson that a user pressing "Enter" is often just the completion of an input phase, not necessarily a new logical event.

#### Example Data Row

```json
{
  "instruction": "Your task is to analyze terminal XML logs...",
  "input": "### CONTEXT (Previous Events):\n<system_output timestamp=\"10.01\">demo@server:~$ apt update</system_output>\n<system_output timestamp=\"10.05\">Reading lists...</system_output>\n\n### TARGET LINE:\n<user_input timestamp=\"12.40\">s</user_input>",
  "output": "12.40, old event"
}
```