---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- agent
- coding
- terminal
- shell
- pretrain
- pretraining
- agentic
- llm
- web
- fineweb
- dclm
pretty_name: WebTerminal
size_categories:
- 10M<n<100M
---
# Terminal/CLI Web Text

![webterminal](webterminal.png)

**v0.1**

A filtered extract of terminal and command-line content from two large web-text corpora.

## Sources

- **DCLM** (`Zyphra/dclm-dedup`)
- **FineWeb** (`Salesforce/fineweb_deduplicated`)

## How it was built

1. **Fast filter**: skip any document that doesn't contain at least one obvious CLI indicator (`$`, `sudo`, `pip install`, `` ```bash ``, `root@`, etc.).
2. **Score**: remaining docs are scored (0-34) across five signals, each with a per-match point value and a cap:

| Filter | Description | Points | Cap |
|---|---|---|---|
| Prompt patterns | Shell prompts like `$ cmd`, `user@host:~$`, `>>>`, `root@`, `PS C:\` | 2 per match | 10 |
| CLI commands | Known commands: `sudo`, `apt-get`, `pip install`, `git clone`, `docker run`, `curl`, `ssh`, `gcc`, etc. (30+ patterns) | 1 per unique match | 8 |
| stdout patterns | Output indicators: "successfully installed", "cloning into", `drwx` (ls output), "packets transmitted", "traceback", version strings | 2 per match | 6 |
| Code blocks | Terminal-flavored code blocks: `` ```bash ``, `` ```shell ``, `<pre><code>`, terminal/console div classes | 2 per match | 6 |
| Indented blocks | 3+ consecutive lines indented 4+ spaces (code/output blocks) | 1 per match | 4 |

Documents scoring >=5 are kept.

3. **Dedup**: exact deduplication across both datasets, hashing the full text with xxhash64.
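The pipeline above can be sketched as follows. This is a simplified stand-in, not the actual filter code: the regexes cover only a handful of the described patterns (the real filter uses 30+ command patterns and more stdout indicators), and stdlib `hashlib.sha256` substitutes for the xxhash64 used in the actual dedup step.

```python
import hashlib
import re

# Simplified stand-ins for the five scoring signals; the real filter uses a
# much larger pattern set. Format: (name, pattern, points per match, cap).
SIGNALS = [
    ("prompt",  re.compile(r"(?m)^\s*(\$ |>>> |root@|PS C:\\|\w+@[\w.-]+:~\$)"), 2, 10),
    ("command", re.compile(r"\b(sudo|apt-get|pip install|git clone|docker run|curl|ssh|gcc)\b"), 1, 8),
    ("stdout",  re.compile(r"(?i)successfully installed|cloning into|drwx|packets transmitted|traceback"), 2, 6),
    ("block",   re.compile(r"```(?:bash|shell|sh|console)|<pre><code>"), 2, 6),
]

FAST_INDICATORS = ("$", "sudo", "pip install", "```bash", "root@")

def fast_filter(text: str) -> bool:
    """Stage 1: cheap substring check before any regex work."""
    return any(tok in text for tok in FAST_INDICATORS)

def indented_runs(text: str) -> int:
    """Count runs of 3+ consecutive lines indented by 4+ spaces."""
    runs = streak = 0
    for line in text.splitlines() + [""]:          # sentinel flushes the last run
        if line.startswith("    ") and line.strip():
            streak += 1
        else:
            runs += streak >= 3
            streak = 0
    return runs

def score(text: str) -> int:
    """Stage 2: capped per-signal scoring (maximum 10 + 8 + 6 + 6 + 4 = 34)."""
    total = 0
    for name, pat, points, cap in SIGNALS:
        # CLI commands count unique matches; other signals count every match.
        hits = ({m.group(0) for m in pat.finditer(text)} if name == "command"
                else pat.findall(text))
        total += min(len(hits) * points, cap)
    return total + min(indented_runs(text), 4)     # 1 point per run, cap 4

def keep(text: str, threshold: int = 5) -> bool:
    return fast_filter(text) and score(text) >= threshold

def dedup(docs):
    """Exact dedup on full text (sha256 here; the pipeline used xxhash64)."""
    seen, out = set(), []
    for d in docs:
        h = hashlib.sha256(d.encode("utf-8")).digest()
        if h not in seen:
            seen.add(h)
            out.append(d)
    return out
```

A document like `"$ sudo apt-get update\nSuccessfully installed requests\n"` scores 6 under this sketch (one prompt match, two unique commands, one stdout indicator) and clears the threshold of 5.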

## Stats

| Source | Chunks | Size | Rows |
|---|---|---|---|
| DCLM | 13,144 | ~229 GB | ~18.8M |
| FineWeb | 8,800 | ~669 GB | ~47.5M |

| Score | Count | % | Cumulative % |
|---|---|---|---|
| 5 | 39,025,201 | 63.62% | 63.62% |
| 6 | 10,787,199 | 17.59% | 81.21% |
| 7 | 4,063,886 | 6.63% | 87.83% |
| 8 | 2,911,983 | 4.75% | 92.58% |
| 9 | 1,304,162 | 2.13% | 94.70% |
| 10 | 1,022,996 | 1.67% | 96.37% |
| 11-14 | 1,609,090 | 2.62% | 98.99% |
| 15-20 | 536,421 | 0.87% | 99.87% |
| 21-34 | 80,340 | 0.13% | 100.00% |
| **Total** | **61,341,278** | | |

## Use case

Intended primarily for upsampling agentic-adjacent data during LLM pretraining.
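One common way to upsample a subset like this: the pretraining data loader picks each next document's source corpus with fixed mixture weights, giving the subset a larger share of the batch than its natural proportion of web text. A minimal stdlib sketch; the weights below are made up for illustration and do not come from this card.

```python
import random

# Hypothetical mixture weights: sample the terminal/CLI subset at 15% of
# training documents, well above its natural share of a web-text mix.
MIXTURE = {"general_web": 0.85, "webterminal": 0.15}

def sample_source(rng: random.Random) -> str:
    """Pick which corpus the next training document is drawn from."""
    names = list(MIXTURE)
    weights = [MIXTURE[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(0)
draws = [sample_source(rng) for _ in range(10_000)]
share = draws.count("webterminal") / len(draws)   # close to 0.15
```

The same idea underlies `datasets.interleave_datasets(..., probabilities=...)` if loading via the Hugging Face `datasets` library.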