# OpenTransformers Web Crawl v1

**Your data. Your company. No apologies.**

## Stats

- Total pages: 45,026
- Total text: 651.3 MB
- Crawled: 2026-01-13
## Format

JSONL (gzipped), one document per line:

```json
{
  "url": "https://example.com/page",
  "domain": "example.com",
  "timestamp": "2026-01-13T02:43:19.685727",
  "status": 200,
  "text": "Clean extracted text content...",
  "text_len": 1234,
  "html_len": 5678,
  "links": 42,
  "fetch_ms": 150,
  "hash": "abc123..."
}
```
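A minimal sketch of reading a gzipped JSONL shard with only the standard library. The shard filename here is hypothetical (actual shard names in the release may differ); the sketch writes a one-record sample so it runs end to end.

```python
import gzip
import json

# Hypothetical shard name -- check the actual files in the release.
SHARD = "web-crawl-v1-sample.jsonl.gz"


def read_shard(path):
    """Yield one document dict per line of a gzipped JSONL shard."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)


# Write a tiny sample shard (one document, matching the schema above)
# so this sketch is self-contained.
sample = {
    "url": "https://example.com/page",
    "domain": "example.com",
    "timestamp": "2026-01-13T02:43:19.685727",
    "status": 200,
    "text": "Clean extracted text content...",
    "text_len": 1234,
    "html_len": 5678,
    "links": 42,
    "fetch_ms": 150,
    "hash": "abc123...",
}
with gzip.open(SHARD, "wt", encoding="utf-8") as f:
    f.write(json.dumps(sample) + "\n")

docs = list(read_shard(SHARD))
print(docs[0]["domain"])  # example.com
```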
## Sources

Diverse high-quality web content: Hacker News, Reddit (ML/programming/science),
arXiv, Wikipedia, tech blogs, news sites, and discovered links.
## Usage

```python
from datasets import load_dataset

ds = load_dataset("OpenTransformer/web-crawl-v1")
```
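Assuming the `hash` field is a content hash (the card does not say explicitly), it can be used to drop near-duplicate pages before training. A minimal sketch over plain dicts:

```python
def dedupe_by_hash(docs):
    """Keep the first document for each content hash, drop the rest."""
    seen = set()
    out = []
    for d in docs:
        if d["hash"] not in seen:
            seen.add(d["hash"])
            out.append(d)
    return out


# Toy example: two pages share a hash, so one is dropped.
docs = [
    {"url": "https://a.example/1", "hash": "h1"},
    {"url": "https://b.example/2", "hash": "h1"},  # duplicate content
    {"url": "https://c.example/3", "hash": "h2"},
]
print(len(dedupe_by_hash(docs)))  # 2
```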
## License

Public domain. Do whatever you want.

---

*Crawled by OpenTransformers Ltd*
*https://github.com/OpenTransformer*