espsluar committed · Commit fdb1fca · verified · 1 Parent(s): da28b91

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +211 -39
README.md CHANGED
@@ -1,89 +1,261 @@
---
task_categories:
- text-generation
- - question-answering
language:
- en
size_categories:
- n<1K
---

- # CrawlerLM: HTML-to-JSON Dataset

- A synthetic dataset for training language models to extract structured JSON data from raw HTML.

## Dataset Description

- This dataset contains HTML pages paired with their structured JSON representations, designed for fine-tuning small language models for web scraping and information extraction tasks.

- ### Dataset Structure

- Each example contains:
- - `example_html`: Raw HTML content from real web pages
- - `expected_json`: Structured extraction with fields:
-   - `url`: Page URL
-   - `title`: Page title
-   - `text`: Main text content
-   - `author`: Author name (or null)
-   - `published_date`: Publication date (or null)
-   - `image`: Main image URL
-   - `favicon`: Favicon URL
-   - `id`: Unique identifier

- ### Data Splits

- - **Train**: 450 synthetic variations
- - **Test**: 50 synthetic variations

- ### Data Sources

- - Base HTML samples from Common Crawl
- - Structured extractions via Exa API
- - Synthetic variations generated programmatically

- ### Use Cases

- - Fine-tuning small models for web scraping
- - Training HTML-to-JSON extraction models
- - Benchmarking structured data extraction

## Usage

```python
from datasets import load_dataset

- dataset = load_dataset("espsluar/crawlerlm-html-to-json")

- # Access splits
train_data = dataset["train"]
test_data = dataset["test"]

- # Example
example = train_data[0]
print(f"HTML length: {len(example['example_html'])} chars")
print(f"Title: {example['expected_json']['title']}")
```

- ## Dataset Creation

- Generated using the CrawlerLM pipeline:
- 1. Sample diverse URLs from Common Crawl
- 2. Filter for quality (SPA detection, content scoring)
- 3. Extract structured data via Exa API
- 4. Generate synthetic variations (wrappers, noise, perturbations)

- ## License

- MIT

## Citation

```bibtex
@misc{crawlerlm2025,
  author = {Jack Luar},
-   title = {CrawlerLM: HTML-to-JSON Dataset},
  year = {2025},
  publisher = {HuggingFace},
  howpublished = {\url{https://huggingface.co/datasets/espsluar/crawlerlm-html-to-json}}
}
```
---
task_categories:
- text-generation
+ - information-extraction
language:
- en
size_categories:
- n<1K
+ tags:
+ - web-scraping
+ - html-extraction
+ - structured-data
+ - synthetic-data
---

+ # CrawlerLM: HTML Fragment to Structured JSON

+ A synthetic dataset for training language models to extract structured JSON from HTML fragments across multiple schema types.

## Dataset Description

+ This dataset contains HTML fragments paired with structured JSON annotations across three schema types: **recipes**, **job postings**, and **events**. It's designed for fine-tuning small language models to perform domain-specific information extraction from messy, real-world HTML.

+ ### Key Features

+ - **60 manually annotated base examples** from diverse web sources
+ - **3 schema types** with domain-specific fields
+ - **Synthetic augmentation** to 447+ examples with realistic HTML variations
+ - **Two configurations**: raw (HTML→JSON) and chat (instruction format)
+ - **Token-filtered** (all examples ≤24K tokens)

+ ## Configurations

+ ### `raw` Configuration

+ HTML fragment to JSON extraction format.

+ **Fields**:
+ - `example_html` (string): Raw HTML fragment
+ - `expected_json` (dict): Structured extraction with schema-specific fields

+ **Example**:
+ ```python
+ {
+     "example_html": "<div class=\"recipe-card\">...</div>",
+     "expected_json": {
+         "type": "recipe",
+         "title": "Best Ever Macaroni Cheese",
+         "ingredients": ["500g macaroni", "200g cheddar", ...],
+         "instructions": ["Boil pasta", "Make sauce", ...],
+         "prep_time": "10 mins",
+         "cook_time": "20 mins",
+         ...
+     }
+ }
+ ```

+ **Splits**:
+ - Train: 400 examples (augmented from 48 base examples)
+ - Validation: 50 examples (augmented from 6 base examples)
+ - Test: 6 examples (pristine, no augmentation)

+ ### `chat` Configuration

+ Instruction-tuning format for training chat models.

+ **Fields**:
+ - `messages` (list): Conversational format with user/assistant roles

+ **Example**:
+ ```python
+ {
+     "messages": [
+         {
+             "role": "user",
+             "content": "Extract structured data from the following HTML and return it as JSON.\n\nHTML:\n<div>...</div>"
+         },
+         {
+             "role": "assistant",
+             "content": "{\"type\": \"recipe\", \"title\": \"...\", ...}"
+         }
+     ]
+ }
+ ```

+ **Splits**:
+ - Train: 391 examples (9 filtered out for exceeding the token limit)
+ - Validation: 50 examples
+ - Test: 6 examples

+ ## Schema Types

+ ### Recipe (`type: "recipe"`)

+ **Fields**: `type`, `title`, `description`, `ingredients`, `instructions`, `prep_time`, `cook_time`, `total_time`, `servings`, `cuisine`, `difficulty`, `rating`, `author`, `image_url`, `video_url`, `source_url`, `published_date`

+ **Use case**: Extracting recipe data from food blogs and cooking sites

+ **Example sources**: BBC Good Food, AllRecipes, Serious Eats

+ ### Job Posting (`type: "job_posting"`)

+ **Fields**: `type`, `title`, `company`, `location`, `compensation`, `benefits`, `mode_of_work`, `job_type`, `experience_level`, `requirements`, `responsibilities`, `description`, `application_url`, `company_logo`, `source_url`

+ **Use case**: Parsing job listings from career pages and job boards

+ **Example sources**: Greenhouse, Lever, LinkedIn Jobs

+ ### Event (`type: "event"`)

+ **Fields**: `type`, `title`, `description`, `datetime`, `end_datetime`, `location`, `venue`, `organizer`, `price`, `registration_url`, `image_url`, `category`, `tags`, `source_url`

+ **Use case**: Extracting event details from event listings and calendars

+ **Example sources**: Eventbrite, Meetup, local event pages
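
+ All three schemas share the `type` and `title` fields; the rest are domain-specific. As a rough illustration (our own sketch, not part of the dataset tooling), here is a check of an `expected_json` record against the field lists above:

+ ```python
+ # Rough sketch: compare an expected_json record against the schema field
+ # lists documented above. Illustrative only; the dataset does not ship a
+ # validator, and it does not mark which fields are mandatory.
+ SCHEMA_FIELDS = {
+     "recipe": {"type", "title", "description", "ingredients", "instructions",
+                "prep_time", "cook_time", "total_time", "servings", "cuisine",
+                "difficulty", "rating", "author", "image_url", "video_url",
+                "source_url", "published_date"},
+     "job_posting": {"type", "title", "company", "location", "compensation",
+                     "benefits", "mode_of_work", "job_type", "experience_level",
+                     "requirements", "responsibilities", "description",
+                     "application_url", "company_logo", "source_url"},
+     "event": {"type", "title", "description", "datetime", "end_datetime",
+               "location", "venue", "organizer", "price", "registration_url",
+               "image_url", "category", "tags", "source_url"},
+ }

+ def check_record(record: dict) -> dict:
+     """Report fields missing from, or extra to, the record's schema."""
+     schema = SCHEMA_FIELDS[record["type"]]
+     keys = set(record)
+     return {"missing": sorted(schema - keys), "extra": sorted(keys - schema)}
+ ```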

+ ## Data Collection Process

+ 1. **Manual Annotation**: 61 HTML fragments manually annotated using a custom Chrome extension
+ 2. **Quality Filtering**: Removed 1 example exceeding the 24K-token limit (60 examples remaining)
+ 3. **Stratified Split**: 80/10/10 split by schema type (48 train / 6 val / 6 test base examples)
+ 4. **Synthetic Augmentation**:
+    - Train: ~8 variations per base example (400 total)
+    - Val: ~8 variations per base example (50 total)
+    - Test: No augmentation (6 pristine examples)
+ 5. **Chat Conversion**: Convert to instruction-tuning format with token filtering (a token-filter sketch follows below)
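
+ The token filter in steps 2 and 5 is straightforward to reproduce. A minimal sketch, assuming a Hugging Face tokenizer (the card does not name the one actually used, so the checkpoint below is a placeholder) and reading "24K" as 24,000 tokens:

+ ```python
+ # Sketch of the <=24K-token filter (steps 2 and 5). The tokenizer checkpoint
+ # is a placeholder assumption; swap in whatever model you fine-tune.
+ from datasets import load_dataset
+ from transformers import AutoTokenizer

+ tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B")
+ MAX_TOKENS = 24_000

+ def within_limit(example: dict) -> bool:
+     # Count tokens over the HTML plus the serialized target, roughly what
+     # the model sees during fine-tuning.
+     text = example["example_html"] + str(example["expected_json"])
+     return len(tokenizer(text)["input_ids"]) <= MAX_TOKENS

+ raw = load_dataset("espsluar/crawlerlm-html-to-json", "raw")
+ kept = raw["train"].filter(within_limit)
+ ```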

+ ### Augmentation Strategies

+ - **Structural variations**: Wrapper divs, nesting-depth changes
+ - **Attribute noise**: Random classes, IDs, data-* attributes
+ - **Template variations**: Semantically equivalent tags (div ↔ section)
+ - **HTML comments**: Injected developer comments
+ - **Whitespace variations**: Minified vs. prettified formatting

+ All augmentations preserve semantic content, so `expected_json` remains unchanged.
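
+ For concreteness, a toy version of the first two strategies (our own sketch; the actual CrawlerLM augmentation code may differ):

+ ```python
+ # Toy sketch of the wrapper-div and attribute-noise strategies above.
+ import random
+ import string

+ def random_token(n: int = 8) -> str:
+     return "".join(random.choices(string.ascii_lowercase, k=n))

+ def add_wrapper(html: str) -> str:
+     # Structural variation: wrap the fragment in an extra div with a noise class.
+     return f'<div class="{random_token()}">{html}</div>'

+ def add_attribute_noise(html: str) -> str:
+     # Attribute noise: inject a data-* attribute into the first opening tag.
+     return html.replace(">", f' data-{random_token(4)}="{random_token()}">', 1)

+ fragment = '<div class="recipe-card"><h1>Best Ever Macaroni Cheese</h1></div>'
+ augmented = add_attribute_noise(add_wrapper(fragment))
+ # The paired expected_json is left untouched: the markup changes, the meaning does not.
+ ```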

## Usage

+ ### Load Raw Configuration

```python
from datasets import load_dataset

+ # Load the raw HTML→JSON format
+ dataset = load_dataset("espsluar/crawlerlm-html-to-json", "raw")

train_data = dataset["train"]
+ val_data = dataset["validation"]
test_data = dataset["test"]

+ # Inspect an example
example = train_data[0]
+ print(f"Schema type: {example['expected_json']['type']}")
print(f"HTML length: {len(example['example_html'])} chars")
print(f"Title: {example['expected_json']['title']}")
```

+ ### Load Chat Configuration

+ ```python
+ from datasets import load_dataset

+ # Load the chat format for instruction tuning
+ dataset = load_dataset("espsluar/crawlerlm-html-to-json", "chat")

+ train_data = dataset["train"]

+ # Inspect an example
+ example = train_data[0]
+ print(f"User prompt: {example['messages'][0]['content'][:100]}...")
+ print(f"Assistant response: {example['messages'][1]['content'][:100]}...")
+ ```

+ ### Filter by Schema Type

+ ```python
+ # Filter for recipes only (chat configuration)
+ recipes = dataset["train"].filter(
+     lambda x: '"type": "recipe"' in x["messages"][1]["content"]
+ )

+ print(f"Recipe examples: {len(recipes)}")
+ ```
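
+ The substring match above is brittle: it assumes exact key order and spacing in the serialized JSON. Since every assistant message is ground-truth JSON by construction, a sturdier variant parses it first:

+ ```python
+ import json

+ # More robust filter: parse the assistant message instead of substring matching.
+ def is_recipe(example: dict) -> bool:
+     return json.loads(example["messages"][1]["content"]).get("type") == "recipe"

+ recipes = dataset["train"].filter(is_recipe)
+ print(f"Recipe examples: {len(recipes)}")
+ ```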

+ ## Dataset Statistics

+ | Split | Raw Examples | Chat Examples | Schema Distribution |
+ |-------|--------------|---------------|---------------------|
+ | Train | 400 | 391 | ~133 recipe, ~150 job_posting, ~117 event |
+ | Validation | 50 | 50 | ~17 recipe, ~17 job_posting, ~16 event |
+ | Test | 6 | 6 | 2 recipe, 2 job_posting, 2 event |

+ **Schema Distribution** (base examples before augmentation):
+ - Recipe: 19 examples (31.7%)
+ - Job Posting: 22 examples (36.7%)
+ - Event: 19 examples (31.7%)

+ ## Intended Use

+ ### Primary Use Cases

+ - Fine-tuning small language models (0.5B-7B parameters) for HTML extraction
+ - Training domain-specific web scrapers
+ - Benchmarking structured data extraction performance
+ - Teaching models to handle messy, real-world HTML

+ ### Out of Scope

+ - Full-webpage extraction (this dataset focuses on **fragments**, not entire pages)
+ - Single-field extraction (schemas have 14-17 fields each)
+ - Non-English content
+ - Dynamic/JavaScript-rendered content

+ ## Limitations

+ - **Limited schema types**: Only 3 schema types (recipe, job_posting, event)
+ - **English only**: All examples are from English-language websites
+ - **Static HTML**: No JavaScript-rendered or dynamic content
+ - **Token limit**: All examples are ≤24K tokens (may not represent very long pages)
+ - **Augmentation artifacts**: Synthetic variations may not perfectly match real-world HTML diversity

+ ## Ethical Considerations

+ - **Web scraping**: This dataset is intended for educational and research purposes. Users should respect robots.txt and website terms of service when deploying trained models.
+ - **Data sources**: All HTML fragments are from publicly accessible websites
+ - **Privacy**: No personally identifiable information (PII) is intentionally included

## Citation

```bibtex
@misc{crawlerlm2025,
  author = {Jack Luar},
+   title = {CrawlerLM: HTML Fragment to Structured JSON},
  year = {2025},
  publisher = {HuggingFace},
  howpublished = {\url{https://huggingface.co/datasets/espsluar/crawlerlm-html-to-json}}
}
```

+ ## License

+ MIT

+ ## Dataset Creation

+ **Tooling**: Custom Chrome extension for manual annotation ([github.com/espsluar/c4ai-crawlerlm](https://github.com/espsluar/c4ai-crawlerlm))

+ **Pipeline**:
+ 1. Manual HTML fragment selection and annotation
+ 2. Schema-specific field extraction
+ 3. Quality filtering (token limits, validation)
+ 4. Stratified train/val/test split
+ 5. Synthetic augmentation (structural, attribute, whitespace variations)
+ 6. Chat format conversion with instruction templates (see the sketch below)
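
+ Step 6 is essentially templating. A minimal sketch of the raw→chat conversion, with the prompt wording taken from the chat example earlier on this card (the real converter may use more template variety):

+ ```python
+ # Minimal raw -> chat conversion (pipeline step 6).
+ import json
+ from datasets import load_dataset

+ PROMPT = ("Extract structured data from the following HTML and return it "
+           "as JSON.\n\nHTML:\n")

+ def to_chat(example: dict) -> dict:
+     return {
+         "messages": [
+             {"role": "user", "content": PROMPT + example["example_html"]},
+             {"role": "assistant", "content": json.dumps(example["expected_json"])},
+         ]
+     }

+ raw = load_dataset("espsluar/crawlerlm-html-to-json", "raw")
+ chat = raw.map(to_chat, remove_columns=["example_html", "expected_json"])
+ ```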

+ **Quality Control**:
+ - Manual review of all base annotations
+ - Token count validation (≤24K per example)
+ - Schema validation (required fields, types)
+ - Stratified sampling to ensure balanced schema distribution
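
+ The stratified sampling step is easy to make concrete. A sketch assuming scikit-learn and a hypothetical `base_examples` list of the 60 annotated records (the pipeline may implement its own splitter):

+ ```python
+ # Sketch of the stratified 80/10/10 split by schema type. base_examples is
+ # a hypothetical list of annotated records; scikit-learn is an assumption.
+ from sklearn.model_selection import train_test_split

+ labels = [ex["expected_json"]["type"] for ex in base_examples]
+ train, rest, _, rest_labels = train_test_split(
+     base_examples, labels, test_size=0.2, stratify=labels, random_state=0
+ )
+ val, test = train_test_split(rest, test_size=0.5, stratify=rest_labels, random_state=0)
+ # With 60 base examples this reproduces the 48 / 6 / 6 split reported above.
+ ```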