espsluar committed on
Commit 62cca04 · verified · 1 Parent(s): 0057cf1

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +76 -82
README.md CHANGED
@@ -1,7 +1,6 @@
  ---
  task_categories:
  - text-generation
- - information-extraction
  language:
  - en
  size_categories:
@@ -11,61 +10,32 @@ tags:
  - html-extraction
  - structured-data
  - synthetic-data
  ---

- # CrawlerLM: HTML Fragment to Structured JSON

- A synthetic dataset for training language models to extract structured JSON from HTML fragments across multiple schema types.

  ## Dataset Description

- This dataset contains HTML fragments paired with structured JSON annotations across three schema types: **recipes**, **job postings**, and **events**. It's designed for fine-tuning small language models to perform domain-specific information extraction from messy, real-world HTML.

  ### Key Features

- - **60 manually annotated base examples** from diverse web sources
- - **3 schema types** with domain-specific fields
- - **Synthetic augmentation** to 447+ training examples with realistic HTML variations
- - **Two configurations**: raw (HTML→JSON) and chat (instruction format)
- - **Token-filtered** (all examples ≤24K tokens)

- ## Configurations

- ### `raw` Configuration
-
- HTML fragment to JSON extraction format.
-
- **Fields**:
- - `example_html` (string): Raw HTML fragment
- - `expected_json` (dict): Structured extraction with schema-specific fields
-
- **Example**:
- ```python
- {
-   "example_html": "<div class=\"recipe-card\">...</div>",
-   "expected_json": {
-     "type": "recipe",
-     "title": "Best Ever Macaroni Cheese",
-     "ingredients": ["500g macaroni", "200g cheddar", ...],
-     "instructions": ["Boil pasta", "Make sauce", ...],
-     "prep_time": "10 mins",
-     "cook_time": "20 mins",
-     ...
-   }
- }
- ```
-
- **Splits**:
- - Train: 400 examples (augmented from 48 base examples)
- - Validation: 50 examples (augmented from 6 base examples)
- - Test: 6 examples (pristine, no augmentation)
-
- ### `chat` Configuration
-
- Instruction-tuning format for training chat models.

  **Fields**:
  - `messages` (list): Conversational format with user/assistant roles

  **Example**:
  ```python
@@ -73,18 +43,18 @@ Instruction-tuning format for training chat models.
  "messages": [
    {
      "role": "user",
-     "content": "Extract structured data from the following HTML and return it as JSON.\n\nHTML:\n<div>...</div>"
    },
    {
      "role": "assistant",
-     "content": "{\"type\": \"recipe\", \"title\": \"...\", ...}"
    }
  ]
  }
  ```

  **Splits**:
- - Train: 391 examples (9 filtered out for exceeding token limit)
  - Validation: 50 examples
  - Test: 6 examples
@@ -116,14 +86,11 @@ Instruction-tuning format for training chat models.

  ## Data Collection Process

- 1. **Manual Annotation**: 61 HTML fragments manually annotated using a custom Chrome extension
- 2. **Quality Filtering**: Removed 1 example exceeding the 24K token limit (60 examples remaining)
- 3. **Stratified Split**: 80/10/10 split by schema type (48 train / 6 val / 6 test base examples)
- 4. **Synthetic Augmentation**:
-    - Train: ~8 variations per base example (400 total)
-    - Val: ~8 variations per base example (50 total)
-    - Test: No augmentation (6 pristine examples)
- 5. **Chat Conversion**: Convert to instruction-tuning format with token filtering

  ### Augmentation Strategies

@@ -137,13 +104,13 @@ All augmentations preserve semantic content and ensure `expected_json` remains u

  ## Usage

- ### Load Raw Configuration

  ```python
  from datasets import load_dataset

- # Load raw HTML→JSON format
- dataset = load_dataset("espsluar/crawlerlm-html-to-json", "raw")

  train_data = dataset["train"]
  val_data = dataset["validation"]
@@ -151,50 +118,77 @@ test_data = dataset["test"]

  # Inspect example
  example = train_data[0]
- print(f"Schema type: {example['expected_json']['type']}")
- print(f"HTML length: {len(example['example_html'])} chars")
- print(f"Title: {example['expected_json']['title']}")
  ```

- ### Load Chat Configuration

  ```python
  from datasets import load_dataset

- # Load chat format for instruction tuning
- dataset = load_dataset("espsluar/crawlerlm-html-to-json", "chat")

- train_data = dataset["train"]

- # Inspect example
- example = train_data[0]
- print(f"User prompt: {example['messages'][0]['content'][:100]}...")
- print(f"Assistant response: {example['messages'][1]['content'][:100]}...")
  ```

- ### Filter by Schema Type

  ```python
- # Filter for only recipes
- recipes = dataset["train"].filter(
-     lambda x: '"type": "recipe"' in x["messages"][1]["content"]
  )

- print(f"Recipe examples: {len(recipes)}")
  ```

  ## Dataset Statistics

- | Split | Raw Examples | Chat Examples | Schema Distribution |
- |-------|--------------|---------------|---------------------|
- | Train | 400 | 391 | ~133 recipe, ~150 job_posting, ~117 event |
- | Validation | 50 | 50 | ~17 recipe, ~17 job_posting, ~16 event |
- | Test | 6 | 6 | 2 recipe, 2 job_posting, 2 event |

- **Schema Distribution** (base examples before augmentation):
- - Recipe: 19 examples (31.7%)
- - Job Posting: 22 examples (36.7%)
- - Event: 19 examples (31.7%)

  ## Intended Use

@@ -217,7 +211,7 @@ print(f"Recipe examples: {len(recipes)}")
  - **Limited schema types**: Only 3 schema types (recipe, job_posting, event)
  - **English only**: All examples are from English-language websites
  - **Static HTML**: No JavaScript-rendered or dynamic content
- - **Token limit**: All examples ≤24K tokens (may not represent very long pages)
  - **Augmentation artifacts**: Synthetic variations may not perfectly match real-world HTML diversity

  ## Ethical Considerations
 
  ---
  task_categories:
  - text-generation
  language:
  - en
  size_categories:
  - html-extraction
  - structured-data
  - synthetic-data
+ - instruction-tuning
  ---

+ # CrawlerLM: HTML to JSON Extraction

+ A synthetic instruction-tuning dataset for training language models to extract structured JSON from HTML.

  ## Dataset Description

+ This dataset contains HTML paired with structured JSON extraction tasks in chat format. It's designed for fine-tuning small language models to perform structured data extraction from messy, real-world HTML across multiple domains.

  ### Key Features

+ - **447 examples** in instruction-tuning chat format
+ - **Real HTML** from diverse web sources (recipes, job postings, events)
+ - **Synthetic augmentation** with realistic HTML variations
+ - **Clean splits**: train (391) / validation (50) / test (6)

+ ## Dataset Format

+ All examples are in instruction-tuning chat format with user/assistant messages.
 
  **Fields**:
  - `messages` (list): Conversational format with user/assistant roles
+   - User message: Instruction + HTML input
+   - Assistant message: JSON output

  **Example**:
  ```python
  {
    "messages": [
      {
        "role": "user",
+       "content": "Extract structured data from the following HTML and return it as JSON.\n\nHTML:\n<div class=\"recipe-card\">...</div>"
      },
      {
        "role": "assistant",
+       "content": "{\"type\": \"recipe\", \"title\": \"Best Ever Macaroni Cheese\", \"ingredients\": [\"500g macaroni\", ...], ...}"
      }
    ]
  }
  ```

  **Splits**:
+ - Train: 391 examples
  - Validation: 50 examples
  - Test: 6 examples
 
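Note that the assistant turn stores the extraction as a serialized JSON string rather than a nested object, so downstream code has to parse it back; a minimal sketch (field names follow the recipe example above, and `title` is read with `.get` since fields vary by schema type):

```python
import json

from datasets import load_dataset

dataset = load_dataset("espsluar/crawlerlm-html-to-json")

# The assistant message content is a JSON string; parse it back into a dict
example = dataset["train"][0]
record = json.loads(example["messages"][1]["content"])

print(record["type"])        # "recipe", "job_posting", or "event"
print(record.get("title"))   # schema-specific field, e.g. the recipe title
```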
 
  ## Data Collection Process

+ 1. **Manual Annotation**: HTML fragments manually annotated using a custom Chrome extension
+ 2. **Quality Filtering**: Token-limit filtering and validation
+ 3. **Stratified Split**: Train/val/test split by schema type before augmentation
+ 4. **Synthetic Augmentation**: Generate HTML variations while preserving JSON semantics
+ 5. **Chat Conversion**: Convert to instruction-tuning format with a system prompt (see the sketch below)
 
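A rough sketch of what the chat-conversion step could look like, assuming each annotated pair uses the `example_html`/`expected_json` fields documented for the earlier raw configuration and the user prompt shown in the format example above; the actual conversion script is not included here, and any system prompt is omitted since the example only shows user/assistant turns:

```python
import json

# Hypothetical helper illustrating raw-pair -> chat-message conversion;
# field names and the instruction string follow this dataset card, not a published script.
def to_chat_example(example_html: str, expected_json: dict) -> dict:
    return {
        "messages": [
            {
                "role": "user",
                "content": (
                    "Extract structured data from the following HTML "
                    "and return it as JSON.\n\nHTML:\n" + example_html
                ),
            },
            {"role": "assistant", "content": json.dumps(expected_json)},
        ]
    }

# Toy usage
print(to_chat_example('<div class="recipe-card">...</div>', {"type": "recipe"}))
```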
 
 
  ### Augmentation Strategies

  ## Usage

+ ### Load Dataset

  ```python
  from datasets import load_dataset

+ # Load the dataset
+ dataset = load_dataset("espsluar/crawlerlm-html-to-json")

  train_data = dataset["train"]
  val_data = dataset["validation"]
  test_data = dataset["test"]

  # Inspect example
  example = train_data[0]
+ print(f"User prompt: {example['messages'][0]['content'][:100]}...")
+ print(f"Assistant response: {example['messages'][1]['content'][:100]}...")
  ```

+ ### Filter by Schema Type

  ```python
  from datasets import load_dataset

+ dataset = load_dataset("espsluar/crawlerlm-html-to-json")

+ # Filter for only recipes
+ recipes = dataset["train"].filter(
+     lambda x: '"type": "recipe"' in x["messages"][1]["content"]
+ )

+ print(f"Recipe examples: {len(recipes)}")
  ```

+ ### Fine-tuning Example

  ```python
+ from datasets import load_dataset
+ from transformers import AutoTokenizer, AutoModelForCausalLM, DataCollatorForLanguageModeling, Trainer, TrainingArguments
+
+ # Load dataset
+ dataset = load_dataset("espsluar/crawlerlm-html-to-json")
+
+ # Load model and tokenizer
+ model_name = "Qwen/Qwen2.5-0.5B-Instruct"
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForCausalLM.from_pretrained(model_name)
+
+ # Apply chat template and tokenize
+ def format_example(example):
+     text = tokenizer.apply_chat_template(
+         example["messages"],
+         tokenize=False
+     )
+     return tokenizer(text, truncation=True, max_length=4096)
+
+ tokenized_dataset = dataset.map(format_example, remove_columns=["messages"])
+
+ # Causal-LM collator: pads each batch and derives labels from input_ids
+ data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
+
+ # Train
+ trainer = Trainer(
+     model=model,
+     args=TrainingArguments(
+         output_dir="./crawlerlm-finetuned",
+         per_device_train_batch_size=1,
+         num_train_epochs=3,
+     ),
+     train_dataset=tokenized_dataset["train"],
+     eval_dataset=tokenized_dataset["validation"],
+     data_collator=data_collator,
  )

+ trainer.train()
  ```
 
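After training, the same chat template drives inference. A minimal generation sketch that reuses the `tokenizer` and `model` from the script above, with a toy HTML fragment and illustrative generation settings:

```python
import torch

prompt = (
    "Extract structured data from the following HTML and return it as JSON.\n\n"
    "HTML:\n<div class=\"recipe-card\"><h1>Best Ever Macaroni Cheese</h1>...</div>"
)

# Build the chat-formatted input and generate the JSON extraction
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    add_generation_prompt=True,
    return_tensors="pt",
)

with torch.no_grad():
    output_ids = model.generate(inputs, max_new_tokens=512)

# Decode only the newly generated tokens (the model's JSON output)
print(tokenizer.decode(output_ids[0, inputs.shape[-1]:], skip_special_tokens=True))
```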

  ## Dataset Statistics

+ | Split | Examples | Schema Distribution |
+ |-------|----------|---------------------|
+ | Train | 391 | ~133 recipe, ~150 job_posting, ~117 event |
+ | Validation | 50 | ~17 recipe, ~17 job_posting, ~16 event |
+ | Test | 6 | 2 recipe, 2 job_posting, 2 event |
+ | **Total** | **447** | |

+ **Schema Distribution**:
+ - Recipe: ~152 examples (34%)
+ - Job Posting: ~169 examples (38%)
+ - Event: ~135 examples (30%)
 
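The per-type counts above are approximate; they can be re-derived from the assistant messages using the same substring match as the filter example earlier on this card:

```python
from collections import Counter

from datasets import load_dataset

dataset = load_dataset("espsluar/crawlerlm-html-to-json")

# Count schema types per split by matching the "type" field in each assistant JSON string
for split, data in dataset.items():
    counts = Counter(
        next(
            (t for t in ("recipe", "job_posting", "event")
             if f'"type": "{t}"' in ex["messages"][1]["content"]),
            "unknown",
        )
        for ex in data
    )
    print(split, dict(counts))
```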

  ## Intended Use

  - **Limited schema types**: Only 3 schema types (recipe, job_posting, event)
  - **English only**: All examples are from English-language websites
  - **Static HTML**: No JavaScript-rendered or dynamic content
+ - **Moderate dataset size**: 447 examples total (391 training examples)
+ - **Augmentation artifacts**: Synthetic variations may not perfectly match real-world HTML diversity

  ## Ethical Considerations