---
language: en
pretty_name: Job Training Data for JSON Extraction
task_categories:
- text-generation
tags:
- json
- web-scraping
- job-postings
- training-data
- json-extraction
size_categories:
- 10K<n<100K
---
# Job Training Data
Training dataset for fine-tuning LLMs to extract structured JSON from job postings.
## Description
This dataset contains **12,000 examples** of job postings in markdown format paired with their JSON extractions. Used to train the `job-posting-extractor-qwen` model.
## Data Format
Each example contains:
- `instruction`: What to do (e.g., "Extract job fields as JSON").
- `input`: Job posting in markdown format.
- `output`: Expected JSON output (as a string).
### Example Entry
```json
{
  "instruction": "Extract all job fields as JSON object.",
  "input": "# Job Position\n**Position:** Platform Engineer\n**Company:** DATAECONOMY\n**Location:** Charlotte, NC\n\n## Job Description\nRole: Platform engineer...",
  "output": "{\"job_title\": \"Platform Engineer\", \"company\": \"DATAECONOMY\", \"location\": \"Charlotte, NC\"}"
}
```
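Note that `output` is a JSON *string*, not a nested object, so consumers must parse it before use. A minimal sketch (the example values are copied from the entry above):

```python
import json

# The "output" field is stored as a JSON-encoded string,
# so it needs a second json.loads() after loading the example.
example = {
    "instruction": "Extract all job fields as JSON object.",
    "input": "# Job Position\n**Position:** Platform Engineer...",
    "output": "{\"job_title\": \"Platform Engineer\", "
              "\"company\": \"DATAECONOMY\", "
              "\"location\": \"Charlotte, NC\"}",
}

fields = json.loads(example["output"])
print(fields["job_title"])  # Platform Engineer
```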
## How It Was Created
1. Data sourced from web-scraped job postings.
2. Converted to markdown using template-based generation.
3. JSON labels programmatically extracted from the scraped data.
4. Augmented with 15 instruction variations.
## Dataset Statistics
| Metric | Value |
|--------|-------|
| Total examples | 12,000 |
| Unique JSON fields | 7 (job_title, company, location, work_type, description, experience_level, salary) |
| Instruction variations | 15 |
## Files
- `job_training_data.json`: Main training data (12,000 examples).
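A minimal loading sketch, assuming the file is a single JSON array of examples (here a tiny inline sample stands in for the downloaded file):

```python
import json

# Inline stand-in for job_training_data.json; with the real file, use
#   with open("job_training_data.json", encoding="utf-8") as f:
#       examples = json.load(f)
sample = (
    '[{"instruction": "Extract all job fields as JSON object.", '
    '"input": "# Job Position\\n**Position:** Platform Engineer...", '
    '"output": "{\\"job_title\\": \\"Platform Engineer\\"}"}]'
)

examples = json.loads(sample)
for ex in examples:
    assert {"instruction", "input", "output"} <= ex.keys()
    json.loads(ex["output"])  # the output label must itself be valid JSON
```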
## License & Attribution
This dataset is licensed under the **Creative Commons Attribution 4.0 International (CC BY 4.0)** license.
You are free to **use, share, copy, modify, and redistribute** this material for any purpose (including commercial use), **provided that proper attribution is given**.
### Attribution requirements
Any reuse, redistribution, or derivative work **must** include:
1. **The creator's name**: `HelixCipher`
2. **A link to the original repository**:
https://github.com/HelixCipher/fine-tuning-an-local-llm-for-web-scraping
3. **An indication of whether changes were made**
4. **A reference to the license (CC BY 4.0)**
#### Example Attribution
> This work is based on *Fine-Tuning An Local LLM for Web Scraping* by `HelixCipher`.
> Original source: https://github.com/HelixCipher/fine-tuning-an-local-llm-for-web-scraping
> Licensed under the Creative Commons Attribution 4.0 International (CC BY 4.0).
You may place this attribution in a README, documentation, credits section, or other visible location appropriate to the medium.
Full license text: https://creativecommons.org/licenses/by/4.0/