---
language: en
pretty_name: Job Training Data for JSON Extraction
task_categories:
  - text-generation
tags:
  - json
  - web-scraping
  - job-postings
  - training-data
  - json-extraction
size_categories:
  - 10K<n<100K
---

# Job Training Data

Training dataset for fine-tuning LLMs to extract structured JSON from job postings.

## Description

This dataset contains 12,000 examples of job postings in markdown format, each paired with its JSON extraction. It was used to train the job-posting-extractor-qwen model.

## Data Format

Each example contains:

- `instruction`: the task description (e.g., "Extract job fields as JSON").
- `input`: the job posting in markdown format.
- `output`: the expected JSON output, stored as a string.
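During fine-tuning, these three fields are typically concatenated into a single training string. A minimal sketch, assuming an Alpaca-style prompt layout (the actual template used for training is not specified in this README):

```python
def build_prompt(example: dict) -> str:
    # Assemble one training string from the three dataset fields.
    # The "### Instruction/Input/Response" layout is an illustrative
    # assumption, not the published training template.
    return (
        f"### Instruction:\n{example['instruction']}\n\n"
        f"### Input:\n{example['input']}\n\n"
        f"### Response:\n{example['output']}"
    )
```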

### Example Entry

```json
{
  "instruction": "Extract all job fields as JSON object.",
  "input": "# Job Position\n**Position:** Platform Engineer\n**Company:** DATAECONOMY\n**Location:** Charlotte, NC\n\n## Job Description\nRole: Platform engineer...",
  "output": "{\"job_title\": \"Platform Engineer\", \"company\": \"DATAECONOMY\", \"location\": \"Charlotte, NC\"}"
}
```
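Because `output` is a JSON *string* rather than a nested object, it needs a second parsing pass after the file is loaded. A minimal sketch:

```python
import json


def parse_example(example: dict) -> dict:
    # The "output" field is stored as a JSON-encoded string, so a
    # second json.loads turns it into a dict of extracted job fields.
    return json.loads(example["output"])
```

For the entry above, `parse_example(example)["job_title"]` yields `"Platform Engineer"`.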

## How It Was Created

1. Data was sourced from web-scraped job postings.
2. Postings were converted to markdown using template-based generation.
3. JSON labels were programmatically extracted from the scraped data.
4. Examples were augmented with 15 instruction variations.
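Step 4 amounts to assigning each (input, output) pair one of several instruction phrasings so the model generalizes across wordings. A sketch under that assumption; the variant strings below are illustrative placeholders, not the dataset's actual 15 phrasings:

```python
import random

# Placeholder phrasings -- the real 15 variations are not listed in this README.
INSTRUCTION_VARIANTS = [
    "Extract all job fields as JSON object.",
    "Return the job posting's fields as JSON.",
    "Parse this job posting into structured JSON.",
]


def augment(records: list[dict], seed: int = 0) -> list[dict]:
    # Attach a randomly chosen instruction phrasing to each record.
    rng = random.Random(seed)
    return [
        {"instruction": rng.choice(INSTRUCTION_VARIANTS), **rec}
        for rec in records
    ]
```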

## Dataset Statistics

| Metric | Value |
|---|---|
| Total examples | 12,000 |
| Unique JSON fields | 7 (job_title, company, location, work_type, description, experience_level, salary) |
| Instruction variations | 15 |
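These figures can be recomputed directly from the loaded records. A small sketch that tallies the example count, the set of JSON field names, and the number of distinct instruction phrasings:

```python
import json


def dataset_stats(records: list[dict]) -> dict:
    # Recompute the table above: total examples, unique JSON field
    # names across all outputs, and distinct instruction phrasings.
    fields: set[str] = set()
    instructions: set[str] = set()
    for rec in records:
        fields.update(json.loads(rec["output"]).keys())
        instructions.add(rec["instruction"])
    return {
        "total_examples": len(records),
        "unique_json_fields": sorted(fields),
        "instruction_variations": len(instructions),
    }
```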

## Files

- `job_training_data.json`: main training data (12,000 examples).

## License & Attribution

This dataset is licensed under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.

You are free to use, share, copy, modify, and redistribute this material for any purpose (including commercial use), provided that proper attribution is given.

### Attribution Requirements

Any reuse, redistribution, or derivative work must include:

1. The creator's name: HelixCipher.
2. A link to the original repository:
   https://github.com/HelixCipher/fine-tuning-an-local-llm-for-web-scraping
3. An indication of whether changes were made.
4. A reference to the license (CC BY 4.0).

### Example Attribution

> This work is based on Fine-Tuning An Local LLM for Web Scraping by HelixCipher.
> Original source: https://github.com/HelixCipher/fine-tuning-an-local-llm-for-web-scraping
> Licensed under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.

You may place this attribution in a README, documentation, credits section, or other visible location appropriate to the medium.

Full license text: https://creativecommons.org/licenses/by/4.0/