gooddocs-v0 / data_collection_utils /scrape_gh_docs_config.yaml
MRiabov's picture
(optim) optim rebuild to run only over readme and add workers
3356c03
# Global configuration for scrape_gh_docs.py
# All configuration is driven by this file. The CLI only supports --no-fetch for convenience.
# Inputs
# Preferred: one or more Parquet files produced by our collectors, each with a 'link' column
# Examples:
# - data_collection_utils/awesome_final_repos.py -> awesome-repos.parquet
# - data_collection_utils/top_1000_repos.py -> top-1000-repos.parquet
input_parquet:
- ../output/links.filtered.parquet
# Output directories/files
outdir: ../output/raw_docs
md_failed: ../md-failed.txt
texts_parquet: ../output/cleaned_texts_on_metadata_only.parquet
# Concurrency and behavior
# Use separate knobs to avoid hitting API limits during scraping while keeping rebuild fast.
# scrape_workers applies to network/API fetch stage; rebuild_workers applies to local post-processing.
scrape_workers: 3 # keep low to avoid GitHub 403s
rebuild_workers: 6 # can be higher (I/O + optional CPU)
# Use processes for CPU-bound post-processing (lang detection); options: thread | process
postprocess_executor: process
dry_run: false
no_fetch: false
quiet: false
# How often to checkpoint partial outputs (in processed repos)
checkpoint_every: 50
# Resume: skip refetching repos that already exist under outdir
resume: true
# Auth
# Secrets are NOT configured here. Put your GitHub token in a .env file (recommended)
# or export it in your shell environment. Required env var:
# GITHUB_TOKEN
# Example .env:
# GITHUB_TOKEN=ghp_your_token_here
# Strategies
# Use git sparse-checkout to try to download only documentation folders first.
prefer_sparse: false
# If sparse docs not found, optionally download the whole repo zip; still only .md will be extracted when only_md true
prefer_zip: false
# Content selection
# Download only Markdown files from any chosen strategy
only_md: true
# Minimum number of .md files for a repo to be considered useful (otherwise marked low-md-count)
min_docs_md_count: 10
# Filtering (note: age filtering moved to metadata stage)
# Language filtering for texts parquet
lang_filter: en
min_text_chars: 200