MRiabov committed
Commit 9827b39 · verified · 1 Parent(s): d7bb892

Upload folder using huggingface_hub

Files changed (3):
  1. README.md +171 -0
  2. github_api_utils.py +302 -0
  3. requirements.txt +2 -0
README.md ADDED
@@ -0,0 +1,171 @@

# Awesome Final Repos Dataset

Collect a curated dataset of “final” GitHub repositories starting from an Awesome list-of-lists. This tool crawls Awesome lists recursively and extracts end repositories whose names do not contain “awesome” (case-insensitive). It optionally enriches entries with GitHub stars and README previews.

- Script: `data_collection_utils/awesome_final_repos.py`
- Config: `data_collection_utils/awesome_scrap_config.yaml`
- Output dataset: `awesome-repos.parquet`

## Key Features

- Crawl Awesome lists to a configurable depth and collect “final” repos only at the edge depth (`depth == max_depth`).
- Extract name, canonical GitHub link, and description from Awesome README bullet entries.
- Optional enrichment:
  - GitHub stars (`stargazers_count`)
  - README preview text (first N characters)
- Caching of fetched READMEs for faster repeated runs.
- Progress bars for enrichment phases with `tqdm`.

## How It Works

1. Start from a root Awesome repo (e.g., `sindresorhus/awesome`).
2. Fetch the README markdown via the GitHub API (not HTML).
3. Parse bullet entries of the form (see the sketch below):
   - `[Name](https://github.com/owner/repo) - Short description`
   - If there is no “ - ” separator, the remainder of the line becomes the description (with the link removed).
4. Recurse through linked Awesome lists until `max_depth` is reached.
5. Only at `depth == max_depth`, collect non-awesome repos as “final” repos.
6. Optionally enrich with stars and README previews.

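As an illustration of step 3, here is a minimal sketch of how such a bullet line could be parsed. The regex and function name are illustrative assumptions, not the actual code in `awesome_final_repos.py`:

```python
import re
from typing import Optional, Tuple

# Illustrative pattern for lines like:
#   "- [Name](https://github.com/owner/repo) - Short description"
BULLET_RE = re.compile(
    r"^\s*[-*]\s*\[(?P<name>[^\]]+)\]"            # [Name]
    r"\((?P<link>https://github\.com/[^)\s]+)\)"  # (https://github.com/owner/repo)
    r"\s*(?:-\s*)?(?P<desc>.*)$"                  # optional " - description" (or trailing text)
)

def parse_awesome_bullet(line: str) -> Optional[Tuple[str, str, str]]:
    """Return (name, link, description) for an Awesome bullet line, or None if it doesn't match."""
    m = BULLET_RE.match(line)
    if not m:
        return None
    return m.group("name").strip(), m.group("link").rstrip("/"), m.group("desc").strip()

print(parse_awesome_bullet("- [ripgrep](https://github.com/BurntSushi/ripgrep) - Recursively search directories"))
```
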
## Installation

Use Python 3.11+. Install dependencies:

```bash
pip install -r requirements.txt
```

Core dependencies include: `aiohttp`, `pandas`, `pyarrow`, `tqdm`, `pyyaml`, `python-dotenv`, `numpy`, `requests`, `duckdb`, `langid`, `playwright`, `huggingface_hub`.

## Configuration

Edit `data_collection_utils/awesome_scrap_config.yaml` to control defaults. Important keys:

- `root`: Root Awesome repo URL, e.g. `https://github.com/sindresorhus/awesome`
- `depth`: Max recursion depth. Only repos discovered at this depth are collected.
- `workers`: Concurrency for async requests.
- `output_dir`: Where to write `awesome-repos.parquet`.
- `cache_dir`: JSON cache for retrieved READMEs.
- `fetch_stars`: true/false to enrich with `stargazers_count`.
- `fetch_readme_preview`: true/false to enrich with `readme_preview`.
- `readme_preview_chars`: Max characters for the preview.

Example:

```yaml
root: https://github.com/sindresorhus/awesome
depth: 2
workers: 20

output_dir: .
cache_dir: output/awesome_parse_cache

# Enrichments
fetch_stars: false
fetch_readme_preview: false
readme_preview_chars: 1000
```

All of the above can be overridden via CLI flags.

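For reference, a minimal sketch of one way the YAML defaults could be merged with CLI overrides. The exact loading logic in `awesome_final_repos.py` may differ; the flag set below is a subset used for illustration:

```python
import argparse
import yaml  # provided by pyyaml

def load_settings(config_path: str = "data_collection_utils/awesome_scrap_config.yaml") -> dict:
    """Load YAML defaults, then let any CLI flag that was actually passed override them."""
    with open(config_path) as f:
        cfg = yaml.safe_load(f) or {}

    parser = argparse.ArgumentParser()
    parser.add_argument("--root")
    parser.add_argument("--depth", type=int)
    parser.add_argument("--workers", type=int)
    parser.add_argument("--output-dir", dest="output_dir")
    parser.add_argument("--cache-dir", dest="cache_dir")
    args = parser.parse_args()

    for key, value in vars(args).items():
        if value is not None:  # unset flags fall back to the YAML defaults
            cfg[key] = value
    return cfg
```
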
## Usage

From the repo root:

```bash
python3 data_collection_utils/awesome_final_repos.py \
  --root https://github.com/sindresorhus/awesome \
  --depth 2 \
  --workers 20
```

Flags:

- `--root`: Root Awesome repo URL
- `--depth`: Maximum recursion depth (0 = only root)
- `--workers`: Concurrency for async fetching (default from config)
- `--output-dir`: Output directory for `awesome-repos.parquet`
- `--cache-dir`: Cache directory for README content
- `--fetch-readme-preview`: Include a `readme_preview` column
- `--readme-preview-chars`: Number of chars to include in `readme_preview`

Run flow:

- Crawl Awesome lists and log progress as `[depth=X] awesome: owner/repo`.
- After crawling completes, enrichment steps run with progress bars:
  - “Description enrichment” for missing descriptions
  - “README enrichment” for preview extraction (if enabled)
- Write `awesome-repos.parquet` to the `--output-dir`.

## Authentication and Rate Limits

Set a GitHub token to increase API limits and reduce throttling:

```bash
export GITHUB_TOKEN=ghp_your_token_here
```

The script reads `GITHUB_TOKEN` from the environment (loaded via `python-dotenv` or exported directly).

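For completeness, a minimal sketch of the `.env` loading path, assuming a `.env` file in the working directory (the script's actual loading code may differ):

```python
import os

from dotenv import load_dotenv  # provided by python-dotenv

load_dotenv()  # picks up GITHUB_TOKEN from a .env file, if one exists
token = os.getenv("GITHUB_TOKEN")
headers = {"Authorization": f"token {token}"} if token else {}
```
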
## Output Schema

`awesome-repos.parquet` columns (presence depends on config):

- `name`: Repo name extracted from the Awesome bullet link text.
- `link`: Canonical GitHub repo URL, e.g., `https://github.com/owner/repo`.
- `description`: Short description (from Awesome bullet or enriched via GitHub repo API if missing).
- `source_repo`: The Awesome list repo that referenced this entry at the deepest traversal level.
- `stars` (optional): GitHub `stargazers_count`.
- `readme_preview` (optional): First N characters of README markdown.

Notes:

- Only “final” repos (non-awesome) discovered at `depth == max_depth` are included.
- Internal traversal bookkeeping is not written to the final dataset.

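To inspect the resulting dataset, something like the following works; which optional columns appear depends on the enrichment flags:

```python
import pandas as pd

df = pd.read_parquet("awesome-repos.parquet")
print(df.shape)
print(df.columns.tolist())  # e.g. ["name", "link", "description", "source_repo", ...]
print(df[["name", "link"]].head())
```
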
## Caching

Fetched README markdown is cached at:

```
<cache_dir>/readme_cache.json
```

Caching dramatically speeds up iterative development and re-runs.

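A minimal sketch of the get-or-fetch pattern such a JSON cache enables; the real cache layout and key format in the script may differ (entries are assumed here to be keyed by `owner/repo`):

```python
import json
from pathlib import Path

CACHE_PATH = Path("output/awesome_parse_cache") / "readme_cache.json"

def load_cache() -> dict:
    return json.loads(CACHE_PATH.read_text()) if CACHE_PATH.exists() else {}

def save_cache(cache: dict) -> None:
    CACHE_PATH.parent.mkdir(parents=True, exist_ok=True)
    CACHE_PATH.write_text(json.dumps(cache))

def get_readme_cached(cache: dict, key: str, fetch) -> str | None:
    # `fetch` is any callable that returns the README text for `key` ("owner/repo").
    if key not in cache:
        cache[key] = fetch(key)
    return cache[key]
```
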
## Performance Tips

- Provide a `GITHUB_TOKEN` for higher rate limits.
- Tune `--workers` based on your network and rate limits (e.g., 20–50).
- Keep `fetch_readme_preview: false` to reduce requests if you only need core fields.
- Reuse the cache between runs.

## Example: Publish the Dataset

To publish the dataset to the Hugging Face Hub (dataset repo):

```bash
# Create or upload to your dataset repo
hf upload your-username/awesome-repo-descriptions . --repo-type dataset
```

Example public dataset: `MRiabov/awesome-repo-descriptions`

```
https://huggingface.co/datasets/MRiabov/awesome-repo-descriptions
```

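The same upload can also be done from Python with `huggingface_hub` (the repo id below is a placeholder, as above):

```python
from huggingface_hub import HfApi

api = HfApi()  # uses HF_TOKEN / a cached login for authentication
api.create_repo("your-username/awesome-repo-descriptions", repo_type="dataset", exist_ok=True)
api.upload_folder(
    folder_path=".",
    repo_id="your-username/awesome-repo-descriptions",
    repo_type="dataset",
)
```
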
## Project Structure

```
.
├── data_collection_utils/
│   ├── awesome_final_repos.py
│   └── awesome_scrap_config.yaml
├── awesome-repos.parquet
└── requirements.txt
```

## License

This project is provided as-is. Please verify the licenses of any third-party repositories included in the dataset before redistribution or downstream use.

github_api_utils.py ADDED
@@ -0,0 +1,302 @@

#!/usr/bin/env python3
"""
GitHub API utilities for scraping and metadata collection.
Separated from scrape_gh_docs.py to keep the main script slimmer.
"""

from __future__ import annotations

import os
import time
import logging
import threading
from pathlib import Path
from urllib.parse import quote_plus
from typing import Optional, Dict, Any, List

import requests
import aiohttp

GITHUB_API = "https://api.github.com"

# Use the same logger name as the main script so logs route through its handler
logger = logging.getLogger("scrape_gh_docs")

_thread_local = threading.local()


def github_headers() -> Dict[str, str]:
    token = os.getenv("GITHUB_TOKEN")
    h = {"Accept": "application/vnd.github.v3+json", "User-Agent": "docs-scraper/1.0"}
    if token:
        h["Authorization"] = f"token {token}"
    return h


def get_session() -> requests.Session:
    sess = getattr(_thread_local, "session", None)
    if sess is None:
        sess = requests.Session()
        _thread_local.session = sess
    return sess


def request_json(
    url: str, params: Optional[dict] = None, accept_status=(200,), max_retries: int = 3
):
    for attempt in range(max_retries):
        resp = get_session().get(
            url, headers=github_headers(), params=params, timeout=30
        )
        if resp.status_code in accept_status:
            # Some endpoints return empty responses on success (e.g. 204). Handle json errors defensively.
            try:
                return resp.json()
            except Exception:
                return None
        if resp.status_code == 403:
            # rate limit or blocked - try to get reset and sleep
            reset = resp.headers.get("X-RateLimit-Reset")
            ra = resp.headers.get("Retry-After")
            if ra:
                wait = int(ra)
            elif reset:
                wait = max(5, int(reset) - int(time.time()))
            else:
                wait = 30
            logger.warning(
                f"403 from {url}. Sleeping {wait}s (attempt {attempt + 1}/{max_retries})"
            )
            time.sleep(wait)
            continue
        if 500 <= resp.status_code < 600:
            backoff = (attempt + 1) * 5
            logger.warning(f"{resp.status_code} from {url}. Backing off {backoff}s")
            time.sleep(backoff)
            continue
        logger.error(f"Request to {url} returned {resp.status_code}: {resp.text}")
        return None
    logger.error(f"Exhausted retries for {url}")
    return None


def download_file(url: str, dest_path: Path):
    dest_path.parent.mkdir(parents=True, exist_ok=True)
    with get_session().get(url, headers=github_headers(), stream=True, timeout=60) as r:
        r.raise_for_status()
        with open(dest_path, "wb") as f:
            for chunk in r.iter_content(chunk_size=8192):
                if chunk:
                    f.write(chunk)


# === High-level GitHub API helpers ===


def get_repo_info(owner: str, repo: str) -> Optional[Dict[str, Any]]:
    url = f"{GITHUB_API}/repos/{owner}/{repo}"
    return request_json(url)


def get_default_branch(
    owner: str, repo: str, repo_json: Optional[Dict[str, Any]] = None
) -> Optional[str]:
    if repo_json and "default_branch" in repo_json:
        return repo_json["default_branch"]
    info = get_repo_info(owner, repo)
    if not info:
        return None
    return info.get("default_branch")


def get_latest_commit_date(
    owner: str,
    repo: str,
    ref: Optional[str],
    repo_json: Optional[Dict[str, Any]] = None,
) -> Optional[str]:
    """
    Return ISO8601 date string of the latest commit on the given ref (branch or SHA).
    Falls back to repo's pushed_at if commits endpoint returns nothing.
    """
    branch = ref or (repo_json.get("default_branch") if repo_json else None) or "main"
    commits = request_json(
        f"{GITHUB_API}/repos/{owner}/{repo}/commits",
        params={"sha": branch, "per_page": 1},
        accept_status=(200,),
    )
    if isinstance(commits, list) and commits:
        try:
            return commits[0]["commit"]["author"]["date"]
        except Exception:
            pass
    if repo_json is None:
        repo_json = get_repo_info(owner, repo) or {}
    return repo_json.get("pushed_at")


def get_contents(owner: str, repo: str, path: str, ref: Optional[str] = None):
    url = f"{GITHUB_API}/repos/{owner}/{repo}/contents/{quote_plus(path)}"
    params = {"ref": ref} if ref else None
    return request_json(url, params=params, accept_status=(200, 404))


def get_owner_type(owner: str) -> Optional[str]:
    info = request_json(f"{GITHUB_API}/users/{owner}", accept_status=(200, 404))
    if not info:
        return None
    return info.get("type")


def get_org_repos(owner: str, per_page: int = 100) -> List[Dict[str, Any]]:
    owner_type = get_owner_type(owner)
    base = "orgs" if owner_type == "Organization" else "users"
    repos: List[Dict[str, Any]] = []
    page = 1
    while True:
        url = f"{GITHUB_API}/{base}/{owner}/repos"
        params = {"per_page": per_page, "page": page}
        data = request_json(url, params=params)
        if not data:
            if page == 1 and base == "orgs":
                base = "users"
                continue
            break
        repos.extend(data)
        if len(data) < per_page:
            break
        page += 1
    return repos


def search_repos(query: str, per_page: int = 5) -> List[Dict[str, Any]]:
    url = f"{GITHUB_API}/search/repositories"
    params = {"q": query, "per_page": per_page}
    res = request_json(url, params=params, accept_status=(200,))
    if not res:
        return []
    return res.get("items", [])


def get_repo_tree_paths(owner: str, repo: str, ref: Optional[str]) -> List[str]:
    ref = ref or "main"
    url = f"{GITHUB_API}/repos/{owner}/{repo}/git/trees/{quote_plus(ref)}"
    params = {"recursive": 1}
    data = request_json(url, params=params, accept_status=(200,))
    if not data or "tree" not in data:
        return []
    paths: List[str] = []
    for entry in data["tree"]:
        if entry.get("type") == "blob" and "path" in entry:
            paths.append(entry["path"])
    return paths


def get_repo_tree_md_paths(owner: str, repo: str, ref: Optional[str]) -> List[str]:
    """
    Return only Markdown file paths from the repository tree on the given ref
    using the Git Trees API (recursive=1).

    This is a convenience wrapper over get_repo_tree_paths() that filters to
    .md files, case-insensitive.
    """
    all_paths = get_repo_tree_paths(owner, repo, ref)
    return [p for p in all_paths if p.lower().endswith(".md")]


async def fetch_repo_readme_markdown(
    session: aiohttp.ClientSession, owner: str, repo: str
) -> Optional[str]:
    """
    Fetch README markdown using the contents API, trying README.md and readme.md.
    Returns the markdown text or None if not found.
    """
    headers = github_headers()
    for name in ("README.md", "readme.md"):
        url = f"{GITHUB_API}/repos/{owner}/{repo}/contents/{name}"
        try:
            async with session.get(url, headers=headers) as resp:
                if resp.status == 200:
                    data = await resp.json()
                    if isinstance(data, dict) and "download_url" in data:
                        download_url = data["download_url"]
                        async with session.get(download_url, headers=headers) as d:
                            if d.status == 200:
                                return await d.text()
        except Exception:
            continue
    # Fallback: list the repo tree at depth=0 (root only) and select any file starting with README*
    try:
        # Get default branch to address the tree head
        repo_url = f"{GITHUB_API}/repos/{owner}/{repo}"
        default_branch = "main"
        async with session.get(repo_url, headers=headers) as rinfo:
            if rinfo.status == 200:
                info = await rinfo.json()
                if isinstance(info, dict) and info.get("default_branch"):
                    default_branch = info["default_branch"]

        # Depth=0 tree (root only). Omitting recursive parameter implies non-recursive.
        tree_url = f"{GITHUB_API}/repos/{owner}/{repo}/git/trees/{quote_plus(default_branch)}"
        async with session.get(tree_url, headers=headers) as rtree:
            if rtree.status != 200:
                return None
            tree = await rtree.json()
            if not isinstance(tree, dict) or "tree" not in tree:
                return None

        entries = tree["tree"]
        # Find candidates that start with README (case-insensitive) and are files (blobs)
        candidates = []
        for e in entries:
            if e.get("type") != "blob":
                continue
            path = e.get("path")
            if not path:
                continue
            name_lower = path.lower()
            if name_lower.startswith("readme"):
                # Priority: .md < .rst < .org < others; shorter names first
                prio_map = {".md": 0, ".rst": 1, ".org": 2}
                ext = ""
                if "." in path:
                    ext = path[path.rfind(".") :].lower()
                prio = (prio_map.get(ext, 3), len(path))
                candidates.append((prio, path))

        if not candidates:
            return None
        candidates.sort()
        chosen_path = candidates[0][1]

        # Fetch the chosen README variant via contents API to get a direct download URL
        contents_url = f"{GITHUB_API}/repos/{owner}/{repo}/contents/{quote_plus(chosen_path)}"
        async with session.get(contents_url, headers=headers) as rc:
            if rc.status != 200:
                return None
            cdata = await rc.json()
            if isinstance(cdata, dict) and "download_url" in cdata:
                download_url = cdata["download_url"]
                async with session.get(download_url, headers=headers) as rd:
                    if rd.status == 200:
                        return await rd.text()
    except Exception:
        return None

    return None


async def fetch_repo_description(
    session: aiohttp.ClientSession, owner: str, repo: str
) -> Optional[str]:
    url = f"https://api.github.com/repos/{owner}/{repo}"
    try:
        async with session.get(url, headers=github_headers()) as resp:
            if resp.status == 200:
                data = await resp.json()
                if isinstance(data, dict) and "description" in data:
                    desc = data["description"]
                    if isinstance(desc, str):
                        return desc
    except Exception:
        return None
    return None

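For context on how these helpers compose, here is a small illustrative driver. It is not part of the committed file, and the repo used is just an example:

```python
import asyncio

import aiohttp

from github_api_utils import (
    fetch_repo_readme_markdown,
    get_default_branch,
    get_repo_info,
    get_repo_tree_md_paths,
)


def show_sync_helpers(owner: str = "sindresorhus", repo: str = "awesome") -> None:
    # Blocking helpers built on requests: repo metadata, default branch, Markdown paths.
    info = get_repo_info(owner, repo)
    branch = get_default_branch(owner, repo, repo_json=info)
    md_paths = get_repo_tree_md_paths(owner, repo, branch)
    print(f"{owner}/{repo}: default branch={branch}, {len(md_paths)} markdown files")


async def show_readme(owner: str = "sindresorhus", repo: str = "awesome") -> None:
    # Async helper used by the crawler to pull README markdown.
    async with aiohttp.ClientSession() as session:
        readme = await fetch_repo_readme_markdown(session, owner, repo)
        print((readme or "")[:200])


if __name__ == "__main__":
    show_sync_helpers()
    asyncio.run(show_readme())
```
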
requirements.txt ADDED
@@ -0,0 +1,2 @@

requests
dotenv