init
- README.md +60 -3
- python_sample.jsonl +0 -0
- raw/Readme.md +19 -0
- raw/negative/collect_script.py +381 -0
- raw/negative/negative_raw.jsonl +0 -0
- raw/positive/abstract_script.py +40 -0
- raw/positive/github_repositories.csv +0 -0
- raw/positive/positive_original.part01.rar +3 -0
- raw/positive/positive_original.part02.rar +3 -0
- raw/positive/positive_original.part03.rar +3 -0
- raw/positive/positive_original.part04.rar +3 -0
- raw/positive/positive_original.part05.rar +3 -0
- raw/positive/positive_original.part06.rar +3 -0
- script/divide.py +77 -0
- script/extract_members.py +30 -0
- script/ratio.py +43 -0
README.md
CHANGED
@@ -1,3 +1,60 @@
# Dataset Scripts

## divide.py

`divide.py` splits a JSONL file into two separate files based on the approximate token count of a text field. It auto-detects the appropriate text field from the input JSONL and uses the median token count as the threshold to categorize entries as "short" or "long".

### Usage

To use `divide.py`, run the following command in your terminal:

```bash
python divide.py --input <input_jsonl_path> --short_out <output_short_jsonl_path> --long_out <output_long_jsonl_path>
```

- `--input`: Path to the input JSONL file (required).
- `--short_out`: Path to the output JSONL file for short entries (default: `short.jsonl`).
- `--long_out`: Path to the output JSONL file for long entries (default: `long.jsonl`).

An illustrative input line and the summary the script prints are shown below.
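For example, an input line might carry its text under a `function` field (one of the names the script looks for). The record below is made up, and the report is only a sketch of the format `divide.py` prints; actual numbers depend on your data.

```python
import json

# Illustrative input line for divide.py (hypothetical record, not taken from the dataset).
sample = {"function": "def add(a, b):\n    return a + b", "label": 1}
print(json.dumps(sample))
# {"function": "def add(a, b):\n    return a + b", "label": 1}

# A run then ends with a report along these lines (values depend on the input):
#   ==== Split Done ====
#   Detected text field: function
#   Threshold (median approx tokens): 42
#   Counts: short=503, long=497, long_ratio=49.70%
```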
## ratio.py

`ratio.py` builds datasets with specified positive-to-negative sample ratios from two JSONL files containing positive and negative samples. It randomly samples from the provided files to create each new dataset according to the configurations defined in the script.

### Usage

To use `ratio.py`, simply run the script:

```bash
python ratio.py
```

The script reads from `positive/positive.jsonl` and `negative/negative.jsonl` and creates one dataset per configuration defined in the script. The output files are named `dataset_{name}.jsonl`.

### Dataset Configurations

The following configurations are available in the script (the per-class counts they imply are worked out below):

- `1_1`: 2000 total samples with a 1:1 positive-to-negative ratio.
- `1_5`: 1200 total samples with a 1:5 positive-to-negative ratio.
- `5_1`: 1200 total samples with a 5:1 positive-to-negative ratio.
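The per-class counts follow from the same arithmetic `ratio.py` uses (`pos = int(total * pos_ratio / (pos_ratio + neg_ratio))`, `neg = total - pos`):

```python
# Derive per-class sample counts from a ratio configuration (same arithmetic as ratio.py).
def split_counts(total, pos_ratio, neg_ratio):
    pos = int(total * pos_ratio / (pos_ratio + neg_ratio))
    return pos, total - pos

print(split_counts(2000, 1, 1))  # (1000, 1000) -> dataset_1_1.jsonl
print(split_counts(1200, 1, 5))  # (200, 1000)  -> dataset_1_5.jsonl
print(split_counts(1200, 5, 1))  # (1000, 200)  -> dataset_5_1.jsonl
```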
## extract_members.py

`extract_members.py` splits a JSONL file into members and non-members based on the `label` field. It reads from `python_sample.jsonl`, where a `label` of `1` indicates a member and a `label` of `0` indicates a non-member, and writes two separate JSONL files: one for members and one for non-members.

### Usage

To use `extract_members.py`, run the following command in your terminal:

```bash
python extract_members.py
```

The script reads from `dataset/python_sample.jsonl` and creates the following output files:

- `dataset/member.jsonl`: all entries with `label` equal to `1`.
- `dataset/non-member.jsonl`: all entries with `label` equal to `0`.

### Output

After running the script, a message reports the number of extracted members and non-members. An illustrative record and the routing it receives are shown below.
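The routing depends only on the `label` field; the records below are made up (other fields in `python_sample.jsonl` may differ), but they show the rule the script applies:

```python
# Same label-based routing as extract_members.py, on two illustrative records.
records = [
    {"function": "def f():\n    pass", "label": 1},  # -> dataset/member.jsonl
    {"function": "def g():\n    pass", "label": 0},  # -> dataset/non-member.jsonl
]
members = [r for r in records if r.get("label") == 1]
non_members = [r for r in records if r.get("label") == 0]
print(f"Extracted {len(members)} members and {len(non_members)} non-members.")  # 1 and 1
```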
python_sample.jsonl
ADDED
The diff for this file is too large to render.
raw/Readme.md
ADDED
@@ -0,0 +1,19 @@
# Raw dataset and scripts

## Description

This folder contains the original files and scripts used in the dataset creation process. The dataset consists of two parts: positive and negative.

## positive

Extract the multi-part archive (positive_original.part01.rar through positive_original.part06.rar) to obtain positive_original.jsonl.

positive_original.jsonl was obtained by filtering Python-language data from The Pile's GitHub subset.

positive.jsonl is derived from positive_original.jsonl: starting from the first line, the first 10 lines of every 100-line block are kept. A total of 10,000 lines are processed, yielding 1,000 lines.

abstract_script.py is the script used to generate positive.jsonl from positive_original.jsonl; a small sanity check of the sampling rule follows.
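Equivalently, a 1-based line number n is kept exactly when `(n - 1) % 100 < 10`; a quick check of the rule:

```python
# Sampling rule behind positive.jsonl: keep the first 10 lines of every 100-line block,
# over the first 10,000 lines of positive_original.jsonl.
kept = [n for n in range(1, 10001) if (n - 1) % 100 < 10]
print(len(kept))   # 1000
print(kept[:12])   # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 101, 102]
```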
For the list of repositories behind The Pile's GitHub subset, refer to github_repositories.csv (from https://github.com/EleutherAI/github-downloader).

## negative

negative_raw.jsonl is collected from GitHub: Python repositories created on or after January 1, 2024 are searched and sorted by star count in descending order. Once 10 Python functions have been collected from a repository, the crawler moves on to the next one. A total of 100 repositories are covered, yielding 1,000 entries.

collect_script.py is the script used to collect negative_raw.jsonl.
raw/negative/collect_script.py
ADDED
@@ -0,0 +1,381 @@
import requests
import json
import ast
from datetime import datetime
import time
import os
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.util.retry import Retry

# Configuration Parameters
GITHUB_TOKEN = 'YOUR-GITHUB-TOKEN'

HEADERS = {
    "Authorization": f"token {GITHUB_TOKEN}",
    "Accept": "application/vnd.github.v3+json"
}

# Search for active Python repositories created after 2024
SEARCH_QUERY = "language:python created:>=2024-01-01"
OUTPUT_FILE = "negative_raw.jsonl"
STATE_FILE = "crawler_state.json"  # File to save crawling state
FUNCTIONS_PER_REPO = 10  # Maximum number of functions to collect per repository
MAX_RETRIES = 10  # Maximum number of retries for requests
RETRY_BACKOFF_FACTOR = 2  # Multiplier for retry waiting time
NETWORK_ERROR_SLEEP = 30  # Waiting time (seconds) after network errors

def create_session():
    """Create a requests session with retry mechanism"""
    session = requests.Session()

    # Set up retry strategy
    retry_strategy = Retry(
        total=MAX_RETRIES,
        backoff_factor=RETRY_BACKOFF_FACTOR,
        status_forcelist=[429, 500, 502, 503, 504, 408],
        allowed_methods=["HEAD", "GET", "OPTIONS"]
    )

    # Apply retry strategy to HTTP and HTTPS connections
    adapter = HTTPAdapter(max_retries=retry_strategy)
    session.mount("https://", adapter)
    session.mount("http://", adapter)

    return session

def save_state(state):
    """Save current crawling state to file"""
    with open(STATE_FILE, 'w') as f:
        json.dump(state, f)

def load_state():
    """Load crawling state from file"""
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE, 'r') as f:
            return json.load(f)
    return {
        'current_page': 1,
        'processed_repos': [],
        'processed_files': set(),
        'repo_function_counts': {}
    }

def parse_functions(source_code):
    """Parse Python code and return list of functions"""
    try:
        tree = ast.parse(source_code)
    except Exception as e:
        print(f"Failed to parse code: {str(e)}")
        return []

    functions = []
    source_lines = source_code.split('\n')

    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            if not hasattr(node, 'end_lineno'):
                continue  # Skip functions where end line number cannot be obtained

            start_line = node.lineno - 1
            end_line = node.end_lineno
            function_code = '\n'.join(source_lines[start_line:end_line])

            functions.append({
                'name': node.name,
                'code': function_code
            })

    return functions

def fetch_github_api(url, params=None, session=None):
    """Send GitHub API request and handle rate limits and network exceptions"""
    if not session:
        session = create_session()

    retries = 0

    while retries < MAX_RETRIES:
        try:
            response = session.get(url, headers=HEADERS, params=params, timeout=60, verify=True)

            if response.status_code == 403 and 'rate limit' in response.text.lower():
                reset_time = int(response.headers.get('X-RateLimit-Reset', 0))
                sleep_time = max(reset_time - time.time(), 0) + 5
                print(f"Rate limited, waiting {sleep_time:.1f} seconds")
                time.sleep(sleep_time)
                continue

            return response

        except requests.exceptions.SSLError as e:
            print(f"SSL Error: {str(e)}, attempting retry {retries+1}/{MAX_RETRIES}")
            retries += 1
            time.sleep(NETWORK_ERROR_SLEEP)  # Wait longer after network errors

        except requests.exceptions.RequestException as e:
            print(f"Request Exception: {str(e)}, attempting retry {retries+1}/{MAX_RETRIES}")
            retries += 1
            time.sleep(NETWORK_ERROR_SLEEP)  # Wait longer after network errors

    print(f"Reached maximum retry attempts, skipping URL: {url}")
    return None

def get_file_creation_date(repo_full_name, file_path, commit_sha, session=None):
    """Get file creation date (first commit date)"""
    # Get file commit history
    commits_url = f"https://api.github.com/repos/{repo_full_name}/commits?path={file_path}&per_page=100"
    commits_response = fetch_github_api(commits_url, session=session)

    if not commits_response or commits_response.status_code != 200:
        return None

    commits = commits_response.json()
    if not commits:
        return None

    # The last commit is the first commit of the file
    first_commit = commits[-1]
    return first_commit['commit']['committer']['date']

def main():
    # Load saved state
    state = load_state()
    page = state['current_page']
    processed_repos = state['processed_repos']
    processed_files = set(state.get('processed_files', []))
    repo_function_counts = state.get('repo_function_counts', {})

    session = create_session()

    with open(OUTPUT_FILE, 'a') as out_file:  # Open in append mode
        while True:
            # Search for eligible repositories
            search_url = f"https://api.github.com/search/repositories?q={SEARCH_QUERY}&per_page=100&page={page}"
            response = fetch_github_api(search_url, session=session)

            if not response or response.status_code != 200:
                print(f"Search failed, waiting {NETWORK_ERROR_SLEEP} seconds before retrying")
                time.sleep(NETWORK_ERROR_SLEEP)
                continue

            data = response.json()
            if not data.get('items'):
                print(f"No search results on page {page}, exiting")
                break

            print(f"Processing page {page}, total {len(data['items'])} repositories")

            for repo_item in data['items']:
                repo_full_name = repo_item['full_name']

                # Skip already processed repositories
                if repo_full_name in processed_repos:
                    print(f"Skipping processed repository: {repo_full_name}")
                    continue

                stars = repo_item['stargazers_count']
                print(f"\nProcessing repository: {repo_full_name} (Stars: {stars})")

                # Get or initialize repository function counter
                repo_function_count = repo_function_counts.get(repo_full_name, 0)

                # Get repository's default branch
                repo_url = f"https://api.github.com/repos/{repo_full_name}"
                repo_response = fetch_github_api(repo_url, session=session)
                if not repo_response or repo_response.status_code != 200:
                    print(f"Failed to get repository information, skipping {repo_full_name}")
                    continue
                default_branch = repo_response.json().get('default_branch', 'main')

                # Get list of Python files in the repository (recursive retrieval)
                contents_url = f"https://api.github.com/repos/{repo_full_name}/contents?ref={default_branch}"
                stack = [contents_url]

                try:
                    while stack:
                        # Check if function count limit is reached
                        if repo_function_count >= FUNCTIONS_PER_REPO:
                            print(f"Collected {repo_function_count} functions from {repo_full_name}, limit reached, moving to next repository")
                            break

                        current_url = stack.pop()
                        contents_response = fetch_github_api(current_url, session=session)

                        if not contents_response or contents_response.status_code != 200:
                            print(f"Failed to get directory contents, skipping {current_url}")
                            continue

                        items = contents_response.json()

                        for item in items:
                            # Check if function count limit is reached
                            if repo_function_count >= FUNCTIONS_PER_REPO:
                                print(f"Collected {repo_function_count} functions from {repo_full_name}, limit reached, moving to next repository")
                                break

                            if item['type'] == 'dir':
                                # Recursively process subdirectories
                                stack.append(item['url'])
                            elif item['name'].endswith('.py'):
                                file_path = item['path']
                                file_id = f"{repo_full_name}/{file_path}"

                                if file_id in processed_files:
                                    continue

                                # Get file content
                                download_url = item['download_url']
                                file_response = fetch_github_api(download_url, session=session)

                                if not file_response or file_response.status_code != 200:
                                    print(f"Failed to get file content, skipping {file_id}")
                                    continue

                                # Get SHA while retrieving file content (for querying commit history)
                                content_data = file_response.json() if 'json' in file_response.headers.get('Content-Type', '') else {}
                                sha = content_data.get('sha', '')

                                # Get file creation date
                                creation_date_str = get_file_creation_date(repo_full_name, file_path, sha, session=session)
                                if not creation_date_str:
                                    print(f"Failed to get file creation date, skipping {file_id}")
                                    continue

                                creation_date = datetime.strptime(creation_date_str, "%Y-%m-%dT%H:%M:%SZ")

                                # Strictly check file creation date
                                if creation_date < datetime(2024, 1, 1):
                                    print(f"File {file_id} was created on {creation_date_str}, which is before 2024-01-01, skipping")
                                    continue

                                # Parse functions in the file
                                source_code = file_response.text
                                functions = parse_functions(source_code)

                                if not functions:
                                    print(f"No functions found in file {file_id}")
                                    continue

                                print(f"Parsed {len(functions)} functions from file {file_id}")

                                # Write function records and update counter
                                for func in functions:
                                    if repo_function_count >= FUNCTIONS_PER_REPO:
                                        break

                                    record = {
                                        'function': func['code'],
                                        'creation_date': creation_date_str,
                                        'repo': repo_full_name,
                                        'file_path': file_path,
                                        'stars': stars,
                                        'label': 0
                                    }
                                    out_file.write(json.dumps(record) + '\n')
                                    repo_function_count += 1

                                    # Save state after every 10 functions processed
                                    if repo_function_count % 10 == 0:
                                        state = {
                                            'current_page': page,
                                            'processed_repos': processed_repos,
                                            'processed_files': list(processed_files),
                                            'repo_function_counts': repo_function_counts
                                        }
                                        save_state(state)
                                        print(f"State saved: Collected {repo_function_count} functions from repository {repo_full_name}")

                                # Mark file as processed
                                processed_files.add(file_id)

                        # Save state after processing each directory
                        state = {
                            'current_page': page,
                            'processed_repos': processed_repos,
                            'processed_files': list(processed_files),
                            'repo_function_counts': repo_function_counts
                        }
                        save_state(state)

                except KeyboardInterrupt:
                    print("User interruption detected, saving current state...")
                    state = {
                        'current_page': page,
                        'processed_repos': processed_repos,
                        'processed_files': list(processed_files),
                        'repo_function_counts': repo_function_counts
                    }
                    save_state(state)
                    print("State saved, program exiting. Next run will resume from the last interruption point.")
                    return

                except Exception as e:
                    print(f"Unexpected error occurred while processing repository {repo_full_name}: {str(e)}")
                    print("Saving current state and skipping this repository...")
                    state = {
                        'current_page': page,
                        'processed_repos': processed_repos,
                        'processed_files': list(processed_files),
                        'repo_function_counts': repo_function_counts
                    }
                    save_state(state)
                    continue

                # Update repository function count
                repo_function_counts[repo_full_name] = repo_function_count

                # Mark repository as processed
                processed_repos.append(repo_full_name)

                # Save state
                state = {
                    'current_page': page,
                    'processed_repos': processed_repos,
                    'processed_files': list(processed_files),
                    'repo_function_counts': repo_function_counts
                }
                save_state(state)

                print(f"Completed processing repository {repo_full_name}, collected {repo_function_count} functions in total")

            # Check if there is a next page
            if 'next' in response.links:
                page += 1
                # Save page number state
                state = {
                    'current_page': page,
                    'processed_repos': processed_repos,
                    'processed_files': list(processed_files),
                    'repo_function_counts': repo_function_counts
                }
                save_state(state)
                print(f"State saved: About to process page {page}")
            else:
                print("No next page, processing completed")
                break

if __name__ == "__main__":
    main()

'''
Usage Instructions

1. Run the script normally:
```bash
python collect_script.py
```

2. To start over from scratch, delete the state file:
```bash
rm crawler_state.json
```

3. After network recovery, simply re-run the script to resume from the interruption point

4. Adjust parameters:
```python
# Increase retry count or extend waiting time
MAX_RETRIES = 15
NETWORK_ERROR_SLEEP = 60
```
'''
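Each line of negative_raw.jsonl is one JSON record written by the loop above; its shape follows the `record` dict in the script, with the values here being illustrative only:

```python
import json

# Illustrative negative_raw.jsonl record; field names match the `record` dict in collect_script.py.
record = {
    "function": "def helper(x):\n    return x * 2",
    "creation_date": "2024-03-17T09:21:43Z",
    "repo": "someuser/somerepo",
    "file_path": "src/utils.py",
    "stars": 1234,
    "label": 0,
}
print(json.dumps(record))
```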
raw/negative/negative_raw.jsonl
ADDED
The diff for this file is too large to render.
raw/positive/abstract_script.py
ADDED
@@ -0,0 +1,40 @@
import json

def process_jsonl():
    """
    Process the jsonl file: starting from line 1, extract the first 10 lines from every 100-line block,
    process a total of 10,000 lines, and extract 1,000 lines in total
    """
    # Define input and output file paths
    input_file = 'positive_original.jsonl'
    output_file = 'positive.jsonl'

    with open(input_file, 'r', encoding='utf-8') as infile, \
            open(output_file, 'w', encoding='utf-8') as outfile:

        # Initialize counters
        line_count = 0
        extracted_count = 0

        # Read the first 10,000 lines
        while line_count < 10000:
            line = infile.readline()
            if not line:  # End of file
                break

            line_count += 1
            # Calculate which 100-line block the current line belongs to
            block_number = (line_count - 1) // 100
            # Calculate the position of the current line within its block
            position_in_block = (line_count - 1) % 100

            # If the line is among the first 10 lines in the block, write it to the output file
            if position_in_block < 10:
                outfile.write(line)
                extracted_count += 1

    print(f"Processing completed! A total of {line_count} lines were read, and {extracted_count} lines were extracted.")
    print(f"Extracted data has been saved to {output_file}")

if __name__ == "__main__":
    process_jsonl()
raw/positive/github_repositories.csv
ADDED
The diff for this file is too large to render.
raw/positive/positive_original.part01.rar
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f1469ec26d9eaaf0b92447ca021ff4f98edb9ce805230a2e0fd74e12ab5ddbaa
size 47185920
raw/positive/positive_original.part02.rar
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fe4e12678c2d7ec16f31170199a3b69939dcbf111d5647266d1613e9921e03df
size 47185920
raw/positive/positive_original.part03.rar
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cdcb001ff4f6ab681520c2dd21342d052f9a1c2d80489783adc1e8ee3a5e389a
size 47185920
raw/positive/positive_original.part04.rar
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:baa14b7d82d1c71b9c6118843d53eb22abe8b4de074762de60717eee1132be07
size 47185920
raw/positive/positive_original.part05.rar
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2473bb1f788196c620ac9ed1e49948e6046222f920699108bcba1ece070652d6
size 47185920
raw/positive/positive_original.part06.rar
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fd5b83429cc003e82f994f3ab23c7c362acd05d0defdc00269bdd2b86f6fafd5
size 9153456
script/divide.py
ADDED
@@ -0,0 +1,77 @@
import json, re, argparse, statistics

def approx_token_count(text: str) -> int:
    if not text:
        return 0
    return len(re.findall(r"\w+|\S", text, flags=re.UNICODE))

def detect_text_field(obj):
    for k in ["text", "function", "code", "content", "body"]:
        if k in obj and isinstance(obj[k], str):
            return k
    str_fields = [(k, len(v)) for k, v in obj.items() if isinstance(v, str)]
    if str_fields:
        return max(str_fields, key=lambda x: x[1])[0]
    return None

def main():
    ap = argparse.ArgumentParser()
    ap.add_argument("--input", required=True, help="input JSONL path")
    ap.add_argument("--short_out", default="short.jsonl", help="output short JSONL")
    ap.add_argument("--long_out", default="long.jsonl", help="output long JSONL")
    args = ap.parse_args()

    # First pass: detect field and collect lengths
    detected_field = None
    lengths = []
    total = decode_errors = 0

    with open(args.input, "r", encoding="utf-8") as f:
        for line in f:
            total += 1
            line = line.strip()
            if not line:
                continue
            try:
                obj = json.loads(line)
            except json.JSONDecodeError:
                decode_errors += 1
                continue
            if detected_field is None:
                detected_field = detect_text_field(obj) or "text"
            lengths.append(approx_token_count(obj.get(detected_field, "")))

    if not lengths:
        raise SystemExit("No valid samples parsed from input.")

    # Use median as threshold
    threshold = int(statistics.median(lengths))

    # Second pass: split
    short_cnt = long_cnt = 0
    with open(args.input, "r", encoding="utf-8") as f_in, \
         open(args.short_out, "w", encoding="utf-8") as f_s, \
         open(args.long_out, "w", encoding="utf-8") as f_l:
        for line in f_in:
            line = line.strip()
            if not line:
                continue
            try:
                obj = json.loads(line)
            except json.JSONDecodeError:
                # Skip lines that already failed to parse in the first pass
                continue
            L = approx_token_count(obj.get(detected_field, ""))
            if L <= threshold:
                f_s.write(json.dumps(obj, ensure_ascii=False) + "\n")
                short_cnt += 1
            else:
                f_l.write(json.dumps(obj, ensure_ascii=False) + "\n")
                long_cnt += 1

    # Report
    total_valid = short_cnt + long_cnt
    print("==== Split Done ====")
    print(f"Input: {args.input}")
    print(f"Detected text field: {detected_field}")
    print(f"Threshold (median approx tokens): {threshold}")
    print(f"Counts: short={short_cnt}, long={long_cnt}, long_ratio={(long_cnt/total_valid if total_valid else 0):.2%}")

if __name__ == "__main__":
    main()
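Note that `approx_token_count` is a heuristic: it counts runs of word characters plus individual non-space symbols, not real tokenizer tokens. A minimal check of what it returns:

```python
import re

def approx_token_count(text: str) -> int:
    # Same heuristic as divide.py: word runs and single non-space symbols.
    return len(re.findall(r"\w+|\S", text, flags=re.UNICODE)) if text else 0

print(approx_token_count("def add(a, b):"))  # 8  ("def", "add", "(", "a", ",", "b", ")", ":")
```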
script/extract_members.py
ADDED
@@ -0,0 +1,30 @@
import json

def extract_members(input_path, member_output_path, non_member_output_path):
    with open(input_path, 'r', encoding='utf-8') as f:
        members = []
        non_members = []

        for line in f:
            if line.strip():  # Skip empty lines
                obj = json.loads(line)
                # Check the label to determine membership
                if obj.get('label') == 1:
                    members.append(obj)
                elif obj.get('label') == 0:
                    non_members.append(obj)

    # Write members to member.jsonl
    with open(member_output_path, 'w', encoding='utf-8') as f:
        for member in members:
            f.write(json.dumps(member, ensure_ascii=False) + "\n")

    # Write non-members to non-member.jsonl
    with open(non_member_output_path, 'w', encoding='utf-8') as f:
        for non_member in non_members:
            f.write(json.dumps(non_member, ensure_ascii=False) + "\n")

    print(f'Extracted {len(members)} members and {len(non_members)} non-members.')

if __name__ == "__main__":
    extract_members('dataset/python_sample.jsonl', 'dataset/member.jsonl', 'dataset/non-member.jsonl')
script/ratio.py
ADDED
@@ -0,0 +1,43 @@
import json
import random
import os

positive_path = 'positive/positive.jsonl'
negative_path = 'negative/negative.jsonl'

def read_jsonl(path):
    with open(path, 'r', encoding='utf-8') as f:
        return [line for line in f if line.strip()]

positives = read_jsonl(positive_path)
negatives = read_jsonl(negative_path)

datasets_config = {
    '1_1': {'total': 2000, 'pos_ratio': 1, 'neg_ratio': 1},
    '1_5': {'total': 1200, 'pos_ratio': 1, 'neg_ratio': 5},
    '5_1': {'total': 1200, 'pos_ratio': 5, 'neg_ratio': 1}
}

for name, config in datasets_config.items():
    total = config['total']
    pos_ratio = config['pos_ratio']
    neg_ratio = config['neg_ratio']

    pos_count = int(total * pos_ratio / (pos_ratio + neg_ratio))
    neg_count = total - pos_count

    pos_count = min(pos_count, len(positives))
    neg_count = min(neg_count, len(negatives))

    pos_samples = random.sample(positives, pos_count)
    neg_samples = random.sample(negatives, neg_count)

    dataset = pos_samples + neg_samples
    random.shuffle(dataset)

    out_path = f'dataset_{name}.jsonl'
    with open(out_path, 'w', encoding='utf-8') as f:
        for line in dataset:
            f.write(line if line.endswith('\n') else line + '\n')

    print(f'{out_path} created: {pos_count} positive, {neg_count} negative samples (total: {len(dataset)})')