Upload README.md with huggingface_hub

README.md CHANGED

@@ -35,29 +35,29 @@ This dataset captures the evolution of GitHub's trending repositories over time,
 - ⭐ **89.8%** scraping success rate from Wayback Machine
 - 🏆 **Pre-processed monthly rankings** with weighted scoring
 
-## 🔧 Dataset
+## 🔧 Dataset Structure
 
-This dataset
+This dataset is organized into **two subdirectories**:
 
-###
-
+### `full/` - Complete Daily Data
+423,098 trending entries (2013-2025)
 
 ```python
 from datasets import load_dataset
-ds = load_dataset('ronantakizawa/github-top-projects', 'full')
+ds = load_dataset('ronantakizawa/github-top-projects', data_dir='full')
 ```
 
-###
-
+### `monthly/` - Monthly Top 25
+3,200 entries (top 25 per month with weighted scoring)
 
 ```python
 from datasets import load_dataset
-ds = load_dataset('ronantakizawa/github-top-projects', 'monthly')
+ds = load_dataset('ronantakizawa/github-top-projects', data_dir='monthly')
 ```
 
 ## 📁 Dataset Files
 
-###
+### `full/data.csv` (19 MB)
 **Complete daily trending data** - All 423,098 entries
 
 | Column | Type | Description |

@@ -76,7 +76,7 @@ ds = load_dataset('ronantakizawa/github-top-projects', 'monthly')
 - Creating custom aggregations (weekly, yearly, etc.)
 - Studying viral repository behavior
 
-###
+### `monthly/data.csv` (211 KB)
 **Monthly top 25 repositories** - Pre-processed with weighted scoring
 
 | Column | Type | Description |

@@ -180,11 +180,11 @@ This rewards both **consistency** (frequent appearances) and **position** (highe
 from datasets import load_dataset
 
 # Load complete daily dataset (423,098 entries)
-ds_full = load_dataset('ronantakizawa/github-top-projects', 'full')
+ds_full = load_dataset('ronantakizawa/github-top-projects', data_dir='full')
 df_full = ds_full['train'].to_pandas()
 
 # Load monthly top 25 dataset (3,200 entries)
-ds_monthly = load_dataset('ronantakizawa/github-top-projects', 'monthly')
+ds_monthly = load_dataset('ronantakizawa/github-top-projects', data_dir='monthly')
 df_monthly = ds_monthly['train'].to_pandas()
 
 # Filter to 2020+ (with star data)

@@ -201,8 +201,8 @@ print(nov_2025[['rank', 'repository', 'star_count', 'ranking_appearances']])
 import pandas as pd
 
 # Download files from the dataset page, then:
-df_full = pd.read_csv('
-df_monthly = pd.read_csv('
+df_full = pd.read_csv('full/data.csv')
+df_monthly = pd.read_csv('monthly/data.csv')
 
 # Get top trending projects of 2024
 df_2024 = df_full[df_full['date'].str.startswith('2024')]