---
license: mit
task_categories:
- other
tags:
- github
- repositories
- backup
- metadata
language:
- en
- pt
size_categories:
- n<1K
pretty_name: GitHub Repositories Backup - senal88
---
# GitHub Repositories Backup - senal88
> **Last Updated:** 2026-01-05 08:45:08
## πŸ“Š Dataset Overview
This dataset contains metadata about all GitHub repositories owned by [@senal88](https://github.com/senal88).
### Statistics
- **Total Repositories:** 682
- **Private Repositories:** 208
- **Public Repositories:** 474
- **Forks:** 448
- **Total Stars:** 91
- **Total Size:** ~48.91 GB
## πŸ“ Files
### `repos.csv`
Tabular format with all repository metadata:
- Name, URL, Type (Private/Public)
- Creation date, Last update, Last push
- Size, Stars, Forks
- Primary language, Description
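The columns above can be loaded and summarized with pandas. A minimal sketch using an in-memory sample is shown below; the column names are assumptions based on this description, so check the actual CSV header before relying on them.

```python
import io
import pandas as pd

# Sample rows mimicking the described repos.csv layout.
# Column names and values are illustrative assumptions.
csv_text = """name,url,type,created_at,updated_at,pushed_at,size_kb,stars,forks,language,description
repo-a,https://github.com/senal88/repo-a,public,2023-01-01,2024-05-01,2024-05-01,1024,3,0,Python,Example repo
repo-b,https://github.com/senal88/repo-b,private,2022-06-15,2024-04-20,2024-04-20,2048,0,1,Go,Another repo
"""

df = pd.read_csv(io.StringIO(csv_text))

# Recompute overview-style statistics from the table.
total_repos = len(df)
public_repos = int((df["type"] == "public").sum())
total_size_gb = df["size_kb"].sum() / 1024**2  # KB -> GB

print(total_repos, public_repos, total_size_gb)
```

The same aggregations apply unchanged once `df` is read from the real `repos.csv`.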
### `repos_organized.json`
Structured JSON with:
- Metadata and statistics
- Repositories grouped by update date
- Repositories grouped by creation date
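The date-based grouping can be reproduced from flat records. The sketch below shows the idea with illustrative data; the exact top-level keys of `repos_organized.json` are not documented here, so treat the group names as assumptions.

```python
from collections import defaultdict

# Illustrative flat records (field names follow the schema below).
repos = [
    {"name": "repo-a", "updated_at": "2024-05-01", "created_at": "2023-01-01"},
    {"name": "repo-b", "updated_at": "2024-05-01", "created_at": "2022-06-15"},
    {"name": "repo-c", "updated_at": "2024-04-20", "created_at": "2023-01-01"},
]

def group_by(records, key):
    """Map each distinct value of `key` to the repo names sharing it."""
    grouped = defaultdict(list)
    for record in records:
        grouped[record[key]].append(record["name"])
    return dict(grouped)

by_updated = group_by(repos, "updated_at")
by_created = group_by(repos, "created_at")

print(by_updated["2024-05-01"])  # ['repo-a', 'repo-b']
```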
### `repos.md`
A Markdown table for easy viewing.
## πŸ”„ Update Frequency
This dataset is updated daily by an automated backup job.
## πŸ“ Schema
```json
{
"name": "string",
"url": "string",
"type": "private|public",
"is_fork": "boolean",
"created_at": "date",
"updated_at": "date",
"pushed_at": "date",
"size_kb": "integer",
"stars": "integer",
"forks": "integer",
"language": "string",
"description": "string"
}
```
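A record can be checked against this schema with a few lines of plain Python. The sketch below is a minimal validator, not part of the dataset's tooling; the sample record values are illustrative.

```python
# Expected Python type per schema field. Dates are kept as strings
# here since the schema only labels them "date" (an assumption).
SCHEMA = {
    "name": str, "url": str, "type": str, "is_fork": bool,
    "created_at": str, "updated_at": str, "pushed_at": str,
    "size_kb": int, "stars": int, "forks": int,
    "language": str, "description": str,
}

def validate(record: dict) -> list:
    """Return a list of field-level problems (empty if valid)."""
    errors = []
    for field, expected in SCHEMA.items():
        if field not in record:
            errors.append(f"missing: {field}")
        elif not isinstance(record[field], expected):
            errors.append(f"wrong type: {field}")
    if record.get("type") not in ("private", "public"):
        errors.append("type must be 'private' or 'public'")
    return errors

record = {
    "name": "repo-a", "url": "https://github.com/senal88/repo-a",
    "type": "public", "is_fork": False,
    "created_at": "2023-01-01", "updated_at": "2024-05-01",
    "pushed_at": "2024-05-01", "size_kb": 1024, "stars": 3,
    "forks": 0, "language": "Python", "description": "Example repo",
}

print(validate(record))  # []
```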
## πŸ“š Usage
```python
from datasets import load_dataset

# Load as a Hugging Face dataset
ds = load_dataset("senal88/github-repos-backup")

# Or load the CSV directly into pandas
# (the hf:// path requires huggingface_hub to be installed)
import pandas as pd

df = pd.read_csv("hf://datasets/senal88/github-repos-backup/repos.csv")
```
## πŸ”— Links
- [GitHub Profile](https://github.com/senal88)
- [Backup System Documentation](https://github.com/senal88/prompts-ssot)
## πŸ“„ License
MIT License - This dataset contains public repository metadata.