GitHub Repo Embeddings (Dataset)
This dataset contains:
- GitHub repository embeddings learned from star co-occurrence.
- Raw data for training such embeddings (2016–2025).
It is generated by the same pipeline as this repo and is intended for offline analysis, research, and downstream search/indexing.
See the Demo, which uses the trained embeddings.
Summary
- Source: GitHub Archive (BigQuery) WatchEvent + repo metadata.
- Signal: repositories starred together by the same user.
- Model: torch.nn.EmbeddingBag trained with MultiSimilarityLoss.
- Embedding size: 128 dims.
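The model summary above can be sketched as a minimal PyTorch module: an EmbeddingBag that maps repo ids to 128-dim vectors and averages them per user. The vocabulary size, the sample ids, and the batching below are made up for illustration; the actual training loop with MultiSimilarityLoss is not shown.

```python
import torch

# Hypothetical vocabulary size; the real one is the number of repos in the dataset.
NUM_REPOS, DIM = 1000, 128
model = torch.nn.EmbeddingBag(NUM_REPOS, DIM, mode="mean")

# EmbeddingBag's input format: all users' repo ids concatenated into one flat
# tensor, plus offsets marking where each user's list begins.
repo_ids = torch.tensor([3, 42, 7, 42, 99])  # two users' starred repos, concatenated
offsets = torch.tensor([0, 2])               # user 0 -> ids[0:2], user 1 -> ids[2:5]

user_vecs = model(repo_ids, offsets)
print(user_vecs.shape)  # torch.Size([2, 128]) — one pooled vector per user
```

Pooling over a variable-length list of starred repos is exactly what EmbeddingBag is built for, which is presumably why it is used here instead of a plain Embedding plus a manual mean.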
Files
starred_repos.parquet
User-level training data.
- repo_ids: list[int], repo ids starred by a user (order preserved from events).
repos_meta.parquet
Repository metadata aligned with the training data.
- repo_id: int
- repo_name: str (owner/name)
- stars: int, frequency of stars in this dataset
- created_at: datetime, repo creation date (first push event)
- last_updated: datetime, last push event
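A typical pre-processing step on this metadata is filtering by star count before indexing. The sketch below builds a small synthetic frame with the same columns (repo names, counts, and dates are invented) and applies such a filter:

```python
import pandas as pd

# Synthetic rows matching the repos_meta.parquet schema; values are made up.
meta = pd.DataFrame({
    "repo_id": [0, 1, 2],
    "repo_name": ["octocat/hello-world", "torvalds/linux", "pallets/flask"],
    "stars": [12, 900, 350],
    "created_at": pd.to_datetime(["2016-02-01", "2016-01-03", "2017-06-20"]),
    "last_updated": pd.to_datetime(["2024-11-02", "2025-01-15", "2024-12-30"]),
})

# Keep repos above an (arbitrary) star threshold, most-starred first.
popular = meta[meta["stars"] >= 100].sort_values("stars", ascending=False)
print(popular["repo_name"].tolist())  # ['torvalds/linux', 'pallets/flask']
```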
repo_embeddings_with_meta.parquet
Repository metadata + learned embeddings aligned by repo_id.
- Includes columns from repos_meta.parquet
- embedding: list[float], 128-dim vector
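For downstream search, the embedding vectors can be ranked by cosine similarity. The sketch below uses toy 4-dim vectors in place of the real 128-dim ones; the repo names and values are illustrative only:

```python
import numpy as np

# Toy stand-ins for rows of repo_embeddings_with_meta.parquet.
names = ["pytorch/pytorch", "tensorflow/tensorflow", "pallets/flask"]
emb = np.array([
    [1.0, 0.9, 0.0, 0.0],
    [0.9, 1.0, 0.1, 0.0],
    [0.0, 0.1, 1.0, 0.9],
])

def nearest(query_idx, emb, k=1):
    # Cosine similarity: L2-normalize rows, then a dot product ranks neighbors.
    normed = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sims = normed @ normed[query_idx]
    sims[query_idx] = -np.inf  # exclude the query itself
    return np.argsort(-sims)[:k]

print(names[nearest(0, emb)[0]])  # nearest neighbor of pytorch/pytorch
```

For the full dataset, a vector index (e.g. FAISS) would replace the brute-force dot product, but the ranking principle is the same.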
Notes
- The dataset is derived from public GitHub Archive data and is intended for research and demo purposes.