Update README and stats (pipeline complete)

README.md
# OpenAlex: Complete Academic Research Database

The world's scholarly research catalog, converted to analysis-ready Parquet. 114.1M records across 7 entity types.

## Table of Contents
[OpenAlex](https://openalex.org) is a free, open catalog of the global research system: papers, authors, institutions, journals, topics, publishers, and funders. It's maintained by [OurResearch](https://ourresearch.org/) as the open replacement for the discontinued Microsoft Academic Graph. The index currently covers over 250 million scholarly works with full citation networks, authorship chains, institutional affiliations, and topic classifications.

This dataset is a straight conversion of the [OpenAlex snapshot](https://docs.openalex.org/download-all-data/openalex-snapshot) (2026-04) from gzipped JSON Lines into sharded, ZSTD-compressed Parquet. **114.1M total records** split into files of up to 1 million rows each. You can query it directly with DuckDB (no download needed), stream it with the `datasets` library, or pull specific entities with `huggingface_hub`.

### File layout
| **Sources** | 280.7K | Journals, repositories, conferences, and ebook platforms with ISSN, DOAJ status, and APC pricing |
| **Institutions** | 121.5K | Universities, research centers, companies, and government bodies with ROR IDs and geolocation |
| **Authors** | 113.6M | Researchers with ORCID IDs, h-index, affiliations, and publication statistics |
| **Works** | n/a | Scholarly works (articles, books, datasets) with citations, DOIs, topics, authorships, and open access status |

## How entities connect
| `is_oa` | bool |
| `oa_status` | string |
| `oa_url` | string |
| `best_oa_location` | string |
| `locations` | string |
| `authorships` | string |
| `biblio_volume` | string |
| `biblio_issue` | string |
| `biblio_first_page` | string |
| `biblio_last_page` | string |
| `primary_topic` | string |
| `topics` | string |
| `keywords` | string |
| `referenced_works` | string |
| `related_works` | string |
| `abstract_inverted_index` | string |
| `ids` | string |
| `counts_by_year` | string |
| `sustainable_development_goals` | string |
| `indexed_in` | string |
| `created_date` | string |
| `updated_date` | string |
## Working with abstracts

OpenAlex stores abstracts as inverted indices: each word maps to an array of positions where it appears. This is compact, but you need to reconstruct the text before using it.

### Python

```python
import json

def reconstruct_abstract(inverted_index_json):
    if not inverted_index_json:
        return None
    idx = json.loads(inverted_index_json)
    words = []
    for word, positions in idx.items():
        for pos in positions:
            words.append((pos, word))
    words.sort()
    return " ".join(w for _, w in words)

# With DuckDB
import duckdb

conn = duckdb.connect()
df = conn.sql("""
    SELECT id, title, abstract_inverted_index
    FROM read_parquet('hf://datasets/open-index/open-alex/data/works/*.parquet')
    WHERE abstract_inverted_index IS NOT NULL
    LIMIT 5
""").df()

df["abstract"] = df["abstract_inverted_index"].apply(reconstruct_abstract)
```
### SQL: how many works have abstracts?

```sql
SELECT publication_year,
       COUNT(*) AS total,
       SUM(CASE WHEN abstract_inverted_index IS NOT NULL THEN 1 ELSE 0 END) AS with_abstract,
       ROUND(100.0 * SUM(CASE WHEN abstract_inverted_index IS NOT NULL THEN 1 ELSE 0 END) / COUNT(*), 1) AS pct
FROM 'hf://datasets/open-index/open-alex/data/works/*.parquet'
WHERE publication_year BETWEEN 2000 AND 2025
GROUP BY publication_year
ORDER BY publication_year;
```
## Pipeline details

This conversion is done in Go using [parquet-go](https://github.com/parquet-go/parquet-go) for writing and [gjson](https://github.com/tidwall/gjson) for zero-allocation JSON field extraction.

1. **Fetch manifests.** Each entity type has a Redshift-compatible manifest on S3 listing every data file with its URL, size, and record count.
2. **Download parts.** Each manifest entry points to a gzip-compressed JSON Lines file (typically under 2 GB). Downloads support HTTP Range resume.
3. **Stream and convert.** Each `.gz` file is streamed through a decompressor and parsed line by line. Fields are extracted and written to Parquet shards (1M rows per file, 500K rows per row group, ZSTD compression). Nested structures like authorships and locations stay as raw JSON strings.
4. **Upload incrementally.** Completed Parquet shards are committed to this repo as they're produced, not saved up until the end. For large entities like works (~595 GB compressed), this is essential since the whole dataset won't fit on disk at once. Parts are deleted right after conversion, and shards are deleted right after upload.
5. **Track quality.** Every 1000th row is sampled during conversion to measure field population rates. That's where the data completeness tables above come from.
## Things to know

**Nested fields are JSON strings.** Authorships, locations, topics, counts_by_year, and IDs are all stored as JSON-encoded strings, not native Parquet structs. This keeps the schema flat and makes it work everywhere, but you'll need `json_extract()` (DuckDB) or `json.loads()` (Python) to parse them.
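As a sketch of what that parsing looks like in Python (the sample value and the `author_names` helper below are illustrative, following the OpenAlex authorship shape):

```python
import json

def author_names(authorships_str):
    """Parse a JSON-encoded authorships column value into display names."""
    if not authorships_str:
        return []
    return [a["author"]["display_name"] for a in json.loads(authorships_str)]

# Illustrative column value: a JSON string, not a native Parquet struct.
sample = json.dumps([
    {"author_position": "first",
     "author": {"id": "https://openalex.org/A1", "display_name": "Ada Lovelace"}},
    {"author_position": "last",
     "author": {"id": "https://openalex.org/A2", "display_name": "Alan Turing"}},
])

print(author_names(sample))  # ['Ada Lovelace', 'Alan Turing']
```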
**Abstracts need reconstruction.** The `abstract_inverted_index` field stores word-to-position mappings, not plain text. See [Working with abstracts](#working-with-abstracts) for reconstruction code.

**IDs are full URLs.** OpenAlex IDs look like `https://openalex.org/W2741809807`. If you need short IDs, grab everything after the last `/`.
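That conversion is a one-liner; `short_id` is just an illustrative helper name:

```python
def short_id(openalex_id_url):
    """Return the part after the last '/' of a full OpenAlex ID URL."""
    return openalex_id_url.rsplit("/", 1)[-1]

print(short_id("https://openalex.org/W2741809807"))  # W2741809807
```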
**Citation counts are from the snapshot date.** The `cited_by_count` reflects the 2026-04 snapshot, not live counts.

**The concepts entity is not included.** OpenAlex deprecated concepts in favor of topics. This dataset has topics but not the legacy concepts.

**Some fields only exist in the API.** Things like `content_urls` on works are only available through the OpenAlex REST API, not in the bulk snapshot.
## Dataset statistics

You can query the per-entity statistics directly from the `stats.csv` file included in the dataset:

```sql
SELECT * FROM read_csv_auto('hf://datasets/open-index/open-alex/stats.csv')
ORDER BY records DESC;
```

The `stats.csv` file tracks each entity type with the following columns:

| Column | Description |
|--------|-------------|
| `entity_type` | Entity type identifier (works, authors, etc.) |
| `display_name` | Human-readable entity name |
| `records` | Number of records for this entity |
| `shards` | Number of Parquet shards |
| `total_bytes` | Total Parquet file size in bytes |
| `dur_s` | Processing duration in seconds |
| `committed_at` | ISO 8601 timestamp of when this entity was committed |
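If you'd rather not pull in DuckDB, a file with these columns also parses with the standard library; the row below is made up for illustration, not real dataset statistics:

```python
import csv
import io

# A made-up stats.csv fragment using the columns described above.
sample = """entity_type,display_name,records,shards,total_bytes,dur_s,committed_at
authors,Authors,113600000,114,9876543210,3600,2026-04-20T12:00:00Z
"""

rows = list(csv.DictReader(io.StringIO(sample)))
print(rows[0]["entity_type"])   # authors
print(int(rows[0]["shards"]))   # 114
```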
## Attribution

This dataset comes from [OpenAlex](https://openalex.org), a free and open catalog of the world's scholarly research. OpenAlex is maintained by [OurResearch](https://ourresearch.org/) and released under [CC0 1.0](https://creativecommons.org/publicdomain/zero/1.0/) (public domain).

If you use this data in published research, please cite:

> Priem, J., Piwowar, H., & Orr, R. (2022). OpenAlex: A fully-open index of scholarly works, authors, venues, institutions, and concepts. *ArXiv*. https://arxiv.org/abs/2205.01833

More on citing OpenAlex: [https://docs.openalex.org/how-to-use-the-api/get-started](https://docs.openalex.org/how-to-use-the-api/get-started)

### License

CC0 1.0 Universal (Public Domain), same as the underlying OpenAlex data. Use it however you want.

### Questions?

For issues with this Parquet conversion, open a discussion on the [Community tab](https://huggingface.co/datasets/open-index/open-alex/discussions). For questions about OpenAlex itself, see the [OpenAlex docs](https://docs.openalex.org/) or email [support@openalex.org](mailto:support@openalex.org).

*Snapshot: 2026-04*