---
language:
- en
license: cc0-1.0
size_categories:
- 100M<n<1B
task_categories:
- text-retrieval
task_ids:
- document-retrieval
---
# abstracts-embeddings
This dataset contains embeddings of the titles and abstracts of 110 million academic publications taken from the [OpenAlex](https://openalex.org) dataset as of January 1, 2025. The embeddings are generated with a Unix pipeline that chains together the AWS CLI, gzip, `oa_jsonl` (a C parser tailored to the JSON Lines structure of the OpenAlex snapshot), and a Python embedding script. The source code of `oa_jsonl` and the Makefile that sets up the pipeline are available on [GitHub](https://github.com/colonelwatch/abstracts-search), but the general process is as follows:
1. Decode the JSON entry of an individual work
2. From the language field, determine whether the abstract is in English; if not, go back to step 1
3. From the abstract inverted index field, reconstruct the text of the abstract
4. If there is a title field, construct a single document in the format `title + ' ' + abstract`; otherwise, use just the abstract
5. Compute an embedding with the [stella_en_1.5B_v5](https://huggingface.co/NovaSearch/stella_en_1.5B_v5) model (bfloat16 precision)
6. Write it to a local SQLite3 database
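The filtering, reconstruction, and document-construction steps (2 through 4 above) can be sketched in Python. Note this is an illustration of the logic, not the actual `oa_jsonl` code (which is written in C); the helper names and the sample record are made up for the example:

```python
def reconstruct_abstract(inverted_index: dict) -> str:
    """Rebuild abstract text from OpenAlex's inverted-index format,
    which maps each word to the list of positions where it occurs."""
    positions = []
    for word, indices in inverted_index.items():
        for i in indices:
            positions.append((i, word))
    positions.sort()  # restore original word order
    return " ".join(word for _, word in positions)


def build_document(work: dict):
    """Return 'title + abstract' (or just the abstract) for English works,
    or None if the work should be skipped."""
    if work.get("language") != "en":
        return None  # step 2: skip non-English works
    index = work.get("abstract_inverted_index")
    if index is None:
        return None  # no abstract to embed
    abstract = reconstruct_abstract(index)  # step 3
    title = work.get("title")
    return f"{title} {abstract}" if title else abstract  # step 4


# illustrative record in the shape of an OpenAlex work entry
work = {
    "id": "https://openalex.org/W0000000000",
    "language": "en",
    "title": "An Example",
    "abstract_inverted_index": {"embeddings": [2], "about": [1], "A": [0], "paper": [3]},
}
print(build_document(work))  # -> "An Example A about embeddings paper"
```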
The database is then exported in parquet format as pairs of OpenAlex IDs and length-1024 float32 vectors. Because the model was run with bfloat16 precision, yielding bfloat16 vectors, the conversion to float32 leaves the lower two bytes of every element all-zero. Byte-stream compression exploits this, storing the vectors in a parquet at full precision with no wasted space. It does mean, however, that opening the parquets with the Hugging Face `datasets` library will lead to the cache using twice the space.
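The zero lower bytes can be verified (or the two-byte bfloat16 payload recovered) with a NumPy bit-pattern view. This is a sketch, not code from the repository: the vector here is synthetic, and bfloat16 precision is simulated by masking off the low 16 bits (plain truncation rather than round-to-nearest):

```python
import numpy as np

# simulate a bfloat16-precision vector stored as float32 by zeroing
# the low 16 bits of each float32 bit pattern
rng = np.random.default_rng(0)
vec = rng.standard_normal(1024).astype(np.float32)
bits = vec.view(np.uint32)   # view, so the mask below also edits vec
bits &= 0xFFFF0000           # truncate mantissa to bfloat16 precision

# property of every vector in this dataset: the low 2 bytes are zero,
# which is what lets byte-stream compression discard half the bytes
low_bytes = vec.view(np.uint32) & 0x0000FFFF
assert not low_bytes.any()

# the upper 2 bytes alone are a valid bfloat16 representation
bf16_payload = (vec.view(np.uint32) >> 16).astype(np.uint16)
print(bf16_payload.nbytes)   # 2048 bytes instead of 4096
```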
Each parquet file was made with up to 2097152 (2 * 1024 * 1024, i.e. 2^21) works because that is the largest power of two for which the file size stays under 4GB (the limit on FAT32 filesystems). The parquet files were also made with a row group size of 65536.
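The sizing arithmetic can be checked back-of-the-envelope: at two effective bytes per element (the bfloat16 payload that survives compression), a full shard's embedding payload lands right at the 4 GiB boundary, and the on-disk file comes in somewhat smaller once the surviving bytes are compressed further. The figures below are my own arithmetic, not code from the repository:

```python
works_per_shard = 2 * 1024 * 1024  # 2097152, i.e. 2**21
dims = 1024                        # embedding length
effective_bytes = 2                # bfloat16 payload per float32 element

vector_bytes = works_per_shard * dims * effective_bytes
print(vector_bytes)                # 4294967296, i.e. 4 GiB of raw payload

row_group_size = 65536
print(works_per_shard // row_group_size)  # 32 row groups in a full shard
```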
Though the OpenAlex dataset records 240 million works, not all of them have abstracts or are in English. Works without abstracts are necessarily skipped, and because the stella_en_1.5B_v5 model was trained only on English texts, non-English works are filtered out as well.