---
language:
  - en
license: cc0-1.0
size_categories:
  - 10M<n<100M
task_categories:
  - text-retrieval
task_ids:
  - document-retrieval
---

abstracts-embeddings

This dataset contains embeddings of the titles and abstracts of 95 million academic publications from the OpenAlex dataset as of May 5, 2023. The script that generated the embeddings is available on GitHub, but the general process is as follows:

  1. Reconstruct the text of the abstract from the inverted index format
  2. Construct a single document string in the format title + ' ' + abstract or just abstract if there is no title
  3. Determine if the document string is in English using fastText
  4. If it is in English, compute an embedding using the all-MiniLM-L6-v2 model provided by sentence-transformers
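The first two steps can be sketched in plain Python. `reconstruct_abstract` and `build_document` are hypothetical helper names, and the inverted index is assumed to be a dict mapping each token to its list of word positions (the shape of OpenAlex's `abstract_inverted_index` field):

```python
def reconstruct_abstract(inverted_index):
    """Step 1: rebuild the abstract text from an inverted index that
    maps each token to the list of positions where it occurs."""
    positions = []
    for token, token_positions in inverted_index.items():
        for position in token_positions:
            positions.append((position, token))
    # Sort tokens back into document order and join with spaces
    return ' '.join(token for _, token in sorted(positions))

def build_document(title, abstract):
    """Step 2: title + ' ' + abstract, or just the abstract if there is no title."""
    return f'{title} {abstract}' if title else abstract
```

For example, `reconstruct_abstract({'Dense': [0], 'vector': [1], 'embeddings': [2]})` yields `'Dense vector embeddings'`.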

Though the OpenAlex dataset records 240 million works, not all of them have abstracts, and not all are in English. The all-MiniLM-L6-v2 model was trained only on English text, hence the language filtering.

Dataset Structure

In the future, this dataset may be converted to Parquet to support all the features offered by Hugging Face Datasets, but for now it consists of a text file and a NumPy memmap. The memmap is an array of length-384 np.float16 vectors, and the i-th row vector in the array corresponds to the i-th line of the text file. The text file is a list of OpenAlex IDs that can be used to fetch more information from the OpenAlex API.

import numpy as np

# Load the OpenAlex IDs, one per line
with open('openalex_ids.txt', 'r') as f:
    idxs = f.read().splitlines()

# Memory-map the embeddings and view them as an (n_documents, 384) array
embeddings = np.memmap('embeddings.memmap', dtype=np.float16, mode='r').reshape(-1, 384)
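As a usage sketch (not part of the dataset's tooling), a brute-force cosine-similarity search over the embeddings might look like the following. `top_k` is a hypothetical helper, and the query vector is assumed to come from the same all-MiniLM-L6-v2 model:

```python
import numpy as np

def top_k(embeddings, query, k=5):
    """Return the row indices and cosine similarities of the k embeddings
    most similar to the query vector. Casts to float32 to avoid
    precision and overflow issues with float16 dot products."""
    emb = np.asarray(embeddings, dtype=np.float32)
    q = np.asarray(query, dtype=np.float32)
    emb_norm = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    q_norm = q / np.linalg.norm(q)
    scores = emb_norm @ q_norm
    top = np.argsort(scores)[::-1][:k]
    return top, scores[top]
```

The returned row indices can then be mapped back to OpenAlex IDs via the corresponding lines of `openalex_ids.txt` (e.g. `[idxs[i] for i in top]`). For 95 million rows, an approximate nearest-neighbor index would be much faster, but this shows how the two files line up.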

However, the memmap cannot be uploaded to Hugging Face as a single file, so it was split with:

split -b 3221225472 -d --suffix-length=3 --additional-suffix=.memmap embeddings.memmap embeddings_

It can be put back together with:

cat embeddings_*.memmap > embeddings.memmap
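After reassembly, one way to sanity-check the result is to confirm that the file size corresponds to exactly one 384-dimensional float16 row per line of the ID file. `check_files` is a hypothetical helper sketched under that assumption:

```python
import os

def check_files(memmap_path, ids_path, dim=384, itemsize=2):
    """Verify that the reassembled memmap holds exactly one
    float16 row of length `dim` per OpenAlex ID."""
    n_bytes = os.path.getsize(memmap_path)
    row_bytes = dim * itemsize  # 384 float16 values = 768 bytes per row
    assert n_bytes % row_bytes == 0, 'file size is not a whole number of rows'
    with open(ids_path) as f:
        n_ids = sum(1 for _ in f)
    assert n_bytes // row_bytes == n_ids, 'row count does not match ID count'
    return n_ids
```

If either assertion fails, a split part is likely missing or truncated.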