colonelwatch committed
Commit d22adc7 · 1 parent: 72d7fcc

Update the README with more precise instructions about how the embeddings were made

Files changed (1): README.md (+4 -2)
README.md CHANGED
@@ -18,9 +18,11 @@ This is the embeddings of the titles and abstracts of 110 million academic publi
  2. From the language field, determine if the abstract will be in English, and if not, go back to step 1
  3. From the abstract inverted index field, reconstruct the text of the abstract
  4. If there is a title field, construct a single document in the format `title + ' ' + abstract`, or if not, just use the abstract
- 5. Compute an embedding with the [stella_en_1.5B_v5](https://huggingface.co/NovaSearch/stella_en_1.5B_v5) model
+ 5. Compute an embedding with the [stella_en_1.5B_v5](https://huggingface.co/NovaSearch/stella_en_1.5B_v5) model (bfloat16 precision)
  6. Write it to a local SQLite3 database
 
- Said database is then exported in parquet format as pairs of OpenAlex IDs and length-1024 float32 vectors. The model was run with bfloat16 quantization, yielding bfloat16 vectors, but the conversion from bfloat16 to float32 leaves the lower two bytes as all-zero. This was exploited with byte-stream compression to store the vectors in a parquet with full precision but no wasted space. This does however mean that opening the parquets in the Hugging Face `datasets` library will lead to the cache using twice the space.
+ Said database is then exported in parquet format as pairs of OpenAlex IDs and length-1024 float32 vectors. Because the model was run with bfloat16 precision, yielding bfloat16 vectors, the conversion to float32 leaves the lower two bytes of every element all-zero. This was exploited with byte-stream compression to store the vectors in a parquet with full precision but no wasted space. This does, however, mean that opening the parquets in the Hugging Face `datasets` library will lead to the cache using twice the space.
+
+ Each parquet file was made with up to 2097152 (2 * 1024 * 1024) works because that is the largest power of two for which the file size was no more than 4 GB (the limit on FAT32 filesystems). The parquet files were also made with a row group size of 65536.
 
  Though the OpenAlex dataset records 240 million works, not all of these works have abstracts or are in English. Besides the works without abstracts, the stella_en_1.5B_v5 model was only trained on English texts, hence the filtering.
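
Steps 3-4 of the pipeline described in the diff can be sketched in a few lines of Python. OpenAlex stores abstracts as an inverted index mapping each word to the positions where it occurs; the helper name and example data below are illustrative, not the author's actual code:

```python
# Rebuild an abstract from an OpenAlex-style inverted index:
# {word: [positions...]} -> words sorted by position, joined by spaces.
def reconstruct_abstract(inverted_index):
    positioned = [
        (pos, word)
        for word, positions in inverted_index.items()
        for pos in positions
    ]
    return " ".join(word for _, word in sorted(positioned))

# Illustrative record (not real OpenAlex data)
inverted = {"Deep": [0], "learning": [1], "is": [2], "useful": [3]}
title = "A sample title"

abstract = reconstruct_abstract(inverted)
# Step 4: prepend the title when present, otherwise use the abstract alone
document = f"{title} {abstract}" if title else abstract
print(document)  # A sample title Deep learning is useful
```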
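
The all-zero lower bytes behind the storage trick can be demonstrated with NumPy: a bfloat16 value is a float32 with the low 16 bits dropped, so a bfloat16 vector widened to float32 has two zero bytes per element. This sketch truncates rather than rounds and assumes a little-endian machine:

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.random(1024).astype(np.float32)

# Simulate bfloat16 by zeroing the low 16 bits of each float32 element
bf16_as_f32 = (v.view(np.uint32) & 0xFFFF_0000).view(np.float32)

# On a little-endian machine the two zero bytes sit first in each element,
# so byte-stream compression can pack them away essentially for free.
raw = bf16_as_f32.view(np.uint8).reshape(-1, 4)
print((raw[:, :2] == 0).all())  # True
```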
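
The 2097152-works-per-file choice is consistent with a back-of-envelope check, assuming each float32 component compresses down to its two meaningful bytes and ignoring the ID column and parquet overhead (an approximation, not the author's stated calculation):

```python
works_per_file = 2 * 1024 * 1024   # 2097152
vector_dim = 1024                  # length-1024 vectors
effective_bytes = 2                # the two zero bytes compress away

approx_file_size = works_per_file * vector_dim * effective_bytes
print(approx_file_size)            # 4294967296 bytes, i.e. 4 GiB
```

Doubling the per-file count would put even this idealized size at 8 GiB, well past the FAT32 boundary, which matches the "largest power of two" claim.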