mmarone committed · Commit bcdf6de · verified · Parent(s): 15e9d5f

Add files using upload-large-folder tool

Files changed (1): README.md (+41 −5)
[WIP]

# FineWeb-Edu with Metadata

This repo contains 3 versions of the FineWeb-Edu v1 dataset:

```
fwedu1-metaonly/
fwedu1-text-content-zstd/
fineweb-edu-1.0.0-meta-and-text/
```

These are all joinable via the `hash` column, which is [xxhash64](https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.functions.xxhash64.html) in PySpark, calculated on the text column. This hash is unique across all instances in the dataset. For convenience, this join is already done for you in the third table.

`fwedu1-metaonly/` is just the metadata of the data, exactly as it comes from the FineWeb-Edu v1 subset. This includes duplicates! There are XXX records. For instance, identical text content might have been found at several different urls, across many CC dumps. The advantage of storing this data separately is that it is MUCH smaller than the text data and still allows for useful analysis - and you can always join it back!

```
DataFrame[id: string, dump: string, url: string, file_path: string, language: string, language_score: double, token_count: bigint, score: double, int_score: bigint, hash: bigint]
```

`fwedu1-text-content-zstd/` is the deduplicated data: a table containing only the text content and its hash. This saves space - we don't need to store redundant copies of the text data.

```
DataFrame[hash: bigint, rebuilt_count: bigint, first_text: string]
```

`fineweb-edu-1.0.0-meta-and-text/` is the joined data, containing both the text data and the metadata. It has the count columns used in our work (to come) and has the varying instance-level data (e.g. url) compressed into a struct column.

```
DataFrame[hash: bigint, text: string, instances: array<struct<dump:string,file_path:string,id:string,url:string>>, language: string, language_score: double, token_count: bigint, score: double, int_score: bigint, split: string, original_doc_count: bigint, position: int, reversed_count: int, tiktoken_size: int]
```
This lets you easily run a query like this:

```python
from pyspark.sql import functions as F

df = spark.read.parquet("fineweb-edu-1.0.0-meta-and-text")

# Keep only documents whose text was seen at more than one distinct url.
filtered_df = df.filter(
    F.size(F.array_distinct(F.transform(F.col("instances"), lambda x: x.url))) > 1
)
print(filtered_df.count())
filtered_df.show()

# 57292242: ~57M documents are found at more than one url - many of these are
# trivial differences like http vs https, but some reflect more interesting
# patterns like migrations or rehosts.
```
This finds all duplicated text content that appears at more than one distinct url!

**NOTE: This was built on v1 of the FineWeb-Edu dataset, which has been updated since.**