alexliap committed · Commit 2cd44df · verified · 1 Parent(s): 3f610f3

Update README.md

Files changed (1): README.md (+9 −11)
README.md CHANGED
````diff
@@ -3,18 +3,21 @@ language:
 - el
 license: apache-2.0
 size_categories:
-- 10G<n<100G
+- 1M<n<10M
 task_categories:
 - text-generation
 configs:
 - config_name: finepdfs_el
-  data_files: "finepdfs_el/*.parquet"
+  data_files: finepdfs_el/*.parquet
 - config_name: fineweb_hq_el
-  data_files: "fineweb_hq_el/*.parquet"
+  data_files: fineweb_hq_el/*.parquet
 - config_name: finewiki_el
-  data_files: "finewiki_el/*.parquet"
+  data_files: finewiki_el/*.parquet
 - config_name: wikipedia_el
-  data_files: "wikipedia_el/*.parquet"
+  data_files: wikipedia_el/*.parquet
+tags:
+- llms
+- pretraining
 ---
 
 This dataset contains Greek language text data from multiple high-quality sources.
@@ -22,7 +25,6 @@ This dataset contains Greek language text data from multiple high-quality source
 ## Dataset Statistics
 
 - **Total tokens:** ~21.1 billion (GPT-4 tokenizer)
-- **Total size:** 13.87 GB
 - **Total records:** 5,032,854
 
 ### Token Distribution
@@ -95,10 +97,6 @@ print(ds[0]["text"])
 print(ds[0]["token_count"])
 ```
 
-## Total Dataset Size
-
-**Total size:** 13.87 GB
-
 ## License
 
 Apache 2.0 (inherits from source datasets)
@@ -115,4 +113,4 @@ If you use this dataset, please cite the original sources:
 - FineWiki: HuggingFaceFW/finewiki
 - FineWeb2-HQ: epfml/FineWeb2-HQ
 - FinePDFs-Edu: HuggingFaceFW/finepdfs-edu
-- Wikipedia: Wikimedia Foundation
+- Wikipedia: Wikimedia Foundation
````
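Each `configs` entry in the front matter above pairs a config name with a `data_files` glob that selects that config's parquet shards. A minimal sketch of that mapping, using `fnmatch`-style matching; the repository file names here are hypothetical, invented only to illustrate how the globs select files:

```python
from fnmatch import fnmatch

# The four config-name -> data_files globs from the updated front matter.
configs = {
    "finepdfs_el": "finepdfs_el/*.parquet",
    "fineweb_hq_el": "fineweb_hq_el/*.parquet",
    "finewiki_el": "finewiki_el/*.parquet",
    "wikipedia_el": "wikipedia_el/*.parquet",
}

def files_for_config(name: str, repo_files: list[str]) -> list[str]:
    """Return the repo files selected by a config's data_files glob."""
    return [f for f in repo_files if fnmatch(f, configs[name])]

# Hypothetical repository listing (shard names are assumptions):
repo_files = [
    "README.md",
    "finewiki_el/part-00000.parquet",
    "wikipedia_el/part-00000.parquet",
]

print(files_for_config("finewiki_el", repo_files))
# -> ['finewiki_el/part-00000.parquet']
```

Because each glob is anchored to its own subdirectory, the four configs partition the repository's parquet files cleanly, and non-parquet files such as the README are never picked up.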