# Common Crawl sample

A small unofficial random subset of the famous Common Crawl dataset.

- 60 random segment WET files were downloaded from [Common Crawl](https://commoncrawl.org/) on 2024-05-12.
- Lines between 500 and 5000 characters long (inclusive) were kept.
- Only unique texts were kept.
- No other filtering was applied.
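The length filter and deduplication above can be sketched in a few lines. This is a minimal illustration, not the dataset's actual build script; the function name is hypothetical.

```python
def filter_lines(lines, min_len=500, max_len=5000):
    """Keep unique lines whose stripped length is within [min_len, max_len]."""
    seen = set()
    kept = []
    for line in lines:
        line = line.strip()
        # Length bounds are inclusive, matching the description above.
        if min_len <= len(line) <= max_len and line not in seen:
            seen.add(line)  # drop exact duplicates, keep first occurrence
            kept.append(line)
    return kept
```

Order is preserved, and the first occurrence of a duplicate wins.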

## Languages

- Each text was assigned one of the [language codes](https://github.com/google/cld3?tab=readme-ov-file#supported-languages) using the GCLD3 Python package.
- The Chinese texts were further classified as simplified, traditional, or Cantonese using the [fastlangid package](https://github.com/currentslab/fastlangid).
- For each language, 10% of the rows were randomly selected as the test set.
- The test set of the "all" languages split is the union of the test sets of all the languages in the dataset.
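The two-stage labelling can be sketched with the detectors injected as callables. The helper name and the library calls shown in the comments are assumptions about typical gcld3/fastlangid usage, not the dataset's actual script.

```python
# With the real libraries, the two detectors would look roughly like:
#   import gcld3
#   detector = gcld3.NNetLanguageIdentifier(min_num_bytes=0, max_num_bytes=1000)
#   detect = lambda text: detector.FindLanguage(text=text).language  # e.g. "zh"
#   from fastlangid.langid import LID
#   refine_chinese = LID().predict  # e.g. "zh-hans", "zh-hant"

def assign_language(text, detect, refine_chinese):
    """Label a text, routing Chinese texts to a second, finer classifier."""
    code = detect(text)
    if code == "zh":
        # GCLD3 reports a single "zh" code; the second pass splits it
        # into simplified / traditional / Cantonese.
        return refine_chinese(text)
    return code
```

Keeping the detectors as parameters makes the routing logic easy to test without the models installed.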

:warning: **Warning!** :no_entry_sign:

- This dataset is raw, unfiltered text from the Internet.
- It contains objectionable content, false information, and possibly personally identifiable information.
- Mostly, though, it's spam and repetitive junk. Just spam, spam, spam. Spam everywhere. :poop: Please filter it according to your needs.

## Limitations

- Some languages are greatly overrepresented.
- Samples of webpages may not represent real language use.
- The biggest problem is spam, which throws off the language detectors.
- There are many false positives, such as Taiwanese traditional Chinese being classified as Cantonese.
- The testing split isn't truly independent of the training split:
  - For example, different paragraphs from the same webpage can end up in both the training and testing splits.
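The per-language 10% split described under "Languages" can be sketched as follows. Function name and row layout are hypothetical; the comment notes why the independence caveat above arises.

```python
import random

def split_by_language(rows, test_frac=0.1, seed=0):
    """Per-language random split.

    `rows` is a list of (language_code, text) pairs. Note that this splits
    at the row level, so different paragraphs from the same webpage can end
    up in both the training and testing splits.
    """
    by_lang = {}
    for lang, text in rows:
        by_lang.setdefault(lang, []).append(text)
    rng = random.Random(seed)
    train, test = {}, {}
    for lang, texts in by_lang.items():
        # Pick test_frac of the row indices for this language at random.
        k = round(len(texts) * test_frac)
        test_idx = set(rng.sample(range(len(texts)), k))
        test[lang] = [t for i, t in enumerate(texts) if i in test_idx]
        train[lang] = [t for i, t in enumerate(texts) if i not in test_idx]
    return train, test
```

The "all" split's test set would then simply be the union of the per-language `test[lang]` lists.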