nielsr (HF Staff) committed · Commit 69a010e · verified · 1 parent: 937b7d4

Add task category and link to paper


This PR adds the `text-generation` task category to the dataset and includes a link to the paper: https://huggingface.co/papers/2506.09560.

Files changed (1): README.md (+6 −2)

README.md CHANGED
@@ -1,6 +1,7 @@
 ---
 language:
 - mk
+license: cc-by-4.0
 tags:
 - macedonian
 - text
@@ -8,12 +9,15 @@ tags:
 - cleaned
 datasets:
 - LVSTCK/macedonian-corpus-cleaned
-license: cc-by-4.0
+task_categories:
+- text-generation
 ---
 
 # Macedonian Corpus - Cleaned
 [raw version here](https://huggingface.co/datasets/LVSTCK/macedonian-corpus-raw)
 
+[Paper](https://huggingface.co/papers/2506.09560)
+
 ## 🌟 Key Highlights
 - **Size**: 35.5 GB, **Word Count**: 3.31 billion
 - Filtered for **irrelevant and low-quality content** using C4 and Gopher filtering.
@@ -22,7 +26,7 @@ license: cc-by-4.0
 ## 📋 Overview
 Macedonian is widely recognized as a low-resource language in the field of NLP. Publicly available resources in Macedonian are extremely limited, and as far as we know, no consolidated resource encompassing all available public data exists. Another challenge is the state of digitalized books and documents in Macedonia. The country lags behind in this regard, with many books and documents existing only as scanned images. This makes it difficult to extract textual information, which is critical for advancing linguistic research, education, and NLP applications in Macedonian language. To address these challenges, we created this **Macedonian Corpus**. This corpus consolidates multiple sources of Macedonian text data, including books, academic papers, web content, and other textual resources.
 
-This version of the corpus is **cleaned**, meaning the data has been subjected to filtering to ensure high-quality text for NLP tasks. The filtering was done using [datatrove](https://github.com/huggingface/datatrove), mainly motivated by [fineweb-2](https://github.com/huggingface/fineweb-2), but with slightly less aggressive settings to retain a broader range of text sources.
+This version of the corpus is **cleaned**, meaning the data has been subjected to filtering to ensure high-quality text for NLP tasks. The filtering was done using [datatrove](https://github.com/huggingface/datatrove), mainly motivated by [fineweb-2](https://huggingface.co/fineweb-2), but with slightly less aggressive settings to retain a broader range of text sources.
 
 This implementation applies heuristic rules derived from the [C4 dataset](https://jmlr.org/papers/volume21/20-074/20-074.pdf) and [Gopher dataset](https://arxiv.org/pdf/2112.11446.pdf) quality heuristic filters. For reference to the specific filtering code used in our processes, see the GitHub repositories for the [C4 filters](https://github.com/huggingface/datatrove/blob/47379fd9783a8731b6d595470d8b06af7da17e83/src/datatrove/pipeline/filters/c4_filters.py#L27) and the [Gopher quality filters](https://github.com/huggingface/datatrove/blob/47379fd9783a8731b6d595470d8b06af7da17e83/src/datatrove/pipeline/filters/gopher_quality_filter.py#L13). For those interested in applying custom filtering, the raw dataset can be accessed at [macedonian-corpus-raw](https://huggingface.co/datasets/LVSTCK/macedonian-corpus-raw).
 
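The card above says the corpus was cleaned with datatrove's C4 and Gopher quality filters. As a rough, self-contained illustration of the *kind* of heuristics those filters apply, here is a simplified sketch. The function name and every threshold are illustrative assumptions for this example only; the real datatrove filters (`c4_filters.py`, `gopher_quality_filter.py`, linked in the card) implement many more rules with their own tuned values.

```python
# Simplified sketch of Gopher/C4-style document quality heuristics.
# All thresholds here are illustrative, NOT the values datatrove uses.

def passes_quality_heuristics(
    text: str,
    min_words: int = 50,
    max_words: int = 100_000,
    max_symbol_word_ratio: float = 0.1,
    min_alpha_word_ratio: float = 0.8,
) -> bool:
    words = text.split()
    n = len(words)
    if n == 0:
        return False

    # Gopher-style: reject documents that are too short or too long.
    if not (min_words <= n <= max_words):
        return False

    # Gopher-style: too many "#" or "..." symbols relative to word count
    # usually indicates markup or truncated boilerplate.
    symbols = text.count("#") + text.count("...")
    if symbols / n > max_symbol_word_ratio:
        return False

    # Gopher-style: most words should contain at least one alphabetic
    # character (Cyrillic letters count as alphabetic in Python).
    alpha_words = sum(1 for w in words if any(ch.isalpha() for ch in w))
    if alpha_words / n < min_alpha_word_ratio:
        return False

    # C4-style: drop documents containing known placeholder boilerplate.
    if "lorem ipsum" in text.lower():
        return False

    return True


docs = [
    "Ова е краток текст.",  # too few words -> rejected
    " ".join(["Македонскиот јазик има долга историја."] * 20),  # kept
]
kept = [d for d in docs if passes_quality_heuristics(d)]
```

In the real pipeline these checks run as datatrove filter steps over the raw corpus rather than as a standalone function, but the pass/reject logic per document follows this same shape.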