Datasets:
annotations_creators:
- SLPL
language_creators:
- SLPL
language:
- fa
license:
- mit
multilinguality:
- monolingual
size_categories:
- 200M<n<300M
task_categories:
- language-modeling
- masked-language-modeling
task_ids:
- language-modeling
- masked-language-modeling
pretty_name: naab (A ready-to-use plug-and-play corpus in Farsi)
naab: A ready-to-use plug-and-play corpus in Farsi
If you want to join our community to keep up with news, models, and datasets from naab, click on this link.
Dataset Description
- Homepage: Sharif Speech and Language Processing Lab
- Paper: naab: A ready-to-use plug-and-play corpus for Farsi
- Point of Contact: Sadra Sabouri
Dataset Summary
naab is the largest cleaned, ready-to-use, open-source textual corpus in Farsi. It contains about 130GB of data, 250 million paragraphs, and 15 billion words. The project name is derived from the Farsi word ناب, which means pure and high-grade. We also provide the raw version of the corpus, called naab-raw, together with an easy-to-use pre-processor for those who want to build a customized corpus.
You can load this corpus with the commands below:
from datasets import load_dataset
dataset = load_dataset("SLPL/naab")
Note: make sure your machine has at least 130 GB of free space; the download may also take a while.
You may only need parts/splits of this corpus; if so, use the command below (you can find more ways to use it here):
from datasets import load_dataset
dataset = load_dataset("SLPL/naab", split="train[:10%]")
Supported Tasks and Leaderboards
This corpus can be used for training any language model that is trained with a language modeling or masked language modeling objective.
- language-modeling
- masked-language-modeling
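As an illustration, a masked language modeling run on naab could look like the sketch below. It is only a sketch under assumptions: HooshvareLab/bert-fa-base-uncased is used here merely as one publicly available Farsi masked-LM checkpoint, and the training arguments are placeholders.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# A small slice keeps the illustration cheap; use the full split for real pre-training.
dataset = load_dataset("SLPL/naab", split="train[:1%]")

checkpoint = "HooshvareLab/bert-fa-base-uncased"  # assumption: any Farsi masked LM works here
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# Dynamic masking: 15% of tokens are masked on the fly in each batch.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="naab-mlm", per_device_train_batch_size=16),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```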
Dataset Structure
Each row of the dataset looks like the following:
{
'text': "این یک تست برای نمایش یک پاراگراف در پیکره متنی ناب است.",
}
text: the textual paragraph.
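A quick way to inspect this schema (assuming the default configuration) is:

```python
from datasets import load_dataset

# Load only a few rows to check the schema.
sample = load_dataset("SLPL/naab", split="train[:5]")

print(sample.column_names)  # ['text']
print(sample[0]["text"])    # one Farsi paragraph
```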
Data Splits
This dataset includes two splits (train and test). We created them by randomly permuting the corpus and dividing it 95%/5% into train and test. Since validation usually takes place on a held-out portion of the training data during training, we do not provide a separate validation split.
| | train | test |
|---|---|---|
| Input Sentences | 225,892,925 | 11,083,851 |
| Average Sentence Length (words) | 61 | 25 |
Below you can see the log-scale histogram of words per paragraph over the two splits of the dataset.
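As a sanity check, the split statistics above can be roughly approximated on a small sample with whitespace tokenization, as in the sketch below (the official numbers may have used a different tokenization):

```python
from datasets import load_dataset

# Whitespace word counts on a 1% sample of the test split.
sample = load_dataset("SLPL/naab", split="test[:1%]")
lengths = [len(example["text"].split()) for example in sample]

print("paragraphs in sample:", len(lengths))
print("average length (words):", sum(lengths) / len(lengths))
```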
Dataset Creation
Curation Rationale
Due to the lack of large amounts of text data in lower-resource languages such as Farsi, researchers working on these languages have always found it hard to start fine-tuning large models. This can leave the opportunity of fine-tuning such models in the hands of a few companies or countries, which weakens open science.
The previously largest cleaned and merged textual corpus in Farsi was a 70GB cleaned text corpus compiled from 8 big datasets that had been cleaned and could be downloaded directly. naab is our answer to the issues discussed above. It provides 126GB (including more than 224 million sequences and nearly 15 billion words) as the training corpus and 2.3GB (including nearly 11 million sequences and nearly 300 million words) as the test corpus.
Source Data
The textual corpora that we used as source data are illustrated in the figure below. They comprise five corpora, which are described and linked in the following sections.
Persian NLP
This corpus includes eight corpora, sorted by their volume as follows:
- Common Crawl: 65GB (link)
- MirasText: 12GB
- W2C – Web to Corpus: 1GB (link)
- Persian Wikipedia (March 2020 dump): 787MB (link)
- Leipzig Corpora: 424MB (link)
- VOA corpus: 66MB (link)
- Persian poems corpus: 61MB (link)
- TEP: Tehran English-Persian parallel corpus: 33MB (link)
AGP
This corpus was formerly a private corpus of ASR Gooyesh Pardaz, which is now published for all users through this project. It contains more than 140 million paragraphs, totaling 23GB after cleaning. The corpus is a mixture of formal and informal paragraphs crawled from different websites and/or social media.
OSCAR-fa
OSCAR, or Open Super-large Crawled ALMAnaCH coRpus, is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture. Data is distributed by language in both original and deduplicated form. We used the unshuffled-deduplicated-fa subset of this corpus; after cleaning, about 36GB remained.
Telegram
Telegram, a cloud-based instant messaging service, is a widely used application in Iran. Accordingly, we prepared a list of Farsi Telegram channels covering various topics, including sports, daily news, jokes, movies and entertainment, etc. The text data extracted from these channels mainly consists of informal content.
LSCP
The Large Scale Colloquial Persian (LSCP) Language Understanding dataset has 120M sentences from 27M casual Persian sentences, with derivation trees, part-of-speech tags, sentiment polarity, and translations in English, German, Czech, Italian, and Hindi. We only used its Farsi part; after cleaning, 2.3GB remained. Since the dataset is colloquial, it helps our corpus include more informal sentences, although its proportion relative to the formal paragraphs is small.
Initial Data Collection and Normalization
The data collection process was separated into two parts. In the first part, we searched for existing corpora and downloaded them; we then started to crawl data from some social networks. Finally, thanks to ASR Gooyesh Pardaz, we were provided with enough textual data to start the naab journey.
We used a preprocessor based on stream-based Linux commands so that this process is less time- and memory-consuming. The code is provided here.
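For intuition, the following is a minimal Python sketch of the stream-based idea (not the actual preprocessor linked above): it normalizes a raw text file line by line, so memory usage stays constant regardless of corpus size. The file names and cleaning rules are placeholders for illustration.

```python
import re

# Hypothetical file names for illustration only.
RAW_PATH = "naab_raw.txt"
CLEAN_PATH = "naab_clean.txt"

def clean_line(line: str) -> str:
    """Apply a few example normalizations to a single paragraph."""
    line = re.sub(r"https?://\S+", " ", line)   # drop URLs
    line = re.sub(r"\s+", " ", line).strip()    # collapse whitespace
    return line

# Stream the corpus one line at a time instead of loading it into memory.
with open(RAW_PATH, encoding="utf-8") as src, open(CLEAN_PATH, "w", encoding="utf-8") as dst:
    for raw in src:
        cleaned = clean_line(raw)
        if cleaned:                              # skip lines that become empty
            dst.write(cleaned + "\n")
```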
Personal and Sensitive Information
[More Information Needed]
Considerations for Using the Data
Social Impact of Dataset
Farsi is a language used by millions of people and has been for thousands of years; therefore, numerous resources exist for this language. However, no one had published a sufficiently large, easy-to-use corpus of textual data. Our dataset eases the path of pre-training and fine-tuning Farsi Language Models (LMs) in a self-supervised manner, which can lead to better tools for the retention and development of Farsi. Moreover, the informal portion of naab contains text in various under-represented languages and dialects, including Turkish, Luri, etc. Although the amount of such data is comparatively small, it can be helpful for training a multilingual tokenizer for Farsi variations. As mentioned before, some parts of our dataset are crawled from social media, which means it contains ethnic, religious, and gender biases.
Discussion of Biases
During Exploratory Data Analysis (EDA), we found samples of data containing biased opinions about race, religion, and gender. Based on the samples we examined, only a small portion of the informal data can be considered biased, so we anticipate that it will not significantly affect language models trained on this data. Furthermore, we decided to keep this small part of the data, as it may be helpful for training models that classify harmful and hateful texts.
Other Known Limitations
[More Information Needed]
Additional Information
Dataset Curators
This dataset was collected and curated by the Sharif Speech and Language Processing Lab (SLPL) at Sharif University of Technology. The point of contact is Sadra Sabouri.
Licensing Information
This dataset is released under the MIT License.
Citation Information
[More Information Needed]
Contributions
Thanks to @sadrasabouri and @elnazrahmati for adding this dataset.