kilora and mshojaei77 committed on main
Commit 40b6375 · 0 Parent(s)

Duplicate from mshojaei77/persian-document-corpus

Co-authored-by: Mohammad Shojaei <mshojaei77@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,59 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.lz4 filter=lfs diff=lfs merge=lfs -text
+ *.mds filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ # Audio files - uncompressed
+ *.pcm filter=lfs diff=lfs merge=lfs -text
+ *.sam filter=lfs diff=lfs merge=lfs -text
+ *.raw filter=lfs diff=lfs merge=lfs -text
+ # Audio files - compressed
+ *.aac filter=lfs diff=lfs merge=lfs -text
+ *.flac filter=lfs diff=lfs merge=lfs -text
+ *.mp3 filter=lfs diff=lfs merge=lfs -text
+ *.ogg filter=lfs diff=lfs merge=lfs -text
+ *.wav filter=lfs diff=lfs merge=lfs -text
+ # Image files - uncompressed
+ *.bmp filter=lfs diff=lfs merge=lfs -text
+ *.gif filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
+ *.tiff filter=lfs diff=lfs merge=lfs -text
+ # Image files - compressed
+ *.jpg filter=lfs diff=lfs merge=lfs -text
+ *.jpeg filter=lfs diff=lfs merge=lfs -text
+ *.webp filter=lfs diff=lfs merge=lfs -text
+ # Video files - compressed
+ *.mp4 filter=lfs diff=lfs merge=lfs -text
+ *.webm filter=lfs diff=lfs merge=lfs -text
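
Each pattern above routes matching files through Git LFS rather than storing them directly in Git. For intuition only, here is a minimal Python sketch of matching paths against a few of these globs; Git applies gitignore-style semantics natively, which `fnmatch` only approximates:

```python
from fnmatch import fnmatch

# A handful of the LFS patterns from the .gitattributes above.
LFS_PATTERNS = ["*.parquet", "*.zip", "*tfevents*", "*.tar.*"]

def tracked_by_lfs(path: str) -> bool:
    """True if the path matches any LFS glob (approximates gitattributes matching)."""
    return any(fnmatch(path, pattern) for pattern in LFS_PATTERNS)

for path in ["data/train-00000-of-00004.parquet", "README.md"]:
    print(path, "->", tracked_by_lfs(path))
```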
README.md ADDED
@@ -0,0 +1,156 @@
+ ---
+ dataset_info:
+   features:
+   - name: text
+     dtype: string
+   - name: file_name
+     dtype: string
+   splits:
+   - name: train
+     num_bytes: 1534739938
+     num_examples: 13110
+   download_size: 497931218
+   dataset_size: 1534739938
+ configs:
+ - config_name: default
+   data_files:
+   - split: train
+     path: data/train-*
+ license: gpl-3.0
+ task_categories:
+ - text-generation
+ language:
+ - fa
+ tags:
+ - chemistry
+ - biology
+ - finance
+ - legal
+ - music
+ - art
+ - medical
+ size_categories:
+ - 10K<n<100K
+ ---
+ 
+ # Persian Document Corpus
+ 
+ ### Dataset Summary
+ 
+ The Persian Document Corpus (PDC) is a large collection of Persian documents, comprising **over 13,000 files** gathered from publicly accessible PDFs across a wide array of knowledge domains. The corpus includes research articles, theses, dissertations, scientific reports, and book chapters, offering a rich and diverse resource for the Persian Natural Language Processing (NLP) community. It is designed to support the training and evaluation of NLP models across a broad spectrum of tasks and application areas.
+ 
+ The documents span a **diverse range of subjects**, reflecting the breadth of Persian scholarly and professional output. Based on the search queries used during collection, the corpus covers the following fields:
+ 
+ * **Science and Technology:** Mathematics, Physics (including Quantum Physics), Chemistry, Computer Science, Engineering, Information Technology (including Blockchain, Cryptocurrencies, the Metaverse, NFTs, and Smart Contracts), Automotive Engineering, and other technical domains.
+ * **Social Sciences and Humanities:** Sociology, Political Science, Economics, Law (particularly Women's Rights), Philosophy, History (especially Iranian History), Education, Linguistics (including Persian and English Language Learning), Library Science, Media Studies (including Podcasts and Film), Literature, and potentially Psychology and Gender Studies (Feminism, Women's Studies).
+ * **Applied Sciences and Practical Domains:** Medicine and Health (including Pregnancy, Menstruation, Dermatology such as Acne, Ophthalmology such as Eye Diseases, and Cosmetics such as Perfume, Nail Care, and Masks), Agriculture (including Poultry and Floriculture), Business and Finance (Stock Market, Real Estate, and E-commerce such as Online Classifieds), Culinary Arts (Recipes), Home Improvement, Career Development (Employment, Migration), and general-interest topics such as Games, Music, and Lifestyle.
+ 
+ This dataset is particularly valuable for tasks that require analysis of formal Persian language, specialized terminology, and structured writing styles across these domains.
+ 
+ ### Supported Tasks
+ 
+ * **Language Modeling:** Train Persian language models on a wide variety of formal texts.
+ * **Information Retrieval:** Develop and evaluate Persian information retrieval systems.
+ * **Keyword Extraction:** Extract key terms and build domain-specific vocabularies (see the sketch after this list).
+ * **Text Summarization:** Summarize Persian documents from diverse subject areas.
+ 
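+ As a lightweight illustration of the keyword-extraction task, here is a minimal frequency-based sketch. It is illustrative only: it relies on whitespace tokenization and a tiny, assumed stopword list, whereas a real pipeline would use a proper Persian tokenizer and a full stopword set.
+ 
+ ```python
+ from collections import Counter
+ from datasets import load_dataset
+ 
+ ds = load_dataset("mshojaei77/persian-document-corpus", split="train")
+ 
+ # A few common Persian function words; illustrative, not exhaustive.
+ STOPWORDS = {"و", "در", "به", "از", "که", "این", "را", "با", "است", "برای"}
+ 
+ def top_keywords(text: str, k: int = 10) -> list[str]:
+     """Rank tokens by raw frequency after dropping stopwords and short tokens."""
+     tokens = [t for t in text.split() if len(t) > 2 and t not in STOPWORDS]
+     return [word for word, _ in Counter(tokens).most_common(k)]
+ 
+ for example in ds.select(range(3)):
+     print(example["file_name"], "->", top_keywords(example["text"]))
+ ```
+ 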
+ ## Dataset Creation
+ 
+ ### Curation Rationale
+ 
+ This dataset was created to address the need for a large, publicly available Persian text corpus covering a diverse range of subjects. Existing Persian datasets often lack content representing the full breadth of knowledge domains, which is essential for training robust models for various NLP tasks. The PDC aims to bridge this gap and support advances in Persian NLP research and applications across diverse fields of study.
+ 
+ ### Source Data
+ 
+ #### Data Collection and Processing
+ 
+ - **Data Acquisition:** Documents were discovered through web searches using a curated list of [Persian search queries](https://huggingface.co/datasets/mshojaei77/persian-search-queries) designed to target publicly available Persian PDF documents across a wide range of topics spanning Science, Technology, the Social Sciences, the Humanities, and applied fields (for example, "Mathematics," "Chemistry," "Feminism," "Migration Studies," and "Persian Podcasts"). The queries combined subject keywords (e.g., 'Mathematics', 'Feminism') with document-type terms (e.g., 'article', 'thesis', 'dissertation', 'book', 'guide') to refine the searches and keep results focused on informative content.
+ - **PDF Download and Conversion:** PDFs identified through the search queries were downloaded and converted to plain text using a combination of tools optimized for Persian, whose right-to-left script and complex layouts make extraction challenging.
+ - **Text Cleaning and Normalization:** The extracted text underwent several cleaning and normalization steps to improve quality and consistency (a sketch of these steps follows this list):
+   - **PDF Artifact Removal:** Headers, footers, page numbers, watermarks, and other extraneous elements introduced during conversion were removed using regular expressions and rule-based methods.
+   - **Character Normalization:** Character encodings were standardized to UTF-8, inconsistencies in Persian characters were resolved, and diacritics and vowel markings were normalized for uniformity.
+   - **Whitespace Normalization:** Runs of whitespace characters (spaces, tabs, newlines) were collapsed into single spaces, and leading/trailing whitespace was stripped from lines and documents.
+ 
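+ The exact cleaning rules are not published, so the following is a minimal sketch of the kind of normalization described above, with assumed patterns for Arabic-to-Persian character mapping, diacritic stripping, page-number removal, and whitespace collapsing:
+ 
+ ```python
+ import re
+ 
+ # Map Arabic codepoints to their Persian equivalents (a common normalization).
+ CHAR_MAP = str.maketrans({"\u064a": "\u06cc",   # Arabic Yeh -> Farsi Yeh
+                           "\u0643": "\u06a9"})  # Arabic Kaf -> Keheh
+ 
+ DIACRITICS = re.compile(r"[\u064b-\u0652]")             # Arabic diacritics / vowel marks
+ PAGE_NUMBER = re.compile(r"^\s*\d+\s*$", re.MULTILINE)  # lines containing only a number
+ 
+ def clean(text: str) -> str:
+     text = text.translate(CHAR_MAP)   # character normalization
+     text = DIACRITICS.sub("", text)   # drop diacritics and vowel markings
+     text = PAGE_NUMBER.sub("", text)  # crude PDF page-number removal
+     return re.sub(r"\s+", " ", text).strip()  # whitespace normalization
+ 
+ print(clean("متن نمونه\n12\nادامه متن"))  # page-number line removed, whitespace collapsed
+ ```
+ 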
+ ## Considerations for Using the Data
+ 
+ ### Social Impact
+ 
+ This dataset is intended to have a positive social impact by:
+ - **Democratizing Access to Persian Knowledge:** Providing a large, freely available collection of Persian texts from various fields to researchers, students, and developers worldwide.
+ - **Advancing Persian Language Technology:** Serving as a resource for developing and improving Persian language processing tools and models, benefiting Persian speakers in education, research, and professional domains.
+ - **Enabling Interdisciplinary Research on Persian Content:** Supporting computational analysis of Persian content across disciplines, yielding insights into research trends, knowledge dissemination, and the evolution of information within Persian-speaking communities.
+ 
+ ### Dataset Bias
+ 
+ Users should be aware of potential biases in this dataset:
+ - **Formal Language Bias:** The corpus consists mostly of formal, structured writing, which differs significantly from informal Persian. Models trained on this data may perform well on formal text but less effectively on other registers.
+ - **Topical Bias:** Although broad topical coverage was the goal, the distribution of subjects may not represent the full spectrum of Persian knowledge production. The subject list in the [Persian Search Queries Dataset](https://huggingface.co/datasets/mshojaei77/persian-search-queries) indicates the intended scope, but actual representation depends on the online availability of PDFs for each query. For example, topics like "Games," "Songs," or "Fortune Telling" are likely less represented in structured documents than core science or social-science subjects, despite being included in the queries. Analyzing the corpus's actual topic distribution is recommended.
+ - **Source Bias:** The dataset is limited to publicly accessible documents found through web searches, excluding documents behind paywalls, in private repositories, or not indexed by search engines. This may skew the corpus toward institutions and individuals more likely to publish online in PDF format.
+ - **Geographic and Institutional Bias:** The documents' origins may be skewed toward regions and institutions that are more active in publishing research online.
+ 
+ ### Known Limitations
+ 
+ - **PDF Conversion Artifacts:** Despite the cleaning steps, some documents may retain residual conversion artifacts such as layout inconsistencies, broken characters, or misidentified text elements. Users should expect some noise in the data.
+ - **Document Length Variability:** Documents range from short articles to lengthy dissertations and books; this variation may need handling depending on the NLP task.
+ - **Language Mixing:** While the corpus is primarily Persian, some documents contain technical terms, citations, or abstracts in English or other languages, as is common in academic and technical writing.
+ 
+ ## Additional Information
+ 
+ ### Dataset Curator
+ 
+ This dataset was curated by [Mohammad Shojaei](https://huggingface.co/mshojaei77).
+ 
+ ### Dataset Statistics
+ 
+ - **Number of Documents:** 13,110
+ - **Total Token Count:** approximately 1 billion tokens
+ - **Number of Subjects:** 1,313
+ 
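+ The token count above can be sanity-checked by tokenizing a random sample and extrapolating. A minimal sketch follows; the tokenizer is an arbitrary multilingual choice, and the resulting count varies with the tokenizer used:
+ 
+ ```python
+ from datasets import load_dataset
+ from transformers import AutoTokenizer
+ 
+ # Tokenizer choice is illustrative; any Persian-capable tokenizer works.
+ tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
+ 
+ ds = load_dataset("mshojaei77/persian-document-corpus", split="train")
+ sample = ds.shuffle(seed=0).select(range(100))
+ 
+ # Count tokens in the sample, then scale up to all 13,110 documents.
+ sample_tokens = sum(len(tok.encode(ex["text"])) for ex in sample)
+ estimate = sample_tokens * len(ds) / len(sample)
+ print(f"Estimated total tokens: {estimate:,.0f}")
+ ```
+ 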
+ ### Usage Example
+ 
+ Here is a basic example of loading and using the dataset with the Hugging Face `datasets` library:
+ 
+ ```python
+ from datasets import load_dataset
+ 
+ # Load the dataset
+ dataset = load_dataset("mshojaei77/persian-document-corpus")
+ 
+ # Access the first few examples
+ for example in dataset['train'].select(range(3)):
+     print(f"File: {example['file_name']}")
+     print(f"Text preview (first 200 characters):\n{example['text'][:200]}...")
+     print("-" * 50)
+ 
+ # Example: iterate through all documents and count words
+ def count_words(example):
+     # Simple whitespace splitting; a proper Persian tokenizer may be preferable.
+     return {'word_count': len(example['text'].split())}
+ 
+ dataset = dataset.map(count_words)
+ total_word_count = sum(dataset['train']['word_count'])
+ print(f"\nTotal word count in the dataset (approximate): {total_word_count}")
+ ```
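+ 
+ For quick experiments, the full corpus (roughly a 500 MB download, about 1.5 GB on disk) need not be downloaded up front; the `datasets` library can stream it instead:
+ 
+ ```python
+ from datasets import load_dataset
+ 
+ # Stream the corpus instead of downloading it in full.
+ stream = load_dataset("mshojaei77/persian-document-corpus", split="train", streaming=True)
+ 
+ for example in stream.take(3):
+     print(example["file_name"], len(example["text"]))
+ ```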
+ 
+ ### Licensing Information
+ 
+ This dataset is released under the [GNU General Public License v3.0 (GPL-3.0)](https://www.gnu.org/licenses/gpl-3.0.en.html). This license permits free use, distribution, and modification, including for commercial purposes, provided that derivative works are also licensed under GPL-3.0.
+ 
+ ### Citation Information
+ 
+ If you use this dataset in your research or applications, please cite it as follows:
+ 
+ ```bibtex
+ @misc{persian-document-corpus2024,
+   title={Persian Document Corpus},
+   author={Mohammad Shojaei},
+   year={2024},
+   publisher={Hugging Face},
+   howpublished={\url{https://huggingface.co/datasets/mshojaei77/persian-document-corpus}}
+ }
+ ```
+ 
+ ### Contributions
+ 
+ For questions, issues, or contributions, please open an issue or submit a pull request on the dataset repository. Contributions that improve the dataset and its documentation are welcome.
data/train-00000-of-00004.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ea429d685dab25890406bd2870c2eaf6b51ef95e877a108726239ca7e805699a
+ size 143945576
data/train-00001-of-00004.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0d3955aefc9b1ef907f1e46b7b4f9b43bc0bbf88d95a5b33e43ff1df74854c20
+ size 131134068
data/train-00002-of-00004.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c189ec739c647d3648aa53a3a06d23e80c20b0f9b5c99812de96c8d170134ac8
+ size 102422144
data/train-00003-of-00004.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0485e5e8fc01f83cb106cfcefa2004c58cba698cdfe0d022a0671952c63b5c7e
+ size 120429430