Update README to v2
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 3.41 GB
- **Size of the generated dataset:** 1.68 GB
- **Total amount of disk used:** 5.1 GB

### Dataset Summary

In an effort to democratize research on low-resource languages, we release the UzCrawl dataset, a web and Telegram crawl corpus consisting of materials from nearly 1.2 million unique sources in the Uzbek language.
Please refer to our [blogpost](https://tahrirchi.uz/grammatika-tekshiruvi) for further details.

P.S. We have updated the dataset with a 2nd version that extends the scope to new topics and is up to date as of March 2024.

To load and use the dataset, run this script:

```python
from datasets import load_dataset

uz_crawl = load_dataset("tahrirchi/uz-crawl")
```
To use v1 of the dataset, run this script instead:

```python
from datasets import load_dataset

uz_crawl = load_dataset("tahrirchi/uz-crawl", revision="b8de928a18559b2a1659124c412ae31c9c238eb8")
```
## Dataset Structure

### Data Instances

#### plain_text

- **Size of downloaded dataset files:** 3.41 GB
- **Size of the generated dataset:** 1.68 GB
- **Total amount of disk used:** 5.1 GB

An example of 'news' looks as follows.
### Data Splits

| split_name      | num_examples |
|-----------------|-------------:|
| news            |      1250018 |
| telegram_blogs  |       368017 |
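Note that the 1.2 million figure in the summary counts unique sources, while the split sizes above count examples; they need not match. A quick sketch of the totals implied by the table:

```python
# Example counts taken from the Data Splits table above.
splits = {"news": 1_250_018, "telegram_blogs": 368_017}

total = sum(splits.values())
print(total)  # 1618035 examples in the full corpus

# Share of each split in the corpus, as a percentage.
for name, count in splits.items():
    print(f"{name}: {100 * count / total:.1f}%")
```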
## Dataset Creation

The news portion has been crawled from 57 different websites using the [Scrapy](https://scrapy.org/) framework, and the telegram_blogs portion consists of manually curated texts from 128 high-quality Telegram channels.

## Citation
Please cite this dataset using the following format:

```
@online{Mamasaidov2023UzCrawl,
    author = {Mukhammadsaid Mamasaidov and Abror Shopulatov},
    title = {UzCrawl dataset},
    year = {2023},
}
```