Update README.md
README.md CHANGED
@@ -54,7 +54,7 @@ dataset_info:
     num_bytes: 4225989.067784654
     num_examples: 17863
   download_size: 20360216
-  dataset_size: 28172393
+  dataset_size: 28172393
 configs:
 - config_name: default
   data_files:
@@ -64,4 +64,52 @@ configs:
     path: data/validation-*
   - split: test
     path: data/test-*
+license: mit
+task_categories:
+- text-classification
+pretty_name: multilang-detect
 ---
+# Dataset Card for Multilingual Language Detection
+
+## Dataset Details
+
+
+This dataset is a comprehensive resource for **multilingual text classification**, designed specifically for language identification. It contains over 100,000 text samples spanning 36 languages, sourced from public datasets and cleaned for machine-learning use.
+
+The primary goal of this dataset is to train models that accurately predict the language of a given text snippet. The data is pre-split into training (70%), validation (15%), and test (15%) sets, stratified on the `Language` column so that every language is represented proportionally in each split.
+
+- **Language(s) (NLP):** Arabic, Bulgarian, Chinese, Danish, Dutch, English, Estonian, French, German, Greek, Hindi, Indonesian, Italian, Japanese, Kannada, Korean, Latin, Malayalam, Modern Greek, Persian, Polish, Portuguese, Pushto, Romanian, Russian, Spanish, Swahili, Swedish, Tamil, Thai, Turkish, Urdu, Vietnamese.
+- **License:** `mit`
+
+- **Repository:** [minhleduc/multilang-classify-dataset-02](https://huggingface.co/datasets/minhleduc/multilang-classify-dataset-02)
+
+## Uses
+
+### Direct Use
+
+This dataset is well suited for:
+- Training and fine-tuning language identification models.
+- Benchmarking multilingual text classifiers.
+- Research in multilingual representation learning.
+
+### Out-of-Scope Use
+
+This dataset is not suitable for:
+- Evaluating the grammatical correctness or fluency of a text.
+- Training models that require pristine, uncleaned text with original punctuation and casing.
+- Making judgments about the author of a text; the data has been anonymized and processed, and may carry biases from its original sources.
+
+## Dataset Structure
+
+The dataset is provided as a `DatasetDict` with `train`, `validation`, and `test` splits. Each sample has two fields:
+
+- `Text` (`string`): the cleaned text sample.
+- `Language` (`ClassLabel`): the language of the text, encoded as an integer; the integer-to-name mapping is stored in the feature metadata.
+
+**Example** (`Language` is stored as an integer index; the decoded name is shown here for readability):
+```json
+{
+  "Text": "this is an example of english text",
+  "Language": "English"
+}
+```
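
The card's `Dataset Structure` section describes `Language` as an integer-backed `ClassLabel`. A minimal sketch of that integer-to-name round trip, assuming a hypothetical index order for illustration (the real mapping lives in the dataset's feature metadata):

```python
# Hypothetical label order for illustration only; the actual order is
# stored in the dataset's ClassLabel feature metadata.
names = ["Arabic", "Bulgarian", "Chinese", "Danish", "Dutch", "English"]

str2int = {name: i for i, name in enumerate(names)}  # name -> integer label
int2str = dict(enumerate(names))                     # integer label -> name

# A row as stored on disk: cleaned text plus an integer class label.
sample = {"Text": "this is an example of english text",
          "Language": str2int["English"]}

print(int2str[sample["Language"]])  # -> English
```

With the Hugging Face `datasets` library loaded, the same decoding is provided by the `ClassLabel.int2str` method on the split's `Language` feature.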