Tasks: Text Classification
Modalities: Text
Formats: parquet
Sub-tasks: multi-class-classification
Languages: Catalan
Size: 10K - 100K
License:
Update README.md
README.md CHANGED
@@ -49,7 +49,7 @@ CA- Catalan
 
 ### Data Instances
 
-
+Two json files, one for each split.
 
 ### Data Fields
 
@@ -80,8 +80,8 @@ We used a simple model with the article text and associated labels, without furt
 
 ### Data Splits
 
-*
-*
+* hfeval_ca.json: 3970 label-document pairs
+* hftrain_ca.json: 9231 label-document pairs
 
 
 ## Dataset Creation
@@ -102,7 +102,7 @@ For each page, the “summary” provided by Wikipedia is also extracted
 
 #### Initial Data Collection and Normalization
 
-The source data are
+The source data are thematic categories in the different Wikipedias
 
 #### Who are the source language producers?
 
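For context, a minimal sketch of how the two split files named in the Data Splits section might be loaded. The file names and record counts come from the diff above; the assumption that each file holds a JSON array of records is mine, since the card only describes them as label-document pairs.

```python
import json

def load_split(path):
    # Assumes each split file is a JSON array of records;
    # the actual layout is not documented in this diff.
    with open(path, encoding="utf-8") as f:
        return json.load(f)

# File names and expected counts are taken from the Data Splits section above.
train = load_split("hftrain_ca.json")      # expected: 9231 label-document pairs
evaluation = load_split("hfeval_ca.json")  # expected: 3970 label-document pairs

print(f"train: {len(train)} records, eval: {len(evaluation)} records")
```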