---
language:
- en
task_categories:
- text-classification
---
# AutoTrain Dataset for project: twitter-disaster-v2
## Dataset Description
This dataset was automatically processed by AutoTrain for the project twitter-disaster-v2.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"feat_id": 1,
"feat_keyword": null,
"feat_location": null,
"text": "Our Deeds are the Reason of this #earthquake May ALLAH Forgive us all",
"target": 1
},
{
"feat_id": 4,
"feat_keyword": null,
"feat_location": null,
"text": "Forest fire near La Ronge Sask. Canada",
"target": 1
}
]
```
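The samples above are plain JSON objects, so they can be inspected with nothing but the standard library. A minimal sketch (the records are copied from the examples shown here; JSON `null` becomes Python `None`):

```python
import json

# The two sample records shown above, embedded as a JSON string.
sample = """[
  {"feat_id": 1, "feat_keyword": null, "feat_location": null,
   "text": "Our Deeds are the Reason of this #earthquake May ALLAH Forgive us all",
   "target": 1},
  {"feat_id": 4, "feat_keyword": null, "feat_location": null,
   "text": "Forest fire near La Ronge Sask. Canada",
   "target": 1}
]"""

records = json.loads(sample)
for rec in records:
    # Every record carries the same five fields.
    assert set(rec) == {"feat_id", "feat_keyword", "feat_location", "text", "target"}

print(records[1]["text"])
```

Note that `feat_keyword` and `feat_location` are nullable: both sample records carry `null` there, which `json.loads` maps to `None`.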
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"feat_id": "Value(dtype='int64', id=None)",
"feat_keyword": "Value(dtype='string', id=None)",
"feat_location": "Value(dtype='string', id=None)",
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(names=['0', '1'], id=None)"
}
```
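These dtypes translate directly into Python types: `int64` values arrive as `int`, `string` as `str` (nullable, per the samples above), and the `ClassLabel` target is stored as an integer indexing into `names=['0', '1']`. A minimal validation sketch against that schema (plain Python, no `datasets` installation assumed; the `SCHEMA` dict and `validate` helper are illustrative, not part of the dataset tooling):

```python
# Declared field dtypes from the card, mapped to Python types. The two
# "feat_*" string features may be null, as the sample records show.
SCHEMA = {
    "feat_id": int,
    "feat_keyword": (str, type(None)),
    "feat_location": (str, type(None)),
    "text": str,
    "target": int,  # ClassLabel value: an index into names=['0', '1']
}

def validate(record: dict) -> bool:
    """Return True if the record has exactly the declared fields and types."""
    return set(record) == set(SCHEMA) and all(
        isinstance(record[key], SCHEMA[key]) for key in SCHEMA
    )

ok = validate({
    "feat_id": 4,
    "feat_keyword": None,
    "feat_location": None,
    "text": "Forest fire near La Ronge Sask. Canada",
    "target": 1,
})
```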
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:

| Split name | Num samples |
| ---------- | ----------- |
| train      | 7613        |
| valid      | 0           |