---
language:
- en
task_categories:
- token-classification
---
# AutoTrain Dataset for project: aniaitokenclassification
## Dataset Description
This dataset has been automatically processed by AutoTrain for project aniaitokenclassification.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"tokens": [
"I",
" booked",
"a",
" flight",
"to",
"London."
],
"tags": [
4,
2,
2,
5,
2,
1
]
},
{
"tokens": [
"Apple",
"Inc.",
"is",
"planning",
"to",
"open",
"a",
"new",
"store",
"in",
"Paris."
],
"tags": [
3,
3,
2,
2,
2,
2,
2,
2,
2,
2,
1
]
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"tokens": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"tags": "Sequence(feature=ClassLabel(names=['COMPANY', 'LOC', 'O', 'ORG', 'PER', 'THING'], id=None), length=-1, id=None)"
}
```
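The integer `tags` values index into the `ClassLabel` names list above (0 = `COMPANY`, 1 = `LOC`, and so on). As a minimal sketch using plain Python (no external dependencies), the first sample can be decoded like this:

```python
# Label names in the same order as the ClassLabel definition above.
LABELS = ['COMPANY', 'LOC', 'O', 'ORG', 'PER', 'THING']

# First sample from the dataset (tokens carry their original whitespace).
tokens = ["I", " booked", "a", " flight", "to", "London."]
tags = [4, 2, 2, 5, 2, 1]

# Pair each token with its decoded label.
decoded = [(tok.strip(), LABELS[tag]) for tok, tag in zip(tokens, tags)]
for tok, label in decoded:
    print(f"{tok}\t{label}")
```

When loading the dataset with the `datasets` library, the same mapping is available via the feature itself, e.g. `dataset.features["tags"].feature.int2str(1)` returns `"LOC"`.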
### Dataset Splits
The dataset is split into train and validation sets. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 23 |
| valid | 6 |