autotrain-data-processor committed on
Commit
3dea4fa
·
1 Parent(s): 469daaa

Processed data from AutoTrain data processor (2023-02-23 13:35)

README.md ADDED
@@ -0,0 +1,53 @@
+ ---
+ task_categories:
+ - text-classification
+
+ ---
+ # AutoTrain Dataset for project: i-bert-twitter-sentiment
+
+ ## Dataset Description
+
+ This dataset has been automatically processed by AutoTrain for project i-bert-twitter-sentiment.
+
+ ### Languages
+
+ The BCP-47 code for the dataset's language is unk.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ A sample from this dataset looks as follows:
+
+ ```json
+ [
+   {
+     "text": "Thanks\\u002c Dave! Great show tonight. Sorry\\u002c Craig. I\\u2019ve got to get to bed. I\\u2019ll catch you tomorrow. @user David Letterman",
+     "target": 2
+   },
+   {
+     "text": "\"I've been watching Gilmore Girls for the past 3 hours. Oops, happy Thursday!\"",
+     "target": 2
+   }
+ ]
+ ```
+
+ ### Dataset Fields
+
+ The dataset has the following fields (also called "features"):
+
+ ```json
+ {
+   "text": "Value(dtype='string', id=None)",
+   "target": "ClassLabel(names=['negative', 'neutral', 'positive'], id=None)"
+ }
+ ```
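The `target` field is a `ClassLabel`, so the integer 2 carried by both samples above indexes into a fixed list of label names. A minimal sketch of that mapping in plain Python (the `datasets` library exposes the same operations as `ClassLabel.int2str` and `ClassLabel.str2int`):

```python
# Label names in the order stored in the ClassLabel feature.
NAMES = ["negative", "neutral", "positive"]

def int2str(target: int) -> str:
    """Map an integer target to its label name (mirrors ClassLabel.int2str)."""
    return NAMES[target]

def str2int(name: str) -> int:
    """Map a label name back to its integer target (mirrors ClassLabel.str2int)."""
    return NAMES.index(name)

print(int2str(2))          # target=2 in both samples above -> "positive"
print(str2int("negative"))  # -> 0
```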
+
+ ### Dataset Splits
+
+ This dataset is split into a train and a validation split. The split sizes are as follows:
+
+ | Split name | Num samples |
+ | ---------- | ----------- |
+ | train      | 36491       |
+ | valid      | 9124        |
processed/dataset_dict.json ADDED
@@ -0,0 +1 @@
+ {"splits": ["train", "valid"]}
processed/train/dataset.arrow ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:be9ad9afd5bcc0ccf1f3656ab22047701ad87c70333dd7a0bdab0b3b50949173
+ size 4349720
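The `.arrow` files themselves are stored through Git LFS: the repository keeps only this three-line pointer (spec version, sha256 oid, byte size), and the payload lives in LFS storage. A small sketch of how downloaded bytes can be checked against such a pointer (pure `hashlib`; the payload below is a stand-in, not the real 4349720-byte arrow file):

```python
import hashlib

def verify_lfs_pointer(data: bytes, oid: str, size: int) -> bool:
    """Check file bytes against the oid and size lines of a Git LFS pointer."""
    digest = hashlib.sha256(data).hexdigest()
    return oid == f"sha256:{digest}" and size == len(data)

# Stand-in payload; a real check would read processed/train/dataset.arrow.
data = b"arrow payload stand-in"
oid = "sha256:" + hashlib.sha256(data).hexdigest()

print(verify_lfs_pointer(data, oid, len(data)))      # True
print(verify_lfs_pointer(data, oid, len(data) + 1))  # False: size mismatch
```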
processed/train/dataset_info.json ADDED
@@ -0,0 +1,28 @@
+ {
+   "citation": "",
+   "description": "AutoTrain generated dataset",
+   "features": {
+     "text": {
+       "dtype": "string",
+       "_type": "Value"
+     },
+     "target": {
+       "names": [
+         "negative",
+         "neutral",
+         "positive"
+       ],
+       "_type": "ClassLabel"
+     }
+   },
+   "homepage": "",
+   "license": "",
+   "splits": {
+     "train": {
+       "name": "train",
+       "num_bytes": 4341210,
+       "num_examples": 36491,
+       "dataset_name": null
+     }
+   }
+ }
processed/train/state.json ADDED
@@ -0,0 +1,17 @@
+ {
+   "_data_files": [
+     {
+       "filename": "dataset.arrow"
+     }
+   ],
+   "_fingerprint": "3e802e633a5e9b5f",
+   "_format_columns": [
+     "target",
+     "text"
+   ],
+   "_format_kwargs": {},
+   "_format_type": null,
+   "_indexes": {},
+   "_output_all_columns": false,
+   "_split": null
+ }
processed/valid/dataset.arrow ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7e46cbb6908443ec1447d70814e52146dd949198b66bf89b6ee74b8662a496cf
+ size 1086504
processed/valid/dataset_info.json ADDED
@@ -0,0 +1,28 @@
+ {
+   "citation": "",
+   "description": "AutoTrain generated dataset",
+   "features": {
+     "text": {
+       "dtype": "string",
+       "_type": "Value"
+     },
+     "target": {
+       "names": [
+         "negative",
+         "neutral",
+         "positive"
+       ],
+       "_type": "ClassLabel"
+     }
+   },
+   "homepage": "",
+   "license": "",
+   "splits": {
+     "valid": {
+       "name": "valid",
+       "num_bytes": 1083912,
+       "num_examples": 9124,
+       "dataset_name": null
+     }
+   }
+ }
processed/valid/state.json ADDED
@@ -0,0 +1,17 @@
+ {
+   "_data_files": [
+     {
+       "filename": "dataset.arrow"
+     }
+   ],
+   "_fingerprint": "d0613462141b0ac3",
+   "_format_columns": [
+     "target",
+     "text"
+   ],
+   "_format_kwargs": {},
+   "_format_type": null,
+   "_indexes": {},
+   "_output_all_columns": false,
+   "_split": null
+ }