autotrain-data-processor committed
Commit · e439c8f
Parent(s): 714f167
Processed data from AutoTrain data processor (2023-04-08 16:47)
Browse files
- README.md +53 -0
- processed/dataset_dict.json +1 -0
- processed/train/data-00000-of-00001.arrow +3 -0
- processed/train/dataset_info.json +24 -0
- processed/train/state.json +16 -0
- processed/valid/data-00000-of-00001.arrow +3 -0
- processed/valid/dataset_info.json +24 -0
- processed/valid/state.json +16 -0
README.md
ADDED
@@ -0,0 +1,53 @@
---
task_categories:
- summarization
---

# AutoTrain Dataset for project: pro

## Dataset Description

This dataset has been automatically processed by AutoTrain for project pro.

### Languages

The BCP-47 code for the dataset's language is unk.

## Dataset Structure

### Data Instances

A sample from this dataset looks as follows:

```json
[
  {
    "text": "Dietitian",
    "target": "As a dietitian, I would like to design a vegetarian recipe for 2 people that has approximate 500 calories per serving and has a low glycemic index. Can you please provide a suggestion?"
  },
  {
    "text": "IT Architect",
    "target": "I want you to act as an IT Architect. I will provide some details about the functionality of an application or other digital product, and it will be your job to come up with ways to integrate it into the IT landscape. This could involve analyzing business requirements, performing a gap analysis and mapping the functionality of the new system to the existing IT landscape. Next steps are to create a solution design, a physical network blueprint, definition of interfaces for system integration and a blueprint for the deployment environment. My first request is \"I need help to integrate a CMS system.\""
  }
]
```

### Dataset Fields

The dataset has the following fields (also called "features"):

```json
{
  "text": "Value(dtype='string', id=None)",
  "target": "Value(dtype='string', id=None)"
}
```

### Dataset Splits

This dataset is split into a train and validation split. The split sizes are as follows:

| Split name | Num samples |
| ---------- | ----------- |
| train      | 122         |
| valid      | 31          |
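The split counts above can be sanity-checked with a bit of arithmetic; a minimal stdlib sketch (the sizes are copied from the table, and the comparison against a conventional 80/20 ratio is an observation, not something the README states as a target):

```python
# Split sizes copied from the table above.
split_sizes = {"train": 122, "valid": 31}

total = sum(split_sizes.values())             # 153 examples overall
train_fraction = split_sizes["train"] / total

# 122/153 is close to the common 80/20 train/validation ratio.
print(round(train_fraction, 2))
```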
processed/dataset_dict.json
ADDED
@@ -0,0 +1 @@
{"splits": ["train", "valid"]}
processed/train/data-00000-of-00001.arrow
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7372ade870543f185bffa704f5ddf8ca5956cf44061c21f2a6f99d795b285a2d
size 60880
processed/train/dataset_info.json
ADDED
@@ -0,0 +1,24 @@
{
  "citation": "",
  "description": "AutoTrain generated dataset",
  "features": {
    "text": {
      "dtype": "string",
      "_type": "Value"
    },
    "target": {
      "dtype": "string",
      "_type": "Value"
    }
  },
  "homepage": "",
  "license": "",
  "splits": {
    "train": {
      "name": "train",
      "num_bytes": 60286,
      "num_examples": 122,
      "dataset_name": null
    }
  }
}
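The `features` object in dataset_info.json is what the `datasets` library deserializes into its `Value` feature types; the README's "Dataset Fields" section is essentially a string rendering of it. A stdlib-only sketch reproducing that rendering from the JSON above (the rendering format is copied from the README, not an official API):

```python
import json

# The "features" object from dataset_info.json, inlined here so the
# sketch runs without the repo checked out.
features = json.loads("""
{
  "text":   {"dtype": "string", "_type": "Value"},
  "target": {"dtype": "string", "_type": "Value"}
}
""")

# Render each feature the way the README's "Dataset Fields" section does.
rendered = {
    name: f"{spec['_type']}(dtype='{spec['dtype']}', id=None)"
    for name, spec in features.items()
}
print(rendered["text"])  # Value(dtype='string', id=None)
```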
processed/train/state.json
ADDED
@@ -0,0 +1,16 @@
{
  "_data_files": [
    {
      "filename": "data-00000-of-00001.arrow"
    }
  ],
  "_fingerprint": "8e1beb9969cf64e2",
  "_format_columns": [
    "target",
    "text"
  ],
  "_format_kwargs": {},
  "_format_type": null,
  "_output_all_columns": false,
  "_split": null
}
processed/valid/data-00000-of-00001.arrow
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6cd459478a1291face83ea6bd2c5235211f7a3d68aab490cf91d61a156616c11
size 14888
processed/valid/dataset_info.json
ADDED
@@ -0,0 +1,24 @@
{
  "citation": "",
  "description": "AutoTrain generated dataset",
  "features": {
    "text": {
      "dtype": "string",
      "_type": "Value"
    },
    "target": {
      "dtype": "string",
      "_type": "Value"
    }
  },
  "homepage": "",
  "license": "",
  "splits": {
    "valid": {
      "name": "valid",
      "num_bytes": 14295,
      "num_examples": 31,
      "dataset_name": null
    }
  }
}
processed/valid/state.json
ADDED
@@ -0,0 +1,16 @@
{
  "_data_files": [
    {
      "filename": "data-00000-of-00001.arrow"
    }
  ],
  "_fingerprint": "affc16a9e512cd8d",
  "_format_columns": [
    "target",
    "text"
  ],
  "_format_kwargs": {},
  "_format_type": null,
  "_output_all_columns": false,
  "_split": null
}
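Taken together, dataset_dict.json and the per-split state.json files are the metadata that `datasets.load_from_disk` reads to reassemble the splits. A small stdlib sketch cross-checking them for consistency (all values are inlined from the files in this commit, so it runs without the repo checked out):

```python
import json

# dataset_dict.json, inlined.
dataset_dict = json.loads('{"splits": ["train", "valid"]}')

# The relevant fields of each split's state.json, inlined.
states = {
    "train": {"_fingerprint": "8e1beb9969cf64e2",
              "_data_files": [{"filename": "data-00000-of-00001.arrow"}]},
    "valid": {"_fingerprint": "affc16a9e512cd8d",
              "_data_files": [{"filename": "data-00000-of-00001.arrow"}]},
}

# Each declared split should have a state entry, a unique fingerprint,
# and at least one Arrow shard.
fingerprints = set()
for split in dataset_dict["splits"]:
    state = states[split]
    assert state["_data_files"], f"{split} has no data files"
    fingerprints.add(state["_fingerprint"])
assert len(fingerprints) == len(dataset_dict["splits"])
print("metadata consistent for:", ", ".join(dataset_dict["splits"]))
```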