Vishva007 committed on
Commit cfa0f90 · verified · 1 Parent(s): aca884e

Update README.md

Files changed (1):
1. README.md +91 -4
README.md CHANGED
@@ -11,13 +11,100 @@ dataset_info:
  dtype: string
  splits:
  - name: train
- num_bytes: 6414373
- num_examples: 8000
- download_size: 4061209
- dataset_size: 6414373
+ num_bytes: 3243541
+ num_examples: 4000
+ download_size: 2050955
+ dataset_size: 3243541
  configs:
  - config_name: default
  data_files:
  - split: train
  path: data/train-*
+ license: apache-2.0
+ task_categories:
+ - table-question-answering
+ - question-answering
+ - text-generation
+ language:
+ - en
+ size_categories:
+ - 1K<n<10K
  ---
+
+ # Databricks-Dolly-8k
+
+ This dataset contains **4,000 samples** drawn from the [databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) dataset.
+
+ This smaller subset is intended for very fast experimentation, quick prototyping, and evaluation of models when computational resources are limited.
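For reference, a subset like this can be drawn from the full dolly-15k corpus by sampling with a fixed seed. The sketch below uses plain Python lists as a stand-in for the real records; the `make_subset` helper, the seed, and the toy records are illustrative, not the script actually used to build this dataset.

```python
import random

def make_subset(records, n, seed=42):
    """Return a reproducible random subset of n records (no replacement)."""
    rng = random.Random(seed)
    return rng.sample(records, n)

# Toy records mirroring the schema of this dataset (values are invented).
records = [
    {"id": str(i), "instruction": f"question {i}", "response": f"answer {i}",
     "context": "", "source": "databricks-dolly-15k"}
    for i in range(15000)
]

subset = make_subset(records, 4000)
print(len(subset))  # 4000
```

Because the sampler is seeded, rerunning the script reproduces the same subset.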
+
+ ## Dataset Structure
+
+ The dataset is provided as a `DatasetDict` with the following splits:
+
+ * **`train`**: Contains 4,000 samples.
+
+ Each split contains the following features, identical to the original dataset:
+
+ * `id`: The unique identifier for each sample.
+ * `instruction`: The instruction or prompt for the task.
+ * `response`: The response to the given instruction.
+ * `context`: Additional context or information related to the instruction.
+ * `source`: The source of the sample.
+
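Concretely, each record is a flat mapping of the five string fields listed above. A small validity check might look like the sketch below; the `is_valid_record` helper and the example values are illustrative, not part of the dataset tooling.

```python
def is_valid_record(rec):
    """True if rec has exactly the five schema fields, all with string values."""
    fields = {"id", "instruction", "response", "context", "source"}
    return set(rec) == fields and all(isinstance(v, str) for v in rec.values())

# An invented record that follows the schema.
record = {
    "id": "42",
    "instruction": "Summarize the context in one sentence.",
    "context": "The Apache License 2.0 is a permissive open-source license.",
    "response": "It is a permissive open-source license.",
    "source": "databricks-dolly-15k",
}

print(is_valid_record(record))  # True
```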
+ ## Usage
+
+ You can easily load this dataset using the `datasets` library:
+
+ ```python
+ from datasets import load_dataset
+
+ databricks_dolly_8k = load_dataset("Vishva007/Databricks-Dolly-8k")
+
+ print(databricks_dolly_8k)
+ print(databricks_dolly_8k["train"][0])
+ ```
+
+ ## Example Usage
+
+ Here’s an example of how you might use this dataset in a Python script:
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the dataset
+ databricks_dolly_8k = load_dataset("Vishva007/Databricks-Dolly-8k")
+
+ # Print the first sample in the training set
+ print(databricks_dolly_8k["train"][0])
+
+ # Access specific fields from the first sample
+ sample = databricks_dolly_8k["train"][0]
+ print(f"ID: {sample['id']}")
+ print(f"Instruction: {sample['instruction']}")
+ print(f"Response: {sample['response']}")
+ print(f"Context: {sample['context']}")
+ print(f"Source: {sample['source']}")
+ ```
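Beyond inspecting single samples, a common first step is to profile a field across the whole split, for example tallying the `source` values. The snippet below runs on invented stand-in rows rather than the real dataset; in practice you would iterate over `databricks_dolly_8k["train"]` instead.

```python
from collections import Counter

# Stand-in rows (values invented); the real rows carry the same field names.
rows = [
    {"id": "0", "source": "wikipedia"},
    {"id": "1", "source": "wikipedia"},
    {"id": "2", "source": "databricks"},
]

counts = Counter(row["source"] for row in rows)
print(counts.most_common())  # [('wikipedia', 2), ('databricks', 1)]
```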
+
+ ## License
+
+ This dataset is derived from the [databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) dataset, which is licensed under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).
+
+ For more details about the original dataset, please refer to the [official documentation](https://huggingface.co/datasets/databricks/databricks-dolly-15k).
+
+ ---