Chiedo John committed on
Commit 50d0a00 · 0 Parent(s)

Initial dataset commit with Hello World examples


- Added train, validation, and test splits in JSONL format
- Created dataset loader script (hello_world.py)
- Added comprehensive dataset card documentation
- Total of 20 examples with greeting classification labels

Files changed (5)
  1. README.md +201 -0
  2. hello_world.py +83 -0
  3. test.jsonl +5 -0
  4. train.jsonl +10 -0
  5. validation.jsonl +5 -0
README.md ADDED
@@ -0,0 +1,201 @@
+ ---
+ language:
+ - en
+ license: mit
+ size_categories:
+ - n<1K
+ task_categories:
+ - text-classification
+ pretty_name: Hello World Dataset
+ dataset_info:
+   features:
+   - name: text
+     dtype: string
+   - name: label
+     dtype:
+       class_label:
+         names:
+           '0': greeting
+           '1': partial_greeting
+           '2': greeting_variant
+   splits:
+   - name: train
+     num_bytes: 380
+     num_examples: 10
+   - name: validation
+     num_bytes: 190
+     num_examples: 5
+   - name: test
+     num_bytes: 190
+     num_examples: 5
+   download_size: 760
+   dataset_size: 760
+ configs:
+ - config_name: default
+   data_files:
+   - split: train
+     path: train.jsonl
+   - split: validation
+     path: validation.jsonl
+   - split: test
+     path: test.jsonl
+ ---
+
+ # Hello World Dataset
+
+ ## Dataset Description
+
+ A simple demonstration dataset containing various forms of "Hello World" text for educational purposes. This dataset is designed to work with the [chiedo/hello-world](https://huggingface.co/chiedo/hello-world) model.
+
+ ### Dataset Summary
+
+ This dataset contains 20 examples of "Hello World" variations with classification labels. It's perfect for:
+ - Learning how to create and structure datasets on Hugging Face
+ - Testing basic text classification models
+ - Understanding dataset loading with the `datasets` library
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ Each instance contains:
+ - `text`: A string containing a variation of "Hello World"
+ - `label`: A classification label (greeting, partial_greeting, or greeting_variant)
+
+ Example:
+ ```json
+ {
+   "text": "Hello World!",
+   "label": "greeting"
+ }
+ ```
+
+ ### Data Fields
+
+ - `text` (string): The text content
+ - `label` (ClassLabel): One of three categories:
+   - `greeting`: Complete "Hello World" phrases
+   - `partial_greeting`: Only "Hello" or "World"
+   - `greeting_variant`: Variations like "Hello there" or "World hello"
+
+ ### Data Splits
+
+ | Split      | Examples |
+ |------------|----------|
+ | train      | 10       |
+ | validation | 5        |
+ | test       | 5        |
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ This dataset was created as a minimal example to demonstrate:
+ 1. How to structure a dataset for Hugging Face
+ 2. How to create custom dataset loaders
+ 3. How to integrate datasets with models
+
+ ### Source Data
+
+ The data was manually created for demonstration purposes.
+
+ ## Usage
+
+ ### Loading the Dataset
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the dataset
+ dataset = load_dataset("chiedo/hello-world")
+
+ # Access different splits
+ train_data = dataset["train"]
+ validation_data = dataset["validation"]
+ test_data = dataset["test"]
+
+ # Example: Print first training example
+ print(train_data[0])
+ # Output: {'text': 'Hello World!', 'label': 0}  # 0 corresponds to 'greeting'
+ ```
+
+ ### Using with the Model
+
+ ```python
+ from transformers import AutoModel, AutoTokenizer
+ from datasets import load_dataset
+
+ # Load model and tokenizer
+ model = AutoModel.from_pretrained("chiedo/hello-world", trust_remote_code=True)
+ tokenizer = AutoTokenizer.from_pretrained("chiedo/hello-world", trust_remote_code=True)
+
+ # Load dataset
+ dataset = load_dataset("chiedo/hello-world")
+
+ # Process a batch
+ texts = dataset["train"]["text"][:5]
+ inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
+ outputs = model(**inputs)
+ ```
+
+ ### Dataset Features
+
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("chiedo/hello-world")
+
+ # View dataset info
+ print(dataset)
+
+ # Get label names
+ label_names = dataset["train"].features["label"].names
+ print(f"Labels: {label_names}")
+ # Output: Labels: ['greeting', 'partial_greeting', 'greeting_variant']
+
+ # Convert label integers to names
+ for example in dataset["train"].select(range(3)):
+     label_int = example["label"]
+     label_name = label_names[label_int]
+     print(f"Text: {example['text']}, Label: {label_name}")
+ ```
+
+ ## Considerations for Using the Data
+
+ ### Social Impact
+
+ This is a demonstration dataset with no real-world application or social impact.
+
+ ### Limitations
+
+ - Very small dataset (20 examples total)
+ - Limited vocabulary (variations of "Hello" and "World")
+ - Not suitable for training production models
+ - For educational purposes only
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ Created by chiedo for demonstration purposes.
+
+ ### Licensing Information
+
+ MIT License - Free to use for any purpose.
+
+ ### Citation Information
+
+ If you use this dataset as a template:
+
+ ```bibtex
+ @dataset{hello_world_dataset,
+   title={Hello World Dataset - A Minimal Dataset Example},
+   author={chiedo},
+   year={2024},
+   publisher={Hugging Face}
+ }
+ ```
+
+ ### Contributions
+
+ This is a demonstration dataset. For real dataset contributions, please follow Hugging Face's dataset contribution guidelines.
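
Note: the card's "Dataset Features" example maps label ids to names by indexing into `label_names`; the `ClassLabel` feature also provides `int2str`/`str2int` helpers for the same round trip. A minimal sketch (not part of the committed files), assuming the dataset resolves under the `chiedo/hello-world` id used throughout the card:

```python
from datasets import load_dataset

dataset = load_dataset("chiedo/hello-world")
label_feature = dataset["train"].features["label"]  # a datasets.ClassLabel

# Convert between integer ids and label names with the ClassLabel helpers
print(label_feature.int2str(0))                   # 'greeting'
print(label_feature.str2int("greeting_variant"))  # 2

# Decode a few training examples back to their label names
for example in dataset["train"].select(range(3)):
    print(example["text"], "->", label_feature.int2str(example["label"]))
```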
hello_world.py ADDED
@@ -0,0 +1,83 @@
+ """Hello World Dataset - A simple dataset for demonstration purposes."""
+
+ import json
+ import datasets
+
+ _DESCRIPTION = """\
+ Hello World Dataset is a simple demonstration dataset containing various forms
+ of "Hello World" text with labels for greeting classification.
+ """
+
+ _HOMEPAGE = "https://huggingface.co/datasets/chiedo/hello-world"
+
+ _LICENSE = "MIT"
+
+ _URLS = {
+     "train": "train.jsonl",
+     "validation": "validation.jsonl",
+     "test": "test.jsonl",
+ }
+
+
+ class HelloWorld(datasets.GeneratorBasedBuilder):
+     """Hello World demonstration dataset."""
+
+     VERSION = datasets.Version("1.0.0")
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(name="default", version=VERSION, description="Default configuration"),
+     ]
+
+     DEFAULT_CONFIG_NAME = "default"
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "text": datasets.Value("string"),
+                 "label": datasets.ClassLabel(names=["greeting", "partial_greeting", "greeting_variant"]),
+             }
+         )
+
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+         )
+
+     def _split_generators(self, dl_manager):
+         urls = _URLS
+         data_dir = dl_manager.download_and_extract(urls)
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "filepath": data_dir["train"],
+                     "split": "train",
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={
+                     "filepath": data_dir["validation"],
+                     "split": "validation",
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "filepath": data_dir["test"],
+                     "split": "test",
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, filepath, split):
+         with open(filepath, encoding="utf-8") as f:
+             for key, row in enumerate(f):
+                 data = json.loads(row)
+                 yield key, {
+                     "text": data["text"],
+                     "label": data["label"],
+                 }
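
Note: the loader yields string labels straight from the JSONL rows; because `_info()` declares `label` as a `ClassLabel`, the `datasets` library encodes those strings to integer ids when it writes each split. A minimal sketch of exercising the script locally before pushing, assuming it sits next to the three JSONL files (recent `datasets` releases may also require `trust_remote_code=True` for script-based datasets):

```python
from datasets import load_dataset

# Point load_dataset at the local loading script; the relative paths in _URLS
# resolve next to hello_world.py, so the JSONL splits must be in the same directory.
dataset = load_dataset("./hello_world.py")

print(dataset)                                    # expect splits of 10 / 5 / 5 examples
print(dataset["train"][0])                        # e.g. {'text': 'Hello World!', 'label': 0}
print(dataset["train"].features["label"].names)   # ['greeting', 'partial_greeting', 'greeting_variant']
```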
test.jsonl ADDED
@@ -0,0 +1,5 @@
+ {"text": "Hello World.", "label": "greeting"}
+ {"text": "world", "label": "partial_greeting"}
+ {"text": "Hello!", "label": "partial_greeting"}
+ {"text": "Hello world?", "label": "greeting"}
+ {"text": "hello World", "label": "greeting"}
train.jsonl ADDED
@@ -0,0 +1,10 @@
+ {"text": "Hello World!", "label": "greeting"}
+ {"text": "Hello world", "label": "greeting"}
+ {"text": "hello world!", "label": "greeting"}
+ {"text": "Hello, World!", "label": "greeting"}
+ {"text": "Hello", "label": "partial_greeting"}
+ {"text": "World", "label": "partial_greeting"}
+ {"text": "Hello there", "label": "greeting_variant"}
+ {"text": "World hello", "label": "greeting_variant"}
+ {"text": "HELLO WORLD", "label": "greeting"}
+ {"text": "hello", "label": "partial_greeting"}
validation.jsonl ADDED
@@ -0,0 +1,5 @@
+ {"text": "Hello, world", "label": "greeting"}
+ {"text": "World!", "label": "partial_greeting"}
+ {"text": "hello world.", "label": "greeting"}
+ {"text": "Hello World!!!", "label": "greeting"}
+ {"text": "Hello world!", "label": "greeting"}
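
Note: because the splits are plain JSON Lines files, they can also be read without the loader script via the built-in `json` builder; in that case `label` stays a string unless it is cast explicitly. A minimal sketch, assuming the three files are in the working directory:

```python
from datasets import ClassLabel, load_dataset

raw = load_dataset(
    "json",
    data_files={"train": "train.jsonl", "validation": "validation.jsonl", "test": "test.jsonl"},
)

# Labels arrive as raw strings ("greeting", ...); casting to the same ClassLabel
# used by the dataset card turns them into integer ids.
names = ["greeting", "partial_greeting", "greeting_variant"]
encoded = raw.cast_column("label", ClassLabel(names=names))

print(raw["train"][0])      # {'text': 'Hello World!', 'label': 'greeting'}
print(encoded["train"][0])  # {'text': 'Hello World!', 'label': 0}
```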