---
language:
- en
license: mit
size_categories:
- n<1K
task_categories:
- text-classification
pretty_name: Hello World Dataset
dataset_info:
  features:
  - name: text
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': greeting
          '1': partial_greeting
          '2': greeting_variant
  splits:
  - name: train
    num_bytes: 380
    num_examples: 10
  - name: validation
    num_bytes: 190
    num_examples: 5
  - name: test
    num_bytes: 190
    num_examples: 5
  download_size: 760
  dataset_size: 760
configs:
- config_name: default
  data_files:
  - split: train
    path: train.jsonl
  - split: validation
    path: validation.jsonl
  - split: test
    path: test.jsonl
---

# Hello World Dataset

## Dataset Description

A simple demonstration dataset containing various forms of "Hello World" text for educational purposes. This dataset is designed to work with the [chiedo/hello-world](https://huggingface.co/chiedo/hello-world) model.

### Dataset Summary

This dataset contains 20 examples of "Hello World" variations with classification labels. It is well suited for:
- Learning how to create and structure datasets on Hugging Face
- Testing basic text classification models
- Understanding dataset loading with the `datasets` library

## Dataset Structure

### Data Instances

Each instance contains:
- `text`: A string containing a variation of "Hello World"
- `label`: A classification label (greeting, partial_greeting, or greeting_variant)

Example:
```json
{
  "text": "Hello World!",
  "label": "greeting"
}
```

### Data Fields

- `text` (string): The text content
- `label` (ClassLabel): One of three categories:
  - `greeting`: Complete "Hello World" phrases
  - `partial_greeting`: Only "Hello" or "World"
  - `greeting_variant`: Variations like "Hello there" or "World hello"

### Data Splits

| Split      | Examples |
|------------|----------|
| train      | 10       |
| validation | 5        |
| test       | 5        |

## Dataset Creation

### Curation Rationale

This dataset was created as a minimal example to demonstrate:
1. How to structure a dataset for Hugging Face
2. How to create custom dataset loaders
3. How to integrate datasets with models

### Source Data

The data was manually created for demonstration purposes.
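
Since each split is a plain JSON Lines file, files like `train.jsonl` can be produced with nothing but the standard library. A minimal sketch (the example rows here are illustrative, not the dataset's actual contents):

```python
import json

# Hypothetical rows for illustration; the real files were written by hand.
examples = [
    {"text": "Hello World!", "label": "greeting"},
    {"text": "Hello", "label": "partial_greeting"},
    {"text": "Hello there", "label": "greeting_variant"},
]

with open("train.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        # One JSON object per line -- the JSON Lines format.
        f.write(json.dumps(example) + "\n")
```

The `data_files` mapping in the YAML header points each split at its JSONL file, so no custom loading script is required.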

## Usage

### Loading the Dataset

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("chiedo/hello-world")

# Access different splits
train_data = dataset["train"]
validation_data = dataset["validation"]
test_data = dataset["test"]

# Example: Print first training example
print(train_data[0])
# Output: {'text': 'Hello World!', 'label': 0}  # 0 corresponds to 'greeting'
```

### Using with the Model

```python
from transformers import AutoModel, AutoTokenizer
from datasets import load_dataset

# Load model and tokenizer
model = AutoModel.from_pretrained("chiedo/hello-world", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("chiedo/hello-world", trust_remote_code=True)

# Load dataset
dataset = load_dataset("chiedo/hello-world")

# Process a batch
texts = dataset["train"]["text"][:5]
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
outputs = model(**inputs)
```

### Dataset Features

```python
from datasets import load_dataset

dataset = load_dataset("chiedo/hello-world")

# View dataset info
print(dataset)

# Get label names
label_names = dataset["train"].features["label"].names
print(f"Labels: {label_names}")
# Output: Labels: ['greeting', 'partial_greeting', 'greeting_variant']

# Convert label integers to names
for example in dataset["train"].select(range(3)):
    label_int = example["label"]
    label_name = label_names[label_int]
    print(f"Text: {example['text']}, Label: {label_name}")
```

## Considerations for Using the Data

### Social Impact

This is a demonstration dataset with no real-world application or social impact.

### Limitations

- Very small dataset (20 examples total)
- Limited vocabulary (variations of "Hello" and "World")
- Not suitable for training production models
- For educational purposes only

## Additional Information

### Dataset Curators

Created by chiedo for demonstration purposes.

### Licensing Information

MIT License - Free to use for any purpose.

### Citation Information

If you use this dataset as a template:

```bibtex
@dataset{hello_world_dataset,
  title={Hello World Dataset - A Minimal Dataset Example},
  author={chiedo},
  year={2024},
  publisher={Hugging Face}
}
```

### Contributions

This is a demonstration dataset. For real dataset contributions, please follow Hugging Face's dataset contribution guidelines.