---
language:
- en
license: mit
size_categories:
- n<1K
task_categories:
- text-classification
pretty_name: Hello World Dataset
dataset_info:
  features:
  - name: text
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': greeting
          '1': partial_greeting
          '2': greeting_variant
  splits:
  - name: train
    num_bytes: 380
    num_examples: 10
  - name: validation
    num_bytes: 190
    num_examples: 5
  - name: test
    num_bytes: 190
    num_examples: 5
  download_size: 760
  dataset_size: 760
configs:
- config_name: default
  data_files:
  - split: train
    path: train.jsonl
  - split: validation
    path: validation.jsonl
  - split: test
    path: test.jsonl
---
# Hello World Dataset

## Dataset Description

A simple demonstration dataset containing various forms of "Hello World" text for educational purposes. This dataset is designed to work with the `chiedo/hello-world` model.
### Dataset Summary
This dataset contains 20 examples of "Hello World" variations with classification labels. It's perfect for:
- Learning how to create and structure datasets on Hugging Face
- Testing basic text classification models
- Understanding dataset loading with the `datasets` library
## Dataset Structure

### Data Instances
Each instance contains:

- `text`: A string containing a variation of "Hello World"
- `label`: A classification label (`greeting`, `partial_greeting`, or `greeting_variant`)
Example:
```json
{
  "text": "Hello World!",
  "label": "greeting"
}
```
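The configuration above maps each split to a JSONL file, so each line of those files is one JSON object of this shape. If you have the raw files locally, a minimal sketch of reading them with the generic `json` builder (the local file names follow the config and are otherwise an assumption; loaded this way, `label` stays a plain string rather than a `ClassLabel` integer):

```python
from datasets import load_dataset

# Assumed local copies of the files listed in the config above
raw = load_dataset(
    "json",
    data_files={
        "train": "train.jsonl",
        "validation": "validation.jsonl",
        "test": "test.jsonl",
    },
)
print(raw["train"][0])  # e.g. {'text': 'Hello World!', 'label': 'greeting'}
```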
### Data Fields

- `text` (string): The text content
- `label` (ClassLabel): One of three categories:
  - `greeting`: Complete "Hello World" phrases
  - `partial_greeting`: Only "Hello" or "World"
  - `greeting_variant`: Variations like "Hello there" or "World hello"
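For reference, this schema corresponds to a `datasets.Features` declaration along these lines (a sketch of how the schema could be declared, not necessarily the exact code used to build the dataset):

```python
from datasets import ClassLabel, Features, Value

# Sketch of the schema described above
features = Features(
    {
        "text": Value("string"),
        "label": ClassLabel(names=["greeting", "partial_greeting", "greeting_variant"]),
    }
)
```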
### Data Splits
| Split | Examples |
|---|---|
| train | 10 |
| validation | 5 |
| test | 5 |
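The split sizes can be verified programmatically once the dataset is loaded; a short sketch:

```python
from datasets import load_dataset

dataset = load_dataset("chiedo/hello-world")

# Expected counts per the table above: train=10, validation=5, test=5
for split_name, split in dataset.items():
    print(f"{split_name}: {len(split)} examples")
```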
## Dataset Creation

### Curation Rationale
This dataset was created as a minimal example to demonstrate:
- How to structure a dataset for Hugging Face
- How to create custom dataset loaders
- How to integrate datasets with models
### Source Data
The data was manually created for demonstration purposes.
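Because the rows were written by hand, a dataset of the same shape can be rebuilt in a few lines; a minimal sketch (the example rows and the commented `push_to_hub` call are illustrative assumptions, not the original script):

```python
from datasets import Dataset, DatasetDict

# Hypothetical hand-written rows in the style of this dataset
train = Dataset.from_dict(
    {
        "text": ["Hello World!", "Hello", "Hello there"],
        "label": ["greeting", "partial_greeting", "greeting_variant"],
    }
)

# Encode the string labels as a ClassLabel feature
train = train.class_encode_column("label")

dataset = DatasetDict({"train": train})
# dataset.push_to_hub("your-username/hello-world")  # assumption: push to your own repo
```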
## Usage

### Loading the Dataset
```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("chiedo/hello-world")

# Access different splits
train_data = dataset["train"]
validation_data = dataset["validation"]
test_data = dataset["test"]

# Example: Print first training example
print(train_data[0])
# Output: {'text': 'Hello World!', 'label': 0}  # 0 corresponds to 'greeting'
```
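For quick inspection, a split can also be converted to a pandas DataFrame (pandas is pulled in as a dependency of `datasets`):

```python
from datasets import load_dataset

dataset = load_dataset("chiedo/hello-world")

# View the training split as a DataFrame
df = dataset["train"].to_pandas()
print(df.head())
```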
### Using with the Model
```python
from transformers import AutoModel, AutoTokenizer
from datasets import load_dataset

# Load model and tokenizer
model = AutoModel.from_pretrained("chiedo/hello-world", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("chiedo/hello-world", trust_remote_code=True)

# Load dataset
dataset = load_dataset("chiedo/hello-world")

# Process a batch
texts = dataset["train"]["text"][:5]
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
outputs = model(**inputs)
```
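To process every split rather than a single batch, the usual pattern is `Dataset.map` with the tokenizer; a sketch (the padding and length settings here are illustrative choices):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("chiedo/hello-world", trust_remote_code=True)
dataset = load_dataset("chiedo/hello-world")

def tokenize(batch):
    # Tokenize a batch of texts; padding/truncation values are illustrative
    return tokenizer(batch["text"], padding="max_length", truncation=True, max_length=16)

tokenized = dataset.map(tokenize, batched=True)
print(tokenized["train"].column_names)
```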
### Dataset Features
```python
from datasets import load_dataset

dataset = load_dataset("chiedo/hello-world")

# View dataset info
print(dataset)

# Get label names
label_names = dataset["train"].features["label"].names
print(f"Labels: {label_names}")
# Output: Labels: ['greeting', 'partial_greeting', 'greeting_variant']

# Convert label integers to names
for example in dataset["train"].select(range(3)):
    label_int = example["label"]
    label_name = label_names[label_int]
    print(f"Text: {example['text']}, Label: {label_name}")
```
## Considerations for Using the Data

### Social Impact
This is a demonstration dataset with no real-world application or social impact.
### Limitations
- Very small dataset (20 examples total)
- Limited vocabulary (variations of "Hello" and "World")
- Not suitable for training production models
- For educational purposes only
## Additional Information

### Dataset Curators
Created by chiedo for demonstration purposes.
### Licensing Information
MIT License - Free to use for any purpose.
### Citation Information
If you use this dataset as a template:
```bibtex
@dataset{hello_world_dataset,
  title={Hello World Dataset - A Minimal Dataset Example},
  author={chiedo},
  year={2024},
  publisher={Hugging Face}
}
```
### Contributions
This is a demonstration dataset. For real dataset contributions, please follow Hugging Face's dataset contribution guidelines.