---
license: mit
task_categories:
- text-generation
language:
- de
tags:
- handwriting
- stroke-data
- rnn-training
- stylus
- s-pen
- parquet
- jsonl
size_categories:
- n<1K
configs:
- config_name: default
  data_files:
  - split: train
    path: data/*.parquet
---

# v2testing
This dataset contains handwriting stroke data collected with a stylus (S Pen) on a tablet device. It is optimized for training RNNs (recurrent neural networks) on handwriting generation and recognition tasks.
## Dataset Description
- Schema Version: 1.0.0
- Format: Apache Parquet (columnar, compressed) + JSONL backup
- Language: German
## Data Format
Data is available in two formats in the `data/` directory:

- **Parquet files** (`*.parquet`): Columnar format, optimized for HuggingFace datasets
- **JSONL files** (`*.jsonl`): Line-delimited JSON backup, easy to parse
Both formats contain identical RNN training data with the same batch IDs.
### Parquet Schema
Each row in the Parquet files represents a complete handwriting sample:
| Column | Type | Description |
|---|---|---|
| `id` | string | Unique identifier (UUID) |
| `text` | string | The prompt text that was written |
| `dx` | list | Delta X offsets between consecutive points |
| `dy` | list | Delta Y offsets between consecutive points |
| `eos` | list | End-of-stroke flags (1 = pen lift, 0 = continue) |
| `scale` | double | Scale factor used for normalization |
| `created_at` | string | ISO timestamp of creation |
| `session_id` | string | Collection session identifier |
### JSONL Format
Each line in the JSONL files is a JSON object with the following structure:
```json
{"id": "uuid", "text": "prompt text", "points": [{"dx": 0, "dy": 0, "eos": 0}, ...], "scale": 1.0}
```
| Field | Type | Description |
|---|---|---|
| `id` | string | Unique identifier (UUID) |
| `text` | string | The prompt text that was written |
| `points` | array | Array of point objects with `dx`, `dy`, `eos` |
| `scale` | number (optional) | Scale factor used for normalization |
## RNN Training Format
The stroke data is stored in the format commonly used for RNN handwriting models:
- dx/dy: Position deltas from the previous point (first point has dx=dy=0)
- eos: Binary flag indicating pen lifts (end of stroke)
- Data is normalized by bounding box for consistent scale
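The forward transformation described above can be sketched as follows. This is an illustration of the delta-plus-normalization scheme, not the dataset's exact preprocessing pipeline; `to_rnn_format` and its bounding-box convention (normalizing by height) are assumptions:

```python
def to_rnn_format(strokes):
    """Convert absolute-coordinate strokes to (dx, dy, eos) lists.

    `strokes` is a list of strokes, each a list of (x, y) points.
    Coordinates are normalized by the bounding-box height so that
    samples share a consistent scale.  (Illustrative sketch only.)
    """
    # Flatten strokes into one point sequence, flagging the last
    # point of each stroke as a pen lift (eos = 1).
    points = [(x, y, 1 if i == len(s) - 1 else 0)
              for s in strokes for i, (x, y) in enumerate(s)]
    ys = [p[1] for p in points]
    scale = (max(ys) - min(ys)) or 1.0  # bounding-box normalization

    # First point has dx = dy = 0, per the format description.
    dx, dy, eos = [0.0], [0.0], [points[0][2]]
    for (px, py, _), (cx, cy, e) in zip(points, points[1:]):
        dx.append((cx - px) / scale)
        dy.append((cy - py) / scale)
        eos.append(e)
    return dx, dy, eos, scale
```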
## Visualization

Preview SVGs are available in `renders_preview/` for the HuggingFace Dataset Viewer.
## Usage

### Using Parquet (Recommended for HuggingFace)
```python
from datasets import load_dataset

# For private repos, use: load_dataset("finnbusse/v2testing", token="YOUR_HF_TOKEN")
dataset = load_dataset("finnbusse/v2testing")

# Access a sample
sample = dataset['train'][0]

# Stroke data is already native Python lists (no JSON parsing needed)
dx = sample['dx']
dy = sample['dy']
eos = sample['eos']

# Reconstruct absolute positions
x, y = 0, 0
positions = []
for dx_i, dy_i, eos_i in zip(dx, dy, eos):
    x += dx_i
    y += dy_i
    positions.append((x, y, eos_i))
```
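If NumPy is available, the same reconstruction can be vectorized with cumulative sums instead of an explicit loop (a minimal sketch; `reconstruct` is a hypothetical helper, not part of the dataset):

```python
import numpy as np

def reconstruct(dx, dy, eos):
    """Recover absolute (x, y, eos) rows from delta offsets via cumulative sums."""
    xs = np.cumsum(dx)  # running sum of deltas gives absolute X
    ys = np.cumsum(dy)
    return np.stack([xs, ys, np.asarray(eos, dtype=float)], axis=1)
```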
### Using JSONL (Alternative)

JSONL filenames follow the batch ID pattern: `YYYYMMDD_HHMMSS_XXXX.jsonl`
```python
import json
import glob

# Read all JSONL files in the data directory
for jsonl_file in glob.glob('data/*.jsonl'):
    with open(jsonl_file, 'r') as f:
        for line in f:
            sample = json.loads(line)
            points = sample['points']
            scale = sample.get('scale', 1.0)  # scale is optional
            # Each point has: dx, dy, eos
```
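To get the same flat `dx`/`dy`/`eos` lists that the Parquet files provide, the JSONL point objects can be unzipped into parallel columns (`points_to_columns` is a hypothetical helper, shown for illustration):

```python
def points_to_columns(points):
    """Split a list of {'dx', 'dy', 'eos'} dicts into three parallel lists."""
    dx = [p['dx'] for p in points]
    dy = [p['dy'] for p in points]
    eos = [p['eos'] for p in points]
    return dx, dy, eos
```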
## Collection Method

Data was collected using a web application built on the Pointer Events API, capturing stylus input including pressure and tilt when available.