---
license: mit
task_categories:
- text-generation
language:
- de
tags:
- handwriting
- stroke-data
- rnn-training
- stylus
- s-pen
- parquet
- jsonl
size_categories:
- n<1K
configs:
- config_name: default
data_files:
- split: train
path: data/*.parquet
---
# v2testing
This dataset contains handwriting stroke data collected with a stylus (S Pen) on a tablet device. It is optimized for training recurrent neural networks (RNNs) on handwriting generation and recognition tasks.
## Dataset Description
- **Schema Version:** 1.0.0
- **Format:** Apache Parquet (columnar, compressed) + JSONL backup
- **Language:** German
## Data Format
Data is available in two formats in the `data/` directory:
- **Parquet files** (`*.parquet`): Columnar format, optimized for HuggingFace datasets
- **JSONL files** (`*.jsonl`): Line-delimited JSON backup, easy to parse
Both formats contain identical RNN training data with the same batch IDs.
### Parquet Schema
Each row in the Parquet files represents a complete handwriting sample:
| Column | Type | Description |
|--------|------|-------------|
| `id` | string | Unique identifier (UUID) |
| `text` | string | The prompt text that was written |
| `dx` | list<double> | Delta X offsets between consecutive points |
| `dy` | list<double> | Delta Y offsets between consecutive points |
| `eos` | list<double> | End-of-stroke flags (1 = pen lift, 0 = continue) |
| `scale` | double | Scale factor used for normalization |
| `created_at` | string | ISO timestamp of creation |
| `session_id` | string | Collection session identifier |
### JSONL Format
Each line in the JSONL files is a JSON object with the following structure:
```json
{"id": "uuid", "text": "prompt text", "points": [{"dx": 0, "dy": 0, "eos": 0}, ...], "scale": 1.0}
```
| Field | Type | Description |
|-------|------|-------------|
| `id` | string | Unique identifier (UUID) |
| `text` | string | The prompt text that was written |
| `points` | array | Array of point objects with dx, dy, eos |
| `scale` | number (optional) | Scale factor used for normalization |
### RNN Training Format
The stroke data is stored in the format commonly used for RNN handwriting models:
- **dx/dy**: Position deltas from the previous point (first point has dx=dy=0)
- **eos**: Binary flag indicating pen lifts (end of stroke)
- Data is normalized by bounding box for consistent scale
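As an illustrative sketch only (the dataset's actual preprocessing pipeline is not published here), converting absolute pen coordinates into this delta format with bounding-box normalization might look like the hypothetical helper below:

```python
def to_rnn_format(strokes):
    """Convert a list of strokes (each a list of (x, y) tuples in absolute
    coordinates) into parallel dx/dy/eos lists with bounding-box scaling.

    Hypothetical helper -- the dataset's real preprocessing may differ.
    """
    # Flatten strokes; the last point of each stroke gets eos = 1.
    points = [(x, y, 1.0 if i == len(s) - 1 else 0.0)
              for s in strokes for i, (x, y) in enumerate(s)]
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    # Normalize by the larger side of the bounding box.
    scale = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    dx, dy, eos = [], [], []
    prev_x, prev_y = points[0][0], points[0][1]
    for x, y, e in points:
        dx.append((x - prev_x) / scale)  # first point yields dx = dy = 0
        dy.append((y - prev_y) / scale)
        eos.append(e)
        prev_x, prev_y = x, y
    return dx, dy, eos, scale
```

Note how the first point always produces `dx = dy = 0`, matching the convention stated above.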
## Visualization
Preview SVGs are available in `renders_preview/` for HuggingFace Dataset Viewer.
## Usage
### Using Parquet (Recommended for HuggingFace)
```python
from datasets import load_dataset
# For private repos, use: load_dataset("finnbusse/v2testing", token="YOUR_HF_TOKEN")
dataset = load_dataset("finnbusse/v2testing")
# Access a sample
sample = dataset['train'][0]
# Stroke data is already native Python lists (no JSON parsing needed)
dx = sample['dx']
dy = sample['dy']
eos = sample['eos']
# Reconstruct absolute positions
x, y = 0, 0
positions = []
for dx_i, dy_i, eos_i in zip(dx, dy, eos):
    x += dx_i
    y += dy_i
    positions.append((x, y, eos_i))
```
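To render or analyze individual strokes, the reconstructed `(x, y, eos)` triples can be split at pen lifts. This helper is an illustrative addition, not part of the dataset tooling:

```python
def split_strokes(positions):
    """Split (x, y, eos) triples into separate strokes at pen lifts.

    `positions` is a list of (x, y, eos) tuples, as built by the
    reconstruction loop above. Illustrative helper only.
    """
    strokes, current = [], []
    for x, y, eos in positions:
        current.append((x, y))
        if eos == 1:          # pen lift ends the current stroke
            strokes.append(current)
            current = []
    if current:               # trailing points without a final pen lift
        strokes.append(current)
    return strokes
```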
### Using JSONL (Alternative)
JSONL filenames follow the batch ID pattern: `YYYYMMDD_HHMMSS_XXXX.jsonl`
```python
import json
import glob
# Read all JSONL files in the data directory
for jsonl_file in glob.glob('data/*.jsonl'):
    with open(jsonl_file, 'r') as f:
        for line in f:
            sample = json.loads(line)
            points = sample['points']
            scale = sample.get('scale', 1.0)  # scale is optional
            # Each point has: dx, dy, eos
```
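Since both formats carry the same data, a JSONL sample's per-point objects can be flattened into the parallel-list layout of the Parquet schema, e.g. to feed the same training code. A minimal sketch (the function name is illustrative):

```python
def points_to_columns(sample):
    """Flatten a JSONL sample's `points` array into parallel dx/dy/eos
    lists, matching the Parquet column layout. Illustrative helper.
    """
    pts = sample["points"]
    return {
        "id": sample["id"],
        "text": sample["text"],
        "dx": [p["dx"] for p in pts],
        "dy": [p["dy"] for p in pts],
        "eos": [p["eos"] for p in pts],
        "scale": sample.get("scale", 1.0),  # scale is optional in JSONL
    }
```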
## Collection Method
Data was collected using a web application built on the Pointer Events API, capturing stylus input, including pressure and tilt where available.