---
tags:
- onnx
- gesture-recognition
- time-series-classification
- android
- on-device
- scikit-learn
datasets:
- ravenwing/cheedeh-IMU-data
library_name: onnxruntime
task_categories:
- time-series-classification
metrics:
- accuracy
- f1
---
# cheedeh-gesture-classifier
An ONNX model for classifying phone air gestures from accelerometer data, designed for on-device inference on Android. Trained with scikit-learn (StandardScaler + SVM with an RBF kernel) and exported to ONNX.

Classes: `z`, `m`, `s`, `o`, `none`
## Files

| File | Description |
|---|---|
| `gesture_classifier.onnx` | Inference model (StandardScaler + SVM, ONNX opset 15) |
| `label_map.json` | Maps output class index (0–4) to gesture name |
## Model Details
| Property | Value |
|---|---|
| Architecture | StandardScaler + SVM (rbf, C=10, gamma=scale) |
| Input | 52 hand-crafted features from 3-axis accelerometer |
| Output | Class index (int64) + probabilities (float32[5]) |
| Test accuracy | 0.759 |
| Macro F1 | 0.793 |
| Training samples | ~372 |
| Test samples | ~54 |
## Usage
Input tensor: float32[1, 52] — 52 features extracted from a 100-point resampled accelerometer gesture.
Output tensors: int64[1] (class index), float32[1, 5] (class probabilities).
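As a minimal sketch of the call pattern with the onnxruntime Python API (the model card targets Android, where the equivalent ONNX Runtime Mobile API applies), assuming the model file sits in the working directory and using a placeholder zero vector in place of real extracted features:

```python
import numpy as np
from pathlib import Path

# Placeholder 52-element feature vector; real features come from the
# project's feature-extraction step, not shown here.
features = np.zeros((1, 52), dtype=np.float32)

model_path = Path("gesture_classifier.onnx")
if model_path.exists():
    import onnxruntime as ort

    sess = ort.InferenceSession(str(model_path))
    input_name = sess.get_inputs()[0].name
    # Outputs follow the spec above: int64[1] class index, float32[1, 5] probabilities
    class_idx, probs = sess.run(None, {input_name: features})
    print(int(class_idx[0]), probs[0])
else:
    # Without the model file, just confirm the expected input shape/dtype.
    print(features.shape, features.dtype)  # (1, 52) float32
```

The class index can then be looked up in `label_map.json` to recover the gesture name.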
For data collection and the inference implementation, see cheedeh-collect.
## Input Sensor Requirements
- Sensor type: `TYPE_LINEAR_ACCELERATION` (gravity-compensated, m/s²)
- Sample rate: ~50 Hz (interpolated to exactly 100 points before feature extraction)
- Gesture duration: typically 0.5–3 seconds
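The resampling step above can be sketched with per-axis linear interpolation; the helper name `resample_to_100` and the synthetic recording are illustrative, not part of the actual pipeline:

```python
import numpy as np

def resample_to_100(t, xyz):
    """Linearly interpolate a variable-rate 3-axis recording onto
    100 evenly spaced time points spanning the gesture."""
    t = np.asarray(t, dtype=np.float64)
    xyz = np.asarray(xyz, dtype=np.float64)  # shape (n, 3)
    t_new = np.linspace(t[0], t[-1], 100)
    # np.interp works on 1-D arrays, so interpolate each axis separately
    return np.column_stack([np.interp(t_new, t, xyz[:, k]) for k in range(3)])

# Synthetic ~50 Hz recording of a 1.5 s gesture: 75 samples, 3 axes
t = np.linspace(0.0, 1.5, 75)
xyz = np.random.default_rng(0).normal(size=(75, 3))
resampled = resample_to_100(t, xyz)
print(resampled.shape)  # (100, 3)
```

Resampling to a fixed length makes the 52-feature extraction independent of the gesture's duration and the device's exact sensor rate.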
The `none` class represents background / non-gesture motion.
## Training
Trained on the cheedeh-IMU-data dataset collected with the cheedeh-collect Android app. Training pipeline at cheedeh-learn.
Class weights were balanced during training to handle imbalanced class distribution.
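A minimal sketch of the training pipeline as described above (StandardScaler + RBF SVM with `C=10`, `gamma='scale'`, balanced class weights); the synthetic feature matrix stands in for the real extracted features, which live in cheedeh-learn:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in data: 40 gestures x 52 features, 5 classes
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 52))
y = np.repeat(np.arange(5), 8)  # z, m, s, o, none as indices 0-4

clf = make_pipeline(
    StandardScaler(),
    SVC(kernel="rbf", C=10, gamma="scale",
        class_weight="balanced",  # compensate for imbalanced classes
        probability=True),        # enables the probability output
)
clf.fit(X, y)
print(clf.predict(X[:1]).shape, clf.predict_proba(X[:1]).shape)  # (1,) (1, 5)
```

The fitted pipeline can then be converted to ONNX (the card states opset 15), which preserves both the class-index and probability outputs described under Usage.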