# compressionkit-ppg-8x

A PPG signal compression codec using Residual Vector Quantization (RVQ), optimized for edge and wearable devices.
## Model Details
- Modality: PPG
- Sample Rate: 64 Hz
- Compression Ratio: 8x
- Quantization: INT8
- RVQ Levels: 4
- Codebook Size: 256 entries × 16D
- Encoder Input: `[None, 1, 320, 1]`
- Encoder Output: `[None, 1, 40, 16]`
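The 8x ratio follows directly from the numbers above: each 320-sample window produces 40 frames, and each frame carries one 8-bit index per RVQ level. A quick sanity check (the float32 input assumption is ours; everything else is from the table):

```python
# Back-of-the-envelope check of the 8x compression ratio.
WINDOW = 320          # encoder input samples per window
FRAMES = 40           # encoder output frames per window
LEVELS = 4            # RVQ levels, one index per level per frame
BITS_PER_INDEX = 8    # 256-entry codebooks -> 8 bits per index

input_bits = WINDOW * 32                     # raw float32 signal
code_bits = FRAMES * LEVELS * BITS_PER_INDEX
ratio = input_bits / code_bits               # 10240 / 1280 = 8.0
print(ratio)
```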
## Quality Metrics

### Time Domain
| Metric | Mean | Median | P90 |
|---|---|---|---|
| PRD (%) | 10.8485 | 4.1954 | 22.4460 |
| RMSE | 0.0497 | 0.0367 | 0.0776 |
| Cosine Similarity | 0.9703 | 0.9991 | 0.9996 |
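These are standard reconstruction metrics; a minimal NumPy sketch of how they are conventionally defined (we assume PRD is the usual percentage RMS difference — the exact evaluation code is not part of this card):

```python
import numpy as np

def prd(x, x_hat):
    """Percentage RMS difference between original and reconstruction."""
    return 100.0 * np.sqrt(np.sum((x - x_hat) ** 2) / np.sum(x ** 2))

def rmse(x, x_hat):
    """Root-mean-square error."""
    return np.sqrt(np.mean((x - x_hat) ** 2))

def cosine_similarity(x, x_hat):
    """Cosine of the angle between original and reconstruction."""
    return np.dot(x, x_hat) / (np.linalg.norm(x) * np.linalg.norm(x_hat))

# Toy check on a noisy sine (synthetic, not the evaluation data)
rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 8 * np.pi, 320))
x_hat = x + 0.01 * rng.standard_normal(320)
print(prd(x, x_hat), rmse(x, x_hat), cosine_similarity(x, x_hat))
```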
### Spectral
- Band Total Relative Error (median): 0.0336
### Bitrate
- Codec CR (uniform): 8.0x
- Codec CR (learned prior): 8.76x
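The higher learned-prior figure presumably reflects entropy coding the RVQ indices under a learned code distribution: when indices are not uniformly distributed, their empirical entropy drops below the uniform 8 bits per index, raising the effective ratio. A sketch of that calculation (the function and setup are ours, not the card's evaluation code):

```python
import numpy as np

def learned_prior_cr(indices, base_cr=8.0, bits_per_index=8):
    """Effective CR if indices are entropy-coded at their empirical rate."""
    _, counts = np.unique(indices, return_counts=True)
    p = counts / counts.sum()
    entropy = -(p * np.log2(p)).sum()      # bits/index under the prior
    return base_cr * bits_per_index / entropy

# A perfectly uniform code stream gains nothing over fixed 8-bit indices
uniform = np.repeat(np.arange(256), 5)
print(learned_prior_cr(uniform))

# A skewed stream compresses further
skewed = np.array([0] * 90 + [1] * 10)
print(learned_prior_cr(skewed))
```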
## Usage

### Python (compressionkit runtime)
```python
from compressionkit.runtime import RVQCodec

codec = RVQCodec.from_pretrained("Ambiq/compressionkit-ppg-8x")

# Encode: float32 signal → RVQ indices
indices = codec.encode(signal)

# Decode: RVQ indices → reconstructed signal
recon = codec.decode(indices)
```
### Local deployment directory

```python
codec = RVQCodec("path/to/deploy/")
```
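For intuition, residual vector quantization itself fits in a few lines of NumPy: each level quantizes the residual left over by the previous level, and decoding sums the selected entries. This toy sketch uses random codebooks (not the released ones) but mirrors the 4-level, 256-entry, 16-D configuration above:

```python
import numpy as np

def rvq_encode(vectors, codebooks):
    """Greedy residual VQ: quantize, subtract, repeat per level."""
    residual = vectors.astype(np.float32)
    indices = []
    for cb in codebooks:  # one (256, 16) codebook per level
        d = np.linalg.norm(residual[:, None, :] - cb[None, :, :], axis=-1)
        idx = d.argmin(axis=1)           # nearest entry per frame
        indices.append(idx)
        residual = residual - cb[idx]    # pass the remainder down a level
    return np.stack(indices, axis=1)     # (n_frames, n_levels)

def rvq_decode(indices, codebooks):
    """Reconstruction is the sum of the selected entry at each level."""
    return sum(cb[indices[:, lvl]] for lvl, cb in enumerate(codebooks))

# Toy example: 40 frames of 16-D features, 4 levels of 256 entries
rng = np.random.default_rng(0)
codebooks = [rng.standard_normal((256, 16)).astype(np.float32) * 0.5 ** lvl
             for lvl in range(4)]
frames = rng.standard_normal((40, 16)).astype(np.float32)
codes = rvq_encode(frames, codebooks)
recon = rvq_decode(codes, codebooks)
```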
## Files
| File | Description |
|---|---|
| `encoder_int8.tflite` | INT8 quantized encoder (on-device) |
| `encoder.h` | C header for encoder |
| `decoder_float32.tflite` | Float32 decoder (server-side evaluation) |
| `decoder_int8.tflite` | INT8 decoder (optional, on-device) |
| `codebook.npz` | RVQ codebook tables |
| `codebook.h` | C header for codebook |
| `config.json` | Deployment manifest |
| `sample_stimulus.npz` | Synthetic test data |
| `quality_scorecard.json` | Full evaluation metrics |
## Dataset & License
Training data: MESA (NSRR restricted). Sample data uses synthetic physiokit waveforms only; no patient data is redistributed.

Model weights are released under the Ambiq Model Weights License; deployment is restricted to Ambiq silicon devices. See LICENSE-MODEL-WEIGHTS.md for full terms.
## Citation

```bibtex
@software{compressionkit,
  author = {Ambiq AI},
  title  = {compressionKIT: Signal Compression for Edge AI},
  url    = {https://github.com/AmbiqAI/compressionkit}
}
```