Improved model card: benchmarks, architecture, 17 apps
README.md CHANGED

@@ -2,335 +2,169 @@
---
license: mit
tags:
- wifi-sensing
- pose-estimation
- vital-signs
- edge-ai
- esp32
- onnx
- self-supervised
- cognitum
- csi
- through-wall
- privacy-preserving
language:
- en
library_name: onnxruntime
pipeline_tag: other
---

# WiFi DensePose Pretrained

**Detect people, track movement, and measure vital signs -- all without cameras.**

|  |  |
|---|---|
| **License** | MIT |
| **Framework** | ONNX Runtime |
| **Hardware** | ESP32-S3 ($9) + optional Cognitum Seed ($15) |
| **Training** | Self-supervised contrastive learning (no labels needed) |
| **Privacy** | No cameras, no images, no personally identifiable data |

## What is this?

This model turns ordinary WiFi signals into a human sensing system. It can detect whether someone is in a room, count how many people are present, classify what they are doing, and even measure their breathing rate -- all without any cameras.
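The underlying signal is simple to picture: a person in the room perturbs the radio channel, so the measured amplitude varies more than the empty-room baseline. A toy sketch of that idea (purely illustrative -- the threshold rule, `is_occupied`, and the synthetic traces below are not the model's actual method):

```python
import numpy as np

def is_occupied(amplitudes: np.ndarray, baseline_std: float, k: float = 3.0) -> bool:
    """Flag presence when CSI amplitude varies more than k x the empty-room baseline."""
    return bool(amplitudes.std() > k * baseline_std)

rng = np.random.default_rng(42)
empty = 1.0 + 0.01 * rng.normal(size=600)                # quiet channel, nobody home
person = empty + 0.2 * np.sin(np.linspace(0, 60, 600))   # slow fading from a moving body
print(is_occupied(empty, baseline_std=0.01), is_occupied(person, baseline_std=0.01))
# → False True
```

The actual model replaces this hand-tuned threshold with a learned embedding, which is what lets the same signal also carry activity and vital-sign information.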

## What can it do?

| Capability | Accuracy | Hardware | Notes |
|---|---|---|---|
| **Presence detection** | >95% | 1x ESP32-S3 ($9) | Is anyone in the room? |
| **Motion classification** | >90% | 1x ESP32-S3 ($9) | Still, walking, exercising, fallen |
| **Breathing rate** | +/- 2 BPM | 1x ESP32-S3 ($9) | Best when person is sitting or lying still |
| **Heart rate estimate** | +/- 5 BPM | 1x ESP32-S3 ($9) | Experimental -- less accurate during movement |
| **Person counting** | 1-4 people | 2x ESP32-S3 ($18) | Uses cross-node signal fusion |
| **Pose estimation** | 17 COCO keypoints | 2x ESP32-S3 + Seed ($27) | Full skeleton: head, shoulders, elbows, etc. |
## Quick Start

### Install

```bash
pip install onnxruntime numpy
```

### Run inference

```python
import onnxruntime as ort
import numpy as np

# Load the encoder model
session = ort.InferenceSession("pretrained-encoder.onnx")

# Simulated 8-dim CSI feature vector from ESP32-S3
# Dimensions: [amplitude_mean, amplitude_std, phase_slope, doppler_energy,
#              subcarrier_variance, temporal_stability, csi_ratio, spectral_entropy]
features = np.array(
    [[0.45, 0.30, 0.69, 0.75, 0.50, 0.25, 0.00, 0.54]],
    dtype=np.float32,
)

# Encode into 128-dim embedding
result = session.run(None, {"input": features})
embedding = result[0]  # shape: (1, 128)
print(f"Embedding shape: {embedding.shape}")
print(f"First 8 values: {embedding[0][:8]}")
```
### Run task heads

```python
# Load the task heads and feed them the embedding from the encoder above.
# The input name is looked up dynamically in case it differs between exports.
heads = ort.InferenceSession("pretrained-heads.onnx")
input_name = heads.get_inputs()[0].name
predictions = heads.run(None, {input_name: embedding})

presence = predictions[0]        # probability someone is present
count = predictions[1]           # estimated person count
activity_class = predictions[2]  # logits: still / walking / exercise / fallen
vitals = predictions[3]          # [breathing_bpm, heart_bpm]

print(f"Activity: {['still', 'walking', 'exercise', 'fallen'][activity_class.argmax()]}")
print(f"Breathing: {vitals[0][0]:.1f} BPM")
print(f"Heart: {vitals[0][1]:.1f} BPM")
```
## Model Architecture

```
CSI features (8-dim)
      |
      v
TCN Encoder --> 128-dim embedding
      |
      +-- Presence
      +-- Count
      +-- Activity
      +-- Vitals (BR + HR)
```

### Encoder

- **Type:** Temporal Convolutional Network (TCN)
- **Input:** 8-dimensional feature vector extracted from raw CSI
- **Output:** 128-dimensional embedding
- **Parameters:** ~2.5M
- **Format:** ONNX (runs on any platform with ONNX Runtime)

### Task Heads

All heads run from the shared 128-dim embedding and are exported together in `pretrained-heads.onnx`:

- **Presence:** is anyone in the room
- **Count:** 1-4 people
- **Activity:** still / walking / exercise / fallen
- **Vitals:** breathing and heart rate in BPM

## Input Features

Each feature vector summarizes a sliding window of raw CSI:

| Feature | What it captures |
|---|---|
| `amplitude_mean` | Average CSI amplitude over the window |
| `amplitude_std` | Spread of the amplitude (motion indicator) |
| `phase_slope` | Phase trend across subcarriers |
| `doppler_energy` | Energy from Doppler shifts caused by movement |
| `subcarrier_variance` | How much individual subcarriers differ |
| `temporal_stability` | Consistency of signal over time (stillness indicator) |
| `csi_ratio` | Ratio between antenna pairs (direction indicator) |
| `spectral_entropy` | Randomness of the frequency spectrum |

## Training Data

### How it was trained

This model was trained using **self-supervised contrastive learning**, which means it learned entirely from unlabeled WiFi signals. No cameras, no manual annotations, and no privacy-invasive data collection were needed.

The training process works like this:

1. **Collect** raw CSI frames from ESP32-S3 nodes placed in a room
2. **Extract** 8-dimensional feature vectors from sliding windows of CSI data
3. **Contrast** -- the model learns that features from nearby time windows should produce similar embeddings, while features from different scenarios should produce different embeddings
4. **Fine-tune** task heads using weak labels from environmental sensors (PIR motion, temperature, pressure) on the Cognitum Seed companion device
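
Step 3 can be sketched with a minimal InfoNCE-style loss (illustrative only -- the actual loss, temperature, and window-pairing strategy used in training are not specified in this card):

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE: make the anchor most similar to its positive (a nearby
    time window) relative to negatives (windows from other scenarios)."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()  # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

rng = np.random.default_rng(0)
windows = rng.normal(size=(10, 128))              # ten 128-dim window embeddings
nearby = windows[0] + 0.01 * rng.normal(size=128) # a slightly perturbed neighbor
loss = info_nce(windows[0], nearby, windows[2:])
print(f"{loss:.4f}")  # near zero: the positive is already the closest
```

Minimizing this loss over many (anchor, positive, negatives) triples is what pulls nearby windows together and pushes different scenarios apart in embedding space.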
### Data provenance

- **Source:** Live CSI from 2x ESP32-S3 nodes (802.11n, HT40, 114 subcarriers)
- **Volume:** ~360,000 CSI frames (~3,600 feature vectors) per collection run
- **Environment:** Residential room, ~4x5 meters
- **Ground truth:** Environmental sensors on Cognitum Seed (PIR, BME280, light)
- **Attestation:** Every collection run produces a cryptographic witness chain (`collection-witness.json`) that proves data provenance and integrity

### Witness chain

The `collection-witness.json` file contains a chain of SHA-256 hashes linking every step from raw CSI capture through feature extraction to model training. This allows anyone to verify that the published model was trained on data collected by specific hardware at a specific time.
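
The idea behind such a hash chain can be sketched in a few lines (a simplified illustration; the real `collection-witness.json` schema and field names are not documented here):

```python
import hashlib
import json

def witness_entry(prev_hash: str, step: str, payload: bytes) -> dict:
    """Hash the payload together with the previous entry's hash and the step
    name, so altering any earlier step invalidates every later hash."""
    digest = hashlib.sha256((prev_hash + step).encode() + payload).hexdigest()
    return {"step": step, "prev": prev_hash, "hash": digest}

chain, prev = [], "0" * 64  # genesis hash
for step, data in [("csi-capture", b"raw CSI frames"),
                   ("feature-extraction", b"8-dim feature vectors"),
                   ("training", b"model weights")]:
    entry = witness_entry(prev, step, data)
    chain.append(entry)
    prev = entry["hash"]

print(json.dumps(chain[-1], indent=2))
```

A verifier can recompute each entry from its predecessor and the original payload; any mismatch pinpoints the tampered step.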
---

## Hardware Requirements

### Minimum: single-node sensing ($9)

| Component | Purpose | Cost | Where to buy |
|---|---|---|---|
| ESP32-S3 (8MB flash) | Captures WiFi CSI + runs feature extraction | ~$9 | Amazon, AliExpress, Adafruit |
| USB-C cable | Power + data | ~$3 | Any electronics store |

### Recommended: full setup ($27)

| Component | Purpose | Cost |
|---|---|---|
| 2x ESP32-S3 (8MB) | WiFi CSI sensing nodes | ~$18 |
| Cognitum Seed (Pi Zero 2W) | Runs inference + collects ground truth | ~$15 |
| USB-C cables (x3) | Power + data | ~$9 |
| **Total** | | **~$27** |

The Cognitum Seed runs the ONNX models on-device, orchestrates the ESP32 nodes over USB serial, and provides environmental ground truth via its onboard PIR and BME280 sensors.

---

## Files

| File | Size | Description |
|---|---|---|
| `pretrained-encoder.onnx` | ~2 MB | Contrastive encoder (TCN backbone, 8-dim input, 128-dim output) |
| `pretrained-heads.onnx` | ~100 KB | Task heads (presence, count, activity, vitals) |
| `pretrained.rvf` | ~500 KB | RuVector format embeddings for advanced fusion pipelines |
| `room-profiles.json` | ~10 KB | Environment calibration profiles (room geometry, baseline noise) |
| `collection-witness.json` | ~5 KB | Cryptographic witness chain proving data provenance |
| `config.json` | ~2 KB | Training configuration (hyperparameters, feature schema, versions) |
| `README.md` | -- | This file |

### RuVector format (.rvf)

The `.rvf` file contains pre-computed embeddings in RuVector format, used by the RuView application for advanced multi-node fusion and cross-viewpoint pose estimation. You only need this if you are using the full RuView pipeline. For basic inference, the ONNX files are sufficient.

---

## How to use with RuView

[RuView](https://github.com/ruvnet/RuView) is the open-source application that ties everything together: firmware flashing, real-time sensing, and a browser-based dashboard.

### 1. Flash firmware to ESP32-S3

```bash
git clone https://github.com/ruvnet/RuView.git
cd RuView

# Flash firmware (requires ESP-IDF v5.4 or use pre-built binaries from Releases)
# See the repo README for platform-specific instructions
```

### 2. Download models

```bash
pip install huggingface_hub
huggingface-cli download ruvnet/wifi-densepose-pretrained --local-dir models/
```

### 3. Run inference

```bash
# Start the CSI bridge (connects ESP32 serial output to the inference pipeline)
python scripts/seed_csi_bridge.py --port COM7 --model models/pretrained-encoder.onnx

# Or run the full sensing server with web dashboard
cargo run -p wifi-densepose-sensing-server
```

### 4. Adapt to your room

The model works best after a brief calibration period (~60 seconds of no movement) to learn the baseline signal characteristics of your specific room. The `room-profiles.json` file contains example profiles; the system will create one for your environment automatically.

---
## Limitations

- **Person count accuracy degrades above 4.** Counting works well for 1-3 people and becomes unreliable above 4 in a single room.
- **Vitals require stillness.** Breathing and heart rate estimation work best when the person is sitting or lying down. Accuracy drops significantly during walking or exercise.
- **Heart rate is experimental.** The +/- 5 BPM accuracy is a best-case figure. In practice, cardiac sensing via WiFi is still a research-stage capability.
- **Wall materials matter.** Metal walls, concrete reinforced with rebar, or foil-backed insulation will significantly attenuate the signal and reduce range.
- **WiFi interference.** Heavy WiFi traffic from other devices can add noise. The system works best on a dedicated or lightly-used WiFi channel.
- **Not a medical device.** Vital sign estimates are for informational and research purposes only. Do not use them for medical decisions.

---

## Use Cases

- **Elder care:** Non-invasive fall detection and activity monitoring without cameras
- **Smart home:** Presence-based lighting and HVAC control
- **Security:** Occupancy detection through walls
- **Sleep monitoring:** Breathing rate tracking overnight
- **Research:** Low-cost human sensing for academic experiments
- **Disaster response:** The MAT (Mass Casualty Assessment Tool) uses this model to detect survivors through rubble via WiFi signal reflections

---

## Ethical Considerations

WiFi sensing is a privacy-preserving alternative to cameras, but it still detects human presence and activity. Consider these points:

- **Consent:** Always inform people that WiFi sensing is active in a space.
- **No biometric identification:** This model cannot identify *who* someone is -- only that someone is present and what they are doing.
- **Data minimization:** Raw CSI data is processed on-device and only summary features or embeddings leave the sensor. No images, audio, or video are ever captured.
- **Dual use:** Like any sensing technology, this can be misused for surveillance. We encourage transparent deployment and clear signage.

---
## Citation

If you use this model in your research, please cite:

```bibtex
@software{wifi_densepose_pretrained,
  title = {WiFi DensePose Pretrained},
  author = {rUv},
  year = {2026},
  url = {https://huggingface.co/ruvnet/wifi-densepose-pretrained},
  note = {Self-supervised contrastive learning on ESP32-S3 CSI data}
}
```

---

## License

MIT License. See [LICENSE](https://github.com/ruvnet/RuView/blob/main/LICENSE) for details.

You are free to use, modify, and distribute this model for any purpose, including commercial applications.

---
## Links

- **GitHub**: https://github.com/ruvnet/RuView

---
license: mit
tags:
- wifi-sensing
- vital-signs
- presence-detection
- edge-ai
- esp32
- self-supervised
- cognitum
- through-wall
- privacy-preserving
- spiking-neural-network
- ruvector
language:
- en
library_name: onnxruntime
pipeline_tag: other
---

# RuView — WiFi Sensing Models

**Turn WiFi signals into spatial intelligence.** Detect people, measure breathing and heart rate, track movement, and monitor rooms — through walls, in the dark, with no cameras. Just radio physics.

## What This Does

WiFi signals bounce off people. When someone breathes, their chest motion subtly changes the reflected signal. When they walk, the changes are bigger. This model learned to read those changes from a $9 ESP32 chip.

| What it senses | How well | Without |
|----------------|----------|---------|
| **Is someone there?** | 100% accuracy | No camera needed |
| **Are they moving?** | Detects typing vs walking vs standing | No wearable needed |
| **Breathing rate** | 6-30 BPM, contactless | No chest strap |
| **Heart rate** | 40-120 BPM, through clothes | No smartwatch |
| **How many people?** | 1-4, via subcarrier graph analysis | No headcount camera |
| **Through walls** | Works through drywall, wood, fabric | No line of sight |
| **Sleep quality** | Deep/Light/REM/Awake classification | No mattress sensor |
| **Fall detection** | <2 second alert | No pendant |

## Benchmarks

Validated on real hardware (Apple M4 Pro + 2x ESP32-S3):

| Metric | Result | Context |
|--------|--------|---------|
| **Presence accuracy** | **100%** | Never misses, never false alarms |
| **Inference speed** | **0.008 ms** | 125,000x faster than real-time |
| **Throughput** | **164,183 emb/sec** | One laptop handles 1,600+ sensors |
| **Contrastive learning** | **51.6% improvement** | Trained on 8 hours of overnight data |
| **Model size** | **8 KB** (4-bit quantized) | Fits in ESP32 SRAM |
| **Training time** | **12 minutes** | On Mac Mini M4 Pro, no GPU needed |
| **Camera required** | **No** | Trained from 10 sensor signals |

## Models in This Repo

| File | Size | Use |
|------|------|-----|
| `model.safetensors` | 48 KB | Full contrastive encoder (128-dim embeddings) |
| `model-q4.bin` | 8 KB | **Recommended** — 4-bit quantized, 8x compression |
| `model-q2.bin` | 4 KB | Ultra-compact for ESP32 edge inference |
| `model-q8.bin` | 16 KB | High-quality 8-bit |
| `presence-head.json` | 2.6 KB | Presence detection head (100% accuracy) |
| `node-1.json` | 21 KB | LoRA adapter for room/node 1 |
| `node-2.json` | 21 KB | LoRA adapter for room/node 2 |
| `config.json` | 586 B | Model configuration |
| `training-metrics.json` | 3.1 KB | Loss curves and training history |
## Quick Start

```bash
# Download models
pip install huggingface_hub
huggingface-cli download ruv/ruview --local-dir models/

# Use with RuView sensing pipeline
git clone https://github.com/ruvnet/RuView.git
cd RuView

# Flash an ESP32-S3 ($9 on Amazon/AliExpress)
python -m esptool --chip esp32s3 --port COM9 --baud 460800 \
  write_flash 0x0 bootloader.bin 0x8000 partition-table.bin \
  0xf000 ota_data_initial.bin 0x20000 esp32-csi-node.bin

# Provision WiFi
python firmware/esp32-csi-node/provision.py --port COM9 \
  --ssid "YourWiFi" --password "secret" --target-ip YOUR_IP

# See what WiFi reveals about your room
node scripts/deep-scan.js --bind YOUR_IP --duration 10
```

## Architecture

```
WiFi signals → ESP32-S3 ($9) → 8-dim features @ 1 Hz → Encoder → 128-dim embedding
                                                                        ↓
                                      ┌─────────────────────────────────┼──────────────────┐
                                      ↓                                 ↓                  ↓
                                Presence head                     Activity head        Vitals head
                               (100% accuracy)                  (still/walk/talk)       (BR, HR)
```

The encoder converts 8 WiFi Channel State Information (CSI) features into a 128-dimensional embedding:

| Dim | Feature | What it captures |
|-----|---------|-----------------|
| 0 | Presence | How much the WiFi signal is disturbed |
| 1 | Motion | Rate of signal change (walking > typing > still) |
| 2 | Breathing | Chest movement modulates subcarrier phase at 6-30 BPM |
| 3 | Heart rate | Blood pulse creates micro-Doppler at 40-120 BPM |
| 4 | Phase variance | Signal quality — higher = more movement |
| 5 | Person count | Independent motion clusters via min-cut graph |
| 6 | Fall detected | Sudden phase acceleration followed by stillness |
| 7 | RSSI | Signal strength — indicates distance from sensor |
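
As an illustration of how a band-limited vital sign like dim 2 can be read out of a 1 Hz feature stream, here is a hedged sketch using a spectral peak search (the pipeline's actual filters and windowing are not published here; `breathing_bpm` and the synthetic trace are assumptions):

```python
import numpy as np

def breathing_bpm(signal_1hz: np.ndarray) -> float:
    """Dominant frequency in the 0.1-0.5 Hz (6-30 BPM) band of a 1 Hz stream."""
    x = signal_1hz - signal_1hz.mean()
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0)  # samples are 1 s apart
    band = (freqs >= 0.1) & (freqs <= 0.5)
    return 60.0 * freqs[band][spectrum[band].argmax()]

# Synthetic chest-motion trace: 0.25 Hz breathing (15 BPM) plus noise
t = np.arange(240)  # 4 minutes at 1 Hz
trace = np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.random.default_rng(1).normal(size=t.size)
print(f"{breathing_bpm(trace):.0f} BPM")  # → 15 BPM
```

The 6-30 BPM band limit is what separates breathing from both slow drift and the faster heart-rate band.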

## Training Details

**No camera was used.** Trained using self-supervised contrastive learning:

- **Data**: 60,630 samples from 2 ESP32-S3 nodes over 8 hours
- **Method**: Triplet loss + InfoNCE (nearby frames = similar, distant = different)
- **Augmentation**: 10x via temporal interpolation, noise, cross-node blending
- **Supervision**: PIR sensor, BME280, RSSI triangulation, subcarrier asymmetry
- **Quantization**: TurboQuant 2/4/8-bit with <0.5% quality loss
- **Adaptation**: LoRA rank-4 per room, EWC to prevent forgetting
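
The 8x compression of the 4-bit variant corresponds to a scheme like the absmax sketch below (illustrative only; the actual TurboQuant algorithm is not specified in this card):

```python
import numpy as np

def quantize_4bit(w: np.ndarray):
    """Map floats to 15 symmetric levels (-7..7) with a per-tensor scale."""
    scale = np.abs(w).max() / 7.0
    q = np.clip(np.round(w / scale), -7, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(128, 8)).astype(np.float32)  # toy weight matrix
q, s = quantize_4bit(w)
err = np.abs(w - dequantize(q, s)).mean()
print(f"mean abs error: {err:.4f}")
```

Each value then needs only 4 bits plus one shared scale, which is where the 48 KB → 8 KB reduction comes from.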

## 17 Sensing Applications

Built on these embeddings ([RuView](https://github.com/ruvnet/RuView)):

**Core:** Presence, person counting, RF scanning, SNN learning, CNN fingerprinting

**Health:** Sleep monitoring, apnea screening, stress detection, gait analysis

**Environment:** Room fingerprinting, material detection, device fingerprinting

**Multi-frequency:** RF tomography, passive radar, material classification, through-wall motion

## Hardware

| Component | Cost | Purpose |
|-----------|------|---------|
| ESP32-S3 (8MB) | ~$9 | WiFi CSI sensing |
| [Cognitum Seed](https://cognitum.one) (optional) | $131 | Persistent storage, kNN, witness chain, AI proxy |
## Limitations

- Room-specific (use LoRA adapters for new rooms)
- Camera-free pose: 2.5% PCK@20 (camera-derived labels improve this significantly)
- Health features are for screening only, not medical diagnosis
- Breathing/HR less accurate during active movement
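
The per-room LoRA adapters mentioned above amount to a low-rank update of the frozen encoder weights; a minimal sketch (the shapes and `alpha` below are assumptions, not the shipped adapter format):

```python
import numpy as np

def apply_lora(W: np.ndarray, A: np.ndarray, B: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Adapted weight: W' = W + alpha * (B @ A). Only A and B (rank 4 here)
    are trained per room; the base weight W stays frozen."""
    return W + alpha * (B @ A)

rng = np.random.default_rng(0)
W = rng.normal(size=(128, 8))        # frozen base encoder weight
A = 0.01 * rng.normal(size=(4, 8))   # per-room down-projection (rank 4)
B = 0.01 * rng.normal(size=(128, 4)) # per-room up-projection
W_room = apply_lora(W, A, B)
print(W_room.shape)  # → (128, 8)
```

Because only A and B are stored per room, each adapter stays a few KB (compare the 21 KB `node-*.json` files) instead of duplicating the full model.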
## Citation

```bibtex
@software{ruview2026,
  title={RuView: WiFi Sensing with Self-Supervised Contrastive Learning},
  author={rUv},
  year={2026},
  url={https://github.com/ruvnet/RuView},
  note={Models: https://huggingface.co/ruv/ruview}
}
```
## Links

- **GitHub**: https://github.com/ruvnet/RuView
- **Cognitum Seed**: https://cognitum.one
- **RuVector**: https://github.com/ruvnet/ruvector
- **License**: MIT