---
license: cc-by-nc-4.0
library_name: pytorch
tags:
- time-series
- ecg
- representation-learning
---
# LeNEPA encoder (balanced, encoder-only)
This repository contains an **encoder-only** LeNEPA checkpoint, trained on the Aionoscope balanced dataset and exported to `safetensors` for minimal inference.
What is included:
- `lenepa_encoder.safetensors` — **encoder weights only** (no projector, no training/probe state)
- `inference.py` — minimal end-to-end inference (no Hydra, no W&B)
- `lenepa_encoder_config.json` — fixed IO + architecture contract
- `provenance.json` — original `.pt` checkpoint path + W&B URL
## IO contract
Inputs:
- `x_waveform`: `torch.float32` with shape `[B, 1, 5000]`
- sampling frequency: `500` Hz
- channels: `["I"]` (so `C=1`)
Outputs:
- `patch_tokens`: `[B, 200, 192]` (post-final-norm tokens)
- `embedding`: `[B, 192]` (mean pooled over tokens)
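The input side of this contract can be sketched as a lightweight shape check. The helper below is hypothetical (it is not part of `inference.py`); it only encodes the `[B, 1, 5000]` requirement stated above:

```python
# Input contract (from the model card): [B, 1, 5000] float32 at 500 Hz, lead I only.
EXPECTED_CHANNELS = 1   # single ECG lead: ["I"]
EXPECTED_LENGTH = 5000  # 10 s at 500 Hz

def check_waveform_shape(shape):
    """Return True if `shape` matches the [B, 1, 5000] input contract."""
    if len(shape) != 3:
        return False
    batch, channels, length = shape
    return batch >= 1 and channels == EXPECTED_CHANNELS and length == EXPECTED_LENGTH

# A batch of two lead-I recordings passes; a 12-lead batch does not.
assert check_waveform_shape((2, 1, 5000))
assert not check_waveform_shape((2, 12, 5000))
```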
## Usage
Smoke test (loads `lenepa_encoder.safetensors` from the current directory and prints output shapes):
```bash
python inference.py
```
Programmatic usage:
```python
from pathlib import Path

import torch

from inference import encode_lenepa, load_lenepa_encoder

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = load_lenepa_encoder(weights_path=Path("lenepa_encoder.safetensors"), device=device)

x = torch.randn(2, 1, 5000, device=device, dtype=torch.float32)  # [B, C, L]
out = encode_lenepa(model=model, x_waveform=x)
print(out.embedding.shape)  # [B, 192] per the IO contract above
```
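Per the IO contract, `embedding` is simply the mean of `patch_tokens` over the token axis. A toy sketch of that relationship, using made-up dimensions (`T=4, D=3` instead of `200, 192`) and plain Python rather than the repo's API:

```python
# Toy stand-in for patch_tokens with shape [B=1, T=4, D=3].
patch_tokens = [[[1.0, 2.0, 3.0],
                 [3.0, 2.0, 1.0],
                 [2.0, 2.0, 2.0],
                 [2.0, 2.0, 2.0]]]

def mean_pool(tokens):
    """Mean over the token axis: [B, T, D] -> [B, D]."""
    return [
        [sum(tok[d] for tok in sample) / len(sample) for d in range(len(sample[0]))]
        for sample in tokens
    ]

embedding = mean_pool(patch_tokens)  # [B, D]; here [[2.0, 2.0, 2.0]]
```

In the real checkpoint this pooling happens inside `encode_lenepa`, so `out.embedding` already has shape `[B, 192]`.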