---
license: mit
library_name: minerva-ml
tags:
  - human-activity-recognition
  - self-supervised-learning
  - time-series
  - sensor-data
  - smartphone-har
datasets:
  - daghar
metrics:
  - accuracy
---

# Benchmarking Encoders and SSL for Smartphone-Based HAR

This repository hosts the checkpoints of the best models of the benchmark study: "Benchmarking Encoders and Self-Supervised Learning for Smartphone-Based Human Activity Recognition", accepted for publication in IEEE Access (2026).

## Project Resources

- Project Website
- Paper

## Model Description

This project provides a large-scale evaluation of 6 encoders combined with 4 Self-Supervised Learning (SSL) techniques (TF-C, TNC, LFR, and DIET).

- **Developed by:** Hub of Artificial Intelligence and Cognitive Architectures (H.IAAC), University of Campinas.
- **Model Type:** Time-series classification (sensor-based).
- **Architecture:** Supports ResNet-SE-5, CNN-PFF, and others via the minerva-ml library.
- **SSL Paradigms:** TF-C, TNC, LFR, and DIET.

## How to Get Started

You can easily load these models using the minerva-ml framework. We currently provide the best-performing checkpoints:

- **LFR** pretrained on MotionSense and fine-tuned on MotionSense (achieves 97.5% accuracy)
- **TF-C** pretrained on MotionSense with a frozen-encoder refinement on UCI (demonstrates TF-C backbones)

More models are coming soon.

## Prerequisites

```bash
pip install minerva-ml huggingface_hub
```

## Loading a Specific Checkpoint

```python
from huggingface_hub import hf_hub_download
from minerva.models.nets.base import SimpleSupervisedModel
from minerva.models.nets.time_series.cnns import CNN_PF_Backbone
from minerva.models.ssl.tfc import TFC_Backbone
import torch

# 1. Download the checkpoint weights from the Hub
checkpoint_path = hf_hub_download(
    repo_id="GustavoLuz-Projects/test_model_HAR",
    filename="best_ms_lfr_ts2vec_ft.ckpt"
)
```
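Once downloaded, the checkpoint can be restored with `torch.load` followed by `load_state_dict`. The minimal sketch below assumes a standard PyTorch Lightning-style `.ckpt` whose weights sit under a `state_dict` key (an assumption about the file layout, not confirmed by this repository); it uses a tiny stand-in module and a locally saved file instead of the actual minerva model, so it runs without the download.

```python
import os
import tempfile

import torch
import torch.nn as nn


# Stand-in for the real minerva backbone; the actual class and layer
# names depend on the checkpoint you downloaded (hypothetical example).
class TinyBackbone(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv1d(6, 8, kernel_size=3)  # 6 sensor channels in


model = TinyBackbone()

# Simulate a Lightning-style checkpoint file; in practice this would be
# the checkpoint_path returned by hf_hub_download above.
ckpt_file = os.path.join(tempfile.mkdtemp(), "demo.ckpt")
torch.save({"state_dict": model.state_dict()}, ckpt_file)

# 2. Restore the weights (the step that would follow hf_hub_download).
checkpoint = torch.load(ckpt_file, map_location="cpu")
missing, unexpected = model.load_state_dict(
    checkpoint["state_dict"], strict=False
)
print(sorted(checkpoint["state_dict"].keys()))
```

`strict=False` is used here so the snippet also tolerates checkpoints that carry extra keys (e.g. projection heads dropped at fine-tuning time); with the real minerva model class you may prefer the default strict loading.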

## Training Data

The models were trained and benchmarked on the DAGHAR datasets, standardized to a 6-channel sensor input (accelerometer and gyroscope). We used the standardized view of the DAGHAR dataset, introduced in the following paper:

> Napoli, O., Duarte, D., Alves, P., Soto, D.H.P., de Oliveira, H.E., Rocha, A., Boccato, L. and Borin, E., 2024.
> A benchmark for domain adaptation and generalization in smartphone-based human activity recognition.
> Scientific Data, 11(1), p.1192.
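As a concrete illustration of the 6-channel input layout described above, the sketch below windows synthetic accelerometer and gyroscope streams into `(n_windows, 6, window_length)` segments. The 60-sample window and 50% overlap are illustrative assumptions, not the benchmark's actual preprocessing parameters.

```python
import numpy as np


def window_6ch(acc, gyro, win=60, step=30):
    """Stack 3-axis accelerometer + 3-axis gyroscope streams, then
    slice them into overlapping windows of shape (n_windows, 6, win)."""
    x = np.concatenate([acc, gyro], axis=0)  # (6, n_samples)
    starts = range(0, x.shape[1] - win + 1, step)
    return np.stack([x[:, s:s + win] for s in starts])


rng = np.random.default_rng(0)
acc = rng.standard_normal((3, 300))   # synthetic 3-axis accelerometer
gyro = rng.standard_normal((3, 300))  # synthetic 3-axis gyroscope
windows = window_6ch(acc, gyro)
print(windows.shape)  # (9, 6, 60)
```

Each window can then be fed to the encoder as a 6-channel time-series sample, matching the `Conv1d(6, ...)` style input the backbones expect.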

## Citation

If you use these models or the benchmark results, please cite:

```bibtex
@article{daluz2026benchmarking,
  title={Benchmarking Encoders and Self-Supervised Learning for Smartphone-Based Human Activity Recognition},
  author={da Luz, Gustavo P. C. P. and Soto, Darlinne H. P. and Napoli, Ot\'avio O. and Rocha, Anderson and Boccato, Levy and Borin, Edson},
  journal={IEEE Access},
  year={2026},
  publisher={IEEE}
}
```