---
license: apache-2.0
pipeline_tag: text-classification
library_name: transformers
---

# Menta: A Small Language Model for On-Device Mental Health Prediction

Menta is an optimized small language model (SLM) fine-tuned for multi-task mental health prediction from social media data. As presented in the paper *Menta: A Small Language Model for On-Device Mental Health Prediction*, it addresses the need for privacy-preserving and efficient mental health assessment on mobile devices.

*Figure: Menta workflow*

Privacy-Preserving Mental Health Assessment Using Small Language Models on Mobile Devices

## Overview

Menta is an optimized small language model for multi-task mental health prediction from social media. It is trained with a LoRA-based cross-dataset regimen and a balanced-accuracy-oriented objective across six classification tasks. Compared with nine state-of-the-art SLM baselines, Menta delivers an average improvement of 15.2% over the best SLM without fine-tuning, and it surpasses 13B-parameter large language models on depression and stress while remaining about 3.25× smaller. We also demonstrate real-time on-device inference on an iPhone 15 Pro Max using about 3 GB of RAM, enabling scalable and privacy-preserving mental health monitoring.
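The balanced-accuracy objective mentioned above can be illustrated with a short sketch (the labels and data here are hypothetical, not from the paper's datasets): balanced accuracy is the unweighted mean of per-class recall, so a model cannot score well simply by always predicting the majority class — a common failure mode on imbalanced mental-health labels.

```python
from collections import defaultdict

def balanced_accuracy(y_true, y_pred):
    """Balanced accuracy: the unweighted mean of per-class recall.

    Unlike plain accuracy, this does not reward over-predicting the
    majority class, which matters for imbalanced label distributions.
    """
    per_class = defaultdict(lambda: [0, 0])  # label -> [correct, total]
    for t, p in zip(y_true, y_pred):
        per_class[t][1] += 1
        if t == p:
            per_class[t][0] += 1
    recalls = [correct / total for correct, total in per_class.values()]
    return sum(recalls) / len(recalls)

# Toy example: 8 "none" vs 2 "depressed" labels (illustrative only).
y_true = ["none"] * 8 + ["depressed"] * 2
y_pred = ["none"] * 8 + ["depressed", "none"]
print(balanced_accuracy(y_true, y_pred))  # (1.0 + 0.5) / 2 = 0.75
```

Note that plain accuracy on the same toy example would be 0.9, masking the missed minority-class case.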

## Key Features

- **Privacy-First**: All processing happens on-device; no data leaves your device
- **Mobile-Optimized**: Designed specifically for iOS devices with efficient resource usage
- **Multi-Dimensional Analysis**: Evaluates depression, stress, and suicidal thoughts
- **Real-Time Monitoring**: Provides immediate in-situ predictions
- **High Accuracy**: Fine-tuned SLMs for mental health assessment tasks

## Technical Stack

### Deployment

- **Language**: Swift, SwiftUI
- **Platform**: iOS 15.0+
- **ML Framework**: llama.cpp (C++ inference)
- **Model Format**: GGUF (quantized models)
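As a rough sanity check on the ~3 GB on-device figure, the weight-only footprint of a GGUF model can be estimated from parameter count and quantization bit-width. This is a back-of-the-envelope sketch: the ~4B parameter count (13B / 3.25) and the effective bits per weight of the named llama.cpp quantization schemes are assumptions, and KV cache plus runtime overhead are ignored, so real usage is higher.

```python
def gguf_weight_footprint_gb(n_params_billions, bits_per_weight):
    """Estimate weight-only memory for a quantized model, in GB.

    Ignores KV cache, activations, and runtime overhead, so the
    actual resident memory on device will be larger.
    """
    total_bytes = n_params_billions * 1e9 * bits_per_weight / 8
    return total_bytes / 1e9

# A ~4B-parameter SLM (assumed size) at approximate effective bit-widths
# of common llama.cpp quantization presets (approximate values):
for name, bits in [("Q4_K_M", 4.5), ("Q5_K_M", 5.5), ("Q8_0", 8.5)]:
    print(f"{name}: ~{gguf_weight_footprint_gb(4.0, bits):.2f} GB")
```

Under these assumptions, a ~4-5 bit quantization of a ~4B model lands comfortably within a ~3 GB RAM budget once overhead is added.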

### Training

- **Language**: Python 3.8+
- **Frameworks**: PyTorch, Transformers
- **Techniques**: LoRA fine-tuning, multi-task learning
- **Base Models**: Small Language Models (SLMs)
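The appeal of LoRA fine-tuning for this setup can be seen from a parameter-count sketch (the layer dimensions and rank below are hypothetical, not Menta's actual configuration): LoRA freezes the base weight W (d_out × d_in) and learns only a low-rank update ΔW = B·A, with A of shape (r × d_in) and B of shape (d_out × r).

```python
def lora_trainable_params(d_in, d_out, r):
    """Trainable parameters for one LoRA adapter pair (A and B).

    The frozen base weight has d_out * d_in parameters; the adapter
    adds only r * d_in (for A) + d_out * r (for B).
    """
    return r * d_in + d_out * r

# Illustrative projection layer of a small model (hypothetical dims):
d = 4096
full_weights = d * d                               # 16,777,216 frozen
adapter = lora_trainable_params(d, d, r=16)        # 131,072 trainable
print(f"trainable fraction: {adapter / full_weights:.2%}")  # 0.78%
```

Training well under 1% of the layer's parameters per adapter is what makes repeated cross-dataset fine-tuning runs cheap relative to full fine-tuning.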

For more detailed deployment and training instructions, please refer to the GitHub repository.

## Citation

If you find our work helpful or inspiring, please feel free to cite it:

```bibtex
@inproceedings{menta2025menta,
  title={Menta: A Small Language Model for On-Device Mental Health Prediction},
  author={},
  booktitle={Annual Conference on Neural Information Processing Systems},
  year={2025},
  url={https://arxiv.org/abs/2512.02716}
}
```