Menta: A Small Language Model for On-Device Mental Health Prediction
Menta is an optimized small language model (SLM) fine-tuned specifically for multi-task mental health prediction from social media data. As presented in the paper of the same name, it addresses the need for privacy-preserving, efficient mental health assessment on mobile devices.
- Paper: Menta: A Small Language Model for On-Device Mental Health Prediction
- Project Page: https://xxue752-nz.github.io/menta-project/
- Code Repository: https://github.com/xxue752-nz/Menta
Privacy-Preserving Mental Health Assessment Using Small Language Models on Mobile Devices
Overview
Menta is an optimized small language model for multi-task mental health prediction from social media. It is trained with a LoRA-based cross-dataset regimen and a balanced-accuracy-oriented objective across six classification tasks. Compared with nine state-of-the-art small language model baselines, Menta delivers an average improvement of 15.2% over the best SLM without fine-tuning, and it surpasses 13B-parameter large language models on depression and stress detection while remaining about 3.25× smaller. We also demonstrate real-time on-device inference on an iPhone 15 Pro Max using about 3 GB of RAM, enabling scalable, privacy-preserving mental health monitoring.
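The balanced-accuracy-oriented objective mentioned above can be illustrated with the standard definition of balanced accuracy: the mean of per-class recall. This is a minimal sketch in plain Python, not Menta's actual training objective, which may combine this metric with a loss term differently across the six tasks.

```python
from collections import defaultdict

def balanced_accuracy(y_true, y_pred):
    """Balanced accuracy = mean of per-class recall.

    Unlike plain accuracy, this is robust to class imbalance, which is
    common in mental health datasets (far fewer positive posts).
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    recalls = [correct[c] / total[c] for c in total]
    return sum(recalls) / len(recalls)

# Imbalanced example: 8 negatives, 2 positives, one positive missed.
y_true = [0] * 8 + [1] * 2
y_pred = [0] * 8 + [1, 0]
print(balanced_accuracy(y_true, y_pred))  # 0.75: (1.0 + 0.5) / 2
```

Plain accuracy on this example would be 0.9, hiding the 50% recall on the minority class; balanced accuracy exposes it.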
Key Features
- Privacy-First: All processing happens on-device; no data ever leaves the device
- Mobile-Optimized: Designed specifically for iOS devices with efficient resource usage
- Multi-Dimensional Analysis: Evaluates depression, stress, and suicidal thoughts
- Real-Time Monitoring: Provides immediate in-situ predictions
- High Accuracy: Fine-tuned SLMs for mental health assessment tasks
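To make the multi-dimensional analysis concrete, a social media post can be framed as a per-task classification query for the model. The template below is a hypothetical illustration; the actual prompts used by Menta are described in the paper and may differ.

```python
# Hypothetical per-task prompt template (illustrative only).
TASKS = {
    "depression": "Does the author show signs of depression?",
    "stress": "Does the author show signs of stress?",
    "suicidal_ideation": "Does the author express suicidal thoughts?",
}

def build_prompt(post: str, task: str) -> str:
    """Frame a social media post as a binary classification query."""
    return (
        f"Post: {post}\n"
        f"Question: {TASKS[task]}\n"
        "Answer (yes/no):"
    )

print(build_prompt("I can't sleep and everything feels heavy.", "depression"))
```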
Technical Stack
Deployment
- Language: Swift, SwiftUI
- Platform: iOS 15.0+
- ML Framework: llama.cpp (C++ inference)
- Model Format: GGUF (quantized models)
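A quick back-of-the-envelope check shows why a quantized GGUF build fits the reported ~3 GB RAM budget. The parameter count (~4B, i.e. 13B / 3.25) and the ~4.5 bits-per-weight figure (typical of Q4_K_M-style quantization) are assumptions for illustration, not values confirmed by the paper.

```python
def gguf_memory_gb(n_params: float, bits_per_weight: float,
                   overhead_gb: float = 0.5) -> float:
    """Rough RAM estimate for a quantized model:
    packed weights plus KV cache / runtime overhead."""
    weights_gb = n_params * bits_per_weight / 8 / 1e9
    return weights_gb + overhead_gb

# Assumption: ~4B parameters at ~4.5 bits/weight (Q4_K_M-like).
print(round(gguf_memory_gb(4e9, 4.5), 2))  # 2.75 -- consistent with ~3 GB
```

At full 16-bit precision the same weights alone would need ~8 GB, well beyond a phone's practical budget, which is why 4-bit GGUF quantization is the deployment path.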
Training
- Language: Python 3.8+
- Frameworks: PyTorch, Transformers
- Techniques: LoRA fine-tuning, multi-task learning
- Base Models: Small Language Models (SLMs)
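The LoRA technique listed above freezes the pretrained weight matrix W and trains only two low-rank factors, so the effective weight becomes W + (alpha / r) · B·A. This NumPy sketch shows the mechanics and the parameter savings; it is illustrative, not Menta's training code (which uses PyTorch/Transformers, per the stack above).

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 256, 512, 8, 16  # illustrative dimensions

W = rng.standard_normal((d_out, d_in))    # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable, small init
B = np.zeros((d_out, r))                   # trainable, zero init => delta starts at 0

def forward(x):
    """Apply the LoRA-adapted layer: (W + (alpha/r) * B @ A) @ x."""
    delta = (alpha / r) * (B @ A)
    return (W + delta) @ x

full_params = d_out * d_in          # what full fine-tuning would update
lora_params = r * (d_out + d_in)    # what LoRA actually trains
print(f"trainable: {lora_params} vs {full_params} "
      f"({lora_params / full_params:.1%})")  # 6144 vs 131072 (4.7%)
```

Because B starts at zero, the adapted model is initially identical to the base model, and only ~5% of the layer's parameters receive gradients, which is what makes cross-dataset fine-tuning of an SLM cheap enough to iterate on.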
For more detailed deployment and training instructions, please refer to the GitHub repository.
Citation
If you find our work helpful or inspiring, please feel free to cite it:
```bibtex
@inproceedings{menta2025menta,
  title={Menta: A Small Language Model for On-Device Mental Health Prediction},
  author={},
  booktitle={Annual Conference on Neural Information Processing Systems},
  year={2025},
  url={https://arxiv.org/abs/2512.02716},
}
```