# Visual Moral Compass

Visual Moral Compass is a fine-tuned CLIP model that classifies images according to Moral Foundations Theory.
## Model Description
This model extends CLIP (openai/clip-vit-base-patch16) with five classifier heads to predict moral dimensions in images:
- Care vs. Harm: Concerns about suffering and protection
- Fairness vs. Cheating: Concerns about justice and reciprocity
- Loyalty vs. Betrayal: Concerns about group membership and solidarity
- Respect vs. Subversion: Concerns about hierarchy and authority
- Sanctity vs. Degradation: Concerns about purity and contamination
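A minimal sketch of what such a multi-head architecture can look like: one linear classifier per moral foundation on top of a shared CLIP image embedding. The class name, head layout, label count, and embedding dimension below are illustrative assumptions, not this model's actual implementation:

```python
import torch
import torch.nn as nn

# The five moral foundations predicted by the model
FOUNDATIONS = ["care", "fairness", "loyalty", "respect", "sanctity"]

class MoralFoundationHeads(nn.Module):
    """Illustrative sketch: five independent classifier heads over a CLIP image embedding."""

    def __init__(self, embed_dim: int = 768, num_classes: int = 3):
        # embed_dim=768 matches the ViT-B/16 vision encoder's hidden size;
        # num_classes=3 (e.g. virtue / neutral / vice) is an assumption,
        # not the model's documented label set.
        super().__init__()
        self.heads = nn.ModuleDict(
            {name: nn.Linear(embed_dim, num_classes) for name in FOUNDATIONS}
        )

    def forward(self, image_embedding: torch.Tensor) -> dict[str, torch.Tensor]:
        # One logit vector per foundation, all computed from the same embedding
        return {name: head(image_embedding) for name, head in self.heads.items()}
```

Keeping the heads independent lets each foundation be scored separately, so an image can, for example, rate high on both care and loyalty at once.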
## Usage
```python
from visual_moral_compass import VisualMoralCompass

# Load model
model = VisualMoralCompass.from_pretrained("YOUR_USERNAME/visual-moral-compass")

# Classify an image
results = model.classify_image("path/to/image.jpg")
print(results)
```
## Citation
If you use this model, please cite:
```bibtex
@inproceedings{moralclip2025,
  author = {Condez, Ana Carolina and Tavares, Diogo and Magalh\~{a}es, Jo\~{a}o},
  title = {MoralCLIP: Contrastive Alignment of Vision-and-Language Representations with Moral Foundations Theory},
  year = {2025},
  isbn = {9798400720352},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  doi = {10.1145/3746027.3758166},
  booktitle = {Proceedings of the 33rd ACM International Conference on Multimedia},
  pages = {12399--12408},
  numpages = {10},
  location = {Dublin, Ireland},
  series = {MM '25}
}
```
## Model Details
- **Base Model**: openai/clip-vit-base-patch16
- **Training Data**: Social-Moral Image Database