---
tags:
- clip
- moral-foundations
- vision
- image-classification
- multimodal
license: mit
---
# Visual Moral Compass
Visual Moral Compass is a CLIP model fine-tuned to classify images along the five dimensions of Moral Foundations Theory.
## Model Description
This model extends CLIP (openai/clip-vit-base-patch16) with five classifier heads to predict moral dimensions in images:
- **Care vs. Harm**: Concerns about suffering and protection
- **Fairness vs. Cheating**: Concerns about justice and reciprocity
- **Loyalty vs. Betrayal**: Concerns about group membership and solidarity
- **Authority vs. Subversion**: Concerns about hierarchy and deference to legitimate authority
- **Sanctity vs. Degradation**: Concerns about purity and contamination
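To make the architecture concrete, here is a minimal sketch of what "five classifier heads on top of a CLIP image embedding" can look like. This is illustrative, not the released implementation: the class name, random initialization, and the assumption of one linear scorer per foundation are hypothetical; the embedding size 512 matches the projected image-embedding dimension of openai/clip-vit-base-patch16.

```python
import random

# The five moral foundations scored by the classifier heads.
FOUNDATIONS = ["care", "fairness", "loyalty", "authority", "sanctity"]


class MoralHeadsSketch:
    """Hypothetical sketch: one linear head per foundation over a CLIP image embedding."""

    def __init__(self, embed_dim: int = 512, seed: int = 0):
        rng = random.Random(seed)
        # Illustrative random weights; in practice these would be learned.
        self.weights = {
            f: [rng.gauss(0.0, 0.02) for _ in range(embed_dim)]
            for f in FOUNDATIONS
        }
        self.biases = {f: 0.0 for f in FOUNDATIONS}

    def score(self, embedding: list[float]) -> dict[str, float]:
        # Dot product per foundation -> raw (unnormalized) relevance score.
        return {
            f: sum(w * x for w, x in zip(self.weights[f], embedding)) + self.biases[f]
            for f in FOUNDATIONS
        }
```

In this sketch the image would first be encoded by the frozen CLIP vision tower, and the resulting embedding passed to `score`, yielding one raw score per foundation.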
## Usage
```python
from visual_moral_compass import VisualMoralCompass
# Load model
model = VisualMoralCompass.from_pretrained("YOUR_USERNAME/visual-moral-compass")
# Classify an image
results = model.classify_image("path/to/image.jpg")
print(results)
```
## Citation
If you use this model, please cite:
```bibtex
@inproceedings{moralclip2025,
author = {Condez, Ana Carolina and Tavares, Diogo and Magalh\~{a}es, Jo\~{a}o},
title = {MoralCLIP: Contrastive Alignment of Vision-and-Language Representations with Moral Foundations Theory},
year = {2025},
isbn = {9798400720352},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
doi = {10.1145/3746027.3758166},
booktitle = {Proceedings of the 33rd ACM International Conference on Multimedia},
pages = {12399–12408},
numpages = {10},
location = {Dublin, Ireland},
series = {MM '25}
}
```
## Model Details
- **Base Model**: openai/clip-vit-base-patch16
- **Training Data**: Social-Moral Image Database (SMID)