anaaa2 committed
Commit 41883b9 · verified · 1 parent: c81ab10

Upload folder using huggingface_hub
Files changed (3):
  1. README.md +61 -0
  2. config.json +30 -0
  3. pytorch_model.bin +3 -0
README.md ADDED
@@ -0,0 +1,61 @@
+ ---
+ tags:
+ - clip
+ - moral-foundations
+ - vision
+ - image-classification
+ - multimodal
+ license: mit
+ ---
+
+ # Visual Moral Compass
+
+ Visual Moral Compass is a fine-tuned CLIP model that classifies images based on Moral Foundations Theory.
+
+ ## Model Description
+
+ This model extends CLIP (openai/clip-vit-base-patch16) with five classifier heads to predict moral dimensions in images:
+ - **Care vs. Harm**: Concerns about suffering and protection
+ - **Fairness vs. Cheating**: Concerns about justice and reciprocity
+ - **Loyalty vs. Betrayal**: Concerns about group membership and solidarity
+ - **Respect vs. Subversion**: Concerns about hierarchy and authority
+ - **Sanctity vs. Degradation**: Concerns about purity and contamination
+
+ ## Usage
+
+ ```python
+ from visual_moral_compass import VisualMoralCompass
+
+ # Load model
+ model = VisualMoralCompass.from_pretrained("YOUR_USERNAME/visual-moral-compass")
+
+ # Classify an image
+ results = model.classify_image("path/to/image.jpg")
+ print(results)
+ ```
+
+ ## Citation
+
+ If you use this model, please cite:
+
+ ```bibtex
+ @inproceedings{moralclip2025,
+   author = {Condez, Ana Carolina and Tavares, Diogo and Magalh\~{a}es, Jo\~{a}o},
+   title = {MoralCLIP: Contrastive Alignment of Vision-and-Language Representations with Moral Foundations Theory},
+   year = {2025},
+   isbn = {9798400720352},
+   publisher = {Association for Computing Machinery},
+   address = {New York, NY, USA},
+   doi = {10.1145/3746027.3758166},
+   booktitle = {Proceedings of the 33rd ACM International Conference on Multimedia},
+   pages = {12399--12408},
+   numpages = {10},
+   location = {Dublin, Ireland},
+   series = {MM '25}
+ }
+ ```
+
+ ## Model Details
+
+ - **Base Model**: openai/clip-vit-base-patch16
+ - **Training Data**: Social-Moral Image Database
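The five-pair design described in the README can be sketched in plain NumPy. This is a hypothetical illustration, not the repo's actual implementation: the head dimensions of `CLIPImageMultiClassification` are not published here, so this sketch assumes one independent 2-way linear head per foundation pair over a 512-dim CLIP ViT-B/16 image embedding, with random (untrained) weights.

```python
import numpy as np

# The five foundation pairs from the model card / config.
PAIRS = [("Care", "Harm"), ("Fairness", "Cheating"), ("Loyalty", "Betrayal"),
         ("Respect", "Subversion"), ("Sanctity", "Degradation")]
EMBED_DIM = 512  # assumed: CLIP ViT-B/16 projection size

rng = np.random.default_rng(0)
# One independent 2-way linear head (W, b) per moral foundation pair.
heads = [(rng.standard_normal((EMBED_DIM, 2)) * 0.02, np.zeros(2)) for _ in PAIRS]

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def classify(embedding):
    """Map a CLIP image embedding to per-foundation probabilities."""
    out = {}
    for (pos, neg), (W, b) in zip(PAIRS, heads):
        p = softmax(embedding @ W + b)
        out[f"{pos} vs. {neg}"] = {pos: float(p[0]), neg: float(p[1])}
    return out

results = classify(rng.standard_normal(EMBED_DIM))
```

Because each pair gets its own head, an image can score high on several foundations at once, rather than being forced into a single moral category.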
config.json ADDED
@@ -0,0 +1,30 @@
+ {
+   "model_type": "visual-moral-compass",
+   "base_model": "openai/clip-vit-base-patch16",
+   "num_pairs": 5,
+   "moral_foundations": [
+     [
+       "Care",
+       "Harm"
+     ],
+     [
+       "Fairness",
+       "Cheating"
+     ],
+     [
+       "Loyalty",
+       "Betrayal"
+     ],
+     [
+       "Respect",
+       "Subversion"
+     ],
+     [
+       "Sanctity",
+       "Degradation"
+     ]
+   ],
+   "architectures": [
+     "CLIPImageMultiClassification"
+   ]
+ }
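The config is plain JSON, so downstream code can read the label pairs directly instead of hard-coding them. A minimal sketch (the inlined string stands in for reading `config.json` from disk):

```python
import json

# Stand-in for: config = json.load(open("config.json"))
config_text = """{
  "model_type": "visual-moral-compass",
  "base_model": "openai/clip-vit-base-patch16",
  "num_pairs": 5,
  "moral_foundations": [
    ["Care", "Harm"], ["Fairness", "Cheating"], ["Loyalty", "Betrayal"],
    ["Respect", "Subversion"], ["Sanctity", "Degradation"]
  ],
  "architectures": ["CLIPImageMultiClassification"]
}"""

config = json.loads(config_text)
# Sanity check: the declared pair count matches the label list.
assert config["num_pairs"] == len(config["moral_foundations"])
labels = [f"{pos} vs. {neg}" for pos, neg in config["moral_foundations"]]
```

Keeping the foundations in the config means adding or renaming a dimension only requires a config change plus a matching classifier head.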
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c153ba344baef5b0b04ed4094053a011404f23caf8815df772627d765be259af
+ size 598675498
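Note that `pytorch_model.bin` is not the weights themselves but a Git LFS pointer file (spec v1): one `key value` line each for the spec version, the SHA-256 object id, and the byte size. A quick sketch of parsing such a pointer:

```python
# Git LFS pointer contents as shown in the diff above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:c153ba344baef5b0b04ed4094053a011404f23caf8815df772627d765be259af
size 598675498"""

# Each non-empty line is "key value"; oid is "<hash-algo>:<hex digest>".
fields = dict(line.split(" ", 1) for line in pointer.splitlines())
algo, digest = fields["oid"].split(":", 1)
size_mb = int(fields["size"]) / 1e6  # ~599 MB of weights behind this pointer
```

Cloning without LFS installed yields only this ~130-byte pointer, which is a common cause of "invalid model file" errors when loading the checkpoint.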