# MoralCLIP
MoralCLIP extends CLIP with explicit moral grounding based on Moral Foundations Theory (MFT). This model aligns image and text representations by shared moral meaning rather than purely semantic similarity.
## Model Details
- Base Model: openai/clip-vit-base-patch16
- Training Data: ~15k image-text pairs with MFT annotations
- Moral Foundations: Care, Fairness, Loyalty, Authority, Sanctity
- Paper: Under review
## Usage
```python
from transformers import CLIPModel, CLIPProcessor
from PIL import Image
import torch

model = CLIPModel.from_pretrained("anaaa2/moralclip-base")
processor = CLIPProcessor.from_pretrained("anaaa2/moralclip-base")

# Load an image and prepare paired image/text inputs
image = Image.open("image_path").convert("RGB")
inputs = processor(text=["a photo of care"], images=image, return_tensors="pt", padding=True)

# Forward pass yields jointly aligned image and text embeddings
with torch.no_grad():
    outputs = model(**inputs)
image_embeds = outputs.image_embeds
text_embeds = outputs.text_embeds
```
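Since MoralCLIP follows the standard CLIP interface, the same embeddings support zero-shot scoring of an image against all five moral foundations. Below is a minimal sketch, continuing from the snippet above; the prompt template `"a photo of {f}"` is an illustrative choice, not prescribed by the model.

```python
# Zero-shot moral foundation scoring (sketch; prompt wording is an assumption)
foundations = ["care", "fairness", "loyalty", "authority", "sanctity"]
prompts = [f"a photo of {f}" for f in foundations]

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds scaled image-text cosine similarities
probs = outputs.logits_per_image.softmax(dim=-1)
for foundation, p in zip(foundations, probs[0].tolist()):
    print(f"{foundation}: {p:.3f}")
```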