---
library_name: transformers
language:
  - en
base_model:
  - openai/clip-vit-base-patch16
license: mit
tags:
  - clip
  - moral-foundations
  - multimodal
  - ethics
---

# MoralCLIP

MoralCLIP extends CLIP with explicit moral grounding based on Moral Foundations Theory (MFT). This model aligns image and text representations by shared moral meaning rather than purely semantic similarity.

## Model Details

- **Base Model:** openai/clip-vit-base-patch16
- **Training Data:** ~15k image-text pairs with MFT annotations
- **Moral Foundations:** Care, Fairness, Loyalty, Authority, Sanctity
- **Paper:** Under review

## Usage

```python
from transformers import CLIPModel, CLIPProcessor
from PIL import Image
import torch

model = CLIPModel.from_pretrained("anaaa2/moralclip-base")
processor = CLIPProcessor.from_pretrained("anaaa2/moralclip-base")

# Load the image to score (replace "image_path" with the path to your file)
img = Image.open("image_path").convert("RGB")

inputs = processor(text=["a photo of care"], images=img, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

image_embeds = outputs.image_embeds  # shape: (num_images, embed_dim)
text_embeds = outputs.text_embeds    # shape: (num_texts, embed_dim)
```
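
To compare an image against the five moral foundations, you can score one text prompt per foundation and rank them by cosine similarity, following the usual CLIP recipe (L2-normalize, then dot product). The sketch below uses random tensors as stand-ins for the `image_embeds` / `text_embeds` produced by the snippet above, and the prompt wording is only an illustrative choice:

```python
import torch
import torch.nn.functional as F

foundations = ["Care", "Fairness", "Loyalty", "Authority", "Sanctity"]
# In practice, embed prompts such as f"a photo of {name.lower()}" with the
# processor/model above; these random tensors are placeholders.
torch.manual_seed(0)
image_embeds = torch.randn(1, 512)   # stand-in for outputs.image_embeds
text_embeds = torch.randn(5, 512)    # stand-in for outputs.text_embeds

# CLIP-style similarity: normalize to unit length, then take dot products
image_embeds = F.normalize(image_embeds, dim=-1)
text_embeds = F.normalize(text_embeds, dim=-1)
sims = image_embeds @ text_embeds.T  # shape: (1, 5), values in [-1, 1]

ranked = sorted(zip(foundations, sims[0].tolist()), key=lambda p: -p[1])
for name, score in ranked:
    print(f"{name}: {score:.3f}")
```

With the real model, the highest-ranked foundation indicates which moral dimension the image representation is closest to in the shared embedding space.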