OceanGPT-X
A vision-language model fine-tuned on marine imagery and textual data. Optimized for species identification, zero-shot classification, and cross-validation in underwater/sonar environments.
Use with transformers and open_clip:

```python
import torch
from transformers import CLIPProcessor, CLIPModel
from PIL import Image

# Load the model and its matching processor.
model = CLIPModel.from_pretrained("zjunlp/OceanCLIP-0.15B")
processor = CLIPProcessor.from_pretrained("zjunlp/OceanCLIP-0.15B")

# Prepare an image and the candidate text prompts to score it against.
image = Image.open("marine_image.jpg")
inputs = processor(
    text=["a photo of a clownfish", "a photo of a coral reef"],
    images=image,
    return_tensors="pt",
    padding=True,
)

# Compute image-text similarity and convert logits to probabilities.
with torch.no_grad():
    outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1)
print(probs)
```
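To turn the probability tensor into a readable prediction, you can pair each score with its prompt and rank them. A minimal sketch, assuming the same two prompts as above; the scores here are placeholder values standing in for `probs[0].tolist()`, not actual model output:

```python
labels = ["a photo of a clownfish", "a photo of a coral reef"]
# Placeholder probabilities for illustration; in practice use probs[0].tolist().
scores = [0.92, 0.08]

# Rank prompts by score, highest first.
ranked = sorted(zip(labels, scores), key=lambda pair: pair[1], reverse=True)
for label, score in ranked:
    print(f"{label}: {score:.2%}")

best_label, best_score = ranked[0]
```

The same pattern extends to any number of prompts, so you can swap in a longer list of species names for zero-shot identification.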