---
license: apache-2.0
base_model:
- openai/clip-vit-large-patch14
tags:
- >-
  This model has been fine-tuned in FP32 on Joycaption images. Both the
  vision and text models were trained; the text model was trained for 10x
  more epochs than the vision model.
---