# MCSE: Multimodal Contrastive Learning of Sentence Embeddings (NAACL 2022)

Authors: Miaoran Zhang, Marius Mosbach, David Adelani, Michael Hedderich, Dietrich Klakow

## Quick Links

- Paper: https://aclanthology.org/2022.naacl-main.436/
- GitHub: https://github.com/uds-lsv/MCSE

## Usage

Load the model directly:

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("UdS-LSV/mcse-coco-roberta-base")
model = AutoModel.from_pretrained("UdS-LSV/mcse-coco-roberta-base")
```
## Model Details

- Base model: roberta-base
- Training data: Wiki1M + MS-COCO
## Evaluation Results
| STS12 | STS13 | STS14 | STS15 | STS16 | STSBenchmark | SICKRelatedness | Avg. |
|---|---|---|---|---|---|---|---|
| 70.79 | 82.81 | 76.16 | 83.13 | 81.76 | 81.72 | 70.81 | 78.17 |
Alternatively, use a pipeline as a high-level helper:

```python
from transformers import pipeline

pipe = pipeline("feature-extraction", model="UdS-LSV/mcse-coco-roberta-base")
```
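The snippets above return token-level features; for sentence similarity you need a single vector per sentence. Below is a minimal sketch that pools the first token's (`<s>`/CLS) last hidden state, a common choice for SimCSE-style models — the exact pooling MCSE uses should be verified against the official repository before relying on it.

```python
# Sketch: sentence similarity with MCSE embeddings.
# Assumption: the sentence embedding is the first token's last hidden
# state (SimCSE-style pooling) -- verify against the official MCSE repo.
import torch
from transformers import AutoTokenizer, AutoModel

def cosine_sim(a: torch.Tensor, b: torch.Tensor) -> float:
    # Cosine similarity between two 1-D embedding vectors.
    return torch.nn.functional.cosine_similarity(a, b, dim=0).item()

tokenizer = AutoTokenizer.from_pretrained("UdS-LSV/mcse-coco-roberta-base")
model = AutoModel.from_pretrained("UdS-LSV/mcse-coco-roberta-base")
model.eval()

sentences = ["A man is playing a guitar.", "Someone plays an instrument."]
batch = tokenizer(sentences, padding=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**batch)

# First-token pooling (assumption, see comment above).
embeddings = outputs.last_hidden_state[:, 0]

print(cosine_sim(embeddings[0], embeddings[1]))
```

Embeddings pooled this way can be compared directly with cosine similarity, which is how the STS benchmarks in the table above are scored.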