C3: Continued Pretraining with Contrastive Weak Supervision for Cross Language Ad-Hoc Retrieval
Paper: arXiv 2204.11989
How to use `eugene-yang/dpr-xlm-align-engtrained` with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("feature-extraction", model="eugene-yang/dpr-xlm-align-engtrained")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("eugene-yang/dpr-xlm-align-engtrained")
model = AutoModel.from_pretrained("eugene-yang/dpr-xlm-align-engtrained")
```

DPR model with XLM-Align, trained on MS MARCO.
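DPR ranks passages by the inner product between query and passage embeddings. A minimal sketch of that scoring step, using small hand-made tensors as stand-ins for the model's pooled outputs (DPR conventionally takes the `[CLS]` vector as the dense representation; the numbers below are illustrative, not outputs of this model):

```python
import torch

# Stand-in embeddings: in practice these come from the encoder's [CLS] vectors
query_emb = torch.tensor([[1.0, 0.0, 1.0]])            # shape (1, dim)
passage_embs = torch.tensor([[1.0, 0.0, 1.0],          # shape (num_passages, dim)
                             [0.0, 1.0, 0.0]])

# DPR relevance score: dot product between query and each passage vector
scores = query_emb @ passage_embs.T                    # shape (1, num_passages)

# Rank passages from most to least relevant
ranking = scores.argsort(dim=-1, descending=True)
print(scores)    # tensor([[2., 0.]])
print(ranking)   # tensor([[0, 1]])
```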
Please consider citing the following paper if you use this model.
```bibtex
@inproceedings{sigir2022c3,
  author    = {Eugene Yang and Suraj Nair and Ramraj Chandradevan and Rebecca Iglesias-Flores and Douglas W. Oard},
  title     = {C3: Continued Pretraining with Contrastive Weak Supervision for Cross Language Ad-Hoc Retrieval},
  booktitle = {Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR) (Short Paper)},
  year      = {2022},
  url       = {https://arxiv.org/abs/2204.11989}
}
```