## MiniCPM-RR
**MiniCPM-RR** is a bilingual and cross-lingual text re-ranking model jointly developed by ModelBest Inc. and the Natural Language Processing Lab of Tsinghua University (THUNLP), featuring:

- Exceptional Chinese and English re-ranking capabilities.
- Outstanding cross-lingual re-ranking capabilities between Chinese and English.

MiniCPM-RR is trained from [MiniCPM-2B-sft-bf16](https://huggingface.co/openbmb/MiniCPM-2B-sft-bf16) and adopts bidirectional attention in its architecture. It was trained in multiple stages on approximately 6 million examples drawn from open-source, synthetic, and proprietary data.

We also invite you to explore the RAG toolkit series:

- Retrieval model: [MiniCPM-R](https://huggingface.co/openbmb/MiniCPM-R)
- Re-ranking model: [MiniCPM-RR](https://huggingface.co/openbmb/MiniCPM-RR)
- LoRA plugin for RAG scenarios: [MiniCPM3-RAG-LoRA](https://huggingface.co/openbmb/MiniCPM3-RAG-LoRA)
## Model Information

- Model size: 2.4B
- Max input tokens: 1024
## Usage

### Input Format

MiniCPM-RR supports instructions in the following format:
```
<s>Instruction: {{ instruction }} Query: {{ query }}</s>{{ document }}
```
For example:
```
<s>Instruction: 为这个医学问题检索相关回答。Query: 咽喉癌的成因是什么?</s>(document omitted)
```

```
<s>Instruction: Given a claim about climate change, retrieve documents that support or refute the claim. Query: However the warming trend is slower than most climate models have forecast.</s>(document omitted)
```
MiniCPM-RR also works without an instruction, in which case the format is:

```
<s>Query: {{ query }}</s>{{ document }}
```
When evaluating on BEIR and C-MTEB/Retrieval we use the instructions listed in `instructions.json`; all other evaluations use no instructions.
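As a minimal sketch of the format above, the snippet below assembles the raw input string. `build_input` is a hypothetical helper for illustration only; the demo script below constructs the same sequence at the token level instead.

```python
# Minimal sketch of the MiniCPM-RR input format. `build_input` is a
# hypothetical helper for illustration, not part of the model's API.
def build_input(query, document, instruction=None):
    if instruction:
        # <s>Instruction: {instruction} Query: {query}</s>{document}
        prefix = f"Instruction: {instruction} Query: {query}"
    else:
        # Instruction-free mode: <s>Query: {query}</s>{document}
        prefix = f"Query: {query}"
    return f"<s>{prefix}</s>{document}"

print(build_input("咽喉癌的成因是什么?", "(document omitted)",
                  instruction="为这个医学问题检索相关回答。"))
```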
### Requirements

```
transformers==4.37.2
flash-attn>2.3.5
```
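To confirm the pinned versions before loading the model, a quick check (assuming both packages are installed) is:

```python
# Sanity-check the pinned requirements above.
import transformers
import flash_attn

print(transformers.__version__)  # expected: 4.37.2
print(flash_attn.__version__)    # expected: > 2.3.5
```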
### Demo

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import numpy as np

model_name = "openbmb/MiniCPM-RR"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
tokenizer.padding_side = "right"
model = AutoModelForSequenceClassification.from_pretrained(
    model_name,
    trust_remote_code=True,
    attn_implementation="flash_attention_2",
    torch_dtype=torch.float16,
).to("cuda")
model.eval()

max_len_q, max_len_d = 512, 512

def tokenize_pair(query, doc):
    # Build <s>{query}</s>{doc}, truncating query and document independently,
    # then right-pad to a fixed length so the pairs can be batched.
    input_id_query = tokenizer.encode(query, add_special_tokens=False, max_length=max_len_q, truncation=True)
    input_id_doc = tokenizer.encode(doc, add_special_tokens=False, max_length=max_len_d, truncation=True)
    pad_input = {"input_ids": [tokenizer.bos_token_id] + input_id_query + [tokenizer.eos_token_id] + input_id_doc}
    return tokenizer.pad(
        pad_input,
        padding="max_length",
        max_length=max_len_q + max_len_d + 2,
        return_tensors="pt",
    )

@torch.no_grad()
def rerank(input_query, input_docs):
    # Score every (query, document) pair in one batch; a higher logit
    # means higher relevance.
    tokenized_inputs = [tokenize_pair(input_query, input_doc) for input_doc in input_docs]
    batch = {
        k: torch.stack([tokenized_input[k] for tokenized_input in tokenized_inputs]).to("cuda")
        for k in ("input_ids", "attention_mask")
    }
    outputs = model(**batch)
    return outputs.logits.float().detach().cpu().numpy()

queries = ["中国的首都是哪里?"]  # "What is the capital of China?"
passages = [["beijing", "shanghai"]]

# Instruction-free mode: queries only carry the "Query: " prefix.
INSTRUCTION = "Query: "
queries = [INSTRUCTION + query for query in queries]

scores = []
for i in range(len(queries)):
    print(queries[i])
    scores.append(rerank(queries[i], passages[i]))
print(np.array(scores))  # [[[-4.7421875] [-8.8515625]]]
```
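The logits are raw relevance scores, so producing a ranking is just a sort. The helper below is a hypothetical sketch built on the `rerank` function from the demo, not part of the model's API:

```python
# Hypothetical helper: order candidate passages by descending relevance,
# reusing `rerank` from the demo above.
def rank_passages(query, docs):
    doc_scores = rerank(query, docs).reshape(-1)  # one logit per document
    order = np.argsort(-doc_scores)
    return [(docs[i], float(doc_scores[i])) for i in order]

# The query keeps the instruction-free "Query: " prefix, as in the demo.
print(rank_passages("Query: 中国的首都是哪里?", ["beijing", "shanghai"]))
# [('beijing', -4.7421875), ('shanghai', -8.8515625)]
```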
## Evaluation Results

### CN/EN Re-ranking Results

We re-rank the top-100 documents retrieved by `bge-large-zh-v1.5` on C-MTEB/Retrieval (Chinese) and by `bge-large-en-v1.5` on BEIR (English).
| Model | C-MTEB/Retrieval (NDCG@10) | BEIR (NDCG@10) |
|-------|----------------------------|----------------|
| bge-large-zh-v1.5 (retriever for Chinese) | 70.46 | - |
| bge-large-en-v1.5 (retriever for English) | - | 54.29 |
| bge-reranker-v2-m3 | 71.82 | 55.36 |
| bge-reranker-v2-minicpm-28 | 73.51 | 59.86 |
| bge-reranker-v2-gemma | 71.74 | 60.71 |
| bge-reranker-v2.5-gemma2 | - | **63.67** |
| MiniCPM-RR | **76.79** | 61.32 |
### CN-EN Cross-lingual Re-ranking Results

We re-rank the top-100 documents retrieved by `bge-m3` (Dense).
| Model | MKQA EN-CN (Recall@20) | NeuCLIR22 (NDCG@10) | NeuCLIR23 (NDCG@10) |
|-------|------------------------|---------------------|---------------------|
| bge-m3 (Dense) (retriever) | 66.4 | 30.49 | 41.09 |
| jina-reranker-v2-base-multilingual | 69.33 | 36.66 | 50.03 |
| bge-reranker-v2-m3 | 69.75 | 40.98 | 49.67 |
| gte-multilingual-reranker-base | 68.51 | 38.74 | 45.30 |
| MiniCPM-RR | **71.73** | **43.65** | **50.59** |
## License

- The code in this repo is released under the [Apache-2.0](https://github.com/OpenBMB/MiniCPM/blob/main/LICENSE) license.
- Use of the MiniCPM-RR model weights must follow the [MiniCPM Model License](https://github.com/OpenBMB/MiniCPM/blob/main/MiniCPM%20Model%20License.md).
- The MiniCPM-RR weights are fully open for academic research. After filling out this [questionnaire](https://modelbest.feishu.cn/share/base/form/shrcnpV5ZT9EJ6xYjh3Kx0J6v8g) for registration, they are also available for free commercial use.