---
language:
- zh
metrics:
- accuracy
library_name: transformers
pipeline_tag: sentence-similarity
---

# Model Introduction

A pretrained model for text retrieval (similarity scoring), inspired by Dense Passage Retrieval for Open-Domain Question Answering.

Unlike transformers.DPR, this is a plain BERT model, which makes it easy to integrate with other models.

# Model Details

The model architecture is TinyBERT.

Training uses the DuReader and cmrc2018 datasets, both of which are question-answering datasets. For each batch of question-context pairs we compute vector representations $Q \in \mathbb{R}^{b \times d}$ and $C \in \mathbb{R}^{b \times d}$, then take the inner product $Q C^{\top} = \mathit{similarity} \in \mathbb{R}^{b \times b}$. Each entry of $\mathit{similarity}$ scores a question against a context in the batch (it equals the cosine similarity once the embeddings are normalized). Using torch.eye(batch_size) as pseudo-labels, training then reduces to a classification problem.
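
As a rough illustration of this in-batch objective, the sketch below scores a batch of question and context embeddings against each other and applies the torch.eye-style pseudo-labels via cross-entropy; the batch size and embedding dimension are placeholders, not the model's actual values:

```python
import torch
import torch.nn.functional as F

b, d = 8, 312  # placeholder batch size and embedding dimension
Q = torch.randn(b, d)  # question embeddings from the encoder
C = torch.randn(b, d)  # context embeddings from the encoder

# Score every question against every context in the batch.
similarity = Q @ C.T  # shape (b, b)

# torch.eye(b) as pseudo-labels marks the diagonal pairs as positives;
# with one-hot targets this is plain cross-entropy over indices 0..b-1.
targets = torch.arange(b)
loss = F.cross_entropy(similarity, targets)
print(loss)
```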

# Quick Start

```python
from transformers import BertModel, BertTokenizer
import numpy as np

model = BertModel.from_pretrained("zerohell/tinydpr-acc_0.315-bs_307", cache_dir=".")
tokenizer = BertTokenizer.from_pretrained("zerohell/tinydpr-acc_0.315-bs_307", cache_dir=".")

# Encode a context passage; BatchEncoding.to() already moves all tensors
# to the model's device.
encoded_text = tokenizer(text="采用Dureader和cmrc2018数据集进行训练。", return_tensors="pt",
                         max_length=512, padding='longest', truncation=True).to(model.device)
text_model_output1 = model(**encoded_text).pooler_output
text_model_output1 = text_model_output1.detach().cpu().numpy()

# Encode a question.
encoded_text = tokenizer(text="这个模型是采用什么数据集训练的?", return_tensors="pt",
                         max_length=512, padding='longest', truncation=True).to(model.device)
text_model_output2 = model(**encoded_text).pooler_output
text_model_output2 = text_model_output2.detach().cpu().numpy()

# Cosine similarity between the two pooled embeddings.
print(text_model_output1 @ text_model_output2.T
      / np.linalg.norm(text_model_output1)
      / np.linalg.norm(text_model_output2))
```
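
In practice, retrieving from several candidate passages just batches the passage side and ranks by the same score. A minimal sketch reusing model and tokenizer from the snippet above (the candidate passages are illustrative):

```python
import torch

question = "这个模型是采用什么数据集训练的?"
passages = [
    "采用Dureader和cmrc2018数据集进行训练。",
    "模型结构使用tinybert。",
]

with torch.no_grad():
    q_inputs = tokenizer(question, return_tensors="pt",
                         padding='longest', truncation=True, max_length=512).to(model.device)
    q_emb = model(**q_inputs).pooler_output   # shape (1, d)
    p_inputs = tokenizer(passages, return_tensors="pt",
                         padding='longest', truncation=True, max_length=512).to(model.device)
    p_emb = model(**p_inputs).pooler_output   # shape (n, d)

# Normalize so the inner product is cosine similarity, then rank.
q_emb = torch.nn.functional.normalize(q_emb, dim=-1)
p_emb = torch.nn.functional.normalize(p_emb, dim=-1)
scores = (q_emb @ p_emb.T).squeeze(0)         # shape (n,)
print(passages[scores.argmax().item()], scores.tolist())
```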