---
language:
- zh
metrics:
- accuracy
library_name: transformers
pipeline_tag: sentence-similarity
---
# Model Introduction

A pre-trained model for text search (similarity computation), inspired by Dense Passage Retrieval for Open-Domain Question Answering.
Unlike transformers.DPR, we provide a plain BERT model, which makes it easy to integrate with other models.

# Model Details

The model architecture uses TinyBERT.

The model is trained on the Dureader and cmrc2018 datasets, both of which are question-answering datasets. For each question-context pair in the data we compute vector representations $Q \in \mathbb{R}^{b \times d}$ and $C \in \mathbb{R}^{b \times d}$, then take the inner product $Q C^{\top} = \mathrm{similarity} \in \mathbb{R}^{b \times b}$, where $\mathrm{similarity}$ holds the cosine similarity between every pair of samples in the batch. Using torch.eye(batch_size) as pseudo labels, retrieval is then cast as a classification problem.
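
As a concrete illustration of this objective, here is a minimal sketch of an in-batch loss built around torch.eye pseudo labels (illustrative only; the function name, batch size, and embedding dimension are not taken from the actual training code):

```python
import torch
import torch.nn.functional as F

def in_batch_loss(q_emb: torch.Tensor, c_emb: torch.Tensor) -> torch.Tensor:
    """q_emb, c_emb: (batch_size, dim) pooled question / context embeddings."""
    # (b, b) score matrix: entry (i, j) scores question i against context j.
    scores = q_emb @ c_emb.T
    # Pseudo labels: the matching context of question i lies on the diagonal,
    # so torch.eye(batch_size) marks the positive pair in every row.
    targets = torch.eye(q_emb.size(0), device=q_emb.device)
    # Row-wise cross-entropy turns retrieval into a classification problem.
    return F.cross_entropy(scores, targets)

# Toy check with random embeddings (batch_size=4, dim=128 chosen arbitrarily).
print(in_batch_loss(torch.randn(4, 128), torch.randn(4, 128)).item())
```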

# Quick Start

```python
from transformers import BertModel, BertTokenizer

model = BertModel.from_pretrained("tinydpr-acc_0.315-bs_307", cache_dir=".")
tokenizer = BertTokenizer.from_pretrained("tinydpr-acc_0.315-bs_307", cache_dir=".")

# Encode a context passage: "Trained on the Dureader and cmrc2018 datasets."
encoded_text = tokenizer(text="采用Dureader和cmrc2018数据集进行训练。", return_tensors="pt",
                         max_length=512, padding='longest', truncation=True).to(model.device)
text_model_output1 = model(**encoded_text).pooler_output
text_model_output1 = text_model_output1.detach().cpu().numpy()

# Encode a question: "What datasets was this model trained on?"
encoded_text = tokenizer(text="这个模型是采用什么数据集训练的?", return_tensors="pt",
                         max_length=512, padding='longest', truncation=True).to(model.device)
text_model_output2 = model(**encoded_text).pooler_output
text_model_output2 = text_model_output2.detach().cpu().numpy()

# The inner product of the two pooled embeddings is the similarity score.
print(text_model_output1 @ text_model_output2.T)
```
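
Beyond the single pair above, a typical use is to encode one question and several candidate passages, then rank the candidates by score. A minimal sketch under the same setup (the encode helper and the example passages are illustrative, not part of the released code):

```python
import numpy as np
from transformers import BertModel, BertTokenizer

model = BertModel.from_pretrained("tinydpr-acc_0.315-bs_307", cache_dir=".")
tokenizer = BertTokenizer.from_pretrained("tinydpr-acc_0.315-bs_307", cache_dir=".")

def encode(texts):
    """Encode a list of texts into pooled embeddings (illustrative helper)."""
    enc = tokenizer(text=texts, return_tensors="pt", max_length=512,
                    padding="longest", truncation=True).to(model.device)
    return model(**enc).pooler_output.detach().cpu().numpy()

# One question against several candidate passages.
question = encode(["这个模型是采用什么数据集训练的?"])        # "What datasets was this model trained on?"
passages = encode([
    "采用Dureader和cmrc2018数据集进行训练。",                  # "Trained on the Dureader and cmrc2018 datasets."
    "模型结构使用tinybert。",                                  # "The architecture uses TinyBERT."
])

scores = (question @ passages.T)[0]   # one similarity score per passage
print(scores.argsort()[::-1])         # passage indices ranked best-first
```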