---
language:
- zh
tags:
- Seq2SeqLM
- 古文
- 文言文
- 中国古代官职地名拆分
- ancient
- classical
license: cc-by-nc-sa-4.0
---

# <font color="IndianRed"> Name Splitter </font>
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1eZyJgeQOFfpG3QOlq0haDz8pE8jhsGOt#scrollTo=XChPisgxiiji)

Our model <font color="cornflowerblue"> Name Splitter </font> is a named-entity-recognition model for Classical Chinese, intended to <font color="IndianRed">split the address portion out of Classical Chinese office titles</font>. It inherits from the raynardj/classical-chinese-punctuation-guwen-biaodian Classical Chinese punctuation model and was fine-tuned on over 25,000 high-quality punctuation pairs collected by the CBDB group (China Biographical Database).

### <font color="IndianRed"> Sample input txt file </font>
The sample input txt file can be downloaded here:
https://huggingface.co/cbdb/OfficeTitleAddressSplitter/blob/main/vocab.txt

### <font color="IndianRed"> How to use </font>

Here is how to use this model to get the features of a given text in PyTorch:
<font color="cornflowerblue"> 1. Import model and packages </font>
```python
import torch
import numpy as np
from scipy.special import softmax
from transformers import AutoTokenizer, AutoModelForTokenClassification

PRETRAINED = "cbdb/NameSplitter"
tokenizer = AutoTokenizer.from_pretrained(PRETRAINED)
model = AutoModelForTokenClassification.from_pretrained(PRETRAINED)
```
<font color="cornflowerblue"> 2. Load Data </font>
```python
# Load your data here
test_list = ['徐元文漢魏風致集', '熊方補後漢書年表', '羅振玉本朝學術源流槪略 一卷', '陶諧陶莊敏集']
```
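Instead of hard-coding the list, the titles can also be read from a text file like the sample input above. A minimal sketch, assuming one office title per line (`load_titles` and the file path are hypothetical, not part of the model's API):

```python
# Hypothetical helper: read one office title per line from a UTF-8 text file,
# skipping blank lines
def load_titles(path):
    with open(path, encoding='utf-8') as f:
        return [line.strip() for line in f if line.strip()]
```

Then `test_list = load_titles('titles.txt')` replaces the hard-coded list, with `titles.txt` standing in for your own file.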
<font color="cornflowerblue"> 3. Make a prediction </font>
```python
def predict_class(test):
    # Tokenize a single office title
    tokens_test = tokenizer.encode_plus(
        test,
        add_special_tokens=True,
        return_attention_mask=True,
        padding=True,
        max_length=128,
        return_tensors='pt',
        truncation=True
    )

    test_seq = tokens_test['input_ids']          # already a torch tensor
    test_mask = tokens_test['attention_mask']

    inputs = {
        "input_ids": test_seq,
        "attention_mask": test_mask
    }
    with torch.no_grad():
        outputs = model(**inputs)
    outputs = outputs.logits.detach().cpu().numpy()

    # Most likely label id for each token position
    softmax_score = softmax(outputs, axis=-1)
    softmax_score = np.argmax(softmax_score, axis=2)[0]
    return test_seq, softmax_score

# Map label ids back to label strings
# (assumes the mapping is stored in the model config)
idx2label = model.config.id2label

for test_sen0 in test_list:
    test_seq, pred_class_proba = predict_class(test_sen0)
    label = [idx2label[i] for i in pred_class_proba]

    # '。' marks the predicted split point in the title
    element_to_find = '。'
    test_sen_pred = list(test_sen0)
    if element_to_find in label:
        index = label.index(element_to_find)
        test_sen_pred.insert(index, element_to_find)
    test_sen_pred = ''.join(test_sen_pred)

    print(test_sen_pred)
```
徐元文。漢魏風致集<br>
熊方。補後漢書年表<br>
羅振玉。本朝學術源流槪略 一卷<br>
陶諧。陶莊敏集<br>
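To see what the prediction step above does numerically, here is a self-contained sketch of the post-processing with toy logits and a hypothetical two-label map (the real checkpoint's label set may differ):

```python
import numpy as np
from scipy.special import softmax

# Toy logits: 1 sequence, 5 token positions, 2 labels (0 = no break, 1 = '。')
logits = np.array([[[4.0, 0.1],
                    [4.0, 0.1],
                    [4.0, 0.1],
                    [0.1, 5.0],
                    [4.0, 0.1]]])
idx2label = {0: 'O', 1: '。'}  # hypothetical label map, not the model's

probs = softmax(logits, axis=-1)        # per-token label probabilities
pred_ids = np.argmax(probs, axis=2)[0]  # most likely label id per token

label = [idx2label[i] for i in pred_ids]
chars = list('徐元文漢魏')
if '。' in label:
    chars.insert(label.index('。'), '。')  # insert the split marker
result = ''.join(chars)
print(result)  # → 徐元文。漢魏
```

The token position whose predicted label is '。' determines where the marker is inserted into the character list, exactly as in the loop above.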
### <font color="IndianRed">Authors </font>
Queenie Luo (queenieluo[at]g.harvard.edu)
<br>
Hongsu Wang
<br>
Peter Bol
<br>
CBDB Group

### <font color="IndianRed">License </font>
Copyright (c) 2023 CBDB

Except where otherwise noted, content on this repository is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0).
To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/4.0/ or
send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.