---
tags:
- spacy
- token-classification
language:
- zh
license: mit
model-index:
- name: zh_core_web_trf
  results:
  - task:
      name: NER
      type: token-classification
    metrics:
    - name: NER Precision
      type: precision
      value: 0.7608897127
    - name: NER Recall
      type: recall
      value: 0.7217582418
    - name: NER F Score
      type: f_score
      value: 0.7408075795
  - task:
      name: TAG
      type: token-classification
    metrics:
    - name: TAG (XPOS) Accuracy
      type: accuracy
      value: 0.9175332527
  - task:
      name: UNLABELED_DEPENDENCIES
      type: token-classification
    metrics:
    - name: Unlabeled Attachment Score (UAS)
      type: f_score
      value: 0.7572203056
  - task:
      name: LABELED_DEPENDENCIES
      type: token-classification
    metrics:
    - name: Labeled Attachment Score (LAS)
      type: f_score
      value: 0.7145288854
  - task:
      name: SENTS
      type: token-classification
    metrics:
    - name: Sentences F-Score
      type: f_score
      value: 0.6920716113
---
Chinese transformer pipeline (`Transformer(name='bert-base-chinese', piece_encoder='bert-wordpiece', stride=152, type='bert', width=768, window=208, vocab_size=21128)`). Components: transformer, tagger, parser, ner, attribute_ruler.
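A minimal usage sketch, assuming the package has been installed with `python -m spacy download zh_core_web_trf` (the example sentence is illustrative, not from this card):

```python
import spacy

# Load the installed zh_core_web_trf package.
nlp = spacy.load("zh_core_web_trf")

# Illustrative sentence: "Jack Ma founded Alibaba in Hangzhou."
doc = nlp("马云在杭州创立了阿里巴巴。")

# Entities come from the `ner` component.
for ent in doc.ents:
    print(ent.text, ent.label_)

# Tags, dependency labels, and heads come from `tagger` and `parser`.
for token in doc:
    print(token.text, token.tag_, token.dep_, token.head.text)
```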
| Feature | Description |
|---|---|
| Name | zh_core_web_trf |
| Version | 3.7.2 |
| spaCy | >=3.7.0,<3.8.0 |
| Default Pipeline | transformer, tagger, parser, attribute_ruler, ner |
| Components | transformer, tagger, parser, attribute_ruler, ner |
| Vectors | 0 keys, 0 unique vectors (0 dimensions) |
| Sources | OntoNotes 5 (Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, Mohammed El-Bachouti, Robert Belvin, Ann Houston); CoreNLP Universal Dependencies Converter (Stanford NLP Group); bert-base-chinese (Hugging Face) |
| License | MIT |
| Author | Explosion |
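The version and component order in the table can be checked against the installed package at runtime; a small sketch, assuming the package above is installed:

```python
import spacy

nlp = spacy.load("zh_core_web_trf")

# `meta` holds the packaged metadata; `name` omits the language prefix.
print(nlp.meta["name"], nlp.meta["version"])  # core_web_trf 3.7.2

# Component order matches the Default Pipeline row above.
print(nlp.pipe_names)  # ['transformer', 'tagger', 'parser', 'attribute_ruler', 'ner']
```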
### Label Scheme

99 labels across 3 components.
| Component | Labels |
|---|---|
| **tagger** | AD, AS, BA, CC, CD, CS, DEC, DEG, DER, DEV, DT, ETC, FW, IJ, INF, JJ, LB, LC, M, MSP, NN, NR, NT, OD, ON, P, PN, PU, SB, SP, URL, VA, VC, VE, VV, X |
| **parser** | ROOT, acl, advcl:loc, advmod, advmod:dvp, advmod:loc, advmod:rcomp, amod, amod:ordmod, appos, aux:asp, aux:ba, aux:modal, aux:prtmod, auxpass, case, cc, ccomp, compound:nn, compound:vc, conj, cop, dep, det, discourse, dobj, etc, mark, mark:clf, name, neg, nmod, nmod:assmod, nmod:poss, nmod:prep, nmod:range, nmod:tmod, nmod:topic, nsubj, nsubj:xsubj, nsubjpass, nummod, parataxis:prnmod, punct, xcomp |
| **ner** | CARDINAL, DATE, EVENT, FAC, GPE, LANGUAGE, LAW, LOC, MONEY, NORP, ORDINAL, ORG, PERCENT, PERSON, PRODUCT, QUANTITY, TIME, WORK_OF_ART |
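The same label sets can be read off the installed pipeline, and `spacy.explain` gives a short gloss for most labels; a sketch, assuming the package is installed:

```python
import spacy

nlp = spacy.load("zh_core_web_trf")

# Per-component label counts: 36 (tagger) + 45 (parser) + 18 (ner) = 99.
for name in ("tagger", "parser", "ner"):
    print(name, len(nlp.get_pipe(name).labels))

print(spacy.explain("GPE"))  # Countries, cities, states
```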
### Accuracy
| Type | Score |
|---|---|
| `TOKEN_ACC` | 95.85 |
| `TOKEN_P` | 94.58 |
| `TOKEN_R` | 91.36 |
| `TOKEN_F` | 92.94 |
| `TAG_ACC` | 91.75 |
| `SENTS_P` | 70.92 |
| `SENTS_R` | 67.57 |
| `SENTS_F` | 69.21 |
| `DEP_UAS` | 75.72 |
| `DEP_LAS` | 71.45 |
| `ENTS_P` | 76.09 |
| `ENTS_R` | 72.18 |
| `ENTS_F` | 74.08 |
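These scores come from spaCy's standard scorer; `Language.evaluate` returns the same metric keys on your own annotated data. A sketch with a hypothetical one-sentence gold annotation (the text and entity span are made up for illustration):

```python
import spacy
from spacy.training import Example

nlp = spacy.load("zh_core_web_trf")

# Hypothetical gold annotation: "北京" (chars 3-5) labeled GPE.
doc = nlp.make_doc("我住在北京。")
example = Example.from_dict(doc, {"entities": [(3, 5, "GPE")]})

# Keys mirror the table above: ents_p/ents_r/ents_f, tag_acc, dep_uas, dep_las, ...
scores = nlp.evaluate([example])
print(scores["ents_p"], scores["ents_r"], scores["ents_f"])
```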