How to use chi-vi/novel_ner_v2 with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="chi-vi/novel_ner_v2")
```

```python
# Load the model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("chi-vi/novel_ner_v2")
model = AutoModelForTokenClassification.from_pretrained("chi-vi/novel_ner_v2")
```
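The pipeline returns per-token predictions. As a minimal sketch of post-processing (assuming a standard BIO label scheme, which the model card does not confirm), adjacent `B-`/`I-` tags can be merged into entity spans; the tokens, tags, and entity types below are illustrative, not output from the model:

```python
# Sketch: merge per-token BIO tags into (label, text) entity spans.
# The BIO scheme and the PER/LOC labels are assumptions for illustration;
# a real run would take tokens and tags from the pipeline above.
def bio_to_spans(tokens, tags):
    spans, current = [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                spans.append(current)
            current = [tag[2:], tok]  # start a new span
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            current[1] += tok  # extend the open span
        else:
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return [(label, text) for label, text in spans]

# Hypothetical tags for a web-novel-style sentence (character + location).
tokens = ["韩", "立", "前", "往", "黄", "枫", "谷"]
tags   = ["B-PER", "I-PER", "O", "O", "B-LOC", "I-LOC", "I-LOC"]
print(bio_to_spans(tokens, tags))  # [('PER', '韩立'), ('LOC', '黄枫谷')]
```

Alternatively, passing `aggregation_strategy="simple"` to `pipeline(...)` lets Transformers perform this grouping itself.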
novel_ner_v2
This model is a fine-tuned version of hfl/chinese-electra-180g-large-discriminator on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.1311
- Overall Precision: 0.8622
- Overall Recall: 0.8757
- Overall F1: 0.8689
- Overall Accuracy: 0.9753
- Ucm: 0.7594
- Lcm: 0.7305
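As a quick sanity check on the reported numbers, the overall F1 is the harmonic mean of the overall precision and recall:

```python
# Reported evaluation metrics from the table above.
precision, recall = 0.8622, 0.8757

# F1 = 2PR / (P + R), the harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.8689, matching the reported Overall F1
```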
Model description
NER model for the Chinese web novel domain.
- Downloads last month: 576
Model tree for chi-vi/novel_ner_v2
Base model
hfl/chinese-electra-180g-large-discriminator