Chengfengke committed · verified · Commit 0398188 · Parent(s): 772bd9c

Update README.md

Files changed (1): README.md (+4 −4)
README.md CHANGED
@@ -9,7 +9,7 @@ language:
 - zh
 pipeline_tag: fill-mask
 ---
-# Herberta: Pretrained Language Model for Herbal Medicine
+# Herbert: Pretrained Bert Model for Herbal Medicine
 
 **Herberta** is a pretrained model for herbal medicine research, developed based on the `bert-base-chinese` model. The model has been fine-tuned on domain-specific data from 675 ancient books and 32 Traditional Chinese Medicine (TCM) textbooks. It is designed to support a variety of TCM-related NLP tasks.
 
@@ -50,8 +50,8 @@ pip install herberta
 ```python
 from transformers import AutoTokenizer, AutoModel
 
-# Replace "XiaoEnn/herberta" with the Hugging Face model repository name
-model_name = "XiaoEnn/herberta"
+# Replace "Chengfengke/herbert" with the Hugging Face model repository name
+model_name = "Chengfengke/herbert"
 
 # Load tokenizer and model
 tokenizer = AutoTokenizer.from_pretrained(model_name)
@@ -80,7 +80,7 @@ print("Embedding vector:", sentence_embedding)
 from transformers import BertTokenizer, BertForMaskedLM
 
 # Load the model and tokenizer
-model_name = "Chengfengke/herberta"
+model_name = "Chengfengke/herbert"
 tokenizer = BertTokenizer.from_pretrained(model_name)
 model = BertForMaskedLM.from_pretrained(model_name)
 inputs = tokenizer("This is an example text for herbal medicine.", return_tensors="pt")
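The fill-mask snippet in the last hunk stops after tokenization. A minimal sketch of the post-processing step that would follow — ranking candidate tokens at the `[MASK]` position from the model's logits — using only `torch`, a stand-in random logits tensor, and a hypothetical toy vocabulary (with the real model, `model(**inputs).logits` and the tokenizer's vocabulary would take their places):

```python
import torch

# Hypothetical toy vocabulary; the real tokenizer's vocab would replace this.
vocab = ["人参", "黄芪", "甘草", "当归"]

torch.manual_seed(0)
# Stand-in for model(**inputs).logits, shaped (batch, seq_len, vocab_size).
logits = torch.randn(1, 5, len(vocab))
mask_pos = 2  # position of the [MASK] token in the input sequence

# Softmax over the vocabulary at the masked position, then take top-k.
probs = torch.softmax(logits[0, mask_pos], dim=-1)
top = torch.topk(probs, k=2)
predictions = [(vocab[i], float(p)) for p, i in zip(top.values, top.indices)]
print(predictions)
```

With the actual checkpoint, the same `softmax` + `topk` pattern applied at the `[MASK]` index yields the model's fill-mask candidates; the `transformers` `fill-mask` pipeline wraps exactly this logic.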