Improve model card: Add pipeline tag, language, and descriptive tags, and paper link

#1 opened by nielsr (HF Staff)
Files changed (1)
1. README.md +12 −2

README.md
@@ -1,14 +1,24 @@
 ---
 library_name: transformers
-tags: []
+pipeline_tag: fill-mask
+language:
+- ja
+- en
+tags:
+- bert
+- japanese
+- pharmaceutical
+- biomedical
 ---
 
+Paper: [A Japanese Language Model and Three New Evaluation Benchmarks for Pharmaceutical NLP](https://huggingface.co/papers/2505.16661)
+
 # Model Card
 
 <!-- Provide a quick summary of what the model is/does. -->
 Our **JpharmaBERT (large)** is a continually pre-trained version of the BERT model ([tohoku-nlp/bert-large-japanese-v2](https://huggingface.co/tohoku-nlp/bert-large-japanese-v2)), further trained on pharmaceutical data — the same dataset used for [eques/jpharmatron](https://huggingface.co/EQUES/JPharmatron-7B).
 
-# Examoke Usage
+# Example Usage
 
 <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
 ```python
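The `pipeline_tag: fill-mask` added by this PR declares the task the model performs: predicting the token behind a `[MASK]` position. As a minimal, model-free sketch of that computation (the toy vocabulary and logits below are invented for illustration and stand in for the BERT masked-LM head's real output), a fill-mask pipeline softmaxes the vocabulary logits at the masked position and returns the top-k tokens:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def top_k_fill(vocab, mask_logits, k=3):
    """What a fill-mask pipeline does at the [MASK] position:
    turn vocabulary logits into probabilities, then rank tokens."""
    probs = softmax(mask_logits)
    ranked = sorted(zip(vocab, probs), key=lambda pair: pair[1], reverse=True)
    return ranked[:k]

# Toy vocabulary and logits -- purely illustrative stand-ins.
vocab = ["薬", "病院", "医師", "車"]
logits = [4.0, 2.5, 2.0, -1.0]
for token, prob in top_k_fill(vocab, logits, k=2):
    print(f"{token}: {prob:.3f}")
```

With the actual checkpoint, `transformers.pipeline("fill-mask", model=...)` performs this same ranking end to end; the repository id is left out here because the PR text does not state it.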