Muhammadidrees committed on
Commit 0c9dfb0 · verified · 1 Parent(s): 9111427

Upload folder using huggingface_hub

Files changed (6)
  1. .gitattributes +0 -1
  2. README.md +86 -0
  3. config.json +25 -0
  4. merges.txt +0 -0
  5. pytorch_model.bin +3 -0
  6. vocab.json +0 -0
.gitattributes CHANGED
@@ -25,7 +25,6 @@
 *.safetensors filter=lfs diff=lfs merge=lfs -text
 saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.tar.* filter=lfs diff=lfs merge=lfs -text
-*.tar filter=lfs diff=lfs merge=lfs -text
 *.tflite filter=lfs diff=lfs merge=lfs -text
 *.tgz filter=lfs diff=lfs merge=lfs -text
 *.wasm filter=lfs diff=lfs merge=lfs -text
 
README.md ADDED
@@ -0,0 +1,86 @@
---
language: en
license: mit
widget:
- text: "COVID-19 is"
---

## BioGPT

Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Among the two main branches of pre-trained language models in the general language domain, i.e., BERT (and its variants) and GPT (and its variants), the first one has been extensively studied in the biomedical domain, such as BioBERT and PubMedBERT. While they have achieved great success on a variety of discriminative downstream biomedical tasks, the lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. In particular, we achieve 44.98%, 38.42% and 40.76% F1 score on the BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2% accuracy on PubMedQA, setting a new record. Our case study on text generation further demonstrates the advantage of BioGPT in generating fluent descriptions for biomedical terms.

You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:

```python
>>> from transformers import pipeline, set_seed
>>> from transformers import BioGptTokenizer, BioGptForCausalLM
>>> model = BioGptForCausalLM.from_pretrained("microsoft/biogpt")
>>> tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt")
>>> generator = pipeline('text-generation', model=model, tokenizer=tokenizer)
>>> set_seed(42)
>>> generator("COVID-19 is", max_length=20, num_return_sequences=5, do_sample=True)
[{'generated_text': 'COVID-19 is a disease that spreads worldwide and is currently found in a growing proportion of the population'},
 {'generated_text': 'COVID-19 is one of the largest viral epidemics in the world.'},
 {'generated_text': 'COVID-19 is a common condition affecting an estimated 1.1 million people in the United States alone.'},
 {'generated_text': 'COVID-19 is a pandemic, the incidence has been increased in a manner similar to that in other'},
 {'generated_text': 'COVID-19 is transmitted via droplets, air-borne, or airborne transmission.'}]
```

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BioGptTokenizer, BioGptForCausalLM

tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt")
model = BioGptForCausalLM.from_pretrained("microsoft/biogpt")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
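The call above returns the language-modeling output (next-token logits). If you want hidden-state features instead, the base `BioGptModel` class from the same `transformers` integration exposes them directly; a minimal sketch, using the same checkpoint (the feature dimension follows from `hidden_size: 1024` in config.json):

```python
from transformers import BioGptTokenizer, BioGptModel

tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt")
model = BioGptModel.from_pretrained("microsoft/biogpt")

encoded_input = tokenizer("Replace me by any text you'd like.", return_tensors="pt")
# last_hidden_state has shape [batch_size, sequence_length, 1024]
features = model(**encoded_input).last_hidden_state
```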

Beam-search decoding:

```python
import torch
from transformers import BioGptTokenizer, BioGptForCausalLM, set_seed

tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt")
model = BioGptForCausalLM.from_pretrained("microsoft/biogpt")

sentence = "COVID-19 is"
inputs = tokenizer(sentence, return_tensors="pt")

set_seed(42)

with torch.no_grad():
    beam_output = model.generate(**inputs,
                                 min_length=100,
                                 max_length=1024,
                                 num_beams=5,
                                 early_stopping=True)
tokenizer.decode(beam_output[0], skip_special_tokens=True)
'COVID-19 is a global pandemic caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the causative agent of coronavirus disease 2019 (COVID-19), which has spread to more than 200 countries and territories, including the United States (US), Canada, Australia, New Zealand, the United Kingdom (UK), and the United States of America (USA), as of March 11, 2020, with more than 800,000 confirmed cases and more than 800,000 deaths.'
```

## Citation

If you find BioGPT useful in your research, please cite the following paper:

```latex
@article{10.1093/bib/bbac409,
    author = {Luo, Renqian and Sun, Liai and Xia, Yingce and Qin, Tao and Zhang, Sheng and Poon, Hoifung and Liu, Tie-Yan},
    title = "{BioGPT: generative pre-trained transformer for biomedical text generation and mining}",
    journal = {Briefings in Bioinformatics},
    volume = {23},
    number = {6},
    year = {2022},
    month = {09},
    abstract = "{Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Among the two main branches of pre-trained language models in the general language domain, i.e. BERT (and its variants) and GPT (and its variants), the first one has been extensively studied in the biomedical domain, such as BioBERT and PubMedBERT. While they have achieved great success on a variety of discriminative downstream biomedical tasks, the lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. Especially, we get 44.98\%, 38.42\% and 40.76\% F1 score on BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2\% accuracy on PubMedQA, creating a new record. Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fluent descriptions for biomedical terms.}",
    issn = {1477-4054},
    doi = {10.1093/bib/bbac409},
    url = {https://doi.org/10.1093/bib/bbac409},
    note = {bbac409},
    eprint = {https://academic.oup.com/bib/article-pdf/23/6/bbac409/47144271/bbac409.pdf},
}
```
config.json ADDED
@@ -0,0 +1,25 @@
{
  "activation_dropout": 0.0,
  "architectures": [
    "BioGptForCausalLM"
  ],
  "attention_probs_dropout_prob": 0.1,
  "bos_token_id": 0,
  "eos_token_id": 2,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 1024,
  "initializer_range": 0.02,
  "intermediate_size": 4096,
  "layer_norm_eps": 1e-12,
  "layerdrop": 0.0,
  "max_position_embeddings": 1024,
  "model_type": "biogpt",
  "num_attention_heads": 16,
  "num_hidden_layers": 24,
  "pad_token_id": 1,
  "scale_embedding": true,
  "transformers_version": "4.25.0.dev0",
  "use_cache": true,
  "vocab_size": 42384
}
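These fields are what `transformers` reads when instantiating the model. A minimal sketch for inspecting them through the config API; the `microsoft/biogpt` repo id is taken from the README examples above and is an assumption for this mirror (you can also pass a local clone's directory):

```python
from transformers import BioGptConfig

# Fetches and parses config.json from the hub repo (or a local directory).
config = BioGptConfig.from_pretrained("microsoft/biogpt")

print(config.num_hidden_layers)    # 24 transformer layers
print(config.hidden_size)          # 1024-dim hidden states
print(config.num_attention_heads)  # 16 attention heads per layer
print(config.vocab_size)           # 42384 BPE tokens
```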
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:11b9d608003652a84595aecee376d5c09156779228639b0ecb147cafb43cd409
size 1560781537
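Because `pytorch_model.bin` is stored with Git LFS, what is committed is only this pointer; the `oid` field is the SHA-256 digest of the actual 1.56 GB weights file. A minimal sketch for verifying a downloaded copy against the pointer (the local path is a placeholder assumption):

```python
import hashlib

# Hypothetical local path to the downloaded weights file.
path = "pytorch_model.bin"
# Expected digest, copied from the LFS pointer's oid field above.
expected = "11b9d608003652a84595aecee376d5c09156779228639b0ecb147cafb43cd409"

sha = hashlib.sha256()
with open(path, "rb") as f:
    # Hash in 1 MiB chunks to avoid loading the whole file into memory.
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha.update(chunk)

assert sha.hexdigest() == expected, "SHA-256 mismatch: file is corrupt or incomplete"
```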
vocab.json ADDED
The diff for this file is too large to render. See raw diff