<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# BioGPT [[biogpt]]

## Overview [[overview]]

The BioGPT model was proposed in [BioGPT: generative pre-trained transformer for biomedical text generation and mining](https://academic.oup.com/bib/advance-article/doi/10.1093/bib/bbac409/6713511?guestAccessKey=a66d9b5d-4f83-4017-bb52-405815c907b9) by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu. BioGPT is a domain-specific generative pre-trained Transformer language model for biomedical text generation and mining. BioGPT follows the Transformer language model backbone and was pre-trained from scratch on 15 million PubMed abstracts.

λ…Όλ¬Έμ˜ μ΄ˆλ‘μ€ λ‹€μŒκ³Ό κ°™μŠ΅λ‹ˆλ‹€:

*μƒλ¬Όμ˜ν•™ λΆ„μ•Όμ—μ„œ 사전 ν•™μŠ΅λœ μ–Έμ–΄ λͺ¨λΈμ€ 일반 μžμ—°μ–΄ 처리 λΆ„μ•Όμ—μ„œμ˜ 성곡에 μ˜κ°μ„ λ°›μ•„ 점점 더 λ§Žμ€ μ£Όλͺ©μ„ λ°›κ³  μžˆμŠ΅λ‹ˆλ‹€. 일반 μ–Έμ–΄ λΆ„μ•Όμ—μ„œ 사전 ν•™μŠ΅λœ μ–Έμ–΄ λͺ¨λΈμ˜ 두 κ°€μ§€ μ£Όμš” 계톡인 BERT(및 κ·Έ λ³€ν˜•)와 GPT(및 κ·Έ λ³€ν˜•) 쀑 첫 λ²ˆμ§ΈλŠ” μƒλ¬Όμ˜ν•™ λΆ„μ•Όμ—μ„œ BioBERT와 PubMedBERT와 같이 κ΄‘λ²”μœ„ν•˜κ²Œ μ—°κ΅¬λ˜μ—ˆμŠ΅λ‹ˆλ‹€. 이듀은 λ‹€μ–‘ν•œ λΆ„λ₯˜ 기반의 μƒλ¬Όμ˜ν•™ μž‘μ—…μ—μ„œ 큰 성곡을 κ±°λ‘μ—ˆμ§€λ§Œ, 생성 λŠ₯λ ₯의 뢀쑱은 κ·Έλ“€μ˜ 적용 λ²”μœ„λ₯Ό μ œν•œν–ˆμŠ΅λ‹ˆλ‹€. λ³Έ λ…Όλ¬Έμ—μ„œλŠ” λŒ€κ·œλͺ¨ μƒλ¬Όμ˜ν•™ λ¬Έν—Œμ„ 사전 ν•™μŠ΅ν•œ 도메인 νŠΉν™” μƒμ„±ν˜• 트랜슀포머 μ–Έμ–΄ λͺ¨λΈμΈ BioGPTλ₯Ό μ œμ•ˆν•©λ‹ˆλ‹€. μš°λ¦¬λŠ” 6개의 μƒλ¬Όμ˜ν•™ μžμ—°μ–΄ 처리 μž‘μ—…μ—μ„œ BioGPTλ₯Ό ν‰κ°€ν•œ κ²°κ³Ό, λŒ€λΆ€λΆ„μ˜ μž‘μ—…μ—μ„œ 이전 λͺ¨λΈλ³΄λ‹€ μš°μˆ˜ν•œ μ„±λŠ₯을 λ³΄μ˜€μŠ΅λ‹ˆλ‹€. 특히, BC5CDR, KD-DTI, DDI μ—”λ“œ-투-μ—”λ“œ 관계 μΆ”μΆœ μž‘μ—…μ—μ„œ 각각 44.98%, 38.42%, 40.76%의 F1 점수λ₯Ό κΈ°λ‘ν•˜μ˜€μœΌλ©°, PubMedQAμ—μ„œ 78.2%의 정확도λ₯Ό 달성해 μƒˆλ‘œμš΄ 기둝을 μ„Έμ› μŠ΅λ‹ˆλ‹€. λ˜ν•œ ν…μŠ€νŠΈ 생성에 λŒ€ν•œ 사둀 μ—°κ΅¬λŠ” μƒλ¬Όμ˜ν•™ μš©μ–΄μ— λŒ€ν•œ μœ μ°½ν•œ μ„€λͺ…을 μƒμ„±ν•˜λŠ” 데 μžˆμ–΄ BioGPT의 μž₯점을 λ”μš± μž…μ¦ν–ˆμŠ΅λ‹ˆλ‹€.*

이 λͺ¨λΈμ€ [kamalkraj](https://huggingface.co/kamalkraj)에 μ˜ν•΄ κΈ°μ—¬λ˜μ—ˆμŠ΅λ‹ˆλ‹€. 원본 μ½”λ“œλŠ” [μ—¬κΈ°](https://github.com/microsoft/BioGPT)μ—μ„œ 찾을 수 μžˆμŠ΅λ‹ˆλ‹€.

## Usage tips [[usage-tips]]

- BioGPT uses absolute position embeddings, so it is usually advised to pad the inputs on the right rather than the left.
- BioGPT was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next token in a sequence. Leveraging this feature, BioGPT can generate syntactically coherent text, as can be observed in the `run_generation.py` example script.
- The model can take `past_key_values` (for PyTorch) as input, which is the previously computed key/value attention pairs. Using this value prevents the model from re-computing values that were already computed during text generation. For PyTorch, the `past_key_values` argument is documented in detail in the BioGptForCausalLM.forward() method.

### Using Scaled Dot Product Attention (SDPA) [[using-scaled-dot-product-attention-sdpa]]

PyTorch includes a native scaled dot-product attention (SDPA) operator as part of `torch.nn.functional`. This function encompasses several implementations that can be applied depending on the inputs and the hardware in use. See the [official documentation](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html) or the [GPU Inference](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#pytorch-scaled-dot-product-attention) page for more information.
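Conceptually, SDPA computes `softmax(QKက/√d)V`. The toy pure-Python sketch below illustrates the math for a single attention head; it is an illustration only and has nothing to do with PyTorch's fused, hardware-optimized kernels.

```python
import math

def scaled_dot_product_attention(q, k, v):
    """Toy single-head attention: softmax(Q K^T / sqrt(d)) V.

    q, k, v are lists of row vectors (lists of floats)."""
    d = len(q[0])
    # scores[i][j] = (q_i Β· k_j) / sqrt(d)
    scores = [[sum(qi * kj for qi, kj in zip(qrow, krow)) / math.sqrt(d)
               for krow in k] for qrow in q]
    out = []
    for row in scores:
        # numerically stable softmax over the key dimension
        m = max(row)
        exps = [math.exp(s - m) for s in row]
        z = sum(exps)
        weights = [e / z for e in exps]
        # weighted sum of value rows
        out.append([sum(w * vrow[j] for w, vrow in zip(weights, v))
                    for j in range(len(v[0]))])
    return out
```

With a query aligned to the first key, the first value row receives the larger attention weight, and because the weights sum to one the output is a convex combination of the value rows.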

`torch>=2.1.1`μ—μ„œ κ΅¬ν˜„μ΄ κ°€λŠ₯ν•œ 경우 SDPAλŠ” 기본적으둜 μ‚¬μš©λ˜λ©°, `attn_implementation="sdpa"`λ₯Ό `from_pretrained()`μ—μ„œ μ„€μ •ν•˜μ—¬ SDPA μ‚¬μš©μ„ λͺ…μ‹œμ μœΌλ‘œ μš”μ²­ν•  수 μžˆμŠ΅λ‹ˆλ‹€.

```python
import torch
from transformers import BioGptForCausalLM

model = BioGptForCausalLM.from_pretrained("microsoft/biogpt", attn_implementation="sdpa", torch_dtype=torch.float16)
```

NVIDIA GeForce RTX 2060-8GB, PyTorch 2.3.1, Ubuntu 20.04 ν™˜κ²½μ—μ„œ `float16` 및 CausalLM ν—€λ“œκ°€ μžˆλŠ” `microsoft/biogpt` λͺ¨λΈλ‘œ 둜컬 벀치마크λ₯Ό μˆ˜ν–‰ν•œ κ²°κ³Ό, ν›ˆλ ¨ 쀑 λ‹€μŒκ³Ό 같은 속도 ν–₯상을 ν™•μΈν–ˆμŠ΅λ‹ˆλ‹€.

For the best speedups, we advise loading the model in half-precision (e.g. `torch.float16` or `torch.bfloat16`).

| num_training_steps | batch_size | seq_len | is cuda | Time per batch (eager - s) | Time per batch (sdpa - s) | Speedup (%) | Eager peak mem (MB) | sdpa peak mem (MB) | Mem saving (%) |
|--------------------|------------|---------|---------|----------------------------|---------------------------|-------------|---------------------|--------------------|----------------|
| 100                | 1          | 128     | False   | 0.038                      | 0.031                     | 21.301      | 1601.862            | 1601.497           | 0.023          |
| 100                | 1          | 256     | False   | 0.039                      | 0.034                     | 15.084      | 1624.944            | 1625.296           | -0.022         |
| 100                | 2          | 128     | False   | 0.039                      | 0.033                     | 16.820      | 1624.567            | 1625.296           | -0.045         |
| 100                | 2          | 256     | False   | 0.065                      | 0.059                     | 10.255      | 1672.164            | 1672.164           | 0.000          |
| 100                | 4          | 128     | False   | 0.062                      | 0.058                     | 6.998       | 1671.435            | 1672.164           | -0.044         |
| 100                | 4          | 256     | False   | 0.113                      | 0.100                     | 13.316      | 2350.179            | 1848.435           | 27.144         |
| 100                | 8          | 128     | False   | 0.107                      | 0.098                     | 9.883       | 2098.521            | 1848.435           | 13.530         |
| 100                | 8          | 256     | False   | 0.222                      | 0.196                     | 13.413      | 3989.980            | 2986.492           | 33.601         |

NVIDIA GeForce RTX 2060-8GB, PyTorch 2.3.1, Ubuntu 20.04 ν™˜κ²½μ—μ„œ `float16` 및 AutoModel ν—€λ“œκ°€ μžˆλŠ” `microsoft/biogpt` λͺ¨λΈλ‘œ μΆ”λ‘  쀑 λ‹€μŒκ³Ό 같은 속도 ν–₯상을 ν™•μΈν–ˆμŠ΅λ‹ˆλ‹€.

| num_batches | batch_size | seq_len | is cuda | is half | use mask | Per token latency eager (ms) | Per token latency SDPA (ms) | Speedup (%) | Mem eager (MB) | Mem BT (MB) | Mem saved (%) |
|-------------|------------|---------|---------|---------|----------|------------------------------|-----------------------------|-------------|----------------|--------------|---------------|
| 50          | 1          | 64      | True    | True    | True     | 0.115                        | 0.098                       | 17.392      | 716.998        | 716.998      | 0.000         |
| 50          | 1          | 128     | True    | True    | True     | 0.115                        | 0.093                       | 24.640      | 730.916        | 730.916      | 0.000         |
| 50          | 2          | 64      | True    | True    | True     | 0.114                        | 0.096                       | 19.204      | 730.900        | 730.900      | 0.000         |
| 50          | 2          | 128     | True    | True    | True     | 0.117                        | 0.095                       | 23.529      | 759.262        | 759.262      | 0.000         |
| 50          | 4          | 64      | True    | True    | True     | 0.113                        | 0.096                       | 18.325      | 759.229        | 759.229      | 0.000         |
| 50          | 4          | 128     | True    | True    | True     | 0.186                        | 0.178                       | 4.289       | 816.478        | 816.478      | 0.000         |


## Resources [[resources]]

- [Causal language modeling task guide](../tasks/language_modeling)

## BioGptConfig [[transformers.BioGptConfig]]

[[autodoc]] BioGptConfig


## BioGptTokenizer [[transformers.BioGptTokenizer]]

[[autodoc]] BioGptTokenizer
    - save_vocabulary


## BioGptModel [[transformers.BioGptModel]]

[[autodoc]] BioGptModel
    - forward


## BioGptForCausalLM [[transformers.BioGptForCausalLM]]

[[autodoc]] BioGptForCausalLM
    - forward


## BioGptForTokenClassification [[transformers.BioGptForTokenClassification]]

[[autodoc]] BioGptForTokenClassification
    - forward


## BioGptForSequenceClassification [[transformers.BioGptForSequenceClassification]]

[[autodoc]] BioGptForSequenceClassification
    - forward