<!--Copyright 2022 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# ์งˆ์˜ ์‘๋‹ต(Question Answering)[[question-answering]]
[[open-in-colab]]
<Youtube id="ajPx5LwJD-I"/>
์งˆ์˜ ์‘๋‹ต ํƒœ์Šคํฌ๋Š” ์ฃผ์–ด์ง„ ์งˆ๋ฌธ์— ๋Œ€ํ•œ ๋‹ต๋ณ€์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. Alexa, Siri ๋˜๋Š” Google๊ณผ ๊ฐ™์€ ๊ฐ€์ƒ ๋น„์„œ์—๊ฒŒ ๋‚ ์”จ๊ฐ€ ์–ด๋–ค์ง€ ๋ฌผ์–ด๋ณธ ์ ์ด ์žˆ๋‹ค๋ฉด ์งˆ์˜ ์‘๋‹ต ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ด๋ณธ ์ ์ด ์žˆ์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์งˆ์˜ ์‘๋‹ต ํƒœ์Šคํฌ์—๋Š” ์ผ๋ฐ˜์ ์œผ๋กœ ๋‘ ๊ฐ€์ง€ ์œ ํ˜•์ด ์žˆ์Šต๋‹ˆ๋‹ค.
- ์ถ”์ถœ์ (Extractive) ์งˆ์˜ ์‘๋‹ต: ์ฃผ์–ด์ง„ ๋ฌธ๋งฅ์—์„œ ๋‹ต๋ณ€์„ ์ถ”์ถœํ•ฉ๋‹ˆ๋‹ค.
- ์ƒ์„ฑ์ (Abstractive) ์งˆ์˜ ์‘๋‹ต: ๋ฌธ๋งฅ์—์„œ ์งˆ๋ฌธ์— ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ๋‹ตํ•˜๋Š” ๋‹ต๋ณ€์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค.
์ด ๊ฐ€์ด๋“œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋ฐฉ๋ฒ•๋“ค์„ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค.
1. ์ถ”์ถœ์  ์งˆ์˜ ์‘๋‹ต์„ ํ•˜๊ธฐ ์œ„ํ•ด [SQuAD](https://huggingface.co/datasets/squad) ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ [DistilBERT](https://huggingface.co/distilbert/distilbert-base-uncased) ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ
2. ์ถ”๋ก ์— ๋ฏธ์„ธ ์กฐ์ •๋œ ๋ชจ๋ธ ์‚ฌ์šฉํ•˜๊ธฐ
<Tip>
์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ ์„ค๋ช…ํ•˜๋Š” ํƒœ์Šคํฌ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜์—์„œ ์ง€์›๋ฉ๋‹ˆ๋‹ค.
<!--This tip is automatically generated by `make fix-copies`, do not fill manually!-->
[ALBERT](../model_doc/albert), [BART](../model_doc/bart), [BERT](../model_doc/bert), [BigBird](../model_doc/big_bird), [BigBird-Pegasus](../model_doc/bigbird_pegasus), [BLOOM](../model_doc/bloom), [CamemBERT](../model_doc/camembert), [CANINE](../model_doc/canine), [ConvBERT](../model_doc/convbert), [Data2VecText](../model_doc/data2vec-text), [DeBERTa](../model_doc/deberta), [DeBERTa-v2](../model_doc/deberta-v2), [DistilBERT](../model_doc/distilbert), [ELECTRA](../model_doc/electra), [ERNIE](../model_doc/ernie), [ErnieM](../model_doc/ernie_m), [FlauBERT](../model_doc/flaubert), [FNet](../model_doc/fnet), [Funnel Transformer](../model_doc/funnel), [GPT-J](../model_doc/gptj), [I-BERT](../model_doc/ibert), [LayoutLMv2](../model_doc/layoutlmv2), [LayoutLMv3](../model_doc/layoutlmv3), [LED](../model_doc/led), [LiLT](../model_doc/lilt), [Longformer](../model_doc/longformer), [LUKE](../model_doc/luke), [LXMERT](../model_doc/lxmert), [MarkupLM](../model_doc/markuplm), [mBART](../model_doc/mbart), [MEGA](../model_doc/mega), [Megatron-BERT](../model_doc/megatron-bert), [MobileBERT](../model_doc/mobilebert), [MPNet](../model_doc/mpnet), [MVP](../model_doc/mvp), [Nezha](../model_doc/nezha), [Nystrรถmformer](../model_doc/nystromformer), [OPT](../model_doc/opt), [QDQBert](../model_doc/qdqbert), [Reformer](../model_doc/reformer), [RemBERT](../model_doc/rembert), [RoBERTa](../model_doc/roberta), [RoBERTa-PreLayerNorm](../model_doc/roberta-prelayernorm), [RoCBert](../model_doc/roc_bert), [RoFormer](../model_doc/roformer), [Splinter](../model_doc/splinter), [SqueezeBERT](../model_doc/squeezebert), [XLM](../model_doc/xlm), [XLM-RoBERTa](../model_doc/xlm-roberta), [XLM-RoBERTa-XL](../model_doc/xlm-roberta-xl), [XLNet](../model_doc/xlnet), [X-MOD](../model_doc/xmod), [YOSO](../model_doc/yoso)
<!--End of the generated tip-->
</Tip>
์‹œ์ž‘ํ•˜๊ธฐ ์ „์—, ํ•„์š”ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ๋ชจ๋‘ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”:
```bash
pip install transformers datasets evaluate
```
์—ฌ๋Ÿฌ๋ถ„์˜ ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๊ณต์œ ํ•  ์ˆ˜ ์žˆ๋„๋ก Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ๋ฉ”์‹œ์ง€๊ฐ€ ํ‘œ์‹œ๋˜๋ฉด ํ† ํฐ์„ ์ž…๋ ฅํ•ด์„œ ๋กœ๊ทธ์ธํ•ฉ๋‹ˆ๋‹ค:
```py
>>> from huggingface_hub import notebook_login
>>> notebook_login()
```
## Load SQuAD dataset[[load-squad-dataset]]
Start by loading a smaller subset of the SQuAD dataset from the 🤗 Datasets library. This'll give you a chance to experiment and make sure everything works before spending more time training on the full dataset.
```py
>>> from datasets import load_dataset
>>> squad = load_dataset("squad", split="train[:5000]")
```
๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ๋ถ„ํ• ๋œ `train`์„ [`~datasets.Dataset.train_test_split`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ด ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ ์„ธํŠธ์™€ ํ…Œ์ŠคํŠธ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋กœ ๋‚˜๋ˆ„์–ด์ค๋‹ˆ๋‹ค:
```py
>>> squad = squad.train_test_split(test_size=0.2)
```
Then take a look at an example:
```py
>>> squad["train"][0]
{'answers': {'answer_start': [515], 'text': ['Saint Bernadette Soubirous']},
'context': 'Architecturally, the school has a Catholic character. Atop the Main Building\'s gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend "Venite Ad Me Omnes". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary.',
'id': '5733be284776f41900661182',
'question': 'To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?',
'title': 'University_of_Notre_Dame'
}
```
์ด ์ค‘์—์„œ ๋ช‡ ๊ฐ€์ง€ ์ค‘์š”ํ•œ ํ•ญ๋ชฉ์ด ์žˆ์Šต๋‹ˆ๋‹ค:
- `answers`: ๋‹ต์•ˆ ํ† ํฐ์˜ ์‹œ์ž‘ ์œ„์น˜์™€ ๋‹ต์•ˆ ํ…์ŠคํŠธ
- `context`: ๋ชจ๋ธ์ด ๋‹ต์„ ์ถ”์ถœํ•˜๋Š”๋ฐ ํ•„์š”ํ•œ ๋ฐฐ๊ฒฝ ์ง€์‹
- `question`: ๋ชจ๋ธ์ด ๋‹ตํ•ด์•ผ ํ•˜๋Š” ์งˆ๋ฌธ
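Note that `answer_start` is a character index into `context`, not a token index. As an optional sanity check (using the `squad` split loaded above), you can confirm that slicing the context at that position recovers the answer text:
```py
>>> example = squad["train"][0]
>>> start = example["answers"]["answer_start"][0]
>>> text = example["answers"]["text"][0]
>>> example["context"][start : start + len(text)] == text
True
```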
## Preprocess[[preprocess]]
<Youtube id="qgaM0weJHpA"/>
๋‹ค์Œ ๋‹จ๊ณ„์—์„œ๋Š” `question` ๋ฐ `context` ํ•ญ๋ชฉ์„ ์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด DistilBERT ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค:
```py
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased")
```
์งˆ์˜ ์‘๋‹ต ํƒœ์Šคํฌ์™€ ๊ด€๋ จํ•ด์„œ ํŠนํžˆ ์œ ์˜ํ•ด์•ผํ•  ๋ช‡ ๊ฐ€์ง€ ์ „์ฒ˜๋ฆฌ ๋‹จ๊ณ„๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค:
1. ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์ผ๋ถ€ ์˜ˆ์ œ์—๋Š” ๋ชจ๋ธ์˜ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด๋ฅผ ์ดˆ๊ณผํ•˜๋Š” ๋งค์šฐ ๊ธด `context`๊ฐ€ ์žˆ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ธด ์‹œํ€€์Šค๋ฅผ ๋‹ค๋ฃจ๊ธฐ ์œ„ํ•ด์„œ๋Š”, `truncation="only_second"`๋กœ ์„ค์ •ํ•ด `context`๋งŒ ์ž˜๋ผ๋‚ด๋ฉด ๋ฉ๋‹ˆ๋‹ค.
2. ๊ทธ ๋‹ค์Œ, `return_offset_mapping=True`๋กœ ์„ค์ •ํ•ด ๋‹ต๋ณ€์˜ ์‹œ์ž‘๊ณผ ์ข…๋ฃŒ ์œ„์น˜๋ฅผ ์›๋ž˜์˜ `context`์— ๋งคํ•‘ํ•ฉ๋‹ˆ๋‹ค.
3. ๋งคํ•‘์„ ์™„๋ฃŒํ•˜๋ฉด, ์ด์ œ ๋‹ต๋ณ€์—์„œ ์‹œ์ž‘ ํ† ํฐ๊ณผ ์ข…๋ฃŒ ํ† ํฐ์„ ์ฐพ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜คํ”„์…‹์˜ ์–ด๋А ๋ถ€๋ถ„์ด `question`๊ณผ `context`์— ํ•ด๋‹นํ•˜๋Š”์ง€ ์ฐพ์„ ์ˆ˜ ์žˆ๋„๋ก [`~tokenizers.Encoding.sequence_ids`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”.
๋‹ค์Œ์€ `answer`์˜ ์‹œ์ž‘ ํ† ํฐ๊ณผ ์ข…๋ฃŒ ํ† ํฐ์„ ์ž˜๋ผ๋‚ด์„œ `context`์— ๋งคํ•‘ํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“œ๋Š” ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค:
```py
>>> def preprocess_function(examples):
... questions = [q.strip() for q in examples["question"]]
... inputs = tokenizer(
... questions,
... examples["context"],
... max_length=384,
... truncation="only_second",
... return_offsets_mapping=True,
... padding="max_length",
... )
... offset_mapping = inputs.pop("offset_mapping")
... answers = examples["answers"]
... start_positions = []
... end_positions = []
... for i, offset in enumerate(offset_mapping):
... answer = answers[i]
... start_char = answer["answer_start"][0]
... end_char = answer["answer_start"][0] + len(answer["text"][0])
... sequence_ids = inputs.sequence_ids(i)
... # Find the start and end of the context
... idx = 0
... while sequence_ids[idx] != 1:
... idx += 1
... context_start = idx
... while sequence_ids[idx] == 1:
... idx += 1
... context_end = idx - 1
... # If the answer is not fully inside the context, label it (0, 0)
... if offset[context_start][0] > end_char or offset[context_end][1] < start_char:
... start_positions.append(0)
... end_positions.append(0)
... else:
... # Otherwise it's the start and end token positions
... idx = context_start
... while idx <= context_end and offset[idx][0] <= start_char:
... idx += 1
... start_positions.append(idx - 1)
... idx = context_end
... while idx >= context_start and offset[idx][1] >= end_char:
... idx -= 1
... end_positions.append(idx + 1)
... inputs["start_positions"] = start_positions
... inputs["end_positions"] = end_positions
... return inputs
```
๋ชจ๋“  ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ์ „์ฒ˜๋ฆฌ๋ฅผ ์ ์šฉํ•˜๋ ค๋ฉด, ๐Ÿค— Datasets [`~datasets.Dataset.map`] ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. `batched=True`๋กœ ์„ค์ •ํ•ด ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์—ฌ๋Ÿฌ ์š”์†Œ๋“ค์„ ํ•œ ๋ฒˆ์— ์ฒ˜๋ฆฌํ•˜๋ฉด `map` ํ•จ์ˆ˜์˜ ์†๋„๋ฅผ ๋น ๋ฅด๊ฒŒ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•„์š”ํ•˜์ง€ ์•Š์€ ์—ด์€ ๋ชจ๋‘ ์ œ๊ฑฐํ•ฉ๋‹ˆ๋‹ค:
```py
>>> tokenized_squad = squad.map(preprocess_function, batched=True, remove_columns=squad["train"].column_names)
```
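As an optional spot check, you can verify that the tokenized dataset now contains only the model inputs and the answer span labels (the column order may vary, so sort before comparing):
```py
>>> sorted(tokenized_squad["train"].column_names)
['attention_mask', 'end_positions', 'input_ids', 'start_positions']
```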
Now create a batch of examples using [`DefaultDataCollator`]. Unlike other data collators in 🤗 Transformers, the [`DefaultDataCollator`] does not apply any additional preprocessing such as padding:
<frameworkcontent>
<pt>
```py
>>> from transformers import DefaultDataCollator
>>> data_collator = DefaultDataCollator()
```
</pt>
<tf>
```py
>>> from transformers import DefaultDataCollator
>>> data_collator = DefaultDataCollator(return_tensors="tf")
```
</tf>
</frameworkcontent>
## Train[[train]]
<frameworkcontent>
<pt>
<Tip>
If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!
</Tip>
์ด์ œ ๋ชจ๋ธ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! [`AutoModelForQuestionAnswering`]์œผ๋กœ DistilBERT๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค:
```py
>>> from transformers import AutoModelForQuestionAnswering, TrainingArguments, Trainer
>>> model = AutoModelForQuestionAnswering.from_pretrained("distilbert/distilbert-base-uncased")
```
At this point, only three steps remain:
1. Define your training hyperparameters in [`TrainingArguments`]. The only required parameter is `output_dir`, which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model).
2. Pass the training arguments to [`Trainer`] along with the model, dataset, tokenizer, and data collator.
3. Call [`~Trainer.train`] to finetune your model.
```py
>>> training_args = TrainingArguments(
... output_dir="my_awesome_qa_model",
... evaluation_strategy="epoch",
... learning_rate=2e-5,
... per_device_train_batch_size=16,
... per_device_eval_batch_size=16,
... num_train_epochs=3,
... weight_decay=0.01,
... push_to_hub=True,
... )
>>> trainer = Trainer(
... model=model,
... args=training_args,
... train_dataset=tokenized_squad["train"],
... eval_dataset=tokenized_squad["test"],
... tokenizer=tokenizer,
... data_collator=data_collator,
... )
>>> trainer.train()
```
ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด, [`~transformers.Trainer.push_to_hub`] ๋งค์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ด ๋ชจ๋ธ์„ Hub์— ๊ณต์œ ํ•ด์„œ ๋ชจ๋“  ์‚ฌ๋žŒ๋“ค์ด ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๊ฒŒ ๊ณต์œ ํ•ด์ฃผ์„ธ์š”:
```py
>>> trainer.push_to_hub()
```
</pt>
<tf>
<Tip>
If you aren't familiar with finetuning a model with Keras, take a look at the basic tutorial [here](../training#train-a-tensorflow-model-with-keras)!
</Tip>
To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:
```py
>>> from transformers import create_optimizer
>>> batch_size = 16
>>> num_epochs = 2
>>> total_train_steps = (len(tokenized_squad["train"]) // batch_size) * num_epochs
>>> optimizer, schedule = create_optimizer(
... init_lr=2e-5,
... num_warmup_steps=0,
... num_train_steps=total_train_steps,
... )
```
๊ทธ ๋‹ค์Œ [`TFAutoModelForQuestionAnswering`]์œผ๋กœ DistilBERT๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค:
```py
>>> from transformers import TFAutoModelForQuestionAnswering
>>> model = TFAutoModelForQuestionAnswering.from_pretrained("distilbert/distilbert-base-uncased")
```
Convert your datasets to the `tf.data.Dataset` format with [`~transformers.TFPreTrainedModel.prepare_tf_dataset`]:
```py
>>> tf_train_set = model.prepare_tf_dataset(
... tokenized_squad["train"],
... shuffle=True,
... batch_size=16,
... collate_fn=data_collator,
... )
>>> tf_validation_set = model.prepare_tf_dataset(
... tokenized_squad["test"],
... shuffle=False,
... batch_size=16,
... collate_fn=data_collator,
... )
```
Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method):
```py
>>> import tensorflow as tf
>>> model.compile(optimizer=optimizer)
```
The last thing to set up before you start training is a way to push your model to the Hub. Specify where to push your model and tokenizer in the [`~transformers.PushToHubCallback`]:
```py
>>> from transformers.keras_callbacks import PushToHubCallback
>>> callback = PushToHubCallback(
... output_dir="my_awesome_qa_model",
... tokenizer=tokenizer,
... )
```
Finally, you're ready to start training your model! Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callback to finetune the model:
```py
>>> model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=num_epochs, callbacks=[callback])  # reuse num_epochs so training matches the learning rate schedule computed above
```
ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด ๋ชจ๋ธ์ด ์ž๋™์œผ๋กœ Hub์— ์—…๋กœ๋“œ๋˜์–ด ๋ˆ„๊ตฌ๋‚˜ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค!
</tf>
</frameworkcontent>
<Tip>
์งˆ์˜ ์‘๋‹ต์„ ์œ„ํ•ด ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ๋” ์ž์„ธํ•œ ์˜ˆ์‹œ๋Š” [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb) ๋˜๋Š” [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb)์„ ์ฐธ์กฐํ•˜์„ธ์š”.
</Tip>
## Evaluate[[evaluate]]
Evaluation for question answering requires a significant amount of postprocessing. To avoid taking up too much of your time, this guide skips the evaluation step. The [`Trainer`] still calculates the evaluation loss during training so you're not completely in the dark about your model's performance.
If you have more time and you're interested in how to evaluate your model for question answering, take a look at the [Question answering](https://huggingface.co/course/chapter7/7?fw=pt#postprocessing) chapter from the 🤗 Hugging Face Course!
## Inference[[inference]]
Great, now that you've finetuned a model, you can use it for inference!
Come up with a question and some context you'd like the model to predict:
```py
>>> question = "How many programming languages does BLOOM support?"
>>> context = "BLOOM has 176 billion parameters and can generate text in 46 languages natural languages and 13 programming languages."
```
์ถ”๋ก ์„ ์œ„ํ•ด ๋ฏธ์„ธ ์กฐ์ •ํ•œ ๋ชจ๋ธ์„ ํ…Œ์ŠคํŠธํ•˜๋Š” ๊ฐ€์žฅ ์‰ฌ์šด ๋ฐฉ๋ฒ•์€ [`pipeline`]์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ ์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ด ์งˆ์˜ ์‘๋‹ต์„ ํ•˜๊ธฐ ์œ„ํ•ด์„œ `pipeline`์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ณ  ํ…์ŠคํŠธ๋ฅผ ์ž…๋ ฅํ•ฉ๋‹ˆ๋‹ค:
```py
>>> from transformers import pipeline
>>> question_answerer = pipeline("question-answering", model="my_awesome_qa_model")
>>> question_answerer(question=question, context=context)
{'score': 0.2058267742395401,
'start': 10,
'end': 95,
'answer': '176 billion parameters and can generate text in 46 languages natural languages and 13'}
```
You can also manually replicate the results of the `pipeline` if you'd like:
<frameworkcontent>
<pt>
Tokenize the text and return PyTorch tensors:
```py
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_qa_model")
>>> inputs = tokenizer(question, context, return_tensors="pt")
```
๋ชจ๋ธ์— ์ž…๋ ฅ์„ ์ „๋‹ฌํ•˜๊ณ  `logits`์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค:
```py
>>> import torch
>>> from transformers import AutoModelForQuestionAnswering
>>> model = AutoModelForQuestionAnswering.from_pretrained("my_awesome_qa_model")
>>> with torch.no_grad():
... outputs = model(**inputs)
```
๋ชจ๋ธ์˜ ์ถœ๋ ฅ์—์„œ ์‹œ์ž‘ ๋ฐ ์ข…๋ฃŒ ์œ„์น˜๊ฐ€ ์–ด๋”˜์ง€ ๊ฐ€์žฅ ๋†’์€ ํ™•๋ฅ ์„ ์–ป์Šต๋‹ˆ๋‹ค:
```py
>>> answer_start_index = outputs.start_logits.argmax()
>>> answer_end_index = outputs.end_logits.argmax()
```
์˜ˆ์ธก๋œ ํ† ํฐ์„ ํ•ด๋…ํ•ด์„œ ๋‹ต์„ ์–ป์Šต๋‹ˆ๋‹ค:
```py
>>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
>>> tokenizer.decode(predict_answer_tokens)
'176 billion parameters and can generate text in 46 languages natural languages and 13'
```
</pt>
<tf>
Tokenize the text and return TensorFlow tensors:
```py
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_qa_model")
>>> inputs = tokenizer(question, context, return_tensors="tf")
```
๋ชจ๋ธ์— ์ž…๋ ฅ์„ ์ „๋‹ฌํ•˜๊ณ  `logits`์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค:
```py
>>> from transformers import TFAutoModelForQuestionAnswering
>>> model = TFAutoModelForQuestionAnswering.from_pretrained("my_awesome_qa_model")
>>> outputs = model(**inputs)
```
๋ชจ๋ธ์˜ ์ถœ๋ ฅ์—์„œ ์‹œ์ž‘ ๋ฐ ์ข…๋ฃŒ ์œ„์น˜๊ฐ€ ์–ด๋”˜์ง€ ๊ฐ€์žฅ ๋†’์€ ํ™•๋ฅ ์„ ์–ป์Šต๋‹ˆ๋‹ค:
```py
>>> import tensorflow as tf
>>> answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
>>> answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
```
์˜ˆ์ธก๋œ ํ† ํฐ์„ ํ•ด๋…ํ•ด์„œ ๋‹ต์„ ์–ป์Šต๋‹ˆ๋‹ค:
```py
>>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
>>> tokenizer.decode(predict_answer_tokens)
'176 billion parameters and can generate text in 46 languages natural languages and 13'
```
</tf>
</frameworkcontent>