hf_public_repos/transformers/docs/source/ko/philosophy.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# 이념과 목표 [[philosophy]]

🤗 Transformers는 다음과 같은 목적으로 만들어진 독자적인 라이브러리입니다:

- 대규모 Transformers 모델을 사용하거나 연구하거나 확장하려는 기계 학습 연구원 및 교육자를 위한 것입니다.
- 모델을 미세 조정하거나 제품 환경에서 서비스하고자 하는 실전 개발자를 위한 것입니다.
- 특정 기계 학습 작업을 해결하기 위해 사전훈련된 모델을 다운로드하고 사용하기만 하려는 엔지니어를 위한 것입니다.

이 라이브러리는 두 가지 주요 목표를 가지고 설계되었습니다:

1. 사용하기 쉽고 빠르게 만드는 것:

   - 학습해야 할 사용자 대상 추상화의 수를 제한했습니다. 실제로 거의 추상화가 없으며, 각 모델을 사용하기 위해 필요한 세 가지 표준 클래스인 [configuration](main_classes/configuration), [models](main_classes/model) 및 전처리 클래스([tokenizer](main_classes/tokenizer)는 NLP용, [image processor](main_classes/image_processor)는 비전용, [feature extractor](main_classes/feature_extractor)는 오디오용, [processor](main_classes/processors)는 멀티모달 입력용)만 사용합니다.
- ์ด๋Ÿฌํ•œ ํด๋ž˜์Šค๋Š” ๊ณตํ†ต์ ์ธ `from_pretrained()` ๋ฉ”์„œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฏธ๋ฆฌ ํ›ˆ๋ จ๋œ ์ธ์Šคํ„ด์Šค์—์„œ ๊ฐ„๋‹จํ•˜๊ณ  ํ†ต์ผ๋œ ๋ฐฉ์‹์œผ๋กœ ์ดˆ๊ธฐํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๋ฉ”์†Œ๋“œ๋Š” ๋ฏธ๋ฆฌ ํ›ˆ๋ จ๋œ ์ฒดํฌํฌ์ธํŠธ์—์„œ ๊ด€๋ จ ํด๋ž˜์Šค ์ธ์Šคํ„ด์Šค์™€ ๊ด€๋ จ ๋ฐ์ดํ„ฐ(๊ตฌ์„ฑ์˜ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ, ํ† ํฌ๋‚˜์ด์ €์˜ ์–ดํœ˜, ๋ชจ๋ธ์˜ ๊ฐ€์ค‘์น˜)๋ฅผ (ํ•„์š”ํ•œ ๊ฒฝ์šฐ) ๋‹ค์šด๋กœ๋“œํ•˜๊ณ  ์บ์‹œํ•˜๋ฉฐ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. ์ฒดํฌํฌ์ธํŠธ๋Š” [Hugging Face Hub](https://huggingface.co/models)์—์„œ ์ œ๊ณต๋˜๊ฑฐ๋‚˜ ์‚ฌ์šฉ์ž ์ž์ฒด์˜ ์ €์žฅ๋œ ์ฒดํฌํฌ์ธํŠธ์—์„œ ์ œ๊ณต๋ฉ๋‹ˆ๋‹ค. - ์ด ์„ธ ๊ฐ€์ง€ ๊ธฐ๋ณธ ํด๋ž˜์Šค ์œ„์— ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋Š” [`pipeline`] API๋ฅผ ์ œ๊ณตํ•˜์—ฌ ์ฃผ์–ด์ง„ ์ž‘์—…์— ๋Œ€ํ•ด ๋ชจ๋ธ์„ ๋น ๋ฅด๊ฒŒ ์ถ”๋ก ํ•˜๋Š” ๋ฐ ์‚ฌ์šฉํ•˜๊ณ , [`Trainer`]๋ฅผ ์ œ๊ณตํ•˜์—ฌ PyTorch ๋ชจ๋ธ์„ ๋น ๋ฅด๊ฒŒ ํ›ˆ๋ จํ•˜๊ฑฐ๋‚˜ ๋ฏธ์„ธ ์กฐ์ •ํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•ฉ๋‹ˆ๋‹ค(๋ชจ๋“  TensorFlow ๋ชจ๋ธ์€ `Keras.fit`๊ณผ ํ˜ธํ™˜๋ฉ๋‹ˆ๋‹ค). - ๊ฒฐ๊ณผ์ ์œผ๋กœ, ์ด ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋Š” ์‹ ๊ฒฝ๋ง์„ ๊ตฌ์ถ•ํ•˜๊ธฐ ์œ„ํ•œ ๋ชจ๋“ˆ์‹ ๋„๊ตฌ ์ƒ์ž๊ฐ€ ์•„๋‹™๋‹ˆ๋‹ค. ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ํ™•์žฅํ•˜๊ฑฐ๋‚˜ ๊ตฌ์ถ•ํ•˜๋ ค๋ฉด ์ผ๋ฐ˜์ ์ธ Python, PyTorch, TensorFlow, Keras ๋ชจ๋“ˆ์„ ์‚ฌ์šฉํ•˜๊ณ  ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ๊ธฐ๋ณธ ํด๋ž˜์Šค๋ฅผ ์ƒ์†ํ•˜์—ฌ ๋ชจ๋ธ ๋กœ๋”ฉ ๋ฐ ์ €์žฅ๊ณผ ๊ฐ™์€ ๊ธฐ๋Šฅ์„ ์žฌ์‚ฌ์šฉํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์— ๋Œ€ํ•œ ์ฝ”๋”ฉ ์ฒ ํ•™์— ๋Œ€ํ•ด ๋” ์ž์„ธํžˆ ์•Œ๊ณ  ์‹ถ๋‹ค๋ฉด [Repeat Yourself](https://huggingface.co/blog/transformers-design-philosophy) ๋ธ”๋กœ๊ทธ ๊ธ€์„ ํ™•์ธํ•ด๋ณด์„ธ์š”. 2. ์›๋ž˜ ๋ชจ๋ธ๊ณผ ๊ฐ€๋Šฅํ•œ ํ•œ ๊ทผ์ ‘ํ•œ ์„ฑ๋Šฅ์„ ์ œ๊ณตํ•˜๋Š” ์ตœ์‹  ๋ชจ๋ธ์„ ์ œ๊ณตํ•˜๋Š” ๊ฒƒ: - ๊ฐ ์•„ํ‚คํ…์ฒ˜์— ๋Œ€ํ•ด ๊ณต์‹ ์ €์ž๊ฐ€ ์ œ๊ณตํ•œ ๊ฒฐ๊ณผ๋ฅผ ์žฌํ˜„ํ•˜๋Š” ์ ์–ด๋„ ํ•œ ๊ฐ€์ง€ ์˜ˆ์ œ๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. - ์ฝ”๋“œ๋Š” ์›๋ž˜ ์ฝ”๋“œ์™€ ๊ฐ€๋Šฅํ•œ ํ•œ ์œ ์‚ฌํ•˜๊ฒŒ ์œ ์ง€๋˜๋ฏ€๋กœ PyTorch ์ฝ”๋“œ๋Š” TensorFlow ์ฝ”๋“œ๋กœ ๋ณ€ํ™˜๋˜์–ด *pytorchic*ํ•˜์ง€ ์•Š์„ ์ˆ˜ ์žˆ๊ณ , ๊ทธ ๋ฐ˜๋Œ€์˜ ๊ฒฝ์šฐ๋„ ๋งˆ์ฐฌ๊ฐ€์ง€์ž…๋‹ˆ๋‹ค. 
기타 목표 몇 가지:

- 모델의 내부를 가능한 일관되게 노출시키기:
  - 전체 은닉 상태와 어텐션 가중치에 대한 액세스를 단일 API를 사용하여 제공합니다.
  - 전처리 클래스 및 기본 모델 API는 모델 간에 쉽게 전환할 수 있도록 표준화되어 있습니다.
- 미세 조정 및 모델 탐색을 위한 유망한 도구들을 주관적으로 선택하기:
  - 미세 조정을 위해 어휘 및 임베딩에 새로운 토큰을 간단하고 일관된 방식으로 추가하는 방법을 제공합니다.
  - Transformer 헤드를 마스킹하고 가지치기하는 간단한 방법을 제공합니다.
- PyTorch, TensorFlow 2.0 및 Flax 간에 쉽게 전환할 수 있도록 하여 하나의 프레임워크로 훈련하고 다른 프레임워크로 추론할 수 있게 합니다.

## 주요 개념 [[main-concepts]]

이 라이브러리는 각 모델에 대해 세 가지 유형의 클래스를 기반으로 구축되었습니다:

- **모델 클래스**는 라이브러리에서 제공하는 사전 훈련된 가중치와 함께 작동하는 PyTorch 모델([torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)), Keras 모델([tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model)), JAX/Flax 모델([flax.linen.Module](https://flax.readthedocs.io/en/latest/api_reference/flax.linen/module.html))일 수 있습니다.
- **구성 클래스**는 모델을 구축하는 데 필요한 하이퍼파라미터(예: 레이어 수 및 은닉 크기)를 저장합니다. 구성 클래스를 직접 인스턴스화할 필요는 없습니다. 특히, 수정 없이 사전 학습된 모델을 사용하는 경우 모델을 생성하면 모델의 일부인 구성이 자동으로 인스턴스화됩니다.
- **전처리 클래스**는 원시 데이터를 모델이 수용하는 형식으로 변환합니다.
[Tokenizer](main_classes/tokenizer)는 각 모델의 어휘를 저장하고, 문자열을 토큰 임베딩 인덱스 리스트로 인코딩하고 디코딩하기 위한 메소드를 제공합니다. [Image processors](main_classes/image_processor)는 비전 입력을 전처리하고, [feature extractors](main_classes/feature_extractor)는 오디오 입력을 전처리하며, [processor](main_classes/processors)는 멀티모달 입력을 처리합니다.

모든 이러한 클래스는 사전 훈련된 인스턴스에서 인스턴스화하고 로컬로 저장하며, 세 가지 메소드를 사용하여 Hub에서 공유할 수 있습니다:

- `from_pretrained()` 메소드를 사용하면 라이브러리 자체에서 제공하는 사전 훈련된 버전(지원되는 모델은 [Model Hub](https://huggingface.co/models)에서 찾을 수 있음)이나 사용자가 로컬로 저장한 경우(또는 서버에 저장한 경우)의 모델, 구성 및 전처리 클래스를 인스턴스화할 수 있습니다.
- `save_pretrained()` 메소드를 사용하면 모델, 구성 및 전처리 클래스를 로컬로 저장하여 `from_pretrained()`를 사용하여 다시 가져올 수 있습니다.
- `push_to_hub()` 메소드를 사용하면 모델, 구성 및 전처리 클래스를 Hub에 공유하여 모두에게 쉽게 접근할 수 있습니다.
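위의 세 메소드가 어떤 흐름으로 동작하는지 감을 잡기 위한 순수 Python 개념 스케치입니다. `TinyModel`, `FAKE_HUB` 등은 설명을 위해 만든 가상의 이름이며, 실제 🤗 Transformers 구현이 아니라 "체크포인트 이름 → (다운로드/캐시 또는 로컬 로드) → 클래스 인스턴스화 → 로컬 저장" 흐름만 단순화해 보여줍니다.

```python
# 개념 스케치: from_pretrained() / save_pretrained()가 하는 일을 단순화한 것.
# FAKE_HUB, TinyModel은 설명용 가상 이름이며 실제 라이브러리 코드가 아닙니다.
import json
import tempfile
from pathlib import Path

FAKE_HUB = {  # "체크포인트 이름 -> (구성, 가중치)" 저장소를 흉내 낸 딕셔너리
    "tiny-model": ({"num_layers": 2, "hidden_size": 8}, [0.1, 0.2]),
}


class TinyModel:
    def __init__(self, config, weights):
        self.config, self.weights = config, weights

    @classmethod
    def from_pretrained(cls, name_or_path):
        path = Path(name_or_path)
        if path.is_dir():  # 로컬로 저장된 체크포인트인 경우
            config = json.loads((path / "config.json").read_text())
            weights = json.loads((path / "weights.json").read_text())
        else:  # Hub에서 다운로드(여기서는 딕셔너리 조회로 대체)
            config, weights = FAKE_HUB[name_or_path]
        return cls(config, weights)

    def save_pretrained(self, save_dir):
        path = Path(save_dir)
        path.mkdir(parents=True, exist_ok=True)
        (path / "config.json").write_text(json.dumps(self.config))
        (path / "weights.json").write_text(json.dumps(self.weights))


with tempfile.TemporaryDirectory() as tmp:
    model = TinyModel.from_pretrained("tiny-model")  # "Hub"에서 가져오기
    model.save_pretrained(tmp)                       # 로컬 저장
    reloaded = TinyModel.from_pretrained(tmp)        # 저장본에서 다시 가져오기
    print(reloaded.config["num_layers"])             # 2
```

실제 라이브러리에서는 여기에 캐시 디렉터리, 파일 버전 관리, 프레임워크별 가중치 형식 처리 등이 더해지지만, 사용자 입장에서 보이는 인터페이스는 위와 같은 두 메소드 호출로 요약됩니다.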
hf_public_repos/transformers/docs/source/ko/pipeline_tutorial.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# 추론을 위한 Pipeline[[pipelines-for-inference]]

[`pipeline`]을 사용하면 언어, 컴퓨터 비전, 오디오 및 멀티모달 태스크에 대한 추론을 위해 [Hub](https://huggingface.co/models)의 어떤 모델이든 쉽게 사용할 수 있습니다. 특정 분야에 대한 경험이 없거나, 모델을 이루는 코드가 익숙하지 않은 경우에도 [`pipeline`]을 사용해서 추론할 수 있어요! 이 튜토리얼에서는 다음을 배워보겠습니다.

* 추론을 위해 [`pipeline`]을 사용하는 방법
* 특정 토크나이저 또는 모델을 사용하는 방법
* 언어, 컴퓨터 비전, 오디오 및 멀티모달 태스크에서 [`pipeline`]을 사용하는 방법

<Tip>

지원하는 모든 태스크와 쓸 수 있는 매개변수를 담은 목록은 [`pipeline`] 설명서를 참고해주세요.

</Tip>

## Pipeline 사용하기[[pipeline-usage]]

각 태스크마다 고유의 [`pipeline`]이 있지만, 개별 파이프라인을 담고 있는 추상화된 [`pipeline`]을 사용하는 것이 일반적으로 더 간단합니다. [`pipeline`]은 태스크에 알맞게 추론이 가능한 기본 모델과 전처리 클래스를 자동으로 로드합니다.

1.
먼저 [`pipeline`]을 생성하고 태스크를 지정하세요.

```py
>>> from transformers import pipeline

>>> generator = pipeline(task="automatic-speech-recognition")
```

2. 그리고 [`pipeline`]에 입력을 넣어주세요.

```py
>>> generator("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")
{'text': 'I HAVE A DREAM BUT ONE DAY THIS NATION WILL RISE UP LIVE UP THE TRUE MEANING OF ITS TREES'}
```

기대했던 결과가 아닌가요? Hub에서 [가장 많이 다운로드된 자동 음성 인식 모델](https://huggingface.co/models?pipeline_tag=automatic-speech-recognition&sort=downloads)로 더 나은 결과를 얻을 수 있는지 확인해보세요. 다음은 [openai/whisper-large](https://huggingface.co/openai/whisper-large)로 시도해보겠습니다.

```py
>>> generator = pipeline(model="openai/whisper-large")
>>> generator("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")
{'text': ' I have a dream that one day this nation will rise up and live out the true meaning of its creed.'}
```

훨씬 더 나아졌군요! Hub의 모델들은 여러 다양한 언어와 전문분야를 아우르기 때문에 꼭 자신의 언어나 분야에 특화된 모델을 찾아보시기 바랍니다. 브라우저를 벗어날 필요없이 Hub에서 직접 모델의 출력을 확인하고 다른 모델과 비교해서 자신의 상황에 더 적합한지, 애매한 입력을 더 잘 처리하는지도 확인할 수 있습니다. 만약 상황에 알맞는 모델이 없다면 언제나 직접 [훈련](training)시킬 수 있습니다!

입력이 여러 개 있는 경우, 리스트 형태로 전달할 수 있습니다.

```py
generator(
    [
        "https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac",
        "https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/1.flac",
    ]
)
```

전체 데이터세트를 순회하거나 웹서버에 올려두어 추론에 사용하고 싶다면, 각 상세 페이지를 참조하세요.
[๋ฐ์ดํ„ฐ์„ธํŠธ์—์„œ Pipeline ์‚ฌ์šฉํ•˜๊ธฐ](#using-pipelines-on-a-dataset) [์›น์„œ๋ฒ„์—์„œ Pipeline ์‚ฌ์šฉํ•˜๊ธฐ](./pipeline_webserver) ## ๋งค๊ฐœ๋ณ€์ˆ˜[[parameters]] [`pipeline`]์€ ๋งŽ์€ ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. ํŠน์ • ํƒœ์Šคํฌ์šฉ์ธ ๊ฒƒ๋„ ์žˆ๊ณ , ๋ฒ”์šฉ์ธ ๊ฒƒ๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์œผ๋กœ ์›ํ•˜๋Š” ์œ„์น˜์— ์–ด๋””๋“  ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ๋„ฃ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py generator(model="openai/whisper-large", my_parameter=1) out = generate(...) # This will use `my_parameter=1`. out = generate(..., my_parameter=2) # This will override and use `my_parameter=2`. out = generate(...) # This will go back to using `my_parameter=1`. ``` ์ค‘์š”ํ•œ 3๊ฐ€์ง€ ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ### ๊ธฐ๊ธฐ(device)[[device]] `device=n`์ฒ˜๋Ÿผ ๊ธฐ๊ธฐ๋ฅผ ์ง€์ •ํ•˜๋ฉด ํŒŒ์ดํ”„๋ผ์ธ์ด ์ž๋™์œผ๋กœ ํ•ด๋‹น ๊ธฐ๊ธฐ์— ๋ชจ๋ธ์„ ๋ฐฐ์น˜ํ•ฉ๋‹ˆ๋‹ค. ํŒŒ์ดํ† ์น˜์—์„œ๋‚˜ ํ…์„œํ”Œ๋กœ์šฐ์—์„œ๋„ ๋ชจ๋‘ ์ž‘๋™ํ•ฉ๋‹ˆ๋‹ค. ```py generator(model="openai/whisper-large", device=0) ``` ๋ชจ๋ธ์ด GPU ํ•˜๋‚˜์— ๋Œ์•„๊ฐ€๊ธฐ ๋ฒ„๊ฒ๋‹ค๋ฉด, `device_map="auto"`๋ฅผ ์ง€์ •ํ•ด์„œ ๐Ÿค— [Accelerate](https://huggingface.co/docs/accelerate)๊ฐ€ ๋ชจ๋ธ ๊ฐ€์ค‘์น˜๋ฅผ ์–ด๋–ป๊ฒŒ ๋กœ๋“œํ•˜๊ณ  ์ €์žฅํ• ์ง€ ์ž๋™์œผ๋กœ ๊ฒฐ์ •ํ•˜๋„๋ก ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py #!pip install accelerate generator(model="openai/whisper-large", device_map="auto") ``` ### ๋ฐฐ์น˜ ์‚ฌ์ด์ฆˆ[[batch-size]] ๊ธฐ๋ณธ์ ์œผ๋กœ ํŒŒ์ดํ”„๋ผ์ธ์€ [์—ฌ๊ธฐ](https://huggingface.co/docs/transformers/main_classes/pipelines#pipeline-batching)์— ๋‚˜์˜จ ์ด์œ ๋กœ ์ถ”๋ก ์„ ์ผ๊ด„ ์ฒ˜๋ฆฌํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๊ฐ„๋‹จํžˆ ์„ค๋ช…ํ•˜์ž๋ฉด ์ผ๊ด„ ์ฒ˜๋ฆฌ๊ฐ€ ๋ฐ˜๋“œ์‹œ ๋” ๋น ๋ฅด์ง€ ์•Š๊ณ  ์˜คํžˆ๋ ค ๋” ๋А๋ ค์งˆ ์ˆ˜๋„ ์žˆ๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์ž์‹ ์˜ ์ƒํ™ฉ์— ์ ํ•ฉํ•˜๋‹ค๋ฉด, ์ด๋ ‡๊ฒŒ ์‚ฌ์šฉํ•˜์„ธ์š”. 
```py
generator = pipeline(model="openai/whisper-large", device=0, batch_size=2)
audio_filenames = [f"audio_{i}.flac" for i in range(10)]
texts = generator(audio_filenames)
```

파이프라인은 추가 코드 없이 제공된 10개의 오디오 파일을 (일괄 처리에 보다 효과적인 GPU 위의) 모델에 2개씩 전달합니다. 출력은 일괄 처리하지 않았을 때와 똑같아야 합니다. 파이프라인에서 속도를 더 낼 수도 있는 방법 중 하나일 뿐입니다.

파이프라인은 일괄 처리의 복잡한 부분을 줄여주기도 합니다. (예를 들어 긴 오디오 파일처럼) 여러 부분으로 나눠야 모델이 처리할 수 있는 것을 [*chunk batching*](./main_classes/pipelines#pipeline-chunk-batching)이라고 하는데, 파이프라인을 사용하면 자동으로 나눠줍니다.

### 특정 태스크용 매개변수[[task-specific-parameters]]

각 태스크마다 구현할 때 유연성과 옵션을 제공하기 위해 태스크용 매개변수가 있습니다. 예를 들어 [`transformers.AutomaticSpeechRecognitionPipeline.__call__`] 메서드에는 동영상의 자막을 넣을 때 유용할 것 같은 `return_timestamps` 매개변수가 있습니다.

```py
>>> # Not using whisper, as it cannot provide timestamps.
>>> generator = pipeline(model="facebook/wav2vec2-large-960h-lv60-self", return_timestamps="word")
>>> generator("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")
{'text': 'I HAVE A DREAM BUT ONE DAY THIS NATION WILL RISE UP AND LIVE OUT THE TRUE MEANING OF ITS CREED',
 'chunks': [{'text': 'I', 'timestamp': (1.22, 1.24)},
  {'text': 'HAVE', 'timestamp': (1.42, 1.58)},
  {'text': 'A', 'timestamp': (1.66, 1.68)},
  {'text': 'DREAM', 'timestamp': (1.76, 2.14)},
  {'text': 'BUT', 'timestamp': (3.68, 3.8)},
  {'text': 'ONE', 'timestamp': (3.94, 4.06)},
  {'text': 'DAY', 'timestamp': (4.16, 4.3)},
  {'text': 'THIS', 'timestamp': (6.36, 6.54)},
  {'text': 'NATION', 'timestamp': (6.68, 7.1)},
  {'text': 'WILL', 'timestamp': (7.32, 7.56)},
  {'text': 'RISE', 'timestamp': (7.8, 8.26)},
  {'text': 'UP', 'timestamp': (8.38, 8.48)},
  {'text': 'AND', 'timestamp': (10.08, 10.18)},
  {'text': 'LIVE', 'timestamp': (10.26, 10.48)},
  {'text': 'OUT', 'timestamp': (10.58, 10.7)},
  {'text': 'THE', 'timestamp': (10.82, 10.9)},
  {'text': 'TRUE', 'timestamp': (10.98, 11.18)},
  {'text': 'MEANING', 'timestamp': (11.26, 11.58)},
  {'text': 'OF', 'timestamp': (11.66, 11.7)},
  {'text': 'ITS', 'timestamp': (11.76, 11.88)},
  {'text': 'CREED', 'timestamp': (12.0, 12.38)}]}
```

보시다시피 모델이 텍스트를 추론할 뿐만 아니라 각 단어를 말한 시점까지도 출력했습니다.

태스크마다 다양한 매개변수를 가지고 있는데요. 원하는 태스크의 API를 참조해서 바꿔볼 수 있는 여러 매개변수를 살펴보세요! 지금까지 다뤄본 [`~transformers.AutomaticSpeechRecognitionPipeline`]에는 `chunk_length_s` 매개변수가 있습니다. 영화나 1시간 분량의 동영상의 자막 작업을 할 때처럼, 일반적으로 모델이 자체적으로 처리할 수 없는 매우 긴 오디오 파일을 처리할 때 유용하죠.
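`chunk_length_s`가 긴 오디오를 어떻게 나누는지 감을 잡기 위한 단순화된 순수 Python 스케치입니다. 실제 파이프라인 내부 구현(stride 양끝 처리, 디코딩 병합 등)과는 다르며, `chunk_audio` 함수는 개념 설명을 위해 만든 가상의 함수입니다.

```python
# 개념 스케치: 긴 오디오를 일정 길이(chunk_len)로 잘라내되,
# 경계에서의 정보 손실을 줄이기 위해 stride만큼 겹치게 만듭니다.
def chunk_audio(samples, chunk_len, stride):
    """samples를 chunk_len 길이의 조각들로 나누되, 인접 조각이 stride만큼 겹치게 합니다."""
    step = chunk_len - stride  # 다음 조각의 시작 위치 간격
    chunks = []
    for start in range(0, len(samples), step):
        chunks.append(samples[start:start + chunk_len])
        if start + chunk_len >= len(samples):  # 마지막 샘플까지 포함했으면 종료
            break
    return chunks


audio = list(range(10))  # 10개 샘플짜리 "오디오"라고 가정
print(chunk_audio(audio, chunk_len=4, stride=1))
# [[0, 1, 2, 3], [3, 4, 5, 6], [6, 7, 8, 9]]
```

각 조각은 모델이 한 번에 처리할 수 있는 길이이고, 겹치는 구간 덕분에 조각 경계에서 단어가 잘리는 문제를 줄일 수 있습니다. 파이프라인은 이렇게 나눈 조각들을 추론한 뒤 결과를 다시 하나로 병합합니다.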
๋„์›€์ด ๋  ๋งŒํ•œ ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์ฐพ์ง€ ๋ชปํ–ˆ๋‹ค๋ฉด ์–ธ์ œ๋“ ์ง€ [์š”์ฒญ](https://github.com/huggingface/transformers/issues/new?assignees=&labels=feature&template=feature-request.yml)ํ•ด์ฃผ์„ธ์š”! ## ๋ฐ์ดํ„ฐ์„ธํŠธ์—์„œ Pipeline ์‚ฌ์šฉํ•˜๊ธฐ[[using-pipelines-on-a-dataset]] ํŒŒ์ดํ”„๋ผ์ธ์€ ๋Œ€๊ทœ๋ชจ ๋ฐ์ดํ„ฐ์„ธํŠธ์—์„œ๋„ ์ถ”๋ก  ์ž‘์—…์„ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋•Œ ์ดํ„ฐ๋ ˆ์ดํ„ฐ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฑธ ์ถ”์ฒœ๋“œ๋ฆฝ๋‹ˆ๋‹ค. ```py def data(): for i in range(1000): yield f"My example {i}" pipe = pipe(model="gpt2", device=0) generated_characters = 0 for out in pipe(data()): generated_characters += len(out["generated_text"]) ``` ์ดํ„ฐ๋ ˆ์ดํ„ฐ `data()`๋Š” ๊ฐ ๊ฒฐ๊ณผ๋ฅผ ํ˜ธ์ถœ๋งˆ๋‹ค ์ƒ์„ฑํ•˜๊ณ , ํŒŒ์ดํ”„๋ผ์ธ์€ ์ž…๋ ฅ์ด ์ˆœํšŒํ•  ์ˆ˜ ์žˆ๋Š” ์ž๋ฃŒ๊ตฌ์กฐ์ž„์„ ์ž๋™์œผ๋กœ ์ธ์‹ํ•˜์—ฌ GPU์—์„œ ๊ธฐ์กด ๋ฐ์ดํ„ฐ๊ฐ€ ์ฒ˜๋ฆฌ๋˜๋Š” ๋™์•ˆ ์ƒˆ๋กœ์šด ๋ฐ์ดํ„ฐ๋ฅผ ๊ฐ€์ ธ์˜ค๊ธฐ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค.(์ด๋•Œ ๋‚ด๋ถ€์ ์œผ๋กœ [DataLoader](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader)๋ฅผ ์‚ฌ์šฉํ•ด์š”.) ์ด ๊ณผ์ •์€ ์ „์ฒด ๋ฐ์ดํ„ฐ์„ธํŠธ๋ฅผ ๋ฉ”๋ชจ๋ฆฌ์— ์ ์žฌํ•˜์ง€ ์•Š๊ณ ๋„ GPU์— ์ตœ๋Œ€ํ•œ ๋น ๋ฅด๊ฒŒ ์ƒˆ๋กœ์šด ์ž‘์—…์„ ๊ณต๊ธ‰ํ•  ์ˆ˜ ์žˆ๊ธฐ ๋•Œ๋ฌธ์— ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ์ผ๊ด„ ์ฒ˜๋ฆฌ๊ฐ€ ๋” ๋น ๋ฅผ ์ˆ˜ ์žˆ๊ธฐ ๋•Œ๋ฌธ์—, `batch_size` ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์กฐ์ •ํ•ด๋ด๋„ ์ข‹์•„์š”. ๋ฐ์ดํ„ฐ์„ธํŠธ๋ฅผ ์ˆœํšŒํ•˜๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ ๐Ÿค— [Datasets](https://github.com/huggingface/datasets/)๋ฅผ ํ™œ์šฉํ•˜๋Š” ๊ฒƒ์ธ๋ฐ์š”. ```py # KeyDataset is a util that will just output the item we're interested in. 
from transformers.pipelines.pt_utils import KeyDataset
from transformers import pipeline
from datasets import load_dataset

pipe = pipeline(model="hf-internal-testing/tiny-random-wav2vec2", device=0)
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation[:10]")

for out in pipe(KeyDataset(dataset, "audio")):
    print(out)
```

## 웹서버에서 Pipeline 사용하기[[using-pipelines-for-a-webserver]]

<Tip>

추론 엔진을 만드는 과정은 따로 페이지를 작성할만한 복잡한 주제입니다.

</Tip>

[Link](./pipeline_webserver)

## 비전 Pipeline[[vision-pipeline]]

비전 태스크를 위해 [`pipeline`]을 사용하는 일은 거의 동일합니다.

태스크를 지정하고 이미지를 분류기에 전달하면 됩니다. 이미지는 인터넷 링크 또는 로컬 경로의 형태로 전달해주세요. 예를 들어 아래에 표시된 고양이는 어떤 종인가요?

![pipeline-cat-chonk](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg)

```py
>>> from transformers import pipeline

>>> vision_classifier = pipeline(model="google/vit-base-patch16-224")
>>> preds = vision_classifier(
...     images="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
... )
>>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds]
>>> preds
[{'score': 0.4335, 'label': 'lynx, catamount'}, {'score': 0.0348, 'label': 'cougar, puma, catamount, mountain lion, painter, panther, Felis concolor'}, {'score': 0.0324, 'label': 'snow leopard, ounce, Panthera uncia'}, {'score': 0.0239, 'label': 'Egyptian cat'}, {'score': 0.0229, 'label': 'tiger cat'}]
```

### 텍스트 Pipeline[[text-pipeline]]

NLP 태스크를 위해 [`pipeline`]을 사용하는 일도 거의 동일합니다.

```py
>>> from transformers import pipeline

>>> # This model is a `zero-shot-classification` model.
>>> # It will classify text, except you are free to choose any label you might imagine
>>> classifier = pipeline(model="facebook/bart-large-mnli")
>>> classifier(
...     "I have a problem with my iphone that needs to be resolved asap!!",
...     candidate_labels=["urgent", "not urgent", "phone", "tablet", "computer"],
... )
{'sequence': 'I have a problem with my iphone that needs to be resolved asap!!', 'labels': ['urgent', 'phone', 'computer', 'not urgent', 'tablet'], 'scores': [0.504, 0.479, 0.013, 0.003, 0.002]}
```

### 멀티모달 Pipeline[[multimodal-pipeline]]

[`pipeline`]은 여러 모달리티(역주: 오디오, 비디오, 텍스트와 같은 데이터 형태)를 지원합니다. 예시로 시각적 질의응답(VQA; Visual Question Answering) 태스크는 텍스트와 이미지를 모두 사용합니다. 그 어떤 이미지 링크나 묻고 싶은 질문도 자유롭게 전달할 수 있습니다. 이미지는 URL 또는 로컬 경로의 형태로 전달해주세요.

예를 들어 이 [거래명세서 사진](https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png)에서 거래명세서 번호를 묻고 싶다면,

```py
>>> from transformers import pipeline

>>> vqa = pipeline(model="impira/layoutlm-document-qa")
>>> vqa(
...     image="https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png",
...     question="What is the invoice number?",
... )
[{'score': 0.42514941096305847, 'answer': 'us-001', 'start': 16, 'end': 16}]
```
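앞서 "데이터세트에서 Pipeline 사용하기" 섹션에서 본 것처럼 파이프라인은 이터레이터를 느긋하게(lazy) 소비하면서 `batch_size`개씩 묶어 모델에 전달합니다. 아래는 그 동작을 흉내 낸 순수 Python 스케치로, 실제 내부 구현(DataLoader 기반)과는 다르며 `batched`는 설명용으로 만든 가상의 함수입니다.

```python
# 개념 스케치: 이터레이터를 전체 메모리에 올리지 않고 batch_size개씩 묶어 소비하는 방식.
from itertools import islice


def data():
    for i in range(5):
        yield f"My example {i}"


def batched(iterable, batch_size):
    """iterable을 느긋하게 소비하면서 batch_size개씩 묶어 내보냅니다."""
    it = iter(iterable)
    while True:
        batch = list(islice(it, batch_size))  # 다음 batch_size개만 꺼내기
        if not batch:
            return
        yield batch


for batch in batched(data(), batch_size=2):
    print(batch)
# ['My example 0', 'My example 1']
# ['My example 2', 'My example 3']
# ['My example 4']
```

마지막 배치가 `batch_size`보다 작을 수 있다는 점만 빼면, 모델 입장에서는 매번 고정 크기에 가까운 묶음이 들어오므로 GPU를 쉬지 않고 돌릴 수 있습니다.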
hf_public_repos/transformers/docs/source/ko/add_tensorflow_model.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# 어떻게 🤗 Transformers 모델을 TensorFlow로 변환하나요? [[how-to-convert-a-transformers-model-to-tensorflow]]

🤗 Transformers에서처럼 사용할 수 있는 여러 가지 프레임워크가 있다는 것은 애플리케이션을 설계할 때 그들의 강점을 유연하게 이용할 수 있다는 장점이 있지만, 모델 별로 호환성을 추가해야 한다는 단점 또한 존재한다는 것을 의미합니다. 좋은 소식은 기존 모델에 TensorFlow 호환성을 추가하는 것이 [처음부터 새로운 모델을 추가하는 것](add_new_model)보다도 간단하다는 것입니다!

만약 대규모 TensorFlow 모델을 더 깊이 이해하려거나, 오픈 소스에 큰 기여를 하려거나, 선택한 모델에 TensorFlow를 활용하려 한다면, 이 안내서는 여러분에게 도움이 될 것입니다.

이 가이드는 Hugging Face 팀의 최소한의 감독 아래에서 🤗 Transformers에서 사용되는 TensorFlow 모델 가중치와/또는 아키텍처를 기여할 수 있는 커뮤니티 구성원인 여러분을 대상으로 합니다. 새로운 모델을 작성하는 것은 쉬운 일이 아니지만, 이 가이드를 통해 조금 덜 힘들고 훨씬 쉬운 작업으로 만들 수 있습니다.
๋ชจ๋‘์˜ ๊ฒฝํ—˜์„ ๋ชจ์œผ๋Š” ๊ฒƒ์€ ์ด ์ž‘์—…์„ ์ ์ฐจ์ ์œผ๋กœ ๋” ์‰ฝ๊ฒŒ ๋งŒ๋“œ๋Š” ๋ฐ ๊ต‰์žฅํžˆ ์ค‘์š”ํ•˜๊ธฐ ๋•Œ๋ฌธ์—, ์ด ๊ฐ€์ด๋“œ๋ฅผ ๊ฐœ์„ ์‹œํ‚ฌ๋งŒํ•œ ์ œ์•ˆ์ด ๋– ์˜ค๋ฅด๋ฉด ๊ณต์œ ํ•˜์‹œ๋Š”๊ฑธ ์ ๊ทน์ ์œผ๋กœ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค! ๋” ๊นŠ์ด ์•Œ์•„๋ณด๊ธฐ ์ „์—, ๐Ÿค— Transformers๋ฅผ ์ฒ˜์Œ ์ ‘ํ•˜๋Š” ๊ฒฝ์šฐ ๋‹ค์Œ ์ž๋ฃŒ๋ฅผ ํ™•์ธํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค: - [๐Ÿค— Transformers์˜ ์ผ๋ฐ˜ ๊ฐœ์š”](add_new_model#general-overview-of-transformers) - [Hugging Face์˜ TensorFlow ์ฒ ํ•™](https://huggingface.co/blog/tensorflow-philosophy) ์ด ๊ฐ€์ด๋“œ์˜ ๋‚˜๋จธ์ง€ ๋ถ€๋ถ„์—์„œ๋Š” ์ƒˆ๋กœ์šด TensorFlow ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜๋ฅผ ์ถ”๊ฐ€ํ•˜๋Š” ๋ฐ ํ•„์š”ํ•œ ๋‹จ๊ณ„, Pytorch๋ฅผ TensorFlow ๋ชจ๋ธ ๊ฐ€์ค‘์น˜๋กœ ๋ณ€ํ™˜ํ•˜๋Š” ์ ˆ์ฐจ ๋ฐ ML ํ”„๋ ˆ์ž„์›Œํฌ ๊ฐ„์˜ ๋ถˆ์ผ์น˜๋ฅผ ํšจ์œจ์ ์œผ๋กœ ๋””๋ฒ„๊น…ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์•Œ๊ฒŒ ๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์‹œ์ž‘ํ•ด๋ด…์‹œ๋‹ค! <Tip> ์‚ฌ์šฉํ•˜๋ ค๋Š” ๋ชจ๋ธ์ด ์ด๋ฏธ ํ•ด๋‹นํ•˜๋Š” TensorFlow ์•„ํ‚คํ…์ฒ˜๊ฐ€ ์žˆ๋Š”์ง€ ํ™•์‹คํ•˜์ง€ ์•Š๋‚˜์š”? ์„ ํƒํ•œ ๋ชจ๋ธ([์˜ˆ](https://huggingface.co/bert-base-uncased/blob/main/config.json#L14))์˜ `config.json`์˜ `model_type` ํ•„๋“œ๋ฅผ ํ™•์ธํ•ด๋ณด์„ธ์š”. ๐Ÿค— Transformers์˜ ํ•ด๋‹น ๋ชจ๋ธ ํด๋”์—๋Š” "modeling_tf"๋กœ ์‹œ์ž‘ํ•˜๋Š” ํŒŒ์ผ์ด ์žˆ๋Š” ๊ฒฝ์šฐ, ํ•ด๋‹น ๋ชจ๋ธ์—๋Š” ํ•ด๋‹น TensorFlow ์•„ํ‚คํ…์ฒ˜([์˜ˆ](https://github.com/huggingface/transformers/tree/main/src/transformers/models/bert))๊ฐ€ ์žˆ๋‹ค๋Š” ์˜๋ฏธ์ž…๋‹ˆ๋‹ค. </Tip> ## TensorFlow ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜ ์ฝ”๋“œ ์ถ”๊ฐ€ํ•˜๋Š” ๋‹จ๊ณ„๋ณ„ ๊ฐ€์ด๋“œ [[step-by-step-guide-to add-tensorFlow-model-architecture-code]] ๋Œ€๊ทœ๋ชจ ์•„ํ‚คํ…์ฒ˜๋ฅผ ๊ฐ€์ง„ ๋ชจ๋ธ์„ ์„ค๊ณ„ํ•˜๋Š” ๋ฐฉ๋ฒ•์—๋Š” ์—ฌ๋Ÿฌ๊ฐ€์ง€๊ฐ€ ์žˆ์œผ๋ฉฐ, ํ•ด๋‹น ์„ค๊ณ„๋ฅผ ๊ตฌํ˜„ํ•˜๋Š” ๋ฐฉ๋ฒ•๋„ ์—ฌ๋Ÿฌ ๊ฐ€์ง€์ž…๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์šฐ๋ฆฌ๋Š” [๐Ÿค— Transformers ์ผ๋ฐ˜ ๊ฐœ์š”](add_new_model#general-overview-of-transformers)์—์„œ ์–ธ๊ธ‰ํ•œ ๋Œ€๋กœ ์ผ๊ด€๋œ ์„ค๊ณ„ ์„ ํƒ์— ๋”ฐ๋ผ์•ผ์ง€๋งŒ ๐Ÿค— Transformers๋ฅผ ์‚ฌ์šฉํ•˜๊ธฐ ํŽธํ•  ๊ฒƒ์ด๋ผ๋Š” ํ™•๊ณ ํ•œ ์˜๊ฒฌ์„ ๊ฐ€์ง€๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. 
์šฐ๋ฆฌ์˜ ๊ฒฝํ—˜์„ ํ†ตํ•ด TensorFlow ๋ชจ๋ธ์„ ์ถ”๊ฐ€ํ•˜๋Š” ๋ฐ ๊ด€๋ จ๋œ ์ค‘์š”ํ•œ ๋ช‡ ๊ฐ€์ง€ ์‚ฌํ•ญ์„ ์•Œ๋ ค ๋“œ๋ฆด ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: - ์ด๋ฏธ ์žˆ๋Š”๊ฑธ ๋‹ค์‹œ ๊ฐœ๋ฐœํ•˜๋ ค ํ•˜์ง€ ๋งˆ์„ธ์š”! ์ตœ์†Œํ•œ 2๊ฐœ์˜ ์ด๋ฏธ ๊ตฌํ˜„๋œ ๋ชจ๋ธ์„ ๋Œ€๊ฐœ ์ฐธ์กฐํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๊ตฌํ˜„ํ•˜๋ ค๋Š” ๋ชจ๋ธ๊ณผ ๊ธฐ๋Šฅ์ƒ ๋™์ผํ•œ Pytorch ๋ชจ๋ธ ํ•˜๋‚˜์™€ ๊ฐ™์€ ๋ฌธ์ œ ์œ ํ˜•์„ ํ’€๊ณ  ์žˆ๋Š” ๋‹ค๋ฅธ TensorFlow ๋ชจ๋ธ ํ•˜๋‚˜๋ฅผ ์‚ดํŽด๋ณด์„ธ์š”. - ์šฐ์ˆ˜ํ•œ ๋ชจ๋ธ ๊ตฌํ˜„์€ ์‹œ๊ฐ„์ด ์ง€๋‚˜๋„ ๋‚จ์•„์žˆ์Šต๋‹ˆ๋‹ค. ์ด๊ฒƒ์€ ์ฝ”๋“œ๊ฐ€ ์•„๋ฆ„๋‹ต๋‹ค๋Š” ์ด์œ ๊ฐ€ ์•„๋‹ˆ๋ผ ์ฝ”๋“œ๊ฐ€ ๋ช…ํ™•ํ•˜๊ณ  ๋””๋ฒ„๊น… ๋ฐ ๊ฐœ์„ ์ด ์‰ฝ๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. TensorFlow ๊ตฌํ˜„์—์„œ ๋‹ค๋ฅธ ๋ชจ๋ธ๋“ค๊ณผ ํŒจํ„ด์„ ๋˜‘๊ฐ™์ด ํ•˜๊ณ  Pytorch ๊ตฌํ˜„๊ณผ์˜ ๋ถˆ์ผ์น˜๋ฅผ ์ตœ์†Œํ™”ํ•˜์—ฌ ๋ฉ”์ธํ…Œ์ด๋„ˆ์˜ ์—…๋ฌด๋ฅผ ์‰ฝ๊ฒŒ ํ•œ๋‹ค๋ฉด, ๊ธฐ์—ฌํ•œ ์ฝ”๋“œ๊ฐ€ ์˜ค๋ž˜๋„๋ก ์œ ์ง€๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. - ํ•„์š”ํ•˜๋‹ค๋ฉด ๋„์›€์„ ์š”์ฒญํ•˜์„ธ์š”! ๐Ÿค— Transformers ํŒ€์€ ์—ฌ๋Ÿฌ๋ถ„์„ ๋•๊ธฐ ์œ„ํ•ด ์žˆ์œผ๋ฉฐ, ์—ฌ๋Ÿฌ๋ถ„์ด ์ง๋ฉดํ•œ ๋™์ผํ•œ ๋ฌธ์ œ์— ๋Œ€ํ•œ ํ•ด๊ฒฐ์ฑ…์„ ์ด๋ฏธ ์ฐพ์€ ๊ฒฝ์šฐ๋„ ์žˆ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. TensorFlow ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜๋ฅผ ์ถ”๊ฐ€ํ•˜๋Š” ๋ฐ ํ•„์š”ํ•œ ๋‹จ๊ณ„๋ฅผ ๊ฐœ๋žต์ ์œผ๋กœ ์จ๋ณด๋ฉด: 1. ๋ณ€ํ™˜ํ•˜๋ ค๋Š” ๋ชจ๋ธ ์„ ํƒ 2. transformers ๊ฐœ๋ฐœ ํ™˜๊ฒฝ ์ค€๋น„ 3. (์„ ํƒ ์‚ฌํ•ญ) ์ด๋ก ์  ์ธก๋ฉด ๋ฐ ๊ธฐ์กด ๊ตฌํ˜„ ์ดํ•ด 4. ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜ ๊ตฌํ˜„ 5. ๋ชจ๋ธ ํ…Œ์ŠคํŠธ ๊ตฌํ˜„ 6. PR (pull request) ์ œ์ถœ 7. (์„ ํƒ ์‚ฌํ•ญ) ๋ฐ๋ชจ ๋นŒ๋“œ ๋ฐ ๊ณต์œ  ### 1.-3. ๋ชจ๋ธ ๊ธฐ์—ฌ ์ค€๋น„ [[1.-3.-prepare-your-model-contribution]] **1. ๋ณ€ํ™˜ํ•˜๋ ค๋Š” ๋ชจ๋ธ ์„ ํƒ** ์šฐ์„  ๊ธฐ๋ณธ ์‚ฌํ•ญ๋ถ€ํ„ฐ ์‹œ์ž‘ํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ๋จผ์ € ๋ณ€ํ™˜ํ•˜๋ ค๋Š” ์•„ํ‚คํ…์ฒ˜๋ฅผ ์•Œ์•„์•ผ ํ•ฉ๋‹ˆ๋‹ค. ํŠน์ • ์•„ํ‚คํ…์ฒ˜์— ๋Œ€ํ•œ ๊ด€์‹ฌ ์—†๋Š” ๊ฒฝ์šฐ, ๐Ÿค— Transformers ํŒ€์—๊ฒŒ ์ œ์•ˆ์„ ์š”์ฒญํ•˜๋Š” ๊ฒƒ์€ ์—ฌ๋Ÿฌ๋ถ„์˜ ์˜ํ–ฅ๋ ฅ์„ ๊ทน๋Œ€ํ™”ํ•˜๋Š” ์ข‹์€ ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค. ์šฐ๋ฆฌ๋Š” TensorFlow์—์„œ ๋น ์ ธ ์žˆ๋Š” ๊ฐ€์žฅ ์œ ๋ช…ํ•œ ์•„ํ‚คํ…์ฒ˜๋กœ ์ด๋Œ์–ด ๋“œ๋ฆฌ๊ฒ ์Šต๋‹ˆ๋‹ค. 
TensorFlow์—์„œ ์‚ฌ์šฉํ•  ๋ชจ๋ธ์ด ์ด๋ฏธ ๐Ÿค— Transformers์— TensorFlow ์•„ํ‚คํ…์ฒ˜ ๊ตฌํ˜„์ด ์žˆ์ง€๋งŒ ๊ฐ€์ค‘์น˜๊ฐ€ ์—†๋Š” ๊ฒฝ์šฐ, ์ด ํŽ˜์ด์ง€์˜ [๊ฐ€์ค‘์น˜ ์ถ”๊ฐ€ ์„น์…˜](#adding-tensorflow-weights-to-hub)์œผ๋กœ ๋ฐ”๋กœ ์ด๋™ํ•˜์…”๋„ ๋ฉ๋‹ˆ๋‹ค. ๊ฐ„๋‹จํžˆ ๋งํ•ด์„œ, ์ด ์•ˆ๋‚ด์„œ์˜ ๋‚˜๋จธ์ง€ ๋ถ€๋ถ„์€ TensorFlow ๋ฒ„์ „์˜ *BrandNewBert*([๊ฐ€์ด๋“œ](add_new_model)์™€ ๋™์ผํ•œ ์˜ˆ์ œ)๋ฅผ ๊ธฐ์—ฌํ•˜๋ ค๊ณ  ๊ฒฐ์ •ํ–ˆ๋‹ค๊ณ  ๊ฐ€์ •ํ•ฉ๋‹ˆ๋‹ค. <Tip> TensorFlow ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜์— ์ž‘์—…์„ ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•ด๋‹น ์ž‘์—…์ด ์ง„ํ–‰ ์ค‘์ธ์ง€ ํ™•์ธํ•˜์„ธ์š”. `BrandNewBert`๋ฅผ ๊ฒ€์ƒ‰ํ•˜์—ฌ [pull request GitHub ํŽ˜์ด์ง€](https://github.com/huggingface/transformers/pulls?q=is%3Apr)์—์„œ TensorFlow ๊ด€๋ จ pull request๊ฐ€ ์—†๋Š”์ง€ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. </Tip> **2. transformers ๊ฐœ๋ฐœ ํ™˜๊ฒฝ ์ค€๋น„** ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜๋ฅผ ์„ ํƒํ•œ ํ›„, ๊ด€๋ จ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•  ์˜๋„๋ฅผ ๋ฏธ๋ฆฌ ์•Œ๋ฆฌ๊ธฐ ์œ„ํ•ด Draft PR์„ ์—ฌ์„ธ์š”. ์•„๋ž˜ ์ง€์นจ๋Œ€๋กœ ํ•˜์‹œ๋ฉด ํ™˜๊ฒฝ์„ ์„ค์ •ํ•˜๊ณ  Draft PR์„ ์—ด ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 1. 'Fork' ๋ฒ„ํŠผ์„ ํด๋ฆญํ•˜์—ฌ [๋ฆฌํฌ์ง€ํ„ฐ๋ฆฌ](https://github.com/huggingface/transformers)๋ฅผ ํฌํฌํ•˜์„ธ์š”. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด GitHub ์‚ฌ์šฉ์ž ๊ณ„์ •์— ์ฝ”๋“œ์˜ ์‚ฌ๋ณธ์ด ์ƒ์„ฑ๋ฉ๋‹ˆ๋‹ค. 2. `transformers` ํฌํฌ๋ฅผ ๋กœ์ปฌ ๋””์Šคํฌ์— ํด๋ก ํ•˜๊ณ  ์›๋ณธ ๋ฆฌํฌ์ง€ํ„ฐ๋ฆฌ๋ฅผ ์›๊ฒฉ ๋ฆฌํฌ์ง€ํ„ฐ๋ฆฌ๋กœ ์ถ”๊ฐ€ํ•˜์„ธ์š”. ```bash git clone https://github.com/[your Github handle]/transformers.git cd transformers git remote add upstream https://github.com/huggingface/transformers.git ``` 3. ๊ฐœ๋ฐœ ํ™˜๊ฒฝ์„ ์„ค์ •ํ•˜์„ธ์š”. ์˜ˆ๋ฅผ ๋“ค์–ด, ๋‹ค์Œ ๋ช…๋ น์„ ์‹คํ–‰ํ•˜์—ฌ ๊ฐœ๋ฐœ ํ™˜๊ฒฝ์„ ์„ค์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```bash python -m venv .env source .env/bin/activate pip install -e ".[dev]" ``` ์šด์˜ ์ฒด์ œ์— ๋”ฐ๋ผ์„œ Transformers์˜ ์„ ํƒ์  ์ข…์†์„ฑ์ด ์ฆ๊ฐ€ํ•˜๋ฉด์„œ ์œ„ ๋ช…๋ น์ด ์‹คํŒจํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฐ ๊ฒฝ์šฐ TensorFlow๋ฅผ ์„ค์น˜ํ•œ ํ›„ ๋‹ค์Œ์„ ์‹คํ–‰ํ•˜์„ธ์š”. 
```bash
pip install -e ".[quality]"
```

**참고:** CUDA를 설치할 필요는 없습니다. 새로운 모델이 CPU에서 작동하도록 만드는 것만으로 충분합니다.

4. 메인 브랜치에서 만드려는 기능이 잘 표현되는 이름으로 브랜치를 만듭니다.

```bash
git checkout -b add_tf_brand_new_bert
```

5. 메인 브랜치의 현재 상태를 페치(fetch)하고 리베이스하세요.

```bash
git fetch upstream
git rebase upstream/main
```

6. `transformers/src/models/brandnewbert/`에 `modeling_tf_brandnewbert.py`라는 빈 `.py` 파일을 추가하세요. 이 파일이 TensorFlow 모델 파일이 될 것입니다.

7. 변경 사항을 계정에 푸시하세요.

```bash
git add .
git commit -m "initial commit"
git push -u origin add_tf_brand_new_bert
```

8. 만족스러운 경우 GitHub에서 포크된 웹 페이지로 이동합니다. "Pull request"를 클릭합니다. Hugging Face 팀의 GitHub ID를 리뷰어로 추가해서, 앞으로의 변경 사항에 대해 Hugging Face 팀이 알림을 받을 수 있도록 합니다.

9. GitHub Pull Requests 페이지의 오른쪽에 있는 "Convert to draft"를 클릭하여 PR을 초안으로 변경하세요.

이제 🤗 Transformers에서 *BrandNewBert*를 TensorFlow로 변환할 개발 환경을 설정했습니다.

**3. (선택 사항) 이론적 측면 및 기존 구현 이해**

*BrandNewBert*처럼 자세한 글이 있다면 시간을 내어 논문을 읽는걸 추천드립니다. 이해하기 어려운 부분이 많을 수 있습니다. 그렇다고 해서 걱정하지 마세요! 목표는 논문의 심도있는 이론적 이해가 아니라 TensorFlow를 사용하여 🤗 Transformers에 모델을 효과적으로 다시 구현하는 데 필요한 필수 정보를 추출하는 것입니다. 많은 시간을 이론적 이해에 투자할 필요는 없지만 실용적인 측면에서 현재 존재하는 모델 문서 페이지(e.g.
[model docs for BERT](model_doc/bert))에 집중하는 것이 좋습니다.

모델의 기본 사항을 이해한 후, 기존 구현을 이해하는 것이 중요합니다. 이는 작업 중인 모델에 대한 실제 구현이 여러분의 기대와 일치함을 확인하고, TensorFlow 측면에서의 기술적 문제를 예상하는 데 도움이 됩니다.

막대한 양의 정보를 처음으로 학습할 때 압도당하는 것은 자연스러운 일입니다. 이 단계에서 모델의 모든 측면을 이해해야 하는 필요는 전혀 없습니다. 그러나 우리는 Hugging Face의 [포럼](https://discuss.huggingface.co/)을 통해 질문이 있는 경우 대답을 구할 것을 권장합니다.

### 4. 모델 구현 [[4-model-implementation]]

이제 드디어 코딩을 시작할 시간입니다. 우리의 제안된 시작점은 PyTorch 파일 자체입니다: `modeling_brand_new_bert.py`의 내용을 `src/transformers/models/brand_new_bert/` 내부의 `modeling_tf_brand_new_bert.py`에 복사합니다. 이 섹션의 목표는 파일을 수정하고 🤗 Transformers의 import 구조를 업데이트하여 `TFBrandNewBert` 및 `TFBrandNewBert.from_pretrained(model_repo, from_pt=True)`가 성공적으로 작동하는 TensorFlow *BrandNewBert* 모델을 가져올 수 있도록 하는 것입니다.

유감스럽게도, PyTorch 모델을 TensorFlow로 변환하는 규칙은 없습니다. 그러나 프로세스를 가능한한 원활하게 만들기 위해 다음 팁을 따를 수 있습니다.

- 모든 클래스 이름 앞에 `TF`를 붙입니다(예: `BrandNewBert`는 `TFBrandNewBert`가 됩니다).
- 대부분의 PyTorch 작업에는 직접적인 TensorFlow 대체가 있습니다. 예를 들어, `torch.nn.Linear`는 `tf.keras.layers.Dense`에 해당하고, `torch.nn.Dropout`은 `tf.keras.layers.Dropout`에 해당합니다.
ํŠน์ • ์ž‘์—…์— ๋Œ€ํ•ด ํ™•์‹ ์ด ์—†๋Š” ๊ฒฝ์šฐ [TensorFlow ๋ฌธ์„œ](https://www.tensorflow.org/api_docs/python/tf)๋‚˜ [PyTorch ๋ฌธ์„œ](https://pytorch.org/docs/stable/)๋ฅผ ์ฐธ์กฐํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. - ๐Ÿค— Transformers ์ฝ”๋“œ๋ฒ ์ด์Šค์—์„œ ํŒจํ„ด์„ ์ฐพ์œผ์„ธ์š”. ์ง์ ‘์ ์ธ ๋Œ€์ฒด๊ฐ€ ์—†๋Š” ํŠน์ • ์ž‘์—…์„ ๋งŒ๋‚˜๋ฉด ๋‹ค๋ฅธ ์‚ฌ๋žŒ์ด ์ด๋ฏธ ๋™์ผํ•œ ๋ฌธ์ œ๋ฅผ ํ•ด๊ฒฐํ•œ ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์Šต๋‹ˆ๋‹ค. - ๊ธฐ๋ณธ์ ์œผ๋กœ PyTorch์™€ ๋™์ผํ•œ ๋ณ€์ˆ˜ ์ด๋ฆ„๊ณผ ๊ตฌ์กฐ๋ฅผ ์œ ์ง€ํ•˜์„ธ์š”. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ๋””๋ฒ„๊น…๊ณผ ๋ฌธ์ œ ์ถ”์ , ๊ทธ๋ฆฌ๊ณ  ๋ฌธ์ œ ํ•ด๊ฒฐ ์ถ”๊ฐ€๊ฐ€ ๋” ์‰ฌ์›Œ์ง‘๋‹ˆ๋‹ค. - ์ผ๋ถ€ ๋ ˆ์ด์–ด๋Š” ๊ฐ ํ”„๋ ˆ์ž„์›Œํฌ๋งˆ๋‹ค ๋‹ค๋ฅธ ๊ธฐ๋ณธ๊ฐ’์„ ๊ฐ€์ง€๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๋Œ€ํ‘œ์ ์ธ ์˜ˆ๋กœ ๋ฐฐ์น˜ ์ •๊ทœํ™” ๋ ˆ์ด์–ด์˜ epsilon์€ [PyTorch](https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm2d.html#torch.nn.BatchNorm2d)์—์„œ `1e-5`์ด๊ณ  [TensorFlow](https://www.tensorflow.org/api_docs/python/tf/keras/layers/BatchNormalization)์—์„œ `1e-3`์ž…๋‹ˆ๋‹ค. ๋ฌธ์„œ๋ฅผ ๋ชจ๋‘ ํ™•์ธํ•˜์„ธ์š”! - PyTorch์˜ `nn.Parameter` ๋ณ€์ˆ˜๋Š” ์ผ๋ฐ˜์ ์œผ๋กœ TF ๋ ˆ์ด์–ด์˜ `build()` ๋‚ด์—์„œ ์ดˆ๊ธฐํ™”ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ ์˜ˆ๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”: [PyTorch](https://github.com/huggingface/transformers/blob/655f72a6896c0533b1bdee519ed65a059c2425ac/src/transformers/models/vit_mae/modeling_vit_mae.py#L212) / [TensorFlow](https://github.com/huggingface/transformers/blob/655f72a6896c0533b1bdee519ed65a059c2425ac/src/transformers/models/vit_mae/modeling_tf_vit_mae.py#L220) - PyTorch ๋ชจ๋ธ์˜ ํ•จ์ˆ˜ ์ƒ๋‹จ์— `#copied from ...`๊ฐ€ ์žˆ๋Š” ๊ฒฝ์šฐ, TensorFlow ๋ชจ๋ธ์— TensorFlow ์•„ํ‚คํ…์ฒ˜๊ฐ€ ์žˆ๋‹ค๋ฉด TensorFlow ๋ชจ๋ธ์ด ํ•ด๋‹น ํ•จ์ˆ˜๋ฅผ ๋ณต์‚ฌํ•œ ์•„ํ‚คํ…์ฒ˜์—์„œ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. - TensorFlow ํ•จ์ˆ˜์—์„œ `name` ์†์„ฑ์„ ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ํ• ๋‹นํ•˜๋Š” ๊ฒƒ์€ `from_pt=True` ๊ฐ€์ค‘์น˜ ๊ต์ฐจ ๋กœ๋”ฉ์„ ์ˆ˜ํ–‰ํ•˜๋Š” ๋ฐ ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. `name`์€ ๋Œ€๋ถ€๋ถ„ PyTorch ์ฝ”๋“œ์˜ ํ•ด๋‹น ๋ณ€์ˆ˜์˜ ์ด๋ฆ„์ž…๋‹ˆ๋‹ค. 
`name`์ด ์ œ๋Œ€๋กœ ์„ค์ •๋˜์ง€ ์•Š์œผ๋ฉด ๋ชจ๋ธ ๊ฐ€์ค‘์น˜๋ฅผ ๋กœ๋“œํ•  ๋•Œ ์˜ค๋ฅ˜ ๋ฉ”์‹œ์ง€์—์„œ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. - ๊ธฐ๋ณธ ๋ชจ๋ธ ํด๋ž˜์Šค์ธ `BrandNewBertModel`์˜ ๋กœ์ง์€ ์‹ค์ œ๋กœ Keras ๋ ˆ์ด์–ด ์„œ๋ธŒํด๋ž˜์Šค([์˜ˆ์‹œ](https://github.com/huggingface/transformers/blob/4fd32a1f499e45f009c2c0dea4d81c321cba7e02/src/transformers/models/bert/modeling_tf_bert.py#L719))์ธ `TFBrandNewBertMainLayer`์— ์žˆ์Šต๋‹ˆ๋‹ค. `TFBrandNewBertModel`์€ ์ด ๋ ˆ์ด์–ด๋ฅผ ๊ฐ์‹ธ๊ธฐ๋งŒ ํ•˜๋Š” ๋ž˜ํผ ์—ญํ• ์„ ํ•ฉ๋‹ˆ๋‹ค. - Keras ๋ชจ๋ธ์€ ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๊ฐ€์ค‘์น˜๋ฅผ ๋กœ๋“œํ•˜๊ธฐ ์œ„ํ•ด ๋นŒ๋“œ๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ `TFBrandNewBertPreTrainedModel`์€ ๋ชจ๋ธ์˜ ์ž…๋ ฅ ์˜ˆ์ œ์ธ `dummy_inputs`([์˜ˆ์‹œ](https://github.com/huggingface/transformers/blob/4fd32a1f499e45f009c2c0dea4d81c321cba7e02/src/transformers/models/bert/modeling_tf_bert.py#L916)) ์œ ์ง€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. - ๋„์›€์ด ํ•„์š”ํ•œ ๊ฒฝ์šฐ ๋„์›€์„ ์š”์ฒญํ•˜์„ธ์š”. ์šฐ๋ฆฌ๋Š” ์—ฌ๊ธฐ ์žˆ์–ด์„œ ๋„์›€์„ ๋“œ๋ฆฌ๊ธฐ ์œ„ํ•ด ์žˆ๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค! ๐Ÿค— ๋ชจ๋ธ ํŒŒ์ผ ์ž์ฒด ์™ธ์—๋„ ๋ชจ๋ธ ํด๋ž˜์Šค ๋ฐ ๊ด€๋ จ ๋ฌธ์„œ ํŽ˜์ด์ง€์— ๋Œ€ํ•œ ํฌ์ธํ„ฐ๋ฅผ ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด ๋ถ€๋ถ„์€ ๋‹ค๋ฅธ PR([์˜ˆ์‹œ](https://github.com/huggingface/transformers/pull/18020/files))์˜ ํŒจํ„ด์„ ๋”ฐ๋ผ ์™„์ „ํžˆ ์™„๋ฃŒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ์€ ํ•„์š”ํ•œ ์ˆ˜๋™ ๋ณ€๊ฒฝ ๋ชฉ๋ก์ž…๋‹ˆ๋‹ค. - `src/transformers/__init__.py`์— *BrandNewBert*์˜ ๋ชจ๋“  ๊ณต๊ฐœ ํด๋ž˜์Šค๋ฅผ ํฌํ•จํ•ฉ๋‹ˆ๋‹ค. - `src/transformers/models/auto/modeling_tf_auto.py`์—์„œ *BrandNewBert* ํด๋ž˜์Šค๋ฅผ ํ•ด๋‹น Auto ํด๋ž˜์Šค์— ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. - `src/transformers/utils/dummy_tf_objects.py`์— *BrandNewBert*์™€ ๊ด€๋ จ๋œ ๋ ˆ์ด์ง€ ๋กœ๋”ฉ ํด๋ž˜์Šค๋ฅผ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. - `src/transformers/models/brand_new_bert/__init__.py`์—์„œ ๊ณต๊ฐœ ํด๋ž˜์Šค์— ๋Œ€ํ•œ import ๊ตฌ์กฐ๋ฅผ ์—…๋ฐ์ดํŠธํ•ฉ๋‹ˆ๋‹ค. - `docs/source/en/model_doc/brand_new_bert.md`์—์„œ *BrandNewBert*์˜ ๊ณต๊ฐœ ๋ฉ”์„œ๋“œ์— ๋Œ€ํ•œ ๋ฌธ์„œ ํฌ์ธํ„ฐ๋ฅผ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. 
- `docs/source/en/model_doc/brand_new_bert.md`์˜ *BrandNewBert* ๊ธฐ์—ฌ์ž ๋ชฉ๋ก์— ์ž์‹ ์„ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. - ๋งˆ์ง€๋ง‰์œผ๋กœ โœ… ๋…น์ƒ‰ ์ฒดํฌ๋ฐ•์Šค๋ฅผ TensorFlow ์—ด docs/source/en/index.md ์•ˆ BrandNewBert์— ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ๊ตฌํ˜„์ด ๋งŒ์กฑํ•˜๋ฉด ๋‹ค์Œ ์ฒดํฌ๋ฆฌ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜์—ฌ ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜๊ฐ€ ์ค€๋น„๋˜์—ˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”. 1. ํ›ˆ๋ จ ์‹œ๊ฐ„์— ๋‹ค๋ฅด๊ฒŒ ๋™์ž‘ํ•˜๋Š” `training` ์ธ์ˆ˜๋กœ ๋ถˆ๋ฆฌ๋Š” ๋ชจ๋“  ๋ ˆ์ด์–ด(์˜ˆ: Dropout)๋Š” ์ตœ์ƒ์œ„ ํด๋ž˜์Šค์—์„œ ์ „ํŒŒ๋ฉ๋‹ˆ๋‹ค. 2. #copied from ...๊ฐ€๋Šฅํ•  ๋•Œ๋งˆ๋‹ค ์‚ฌ์šฉํ–ˆ์Šต๋‹ˆ๋‹ค. 3. `TFBrandNewBertMainLayer`์™€ ๊ทธ๊ฒƒ์„ ์‚ฌ์šฉํ•˜๋Š” ๋ชจ๋“  ํด๋ž˜์Šค๋Š” `call`ํ•จ์ˆ˜๋กœ `@unpack_inputs`์™€ ํ•จ๊ป˜ ๋ฐ์ฝ”๋ ˆ์ดํ„ฐ ๋ฉ๋‹ˆ๋‹ค. 4. `TFBrandNewBertMainLayer`๋Š” `@keras_serializable`๋กœ ๋ฐ์ฝ”๋ ˆ์ดํ„ฐ ๋ฉ๋‹ˆ๋‹ค. 5. TensorFlow ๋ชจ๋ธ์€ `TFBrandNewBert.from_pretrained(model_repo, from_pt=True)`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ PyTorch ๊ฐ€์ค‘์น˜์—์„œ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 6. ์˜ˆ์ƒ ์ž…๋ ฅ ํ˜•์‹์„ ์‚ฌ์šฉํ•˜์—ฌ TensorFlow ๋ชจ๋ธ์„ ํ˜ธ์ถœํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ### 5. ๋ชจ๋ธ ํ…Œ์ŠคํŠธ ๊ตฌํ˜„ [[5-add-model-tests]] TensorFlow ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜๋ฅผ ๊ตฌํ˜„ํ•˜๋Š” ๋ฐ ์„ฑ๊ณตํ–ˆ์Šต๋‹ˆ๋‹ค! ์ด์ œ TensorFlow ๋ชจ๋ธ์„ ํ…Œ์ŠคํŠธํ•˜๋Š” ๊ตฌํ˜„์„ ์ž‘์„ฑํ•  ์ฐจ๋ก€์ž…๋‹ˆ๋‹ค. ์ด๋ฅผ ํ†ตํ•ด ๋ชจ๋ธ์ด ์˜ˆ์ƒ๋Œ€๋กœ ์ž‘๋™ํ•˜๋Š”์ง€ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด์ „์— ์šฐ๋ฆฌ๋Š” `test_modeling_brand_new_bert.py` ํŒŒ์ผ์„ `tests/models/brand_new_bert/ into test_modeling_tf_brand_new_bert.py`์— ๋ณต์‚ฌํ•œ ๋’ค, TensorFlow๋กœ ๊ต์ฒดํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ์ง€๊ธˆ์€, ๋ชจ๋“  `.from_pretrained()`์„ `from_pt=True`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์กด์žฌํ•˜๋Š” Pytorch ๊ฐ€์ค‘์น˜๋ฅผ ๊ฐ€์ ธ์˜ค๋„๋ก ํ•ด์•ผํ•ฉ๋‹ˆ๋‹ค. ์™„๋ฃŒํ•˜์…จ์œผ๋ฉด, ์ด์ œ ์ง„์‹ค์˜ ์ˆœ๊ฐ„์ด ์ฐพ์•„์™”์Šต๋‹ˆ๋‹ค: ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•ด ๋ณด์„ธ์š”! ๐Ÿ˜ฌ ```bash NVIDIA_TF32_OVERRIDE=0 RUN_SLOW=1 RUN_PT_TF_CROSS_TESTS=1 \ py.test -vv tests/models/brand_new_bert/test_modeling_tf_brand_new_bert.py ``` ์˜ค๋ฅ˜๊ฐ€ ๋งŽ์ด ๋‚˜ํƒ€๋‚  ๊ฒƒ์ด์ง€๋งŒ ๊ดœ์ฐฎ์Šต๋‹ˆ๋‹ค! 
๊ธฐ๊ณ„ ํ•™์Šต ๋ชจ๋ธ์„ ๋””๋ฒ„๊น…ํ•˜๋Š” ๊ฒƒ์€ ์•…๋ช…๋†’๊ฒŒ ์–ด๋ ค์šฐ๋ฉฐ ์„ฑ๊ณต์˜ ํ•ต์‹ฌ ์š”์†Œ๋Š” ์ธ๋‚ด์‹ฌ์ž…๋‹ˆ๋‹ค (`breakpoint()`๋„ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค). ์šฐ๋ฆฌ์˜ ๊ฒฝํ—˜์ƒ์œผ๋กœ๋Š” ML ํ”„๋ ˆ์ž„์›Œํฌ ์‚ฌ์ด์˜ ๋ฏธ๋ฌ˜ํ•œ ๋ถˆ์ผ์น˜๋กœ ์ธํ•ด ๊ฐ€์žฅ ์–ด๋ ค์šด ๋ฌธ์ œ๊ฐ€ ๋ฐœ์ƒํ•ฉ๋‹ˆ๋‹ค. ์ด์— ๋Œ€ํ•œ ๋ช‡ ๊ฐ€์ง€ ์ง€์นจ์ด ์ด ๊ฐ€์ด๋“œ์˜ ๋ ๋ถ€๋ถ„์— ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค๋ฅธ ๊ฒฝ์šฐ์—๋Š” ์ผ๋ฐ˜ ํ…Œ์ŠคํŠธ๊ฐ€ ์ง์ ‘ ๋ชจ๋ธ์— ์ ์šฉ๋˜์ง€ ์•Š์„ ์ˆ˜ ์žˆ์œผ๋ฉฐ, ์ด ๊ฒฝ์šฐ ๋ชจ๋ธ ํ…Œ์ŠคํŠธ ํด๋ž˜์Šค ๋ ˆ๋ฒจ์—์„œ ์žฌ์ •์˜๋ฅผ ์ œ์•ˆํ•ฉ๋‹ˆ๋‹ค. ๋ฌธ์ œ๊ฐ€ ๋ฌด์—‡์ด๋“ ์ง€ ์ƒ๊ด€์—†์ด ๋ฌธ์ œ๊ฐ€ ์žˆ์œผ๋ฉด ๋‹น์‹ ์ด ๊ณ ๋ฆฝ๋˜์—ˆ๋‹ค๋ฉด draft pull request์—์„œ ๋„์›€์„ ์š”์ฒญํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ๋ชจ๋“  ํ…Œ์ŠคํŠธ๊ฐ€ ํ†ต๊ณผ๋˜๋ฉด ์ถ•ํ•˜ํ•ฉ๋‹ˆ๋‹ค. ์ด์ œ ๋ชจ๋ธ์„ ๐Ÿค— Transformers ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์— ์ถ”๊ฐ€ํ•  ์ค€๋น„๊ฐ€ ๊ฑฐ์˜ ์™„๋ฃŒ๋œ ๊ฒƒ์ž…๋‹ˆ๋‹ค! ๐ŸŽ‰ ํ…Œ์ŠคํŠธ๋ฅผ ์ถ”๊ฐ€ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [๐Ÿค— Transformers์˜ ํ…Œ์ŠคํŠธ ๊ฐ€์ด๋“œ](https://huggingface.co/transformers/contributing.html#running-tests)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ### 6.-7. ๋ชจ๋“  ์‚ฌ์šฉ์ž๊ฐ€ ๋‹น์‹ ์˜ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๊ฒŒ ํ•˜๊ธฐ [[6.-7.-ensure-everyone -can-use-your-model]] **6. ํ’€ ์š”์ฒญ ์ œ์ถœํ•˜๊ธฐ** ๊ตฌํ˜„๊ณผ ํ…Œ์ŠคํŠธ๊ฐ€ ์™„๋ฃŒ๋˜๋ฉด ํ’€ ์š”์ฒญ์„ ์ œ์ถœํ•  ์‹œ๊ฐ„์ž…๋‹ˆ๋‹ค. ์ฝ”๋“œ๋ฅผ ํ‘ธ์‹œํ•˜๊ธฐ ์ „์— ์ฝ”๋“œ ์„œ์‹ ๋งž์ถ”๊ธฐ ์œ ํ‹ธ๋ฆฌํ‹ฐ์ธ `make fixup` ๐Ÿช„ ๋ฅผ ์‹คํ–‰ํ•˜์„ธ์š”. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ์ž๋™์œผ๋กœ ์„œ์‹ ์˜ค๋ฅ˜๋ฅผ ์ˆ˜์ •ํ•˜๋ฉฐ ์ž๋™ ๊ฒ€์‚ฌ๊ฐ€ ์‹คํŒจํ•˜๋Š” ๊ฒƒ์„ ๋ฐฉ์ง€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด์ œ ๋“œ๋ž˜ํ”„ํŠธ ํ’€ ์š”์ฒญ์„ ์‹ค์ œ ํ’€ ์š”์ฒญ์œผ๋กœ ๋ณ€ํ™˜ํ•˜๋Š” ์‹œ๊ฐ„์ž…๋‹ˆ๋‹ค. "๋ฆฌ๋ทฐ ์ค€๋น„๋จ" ๋ฒ„ํŠผ์„ ํด๋ฆญํ•˜๊ณ  Joao (`@gante`)์™€ Matt (`@Rocketknight1`)๋ฅผ ๋ฆฌ๋ทฐ์–ด๋กœ ์ถ”๊ฐ€ํ•˜์„ธ์š”. ๋ชจ๋ธ ํ’€ ์š”์ฒญ์—๋Š” ์ ์–ด๋„ 3๋ช…์˜ ๋ฆฌ๋ทฐ์–ด๊ฐ€ ํ•„์š”ํ•˜์ง€๋งŒ, ๊ทธ๋“ค์ด ๋‹น์‹ ์˜ ๋ชจ๋ธ์— ์ ์ ˆํ•œ ์ถ”๊ฐ€ ๋ฆฌ๋ทฐ์–ด๋ฅผ ์ฐพ์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค. 
๋ชจ๋“  ๋ฆฌ๋ทฐ์–ด๋“ค์ด PR ์ƒํƒœ์— ๋งŒ์กฑํ•˜๋ฉด ๋งˆ์ง€๋ง‰์œผ๋กœ `.from_pretrained()` ํ˜ธ์ถœ์—์„œ `from_pt=True` ํ”Œ๋ž˜๊ทธ๋ฅผ ์ œ๊ฑฐํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. TensorFlow ๊ฐ€์ค‘์น˜๊ฐ€ ์—†๊ธฐ ๋•Œ๋ฌธ์— ์ด๋ฅผ ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค! ์ด๋ฅผ ์ˆ˜ํ–‰ํ•˜๋Š” ๋ฐฉ๋ฒ•์€ ์•„๋ž˜ ์„น์…˜์˜ ์ง€์นจ์„ ํ™•์ธํ•˜์„ธ์š”. ๋งˆ์นจ๋‚ด TensorFlow ๊ฐ€์ค‘์น˜๊ฐ€ ๋ณ‘ํ•ฉ๋˜๊ณ , ์ ์–ด๋„ 3๋ช…์˜ ๋ฆฌ๋ทฐ์–ด ์Šน์ธ์„ ๋ฐ›์•˜์œผ๋ฉฐ ๋ชจ๋“  CI ๊ฒ€์‚ฌ๊ฐ€ ํ†ต๊ณผ๋˜์—ˆ๋‹ค๋ฉด, ๋กœ์ปฌ๋กœ ํ…Œ์ŠคํŠธ๋ฅผ ํ•œ ๋ฒˆ ๋” ํ™•์ธํ•˜์„ธ์š”. ```bash NVIDIA_TF32_OVERRIDE=0 RUN_SLOW=1 RUN_PT_TF_CROSS_TESTS=1 \ py.test -vv tests/models/brand_new_bert/test_modeling_tf_brand_new_bert.py ``` ๊ทธ๋ฆฌ๊ณ  ์šฐ๋ฆฌ๋Š” ๋‹น์‹ ์˜ PR์„ ๋ณ‘ํ•ฉํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค! ๋งˆ์ผ์Šคํ†ค ๋‹ฌ์„ฑ์„ ์ถ•ํ•˜๋“œ๋ฆฝ๋‹ˆ๋‹ค! ๐ŸŽ‰ **7. (์„ ํƒ ์‚ฌํ•ญ) ๋ฐ๋ชจ๋ฅผ ๋งŒ๋“ค๊ณ  ์„ธ์ƒ๊ณผ ๊ณต์œ ํ•˜๊ธฐ** ์˜คํ”ˆ ์†Œ์Šค์˜ ๊ฐ€์žฅ ์–ด๋ ค์šด ๋ถ€๋ถ„ ์ค‘ ํ•˜๋‚˜๋Š” ๋ฐœ๊ฒฌ์ž…๋‹ˆ๋‹ค. ๋‹ค๋ฅธ ์‚ฌ์šฉ์ž๋“ค์ด ๋‹น์‹ ์˜ ๋ฉ‹์ง„ TensorFlow ๊ธฐ์—ฌ๋ฅผ ์–ด๋–ป๊ฒŒ ์•Œ ์ˆ˜ ์žˆ์„๊นŒ์š”? ๋ฌผ๋ก  ์ ์ ˆํ•œ ์ปค๋ฎค๋‹ˆ์ผ€์ด์…˜์œผ๋กœ ๊ฐ€๋Šฅํ•ฉ๋‹ˆ๋‹ค! ๐Ÿ“ฃ ์ปค๋ฎค๋‹ˆํ‹ฐ์™€ ๋ชจ๋ธ์„ ๊ณต์œ ํ•˜๋Š” ๋‘ ๊ฐ€์ง€ ์ฃผ์š” ๋ฐฉ๋ฒ•์ด ์žˆ์Šต๋‹ˆ๋‹ค: - ๋ฐ๋ชจ ๋งŒ๋“ค๊ธฐ. Gradio ๋ฐ๋ชจ, ๋…ธํŠธ๋ถ ๋ฐ ๋ชจ๋ธ์„ ์ž๋ž‘ํ•˜๋Š” ๋‹ค๋ฅธ ์žฌ๋ฏธ์žˆ๋Š” ๋ฐฉ๋ฒ•์„ ํฌํ•จํ•ฉ๋‹ˆ๋‹ค. [์ปค๋ฎค๋‹ˆํ‹ฐ ๊ธฐ๋ฐ˜ ๋ฐ๋ชจ](https://huggingface.co/docs/transformers/community)์— ๋…ธํŠธ๋ถ์„ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒƒ์„ ์ ๊ทน ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. - Twitter์™€ LinkedIn๊ณผ ๊ฐ™์€ ์†Œ์…œ ๋ฏธ๋””์–ด์— ์ด์•ผ๊ธฐ ๊ณต์œ ํ•˜๊ธฐ. ๋‹น์‹ ์˜ ์ž‘์—…์— ์ž๋ž‘์Šค๋Ÿฌ์›Œํ•˜๊ณ  ์ปค๋ฎค๋‹ˆํ‹ฐ์™€ ๋‹น์‹ ์˜ ์—…์ ์„ ๊ณต์œ ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด์ œ ๋‹น์‹ ์˜ ๋ชจ๋ธ์€ ์ „ ์„ธ๊ณ„์˜ ์ˆ˜์ฒœ ๋ช…์˜ ์—”์ง€๋‹ˆ์–ด์™€ ์—ฐ๊ตฌ์›๋“ค์— ์˜ํ•ด ์‚ฌ์šฉ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค ๐ŸŒ! ์šฐ๋ฆฌ๋Š” ๋‹น์‹ ์˜ ๊ฒŒ์‹œ๋ฌผ์„ ๋ฆฌํŠธ์œ—ํ•˜๊ณ  ์ปค๋ฎค๋‹ˆํ‹ฐ์™€ ํ•จ๊ป˜ ๋‹น์‹ ์˜ ์ž‘์—…์„ ๊ณต์œ ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค. 
## ๐Ÿค— ํ—ˆ๋ธŒ์— TensorFlow ๊ฐ€์ค‘์น˜ ์ถ”๊ฐ€ํ•˜๊ธฐ [[adding-tensorFlow-weights-to-๐Ÿค—-hub]] TensorFlow ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜๊ฐ€ ๐Ÿค— Transformers์—์„œ ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•˜๋‹ค๊ณ  ๊ฐ€์ •ํ•˜๊ณ , PyTorch ๊ฐ€์ค‘์น˜๋ฅผ TensorFlow ๊ฐ€์ค‘์น˜๋กœ ๋ณ€ํ™˜ํ•˜๋Š” ๊ฒƒ์€ ์‰ฝ์Šต๋‹ˆ๋‹ค! ๋‹ค์Œ์€ ๊ทธ ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค: 1. ํ„ฐ๋ฏธ๋„์—์„œ Hugging Face ๊ณ„์ •์œผ๋กœ ๋กœ๊ทธ์ธ๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์‹ญ์‹œ์˜ค. `huggingface-cli login` ๋ช…๋ น์–ด๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋กœ๊ทธ์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. (์•ก์„ธ์Šค ํ† ํฐ์€ [์—ฌ๊ธฐ](https://huggingface.co/settings/tokens)์—์„œ ์ฐพ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.) 2. `transformers-cli pt-to-tf --model-name foo/bar`๋ฅผ ์‹คํ–‰ํ•˜์‹ญ์‹œ์˜ค. ์—ฌ๊ธฐ์„œ `foo/bar`๋Š” ๋ณ€ํ™˜ํ•˜๋ ค๋Š” PyTorch ๊ฐ€์ค‘์น˜๊ฐ€ ์žˆ๋Š” ๋ชจ๋ธ ์ €์žฅ์†Œ์˜ ์ด๋ฆ„์ž…๋‹ˆ๋‹ค. 3. ๋ฐฉ๊ธˆ ๋งŒ๋“  ๐Ÿค— ํ—ˆ๋ธŒ PR์—์„œ `@joaogante`์™€ `@Rocketknight1`์„ ํƒœ๊ทธํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๊ฒŒ ๋‹ค์ž…๋‹ˆ๋‹ค! ๐ŸŽ‰ ## ML ํ”„๋ ˆ์ž„์›Œํฌ ๊ฐ„ ๋””๋ฒ„๊น… ๐Ÿ›[[debugging-mismatches-across-ml-frameworks]] ์ƒˆ๋กœ์šด ์•„ํ‚คํ…์ฒ˜๋ฅผ ์ถ”๊ฐ€ํ•˜๊ฑฐ๋‚˜ ๊ธฐ์กด ์•„ํ‚คํ…์ฒ˜์— ๋Œ€ํ•œ TensorFlow ๊ฐ€์ค‘์น˜๋ฅผ ์ƒ์„ฑํ•  ๋•Œ, PyTorch์™€ TensorFlow ๊ฐ„์˜ ๋ถˆ์ผ์น˜๋กœ ์ธํ•œ ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์‹ฌ์ง€์–ด ๋‘ ํ”„๋ ˆ์ž„์›Œํฌ์˜ ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜ ์ฝ”๋“œ๊ฐ€ ๋™์ผํ•ด ๋ณด์ผ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฌด์Šจ ์ผ์ด ๋ฒŒ์–ด์ง€๊ณ  ์žˆ๋Š” ๊ฑธ๊นŒ์š”? ๐Ÿค” ๋จผ์ €, ์ด๋Ÿฌํ•œ ๋ถˆ์ผ์น˜๋ฅผ ์ดํ•ดํ•˜๋Š” ์ด์œ ์— ๋Œ€ํ•ด ์ด์•ผ๊ธฐํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ๋งŽ์€ ์ปค๋ฎค๋‹ˆํ‹ฐ ๋ฉค๋ฒ„๋“ค์€ ๐Ÿค— Transformers ๋ชจ๋ธ์„ ๊ทธ๋Œ€๋กœ ์‚ฌ์šฉํ•˜๊ณ , ์šฐ๋ฆฌ์˜ ๋ชจ๋ธ์ด ์˜ˆ์ƒ๋Œ€๋กœ ์ž‘๋™ํ•  ๊ฒƒ์ด๋ผ๊ณ  ๋ฏฟ์Šต๋‹ˆ๋‹ค. ๋‘ ํ”„๋ ˆ์ž„์›Œํฌ ๊ฐ„์— ํฐ ๋ถˆ์ผ์น˜๊ฐ€ ์žˆ์œผ๋ฉด ๋ชจ๋ธ์ด ์ ์–ด๋„ ํ•˜๋‚˜์˜ ํ”„๋ ˆ์ž„์›Œํฌ์— ๋Œ€ํ•œ ์ฐธ์กฐ ๊ตฌํ˜„์„ ๋”ฐ๋ฅด์ง€ ์•Š์Œ์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ๋ชจ๋ธ์ด ์˜๋„ํ•œ ๋Œ€๋กœ ์ž‘๋™ํ•˜์ง€ ์•Š์„ ์ˆ˜ ์žˆ์Œ์„ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. ์ด๋Š” ์•„์˜ˆ ์‹คํ–‰๋˜์ง€ ์•Š๋Š” ๋ชจ๋ธ๋ณด๋‹ค ๋‚˜์  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ๋”ฐ๋ผ์„œ ์šฐ๋ฆฌ๋Š” ๋ชจ๋“  ๋ชจ๋ธ์˜ ํ”„๋ ˆ์ž„์›Œํฌ ๋ถˆ์ผ์น˜๋ฅผ `1e-5`๋ณด๋‹ค ์ž‘๊ฒŒ ์œ ์ง€ํ•˜๋Š” ๊ฒƒ์„ ๋ชฉํ‘œ๋กœ ํ•ฉ๋‹ˆ๋‹ค. 
๊ธฐํƒ€ ์ˆซ์ž ๋ฌธ์ œ์™€ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ, ์„ธ์„ธํ•œ ๋ฌธ์ œ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ์„ธ์„ธํ•จ์— ์ง‘์ค‘ํ•˜๋Š” ๊ณต์ •์—์„œ ํ•„์ˆ˜ ์š”์†Œ๋Š” ์ธ๋‚ด์‹ฌ์ž…๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ์ข…๋ฅ˜์˜ ๋ฌธ์ œ๊ฐ€ ๋ฐœ์ƒํ•  ๋•Œ ๊ถŒ์žฅ๋˜๋Š” ์ž‘์—… ํ๋ฆ„์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: 1. ๋ถˆ์ผ์น˜์˜ ์›์ธ์„ ์ฐพ์•„๋ณด์‹ญ์‹œ์˜ค. ๋ณ€ํ™˜ ์ค‘์ธ ๋ชจ๋ธ์€ ์•„๋งˆ๋„ ํŠน์ • ์ง€์ ๊นŒ์ง€ ๊ฑฐ์˜ ๋™์ผํ•œ ๋‚ด๋ถ€ ๋ณ€์ˆ˜๋ฅผ ๊ฐ€์ง€๊ณ  ์žˆ์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋‘ ํ”„๋ ˆ์ž„์›Œํฌ์˜ ์•„ํ‚คํ…์ฒ˜์— `breakpoint()` ๋ฌธ์„ ๋„ฃ๊ณ , ์œ„์—์„œ ์•„๋ž˜๋กœ ์ˆซ์ž ๋ณ€์ˆ˜์˜ ๊ฐ’์„ ๋น„๊ตํ•˜์—ฌ ๋ฌธ์ œ์˜ ๊ทผ์›์„ ์ฐพ์•„๋ƒ…๋‹ˆ๋‹ค. 2. ์ด์ œ ๋ฌธ์ œ์˜ ๊ทผ์›์„ ์ฐพ์•˜์œผ๋ฏ€๋กœ ๐Ÿค— Transformers ํŒ€์— ์—ฐ๋ฝํ•˜์„ธ์š”. ์šฐ๋ฆฌ๋Š” ๋น„์Šทํ•œ ๋ฌธ์ œ๋ฅผ ์ด์ „์— ๊ฒช์—ˆ์„ ์ˆ˜ ์žˆ์œผ๋ฉฐ ๋น ๋ฅด๊ฒŒ ํ•ด๊ฒฐ์ฑ…์„ ์ œ๊ณตํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ์™ธ์ ์ธ ๊ฒฝ์šฐ์—๋Š” StackOverflow์™€ GitHub ์ด์Šˆ์™€ ๊ฐ™์€ ์ธ๊ธฐ์žˆ๋Š” ํŽ˜์ด์ง€๋ฅผ ํ™•์ธํ•˜์‹ญ์‹œ์˜ค. 3. ๋” ์ด์ƒ ํ•ด๊ฒฐ์ฑ…์ด ์—†๋Š” ๊ฒฝ์šฐ, ๋” ๊นŠ์ด ๋“ค์–ด๊ฐ€์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ข‹์€ ์†Œ์‹์€ ๋ฌธ์ œ์˜ ์›์ธ์„ ์ฐพ์•˜์œผ๋ฏ€๋กœ ๋‚˜๋จธ์ง€ ๋ชจ๋ธ์„ ์ถ”์ƒํ™”ํ•˜๊ณ  ๋ฌธ์ œ๊ฐ€ ์žˆ๋Š” ๋ช…๋ น์–ด์— ์ดˆ์ ์„ ๋งž์ถœ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ๋‚˜์œ ์†Œ์‹์€ ํ•ด๋‹น ๋ช…๋ น์–ด์˜ ์†Œ์Šค ๊ตฌํ˜„์— ๋Œ€ํ•ด ์•Œ์•„๋ด์•ผ ํ•œ๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ผ๋ถ€ ๊ฒฝ์šฐ์—๋Š” ์ฐธ์กฐ ๊ตฌํ˜„์— ๋ฌธ์ œ๊ฐ€ ์žˆ์„ ์ˆ˜๋„ ์žˆ์œผ๋‹ˆ ์—…์ŠคํŠธ๋ฆผ ์ €์žฅ์†Œ์—์„œ ์ด์Šˆ๋ฅผ ์—ด๊ธฐ๋ฅผ ๊บผ๋ฆฌ์ง€ ๋งˆ์‹ญ์‹œ์˜ค. ์–ด๋–ค ๊ฒฝ์šฐ์—๋Š” ๐Ÿค— Transformers ํŒ€๊ณผ์˜ ํ† ๋ก ์„ ํ†ตํ•ด ๋ถˆ์ผ์น˜๋ฅผ ์ˆ˜์ •ํ•  ์ˆ˜ ์—†์„ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ชจ๋ธ์˜ ์ถœ๋ ฅ ๋ ˆ์ด์–ด์—์„œ ๋ถˆ์ผ์น˜๊ฐ€ ๋งค์šฐ ์ž‘์ง€๋งŒ ์ˆจ๊ฒจ์ง„ ์ƒํƒœ์—์„œ ํฌ๊ฒŒ ๋‚˜ํƒ€๋‚  ์ˆ˜ ์žˆ๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ์ด ๊ฒฝ์šฐ ๋ชจ๋ธ์„ ๋ฐฐํฌํ•˜๋Š” ๊ฒƒ์„ ์šฐ์„ ์‹œํ•˜๊ธฐ ์œ„ํ•ด ๋ถˆ์ผ์น˜๋ฅผ ๋ฌด์‹œํ•˜๊ธฐ๋กœ ๊ฒฐ์ •ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์œ„์—์„œ ์–ธ๊ธ‰ํ•œ `pt-to-tf` CLI์—๋Š” ๊ฐ€์ค‘์น˜ ๋ณ€ํ™˜ ์‹œ ์˜ค๋ฅ˜ ๋ฉ”์‹œ์ง€๋ฅผ ๋ฌด์‹œํ•˜๋Š” `--max-error` ํ”Œ๋ž˜๊ทธ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค.
# 토크나이저 요약[[summary-of-the-tokenizers]]

[[open-in-colab]]

이 페이지에서는 토큰화에 대해 자세히 살펴보겠습니다.

<Youtube id="VFp38yj8h3A"/>

[데이터 전처리하기 튜토리얼](preprocessing)에서 살펴본 것처럼, 텍스트를 토큰화하는 것은 텍스트를 단어 또는 서브워드로 분할하고 룩업 테이블을 통해 id로 변환하는 과정입니다. 단어 또는 서브워드를 id로 변환하는 것은 간단하기 때문에 이번 문서에서는 텍스트를 단어 또는 서브워드로 쪼개는 것(즉, 텍스트를 토큰화하는 것)에 중점을 두겠습니다. 구체적으로, 🤗 Transformers에서 사용되는 세 가지 주요 토큰화 유형인 [Byte-Pair Encoding (BPE)](#byte-pair-encoding), [WordPiece](#wordpiece), [SentencePiece](#sentencepiece)를 살펴보고 어떤 모델에서 어떤 토큰화 유형을 사용하는지 예시를 보여드리겠습니다.

각 모델 페이지에 연결된 토크나이저의 문서를 보면 사전 훈련 모델에서 어떤 토크나이저를 사용했는지 알 수 있습니다. 예를 들어, [`BertTokenizer`]를 보면 이 모델이 [WordPiece](#wordpiece)를 사용하는 것을 알 수 있습니다. 
## ๊ฐœ์š”[[introduction]] ํ…์ŠคํŠธ๋ฅผ ์ž‘์€ ๋ฌถ์Œ(chunk)์œผ๋กœ ์ชผ๊ฐœ๋Š” ๊ฒƒ์€ ๋ณด๊ธฐ๋ณด๋‹ค ์–ด๋ ค์šด ์ž‘์—…์ด๋ฉฐ, ์—ฌ๋Ÿฌ ๊ฐ€์ง€ ๋ฐฉ๋ฒ•์ด ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, `"Don't you love ๐Ÿค— Transformers? We sure do."` ๋ผ๋Š” ๋ฌธ์žฅ์„ ์‚ดํŽด๋ณด๋„๋ก ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. <Youtube id="nhJxYji1aho"/> ์œ„ ๋ฌธ์žฅ์„ ํ† ํฐํ™”ํ•˜๋Š” ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ ๊ณต๋ฐฑ์„ ๊ธฐ์ค€์œผ๋กœ ์ชผ๊ฐœ๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ํ† ํฐํ™”๋œ ๊ฒฐ๊ณผ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ``` ["Don't", "you", "love", "๐Ÿค—", "Transformers?", "We", "sure", "do."] ``` ์ด๋Š” ์ฒซ ๋ฒˆ์งธ ๊ฒฐ๊ณผ๋กœ๋Š” ํ•ฉ๋ฆฌ์ ์ด์ง€๋งŒ, `"Transformers?"`์™€ `"do."`ํ† ํฐ์„ ๋ณด๋ฉด ๊ฐ๊ฐ `"Transformer"`์™€ `"do"`์— ๊ตฌ๋‘์ ์ด ๋ถ™์–ด์žˆ๋Š” ๊ฒƒ์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ตฌ๋‘์ ์„ ๊ณ ๋ คํ•ด์•ผ ๋ชจ๋ธ์ด ๋‹จ์–ด์˜ ๋‹ค๋ฅธ ํ‘œํ˜„๊ณผ ๊ทธ ๋’ค์— ์˜ฌ ์ˆ˜ ์žˆ๋Š” ๋ชจ๋“  ๊ฐ€๋Šฅํ•œ ๊ตฌ๋‘์ ์„ ํ•™์Šตํ•  ํ•„์š”๊ฐ€ ์—†์Šต๋‹ˆ๋‹ค. ๊ทธ๋ ‡์ง€ ์•Š์œผ๋ฉด ๋ชจ๋ธ์ด ํ•™์Šตํ•ด์•ผ ํ•˜๋Š” ํ‘œํ˜„์˜ ์ˆ˜๊ฐ€ ํญ๋ฐœ์ ์œผ๋กœ ์ฆ๊ฐ€ํ•˜๊ฒŒ ๋ฉ๋‹ˆ๋‹ค. ๊ตฌ๋‘์ ์„ ๊ณ ๋ คํ•œ ํ† ํฐํ™” ๊ฒฐ๊ณผ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ``` ["Don", "'", "t", "you", "love", "๐Ÿค—", "Transformers", "?", "We", "sure", "do", "."] ``` ์ด์ „๋ณด๋‹ค ๋‚˜์•„์กŒ์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ, `"Don't"`์˜ ํ† ํฐํ™” ๊ฒฐ๊ณผ๋„ ์ˆ˜์ •์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. `"Don't"`๋Š” `"do not"`์˜ ์ค„์ž„๋ง์ด๊ธฐ ๋•Œ๋ฌธ์— `["Do", "n't"]`๋กœ ํ† ํฐํ™”๋˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ๋ถ€ํ„ฐ ๋ณต์žกํ•ด์ง€๊ธฐ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ์ด ์ ์ด ๊ฐ ๋ชจ๋ธ๋งˆ๋‹ค ๊ณ ์œ ํ•œ ํ† ํฐํ™” ์œ ํ˜•์ด ์กด์žฌํ•˜๋Š” ์ด์œ  ์ค‘ ํ•˜๋‚˜์ž…๋‹ˆ๋‹ค. ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•˜๋Š” ๋ฐ ์ ์šฉํ•˜๋Š” ๊ทœ์น™์— ๋”ฐ๋ผ ๋™์ผํ•œ ํ…์ŠคํŠธ์— ๋Œ€ํ•ด ํ† ํฐํ™”๋œ ๊ฒฐ๊ณผ๊ฐ€ ๋‹ฌ๋ผ์ง‘๋‹ˆ๋‹ค. ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์€ ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ๋ฅผ ํ† ํฐํ™”ํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋œ ๊ฒƒ๊ณผ ๋™์ผํ•œ ๊ทœ์น™์œผ๋กœ ํ† ํฐํ™”๋œ ์ž…๋ ฅ์„ ์ œ๊ณตํ•ด์•ผ๋งŒ ์ œ๋Œ€๋กœ ์ž‘๋™ํ•ฉ๋‹ˆ๋‹ค. [spaCy](https://spacy.io/)์™€ [Moses](http://www.statmt.org/moses/?n=Development.GetStarted)๋Š” ์œ ๋ช…ํ•œ ๊ทœ์น™ ๊ธฐ๋ฐ˜ ํ† ํฌ๋‚˜์ด์ €์ž…๋‹ˆ๋‹ค. 
์˜ˆ์ œ์— *spaCy*์™€ *Moses* ๋ฅผ ์ ์šฉํ•œ ๊ฒฐ๊ณผ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ``` ["Do", "n't", "you", "love", "๐Ÿค—", "Transformers", "?", "We", "sure", "do", "."] ``` ๋ณด์‹œ๋‹ค์‹œํ”ผ ๊ณต๋ฐฑ ๋ฐ ๊ตฌ๋‘์  ํ† ํฐํ™”์™€ ๊ทœ์น™ ๊ธฐ๋ฐ˜ ํ† ํฐํ™”๊ฐ€ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ๊ณต๋ฐฑ ๋ฐ ๊ตฌ๋‘์ , ๊ทœ์น™ ๊ธฐ๋ฐ˜ ํ† ํฐํ™”์€ ๋ชจ๋‘ ๋‹จ์–ด ๋ฌธ์žฅ์„ ๋‹จ์–ด๋กœ ์ชผ๊ฐœ๋Š” ๋‹จ์–ด ํ† ํฐํ™”์— ํ•ด๋‹นํ•ฉ๋‹ˆ๋‹ค. ์ด ํ† ํฐํ™” ๋ฐฉ๋ฒ•์€ ํ…์ŠคํŠธ๋ฅผ ๋” ์ž‘์€ ๋ฌถ์Œ(chunk)๋กœ ๋ถ„ํ• ํ•˜๋Š” ๊ฐ€์žฅ ์ง๊ด€์ ์ธ ๋ฐฉ๋ฒ•์ด์ง€๋งŒ, ๋Œ€๊ทœ๋ชจ ํ…์ŠคํŠธ ๋ง๋ญ‰์น˜์— ๋Œ€ํ•ด์„œ๋Š” ๋ฌธ์ œ๊ฐ€ ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๊ฒฝ์šฐ ๊ณต๋ฐฑ ๋ฐ ๊ตฌ๋‘์  ํ† ํฐํ™”๋Š” ์ผ๋ฐ˜์ ์œผ๋กœ ๋งค์šฐ ํฐ ์–ดํœ˜(์‚ฌ์šฉ๋œ ๋ชจ๋“  ๊ณ ์œ  ๋‹จ์–ด์™€ ํ† ํฐ ์ง‘ํ•ฉ)์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. *์˜ˆ๋ฅผ ๋“ค์–ด*, [Transformer XL](model_doc/transformerxl)์€ ๊ณต๋ฐฑ ๋ฐ ๊ตฌ๋‘์  ํ† ํฐํ™”๋ฅผ ์‚ฌ์šฉํ•ด ์–ดํœ˜(vocabulary) ํฌ๊ธฐ๊ฐ€ 267,735์ž…๋‹ˆ๋‹ค! ์–ดํœ˜ ํฌ๊ธฐ๊ฐ€ ํฌ๋ฉด ๋ชจ๋ธ์— ์ž…๋ ฅ ๋ฐ ์ถœ๋ ฅ ๋ ˆ์ด์–ด๋กœ ์—„์ฒญ๋‚œ ์ž„๋ฒ ๋”ฉ ํ–‰๋ ฌ์ด ํ•„์š”ํ•˜๋ฏ€๋กœ ๋ฉ”๋ชจ๋ฆฌ์™€ ์‹œ๊ฐ„ ๋ณต์žก์„ฑ์ด ๋ชจ๋‘ ์ฆ๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์œผ๋กœ ํŠธ๋žœ์Šคํฌ๋จธ ๋ชจ๋ธ์€ ์–ดํœ˜ ํฌ๊ธฐ๊ฐ€ 50,000๊ฐœ๋ฅผ ๋„˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ๋“œ๋ฌผ๋ฉฐ, ํŠนํžˆ ๋‹จ์ผ ์–ธ์–ด์— ๋Œ€ํ•ด์„œ๋งŒ ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๊ฒฝ์šฐ์—๋Š” ๋”์šฑ ๊ทธ๋ ‡์Šต๋‹ˆ๋‹ค. ๋‹จ์ˆœํ•œ ๊ณต๋ฐฑ๊ณผ ๊ตฌ๋‘์  ํ† ํฐํ™”๊ฐ€ ๋งŒ์กฑ์Šค๋Ÿฝ์ง€ ์•Š๋‹ค๋ฉด ๋‹จ์ˆœํžˆ ๋ฌธ์ž๋ฅผ ํ† ํฐํ™”ํ•˜๋ฉด ์–ด๋–จ๊นŒ์š”? <Youtube id="ssLq_EK2jLE"/> ๋ฌธ์ž ํ† ํฐํ™”๋Š” ์•„์ฃผ ๊ฐ„๋‹จํ•˜๊ณ  ๋ฉ”๋ชจ๋ฆฌ์™€ ์‹œ๊ฐ„ ๋ณต์žก๋„๋ฅผ ํฌ๊ฒŒ ์ค„์ผ ์ˆ˜ ์žˆ์ง€๋งŒ, ๋ชจ๋ธ์ด ์˜๋ฏธ ์žˆ๋Š” ์ž…๋ ฅ ํ‘œํ˜„์„ ํ•™์Šตํ•˜๊ธฐ์—๋Š” ํ›จ์”ฌ ๋” ์–ด๋ ต์Šต๋‹ˆ๋‹ค. *์˜ˆ๋ฅผ ๋“ค์–ด*, ๋ฌธ์ž `"t"`์— ๋Œ€ํ•œ ์˜๋ฏธ ์žˆ๋Š” ๋ฌธ๋งฅ ๋…๋ฆฝ์  ํ‘œํ˜„์„ ๋ฐฐ์šฐ๋Š” ๊ฒƒ ๋ณด๋‹ค ๋‹จ์–ด `"today"`์— ๋Œ€ํ•œ ์˜๋ฏธ ์žˆ๋Š” ๋ฌธ๋งฅ ๋…๋ฆฝ์  ํ‘œํ˜„์„ ๋ฐฐ์šฐ๋Š” ๊ฒƒ์ด ํ›จ์”ฌ ๋” ์–ด๋ ต์Šต๋‹ˆ๋‹ค. 
๋ฌธ์ž ํ† ํฐํ™”๋Š” ์ข…์ข… ์„ฑ๋Šฅ ์ €ํ•˜๋ฅผ ๋™๋ฐ˜ํ•˜๊ธฐ ๋•Œ๋ฌธ์— ๋‘ ๊ฐ€์ง€ ์žฅ์ ์„ ๋ชจ๋‘ ์–ป๊ธฐ ์œ„ํ•ด ํŠธ๋žœ์Šคํฌ๋จธ ๋ชจ๋ธ์€ **์„œ๋ธŒ์›Œ๋“œ** ํ† ํฐํ™”๋ผ๊ณ  ํ•˜๋Š” ๋‹จ์–ด ์ˆ˜์ค€๊ณผ ๋ฌธ์ž ์ˆ˜์ค€ ํ† ํฐํ™”์˜ ํ•˜์ด๋ธŒ๋ฆฌ๋“œ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ## ์„œ๋ธŒ์›Œ๋“œ ํ† ํฐํ™”[[subword-tokenization]] <Youtube id="zHvTiHr506c"/> ์„œ๋ธŒ์›Œ๋“œ ํ† ํฐํ™” ์•Œ๊ณ ๋ฆฌ์ฆ˜์€ ์ž์ฃผ ์‚ฌ์šฉ๋˜๋Š” ๋‹จ์–ด๋Š” ๋” ์ž‘์€ ํ•˜์œ„ ๋‹จ์–ด๋กœ ์ชผ๊ฐœ๊ณ , ๋“œ๋ฌธ ๋‹จ์–ด๋Š” ์˜๋ฏธ ์žˆ๋Š” ํ•˜์œ„ ๋‹จ์–ด๋กœ ๋ถ„ํ•ด๋˜์–ด์•ผ ํ•œ๋‹ค๋Š” ์›์น™์— ๋”ฐ๋ผ ์ž‘๋™ํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด `"annoyingly"`๋Š” ๋“œ๋ฌธ ๋‹จ์–ด๋กœ ๊ฐ„์ฃผ๋˜์–ด `"annoying"`๊ณผ `"ly"`๋กœ ๋ถ„ํ•ด๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `"annoyingly"`๊ฐ€ `"annoying"`๊ณผ `"ly"`์˜ ํ•ฉ์„ฑ์–ด์ธ ๋ฐ˜๋ฉด, `"annoying"`๊ณผ `"ly"` ๋‘˜ ๋‹ค ๋…๋ฆฝ์ ์ธ ์„œ๋ธŒ์›Œ๋“œ๋กœ ์ž์ฃผ ๋“ฑ์žฅํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ํ„ฐํ‚ค์–ด์™€ ๊ฐ™์€ ์‘์ง‘์„ฑ ์–ธ์–ด์—์„œ ํŠนํžˆ ์œ ์šฉํ•˜๋ฉฐ, ์„œ๋ธŒ์›Œ๋“œ๋ฅผ ๋ฌถ์–ด ์ž„์˜๋กœ ๊ธด ๋ณตํ•ฉ ๋‹จ์–ด๋ฅผ ๋งŒ๋“ค ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์„œ๋ธŒ์›Œ๋“œ ํ† ํฐํ™”๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ๋ชจ๋ธ์ด ์˜๋ฏธ ์žˆ๋Š” ๋ฌธ๋งฅ ๋…๋ฆฝ์  ํ‘œํ˜„์„ ํ•™์Šตํ•˜๋ฉด์„œ ํ•ฉ๋ฆฌ์ ์ธ ์–ดํœ˜ ํฌ๊ธฐ๋ฅผ ๊ฐ€์งˆ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ, ์„œ๋ธŒ์›Œ๋“œ ํ† ํฐํ™”๋ฅผ ํ†ตํ•ด ๋ชจ๋ธ์€ ์ด์ „์— ๋ณธ ์ ์ด ์—†๋Š” ๋‹จ์–ด๋ฅผ ์•Œ๋ ค์ง„ ์„œ๋ธŒ์›Œ๋“œ๋กœ ๋ถ„ํ•ดํ•˜์—ฌ ์ฒ˜๋ฆฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, [`~transformers.BertTokenizer`]๋Š” `"I have a new GPU!"` ๋ผ๋Š” ๋ฌธ์žฅ์„ ์•„๋ž˜์™€ ๊ฐ™์ด ํ† ํฐํ™”ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import BertTokenizer >>> tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") >>> tokenizer.tokenize("I have a new GPU!") ["i", "have", "a", "new", "gp", "##u", "!"] ``` ๋Œ€์†Œ๋ฌธ์ž๊ฐ€ ์—†๋Š” ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ด ๋ฌธ์žฅ์˜ ์‹œ์ž‘์ด ์†Œ๋ฌธ์ž๋กœ ํ‘œ๊ธฐ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ๋‹จ์–ด `["i", "have", "a", "new"]`๋Š” ํ† ํฌ๋‚˜์ด์ €์˜ ์–ดํœ˜์— ์†ํ•˜์ง€๋งŒ, `"gpu"`๋Š” ์†ํ•˜์ง€ ์•Š๋Š” ๊ฒƒ์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ฒฐ๊ณผ์ ์œผ๋กœ ํ† ํฌ๋‚˜์ด์ €๋Š” `"gpu"`๋ฅผ ์•Œ๋ ค์ง„ ๋‘ ๊ฐœ์˜ ์„œ๋ธŒ์›Œ๋“œ๋กœ ์ชผ๊ฐญ๋‹ˆ๋‹ค: `["gp" and "##u"]`. 
`"##"`์€ ํ† ํฐ์˜ ๋‚˜๋จธ์ง€ ๋ถ€๋ถ„์ด ๊ณต๋ฐฑ ์—†์ด ์ด์ „ ํ† ํฐ์— ์—ฐ๊ฒฐ๋˜์–ด์•ผ(attach) ํ•จ์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค(ํ† ํฐํ™” ๋””์ฝ”๋”ฉ ๋˜๋Š” ์—ญ์ „์„ ์œ„ํ•ด). ๋˜ ๋‹ค๋ฅธ ์˜ˆ๋กœ, [`~transformers.XLNetTokenizer`]๋Š” ์ด์ „์— ์˜ˆ์‹œ ๋ฌธ์žฅ์„ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ํ† ํฐํ™”ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import XLNetTokenizer >>> tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased") >>> tokenizer.tokenize("Don't you love ๐Ÿค— Transformers? We sure do.") ["โ–Don", "'", "t", "โ–you", "โ–love", "โ–", "๐Ÿค—", "โ–", "Transform", "ers", "?", "โ–We", "โ–sure", "โ–do", "."] ``` `"โ–"`๊ฐ€ ๊ฐ€์ง€๋Š” ์˜๋ฏธ๋Š” [SentencePiece](#sentencepiece)์—์„œ ๋‹ค์‹œ ์‚ดํŽด๋ณด๋„๋ก ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ๋ณด๋‹ค์‹œํ”ผ `"Transformers"` ๋ผ๋Š” ๋“œ๋ฌธ ๋‹จ์–ด๋Š” ์„œ๋ธŒ์›Œ๋“œ `"Transform"`์™€ `"ers"`๋กœ ์ชผ๊ฐœ์ง‘๋‹ˆ๋‹ค. ์ด์ œ ๋‹ค์–‘ํ•œ ํ•˜์œ„ ๋‹จ์–ด ํ† ํฐํ™” ์•Œ๊ณ ๋ฆฌ์ฆ˜์ด ์–ด๋–ป๊ฒŒ ์ž‘๋™ํ•˜๋Š”์ง€ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ํ† ํฐํ™” ์•Œ๊ณ ๋ฆฌ์ฆ˜์€ ์ผ๋ฐ˜์ ์œผ๋กœ ํ•ด๋‹น ๋ชจ๋ธ์ด ํ•™์Šต๋˜๋Š” ๋ง๋ญ‰์น˜์— ๋Œ€ํ•ด ์ˆ˜ํ–‰๋˜๋Š” ์–ด๋–ค ํ˜•ํƒœ์˜ ํ•™์Šต์— ์˜์กดํ•œ๋‹ค๋Š” ์ ์— ์œ ์˜ํ•˜์„ธ์š”. <a id='byte-pair-encoding'></a> ### ๋ฐ”์ดํŠธ ํŽ˜์–ด ์ธ์ฝ”๋”ฉ (Byte-Pair Encoding, BPE)[[bytepair-encoding-bpe]] ๋ฐ”์ดํŠธ ํŽ˜์–ด ์ธ์ฝ”๋”ฉ(BPE)์€ [Neural Machine Translation of Rare Words with Subword Units (Sennrich et al., 2015)](https://arxiv.org/abs/1508.07909) ์—์„œ ์†Œ๊ฐœ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. BPE๋Š” ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ๋ฅผ ๋‹จ์–ด๋กœ ๋ถ„ํ• ํ•˜๋Š” ์‚ฌ์ „ ํ† ํฌ๋‚˜์ด์ €(pre-tokenizer)์— ์˜์กดํ•ฉ๋‹ˆ๋‹ค. ์‚ฌ์ „ ํ† ํฐํ™”(Pretokenization)์—๋Š” [GPT-2](model_doc/gpt2), [Roberta](model_doc/roberta)์™€ ๊ฐ™์€ ๊ฐ„๋‹จํ•œ ๊ณต๋ฐฑ ํ† ํฐํ™”๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ณต์žกํ•œ ์‚ฌ์ „ ํ† ํฐํ™”์—๋Š” ๊ทœ์น™ ๊ธฐ๋ฐ˜ ํ† ํฐํ™”๊ฐ€ ํ•ด๋‹นํ•˜๋Š”๋ฐ, ํ›ˆ๋ จ ๋ง๋ญ‰์น˜์—์„œ ๊ฐ ๋‹จ์–ด์˜ ๋นˆ๋„๋ฅผ ๊ณ„์‚ฐํ•˜๊ธฐ ์œ„ํ•ด ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. [XLM](model_doc/xlm), ๋Œ€๋ถ€๋ถ„์˜ ์–ธ์–ด์—์„œ Moses๋ฅผ ์‚ฌ์šฉํ•˜๋Š” [FlauBERT](model_doc/flaubert), Spacy์™€ ftfy๋ฅผ ์‚ฌ์šฉํ•˜๋Š” [GPT](model_doc/gpt)๊ฐ€ ํ•ด๋‹นํ•ฉ๋‹ˆ๋‹ค. 
์‚ฌ์ „ ํ† ํฐํ™” ์ดํ›„์—, ๊ณ ์œ  ๋‹จ์–ด ์ง‘ํ•ฉ๊ฐ€ ์ƒ์„ฑ๋˜๊ณ  ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ์—์„œ ๊ฐ ๋‹จ์–ด๊ฐ€ ๋“ฑ์žฅํ•˜๋Š” ๋นˆ๋„๊ฐ€ ๊ฒฐ์ •๋ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ์œผ๋กœ, BPE๋Š” ๊ณ ์œ  ๋‹จ์–ด ์ง‘ํ•ฉ์— ๋‚˜ํƒ€๋‚˜๋Š” ๋ชจ๋“  ๊ธฐํ˜ธ๋กœ ๊ตฌ์„ฑ๋œ ๊ธฐ๋ณธ ์–ดํœ˜๋ฅผ ์ƒ์„ฑํ•˜๊ณ  ๊ธฐ๋ณธ ์–ดํœ˜์˜ ๋‘ ๊ธฐํ˜ธ์—์„œ ์ƒˆ๋กœ์šด ๊ธฐํ˜ธ๋ฅผ ํ˜•์„ฑํ•˜๋Š” ๋ณ‘ํ•ฉ ๊ทœ์น™์„ ํ•™์Šตํ•ฉ๋‹ˆ๋‹ค. ์–ดํœ˜๊ฐ€ ์›ํ•˜๋Š” ์–ดํœ˜ ํฌ๊ธฐ์— ๋„๋‹ฌํ•  ๋•Œ๊นŒ์ง€ ์œ„์˜ ๊ณผ์ •์„ ๋ฐ˜๋ณตํ•ฉ๋‹ˆ๋‹ค. ์–ดํœ˜ ํฌ๊ธฐ๋Š” ํ† ํฌ๋‚˜์ด์ €๋ฅผ ํ›ˆ๋ จ์‹œํ‚ค๊ธฐ ์ „์— ์ •์˜ํ•ด์•ผ ํ•˜๋Š” ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ผ๋Š” ์ ์„ ์œ ์˜ํ•˜์„ธ์š”. ์˜ˆ๋ฅผ ๋“ค์–ด, ์‚ฌ์ „ ํ† ํฐํ™” ํ›„ ๋นˆ๋„๋ฅผ ํฌํ•จํ•œ ๋‹ค์Œ๊ณผ ๊ฐ™์€ ์–ดํœ˜ ์ง‘ํ•ฉ์ด ๊ฒฐ์ •๋˜์—ˆ๋‹ค๊ณ  ๊ฐ€์ •ํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ``` ("hug", 10), ("pug", 5), ("pun", 12), ("bun", 4), ("hugs", 5) ``` ๊ฒฐ๊ณผ์ ์œผ๋กœ ๊ธฐ๋ณธ ์–ดํœ˜๋Š” `["b", "g", "h", "n", "p", "s", "u"]` ์ด๊ณ , ๊ฐ ๋‹จ์–ด๋ฅผ ๊ธฐ๋ณธ ์–ดํœ˜์— ์†ํ•˜๋Š” ๊ธฐํ˜ธ๋กœ ์ชผ๊ฐœ๋ฉด ์•„๋ž˜์™€ ๊ฐ™์Šต๋‹ˆ๋‹ค: ``` ("h" "u" "g", 10), ("p" "u" "g", 5), ("p" "u" "n", 12), ("b" "u" "n", 4), ("h" "u" "g" "s", 5) ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ BPE๋Š” ๊ฐ€๋Šฅํ•œ ๊ฐ ๊ธฐํ˜ธ ์Œ์˜ ๋นˆ๋„๋ฅผ ๊ณ„์‚ฐํ•˜์—ฌ ๊ฐ€์žฅ ์ž์ฃผ ๋ฐœ์ƒํ•˜๋Š” ๊ธฐํ˜ธ ์Œ์„ ์„ ํƒํ•ฉ๋‹ˆ๋‹ค. ์œ„์˜ ์˜ˆ์‹œ์—์„œ `"h"` ๋’ค์— ์˜ค๋Š” `"u"`๋Š” _10 + 5 = 15_ ๋ฒˆ ๋“ฑ์žฅํ•ฉ๋‹ˆ๋‹ค. (`"hug"`์—์„œ 10๋ฒˆ, `"hugs"`์—์„œ 5๋ฒˆ ๋“ฑ์žฅ) ํ•˜์ง€๋งŒ, ๊ฐ€์žฅ ๋“ฑ์žฅ ๋นˆ๋„๊ฐ€ ๋†’์€ ๊ธฐํ˜ธ ์Œ์€ `"u"` ๋’ค์— ์˜ค๋Š” `"g"`์ž…๋‹ˆ๋‹ค. _10 + 5 + 5 = 20_ ์œผ๋กœ ์ด 20๋ฒˆ ๋“ฑ์žฅํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ํ† ํฌ๋‚˜์ด์ €๊ฐ€ ๋ณ‘ํ•ฉํ•˜๋Š” ๊ฐ€์žฅ ์ฒซ ๋ฒˆ์งธ ์Œ์€ `"u"` ๋’ค์— ์˜ค๋Š” `"g"`์ž…๋‹ˆ๋‹ค. `"ug"`๊ฐ€ ์–ดํœ˜์— ์ถ”๊ฐ€๋˜์–ด ์–ดํœ˜๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ``` ("h" "ug", 10), ("p" "ug", 5), ("p" "u" "n", 12), ("b" "u" "n", 4), ("h" "ug" "s", 5) ``` BPE๋Š” ๋‹ค์Œ์œผ๋กœ ๊ฐ€์žฅ ๋งŽ์ด ๋“ฑ์žฅํ•˜๋Š” ๊ธฐํ˜ธ ์Œ์„ ์‹๋ณ„ํ•ฉ๋‹ˆ๋‹ค. `"u"` ๋’ค์— ์˜ค๋Š” `"n"`์€ 16๋ฒˆ ๋“ฑ์žฅํ•ด `"un"` ์œผ๋กœ ๋ณ‘ํ•ฉ๋˜์–ด ์–ดํœ˜์— ์ถ”๊ฐ€๋ฉ๋‹ˆ๋‹ค. 
๊ทธ ๋‹ค์Œ์œผ๋กœ ๋นˆ๋„์ˆ˜๊ฐ€ ๋†“์€ ๊ธฐํ˜ธ ์Œ์€ `"h"` ๋’ค์— ์˜ค๋Š” `"ug"`๋กœ 15๋ฒˆ ๋“ฑ์žฅํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์‹œ ํ•œ ๋ฒˆ `"hug"`๋กœ ๋ณ‘ํ•ฉ๋˜์–ด ์–ดํœ˜์— ์ถ”๊ฐ€๋ฉ๋‹ˆ๋‹ค. ํ˜„์žฌ ๋‹จ๊ณ„์—์„œ ์–ดํœ˜๋Š” `["b", "g", "h", "n", "p", "s", "u", "ug", "un", "hug"]` ์ด๊ณ , ๊ณ ์œ  ๋‹จ์–ด ์ง‘ํ•ฉ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ``` ("hug", 10), ("p" "ug", 5), ("p" "un", 12), ("b" "un", 4), ("hug" "s", 5) ``` ์ด ์‹œ์ ์—์„œ ๋ฐ”์ดํŠธ ํŽ˜์–ด ์ธ์ฝ”๋”ฉ ํ›ˆ๋ จ์ด ์ค‘๋‹จ๋œ๋‹ค๊ณ  ๊ฐ€์ •ํ•˜๋ฉด, ํ›ˆ๋ จ๋œ ๋ณ‘ํ•ฉ ๊ทœ์น™์€ ์ƒˆ๋กœ์šด ๋‹จ์–ด์— ์ ์šฉ๋ฉ๋‹ˆ๋‹ค(๊ธฐ๋ณธ ์–ดํœ˜์— ํฌํ•จ๋œ ๊ธฐํ˜ธ๊ฐ€ ์ƒˆ๋กœ์šด ๋‹จ์–ด์— ํฌํ•จ๋˜์ง€ ์•Š๋Š” ํ•œ). ์˜ˆ๋ฅผ ๋“ค์–ด, ๋‹จ์–ด `"bug"`๋Š” `["b", "ug"]`๋กœ ํ† ํฐํ™”๋˜์ง€๋งŒ, `"m"`์ด ๊ธฐ๋ณธ ์–ดํœ˜์— ์—†๊ธฐ ๋•Œ๋ฌธ์— `"mug"`๋Š” `["<unk>", "ug"]`๋กœ ํ† ํฐํ™”๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ์—๋Š” ๋‹จ์ผ ๋ฌธ์ž๊ฐ€ ์ตœ์†Œํ•œ ํ•œ ๋ฒˆ ๋“ฑ์žฅํ•˜๊ธฐ ๋•Œ๋ฌธ์— ์ผ๋ฐ˜์ ์œผ๋กœ `"m"`๊ณผ ๊ฐ™์€ ๋‹จ์ผ ๋ฌธ์ž๋Š” `"<unk>"` ๊ธฐํ˜ธ๋กœ ๋Œ€์ฒด๋˜์ง€ ์•Š์ง€๋งŒ, ์ด๋ชจํ‹ฐ์ฝ˜๊ณผ ๊ฐ™์€ ํŠน๋ณ„ํ•œ ๋ฌธ์ž์ธ ๊ฒฝ์šฐ์—๋Š” ๋Œ€์ฒด๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด์ „์— ์–ธ๊ธ‰ํ–ˆ๋“ฏ์ด ์–ดํœ˜ ํฌ๊ธฐ(์ฆ‰ ๊ธฐ๋ณธ ์–ดํœ˜ ํฌ๊ธฐ + ๋ณ‘ํ•ฉ ํšŸ์ˆ˜)๋Š” ์„ ํƒํ•ด์•ผํ•˜๋Š” ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ์ž…๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด [GPT](model_doc/gpt)์˜ ๊ธฐ๋ณธ ์–ดํœ˜ ํฌ๊ธฐ๋Š” 478, 40,000๋ฒˆ์˜ ๋ณ‘ํ•ฉ ์ดํ›„์— ํ›ˆ๋ จ์„ ์ข…๋ฃŒํ•˜๊ธฐ ๋•Œ๋ฌธ์— ์–ดํœ˜ ํฌ๊ธฐ๊ฐ€ 40,478์ž…๋‹ˆ๋‹ค. #### ๋ฐ”์ดํŠธ ์ˆ˜์ค€ BPE (Byte-level BPE)[[bytelevel-bpe]] ๊ฐ€๋Šฅํ•œ ๋ชจ๋“  ๊ธฐ๋ณธ ๋ฌธ์ž๋ฅผ ํฌํ•จํ•˜๋Š” ๊ธฐ๋ณธ ์–ดํœ˜์˜ ํฌ๊ธฐ๋Š” ๊ต‰์žฅํžˆ ์ปค์งˆ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. (์˜ˆ: ๋ชจ๋“  ์œ ๋‹ˆ์ฝ”๋“œ ๋ฌธ์ž๋ฅผ ๊ธฐ๋ณธ ๋ฌธ์ž๋กœ ๊ฐ„์ฃผํ•˜๋Š” ๊ฒฝ์šฐ) ๋” ๋‚˜์€ ๊ธฐ๋ณธ ์–ดํœ˜๋ฅผ ๊ฐ–๋„๋ก [GPT-2](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)๋Š” ๊ธฐ๋ณธ ์–ดํœ˜๋กœ ๋ฐ”์ดํŠธ(bytes)๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด ๋ฐฉ์‹์€ ๋ชจ๋“  ๊ธฐ๋ณธ ๋ฌธ์ž๊ฐ€ ์–ดํœ˜์— ํฌํ•จ๋˜๋„๋ก ํ•˜๋ฉด์„œ ๊ธฐ๋ณธ ์–ดํœ˜์˜ ํฌ๊ธฐ๋ฅผ 256์œผ๋กœ ์ œํ•œํ•ฉ๋‹ˆ๋‹ค. 
๊ตฌ๋‘์ ์„ ๋‹ค๋ฃจ๋Š” ์ถ”๊ฐ€์ ์ธ ๊ทœ์น™์„ ์‚ฌ์šฉํ•ด GPT2 ํ† ํฌ๋‚˜์ด์ €๋Š” ๋ชจ๋“  ํ…์ŠคํŠธ๋ฅผ <unk> ๊ธฐํ˜ธ ์—†์ด ํ† ํฐํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [GPT-2](model_doc/gpt)์˜ ์–ดํœ˜ ํฌ๊ธฐ๋Š” 50,257๋กœ 256 ๋ฐ”์ดํŠธ ํฌ๊ธฐ์˜ ๊ธฐ๋ณธ ํ† ํฐ, ํŠน๋ณ„ํ•œ end-of-text ํ† ํฐ๊ณผ 50,000๋ฒˆ์˜ ๋ณ‘ํ•ฉ์œผ๋กœ ํ•™์Šตํ•œ ๊ธฐํ˜ธ๋กœ ๊ตฌ์„ฑ๋ฉ๋‹ˆ๋‹ค. <a id='wordpiece'></a> ### ์›Œ๋“œํ”ผ์Šค (WordPiece)[[wordpiece]] ์›Œ๋“œํ”ผ์Šค๋Š” [BERT](model_doc/bert), [DistilBERT](model_doc/distilbert), [Electra](model_doc/electra)์— ์‚ฌ์šฉ๋œ ์„œ๋ธŒ์›Œ๋“œ ํ† ํฐํ™” ์•Œ๊ณ ๋ฆฌ์ฆ˜์ž…๋‹ˆ๋‹ค. ์ด ์•Œ๊ณ ๋ฆฌ์ฆ˜์€ [Japanese and Korean Voice Search (Schuster et al., 2012)](https://static.googleusercontent.com/media/research.google.com/ja//pubs/archive/37842.pdf)์—์„œ ์†Œ๊ฐœ๋˜์—ˆ๊ณ , BPE์™€ ๊ต‰์žฅํžˆ ์œ ์‚ฌํ•ฉ๋‹ˆ๋‹ค. ์›Œ๋“œํ”ผ์Šค๋Š” ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ์— ๋“ฑ์žฅํ•˜๋Š” ๋ชจ๋“  ๋ฌธ์ž๋กœ ๊ธฐ๋ณธ ์–ดํœ˜๋ฅผ ์ดˆ๊ธฐํ™”ํ•œ ํ›„, ์ฃผ์–ด์ง„ ๋ณ‘ํ•ฉ ๊ทœ์น™์— ๋”ฐ๋ผ ์ ์ง„์ ์œผ๋กœ ํ•™์Šตํ•ฉ๋‹ˆ๋‹ค. BPE์™€๋Š” ๋Œ€์กฐ์ ์œผ๋กœ ์›Œ๋“œํ”ผ์Šค๋Š” ๊ฐ€์žฅ ๋นˆ๋„์ˆ˜๊ฐ€ ๋†’์€ ๊ธฐํ˜ธ ์Œ์„ ์„ ํƒํ•˜์ง€ ์•Š๊ณ , ์–ดํœ˜์— ์ถ”๊ฐ€๋˜์—ˆ์„ ๋•Œ ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ์˜ ์šฐ๋„๊ฐ€ ์ตœ๋Œ€ํ™”๋˜๋Š” ์Œ์„ ์„ ํƒํ•ฉ๋‹ˆ๋‹ค. ์ •ํ™•ํžˆ ๋ฌด์Šจ ์˜๋ฏธ์ผ๊นŒ์š”? ์ด์ „ ์˜ˆ์‹œ๋ฅผ ์ฐธ์กฐํ•˜๋ฉด, ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ์˜ ์šฐ๋„ ๊ฐ’์„ ์ตœ๋Œ€ํ™”ํ•˜๋Š” ๊ฒƒ์€ ๋ชจ๋“  ๊ธฐํ˜ธ ์Œ ์ค‘์—์„œ ์ฒซ ๋ฒˆ์งธ ๊ธฐํ˜ธ์™€ ๋‘ ๋ฒˆ์งธ ๊ธฐํ˜ธ์˜ ํ™•๋ฅ ๋กœ ๋‚˜๋ˆˆ ํ™•๋ฅ ์ด ๊ฐ€์žฅ ํฐ ๊ธฐํ˜ธ ์Œ์„ ์ฐพ๋Š” ๊ฒƒ๊ณผ ๋™์ผํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด `"ug"`์˜ ํ™•๋ฅ ์ด `"u"`์™€ `"g"` ๊ฐ๊ฐ์œผ๋กœ ์ชผ๊ฐœ์กŒ์„ ๋•Œ ๋ณด๋‹ค ๋†’์•„์•ผ `"u"` ๋’ค์— ์˜ค๋Š” `"g"`๋Š” ๋ณ‘ํ•ฉ๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ง๊ด€์ ์œผ๋กœ ์›Œ๋“œํ”ผ์Šค๋Š” ๋‘ ๊ธฐํ˜ธ๋ฅผ ๋ณ‘ํ•ฉํ•˜์—ฌ _์žƒ๋Š”_ ๊ฒƒ์„ ํ‰๊ฐ€ํ•˜์—ฌ ๊ทธ๋งŒํ•œ _๊ฐ€์น˜_๊ฐ€ ์žˆ๋Š”์ง€ ํ™•์ธํ•œ๋‹ค๋Š” ์ ์—์„œ BPE์™€ ์•ฝ๊ฐ„ ๋‹ค๋ฆ…๋‹ˆ๋‹ค. 
<a id='unigram'></a>

### 유니그램 (Unigram)[[unigram]]

유니그램은 [Subword Regularization: Improving Neural Network Translation Models with Multiple Subword Candidates (Kudo, 2018)](https://arxiv.org/pdf/1804.10959.pdf)에서 제안된 서브워드 토큰화 알고리즘입니다. BPE나 워드피스와 달리 유니그램은 기본 어휘를 많은 수의 기호로 초기화한 후 각 기호를 점진적으로 줄여 더 작은 어휘를 얻습니다. 예를 들어 기본 어휘는 모든 사전 토큰화된 단어와 가장 일반적인 하위 문자열에 해당할 수 있습니다. 유니그램은 transformers 모델에서 직접적으로 사용되지는 않지만, [SentencePiece](#sentencepiece)와 함께 사용됩니다.

각 훈련 단계에서 유니그램 알고리즘은 현재 어휘와 유니그램 언어 모델이 주어졌을 때 훈련 데이터에 대한 손실(흔히 로그 우도로 정의됨)을 정의합니다. 그런 다음 어휘의 각 기호에 대해 알고리즘은 해당 기호를 어휘에서 제거할 경우 전체 손실이 얼마나 증가할지 계산합니다. 이후에 유니그램은 손실 증가율이 가장 낮은 기호의 p(보통 10% 또는 20%) 퍼센트를 제거합니다. (제거되는 기호는 훈련 데이터에 대한 전체 손실에 가장 작은 영향을 미칩니다.) 어휘가 원하는 크기에 도달할 때까지 이 과정을 반복합니다. 유니그램 알고리즘은 항상 기본 문자를 포함해 어떤 단어라도 토큰화할 수 있습니다.

유니그램이 병합 규칙에 기반하지 않기 때문에 (BPE나 워드피스와는 대조적으로), 해당 알고리즘은 훈련 이후에 새로운 텍스트를 토큰화하는 데 여러 가지 방법이 있습니다. 예를 들어, 훈련된 유니그램 토크나이저가 다음과 같은 어휘를 가진다면:

```
["b", "g", "h", "n", "p", "s", "u", "ug", "un", "hug"],
```

`"hugs"`는 여러 가지 방법으로 토큰화할 수 있습니다. 
`["hug", "s"]`์™€ `["h", "ug", "s"]` ๋˜๋Š” `["h", "u", "g", "s"]`. ๊ทธ๋ ‡๋‹ค๋ฉด ์–ด๋–ค ํ† ํฐํ™” ๋ฐฉ๋ฒ•์„ ์„ ํƒํ•ด์•ผ ํ• ๊นŒ์š”? ์œ ๋‹ˆ๊ทธ๋žจ์€ ์–ดํœ˜๋ฅผ ์ €์žฅํ•˜๋Š” ๊ฒƒ ์™ธ์—๋„ ํ›ˆ๋ จ ๋ง๋ญ‰์น˜์— ๊ฐ ํ† ํฐ์˜ ํ™•๋ฅ ์„ ์ €์žฅํ•˜์—ฌ ํ›ˆ๋ จ ํ›„ ๊ฐ€๋Šฅํ•œ ๊ฐ ํ† ํฐํ™”์˜ ํ™•๋ฅ ์„ ๊ณ„์‚ฐํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. ์ด ์•Œ๊ณ ๋ฆฌ์ฆ˜์€ ๋‹จ์ˆœํžˆ ์‹ค์ œ๋กœ ๊ฐ€์žฅ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์€ ํ† ํฐํ™”๋ฅผ ์„ ํƒํ•˜์ง€๋งŒ, ํ™•๋ฅ ์— ๋”ฐ๋ผ ๊ฐ€๋Šฅํ•œ ํ† ํฐํ™”๋ฅผ ์ƒ˜ํ”Œ๋งํ•  ์ˆ˜ ์žˆ๋Š” ๊ฐ€๋Šฅ์„ฑ๋„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ํ™•๋ฅ ์€ ํ† ํฌ๋‚˜์ด์ €๊ฐ€ ํ•™์Šตํ•œ ์†์‹ค์— ์˜ํ•ด ์ •์˜๋ฉ๋‹ˆ๋‹ค. ๋‹จ์–ด๋กœ ๊ตฌ์„ฑ๋œ ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ๋ฅผ \\(x_{1}, \dots, x_{N}\\)๋ผ ํ•˜๊ณ , ๋‹จ์–ด \\(x_{i}\\)์— ๋Œ€ํ•œ ๊ฐ€๋Šฅํ•œ ๋ชจ๋“  ํ† ํฐํ™” ๊ฒฐ๊ณผ๋ฅผ \\(S(x_{i})\\)๋ผ ํ•œ๋‹ค๋ฉด, ์ „์ฒด ์†์‹ค์€ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ •์˜๋ฉ๋‹ˆ๋‹ค: $$\mathcal{L} = -\sum_{i=1}^{N} \log \left ( \sum_{x \in S(x_{i})} p(x) \right )$$ <a id='sentencepiece'></a> ### ์„ผํ…์Šคํ”ผ์Šค (SentencePiece)[[sentencepiece]] ์ง€๊ธˆ๊นŒ์ง€ ๋‹ค๋ฃฌ ํ† ํฐํ™” ์•Œ๊ณ ๋ฆฌ์ฆ˜์€ ๋™์ผํ•œ ๋ฌธ์ œ๋ฅผ ๊ฐ€์ง‘๋‹ˆ๋‹ค: ์ž…๋ ฅ ํ…์ŠคํŠธ๋Š” ๊ณต๋ฐฑ์„ ์‚ฌ์šฉํ•˜์—ฌ ๋‹จ์–ด๋ฅผ ๊ตฌ๋ถ„ํ•œ๋‹ค๊ณ  ๊ฐ€์ •ํ•ฉ๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ, ๋ชจ๋“  ์–ธ์–ด์—์„œ ๋‹จ์–ด๋ฅผ ๊ตฌ๋ถ„ํ•˜๊ธฐ ์œ„ํ•ด ๊ณต๋ฐฑ์„ ์‚ฌ์šฉํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ํ•œ๊ฐ€์ง€ ๊ฐ€๋Šฅํ•œ ํ•ด๊ฒฐ๋ฐฉ์•ˆ์€ ํŠน์ • ์–ธ์–ด์— ํŠนํ™”๋œ ์‚ฌ์ „ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด [XLM](model_doc/xlm)์€ ํŠน์ • ์ค‘๊ตญ์–ด, ์ผ๋ณธ์–ด, ํƒœ๊ตญ์–ด ์‚ฌ์ „ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด ๋ฌธ์ œ๋ฅผ ์ผ๋ฐ˜์ ์ธ ๋ฐฉ๋ฒ•์œผ๋กœ ํ•ด๊ฒฐํ•˜๊ธฐ ์œ„ํ•ด, [SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing (Kudo et al., 2018)](https://arxiv.org/pdf/1808.06226.pdf)๋Š” ์ž…๋ ฅ์„ ์ŠคํŠธ๋ฆผ์œผ๋กœ ์ฒ˜๋ฆฌํ•ด ๊ณต๋ฐฑ๋ฅผ ํ•˜๋‚˜์˜ ๋ฌธ์ž๋กœ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ดํ›„์— BPE ๋˜๋Š” ์œ ๋‹ˆ๊ทธ๋žจ ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ์‚ฌ์šฉํ•ด ์ ์ ˆํ•œ ์–ดํœ˜๋ฅผ ๊ตฌ์„ฑํ•ฉ๋‹ˆ๋‹ค. 
[`XLNetTokenizer`]๋Š” ์„ผํ…์Šคํ”ผ์Šค๋ฅผ ์‚ฌ์šฉํ•˜๊ธฐ ๋•Œ๋ฌธ์—, ์œ„์—์„œ ๋‹ค๋ฃฌ ์˜ˆ์‹œ์—์„œ ์–ดํœ˜์— `"โ–"`๊ฐ€ ํฌํ•จ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ๋ชจ๋“  ํ† ํฐ์„ ํ•ฉ์นœ ํ›„ `"โ–"`์„ ๊ณต๋ฐฑ์œผ๋กœ ๋Œ€์ฒดํ•˜๋ฉด ๋˜๊ธฐ ๋•Œ๋ฌธ์— ์„ผํ…์Šคํ”ผ์Šค๋กœ ํ† ํฐํ™”๋œ ๊ฒฐ๊ณผ๋Š” ๋””์ฝ”๋”ฉํ•˜๊ธฐ ์ˆ˜์›”ํ•ฉ๋‹ˆ๋‹ค.

transformers์—์„œ ์ œ๊ณตํ•˜๋Š” ์„ผํ…์Šคํ”ผ์Šค ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๋ชจ๋“  ๋ชจ๋ธ์€ ์œ ๋‹ˆ๊ทธ๋žจ๊ณผ ํ•จ๊ป˜ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. [ALBERT](model_doc/albert), [XLNet](model_doc/xlnet), [Marian](model_doc/marian), [T5](model_doc/t5) ๋ชจ๋ธ์ด ์„ผํ…์Šคํ”ผ์Šค ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค.
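To make the unigram objective above concrete, here is a small self-contained sketch. It enumerates every segmentation \\(S(x)\\) of a word over the toy vocabulary from the `"hugs"` example; the probability values are invented purely for illustration, not learned from data:

```python
import math

# Toy unigram probabilities over the example vocabulary above.
# The probability values are invented for illustration only.
probs = {"b": 0.05, "g": 0.05, "h": 0.06, "n": 0.05, "p": 0.05,
         "s": 0.06, "u": 0.07, "ug": 0.09, "un": 0.08, "hug": 0.14}

def segmentations(word):
    """Enumerate every tokenization S(word) over the vocabulary."""
    if not word:
        return [[]]
    results = []
    for i in range(1, len(word) + 1):
        prefix = word[:i]
        if prefix in probs:
            results += [[prefix] + rest for rest in segmentations(word[i:])]
    return results

def best_tokenization(word):
    """Pick the segmentation with the highest product of token probabilities."""
    return max(segmentations(word),
               key=lambda toks: math.prod(probs[t] for t in toks))

print(segmentations("hugs"))      # the three candidate tokenizations
print(best_tokenization("hugs"))  # the most probable one: ["hug", "s"]
```

Sampling a tokenization instead of taking the `max` (weighted by these same probabilities) is exactly the subword-regularization idea from the Kudo (2018) paper cited above.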
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# ๋‹ค์ค‘ CPU์—์„œ ํšจ์œจ์ ์œผ๋กœ ํ›ˆ๋ จํ•˜๊ธฐ [[efficient-training-on-multiple-cpus]]

ํ•˜๋‚˜์˜ CPU์—์„œ ํ›ˆ๋ จํ•˜๋Š” ๊ฒƒ์ด ๋„ˆ๋ฌด ๋А๋ฆด ๋•Œ๋Š” ๋‹ค์ค‘ CPU๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ๋Š” PyTorch ๊ธฐ๋ฐ˜์˜ DDP๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ถ„์‚ฐ CPU ํ›ˆ๋ จ์„ ํšจ์œจ์ ์œผ๋กœ ์ˆ˜ํ–‰ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค.

## PyTorch์šฉ Intelยฎ oneCCL ๋ฐ”์ธ๋”ฉ [[intel-oneccl-bindings-for-pytorch]]

[Intelยฎ oneCCL](https://github.com/oneapi-src/oneCCL) (collective communications library)์€ allreduce, allgather, alltoall๊ณผ ๊ฐ™์€ ์ง‘ํ•ฉ ํ†ต์‹ (collective communications)์„ ๊ตฌํ˜„ํ•œ ํšจ์œจ์ ์ธ ๋ถ„์‚ฐ ๋”ฅ๋Ÿฌ๋‹ ํ›ˆ๋ จ์„ ์œ„ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์ž…๋‹ˆ๋‹ค. oneCCL์— ๋Œ€ํ•œ ์ž์„ธํ•œ ์ •๋ณด๋Š” [oneCCL ๋ฌธ์„œ](https://spec.oneapi.com/versions/latest/elements/oneCCL/source/index.html)์™€ [oneCCL ์‚ฌ์–‘](https://spec.oneapi.com/versions/latest/elements/oneCCL/source/index.html)์„ ์ฐธ์กฐํ•˜์„ธ์š”.

`oneccl_bindings_for_pytorch` ๋ชจ๋“ˆ (`torch_ccl`์€ ๋ฒ„์ „ 1.12 ์ด์ „์— ์‚ฌ์šฉ)์€ PyTorch C10D ProcessGroup API๋ฅผ ๊ตฌํ˜„ํ•˜๋ฉฐ, ์™ธ๋ถ€ ProcessGroup๋กœ ๋™์ ์œผ๋กœ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์œผ๋ฉฐ ํ˜„์žฌ Linux ํ”Œ๋žซํผ์—์„œ๋งŒ ์ž‘๋™ํ•ฉ๋‹ˆ๋‹ค. [oneccl_bind_pt](https://github.com/intel/torch-ccl)์—์„œ ๋” ์ž์„ธํ•œ ์ •๋ณด๋ฅผ ํ™•์ธํ•˜์„ธ์š”.
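As a mental model for what a collective such as allreduce does (a single-process toy sketch only — oneCCL performs the same reduction across real processes over sockets or the network): every rank contributes its local gradient tensor, and afterwards every rank holds the element-wise sum of all contributions.

```python
# Toy single-process illustration of the allreduce collective:
# each "rank" holds a local gradient vector, and after allreduce
# every rank holds the element-wise sum across all ranks.
def allreduce(rank_tensors):
    reduced = [sum(values) for values in zip(*rank_tensors)]
    return [list(reduced) for _ in rank_tensors]

local_grads = [[1.0, 2.0], [3.0, 4.0]]  # gradients on rank 0 and rank 1
print(allreduce(local_grads))  # both ranks now see [4.0, 6.0]
```

In DDP training this is exactly what happens after each backward pass: gradients are summed (then averaged) across workers so every replica applies the same update.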
### PyTorch์šฉ Intelยฎ oneCCL ๋ฐ”์ธ๋”ฉ ์„ค์น˜: [[intel-oneccl-bindings-for-pytorch-installation]] ๋‹ค์Œ Python ๋ฒ„์ „์— ๋Œ€ํ•œ Wheel ํŒŒ์ผ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. | Extension Version | Python 3.6 | Python 3.7 | Python 3.8 | Python 3.9 | Python 3.10 | | :---------------: | :--------: | :--------: | :--------: | :--------: | :---------: | | 1.13.0 | | โˆš | โˆš | โˆš | โˆš | | 1.12.100 | | โˆš | โˆš | โˆš | โˆš | | 1.12.0 | | โˆš | โˆš | โˆš | โˆš | | 1.11.0 | | โˆš | โˆš | โˆš | โˆš | | 1.10.0 | โˆš | โˆš | โˆš | โˆš | | ``` pip install oneccl_bind_pt=={pytorch_version} -f https://developer.intel.com/ipex-whl-stable-cpu ``` `{pytorch_version}`์€ 1.13.0๊ณผ ๊ฐ™์ด PyTorch ๋ฒ„์ „์„ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. [oneccl_bind_pt ์„ค์น˜](https://github.com/intel/torch-ccl)์— ๋Œ€ํ•œ ๋” ๋งŽ์€ ์ ‘๊ทผ ๋ฐฉ๋ฒ•์„ ํ™•์ธํ•ด ๋ณด์„ธ์š”. oneCCL๊ณผ PyTorch์˜ ๋ฒ„์ „์€ ์ผ์น˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. <Tip warning={true}> oneccl_bindings_for_pytorch 1.12.0 ๋ฒ„์ „์˜ ๋ฏธ๋ฆฌ ๋นŒ๋“œ๋œ Wheel ํŒŒ์ผ์€ PyTorch 1.12.1๊ณผ ํ˜ธํ™˜๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค(PyTorch 1.12.0์šฉ์ž…๋‹ˆ๋‹ค). PyTorch 1.12.1์€ oneccl_bindings_for_pytorch 1.12.10 ๋ฒ„์ „๊ณผ ํ•จ๊ป˜ ์‚ฌ์šฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. </Tip> ## Intelยฎ MPI ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ [[intel-mpi-library]] ์ด ํ‘œ์ค€ ๊ธฐ๋ฐ˜ MPI ๊ตฌํ˜„์„ ์‚ฌ์šฉํ•˜์—ฌ Intelยฎ ์•„ํ‚คํ…์ฒ˜์—์„œ ์œ ์—ฐํ•˜๊ณ  ํšจ์œจ์ ์ด๋ฉฐ ํ™•์žฅ ๊ฐ€๋Šฅํ•œ ํด๋Ÿฌ์Šคํ„ฐ ๋ฉ”์‹œ์ง•์„ ์ œ๊ณตํ•˜์„ธ์š”. ์ด ๊ตฌ์„ฑ ์š”์†Œ๋Š” Intelยฎ oneAPI HPC Toolkit์˜ ์ผ๋ถ€์ž…๋‹ˆ๋‹ค. oneccl_bindings_for_pytorch๋Š” MPI ๋„๊ตฌ ์„ธํŠธ์™€ ํ•จ๊ป˜ ์„ค์น˜๋ฉ๋‹ˆ๋‹ค. ์‚ฌ์šฉํ•˜๊ธฐ ์ „์— ํ™˜๊ฒฝ์„ ์†Œ์Šค๋กœ ์ง€์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 
Intelยฎ oneCCL ๋ฒ„์ „ 1.12.0 ์ด์ƒ์ธ ๊ฒฝ์šฐ ``` oneccl_bindings_for_pytorch_path=$(python -c "from oneccl_bindings_for_pytorch import cwd; print(cwd)") source $oneccl_bindings_for_pytorch_path/env/setvars.sh ``` Intelยฎ oneCCL ๋ฒ„์ „์ด 1.12.0 ๋ฏธ๋งŒ์ธ ๊ฒฝ์šฐ ``` torch_ccl_path=$(python -c "import torch; import torch_ccl; import os; print(os.path.abspath(os.path.dirname(torch_ccl.__file__)))") source $torch_ccl_path/env/setvars.sh ``` #### IPEX ์„ค์น˜: [[ipex-installation]] IPEX๋Š” Float32์™€ BFloat16์„ ๋ชจ๋‘ ์‚ฌ์šฉํ•˜๋Š” CPU ํ›ˆ๋ จ์„ ์œ„ํ•œ ์„ฑ๋Šฅ ์ตœ์ ํ™”๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. [single CPU section](./perf_train_cpu)์„ ์ฐธ์กฐํ•˜์„ธ์š”. ์ด์–ด์„œ ๋‚˜์˜ค๋Š” "Trainer์—์„œ์˜ ์‚ฌ์šฉ"์€ Intelยฎ MPI ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ mpirun์„ ์˜ˆ๋กœ ๋“ค์—ˆ์Šต๋‹ˆ๋‹ค. ## Trainer์—์„œ์˜ ์‚ฌ์šฉ [[usage-in-trainer]] Trainer์—์„œ ccl ๋ฐฑ์—”๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฉ€ํ‹ฐ CPU ๋ถ„์‚ฐ ํ›ˆ๋ จ์„ ํ™œ์„ฑํ™”ํ•˜๋ ค๋ฉด ๋ช…๋ น ์ธ์ˆ˜์— **`--ddp_backend ccl`**์„ ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. [์งˆ์˜ ์‘๋‹ต ์˜ˆ์ œ](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering)๋ฅผ ์‚ฌ์šฉํ•œ ์˜ˆ๋ฅผ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ ๋ช…๋ น์€ ํ•œ Xeon ๋…ธ๋“œ์—์„œ 2๊ฐœ์˜ ํ”„๋กœ์„ธ์Šค๋กœ ํ›ˆ๋ จ์„ ํ™œ์„ฑํ™”ํ•˜๋ฉฐ, ๊ฐ ์†Œ์ผ“๋‹น ํ•˜๋‚˜์˜ ํ”„๋กœ์„ธ์Šค๊ฐ€ ์‹คํ–‰๋ฉ๋‹ˆ๋‹ค. OMP_NUM_THREADS/CCL_WORKER_COUNT ๋ณ€์ˆ˜๋Š” ์ตœ์ ์˜ ์„ฑ๋Šฅ์„ ์œ„ํ•ด ์กฐ์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
```shell script export CCL_WORKER_COUNT=1 export MASTER_ADDR=127.0.0.1 mpirun -n 2 -genv OMP_NUM_THREADS=23 \ python3 run_qa.py \ --model_name_or_path bert-large-uncased \ --dataset_name squad \ --do_train \ --do_eval \ --per_device_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/debug_squad/ \ --no_cuda \ --ddp_backend ccl \ --use_ipex ``` ๋‹ค์Œ ๋ช…๋ น์€ ๋‘ ๊ฐœ์˜ Xeon(๋…ธ๋“œ0 ๋ฐ ๋…ธ๋“œ1, ์ฃผ ํ”„๋กœ์„ธ์Šค๋กœ ๋…ธ๋“œ0์„ ์‚ฌ์šฉ)์—์„œ ์ด 4๊ฐœ์˜ ํ”„๋กœ์„ธ์Šค๋กœ ํ›ˆ๋ จ์„ ํ™œ์„ฑํ™”ํ•˜๋ฉฐ, ๊ฐ ์†Œ์ผ“๋‹น ํ•˜๋‚˜์˜ ํ”„๋กœ์„ธ์Šค๊ฐ€ ์‹คํ–‰๋ฉ๋‹ˆ๋‹ค. OMP_NUM_THREADS/CCL_WORKER_COUNT ๋ณ€์ˆ˜๋Š” ์ตœ์ ์˜ ์„ฑ๋Šฅ์„ ์œ„ํ•ด ์กฐ์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋…ธ๋“œ0์—์„œ๋Š” ๊ฐ ๋…ธ๋“œ์˜ IP ์ฃผ์†Œ๋ฅผ ํฌํ•จํ•˜๋Š” ๊ตฌ์„ฑ ํŒŒ์ผ(์˜ˆ: hostfile)์„ ์ƒ์„ฑํ•˜๊ณ  ํ•ด๋‹น ๊ตฌ์„ฑ ํŒŒ์ผ ๊ฒฝ๋กœ๋ฅผ ์ธ์ˆ˜๋กœ ์ „๋‹ฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```shell script cat hostfile xxx.xxx.xxx.xxx #node0 ip xxx.xxx.xxx.xxx #node1 ip ``` ์ด์ œ ๋…ธ๋“œ0์—์„œ ๋‹ค์Œ ๋ช…๋ น์„ ์‹คํ–‰ํ•˜๋ฉด **4DDP**๊ฐ€ ๋…ธ๋“œ0 ๋ฐ ๋…ธ๋“œ1์—์„œ BF16 ์ž๋™ ํ˜ผํ•ฉ ์ •๋ฐ€๋„๋กœ ํ™œ์„ฑํ™”๋ฉ๋‹ˆ๋‹ค. ```shell script export CCL_WORKER_COUNT=1 export MASTER_ADDR=xxx.xxx.xxx.xxx #node0 ip mpirun -f hostfile -n 4 -ppn 2 \ -genv OMP_NUM_THREADS=23 \ python3 run_qa.py \ --model_name_or_path bert-large-uncased \ --dataset_name squad \ --do_train \ --do_eval \ --per_device_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/debug_squad/ \ --no_cuda \ --ddp_backend ccl \ --use_ipex \ --bf16 ```
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Transformers Agent [[transformers-agent]] <Tip warning={true}> Transformers Agent๋Š” ์‹คํ—˜ ์ค‘์ธ API๋กœ ์–ธ์ œ๋“ ์ง€ ๋ณ€๊ฒฝ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. API ๋˜๋Š” ๊ธฐ๋ฐ˜ ๋ชจ๋ธ์ด ๋ณ€๊ฒฝ๋˜๊ธฐ ์‰ฝ๊ธฐ ๋•Œ๋ฌธ์— ์—์ด์ „ํŠธ๊ฐ€ ๋ฐ˜ํ™˜ํ•˜๋Š” ๊ฒฐ๊ณผ๋„ ๋‹ฌ๋ผ์งˆ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. </Tip> Transformers ๋ฒ„์ „ 4.29.0.์—์„œ *๋„๊ตฌ*์™€ *์—์ด์ „ํŠธ*๋ผ๋Š” ์ปจ์…‰์„ ๋„์ž…ํ–ˆ์Šต๋‹ˆ๋‹ค. [์ด colab](https://colab.research.google.com/drive/1c7MHD-T1forUPGcC_jlwsIptOzpG3hSj)์—์„œ ์‚ฌ์šฉํ•ด๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ฐ„๋‹จํžˆ ๋งํ•˜๋ฉด, Agent๋Š” ํŠธ๋žœ์Šคํฌ๋จธ ์œ„์— ์ž์—ฐ์–ด API๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์—„์„ ๋œ ๋„๊ตฌ ์„ธํŠธ๋ฅผ ์ •์˜ํ•˜๊ณ , ์ž์—ฐ์–ด๋ฅผ ํ•ด์„ํ•˜์—ฌ ์ด๋Ÿฌํ•œ ๋„๊ตฌ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ์—์ด์ „ํŠธ๋ฅผ ์„ค๊ณ„ํ–ˆ์Šต๋‹ˆ๋‹ค. ์ด API๋Š” ํ™•์žฅ์ด ๊ฐ€๋Šฅํ•˜๋„๋ก ์„ค๊ณ„ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ฃผ์š” ๋„๊ตฌ๋ฅผ ์„ ๋ณ„ํ•ด๋‘์—ˆ์ง€๋งŒ, ์ปค๋ฎค๋‹ˆํ‹ฐ์—์„œ ๊ฐœ๋ฐœํ•œ ๋ชจ๋“  ๋„๊ตฌ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก ์‹œ์Šคํ…œ์„ ์‰ฝ๊ฒŒ ํ™•์žฅํ•  ์ˆ˜ ์žˆ๋Š” ๋ฐฉ๋ฒ•๋„ ๋ณด์—ฌ๋“œ๋ฆฌ๊ฒ ์Šต๋‹ˆ๋‹ค. ๋ช‡ ๊ฐ€์ง€ ์˜ˆ๋ฅผ ํ†ตํ•ด ์ƒˆ๋กœ์šด API๋กœ ๋ฌด์—‡์„ ํ•  ์ˆ˜ ์žˆ๋Š”์ง€ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ์ด API๋Š” ํŠนํžˆ ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ ์ž‘์—…์—์„œ ๊ฐ•๋ ฅํ•˜๋ฏ€๋กœ ์ด๋ฏธ์ง€๋ฅผ ์ƒ์„ฑํ•˜๊ณ  ํ…์ŠคํŠธ๋ฅผ ์†Œ๋ฆฌ๋‚ด์–ด ์ฝ์–ด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. 
```py agent.run("Caption the following image", image=image) ``` | **Input** | **Output** | |-----------------------------------------------------------------------------------------------------------------------------|-----------------------------------| | <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/beaver.png" width=200> | A beaver is swimming in the water | --- ```py agent.run("Read the following text out loud", text=text) ``` | **Input** | **Output** | |-------------------------------------------------------------------------------------------------------------------------|----------------------------------------------| | A beaver is swimming in the water | <audio controls><source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tts_example.wav" type="audio/wav"> your browser does not support the audio element. </audio> --- ```py agent.run( "In the following `document`, where will the TRRF Scientific Advisory Council Meeting take place?", document=document, ) ``` | **Input** | **Output** | |-----------------------------------------------------------------------------------------------------------------------------|----------------| | <img src="https://datasets-server.huggingface.co/assets/hf-internal-testing/example-documents/--/hf-internal-testing--example-documents/test/0/image/image.jpg" width=200> | ballroom foyer | ## ๋ฐ”๋กœ ์‹œ์ž‘ํ•˜๊ธฐ [[quickstart]] `agent.run`์„ ์‚ฌ์šฉํ•˜๋ ค๋ฉด ๋จผ์ € ๋Œ€๊ทœ๋ชจ ์–ธ์–ด ๋ชจ๋ธ(LLM)์ธ ์—์ด์ „ํŠธ๋ฅผ ์ธ์Šคํ„ด์Šคํ™”ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ €ํฌ๋Š” openAI ๋ชจ๋ธ๋ฟ๋งŒ ์•„๋‹ˆ๋ผ BigCode ๋ฐ OpenAssistant์˜ ์˜คํ”ˆ์†Œ์Šค ๋Œ€์ฒด ๋ชจ๋ธ๋„ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. openAI ๋ชจ๋ธ์˜ ์„ฑ๋Šฅ์ด ๋” ์šฐ์ˆ˜ํ•˜์ง€๋งŒ(๋‹จ, openAI API ํ‚ค๊ฐ€ ํ•„์š”ํ•˜๋ฏ€๋กœ ๋ฌด๋ฃŒ๋กœ ์‚ฌ์šฉํ•  ์ˆ˜ ์—†์Œ), Hugging Face๋Š” BigCode์™€ OpenAssistant ๋ชจ๋ธ์˜ ์—”๋“œํฌ์ธํŠธ์— ๋Œ€ํ•œ ๋ฌด๋ฃŒ ์•ก์„ธ์Šค๋ฅผ ์ œ๊ณตํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. 
์šฐ์„  ๋ชจ๋“  ๊ธฐ๋ณธ ์ข…์†์„ฑ์„ ์„ค์น˜ํ•˜๋ ค๋ฉด `agents`๋ฅผ ์ถ”๊ฐ€๋กœ ์„ค์น˜ํ•˜์„ธ์š”. ```bash pip install transformers[agents] ``` openAI ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋ ค๋ฉด `openai` ์ข…์†์„ฑ์„ ์„ค์น˜ํ•œ ํ›„ [`OpenAiAgent`]๋ฅผ ์ธ์Šคํ„ด์Šคํ™”ํ•ฉ๋‹ˆ๋‹ค: ```bash pip install openai ``` ```py from transformers import OpenAiAgent agent = OpenAiAgent(model="text-davinci-003", api_key="<your_api_key>") ``` BigCode ๋˜๋Š” OpenAssistant๋ฅผ ์‚ฌ์šฉํ•˜๋ ค๋ฉด ๋จผ์ € ๋กœ๊ทธ์ธํ•˜์—ฌ Inference API์— ์•ก์„ธ์Šคํ•˜์„ธ์š”: ```py from huggingface_hub import login login("<YOUR_TOKEN>") ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ์—์ด์ „ํŠธ๋ฅผ ์ธ์Šคํ„ด์Šคํ™”ํ•ฉ๋‹ˆ๋‹ค. ```py from transformers import HfAgent # Starcoder agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder") # StarcoderBase # agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoderbase") # OpenAssistant # agent = HfAgent(url_endpoint="https://api-inference.huggingface.co/models/OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5") ``` ํ˜„์žฌ Hugging Face์—์„œ ๋ฌด๋ฃŒ๋กœ ์ œ๊ณตํ•˜๋Š” ์ถ”๋ก  API๋ฅผ ์‚ฌ์šฉํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๋ชจ๋ธ์— ๋Œ€ํ•œ ์ž์ฒด ์ถ”๋ก  ์—”๋“œํฌ์ธํŠธ๊ฐ€ ์žˆ๋Š” ๊ฒฝ์šฐ(๋˜๋Š” ๋‹ค๋ฅธ ์—”๋“œํฌ์ธํŠธ๊ฐ€ ์žˆ๋Š” ๊ฒฝ์šฐ) ์œ„์˜ URL์„ ํ•ด๋‹น URL ์—”๋“œํฌ์ธํŠธ๋กœ ๋ฐ”๊ฟ€ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. <Tip> StarCoder์™€ OpenAssistant๋Š” ๋ฌด๋ฃŒ๋กœ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ ๊ฐ„๋‹จํ•œ ์ž‘์—…์—์„œ ๋†€๋ผ์šธ ์ •๋„๋กœ ์ž˜ ์ž‘๋™ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ๋” ๋ณต์žกํ•œ ํ”„๋กฌํ”„ํŠธ๋ฅผ ์ฒ˜๋ฆฌํ•  ๋•Œ๋Š” ์ฒดํฌํฌ์ธํŠธ๊ฐ€ ์ž˜ ์ž‘๋™ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋ฌธ์ œ๊ฐ€ ๋ฐœ์ƒํ•˜๋ฉด OpenAI ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ด ๋ณด์‹œ๊ธฐ ๋ฐ”๋ž๋‹ˆ๋‹ค. ์•„์‰ฝ๊ฒŒ๋„ ์˜คํ”ˆ์†Œ์Šค๋Š” ์•„๋‹ˆ์ง€๋งŒ ํ˜„์žฌ๋กœ์„œ๋Š” ๋” ๋‚˜์€ ์„ฑ๋Šฅ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. </Tip> ์ด์ œ ์ค€๋น„๊ฐ€ ์™„๋ฃŒ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! ์ด์ œ ์ž์œ ๋กญ๊ฒŒ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ๋‘ ๊ฐ€์ง€ API์— ๋Œ€ํ•ด ์ž์„ธํžˆ ์•Œ์•„๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. 
### ๋‹จ์ผ ์‹คํ–‰ (run) [[single-execution-(run)]] ๋‹จ์ผ ์‹คํ–‰ ๋ฐฉ๋ฒ•์€ ์—์ด์ „ํŠธ์˜ [`~Agent.run`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ์ž…๋‹ˆ๋‹ค: ```py agent.run("Draw me a picture of rivers and lakes.") ``` <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes.png" width=200> ์ˆ˜ํ–‰ํ•˜๋ ค๋Š” ์ž‘์—…์— ์ ํ•ฉํ•œ ๋„๊ตฌ๋ฅผ ์ž๋™์œผ๋กœ ์„ ํƒํ•˜์—ฌ ์ ์ ˆํ•˜๊ฒŒ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. ๋™์ผํ•œ ๋ช…๋ น์–ด์—์„œ ํ•˜๋‚˜ ๋˜๋Š” ์—ฌ๋Ÿฌ ๊ฐœ์˜ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค (๋‹ค๋งŒ, ๋ช…๋ น์–ด๊ฐ€ ๋ณต์žกํ• ์ˆ˜๋ก ์—์ด์ „ํŠธ๊ฐ€ ์‹คํŒจํ•  ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์•„์ง‘๋‹ˆ๋‹ค). ```py agent.run("Draw me a picture of the sea then transform the picture to add an island") ``` <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/sea_and_island.png" width=200> <br/> ๋ชจ๋“  [`~Agent.run`] ์ž‘์—…์€ ๋…๋ฆฝ์ ์ด๋ฏ€๋กœ ๋‹ค๋ฅธ ์ž‘์—…์œผ๋กœ ์—ฌ๋Ÿฌ ๋ฒˆ ์—ฐ์†ํ•ด์„œ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `agent`๋Š” ํฐ ์–ธ์–ด ๋ชจ๋ธ์ผ ๋ฟ์ด๋ฏ€๋กœ ํ”„๋กฌํ”„ํŠธ์— ์•ฝ๊ฐ„์˜ ๋ณ€ํ™”๋ฅผ ์ฃผ๋ฉด ์™„์ „ํžˆ ๋‹ค๋ฅธ ๊ฒฐ๊ณผ๊ฐ€ ๋‚˜์˜ฌ ์ˆ˜ ์žˆ๋‹ค๋Š” ์ ์— ์œ ์˜ํ•˜์„ธ์š”. ์ˆ˜ํ–‰ํ•˜๋ ค๋Š” ์ž‘์—…์„ ์ตœ๋Œ€ํ•œ ๋ช…ํ™•ํ•˜๊ฒŒ ์„ค๋ช…ํ•˜๋Š” ๊ฒƒ์ด ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ์ข‹์€ ํ”„๋กฌํ”„ํŠธ๋ฅผ ์ž‘์„ฑํ•˜๋Š” ๋ฐฉ๋ฒ•์€ [์—ฌ๊ธฐ](custom_tools#writing-good-user-inputs)์—์„œ ์ž์„ธํžˆ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์—ฌ๋Ÿฌ ์‹คํ–‰์— ๊ฑธ์ณ ์ƒํƒœ๋ฅผ ์œ ์ง€ํ•˜๊ฑฐ๋‚˜ ํ…์ŠคํŠธ๊ฐ€ ์•„๋‹Œ ๊ฐœ์ฒด๋ฅผ ์—์ด์ „ํŠธ์—๊ฒŒ ์ „๋‹ฌํ•˜๋ ค๋Š” ๊ฒฝ์šฐ์—๋Š” ์—์ด์ „ํŠธ๊ฐ€ ์‚ฌ์šฉํ•  ๋ณ€์ˆ˜๋ฅผ ์ง€์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
์˜ˆ๋ฅผ ๋“ค์–ด ๊ฐ•๊ณผ ํ˜ธ์ˆ˜์˜ ์ฒซ ๋ฒˆ์งธ ์ด๋ฏธ์ง€๋ฅผ ์ƒ์„ฑํ•œ ๋’ค, ๋ชจ๋ธ์ด ํ•ด๋‹น ๊ทธ๋ฆผ์— ์„ฌ์„ ์ถ”๊ฐ€ํ•˜๋„๋ก ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์š”์ฒญํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```python picture = agent.run("Generate a picture of rivers and lakes.") updated_picture = agent.run("Transform the image in `picture` to add an island to it.", picture=picture) ``` <Tip> ์ด ๋ฐฉ๋ฒ•์€ ๋ชจ๋ธ์ด ์š”์ฒญ์„ ์ดํ•ดํ•˜์ง€ ๋ชปํ•˜๊ณ  ๋„๊ตฌ๋ฅผ ํ˜ผํ•ฉํ•  ๋•Œ ์œ ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```py agent.run("Draw me the picture of a capybara swimming in the sea") ``` ์—ฌ๊ธฐ์„œ ๋ชจ๋ธ์€ ๋‘ ๊ฐ€์ง€ ๋ฐฉ์‹์œผ๋กœ ํ•ด์„ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: - `text-to-image`์ด ๋ฐ”๋‹ค์—์„œ ํ—ค์—„์น˜๋Š” ์นดํ”ผ๋ฐ”๋ผ๋ฅผ ์ƒ์„ฑํ•˜๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. - ๋˜๋Š” `text-to-image`์ด ์นดํ”ผ๋ฐ”๋ผ๋ฅผ ์ƒ์„ฑํ•œ ๋‹ค์Œ `image-transformation` ๋„๊ตฌ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ”๋‹ค์—์„œ ํ—ค์—„์น˜๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. ์ฒซ ๋ฒˆ์งธ ์‹œ๋‚˜๋ฆฌ์˜ค๋ฅผ ๊ฐ•์ œ๋กœ ์‹คํ–‰ํ•˜๋ ค๋ฉด ํ”„๋กฌํ”„ํŠธ๋ฅผ ์ธ์ˆ˜๋กœ ์ „๋‹ฌํ•˜์—ฌ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py agent.run("Draw me a picture of the `prompt`", prompt="a capybara swimming in the sea") ``` </Tip> ### ๋Œ€ํ™” ๊ธฐ๋ฐ˜ ์‹คํ–‰ (chat) [[chat-based-execution-(chat)]] ์—์ด์ „ํŠธ๋Š” [`~Agent.chat`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๋Œ€ํ™” ๊ธฐ๋ฐ˜ ์ ‘๊ทผ ๋ฐฉ์‹๋„ ์žˆ์Šต๋‹ˆ๋‹ค: ```py agent.chat("Generate a picture of rivers and lakes") ``` <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes.png" width=200> ```py agent.chat("Transform the picture so that there is a rock in there") ``` <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes_and_beaver.png" width=200> <br/> ์ด ๋ฐฉ์‹์€ ์—ฌ๋Ÿฌ ๋ช…๋ น์–ด์— ๊ฑธ์ณ ์ƒํƒœ๋ฅผ ์œ ์ง€ํ•˜๊ณ ์ž ํ•  ๋•Œ ํฅ๋ฏธ๋กœ์šด ์ ‘๊ทผ ๋ฐฉ์‹์ž…๋‹ˆ๋‹ค. 
์‹คํ—˜์šฉ์œผ๋กœ ๋” ์ข‹์ง€๋งŒ ๋ณต์žกํ•œ ๋ช…๋ น์–ด๋ณด๋‹ค๋Š” ๋‹จ์ผ ๋ช…๋ น์–ด([`~Agent.run`] ๋ฉ”์†Œ๋“œ๊ฐ€ ๋” ์ž˜ ์ฒ˜๋ฆฌํ•˜๋Š” ๋ช…๋ น์–ด)์— ํ›จ์”ฌ ๋” ์ž˜ ์ž‘๋™ํ•˜๋Š” ๊ฒฝํ–ฅ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๋ฉ”์†Œ๋“œ๋Š” ํ…์ŠคํŠธ๊ฐ€ ์•„๋‹Œ ์œ ํ˜•์ด๋‚˜ ํŠน์ • ํ”„๋กฌํ”„ํŠธ๋ฅผ ์ „๋‹ฌํ•˜๋ ค๋Š” ๊ฒฝ์šฐ ์ธ์ˆ˜๋ฅผ ๋ฐ›์„ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ### โš ๏ธ ์›๊ฒฉ ์‹คํ–‰ [[remote-execution]] ๋ฐ๋ชจ ๋ชฉ์ ๊ณผ ๋ชจ๋“  ์„ค์ •์—์„œ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก ์—์ด์ „ํŠธ๊ฐ€ ์ ‘๊ทผํ•  ์ˆ˜ ์žˆ๋Š” ๋ช‡ ๊ฐ€์ง€ ๊ธฐ๋ณธ ๋„๊ตฌ์— ๋Œ€ํ•œ ์›๊ฒฉ ์‹คํ–‰๊ธฐ๋ฅผ ๋งŒ๋“ค์—ˆ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋„๊ตฌ๋Š” [inference endpoints](https://huggingface.co/inference-endpoints)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋งŒ๋“ค์–ด์กŒ์Šต๋‹ˆ๋‹ค. ์›๊ฒฉ ์‹คํ–‰๊ธฐ ๋„๊ตฌ๋ฅผ ์ง์ ‘ ์„ค์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ณด๋ ค๋ฉด [์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ ๊ฐ€์ด๋“œ](./custom_tools)๋ฅผ ์ฝ์–ด๋ณด์‹œ๊ธฐ ๋ฐ”๋ž๋‹ˆ๋‹ค. ์›๊ฒฉ ๋„๊ตฌ๋กœ ์‹คํ–‰ํ•˜๋ ค๋ฉด [`~Agent.run`] ๋˜๋Š” [`~Agent.chat`] ์ค‘ ํ•˜๋‚˜์— `remote=True`๋ฅผ ์ง€์ •ํ•˜๊ธฐ๋งŒ ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๋‹ค์Œ ๋ช…๋ น์€ ๋งŽ์€ RAM์ด๋‚˜ GPU ์—†์ด๋„ ๋ชจ๋“  ์žฅ์น˜์—์„œ ํšจ์œจ์ ์œผ๋กœ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py agent.run("Draw me a picture of rivers and lakes", remote=True) ``` [`~Agent.chat`]๋„ ๋งˆ์ฐฌ๊ฐ€์ง€์ž…๋‹ˆ๋‹ค: ```py agent.chat("Draw me a picture of rivers and lakes", remote=True) ``` ### ์—ฌ๊ธฐ์„œ ๋ฌด์Šจ ์ผ์ด ์ผ์–ด๋‚˜๋Š” ๊ฑฐ์ฃ ? ๋„๊ตฌ๋ž€ ๋ฌด์—‡์ด๊ณ , ์—์ด์ „ํŠธ๋ž€ ๋ฌด์—‡์ธ๊ฐ€์š”? [[whats-happening-here-what-are-tools-and-what-are-agents]] <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/diagram.png"> #### ์—์ด์ „ํŠธ [[agents]] ์—ฌ๊ธฐ์„œ "์—์ด์ „ํŠธ"๋Š” ๋Œ€๊ทœ๋ชจ ์–ธ์–ด ๋ชจ๋ธ์ด๋ฉฐ, ํŠน์ • ๋„๊ตฌ ๋ชจ์Œ์— ์ ‘๊ทผํ•  ์ˆ˜ ์žˆ๋„๋ก ํ”„๋กฌํ”„ํŠธํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. LLM์€ ์ž‘์€ ์ฝ”๋“œ ์ƒ˜ํ”Œ์„ ์ƒ์„ฑํ•˜๋Š” ๋ฐ ์ƒ๋‹นํžˆ ๋Šฅ์ˆ™ํ•˜๋ฏ€๋กœ, ์ด ์žฅ์ ์„ ํ™œ์šฉํ•ด ๋„๊ตฌ ๋ชจ์Œ์„ ์‚ฌ์šฉํ•˜์—ฌ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•˜๋Š” ์ž‘์€ ์ฝ”๋“œ ์ƒ˜ํ”Œ์„ ์ œ๊ณตํ•˜๋ผ๋Š” ๋ฉ”์‹œ์ง€๋ฅผ ํ‘œ์‹œํ•ฉ๋‹ˆ๋‹ค. 
๊ทธ๋Ÿฐ ๋‹ค์Œ ์—์ด์ „ํŠธ์—๊ฒŒ ์ œ๊ณตํ•˜๋Š” ์ž‘์—…๊ณผ ์ œ๊ณตํ•˜๋Š” ๋„๊ตฌ์— ๋Œ€ํ•œ ์„ค๋ช…์œผ๋กœ ์ด ํ”„๋กฌํ”„ํŠธ๊ฐ€ ์™„๋ฃŒ๋ฉ๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ์‚ฌ์šฉ ์ค‘์ธ ๋„๊ตฌ๋“ค์˜ ๋ฌธ์„œ์— ์ ‘๊ทผํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ, ํ•ด๋‹น ๋„๊ตฌ๋“ค์˜ ์ž…๋ ฅ๊ณผ ์ถœ๋ ฅ์„ ์˜ˆ์ƒํ•˜๊ณ , ๊ด€๋ จ๋œ ์ฝ”๋“œ๋ฅผ ์ƒ์„ฑํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. #### ๋„๊ตฌ [[tools]] ๋„๊ตฌ๋Š” ๋งค์šฐ ๊ฐ„๋‹จํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฆ„๊ณผ ์„ค๋ช…์ด ์žˆ๋Š” ๋‹จ์ผ ๊ธฐ๋Šฅ์œผ๋กœ ๊ตฌ์„ฑ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฐ ๋‹ค์Œ ์ด๋Ÿฌํ•œ ๋„๊ตฌ์˜ ์„ค๋ช…์„ ์‚ฌ์šฉํ•˜์—ฌ ์ƒ๋‹ด์›์—๊ฒŒ ํ”„๋กฌํ”„ํŠธ๋ฅผ ํ‘œ์‹œํ•ฉ๋‹ˆ๋‹ค. ์ด ํ”„๋กฌํ”„ํŠธ๋ฅผ ํ†ตํ•ด ์ƒ๋‹ด์›์—๊ฒŒ ์ฟผ๋ฆฌ์—์„œ ์š”์ฒญ๋œ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•˜๊ธฐ ์œ„ํ•ด ๋„๊ตฌ๋ฅผ ํ™œ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค. ์—์ด์ „ํŠธ๊ฐ€ ๋งค์šฐ ์›์ž์ ์ธ ๋„๊ตฌ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋” ๋‚˜์€ ์ฝ”๋“œ๋ฅผ ์ž‘์„ฑํ•˜๊ธฐ ๋•Œ๋ฌธ์— ํŒŒ์ดํ”„๋ผ์ธ์ด ์•„๋‹Œ ์™„์ „ํžˆ ์ƒˆ๋กœ์šด ๋„๊ตฌ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ํŒŒ์ดํ”„๋ผ์ธ์€ ๋” ๋งŽ์ด ๋ฆฌํŒฉํ„ฐ๋ง๋˜๋ฉฐ ์ข…์ข… ์—ฌ๋Ÿฌ ์ž‘์—…์„ ํ•˜๋‚˜๋กœ ๊ฒฐํ•ฉํ•ฉ๋‹ˆ๋‹ค. ๋„๊ตฌ๋Š” ํ•˜๋‚˜์˜ ๋งค์šฐ ๊ฐ„๋‹จํ•œ ์ž‘์—…์—๋งŒ ์ง‘์ค‘ํ•˜๋„๋ก ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. #### ์ฝ”๋“œ ์‹คํ–‰?! [[code-execution]] ๊ทธ๋Ÿฐ ๋‹ค์Œ ์ด ์ฝ”๋“œ๋Š” ๋„๊ตฌ์™€ ํ•จ๊ป˜ ์ „๋‹ฌ๋œ ์ž…๋ ฅ ์„ธํŠธ์— ๋Œ€ํ•ด ์ž‘์€ Python ์ธํ„ฐํ”„๋ฆฌํ„ฐ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์‹คํ–‰๋ฉ๋‹ˆ๋‹ค. "์ž„์˜ ์ฝ”๋“œ ์‹คํ–‰์ด๋ผ๋‹ˆ!"์ด๋ผ๊ณ  ๋น„๋ช…์„ ์ง€๋ฅด๋Š” ์†Œ๋ฆฌ๊ฐ€ ๋“ค๋ฆฌ๊ฒ ์ง€๋งŒ, ๊ทธ๋ ‡์ง€ ์•Š์€ ์ด์œ ๋ฅผ ์„ค๋ช…ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ํ˜ธ์ถœํ•  ์ˆ˜ ์žˆ๋Š” ํ•จ์ˆ˜๋Š” ์ œ๊ณตํ•œ ๋„๊ตฌ์™€ ์ธ์‡„ ๊ธฐ๋Šฅ๋ฟ์ด๋ฏ€๋กœ ์ด๋ฏธ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ๋Š” ๊ธฐ๋Šฅ์ด ์ œํ•œ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. Hugging Face ๋„๊ตฌ๋กœ ์ œํ•œ๋˜์–ด ์žˆ๋‹ค๋ฉด ์•ˆ์ „ํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ์–ดํŠธ๋ฆฌ๋ทฐํŠธ ์กฐํšŒ๋‚˜ ๊ฐ€์ ธ์˜ค๊ธฐ๋ฅผ ํ—ˆ์šฉํ•˜์ง€ ์•Š์œผ๋ฏ€๋กœ (์–ด์ฐจํ”ผ ์ž‘์€ ํ•จ์ˆ˜ ์ง‘ํ•ฉ์— ์ž…/์ถœ๋ ฅ์„ ์ „๋‹ฌํ•  ๋•Œ๋Š” ํ•„์š”ํ•˜์ง€ ์•Š์•„์•ผ ํ•ฉ๋‹ˆ๋‹ค) ๊ฐ€์žฅ ๋ช…๋ฐฑํ•œ ๊ณต๊ฒฉ(์–ด์ฐจํ”ผ LLM์— ์ถœ๋ ฅํ•˜๋ผ๋Š” ๋ฉ”์‹œ์ง€๋ฅผ ํ‘œ์‹œํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค)์€ ๋ฌธ์ œ๊ฐ€ ๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. 
๋งค์šฐ ์•ˆ์ „ํ•˜๊ฒŒ ํ•˜๊ณ  ์‹ถ๋‹ค๋ฉด ์ถ”๊ฐ€ ์ธ์ˆ˜ return_code=True๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ run() ๋ฉ”์†Œ๋“œ๋ฅผ ์‹คํ–‰ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ์ด ๊ฒฝ์šฐ ์—์ด์ „ํŠธ๊ฐ€ ์‹คํ–‰ํ•  ์ฝ”๋“œ๋ฅผ ๋ฐ˜ํ™˜ํ•˜๊ณ  ์‹คํ–‰ํ• ์ง€ ์—ฌ๋ถ€๋ฅผ ๊ฒฐ์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ถˆ๋ฒ•์ ์ธ ์—ฐ์‚ฐ์„ ์ˆ˜ํ–‰ํ•˜๋ ค๊ณ  ํ•˜๊ฑฐ๋‚˜ ์—์ด์ „ํŠธ๊ฐ€ ์ƒ์„ฑํ•œ ์ฝ”๋“œ์— ์ผ๋ฐ˜์ ์ธ ํŒŒ์ด์ฌ ์˜ค๋ฅ˜๊ฐ€ ์žˆ๋Š” ๊ฒฝ์šฐ ์‹คํ–‰์ด ์ค‘์ง€๋ฉ๋‹ˆ๋‹ค. ### ์—„์„ ๋œ ๋„๊ตฌ ๋ชจ์Œ [[a-curated-set-of-tools]] ์ €ํฌ๋Š” ์ด๋Ÿฌํ•œ ์—์ด์ „ํŠธ๋“ค์˜ ์—ญ๋Ÿ‰์„ ๊ฐ•ํ™”ํ•  ์ˆ˜ ์žˆ๋Š” ์ผ๋ จ์˜ ๋„๊ตฌ๋ฅผ ํ™•์ธํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ์€ ์—ฐ๋™๋œ ๋„๊ตฌ์˜ ์ตœ์‹  ๋ชฉ๋ก์ž…๋‹ˆ๋‹ค: - **๋ฌธ์„œ ์งˆ๋ฌธ ๋‹ต๋ณ€**: ์ด๋ฏธ์ง€ ํ˜•์‹์˜ ๋ฌธ์„œ(์˜ˆ: PDF)๊ฐ€ ์ฃผ์–ด์ง€๋ฉด ์ด ๋ฌธ์„œ์— ๋Œ€ํ•œ ์งˆ๋ฌธ์— ๋‹ต๋ณ€ํ•ฉ๋‹ˆ๋‹ค. ([Donut](./model_doc/donut)) - **ํ…์ŠคํŠธ ์งˆ๋ฌธ ๋‹ต๋ณ€**: ๊ธด ํ…์ŠคํŠธ์™€ ์งˆ๋ฌธ์ด ์ฃผ์–ด์ง€๋ฉด ํ…์ŠคํŠธ์—์„œ ์งˆ๋ฌธ์— ๋‹ต๋ณ€ํ•ฉ๋‹ˆ๋‹ค. ([Flan-T5](./model_doc/flan-t5)) - **๋ฌด์กฐ๊ฑด ์ด๋ฏธ์ง€ ์บก์…”๋‹**: ์ด๋ฏธ์ง€์— ์บก์…˜์„ ๋‹ต๋‹ˆ๋‹ค! ([BLIP](./model_doc/blip)) - **์ด๋ฏธ์ง€ ์งˆ๋ฌธ ๋‹ต๋ณ€**: ์ด๋ฏธ์ง€๊ฐ€ ์ฃผ์–ด์ง€๋ฉด ์ด ์ด๋ฏธ์ง€์— ๋Œ€ํ•œ ์งˆ๋ฌธ์— ๋‹ต๋ณ€ํ•˜๊ธฐ. ([VILT](./model_doc/vilt)) - **์ด๋ฏธ์ง€ ๋ถ„ํ• **: ์ด๋ฏธ์ง€์™€ ํ”„๋กฌํ”„ํŠธ๊ฐ€ ์ฃผ์–ด์ง€๋ฉด ํ•ด๋‹น ํ”„๋กฌํ”„ํŠธ์˜ ๋ถ„ํ•  ๋งˆ์Šคํฌ๋ฅผ ์ถœ๋ ฅํ•ฉ๋‹ˆ๋‹ค. ([CLIPSeg](./model_doc/clipseg)) - **์Œ์„ฑ์„ ํ…์ŠคํŠธ๋กœ ๋ณ€ํ™˜**: ์‚ฌ๋žŒ์ด ๋งํ•˜๋Š” ์˜ค๋””์˜ค ๋…น์Œ์ด ์ฃผ์–ด์ง€๋ฉด ์Œ์„ฑ์„ ํ…์ŠคํŠธ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ([Whisper](./model_doc/whisper)) - **ํ…์ŠคํŠธ ์Œ์„ฑ ๋ณ€ํ™˜**: ํ…์ŠคํŠธ๋ฅผ ์Œ์„ฑ์œผ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ([SpeechT5](./model_doc/speecht5)) - **์ œ๋กœ ์ƒท(zero-shot) ํ…์ŠคํŠธ ๋ถ„๋ฅ˜**: ํ…์ŠคํŠธ์™€ ๋ ˆ์ด๋ธ” ๋ชฉ๋ก์ด ์ฃผ์–ด์ง€๋ฉด ํ…์ŠคํŠธ์™€ ๊ฐ€์žฅ ๊ด€๋ จ ์žˆ๋Š” ๋ ˆ์ด๋ธ”์„ ์‹๋ณ„ํ•ฉ๋‹ˆ๋‹ค. ([BART](./model_doc/bart)) - **ํ…์ŠคํŠธ ์š”์•ฝ**: ๊ธด ํ…์ŠคํŠธ๋ฅผ ํ•œ ๋ฌธ์žฅ ๋˜๋Š” ๋ช‡ ๋ฌธ์žฅ์œผ๋กœ ์š”์•ฝํ•ฉ๋‹ˆ๋‹ค. ([BART](./model_doc/bart)) - **๋ฒˆ์—ญ**: ํ…์ŠคํŠธ๋ฅผ ์ง€์ •๋œ ์–ธ์–ด๋กœ ๋ฒˆ์—ญํ•ฉ๋‹ˆ๋‹ค. 
([NLLB](./model_doc/nllb))

์ด๋Ÿฌํ•œ ๋„๊ตฌ๋Š” ํŠธ๋žœ์Šคํฌ๋จธ์— ํ†ตํ•ฉ๋˜์–ด ์žˆ์œผ๋ฉฐ, ์˜ˆ๋ฅผ ๋“ค์–ด ์ˆ˜๋™์œผ๋กœ๋„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค:

```py
from transformers import load_tool

tool = load_tool("text-to-speech")
audio = tool("This is a text to speech tool")
```

### ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ [[custom-tools]]

์—„์„ ๋œ ๋„๊ตฌ ์„ธํŠธ๋„ ์žˆ์ง€๋งŒ, ์ด ๊ตฌํ˜„์ด ์ œ๊ณตํ•˜๋Š” ๊ฐ€์žฅ ํฐ ๊ฐ€์น˜๋Š” ์‚ฌ์šฉ์ž ์ง€์ • ๋„๊ตฌ๋ฅผ ๋น ๋ฅด๊ฒŒ ๋งŒ๋“ค๊ณ  ๊ณต์œ ํ•  ์ˆ˜ ์žˆ๋‹ค๋Š” ์ ์ž…๋‹ˆ๋‹ค. ๋„๊ตฌ์˜ ์ฝ”๋“œ๋ฅผ Hugging Face Space๋‚˜ ๋ชจ๋ธ ์ €์žฅ์†Œ์— ํ‘ธ์‹œํ•˜๋ฉด ์—์ด์ „ํŠธ์—๊ฒŒ ์ง์ ‘ ๋„๊ตฌ๋ฅผ ํ™œ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [`huggingface-tools` organization](https://huggingface.co/huggingface-tools)์— ๋ช‡ ๊ฐ€์ง€ **ํŠธ๋žœ์Šคํฌ๋จธ์— ๊ตฌ์• ๋ฐ›์ง€ ์•Š๋Š”** ํˆด์„ ์ถ”๊ฐ€ํ–ˆ์Šต๋‹ˆ๋‹ค:

- **ํ…์ŠคํŠธ ๋‹ค์šด๋กœ๋”**: ์›น URL์—์„œ ํ…์ŠคํŠธ๋ฅผ ๋‹ค์šด๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค.
- **ํ…์ŠคํŠธ ์ด๋ฏธ์ง€ ๋ณ€ํ™˜**: ํ”„๋กฌํ”„ํŠธ์— ๋”ฐ๋ผ ์ด๋ฏธ์ง€๋ฅผ ์ƒ์„ฑํ•˜๋ฉฐ, Stable Diffusion์„ ํ™œ์šฉํ•ฉ๋‹ˆ๋‹ค.
- **์ด๋ฏธ์ง€ ๋ณ€ํ™˜**: ์ดˆ๊ธฐ ์ด๋ฏธ์ง€์™€ ํ”„๋กฌํ”„ํŠธ๊ฐ€ ์ฃผ์–ด์ง€๋ฉด ์ด๋ฏธ์ง€๋ฅผ ์ˆ˜์ •ํ•˜๋ฉฐ, Stable Diffusion ๊ธฐ๋ฐ˜์˜ instruct pix2pix๋ฅผ ํ™œ์šฉํ•ฉ๋‹ˆ๋‹ค.
- **ํ…์ŠคํŠธ ๋น„๋””์˜ค ๋ณ€ํ™˜**: ํ”„๋กฌํ”„ํŠธ์— ๋”ฐ๋ผ ์ž‘์€ ๋น„๋””์˜ค๋ฅผ ์ƒ์„ฑํ•˜๋ฉฐ, damo-vilab์„ ํ™œ์šฉํ•ฉ๋‹ˆ๋‹ค.

์ €ํฌ๊ฐ€ ์ฒ˜์Œ๋ถ€ํ„ฐ ์‚ฌ์šฉํ•˜๊ณ  ์žˆ๋Š” ํ…์ŠคํŠธ-์ด๋ฏธ์ง€ ๋ณ€ํ™˜ ๋„๊ตฌ๋Š” [*huggingface-tools/text-to-image*](https://huggingface.co/spaces/huggingface-tools/text-to-image)์— ์žˆ๋Š” ์›๊ฒฉ ๋„๊ตฌ์ž…๋‹ˆ๋‹ค! ์ €ํฌ๋Š” ์ด ๋„๊ตฌ์™€ ๋‹ค๋ฅธ ์กฐ์ง์— ์ด๋Ÿฌํ•œ ๋„๊ตฌ๋ฅผ ๊ณ„์† ์ถœ์‹œํ•˜์—ฌ ์ด ๊ตฌํ˜„์„ ๋”์šฑ ๊ฐ•ํ™”ํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค.

์—์ด์ „ํŠธ๋Š” ๊ธฐ๋ณธ์ ์œผ๋กœ [`huggingface-tools`](https://huggingface.co/huggingface-tools)์— ์žˆ๋Š” ๋„๊ตฌ์— ์ ‘๊ทผํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [๋‹ค์Œ ๊ฐ€์ด๋“œ](custom_tools)์—์„œ ๋„๊ตฌ๋ฅผ ์ž‘์„ฑํ•˜๊ณ  ๊ณต์œ ํ•˜๋Š” ๋ฐฉ๋ฒ•๊ณผ Hub์— ์žˆ๋Š” ์‚ฌ์šฉ์ž ์ง€์ • ๋„๊ตฌ๋ฅผ ํ™œ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค.
### ์ฝ”๋“œ ์ƒ์„ฑ[[code-generation]] ์ง€๊ธˆ๊นŒ์ง€ ์—์ด์ „ํŠธ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ณด์—ฌ๋“œ๋ ธ์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์—์ด์ „ํŠธ๋Š” ๋งค์šฐ ์ œํ•œ๋œ Python ์ธํ„ฐํ”„๋ฆฌํ„ฐ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์‹คํ–‰ํ•  ์ฝ”๋“œ๋งŒ ์ƒ์„ฑํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค๋ฅธ ์„ค์ •์—์„œ ์ƒ์„ฑ๋œ ์ฝ”๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๋ ค๋Š” ๊ฒฝ์šฐ ์—์ด์ „ํŠธ์—๊ฒŒ ๋„๊ตฌ ์ •์˜ ๋ฐ ์ •ํ™•ํ•œ ๊ฐ€์ ธ์˜ค๊ธฐ์™€ ํ•จ๊ป˜ ์ฝ”๋“œ๋ฅผ ๋ฐ˜ํ™˜ํ•˜๋ผ๋Š” ๋ฉ”์‹œ์ง€๋ฅผ ํ‘œ์‹œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๋‹ค์Œ ๋ช…๋ น์–ด๋Š” ```python agent.run("Draw me a picture of rivers and lakes", return_code=True) ``` ๋‹ค์Œ ์ฝ”๋“œ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ```python from transformers import load_tool image_generator = load_tool("huggingface-tools/text-to-image") image = image_generator(prompt="rivers and lakes") ``` ์ด ์ฝ”๋“œ๋Š” ์ง์ ‘ ์ˆ˜์ •ํ•˜๊ณ  ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# TensorFlow๋กœ TPU์—์„œ ํ›ˆ๋ จํ•˜๊ธฐ[[training-on-tpu-with-tensorflow]]

<Tip>

์ž์„ธํ•œ ์„ค๋ช…์ด ํ•„์š”ํ•˜์ง€ ์•Š๊ณ  ๋ฐ”๋กœ TPU ์ƒ˜ํ”Œ ์ฝ”๋“œ๋ฅผ ์‹œ์ž‘ํ•˜๊ณ  ์‹ถ๋‹ค๋ฉด [์šฐ๋ฆฌ์˜ TPU ์˜ˆ์ œ ๋…ธํŠธ๋ถ!](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb)์„ ํ™•์ธํ•˜์„ธ์š”.

</Tip>

### TPU๊ฐ€ ๋ฌด์—‡์ธ๊ฐ€์š”?[[what-is-a-tpu]]

TPU๋Š” **ํ…์„œ ์ฒ˜๋ฆฌ ์žฅ์น˜**์ž…๋‹ˆ๋‹ค. Google์—์„œ ์„ค๊ณ„ํ•œ ํ•˜๋“œ์›จ์–ด๋กœ, GPU์ฒ˜๋Ÿผ ์‹ ๊ฒฝ๋ง ๋‚ด์—์„œ ํ…์„œ ์—ฐ์‚ฐ์„ ๋”์šฑ ๋น ๋ฅด๊ฒŒ ์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ๋„คํŠธ์›Œํฌ ํ›ˆ๋ จ๊ณผ ์ถ”๋ก  ๋ชจ๋‘์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์œผ๋กœ Google์˜ ํด๋ผ์šฐ๋“œ ์„œ๋น„์Šค๋ฅผ ํ†ตํ•ด ์ด์šฉํ•  ์ˆ˜ ์žˆ์ง€๋งŒ, Google Colab๊ณผ Kaggle Kernel์„ ํ†ตํ•ด ์†Œ๊ทœ๋ชจ TPU๋ฅผ ๋ฌด๋ฃŒ๋กœ ์ง์ ‘ ์ด์šฉํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค.

[๐Ÿค— Transformers์˜ ๋ชจ๋“  Tensorflow ๋ชจ๋ธ์€ Keras ๋ชจ๋ธ](https://huggingface.co/blog/tensorflow-philosophy)์ด๊ธฐ ๋•Œ๋ฌธ์—, ์ด ๋ฌธ์„œ์—์„œ ๋‹ค๋ฃจ๋Š” ๋Œ€๋ถ€๋ถ„์˜ ๋ฉ”์†Œ๋“œ๋Š” ๋Œ€์ฒด๋กœ ๋ชจ๋“  Keras ๋ชจ๋ธ์„ ์œ„ํ•œ TPU ํ›ˆ๋ จ์— ์ ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ํ•˜์ง€๋งŒ Transformer์™€ ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ HuggingFace ์ƒํƒœ๊ณ„(hug-o-system?)์— ํŠนํ™”๋œ ๋ช‡ ๊ฐ€์ง€ ์‚ฌํ•ญ์ด ์žˆ์œผ๋ฉฐ, ํ•ด๋‹น ์‚ฌํ•ญ์— ๋Œ€ํ•ด ์„ค๋ช…ํ•  ๋•Œ ๋ฐ˜๋“œ์‹œ ์–ธ๊ธ‰ํ•˜๋„๋ก ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค.
### ์–ด๋–ค ์ข…๋ฅ˜์˜ TPU๊ฐ€ ์žˆ๋‚˜์š”?[[what-kinds-of-tpu-are-available]] ์‹ ๊ทœ ์‚ฌ์šฉ์ž๋Š” TPU์˜ ๋ฒ”์œ„์™€ ๋‹ค์–‘ํ•œ ์ด์šฉ ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด ๋งค์šฐ ํ˜ผ๋ž€์Šค๋Ÿฌ์›Œํ•˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์Šต๋‹ˆ๋‹ค. **TPU ๋…ธ๋“œ**์™€ **TPU VM**์˜ ์ฐจ์ด์ ์€ ๊ฐ€์žฅ ๋จผ์ € ์ดํ•ดํ•ด์•ผ ํ•  ํ•ต์‹ฌ์ ์ธ ๊ตฌ๋ถ„ ์‚ฌํ•ญ์ž…๋‹ˆ๋‹ค. **TPU ๋…ธ๋“œ**๋ฅผ ์‚ฌ์šฉํ•œ๋‹ค๋ฉด, ์‹ค์ œ๋กœ๋Š” ์›๊ฒฉ TPU๋ฅผ ๊ฐ„์ ‘์ ์œผ๋กœ ์ด์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋„คํŠธ์›Œํฌ์™€ ๋ฐ์ดํ„ฐ ํŒŒ์ดํ”„๋ผ์ธ์„ ์ดˆ๊ธฐํ™”ํ•œ ๋‹ค์Œ, ์ด๋ฅผ ์›๊ฒฉ ๋…ธ๋“œ๋กœ ์ „๋‹ฌํ•  ๋ณ„๋„์˜ VM์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. Google Colab์—์„œ TPU๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ, **TPU ๋…ธ๋“œ** ๋ฐฉ์‹์œผ๋กœ ์ด์šฉํ•˜๊ฒŒ ๋ฉ๋‹ˆ๋‹ค. TPU ๋…ธ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์€ ์ด๋ฅผ ์‚ฌ์šฉํ•˜์ง€ ์•Š๋Š” ์‚ฌ์šฉ์ž์—๊ฒŒ ์˜ˆ๊ธฐ์น˜ ์•Š์€ ํ˜„์ƒ์ด ๋ฐœ์ƒํ•˜๊ธฐ๋„ ํ•ฉ๋‹ˆ๋‹ค! ํŠนํžˆ, TPU๋Š” ํŒŒ์ด์ฌ ์ฝ”๋“œ๋ฅผ ์‹คํ–‰ํ•˜๋Š” ๊ธฐ๊ธฐ(machine)์™€ ๋ฌผ๋ฆฌ์ ์œผ๋กœ ๋‹ค๋ฅธ ์‹œ์Šคํ…œ์— ์žˆ๊ธฐ ๋•Œ๋ฌธ์— ๋กœ์ปฌ ๊ธฐ๊ธฐ์— ๋ฐ์ดํ„ฐ๋ฅผ ์ €์žฅํ•  ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค. ์ฆ‰, ์ปดํ“จํ„ฐ์˜ ๋‚ด๋ถ€ ์ €์žฅ์†Œ์—์„œ ๊ฐ€์ ธ์˜ค๋Š” ๋ฐ์ดํ„ฐ ํŒŒ์ดํ”„๋ผ์ธ์€ ์ ˆ๋Œ€ ์ž‘๋™ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค! ๋กœ์ปฌ ๊ธฐ๊ธฐ์— ๋ฐ์ดํ„ฐ๋ฅผ ์ €์žฅํ•˜๋Š” ๋Œ€์‹ ์—, ๋ฐ์ดํ„ฐ ํŒŒ์ดํ”„๋ผ์ธ์ด ์›๊ฒฉ TPU ๋…ธ๋“œ์—์„œ ์‹คํ–‰ ์ค‘์ผ ๋•Œ์—๋„ ๋ฐ์ดํ„ฐ ํŒŒ์ดํ”„๋ผ์ธ์ด ๊ณ„์† ์ด์šฉํ•  ์ˆ˜ ์žˆ๋Š” Google Cloud Storage์— ๋ฐ์ดํ„ฐ๋ฅผ ์ €์žฅํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. <Tip> ๋ฉ”๋ชจ๋ฆฌ์— ์žˆ๋Š” ๋ชจ๋“  ๋ฐ์ดํ„ฐ๋ฅผ `np.ndarray` ๋˜๋Š” `tf.Tensor`๋กœ ๋งž์ถœ ์ˆ˜ ์žˆ๋‹ค๋ฉด, Google Cloud Storage์— ์—…๋กœ๋“œํ•  ํ•„์š” ์—†์ด, Colab ๋˜๋Š” TPU ๋…ธ๋“œ๋ฅผ ์‚ฌ์šฉํ•ด์„œ ํ•ด๋‹น ๋ฐ์ดํ„ฐ์— `fit()` ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. </Tip> <Tip> **๐Ÿค—ํŠน์ˆ˜ํ•œ Hugging Face ํŒ๐Ÿค—:** TF ์ฝ”๋“œ ์˜ˆ์ œ์—์„œ ๋ณผ ์ˆ˜ ์žˆ๋Š” `Dataset.to_tf_dataset()` ๋ฉ”์†Œ๋“œ์™€ ๊ทธ ์ƒ์œ„ ๋ž˜ํผ(wrapper)์ธ `model.prepare_tf_dataset()`๋Š” ๋ชจ๋‘ TPU ๋…ธ๋“œ์—์„œ ์ž‘๋™ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. 
๊ทธ ์ด์œ ๋Š” `tf.data.Dataset`์„ ์ƒ์„ฑํ•˜๋”๋ผ๋„ โ€œ์ˆœ์ˆ˜ํ•œโ€ `tf.data` ํŒŒ์ดํ”„๋ผ์ธ์ด ์•„๋‹ˆ๋ฉฐ `tf.numpy_function` ๋˜๋Š” `Dataset.from_generator()`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๊ธฐ๋ณธ HuggingFace `Dataset`์—์„œ ๋ฐ์ดํ„ฐ๋ฅผ ์ „์†กํ•˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ์ด HuggingFace `Dataset`๋Š” ๋กœ์ปฌ ๋””์Šคํฌ์— ์žˆ๋Š” ๋ฐ์ดํ„ฐ๋กœ ์ง€์›๋˜๋ฉฐ ์›๊ฒฉ TPU ๋…ธ๋“œ๊ฐ€ ์ฝ์„ ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค. </Tip> TPU๋ฅผ ์ด์šฉํ•˜๋Š” ๋‘ ๋ฒˆ์งธ ๋ฐฉ๋ฒ•์€ **TPU VM**์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. TPU VM์„ ์‚ฌ์šฉํ•  ๋•Œ, GPU VM์—์„œ ํ›ˆ๋ จํ•˜๋Š” ๊ฒƒ๊ณผ ๊ฐ™์ด TPU๊ฐ€ ์žฅ์ฐฉ๋œ ๊ธฐ๊ธฐ์— ์ง์ ‘ ์—ฐ๊ฒฐํ•ฉ๋‹ˆ๋‹ค. ํŠนํžˆ ๋ฐ์ดํ„ฐ ํŒŒ์ดํ”„๋ผ์ธ๊ณผ ๊ด€๋ จํ•˜์—ฌ, TPU VM์€ ๋Œ€์ฒด๋กœ ์ž‘์—…ํ•˜๊ธฐ ๋” ์‰ฝ์Šต๋‹ˆ๋‹ค. ์œ„์˜ ๋ชจ๋“  ๊ฒฝ๊ณ ๋Š” TPU VM์—๋Š” ํ•ด๋‹น๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค! ์ด ๋ฌธ์„œ๋Š” ์˜๊ฒฌ์ด ํฌํ•จ๋œ ๋ฌธ์„œ์ด๋ฉฐ, ์ €ํฌ์˜ ์˜๊ฒฌ์ด ์—ฌ๊ธฐ์— ์žˆ์Šต๋‹ˆ๋‹ค: **๊ฐ€๋Šฅํ•˜๋ฉด TPU ๋…ธ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์ง€ ๋งˆ์„ธ์š”.** TPU ๋…ธ๋“œ๋Š” TPU VM๋ณด๋‹ค ๋” ๋ณต์žกํ•˜๊ณ  ๋””๋ฒ„๊น…ํ•˜๊ธฐ๊ฐ€ ๋” ์–ด๋ ต์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ ํ–ฅํ›„์—๋Š” ์ง€์›๋˜์ง€ ์•Š์„ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์Šต๋‹ˆ๋‹ค. Google์˜ ์ตœ์‹  TPU์ธ TPUv4๋Š” TPU VM์œผ๋กœ๋งŒ ์ด์šฉํ•  ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ, TPU ๋…ธ๋“œ๋Š” ์ ์  ๋” "๊ตฌ์‹" ์ด์šฉ ๋ฐฉ๋ฒ•์ด ๋  ๊ฒƒ์œผ๋กœ ์ „๋ง๋ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ TPU ๋…ธ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” Colab๊ณผ Kaggle Kernel์—์„œ๋งŒ ๋ฌด๋ฃŒ TPU ์ด์šฉ์ด ๊ฐ€๋Šฅํ•œ ๊ฒƒ์œผ๋กœ ํ™•์ธ๋˜์–ด, ํ•„์š”ํ•œ ๊ฒฝ์šฐ ์ด๋ฅผ ๋‹ค๋ฃจ๋Š” ๋ฐฉ๋ฒ•์„ ์„ค๋ช…ํ•ด ๋“œ๋ฆฌ๊ฒ ์Šต๋‹ˆ๋‹ค! ์ด์— ๋Œ€ํ•œ ์ž์„ธํ•œ ์„ค๋ช…์ด ๋‹ด๊ธด ์ฝ”๋“œ ์ƒ˜ํ”Œ์€ [TPU ์˜ˆ์ œ ๋…ธํŠธ๋ถ](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb)์—์„œ ํ™•์ธํ•˜์‹œ๊ธฐ ๋ฐ”๋ž๋‹ˆ๋‹ค. ### ์–ด๋–ค ํฌ๊ธฐ์˜ TPU๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋‚˜์š”?[[what-sizes-of-tpu-are-available]] ๋‹จ์ผ TPU(v2-8/v3-8/v4-8)๋Š” 8๊ฐœ์˜ ๋ณต์ œ๋ณธ(replicas)์„ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. TPU๋Š” ์ˆ˜๋ฐฑ ๋˜๋Š” ์ˆ˜์ฒœ ๊ฐœ์˜ ๋ณต์ œ๋ณธ์„ ๋™์‹œ์— ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ๋Š” **pod**๋กœ ์กด์žฌํ•ฉ๋‹ˆ๋‹ค. 
๋‹จ์ผ TPU๋ฅผ ํ•˜๋‚˜ ์ด์ƒ ์‚ฌ์šฉํ•˜์ง€๋งŒ ์ „์ฒด Pod๋ณด๋‹ค ์ ๊ฒŒ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ(์˜ˆ๋ฅผ ๋“ค๋ฉด, v3-32), TPU ๊ตฌ์„ฑ์„ **pod ์Šฌ๋ผ์ด์Šค**๋ผ๊ณ  ํ•ฉ๋‹ˆ๋‹ค. Colab์„ ํ†ตํ•ด ๋ฌด๋ฃŒ TPU์— ์ด์šฉํ•˜๋Š” ๊ฒฝ์šฐ, ๊ธฐ๋ณธ์ ์œผ๋กœ ๋‹จ์ผ v2-8 TPU๋ฅผ ์ œ๊ณต๋ฐ›์Šต๋‹ˆ๋‹ค. ### XLA์— ๋Œ€ํ•ด ๋“ค์–ด๋ณธ ์ ์ด ์žˆ์Šต๋‹ˆ๋‹ค. XLA๋ž€ ๋ฌด์—‡์ด๊ณ  TPU์™€ ์–ด๋–ค ๊ด€๋ จ์ด ์žˆ๋‚˜์š”?[[i-keep-hearing-about-this-xla-thing-whats-xla-and-how-does-it-relate-to-tpus]] XLA๋Š” ์ตœ์ ํ™” ์ปดํŒŒ์ผ๋Ÿฌ๋กœ, TensorFlow์™€ JAX์—์„œ ๋ชจ๋‘ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. JAX์—์„œ๋Š” ์œ ์ผํ•œ ์ปดํŒŒ์ผ๋Ÿฌ์ด์ง€๋งŒ, TensorFlow์—์„œ๋Š” ์„ ํƒ ์‚ฌํ•ญ์ž…๋‹ˆ๋‹ค(ํ•˜์ง€๋งŒ TPU์—์„œ๋Š” ํ•„์ˆ˜์ž…๋‹ˆ๋‹ค!). Keras ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•  ๋•Œ ์ด๋ฅผ ํ™œ์„ฑํ™”ํ•˜๋Š” ๊ฐ€์žฅ ์‰ฌ์šด ๋ฐฉ๋ฒ•์€ `jit_compile=True` ์ธ์ˆ˜๋ฅผ `model.compile()`์— ์ „๋‹ฌํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์˜ค๋ฅ˜๊ฐ€ ์—†๊ณ  ์„ฑ๋Šฅ์ด ์–‘ํ˜ธํ•˜๋‹ค๋ฉด, TPU๋กœ ์ „ํ™˜ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ๋‹ค๋Š” ์ข‹์€ ์‹ ํ˜ธ์ž…๋‹ˆ๋‹ค! TPU์—์„œ ๋””๋ฒ„๊น…ํ•˜๋Š” ๊ฒƒ์€ ๋Œ€๊ฐœ CPU/GPU๋ณด๋‹ค ์กฐ๊ธˆ ๋” ์–ด๋ ต๊ธฐ ๋•Œ๋ฌธ์—, TPU์—์„œ ์‹œ๋„ํ•˜๊ธฐ ์ „์— ๋จผ์ € XLA๋กœ CPU/GPU์—์„œ ์ฝ”๋“œ๋ฅผ ์‹คํ–‰ํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ๋ฌผ๋ก  ์˜ค๋ž˜ ํ•™์Šตํ•  ํ•„์š”๋Š” ์—†์Šต๋‹ˆ๋‹ค. ์ฆ‰, ๋ชจ๋ธ๊ณผ ๋ฐ์ดํ„ฐ ํŒŒ์ดํ”„๋ผ์ธ์ด ์˜ˆ์ƒ๋Œ€๋กœ ์ž‘๋™ํ•˜๋Š”์ง€ ํ™•์ธํ•˜๊ธฐ ์œ„ํ•ด ๋ช‡ ๋‹จ๊ณ„๋งŒ ๊ฑฐ์น˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. <Tip> XLA๋กœ ์ปดํŒŒ์ผ๋œ ์ฝ”๋“œ๋Š” ๋Œ€์ฒด๋กœ ๋” ๋น ๋ฆ…๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ TPU์—์„œ ์‹คํ–‰ํ•  ๊ณ„ํš์ด ์—†๋”๋ผ๋„, `jit_compile=True`๋ฅผ ์ถ”๊ฐ€ํ•˜๋ฉด ์„ฑ๋Šฅ์ด ํ–ฅ์ƒ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ XLA ํ˜ธํ™˜์„ฑ์— ๋Œ€ํ•œ ์•„๋ž˜ ์ฃผ์˜ ์‚ฌํ•ญ์„ ๋ฐ˜๋“œ์‹œ ํ™•์ธํ•˜์„ธ์š”! </Tip> <Tip warning={true}> **๋ผˆ์•„ํ”ˆ ๊ฒฝํ—˜์—์„œ ์–ป์€ ํŒ:** `jit_compile=True`๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ์†๋„๋ฅผ ๋†’์ด๊ณ  CPU/GPU ์ฝ”๋“œ๊ฐ€ XLA์™€ ํ˜ธํ™˜๋˜๋Š”์ง€ ๊ฒ€์ฆํ•  ์ˆ˜ ์žˆ๋Š” ์ข‹์€ ๋ฐฉ๋ฒ•์ด์ง€๋งŒ, ์‹ค์ œ TPU์—์„œ ํ›ˆ๋ จํ•  ๋•Œ ๊ทธ๋Œ€๋กœ ๋‚จ๊ฒจ๋‘๋ฉด ๋งŽ์€ ๋ฌธ์ œ๋ฅผ ์ดˆ๋ž˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
XLA ์ปดํŒŒ์ผ์€ TPU์—์„œ ์•”์‹œ์ ์œผ๋กœ ์ด๋ค„์ง€๋ฏ€๋กœ, ์‹ค์ œ TPU์—์„œ ์ฝ”๋“œ๋ฅผ ์‹คํ–‰ํ•˜๊ธฐ ์ „์— ํ•ด๋‹น ์ค„์„ ์ œ๊ฑฐํ•˜๋Š” ๊ฒƒ์„ ์žŠ์ง€ ๋งˆ์„ธ์š”! </Tip> ### ์ œ XLA ๋ชจ๋ธ๊ณผ ํ˜ธํ™˜ํ•˜๋ ค๋ฉด ์–ด๋–ป๊ฒŒ ํ•ด์•ผ ํ•˜๋‚˜์š”?[[how-do-i-make-my-model-xla-compatible]] ๋Œ€๋ถ€๋ถ„์˜ ๊ฒฝ์šฐ, ์—ฌ๋Ÿฌ๋ถ„์˜ ์ฝ”๋“œ๋Š” ์ด๋ฏธ XLA์™€ ํ˜ธํ™˜๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค! ๊ทธ๋Ÿฌ๋‚˜ ํ‘œ์ค€ TensorFlow์—์„œ ์ž‘๋™ํ•˜์ง€๋งŒ, XLA์—์„œ๋Š” ์ž‘๋™ํ•˜์ง€ ์•Š๋Š” ๋ช‡ ๊ฐ€์ง€ ์‚ฌํ•ญ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฅผ ์•„๋ž˜ ์„ธ ๊ฐ€์ง€ ํ•ต์‹ฌ ๊ทœ์น™์œผ๋กœ ๊ฐ„์ถ”๋ ธ์Šต๋‹ˆ๋‹ค: <Tip> **ํŠน์ˆ˜ํ•œ HuggingFace ํŒ๐Ÿค—:** ์ €ํฌ๋Š” TensorFlow ๋ชจ๋ธ๊ณผ ์†์‹ค ํ•จ์ˆ˜๋ฅผ XLA์™€ ํ˜ธํ™˜๋˜๋„๋ก ์žฌ์ž‘์„ฑํ•˜๋Š” ๋ฐ ๋งŽ์€ ๋…ธ๋ ฅ์„ ๊ธฐ์šธ์˜€์Šต๋‹ˆ๋‹ค. ์ €ํฌ์˜ ๋ชจ๋ธ๊ณผ ์†์‹ค ํ•จ์ˆ˜๋Š” ๋Œ€๊ฐœ ๊ธฐ๋ณธ์ ์œผ๋กœ ๊ทœ์น™ #1๊ณผ #2๋ฅผ ๋”ฐ๋ฅด๋ฏ€๋กœ `transformers` ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ, ์ด๋ฅผ ๊ฑด๋„ˆ๋›ธ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์ž์ฒด ๋ชจ๋ธ๊ณผ ์†์‹ค ํ•จ์ˆ˜๋ฅผ ์ž‘์„ฑํ•  ๋•Œ๋Š” ์ด๋Ÿฌํ•œ ๊ทœ์น™์„ ์žŠ์ง€ ๋งˆ์„ธ์š”! </Tip> #### XLA ๊ทœ์น™ #1: ์ฝ”๋“œ์—์„œ โ€œ๋ฐ์ดํ„ฐ ์ข…์† ์กฐ๊ฑด๋ฌธโ€์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค[[xla-rule-1-your-code-cannot-have-datadependent-conditionals]] ์–ด๋–ค `if`๋ฌธ๋„ `tf.Tensor` ๋‚ด๋ถ€์˜ ๊ฐ’์— ์ข…์†๋  ์ˆ˜ ์—†๋‹ค๋Š” ๊ฒƒ์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ์ด ์ฝ”๋“œ ๋ธ”๋ก์€ XLA๋กœ ์ปดํŒŒ์ผํ•  ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค! ```python if tf.reduce_sum(tensor) > 10: tensor = tensor / 2.0 ``` ์ฒ˜์Œ์—๋Š” ๋งค์šฐ ์ œํ•œ์ ์œผ๋กœ ๋ณด์ผ ์ˆ˜ ์žˆ์ง€๋งŒ, ๋Œ€๋ถ€๋ถ„์˜ ์‹ ๊ฒฝ๋ง ์ฝ”๋“œ์—์„œ๋Š” ์ด๋ฅผ ์ˆ˜ํ–‰ํ•  ํ•„์š”๊ฐ€ ์—†์Šต๋‹ˆ๋‹ค. 
`tf.cond`๋ฅผ ์‚ฌ์šฉํ•˜๊ฑฐ๋‚˜([์—ฌ๊ธฐ](https://www.tensorflow.org/api_docs/python/tf/cond) ๋ฌธ์„œ๋ฅผ ์ฐธ์กฐ), ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์กฐ๊ฑด๋ฌธ์„ ์ œ๊ฑฐํ•˜๊ณ  ๋Œ€์‹  ์ง€ํ‘œ ๋ณ€์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ์˜๋ฆฌํ•œ ์ˆ˜ํ•™ ํŠธ๋ฆญ์„ ์ฐพ์•„๋‚ด์–ด ์ด ์ œํ•œ์„ ์šฐํšŒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```python sum_over_10 = tf.cast(tf.reduce_sum(tensor) > 10, tf.float32) tensor = tensor / (1.0 + sum_over_10) ``` ์ด ์ฝ”๋“œ๋Š” ์œ„์˜ ์ฝ”๋“œ์™€ ์ •ํ™•ํžˆ ๋™์ผํ•œ ํšจ๊ณผ๋ฅผ ๊ตฌํ˜„ํ•˜์ง€๋งŒ, ์กฐ๊ฑด๋ฌธ์„ ์ œ๊ฑฐํ•˜์—ฌ ๋ฌธ์ œ ์—†์ด XLA๋กœ ์ปดํŒŒ์ผ๋˜๋„๋ก ํ•ฉ๋‹ˆ๋‹ค! #### XLA ๊ทœ์น™ #2: ์ฝ”๋“œ์—์„œ "๋ฐ์ดํ„ฐ ์ข…์† ํฌ๊ธฐ"๋ฅผ ๊ฐ€์งˆ ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค[[xla-rule-2-your-code-cannot-have-datadependent-shapes]] ์ฝ”๋“œ์—์„œ ๋ชจ๋“  `tf.Tensor` ๊ฐ์ฒด์˜ ํฌ๊ธฐ๊ฐ€ ํ•ด๋‹น ๊ฐ’์— ์ข…์†๋  ์ˆ˜ ์—†๋‹ค๋Š” ๊ฒƒ์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, `tf.unique` ํ•จ์ˆ˜๋Š” ์ž…๋ ฅ์—์„œ ๊ฐ ๊ณ ์œ  ๊ฐ’์˜ ์ธ์Šคํ„ด์Šค ํ•˜๋‚˜๋ฅผ ํฌํ•จํ•˜๋Š” `tensor`๋ฅผ ๋ฐ˜ํ™˜ํ•˜๊ธฐ ๋•Œ๋ฌธ์— XLA๋กœ ์ปดํŒŒ์ผํ•  ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค. ์ด ์ถœ๋ ฅ์˜ ํฌ๊ธฐ๋Š” ์ž…๋ ฅ `Tensor`๊ฐ€ ์–ผ๋งˆ๋‚˜ ๋ฐ˜๋ณต์ ์ธ์ง€์— ๋”ฐ๋ผ ๋ถ„๋ช…ํžˆ ๋‹ฌ๋ผ์งˆ ๊ฒƒ์ด๋ฏ€๋กœ, XLA๋Š” ์ด๋ฅผ ์ฒ˜๋ฆฌํ•˜์ง€ ๋ชปํ•ฉ๋‹ˆ๋‹ค! ์ผ๋ฐ˜์ ์œผ๋กœ, ๋Œ€๋ถ€๋ถ„์˜ ์‹ ๊ฒฝ๋ง ์ฝ”๋“œ๋Š” ๊ธฐ๋ณธ๊ฐ’์œผ๋กœ ๊ทœ์น™ 2๋ฅผ ๋”ฐ๋ฆ…๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ๋ฌธ์ œ๊ฐ€ ๋˜๋Š” ๋ช‡ ๊ฐ€์ง€ ๋Œ€ํ‘œ์ ์ธ ์‚ฌ๋ก€๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ฐ€์žฅ ํ”ํ•œ ์‚ฌ๋ก€ ์ค‘ ํ•˜๋‚˜๋Š” **๋ ˆ์ด๋ธ” ๋งˆ์Šคํ‚น**์„ ์‚ฌ์šฉํ•˜์—ฌ ์†์‹ค(loss)์„ ๊ณ„์‚ฐํ•  ๋•Œ, ํ•ด๋‹น ์œ„์น˜๋ฅผ ๋ฌด์‹œํ•˜๋„๋ก ๋‚˜ํƒ€๋‚ด๊ธฐ ์œ„ํ•ด ๋ ˆ์ด๋ธ”์„ ์Œ์ˆ˜ ๊ฐ’์œผ๋กœ ์„ค์ •ํ•˜๋Š” ๊ฒฝ์šฐ์ž…๋‹ˆ๋‹ค. 
๋ ˆ์ด๋ธ” ๋งˆ์Šคํ‚น์„ ์ง€์›ํ•˜๋Š” NumPy๋‚˜ PyTorch ์†์‹ค ํ•จ์ˆ˜๋ฅผ ๋ณด๋ฉด [๋ถˆ ์ธ๋ฑ์‹ฑ](https://numpy.org/doc/stable/user/basics.indexing.html#boolean-array-indexing)์„ ์‚ฌ์šฉํ•˜๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์€ ์ฝ”๋“œ๋ฅผ ์ž์ฃผ ์ ‘ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```python label_mask = labels >= 0 masked_outputs = outputs[label_mask] masked_labels = labels[label_mask] loss = compute_loss(masked_outputs, masked_labels) mean_loss = torch.mean(loss) ``` ์ด ์ฝ”๋“œ๋Š” NumPy๋‚˜ PyTorch์—์„œ๋Š” ๋ฌธ์ œ ์—†์ด ์ž‘๋™ํ•˜์ง€๋งŒ, XLA์—์„œ๋Š” ์†์ƒ๋ฉ๋‹ˆ๋‹ค! ์™œ ๊ทธ๋Ÿด๊นŒ์š”? ์–ผ๋งˆ๋‚˜ ๋งŽ์€ ์œ„์น˜๊ฐ€ ๋งˆ์Šคํ‚น๋˜๋Š”์ง€์— ๋”ฐ๋ผ `masked_outputs`์™€ `masked_labels`์˜ ํฌ๊ธฐ๊ฐ€ ๋‹ฌ๋ผ์ ธ์„œ, **๋ฐ์ดํ„ฐ ์ข…์† ํฌ๊ธฐ**๊ฐ€ ๋˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ๊ทœ์น™ #1๊ณผ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ, ์ด ์ฝ”๋“œ๋ฅผ ๋‹ค์‹œ ์ž‘์„ฑํ•˜๋ฉด ๋ฐ์ดํ„ฐ ์ข…์†์  ๋ชจ์–‘ ํฌ๊ธฐ๊ฐ€ ์ •ํ™•ํžˆ ๋™์ผํ•œ ์ถœ๋ ฅ์„ ์‚ฐ์ถœํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```python label_mask = tf.cast(labels >= 0, tf.float32) loss = compute_loss(outputs, labels) loss = loss * label_mask # Set negative label positions to 0 mean_loss = tf.reduce_sum(loss) / tf.reduce_sum(label_mask) ``` ์—ฌ๊ธฐ์„œ, ๋ชจ๋“  ์œ„์น˜์— ๋Œ€ํ•œ ์†์‹ค์„ ๊ณ„์‚ฐํ•˜์ง€๋งŒ, ํ‰๊ท ์„ ๊ณ„์‚ฐํ•  ๋•Œ ๋ถ„์ž์™€ ๋ถ„๋ชจ ๋ชจ๋‘์—์„œ ๋งˆ์Šคํฌ๋œ ์œ„์น˜๋ฅผ 0์œผ๋กœ ์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ๋ฐ์ดํ„ฐ ์ข…์† ํฌ๊ธฐ๋ฅผ ๋ฐฉ์ง€ํ•˜๊ณ  XLA ํ˜ธํ™˜์„ฑ์„ ์œ ์ง€ํ•˜๋ฉด์„œ ์ฒซ ๋ฒˆ์งธ ๋ธ”๋ก๊ณผ ์ •ํ™•ํžˆ ๋™์ผํ•œ ๊ฒฐ๊ณผ๋ฅผ ์‚ฐ์ถœํ•ฉ๋‹ˆ๋‹ค. ๊ทœ์น™ #1์—์„œ์™€ ๋™์ผํ•œ ํŠธ๋ฆญ์„ ์‚ฌ์šฉํ•˜์—ฌ `tf.bool`์„ `tf.float32`๋กœ ๋ณ€ํ™˜ํ•˜๊ณ  ์ด๋ฅผ ์ง€ํ‘œ ๋ณ€์ˆ˜๋กœ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ํ•ด๋‹น ํŠธ๋ฆญ์€ ๋งค์šฐ ์œ ์šฉํ•˜๋ฉฐ, ์ž์ฒด ์ฝ”๋“œ๋ฅผ XLA๋กœ ๋ณ€ํ™˜ํ•ด์•ผ ํ•  ๊ฒฝ์šฐ ๊ธฐ์–ตํ•ด ๋‘์„ธ์š”! #### XLA ๊ทœ์น™ #3: XLA๋Š” ๊ฐ๊ธฐ ๋‹ค๋ฅธ ์ž…๋ ฅ ํฌ๊ธฐ๊ฐ€ ๋‚˜ํƒ€๋‚  ๋•Œ๋งˆ๋‹ค ๋ชจ๋ธ์„ ๋‹ค์‹œ ์ปดํŒŒ์ผํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค[[xla-rule-3-xla-will-need-to-recompile-your-model-for-every-different-input-shape-it-sees]] ์ด๊ฒƒ์€ ๊ฐ€์žฅ ํฐ ๋ฌธ์ œ์ž…๋‹ˆ๋‹ค. 
์ž…๋ ฅ ํฌ๊ธฐ๊ฐ€ ๋งค์šฐ ๊ฐ€๋ณ€์ ์ธ ๊ฒฝ์šฐ, XLA๋Š” ๋ชจ๋ธ์„ ๋ฐ˜๋ณตํ•ด์„œ ๋‹ค์‹œ ์ปดํŒŒ์ผํ•ด์•ผ ํ•˜๋ฏ€๋กœ ์„ฑ๋Šฅ์— ํฐ ๋ฌธ์ œ๊ฐ€ ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๋ฌธ์ œ๋Š” ํ† ํฐํ™” ํ›„ ์ž…๋ ฅ ํ…์ŠคํŠธ์˜ ๊ธธ์ด๊ฐ€ ๊ฐ€๋ณ€์ ์ธ NLP ๋ชจ๋ธ์—์„œ ์ฃผ๋กœ ๋ฐœ์ƒํ•ฉ๋‹ˆ๋‹ค. ๋‹ค๋ฅธ ๋ชจ๋‹ฌ๋ฆฌํ‹ฐ์—์„œ๋Š” ์ •์  ํฌ๊ธฐ๊ฐ€ ๋” ํ”ํ•˜๋ฉฐ, ํ•ด๋‹น ๊ทœ์น™์ด ํ›จ์”ฌ ๋œ ๋ฌธ์ œ์‹œ ๋ฉ๋‹ˆ๋‹ค. ๊ทœ์น™ #3์„ ์–ด๋–ป๊ฒŒ ์šฐํšŒํ•  ์ˆ˜ ์žˆ์„๊นŒ์š”? ํ•ต์‹ฌ์€ **ํŒจ๋”ฉ**์ž…๋‹ˆ๋‹ค. ๋ชจ๋“  ์ž…๋ ฅ์„ ๋™์ผํ•œ ๊ธธ์ด๋กœ ํŒจ๋”ฉํ•œ ๋‹ค์Œ, `attention_mask`๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ์–ด๋–ค XLA ๋ฌธ์ œ๋„ ์—†์ด ๊ฐ€๋ณ€ ํฌ๊ธฐ์—์„œ ๊ฐ€์ ธ์˜จ ๊ฒƒ๊ณผ ๋™์ผํ•œ ๊ฒฐ๊ณผ๋ฅผ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ๊ณผ๋„ํ•œ ํŒจ๋”ฉ์€ ์‹ฌ๊ฐํ•œ ์†๋„ ์ €ํ•˜๋ฅผ ์•ผ๊ธฐํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ชจ๋“  ์ƒ˜ํ”Œ์„ ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ํŒจ๋”ฉํ•˜๋ฉด, ๋ฌดํ•œํ•œ ํŒจ๋”ฉ ํ† ํฐ์œผ๋กœ ๊ตฌ์„ฑ๋œ ๋ฐฐ์น˜๊ฐ€ ์ƒ์„ฑ๋˜์–ด ๋งŽ์€ ์—ฐ์‚ฐ๊ณผ ๋ฉ”๋ชจ๋ฆฌ๊ฐ€ ๋‚ญ๋น„๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ์ด ๋ฌธ์ œ์— ๋Œ€ํ•œ ์™„๋ฒฝํ•œ ํ•ด๊ฒฐ์ฑ…์€ ์—†์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ, ๋ช‡ ๊ฐ€์ง€ ํŠธ๋ฆญ์„ ์‹œ๋„ํ•ด๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•œ ๊ฐ€์ง€ ์œ ์šฉํ•œ ํŠธ๋ฆญ์€ **์ƒ˜ํ”Œ ๋ฐฐ์น˜๋ฅผ 32 ๋˜๋Š” 64 ํ† ํฐ๊ณผ ๊ฐ™์€ ์ˆซ์ž์˜ ๋ฐฐ์ˆ˜๊นŒ์ง€ ํŒจ๋”ฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค.** ์ด๋Š” ํ† ํฐ ์ˆ˜๊ฐ€ ์†Œํญ ์ฆ๊ฐ€ํ•˜์ง€๋งŒ, ๋ชจ๋“  ์ž…๋ ฅ ํฌ๊ธฐ๊ฐ€ 32 ๋˜๋Š” 64์˜ ๋ฐฐ์ˆ˜์—ฌ์•ผ ํ•˜๊ธฐ ๋•Œ๋ฌธ์— ๊ณ ์œ ํ•œ ์ž…๋ ฅ ํฌ๊ธฐ์˜ ์ˆ˜๊ฐ€ ๋Œ€ํญ ์ค„์–ด๋“ญ๋‹ˆ๋‹ค. ๊ณ ์œ ํ•œ ์ž…๋ ฅ ํฌ๊ธฐ๊ฐ€ ์ ๋‹ค๋Š” ๊ฒƒ์€ XLA ์ปดํŒŒ์ผ ํšŸ์ˆ˜๊ฐ€ ์ ์–ด์ง„๋‹ค๋Š” ๊ฒƒ์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค! <Tip> **๐Ÿค—ํŠน์ˆ˜ํ•œ HuggingFace ํŒ๐Ÿค—:** ํ† ํฌ๋‚˜์ด์ €์™€ ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ์— ๋„์›€์ด ๋  ์ˆ˜ ์žˆ๋Š” ๋ฉ”์†Œ๋“œ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๋ถˆ๋Ÿฌ์˜ฌ ๋•Œ `padding="max_length"` ๋˜๋Š” `padding="longest"`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํŒจ๋”ฉ๋œ ๋ฐ์ดํ„ฐ๋ฅผ ์ถœ๋ ฅํ•˜๋„๋ก ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ† ํฌ๋‚˜์ด์ €์™€ ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ๋Š” ๋‚˜ํƒ€๋‚˜๋Š” ๊ณ ์œ ํ•œ ์ž…๋ ฅ ํฌ๊ธฐ์˜ ์ˆ˜๋ฅผ ์ค„์ด๊ธฐ ์œ„ํ•ด ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” `pad_to_multiple_of` ์ธ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค! 
</Tip> ### ์‹ค์ œ TPU๋กœ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๋ ค๋ฉด ์–ด๋–ป๊ฒŒ ํ•ด์•ผ ํ•˜๋‚˜์š”?[[how-do-i-actually-train-my-model-on-tpu]] ํ›ˆ๋ จ์ด XLA์™€ ํ˜ธํ™˜๋˜๊ณ  (TPU ๋…ธ๋“œ/Colab์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ) ๋ฐ์ดํ„ฐ ์„ธํŠธ๊ฐ€ ์ ์ ˆํ•˜๊ฒŒ ์ค€๋น„๋˜์—ˆ๋‹ค๋ฉด, TPU์—์„œ ์‹คํ–‰ํ•˜๋Š” ๊ฒƒ์€ ๋†€๋ž๋„๋ก ์‰ฝ์Šต๋‹ˆ๋‹ค! ์ฝ”๋“œ์—์„œ ๋ช‡ ์ค„๋งŒ ์ถ”๊ฐ€ํ•˜์—ฌ, TPU๋ฅผ ์ดˆ๊ธฐํ™”ํ•˜๊ณ  ๋ชจ๋ธ๊ณผ ๋ฐ์ดํ„ฐ ์„ธํŠธ๊ฐ€ `TPUStrategy` ๋ฒ”์œ„ ๋‚ด์— ์ƒ์„ฑ๋˜๋„๋ก ๋ณ€๊ฒฝํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. [์šฐ๋ฆฌ์˜ TPU ์˜ˆ์ œ ๋…ธํŠธ๋ถ](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb)์„ ์ฐธ์กฐํ•˜์—ฌ ์‹ค์ œ๋กœ ์ž‘๋™ํ•˜๋Š” ๋ชจ์Šต์„ ํ™•์ธํ•ด ๋ณด์„ธ์š”! ### ์š”์•ฝ[[summary]] ์—ฌ๊ธฐ์— ๋งŽ์€ ๋‚ด์šฉ์ด ํฌํ•จ๋˜์–ด ์žˆ์œผ๋ฏ€๋กœ, TPU ํ›ˆ๋ จ์„ ์œ„ํ•œ ๋ชจ๋ธ์„ ์ค€๋น„ํ•  ๋•Œ ๋”ฐ๋ฅผ ์ˆ˜ ์žˆ๋Š” ๊ฐ„๋žตํ•œ ์ฒดํฌ๋ฆฌ์ŠคํŠธ๋กœ ์š”์•ฝํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: - ์ฝ”๋“œ๊ฐ€ XLA์˜ ์„ธ ๊ฐ€์ง€ ๊ทœ์น™์„ ๋”ฐ๋ฅด๋Š”์ง€ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค. - CPU/GPU์—์„œ `jit_compile=True`๋กœ ๋ชจ๋ธ์„ ์ปดํŒŒ์ผํ•˜๊ณ  XLA๋กœ ํ›ˆ๋ จํ•  ์ˆ˜ ์žˆ๋Š”์ง€ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค. - ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋ฉ”๋ชจ๋ฆฌ์— ๊ฐ€์ ธ์˜ค๊ฑฐ๋‚˜ TPU ํ˜ธํ™˜ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๊ฐ€์ ธ์˜ค๋Š” ๋ฐฉ์‹์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค([๋…ธํŠธ๋ถ](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb) ์ฐธ์กฐ) - ์ฝ”๋“œ๋ฅผ Colab(accelerator๊ฐ€ โ€œTPUโ€๋กœ ์„ค์ •๋จ) ๋˜๋Š” Google Cloud์˜ TPU VM์œผ๋กœ ๋งˆ์ด๊ทธ๋ ˆ์ด์…˜ํ•ฉ๋‹ˆ๋‹ค. - TPU ์ดˆ๊ธฐํ™” ์ฝ”๋“œ๋ฅผ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค([๋…ธํŠธ๋ถ](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb) ์ฐธ์กฐ) - `TPUStrategy`๋ฅผ ์ƒ์„ฑํ•˜๊ณ  ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๊ฐ€์ ธ์˜ค๋Š” ๊ฒƒ๊ณผ ๋ชจ๋ธ ์ƒ์„ฑ์ด `strategy.scope()` ๋‚ด์— ์žˆ๋Š”์ง€ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค([๋…ธํŠธ๋ถ](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb) ์ฐธ์กฐ) - TPU๋กœ ์ด๋™ํ•  ๋•Œ `jit_compile=True`๋ฅผ ๋‹ค์‹œ ์„ค์ •ํ•˜๋Š” ๊ฒƒ์„ ์žŠ์ง€ ๋งˆ์„ธ์š”! - ๐Ÿ™๐Ÿ™๐Ÿ™๐Ÿฅบ๐Ÿฅบ๐Ÿฅบ - model.fit()์„ ๋ถˆ๋Ÿฌ์˜ต๋‹ˆ๋‹ค. 
- ์—ฌ๋Ÿฌ๋ถ„์ด ํ•ด๋ƒˆ์Šต๋‹ˆ๋‹ค!
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ko/_toctree.yml
- sections: - local: index title: ๐Ÿค— Transformers - local: quicktour title: ๋‘˜๋Ÿฌ๋ณด๊ธฐ - local: installation title: ์„ค์น˜๋ฐฉ๋ฒ• title: ์‹œ์ž‘ํ•˜๊ธฐ - sections: - local: pipeline_tutorial title: Pipeline์œผ๋กœ ์ถ”๋ก ํ•˜๊ธฐ - local: autoclass_tutorial title: AutoClass๋กœ ์‚ฌ์ „ ํ•™์Šต๋œ ์ธ์Šคํ„ด์Šค ๋กœ๋“œํ•˜๊ธฐ - local: preprocessing title: ๋ฐ์ดํ„ฐ ์ „์ฒ˜๋ฆฌํ•˜๊ธฐ - local: training title: ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ - local: run_scripts title: ์Šคํฌ๋ฆฝํŠธ๋กœ ํ•™์Šตํ•˜๊ธฐ - local: accelerate title: ๐Ÿค— Accelerate๋กœ ๋ถ„์‚ฐ ํ•™์Šต ๊ตฌ์„ฑํ•˜๊ธฐ - local: peft title: ๐Ÿค— PEFT๋กœ ์–ด๋Œ‘ํ„ฐ ๋กœ๋“œ ๋ฐ ํ•™์Šตํ•˜๊ธฐ - local: model_sharing title: ๋งŒ๋“  ๋ชจ๋ธ ๊ณต์œ ํ•˜๊ธฐ - local: transformers_agents title: ์—์ด์ „ํŠธ - local: llm_tutorial title: ๋Œ€๊ทœ๋ชจ ์–ธ์–ด ๋ชจ๋ธ๋กœ ์ƒ์„ฑํ•˜๊ธฐ title: ํŠœํ† ๋ฆฌ์–ผ - sections: - sections: - local: tasks/sequence_classification title: ํ…์ŠคํŠธ ๋ถ„๋ฅ˜ - local: tasks/token_classification title: ํ† ํฐ ๋ถ„๋ฅ˜ - local: tasks/question_answering title: ์งˆ์˜ ์‘๋‹ต(Question Answering) - local: tasks/language_modeling title: ์ธ๊ณผ์  ์–ธ์–ด ๋ชจ๋ธ๋ง(Causal language modeling) - local: tasks/masked_language_modeling title: ๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง(Masked language modeling) - local: tasks/translation title: ๋ฒˆ์—ญ - local: tasks/summarization title: ์š”์•ฝ - local: tasks/multiple_choice title: ๊ฐ๊ด€์‹ ๋ฌธ์ œ(Multiple Choice) title: ์ž์—ฐ์–ด์ฒ˜๋ฆฌ isExpanded: false - sections: - local: tasks/audio_classification title: ์˜ค๋””์˜ค ๋ถ„๋ฅ˜ - local: tasks/asr title: ์ž๋™ ์Œ์„ฑ ์ธ์‹ title: ์˜ค๋””์˜ค isExpanded: false - sections: - local: tasks/image_classification title: ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜ - local: tasks/semantic_segmentation title: ์˜๋ฏธ์  ๋ถ„ํ• (Semantic segmentation) - local: tasks/video_classification title: ์˜์ƒ ๋ถ„๋ฅ˜ - local: tasks/object_detection title: ๊ฐ์ฒด ํƒ์ง€ - local: tasks/zero_shot_object_detection title: ์ œ๋กœ์ƒท(zero-shot) ๊ฐ์ฒด ํƒ์ง€ - local: tasks/zero_shot_image_classification 
title: ์ œ๋กœ์ƒท(zero-shot) ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜ - local: tasks/monocular_depth_estimation title: ๋‹จ์ผ ์˜์ƒ ๊ธฐ๋ฐ˜ ๊นŠ์ด ์ถ”์ • title: ์ปดํ“จํ„ฐ ๋น„์ „ isExpanded: false - sections: - local: tasks/image_captioning title: ์ด๋ฏธ์ง€ ์บก์…”๋‹ - local: tasks/document_question_answering title: ๋ฌธ์„œ ์งˆ์˜ ์‘๋‹ต(Document Question Answering) - local: tasks/visual_question_answering title: ์‹œ๊ฐ์  ์งˆ์˜์‘๋‹ต (Visual Question Answering) title: ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ isExpanded: false title: ํƒœ์Šคํฌ ๊ฐ€์ด๋“œ - sections: - local: fast_tokenizers title: ๐Ÿค— Tokenizers ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ ํ† ํฌ๋‚˜์ด์ € ์‚ฌ์šฉํ•˜๊ธฐ - local: multilingual title: ๋‹ค๊ตญ์–ด ๋ชจ๋ธ ์ถ”๋ก ํ•˜๊ธฐ - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Customize text generation strategy - local: create_a_model title: ๋ชจ๋ธ๋ณ„ API ์‚ฌ์šฉํ•˜๊ธฐ - local: custom_models title: ์‚ฌ์šฉ์ž ์ •์˜ ๋ชจ๋ธ ๊ณต์œ ํ•˜๊ธฐ - local: sagemaker title: Amazon SageMaker์—์„œ ํ•™์Šต ์‹คํ–‰ํ•˜๊ธฐ - local: serialization title: ONNX๋กœ ๋‚ด๋ณด๋‚ด๊ธฐ - local: tflite title: TFLite๋กœ ๋‚ด๋ณด๋‚ด๊ธฐ - local: torchscript title: TorchScript๋กœ ๋‚ด๋ณด๋‚ด๊ธฐ - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Benchmarks - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Notebooks with examples - local: community title: ์ปค๋ฎค๋‹ˆํ‹ฐ ๋ฆฌ์†Œ์Šค - local: custom_tools title: ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ์™€ ํ”„๋กฌํ”„ํŠธ - local: troubleshooting title: ๋ฌธ์ œ ํ•ด๊ฒฐ title: (๋ฒˆ์—ญ์ค‘) ๊ฐœ๋ฐœ์ž ๊ฐ€์ด๋“œ - sections: - local: performance title: ์„ฑ๋Šฅ ๋ฐ ํ™•์žฅ์„ฑ - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Training on one GPU - local: perf_train_gpu_many title: ๋‹ค์ค‘ GPU์—์„œ ํ›ˆ๋ จ ์ง„ํ–‰ํ•˜๊ธฐ - local: perf_train_cpu title: CPU์—์„œ ํ›ˆ๋ จ - local: perf_train_cpu_many title: ๋‹ค์ค‘ CPU์—์„œ ํ›ˆ๋ จํ•˜๊ธฐ - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Training on TPUs - local: perf_train_tpu_tf title: TensorFlow๋กœ TPU์—์„œ ํ›ˆ๋ จํ•˜๊ธฐ - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Training on Specialized Hardware - local: perf_infer_cpu title: CPU๋กœ ์ถ”๋ก ํ•˜๊ธฐ - 
local: perf_infer_gpu_one title: ํ•˜๋‚˜์˜ GPU๋ฅผ ํ™œ์šฉํ•œ ์ถ”๋ก  - local: perf_infer_gpu_many title: ๋‹ค์ค‘ GPU์—์„œ ์ถ”๋ก  - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Inference on Specialized Hardware - local: perf_hardware title: ํ›ˆ๋ จ์šฉ ์‚ฌ์šฉ์ž ๋งž์ถคํ˜• ํ•˜๋“œ์›จ์–ด - local: big_models title: ๋Œ€ํ˜• ๋ชจ๋ธ์„ ์ธ์Šคํ„ด์Šคํ™” - local: debugging title: ๋””๋ฒ„๊น… - local: hpo_train title: Trainer API๋ฅผ ์‚ฌ์šฉํ•œ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ ํƒ์ƒ‰ - local: tf_xla title: TensorFlow ๋ชจ๋ธ์„ ์œ„ํ•œ XLA ํ†ตํ•ฉ title: (๋ฒˆ์—ญ์ค‘) ์„ฑ๋Šฅ ๋ฐ ํ™•์žฅ์„ฑ - sections: - local: contributing title: ๐Ÿค— Transformers์— ๊ธฐ์—ฌํ•˜๋Š” ๋ฐฉ๋ฒ• - local: add_new_model title: ๐Ÿค— Transformers์— ์ƒˆ๋กœ์šด ๋ชจ๋ธ์„ ์ถ”๊ฐ€ํ•˜๋Š” ๋ฐฉ๋ฒ• - local: add_tensorflow_model title: ์–ด๋–ป๊ฒŒ ๐Ÿค— Transformers ๋ชจ๋ธ์„ TensorFlow๋กœ ๋ณ€ํ™˜ํ•˜๋‚˜์š”? - local: add_new_pipeline title: ์–ด๋–ป๊ฒŒ ๐Ÿค— Transformers์— ํŒŒ์ดํ”„๋ผ์ธ์„ ์ถ”๊ฐ€ํ•˜๋‚˜์š”? - local: testing title: ํ…Œ์ŠคํŠธ - local: pr_checks title: Pull Request์— ๋Œ€ํ•œ ๊ฒ€์‚ฌ title: (๋ฒˆ์—ญ์ค‘) ๊ธฐ์—ฌํ•˜๊ธฐ - sections: - local: philosophy title: ์ด๋…๊ณผ ๋ชฉํ‘œ - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Glossary - local: task_summary title: ๐Ÿค— Transformers๋กœ ํ•  ์ˆ˜ ์žˆ๋Š” ์ž‘์—… - local: tasks_explained title: ๐Ÿค— Transformers๋กœ ์ž‘์—…์„ ํ•ด๊ฒฐํ•˜๋Š” ๋ฐฉ๋ฒ• - local: model_summary title: Transformer ๋ชจ๋ธ๊ตฐ - local: tokenizer_summary title: ํ† ํฌ๋‚˜์ด์ € ์š”์•ฝ - local: attention title: ์–ดํ…์…˜ ๋งค์ปค๋‹ˆ์ฆ˜ - local: pad_truncation title: ํŒจ๋”ฉ๊ณผ ์ž˜๋ผ๋‚ด๊ธฐ - local: bertology title: BERTology - local: perplexity title: ๊ณ ์ • ๊ธธ์ด ๋ชจ๋ธ์˜ ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ(Perplexity) - local: pipeline_webserver title: ์ถ”๋ก  ์›น ์„œ๋ฒ„๋ฅผ ์œ„ํ•œ ํŒŒ์ดํ”„๋ผ์ธ - local: model_memory_anatomy title: ๋ชจ๋ธ ํ•™์Šต ํ•ด๋ถ€ํ•˜๊ธฐ title: (๋ฒˆ์—ญ์ค‘) ๊ฐœ๋… ๊ฐ€์ด๋“œ - sections: - sections: - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Auto Classes - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Callbacks - local: in_translation title: 
(๋ฒˆ์—ญ์ค‘) Configuration - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Data Collator - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Keras callbacks - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Logging - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Models - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Text Generation - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ONNX - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Optimization - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Model outputs - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Pipelines - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Processors - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Quantization - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Tokenizer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Trainer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) DeepSpeed Integration - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Feature Extractor - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Image Processor title: (๋ฒˆ์—ญ์ค‘) ๋ฉ”์ธ ํด๋ž˜์Šค - sections: - isExpanded: false sections: - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ALBERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BART - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BARThez - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BARTpho - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BertGeneration - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BertJapanese - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Bertweet - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BigBird - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BigBirdPegasus - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BioGpt - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Blenderbot - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Blenderbot Small - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BLOOM - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BORT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ByT5 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) CamemBERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) CANINE - local: in_translation title: (๋ฒˆ์—ญ์ค‘) CodeGen - local: 
in_translation title: (๋ฒˆ์—ญ์ค‘) ConvBERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) CPM - local: in_translation title: (๋ฒˆ์—ญ์ค‘) CPMANT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) CTRL - local: in_translation title: (๋ฒˆ์—ญ์ค‘) DeBERTa - local: in_translation title: (๋ฒˆ์—ญ์ค‘) DeBERTa-v2 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) DialoGPT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) DistilBERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) DPR - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ELECTRA - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Encoder Decoder Models - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ERNIE - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ErnieM - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ESM - local: in_translation title: (๋ฒˆ์—ญ์ค‘) FLAN-T5 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) FLAN-UL2 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) FlauBERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) FNet - local: in_translation title: (๋ฒˆ์—ญ์ค‘) FSMT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Funnel Transformer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) GPT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) GPT Neo - local: in_translation title: (๋ฒˆ์—ญ์ค‘) GPT NeoX - local: in_translation title: (๋ฒˆ์—ญ์ค‘) GPT NeoX Japanese - local: in_translation title: (๋ฒˆ์—ญ์ค‘) GPT-J - local: in_translation title: (๋ฒˆ์—ญ์ค‘) GPT2 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) GPTBigCode - local: in_translation title: (๋ฒˆ์—ญ์ค‘) GPTSAN Japanese - local: in_translation title: (๋ฒˆ์—ญ์ค‘) GPTSw3 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) HerBERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) I-BERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Jukebox - local: in_translation title: (๋ฒˆ์—ญ์ค‘) LED - local: model_doc/llama title: LLaMA - local: model_doc/llama2 title: LLaMA2 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Longformer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) LongT5 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) LUKE - local: in_translation title: (๋ฒˆ์—ญ์ค‘) M2M100 - local: in_translation 
title: (๋ฒˆ์—ญ์ค‘) MarianMT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MarkupLM - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MBart and MBart-50 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MEGA - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MegatronBERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MegatronGPT2 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) mLUKE - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MobileBERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MPNet - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MT5 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MVP - local: in_translation title: (๋ฒˆ์—ญ์ค‘) NEZHA - local: in_translation title: (๋ฒˆ์—ญ์ค‘) NLLB - local: in_translation title: (๋ฒˆ์—ญ์ค‘) NLLB-MoE - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Nystrรถmformer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Open-Llama - local: in_translation title: (๋ฒˆ์—ญ์ค‘) OPT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Pegasus - local: in_translation title: (๋ฒˆ์—ญ์ค‘) PEGASUS-X - local: in_translation title: (๋ฒˆ์—ญ์ค‘) PhoBERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) PLBart - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ProphetNet - local: in_translation title: (๋ฒˆ์—ญ์ค‘) QDQBert - local: in_translation title: (๋ฒˆ์—ญ์ค‘) RAG - local: in_translation title: (๋ฒˆ์—ญ์ค‘) REALM - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Reformer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) RemBERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) RetriBERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) RoBERTa - local: in_translation title: (๋ฒˆ์—ญ์ค‘) RoBERTa-PreLayerNorm - local: in_translation title: (๋ฒˆ์—ญ์ค‘) RoCBert - local: in_translation title: (๋ฒˆ์—ญ์ค‘) RoFormer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Splinter - local: in_translation title: (๋ฒˆ์—ญ์ค‘) SqueezeBERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) SwitchTransformers - local: in_translation title: (๋ฒˆ์—ญ์ค‘) T5 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) T5v1.1 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) TAPEX - local: in_translation title: (๋ฒˆ์—ญ์ค‘) 
Transformer XL - local: in_translation title: (๋ฒˆ์—ญ์ค‘) UL2 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) X-MOD - local: in_translation title: (๋ฒˆ์—ญ์ค‘) XGLM - local: in_translation title: (๋ฒˆ์—ญ์ค‘) XLM - local: in_translation title: (๋ฒˆ์—ญ์ค‘) XLM-ProphetNet - local: in_translation title: (๋ฒˆ์—ญ์ค‘) XLM-RoBERTa - local: in_translation title: (๋ฒˆ์—ญ์ค‘) XLM-RoBERTa-XL - local: in_translation title: (๋ฒˆ์—ญ์ค‘) XLM-V - local: in_translation title: (๋ฒˆ์—ญ์ค‘) XLNet - local: in_translation title: (๋ฒˆ์—ญ์ค‘) YOSO title: (๋ฒˆ์—ญ์ค‘) ํ…์ŠคํŠธ ๋ชจ๋ธ - isExpanded: false sections: - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BEiT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BiT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Conditional DETR - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ConvNeXT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ConvNeXTV2 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) CvT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Deformable DETR - local: in_translation title: (๋ฒˆ์—ญ์ค‘) DeiT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) DETA - local: in_translation title: (๋ฒˆ์—ญ์ค‘) DETR - local: in_translation title: (๋ฒˆ์—ญ์ค‘) DiNAT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) DiT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) DPT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) EfficientFormer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) EfficientNet - local: in_translation title: (๋ฒˆ์—ญ์ค‘) FocalNet - local: in_translation title: (๋ฒˆ์—ญ์ค‘) GLPN - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ImageGPT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) LeViT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Mask2Former - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MaskFormer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MobileNetV1 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MobileNetV2 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MobileViT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) NAT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) PoolFormer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) RegNet - local: 
in_translation title: (๋ฒˆ์—ญ์ค‘) ResNet - local: in_translation title: (๋ฒˆ์—ญ์ค‘) SegFormer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Swin Transformer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Swin Transformer V2 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Swin2SR - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Table Transformer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) TimeSformer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) UperNet - local: in_translation title: (๋ฒˆ์—ญ์ค‘) VAN - local: in_translation title: (๋ฒˆ์—ญ์ค‘) VideoMAE - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Vision Transformer (ViT) - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ViT Hybrid - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ViTMAE - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ViTMSN - local: in_translation title: (๋ฒˆ์—ญ์ค‘) YOLOS title: (๋ฒˆ์—ญ์ค‘) ๋น„์ „ ๋ชจ๋ธ - isExpanded: false sections: - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Audio Spectrogram Transformer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) CLAP - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Hubert - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MCTCT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) SEW - local: in_translation title: (๋ฒˆ์—ญ์ค‘) SEW-D - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Speech2Text - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Speech2Text2 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) SpeechT5 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) UniSpeech - local: in_translation title: (๋ฒˆ์—ญ์ค‘) UniSpeech-SAT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Wav2Vec2 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Wav2Vec2-Conformer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Wav2Vec2Phoneme - local: in_translation title: (๋ฒˆ์—ญ์ค‘) WavLM - local: model_doc/whisper title: Whisper - local: in_translation title: (๋ฒˆ์—ญ์ค‘) XLS-R - local: in_translation title: (๋ฒˆ์—ญ์ค‘) XLSR-Wav2Vec2 title: (๋ฒˆ์—ญ์ค‘) ์˜ค๋””์˜ค ๋ชจ๋ธ - isExpanded: false sections: - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ALIGN - local: in_translation title: (๋ฒˆ์—ญ์ค‘) AltCLIP - 
local: in_translation title: (๋ฒˆ์—ญ์ค‘) BLIP - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BLIP-2 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) BridgeTower - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Chinese-CLIP - local: in_translation title: (๋ฒˆ์—ญ์ค‘) CLIP - local: in_translation title: (๋ฒˆ์—ญ์ค‘) CLIPSeg - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Data2Vec - local: in_translation title: (๋ฒˆ์—ญ์ค‘) DePlot - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Donut - local: in_translation title: (๋ฒˆ์—ญ์ค‘) FLAVA - local: in_translation title: (๋ฒˆ์—ญ์ค‘) GIT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) GroupViT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) LayoutLM - local: in_translation title: (๋ฒˆ์—ญ์ค‘) LayoutLMV2 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) LayoutLMV3 - local: in_translation title: (๋ฒˆ์—ญ์ค‘) LayoutXLM - local: in_translation title: (๋ฒˆ์—ญ์ค‘) LiLT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) LXMERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MatCha - local: in_translation title: (๋ฒˆ์—ญ์ค‘) MGP-STR - local: in_translation title: (๋ฒˆ์—ญ์ค‘) OneFormer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) OWL-ViT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Perceiver - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Pix2Struct - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Segment Anything - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Speech Encoder Decoder Models - local: in_translation title: (๋ฒˆ์—ญ์ค‘) TAPAS - local: in_translation title: (๋ฒˆ์—ญ์ค‘) TrOCR - local: in_translation title: (๋ฒˆ์—ญ์ค‘) TVLT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) ViLT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Vision Encoder Decoder Models - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Vision Text Dual Encoder - local: in_translation title: (๋ฒˆ์—ญ์ค‘) VisualBERT - local: in_translation title: (๋ฒˆ์—ญ์ค‘) X-CLIP title: (๋ฒˆ์—ญ์ค‘) ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ ๋ชจ๋ธ - isExpanded: false sections: - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Decision Transformer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Trajectory Transformer 
title: (๋ฒˆ์—ญ์ค‘) ๊ฐ•ํ™”ํ•™์Šต ๋ชจ๋ธ - isExpanded: false sections: - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Informer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Time Series Transformer title: (๋ฒˆ์—ญ์ค‘) ์‹œ๊ณ„์—ด ๋ชจ๋ธ - isExpanded: false sections: - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Graphormer title: (๋ฒˆ์—ญ์ค‘) Graph models title: (๋ฒˆ์—ญ์ค‘) ๋ชจ๋ธ - sections: - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Custom Layers and Utilities - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Utilities for pipelines - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Utilities for Tokenizers - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Utilities for Trainer - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Utilities for Generation - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Utilities for Image Processors - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Utilities for Audio processing - local: in_translation title: (๋ฒˆ์—ญ์ค‘) General Utilities - local: in_translation title: (๋ฒˆ์—ญ์ค‘) Utilities for Time Series title: (๋ฒˆ์—ญ์ค‘) Internal Helpers title: (๋ฒˆ์—ญ์ค‘) API
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/ko/perf_train_cpu.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# CPU에서 효율적인 훈련 [[efficient-training-on-cpu]]

이 가이드는 CPU에서 대규모 모델을 효율적으로 훈련하는 데 초점을 맞춥니다.

## IPEX와 혼합 정밀도 [[mixed-precision-with-ipex]]

IPEX는 AVX-512 이상을 지원하는 CPU에 최적화되어 있으며, AVX2만 지원하는 CPU에도 기능적으로 작동합니다. 따라서 AVX-512 이상의 Intel CPU 세대에서는 성능상 이점이 있을 것으로 예상되지만, AVX2만 지원하는 CPU(예: AMD CPU 또는 오래된 Intel CPU)의 경우에는 IPEX를 사용하더라도 더 나은 성능이 보장되지는 않습니다.

IPEX는 Float32와 BFloat16를 모두 사용하여 CPU 훈련을 위한 성능 최적화를 제공합니다. BFloat16의 사용은 다음 섹션의 주요 초점입니다.

저정밀도 데이터 타입인 BFloat16은 3세대 Xeon® Scalable 프로세서 (코드명: Cooper Lake)에서 AVX512 명령어 집합을 네이티브로 지원해 왔으며, 다음 세대의 Intel® Xeon® Scalable 프로세서에서 Intel® Advanced Matrix Extensions (Intel® AMX) 명령어 집합을 지원하여 성능을 크게 향상시킬 예정입니다. CPU 백엔드의 자동 혼합 정밀도 기능은 PyTorch-1.10부터 활성화되었습니다.
๋™์‹œ์—, Intelยฎ Extension for PyTorch์—์„œ BFloat16์— ๋Œ€ํ•œ CPU์˜ ์ž๋™ ํ˜ผํ•ฉ ์ •๋ฐ€๋„ ๋ฐ ์—ฐ์‚ฐ์ž์˜ BFloat16 ์ตœ์ ํ™”๋ฅผ ๋Œ€๊ทœ๋ชจ๋กœ ํ™œ์„ฑํ™”ํ•˜๊ณ , PyTorch ๋งˆ์Šคํ„ฐ ๋ธŒ๋žœ์น˜๋กœ ๋ถ€๋ถ„์ ์œผ๋กœ ์—…์ŠคํŠธ๋ฆผ์„ ๋ฐ˜์˜ํ–ˆ์Šต๋‹ˆ๋‹ค. ์‚ฌ์šฉ์ž๋“ค์€ IPEX ์ž๋™ ํ˜ผํ•ฉ ์ •๋ฐ€๋„๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋” ๋‚˜์€ ์„ฑ๋Šฅ๊ณผ ์‚ฌ์šฉ์ž ๊ฒฝํ—˜์„ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [์ž๋™ ํ˜ผํ•ฉ ์ •๋ฐ€๋„](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/features/amp.html)์— ๋Œ€ํ•œ ์ž์„ธํ•œ ์ •๋ณด๋ฅผ ํ™•์ธํ•˜์‹ญ์‹œ์˜ค. ### IPEX ์„ค์น˜: [[ipex-installation]] IPEX ๋ฆด๋ฆฌ์Šค๋Š” PyTorch๋ฅผ ๋”ฐ๋ผ๊ฐ‘๋‹ˆ๋‹ค. pip๋ฅผ ํ†ตํ•ด ์„ค์น˜ํ•˜๋ ค๋ฉด: | PyTorch Version | IPEX version | | :---------------: | :----------: | | 1.13 | 1.13.0+cpu | | 1.12 | 1.12.300+cpu | | 1.11 | 1.11.200+cpu | | 1.10 | 1.10.100+cpu | ``` pip install intel_extension_for_pytorch==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu ``` [IPEX ์„ค์น˜](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/installation.html)์— ๋Œ€ํ•œ ๋” ๋งŽ์€ ์ ‘๊ทผ ๋ฐฉ๋ฒ•์„ ํ™•์ธํ•˜์‹ญ์‹œ์˜ค. ### Trainer์—์„œ์˜ ์‚ฌ์šฉ๋ฒ• [[usage-in-trainer]] Trainer์—์„œ IPEX์˜ ์ž๋™ ํ˜ผํ•ฉ ์ •๋ฐ€๋„๋ฅผ ํ™œ์„ฑํ™”ํ•˜๋ ค๋ฉด ์‚ฌ์šฉ์ž๋Š” ํ›ˆ๋ จ ๋ช…๋ น ์ธ์ˆ˜์— `use_ipex`, `bf16`, `no_cuda`๋ฅผ ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. [Transformers ์งˆ๋ฌธ-์‘๋‹ต](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering)์˜ ์‚ฌ์šฉ ์‚ฌ๋ก€๋ฅผ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. 
- CPU์—์„œ BF16 ์ž๋™ ํ˜ผํ•ฉ ์ •๋ฐ€๋„๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ IPEX๋กœ ํ›ˆ๋ จํ•˜๊ธฐ: <pre> python run_qa.py \ --model_name_or_path bert-base-uncased \ --dataset_name squad \ --do_train \ --do_eval \ --per_device_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/debug_squad/ \ <b>--use_ipex \</b> <b>--bf16 --no_cuda</b></pre> ### ์‹ค์Šต ์˜ˆ์‹œ [[practice-example]] ๋ธ”๋กœ๊ทธ: [Intel Sapphire Rapids๋กœ PyTorch Transformers ๊ฐ€์†ํ™”](https://huggingface.co/blog/intel-sapphire-rapids)
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Hugging Face Transformers์— ๋ชจ๋ธ์„ ์ถ”๊ฐ€ํ•˜๋Š” ๋ฐฉ๋ฒ•์€ ๋ฌด์—‡์ธ๊ฐ€์š”? [[how-to-add-a-model-to-transformers]]

Hugging Face Transformers ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋Š” ์ปค๋ฎค๋‹ˆํ‹ฐ ๊ธฐ์—ฌ์ž๋“ค ๋•๋ถ„์— ์ƒˆ๋กœ์šด ๋ชจ๋ธ์„ ์ œ๊ณตํ•  ์ˆ˜ ์žˆ๋Š” ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์ด๋Š” ๋„์ „์ ์ธ ํ”„๋กœ์ ํŠธ์ด๋ฉฐ Hugging Face Transformers ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์™€ ๊ตฌํ˜„ํ•  ๋ชจ๋ธ์— ๋Œ€ํ•œ ๊นŠ์€ ์ดํ•ด๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. Hugging Face์—์„œ๋Š” ๋” ๋งŽ์€ ์ปค๋ฎค๋‹ˆํ‹ฐ ๋ฉค๋ฒ„๊ฐ€ ๋ชจ๋ธ์„ ์ ๊ทน์ ์œผ๋กœ ์ถ”๊ฐ€ํ•  ์ˆ˜ ์žˆ๋„๋ก ์ง€์›ํ•˜๊ณ ์ž ํ•˜๋ฉฐ, ์ด ๊ฐ€์ด๋“œ๋ฅผ ํ†ตํ•ด PyTorch ๋ชจ๋ธ์„ ์ถ”๊ฐ€ํ•˜๋Š” ๊ณผ์ •์„ ์•ˆ๋‚ดํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค (PyTorch๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•ด์ฃผ์„ธ์š”).

<Tip>

TensorFlow ๋ชจ๋ธ์„ ๊ตฌํ˜„ํ•˜๊ณ ์ž ํ•˜๋Š” ๊ฒฝ์šฐ [๐Ÿค— Transformers ๋ชจ๋ธ์„ TensorFlow๋กœ ๋ณ€ํ™˜ํ•˜๋Š” ๋ฐฉ๋ฒ•](add_tensorflow_model) ๊ฐ€์ด๋“œ๋ฅผ ์‚ดํŽด๋ณด์„ธ์š”!

</Tip>

์ด ๊ณผ์ •์„ ์ง„ํ–‰ํ•˜๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋‚ด์šฉ์„ ์ดํ•ดํ•˜๊ฒŒ ๋ฉ๋‹ˆ๋‹ค:

- ์˜คํ”ˆ ์†Œ์Šค์˜ ๋ชจ๋ฒ” ์‚ฌ๋ก€์— ๋Œ€ํ•œ ํ†ต์ฐฐ๋ ฅ์„ ์–ป์Šต๋‹ˆ๋‹ค.
- ๊ฐ€์žฅ ์ธ๊ธฐ ์žˆ๋Š” ๋”ฅ๋Ÿฌ๋‹ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ์„ค๊ณ„ ์›์น™์„ ์ดํ•ดํ•ฉ๋‹ˆ๋‹ค.
- ๋Œ€๊ทœ๋ชจ ๋ชจ๋ธ์„ ํšจ์œจ์ ์œผ๋กœ ํ…Œ์ŠคํŠธํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ฐฐ์›๋‹ˆ๋‹ค.
- `black`, `ruff`, `make fix-copies`์™€ ๊ฐ™์€ Python ์œ ํ‹ธ๋ฆฌํ‹ฐ๋ฅผ ํ†ตํ•ฉํ•˜์—ฌ ๊น”๋”ํ•˜๊ณ  ๊ฐ€๋…์„ฑ ์žˆ๋Š” ์ฝ”๋“œ๋ฅผ ์ž‘์„ฑํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ฐฐ์›๋‹ˆ๋‹ค. Hugging Face ํŒ€์€ ํ•ญ์ƒ ๋„์›€์„ ์ค„ ์ค€๋น„๊ฐ€ ๋˜์–ด ์žˆ์œผ๋ฏ€๋กœ ํ˜ผ์ž๊ฐ€ ์•„๋‹ˆ๋ผ๋Š” ์ ์„ ๊ธฐ์–ตํ•˜์„ธ์š”. ๐Ÿค— โค๏ธ ์‹œ์ž‘์— ์•ž์„œ ๐Ÿค— Transformers์— ์›ํ•˜๋Š” ๋ชจ๋ธ์„ ์ถ”๊ฐ€ํ•˜๊ธฐ ์œ„ํ•ด [New model addition](https://github.com/huggingface/transformers/issues/new?assignees=&labels=New+model&template=new-model-addition.yml) ์ด์Šˆ๋ฅผ ์—ด์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ํŠน์ • ๋ชจ๋ธ์„ ๊ธฐ์—ฌํ•˜๋Š” ๋ฐ ํŠน๋ณ„ํžˆ ๊นŒ๋‹ค๋กœ์šด ๊ธฐ์ค€์„ ๊ฐ€์ง€์ง€ ์•Š๋Š” ๊ฒฝ์šฐ [New model label](https://github.com/huggingface/transformers/labels/New%20model)์„ ํ•„ํ„ฐ๋งํ•˜์—ฌ ์š”์ฒญ๋˜์ง€ ์•Š์€ ๋ชจ๋ธ์ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜๊ณ  ์ž‘์—…ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ƒˆ๋กœ์šด ๋ชจ๋ธ ์š”์ฒญ์„ ์—ด์—ˆ๋‹ค๋ฉด ์ฒซ ๋ฒˆ์งธ ๋‹จ๊ณ„๋Š” ๐Ÿค— Transformers์— ์ต์ˆ™ํ•ด์ง€๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค! ## ๐Ÿค— Transformers์˜ ์ „๋ฐ˜์ ์ธ ๊ฐœ์š” [[general-overview-of-transformers]] ๋จผ์ € ๐Ÿค— Transformers์— ๋Œ€ํ•œ ์ „๋ฐ˜์ ์ธ ๊ฐœ์š”๋ฅผ ํŒŒ์•…ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๐Ÿค— Transformers๋Š” ๋งค์šฐ ์ฃผ๊ด€์ ์ธ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์ด๊ธฐ ๋•Œ๋ฌธ์— ํ•ด๋‹น ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ์ฒ ํ•™์ด๋‚˜ ์„ค๊ณ„ ์„ ํƒ ์‚ฌํ•ญ์— ๋™์˜ํ•˜์ง€ ์•Š์„ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์šฐ๋ฆฌ์˜ ๊ฒฝํ—˜์ƒ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ๊ธฐ๋ณธ์ ์ธ ์„ค๊ณ„ ์„ ํƒ๊ณผ ์ฒ ํ•™์€ ๐Ÿค— Transformers์˜ ๊ทœ๋ชจ๋ฅผ ํšจ์œจ์ ์œผ๋กœ ํ™•์žฅํ•˜๋ฉด์„œ ์œ ์ง€ ๋ณด์ˆ˜ ๋น„์šฉ์„ ํ•ฉ๋ฆฌ์ ์ธ ์ˆ˜์ค€์œผ๋กœ ์œ ์ง€ํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. [๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ์ฒ ํ•™์— ๋Œ€ํ•œ ๋ฌธ์„œ](philosophy)๋ฅผ ์ฝ๋Š” ๊ฒƒ์ด ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ๋” ์ž˜ ์ดํ•ดํ•˜๋Š” ์ข‹์€ ์‹œ์ž‘์ ์ž…๋‹ˆ๋‹ค. ๋ชจ๋“  ๋ชจ๋ธ์— ์ ์šฉํ•˜๋ ค๋Š” ๋ช‡ ๊ฐ€์ง€ ์ž‘์—… ๋ฐฉ์‹์— ๋Œ€ํ•œ ์„ ํƒ ์‚ฌํ•ญ์ด ์žˆ์Šต๋‹ˆ๋‹ค: - ์ผ๋ฐ˜์ ์œผ๋กœ ์ถ”์ƒํ™”๋ณด๋‹ค๋Š” ๊ตฌ์„ฑ์„ ์„ ํ˜ธํ•ฉ๋‹ˆ๋‹ค. - ์ฝ”๋“œ๋ฅผ ๋ณต์ œํ•˜๋Š” ๊ฒƒ์ด ํ•ญ์ƒ ๋‚˜์œ ๊ฒƒ์€ ์•„๋‹™๋‹ˆ๋‹ค. ์ฝ”๋“œ์˜ ๊ฐ€๋…์„ฑ์ด๋‚˜ ์ ‘๊ทผ์„ฑ์„ ํฌ๊ฒŒ ํ–ฅ์ƒ์‹œํ‚จ๋‹ค๋ฉด ๋ณต์ œํ•˜๋Š” ๊ฒƒ์€ ์ข‹์Šต๋‹ˆ๋‹ค. 
- ๋ชจ๋ธ ํŒŒ์ผ์€ ๊ฐ€๋Šฅํ•œ ํ•œ ๋…๋ฆฝ์ ์œผ๋กœ ์œ ์ง€๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ํŠน์ • ๋ชจ๋ธ์˜ ์ฝ”๋“œ๋ฅผ ์ฝ์„ ๋•Œ ํ•ด๋‹น `modeling_....py` ํŒŒ์ผ๋งŒ ํ™•์ธํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ์šฐ๋ฆฌ๋Š” ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ์ฝ”๋“œ๊ฐ€ ์ œํ’ˆ์„ ์ œ๊ณตํ•˜๋Š” ์ˆ˜๋‹จ๋ฟ๋งŒ ์•„๋‹ˆ๋ผ ๊ฐœ์„ ํ•˜๊ณ ์ž ํ•˜๋Š” ์ œํ’ˆ์ด๋ผ๊ณ ๋„ ์ƒ๊ฐํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ๋ชจ๋ธ์„ ์ถ”๊ฐ€ํ•  ๋•Œ, ์‚ฌ์šฉ์ž๋Š” ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์‚ฌ๋žŒ๋ฟ๋งŒ ์•„๋‹ˆ๋ผ ์ฝ”๋“œ๋ฅผ ์ฝ๊ณ  ์ดํ•ดํ•˜๊ณ  ํ•„์š”ํ•œ ๊ฒฝ์šฐ ์กฐ์ •ํ•  ์ˆ˜ ์žˆ๋Š” ๋ชจ๋“  ์‚ฌ๋žŒ๊นŒ์ง€๋„ ํฌํ•จํ•œ๋‹ค๋Š” ์ ์„ ๊ธฐ์–ตํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฅผ ์—ผ๋‘์— ๋‘๊ณ  ์ผ๋ฐ˜์ ์ธ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ ์„ค๊ณ„์— ๋Œ€ํ•ด ์กฐ๊ธˆ ๋” ์ž์„ธํžˆ ์•Œ์•„๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ### ๋ชจ๋ธ ๊ฐœ์š” [[overview-of-models]] ๋ชจ๋ธ์„ ์„ฑ๊ณต์ ์œผ๋กœ ์ถ”๊ฐ€ํ•˜๋ ค๋ฉด ๋ชจ๋ธ๊ณผ ํ•ด๋‹น ๊ตฌ์„ฑ์ธ [`PreTrainedModel`] ๋ฐ [`PretrainedConfig`] ๊ฐ„์˜ ์ƒํ˜ธ์ž‘์šฉ์„ ์ดํ•ดํ•˜๋Š” ๊ฒƒ์ด ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ๐Ÿค— Transformers์— ์ถ”๊ฐ€ํ•˜๋ ค๋Š” ๋ชจ๋ธ์„ `BrandNewBert`๋ผ๊ณ  ๋ถ€๋ฅด๊ฒ ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_overview.png"/> ๋ณด๋‹ค์‹œํ”ผ, ๐Ÿค— Transformers์—์„œ๋Š” ์ƒ์†์„ ์‚ฌ์šฉํ•˜์ง€๋งŒ ์ถ”์ƒํ™” ์ˆ˜์ค€์„ ์ตœ์†Œํ•œ์œผ๋กœ ์œ ์ง€ํ•ฉ๋‹ˆ๋‹ค. ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ์–ด๋–ค ๋ชจ๋ธ์—์„œ๋„ ๋‘ ์ˆ˜์ค€ ์ด์ƒ์˜ ์ถ”์ƒํ™”๊ฐ€ ์กด์žฌํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. `BrandNewBertModel`์€ `BrandNewBertPreTrainedModel`์—์„œ ์ƒ์†๋ฐ›๊ณ , ์ด ํด๋ž˜์Šค๋Š” [`PreTrainedModel`]์—์„œ ์ƒ์†๋ฐ›์Šต๋‹ˆ๋‹ค. ์ด๋กœ์จ ์ƒˆ๋กœ์šด ๋ชจ๋ธ์€ [`PreTrainedModel`]์—๋งŒ ์˜์กดํ•˜๋„๋ก ํ•˜๋ ค๊ณ  ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋“  ์ƒˆ๋กœ์šด ๋ชจ๋ธ์— ์ž๋™์œผ๋กœ ์ œ๊ณต๋˜๋Š” ์ค‘์š”ํ•œ ๊ธฐ๋Šฅ์€ [`~PreTrainedModel.from_pretrained`] ๋ฐ [`~PreTrainedModel.save_pretrained`]์ž…๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๊ธฐ๋Šฅ ์™ธ์—๋„ `BrandNewBertModel.forward`์™€ ๊ฐ™์€ ๋‹ค๋ฅธ ์ค‘์š”ํ•œ ๊ธฐ๋Šฅ์€ ์ƒˆ๋กœ์šด `modeling_brand_new_bert.py` ์Šคํฌ๋ฆฝํŠธ์—์„œ ์™„์ „ํžˆ ์ •์˜๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 
๋˜ํ•œ `BrandNewBertForMaskedLM`๊ณผ ๊ฐ™์€ ํŠน์ • ํ—ค๋“œ ๋ ˆ์ด์–ด๋ฅผ ๊ฐ€์ง„ ๋ชจ๋ธ์€ `BrandNewBertModel`์„ ์ƒ์†๋ฐ›์ง€ ์•Š๊ณ  forward pass์—์„œ ํ˜ธ์ถœํ•  ์ˆ˜ ์žˆ๋Š” `BrandNewBertModel`์„ ์‚ฌ์šฉํ•˜์—ฌ ์ถ”์ƒํ™” ์ˆ˜์ค€์„ ๋‚ฎ๊ฒŒ ์œ ์ง€ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋“  ์ƒˆ๋กœ์šด ๋ชจ๋ธ์€ `BrandNewBertConfig`๋ผ๋Š” ๊ตฌ์„ฑ ํด๋ž˜์Šค๋ฅผ ํ•„์š”๋กœ ํ•ฉ๋‹ˆ๋‹ค. ์ด ๊ตฌ์„ฑ์€ ํ•ญ์ƒ [`PreTrainedModel`]์˜ ์†์„ฑ์œผ๋กœ ์ €์žฅ๋˜๋ฉฐ, ๋”ฐ๋ผ์„œ `BrandNewBertPreTrainedModel`์„ ์ƒ์†๋ฐ›๋Š” ๋ชจ๋“  ํด๋ž˜์Šค์—์„œ `config` ์†์„ฑ์„ ํ†ตํ•ด ์•ก์„ธ์Šคํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```python model = BrandNewBertModel.from_pretrained("brandy/brand_new_bert") model.config # model has access to its config ``` ๋ชจ๋ธ๊ณผ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ ๊ตฌ์„ฑ์€ [`PretrainedConfig`]์—์„œ ๊ธฐ๋ณธ ์ง๋ ฌํ™” ๋ฐ ์—ญ์ง๋ ฌํ™” ๊ธฐ๋Šฅ์„ ์ƒ์†๋ฐ›์Šต๋‹ˆ๋‹ค. ๊ตฌ์„ฑ๊ณผ ๋ชจ๋ธ์€ ํ•ญ์ƒ *pytorch_model.bin* ํŒŒ์ผ๊ณผ *config.json* ํŒŒ์ผ๋กœ ๊ฐ๊ฐ ๋ณ„๋„๋กœ ์ง๋ ฌํ™”๋ฉ๋‹ˆ๋‹ค. [`~PreTrainedModel.save_pretrained`]๋ฅผ ํ˜ธ์ถœํ•˜๋ฉด ์ž๋™์œผ๋กœ [`~PretrainedConfig.save_pretrained`]๋„ ํ˜ธ์ถœ๋˜๋ฏ€๋กœ ๋ชจ๋ธ๊ณผ ๊ตฌ์„ฑ์ด ๋ชจ๋‘ ์ €์žฅ๋ฉ๋‹ˆ๋‹ค. ### ์ฝ”๋“œ ์Šคํƒ€์ผ [[code-style]] ์ƒˆ๋กœ์šด ๋ชจ๋ธ์„ ์ž‘์„ฑํ•  ๋•Œ, Transformers๋Š” ์ฃผ๊ด€์ ์ธ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์ด๋ฉฐ ๋ช‡ ๊ฐ€์ง€ ๋…ํŠนํ•œ ์ฝ”๋”ฉ ์Šคํƒ€์ผ์ด ์žˆ์Šต๋‹ˆ๋‹ค: 1. ๋ชจ๋ธ์˜ forward pass๋Š” ๋ชจ๋ธ ํŒŒ์ผ์— ์™„์ „ํžˆ ์ž‘์„ฑ๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ๋‹ค๋ฅธ ๋ชจ๋ธ์—์„œ ๋ธ”๋ก์„ ์žฌ์‚ฌ์šฉํ•˜๋ ค๋ฉด ์ฝ”๋“œ๋ฅผ ๋ณต์‚ฌํ•˜์—ฌ ์œ„์— `# Copied from` ์ฃผ์„๊ณผ ํ•จ๊ป˜ ๋ถ™์—ฌ๋„ฃ์œผ๋ฉด ๋ฉ๋‹ˆ๋‹ค (์˜ˆ: [์—ฌ๊ธฐ](https://github.com/huggingface/transformers/blob/v4.17.0/src/transformers/models/roberta/modeling_roberta.py#L160)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”). 2. ์ฝ”๋“œ๋Š” ์™„์ „ํžˆ ์ดํ•ดํ•˜๊ธฐ ์‰ฌ์›Œ์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ณ€์ˆ˜ ์ด๋ฆ„์„ ๋ช…ํ™•ํ•˜๊ฒŒ ์ง€์ •ํ•˜๊ณ  ์•ฝ์–ด๋ฅผ ์‚ฌ์šฉํ•˜์ง€ ์•Š๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, `act`๋ณด๋‹ค๋Š” `activation`์„ ์„ ํ˜ธํ•ฉ๋‹ˆ๋‹ค. ํ•œ ๊ธ€์ž ๋ณ€์ˆ˜ ์ด๋ฆ„์€ ๋ฃจํ”„์˜ ์ธ๋ฑ์Šค์ธ ๊ฒฝ์šฐ๋ฅผ ์ œ์™ธํ•˜๊ณ  ๊ถŒ์žฅ๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. 3. 
๋” ์ผ๋ฐ˜์ ์œผ๋กœ, ์งง์€ ๋งˆ๋ฒ• ๊ฐ™์€ ์ฝ”๋“œ๋ณด๋‹ค๋Š” ๊ธธ๊ณ  ๋ช…์‹œ์ ์ธ ์ฝ”๋“œ๋ฅผ ์„ ํ˜ธํ•ฉ๋‹ˆ๋‹ค. 4. PyTorch์—์„œ `nn.Sequential`์„ ํ•˜์œ„ ํด๋ž˜์Šค๋กœ ๋งŒ๋“ค์ง€ ๋ง๊ณ  `nn.Module`์„ ํ•˜์œ„ ํด๋ž˜์Šค๋กœ ๋งŒ๋“ค๊ณ  forward pass๋ฅผ ์ž‘์„ฑํ•˜์—ฌ ๋‹ค๋ฅธ ์‚ฌ๋žŒ์ด ์ฝ”๋“œ๋ฅผ ๋น ๋ฅด๊ฒŒ ๋””๋ฒ„๊ทธํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. print ๋ฌธ์ด๋‚˜ ์ค‘๋‹จ์ ์„ ์ถ”๊ฐ€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 5. ํ•จ์ˆ˜ ์‹œ๊ทธ๋‹ˆ์ฒ˜์—๋Š” ํƒ€์ž… ์ฃผ์„์„ ์‚ฌ์šฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ ์™ธ์—๋Š” ํƒ€์ž… ์ฃผ์„๋ณด๋‹ค ๋ณ€์ˆ˜ ์ด๋ฆ„์ด ํ›จ์”ฌ ์ฝ๊ธฐ ์‰ฝ๊ณ  ์ดํ•ดํ•˜๊ธฐ ์‰ฝ์Šต๋‹ˆ๋‹ค. ### ํ† ํฌ๋‚˜์ด์ € ๊ฐœ์š” [[overview-of-tokenizers]] ์•„์ง ์ค€๋น„๋˜์ง€ ์•Š์•˜์Šต๋‹ˆ๋‹ค :-( ์ด ์„น์…˜์€ ๊ณง ์ถ”๊ฐ€๋  ์˜ˆ์ •์ž…๋‹ˆ๋‹ค! ## ๐Ÿค— Transformers์— ๋ชจ๋ธ ์ถ”๊ฐ€ํ•˜๋Š” ๋‹จ๊ณ„๋ณ„ ๋ฐฉ๋ฒ• [[stepbystep-recipe-to-add-a-model-to-transformers]] ๊ฐ์ž ๋ชจ๋ธ์„ ์ด์‹ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ์„ ํ˜ธ๊ฐ€ ๋‹ค๋ฅด๊ธฐ ๋•Œ๋ฌธ์— ๋‹ค๋ฅธ ๊ธฐ์—ฌ์ž๋“ค์ด Hugging Face์— ๋ชจ๋ธ์„ ์ด์‹ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ์š”์•ฝ์„ ์‚ดํŽด๋ณด๋Š” ๊ฒƒ์ด ๋งค์šฐ ์œ ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ์€ ๋ชจ๋ธ์„ ์ด์‹ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ์ปค๋ฎค๋‹ˆํ‹ฐ ๋ธ”๋กœ๊ทธ ๊ฒŒ์‹œ๋ฌผ ๋ชฉ๋ก์ž…๋‹ˆ๋‹ค: 1. [GPT2 ๋ชจ๋ธ ์ด์‹ํ•˜๊ธฐ](https://medium.com/huggingface/from-tensorflow-to-pytorch-265f40ef2a28) - [Thomas](https://huggingface.co/thomwolf) 2. [WMT19 MT ๋ชจ๋ธ ์ด์‹ํ•˜๊ธฐ](https://huggingface.co/blog/porting-fsmt) - [Stas](https://huggingface.co/stas) ๊ฒฝํ—˜์ƒ ๋ชจ๋ธ์„ ์ถ”๊ฐ€ํ•  ๋•Œ ์ฃผ์˜ํ•ด์•ผ ํ•  ๊ฐ€์žฅ ์ค‘์š”ํ•œ ์‚ฌํ•ญ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: - ๊ฐ™์€ ์ผ์„ ๋ฐ˜๋ณตํ•˜์ง€ ๋งˆ์„ธ์š”! ์ƒˆ๋กœ์šด ๐Ÿค— Transformers ๋ชจ๋ธ์„ ์œ„ํ•ด ์ถ”๊ฐ€ํ•  ์ฝ”๋“œ์˜ ๋Œ€๋ถ€๋ถ„์€ ์ด๋ฏธ ๐Ÿค— Transformers ์–ด๋”˜๊ฐ€์— ์กด์žฌํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฏธ ์กด์žฌํ•˜๋Š” ๋ณต์‚ฌํ•  ์ˆ˜ ์žˆ๋Š” ์œ ์‚ฌํ•œ ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์ฐพ๋Š”๋ฐ ์‹œ๊ฐ„์„ ํˆฌ์žํ•˜์„ธ์š”. [grep](https://www.gnu.org/software/grep/)์™€ [rg](https://github.com/BurntSushi/ripgrep)๋ฅผ ์ฐธ๊ณ ํ•˜์„ธ์š”. 
๋ชจ๋ธ์˜ ํ† ํฌ๋‚˜์ด์ €๊ฐ€ ํ•œ ๋ชจ๋ธ์„ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•˜๊ณ  ๋ชจ๋ธ๋ง ์ฝ”๋“œ๊ฐ€ ๋‹ค๋ฅธ ๋ชจ๋ธ์„ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ์กด์žฌํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด FSMT์˜ ๋ชจ๋ธ๋ง ์ฝ”๋“œ๋Š” BART๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•˜๊ณ  FSMT์˜ ํ† ํฌ๋‚˜์ด์ € ์ฝ”๋“œ๋Š” XLM์„ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•ฉ๋‹ˆ๋‹ค. - ์ด๊ฒƒ์€ ๊ณผํ•™์ ์ธ ๋„์ „๋ณด๋‹ค๋Š” ๊ณตํ•™์ ์ธ ๋„์ „์ž…๋‹ˆ๋‹ค. ๋…ผ๋ฌธ์˜ ๋ชจ๋ธ์˜ ๋ชจ๋“  ์ด๋ก ์  ์ธก๋ฉด์„ ์ดํ•ดํ•˜๋ ค๋Š” ๊ฒƒ๋ณด๋‹ค ํšจ์œจ์ ์ธ ๋””๋ฒ„๊น… ํ™˜๊ฒฝ์„ ๋งŒ๋“œ๋Š” ๋ฐ ๋” ๋งŽ์€ ์‹œ๊ฐ„์„ ์†Œ๋น„ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. - ๋ง‰ํž ๋•Œ ๋„์›€์„ ์š”์ฒญํ•˜์„ธ์š”! ๋ชจ๋ธ์€ ๐Ÿค— Transformers์˜ ํ•ต์‹ฌ ๊ตฌ์„ฑ ์š”์†Œ์ด๋ฏ€๋กœ Hugging Face์˜ ์šฐ๋ฆฌ๋Š” ๋‹น์‹ ์ด ๋ชจ๋ธ์„ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฐ ๋‹จ๊ณ„์—์„œ ๊ธฐ๊บผ์ด ๋„์›€์„ ์ค„ ์ค€๋น„๊ฐ€ ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ์ง„์ „์ด ์—†๋‹ค๊ณ  ๋А๋ผ๋ฉด ์ฃผ์ €ํ•˜์ง€ ๋ง๊ณ  ๋„์›€์„ ์š”์ฒญํ•˜์„ธ์š”. ๋‹ค์Œ์—์„œ๋Š” ๋ชจ๋ธ์„ ๐Ÿค— Transformers๋กœ ์ด์‹ํ•˜๋Š” ๋ฐ ๊ฐ€์žฅ ์œ ์šฉํ•œ ์ผ๋ฐ˜์ ์ธ ์ ˆ์ฐจ๋ฅผ ์ œ๊ณตํ•˜๋ ค๊ณ  ๋…ธ๋ ฅํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ ๋ชฉ๋ก์€ ๋ชจ๋ธ์„ ์ถ”๊ฐ€ํ•˜๋Š” ๋ฐ ์ˆ˜ํ–‰ํ•ด์•ผ ํ•  ๋ชจ๋“  ์ž‘์—…์˜ ์š”์•ฝ์ด๋ฉฐ To-Do ๋ชฉ๋ก์œผ๋กœ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: โ˜ (์„ ํƒ ์‚ฌํ•ญ) BrandNewBert์˜ ์ด๋ก ์  ์ธก๋ฉด ์ดํ•ด<br> โ˜ Hugging Face ๊ฐœ๋ฐœ ํ™˜๊ฒฝ ์ค€๋น„<br> โ˜ ์›๋ณธ ๋ฆฌํฌ์ง€ํ† ๋ฆฌ์˜ ๋””๋ฒ„๊น… ํ™˜๊ฒฝ ์„ค์ •<br> โ˜ ์›๋ณธ ๋ฆฌํฌ์ง€ํ† ๋ฆฌ์™€ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ `forward()` pass๊ฐ€ ์„ฑ๊ณต์ ์œผ๋กœ ์‹คํ–‰๋˜๋Š” ์Šคํฌ๋ฆฝํŠธ ์ž‘์„ฑ<br> โ˜ ๐Ÿค— Transformers์— ๋ชจ๋ธ ์Šค์ผˆ๋ ˆํ†ค ์„ฑ๊ณต์ ์œผ๋กœ ์ถ”๊ฐ€<br> โ˜ ์›๋ณธ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๐Ÿค— Transformers ์ฒดํฌํฌ์ธํŠธ๋กœ ์„ฑ๊ณต์ ์œผ๋กœ ๋ณ€ํ™˜<br> โ˜ ๐Ÿค— Transformers์—์„œ ์›๋ณธ ์ฒดํฌํฌ์ธํŠธ์™€ ๋™์ผํ•œ ์ถœ๋ ฅ์„ ๋‚ด์ฃผ๋Š” `forward()` pass ์„ฑ๊ณต์ ์œผ๋กœ ์‹คํ–‰<br> โ˜ ๐Ÿค— Transformers์—์„œ ๋ชจ๋ธ ํ…Œ์ŠคํŠธ ์™„๋ฃŒ<br> โ˜ ๐Ÿค— Transformers์— ํ† ํฌ๋‚˜์ด์ € ์„ฑ๊ณต์ ์œผ๋กœ ์ถ”๊ฐ€<br> โ˜ ์ข…๋‹จ ๊ฐ„ ํ†ตํ•ฉ ํ…Œ์ŠคํŠธ ์‹คํ–‰<br> โ˜ ๋ฌธ์„œ ์ž‘์„ฑ ์™„๋ฃŒ<br> โ˜ ๋ชจ๋ธ ๊ฐ€์ค‘์น˜๋ฅผ ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œ<br> โ˜ Pull request ์ œ์ถœ<br> โ˜ (์„ ํƒ ์‚ฌํ•ญ) 
๋ฐ๋ชจ ๋…ธํŠธ๋ถ ์ถ”๊ฐ€ ์šฐ์„ , ์ผ๋ฐ˜์ ์œผ๋กœ๋Š” `BrandNewBert`์˜ ์ด๋ก ์ ์ธ ์ดํ•ด๋กœ ์‹œ์ž‘ํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ด๋ก ์  ์ธก๋ฉด์„ ์ง์ ‘ ์ดํ•ดํ•˜๋Š” ๋Œ€์‹  *์ง์ ‘ ํ•ด๋ณด๋ฉด์„œ* ๋ชจ๋ธ์˜ ์ด๋ก ์  ์ธก๋ฉด์„ ์ดํ•ดํ•˜๋Š” ๊ฒƒ์„ ์„ ํ˜ธํ•˜๋Š” ๊ฒฝ์šฐ ๋ฐ”๋กœ `BrandNewBert` ์ฝ”๋“œ ๋ฒ ์ด์Šค๋กœ ๋น ์ ธ๋“œ๋Š” ๊ฒƒ๋„ ๊ดœ์ฐฎ์Šต๋‹ˆ๋‹ค. ์ด ์˜ต์…˜์€ ์—”์ง€๋‹ˆ์–ด๋ง ๊ธฐ์ˆ ์ด ์ด๋ก ์  ๊ธฐ์ˆ ๋ณด๋‹ค ๋” ๋›ฐ์–ด๋‚œ ๊ฒฝ์šฐ, `BrandNewBert`์˜ ๋…ผ๋ฌธ์„ ์ดํ•ดํ•˜๋Š” ๋ฐ ์–ด๋ ค์›€์ด ์žˆ๋Š” ๊ฒฝ์šฐ, ๋˜๋Š” ๊ณผํ•™์ ์ธ ๋…ผ๋ฌธ์„ ์ฝ๋Š” ๊ฒƒ๋ณด๋‹ค ํ”„๋กœ๊ทธ๋ž˜๋ฐ์— ํ›จ์”ฌ ๋” ํฅ๋ฏธ ์žˆ๋Š” ๊ฒฝ์šฐ์— ๋” ์ ํ•ฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ### 1. (์„ ํƒ ์‚ฌํ•ญ) BrandNewBert์˜ ์ด๋ก ์  ์ธก๋ฉด [[1-optional-theoretical-aspects-of-brandnewbert]] ๋งŒ์•ฝ ๊ทธ๋Ÿฐ ์„œ์ˆ ์ ์ธ ์ž‘์—…์ด ์กด์žฌํ•œ๋‹ค๋ฉด, *BrandNewBert*์˜ ๋…ผ๋ฌธ์„ ์ฝ์–ด๋ณด๋Š” ์‹œ๊ฐ„์„ ๊ฐ€์ ธ์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ดํ•ดํ•˜๊ธฐ ์–ด๋ ค์šด ์„น์…˜์ด ๋งŽ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋ ‡๋”๋ผ๋„ ๊ฑฑ์ •ํ•˜์ง€ ๋งˆ์„ธ์š”! ๋ชฉํ‘œ๋Š” ๋…ผ๋ฌธ์˜ ๊นŠ์€ ์ด๋ก ์  ์ดํ•ด๊ฐ€ ์•„๋‹ˆ๋ผ *BrandNewBert*๋ฅผ ๐Ÿค— Transformers์—์„œ ํšจ๊ณผ์ ์œผ๋กœ ์žฌ๊ตฌํ˜„ํ•˜๊ธฐ ์œ„ํ•ด ํ•„์š”ํ•œ ์ •๋ณด๋ฅผ ์ถ”์ถœํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด๋ฅผ ์œ„ํ•ด ์ด๋ก ์  ์ธก๋ฉด์— ๋„ˆ๋ฌด ๋งŽ์€ ์‹œ๊ฐ„์„ ํˆฌ์žํ•  ํ•„์š”๋Š” ์—†์ง€๋งŒ ๋‹ค์Œ๊ณผ ๊ฐ™์€ ์‹ค์ œ์ ์ธ ์ธก๋ฉด์— ์ง‘์ค‘ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: - *BrandNewBert*๋Š” ์–ด๋–ค ์œ ํ˜•์˜ ๋ชจ๋ธ์ธ๊ฐ€์š”? BERT์™€ ์œ ์‚ฌํ•œ ์ธ์ฝ”๋” ๋ชจ๋ธ์ธ๊ฐ€์š”? GPT2์™€ ์œ ์‚ฌํ•œ ๋””์ฝ”๋” ๋ชจ๋ธ์ธ๊ฐ€์š”? BART์™€ ์œ ์‚ฌํ•œ ์ธ์ฝ”๋”-๋””์ฝ”๋” ๋ชจ๋ธ์ธ๊ฐ€์š”? ์ด๋“ค ๊ฐ„์˜ ์ฐจ์ด์ ์— ์ต์ˆ™ํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ[model_summary](model_summary)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. - *BrandNewBert*์˜ ์‘์šฉ ๋ถ„์•ผ๋Š” ๋ฌด์—‡์ธ๊ฐ€์š”? ํ…์ŠคํŠธ ๋ถ„๋ฅ˜์ธ๊ฐ€์š”? ํ…์ŠคํŠธ ์ƒ์„ฑ์ธ๊ฐ€์š”? ์š”์•ฝ๊ณผ ๊ฐ™์€ Seq2Seq ์ž‘์—…์ธ๊ฐ€์š”? - *brand_new_bert*์™€ BERT/GPT-2/BART์˜ ์ฐจ์ด์ ์€ ๋ฌด์—‡์ธ๊ฐ€์š”? - *brand_new_bert*์™€ ๊ฐ€์žฅ ์œ ์‚ฌํ•œ [๐Ÿค— Transformers ๋ชจ๋ธ](https://huggingface.co/transformers/#contents)์€ ๋ฌด์—‡์ธ๊ฐ€์š”? 
- ์–ด๋–ค ์ข…๋ฅ˜์˜ ํ† ํฌ๋‚˜์ด์ €๊ฐ€ ์‚ฌ์šฉ๋˜๋‚˜์š”? Sentencepiece ํ† ํฌ๋‚˜์ด์ €์ธ๊ฐ€์š”? Word piece ํ† ํฌ๋‚˜์ด์ €์ธ๊ฐ€์š”? BERT ๋˜๋Š” BART์— ์‚ฌ์šฉ๋˜๋Š” ๋™์ผํ•œ ํ† ํฌ๋‚˜์ด์ €์ธ๊ฐ€์š”? ๋ชจ๋ธ์˜ ์•„ํ‚คํ…์ฒ˜์— ๋Œ€ํ•ด ์ถฉ๋ถ„ํžˆ ์ดํ•ดํ–ˆ๋‹ค๋Š” ์ƒ๊ฐ์ด ๋“  ํ›„, ๊ถ๊ธˆํ•œ ์‚ฌํ•ญ์ด ์žˆ์œผ๋ฉด Hugging Face ํŒ€์— ๋ฌธ์˜ํ•˜์‹ญ์‹œ์˜ค. ์ด๋Š” ๋ชจ๋ธ์˜ ์•„ํ‚คํ…์ฒ˜, ์–ดํ…์…˜ ๋ ˆ์ด์–ด ๋“ฑ์— ๊ด€ํ•œ ์งˆ๋ฌธ์„ ํฌํ•จํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. Hugging Face์˜ ์œ ์ง€ ๊ด€๋ฆฌ์ž๋“ค์€ ๋ณดํ†ต ์ฝ”๋“œ๋ฅผ ๊ฒ€ํ† ํ•˜๋Š” ๊ฒƒ์— ๋Œ€ํ•ด ๋งค์šฐ ๊ธฐ๋ปํ•˜๋ฏ€๋กœ ๋‹น์‹ ์„ ๋•๋Š” ์ผ์„ ๋งค์šฐ ํ™˜์˜ํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค! ### 2. ๊ฐœ๋ฐœ ํ™˜๊ฒฝ ์„ค์ • [[2-next-prepare-your-environment]] 1. ์ €์žฅ์†Œ ํŽ˜์ด์ง€์—์„œ "Fork" ๋ฒ„ํŠผ์„ ํด๋ฆญํ•˜์—ฌ ์ €์žฅ์†Œ์˜ ์‚ฌ๋ณธ์„ GitHub ์‚ฌ์šฉ์ž ๊ณ„์ •์œผ๋กœ ๋งŒ๋“ญ๋‹ˆ๋‹ค. 2. `transformers` fork๋ฅผ ๋กœ์ปฌ ๋””์Šคํฌ์— ํด๋ก ํ•˜๊ณ  ๋ฒ ์ด์Šค ์ €์žฅ์†Œ๋ฅผ ์›๊ฒฉ ์ €์žฅ์†Œ๋กœ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค: ```bash git clone https://github.com/[your Github handle]/transformers.git cd transformers git remote add upstream https://github.com/huggingface/transformers.git ``` 3. ๊ฐœ๋ฐœ ํ™˜๊ฒฝ์„ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ ๋ช…๋ น์„ ์‹คํ–‰ํ•˜์—ฌ ๊ฐœ๋ฐœ ํ™˜๊ฒฝ์„ ์„ค์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash python -m venv .env source .env/bin/activate pip install -e ".[dev]" ``` ๊ฐ ์šด์˜ ์ฒด์ œ์— ๋”ฐ๋ผ Transformers์˜ ์„ ํƒ์  ์˜์กด์„ฑ์ด ๊ฐœ์ˆ˜๊ฐ€ ์ฆ๊ฐ€ํ•˜๋ฉด ์ด ๋ช…๋ น์ด ์‹คํŒจํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฐ ๊ฒฝ์šฐ์—๋Š” ์ž‘์—… ์ค‘์ธ ๋”ฅ ๋Ÿฌ๋‹ ํ”„๋ ˆ์ž„์›Œํฌ (PyTorch, TensorFlow ๋ฐ/๋˜๋Š” Flax)์„ ์„ค์น˜ํ•œ ํ›„, ๋‹ค์Œ ๋ช…๋ น์„ ์ˆ˜ํ–‰ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค: ```bash pip install -e ".[quality]" ``` ๋Œ€๋ถ€๋ถ„์˜ ๊ฒฝ์šฐ์—๋Š” ์ด๊ฒƒ์œผ๋กœ ์ถฉ๋ถ„ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฐ ๋‹ค์Œ ์ƒ์œ„ ๋””๋ ‰ํ† ๋ฆฌ๋กœ ๋Œ์•„๊ฐ‘๋‹ˆ๋‹ค. ```bash cd .. ``` 4. Transformers์— *brand_new_bert*์˜ PyTorch ๋ฒ„์ „์„ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. PyTorch๋ฅผ ์„ค์น˜ํ•˜๋ ค๋ฉด ๋‹ค์Œ ๋งํฌ์˜ ์ง€์นจ์„ ๋”ฐ๋ฅด์‹ญ์‹œ์˜ค: https://pytorch.org/get-started/locally/. 
**์ฐธ๊ณ :** CUDA๋ฅผ ์„ค์น˜ํ•  ํ•„์š”๋Š” ์—†์Šต๋‹ˆ๋‹ค. ์ƒˆ๋กœ์šด ๋ชจ๋ธ์ด CPU์—์„œ ์ž‘๋™ํ•˜๋„๋ก ๋งŒ๋“œ๋Š” ๊ฒƒ์œผ๋กœ ์ถฉ๋ถ„ํ•ฉ๋‹ˆ๋‹ค. 5. *brand_new_bert*๋ฅผ ์ด์‹ํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” ํ•ด๋‹น ์›๋ณธ ์ €์žฅ์†Œ์— ์ ‘๊ทผํ•  ์ˆ˜ ์žˆ์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```bash git clone https://github.com/org_that_created_brand_new_bert_org/brand_new_bert.git cd brand_new_bert pip install -e . ``` ์ด์ œ *brand_new_bert*๋ฅผ ๐Ÿค— Transformers๋กœ ์ด์‹ํ•˜๊ธฐ ์œ„ํ•œ ๊ฐœ๋ฐœ ํ™˜๊ฒฝ์„ ์„ค์ •ํ•˜์˜€์Šต๋‹ˆ๋‹ค. ### 3.-4. ์›๋ณธ ์ €์žฅ์†Œ์—์„œ ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ์ฒดํฌํฌ์ธํŠธ ์‹คํ–‰ํ•˜๊ธฐ [[3.-4.-run-a-pretrained-checkpoint-using-the-original-repository]] ๋จผ์ €, ์›๋ณธ *brand_new_bert* ์ €์žฅ์†Œ์—์„œ ์ž‘์—…์„ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค. ์›๋ณธ ๊ตฌํ˜„์€ ๋ณดํ†ต "์—ฐ๊ตฌ์šฉ"์œผ๋กœ ๋งŽ์ด ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ์ฆ‰, ๋ฌธ์„œํ™”๊ฐ€ ๋ถ€์กฑํ•˜๊ณ  ์ฝ”๋“œ๊ฐ€ ์ดํ•ดํ•˜๊ธฐ ์–ด๋ ค์šธ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ด๊ฒƒ์ด ๋ฐ”๋กœ *brand_new_bert*๋ฅผ ๋‹ค์‹œ ๊ตฌํ˜„ํ•˜๋ ค๋Š” ๋™๊ธฐ๊ฐ€ ๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. Hugging Face์—์„œ์˜ ์ฃผ์š” ๋ชฉํ‘œ ์ค‘ ํ•˜๋‚˜๋Š” **๊ฑฐ์ธ์˜ ์–ด๊นจ ์œ„์— ์„œ๋Š” ๊ฒƒ**์ด๋ฉฐ, ์ด๋Š” ์—ฌ๊ธฐ์—์„œ ์‰ฝ๊ฒŒ ํ•ด์„๋˜์–ด ๋™์ž‘ํ•˜๋Š” ๋ชจ๋ธ์„ ๊ฐ€์ ธ์™€์„œ ๊ฐ€๋Šฅํ•œ ํ•œ **์ ‘๊ทผ ๊ฐ€๋Šฅํ•˜๊ณ  ์‚ฌ์šฉ์ž ์นœํ™”์ ์ด๋ฉฐ ์•„๋ฆ„๋‹ต๊ฒŒ** ๋งŒ๋“œ๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด๊ฒƒ์€ ๐Ÿค— Transformers์—์„œ ๋ชจ๋ธ์„ ๋‹ค์‹œ ๊ตฌํ˜„ํ•˜๋Š” ๊ฐ€์žฅ ์ค‘์š”ํ•œ ๋™๊ธฐ์ž…๋‹ˆ๋‹ค - ์ƒˆ๋กœ์šด ๋ณต์žกํ•œ NLP ๊ธฐ์ˆ ์„ **๋ชจ๋‘์—๊ฒŒ** ์ ‘๊ทผ ๊ฐ€๋Šฅํ•˜๊ฒŒ ๋งŒ๋“œ๋Š” ๊ฒƒ์„ ๋ชฉํ‘œ๋กœ ํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์›๋ณธ ์ €์žฅ์†Œ์— ๋Œ€ํ•ด ์ž์„ธํžˆ ์‚ดํŽด๋ณด๋Š” ๊ฒƒ์œผ๋กœ ์‹œ์ž‘ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์›๋ณธ ์ €์žฅ์†Œ์—์„œ ๊ณต์‹ ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ์„ฑ๊ณต์ ์œผ๋กœ ์‹คํ–‰ํ•˜๋Š” ๊ฒƒ์€ ์ข…์ข… **๊ฐ€์žฅ ์–ด๋ ค์šด** ๋‹จ๊ณ„์ž…๋‹ˆ๋‹ค. ์šฐ๋ฆฌ์˜ ๊ฒฝํ—˜์— ๋”ฐ๋ฅด๋ฉด, ์›๋ณธ ์ฝ”๋“œ ๋ฒ ์ด์Šค์— ์ต์ˆ™ํ•ด์ง€๋Š” ๋ฐ ์‹œ๊ฐ„์„ ํˆฌ์žํ•˜๋Š” ๊ฒƒ์ด ๋งค์šฐ ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ์„ ํŒŒ์•…ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: - ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๊ฐ€์ค‘์น˜๋ฅผ ์–ด๋””์„œ ์ฐพ์„ ์ˆ˜ ์žˆ๋Š”์ง€? - ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๊ฐ€์ค‘์น˜๋ฅผ ํ•ด๋‹น ๋ชจ๋ธ์—๋กœ๋“œํ•˜๋Š” ๋ฐฉ๋ฒ•์€? 
- ๋ชจ๋ธ๊ณผ ๋…๋ฆฝ์ ์œผ๋กœ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์‹คํ–‰ํ•˜๋Š” ๋ฐฉ๋ฒ•์€? - ๊ฐ„๋‹จํ•œ forward pass์— ํ•„์š”ํ•œ ํด๋ž˜์Šค์™€ ํ•จ์ˆ˜๋ฅผ ํŒŒ์•…ํ•˜๊ธฐ ์œ„ํ•ด forward pass๋ฅผ ํ•œ ๋ฒˆ ์ถ”์ ํ•ด ๋ณด์„ธ์š”. ์ผ๋ฐ˜์ ์œผ๋กœ ํ•ด๋‹น ํ•จ์ˆ˜๋“ค๋งŒ ๋‹ค์‹œ ๊ตฌํ˜„ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. - ๋ชจ๋ธ์˜ ์ค‘์š”ํ•œ ๊ตฌ์„ฑ ์š”์†Œ๋ฅผ ์ฐพ์„ ์ˆ˜ ์žˆ์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ ํด๋ž˜์Šค๋Š” ์–ด๋””์— ์žˆ๋‚˜์š”? ๋ชจ๋ธ ํ•˜์œ„ ํด๋ž˜์Šค(*EncoderModel*, *DecoderModel* ๋“ฑ)๊ฐ€ ์žˆ๋‚˜์š”? self-attention ๋ ˆ์ด์–ด๋Š” ์–ด๋””์— ์žˆ๋‚˜์š”? self-attention, cross-attention ๋“ฑ ์—ฌ๋Ÿฌ ๊ฐ€์ง€ ๋‹ค๋ฅธ ์–ดํ…์…˜ ๋ ˆ์ด์–ด๊ฐ€ ์žˆ๋‚˜์š”? - ์›๋ณธ ํ™˜๊ฒฝ์—์„œ ๋ชจ๋ธ์„ ๋””๋ฒ„๊ทธํ•  ์ˆ˜ ์žˆ๋Š” ๋ฐฉ๋ฒ•์€ ๋ฌด์—‡์ธ๊ฐ€์š”? *print* ๋ฌธ์„ ์ถ”๊ฐ€ํ•ด์•ผ ํ•˜๋‚˜์š”? *ipdb*์™€ ๊ฐ™์€ ๋Œ€ํ™”์‹ ๋””๋ฒ„๊ฑฐ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋‚˜์š”? PyCharm๊ณผ ๊ฐ™์€ ํšจ์œจ์ ์ธ IDE๋ฅผ ์‚ฌ์šฉํ•ด ๋ชจ๋ธ์„ ๋””๋ฒ„๊ทธํ•  ์ˆ˜ ์žˆ๋‚˜์š”? ์›๋ณธ ์ €์žฅ์†Œ์—์„œ ์ฝ”๋“œ๋ฅผ ์ด์‹ํ•˜๋Š” ์ž‘์—…์„ ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ์›๋ณธ ์ €์žฅ์†Œ์—์„œ ์ฝ”๋“œ๋ฅผ **ํšจ์œจ์ ์œผ๋กœ** ๋””๋ฒ„๊ทธํ•  ์ˆ˜ ์žˆ์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค! ๋˜ํ•œ, ์˜คํ”ˆ ์†Œ์Šค ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋กœ ์ž‘์—…ํ•˜๊ณ  ์žˆ๋‹ค๋Š” ๊ฒƒ์„ ๊ธฐ์–ตํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์›๋ณธ ์ €์žฅ์†Œ์—์„œ issue๋ฅผ ์—ด๊ฑฐ๋‚˜ pull request๋ฅผ ์—ด๊ธฐ๋ฅผ ์ฃผ์ €ํ•˜์ง€ ๋งˆ์‹ญ์‹œ์˜ค. ์ด ์ €์žฅ์†Œ์˜ ์œ ์ง€ ๊ด€๋ฆฌ์ž๋“ค์€ ๋ˆ„๊ตฐ๊ฐ€๊ฐ€ ์ž์‹ ๋“ค์˜ ์ฝ”๋“œ๋ฅผ ์‚ดํŽด๋ณธ๋‹ค๋Š” ๊ฒƒ์— ๋Œ€ํ•ด ๋งค์šฐ ๊ธฐ๋ปํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค! ํ˜„์žฌ ์‹œ์ ์—์„œ, ์›๋ž˜ ๋ชจ๋ธ์„ ๋””๋ฒ„๊น…ํ•˜๊ธฐ ์œ„ํ•ด ์–ด๋–ค ๋””๋ฒ„๊น… ํ™˜๊ฒฝ๊ณผ ์ „๋žต์„ ์„ ํ˜ธํ•˜๋Š”์ง€๋Š” ๋‹น์‹ ์—๊ฒŒ ๋‹ฌ๋ ธ์Šต๋‹ˆ๋‹ค. ์šฐ๋ฆฌ๋Š” ๊ณ ๊ฐ€์˜ GPU ํ™˜๊ฒฝ์„ ๊ตฌ์ถ•ํ•˜๋Š” ๊ฒƒ์€ ๋น„์ถ”์ฒœํ•ฉ๋‹ˆ๋‹ค. ๋Œ€์‹ , ์›๋ž˜ ์ €์žฅ์†Œ๋กœ ๋“ค์–ด๊ฐ€์„œ ์ž‘์—…์„ ์‹œ์ž‘ํ•  ๋•Œ์™€ ๐Ÿค— Transformers ๋ชจ๋ธ์˜ ๊ตฌํ˜„์„ ์‹œ์ž‘ํ•  ๋•Œ์—๋„ CPU์—์„œ ์ž‘์—…ํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ๋ชจ๋ธ์ด ์ด๋ฏธ ๐Ÿค— Transformers๋กœ ์„ฑ๊ณต์ ์œผ๋กœ ์ด์‹๋˜์—ˆ์„ ๋•Œ์—๋งŒ ๋ชจ๋ธ์ด GPU์—์„œ๋„ ์˜ˆ์ƒ๋Œ€๋กœ ์ž‘๋™ํ•˜๋Š”์ง€ ํ™•์ธํ•ด์•ผํ•ฉ๋‹ˆ๋‹ค. 
์ผ๋ฐ˜์ ์œผ๋กœ, ์›๋ž˜ ๋ชจ๋ธ์„ ์‹คํ–‰ํ•˜๊ธฐ ์œ„ํ•œ ๋‘ ๊ฐ€์ง€ ๊ฐ€๋Šฅํ•œ ๋””๋ฒ„๊น… ํ™˜๊ฒฝ์ด ์žˆ์Šต๋‹ˆ๋‹ค. - [Jupyter ๋…ธํŠธ๋ถ](https://jupyter.org/) / [Google Colab](https://colab.research.google.com/notebooks/intro.ipynb) - ๋กœ์ปฌ Python ์Šคํฌ๋ฆฝํŠธ Jupyter ๋…ธํŠธ๋ถ์˜ ์žฅ์ ์€ ์…€ ๋‹จ์œ„๋กœ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด๋Š” ๋…ผ๋ฆฌ์ ์ธ ๊ตฌ์„ฑ ์š”์†Œ๋ฅผ ๋” ์ž˜ ๋ถ„๋ฆฌํ•˜๊ณ  ์ค‘๊ฐ„ ๊ฒฐ๊ณผ๋ฅผ ์ €์žฅํ•  ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ ๋””๋ฒ„๊น… ์‚ฌ์ดํด์ด ๋” ๋นจ๋ผ์งˆ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ, ๋…ธํŠธ๋ถ์€ ๋‹ค๋ฅธ ๊ธฐ์—ฌ์ž์™€ ์‰ฝ๊ฒŒ ๊ณต์œ ํ•  ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ Hugging Face ํŒ€์˜ ๋„์›€์„ ์š”์ฒญํ•˜๋ ค๋Š” ๊ฒฝ์šฐ ๋งค์šฐ ์œ ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. Jupyter ๋…ธํŠธ๋ถ์— ์ต์ˆ™ํ•˜๋‹ค๋ฉด ์ด๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์„ ๊ฐ•๋ ฅํžˆ ์ถ”์ฒœํ•ฉ๋‹ˆ๋‹ค. Jupyter ๋…ธํŠธ๋ถ์˜ ๋‹จ์ ์€ ์‚ฌ์šฉ์— ์ต์ˆ™ํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ ์ƒˆ๋กœ์šด ํ”„๋กœ๊ทธ๋ž˜๋ฐ ํ™˜๊ฒฝ์— ์ ์‘ํ•˜๋Š” ๋ฐ ์‹œ๊ฐ„์„ ํ• ์• ํ•ด์•ผ ํ•˜๋ฉฐ, `ipdb`์™€ ๊ฐ™์€ ์•Œ๋ ค์ง„ ๋””๋ฒ„๊น… ๋„๊ตฌ๋ฅผ ๋” ์ด์ƒ ์‚ฌ์šฉํ•  ์ˆ˜ ์—†์„ ์ˆ˜๋„ ์žˆ๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๊ฐ ์ฝ”๋“œ ๋ฒ ์ด์Šค์— ๋Œ€ํ•ด ์ข‹์€ ์ฒซ ๋ฒˆ์งธ ๋‹จ๊ณ„๋Š” ํ•ญ์ƒ **์ž‘์€** ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋กœ๋“œํ•˜๊ณ  ๋”๋ฏธ ์ •์ˆ˜ ๋ฒกํ„ฐ ์ž…๋ ฅ์„ ์‚ฌ์šฉํ•˜์—ฌ ๋‹จ์ผ forward pass๋ฅผ ์žฌํ˜„ํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด์™€ ๊ฐ™์€ ์Šคํฌ๋ฆฝํŠธ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค(์˜์‚ฌ ์ฝ”๋“œ๋กœ ์ž‘์„ฑ): ```python model = BrandNewBertModel.load_pretrained_checkpoint("/path/to/checkpoint/") input_ids = [0, 4, 5, 2, 3, 7, 9] # vector of input ids original_output = model.predict(input_ids) ``` ๋‹ค์Œ์œผ๋กœ, ๋””๋ฒ„๊น… ์ „๋žต์— ๋Œ€ํ•ด ์ผ๋ฐ˜์ ์œผ๋กœ ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋ช‡ ๊ฐ€์ง€ ์„ ํƒ์ง€๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค: - ์›๋ณธ ๋ชจ๋ธ์„ ๋งŽ์€ ์ž‘์€ ํ…Œ์ŠคํŠธ ๊ฐ€๋Šฅํ•œ ๊ตฌ์„ฑ ์š”์†Œ๋กœ ๋ถ„ํ•ดํ•˜๊ณ  ๊ฐ๊ฐ์— ๋Œ€ํ•ด forward pass๋ฅผ ์‹คํ–‰ํ•˜์—ฌ ๊ฒ€์ฆํ•ฉ๋‹ˆ๋‹ค. - ์›๋ณธ ๋ชจ๋ธ์„ ์›๋ณธ *tokenizer*๊ณผ ์›๋ณธ *model*๋กœ๋งŒ ๋ถ„ํ•ดํ•˜๊ณ  ํ•ด๋‹น ๋ถ€๋ถ„์— ๋Œ€ํ•ด forward pass๋ฅผ ์‹คํ–‰ํ•œ ํ›„ ๊ฒ€์ฆ์„ ์œ„ํ•ด ์ค‘๊ฐ„ ์ถœ๋ ฅ(print ๋ฌธ ๋˜๋Š” ์ค‘๋‹จ์ )์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. 
๋‹ค์‹œ ๋งํ•˜์ง€๋งŒ, ์–ด๋–ค ์ „๋žต์„ ์„ ํƒํ• ์ง€๋Š” ๋‹น์‹ ์—๊ฒŒ ๋‹ฌ๋ ค ์žˆ์Šต๋‹ˆ๋‹ค. ์›๋ณธ ์ฝ”๋“œ ๋ฒ ์ด์Šค์— ๋”ฐ๋ผ ํ•˜๋‚˜ ๋˜๋Š” ๋‹ค๋ฅธ ์ „๋žต์ด ์œ ๋ฆฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์›๋ณธ ์ฝ”๋“œ ๋ฒ ์ด์Šค๋ฅผ ๋ชจ๋ธ์˜ ์ž‘์€ ํ•˜์œ„ ๊ตฌ์„ฑ ์š”์†Œ๋กœ ๋ถ„ํ•ดํ•  ์ˆ˜ ์žˆ๋Š”์ง€ ์—ฌ๋ถ€, ์˜ˆ๋ฅผ ๋“ค์–ด ์›๋ณธ ์ฝ”๋“œ ๋ฒ ์ด์Šค๊ฐ€ ์ฆ‰์‹œ ์‹คํ–‰ ๋ชจ๋“œ์—์„œ ๊ฐ„๋‹จํžˆ ์‹คํ–‰๋  ์ˆ˜ ์žˆ๋Š” ๊ฒฝ์šฐ, ๊ทธ๋Ÿฐ ๊ฒฝ์šฐ์—๋Š” ๊ทธ ๋…ธ๋ ฅ์ด ๊ฐ€์น˜๊ฐ€ ์žˆ๋‹ค๋Š” ๊ฒƒ์ด ์ผ๋ฐ˜์ ์ž…๋‹ˆ๋‹ค. ์ดˆ๊ธฐ์— ๋” ์–ด๋ ค์šด ๋ฐฉ๋ฒ•์„ ์„ ํƒํ•˜๋Š” ๊ฒƒ์—๋Š” ๋ช‡ ๊ฐ€์ง€ ์ค‘์š”ํ•œ ์žฅ์ ์ด ์žˆ์Šต๋‹ˆ๋‹ค. - ์›๋ณธ ๋ชจ๋ธ์„ ๐Ÿค— Transformers ๊ตฌํ˜„๊ณผ ๋น„๊ตํ•  ๋•Œ ๊ฐ ๊ตฌ์„ฑ ์š”์†Œ๊ฐ€ ์ผ์น˜ํ•˜๋Š”์ง€ ์ž๋™์œผ๋กœ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ฆ‰, ์‹œ๊ฐ์ ์ธ ๋น„๊ต(print ๋ฌธ์„ ํ†ตํ•œ ๋น„๊ต๊ฐ€ ์•„๋‹Œ) ๋Œ€์‹  ๐Ÿค— Transformers ๊ตฌํ˜„๊ณผ ๊ทธ์— ๋Œ€์‘ํ•˜๋Š” ์›๋ณธ ๊ตฌ์„ฑ ์š”์†Œ๊ฐ€ ์ผ์น˜ํ•˜๋Š”์ง€ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. - ์ „์ฒด ๋ชจ๋ธ์„ ๋ชจ๋“ˆ๋ณ„๋กœ, ์ฆ‰ ์ž‘์€ ๊ตฌ์„ฑ ์š”์†Œ๋กœ ๋ถ„ํ•ดํ•จ์œผ๋กœ์จ ๋ชจ๋ธ์„ ์ด์‹ํ•˜๋Š” ํฐ ๋ฌธ์ œ๋ฅผ ๋‹จ์ˆœํžˆ ๊ฐœ๋ณ„ ๊ตฌ์„ฑ ์š”์†Œ๋ฅผ ์ด์‹ํ•˜๋Š” ์ž‘์€ ๋ฌธ์ œ๋กœ ๋ถ„ํ•ดํ•  ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ ์ž‘์—…์„ ๋” ์ž˜ ๊ตฌ์กฐํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. - ๋ชจ๋ธ์„ ๋…ผ๋ฆฌ์ ์œผ๋กœ ์˜๋ฏธ ์žˆ๋Š” ๊ตฌ์„ฑ ์š”์†Œ๋กœ ๋ถ„๋ฆฌํ•˜๋Š” ๊ฒƒ์€ ๋ชจ๋ธ์˜ ์„ค๊ณ„์— ๋Œ€ํ•œ ๋” ๋‚˜์€ ๊ฐœ์š”๋ฅผ ์–ป๊ณ  ๋ชจ๋ธ์„ ๋” ์ž˜ ์ดํ•ดํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋ฉ๋‹ˆ๋‹ค. - ์ด๋Ÿฌํ•œ ๊ตฌ์„ฑ ์š”์†Œ๋ณ„ ํ…Œ์ŠคํŠธ๋ฅผ ํ†ตํ•ด ์ฝ”๋“œ๋ฅผ ๋ณ€๊ฒฝํ•˜๋ฉด์„œ ํšŒ๊ท€๊ฐ€ ๋ฐœ์ƒํ•˜์ง€ ์•Š๋„๋ก ๋ณด์žฅํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [Lysandre์˜ ELECTRA ํ†ตํ•ฉ ๊ฒ€์‚ฌ](https://gist.github.com/LysandreJik/db4c948f6b4483960de5cbac598ad4ed)๋Š” ์ด๋ฅผ ์ˆ˜ํ–‰ํ•˜๋Š” ์ข‹์€ ์˜ˆ์ œ์ž…๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์›๋ณธ ์ฝ”๋“œ ๋ฒ ์ด์Šค๊ฐ€ ๋งค์šฐ ๋ณต์žกํ•˜๊ฑฐ๋‚˜ ์ค‘๊ฐ„ ๊ตฌ์„ฑ ์š”์†Œ๋ฅผ ์ปดํŒŒ์ผ๋œ ๋ชจ๋“œ์—์„œ ์‹คํ–‰ํ•˜๋Š” ๊ฒƒ๋งŒ ํ—ˆ์šฉํ•˜๋Š” ๊ฒฝ์šฐ, ๋ชจ๋ธ์„ ํ…Œ์ŠคํŠธ ๊ฐ€๋Šฅํ•œ ์ž‘์€ ํ•˜์œ„ ๊ตฌ์„ฑ ์š”์†Œ๋กœ ๋ถ„ํ•ดํ•˜๋Š” ๊ฒƒ์ด ์‹œ๊ฐ„์ด ๋งŽ์ด ์†Œ์š”๋˜๊ฑฐ๋‚˜ ๋ถˆ๊ฐ€๋Šฅํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. 
[T5์˜ MeshTensorFlow](https://github.com/tensorflow/mesh/tree/master/mesh_tensorflow) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋Š” ๋งค์šฐ ๋ณต์žกํ•˜๋ฉฐ ๋ชจ๋ธ์„ ํ•˜์œ„ ๊ตฌ์„ฑ ์š”์†Œ๋กœ ๋ถ„ํ•ดํ•˜๋Š” ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์„ ์ œ๊ณตํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ๊ฒฝ์šฐ, ๋ณดํ†ต print ๋ฌธ์„ ํ†ตํ•ด ํ™•์ธํ•ฉ๋‹ˆ๋‹ค. ์–ด๋–ค ์ „๋žต์„ ์„ ํƒํ•˜๋”๋ผ๋„ ๊ถŒ์žฅ๋˜๋Š” ์ ˆ์ฐจ๋Š” ๋™์ผํ•ฉ๋‹ˆ๋‹ค. ๋จผ์ € ์‹œ์ž‘ ๋ ˆ์ด์–ด๋ฅผ ๋””๋ฒ„๊ทธํ•˜๊ณ  ๋งˆ์ง€๋ง‰ ๋ ˆ์ด์–ด๋ฅผ ๋งˆ์ง€๋ง‰์— ๋””๋ฒ„๊ทธํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ ์ˆœ์„œ๋กœ ๊ฐ ๋ ˆ์ด์–ด์˜ ์ถœ๋ ฅ์„ ๊ฒ€์ƒ‰ํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค: 1. ๋ชจ๋ธ์— ์ „๋‹ฌ๋œ ์ž…๋ ฅ ID ๊ฐ€์ ธ์˜ค๊ธฐ 2. ์›Œ๋“œ ์ž„๋ฒ ๋”ฉ ๊ฐ€์ ธ์˜ค๊ธฐ 3. ์ฒซ ๋ฒˆ์งธ Transformer ๋ ˆ์ด์–ด์˜ ์ž…๋ ฅ ๊ฐ€์ ธ์˜ค๊ธฐ 4. ์ฒซ ๋ฒˆ์งธ Transformer ๋ ˆ์ด์–ด์˜ ์ถœ๋ ฅ ๊ฐ€์ ธ์˜ค๊ธฐ 5. ๋‹ค์Œ n-1๊ฐœ์˜ Transformer ๋ ˆ์ด์–ด์˜ ์ถœ๋ ฅ ๊ฐ€์ ธ์˜ค๊ธฐ 6. BrandNewBert ๋ชจ๋ธ์˜ ์ถœ๋ ฅ ๊ฐ€์ ธ์˜ค๊ธฐ ์ž…๋ ฅ ID๋Š” ์ •์ˆ˜ ๋ฐฐ์—ด๋กœ ๊ตฌ์„ฑ๋˜๋ฉฐ, ์˜ˆ๋ฅผ ๋“ค์–ด `input_ids = [0, 4, 4, 3, 2, 4, 1, 7, 19]`์™€ ๊ฐ™์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ ๋ ˆ์ด์–ด์˜ ์ถœ๋ ฅ์€ ์ข…์ข… ๋‹ค์ฐจ์› ์‹ค์ˆ˜ ๋ฐฐ์—ด๋กœ ๊ตฌ์„ฑ๋˜๋ฉฐ, ๋‹ค์Œ๊ณผ ๊ฐ™์ด ๋‚˜ํƒ€๋‚ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ``` [[ [-0.1465, -0.6501, 0.1993, ..., 0.1451, 0.3430, 0.6024], [-0.4417, -0.5920, 0.3450, ..., -0.3062, 0.6182, 0.7132], [-0.5009, -0.7122, 0.4548, ..., -0.3662, 0.6091, 0.7648], ..., [-0.5613, -0.6332, 0.4324, ..., -0.3792, 0.7372, 0.9288], [-0.5416, -0.6345, 0.4180, ..., -0.3564, 0.6992, 0.9191], [-0.5334, -0.6403, 0.4271, ..., -0.3339, 0.6533, 0.8694]]], ``` ๐Ÿค— Transformers์— ์ถ”๊ฐ€๋˜๋Š” ๋ชจ๋“  ๋ชจ๋ธ์€ ํ†ตํ•ฉ ํ…Œ์ŠคํŠธ๋ฅผ ํ†ต๊ณผํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ฆ‰, ์›๋ณธ ๋ชจ๋ธ๊ณผ ๐Ÿค— Transformers์˜ ์žฌ๊ตฌํ˜„ ๋ฒ„์ „์ด 0.001์˜ ์ •๋ฐ€๋„๋กœ ์ •ํ™•ํžˆ ๋™์ผํ•œ ์ถœ๋ ฅ์„ ๋‚ด์•ผ ํ•ฉ๋‹ˆ๋‹ค! ๋™์ผํ•œ ๋ชจ๋ธ์ด ๋‹ค๋ฅธ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ ์ž‘์„ฑ๋˜์—ˆ์„ ๋•Œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ ํ”„๋ ˆ์ž„์›Œํฌ์— ๋”ฐ๋ผ ์•ฝ๊ฐ„ ๋‹ค๋ฅธ ์ถœ๋ ฅ์„ ์–ป๋Š” ๊ฒƒ์€ ์ •์ƒ์ด๋ฏ€๋กœ 1e-3(0.001)์˜ ์˜ค์ฐจ๋Š” ํ—ˆ์šฉํ•ฉ๋‹ˆ๋‹ค. 
๊ฑฐ์˜ ๋™์ผํ•œ ์ถœ๋ ฅ์„ ๋‚ด๋Š” ๊ฒƒ๋งŒ์œผ๋กœ๋Š” ์ถฉ๋ถ„ํ•˜์ง€ ์•Š์œผ๋ฉฐ, ์™„๋ฒฝํžˆ ์ผ์น˜ํ•˜๋Š” ์ˆ˜์ค€์ด์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ๐Ÿค— Transformers ๋ฒ„์ „์˜ ์ค‘๊ฐ„ ์ถœ๋ ฅ์„ *brand_new_bert*์˜ ์›๋ž˜ ๊ตฌํ˜„์˜ ์ค‘๊ฐ„ ์ถœ๋ ฅ๊ณผ ์—ฌ๋Ÿฌ ๋ฒˆ ๋น„๊ตํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด ๊ฒฝ์šฐ ์›๋ณธ ์ €์žฅ์†Œ์˜ **ํšจ์œจ์ ์ธ** ๋””๋ฒ„๊น… ํ™˜๊ฒฝ์ด ์ ˆ๋Œ€์ ์œผ๋กœ ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ๋””๋ฒ„๊น… ํ™˜๊ฒฝ์„ ๊ฐ€๋Šฅํ•œ ํ•œ ํšจ์œจ์ ์œผ๋กœ ๋งŒ๋“œ๋Š” ๋ช‡ ๊ฐ€์ง€ ์กฐ์–ธ์„ ์ œ์‹œํ•ฉ๋‹ˆ๋‹ค. - ์ค‘๊ฐ„ ๊ฒฐ๊ณผ๋ฅผ ๋””๋ฒ„๊ทธํ•˜๋Š” ๊ฐ€์žฅ ์ข‹์€ ๋ฐฉ๋ฒ•์„ ์ฐพ์œผ์„ธ์š”. ์›๋ณธ ์ €์žฅ์†Œ๊ฐ€ PyTorch๋กœ ์ž‘์„ฑ๋˜์—ˆ๋‹ค๋ฉด ์›๋ณธ ๋ชจ๋ธ์„ ๋” ์ž‘์€ ํ•˜์œ„ ๊ตฌ์„ฑ ์š”์†Œ๋กœ ๋ถ„ํ•ดํ•˜์—ฌ ์ค‘๊ฐ„ ๊ฐ’์„ ๊ฒ€์ƒ‰ํ•˜๋Š” ๊ธด ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์ž‘์„ฑํ•˜๋Š” ๊ฒƒ์— ์‹œ๊ฐ„์„ ํˆฌ์žํ•  ๊ฐ€์น˜๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ์›๋ณธ ์ €์žฅ์†Œ๊ฐ€ Tensorflow 1๋กœ ์ž‘์„ฑ๋˜์—ˆ๋‹ค๋ฉด [tf.print](https://www.tensorflow.org/api_docs/python/tf/print)์™€ ๊ฐ™์€ Tensorflow ์ถœ๋ ฅ ์ž‘์—…์„ ์‚ฌ์šฉํ•˜์—ฌ ์ค‘๊ฐ„ ๊ฐ’์„ ์ถœ๋ ฅํ•ด์•ผ ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์›๋ณธ ์ €์žฅ์†Œ๊ฐ€ Jax๋กœ ์ž‘์„ฑ๋˜์—ˆ๋‹ค๋ฉด forward pass๋ฅผ ์‹คํ–‰ํ•  ๋•Œ ๋ชจ๋ธ์ด **jit ๋˜์ง€ ์•Š๋„๋ก** ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด [์ด ๋งํฌ](https://github.com/google/jax/issues/196)๋ฅผ ํ™•์ธํ•ด ๋ณด์„ธ์š”. - ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ ๊ฐ€์žฅ ์ž‘์€ ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. ์ฒดํฌํฌ์ธํŠธ๊ฐ€ ์ž‘์„์ˆ˜๋ก ๋””๋ฒ„๊ทธ ์‚ฌ์ดํด์ด ๋” ๋นจ๋ผ์ง‘๋‹ˆ๋‹ค. ์ „๋ฐ˜์ ์œผ๋กœ forward pass์— 10์ดˆ ์ด์ƒ์ด ๊ฑธ๋ฆฌ๋Š” ๊ฒฝ์šฐ ํšจ์œจ์ ์ด์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๋งค์šฐ ํฐ ์ฒดํฌํฌ์ธํŠธ๋งŒ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ๊ฒฝ์šฐ, ์ƒˆ ํ™˜๊ฒฝ์—์„œ ์ž„์˜๋กœ ์ดˆ๊ธฐํ™”๋œ ๊ฐ€์ค‘์น˜๋กœ ๋”๋ฏธ ๋ชจ๋ธ์„ ๋งŒ๋“ค๊ณ  ํ•ด๋‹น ๊ฐ€์ค‘์น˜๋ฅผ ๐Ÿค— Transformers ๋ฒ„์ „๊ณผ ๋น„๊ตํ•˜๊ธฐ ์œ„ํ•ด ์ €์žฅํ•˜๋Š” ๊ฒƒ์ด ๋” ์˜๋ฏธ๊ฐ€ ์žˆ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. - ๋””๋ฒ„๊น… ์„ค์ •์—์„œ ๊ฐ€์žฅ ์‰ฝ๊ฒŒ forward pass๋ฅผ ํ˜ธ์ถœํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์‚ฌ์šฉํ•˜์„ธ์š”. ์›๋ณธ ์ €์žฅ์†Œ์—์„œ **๋‹จ์ผ** forward pass๋งŒ ํ˜ธ์ถœํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ์ฐพ๋Š” ๊ฒƒ์ด ์ด์ƒ์ ์ž…๋‹ˆ๋‹ค. 
์ด ํ•จ์ˆ˜๋Š” ์ผ๋ฐ˜์ ์œผ๋กœ `predict`, `evaluate`, `forward`, `__call__`๊ณผ ๊ฐ™์ด ํ˜ธ์ถœ๋ฉ๋‹ˆ๋‹ค. `autoregressive_sample`๊ณผ ๊ฐ™์€ ํ…์ŠคํŠธ ์ƒ์„ฑ์—์„œ `forward`๋ฅผ ์—ฌ๋Ÿฌ ๋ฒˆ ํ˜ธ์ถœํ•˜์—ฌ ํ…์ŠคํŠธ๋ฅผ ์ƒ์„ฑํ•˜๋Š” ๋“ฑ์˜ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋””๋ฒ„๊ทธํ•˜๊ณ  ์‹ถ์ง€ ์•Š์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค. - ํ† ํฐํ™” ๊ณผ์ •์„ ๋ชจ๋ธ์˜ *forward* pass์™€ ๋ถ„๋ฆฌํ•˜๋ ค๊ณ  ๋…ธ๋ ฅํ•˜์„ธ์š”. ์›๋ณธ ์ €์žฅ์†Œ์—์„œ ์ž…๋ ฅ ๋ฌธ์ž์—ด์„ ์ž…๋ ฅํ•ด์•ผ ํ•˜๋Š” ์˜ˆ์ œ๊ฐ€ ์žˆ๋Š” ๊ฒฝ์šฐ, ์ž…๋ ฅ ๋ฌธ์ž์—ด์ด ์ž…๋ ฅ ID๋กœ ๋ณ€๊ฒฝ๋˜๋Š” ์ˆœ๊ฐ„์„ ์ฐพ์•„์„œ ์‹œ์ž‘ํ•˜์„ธ์š”. ์ด ๊ฒฝ์šฐ ์ง์ ‘ ID๋ฅผ ์ž…๋ ฅํ•  ์ˆ˜ ์žˆ๋„๋ก ์ž‘์€ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์ž‘์„ฑํ•˜๊ฑฐ๋‚˜ ์›๋ณธ ์ฝ”๋“œ๋ฅผ ์ˆ˜์ •ํ•ด์•ผ ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. - ๋””๋ฒ„๊น… ์„ค์ •์—์„œ ๋ชจ๋ธ์ด ํ›ˆ๋ จ ๋ชจ๋“œ๊ฐ€ ์•„๋‹ˆ๋ผ๋Š” ๊ฒƒ์„ ํ™•์ธํ•˜์„ธ์š”. ํ›ˆ๋ จ ๋ชจ๋“œ์—์„œ๋Š” ๋ชจ๋ธ์˜ ์—ฌ๋Ÿฌ ๋“œ๋กญ์•„์›ƒ ๋ ˆ์ด์–ด ๋•Œ๋ฌธ์— ๋ฌด์ž‘์œ„ ์ถœ๋ ฅ์ด ์ƒ์„ฑ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋””๋ฒ„๊น… ํ™˜๊ฒฝ์—์„œ forward pass๊ฐ€ **๊ฒฐ์ •๋ก ์ **์ด๋„๋ก ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋˜๋Š” ๋™์ผํ•œ ํ”„๋ ˆ์ž„์›Œํฌ์— ์žˆ๋Š” ๊ฒฝ์šฐ *transformers.utils.set_seed*๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. ๋‹ค์Œ ์„น์…˜์—์„œ๋Š” *brand_new_bert*์— ๋Œ€ํ•ด ์ด ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•˜๋Š” ๋ฐ ๋” ๊ตฌ์ฒด์ ์ธ ์„ธ๋ถ€ ์‚ฌํ•ญ/ํŒ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ### 5.-14. ๐Ÿค— Transformers์— BrandNewBert๋ฅผ ์ด์‹ํ•˜๊ธฐ [[5.-14.-port-brandnewbert-to-transformers]] ์ด์ œ, ๋งˆ์นจ๋‚ด ๐Ÿค— Transformers์— ์ƒˆ๋กœ์šด ์ฝ”๋“œ๋ฅผ ์ถ”๊ฐ€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๐Ÿค— Transformers ํฌํฌ์˜ ํด๋ก ์œผ๋กœ ์ด๋™ํ•˜์„ธ์š”: ```bash cd transformers ``` ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ด๋ฏธ ์กด์žฌํ•˜๋Š” ๋ชจ๋ธ์˜ ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜์™€ ์ •ํ™•ํžˆ ์ผ์น˜ํ•˜๋Š” ๋ชจ๋ธ์„ ์ถ”๊ฐ€ํ•˜๋Š” ํŠน๋ณ„ํ•œ ๊ฒฝ์šฐ์—๋Š” [์ด ์„น์…˜](#write-a-conversion-script)์— ์„ค๋ช…๋œ๋Œ€๋กœ ๋ณ€ํ™˜ ์Šคํฌ๋ฆฝํŠธ๋งŒ ์ถ”๊ฐ€ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ์ด ๊ฒฝ์šฐ์—๋Š” ์ด๋ฏธ ์กด์žฌํ•˜๋Š” ๋ชจ๋ธ์˜ ์ „์ฒด ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜๋ฅผ ๊ทธ๋Œ€๋กœ ์žฌ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋ ‡์ง€ ์•Š์œผ๋ฉด ์ƒˆ๋กœ์šด ๋ชจ๋ธ ์ƒ์„ฑ์„ ์‹œ์ž‘ํ•ฉ์‹œ๋‹ค. 
์—ฌ๊ธฐ์—์„œ ๋‘ ๊ฐ€์ง€ ์„ ํƒ์ง€๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค: - `transformers-cli add-new-model-like`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๊ธฐ์กด ๋ชจ๋ธ๊ณผ ์œ ์‚ฌํ•œ ์ƒˆ๋กœ์šด ๋ชจ๋ธ ์ถ”๊ฐ€ํ•˜๊ธฐ - `transformers-cli add-new-model`์„ ์‚ฌ์šฉํ•˜์—ฌ ํ…œํ”Œ๋ฆฟ์„ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•œ ์ƒˆ๋กœ์šด ๋ชจ๋ธ ์ถ”๊ฐ€ํ•˜๊ธฐ (์„ ํƒํ•œ ๋ชจ๋ธ ์œ ํ˜•์— ๋”ฐ๋ผ BERT ๋˜๋Š” Bart์™€ ์œ ์‚ฌํ•œ ๋ชจ์Šต์ผ ๊ฒƒ์ž…๋‹ˆ๋‹ค) ๋‘ ๊ฒฝ์šฐ ๋ชจ๋‘, ๋ชจ๋ธ์˜ ๊ธฐ๋ณธ ์ •๋ณด๋ฅผ ์ž…๋ ฅํ•˜๋Š” ์„ค๋ฌธ์กฐ์‚ฌ๊ฐ€ ์ œ์‹œ๋ฉ๋‹ˆ๋‹ค. ๋‘ ๋ฒˆ์งธ ๋ช…๋ น์–ด๋Š” `cookiecutter`๋ฅผ ์„ค์น˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ž์„ธํ•œ ์ •๋ณด๋Š” [์—ฌ๊ธฐ](https://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model)์—์„œ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. **huggingface/transformers ๋ฉ”์ธ ์ €์žฅ์†Œ์— Pull Request ์—ด๊ธฐ** ์ž๋™์œผ๋กœ ์ƒ์„ฑ๋œ ์ฝ”๋“œ๋ฅผ ์ˆ˜์ •ํ•˜๊ธฐ ์ „์—, ์ง€๊ธˆ์€ "์ž‘์—… ์ง„ํ–‰ ์ค‘ (WIP)" ํ’€ ๋ฆฌํ€˜์ŠคํŠธ๋ฅผ ์—ด๊ธฐ ์œ„ํ•œ ์‹œ๊ธฐ์ž…๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ๐Ÿค— Transformers์— "*brand_new_bert* ์ถ”๊ฐ€"๋ผ๋Š” ์ œ๋ชฉ์˜ "[WIP] Add *brand_new_bert*" ํ’€ ๋ฆฌํ€˜์ŠคํŠธ๋ฅผ ์—ฝ๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ๋‹น์‹ ๊ณผ Hugging Face ํŒ€์ด ๐Ÿค— Transformers์— ๋ชจ๋ธ์„ ํ†ตํ•ฉํ•˜๋Š” ์ž‘์—…์„ ํ•จ๊ป˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ์„ ์ˆ˜ํ–‰ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: 1. ๋ฉ”์ธ ๋ธŒ๋žœ์น˜์—์„œ ์ž‘์—…์„ ์ž˜ ์„ค๋ช…ํ•˜๋Š” ์ด๋ฆ„์œผ๋กœ ๋ธŒ๋žœ์น˜ ์ƒ์„ฑ ```bash git checkout -b add_brand_new_bert ``` 2. ์ž๋™์œผ๋กœ ์ƒ์„ฑ๋œ ์ฝ”๋“œ ์ปค๋ฐ‹ ```bash git add . git commit ``` 3. ํ˜„์žฌ ๋ฉ”์ธ์„ ๊ฐ€์ ธ์˜ค๊ณ  ๋ฆฌ๋ฒ ์ด์Šค ```bash git fetch upstream git rebase upstream/main ``` 4. ๋ณ€๊ฒฝ ์‚ฌํ•ญ์„ ๊ณ„์ •์— ํ‘ธ์‹œ ```bash git push -u origin a-descriptive-name-for-my-changes ``` 5. ๋งŒ์กฑ์Šค๋Ÿฝ๋‹ค๋ฉด, GitHub์—์„œ ์ž์‹ ์˜ ํฌํฌํ•œ ์›น ํŽ˜์ด์ง€๋กœ ์ด๋™ํ•ฉ๋‹ˆ๋‹ค. "Pull request"๋ฅผ ํด๋ฆญํ•ฉ๋‹ˆ๋‹ค. Hugging Face ํŒ€์˜ ์ผ๋ถ€ ๋ฉค๋ฒ„์˜ GitHub ํ•ธ๋“ค์„ ๋ฆฌ๋ทฐ์–ด๋กœ ์ถ”๊ฐ€ํ•˜์—ฌ Hugging Face ํŒ€์ด ์•ž์œผ๋กœ์˜ ๋ณ€๊ฒฝ ์‚ฌํ•ญ์— ๋Œ€ํ•ด ์•Œ๋ฆผ์„ ๋ฐ›์„ ์ˆ˜ ์žˆ๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. 6. 
GitHub ํ’€ ๋ฆฌํ€˜์ŠคํŠธ ์›น ํŽ˜์ด์ง€ ์˜ค๋ฅธ์ชฝ์— ์žˆ๋Š” "Convert to draft"๋ฅผ ํด๋ฆญํ•˜์—ฌ PR์„ ์ดˆ์•ˆ์œผ๋กœ ๋ณ€๊ฒฝํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ์œผ๋กœ, ์–ด๋–ค ์ง„์ „์„ ์ด๋ฃจ์—ˆ๋‹ค๋ฉด ์ž‘์—…์„ ์ปค๋ฐ‹ํ•˜๊ณ  ๊ณ„์ •์— ํ‘ธ์‹œํ•˜์—ฌ ํ’€ ๋ฆฌํ€˜์ŠคํŠธ์— ํ‘œ์‹œ๋˜๋„๋ก ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ, ๋‹ค์Œ๊ณผ ๊ฐ™์ด ํ˜„์žฌ ๋ฉ”์ธ๊ณผ ์ž‘์—…์„ ์—…๋ฐ์ดํŠธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```bash git fetch upstream git merge upstream/main ``` ์ผ๋ฐ˜์ ์œผ๋กœ, ๋ชจ๋ธ ๋˜๋Š” ๊ตฌํ˜„์— ๊ด€ํ•œ ๋ชจ๋“  ์งˆ๋ฌธ์€ ์ž์‹ ์˜ PR์—์„œ ํ•ด์•ผ ํ•˜๋ฉฐ, PR์—์„œ ํ† ๋ก ๋˜๊ณ  ํ•ด๊ฒฐ๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด Hugging Face ํŒ€์ด ์ƒˆ๋กœ์šด ์ฝ”๋“œ๋ฅผ ์ปค๋ฐ‹ํ•˜๊ฑฐ๋‚˜ ์งˆ๋ฌธ์„ ํ•  ๋•Œ ํ•ญ์ƒ ์•Œ๋ฆผ์„ ๋ฐ›์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. Hugging Face ํŒ€์—๊ฒŒ ๋ฌธ์ œ ๋˜๋Š” ์งˆ๋ฌธ์„ ํšจ์œจ์ ์œผ๋กœ ์ดํ•ดํ•  ์ˆ˜ ์žˆ๋„๋ก ์ถ”๊ฐ€ํ•œ ์ฝ”๋“œ๋ฅผ ๋ช…์‹œํ•˜๋Š” ๊ฒƒ์ด ๋„์›€์ด ๋  ๋•Œ๊ฐ€ ๋งŽ์Šต๋‹ˆ๋‹ค. ์ด๋ฅผ ์œ„ํ•ด, ๋ณ€๊ฒฝ ์‚ฌํ•ญ์„ ๋ชจ๋‘ ๋ณผ ์ˆ˜ ์žˆ๋Š” "Files changed" ํƒญ์œผ๋กœ ์ด๋™ํ•˜์—ฌ ์งˆ๋ฌธํ•˜๊ณ ์ž ํ•˜๋Š” ์ค„๋กœ ์ด๋™ํ•œ ๋‹ค์Œ "+" ๊ธฐํ˜ธ๋ฅผ ํด๋ฆญํ•˜์—ฌ ์ฝ”๋ฉ˜ํŠธ๋ฅผ ์ถ”๊ฐ€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์งˆ๋ฌธ์ด๋‚˜ ๋ฌธ์ œ๊ฐ€ ํ•ด๊ฒฐ๋˜๋ฉด, ์ƒ์„ฑ๋œ ์ฝ”๋ฉ˜ํŠธ์˜ "Resolve" ๋ฒ„ํŠผ์„ ํด๋ฆญํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ, Hugging Face ํŒ€์€ ์ฝ”๋“œ๋ฅผ ๋ฆฌ๋ทฐํ•  ๋•Œ ์ฝ”๋ฉ˜ํŠธ๋ฅผ ๋‚จ๊ธธ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์šฐ๋ฆฌ๋Š” PR์—์„œ ๋Œ€๋ถ€๋ถ„์˜ ์งˆ๋ฌธ์„ GitHub์—์„œ ๋ฌป๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ๊ณต๊ฐœ์— ํฌ๊ฒŒ ๋„์›€์ด ๋˜์ง€ ์•Š๋Š” ๋งค์šฐ ์ผ๋ฐ˜์ ์ธ ์งˆ๋ฌธ์˜ ๊ฒฝ์šฐ, Slack์ด๋‚˜ ์ด๋ฉ”์ผ์„ ํ†ตํ•ด Hugging Face ํŒ€์—๊ฒŒ ๋ฌธ์˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. **5. brand_new_bert์— ๋Œ€ํ•ด ์ƒ์„ฑ๋œ ๋ชจ๋ธ ์ฝ”๋“œ๋ฅผ ์ ์šฉํ•˜๊ธฐ** ๋จผ์ €, ์šฐ๋ฆฌ๋Š” ๋ชจ๋ธ ์ž์ฒด์—๋งŒ ์ดˆ์ ์„ ๋งž์ถ”๊ณ  ํ† ํฌ๋‚˜์ด์ €์— ๋Œ€ํ•ด์„œ๋Š” ์‹ ๊ฒฝ ์“ฐ์ง€ ์•Š์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋ชจ๋“  ๊ด€๋ จ ์ฝ”๋“œ๋Š” ๋‹ค์Œ์˜ ์ƒ์„ฑ๋œ ํŒŒ์ผ์—์„œ ์ฐพ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py` ๋ฐ `src/transformers/models/brand_new_bert/configuration_brand_new_bert.py`. ์ด์ œ ๋งˆ์นจ๋‚ด ์ฝ”๋”ฉ์„ ์‹œ์ž‘ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค :). 
`src/transformers/models/brand_new_bert/modeling_brand_new_bert.py`์˜ ์ƒ์„ฑ๋œ ์ฝ”๋“œ๋Š” ์ธ์ฝ”๋” ์ „์šฉ ๋ชจ๋ธ์ธ ๊ฒฝ์šฐ BERT์™€ ๋™์ผํ•œ ์•„ํ‚คํ…์ฒ˜๋ฅผ ๊ฐ€์ง€๊ฑฐ๋‚˜, ์ธ์ฝ”๋”-๋””์ฝ”๋” ๋ชจ๋ธ์ธ ๊ฒฝ์šฐ BART์™€ ๋™์ผํ•œ ์•„ํ‚คํ…์ฒ˜๋ฅผ ๊ฐ€์งˆ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด ์‹œ์ ์—์„œ, ๋ชจ๋ธ์˜ ์ด๋ก ์  ์ธก๋ฉด์— ๋Œ€ํ•ด ๋ฐฐ์šด ๋‚ด์šฉ์„ ๋‹ค์‹œ ์ƒ๊ธฐํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: *๋ชจ๋ธ์ด BERT ๋˜๋Š” BART์™€ ์–ด๋–ป๊ฒŒ ๋‹ค๋ฅธ๊ฐ€์š”?*. ์ž์ฃผ ๋ณ€๊ฒฝํ•ด์•ผ ํ•˜๋Š” ๊ฒƒ์€ *self-attention* ๋ ˆ์ด์–ด, ์ •๊ทœํ™” ๋ ˆ์ด์–ด์˜ ์ˆœ์„œ ๋“ฑ์„ ๋ณ€๊ฒฝํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋‹ค์‹œ ๋งํ•˜์ง€๋งŒ, ์ž์‹ ์˜ ๋ชจ๋ธ์„ ๊ตฌํ˜„ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋˜๋„๋ก Transformers์—์„œ ์ด๋ฏธ ์กด์žฌํ•˜๋Š” ๋ชจ๋ธ์˜ ์œ ์‚ฌํ•œ ์•„ํ‚คํ…์ฒ˜๋ฅผ ์‚ดํŽด๋ณด๋Š” ๊ฒƒ์ด ์œ ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. **์ฐธ๊ณ ๋กœ** ์ด ์‹œ์ ์—์„œ, ์ฝ”๋“œ๊ฐ€ ์™„์ „ํžˆ ์ •ํ™•ํ•˜๊ฑฐ๋‚˜ ๊นจ๋—ํ•˜๋‹ค๊ณ  ํ™•์‹ ํ•  ํ•„์š”๋Š” ์—†์Šต๋‹ˆ๋‹ค. ์˜คํžˆ๋ ค ์ฒ˜์Œ์—๋Š” ์›๋ณธ ์ฝ”๋“œ์˜ ์ฒซ ๋ฒˆ์งธ *๋ถˆ์™„์ „ํ•˜๊ณ * ๋ณต์‚ฌ๋œ ๋ฒ„์ „์„ `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py`์— ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ํ•„์š”ํ•œ ๋ชจ๋“  ์ฝ”๋“œ๊ฐ€ ์ถ”๊ฐ€๋  ๋•Œ๊นŒ์ง€ ์ด๋Ÿฌํ•œ ์ž‘์—…์„ ์ง„ํ–‰ํ•œ ํ›„, ๋‹ค์Œ ์„น์…˜์—์„œ ์„ค๋ช…ํ•œ ๋ณ€ํ™˜ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ฝ”๋“œ๋ฅผ ์ ์ง„์ ์œผ๋กœ ๊ฐœ์„ ํ•˜๊ณ  ์ˆ˜์ •ํ•˜๋Š” ๊ฒƒ์ด ํ›จ์”ฌ ํšจ์œจ์ ์ž…๋‹ˆ๋‹ค. ์ด ์‹œ์ ์—์„œ ์ž‘๋™ํ•ด์•ผ ํ•˜๋Š” ์œ ์ผํ•œ ๊ฒƒ์€ ๋‹ค์Œ ๋ช…๋ น์ด ์ž‘๋™ํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค: ```python from transformers import BrandNewBertModel, BrandNewBertConfig model = BrandNewBertModel(BrandNewBertConfig()) ``` ์œ„์˜ ๋ช…๋ น์€ `BrandNewBertConfig()`์— ์ •์˜๋œ ๊ธฐ๋ณธ ๋งค๊ฐœ๋ณ€์ˆ˜์— ๋”ฐ๋ผ ๋ฌด์ž‘์œ„ ๊ฐ€์ค‘์น˜๋กœ ๋ชจ๋ธ์„ ์ƒ์„ฑํ•˜๋ฉฐ, ์ด๋กœ์จ ๋ชจ๋“  ๊ตฌ์„ฑ ์š”์†Œ์˜ `init()` ๋ฉ”์„œ๋“œ๊ฐ€ ์ž‘๋™ํ•จ์„ ๋ณด์žฅํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋“  ๋ฌด์ž‘์œ„ ์ดˆ๊ธฐํ™”๋Š” `BrandnewBertPreTrainedModel` ํด๋ž˜์Šค์˜ `_init_weights` ๋ฉ”์„œ๋“œ์—์„œ ์ˆ˜ํ–‰๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด ๋ฉ”์„œ๋“œ๋Š” ๊ตฌ์„ฑ ์„ค์ • ๋ณ€์ˆ˜์— ๋”ฐ๋ผ ๋ชจ๋“  ๋ฆฌํ”„ ๋ชจ๋“ˆ์„ ์ดˆ๊ธฐํ™”ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 
BERT์˜ `_init_weights` ๋ฉ”์„œ๋“œ ์˜ˆ์ œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```py def _init_weights(self, module): """Initialize the weights""" if isinstance(module, nn.Linear): module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) if module.bias is not None: module.bias.data.zero_() elif isinstance(module, nn.Embedding): module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) if module.padding_idx is not None: module.weight.data[module.padding_idx].zero_() elif isinstance(module, nn.LayerNorm): module.bias.data.zero_() module.weight.data.fill_(1.0) ``` ๋ช‡ ๊ฐ€์ง€ ๋ชจ๋“ˆ์— ๋Œ€ํ•ด ํŠน๋ณ„ํ•œ ์ดˆ๊ธฐํ™”๊ฐ€ ํ•„์š”ํ•œ ๊ฒฝ์šฐ ์‚ฌ์šฉ์ž ์ •์˜ ๋ฐฉ์‹์„ ์‚ฌ์šฉํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, `Wav2Vec2ForPreTraining`์—์„œ ๋งˆ์ง€๋ง‰ ๋‘ ๊ฐœ์˜ ์„ ํ˜• ๋ ˆ์ด์–ด๋Š” ์ผ๋ฐ˜์ ์ธ PyTorch `nn.Linear`์˜ ์ดˆ๊ธฐํ™”๋ฅผ ๊ฐ€์ ธ์•ผ ํ•˜์ง€๋งŒ, ๋‹ค๋ฅธ ๋ชจ๋“  ๋ ˆ์ด์–ด๋Š” ์œ„์™€ ๊ฐ™์€ ์ดˆ๊ธฐํ™”๋ฅผ ์‚ฌ์šฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ฝ”๋“œํ™”๋ฉ๋‹ˆ๋‹ค: ```py def _init_weights(self, module): """Initialize the weights""" if isinstnace(module, Wav2Vec2ForPreTraining): module.project_hid.reset_parameters() module.project_q.reset_parameters() module.project_hid._is_hf_initialized = True module.project_q._is_hf_initialized = True elif isinstance(module, nn.Linear): module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) if module.bias is not None: module.bias.data.zero_() ``` `_is_hf_initialized` ํ”Œ๋ž˜๊ทธ๋Š” ์„œ๋ธŒ๋ชจ๋“ˆ์„ ํ•œ ๋ฒˆ๋งŒ ์ดˆ๊ธฐํ™”ํ•˜๋„๋ก ๋‚ด๋ถ€์ ์œผ๋กœ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. `module.project_q` ๋ฐ `module.project_hid`์— ๋Œ€ํ•ด `True`๋กœ ์„ค์ •ํ•จ์œผ๋กœ์จ, ์šฐ๋ฆฌ๊ฐ€ ์ˆ˜ํ–‰ํ•œ ์‚ฌ์šฉ์ž ์ •์˜ ์ดˆ๊ธฐํ™”๊ฐ€ ์ดํ›„์— ๋ฎ์–ด์“ฐ์ด์ง€ ์•Š๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. ์ฆ‰, `_init_weights` ํ•จ์ˆ˜๊ฐ€ ์ด๋“ค์—๊ฒŒ ์ ์šฉ๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. **6. 
๋ณ€ํ™˜ ์Šคํฌ๋ฆฝํŠธ ์ž‘์„ฑํ•˜๊ธฐ** ๋‹ค์Œ์œผ๋กœ, ๋””๋ฒ„๊ทธ์— ์‚ฌ์šฉํ•œ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๊ธฐ์กด ์ €์žฅ์†Œ์—์„œ ๋งŒ๋“  ๐Ÿค— Transformers ๊ตฌํ˜„๊ณผ ํ˜ธํ™˜๋˜๋Š” ์ฒดํฌํฌ์ธํŠธ๋กœ ๋ณ€ํ™˜ํ•  ์ˆ˜ ์žˆ๋Š” ๋ณ€ํ™˜ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์ž‘์„ฑํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ณ€ํ™˜ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์ฒ˜์Œ๋ถ€ํ„ฐ ์ž‘์„ฑํ•˜๋Š” ๊ฒƒ๋ณด๋‹ค๋Š” *brand_new_bert*์™€ ๋™์ผํ•œ ํ”„๋ ˆ์ž„์›Œํฌ๋กœ ์ž‘์„ฑ๋œ ์œ ์‚ฌํ•œ ๋ชจ๋ธ์„ ๋ณ€ํ™˜ํ•œ ๊ธฐ์กด ๋ณ€ํ™˜ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์ฐพ์•„๋ณด๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์œผ๋กœ ๊ธฐ์กด ๋ณ€ํ™˜ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ๋ณต์‚ฌํ•˜์—ฌ ์‚ฌ์šฉ ์‚ฌ๋ก€์— ๋งž๊ฒŒ ์•ฝ๊ฐ„ ์ˆ˜์ •ํ•˜๋Š” ๊ฒƒ์œผ๋กœ ์ถฉ๋ถ„ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์— ๋Œ€ํ•ด ์œ ์‚ฌํ•œ ๊ธฐ์กด ๋ณ€ํ™˜ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์–ด๋””์—์„œ ์ฐพ์„ ์ˆ˜ ์žˆ๋Š”์ง€ Hugging Face ํŒ€์—๊ฒŒ ๋ฌธ์˜ํ•˜๋Š” ๊ฒƒ์„ ๋ง์„ค์ด์ง€ ๋งˆ์„ธ์š”. - TensorFlow์—์„œ PyTorch๋กœ ๋ชจ๋ธ์„ ์ด์ „ํ•˜๋Š” ๊ฒฝ์šฐ, ์ข‹์€ ์ฐธ๊ณ  ์ž๋ฃŒ๋กœ BERT์˜ ๋ณ€ํ™˜ ์Šคํฌ๋ฆฝํŠธ [์—ฌ๊ธฐ](https://github.com/huggingface/transformers/blob/7acfa95afb8194f8f9c1f4d2c6028224dbed35a2/src/transformers/models/bert/modeling_bert.py#L91)๋ฅผ ์ฐธ์กฐํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. - PyTorch์—์„œ PyTorch๋กœ ๋ชจ๋ธ์„ ์ด์ „ํ•˜๋Š” ๊ฒฝ์šฐ, ์ข‹์€ ์ฐธ๊ณ  ์ž๋ฃŒ๋กœ BART์˜ ๋ณ€ํ™˜ ์Šคํฌ๋ฆฝํŠธ [์—ฌ๊ธฐ](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bart/convert_bart_original_pytorch_checkpoint_to_pytorch.py)๋ฅผ ์ฐธ์กฐํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ์—์„œ๋Š” PyTorch ๋ชจ๋ธ์ด ๋ ˆ์ด์–ด ๊ฐ€์ค‘์น˜๋ฅผ ์ €์žฅํ•˜๊ณ  ๋ ˆ์ด์–ด ์ด๋ฆ„์„ ์ •์˜ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด ๊ฐ„๋‹จํžˆ ์„ค๋ช…ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. PyTorch์—์„œ ๋ ˆ์ด์–ด์˜ ์ด๋ฆ„์€ ๋ ˆ์ด์–ด์— ์ง€์ •ํ•œ ํด๋ž˜์Šค ์†์„ฑ์˜ ์ด๋ฆ„์œผ๋กœ ์ •์˜๋ฉ๋‹ˆ๋‹ค. 
๋‹ค์Œ๊ณผ ๊ฐ™์ด PyTorch์—์„œ `SimpleModel`์ด๋ผ๋Š” ๋”๋ฏธ ๋ชจ๋ธ์„ ์ •์˜ํ•ด ๋ด…์‹œ๋‹ค: ```python from torch import nn class SimpleModel(nn.Module): def __init__(self): super().__init__() self.dense = nn.Linear(10, 10) self.intermediate = nn.Linear(10, 10) self.layer_norm = nn.LayerNorm(10) ``` ์ด์ œ ์ด ๋ชจ๋ธ ์ •์˜์˜ ์ธ์Šคํ„ด์Šค๋ฅผ ์ƒ์„ฑํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ `dense`, `intermediate`, `layer_norm` ๋“ฑ์˜ ๊ฐ€์ค‘์น˜๊ฐ€ ๋žœ๋คํ•˜๊ฒŒ ํ• ๋‹น๋ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ ์ถœ๋ ฅํ•˜์—ฌ ์•„ํ‚คํ…์ฒ˜๋ฅผ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```python model = SimpleModel() print(model) ``` ์ด๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ถœ๋ ฅ๋ฉ๋‹ˆ๋‹ค: ``` SimpleModel( (dense): Linear(in_features=10, out_features=10, bias=True) (intermediate): Linear(in_features=10, out_features=10, bias=True) (layer_norm): LayerNorm((10,), eps=1e-05, elementwise_affine=True) ) ``` ์šฐ๋ฆฌ๋Š” ๋ ˆ์ด์–ด์˜ ์ด๋ฆ„์ด PyTorch์—์„œ ํด๋ž˜์Šค ์†์„ฑ์˜ ์ด๋ฆ„์œผ๋กœ ์ •์˜๋˜์–ด ์žˆ๋Š” ๊ฒƒ์„ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํŠน์ • ๋ ˆ์ด์–ด์˜ ๊ฐ€์ค‘์น˜ ๊ฐ’์„ ์ถœ๋ ฅํ•˜์—ฌ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```python print(model.dense.weight.data) ``` ๊ฐ€์ค‘์น˜๊ฐ€ ๋ฌด์ž‘์œ„๋กœ ์ดˆ๊ธฐํ™”๋˜์—ˆ์Œ์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
``` tensor([[-0.0818, 0.2207, -0.0749, -0.0030, 0.0045, -0.1569, -0.1598, 0.0212, -0.2077, 0.2157], [ 0.1044, 0.0201, 0.0990, 0.2482, 0.3116, 0.2509, 0.2866, -0.2190, 0.2166, -0.0212], [-0.2000, 0.1107, -0.1999, -0.3119, 0.1559, 0.0993, 0.1776, -0.1950, -0.1023, -0.0447], [-0.0888, -0.1092, 0.2281, 0.0336, 0.1817, -0.0115, 0.2096, 0.1415, -0.1876, -0.2467], [ 0.2208, -0.2352, -0.1426, -0.2636, -0.2889, -0.2061, -0.2849, -0.0465, 0.2577, 0.0402], [ 0.1502, 0.2465, 0.2566, 0.0693, 0.2352, -0.0530, 0.1859, -0.0604, 0.2132, 0.1680], [ 0.1733, -0.2407, -0.1721, 0.1484, 0.0358, -0.0633, -0.0721, -0.0090, 0.2707, -0.2509], [-0.1173, 0.1561, 0.2945, 0.0595, -0.1996, 0.2988, -0.0802, 0.0407, 0.1829, -0.1568], [-0.1164, -0.2228, -0.0403, 0.0428, 0.1339, 0.0047, 0.1967, 0.2923, 0.0333, -0.0536], [-0.1492, -0.1616, 0.1057, 0.1950, -0.2807, -0.2710, -0.1586, 0.0739, 0.2220, 0.2358]]). ``` ๋ณ€ํ™˜ ์Šคํฌ๋ฆฝํŠธ์—์„œ๋Š” ์ด๋Ÿฌํ•œ ๋ฌด์ž‘์œ„๋กœ ์ดˆ๊ธฐํ™”๋œ ๊ฐ€์ค‘์น˜๋ฅผ ์ฒดํฌํฌ์ธํŠธ์˜ ํ•ด๋‹น ๋ ˆ์ด์–ด์˜ ์ •ํ™•ํ•œ ๊ฐ€์ค‘์น˜๋กœ ์ฑ„์›Œ์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```python # retrieve matching layer weights, e.g. by # recursive algorithm layer_name = "dense" pretrained_weight = array_of_dense_layer model_pointer = getattr(model, "dense") model_pointer.weight.data = torch.from_numpy(pretrained_weight) ``` ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด PyTorch ๋ชจ๋ธ์˜ ๋ฌด์ž‘์œ„๋กœ ์ดˆ๊ธฐํ™”๋œ ๊ฐ ๊ฐ€์ค‘์น˜์™€ ํ•ด๋‹น ์ฒดํฌํฌ์ธํŠธ ๊ฐ€์ค‘์น˜๊ฐ€ **๋ชจ์–‘๊ณผ ์ด๋ฆ„** ๋ชจ๋‘์—์„œ ์ •ํ™•ํžˆ ์ผ์น˜ํ•˜๋Š”์ง€ ํ™•์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฅผ ์œ„ํ•ด ๋ชจ์–‘์— ๋Œ€ํ•œ assert ๋ฌธ์„ ์ถ”๊ฐ€ํ•˜๊ณ  ์ฒดํฌํฌ์ธํŠธ ๊ฐ€์ค‘์น˜์˜ ์ด๋ฆ„์„ ์ถœ๋ ฅํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 
์˜ˆ๋ฅผ ๋“ค์–ด ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋ฌธ์žฅ์„ ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```python assert ( model_pointer.weight.shape == pretrained_weight.shape ), f"Pointer shape of random weight {model_pointer.shape} and array shape of checkpoint weight {pretrained_weight.shape} mismatched" ``` ๋˜ํ•œ ๋‘ ๊ฐ€์ค‘์น˜์˜ ์ด๋ฆ„์„ ์ถœ๋ ฅํ•˜์—ฌ ์ผ์น˜ํ•˜๋Š”์ง€ ํ™•์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. *์˜ˆ์‹œ*: ```python logger.info(f"Initialize PyTorch weight {layer_name} from {pretrained_weight.name}") ``` ๋ชจ์–‘ ๋˜๋Š” ์ด๋ฆ„์ด ์ผ์น˜ํ•˜์ง€ ์•Š๋Š” ๊ฒฝ์šฐ, ๋žœ๋ค์œผ๋กœ ์ดˆ๊ธฐํ™”๋œ ๋ ˆ์ด์–ด์— ์ž˜๋ชป๋œ ์ฒดํฌํฌ์ธํŠธ ๊ฐ€์ค‘์น˜๋ฅผ ํ• ๋‹นํ•œ ๊ฒƒ์œผ๋กœ ์ถ”์ธก๋ฉ๋‹ˆ๋‹ค. ์ž˜๋ชป๋œ ๋ชจ์–‘์€ `BrandNewBertConfig()`์˜ ๊ตฌ์„ฑ ๋งค๊ฐœ๋ณ€์ˆ˜ ์„ค์ •์ด ๋ณ€ํ™˜ํ•˜๋ ค๋Š” ์ฒดํฌํฌ์ธํŠธ์— ์‚ฌ์šฉ๋œ ์„ค์ •๊ณผ ์ •ํ™•ํžˆ ์ผ์น˜ํ•˜์ง€ ์•Š๊ธฐ ๋•Œ๋ฌธ์ผ ๊ฐ€๋Šฅ์„ฑ์ด ๊ฐ€์žฅ ํฝ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ PyTorch์˜ ๋ ˆ์ด์–ด ๊ตฌํ˜„ ์ž์ฒด์—์„œ ๊ฐ€์ค‘์น˜๋ฅผ ์ „์น˜ํ•ด์•ผ ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๋งˆ์ง€๋ง‰์œผ๋กœ, **๋ชจ๋“ ** ํ•„์š”ํ•œ ๊ฐ€์ค‘์น˜๊ฐ€ ์ดˆ๊ธฐํ™”๋˜์—ˆ๋Š”์ง€ ํ™•์ธํ•˜๊ณ  ์ดˆ๊ธฐํ™”์— ์‚ฌ์šฉ๋˜์ง€ ์•Š์€ ๋ชจ๋“  ์ฒดํฌํฌ์ธํŠธ ๊ฐ€์ค‘์น˜๋ฅผ ์ถœ๋ ฅํ•˜์—ฌ ๋ชจ๋ธ์ด ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ๋ณ€ํ™˜๋˜์—ˆ๋Š”์ง€ ํ™•์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ž˜๋ชป๋œ ๋ชจ์–‘ ๋ฌธ์žฅ์ด๋‚˜ ์ž˜๋ชป๋œ ์ด๋ฆ„ ํ• ๋‹น์œผ๋กœ ์ธํ•ด ๋ณ€ํ™˜ ์‹œ๋„๊ฐ€ ์‹คํŒจํ•˜๋Š” ๊ฒƒ์€ ์™„์ „ํžˆ ์ •์ƒ์ž…๋‹ˆ๋‹ค. ์ด๋Š” `BrandNewBertConfig()`์—์„œ ์ž˜๋ชป๋œ ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜๊ฑฐ๋‚˜ ๐Ÿค— Transformers ๊ตฌํ˜„์—์„œ ์ž˜๋ชป๋œ ์•„ํ‚คํ…์ฒ˜, ๐Ÿค— Transformers ๊ตฌํ˜„์˜ ๊ตฌ์„ฑ ์š”์†Œ ์ค‘ ํ•˜๋‚˜์˜ `init()` ํ•จ์ˆ˜์— ๋ฒ„๊ทธ๊ฐ€ ์žˆ๋Š” ๊ฒฝ์šฐ์ด๊ฑฐ๋‚˜ ์ฒดํฌํฌ์ธํŠธ ๊ฐ€์ค‘์น˜ ์ค‘ ํ•˜๋‚˜๋ฅผ ์ „์น˜ํ•ด์•ผ ํ•˜๋Š” ๊ฒฝ์šฐ์ผ ๊ฐ€๋Šฅ์„ฑ์ด ๊ฐ€์žฅ ๋†’์Šต๋‹ˆ๋‹ค. ์ด ๋‹จ๊ณ„๋Š” ์ด์ „ ๋‹จ๊ณ„์™€ ํ•จ๊ป˜ ๋ฐ˜๋ณต๋˜์–ด์•ผ ํ•˜๋ฉฐ ๋ชจ๋“  ์ฒดํฌํฌ์ธํŠธ์˜ ๊ฐ€์ค‘์น˜๊ฐ€ Transformers ๋ชจ๋ธ์— ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ๋กœ๋“œ๋˜์—ˆ์„ ๋•Œ๊นŒ์ง€ ๊ณ„์†๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 
๐Ÿค— Transformers ๊ตฌํ˜„์— ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ๋กœ๋“œํ•œ ํ›„์—๋Š” `/path/to/converted/checkpoint/folder`์™€ ๊ฐ™์€ ์›ํ•˜๋Š” ํด๋”์— ๋ชจ๋ธ์„ ์ €์žฅํ•  ์ˆ˜ ์žˆ์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ํ•ด๋‹น ํด๋”์—๋Š” `pytorch_model.bin` ํŒŒ์ผ๊ณผ `config.json` ํŒŒ์ผ์ด ๋ชจ๋‘ ํฌํ•จ๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```python model.save_pretrained("/path/to/converted/checkpoint/folder") ``` **7. ์ˆœ๋ฐฉํ–ฅ ํŒจ์Šค ๊ตฌํ˜„ํ•˜๊ธฐ** ๐Ÿค— Transformers ๊ตฌํ˜„์— ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๊ฐ€์ค‘์น˜๋ฅผ ์ •ํ™•ํ•˜๊ฒŒ ๋กœ๋“œํ•œ ํ›„์—๋Š” ์ˆœ๋ฐฉํ–ฅ ํŒจ์Šค๊ฐ€ ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ๊ตฌํ˜„๋˜์—ˆ๋Š”์ง€ ํ™•์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. [์›๋ณธ ์ €์žฅ์†Œ์— ์ต์ˆ™ํ•ด์ง€๊ธฐ](#34-run-a-pretrained-checkpoint-using-the-original-repository)์—์„œ ์ด๋ฏธ ์›๋ณธ ์ €์žฅ์†Œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์˜ ์ˆœ๋ฐฉํ–ฅ ํŒจ์Šค๋ฅผ ์‹คํ–‰ํ•˜๋Š” ์Šคํฌ๋ฆฝํŠธ๋ฅผ ๋งŒ๋“ค์—ˆ์Šต๋‹ˆ๋‹ค. ์ด์ œ ์›๋ณธ ๋Œ€์‹  ๐Ÿค— Transformers ๊ตฌํ˜„์„ ์‚ฌ์šฉํ•˜๋Š” ์œ ์‚ฌํ•œ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์ž‘์„ฑํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ž‘์„ฑ๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```python model = BrandNewBertModel.from_pretrained("/path/to/converted/checkpoint/folder") input_ids = [0, 4, 4, 3, 2, 4, 1, 7, 19] output = model(input_ids).last_hidden_states ``` ๐Ÿค— Transformers ๊ตฌํ˜„๊ณผ ์›๋ณธ ๋ชจ๋ธ ๊ตฌํ˜„์ด ์ฒ˜์Œ๋ถ€ํ„ฐ ์ •ํ™•ํžˆ ๋™์ผํ•œ ์ถœ๋ ฅ์„ ์ œ๊ณตํ•˜์ง€ ์•Š๊ฑฐ๋‚˜ ์ˆœ๋ฐฉํ–ฅ ํŒจ์Šค์—์„œ ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•  ๊ฐ€๋Šฅ์„ฑ์ด ๋งค์šฐ ๋†’์Šต๋‹ˆ๋‹ค. ์‹ค๋งํ•˜์ง€ ๋งˆ์„ธ์š”. ์˜ˆ์ƒ๋œ ์ผ์ž…๋‹ˆ๋‹ค! ๋จผ์ €, ์ˆœ๋ฐฉํ–ฅ ํŒจ์Šค์—์„œ ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•˜์ง€ ์•Š๋„๋ก ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ข…์ข… ์ž˜๋ชป๋œ ์ฐจ์›์ด ์‚ฌ์šฉ๋˜์–ด *์ฐจ์› ๋ถˆ์ผ์น˜* ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•˜๊ฑฐ๋‚˜ ์ž˜๋ชป๋œ ๋ฐ์ดํ„ฐ ์œ ํ˜• ๊ฐœ์ฒด๊ฐ€ ์‚ฌ์šฉ๋˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค๋ฉด `torch.long` ๋Œ€์‹ ์— `torch.float32`๊ฐ€ ์‚ฌ์šฉ๋œ ๊ฒฝ์šฐ์ž…๋‹ˆ๋‹ค. ํ•ด๊ฒฐํ•  ์ˆ˜ ์—†๋Š” ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•˜๋ฉด Hugging Face ํŒ€์— ๋„์›€์„ ์š”์ฒญํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ๐Ÿค— Transformers ๊ตฌํ˜„์ด ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ์ž‘๋™ํ•˜๋Š”์ง€ ํ™•์ธํ•˜๋Š” ๋งˆ์ง€๋ง‰ ๋‹จ๊ณ„๋Š” ์ถœ๋ ฅ์ด `1e-3`์˜ ์ •๋ฐ€๋„๋กœ ๋™์ผํ•œ์ง€ ํ™•์ธํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. 
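์œ„์—์„œ ์–ธ๊ธ‰ํ•œ ๋ฐ์ดํ„ฐ ์œ ํ˜• ์‹ค์ˆ˜๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์žฌํ˜„ํ•ด ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ฐ€์ƒ์˜ ์ตœ์†Œ ์˜ˆ์‹œ์ด๋ฉฐ, ์ž„๋ฒ ๋”ฉ ์กฐํšŒ์—๋Š” ๋ฐ˜๋“œ์‹œ ์ •์ˆ˜(`torch.long`) ํƒ€์ž…์˜ `input_ids`๊ฐ€ ํ•„์š”ํ•˜๋‹ค๋Š” ์ ์„ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค:

```python
import torch
from torch import nn

embedding = nn.Embedding(30, 4)

# ์˜ฌ๋ฐ”๋ฅธ ์ž…๋ ฅ: ์ •์ˆ˜(torch.long) ํƒ€์ž…์˜ input_ids
input_ids = torch.tensor([[0, 4, 4, 3, 2, 4, 1, 7, 19]], dtype=torch.long)
hidden_states = embedding(input_ids)
print(hidden_states.shape)

# ํ”ํ•œ ์‹ค์ˆ˜: torch.float32 ํ…์„œ๋ฅผ ๊ทธ๋Œ€๋กœ ๋„˜๊ธฐ๋ฉด ์ž„๋ฒ ๋”ฉ ์กฐํšŒ์—์„œ ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•ฉ๋‹ˆ๋‹ค
bad_input_ids = input_ids.float()
try:
    embedding(bad_input_ids)
except (RuntimeError, IndexError, TypeError) as e:
    print("dtype ์˜ค๋ฅ˜:", type(e).__name__)
```

์ด์ฒ˜๋Ÿผ ์ž…๋ ฅ ํ…์„œ๋ฅผ ๋งŒ๋“ค ๋•Œ dtype์„ ๋ช…์‹œํ•ด ๋‘๋ฉด, ๋‘ ๊ตฌํ˜„ ๊ฐ„ ๋น„๊ต ์‹œ ๋ถˆํ•„์š”ํ•œ ์˜ค๋ฅ˜ ์›์ธ์„ ํ•˜๋‚˜ ์ค„์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.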
๋จผ์ €, ์ถœ๋ ฅ ๋ชจ์–‘์ด ๋™์ผํ•˜๋„๋ก ๋ณด์žฅํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ฆ‰, ๐Ÿค— Transformers ๊ตฌํ˜„ ์Šคํฌ๋ฆฝํŠธ์™€ ์›๋ณธ ๊ตฌํ˜„ ์‚ฌ์ด์—์„œ `outputs.shape`๋Š” ๋™์ผํ•œ ๊ฐ’์„ ๋ฐ˜ํ™˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ ๋‹ค์Œ์œผ๋กœ, ์ถœ๋ ฅ ๊ฐ’์ด ๋™์ผํ•˜๋„๋ก ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ์ƒˆ๋กœ์šด ๋ชจ๋ธ์„ ์ถ”๊ฐ€ํ•  ๋•Œ ๊ฐ€์žฅ ์–ด๋ ค์šด ๋ถ€๋ถ„ ์ค‘ ํ•˜๋‚˜์ž…๋‹ˆ๋‹ค. ์ถœ๋ ฅ์ด ๋™์ผํ•˜์ง€ ์•Š์€ ์ผ๋ฐ˜์ ์ธ ์‹ค์ˆ˜ ์‚ฌ๋ก€๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: - ์ผ๋ถ€ ๋ ˆ์ด์–ด๊ฐ€ ์ถ”๊ฐ€๋˜์ง€ ์•Š์•˜์Šต๋‹ˆ๋‹ค. ์ฆ‰, *ํ™œ์„ฑํ™”* ๋ ˆ์ด์–ด๊ฐ€ ์ถ”๊ฐ€๋˜์ง€ ์•Š์•˜๊ฑฐ๋‚˜ ์ž”์ฐจ ์—ฐ๊ฒฐ์ด ๋น ์กŒ์Šต๋‹ˆ๋‹ค. - ๋‹จ์–ด ์ž„๋ฒ ๋”ฉ ํ–‰๋ ฌ์ด ์—ฐ๊ฒฐ๋˜์ง€ ์•Š์•˜์Šต๋‹ˆ๋‹ค. - ์ž˜๋ชป๋œ ์œ„์น˜ ์ž„๋ฒ ๋”ฉ์ด ์‚ฌ์šฉ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์›๋ณธ ๊ตฌํ˜„์—์„œ๋Š” ์˜คํ”„์…‹์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. - ์ˆœ๋ฐฉํ–ฅ ํŒจ์Šค ์ค‘์— Dropout์ด ์ ์šฉ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฅผ ์ˆ˜์ •ํ•˜๋ ค๋ฉด *model.training์ด False*์ธ์ง€ ํ™•์ธํ•˜๊ณ  ์ˆœ๋ฐฉํ–ฅ ํŒจ์Šค ์ค‘์— Dropout ๋ ˆ์ด์–ด๊ฐ€ ์ž˜๋ชป ํ™œ์„ฑํ™”๋˜์ง€ ์•Š๋„๋ก ํ•˜์„ธ์š”. ์ฆ‰, [PyTorch์˜ ๊ธฐ๋Šฅ์  Dropout](https://pytorch.org/docs/stable/nn.functional.html?highlight=dropout#torch.nn.functional.dropout)์— *self.training*์„ ์ „๋‹ฌํ•˜์„ธ์š”. ๋ฌธ์ œ๋ฅผ ํ•ด๊ฒฐํ•˜๋Š” ๊ฐ€์žฅ ์ข‹์€ ๋ฐฉ๋ฒ•์€ ์ผ๋ฐ˜์ ์œผ๋กœ ์›๋ณธ ๊ตฌํ˜„๊ณผ ๐Ÿค— Transformers ๊ตฌํ˜„์˜ ์ˆœ๋ฐฉํ–ฅ ํŒจ์Šค๋ฅผ ๋‚˜๋ž€ํžˆ ๋†“๊ณ  ์ฐจ์ด์ ์ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด์ƒ์ ์œผ๋กœ๋Š” ์ˆœ๋ฐฉํ–ฅ ํŒจ์Šค์˜ ์ค‘๊ฐ„ ์ถœ๋ ฅ์„ ๋””๋ฒ„๊ทธ/์ถœ๋ ฅํ•˜์—ฌ ์›๋ณธ ๊ตฌํ˜„๊ณผ ๐Ÿค— Transformers ๊ตฌํ˜„์˜ ์ •ํ™•ํ•œ ์œ„์น˜๋ฅผ ์ฐพ์„ ์ˆ˜ ์žˆ์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋จผ์ €, ๋‘ ์Šคํฌ๋ฆฝํŠธ์˜ ํ•˜๋“œ์ฝ”๋”ฉ๋œ `input_ids`๊ฐ€ ๋™์ผํ•œ์ง€ ํ™•์ธํ•˜์„ธ์š”. ๋‹ค์Œ์œผ๋กœ, `input_ids`์˜ ์ฒซ ๋ฒˆ์งธ ๋ณ€ํ™˜์˜ ์ถœ๋ ฅ(์ผ๋ฐ˜์ ์œผ๋กœ ๋‹จ์–ด ์ž„๋ฒ ๋”ฉ)์ด ๋™์ผํ•œ์ง€ ํ™•์ธํ•˜์„ธ์š”. ๊ทธ๋Ÿฐ ๋‹ค์Œ ๋„คํŠธ์›Œํฌ์˜ ๊ฐ€์žฅ ๋งˆ์ง€๋ง‰ ๋ ˆ์ด์–ด๊นŒ์ง€ ์ง„ํ–‰ํ•ด๋ณด์„ธ์š”. ์–ด๋А ์‹œ์ ์—์„œ ๋‘ ๊ตฌํ˜„ ์‚ฌ์ด์— ์ฐจ์ด๊ฐ€ ์žˆ๋Š” ๊ฒƒ์„ ์•Œ๊ฒŒ ๋˜๋Š”๋ฐ, ์ด๋Š” ๐Ÿค— Transformers ๊ตฌํ˜„์˜ ๋ฒ„๊ทธ ์œ„์น˜๋ฅผ ๊ฐ€๋ฆฌํ‚ฌ ๊ฒƒ์ž…๋‹ˆ๋‹ค. 
์ €ํฌ ๊ฒฝํ—˜์ƒ์œผ๋กœ๋Š” ์›๋ณธ ๊ตฌํ˜„๊ณผ ๐Ÿค— Transformers ๊ตฌํ˜„ ๋ชจ๋‘์—์„œ ๋™์ผํ•œ ์œ„์น˜์— ๋งŽ์€ ์ถœ๋ ฅ ๋ฌธ์„ ์ถ”๊ฐ€ํ•˜๊ณ  ์ด๋“ค์˜ ์ค‘๊ฐ„ ํ‘œํ˜„์— ๋Œ€ํ•ด ๋™์ผํ•œ ๊ฐ’์„ ๋ณด์ด๋Š” ์ถœ๋ ฅ ๋ฌธ์„ ์—ฐ์†์ ์œผ๋กœ ์ œ๊ฑฐํ•˜๋Š” ๊ฒƒ์ด ๊ฐ„๋‹จํ•˜๊ณ  ํšจ๊ณผ์ ์ธ ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค. `torch.allclose(original_output, output, atol=1e-3)`๋กœ ์ถœ๋ ฅ์„ ํ™•์ธํ•˜์—ฌ ๋‘ ๊ตฌํ˜„์ด ๋™์ผํ•œ ์ถœ๋ ฅ์„ ํ•˜๋Š” ๊ฒƒ์„ ํ™•์‹ ํ•œ๋‹ค๋ฉด, ๊ฐ€์žฅ ์–ด๋ ค์šด ๋ถ€๋ถ„์€ ๋๋‚ฌ์Šต๋‹ˆ๋‹ค! ์ถ•ํ•˜๋“œ๋ฆฝ๋‹ˆ๋‹ค. ๋‚จ์€ ์ž‘์—…์€ ์‰ฌ์šด ์ผ์ด ๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค ๐Ÿ˜Š. **8. ํ•„์š”ํ•œ ๋ชจ๋“  ๋ชจ๋ธ ํ…Œ์ŠคํŠธ ์ถ”๊ฐ€ํ•˜๊ธฐ** ์ด ์‹œ์ ์—์„œ ์ƒˆ๋กœ์šด ๋ชจ๋ธ์„ ์„ฑ๊ณต์ ์œผ๋กœ ์ถ”๊ฐ€ํ–ˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ํ•ด๋‹น ๋ชจ๋ธ์ด ์š”๊ตฌ๋˜๋Š” ๋””์ž์ธ์— ์™„์ „ํžˆ ๋ถ€ํ•ฉํ•˜์ง€ ์•Š์„ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๐Ÿค— Transformers์™€ ์™„๋ฒฝํ•˜๊ฒŒ ํ˜ธํ™˜๋˜๋Š” ๊ตฌํ˜„์ธ์ง€ ํ™•์ธํ•˜๊ธฐ ์œ„ํ•ด ๋ชจ๋“  ์ผ๋ฐ˜ ํ…Œ์ŠคํŠธ๋ฅผ ํ†ต๊ณผํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. Cookiecutter๋Š” ์•„๋งˆ๋„ ๋ชจ๋ธ์„ ์œ„ํ•œ ํ…Œ์ŠคํŠธ ํŒŒ์ผ์„ ์ž๋™์œผ๋กœ ์ถ”๊ฐ€ํ–ˆ์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์•„๋งˆ๋„ `tests/models/brand_new_bert/test_modeling_brand_new_bert.py`์™€ ๊ฐ™์€ ๊ฒฝ๋กœ์— ์œ„์น˜ํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด ํ…Œ์ŠคํŠธ ํŒŒ์ผ์„ ์‹คํ–‰ํ•˜์—ฌ ์ผ๋ฐ˜ ํ…Œ์ŠคํŠธ๊ฐ€ ๋ชจ๋‘ ํ†ต๊ณผํ•˜๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”. ```bash pytest tests/models/brand_new_bert/test_modeling_brand_new_bert.py ``` ๋ชจ๋“  ์ผ๋ฐ˜ ํ…Œ์ŠคํŠธ๋ฅผ ์ˆ˜์ •ํ•œ ํ›„, ์ด์ œ ์ˆ˜ํ–‰ํ•œ ์ž‘์—…์„ ์ถฉ๋ถ„ํžˆ ํ…Œ์ŠคํŠธํ•˜์—ฌ ๋‹ค์Œ ์‚ฌํ•ญ์„ ๋ณด์žฅํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. - a) ์ปค๋ฎค๋‹ˆํ‹ฐ๊ฐ€ *brand_new_bert*์˜ ํŠน์ • ํ…Œ์ŠคํŠธ๋ฅผ ์‚ดํŽด๋ด„์œผ๋กœ์จ ์ž‘์—…์„ ์‰ฝ๊ฒŒ ์ดํ•ดํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•จ - b) ๋ชจ๋ธ์— ๋Œ€ํ•œ ํ–ฅํ›„ ๋ณ€๊ฒฝ ์‚ฌํ•ญ์ด ๋ชจ๋ธ์˜ ์ค‘์š”ํ•œ ๊ธฐ๋Šฅ์„ ์†์ƒ์‹œํ‚ค์ง€ ์•Š๋„๋ก ํ•จ ๋จผ์ € ํ†ตํ•ฉ ํ…Œ์ŠคํŠธ๋ฅผ ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ํ†ตํ•ฉ ํ…Œ์ŠคํŠธ๋Š” ์ด์ „์— ๋ชจ๋ธ์„ ๐Ÿค— Transformers๋กœ ๊ตฌํ˜„ํ•˜๊ธฐ ์œ„ํ•ด ์‚ฌ์šฉํ•œ ๋””๋ฒ„๊น… ์Šคํฌ๋ฆฝํŠธ์™€ ๋™์ผํ•œ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค. 
Cookiecutter์— ์ด๋ฏธ ์ด๋Ÿฌํ•œ ๋ชจ๋ธ ํ…Œ์ŠคํŠธ์˜ ํ…œํ”Œ๋ฆฟ์ธ `BrandNewBertModelIntegrationTests`๊ฐ€ ์ถ”๊ฐ€๋˜์–ด ์žˆ์œผ๋ฉฐ, ์—ฌ๋Ÿฌ๋ถ„์ด ์ž‘์„ฑํ•ด์•ผ ํ•  ๋‚ด์šฉ์œผ๋กœ๋งŒ ์ฑ„์›Œ ๋„ฃ์œผ๋ฉด ๋ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ํ…Œ์ŠคํŠธ๊ฐ€ ํ†ต๊ณผํ•˜๋Š”์ง€ ํ™•์ธํ•˜๋ ค๋ฉด ๋‹ค์Œ์„ ์‹คํ–‰ํ•˜์„ธ์š”. ```bash RUN_SLOW=1 pytest -sv tests/models/brand_new_bert/test_modeling_brand_new_bert.py::BrandNewBertModelIntegrationTests ``` <Tip> Windows๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ `RUN_SLOW=1`์„ `SET RUN_SLOW=1`๋กœ ๋ฐ”๊ฟ”์•ผ ํ•ฉ๋‹ˆ๋‹ค. </Tip> ๋‘˜์งธ๋กœ, *brand_new_bert*์— ํŠนํ™”๋œ ๋ชจ๋“  ๊ธฐ๋Šฅ๋„ ๋ณ„๋„์˜ ํ…Œ์ŠคํŠธ์—์„œ ์ถ”๊ฐ€๋กœ ํ…Œ์ŠคํŠธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด ๋ถ€๋ถ„์€ ์ข…์ข… ์žŠํžˆ๋Š”๋ฐ, ๋‘ ๊ฐ€์ง€ ์ธก๋ฉด์—์„œ ๊ต‰์žฅํžˆ ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. - *brand_new_bert*์˜ ํŠน์ˆ˜ ๊ธฐ๋Šฅ์ด ์–ด๋–ป๊ฒŒ ์ž‘๋™ํ•ด์•ผ ํ•˜๋Š”์ง€ ๋ณด์—ฌ์คŒ์œผ๋กœ์จ ์ปค๋ฎค๋‹ˆํ‹ฐ์—๊ฒŒ ๋ชจ๋ธ ์ถ”๊ฐ€ ๊ณผ์ •์—์„œ ์Šต๋“ํ•œ ์ง€์‹์„ ์ „๋‹ฌํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋ฉ๋‹ˆ๋‹ค. - ํ–ฅํ›„ ๊ธฐ์—ฌ์ž๋Š” ์ด๋Ÿฌํ•œ ํŠน์ˆ˜ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜์—ฌ ๋ชจ๋ธ์— ๋Œ€ํ•œ ๋ณ€๊ฒฝ ์‚ฌํ•ญ์„ ๋น ๋ฅด๊ฒŒ ํ…Œ์ŠคํŠธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. **9. ํ† ํฌ๋‚˜์ด์ € ๊ตฌํ˜„ํ•˜๊ธฐ** ๋‹ค์Œ์œผ๋กœ, *brand_new_bert*์˜ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ณดํ†ต ํ† ํฌ๋‚˜์ด์ €๋Š” ๐Ÿค— Transformers์˜ ๊ธฐ์กด ํ† ํฌ๋‚˜์ด์ €์™€ ๋™์ผํ•˜๊ฑฐ๋‚˜ ๋งค์šฐ ์œ ์‚ฌํ•ฉ๋‹ˆ๋‹ค. ํ† ํฌ๋‚˜์ด์ €๊ฐ€ ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ์ž‘๋™ํ•˜๋Š”์ง€ ํ™•์ธํ•˜๊ธฐ ์œ„ํ•ด ๋จผ์ € ์›๋ณธ ๋ฆฌํฌ์ง€ํ† ๋ฆฌ์—์„œ ๋ฌธ์ž์—ด์„ ์ž…๋ ฅํ•˜๊ณ  `input_ids`๋ฅผ ๋ฐ˜ํ™˜ํ•˜๋Š” ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์ƒ์„ฑํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ๊ณผ ๊ฐ™์€ ์œ ์‚ฌํ•œ ์Šคํฌ๋ฆฝํŠธ์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค (์˜์‚ฌ ์ฝ”๋“œ๋กœ ์ž‘์„ฑ): ```python input_str = "This is a long example input string containing special characters .$?-, numbers 2872 234 12 and words." 
model = BrandNewBertModel.load_pretrained_checkpoint("/path/to/checkpoint/") input_ids = model.tokenize(input_str) ``` ์›๋ณธ ๋ฆฌํฌ์ง€ํ† ๋ฆฌ๋ฅผ ์ž์„ธํžˆ ์‚ดํŽด๋ณด๊ณ  ์˜ฌ๋ฐ”๋ฅธ ํ† ํฌ๋‚˜์ด์ € ํ•จ์ˆ˜๋ฅผ ์ฐพ๊ฑฐ๋‚˜, ๋ณต์ œ๋ณธ์—์„œ ๋ณ€๊ฒฝ ์‚ฌํ•ญ์„ ์ ์šฉํ•˜์—ฌ `input_ids`๋งŒ ์ถœ๋ ฅํ•˜๋„๋ก ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์›๋ณธ ๋ฆฌํฌ์ง€ํ† ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ธฐ๋Šฅ์ ์ธ ํ† ํฐํ™” ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์ž‘์„ฑํ•œ ํ›„, ๐Ÿค— Transformers์˜ ์œ ์‚ฌํ•œ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์ƒ์„ฑํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ž‘์„ฑ๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```python from transformers import BrandNewBertTokenizer input_str = "This is a long example input string containing special characters .$?-, numbers 2872 234 12 and words." tokenizer = BrandNewBertTokenizer.from_pretrained("/path/to/tokenizer/folder/") input_ids = tokenizer(input_str).input_ids ``` ๋‘ ๊ฐœ์˜ `input_ids`๊ฐ€ ๋™์ผํ•œ ๊ฐ’์„ ๋ฐ˜ํ™˜ํ•  ๋•Œ, ๋งˆ์ง€๋ง‰ ๋‹จ๊ณ„๋กœ ํ† ํฌ๋‚˜์ด์ € ํ…Œ์ŠคํŠธ ํŒŒ์ผ๋„ ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. *brand_new_bert*์˜ ๋ชจ๋ธ๋ง ํ…Œ์ŠคํŠธ ํŒŒ์ผ๊ณผ ์œ ์‚ฌํ•˜๊ฒŒ, *brand_new_bert*์˜ ํ† ํฌ๋‚˜์ด์ œ์ด์…˜ ํ…Œ์ŠคํŠธ ํŒŒ์ผ์—๋Š” ๋ช‡ ๊ฐ€์ง€ ํ•˜๋“œ์ฝ”๋”ฉ๋œ ํ†ตํ•ฉ ํ…Œ์ŠคํŠธ๊ฐ€ ํฌํ•จ๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. **10. ์ข…๋‹จ ๊ฐ„ ํ†ตํ•ฉ ํ…Œ์ŠคํŠธ ์‹คํ–‰** ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์ถ”๊ฐ€ํ•œ ํ›„์—๋Š” ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ช‡ ๊ฐ€์ง€ ์ข…๋‹จ ๊ฐ„ ํ†ตํ•ฉ ํ…Œ์ŠคํŠธ๋ฅผ ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. `tests/models/brand_new_bert/test_modeling_brand_new_bert.py`์— ์ถ”๊ฐ€ํ•ด์ฃผ์„ธ์š”. ์ด๋Ÿฌํ•œ ํ…Œ์ŠคํŠธ๋Š” ๐Ÿค— Transformers ๊ตฌํ˜„์ด ์˜ˆ์ƒ๋Œ€๋กœ ์ž‘๋™ํ•˜๋Š”์ง€๋ฅผ ์˜๋ฏธ ์žˆ๋Š” text-to-text ์˜ˆ์‹œ๋กœ ๋ณด์—ฌ์ค˜์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ ์˜ˆ์‹œ๋กœ๋Š” *์˜ˆ๋ฅผ ๋“ค์–ด* source-to-target ๋ฒˆ์—ญ ์Œ, article-to-summary ์Œ, question-to-answer ์Œ ๋“ฑ์ด ํฌํ•จ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ถˆ๋Ÿฌ์˜จ ์ฒดํฌํฌ์ธํŠธ ์ค‘ ์–ด๋А ๊ฒƒ๋„ ๋‹ค์šด์ŠคํŠธ๋ฆผ ์ž‘์—…์—์„œ ๋ฏธ์„ธ ์กฐ์ •๋˜์ง€ ์•Š์•˜๋‹ค๋ฉด, ๋ชจ๋ธ ํ…Œ์ŠคํŠธ๋งŒ์œผ๋กœ ์ถฉ๋ถ„ํ•ฉ๋‹ˆ๋‹ค. 
๋ชจ๋ธ์ด ์™„์ „ํžˆ ๊ธฐ๋Šฅ์„ ๊ฐ–์ถ”์—ˆ๋Š”์ง€ ํ™•์ธํ•˜๊ธฐ ์œ„ํ•ด ๋งˆ์ง€๋ง‰ ๋‹จ๊ณ„๋กœ GPU์—์„œ ๋ชจ๋“  ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ๋ชจ๋ธ์˜ ๋‚ด๋ถ€ ํ…์„œ์˜ ์ผ๋ถ€์— `.to(self.device)` ๋ฌธ์„ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒƒ์„ ์žŠ์—ˆ์„ ์ˆ˜ ์žˆ์œผ๋ฉฐ, ์ด ๊ฒฝ์šฐ ํ…Œ์ŠคํŠธ์—์„œ ์˜ค๋ฅ˜๋กœ ํ‘œ์‹œ๋ฉ๋‹ˆ๋‹ค. GPU์— ์•ก์„ธ์Šคํ•  ์ˆ˜ ์—†๋Š” ๊ฒฝ์šฐ, Hugging Face ํŒ€์ด ํ…Œ์ŠคํŠธ๋ฅผ ๋Œ€์‹  ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. **11. ๊ธฐ์ˆ ๋ฌธ์„œ ์ถ”๊ฐ€** ์ด์ œ *brand_new_bert*์— ํ•„์š”ํ•œ ๋ชจ๋“  ๊ธฐ๋Šฅ์ด ์ถ”๊ฐ€๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ๊ฑฐ์˜ ๋๋‚ฌ์Šต๋‹ˆ๋‹ค! ์ถ”๊ฐ€ํ•ด์•ผ ํ•  ๊ฒƒ์€ ๋ฉ‹์ง„ ๊ธฐ์ˆ ๋ฌธ์„œ๊ณผ ๊ธฐ์ˆ ๋ฌธ์„œ ํŽ˜์ด์ง€์ž…๋‹ˆ๋‹ค. Cookiecutter๊ฐ€ `docs/source/model_doc/brand_new_bert.md`๋ผ๋Š” ํ…œํ”Œ๋ฆฟ ํŒŒ์ผ์„ ์ถ”๊ฐ€ํ•ด์คฌ์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด ํŽ˜์ด์ง€๋ฅผ ์‚ฌ์šฉํ•˜๊ธฐ ์ „์— ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋Š” ์‚ฌ์šฉ์ž๋“ค์€ ์ผ๋ฐ˜์ ์œผ๋กœ ์ด ํŽ˜์ด์ง€๋ฅผ ๋จผ์ € ํ™•์ธํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ๋ฌธ์„œ๋Š” ์ดํ•ดํ•˜๊ธฐ ์‰ฝ๊ณ  ๊ฐ„๊ฒฐํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ณด์—ฌ์ฃผ๊ธฐ ์œ„ํ•ด *ํŒ*์„ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒƒ์ด ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๋งค์šฐ ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. ๋…์ŠคํŠธ๋ง์— ๊ด€๋ จํ•˜์—ฌ Hugging Face ํŒ€์— ๋ฌธ์˜ํ•˜๋Š” ๊ฒƒ์„ ์ฃผ์ €ํ•˜์ง€ ๋งˆ์„ธ์š”. ๋‹ค์Œ์œผ๋กœ, `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py`์— ์ถ”๊ฐ€๋œ ๋…์ŠคํŠธ๋ง์ด ์˜ฌ๋ฐ”๋ฅด๋ฉฐ ํ•„์š”ํ•œ ๋ชจ๋“  ์ž…๋ ฅ ๋ฐ ์ถœ๋ ฅ์„ ํฌํ•จํ•˜๋„๋ก ํ™•์ธํ•˜์„ธ์š”. [์—ฌ๊ธฐ](writing-documentation)์—์„œ ์šฐ๋ฆฌ์˜ ๋ฌธ์„œ ์ž‘์„ฑ ๊ฐ€์ด๋“œ์™€ ๋…์ŠคํŠธ๋ง ํ˜•์‹์— ๋Œ€ํ•œ ์ƒ์„ธ ๊ฐ€์ด๋“œ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฌธ์„œ๋Š” ์ผ๋ฐ˜์ ์œผ๋กœ ์ปค๋ฎค๋‹ˆํ‹ฐ์™€ ๋ชจ๋ธ์˜ ์ฒซ ๋ฒˆ์งธ ์ ‘์ ์ด๊ธฐ ๋•Œ๋ฌธ์—, ๋ฌธ์„œ๋Š” ์ ์–ด๋„ ์ฝ”๋“œ๋งŒํผ์˜ ์ฃผ์˜๋ฅผ ๊ธฐ์šธ์—ฌ์•ผ ํ•ฉ๋‹ˆ๋‹ค. **์ฝ”๋“œ ๋ฆฌํŒฉํ† ๋ง** ์ข‹์•„์š”, ์ด์ œ *brand_new_bert*๋ฅผ ์œ„ํ•œ ๋ชจ๋“  ํ•„์š”ํ•œ ์ฝ”๋“œ๋ฅผ ์ถ”๊ฐ€ํ–ˆ์Šต๋‹ˆ๋‹ค. 
์ด ์‹œ์ ์—์„œ ๋‹ค์Œ์„ ์‹คํ–‰ํ•˜์—ฌ ์ž ์žฌ์ ์œผ๋กœ ์ž˜๋ชป๋œ ์ฝ”๋“œ ์Šคํƒ€์ผ์„ ์ˆ˜์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ๊ทธ๋ฆฌ๊ณ  ์ฝ”๋”ฉ ์Šคํƒ€์ผ์ด ํ’ˆ์งˆ ์ ๊ฒ€์„ ํ†ต๊ณผํ•˜๋Š”์ง€ ํ™•์ธํ•˜๊ธฐ ์œ„ํ•ด ๋‹ค์Œ์„ ์‹คํ–‰ํ•˜๊ณ  ํ™•์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```bash make style ``` ๐Ÿค— Transformers์—๋Š” ์—ฌ์ „ํžˆ ์‹คํŒจํ•  ์ˆ˜ ์žˆ๋Š” ๋ช‡ ๊ฐ€์ง€ ๋งค์šฐ ์—„๊ฒฉํ•œ ๋””์ž์ธ ํ…Œ์ŠคํŠธ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Š” ๋…์ŠคํŠธ๋ง์— ๋ˆ„๋ฝ๋œ ์ •๋ณด๋‚˜ ์ž˜๋ชป๋œ ๋ช…๋ช… ๋•Œ๋ฌธ์— ์ข…์ข… ๋ฐœ์ƒํ•ฉ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ ๋ง‰ํžˆ๋ฉด Hugging Face ํŒ€์ด ๋„์›€์„ ์ค„ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ```bash make quality ``` ๋งˆ์ง€๋ง‰์œผ๋กœ, ์ฝ”๋“œ๊ฐ€ ์ •ํ™•ํžˆ ์ž‘๋™ํ•˜๋Š” ๊ฒƒ์„ ํ™•์ธํ•œ ํ›„์—๋Š” ํ•ญ์ƒ ์ฝ”๋“œ๋ฅผ ๋ฆฌํŒฉํ† ๋งํ•˜๋Š” ๊ฒƒ์ด ์ข‹์€ ์ƒ๊ฐ์ž…๋‹ˆ๋‹ค. ๋ชจ๋“  ํ…Œ์ŠคํŠธ๊ฐ€ ํ†ต๊ณผ๋œ ์ง€๊ธˆ์€ ์ถ”๊ฐ€ํ•œ ์ฝ”๋“œ๋ฅผ ๋‹ค์‹œ ๊ฒ€ํ† ํ•˜๊ณ  ๋ฆฌํŒฉํ† ๋งํ•˜๋Š” ์ข‹์€ ์‹œ๊ธฐ์ž…๋‹ˆ๋‹ค. ์ด์ œ ์ฝ”๋”ฉ ๋ถ€๋ถ„์„ ์™„๋ฃŒํ–ˆ์Šต๋‹ˆ๋‹ค. ์ถ•ํ•˜ํ•ฉ๋‹ˆ๋‹ค! ๐ŸŽ‰ ๋ฉ‹์ ธ์š”! ๐Ÿ˜Ž **12. ๋ชจ๋ธ์„ ๋ชจ๋ธ ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œํ•˜์„ธ์š”** ์ด ๋งˆ์ง€๋ง‰ ํŒŒํŠธ์—์„œ๋Š” ๋ชจ๋“  ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋ณ€ํ™˜ํ•˜์—ฌ ๋ชจ๋ธ ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œํ•˜๊ณ  ๊ฐ ์—…๋กœ๋“œ๋œ ๋ชจ๋ธ ์ฒดํฌํฌ์ธํŠธ์— ๋Œ€ํ•œ ๋ชจ๋ธ ์นด๋“œ๋ฅผ ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. [Model sharing and uploading Page](model_sharing)๋ฅผ ์ฝ๊ณ  ํ—ˆ๋ธŒ ๊ธฐ๋Šฅ์— ์ต์ˆ™ํ•ด์ง€์„ธ์š”. *brand_new_bert*์˜ ์ €์ž ์กฐ์ง ์•„๋ž˜์— ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•  ์ˆ˜ ์žˆ๋Š” ํ•„์š”ํ•œ ์•ก์„ธ์Šค ๊ถŒํ•œ์„ ์–ป๊ธฐ ์œ„ํ•ด Hugging Face ํŒ€๊ณผ ํ˜‘์—…ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. `transformers`์˜ ๋ชจ๋“  ๋ชจ๋ธ์— ์žˆ๋Š” `push_to_hub` ๋ฉ”์„œ๋“œ๋Š” ์ฒดํฌํฌ์ธํŠธ๋ฅผ ํ—ˆ๋ธŒ์— ๋น ๋ฅด๊ณ  ํšจ์œจ์ ์œผ๋กœ ์—…๋กœ๋“œํ•˜๋Š” ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค. ์•„๋ž˜์— ์ž‘์€ ์ฝ”๋“œ ์กฐ๊ฐ์ด ๋ถ™์—ฌ์ ธ ์žˆ์Šต๋‹ˆ๋‹ค: ๊ฐ ์ฒดํฌํฌ์ธํŠธ์— ์ ํ•ฉํ•œ ๋ชจ๋ธ ์นด๋“œ๋ฅผ ๋งŒ๋“œ๋Š” ๋ฐ ์‹œ๊ฐ„์„ ํ• ์• ํ•˜๋Š” ๊ฒƒ์€ ๊ฐ€์น˜๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ชจ๋ธ ์นด๋“œ๋Š” ์ฒดํฌํฌ์ธํŠธ์˜ ํŠน์„ฑ์„ ๊ฐ•์กฐํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. *์˜ˆ๋ฅผ ๋“ค์–ด* ์ด ์ฒดํฌํฌ์ธํŠธ๋Š” ์–ด๋–ค ๋ฐ์ดํ„ฐ์…‹์—์„œ ์‚ฌ์ „ ํ›ˆ๋ จ/์„ธ๋ถ€ ํ›ˆ๋ จ๋˜์—ˆ๋Š”์ง€? 
์ด ๋ชจ๋ธ์€ ์–ด๋–ค ํ•˜์œ„ ์ž‘์—…์—์„œ ์‚ฌ์šฉํ•ด์•ผ ํ•˜๋Š”์ง€? ๊ทธ๋ฆฌ๊ณ  ๋ชจ๋ธ์„ ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ๋ช‡ ๊ฐ€์ง€ ์ฝ”๋“œ๋„ ํฌํ•จํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```python brand_new_bert.push_to_hub("brand_new_bert") # Uncomment the following line to push to an organization. # brand_new_bert.push_to_hub("<organization>/brand_new_bert") ``` **13. (์„ ํƒ ์‚ฌํ•ญ) ๋…ธํŠธ๋ถ ์ถ”๊ฐ€** *brand_new_bert*๋ฅผ ๋‹ค์šด์ŠคํŠธ๋ฆผ ์ž‘์—…์—์„œ ์ถ”๋ก  ๋˜๋Š” ๋ฏธ์„ธ ์กฐ์ •์— ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์ž์„ธํžˆ ๋ณด์—ฌ์ฃผ๋Š” ๋…ธํŠธ๋ถ์„ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒƒ์ด ๋งค์šฐ ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด๊ฒƒ์€ PR์„ ๋ณ‘ํ•ฉํ•˜๋Š” ๋ฐ ํ•„์ˆ˜์ ์ด์ง€๋Š” ์•Š์ง€๋งŒ ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๋งค์šฐ ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. **14. ์™„๋ฃŒ๋œ PR ์ œ์ถœ** ์ด์ œ ํ”„๋กœ๊ทธ๋ž˜๋ฐ์„ ๋งˆ์ณค์œผ๋ฉฐ, ๋งˆ์ง€๋ง‰ ๋‹จ๊ณ„๋กœ PR์„ ๋ฉ”์ธ ๋ธŒ๋žœ์น˜์— ๋ณ‘ํ•ฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ณดํ†ต Hugging Face ํŒ€์€ ์ด๋ฏธ ์—ฌ๊ธฐ๊นŒ์ง€ ๋„์›€์„ ์ฃผ์—ˆ์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ PR์— ๋ฉ‹์ง„ ์„ค๋ช…์„ ์ถ”๊ฐ€ํ•˜๊ณ  ๋ฆฌ๋ทฐ์–ด์—๊ฒŒ ํŠน์ • ๋””์ž์ธ ์„ ํƒ ์‚ฌํ•ญ์„ ๊ฐ•์กฐํ•˜๋ ค๋ฉด ์™„๋ฃŒ๋œ PR์— ์•ฝ๊ฐ„์˜ ์„ค๋ช…์„ ์ถ”๊ฐ€ํ•˜๋Š” ์‹œ๊ฐ„์„ ํ• ์• ํ•˜๋Š” ๊ฒƒ์ด ๊ฐ€์น˜๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ### ์ž‘์—…๋ฌผ์„ ๊ณต์œ ํ•˜์„ธ์š”!! [[share-your-work]] ์ด์ œ ์ปค๋ฎค๋‹ˆํ‹ฐ์—์„œ ์ž‘์—…๋ฌผ์„ ์ธ์ •๋ฐ›์„ ์‹œ๊ฐ„์ž…๋‹ˆ๋‹ค! ๋ชจ๋ธ ์ถ”๊ฐ€ ์ž‘์—…์„ ์™„๋ฃŒํ•˜๋Š” ๊ฒƒ์€ Transformers์™€ ์ „์ฒด NLP ์ปค๋ฎค๋‹ˆํ‹ฐ์— ํฐ ๊ธฐ์—ฌ์ž…๋‹ˆ๋‹ค. ๋‹น์‹ ์˜ ์ฝ”๋“œ์™€ ์ด์‹๋œ ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์€ ์ˆ˜๋ฐฑ, ์‹ฌ์ง€์–ด ์ˆ˜์ฒœ ๋ช…์˜ ๊ฐœ๋ฐœ์ž์™€ ์—ฐ๊ตฌ์›์— ์˜ํ•ด ํ™•์‹คํžˆ ์‚ฌ์šฉ๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋‹น์‹ ์˜ ์ž‘์—…์— ์ž๋ž‘์Šค๋Ÿฌ์›Œํ•ด์•ผ ํ•˜๋ฉฐ ์ด๋ฅผ ์ปค๋ฎค๋‹ˆํ‹ฐ์™€ ๊ณต์œ ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. **๋‹น์‹ ์€ ์ปค๋ฎค๋‹ˆํ‹ฐ ๋‚ด ๋ชจ๋“  ์‚ฌ๋žŒ๋“ค์—๊ฒŒ ๋งค์šฐ ์‰ฝ๊ฒŒ ์ ‘๊ทผ ๊ฐ€๋Šฅํ•œ ๋˜ ๋‹ค๋ฅธ ๋ชจ๋ธ์„ ๋งŒ๋“ค์—ˆ์Šต๋‹ˆ๋‹ค! ๐Ÿคฏ**
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์Šคํฌ๋ฆฝํŠธ๋กœ ์‹คํ–‰ํ•˜๊ธฐ[[train-with-a-script]] ๐Ÿค— Transformers ๋…ธํŠธ๋ถ๊ณผ ํ•จ๊ป˜ [PyTorch](https://github.com/huggingface/transformers/tree/main/examples/pytorch), [TensorFlow](https://github.com/huggingface/transformers/tree/main/examples/tensorflow), ๋˜๋Š” [JAX/Flax](https://github.com/huggingface/transformers/tree/main/examples/flax)๋ฅผ ์‚ฌ์šฉํ•ด ํŠน์ • ํƒœ์Šคํฌ์— ๋Œ€ํ•œ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ณด์—ฌ์ฃผ๋Š” ์˜ˆ์ œ ์Šคํฌ๋ฆฝํŠธ๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ [์—ฐ๊ตฌ ํ”„๋กœ์ ํŠธ](https://github.com/huggingface/transformers/tree/main/examples/research_projects) ๋ฐ [๋ ˆ๊ฑฐ์‹œ ์˜ˆ์ œ](https://github.com/huggingface/transformers/tree/main/examples/legacy)์—์„œ ๋Œ€๋ถ€๋ถ„ ์ปค๋ฎค๋‹ˆํ‹ฐ์—์„œ ์ œ๊ณตํ•œ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์ฐพ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ์Šคํฌ๋ฆฝํŠธ๋Š” ์ ๊ทน์ ์œผ๋กœ ์œ ์ง€ ๊ด€๋ฆฌ๋˜์ง€ ์•Š์œผ๋ฉฐ ์ตœ์‹  ๋ฒ„์ „์˜ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์™€ ํ˜ธํ™˜๋˜์ง€ ์•Š์„ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์€ ํŠน์ • ๋ฒ„์ „์˜ ๐Ÿค— Transformers๋ฅผ ํ•„์š”๋กœ ํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ์ œ ์Šคํฌ๋ฆฝํŠธ๊ฐ€ ๋ชจ๋“  ๋ฌธ์ œ์—์„œ ๋ฐ”๋กœ ์ž‘๋™ํ•˜๋Š” ๊ฒƒ์€ ์•„๋‹ˆ๋ฉฐ, ํ•ด๊ฒฐํ•˜๋ ค๋Š” ๋ฌธ์ œ์— ๋งž๊ฒŒ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ๋ณ€๊ฒฝํ•ด์•ผ ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. 
์ด๋ฅผ ์œ„ํ•ด ๋Œ€๋ถ€๋ถ„์˜ ์Šคํฌ๋ฆฝํŠธ์—๋Š” ๋ฐ์ดํ„ฐ ์ „์ฒ˜๋ฆฌ ๋ฐฉ๋ฒ•์ด ๋‚˜์™€์žˆ์–ด ํ•„์š”์— ๋”ฐ๋ผ ์ˆ˜์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ์ œ ์Šคํฌ๋ฆฝํŠธ์— ๊ตฌํ˜„ํ•˜๊ณ  ์‹ถ์€ ๊ธฐ๋Šฅ์ด ์žˆ์œผ๋ฉด pull request๋ฅผ ์ œ์ถœํ•˜๊ธฐ ์ „์— [ํฌ๋Ÿผ](https://discuss.huggingface.co/) ๋˜๋Š” [์ด์Šˆ](https://github.com/huggingface/transformers/issues)์—์„œ ๋…ผ์˜ํ•ด ์ฃผ์„ธ์š”. ๋ฒ„๊ทธ ์ˆ˜์ •์€ ํ™˜์˜ํ•˜์ง€๋งŒ ๊ฐ€๋…์„ฑ์„ ํฌ์ƒํ•˜๋ฉด์„œ๊นŒ์ง€ ๋” ๋งŽ์€ ๊ธฐ๋Šฅ์„ ์ถ”๊ฐ€ํ•˜๋Š” pull request๋Š” ๋ณ‘ํ•ฉ(merge)ํ•˜์ง€ ์•Š์„ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์Šต๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” [PyTorch](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization) ๋ฐ [TensorFlow](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/summarization)์—์„œ ์š”์•ฝ ํ›ˆ๋ จํ•˜๋Š” ์Šคํฌ๋ฆฝํŠธ ์˜ˆ์ œ๋ฅผ ์‹คํ–‰ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. ํŠน๋ณ„ํ•œ ์„ค๋ช…์ด ์—†๋Š” ํ•œ ๋ชจ๋“  ์˜ˆ์ œ๋Š” ๋‘ ํ”„๋ ˆ์ž„์›Œํฌ ๋ชจ๋‘์—์„œ ์ž‘๋™ํ•  ๊ฒƒ์œผ๋กœ ์˜ˆ์ƒ๋ฉ๋‹ˆ๋‹ค. ## ์„ค์ •ํ•˜๊ธฐ[[setup]] ์ตœ์‹  ๋ฒ„์ „์˜ ์˜ˆ์ œ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์„ฑ๊ณต์ ์œผ๋กœ ์‹คํ–‰ํ•˜๋ ค๋ฉด ์ƒˆ ๊ฐ€์ƒ ํ™˜๊ฒฝ์—์„œ **์†Œ์Šค๋กœ๋ถ€ํ„ฐ ๐Ÿค— Transformers๋ฅผ ์„ค์น˜**ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```bash git clone https://github.com/huggingface/transformers cd transformers pip install . 
``` ์ด์ „ ๋ฒ„์ „์˜ ์˜ˆ์ œ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ๋ณด๋ ค๋ฉด ์•„๋ž˜ ํ† ๊ธ€์„ ํด๋ฆญํ•˜์„ธ์š”: <details> <summary>์ด์ „ ๋ฒ„์ „์˜ ๐Ÿค— Transformers ์˜ˆ์ œ</summary> <ul> <li><a href="https://github.com/huggingface/transformers/tree/v4.5.1/examples">v4.5.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.4.2/examples">v4.4.2</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.3.3/examples">v4.3.3</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.2.2/examples">v4.2.2</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.1.1/examples">v4.1.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.0.1/examples">v4.0.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.5.1/examples">v3.5.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.4.0/examples">v3.4.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.3.1/examples">v3.3.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.2.0/examples">v3.2.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.1.0/examples">v3.1.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.0.2/examples">v3.0.2</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.11.0/examples">v2.11.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.10.0/examples">v2.10.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.9.1/examples">v2.9.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.8.0/examples">v2.8.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.7.0/examples">v2.7.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.6.0/examples">v2.6.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.5.1/examples">v2.5.1</a></li> <li><a 
href="https://github.com/huggingface/transformers/tree/v2.4.0/examples">v2.4.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.3.0/examples">v2.3.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.2.0/examples">v2.2.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.1.0/examples">v2.1.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v2.0.0/examples">v2.0.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v1.2.0/examples">v1.2.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v1.1.0/examples">v1.1.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v1.0.0/examples">v1.0.0</a></li> </ul> </details> ๊ทธ๋ฆฌ๊ณ  ๋‹ค์Œ๊ณผ ๊ฐ™์ด ๋ณต์ œ(clone)ํ•ด์˜จ ๐Ÿค— Transformers ๋ฒ„์ „์„ ํŠน์ • ๋ฒ„์ „(์˜ˆ: v3.5.1)์œผ๋กœ ์ „ํ™˜ํ•˜์„ธ์š”: ```bash git checkout tags/v3.5.1 ``` ์˜ฌ๋ฐ”๋ฅธ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ ๋ฒ„์ „์„ ์„ค์ •ํ•œ ํ›„ ์›ํ•˜๋Š” ์˜ˆ์ œ ํด๋”๋กœ ์ด๋™ํ•˜์—ฌ ์˜ˆ์ œ๋ณ„๋กœ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์— ๋Œ€ํ•œ ์š”๊ตฌ ์‚ฌํ•ญ(requirements)์„ ์„ค์น˜ํ•ฉ๋‹ˆ๋‹ค: ```bash pip install -r requirements.txt ``` ## ์Šคํฌ๋ฆฝํŠธ ์‹คํ–‰ํ•˜๊ธฐ[[run-a-script]] <frameworkcontent> <pt> ์˜ˆ์ œ ์Šคํฌ๋ฆฝํŠธ๋Š” ๐Ÿค— [Datasets](https://huggingface.co/docs/datasets/) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋‹ค์šด๋กœ๋“œํ•˜๊ณ  ์ „์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฐ ๋‹ค์Œ ์Šคํฌ๋ฆฝํŠธ๋Š” ์š”์•ฝ ๊ธฐ๋Šฅ์„ ์ง€์›ํ•˜๋Š” ์•„ํ‚คํ…์ฒ˜์—์„œ [Trainer](https://huggingface.co/docs/transformers/main_classes/trainer)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ ์˜ˆ๋Š” [CNN/DailyMail](https://huggingface.co/datasets/cnn_dailymail) ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ [T5-small](https://huggingface.co/t5-small)์„ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค. T5 ๋ชจ๋ธ์€ ํ›ˆ๋ จ ๋ฐฉ์‹์— ๋”ฐ๋ผ ์ถ”๊ฐ€ `source_prefix` ์ธ์ˆ˜๊ฐ€ ํ•„์š”ํ•˜๋ฉฐ, ์ด ํ”„๋กฌํ”„ํŠธ๋Š” ์š”์•ฝ ์ž‘์—…์ž„์„ T5์— ์•Œ๋ ค์ค๋‹ˆ๋‹ค. 
```bash
python examples/pytorch/summarization/run_summarization.py \
    --model_name_or_path t5-small \
    --do_train \
    --do_eval \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --predict_with_generate
```
</pt>
<tf>
์˜ˆ์ œ ์Šคํฌ๋ฆฝํŠธ๋Š” ๐Ÿค— [Datasets](https://huggingface.co/docs/datasets/) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋‹ค์šด๋กœ๋“œํ•˜๊ณ  ์ „์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฐ ๋‹ค์Œ ์Šคํฌ๋ฆฝํŠธ๋Š” ์š”์•ฝ ๊ธฐ๋Šฅ์„ ์ง€์›ํ•˜๋Š” ์•„ํ‚คํ…์ฒ˜์—์„œ Keras๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ ์˜ˆ๋Š” [CNN/DailyMail](https://huggingface.co/datasets/cnn_dailymail) ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ [T5-small](https://huggingface.co/t5-small)์„ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค. T5 ๋ชจ๋ธ์€ ํ›ˆ๋ จ ๋ฐฉ์‹์— ๋”ฐ๋ผ ์ถ”๊ฐ€ `source_prefix` ์ธ์ˆ˜๊ฐ€ ํ•„์š”ํ•˜๋ฉฐ, ์ด ํ”„๋กฌํ”„ํŠธ๋Š” ์š”์•ฝ ์ž‘์—…์ž„์„ T5์— ์•Œ๋ ค์ค๋‹ˆ๋‹ค.

```bash
python examples/tensorflow/summarization/run_summarization.py \
    --model_name_or_path t5-small \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size 8 \
    --per_device_eval_batch_size 16 \
    --num_train_epochs 3 \
    --do_train \
    --do_eval
```
</tf>
</frameworkcontent>

## ํ˜ผํ•ฉ ์ •๋ฐ€๋„(mixed precision)๋กœ ๋ถ„์‚ฐ ํ›ˆ๋ จํ•˜๊ธฐ[[distributed-training-and-mixed-precision]]

[Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) ํด๋ž˜์Šค๋Š” ๋ถ„์‚ฐ ํ›ˆ๋ จ๊ณผ ํ˜ผํ•ฉ ์ •๋ฐ€๋„(mixed precision)๋ฅผ ์ง€์›ํ•˜๋ฏ€๋กœ ์Šคํฌ๋ฆฝํŠธ์—์„œ๋„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๋‘ ๊ฐ€์ง€ ๊ธฐ๋Šฅ์„ ๋ชจ๋‘ ํ™œ์„ฑํ™”ํ•˜๋ ค๋ฉด ๋‹ค์Œ ๋‘ ๊ฐ€์ง€๋ฅผ ์„ค์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค:

- `fp16` ์ธ์ˆ˜๋ฅผ ์ถ”๊ฐ€ํ•ด ํ˜ผํ•ฉ ์ •๋ฐ€๋„(mixed precision)๋ฅผ ํ™œ์„ฑํ™”ํ•ฉ๋‹ˆ๋‹ค.
- `nproc_per_node` ์ธ์ˆ˜๋ฅผ ์ถ”๊ฐ€ํ•ด ์‚ฌ์šฉํ•  GPU ๊ฐœ์ˆ˜๋ฅผ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค.

```bash
torchrun \
    --nproc_per_node 8 pytorch/summarization/run_summarization.py \
    --fp16 \
    --model_name_or_path t5-small \
    --do_train \
    --do_eval \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --predict_with_generate
```

TensorFlow ์Šคํฌ๋ฆฝํŠธ๋Š” ๋ถ„์‚ฐ ํ›ˆ๋ จ์„ ์œ„ํ•ด [`MirroredStrategy`](https://www.tensorflow.org/guide/distributed_training#mirroredstrategy)๋ฅผ ํ™œ์šฉํ•˜๋ฉฐ, ํ›ˆ๋ จ ์Šคํฌ๋ฆฝํŠธ์— ์ธ์ˆ˜๋ฅผ ์ถ”๊ฐ€ํ•  ํ•„์š”๊ฐ€ ์—†์Šต๋‹ˆ๋‹ค. ๋‹ค์ค‘ GPU ํ™˜๊ฒฝ์ด๋ผ๋ฉด, TensorFlow ์Šคํฌ๋ฆฝํŠธ๋Š” ๊ธฐ๋ณธ์ ์œผ๋กœ ์—ฌ๋Ÿฌ ๊ฐœ์˜ GPU๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค.

## TPU ์œ„์—์„œ ์Šคํฌ๋ฆฝํŠธ ์‹คํ–‰ํ•˜๊ธฐ[[run-a-script-on-a-tpu]]

<frameworkcontent>
<pt>
Tensor Processing Units (TPUs)๋Š” ์„ฑ๋Šฅ์„ ๊ฐ€์†ํ™”ํ•˜๊ธฐ ์œ„ํ•ด ํŠน๋ณ„ํžˆ ์„ค๊ณ„๋˜์—ˆ์Šต๋‹ˆ๋‹ค. PyTorch๋Š” [XLA](https://www.tensorflow.org/xla) ๋”ฅ๋Ÿฌ๋‹ ์ปดํŒŒ์ผ๋Ÿฌ์™€ ํ•จ๊ป˜ TPU๋ฅผ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค(์ž์„ธํ•œ ๋‚ด์šฉ์€ [์—ฌ๊ธฐ](https://github.com/pytorch/xla/blob/master/README.md) ์ฐธ์กฐ). TPU๋ฅผ ์‚ฌ์šฉํ•˜๋ ค๋ฉด `xla_spawn.py` ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์‹คํ–‰ํ•˜๊ณ  `num_cores` ์ธ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์‚ฌ์šฉํ•˜๋ ค๋Š” TPU ์ฝ”์–ด ์ˆ˜๋ฅผ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค.

```bash
python xla_spawn.py --num_cores 8 \
    summarization/run_summarization.py \
    --model_name_or_path t5-small \
    --do_train \
    --do_eval \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --predict_with_generate
```
</pt>
<tf>
Tensor Processing Units (TPUs)๋Š” ์„ฑ๋Šฅ์„ ๊ฐ€์†ํ™”ํ•˜๊ธฐ ์œ„ํ•ด ํŠน๋ณ„ํžˆ ์„ค๊ณ„๋˜์—ˆ์Šต๋‹ˆ๋‹ค. TensorFlow ์Šคํฌ๋ฆฝํŠธ๋Š” TPU๋ฅผ ํ›ˆ๋ จ์— ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด [`TPUStrategy`](https://www.tensorflow.org/guide/distributed_training#tpustrategy)๋ฅผ ํ™œ์šฉํ•ฉ๋‹ˆ๋‹ค. TPU๋ฅผ ์‚ฌ์šฉํ•˜๋ ค๋ฉด TPU ๋ฆฌ์†Œ์Šค์˜ ์ด๋ฆ„์„ `tpu` ์ธ์ˆ˜์— ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค.

```bash
python run_summarization.py \
    --tpu name_of_tpu_resource \
    --model_name_or_path t5-small \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size 8 \
    --per_device_eval_batch_size 16 \
    --num_train_epochs 3 \
    --do_train \
    --do_eval
```
</tf>
</frameworkcontent>

## ๐Ÿค— Accelerate๋กœ ์Šคํฌ๋ฆฝํŠธ ์‹คํ–‰ํ•˜๊ธฐ[[run-a-script-with-accelerate]]

๐Ÿค— [Accelerate](https://huggingface.co/docs/accelerate)๋Š” PyTorch ํ›ˆ๋ จ ๊ณผ์ •์— ๋Œ€ํ•œ ์™„์ „ํ•œ ๊ฐ€์‹œ์„ฑ์„ ์œ ์ง€ํ•˜๋ฉด์„œ ์—ฌ๋Ÿฌ ์œ ํ˜•์˜ ์„ค์ •(CPU ์ „์šฉ, ๋‹ค์ค‘ GPU, TPU)์—์„œ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•  ์ˆ˜ ์žˆ๋Š” ํ†ตํ•ฉ ๋ฐฉ๋ฒ•์„ ์ œ๊ณตํ•˜๋Š” PyTorch ์ „์šฉ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์ž…๋‹ˆ๋‹ค. ๐Ÿค— Accelerate๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”:

> ์ฐธ๊ณ : Accelerate๋Š” ๋น ๋ฅด๊ฒŒ ๊ฐœ๋ฐœ ์ค‘์ด๋ฏ€๋กœ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋ ค๋ฉด accelerate๋ฅผ ์„ค์น˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค.

```bash
pip install git+https://github.com/huggingface/accelerate
```

`run_summarization.py` ์Šคํฌ๋ฆฝํŠธ ๋Œ€์‹  `run_summarization_no_trainer.py` ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์‚ฌ์šฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๐Ÿค— Accelerate ํด๋ž˜์Šค๊ฐ€ ์ง€์›๋˜๋Š” ์Šคํฌ๋ฆฝํŠธ๋Š” ํด๋”์— `task_no_trainer.py` ํŒŒ์ผ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ ๋ช…๋ น์„ ์‹คํ–‰ํ•˜์—ฌ ๊ตฌ์„ฑ ํŒŒ์ผ์„ ์ƒ์„ฑํ•˜๊ณ  ์ €์žฅํ•ฉ๋‹ˆ๋‹ค:

```bash
accelerate config
```

์„ค์ •์„ ํ…Œ์ŠคํŠธํ•˜์—ฌ ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ๊ตฌ์„ฑ๋˜์—ˆ๋Š”์ง€ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค:

```bash
accelerate test
```

์ด์ œ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค:

```bash
accelerate launch run_summarization_no_trainer.py \
    --model_name_or_path t5-small \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir ~/tmp/tst-summarization
```

## ์‚ฌ์šฉ์ž ์ •์˜ ๋ฐ์ดํ„ฐ ์„ธํŠธ ์‚ฌ์šฉํ•˜๊ธฐ[[use-a-custom-dataset]]

์š”์•ฝ ์Šคํฌ๋ฆฝํŠธ๋Š” ์‚ฌ์šฉ์ž ์ง€์ • ๋ฐ์ดํ„ฐ ์„ธํŠธ๊ฐ€ CSV ๋˜๋Š” JSON ํŒŒ์ผ์ธ ๊ฒฝ์šฐ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค.
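์˜ˆ๋ฅผ ๋“ค์–ด ์š”์•ฝ์šฉ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋Š” ํ•œ ์ค„์— JSON ๊ฐ์ฒด๊ฐ€ ํ•˜๋‚˜์”ฉ ๋“ค์–ด ์žˆ๋Š” JSON Lines ํ˜•์‹์œผ๋กœ ์ค€๋น„ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ์€ ๊ทธ๋Ÿฐ ํŒŒ์ผ์„ ๋งŒ๋“œ๋Š” ๊ฐ„๋‹จํ•œ ์Šค์ผ€์น˜์ž…๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์— ์“ฐ์ธ ๋ฐ์ดํ„ฐ์™€ `text`/`summary`๋ผ๋Š” ์—ด ์ด๋ฆ„์€ ์„ค๋ช…์„ ์œ„ํ•œ ๊ฐ€์ •์ผ ๋ฟ์ด๋ฉฐ, ์‹ค์ œ ์—ด ์ด๋ฆ„์€ ์ž์œ ๋กญ๊ฒŒ ์ •ํ•œ ๋’ค ์Šคํฌ๋ฆฝํŠธ ์ธ์ˆ˜๋กœ ์•Œ๋ ค์ฃผ๋ฉด ๋ฉ๋‹ˆ๋‹ค.

```python
import json

# ๊ฐ€์ƒ์˜ ์˜ˆ์‹œ ๋ฐ์ดํ„ฐ์ž…๋‹ˆ๋‹ค. ์‹ค์ œ๋กœ๋Š” ์ž์‹ ์˜ ๋ฐ์ดํ„ฐ๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”.
examples = [
    {"text": "The Eiffel Tower is 324 metres tall and was completed in 1889.",
     "summary": "The Eiffel Tower was completed in 1889."},
    {"text": "Transformers provides thousands of pretrained models for NLP tasks.",
     "summary": "Transformers offers many pretrained NLP models."},
]

# JSON Lines ํ˜•์‹: ํ•œ ์ค„์— JSON ๊ฐ์ฒด ํ•˜๋‚˜์”ฉ ๊ธฐ๋กํ•ฉ๋‹ˆ๋‹ค.
with open("train.json", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example, ensure_ascii=False) + "\n")
```

์ด๋ ‡๊ฒŒ ๋งŒ๋“  ํŒŒ์ผ์˜ ๊ฒฝ๋กœ๋ฅผ ํ›ˆ๋ จ ๋ฐ ๊ฒ€์ฆ ํŒŒ์ผ ์ธ์ˆ˜๋กœ ์ „๋‹ฌํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค.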
์‚ฌ์šฉ์ž ์ง€์ • ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ์—๋Š” ๋ช‡ ๊ฐ€์ง€ ์ถ”๊ฐ€ ์ธ์ˆ˜๋ฅผ ์ง€์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: - `train_file`๊ณผ `validation_file`์€ ํ›ˆ๋ จ ๋ฐ ๊ฒ€์ฆ ํŒŒ์ผ์˜ ๊ฒฝ๋กœ๋ฅผ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค. - `text_column`์€ ์š”์•ฝํ•  ์ž…๋ ฅ ํ…์ŠคํŠธ์ž…๋‹ˆ๋‹ค. - `summary_column`์€ ์ถœ๋ ฅํ•  ๋Œ€์ƒ ํ…์ŠคํŠธ์ž…๋‹ˆ๋‹ค. ์‚ฌ์šฉ์ž ์ง€์ • ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ์š”์•ฝ ์Šคํฌ๋ฆฝํŠธ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```bash python examples/pytorch/summarization/run_summarization.py \ --model_name_or_path t5-small \ --do_train \ --do_eval \ --train_file path_to_csv_or_jsonlines_file \ --validation_file path_to_csv_or_jsonlines_file \ --text_column text_column_name \ --summary_column summary_column_name \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --overwrite_output_dir \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --predict_with_generate ``` ## ์Šคํฌ๋ฆฝํŠธ ํ…Œ์ŠคํŠธํ•˜๊ธฐ[[test-a-script]] ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋Œ€์ƒ์œผ๋กœ ํ›ˆ๋ จ์„ ์™„๋ฃŒํ•˜๋Š”๋ฐ ๊ฝค ์˜ค๋žœ ์‹œ๊ฐ„์ด ๊ฑธ๋ฆฌ๊ธฐ ๋•Œ๋ฌธ์—, ์ž‘์€ ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ ๋ชจ๋“  ๊ฒƒ์ด ์˜ˆ์ƒ๋Œ€๋กœ ์‹คํ–‰๋˜๋Š”์ง€ ํ™•์ธํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ ์ธ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์ตœ๋Œ€ ์ƒ˜ํ”Œ ์ˆ˜๋กœ ์ž˜๋ผ๋ƒ…๋‹ˆ๋‹ค: - `max_train_samples` - `max_eval_samples` - `max_predict_samples` ```bash python examples/pytorch/summarization/run_summarization.py \ --model_name_or_path t5-small \ --max_train_samples 50 \ --max_eval_samples 50 \ --max_predict_samples 50 \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate ``` ๋ชจ๋“  ์˜ˆ์ œ ์Šคํฌ๋ฆฝํŠธ๊ฐ€ `max_predict_samples` ์ธ์ˆ˜๋ฅผ ์ง€์›ํ•˜์ง€๋Š” ์•Š์Šต๋‹ˆ๋‹ค. 
์Šคํฌ๋ฆฝํŠธ๊ฐ€ ์ด ์ธ์ˆ˜๋ฅผ ์ง€์›ํ•˜๋Š”์ง€ ํ™•์‹คํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ `-h` ์ธ์ˆ˜๋ฅผ ์ถ”๊ฐ€ํ•˜์—ฌ ํ™•์ธํ•˜์„ธ์š”: ```bash examples/pytorch/summarization/run_summarization.py -h ``` ## ์ฒดํฌํฌ์ธํŠธ(checkpoint)์—์„œ ํ›ˆ๋ จ ์ด์–ด์„œ ํ•˜๊ธฐ[[resume-training-from-checkpoint]] ๋˜ ๋‹ค๋ฅธ ์œ ์šฉํ•œ ์˜ต์…˜์€ ์ด์ „ ์ฒดํฌํฌ์ธํŠธ์—์„œ ํ›ˆ๋ จ์„ ์žฌ๊ฐœํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ํ›ˆ๋ จ์ด ์ค‘๋‹จ๋˜๋”๋ผ๋„ ์ฒ˜์Œ๋ถ€ํ„ฐ ๋‹ค์‹œ ์‹œ์ž‘ํ•˜์ง€ ์•Š๊ณ  ์ค‘๋‹จํ•œ ๋ถ€๋ถ„๋ถ€ํ„ฐ ๋‹ค์‹œ ์‹œ์ž‘ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ฒดํฌํฌ์ธํŠธ์—์„œ ํ›ˆ๋ จ์„ ์žฌ๊ฐœํ•˜๋Š” ๋ฐฉ๋ฒ•์—๋Š” ๋‘ ๊ฐ€์ง€๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ์ฒซ ๋ฒˆ์งธ๋Š” `output_dir previous_output_dir` ์ธ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ `output_dir`์— ์ €์žฅ๋œ ์ตœ์‹  ์ฒดํฌํฌ์ธํŠธ๋ถ€ํ„ฐ ํ›ˆ๋ จ์„ ์žฌ๊ฐœํ•˜๋Š” ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค. ์ด ๊ฒฝ์šฐ `overwrite_output_dir`์„ ์ œ๊ฑฐํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```bash python examples/pytorch/summarization/run_summarization.py --model_name_or_path t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --output_dir previous_output_dir \ --predict_with_generate ``` ๋‘ ๋ฒˆ์งธ๋Š” `resume_from_checkpoint path_to_specific_checkpoint` ์ธ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํŠน์ • ์ฒดํฌํฌ์ธํŠธ ํด๋”์—์„œ ํ›ˆ๋ จ์„ ์žฌ๊ฐœํ•˜๋Š” ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค. ```bash python examples/pytorch/summarization/run_summarization.py --model_name_or_path t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --resume_from_checkpoint path_to_specific_checkpoint \ --predict_with_generate ``` ## ๋ชจ๋ธ ๊ณต์œ ํ•˜๊ธฐ[[share-your-model]] ๋ชจ๋“  ์Šคํฌ๋ฆฝํŠธ๋Š” ์ตœ์ข… ๋ชจ๋ธ์„ [Model Hub](https://huggingface.co/models)์— ์—…๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
์‹œ์ž‘ํ•˜๊ธฐ ์ „์— Hugging Face์— ๋กœ๊ทธ์ธํ–ˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash huggingface-cli login ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ์Šคํฌ๋ฆฝํŠธ์— `push_to_hub` ์ธ์ˆ˜๋ฅผ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ์ด ์ธ์ˆ˜๋Š” Hugging Face ์‚ฌ์šฉ์ž ์ด๋ฆ„๊ณผ `output_dir`์— ์ง€์ •๋œ ํด๋” ์ด๋ฆ„์œผ๋กœ ์ €์žฅ์†Œ๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ์ €์žฅ์†Œ์— ํŠน์ • ์ด๋ฆ„์„ ์ง€์ •ํ•˜๋ ค๋ฉด `push_to_hub_model_id` ์ธ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ์ €์žฅ์†Œ๋Š” ๋„ค์ž„์ŠคํŽ˜์ด์Šค ์•„๋ž˜์— ์ž๋™์œผ๋กœ ๋‚˜์—ด๋ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ ์˜ˆ๋Š” ํŠน์ • ์ €์žฅ์†Œ ์ด๋ฆ„์œผ๋กœ ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๋Š” ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค: ```bash python examples/pytorch/summarization/run_summarization.py --model_name_or_path t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --push_to_hub \ --push_to_hub_model_id finetuned-t5-cnn_dailymail \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate ```
hf_public_repos/transformers/docs/source/ko/_config.py
# docstyle-ignore
INSTALL_CONTENT = """
# Transformers ์„ค์น˜ ๋ฐฉ๋ฒ•
! pip install transformers datasets
# ๋งˆ์ง€๋ง‰ ๋ฆด๋ฆฌ์Šค ๋Œ€์‹  ์†Œ์Šค์—์„œ ์„ค์น˜ํ•˜๋ ค๋ฉด, ์œ„ ๋ช…๋ น์„ ์ฃผ์„์œผ๋กœ ๋ฐ”๊พธ๊ณ  ์•„๋ž˜ ๋ช…๋ น์„ ํ•ด์ œํ•˜์„ธ์š”.
# ! pip install git+https://github.com/huggingface/transformers.git
"""

notebook_first_cells = [{"type": "code", "content": INSTALL_CONTENT}]
black_avoid_patterns = {
    "{processor_class}": "FakeProcessorClass",
    "{model_class}": "FakeModelClass",
    "{object_class}": "FakeObjectClass",
}
hf_public_repos/transformers/docs/source/ko/task_summary.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# ๐Ÿค— Transformers๋กœ ํ•  ์ˆ˜ ์žˆ๋Š” ๊ฒƒ[[what__transformers_can_do]]

๐Ÿค— Transformers๋Š” ์ž์—ฐ์–ด์ฒ˜๋ฆฌ(NLP), ์ปดํ“จํ„ฐ ๋น„์ „, ์˜ค๋””์˜ค ๋ฐ ์Œ์„ฑ ์ฒ˜๋ฆฌ ์ž‘์—…์— ๋Œ€ํ•œ ์‚ฌ์ „ํ›ˆ๋ จ๋œ ์ตœ์ฒจ๋‹จ ๋ชจ๋ธ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์ž…๋‹ˆ๋‹ค. ์ด ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋Š” ํŠธ๋žœ์Šคํฌ๋จธ ๋ชจ๋ธ๋ฟ๋งŒ ์•„๋‹ˆ๋ผ ์ปดํ“จํ„ฐ ๋น„์ „ ์ž‘์—…์„ ์œ„ํ•œ ํ˜„๋Œ€์ ์ธ ํ•ฉ์„ฑ๊ณฑ ์‹ ๊ฒฝ๋ง๊ณผ ๊ฐ™์€ ํŠธ๋žœ์Šคํฌ๋จธ๊ฐ€ ์•„๋‹Œ ๋ชจ๋ธ๋„ ํฌํ•จํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ์Šค๋งˆํŠธํฐ, ์•ฑ, ํ…”๋ ˆ๋น„์ „๊ณผ ๊ฐ™์€ ์˜ค๋Š˜๋‚  ๊ฐ€์žฅ ์ธ๊ธฐ ์žˆ๋Š” ์†Œ๋น„์ž ์ œํ’ˆ์„ ์‚ดํŽด๋ณด๋ฉด, ๋”ฅ๋Ÿฌ๋‹ ๊ธฐ์ˆ ์ด ๊ทธ ๋’ค์— ์‚ฌ์šฉ๋˜๊ณ  ์žˆ์„ ํ™•๋ฅ ์ด ๋†’์Šต๋‹ˆ๋‹ค. ์Šค๋งˆํŠธํฐ์œผ๋กœ ์ดฌ์˜ํ•œ ์‚ฌ์ง„์—์„œ ๋ฐฐ๊ฒฝ ๊ฐ์ฒด๋ฅผ ์ œ๊ฑฐํ•˜๊ณ  ์‹ถ๋‹ค๋ฉด ์–ด๋–ป๊ฒŒ ํ• ๊นŒ์š”? ์ด๋Š” ํŒŒ๋†‰ํ‹ฑ ์„ธ๊ทธ๋ฉ˜ํ…Œ์ด์…˜ ์ž‘์—…์˜ ์˜ˆ์ž…๋‹ˆ๋‹ค(์•„์ง ์ด๊ฒŒ ๋ฌด์—‡์ธ์ง€ ๋ชจ๋ฅธ๋‹ค๋ฉด, ๋‹ค์Œ ์„น์…˜์—์„œ ์„ค๋ช…ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค!).

์ด ํŽ˜์ด์ง€๋Š” ๋‹ค์–‘ํ•œ ์Œ์„ฑ ๋ฐ ์˜ค๋””์˜ค, ์ปดํ“จํ„ฐ ๋น„์ „, NLP ์ž‘์—…์„ ๐Ÿค— Transformers ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ํ™œ์šฉํ•˜์—ฌ ๋‹ค๋ฃจ๋Š” ๊ฐ„๋‹จํ•œ ์˜ˆ์ œ๋ฅผ 3์ค„์˜ ์ฝ”๋“œ๋กœ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค.

## ์˜ค๋””์˜ค[[audio]]

์Œ์„ฑ ๋ฐ ์˜ค๋””์˜ค ์ฒ˜๋ฆฌ ์ž‘์—…์€ ๋‹ค๋ฅธ ๋ชจ๋‹ฌ๋ฆฌํ‹ฐ์™€ ์•ฝ๊ฐ„ ๋‹ค๋ฆ…๋‹ˆ๋‹ค. ์ด๋Š” ์ฃผ๋กœ ์˜ค๋””์˜ค๊ฐ€ ์—ฐ์†์ ์ธ ์‹ ํ˜ธ๋กœ ์ž…๋ ฅ๋˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ํ…์ŠคํŠธ์™€ ๋‹ฌ๋ฆฌ ์›๋ณธ ์˜ค๋””์˜ค ํŒŒํ˜•(waveform)์€ ๋ฌธ์žฅ์ด ๋‹จ์–ด๋กœ ๋‚˜๋ˆ ์ง€๋Š” ๊ฒƒ์ฒ˜๋Ÿผ ๊น”๋”ํ•˜๊ฒŒ ์ด์‚ฐ์ ์ธ ๋ฌถ์Œ์œผ๋กœ ๋‚˜๋ˆŒ ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค. ์ด๋ฅผ ๊ทน๋ณตํ•˜๊ธฐ ์œ„ํ•ด ์›๋ณธ ์˜ค๋””์˜ค ์‹ ํ˜ธ๋Š” ์ผ์ •ํ•œ ๊ฐ„๊ฒฉ์œผ๋กœ ์ƒ˜ํ”Œ๋ง๋ฉ๋‹ˆ๋‹ค. ํ•ด๋‹น ๊ฐ„๊ฒฉ ๋‚ด์—์„œ ๋” ๋งŽ์€ ์ƒ˜ํ”Œ์„ ์ทจํ•  ๊ฒฝ์šฐ ์ƒ˜ํ”Œ๋ง๋ฅ ์ด ๋†’์•„์ง€๋ฉฐ, ์˜ค๋””์˜ค๋Š” ์›๋ณธ ์˜ค๋””์˜ค ์†Œ์Šค์— ๋” ๊ฐ€๊นŒ์›Œ์ง‘๋‹ˆ๋‹ค.

๊ณผ๊ฑฐ์˜ ์ ‘๊ทผ ๋ฐฉ์‹์€ ์˜ค๋””์˜ค์—์„œ ์œ ์šฉํ•œ ํŠน์ง•์„ ์ถ”์ถœํ•˜๊ธฐ ์œ„ํ•ด ์˜ค๋””์˜ค๋ฅผ ์ „์ฒ˜๋ฆฌํ•˜๋Š” ๊ฒƒ์ด์—ˆ์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ํ˜„์žฌ๋Š” ์›๋ณธ ์˜ค๋””์˜ค ํŒŒํ˜•์„ ํŠน์„ฑ ์ธ์ฝ”๋”์— ์ง์ ‘ ๋„ฃ์–ด์„œ ์˜ค๋””์˜ค ํ‘œํ˜„(representation)์„ ์ถ”์ถœํ•˜๋Š” ๊ฒƒ์ด ๋” ์ผ๋ฐ˜์ ์ž…๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ์ „์ฒ˜๋ฆฌ ๋‹จ๊ณ„๊ฐ€ ๋‹จ์ˆœํ•ด์ง€๊ณ  ๋ชจ๋ธ์ด ๊ฐ€์žฅ ์ค‘์š”ํ•œ ํŠน์ง•์„ ํ•™์Šตํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

### ์˜ค๋””์˜ค ๋ถ„๋ฅ˜[[audio_classification]]

์˜ค๋””์˜ค ๋ถ„๋ฅ˜๋Š” ์˜ค๋””์˜ค ๋ฐ์ดํ„ฐ์— ๋ฏธ๋ฆฌ ์ •์˜๋œ ํด๋ž˜์Šค ์ง‘ํ•ฉ์˜ ๋ ˆ์ด๋ธ”์„ ์ง€์ •ํ•˜๋Š” ์ž‘์—…์ž…๋‹ˆ๋‹ค. ์ด๋Š” ๋งŽ์€ ๊ตฌ์ฒด์ ์ธ ์‘์šฉ ํ”„๋กœ๊ทธ๋žจ์„ ํฌํ•จํ•œ ๋„“์€ ๋ฒ”์ฃผ์ž…๋‹ˆ๋‹ค. ์ผ๋ถ€ ์˜ˆ์‹œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค:

* ์Œํ–ฅ ์žฅ๋ฉด ๋ถ„๋ฅ˜: ์˜ค๋””์˜ค์— ์žฅ๋ฉด ๋ ˆ์ด๋ธ”("์‚ฌ๋ฌด์‹ค", "ํ•ด๋ณ€", "๊ฒฝ๊ธฐ์žฅ")์„ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค.
* ์Œํ–ฅ ์ด๋ฒคํŠธ ๊ฐ์ง€: ์˜ค๋””์˜ค์— ์†Œ๋ฆฌ ์ด๋ฒคํŠธ ๋ ˆ์ด๋ธ”("์ฐจ ๊ฒฝ์ ", "๊ณ ๋ž˜ ์šธ์Œ์†Œ๋ฆฌ", "์œ ๋ฆฌ ํŒŒ์†")์„ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค.
* ํƒœ๊น…: ์—ฌ๋Ÿฌ ๊ฐ€์ง€ ์†Œ๋ฆฌ(์ƒˆ ์ง€์ €๊ท, ํšŒ์˜์—์„œ์˜ ํ™”์ž ์‹๋ณ„)๊ฐ€ ํฌํ•จ๋œ ์˜ค๋””์˜ค์— ๋ ˆ์ด๋ธ”์„ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค.
* ์Œ์•… ๋ถ„๋ฅ˜: ์Œ์•…์— ์žฅ๋ฅด ๋ ˆ์ด๋ธ”("๋ฉ”ํƒˆ", "ํž™ํ•ฉ", "์ปจํŠธ๋ฆฌ")์„ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค.

```py
>>> from transformers import pipeline

>>> classifier = pipeline(task="audio-classification", model="superb/hubert-base-superb-er")
>>> preds = classifier("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")
>>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds]
>>> preds
[{'score': 0.4532, 'label': 'hap'},
 {'score': 0.3622, 'label': 'sad'},
 {'score': 0.0943, 'label': 'neu'},
 {'score': 0.0903, 'label': 'ang'}]
```

### ์ž๋™ ์Œ์„ฑ ์ธ์‹[[automatic_speech_recognition]]

์ž๋™ ์Œ์„ฑ ์ธ์‹(ASR)์€ ์Œ์„ฑ์„ ํ…์ŠคํŠธ๋กœ ๋ณ€ํ™˜ํ•˜๋Š” ์ž‘์—…์ž…๋‹ˆ๋‹ค. ์Œ์„ฑ์€ ์ธ๊ฐ„์˜ ์ž์—ฐ์Šค๋Ÿฌ์šด ์˜์‚ฌ์†Œํ†ต ํ˜•ํƒœ์ด๊ธฐ ๋•Œ๋ฌธ์— ASR์€ ๊ฐ€์žฅ ์ผ๋ฐ˜์ ์ธ ์˜ค๋””์˜ค ์ž‘์—… ์ค‘ ํ•˜๋‚˜์ž…๋‹ˆ๋‹ค. ์˜ค๋Š˜๋‚  ASR ์‹œ์Šคํ…œ์€ ์Šคํ”ผ์ปค, ์ „ํ™” ๋ฐ ์ž๋™์ฐจ์™€ ๊ฐ™์€ "์Šค๋งˆํŠธ" ๊ธฐ์ˆ  ์ œํ’ˆ์— ๋‚ด์žฅ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ์šฐ๋ฆฌ๋Š” ๊ฐ€์ƒ ๋น„์„œ์—๊ฒŒ ์Œ์•… ์žฌ์ƒ, ์•Œ๋ฆผ ์„ค์ • ๋ฐ ๋‚ ์”จ ์ •๋ณด๋ฅผ ์š”์ฒญํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

ํ•˜์ง€๋งŒ ํŠธ๋žœ์Šคํฌ๋จธ ์•„ํ‚คํ…์ฒ˜๊ฐ€ ํ•ด๊ฒฐํ•˜๋Š” ๋ฐ ๋„์›€์„ ์ค€ ํ•ต์‹ฌ ๋„์ „ ๊ณผ์ œ ์ค‘ ํ•˜๋‚˜๋Š” ๋ฐ์ดํ„ฐ ์–‘์ด ์ ์€ ์–ธ์–ด(low-resource language)์— ๋Œ€ํ•œ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋Œ€๋Ÿ‰์˜ ์Œ์„ฑ ๋ฐ์ดํ„ฐ๋กœ ์‚ฌ์ „ ํ›ˆ๋ จํ•œ ํ›„ ๋ฐ์ดํ„ฐ ์–‘์ด ์ ์€ ์–ธ์–ด์—์„œ ๋ ˆ์ด๋ธ”์ด ์ง€์ •๋œ ์Œ์„ฑ ๋ฐ์ดํ„ฐ 1์‹œ๊ฐ„๋งŒ์œผ๋กœ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋ฉด, 100๋ฐฐ ๋” ๋งŽ์€ ๋ ˆ์ด๋ธ”์ด ์ง€์ •๋œ ๋ฐ์ดํ„ฐ๋กœ ํ›ˆ๋ จ๋œ ์ด์ „์˜ ASR ์‹œ์Šคํ…œ๋ณด๋‹ค ํ›จ์”ฌ ๋” ๋†’์€ ํ’ˆ์งˆ์˜ ๊ฒฐ๊ณผ๋ฅผ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

```py
>>> from transformers import pipeline

>>> transcriber = pipeline(task="automatic-speech-recognition", model="openai/whisper-small")
>>> transcriber("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")
{'text': ' I have a dream that one day this nation will rise up and live out the true meaning of its creed.'}
```

## ์ปดํ“จํ„ฐ ๋น„์ „[[computer_vision]]

์ปดํ“จํ„ฐ ๋น„์ „ ์ž‘์—… ์ค‘ ๊ฐ€์žฅ ์ดˆ๊ธฐ์˜ ์„ฑ๊ณต์ ์ธ ์ž‘์—… ์ค‘ ํ•˜๋‚˜๋Š” [ํ•ฉ์„ฑ๊ณฑ ์‹ ๊ฒฝ๋ง(CNN)](glossary#convolution)์„ ์‚ฌ์šฉํ•˜์—ฌ ์šฐํŽธ๋ฒˆํ˜ธ ์ˆซ์ž ์ด๋ฏธ์ง€๋ฅผ ์ธ์‹ํ•˜๋Š” ๊ฒƒ์ด์—ˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€๋Š” ํ”ฝ์…€๋กœ ๊ตฌ์„ฑ๋˜์–ด ์žˆ์œผ๋ฉฐ ๊ฐ ํ”ฝ์…€์€ ์ˆซ์ž ๊ฐ’์œผ๋กœ ํ‘œํ˜„๋ฉ๋‹ˆ๋‹ค. ์ด๋กœ์จ ์ด๋ฏธ์ง€๋ฅผ ํ”ฝ์…€ ๊ฐ’์˜ ํ–‰๋ ฌ๋กœ ๋‚˜ํƒ€๋‚ด๋Š” ๊ฒƒ์ด ์‰ฌ์›Œ์ง‘๋‹ˆ๋‹ค. ํŠน์ •ํ•œ ํ”ฝ์…€ ๊ฐ’์˜ ์กฐํ•ฉ์€ ์ด๋ฏธ์ง€์˜ ์ƒ‰์ƒ์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค.

์ปดํ“จํ„ฐ ๋น„์ „ ์ž‘์—…์€ ์ผ๋ฐ˜์ ์œผ๋กœ ๋‹ค์Œ ๋‘ ๊ฐ€์ง€ ๋ฐฉ๋ฒ•์œผ๋กœ ์ ‘๊ทผ ๊ฐ€๋Šฅํ•ฉ๋‹ˆ๋‹ค:

1. ํ•ฉ์„ฑ๊ณฑ์„ ์‚ฌ์šฉํ•˜์—ฌ ์ด๋ฏธ์ง€์˜ ๋‚ฎ์€ ์ˆ˜์ค€ ํŠน์ง•์—์„œ ๋†’์€ ์ˆ˜์ค€์˜ ์ถ”์ƒ์ ์ธ ์š”์†Œ๊นŒ์ง€ ๊ณ„์ธต์ ์œผ๋กœ ํ•™์Šตํ•ฉ๋‹ˆ๋‹ค.
2. ์ด๋ฏธ์ง€๋ฅผ ํŒจ์น˜๋กœ ๋‚˜๋ˆ„๊ณ  ํŠธ๋žœ์Šคํฌ๋จธ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ ์ง„์ ์œผ๋กœ ๊ฐ ์ด๋ฏธ์ง€ ํŒจ์น˜๊ฐ€ ์„œ๋กœ ์–ด๋– ํ•œ ๋ฐฉ์‹์œผ๋กœ ์—ฐ๊ด€๋˜์–ด ์ด๋ฏธ์ง€๋ฅผ ํ˜•์„ฑํ•˜๋Š”์ง€ ํ•™์Šตํ•ฉ๋‹ˆ๋‹ค. `CNN`์—์„œ ์„ ํ˜ธํ•˜๋Š” ์ƒํ–ฅ์‹ ์ ‘๊ทผ๋ฒ•๊ณผ๋Š” ๋‹ฌ๋ฆฌ, ์ด ๋ฐฉ์‹์€ ํ๋ฆฟํ•œ ์ด๋ฏธ์ง€๋กœ ์ดˆ์•ˆ์„ ๊ทธ๋ฆฌ๊ณ  ์ ์ง„์ ์œผ๋กœ ์„ ๋ช…ํ•œ ์ด๋ฏธ์ง€๋กœ ๋งŒ๋“ค์–ด๊ฐ€๋Š” ๊ฒƒ๊ณผ ์œ ์‚ฌํ•ฉ๋‹ˆ๋‹ค.

### ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜[[image_classification]]

์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜๋Š” ํ•œ ๊ฐœ์˜ ์ „์ฒด ์ด๋ฏธ์ง€์— ๋ฏธ๋ฆฌ ์ •์˜๋œ ํด๋ž˜์Šค ์ง‘ํ•ฉ์˜ ๋ ˆ์ด๋ธ”์„ ์ง€์ •ํ•˜๋Š” ์ž‘์—…์ž…๋‹ˆ๋‹ค. ๋Œ€๋ถ€๋ถ„์˜ ๋ถ„๋ฅ˜ ์ž‘์—…๊ณผ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ, ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜์—๋Š” ๋‹ค์–‘ํ•œ ์‹ค์šฉ์ ์ธ ์šฉ๋„๊ฐ€ ์žˆ์œผ๋ฉฐ, ์ผ๋ถ€ ์˜ˆ์‹œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค:

* ์˜๋ฃŒ: ์งˆ๋ณ‘์„ ๊ฐ์ง€ํ•˜๊ฑฐ๋‚˜ ํ™˜์ž ๊ฑด๊ฐ•์„ ๋ชจ๋‹ˆํ„ฐ๋งํ•˜๊ธฐ ์œ„ํ•ด ์˜๋ฃŒ ์ด๋ฏธ์ง€์— ๋ ˆ์ด๋ธ”์„ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค.
* ํ™˜๊ฒฝ: ์œ„์„ฑ ์ด๋ฏธ์ง€๋ฅผ ๋ถ„๋ฅ˜ํ•˜์—ฌ ์‚ฐ๋ฆผ ๋ฒŒ์ฑ„๋ฅผ ๊ฐ์‹œํ•˜๊ณ  ์•ผ์ƒ ์ง€์—ญ ๊ด€๋ฆฌ๋ฅผ ์œ„ํ•œ ์ •๋ณด๋ฅผ ์ œ๊ณตํ•˜๊ฑฐ๋‚˜ ์‚ฐ๋ถˆ์„ ๊ฐ์ง€ํ•ฉ๋‹ˆ๋‹ค.
* ๋†์—…: ์ž‘๋ฌผ ์ด๋ฏธ์ง€๋ฅผ ๋ถ„๋ฅ˜ํ•˜์—ฌ ์‹๋ฌผ ๊ฑด๊ฐ•์„ ํ™•์ธํ•˜๊ฑฐ๋‚˜ ์œ„์„ฑ ์ด๋ฏธ์ง€๋ฅผ ๋ถ„๋ฅ˜ํ•˜์—ฌ ํ† ์ง€ ์ด์šฉ ๊ด€์ฐฐ์— ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค.
* ์ƒํƒœํ•™: ๋™๋ฌผ์ด๋‚˜ ์‹๋ฌผ ์ข… ์ด๋ฏธ์ง€๋ฅผ ๋ถ„๋ฅ˜ํ•˜์—ฌ ์•ผ์ƒ ๋™๋ฌผ ๊ฐœ์ฒด๊ตฐ์„ ์กฐ์‚ฌํ•˜๊ฑฐ๋‚˜ ๋ฉธ์ข… ์œ„๊ธฐ์— ์ฒ˜ํ•œ ์ข…์„ ์ถ”์ ํ•ฉ๋‹ˆ๋‹ค.

```py
>>> from transformers import pipeline

>>> classifier = pipeline(task="image-classification")
>>> preds = classifier(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
... )
>>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds]
>>> print(*preds, sep="\n")
{'score': 0.4335, 'label': 'lynx, catamount'}
{'score': 0.0348, 'label': 'cougar, puma, catamount, mountain lion, painter, panther, Felis concolor'}
{'score': 0.0324, 'label': 'snow leopard, ounce, Panthera uncia'}
{'score': 0.0239, 'label': 'Egyptian cat'}
{'score': 0.0229, 'label': 'tiger cat'}
```

### ๊ฐ์ฒด ํƒ์ง€[[object_detection]]

์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜์™€ ๋‹ฌ๋ฆฌ ๊ฐ์ฒด ํƒ์ง€๋Š” ์ด๋ฏธ์ง€ ๋‚ด์—์„œ ์—ฌ๋Ÿฌ ๊ฐ์ฒด๋ฅผ ์‹๋ณ„ํ•˜๊ณ  ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค๋กœ ์ •์˜๋œ ๊ฐ์ฒด์˜ ์œ„์น˜๋ฅผ ํŒŒ์•…ํ•ฉ๋‹ˆ๋‹ค. ๊ฐ์ฒด ํƒ์ง€์˜ ๋ช‡ ๊ฐ€์ง€ ์‘์šฉ ์˜ˆ์‹œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค:

* ์ž์œจ ์ฃผํ–‰ ์ฐจ๋Ÿ‰: ๋‹ค๋ฅธ ์ฐจ๋Ÿ‰, ๋ณดํ–‰์ž ๋ฐ ์‹ ํ˜ธ๋“ฑ๊ณผ ๊ฐ™์€ ์ผ์ƒ์ ์ธ ๊ตํ†ต ๊ฐ์ฒด๋ฅผ ๊ฐ์ง€ํ•ฉ๋‹ˆ๋‹ค.
* ์›๊ฒฉ ๊ฐ์ง€: ์žฌ๋‚œ ๋ชจ๋‹ˆํ„ฐ๋ง, ๋„์‹œ ๊ณ„ํš ๋ฐ ๊ธฐ์ƒ ์˜ˆ์ธก ๋“ฑ์„ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค.
* ๊ฒฐํ•จ ํƒ์ง€: ๊ฑด๋ฌผ์˜ ๊ท ์—ด์ด๋‚˜ ๊ตฌ์กฐ์  ์†์ƒ, ์ œ์กฐ ๊ฒฐํ•จ ๋“ฑ์„ ํƒ์ง€ํ•ฉ๋‹ˆ๋‹ค.

```py
>>> from transformers import pipeline

>>> detector = pipeline(task="object-detection")
>>> preds = detector(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
... )
>>> preds = [{"score": round(pred["score"], 4), "label": pred["label"], "box": pred["box"]} for pred in preds]
>>> preds
[{'score': 0.9865,
  'label': 'cat',
  'box': {'xmin': 178, 'ymin': 154, 'xmax': 882, 'ymax': 598}}]
```

### ์ด๋ฏธ์ง€ ๋ถ„ํ• [[image_segmentation]]

์ด๋ฏธ์ง€ ๋ถ„ํ• ์€ ํ”ฝ์…€ ์ฐจ์›์˜ ์ž‘์—…์œผ๋กœ, ์ด๋ฏธ์ง€ ๋‚ด์˜ ๋ชจ๋“  ํ”ฝ์…€์„ ํด๋ž˜์Šค์— ํ• ๋‹นํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ๊ฐ์ฒด ํƒ์ง€์™€ ๋‹ค๋ฆ…๋‹ˆ๋‹ค. ๊ฐ์ฒด ํƒ์ง€๋Š” ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ด๋ฏธ์ง€ ๋‚ด์˜ ๊ฐ์ฒด๋ฅผ ๋ ˆ์ด๋ธ”๋งํ•˜๊ณ  ์˜ˆ์ธกํ•˜๋Š” ๋ฐ˜๋ฉด, ๋ถ„ํ• ์€ ๋” ์„ธ๋ถ„ํ™”๋œ ์ž‘์—…์ž…๋‹ˆ๋‹ค. ๋ถ„ํ• ์€ ํ”ฝ์…€ ์ˆ˜์ค€์—์„œ ๊ฐ์ฒด๋ฅผ ๊ฐ์ง€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€ ๋ถ„ํ• ์—๋Š” ์—ฌ๋Ÿฌ ์œ ํ˜•์ด ์žˆ์Šต๋‹ˆ๋‹ค:

* ์ธ์Šคํ„ด์Šค ๋ถ„ํ• : ๊ฐœ์ฒด์˜ ํด๋ž˜์Šค๋ฅผ ๋ ˆ์ด๋ธ”๋งํ•˜๋Š” ๊ฒƒ ์™ธ์—๋„, ๊ฐœ์ฒด์˜ ๊ฐ ๊ตฌ๋ถ„๋œ ์ธ์Šคํ„ด์Šค์—๋„ ๋ ˆ์ด๋ธ”์„ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค ("๊ฐœ-1", "๊ฐœ-2" ๋“ฑ).
* ํŒŒ๋†‰ํ‹ฑ ๋ถ„ํ• : ์˜๋ฏธ์  ๋ถ„ํ• ๊ณผ ์ธ์Šคํ„ด์Šค ๋ถ„ํ• ์˜ ์กฐํ•ฉ์ž…๋‹ˆ๋‹ค. ๊ฐ ํ”ฝ์…€์„ ์˜๋ฏธ์  ํด๋ž˜์Šค๋กœ ๋ ˆ์ด๋ธ”๋งํ•˜๋Š” **๋™์‹œ์—** ๊ฐœ์ฒด์˜ ๊ฐ๊ฐ ๊ตฌ๋ถ„๋œ ์ธ์Šคํ„ด์Šค๋กœ๋„ ๋ ˆ์ด๋ธ”์„ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค.

๋ถ„ํ•  ์ž‘์—…์€ ์ž์œจ ์ฃผํ–‰ ์ฐจ๋Ÿ‰์—์„œ ์œ ์šฉํ•˜๋ฉฐ, ์ฃผ๋ณ€ ํ™˜๊ฒฝ์˜ ํ”ฝ์…€ ์ˆ˜์ค€ ์ง€๋„๋ฅผ ์ƒ์„ฑํ•˜์—ฌ ๋ณดํ–‰์ž์™€ ๋‹ค๋ฅธ ์ฐจ๋Ÿ‰ ์ฃผ๋ณ€์—์„œ ์•ˆ์ „ํ•˜๊ฒŒ ํƒ์ƒ‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ ์˜๋ฃŒ ์˜์ƒ์—์„œ๋„ ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. ๋ถ„ํ•  ์ž‘์—…์ด ํ”ฝ์…€ ์ˆ˜์ค€์—์„œ ๊ฐ์ฒด๋ฅผ ๊ฐ์ง€ํ•  ์ˆ˜ ์žˆ๊ธฐ ๋•Œ๋ฌธ์— ๋น„์ •์ƒ์ ์ธ ์„ธํฌ๋‚˜ ์žฅ๊ธฐ์˜ ํŠน์ง•์„ ์‹๋ณ„ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€ ๋ถ„ํ• ์€ ์˜๋ฅ˜ ๊ฐ€์ƒ ์‹œ์ฐฉ์ด๋‚˜ ์นด๋ฉ”๋ผ๋ฅผ ํ†ตํ•ด ์‹ค์ œ ์„ธ๊ณ„์— ๊ฐ€์ƒ ๊ฐœ์ฒด๋ฅผ ๋ง์”Œ์›Œ ์ฆ๊ฐ• ํ˜„์‹ค ๊ฒฝํ—˜์„ ๋งŒ๋“œ๋Š” ๋“ฑ ์ „์ž ์ƒ๊ฑฐ๋ž˜ ๋ถ„์•ผ์—์„œ๋„ ์‚ฌ์šฉ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

```py
>>> from transformers import pipeline

>>> segmenter = pipeline(task="image-segmentation")
>>> preds = segmenter(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
... )
>>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds]
>>> print(*preds, sep="\n")
{'score': 0.9879, 'label': 'LABEL_184'}
{'score': 0.9973, 'label': 'snow'}
{'score': 0.9972, 'label': 'cat'}
```

### ๊นŠ์ด ์ถ”์ •[[depth_estimation]]

๊นŠ์ด ์ถ”์ •์€ ์นด๋ฉ”๋ผ๋กœ๋ถ€ํ„ฐ ์ด๋ฏธ์ง€ ๋‚ด๋ถ€์˜ ๊ฐ ํ”ฝ์…€๊นŒ์ง€์˜ ๊ฑฐ๋ฆฌ๋ฅผ ์˜ˆ์ธกํ•ฉ๋‹ˆ๋‹ค. ์ด ์ปดํ“จํ„ฐ ๋น„์ „ ์ž‘์—…์€ ํŠนํžˆ ์žฅ๋ฉด ์ดํ•ด์™€ ์žฌ๊ตฌ์„ฑ์— ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ์ž์œจ ์ฃผํ–‰ ์ฐจ๋Ÿ‰์€ ๋ณดํ–‰์ž, ๊ตํ†ต ํ‘œ์ง€ํŒ ๋ฐ ๋‹ค๋ฅธ ์ฐจ๋Ÿ‰๊ณผ ๊ฐ™์€ ๊ฐ์ฒด์™€์˜ ๊ฑฐ๋ฆฌ๋ฅผ ์ดํ•ดํ•˜์—ฌ ์žฅ์• ๋ฌผ๊ณผ ์ถฉ๋Œ์„ ํ”ผํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๊นŠ์ด ์ •๋ณด๋Š” ๋˜ํ•œ 2D ์ด๋ฏธ์ง€์—์„œ 3D ํ‘œํ˜„์„ ๊ตฌ์„ฑํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋˜๋ฉฐ ์ƒ๋ฌผํ•™์  ๊ตฌ์กฐ๋‚˜ ๊ฑด๋ฌผ์˜ ๊ณ ํ’ˆ์งˆ 3D ํ‘œํ˜„์„ ์ƒ์„ฑํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

๊นŠ์ด ์ถ”์ •์—๋Š” ๋‘ ๊ฐ€์ง€ ์ ‘๊ทผ ๋ฐฉ์‹์ด ์žˆ์Šต๋‹ˆ๋‹ค:

* ์Šคํ…Œ๋ ˆ์˜ค: ์•ฝ๊ฐ„ ๋‹ค๋ฅธ ๊ฐ๋„์—์„œ ์ดฌ์˜๋œ ๋™์ผํ•œ ์ด๋ฏธ์ง€ ๋‘ ์žฅ์„ ๋น„๊ตํ•˜์—ฌ ๊นŠ์ด๋ฅผ ์ถ”์ •ํ•ฉ๋‹ˆ๋‹ค.
* ๋‹จ์•ˆ: ๋‹จ์ผ ์ด๋ฏธ์ง€์—์„œ ๊นŠ์ด๋ฅผ ์ถ”์ •ํ•ฉ๋‹ˆ๋‹ค.

```py
>>> from transformers import pipeline

>>> depth_estimator = pipeline(task="depth-estimation")
>>> preds = depth_estimator(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
... )
```

## ์ž์—ฐ์–ด์ฒ˜๋ฆฌ[[natural_language_processing]]

ํ…์ŠคํŠธ๋Š” ์ธ๊ฐ„์ด ์˜์‚ฌ ์†Œํ†ตํ•˜๋Š” ์ž์—ฐ์Šค๋Ÿฌ์šด ๋ฐฉ์‹ ์ค‘ ํ•˜๋‚˜์ด๊ธฐ ๋•Œ๋ฌธ์— ์ž์—ฐ์–ด์ฒ˜๋ฆฌ ์—ญ์‹œ ๊ฐ€์žฅ ์ผ๋ฐ˜์ ์ธ ์ž‘์—… ์œ ํ˜• ์ค‘ ํ•˜๋‚˜์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ์ด ์ธ์‹ํ•˜๋Š” ํ˜•์‹์œผ๋กœ ํ…์ŠคํŠธ๋ฅผ ๋ณ€ํ™˜ํ•˜๋ ค๋ฉด ํ† ํฐํ™”ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ํ…์ŠคํŠธ ์‹œํ€€์Šค๋ฅผ ๊ฐœ๋ณ„ ๋‹จ์–ด ๋˜๋Š” ํ•˜์œ„ ๋‹จ์–ด(ํ† ํฐ)๋กœ ๋ถ„ํ• ํ•œ ๋‹ค์Œ ์ด๋Ÿฌํ•œ ํ† ํฐ์„ ์ˆซ์ž๋กœ ๋ณ€ํ™˜ํ•˜๋Š” ๊ฒƒ์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค. ๊ฒฐ๊ณผ์ ์œผ๋กœ ํ…์ŠคํŠธ ์‹œํ€€์Šค๋ฅผ ์ˆซ์ž ์‹œํ€€์Šค๋กœ ํ‘œํ˜„ํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ, ์ˆซ์ž ์‹œํ€€์Šค๋ฅผ ๋‹ค์–‘ํ•œ ์ž์—ฐ์–ด์ฒ˜๋ฆฌ ์ž‘์—…์„ ํ•ด๊ฒฐํ•˜๊ธฐ ์œ„ํ•œ ๋ชจ๋ธ์— ์ž…๋ ฅํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค!
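ํ† ํฐํ™”์™€ ์ˆซ์ž ๋ณ€ํ™˜ ๊ณผ์ •์„ ์•„์ฃผ ๋‹จ์ˆœํ™”ํ•˜๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์Šค์ผ€์น˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜์˜ ์–ดํœ˜์ง‘๊ณผ ๊ณต๋ฐฑ ๊ธฐ์ค€ ๋ถ„ํ• ์€ ์„ค๋ช…์„ ์œ„ํ•œ ๊ฐ€์ •์ผ ๋ฟ์ด๋ฉฐ, ์‹ค์ œ ๐Ÿค— Transformers ํ† ํฌ๋‚˜์ด์ €๋Š” ํ•˜์œ„ ๋‹จ์–ด ๋ถ„ํ• ๊ณผ ํŠน์ˆ˜ ํ† ํฐ ์ฒ˜๋ฆฌ๋ฅผ ํฌํ•จํ•œ ํ›จ์”ฌ ์ •๊ตํ•œ ๋ฐฉ์‹์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค.

```python
# ์žฅ๋‚œ๊ฐ ์˜ˆ์‹œ: ๊ฐ€์ƒ์˜ ์ž‘์€ ์–ดํœ˜์ง‘์œผ๋กœ ํ…์ŠคํŠธ -> ํ† ํฐ -> ์ˆซ์ž ID ๋ณ€ํ™˜์„ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค.
vocab = {"hugging": 0, "face": 1, "is": 2, "great": 3, "[UNK]": 4}

def toy_tokenize(text):
    """๊ณต๋ฐฑ ๊ธฐ์ค€์œผ๋กœ ํ† ํฐ์„ ๋‚˜๋ˆˆ ๋’ค, ์–ดํœ˜์ง‘์— ์—†๋Š” ํ† ํฐ์€ [UNK] ID๋กœ ๋Œ€์ฒดํ•ฉ๋‹ˆ๋‹ค."""
    tokens = text.lower().split()
    return [vocab.get(token, vocab["[UNK]"]) for token in tokens]

print(toy_tokenize("Hugging Face is great"))    # [0, 1, 2, 3]
print(toy_tokenize("Hugging Face is awesome"))  # [0, 1, 2, 4] - 'awesome'์€ ์–ดํœ˜์ง‘์— ์—†์Œ
```

์ด๋ ‡๊ฒŒ ์–ป์€ ์ˆซ์ž ์‹œํ€€์Šค๊ฐ€ ๋ชจ๋ธ์˜ ์ž…๋ ฅ์ด ๋ฉ๋‹ˆ๋‹ค.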
### ํ…์ŠคํŠธ ๋ถ„๋ฅ˜[[text_classification]]

๋‹ค๋ฅธ ๋ชจ๋‹ฌ๋ฆฌํ‹ฐ์—์„œ์˜ ๋ถ„๋ฅ˜ ์ž‘์—…๊ณผ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ ํ…์ŠคํŠธ ๋ถ„๋ฅ˜๋Š” ๋ฏธ๋ฆฌ ์ •์˜๋œ ํด๋ž˜์Šค ์ง‘ํ•ฉ์—์„œ ํ…์ŠคํŠธ ์‹œํ€€์Šค(๋ฌธ์žฅ ์ˆ˜์ค€, ๋‹จ๋ฝ ๋˜๋Š” ๋ฌธ์„œ ๋“ฑ)์— ๋ ˆ์ด๋ธ”์„ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค. ํ…์ŠคํŠธ ๋ถ„๋ฅ˜์—๋Š” ๋‹ค์–‘ํ•œ ์‹ค์šฉ์ ์ธ ์‘์šฉ ์‚ฌ๋ก€๊ฐ€ ์žˆ์œผ๋ฉฐ, ์ผ๋ถ€ ์˜ˆ์‹œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค:

* ๊ฐ์„ฑ ๋ถ„์„: ํ…์ŠคํŠธ๋ฅผ `๊ธ์ •` ๋˜๋Š” `๋ถ€์ •`๊ณผ ๊ฐ™์€ ์–ด๋–ค ๊ทน์„ฑ์— ๋”ฐ๋ผ ๋ ˆ์ด๋ธ”๋งํ•˜์—ฌ ์ •์น˜, ๊ธˆ์œต, ๋งˆ์ผ€ํŒ…๊ณผ ๊ฐ™์€ ๋ถ„์•ผ์—์„œ ์˜์‚ฌ ๊ฒฐ์ •์— ์ •๋ณด๋ฅผ ์ œ๊ณตํ•˜๊ณ  ์ง€์›ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
* ์ฝ˜ํ…์ธ  ๋ถ„๋ฅ˜: ํ…์ŠคํŠธ๋ฅผ ์ฃผ์ œ์— ๋”ฐ๋ผ ๋ ˆ์ด๋ธ”๋ง(๋‚ ์”จ, ์Šคํฌ์ธ , ๊ธˆ์œต ๋“ฑ)ํ•˜์—ฌ ๋‰ด์Šค ๋ฐ ์†Œ์…œ ๋ฏธ๋””์–ด ํ”ผ๋“œ์—์„œ ์ •๋ณด๋ฅผ ๊ตฌ์„ฑํ•˜๊ณ  ํ•„ํ„ฐ๋งํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

```py
>>> from transformers import pipeline

>>> classifier = pipeline(task="sentiment-analysis")
>>> preds = classifier("Hugging Face is the best thing since sliced bread!")
>>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds]
>>> preds
[{'score': 0.9991, 'label': 'POSITIVE'}]
```

### ํ† ํฐ ๋ถ„๋ฅ˜[[token_classification]]

๋ชจ๋“  ์ž์—ฐ์–ด์ฒ˜๋ฆฌ ์ž‘์—…์—์„œ๋Š” ํ…์ŠคํŠธ๊ฐ€ ๊ฐœ๋ณ„ ๋‹จ์–ด๋‚˜ ํ•˜์œ„ ๋‹จ์–ด๋กœ ๋ถ„๋ฆฌ๋˜์–ด ์ „์ฒ˜๋ฆฌ๋ฉ๋‹ˆ๋‹ค. ๋ถ„๋ฆฌ๋œ ๋‹จ์–ด๋ฅผ [ํ† ํฐ](/glossary#token)์ด๋ผ๊ณ  ํ•ฉ๋‹ˆ๋‹ค. ํ† ํฐ ๋ถ„๋ฅ˜๋Š” ๊ฐ ํ† ํฐ์— ๋ฏธ๋ฆฌ ์ •์˜๋œ ํด๋ž˜์Šค ์ง‘ํ•ฉ์˜ ๋ ˆ์ด๋ธ”์„ ํ• ๋‹นํ•ฉ๋‹ˆ๋‹ค.

ํ† ํฐ ๋ถ„๋ฅ˜์˜ ๋‘ ๊ฐ€์ง€ ์ผ๋ฐ˜์ ์ธ ์œ ํ˜•์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค:

* ๊ฐœ์ฒด๋ช… ์ธ์‹ (NER): ํ† ํฐ์„ ์กฐ์ง, ์ธ๋ฌผ, ์œ„์น˜ ๋˜๋Š” ๋‚ ์งœ์™€ ๊ฐ™์€ ๊ฐœ์ฒด ๋ฒ”์ฃผ์— ๋”ฐ๋ผ ๋ ˆ์ด๋ธ”๋งํ•ฉ๋‹ˆ๋‹ค. NER์€ ํŠนํžˆ ์œ ์ „์ฒดํ•™์ ์ธ ํ™˜๊ฒฝ์—์„œ ์œ ์ „์ž, ๋‹จ๋ฐฑ์งˆ ๋ฐ ์•ฝ๋ฌผ ์ด๋ฆ„์— ๋ ˆ์ด๋ธ”์„ ์ง€์ •ํ•˜๋Š” ๋ฐ ๋„๋ฆฌ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค.
* ํ’ˆ์‚ฌ ํƒœ๊น… (POS): ๋ช…์‚ฌ, ๋™์‚ฌ, ํ˜•์šฉ์‚ฌ์™€ ๊ฐ™์€ ํ’ˆ์‚ฌ์— ๋”ฐ๋ผ ํ† ํฐ์— ๋ ˆ์ด๋ธ”์„ ํ• ๋‹นํ•ฉ๋‹ˆ๋‹ค. POS๋Š” ๋ฒˆ์—ญ ์‹œ์Šคํ…œ์ด ๋™์ผํ•œ ๋‹จ์–ด๊ฐ€ ๋ฌธ๋ฒ•์ ์œผ๋กœ ์–ด๋–ป๊ฒŒ ๋‹ค๋ฅธ์ง€ ์ดํ•ดํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋ฉ๋‹ˆ๋‹ค(๋ช…์‚ฌ๋กœ ์‚ฌ์šฉ๋˜๋Š” "bank(์€ํ–‰)"์™€ ๋™์‚ฌ๋กœ ์‚ฌ์šฉ๋˜๋Š” "bank(์˜ˆ๊ธˆ์„ ์˜ˆ์น˜ํ•˜๋‹ค)"์™€ ๊ฐ™์€ ๊ฒฝ์šฐ).

```py
>>> from transformers import pipeline

>>> classifier = pipeline(task="ner")
>>> preds = classifier("Hugging Face is a French company based in New York City.")
>>> preds = [
...     {
...         "entity": pred["entity"],
...         "score": round(pred["score"], 4),
...         "index": pred["index"],
...         "word": pred["word"],
...         "start": pred["start"],
...         "end": pred["end"],
...     }
...     for pred in preds
... ]
>>> print(*preds, sep="\n")
{'entity': 'I-ORG', 'score': 0.9968, 'index': 1, 'word': 'Hu', 'start': 0, 'end': 2}
{'entity': 'I-ORG', 'score': 0.9293, 'index': 2, 'word': '##gging', 'start': 2, 'end': 7}
{'entity': 'I-ORG', 'score': 0.9763, 'index': 3, 'word': 'Face', 'start': 8, 'end': 12}
{'entity': 'I-MISC', 'score': 0.9983, 'index': 6, 'word': 'French', 'start': 18, 'end': 24}
{'entity': 'I-LOC', 'score': 0.999, 'index': 10, 'word': 'New', 'start': 42, 'end': 45}
{'entity': 'I-LOC', 'score': 0.9987, 'index': 11, 'word': 'York', 'start': 46, 'end': 50}
{'entity': 'I-LOC', 'score': 0.9992, 'index': 12, 'word': 'City', 'start': 51, 'end': 55}
```

### ์งˆ์˜์‘๋‹ต[[question_answering]]

์งˆ์˜์‘๋‹ต์€ ๋˜ ํ•˜๋‚˜์˜ ํ† ํฐ ์ฐจ์›์˜ ์ž‘์—…์œผ๋กœ, ๋ฌธ๋งฅ์ด ์žˆ์„ ๋•Œ(๊ฐœ๋ฐฉํ˜• ๋„๋ฉ”์ธ)์™€ ๋ฌธ๋งฅ์ด ์—†์„ ๋•Œ(ํ์‡„ํ˜• ๋„๋ฉ”์ธ) ์งˆ๋ฌธ์— ๋Œ€ํ•œ ๋‹ต๋ณ€์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ์ด ์ž‘์—…์€ ๊ฐ€์ƒ ๋น„์„œ์—๊ฒŒ ์‹๋‹น์ด ์˜์—… ์ค‘์ธ์ง€์™€ ๊ฐ™์€ ์งˆ๋ฌธ์„ ํ•  ๋•Œ๋งˆ๋‹ค ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ณ ๊ฐ ์ง€์› ๋˜๋Š” ๊ธฐ์ˆ  ์ง€์›์„ ์ œ๊ณตํ•˜๊ฑฐ๋‚˜ ๊ฒ€์ƒ‰ ์—”์ง„์ด ์š”์ฒญํ•œ ์ •๋ณด๋ฅผ ๊ฒ€์ƒ‰ํ•˜๋Š” ๋ฐ ๋„์›€์„ ์ค„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

์งˆ๋ฌธ ๋‹ต๋ณ€์—๋Š” ์ผ๋ฐ˜์ ์œผ๋กœ ๋‘ ๊ฐ€์ง€ ์œ ํ˜•์ด ์žˆ์Šต๋‹ˆ๋‹ค:

* ์ถ”์ถœํ˜•: ์งˆ๋ฌธ๊ณผ ๋ฌธ๋งฅ์ด ์ฃผ์–ด์กŒ์„ ๋•Œ, ๋ชจ๋ธ์ด ์ฃผ์–ด์ง„ ๋ฌธ๋งฅ์˜ ์ผ๋ถ€์—์„œ ๊ฐ€์ ธ์˜จ ํ…์ŠคํŠธ์˜ ๋ฒ”์œ„๋ฅผ ๋‹ต๋ณ€์œผ๋กœ ํ•ฉ๋‹ˆ๋‹ค.
* ์ƒ์„ฑํ˜•: ์งˆ๋ฌธ๊ณผ ๋ฌธ๋งฅ์ด ์ฃผ์–ด์กŒ์„ ๋•Œ, ์ฃผ์–ด์ง„ ๋ฌธ๋งฅ์„ ํ†ตํ•ด ๋‹ต๋ณ€์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ์ด ์ ‘๊ทผ ๋ฐฉ์‹์€ [`QuestionAnsweringPipeline`] ๋Œ€์‹  [`Text2TextGenerationPipeline`]์„ ํ†ตํ•ด ์ฒ˜๋ฆฌ๋ฉ๋‹ˆ๋‹ค.

```py
>>> from transformers import pipeline

>>> question_answerer = pipeline(task="question-answering")
>>> preds = question_answerer(
...     question="What is the name of the repository?",
...     context="The name of the repository is huggingface/transformers",
... )
>>> print(
...     f"score: {round(preds['score'], 4)}, start: {preds['start']}, end: {preds['end']}, answer: {preds['answer']}"
... )
score: 0.9327, start: 30, end: 54, answer: huggingface/transformers
```

### ์š”์•ฝ[[summarization]]

์š”์•ฝ์€ ์›๋ณธ ๋ฌธ์„œ์˜ ์˜๋ฏธ๋ฅผ ์ตœ๋Œ€ํ•œ ๋ณด์กดํ•˜๋ฉด์„œ ๊ธด ๋ฌธ์„œ๋ฅผ ์งง์€ ๋ฌธ์„œ๋กœ ๋งŒ๋“œ๋Š” ์ž‘์—…์ž…๋‹ˆ๋‹ค. ์š”์•ฝ์€ `sequence-to-sequence` ์ž‘์—…์œผ๋กœ, ์ž…๋ ฅ๋ณด๋‹ค ์งง์€ ํ…์ŠคํŠธ ์‹œํ€€์Šค๋ฅผ ์ถœ๋ ฅํ•ฉ๋‹ˆ๋‹ค. ์š”์•ฝ ์ž‘์—…์€ ๋…์ž๊ฐ€ ์žฅ๋ฌธ ๋ฌธ์„œ๋“ค์˜ ์ฃผ์š” ํฌ์ธํŠธ๋ฅผ ๋น ๋ฅด๊ฒŒ ์ดํ•ดํ•˜๋Š” ๋ฐ ๋„์›€์„ ์ค„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ž…๋ฒ•์•ˆ, ๋ฒ•๋ฅ  ๋ฐ ๊ธˆ์œต ๋ฌธ์„œ, ํŠนํ—ˆ ๋ฐ ๊ณผํ•™ ๋…ผ๋ฌธ์€ ์š”์•ฝ ์ž‘์—…์ด ๋…์ž์˜ ์‹œ๊ฐ„์„ ์ ˆ์•ฝํ•˜๊ณ  ๋…์„œ ๋ณด์กฐ ๋„๊ตฌ๋กœ ์‚ฌ์šฉ๋  ์ˆ˜ ์žˆ๋Š” ๋ช‡ ๊ฐ€์ง€ ์˜ˆ์‹œ์ž…๋‹ˆ๋‹ค.

์งˆ๋ฌธ ๋‹ต๋ณ€๊ณผ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ ์š”์•ฝ์—๋Š” ๋‘ ๊ฐ€์ง€ ์œ ํ˜•์ด ์žˆ์Šต๋‹ˆ๋‹ค:

* ์ถ”์ถœํ˜•: ์›๋ณธ ํ…์ŠคํŠธ์—์„œ ๊ฐ€์žฅ ์ค‘์š”ํ•œ ๋ฌธ์žฅ์„ ์‹๋ณ„ํ•˜๊ณ  ์ถ”์ถœํ•ฉ๋‹ˆ๋‹ค.
* ์ƒ์„ฑํ˜•: ์›๋ณธ ํ…์ŠคํŠธ์—์„œ ๋ชฉํ‘œ ์š”์•ฝ์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ์ž…๋ ฅ ๋ฌธ์„œ์— ์—†๋Š” ์ƒˆ๋กœ์šด ๋‹จ์–ด๋ฅผ ํฌํ•จํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. [`SummarizationPipeline`]์€ ์ƒ์„ฑํ˜• ์ ‘๊ทผ ๋ฐฉ์‹์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค.

```py
>>> from transformers import pipeline

>>> summarizer = pipeline(task="summarization")
>>> summarizer(
...     "In this work, we presented the Transformer, the first sequence transduction model based entirely on attention, replacing the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention. For translation tasks, the Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers. On both WMT 2014 English-to-German and WMT 2014 English-to-French translation tasks, we achieve a new state of the art. In the former task our best model outperforms even all previously reported ensembles."
... )
[{'summary_text': ' The Transformer is the first sequence transduction model based entirely on attention . It replaces the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention . For translation tasks, the Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers .'}]
```

### ๋ฒˆ์—ญ[[translation]]

๋ฒˆ์—ญ์€ ํ•œ ์–ธ์–ด๋กœ ๋œ ํ…์ŠคํŠธ ์‹œํ€€์Šค๋ฅผ ๋‹ค๋ฅธ ์–ธ์–ด๋กœ ๋ณ€ํ™˜ํ•˜๋Š” ์ž‘์—…์ž…๋‹ˆ๋‹ค. ์ด๋Š” ์„œ๋กœ ๋‹ค๋ฅธ ๋ฐฐ๊ฒฝ์„ ๊ฐ€์ง„ ์‚ฌ๋žŒ๋“ค์ด ์„œ๋กœ ์†Œํ†ตํ•˜๋Š” ๋ฐ ๋„์›€์„ ์ฃผ๋Š” ์ค‘์š”ํ•œ ์—ญํ• ์„ ํ•ฉ๋‹ˆ๋‹ค. ๋” ๋„“์€ ๋Œ€์ค‘์—๊ฒŒ ์ฝ˜ํ…์ธ ๋ฅผ ๋ฒˆ์—ญํ•˜์—ฌ ์ „๋‹ฌํ•˜๊ฑฐ๋‚˜, ์ƒˆ๋กœ์šด ์–ธ์–ด๋ฅผ ๋ฐฐ์šฐ๋Š” ๋ฐ ๋„์›€์ด ๋˜๋Š” ํ•™์Šต ๋„๊ตฌ๊ฐ€ ๋  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค.

์š”์•ฝ๊ณผ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ, ๋ฒˆ์—ญ์€ `sequence-to-sequence` ์ž‘์—…์ž…๋‹ˆ๋‹ค. ์ฆ‰, ๋ชจ๋ธ์€ ์ž…๋ ฅ ์‹œํ€€์Šค๋ฅผ ๋ฐ›์•„์„œ ์ถœ๋ ฅ์ด ๋˜๋Š” ๋ชฉํ‘œ ์‹œํ€€์Šค๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ์ดˆ๊ธฐ์˜ ๋ฒˆ์—ญ ๋ชจ๋ธ์€ ๋Œ€๋ถ€๋ถ„ ๋‹จ์ผ ์–ธ์–ด๋กœ ์ด๋ฃจ์–ด์ ธ ์žˆ์—ˆ์ง€๋งŒ, ์ตœ๊ทผ์—๋Š” ๋งŽ์€ ์–ธ์–ด ์Œ ๊ฐ„์— ๋ฒˆ์—ญ์„ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ๋Š” ๋‹ค์ค‘ ์–ธ์–ด ๋ชจ๋ธ์— ๋Œ€ํ•œ ๊ด€์‹ฌ์ด ๋†’์•„์ง€๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค.

```py
>>> from transformers import pipeline

>>> text = "translate English to French: Hugging Face is a community-based open-source platform for machine learning."
>>> translator = pipeline(task="translation", model="t5-small")
>>> translator(text)
[{'translation_text': "Hugging Face est une tribune communautaire de l'apprentissage des machines."}]
```

### ์–ธ์–ด ๋ชจ๋ธ๋ง[[language_modeling]]

์–ธ์–ด ๋ชจ๋ธ๋ง์€ ํ…์ŠคํŠธ ์‹œํ€€์Šค์—์„œ ๋‹จ์–ด๋ฅผ ์˜ˆ์ธกํ•˜๋Š” ์ž‘์—…์ž…๋‹ˆ๋‹ค. ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ์–ธ์–ด ๋ชจ๋ธ์€ ๋งŽ์€ ๋‹ค๋ฅธ ํ•˜์œ„ ์ž‘์—…์— ๋”ฐ๋ผ ๋ฏธ์„ธ ์กฐ์ •๋  ์ˆ˜ ์žˆ๊ธฐ ๋•Œ๋ฌธ์— ๋งค์šฐ ์ธ๊ธฐ ์žˆ๋Š” ์ž์—ฐ์–ด์ฒ˜๋ฆฌ ์ž‘์—…์ด ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ตœ๊ทผ์—๋Š” ์ œ๋กœ ์ƒท(zero-shot) ๋˜๋Š” ํ“จ ์ƒท(few-shot) ํ•™์Šต์ด ๊ฐ€๋Šฅํ•œ ๋Œ€๊ทœ๋ชจ ์–ธ์–ด ๋ชจ๋ธ(Large Language Models, LLM)์— ๋Œ€ํ•œ ๋งŽ์€ ๊ด€์‹ฌ์ด ๋ฐœ์ƒํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Š” ๋ชจ๋ธ์ด ๋ช…์‹œ์ ์œผ๋กœ ํ›ˆ๋ จ๋˜์ง€ ์•Š์€ ์ž‘์—…๋„ ํ•ด๊ฒฐํ•  ์ˆ˜ ์žˆ๋‹ค๋Š” ๊ฒƒ์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค! ์–ธ์–ด ๋ชจ๋ธ์€ ์œ ์ฐฝํ•˜๊ณ  ์„ค๋“๋ ฅ ์žˆ๋Š” ํ…์ŠคํŠธ๋ฅผ ์ƒ์„ฑํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋  ์ˆ˜ ์žˆ์ง€๋งŒ, ํ…์ŠคํŠธ๊ฐ€ ํ•ญ์ƒ ์ •ํ™•ํ•˜์ง€๋Š” ์•Š์„ ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ ์ฃผ์˜๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค.

์–ธ์–ด ๋ชจ๋ธ๋ง์—๋Š” ๋‘ ๊ฐ€์ง€ ์œ ํ˜•์ด ์žˆ์Šต๋‹ˆ๋‹ค:

* ์ธ๊ณผ์  ์–ธ์–ด ๋ชจ๋ธ๋ง: ์ด ๋ชจ๋ธ์˜ ๋ชฉ์ ์€ ์‹œํ€€์Šค์—์„œ ๋‹ค์Œ ํ† ํฐ์„ ์˜ˆ์ธกํ•˜๋Š” ๊ฒƒ์ด๋ฉฐ, ๋ฏธ๋ž˜ ํ† ํฐ์ด ๋งˆ์Šคํ‚น๋ฉ๋‹ˆ๋‹ค.

    ```py
    >>> from transformers import pipeline

    >>> prompt = "Hugging Face is a community-based open-source platform for machine learning."
    >>> generator = pipeline(task="text-generation")
    >>> generator(prompt)  # doctest: +SKIP
    ```

* ๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง: ์ด ๋ชจ๋ธ์˜ ๋ชฉ์ ์€ ์‹œํ€€์Šค ๋‚ด์˜ ๋งˆ์Šคํ‚น๋œ ํ† ํฐ์„ ์˜ˆ์ธกํ•˜๋Š” ๊ฒƒ์ด๋ฉฐ, ์‹œํ€€์Šค ๋‚ด์˜ ๋ชจ๋“  ํ† ํฐ์— ๋Œ€ํ•œ ์ ‘๊ทผ์ด ์ œ๊ณต๋ฉ๋‹ˆ๋‹ค.

    ```py
    >>> text = "Hugging Face is a community-based open-source <mask> for machine learning."

    >>> fill_mask = pipeline(task="fill-mask")
    >>> preds = fill_mask(text, top_k=1)
    >>> preds = [
    ...     {
    ...         "score": round(pred["score"], 4),
    ...         "token": pred["token"],
    ...         "token_str": pred["token_str"],
    ...         "sequence": pred["sequence"],
    ...     }
    ...     for pred in preds
    ... ]
    >>> preds
    [{'score': 0.2236, 'token': 1761, 'token_str': ' platform', 'sequence': 'Hugging Face is a community-based open-source platform for machine learning.'}]
    ```

์ด ํŽ˜์ด์ง€๋ฅผ ํ†ตํ•ด ๊ฐ ๋ชจ๋‹ฌ๋ฆฌํ‹ฐ์˜ ๋‹ค์–‘ํ•œ ์ž‘์—… ์œ ํ˜•๊ณผ ๊ฐ ์ž‘์—…์˜ ์‹ค์šฉ์  ์ค‘์š”์„ฑ์— ๋Œ€ํ•ด ์ถ”๊ฐ€์ ์ธ ๋ฐฐ๊ฒฝ ์ •๋ณด๋ฅผ ์–ป์œผ์…จ๊ธฐ๋ฅผ ๋ฐ”๋ž๋‹ˆ๋‹ค. ๋‹ค์Œ [์„น์…˜](tasks_explained)์—์„œ๋Š” ๐Ÿค— Transformers๊ฐ€ ์ด๋Ÿฌํ•œ ์ž‘์—…์„ ํ•ด๊ฒฐํ•˜๋Š” **๋ฐฉ๋ฒ•**์— ๋Œ€ํ•ด ์•Œ์•„๋ณด์‹ค ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
hf_public_repos/transformers/docs/source/ko/serialization.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ONNX๋กœ ๋‚ด๋ณด๋‚ด๊ธฐ [[export-to-onnx]] ๐Ÿค— Transformers ๋ชจ๋ธ์„ ์ œํ’ˆ ํ™˜๊ฒฝ์—์„œ ๋ฐฐํฌํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” ๋ชจ๋ธ์„ ์ง๋ ฌํ™”๋œ ํ˜•์‹์œผ๋กœ ๋‚ด๋ณด๋‚ด๊ณ  ํŠน์ • ๋Ÿฐํƒ€์ž„๊ณผ ํ•˜๋“œ์›จ์–ด์—์„œ ๋กœ๋“œํ•˜๊ณ  ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์œผ๋ฉด ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. ๐Ÿค— Optimum์€ Transformers์˜ ํ™•์žฅ์œผ๋กœ, PyTorch ๋˜๋Š” TensorFlow์—์„œ ๋ชจ๋ธ์„ ONNX์™€ TFLite์™€ ๊ฐ™์€ ์ง๋ ฌํ™”๋œ ํ˜•์‹์œผ๋กœ ๋‚ด๋ณด๋‚ผ ์ˆ˜ ์žˆ๋„๋ก ํ•˜๋Š” `exporters` ๋ชจ๋“ˆ์„ ํ†ตํ•ด ์ œ๊ณต๋ฉ๋‹ˆ๋‹ค. ๐Ÿค— Optimum์€ ๋˜ํ•œ ์„ฑ๋Šฅ ์ตœ์ ํ™” ๋„๊ตฌ ์„ธํŠธ๋ฅผ ์ œ๊ณตํ•˜์—ฌ ํŠน์ • ํ•˜๋“œ์›จ์–ด์—์„œ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๊ณ  ์‹คํ–‰ํ•  ๋•Œ ์ตœ๋Œ€ ํšจ์œจ์„ฑ์„ ๋‹ฌ์„ฑํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ์•ˆ๋‚ด์„œ๋Š” ๐Ÿค— Optimum์„ ์‚ฌ์šฉํ•˜์—ฌ ๐Ÿค— Transformers ๋ชจ๋ธ์„ ONNX๋กœ ๋‚ด๋ณด๋‚ด๋Š” ๋ฐฉ๋ฒ•์„ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค. TFLite๋กœ ๋ชจ๋ธ์„ ๋‚ด๋ณด๋‚ด๋Š” ์•ˆ๋‚ด์„œ๋Š” [TFLite๋กœ ๋‚ด๋ณด๋‚ด๊ธฐ ํŽ˜์ด์ง€](tflite)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ## ONNX๋กœ ๋‚ด๋ณด๋‚ด๊ธฐ [[export-to-onnx]] [ONNX (Open Neural Network eXchange)](http://onnx.ai)๋Š” PyTorch์™€ TensorFlow๋ฅผ ํฌํ•จํ•œ ๋‹ค์–‘ํ•œ ํ”„๋ ˆ์ž„์›Œํฌ์—์„œ ์‹ฌ์ธต ํ•™์Šต ๋ชจ๋ธ์„ ๋‚˜ํƒ€๋‚ด๋Š” ๋ฐ ์‚ฌ์šฉ๋˜๋Š” ๊ณตํ†ต ์—ฐ์‚ฐ์ž ์„ธํŠธ์™€ ๊ณตํ†ต ํŒŒ์ผ ํ˜•์‹์„ ์ •์˜ํ•˜๋Š” ์˜คํ”ˆ ํ‘œ์ค€์ž…๋‹ˆ๋‹ค. 
๋ชจ๋ธ์ด ONNX ํ˜•์‹์œผ๋กœ ๋‚ด๋ณด๋‚ด์ง€๋ฉด ์ด๋Ÿฌํ•œ ์—ฐ์‚ฐ์ž๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์‹ ๊ฒฝ๋ง์„ ํ†ตํ•ด ๋ฐ์ดํ„ฐ๊ฐ€ ํ๋ฅด๋Š” ํ๋ฆ„์„ ๋‚˜ํƒ€๋‚ด๋Š” ๊ณ„์‚ฐ ๊ทธ๋ž˜ํ”„(์ผ๋ฐ˜์ ์œผ๋กœ _์ค‘๊ฐ„ ํ‘œํ˜„_์ด๋ผ๊ณ  ํ•จ)๊ฐ€ ๊ตฌ์„ฑ๋ฉ๋‹ˆ๋‹ค. ํ‘œ์ค€ํ™”๋œ ์—ฐ์‚ฐ์ž์™€ ๋ฐ์ดํ„ฐ ์œ ํ˜•์„ ๊ฐ€์ง„ ๊ทธ๋ž˜ํ”„๋ฅผ ๋…ธ์ถœํ•จ์œผ๋กœ์จ, ONNX๋Š” ํ”„๋ ˆ์ž„์›Œํฌ ๊ฐ„์— ์‰ฝ๊ฒŒ ์ „ํ™˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, PyTorch์—์„œ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ONNX ํ˜•์‹์œผ๋กœ ๋‚ด๋ณด๋‚ด๊ณ  TensorFlow์—์„œ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค(๊ทธ ๋ฐ˜๋Œ€๋„ ๊ฐ€๋Šฅํ•ฉ๋‹ˆ๋‹ค). ONNX ํ˜•์‹์œผ๋กœ ๋‚ด๋ณด๋‚ธ ๋ชจ๋ธ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: - [๊ทธ๋ž˜ํ”„ ์ตœ์ ํ™”](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/optimization) ๋ฐ [์–‘์žํ™”](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/quantization)์™€ ๊ฐ™์€ ๊ธฐ๋ฒ•์„ ์‚ฌ์šฉํ•˜์—ฌ ์ถ”๋ก ์„ ์œ„ํ•ด ์ตœ์ ํ™”๋ฉ๋‹ˆ๋‹ค. - ONNX Runtime์„ ํ†ตํ•ด ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [`ORTModelForXXX` ํด๋ž˜์Šค๋“ค](https://huggingface.co/docs/optimum/onnxruntime/package_reference/modeling_ort)์„ ํ†ตํ•ด ๋™์ผํ•œ `AutoModel` API๋ฅผ ๋”ฐ๋ฆ…๋‹ˆ๋‹ค. ์ด API๋Š” ๐Ÿค— Transformers์—์„œ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ๊ณผ ๋™์ผํ•ฉ๋‹ˆ๋‹ค. - [์ตœ์ ํ™”๋œ ์ถ”๋ก  ํŒŒ์ดํ”„๋ผ์ธ](https://huggingface.co/docs/optimum/main/en/onnxruntime/usage_guides/pipelines)์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Š” ๐Ÿค— Transformers์˜ [`pipeline`] ํ•จ์ˆ˜์™€ ๋™์ผํ•œ API๋ฅผ ๊ฐ€์ง€๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๐Ÿค— Optimum์€ ๊ตฌ์„ฑ ๊ฐ์ฒด๋ฅผ ํ™œ์šฉํ•˜์—ฌ ONNX ๋‚ด๋ณด๋‚ด๊ธฐ๋ฅผ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๊ตฌ์„ฑ ๊ฐ์ฒด๋Š” ์—ฌ๋Ÿฌ ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜์— ๋Œ€ํ•ด ๋ฏธ๋ฆฌ ์ค€๋น„๋˜์–ด ์žˆ์œผ๋ฉฐ ๋‹ค๋ฅธ ์•„ํ‚คํ…์ฒ˜์— ์‰ฝ๊ฒŒ ํ™•์žฅํ•  ์ˆ˜ ์žˆ๋„๋ก ์„ค๊ณ„๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ๋ฏธ๋ฆฌ ์ค€๋น„๋œ ๊ตฌ์„ฑ ๋ชฉ๋ก์€ [๐Ÿค— Optimum ๋ฌธ์„œ](https://huggingface.co/docs/optimum/exporters/onnx/overview)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ๐Ÿค— Transformers ๋ชจ๋ธ์„ ONNX๋กœ ๋‚ด๋ณด๋‚ด๋Š” ๋‘ ๊ฐ€์ง€ ๋ฐฉ๋ฒ•์ด ์žˆ์Šต๋‹ˆ๋‹ค. 
์—ฌ๊ธฐ์—์„œ ๋‘ ๊ฐ€์ง€ ๋ฐฉ๋ฒ•์„ ๋ชจ๋‘ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค: - ๐Ÿค— Optimum์„ ์‚ฌ์šฉํ•˜์—ฌ CLI๋กœ ๋‚ด๋ณด๋‚ด๊ธฐ - `optimum.onnxruntime`์„ ์‚ฌ์šฉํ•˜์—ฌ ๐Ÿค— Optimum์œผ๋กœ ONNX๋กœ ๋‚ด๋ณด๋‚ด๊ธฐ ### CLI๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๐Ÿค— Transformers ๋ชจ๋ธ์„ ONNX๋กœ ๋‚ด๋ณด๋‚ด๊ธฐ [[exporting-a-transformers-model-to-onnx-with-cli]] ๐Ÿค— Transformers ๋ชจ๋ธ์„ ONNX๋กœ ๋‚ด๋ณด๋‚ด๋ ค๋ฉด ๋จผ์ € ์ถ”๊ฐ€ ์ข…์†์„ฑ์„ ์„ค์น˜ํ•˜์„ธ์š”: ```bash pip install optimum[exporters] ``` ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ ๋ชจ๋“  ์ธ์ˆ˜๋ฅผ ํ™•์ธํ•˜๋ ค๋ฉด [๐Ÿค— Optimum ๋ฌธ์„œ](https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/export_a_model#exporting-a-model-to-onnx-using-the-cli)๋ฅผ ์ฐธ์กฐํ•˜๊ฑฐ๋‚˜ ๋ช…๋ น์ค„์—์„œ ๋„์›€๋ง์„ ๋ณด์„ธ์š”. ```bash optimum-cli export onnx --help ``` ์˜ˆ๋ฅผ ๋“ค์–ด, ๐Ÿค— Hub์—์„œ `distilbert-base-uncased-distilled-squad`์™€ ๊ฐ™์€ ๋ชจ๋ธ์˜ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋‚ด๋ณด๋‚ด๋ ค๋ฉด ๋‹ค์Œ ๋ช…๋ น์„ ์‹คํ–‰ํ•˜์„ธ์š”: ```bash optimum-cli export onnx --model distilbert-base-uncased-distilled-squad distilbert_base_uncased_squad_onnx/ ``` ์œ„์™€ ๊ฐ™์ด ์ง„ํ–‰ ์ƒํ™ฉ์„ ๋‚˜ํƒ€๋‚ด๋Š” ๋กœ๊ทธ๊ฐ€ ํ‘œ์‹œ๋˜๊ณ  ๊ฒฐ๊ณผ์ธ `model.onnx`๊ฐ€ ์ €์žฅ๋œ ์œ„์น˜๊ฐ€ ํ‘œ์‹œ๋ฉ๋‹ˆ๋‹ค. ```bash Validating ONNX model distilbert_base_uncased_squad_onnx/model.onnx... -[โœ“] ONNX model output names match reference model (start_logits, end_logits) - Validating ONNX Model output "start_logits": -[โœ“] (2, 16) matches (2, 16) -[โœ“] all values close (atol: 0.0001) - Validating ONNX Model output "end_logits": -[โœ“] (2, 16) matches (2, 16) -[โœ“] all values close (atol: 0.0001) The ONNX export succeeded and the exported model was saved at: distilbert_base_uncased_squad_onnx ``` ์œ„์˜ ์˜ˆ์ œ๋Š” ๐Ÿค— Hub์—์„œ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋‚ด๋ณด๋‚ด๋Š” ๊ฒƒ์„ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. ๋กœ์ปฌ ๋ชจ๋ธ์„ ๋‚ด๋ณด๋‚ผ ๋•Œ์—๋Š” ๋ชจ๋ธ์˜ ๊ฐ€์ค‘์น˜์™€ ํ† ํฌ๋‚˜์ด์ € ํŒŒ์ผ์„ ๋™์ผํ•œ ๋””๋ ‰ํ† ๋ฆฌ(`local_path`)์— ์ €์žฅํ–ˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”. 
CLI๋ฅผ ์‚ฌ์šฉํ•  ๋•Œ์—๋Š” ๐Ÿค— Hub์˜ ์ฒดํฌํฌ์ธํŠธ ์ด๋ฆ„ ๋Œ€์‹  `model` ์ธ์ˆ˜์— `local_path`๋ฅผ ์ „๋‹ฌํ•˜๊ณ  `--task` ์ธ์ˆ˜๋ฅผ ์ œ๊ณตํ•˜์„ธ์š”. ์ง€์›๋˜๋Š” ์ž‘์—…์˜ ๋ชฉ๋ก์€ [๐Ÿค— Optimum ๋ฌธ์„œ](https://huggingface.co/docs/optimum/exporters/task_manager)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. `task` ์ธ์ˆ˜๊ฐ€ ์ œ๊ณต๋˜์ง€ ์•Š์œผ๋ฉด ์ž‘์—…์— ํŠนํ™”๋œ ํ—ค๋“œ ์—†์ด ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜๋กœ ๊ธฐ๋ณธ ์„ค์ •๋ฉ๋‹ˆ๋‹ค. ```bash optimum-cli export onnx --model local_path --task question-answering distilbert_base_uncased_squad_onnx/ ``` ๊ทธ ๊ฒฐ๊ณผ๋กœ ์ƒ์„ฑ๋œ `model.onnx` ํŒŒ์ผ์€ ONNX ํ‘œ์ค€์„ ์ง€์›ํ•˜๋Š” ๋งŽ์€ [๊ฐ€์†๊ธฐ](https://onnx.ai/supported-tools.html#deployModel) ์ค‘ ํ•˜๋‚˜์—์„œ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, [ONNX Runtime](https://onnxruntime.ai/)์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋กœ๋“œํ•˜๊ณ  ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```python >>> from transformers import AutoTokenizer >>> from optimum.onnxruntime import ORTModelForQuestionAnswering >>> tokenizer = AutoTokenizer.from_pretrained("distilbert_base_uncased_squad_onnx") >>> model = ORTModelForQuestionAnswering.from_pretrained("distilbert_base_uncased_squad_onnx") >>> inputs = tokenizer("What am I using?", "Using DistilBERT with ONNX Runtime!", return_tensors="pt") >>> outputs = model(**inputs) ``` Hub์˜ TensorFlow ์ฒดํฌํฌ์ธํŠธ์— ๋Œ€ํ•ด์„œ๋„ ๋™์ผํ•œ ํ”„๋กœ์„ธ์Šค๊ฐ€ ์ ์šฉ๋ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, [Keras organization](https://huggingface.co/keras-io)์—์„œ ์ˆœ์ˆ˜ํ•œ TensorFlow ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋‚ด๋ณด๋‚ด๋Š” ๋ฐฉ๋ฒ•์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```bash optimum-cli export onnx --model keras-io/transformers-qa distilbert_base_cased_squad_onnx/ ``` ### `optimum.onnxruntime`์„ ์‚ฌ์šฉํ•˜์—ฌ ๐Ÿค— Transformers ๋ชจ๋ธ์„ ONNX๋กœ ๋‚ด๋ณด๋‚ด๊ธฐ [[exporting-a-transformers-model-to-onnx-with-optimumonnxruntime]] CLI ๋Œ€์‹ ์— `optimum.onnxruntime`์„ ์‚ฌ์šฉํ•˜์—ฌ ํ”„๋กœ๊ทธ๋ž˜๋ฐ ๋ฐฉ์‹์œผ๋กœ ๐Ÿค— Transformers ๋ชจ๋ธ์„ ONNX๋กœ ๋‚ด๋ณด๋‚ผ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. 
๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ง„ํ–‰ํ•˜์„ธ์š”: ```python >>> from optimum.onnxruntime import ORTModelForSequenceClassification >>> from transformers import AutoTokenizer >>> model_checkpoint = "distilbert_base_uncased_squad" >>> save_directory = "onnx/" >>> # Load a model from transformers and export it to ONNX >>> ort_model = ORTModelForSequenceClassification.from_pretrained(model_checkpoint, export=True) >>> tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) >>> # Save the onnx model and tokenizer >>> ort_model.save_pretrained(save_directory) >>> tokenizer.save_pretrained(save_directory) ``` ### ์ง€์›๋˜์ง€ ์•Š๋Š” ์•„ํ‚คํ…์ฒ˜์˜ ๋ชจ๋ธ ๋‚ด๋ณด๋‚ด๊ธฐ [[exporting-a-model-for-an-unsupported-architecture]] ํ˜„์žฌ ๋‚ด๋ณด๋‚ผ ์ˆ˜ ์—†๋Š” ๋ชจ๋ธ์„ ์ง€์›ํ•˜๊ธฐ ์œ„ํ•ด ๊ธฐ์—ฌํ•˜๋ ค๋ฉด, ๋จผ์ € [`optimum.exporters.onnx`](https://huggingface.co/docs/optimum/exporters/onnx/overview)์—์„œ ์ง€์›๋˜๋Š”์ง€ ํ™•์ธํ•œ ํ›„ ์ง€์›๋˜์ง€ ์•Š๋Š” ๊ฒฝ์šฐ์—๋Š” [๐Ÿค— Optimum์— ๊ธฐ์—ฌ](https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/contribute)ํ•˜์„ธ์š”. ### `transformers.onnx`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ ๋‚ด๋ณด๋‚ด๊ธฐ [[exporting-a-model-with-transformersonnx]] <Tip warning={true}> `tranformers.onnx`๋Š” ๋” ์ด์ƒ ์œ ์ง€๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ์œ„์—์„œ ์„ค๋ช…ํ•œ ๋Œ€๋กœ ๐Ÿค— Optimum์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋‚ด๋ณด๋‚ด์„ธ์š”. ์ด ์„น์…˜์€ ํ–ฅํ›„ ๋ฒ„์ „์—์„œ ์ œ๊ฑฐ๋  ์˜ˆ์ •์ž…๋‹ˆ๋‹ค. </Tip> ๐Ÿค— Transformers ๋ชจ๋ธ์„ ONNX๋กœ ๋‚ด๋ณด๋‚ด๋ ค๋ฉด ์ถ”๊ฐ€ ์ข…์†์„ฑ์„ ์„ค์น˜ํ•˜์„ธ์š”: ```bash pip install transformers[onnx] ``` `transformers.onnx` ํŒจํ‚ค์ง€๋ฅผ Python ๋ชจ๋“ˆ๋กœ ์‚ฌ์šฉํ•˜์—ฌ ์ค€๋น„๋œ ๊ตฌ์„ฑ์„ ์‚ฌ์šฉํ•˜์—ฌ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋‚ด๋ณด๋ƒ…๋‹ˆ๋‹ค: ```bash python -m transformers.onnx --model=distilbert-base-uncased onnx/ ``` ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด `--model` ์ธ์ˆ˜์— ์ •์˜๋œ ์ฒดํฌํฌ์ธํŠธ์˜ ONNX ๊ทธ๋ž˜ํ”„๊ฐ€ ๋‚ด๋ณด๋‚ด์ง‘๋‹ˆ๋‹ค. ๐Ÿค— Hub์—์„œ ์ œ๊ณตํ•˜๋Š” ์ฒดํฌํฌ์ธํŠธ๋‚˜ ๋กœ์ปฌ์— ์ €์žฅ๋œ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์ „๋‹ฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
๊ฒฐ๊ณผ๋กœ ์ƒ์„ฑ๋œ `model.onnx` ํŒŒ์ผ์€ ONNX ํ‘œ์ค€์„ ์ง€์›ํ•˜๋Š” ๋งŽ์€ ๊ฐ€์†๊ธฐ ์ค‘ ํ•˜๋‚˜์—์„œ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ๋‹ค์Œ๊ณผ ๊ฐ™์ด ONNX Runtime์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋กœ๋“œํ•˜๊ณ  ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```python >>> from transformers import AutoTokenizer >>> from onnxruntime import InferenceSession >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") >>> session = InferenceSession("onnx/model.onnx") >>> # ONNX Runtime expects NumPy arrays as input >>> inputs = tokenizer("Using DistilBERT with ONNX Runtime!", return_tensors="np") >>> outputs = session.run(output_names=["last_hidden_state"], input_feed=dict(inputs)) ``` ํ•„์š”ํ•œ ์ถœ๋ ฅ ์ด๋ฆ„(์˜ˆ: `["last_hidden_state"]`)์€ ๊ฐ ๋ชจ๋ธ์˜ ONNX ๊ตฌ์„ฑ์„ ํ™•์ธํ•˜์—ฌ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, DistilBERT์˜ ๊ฒฝ์šฐ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```python >>> from transformers.models.distilbert import DistilBertConfig, DistilBertOnnxConfig >>> config = DistilBertConfig() >>> onnx_config = DistilBertOnnxConfig(config) >>> print(list(onnx_config.outputs.keys())) ["last_hidden_state"] ``` Hub์˜ TensorFlow ์ฒดํฌํฌ์ธํŠธ์— ๋Œ€ํ•ด์„œ๋„ ๋™์ผํ•œ ํ”„๋กœ์„ธ์Šค๊ฐ€ ์ ์šฉ๋ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ˆœ์ˆ˜ํ•œ TensorFlow ์ฒดํฌํฌ์ธํŠธ๋ฅผ ๋‚ด๋ณด๋ƒ…๋‹ˆ๋‹ค: ```bash python -m transformers.onnx --model=keras-io/transformers-qa onnx/ ``` ๋กœ์ปฌ์— ์ €์žฅ๋œ ๋ชจ๋ธ์„ ๋‚ด๋ณด๋‚ด๋ ค๋ฉด ๋ชจ๋ธ์˜ ๊ฐ€์ค‘์น˜ ํŒŒ์ผ๊ณผ ํ† ํฌ๋‚˜์ด์ € ํŒŒ์ผ์„ ๋™์ผํ•œ ๋””๋ ‰ํ† ๋ฆฌ์— ์ €์žฅํ•œ ๋‹ค์Œ, transformers.onnx ํŒจํ‚ค์ง€์˜ --model ์ธ์ˆ˜๋ฅผ ์›ํ•˜๋Š” ๋””๋ ‰ํ† ๋ฆฌ๋กœ ์ง€์ •ํ•˜์—ฌ ONNX๋กœ ๋‚ด๋ณด๋ƒ…๋‹ˆ๋‹ค: ```bash python -m transformers.onnx --model=local-pt-checkpoint onnx/ ```
hf_public_repos/transformers/docs/source/ko/pipeline_webserver.md
<!--โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์›น ์„œ๋ฒ„๋ฅผ ์œ„ํ•œ ํŒŒ์ดํ”„๋ผ์ธ ์‚ฌ์šฉํ•˜๊ธฐ[[using_pipelines_for_a_webserver]] <Tip> ์ถ”๋ก  ์—”์ง„์„ ๋งŒ๋“œ๋Š” ๊ฒƒ์€ ๋ณต์žกํ•œ ์ฃผ์ œ์ด๋ฉฐ, "์ตœ์„ ์˜" ์†”๋ฃจ์…˜์€ ๋ฌธ์ œ ๊ณต๊ฐ„์— ๋”ฐ๋ผ ๋‹ฌ๋ผ์งˆ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์Šต๋‹ˆ๋‹ค. CPU ๋˜๋Š” GPU๋ฅผ ์‚ฌ์šฉํ•˜๋Š”์ง€์— ๋”ฐ๋ผ ๋‹ค๋ฅด๊ณ  ๋‚ฎ์€ ์ง€์—ฐ ์‹œ๊ฐ„์„ ์›ํ•˜๋Š”์ง€, ๋†’์€ ์ฒ˜๋ฆฌ๋Ÿ‰์„ ์›ํ•˜๋Š”์ง€, ๋‹ค์–‘ํ•œ ๋ชจ๋ธ์„ ์ง€์›ํ•  ์ˆ˜ ์žˆ๊ธธ ์›ํ•˜๋Š”์ง€, ํ•˜๋‚˜์˜ ํŠน์ • ๋ชจ๋ธ์„ ๊ณ ๋„๋กœ ์ตœ์ ํ™”ํ•˜๊ธธ ์›ํ•˜๋Š”์ง€ ๋“ฑ์— ๋”ฐ๋ผ ๋‹ฌ๋ผ์ง‘๋‹ˆ๋‹ค. ์ด ์ฃผ์ œ๋ฅผ ํ•ด๊ฒฐํ•˜๋Š” ๋ฐฉ๋ฒ•์—๋Š” ์—ฌ๋Ÿฌ ๊ฐ€์ง€๊ฐ€ ์žˆ์œผ๋ฏ€๋กœ, ์ด ์žฅ์—์„œ ์ œ์‹œํ•˜๋Š” ๊ฒƒ์€ ์ฒ˜์Œ ์‹œ๋„ํ•ด ๋ณด๊ธฐ์— ์ข‹์€ ์ถœ๋ฐœ์ ์ผ ์ˆ˜๋Š” ์žˆ์ง€๋งŒ, ์ด ์žฅ์„ ์ฝ๋Š” ์—ฌ๋Ÿฌ๋ถ„์ด ํ•„์š”๋กœ ํ•˜๋Š” ์ตœ์ ์˜ ์†”๋ฃจ์…˜์€ ์•„๋‹ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. </Tip> ํ•ต์‹ฌ์ ์œผ๋กœ ์ดํ•ดํ•ด์•ผ ํ•  ์ ์€ [dataset](pipeline_tutorial#using-pipelines-on-a-dataset)๋ฅผ ๋‹ค๋ฃฐ ๋•Œ์™€ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ ๋ฐ˜๋ณต์ž๋ฅผ ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•˜๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์™œ๋ƒํ•˜๋ฉด, ์›น ์„œ๋ฒ„๋Š” ๊ธฐ๋ณธ์ ์œผ๋กœ ์š”์ฒญ์„ ๊ธฐ๋‹ค๋ฆฌ๊ณ  ๋“ค์–ด์˜ค๋Š” ๋Œ€๋กœ ์ฒ˜๋ฆฌํ•˜๋Š” ์‹œ์Šคํ…œ์ด๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ๋ณดํ†ต ์›น ์„œ๋ฒ„๋Š” ๋‹ค์–‘ํ•œ ์š”์ฒญ์„ ๋™์‹œ์— ๋‹ค๋ฃจ๊ธฐ ์œ„ํ•ด ๋งค์šฐ ๋‹ค์ค‘ํ™”๋œ ๊ตฌ์กฐ(๋ฉ€ํ‹ฐ ์Šค๋ ˆ๋”ฉ, ๋น„๋™๊ธฐ ๋“ฑ)๋ฅผ ์ง€๋‹ˆ๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฐ˜๋ฉด์—, ํŒŒ์ดํ”„๋ผ์ธ(๋Œ€๋ถ€๋ถ„ ํŒŒ์ดํ”„๋ผ์ธ ์•ˆ์— ์žˆ๋Š” ๋ชจ๋ธ)์€ ๋ณ‘๋ ฌ์ฒ˜๋ฆฌ์— ๊ทธ๋‹ค์ง€ ์ข‹์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ์™œ๋ƒํ•˜๋ฉด ํŒŒ์ดํ”„๋ผ์ธ์€ ๋งŽ์€ RAM์„ ์ฐจ์ง€ํ•˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ, ํŒŒ์ดํ”„๋ผ์ธ์ด ์‹คํ–‰ ์ค‘์ด๊ฑฐ๋‚˜ ๊ณ„์‚ฐ ์ง‘์•ฝ์ ์ธ ์ž‘์—… ์ค‘์ผ ๋•Œ ๋ชจ๋“  ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ ๋ฆฌ์†Œ์Šค๋ฅผ ์ œ๊ณตํ•˜๋Š” ๊ฒƒ์ด ๊ฐ€์žฅ ์ข‹์Šต๋‹ˆ๋‹ค. ์ด ๋ฌธ์ œ๋ฅผ ์šฐ๋ฆฌ๋Š” ์›น ์„œ๋ฒ„๊ฐ€ ์š”์ฒญ์„ ๋ฐ›๊ณ  ๋ณด๋‚ด๋Š” ๊ฐ€๋ฒผ์šด ๋ถ€ํ•˜๋ฅผ ์ฒ˜๋ฆฌํ•˜๊ณ , ์‹ค์ œ ์ž‘์—…์„ ์ฒ˜๋ฆฌํ•˜๋Š” ๋‹จ์ผ ์Šค๋ ˆ๋“œ๋ฅผ ๊ฐ–๋Š” ๋ฐฉ๋ฒ•์œผ๋กœ ํ•ด๊ฒฐํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. 
์ด ์˜ˆ์ œ๋Š” `starlette` ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์‹ค์ œ ํ”„๋ ˆ์ž„์›Œํฌ๋Š” ์ค‘์š”ํ•˜์ง€ ์•Š์ง€๋งŒ, ๋‹ค๋ฅธ ํ”„๋ ˆ์ž„์›Œํฌ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ ๋™์ผํ•œ ํšจ๊ณผ๋ฅผ ๋ณด๊ธฐ ์œ„ํ•ด์„  ์ฝ”๋“œ๋ฅผ ์กฐ์ •ํ•˜๊ฑฐ๋‚˜ ๋ณ€๊ฒฝํ•ด์•ผ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `server.py`๋ฅผ ์ƒ์„ฑํ•˜์„ธ์š”: ```py from starlette.applications import Starlette from starlette.responses import JSONResponse from starlette.routing import Route from transformers import pipeline import asyncio async def homepage(request): payload = await request.body() string = payload.decode("utf-8") response_q = asyncio.Queue() await request.app.model_queue.put((string, response_q)) output = await response_q.get() return JSONResponse(output) async def server_loop(q): pipe = pipeline(model="bert-base-uncased") while True: (string, response_q) = await q.get() out = pipe(string) await response_q.put(out) app = Starlette( routes=[ Route("/", homepage, methods=["POST"]), ], ) @app.on_event("startup") async def startup_event(): q = asyncio.Queue() app.model_queue = q asyncio.create_task(server_loop(q)) ``` ์ด์ œ ๋‹ค์Œ ๋ช…๋ น์–ด๋กœ ์‹คํ–‰์‹œํ‚ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash uvicorn server:app ``` ์ด์ œ ์ฟผ๋ฆฌ๋ฅผ ๋‚ ๋ ค๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash curl -X POST -d "test [MASK]" http://localhost:8000/ #[{"score":0.7742936015129089,"token":1012,"token_str":".","sequence":"test."},...] ``` ์ž, ์ด์ œ ์›น ์„œ๋ฒ„๋ฅผ ๋งŒ๋“œ๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ์ข‹์€ ๊ฐœ๋…์„ ์•Œ๊ฒŒ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! ์ค‘์š”ํ•œ ์ ์€ ๋ชจ๋ธ์„ **ํ•œ ๋ฒˆ๋งŒ** ๊ฐ€์ ธ์˜จ๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์›น ์„œ๋ฒ„์—๋Š” ๋ชจ๋ธ์˜ ์‚ฌ๋ณธ์ด ์—†์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฐ ๋ฐฉ์‹์€ ๋ถˆํ•„์š”ํ•œ RAM์ด ์‚ฌ์šฉ๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฐ ๋‹ค์Œ ํ ๋ฉ”์ปค๋‹ˆ์ฆ˜์„ ์‚ฌ์šฉํ•˜๋ฉด, ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋™์  ๋ฐฐ์น˜๋ฅผ ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด ์ถ”๋ก  ์ „ ๋‹จ๊ณ„์— ๋ช‡ ๊ฐœ์˜ ํ•ญ๋ชฉ์„ ์ถ•์ ํ•˜๋Š” ๊ฒƒ๊ณผ ๊ฐ™์€ ๋ฉ‹์ง„ ์ž‘์—…์„ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: <Tip warning={true}> ์ฝ”๋“œ๋Š” ์˜๋„์ ์œผ๋กœ ๊ฐ€๋…์„ฑ์„ ์œ„ํ•ด ์˜์‚ฌ ์ฝ”๋“œ์ฒ˜๋Ÿผ ์ž‘์„ฑ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! 
์•„๋ž˜ ์ฝ”๋“œ๋ฅผ ์ž‘๋™์‹œํ‚ค๊ธฐ ์ „์— ์‹œ์Šคํ…œ ์ž์›์ด ์ถฉ๋ถ„ํ•œ์ง€ ํ™•์ธํ•˜์„ธ์š”! </Tip> ```py (string, rq) = await q.get() strings = [] queues = [] while True: try: (string, rq) = await asyncio.wait_for(q.get(), timeout=0.001) # 1ms except asyncio.exceptions.TimeoutError: break strings.append(string) queues.append(rq) strings outs = pipe(strings, batch_size=len(strings)) for rq, out in zip(queues, outs): await rq.put(out) ``` ๋‹ค์‹œ ๋ง์”€ ๋“œ๋ฆฌ์ž๋ฉด, ์ œ์•ˆ๋œ ์ฝ”๋“œ๋Š” ๊ฐ€๋…์„ฑ์„ ์œ„ํ•ด ์ตœ์ ํ™”๋˜์—ˆ์œผ๋ฉฐ, ์ตœ์ƒ์˜ ์ฝ”๋“œ๋Š” ์•„๋‹™๋‹ˆ๋‹ค. ์ฒซ์งธ, ๋ฐฐ์น˜ ํฌ๊ธฐ ์ œํ•œ์ด ์—†์œผ๋ฉฐ ์ด๋Š” ์ผ๋ฐ˜์ ์œผ๋กœ ์ข‹์€ ๋ฐฉ์‹์ด ์•„๋‹™๋‹ˆ๋‹ค. ๋‘˜์งธ, ๋ชจ๋“  ํ ๊ฐ€์ ธ์˜ค๊ธฐ์—์„œ ํƒ€์ž„์•„์›ƒ์ด ์žฌ์„ค์ •๋˜๋ฏ€๋กœ ์ถ”๋ก ์„ ์‹คํ–‰ํ•˜๊ธฐ ์ „์— 1ms๋ณด๋‹ค ํ›จ์”ฌ ์˜ค๋ž˜ ๊ธฐ๋‹ค๋ฆด ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค(์ฒซ ๋ฒˆ์งธ ์š”์ฒญ์„ ๊ทธ๋งŒํผ ์ง€์—ฐ์‹œํ‚ด). ๋‹จ์ผ 1ms ๊ธธ์ด์˜ ๋ฐ๋“œ๋ผ์ธ์„ ๋‘๋Š” ํŽธ์ด ๋” ์ข‹์Šต๋‹ˆ๋‹ค. ์ด ๋ฐฉ์‹์„ ์‚ฌ์šฉํ•˜๋ฉด ํ๊ฐ€ ๋น„์–ด ์žˆ์–ด๋„ ํ•ญ์ƒ 1ms๋ฅผ ๊ธฐ๋‹ค๋ฆฌ๊ฒŒ ๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ํ์— ์•„๋ฌด๊ฒƒ๋„ ์—†์„ ๋•Œ ์ถ”๋ก ์„ ์›ํ•˜๋Š” ๊ฒฝ์šฐ์—๋Š” ์ตœ์„ ์˜ ๋ฐฉ๋ฒ•์ด ์•„๋‹ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ๋ฐฐ์น˜ ์ž‘์—…์ด ์‚ฌ์šฉ๋ก€์— ๋”ฐ๋ผ ์ •๋ง๋กœ ์ค‘์š”ํ•˜๋‹ค๋ฉด ์˜๋ฏธ๊ฐ€ ์žˆ์„ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์‹œ ๋งํ•˜์ง€๋งŒ, ์ตœ์ƒ์˜ ์†”๋ฃจ์…˜์€ ์—†์Šต๋‹ˆ๋‹ค. ## ๊ณ ๋ คํ•ด์•ผ ํ•  ๋ช‡ ๊ฐ€์ง€ ์‚ฌํ•ญ[[few_things_you_might want_to_consider]] ### ์—๋Ÿฌ ํ™•์ธ[[error_checking]] ํ”„๋กœ๋•์…˜ ํ™˜๊ฒฝ์—์„œ๋Š” ๋ฌธ์ œ๊ฐ€ ๋ฐœ์ƒํ•  ์—ฌ์ง€๊ฐ€ ๋งŽ์Šต๋‹ˆ๋‹ค. ๋ฉ”๋ชจ๋ฆฌ๊ฐ€ ๋ชจ์ž๋ผ๊ฑฐ๋‚˜, ๊ณต๊ฐ„์ด ๋ถ€์กฑํ•˜๊ฑฐ๋‚˜, ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ค๋Š” ๋ฐ์— ์‹คํŒจํ•˜๊ฑฐ๋‚˜, ์ฟผ๋ฆฌ๊ฐ€ ์ž˜๋ชป๋˜์—ˆ๊ฑฐ๋‚˜, ์ฟผ๋ฆฌ๋Š” ์ •ํ™•ํ•ด๋„ ๋ชจ๋ธ ์„ค์ •์ด ์ž˜๋ชป๋˜์–ด ์‹คํ–‰์— ์‹คํŒจํ•˜๋Š” ๋“ฑ๋“ฑ ๋งŽ์€ ๊ฒฝ์šฐ๊ฐ€ ์กด์žฌํ•ฉ๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์œผ๋กœ ์„œ๋ฒ„๊ฐ€ ์‚ฌ์šฉ์ž์—๊ฒŒ ์˜ค๋ฅ˜๋ฅผ ์ถœ๋ ฅํ•˜๋Š” ๊ฒƒ์ด ์ข‹์œผ๋ฏ€๋กœ ์˜ค๋ฅ˜๋ฅผ ํ‘œ์‹œํ•˜๊ธฐ ์œ„ํ•ด `try...except` ๋ฌธ์„ ๋งŽ์ด ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. 
하지만 보안 상황에 따라 모든 오류를 표시하는 것은 보안상 위험할 수도 있다는 점을 명심해야 합니다.

### 서킷 브레이킹[[circuit_breaking]]

웹 서버는 일반적으로 서킷 브레이킹을 수행할 때 더 나은 상황에 직면합니다. 즉, 이는 서버가 쿼리를 무기한으로 기다리게 하는 대신 과부하 상태일 때 적절한 오류를 반환하는 것을 의미합니다. 서버가 매우 오랜 시간 동안 대기하거나 적당한 시간이 지난 후에 504 에러를 반환하는 대신 503 에러를 빠르게 반환하게 하는 것입니다.

제안된 코드에는 단일 큐가 있으므로 구현하기가 비교적 쉽습니다. 큐 크기를 확인하는 것은 웹 서버가 과부하 상황일 때 에러를 반환하기 위한 가장 기초적인 작업입니다.

### 메인 스레드 차단[[blocking_the_main_thread]]

현재 PyTorch는 비동기 처리를 지원하지 않으며, 실행 중에는 메인 스레드가 차단됩니다. 따라서 PyTorch를 별도의 스레드/프로세스에서 실행하도록 강제하는 것이 좋습니다. 여기서는 이 작업이 수행되지 않았습니다. 왜냐하면 코드가 훨씬 더 복잡하기 때문입니다(주로 스레드, 비동기 처리, 큐가 서로 잘 맞지 않기 때문입니다). 하지만 궁극적으로는 같은 작업을 수행하는 것입니다.

단일 항목의 추론이 오래 걸린다면(> 1초), 메인 스레드를 차단하지 않는 것이 중요할 수 있습니다. 왜냐하면 이 경우 추론 중 모든 쿼리는 오류를 받기 전에 1초를 기다려야 하기 때문입니다.

### 동적 배치[[dynamic_batching]]

일반적으로, 배치 처리가 1개 항목을 한 번에 전달하는 것에 비해 반드시 성능 향상이 있는 것은 아닙니다(자세한 내용은 [`batching details`](./main_classes/pipelines#pipeline-batching)을 참고하세요).
ํ•˜์ง€๋งŒ ์˜ฌ๋ฐ”๋ฅธ ์„ค์ •์—์„œ ์‚ฌ์šฉํ•˜๋ฉด ๋งค์šฐ ํšจ๊ณผ์ ์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. API์—๋Š” ๊ธฐ๋ณธ์ ์œผ๋กœ ์†๋„ ์ €ํ•˜์˜ ๊ฐ€๋Šฅ์„ฑ์ด ๋งค์šฐ ๋†’๊ธฐ ๋•Œ๋ฌธ์— ๋™์  ๋ฐฐ์น˜ ์ฒ˜๋ฆฌ๊ฐ€ ์—†์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ๋งค์šฐ ํฐ ๋ชจ๋ธ์ธ BLOOM ์ถ”๋ก ์˜ ๊ฒฝ์šฐ ๋™์  ๋ฐฐ์น˜ ์ฒ˜๋ฆฌ๋Š” ๋ชจ๋“  ์‚ฌ๋žŒ์—๊ฒŒ ์ ์ ˆํ•œ ๊ฒฝํ—˜์„ ์ œ๊ณตํ•˜๋Š” ๋ฐ **ํ•„์ˆ˜**์ž…๋‹ˆ๋‹ค.
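앞서 언급한 "단일 고정 데드라인" 방식의 동적 배치를, 실제 파이프라인 없이 asyncio 큐만으로 스케치하면 다음과 같습니다. `MAX_BATCH_SIZE`와 `deadline_s` 값은 설명을 위한 가정값이며, 실제 서버 코드가 아닙니다.

```py
import asyncio
import time

MAX_BATCH_SIZE = 8  # 배치 크기 상한 (가정값)


async def collect_batch(q, deadline_s=0.001):
    # 첫 항목은 무기한 대기: 큐가 비어 있을 땐 기다리기만 하고 데드라인을 시작하지 않습니다
    batch = [await q.get()]
    # 고정 데드라인: 본문의 예제와 달리 get마다 타임아웃을 재설정하지 않습니다
    deadline = time.monotonic() + deadline_s
    while len(batch) < MAX_BATCH_SIZE:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            break
        try:
            batch.append(await asyncio.wait_for(q.get(), timeout=remaining))
        except asyncio.TimeoutError:
            break
    return batch


async def main():
    q = asyncio.Queue()
    for i in range(20):
        q.put_nowait(i)
    # 큐에 항목이 충분하면 데드라인 전에 MAX_BATCH_SIZE만큼 즉시 채워집니다
    return await collect_batch(q, deadline_s=0.01)


print(asyncio.run(main()))
```

이렇게 하면 배치 크기에 상한이 생기고, 첫 번째 요청이 타임아웃 누적으로 1ms보다 훨씬 오래 지연되는 문제도 사라집니다.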
hf_public_repos/transformers/docs/source/ko/autoclass_tutorial.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # AutoClass๋กœ ์‚ฌ์ „ ํ•™์Šต๋œ ์ธ์Šคํ„ด์Šค ๋กœ๋“œ[[load-pretrained-instances-with-an-autoclass]] ํŠธ๋žœ์Šคํฌ๋จธ ์•„ํ‚คํ…์ฒ˜๊ฐ€ ๋งค์šฐ ๋‹ค์–‘ํ•˜๊ธฐ ๋•Œ๋ฌธ์— ์ฒดํฌํฌ์ธํŠธ์— ๋งž๋Š” ์•„ํ‚คํ…์ฒ˜๋ฅผ ์ƒ์„ฑํ•˜๋Š” ๊ฒƒ์ด ์–ด๋ ค์šธ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‰ฝ๊ณ  ๊ฐ„๋‹จํ•˜๋ฉฐ ์œ ์—ฐํ•˜๊ฒŒ ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•œ Transformer ํ•ต์‹ฌ ์ฒ ํ•™์˜ ์ผํ™˜์œผ๋กœ, `AutoClass`๋Š” ์ฃผ์–ด์ง„ ์ฒดํฌํฌ์ธํŠธ์—์„œ ์˜ฌ๋ฐ”๋ฅธ ์•„ํ‚คํ…์ฒ˜๋ฅผ ์ž๋™์œผ๋กœ ์ถ”๋ก ํ•˜์—ฌ ๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค. `from_pretrained()` ๋ฉ”์„œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ๋ชจ๋“  ์•„ํ‚คํ…์ฒ˜์— ๋Œ€ํ•ด ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์„ ๋น ๋ฅด๊ฒŒ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ ๋ชจ๋ธ์„ ์ฒ˜์Œ๋ถ€ํ„ฐ ํ•™์Šตํ•˜๋Š” ๋ฐ ์‹œ๊ฐ„๊ณผ ๋ฆฌ์†Œ์Šค๋ฅผ ํˆฌ์ž…ํ•  ํ•„์š”๊ฐ€ ์—†์Šต๋‹ˆ๋‹ค. ์ฒดํฌํฌ์ธํŠธ์— ๊ตฌ์• ๋ฐ›์ง€ ์•Š๋Š” ์ฝ”๋“œ๋ฅผ ์ƒ์„ฑํ•œ๋‹ค๋Š” ๊ฒƒ์€ ์ฝ”๋“œ๊ฐ€ ํ•œ ์ฒดํฌํฌ์ธํŠธ์—์„œ ์ž‘๋™ํ•˜๋ฉด ์•„ํ‚คํ…์ฒ˜๊ฐ€ ๋‹ค๋ฅด๋”๋ผ๋„ ๋‹ค๋ฅธ ์ฒดํฌํฌ์ธํŠธ(์œ ์‚ฌํ•œ ์ž‘์—…์— ๋Œ€ํ•ด ํ•™์Šต๋œ ๊ฒฝ์šฐ)์—์„œ๋„ ์ž‘๋™ํ•œ๋‹ค๋Š” ๊ฒƒ์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค. <Tip> ์•„ํ‚คํ…์ฒ˜๋Š” ๋ชจ๋ธ์˜ ๊ณจ๊ฒฉ์„ ์˜๋ฏธํ•˜๋ฉฐ ์ฒดํฌํฌ์ธํŠธ๋Š” ์ฃผ์–ด์ง„ ์•„ํ‚คํ…์ฒ˜์— ๋Œ€ํ•œ ๊ฐ€์ค‘์น˜์ž…๋‹ˆ๋‹ค. 
์˜ˆ๋ฅผ ๋“ค์–ด, [BERT](https://huggingface.co/bert-base-uncased)๋Š” ์•„ํ‚คํ…์ฒ˜์ด๊ณ , `bert-base-uncased`๋Š” ์ฒดํฌํฌ์ธํŠธ์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ์€ ์•„ํ‚คํ…์ฒ˜ ๋˜๋Š” ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์˜๋ฏธํ•  ์ˆ˜ ์žˆ๋Š” ์ผ๋ฐ˜์ ์ธ ์šฉ์–ด์ž…๋‹ˆ๋‹ค. </Tip> ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ๋Š” ๋‹ค์Œ์„ ํ•™์Šตํ•ฉ๋‹ˆ๋‹ค: * ์‚ฌ์ „ ํ•™์Šต๋œ ํ† ํฌ๋‚˜์ด์ € ๋กœ๋“œํ•˜๊ธฐ. * ์‚ฌ์ „ ํ•™์Šต๋œ ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ ๋กœ๋“œํ•˜๊ธฐ. * ์‚ฌ์ „ ํ•™์Šต๋œ ํŠน์ง• ์ถ”์ถœ๊ธฐ ๋กœ๋“œํ•˜๊ธฐ. * ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ํ”„๋กœ์„ธ์„œ ๋กœ๋“œํ•˜๊ธฐ. * ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ ๋กœ๋“œํ•˜๊ธฐ. ## AutoTokenizer[[autotokenizer]] ๊ฑฐ์˜ ๋ชจ๋“  NLP ์ž‘์—…์€ ํ† ํฌ๋‚˜์ด์ €๋กœ ์‹œ์ž‘๋ฉ๋‹ˆ๋‹ค. ํ† ํฌ๋‚˜์ด์ €๋Š” ์‚ฌ์šฉ์ž์˜ ์ž…๋ ฅ์„ ๋ชจ๋ธ์—์„œ ์ฒ˜๋ฆฌํ•  ์ˆ˜ ์žˆ๋Š” ํ˜•์‹์œผ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค. [`AutoTokenizer.from_pretrained`]๋กœ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") ``` ๊ทธ๋ฆฌ๊ณ  ์•„๋ž˜์™€ ๊ฐ™์ด ์ž…๋ ฅ์„ ํ† ํฐํ™”ํ•ฉ๋‹ˆ๋‹ค: ```py >>> sequence = "In a hole in the ground there lived a hobbit." >>> print(tokenizer(sequence)) {'input_ids': [101, 1999, 1037, 4920, 1999, 1996, 2598, 2045, 2973, 1037, 7570, 10322, 4183, 1012, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} ``` ## AutoImageProcessor[[autoimageprocessor]] ๋น„์ „ ์ž‘์—…์˜ ๊ฒฝ์šฐ ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๊ฐ€ ์ด๋ฏธ์ง€๋ฅผ ์˜ฌ๋ฐ”๋ฅธ ์ž…๋ ฅ ํ˜•์‹์œผ๋กœ ์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import AutoImageProcessor >>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224") ``` ## AutoFeatureExtractor[[autofeatureextractor]] ์˜ค๋””์˜ค ์ž‘์—…์˜ ๊ฒฝ์šฐ ํŠน์ง• ์ถ”์ถœ๊ธฐ๊ฐ€ ์˜ค๋””์˜ค ์‹ ํ˜ธ๋ฅผ ์˜ฌ๋ฐ”๋ฅธ ์ž…๋ ฅ ํ˜•์‹์œผ๋กœ ์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค. [`AutoFeatureExtractor.from_pretrained`]๋กœ ํŠน์ง• ์ถ”์ถœ๊ธฐ๋ฅผ ๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoFeatureExtractor >>> feature_extractor = AutoFeatureExtractor.from_pretrained( ... 
"ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition" ... ) ``` ## AutoProcessor[[autoprocessor]] ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ ์ž‘์—…์—๋Š” ๋‘ ๊ฐ€์ง€ ์œ ํ˜•์˜ ์ „์ฒ˜๋ฆฌ ๋„๊ตฌ๋ฅผ ๊ฒฐํ•ฉํ•œ ํ”„๋กœ์„ธ์„œ๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด LayoutLMV2 ๋ชจ๋ธ์—๋Š” ์ด๋ฏธ์ง€๋ฅผ ์ฒ˜๋ฆฌํ•˜๋Š” ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ์™€ ํ…์ŠคํŠธ๋ฅผ ์ฒ˜๋ฆฌํ•˜๋Š” ํ† ํฌ๋‚˜์ด์ €๊ฐ€ ํ•„์š”ํ•˜๋ฉฐ, ํ”„๋กœ์„ธ์„œ๋Š” ์ด ๋‘ ๊ฐ€์ง€๋ฅผ ๊ฒฐํ•ฉํ•ฉ๋‹ˆ๋‹ค. [`AutoProcessor.from_pretrained()`]๋กœ ํ”„๋กœ์„ธ์„œ๋ฅผ ๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoProcessor >>> processor = AutoProcessor.from_pretrained("microsoft/layoutlmv2-base-uncased") ``` ## AutoModel[[automodel]] <frameworkcontent> <pt> ๋งˆ์ง€๋ง‰์œผ๋กœ AutoModelForํด๋ž˜์Šค๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ์ฃผ์–ด์ง„ ์ž‘์—…์— ๋Œ€ํ•ด ๋ฏธ๋ฆฌ ํ•™์Šต๋œ ๋ชจ๋ธ์„ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค (์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ ์ž‘์—…์˜ ์ „์ฒด ๋ชฉ๋ก์€ [์—ฌ๊ธฐ](model_doc/auto)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”). ์˜ˆ๋ฅผ ๋“ค์–ด, [`AutoModelForSequenceClassification.from_pretrained`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์‹œํ€€์Šค ๋ถ„๋ฅ˜์šฉ ๋ชจ๋ธ์„ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased") ``` ๋™์ผํ•œ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์‰ฝ๊ฒŒ ์žฌ์‚ฌ์šฉํ•˜์—ฌ ๋‹ค๋ฅธ ์ž‘์—…์— ์•„ํ‚คํ…์ฒ˜๋ฅผ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForTokenClassification >>> model = AutoModelForTokenClassification.from_pretrained("distilbert-base-uncased") ``` <Tip warning={true}> PyTorch๋ชจ๋ธ์˜ ๊ฒฝ์šฐ `from_pretrained()` ๋ฉ”์„œ๋“œ๋Š” ๋‚ด๋ถ€์ ์œผ๋กœ ํ”ผํด์„ ์‚ฌ์šฉํ•˜์—ฌ ์•ˆ์ „ํ•˜์ง€ ์•Š์€ ๊ฒƒ์œผ๋กœ ์•Œ๋ ค์ง„ `torch.load()`๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์œผ๋กœ ์‹ ๋ขฐํ•  ์ˆ˜ ์—†๋Š” ์†Œ์Šค์—์„œ ๊ฐ€์ ธ์™”๊ฑฐ๋‚˜ ๋ณ€์กฐ๋˜์—ˆ์„ ์ˆ˜ ์žˆ๋Š” ๋ชจ๋ธ์€ ๋กœ๋“œํ•˜์ง€ ๋งˆ์„ธ์š”. 
ํ—ˆ๊น… ํŽ˜์ด์Šค ํ—ˆ๋ธŒ์—์„œ ํ˜ธ์ŠคํŒ…๋˜๋Š” ๊ณต๊ฐœ ๋ชจ๋ธ์˜ ๊ฒฝ์šฐ ์ด๋Ÿฌํ•œ ๋ณด์•ˆ ์œ„ํ—˜์ด ๋ถ€๋ถ„์ ์œผ๋กœ ์™„ํ™”๋˜๋ฉฐ, ๊ฐ ์ปค๋ฐ‹ ์‹œ ๋ฉ€์›จ์–ด๋ฅผ [๊ฒ€์‚ฌํ•ฉ๋‹ˆ๋‹ค](https://huggingface.co/docs/hub/security-malware). GPG๋ฅผ ์‚ฌ์šฉํ•ด ์„œ๋ช…๋œ [์ปค๋ฐ‹ ๊ฒ€์ฆ](https://huggingface.co/docs/hub/security-gpg#signing-commits-with-gpg)๊ณผ ๊ฐ™์€ ๋ชจ๋ฒ”์‚ฌ๋ก€๋Š” [๋ฌธ์„œ](https://huggingface.co/docs/hub/security)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ํ…์„œํ”Œ๋กœ์šฐ์™€ Flax ์ฒดํฌํฌ์ธํŠธ๋Š” ์˜ํ–ฅ์„ ๋ฐ›์ง€ ์•Š์œผ๋ฉฐ, `from_pretrained`๋ฉ”์„œ๋“œ์— `from_tf` ์™€ `from_flax` ํ‚ค์›Œ๋“œ ๊ฐ€๋ณ€ ์ธ์ž๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ด ๋ฌธ์ œ๋ฅผ ์šฐํšŒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. </Tip> ์ผ๋ฐ˜์ ์œผ๋กœ AutoTokenizer ํด๋ž˜์Šค์™€ AutoModelFor ํด๋ž˜์Šค๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฏธ๋ฆฌ ํ•™์Šต๋œ ๋ชจ๋ธ ์ธ์Šคํ„ด์Šค๋ฅผ ๋กœ๋“œํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ๋งค๋ฒˆ ์˜ฌ๋ฐ”๋ฅธ ์•„ํ‚คํ…์ฒ˜๋ฅผ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ [ํŠœํ† ๋ฆฌ์–ผ](preprocessing)์—์„œ๋Š” ์ƒˆ๋กญ๊ฒŒ ๋กœ๋“œํ•œ ํ† ํฌ๋‚˜์ด์ €, ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ, ํŠน์ง• ์ถ”์ถœ๊ธฐ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฏธ์„ธ ํŠœ๋‹์šฉ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์ „์ฒ˜๋ฆฌํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด ์•Œ์•„๋ด…๋‹ˆ๋‹ค. </pt> <tf> ๋งˆ์ง€๋ง‰์œผ๋กœ `TFAutoModelFor` ํด๋ž˜์Šค๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ์ฃผ์–ด์ง„ ์ž‘์—…์— ๋Œ€ํ•ด ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. (์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ ์ž‘์—…์˜ ์ „์ฒด ๋ชฉ๋ก์€ [์—ฌ๊ธฐ](model_doc/auto)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. 
์˜ˆ๋ฅผ ๋“ค์–ด, [`TFAutoModelForSequenceClassification.from_pretrained`]๋กœ ์‹œํ€€์Šค ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•œ ๋ชจ๋ธ์„ ๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import TFAutoModelForSequenceClassification >>> model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased") ``` ์‰ฝ๊ฒŒ ๋™์ผํ•œ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์žฌ์‚ฌ์šฉํ•˜์—ฌ ๋‹ค๋ฅธ ์ž‘์—…์— ์•„ํ‚คํ…์ฒ˜๋ฅผ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import TFAutoModelForTokenClassification >>> model = TFAutoModelForTokenClassification.from_pretrained("distilbert-base-uncased") ``` ์ผ๋ฐ˜์ ์œผ๋กœ, `AutoTokenizer`ํด๋ž˜์Šค์™€ `TFAutoModelFor` ํด๋ž˜์Šค๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฏธ๋ฆฌ ํ•™์Šต๋œ ๋ชจ๋ธ ์ธ์Šคํ„ด์Šค๋ฅผ ๋กœ๋“œํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ๋งค๋ฒˆ ์˜ฌ๋ฐ”๋ฅธ ์•„ํ‚คํ…์ฒ˜๋ฅผ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ [ํŠœํ† ๋ฆฌ์–ผ](preprocessing)์—์„œ๋Š” ์ƒˆ๋กญ๊ฒŒ ๋กœ๋“œํ•œ ํ† ํฌ๋‚˜์ด์ €, ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ, ํŠน์ง• ์ถ”์ถœ๊ธฐ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฏธ์„ธ ํŠœ๋‹์šฉ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์ „์ฒ˜๋ฆฌํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด ์•Œ์•„๋ด…๋‹ˆ๋‹ค. </tf> </frameworkcontent>
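`AutoClass`가 체크포인트로부터 올바른 아키텍처를 고르는 방식은, 개념적으로는 체크포인트의 config에 적힌 `model_type`을 키로 하는 레지스트리 조회로 단순화해 볼 수 있습니다. 아래는 실제 🤗 Transformers 내부 구현이 아니라 이 아이디어만 보여주는 가상의 스케치이며, 클래스와 함수 이름은 모두 설명용 가정입니다.

```py
# 개념 스케치: 실제 transformers 내부 구현이 아니며 이름은 모두 가정입니다
MODEL_REGISTRY = {}


def register(model_type):
    # model_type 문자열을 모델 클래스에 매핑하는 데코레이터
    def deco(cls):
        MODEL_REGISTRY[model_type] = cls
        return cls
    return deco


@register("bert")
class FakeBertModel:
    def __init__(self, config):
        self.config = config


@register("distilbert")
class FakeDistilBertModel:
    def __init__(self, config):
        self.config = config


class FakeAutoModel:
    @staticmethod
    def from_config(config):
        # 체크포인트의 config.json에서 model_type을 읽어 알맞은 클래스로 분기
        cls = MODEL_REGISTRY[config["model_type"]]
        return cls(config)


model = FakeAutoModel.from_config({"model_type": "distilbert"})
print(type(model).__name__)
```

이 관점에서 보면 "체크포인트에 구애받지 않는 코드"란, 호출하는 쪽이 구체 클래스 대신 이런 조회 한 단계를 거치도록 작성된 코드라고 이해할 수 있습니다.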
hf_public_repos/transformers/docs/source/ko/peft.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๐Ÿค— PEFT๋กœ ์–ด๋Œ‘ํ„ฐ ๊ฐ€์ ธ์˜ค๊ธฐ [[load-adapters-with-peft]] [[open-in-colab]] [Parameter-Efficient Fine Tuning (PEFT)](https://huggingface.co/blog/peft) ๋ฐฉ๋ฒ•์€ ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์˜ ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ๋ฏธ์„ธ ์กฐ์ • ์ค‘ ๊ณ ์ •์‹œํ‚ค๊ณ , ๊ทธ ์œ„์— ํ›ˆ๋ จํ•  ์ˆ˜ ์žˆ๋Š” ๋งค์šฐ ์ ์€ ์ˆ˜์˜ ๋งค๊ฐœ๋ณ€์ˆ˜(์–ด๋Œ‘ํ„ฐ)๋ฅผ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ์–ด๋Œ‘ํ„ฐ๋Š” ์ž‘์—…๋ณ„ ์ •๋ณด๋ฅผ ํ•™์Šตํ•˜๋„๋ก ํ›ˆ๋ จ๋ฉ๋‹ˆ๋‹ค. ์ด ์ ‘๊ทผ ๋ฐฉ์‹์€ ์™„์ „ํžˆ ๋ฏธ์„ธ ์กฐ์ •๋œ ๋ชจ๋ธ์— ํ•„์ ํ•˜๋Š” ๊ฒฐ๊ณผ๋ฅผ ์ƒ์„ฑํ•˜๋ฉด์„œ, ๋ฉ”๋ชจ๋ฆฌ ํšจ์œจ์ ์ด๊ณ  ๋น„๊ต์  ์ ์€ ์ปดํ“จํŒ… ๋ฆฌ์†Œ์Šค๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ PEFT๋กœ ํ›ˆ๋ จ๋œ ์–ด๋Œ‘ํ„ฐ๋Š” ์ผ๋ฐ˜์ ์œผ๋กœ ์ „์ฒด ๋ชจ๋ธ๋ณด๋‹ค ํ›จ์”ฌ ์ž‘๊ธฐ ๋•Œ๋ฌธ์— ๊ณต์œ , ์ €์žฅ ๋ฐ ๊ฐ€์ ธ์˜ค๊ธฐ๊ฐ€ ํŽธ๋ฆฌํ•ฉ๋‹ˆ๋‹ค. <div class="flex flex-col justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/PEFT-hub-screenshot.png"/> <figcaption class="text-center">Hub์— ์ €์žฅ๋œ OPTForCausalLM ๋ชจ๋ธ์˜ ์–ด๋Œ‘ํ„ฐ ๊ฐ€์ค‘์น˜๋Š” ์ตœ๋Œ€ 700MB์— ๋‹ฌํ•˜๋Š” ๋ชจ๋ธ ๊ฐ€์ค‘์น˜์˜ ์ „์ฒด ํฌ๊ธฐ์— ๋น„ํ•ด ์•ฝ 6MB์— ๋ถˆ๊ณผํ•ฉ๋‹ˆ๋‹ค.</figcaption> </div> ๐Ÿค— PEFT ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์— ๋Œ€ํ•ด ์ž์„ธํžˆ ์•Œ์•„๋ณด๋ ค๋ฉด [๋ฌธ์„œ](https://huggingface.co/docs/peft/index)๋ฅผ ํ™•์ธํ•˜์„ธ์š”. 
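위에서 언급한 크기 차이(전체 약 700MB 대 어댑터 약 6MB)는 간단한 계산으로도 감을 잡을 수 있습니다. 아래는 LoRA 방식(고정된 가중치 W에 저랭크 행렬 A·B를 더하는 방식)에서 훈련 가능한 매개변수 수를 세어 보는 개념 스케치이며, `d_model`, `rank` 등의 숫자는 설명을 위한 가정값입니다.

```py
# LoRA 개념: 고정된 가중치 W(d x d) 대신 저랭크 행렬 A(d x r), B(r x d)만 훈련합니다
def lora_trainable_params(d_model, rank, num_layers, matrices_per_layer=2):
    # 각 대상 행렬마다 A와 B 두 행렬: d*r + r*d 개의 매개변수
    per_matrix = d_model * rank + rank * d_model
    return per_matrix * matrices_per_layer * num_layers


def full_params(d_model, num_layers, matrices_per_layer=2):
    # 전체 미세 조정 시 갱신되는 대상 행렬의 매개변수 수
    return d_model * d_model * matrices_per_layer * num_layers


# 가정값: d_model=1024, 24개 층, 층당 2개 행렬(q/k), rank=8
full = full_params(1024, 24)
lora = lora_trainable_params(1024, 8, 24)
print(f"full={full:,}, lora={lora:,}, ratio={lora / full:.2%}")
```

rank가 모델 차원보다 훨씬 작은 한, 훈련·저장해야 하는 매개변수는 전체 대비 수 퍼센트 이하로 줄어들며, 이것이 어댑터 파일이 작은 이유입니다.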
## ์„ค์ • [[setup]] ๐Ÿค— PEFT๋ฅผ ์„ค์น˜ํ•˜์—ฌ ์‹œ์ž‘ํ•˜์„ธ์š”: ```bash pip install peft ``` ์ƒˆ๋กœ์šด ๊ธฐ๋Šฅ์„ ์‚ฌ์šฉํ•ด๋ณด๊ณ  ์‹ถ๋‹ค๋ฉด, ๋‹ค์Œ ์†Œ์Šค์—์„œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์„ค์น˜ํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค: ```bash pip install git+https://github.com/huggingface/peft.git ``` ## ์ง€์›๋˜๋Š” PEFT ๋ชจ๋ธ [[supported-peft-models]] ๐Ÿค— Transformers๋Š” ๊ธฐ๋ณธ์ ์œผ๋กœ ์ผ๋ถ€ PEFT ๋ฐฉ๋ฒ•์„ ์ง€์›ํ•˜๋ฉฐ, ๋กœ์ปฌ์ด๋‚˜ Hub์— ์ €์žฅ๋œ ์–ด๋Œ‘ํ„ฐ ๊ฐ€์ค‘์น˜๋ฅผ ๊ฐ€์ ธ์˜ค๊ณ  ๋ช‡ ์ค„์˜ ์ฝ”๋“œ๋งŒ์œผ๋กœ ์‰ฝ๊ฒŒ ์‹คํ–‰ํ•˜๊ฑฐ๋‚˜ ํ›ˆ๋ จํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ ๋ฐฉ๋ฒ•์„ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค: - [Low Rank Adapters](https://huggingface.co/docs/peft/conceptual_guides/lora) - [IA3](https://huggingface.co/docs/peft/conceptual_guides/ia3) - [AdaLoRA](https://arxiv.org/abs/2303.10512) ๐Ÿค— PEFT์™€ ๊ด€๋ จ๋œ ๋‹ค๋ฅธ ๋ฐฉ๋ฒ•(์˜ˆ: ํ”„๋กฌํ”„ํŠธ ํ›ˆ๋ จ ๋˜๋Š” ํ”„๋กฌํ”„ํŠธ ํŠœ๋‹) ๋˜๋Š” ์ผ๋ฐ˜์ ์ธ ๐Ÿค— PEFT ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์— ๋Œ€ํ•ด ์ž์„ธํžˆ ์•Œ์•„๋ณด๋ ค๋ฉด [๋ฌธ์„œ](https://huggingface.co/docs/peft/index)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ## PEFT ์–ด๋Œ‘ํ„ฐ ๊ฐ€์ ธ์˜ค๊ธฐ [[load-a-peft-adapter]] ๐Ÿค— Transformers์—์„œ PEFT ์–ด๋Œ‘ํ„ฐ ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ค๊ณ  ์‚ฌ์šฉํ•˜๋ ค๋ฉด Hub ์ €์žฅ์†Œ๋‚˜ ๋กœ์ปฌ ๋””๋ ‰ํ„ฐ๋ฆฌ์— `adapter_config.json` ํŒŒ์ผ๊ณผ ์–ด๋Œ‘ํ„ฐ ๊ฐ€์ค‘์น˜๊ฐ€ ํฌํ•จ๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์‹ญ์‹œ์˜ค. ๊ทธ๋Ÿฐ ๋‹ค์Œ `AutoModelFor` ํด๋ž˜์Šค๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ PEFT ์–ด๋Œ‘ํ„ฐ ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ์ธ๊ณผ ๊ด€๊ณ„ ์–ธ์–ด ๋ชจ๋ธ์šฉ PEFT ์–ด๋Œ‘ํ„ฐ ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ค๋ ค๋ฉด ๋‹ค์Œ ๋‹จ๊ณ„๋ฅผ ๋”ฐ๋ฅด์‹ญ์‹œ์˜ค: 1. PEFT ๋ชจ๋ธ ID๋ฅผ ์ง€์ •ํ•˜์‹ญ์‹œ์˜ค. 2. [`AutoModelForCausalLM`] ํด๋ž˜์Šค์— ์ „๋‹ฌํ•˜์‹ญ์‹œ์˜ค. ```py from transformers import AutoModelForCausalLM, AutoTokenizer peft_model_id = "ybelkada/opt-350m-lora" model = AutoModelForCausalLM.from_pretrained(peft_model_id) ``` <Tip> `AutoModelFor` ํด๋ž˜์Šค๋‚˜ ๊ธฐ๋ณธ ๋ชจ๋ธ ํด๋ž˜์Šค(์˜ˆ: `OPTForCausalLM` ๋˜๋Š” `LlamaForCausalLM`) ์ค‘ ํ•˜๋‚˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ PEFT ์–ด๋Œ‘ํ„ฐ๋ฅผ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
</Tip> `load_adapter` ๋ฉ”์†Œ๋“œ๋ฅผ ํ˜ธ์ถœํ•˜์—ฌ PEFT ์–ด๋Œ‘ํ„ฐ๋ฅผ ๊ฐ€์ ธ์˜ฌ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ```py from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "facebook/opt-350m" peft_model_id = "ybelkada/opt-350m-lora" model = AutoModelForCausalLM.from_pretrained(model_id) model.load_adapter(peft_model_id) ``` ## 8๋น„ํŠธ ๋˜๋Š” 4๋น„ํŠธ๋กœ ๊ฐ€์ ธ์˜ค๊ธฐ [[load-in-8bit-or-4bit]] `bitsandbytes` ํ†ตํ•ฉ์€ 8๋น„ํŠธ์™€ 4๋น„ํŠธ ์ •๋ฐ€๋„ ๋ฐ์ดํ„ฐ ์œ ํ˜•์„ ์ง€์›ํ•˜๋ฏ€๋กœ ํฐ ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ฌ ๋•Œ ์œ ์šฉํ•˜๋ฉด์„œ ๋ฉ”๋ชจ๋ฆฌ๋„ ์ ˆ์•ฝํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ ํ•˜๋“œ์›จ์–ด์— ํšจ๊ณผ์ ์œผ๋กœ ๋ถ„๋ฐฐํ•˜๋ ค๋ฉด [`~PreTrainedModel.from_pretrained`]์— `load_in_8bit` ๋˜๋Š” `load_in_4bit` ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์ถ”๊ฐ€ํ•˜๊ณ  `device_map="auto"`๋ฅผ ์„ค์ •ํ•˜์„ธ์š”: ```py from transformers import AutoModelForCausalLM, AutoTokenizer peft_model_id = "ybelkada/opt-350m-lora" model = AutoModelForCausalLM.from_pretrained(peft_model_id, device_map="auto", load_in_8bit=True) ``` ## ์ƒˆ ์–ด๋Œ‘ํ„ฐ ์ถ”๊ฐ€ [[add-a-new-adapter]] ์ƒˆ ์–ด๋Œ‘ํ„ฐ๊ฐ€ ํ˜„์žฌ ์–ด๋Œ‘ํ„ฐ์™€ ๋™์ผํ•œ ์œ ํ˜•์ธ ๊ฒฝ์šฐ์— ํ•œํ•ด ๊ธฐ์กด ์–ด๋Œ‘ํ„ฐ๊ฐ€ ์žˆ๋Š” ๋ชจ๋ธ์— ์ƒˆ ์–ด๋Œ‘ํ„ฐ๋ฅผ ์ถ”๊ฐ€ํ•˜๋ ค๋ฉด [`~peft.PeftModel.add_adapter`]๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
์˜ˆ๋ฅผ ๋“ค์–ด ๋ชจ๋ธ์— ๊ธฐ์กด LoRA ์–ด๋Œ‘ํ„ฐ๊ฐ€ ์—ฐ๊ฒฐ๋˜์–ด ์žˆ๋Š” ๊ฒฝ์šฐ: ```py from transformers import AutoModelForCausalLM, OPTForCausalLM, AutoTokenizer from peft import PeftConfig model_id = "facebook/opt-350m" model = AutoModelForCausalLM.from_pretrained(model_id) lora_config = LoraConfig( target_modules=["q_proj", "k_proj"], init_lora_weights=False ) model.add_adapter(lora_config, adapter_name="adapter_1") ``` ์ƒˆ ์–ด๋Œ‘ํ„ฐ๋ฅผ ์ถ”๊ฐ€ํ•˜๋ ค๋ฉด: ```py # attach new adapter with same config model.add_adapter(lora_config, adapter_name="adapter_2") ``` ์ด์ œ [`~peft.PeftModel.set_adapter`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์–ด๋Œ‘ํ„ฐ๋ฅผ ์‚ฌ์šฉํ•  ์–ด๋Œ‘ํ„ฐ๋กœ ์„ค์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py # use adapter_1 model.set_adapter("adapter_1") output = model.generate(**inputs) print(tokenizer.decode(output_disabled[0], skip_special_tokens=True)) # use adapter_2 model.set_adapter("adapter_2") output_enabled = model.generate(**inputs) print(tokenizer.decode(output_enabled[0], skip_special_tokens=True)) ``` ## ์–ด๋Œ‘ํ„ฐ ํ™œ์„ฑํ™” ๋ฐ ๋น„ํ™œ์„ฑํ™” [[enable-and-disable-adapters]] ๋ชจ๋ธ์— ์–ด๋Œ‘ํ„ฐ๋ฅผ ์ถ”๊ฐ€ํ•œ ํ›„ ์–ด๋Œ‘ํ„ฐ ๋ชจ๋“ˆ์„ ํ™œ์„ฑํ™” ๋˜๋Š” ๋น„ํ™œ์„ฑํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
์–ด๋Œ‘ํ„ฐ ๋ชจ๋“ˆ์„ ํ™œ์„ฑํ™”ํ•˜๋ ค๋ฉด: ```py from transformers import AutoModelForCausalLM, OPTForCausalLM, AutoTokenizer from peft import PeftConfig model_id = "facebook/opt-350m" adapter_model_id = "ybelkada/opt-350m-lora" tokenizer = AutoTokenizer.from_pretrained(model_id) text = "Hello" inputs = tokenizer(text, return_tensors="pt") model = AutoModelForCausalLM.from_pretrained(model_id) peft_config = PeftConfig.from_pretrained(adapter_model_id) # to initiate with random weights peft_config.init_lora_weights = False model.add_adapter(peft_config) model.enable_adapters() output = model.generate(**inputs) ``` ์–ด๋Œ‘ํ„ฐ ๋ชจ๋“ˆ์„ ๋น„ํ™œ์„ฑํ™”ํ•˜๋ ค๋ฉด: ```py model.disable_adapters() output = model.generate(**inputs) ``` ## PEFT ์–ด๋Œ‘ํ„ฐ ํ›ˆ๋ จ [[train-a-peft-adapter]] PEFT ์–ด๋Œ‘ํ„ฐ๋Š” [`Trainer`] ํด๋ž˜์Šค์—์„œ ์ง€์›๋˜๋ฏ€๋กœ ํŠน์ • ์‚ฌ์šฉ ์‚ฌ๋ก€์— ๋งž๊ฒŒ ์–ด๋Œ‘ํ„ฐ๋ฅผ ํ›ˆ๋ จํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ช‡ ์ค„์˜ ์ฝ”๋“œ๋ฅผ ์ถ”๊ฐ€ํ•˜๊ธฐ๋งŒ ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด LoRA ์–ด๋Œ‘ํ„ฐ๋ฅผ ํ›ˆ๋ จํ•˜๋ ค๋ฉด: <Tip> [`Trainer`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๊ฒƒ์ด ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด [์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](training) ํŠœํ† ๋ฆฌ์–ผ์„ ํ™•์ธํ•˜์„ธ์š”. </Tip> 1. ์ž‘์—… ์œ ํ˜• ๋ฐ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ง€์ •ํ•˜์—ฌ ์–ด๋Œ‘ํ„ฐ ๊ตฌ์„ฑ์„ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [`~peft.LoraConfig`]๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ```py from peft import LoraConfig peft_config = LoraConfig( lora_alpha=16, lora_dropout=0.1, r=64, bias="none", task_type="CAUSAL_LM", ) ``` 2. ๋ชจ๋ธ์— ์–ด๋Œ‘ํ„ฐ๋ฅผ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ```py model.add_adapter(peft_config) ``` 3. ์ด์ œ ๋ชจ๋ธ์„ [`Trainer`]์— ์ „๋‹ฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ```py trainer = Trainer(model=model, ...) trainer.train() ``` ํ›ˆ๋ จํ•œ ์–ด๋Œ‘ํ„ฐ๋ฅผ ์ €์žฅํ•˜๊ณ  ๋‹ค์‹œ ๊ฐ€์ ธ์˜ค๋ ค๋ฉด: ```py model.save_pretrained(save_dir) model = AutoModelForCausalLM.from_pretrained(save_dir) ```
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# CPU์—์„œ ํšจ์œจ์ ์ธ ์ถ”๋ก ํ•˜๊ธฐ [[efficient-inference-on-cpu]]

์ด ๊ฐ€์ด๋“œ๋Š” CPU์—์„œ ๋Œ€๊ทœ๋ชจ ๋ชจ๋ธ์„ ํšจ์œจ์ ์œผ๋กœ ์ถ”๋ก ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ์ค‘์ ์„ ๋‘๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค.

## ๋” ๋น ๋ฅธ ์ถ”๋ก ์„ ์œ„ํ•œ `BetterTransformer` [[bettertransformer-for-faster-inference]]

์šฐ๋ฆฌ๋Š” ์ตœ๊ทผ CPU์—์„œ ํ…์ŠคํŠธ, ์ด๋ฏธ์ง€ ๋ฐ ์˜ค๋””์˜ค ๋ชจ๋ธ์˜ ๋น ๋ฅธ ์ถ”๋ก ์„ ์œ„ํ•ด `BetterTransformer`๋ฅผ ํ†ตํ•ฉํ–ˆ์Šต๋‹ˆ๋‹ค. ์ด ํ†ตํ•ฉ์— ๋Œ€ํ•œ ๋” ์ž์„ธํ•œ ๋‚ด์šฉ์€ [์ด ๋ฌธ์„œ](https://huggingface.co/docs/optimum/bettertransformer/overview)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”.

## PyTorch JIT ๋ชจ๋“œ (TorchScript) [[pytorch-jitmode-torchscript]]

TorchScript๋Š” PyTorch ์ฝ”๋“œ์—์„œ ์ง๋ ฌํ™”์™€ ์ตœ์ ํ™”๊ฐ€ ๊ฐ€๋Šฅํ•œ ๋ชจ๋ธ์„ ์ƒ์„ฑํ•  ๋•Œ ์“ฐ์ž…๋‹ˆ๋‹ค. TorchScript๋กœ ๋งŒ๋“ค์–ด์ง„ ํ”„๋กœ๊ทธ๋žจ์€ ๊ธฐ์กด Python ํ”„๋กœ์„ธ์Šค์—์„œ ์ €์žฅํ•œ ๋’ค, ์ข…์†์„ฑ์ด ์—†๋Š” ์ƒˆ๋กœ์šด ํ”„๋กœ์„ธ์Šค๋กœ ๊ฐ€์ ธ์˜ฌ ์ˆ�˜ ์žˆ์Šต๋‹ˆ๋‹ค. PyTorch์˜ ๊ธฐ๋ณธ ์„ค์ •์ธ `eager` ๋ชจ๋“œ์™€ ๋น„๊ตํ–ˆ์„ ๋•Œ, `jit` ๋ชจ๋“œ๋Š” ์—ฐ์‚ฐ์ž ๊ฒฐํ•ฉ๊ณผ ๊ฐ™์€ ์ตœ์ ํ™” ๋ฐฉ๋ฒ•๋ก ์„ ํ†ตํ•ด ๋ชจ๋ธ ์ถ”๋ก ์—์„œ ๋Œ€๋ถ€๋ถ„ ๋” ๋‚˜์€ ์„ฑ๋Šฅ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค.

TorchScript์— ๋Œ€ํ•œ ์นœ์ ˆํ•œ ์†Œ๊ฐœ๋Š” [PyTorch TorchScript ํŠœํ† ๋ฆฌ์–ผ](https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html#tracing-modules)์„ ์ฐธ์กฐํ•˜์„ธ์š”.
### JIT ๋ชจ๋“œ์™€ ํ•จ๊ป˜ํ•˜๋Š” IPEX ๊ทธ๋ž˜ํ”„ ์ตœ์ ํ™” [[ipex-graph-optimization-with-jitmode]] Intelยฎ Extension for PyTorch(IPEX)๋Š” Transformers ๊ณ„์—ด ๋ชจ๋ธ์˜ jit ๋ชจ๋“œ์—์„œ ์ถ”๊ฐ€์ ์ธ ์ตœ์ ํ™”๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. jit ๋ชจ๋“œ์™€ ๋”๋ถˆ์–ด Intelยฎ Extension for PyTorch(IPEX)๋ฅผ ํ™œ์šฉํ•˜์‹œ๊ธธ ๊ฐ•๋ ฅํžˆ ๊ถŒ์žฅ๋“œ๋ฆฝ๋‹ˆ๋‹ค. Transformers ๋ชจ๋ธ์—์„œ ์ž์ฃผ ์‚ฌ์šฉ๋˜๋Š” ์ผ๋ถ€ ์—ฐ์‚ฐ์ž ํŒจํ„ด์€ ์ด๋ฏธ jit ๋ชจ๋“œ ์—ฐ์‚ฐ์ž ๊ฒฐํ•ฉ(operator fusion)์˜ ํ˜•ํƒœ๋กœ Intelยฎ Extension for PyTorch(IPEX)์—์„œ ์ง€์›๋˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. Multi-head-attention, Concat Linear, Linear+Add, Linear+Gelu, Add+LayerNorm ๊ฒฐํ•ฉ ํŒจํ„ด ๋“ฑ์ด ์ด์šฉ ๊ฐ€๋Šฅํ•˜๋ฉฐ ํ™œ์šฉํ–ˆ์„ ๋•Œ ์„ฑ๋Šฅ์ด ์šฐ์ˆ˜ํ•ฉ๋‹ˆ๋‹ค. ์—ฐ์‚ฐ์ž ๊ฒฐํ•ฉ์˜ ์ด์ ์€ ์‚ฌ์šฉ์ž์—๊ฒŒ ๊ณ ์Šค๋ž€ํžˆ ์ „๋‹ฌ๋ฉ๋‹ˆ๋‹ค. ๋ถ„์„์— ๋”ฐ๋ฅด๋ฉด, ์งˆ์˜ ์‘๋‹ต, ํ…์ŠคํŠธ ๋ถ„๋ฅ˜ ๋ฐ ํ† ํฐ ๋ถ„๋ฅ˜์™€ ๊ฐ™์€ ๊ฐ€์žฅ ์ธ๊ธฐ ์žˆ๋Š” NLP ํƒœ์Šคํฌ ์ค‘ ์•ฝ 70%๊ฐ€ ์ด๋Ÿฌํ•œ ๊ฒฐํ•ฉ ํŒจํ„ด์„ ์‚ฌ์šฉํ•˜์—ฌ Float32 ์ •๋ฐ€๋„์™€ BFloat16 ํ˜ผํ•ฉ ์ •๋ฐ€๋„ ๋ชจ๋‘์—์„œ ์„ฑ๋Šฅ์ƒ์˜ ์ด์ ์„ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [IPEX ๊ทธ๋ž˜ํ”„ ์ตœ์ ํ™”](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/features/graph_optimization.html)์— ๋Œ€ํ•œ ์ž์„ธํ•œ ์ •๋ณด๋ฅผ ํ™•์ธํ•˜์„ธ์š”. #### IPEX ์„ค์น˜: [[ipex-installation]] IPEX ๋ฐฐํฌ ์ฃผ๊ธฐ๋Š” PyTorch๋ฅผ ๋”ฐ๋ผ์„œ ์ด๋ฃจ์–ด์ง‘๋‹ˆ๋‹ค. ์ž์„ธํ•œ ์ •๋ณด๋Š” [IPEX ์„ค์น˜ ๋ฐฉ๋ฒ•](https://intel.github.io/intel-extension-for-pytorch/)์„ ํ™•์ธํ•˜์„ธ์š”. ### JIT ๋ชจ๋“œ ์‚ฌ์šฉ๋ฒ• [[usage-of-jitmode]] ํ‰๊ฐ€ ๋˜๋Š” ์˜ˆ์ธก์„ ์œ„ํ•ด Trainer์—์„œ JIT ๋ชจ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๋ ค๋ฉด Trainer์˜ ๋ช…๋ น ์ธ์ˆ˜์— `jit_mode_eval`์„ ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. <Tip warning={true}> PyTorch์˜ ๋ฒ„์ „์ด 1.14.0 ์ด์ƒ์ด๋ผ๋ฉด, jit ๋ชจ๋“œ๋Š” jit.trace์—์„œ dict ์ž…๋ ฅ์ด ์ง€์›๋˜๋ฏ€๋กœ, ๋ชจ๋“  ๋ชจ๋ธ์˜ ์˜ˆ์ธก๊ณผ ํ‰๊ฐ€๊ฐ€ ๊ฐœ์„ ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
PyTorch์˜ ๋ฒ„์ „์ด 1.14.0 ๋ฏธ๋งŒ์ด๋ผ๋ฉด, ์งˆ์˜ ์‘๋‹ต ๋ชจ๋ธ๊ณผ ๊ฐ™์ด forward ๋งค๊ฐœ๋ณ€์ˆ˜์˜ ์ˆœ์„œ๊ฐ€ jit.trace์˜ ํŠœํ”Œ ์ž…๋ ฅ ์ˆœ์„œ์™€ ์ผ์น˜ํ•˜๋Š” ๋ชจ๋ธ์— ๋“์ด ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ…์ŠคํŠธ ๋ถ„๋ฅ˜ ๋ชจ๋ธ๊ณผ ๊ฐ™์ด forward ๋งค๊ฐœ๋ณ€์ˆ˜ ์ˆœ์„œ๊ฐ€ jit.trace์˜ ํŠœํ”Œ ์ž…๋ ฅ ์ˆœ์„œ์™€ ๋‹ค๋ฅธ ๊ฒฝ์šฐ, jit.trace๊ฐ€ ์‹คํŒจํ•˜๋ฉฐ ์˜ˆ์™ธ๊ฐ€ ๋ฐœ์ƒํ•ฉ๋‹ˆ๋‹ค. ์ด๋•Œ ์˜ˆ์™ธ์ƒํ™ฉ์„ ์‚ฌ์šฉ์ž์—๊ฒŒ ์•Œ๋ฆฌ๊ธฐ ์œ„ํ•ด Logging์ด ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. </Tip> [Transformers ์งˆ์˜ ์‘๋‹ต](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering)์˜ ์‚ฌ์šฉ ์‚ฌ๋ก€ ์˜ˆ์‹œ๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. - CPU์—์„œ jit ๋ชจ๋“œ๋ฅผ ์‚ฌ์šฉํ•œ ์ถ”๋ก : <pre>python run_qa.py \ --model_name_or_path csarron/bert-base-uncased-squad-v1 \ --dataset_name squad \ --do_eval \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/ \ --no_cuda \ <b>--jit_mode_eval </b></pre> - CPU์—์„œ IPEX์™€ ํ•จ๊ป˜ jit ๋ชจ๋“œ๋ฅผ ์‚ฌ์šฉํ•œ ์ถ”๋ก : <pre>python run_qa.py \ --model_name_or_path csarron/bert-base-uncased-squad-v1 \ --dataset_name squad \ --do_eval \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/ \ --no_cuda \ <b>--use_ipex \</b> <b>--jit_mode_eval</b></pre>
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # BERTology BERT์™€ ๊ฐ™์€ ๋Œ€๊ทœ๋ชจ ํŠธ๋žœ์Šคํฌ๋จธ์˜ ๋‚ด๋ถ€ ๋™์ž‘์„ ์กฐ์‚ฌํ•˜๋Š” ์—ฐ๊ตฌ ๋ถ„์•ผ๊ฐ€ ์ ์  ๋” ์ค‘์š”ํ•ด์ง€๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ํ˜น์ž๋Š” "BERTology"๋ผ ์นญํ•˜๊ธฐ๋„ ํ•ฉ๋‹ˆ๋‹ค. ์ด ๋ถ„์•ผ์˜ ์ข‹์€ ์˜ˆ์‹œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: - BERT๋Š” ๊ณ ์ „์ ์ธ NLP ํŒŒ์ดํ”„๋ผ์ธ์˜ ์žฌ๋ฐœ๊ฒฌ - Ian Tenney, Dipanjan Das, Ellie Pavlick: https://arxiv.org/abs/1905.05950 - 16๊ฐœ์˜ ํ—ค๋“œ๊ฐ€ ์ •๋ง๋กœ 1๊ฐœ๋ณด๋‹ค ๋‚˜์€๊ฐ€? - Paul Michel, Omer Levy, Graham Neubig: https://arxiv.org/abs/1905.10650 - BERT๋Š” ๋ฌด์—‡์„ ๋ณด๋Š”๊ฐ€? BERT์˜ ์–ดํ…์…˜ ๋ถ„์„ - Kevin Clark, Urvashi Khandelwal, Omer Levy, Christopher D. Manning: https://arxiv.org/abs/1906.04341 - CAT-probing: ํ”„๋กœ๊ทธ๋ž˜๋ฐ ์–ธ์–ด์— ๋Œ€ํ•ด ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์ด ์–ด๋–ป๊ฒŒ ์ฝ”๋“œ ๊ตฌ์กฐ๋ฅผ ๋ณด๋Š”์ง€ ์•Œ์•„๋ณด๊ธฐ ์œ„ํ•œ ๋ฉ”ํŠธ๋ฆญ ๊ธฐ๋ฐ˜ ์ ‘๊ทผ ๋ฐฉ๋ฒ•: https://arxiv.org/abs/2210.04633 ์šฐ๋ฆฌ๋Š” ์ด ์ƒˆ๋กœ์šด ์—ฐ๊ตฌ ๋ถ„์•ผ์˜ ๋ฐœ์ „์„ ๋•๊ธฐ ์œ„ํ•ด, BERT/GPT/GPT-2 ๋ชจ๋ธ์— ๋‚ด๋ถ€ ํ‘œํ˜„์„ ์‚ดํŽด๋ณผ ์ˆ˜ ์žˆ๋Š” ๋ช‡ ๊ฐ€์ง€ ๊ธฐ๋Šฅ์„ ์ถ”๊ฐ€ํ–ˆ์Šต๋‹ˆ๋‹ค. 
์ด ๊ธฐ๋Šฅ๋“ค์€ ์ฃผ๋กœ Paul Michel์˜ ํ›Œ๋ฅญํ•œ ์ž‘์—…์„ ์ฐธ๊ณ ํ•˜์—ฌ ๊ฐœ๋ฐœ๋˜์—ˆ์Šต๋‹ˆ๋‹ค (https://arxiv.org/abs/1905.10650): - BERT/GPT/GPT-2์˜ ๋ชจ๋“  ์€๋‹‰ ์ƒํƒœ์— ์ ‘๊ทผํ•˜๊ธฐ, - BERT/GPT/GPT-2์˜ ๊ฐ ํ—ค๋“œ์˜ ๋ชจ๋“  ์–ดํ…์…˜ ๊ฐ€์ค‘์น˜์— ์ ‘๊ทผํ•˜๊ธฐ, - ํ—ค๋“œ์˜ ์ถœ๋ ฅ ๊ฐ’๊ณผ ๊ทธ๋ž˜๋””์–ธํŠธ๋ฅผ ๊ฒ€์ƒ‰ํ•˜์—ฌ ํ—ค๋“œ ์ค‘์š”๋„ ์ ์ˆ˜๋ฅผ ๊ณ„์‚ฐํ•˜๊ณ  https://arxiv.org/abs/1905.10650์—์„œ ์„ค๋ช…๋œ ๋Œ€๋กœ ํ—ค๋“œ๋ฅผ ์ œ๊ฑฐํ•˜๋Š” ๊ธฐ๋Šฅ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๊ธฐ๋Šฅ๋“ค์„ ์ดํ•ดํ•˜๊ณ  ์ง์ ‘ ์‚ฌ์šฉํ•ด๋ณผ ์ˆ˜ ์žˆ๋„๋ก [bertology.py](https://github.com/huggingface/transformers/tree/main/examples/research_projects/bertology/run_bertology.py) ์˜ˆ์ œ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์ถ”๊ฐ€ํ–ˆ์Šต๋‹ˆ๋‹ค. ์ด ์˜ˆ์ œ ์Šคํฌ๋ฆฝํŠธ์—์„œ๋Š” GLUE์— ๋Œ€ํ•ด ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์—์„œ ์ •๋ณด๋ฅผ ์ถ”์ถœํ•˜๊ณ  ๋ชจ๋ธ์„ ๊ฐ€์ง€์น˜๊ธฐ(prune)ํ•ด๋ด…๋‹ˆ๋‹ค.
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Transformer ๋ชจ๋ธ๊ตฐ[[the-transformer-model-family]]

2017๋…„์— ์†Œ๊ฐœ๋œ [๊ธฐ๋ณธ Transformer](https://arxiv.org/abs/1706.03762) ๋ชจ๋ธ์€ ์ž์—ฐ์–ด ์ฒ˜๋ฆฌ(NLP) ์ž‘์—…์„ ๋„˜์–ด ์ƒˆ๋กญ๊ณ  ํฅ๋ฏธ๋กœ์šด ๋ชจ๋ธ๋“ค์— ์˜๊ฐ์„ ์ฃผ์—ˆ์Šต๋‹ˆ๋‹ค. [๋‹จ๋ฐฑ์งˆ ์ ‘ํž˜ ๊ตฌ์กฐ ์˜ˆ์ธก](https://huggingface.co/blog/deep-learning-with-proteins), [์น˜ํƒ€์˜ ๋‹ฌ๋ฆฌ๊ธฐ ํ›ˆ๋ จ](https://huggingface.co/blog/train-decision-transformers), [์‹œ๊ณ„์—ด ์˜ˆ์ธก](https://huggingface.co/blog/time-series-transformers) ๋“ฑ์„ ์œ„ํ•œ ๋‹ค์–‘ํ•œ ๋ชจ๋ธ์ด ์ƒ๊ฒจ๋‚ฌ์Šต๋‹ˆ๋‹ค. Transformer์˜ ๋ณ€ํ˜•์ด ๋„ˆ๋ฌด ๋งŽ์•„์„œ, ํฐ ๊ทธ๋ฆผ์„ ๋†“์น˜๊ธฐ ์‰ฝ์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์—ฌ๊ธฐ ์žˆ๋Š” ๋ชจ๋“  ๋ชจ๋ธ์˜ ๊ณตํ†ต์ ์€ ๊ธฐ๋ณธ Transformer ์•„ํ‚คํ…์ฒ˜๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•œ๋‹ค๋Š” ์ ์ž…๋‹ˆ๋‹ค. ์ผ๋ถ€ ๋ชจ๋ธ์€ ์ธ์ฝ”๋” ๋˜๋Š” ๋””์ฝ”๋”๋งŒ ์‚ฌ์šฉํ•˜๊ณ , ๋‹ค๋ฅธ ๋ชจ๋ธ๋“ค์€ ์ธ์ฝ”๋”์™€ ๋””์ฝ”๋”๋ฅผ ๋ชจ๋‘ ์‚ฌ์šฉํ•˜๊ธฐ๋„ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ Transformer ๋ชจ๋ธ๊ตฐ ๋‚ด ์ƒ์œ„ ๋ ˆ๋ฒจ์—์„œ์˜ ์ฐจ์ด์ ์„ ๋ถ„๋ฅ˜ํ•˜๊ณ  ๊ฒ€ํ† ํ•˜๋ฉด ์œ ์šฉํ•œ ๋ถ„๋ฅ˜ ์ฒด๊ณ„๋ฅผ ์–ป์„ ์ˆ˜ ์žˆ์œผ๋ฉฐ, ์ด์ „์— ์ ‘ํ•ด๋ณด์ง€ ๋ชปํ•œ Transformer ๋ชจ๋ธ๋“ค ๋˜ํ•œ ์ดํ•ดํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค.
๊ธฐ๋ณธ Transformer ๋ชจ๋ธ์— ์ต์ˆ™ํ•˜์ง€ ์•Š๊ฑฐ๋‚˜ ๋ณต์Šต์ด ํ•„์š”ํ•œ ๊ฒฝ์šฐ, Hugging Face ๊ฐ•์˜์˜ [ํŠธ๋žœ์Šคํฌ๋จธ๋Š” ์–ด๋–ป๊ฒŒ ๋™์ž‘ํ•˜๋‚˜์š”?](https://huggingface.co/course/chapter1/4?fw=pt) ์ฑ•ํ„ฐ๋ฅผ ํ™•์ธํ•˜์„ธ์š”. <div align="center"> <iframe width="560" height="315" src="https://www.youtube.com/embed/H39Z_720T5s" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> </div> ## ์ปดํ“จํ„ฐ ๋น„์ „[[computer-vision]] <iframe style="border: 1px solid rgba(0, 0, 0, 0.1);" width="1000" height="450" src="https://www.figma.com/embed?embed_host=share&url=https%3A%2F%2Fwww.figma.com%2Ffile%2FacQBpeFBVvrDUlzFlkejoz%2FModelscape-timeline%3Fnode-id%3D0%253A1%26t%3Dm0zJ7m2BQ9oe0WtO-1" allowfullscreen></iframe> ### ํ•ฉ์„ฑ๊ณฑ ๋„คํŠธ์›Œํฌ[[convolutional-network]] [Vision Transformer](https://arxiv.org/abs/2010.11929)๊ฐ€ ํ™•์žฅ์„ฑ๊ณผ ํšจ์œจ์„ฑ์„ ์ž…์ฆํ•˜๊ธฐ ์ „๊นŒ์ง€ ์˜ค๋žซ๋™์•ˆ ํ•ฉ์„ฑ๊ณฑ ๋„คํŠธ์›Œํฌ(CNN)๊ฐ€ ์ปดํ“จํ„ฐ ๋น„์ „ ์ž‘์—…์˜ ์ง€๋ฐฐ์ ์ธ ํŒจ๋Ÿฌ๋‹ค์ž„์ด์—ˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿผ์—๋„ ๋ถˆ๊ตฌํ•˜๊ณ , ์ด๋™ ๋ถˆ๋ณ€์„ฑ(translation invariance)๊ณผ ๊ฐ™์€ CNN์˜ ์šฐ์ˆ˜ํ•œ ๋ถ€๋ถ„์ด ๋„๋“œ๋ผ์ง€๊ธฐ ๋•Œ๋ฌธ์— ๋ช‡๋ช‡ (ํŠนํžˆ ํŠน์ • ๊ณผ์—…์—์„œ์˜) Transformer ๋ชจ๋ธ์€ ์•„ํ‚คํ…์ฒ˜์— ํ•ฉ์„ฑ๊ณฑ์„ ํ†ตํ•ฉํ•˜๊ธฐ๋„ ํ–ˆ์Šต๋‹ˆ๋‹ค. [ConvNeXt](model_doc/convnext)๋Š” ์ด๋Ÿฐ ๊ด€๋ก€๋ฅผ ๋’ค์ง‘์–ด CNN์„ ํ˜„๋Œ€ํ™”ํ•˜๊ธฐ ์œ„ํ•ด Transformer์˜ ๋””์ž์ธ์„ ์ฐจ์šฉํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค๋ฉด ConvNeXt๋Š” ๊ฒน์น˜์ง€ ์•Š๋Š” ์Šฌ๋ผ์ด๋”ฉ ์ฐฝ(sliding window)์„ ์‚ฌ์šฉํ•˜์—ฌ ์ด๋ฏธ์ง€๋ฅผ ํŒจ์น˜ํ™”ํ•˜๊ณ , ๋” ํฐ ์ปค๋„๋กœ ์ „์—ญ ์ˆ˜์šฉ ํ•„๋“œ(global receptive field)๋ฅผ ํ™•์žฅ์‹œํ‚ต๋‹ˆ๋‹ค. ConvNeXt๋Š” ๋˜ํ•œ ๋ฉ”๋ชจ๋ฆฌ ํšจ์œจ์„ ๋†’์ด๊ณ  ์„ฑ๋Šฅ์„ ํ–ฅ์ƒ์‹œํ‚ค๊ธฐ ์œ„ํ•ด ์—ฌ๋Ÿฌ ๋ ˆ์ด์–ด ์„ค๊ณ„๋ฅผ ์„ ํƒํ•˜๊ธฐ ๋•Œ๋ฌธ์— Transformer์™€ ๊ฒฌ์ค„๋งŒํ•ฉ๋‹ˆ๋‹ค! 
### ์ธ์ฝ”๋”[[cv-encoder]] [Vision Transformer(ViT)](model_doc/vit)๋Š” ํ•ฉ์„ฑ๊ณฑ ์—†๋Š” ์ปดํ“จํ„ฐ ๋น„์ „ ์ž‘์—…์˜ ๋ง‰์„ ์—ด์—ˆ์Šต๋‹ˆ๋‹ค. ViT๋Š” ํ‘œ์ค€ Transformer ์ธ์ฝ”๋”๋ฅผ ์‚ฌ์šฉํ•˜์ง€๋งŒ, ๊ฐ€์žฅ ํฐ ํ˜์‹ ์€ ์ด๋ฏธ์ง€๋ฅผ ์ฒ˜๋ฆฌํ•˜๋Š” ๋ฐฉ์‹์ด์—ˆ์Šต๋‹ˆ๋‹ค. ๋ฌธ์žฅ์„ ํ† ํฐ์œผ๋กœ ๋ถ„ํ• ํ•˜๋Š” ๊ฒƒ์ฒ˜๋Ÿผ ์ด๋ฏธ์ง€๋ฅผ ๊ณ ์ •๋œ ํฌ๊ธฐ์˜ ํŒจ์น˜๋กœ ๋ถ„ํ• ํ•˜๊ณ , ์ด๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ž„๋ฒ ๋”ฉ์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ViT๋Š” Transformer์˜ ํšจ์œจ์ ์ธ ์•„ํ‚คํ…์ฒ˜๋ฅผ ํ™œ์šฉํ•˜์—ฌ ํ›ˆ๋ จ์— ๋” ์ ์€ ์ž์›์„ ์‚ฌ์šฉํ•˜๋ฉด์„œ๋„ ๋‹น์‹œ CNN์— ๋น„๊ฒฌํ•˜๋Š” ๊ฒฐ๊ณผ๋ฅผ ์ž…์ฆํ–ˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ViT๋ฅผ ๋’ค์ด์–ด ๋ถ„ํ• (segmentation)๊ณผ ๊ฐ™์€ ๊ณ ๋ฐ€๋„ ๋น„์ „ ์ž‘์—…๊ณผ ํƒ์ง€ ์ž‘์—…๋„ ๋‹ค๋ฃฐ ์ˆ˜ ์žˆ๋Š” ๋‹ค๋ฅธ ๋น„์ „ ๋ชจ๋ธ์ด ๋“ฑ์žฅํ–ˆ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋ชจ๋ธ ์ค‘ ํ•˜๋‚˜๊ฐ€ [Swin](model_doc/swin) Transformer์ž…๋‹ˆ๋‹ค. ์ด ๋ชจ๋ธ์€ ์ž‘์€ ํฌ๊ธฐ์˜ ํŒจ์น˜์—์„œ ๊ณ„์ธต์  ํŠน์ง• ๋งต(CNN ๐Ÿ‘€๊ณผ ๊ฐ™์ง€๋งŒ ViT์™€๋Š” ๋‹ค๋ฆ„)์„ ๋งŒ๋“ค๊ณ  ๋” ๊นŠ์€ ๋ ˆ์ด์–ด์˜ ์ธ์ ‘ ํŒจ์น˜์™€ ๋ณ‘ํ•ฉํ•ฉ๋‹ˆ๋‹ค. ์–ดํ…์…˜(Attention)์€ ์ง€์—ญ ์œˆ๋„์šฐ ๋‚ด์—์„œ๋งŒ ๊ณ„์‚ฐ๋˜๋ฉฐ, ๋ชจ๋ธ์ด ๋” ์ž˜ ํ•™์Šตํ•  ์ˆ˜ ์žˆ๋„๋ก ์–ดํ…์…˜ ๋ ˆ์ด์–ด ๊ฐ„์— ์œˆ๋„์šฐ๋ฅผ ์ด๋™ํ•˜๋ฉฐ ์—ฐ๊ฒฐ์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. Swin Transformer๋Š” ๊ณ„์ธต์  ํŠน์ง• ๋งต์„ ์ƒ์„ฑํ•  ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ, ๋ถ„ํ• (segmentation)๊ณผ ํƒ์ง€์™€ ๊ฐ™์€ ๊ณ ๋ฐ€๋„ ์˜ˆ์ธก ์ž‘์—…์— ์ ํ•ฉํ•ฉ๋‹ˆ๋‹ค. [SegFormer](model_doc/segformer) ์—ญ์‹œ Transformer ์ธ์ฝ”๋”๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๊ณ„์ธต์  ํŠน์ง• ๋งต์„ ๊ตฌ์ถ•ํ•˜์ง€๋งŒ, ์ƒ๋‹จ์— ๊ฐ„๋‹จํ•œ ๋‹ค์ธต ํผ์…‰ํŠธ๋ก (MLP) ๋””์ฝ”๋”๋ฅผ ์ถ”๊ฐ€ํ•˜์—ฌ ๋ชจ๋“  ํŠน์ง• ๋งต์„ ๊ฒฐํ•ฉํ•˜๊ณ  ์˜ˆ์ธก์„ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค. BeIT์™€ ViTMAE์™€ ๊ฐ™์€ ๋‹ค๋ฅธ ๋น„์ „ ๋ชจ๋ธ์€ BERT์˜ ์‚ฌ์ „ํ›ˆ๋ จ ๋ชฉํ‘œ(objective)์—์„œ ์˜๊ฐ์„ ์–ป์—ˆ์Šต๋‹ˆ๋‹ค. [BeIT](model_doc/beit)๋Š” *๋งˆ์Šคํฌ๋“œ ์ด๋ฏธ์ง€ ๋ชจ๋ธ๋ง(MIM)*์œผ๋กœ ์‚ฌ์ „ํ›ˆ๋ จ๋˜๋ฉฐ, ์ด๋ฏธ์ง€ ํŒจ์น˜๋Š” ์ž„์˜๋กœ ๋งˆ์Šคํ‚น๋˜๊ณ  ์ด๋ฏธ์ง€๋„ ์‹œ๊ฐ์  ํ† ํฐ์œผ๋กœ ํ† ํฐํ™”๋ฉ๋‹ˆ๋‹ค. 
BeIT๋Š” ๋งˆ์Šคํ‚น๋œ ํŒจ์น˜์— ํ•ด๋‹นํ•˜๋Š” ์‹œ๊ฐ์  ํ† ํฐ์„ ์˜ˆ์ธกํ•˜๋„๋ก ํ•™์Šต๋ฉ๋‹ˆ๋‹ค. [ViTMAE](model_doc/vitmae)๋„ ๋น„์Šทํ•œ ์‚ฌ์ „ํ›ˆ๋ จ ๋ชฉํ‘œ๊ฐ€ ์žˆ์ง€๋งŒ, ์‹œ๊ฐ์  ํ† ํฐ ๋Œ€์‹  ํ”ฝ์…€์„ ์˜ˆ์ธกํ•ด์•ผ ํ•œ๋‹ค๋Š” ์ ์ด ๋‹ค๋ฆ…๋‹ˆ๋‹ค. ํŠน์ดํ•œ ์ ์€ ์ด๋ฏธ์ง€ ํŒจ์น˜์˜ 75%๊ฐ€ ๋งˆ์Šคํ‚น๋˜์–ด ์žˆ๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค! ๋””์ฝ”๋”๋Š” ๋งˆ์Šคํ‚น๋œ ํ† ํฐ๊ณผ ์ธ์ฝ”๋”ฉ๋œ ํŒจ์น˜์—์„œ ํ”ฝ์…€์„ ์žฌ๊ตฌ์„ฑํ•ฉ๋‹ˆ๋‹ค. ์‚ฌ์ „ํ›ˆ๋ จ์ด ๋๋‚˜๋ฉด ๋””์ฝ”๋”๋Š” ํ๊ธฐ๋˜๊ณ  ์ธ์ฝ”๋”๋Š” ๋‹ค์šด์ŠคํŠธ๋ฆผ ์ž‘์—…์— ์‚ฌ์šฉํ•  ์ค€๋น„๊ฐ€ ๋ฉ๋‹ˆ๋‹ค. ### ๋””์ฝ”๋”[[cv-decoder]] ๋Œ€๋ถ€๋ถ„์˜ ๋น„์ „ ๋ชจ๋ธ์€ ์ธ์ฝ”๋”์— ์˜์กดํ•˜์—ฌ ์ด๋ฏธ์ง€ ํ‘œํ˜„์„ ํ•™์Šตํ•˜๊ธฐ ๋•Œ๋ฌธ์— ๋””์ฝ”๋” ์ „์šฉ ๋น„์ „ ๋ชจ๋ธ์€ ๋“œ๋ญ…๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์ด๋ฏธ์ง€ ์ƒ์„ฑ ๋“ฑ์˜ ์‚ฌ๋ก€์˜ ๊ฒฝ์šฐ, GPT-2์™€ ๊ฐ™์€ ํ…์ŠคํŠธ ์ƒ์„ฑ ๋ชจ๋ธ์—์„œ ๋ณด์•˜๋“ฏ์ด ๋””์ฝ”๋”๊ฐ€ ๊ฐ€์žฅ ์ ํ•ฉํ•ฉ๋‹ˆ๋‹ค. [ImageGPT](model_doc/imagegpt)๋Š” GPT-2์™€ ๋™์ผํ•œ ์•„ํ‚คํ…์ฒ˜๋ฅผ ์‚ฌ์šฉํ•˜์ง€๋งŒ, ์‹œํ€€์Šค์˜ ๋‹ค์Œ ํ† ํฐ์„ ์˜ˆ์ธกํ•˜๋Š” ๋Œ€์‹  ์ด๋ฏธ์ง€์˜ ๋‹ค์Œ ํ”ฝ์…€์„ ์˜ˆ์ธกํ•ฉ๋‹ˆ๋‹ค. ImageGPT๋Š” ์ด๋ฏธ์ง€ ์ƒ์„ฑ ๋ฟ๋งŒ ์•„๋‹ˆ๋ผ ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•ด ๋ฏธ์„ธ ์กฐ์ •ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ### ์ธ์ฝ”๋”-๋””์ฝ”๋”[[cv-encoder-decoder]] ๋น„์ „ ๋ชจ๋ธ์€ ์ผ๋ฐ˜์ ์œผ๋กœ ์ธ์ฝ”๋”(๋ฐฑ๋ณธ์œผ๋กœ๋„ ์•Œ๋ ค์ง)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ค‘์š”ํ•œ ์ด๋ฏธ์ง€ ํŠน์ง•์„ ์ถ”์ถœํ•œ ํ›„, ์ด๋ฅผ Transformer ๋””์ฝ”๋”๋กœ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. [DETR](model_doc/detr)์— ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ฐฑ๋ณธ์ด ์žˆ์ง€๋งŒ, ๊ฐ์ฒด ํƒ์ง€๋ฅผ ์œ„ํ•ด ์™„์ „ํ•œ Transformer ์ธ์ฝ”๋”-๋””์ฝ”๋” ์•„ํ‚คํ…์ฒ˜๋„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ธ์ฝ”๋”๋Š” ์ด๋ฏธ์ง€ ํ‘œํ˜„์„ ํ•™์Šตํ•˜๊ณ  ์ด๋ฅผ ๋””์ฝ”๋”์—์„œ ๊ฐ์ฒด ์ฟผ๋ฆฌ(๊ฐ ๊ฐ์ฒด ์ฟผ๋ฆฌ๋Š” ์ด๋ฏธ์ง€์˜ ์˜์—ญ ๋˜๋Š” ๊ฐ์ฒด์— ์ค‘์ ์„ ๋‘๊ณ  ํ•™์Šต๋œ ์ž„๋ฒ ๋”ฉ)์™€ ๊ฒฐํ•ฉํ•ฉ๋‹ˆ๋‹ค. DETR์€ ๊ฐ ๊ฐ์ฒด ์ฟผ๋ฆฌ์— ๋Œ€ํ•œ ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค ์ขŒํ‘œ์™€ ํด๋ž˜์Šค ๋ ˆ์ด๋ธ”์„ ์˜ˆ์ธกํ•ฉ๋‹ˆ๋‹ค. 
## ์ž์—ฐ์–ด์ฒ˜๋ฆฌ[[natural-language-processing]] <iframe style="border: 1px solid rgba(0, 0, 0, 0.1);" width="1000" height="450" src="https://www.figma.com/embed?embed_host=share&url=https%3A%2F%2Fwww.figma.com%2Ffile%2FUhbQAZDlpYW5XEpdFy6GoG%2Fnlp-model-timeline%3Fnode-id%3D0%253A1%26t%3D4mZMr4r1vDEYGJ50-1" allowfullscreen></iframe> ### ์ธ์ฝ”๋”[[nlp-encoder]] [BERT](model_doc/bert)๋Š” ์ธ์ฝ”๋” ์ „์šฉ Transformer๋กœ, ๋‹ค๋ฅธ ํ† ํฐ์„ ๋ณด๊ณ  ์†Œ์œ„ "๋ถ€์ • ํ–‰์œ„"๋ฅผ ์ €์ง€๋ฅด๋Š” ๊ฑธ ๋ง‰๊ธฐ ์œ„ํ•ด ์ž…๋ ฅ์—์„œ ํŠน์ • ํ† ํฐ์„ ์ž„์˜๋กœ ๋งˆ์Šคํ‚นํ•ฉ๋‹ˆ๋‹ค. ์‚ฌ์ „ํ›ˆ๋ จ์˜ ๋ชฉํ‘œ๋Š” ์ปจํ…์ŠคํŠธ๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ๋งˆ์Šคํ‚น๋œ ํ† ํฐ์„ ์˜ˆ์ธกํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด๋ฅผ ํ†ตํ•ด BERT๋Š” ์™ผ์ชฝ๊ณผ ์˜ค๋ฅธ์ชฝ ์ปจํ…์ŠคํŠธ๋ฅผ ์ถฉ๋ถ„ํžˆ ํ™œ์šฉํ•˜์—ฌ ์ž…๋ ฅ์— ๋Œ€ํ•ด ๋” ๊นŠ๊ณ  ํ’๋ถ€ํ•œ ํ‘œํ˜„์„ ํ•™์Šตํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ BERT์˜ ์‚ฌ์ „ํ›ˆ๋ จ ์ „๋žต์—๋Š” ์—ฌ์ „ํžˆ ๊ฐœ์„ ์˜ ์—ฌ์ง€๊ฐ€ ๋‚จ์•„ ์žˆ์—ˆ์Šต๋‹ˆ๋‹ค. [RoBERTa](model_doc/roberta)๋Š” ๋” ๊ธด ์‹œ๊ฐ„ ๋™์•ˆ ๋” ํฐ ๋ฐฐ์น˜์— ๋Œ€ํ•œ ํ›ˆ๋ จ์„ ํฌํ•จํ•˜๊ณ , ์ „์ฒ˜๋ฆฌ ์ค‘์— ํ•œ ๋ฒˆ๋งŒ ๋งˆ์Šคํ‚นํ•˜๋Š” ๊ฒƒ์ด ์•„๋‹ˆ๋ผ ๊ฐ ์—ํญ์—์„œ ํ† ํฐ์„ ์ž„์˜๋กœ ๋งˆ์Šคํ‚นํ•˜๊ณ , ๋‹ค์Œ ๋ฌธ์žฅ ์˜ˆ์ธก ๋ชฉํ‘œ๋ฅผ ์ œ๊ฑฐํ•˜๋Š” ์ƒˆ๋กœ์šด ์‚ฌ์ „ํ›ˆ๋ จ ๋ฐฉ์‹์„ ๋„์ž…ํ•จ์œผ๋กœ์จ ์ด๋ฅผ ๊ฐœ์„ ํ–ˆ์Šต๋‹ˆ๋‹ค. ์„ฑ๋Šฅ ๊ฐœ์„ ์„ ์œ„ํ•œ ์ „๋žต์œผ๋กœ ๋ชจ๋ธ ํฌ๊ธฐ๋ฅผ ํ‚ค์šฐ๋Š” ๊ฒƒ์ด ์ง€๋ฐฐ์ ์ž…๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ํฐ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๋ ค๋ฉด ๊ณ„์‚ฐ ๋น„์šฉ์ด ๋งŽ์ด ๋“ญ๋‹ˆ๋‹ค. ๊ณ„์‚ฐ ๋น„์šฉ์„ ์ค„์ด๋Š” ํ•œ ๊ฐ€์ง€ ๋ฐฉ๋ฒ•์€ [DistilBERT](model_doc/distilbert)์™€ ๊ฐ™์ด ์ž‘์€ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. DistilBERT๋Š” ์••์ถ• ๊ธฐ๋ฒ•์ธ [์ง€์‹ ์ฆ๋ฅ˜(knowledge distillation)](https://arxiv.org/abs/1503.02531)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ, ๊ฑฐ์˜ ๋ชจ๋“  ์–ธ์–ด ์ดํ•ด ๋Šฅ๋ ฅ์„ ์œ ์ง€ํ•˜๋ฉด์„œ ๋” ์ž‘์€ ๋ฒ„์ „์˜ BERT๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค. 
๊ทธ๋Ÿฌ๋‚˜ ๋Œ€๋ถ€๋ถ„์˜ Transformer ๋ชจ๋ธ์— ๋” ๋งŽ์€ ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝํ–ฅ์ด ์ด์–ด์กŒ๊ณ , ์ด์— ๋”ฐ๋ผ ํ›ˆ๋ จ ํšจ์œจ์„ฑ์„ ๊ฐœ์„ ํ•˜๋Š” ๊ฒƒ์— ์ค‘์ ์„ ๋‘” ์ƒˆ๋กœ์šด ๋ชจ๋ธ์ด ๋“ฑ์žฅํ–ˆ์Šต๋‹ˆ๋‹ค. [ALBERT](model_doc/albert)๋Š” ๋‘ ๊ฐ€์ง€ ๋ฐฉ๋ฒ•์œผ๋กœ ๋งค๊ฐœ๋ณ€์ˆ˜ ์ˆ˜๋ฅผ ์ค„์—ฌ ๋ฉ”๋ชจ๋ฆฌ ์‚ฌ์šฉ๋Ÿ‰์„ ์ค„์˜€์Šต๋‹ˆ๋‹ค. ๋ฐ”๋กœ ํฐ ์–ดํœ˜๋ฅผ ๋‘ ๊ฐœ์˜ ์ž‘์€ ํ–‰๋ ฌ๋กœ ๋ถ„๋ฆฌํ•˜๋Š” ๊ฒƒ๊ณผ ๋ ˆ์ด์–ด๊ฐ€ ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ๊ณต์œ ํ•˜๋„๋ก ํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. [DeBERTa](model_doc/deberta)๋Š” ๋‹จ์–ด์™€ ๊ทธ ์œ„์น˜๋ฅผ ๋‘ ๊ฐœ์˜ ๋ฒกํ„ฐ๋กœ ๊ฐœ๋ณ„์ ์œผ๋กœ ์ธ์ฝ”๋”ฉํ•˜๋Š” ๋ถ„๋ฆฌ๋œ(disentangled) ์–ดํ…์…˜ ๋ฉ”์ปค๋‹ˆ์ฆ˜์„ ์ถ”๊ฐ€ํ–ˆ์Šต๋‹ˆ๋‹ค. ์–ดํ…์…˜์€ ๋‹จ์–ด์™€ ์œ„์น˜ ์ž„๋ฒ ๋”ฉ์„ ํฌํ•จํ•˜๋Š” ๋‹จ์ผ ๋ฒกํ„ฐ ๋Œ€์‹  ์ด ๋ณ„๋„์˜ ๋ฒกํ„ฐ์—์„œ ๊ณ„์‚ฐ๋ฉ๋‹ˆ๋‹ค. [Longformer](model_doc/longformer)๋Š” ํŠนํžˆ ์‹œํ€€์Šค ๊ธธ์ด๊ฐ€ ๊ธด ๋ฌธ์„œ๋ฅผ ์ฒ˜๋ฆฌํ•  ๋•Œ, ์–ดํ…์…˜์„ ๋” ํšจ์œจ์ ์œผ๋กœ ๋งŒ๋“œ๋Š” ๊ฒƒ์— ์ค‘์ ์„ ๋‘์—ˆ์Šต๋‹ˆ๋‹ค. ์ง€์—ญ(local) ์œˆ๋„์šฐ ์–ดํ…์…˜(๊ฐ ํ† ํฐ ์ฃผ๋ณ€์˜ ๊ณ ์ •๋œ ์œˆ๋„์šฐ ํฌ๊ธฐ์—์„œ๋งŒ ๊ณ„์‚ฐ๋˜๋Š” ์–ดํ…์…˜)๊ณผ ์ „์—ญ(global) ์–ดํ…์…˜(๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•ด `[CLS]`์™€ ๊ฐ™์€ ํŠน์ • ์ž‘์—… ํ† ํฐ์—๋งŒ ํ•ด๋‹น)์˜ ์กฐํ•ฉ์„ ์‚ฌ์šฉํ•˜์—ฌ ์ „์ฒด(full) ์–ดํ…์…˜ ํ–‰๋ ฌ ๋Œ€์‹  ํฌ์†Œ(sparse) ์–ดํ…์…˜ ํ–‰๋ ฌ์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ### ๋””์ฝ”๋”[[nlp-decoder]] [GPT-2](model_doc/gpt2)๋Š” ์‹œํ€€์Šค์—์„œ ๋‹ค์Œ ๋‹จ์–ด๋ฅผ ์˜ˆ์ธกํ•˜๋Š” ๋””์ฝ”๋” ์ „์šฉ Transformer์ž…๋‹ˆ๋‹ค. ํ† ํฐ์„ ์˜ค๋ฅธ์ชฝ์œผ๋กœ ๋งˆ์Šคํ‚นํ•˜์—ฌ ๋ชจ๋ธ์ด ์ด์ „ ํ† ํฐ์„ ๋ณด๊ณ  "๋ถ€์ • ํ–‰์œ„"๋ฅผ ํ•˜์ง€ ๋ชปํ•˜๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. GPT-2๋Š” ๋ฐฉ๋Œ€ํ•œ ํ…์ŠคํŠธ์— ๋Œ€ํ•ด ์‚ฌ์ „ํ›ˆ๋ จํ•˜์—ฌ ํ…์ŠคํŠธ๊ฐ€ ์ผ๋ถ€๋งŒ ์ •ํ™•ํ•˜๊ฑฐ๋‚˜ ์‚ฌ์‹ค์ธ ๊ฒฝ์šฐ์—๋„ ์ƒ๋‹นํžˆ ๋Šฅ์ˆ™ํ•˜๊ฒŒ ํ…์ŠคํŠธ๋ฅผ ์ƒ์„ฑํ•  ์ˆ˜ ์žˆ๊ฒŒ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ GPT-2๋Š” BERT๊ฐ€ ์‚ฌ์ „ํ›ˆ๋ จ์—์„œ ๊ฐ–๋Š” ์–‘๋ฐฉํ–ฅ ์ปจํ…์ŠคํŠธ๊ฐ€ ๋ถ€์กฑํ•˜๊ธฐ ๋•Œ๋ฌธ์— ํŠน์ • ์ž‘์—…์— ์ ํ•ฉํ•˜์ง€ ์•Š์•˜์Šต๋‹ˆ๋‹ค. 
[XLNET](model_doc/xlnet)์€ ์–‘๋ฐฉํ–ฅ ํ›ˆ๋ จ์ด ๊ฐ€๋Šฅํ•œ permutation language modeling objective(PLM)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ BERT์™€ GPT-2์˜ ์‚ฌ์ „ํ›ˆ๋ จ ๋ชฉํ‘œ์— ๋Œ€ํ•œ ์žฅ์ ์„ ํ•จ๊ป˜ ๊ฐ€์ง€๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. GPT-2 ์ดํ›„, ์–ธ์–ด ๋ชจ๋ธ์€ ๋”์šฑ ๊ฑฐ๋Œ€ํ•ด์กŒ๊ณ  ํ˜„์žฌ๋Š” *๋Œ€๊ทœ๋ชจ ์–ธ์–ด ๋ชจ๋ธ(LLM)*๋กœ ์•Œ๋ ค์ ธ ์žˆ์Šต๋‹ˆ๋‹ค. ์ถฉ๋ถ„ํžˆ ํฐ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋กœ ์‚ฌ์ „ํ›ˆ๋ จ๋œ LLM์€ ํ“จ์ƒท(few-shot) ๋˜๋Š” ์ œ๋กœ์ƒท(zero-shot) ํ•™์Šต์„ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค. [GPT-J](model_doc/gptj)๋Š” 6B ํฌ๊ธฐ์˜ ๋งค๊ฐœ๋ณ€์ˆ˜๊ฐ€ ์žˆ๊ณ  400B ํฌ๊ธฐ์˜ ํ† ํฐ์œผ๋กœ ํ›ˆ๋ จ๋œ LLM์ž…๋‹ˆ๋‹ค. GPT-J์— ์ด์–ด ๋””์ฝ”๋” ์ „์šฉ ๋ชจ๋ธ๊ตฐ์ธ [OPT](model_doc/opt)๊ฐ€ ๋“ฑ์žฅํ–ˆ์œผ๋ฉฐ, ์ด ์ค‘ ๊ฐ€์žฅ ํฐ ๋ชจ๋ธ์€ 175B ํฌ๊ธฐ์ด๊ณ  180B ํฌ๊ธฐ์˜ ํ† ํฐ์œผ๋กœ ํ›ˆ๋ จ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. [BLOOM](model_doc/bloom)์€ ๋น„์Šทํ•œ ์‹œ๊ธฐ์— ์ถœ์‹œ๋˜์—ˆ์œผ๋ฉฐ, ์ด ์ค‘ ๊ฐ€์žฅ ํฐ ๋ชจ๋ธ์€ 176B ํฌ๊ธฐ์˜ ๋งค๊ฐœ๋ณ€์ˆ˜๊ฐ€ ์žˆ๊ณ  46๊ฐœ์˜ ์–ธ์–ด์™€ 13๊ฐœ์˜ ํ”„๋กœ๊ทธ๋ž˜๋ฐ ์–ธ์–ด๋กœ ๋œ 366B ํฌ๊ธฐ์˜ ํ† ํฐ์œผ๋กœ ํ›ˆ๋ จ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ### ์ธ์ฝ”๋”-๋””์ฝ”๋”[[nlp-encoder-decoder]] [BART](model_doc/bart)๋Š” ๊ธฐ๋ณธ Transformer ์•„ํ‚คํ…์ฒ˜๋ฅผ ์œ ์ง€ํ•˜์ง€๋งŒ, ์ผ๋ถ€ ํ…์ŠคํŠธ ์ŠคํŒฌ(span)์ด ๋‹จ์ผ `๋งˆ์Šคํฌ` ํ† ํฐ์œผ๋กœ ๋Œ€์ฒด๋˜๋Š” *text infilling* ๋ณ€ํ˜•์œผ๋กœ ์‚ฌ์ „ํ›ˆ๋ จ ๋ชฉํ‘œ๋ฅผ ์ˆ˜์ •ํ•ฉ๋‹ˆ๋‹ค. ๋””์ฝ”๋”๋Š” ๋ณ€ํ˜•๋˜์ง€ ์•Š์€ ํ† ํฐ(ํ–ฅํ›„ ํ† ํฐ์€ ๋งˆ์Šคํ‚น๋จ)์„ ์˜ˆ์ธกํ•˜๊ณ  ์ธ์ฝ”๋”์˜ ์€๋‹‰ ์ƒํƒœ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ด ์ž‘์—…์„ ๋•์Šต๋‹ˆ๋‹ค. [Pegasus](model_doc/pegasus)๋Š” BART์™€ ์œ ์‚ฌํ•˜์ง€๋งŒ, Pegasus๋Š” ํ…์ŠคํŠธ ์ŠคํŒฌ ๋Œ€์‹  ์ „์ฒด ๋ฌธ์žฅ์„ ๋งˆ์Šคํ‚นํ•ฉ๋‹ˆ๋‹ค. Pegasus๋Š” ๋งˆ์Šคํฌ๋“œ ์–ธ์–ด ๋ชจ๋ธ๋ง ์™ธ์—๋„ gap sentence generation(GSG)๋กœ ์‚ฌ์ „ํ›ˆ๋ จ๋ฉ๋‹ˆ๋‹ค. GSG๋Š” ๋ฌธ์„œ์— ์ค‘์š”ํ•œ ๋ฌธ์žฅ ์ „์ฒด๋ฅผ ๋งˆ์Šคํ‚นํ•˜์—ฌ `๋งˆ์Šคํฌ` ํ† ํฐ์œผ๋กœ ๋Œ€์ฒดํ•˜๋Š” ๊ฒƒ์„ ๋ชฉํ‘œ๋กœ ํ•ฉ๋‹ˆ๋‹ค. ๋””์ฝ”๋”๋Š” ๋‚จ์€ ๋ฌธ์žฅ์—์„œ ์ถœ๋ ฅ์„ ์ƒ์„ฑํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 
[T5](model_doc/t5)๋Š” ํŠน์ • ์ ‘๋‘์‚ฌ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋“  NLP ์ž‘์—…์„ ํ…์ŠคํŠธ ํˆฌ ํ…์ŠคํŠธ ๋ฌธ์ œ๋กœ ๋ณ€ํ™˜ํ•˜๋Š” ๋” ํŠน์ˆ˜ํ•œ ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ์ ‘๋‘์‚ฌ `Summarize:`๋Š” ์š”์•ฝ ์ž‘์—…์„ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. T5๋Š” ์ง€๋„(GLUE ๋ฐ SuperGLUE) ํ›ˆ๋ จ๊ณผ ์ž๊ธฐ์ง€๋„ ํ›ˆ๋ จ(ํ† ํฐ์˜ 15%๋ฅผ ์ž„์˜๋กœ ์ƒ˜ํ”Œ๋งํ•˜์—ฌ ์ œ๊ฑฐ)์œผ๋กœ ์‚ฌ์ „ํ›ˆ๋ จ๋ฉ๋‹ˆ๋‹ค.

## ์˜ค๋””์˜ค[[audio]]

<iframe style="border: 1px solid rgba(0, 0, 0, 0.1);" width="1000" height="450" src="https://www.figma.com/embed?embed_host=share&url=https%3A%2F%2Fwww.figma.com%2Ffile%2Fvrchl8jDV9YwNVPWu2W0kK%2Fspeech-and-audio-model-timeline%3Fnode-id%3D0%253A1%26t%3DmM4H8pPMuK23rClL-1" allowfullscreen></iframe>

### ์ธ์ฝ”๋”[[audio-encoder]]

[Wav2Vec2](model_doc/wav2vec2)๋Š” Transformer ์ธ์ฝ”๋”๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์›๋ณธ ์˜ค๋””์˜ค ํŒŒํ˜•(raw audio waveform)์—์„œ ์ง์ ‘ ์Œ์„ฑ ํ‘œํ˜„์„ ํ•™์Šตํ•ฉ๋‹ˆ๋‹ค. ํ—ˆ์œ„ ์Œ์„ฑ ํ‘œํ˜„ ์„ธํŠธ์—์„œ ์‹ค์ œ ์Œ์„ฑ ํ‘œํ˜„์„ ํŒ๋ณ„ํ•˜๋Š” ๋Œ€์กฐ ์ž‘์—…์œผ๋กœ ์‚ฌ์ „ํ›ˆ๋ จ๋ฉ๋‹ˆ๋‹ค. [HuBERT](model_doc/hubert)๋Š” Wav2Vec2์™€ ์œ ์‚ฌํ•˜์ง€๋งŒ ํ›ˆ๋ จ ๊ณผ์ •์ด ๋‹ค๋ฆ…๋‹ˆ๋‹ค. ํƒ€๊ฒŸ ๋ ˆ์ด๋ธ”์ด ์œ ์‚ฌํ•œ ์˜ค๋””์˜ค ์„ธ๊ทธ๋จผํŠธ๊ฐ€ ํด๋Ÿฌ์Šคํ„ฐ์— ํ• ๋‹น๋˜์–ด ์€๋‹‰ ๋‹จ์œ„(unit)๊ฐ€ ๋˜๋Š” ๊ตฐ์ง‘ํ™”(clustering) ๋‹จ๊ณ„์—์„œ ์ƒ์„ฑ๋ฉ๋‹ˆ๋‹ค. ์€๋‹‰ ๋‹จ์œ„๋Š” ์˜ˆ์ธก์„ ์œ„ํ•œ ์ž„๋ฒ ๋”ฉ์— ๋งคํ•‘๋ฉ๋‹ˆ๋‹ค.

### ์ธ์ฝ”๋”-๋””์ฝ”๋”[[audio-encoder-decoder]]

[Speech2Text](model_doc/speech_to_text)๋Š” ์ž๋™ ์Œ์„ฑ ์ธ์‹(ASR) ๋ฐ ์Œ์„ฑ ๋ฒˆ์—ญ์„ ์œ„ํ•ด ๊ณ ์•ˆ๋œ ์Œ์„ฑ ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. ์ด ๋ชจ๋ธ์€ ์˜ค๋””์˜ค ํŒŒํ˜•์—์„œ ์ถ”์ถœํ•œ log mel-filter bank ํŠน์ง•์„ ์ฑ„ํƒํ•˜๊ณ  ์ž๊ธฐํšŒ๊ท€ ๋ฐฉ์‹์œผ๋กœ ์‚ฌ์ „ํ›ˆ๋ จํ•˜์—ฌ, ์ „์‚ฌ๋ณธ ๋˜๋Š” ๋ฒˆ์—ญ์„ ๋งŒ๋“ญ๋‹ˆ๋‹ค. [Whisper](model_doc/whisper)๋Š” ASR ๋ชจ๋ธ์ด์ง€๋งŒ, ๋‹ค๋ฅธ ๋งŽ์€ ์Œ์„ฑ ๋ชจ๋ธ๊ณผ ๋‹ฌ๋ฆฌ ์ œ๋กœ์ƒท ์„ฑ๋Šฅ์„ ์œ„ํ•ด ๋Œ€๋Ÿ‰์˜ โœจ ๋ ˆ์ด๋ธ”์ด ์ง€์ •๋œ โœจ ์˜ค๋””์˜ค ์ „์‚ฌ ๋ฐ์ดํ„ฐ์— ๋Œ€ํ•ด ์‚ฌ์ „ํ›ˆ๋ จ๋ฉ๋‹ˆ๋‹ค.
๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ํฐ ๋ฌถ์Œ์—๋Š” ์˜์–ด๊ฐ€ ์•„๋‹Œ ์–ธ์–ด๋„ ํฌํ•จ๋˜์–ด ์žˆ์–ด์„œ ์ž์›์ด ์ ์€ ์–ธ์–ด์—๋„ Whisper๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ตฌ์กฐ์ ์œผ๋กœ, Whisper๋Š” Speech2Text์™€ ์œ ์‚ฌํ•ฉ๋‹ˆ๋‹ค. ์˜ค๋””์˜ค ์‹ ํ˜ธ๋Š” ์ธ์ฝ”๋”์— ์˜ํ•ด ์ธ์ฝ”๋”ฉ๋œ log-mel spectrogram์œผ๋กœ ๋ณ€ํ™˜๋ฉ๋‹ˆ๋‹ค. ๋””์ฝ”๋”๋Š” ์ธ์ฝ”๋”์˜ ์€๋‹‰ ์ƒํƒœ์™€ ์ด์ „ ํ† ํฐ์œผ๋กœ๋ถ€ํ„ฐ ์ž๊ธฐํšŒ๊ท€ ๋ฐฉ์‹์œผ๋กœ ์ „์‚ฌ๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ## ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ[[multimodal]] <iframe style="border: 1px solid rgba(0, 0, 0, 0.1);" width="1000" height="450" src="https://www.figma.com/embed?embed_host=share&url=https%3A%2F%2Fwww.figma.com%2Ffile%2FcX125FQHXJS2gxeICiY93p%2Fmultimodal%3Fnode-id%3D0%253A1%26t%3DhPQwdx3HFPWJWnVf-1" allowfullscreen></iframe> ### ์ธ์ฝ”๋”[[mm-encoder]] [VisualBERT](model_doc/visual_bert)๋Š” BERT ์ดํ›„์— ์ถœ์‹œ๋œ ๋น„์ „ ์–ธ์–ด ์ž‘์—…์„ ์œ„ํ•œ ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. ์ด ๋ชจ๋ธ์€ BERT์™€ ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๊ฐ์ฒด ํƒ์ง€ ์‹œ์Šคํ…œ์„ ๊ฒฐํ•ฉํ•˜์—ฌ ์ด๋ฏธ์ง€ ํŠน์ง•์„ ์‹œ๊ฐ ์ž„๋ฒ ๋”ฉ์œผ๋กœ ์ถ”์ถœํ•˜๊ณ , ํ…์ŠคํŠธ ์ž„๋ฒ ๋”ฉ๊ณผ ํ•จ๊ป˜ BERT๋กœ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. VisualBERT๋Š” ๋งˆ์Šคํ‚น๋˜์ง€ ์•Š์€ ํ…์ŠคํŠธ์™€ ์‹œ๊ฐ ์ž„๋ฒ ๋”ฉ์„ ๊ธฐ๋ฐ˜์œผ๋กœ ๋งˆ์Šคํ‚น๋œ ํ…์ŠคํŠธ๋ฅผ ์˜ˆ์ธกํ•˜๊ณ , ํ…์ŠคํŠธ๊ฐ€ ์ด๋ฏธ์ง€์™€ ์ผ์น˜ํ•˜๋Š”์ง€ ์˜ˆ์ธกํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ViT๊ฐ€ ์ด๋ฏธ์ง€ ์ž„๋ฒ ๋”ฉ์„ ๊ตฌํ•˜๋Š” ๋ฐฉ์‹์ด ๋” ์‰ฌ์› ๊ธฐ ๋•Œ๋ฌธ์—, ViT๊ฐ€ ์ถœ์‹œ๋œ ํ›„ [ViLT](model_doc/vilt)๋Š” ์•„ํ‚คํ…์ฒ˜์— ViT๋ฅผ ์ฑ„ํƒํ–ˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€ ์ž„๋ฒ ๋”ฉ์€ ํ…์ŠคํŠธ ์ž„๋ฒ ๋”ฉ๊ณผ ํ•จ๊ป˜ ์ฒ˜๋ฆฌ๋ฉ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์—์„œ, ViLT๋Š” ์ด๋ฏธ์ง€ ํ…์ŠคํŠธ ๋งค์นญ, ๋งˆ์Šคํฌ๋“œ ์–ธ์–ด ๋ชจ๋ธ๋ง, ์ „์ฒด ๋‹จ์–ด ๋งˆ์Šคํ‚น์„ ํ†ตํ•ด ์‚ฌ์ „ํ›ˆ๋ จ๋ฉ๋‹ˆ๋‹ค. [CLIP](model_doc/clip)์€ ๋‹ค๋ฅธ ์ ‘๊ทผ ๋ฐฉ์‹์„ ์‚ฌ์šฉํ•˜์—ฌ (`์ด๋ฏธ์ง€`, `ํ…์ŠคํŠธ`)์˜ ์Œ ์˜ˆ์ธก์„ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค. 
(`์ด๋ฏธ์ง€`, `ํ…์ŠคํŠธ`) ์Œ์—์„œ์˜ ์ด๋ฏธ์ง€์™€ ํ…์ŠคํŠธ ์ž„๋ฒ ๋”ฉ ๊ฐ„์˜ ์œ ์‚ฌ๋„๋ฅผ ์ตœ๋Œ€ํ™”ํ•˜๊ธฐ ์œ„ํ•ด 4์–ต ๊ฐœ์˜ (`์ด๋ฏธ์ง€`, `ํ…์ŠคํŠธ`) ์Œ ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ๋Œ€ํ•ด ์ด๋ฏธ์ง€ ์ธ์ฝ”๋”(ViT)์™€ ํ…์ŠคํŠธ ์ธ์ฝ”๋”(Transformer)๋ฅผ ํ•จ๊ป˜ ํ›ˆ๋ จํ•ฉ๋‹ˆ๋‹ค. ์‚ฌ์ „ํ›ˆ๋ จ ํ›„, ์ž์—ฐ์–ด๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ด๋ฏธ์ง€๊ฐ€ ์ฃผ์–ด์ง„ ํ…์ŠคํŠธ๋ฅผ ์˜ˆ์ธกํ•˜๊ฑฐ๋‚˜ ๊ทธ ๋ฐ˜๋Œ€๋กœ ์˜ˆ์ธกํ•˜๋„๋ก CLIP์— ์ง€์‹œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [OWL-ViT](model_doc/owlvit)๋Š” CLIP์„ ์ œ๋กœ์ƒท ๊ฐ์ฒด ํƒ์ง€๋ฅผ ์œ„ํ•œ ๋ฐฑ๋ณธ(backbone)์œผ๋กœ ์‚ฌ์šฉํ•˜์—ฌ CLIP ์ƒ์— ๊ตฌ์ถ•๋ฉ๋‹ˆ๋‹ค. ์‚ฌ์ „ํ›ˆ๋ จ ํ›„, ๊ฐ์ฒด ํƒ์ง€ ํ—ค๋“œ๊ฐ€ ์ถ”๊ฐ€๋˜์–ด (`ํด๋ž˜์Šค`, `๋ฐ”์šด๋”ฉ ๋ฐ•์Šค`) ์Œ์— ๋Œ€ํ•œ ์ง‘ํ•ฉ(set) ์˜ˆ์ธก์„ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค. ### ์ธ์ฝ”๋”-๋””์ฝ”๋”[[mm-encoder-decoder]] ๊ด‘ํ•™ ๋ฌธ์ž ์ธ์‹(OCR)์€ ์ด๋ฏธ์ง€๋ฅผ ์ดํ•ดํ•˜๊ณ  ํ…์ŠคํŠธ๋ฅผ ์ƒ์„ฑํ•˜๊ธฐ ์œ„ํ•ด ๋‹ค์–‘ํ•œ ๊ตฌ์„ฑ ์š”์†Œ๋ฅผ ํ•„์š”๋กœ ํ•˜๋Š” ์ „ํ†ต์ ์ธ ํ…์ŠคํŠธ ์ธ์‹ ์ž‘์—…์ž…๋‹ˆ๋‹ค. [TrOCR](model_doc/trocr)์€ ์ข…๋‹จ๊ฐ„(end-to-end) Transformer๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ด ํ”„๋กœ์„ธ์Šค๋ฅผ ๊ฐ„์†Œํ™”ํ•ฉ๋‹ˆ๋‹ค. ์ธ์ฝ”๋”๋Š” ์ด๋ฏธ์ง€ ์ดํ•ด๋ฅผ ์œ„ํ•œ ViT ๋ฐฉ์‹์˜ ๋ชจ๋ธ์ด๋ฉฐ ์ด๋ฏธ์ง€๋ฅผ ๊ณ ์ •๋œ ํฌ๊ธฐ์˜ ํŒจ์น˜๋กœ ์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค. ๋””์ฝ”๋”๋Š” ์ธ์ฝ”๋”์˜ ์€๋‹‰ ์ƒํƒœ๋ฅผ ๋ฐ›์•„์„œ ์ž๊ธฐํšŒ๊ท€ ๋ฐฉ์‹์œผ๋กœ ํ…์ŠคํŠธ๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. [Donut](model_doc/donut)์€ OCR ๊ธฐ๋ฐ˜ ์ ‘๊ทผ ๋ฐฉ์‹์— ์˜์กดํ•˜์ง€ ์•Š๋Š” ๋” ์ผ๋ฐ˜์ ์ธ ์‹œ๊ฐ ๋ฌธ์„œ ์ดํ•ด ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. ์ด ๋ชจ๋ธ์€ Swin Transformer๋ฅผ ์ธ์ฝ”๋”๋กœ, ๋‹ค๊ตญ์–ด BART๋ฅผ ๋””์ฝ”๋”๋กœ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. Donut์€ ์ด๋ฏธ์ง€์™€ ํ…์ŠคํŠธ ์ฃผ์„์„ ๊ธฐ๋ฐ˜์œผ๋กœ ๋‹ค์Œ ๋‹จ์–ด๋ฅผ ์˜ˆ์ธกํ•˜์—ฌ ํ…์ŠคํŠธ๋ฅผ ์ฝ๋„๋ก ์‚ฌ์ „ํ›ˆ๋ จ๋ฉ๋‹ˆ๋‹ค. ๋””์ฝ”๋”๋Š” ํ”„๋กฌํ”„ํŠธ๊ฐ€ ์ฃผ์–ด์ง€๋ฉด ํ† ํฐ ์‹œํ€€์Šค๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ํ”„๋กฌํ”„ํŠธ๋Š” ๊ฐ ๋‹ค์šด์ŠคํŠธ๋ฆผ ์ž‘์—…์— ๋Œ€ํ•œ ํŠน์ˆ˜ ํ† ํฐ์œผ๋กœ ํ‘œํ˜„๋ฉ๋‹ˆ๋‹ค. 
์˜ˆ๋ฅผ ๋“ค์–ด, ๋ฌธ์„œ ํŒŒ์‹ฑ(parsing)์—๋Š” ์ธ์ฝ”๋”์˜ ์€๋‹‰ ์ƒํƒœ์™€ ๊ฒฐํ•ฉ๋˜์–ด ๋ฌธ์„œ๋ฅผ ์ •ํ˜• ์ถœ๋ ฅ ํ˜•์‹(JSON)์œผ๋กœ ํŒŒ์‹ฑํ•˜๋Š” ํŠน์ˆ˜ `ํŒŒ์‹ฑ` ํ† ํฐ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ## ๊ฐ•ํ™” ํ•™์Šต[[reinforcement-learning]] <iframe style="border: 1px solid rgba(0, 0, 0, 0.1);" width="1000" height="450" src="https://www.figma.com/embed?embed_host=share&url=https%3A%2F%2Fwww.figma.com%2Ffile%2FiB3Y6RvWYki7ZuKO6tNgZq%2Freinforcement-learning%3Fnode-id%3D0%253A1%26t%3DhPQwdx3HFPWJWnVf-1" allowfullscreen></iframe> ### ๋””์ฝ”๋”[[rl-decoder]] Decision ๋ฐ Trajectory Transformer๋Š” ์ƒํƒœ(state), ํ–‰๋™(action), ๋ณด์ƒ(reward)์„ ์‹œํ€€์Šค ๋ชจ๋ธ๋ง ๋ฌธ์ œ๋กœ ํ‘œํ˜„ํ•ฉ๋‹ˆ๋‹ค. [Decision Transformer](model_doc/decision_transformer)๋Š” ๊ธฐ๋Œ€ ๋ณด์ƒ(returns-to-go), ๊ณผ๊ฑฐ ์ƒํƒœ ๋ฐ ํ–‰๋™์„ ๊ธฐ๋ฐ˜์œผ๋กœ ๋ฏธ๋ž˜์˜ ์›ํ•˜๋Š” ์ˆ˜์ต(return)์œผ๋กœ ์ด์–ด์ง€๋Š” ์ผ๋ จ์˜ ํ–‰๋™์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ๋งˆ์ง€๋ง‰ *K* ์‹œ๊ฐ„ ์Šคํ…(timestep)์— ๋Œ€ํ•ด, ์„ธ ๊ฐ€์ง€ ๋ชจ๋‹ฌ๋ฆฌํ‹ฐ๋Š” ๊ฐ๊ฐ ํ† ํฐ ์ž„๋ฒ ๋”ฉ์œผ๋กœ ๋ณ€ํ™˜๋˜๊ณ  GPT์™€ ๊ฐ™์€ ๋ชจ๋ธ์— ์˜ํ•ด ์ฒ˜๋ฆฌ๋˜์–ด ๋ฏธ๋ž˜์˜ ์•ก์…˜ ํ† ํฐ์„ ์˜ˆ์ธกํ•ฉ๋‹ˆ๋‹ค. [Trajectory Transformer](model_doc/trajectory_transformer)๋„ ์ƒํƒœ, ํ–‰๋™, ๋ณด์ƒ์„ ํ† ํฐํ™”ํ•˜์—ฌ GPT ์•„ํ‚คํ…์ฒ˜๋กœ ์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค. ๋ณด์ƒ ์กฐ๊ฑด์— ์ค‘์ ์„ ๋‘” Decision Transformer์™€ ๋‹ฌ๋ฆฌ Trajectory Transformer๋Š” ๋น” ์„œ์น˜(beam search)๋กœ ๋ฏธ๋ž˜ ํ–‰๋™์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค.
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# ๐Ÿค— Accelerate๋ฅผ ํ™œ์šฉํ•œ ๋ถ„์‚ฐ ํ•™์Šต[[distributed-training-with-accelerate]]

๋ชจ๋ธ์ด ์ปค์ง€๋ฉด์„œ ๋ณ‘๋ ฌ ์ฒ˜๋ฆฌ๋Š” ์ œํ•œ๋œ ํ•˜๋“œ์›จ์–ด์—์„œ ๋” ํฐ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๊ณ  ํ›ˆ๋ จ ์†๋„๋ฅผ ๋ช‡ ๋ฐฐ๋กœ ๊ฐ€์†ํ™”ํ•˜๊ธฐ ์œ„ํ•œ ์ „๋žต์œผ๋กœ ๋“ฑ์žฅํ–ˆ์Šต๋‹ˆ๋‹ค. Hugging Face์—์„œ๋Š” ์‚ฌ์šฉ์ž๊ฐ€ ํ•˜๋‚˜์˜ ๋จธ์‹ ์— ์—ฌ๋Ÿฌ ๊ฐœ์˜ GPU๋ฅผ ์‚ฌ์šฉํ•˜๋“  ์—ฌ๋Ÿฌ ๋จธ์‹ ์— ์—ฌ๋Ÿฌ ๊ฐœ์˜ GPU๋ฅผ ์‚ฌ์šฉํ•˜๋“  ๋ชจ๋“  ์œ ํ˜•์˜ ๋ถ„์‚ฐ ์„ค์ •์—์„œ ๐Ÿค— Transformers ๋ชจ๋ธ์„ ์‰ฝ๊ฒŒ ํ›ˆ๋ จํ•  ์ˆ˜ ์žˆ๋„๋ก ๋•๊ธฐ ์œ„ํ•ด [๐Ÿค— Accelerate](https://huggingface.co/docs/accelerate) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ๋งŒ๋“ค์—ˆ์Šต๋‹ˆ๋‹ค. ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ๋Š” ๋ถ„์‚ฐ ํ™˜๊ฒฝ์—์„œ ํ›ˆ๋ จํ•  ์ˆ˜ ์žˆ๋„๋ก ๊ธฐ๋ณธ PyTorch ํ›ˆ๋ จ ๋ฃจํ”„๋ฅผ ์ปค์Šคํ„ฐ๋งˆ์ด์ฆˆํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์•Œ์•„๋ด…์‹œ๋‹ค.

## ์„ค์ •[[setup]]

๋จผ์ € ๐Ÿค— Accelerate๋ฅผ ์„ค์น˜ํ•˜์„ธ์š”:

```bash
pip install accelerate
```

๊ทธ ๋‹ค์Œ, [`~accelerate.Accelerator`] ๊ฐ์ฒด๋ฅผ ๋ถˆ๋Ÿฌ์˜ค๊ณ  ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. [`~accelerate.Accelerator`]๋Š” ์ž๋™์œผ๋กœ ๋ถ„์‚ฐ ์„ค์ • ์œ ํ˜•์„ ๊ฐ์ง€ํ•˜๊ณ  ํ›ˆ๋ จ์— ํ•„์š”ํ•œ ๋ชจ๋“  ๊ตฌ์„ฑ ์š”์†Œ๋ฅผ ์ดˆ๊ธฐํ™”ํ•ฉ๋‹ˆ๋‹ค. ์žฅ์น˜์— ๋ชจ๋ธ์„ ๋ช…์‹œ์ ์œผ๋กœ ๋ฐฐ์น˜ํ•  ํ•„์š”๋Š” ์—†์Šต๋‹ˆ๋‹ค.
```py
>>> from accelerate import Accelerator

>>> accelerator = Accelerator()
```

## ๊ฐ€์†ํ™”๋ฅผ ์œ„ํ•œ ์ค€๋น„[[prepare-to-accelerate]]

๋‹ค์Œ ๋‹จ๊ณ„๋Š” ๊ด€๋ จ๋œ ๋ชจ๋“  ํ›ˆ๋ จ ๊ฐ์ฒด๋ฅผ [`~accelerate.Accelerator.prepare`] ๋ฉ”์†Œ๋“œ์— ์ „๋‹ฌํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์—๋Š” ํ›ˆ๋ จ ๋ฐ ํ‰๊ฐ€ ๋ฐ์ดํ„ฐ๋กœ๋”, ๋ชจ๋ธ ๋ฐ ์˜ตํ‹ฐ๋งˆ์ด์ €๊ฐ€ ํฌํ•จ๋ฉ๋‹ˆ๋‹ค:

```py
>>> train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare(
...     train_dataloader, eval_dataloader, model, optimizer
... )
```

## ๋ฐฑ์›Œ๋“œ(Backward)[[backward]]

๋งˆ์ง€๋ง‰์œผ๋กœ ํ›ˆ๋ จ ๋ฃจํ”„์˜ ์ผ๋ฐ˜์ ์ธ `loss.backward()`๋ฅผ ๐Ÿค— Accelerate์˜ [`~accelerate.Accelerator.backward`] ๋ฉ”์†Œ๋“œ๋กœ ๋Œ€์ฒดํ•˜๊ธฐ๋งŒ ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค:

```py
>>> for epoch in range(num_epochs):
...     for batch in train_dataloader:
...         outputs = model(**batch)
...         loss = outputs.loss
...         accelerator.backward(loss)
...         optimizer.step()
...         lr_scheduler.step()
...         optimizer.zero_grad()
...         progress_bar.update(1)
```

๋‹ค์Œ ์ฝ”๋“œ์—์„œ ๋ณผ ์ˆ˜ ์žˆ๋“ฏ์ด, ํ›ˆ๋ จ ๋ฃจํ”„์— ์ฝ”๋“œ ๋„ค ์ค„๋งŒ ์ถ”๊ฐ€ํ•˜๋ฉด ๋ถ„์‚ฐ ํ•™์Šต์„ ํ™œ์„ฑํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค!
```diff
+ from accelerate import Accelerator
  from transformers import AdamW, AutoModelForSequenceClassification, get_scheduler

+ accelerator = Accelerator()

  model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
  optimizer = AdamW(model.parameters(), lr=3e-5)

- device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
- model.to(device)

+ train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare(
+     train_dataloader, eval_dataloader, model, optimizer
+ )

  num_epochs = 3
  num_training_steps = num_epochs * len(train_dataloader)
  lr_scheduler = get_scheduler(
      "linear", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps
  )

  progress_bar = tqdm(range(num_training_steps))

  model.train()
  for epoch in range(num_epochs):
      for batch in train_dataloader:
-         batch = {k: v.to(device) for k, v in batch.items()}
          outputs = model(**batch)
          loss = outputs.loss
-         loss.backward()
+         accelerator.backward(loss)

          optimizer.step()
          lr_scheduler.step()
          optimizer.zero_grad()
          progress_bar.update(1)
```

## ํ•™์Šต[[train]]

๊ด€๋ จ ์ฝ”๋“œ๋ฅผ ์ถ”๊ฐ€ํ•œ ํ›„์—๋Š” ์Šคํฌ๋ฆฝํŠธ๋‚˜ Colaboratory์™€ ๊ฐ™์€ ๋…ธํŠธ๋ถ์—์„œ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•˜์„ธ์š”.

### ์Šคํฌ๋ฆฝํŠธ๋กœ ํ•™์Šตํ•˜๊ธฐ[[train-with-a-script]]

์Šคํฌ๋ฆฝํŠธ์—์„œ ํ›ˆ๋ จ์„ ์‹คํ–‰ํ•˜๋Š” ๊ฒฝ์šฐ, ๋‹ค์Œ ๋ช…๋ น์„ ์‹คํ–‰ํ•˜์—ฌ ๊ตฌ์„ฑ ํŒŒ์ผ์„ ์ƒ์„ฑํ•˜๊ณ  ์ €์žฅํ•ฉ๋‹ˆ๋‹ค:

```bash
accelerate config
```

๊ทธ๋Ÿฐ ๋‹ค์Œ, ์•„๋ž˜ ๋ช…๋ น์œผ๋กœ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•˜์„ธ์š”:

```bash
accelerate launch train.py
```

### ๋…ธํŠธ๋ถ์œผ๋กœ ํ•™์Šตํ•˜๊ธฐ[[train-with-a-notebook]]

Colaboratory์˜ TPU๋ฅผ ์‚ฌ์šฉํ•˜๋ ค๋Š” ๊ฒฝ์šฐ, ๋…ธํŠธ๋ถ์—์„œ๋„ ๐Ÿค— Accelerate๋ฅผ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
ํ›ˆ๋ จ์„ ๋‹ด๋‹นํ•˜๋Š” ๋ชจ๋“  ์ฝ”๋“œ๋ฅผ ํ•จ์ˆ˜๋กœ ๊ฐ์‹ธ์„œ [`~accelerate.notebook_launcher`]์— ์ „๋‹ฌํ•˜์„ธ์š”: ```py >>> from accelerate import notebook_launcher >>> notebook_launcher(training_function) ``` ๐Ÿค— Accelerate ๋ฐ ๋‹ค์–‘ํ•œ ๊ธฐ๋Šฅ์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [documentation](https://huggingface.co/docs/accelerate)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”.
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# ๐Ÿค— Transformers๋กœ ์ž‘์—…์„ ํ•ด๊ฒฐํ•˜๋Š” ๋ฐฉ๋ฒ•[[how-transformers-solve-tasks]]

[๐Ÿค— Transformers๋กœ ํ•  ์ˆ˜ ์žˆ๋Š” ์ž‘์—…](task_summary)์—์„œ ์ž์—ฐ์–ด ์ฒ˜๋ฆฌ(NLP), ์Œ์„ฑ ๋ฐ ์˜ค๋””์˜ค, ์ปดํ“จํ„ฐ ๋น„์ „ ์ž‘์—… ๋“ฑ์˜ ์ค‘์š”ํ•œ ์‘์šฉ์„ ๋ฐฐ์› ์Šต๋‹ˆ๋‹ค. ์ด ํŽ˜์ด์ง€์—์„œ๋Š” ๋ชจ๋ธ์ด ์ด๋Ÿฌํ•œ ์ž‘์—…์„ ์–ด๋–ป๊ฒŒ ํ•ด๊ฒฐํ•˜๋Š”์ง€ ์ž์„ธํžˆ ์‚ดํŽด๋ณด๊ณ  ๋‚ด๋ถ€์—์„œ ์–ด๋–ค ์ผ์ด ์ผ์–ด๋‚˜๋Š”์ง€ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. ์ฃผ์–ด์ง„ ์ž‘์—…์„ ํ•ด๊ฒฐํ•˜๋Š” ๋งŽ์€ ๋ฐฉ๋ฒ•์ด ์žˆ์œผ๋ฉฐ, ์ผ๋ถ€ ๋ชจ๋ธ์€ ํŠน์ • ๊ธฐ์ˆ ์„ ๊ตฌํ˜„ํ•˜๊ฑฐ๋‚˜ ์‹ฌ์ง€์–ด ์ƒˆ๋กœ์šด ๋ฐฉ์‹์œผ๋กœ ์ž‘์—…์— ์ ‘๊ทผํ•  ์ˆ˜๋„ ์žˆ์ง€๋งŒ, Transformer ๋ชจ๋ธ์˜ ๊ฒฝ์šฐ ์ผ๋ฐ˜์ ์ธ ์•„์ด๋””์–ด๋Š” ๋™์ผํ•ฉ๋‹ˆ๋‹ค. ์œ ์—ฐํ•œ ์•„ํ‚คํ…์ฒ˜ ๋•๋ถ„์— ๋Œ€๋ถ€๋ถ„์˜ ๋ชจ๋ธ์€ ์ธ์ฝ”๋”, ๋””์ฝ”๋” ๋˜๋Š” ์ธ์ฝ”๋”-๋””์ฝ”๋” ๊ตฌ์กฐ์˜ ๋ณ€ํ˜•์ž…๋‹ˆ๋‹ค. Transformer ๋ชจ๋ธ๋ฟ๋งŒ ์•„๋‹ˆ๋ผ ์šฐ๋ฆฌ์˜ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—๋Š” ์˜ค๋Š˜๋‚  ์ปดํ“จํ„ฐ ๋น„์ „ ์ž‘์—…์— ์‚ฌ์šฉ๋˜๋Š” ๋ช‡ ๊ฐ€์ง€ ํ•ฉ์„ฑ๊ณฑ ์‹ ๊ฒฝ๋ง(CNNs)๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ, ์šฐ๋ฆฌ๋Š” ํ˜„๋Œ€ CNN์˜ ์ž‘๋™ ๋ฐฉ์‹์— ๋Œ€ํ•ด ์„ค๋ช…ํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค.

์ž‘์—…์ด ์–ด๋–ป๊ฒŒ ํ•ด๊ฒฐ๋˜๋Š”์ง€ ์„ค๋ช…ํ•˜๊ธฐ ์œ„ํ•ด, ์œ ์šฉํ•œ ์˜ˆ์ธก์„ ์ถœ๋ ฅํ•˜๊ณ ์ž ๋ชจ๋ธ ๋‚ด๋ถ€์—์„œ ์–ด๋–ค ์ผ์ด ์ผ์–ด๋‚˜๋Š”์ง€ ์‚ดํŽด๋ด…๋‹ˆ๋‹ค.
- ์˜ค๋””์˜ค ๋ถ„๋ฅ˜ ๋ฐ ์ž๋™ ์Œ์„ฑ ์ธ์‹(ASR)์„ ์œ„ํ•œ [Wav2Vec2](model_doc/wav2vec2)
- ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•œ [Vision Transformer (ViT)](model_doc/vit) ๋ฐ [ConvNeXT](model_doc/convnext)
- ๊ฐ์ฒด ํƒ์ง€๋ฅผ ์œ„ํ•œ [DETR](model_doc/detr)
- ์ด๋ฏธ์ง€ ๋ถ„ํ• ์„ ์œ„ํ•œ [Mask2Former](model_doc/mask2former)
- ๊นŠ์ด ์ถ”์ •์„ ์œ„ํ•œ [GLPN](model_doc/glpn)
- ์ธ์ฝ”๋”๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ํ…์ŠคํŠธ ๋ถ„๋ฅ˜, ํ† ํฐ ๋ถ„๋ฅ˜ ๋ฐ ์งˆ์˜์‘๋‹ต๊ณผ ๊ฐ™์€ NLP ์ž‘์—…์„ ์œ„ํ•œ [BERT](model_doc/bert)
- ๋””์ฝ”๋”๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ํ…์ŠคํŠธ ์ƒ์„ฑ๊ณผ ๊ฐ™์€ NLP ์ž‘์—…์„ ์œ„ํ•œ [GPT2](model_doc/gpt2)
- ์ธ์ฝ”๋”-๋””์ฝ”๋”๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ์š”์•ฝ ๋ฐ ๋ฒˆ์—ญ๊ณผ ๊ฐ™์€ NLP ์ž‘์—…์„ ์œ„ํ•œ [BART](model_doc/bart)

<Tip>

๋” ๋‚˜์•„๊ฐ€๊ธฐ ์ „์—, ๊ธฐ์กด Transformer ์•„ํ‚คํ…์ฒ˜์— ๋Œ€ํ•œ ๊ธฐ๋ณธ์ ์ธ ์ง€์‹์„ ์ˆ™์ง€ํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ์ธ์ฝ”๋”, ๋””์ฝ”๋” ๋ฐ ์–ดํ…์…˜์˜ ์ž‘๋™ ๋ฐฉ์‹์„ ์•Œ๋ฉด ๋‹ค์–‘ํ•œ Transformer ๋ชจ๋ธ์ด ์–ด๋–ป๊ฒŒ ์ž‘๋™ํ•˜๋Š”์ง€ ์ดํ•ดํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋ฉ๋‹ˆ๋‹ค. ์‹œ์ž‘ ๋‹จ๊ณ„๊ฑฐ๋‚˜ ๋ณต์Šต์ด ํ•„์š”ํ•œ ๊ฒฝ์šฐ, ๋” ๋งŽ์€ ์ •๋ณด๋ฅผ ์œ„ํ•ด [์ฝ”์Šค](https://huggingface.co/course/chapter1/4?fw=pt)๋ฅผ ํ™•์ธํ•˜์„ธ์š”!

</Tip>

## ์Œ์„ฑ ๋ฐ ์˜ค๋””์˜ค[[speech-and-audio]]

[Wav2Vec2](model_doc/wav2vec2)๋Š” ๋ ˆ์ด๋ธ”์ด ์ง€์ •๋˜์ง€ ์•Š์€ ์Œ์„ฑ ๋ฐ์ดํ„ฐ์— ๋Œ€ํ•ด ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ชจ๋ธ๋กœ, ์˜ค๋””์˜ค ๋ถ„๋ฅ˜ ๋ฐ ์ž๋™ ์Œ์„ฑ ์ธ์‹์„ ์œ„ํ•ด ๋ ˆ์ด๋ธ”์ด ์ง€์ •๋œ ๋ฐ์ดํ„ฐ๋กœ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค.

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/wav2vec2_architecture.png"/>
</div>

์ด ๋ชจ๋ธ์—๋Š” 4๊ฐ€์ง€ ์ฃผ์š” ๊ตฌ์„ฑ ์š”์†Œ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค:

1. *ํŠน์ง• ์ธ์ฝ”๋”(feature encoder)*๋Š” ์›์‹œ ์˜ค๋””์˜ค ํŒŒํ˜•(raw audio waveform)์„ ๊ฐ€์ ธ์™€์„œ ์ œ๋กœ ํ‰๊ท  ๋ฐ ๋‹จ์œ„ ๋ถ„์‚ฐ์œผ๋กœ ํ‘œ์ค€ํ™”ํ•˜๊ณ , ๊ฐ๊ฐ 20ms ๊ธธ์ด์˜ ํŠน์ง• ๋ฒกํ„ฐ์˜ ์‹œํ€€์Šค๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค.

2.
์˜ค๋””์˜ค ํŒŒํ˜•์€ ๋ณธ์งˆ์ ์œผ๋กœ ์—ฐ์†์ ์ด๊ธฐ ๋•Œ๋ฌธ์—, ํ…์ŠคํŠธ ์‹œํ€€์Šค๋ฅผ ๋‹จ์–ด๋กœ ๋‚˜๋ˆ„๋Š” ๊ฒƒ๊ณผ ๊ฐ™์ด ๋ถ„ํ• ํ•  ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค. ๊ทธ๋ž˜์„œ *์–‘์žํ™” ๋ชจ๋“ˆ(quantization module)*๋กœ ์ „๋‹ฌ๋˜๋Š” ํŠน์ง• ๋ฒกํ„ฐ๋Š” ์ด์‚ฐํ˜• ์Œ์„ฑ ๋‹จ์œ„๋ฅผ ํ•™์Šตํ•˜๊ธฐ ์œ„ํ•œ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์Œ์„ฑ ๋‹จ์œ„๋Š” *์ฝ”๋“œ๋ถ(codebook)*(์–ดํœ˜์ง‘์ด๋ผ๊ณ  ์ƒ๊ฐํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค)์ด๋ผ๋Š” ์ฝ”๋“œ๋‹จ์–ด(codewords) ์ฝœ๋ ‰์…˜์—์„œ ์„ ํƒ๋ฉ๋‹ˆ๋‹ค. ์ฝ”๋“œ๋ถ์—์„œ ์—ฐ์†์ ์ธ ์˜ค๋””์˜ค ์ž…๋ ฅ์„ ๊ฐ€์žฅ ์ž˜ ๋‚˜ํƒ€๋‚ด๋Š” ๋ฒกํ„ฐ ๋˜๋Š” ์Œ์„ฑ ๋‹จ์œ„๊ฐ€ ์„ ํƒ๋˜์–ด ๋ชจ๋ธ์„ ํ†ต๊ณผํ•ฉ๋‹ˆ๋‹ค. 3. ํŠน์ง• ๋ฒกํ„ฐ์˜ ์ ˆ๋ฐ˜์€ ๋ฌด์ž‘์œ„๋กœ ๋งˆ์Šคํฌ๊ฐ€ ์ ์šฉ๋˜๋ฉฐ, ๋งˆ์Šคํฌ๋œ ํŠน์ง• ๋ฒกํ„ฐ๋Š” *์ƒ๋Œ€์  ์œ„์น˜ ์ž„๋ฒ ๋”ฉ*์„ ์ถ”๊ฐ€ํ•˜๋Š” Transformer ์ธ์ฝ”๋”์ธ *๋ฌธ๋งฅ ๋„คํŠธ์›Œํฌ(context network)*๋กœ ์ „๋‹ฌ๋ฉ๋‹ˆ๋‹ค. 4. ๋ฌธ๋งฅ ๋„คํŠธ์›Œํฌ์˜ ์‚ฌ์ „ํ›ˆ๋ จ ๋ชฉํ‘œ๋Š” *๋Œ€์กฐ์  ์ž‘์—…(contrastive task)*์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ์€ ์ž˜๋ชป๋œ ์˜ˆ์ธก ์‹œํ€€์Šค์—์„œ ๋งˆ์Šคํฌ๋œ ์˜ˆ์ธก์˜ ์‹ค์ œ ์–‘์žํ™”๋œ ์Œ์„ฑ ํ‘œํ˜„์„ ์˜ˆ์ธกํ•˜๋ฉฐ, ๋ชจ๋ธ์ด ๊ฐ€์žฅ ์œ ์‚ฌํ•œ ์ปจํ…์ŠคํŠธ ๋ฒกํ„ฐ์™€ ์–‘์žํ™”๋œ ์Œ์„ฑ ๋‹จ์œ„(ํƒ€๊ฒŸ ๋ ˆ์ด๋ธ”)๋ฅผ ์ฐพ๋„๋ก ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ์ด์ œ wav2vec2๊ฐ€ ์‚ฌ์ „ํ›ˆ๋ จ๋˜์—ˆ์œผ๋ฏ€๋กœ, ์˜ค๋””์˜ค ๋ถ„๋ฅ˜ ๋˜๋Š” ์ž๋™ ์Œ์„ฑ ์ธ์‹์„ ์œ„ํ•ด ๋ฐ์ดํ„ฐ์— ๋งž์ถฐ ๋ฏธ์„ธ ์กฐ์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ### ์˜ค๋””์˜ค ๋ถ„๋ฅ˜[[audio-classification]] ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ์˜ค๋””์˜ค ๋ถ„๋ฅ˜์— ์‚ฌ์šฉํ•˜๋ ค๋ฉด, ๊ธฐ๋ณธ Wav2Vec2 ๋ชจ๋ธ ์ƒ๋‹จ์— ์‹œํ€€์Šค ๋ถ„๋ฅ˜ ํ—ค๋“œ๋ฅผ ์ถ”๊ฐ€ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ๋ถ„๋ฅ˜ ํ—ค๋“œ๋Š” ์ธ์ฝ”๋”์˜ ์€๋‹‰ ์ƒํƒœ(hidden states)๋ฅผ ๋ฐ›๋Š” ์„ ํ˜• ๋ ˆ์ด์–ด์ž…๋‹ˆ๋‹ค. ์€๋‹‰ ์ƒํƒœ๋Š” ๊ฐ๊ฐ ๊ธธ์ด๊ฐ€ ๋‹ค๋ฅธ ์˜ค๋””์˜ค ํ”„๋ ˆ์ž„์—์„œ ํ•™์Šต๋œ ํŠน์ง•์„ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. ๊ณ ์ • ๊ธธ์ด์˜ ๋ฒกํ„ฐ ํ•˜๋‚˜๋ฅผ ๋งŒ๋“ค๊ธฐ ์œ„ํ•ด, ์€๋‹‰ ์ƒํƒœ๋Š” ๋จผ์ € ํ’€๋ง๋˜๊ณ , ํด๋ž˜์Šค ๋ ˆ์ด๋ธ”์— ๋Œ€ํ•œ ๋กœ์ง“์œผ๋กœ ๋ณ€ํ™˜๋ฉ๋‹ˆ๋‹ค. ๊ฐ€์žฅ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์€ ํด๋ž˜์Šค๋ฅผ ์ฐพ๊ธฐ ์œ„ํ•ด ๋กœ์ง“๊ณผ ํƒ€๊ฒŸ ์‚ฌ์ด์˜ ๊ต์ฐจ ์—”ํŠธ๋กœํ”ผ ์†์‹ค์ด ๊ณ„์‚ฐ๋ฉ๋‹ˆ๋‹ค. 
์˜ค๋””์˜ค ๋ถ„๋ฅ˜์— ์ง์ ‘ ๋„์ „ํ•  ์ค€๋น„๊ฐ€ ๋˜์…จ๋‚˜์š”? ์™„์ „ํ•œ [์˜ค๋””์˜ค ๋ถ„๋ฅ˜ ๊ฐ€์ด๋“œ](tasks/audio_classification)๋ฅผ ํ™•์ธํ•˜์—ฌ Wav2Vec2๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ•™์Šตํ•˜์„ธ์š”! ### ์ž๋™ ์Œ์„ฑ ์ธ์‹[[automatic-speech-recognition]] ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ์ž๋™ ์Œ์„ฑ ์ธ์‹์— ์‚ฌ์šฉํ•˜๋ ค๋ฉด, [์—ฐ๊ฒฐ์ฃผ์˜์  ์‹œ๊ฐ„ ๋ถ„๋ฅ˜(CTC, Connectionist Temporal Classification)](glossary#connectionist-temporal-classification-ctc)๋ฅผ ์œ„ํ•ด ๊ธฐ๋ณธ Wav2Vec2 ๋ชจ๋ธ ์ƒ๋‹จ์— ์–ธ์–ด ๋ชจ๋ธ๋ง ํ—ค๋“œ๋ฅผ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ์–ธ์–ด ๋ชจ๋ธ๋ง ํ—ค๋“œ๋Š” ์ธ์ฝ”๋”์˜ ์€๋‹‰ ์ƒํƒœ๋ฅผ ๋ฐ›์•„์„œ ๋กœ์ง“์œผ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ๊ฐ ๋กœ์ง“์€ ํ† ํฐ ํด๋ž˜์Šค(ํ† ํฐ ์ˆ˜๋Š” ์ž‘์—…์˜ ์–ดํœ˜์—์„œ ๋‚˜ํƒ€๋‚ฉ๋‹ˆ๋‹ค)๋ฅผ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. CTC ์†์‹ค์€ ํ…์ŠคํŠธ๋กœ ๋””์ฝ”๋”ฉ๋œ ํ† ํฐ์—์„œ ๊ฐ€์žฅ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์€ ํ† ํฐ ์‹œํ€€์Šค๋ฅผ ์ฐพ๊ธฐ ์œ„ํ•ด ๋กœ์ง“๊ณผ ํƒ€๊ฒŸ ์‚ฌ์ด์—์„œ ๊ณ„์‚ฐ๋ฉ๋‹ˆ๋‹ค. ์ž๋™ ์Œ์„ฑ ์ธ์‹์— ์ง์ ‘ ๋„์ „ํ•  ์ค€๋น„๊ฐ€ ๋˜์…จ๋‚˜์š”? ์™„์ „ํ•œ [์ž๋™ ์Œ์„ฑ ์ธ์‹ ๊ฐ€์ด๋“œ](tasks/asr)๋ฅผ ํ™•์ธํ•˜์—ฌ Wav2Vec2๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ•™์Šตํ•˜์„ธ์š”! ## ์ปดํ“จํ„ฐ ๋น„์ „[[computer-vision]] ์ปดํ“จํ„ฐ ๋น„์ „ ์ž‘์—…์— ์ ‘๊ทผํ•˜๋Š” 2๊ฐ€์ง€ ๋ฐฉ๋ฒ•์ด ์žˆ์Šต๋‹ˆ๋‹ค: 1. ์ด๋ฏธ์ง€๋ฅผ ํŒจ์น˜ ์‹œํ€€์Šค๋กœ ๋ถ„๋ฆฌํ•˜๊ณ  Transformer๋กœ ๋ณ‘๋ ฌ ์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค. 2. [ConvNeXT](model_doc/convnext)์™€ ๊ฐ™์€ ํ˜„๋Œ€ CNN์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ํ•ฉ์„ฑ๊ณฑ ๋ ˆ์ด์–ด๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•˜์ง€๋งŒ ํ˜„๋Œ€ ๋„คํŠธ์›Œํฌ ์„ค๊ณ„๋ฅผ ์ ์šฉํ•ฉ๋‹ˆ๋‹ค. <Tip> ์„ธ ๋ฒˆ์งธ ๋ฐฉ๋ฒ•์€ Transformer์™€ ํ•ฉ์„ฑ๊ณฑ(์˜ˆ๋ฅผ ๋“ค์–ด, [Convolutional Vision Transformer](model_doc/cvt) ๋˜๋Š” [LeViT](model_doc/levit))์„ ๊ฒฐํ•ฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์šฐ๋ฆฌ๋Š” ์‚ดํŽด๋ณผ ๋‘ ๊ฐ€์ง€ ๋ฐฉ๋ฒ•๋งŒ ๊ฒฐํ•ฉํ•˜๊ธฐ ๋•Œ๋ฌธ์— ์—ฌ๊ธฐ์„œ ์ด ๋ฐฉ๋ฒ•์„ ๋‹ค๋ฃจ์ง€ ์•Š์Šต๋‹ˆ๋‹ค. 
</Tip>

ViT์™€ ConvNeXT๋Š” ์ผ๋ฐ˜์ ์œผ๋กœ ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜์—์„œ ์‚ฌ์šฉ๋˜์ง€๋งŒ, ๋ฌผ์ฒด ๊ฐ์ง€, ๋ถ„ํ• , ๊นŠ์ด ์ถ”์ •๊ณผ ๊ฐ™์€ ๋‹ค๋ฅธ ๋น„์ „ ์ž‘์—…์—๋Š” ๊ฐ๊ฐ DETR, Mask2Former, GLPN์ด ๋” ์ ํ•ฉํ•˜๋ฏ€๋กœ ์ด๋Ÿฌํ•œ ๋ชจ๋ธ์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค.

### ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜[[image-classification]]

ViT์™€ ConvNeXT ๋ชจ๋‘ ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜์— ์‚ฌ์šฉ๋  ์ˆ˜ ์žˆ์ง€๋งŒ, ViT๋Š” ์–ดํ…์…˜ ๋ฉ”์ปค๋‹ˆ์ฆ˜์„, ConvNeXT๋Š” ํ•ฉ์„ฑ๊ณฑ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ์ฃผ๋œ ์ฐจ์ด์ž…๋‹ˆ๋‹ค.

#### Transformer[[transformer]]

[ViT](model_doc/vit)๋Š” ํ•ฉ์„ฑ๊ณฑ์„ ์ „์ ์œผ๋กœ ์ˆœ์ˆ˜ Transformer ์•„ํ‚คํ…์ฒ˜๋กœ ๋Œ€์ฒดํ•ฉ๋‹ˆ๋‹ค. ๊ธฐ์กด Transformer์— ์ต์ˆ™ํ•˜๋‹ค๋ฉด, ViT๋ฅผ ์ดํ•ดํ•˜๋Š” ๋ฐฉ๋ฒ•์˜ ๋Œ€๋ถ€๋ถ„์„ ์ด๋ฏธ ํŒŒ์•…ํ–ˆ๋‹ค๊ณ  ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/vit_architecture.jpg"/>
</div>

ViT๊ฐ€ ๋„์ž…ํ•œ ์ฃผ์š” ๋ณ€๊ฒฝ ์‚ฌํ•ญ์€ ์ด๋ฏธ์ง€๊ฐ€ Transformer๋กœ ์–ด๋–ป๊ฒŒ ์ „๋‹ฌ๋˜๋Š”์ง€์— ์žˆ์Šต๋‹ˆ๋‹ค:

1. ์ด๋ฏธ์ง€๋Š” ์„œ๋กœ ์ค‘์ฒฉ๋˜์ง€ ์•Š๋Š” ์ •์‚ฌ๊ฐํ˜• ํŒจ์น˜๋กœ ๋ถ„ํ• ๋˜๊ณ , ๊ฐ ํŒจ์น˜๋Š” ๋ฒกํ„ฐ ๋˜๋Š” *ํŒจ์น˜ ์ž„๋ฒ ๋”ฉ(patch embedding)*์œผ๋กœ ๋ณ€ํ™˜๋ฉ๋‹ˆ๋‹ค. ํŒจ์น˜ ์ž„๋ฒ ๋”ฉ์€ ์ ์ ˆํ•œ ์ž…๋ ฅ ์ฐจ์›์„ ๋งŒ๋“œ๋Š” 2D ํ•ฉ์„ฑ๊ณฑ ๊ณ„์ธต์—์„œ ์ƒ์„ฑ๋ฉ๋‹ˆ๋‹ค(๊ธฐ๋ณธ Transformer์˜ ๊ฒฝ์šฐ ๊ฐ ํŒจ์น˜์˜ ์ž„๋ฒ ๋”ฉ๋งˆ๋‹ค 768๊ฐœ์˜ ๊ฐ’์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค). 224x224 ํ”ฝ์…€ ์ด๋ฏธ์ง€๊ฐ€ ์žˆ๋‹ค๋ฉด, 16x16 ์ด๋ฏธ์ง€ ํŒจ์น˜ 196๊ฐœ๋กœ ๋ถ„ํ• ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ…์ŠคํŠธ๊ฐ€ ๋‹จ์–ด๋กœ ํ† ํฐํ™”๋˜๋Š” ๊ฒƒ์ฒ˜๋Ÿผ, ์ด๋ฏธ์ง€๋„ ํŒจ์น˜ ์‹œํ€€์Šค๋กœ "ํ† ํฐํ™”"๋ฉ๋‹ˆ๋‹ค.

2. *ํ•™์Šต ๊ฐ€๋Šฅํ•œ ์ž„๋ฒ ๋”ฉ(learnable embedding)*(ํŠน์ˆ˜ํ•œ `[CLS]` ํ† ํฐ)์ด BERT์™€ ๊ฐ™์ด ํŒจ์น˜ ์ž„๋ฒ ๋”ฉ์˜ ์‹œ์ž‘ ๋ถ€๋ถ„์— ์ถ”๊ฐ€๋ฉ๋‹ˆ๋‹ค. `[CLS]` ํ† ํฐ์˜ ๋งˆ์ง€๋ง‰ ์€๋‹‰ ์ƒํƒœ๋Š” ๋ถ€์ฐฉ๋œ ๋ถ„๋ฅ˜ ํ—ค๋“œ์˜ ์ž…๋ ฅ์œผ๋กœ ์‚ฌ์šฉ๋˜๊ณ , ๋‹ค๋ฅธ ์ถœ๋ ฅ์€ ๋ฌด์‹œ๋ฉ๋‹ˆ๋‹ค. ์ด ํ† ํฐ์€ ๋ชจ๋ธ์ด ์ด๋ฏธ์ง€์˜ ํ‘œํ˜„์„ ์ธ์ฝ”๋”ฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ•™์Šตํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋ฉ๋‹ˆ๋‹ค.

3.
ํŒจ์น˜์™€ ํ•™์Šต ๊ฐ€๋Šฅํ•œ ์ž„๋ฒ ๋”ฉ์— ๋งˆ์ง€๋ง‰์œผ๋กœ ์ถ”๊ฐ€ํ•  ๊ฒƒ์€ *์œ„์น˜ ์ž„๋ฒ ๋”ฉ*์ž…๋‹ˆ๋‹ค. ์™œ๋ƒํ•˜๋ฉด ๋ชจ๋ธ์€ ์ด๋ฏธ์ง€ ํŒจ์น˜์˜ ์ˆœ์„œ๋ฅผ ๋ชจ๋ฅด๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ์œ„์น˜ ์ž„๋ฒ ๋”ฉ๋„ ํ•™์Šต ๊ฐ€๋Šฅํ•˜๋ฉฐ, ํŒจ์น˜ ์ž„๋ฒ ๋”ฉ๊ณผ ๋™์ผํ•œ ํฌ๊ธฐ๋ฅผ ๊ฐ€์ง‘๋‹ˆ๋‹ค. ์ตœ์ข…์ ์œผ๋กœ, ๋ชจ๋“  ์ž„๋ฒ ๋”ฉ์ด Transformer ์ธ์ฝ”๋”์— ์ „๋‹ฌ๋ฉ๋‹ˆ๋‹ค.

4. `[CLS]` ํ† ํฐ์„ ํฌํ•จํ•œ ์ถœ๋ ฅ์€ ๋‹ค์ธต ํผ์…‰ํŠธ๋ก  ํ—ค๋“œ(MLP)์— ์ „๋‹ฌ๋ฉ๋‹ˆ๋‹ค. ViT์˜ ์‚ฌ์ „ํ›ˆ๋ จ ๋ชฉํ‘œ๋Š” ๋‹จ์ˆœํžˆ ๋ถ„๋ฅ˜์ž…๋‹ˆ๋‹ค. ๋‹ค๋ฅธ ๋ถ„๋ฅ˜ ํ—ค๋“œ์™€ ๊ฐ™์ด, MLP ํ—ค๋“œ๋Š” ์ถœ๋ ฅ์„ ํด๋ž˜์Šค ๋ ˆ์ด๋ธ”์— ๋Œ€ํ•ด ๋กœ์ง“์œผ๋กœ ๋ณ€ํ™˜ํ•˜๊ณ  ๊ต์ฐจ ์—”ํŠธ๋กœํ”ผ ์†์‹ค์„ ๊ณ„์‚ฐํ•˜์—ฌ ๊ฐ€์žฅ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์€ ํด๋ž˜์Šค๋ฅผ ์ฐพ์Šต๋‹ˆ๋‹ค.

์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜์— ์ง์ ‘ ๋„์ „ํ•  ์ค€๋น„๊ฐ€ ๋˜์…จ๋‚˜์š”? ์™„์ „ํ•œ [์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜ ๊ฐ€์ด๋“œ](tasks/image_classification)๋ฅผ ํ™•์ธํ•˜์—ฌ ViT๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ•™์Šตํ•˜์„ธ์š”!

#### CNN[[cnn]]

<Tip>

์ด ์„น์…˜์—์„œ๋Š” ํ•ฉ์„ฑ๊ณฑ์— ๋Œ€ํ•ด ๊ฐ„๋žตํ•˜๊ฒŒ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ด๋ฏธ์ง€์˜ ๋ชจ์–‘๊ณผ ํฌ๊ธฐ๊ฐ€ ์–ด๋–ป๊ฒŒ ๋ณ€ํ™”ํ•˜๋Š”์ง€์— ๋Œ€ํ•œ ์‚ฌ์ „ ์ดํ•ด๊ฐ€ ์žˆ๋‹ค๋ฉด ๋„์›€์ด ๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ํ•ฉ์„ฑ๊ณฑ์— ์ต์ˆ™ํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ, fastai book์˜ [ํ•ฉ์„ฑ๊ณฑ ์‹ ๊ฒฝ๋ง ์ฑ•ํ„ฐ](https://github.com/fastai/fastbook/blob/master/13_convolutions.ipynb)๋ฅผ ํ™•์ธํ•˜์„ธ์š”!

</Tip>

[ConvNeXT](model_doc/convnext)๋Š” ์„ฑ๋Šฅ์„ ๋†’์ด๊ธฐ ์œ„ํ•ด ์ƒˆ๋กœ์šด ํ˜„๋Œ€ ๋„คํŠธ์›Œํฌ ์„ค๊ณ„๋ฅผ ์ ์šฉํ•œ CNN ๊ตฌ์กฐ์ž…๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ํ•ฉ์„ฑ๊ณฑ์€ ์—ฌ์ „ํžˆ ๋ชจ๋ธ์˜ ํ•ต์‹ฌ์ž…๋‹ˆ๋‹ค. ๋†’์€ ์ˆ˜์ค€์˜ ๊ด€์ ์—์„œ ๋ณผ ๋•Œ, [ํ•ฉ์„ฑ๊ณฑ](glossary#convolution)์€ ์ž‘์€ ํ–‰๋ ฌ(*์ปค๋„*)์— ์ด๋ฏธ์ง€ ํ”ฝ์…€์˜ ์ž‘์€ ์œˆ๋„์šฐ๋ฅผ ๊ณฑํ•˜๋Š” ์—ฐ์‚ฐ์ž…๋‹ˆ๋‹ค. ์ด๋Š” ํŠน์ • ํ…์Šค์ฒ˜(texture)๋‚˜ ์„ ์˜ ๊ณก๋ฅ ๊ณผ ๊ฐ™์€ ์ผ๋ถ€ ํŠน์ง•์„ ๊ณ„์‚ฐํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๊ณ  ๋‹ค์Œ ํ”ฝ์…€ ์œˆ๋„์šฐ๋กœ ๋„˜์–ด๊ฐ€๋Š”๋ฐ, ์—ฌ๊ธฐ์„œ ํ•ฉ์„ฑ๊ณฑ์ด ์ด๋™ํ•˜๋Š” ๊ฑฐ๋ฆฌ๋ฅผ *๋ณดํญ(stride)*์ด๋ผ๊ณ  ํ•ฉ๋‹ˆ๋‹ค.
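์œ„์—์„œ ์„ค๋ช…ํ•œ ํ•ฉ์„ฑ๊ณฑ ์—ฐ์‚ฐ(์ปค๋„์„ ๋ณดํญ๋งŒํผ ์ด๋™ํ•˜๋ฉฐ ์œˆ๋„์šฐ๋ณ„ ๊ณฑ์˜ ํ•ฉ์„ ๊ณ„์‚ฐ)์„ ๊ทธ๋Œ€๋กœ ์˜ฎ๊ธด ์ตœ์†Œํ•œ์˜ ์Šค์ผ€์น˜์ž…๋‹ˆ๋‹ค. ์‹ค์ œ ๋”ฅ๋Ÿฌ๋‹ ํ”„๋ ˆ์ž„์›Œํฌ ๊ตฌํ˜„์ด ์•„๋‹ˆ๋ฉฐ, ์ด๋ฏธ์ง€์™€ ์ปค๋„ ๊ฐ’์€ ์„ค๋ช…์„ ์œ„ํ•ด ์ž„์˜๋กœ ์ •ํ•œ ๊ฒƒ์ž…๋‹ˆ๋‹ค:

```python
def conv2d(image, kernel, stride=1):
    """ํŒจ๋”ฉ ์—†์ด ์ปค๋„์„ ๋ณดํญ(stride)๋งŒํผ ์ด๋™ํ•˜๋ฉฐ ์œˆ๋„์šฐ๋ณ„ ๊ณฑ์˜ ํ•ฉ์„ ๊ณ„์‚ฐํ•ฉ๋‹ˆ๋‹ค."""
    kh, kw = len(kernel), len(kernel[0])
    # ์ถœ๋ ฅ ํฌ๊ธฐ ๊ณต์‹: (์ž…๋ ฅ - ์ปค๋„) // ๋ณดํญ + 1
    out_h = (len(image) - kh) // stride + 1
    out_w = (len(image[0]) - kw) // stride + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            acc = 0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i * stride + di][j * stride + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

image = [[1, 2, 0], [0, 1, 3], [4, 0, 1]]
kernel = [[1, 0], [0, 1]]     # ๋Œ€๊ฐ์„  ๊ฐ’์„ ๋”ํ•˜๋Š” 2x2 ์ปค๋„
print(conv2d(image, kernel))            # [[2, 5], [0, 2]]
print(conv2d(image, kernel, stride=2))  # [[2]] -> ๋ณดํญ์ด ์ปค์ง€๋ฉด ์ถœ๋ ฅ์ด ์ž‘์•„์ง‘๋‹ˆ๋‹ค
```

์ด ์ถœ๋ ฅ(ํŠน์ง• ๋งต)์ด ๋‹ค์Œ ํ•ฉ์„ฑ๊ณฑ ๋ ˆ์ด์–ด์˜ ์ž…๋ ฅ์ด ๋ฉ๋‹ˆ๋‹ค.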
<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convolution.gif"/>
</div>

<small>ํŒจ๋”ฉ์ด๋‚˜ ๋ณดํญ์ด ์—†๋Š” ๊ธฐ๋ณธ ํ•ฉ์„ฑ๊ณฑ, <a href="https://arxiv.org/abs/1603.07285">๋”ฅ๋Ÿฌ๋‹์„ ์œ„ํ•œ ํ•ฉ์„ฑ๊ณฑ ์—ฐ์‚ฐ ๊ฐ€์ด๋“œ</a></small>

์ด ์ถœ๋ ฅ์„ ๋‹ค๋ฅธ ํ•ฉ์„ฑ๊ณฑ ๋ ˆ์ด์–ด์— ์ „๋‹ฌํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ, ๊ฐ ์—ฐ์†์ ์ธ ๋ ˆ์ด์–ด๋ฅผ ํ†ตํ•ด ๋„คํŠธ์›Œํฌ๋Š” ํ•ซ๋„๊ทธ๋‚˜ ๋กœ์ผ“๊ณผ ๊ฐ™์ด ๋” ๋ณต์žกํ•˜๊ณ  ์ถ”์ƒ์ ์ธ ๊ฒƒ์„ ํ•™์Šตํ•ฉ๋‹ˆ๋‹ค. ํ•ฉ์„ฑ๊ณฑ ๋ ˆ์ด์–ด ์‚ฌ์ด์— ํ’€๋ง ๋ ˆ์ด์–ด๋ฅผ ์ถ”๊ฐ€ํ•˜์—ฌ ์ฐจ์›์„ ์ค„์ด๊ณ  ํŠน์ง•์˜ ์œ„์น˜ ๋ณ€ํ™”์— ๋Œ€ํ•ด ๋ชจ๋ธ์„ ๋” ๊ฒฌ๊ณ ํ•˜๊ฒŒ ๋งŒ๋“œ๋Š” ๊ฒƒ์ด ์ผ๋ฐ˜์ ์ž…๋‹ˆ๋‹ค.

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convnext_architecture.png"/>
</div>

ConvNeXT๋Š” CNN์„ 5๊ฐ€์ง€ ๋ฐฉ์‹์œผ๋กœ ํ˜„๋Œ€ํ™”ํ•ฉ๋‹ˆ๋‹ค:

1. ๊ฐ ๋‹จ๊ณ„์˜ ๋ธ”๋ก ์ˆ˜๋ฅผ ๋ณ€๊ฒฝํ•˜๊ณ  ๋” ํฐ ๋ณดํญ๊ณผ ๊ทธ์— ๋Œ€์‘ํ•˜๋Š” ์ปค๋„ ํฌ๊ธฐ๋กœ ์ด๋ฏธ์ง€๋ฅผ "ํŒจ์น˜ํ™”(patchify)"ํ•ฉ๋‹ˆ๋‹ค. ๊ฒน์น˜์ง€ ์•Š๋Š” ์Šฌ๋ผ์ด๋”ฉ ์œˆ๋„์šฐ๋Š” ViT๊ฐ€ ์ด๋ฏธ์ง€๋ฅผ ํŒจ์น˜๋กœ ๋ถ„ํ• ํ•˜๋Š” ๋ฐฉ๋ฒ•๊ณผ ์œ ์‚ฌํ•˜๊ฒŒ ์ด ํŒจ์น˜ํ™” ์ „๋žต์„ ๋งŒ๋“ญ๋‹ˆ๋‹ค.

2. *๋ณ‘๋ชฉ(bottleneck)* ๋ ˆ์ด์–ด๋Š” ์ฑ„๋„ ์ˆ˜๋ฅผ ์ค„์˜€๋‹ค๊ฐ€ ๋‹ค์‹œ ๋ณต์›ํ•ฉ๋‹ˆ๋‹ค. ์™œ๋ƒํ•˜๋ฉด 1x1 ํ•ฉ์„ฑ๊ณฑ์„ ์ˆ˜ํ–‰ํ•˜๋Š” ๊ฒƒ์ด ๋” ๋น ๋ฅด๊ณ , ๊นŠ์ด๋ฅผ ๋Š˜๋ฆด ์ˆ˜ ์žˆ๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ์—ญ ๋ณ‘๋ชฉ(inverted bottleneck)์€ ์ฑ„๋„ ์ˆ˜๋ฅผ ํ™•์žฅํ•˜๊ณ  ์ถ•์†Œํ•จ์œผ๋กœ์จ ๊ทธ ๋ฐ˜๋Œ€๋กœ ์ˆ˜ํ–‰ํ•˜๋ฏ€๋กœ, ๋ฉ”๋ชจ๋ฆฌ ํšจ์œจ์ด ๋” ๋†’์Šต๋‹ˆ๋‹ค.

3. ๋ณ‘๋ชฉ ๋ ˆ์ด์–ด์˜ ์ผ๋ฐ˜์ ์ธ 3x3 ํ•ฉ์„ฑ๊ณฑ ๋ ˆ์ด์–ด๋ฅผ ๊ฐ ์ž…๋ ฅ ์ฑ„๋„์— ๊ฐœ๋ณ„์ ์œผ๋กœ ํ•ฉ์„ฑ๊ณฑ์„ ์ ์šฉํ•œ ๋‹ค์Œ ๋งˆ์ง€๋ง‰์— ์Œ“๋Š” *๊นŠ์ด๋ณ„ ํ•ฉ์„ฑ๊ณฑ(depthwise convolution)*์œผ๋กœ ๋Œ€์ฒดํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ๋„คํŠธ์›Œํฌ ํญ์ด ๋„“ํ˜€ ์„ฑ๋Šฅ์ด ํ–ฅ์ƒ๋ฉ๋‹ˆ๋‹ค.

4. ViT๋Š” ์–ดํ…์…˜ ๋ฉ”์ปค๋‹ˆ์ฆ˜ ๋•๋ถ„์— ํ•œ ๋ฒˆ์— ๋” ๋งŽ์€ ์ด๋ฏธ์ง€๋ฅผ ๋ณผ ์ˆ˜ ์žˆ๋Š” ์ „์—ญ ์ˆ˜์‹  ํ•„๋“œ๋ฅผ ๊ฐ€์ง€๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค.
ConvNeXT๋Š” ์ปค๋„ ํฌ๊ธฐ๋ฅผ 7x7๋กœ ๋Š˜๋ ค ์ด ํšจ๊ณผ๋ฅผ ์žฌํ˜„ํ•˜๋ ค๊ณ  ์‹œ๋„ํ•ฉ๋‹ˆ๋‹ค.

5. ๋˜ํ•œ ConvNeXT๋Š” Transformer ๋ชจ๋ธ์„ ๋ชจ๋ฐฉํ•˜๋Š” ๋ช‡ ๊ฐ€์ง€ ๋ ˆ์ด์–ด ์„ค๊ณ„๋ฅผ ๋ณ€๊ฒฝํ•ฉ๋‹ˆ๋‹ค. ํ™œ์„ฑํ™” ๋ฐ ์ •๊ทœํ™” ๋ ˆ์ด์–ด๊ฐ€ ๋” ์ ๊ณ , ํ™œ์„ฑํ™” ํ•จ์ˆ˜๊ฐ€ ReLU ๋Œ€์‹  GELU๋กœ ์ „ํ™˜๋˜๊ณ , BatchNorm ๋Œ€์‹  LayerNorm์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค.

ํ•ฉ์„ฑ๊ณฑ ๋ธ”๋ก์˜ ์ถœ๋ ฅ์€ ๋ถ„๋ฅ˜ ํ—ค๋“œ๋กœ ์ „๋‹ฌ๋˜๋ฉฐ, ๋ถ„๋ฅ˜ ํ—ค๋“œ๋Š” ์ถœ๋ ฅ์„ ๋กœ์ง“์œผ๋กœ ๋ณ€ํ™˜ํ•˜๊ณ  ๊ต์ฐจ ์—”ํŠธ๋กœํ”ผ ์†์‹ค์„ ๊ณ„์‚ฐํ•˜์—ฌ ๊ฐ€์žฅ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์€ ๋ ˆ์ด๋ธ”์„ ์ฐพ์Šต๋‹ˆ๋‹ค.

### ๊ฐ์ฒด ํƒ์ง€[[object-detection]]

[DETR](model_doc/detr), *DEtection TRansformer*๋Š” CNN๊ณผ Transformer ์ธ์ฝ”๋”-๋””์ฝ”๋”๋ฅผ ๊ฒฐํ•ฉํ•œ ์ข…๋‹จ๊ฐ„(end-to-end) ๊ฐ์ฒด ํƒ์ง€ ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค.

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/detr_architecture.png"/>
</div>

1. ์‚ฌ์ „ํ›ˆ๋ จ๋œ CNN *๋ฐฑ๋ณธ(backbone)*์€ ํ”ฝ์…€ ๊ฐ’์œผ๋กœ ๋‚˜ํƒ€๋‚ธ ์ด๋ฏธ์ง€๋ฅผ ๊ฐ€์ ธ์™€ ์ €ํ•ด์ƒ๋„ ํŠน์ง• ๋งต์„ ๋งŒ๋“ญ๋‹ˆ๋‹ค. ํŠน์ง• ๋งต์— ๋Œ€ํ•ด 1x1 ํ•ฉ์„ฑ๊ณฑ์„ ์ ์šฉํ•˜์—ฌ ์ฐจ์›์„ ์ค„์ด๊ณ , ๊ณ ์ˆ˜์ค€ ์ด๋ฏธ์ง€ ํ‘œํ˜„์„ ๊ฐ€์ง„ ์ƒˆ๋กœ์šด ํŠน์ง• ๋งต์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. Transformer๋Š” ์‹œํ€€์Šค ๋ชจ๋ธ์ด๊ธฐ ๋•Œ๋ฌธ์— ํŠน์ง• ๋งต์„ ์œ„์น˜ ์ž„๋ฒ ๋”ฉ๊ณผ ๊ฒฐํ•ฉ๋œ ํŠน์ง• ๋ฒกํ„ฐ์˜ ์‹œํ€€์Šค๋กœ ํ‰ํƒ„ํ™”ํ•ฉ๋‹ˆ๋‹ค.

2. ํŠน์ง• ๋ฒกํ„ฐ๋Š” ์–ดํ…์…˜ ๋ ˆ์ด์–ด๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ด๋ฏธ์ง€ ํ‘œํ˜„์„ ํ•™์Šตํ•˜๋Š” ์ธ์ฝ”๋”์— ์ „๋‹ฌ๋ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ์œผ๋กœ, ์ธ์ฝ”๋”์˜ ์€๋‹‰ ์ƒํƒœ๋Š” ๋””์ฝ”๋”์—์„œ *๊ฐ์ฒด ์ฟผ๋ฆฌ*์™€ ๊ฒฐํ•ฉ๋ฉ๋‹ˆ๋‹ค. ๊ฐ์ฒด ์ฟผ๋ฆฌ๋Š” ์ด๋ฏธ์ง€์˜ ๋‹ค๋ฅธ ์˜์—ญ์— ์ดˆ์ ์„ ๋งž์ถ˜ ํ•™์Šต๋œ ์ž„๋ฒ ๋”ฉ์œผ๋กœ ํ•™์Šต๋˜๊ณ , ๊ฐ ์–ดํ…์…˜ ๋ ˆ์ด์–ด๋ฅผ ์ง„ํ–‰ํ•˜๋ฉด์„œ ๊ฐฑ์‹ ๋ฉ๋‹ˆ๋‹ค. ๋””์ฝ”๋”์˜ ์€๋‹‰ ์ƒํƒœ๋Š” ๊ฐ ๊ฐ์ฒด ์ฟผ๋ฆฌ์— ๋Œ€ํ•œ ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค ์ขŒํ‘œ์™€ ํด๋ž˜์Šค ๋ ˆ์ด๋ธ”์„ ์˜ˆ์ธกํ•˜๋Š” ์ˆœ๋ฐฉํ–ฅ ๋„คํŠธ์›Œํฌ์— ์ „๋‹ฌ๋˜๋ฉฐ, ๊ฐ์ฒด๊ฐ€ ์—†๋Š” ๊ฒฝ์šฐ `no object`๊ฐ€ ์ถœ๋ ฅ๋ฉ๋‹ˆ๋‹ค.
DETR์€ ๊ฐ ๊ฐ์ฒด ์ฟผ๋ฆฌ๋ฅผ ๋ณ‘๋ ฌ๋กœ ๋””์ฝ”๋”ฉํ•˜์—ฌ *N* ๊ฐœ์˜ ์ตœ์ข… ์˜ˆ์ธก์„ ์ถœ๋ ฅํ•ฉ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ *N*์€ ์ฟผ๋ฆฌ ์ˆ˜์ž…๋‹ˆ๋‹ค. ํ•œ ๋ฒˆ์— ํ•˜๋‚˜์˜ ์š”์†Œ๋ฅผ ์˜ˆ์ธกํ•˜๋Š” ์ผ๋ฐ˜์ ์ธ ์ž๊ธฐํšŒ๊ท€ ๋ชจ๋ธ๊ณผ ๋‹ฌ๋ฆฌ, ๊ฐ์ฒด ํƒ์ง€๋Š” ํ•œ ๋ฒˆ์— *N* ๊ฐœ์˜ ์˜ˆ์ธก์„ ์ˆ˜ํ–‰ํ•˜๋Š” ์ง‘ํ•ฉ ์˜ˆ์ธก ์ž‘์—…(`๋ฐ”์šด๋”ฉ ๋ฐ•์Šค`, `ํด๋ž˜์Šค ๋ ˆ์ด๋ธ”`)์ž…๋‹ˆ๋‹ค. 3. DETR์€ ํ›ˆ๋ จ ์ค‘ *์ด๋ถ„ ๋งค์นญ ์†์‹ค(bipartite matching loss)*์„ ์‚ฌ์šฉํ•˜์—ฌ ๊ณ ์ •๋œ ์ˆ˜์˜ ์˜ˆ์ธก๊ณผ ๊ณ ์ •๋œ ์‹ค์ œ ์ •๋‹ต ๋ ˆ์ด๋ธ”(ground truth labels) ์„ธํŠธ๋ฅผ ๋น„๊ตํ•ฉ๋‹ˆ๋‹ค. *N*๊ฐœ์˜ ๋ ˆ์ด๋ธ” ์„ธํŠธ์— ์‹ค์ œ ์ •๋‹ต ๋ ˆ์ด๋ธ”๋ณด๋‹ค ์ ์€ ๊ฒฝ์šฐ, `no object` ํด๋ž˜์Šค๋กœ ํŒจ๋”ฉ๋ฉ๋‹ˆ๋‹ค. ์ด ์†์‹ค ํ•จ์ˆ˜๋Š” DETR์ด ์˜ˆ์ธก๊ณผ ์‹ค์ œ ์ •๋‹ต ๋ ˆ์ด๋ธ” ๊ฐ„ 1:1 ๋Œ€์‘์„ ์ฐพ๋„๋ก ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค ๋˜๋Š” ํด๋ž˜์Šค ๋ ˆ์ด๋ธ” ์ค‘ ํ•˜๋‚˜๋ผ๋„ ์ž˜๋ชป๋œ ๊ฒฝ์šฐ, ์†์‹ค์ด ๋ฐœ์ƒํ•ฉ๋‹ˆ๋‹ค. ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ, ์กด์žฌํ•˜์ง€ ์•Š๋Š” ๊ฐ์ฒด๋ฅผ ์˜ˆ์ธกํ•˜๋Š” ๊ฒฝ์šฐ, ํŒจ๋„ํ‹ฐ๋ฅผ ๋ฐ›์Šต๋‹ˆ๋‹ค. ์ด๋กœ ์ธํ•ด DETR์€ ์ด๋ฏธ์ง€์—์„œ ๋ˆˆ์— ์ž˜ ๋„๋Š” ๋ฌผ์ฒด ํ•˜๋‚˜์— ์ง‘์ค‘ํ•˜๋Š” ๋Œ€์‹ , ๋‹ค๋ฅธ ๊ฐ์ฒด๋ฅผ ์ฐพ๋„๋ก ๊ถŒ์žฅ๋ฉ๋‹ˆ๋‹ค. ๊ฐ์ฒด ํƒ์ง€ ํ—ค๋“œ๊ฐ€ DETR ์ƒ๋‹จ์— ์ถ”๊ฐ€๋˜์–ด ํด๋ž˜์Šค ๋ ˆ์ด๋ธ”๊ณผ ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค์˜ ์ขŒํ‘œ๋ฅผ ์ฐพ์Šต๋‹ˆ๋‹ค. ๊ฐ์ฒด ํƒ์ง€ ํ—ค๋“œ์—๋Š” ๋‘ ๊ฐ€์ง€ ๊ตฌ์„ฑ ์š”์†Œ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค: ๋””์ฝ”๋” ์€๋‹‰ ์ƒํƒœ๋ฅผ ํด๋ž˜์Šค ๋ ˆ์ด๋ธ”์˜ ๋กœ์ง“์œผ๋กœ ๋ณ€ํ™˜ํ•˜๋Š” ์„ ํ˜• ๋ ˆ์ด์–ด ๋ฐ ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค๋ฅผ ์˜ˆ์ธกํ•˜๋Š” MLP ๊ฐ์ฒด ํƒ์ง€์— ์ง์ ‘ ๋„์ „ํ•  ์ค€๋น„๊ฐ€ ๋˜์…จ๋‚˜์š”? ์™„์ „ํ•œ [๊ฐ์ฒด ํƒ์ง€ ๊ฐ€์ด๋“œ](tasks/object_detection)๋ฅผ ํ™•์ธํ•˜์—ฌ DETR์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ•™์Šตํ•˜์„ธ์š”! ### ์ด๋ฏธ์ง€ ๋ถ„ํ• [[image-segmentation]] [Mask2Former](model_doc/mask2former)๋Š” ๋ชจ๋“  ์œ ํ˜•์˜ ์ด๋ฏธ์ง€ ๋ถ„ํ•  ์ž‘์—…์„ ํ•ด๊ฒฐํ•˜๋Š” ๋ฒ”์šฉ ์•„ํ‚คํ…์ฒ˜์ž…๋‹ˆ๋‹ค. 
์ „ํ†ต์ ์ธ ๋ถ„ํ•  ๋ชจ๋ธ์€ ์ผ๋ฐ˜์ ์œผ๋กœ ์‹œ๋ฉ˜ํ‹ฑ(semantic) ๋˜๋Š” ํŒŒ๋†‰ํ‹ฑ(panoptic) ๋ถ„ํ• ๊ณผ ๊ฐ™์€ ์ด๋ฏธ์ง€ ๋ถ„ํ• ์˜ ํŠน์ • ํ•˜์œ„ ์ž‘์—…์— ๋งž์ถฐ ์กฐ์ •๋ฉ๋‹ˆ๋‹ค. Mask2Former๋Š” ๋ชจ๋“  ์ž‘์—…์„ *๋งˆ์Šคํฌ ๋ถ„๋ฅ˜* ๋ฌธ์ œ๋กœ ๊ตฌ์„ฑํ•ฉ๋‹ˆ๋‹ค. ๋งˆ์Šคํฌ ๋ถ„๋ฅ˜๋Š” ํ”ฝ์…€์„ *N*๊ฐœ ์„ธ๊ทธ๋จผํŠธ๋กœ ๊ทธ๋ฃนํ™”ํ•˜๊ณ , ์ฃผ์–ด์ง„ ์ด๋ฏธ์ง€์— ๋Œ€ํ•ด *N*๊ฐœ์˜ ๋งˆ์Šคํฌ์™€ ๊ทธ์— ๋Œ€์‘ํ•˜๋Š” ํด๋ž˜์Šค ๋ ˆ์ด๋ธ”์„ ์˜ˆ์ธกํ•ฉ๋‹ˆ๋‹ค. ์ด ์„น์…˜์—์„œ Mask2Former์˜ ์ž‘๋™ ๋ฐฉ๋ฒ•์„ ์„ค๋ช…ํ•œ ๋‹ค์Œ, ๋งˆ์ง€๋ง‰์— SegFormer๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•ด๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/mask2former_architecture.png"/> </div> Mask2Former์—๋Š” 3๊ฐ€์ง€ ์ฃผ์š” ๊ตฌ์„ฑ ์š”์†Œ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค: 1. [Swin](model_doc/swin) ๋ฐฑ๋ณธ์ด ์ด๋ฏธ์ง€๋ฅผ ๋ฐ›์•„ 3๊ฐœ์˜ ์—ฐ์†๋œ 3x3 ํ•ฉ์„ฑ๊ณฑ์—์„œ ์ €ํ•ด์ƒ๋„ ์ด๋ฏธ์ง€ ํŠน์ง• ๋งต์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. 2. ํŠน์ง• ๋งต์€ *ํ”ฝ์…€ ๋””์ฝ”๋”*์— ์ „๋‹ฌ๋ฉ๋‹ˆ๋‹ค. ์ด ๋””์ฝ”๋”๋Š” ์ €ํ•ด์ƒ๋„ ํŠน์ง•์„ ๊ณ ํ•ด์ƒ๋„ ํ”ฝ์…€ ์ž„๋ฒ ๋”ฉ์œผ๋กœ ์ ์ง„์ ์œผ๋กœ ์—…์ƒ˜ํ”Œ๋งํ•ฉ๋‹ˆ๋‹ค. ํ”ฝ์…€ ๋””์ฝ”๋”๋Š” ์‹ค์ œ๋กœ ์›๋ณธ ์ด๋ฏธ์ง€์˜ 1/32, 1/16, 1/8 ํ•ด์ƒ๋„์˜ ๋‹ค์ค‘ ์Šค์ผ€์ผ ํŠน์ง•(์ €ํ•ด์ƒ๋„ ๋ฐ ๊ณ ํ•ด์ƒ๋„ ํŠน์ง• ๋ชจ๋‘ ํฌํ•จ)์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. 3. ์ด๋Ÿฌํ•œ ์„œ๋กœ ๋‹ค๋ฅธ ํฌ๊ธฐ์˜ ํŠน์ง• ๋งต์€ ๊ณ ํ•ด์ƒ๋„ ํŠน์ง•์—์„œ ์ž‘์€ ๊ฐ์ฒด๋ฅผ ํฌ์ฐฉํ•˜๊ธฐ ์œ„ํ•ด ํ•œ ๋ฒˆ์— ํ•˜๋‚˜์˜ Transformer ๋””์ฝ”๋” ๋ ˆ์ด์–ด์— ์—ฐ์†์ ์œผ๋กœ ๊ณต๊ธ‰๋ฉ๋‹ˆ๋‹ค. Mask2Former์˜ ํ•ต์‹ฌ์€ ๋””์ฝ”๋”์˜ *๋งˆ์Šคํฌ ์–ดํ…์…˜* ๋ฉ”์ปค๋‹ˆ์ฆ˜์ž…๋‹ˆ๋‹ค. ์ „์ฒด ์ด๋ฏธ์ง€๋ฅผ ์ฐธ์กฐํ•  ์ˆ˜ ์žˆ๋Š” ํฌ๋กœ์Šค ์–ดํ…์…˜(cross-attention)๊ณผ ๋‹ฌ๋ฆฌ, ๋งˆ์Šคํฌ ์–ดํ…์…˜์€ ์ด๋ฏธ์ง€์˜ ํŠน์ • ์˜์—ญ์—๋งŒ ์ง‘์ค‘ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ์ด๋ฏธ์ง€์˜ ์ง€์—ญ์  ํŠน์ง•๋งŒ์œผ๋กœ ๋ชจ๋ธ์ด ์ถฉ๋ถ„ํžˆ ํ•™์Šตํ•  ์ˆ˜ ์žˆ๊ธฐ ๋•Œ๋ฌธ์— ๋” ๋น ๋ฅด๊ณ  ์„ฑ๋Šฅ์ด ์šฐ์ˆ˜ํ•ฉ๋‹ˆ๋‹ค. 4. 
[DETR](tasks_explained#object-detection)๊ณผ ๊ฐ™์ด, Mask2Former๋Š” ํ•™์Šต๋œ ๊ฐ์ฒด ์ฟผ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜๊ณ  ์ด๋ฅผ ํ”ฝ์…€ ๋””์ฝ”๋”์—์„œ์˜ ์ด๋ฏธ์ง€ ํŠน์ง•๊ณผ ๊ฒฐํ•ฉํ•˜์—ฌ ์˜ˆ์ธก ์ง‘ํ•ฉ(`ํด๋ž˜์Šค ๋ ˆ์ด๋ธ”`, `๋งˆ์Šคํฌ ์˜ˆ์ธก`)์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ๋””์ฝ”๋”์˜ ์€๋‹‰ ์ƒํƒœ๋Š” ์„ ํ˜• ๋ ˆ์ด์–ด๋กœ ์ „๋‹ฌ๋˜์–ด ํด๋ž˜์Šค ๋ ˆ์ด๋ธ”์— ๋Œ€ํ•œ ๋กœ์ง“์œผ๋กœ ๋ณ€ํ™˜๋ฉ๋‹ˆ๋‹ค. ๋กœ์ง“๊ณผ ํด๋ž˜์Šค ๋ ˆ์ด๋ธ” ์‚ฌ์ด์˜ ๊ต์ฐจ ์—”ํŠธ๋กœํ”ผ ์†์‹ค์„ ๊ณ„์‚ฐํ•˜์—ฌ ๊ฐ€์žฅ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์€ ๊ฒƒ์„ ์ฐพ์Šต๋‹ˆ๋‹ค.

๋งˆ์Šคํฌ ์˜ˆ์ธก์€ ํ”ฝ์…€ ์ž„๋ฒ ๋”ฉ๊ณผ ์ตœ์ข… ๋””์ฝ”๋” ์€๋‹‰ ์ƒํƒœ๋ฅผ ๊ฒฐํ•ฉํ•˜์—ฌ ์ƒ์„ฑ๋ฉ๋‹ˆ๋‹ค. ์‹œ๊ทธ๋ชจ์ด๋“œ ๊ต์ฐจ ์—”ํŠธ๋กœํ”ผ ๋ฐ Dice ์†์‹ค์€ ๋กœ์ง“๊ณผ ์‹ค์ œ ์ •๋‹ต ๋งˆ์Šคํฌ(ground truth mask) ์‚ฌ์ด์—์„œ ๊ณ„์‚ฐ๋˜์–ด ๊ฐ€์žฅ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์€ ๋งˆ์Šคํฌ๋ฅผ ์ฐพ์Šต๋‹ˆ๋‹ค.

์ด๋ฏธ์ง€ ๋ถ„ํ• ์— ์ง์ ‘ ๋„์ „ํ•  ์ค€๋น„๊ฐ€ ๋˜์…จ๋‚˜์š”? ์™„์ „ํ•œ [์ด๋ฏธ์ง€ ๋ถ„ํ•  ๊ฐ€์ด๋“œ](tasks/semantic_segmentation)๋ฅผ ํ™•์ธํ•˜์—ฌ SegFormer๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ•™์Šตํ•˜์„ธ์š”!

### ๊นŠ์ด ์ถ”์ •[[depth-estimation]]

[GLPN](model_doc/glpn), *Global-Local Path Network*๋Š” [SegFormer](model_doc/segformer) ์ธ์ฝ”๋”์™€ ๊ฒฝ๋Ÿ‰ ๋””์ฝ”๋”๋ฅผ ๊ฒฐํ•ฉํ•œ ๊นŠ์ด ์ถ”์ •์„ ์œ„ํ•œ Transformer์ž…๋‹ˆ๋‹ค.

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/glpn_architecture.jpg"/>
</div>

1. ViT์™€ ๊ฐ™์ด, ์ด๋ฏธ์ง€๋Š” ํŒจ์น˜ ์‹œํ€€์Šค๋กœ ๋ถ„ํ• ๋˜์ง€๋งŒ, ์ด๋ฏธ์ง€ ํŒจ์น˜๊ฐ€ ๋” ์ž‘๋‹ค๋Š” ์ ์ด ๋‹ค๋ฆ…๋‹ˆ๋‹ค. ์ด๋Š” ์„ธ๊ทธ๋ฉ˜ํ…Œ์ด์…˜์ด๋‚˜ ๊นŠ์ด ์ถ”์ •๊ณผ ๊ฐ™์€ ๋ฐ€๋„ ์˜ˆ์ธก ์ž‘์—…์— ๋” ์ ํ•ฉํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€ ํŒจ์น˜๋Š” ํŒจ์น˜ ์ž„๋ฒ ๋”ฉ์œผ๋กœ ๋ณ€ํ™˜๋˜์–ด(ํŒจ์น˜ ์ž„๋ฒ ๋”ฉ์ด ์ƒ์„ฑ๋˜๋Š” ๋ฐฉ๋ฒ•์€ [์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜](#image-classification) ์„น์…˜์„ ์ฐธ์กฐํ•˜์„ธ์š”), ์ธ์ฝ”๋”๋กœ ์ „๋‹ฌ๋ฉ๋‹ˆ๋‹ค.

2. ์ธ์ฝ”๋”๋Š” ํŒจ์น˜ ์ž„๋ฒ ๋”ฉ์„ ๋ฐ›์•„, ์—ฌ๋Ÿฌ ์ธ์ฝ”๋” ๋ธ”๋ก์— ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. ๊ฐ ๋ธ”๋ก์€ ์–ดํ…์…˜ ๋ฐ Mix-FFN ๋ ˆ์ด์–ด๋กœ ๊ตฌ์„ฑ๋ฉ๋‹ˆ๋‹ค.
ํ›„์ž์˜ ๋ชฉ์ ์€ ์œ„์น˜ ์ •๋ณด๋ฅผ ์ œ๊ณตํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๊ฐ ์ธ์ฝ”๋” ๋ธ”๋ก์˜ ๋์—๋Š” ๊ณ„์ธต์  ํ‘œํ˜„์„ ์ƒ์„ฑํ•˜๊ธฐ ์œ„ํ•œ *ํŒจ์น˜ ๋ณ‘ํ•ฉ(patch merging)* ๋ ˆ์ด์–ด๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ฐ ์ธ์ ‘ํ•œ ํŒจ์น˜ ๊ทธ๋ฃน์˜ ํŠน์ง•์€ ์—ฐ๊ฒฐ๋˜๊ณ , ์—ฐ๊ฒฐ๋œ ํŠน์ง•์— ์„ ํ˜• ๋ ˆ์ด์–ด๊ฐ€ ์ ์šฉ๋˜์–ด ํŒจ์น˜ ์ˆ˜๋ฅผ 1/4์˜ ํ•ด์ƒ๋„๋กœ ์ค„์ž…๋‹ˆ๋‹ค. ์ด๋Š” ๋‹ค์Œ ์ธ์ฝ”๋” ๋ธ”๋ก์˜ ์ž…๋ ฅ์ด ๋˜๋ฉฐ, ์ด๋Ÿฌํ•œ ์ „์ฒด ํ”„๋กœ์„ธ์Šค๋Š” 1/8, 1/16, 1/32 ํ•ด์ƒ๋„์˜ ์ด๋ฏธ์ง€ ํŠน์ง•์„ ๊ฐ€์งˆ ๋•Œ๊นŒ์ง€ ๋ฐ˜๋ณต๋ฉ๋‹ˆ๋‹ค. 3. ๊ฒฝ๋Ÿ‰ ๋””์ฝ”๋”๋Š” ์ธ์ฝ”๋”์—์„œ ๋งˆ์ง€๋ง‰ ํŠน์ง• ๋งต(1/32 ํฌ๊ธฐ)์„ ๊ฐ€์ ธ์™€ 1/16 ํฌ๊ธฐ๋กœ ์—…์ƒ˜ํ”Œ๋งํ•ฉ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ, ํŠน์ง•์€ *์„ ํƒ์  ํŠน์ง• ์œตํ•ฉ(SFF, Selective Feature Fusion)* ๋ชจ๋“ˆ๋กœ ์ „๋‹ฌ๋ฉ๋‹ˆ๋‹ค. ์ด ๋ชจ๋“ˆ์€ ๊ฐ ํŠน์ง•์— ๋Œ€ํ•ด ์–ดํ…์…˜ ๋งต์—์„œ ๋กœ์ปฌ ๋ฐ ์ „์—ญ ํŠน์ง•์„ ์„ ํƒํ•˜๊ณ  ๊ฒฐํ•ฉํ•œ ๋‹ค์Œ, 1/8๋กœ ์—…์ƒ˜ํ”Œ๋งํ•ฉ๋‹ˆ๋‹ค. ์ด ํ”„๋กœ์„ธ์Šค๋Š” ๋””์ฝ”๋”ฉ๋œ ํŠน์„ฑ์ด ์›๋ณธ ์ด๋ฏธ์ง€์™€ ๋™์ผํ•œ ํฌ๊ธฐ๊ฐ€ ๋  ๋•Œ๊นŒ์ง€ ๋ฐ˜๋ณต๋ฉ๋‹ˆ๋‹ค. ์ถœ๋ ฅ์€ ๋‘ ๊ฐœ์˜ ํ•ฉ์„ฑ๊ณฑ ๋ ˆ์ด์–ด๋ฅผ ๊ฑฐ์นœ ๋‹ค์Œ, ์‹œ๊ทธ๋ชจ์ด๋“œ ํ™œ์„ฑํ™”๊ฐ€ ์ ์šฉ๋˜์–ด ๊ฐ ํ”ฝ์…€์˜ ๊นŠ์ด๋ฅผ ์˜ˆ์ธกํ•ฉ๋‹ˆ๋‹ค. ## ์ž์—ฐ์–ด์ฒ˜๋ฆฌ[[natural-language-processing]] Transformer๋Š” ์ดˆ๊ธฐ์— ๊ธฐ๊ณ„ ๋ฒˆ์—ญ์„ ์œ„ํ•ด ์„ค๊ณ„๋˜์—ˆ๊ณ , ๊ทธ ์ดํ›„๋กœ๋Š” ์‚ฌ์‹ค์ƒ ๋ชจ๋“  NLP ์ž‘์—…์„ ํ•ด๊ฒฐํ•˜๊ธฐ ์œ„ํ•œ ๊ธฐ๋ณธ ์•„ํ‚คํ…์ฒ˜๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์–ด๋–ค ์ž‘์—…์€ Transformer์˜ ์ธ์ฝ”๋” ๊ตฌ์กฐ์— ์ ํ•ฉํ•˜๋ฉฐ, ๋‹ค๋ฅธ ์ž‘์—…์€ ๋””์ฝ”๋”์— ๋” ์ ํ•ฉํ•ฉ๋‹ˆ๋‹ค. ๋˜ ๋‹ค๋ฅธ ์ž‘์—…์€ Transformer์˜ ์ธ์ฝ”๋”-๋””์ฝ”๋” ๊ตฌ์กฐ๋ฅผ ๋ชจ๋‘ ํ™œ์šฉํ•ฉ๋‹ˆ๋‹ค. ### ํ…์ŠคํŠธ ๋ถ„๋ฅ˜[[text-classification]] [BERT](model_doc/bert)๋Š” ์ธ์ฝ”๋” ์ „์šฉ ๋ชจ๋ธ์ด๋ฉฐ, ํ…์ŠคํŠธ์˜ ํ’๋ถ€ํ•œ ํ‘œํ˜„์„ ํ•™์Šตํ•˜๊ธฐ ์œ„ํ•ด ์–‘๋ฐฉํ–ฅ์˜ ๋‹จ์–ด์— ์ฃผ๋ชฉํ•จ์œผ๋กœ์จ ์‹ฌ์ธต ์–‘๋ฐฉํ–ฅ์„ฑ(deep bidirectionality)์„ ํšจ๊ณผ์ ์œผ๋กœ ๊ตฌํ˜„ํ•œ ์ตœ์ดˆ์˜ ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. 1. BERT๋Š” [WordPiece](tokenizer_summary#wordpiece) ํ† ํฐํ™”๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฌธ์žฅ์˜ ํ† ํฐ ์ž„๋ฒ ๋”ฉ์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. 
๋‹จ์ผ ๋ฌธ์žฅ๊ณผ ํ•œ ์Œ์˜ ๋ฌธ์žฅ์„ ๊ตฌ๋ถ„ํ•˜๊ธฐ ์œ„ํ•ด ํŠน์ˆ˜ํ•œ `[SEP]` ํ† ํฐ์ด ์ถ”๊ฐ€๋ฉ๋‹ˆ๋‹ค. ๋ชจ๋“  ํ…์ŠคํŠธ ์‹œํ€€์Šค์˜ ์‹œ์ž‘ ๋ถ€๋ถ„์—๋Š” ํŠน์ˆ˜ํ•œ `[CLS]` ํ† ํฐ์ด ์ถ”๊ฐ€๋ฉ๋‹ˆ๋‹ค. `[CLS]` ํ† ํฐ์ด ์žˆ๋Š” ์ตœ์ข… ์ถœ๋ ฅ์€ ๋ถ„๋ฅ˜ ์ž‘์—…์„ ์œ„ํ•œ ๋ถ„๋ฅ˜ ํ—ค๋“œ๋กœ ์ž…๋ ฅ์— ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. BERT๋Š” ๋˜ํ•œ ํ•œ ์Œ์˜ ๋ฌธ์žฅ์—์„œ ๊ฐ ํ† ํฐ์ด ์ฒซ ๋ฒˆ์งธ ๋ฌธ์žฅ์ธ์ง€ ๋‘ ๋ฒˆ์งธ ๋ฌธ์žฅ์— ์†ํ•˜๋Š”์ง€ ๋‚˜ํƒ€๋‚ด๋Š” ์„ธ๊ทธ๋จผํŠธ ์ž„๋ฒ ๋”ฉ(segment embedding)์„ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. 2. BERT๋Š” ๋งˆ์Šคํฌ๋“œ ์–ธ์–ด ๋ชจ๋ธ๋ง๊ณผ ๋‹ค์Œ ๋ฌธ์žฅ ์˜ˆ์ธก, ๋‘ ๊ฐ€์ง€ ๋ชฉ์ ์œผ๋กœ ์‚ฌ์ „ํ›ˆ๋ จ๋ฉ๋‹ˆ๋‹ค. ๋งˆ์Šคํฌ๋“œ ์–ธ์–ด ๋ชจ๋ธ๋ง์—์„œ๋Š” ์ž…๋ ฅ ํ† ํฐ์˜ ์ผ๋ถ€๊ฐ€ ๋ฌด์ž‘์œ„๋กœ ๋งˆ์Šคํ‚น๋˜๊ณ , ๋ชจ๋ธ์€ ์ด๋ฅผ ์˜ˆ์ธกํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ๋ชจ๋ธ์ด ๋ชจ๋“  ๋‹จ์–ด๋ฅผ ๋ณด๊ณ  ๋‹ค์Œ ๋‹จ์–ด๋ฅผ "์˜ˆ์ธก"ํ•  ์ˆ˜ ์žˆ๋Š” ์–‘๋ฐฉํ–ฅ์„ฑ ๋ฌธ์ œ๋ฅผ ํ•ด๊ฒฐํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ์ธก๋œ ๋งˆ์Šคํฌ ํ† ํฐ์˜ ์ตœ์ข… ์€๋‹‰ ์ƒํƒœ๋Š” ์–ดํœ˜์— ๋Œ€ํ•œ ์†Œํ”„ํŠธ๋งฅ์Šค๊ฐ€ ์žˆ๋Š” ์ˆœ๋ฐฉํ–ฅ ๋„คํŠธ์›Œํฌ๋กœ ์ „๋‹ฌ๋˜์–ด ๋งˆ์Šคํฌ๋œ ๋‹จ์–ด๋ฅผ ์˜ˆ์ธกํ•ฉ๋‹ˆ๋‹ค. ๋‘ ๋ฒˆ์งธ ์‚ฌ์ „ํ›ˆ๋ จ ๋Œ€์ƒ์€ ๋‹ค์Œ ๋ฌธ์žฅ ์˜ˆ์ธก์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ์€ ๋ฌธ์žฅ B๊ฐ€ ๋ฌธ์žฅ A ๋‹ค์Œ์— ์˜ค๋Š”์ง€ ์˜ˆ์ธกํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ฌธ์žฅ B๊ฐ€ ๋‹ค์Œ ๋ฌธ์žฅ์ธ ๊ฒฝ์šฐ์™€ ๋ฌด์ž‘์œ„ ๋ฌธ์žฅ์ธ ๊ฒฝ์šฐ ๊ฐ๊ฐ 50%์˜ ํ™•๋ฅ ๋กœ ๋ฐœ์ƒํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ ๋ฌธ์žฅ์ธ์ง€ ์•„๋‹Œ์ง€์— ๋Œ€ํ•œ ์˜ˆ์ธก์€ ๋‘ ๊ฐœ์˜ ํด๋ž˜์Šค(`IsNext` ๋ฐ `NotNext`)์— ๋Œ€ํ•œ ์†Œํ”„ํŠธ๋งฅ์Šค๊ฐ€ ์žˆ๋Š” ์ˆœ๋ฐฉํ–ฅ ๋„คํŠธ์›Œํฌ๋กœ ์ „๋‹ฌ๋ฉ๋‹ˆ๋‹ค. 3. ์ž…๋ ฅ ์ž„๋ฒ ๋”ฉ์€ ์—ฌ๋Ÿฌ ์ธ์ฝ”๋” ๋ ˆ์ด์–ด๋ฅผ ๊ฑฐ์ณ์„œ ์ตœ์ข… ์€๋‹‰ ์ƒํƒœ๋ฅผ ์ถœ๋ ฅํ•ฉ๋‹ˆ๋‹ค. ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ํ…์ŠคํŠธ ๋ถ„๋ฅ˜์— ์‚ฌ์šฉํ•˜๋ ค๋ฉด, ๊ธฐ๋ณธ BERT ๋ชจ๋ธ ์ƒ๋‹จ์— ์‹œํ€€์Šค ๋ถ„๋ฅ˜ ํ—ค๋“œ๋ฅผ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ์‹œํ€€์Šค ๋ถ„๋ฅ˜ ํ—ค๋“œ๋Š” ์ตœ์ข… ์€๋‹‰ ์ƒํƒœ๋ฅผ ๋ฐ›๋Š” ์„ ํ˜• ๋ ˆ์ด์–ด์ด๋ฉฐ, ๋กœ์ง“์œผ๋กœ ๋ณ€ํ™˜ํ•˜๊ธฐ ์œ„ํ•ด ์„ ํ˜• ๋ณ€ํ™˜์„ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค. ๊ต์ฐจ ์—”ํŠธ๋กœํ”ผ ์†์‹ค์€ ๋กœ์ง“๊ณผ ํƒ€๊ฒŸ ๊ฐ„์— ๊ณ„์‚ฐ๋˜์–ด ๊ฐ€์žฅ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์€ ๋ ˆ์ด๋ธ”์„ ์ฐพ์Šต๋‹ˆ๋‹ค. 
ํ…์ŠคํŠธ ๋ถ„๋ฅ˜์— ์ง์ ‘ ๋„์ „ํ•  ์ค€๋น„๊ฐ€ ๋˜์…จ๋‚˜์š”? ์™„์ „ํ•œ [ํ…์ŠคํŠธ ๋ถ„๋ฅ˜ ๊ฐ€์ด๋“œ](tasks/sequence_classification)๋ฅผ ํ™•์ธํ•˜์—ฌ DistilBERT๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ•™์Šตํ•˜์„ธ์š”! ### ํ† ํฐ ๋ถ„๋ฅ˜[[token-classification]] ๊ฐœ์ฒด๋ช… ์ธ์‹(Named Entity Recognition, NER)๊ณผ ๊ฐ™์€ ํ† ํฐ ๋ถ„๋ฅ˜ ์ž‘์—…์— BERT๋ฅผ ์‚ฌ์šฉํ•˜๋ ค๋ฉด, ๊ธฐ๋ณธ BERT ๋ชจ๋ธ ์ƒ๋‹จ์— ํ† ํฐ ๋ถ„๋ฅ˜ ํ—ค๋“œ๋ฅผ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ํ† ํฐ ๋ถ„๋ฅ˜ ํ—ค๋“œ๋Š” ์ตœ์ข… ์€๋‹‰ ์ƒํƒœ๋ฅผ ๋ฐ›๋Š” ์„ ํ˜• ๋ ˆ์ด์–ด์ด๋ฉฐ, ๋กœ์ง“์œผ๋กœ ๋ณ€ํ™˜ํ•˜๊ธฐ ์œ„ํ•ด ์„ ํ˜• ๋ณ€ํ™˜์„ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค. ๊ต์ฐจ ์—”ํŠธ๋กœํ”ผ ์†์‹ค์€ ๋กœ์ง“๊ณผ ๊ฐ ํ† ํฐ ๊ฐ„์— ๊ณ„์‚ฐ๋˜์–ด ๊ฐ€์žฅ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์€ ๋ ˆ์ด๋ธ”์„ ์ฐพ์Šต๋‹ˆ๋‹ค. ํ† ํฐ ๋ถ„๋ฅ˜์— ์ง์ ‘ ๋„์ „ํ•  ์ค€๋น„๊ฐ€ ๋˜์…จ๋‚˜์š”? ์™„์ „ํ•œ [ํ† ํฐ ๋ถ„๋ฅ˜ ๊ฐ€์ด๋“œ](tasks/token_classification)๋ฅผ ํ™•์ธํ•˜์—ฌ DistilBERT๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ•™์Šตํ•˜์„ธ์š”! ### ์งˆ์˜์‘๋‹ต[[question-answering]] ์งˆ์˜์‘๋‹ต์— BERT๋ฅผ ์‚ฌ์šฉํ•˜๋ ค๋ฉด, ๊ธฐ๋ณธ BERT ๋ชจ๋ธ ์œ„์— ์ŠคํŒฌ(span) ๋ถ„๋ฅ˜ ํ—ค๋“œ๋ฅผ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ์ด ์„ ํ˜• ๋ ˆ์ด์–ด๋Š” ์ตœ์ข… ์€๋‹‰ ์ƒํƒœ๋ฅผ ๋ฐ›๊ณ , ๋‹ต๋ณ€์— ๋Œ€์‘ํ•˜๋Š” `์ŠคํŒฌ`์˜ ์‹œ์ž‘๊ณผ ๋ ๋กœ๊ทธ๋ฅผ ๊ณ„์‚ฐํ•˜๊ธฐ ์œ„ํ•ด ์„ ํ˜• ๋ณ€ํ™˜์„ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค. ๊ต์ฐจ ์—”ํŠธ๋กœํ”ผ ์†์‹ค์€ ๋กœ์ง“๊ณผ ๊ฐ ๋ ˆ์ด๋ธ” ์œ„์น˜ ๊ฐ„์— ๊ณ„์‚ฐ๋˜์–ด ๋‹ต๋ณ€์— ๋Œ€์‘ํ•˜๋Š” ๊ฐ€์žฅ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์€ ํ…์ŠคํŠธ์˜ ์ŠคํŒฌ์„ ์ฐพ์Šต๋‹ˆ๋‹ค. ์งˆ์˜์‘๋‹ต์— ์ง์ ‘ ๋„์ „ํ•  ์ค€๋น„๊ฐ€ ๋˜์…จ๋‚˜์š”? ์™„์ „ํ•œ [์งˆ์˜์‘๋‹ต ๊ฐ€์ด๋“œ](tasks/question_answering)๋ฅผ ํ™•์ธํ•˜์—ฌ DistilBERT๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ•™์Šตํ•˜์„ธ์š”! <Tip> ๐Ÿ’ก ์‚ฌ์ „ํ›ˆ๋ จ๋œ BERT๋ฅผ ๋‹ค์–‘ํ•œ ์ž‘์—…์— ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ์–ผ๋งˆ๋‚˜ ์‰ฌ์šด์ง€ ์ฃผ๋ชฉํ•˜์„ธ์š”. ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์— ํŠน์ • ํ—ค๋“œ๋ฅผ ์ถ”๊ฐ€ํ•˜๊ธฐ๋งŒ ํ•˜๋ฉด ์€๋‹‰ ์ƒํƒœ๋ฅผ ์›ํ•˜๋Š” ์ถœ๋ ฅ์œผ๋กœ ์กฐ์ž‘ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! 
</Tip> ### ํ…์ŠคํŠธ ์ƒ์„ฑ[[text-generation]] [GPT-2](model_doc/gpt2)๋Š” ๋Œ€๋Ÿ‰์˜ ํ…์ŠคํŠธ์— ๋Œ€ํ•ด ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋””์ฝ”๋”ฉ ์ „์šฉ ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. ํ”„๋กฌํ”„ํŠธ๋ฅผ ์ฃผ์–ด์ง€๋ฉด ์„ค๋“๋ ฅ ์žˆ๋Š” (ํ•ญ์ƒ ์‚ฌ์‹ค์€ ์•„๋‹ˆ์ง€๋งŒ!) ํ…์ŠคํŠธ๋ฅผ ์ƒ์„ฑํ•˜๊ณ  ๋ช…์‹œ์ ์œผ๋กœ ํ›ˆ๋ จ๋˜์ง€ ์•Š์•˜์Œ์—๋„ ๋ถˆ๊ตฌํ•˜๊ณ  ์งˆ์˜์‘๋‹ต๊ณผ ๊ฐ™์€ ๋‹ค๋ฅธ NLP ์ž‘์—…์„ ์™„์ˆ˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gpt2_architecture.png"/> </div> 1. GPT-2๋Š” ๋‹จ์–ด๋ฅผ ํ† ํฐํ™”ํ•˜๊ณ  ํ† ํฐ ์ž„๋ฒ ๋”ฉ์„ ์ƒ์„ฑํ•˜๊ธฐ ์œ„ํ•ด [๋ฐ”์ดํŠธ ํŽ˜์–ด ์ธ์ฝ”๋”ฉ(BPE, byte pair encoding)](tokenizer_summary#bytepair-encoding-bpe)์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์œ„์น˜ ์ธ์ฝ”๋”ฉ์€ ์‹œํ€€์Šค์—์„œ ๊ฐ ํ† ํฐ์˜ ์œ„์น˜๋ฅผ ๋‚˜ํƒ€๋‚ด๊ธฐ ์œ„ํ•ด ํ† ํฐ ์ž„๋ฒ ๋”ฉ์— ์ถ”๊ฐ€๋ฉ๋‹ˆ๋‹ค. ์ž…๋ ฅ ์ž„๋ฒ ๋”ฉ์€ ์—ฌ๋Ÿฌ ๋””์ฝ”๋” ๋ธ”๋ก์„ ๊ฑฐ์ณ ์ผ๋ถ€ ์ตœ์ข… ์€๋‹‰ ์ƒํƒœ๋ฅผ ์ถœ๋ ฅํ•ฉ๋‹ˆ๋‹ค. ๊ฐ ๋””์ฝ”๋” ๋ธ”๋ก ๋‚ด์—์„œ GPT-2๋Š” *๋งˆ์Šคํฌ๋“œ ์…€ํ”„ ์–ดํ…์…˜(masked self-attention)* ๋ ˆ์ด์–ด๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” GPT-2๊ฐ€ ์ดํ›„ ํ† ํฐ(future tokens)์— ์ฃผ์˜๋ฅผ ๊ธฐ์šธ์ผ ์ˆ˜ ์—†๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. ์™ผ์ชฝ์— ์žˆ๋Š” ํ† ํฐ์—๋งŒ ์ฃผ์˜๋ฅผ ๊ธฐ์šธ์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋งˆ์Šคํฌ๋“œ ์…€ํ”„ ์–ดํ…์…˜์—์„œ๋Š” ์–ดํ…์…˜ ๋งˆ์Šคํฌ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ดํ›„ ํ† ํฐ์— ๋Œ€ํ•œ ์ ์ˆ˜(score)๋ฅผ `0`์œผ๋กœ ์„ค์ •ํ•˜๊ธฐ ๋•Œ๋ฌธ์— BERT์˜ [`mask`] ํ† ํฐ๊ณผ ๋‹ค๋ฆ…๋‹ˆ๋‹ค. 2. ๋””์ฝ”๋”์˜ ์ถœ๋ ฅ์€ ์–ธ์–ด ๋ชจ๋ธ๋ง ํ—ค๋“œ์— ์ „๋‹ฌ๋˜๋ฉฐ, ์–ธ์–ด ๋ชจ๋ธ๋ง ํ—ค๋“œ๋Š” ์€๋‹‰ ์ƒํƒœ๋ฅผ ๋กœ์ง“์œผ๋กœ ์„ ํ˜• ๋ณ€ํ™˜์„ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค. ๋ ˆ์ด๋ธ”์€ ์‹œํ€€์Šค์˜ ๋‹ค์Œ ํ† ํฐ์œผ๋กœ, ๋กœ์ง“์„ ์˜ค๋ฅธ์ชฝ์œผ๋กœ ํ•˜๋‚˜์”ฉ ์ด๋™ํ•˜์—ฌ ์ƒ์„ฑ๋ฉ๋‹ˆ๋‹ค. ๊ต์ฐจ ์—”ํŠธ๋กœํ”ผ ์†์‹ค์€ ์ด๋™๋œ ๋กœ์ง“๊ณผ ๋ ˆ์ด๋ธ” ๊ฐ„์— ๊ณ„์‚ฐ๋˜์–ด ๊ฐ€์žฅ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์€ ๋‹ค์Œ ํ† ํฐ์„ ์ถœ๋ ฅํ•ฉ๋‹ˆ๋‹ค. 
GPT-2์˜ ์‚ฌ์ „ํ›ˆ๋ จ ๋ชฉ์ ์€ ์ „์ ์œผ๋กœ [์ธ๊ณผ์  ์–ธ์–ด ๋ชจ๋ธ๋ง](glossary#causal-language-modeling)์— ๊ธฐ๋ฐ˜ํ•˜์—ฌ, ์‹œํ€€์Šค์—์„œ ๋‹ค์Œ ๋‹จ์–ด๋ฅผ ์˜ˆ์ธกํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด๋Š” GPT-2๊ฐ€ ํ…์ŠคํŠธ ์ƒ์„ฑ์— ๊ด€๋ จ๋œ ์ž‘์—…์— ํŠนํžˆ ์šฐ์ˆ˜ํ•˜๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. ํ…์ŠคํŠธ ์ƒ์„ฑ์— ์ง์ ‘ ๋„์ „ํ•  ์ค€๋น„๊ฐ€ ๋˜์…จ๋‚˜์š”? ์™„์ „ํ•œ [์ธ๊ณผ์  ์–ธ์–ด ๋ชจ๋ธ๋ง ๊ฐ€์ด๋“œ](tasks/language_modeling#causal-language-modeling)๋ฅผ ํ™•์ธํ•˜์—ฌ DistilGPT-2๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ•™์Šตํ•˜์„ธ์š”! <Tip> ํ…์ŠคํŠธ ์ƒ์„ฑ์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [ํ…์ŠคํŠธ ์ƒ์„ฑ ์ „๋žต](generation_strategies) ๊ฐ€์ด๋“œ๋ฅผ ํ™•์ธํ•˜์„ธ์š”! </Tip> ### ์š”์•ฝ[[summarization]] [BART](model_doc/bart) ๋ฐ [T5](model_doc/t5)์™€ ๊ฐ™์€ ์ธ์ฝ”๋”-๋””์ฝ”๋” ๋ชจ๋ธ์€ ์š”์•ฝ ์ž‘์—…์˜ ์‹œํ€€์Šค-ํˆฌ-์‹œํ€€์Šค ํŒจํ„ด์„ ์œ„ํ•ด ์„ค๊ณ„๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ด ์„น์…˜์—์„œ BART์˜ ์ž‘๋™ ๋ฐฉ๋ฒ•์„ ์„ค๋ช…ํ•œ ๋‹ค์Œ, ๋งˆ์ง€๋ง‰์— T5๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•ด๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bart_architecture.png"/> </div> 1. BART์˜ ์ธ์ฝ”๋” ์•„ํ‚คํ…์ฒ˜๋Š” BERT์™€ ๋งค์šฐ ์œ ์‚ฌํ•˜๋ฉฐ ํ…์ŠคํŠธ์˜ ํ† ํฐ ๋ฐ ์œ„์น˜ ์ž„๋ฒ ๋”ฉ์„ ๋ฐ›์Šต๋‹ˆ๋‹ค. BART๋Š” ์ž…๋ ฅ์„ ๋ณ€ํ˜•์‹œํ‚ค๊ณ  ๋””์ฝ”๋”๋กœ ์žฌ๊ตฌ์„ฑํ•˜์—ฌ ์‚ฌ์ „ํ›ˆ๋ จ๋ฉ๋‹ˆ๋‹ค. ํŠน์ • ๋ณ€ํ˜• ๊ธฐ๋ฒ•์ด ์žˆ๋Š” ๋‹ค๋ฅธ ์ธ์ฝ”๋”์™€๋Š” ๋‹ฌ๋ฆฌ, BART๋Š” ๋ชจ๋“  ์œ ํ˜•์˜ ๋ณ€ํ˜•์„ ์ ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ *text infilling* ๋ณ€ํ˜• ๊ธฐ๋ฒ•์ด ๊ฐ€์žฅ ์ž˜ ์ž‘๋™ํ•ฉ๋‹ˆ๋‹ค. Text Infiling์—์„œ๋Š” ์—ฌ๋Ÿฌ ํ…์ŠคํŠธ ์ŠคํŒฌ์„ **๋‹จ์ผ** [`mask`] ํ† ํฐ์œผ๋กœ ๋Œ€์ฒดํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ๋ชจ๋ธ์ด ๋งˆ์Šคํฌ๋œ ํ† ํฐ์„ ์˜ˆ์ธกํ•ด์•ผ ํ•˜๊ณ , ๋ชจ๋ธ์— ๋ˆ„๋ฝ๋œ ํ† ํฐ์˜ ์ˆ˜๋ฅผ ์˜ˆ์ธกํ•˜๋„๋ก ๊ฐ€๋ฅด์น˜๊ธฐ ๋•Œ๋ฌธ์— ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. 
์ž…๋ ฅ ์ž„๋ฒ ๋”ฉ๊ณผ ๋งˆ์Šคํฌ๋œ ์ŠคํŒฌ์ด ์ธ์ฝ”๋”๋ฅผ ๊ฑฐ์ณ ์ตœ์ข… ์€๋‹‰ ์ƒํƒœ๋ฅผ ์ถœ๋ ฅํ•˜์ง€๋งŒ, BERT์™€ ๋‹ฌ๋ฆฌ BART๋Š” ๋งˆ์ง€๋ง‰์— ๋‹จ์–ด๋ฅผ ์˜ˆ์ธกํ•˜๋Š” ์ˆœ๋ฐฉํ–ฅ ๋„คํŠธ์›Œํฌ๋ฅผ ์ถ”๊ฐ€ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. 2. ์ธ์ฝ”๋”์˜ ์ถœ๋ ฅ์€ ๋””์ฝ”๋”๋กœ ์ „๋‹ฌ๋˜๋ฉฐ, ๋””์ฝ”๋”๋Š” ์ธ์ฝ”๋”์˜ ์ถœ๋ ฅ์—์„œ ๋งˆ์Šคํฌ ํ† ํฐ๊ณผ ๋ณ€ํ˜•๋˜์ง€ ์•Š์€ ํ† ํฐ์„ ์˜ˆ์ธกํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ๋””์ฝ”๋”๊ฐ€ ์›๋ณธ ํ…์ŠคํŠธ๋ฅผ ๋ณต์›ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋˜๋Š” ์ถ”๊ฐ€์ ์ธ ๋ฌธ๋งฅ์„ ์–ป๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. ๋””์ฝ”๋”์˜ ์ถœ๋ ฅ์€ ์–ธ์–ด ๋ชจ๋ธ๋ง ํ—ค๋“œ์— ์ „๋‹ฌ๋˜๋ฉฐ, ์–ธ์–ด ๋ชจ๋ธ๋ง ํ—ค๋“œ๋Š” ์€๋‹‰ ์ƒํƒœ๋ฅผ ๋กœ์ง“์œผ๋กœ ์„ ํ˜• ๋ณ€ํ™˜์„ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค. ๊ต์ฐจ ์—”ํŠธ๋กœํ”ผ ์†์‹ค์€ ๋กœ์ง“๊ณผ ํ† ํฐ์ด ์˜ค๋ฅธ์ชฝ์œผ๋กœ ์ด๋™๋œ ๋ ˆ์ด๋ธ” ๊ฐ„์— ๊ณ„์‚ฐ๋ฉ๋‹ˆ๋‹ค. ์š”์•ฝ์— ์ง์ ‘ ๋„์ „ํ•  ์ค€๋น„๊ฐ€ ๋˜์…จ๋‚˜์š”? ์™„์ „ํ•œ [์š”์•ฝ ๊ฐ€์ด๋“œ](tasks/summarization)๋ฅผ ํ™•์ธํ•˜์—ฌ T5๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ•™์Šตํ•˜์„ธ์š”! <Tip> ํ…์ŠคํŠธ ์ƒ์„ฑ์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [ํ…์ŠคํŠธ ์ƒ์„ฑ ์ „๋žต](generation_strategies) ๊ฐ€์ด๋“œ๋ฅผ ํ™•์ธํ•˜์„ธ์š”! </Tip> ### ๋ฒˆ์—ญ[[translation]] ๋ฒˆ์—ญ์€ ์‹œํ€€์Šค-ํˆฌ-์‹œํ€€์Šค ์ž‘์—…์˜ ๋˜ ๋‹ค๋ฅธ ์˜ˆ๋กœ, [BART](model_doc/bart) ๋˜๋Š” [T5](model_doc/t5)์™€ ๊ฐ™์€ ์ธ์ฝ”๋”-๋””์ฝ”๋” ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ์„น์…˜์—์„œ BART์˜ ์ž‘๋™ ๋ฐฉ๋ฒ•์„ ์„ค๋ช…ํ•œ ๋‹ค์Œ, ๋งˆ์ง€๋ง‰์— T5๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•ด๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. BART๋Š” ์›์ฒœ ์–ธ์–ด๋ฅผ ํƒ€๊ฒŸ ์–ธ์–ด๋กœ ๋””์ฝ”๋”ฉํ•  ์ˆ˜ ์žˆ๋Š” ์ž…๋ ฅ์— ๋งคํ•‘ํ•˜๊ธฐ ์œ„ํ•ด ๋ฌด์ž‘์œ„๋กœ ์ดˆ๊ธฐํ™”๋œ ๋ณ„๋„์˜ ์ธ์ฝ”๋”๋ฅผ ์ถ”๊ฐ€ํ•˜์—ฌ ๋ฒˆ์—ญ์— ์ ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด ์ƒˆ๋กœ์šด ์ธ์ฝ”๋”์˜ ์ž„๋ฒ ๋”ฉ์€ ์›๋ณธ ๋‹จ์–ด ์ž„๋ฒ ๋”ฉ ๋Œ€์‹  ์‚ฌ์ „ํ›ˆ๋ จ๋œ ์ธ์ฝ”๋”๋กœ ์ „๋‹ฌ๋ฉ๋‹ˆ๋‹ค. ์›์ฒœ ์ธ์ฝ”๋”๋Š” ๋ชจ๋ธ ์ถœ๋ ฅ์˜ ๊ต์ฐจ ์—”ํŠธ๋กœํ”ผ ์†์‹ค๋กœ๋ถ€ํ„ฐ ์›์ฒœ ์ธ์ฝ”๋”, ์œ„์น˜ ์ž„๋ฒ ๋”ฉ, ์ž…๋ ฅ ์ž„๋ฒ ๋”ฉ์„ ๊ฐฑ์‹ ํ•˜์—ฌ ํ›ˆ๋ จ๋ฉ๋‹ˆ๋‹ค. 
์ฒซ ๋ฒˆ์งธ ๋‹จ๊ณ„์—์„œ๋Š” ๋ชจ๋ธ ํŒŒ๋ผ๋ฏธํ„ฐ๊ฐ€ ๊ณ ์ •๋˜๊ณ , ๋‘ ๋ฒˆ์งธ ๋‹จ๊ณ„์—์„œ๋Š” ๋ชจ๋“  ๋ชจ๋ธ ํŒŒ๋ผ๋ฏธํ„ฐ๊ฐ€ ํ•จ๊ป˜ ํ›ˆ๋ จ๋ฉ๋‹ˆ๋‹ค. BART๋Š” ์ดํ›„ ๋ฒˆ์—ญ์„ ์œ„ํ•ด ๋‹ค์–‘ํ•œ ์–ธ์–ด๋กœ ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋‹ค๊ตญ์–ด ๋ฒ„์ „์˜ mBART๋กœ ํ™•์žฅ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ๋ฒˆ์—ญ์— ์ง์ ‘ ๋„์ „ํ•  ์ค€๋น„๊ฐ€ ๋˜์…จ๋‚˜์š”? ์™„์ „ํ•œ [๋ฒˆ์—ญ ๊ฐ€์ด๋“œ](tasks/summarization)๋ฅผ ํ™•์ธํ•˜์—ฌ T5๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ•™์Šตํ•˜์„ธ์š”! <Tip> ํ…์ŠคํŠธ ์ƒ์„ฑ์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [ํ…์ŠคํŠธ ์ƒ์„ฑ ์ „๋žต](generation_strategies) ๊ฐ€์ด๋“œ๋ฅผ ํ™•์ธํ•˜์„ธ์š”! </Tip>
<!--- Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Pull Request에 대한 검사 [[checks-on-a-pull-request]] 🤗 Transformers에서 Pull Request를 열 때, 기존에 있는 것을 망가뜨리지 않는지 확인하기 위해 상당한 수의 검사가 실행됩니다. 이러한 검사는 다음과 같은 네 가지 유형으로 구성됩니다: - 일반적인 테스트 - 문서 빌드 - 코드 및 문서 스타일 - 일반 저장소 일관성 이 문서에서는 이러한 다양한 검사와 그 이유를 설명하고, PR에서 하나 이상의 검사가 실패한 경우 로컬에서 어떻게 디버그하는지 알아보겠습니다. 참고로, 이러한 검사를 사용하려면 개발 설치가 필요합니다: ```bash pip install transformers[dev] ``` 또는 Transformers 저장소 내에 편집 가능한 설치가 필요합니다: ```bash pip install -e .[dev] ``` Transformers의 선택적 종속성 수가 많이 늘어났기 때문에 개발 설치가 실패할 수도 있습니다. 개발 설치가 실패하는 경우, 작업 중인 Deep Learning 프레임워크 (PyTorch, TensorFlow 및/또는 Flax)를 설치하고 다음 명령을 실행하세요. ```bash pip install transformers[quality] ``` 편집 가능한 설치의 경우는 다음 명령을 실행하세요. 
```bash pip install -e .[quality] ``` ## ํ…Œ์ŠคํŠธ [[tests]] `ci/circleci: run_tests_`๋กœ ์‹œ์ž‘ํ•˜๋Š” ๋ชจ๋“  ์ž‘์—…์€ Transformers ํ…Œ์ŠคํŠธ ๋ชจ์Œ์˜ ์ผ๋ถ€๋ฅผ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ์ž‘์—…์€ ํŠน์ • ํ™˜๊ฒฝ์—์„œ ์ผ๋ถ€ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์— ์ค‘์ ์„ ๋‘ก๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด `ci/circleci: run_tests_pipelines_tf`๋Š” TensorFlow๋งŒ ์„ค์น˜๋œ ํ™˜๊ฒฝ์—์„œ ํŒŒ์ดํ”„๋ผ์ธ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. ํ…Œ์ŠคํŠธ ๋ชจ๋“ˆ์—์„œ ์‹ค์ œ๋กœ ๋ณ€๊ฒฝ ์‚ฌํ•ญ์ด ์—†์„ ๋•Œ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜์ง€ ์•Š๊ธฐ ์œ„ํ•ด, ํ…Œ์ŠคํŠธ ๋ชจ์Œ์˜ ์ผ๋ถ€๋งŒ ์‹คํ–‰๋ฉ๋‹ˆ๋‹ค. ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ๋ณ€๊ฒฝ ์ „ํ›„์— ๋Œ€ํ•œ ์ฐจ์ด๋ฅผ ํ™•์ธํ•˜๊ธฐ ์œ„ํ•ด ์œ ํ‹ธ๋ฆฌํ‹ฐ๊ฐ€ ์‹คํ–‰๋˜๊ณ , ํ•ด๋‹น ์ฐจ์ด์— ์˜ํ–ฅ์„ ๋ฐ›๋Š” ํ…Œ์ŠคํŠธ๊ฐ€ ์„ ํƒ๋ฉ๋‹ˆ๋‹ค. ์ด ์œ ํ‹ธ๋ฆฌํ‹ฐ๋Š” ๋กœ์ปฌ์—์„œ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash python utils/tests_fetcher.py ``` Transformers ์ €์žฅ์†Œ์˜ ์ตœ์ƒ๋‹จ์—์„œ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. ์ด ์œ ํ‹ธ๋ฆฌํ‹ฐ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์€ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค: 1. ๋ณ€๊ฒฝ ์‚ฌํ•ญ์ด ์žˆ๋Š” ํŒŒ์ผ๋งˆ๋‹ค ๋ณ€๊ฒฝ ์‚ฌํ•ญ์ด ์ฝ”๋“œ์ธ์ง€ ์ฃผ์„ ๋˜๋Š” ๋ฌธ์„œ ๋ฌธ์ž์—ด์ธ์ง€ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค. ์‹ค์ œ ์ฝ”๋“œ ๋ณ€๊ฒฝ์ด ์žˆ๋Š” ํŒŒ์ผ๋งŒ ์œ ์ง€๋ฉ๋‹ˆ๋‹ค. 2. ์†Œ์Šค ์ฝ”๋“œ ํŒŒ์ผ์˜ ๊ฐ ํŒŒ์ผ์— ๋Œ€ํ•ด ์žฌ๊ท€์ ์œผ๋กœ ์˜ํ–ฅ์„ ์ฃผ๋Š” ๋ชจ๋“  ํŒŒ์ผ์„ ์ œ๊ณตํ•˜๋Š” ๋‚ด๋ถ€ ๋งต์„ ์ž‘์„ฑํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋“ˆ B๊ฐ€ ๋ชจ๋“ˆ A๋ฅผ ๊ฐ€์ ธ์˜ค๋ฉด ๋ชจ๋“ˆ A๋Š” ๋ชจ๋“ˆ B์— ์˜ํ–ฅ์„ ์ค๋‹ˆ๋‹ค. ์žฌ๊ท€์ ์ธ ์˜ํ–ฅ์—๋Š” ๊ฐ ๋ชจ๋“ˆ์ด ์ด์ „ ๋ชจ๋“ˆ์„ ๊ฐ€์ ธ์˜ค๋Š” ๋ชจ๋“ˆ ์ฒด์ธ์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. 3. ๋‹จ๊ณ„ 1์—์„œ ์ˆ˜์ง‘ํ•œ ํŒŒ์ผ์— ์ด ๋งต์„ ์ ์šฉํ•˜์—ฌ PR์— ์˜ํ–ฅ์„ ๋ฐ›๋Š” ๋ชจ๋ธ ํŒŒ์ผ ๋ชฉ๋ก์„ ์–ป์Šต๋‹ˆ๋‹ค. 4. ๊ฐ ํŒŒ์ผ์„ ํ•ด๋‹นํ•˜๋Š” ํ…Œ์ŠคํŠธ ํŒŒ์ผ์— ๋งคํ•‘ํ•˜๊ณ  ์‹คํ–‰ํ•  ํ…Œ์ŠคํŠธ ๋ชฉ๋ก์„ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. ๋กœ์ปฌ์—์„œ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋ฉด ๋‹จ๊ณ„ 1, 3 ๋ฐ 4์˜ ๊ฒฐ๊ณผ๋ฅผ ์ถœ๋ ฅํ•˜์—ฌ ์‹คํ–‰๋˜๋Š” ํ…Œ์ŠคํŠธ๋ฅผ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
์Šคํฌ๋ฆฝํŠธ๋Š” ๋˜ํ•œ `test_list.txt`๋ผ๋Š” ํŒŒ์ผ์„ ์ƒ์„ฑํ•˜์—ฌ ์‹คํ–‰ํ•  ํ…Œ์ŠคํŠธ ๋ชฉ๋ก์„ ํฌํ•จํ•˜๋ฉฐ, ๋‹ค์Œ ๋ช…๋ น์œผ๋กœ ํ•ด๋‹น ํ…Œ์ŠคํŠธ๋ฅผ ๋กœ์ปฌ์—์„œ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash python -m pytest -n 8 --dist=loadfile -rA -s $(cat test_list.txt) ``` ์ž˜๋ชป๋œ ์‚ฌํ•ญ์ด ๋ˆ„๋ฝ๋˜์—ˆ์„ ๊ฒฝ์šฐ, ์ „์ฒด ํ…Œ์ŠคํŠธ ๋ชจ์Œ๋„ ๋งค์ผ ์‹คํ–‰๋ฉ๋‹ˆ๋‹ค. ## ๋ฌธ์„œ ๋นŒ๋“œ [[documentation-build]] `build_pr_documentation` ์ž‘์—…์€ ๋ฌธ์„œ๋ฅผ ๋นŒ๋“œํ•˜๊ณ  ๋ฏธ๋ฆฌ ๋ณด๊ธฐ๋ฅผ ์ƒ์„ฑํ•˜์—ฌ PR์ด ๋ณ‘ํ•ฉ๋œ ํ›„ ๋ชจ๋“  ๊ฒƒ์ด ์ œ๋Œ€๋กœ ๋ณด์ด๋Š”์ง€ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค. ๋กœ๋ด‡์€ PR์— ๋ฌธ์„œ ๋ฏธ๋ฆฌ๋ณด๊ธฐ ๋งํฌ๋ฅผ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. PR์—์„œ ๋งŒ๋“  ๋ณ€๊ฒฝ ์‚ฌํ•ญ์€ ์ž๋™์œผ๋กœ ๋ฏธ๋ฆฌ๋ณด๊ธฐ์— ์—…๋ฐ์ดํŠธ๋ฉ๋‹ˆ๋‹ค. ๋ฌธ์„œ ๋นŒ๋“œ์— ์‹คํŒจํ•œ ๊ฒฝ์šฐ **์„ธ๋ถ€ ์ •๋ณด**๋ฅผ ํด๋ฆญํ•˜์—ฌ ์–ด๋””์—์„œ ๋ฌธ์ œ๊ฐ€ ๋ฐœ์ƒํ–ˆ๋Š”์ง€ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ค๋ฅ˜๋Š” ์ฃผ๋กœ `toctree`์— ๋ˆ„๋ฝ๋œ ํŒŒ์ผ๊ณผ ๊ฐ™์ด ๊ฐ„๋‹จํ•œ ์˜ค๋ฅ˜์ž…๋‹ˆ๋‹ค. ๋กœ์ปฌ์—์„œ ๋ฌธ์„œ๋ฅผ ๋นŒ๋“œํ•˜๊ฑฐ๋‚˜ ๋ฏธ๋ฆฌ ๋ณผ ๊ฒฝ์šฐ, docs ํด๋”์˜ [`README.md`](https://github.com/huggingface/transformers/tree/main/docs)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ## ์ฝ”๋“œ ๋ฐ ๋ฌธ์„œ ์Šคํƒ€์ผ [[code-and-documentation-style]] `black`๊ณผ `ruff`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋“  ์†Œ์Šค ํŒŒ์ผ, ์˜ˆ์ œ ๋ฐ ํ…Œ์ŠคํŠธ์— ์ฝ”๋“œ ํ˜•์‹์„ ์ ์šฉํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ, `utils/style_doc.py`์—์„œ ๋ฌธ์„œ ๋ฌธ์ž์—ด๊ณผ `rst` ํŒŒ์ผ์˜ ํ˜•์‹, ๊ทธ๋ฆฌ๊ณ  Transformers์˜ `__init__.py` ํŒŒ์ผ์—์„œ ์‹คํ–‰๋˜๋Š” ์ง€์—ฐ๋œ ์ž„ํฌํŠธ์˜ ์ˆœ์„œ์— ๋Œ€ํ•œ ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๋ชจ๋“  ๊ฒƒ์€ ๋‹ค์Œ์„ ์‹คํ–‰ํ•จ์œผ๋กœ์จ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash make style ``` CI๋Š” ์ด๋Ÿฌํ•œ ์‚ฌํ•ญ์ด `ci/circleci: check_code_quality` ๊ฒ€์‚ฌ ๋‚ด์—์„œ ์ ์šฉ๋˜์—ˆ๋Š”์ง€ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ `ruff`๋„ ์‹คํ–‰๋˜๋ฉฐ, ์ •์˜๋˜์ง€ ์•Š์€ ๋ณ€์ˆ˜๋‚˜ ์‚ฌ์šฉ๋˜์ง€ ์•Š์€ ๋ณ€์ˆ˜๋ฅผ ๋ฐœ๊ฒฌํ•˜๋ฉด ๊ฒฝ๊ณ ํ•ฉ๋‹ˆ๋‹ค. 
์ด ๊ฒ€์‚ฌ๋ฅผ ๋กœ์ปฌ์—์„œ ์‹คํ–‰ํ•˜๋ ค๋ฉด ๋‹ค์Œ์„ ์‚ฌ์šฉํ•˜์„ธ์š”: ```bash make quality ``` ์ด ์ž‘์—…์€ ๋งŽ์€ ์‹œ๊ฐ„์ด ์†Œ์š”๋  ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ ํ˜„์žฌ ๋ธŒ๋žœ์น˜์—์„œ ์ˆ˜์ •ํ•œ ํŒŒ์ผ์— ๋Œ€ํ•ด์„œ๋งŒ ๋™์ผํ•œ ์ž‘์—…์„ ์‹คํ–‰ํ•˜๋ ค๋ฉด ๋‹ค์Œ์„ ์‹คํ–‰ํ•˜์„ธ์š”. ```bash make fixup ``` ์ด ๋ช…๋ น์€ ํ˜„์žฌ ๋ธŒ๋žœ์น˜์—์„œ ์ˆ˜์ •ํ•œ ํŒŒ์ผ์— ๋Œ€ํ•œ ๋ชจ๋“  ์ถ”๊ฐ€์ ์ธ ๊ฒ€์‚ฌ๋„ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. ์ด์ œ ์ด๋“ค์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ## ์ €์žฅ์†Œ ์ผ๊ด€์„ฑ [[repository-consistency]] ์ด๋Š” PR์ด ์ €์žฅ์†Œ๋ฅผ ์ •์ƒ์ ์ธ ์ƒํƒœ๋กœ ์œ ์ง€ํ•˜๋Š”์ง€ ํ™•์ธํ•˜๋Š” ๋ชจ๋“  ํ…Œ์ŠคํŠธ๋ฅผ ๋ชจ์€ ๊ฒƒ์ด๋ฉฐ, `ci/circleci: check_repository_consistency` ๊ฒ€์‚ฌ์—์„œ ์ˆ˜ํ–‰๋ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ์„ ์‹คํ–‰ํ•จ์œผ๋กœ์จ ๋กœ์ปฌ์—์„œ ์ด ๊ฒ€์‚ฌ๋ฅผ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```bash make repo-consistency ``` ์ด ๊ฒ€์‚ฌ๋Š” ๋‹ค์Œ์„ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค. - init์— ์ถ”๊ฐ€๋œ ๋ชจ๋“  ๊ฐ์ฒด๊ฐ€ ๋ฌธ์„œํ™”๋˜์—ˆ๋Š”์ง€ (`utils/check_repo.py`์—์„œ ์ˆ˜ํ–‰) - `__init__.py` ํŒŒ์ผ์˜ ๋‘ ์„น์…˜์— ๋™์ผํ•œ ๋‚ด์šฉ์ด ์žˆ๋Š”์ง€ (`utils/check_inits.py`์—์„œ ์ˆ˜ํ–‰) - ๋‹ค๋ฅธ ๋ชจ๋“ˆ์—์„œ ๋ณต์‚ฌ๋œ ์ฝ”๋“œ๊ฐ€ ์›๋ณธ๊ณผ ์ผ์น˜ํ•˜๋Š”์ง€ (`utils/check_copies.py`์—์„œ ์ˆ˜ํ–‰) - ๋ชจ๋“  ๊ตฌ์„ฑ ํด๋ž˜์Šค์— docstring์— ์–ธ๊ธ‰๋œ ์œ ํšจํ•œ ์ฒดํฌํฌ์ธํŠธ๊ฐ€ ์ ์–ด๋„ ํ•˜๋‚˜ ์žˆ๋Š”์ง€ (`utils/check_config_docstrings.py`์—์„œ ์ˆ˜ํ–‰) - ๋ชจ๋“  ๊ตฌ์„ฑ ํด๋ž˜์Šค๊ฐ€ ํ•ด๋‹นํ•˜๋Š” ๋ชจ๋ธ๋ง ํŒŒ์ผ์—์„œ ์‚ฌ์šฉ๋˜๋Š” ์†์„ฑ๋งŒ ํฌํ•จํ•˜๊ณ  ์žˆ๋Š”์ง€ (`utils/check_config_attributes.py`์—์„œ ์ˆ˜ํ–‰) - README์™€ ๋ฌธ์„œ ์ธ๋ฑ์Šค์˜ ๋ฒˆ์—ญ์ด ๋ฉ”์ธ README์™€ ๋™์ผํ•œ ๋ชจ๋ธ ๋ชฉ๋ก์„ ๊ฐ€์ง€๊ณ  ์žˆ๋Š”์ง€ (`utils/check_copies.py`์—์„œ ์ˆ˜ํ–‰) - ๋ฌธ์„œ์˜ ์ž๋™ ์ƒ์„ฑ๋œ ํ…Œ์ด๋ธ”์ด ์ตœ์‹  ์ƒํƒœ์ธ์ง€ (`utils/check_table.py`์—์„œ ์ˆ˜ํ–‰) - ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—๋Š” ์„ ํƒ์  ์ข…์†์„ฑ์ด ์„ค์น˜๋˜์ง€ ์•Š์•˜๋”๋ผ๋„ ๋ชจ๋“  ๊ฐ์ฒด๊ฐ€ ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ์ง€ (`utils/check_dummies.py`์—์„œ ์ˆ˜ํ–‰) ์ด๋Ÿฌํ•œ ๊ฒ€์‚ฌ๊ฐ€ ์‹คํŒจํ•˜๋Š” ๊ฒฝ์šฐ, ์ฒ˜์Œ ๋‘ ๊ฐ€์ง€ ํ•ญ๋ชฉ์€ ์ˆ˜๋™์œผ๋กœ ์ˆ˜์ •ํ•ด์•ผ ํ•˜๋ฉฐ, ๋‚˜๋จธ์ง€ ๋„ค ๊ฐ€์ง€ 
ํ•ญ๋ชฉ์€ ๋‹ค์Œ ๋ช…๋ น์„ ์‹คํ–‰ํ•˜์—ฌ ์ž๋™์œผ๋กœ ์ˆ˜์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```bash make fix-copies ``` ์ถ”๊ฐ€์ ์ธ ๊ฒ€์‚ฌ๋Š” ์ƒˆ๋กœ์šด ๋ชจ๋ธ์„ ์ถ”๊ฐ€ํ•˜๋Š” PR์— ๋Œ€ํ•œ ๊ฒƒ์œผ๋กœ, ์ฃผ๋กœ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: - ์ถ”๊ฐ€๋œ ๋ชจ๋“  ๋ชจ๋ธ์ด Auto-mapping์— ์žˆ๋Š”์ง€ (`utils/check_repo.py`์—์„œ ์ˆ˜ํ–‰) <!-- TODO Sylvain, add a check that makes sure the common tests are implemented.--> - ๋ชจ๋“  ๋ชจ๋ธ์ด ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ํ…Œ์ŠคํŠธ๋˜์—ˆ๋Š”์ง€ (`utils/check_repo.py`์—์„œ ์ˆ˜ํ–‰) <!-- TODO Sylvain, add the following - ๋ชจ๋“  ๋ชจ๋ธ์ด ๋ฉ”์ธ README, ์ฃผ์š” ๋ฌธ์„œ์— ์ถ”๊ฐ€๋˜์—ˆ๋Š”์ง€ - ์‚ฌ์šฉ๋œ ๋ชจ๋“  ์ฒดํฌํฌ์ธํŠธ๊ฐ€ ์‹ค์ œ๋กœ Hub์— ์กด์žฌํ•˜๋Š”์ง€ --> ### ๋ณต์‚ฌ๋ณธ ํ™•์ธ [[check-copies]] Transformers ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋Š” ๋ชจ๋ธ ์ฝ”๋“œ์— ๋Œ€ํ•ด ๋งค์šฐ ์™„๊ณ ํ•˜๋ฉฐ, ๊ฐ ๋ชจ๋ธ์€ ๋‹ค๋ฅธ ๋ชจ๋ธ์— ์˜์กดํ•˜์ง€ ์•Š๊ณ  ์™„์ „ํžˆ ๋‹จ์ผ ํŒŒ์ผ๋กœ ๊ตฌํ˜„๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๊ธฐ ์œ„ํ•ด ํŠน์ • ๋ชจ๋ธ์˜ ์ฝ”๋“œ ๋ณต์‚ฌ๋ณธ์ด ์›๋ณธ๊ณผ ์ผ๊ด€๋œ ์ƒํƒœ๋กœ ์œ ์ง€๋˜๋Š”์ง€ ํ™•์ธํ•˜๋Š” ๋ฉ”์ปค๋‹ˆ์ฆ˜์„ ์ถ”๊ฐ€ํ–ˆ์Šต๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ๋ฒ„๊ทธ ์ˆ˜์ •์ด ํ•„์š”ํ•œ ๊ฒฝ์šฐ ๋‹ค๋ฅธ ๋ชจ๋ธ์— ์˜ํ–ฅ์„ ์ฃผ๋Š” ๋ชจ๋“  ๋ชจ๋ธ์„ ๋ณผ ์ˆ˜ ์žˆ์œผ๋ฉฐ ์ˆ˜์ •์„ ์ ์šฉํ• ์ง€ ์ˆ˜์ •๋œ ์‚ฌ๋ณธ์„ ์‚ญ์ œํ• ์ง€ ์„ ํƒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. <Tip> ํŒŒ์ผ์ด ๋‹ค๋ฅธ ํŒŒ์ผ์˜ ์™„์ „ํ•œ ์‚ฌ๋ณธ์ธ ๊ฒฝ์šฐ ํ•ด๋‹น ํŒŒ์ผ์„ `utils/check_copies.py`์˜ `FULL_COPIES` ์ƒ์ˆ˜์— ๋“ฑ๋กํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. </Tip> ์ด ๋ฉ”์ปค๋‹ˆ์ฆ˜์€ `# Copied from xxx` ํ˜•์‹์˜ ์ฃผ์„์„ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•ฉ๋‹ˆ๋‹ค. `xxx`์—๋Š” ์•„๋ž˜์— ๋ณต์‚ฌ๋˜๋Š” ํด๋ž˜์Šค ๋˜๋Š” ํ•จ์ˆ˜์˜ ์ „์ฒด ๊ฒฝ๋กœ๊ฐ€ ํฌํ•จ๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด `RobertaSelfOutput`์€ `BertSelfOutput` ํด๋ž˜์Šค์˜ ๋ณต์‚ฌ๋ณธ์ž…๋‹ˆ๋‹ค. 
๋”ฐ๋ผ์„œ [์—ฌ๊ธฐ](https://github.com/huggingface/transformers/blob/2bd7a27a671fd1d98059124024f580f8f5c0f3b5/src/transformers/models/roberta/modeling_roberta.py#L289)์—์„œ ์ฃผ์„์ด ์žˆ์Šต๋‹ˆ๋‹ค: ```py # Copied from transformers.models.bert.modeling_bert.BertSelfOutput ``` ํด๋ž˜์Šค ์ „์ฒด์— ์ˆ˜์ •์„ ์ ์šฉํ•˜๋Š” ๋Œ€์‹ ์— ๋ณต์‚ฌ๋ณธ๊ณผ ๊ด€๋ จ์žˆ๋Š” ๋ฉ”์„œ๋“œ์— ์ ์šฉํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด [์—ฌ๊ธฐ](https://github.com/huggingface/transformers/blob/2bd7a27a671fd1d98059124024f580f8f5c0f3b5/src/transformers/models/roberta/modeling_roberta.py#L598)์—์„œ `RobertaPreTrainedModel._init_weights`๊ฐ€ `BertPreTrainedModel`์˜ ๋™์ผํ•œ ๋ฉ”์„œ๋“œ์—์„œ ๋ณต์‚ฌ๋œ ๊ฒƒ์„ ๋ณผ ์ˆ˜ ์žˆ์œผ๋ฉฐ ํ•ด๋‹น ์ฃผ์„์ด ์žˆ์Šต๋‹ˆ๋‹ค: ```py # Copied from transformers.models.bert.modeling_bert.BertPreTrainedModel._init_weights ``` ๋ณต์‚ฌ๋ณธ์ด ์ด๋ฆ„๋งŒ ๋‹ค๋ฅธ ๊ฒฝ์šฐ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค: ์˜ˆ๋ฅผ ๋“ค์–ด `RobertaAttention`์—์„œ `BertSelfAttention` ๋Œ€์‹  `RobertaSelfAttention`์„ ์‚ฌ์šฉํ•˜์ง€๋งŒ ๊ทธ ์™ธ์—๋Š” ์ฝ”๋“œ๊ฐ€ ์™„์ „ํžˆ ๋™์ผํ•ฉ๋‹ˆ๋‹ค: ์ด ๋•Œ `# Copied from`์€ `Copied from xxx with foo->bar`์™€ ๊ฐ™์€ ๊ฐ„๋‹จํ•œ ๋ฌธ์ž์—ด ๋Œ€์ฒด๋ฅผ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ๋ชจ๋“  `foo` ์ธ์Šคํ„ด์Šค๋ฅผ `bar`๋กœ ๋ฐ”๊ฟ”์„œ ์ฝ”๋“œ๋ฅผ ๋ณต์‚ฌํ•ฉ๋‹ˆ๋‹ค. [์—ฌ๊ธฐ](https://github.com/huggingface/transformers/blob/2bd7a27a671fd1d98059124024f580f8f5c0f3b5/src/transformers/models/roberta/modeling_roberta.py#L304C1-L304C86)์—์„œ ์–ด๋–ป๊ฒŒ ์‚ฌ์šฉ๋˜๋Š”์ง€ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py # Copied from transformers.models.bert.modeling_bert.BertAttention with Bert->Roberta ``` ํ™”์‚ดํ‘œ ์ฃผ๋ณ€์—๋Š” ๊ณต๋ฐฑ์ด ์—†์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค(๊ณต๋ฐฑ์ด ๋Œ€์ฒด ํŒจํ„ด์˜ ์ผ๋ถ€์ธ ๊ฒฝ์šฐ๋Š” ์˜ˆ์™ธ์ž…๋‹ˆ๋‹ค). ๋Œ€์ฒด ํŒจํ„ด์„ ์‰ผํ‘œ๋กœ ๊ตฌ๋ถ„ํ•˜์—ฌ ์—ฌ๋Ÿฌ ํŒจํ„ด์„ ์ถ”๊ฐ€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด `CamemberForMaskedLM`์€ ๋‘ ๊ฐ€์ง€ ๋Œ€์ฒด ์‚ฌํ•ญ์„ ๊ฐ€์ง„ `RobertaForMaskedLM`์˜ ๋ณต์‚ฌ๋ณธ์ž…๋‹ˆ๋‹ค: `Roberta`๋ฅผ `Camembert`๋กœ ๋Œ€์ฒดํ•˜๊ณ  `ROBERTA`๋ฅผ `CAMEMBERT`๋กœ ๋Œ€์ฒดํ•ฉ๋‹ˆ๋‹ค. 
[์—ฌ๊ธฐ](https://github.com/huggingface/transformers/blob/15082a9dc6950ecae63a0d3e5060b2fc7f15050a/src/transformers/models/camembert/modeling_camembert.py#L929)์—์„œ ์ด๊ฒƒ์ด ์ฃผ์„์œผ๋กœ ์–ด๋–ป๊ฒŒ ๊ตฌํ˜„๋˜์—ˆ๋Š”์ง€ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py # Copied from transformers.models.roberta.modeling_roberta.RobertaForMaskedLM with Roberta->Camembert, ROBERTA->CAMEMBERT ``` ์ˆœ์„œ๊ฐ€ ์ค‘์š”ํ•œ ๊ฒฝ์šฐ(์ด์ „ ์ˆ˜์ •๊ณผ ์ถฉ๋Œํ•  ์ˆ˜ ์žˆ๋Š” ๊ฒฝ์šฐ) ์ˆ˜์ •์€ ์™ผ์ชฝ์—์„œ ์˜ค๋ฅธ์ชฝ์œผ๋กœ ์‹คํ–‰๋ฉ๋‹ˆ๋‹ค. <Tip> ์ƒˆ ๋ณ€๊ฒฝ์ด ์„œ์‹์„ ๋ณ€๊ฒฝํ•˜๋Š” ๊ฒฝ์šฐ(์งง์€ ์ด๋ฆ„์„ ๋งค์šฐ ๊ธด ์ด๋ฆ„์œผ๋กœ ๋ฐ”๊พธ๋Š” ๊ฒฝ์šฐ) ์ž๋™ ์„œ์‹ ์ง€์ •๊ธฐ๋ฅผ ์ ์šฉํ•œ ํ›„ ๋ณต์‚ฌ๋ณธ์ด ๊ฒ€์‚ฌ๋ฉ๋‹ˆ๋‹ค. </Tip> ํŒจํ„ด์˜ ๋Œ€์†Œ๋ฌธ์ž๊ฐ€ ๋‹ค๋ฅธ ๊ฒฝ์šฐ(๋Œ€๋ฌธ์ž์™€ ์†Œ๋ฌธ์ž๊ฐ€ ํ˜ผ์šฉ๋œ ๋Œ€์ฒด ์–‘์‹) `all-casing` ์˜ต์…˜์„ ์ถ”๊ฐ€ํ•˜๋Š” ๋ฐฉ๋ฒ•๋„ ์žˆ์Šต๋‹ˆ๋‹ค. [์—ฌ๊ธฐ](https://github.com/huggingface/transformers/blob/15082a9dc6950ecae63a0d3e5060b2fc7f15050a/src/transformers/models/mobilebert/modeling_mobilebert.py#L1237)์—์„œ `MobileBertForSequenceClassification`์—์„œ ์‚ฌ์šฉ๋œ ์˜ˆ์‹œ๋ฅผ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py # Copied from transformers.models.bert.modeling_bert.BertForSequenceClassification with Bert->MobileBert all-casing ``` ์ด ๊ฒฝ์šฐ, ์ฝ”๋“œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ๋ณต์‚ฌ๋ฉ๋‹ˆ๋‹ค: - `MobileBert`์—์„œ `Bert`๋กœ(์˜ˆ: `MobileBertModel`์„ init์—์„œ ์‚ฌ์šฉํ•  ๋•Œ) - `mobilebert`์—์„œ `bert`๋กœ(์˜ˆ: `self.mobilebert`๋ฅผ ์ •์˜ํ•  ๋•Œ) - `MOBILEBERT`์—์„œ `BERT`๋กœ(`MOBILEBERT_INPUTS_DOCSTRING` ์ƒ์ˆ˜์—์„œ)
<!--- Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๋ฌธ์ œ ํ•ด๊ฒฐ[[troubleshoot]] ๋•Œ๋•Œ๋กœ ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ์ง€๋งŒ, ์ €ํฌ๊ฐ€ ๋„์™€๋“œ๋ฆฌ๊ฒ ์Šต๋‹ˆ๋‹ค! ์ด ๊ฐ€์ด๋“œ๋Š” ํ˜„์žฌ๊นŒ์ง€ ํ™•์ธ๋œ ๊ฐ€์žฅ ์ผ๋ฐ˜์ ์ธ ๋ฌธ์ œ ๋ช‡ ๊ฐ€์ง€์™€ ๊ทธ๊ฒƒ๋“ค์„ ํ•ด๊ฒฐํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด ๋‹ค๋ฃน๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ด ๊ฐ€์ด๋“œ๋Š” ๋ชจ๋“  ๐Ÿค— Transformers ๋ฌธ์ œ๋ฅผ ํฌ๊ด„์ ์œผ๋กœ ๋‹ค๋ฃจ๊ณ  ์žˆ์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๋ฌธ์ œ ํ•ด๊ฒฐ์— ๋” ๋งŽ์€ ๋„์›€์„ ๋ฐ›์œผ๋ ค๋ฉด ๋‹ค์Œ์„ ์‹œ๋„ํ•ด๋ณด์„ธ์š”: <Youtube id="S2EEG3JIt2A"/> 1. [ํฌ๋Ÿผ](https://discuss.huggingface.co/)์—์„œ ๋„์›€์„ ์š”์ฒญํ•˜์„ธ์š”. [Beginners](https://discuss.huggingface.co/c/beginners/5) ๋˜๋Š” [๐Ÿค— Transformers](https://discuss.huggingface.co/c/transformers/9)์™€ ๊ฐ™์€ ํŠน์ • ์นดํ…Œ๊ณ ๋ฆฌ์— ์งˆ๋ฌธ์„ ๊ฒŒ์‹œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์žฌํ˜„ ๊ฐ€๋Šฅํ•œ ์ฝ”๋“œ์™€ ํ•จ๊ป˜ ์ž˜ ์„œ์ˆ ๋œ ํฌ๋Ÿผ ๊ฒŒ์‹œ๋ฌผ์„ ์ž‘์„ฑํ•˜์—ฌ ์—ฌ๋Ÿฌ๋ถ„์˜ ๋ฌธ์ œ๊ฐ€ ํ•ด๊ฒฐ๋  ๊ฐ€๋Šฅ์„ฑ์„ ๊ทน๋Œ€ํ™”ํ•˜์„ธ์š”! <Youtube id="_PAli-V4wj0"/> 2. ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์™€ ๊ด€๋ จ๋œ ๋ฒ„๊ทธ์ด๋ฉด ๐Ÿค— Transformers ์ €์žฅ์†Œ์—์„œ [์ด์Šˆ](https://github.com/huggingface/transformers/issues/new/choose)๋ฅผ ์ƒ์„ฑํ•˜์„ธ์š”. 
๋ฒ„๊ทธ์— ๋Œ€ํ•ด ์„ค๋ช…ํ•˜๋Š” ์ •๋ณด๋ฅผ ๊ฐ€๋Šฅํ•œ ๋งŽ์ด ํฌํ•จํ•˜๋ ค๊ณ  ๋…ธ๋ ฅํ•˜์—ฌ, ๋ฌด์—‡์ด ์ž˜๋ชป ๋˜์—ˆ๋Š”์ง€์™€ ์–ด๋–ป๊ฒŒ ์ˆ˜์ •ํ•  ์ˆ˜ ์žˆ๋Š”์ง€ ๋” ์ž˜ ํŒŒ์•…ํ•  ์ˆ˜ ์žˆ๋„๋ก ๋„์™€์ฃผ์„ธ์š”. 3. ์ด์ „ ๋ฒ„์ „์˜ ๐Ÿค— Transformers์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ ์ค‘์š”ํ•œ ๋ณ€๊ฒฝ ์‚ฌํ•ญ์ด ๋ฒ„์ „ ์‚ฌ์ด์— ๋„์ž…๋˜์—ˆ๊ธฐ ๋•Œ๋ฌธ์— [๋งˆ์ด๊ทธ๋ ˆ์ด์…˜](migration) ๊ฐ€์ด๋“œ๋ฅผ ํ™•์ธํ•˜์„ธ์š”. ๋ฌธ์ œ ํ•ด๊ฒฐ ๋ฐ ๋„์›€ ๋งค๋‰ด์–ผ์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ Hugging Face ๊ฐ•์ขŒ์˜ [8์žฅ](https://huggingface.co/course/chapter8/1?fw=pt)์„ ์ฐธ์กฐํ•˜์„ธ์š”. ## ๋ฐฉํ™”๋ฒฝ ํ™˜๊ฒฝ[[firewalled-environments]] ํด๋ผ์šฐ๋“œ ๋ฐ ๋‚ด๋ถ€๋ง(intranet) ์„ค์ •์˜ ์ผ๋ถ€ GPU ์ธ์Šคํ„ด์Šค๋Š” ์™ธ๋ถ€ ์—ฐ๊ฒฐ์— ๋Œ€ํ•œ ๋ฐฉํ™”๋ฒฝ์œผ๋กœ ์ฐจ๋‹จ๋˜์–ด ์—ฐ๊ฒฐ ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์Šคํฌ๋ฆฝํŠธ๊ฐ€ ๋ชจ๋ธ ๊ฐ€์ค‘์น˜๋‚˜ ๋ฐ์ดํ„ฐ๋ฅผ ๋‹ค์šด๋กœ๋“œํ•˜๋ ค๊ณ  ํ•  ๋•Œ, ๋‹ค์šด๋กœ๋“œ๊ฐ€ ์ค‘๋‹จ๋˜๊ณ  ๋‹ค์Œ ๋ฉ”์‹œ์ง€์™€ ํ•จ๊ป˜ ์‹œ๊ฐ„ ์ดˆ๊ณผ๋ฉ๋‹ˆ๋‹ค: ``` ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on. ``` ์ด ๊ฒฝ์šฐ์—๋Š” ์—ฐ๊ฒฐ ์˜ค๋ฅ˜๋ฅผ ํ”ผํ•˜๊ธฐ ์œ„ํ•ด ๐Ÿค— Transformers๋ฅผ [์˜คํ”„๋ผ์ธ ๋ชจ๋“œ](installation#offline-mode)๋กœ ์‹คํ–‰ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ## CUDA ๋ฉ”๋ชจ๋ฆฌ ๋ถ€์กฑ(CUDA out of memory)[[cuda-out-of-memory]] ์ˆ˜๋ฐฑ๋งŒ ๊ฐœ์˜ ๋งค๊ฐœ๋ณ€์ˆ˜๋กœ ๋Œ€๊ทœ๋ชจ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๋Š” ๊ฒƒ์€ ์ ์ ˆํ•œ ํ•˜๋“œ์›จ์–ด ์—†์ด ์–ด๋ ค์šธ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. GPU ๋ฉ”๋ชจ๋ฆฌ๊ฐ€ ๋ถ€์กฑํ•œ ๊ฒฝ์šฐ ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ๋Š” ์ผ๋ฐ˜์ ์ธ ์˜ค๋ฅ˜๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ``` CUDA out of memory. Tried to allocate 256.00 MiB (GPU 0; 11.17 GiB total capacity; 9.70 GiB already allocated; 179.81 MiB free; 9.85 GiB reserved in total by PyTorch) ``` ๋‹ค์Œ์€ ๋ฉ”๋ชจ๋ฆฌ ์‚ฌ์šฉ์„ ์ค„์ด๊ธฐ ์œ„ํ•ด ์‹œ๋„ํ•ด ๋ณผ ์ˆ˜ ์žˆ๋Š” ๋ช‡ ๊ฐ€์ง€ ์ž ์žฌ์ ์ธ ํ•ด๊ฒฐ์ฑ…์ž…๋‹ˆ๋‹ค: - [`TrainingArguments`]์˜ [`per_device_train_batch_size`](main_classes/trainer#transformers.TrainingArguments.per_device_train_batch_size) ๊ฐ’์„ ์ค„์ด์„ธ์š”. 
- [`TrainingArguments`]์˜ [`gradient_accumulation_steps`](main_classes/trainer#transformers.TrainingArguments.gradient_accumulation_steps)์€ ์ „์ฒด ๋ฐฐ์น˜ ํฌ๊ธฐ๋ฅผ ํšจ๊ณผ์ ์œผ๋กœ ๋Š˜๋ฆฌ์„ธ์š”. <Tip> ๋ฉ”๋ชจ๋ฆฌ ์ ˆ์•ฝ ๊ธฐ์ˆ ์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ ์„ฑ๋Šฅ [๊ฐ€์ด๋“œ](performance)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. </Tip> ## ์ €์žฅ๋œ TensorFlow ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค(Unable to load a saved TensorFlow model)[[unable-to-load-a-saved-uensorFlow-model]] TensorFlow์˜ [model.save](https://www.tensorflow.org/tutorials/keras/save_and_load#save_the_entire_model) ๋ฉ”์†Œ๋“œ๋Š” ์•„ํ‚คํ…์ฒ˜, ๊ฐ€์ค‘์น˜, ํ›ˆ๋ จ ๊ตฌ์„ฑ ๋“ฑ ์ „์ฒด ๋ชจ๋ธ์„ ๋‹จ์ผ ํŒŒ์ผ์— ์ €์žฅํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ๋ชจ๋ธ ํŒŒ์ผ์„ ๋‹ค์‹œ ๊ฐ€์ ธ์˜ฌ ๋•Œ ๐Ÿค— Transformers๋Š” ๋ชจ๋ธ ํŒŒ์ผ์— ์žˆ๋Š” ๋ชจ๋“  TensorFlow ๊ด€๋ จ ๊ฐ์ฒด๋ฅผ ๊ฐ€์ ธ์˜ค์ง€ ์•Š์„ ์ˆ˜ ์žˆ๊ธฐ ๋•Œ๋ฌธ์— ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. TensorFlow ๋ชจ๋ธ ์ €์žฅ ๋ฐ ๊ฐ€์ ธ์˜ค๊ธฐ ๋ฌธ์ œ๋ฅผ ํ”ผํ•˜๋ ค๋ฉด ๋‹ค์Œ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค: - ๋ชจ๋ธ ๊ฐ€์ค‘์น˜๋ฅผ `h5` ํŒŒ์ผ ํ™•์žฅ์ž๋กœ [`model.save_weights`](https://www.tensorflow.org/tutorials/keras/save_and_load#save_the_entire_model)๋กœ ์ €์žฅํ•œ ๋‹ค์Œ [`~TFPreTrainedModel.from_pretrained`]๋กœ ๋ชจ๋ธ์„ ๋‹ค์‹œ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค: ```py >>> from transformers import TFPreTrainedModel >>> from tensorflow import keras >>> model.save_weights("some_folder/tf_model.h5") >>> model = TFPreTrainedModel.from_pretrained("some_folder") ``` - ๋ชจ๋ธ์„ [`~TFPretrainedModel.save_pretrained`]๋กœ ์ €์žฅํ•˜๊ณ  [`~TFPreTrainedModel.from_pretrained`]๋กœ ๋‹ค์‹œ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค: ```py >>> from transformers import TFPreTrainedModel >>> model.save_pretrained("path_to/model") >>> model = TFPreTrainedModel.from_pretrained("path_to/model") ``` ## ImportError[[importerror]] ํŠนํžˆ ์ตœ์‹  ๋ชจ๋ธ์ธ ๊ฒฝ์šฐ ๋งŒ๋‚  ์ˆ˜ ์žˆ๋Š” ๋‹ค๋ฅธ ์ผ๋ฐ˜์ ์ธ ์˜ค๋ฅ˜๋Š” `ImportError`์ž…๋‹ˆ๋‹ค: ``` ImportError: cannot import name 'ImageGPTImageProcessor' from 'transformers' (unknown location) ``` ์ด๋Ÿฌํ•œ ์˜ค๋ฅ˜ ์œ ํ˜•์˜ ๊ฒฝ์šฐ ์ตœ์‹  
๋ชจ๋ธ์— ์•ก์„ธ์Šคํ•  ์ˆ˜ ์žˆ๋„๋ก ์ตœ์‹  ๋ฒ„์ „์˜ ๐Ÿค— Transformers๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install transformers --upgrade ``` ## CUDA error: device-side assert triggered[[cuda-error-deviceside-assert-triggered]] ๋•Œ๋•Œ๋กœ ์žฅ์น˜ ์ฝ”๋“œ ์˜ค๋ฅ˜์— ๋Œ€ํ•œ ์ผ๋ฐ˜์ ์ธ CUDA ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ``` RuntimeError: CUDA error: device-side assert triggered ``` ๋” ์ž์„ธํ•œ ์˜ค๋ฅ˜ ๋ฉ”์‹œ์ง€๋ฅผ ์–ป์œผ๋ ค๋ฉด ์šฐ์„  ์ฝ”๋“œ๋ฅผ CPU์—์„œ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ ํ™˜๊ฒฝ ๋ณ€์ˆ˜๋ฅผ ์ฝ”๋“œ์˜ ์‹œ์ž‘ ๋ถ€๋ถ„์— ์ถ”๊ฐ€ํ•˜์—ฌ CPU๋กœ ์ „ํ™˜ํ•˜์„ธ์š”: ```py >>> import os >>> os.environ["CUDA_VISIBLE_DEVICES"] = "" ``` ๋˜ ๋‹ค๋ฅธ ์˜ต์…˜์€ GPU์—์„œ ๋” ๋‚˜์€ ์—ญ์ถ”์ (traceback)์„ ์–ป๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋‹ค์Œ ํ™˜๊ฒฝ ๋ณ€์ˆ˜๋ฅผ ์ฝ”๋“œ์˜ ์‹œ์ž‘ ๋ถ€๋ถ„์— ์ถ”๊ฐ€ํ•˜์—ฌ ์—ญ์ถ”์ ์ด ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•œ ์†Œ์Šค๋ฅผ ๊ฐ€๋ฆฌํ‚ค๋„๋ก ํ•˜์„ธ์š”: ```py >>> import os >>> os.environ["CUDA_LAUNCH_BLOCKING"] = "1" ``` ## ํŒจ๋”ฉ ํ† ํฐ์ด ๋งˆ์Šคํ‚น๋˜์ง€ ์•Š์€ ๊ฒฝ์šฐ ์ž˜๋ชป๋œ ์ถœ๋ ฅ(Incorrect output when padding tokens aren't masked)[[incorrect-output-when-padding-tokens-arent-masked]] ๊ฒฝ์šฐ์— ๋”ฐ๋ผ `input_ids`์— ํŒจ๋”ฉ ํ† ํฐ์ด ํฌํ•จ๋œ ๊ฒฝ์šฐ `hidden_state` ์ถœ๋ ฅ์ด ์˜ฌ๋ฐ”๋ฅด์ง€ ์•Š์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฐ๋ชจ๋ฅผ ์œ„ํ•ด ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๊ฐ€์ ธ์˜ค์„ธ์š”. ๋ชจ๋ธ์˜ `pad_token_id`์— ์•ก์„ธ์Šคํ•˜์—ฌ ํ•ด๋‹น ๊ฐ’์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ผ๋ถ€ ๋ชจ๋ธ์˜ ๊ฒฝ์šฐ `pad_token_id`๊ฐ€ `None`์ผ ์ˆ˜ ์žˆ์ง€๋งŒ ์–ธ์ œ๋“ ์ง€ ์ˆ˜๋™์œผ๋กœ ์„ค์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
```py >>> from transformers import AutoModelForSequenceClassification >>> import torch >>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased") >>> model.config.pad_token_id 0 ``` ๋‹ค์Œ ์˜ˆ์ œ๋Š” ํŒจ๋”ฉ ํ† ํฐ์„ ๋งˆ์Šคํ‚นํ•˜์ง€ ์•Š์€ ์ถœ๋ ฅ์„ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค: ```py >>> input_ids = torch.tensor([[7592, 2057, 2097, 2393, 9611, 2115], [7592, 0, 0, 0, 0, 0]]) >>> output = model(input_ids) >>> print(output.logits) tensor([[ 0.0082, -0.2307], [ 0.1317, -0.1683]], grad_fn=<AddmmBackward0>) ``` ๋‹ค์Œ์€ ๋‘ ๋ฒˆ์งธ ์‹œํ€€์Šค์˜ ์‹ค์ œ ์ถœ๋ ฅ์ž…๋‹ˆ๋‹ค: ```py >>> input_ids = torch.tensor([[7592]]) >>> output = model(input_ids) >>> print(output.logits) tensor([[-0.1008, -0.4061]], grad_fn=<AddmmBackward0>) ``` ๋Œ€๋ถ€๋ถ„์˜ ๊ฒฝ์šฐ ๋ชจ๋ธ์— `attention_mask`๋ฅผ ์ œ๊ณตํ•˜์—ฌ ํŒจ๋”ฉ ํ† ํฐ์„ ๋ฌด์‹œํ•ด์•ผ ์ด๋Ÿฌํ•œ ์กฐ์šฉํ•œ ์˜ค๋ฅ˜๋ฅผ ๋ฐฉ์ง€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด์ œ ๋‘ ๋ฒˆ์งธ ์‹œํ€€์Šค์˜ ์ถœ๋ ฅ์ด ์‹ค์ œ ์ถœ๋ ฅ๊ณผ ์ผ์น˜ํ•ฉ๋‹ˆ๋‹ค: <Tip> ์ผ๋ฐ˜์ ์œผ๋กœ ํ† ํฌ๋‚˜์ด์ €๋Š” ํŠน์ • ํ† ํฌ๋‚˜์ด์ €์˜ ๊ธฐ๋ณธ ๊ฐ’์„ ๊ธฐ์ค€์œผ๋กœ ์‚ฌ์šฉ์ž์— ๋Œ€ํ•œ 'attention_mask'๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค. </Tip> ```py >>> attention_mask = torch.tensor([[1, 1, 1, 1, 1, 1], [1, 0, 0, 0, 0, 0]]) >>> output = model(input_ids, attention_mask=attention_mask) >>> print(output.logits) tensor([[ 0.0082, -0.2307], [-0.1008, -0.4061]], grad_fn=<AddmmBackward0>) ``` ๐Ÿค— Transformers๋Š” ํŒจ๋”ฉ ํ† ํฐ์ด ์ œ๊ณต๋œ ๊ฒฝ์šฐ ํŒจ๋”ฉ ํ† ํฐ์„ ๋งˆ์Šคํ‚นํ•˜๊ธฐ ์œ„ํ•œ `attention_mask`๋ฅผ ์ž๋™์œผ๋กœ ์ƒ์„ฑํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๊ทธ ์ด์œ ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: - ์ผ๋ถ€ ๋ชจ๋ธ์—๋Š” ํŒจ๋”ฉ ํ† ํฐ์ด ์—†์Šต๋‹ˆ๋‹ค. - ์ผ๋ถ€ ์‚ฌ์šฉ ์‚ฌ๋ก€์˜ ๊ฒฝ์šฐ ์‚ฌ์šฉ์ž๊ฐ€ ๋ชจ๋ธ์ด ํŒจ๋”ฉ ํ† ํฐ์„ ๊ด€๋ฆฌํ•˜๊ธฐ๋ฅผ ์›ํ•ฉ๋‹ˆ๋‹ค. 
## ValueError: ์ด ์œ ํ˜•์˜ AutoModel์— ๋Œ€ํ•ด ์ธ์‹ํ•  ์ˆ˜ ์—†๋Š” XYZ ๊ตฌ์„ฑ ํด๋ž˜์Šค(ValueError: Unrecognized configuration class XYZ for this kind of AutoModel)[[valueerror-unrecognized-configuration-class-xyz-for-this-kind-of-automodel]] ์ผ๋ฐ˜์ ์œผ๋กœ, ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์˜ ์ธ์Šคํ„ด์Šค๋ฅผ ๊ฐ€์ ธ์˜ค๊ธฐ ์œ„ํ•ด [`AutoModel`] ํด๋ž˜์Šค๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ์ด ํด๋ž˜์Šค๋Š” ๊ตฌ์„ฑ์— ๋”ฐ๋ผ ์ฃผ์–ด์ง„ ์ฒดํฌํฌ์ธํŠธ์—์„œ ์˜ฌ๋ฐ”๋ฅธ ์•„ํ‚คํ…์ฒ˜๋ฅผ ์ž๋™์œผ๋กœ ์ถ”๋ก ํ•˜๊ณ  ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ ์ฒดํฌํฌ์ธํŠธ์—์„œ ๊ฐ€์ ธ์˜ฌ ๋•Œ ์ด `ValueError`๊ฐ€ ๋ฐœ์ƒํ•˜๋ฉด, ์ด๋Š” Auto ํด๋ž˜์Šค๊ฐ€ ์ฃผ์–ด์ง„ ์ฒดํฌํฌ์ธํŠธ์˜ ๊ตฌ์„ฑ์—์„œ ๊ฐ€์ ธ์˜ค๋ ค๋Š” ๋ชจ๋ธ ์œ ํ˜•๊ณผ ๋งคํ•‘์„ ์ฐพ์„ ์ˆ˜ ์—†๋‹ค๋Š” ๊ฒƒ์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค. ๊ฐ€์žฅ ํ”ํ•˜๊ฒŒ ๋ฐœ์ƒํ•˜๋Š” ๊ฒฝ์šฐ๋Š” ์ฒดํฌํฌ์ธํŠธ๊ฐ€ ์ฃผ์–ด์ง„ ํƒœ์Šคํฌ๋ฅผ ์ง€์›ํ•˜์ง€ ์•Š์„ ๋•Œ์ž…๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ๋‹ค์Œ ์˜ˆ์ œ์—์„œ ์งˆ์˜์‘๋‹ต์— ๋Œ€ํ•œ GPT2๊ฐ€ ์—†๊ธฐ ๋•Œ๋ฌธ์— ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoProcessor, AutoModelForQuestionAnswering >>> processor = AutoProcessor.from_pretrained("gpt2-medium") >>> model = AutoModelForQuestionAnswering.from_pretrained("gpt2-medium") ValueError: Unrecognized configuration class <class 'transformers.models.gpt2.configuration_gpt2.GPT2Config'> for this kind of AutoModel: AutoModelForQuestionAnswering. Model type should be one of AlbertConfig, BartConfig, BertConfig, BigBirdConfig, BigBirdPegasusConfig, BloomConfig, ... ```
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # 어떻게 사용자 정의 파이프라인을 생성하나요? [[how-to-create-a-custom-pipeline]] 이 가이드에서는 사용자 정의 파이프라인을 어떻게 생성하고 [허브](hf.co/models)에 공유하거나 🤗 Transformers 라이브러리에 추가하는 방법을 살펴보겠습니다. 먼저 파이프라인이 수용할 수 있는 원시 입력을 결정해야 합니다. 문자열, 원시 바이트, 딕셔너리 또는 가장 원하는 입력일 가능성이 높은 것이면 무엇이든 가능합니다. 이 입력을 가능한 한 순수한 Python 형식으로 유지해야 (JSON을 통해 다른 언어와도) 호환성이 좋아집니다. 이것이 전처리(`preprocess`) 파이프라인의 입력(`inputs`)이 될 것입니다. 그런 다음 `outputs`를 정의하세요. `inputs`와 같은 정책을 따르고, 간단할수록 좋습니다. 이것이 후처리(`postprocess`) 메소드의 출력이 될 것입니다. 먼저 4개의 메소드(`preprocess`, `_forward`, `postprocess` 및 `_sanitize_parameters`)를 구현하기 위해 기본 클래스 `Pipeline`을 상속하여 시작합니다. 
```python from transformers import Pipeline class MyPipeline(Pipeline): def _sanitize_parameters(self, **kwargs): preprocess_kwargs = {} if "maybe_arg" in kwargs: preprocess_kwargs["maybe_arg"] = kwargs["maybe_arg"] return preprocess_kwargs, {}, {} def preprocess(self, inputs, maybe_arg=2): model_input = Tensor(inputs["input_ids"]) return {"model_input": model_input} def _forward(self, model_inputs): # model_inputs == {"model_input": model_input} outputs = self.model(**model_inputs) # Maybe {"logits": Tensor(...)} return outputs def postprocess(self, model_outputs): best_class = model_outputs["logits"].softmax(-1) return best_class ``` ์ด ๋ถ„ํ•  ๊ตฌ์กฐ๋Š” CPU/GPU์— ๋Œ€ํ•œ ๋น„๊ต์  ์›ํ™œํ•œ ์ง€์›์„ ์ œ๊ณตํ•˜๋Š” ๋™์‹œ์—, ๋‹ค๋ฅธ ์Šค๋ ˆ๋“œ์—์„œ CPU์— ๋Œ€ํ•œ ์‚ฌ์ „/์‚ฌํ›„ ์ฒ˜๋ฆฌ๋ฅผ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ๊ฒŒ ์ง€์›ํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. `preprocess`๋Š” ์›๋ž˜ ์ •์˜๋œ ์ž…๋ ฅ์„ ๊ฐ€์ ธ์™€ ๋ชจ๋ธ์— ๊ณต๊ธ‰ํ•  ์ˆ˜ ์žˆ๋Š” ํ˜•์‹์œผ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ๋” ๋งŽ์€ ์ •๋ณด๋ฅผ ํฌํ•จํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ ์ผ๋ฐ˜์ ์œผ๋กœ `Dict` ํ˜•ํƒœ์ž…๋‹ˆ๋‹ค. `_forward`๋Š” ๊ตฌํ˜„ ์„ธ๋ถ€ ์‚ฌํ•ญ์ด๋ฉฐ ์ง์ ‘ ํ˜ธ์ถœํ•  ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค. `forward`๋Š” ์˜ˆ์ƒ ์žฅ์น˜์—์„œ ๋ชจ๋“  ๊ฒƒ์ด ์ž‘๋™ํ•˜๋Š”์ง€ ํ™•์ธํ•˜๊ธฐ ์œ„ํ•œ ์•ˆ์ „์žฅ์น˜๊ฐ€ ํฌํ•จ๋˜์–ด ์žˆ์–ด ์„ ํ˜ธ๋˜๋Š” ํ˜ธ์ถœ ๋ฉ”์†Œ๋“œ์ž…๋‹ˆ๋‹ค. ์‹ค์ œ ๋ชจ๋ธ๊ณผ ๊ด€๋ จ๋œ ๊ฒƒ์€ `_forward` ๋ฉ”์†Œ๋“œ์— ์†ํ•˜๋ฉฐ, ๋‚˜๋จธ์ง€๋Š” ์ „์ฒ˜๋ฆฌ/ํ›„์ฒ˜๋ฆฌ ๊ณผ์ •์— ์žˆ์Šต๋‹ˆ๋‹ค. `postprocess` ๋ฉ”์†Œ๋“œ๋Š” `_forward`์˜ ์ถœ๋ ฅ์„ ๊ฐ€์ ธ์™€ ์ด์ „์— ๊ฒฐ์ •ํ•œ ์ตœ์ข… ์ถœ๋ ฅ ํ˜•์‹์œผ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค. `_sanitize_parameters`๋Š” ์ดˆ๊ธฐํ™” ์‹œ๊ฐ„์— `pipeline(...., maybe_arg=4)`์ด๋‚˜ ํ˜ธ์ถœ ์‹œ๊ฐ„์— `pipe = pipeline(...); output = pipe(...., maybe_arg=4)`๊ณผ ๊ฐ™์ด, ์‚ฌ์šฉ์ž๊ฐ€ ์›ํ•˜๋Š” ๊ฒฝ์šฐ ์–ธ์ œ๋“ ์ง€ ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์ „๋‹ฌํ•  ์ˆ˜ ์žˆ๋„๋ก ํ—ˆ์šฉํ•ฉ๋‹ˆ๋‹ค. `_sanitize_parameters`์˜ ๋ฐ˜ํ™˜ ๊ฐ’์€ `preprocess`, `_forward`, `postprocess`์— ์ง์ ‘ ์ „๋‹ฌ๋˜๋Š” 3๊ฐœ์˜ kwargs ๋”•์…”๋„ˆ๋ฆฌ์ž…๋‹ˆ๋‹ค. 
ํ˜ธ์ถœ์ž๊ฐ€ ์ถ”๊ฐ€ ๋งค๊ฐœ๋ณ€์ˆ˜๋กœ ํ˜ธ์ถœํ•˜์ง€ ์•Š์•˜๋‹ค๋ฉด ์•„๋ฌด๊ฒƒ๋„ ์ฑ„์šฐ์ง€ ๋งˆ์‹ญ์‹œ์˜ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ํ•ญ์ƒ ๋” "์ž์—ฐ์Šค๋Ÿฌ์šด" ํ•จ์ˆ˜ ์ •์˜์˜ ๊ธฐ๋ณธ ์ธ์ˆ˜๋ฅผ ์œ ์ง€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ถ„๋ฅ˜ ์ž‘์—…์—์„œ `top_k` ๋งค๊ฐœ๋ณ€์ˆ˜๊ฐ€ ๋Œ€ํ‘œ์ ์ธ ์˜ˆ์ž…๋‹ˆ๋‹ค. ```python >>> pipe = pipeline("my-new-task") >>> pipe("This is a test") [{"label": "1-star", "score": 0.8}, {"label": "2-star", "score": 0.1}, {"label": "3-star", "score": 0.05} {"label": "4-star", "score": 0.025}, {"label": "5-star", "score": 0.025}] >>> pipe("This is a test", top_k=2) [{"label": "1-star", "score": 0.8}, {"label": "2-star", "score": 0.1}] ``` ์ด๋ฅผ ๋‹ฌ์„ฑํ•˜๊ธฐ ์œ„ํ•ด ์šฐ๋ฆฌ๋Š” `postprocess` ๋ฉ”์†Œ๋“œ๋ฅผ ๊ธฐ๋ณธ ๋งค๊ฐœ๋ณ€์ˆ˜์ธ `5`๋กœ ์—…๋ฐ์ดํŠธํ•˜๊ณ  `_sanitize_parameters`๋ฅผ ์ˆ˜์ •ํ•˜์—ฌ ์ด ์ƒˆ ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ํ—ˆ์šฉํ•ฉ๋‹ˆ๋‹ค. ```python def postprocess(self, model_outputs, top_k=5): best_class = model_outputs["logits"].softmax(-1) # top_k๋ฅผ ์ฒ˜๋ฆฌํ•˜๋Š” ๋กœ์ง ์ถ”๊ฐ€ return best_class def _sanitize_parameters(self, **kwargs): preprocess_kwargs = {} if "maybe_arg" in kwargs: preprocess_kwargs["maybe_arg"] = kwargs["maybe_arg"] postprocess_kwargs = {} if "top_k" in kwargs: postprocess_kwargs["top_k"] = kwargs["top_k"] return preprocess_kwargs, {}, postprocess_kwargs ``` ์ž…/์ถœ๋ ฅ์„ ๊ฐ€๋Šฅํ•œ ํ•œ ๊ฐ„๋‹จํ•˜๊ณ  ์™„์ „ํžˆ JSON ์ง๋ ฌํ™” ๊ฐ€๋Šฅํ•œ ํ˜•์‹์œผ๋กœ ์œ ์ง€ํ•˜๋ ค๊ณ  ๋…ธ๋ ฅํ•˜์‹ญ์‹œ์˜ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ์‚ฌ์šฉ์ž๊ฐ€ ์ƒˆ๋กœ์šด ์ข…๋ฅ˜์˜ ๊ฐœ์ฒด๋ฅผ ์ดํ•ดํ•˜์ง€ ์•Š๊ณ ๋„ ํŒŒ์ดํ”„๋ผ์ธ์„ ์‰ฝ๊ฒŒ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ ์‚ฌ์šฉ ์šฉ์ด์„ฑ์„ ์œ„ํ•ด ์—ฌ๋Ÿฌ ๊ฐ€์ง€ ์œ ํ˜•์˜ ์ธ์ˆ˜(์˜ค๋””์˜ค ํŒŒ์ผ์€ ํŒŒ์ผ ์ด๋ฆ„, URL ๋˜๋Š” ์ˆœ์ˆ˜ํ•œ ๋ฐ”์ดํŠธ์ผ ์ˆ˜ ์žˆ์Œ)๋ฅผ ์ง€์›ํ•˜๋Š” ๊ฒƒ์ด ๋น„๊ต์  ์ผ๋ฐ˜์ ์ž…๋‹ˆ๋‹ค. 
## ์ง€์›๋˜๋Š” ์ž‘์—… ๋ชฉ๋ก์— ์ถ”๊ฐ€ํ•˜๊ธฐ [[adding-it-to-the-list-of-supported-tasks]] `new-task`๋ฅผ ์ง€์›๋˜๋Š” ์ž‘์—… ๋ชฉ๋ก์— ๋“ฑ๋กํ•˜๋ ค๋ฉด `PIPELINE_REGISTRY`์— ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```python from transformers.pipelines import PIPELINE_REGISTRY PIPELINE_REGISTRY.register_pipeline( "new-task", pipeline_class=MyPipeline, pt_model=AutoModelForSequenceClassification, ) ``` ์›ํ•˜๋Š” ๊ฒฝ์šฐ ๊ธฐ๋ณธ ๋ชจ๋ธ์„ ์ง€์ •ํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ, ์ด ๊ฒฝ์šฐ ํŠน์ • ๊ฐœ์ •(๋ถ„๊ธฐ ์ด๋ฆ„ ๋˜๋Š” ์ปค๋ฐ‹ ํ•ด์‹œ์ผ ์ˆ˜ ์žˆ์Œ, ์—ฌ๊ธฐ์„œ๋Š” "abcdef")๊ณผ ํƒ€์ž…์„ ํ•จ๊ป˜ ๊ฐ€์ ธ์™€์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```python PIPELINE_REGISTRY.register_pipeline( "new-task", pipeline_class=MyPipeline, pt_model=AutoModelForSequenceClassification, default={"pt": ("user/awesome_model", "abcdef")}, type="text", # ํ˜„์žฌ ์ง€์› ์œ ํ˜•: text, audio, image, multimodal ) ``` ## Hub์— ํŒŒ์ดํ”„๋ผ์ธ ๊ณต์œ ํ•˜๊ธฐ [[share-your-pipeline-on-the-hub]] Hub์— ์‚ฌ์šฉ์ž ์ •์˜ ํŒŒ์ดํ”„๋ผ์ธ์„ ๊ณต์œ ํ•˜๋ ค๋ฉด `Pipeline` ํ•˜์œ„ ํด๋ž˜์Šค์˜ ์‚ฌ์šฉ์ž ์ •์˜ ์ฝ”๋“œ๋ฅผ Python ํŒŒ์ผ์— ์ €์žฅํ•˜๊ธฐ๋งŒ ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. 
์˜ˆ๋ฅผ ๋“ค์–ด, ๋‹ค์Œ๊ณผ ๊ฐ™์ด ๋ฌธ์žฅ ์Œ ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•œ ์‚ฌ์šฉ์ž ์ •์˜ ํŒŒ์ดํ”„๋ผ์ธ์„ ์‚ฌ์šฉํ•œ๋‹ค๊ณ  ๊ฐ€์ •ํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py import numpy as np from transformers import Pipeline def softmax(outputs): maxes = np.max(outputs, axis=-1, keepdims=True) shifted_exp = np.exp(outputs - maxes) return shifted_exp / shifted_exp.sum(axis=-1, keepdims=True) class PairClassificationPipeline(Pipeline): def _sanitize_parameters(self, **kwargs): preprocess_kwargs = {} if "second_text" in kwargs: preprocess_kwargs["second_text"] = kwargs["second_text"] return preprocess_kwargs, {}, {} def preprocess(self, text, second_text=None): return self.tokenizer(text, text_pair=second_text, return_tensors=self.framework) def _forward(self, model_inputs): return self.model(**model_inputs) def postprocess(self, model_outputs): logits = model_outputs.logits[0].numpy() probabilities = softmax(logits) best_class = np.argmax(probabilities) label = self.model.config.id2label[best_class] score = probabilities[best_class].item() logits = logits.tolist() return {"label": label, "score": score, "logits": logits} ``` ๊ตฌํ˜„์€ ํ”„๋ ˆ์ž„์›Œํฌ์— ๊ตฌ์• ๋ฐ›์ง€ ์•Š์œผ๋ฉฐ, PyTorch์™€ TensorFlow ๋ชจ๋ธ์— ๋Œ€ํ•ด ์ž‘๋™ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฅผ `pair_classification.py`๋ผ๋Š” ํŒŒ์ผ์— ์ €์žฅํ•œ ๊ฒฝ์šฐ, ๋‹ค์Œ๊ณผ ๊ฐ™์ด ๊ฐ€์ ธ์˜ค๊ณ  ๋“ฑ๋กํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py from pair_classification import PairClassificationPipeline from transformers.pipelines import PIPELINE_REGISTRY from transformers import AutoModelForSequenceClassification, TFAutoModelForSequenceClassification PIPELINE_REGISTRY.register_pipeline( "pair-classification", pipeline_class=PairClassificationPipeline, pt_model=AutoModelForSequenceClassification, tf_model=TFAutoModelForSequenceClassification, ) ``` ์ด ์ž‘์—…์ด ์™„๋ฃŒ๋˜๋ฉด ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ชจ๋ธ๊ณผ ํ•จ๊ป˜ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
์˜ˆ๋ฅผ ๋“ค์–ด, `sgugger/finetuned-bert-mrpc`์€ MRPC ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ ๋ฏธ์„ธ ์กฐ์ •๋˜์–ด ๋ฌธ์žฅ ์Œ์„ ํŒจ๋Ÿฌํ”„๋ ˆ์ด์ฆˆ์ธ์ง€ ์•„๋‹Œ์ง€๋ฅผ ๋ถ„๋ฅ˜ํ•ฉ๋‹ˆ๋‹ค. ```py from transformers import pipeline classifier = pipeline("pair-classification", model="sgugger/finetuned-bert-mrpc") ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ `Repository`์˜ `save_pretrained` ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ—ˆ๋ธŒ์— ๊ณต์œ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py from huggingface_hub import Repository repo = Repository("test-dynamic-pipeline", clone_from="{your_username}/test-dynamic-pipeline") classifier.save_pretrained("test-dynamic-pipeline") repo.push_to_hub() ``` ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด "test-dynamic-pipeline" ํด๋” ๋‚ด์— `PairClassificationPipeline`์„ ์ •์˜ํ•œ ํŒŒ์ผ์ด ๋ณต์‚ฌ๋˜๋ฉฐ, ํŒŒ์ดํ”„๋ผ์ธ์˜ ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €๋„ ์ €์žฅํ•œ ํ›„, `{your_username}/test-dynamic-pipeline` ์ €์žฅ์†Œ์— ์žˆ๋Š” ๋ชจ๋“  ๊ฒƒ์„ ํ‘ธ์‹œํ•ฉ๋‹ˆ๋‹ค. ์ดํ›„์—๋Š” `trust_remote_code=True` ์˜ต์…˜๋งŒ ์ œ๊ณตํ•˜๋ฉด ๋ˆ„๊ตฌ๋‚˜ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py from transformers import pipeline classifier = pipeline(model="{your_username}/test-dynamic-pipeline", trust_remote_code=True) ``` ## ๐Ÿค— Transformers์— ํŒŒ์ดํ”„๋ผ์ธ ์ถ”๊ฐ€ํ•˜๊ธฐ [[add-the-pipeline-to-transformers]] ๐Ÿค— Transformers์— ์‚ฌ์šฉ์ž ์ •์˜ ํŒŒ์ดํ”„๋ผ์ธ์„ ๊ธฐ์—ฌํ•˜๋ ค๋ฉด, `pipelines` ํ•˜์œ„ ๋ชจ๋“ˆ์— ์‚ฌ์šฉ์ž ์ •์˜ ํŒŒ์ดํ”„๋ผ์ธ ์ฝ”๋“œ์™€ ํ•จ๊ป˜ ์ƒˆ ๋ชจ๋“ˆ์„ ์ถ”๊ฐ€ํ•œ ๋‹ค์Œ, `pipelines/__init__.py`์—์„œ ์ •์˜๋œ ์ž‘์—… ๋ชฉ๋ก์— ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฐ ๋‹ค์Œ ํ…Œ์ŠคํŠธ๋ฅผ ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. `tests/test_pipelines_MY_PIPELINE.py`๋ผ๋Š” ์ƒˆ ํŒŒ์ผ์„ ๋งŒ๋“ค๊ณ  ๋‹ค๋ฅธ ํ…Œ์ŠคํŠธ์™€ ์˜ˆ์ œ๋ฅผ ํ•จ๊ป˜ ์ž‘์„ฑํ•ฉ๋‹ˆ๋‹ค. `run_pipeline_test` ํ•จ์ˆ˜๋Š” ๋งค์šฐ ์ผ๋ฐ˜์ ์ด๋ฉฐ, `model_mapping` ๋ฐ `tf_model_mapping`์—์„œ ์ •์˜๋œ ๊ฐ€๋Šฅํ•œ ๋ชจ๋“  ์•„ํ‚คํ…์ฒ˜์˜ ์ž‘์€ ๋ฌด์ž‘์œ„ ๋ชจ๋ธ์—์„œ ์‹คํ–‰๋ฉ๋‹ˆ๋‹ค. 
์ด๋Š” ํ–ฅํ›„ ํ˜ธํ™˜์„ฑ์„ ํ…Œ์ŠคํŠธํ•˜๋Š” ๋ฐ ๋งค์šฐ ์ค‘์š”ํ•˜๋ฉฐ, ๋ˆ„๊ตฐ๊ฐ€ `XXXForQuestionAnswering`์„ ์œ„ํ•œ ์ƒˆ ๋ชจ๋ธ์„ ์ถ”๊ฐ€ํ•˜๋ฉด ํŒŒ์ดํ”„๋ผ์ธ ํ…Œ์ŠคํŠธ๊ฐ€ ํ•ด๋‹น ๋ชจ๋ธ์—์„œ ์‹คํ–‰์„ ์‹œ๋„ํ•œ๋‹ค๋Š” ์˜๋ฏธ์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ์ด ๋ฌด์ž‘์œ„์ด๊ธฐ ๋•Œ๋ฌธ์— ์‹ค์ œ ๊ฐ’์„ ํ™•์ธํ•˜๋Š” ๊ฒƒ์€ ๋ถˆ๊ฐ€๋Šฅํ•˜๋ฏ€๋กœ, ๋‹จ์ˆœํžˆ ํŒŒ์ดํ”„๋ผ์ธ ์ถœ๋ ฅ `TYPE`๊ณผ ์ผ์น˜์‹œํ‚ค๊ธฐ ์œ„ํ•œ ๋„์šฐ๋ฏธ `ANY`๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ 2๊ฐœ(์ด์ƒ์ ์œผ๋กœ๋Š” 4๊ฐœ)์˜ ํ…Œ์ŠคํŠธ๋ฅผ ๊ตฌํ˜„ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. - `test_small_model_pt`: ์ด ํŒŒ์ดํ”„๋ผ์ธ์— ๋Œ€ํ•œ ์ž‘์€ ๋ชจ๋ธ 1๊ฐœ๋ฅผ ์ •์˜(๊ฒฐ๊ณผ๊ฐ€ ์˜๋ฏธ ์—†์–ด๋„ ์ƒ๊ด€์—†์Œ)ํ•˜๊ณ  ํŒŒ์ดํ”„๋ผ์ธ ์ถœ๋ ฅ์„ ํ…Œ์ŠคํŠธํ•ฉ๋‹ˆ๋‹ค. ๊ฒฐ๊ณผ๋Š” `test_small_model_tf`์™€ ๋™์ผํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. - `test_small_model_tf`: ์ด ํŒŒ์ดํ”„๋ผ์ธ์— ๋Œ€ํ•œ ์ž‘์€ ๋ชจ๋ธ 1๊ฐœ๋ฅผ ์ •์˜(๊ฒฐ๊ณผ๊ฐ€ ์˜๋ฏธ ์—†์–ด๋„ ์ƒ๊ด€์—†์Œ)ํ•˜๊ณ  ํŒŒ์ดํ”„๋ผ์ธ ์ถœ๋ ฅ์„ ํ…Œ์ŠคํŠธํ•ฉ๋‹ˆ๋‹ค. ๊ฒฐ๊ณผ๋Š” `test_small_model_pt`์™€ ๋™์ผํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. - `test_large_model_pt`(`์„ ํƒ์‚ฌํ•ญ`): ๊ฒฐ๊ณผ๊ฐ€ ์˜๋ฏธ ์žˆ์„ ๊ฒƒ์œผ๋กœ ์˜ˆ์ƒ๋˜๋Š” ์‹ค์ œ ํŒŒ์ดํ”„๋ผ์ธ์—์„œ ํŒŒ์ดํ”„๋ผ์ธ์„ ํ…Œ์ŠคํŠธํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ํ…Œ์ŠคํŠธ๋Š” ์†๋„๊ฐ€ ๋А๋ฆฌ๋ฏ€๋กœ ์ด๋ฅผ ํ‘œ์‹œํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ์˜ ๋ชฉํ‘œ๋Š” ํŒŒ์ดํ”„๋ผ์ธ์„ ๋ณด์—ฌ์ฃผ๊ณ  ํ–ฅํ›„ ๋ฆด๋ฆฌ์ฆˆ์—์„œ์˜ ๋ณ€ํ™”๊ฐ€ ์—†๋Š”์ง€ ํ™•์ธํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. - `test_large_model_tf`(`์„ ํƒ์‚ฌํ•ญ`): ๊ฒฐ๊ณผ๊ฐ€ ์˜๋ฏธ ์žˆ์„ ๊ฒƒ์œผ๋กœ ์˜ˆ์ƒ๋˜๋Š” ์‹ค์ œ ํŒŒ์ดํ”„๋ผ์ธ์—์„œ ํŒŒ์ดํ”„๋ผ์ธ์„ ํ…Œ์ŠคํŠธํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ํ…Œ์ŠคํŠธ๋Š” ์†๋„๊ฐ€ ๋А๋ฆฌ๋ฏ€๋กœ ์ด๋ฅผ ํ‘œ์‹œํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ์˜ ๋ชฉํ‘œ๋Š” ํŒŒ์ดํ”„๋ผ์ธ์„ ๋ณด์—ฌ์ฃผ๊ณ  ํ–ฅํ›„ ๋ฆด๋ฆฌ์ฆˆ์—์„œ์˜ ๋ณ€ํ™”๊ฐ€ ์—†๋Š”์ง€ ํ™•์ธํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค.
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# TorchScript로 내보내기[[export-to-torchscript]]

<Tip>

TorchScript를 활용한 실험은 아직 초기 단계로, 가변적인 입력 크기 모델들을 통해 그 기능성을 계속 탐구하고 있습니다. 이 기능은 저희가 관심을 두고 있는 분야 중 하나이며, 앞으로 출시될 버전에서 더 많은 코드 예제, 더 유연한 구현, 그리고 Python 기반 코드와 컴파일된 TorchScript를 비교하는 벤치마크 등을 통해 분석을 심화할 예정입니다.

</Tip>

[TorchScript 문서](https://pytorch.org/docs/stable/jit.html)에서는 이렇게 말합니다.

> TorchScript는 PyTorch 코드에서 직렬화 및 최적화 가능한 모델을 생성하는 방법입니다.

[JIT과 TRACE](https://pytorch.org/docs/stable/jit.html)는 개발자가 모델을 내보내서 효율 지향적인 C++ 프로그램과 같은 다른 프로그램에서 재사용할 수 있도록 하는 PyTorch 모듈입니다.

PyTorch 기반 Python 프로그램과 다른 환경에서 모델을 재사용할 수 있도록, 🤗 Transformers 모델을 TorchScript로 내보낼 수 있는 인터페이스를 제공합니다.
์ด ๋ฌธ์„œ์—์„œ๋Š” TorchScript๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋‚ด๋ณด๋‚ด๊ณ  ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ ๋‚ด๋ณด๋‚ด๋ ค๋ฉด ๋‘ ๊ฐ€์ง€๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค: - `torchscript` ํ”Œ๋ž˜๊ทธ๋กœ ๋ชจ๋ธ ์ธ์Šคํ„ด์Šคํ™” - ๋”๋ฏธ ์ž…๋ ฅ์„ ์‚ฌ์šฉํ•œ ์ˆœ์ „ํŒŒ(forward pass) ์ด ํ•„์ˆ˜ ์กฐ๊ฑด๋“ค์€ ์•„๋ž˜์— ์ž์„ธํžˆ ์„ค๋ช…๋œ ๊ฒƒ์ฒ˜๋Ÿผ ๊ฐœ๋ฐœ์ž๋“ค์ด ์ฃผ์˜ํ•ด์•ผ ํ•  ์—ฌ๋Ÿฌ ์‚ฌํ•ญ๋“ค์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค. ## TorchScript ํ”Œ๋ž˜๊ทธ์™€ ๋ฌถ์ธ ๊ฐ€์ค‘์น˜(tied weights)[[torchscript-flag-and-tied-weights]] `torchscript` ํ”Œ๋ž˜๊ทธ๊ฐ€ ํ•„์š”ํ•œ ์ด์œ ๋Š” ๋Œ€๋ถ€๋ถ„์˜ ๐Ÿค— Transformers ์–ธ์–ด ๋ชจ๋ธ์—์„œ `Embedding` ๋ ˆ์ด์–ด์™€ `Decoding` ๋ ˆ์ด์–ด ๊ฐ„์˜ ๋ฌถ์ธ ๊ฐ€์ค‘์น˜(tied weights)๊ฐ€ ์กด์žฌํ•˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. TorchScript๋Š” ๋ฌถ์ธ ๊ฐ€์ค‘์น˜๋ฅผ ๊ฐ€์ง„ ๋ชจ๋ธ์„ ๋‚ด๋ณด๋‚ผ ์ˆ˜ ์—†์œผ๋ฏ€๋กœ, ๋ฏธ๋ฆฌ ๊ฐ€์ค‘์น˜๋ฅผ ํ’€๊ณ  ๋ณต์ œํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. `torchscript` ํ”Œ๋ž˜๊ทธ๋กœ ์ธ์Šคํ„ด์Šคํ™”๋œ ๋ชจ๋ธ์€ `Embedding` ๋ ˆ์ด์–ด์™€ `Decoding` ๋ ˆ์ด์–ด๊ฐ€ ๋ถ„๋ฆฌ๋˜์–ด ์žˆ์œผ๋ฏ€๋กœ ์ดํ›„์— ํ›ˆ๋ จํ•ด์„œ๋Š” ์•ˆ ๋ฉ๋‹ˆ๋‹ค. ํ›ˆ๋ จ์„ ํ•˜๊ฒŒ ๋˜๋ฉด ๋‘ ๋ ˆ์ด์–ด ๊ฐ„ ๋™๊ธฐํ™”๊ฐ€ ํ•ด์ œ๋˜์–ด ์˜ˆ์ƒ์น˜ ๋ชปํ•œ ๊ฒฐ๊ณผ๊ฐ€ ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์–ธ์–ด ๋ชจ๋ธ ํ—ค๋“œ๋ฅผ ๊ฐ–์ง€ ์•Š์€ ๋ชจ๋ธ์€ ๊ฐ€์ค‘์น˜๊ฐ€ ๋ฌถ์—ฌ ์žˆ์ง€ ์•Š์•„์„œ ์ด ๋ฌธ์ œ๊ฐ€ ๋ฐœ์ƒํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋ชจ๋ธ๋“ค์€ `torchscript` ํ”Œ๋ž˜๊ทธ ์—†์ด ์•ˆ์ „ํ•˜๊ฒŒ ๋‚ด๋ณด๋‚ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ## ๋”๋ฏธ ์ž…๋ ฅ๊ณผ ํ‘œ์ค€ ๊ธธ์ด[[dummy-inputs-and-standard-lengths]] ๋”๋ฏธ ์ž…๋ ฅ(dummy inputs)์€ ๋ชจ๋ธ์˜ ์ˆœ์ „ํŒŒ(forward pass)์— ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ์ž…๋ ฅ ๊ฐ’์ด ๋ ˆ์ด์–ด๋ฅผ ํ†ตํ•ด ์ „ํŒŒ๋˜๋Š” ๋™์•ˆ, PyTorch๋Š” ๊ฐ ํ…์„œ์—์„œ ์‹คํ–‰๋œ ๋‹ค๋ฅธ ์—ฐ์‚ฐ์„ ์ถ”์ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๊ธฐ๋ก๋œ ์—ฐ์‚ฐ์€ ๋ชจ๋ธ์˜ *์ถ”์ (trace)*์„ ์ƒ์„ฑํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ์ถ”์ ์€ ์ž…๋ ฅ์˜ ์ฐจ์›์„ ๊ธฐ์ค€์œผ๋กœ ์ƒ์„ฑ๋ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ๋”๋ฏธ ์ž…๋ ฅ์˜ ์ฐจ์›์— ์ œํ•œ๋˜์–ด, ๋‹ค๋ฅธ ์‹œํ€€์Šค ๊ธธ์ด๋‚˜ ๋ฐฐ์น˜ ํฌ๊ธฐ์—์„œ๋Š” ์ž‘๋™ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. 
๋‹ค๋ฅธ ํฌ๊ธฐ๋กœ ์‹œ๋„ํ•  ๊ฒฝ์šฐ ๋‹ค์Œ๊ณผ ๊ฐ™์€ ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•ฉ๋‹ˆ๋‹ค: ``` `The expanded size of the tensor (3) must match the existing size (7) at non-singleton dimension 2` ``` ์ถ”๋ก  ์ค‘ ๋ชจ๋ธ์— ๊ณต๊ธ‰๋  ๊ฐ€์žฅ ํฐ ์ž…๋ ฅ๋งŒํผ ํฐ ๋”๋ฏธ ์ž…๋ ฅ ํฌ๊ธฐ๋กœ ๋ชจ๋ธ์„ ์ถ”์ ํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ํŒจ๋”ฉ์€ ๋ˆ„๋ฝ๋œ ๊ฐ’์„ ์ฑ„์šฐ๋Š” ๋ฐ ๋„์›€์ด ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ๋ชจ๋ธ์ด ๋” ํฐ ์ž…๋ ฅ ํฌ๊ธฐ๋กœ ์ถ”์ ๋˜๊ธฐ ๋•Œ๋ฌธ์—, ํ–‰๋ ฌ์˜ ์ฐจ์›์ด ์ปค์ง€๊ณ  ๊ณ„์‚ฐ๋Ÿ‰์ด ๋งŽ์•„์ง‘๋‹ˆ๋‹ค. ๋‹ค์–‘ํ•œ ์‹œํ€€์Šค ๊ธธ์ด ๋ชจ๋ธ์„ ๋‚ด๋ณด๋‚ผ ๋•Œ๋Š” ๊ฐ ์ž…๋ ฅ์— ๋Œ€ํ•ด ์ˆ˜ํ–‰๋˜๋Š” ์ด ์—ฐ์‚ฐ ํšŸ์ˆ˜์— ์ฃผ์˜ํ•˜๊ณ  ์„ฑ๋Šฅ์„ ์ฃผ์˜ ๊นŠ๊ฒŒ ํ™•์ธํ•˜์„ธ์š”. ## Python์—์„œ TorchScript ์‚ฌ์šฉํ•˜๊ธฐ[[using-torchscript-in-python]] ์ด ์„น์…˜์—์„œ๋Š” ๋ชจ๋ธ์„ ์ €์žฅํ•˜๊ณ  ๊ฐ€์ ธ์˜ค๋Š” ๋ฐฉ๋ฒ•, ์ถ”์ ์„ ์‚ฌ์šฉํ•˜์—ฌ ์ถ”๋ก ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค. ### ๋ชจ๋ธ ์ €์žฅํ•˜๊ธฐ[[saving-a-model]] `BertModel`์„ TorchScript๋กœ ๋‚ด๋ณด๋‚ด๋ ค๋ฉด `BertConfig` ํด๋ž˜์Šค์—์„œ `BertModel`์„ ์ธ์Šคํ„ด์Šคํ™”ํ•œ ๋‹ค์Œ, `traced_bert.pt`๋ผ๋Š” ํŒŒ์ผ๋ช…์œผ๋กœ ๋””์Šคํฌ์— ์ €์žฅํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ```python from transformers import BertModel, BertTokenizer, BertConfig import torch enc = BertTokenizer.from_pretrained("bert-base-uncased") # ์ž…๋ ฅ ํ…์ŠคํŠธ ํ† ํฐํ™”ํ•˜๊ธฐ text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]" tokenized_text = enc.tokenize(text) # ์ž…๋ ฅ ํ† ํฐ ์ค‘ ํ•˜๋‚˜๋ฅผ ๋งˆ์Šคํ‚นํ•˜๊ธฐ masked_index = 8 tokenized_text[masked_index] = "[MASK]" indexed_tokens = enc.convert_tokens_to_ids(tokenized_text) segments_ids = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1] # ๋”๋ฏธ ์ž…๋ ฅ ๋งŒ๋“ค๊ธฐ tokens_tensor = torch.tensor([indexed_tokens]) segments_tensors = torch.tensor([segments_ids]) dummy_input = [tokens_tensor, segments_tensors] # torchscript ํ”Œ๋ž˜๊ทธ๋กœ ๋ชจ๋ธ ์ดˆ๊ธฐํ™”ํ•˜๊ธฐ # ์ด ๋ชจ๋ธ์€ LM ํ—ค๋“œ๊ฐ€ ์—†์œผ๋ฏ€๋กœ ํ•„์š”ํ•˜์ง€ ์•Š์ง€๋งŒ, ํ”Œ๋ž˜๊ทธ๋ฅผ True๋กœ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค. 
config = BertConfig( vocab_size_or_config_json_file=32000, hidden_size=768, num_hidden_layers=12, num_attention_heads=12, intermediate_size=3072, torchscript=True, ) # ๋ชจ๋ธ์„ ์ธ์Šคํ„ดํŠธํ™”ํ•˜๊ธฐ model = BertModel(config) # ๋ชจ๋ธ์„ ํ‰๊ฐ€ ๋ชจ๋“œ๋กœ ๋‘์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. model.eval() # ๋งŒ์•ฝ *from_pretrained*๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๋Š” ๊ฒฝ์šฐ, TorchScript ํ”Œ๋ž˜๊ทธ๋ฅผ ์‰ฝ๊ฒŒ ์„ค์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค model = BertModel.from_pretrained("bert-base-uncased", torchscript=True) # ์ถ”์  ์ƒ์„ฑํ•˜๊ธฐ traced_model = torch.jit.trace(model, [tokens_tensor, segments_tensors]) torch.jit.save(traced_model, "traced_bert.pt") ``` ### ๋ชจ๋ธ ๊ฐ€์ ธ์˜ค๊ธฐ[[loading-a-model]] ์ด์ œ ์ด์ „์— ์ €์žฅํ•œ `BertModel`, ์ฆ‰ `traced_bert.pt`๋ฅผ ๋””์Šคํฌ์—์„œ ๊ฐ€์ ธ์˜ค๊ณ , ์ด์ „์— ์ดˆ๊ธฐํ™”ํ•œ `dummy_input`์—์„œ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```python loaded_model = torch.jit.load("traced_bert.pt") loaded_model.eval() all_encoder_layers, pooled_output = loaded_model(*dummy_input) ``` ### ์ถ”์ ๋œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜์—ฌ ์ถ”๋ก ํ•˜๊ธฐ[[using-a-traced-model-for-inference]] `__call__` ์ด์ค‘ ์–ธ๋”์Šค์ฝ”์–ด(dunder) ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ถ”๋ก ์— ์ถ”์ ๋œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜์„ธ์š”: ```python traced_model(tokens_tensor, segments_tensors) ``` ## Neuron SDK๋กœ Hugging Face TorchScript ๋ชจ๋ธ์„ AWS์— ๋ฐฐํฌํ•˜๊ธฐ[[deploy-hugging-face-torchscript-models-to-aws-with-the-neuron-sdk]] AWS๊ฐ€ ํด๋ผ์šฐ๋“œ์—์„œ ์ €๋น„์šฉ, ๊ณ ์„ฑ๋Šฅ ๋จธ์‹  ๋Ÿฌ๋‹ ์ถ”๋ก ์„ ์œ„ํ•œ [Amazon EC2 Inf1](https://aws.amazon.com/ec2/instance-types/inf1/) ์ธ์Šคํ„ด์Šค ์ œํ’ˆ๊ตฐ์„ ์ถœ์‹œํ–ˆ์Šต๋‹ˆ๋‹ค. Inf1 ์ธ์Šคํ„ด์Šค๋Š” ๋”ฅ๋Ÿฌ๋‹ ์ถ”๋ก  ์›Œํฌ๋กœ๋“œ์— ํŠนํ™”๋œ ๋งž์ถค ํ•˜๋“œ์›จ์–ด ๊ฐ€์†๊ธฐ์ธ AWS Inferentia ์นฉ์œผ๋กœ ๊ตฌ๋™๋ฉ๋‹ˆ๋‹ค. [AWS Neuron](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/#)์€ Inferentia๋ฅผ ์œ„ํ•œ SDK๋กœ, Inf1์— ๋ฐฐํฌํ•˜๊ธฐ ์œ„ํ•œ transformers ๋ชจ๋ธ ์ถ”์  ๋ฐ ์ตœ์ ํ™”๋ฅผ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. Neuron SDK๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๊ธฐ๋Šฅ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค: 1. 
์ฝ”๋“œ ํ•œ ์ค„๋งŒ ๋ณ€๊ฒฝํ•˜๋ฉด ํด๋ผ์šฐ๋“œ ์ถ”๋ก ๋ฅผ ์œ„ํ•ด TorchScript ๋ชจ๋ธ์„ ์ถ”์ ํ•˜๊ณ  ์ตœ์ ํ™”ํ•  ์ˆ˜ ์žˆ๋Š” ์‰ฌ์šด API 2. ์ฆ‰์‹œ ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ ์„ฑ๋Šฅ ์ตœ์ ํ™”๋กœ [๋น„์šฉ ํšจ์œจ ํ–ฅ์ƒ](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/benchmark/>) 3. [PyTorch](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/pytorch/bert_tutorial/tutorial_pretrained_bert.html) ๋˜๋Š” [TensorFlow](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/tensorflow/huggingface_bert/huggingface_bert.html)๋กœ ๊ตฌ์ถ•๋œ Hugging Face transformers ๋ชจ๋ธ ์ง€์› ### ์‹œ์‚ฌ์ [[implications]] [BERT (Bidirectional Encoder Representations from Transformers)](https://huggingface.co/docs/transformers/main/model_doc/bert) ์•„ํ‚คํ…์ฒ˜ ๋˜๋Š” ๊ทธ ๋ณ€ํ˜•์ธ [distilBERT](https://huggingface.co/docs/transformers/main/model_doc/distilbert) ๋ฐ [roBERTa](https://huggingface.co/docs/transformers/main/model_doc/roberta)๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•œ Transformers ๋ชจ๋ธ์€ ์ถ”์ถœ ๊ธฐ๋ฐ˜ ์งˆ์˜์‘๋‹ต, ์‹œํ€€์Šค ๋ถ„๋ฅ˜ ๋ฐ ํ† ํฐ ๋ถ„๋ฅ˜์™€ ๊ฐ™์€ ๋น„์ƒ์„ฑ ์ž‘์—… ์‹œ Inf1์—์„œ ์ตœ์ƒ์˜ ์„ฑ๋Šฅ์„ ๋ณด์ž…๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ํ…์ŠคํŠธ ์ƒ์„ฑ ์ž‘์—…๋„ [AWS Neuron MarianMT ํŠœํ† ๋ฆฌ์–ผ](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/pytorch/transformers-marianmt.html)์„ ๋”ฐ๋ผ Inf1์—์„œ ์‹คํ–‰๋˜๋„๋ก ์กฐ์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. Inferentia์—์„œ ๋ฐ”๋กœ ๋ณ€ํ™˜ํ•  ์ˆ˜ ์žˆ๋Š” ๋ชจ๋ธ์— ๋Œ€ํ•œ ์ž์„ธํ•œ ์ •๋ณด๋Š” Neuron ๋ฌธ์„œ์˜ [Model Architecture Fit](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/models/models-inferentia.html#models-inferentia) ์„น์…˜์—์„œ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ### ์ข…์†์„ฑ[[dependencies]] AWS Neuron์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ณ€ํ™˜ํ•˜๋ ค๋ฉด [Neuron SDK ํ™˜๊ฒฝ](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/neuron-frameworks/pytorch-neuron/index.html#installation-guide)์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. 
์ด๋Š” [AWS Deep Learning AMI](https://docs.aws.amazon.com/dlami/latest/devguide/tutorial-inferentia-launching.html)์— ๋ฏธ๋ฆฌ ๊ตฌ์„ฑ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ### AWS Neuron์œผ๋กœ ๋ชจ๋ธ ๋ณ€ํ™˜ํ•˜๊ธฐ[[converting-a-model-for-aws-neuron]] `BertModel`์„ ์ถ”์ ํ•˜๋ ค๋ฉด, [Python์—์„œ TorchScript ์‚ฌ์šฉํ•˜๊ธฐ](torchscript#using-torchscript-in-python)์—์„œ์™€ ๋™์ผํ•œ ์ฝ”๋“œ๋ฅผ ์‚ฌ์šฉํ•ด์„œ AWS NEURON์šฉ ๋ชจ๋ธ์„ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค. `torch.neuron` ํ”„๋ ˆ์ž„์›Œํฌ ์ต์Šคํ…์…˜์„ ๊ฐ€์ ธ์™€ Python API๋ฅผ ํ†ตํ•ด Neuron SDK์˜ ๊ตฌ์„ฑ ์š”์†Œ์— ์ ‘๊ทผํ•ฉ๋‹ˆ๋‹ค: ```python from transformers import BertModel, BertTokenizer, BertConfig import torch import torch.neuron ``` ๋‹ค์Œ ์ค„๋งŒ ์ˆ˜์ •ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค: ```diff - torch.jit.trace(model, [tokens_tensor, segments_tensors]) + torch.neuron.trace(model, [token_tensor, segments_tensors]) ``` ์ด๋กœ์จ Neuron SDK๊ฐ€ ๋ชจ๋ธ์„ ์ถ”์ ํ•˜๊ณ  Inf1 ์ธ์Šคํ„ด์Šค์— ์ตœ์ ํ™”ํ•  ์ˆ˜ ์žˆ๊ฒŒ ๋ฉ๋‹ˆ๋‹ค. AWS Neuron SDK์˜ ๊ธฐ๋Šฅ, ๋„๊ตฌ, ์˜ˆ์ œ ํŠœํ† ๋ฆฌ์–ผ ๋ฐ ์ตœ์‹  ์—…๋ฐ์ดํŠธ์— ๋Œ€ํ•ด ์ž์„ธํžˆ ์•Œ์•„๋ณด๋ ค๋ฉด [AWS NeuronSDK ๋ฌธ์„œ](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/index.html)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”.
<!--- Copyright 2021 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# 성능 및 확장성 [[performance-and-scalability]]

점점 더 큰 규모의 트랜스포머 모델을 훈련하고 프로덕션에 배포하는 데에는 다양한 어려움이 따릅니다. 훈련 중에는 모델이 사용 가능한 GPU 메모리보다 더 많은 메모리를 필요로 하거나 훈련 속도가 매우 느릴 수 있으며, 추론을 위해 배포할 때는 제품 환경에서 요구되는 처리량으로 인해 과부하가 발생할 수 있습니다. 이 문서는 이러한 문제를 극복하고 사용 사례에 가장 적합한 설정을 찾도록 도움을 주기 위해 설계되었습니다. 훈련과 추론으로 가이드를 분할했는데, 이는 각각 다른 문제와 해결 방법이 있기 때문입니다. 그리고 각 가이드에는 다양한 종류의 하드웨어 설정에 대한 별도의 가이드가 있습니다(예: 훈련을 위한 단일 GPU vs 다중 GPU 또는 추론을 위한 CPU vs GPU).

![perf_overview](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/perf_overview.png)

이 문서는 사용자의 상황에 유용할 수 있는 방법들에 대한 개요 및 시작점 역할을 합니다.
## ํ›ˆ๋ จ [[training]] ํšจ์œจ์ ์ธ ํŠธ๋žœ์Šคํฌ๋จธ ๋ชจ๋ธ ํ›ˆ๋ จ์—๋Š” GPU๋‚˜ TPU์™€ ๊ฐ™์€ ๊ฐ€์†๊ธฐ๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ๊ฐ€์žฅ ์ผ๋ฐ˜์ ์ธ ๊ฒฝ์šฐ๋Š” ๋‹จ์ผ GPU๋งŒ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ์ง€๋งŒ, ๋‹ค์ค‘ GPU ๋ฐ CPU ํ›ˆ๋ จ์— ๋Œ€ํ•œ ์„น์…˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค(๊ณง ๋” ๋งŽ์€ ๋‚ด์šฉ์ด ์ถ”๊ฐ€๋  ์˜ˆ์ •). <Tip> ์ฐธ๊ณ : ๋‹จ์ผ GPU ์„น์…˜์—์„œ ์†Œ๊ฐœ๋œ ๋Œ€๋ถ€๋ถ„์˜ ์ „๋žต(์˜ˆ: ํ˜ผํ•ฉ ์ •๋ฐ€๋„ ํ›ˆ๋ จ ๋˜๋Š” ๊ทธ๋ผ๋””์–ธํŠธ ๋ˆ„์ )์€ ์ผ๋ฐ˜์ ์ธ ๋ชจ๋ธ ํ›ˆ๋ จ์—๋„ ์ ์šฉ๋˜๋ฏ€๋กœ, ๋‹ค์ค‘ GPU๋‚˜ CPU ํ›ˆ๋ จ๊ณผ ๊ฐ™์€ ์„น์…˜์„ ์‚ดํŽด๋ณด๊ธฐ ์ „์— ๊ผญ ์ฐธ๊ณ ํ•˜์‹œ๊ธธ ๋ฐ”๋ž๋‹ˆ๋‹ค. </Tip> ### ๋‹จ์ผ GPU [[single-gpu]] ๋‹จ์ผ GPU์—์„œ ๋Œ€๊ทœ๋ชจ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๋Š” ๊ฒƒ์€ ์–ด๋ ค์šธ ์ˆ˜ ์žˆ์ง€๋งŒ, ์ด๋ฅผ ๊ฐ€๋Šฅํ•˜๊ฒŒ ํ•˜๋Š” ์—ฌ๋Ÿฌ ๊ฐ€์ง€ ๋„๊ตฌ์™€ ๋ฐฉ๋ฒ•์ด ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ์„น์…˜์—์„œ๋Š” ํ˜ผํ•ฉ ์ •๋ฐ€๋„ ํ›ˆ๋ จ, ๊ทธ๋ผ๋””์–ธํŠธ ๋ˆ„์  ๋ฐ ์ฒดํฌํฌ์ธํŒ…, ํšจ์œจ์ ์ธ ์˜ตํ‹ฐ๋งˆ์ด์ €, ์ตœ์ ์˜ ๋ฐฐ์น˜ ํฌ๊ธฐ๋ฅผ ๊ฒฐ์ •ํ•˜๊ธฐ ์œ„ํ•œ ์ „๋žต ๋“ฑ์— ๋Œ€ํ•ด ๋…ผ์˜ํ•ฉ๋‹ˆ๋‹ค. [๋‹จ์ผ GPU ํ›ˆ๋ จ ์„น์…˜์œผ๋กœ ์ด๋™](perf_train_gpu_one) ### ๋‹ค์ค‘ GPU [[multigpu]] ๋‹จ์ผ GPU์—์„œ ํ›ˆ๋ จํ•˜๋Š” ๊ฒƒ์ด ๋„ˆ๋ฌด ๋А๋ฆฌ๊ฑฐ๋‚˜ ๋Œ€๊ทœ๋ชจ ๋ชจ๋ธ์— ์ ํ•ฉํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์ค‘ GPU ์„ค์ •์œผ๋กœ ์ „ํ™˜ํ•˜๋Š” ๊ฒƒ์€ ๋…ผ๋ฆฌ์ ์ธ ๋‹จ๊ณ„์ด์ง€๋งŒ, ์—ฌ๋Ÿฌ GPU์—์„œ ํ•œ ๋ฒˆ์— ํ›ˆ๋ จํ•˜๋ ค๋ฉด ๊ฐ GPU๋งˆ๋‹ค ๋ชจ๋ธ์˜ ์ „์ฒด ์‚ฌ๋ณธ์„ ๋‘˜์ง€, ํ˜น์€ ๋ชจ๋ธ ์ž์ฒด๋„ ์—ฌ๋Ÿฌ GPU์— ๋ถ„์‚ฐํ•˜์—ฌ ๋‘˜์ง€ ๋“ฑ ์ƒˆ๋กœ์šด ๊ฒฐ์ •์„ ๋‚ด๋ ค์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด ์„น์…˜์—์„œ๋Š” ๋ฐ์ดํ„ฐ, ํ…์„œ ๋ฐ ํŒŒ์ดํ”„๋ผ์ธ ๋ณ‘๋ ฌํ™”์— ๋Œ€ํ•ด ์‚ดํŽด๋ด…๋‹ˆ๋‹ค. 
[๋‹ค์ค‘ GPU ํ›ˆ๋ จ ์„น์…˜์œผ๋กœ ์ด๋™](perf_train_gpu_many) ### CPU [[cpu]] [CPU ํ›ˆ๋ จ ์„น์…˜์œผ๋กœ ์ด๋™](perf_train_cpu) ### TPU [[tpu]] [_๊ณง ์ œ๊ณต๋  ์˜ˆ์ •_](perf_train_tpu) ### ํŠน์ˆ˜ํ•œ ํ•˜๋“œ์›จ์–ด [[specialized-hardware]] [_๊ณง ์ œ๊ณต๋  ์˜ˆ์ •_](perf_train_special) ## ์ถ”๋ก  [[inference]] ์ œํ’ˆ ๋ฐ ์„œ๋น„์Šค ํ™˜๊ฒฝ์—์„œ ๋Œ€๊ทœ๋ชจ ๋ชจ๋ธ์„ ํšจ์œจ์ ์œผ๋กœ ์ถ”๋ก ํ•˜๋Š” ๊ฒƒ์€ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๋Š” ๊ฒƒ๋งŒํผ ์–ด๋ ค์šธ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด์–ด์ง€๋Š” ์„น์…˜์—์„œ๋Š” CPU ๋ฐ ๋‹จ์ผ/๋‹ค์ค‘ GPU ์„ค์ •์—์„œ ์ถ”๋ก ์„ ์ง„ํ–‰ํ•˜๋Š” ๋‹จ๊ณ„๋ฅผ ์‚ดํŽด๋ด…๋‹ˆ๋‹ค. ### CPU [[cpu]] [CPU ์ถ”๋ก  ์„น์…˜์œผ๋กœ ์ด๋™](perf_infer_cpu) ### ๋‹จ์ผ GPU [[single-gpu]] [๋‹จ์ผ GPU ์ถ”๋ก  ์„น์…˜์œผ๋กœ ์ด๋™](perf_infer_gpu_one) ### ๋‹ค์ค‘ GPU [[multigpu]] [๋‹ค์ค‘ GPU ์ถ”๋ก  ์„น์…˜์œผ๋กœ ์ด๋™](perf_infer_gpu_many) ### ํŠน์ˆ˜ํ•œ ํ•˜๋“œ์›จ์–ด [[specialized-hardware]] [_๊ณง ์ œ๊ณต๋  ์˜ˆ์ •_](perf_infer_special) ## ํ•˜๋“œ์›จ์–ด [[hardware]] ํ•˜๋“œ์›จ์–ด ์„น์…˜์—์„œ๋Š” ์ž์‹ ๋งŒ์˜ ๋”ฅ๋Ÿฌ๋‹ ์žฅ๋น„๋ฅผ ๊ตฌ์ถ•ํ•  ๋•Œ ์œ ์šฉํ•œ ํŒ๊ณผ ์š”๋ น์„ ์‚ดํŽด๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [ํ•˜๋“œ์›จ์–ด ์„น์…˜์œผ๋กœ ์ด๋™](perf_hardware) ## ๊ธฐ์—ฌํ•˜๊ธฐ [[contribute]] ์ด ๋ฌธ์„œ๋Š” ์™„์„ฑ๋˜์ง€ ์•Š์€ ์ƒํƒœ์ด๋ฉฐ, ์ถ”๊ฐ€ํ•ด์•ผ ํ•  ๋‚ด์šฉ์ด๋‚˜ ์ˆ˜์ • ์‚ฌํ•ญ์ด ๋งŽ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์ถ”๊ฐ€ํ•˜๊ฑฐ๋‚˜ ์ˆ˜์ •ํ•  ๋‚ด์šฉ์ด ์žˆ์œผ๋ฉด ์ฃผ์ €ํ•˜์ง€ ๋ง๊ณ  PR์„ ์—ด์–ด ์ฃผ์‹œ๊ฑฐ๋‚˜, ์ž์„ธํ•œ ๋‚ด์šฉ์„ ๋…ผ์˜ํ•˜๊ธฐ ์œ„ํ•ด Issue๋ฅผ ์‹œ์ž‘ํ•ด ์ฃผ์‹œ๊ธฐ ๋ฐ”๋ž๋‹ˆ๋‹ค. A๊ฐ€ B๋ณด๋‹ค ์ข‹๋‹ค๊ณ  ํ•˜๋Š” ๊ธฐ์—ฌ๋ฅผ ํ•  ๋•Œ๋Š”, ์žฌํ˜„ ๊ฐ€๋Šฅํ•œ ๋ฒค์น˜๋งˆํฌ์™€/๋˜๋Š” ํ•ด๋‹น ์ •๋ณด์˜ ์ถœ์ฒ˜ ๋งํฌ๋ฅผ ํฌํ•จํ•ด์ฃผ์„ธ์š”(๋‹น์‹ ์œผ๋กœ๋ถ€ํ„ฐ์˜ ์ง์ ‘์ ์ธ ์ •๋ณด๊ฐ€ ์•„๋‹Œ ๊ฒฝ์šฐ).
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# 다중 GPU에서 효율적인 추론 [[efficient-inference-on-a-multiple-gpus]]

이 문서에는 다중 GPU에서 효율적으로 추론하는 방법에 대한 정보가 포함되어 있습니다.

<Tip>

참고: 다중 GPU 설정은 [단일 GPU 섹션](./perf_infer_gpu_one)에서 설명된 대부분의 전략을 사용할 수 있습니다. 그러나 더 나은 활용을 위해 간단한 기법들을 알아야 합니다.

</Tip>

## 더 빠른 추론을 위한 `BetterTransformer` [[bettertransformer-for-faster-inference]]

우리는 최근 텍스트, 이미지 및 오디오 모델에 대한 다중 GPU에서 더 빠른 추론을 위해 `BetterTransformer`를 통합했습니다. 자세한 내용은 이 통합에 대한 [문서](https://huggingface.co/docs/optimum/bettertransformer/overview)를 확인하십시오.
<!--โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์ปค๋ฎค๋‹ˆํ‹ฐ [[community]] ์ด ํŽ˜์ด์ง€๋Š” ์ปค๋ฎค๋‹ˆํ‹ฐ์—์„œ ๊ฐœ๋ฐœํ•œ ๐Ÿค— Transformers ๋ฆฌ์†Œ์Šค๋ฅผ ์žฌ๊ตฌ์„ฑํ•œ ํŽ˜์ด์ง€์ž…๋‹ˆ๋‹ค. ## ์ปค๋ฎค๋‹ˆํ‹ฐ ๋ฆฌ์†Œ์Šค: [[community-resources]] | ๋ฆฌ์†Œ์Šค | ์„ค๋ช… | ๋งŒ๋“ ์ด | |:----------|:-------------|------:| | [Hugging Face Transformers ์šฉ์–ด์ง‘ ํ”Œ๋ž˜์‹œ์นด๋“œ](https://www.darigovresearch.com/huggingface-transformers-glossary-flashcards) | [Transformers ๋ฌธ์„œ ์šฉ์–ด์ง‘](glossary)์„ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•œ ํ”Œ๋ž˜์‹œ์นด๋“œ ์„ธํŠธ๋กœ, ์ง€์‹์„ ์žฅ๊ธฐ์ ์œผ๋กœ ์œ ์ง€ํ•˜๊ธฐ ์œ„ํ•ด ํŠน๋ณ„ํžˆ ์„ค๊ณ„๋œ ์˜คํ”ˆ์†Œ์Šค ํฌ๋กœ์Šค ํ”Œ๋žซํผ ์•ฑ์ธ [Anki](https://apps.ankiweb.net/)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์‰ฝ๊ฒŒ ํ•™์Šต/์ˆ˜์ •ํ•  ์ˆ˜ ์žˆ๋Š” ํ˜•ํƒœ๋กœ ์ œ์ž‘๋˜์—ˆ์Šต๋‹ˆ๋‹ค. [ํ”Œ๋ž˜์‹œ์นด๋“œ ์‚ฌ์šฉ๋ฒ•์— ๋Œ€ํ•œ ์†Œ๊ฐœ ๋™์˜์ƒ](https://www.youtube.com/watch?v=Dji_h7PILrw)์„ ์ฐธ์กฐํ•˜์„ธ์š”. | [Darigov ๋ฆฌ์„œ์น˜](https://www.darigovresearch.com/) | ## ์ปค๋ฎค๋‹ˆํ‹ฐ ๋…ธํŠธ๋ถ: [[community-notebooks]] | ๋…ธํŠธ๋ถ | ์„ค๋ช… | ๋งŒ๋“ ์ด | | |:----------|:-------------|:-------------|------:| | [๊ฐ€์‚ฌ๋ฅผ ์ƒ์„ฑํ•˜๊ธฐ ์œ„ํ•ด ์‚ฌ์ „ํ›ˆ๋ จ๋œ ํŠธ๋žœ์Šคํฌ๋จธ๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/AlekseyKorshuk/huggingartists) | GPT-2 ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜์—ฌ ์ข‹์•„ํ•˜๋Š” ์•„ํ‹ฐ์ŠคํŠธ์˜ ์Šคํƒ€์ผ๋กœ ๊ฐ€์‚ฌ๋ฅผ ์ƒ์„ฑํ•˜๋Š” ๋ฐฉ๋ฒ• | [Aleksey Korshuk](https://github.com/AlekseyKorshuk) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb) | | [Tensorflow 2๋กœ T5 ํ›ˆ๋ จํ•˜๊ธฐ](https://github.com/snapthat/TF-T5-text-to-text) | Tensorflow 2๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ T5๋ฅผ ํ›ˆ๋ จ์‹œํ‚ค๋Š” ๋ฐฉ๋ฒ•. ์ด ๋…ธํŠธ๋ถ์€ Tensorflow 2๋กœ SQUAD๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๊ตฌํ˜„ํ•œ ์งˆ์˜์‘๋‹ต ์ž‘์—…์„ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค. 
| [Muhammad Harris](https://github.com/HarrisDePerceptron) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/snapthat/TF-T5-text-to-text/blob/master/snapthatT5/notebooks/TF-T5-Datasets%20Training.ipynb) | | [TPU์—์„œ T5 ํ›ˆ๋ จํ•˜๊ธฐ](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) | Transformers์™€ Nlp๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ SQUAD๋กœ T5๋ฅผ ํ›ˆ๋ จํ•˜๋Š” ๋ฐฉ๋ฒ• | [Suraj Patil](https://github.com/patil-suraj) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb#scrollTo=QLGiFCDqvuil) | | [๋ถ„๋ฅ˜ ๋ฐ ๊ฐ๊ด€์‹ ๋ฌธ์ œ๋ฅผ ์œ„ํ•ด T5 ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) | ๋ถ„๋ฅ˜ ๋ฐ ๊ฐ๊ด€์‹ ๋ฌธ์ œ์— ๋งž๊ฒŒ ํ…์ŠคํŠธ-ํ…์ŠคํŠธ ํ˜•์‹์„ ์‚ฌ์šฉํ•˜์—ฌ PyTorch Lightning์œผ๋กœ T5๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Suraj Patil](https://github.com/patil-suraj) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) | | [์ƒˆ๋กœ์šด ๋ฐ์ดํ„ฐ ์„ธํŠธ์™€ ์–ธ์–ด๋กœ DialoGPT ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/ncoop57/i-am-a-nerd/blob/master/_notebooks/2020-05-12-chatbot-part-1.ipynb) | ์ž์œ  ๋Œ€ํ™”ํ˜• ์ฑ—๋ด‡์„ ๋งŒ๋“ค๊ธฐ ์œ„ํ•ด ์ƒˆ๋กœ์šด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋กœ DialoGPT ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Nathan Cooper](https://github.com/ncoop57) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ncoop57/i-am-a-nerd/blob/master/_notebooks/2020-05-12-chatbot-part-1.ipynb) | | [Reformer๋กœ ๊ธด ์‹œํ€€์Šค ๋ชจ๋ธ๋งํ•˜๊ธฐ](https://github.com/patrickvonplaten/notebooks/blob/master/PyTorch_Reformer.ipynb) | Reformer๋กœ ์ตœ๋Œ€ 50๋งŒ ํ† ํฐ์˜ ์‹œํ€€์Šค๋ฅผ ํ›ˆ๋ จํ•˜๋Š” ๋ฐฉ๋ฒ• | [Patrick von Platen](https://github.com/patrickvonplaten) 
| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/PyTorch_Reformer.ipynb) | | [์š”์•ฝ์„ ์œ„ํ•ด BART ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/ohmeow/ohmeow_website/blob/master/posts/2021-05-25-mbart-sequence-classification-with-blurr.ipynb) | blurr๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ fastai๋กœ ์š”์•ฝํ•˜๊ธฐ ์œ„ํ•ด BART๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Wayde Gilliam](https://ohmeow.com/) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ohmeow/ohmeow_website/blob/master/posts/2021-05-25-mbart-sequence-classification-with-blurr.ipynb) | | [๋‹ค๋ฅธ ์‚ฌ๋žŒ์˜ ํŠธ์œ—์œผ๋กœ ์‚ฌ์ „ํ›ˆ๋ จ๋œ ํŠธ๋žœ์Šคํฌ๋จธ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb) | GPT-2 ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜์—ฌ ์ข‹์•„ํ•˜๋Š” ํŠธ์œ„ํ„ฐ ๊ณ„์ • ์Šคํƒ€์ผ๋กœ ํŠธ์œ—์„ ์ƒ์„ฑํ•˜๋Š” ๋ฐฉ๋ฒ• | [Boris Dayma](https://github.com/borisdayma) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb) | | [Weights & Biases๋กœ ๐Ÿค— Hugging Face ๋ชจ๋ธ ์ตœ์ ํ™”ํ•˜๊ธฐ](https://colab.research.google.com/github/wandb/examples/blob/master/colabs/huggingface/Optimize_Hugging_Face_models_with_Weights_%26_Biases.ipynb) | W&B์™€ Hugging Face์˜ ํ†ตํ•ฉ์„ ๋ณด์—ฌ์ฃผ๋Š” ์ „์ฒด ํŠœํ† ๋ฆฌ์–ผ | [Boris Dayma](https://github.com/borisdayma) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/wandb/examples/blob/master/colabs/huggingface/Optimize_Hugging_Face_models_with_Weights_%26_Biases.ipynb) | | [Longformer ์‚ฌ์ „ํ›ˆ๋ จํ•˜๊ธฐ](https://github.com/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb) | ๊ธฐ์กด ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์˜ "๊ธด" ๋ฒ„์ „์„ ๋นŒ๋“œํ•˜๋Š” ๋ฐฉ๋ฒ• | [Iz 
Beltagy](https://beltagy.net) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb) | | [QA๋ฅผ ์œ„ํ•ด Longformer ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/patil-suraj/Notebooks/blob/master/longformer_qa_training.ipynb) | QA ์ž‘์—…์„ ์œ„ํ•ด Longformer๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Suraj Patil](https://github.com/patil-suraj) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patil-suraj/Notebooks/blob/master/longformer_qa_training.ipynb) | | [๐Ÿค— Nlp๋กœ ๋ชจ๋ธ ํ‰๊ฐ€ํ•˜๊ธฐ](https://github.com/patrickvonplaten/notebooks/blob/master/How_to_evaluate_Longformer_on_TriviaQA_using_NLP.ipynb) | `Nlp`๋กœ TriviaQA์—์„œ Longformer๋ฅผ ํ‰๊ฐ€ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1m7eTGlPmLRgoPkkA7rkhQdZ9ydpmsdLE?usp=sharing) | | [๊ฐ์ • ๋ฒ”์œ„ ์ถ”์ถœ์„ ์œ„ํ•ด T5 ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/enzoampil/t5-intro/blob/master/t5_qa_training_pytorch_span_extraction.ipynb) | ๊ฐ์ • ๋ฒ”์œ„ ์ถ”์ถœ์„ ์œ„ํ•ด ํ…์ŠคํŠธ-ํ…์ŠคํŠธ ํ˜•์‹์„ ์‚ฌ์šฉํ•˜์—ฌ PyTorch Lightning์œผ๋กœ T5๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Lorenzo Ampil](https://github.com/enzoampil) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/enzoampil/t5-intro/blob/master/t5_qa_training_pytorch_span_extraction.ipynb) | | [๋‹ค์ค‘ ํด๋ž˜์Šค ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•ด DistilBert ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_multiclass_classification.ipynb) | ๋‹ค์ค‘ ํด๋ž˜์Šค ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•ด PyTorch๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ DistilBert๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Abhishek Kumar Mishra](https://github.com/abhimishra91) | [![Open In 
Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_multiclass_classification.ipynb)| | [๋‹ค์ค‘ ๋ ˆ์ด๋ธ” ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•ด BERT ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_multi_label_classification.ipynb) | ๋‹ค์ค‘ ๋ ˆ์ด๋ธ” ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•ด PyTorch๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ BERT๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Abhishek Kumar Mishra](https://github.com/abhimishra91) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_multi_label_classification.ipynb)| | [์š”์•ฝ์„ ์œ„ํ•ด T5 ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_summarization_wandb.ipynb) | ์š”์•ฝ์„ ์œ„ํ•ด PyTorch๋กœ T5๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  WandB๋กœ ์‹คํ—˜์„ ์ถ”์ ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Abhishek Kumar Mishra](https://github.com/abhimishra91) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_summarization_wandb.ipynb)| | [๋™์  ํŒจ๋”ฉ/๋ฒ„์ผ“ํŒ…์œผ๋กœ Transformers ๋ฏธ์„ธ ์กฐ์ • ์†๋„ ๋†’์ด๊ธฐ](https://github.com/ELS-RD/transformers-notebook/blob/master/Divide_Hugging_Face_Transformers_training_time_by_2_or_more.ipynb)| ๋™์  ํŒจ๋”ฉ/๋ฒ„์ผ“ํŒ…์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ฏธ์„ธ ์กฐ์ • ์†๋„๋ฅผ 2๋ฐฐ๋กœ ๋†’์ด๋Š” ๋ฐฉ๋ฒ• |[Michael Benesty](https://github.com/pommedeterresautee) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1CBfRU1zbfu7-ijiOqAAQUA-RJaxfcJoO?usp=sharing)| |[๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง์„ ์œ„ํ•ด Reformer ์‚ฌ์ „ํ›ˆ๋ จํ•˜๊ธฐ](https://github.com/patrickvonplaten/notebooks/blob/master/Reformer_For_Masked_LM.ipynb)| ์–‘๋ฐฉํ–ฅ ์…€ํ”„ ์–ดํ…์…˜ ๋ ˆ์ด์–ด๋ฅผ ์ด์šฉํ•ด์„œ 
Reformer ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๋Š” ๋ฐฉ๋ฒ• | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1tzzh0i8PgDQGV3SMFUGxM7_gGae3K-uW?usp=sharing)| | [Sci-BERT ํ™•์žฅ ๋ฐ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/lordtt13/word-embeddings/blob/master/COVID-19%20Research%20Data/COVID-SciBERT.ipynb)| CORD ๋ฐ์ดํ„ฐ ์„ธํŠธ๋กœ AllenAI์—์„œ ์‚ฌ์ „ํ›ˆ๋ จ๋œ SciBERT ๋ชจ๋ธ์˜ ์–ดํœ˜๋ฅผ ๋Š˜๋ฆฌ๊ณ  ํŒŒ์ดํ”„๋ผ์ธ์„ ๊ตฌ์ถ•ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Tanmay Thakur](https://github.com/lordtt13) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1rqAR40goxbAfez1xvF3hBJphSCsvXmh8)| | [์š”์•ฝ์„ ์œ„ํ•ด Trainer API๋กœ BlenderBotSmall ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/lordtt13/transformers-experiments/blob/master/Custom%20Tasks/fine-tune-blenderbot_small-for-summarization.ipynb)| ์š”์•ฝ์„ ์œ„ํ•ด Trainer API๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์‚ฌ์šฉ์ž ์ง€์ • ๋ฐ์ดํ„ฐ ์„ธํŠธ๋กœ BlenderBotSmall ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ | [Tanmay Thakur](https://github.com/lordtt13) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/19Wmupuls7mykSGyRN_Qo6lPQhgp56ymq?usp=sharing)| | [ํ†ตํ•ฉ ๊ธฐ์šธ๊ธฐ(Integrated Gradient)๋ฅผ ์ด์šฉํ•˜์—ฌ Electra ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ํ•ด์„ํ•˜๊ธฐ](https://github.com/elsanns/xai-nlp-notebooks/blob/master/electra_fine_tune_interpret_captum_ig.ipynb) | ๊ฐ์ • ๋ถ„์„์„ ์œ„ํ•ด Electra๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  Captum ํ†ตํ•ฉ ๊ธฐ์šธ๊ธฐ๋กœ ์˜ˆ์ธก์„ ํ•ด์„ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Eliza Szczechla](https://elsanns.github.io) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/elsanns/xai-nlp-notebooks/blob/master/electra_fine_tune_interpret_captum_ig.ipynb)| | [Trainer ํด๋ž˜์Šค๋กœ ๋น„์˜์–ด๊ถŒ GPT-2 ๋ชจ๋ธ ๋ฏธ์„ธ ์กฐ์ 
•ํ•˜๊ธฐ](https://github.com/philschmid/fine-tune-GPT-2/blob/master/Fine_tune_a_non_English_GPT_2_Model_with_Huggingface.ipynb) | Trainer ํด๋ž˜์Šค๋กœ ๋น„์˜์–ด๊ถŒ GPT-2 ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Philipp Schmid](https://www.philschmid.de) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/philschmid/fine-tune-GPT-2/blob/master/Fine_tune_a_non_English_GPT_2_Model_with_Huggingface.ipynb)| |[๋‹ค์ค‘ ๋ผ๋ฒจ ๋ถ„๋ฅ˜ ์ž‘์—…์„ ์œ„ํ•ด DistilBERT ๋ชจ๋ธ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/DhavalTaunk08/Transformers_scripts/blob/master/Transformers_multilabel_distilbert.ipynb) | ๋‹ค์ค‘ ๋ผ๋ฒจ ๋ถ„๋ฅ˜ ์ž‘์—…์„ ์œ„ํ•ด DistilBERT ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Dhaval Taunk](https://github.com/DhavalTaunk08) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/DhavalTaunk08/Transformers_scripts/blob/master/Transformers_multilabel_distilbert.ipynb)| |[๋ฌธ์žฅ์Œ ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•ด ALBERT ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/NadirEM/nlp-notebooks/blob/master/Fine_tune_ALBERT_sentence_pair_classification.ipynb) | ๋ฌธ์žฅ์Œ ๋ถ„๋ฅ˜ ์ž‘์—…์„ ์œ„ํ•ด ALBERT ๋ชจ๋ธ ๋˜๋Š” ๋‹ค๋ฅธ BERT ๊ธฐ๋ฐ˜ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Nadir El Manouzi](https://github.com/NadirEM) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NadirEM/nlp-notebooks/blob/master/Fine_tune_ALBERT_sentence_pair_classification.ipynb)| |[๊ฐ์ • ๋ถ„์„์„ ์œ„ํ•ด Roberta ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/DhavalTaunk08/NLP_scripts/blob/master/sentiment_analysis_using_roberta.ipynb) | ๊ฐ์ • ๋ถ„์„์„ ์œ„ํ•ด Roberta ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Dhaval Taunk](https://github.com/DhavalTaunk08) | [![Open In 
Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/DhavalTaunk08/NLP_scripts/blob/master/sentiment_analysis_using_roberta.ipynb)| |[์งˆ๋ฌธ ์ƒ์„ฑ ๋ชจ๋ธ ํ‰๊ฐ€ํ•˜๊ธฐ](https://github.com/flexudy-pipe/qugeev) | seq2seq ํŠธ๋žœ์Šคํฌ๋จธ ๋ชจ๋ธ์ด ์ƒ์„ฑํ•œ ์งˆ๋ฌธ๊ณผ ์ด์— ๋Œ€ํ•œ ๋‹ต๋ณ€์ด ์–ผ๋งˆ๋‚˜ ์ •ํ™•ํ•œ๊ฐ€์š”? | [Pascal Zoleko](https://github.com/zolekode) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1bpsSqCQU-iw_5nNoRm_crPq6FRuJthq_?usp=sharing)| |[DistilBERT์™€ Tensorflow๋กœ ํ…์ŠคํŠธ ๋ถ„๋ฅ˜ํ•˜๊ธฐ](https://github.com/peterbayerle/huggingface_notebook/blob/main/distilbert_tf.ipynb) | ํ…์ŠคํŠธ ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•ด TensorFlow๋กœ DistilBERT๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Peter Bayerle](https://github.com/peterbayerle) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/peterbayerle/huggingface_notebook/blob/main/distilbert_tf.ipynb)| |[CNN/Dailymail ์š”์•ฝ์„ ์œ„ํ•ด ์ธ์ฝ”๋”-๋””์ฝ”๋” ๋ชจ๋ธ์— BERT ํ™œ์šฉํ•˜๊ธฐ](https://github.com/patrickvonplaten/notebooks/blob/master/BERT2BERT_for_CNN_Dailymail.ipynb) | CNN/Dailymail ์š”์•ฝ์„ ์œ„ํ•ด *bert-base-uncased* ์ฒดํฌํฌ์ธํŠธ๋ฅผ ํ™œ์šฉํ•˜์—ฌ *EncoderDecoderModel*์„ ์›Œ๋ฐ์—…ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/BERT2BERT_for_CNN_Dailymail.ipynb)| |[BBC XSum ์š”์•ฝ์„ ์œ„ํ•ด ์ธ์ฝ”๋”-๋””์ฝ”๋” ๋ชจ๋ธ์— RoBERTa ํ™œ์šฉํ•˜๊ธฐ](https://github.com/patrickvonplaten/notebooks/blob/master/RoBERTaShared_for_BBC_XSum.ipynb) | BBC/XSum ์š”์•ฝ์„ ์œ„ํ•ด *roberta-base* ์ฒดํฌํฌ์ธํŠธ๋ฅผ ํ™œ์šฉํ•˜์—ฌ ๊ณต์œ  *EncoderDecoderModel*์„ ์›Œ๋ฐ์—…ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In
Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/RoBERTaShared_for_BBC_XSum.ipynb)| |[์ˆœ์ฐจ์  ์งˆ๋ฌธ ๋‹ต๋ณ€(SQA)์„ ์œ„ํ•ด TAPAS ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Fine_tuning_TapasForQuestionAnswering_on_SQA.ipynb) | *tapas-base* ์ฒดํฌํฌ์ธํŠธ๋ฅผ ํ™œ์šฉํ•˜์—ฌ ์ˆœ์ฐจ์  ์งˆ๋ฌธ ๋‹ต๋ณ€(SQA) ๋ฐ์ดํ„ฐ ์„ธํŠธ๋กœ *TapasForQuestionAnswering*์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Niels Rogge](https://github.com/nielsrogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Fine_tuning_TapasForQuestionAnswering_on_SQA.ipynb)| |[ํ‘œ ์‚ฌ์‹ค ๊ฒ€์‚ฌ(TabFact)๋กœ TAPAS ํ‰๊ฐ€ํ•˜๊ธฐ](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Evaluating_TAPAS_on_the_Tabfact_test_set.ipynb) | ๐Ÿค— Datasets์™€ ๐Ÿค— Transformer ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ํ•จ๊ป˜ ์‚ฌ์šฉํ•˜์—ฌ *tapas-base-finetuned-tabfact* ์ฒดํฌํฌ์ธํŠธ๋กœ ๋ฏธ์„ธ ์กฐ์ •๋œ *TapasForSequenceClassification*์„ ํ‰๊ฐ€ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Niels Rogge](https://github.com/nielsrogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/TAPAS/Evaluating_TAPAS_on_the_Tabfact_test_set.ipynb)| |[๋ฒˆ์—ญ์„ ์œ„ํ•ด mBART ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://colab.research.google.com/github/vasudevgupta7/huggingface-tutorials/blob/main/translation_training.ipynb) | ํžŒ๋””์–ด์—์„œ ์˜์–ด๋กœ ๋ฒˆ์—ญํ•˜๊ธฐ ์œ„ํ•ด Seq2SeqTrainer๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ mBART๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Vasudev Gupta](https://github.com/vasudevgupta7) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/vasudevgupta7/huggingface-tutorials/blob/main/translation_training.ipynb)| |[FUNSD(์–‘์‹ ์ดํ•ด ๋ฐ์ดํ„ฐ 
์„ธํŠธ)๋กœ LayoutLM ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForTokenClassification_on_FUNSD.ipynb) | ์Šค์บ”ํ•œ ๋ฌธ์„œ์—์„œ ์ •๋ณด ์ถ”์ถœ์„ ์œ„ํ•ด FUNSD ๋ฐ์ดํ„ฐ ์„ธํŠธ๋กœ *LayoutLMForTokenClassification*์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Niels Rogge](https://github.com/nielsrogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForTokenClassification_on_FUNSD.ipynb)| |[DistilGPT2 ๋ฏธ์„ธ ์กฐ์ • ๋ฐ ํ…์ŠคํŠธ ์ƒ์„ฑํ•˜๊ธฐ](https://colab.research.google.com/github/tripathiaakash/DistilGPT2-Tutorial/blob/main/distilgpt2_fine_tuning.ipynb) | DistilGPT2๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ํ…์ŠคํŠธ๋ฅผ ์ƒ์„ฑํ•˜๋Š” ๋ฐฉ๋ฒ• | [Aakash Tripathi](https://github.com/tripathiaakash) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tripathiaakash/DistilGPT2-Tutorial/blob/main/distilgpt2_fine_tuning.ipynb)| |[์ตœ๋Œ€ 8K ํ† ํฐ์—์„œ LED ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/patrickvonplaten/notebooks/blob/master/Fine_tune_Longformer_Encoder_Decoder_(LED)_for_Summarization_on_pubmed.ipynb) | ๊ธด ๋ฒ”์œ„๋ฅผ ์š”์•ฝํ•˜๊ธฐ ์œ„ํ•ด PubMed๋กœ LED๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_tune_Longformer_Encoder_Decoder_(LED)_for_Summarization_on_pubmed.ipynb)| |[Arxiv๋กœ LED ํ‰๊ฐ€ํ•˜๊ธฐ](https://github.com/patrickvonplaten/notebooks/blob/master/LED_on_Arxiv.ipynb) | ๊ธด ๋ฒ”์œ„ ์š”์•ฝ์— ๋Œ€ํ•ด LED๋ฅผ ํšจ๊ณผ์ ์œผ๋กœ ํ‰๊ฐ€ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Patrick von Platen](https://github.com/patrickvonplaten) | [![Open In
Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/LED_on_Arxiv.ipynb)| |[RVL-CDIP(๋ฌธ์„œ ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜ ๋ฐ์ดํ„ฐ ์„ธํŠธ)๋กœ LayoutLM ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForSequenceClassification_on_RVL_CDIP.ipynb) | ์Šค์บ” ๋ฌธ์„œ ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•ด RVL-CDIP ๋ฐ์ดํ„ฐ ์„ธํŠธ๋กœ *LayoutLMForSequenceClassification*์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Niels Rogge](https://github.com/nielsrogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/LayoutLM/Fine_tuning_LayoutLMForSequenceClassification_on_RVL_CDIP.ipynb)| |[GPT2 ์กฐ์ •์„ ํ†ตํ•œ Wav2Vec2 CTC ๋””์ฝ”๋”ฉ](https://github.com/voidful/huggingface_notebook/blob/main/xlsr_gpt.ipynb) | ์–ธ์–ด ๋ชจ๋ธ ์กฐ์ •์„ ํ†ตํ•ด CTC ์‹œํ€€์Šค๋ฅผ ๋””์ฝ”๋”ฉํ•˜๋Š” ๋ฐฉ๋ฒ• | [Eric Lam](https://github.com/voidful) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1e_z5jQHYbO2YKEaUgzb1ww1WwiAyydAj?usp=sharing)| |[Trainer ํด๋ž˜์Šค๋กœ ๋‘ ๊ฐœ ์–ธ์–ด๋กœ ์š”์•ฝํ•˜๊ธฐ ์œ„ํ•ด BART ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/elsanns/xai-nlp-notebooks/blob/master/fine_tune_bart_summarization_two_langs.ipynb) | Trainer ํด๋ž˜์Šค๋กœ ๋‘ ๊ฐœ ์–ธ์–ด๋กœ ์š”์•ฝํ•˜๊ธฐ ์œ„ํ•ด BART๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Eliza Szczechla](https://github.com/elsanns) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/elsanns/xai-nlp-notebooks/blob/master/fine_tune_bart_summarization_two_langs.ipynb)| |[Trivia QA๋กœ Big Bird ํ‰๊ฐ€ํ•˜๊ธฐ](https://github.com/patrickvonplaten/notebooks/blob/master/Evaluating_Big_Bird_on_TriviaQA.ipynb) | Trivia QA๋กœ ๊ธด ๋ฌธ์„œ ์งˆ๋ฌธ์— ๋Œ€ํ•œ ๋‹ต๋ณ€์— ๋Œ€ํ•ด BigBird๋ฅผ ํ‰๊ฐ€ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Patrick von
Platen](https://github.com/patrickvonplaten) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Evaluating_Big_Bird_on_TriviaQA.ipynb)| | [Wav2Vec2๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋™์˜์ƒ ์บก์…˜ ๋งŒ๋“ค๊ธฐ](https://github.com/Muennighoff/ytclipcc/blob/main/wav2vec_youtube_captions.ipynb) | Wav2Vec์œผ๋กœ ์˜ค๋””์˜ค๋ฅผ ํ…์ŠคํŠธ๋กœ ๋ณ€ํ™˜ํ•˜์—ฌ ๋ชจ๋“  ๋™์˜์ƒ์—์„œ YouTube ์บก์…˜ ๋งŒ๋“œ๋Š” ๋ฐฉ๋ฒ• | [Niklas Muennighoff](https://github.com/Muennighoff) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Muennighoff/ytclipcc/blob/main/wav2vec_youtube_captions.ipynb) | | [PyTorch Lightning์„ ์‚ฌ์šฉํ•˜์—ฌ CIFAR-10์œผ๋กœ ๋น„์ „ ํŠธ๋žœ์Šคํฌ๋จธ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_PyTorch_Lightning.ipynb) | HuggingFace Transformers, Datasets, PyTorch Lightning์„ ์‚ฌ์šฉํ•˜์—ฌ CIFAR-10์œผ๋กœ ๋น„์ „ ํŠธ๋žœ์Šคํฌ๋จธ(ViT)๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Niels Rogge](https://github.com/nielsrogge) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_PyTorch_Lightning.ipynb) | | [๐Ÿค— Trainer๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ CIFAR-10์—์„œ ๋น„์ „ ํŠธ๋žœ์Šคํฌ๋จธ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_the_%F0%9F%A4%97_Trainer.ipynb) | Datasets, ๐Ÿค— Trainer๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ CIFAR-10์—์„œ ๋น„์ „ ํŠธ๋žœ์Šคํฌ๋จธ(ViT)๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Niels Rogge](https://github.com/nielsrogge) |[![Open In 
Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/VisionTransformer/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_the_%F0%9F%A4%97_Trainer.ipynb) | | [๊ฐœ์ฒด ์ž…๋ ฅ ๋ฐ์ดํ„ฐ ์„ธํŠธ์ธ Open Entity๋กœ LUKE ํ‰๊ฐ€ํ•˜๊ธฐ](https://github.com/studio-ousia/luke/blob/master/notebooks/huggingface_open_entity.ipynb) | Open Entity ๋ฐ์ดํ„ฐ ์„ธํŠธ๋กœ *LukeForEntityClassification*์„ ํ‰๊ฐ€ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Ikuya Yamada](https://github.com/ikuyamada) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/studio-ousia/luke/blob/master/notebooks/huggingface_open_entity.ipynb) | | [๊ด€๊ณ„ ์ถ”์ถœ ๋ฐ์ดํ„ฐ ์„ธํŠธ์ธ TACRED๋กœ LUKE ํ‰๊ฐ€ํ•˜๊ธฐ](https://github.com/studio-ousia/luke/blob/master/notebooks/huggingface_tacred.ipynb) | TACRED ๋ฐ์ดํ„ฐ ์„ธํŠธ๋กœ *LukeForEntityPairClassification*์„ ํ‰๊ฐ€ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Ikuya Yamada](https://github.com/ikuyamada) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/studio-ousia/luke/blob/master/notebooks/huggingface_tacred.ipynb) | | [์ค‘์š” NER ๋ฒค์น˜๋งˆํฌ์ธ CoNLL-2003์œผ๋กœ LUKE ํ‰๊ฐ€ํ•˜๊ธฐ](https://github.com/studio-ousia/luke/blob/master/notebooks/huggingface_conll_2003.ipynb) | CoNLL-2003 ๋ฐ์ดํ„ฐ ์„ธํŠธ๋กœ *LukeForEntitySpanClassification*๋ฅผ ํ‰๊ฐ€ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Ikuya Yamada](https://github.com/ikuyamada) |[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/studio-ousia/luke/blob/master/notebooks/huggingface_conll_2003.ipynb) | | [PubMed ๋ฐ์ดํ„ฐ ์„ธํŠธ๋กœ BigBird-Pegasus ํ‰๊ฐ€ํ•˜๊ธฐ](https://github.com/vasudevgupta7/bigbird/blob/main/notebooks/bigbird_pegasus_evaluation.ipynb) | PubMed ๋ฐ์ดํ„ฐ ์„ธํŠธ๋กœ *BigBirdPegasusForConditionalGeneration*๋ฅผ ํ‰๊ฐ€ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Vasudev Gupta](https://github.com/vasudevgupta7) | 
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/vasudevgupta7/bigbird/blob/main/notebooks/bigbird_pegasus_evaluation.ipynb) |
| [Wav2Vec2๋ฅผ ์‚ฌ์šฉํ•ด์„œ ์Œ์„ฑ ๊ฐ์ • ๋ถ„๋ฅ˜ํ•˜๊ธฐ](https://github.com/m3hrdadfi/soxan/blob/main/notebooks/Emotion_recognition_in_Greek_speech_using_Wav2Vec2.ipynb) | ๊ฐ์ • ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•ด ์‚ฌ์ „ํ›ˆ๋ จ๋œ Wav2Vec2 ๋ชจ๋ธ์„ MEGA ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ํ™œ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ• | [Mehrdad Farahani](https://github.com/m3hrdadfi) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/m3hrdadfi/soxan/blob/main/notebooks/Emotion_recognition_in_Greek_speech_using_Wav2Vec2.ipynb) |
| [DETR๋กœ ์ด๋ฏธ์ง€์—์„œ ๊ฐ์ฒด ํƒ์ง€ํ•˜๊ธฐ](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DETR/DETR_minimal_example_(with_DetrFeatureExtractor).ipynb) | ํ›ˆ๋ จ๋œ *DetrForObjectDetection* ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜์—ฌ ์ด๋ฏธ์ง€์—์„œ ๊ฐ์ฒด๋ฅผ ํƒ์ง€ํ•˜๊ณ  ์–ดํ…์…˜์„ ์‹œ๊ฐํ™”ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Niels Rogge](https://github.com/NielsRogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/DETR/DETR_minimal_example_(with_DetrFeatureExtractor).ipynb) |
| [์‚ฌ์šฉ์ž ์ง€์ • ๊ฐ์ฒด ํƒ์ง€ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋กœ DETR ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DETR/Fine_tuning_DetrForObjectDetection_on_custom_dataset_(balloon).ipynb) | ์‚ฌ์šฉ์ž ์ง€์ • ๊ฐ์ฒด ํƒ์ง€ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋กœ *DetrForObjectDetection*์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Niels Rogge](https://github.com/NielsRogge) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/DETR/Fine_tuning_DetrForObjectDetection_on_custom_dataset_(balloon).ipynb) |
| [๊ฐœ์ฒด๋ช… ์ธ์‹์„ ์œ„ํ•ด T5 ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://github.com/ToluClassics/Notebooks/blob/main/T5_Ner_Finetuning.ipynb) | ๊ฐœ์ฒด๋ช… ์ธ์‹ ์ž‘์—…์„ ์œ„ํ•ด *T5*๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• | [Ogundepo Odunayo](https://github.com/ToluClassics) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1obr78FY_cBmWY5ODViCmzdY6O1KB65Vc?usp=sharing) |
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๋‹จ์ผ GPU์—์„œ ํšจ์œจ์ ์ธ ์ถ”๋ก  [[efficient-inference-on-a-single-gpu]] ์ด ๊ฐ€์ด๋“œ ์™ธ์—๋„, [๋‹จ์ผ GPU์—์„œ์˜ ํ›ˆ๋ จ ๊ฐ€์ด๋“œ](perf_train_gpu_one)์™€ [CPU์—์„œ์˜ ์ถ”๋ก  ๊ฐ€์ด๋“œ](perf_infer_cpu)์—์„œ๋„ ๊ด€๋ จ ์ •๋ณด๋ฅผ ์ฐพ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ## Better Transformer: PyTorch ๋„ค์ดํ‹ฐ๋ธŒ Transformer ํŒจ์ŠคํŠธํŒจ์Šค [[better-transformer-pytorchnative-transformer-fastpath]] PyTorch ๋„ค์ดํ‹ฐ๋ธŒ [`nn.MultiHeadAttention`](https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/) ์–ดํ…์…˜ ํŒจ์ŠคํŠธํŒจ์Šค์ธ BetterTransformer๋Š” [๐Ÿค— Optimum ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ](https://huggingface.co/docs/optimum/bettertransformer/overview)์˜ ํ†ตํ•ฉ์„ ํ†ตํ•ด Transformers์™€ ํ•จ๊ป˜ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. PyTorch์˜ ์–ดํ…์…˜ ํŒจ์ŠคํŠธํŒจ์Šค๋Š” ์ปค๋„ ํ“จ์ „๊ณผ [์ค‘์ฒฉ๋œ ํ…์„œ](https://pytorch.org/docs/stable/nested.html)์˜ ์‚ฌ์šฉ์„ ํ†ตํ•ด ์ถ”๋ก  ์†๋„๋ฅผ ๋†’์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ž์„ธํ•œ ๋ฒค์น˜๋งˆํฌ๋Š” [์ด ๋ธ”๋กœ๊ทธ ๊ธ€](https://medium.com/pytorch/bettertransformer-out-of-the-box-performance-for-huggingface-transformers-3fbe27d50ab2)์—์„œ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
[`optimum`](https://github.com/huggingface/optimum) ํŒจํ‚ค์ง€๋ฅผ ์„ค์น˜ํ•œ ํ›„์—๋Š” ์ถ”๋ก  ์ค‘ Better Transformer๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก [`~PreTrainedModel.to_bettertransformer`]๋ฅผ ํ˜ธ์ถœํ•˜์—ฌ ๊ด€๋ จ ๋‚ด๋ถ€ ๋ชจ๋“ˆ์„ ๋Œ€์ฒดํ•ฉ๋‹ˆ๋‹ค: ```python model = model.to_bettertransformer() ``` [`~PreTrainedModel.reverse_bettertransformer`] ๋ฉ”์†Œ๋“œ๋Š” ์ •๊ทœํ™”๋œ transformers ๋ชจ๋ธ๋ง์„ ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด ๋ชจ๋ธ์„ ์ €์žฅํ•˜๊ธฐ ์ „ ์›๋ž˜์˜ ๋ชจ๋ธ๋ง์œผ๋กœ ๋Œ์•„๊ฐˆ ์ˆ˜ ์žˆ๋„๋ก ํ•ด์ค๋‹ˆ๋‹ค: ```python model = model.reverse_bettertransformer() model.save_pretrained("saved_model") ``` PyTorch 2.0๋ถ€ํ„ฐ๋Š” ์–ดํ…์…˜ ํŒจ์ŠคํŠธํŒจ์Šค๊ฐ€ ์ธ์ฝ”๋”์™€ ๋””์ฝ”๋” ๋ชจ๋‘์—์„œ ์ง€์›๋ฉ๋‹ˆ๋‹ค. ์ง€์›๋˜๋Š” ์•„ํ‚คํ…์ฒ˜ ๋ชฉ๋ก์€ [์—ฌ๊ธฐ](https://huggingface.co/docs/optimum/bettertransformer/overview#supported-models)์—์„œ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ## FP4 ํ˜ผํ•ฉ ์ •๋ฐ€๋„ ์ถ”๋ก ์„ ์œ„ํ•œ `bitsandbytes` ํ†ตํ•ฉ [[bitsandbytes-integration-for-fp4-mixedprecision-inference]] `bitsandbytes`๋ฅผ ์„ค์น˜ํ•˜๋ฉด GPU์—์„œ ์†์‰ฝ๊ฒŒ ๋ชจ๋ธ์„ ์••์ถ•ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. FP4 ์–‘์žํ™”๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ์›๋ž˜์˜ ์ „์ฒด ์ •๋ฐ€๋„ ๋ฒ„์ „๊ณผ ๋น„๊ตํ•˜์—ฌ ๋ชจ๋ธ ํฌ๊ธฐ๋ฅผ ์ตœ๋Œ€ 8๋ฐฐ ์ค„์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜์—์„œ ์‹œ์ž‘ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ™•์ธํ•˜์„ธ์š”. <Tip> ์ด ๊ธฐ๋Šฅ์€ ๋‹ค์ค‘ GPU ์„ค์ •์—์„œ๋„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. </Tip> ### ์š”๊ตฌ ์‚ฌํ•ญ [[requirements-for-fp4-mixedprecision-inference]] - ์ตœ์‹  `bitsandbytes` ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ `pip install bitsandbytes>=0.39.0` - ์ตœ์‹  `accelerate`๋ฅผ ์†Œ์Šค์—์„œ ์„ค์น˜ `pip install git+https://github.com/huggingface/accelerate.git` - ์ตœ์‹  `transformers`๋ฅผ ์†Œ์Šค์—์„œ ์„ค์น˜ `pip install git+https://github.com/huggingface/transformers.git` ### FP4 ๋ชจ๋ธ ์‹คํ–‰ - ๋‹จ์ผ GPU ์„ค์ • - ๋น ๋ฅธ ์‹œ์ž‘ [[running-fp4-models-single-gpu-setup-quickstart]] ๋‹ค์Œ ์ฝ”๋“œ๋ฅผ ์‹คํ–‰ํ•˜์—ฌ ๋‹จ์ผ GPU์—์„œ ๋น ๋ฅด๊ฒŒ FP4 ๋ชจ๋ธ์„ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
```py from transformers import AutoModelForCausalLM model_name = "bigscience/bloom-2b5" model_4bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_4bit=True) ``` `device_map`์€ ์„ ํƒ ์‚ฌํ•ญ์ž…๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ `device_map = 'auto'`๋กœ ์„ค์ •ํ•˜๋Š” ๊ฒƒ์ด ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ ๋ฆฌ์†Œ์Šค๋ฅผ ํšจ์œจ์ ์œผ๋กœ ๋””์ŠคํŒจ์น˜ํ•˜๊ธฐ ๋•Œ๋ฌธ์— ์ถ”๋ก ์— ์žˆ์–ด ๊ถŒ์žฅ๋ฉ๋‹ˆ๋‹ค. ### FP4 ๋ชจ๋ธ ์‹คํ–‰ - ๋‹ค์ค‘ GPU ์„ค์ • [[running-fp4-models-multi-gpu-setup]] ๋‹ค์ค‘ GPU์—์„œ ํ˜ผํ•ฉ 4๋น„ํŠธ ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ค๋Š” ๋ฐฉ๋ฒ•์€ ๋‹จ์ผ GPU ์„ค์ •๊ณผ ๋™์ผํ•ฉ๋‹ˆ๋‹ค(๋™์ผํ•œ ๋ช…๋ น์–ด ์‚ฌ์šฉ): ```py model_name = "bigscience/bloom-2b5" model_4bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_4bit=True) ``` ํ•˜์ง€๋งŒ `accelerate`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๊ฐ GPU์— ํ• ๋‹นํ•  GPU RAM์„ ์ œ์–ดํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ๊ณผ ๊ฐ™์ด `max_memory` ์ธ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”: ```py max_memory_mapping = {0: "600MB", 1: "1GB"} model_name = "bigscience/bloom-3b" model_4bit = AutoModelForCausalLM.from_pretrained( model_name, device_map="auto", load_in_4bit=True, max_memory=max_memory_mapping ) ``` ์ด ์˜ˆ์—์„œ๋Š” ์ฒซ ๋ฒˆ์งธ GPU๊ฐ€ 600MB์˜ ๋ฉ”๋ชจ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜๊ณ  ๋‘ ๋ฒˆ์งธ GPU๊ฐ€ 1GB๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ### ๊ณ ๊ธ‰ ์‚ฌ์šฉ๋ฒ• [[advanced-usage]] ์ด ๋ฐฉ๋ฒ•์˜ ๋” ๊ณ ๊ธ‰ ์‚ฌ์šฉ๋ฒ•์— ๋Œ€ํ•ด์„œ๋Š” [์–‘์žํ™”](main_classes/quantization) ๋ฌธ์„œ ํŽ˜์ด์ง€๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ## Int8 ํ˜ผํ•ฉ ์ •๋ฐ€๋„ ํ–‰๋ ฌ ๋ถ„ํ•ด๋ฅผ ์œ„ํ•œ `bitsandbytes` ํ†ตํ•ฉ [[bitsandbytes-integration-for-int8-mixedprecision-matrix-decomposition]] <Tip> ์ด ๊ธฐ๋Šฅ์€ ๋‹ค์ค‘ GPU ์„ค์ •์—์„œ๋„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. </Tip> [`LLM.int8() : 8-bit Matrix Multiplication for Transformers at Scale`](https://arxiv.org/abs/2208.07339) ๋…ผ๋ฌธ์—์„œ ์šฐ๋ฆฌ๋Š” ๋ช‡ ์ค„์˜ ์ฝ”๋“œ๋กœ Hub์˜ ๋ชจ๋“  ๋ชจ๋ธ์— ๋Œ€ํ•œ Hugging Face ํ†ตํ•ฉ์„ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. 
์ด ๋ฐฉ๋ฒ•์€ `float16` ๋ฐ `bfloat16` ๊ฐ€์ค‘์น˜์— ๋Œ€ํ•ด `nn.Linear` ํฌ๊ธฐ๋ฅผ 2๋ฐฐ๋กœ ์ค„์ด๊ณ , `float32` ๊ฐ€์ค‘์น˜์— ๋Œ€ํ•ด 4๋ฐฐ๋กœ ์ค„์ž…๋‹ˆ๋‹ค. ์ด๋Š” ์ ˆ๋ฐ˜ ์ •๋ฐ€๋„์—์„œ ์ด์ƒ์น˜๋ฅผ ์ฒ˜๋ฆฌํ•จ์œผ๋กœ์จ ํ’ˆ์งˆ์— ๊ฑฐ์˜ ์˜ํ–ฅ์„ ๋ฏธ์น˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ![HFxbitsandbytes.png](https://cdn-uploads.huggingface.co/production/uploads/1659861207959-62441d1d9fdefb55a0b7d12c.png) Int8 ํ˜ผํ•ฉ ์ •๋ฐ€๋„ ํ–‰๋ ฌ ๋ถ„ํ•ด๋Š” ํ–‰๋ ฌ ๊ณฑ์…ˆ์„ ๋‘ ๊ฐœ์˜ ์ŠคํŠธ๋ฆผ์œผ๋กœ ๋ถ„๋ฆฌํ•ฉ๋‹ˆ๋‹ค: (1) fp16๋กœ ๊ณฑํ•ด์ง€๋Š” ์ฒด๊ณ„์ ์ธ ํŠน์ด๊ฐ’ ์ด์ƒ์น˜ ์ŠคํŠธ๋ฆผ ํ–‰๋ ฌ(0.01%) ๋ฐ (2) int8 ํ–‰๋ ฌ ๊ณฑ์…ˆ์˜ ์ผ๋ฐ˜์ ์ธ ์ŠคํŠธ๋ฆผ(99.9%). ์ด ๋ฐฉ๋ฒ•์„ ์‚ฌ์šฉํ•˜๋ฉด ๋งค์šฐ ํฐ ๋ชจ๋ธ์— ๋Œ€ํ•ด ์˜ˆ์ธก ์ €ํ•˜ ์—†์ด int8 ์ถ”๋ก ์ด ๊ฐ€๋Šฅํ•ฉ๋‹ˆ๋‹ค. ์ด ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [๋…ผ๋ฌธ](https://arxiv.org/abs/2208.07339)์ด๋‚˜ [ํ†ตํ•ฉ์— ๊ด€ํ•œ ๋ธ”๋กœ๊ทธ ๊ธ€](https://huggingface.co/blog/hf-bitsandbytes-integration)์—์„œ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ![MixedInt8.gif](https://cdn-uploads.huggingface.co/production/uploads/1660567469965-62441d1d9fdefb55a0b7d12c.gif) ์ปค๋„์€ GPU ์ „์šฉ์œผ๋กœ ์ปดํŒŒ์ผ๋˜์–ด ์žˆ๊ธฐ ๋•Œ๋ฌธ์— ํ˜ผํ•ฉ 8๋น„ํŠธ ๋ชจ๋ธ์„ ์‹คํ–‰ํ•˜๋ ค๋ฉด GPU๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ์ด ๊ธฐ๋Šฅ์„ ์‚ฌ์šฉํ•˜๊ธฐ ์ „์— ๋ชจ๋ธ์˜ 1/4(๋˜๋Š” ๋ชจ๋ธ ๊ฐ€์ค‘์น˜๊ฐ€ ์ ˆ๋ฐ˜ ์ •๋ฐ€๋„์ธ ๊ฒฝ์šฐ ์ ˆ๋ฐ˜)์„ ์ €์žฅํ•  ์ถฉ๋ถ„ํ•œ GPU ๋ฉ”๋ชจ๋ฆฌ๊ฐ€ ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”. ์ด ๋ชจ๋“ˆ์„ ์‚ฌ์šฉํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋˜๋Š” ๋ช‡ ๊ฐ€์ง€ ์ฐธ๊ณ  ์‚ฌํ•ญ์ด ์•„๋ž˜์— ๋‚˜์™€ ์žˆ์Šต๋‹ˆ๋‹ค. ๋˜๋Š” [Google colab](#colab-demos)์—์„œ ๋ฐ๋ชจ๋ฅผ ๋”ฐ๋ผํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ### ์š”๊ตฌ ์‚ฌํ•ญ [[requirements-for-int8-mixedprecision-matrix-decomposition]] - `bitsandbytes<0.37.0`์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ, 8๋น„ํŠธ ํ…์„œ ์ฝ”์–ด(Turing, Ampere ๋˜๋Š” ์ดํ›„ ์•„ํ‚คํ…์ฒ˜ - ์˜ˆ: T4, RTX20s RTX30s, A40-A100)๋ฅผ ์ง€์›ํ•˜๋Š” NVIDIA GPU์—์„œ ์‹คํ–‰ํ•˜๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”. `bitsandbytes>=0.37.0`์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ, ๋ชจ๋“  GPU๊ฐ€ ์ง€์›๋ฉ๋‹ˆ๋‹ค. 
- ์˜ฌ๋ฐ”๋ฅธ ๋ฒ„์ „์˜ `bitsandbytes`๋ฅผ ๋‹ค์Œ ๋ช…๋ น์œผ๋กœ ์„ค์น˜ํ•˜์„ธ์š”: `pip install bitsandbytes>=0.31.5` - `accelerate`๋ฅผ ์„ค์น˜ํ•˜์„ธ์š” `pip install accelerate>=0.12.0` ### ํ˜ผํ•ฉ Int8 ๋ชจ๋ธ ์‹คํ–‰ - ๋‹จ์ผ GPU ์„ค์ • [[running-mixedint8-models-single-gpu-setup]] ํ•„์š”ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์„ค์น˜ํ•œ ํ›„ ํ˜ผํ•ฉ 8๋น„ํŠธ ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ค๋Š” ๋ฐฉ๋ฒ•์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```py from transformers import AutoModelForCausalLM model_name = "bigscience/bloom-2b5" model_8bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_8bit=True) ``` ํ…์ŠคํŠธ ์ƒ์„ฑ์˜ ๊ฒฝ์šฐ: * `pipeline()` ํ•จ์ˆ˜ ๋Œ€์‹  ๋ชจ๋ธ์˜ `generate()` ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. `pipeline()` ํ•จ์ˆ˜๋กœ๋Š” ์ถ”๋ก ์ด ๊ฐ€๋Šฅํ•˜์ง€๋งŒ, ํ˜ผํ•ฉ 8๋น„ํŠธ ๋ชจ๋ธ์— ์ตœ์ ํ™”๋˜์ง€ ์•Š์•˜๊ธฐ ๋•Œ๋ฌธ์— `generate()` ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ๋ณด๋‹ค ๋А๋ฆด ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ, nucleus ์ƒ˜ํ”Œ๋ง๊ณผ ๊ฐ™์€ ์ผ๋ถ€ ์ƒ˜ํ”Œ๋ง ์ „๋žต์€ ํ˜ผํ•ฉ 8๋น„ํŠธ ๋ชจ๋ธ์— ๋Œ€ํ•ด `pipeline()` ํ•จ์ˆ˜์—์„œ ์ง€์›๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. * ์ž…๋ ฅ์„ ๋ชจ๋ธ๊ณผ ๋™์ผํ•œ GPU์— ๋ฐฐ์น˜ํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. 
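The two-stream idea described earlier in this section (a small fp16 stream for outlier feature dimensions, an int8 stream for everything else) can be sketched in pure Python. This is a conceptual illustration only: the threshold handling, the per-vector absmax quantization, and all function names below are simplifications assumed for the sketch, not the actual `bitsandbytes` CUDA kernels.

```python
def absmax_quantize(values):
    """Quantize a list of floats to int8 (-127..127) with a single absmax scale."""
    scale = max(abs(v) for v in values) or 1.0
    return [round(v / scale * 127) for v in values], scale


def mixed_int8_matvec(weight_rows, x, outlier_threshold=6.0):
    """Matrix-vector product split into an int8 stream and a float outlier stream."""
    n = len(x)
    # (1) Detect outlier input dimensions: magnitudes above the threshold
    # stay in full precision instead of being quantized.
    outliers = {j for j in range(n) if abs(x[j]) >= outlier_threshold}
    regular = [j for j in range(n) if j not in outliers]
    out = []
    for row in weight_rows:
        acc = 0.0
        if regular:
            # (2) Regular stream: quantize weights and inputs to int8,
            # accumulate in integers, then dequantize with both scales.
            qw, sw = absmax_quantize([row[j] for j in regular])
            qx, sx = absmax_quantize([x[j] for j in regular])
            int_acc = sum(a * b for a, b in zip(qw, qx))
            acc += int_acc * (sw / 127.0) * (sx / 127.0)
        # (3) Outlier stream: plain float multiplication, no quantization.
        acc += sum(row[j] * x[j] for j in outliers)
        out.append(acc)
    return out


W = [[0.5, -0.2, 1.0], [0.1, 0.4, -0.3]]
x = [0.3, 8.0, -0.1]  # the second dimension is an outlier
print(mixed_int8_matvec(W, x))
```

Because the few outlier dimensions bypass quantization, the result stays close to the exact float product even though most of the arithmetic runs in int8.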
๋‹ค์Œ์€ ๊ฐ„๋‹จํ•œ ์˜ˆ์ž…๋‹ˆ๋‹ค: ```py from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "bigscience/bloom-2b5" tokenizer = AutoTokenizer.from_pretrained(model_name) model_8bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_8bit=True) prompt = "Hello, my llama is cute" inputs = tokenizer(prompt, return_tensors="pt").to("cuda") generated_ids = model.generate(**inputs) outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True) ``` ### ํ˜ผํ•ฉ Int8 ๋ชจ๋ธ ์‹คํ–‰ - ๋‹ค์ค‘ GPU ์„ค์ • [[running-mixedint8-models-multi-gpu-setup]] ๋‹ค์ค‘ GPU์—์„œ ํ˜ผํ•ฉ 8๋น„ํŠธ ๋ชจ๋ธ์„ ๋กœ๋“œํ•˜๋Š” ๋ฐฉ๋ฒ•์€ ๋‹จ์ผ GPU ์„ค์ •๊ณผ ๋™์ผํ•ฉ๋‹ˆ๋‹ค(๋™์ผํ•œ ๋ช…๋ น์–ด ์‚ฌ์šฉ): ```py model_name = "bigscience/bloom-2b5" model_8bit = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", load_in_8bit=True) ``` ํ•˜์ง€๋งŒ `accelerate`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๊ฐ GPU์— ํ• ๋‹นํ•  GPU RAM์„ ์ œ์–ดํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ๊ณผ ๊ฐ™์ด `max_memory` ์ธ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”: ```py max_memory_mapping = {0: "1GB", 1: "2GB"} model_name = "bigscience/bloom-3b" model_8bit = AutoModelForCausalLM.from_pretrained( model_name, device_map="auto", load_in_8bit=True, max_memory=max_memory_mapping ) ``` ์ด ์˜ˆ์‹œ์—์„œ๋Š” ์ฒซ ๋ฒˆ์งธ GPU๊ฐ€ 1GB์˜ ๋ฉ”๋ชจ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜๊ณ  ๋‘ ๋ฒˆ์งธ GPU๊ฐ€ 2GB๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ### Colab ๋ฐ๋ชจ [[colab-demos]] ์ด ๋ฐฉ๋ฒ•์„ ์‚ฌ์šฉํ•˜๋ฉด ์ด์ „์— Google Colab์—์„œ ์ถ”๋ก ํ•  ์ˆ˜ ์—†์—ˆ๋˜ ๋ชจ๋ธ์— ๋Œ€ํ•ด ์ถ”๋ก ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
Google Colab์—์„œ 8๋น„ํŠธ ์–‘์žํ™”๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ T5-11b(42GB in fp32)๋ฅผ ์‹คํ–‰ํ•˜๋Š” ๋ฐ๋ชจ๋ฅผ ํ™•์ธํ•˜์„ธ์š”: [![Open In Colab: T5-11b demo](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1YORPWx4okIHXnjW7MSAidXN29mPVNT7F?usp=sharing) ๋˜๋Š” BLOOM-3B์— ๋Œ€ํ•œ ๋ฐ๋ชจ๋ฅผ ํ™•์ธํ•˜์„ธ์š”: [![Open In Colab: BLOOM-3b demo](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1qOjXfQIAULfKvZqwCen8-MoWKGdSatZ4?usp=sharing)
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# ๋‹ค์ค‘ GPU์—์„œ ํšจ์œจ์ ์ธ ํ›ˆ๋ จ [[efficient-training-on-multiple-gpus]]

๋‹จ์ผ GPU์—์„œ์˜ ํ›ˆ๋ จ์ด ๋„ˆ๋ฌด ๋А๋ฆฌ๊ฑฐ๋‚˜ ๋ชจ๋ธ ๊ฐ€์ค‘์น˜๊ฐ€ ๋‹จ์ผ GPU์˜ ๋ฉ”๋ชจ๋ฆฌ์— ๋งž์ง€ ์•Š๋Š” ๊ฒฝ์šฐ, ๋‹ค์ค‘-GPU ์„ค์ •์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ๋‹จ์ผ GPU์—์„œ ๋‹ค์ค‘ GPU๋กœ ์ „ํ™˜ํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” ์ž‘์—…์„ ๋ถ„์‚ฐํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ, ํ…์„œ ๋˜๋Š” ํŒŒ์ดํ”„๋ผ์ธ๊ณผ ๊ฐ™์€ ๋ณ‘๋ ฌํ™” ๊ธฐ๋ฒ•์„ ์‚ฌ์šฉํ•˜์—ฌ ์ž‘์—…์„ ๋ณ‘๋ ฌ๋กœ ์ฒ˜๋ฆฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ด๋Ÿฌํ•œ ์„ค์ •์„ ๋ชจ๋‘์—๊ฒŒ ์ ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ์™„๋ฒฝํ•œ ํ•ด๊ฒฐ์ฑ…์€ ์—†์œผ๋ฉฐ, ์–ด๋–ค ์„ค์ •์ด ๊ฐ€์žฅ ์ ํ•ฉํ•œ์ง€๋Š” ์‚ฌ์šฉํ•˜๋Š” ํ•˜๋“œ์›จ์–ด์— ๋”ฐ๋ผ ๋‹ฌ๋ผ์ง‘๋‹ˆ๋‹ค. ์ด ๋ฌธ์„œ๋Š” ์ฃผ๋กœ PyTorch ๊ธฐ๋ฐ˜์˜ ๊ตฌํ˜„์„ ์ค‘์‹ฌ์œผ๋กœ ์„ค๋ช…ํ•˜๋ฉฐ, ๋Œ€๋ถ€๋ถ„์˜ ๊ฐœ๋…์€ ๋‹ค๋ฅธ ํ”„๋ ˆ์ž„์›Œํฌ์—๋„ ์ ์šฉ๋  ์ˆ˜ ์žˆ์„ ๊ฒƒ์œผ๋กœ ์˜ˆ์ƒ๋ฉ๋‹ˆ๋‹ค.

<Tip>

์ฐธ๊ณ : [๋‹จ์ผ GPU ์„น์…˜](perf_train_gpu_one)์—์„œ ์†Œ๊ฐœ๋œ ์ „๋žต(ํ˜ผํ•ฉ ์ •๋ฐ€๋„ ํ›ˆ๋ จ ๋˜๋Š” ๊ทธ๋ž˜๋””์–ธํŠธ ๋ˆ„์  ๋“ฑ)์€ ์ผ๋ฐ˜์ ์œผ๋กœ ๋ชจ๋ธ ํ›ˆ๋ จ์— ์ ์šฉ๋˜๋ฉฐ, ๋‹ค์ค‘-GPU ๋˜๋Š” CPU ํ›ˆ๋ จ๊ณผ ๊ฐ™์€ ๋‹ค์Œ ์„น์…˜์œผ๋กœ ์ง„์ž…ํ•˜๊ธฐ ์ „์— ํ•ด๋‹น ์„น์…˜์„ ์ฐธ๊ณ ํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค.
</Tip> ๋จผ์ € 1D ๋ณ‘๋ ฌํ™” ๊ธฐ์ˆ ์— ๋Œ€ํ•ด ์ž์„ธํžˆ ๋…ผ์˜ํ•œ ํ›„, ์ด๋Ÿฌํ•œ ๊ธฐ์ˆ ์„ ๊ฒฐํ•ฉํ•˜์—ฌ 2D ๋ฐ 3D ๋ณ‘๋ ฌํ™”๋ฅผ ๊ตฌํ˜„ํ•˜์—ฌ ๋” ๋น ๋ฅธ ํ›ˆ๋ จ๊ณผ ๋” ํฐ ๋ชจ๋ธ์„ ์ง€์›ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์‚ดํŽด๋ณผ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋˜ํ•œ ๋‹ค๋ฅธ ํšจ๊ณผ์ ์ธ ๋Œ€์•ˆ ๋ฐฉ์‹๋„ ์†Œ๊ฐœ๋  ์˜ˆ์ •์ž…๋‹ˆ๋‹ค. ## ๊ฐœ๋… [[concepts]] ๋‹ค์Œ์€ ์ด ๋ฌธ์„œ์—์„œ ์ž์„ธํžˆ ์„ค๋ช…๋  ์ฃผ์š” ๊ฐœ๋…์— ๋Œ€ํ•œ ๊ฐ„๋‹จํ•œ ์„ค๋ช…์ž…๋‹ˆ๋‹ค. 1. **DataParallel (DP)** - ๋™์ผํ•œ ์„ค์ •์ด ์—ฌ๋Ÿฌ ๋ฒˆ ๋ณต์ œ๋˜๊ณ , ๊ฐ ์„ค์ •์— ๋ฐ์ดํ„ฐ ์ผ๋ถ€๋ฅผ ๋ฐ›์Šต๋‹ˆ๋‹ค. ์ฒ˜๋ฆฌ๋Š” ๋ณ‘๋ ฌ๋กœ ์ˆ˜ํ–‰๋˜๋ฉฐ ๋ชจ๋“  ์„ค์ •์€ ๊ฐ ํ›ˆ๋ จ ๋‹จ๊ณ„์˜ ๋๋‚  ๋•Œ ๋™๊ธฐํ™”๋ฉ๋‹ˆ๋‹ค. 2. **TensorParallel (TP)** - ๊ฐ ํ…์„œ๋Š” ์—ฌ๋Ÿฌ ๊ฐœ์˜ ๋ฌถ์Œ์œผ๋กœ ๋ถ„ํ• ๋˜๊ธฐ์—, ์ „์ฒด ํ…์„œ๊ฐ€ ๋‹จ์ผ GPU์— ์ƒ์ฃผํ•˜๋Š” ๋Œ€์‹  ํ…์„œ์˜ ๊ฐ ์ƒค๋“œ๊ฐ€ ์ง€์ •๋œ GPU์— ์ƒ์ฃผํ•ฉ๋‹ˆ๋‹ค. ์ฒ˜๋ฆฌํ•˜๋Š” ๋™์•ˆ ๊ฐ ์ƒค๋“œ๋Š” ์„œ๋กœ ๋‹ค๋ฅธ GPU์—์„œ ๊ฐœ๋ณ„์ ์œผ๋กœ ๋ณ‘๋ ฌ ์ฒ˜๋ฆฌ๋˜๋ฉฐ ๊ฒฐ๊ณผ๋Š” ๋‹จ๊ณ„๊ฐ€ ๋๋‚  ๋•Œ ๋™๊ธฐํ™”๋ฉ๋‹ˆ๋‹ค. ๋ถ„ํ• ์ด ์ˆ˜ํ‰ ์ˆ˜์ค€์—์„œ ์ด๋ฃจ์–ด์ง€๊ธฐ ๋•Œ๋ฌธ์— ์ด๋ฅผ ์ˆ˜ํ‰ ๋ณ‘๋ ฌ ์ฒ˜๋ฆฌ๋ผ๊ณ  ๋ถ€๋ฅผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 3. **PipelineParallel (PP)** - ๋ชจ๋ธ์ด ์ˆ˜์ง์œผ๋กœ (๋ ˆ์ด์–ด ์ˆ˜์ค€) ์—ฌ๋Ÿฌ GPU์— ๋ถ„ํ• ๋˜์–ด ๋ชจ๋ธ์˜ ๋‹จ์ผ GPU์—๋Š” ํ•˜๋‚˜ ๋˜๋Š” ์—ฌ๋Ÿฌ ๋ ˆ์ด์–ด๊ฐ€ ๋ฐฐ์น˜๋ฉ๋‹ˆ๋‹ค. ๊ฐ GPU๋Š” ํŒŒ์ดํ”„๋ผ์ธ์˜ ์„œ๋กœ ๋‹ค๋ฅธ ๋‹จ๊ณ„๋ฅผ ๋ณ‘๋ ฌ๋กœ ์ฒ˜๋ฆฌํ•˜๋ฉฐ ์ž‘์€ ๋ฐฐ์น˜ ๋ฌถ์Œ์—์„œ ์ž‘๋™ํ•ฉ๋‹ˆ๋‹ค. 4. **Zero Redundancy Optimizer (ZeRO)** - TP์™€ ์œ ์‚ฌํ•˜๊ฒŒ ํ…์„œ๋ฅผ ์ƒค๋”ฉํ•˜์ง€๋งŒ, ์ „์ฒด ํ…์„œ๋Š” ์ˆœ๋ฐฉํ–ฅ ๋˜๋Š” ์—ญ๋ฐฉํ–ฅ ๊ณ„์‚ฐ์„ ์œ„ํ•ด ์žฌ๊ตฌ์„ฑ๋˜๋ฏ€๋กœ ๋ชจ๋ธ์„ ์ˆ˜์ •ํ•  ํ•„์š”๊ฐ€ ์—†์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ ์ œํ•œ๋œ GPU ๋ฉ”๋ชจ๋ฆฌ๋ฅผ ๋ณด์™„ํ•˜๊ธฐ ์œ„ํ•ด ๋‹ค์–‘ํ•œ ์˜คํ”„๋กœ๋“œ ๊ธฐ์ˆ ์„ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. 5. **Sharded DDP** - ZeRO์˜ ๊ธฐ๋ณธ ๊ฐœ๋…์œผ๋กœ ๋‹ค๋ฅธ ZeRO ๊ตฌํ˜„์—์„œ๋„ ์‚ฌ์šฉ๋˜๋Š” ์šฉ์–ด์ž…๋‹ˆ๋‹ค. 
๊ฐ ๊ฐœ๋…์˜ ๊ตฌ์ฒด์ ์ธ ๋‚ด์šฉ์— ๋Œ€ํ•ด ์ž์„ธํžˆ ๋“ค์–ด๊ฐ€๊ธฐ ์ „์— ๋Œ€๊ทœ๋ชจ ์ธํ”„๋ผ์—์„œ ๋Œ€๊ทœ๋ชจ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๋Š” ๊ฒฝ์šฐ์˜ ๋Œ€๋žต์ ์ธ ๊ฒฐ์ • ๊ณผ์ •์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ## ํ™•์žฅ์„ฑ ์ „๋žต [[scalability-strategy]] **โ‡จ ๋‹จ์ผ ๋…ธ๋“œ / ๋‹ค์ค‘-GPU** * ๋ชจ๋ธ์ด ๋‹จ์ผ GPU์— ๋งž๋Š” ๊ฒฝ์šฐ: 1. DDP - ๋ถ„์‚ฐ DP 2. ZeRO - ์ƒํ™ฉ๊ณผ ๊ตฌ์„ฑ์— ๋”ฐ๋ผ ๋” ๋น ๋ฅผ ์ˆ˜๋„ ์žˆ๊ณ  ๊ทธ๋ ‡์ง€ ์•Š์„ ์ˆ˜๋„ ์žˆ์Œ * ๋ชจ๋ธ์ด ๋‹จ์ผ GPU์— ๋งž์ง€ ์•Š๋Š” ๊ฒฝ์šฐ: 1. PP 2. ZeRO 3. TP ๋…ธ๋“œ ๋‚ด ์—ฐ๊ฒฐ ์†๋„๊ฐ€ ๋งค์šฐ ๋น ๋ฅธ NVLINK ๋˜๋Š” NVSwitch์˜ ๊ฒฝ์šฐ ์„ธ ๊ฐ€์ง€ ๋ฐฉ๋ฒ•์€ ๋Œ€๋ถ€๋ถ„ ๋น„์Šทํ•œ ์„ฑ๋Šฅ์„ ๋ณด์—ฌ์•ผ ํ•˜๋ฉฐ, PP๊ฐ€ ์—†๋Š” ๊ฒฝ์šฐ TP ๋˜๋Š” ZeRO๋ณด๋‹ค ๋น ๋ฅผ ๊ฒƒ์ž…๋‹ˆ๋‹ค. TP์˜ ์ •๋„๋„ ์ฐจ์ด๋ฅผ ๋งŒ๋“ค ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํŠน์ • ์„ค์ •์—์„œ ์Šน์ž๋ฅผ ์ฐพ๊ธฐ ์œ„ํ•ด ์‹คํ—˜ํ•˜๋Š” ๊ฒƒ์ด ๊ฐ€์žฅ ์ข‹์Šต๋‹ˆ๋‹ค. TP๋Š” ๊ฑฐ์˜ ํ•ญ์ƒ ๋‹จ์ผ ๋…ธ๋“œ ๋‚ด์—์„œ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ์ฆ‰, TP ํฌ๊ธฐ <= ๋…ธ๋“œ๋‹น GPU ์ˆ˜์ž…๋‹ˆ๋‹ค. * ๊ฐ€์žฅ ํฐ ๋ ˆ์ด์–ด๊ฐ€ ๋‹จ์ผ GPU์— ๋งž์ง€ ์•Š๋Š” ๊ฒฝ์šฐ: 1. ZeRO๋ฅผ ์‚ฌ์šฉํ•˜์ง€ ์•Š๋Š” ๊ฒฝ์šฐ - PP๋งŒ์œผ๋กœ๋Š” ๋งž์ง€ ์•Š์œผ๋ฏ€๋กœ TP๋ฅผ ๋ฐ˜๋“œ์‹œ ์‚ฌ์šฉํ•ด์•ผ ํ•จ 2. ZeRO๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ์—๋Š” ์œ„์˜ "๋‹จ์ผ GPU" ํ•ญ๋ชฉ๊ณผ ๋™์ผ **โ‡จ ๋‹ค์ค‘ ๋…ธ๋“œ / ๋‹ค์ค‘ GPU** * ๋…ธ๋“œ ๊ฐ„ ์—ฐ๊ฒฐ ์†๋„๊ฐ€ ๋น ๋ฅธ ๊ฒฝ์šฐ: 1. ZeRO - ๋ชจ๋ธ์— ๋Œ€๋ถ€๋ถ„์˜ ์ˆ˜์ •์„ ํ•„์š”๋กœ ํ•˜์ง€ ์•Š์Œ 2. PP+TP+DP - ํ†ต์‹ ์ด ์ ์ง€๋งŒ ๋ชจ๋ธ์— ๋Œ€๋Œ€์ ์ธ ๋ณ€๊ฒฝ์ด ํ•„์š”ํ•จ * ๋…ธ๋“œ ๊ฐ„ ์—ฐ๊ฒฐ ์†๋„๊ฐ€ ๋А๋ฆฌ๋ฉฐ, GPU ๋ฉ”๋ชจ๋ฆฌ๊ฐ€ ์—ฌ์ „ํžˆ ๋ถ€์กฑํ•œ ๊ฒฝ์šฐ: 1. DP+PP+TP+ZeRO-1 ## ๋ฐ์ดํ„ฐ ๋ณ‘๋ ฌํ™” [[data-parallelism]] 2๊ฐœ์˜ GPU๋งŒ์œผ๋กœ๋„ ๋Œ€๋ถ€๋ถ„์˜ ์‚ฌ์šฉ์ž๋“ค์€ `DataParallel` (DP)๊ณผ `DistributedDataParallel` (DDP)์„ ํ†ตํ•ด ํ–ฅ์ƒ๋œ ํ›ˆ๋ จ ์†๋„๋ฅผ ๋ˆ„๋ฆด ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Š” PyTorch์˜ ๋‚ด์žฅ ๊ธฐ๋Šฅ์ž…๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์œผ๋กœ DDP๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ์ข‹์œผ๋ฉฐ, DP๋Š” ์ผ๋ถ€ ๋ชจ๋ธ์—์„œ ์ž‘๋™ํ•˜์ง€ ์•Š์„ ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ ์ฃผ์˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 
[PyTorch ๋ฌธ์„œ](https://pytorch.org/docs/master/generated/torch.nn.DataParallel.html)์—์„œ๋„ DDP์˜ ์‚ฌ์šฉ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ### DP vs DDP [[dp-vs-ddp]] `DistributedDataParallel` (DDP)์€ ์ผ๋ฐ˜์ ์œผ๋กœ `DataParallel` (DP)๋ณด๋‹ค ๋น ๋ฅด์ง€๋งŒ, ํ•ญ์ƒ ๊ทธ๋ ‡์ง€๋Š” ์•Š์Šต๋‹ˆ๋‹ค: * DP๋Š” ํŒŒ์ด์ฌ ์Šค๋ ˆ๋“œ ๊ธฐ๋ฐ˜์ธ ๋ฐ˜๋ฉด, DDP๋Š” ๋‹ค์ค‘ ํ”„๋กœ์„ธ์Šค ๊ธฐ๋ฐ˜์ด๊ธฐ ๋•Œ๋ฌธ์— GIL๊ณผ ๊ฐ™์€ ํŒŒ์ด์ฌ ์Šค๋ ˆ๋“œ ์ œํ•œ์ด ์—†์Šต๋‹ˆ๋‹ค. * ๊ทธ๋Ÿฌ๋‚˜ GPU ์นด๋“œ ๊ฐ„์˜ ๋А๋ฆฐ ์ƒํ˜ธ ์—ฐ๊ฒฐ์„ฑ์€ DDP๋กœ ์ธํ•ด ์‹ค์ œ๋กœ ๋А๋ฆฐ ๊ฒฐ๊ณผ๋ฅผ ๋‚ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๋‘ ๋ชจ๋“œ ๊ฐ„์˜ GPU ๊ฐ„ ํ†ต์‹  ์˜ค๋ฒ„ํ—ค๋“œ์˜ ์ฃผ์š” ์ฐจ์ด์ ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: [DDP](https://pytorch.org/docs/master/notes/ddp.html): - ์‹œ์ž‘ํ•  ๋•Œ, ์ฃผ ํ”„๋กœ์„ธ์Šค๊ฐ€ ๋ชจ๋ธ์„ gpu 0์—์„œ ๋‹ค๋ฅธ ๋ชจ๋“  gpu๋กœ ๋ณต์ œํ•ฉ๋‹ˆ๋‹ค. - ๊ทธ๋Ÿฐ ๋‹ค์Œ ๊ฐ ๋ฐฐ์น˜์— ๋Œ€ํ•ด: 1. ๊ฐ gpu๋Š” ์ž์ฒด ๋ฏธ๋‹ˆ ๋ฐฐ์น˜ ๋ฐ์ดํ„ฐ๋ฅผ ์ง์ ‘ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. 2. `backward` ๋™์•ˆ ๋กœ์ปฌ ๊ทธ๋ž˜๋””์–ธํŠธ๊ฐ€ ์ค€๋น„๋˜๋ฉด, ๋ชจ๋“  ํ”„๋กœ์„ธ์Šค์— ํ‰๊ท ํ™”๋ฉ๋‹ˆ๋‹ค. [DP](https://pytorch.org/docs/master/generated/torch.nn.DataParallel.html): ๊ฐ ๋ฐฐ์น˜์— ๋Œ€ํ•ด: 1. gpu 0์€ ๋ฐ์ดํ„ฐ ๋ฐฐ์น˜๋ฅผ ์ฝ๊ณ  ๊ฐ gpu์— ๋ฏธ๋‹ˆ ๋ฐฐ์น˜๋ฅผ ๋ณด๋ƒ…๋‹ˆ๋‹ค. 2. ์—…๋ฐ์ดํŠธ๋œ ๋ชจ๋ธ์„ gpu 0์—์„œ ๊ฐ gpu๋กœ ๋ณต์ œํ•ฉ๋‹ˆ๋‹ค. 3. `forward`๋ฅผ ์‹คํ–‰ํ•˜๊ณ  ๊ฐ gpu์˜ ์ถœ๋ ฅ์„ gpu 0์œผ๋กœ ๋ณด๋‚ด๊ณ  ์†์‹ค์„ ๊ณ„์‚ฐํ•ฉ๋‹ˆ๋‹ค. 4. gpu 0์—์„œ ๋ชจ๋“  gpu๋กœ ์†์‹ค์„ ๋ถ„์‚ฐํ•˜๊ณ  `backward`๋ฅผ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. 5. ๊ฐ gpu์—์„œ ๊ทธ๋ž˜๋””์–ธํŠธ๋ฅผ gpu 0์œผ๋กœ ๋ณด๋‚ด๊ณ  ์ด๋ฅผ ํ‰๊ท ํ™”ํ•ฉ๋‹ˆ๋‹ค. DDP๋Š” ๊ฐ ๋ฐฐ์น˜๋งˆ๋‹ค ๊ทธ๋ž˜๋””์–ธํŠธ๋ฅผ ๋ณด๋‚ด๋Š” ํ†ต์‹ ๋งŒ์„ ์ˆ˜ํ–‰ํ•˜๋ฉฐ, DP๋Š” ๋ฐฐ์น˜๋งˆ๋‹ค 5๊ฐœ์˜ ๋‹ค๋ฅธ ๋ฐ์ดํ„ฐ ๊ตํ™˜์„ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค. DP๋Š” ํŒŒ์ด์ฌ ์Šค๋ ˆ๋“œ๋ฅผ ํ†ตํ•ด ํ”„๋กœ์„ธ์Šค ๋‚ด์—์„œ ๋ฐ์ดํ„ฐ๋ฅผ ๋ณต์ œํ•˜๋ฉฐ, DDP๋Š” [torch.distributed](https://pytorch.org/docs/master/distributed.html)๋ฅผ ํ†ตํ•ด ๋ฐ์ดํ„ฐ๋ฅผ ๋ณต์ œํ•ฉ๋‹ˆ๋‹ค. 
DP์—์„œ๋Š” gpu 0์ด ๋‹ค๋ฅธ gpu๋ณด๋‹ค ํ›จ์”ฌ ๋” ๋งŽ์€ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•˜๋ฏ€๋กœ, gpu์˜ ํ™œ์šฉ๋„๊ฐ€ ๋‚ฎ์•„์ง‘๋‹ˆ๋‹ค. DDP๋Š” ์—ฌ๋Ÿฌ ๋Œ€์˜ ์ปดํ“จํ„ฐ์—์„œ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์ง€๋งŒ, DP์˜ ๊ฒฝ์šฐ๋Š” ๊ทธ๋ ‡์ง€ ์•Š์Šต๋‹ˆ๋‹ค. DP์™€ DDP ์‚ฌ์ด์—๋Š” ๋‹ค๋ฅธ ์ฐจ์ด์ ์ด ์žˆ์ง€๋งŒ, ์ด ํ† ๋ก ๊ณผ๋Š” ๊ด€๋ จ์ด ์—†์Šต๋‹ˆ๋‹ค. ์ด 2๊ฐ€์ง€ ๋ชจ๋“œ๋ฅผ ๊นŠ๊ฒŒ ์ดํ•ดํ•˜๊ณ  ์‹ถ๋‹ค๋ฉด, [์ด ๋ฌธ์„œ](https://www.telesens.co/2019/04/04/distributed-data-parallel-training-using-pytorch-on-aws/)๋ฅผ ๊ฐ•๋ ฅํžˆ ์ถ”์ฒœํ•ฉ๋‹ˆ๋‹ค. ์ด ๋ฌธ์„œ๋Š” ๋ฉ‹์ง„ ๋‹ค์ด์–ด๊ทธ๋žจ์„ ํฌํ•จํ•˜๊ณ  ์žˆ์œผ๋ฉฐ, ๋‹ค์–‘ํ•œ ํ•˜๋“œ์›จ์–ด์—์„œ ์—ฌ๋Ÿฌ ๋ฒค์น˜๋งˆํฌ์™€ ํ”„๋กœํŒŒ์ผ๋Ÿฌ ์ถœ๋ ฅ์„ ์„ค๋ช…ํ•˜์—ฌ ํ•„์š”ํ•œ ์„ธ๋ถ€ ์‚ฌํ•ญ์„ ๋ชจ๋‘ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. ์‹ค์ œ ๋ฒค์น˜๋งˆํฌ๋ฅผ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: | Type | NVlink | Time | | :----- | ----- | ---: | | 2:DP | Y | 110s | | 2:DDP | Y | 101s | | 2:DDP | N | 131s | ๋ถ„์„: ์—ฌ๊ธฐ์„œ DP๋Š” NVlink๊ฐ€ ์žˆ๋Š” DDP๋ณด๋‹ค ์•ฝ 10% ๋А๋ฆฝ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ NVlink๊ฐ€ ์—†๋Š” DDP๋ณด๋‹ค ์•ฝ 15% ๋น ๋ฆ…๋‹ˆ๋‹ค. ์‹ค์ œ ์ฐจ์ด๋Š” ๊ฐ GPU๊ฐ€ ๋‹ค๋ฅธ GPU์™€ ๋™๊ธฐํ™”ํ•ด์•ผ ํ•˜๋Š” ๋ฐ์ดํ„ฐ ์–‘์— ๋”ฐ๋ผ ๋‹ฌ๋ผ์งˆ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋™๊ธฐํ™”ํ•  ๋ฐ์ดํ„ฐ๊ฐ€ ๋งŽ์„์ˆ˜๋ก ๋А๋ฆฐ ๋งํฌ๊ฐ€ ์ด ์‹คํ–‰ ์‹œ๊ฐ„์„ ๋Šฆ์ถœ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ์€ ์ „์ฒด ๋ฒค์น˜๋งˆํฌ ์ฝ”๋“œ์™€ ์ถœ๋ ฅ์ž…๋‹ˆ๋‹ค: ํ•ด๋‹น ๋ฒค์น˜๋งˆํฌ์—์„œ `NCCL_P2P_DISABLE=1`์„ ์‚ฌ์šฉํ•˜์—ฌ NVLink ๊ธฐ๋Šฅ์„ ๋น„ํ™œ์„ฑํ™”ํ–ˆ์Šต๋‹ˆ๋‹ค. 
```
# DP
rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 \
python examples/pytorch/language-modeling/run_clm.py \
--model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \
--do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200

{'train_runtime': 110.5948, 'train_samples_per_second': 1.808, 'epoch': 0.69}

# DDP w/ NVlink
rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 \
torchrun --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py \
--model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \
--do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200

{'train_runtime': 101.9003, 'train_samples_per_second': 1.963, 'epoch': 0.69}

# DDP w/o NVlink
rm -r /tmp/test-clm; NCCL_P2P_DISABLE=1 CUDA_VISIBLE_DEVICES=0,1 \
torchrun --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py \
--model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \
--do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200

{'train_runtime': 131.4367, 'train_samples_per_second': 1.522, 'epoch': 0.69}
```

ํ•˜๋“œ์›จ์–ด: ๊ฐ๊ฐ 24GB ๋ฉ”๋ชจ๋ฆฌ๋ฅผ ๊ฐ€์ง„ TITAN RTX 2๊ฐœ, 2๊ฐœ์˜ NVLink๋กœ ์—ฐ๊ฒฐ (`nvidia-smi topo -m`์—์„œ `NV2`์ž…๋‹ˆ๋‹ค.)

์†Œํ”„ํŠธ์›จ์–ด: `pytorch-1.8-to-be` + `cuda-11.0` / `transformers==4.3.0.dev0`

## ZeRO ๋ฐ์ดํ„ฐ ๋ณ‘๋ ฌํ™” [[zero-data-parallelism]]

ZeRO๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•œ ๋ฐ์ดํ„ฐ ๋ณ‘๋ ฌํ™” (ZeRO-DP)๋Š” ๋‹ค์Œ [๋ธ”๋กœ๊ทธ ๊ธ€](https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/)์˜ ๋‹ค์ด์–ด๊ทธ๋žจ์—์„œ ์„ค๋ช…๋˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค.

![DeepSpeed-Image-1](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/parallelism-zero.png)

์ด ๊ฐœ๋…์€ ์ดํ•ดํ•˜๊ธฐ ์–ด๋ ค์šธ ์ˆ˜ ์žˆ์ง€๋งŒ, ์‹ค์ œ๋กœ๋Š” ๋งค์šฐ ๊ฐ„๋‹จํ•œ ๊ฐœ๋…์ž…๋‹ˆ๋‹ค.
์ด๋Š” ์ผ๋ฐ˜์ ์ธ `DataParallel` (DP)๊ณผ ๋™์ผํ•˜์ง€๋งŒ, ์ „์ฒด ๋ชจ๋ธ ๋งค๊ฐœ๋ณ€์ˆ˜, ๊ทธ๋ž˜๋””์–ธํŠธ ๋ฐ ์˜ตํ‹ฐ๋งˆ์ด์ € ์ƒํƒœ๋ฅผ ๋ณต์ œํ•˜๋Š” ๋Œ€์‹  ๊ฐ GPU๋Š” ๊ทธ ์ค‘ ์ผ๋ถ€๋งŒ ์ €์žฅํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ์‹คํ–‰ ์‹œ๊ฐ„์—๋Š” ์ฃผ์–ด์ง„ ๋ ˆ์ด์–ด์— ๋Œ€ํ•ด ์ „์ฒด ๋ ˆ์ด์–ด ๋งค๊ฐœ๋ณ€์ˆ˜๊ฐ€ ํ•„์š”ํ•  ๋•Œ ๊ฐ GPU๊ฐ€ ์„œ๋กœ์—๊ฒŒ ํ•„์š”ํ•œ ๋ถ€๋ถ„์„ ์ œ๊ณตํ•˜๊ธฐ ์œ„ํ•ด ๋™๊ธฐํ™”๋ฉ๋‹ˆ๋‹ค - ๊ทธ๊ฒŒ ์ „๋ถ€์ž…๋‹ˆ๋‹ค. ๊ฐ๊ฐ 3๊ฐœ์˜ ๋ ˆ์ด์–ด์™€ 3๊ฐœ์˜ ๋งค๊ฐœ๋ณ€์ˆ˜๊ฐ€ ์žˆ๋Š” ๊ฐ„๋‹จํ•œ ๋ชจ๋ธ์„ ์ƒ๊ฐํ•ด ๋ด…์‹œ๋‹ค: ``` La | Lb | Lc ---|----|--- a0 | b0 | c0 a1 | b1 | c1 a2 | b2 | c2 ``` ๋ ˆ์ด์–ด La์—๋Š” ๊ฐ€์ค‘์น˜ a0, a1 ๋ฐ a2๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. 3๊ฐœ์˜ GPU๊ฐ€ ์žˆ๋Š” ๊ฒฝ์šฐ, Sharded DDP (= Zero-DP)๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ๋ชจ๋ธ์„ 3๊ฐœ์˜ GPU์— ๋ถ„ํ• ํ•ฉ๋‹ˆ๋‹ค: ``` GPU0: La | Lb | Lc ---|----|--- a0 | b0 | c0 GPU1: La | Lb | Lc ---|----|--- a1 | b1 | c1 GPU2: La | Lb | Lc ---|----|--- a2 | b2 | c2 ``` ์ผ๋ฐ˜์ ์ธ DNN ๋‹ค์ด์–ด๊ทธ๋žจ์„ ์ƒ์ƒํ•ด๋ณด๋ฉด ์ด๋Š” ํ…์„œ ๋ณ‘๋ ฌ ์ฒ˜๋ฆฌ์™€ ๊ฐ™์€ ์ˆ˜ํ‰ ์Šฌ๋ผ์ด์‹ฑ์ž…๋‹ˆ๋‹ค. ์ˆ˜์ง ์Šฌ๋ผ์ด์‹ฑ์€ ์ „์ฒด ๋ ˆ์ด์–ด ๊ทธ๋ฃน์„ ๋‹ค๋ฅธ GPU์— ๋ฐฐ์น˜ํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด๋Š” ์‹œ์ž‘์— ๋ถˆ๊ณผํ•ฉ๋‹ˆ๋‹ค. ์ด์ œ ์ด๋Ÿฌํ•œ ๊ฐ๊ฐ์˜ GPU๋Š” DP์—์„œ ์ž‘๋™ํ•˜๋Š” ๊ฒƒ๊ณผ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ ์ผ๋ฐ˜์ ์ธ ๋ฏธ๋‹ˆ ๋ฐฐ์น˜๋ฅผ ๋ฐ›์Šต๋‹ˆ๋‹ค: ``` x0 => GPU0 x1 => GPU1 x2 => GPU2 ``` ์ž…๋ ฅ์€ ์ˆ˜์ •๋˜์ง€ ์•Š์€ ์ƒํƒœ๋กœ ์ผ๋ฐ˜ ๋ชจ๋ธ์— ์˜ํ•ด ์ฒ˜๋ฆฌ๋  ๊ฒƒ์œผ๋กœ ๊ฐ„์ฃผํ•ฉ๋‹ˆ๋‹ค. ๋จผ์ €, ์ž…๋ ฅ์€ ๋ ˆ์ด์–ด La์— ๋„๋‹ฌํ•ฉ๋‹ˆ๋‹ค. GPU0์—๋งŒ ์ง‘์ค‘ํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. x0์€ ์ˆœ๋ฐฉํ–ฅ ๊ฒฝ๋กœ๋ฅผ ์ˆ˜ํ–‰ํ•˜๊ธฐ ์œ„ํ•ด a0, a1, a2 ํŒŒ๋ผ๋ฏธํ„ฐ๊ฐ€ ํ•„์š”ํ•˜์ง€๋งŒ GPU0์—๋Š” a0๋งŒ ์žˆ์Šต๋‹ˆ๋‹ค. GPU1์—์„œ a1์„, GPU2์—์„œ a2๋ฅผ ์ „์†ก๋ฐ›์•„ ๋ชจ๋ธ์˜ ๋ชจ๋“  ์กฐ๊ฐ์„ ํ•˜๋‚˜๋กœ ๋ชจ์๋‹ˆ๋‹ค. ๋ณ‘๋ ฌ์ ์œผ๋กœ, GPU1์€ ๋ฏธ๋‹ˆ ๋ฐฐ์น˜ x1์„ ๋ฐ›๊ณ  a1๋งŒ ๊ฐ€์ง€๊ณ  ์žˆ์ง€๋งŒ, a0 ๋ฐ a2 ๋งค๊ฐœ๋ณ€์ˆ˜๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ GPU0 ๋ฐ GPU2์—์„œ ์ด๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. GPU2๋„ ๋™์ผํ•œ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค. 
์ž…๋ ฅ x2๋ฅผ ๋ฐ›๊ณ  GPU0 ๋ฐ GPU1์—์„œ ๊ฐ๊ฐ a0๊ณผ a1์„, ๊ทธ๋ฆฌ๊ณ  ์ž์‹ ์˜ a2์™€ ํ•จ๊ป˜ ์ „์ฒด ํ…์„œ๋ฅผ ๋ณต์›ํ•ฉ๋‹ˆ๋‹ค.

3๊ฐœ์˜ GPU๋Š” ๋ณต์›๋œ ์ „์ฒด ํ…์„œ๋ฅผ ๋ฐ›๊ณ  forward๊ฐ€ ์ˆ˜ํ–‰๋ฉ๋‹ˆ๋‹ค.

๊ณ„์‚ฐ์ด ์™„๋ฃŒ๋˜๋ฉด ๋” ์ด์ƒ ํ•„์š”ํ•˜์ง€ ์•Š์€ ๋ฐ์ดํ„ฐ๋Š” ์‚ญ์ œ๋˜๊ณ , ํ•ด๋‹น ๋ฐ์ดํ„ฐ๋Š” ๊ณ„์‚ฐ ์ค‘์—๋งŒ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ๋ณต์›์€ ์‚ฌ์ „ ๊ฐ€์ ธ์˜ค๊ธฐ(pre-fetch)๋ฅผ ํ†ตํ•ด ํšจ์œจ์ ์œผ๋กœ ์ˆ˜ํ–‰๋ฉ๋‹ˆ๋‹ค.

๊ทธ๋ฆฌ๊ณ  ์ „์ฒด ํ”„๋กœ์„ธ์Šค๋Š” ๋ ˆ์ด์–ด Lb์— ๋Œ€ํ•ด ๋ฐ˜๋ณต๋˜๊ณ , ๊ทธ ๋‹ค์Œ Lc๋กœ ์ˆœ๋ฐฉํ–ฅ์œผ๋กœ, ๊ทธ๋‹ค์Œ์€ ์—ญ๋ฐฉํ–ฅ์œผ๋กœ Lc -> Lb -> La๋กœ ๋ฐ˜๋ณต๋ฉ๋‹ˆ๋‹ค.

๊ฐœ์ธ์ ์œผ๋กœ ์ด๊ฒƒ์€ ํšจ์œจ์ ์ธ ๊ทธ๋ฃน ๋ฐฐ๋‚ญ ์—ฌํ–‰์ž์˜ ์ค‘๋Ÿ‰ ๋ถ„๋ฐฐ ์ „๋žต์ฒ˜๋Ÿผ ๋“ค๋ฆฝ๋‹ˆ๋‹ค:

1. ์‚ฌ๋žŒ A๊ฐ€ ํ…ํŠธ๋ฅผ ์šด๋ฐ˜ํ•ฉ๋‹ˆ๋‹ค.
2. ์‚ฌ๋žŒ B๊ฐ€ ๋‚œ๋กœ๋ฅผ ์šด๋ฐ˜ํ•ฉ๋‹ˆ๋‹ค.
3. ์‚ฌ๋žŒ C๊ฐ€ ๋„๋ผ๋ฅผ ์šด๋ฐ˜ํ•ฉ๋‹ˆ๋‹ค.

์ด์ œ ๋งค์ผ ๋ฐค ๊ฐ์ž ๊ฐ€์ง„ ๊ฒƒ์„ ๋‹ค๋ฅธ ์‚ฌ๋žŒ๋“ค๊ณผ ๊ณต์œ ํ•˜๊ณ , ๊ฐ€์ง€์ง€ ์•Š์€ ๊ฒƒ์€ ๋‹ค๋ฅธ ์‚ฌ๋žŒ๋“ค๋กœ๋ถ€ํ„ฐ ๋ฐ›๊ณ , ์•„์นจ์—๋Š” ํ• ๋‹น๋œ ์œ ํ˜•์˜ ์žฅ๋น„๋ฅผ ์‹ธ๊ณ  ๊ณ„์†ํ•ด์„œ ์—ฌํ–‰์„ ์ง„ํ–‰ํ•ฉ๋‹ˆ๋‹ค. ์ด๊ฒƒ์ด Sharded DDP / Zero DP์ž…๋‹ˆ๋‹ค.

์ด ์ „๋žต์„ ๊ฐ๊ฐ ์ž์‹ ์˜ ํ…ํŠธ, ๋‚œ๋กœ ๋ฐ ๋„๋ผ๋ฅผ ๊ฐœ๋ณ„์ ์œผ๋กœ ์šด๋ฐ˜ํ•ด์•ผ ํ•˜๋Š” ๋‹จ์ˆœํ•œ ์ „๋žต๊ณผ ๋น„๊ตํ•ด๋ณด๋ฉด, ๋‹จ์ˆœํ•œ ์ „๋žต ์ชฝ์ด ํ›จ์”ฌ ๋น„ํšจ์œจ์ ์ผ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๊ทธ ๋‹จ์ˆœํ•œ ์ „๋žต์ด ๋ฐ”๋กœ Pytorch์˜ DataParallel (DP ๋ฐ DDP)์ž…๋‹ˆ๋‹ค.

์ด ์ฃผ์ œ์— ๋Œ€ํ•ด ๋…ผ๋ฌธ์„ ์ฝ์„ ๋•Œ ๋‹ค์Œ ๋™์˜์–ด๋ฅผ ๋งŒ๋‚  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: Sharded, Partitioned.

ZeRO๊ฐ€ ๋ชจ๋ธ ๊ฐ€์ค‘์น˜๋ฅผ ๋ถ„ํ• ํ•˜๋Š” ๋ฐฉ์‹์„ ์ž์„ธํžˆ ์‚ดํŽด๋ณด๋ฉด, ํ…์„œ ๋ณ‘๋ ฌํ™”์™€ ๋งค์šฐ ์œ ์‚ฌํ•œ ๊ฒƒ์„ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Š” ์ดํ›„์— ์„ค๋ช…๋  ์ˆ˜์ง ๋ชจ๋ธ ๋ณ‘๋ ฌํ™”์™€๋Š” ๋‹ฌ๋ฆฌ ๊ฐ ๋ ˆ์ด์–ด์˜ ๊ฐ€์ค‘์น˜๋ฅผ ๋ถ„ํ• (์ƒค๋”ฉ)ํ•˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค.

๊ตฌํ˜„:

- [DeepSpeed](https://www.deepspeed.ai/tutorials/zero/)๋Š” 1๋‹จ๊ณ„ + 2๋‹จ๊ณ„ + 3๋‹จ๊ณ„์˜ ZeRO-DP๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค.
- [Fairscale](https://github.com/facebookresearch/fairscale/#optimizer-state-sharding-zero)์€ 1๋‹จ๊ณ„ + 2๋‹จ๊ณ„ + 3๋‹จ๊ณ„์˜ ZeRO-DP๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค.
- [`transformers` ํ†ตํ•ฉ](main_classes/trainer#trainer-integrations) ## ๋„ค์ดํ‹ฐ๋ธŒ ๋ชจ๋ธ ๋ณ‘๋ ฌ ์ฒ˜๋ฆฌ(์ˆ˜์ง์ ) ๋ฐ ํŒŒ์ดํ”„๋ผ์ธ ๋ณ‘๋ ฌ ์ฒ˜๋ฆฌ[[naive-model-parallelism-vertical-and-pipeline-parallelism]] Naive Model Parallelism (MP)์€ ๋ชจ๋ธ ๋ ˆ์ด์–ด ๊ทธ๋ฃน์„ ๋‹ค์ค‘ GPU์— ๋ถ„์‚ฐํ•˜๋Š” ๋ฐฉ์‹์ž…๋‹ˆ๋‹ค. ๋ฉ”์ปค๋‹ˆ์ฆ˜์€ ์ƒ๋Œ€์ ์œผ๋กœ ๊ฐ„๋‹จํ•ฉ๋‹ˆ๋‹ค. ์›ํ•˜๋Š” ๋ ˆ์ด์–ด๋ฅผ `.to()`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์›ํ•˜๋Š” ์žฅ์น˜๋กœ ์ „ํ™˜ํ•˜๋ฉด ๋ฐ์ดํ„ฐ๊ฐ€ ํ•ด๋‹น ๋ ˆ์ด์–ด๋กœ ๋“ค์–ด์˜ค๊ณ  ๋‚˜๊ฐˆ ๋•Œ ๋ฐ์ดํ„ฐ๋„ ๋ ˆ์ด์–ด์™€ ๋™์ผํ•œ ์žฅ์น˜๋กœ ์ „ํ™˜๋˜๊ณ  ๋‚˜๋จธ์ง€๋Š” ์ˆ˜์ •๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๋Œ€๋ถ€๋ถ„์˜ ๋ชจ๋ธ์ด ๊ทธ๋ ค์ง€๋Š” ๋ฐฉ์‹์ด ๋ ˆ์ด์–ด๋ฅผ ์„ธ๋กœ๋กœ ์Šฌ๋ผ์ด์Šคํ•˜๊ธฐ ๋•Œ๋ฌธ์— ์ด๋ฅผ ์ˆ˜์ง ๋ชจ๋ธ ๋ณ‘๋ ฌํ™”๋ผ๊ณ  ๋ถ€๋ฆ…๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๋‹ค์Œ ๋‹ค์ด์–ด๊ทธ๋žจ์€ 8๋ ˆ์ด์–ด ๋ชจ๋ธ์„ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค: ``` =================== =================== | 0 | 1 | 2 | 3 | | 4 | 5 | 6 | 7 | =================== =================== gpu0 gpu1 ``` ์šฐ๋ฆฌ๋Š” ๋ชจ๋ธ์„ ์ˆ˜์ง์œผ๋กœ 2๊ฐœ๋กœ ๋ถ„ํ• ํ•˜์—ฌ ๋ ˆ์ด์–ด 0-3์„ GPU0์— ๋ฐฐ์น˜ํ•˜๊ณ  ๋ ˆ์ด์–ด 4-7์„ GPU1์— ๋ฐฐ์น˜ํ–ˆ์Šต๋‹ˆ๋‹ค. ์ด์ œ ๋ฐ์ดํ„ฐ๊ฐ€ ๋ ˆ์ด์–ด 0์—์„œ 1๋กœ, 1์—์„œ 2๋กœ, 2์—์„œ 3์œผ๋กœ ์ด๋™ํ•˜๋Š” ๋™์•ˆ์—๋Š” ์ผ๋ฐ˜์ ์ธ ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ๋ฐ์ดํ„ฐ๊ฐ€ ๋ ˆ์ด์–ด 3์—์„œ ๋ ˆ์ด์–ด 4๋กœ ์ „๋‹ฌ๋˜์–ด์•ผ ํ•  ๋•Œ๋Š” GPU0์—์„œ GPU1๋กœ ์ด๋™ํ•ด์•ผ ํ•˜๋ฏ€๋กœ ํ†ต์‹  ์˜ค๋ฒ„ํ—ค๋“œ๊ฐ€ ๋ฐœ์ƒํ•ฉ๋‹ˆ๋‹ค. ์ฐธ์—ฌํ•˜๋Š” GPU๊ฐ€ ๋™์ผํ•œ ์ปดํ“จํŒ… ๋…ธ๋“œ(์˜ˆ: ๋™์ผํ•œ ๋ฌผ๋ฆฌ์ ์ธ ๊ธฐ๊ณ„)์— ์žˆ๋Š” ๊ฒฝ์šฐ ์ด ๋ณต์‚ฌ๋Š” ๋งค์šฐ ๋น ๋ฆ…๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ GPU๊ฐ€ ์„œ๋กœ ๋‹ค๋ฅธ ์ปดํ“จํŒ… ๋…ธ๋“œ(์˜ˆ: ์—ฌ๋Ÿฌ ๊ธฐ๊ณ„)์— ์œ„์น˜ํ•œ ๊ฒฝ์šฐ ํ†ต์‹  ์˜ค๋ฒ„ํ—ค๋“œ๋Š” ์ƒ๋‹นํžˆ ํฌ๊ฒŒ ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฐ ๋‹ค์Œ ๋ ˆ์ด์–ด 4๋ถ€ํ„ฐ 5๋กœ, 6์œผ๋กœ, 7๋กœ ์ง„ํ–‰๋˜๋Š” ๊ฒƒ์€ ์ผ๋ฐ˜์ ์ธ ๋ชจ๋ธ๊ณผ ๋™์ผํ•˜๊ฒŒ ์ง„ํ–‰๋˜๊ณ , 7๋ฒˆ์งธ ๋ ˆ์ด์–ด๊ฐ€ ์™„๋ฃŒ๋˜๋ฉด ๋ฐ์ดํ„ฐ๋ฅผ ๋‹ค์‹œ ๋ ˆ์ด์–ด 0์œผ๋กœ ๋ณด๋‚ด๊ฑฐ๋‚˜ ๋˜๋Š” ๋ ˆ์ด๋ธ”์„ ๋งˆ์ง€๋ง‰ ๋ ˆ์ด์–ด๋กœ ๋ณด๋‚ด์•ผ ํ•  ํ•„์š”๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. 
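์œ„์—์„œ ์„ค๋ช…ํ•œ ํ•ต์‹ฌ, ์ฆ‰ ์žฅ์น˜ ๊ฐ„ ๋ณต์‚ฌ๊ฐ€ ๋ถ„ํ•  ๊ฒฝ๊ณ„(์—ฌ๊ธฐ์„œ๋Š” ๋ ˆ์ด์–ด 3 → 4)์—์„œ๋งŒ ๋ฐœ์ƒํ•œ๋‹ค๋Š” ์ ์„ ์ˆœ์ˆ˜ ํŒŒ์ด์ฌ์œผ๋กœ ๋‹จ์ˆœํ™”ํ•ด ๋ณธ ์Šค์ผ€์น˜์ž…๋‹ˆ๋‹ค. ์‹ค์ œ GPU ์—ฐ์‚ฐ ๋Œ€์‹  ์žฅ์น˜๋ฅผ ๋ฌธ์ž์—ด ํƒœ๊ทธ๋กœ ํ‘œํ˜„ํ•œ ๊ฐ€์ƒ์˜ ์˜ˆ์‹œ์ผ ๋ฟ, ์‹ค์ œ ๊ตฌํ˜„์ด ์•„๋‹™๋‹ˆ๋‹ค:

```python
# 8๊ฐœ ๋ ˆ์ด์–ด๋ฅผ ๋‘ ์žฅ์น˜์— ์ˆ˜์ง ๋ถ„ํ• : ๋ ˆ์ด์–ด 0-3 → gpu0, ๋ ˆ์ด์–ด 4-7 → gpu1 (๊ฐ€์ •)
layer_device = ["gpu0" if i < 4 else "gpu1" for i in range(8)]

transfers = 0                  # ์žฅ์น˜ ๊ฐ„ ๋ณต์‚ฌ ํšŸ์ˆ˜
current = layer_device[0]
for device in layer_device:
    if device != current:      # ๋ถ„ํ•  ๊ฒฝ๊ณ„์—์„œ๋งŒ ํ†ต์‹  ์˜ค๋ฒ„ํ—ค๋“œ๊ฐ€ ๋ฐœ์ƒ
        transfers += 1
        current = device

print(transfers)  # ๋ ˆ์ด์–ด 3 → 4 ๊ฒฝ๊ณ„์—์„œ ๋‹จ ํ•œ ๋ฒˆ: 1
```

๋ ˆ์ด์–ด ์ˆ˜๊ฐ€ ์•„๋ฌด๋ฆฌ ๋งŽ์•„๋„ ์ˆ˜์ง ๋ถ„ํ• ์—์„œ๋Š” ๋ถ„ํ•  ๊ฒฝ๊ณ„์˜ ์ˆ˜๋งŒํผ๋งŒ ํ†ต์‹ ์ด ์ผ์–ด๋‚œ๋‹ค๋Š” ์ ์ด ์ด ๋ฐฉ์‹์˜ ํŠน์ง•์ž…๋‹ˆ๋‹ค.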
์ด์ œ ์†์‹ค์„ ๊ณ„์‚ฐํ•˜๊ณ  ์˜ตํ‹ฐ๋งˆ์ด์ €๊ฐ€ ์ž‘๋™ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฌธ์ œ์ : - ์ด ๋ฐฉ์‹์„ "naive" MP๋ผ๊ณ  ๋ถ€๋ฅด๋Š” ์ด์œ ๋Š” ์ฃผ์–ด์ง„ ์ƒํ™ฉ์— ํ•˜๋‚˜์˜ GPU๋ฅผ ์ œ์™ธํ•œ ๋ชจ๋“  GPU๊ฐ€ ์œ ํœด ์ƒํƒœ๋ผ๋Š” ์ ์ž…๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ 4๊ฐœ์˜ GPU๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ ๋‹จ์ผ GPU์˜ ๋ฉ”๋ชจ๋ฆฌ ์–‘์„ 4๋ฐฐ๋กœ ๋Š˜๋ฆฌ๊ณ  ๋‚˜๋จธ์ง€ ํ•˜๋“œ์›จ์–ด๋Š” ๋ฌด์‹œํ•˜๋Š” ๊ฒƒ๊ณผ ๊ฑฐ์˜ ๋™์ผํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ ์žฅ์น˜ ๊ฐ„ ๋ฐ์ดํ„ฐ ๋ณต์‚ฌ์˜ ์˜ค๋ฒ„ํ—ค๋“œ๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ 4๊ฐœ์˜ 6GB ์นด๋“œ๋Š” naive MP๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ 1๊ฐœ์˜ 24GB ์นด๋“œ์™€ ๋™์ผํ•œ ํฌ๊ธฐ๋ฅผ ์ˆ˜์šฉํ•  ์ˆ˜ ์žˆ์ง€๋งŒ, ํ›„์ž๋Š” ๋ฐ์ดํ„ฐ ๋ณต์‚ฌ์˜ ์˜ค๋ฒ„ํ—ค๋“œ๊ฐ€ ์—†์œผ๋ฏ€๋กœ ํ›ˆ๋ จ์„ ๋” ๋นจ๋ฆฌ ์™„๋ฃŒํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์˜ˆ๋ฅผ ๋“ค์–ด 40GB ์นด๋“œ๊ฐ€ ์žˆ๊ณ  45GB ๋ชจ๋ธ์„ ๋งž์ถ”์–ด์•ผ ํ•  ๊ฒฝ์šฐ 4๊ฐœ์˜ 40GB ์นด๋“œ๋กœ ๋งž์ถœ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค (ํ•˜์ง€๋งŒ ๊ทธ๋ž˜๋””์–ธํŠธ์™€ ์˜ตํ‹ฐ๋งˆ์ด์ € ์ƒํƒœ ๋•Œ๋ฌธ์— ๊ฐ€๊นŒ์Šค๋กœ ๊ฐ€๋Šฅํ•ฉ๋‹ˆ๋‹ค). - ๊ณต์œ  ์ž„๋ฒ ๋”ฉ์€ GPU ๊ฐ„์— ๋ณต์‚ฌํ•ด์•ผ ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ํŒŒ์ดํ”„๋ผ์ธ ๋ณ‘๋ ฌํ™” (PP)์€ ๊ฑฐ์˜ naive MP์™€ ๋™์ผํ•˜์ง€๋งŒ GPU ์œ ํœด ์ƒํƒœ ๋ฌธ์ œ๋ฅผ ํ•ด๊ฒฐํ•˜๊ธฐ ์œ„ํ•ด ๋“ค์–ด์˜ค๋Š” ๋ฐฐ์น˜๋ฅผ ๋งˆ์ดํฌ๋กœ ๋ฐฐ์น˜๋กœ ๋‚˜๋ˆ„๊ณ  ์ธ๊ณต์ ์œผ๋กœ ํŒŒ์ดํ”„๋ผ์ธ์„ ์ƒ์„ฑํ•˜์—ฌ ์„œ๋กœ ๋‹ค๋ฅธ GPU๊ฐ€ ๋™์‹œ์— ๊ณ„์‚ฐ์— ์ฐธ์—ฌํ•  ์ˆ˜ ์žˆ๊ฒŒ ํ•ฉ๋‹ˆ๋‹ค. [GPipe ๋…ผ๋ฌธ](https://ai.googleblog.com/2019/03/introducing-gpipe-open-source-library.html)์—์„œ ๊ฐ€์ ธ์˜จ ๊ทธ๋ฆผ์€ ์ƒ๋‹จ์— naive MP๋ฅผ, ํ•˜๋‹จ์—๋Š” PP๋ฅผ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค: ![mp-pp](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/parallelism-gpipe-bubble.png) ํ•˜๋‹จ ๋‹ค์ด์–ด๊ทธ๋žจ์—์„œ PP๊ฐ€ ์œ ํœด ์˜์—ญ์ด ์ ์€ ๊ฒƒ์„ ์‰ฝ๊ฒŒ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์œ ํœด ๋ถ€๋ถ„์„ "bubble"์ด๋ผ๊ณ  ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์ด์–ด๊ทธ๋žจ์˜ ์–‘์ชฝ ๋ถ€๋ถ„์€ ์ฐธ์—ฌํ•˜๋Š” GPU๊ฐ€ 4๊ฐœ์ธ ๋ณ‘๋ ฌ์„ฑ์„ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค. ์ฆ‰, 4๊ฐœ์˜ GPU๊ฐ€ ํŒŒ์ดํ”„๋ผ์ธ์— ์ฐธ์—ฌํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ 4๊ฐœ์˜ ํŒŒ์ดํ”„ ๋‹จ๊ณ„ F0, F1, F2 ๋ฐ F3์˜ ์ˆœ๋ฐฉํ–ฅ ๊ฒฝ๋กœ์™€ B3, B2, B1 ๋ฐ B0์˜ ์—ญ๋ฐฉํ–ฅ ๊ฒฝ๋กœ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. 
PP๋Š” ์กฐ์ •ํ•ด์•ผ ํ•  ์ƒˆ๋กœ์šด ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ์ธ `chunks`๋ฅผ ๋„์ž…ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ๋™์ผํ•œ ํŒŒ์ดํ”„ ๋‹จ๊ณ„๋ฅผ ํ†ตํ•ด ์ผ๋ จ์˜ ๋ฐ์ดํ„ฐ๋ฅผ ๋ฌถ์–ด์„œ ๋ณด๋‚ด๋Š” ๋ฐฉ์‹์„ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ์•„๋ž˜ ๋‹ค์ด์–ด๊ทธ๋žจ์—์„œ `chunks=4`๋ฅผ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. GPU0์€ 0, 1, 2 ๋ฐ 3 (F0,0, F0,1, F0,2, F0,3) ๋ฌถ์Œ์—์„œ ๋™์ผํ•œ ์ˆœ๋ฐฉํ–ฅ ๊ฒฝ๋กœ๋ฅผ ์ˆ˜ํ–‰ํ•˜๊ณ , ๋‹ค๋ฅธ GPU๊ฐ€ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•˜๊ธฐ ์‹œ์ž‘ํ•˜๊ณ  ์™„๋ฃŒ๊ฐ€ ์‹œ์ž‘๋  ๋•Œ๋งŒ GPU0์ด ๋ฌถ์Œ์˜ ์—ญ์ˆœ์œผ๋กœ 3, 2, 1 ๋ฐ 0 (B0,3, B0,2, B0,1, B0,0) ๊ฒฝ๋กœ๋ฅผ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค. ๊ฐœ๋…์ ์œผ๋กœ ์ด๋Š” ๊ทธ๋ž˜๋””์–ธํŠธ ๋ˆ„์  ๋‹จ๊ณ„ (GAS)์™€ ๋™์ผํ•œ ๊ฐœ๋…์ž…๋‹ˆ๋‹ค. ํŒŒ์ดํ† ์น˜์—์„œ๋Š” `chunks`๋ฅผ ์‚ฌ์šฉํ•˜๊ณ  DeepSpeed์—์„œ๋Š” ๋™์ผํ•œ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ GAS๋กœ ์ฐธ์กฐํ•ฉ๋‹ˆ๋‹ค. ๋ฌถ์Œ์œผ๋กœ ์ธํ•ด PP๋Š” ๋งˆ์ดํฌ๋กœ ๋ฐฐ์น˜ (MBS)์˜ ๊ฐœ๋…์„ ๋„์ž…ํ•ฉ๋‹ˆ๋‹ค. DP๋Š” ์ „์—ญ ๋ฐ์ดํ„ฐ ๋ฐฐ์น˜ ํฌ๊ธฐ๋ฅผ ๋ฏธ๋‹ˆ ๋ฐฐ์น˜๋กœ ๋‚˜๋ˆ•๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ DP ์ฐจ์ˆ˜๊ฐ€ 4์ด๊ณ  ์ „์—ญ ๋ฐฐ์น˜ ํฌ๊ธฐ๊ฐ€ 1024์ด๋ฉด 256์”ฉ 4๊ฐœ์˜ ๋ฏธ๋‹ˆ ๋ฐฐ์น˜๋กœ ๋ถ„ํ• ๋ฉ๋‹ˆ๋‹ค (1024/4). ๊ทธ๋ฆฌ๊ณ  `chunks` (๋˜๋Š” GAS)์˜ ์ˆ˜๊ฐ€ 32์ด๋ฉด ๋งˆ์ดํฌ๋กœ ๋ฐฐ์น˜ ํฌ๊ธฐ๋Š” 8์ด ๋ฉ๋‹ˆ๋‹ค (256/32). ๊ฐ ํŒŒ์ดํ”„๋ผ์ธ ๋‹จ๊ณ„๋Š” ํ•œ ๋ฒˆ์— ํ•˜๋‚˜์˜ ๋งˆ์ดํฌ๋กœ ๋ฐฐ์น˜์™€ ํ•จ๊ป˜ ์ž‘๋™ํ•ฉ๋‹ˆ๋‹ค. DP + PP ์„ค์ •์˜ ์ „์—ญ ๋ฐฐ์น˜ ํฌ๊ธฐ๋ฅผ ๊ณ„์‚ฐํ•˜๋ ค๋ฉด `mbs*chunks*dp_degree` (`8*32*4=1024`)๋ฅผ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์ด์–ด๊ทธ๋žจ์œผ๋กœ ๋Œ์•„๊ฐ€ ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. `chunks=1`๋กœ ์„ค์ •ํ•˜๋ฉด ๋งค์šฐ ๋น„ํšจ์œจ์ ์ธ naive MP๊ฐ€ ์ƒ์„ฑ๋˜๋ฉฐ, ๋งค์šฐ ํฐ `chunks` ๊ฐ’์œผ๋กœ ์„ค์ •ํ•˜๋ฉด ์•„์ฃผ ์ž‘์€ ๋งˆ์ดํฌ๋กœ ๋ฐฐ์น˜ ํฌ๊ธฐ๊ฐ€ ์ƒ์„ฑ๋˜์–ด ํšจ์œจ์ ์ด์ง€ ์•Š์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ๊ฐ€์žฅ ํšจ์œจ์ ์ธ GPU ํ™œ์šฉ์„ ์œ„ํ•ด ์–ด๋–ค ๊ฐ’์ด ๊ฐ€์žฅ ์ ์ ˆํ•œ์ง€ ์‹คํ—˜์„ ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 
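๋ณธ๋ฌธ์˜ ์ „์—ญ ๋ฐฐ์น˜ ํฌ๊ธฐ ๊ณ„์‚ฐ(`mbs*chunks*dp_degree`)์„ ๊ทธ๋Œ€๋กœ ์˜ฎ๊ธด ๊ฐ„๋‹จํ•œ ํŒŒ์ด์ฌ ์Šค์ผ€์น˜์ž…๋‹ˆ๋‹ค. ๋ณธ๋ฌธ์˜ ์˜ˆ์‹œ ๊ฐ’(dp_degree=4, chunks=32)์„ ๊ทธ๋Œ€๋กœ ๊ฐ€์ •ํ•œ ๊ฐœ๋… ํ™•์ธ์šฉ ์ฝ”๋“œ์ด๋ฉฐ, ์‹ค์ œ ํ”„๋ ˆ์ž„์›Œํฌ API๋Š” ์•„๋‹™๋‹ˆ๋‹ค:

```python
# DP+PP ์„ค์ •์—์„œ์˜ ๋ฐฐ์น˜ ํฌ๊ธฐ ๊ณ„์‚ฐ (๋ณธ๋ฌธ ์˜ˆ์‹œ ๊ฐ’ ๊ฐ€์ •)
global_batch_size = 1024
dp_degree = 4
chunks = 32  # DeepSpeed์—์„œ๋Š” GAS

mini_batch_size = global_batch_size // dp_degree   # DP๊ฐ€ ๋‚˜๋ˆˆ ๋ฏธ๋‹ˆ ๋ฐฐ์น˜: 256
micro_batch_size = mini_batch_size // chunks       # PP๊ฐ€ ๋‚˜๋ˆˆ ๋งˆ์ดํฌ๋กœ ๋ฐฐ์น˜: 8

# ์—ญ์œผ๋กœ ๊ณฑํ•˜๋ฉด ์ „์—ญ ๋ฐฐ์น˜ ํฌ๊ธฐ๊ฐ€ ๋ณต์›๋ฉ๋‹ˆ๋‹ค: 8 * 32 * 4 = 1024
assert micro_batch_size * chunks * dp_degree == global_batch_size
print(mini_batch_size, micro_batch_size)  # 256 8
```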
๋‹ค์ด์–ด๊ทธ๋žจ์—์„œ ๋ณด์ด๋Š” ๊ฒƒ์ฒ˜๋Ÿผ "dead" ์‹œ๊ฐ„์˜ ๋ฒ„๋ธ”์ด ์กด์žฌํ•˜์—ฌ ๋งˆ์ง€๋ง‰ `forward` ๋‹จ๊ณ„๊ฐ€ `backward` ๋‹จ๊ณ„๊ฐ€ ํŒŒ์ดํ”„๋ผ์ธ์„ ์™„๋ฃŒํ•˜๊ธฐ๋ฅผ ๊ธฐ๋‹ค๋ ค์•ผ ํ•˜๋Š” ์ƒํ™ฉ์ด ๋ฐœ์ƒํ•˜์ง€๋งŒ, `chunks`์˜ ๊ฐ€์žฅ ์ ์ ˆํ•œ ๊ฐ’์„ ์ฐพ๋Š” ๊ฒƒ์˜ ๋ชฉ์ ์€ ๋ชจ๋“  ์ฐธ์—ฌํ•˜๋Š” GPU์—์„œ ๋™์‹œ์— ๊ณ ๋„๋กœ ํ™œ์šฉ๋˜๋Š” GPU ํ™œ์šฉ์„ ๊ฐ€๋Šฅํ•˜๊ฒŒ ํ•˜์—ฌ ๋ฒ„๋ธ”์˜ ํฌ๊ธฐ๋ฅผ ์ตœ์†Œํ™”ํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ํ•ด๊ฒฐ์ฑ…์€ ์ „ํ†ต์ ์ธ ํŒŒ์ดํ”„๋ผ์ธ API์™€ ๋” ํ˜„๋Œ€์ ์ธ ์†”๋ฃจ์…˜์œผ๋กœ ๋‚˜๋‰ฉ๋‹ˆ๋‹ค. ์ „ํ†ต์ ์ธ ํŒŒ์ดํ”„๋ผ์ธ API ์†”๋ฃจ์…˜๊ณผ ํ˜„๋Œ€์ ์ธ ์†”๋ฃจ์…˜์— ๋Œ€ํ•ด ์•Œ์•„๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ์ „ํ†ต์ ์ธ ํŒŒ์ดํ”„๋ผ์ธ API ์†”๋ฃจ์…˜: - ํŒŒ์ดํ† ์น˜ - FairScale - DeepSpeed - Megatron-LM ํ˜„๋Œ€์ ์ธ ์†”๋ฃจ์…˜: - Varuna - Sagemaker ์ „ํ†ต์ ์ธ ํŒŒ์ดํ”„๋ผ์ธ API ์†”๋ฃจ์…˜์˜ ๋ฌธ์ œ์ : - ๋ชจ๋ธ์„ ์ƒ๋‹นํžˆ ์ˆ˜์ •ํ•ด์•ผ ํ•œ๋‹ค๋Š” ์ ์ด ๋ฌธ์ œ์ž…๋‹ˆ๋‹ค. ํŒŒ์ดํ”„๋ผ์ธ์€ ๋ชจ๋“ˆ์˜ ์ •์ƒ์ ์ธ ํ๋ฆ„์„ `nn.Sequential` ์‹œํ€€์Šค๋กœ ๋‹ค์‹œ ์ž‘์„ฑํ•ด์•ผ ํ•˜๋ฏ€๋กœ ๋ชจ๋ธ์˜ ์„ค๊ณ„๋ฅผ ๋ณ€๊ฒฝํ•ด์•ผ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. - ํ˜„์žฌ ํŒŒ์ดํ”„๋ผ์ธ API๋Š” ๋งค์šฐ ์ œํ•œ์ ์ž…๋‹ˆ๋‹ค. ํŒŒ์ดํ”„๋ผ์ธ์˜ ๋งค์šฐ ์ฒซ ๋ฒˆ์งธ ๋‹จ๊ณ„์—์„œ ์ „๋‹ฌ๋˜๋Š” ๋งŽ์€ ํŒŒ์ด์ฌ ๋ณ€์ˆ˜๊ฐ€ ์žˆ๋Š” ๊ฒฝ์šฐ ์ด๋ฅผ ํ•ด๊ฒฐํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ํ˜„์žฌ ํŒŒ์ดํ”„๋ผ์ธ ์ธํ„ฐํŽ˜์ด์Šค๋Š” ํ•˜๋‚˜์˜ ํ…์„œ ๋˜๋Š” ํ…์„œ์˜ ํŠœํ”Œ์„ ์œ ์ผํ•œ ์ž…๋ ฅ ๋ฐ ์ถœ๋ ฅ์œผ๋กœ ์š”๊ตฌํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ํ…์„œ๋Š” ๋งˆ์ดํฌ๋กœ ๋ฐฐ์น˜๋กœ ๋ฏธ๋‹ˆ ๋ฐฐ์น˜๋กœ ๋ฌถ์„ ๊ฒƒ์ด๋ฏ€๋กœ ์ฒซ ๋ฒˆ์งธ ์ฐจ์›์œผ๋กœ ๋ฐฐ์น˜ ํฌ๊ธฐ๊ฐ€ ์žˆ์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๊ฐ€๋Šฅํ•œ ๊ฐœ์„  ์‚ฌํ•ญ์€ ์—ฌ๊ธฐ์—์„œ ๋…ผ์˜๋˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. https://github.com/pytorch/pytorch/pull/50693 - ํŒŒ์ดํ”„ ๋‹จ๊ณ„ ์ˆ˜์ค€์—์„œ ์กฐ๊ฑด๋ถ€ ์ œ์–ด ํ๋ฆ„์€ ๋ถˆ๊ฐ€๋Šฅํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, T5์™€ ๊ฐ™์€ ์ธ์ฝ”๋”-๋””์ฝ”๋” ๋ชจ๋ธ์€ ์กฐ๊ฑด๋ถ€ ์ธ์ฝ”๋” ๋‹จ๊ณ„๋ฅผ ์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด ํŠน๋ณ„ํ•œ ํ•ด๊ฒฐ์ฑ…์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. - ๊ฐ ๋ ˆ์ด์–ด๋ฅผ ์ •๋ ฌํ•˜์—ฌ ํ•˜๋‚˜์˜ ๋ชจ๋ธ์˜ ์ถœ๋ ฅ์ด ๋‹ค๋ฅธ ๋ชจ๋ธ์˜ ์ž…๋ ฅ์ด ๋˜๋„๋กํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 
์šฐ๋ฆฌ๋Š” ์•„์ง Varuna์™€ SageMaker๋กœ ์‹คํ—˜ํ•˜์ง€ ์•Š์•˜์ง€๋งŒ, ํ•ด๋‹น ๋…ผ๋ฌธ๋“ค์€ ์œ„์—์„œ ์–ธ๊ธ‰ํ•œ ๋ฌธ์ œ๋“ค์˜ ๋ชฉ๋ก์„ ๊ทน๋ณตํ–ˆ๊ณ  ์‚ฌ์šฉ์ž์˜ ๋ชจ๋ธ์— ๋Œ€ํ•œ ๋ณ€๊ฒฝ ์‚ฌํ•ญ์ด ํ›จ์”ฌ ์ ๊ฒŒ ํ•„์š”ํ•˜๋‹ค๊ณ  ๋ณด๊ณ ํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๊ตฌํ˜„: - [ํŒŒ์ดํ† ์น˜](https://pytorch.org/docs/stable/pipeline.html) (ํŒŒ์ดํ† ์น˜-1.8์—์„œ ์ดˆ๊ธฐ ์ง€์›, 1.9์—์„œ ์ ์ง„์ ์œผ๋กœ ๊ฐœ์„ ๋˜๊ณ  1.10์—์„œ ๋” ๊ฐœ์„ ๋จ). [์˜ˆ์ œ](https://github.com/pytorch/pytorch/blob/master/benchmarks/distributed/pipeline/pipe.py)๋„ ์ฐธ๊ณ ํ•˜์„ธ์š”. - [FairScale](https://fairscale.readthedocs.io/en/latest/tutorials/pipe.html) - [DeepSpeed](https://www.deepspeed.ai/tutorials/pipeline/) - [Megatron-LM](https://github.com/NVIDIA/Megatron-LM)์€ ๋‚ด๋ถ€ ๊ตฌํ˜„์„ ๊ฐ€์ง€๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค - API ์—†์Œ. - [Varuna](https://github.com/microsoft/varuna) - [SageMaker](https://arxiv.org/abs/2111.05972) - ์ด๋Š” AWS์—์„œ๋งŒ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ์†Œ์œ  ์†”๋ฃจ์…˜์ž…๋‹ˆ๋‹ค. - [OSLO](https://github.com/tunib-ai/oslo) - ์ด๋Š” Hugging Face Transformers๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ๊ตฌํ˜„๋œ ํŒŒ์ดํ”„๋ผ์ธ ๋ณ‘๋ ฌํ™”์ž…๋‹ˆ๋‹ค. ๐Ÿค— Transformers ์ƒํƒœ: ์ด ์ž‘์„ฑ ์‹œ์ ์—์„œ ๋ชจ๋ธ ์ค‘ ์–ด๋А ๊ฒƒ๋„ ์™„์ „ํ•œ PP๋ฅผ ์ง€์›ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. GPT2์™€ T5 ๋ชจ๋ธ์€ naive MP๋ฅผ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. ์ฃผ์š” ์žฅ์• ๋ฌผ์€ ๋ชจ๋ธ์„ `nn.Sequential`๋กœ ๋ณ€ํ™˜ํ•˜๊ณ  ๋ชจ๋“  ์ž…๋ ฅ์„ ํ…์„œ๋กœ ๊ฐ€์ ธ์™€์•ผ ํ•˜๋Š” ๊ฒƒ์„ ์ฒ˜๋ฆฌํ•  ์ˆ˜ ์—†๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ํ˜„์žฌ ๋ชจ๋ธ์—๋Š” ์ด๋Ÿฌํ•œ ๋ณ€ํ™˜์„ ๋งค์šฐ ๋ณต์žกํ•˜๊ฒŒ ๋งŒ๋“œ๋Š” ๋งŽ์€ ๊ธฐ๋Šฅ์ด ํฌํ•จ๋˜์–ด ์žˆ์–ด ์ œ๊ฑฐํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๊ธฐํƒ€ ์ ‘๊ทผ ๋ฐฉ๋ฒ•: DeepSpeed, Varuna ๋ฐ SageMaker๋Š” [๊ต์ฐจ ํŒŒ์ดํ”„๋ผ์ธ(Interleaved Pipeline)](https://docs.aws.amazon.com/sagemaker/latest/dg/model-parallel-core-features.html) ๊ฐœ๋…์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. 
![interleaved-pipeline-execution](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/parallelism-sagemaker-interleaved-pipeline.png) ์—ฌ๊ธฐ์„œ๋Š” ๋ฒ„๋ธ”(์œ ํœด ์‹œ๊ฐ„)์„ ์—ญ๋ฐฉํ–ฅ ํŒจ์Šค์— ์šฐ์„ ์ˆœ์œ„๋ฅผ ๋ถ€์—ฌํ•˜์—ฌ ์ตœ์†Œํ™”ํ•ฉ๋‹ˆ๋‹ค. Varuna๋Š” ๊ฐ€์žฅ ํšจ์œจ์ ์ธ ์Šค์ผ€์ค„๋ง์„ ์ฐพ๊ธฐ ์œ„ํ•ด ์‹œ๋ฎฌ๋ ˆ์ด์…˜์„ ์‚ฌ์šฉํ•˜์—ฌ ์Šค์ผ€์ค„์„ ๊ฐœ์„ ํ•˜๋ ค๊ณ  ํ•ฉ๋‹ˆ๋‹ค. OSLO๋Š” `nn.Sequential`๋กœ ๋ณ€ํ™˜ํ•˜์ง€ ์•Š๊ณ  Transformers๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•œ ํŒŒ์ดํ”„๋ผ์ธ ๋ณ‘๋ ฌํ™”๋ฅผ ๊ตฌํ˜„ํ–ˆ์Šต๋‹ˆ๋‹ค. ## ํ…์„œ ๋ณ‘๋ ฌ ์ฒ˜๋ฆฌ [[tensor-parallelism]] ํ…์„œ ๋ณ‘๋ ฌ ์ฒ˜๋ฆฌ์—์„œ๋Š” ๊ฐ GPU๊ฐ€ ํ…์„œ์˜ ์ผ๋ถ€๋ถ„๋งŒ ์ฒ˜๋ฆฌํ•˜๊ณ  ์ „์ฒด ํ…์„œ๊ฐ€ ํ•„์š”ํ•œ ์—ฐ์‚ฐ์— ๋Œ€ํ•ด์„œ๋งŒ ์ „์ฒด ํ…์„œ๋ฅผ ์ง‘๊ณ„ํ•ฉ๋‹ˆ๋‹ค. ์ด ์„น์…˜์—์„œ๋Š” [Megatron-LM](https://github.com/NVIDIA/Megatron-LM) ๋…ผ๋ฌธ์ธ [Efficient Large-Scale Language Model Training on GPU Clusters](https://arxiv.org/abs/2104.04473)์—์„œ์˜ ๊ฐœ๋…๊ณผ ๋‹ค์ด์–ด๊ทธ๋žจ์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. Transformer์˜ ์ฃผ์š” ๊ตฌ์„ฑ ์š”์†Œ๋Š” fully connected `nn.Linear`์™€ ๋น„์„ ํ˜• ํ™œ์„ฑํ™” ํ•จ์ˆ˜์ธ `GeLU`์ž…๋‹ˆ๋‹ค. Megatron ๋…ผ๋ฌธ์˜ ํ‘œ๊ธฐ๋ฒ•์„ ๋”ฐ๋ผ ํ–‰๋ ฌ์˜ ์ ๊ณฑ ๋ถ€๋ถ„์„ `Y = GeLU(XA)`๋กœ ํ‘œํ˜„ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ `X`์™€ `Y`๋Š” ์ž…๋ ฅ ๋ฐ ์ถœ๋ ฅ ๋ฒกํ„ฐ์ด๊ณ  `A`๋Š” ๊ฐ€์ค‘์น˜ ํ–‰๋ ฌ์ž…๋‹ˆ๋‹ค. 
ํ–‰๋ ฌ ํ˜•ํƒœ๋กœ ๊ณ„์‚ฐ์„ ์‚ดํŽด๋ณด๋ฉด, ํ–‰๋ ฌ ๊ณฑ์…ˆ์„ ๋‹ค์ค‘ GPU๋กœ ๋ถ„ํ• ํ•  ์ˆ˜ ์žˆ๋Š” ๋ฐฉ๋ฒ•์„ ์‰ฝ๊ฒŒ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ![Parallel GEMM](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/parallelism-tp-parallel_gemm.png) ๊ฐ€์ค‘์น˜ ํ–‰๋ ฌ `A`๋ฅผ `N`๊ฐœ์˜ GPU์— ๋Œ€ํ•ด ์—ด๋ณ„๋กœ ๋ถ„ํ• ํ•˜๊ณ  ๋ณ‘๋ ฌ๋กœ ํ–‰๋ ฌ ๊ณฑ์…ˆ `XA_1`์—์„œ `XA_n`๊นŒ์ง€ ์ˆ˜ํ–‰ํ•˜๋ฉด `N`๊ฐœ์˜ ์ถœ๋ ฅ ๋ฒกํ„ฐ `Y_1, Y_2, ..., Y_n`๊ฐ€ ์ƒ์„ฑ๋˜๋ฉฐ ๋…๋ฆฝ์ ์œผ๋กœ `GeLU`์— ์ „๋‹ฌ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ![independent GeLU](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/parallelism-tp-independent-gelu.png) ์ด ์›๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋™๊ธฐํ™”๊ฐ€ ํ•„์š”ํ•˜์ง€ ์•Š์€ GPU ๊ฐ„์˜ ์ž„์˜ ๊นŠ์ด์˜ MLP๋ฅผ ์—…๋ฐ์ดํŠธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ๊ฒฐ๊ณผ ๋ฒกํ„ฐ๋ฅผ ์ƒค๋“œ๋กœ๋ถ€ํ„ฐ ์žฌ๊ตฌ์„ฑํ•ด์•ผ ํ•˜๋Š” ๋งˆ์ง€๋ง‰ ๋‹จ๊ณ„๊นŒ์ง€๋Š” GPU ๊ฐ„์˜ ๋™๊ธฐํ™”๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. Megatron-LM ๋…ผ๋ฌธ์˜ ์ €์ž๋“ค์€ ์ด์— ๋Œ€ํ•œ ์œ ์šฉํ•œ ๊ทธ๋ฆผ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค: ![parallel shard processing](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/parallelism-tp-parallel_shard_processing.png) ๋‹ค์ค‘ ํ—ค๋“œ ์–ดํ…์…˜ ๋ ˆ์ด์–ด์˜ ๋ณ‘๋ ฌํ™”๋Š” ๋”์šฑ ๊ฐ„๋‹จํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฏธ ๋…๋ฆฝ์ ์ธ ๋‹ค์ค‘ ํ—ค๋“œ๋ฅผ ๊ฐ€์ง€๊ณ  ์žˆ๊ธฐ ๋•Œ๋ฌธ์— ์ด๋ฏธ ๋ณ‘๋ ฌํ™”๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค! ![parallel self-attention](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/parallelism-tp-parallel_self_attention.png) ํŠน๋ณ„ ๊ณ ๋ ค์‚ฌํ•ญ: TP๋Š” ๋งค์šฐ ๋น ๋ฅธ ๋„คํŠธ์›Œํฌ๊ฐ€ ํ•„์š”ํ•˜๋ฏ€๋กœ ํ•œ ๊ฐœ ์ด์ƒ์˜ ๋…ธ๋“œ์—์„œ TP๋ฅผ ์ˆ˜ํ–‰ํ•˜๋Š” ๊ฒƒ์€ ๊ถŒ์žฅ๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ์‹ค์ œ๋กœ ๋…ธ๋“œ์— 4๊ฐœ์˜ GPU๊ฐ€ ์žˆ๋Š” ๊ฒฝ์šฐ TP์˜ ์ตœ๋Œ€ ์ฐจ์ˆ˜๋Š” 4์ž…๋‹ˆ๋‹ค. TP ์ฐจ์ˆ˜๊ฐ€ 8์ธ ๊ฒฝ์šฐ ์ตœ์†Œํ•œ 8๊ฐœ์˜ GPU๊ฐ€ ์žˆ๋Š” ๋…ธ๋“œ๋ฅผ ์‚ฌ์šฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด ์„น์…˜์€ ์›๋ž˜์˜ [๋” ์ž์„ธํ•œ TP ๊ฐœ์š”](https://github.com/huggingface/transformers/issues/10321#issuecomment-783543530)๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•ฉ๋‹ˆ๋‹ค. 
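์œ„ ์›๋ฆฌ, ์ฆ‰ ๊ฐ€์ค‘์น˜๋ฅผ ์—ด ๋ฐฉํ–ฅ์œผ๋กœ ๋ถ„ํ• ํ•œ ๋’ค ์›์†Œ๋ณ„ `GeLU`๋ฅผ ๊ฐ ์ƒค๋“œ์— ๋…๋ฆฝ์ ์œผ๋กœ ์ ์šฉํ•ด๋„ ๊ฒฐ๊ณผ๊ฐ€ ๋™์ผํ•˜๋‹ค๋Š” ์ ์„ ์ˆœ์ˆ˜ ํŒŒ์ด์ฌ์œผ๋กœ ํ™•์ธํ•˜๋Š” ์ตœ์†Œ ์Šค์ผ€์น˜์ž…๋‹ˆ๋‹ค. ์‹ค์ œ TP ๊ตฌํ˜„์ด ์•„๋‹ˆ๋ผ ์ˆ˜ํ•™์  ๋™๋“ฑ์„ฑ๋งŒ ๋ณด์—ฌ์ฃผ๋Š” ์˜ˆ์‹œ์ด๋ฉฐ, ํ–‰๋ ฌ ํฌ๊ธฐ์™€ ๊ฐ’์€ ์ž„์˜๋กœ ๊ฐ€์ •ํ•œ ๊ฒƒ์ž…๋‹ˆ๋‹ค:

```python
import math

def gelu(x):
    # GeLU(x) = 0.5 * x * (1 + erf(x / sqrt(2))) — ์›์†Œ๋ณ„ ๋น„์„ ํ˜• ํ•จ์ˆ˜
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def matmul(X, A):
    # ์ค‘์ฒฉ ๋ฆฌ์ŠคํŠธ๋กœ ํ‘œํ˜„ํ•œ (m×k)·(k×n) ํ–‰๋ ฌ ๊ณฑ
    return [[sum(X[i][t] * A[t][j] for t in range(len(A)))
             for j in range(len(A[0]))] for i in range(len(X))]

X = [[1.0, 2.0], [3.0, -1.0]]          # ์ž„์˜๋กœ ๊ฐ€์ •ํ•œ ์ž…๋ ฅ (2×2)
A = [[0.5, -0.3, 0.1, 0.7],
     [0.2, 0.4, -0.6, 0.0]]            # ์ž„์˜๋กœ ๊ฐ€์ •ํ•œ ๊ฐ€์ค‘์น˜ (2×4)

# ๊ธฐ์ค€: ๋‹จ์ผ ์žฅ์น˜์—์„œ Y = GeLU(XA)
Y_full = [[gelu(v) for v in row] for row in matmul(X, A)]

# TP: A๋ฅผ ์—ด ๋ฐฉํ–ฅ์œผ๋กœ ๋‘ "GPU"์— ๋ถ„ํ•  → ๊ฐ์ž XA_i๋ฅผ ๊ณ„์‚ฐํ•œ ๋’ค GeLU๋ฅผ ๋…๋ฆฝ ์ ์šฉ
A1 = [row[:2] for row in A]            # GPU0์˜ ์ƒค๋“œ
A2 = [row[2:] for row in A]            # GPU1์˜ ์ƒค๋“œ
Y1 = [[gelu(v) for v in row] for row in matmul(X, A1)]
Y2 = [[gelu(v) for v in row] for row in matmul(X, A2)]
Y_tp = [r1 + r2 for r1, r2 in zip(Y1, Y2)]  # ์—ด ๋ฐฉํ–ฅ์œผ๋กœ ๋‹ค์‹œ ์—ฐ๊ฒฐ

# GeLU๊ฐ€ ์›์†Œ๋ณ„ ์—ฐ์‚ฐ์ด๋ฏ€๋กœ ๋‘ ๊ฒฐ๊ณผ๋Š” ์ผ์น˜ํ•ฉ๋‹ˆ๋‹ค
assert all(abs(a - b) < 1e-12
           for ra, rb in zip(Y_full, Y_tp) for a, b in zip(ra, rb))
```

์ด์ฒ˜๋Ÿผ ์—ด ๋ฐฉํ–ฅ ๋ถ„ํ• ์—์„œ๋Š” ์›์†Œ๋ณ„ ํ™œ์„ฑํ™” ํ•จ์ˆ˜๊ฐ€ ์ƒค๋“œ ๊ฐ„ ํ†ต์‹  ์—†์ด ๋…๋ฆฝ ์ ์šฉ๋  ์ˆ˜ ์žˆ๋‹ค๋Š” ์ ์ด MLP ๋ธ”๋ก TP์˜ ํ•ต์‹ฌ์ž…๋‹ˆ๋‹ค.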
์ž‘์„ฑ์ž๋Š” [@anton-l](https://github.com/anton-l)์ž…๋‹ˆ๋‹ค. SageMaker๋Š” ๋” ํšจ์œจ์ ์ธ ์ฒ˜๋ฆฌ๋ฅผ ์œ„ํ•ด TP์™€ DP๋ฅผ ๊ฒฐํ•ฉํ•ฉ๋‹ˆ๋‹ค. ๋Œ€์ฒด ์ด๋ฆ„: - DeepSpeed๋Š” ์ด๋ฅผ [ํ…์„œ ์Šฌ๋ผ์ด์‹ฑ](https://www.deepspeed.ai/training/#model-parallelism)์ด๋ผ๊ณ  ๋ถ€๋ฆ…๋‹ˆ๋‹ค. ๊ตฌํ˜„: - [Megatron-LM](https://github.com/NVIDIA/Megatron-LM)์€ ๋‚ด๋ถ€ ๊ตฌํ˜„์„ ๊ฐ€์ง€๊ณ  ์žˆ์œผ๋ฏ€๋กœ ๋ชจ๋ธ์— ๋งค์šฐ ํŠนํ™”๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. - [parallelformers](https://github.com/tunib-ai/parallelformers) (ํ˜„์žฌ๋Š” ์ถ”๋ก ์—๋งŒ ํ•ด๋‹น) - [SageMaker](https://arxiv.org/abs/2111.05972) - ์ด๋Š” AWS์—์„œ๋งŒ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ์†Œ์œ  ์†”๋ฃจ์…˜์ž…๋‹ˆ๋‹ค. - [OSLO](https://github.com/tunib-ai/oslo)์€ Transformers๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•œ ํ…์„œ ๋ณ‘๋ ฌ ์ฒ˜๋ฆฌ ๊ตฌํ˜„์„ ๊ฐ€์ง€๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๐Ÿค— Transformers ํ˜„ํ™ฉ: - core: ์•„์ง ํ•ต์‹ฌ ๋ถ€๋ถ„์— ๊ตฌํ˜„๋˜์ง€ ์•Š์Œ - ๊ทธ๋Ÿฌ๋‚˜ ์ถ”๋ก ์„ ํ•˜๋ ค๋ฉด [parallelformers](https://github.com/tunib-ai/parallelformers)๊ฐ€ ๋Œ€๋ถ€๋ถ„์˜ ๋ชจ๋ธ์„ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ํ•ต์‹ฌ ๋ถ€๋ถ„์— ๊ตฌํ˜„๋˜๊ธฐ ์ „๊นŒ์ง€ ๊ทธ๋“ค์˜ ๊ฒƒ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ํ›ˆ๋ จ ๋ชจ๋“œ๋„ ์ง€์›๋  ์˜ˆ์ •์ž…๋‹ˆ๋‹ค. - Deepspeed-Inference๋Š” CUDA ์ปค๋„์„ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•˜๋Š” ๋งค์šฐ ๋น ๋ฅธ ์ถ”๋ก  ๋ชจ๋“œ์—์„œ BERT, GPT-2 ๋ฐ GPT-Neo ๋ชจ๋ธ์„ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. ์ž์„ธํ•œ ๋‚ด์šฉ์€ [์—ฌ๊ธฐ](https://www.deepspeed.ai/tutorials/inference-tutorial/)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ## DP+PP [[dppp]] DeepSpeed [pipeline tutorial](https://www.deepspeed.ai/tutorials/pipeline/)์—์„œ ๋‹ค์Œ ๋‹ค์ด์–ด๊ทธ๋žจ์€ DP์™€ PP๋ฅผ ๊ฒฐํ•ฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค. ![dp-pp-2d](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/parallelism-zero-dp-pp.png) ์—ฌ๊ธฐ์„œ DP ๋žญํฌ 0์€ GPU2๋ฅผ ๋ณด์ง€ ๋ชปํ•˜๊ณ , DP ๋žญํฌ 1์€ GPU3์„ ๋ณด์ง€ ๋ชปํ•˜๋Š” ๊ฒƒ์ด ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. DP์—๊ฒŒ๋Š” ๋”ฑ 2๊ฐœ์˜ GPU์ธ ๊ฒƒ์ฒ˜๋Ÿผ ๋ฐ์ดํ„ฐ๋ฅผ ๊ณต๊ธ‰ํ•ฉ๋‹ˆ๋‹ค. GPU0์€ PP๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ GPU2์—๊ฒŒ ์ผ๋ถ€ ์ž‘์—…์„ "๋น„๋ฐ€๋ฆฌ์—" ํ• ๋‹นํ•ฉ๋‹ˆ๋‹ค. 
๊ทธ๋ฆฌ๊ณ  GPU1๋„ GPU3์„ ๋„์›€์œผ๋กœ ์‚ผ์•„ ๊ฐ™์€ ๋ฐฉ์‹์œผ๋กœ ์ž‘์—…ํ•ฉ๋‹ˆ๋‹ค. ๊ฐ ์ฐจ์›๋งˆ๋‹ค ์ ์–ด๋„ 2๊ฐœ์˜ GPU๊ฐ€ ํ•„์š”ํ•˜๋ฏ€๋กœ ์ตœ์†Œํ•œ 4๊ฐœ์˜ GPU๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ๊ตฌํ˜„: - [DeepSpeed](https://github.com/microsoft/DeepSpeed) - [Megatron-LM](https://github.com/NVIDIA/Megatron-LM) - [Varuna](https://github.com/microsoft/varuna) - [SageMaker](https://arxiv.org/abs/2111.05972) - [OSLO](https://github.com/tunib-ai/oslo) ๐Ÿค— Transformers ํ˜„ํ™ฉ: ์•„์ง ๊ตฌํ˜„๋˜์ง€ ์•Š์Œ ## DP+PP+TP [[dppptp]] ๋” ํšจ์œจ์ ์ธ ํ›ˆ๋ จ์„ ์œ„ํ•ด PP์™€ TP ๋ฐ DP๋ฅผ ๊ฒฐํ•ฉํ•˜์—ฌ 3D ๋ณ‘๋ ฌ ์ฒ˜๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ ๋‹ค์ด์–ด๊ทธ๋žจ์—์„œ ์ด๋ฅผ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ![dp-pp-tp-3d](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/parallelism-deepspeed-3d.png) ์ด ๋‹ค์ด์–ด๊ทธ๋žจ์€ [3D parallelism: Scaling to trillion-parameter models](https://www.microsoft.com/en-us/research/blog/deepspeed-extreme-scale-model-training-for-everyone/)์ด๋ผ๋Š” ๋ธ”๋กœ๊ทธ ๊ธ€์—์„œ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ฐ ์ฐจ์›๋งˆ๋‹ค ์ ์–ด๋„ 2๊ฐœ์˜ GPU๊ฐ€ ํ•„์š”ํ•˜๋ฏ€๋กœ ์ตœ์†Œํ•œ 8๊ฐœ์˜ GPU๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ๊ตฌํ˜„: - [DeepSpeed](https://github.com/microsoft/DeepSpeed) - DeepSpeed๋Š” ๋”์šฑ ํšจ์œจ์ ์ธ DP์ธ ZeRO-DP๋ผ๊ณ ๋„ ๋ถ€๋ฆ…๋‹ˆ๋‹ค. - [Megatron-LM](https://github.com/NVIDIA/Megatron-LM) - [Varuna](https://github.com/microsoft/varuna) - [SageMaker](https://arxiv.org/abs/2111.05972) - [OSLO](https://github.com/tunib-ai/oslo) ๐Ÿค— Transformers ํ˜„ํ™ฉ: ์•„์ง ๊ตฌํ˜„๋˜์ง€ ์•Š์Œ. PP์™€ TP๊ฐ€ ์—†๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ## ZeRO DP+PP+TP [[zero-dppptp]] DeepSpeed์˜ ์ฃผ์š” ๊ธฐ๋Šฅ ์ค‘ ํ•˜๋‚˜๋Š” DP์˜ ํ™•์žฅ์ธ ZeRO์ž…๋‹ˆ๋‹ค. ZeRO-DP์— ๋Œ€ํ•ด ์ด๋ฏธ [ZeRO Data Parallelism](#zero-data-parallelism)์—์„œ ๋…ผ์˜๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์œผ๋กœ ์ด๋Š” PP๋‚˜ TP๋ฅผ ํ•„์š”๋กœํ•˜์ง€ ์•Š๋Š” ๋…๋ฆฝ์ ์ธ ๊ธฐ๋Šฅ์ž…๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ PP์™€ TP์™€ ๊ฒฐํ•ฉํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. 
ZeRO-DP๊ฐ€ PP์™€ (์„ ํƒ์ ์œผ๋กœ TP์™€) ๊ฒฐํ•ฉ๋˜๋ฉด ์ผ๋ฐ˜์ ์œผ๋กœ ZeRO ๋‹จ๊ณ„ 1(์˜ตํ‹ฐ๋งˆ์ด์ € ๋ถ„ํ• )๋งŒ ํ™œ์„ฑํ™”๋ฉ๋‹ˆ๋‹ค. ์ด๋ก ์ ์œผ๋กœ๋Š” ZeRO ๋‹จ๊ณ„ 2(๊ทธ๋ผ๋””์–ธํŠธ ๋ถ„ํ• )๋ฅผ ํŒŒ์ดํ”„๋ผ์ธ ๋ณ‘๋ ฌ ์ฒ˜๋ฆฌ์™€ ํ•จ๊ป˜ ์‚ฌ์šฉํ•  ์ˆ˜๋„ ์žˆ์ง€๋งŒ, ์ด๋Š” ์„ฑ๋Šฅ์— ๋‚˜์œ ์˜ํ–ฅ์„ ๋ฏธ์น  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๊ฐ ๋งˆ์ดํฌ๋กœ ๋ฐฐ์น˜๋งˆ๋‹ค ๊ทธ๋ผ๋””์–ธํŠธ๋ฅผ ์ƒค๋”ฉํ•˜๊ธฐ ์ „์— ์ถ”๊ฐ€์ ์ธ ๋ฆฌ๋“€์Šค-์Šค์บํ„ฐ ์ปฌ๋ ‰ํ‹ฐ๋ธŒ๊ฐ€ ํ•„์š”ํ•˜๋ฉฐ, ์ด๋Š” ์ž ์žฌ์ ์œผ๋กœ ์ƒ๋‹นํ•œ ํ†ต์‹  ์˜ค๋ฒ„ํ—ค๋“œ๋ฅผ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ํŒŒ์ดํ”„๋ผ์ธ ๋ณ‘๋ ฌ ์ฒ˜๋ฆฌ์˜ ํŠน์„ฑ์ƒ ์ž‘์€ ๋งˆ์ดํฌ๋กœ ๋ฐฐ์น˜๊ฐ€ ์‚ฌ์šฉ๋˜๋ฉฐ, ์‚ฐ์ˆ  ์—ฐ์‚ฐ ๊ฐ•๋„(๋งˆ์ดํฌ๋กœ ๋ฐฐ์น˜ ํฌ๊ธฐ)๋ฅผ ๊ท ํ˜• ์žˆ๊ฒŒ ์œ ์ง€ํ•˜๋ฉด์„œ ํŒŒ์ดํ”„๋ผ์ธ ๋ฒ„๋ธ”(๋งˆ์ดํฌ๋กœ ๋ฐฐ์น˜ ์ˆ˜)์„ ์ตœ์†Œํ™”ํ•˜๋Š” ๊ฒƒ์— ์ค‘์ ์„ ๋‘ก๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ํ•ด๋‹น ํ†ต์‹  ๋น„์šฉ์€ ๋ฌธ์ œ๊ฐ€ ๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋˜ํ•œ, PP๋กœ ์ธํ•ด ์ •์ƒ๋ณด๋‹ค ์ ์€ ์ˆ˜์˜ ๋ ˆ์ด์–ด๊ฐ€ ์žˆ์œผ๋ฏ€๋กœ ๋ฉ”๋ชจ๋ฆฌ ์ ˆ์•ฝ์€ ํฌ์ง€ ์•Š์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค. PP๋Š” ์ด๋ฏธ ๊ทธ๋ž˜๋””์–ธํŠธ ํฌ๊ธฐ๋ฅผ ``1/PP``๋กœ ์ค„์ด๊ธฐ ๋•Œ๋ฌธ์— ๊ทธ๋ž˜๋””์–ธํŠธ ์ƒค๋”ฉ์˜ ์ ˆ์•ฝ ํšจ๊ณผ๋Š” ์ˆœ์ˆ˜ DP๋ณด๋‹ค๋Š” ๋ฏธ๋ฏธํ•ฉ๋‹ˆ๋‹ค. ZeRO ๋‹จ๊ณ„ 3๋„ ๊ฐ™์€ ์ด์œ ๋กœ ์ข‹์€ ์„ ํƒ์ด ์•„๋‹™๋‹ˆ๋‹ค - ๋” ๋งŽ์€ ๋…ธ๋“œ ๊ฐ„ ํ†ต์‹ ์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ZeRO๊ฐ€ ์žˆ๊ธฐ ๋•Œ๋ฌธ์— ๋‹ค๋ฅธ ์ด์ ์€ ZeRO-Offload์ž…๋‹ˆ๋‹ค. ์ด๋Š” ๋‹จ๊ณ„ 1์ด๋ฏ€๋กœ ์˜ตํ‹ฐ๋งˆ์ด์ € ์ƒํƒœ๋ฅผ CPU๋กœ ์˜คํ”„๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ตฌํ˜„: - [Megatron-DeepSpeed](https://github.com/microsoft/Megatron-DeepSpeed) ๋ฐ [BigScience์˜ Megatron-Deepspeed](https://github.com/bigscience-workshop/Megatron-DeepSpeed), ์ด์ „ ์ €์žฅ์†Œ์˜ ํฌํฌ์ž…๋‹ˆ๋‹ค. - [OSLO](https://github.com/tunib-ai/oslo) ์ค‘์š”ํ•œ ๋…ผ๋ฌธ: - [Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model]( https://arxiv.org/abs/2201.11990) ๐Ÿค— Transformers ํ˜„ํ™ฉ: ์•„์ง ๊ตฌํ˜„๋˜์ง€ ์•Š์Œ, PP์™€ TP๊ฐ€ ์—†๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. 
## FlexFlow [[flexflow]] [FlexFlow](https://github.com/flexflow/FlexFlow)๋Š” ์•ฝ๊ฐ„ ๋‹ค๋ฅธ ๋ฐฉ์‹์œผ๋กœ ๋ณ‘๋ ฌํ™” ๋ฌธ์ œ๋ฅผ ํ•ด๊ฒฐํ•ฉ๋‹ˆ๋‹ค. ๋…ผ๋ฌธ: ["Beyond Data and Model Parallelism for Deep Neural Networks" by Zhihao Jia, Matei Zaharia, Alex Aiken](https://arxiv.org/abs/1807.05358) ์ด๋Š” Sample-Operator-Attribute-Parameter๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•˜๋Š” ์ผ์ข…์˜ 4D ๋ณ‘๋ ฌํ™”๋ฅผ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค. 1. Sample = ๋ฐ์ดํ„ฐ ๋ณ‘๋ ฌํ™” (์ƒ˜ํ”Œ๋ณ„ ๋ณ‘๋ ฌ) 2. Operator = ๋‹จ์ผ ์—ฐ์‚ฐ์„ ์—ฌ๋Ÿฌ ํ•˜์œ„ ์—ฐ์‚ฐ์œผ๋กœ ๋ณ‘๋ ฌํ™” 3. Attribute = ๋ฐ์ดํ„ฐ ๋ณ‘๋ ฌํ™” (๊ธธ์ด๋ณ„ ๋ณ‘๋ ฌ) 4. Parameter = ๋ชจ๋ธ ๋ณ‘๋ ฌํ™” (์ˆ˜ํ‰ ๋˜๋Š” ์ˆ˜์ง๊ณผ ๊ด€๊ณ„์—†์ด) ์˜ˆ์‹œ: * Sample 512 ๊ธธ์ด์˜ 10๊ฐœ์˜ ๋ฐฐ์น˜๋ฅผ ๊ฐ€์ •ํ•ด ๋ด…์‹œ๋‹ค. ์ด๋ฅผ sample ์ฐจ์›์œผ๋กœ 2๊ฐœ์˜ ์žฅ์น˜์— ๋ณ‘๋ ฌํ™”ํ•˜๋ฉด, 10 x 512๋Š” 5 x 2 x 512๊ฐ€ ๋ฉ๋‹ˆ๋‹ค. * Operator ๋ ˆ์ด์–ด ์ •๊ทœํ™”๋ฅผ ์ˆ˜ํ–‰ํ•œ๋‹ค๋ฉด, ์šฐ์„  std๋ฅผ ๊ณ„์‚ฐํ•˜๊ณ  ๋‘ ๋ฒˆ์งธ๋กœ mean์„ ๊ณ„์‚ฐํ•œ ๋‹ค์Œ ๋ฐ์ดํ„ฐ๋ฅผ ์ •๊ทœํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. Operator ๋ณ‘๋ ฌํ™”๋Š” std์™€ mean์„ ๋ณ‘๋ ฌ๋กœ ๊ณ„์‚ฐํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ operator ์ฐจ์›์œผ๋กœ 2๊ฐœ์˜ ์žฅ์น˜ (cuda:0, cuda:1)์— ๋ณ‘๋ ฌํ™”ํ•˜๋ฉด, ๋จผ์ € ์ž…๋ ฅ ๋ฐ์ดํ„ฐ๋ฅผ ๋‘ ์žฅ์น˜๋กœ ๋ณต์‚ฌํ•œ ๋‹ค์Œ cuda:0์—์„œ std๋ฅผ ๊ณ„์‚ฐํ•˜๊ณ  cuda:1์—์„œ ๋™์‹œ์— mean์„ ๊ณ„์‚ฐํ•ฉ๋‹ˆ๋‹ค. * Attribute 512 ๊ธธ์ด์˜ 10๊ฐœ์˜ ๋ฐฐ์น˜๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฅผ attribute ์ฐจ์›์œผ๋กœ 2๊ฐœ์˜ ์žฅ์น˜์— ๋ณ‘๋ ฌํ™”ํ•˜๋ฉด, 10 x 512๋Š” 10 x 2 x 256์ด ๋ฉ๋‹ˆ๋‹ค. * Parameter ์ด๋Š” tensor ๋ชจ๋ธ ๋ณ‘๋ ฌํ™” ๋˜๋Š” naive layer-wise ๋ชจ๋ธ ๋ณ‘๋ ฌํ™”์™€ ์œ ์‚ฌํ•ฉ๋‹ˆ๋‹ค. ![flex-flow-soap](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/parallelism-flexflow.jpeg) ์ด ํ”„๋ ˆ์ž„์›Œํฌ์˜ ์ค‘์š”ํ•œ ์ ์€ (1) GPU/TPU/CPU ๋Œ€ (2) RAM/DRAM ๋Œ€ (3) ๋น ๋ฅธ ์ธํŠธ๋ผ-์ปค๋„ฅํŠธ ๋Œ€ ๋А๋ฆฐ ์ธํ„ฐ-์ปค๋„ฅํŠธ์™€ ๊ฐ™์€ ๋ฆฌ์†Œ์Šค๋ฅผ ๊ณ ๋ คํ•˜์—ฌ ์–ด๋””์—์„œ ์–ด๋–ค ๋ณ‘๋ ฌํ™”๋ฅผ ์‚ฌ์šฉํ• ์ง€๋ฅผ ์•Œ๊ณ ๋ฆฌ์ฆ˜์ ์œผ๋กœ ์ž๋™์œผ๋กœ ์ตœ์ ํ™”ํ•œ๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. 
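์œ„์˜ Sample๊ณผ Attribute ์˜ˆ์‹œ๋Š” ๊ฐ„๋‹จํ•œ NumPy ์ฝ”๋“œ๋กœ ํ™•์ธํ•ด ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. FlexFlow์˜ ์‹ค์ œ API๊ฐ€ ์•„๋‹ˆ๋ผ, ์–ด๋–ค ์ถ•์„ ๋ถ„ํ• ํ•˜๋А๋ƒ์— ๋”ฐ๋ผ ์žฅ์น˜๋‹น ํ…์„œ ๋ชจ์–‘์ด ์–ด๋–ป๊ฒŒ ๋‹ฌ๋ผ์ง€๋Š”์ง€๋งŒ ๋ณด์—ฌ์ฃผ๋Š” ์Šค์ผ€์น˜์ž…๋‹ˆ๋‹ค:

```python
import numpy as np

# ๊ธธ์ด 512์˜ ์‹œํ€€์Šค 10๊ฐœ๋กœ ์ด๋ฃจ์–ด์ง„ ๋ฐฐ์น˜๋ฅผ ๊ฐ€์ •ํ•œ ์žฅ๋‚œ๊ฐ ์˜ˆ์‹œ์ž…๋‹ˆ๋‹ค.
batch = np.zeros((10, 512))

# Sample ์ฐจ์› ๋ณ‘๋ ฌํ™”: ๋ฐฐ์น˜ ์ถ•์„ ์žฅ์น˜ 2๊ฐœ๋กœ ๋ถ„ํ•  -> ์žฅ์น˜๋‹น (5, 512)
sample_shards = np.split(batch, 2, axis=0)

# Attribute ์ฐจ์› ๋ณ‘๋ ฌํ™”: ๊ธธ์ด ์ถ•์„ ์žฅ์น˜ 2๊ฐœ๋กœ ๋ถ„ํ•  -> ์žฅ์น˜๋‹น (10, 256)
attribute_shards = np.split(batch, 2, axis=1)

print(sample_shards[0].shape)     # (5, 512)
print(attribute_shards[0].shape)  # (10, 256)
```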
ํ•œ ๊ฐ€์ง€ ๋งค์šฐ ์ค‘์š”ํ•œ ์ธก๋ฉด์€ FlexFlow๊ฐ€ ์ •์ ์ด๊ณ  ๊ณ ์ •๋œ ์›Œํฌ๋กœ๋“œ๋ฅผ ๊ฐ€์ง„ ๋ชจ๋ธ์— ๋Œ€ํ•œ DNN ๋ณ‘๋ ฌํ™”๋ฅผ ์ตœ์ ํ™”ํ•˜๊ธฐ ์œ„ํ•ด ์„ค๊ณ„๋˜์—ˆ๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋™์ ์ธ ๋™์ž‘์„ ๊ฐ€์ง„ ๋ชจ๋ธ์€ ๋ฐ˜๋ณต๋งˆ๋‹ค ๋‹ค๋ฅธ ๋ณ‘๋ ฌํ™” ์ „๋žต์„ ์„ ํ˜ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

๋”ฐ๋ผ์„œ ์ด ํ”„๋ ˆ์ž„์›Œํฌ์˜ ์žฅ์ ์€ ์„ ํƒํ•œ ํด๋Ÿฌ์Šคํ„ฐ์—์„œ 30๋ถ„ ๋™์•ˆ ์‹œ๋ฎฌ๋ ˆ์ด์…˜์„ ์‹คํ–‰ํ•˜๊ณ  ์ด ํŠน์ • ํ™˜๊ฒฝ์„ ์ตœ์ ์œผ๋กœ ํ™œ์šฉํ•˜๊ธฐ ์œ„ํ•œ ์ตœ์ƒ์˜ ์ „๋žต์„ ์ œ์•ˆํ•œ๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๊ตฌ์„ฑ ์š”์†Œ๋ฅผ ์ถ”๊ฐ€/์ œ๊ฑฐ/๊ต์ฒดํ•˜๋ฉด ์ด๋ฅผ ๋ฐ˜์˜ํ•˜์—ฌ ๊ณ„ํš์„ ๋‹ค์‹œ ์ตœ์ ํ™”ํ•œ ํ›„ ํ›ˆ๋ จํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค๋ฅธ ์„ค์ •์€ ์ž์ฒด์ ์ธ ์‚ฌ์šฉ์ž ์ •์˜ ์ตœ์ ํ™”๋ฅผ ๊ฐ€์งˆ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

๐Ÿค— Transformers ํ˜„ํ™ฉ: ์•„์ง ํ†ตํ•ฉ๋˜์ง€ ์•Š์Œ. ์ด๋ฏธ [transformers.utils.fx](https://github.com/huggingface/transformers/blob/master/src/transformers/utils/fx.py)๋ฅผ ํ†ตํ•ด ๋ชจ๋ธ์„ FX-์ถ”์ ํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ, ์ด๋Š” FlexFlow์˜ ์„ ํ–‰ ์กฐ๊ฑด์ž…๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์–ด๋–ค ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•ด์•ผ FlexFlow๊ฐ€ ์šฐ๋ฆฌ์˜ ๋ชจ๋ธ๊ณผ ํ•จ๊ป˜ ์ž‘๋™ํ•  ์ˆ˜ ์žˆ๋Š”์ง€ ํŒŒ์•…ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค.

## ์–ด๋–ค ์ „๋žต์„ ์‚ฌ์šฉํ•ด์•ผ ํ• ๊นŒ์š”? [[which-strategy-to-use-when]]

๋‹ค์Œ์€ ์–ด๋–ค ๋ณ‘๋ ฌํ™” ์ „๋žต์„ ์–ธ์ œ ์‚ฌ์šฉํ•ด์•ผ ํ•˜๋Š”์ง€์— ๋Œ€ํ•œ ๋งค์šฐ ๋Œ€๋žต์ ์ธ ๊ฐœ์š”์ž…๋‹ˆ๋‹ค. ๊ฐ ๋ชฉ๋ก์˜ ์ฒซ ๋ฒˆ์งธ ์ „๋žต์ด ์ผ๋ฐ˜์ ์œผ๋กœ ๋” ๋น ๋ฆ…๋‹ˆ๋‹ค.

**โ‡จ ๋‹จ์ผ GPU**

* ๋ชจ๋ธ์ด ๋‹จ์ผ GPU์— ๋งž๋Š” ๊ฒฝ์šฐ:

    1. ์ผ๋ฐ˜์ ์ธ ์‚ฌ์šฉ

* ๋ชจ๋ธ์ด ๋‹จ์ผ GPU์— ๋งž์ง€ ์•Š๋Š” ๊ฒฝ์šฐ:

    1. ZeRO + CPU ๋ฐ ์˜ต์…˜์œผ๋กœ NVMe ์˜คํ”„๋กœ๋“œ
    2. ์œ„์™€ ๋™์ผํ•˜๊ฒŒ ์‚ฌ์šฉํ•˜๋˜, ๊ฐ€์žฅ ํฐ ๋ ˆ์ด์–ด๊ฐ€ ๋‹จ์ผ GPU์— ๋งž์ง€ ์•Š๋Š” ๊ฒฝ์šฐ Memory Centric Tiling(์ž์„ธํ•œ ๋‚ด์šฉ์€ ์•„๋ž˜ ์ฐธ์กฐ)์„ ์ถ”๊ฐ€์ ์œผ๋กœ ์‚ฌ์šฉ

* ๊ฐ€์žฅ ํฐ ๋ ˆ์ด์–ด๊ฐ€ ๋‹จ์ผ GPU์— ๋งž์ง€ ์•Š๋Š” ๊ฒฝ์šฐ:

    1. ZeRO - [Memory Centric Tiling](https://deepspeed.readthedocs.io/en/latest/zero3.html#memory-centric-tiling) (MCT) ํ™œ์„ฑํ™”.
์ด๋ฅผ ํ†ตํ•ด ํฌ๊ธฐ๊ฐ€ ๋งค์šฐ ํฐ ๋ ˆ์ด์–ด๋ฅผ ์ž„์˜๋กœ ๋ถ„ํ• ํ•˜์—ฌ ์ˆœ์ฐจ์ ์œผ๋กœ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. MCT๋Š” GPU์— ํ™œ์„ฑํ™”๋œ ๋งค๊ฐœ๋ณ€์ˆ˜์˜ ์ˆ˜๋ฅผ ์ค„์ด์ง€๋งŒ ํ™œ์„ฑํ™” ๋ฉ”๋ชจ๋ฆฌ์—๋Š” ์˜ํ–ฅ์„ ์ฃผ์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ํ˜„์žฌ ์ž‘์„ฑ ๊ธฐ์ค€์œผ๋กœ ์ด ์š”๊ตฌ์‚ฌํ•ญ์€ ๋งค์šฐ ๋“œ๋ฌผ๊ธฐ ๋•Œ๋ฌธ์— ์‚ฌ์šฉ์ž๊ฐ€ `torch.nn.Linear`๋ฅผ ์ˆ˜๋™์œผ๋กœ ์ˆ˜์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. **โ‡จ ๋‹จ์ผ ๋…ธ๋“œ / ๋‹ค์ค‘ GPU** * ๋ชจ๋ธ์ด ๋‹จ์ผ GPU์— ๋งž๋Š” ๊ฒฝ์šฐ: 1. DDP - ๋ถ„์‚ฐ DP 2. ZeRO - ์ƒํ™ฉ๊ณผ ๊ตฌ์„ฑ์— ๋”ฐ๋ผ ๋น ๋ฅผ ์ˆ˜๋„ ์žˆ๊ณ  ๊ทธ๋ ‡์ง€ ์•Š์„ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. * ๋ชจ๋ธ์ด ๋‹จ์ผ GPU์— ๋งž์ง€ ์•Š๋Š” ๊ฒฝ์šฐ: 1. PP 2. ZeRO 3. TP NVLINK ๋˜๋Š” NVSwitch๋ฅผ ํ†ตํ•œ ๋งค์šฐ ๋น ๋ฅธ ์ธํŠธ๋ผ-๋…ธ๋“œ ์—ฐ๊ฒฐ์ด ์žˆ๋Š” ๊ฒฝ์šฐ ์ด ์„ธ ๊ฐ€์ง€ ๋ฐฉ๋ฒ•์€ ๊ฑฐ์˜ ๋™๋“ฑํ•  ๊ฒƒ์ด๋ฉฐ, ์ด๋Ÿฌํ•œ ์—ฐ๊ฒฐ์ด ์—†๋Š” ๊ฒฝ์šฐ PP๊ฐ€ TP๋‚˜ ZeRO๋ณด๋‹ค ๋น ๋ฅผ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋˜ํ•œ TP์˜ ์ฐจ์ˆ˜๋„ ์˜ํ–ฅ์„ ์ค„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํŠน์ • ์„ค์ •์—์„œ ์šฐ์Šน์ž๋ฅผ ์ฐพ๊ธฐ ์œ„ํ•ด ์‹คํ—˜ํ•˜๋Š” ๊ฒƒ์ด ๊ฐ€์žฅ ์ข‹์Šต๋‹ˆ๋‹ค. TP๋Š” ๊ฑฐ์˜ ํ•ญ์ƒ ๋‹จ์ผ ๋…ธ๋“œ ๋‚ด์—์„œ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ์ฆ‰, TP ํฌ๊ธฐ <= ๋…ธ๋“œ๋‹น GPU ์ˆ˜์ž…๋‹ˆ๋‹ค. * ๊ฐ€์žฅ ํฐ ๋ ˆ์ด์–ด๊ฐ€ ๋‹จ์ผ GPU์— ๋งž์ง€ ์•Š๋Š” ๊ฒฝ์šฐ: 1. ZeRO๋ฅผ ์‚ฌ์šฉํ•˜์ง€ ์•Š์„ ๊ฒฝ์šฐ - PP๋งŒ ์‚ฌ์šฉํ•  ์ˆ˜ ์—†์œผ๋ฏ€๋กœ TP๋ฅผ ์‚ฌ์šฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 2. ZeRO๋ฅผ ์‚ฌ์šฉํ•  ๊ฒฝ์šฐ, "๋‹จ์ผ GPU"์˜ ํ•ญ๋ชฉ๊ณผ ๋™์ผํ•œ ํ•ญ๋ชฉ ์ฐธ์กฐ **โ‡จ ๋‹ค์ค‘ ๋…ธ๋“œ / ๋‹ค์ค‘ GPU** * ๋น ๋ฅธ ๋…ธ๋“œ ๊ฐ„ ์—ฐ๊ฒฐ์ด ์žˆ๋Š” ๊ฒฝ์šฐ: 1. ZeRO - ๋ชจ๋ธ์— ๋Œ€ํ•œ ์ˆ˜์ •์ด ๊ฑฐ์˜ ํ•„์š”ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. 2. PP+TP+DP - ํ†ต์‹ ์ด ์ ์ง€๋งŒ ๋ชจ๋ธ์— ๋Œ€ํ•œ ๋Œ€๊ทœ๋ชจ ๋ณ€๊ฒฝ์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. * ๋А๋ฆฐ ๋…ธ๋“œ ๊ฐ„ ์—ฐ๊ฒฐ ๋ฐ GPU ๋ฉ”๋ชจ๋ฆฌ ๋ถ€์กฑํ•œ ๊ฒฝ์šฐ: 1. DP+PP+TP+ZeRO-1
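์œ„์˜ ๊ฒฐ์ • ๊ฐ€์ด๋“œ๋ฅผ ์•„์ฃผ ๋‹จ์ˆœํ™”ํ•˜๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๊ฐ€์ƒ์˜ ๋„์šฐ๋ฏธ ํ•จ์ˆ˜๋กœ ์š”์•ฝํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์‹ค์ œ ์„ ํƒ์€ ํ•˜๋“œ์›จ์–ด, ๋ชจ๋ธ ํฌ๊ธฐ, ์ƒํ˜ธ ์—ฐ๊ฒฐ ์†๋„์— ๋”ฐ๋ผ ๋‹ฌ๋ผ์ง€๋ฏ€๋กœ ๋ณธ๋ฌธ์—์„œ ๊ถŒ์žฅํ•œ ๋Œ€๋กœ ์‹คํ—˜์œผ๋กœ ํ™•์ธํ•ด์•ผ ํ•˜๋ฉฐ, ์ด ํ•จ์ˆ˜ ์ด๋ฆ„๊ณผ ๊ตฌ์กฐ๋Š” ์„ค๋ช…์„ ์œ„ํ•œ ๊ฐ€์ •์ž…๋‹ˆ๋‹ค:

```python
# ์œ„ ๊ฒฐ์ • ๊ฐ€์ด๋“œ๋ฅผ ๋‹จ์ˆœํ™”ํ•œ ๊ฐ€์ƒ์˜ ๋„์šฐ๋ฏธ ํ•จ์ˆ˜์ž…๋‹ˆ๋‹ค.
# ๋ฐ˜ํ™˜๊ฐ’์€ ์‹œ๋„ํ•ด๋ณผ ์ „๋žต์— ๋Œ€ํ•œ ํžŒํŠธ์ผ ๋ฟ, ํ™•์ •์ ์ธ ๋‹ต์ด ์•„๋‹™๋‹ˆ๋‹ค.
def pick_strategy(model_fits_one_gpu: bool, single_node: bool, fast_internode: bool = True) -> str:
    if single_node:
        if model_fits_one_gpu:
            return "DDP (๋˜๋Š” ์ƒํ™ฉ์— ๋”ฐ๋ผ ZeRO)"
        # ๋น ๋ฅธ ์ธํŠธ๋ผ-๋…ธ๋“œ ์—ฐ๊ฒฐ(NVLINK/NVSwitch)์ด ์žˆ์œผ๋ฉด ์„ธ ๋ฐฉ๋ฒ•์ด ๊ฑฐ์˜ ๋™๋“ฑํ•ฉ๋‹ˆ๋‹ค.
        return "PP / ZeRO / TP ์ค‘ ์‹คํ—˜์œผ๋กœ ์„ ํƒ"
    # ๋‹ค์ค‘ ๋…ธ๋“œ
    if fast_internode:
        return "ZeRO ๋˜๋Š” PP+TP+DP"
    return "DP+PP+TP+ZeRO-1"


print(pick_strategy(model_fits_one_gpu=False, single_node=False, fast_internode=False))
```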
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ ๋ฏธ์„ธ ํŠœ๋‹ํ•˜๊ธฐ[[finetune-a-pretrained-model]] [[open-in-colab]] ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋ฉด ์ƒ๋‹นํ•œ ์ด์ ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ๊ณ„์‚ฐ ๋น„์šฉ๊ณผ ํƒ„์†Œ๋ฐœ์ž๊ตญ์„ ์ค„์ด๊ณ , ์ฒ˜์Œ๋ถ€ํ„ฐ ๋ชจ๋ธ์„ ํ•™์Šต์‹œํ‚ฌ ํ•„์š” ์—†์ด ์ตœ์‹  ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๐Ÿค— Transformers๋Š” ๋‹ค์–‘ํ•œ ์ž‘์—…์„ ์œ„ํ•ด ์‚ฌ์ „ ํ•™์Šต๋œ ์ˆ˜์ฒœ ๊ฐœ์˜ ๋ชจ๋ธ์— ์•ก์„ธ์Šคํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ, ์ž์‹ ์˜ ์ž‘์—…๊ณผ ๊ด€๋ จ๋œ ๋ฐ์ดํ„ฐ์…‹์„ ์‚ฌ์šฉํ•ด ํ•™์Šตํ•ฉ๋‹ˆ๋‹ค. ์ด๊ฒƒ์€ ๋ฏธ์„ธ ํŠœ๋‹์ด๋ผ๊ณ  ํ•˜๋Š” ๋งค์šฐ ๊ฐ•๋ ฅํ•œ ํ›ˆ๋ จ ๊ธฐ๋ฒ•์ž…๋‹ˆ๋‹ค. ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ๋Š” ๋‹น์‹ ์ด ์„ ํƒํ•œ ๋”ฅ๋Ÿฌ๋‹ ํ”„๋ ˆ์ž„์›Œํฌ๋กœ ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ํŠœ๋‹ํ•ฉ๋‹ˆ๋‹ค: * ๐Ÿค— Transformers๋กœ ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ ๋ฏธ์„ธ ํŠœ๋‹ํ•˜๊ธฐ [`Trainer`]. * Keras๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ TensorFlow์—์„œ ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ํŠœ๋‹ํ•˜๊ธฐ. * ๊ธฐ๋ณธ PyTorch์—์„œ ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ํŠœ๋‹ํ•˜๊ธฐ. <a id='data-processing'></a> ## ๋ฐ์ดํ„ฐ์…‹ ์ค€๋น„[[prepare-a-dataset]] <Youtube id="_BZearw7f0w"/> ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ํŠœ๋‹ํ•˜๊ธฐ ์œ„ํ•ด์„œ ๋ฐ์ดํ„ฐ์…‹์„ ๋‹ค์šด๋กœ๋“œํ•˜๊ณ  ํ›ˆ๋ จํ•  ์ˆ˜ ์žˆ๋„๋ก ์ค€๋น„ํ•˜์„ธ์š”. 
์ด์ „ ํŠœํ† ๋ฆฌ์–ผ์—์„œ ํ›ˆ๋ จ์„ ์œ„ํ•ด ๋ฐ์ดํ„ฐ๋ฅผ ์ฒ˜๋ฆฌํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ณด์—ฌ๋“œ๋ ธ๋Š”๋ฐ, ์ง€๊ธˆ์ด ๋ฐฐ์šธ ๊ฑธ ๋˜์งš์„ ๊ธฐํšŒ์ž…๋‹ˆ๋‹ค! ๋จผ์ € [Yelp ๋ฆฌ๋ทฐ](https://huggingface.co/datasets/yelp_review_full) ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค: ```py >>> from datasets import load_dataset >>> dataset = load_dataset("yelp_review_full") >>> dataset["train"][100] {'label': 0, 'text': 'My expectations for McDonalds are t rarely high. But for one to still fail so spectacularly...that takes something special!\\nThe cashier took my friends\'s order, then promptly ignored me. I had to force myself in front of a cashier who opened his register to wait on the person BEHIND me. I waited over five minutes for a gigantic order that included precisely one kid\'s meal. After watching two people who ordered after me be handed their food, I asked where mine was. The manager started yelling at the cashiers for \\"serving off their orders\\" when they didn\'t have their food. But neither cashier was anywhere near those controls, and the manager was the one serving food to customers and clearing the boards.\\nThe manager was rude when giving me my order. She didn\'t make sure that I had everything ON MY RECEIPT, and never even had the decency to apologize that I felt I was getting poor service.\\nI\'ve eaten at various McDonalds restaurants for over 30 years. I\'ve worked at more than one location. I expect bad days, bad moods, and the occasional mistake. But I have yet to have a decent experience at this store. It will remain a place I avoid unless someone in my party needs to avoid illness from low blood sugar. Perhaps I should go back to the racially biased service of Steak n Shake instead!'} ``` ํ…์ŠคํŠธ๋ฅผ ์ฒ˜๋ฆฌํ•˜๊ณ  ์„œ๋กœ ๋‹ค๋ฅธ ๊ธธ์ด์˜ ์‹œํ€€์Šค ํŒจ๋”ฉ ๋ฐ ์ž˜๋ผ๋‚ด๊ธฐ ์ „๋žต์„ ํฌํ•จํ•˜๋ ค๋ฉด ํ† ํฌ๋‚˜์ด์ €๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. 
๋ฐ์ดํ„ฐ์…‹์„ ํ•œ ๋ฒˆ์— ์ฒ˜๋ฆฌํ•˜๋ ค๋ฉด ๐Ÿค— Dataset [`map`](https://huggingface.co/docs/datasets/process#map) ๋ฉ”์„œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ „์ฒด ๋ฐ์ดํ„ฐ์…‹์— ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์ ์šฉํ•˜์„ธ์š”: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") >>> def tokenize_function(examples): ... return tokenizer(examples["text"], padding="max_length", truncation=True) >>> tokenized_datasets = dataset.map(tokenize_function, batched=True) ``` ํ•„์š”ํ•œ ๊ฒฝ์šฐ ๋ฏธ์„ธ ํŠœ๋‹์„ ์œ„ํ•ด ๋ฐ์ดํ„ฐ์…‹์˜ ์ž‘์€ ๋ถ€๋ถ„ ์ง‘ํ•ฉ์„ ๋งŒ๋“ค์–ด ๋ฏธ์„ธ ํŠœ๋‹ ์ž‘์—… ์‹œ๊ฐ„์„ ์ค„์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000)) >>> small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000)) ``` <a id='trainer'></a> ## Train ์—ฌ๊ธฐ์„œ๋ถ€ํ„ฐ๋Š” ์‚ฌ์šฉํ•˜๋ ค๋Š” ํ”„๋ ˆ์ž„์›Œํฌ์— ํ•ด๋‹นํ•˜๋Š” ์„น์…˜์„ ๋”ฐ๋ผ์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์˜ค๋ฅธ์ชฝ ์‚ฌ์ด๋“œ๋ฐ”์˜ ๋งํฌ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์›ํ•˜๋Š” ํ”„๋ ˆ์ž„์›Œํฌ๋กœ ์ด๋™ํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ, ํŠน์ • ํ”„๋ ˆ์ž„์›Œํฌ์˜ ๋ชจ๋“  ์ฝ˜ํ…์ธ ๋ฅผ ์ˆจ๊ธฐ๋ ค๋ฉด ํ•ด๋‹น ํ”„๋ ˆ์ž„์›Œํฌ ๋ธ”๋ก์˜ ์˜ค๋ฅธ์ชฝ ์ƒ๋‹จ์— ์žˆ๋Š” ๋ฒ„ํŠผ์„ ์‚ฌ์šฉํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค! <frameworkcontent> <pt> <Youtube id="nvBXf7s7vTI"/> ## ํŒŒ์ดํ† ์น˜ Trainer๋กœ ํ›ˆ๋ จํ•˜๊ธฐ[[train-with-pytorch-trainer]] ๐Ÿค— Transformers๋Š” ๐Ÿค— Transformers ๋ชจ๋ธ ํ›ˆ๋ จ์— ์ตœ์ ํ™”๋œ [`Trainer`] ํด๋ž˜์Šค๋ฅผ ์ œ๊ณตํ•˜์—ฌ ํ›ˆ๋ จ ๋ฃจํ”„๋ฅผ ์ง์ ‘ ์ž‘์„ฑํ•˜์ง€ ์•Š๊ณ ๋„ ์‰ฝ๊ฒŒ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [`Trainer`] API๋Š” ๋กœ๊น…(logging), ๊ฒฝ์‚ฌ ๋ˆ„์ (gradient accumulation), ํ˜ผํ•ฉ ์ •๋ฐ€๋„(mixed precision) ๋“ฑ ๋‹ค์–‘ํ•œ ํ›ˆ๋ จ ์˜ต์…˜๊ณผ ๊ธฐ๋Šฅ์„ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. ๋จผ์ € ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ค๊ณ  ์˜ˆ์ƒ๋˜๋Š” ๋ ˆ์ด๋ธ” ์ˆ˜๋ฅผ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค. 
Yelp ๋ฆฌ๋ทฐ [๋ฐ์ดํ„ฐ์…‹ ์นด๋“œ](https://huggingface.co/datasets/yelp_review_full#data-fields)์—์„œ 5๊ฐœ์˜ ๋ ˆ์ด๋ธ”์ด ์žˆ์Œ์„ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=5) ``` <Tip> ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๊ฐ€์ค‘์น˜ ์ค‘ ์ผ๋ถ€๊ฐ€ ์‚ฌ์šฉ๋˜์ง€ ์•Š๊ณ  ์ผ๋ถ€ ๊ฐ€์ค‘์น˜๊ฐ€ ๋ฌด์ž‘์œ„๋กœ ํ‘œ์‹œ๋œ๋‹ค๋Š” ๊ฒฝ๊ณ ๊ฐ€ ํ‘œ์‹œ๋ฉ๋‹ˆ๋‹ค. ๊ฑฑ์ •๋งˆ์„ธ์š”. ์ด๊ฒƒ์€ ์˜ฌ๋ฐ”๋ฅธ ๋™์ž‘์ž…๋‹ˆ๋‹ค! ์‚ฌ์ „ ํ•™์Šต๋œ BERT ๋ชจ๋ธ์˜ ํ—ค๋“œ๋Š” ํ๊ธฐ๋˜๊ณ  ๋ฌด์ž‘์œ„๋กœ ์ดˆ๊ธฐํ™”๋œ ๋ถ„๋ฅ˜ ํ—ค๋“œ๋กœ ๋Œ€์ฒด๋ฉ๋‹ˆ๋‹ค. ์ด์ œ ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์˜ ์ง€์‹์œผ๋กœ ์‹œํ€€์Šค ๋ถ„๋ฅ˜ ์ž‘์—…์„ ์œ„ํ•œ ์ƒˆ๋กœ์šด ๋ชจ๋ธ ํ—ค๋“œ๋ฅผ ๋ฏธ์„ธ ํŠœ๋‹ ํ•ฉ๋‹ˆ๋‹ค. </Tip> ### ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ ํ›ˆ๋ จ[[training-hyperparameters]] ๋‹ค์Œ์œผ๋กœ ์ •ํ•  ์ˆ˜ ์žˆ๋Š” ๋ชจ๋“  ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ์™€ ๋‹ค์–‘ํ•œ ํ›ˆ๋ จ ์˜ต์…˜์„ ํ™œ์„ฑํ™”ํ•˜๊ธฐ ์œ„ํ•œ ํ”Œ๋ž˜๊ทธ๋ฅผ ํฌํ•จํ•˜๋Š” [`TrainingArguments`] ํด๋ž˜์Šค๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ๋Š” ๊ธฐ๋ณธ ํ›ˆ๋ จ [ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments)๋กœ ์‹œ์ž‘ํ•˜์ง€๋งŒ, ์ž์œ ๋กญ๊ฒŒ ์‹คํ—˜ํ•˜์—ฌ ์—ฌ๋Ÿฌ๋ถ„๋“ค์—๊ฒŒ ๋งž๋Š” ์ตœ์ ์˜ ์„ค์ •์„ ์ฐพ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ›ˆ๋ จ์—์„œ ์ฒดํฌํฌ์ธํŠธ(checkpoints)๋ฅผ ์ €์žฅํ•  ์œ„์น˜๋ฅผ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import TrainingArguments >>> training_args = TrainingArguments(output_dir="test_trainer") ``` ### ํ‰๊ฐ€ ํ•˜๊ธฐ[[evaluate]] [`Trainer`]๋Š” ํ›ˆ๋ จ ์ค‘์— ๋ชจ๋ธ ์„ฑ๋Šฅ์„ ์ž๋™์œผ๋กœ ํ‰๊ฐ€ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ๊ณ„์‚ฐํ•˜๊ณ  ๋ณด๊ณ ํ•  ํ•จ์ˆ˜๋ฅผ [`Trainer`]์— ์ „๋‹ฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 
[๐Ÿค— Evaluate](https://huggingface.co/docs/evaluate/index) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋Š” [`evaluate.load`](https://huggingface.co/spaces/evaluate-metric/accuracy) ํ•จ์ˆ˜๋กœ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ๋Š” ๊ฐ„๋‹จํ•œ [`accuracy`]ํ•จ์ˆ˜๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค (์ž์„ธํ•œ ๋‚ด์šฉ์€ [๋‘˜๋Ÿฌ๋ณด๊ธฐ](https://huggingface.co/docs/evaluate/a_quick_tour)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”): ```py >>> import numpy as np >>> import evaluate >>> metric = evaluate.load("accuracy") ``` `metric`์—์„œ [`~evaluate.compute`]๋ฅผ ํ˜ธ์ถœํ•˜์—ฌ ์˜ˆ์ธก์˜ ์ •ํ™•๋„๋ฅผ ๊ณ„์‚ฐํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ์ธก์„ `compute`์— ์ „๋‹ฌํ•˜๊ธฐ ์ „์— ์˜ˆ์ธก์„ ๋กœ์ง“์œผ๋กœ ๋ณ€ํ™˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค(๋ชจ๋“  ๐Ÿค— Transformers ๋ชจ๋ธ์€ ๋กœ์ง“์œผ๋กœ ๋ฐ˜ํ™˜ํ•œ๋‹ค๋Š” ์ ์„ ๊ธฐ์–ตํ•˜์„ธ์š”): ```py >>> def compute_metrics(eval_pred): ... logits, labels = eval_pred ... predictions = np.argmax(logits, axis=-1) ... return metric.compute(predictions=predictions, references=labels) ``` ๋ฏธ์„ธ ํŠœ๋‹ ์ค‘์— ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ๋ชจ๋‹ˆํ„ฐ๋งํ•˜๋ ค๋ฉด ํ›ˆ๋ จ ์ธ์ˆ˜์— `evaluation_strategy` ํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ง€์ •ํ•˜์—ฌ ๊ฐ ์—ํญ์ด ๋๋‚  ๋•Œ ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import TrainingArguments, Trainer >>> training_args = TrainingArguments(output_dir="test_trainer", evaluation_strategy="epoch") ``` ### ํ›ˆ๋ จ ํ•˜๊ธฐ[[trainer]] ๋ชจ๋ธ, ํ›ˆ๋ จ ์ธ์ˆ˜, ํ›ˆ๋ จ ๋ฐ ํ…Œ์ŠคํŠธ ๋ฐ์ดํ„ฐ์…‹, ํ‰๊ฐ€ ํ•จ์ˆ˜๊ฐ€ ํฌํ•จ๋œ [`Trainer`] ๊ฐ์ฒด๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค: ```py >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=small_train_dataset, ... eval_dataset=small_eval_dataset, ... compute_metrics=compute_metrics, ... ) ``` ๊ทธ๋ฆฌ๊ณ  [`~transformers.Trainer.train`]์„ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ํŠœ๋‹ํ•ฉ๋‹ˆ๋‹ค: ```py >>> trainer.train() ``` </pt> <tf> <a id='keras'></a> <Youtube id="rnTGBy2ax1c"/> ## Keras๋กœ ํ…์„œํ”Œ๋กœ์šฐ ๋ชจ๋ธ ํ›ˆ๋ จํ•˜๊ธฐ[[train-a-tensorflow-model-with-keras]] Keras API๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ…์„œํ”Œ๋กœ์šฐ์—์„œ ๐Ÿค— Transformers ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค! 
### Keras์šฉ ๋ฐ์ดํ„ฐ ๋กœ๋“œ[[loading-data-for-keras]] Keras API๋กœ ๐Ÿค— Transformers ๋ชจ๋ธ์„ ํ•™์Šต์‹œํ‚ค๋ ค๋ฉด ๋ฐ์ดํ„ฐ์…‹์„ Keras๊ฐ€ ์ดํ•ดํ•  ์ˆ˜ ์žˆ๋Š” ํ˜•์‹์œผ๋กœ ๋ณ€ํ™˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ ์„ธํŠธ๊ฐ€ ์ž‘์€ ๊ฒฝ์šฐ, ์ „์ฒด๋ฅผ NumPy ๋ฐฐ์—ด๋กœ ๋ณ€ํ™˜ํ•˜์—ฌ Keras๋กœ ์ „๋‹ฌํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ๋” ๋ณต์žกํ•œ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•˜๊ธฐ ์ „์— ๋จผ์ € ์ด ์ž‘์—…์„ ์‹œ๋„ํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ๋จผ์ € ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค. [GLUE ๋ฒค์น˜๋งˆํฌ](https://huggingface.co/datasets/glue)์˜ CoLA ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ๊ฐ„๋‹จํ•œ ๋ฐ”์ด๋„ˆ๋ฆฌ ํ…์ŠคํŠธ ๋ถ„๋ฅ˜ ์ž‘์—…์ด๋ฏ€๋กœ ์ง€๊ธˆ์€ ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ ๋ถ„ํ• ๋งŒ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ```py from datasets import load_dataset dataset = load_dataset("glue", "cola") dataset = dataset["train"] # Just take the training split for now ``` ๋‹ค์Œ์œผ๋กœ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๋กœ๋“œํ•˜๊ณ  ๋ฐ์ดํ„ฐ๋ฅผ NumPy ๋ฐฐ์—ด๋กœ ํ† ํฐํ™”ํ•ฉ๋‹ˆ๋‹ค. ๋ ˆ์ด๋ธ”์€ ์ด๋ฏธ 0๊ณผ 1๋กœ ๋œ ๋ฆฌ์ŠคํŠธ์ด๊ธฐ ๋•Œ๋ฌธ์— ํ† ํฐํ™”ํ•˜์ง€ ์•Š๊ณ  ๋ฐ”๋กœ NumPy ๋ฐฐ์—ด๋กœ ๋ณ€ํ™˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! 
```py
import numpy as np

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
tokenized_data = tokenizer(dataset["sentence"], return_tensors="np", padding=True)
# Tokenizer returns a BatchEncoding, but we convert that to a dict for Keras
tokenized_data = dict(tokenized_data)

labels = np.array(dataset["label"])  # Label is already an array of 0 and 1
```

๋งˆ์ง€๋ง‰์œผ๋กœ ๋ชจ๋ธ์„ ๋กœ๋“œํ•œ ๋’ค [`compile`](https://keras.io/api/models/model_training_apis/#compile-method)ํ•˜๊ณ  [`fit`](https://keras.io/api/models/model_training_apis/#fit-method)ํ•ฉ๋‹ˆ๋‹ค:

```py
from transformers import TFAutoModelForSequenceClassification
from tensorflow.keras.optimizers import Adam

# Load and compile our model
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased")
# Lower learning rates are often better for fine-tuning transformers
model.compile(optimizer=Adam(3e-5))

model.fit(tokenized_data, labels)
```

<Tip>

๋ชจ๋ธ์„ `compile()`ํ•  ๋•Œ ์†์‹ค ์ธ์ˆ˜๋ฅผ ๋ชจ๋ธ์— ์ „๋‹ฌํ•  ํ•„์š”๊ฐ€ ์—†์Šต๋‹ˆ๋‹ค! ์ด ์ธ์ˆ˜๋ฅผ ๋น„์›Œ๋‘๋ฉด ํ—ˆ๊น… ํŽ˜์ด์Šค ๋ชจ๋ธ์€ ์ž‘์—…๊ณผ ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜์— ์ ํ•ฉํ•œ ์†์‹ค์„ ์ž๋™์œผ๋กœ ์„ ํƒํ•ฉ๋‹ˆ๋‹ค. ์›ํ•œ๋‹ค๋ฉด ์–ธ์ œ๋“ ์ง€ ์ง์ ‘ ์†์‹ค์„ ์ง€์ •ํ•�˜์—ฌ ์ด๋ฅผ ์žฌ์ •์˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค!

</Tip>

์ด ์ ‘๊ทผ ๋ฐฉ์‹์€ ์†Œ๊ทœ๋ชจ ๋ฐ์ดํ„ฐ์…‹์—์„œ๋Š” ์ž˜ ์ž‘๋™ํ•˜์ง€๋งŒ, ๋Œ€๊ทœ๋ชจ ๋ฐ์ดํ„ฐ์…‹์—์„œ๋Š” ๋ฌธ์ œ๊ฐ€ ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์™œ ๊ทธ๋Ÿด๊นŒ์š”? ํ† ํฐํ™”๋œ ๋ฐฐ์—ด๊ณผ ๋ ˆ์ด๋ธ”์„ ๋ฉ”๋ชจ๋ฆฌ์— ์™„์ „ํžˆ ๋กœ๋“œํ•ด์•ผ ํ•˜๊ณ  NumPy๋Š” "๋“ค์ญ‰๋‚ ์ญ‰ํ•œ" ๋ฐฐ์—ด์„ ์ฒ˜๋ฆฌํ•˜์ง€ ์•Š๊ธฐ ๋•Œ๋ฌธ์—, ๋ชจ๋“  ํ† ํฐํ™”๋œ ์ƒ˜ํ”Œ์„ ์ „์ฒด ๋ฐ์ดํ„ฐ์…‹์—์„œ ๊ฐ€์žฅ ๊ธด ์ƒ˜ํ”Œ์˜ ๊ธธ์ด๋งŒํผ ํŒจ๋”ฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ๋ฐฐ์—ด์ด ํ›จ์”ฌ ๋” ์ปค์ง€๊ณ  ์ด ํŒจ๋”ฉ ํ† ํฐ์œผ๋กœ ์ธํ•ด ํ•™์Šต ์†๋„๋„ ๋А๋ ค์ง‘๋‹ˆ๋‹ค!
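์ „์ฒด ๋ฐ์ดํ„ฐ์…‹์˜ ์ตœ๋Œ€ ๊ธธ์ด๊นŒ์ง€ ํŒจ๋”ฉํ•˜๋Š” ๊ฒƒ์ด ์™œ ๋‚ญ๋น„์ธ์ง€๋Š” ๊ฐ€์ƒ์˜ ์ƒ˜ํ”Œ ๊ธธ์ด ๋ถ„ํฌ๋กœ ๊ฐ„๋‹จํžˆ ํ™•์ธํ•ด ๋ณผ ์ˆ�˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ์‹ค์ œ ํ† ํฌ๋‚˜์ด์ € ์ถœ๋ ฅ์ด ์•„๋‹ˆ๋ผ ๋ฌด์ž‘์œ„ ๊ธธ์ด๋ฅผ ๊ฐ€์ •ํ•œ ์Šค์ผ€์น˜๋กœ, ๋ฐฐ์น˜ ๋‹จ์œ„ ํŒจ๋”ฉ์ด ์ด ํ† ํฐ ์ˆ˜๋ฅผ ์–ผ๋งˆ๋‚˜ ์ค„์ด๋Š”์ง€ ๋น„๊ตํ•ฉ๋‹ˆ๋‹ค:

```python
import numpy as np

rng = np.random.default_rng(0)
# ํ† ํฐํ™”๋œ ์ƒ˜ํ”Œ ๊ธธ์ด๋ฅผ ํ‰๋‚ด ๋‚ธ ๊ฐ€์ƒ์˜ ๋ฐ์ดํ„ฐ์ž…๋‹ˆ๋‹ค.
lengths = rng.integers(low=8, high=512, size=1000)

# NumPy ๋ฐฐ์—ด ํ•˜๋‚˜๋กœ ๋งŒ๋“ค๋ ค๋ฉด ์ „์ฒด ๋ฐ์ดํ„ฐ์…‹์˜ ์ตœ๋Œ€ ๊ธธ์ด๊นŒ์ง€ ํŒจ๋”ฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค.
padded_to_global_max = len(lengths) * int(lengths.max())

# ๋ฐฐ์น˜(์˜ˆ: 32๊ฐœ) ๋‹จ์œ„๋กœ ์ŠคํŠธ๋ฆฌ๋ฐํ•˜๋ฉฐ ํŒจ๋”ฉํ•˜๋ฉด ๊ฐ ๋ฐฐ์น˜์˜ ์ตœ๋Œ€ ๊ธธ์ด๊นŒ์ง€๋งŒ ํŒจ๋”ฉํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค.
batches = np.array_split(lengths, len(lengths) // 32)
padded_per_batch = sum(len(b) * int(b.max()) for b in batches)

print(f"์ „์ฒด ์ตœ๋Œ€ ๊ธธ์ด๋กœ ํŒจ๋”ฉ: {padded_to_global_max} ํ† ํฐ")
print(f"๋ฐฐ์น˜๋ณ„ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ํŒจ๋”ฉ: {padded_per_batch} ํ† ํฐ")
```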
### ๋ฐ์ดํ„ฐ๋ฅผ tf.data.Dataset์œผ๋กœ ๋กœ๋“œํ•˜๊ธฐ[[loading-data-as-a-tfdatadataset]] ํ•™์Šต ์†๋„๊ฐ€ ๋А๋ ค์ง€๋Š” ๊ฒƒ์„ ํ”ผํ•˜๋ ค๋ฉด ๋ฐ์ดํ„ฐ๋ฅผ `tf.data.Dataset`์œผ๋กœ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์›ํ•œ๋‹ค๋ฉด ์ง์ ‘ `tf.data` ํŒŒ์ดํ”„๋ผ์ธ์„ ์ง์ ‘ ์ž‘์„ฑํ•  ์ˆ˜๋„ ์žˆ์ง€๋งŒ, ์ด ์ž‘์—…์„ ๊ฐ„ํŽธํ•˜๊ฒŒ ์ˆ˜ํ–‰ํ•˜๋Š” ์ˆ˜ ์žˆ๋Š” ๋‘ ๊ฐ€์ง€ ๋ฐฉ๋ฒ•์ด ์žˆ์Šต๋‹ˆ๋‹ค: - [`~TFPreTrainedModel.prepare_tf_dataset`]: ๋Œ€๋ถ€๋ถ„์˜ ๊ฒฝ์šฐ ์ด ๋ฐฉ๋ฒ•์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์˜ ๋ฉ”์„œ๋“œ์ด๊ธฐ ๋•Œ๋ฌธ์— ๋ชจ๋ธ์„ ๊ฒ€์‚ฌํ•˜์—ฌ ๋ชจ๋ธ ์ž…๋ ฅ์œผ๋กœ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ์—ด์„ ์ž๋™์œผ๋กœ ํŒŒ์•…ํ•˜๊ณ  ๋‚˜๋จธ์ง€๋Š” ๋ฒ„๋ ค์„œ ๋” ๋‹จ์ˆœํ•˜๊ณ  ์„ฑ๋Šฅ์ด ์ข‹์€ ๋ฐ์ดํ„ฐ ์ง‘ํ•ฉ์„ ๋งŒ๋“ค ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. - [`~datasets.Dataset.to_tf_dataset`]: ์ด ๋ฐฉ๋ฒ•์€ ์ข€ ๋” ๋‚ฎ์€ ์ˆ˜์ค€์ด๋ฉฐ, ํฌํ•จํ•  '์—ด'๊ณผ '๋ ˆ์ด๋ธ”'์„ ์ •ํ™•ํžˆ ์ง€์ •ํ•˜์—ฌ ๋ฐ์ดํ„ฐ์…‹์„ ์ƒ์„ฑํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์ •ํ™•ํžˆ ์ œ์–ดํ•˜๊ณ  ์‹ถ์„ ๋•Œ ์œ ์šฉํ•˜๋ฉฐ, ํฌํ•จํ•  'columns'๊ณผ 'label_cols'์„ ์ •ํ™•ํžˆ ์ง€์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [`~TFPreTrainedModel.prepare_tf_dataset`]์„ ์‚ฌ์šฉํ•˜๋ ค๋ฉด ๋จผ์ € ๋‹ค์Œ ์ฝ”๋“œ ์ƒ˜ํ”Œ๊ณผ ๊ฐ™์ด ํ† ํฌ๋‚˜์ด์ € ์ถœ๋ ฅ์„ ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ์—ด๋กœ ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```py def tokenize_dataset(data): # Keys of the returned dictionary will be added to the dataset as columns return tokenizer(data["text"]) dataset = dataset.map(tokenize_dataset) ``` ํ—ˆ๊น… ํŽ˜์ด์Šค ๋ฐ์ดํ„ฐ์…‹์€ ๊ธฐ๋ณธ์ ์œผ๋กœ ๋””์Šคํฌ์— ์ €์žฅ๋˜๋ฏ€๋กœ ๋ฉ”๋ชจ๋ฆฌ ์‚ฌ์šฉ๋Ÿ‰์„ ๋Š˜๋ฆฌ์ง€ ์•Š๋Š”๋‹ค๋Š” ์ ์„ ๊ธฐ์–ตํ•˜์„ธ์š”! ์—ด์ด ์ถ”๊ฐ€๋˜๋ฉด ๋ฐ์ดํ„ฐ์…‹์—์„œ ๋ฐฐ์น˜๋ฅผ ์ŠคํŠธ๋ฆฌ๋ฐํ•˜๊ณ  ๊ฐ ๋ฐฐ์น˜์— ํŒจ๋”ฉ์„ ์ถ”๊ฐ€ํ•  ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ ์ „์ฒด ๋ฐ์ดํ„ฐ์…‹์— ํŒจ๋”ฉ์„ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒƒ๋ณด๋‹ค ํŒจ๋”ฉ ํ† ํฐ์˜ ์ˆ˜๋ฅผ ํฌ๊ฒŒ ์ค„์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
```py
>>> tf_dataset = model.prepare_tf_dataset(dataset, batch_size=16, shuffle=True, tokenizer=tokenizer)
```

์œ„์˜ ์ฝ”๋“œ ์ƒ˜ํ”Œ์—์„œ๋Š” ๋ฐฐ์น˜๊ฐ€ ๋กœ๋“œ๋  ๋•Œ ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ํŒจ๋”ฉํ•  ์ˆ˜ ์žˆ๋„๋ก `prepare_tf_dataset`์— ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์ „๋‹ฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ์…‹์˜ ๋ชจ๋“  ์ƒ˜ํ”Œ ๊ธธ์ด๊ฐ€ ๊ฐ™๊ณ  ํŒจ๋”ฉ์ด ํ•„์š”ํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ ์ด ์ธ์ˆ˜๋ฅผ ๊ฑด๋„ˆ๋›ธ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ƒ˜ํ”Œ์„ ํŒจ๋”ฉํ•˜๋Š” ๊ฒƒ๋ณด๋‹ค ๋” ๋ณต์žกํ•œ ์ž‘์—…(์˜ˆ: ๋งˆ์Šคํ‚น ์–ธ์–ด ๋ชจ๋ธ๋ง์„ ์œ„ํ•œ ํ† ํฐ ์†์ƒ)์„ ์ˆ˜ํ–‰ํ•ด์•ผ ํ•˜๋Š” ๊ฒฝ์šฐ, `collate_fn` ์ธ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ƒ˜ํ”Œ ๋ชฉ๋ก์„ ๋ฐฐ์น˜๋กœ ๋ณ€ํ™˜ํ•˜๊ณ  ์›ํ•˜๋Š” ์ „์ฒ˜๋ฆฌ๋ฅผ ์ ์šฉํ•  ํ•จ์ˆ˜๋ฅผ ์ „๋‹ฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [์˜ˆ์‹œ](https://github.com/huggingface/transformers/tree/main/examples) ๋˜๋Š” [๋…ธํŠธ๋ถ](https://huggingface.co/docs/transformers/notebooks)์„ ์ฐธ์กฐํ•˜์—ฌ ์ด ์ ‘๊ทผ ๋ฐฉ์‹์ด ์‹ค์ œ๋กœ ์ž‘๋™ํ•˜๋Š” ๋ชจ์Šต์„ ํ™•์ธํ•˜์„ธ์š”.

`tf.data.Dataset`์„ ์ƒ์„ฑํ•œ ํ›„์—๋Š” ์ด์ „๊ณผ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ ๋ชจ๋ธ์„ ์ปดํŒŒ์ผํ•˜๊ณ  ํ›ˆ๋ จ(fit)ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค:

```py
model.compile(optimizer=Adam(3e-5))

model.fit(tf_dataset)
```

</tf>
</frameworkcontent>

<a id='pytorch_native'></a>

## ๊ธฐ๋ณธ ํŒŒ์ดํ† ์น˜๋กœ ํ›ˆ๋ จํ•˜๊ธฐ[[train-in-native-pytorch]]

<frameworkcontent>
<pt>
<Youtube id="Dh9CL8fyG80"/>

[`Trainer`]๋Š” ํ›ˆ๋ จ ๋ฃจํ”„๋ฅผ ์ฒ˜๋ฆฌํ•˜๋ฉฐ ํ•œ ์ค„์˜ ์ฝ”๋“œ๋กœ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ง์ ‘ ํ›ˆ๋ จ ๋ฃจํ”„๋ฅผ ์ž‘์„ฑํ•˜๋Š” ๊ฒƒ์„ ์„ ํ˜ธํ•˜๋Š” ์‚ฌ์šฉ์ž์˜ ๊ฒฝ์šฐ, ๊ธฐ๋ณธ PyTorch์—์„œ ๐Ÿค— Transformers ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค.

์ด ์‹œ์ ์—์„œ ๋…ธํŠธ๋ถ์„ ๋‹ค์‹œ ์‹œ์ž‘ํ•˜๊ฑฐ๋‚˜ ๋‹ค์Œ ์ฝ”๋“œ๋ฅผ ์‹คํ–‰ํ•ด ๋ฉ”๋ชจ๋ฆฌ๋ฅผ ํ™•๋ณดํ•ด์•ผ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค:

```py
del model
del trainer
torch.cuda.empty_cache()
```

๋‹ค์Œ์œผ๋กœ, ํ† ํฐํ™”๋œ ๋ฐ์ดํ„ฐ์…‹์„ ์ˆ˜๋™์œผ๋กœ ํ›„์ฒ˜๋ฆฌํ•˜์—ฌ ํ›ˆ๋ จ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก ์ค€๋น„ํ•ฉ๋‹ˆ๋‹ค.

1.
๋ชจ๋ธ์ด ์›์‹œ ํ…์ŠคํŠธ๋ฅผ ์ž…๋ ฅ์œผ๋กœ ํ—ˆ์šฉํ•˜์ง€ ์•Š์œผ๋ฏ€๋กœ `text` ์—ด์„ ์ œ๊ฑฐํ•ฉ๋‹ˆ๋‹ค: ```py >>> tokenized_datasets = tokenized_datasets.remove_columns(["text"]) ``` 2. ๋ชจ๋ธ์—์„œ ์ธ์ˆ˜์˜ ์ด๋ฆ„์ด `labels`๋กœ ์ง€์ •๋  ๊ฒƒ์œผ๋กœ ์˜ˆ์ƒํ•˜๋ฏ€๋กœ `label` ์—ด์˜ ์ด๋ฆ„์„ `labels`๋กœ ๋ณ€๊ฒฝํ•ฉ๋‹ˆ๋‹ค: ```py >>> tokenized_datasets = tokenized_datasets.rename_column("label", "labels") ``` 3. ๋ฐ์ดํ„ฐ์…‹์˜ ํ˜•์‹์„ List ๋Œ€์‹  PyTorch ํ…์„œ๋ฅผ ๋ฐ˜ํ™˜ํ•˜๋„๋ก ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค: ```py >>> tokenized_datasets.set_format("torch") ``` ๊ทธ๋ฆฌ๊ณ  ์•ž์„œ ํ‘œ์‹œ๋œ ๋Œ€๋กœ ๋ฐ์ดํ„ฐ์…‹์˜ ๋” ์ž‘์€ ํ•˜์œ„ ์ง‘ํ•ฉ์„ ์ƒ์„ฑํ•˜์—ฌ ๋ฏธ์„ธ ์กฐ์ • ์†๋„๋ฅผ ๋†’์ž…๋‹ˆ๋‹ค: ```py >>> small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000)) >>> small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000)) ``` ### DataLoader[[dataloader]] ํ›ˆ๋ จ ๋ฐ ํ…Œ์ŠคํŠธ ๋ฐ์ดํ„ฐ์…‹์— ๋Œ€ํ•œ 'DataLoader'๋ฅผ ์ƒ์„ฑํ•˜์—ฌ ๋ฐ์ดํ„ฐ ๋ฐฐ์น˜๋ฅผ ๋ฐ˜๋ณตํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> from torch.utils.data import DataLoader >>> train_dataloader = DataLoader(small_train_dataset, shuffle=True, batch_size=8) >>> eval_dataloader = DataLoader(small_eval_dataset, batch_size=8) ``` ์˜ˆ์ธก์„ ์œ„ํ•œ ๋ ˆ์ด๋ธ” ๊ฐœ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=5) ``` ### ์˜ตํ‹ฐ๋งˆ์ด์ € ๋ฐ ํ•™์Šต ์†๋„ ์Šค์ผ€์ค„๋Ÿฌ[[optimizer-and-learning-rate-scheduler]] ์˜ตํ‹ฐ๋งˆ์ด์ €์™€ ํ•™์Šต ์†๋„ ์Šค์ผ€์ค„๋Ÿฌ๋ฅผ ์ƒ์„ฑํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค. 
ํŒŒ์ดํ† ์น˜์—์„œ ์ œ๊ณตํ•˜๋Š” [`AdamW`](https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html) ์˜ตํ‹ฐ๋งˆ์ด์ €๋ฅผ ์‚ฌ์šฉํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> from torch.optim import AdamW >>> optimizer = AdamW(model.parameters(), lr=5e-5) ``` [`Trainer`]์—์„œ ๊ธฐ๋ณธ ํ•™์Šต ์†๋„ ์Šค์ผ€์ค„๋Ÿฌ๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import get_scheduler >>> num_epochs = 3 >>> num_training_steps = num_epochs * len(train_dataloader) >>> lr_scheduler = get_scheduler( ... name="linear", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps ... ) ``` ๋งˆ์ง€๋ง‰์œผ๋กœ, GPU์— ์•ก์„ธ์Šคํ•  ์ˆ˜ ์žˆ๋Š” ๊ฒฝ์šฐ 'device'๋ฅผ ์ง€์ •ํ•˜์—ฌ GPU๋ฅผ ์‚ฌ์šฉํ•˜๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋ ‡์ง€ ์•Š์œผ๋ฉด CPU์—์„œ ํ›ˆ๋ จํ•˜๋ฉฐ ๋ช‡ ๋ถ„์ด ์•„๋‹Œ ๋ช‡ ์‹œ๊ฐ„์ด ๊ฑธ๋ฆด ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> import torch >>> device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") >>> model.to(device) ``` <Tip> [Colaboratory](https://colab.research.google.com/) ๋˜๋Š” [SageMaker StudioLab](https://studiolab.sagemaker.aws/)๊ณผ ๊ฐ™์€ ํ˜ธ์ŠคํŒ… ๋…ธํŠธ๋ถ์ด ์—†๋Š” ๊ฒฝ์šฐ ํด๋ผ์šฐ๋“œ GPU์— ๋ฌด๋ฃŒ๋กœ ์•ก์„ธ์Šคํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. </Tip> ์ด์ œ ํ›ˆ๋ จํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! ๐Ÿฅณ ### ํ›ˆ๋ จ ๋ฃจํ”„[[training-loop]] ํ›ˆ๋ จ ์ง„ํ–‰ ์ƒํ™ฉ์„ ์ถ”์ ํ•˜๋ ค๋ฉด [tqdm](https://tqdm.github.io/) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํŠธ๋ ˆ์ด๋‹ ๋‹จ๊ณ„ ์ˆ˜์— ์ง„ํ–‰๋ฅ  ํ‘œ์‹œ์ค„์„ ์ถ”๊ฐ€ํ•˜์„ธ์š”: ```py >>> from tqdm.auto import tqdm >>> progress_bar = tqdm(range(num_training_steps)) >>> model.train() >>> for epoch in range(num_epochs): ... for batch in train_dataloader: ... batch = {k: v.to(device) for k, v in batch.items()} ... outputs = model(**batch) ... loss = outputs.loss ... loss.backward() ... optimizer.step() ... lr_scheduler.step() ... optimizer.zero_grad() ... 
progress_bar.update(1)
```

### ํ‰๊ฐ€ ํ•˜๊ธฐ[[evaluate]]

[`Trainer`]์— ํ‰๊ฐ€ ํ•จ์ˆ˜๋ฅผ ์ถ”๊ฐ€ํ•œ ๋ฐฉ๋ฒ•๊ณผ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ, ํ›ˆ๋ จ ๋ฃจํ”„๋ฅผ ์ง์ ‘ ์ž‘์„ฑํ•  ๋•Œ๋„ ๋™์ผํ•œ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์ด๋ฒˆ์—๋Š” ๊ฐ ์—ํฌํฌ๊ฐ€ ๋๋‚  ๋•Œ๋งˆ๋‹ค ํ‰๊ฐ€์ง€ํ‘œ๋ฅผ ๊ณ„์‚ฐํ•˜์—ฌ ๋ณด๊ณ ํ•˜๋Š” ๋Œ€์‹ , [`~evaluate.add_batch`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋“  ๋ฐฐ์น˜๋ฅผ ๋ˆ„์ ํ•˜๊ณ  ๋งจ ๋งˆ์ง€๋ง‰์— ํ‰๊ฐ€์ง€ํ‘œ๋ฅผ ๊ณ„์‚ฐํ•ฉ๋‹ˆ๋‹ค.

```py
>>> import evaluate

>>> metric = evaluate.load("accuracy")
>>> model.eval()
>>> for batch in eval_dataloader:
...     batch = {k: v.to(device) for k, v in batch.items()}
...     with torch.no_grad():
...         outputs = model(**batch)

...     logits = outputs.logits
...     predictions = torch.argmax(logits, dim=-1)
...     metric.add_batch(predictions=predictions, references=batch["labels"])

>>> metric.compute()
```

</pt>
</frameworkcontent>

<a id='additional-resources'></a>

## ์ถ”๊ฐ€ ์ž๋ฃŒ[[additional-resources]]

๋” ๋งŽ์€ ๋ฏธ์„ธ ํŠœ๋‹ ์˜ˆ์ œ๋Š” ๋‹ค์Œ์„ ์ฐธ์กฐํ•˜์„ธ์š”:

- [๐Ÿค— Transformers ์˜ˆ์ œ](https://github.com/huggingface/transformers/tree/main/examples)์—๋Š” PyTorch ๋ฐ ํ…์„œํ”Œ๋กœ์šฐ์—์„œ ์ผ๋ฐ˜์ ์ธ NLP ์ž‘์—…์„ ํ›ˆ๋ จํ•  ์ˆ˜ ์žˆ๋Š” ์Šคํฌ๋ฆฝํŠธ๊ฐ€ ํฌํ•จ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค.

- [๐Ÿค— Transformers ๋…ธํŠธ๋ถ](notebooks)์—๋Š” PyTorch ๋ฐ ํ…์„œํ”Œ๋กœ์šฐ์—์„œ ํŠน์ • ์ž‘์—…์„ ์œ„ํ•ด ๋ชจ๋ธ์„ ๋ฏธ์„ธ ํŠœ๋‹ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ๋‹ค์–‘ํ•œ ๋…ธํŠธ๋ถ์ด ํฌํ•จ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค.
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๐Ÿค— Transformers [PyTorch](https://pytorch.org/), [TensorFlow](https://www.tensorflow.org/), [JAX](https://jax.readthedocs.io/en/latest/)๋ฅผ ์œ„ํ•œ ์ตœ์ฒจ๋‹จ ๋จธ์‹ ๋Ÿฌ๋‹ ๐Ÿค— Transformers๋Š” ์‚ฌ์ „ํ•™์Šต๋œ ์ตœ์ฒจ๋‹จ ๋ชจ๋ธ๋“ค์„ ์‰ฝ๊ฒŒ ๋‹ค์šด๋กœ๋“œํ•˜๊ณ  ํ›ˆ๋ จ์‹œํ‚ฌ ์ˆ˜ ์žˆ๋Š” API์™€ ๋„๊ตฌ๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์‚ฌ์ „ํ•™์Šต๋œ ๋ชจ๋ธ์„ ์“ฐ๋ฉด ์ปดํ“จํŒ… ๋น„์šฉ๊ณผ ํƒ„์†Œ ๋ฐฐ์ถœ๋Ÿ‰์ด ์ค„๊ณ , ๋ชจ๋ธ์„ ์ฒ˜์Œ๋ถ€ํ„ฐ ํ›ˆ๋ จ์‹œํ‚ค๋Š” ๋ฐ ํ•„์š”ํ•œ ์‹œ๊ฐ„๊ณผ ๋ฆฌ์†Œ์Šค๋ฅผ ์ ˆ์•ฝํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ €ํฌ ๋ชจ๋ธ๋“ค์€ ๋‹ค์–‘ํ•œ ๋ถ„์•ผ์˜ ํƒœ์Šคํฌ๋ฅผ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. ๐Ÿ“ **์ž์—ฐ์–ด ์ฒ˜๋ฆฌ**: ํ…์ŠคํŠธ ๋ถ„๋ฅ˜, ๊ฐœ์ฒด๋ช… ์ธ์‹, ์งˆ์˜์‘๋‹ต, ์–ธ์–ด ๋ชจ๋ธ๋ง, ์š”์•ฝ, ๋ฒˆ์—ญ, ๊ฐ๊ด€์‹ ์งˆ์˜์‘๋‹ต, ํ…์ŠคํŠธ ์ƒ์„ฑ<br> ๐Ÿ–ผ๏ธ **์ปดํ“จํ„ฐ ๋น„์ „**: ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜, ๊ฐ์ฒด ํƒ์ง€, ๊ฐ์ฒด ๋ถ„ํ• <br> ๐Ÿ—ฃ๏ธ **์˜ค๋””์˜ค**: ์ž๋™์Œ์„ฑ์ธ์‹, ์˜ค๋””์˜ค ๋ถ„๋ฅ˜<br> ๐Ÿ™ **๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ**: ํ‘œ ์งˆ์˜์‘๋‹ต, ๊ด‘ํ•™ ๋ฌธ์ž ์ธ์‹ (OCR), ์Šค์บ”ํ•œ ๋ฌธ์„œ์—์„œ ์ •๋ณด ์ถ”์ถœ, ๋น„๋””์˜ค ๋ถ„๋ฅ˜, ์‹œ๊ฐ ์งˆ์˜์‘๋‹ต ๐Ÿค— Transformers๋Š” PyTorch, TensorFlow์™€ JAX ๊ฐ„์˜ ์ƒํ˜ธ์šด์šฉ์„ฑ์„ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. ์œ ์—ฐํ•˜๊ฒŒ ๋ชจ๋ธ์˜ ๊ฐ ๋‹จ๊ณ„๋งˆ๋‹ค ๋‹ค๋ฅธ ํ”„๋ ˆ์ž„์›Œํฌ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. 
์˜ˆ๋ฅผ ๋“ค์–ด ์ฝ”๋“œ 3์ค„๋งŒ ์จ์„œ ๋ชจ๋ธ์„ ํ›ˆ๋ จ์‹œํ‚จ ๋‹ค์Œ, ๋‹ค๋ฅธ ํ”„๋ ˆ์ž„์›Œํฌ ์ƒ์—์„œ ์ถ”๋ก ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ ์šด์˜ ํ™˜๊ฒฝ์— ๋ฐฐํฌํ•˜๊ธฐ ์œ„ํ•ด ONNX๋‚˜ TorchScript ํ˜•์‹์œผ๋กœ ๋‚ด๋ณด๋‚ผ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์ปค๋ฎค๋‹ˆํ‹ฐ์— ์ฐธ์—ฌํ•˜์‹œ๋ ค๋ฉด [Hub](https://huggingface.co/models), [ํฌ๋Ÿผ](https://discuss.huggingface.co/), [๋””์Šค์ฝ”๋“œ](https://discord.com/invite/JfAtkvEtRb)๋ฅผ ๋ฐฉ๋ฌธํ•ด์ฃผ์„ธ์š”! ## Hugging Face ํŒ€๊ณผ ์ง์ ‘ ๋Œ€ํ™”ํ•˜๊ณ  ์‹ถ์œผ์‹ ๊ฐ€์š”?[[hugging-face-team]] <a target="_blank" href="https://huggingface.co/support"> <img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="width: 100%; max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);"> </a> ## ์ฝ˜ํ…์ธ [[contents]] ์ €ํฌ ๊ธฐ์ˆ ๋ฌธ์„œ๋Š” ํฌ๊ฒŒ 5๊ฐœ ์„น์…˜์œผ๋กœ ๋‚˜๋ˆŒ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: - **์‹œ์ž‘ํ•˜๊ธฐ**์—์„œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ๊ฐ„๋‹จํžˆ ํ›‘์–ด๋ณด๊ณ , ๋ณธ๊ฒฉ์ ์œผ๋กœ ๋›ฐ์–ด๋“ค ์ˆ˜ ์žˆ๊ฒŒ ์„ค์น˜ ๋ฐฉ๋ฒ•์„ ์•ˆ๋‚ดํ•ฉ๋‹ˆ๋‹ค. - **ํŠœํ† ๋ฆฌ์–ผ**์—์„œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์— ์ต์ˆ™ํ•ด์งˆ ์ˆ˜ ์žˆ๋„๋ก ์ž์„ธํ•˜๊ณ ๋„ ์‰ฝ๊ฒŒ ๊ธฐ๋ณธ์ ์ธ ๋ถ€๋ถ„์„ ์•ˆ๋‚ดํ•ฉ๋‹ˆ๋‹ค. - **How-to ๊ฐ€์ด๋“œ**์—์„œ ์–ธ์–ด ๋ชจ๋ธ๋ง์„ ์œ„ํ•ด ์‚ฌ์ „ํ•™์Šต๋œ ๋ชจ๋ธ์„ ํŒŒ์ธ ํŠœ๋‹ํ•˜๋Š” ๋ฐฉ๋ฒ•์ด๋‚˜, ์ง์ ‘ ๋ชจ๋ธ์„ ์ž‘์„ฑํ•˜๊ณ  ๊ณต์œ ํ•˜๋Š” ๋ฐฉ๋ฒ•๊ณผ ๊ฐ™์ด ํŠน์ • ๋ชฉํ‘œ๋ฅผ ๋‹ฌ์„ฑํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์•ˆ๋‚ดํ•ฉ๋‹ˆ๋‹ค. - **๊ฐœ๋… ๊ฐ€์ด๋“œ**์—์„œ ๐Ÿค— Transformers์˜ ์„ค๊ณ„ ์ฒ ํ•™๊ณผ ํ•จ๊ป˜ ๋ชจ๋ธ์ด๋‚˜ ํƒœ์Šคํฌ ๋’ค์— ์ˆจ๊ฒจ์ง„ ๊ฐœ๋…๋“ค๊ณผ ์•„์ด๋””์–ด๋ฅผ ํƒ๊ตฌํ•˜๊ณ  ์„ค๋ช…์„ ๋ง๋ถ™์ž…๋‹ˆ๋‹ค. - **API**์—์„œ ๋ชจ๋“  ํด๋ž˜์Šค์™€ ํ•จ์ˆ˜๋ฅผ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. - **๋ฉ”์ธ ํด๋ž˜์Šค**์—์„œ configuration, model, tokenizer, pipeline๊ณผ ๊ฐ™์ด ์ œ์ผ ์ค‘์š”ํ•œ ํด๋ž˜์Šค๋“ค์„ ์ž์„ธํžˆ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. - **๋ชจ๋ธ**์—์„œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ ์† ๊ตฌํ˜„๋œ ๊ฐ ๋ชจ๋ธ๊ณผ ์—ฐ๊ด€๋œ ํด๋ž˜์Šค์™€ ํ•จ์ˆ˜๋ฅผ ์ž์„ธํžˆ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. 
- **๋‚ด๋ถ€ ์œ ํ‹ธ๋ฆฌํ‹ฐ**์—์„œ ๋‚ด๋ถ€์ ์œผ๋กœ ์‚ฌ์šฉ๋˜๋Š” ์œ ํ‹ธ๋ฆฌํ‹ฐ ํด๋ž˜์Šค์™€ ํ•จ์ˆ˜๋ฅผ ์ž์„ธํžˆ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. ### ์ง€์› ๋ชจ๋ธ[[supported-models]] <!--This list is updated automatically from the README with _make fix-copies_. Do not update manually! --> 1. **[ALBERT](model_doc/albert)** (from Google Research and the Toyota Technological Institute at Chicago) released with the paper [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut. 1. **[BART](model_doc/bart)** (from Facebook) released with the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer. 1. **[BARThez](model_doc/barthez)** (from ร‰cole polytechnique) released with the paper [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis. 1. **[BARTpho](model_doc/bartpho)** (from VinAI Research) released with the paper [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://arxiv.org/abs/2109.09701) by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen. 1. **[BEiT](model_doc/beit)** (from Microsoft) released with the paper [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong, Furu Wei. 1. **[BERT](model_doc/bert)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova. 1. 
**[BERT For Sequence Generation](model_doc/bert-generation)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
1. **[BERTweet](model_doc/bertweet)** (from VinAI Research) released with the paper [BERTweet: A pre-trained language model for English Tweets](https://aclanthology.org/2020.emnlp-demos.2/) by Dat Quoc Nguyen, Thanh Vu and Anh Tuan Nguyen.
1. **[BigBird-Pegasus](model_doc/bigbird_pegasus)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
1. **[BigBird-RoBERTa](model_doc/big_bird)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
1. **[Blenderbot](model_doc/blenderbot)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
1. **[BlenderbotSmall](model_doc/blenderbot-small)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
1. **[BLOOM](model_doc/bloom)** (from BigScience workshop) released by the [BigScience Workshop](https://bigscience.huggingface.co/).
1.
**[BORT](model_doc/bort)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry.
1. **[ByT5](model_doc/byt5)** (from Google Research) released with the paper [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626) by Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel.
1. **[CamemBERT](model_doc/camembert)** (from Inria/Facebook/Sorbonne) released with the paper [CamemBERT: a Tasty French Language Model](https://arxiv.org/abs/1911.03894) by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
1. **[CANINE](model_doc/canine)** (from Google Research) released with the paper [CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation](https://arxiv.org/abs/2103.06874) by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting.
1. **[CLIP](model_doc/clip)** (from OpenAI) released with the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever.
1. **[CLIPSeg](model_doc/clipseg)** (from University of Göttingen) released with the paper [Image Segmentation Using Text and Image Prompts](https://arxiv.org/abs/2112.10003) by Timo Lüddecke and Alexander Ecker.
1. **[CodeGen](model_doc/codegen)** (from Salesforce) released with the paper [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong.
1.
**[Conditional DETR](model_doc/conditional_detr)** (from Microsoft Research Asia) released with the paper [Conditional DETR for Fast Training Convergence](https://arxiv.org/abs/2108.06152) by Depu Meng, Xiaokang Chen, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, Jingdong Wang.
1. **[ConvBERT](model_doc/convbert)** (from YituTech) released with the paper [ConvBERT: Improving BERT with Span-based Dynamic Convolution](https://arxiv.org/abs/2008.02496) by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan.
1. **[ConvNeXT](model_doc/convnext)** (from Facebook AI) released with the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie.
1. **[ConvNeXTV2](model_doc/convnextv2)** (from Facebook AI) released with the paper [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808) by Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie.
1. **[CPM](model_doc/cpm)** (from Tsinghua University) released with the paper [CPM: A Large-scale Generative Chinese Pre-trained Language Model](https://arxiv.org/abs/2012.00413) by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun.
1. **[CTRL](model_doc/ctrl)** (from Salesforce) released with the paper [CTRL: A Conditional Transformer Language Model for Controllable Generation](https://arxiv.org/abs/1909.05858) by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher.
1.
**[CvT](model_doc/cvt)** (from Microsoft) released with the paper [CvT: Introducing Convolutions to Vision Transformers](https://arxiv.org/abs/2103.15808) by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, Lei Zhang.
1. **[Data2Vec](model_doc/data2vec)** (from Facebook) released with the paper [Data2Vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://arxiv.org/abs/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli.
1. **[DeBERTa](model_doc/deberta)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
1. **[DeBERTa-v2](model_doc/deberta-v2)** (from Microsoft) released with the paper [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
1. **[Decision Transformer](model_doc/decision_transformer)** (from Berkeley/Facebook/Google) released with the paper [Decision Transformer: Reinforcement Learning via Sequence Modeling](https://arxiv.org/abs/2106.01345) by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch.
1. **[Deformable DETR](model_doc/deformable_detr)** (from SenseTime Research) released with the paper [Deformable DETR: Deformable Transformers for End-to-End Object Detection](https://arxiv.org/abs/2010.04159) by Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai.
1. **[DeiT](model_doc/deit)** (from Facebook) released with the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou.
1.
**[DETR](model_doc/detr)** (from Facebook) released with the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko.
1. **[DialoGPT](model_doc/dialogpt)** (from Microsoft Research) released with the paper [DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation](https://arxiv.org/abs/1911.00536) by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan.
1. **[DistilBERT](model_doc/distilbert)** (from HuggingFace), released together with the paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108) by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into [DistilGPT2](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), RoBERTa into [DistilRoBERTa](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation), Multilingual BERT into [DistilmBERT](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) and a German version of DistilBERT.
1. **[DiT](model_doc/dit)** (from Microsoft Research) released with the paper [DiT: Self-supervised Pre-training for Document Image Transformer](https://arxiv.org/abs/2203.02378) by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei.
1. **[Donut](model_doc/donut)** (from NAVER), released together with the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park.
1.
**[DPR](model_doc/dpr)** (from Facebook) released with the paper [Dense Passage Retrieval for Open-Domain Question Answering](https://arxiv.org/abs/2004.04906) by Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih.
1. **[DPT](model_doc/dpt)** (from Intel Labs) released with the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by René Ranftl, Alexey Bochkovskiy, Vladlen Koltun.
1. **[EfficientNet](model_doc/efficientnet)** (from Google Research) released with the paper [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) by Mingxing Tan and Quoc V. Le.
1. **[ELECTRA](model_doc/electra)** (from Google Research/Stanford University) released with the paper [ELECTRA: Pre-training text encoders as discriminators rather than generators](https://arxiv.org/abs/2003.10555) by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning.
1. **[EncoderDecoder](model_doc/encoder-decoder)** (from Google Research) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
1. **[ERNIE](model_doc/ernie)** (from Baidu) released with the paper [ERNIE: Enhanced Representation through Knowledge Integration](https://arxiv.org/abs/1904.09223) by Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, Hua Wu.
1. **[ESM](model_doc/esm)** (from Meta AI) are transformer protein language models. **ESM-1b** was released with the paper [Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences](https://www.pnas.org/content/118/15/e2016239118) by Alexander Rives, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo, Myle Ott, C. Lawrence Zitnick, Jerry Ma, and Rob Fergus.
**ESM-1v** was released with the paper [Language models enable zero-shot prediction of the effects of mutations on protein function](https://doi.org/10.1101/2021.07.09.450648) by Joshua Meier, Roshan Rao, Robert Verkuil, Jason Liu, Tom Sercu and Alexander Rives. **ESM-2 and ESMFold** were released with the paper [Language models of protein sequences at the scale of evolution enable accurate structure prediction](https://doi.org/10.1101/2022.07.20.500902) by Zeming Lin, Halil Akin, Roshan Rao, Brian Hie, Zhongkai Zhu, Wenting Lu, Allan dos Santos Costa, Maryam Fazel-Zarandi, Tom Sercu, Sal Candido, Alexander Rives.
1. **[FLAN-T5](model_doc/flan-t5)** (from Google AI) released in the repository [google-research/t5x](https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints) by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei
1. **[FlauBERT](model_doc/flaubert)** (from CNRS) released with the paper [FlauBERT: Unsupervised Language Model Pre-training for French](https://arxiv.org/abs/1912.05372) by Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab.
1. **[FLAVA](model_doc/flava)** (from Facebook AI) released with the paper [FLAVA: A Foundational Language And Vision Alignment Model](https://arxiv.org/abs/2112.04482) by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela.
1.
**[FNet](model_doc/fnet)** (from Google Research) released with the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824) by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon.
1. **[Funnel Transformer](model_doc/funnel)** (from CMU/Google Brain) released with the paper [Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing](https://arxiv.org/abs/2006.03236) by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le.
1. **[GLPN](model_doc/glpn)** (from KAIST) released with the paper [Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth](https://arxiv.org/abs/2201.07436) by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim.
1. **[GPT](model_doc/openai-gpt)** (from OpenAI) released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever.
1. **[GPT Neo](model_doc/gpt_neo)** (from EleutherAI) released in the repository [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo) by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy.
1. **[GPT NeoX](model_doc/gpt_neox)** (from EleutherAI) released with the paper [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745) by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach
1. **[GPT NeoX Japanese](model_doc/gpt_neox_japanese)** (from ABEJA) released by Shinya Otani, Takayoshi Makabe, Anuj Arora, and Kyo Hattori.
1.
**[GPT-2](model_doc/gpt2)** (from OpenAI) released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
1. **[GPT-J](model_doc/gptj)** (from EleutherAI) released in the repository [kingoflolz/mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/) by Ben Wang and Aran Komatsuzaki.
1. **[GPTSAN-japanese](model_doc/gptsan-japanese)** released in the repository [tanreinama/GPTSAN](https://github.com/tanreinama/GPTSAN/blob/main/report/model.md) by Toshiyuki Sakamoto(tanreinama).
1. **[GroupViT](model_doc/groupvit)** (from UCSD, NVIDIA) released with the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094) by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang.
1. **[Hubert](model_doc/hubert)** (from Facebook) released with the paper [HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units](https://arxiv.org/abs/2106.07447) by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
1. **[I-BERT](model_doc/ibert)** (from Berkeley) released with the paper [I-BERT: Integer-only BERT Quantization](https://arxiv.org/abs/2101.01321) by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, Kurt Keutzer.
1. **[ImageGPT](model_doc/imagegpt)** (from OpenAI) released with the paper [Generative Pretraining from Pixels](https://openai.com/blog/image-gpt/) by Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever.
1. **[Jukebox](model_doc/jukebox)** (from OpenAI) released with the paper [Jukebox: A Generative Model for Music](https://arxiv.org/pdf/2005.00341.pdf) by Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, Ilya Sutskever.
1.
**[LayoutLM](model_doc/layoutlm)** (from Microsoft Research Asia) released with the paper [LayoutLM: Pre-training of Text and Layout for Document Image Understanding](https://arxiv.org/abs/1912.13318) by Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou.
1. **[LayoutLMv2](model_doc/layoutlmv2)** (from Microsoft Research Asia) released with the paper [LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding](https://arxiv.org/abs/2012.14740) by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou.
1. **[LayoutLMv3](model_doc/layoutlmv3)** (from Microsoft Research Asia) released with the paper [LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking](https://arxiv.org/abs/2204.08387) by Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei.
1. **[LayoutXLM](model_doc/layoutxlm)** (from Microsoft Research Asia) released with the paper [LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding](https://arxiv.org/abs/2104.08836) by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei.
1. **[LED](model_doc/led)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
1. **[LeViT](model_doc/levit)** (from Meta AI) released with the paper [LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference](https://arxiv.org/abs/2104.01136) by Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze.
1. **[LiLT](model_doc/lilt)** (from South China University of Technology) released with the paper [LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding](https://arxiv.org/abs/2202.13669) by Jiapeng Wang, Lianwen Jin, Kai Ding.
1.
**[Longformer](model_doc/longformer)** (from AllenAI) released with the paper [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
1. **[LongT5](model_doc/longt5)** (from Google AI) released with the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/abs/2112.07916) by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang.
1. **[LUKE](model_doc/luke)** (from Studio Ousia) released with the paper [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://arxiv.org/abs/2010.01057) by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, Yuji Matsumoto.
1. **[LXMERT](model_doc/lxmert)** (from UNC Chapel Hill) released with the paper [LXMERT: Learning Cross-Modality Encoder Representations from Transformers for Open-Domain Question Answering](https://arxiv.org/abs/1908.07490) by Hao Tan and Mohit Bansal.
1. **[M-CTC-T](model_doc/mctct)** (from Facebook) released with the paper [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert.
1. **[M2M100](model_doc/m2m_100)** (from Facebook) released with the paper [Beyond English-Centric Multilingual Machine Translation](https://arxiv.org/abs/2010.11125) by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin.
1. **[MarianMT](model_doc/marian)** Machine translation models trained using [OPUS](http://opus.nlpl.eu/) data by Jörg Tiedemann. The [Marian Framework](https://marian-nmt.github.io/) is being developed by the Microsoft Translator Team.
1.
**[MarkupLM](model_doc/markuplm)** (from Microsoft Research Asia) released with the paper [MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518) by Junlong Li, Yiheng Xu, Lei Cui, Furu Wei.
1. **[Mask2Former](model_doc/mask2former)** (from FAIR and UIUC) released with the paper [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) by Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, Rohit Girdhar.
1. **[MaskFormer](model_doc/maskformer)** (from Meta and UIUC) released with the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov.
1. **[mBART](model_doc/mbart)** (from Facebook) released with the paper [Multilingual Denoising Pre-training for Neural Machine Translation](https://arxiv.org/abs/2001.08210) by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
1. **[mBART-50](model_doc/mbart)** (from Facebook) released with the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401) by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan.
1. **[Megatron-BERT](model_doc/megatron-bert)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
1.
**[Megatron-GPT2](model_doc/megatron_gpt2)** (from NVIDIA) released with the paper [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro.
1. **[mLUKE](model_doc/mluke)** (from Studio Ousia) released with the paper [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151) by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka.
1. **[MobileBERT](model_doc/mobilebert)** (from CMU/Google Brain) released with the paper [MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices](https://arxiv.org/abs/2004.02984) by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou.
1. **[MobileViT](model_doc/mobilevit)** (from Apple) released with the paper [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari.
1. **[MPNet](model_doc/mpnet)** (from Microsoft Research) released with the paper [MPNet: Masked and Permuted Pre-training for Language Understanding](https://arxiv.org/abs/2004.09297) by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu.
1. **[MT5](model_doc/mt5)** (from Google AI) released with the paper [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel.
1. **[MVP](model_doc/mvp)** (from RUC AI Box) released with the paper [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
1.
**[Nezha](model_doc/nezha)** (from Huawei Noah’s Ark Lab) released with the paper [NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204) by Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu.
1. **[NLLB](model_doc/nllb)** (from Meta) released with the paper [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) by the NLLB team.
1. **[Nyströmformer](model_doc/nystromformer)** (from the University of Wisconsin - Madison) released with the paper [Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention](https://arxiv.org/abs/2102.03902) by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh.
1. **[OneFormer](model_doc/oneformer)** (from SHI Labs) released with the paper [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220) by Jitesh Jain, Jiachen Li, MangTik Chiu, Ali Hassani, Nikita Orlov, Humphrey Shi.
1. **[OPT](model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al.
1. **[OWL-ViT](model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby.
1.
**[Pegasus](model_doc/pegasus)** (from Google) released with the paper [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/abs/1912.08777) by Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu.
1. **[PEGASUS-X](model_doc/pegasus_x)** (from Google) released with the paper [Investigating Efficiently Extending Transformers for Long Input Summarization](https://arxiv.org/abs/2208.04347) by Jason Phang, Yao Zhao, and Peter J. Liu.
1. **[Perceiver IO](model_doc/perceiver)** (from Deepmind) released with the paper [Perceiver IO: A General Architecture for Structured Inputs & Outputs](https://arxiv.org/abs/2107.14795) by Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff, Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira.
1. **[PhoBERT](model_doc/phobert)** (from VinAI Research) released with the paper [PhoBERT: Pre-trained language models for Vietnamese](https://www.aclweb.org/anthology/2020.findings-emnlp.92/) by Dat Quoc Nguyen and Anh Tuan Nguyen.
1. **[PLBart](model_doc/plbart)** (from UCLA NLP) released with the paper [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang.
1. **[PoolFormer](model_doc/poolformer)** (from Sea AI Labs) released with the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng.
1.
**[ProphetNet](model_doc/prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou.
1. **[QDQBert](model_doc/qdqbert)** (from NVIDIA) released with the paper [Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation](https://arxiv.org/abs/2004.09602) by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius Micikevicius.
1. **[RAG](model_doc/rag)** (from Facebook) released with the paper [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) by Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela.
1. **[REALM](model_doc/realm)** (from Google Research) released with the paper [REALM: Retrieval-Augmented Language Model Pre-Training](https://arxiv.org/abs/2002.08909) by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang.
1. **[Reformer](model_doc/reformer)** (from Google Research) released with the paper [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.
1. **[RegNet](model_doc/regnet)** (from META Platforms) released with the paper [Designing Network Design Space](https://arxiv.org/abs/2003.13678) by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár.
1. **[RemBERT](model_doc/rembert)** (from Google Research) released with the paper [Rethinking embedding coupling in pre-trained language models](https://arxiv.org/abs/2010.12821) by Hyung Won Chung, Thibault Févry, Henry Tsai, M. Johnson, Sebastian Ruder.
1.
**[ResNet](model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun.
1. **[RoBERTa](model_doc/roberta)** (from Facebook), released together with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
1. **[RoCBert](model_doc/roc_bert)** (from WeChatAI) released with the paper [RoCBert: Robust Chinese Bert with Multimodal Contrastive Pretraining](https://aclanthology.org/2022.acl-long.65.pdf) by HuiSu, WeiweiShi, XiaoyuShen, XiaoZhou, TuoJi, JiaruiFang, JieZhou.
1. **[RoFormer](model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/abs/2104.09864) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu.
1. **[SegFormer](model_doc/segformer)** (from NVIDIA) released with the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo.
1. **[SEW](model_doc/sew)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
1. **[SEW-D](model_doc/sew_d)** (from ASAPP) released with the paper [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) by Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi.
1.
**[SpeechToTextTransformer](model_doc/speech_to_text)** (from Facebook), released together with the paper [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino.
1. **[SpeechToTextTransformer2](model_doc/speech_to_text_2)** (from Facebook), released together with the paper [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.
1. **[Splinter](model_doc/splinter)** (from Tel Aviv University), released together with the paper [Few-Shot Question Answering by Pretraining Span Selection](https://arxiv.org/abs/2101.00438) by Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy.
1. **[SqueezeBERT](model_doc/squeezebert)** (from Berkeley) released with the paper [SqueezeBERT: What can computer vision teach NLP about efficient neural networks?](https://arxiv.org/abs/2006.11316) by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer.
1. **[Swin Transformer](model_doc/swin)** (from Microsoft) released with the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo.
1. **[Swin Transformer V2](model_doc/swinv2)** (from Microsoft) released with the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo.
1.
**[T5](model_doc/t5)** (from Google AI) released with the paper [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu. 1. **[T5v1.1](model_doc/t5v1.1)** (from Google AI) released in the repository [google-research/text-to-text-transfer-transformer](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu. 1. **[Table Transformer](model_doc/table-transformer)** (from Microsoft Research) released with the paper [PubTables-1M: Towards Comprehensive Table Extraction From Unstructured Documents](https://arxiv.org/abs/2110.00061) by Brandon Smock, Rohith Pesala, Robin Abraham. 1. **[TAPAS](model_doc/tapas)** (from Google AI) released with the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](https://arxiv.org/abs/2004.02349) by Jonathan Herzig, Paweล‚ Krzysztof Nowak, Thomas Mรผller, Francesco Piccinno and Julian Martin Eisenschlos. 1. **[TAPEX](model_doc/tapex)** (from Microsoft Research) released with the paper [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. 1. **[Time Series Transformer](model_doc/time_series_transformer)** (from HuggingFace). 1. **[Trajectory Transformer](model_doc/trajectory_transformers)** (from the University of California at Berkeley) released with the paper [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine 1. 
**[Transformer-XL](model_doc/transfo-xl)** (from Google/CMU) released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov. 1. **[TrOCR](model_doc/trocr)** (from Microsoft), released together with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei. 1. **[UL2](model_doc/ul2)** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler 1. **[UniSpeech](model_doc/unispeech)** (from Microsoft Research) released with the paper [UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597) by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang. 1. **[UniSpeechSat](model_doc/unispeech-sat)** (from Microsoft Research) released with the paper [UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu. 1. **[VAN](model_doc/van)** (from Tsinghua University and Nankai University) released with the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu. 1. 
**[VideoMAE](model_doc/videomae)** (from Multimedia Computing Group, Nanjing University) released with the paper [VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training](https://arxiv.org/abs/2203.12602) by Zhan Tong, Yibing Song, Jue Wang, Limin Wang. 1. **[ViLT](model_doc/vilt)** (from NAVER AI Lab/Kakao Enterprise/Kakao Brain) released with the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Wonjae Kim, Bokyung Son, Ildoo Kim. 1. **[Vision Transformer (ViT)](model_doc/vit)** (from Google AI) released with the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby. 1. **[VisualBERT](model_doc/visual_bert)** (from UCLA NLP) released with the paper [VisualBERT: A Simple and Performant Baseline for Vision and Language](https://arxiv.org/pdf/1908.03557) by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang. 1. **[ViTMAE](model_doc/vit_mae)** (from Meta AI) released with the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollรกr, Ross Girshick. 1. **[ViTMSN](model_doc/vit_msn)** (from Meta AI) released with the paper [Masked Siamese Networks for Label-Efficient Learning](https://arxiv.org/abs/2204.07141) by Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, Nicolas Ballas. 1. 
**[Wav2Vec2](model_doc/wav2vec2)** (from Facebook AI) released with the paper [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli. 1. **[Wav2Vec2-Conformer](model_doc/wav2vec2-conformer)** (from Facebook AI) released with the paper [FAIRSEQ S2T: Fast Speech-to-Text Modeling with FAIRSEQ](https://arxiv.org/abs/2010.05171) by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino. 1. **[Wav2Vec2Phoneme](model_doc/wav2vec2_phoneme)** (from Facebook AI) released with the paper [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition](https://arxiv.org/abs/2109.11680) by Qiantong Xu, Alexei Baevski, Michael Auli. 1. **[WavLM](model_doc/wavlm)** (from Microsoft Research) released with the paper [WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing](https://arxiv.org/abs/2110.13900) by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei. 1. **[Whisper](model_doc/whisper)** (from OpenAI) released with the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://cdn.openai.com/papers/whisper.pdf) by Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever. 1. **[X-CLIP](model_doc/xclip)** (from Microsoft Research) released with the paper [Expanding Language-Image Pretrained Models for General Video Recognition](https://arxiv.org/abs/2208.02816) by Bolin Ni, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, Haibin Ling. 1. 
**[XGLM](model_doc/xglm)** (From Facebook AI) released with the paper [Few-shot Learning with Multilingual Language Models](https://arxiv.org/abs/2112.10668) by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li. 1. **[XLM](model_doc/xlm)** (from Facebook) released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau. 1. **[XLM-ProphetNet](model_doc/xlm-prophetnet)** (from Microsoft Research) released with the paper [ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training](https://arxiv.org/abs/2001.04063) by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang and Ming Zhou. 1. **[XLM-RoBERTa](model_doc/xlm-roberta)** (from Facebook AI), released together with the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmรกn, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov. 1. **[XLM-RoBERTa-XL](model_doc/xlm-roberta-xl)** (from Facebook AI), released together with the paper [Larger-Scale Transformers for Multilingual Masked Language Modeling](https://arxiv.org/abs/2105.00572) by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau. 1. **[XLNet](model_doc/xlnet)** (from Google/CMU) released with the paper [โ€‹XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le. 1. 
**[XLS-R](model_doc/xls_r)** (from Facebook AI) released with the paper [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale](https://arxiv.org/abs/2111.09296) by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli. 1. **[XLSR-Wav2Vec2](model_doc/xlsr_wav2vec2)** (from Facebook AI) released with the paper [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli. 1. **[YOLOS](model_doc/yolos)** (from Huazhong University of Science & Technology) released with the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu. 1. **[YOSO](model_doc/yoso)** (from the University of Wisconsin - Madison) released with the paper [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](https://arxiv.org/abs/2111.09714) by Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh. ### ์ง€์› ํ”„๋ ˆ์ž„์›Œํฌ[[supported-framework]] ์•„๋ž˜ ํ‘œ๋Š” ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ ์† ๊ฐ ๋ชจ๋ธ์˜ ์ง€์› ํ˜„ํ™ฉ์„ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. ํ† ํฐํ™”๋ฅผ ํŒŒ์ด์ฌ (๋ณ„์นญ "slow") ๋˜๋Š” ๐Ÿค— Tokenizers (๋ณ„์นญ "fast") ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋กœ ํ•˜๋Š”์ง€; (Flax๋ฅผ ํ†ตํ•œ) Jax, PyTorch, TensorFlow ์ค‘ ์–ด๋–ค ํ”„๋ ˆ์ž„์›Œํฌ๋ฅผ ์ง€์›ํ•˜๋Š”์ง€ ํ‘œ์‹œ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. <!--This table is updated automatically from the auto modules with _make fix-copies_. 
Do not update manually!--> | Model | Tokenizer slow | Tokenizer fast | PyTorch support | TensorFlow support | Flax Support | |:---------------------------:|:--------------:|:--------------:|:---------------:|:------------------:|:------------:| | ALBERT | โœ… | โœ… | โœ… | โœ… | โœ… | | BART | โœ… | โœ… | โœ… | โœ… | โœ… | | BEiT | โŒ | โŒ | โœ… | โŒ | โœ… | | BERT | โœ… | โœ… | โœ… | โœ… | โœ… | | Bert Generation | โœ… | โŒ | โœ… | โŒ | โŒ | | BigBird | โœ… | โœ… | โœ… | โŒ | โœ… | | BigBird-Pegasus | โŒ | โŒ | โœ… | โŒ | โŒ | | Blenderbot | โœ… | โœ… | โœ… | โœ… | โœ… | | BlenderbotSmall | โœ… | โœ… | โœ… | โœ… | โœ… | | BLOOM | โŒ | โœ… | โœ… | โŒ | โŒ | | CamemBERT | โœ… | โœ… | โœ… | โœ… | โŒ | | CANINE | โœ… | โŒ | โœ… | โŒ | โŒ | | CLIP | โœ… | โœ… | โœ… | โœ… | โœ… | | CLIPSeg | โŒ | โŒ | โœ… | โŒ | โŒ | | CodeGen | โœ… | โœ… | โœ… | โŒ | โŒ | | Conditional DETR | โŒ | โŒ | โœ… | โŒ | โŒ | | ConvBERT | โœ… | โœ… | โœ… | โœ… | โŒ | | ConvNeXT | โŒ | โŒ | โœ… | โœ… | โŒ | | CTRL | โœ… | โŒ | โœ… | โœ… | โŒ | | CvT | โŒ | โŒ | โœ… | โœ… | โŒ | | Data2VecAudio | โŒ | โŒ | โœ… | โŒ | โŒ | | Data2VecText | โŒ | โŒ | โœ… | โŒ | โŒ | | Data2VecVision | โŒ | โŒ | โœ… | โœ… | โŒ | | DeBERTa | โœ… | โœ… | โœ… | โœ… | โŒ | | DeBERTa-v2 | โœ… | โœ… | โœ… | โœ… | โŒ | | Decision Transformer | โŒ | โŒ | โœ… | โŒ | โŒ | | Deformable DETR | โŒ | โŒ | โœ… | โŒ | โŒ | | DeiT | โŒ | โŒ | โœ… | โœ… | โŒ | | DETR | โŒ | โŒ | โœ… | โŒ | โŒ | | DistilBERT | โœ… | โœ… | โœ… | โœ… | โœ… | | DonutSwin | โŒ | โŒ | โœ… | โŒ | โŒ | | DPR | โœ… | โœ… | โœ… | โœ… | โŒ | | DPT | โŒ | โŒ | โœ… | โŒ | โŒ | | ELECTRA | โœ… | โœ… | โœ… | โœ… | โœ… | | Encoder decoder | โŒ | โŒ | โœ… | โœ… | โœ… | | ERNIE | โŒ | โŒ | โœ… | โŒ | โŒ | | ESM | โœ… | โŒ | โœ… | โœ… | โŒ | | FairSeq Machine-Translation | โœ… | โŒ | โœ… | โŒ | โŒ | | FlauBERT | โœ… | โŒ | โœ… | โœ… | โŒ | | FLAVA | โŒ | โŒ | โœ… | โŒ | โŒ | | 
FNet | โœ… | โœ… | โœ… | โŒ | โŒ | | Funnel Transformer | โœ… | โœ… | โœ… | โœ… | โŒ | | GLPN | โŒ | โŒ | โœ… | โŒ | โŒ | | GPT Neo | โŒ | โŒ | โœ… | โŒ | โœ… | | GPT NeoX | โŒ | โœ… | โœ… | โŒ | โŒ | | GPT NeoX Japanese | โœ… | โŒ | โœ… | โŒ | โŒ | | GPT-J | โŒ | โŒ | โœ… | โœ… | โœ… | | GroupViT | โŒ | โŒ | โœ… | โœ… | โŒ | | Hubert | โŒ | โŒ | โœ… | โœ… | โŒ | | I-BERT | โŒ | โŒ | โœ… | โŒ | โŒ | | ImageGPT | โŒ | โŒ | โœ… | โŒ | โŒ | | Jukebox | โœ… | โŒ | โœ… | โŒ | โŒ | | LayoutLM | โœ… | โœ… | โœ… | โœ… | โŒ | | LayoutLMv2 | โœ… | โœ… | โœ… | โŒ | โŒ | | LayoutLMv3 | โœ… | โœ… | โœ… | โœ… | โŒ | | LED | โœ… | โœ… | โœ… | โœ… | โŒ | | LeViT | โŒ | โŒ | โœ… | โŒ | โŒ | | LiLT | โŒ | โŒ | โœ… | โŒ | โŒ | | Longformer | โœ… | โœ… | โœ… | โœ… | โŒ | | LongT5 | โŒ | โŒ | โœ… | โŒ | โœ… | | LUKE | โœ… | โŒ | โœ… | โŒ | โŒ | | LXMERT | โœ… | โœ… | โœ… | โœ… | โŒ | | M-CTC-T | โŒ | โŒ | โœ… | โŒ | โŒ | | M2M100 | โœ… | โŒ | โœ… | โŒ | โŒ | | Marian | โœ… | โŒ | โœ… | โœ… | โœ… | | MarkupLM | โœ… | โœ… | โœ… | โŒ | โŒ | | MaskFormer | โŒ | โŒ | โœ… | โŒ | โŒ | | mBART | โœ… | โœ… | โœ… | โœ… | โœ… | | Megatron-BERT | โŒ | โŒ | โœ… | โŒ | โŒ | | MobileBERT | โœ… | โœ… | โœ… | โœ… | โŒ | | MobileViT | โŒ | โŒ | โœ… | โœ… | โŒ | | MPNet | โœ… | โœ… | โœ… | โœ… | โŒ | | MT5 | โœ… | โœ… | โœ… | โœ… | โœ… | | MVP | โœ… | โœ… | โœ… | โŒ | โŒ | | Nezha | โŒ | โŒ | โœ… | โŒ | โŒ | | Nystrรถmformer | โŒ | โŒ | โœ… | โŒ | โŒ | | OpenAI GPT | โœ… | โœ… | โœ… | โœ… | โŒ | | OpenAI GPT-2 | โœ… | โœ… | โœ… | โœ… | โœ… | | OPT | โŒ | โŒ | โœ… | โœ… | โœ… | | OWL-ViT | โŒ | โŒ | โœ… | โŒ | โŒ | | Pegasus | โœ… | โœ… | โœ… | โœ… | โœ… | | PEGASUS-X | โŒ | โŒ | โœ… | โŒ | โŒ | | Perceiver | โœ… | โŒ | โœ… | โŒ | โŒ | | PLBart | โœ… | โŒ | โœ… | โŒ | โŒ | | PoolFormer | โŒ | โŒ | โœ… | โŒ | โŒ | | ProphetNet | โœ… | โŒ | โœ… | โŒ | โŒ | | QDQBert | โŒ | โŒ | โœ… | โŒ | โŒ 
| | RAG | โœ… | โŒ | โœ… | โœ… | โŒ | | REALM | โœ… | โœ… | โœ… | โŒ | โŒ | | Reformer | โœ… | โœ… | โœ… | โŒ | โŒ | | RegNet | โŒ | โŒ | โœ… | โœ… | โœ… | | RemBERT | โœ… | โœ… | โœ… | โœ… | โŒ | | ResNet | โŒ | โŒ | โœ… | โœ… | โœ… | | RetriBERT | โœ… | โœ… | โœ… | โŒ | โŒ | | RoBERTa | โœ… | โœ… | โœ… | โœ… | โœ… | | RoCBert | โœ… | โŒ | โœ… | โŒ | โŒ | | RoFormer | โœ… | โœ… | โœ… | โœ… | โœ… | | SegFormer | โŒ | โŒ | โœ… | โœ… | โŒ | | SEW | โŒ | โŒ | โœ… | โŒ | โŒ | | SEW-D | โŒ | โŒ | โœ… | โŒ | โŒ | | Speech Encoder decoder | โŒ | โŒ | โœ… | โŒ | โœ… | | Speech2Text | โœ… | โŒ | โœ… | โœ… | โŒ | | Speech2Text2 | โœ… | โŒ | โŒ | โŒ | โŒ | | Splinter | โœ… | โœ… | โœ… | โŒ | โŒ | | SqueezeBERT | โœ… | โœ… | โœ… | โŒ | โŒ | | Swin Transformer | โŒ | โŒ | โœ… | โœ… | โŒ | | Swin Transformer V2 | โŒ | โŒ | โœ… | โŒ | โŒ | | T5 | โœ… | โœ… | โœ… | โœ… | โœ… | | Table Transformer | โŒ | โŒ | โœ… | โŒ | โŒ | | TAPAS | โœ… | โŒ | โœ… | โœ… | โŒ | | Time Series Transformer | โŒ | โŒ | โœ… | โŒ | โŒ | | Trajectory Transformer | โŒ | โŒ | โœ… | โŒ | โŒ | | Transformer-XL | โœ… | โŒ | โœ… | โœ… | โŒ | | TrOCR | โŒ | โŒ | โœ… | โŒ | โŒ | | UniSpeech | โŒ | โŒ | โœ… | โŒ | โŒ | | UniSpeechSat | โŒ | โŒ | โœ… | โŒ | โŒ | | VAN | โŒ | โŒ | โœ… | โŒ | โŒ | | VideoMAE | โŒ | โŒ | โœ… | โŒ | โŒ | | ViLT | โŒ | โŒ | โœ… | โŒ | โŒ | | Vision Encoder decoder | โŒ | โŒ | โœ… | โœ… | โœ… | | VisionTextDualEncoder | โŒ | โŒ | โœ… | โŒ | โœ… | | VisualBERT | โŒ | โŒ | โœ… | โŒ | โŒ | | ViT | โŒ | โŒ | โœ… | โœ… | โœ… | | ViTMAE | โŒ | โŒ | โœ… | โœ… | โŒ | | ViTMSN | โŒ | โŒ | โœ… | โŒ | โŒ | | Wav2Vec2 | โœ… | โŒ | โœ… | โœ… | โœ… | | Wav2Vec2-Conformer | โŒ | โŒ | โœ… | โŒ | โŒ | | WavLM | โŒ | โŒ | โœ… | โŒ | โŒ | | Whisper | โœ… | โŒ | โœ… | โœ… | โŒ | | X-CLIP | โŒ | โŒ | โœ… | โŒ | โŒ | | XGLM | โœ… | โœ… | โœ… | โœ… | โœ… | | XLM | โœ… | โŒ | โœ… | โœ… | โŒ 
| | XLM-ProphetNet | โœ… | โŒ | โœ… | โŒ | โŒ | | XLM-RoBERTa | โœ… | โœ… | โœ… | โœ… | โœ… | | XLM-RoBERTa-XL | โŒ | โŒ | โœ… | โŒ | โŒ | | XLNet | โœ… | โœ… | โœ… | โœ… | โŒ | | YOLOS | โŒ | โŒ | โœ… | โŒ | โŒ | | YOSO | โŒ | โŒ | โœ… | โŒ | โŒ | <!-- End table-->
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์‚ฌ์šฉ์ž ์ •์˜ ๋ชจ๋ธ ๊ณต์œ ํ•˜๊ธฐ[[sharing-custom-models]] ๐Ÿค— Transformers ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋Š” ์‰ฝ๊ฒŒ ํ™•์žฅํ•  ์ˆ˜ ์žˆ๋„๋ก ์„ค๊ณ„๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ๋ชจ๋“  ๋ชจ๋ธ์€ ์ถ”์ƒํ™” ์—†์ด ์ €์žฅ์†Œ์˜ ์ง€์ •๋œ ํ•˜์œ„ ํด๋”์— ์™„์ „ํžˆ ์ฝ”๋”ฉ๋˜์–ด ์žˆ์œผ๋ฏ€๋กœ, ์†์‰ฝ๊ฒŒ ๋ชจ๋ธ๋ง ํŒŒ์ผ์„ ๋ณต์‚ฌํ•˜๊ณ  ํ•„์š”์— ๋”ฐ๋ผ ์กฐ์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์™„์ „ํžˆ ์ƒˆ๋กœ์šด ๋ชจ๋ธ์„ ๋งŒ๋“œ๋Š” ๊ฒฝ์šฐ์—๋Š” ์ฒ˜์Œ๋ถ€ํ„ฐ ์‹œ์ž‘ํ•˜๋Š” ๊ฒƒ์ด ๋” ์‰ฌ์šธ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ๋Š” Transformers ๋‚ด์—์„œ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก ์‚ฌ์šฉ์ž ์ •์˜ ๋ชจ๋ธ๊ณผ ๊ตฌ์„ฑ์„ ์ž‘์„ฑํ•˜๋Š” ๋ฐฉ๋ฒ•๊ณผ ๐Ÿค— Transformers ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์— ์—†๋Š” ๊ฒฝ์šฐ์—๋„ ๋ˆ„๊ตฌ๋‚˜ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก (์˜์กด์„ฑ๊ณผ ํ•จ๊ป˜) ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๊ณต์œ ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ฐฐ์šธ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [timm ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ](https://github.com/rwightman/pytorch-image-models)์˜ ResNet ํด๋ž˜์Šค๋ฅผ [`PreTrainedModel`]๋กœ ๋ž˜ํ•‘ํ•œ ResNet ๋ชจ๋ธ์„ ์˜ˆ๋กœ ๋ชจ๋“  ๊ฒƒ์„ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. ## ์‚ฌ์šฉ์ž ์ •์˜ ๊ตฌ์„ฑ ์ž‘์„ฑํ•˜๊ธฐ[[writing-a-custom-configuration]] ๋ชจ๋ธ์— ๋“ค์–ด๊ฐ€๊ธฐ ์ „์— ๋จผ์ € ๊ตฌ์„ฑ์„ ์ž‘์„ฑํ•ด๋ณด๋„๋ก ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. 
๋ชจ๋ธ์˜ `configuration`์€ ๋ชจ๋ธ์„ ๋งŒ๋“ค๊ธฐ ์œ„ํ•ด ํ•„์š”ํ•œ ๋ชจ๋“  ์ค‘์š”ํ•œ ๊ฒƒ๋“ค์„ ํฌํ•จํ•˜๊ณ  ์žˆ๋Š” ๊ฐ์ฒด์ž…๋‹ˆ๋‹ค. ๋‹ค์Œ ์„น์…˜์—์„œ ๋ณผ ์ˆ˜ ์žˆ๋“ฏ์ด, ๋ชจ๋ธ์€ `config`๋ฅผ ์‚ฌ์šฉํ•ด์„œ๋งŒ ์ดˆ๊ธฐํ™”ํ•  ์ˆ˜ ์žˆ๊ธฐ ๋•Œ๋ฌธ์— ์™„๋ฒฝํ•œ ๊ตฌ์„ฑ์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ์•„๋ž˜ ์˜ˆ์‹œ์—์„œ๋Š” ResNet ํด๋ž˜์Šค์˜ ์ธ์ˆ˜(argument)๋ฅผ ์กฐ์ •ํ•ด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ๋‹ค๋ฅธ ๊ตฌ์„ฑ์€ ๊ฐ€๋Šฅํ•œ ResNet ์ค‘ ๋‹ค๋ฅธ ์œ ํ˜•์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฐ ๋‹ค์Œ ๋ช‡ ๊ฐ€์ง€ ์œ ํšจ์„ฑ์„ ํ™•์ธํ•œ ํ›„ ํ•ด๋‹น ์ธ์ˆ˜๋ฅผ ์ €์žฅํ•ฉ๋‹ˆ๋‹ค. ```python from transformers import PretrainedConfig from typing import List class ResnetConfig(PretrainedConfig): model_type = "resnet" def __init__( self, block_type="bottleneck", layers: List[int] = [3, 4, 6, 3], num_classes: int = 1000, input_channels: int = 3, cardinality: int = 1, base_width: int = 64, stem_width: int = 64, stem_type: str = "", avg_down: bool = False, **kwargs, ): if block_type not in ["basic", "bottleneck"]: raise ValueError(f"`block_type` must be 'basic' or bottleneck', got {block_type}.") if stem_type not in ["", "deep", "deep-tiered"]: raise ValueError(f"`stem_type` must be '', 'deep' or 'deep-tiered', got {stem_type}.") self.block_type = block_type self.layers = layers self.num_classes = num_classes self.input_channels = input_channels self.cardinality = cardinality self.base_width = base_width self.stem_width = stem_width self.stem_type = stem_type self.avg_down = avg_down super().__init__(**kwargs) ``` ์‚ฌ์šฉ์ž ์ •์˜ `configuration`์„ ์ž‘์„ฑํ•  ๋•Œ ๊ธฐ์–ตํ•ด์•ผ ํ•  ์„ธ ๊ฐ€์ง€ ์ค‘์š”ํ•œ ์‚ฌํ•ญ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: - `PretrainedConfig`์„ ์ƒ์†ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. - `PretrainedConfig`์˜ `__init__`์€ ๋ชจ๋“  kwargs๋ฅผ ํ—ˆ์šฉํ•ด์•ผ ํ•˜๊ณ , - ์ด๋Ÿฌํ•œ `kwargs`๋Š” ์ƒ์œ„ ํด๋ž˜์Šค `__init__`์— ์ „๋‹ฌ๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ƒ์†์€ ๐Ÿค— Transformers ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ ๋ชจ๋“  ๊ธฐ๋Šฅ์„ ๊ฐ€์ ธ์˜ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. 
์ด๋Ÿฌํ•œ ์ ์œผ๋กœ๋ถ€ํ„ฐ ๋น„๋กฏ๋˜๋Š” ๋‘ ๊ฐ€์ง€ ์ œ์•ฝ ์กฐ๊ฑด์€ `PretrainedConfig`์— ์„ค์ •ํ•˜๋Š” ๊ฒƒ๋ณด๋‹ค ๋” ๋งŽ์€ ํ•„๋“œ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. `from_pretrained` ๋ฉ”์„œ๋“œ๋กœ ๊ตฌ์„ฑ์„ ๋‹ค์‹œ ๋กœ๋“œํ•  ๋•Œ ํ•ด๋‹น ํ•„๋“œ๋Š” ๊ตฌ์„ฑ์—์„œ ์ˆ˜๋ฝํ•œ ํ›„ ์ƒ์œ„ ํด๋ž˜์Šค๋กœ ๋ณด๋‚ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ auto ํด๋ž˜์Šค์— ๋“ฑ๋กํ•˜์ง€ ์•Š๋Š” ํ•œ, `configuration`์—์„œ `model_type`์„ ์ •์˜(์—ฌ๊ธฐ์„œ `model_type="resnet"`)ํ•˜๋Š” ๊ฒƒ์€ ํ•„์ˆ˜ ์‚ฌํ•ญ์ด ์•„๋‹™๋‹ˆ๋‹ค (๋งˆ์ง€๋ง‰ ์„น์…˜ ์ฐธ์กฐ). ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ๋‹ค๋ฅธ ๋ชจ๋ธ ๊ตฌ์„ฑ๊ณผ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ ๊ตฌ์„ฑ์„ ์‰ฝ๊ฒŒ ๋งŒ๋“ค๊ณ  ์ €์žฅํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ์€ resnet50d ๊ตฌ์„ฑ์„ ์ƒ์„ฑํ•˜๊ณ  ์ €์žฅํ•˜๋Š” ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค: ```py resnet50d_config = ResnetConfig(block_type="bottleneck", stem_width=32, stem_type="deep", avg_down=True) resnet50d_config.save_pretrained("custom-resnet") ``` ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด `custom-resnet` ํด๋” ์•ˆ์— `config.json`์ด๋ผ๋Š” ํŒŒ์ผ์ด ์ €์žฅ๋ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฐ ๋‹ค์Œ `from_pretrained` ๋ฉ”์„œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๊ตฌ์„ฑ์„ ๋‹ค์‹œ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py resnet50d_config = ResnetConfig.from_pretrained("custom-resnet") ``` ๊ตฌ์„ฑ์„ Hub์— ์ง์ ‘ ์—…๋กœ๋“œํ•˜๊ธฐ ์œ„ํ•ด [`PretrainedConfig`] ํด๋ž˜์Šค์˜ [`~PretrainedConfig.push_to_hub`]์™€ ๊ฐ™์€ ๋‹ค๋ฅธ ๋ฉ”์„œ๋“œ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ## ์‚ฌ์šฉ์ž ์ •์˜ ๋ชจ๋ธ ์ž‘์„ฑํ•˜๊ธฐ[[writing-a-custom-model]] ์ด์ œ ResNet ๊ตฌ์„ฑ์ด ์žˆ์œผ๋ฏ€๋กœ ๋ชจ๋ธ์„ ์ž‘์„ฑํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์‹ค์ œ๋กœ๋Š” ๋‘ ๊ฐœ๋ฅผ ์ž‘์„ฑํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ํ•˜๋‚˜๋Š” ์ด๋ฏธ์ง€ ๋ฐฐ์น˜์—์„œ hidden features๋ฅผ ์ถ”์ถœํ•˜๋Š” ๊ฒƒ([`BertModel`]๊ณผ ๊ฐ™์ด), ๋‹ค๋ฅธ ํ•˜๋‚˜๋Š” ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜์— ์ ํ•ฉํ•œ ๊ฒƒ์ž…๋‹ˆ๋‹ค([`BertForSequenceClassification`]๊ณผ ๊ฐ™์ด). ์ด์ „์— ์–ธ๊ธ‰ํ–ˆ๋“ฏ์ด ์ด ์˜ˆ์ œ์—์„œ๋Š” ๋‹จ์ˆœํ•˜๊ฒŒ ํ•˜๊ธฐ ์œ„ํ•ด ๋ชจ๋ธ์˜ ๋А์Šจํ•œ ๋ž˜ํผ(loose wrapper)๋งŒ ์ž‘์„ฑํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด ํด๋ž˜์Šค๋ฅผ ์ž‘์„ฑํ•˜๊ธฐ ์ „์— ๋ธ”๋ก ์œ ํ˜•๊ณผ ์‹ค์ œ ๋ธ”๋ก ํด๋ž˜์Šค ๊ฐ„์˜ ๋งคํ•‘ ์ž‘์—…๋งŒ ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. 
๊ทธ๋Ÿฐ ๋‹ค์Œ `ResNet` ํด๋ž˜์Šค๋กœ ์ „๋‹ฌ๋˜์–ด `configuration`์„ ํ†ตํ•ด ๋ชจ๋ธ์ด ์„ ์–ธ๋ฉ๋‹ˆ๋‹ค: ```py from transformers import PreTrainedModel from timm.models.resnet import BasicBlock, Bottleneck, ResNet from .configuration_resnet import ResnetConfig BLOCK_MAPPING = {"basic": BasicBlock, "bottleneck": Bottleneck} class ResnetModel(PreTrainedModel): config_class = ResnetConfig def __init__(self, config): super().__init__(config) block_layer = BLOCK_MAPPING[config.block_type] self.model = ResNet( block_layer, config.layers, num_classes=config.num_classes, in_chans=config.input_channels, cardinality=config.cardinality, base_width=config.base_width, stem_width=config.stem_width, stem_type=config.stem_type, avg_down=config.avg_down, ) def forward(self, tensor): return self.model.forward_features(tensor) ``` ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜ ๋ชจ๋ธ์„ ๋งŒ๋“ค๊ธฐ ์œ„ํ•ด์„œ๋Š” forward ๋ฉ”์†Œ๋“œ๋งŒ ๋ณ€๊ฒฝํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค: ```py import torch class ResnetModelForImageClassification(PreTrainedModel): config_class = ResnetConfig def __init__(self, config): super().__init__(config) block_layer = BLOCK_MAPPING[config.block_type] self.model = ResNet( block_layer, config.layers, num_classes=config.num_classes, in_chans=config.input_channels, cardinality=config.cardinality, base_width=config.base_width, stem_width=config.stem_width, stem_type=config.stem_type, avg_down=config.avg_down, ) def forward(self, tensor, labels=None): logits = self.model(tensor) if labels is not None: loss = torch.nn.cross_entropy(logits, labels) return {"loss": loss, "logits": logits} return {"logits": logits} ``` ๋‘ ๊ฒฝ์šฐ ๋ชจ๋‘ `PreTrainedModel`๋ฅผ ์ƒ์†๋ฐ›๊ณ , `config`๋ฅผ ํ†ตํ•ด ์ƒ์œ„ ํด๋ž˜์Šค ์ดˆ๊ธฐํ™”๋ฅผ ํ˜ธ์ถœํ•˜๋‹ค๋Š” ์ ์„ ๊ธฐ์–ตํ•˜์„ธ์š” (์ผ๋ฐ˜์ ์ธ `torch.nn.Module`์„ ์ž‘์„ฑํ•  ๋•Œ์™€ ๋น„์Šทํ•จ). ๋ชจ๋ธ์„ auto ํด๋ž˜์Šค์— ๋“ฑ๋กํ•˜๊ณ  ์‹ถ์€ ๊ฒฝ์šฐ์—๋Š” `config_class`๋ฅผ ์„ค์ •ํ•˜๋Š” ๋ถ€๋ถ„์ด ํ•„์ˆ˜์ž…๋‹ˆ๋‹ค (๋งˆ์ง€๋ง‰ ์„น์…˜ ์ฐธ์กฐ). 
<Tip> ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์— ์กด์žฌํ•˜๋Š” ๋ชจ๋ธ๊ณผ ๊ต‰์žฅํžˆ ์œ ์‚ฌํ•˜๋‹ค๋ฉด, ๋ชจ๋ธ์„ ์ƒ์„ฑํ•  ๋•Œ ๊ตฌ์„ฑ์„ ์ฐธ์กฐํ•ด ์žฌ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. </Tip> ์›ํ•˜๋Š” ๊ฒƒ์„ ๋ชจ๋ธ์ด ๋ฐ˜ํ™˜ํ•˜๋„๋ก ํ•  ์ˆ˜ ์žˆ์ง€๋งŒ, `ResnetModelForImageClassification`์—์„œ ํ–ˆ๋˜ ๊ฒƒ ์ฒ˜๋Ÿผ ๋ ˆ์ด๋ธ”์„ ํ†ต๊ณผ์‹œ์ผฐ์„ ๋•Œ ์†์‹ค๊ณผ ํ•จ๊ป˜ ์‚ฌ์ „ ํ˜•ํƒœ๋กœ ๋ฐ˜ํ™˜ํ•˜๋Š” ๊ฒƒ์ด [`Trainer`] ํด๋ž˜์Šค ๋‚ด์—์„œ ์ง์ ‘ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๊ธฐ์— ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ž์‹ ๋งŒ์˜ ํ•™์Šต ๋ฃจํ”„ ๋˜๋Š” ๋‹ค๋ฅธ ํ•™์Šต ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•  ๊ณ„ํš์ด๋ผ๋ฉด ๋‹ค๋ฅธ ์ถœ๋ ฅ ํ˜•์‹์„ ์‚ฌ์šฉํ•ด๋„ ์ข‹์Šต๋‹ˆ๋‹ค. ์ด์ œ ๋ชจ๋ธ ํด๋ž˜์Šค๊ฐ€ ์žˆ์œผ๋ฏ€๋กœ ํ•˜๋‚˜ ์ƒ์„ฑํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py resnet50d = ResnetModelForImageClassification(resnet50d_config) ``` ๋‹ค์‹œ ๋งํ•˜์ง€๋งŒ, [`~PreTrainedModel.save_pretrained`]๋˜๋Š” [`~PreTrainedModel.push_to_hub`]์ฒ˜๋Ÿผ [`PreTrainedModel`]์— ์†ํ•˜๋Š” ๋ชจ๋“  ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ ์„น์…˜์—์„œ ๋‘ ๋ฒˆ์งธ ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ด ๋ชจ๋ธ ์ฝ”๋“œ์™€ ๋ชจ๋ธ ๊ฐ€์ค‘์น˜๋ฅผ ์—…๋กœ๋“œํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ๋จผ์ €, ๋ชจ๋ธ ๋‚ด๋ถ€์— ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๊ฐ€์ค‘์น˜๋ฅผ ๋กœ๋“œํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ์ด ์˜ˆ์ œ๋ฅผ ํ™œ์šฉํ•  ๋•Œ๋Š”, ์‚ฌ์šฉ์ž ์ •์˜ ๋ชจ๋ธ์„ ์ž์‹ ๋งŒ์˜ ๋ฐ์ดํ„ฐ๋กœ ํ•™์Šต์‹œํ‚ฌ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ๋Š” ๋น ๋ฅด๊ฒŒ ์ง„ํ–‰ํ•˜๊ธฐ ์œ„ํ•ด ์‚ฌ์ „ ํ›ˆ๋ จ๋œ resnet50d๋ฅผ ์‚ฌ์šฉํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜ ๋ชจ๋ธ์€ resnet50d์˜ ๋ž˜ํผ์ด๊ธฐ ๋•Œ๋ฌธ์—, ๊ฐ€์ค‘์น˜๋ฅผ ์‰ฝ๊ฒŒ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py import timm pretrained_model = timm.create_model("resnet50d", pretrained=True) resnet50d.model.load_state_dict(pretrained_model.state_dict()) ``` ์ด์ œ [`~PreTrainedModel.save_pretrained`] ๋˜๋Š” [`~PreTrainedModel.push_to_hub`]๋ฅผ ์‚ฌ์šฉํ•  ๋•Œ ๋ชจ๋ธ ์ฝ”๋“œ๊ฐ€ ์ €์žฅ๋˜๋Š”์ง€ ํ™•์ธํ•ด๋ด…์‹œ๋‹ค. ## Hub๋กœ ์ฝ”๋“œ ์—…๋กœ๋“œํ•˜๊ธฐ[[sending-the-code-to-the-hub]] <Tip warning={true}> ์ด API๋Š” ์‹คํ—˜์ ์ด๋ฉฐ ๋‹ค์Œ ๋ฆด๋ฆฌ์Šค์—์„œ ์•ฝ๊ฐ„์˜ ๋ณ€๊ฒฝ ์‚ฌํ•ญ์ด ์žˆ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
</Tip> ๋จผ์ € ๋ชจ๋ธ์ด `.py` ํŒŒ์ผ์— ์™„์ „ํžˆ ์ •์˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”. ๋ชจ๋“  ํŒŒ์ผ์ด ๋™์ผํ•œ ์ž‘์—… ๊ฒฝ๋กœ์— ์žˆ๊ธฐ ๋•Œ๋ฌธ์— ์ƒ๋Œ€๊ฒฝ๋กœ ์ž„ํฌํŠธ(relative import)์— ์˜์กดํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค (transformers์—์„œ๋Š” ์ด ๊ธฐ๋Šฅ์— ๋Œ€ํ•œ ํ•˜์œ„ ๋ชจ๋“ˆ์„ ์ง€์›ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค). ์ด ์˜ˆ์‹œ์—์„œ๋Š” ์ž‘์—… ๊ฒฝ๋กœ ์•ˆ์˜ `resnet_model`์—์„œ `modeling_resnet.py` ํŒŒ์ผ๊ณผ `configuration_resnet.py` ํŒŒ์ผ์„ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. ๊ตฌ์„ฑ ํŒŒ์ผ์—๋Š” `ResnetConfig`์— ๋Œ€ํ•œ ์ฝ”๋“œ๊ฐ€ ์žˆ๊ณ  ๋ชจ๋ธ๋ง ํŒŒ์ผ์—๋Š” `ResnetModel` ๋ฐ `ResnetModelForImageClassification`์— ๋Œ€ํ•œ ์ฝ”๋“œ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ``` . โ””โ”€โ”€ resnet_model โ”œโ”€โ”€ __init__.py โ”œโ”€โ”€ configuration_resnet.py โ””โ”€โ”€ modeling_resnet.py ``` Python์ด `resnet_model`์„ ๋ชจ๋“ˆ๋กœ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก ๊ฐ์ง€ํ•˜๋Š” ๋ชฉ์ ์ด๊ธฐ ๋•Œ๋ฌธ์— `__init__.py`๋Š” ๋น„์–ด ์žˆ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. <Tip warning={true}> ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ ๋ชจ๋ธ๋ง ํŒŒ์ผ์„ ๋ณต์‚ฌํ•˜๋Š” ๊ฒฝ์šฐ, ๋ชจ๋“  ํŒŒ์ผ ์ƒ๋‹จ์— ์žˆ๋Š” ์ƒ๋Œ€ ๊ฒฝ๋กœ ์ž„ํฌํŠธ(relative import) ๋ถ€๋ถ„์„ `transformers` ํŒจํ‚ค์ง€์—์„œ ์ž„ํฌํŠธ ํ•˜๋„๋ก ๋ณ€๊ฒฝํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. </Tip> ๊ธฐ์กด ๊ตฌ์„ฑ์ด๋‚˜ ๋ชจ๋ธ์„ ์žฌ์‚ฌ์šฉ(๋˜๋Š” ์„œ๋ธŒ ํด๋ž˜์Šคํ™”)ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
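๋ณธ๋ฌธ์— ๋‚˜์˜จ ๋””๋ ‰ํ„ฐ๋ฆฌ ๊ตฌ์กฐ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์Šค์ผ€์น˜ํ•ด ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `create_layout`์€ ์„ค๋ช…์šฉ์œผ๋กœ ๋งŒ๋“  ๊ฐ€์ƒ์˜ ๋„์šฐ๋ฏธ ํ•จ์ˆ˜์ž…๋‹ˆ๋‹ค:

```python
import tempfile
from pathlib import Path


def create_layout(root):
    """๋ณธ๋ฌธ์˜ resnet_model ํŒจํ‚ค์ง€ ๊ตฌ์กฐ๋ฅผ root ์•„๋ž˜์— ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค."""
    pkg = Path(root) / "resnet_model"
    pkg.mkdir(parents=True, exist_ok=True)
    for name in ("__init__.py", "configuration_resnet.py", "modeling_resnet.py"):
        (pkg / name).touch()  # __init__.py๋Š” ๋น„์–ด ์žˆ์–ด๋„ ๋ฉ๋‹ˆ๋‹ค.
    return sorted(p.name for p in pkg.iterdir())


with tempfile.TemporaryDirectory() as tmp:
    print(create_layout(tmp))
```

์ด ๊ตฌ์กฐ๊ฐ€ ๋งŒ๋“ค์–ด์ง€๋ฉด Python์ด `resnet_model`์„ ๋ชจ๋“ˆ๋กœ ์ธ์‹ํ•ฉ๋‹ˆ๋‹ค.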
์ปค๋ฎค๋‹ˆํ‹ฐ์— ๋ชจ๋ธ์„ ๊ณต์œ ํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” ๋‹ค์Œ ๋‹จ๊ณ„๋ฅผ ๋”ฐ๋ผ์•ผ ํ•ฉ๋‹ˆ๋‹ค: ๋จผ์ €, ์ƒˆ๋กœ ๋งŒ๋“  ํŒŒ์ผ์— ResNet ๋ชจ๋ธ๊ณผ ๊ตฌ์„ฑ์„ ์ž„ํฌํŠธํ•ฉ๋‹ˆ๋‹ค: ```py from resnet_model.configuration_resnet import ResnetConfig from resnet_model.modeling_resnet import ResnetModel, ResnetModelForImageClassification ``` ๋‹ค์Œ์œผ๋กœ `save_pretrained` ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ด ํ•ด๋‹น ๊ฐ์ฒด์˜ ์ฝ”๋“œ ํŒŒ์ผ์„ ๋ณต์‚ฌํ•˜๊ณ , ๋ณต์‚ฌํ•œ ํŒŒ์ผ์„ Auto ํด๋ž˜์Šค๋กœ ๋“ฑ๋กํ•˜๊ณ (๋ชจ๋ธ์ธ ๊ฒฝ์šฐ) ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค: ```py ResnetConfig.register_for_auto_class() ResnetModel.register_for_auto_class("AutoModel") ResnetModelForImageClassification.register_for_auto_class("AutoModelForImageClassification") ``` `configuration`์— ๋Œ€ํ•œ auto ํด๋ž˜์Šค๋ฅผ ์ง€์ •ํ•  ํ•„์š”๋Š” ์—†์ง€๋งŒ(`configuration` ๊ด€๋ จ auto ํด๋ž˜์Šค๋Š” AutoConfig ํด๋ž˜์Šค ํ•˜๋‚˜๋งŒ ์žˆ์Œ), ๋ชจ๋ธ์˜ ๊ฒฝ์šฐ์—๋Š” ์ง€์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์‚ฌ์šฉ์ž ์ง€์ • ๋ชจ๋ธ์€ ๋‹ค์–‘ํ•œ ์ž‘์—…์— ์ ํ•ฉํ•  ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ, ๋ชจ๋ธ์— ๋งž๋Š” auto ํด๋ž˜์Šค๋ฅผ ์ง€์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ์œผ๋กœ, ์ด์ „์— ์ž‘์—…ํ–ˆ๋˜ ๊ฒƒ๊ณผ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ ๊ตฌ์„ฑ๊ณผ ๋ชจ๋ธ์„ ์ž‘์„ฑํ•ฉ๋‹ˆ๋‹ค: ```py resnet50d_config = ResnetConfig(block_type="bottleneck", stem_width=32, stem_type="deep", avg_down=True) resnet50d = ResnetModelForImageClassification(resnet50d_config) pretrained_model = timm.create_model("resnet50d", pretrained=True) resnet50d.model.load_state_dict(pretrained_model.state_dict()) ``` ์ด์ œ ๋ชจ๋ธ์„ Hub๋กœ ์—…๋กœ๋“œํ•˜๊ธฐ ์œ„ํ•ด ๋กœ๊ทธ์ธ ์ƒํƒœ์ธ์ง€ ํ™•์ธํ•˜์„ธ์š”. 
ํ„ฐ๋ฏธ๋„์—์„œ ๋‹ค์Œ ์ฝ”๋“œ๋ฅผ ์‹คํ–‰ํ•ด ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash huggingface-cli login ``` ์ฃผํ”ผํ„ฐ ๋…ธํŠธ๋ถ์˜ ๊ฒฝ์šฐ์—๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```py from huggingface_hub import notebook_login notebook_login() ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ์ด๋ ‡๊ฒŒ ์ž์‹ ์˜ ๋„ค์ž„์ŠคํŽ˜์ด์Šค(๋˜๋Š” ์ž์‹ ์ด ์†ํ•œ ์กฐ์ง)์— ์—…๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py resnet50d.push_to_hub("custom-resnet50d") ``` On top of the modeling weights and the configuration in json format, this also copied the modeling and configuration `.py` files in the folder `custom-resnet50d` and uploaded the result to the Hub. You can check the result in this [model repo](https://huggingface.co/sgugger/custom-resnet50d). json ํ˜•์‹์˜ ๋ชจ๋ธ๋ง ๊ฐ€์ค‘์น˜์™€ ๊ตฌ์„ฑ ์™ธ์—๋„ `custom-resnet50d` ํด๋” ์•ˆ์˜ ๋ชจ๋ธ๋ง๊ณผ ๊ตฌ์„ฑ `.py` ํŒŒ์ผ์„ ๋ณต์‚ฌํ•˜ํ•ด Hub์— ์—…๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค. [๋ชจ๋ธ ์ €์žฅ์†Œ](https://huggingface.co/sgugger/custom-resnet50d)์—์„œ ๊ฒฐ๊ณผ๋ฅผ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [sharing tutorial](model_sharing) ๋ฌธ์„œ์˜ `push_to_hub` ๋ฉ”์†Œ๋“œ์—์„œ ์ž์„ธํ•œ ๋‚ด์šฉ์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ## ์‚ฌ์šฉ์ž ์ •์˜ ์ฝ”๋“œ๋กœ ๋ชจ๋ธ ์‚ฌ์šฉํ•˜๊ธฐ[[using-a-model-with-custom-code]] auto ํด๋ž˜์Šค์™€ `from_pretrained` ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์‚ฌ์šฉ์ž ์ง€์ • ์ฝ”๋“œ ํŒŒ์ผ๊ณผ ํ•จ๊ป˜ ๋ชจ๋“  ๊ตฌ์„ฑ, ๋ชจ๋ธ, ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. Hub์— ์—…๋กœ๋“œ๋œ ๋ชจ๋“  ํŒŒ์ผ ๋ฐ ์ฝ”๋“œ๋Š” ๋ฉœ์›จ์–ด๊ฐ€ ์žˆ๋Š”์ง€ ๊ฒ€์‚ฌ๋˜์ง€๋งŒ (์ž์„ธํ•œ ๋‚ด์šฉ์€ [Hub ๋ณด์•ˆ](https://huggingface.co/docs/hub/security#malware-scanning) ์„ค๋ช… ์ฐธ์กฐ), ์ž์‹ ์˜ ์ปดํ“จํ„ฐ์—์„œ ๋ชจ๋ธ ์ฝ”๋“œ์™€ ์ž‘์„ฑ์ž๊ฐ€ ์•…์„ฑ ์ฝ”๋“œ๋ฅผ ์‹คํ–‰ํ•˜์ง€ ์•Š๋Š”์ง€ ํ™•์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 
์‚ฌ์šฉ์ž ์ •์˜ ์ฝ”๋“œ๋กœ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋ ค๋ฉด `trust_remote_code=True`๋กœ ์„ค์ •ํ•˜์„ธ์š”: ```py from transformers import AutoModelForImageClassification model = AutoModelForImageClassification.from_pretrained("sgugger/custom-resnet50d", trust_remote_code=True) ``` ๋ชจ๋ธ ์ž‘์„ฑ์ž๊ฐ€ ์•…์˜์ ์œผ๋กœ ์ฝ”๋“œ๋ฅผ ์—…๋ฐ์ดํŠธํ•˜์ง€ ์•Š์•˜๋‹ค๋Š” ์ ์„ ํ™•์ธํ•˜๊ธฐ ์œ„ํ•ด, ์ปค๋ฐ‹ ํ•ด์‹œ(commit hash)๋ฅผ `revision`์œผ๋กœ ์ „๋‹ฌํ•˜๋Š” ๊ฒƒ๋„ ๊ฐ•๋ ฅํžˆ ๊ถŒ์žฅ๋ฉ๋‹ˆ๋‹ค (๋ชจ๋ธ ์ž‘์„ฑ์ž๋ฅผ ์™„์ „ํžˆ ์‹ ๋ขฐํ•˜์ง€ ์•Š๋Š” ๊ฒฝ์šฐ). ```py commit_hash = "ed94a7c6247d8aedce4647f00f20de6875b5b292" model = AutoModelForImageClassification.from_pretrained( "sgugger/custom-resnet50d", trust_remote_code=True, revision=commit_hash ) ``` Hub์—์„œ ๋ชจ๋ธ ์ €์žฅ์†Œ์˜ ์ปค๋ฐ‹ ๊ธฐ๋ก์„ ์ฐพ์•„๋ณผ ๋•Œ, ๋ชจ๋“  ์ปค๋ฐ‹์˜ ์ปค๋ฐ‹ ํ•ด์‹œ๋ฅผ ์‰ฝ๊ฒŒ ๋ณต์‚ฌํ•  ์ˆ˜ ์žˆ๋Š” ๋ฒ„ํŠผ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ## ์‚ฌ์šฉ์ž ์ •์˜ ์ฝ”๋“œ๋กœ ๋งŒ๋“  ๋ชจ๋ธ์„ auto ํด๋ž˜์Šค๋กœ ๋“ฑ๋กํ•˜๊ธฐ[[registering-a-model-with-custom-code-to-the-auto-classes]] ๐Ÿค— Transformers๋ฅผ ์ƒ์†ํ•˜๋Š” ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์ž‘์„ฑํ•˜๋Š” ๊ฒฝ์šฐ ์‚ฌ์šฉ์ž ์ •์˜ ๋ชจ๋ธ์„ auto ํด๋ž˜์Šค์— ์ถ”๊ฐ€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์‚ฌ์šฉ์ž ์ •์˜ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด ํ•ด๋‹น ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์ž„ํฌํŠธํ•ด์•ผ ํ•˜๊ธฐ ๋•Œ๋ฌธ์—, ์ด๋Š” Hub๋กœ ์ฝ”๋“œ๋ฅผ ์—…๋กœ๋“œํ•˜๋Š” ๊ฒƒ๊ณผ ๋‹ค๋ฆ…๋‹ˆ๋‹ค (Hub์—์„œ ์ž๋™์ ์œผ๋กœ ๋ชจ๋ธ ์ฝ”๋“œ๋ฅผ ๋‹ค์šด๋กœ๋“œ ํ•˜๋Š” ๊ฒƒ๊ณผ ๋ฐ˜๋Œ€). 
๊ตฌ์„ฑ์— ๊ธฐ์กด ๋ชจ๋ธ ์œ ํ˜•๊ณผ ๋‹ค๋ฅธ `model_type` ์†์„ฑ์ด ์žˆ๊ณ  ๋ชจ๋ธ ํด๋ž˜์Šค์— ์˜ฌ๋ฐ”๋ฅธ `config_class` ์†์„ฑ์ด ์žˆ๋Š” ํ•œ, ๋‹ค์Œ๊ณผ ๊ฐ™์ด auto ํด๋ž˜์Šค์— ์ถ”๊ฐ€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py from transformers import AutoConfig, AutoModel, AutoModelForImageClassification AutoConfig.register("resnet", ResnetConfig) AutoModel.register(ResnetConfig, ResnetModel) AutoModelForImageClassification.register(ResnetConfig, ResnetModelForImageClassification) ``` ์‚ฌ์šฉ์ž ์ •์˜ ๊ตฌ์„ฑ์„ [`AutoConfig`]์— ๋“ฑ๋กํ•  ๋•Œ ์‚ฌ์šฉ๋˜๋Š” ์ฒซ ๋ฒˆ์งธ ์ธ์ˆ˜๋Š” ์‚ฌ์šฉ์ž ์ •์˜ ๊ตฌ์„ฑ์˜ `model_type`๊ณผ ์ผ์น˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ, ์‚ฌ์šฉ์ž ์ •์˜ ๋ชจ๋ธ์„ auto ํด๋ž˜์Šค์— ๋“ฑ๋กํ•  ๋•Œ ์‚ฌ์šฉ๋˜๋Š” ์ฒซ ๋ฒˆ์งธ ์ธ์ˆ˜๋Š” ํ•ด๋‹น ๋ชจ๋ธ์˜ `config_class`์™€ ์ผ์น˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค.
<!--- Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# ํ›ˆ๋ จ์šฉ ์‚ฌ์šฉ์ž ๋งž์ถคํ˜• ํ•˜๋“œ์›จ์–ด [[custom-hardware-for-training]]

๋ชจ๋ธ ํ›ˆ๋ จ๊ณผ ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ํ•˜๋“œ์›จ์–ด๋Š” ์„ฑ๋Šฅ์— ํฐ ์˜ํ–ฅ์„ ๋ฏธ์น  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. GPU์— ๋Œ€ํ•ด ์ž์„ธํžˆ ์•Œ์•„๋ณด๋ ค๋ฉด, Tim Dettmers์˜ ํ›Œ๋ฅญํ•œ [๋ธ”๋กœ๊ทธ ํฌ์ŠคํŠธ](https://timdettmers.com/2020/09/07/which-gpu-for-deep-learning/)(์˜์–ด๋กœ ์ž‘์„ฑ๋จ)๋ฅผ ํ™•์ธํ•ด๋ณด์„ธ์š”.

GPU ์„ค์ •์— ๋Œ€ํ•œ ์‹ค์šฉ์ ์ธ ์กฐ์–ธ์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค.

## GPU [[gpu]]

๋” ํฐ ๋ชจ๋ธ์„ ํ›ˆ๋ จ์‹œํ‚ฌ ๋•Œ๋Š” ๊ธฐ๋ณธ์ ์œผ๋กœ ์„ธ ๊ฐ€์ง€ ์˜ต์…˜์ด ์žˆ์Šต๋‹ˆ๋‹ค:

- ๋” ํฐ GPU
- ๋” ๋งŽ์€ GPU
- ๋” ๋งŽ์€ CPU ๋ฐ NVMe ([DeepSpeed-Infinity](../en/main_classes/deepspeed#nvme-support)๋ฅผ ํ†ตํ•œ ์˜คํ”„๋กœ๋“œ(offload))

์šฐ์„ , ํ•˜๋‚˜์˜ GPU๋งŒ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ๋ถ€ํ„ฐ ์‹œ์ž‘ํ•ด๋ด…์‹œ๋‹ค.

### ์ „์› ๊ณต๊ธ‰๊ณผ ๋ƒ‰๊ฐ [[power-and-cooling]]

๋น„์‹ผ ๊ณ ์„ฑ๋Šฅ GPU๋ฅผ ๊ตฌ๋งคํ•œ ๊ฒฝ์šฐ, ์˜ฌ๋ฐ”๋ฅธ ์ „์› ๊ณต๊ธ‰๊ณผ ์ถฉ๋ถ„ํ•œ ๋ƒ‰๊ฐ์„ ์ œ๊ณตํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค.

**์ „์› ๊ณต๊ธ‰**:

์ผ๋ถ€ ๊ณ ์„ฑ๋Šฅ ์†Œ๋น„์ž์šฉ GPU๋Š” 2๊ฐœ, ๋•Œ๋กœ๋Š” 3๊ฐœ์˜ PCI-E 8ํ•€ ์ „์› ์†Œ์ผ“์ด ์žˆ์Šต๋‹ˆ๋‹ค. ์นด๋“œ์— ์žˆ๋Š” ์†Œ์ผ“ ์ˆ˜๋งŒํผ ๋…๋ฆฝ์ ์ธ 12V PCI-E 8ํ•€ ์ผ€์ด๋ธ”์ด ์—ฐ๊ฒฐ๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”.
๊ฐ™์€ ์ผ€์ด๋ธ”์˜ ํ•œ์ชฝ ๋์— ์žˆ๋Š” 2๊ฐœ์˜ ์Šคํ”Œ๋ฆฟ(๋˜๋Š” ํ”ผ๊ทธํ…Œ์ผ(pigtail) ์ผ€์ด๋ธ”)์„ ์‚ฌ์šฉํ•˜์ง€ ๋งˆ์„ธ์š”. ์ฆ‰, GPU์— 2๊ฐœ์˜ ์†Œ์ผ“์ด ์žˆ๋‹ค๋ฉด, PSU(์ „์› ๊ณต๊ธ‰ ์žฅ์น˜)์—์„œ ์นด๋“œ๋กœ ์—ฐ๊ฒฐ๋˜๋Š” 2๊ฐœ์˜ PCI-E 8ํ•€ ์ผ€์ด๋ธ”์ด ํ•„์š”ํ•˜๋ฉฐ, ๋์— 2๊ฐœ์˜ PCI-E 8ํ•€ ์ปค๋„ฅํ„ฐ๊ฐ€ ์žˆ๋Š” ์ผ€์ด๋ธ”์ด ํ•„์š”ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค! ๊ทธ๋ ‡์ง€ ์•Š์œผ๋ฉด ์นด๋“œ์˜ ์ „์ฒด ์„ฑ๋Šฅ์„ ์ œ๋Œ€๋กœ ๋ฐœํœ˜ํ•˜์ง€ ๋ชปํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ฐ๊ฐ์˜ PCI-E 8ํ•€ ์ „์› ์ผ€์ด๋ธ”์€ PSU ์ชฝ์˜ 12V ๋ ˆ์ผ์— ์—ฐ๊ฒฐ๋˜์–ด์•ผ ํ•˜๋ฉฐ ์ตœ๋Œ€ 150W์˜ ์ „๋ ฅ์„ ๊ณต๊ธ‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ผ๋ถ€ ๋‹ค๋ฅธ GPU๋Š” PCI-E 12ํ•€ ์ปค๋„ฅํ„ฐ๋ฅผ ์‚ฌ์šฉํ•˜๋ฉฐ, ์ด๋Ÿฌํ•œ ์ปค๋„ฅํ„ฐ๋Š” ์ตœ๋Œ€ 500W-600W์˜ ์ „๋ ฅ์„ ๊ณต๊ธ‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ €๊ฐ€ํ˜• GPU๋Š” 6ํ•€ ์ปค๋„ฅํ„ฐ๋ฅผ ์‚ฌ์šฉํ•˜๋ฉฐ, ์ตœ๋Œ€ 75W์˜ ์ „๋ ฅ์„ ๊ณต๊ธ‰ํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ GPU๊ฐ€ ์•ˆ์ •์ ์ธ ์ „์••์„ ๋ฐ›์„ ์ˆ˜ ์žˆ๋„๋ก ๊ณ ๊ธ‰ PSU๋ฅผ ์„ ํƒํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ผ๋ถ€ ์ €ํ’ˆ์งˆ์˜ PSU๋Š” GPU๊ฐ€ ์ตœ๊ณ  ์„ฑ๋Šฅ์œผ๋กœ ๋™์ž‘ํ•˜๊ธฐ ์œ„ํ•ด ํ•„์š”ํ•œ ์ „์••์„ ์•ˆ์ •์ ์œผ๋กœ ๊ณต๊ธ‰ํ•˜์ง€ ๋ชปํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฌผ๋ก , PSU๋Š” GPU์— ์ „์›์„ ๊ณต๊ธ‰ํ•˜๊ธฐ์— ์ถฉ๋ถ„ํ•œ ์—ฌ๋ถ„์˜ ์ „๋ ฅ ์šฉ๋Ÿ‰์„ ๊ฐ€์ ธ์•ผ ํ•ฉ๋‹ˆ๋‹ค. **๋ƒ‰๊ฐ**: GPU๊ฐ€ ๊ณผ์—ด๋˜๋ฉด ์„ฑ๋Šฅ์ด ์ €ํ•˜๋˜๊ณ  ์ตœ๋Œ€ ์„ฑ๋Šฅ์„ ๋ฐœํœ˜ํ•˜์ง€ ๋ชปํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ, ๋„ˆ๋ฌด ๋œจ๊ฑฐ์›Œ์ง€๋ฉด ์ค‘์ง€๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. GPU๊ฐ€ ๊ณผ์—ด๋  ๋•Œ ์ •ํ™•ํ•œ ์ ์ • ์˜จ๋„๋ฅผ ์•Œ๊ธฐ ์–ด๋ ค์šฐ๋‚˜, ์•„๋งˆ๋„ +80โ„ƒ ๋ฏธ๋งŒ์ด๋ฉด ์ข‹์ง€๋งŒ ๋” ๋‚ฎ์„์ˆ˜๋ก ์ข‹์Šต๋‹ˆ๋‹ค. 70โ„ƒ-75โ„ƒ ์ •๋„๊ฐ€ ํ›Œ๋ฅญํ•œ ์˜จ๋„ ๋ฒ”์œ„์ž…๋‹ˆ๋‹ค. ์„ฑ๋Šฅ ์ €ํ•˜๊ฐ€ ๋ฐœ์ƒํ•˜๊ธฐ ์‹œ์ž‘ํ•˜๋Š” ์˜จ๋„๋Š” ๋Œ€๋žต 84โ„ƒ-90โ„ƒ ์ •๋„์ผ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์„ฑ๋Šฅ ์ €ํ•˜ ์ด์™ธ์—๋„ ์ง€์†์ ์œผ๋กœ ๋งค์šฐ ๋†’์€ ์˜จ๋„๋Š” GPU ์ˆ˜๋ช…์„ ๋‹จ์ถ•์‹œํ‚ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด์–ด์„œ, ์—ฌ๋Ÿฌ ๊ฐœ์˜ GPU๋ฅผ ์‚ฌ์šฉํ•  ๋•Œ ๊ฐ€์žฅ ์ค‘์š”ํ•œ ์ธก๋ฉด ์ค‘ ํ•˜๋‚˜์ธ GPU ๊ฐ„ ์—ฐ๊ฒฐ ๋ฐฉ์‹์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. 
### ๋‹ค์ค‘ GPU ์—ฐ๊ฒฐ ๋ฐฉ์‹ [[multigpu-connectivity]] ๋‹ค์ค‘ GPU๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ GPU ๊ฐ„์˜ ์—ฐ๊ฒฐ ๋ฐฉ์‹์€ ์ „์ฒด ํ›ˆ๋ จ ์‹œ๊ฐ„์— ํฐ ์˜ํ–ฅ์„ ๋ฏธ์น  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋งŒ์•ฝ GPU๊ฐ€ ๋™์ผํ•œ ๋ฌผ๋ฆฌ์  ๋…ธ๋“œ์— ์žˆ์„ ๊ฒฝ์šฐ, ๋‹ค์Œ๊ณผ ๊ฐ™์ด ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ``` nvidia-smi topo -m ``` ๋งŒ์•ฝ NVLink๋กœ ์—ฐ๊ฒฐ๋œ ๋“€์–ผ GPU ํ™˜๊ฒฝ์ด๋ผ๋ฉด, ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๊ฒฐ๊ณผ๋ฅผ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ``` GPU0 GPU1 CPU Affinity NUMA Affinity GPU0 X NV2 0-23 N/A GPU1 NV2 X 0-23 N/A ``` NVLink๋ฅผ ์ง€์›ํ•˜์ง€ ์•Š๋Š” ๋‹ค๋ฅธ ํ™˜๊ฒฝ์˜ ๊ฒฝ์šฐ์—๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๊ฒฐ๊ณผ๋ฅผ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ``` GPU0 GPU1 CPU Affinity NUMA Affinity GPU0 X PHB 0-11 N/A GPU1 PHB X 0-11 N/A ``` ์ด ๊ฒฐ๊ณผ์—๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋ฒ”๋ก€๊ฐ€ ํฌํ•จ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค: ``` X = Self SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI) NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU) PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge) PIX = Connection traversing at most a single PCIe bridge NV# = Connection traversing a bonded set of # NVLinks ``` ๋”ฐ๋ผ์„œ ์ฒซ ๋ฒˆ์งธ ๊ฒฐ๊ณผ์˜ `NV2`๋Š” GPU๊ฐ€ 2๊ฐœ์˜ NVLink๋กœ ์—ฐ๊ฒฐ๋˜์–ด ์žˆ๋‹ค๋Š” ๊ฒƒ์„ ๋‚˜ํƒ€๋‚ด๊ณ , ๋‘ ๋ฒˆ์งธ ๊ฒฐ๊ณผ์˜ `PHB`๋Š” ์ผ๋ฐ˜์ ์ธ ์†Œ๋น„์ž์šฉ PCIe+๋ธŒ๋ฆฟ์ง€ ์„ค์ •์„ ๊ฐ€์ง€๊ณ  ์žˆ๋‹ค๋Š” ๊ฒƒ์„ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. ์„ค์ •์—์„œ ์–ด๋–ค ์œ ํ˜•์˜ ์—ฐ๊ฒฐ ๋ฐฉ์‹์„ ๊ฐ€์ง€๊ณ  ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”. ์ผ๋ถ€ ์—ฐ๊ฒฐ ๋ฐฉ์‹์€ GPU ๊ฐ„ ํ†ต์‹ ์„ ๋” ๋น ๋ฅด๊ฒŒ ๋งŒ๋“ค ์ˆ˜ ์žˆ์œผ๋ฉฐ(NVLink์™€ ๊ฐ™์ด), ์–ด๋–ค ์—ฐ๊ฒฐ ๋ฐฉ์‹์€ ๋” ๋А๋ฆฌ๊ฒŒ ๋งŒ๋“ค ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค(PHB์™€ ๊ฐ™์ด). ์‚ฌ์šฉํ•˜๋Š” ํ™•์žฅ์„ฑ ์†”๋ฃจ์…˜์˜ ์ข…๋ฅ˜์— ๋”ฐ๋ผ ์—ฐ๊ฒฐ ์†๋„๊ฐ€ ์ฃผ์š”ํ•œ ์˜ํ–ฅ์„ ๋ฏธ์น  ์ˆ˜๋„ ์žˆ๊ณ  ๋ฏธ๋ฏธํ•œ ์˜ํ–ฅ์„ ๋ฏธ์น  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. 
DDP์™€ ๊ฐ™์ด GPU๊ฐ€ ๊ฑฐ์˜ ๋™๊ธฐํ™”ํ•˜์ง€ ์•Š์•„๋„ ๋˜๋Š” ๊ฒฝ์šฐ, ์—ฐ๊ฒฐ ์†๋„๊ฐ€ ๋А๋ ค๋„ ํฐ ์˜ํ–ฅ์„ ๋ฐ›์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๋ฐ˜๋ฉด ZeRO-DP์™€ ๊ฐ™์ด GPU๊ฐ„ ํ†ต์‹ ์ด ๋งŽ์ด ํ•„์š”ํ•œ ๊ฒฝ์šฐ, ๋” ๋น ๋ฅธ ํ›ˆ๋ จ์„ ์œ„ํ•ด์„œ๋Š” ๋” ๋น ๋ฅธ ์—ฐ๊ฒฐ ์†๋„๊ฐ€ ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. #### NVLink [[nvlink]] [NVLink](https://en.wikipedia.org/wiki/NVLink)๋Š” Nvidia์—์„œ ๊ฐœ๋ฐœํ•œ ์œ ์„  ๊ธฐ๋ฐ˜์˜ ์ง๋ ฌ ๋‹ค์ค‘ ๋ ˆ์ธ ๊ทผ๊ฑฐ๋ฆฌ ํ†ต์‹  ๋งํฌ์ž…๋‹ˆ๋‹ค. ์ƒˆ๋กœ์šด ์„ธ๋Œ€์˜ NVLink๋Š” ๋” ๋น ๋ฅธ ๋Œ€์—ญํญ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. [Nvidia Ampere GA102 GPU Architecture](https://www.nvidia.com/content/dam/en-zz/Solutions/geforce/ampere/pdf/NVIDIA-ampere-GA102-GPU-Architecture-Whitepaper-V1.pdf)์—์„œ ์•„๋ž˜์™€ ๊ฐ™์€ ์ •๋ณด๋ฅผ ํ™•์ธํ•˜์‹ค ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: > 3์„ธ๋Œ€ NVLinkยฎ > GA102 GPU๋Š” 4๊ฐœ์˜ x4 ๋งํฌ๋ฅผ ํฌํ•จํ•˜๋Š” NVIDIA์˜ 3์„ธ๋Œ€ NVLink ์ธํ„ฐํŽ˜์ด์Šค๋ฅผ ํ™œ์šฉํ•˜๋ฉฐ, > ๊ฐ ๋งํฌ๋Š” ๋‘ ๊ฐœ์˜ GPU ๊ฐ„์— ๊ฐ ๋ฐฉํ–ฅ์œผ๋กœ ์ดˆ๋‹น 14.0625GB์˜ ๋Œ€์—ญํญ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. > 4๊ฐœ์˜ ๋งํฌ๋Š” ๊ฐ ๋ฐฉํ–ฅ์— ์ดˆ๋‹น 56.25GB์˜ ๋Œ€์—ญํญ์„ ์ œ๊ณตํ•˜๋ฉฐ, ๋‘ ๊ฐœ์˜ GPU ๊ฐ„์—๋Š” ์ดˆ๋‹น 112.5GB์˜ ์ด ๋Œ€์—ญํญ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. > ๋‘ ๊ฐœ์˜ RTX 3090 GPU๋ฅผ NVLink๋ฅผ ์‚ฌ์šฉํ•ด SLI๋กœ ์—ฐ๊ฒฐํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. > (3-Way ๋ฐ 4-Way SLI ๊ตฌ์„ฑ์€ ์ง€์›๋˜์ง€ ์•Š์Œ์— ์œ ์˜ํ•˜์„ธ์š”.) ๋”ฐ๋ผ์„œ `nvidia-smi topo -m`์˜ ๊ฒฐ๊ณผ์—์„œ `NVX`์˜ ๊ฐ’์ด ๋†’์„์ˆ˜๋ก ๋” ์ข‹์Šต๋‹ˆ๋‹ค. ์„ธ๋Œ€๋Š” GPU ์•„ํ‚คํ…์ฒ˜์— ๋”ฐ๋ผ ๋‹ค๋ฅผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋ ‡๋‹ค๋ฉด, gpt2๋ฅผ ์ž‘์€ wikitext ์ƒ˜ํ”Œ๋กœ ํ•™์Šต์‹œํ‚ค๋Š” ์˜ˆ์ œ๋ฅผ ํ†ตํ•ด, NVLink๊ฐ€ ํ›ˆ๋ จ์— ์–ด๋–ค ์˜ํ–ฅ์„ ๋ฏธ์น˜๋Š”์ง€ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ๊ฒฐ๊ณผ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: | NVlink | Time | | ----- | ---: | | Y | 101s | | N | 131s | NVLink ์‚ฌ์šฉ ์‹œ ํ›ˆ๋ จ์ด ์•ฝ 23% ๋” ๋น ๋ฅด๊ฒŒ ์™„๋ฃŒ๋จ์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‘ ๋ฒˆ์งธ ๋ฒค์น˜๋งˆํฌ์—์„œ๋Š” `NCCL_P2P_DISABLE=1`์„ ์‚ฌ์šฉํ•˜์—ฌ NVLink๋ฅผ ์‚ฌ์šฉํ•˜์ง€ ์•Š๋„๋ก ์„ค์ •ํ–ˆ์Šต๋‹ˆ๋‹ค. 
์ „์ฒด ๋ฒค์น˜๋งˆํฌ ์ฝ”๋“œ์™€ ๊ฒฐ๊ณผ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```bash # DDP w/ NVLink rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 torchrun \ --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py --model_name_or_path gpt2 \ --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train \ --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 {'train_runtime': 101.9003, 'train_samples_per_second': 1.963, 'epoch': 0.69} # DDP w/o NVLink rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 NCCL_P2P_DISABLE=1 torchrun \ --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py --model_name_or_path gpt2 \ --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200 {'train_runtime': 131.4367, 'train_samples_per_second': 1.522, 'epoch': 0.69} ``` ํ•˜๋“œ์›จ์–ด: ๊ฐ๊ฐ 2๊ฐœ์˜ TITAN RTX 24GB + 2๊ฐœ์˜ NVLink (`NV2` in `nvidia-smi topo -m`) ์†Œํ”„ํŠธ์›จ์–ด: `pytorch-1.8-to-be` + `cuda-11.0` / `transformers==4.3.0.dev0`
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์–ดํ…์…˜ ๋ฉ”์ปค๋‹ˆ์ฆ˜[[attention_mechanisms]] ๋Œ€๋ถ€๋ถ„์˜ ํŠธ๋žœ์Šคํฌ๋จธ ๋ชจ๋ธ์€ ์ •๋ฐฉํ–‰๋ ฌ์ธ ์ „์ฒด ์–ดํ…์…˜์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์ด๋Š” ๊ธด ํ…์ŠคํŠธ๋ฅผ ๋‹ค๋ฃฐ ๋•Œ๋Š” ํฐ ๊ณ„์‚ฐ ๋ณ‘๋ชฉ ํ˜„์ƒ์„ ์œ ๋ฐœํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `Longformer`์™€ `Reformer`๋Š” ํ›ˆ๋ จ ์†๋„๋ฅผ ๋†’์ด๊ธฐ ์œ„ํ•ด ์–ดํ…์…˜ ํ–‰๋ ฌ์˜ ํฌ์†Œ ๋ฒ„์ „์„ ์‚ฌ์šฉํ•˜์—ฌ ํšจ์œจ์„ ๋†’์ด๋ ค๋Š” ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. ## LSH ์–ดํ…์…˜[[lsh_attention]] [Reformer](#reformer)๋Š” LSH(Locality Sensitive Hashing) ์–ดํ…์…˜์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. softmax(QK^t)์—์„œ๋Š” ํ–‰๋ ฌ QK^t์˜ (softmax ์ฐจ์›์—์„œ) ๊ฐ€์žฅ ํฐ ์š”์†Œ๋“ค๋งŒ ์œ ์šฉํ•œ ๊ธฐ์—ฌ๋ฅผ ํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ๊ฐ๊ฐ์˜ ์ฟผ๋ฆฌ q์— ๋Œ€ํ•ด, q์™€ ๊ฐ€๊นŒ์šด ํ‚ค k๋งŒ ๊ณ ๋ คํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•ด์‹œ ํ•จ์ˆ˜๋Š” q์™€ k๊ฐ€ ๊ฐ€๊นŒ์šด์ง€ ์—ฌ๋ถ€๋ฅผ ๊ฒฐ์ •ํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ์–ดํ…์…˜ ๋งˆ์Šคํฌ๋Š” ํ˜„์žฌ ํ† ํฐ์„ ๋งˆ์Šคํ‚นํ•˜์—ฌ ๋ณ€๊ฒฝ๋ฉ๋‹ˆ๋‹ค. ์ด ๋•Œ ์ฒซ ๋ฒˆ์งธ ์œ„์น˜์˜ ํ† ํฐ์€ ์ œ์™ธํ•ฉ๋‹ˆ๋‹ค. ์™œ๋ƒํ•˜๋ฉด ์ฟผ๋ฆฌ์™€ ํ‚ค๊ฐ€ ๋™์ผํ•œ ๊ฐ’์„ ๊ฐ–๊ฒŒ ๋˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค(์„œ๋กœ ๋งค์šฐ ์œ ์‚ฌํ•จ). 
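LSH์˜ ํ•ต์‹ฌ ์•„์ด๋””์–ด(๊ฐ€๊นŒ์šด ๋ฒกํ„ฐ๋Š” ๋†’์€ ํ™•๋ฅ ๋กœ ๊ฐ™์€ ๋ฒ„ํ‚ท์— ๋–จ์–ด์ง„๋‹ค)๋ฅผ ๋ฌด์ž‘์œ„ ์ดˆํ‰๋ฉด์˜ ๋ถ€ํ˜ธ ๋น„ํŠธ๋กœ ๊ฐ„๋‹จํžˆ ์Šค์ผ€์น˜ํ•˜๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค. Reformer๊ฐ€ ์‹ค์ œ๋กœ ์‚ฌ์šฉํ•˜๋Š” ํ•ด์‹œ์™€๋Š” ์„ธ๋ถ€๊ฐ€ ๋‹ค๋ฅธ, ๊ฐœ๋… ์„ค๋ช…์šฉ ์˜ˆ์‹œ์ž…๋‹ˆ๋‹ค:

```python
import random

random.seed(0)
dim, n_planes = 16, 3  # ๋ฒ„ํ‚ท ์ˆ˜ = 2 ** n_planes = 8
planes = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n_planes)]

def lsh_bucket(vec, planes):
    """๋ฌด์ž‘์œ„ ์ดˆํ‰๋ฉด๋“ค์— ๋Œ€ํ•œ ๋ถ€ํ˜ธ ๋น„ํŠธ๋ฅผ ๋ชจ์•„ ๋ฒ„ํ‚ท ๋ฒˆํ˜ธ๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค."""
    bucket = 0
    for plane in planes:
        dot = sum(v * p for v, p in zip(vec, plane))
        bucket = (bucket << 1) | (1 if dot >= 0 else 0)
    return bucket

q = [random.gauss(0, 1) for _ in range(dim)]
k_close = [x + 0.01 * random.gauss(0, 1) for x in q]  # q์™€ ๊ฑฐ์˜ ๊ฐ™์€ ๋ฐฉํ–ฅ์˜ ํ‚ค
# ๊ฐ€๊นŒ์šด q/k ์Œ์€ ๋†’์€ ํ™•๋ฅ ๋กœ ๊ฐ™์€ ๋ฒ„ํ‚ท์„ ๋ฐ›๊ณ , ๊ฐ™์€ ๋ฒ„ํ‚ท ์•ˆ์—์„œ๋งŒ ์–ดํ…์…˜์„ ๊ณ„์‚ฐํ•ฉ๋‹ˆ๋‹ค.
print(lsh_bucket(q, planes), lsh_bucket(k_close, planes))
```

์ด๋ ‡๊ฒŒ ๋ฒ„ํ‚ท ๋‹จ์œ„๋กœ ์–ดํ…์…˜์„ ์ œํ•œํ•˜๋ฉด ์ „์ฒด \\(QK^t\\) ํ–‰๋ ฌ์„ ๊ณ„์‚ฐํ•  ํ•„์š”๊ฐ€ ์—†์–ด์ง‘๋‹ˆ๋‹ค.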
ํ•ด์‹œ๋Š” ์•ฝ๊ฐ„์˜ ๋ฌด์ž‘์œ„์„ฑ์„ ๊ฐ€์งˆ ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ, ์‹ค์ œ๋กœ๋Š” ์—ฌ๋Ÿฌ ๊ฐœ์˜ ํ•ด์‹œ ํ•จ์ˆ˜๊ฐ€ ์‚ฌ์šฉ๋˜๊ณ  (`n_rounds` ๋งค๊ฐœ๋ณ€์ˆ˜์— ์˜ํ•ด ๊ฒฐ์ •๋จ) ๊ทธ ํ›„์— ํ‰๊ท ๊ฐ’์„ ์ทจํ•˜๊ฒŒ ๋ฉ๋‹ˆ๋‹ค. ## ์ง€์—ญ ์–ดํ…์…˜[[local_attention]] [Longformer](#longformer)๋Š” ์ง€์—ญ ์–ดํ…์…˜์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ข…์ข… ํŠน์ • ํ† ํฐ์— ๋Œ€ํ•ด ์ง€์—ญ ์ปจํ…์ŠคํŠธ(์˜ˆ: ์™ผ์ชฝ๊ณผ ์˜ค๋ฅธ์ชฝ์— ์žˆ๋Š” ๋‘ ๊ฐœ์˜ ํ† ํฐ์€ ๋ฌด์—‡์ธ๊ฐ€์š”?)๋งŒ์œผ๋กœ๋„ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•˜๋Š”๋ฐ ์ถฉ๋ถ„ํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ ์ž‘์€ ์ฐฝ(window)์„ ๊ฐ€์ง„ ์–ดํ…์…˜ ๋ ˆ์ด์–ด๋ฅผ ์Œ“์Œ์œผ๋กœ์จ ๋งˆ์ง€๋ง‰ ๋ ˆ์ด์–ด๋Š” ์ฐฝ ๋‚ด์˜ ํ† ํฐ๋ฟ๋งŒ ์•„๋‹ˆ๋ผ ๋” ๋งŽ์€ ์ˆ˜์˜ ํ† ํฐ์— ๋Œ€ํ•œ ์ˆ˜์šฉ ์˜์—ญ(receptive field)์„ ๊ฐ–๊ฒŒ ๋˜์–ด ์ „์ฒด ๋ฌธ์žฅ์˜ ํ‘œํ˜„์„ ๊ตฌ์ถ•ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์‚ฌ์ „์— ์„ ํƒ๋œ ์ผ๋ถ€ ์ž…๋ ฅ ํ† ํฐ๋“ค์€ ์ „์—ญ ์–ดํ…์…˜์„ ๋ฐ›์Šต๋‹ˆ๋‹ค. ์ด ๋ช‡ ๊ฐœ์˜ ํ† ํฐ์— ๋Œ€ํ•ด์„œ๋Š” ์–ดํ…์…˜ ํ–‰๋ ฌ์ด ๋ชจ๋“  ํ† ํฐ์— ์ ‘๊ทผํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ, ์ด ๊ณผ์ •์€ ๋Œ€์นญ์ ์œผ๋กœ ์ด๋ฃจ์–ด์ง‘๋‹ˆ๋‹ค. ๋‹ค๋ฅธ ๋ชจ๋“  ํ† ํฐ๋“ค์€ ๋กœ์ปฌ ์ฐฝ ๋‚ด์˜ ํ† ํฐ๋“ค์— ๋”ํ•ด ํ•ด๋‹น ํŠน์ • ํ† ํฐ๋“ค์—๋„ ์ ‘๊ทผํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Š” ๋…ผ๋ฌธ์˜ Figure 2d์—์„œ ๋‚˜ํƒ€๋‚˜๋ฉฐ, ์•„๋ž˜์— ์ƒ˜ํ”Œ ์–ดํ…์…˜ ๋งˆ์Šคํฌ๊ฐ€ ์ œ์‹œ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค: <div class="flex justify-center"> <img scale="50 %" align="center" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/local_attention_mask.png"/> </div> ์ ์€ ํŒŒ๋ผ๋ฏธํ„ฐ์˜ ์–ดํ…์…˜ ํ–‰๋ ฌ์„ ์‚ฌ์šฉํ•˜๋ฉด ๋ชจ๋ธ์ด ๋” ํฐ ์‹œํ€€์Šค ์ž…๋ ฅ ๊ธธ์ด๋ฅผ ๊ฐ€์งˆ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ## ๋‹ค๋ฅธ ๋ฐฉ๋ฒ•๋“ค[[other_tricks]] ### ์ถ•๋ณ„ ์œ„์น˜ ์ธ์ฝ”๋”ฉ[[axial_positional_encodings]] [Reformer](#reformer)๋Š” ์ถ•๋ณ„ ์œ„์น˜ ์ธ์ฝ”๋”ฉ(axial positional encodings)์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ๊ธฐ์กด์˜ ํŠธ๋žœ์Šคํฌ๋จธ ๋ชจ๋ธ์—์„œ๋Š” ์œ„์น˜ ์ธ์ฝ”๋”ฉ ํ–‰๋ ฌ E๋Š” ํฌ๊ธฐ๊ฐ€ \\(l \times d\\)์ธ ํ–‰๋ ฌ์ด๋ฉฐ, ์—ฌ๊ธฐ์„œ \\(l\\)์€ ์‹œํ€€์Šค ๊ธธ์ด(sequence length)์ด๊ณ  \\(d\\)๋Š” ์ˆจ๊ฒจ์ง„ ์ƒํƒœ(hidden state)์˜ ์ฐจ์›์ž…๋‹ˆ๋‹ค. 
๋งค์šฐ ๊ธด ํ…์ŠคํŠธ์˜ ๊ฒฝ์šฐ, ์ด ํ–‰๋ ฌ์€ ๋งค์šฐ ํฌ๋ฉฐ GPU ์ƒ์—์„œ ๊ณต๊ฐ„์„ ๋งŽ์ด ์ฐจ์ง€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฅผ ์™„ํ™”ํ•˜๊ธฐ ์œ„ํ•ด, ์ถ•๋ณ„ ์œ„์น˜ ์ธ์ฝ”๋”ฉ์€ ํฐ ํ–‰๋ ฌ E๋ฅผ ๋‘ ๊ฐœ์˜ ์ž‘์€ ํ–‰๋ ฌ E1๊ณผ E2๋กœ ๋ถ„ํ•ดํ•ฉ๋‹ˆ๋‹ค. ์ด๋•Œ E1์˜ ํฌ๊ธฐ๋Š” \\(l_{1} \times d_{1}\\)์ด๊ณ , E2์˜ ํฌ๊ธฐ๋Š” \\(l_{2} \times d_{2}\\)์ž…๋‹ˆ๋‹ค. ์ด๋•Œ \\(l_{1} \times l_{2} = l\\)์ด๊ณ  \\(d_{1} + d_{2} = d\\)(๊ธธ์ด์— ๋Œ€ํ•œ ๊ณฑ์…ˆ ์—ฐ์‚ฐ์„ ์‚ฌ์šฉํ•˜๋ฉด ํ›จ์”ฌ ์ž‘์•„์ง‘๋‹ˆ๋‹ค). E์˜ ์‹œ๊ฐ„ ๋‹จ๊ณ„ j์— ๋Œ€ํ•œ ์ž„๋ฒ ๋”ฉ์€ E1์—์„œ ์‹œ๊ฐ„ ๋‹จ๊ณ„ \\(j \% l1\\)์˜ ์ž„๋ฒ ๋”ฉ๊ณผ E2์—์„œ ์‹œ๊ฐ„ ๋‹จ๊ณ„ \\(j // l1\\)์˜ ์ž„๋ฒ ๋”ฉ์„ ์—ฐ๊ฒฐํ•˜์—ฌ ์–ป์Šต๋‹ˆ๋‹ค.
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# ๋Œ€๊ทœ๋ชจ ์–ธ์–ด ๋ชจ๋ธ๋กœ ์ƒ์„ฑํ•˜๊ธฐ [[generation-with-llms]]

[[open-in-colab]]

LLM ๋˜๋Š” ๋Œ€๊ทœ๋ชจ ์–ธ์–ด ๋ชจ๋ธ์€ ํ…์ŠคํŠธ ์ƒ์„ฑ์˜ ํ•ต์‹ฌ ๊ตฌ์„ฑ ์š”์†Œ์ž…๋‹ˆ๋‹ค. ๊ฐ„๋‹จํžˆ ๋งํ•˜๋ฉด, ์ฃผ์–ด์ง„ ์ž…๋ ฅ ํ…์ŠคํŠธ์— ๋Œ€ํ•œ ๋‹ค์Œ ๋‹จ์–ด(์ •ํ™•ํ•˜๊ฒŒ๋Š” ํ† ํฐ)๋ฅผ ์˜ˆ์ธกํ•˜๋„๋ก ํ›ˆ๋ จ๋œ, ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋Œ€๊ทœ๋ชจ ํŠธ๋žœ์Šคํฌ๋จธ ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. ํ† ํฐ์„ ํ•œ ๋ฒˆ์— ํ•˜๋‚˜์”ฉ ์˜ˆ์ธกํ•˜๊ธฐ ๋•Œ๋ฌธ์— ์ƒˆ๋กœ์šด ๋ฌธ์žฅ์„ ์ƒ์„ฑํ•˜๋ ค๋ฉด ๋ชจ๋ธ์„ ํ•œ ๋ฒˆ ํ˜ธ์ถœํ•˜๋Š” ๊ฒƒ ์™ธ์— ๋” ๋ณต์žกํ•œ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ฆ‰, ์ž๊ธฐํšŒ๊ท€ ์ƒ์„ฑ์„ ์ˆ˜ํ–‰ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค.

์ž๊ธฐํšŒ๊ท€ ์ƒ์„ฑ์€ ๋ช‡ ๊ฐœ์˜ ์ดˆ๊ธฐ ์ž…๋ ฅ๊ฐ’์„ ์ œ๊ณตํ•œ ํ›„, ๊ทธ ์ถœ๋ ฅ์„ ๋‹ค์‹œ ๋ชจ๋ธ์— ์ž…๋ ฅ์œผ๋กœ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ˜๋ณต์ ์œผ๋กœ ํ˜ธ์ถœํ•˜๋Š” ์ถ”๋ก  ๊ณผ์ •์ž…๋‹ˆ๋‹ค. ๐Ÿค— Transformers์—์„œ๋Š” [`~generation.GenerationMixin.generate`] ๋ฉ”์†Œ๋“œ๊ฐ€ ์ด ์—ญํ• ์„ ํ•˜๋ฉฐ, ์ด๋Š” ์ƒ์„ฑ ๊ธฐ๋Šฅ์„ ๊ฐ€์ง„ ๋ชจ๋“  ๋ชจ๋ธ์—์„œ ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•ฉ๋‹ˆ๋‹ค.
์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ๋Š” ๋‹ค์Œ ๋‚ด์šฉ์„ ๋‹ค๋ฃจ๊ฒŒ ๋ฉ๋‹ˆ๋‹ค: * LLM์œผ๋กœ ํ…์ŠคํŠธ ์ƒ์„ฑ * ์ผ๋ฐ˜์ ์œผ๋กœ ๋ฐœ์ƒํ•˜๋Š” ๋ฌธ์ œ ํ•ด๊ฒฐ * LLM์„ ์ตœ๋Œ€ํ•œ ํ™œ์šฉํ•˜๊ธฐ ์œ„ํ•œ ๋‹ค์Œ ๋‹จ๊ณ„ ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ชจ๋“  ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install transformers bitsandbytes>=0.39.0 -q ``` ## ํ…์ŠคํŠธ ์ƒ์„ฑ [[generate-text]] [์ธ๊ณผ์  ์–ธ์–ด ๋ชจ๋ธ๋ง(causal language modeling)](tasks/language_modeling)์„ ๋ชฉ์ ์œผ๋กœ ํ•™์Šต๋œ ์–ธ์–ด ๋ชจ๋ธ์€ ์ผ๋ จ์˜ ํ…์ŠคํŠธ ํ† ํฐ์„ ์ž…๋ ฅ์œผ๋กœ ์‚ฌ์šฉํ•˜๊ณ , ๊ทธ ๊ฒฐ๊ณผ๋กœ ๋‹ค์Œ ํ† ํฐ์ด ๋‚˜์˜ฌ ํ™•๋ฅ  ๋ถ„ํฌ๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. <!-- [GIF 1 -- FWD PASS] --> <figure class="image table text-center m-0 w-full"> <video style="max-width: 90%; margin: auto;" autoplay loop muted playsinline src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/assisted-generation/gif_1_1080p.mov" ></video> <figcaption>"LLM์˜ ์ „๋ฐฉ ํŒจ์Šค"</figcaption> </figure> LLM๊ณผ ์ž๊ธฐํšŒ๊ท€ ์ƒ์„ฑ์„ ํ•จ๊ป˜ ์‚ฌ์šฉํ•  ๋•Œ ํ•ต์‹ฌ์ ์ธ ๋ถ€๋ถ„์€ ์ด ํ™•๋ฅ  ๋ถ„ํฌ๋กœ๋ถ€ํ„ฐ ๋‹ค์Œ ํ† ํฐ์„ ์–ด๋–ป๊ฒŒ ๊ณ ๋ฅผ ๊ฒƒ์ธ์ง€์ž…๋‹ˆ๋‹ค. ๋‹ค์Œ ๋ฐ˜๋ณต ๊ณผ์ •์— ์‚ฌ์šฉ๋  ํ† ํฐ์„ ๊ฒฐ์ •ํ•˜๋Š” ํ•œ, ์–ด๋– ํ•œ ๋ฐฉ๋ฒ•๋„ ๊ฐ€๋Šฅํ•ฉ๋‹ˆ๋‹ค. ํ™•๋ฅ  ๋ถ„ํฌ์—์„œ ๊ฐ€์žฅ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์€ ํ† ํฐ์„ ์„ ํƒํ•˜๋Š” ๊ฒƒ์ฒ˜๋Ÿผ ๊ฐ„๋‹จํ•  ์ˆ˜๋„ ์žˆ๊ณ , ๊ฒฐ๊ณผ ๋ถ„ํฌ์—์„œ ์ƒ˜ํ”Œ๋งํ•˜๊ธฐ ์ „์— ์ˆ˜์‹ญ ๊ฐ€์ง€ ๋ณ€ํ™˜์„ ์ ์šฉํ•˜๋Š” ๊ฒƒ์ฒ˜๋Ÿผ ๋ณต์žกํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. 
<!-- [GIF 2 -- TEXT GENERATION] --> <figure class="image table text-center m-0 w-full"> <video style="max-width: 90%; margin: auto;" autoplay loop muted playsinline src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/assisted-generation/gif_2_1080p.mov" ></video> <figcaption>"์ž๊ธฐํšŒ๊ท€ ์ƒ์„ฑ์€ ํ™•๋ฅ  ๋ถ„ํฌ์—์„œ ๋‹ค์Œ ํ† ํฐ์„ ๋ฐ˜๋ณต์ ์œผ๋กœ ์„ ํƒํ•˜์—ฌ ํ…์ŠคํŠธ๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค."</figcaption> </figure> ์œ„์—์„œ ์„ค๋ช…ํ•œ ๊ณผ์ •์€ ์–ด๋–ค ์ข…๋ฃŒ ์กฐ๊ฑด์ด ์ถฉ์กฑ๋  ๋•Œ๊นŒ์ง€ ๋ฐ˜๋ณต์ ์œผ๋กœ ์ˆ˜ํ–‰๋ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์ด ์‹œํ€€์Šค์˜ ๋(EOS ํ† ํฐ)์„ ์ถœ๋ ฅํ•  ๋•Œ๊นŒ์ง€๋ฅผ ์ข…๋ฃŒ ์กฐ๊ฑด์œผ๋กœ ํ•˜๋Š” ๊ฒƒ์ด ์ด์ƒ์ ์ž…๋‹ˆ๋‹ค. ๊ทธ๋ ‡์ง€ ์•Š์€ ๊ฒฝ์šฐ์—๋Š” ๋ฏธ๋ฆฌ ์ •์˜๋œ ์ตœ๋Œ€ ๊ธธ์ด์— ๋„๋‹ฌํ–ˆ์„ ๋•Œ ์ƒ์„ฑ์ด ์ค‘๋‹จ๋ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์ด ์˜ˆ์ƒ๋Œ€๋กœ ๋™์ž‘ํ•˜๊ธฐ ์œ„ํ•ด์„  ํ† ํฐ ์„ ํƒ ๋‹จ๊ณ„์™€ ์ •์ง€ ์กฐ๊ฑด์„ ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ์„ค์ •ํ•˜๋Š” ๊ฒƒ์ด ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ์ด์œ ๋กœ, ๊ฐ ๋ชจ๋ธ์—๋Š” ๊ธฐ๋ณธ ์ƒ์„ฑ ์„ค์ •์ด ์ž˜ ์ •์˜๋œ [`~generation.GenerationConfig`] ํŒŒ์ผ์ด ํ•จ๊ป˜ ์ œ๊ณต๋ฉ๋‹ˆ๋‹ค. ์ฝ”๋“œ๋ฅผ ํ™•์ธํ•ด๋ด…์‹œ๋‹ค! <Tip> ๊ธฐ๋ณธ LLM ์‚ฌ์šฉ์— ๊ด€์‹ฌ์ด ์žˆ๋‹ค๋ฉด, ์šฐ๋ฆฌ์˜ [`Pipeline`](pipeline_tutorial) ์ธํ„ฐํŽ˜์ด์Šค๋กœ ์‹œ์ž‘ํ•˜๋Š” ๊ฒƒ์„ ์ถ”์ฒœํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ LLM์€ ์–‘์žํ™”๋‚˜ ํ† ํฐ ์„ ํƒ ๋‹จ๊ณ„์—์„œ์˜ ๋ฏธ์„ธํ•œ ์ œ์–ด์™€ ๊ฐ™์€ ๊ณ ๊ธ‰ ๊ธฐ๋Šฅ๋“ค์„ ์ข…์ข… ํ•„์š”๋กœ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ์ž‘์—…์€ [`~generation.GenerationMixin.generate`]๋ฅผ ํ†ตํ•ด ๊ฐ€์žฅ ์ž˜ ์ˆ˜ํ–‰๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. LLM์„ ์ด์šฉํ•œ ์ž๊ธฐํšŒ๊ท€ ์ƒ์„ฑ์€ ์ž์›์„ ๋งŽ์ด ์†Œ๋ชจํ•˜๋ฏ€๋กœ, ์ ์ ˆํ•œ ์ฒ˜๋ฆฌ๋Ÿ‰์„ ์œ„ํ•ด GPU์—์„œ ์‹คํ–‰๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. </Tip> ๋จผ์ €, ๋ชจ๋ธ์„ ๋ถˆ๋Ÿฌ์˜ค์„ธ์š”. ```python >>> from transformers import AutoModelForCausalLM >>> model = AutoModelForCausalLM.from_pretrained( ... "mistralai/Mistral-7B-v0.1", device_map="auto", load_in_4bit=True ... 
)
```

`from_pretrained` ํ•จ์ˆ˜๋ฅผ ํ˜ธ์ถœํ•  ๋•Œ 2๊ฐœ์˜ ํ”Œ๋ž˜๊ทธ๋ฅผ ์ฃผ๋ชฉํ•˜์„ธ์š”:

- `device_map`์€ ๋ชจ๋ธ์ด GPU๋กœ ์ด๋™๋˜๋„๋ก ํ•ฉ๋‹ˆ๋‹ค.
- `load_in_4bit`๋Š” ๋ฆฌ์†Œ์Šค ์š”๊ตฌ ์‚ฌํ•ญ์„ ํฌ๊ฒŒ ์ค„์ด๊ธฐ ์œ„ํ•ด [4๋น„ํŠธ ๋™์  ์–‘์žํ™”](main_classes/quantization)๋ฅผ ์ ์šฉํ•ฉ๋‹ˆ๋‹ค.

์ด ์™ธ์—๋„ ๋ชจ๋ธ์„ ์ดˆ๊ธฐํ™”ํ•˜๋Š” ๋‹ค์–‘ํ•œ ๋ฐฉ๋ฒ•์ด ์žˆ์ง€๋งŒ, LLM์„ ์ฒ˜์Œ ์‹œ์ž‘ํ•  ๋•Œ๋Š” ์ด ์„ค์ •์„ ์ถ”์ฒœํ•ฉ๋‹ˆ๋‹ค.

์ด์–ด์„œ ํ…์ŠคํŠธ ์ž…๋ ฅ์„ [ํ† ํฌ๋‚˜์ด์ €](tokenizer_summary)๋กœ ์ „์ฒ˜๋ฆฌํ•˜์„ธ์š”.

```python
>>> from transformers import AutoTokenizer
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
>>> device = "cuda" if torch.cuda.is_available() else "cpu"
>>> model_inputs = tokenizer(["A list of colors: red, blue"], return_tensors="pt").to(device)
```

`model_inputs` ๋ณ€์ˆ˜์—๋Š” ํ† ํฐํ™”๋œ ํ…์ŠคํŠธ ์ž…๋ ฅ๊ณผ ํ•จ๊ป˜ ์–ดํ…์…˜ ๋งˆ์Šคํฌ๊ฐ€ ๋“ค์–ด ์žˆ์Šต๋‹ˆ๋‹ค. [`~generation.GenerationMixin.generate`]๋Š” ์–ดํ…์…˜ ๋งˆ์Šคํฌ๊ฐ€ ์ œ๊ณต๋˜์ง€ ์•Š์•˜์„ ๊ฒฝ์šฐ์—๋„ ์ด๋ฅผ ์ถ”๋ก ํ•˜๋ ค๊ณ  ๋…ธ๋ ฅํ•˜์ง€๋งŒ, ์ตœ์ƒ์˜ ์„ฑ๋Šฅ์„ ์œ„ํ•ด์„œ๋Š” ๊ฐ€๋Šฅํ•˜๋ฉด ์–ดํ…์…˜ ๋งˆ์Šคํฌ๋ฅผ ์ง์ ‘ ์ „๋‹ฌํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค.

๋งˆ์ง€๋ง‰์œผ๋กœ [`~generation.GenerationMixin.generate`] ๋ฉ”์†Œ๋“œ๋ฅผ ํ˜ธ์ถœํ•ด ์ƒ์„ฑ๋œ ํ† ํฐ์„ ์–ป์€ ํ›„, ์ด๋ฅผ ์ถœ๋ ฅํ•˜๊ธฐ ์ „์— ํ…์ŠคํŠธ ํ˜•ํƒœ๋กœ ๋ณ€ํ™˜ํ•˜์„ธ์š”.

```python
>>> generated_ids = model.generate(**model_inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'A list of colors: red, blue, green, yellow, black, white, and brown'
```

์ด๊ฒŒ ์ „๋ถ€์ž…๋‹ˆ๋‹ค! ๋ช‡ ์ค„์˜ ์ฝ”๋“œ๋งŒ์œผ๋กœ LLM์˜ ๋Šฅ๋ ฅ์„ ํ™œ์šฉํ•  ์ˆ˜ ์žˆ๊ฒŒ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค.

## ์ผ๋ฐ˜์ ์œผ๋กœ ๋ฐœ์ƒํ•˜๋Š” ๋ฌธ์ œ [[common-pitfalls]]

[์ƒ์„ฑ ์ „๋žต](generation_strategies)์€ ์ข…๋ฅ˜๊ฐ€ ๋งŽ๊ณ , ๊ธฐ๋ณธ๊ฐ’์ด ํ•ญ์ƒ ์‚ฌ์šฉ ์‚ฌ๋ก€์— ์ ํ•ฉํ•œ ๊ฒƒ์€ ์•„๋‹™๋‹ˆ๋‹ค. ์ถœ๋ ฅ์ด ์˜ˆ์ƒ๊ณผ ๋‹ค๋ฅผ ๋•Œ ํ”ํžˆ ๋ฐœ์ƒํ•˜๋Š” ๋ฌธ์ œ์™€ ์ด๋ฅผ ํ•ด๊ฒฐํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ๋ชฉ๋ก์„ ๋งŒ๋“ค์—ˆ์Šต๋‹ˆ๋‹ค.
```py >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1") >>> tokenizer.pad_token = tokenizer.eos_token # Mistral has no pad token by default >>> model = AutoModelForCausalLM.from_pretrained( ... "mistralai/Mistral-7B-v0.1", device_map="auto", load_in_4bit=True ... ) ``` ### ์ƒ์„ฑ๋œ ์ถœ๋ ฅ์ด ๋„ˆ๋ฌด ์งง๊ฑฐ๋‚˜ ๊ธธ๋‹ค [[generated-output-is-too-shortlong]] [`~generation.GenerationConfig`] ํŒŒ์ผ์—์„œ ๋ณ„๋„๋กœ ์ง€์ •ํ•˜์ง€ ์•Š์œผ๋ฉด, `generate`๋Š” ๊ธฐ๋ณธ์ ์œผ๋กœ ์ตœ๋Œ€ 20๊ฐœ์˜ ํ† ํฐ์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. `generate` ํ˜ธ์ถœ์—์„œ `max_new_tokens`์„ ์ˆ˜๋™์œผ๋กœ ์„ค์ •ํ•˜์—ฌ ๋ฐ˜ํ™˜ํ•  ์ˆ˜ ์žˆ๋Š” ์ƒˆ ํ† ํฐ์˜ ์ตœ๋Œ€ ์ˆ˜๋ฅผ ์„ค์ •ํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. LLM(์ •ํ™•ํ•˜๊ฒŒ๋Š” [๋””์ฝ”๋” ์ „์šฉ ๋ชจ๋ธ](https://huggingface.co/learn/nlp-course/chapter1/6?fw=pt))์€ ์ž…๋ ฅ ํ”„๋กฌํ”„ํŠธ๋„ ์ถœ๋ ฅ์˜ ์ผ๋ถ€๋กœ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ```py >>> model_inputs = tokenizer(["A sequence of numbers: 1, 2"], return_tensors="pt").to("cuda") >>> # By default, the output will contain up to 20 tokens >>> generated_ids = model.generate(**model_inputs, pad_token_id=tokenizer.eos_token_id) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] 'A sequence of numbers: 1, 2, 3, 4, 5' >>> # Setting `max_new_tokens` allows you to control the maximum length >>> generated_ids = model.generate(**model_inputs, pad_token_id=tokenizer.eos_token_id, max_new_tokens=50) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] 'A sequence of numbers: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,' ``` ### ์ž˜๋ชป๋œ ์ƒ์„ฑ ๋ชจ๋“œ [[incorrect-generation-mode]] ๊ธฐ๋ณธ์ ์œผ๋กœ [`~generation.GenerationConfig`] ํŒŒ์ผ์—์„œ ๋ณ„๋„๋กœ ์ง€์ •ํ•˜์ง€ ์•Š์œผ๋ฉด, `generate`๋Š” ๊ฐ ๋ฐ˜๋ณต์—์„œ ๊ฐ€์žฅ ํ™•๋ฅ ์ด ๋†’์€ ํ† ํฐ์„ ์„ ํƒํ•ฉ๋‹ˆ๋‹ค(๊ทธ๋ฆฌ๋”” ๋””์ฝ”๋”ฉ). ํ•˜๋ ค๋Š” ์ž‘์—…์— ๋”ฐ๋ผ ์ด ๋ฐฉ๋ฒ•์€ ๋ฐ”๋žŒ์งํ•˜์ง€ ์•Š์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
์˜ˆ๋ฅผ ๋“ค์–ด, ์ฑ—๋ด‡์ด๋‚˜ ์—์„ธ์ด ์ž‘์„ฑ๊ณผ ๊ฐ™์€ ์ฐฝ์˜์ ์ธ ์ž‘์—…์—๋Š” ์ƒ˜ํ”Œ๋ง์ด ์ ํ•ฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฐ˜๋ฉด, ์˜ค๋””์˜ค๋ฅผ ํ…์ŠคํŠธ๋กœ ๋ณ€ํ™˜ํ•˜๊ฑฐ๋‚˜ ๋ฒˆ์—ญ๊ณผ ๊ฐ™์€ ์ž…๋ ฅ ๊ธฐ๋ฐ˜ ์ž‘์—…์—๋Š” ๊ทธ๋ฆฌ๋”” ๋””์ฝ”๋”ฉ์ด ๋” ์ ํ•ฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `do_sample=True`๋กœ ์ƒ˜ํ”Œ๋ง์„ ํ™œ์„ฑํ™”ํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ, ์ด ์ฃผ์ œ์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ ์ด [๋ธ”๋กœ๊ทธ ํฌ์ŠคํŠธ](https://huggingface.co/blog/how-to-generate)์—์„œ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

```python
>>> # Set seed for reproducibility -- you don't need this unless you want full reproducibility
>>> from transformers import set_seed

>>> set_seed(0)

>>> model_inputs = tokenizer(["I am a cat."], return_tensors="pt").to("cuda")

>>> # LLM + greedy decoding = repetitive, boring output
>>> generated_ids = model.generate(**model_inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'I am a cat. I am a cat. I am a cat. I am a cat'

>>> # With sampling, the output becomes more creative!
>>> generated_ids = model.generate(**model_inputs, do_sample=True)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
'I am a cat.\nI just need to be. I am always.\nEvery time'
```

### ์ž˜๋ชป๋œ ํŒจ๋”ฉ [[wrong-padding-side]]

LLM์€ [๋””์ฝ”๋” ์ „์šฉ](https://huggingface.co/learn/nlp-course/chapter1/6?fw=pt) ๊ตฌ์กฐ๋ฅผ ๊ฐ€์ง€๊ณ  ์žˆ์–ด, ์ž…๋ ฅ ํ”„๋กฌํ”„ํŠธ์— ๋Œ€ํ•ด ์ง€์†์ ์œผ๋กœ ๋ฐ˜๋ณต ์ฒ˜๋ฆฌ๋ฅผ ํ•ฉ๋‹ˆ๋‹ค. ์ž…๋ ฅ ๋ฐ์ดํ„ฐ์˜ ๊ธธ์ด๊ฐ€ ์„œ๋กœ ๋‹ค๋ฅด๋ฉด ํŒจ๋”ฉ ์ž‘์—…์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. LLM์€ ํŒจ๋”ฉ ํ† ํฐ์— ์ด์–ด์„œ ์ƒ์„ฑ์„ ๊ณ„์†ํ•˜๋„๋ก ํ›ˆ๋ จ๋˜์ง€ ์•Š์•˜๊ธฐ ๋•Œ๋ฌธ์—, ์ž…๋ ฅ ์™ผ์ชฝ์— ํŒจ๋”ฉ์ด ์ถ”๊ฐ€๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ์–ดํ…์…˜ ๋งˆ์Šคํฌ๋„ ๊ผญ `generate` ํ•จ์ˆ˜์— ์ „๋‹ฌ๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค!

```python
>>> # The tokenizer initialized above has right-padding active by default: the 1st sequence,
>>> # which is shorter, has padding on the right side. Generation fails.
>>> model_inputs = tokenizer(
...     ["1, 2, 3", "A, B, C, D, E"], padding=True, return_tensors="pt"
...
).to("cuda") >>> generated_ids = model.generate(**model_inputs) >>> tokenizer.batch_decode(generated_ids[0], skip_special_tokens=True)[0] '' >>> # With left-padding, it works as expected! >>> tokenizer = AutoTokenizer.from_pretrained("openlm-research/open_llama_7b", padding_side="left") >>> tokenizer.pad_token = tokenizer.eos_token # Llama has no pad token by default >>> model_inputs = tokenizer( ... ["1, 2, 3", "A, B, C, D, E"], padding=True, return_tensors="pt" ... ).to("cuda") >>> generated_ids = model.generate(**model_inputs) >>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] '1, 2, 3, 4, 5, 6,' ``` <!-- TODO: when the prompting guide is ready, mention the importance of setting the right prompt in this section --> ## ์ถ”๊ฐ€ ์ž๋ฃŒ [[further-resources]] ์ž๊ธฐํšŒ๊ท€ ์ƒ์„ฑ ํ”„๋กœ์„ธ์Šค๋Š” ์ƒ๋Œ€์ ์œผ๋กœ ๋‹จ์ˆœํ•œ ํŽธ์ด์ง€๋งŒ, LLM์„ ์ตœ๋Œ€ํ•œ ํ™œ์šฉํ•˜๋ ค๋ฉด ์—ฌ๋Ÿฌ ๊ฐ€์ง€ ์š”์†Œ๋ฅผ ๊ณ ๋ คํ•ด์•ผ ํ•˜๋ฏ€๋กœ ์‰ฝ์ง€ ์•Š์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. LLM์— ๋Œ€ํ•œ ๋” ๊นŠ์€ ์ดํ•ด์™€ ํ™œ์šฉ์„ ์œ„ํ•œ ๋‹ค์Œ ๋‹จ๊ณ„๋Š” ์•„๋ž˜์™€ ๊ฐ™์Šต๋‹ˆ๋‹ค: <!-- TODO: complete with new guides --> ### ๊ณ ๊ธ‰ ์ƒ์„ฑ ์‚ฌ์šฉ [[advanced-generate-usage]] 1. [๊ฐ€์ด๋“œ](generation_strategies)๋Š” ๋‹ค์–‘ํ•œ ์ƒ์„ฑ ๋ฐฉ๋ฒ•์„ ์ œ์–ดํ•˜๋Š” ๋ฐฉ๋ฒ•, ์ƒ์„ฑ ์„ค์ • ํŒŒ์ผ์„ ์„ค์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•, ์ถœ๋ ฅ์„ ์ŠคํŠธ๋ฆฌ๋ฐํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. 2. [`~generation.GenerationConfig`]์™€ [`~generation.GenerationMixin.generate`], [generate-related classes](internal/generation_utils)๋ฅผ ์ฐธ์กฐํ•ด๋ณด์„ธ์š”. ### LLM ๋ฆฌ๋”๋ณด๋“œ [[llm-leaderboards]] 1. [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)๋Š” ์˜คํ”ˆ ์†Œ์Šค ๋ชจ๋ธ์˜ ํ’ˆ์งˆ์— ์ค‘์ ์„ ๋‘ก๋‹ˆ๋‹ค. 2. [Open LLM-Perf Leaderboard](https://huggingface.co/spaces/optimum/llm-perf-leaderboard)๋Š” LLM ์ฒ˜๋ฆฌ๋Ÿ‰์— ์ค‘์ ์„ ๋‘ก๋‹ˆ๋‹ค. ### ์ง€์—ฐ ์‹œ๊ฐ„ ๋ฐ ์ฒ˜๋ฆฌ๋Ÿ‰ [[latency-and-throughput]] 1. 
๋ฉ”๋ชจ๋ฆฌ ์š”๊ตฌ ์‚ฌํ•ญ์„ ์ค„์ด๋ ค๋ฉด, ๋™์  ์–‘์žํ™”์— ๋Œ€ํ•œ [๊ฐ€์ด๋“œ](main_classes/quantization)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ### ๊ด€๋ จ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ [[related-libraries]] 1. [`text-generation-inference`](https://github.com/huggingface/text-generation-inference)๋Š” LLM์„ ์œ„ํ•œ ์‹ค์ œ ์šด์˜ ํ™˜๊ฒฝ์— ์ ํ•ฉํ•œ ์„œ๋ฒ„์ž…๋‹ˆ๋‹ค. 2. [`optimum`](https://github.com/huggingface/optimum)์€ ํŠน์ • ํ•˜๋“œ์›จ์–ด ์žฅ์น˜์—์„œ LLM์„ ์ตœ์ ํ™”ํ•˜๊ธฐ ์œ„ํ•ด ๐Ÿค— Transformers๋ฅผ ํ™•์žฅํ•œ ๊ฒƒ์ž…๋‹ˆ๋‹ค.
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # ๊ณ ์ • ๊ธธ์ด ๋ชจ๋ธ์˜ ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ(Perplexity)[[perplexity-of-fixedlength-models]] [[open-in-colab]] ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ(Perplexity, PPL)๋Š” ๊ฐ€์žฅ ์ผ๋ฐ˜์ ์ธ ์–ธ์–ด ๋ชจ๋ธ ํ‰๊ฐ€์ง€ํ‘œ ์ค‘ ํ•˜๋‚˜์ž…๋‹ˆ๋‹ค. ์ž์„ธํžˆ ์•Œ์•„๋ณด๊ธฐ ์ „์— ์ด ํ‰๊ฐ€์ง€ํ‘œ๋Š” ๊ณ ์ „์ ์ธ ์–ธ์–ด ๋ชจ๋ธ(์ž๊ธฐํšŒ๊ท€ ๋˜๋Š” ์ธ๊ณผ์  ์–ธ์–ด ๋ชจ๋ธ์ด๋ผ๊ณ ๋„ ํ•จ)์—๋งŒ ์ ์šฉ๋˜๋ฉฐ BERT์™€ ๊ฐ™์€ ๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ์—๋Š” ์ž˜ ์ ์šฉํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค (BERT๋Š” [summary of the models](../en/model_summary) ๋ฌธ์„œ๋ฅผ ์ฐธ๊ณ ํ•˜์„ธ์š”). ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ๋Š” ์‹œํ€€์Šค์˜ ์Œ์˜ ๋กœ๊ทธ ์šฐ๋„(negative log-likelihood, NLL) ๊ฐ’์˜ ํ‰๊ท ์— ์ง€์ˆ˜(exponentiate)๋ฅผ ์ทจํ•œ ๊ฐ’์œผ๋กœ ์ •์˜๋ฉ๋‹ˆ๋‹ค. ํ† ํฐํ™”๋œ ์‹œํ€€์Šค \\(X = (x_0, x_1, \dots, x_t)\\) ๊ฐ€ ์žˆ์„ ๋•Œ, \\(X\\) ์˜ ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ๋Š” ์•„๋ž˜ ์ˆ˜์‹๊ณผ ๊ฐ™์ด ๊ตฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. $$\text{PPL}(X) = \exp \left\{ {-\frac{1}{t}\sum_i^t \log p_\theta (x_i|x_{<i}) } \right\}$$ \\(\log p_\theta (x_i|x_{<i})\\) ๋Š” ๋ชจ๋ธ์— i๋ฒˆ์งธ ์ด์ „๊นŒ์ง€ ํ† ํฐ์ด ์ฃผ์–ด์กŒ์„ ๋•Œ i๋ฒˆ์งธ ํ† ํฐ์˜ ๋กœ๊ทธ ์šฐ๋„๊ฐ’์ž…๋‹ˆ๋‹ค. ์ง๊ด€์ ์œผ๋กœ ๋ง๋ญ‰์น˜์—์„œ ์ง€์ •๋œ ํ† ํฐ ์ง‘ํ•ฉ์„ ๊ท ์ผํ•˜๊ฒŒ ์˜ˆ์ธกํ•˜๋Š” ๋ชจ๋ธ์˜ ๋Šฅ๋ ฅ์— ๋Œ€ํ•œ ํ‰๊ฐ€๋กœ ์ƒ๊ฐํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ค‘์š”ํ•œ ์ ์€ ํ† ํฐํ™” ๊ณผ์ •์ด ๋ชจ๋ธ์˜ ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ์— ์ง์ ‘์ ์ธ ์˜ํ–ฅ์„ ๋ฏธ์น˜๋ฏ€๋กœ ์„œ๋กœ ๋‹ค๋ฅธ ๋ชจ๋ธ์„ ๋น„๊ตํ•  ๋•Œ ํ•ญ์ƒ ์ด๋ฅผ ๊ณ ๋ คํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 
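์œ„ ์ •์˜๋ฅผ ๊ทธ๋Œ€๋กœ ์ฝ”๋“œ๋กœ ์˜ฎ๊ธฐ๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค. ํ† ํฐ๋ณ„ ๋กœ๊ทธ ์šฐ๋„ ๊ฐ’์€ ์„ค๋ช…์„ ์œ„ํ•ด ์ž„์˜๋กœ ๋งŒ๋“  ์˜ˆ์‹œ์ž…๋‹ˆ๋‹ค:

```python
import math

def perplexity(log_probs):
    """ํ† ํฐ๋ณ„ ๋กœ๊ทธ ์šฐ๋„ log p(x_i | x_<i) ๋ชฉ๋ก์œผ๋กœ PPL = exp(ํ‰๊ท  NLL)์„ ๊ณ„์‚ฐํ•ฉ๋‹ˆ๋‹ค."""
    nll = -sum(log_probs) / len(log_probs)  # ํ‰๊ท  ์Œ์˜ ๋กœ๊ทธ ์šฐ๋„
    return math.exp(nll)

# ๋ชจ๋“  ํ† ํฐ์„ ํ™•๋ฅ  1/4๋กœ ์˜ˆ์ธกํ•˜๋Š” ๋ชจ๋ธ์˜ PPL์€ ์ •ํ™•ํžˆ 4์ž…๋‹ˆ๋‹ค.
log_probs = [math.log(0.25)] * 10
print(round(perplexity(log_probs), 6))  # 4.0
```

์ด ์˜ˆ์‹œ์ฒ˜๋Ÿผ, ๋ชจ๋ธ์ด ๋งค ๋‹จ๊ณ„ \\(k\\)๊ฐœ์˜ ํ›„๋ณด๋ฅผ ๊ท ๋“ฑํ•˜๊ฒŒ ์˜ˆ์ธกํ•œ๋‹ค๋ฉด PPL์€ \\(k\\)๊ฐ€ ๋˜๋ฏ€๋กœ, PPL์€ "๋ชจ๋ธ์ด ๋งค ๋‹จ๊ณ„ ๋ช‡ ๊ฐœ์˜ ํ† ํฐ ์‚ฌ์ด์—์„œ ๋ง์„ค์ด๋Š”๊ฐ€"๋กœ ์ฝ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.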
์ด๋Š” ๋ฐ์ดํ„ฐ์™€ ๋ชจ๋ธ ์˜ˆ์ธก ๊ฐ„์˜ cross-entropy ๊ฐ’์— ์ง€์ˆ˜๋ฅผ ์ทจํ•œ ๊ฒƒ๊ณผ ๋™์ผํ•ฉ๋‹ˆ๋‹ค. ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ์™€ ๋ฌธ์ž๋‹น ๋น„ํŠธ ์ˆ˜(BPC) ๋ฐ ๋ฐ์ดํ„ฐ ์••์ถ•๊ณผ์˜ ๊ด€๊ณ„์— ๋Œ€ํ•ด ๋” ์ง๊ด€์ ์ธ ์ดํ•ด๋ฅผ ์›ํ•˜์‹ ๋‹ค๋ฉด ๋‹ค์Œ ๊ธ€ [fantastic blog post on The Gradient](https://thegradient.pub/understanding-evaluation-metrics-for-language-models/)์„ ํ™•์ธํ•˜์„ธ์š”. ## ๊ณ ์ • ๊ธธ์ด ๋ชจ๋ธ์˜ ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ(PPL) ๊ณ„์‚ฐํ•˜๊ธฐ[[calculating-ppl-with-fixedlength-models]] ๋ชจ๋ธ์˜ ์ปจํ…์ŠคํŠธ ํฌ๊ธฐ๊ฐ€ ์ •ํ•ด์ ธ์žˆ์ง€ ์•Š๋‹ค๋ฉด, ์•„๋ž˜์™€ ๊ฐ™์ด ์‹œํ€€์Šค๋ฅผ ์ž๋™ ํšŒ๊ท€์ ์œผ๋กœ ๋ถ„ํ•ดํ•˜๊ณ  ๊ฐ ๋‹จ๊ณ„์—์„œ ์„ ํ–‰ ํ•˜๋Š” ์ „์ฒด ์‹œํ€€์Šค๋ฅผ ์กฐ๊ฑด๋ถ€ ํ™•๋ฅ ์— ๋„ฃ์–ด ๋ชจ๋ธ์˜ ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ๋ฅผ ๊ณ„์‚ฐํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. <img width="600" alt="Full decomposition of a sequence with unlimited context length" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/ppl_full.gif"/> ๊ทธ๋Ÿฌ๋‚˜ ๋ชจ๋ธ์˜ ๊ทผ์‚ฌ์น˜๋ฅผ ๊ตฌํ•  ๋•Œ๋Š” ์ผ๋ฐ˜์ ์œผ๋กœ ๋ชจ๋ธ์ด ์ฒ˜๋ฆฌํ•  ์ˆ˜ ์žˆ๋Š” ํ† ํฐ ์ˆ˜์— ์ œํ•œ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ๊ฐ€์žฅ ํฐ ๋ฒ„์ „์˜ [GPT-2](model_doc/gpt2)๋Š” ํ† ํฐ์˜ ๊ธธ์ด๊ฐ€ 1024๋กœ ๊ณ ์ •๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ \\(t\\) ๊ฐ€ 1024๋ณด๋‹ค ํฐ ๊ฒฝ์šฐ์— \\(p_\theta(x_t|x_{<t})\\) ์„ ๊ณ„์‚ฐํ•  ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค. ๋Œ€์‹  ์‹œํ€€์Šค๋Š” ์ผ๋ฐ˜์ ์œผ๋กœ ๋ชจ๋ธ์˜ ์ตœ๋Œ€ ์ž…๋ ฅ ํฌ๊ธฐ์™€ ๋™์ผํ•œ ๊ธธ์ด๋Š” ๊ฐ€์ง€๋Š” ๋ถ€๋ถ„ ์‹œํ€€์Šค๋กœ ์ชผ๊ฐญ๋‹ˆ๋‹ค. ๋งŒ์•ฝ ๋ชจ๋ธ์˜ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด๊ฐ€ \\(k\\) ๋ผ๋ฉด, ํ† ํฐ \\(x_t\\) ์˜ ์šฐ๋„ ๊ฐ’์„ ๊ณ„์‚ฐํ•  ๋•Œ ์ด์ „ ํ† ํฐ์„ ๋ชจ๋‘ ์‚ฌ์šฉํ•˜์ง€ ์•Š๊ณ , \\(k-1\\) ํ† ํฐ๊นŒ์ง€ ์‚ฌ์šฉํ•ด ๋Œ€๋žต์ ์ธ ์šฐ๋„ ๊ฐ’์„ ์ถ”์ •ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์˜ ์‹œํ€€์Šค์— ๋Œ€ํ•œ ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ๋ฅผ ๊ณ„์‚ฐํ•  ๋•Œ, ์ˆ˜์›”ํ•˜์ง€๋งŒ ์ฐจ์„ ์ฑ…์€ ์‹œํ€€์Šค๋ฅผ ์ฒญํฌ๋กœ ์ชผ๊ฐœ๊ณ  ๋ถ„ํ•ด๋œ ๊ฐ ๋ถ€๋ถ„์˜ ๋กœ๊ทธ ์šฐ๋„ ๊ฐ’์„ ๋…๋ฆฝ์ ์œผ๋กœ ํ•ฉ์‚ฐํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. 
<img width="600" alt="Suboptimal PPL not taking advantage of full available context" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/ppl_chunked.gif"/> ์ด ๋ฐฉ๋ฒ•์€ ๊ฐ ๋ถ€๋ถ„์˜ ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ๋ฅผ ํ•œ ๋ฒˆ์˜ ํฌ์›Œ๋“œ ํŒจ์Šค๋กœ ๊ณ„์‚ฐํ•  ์ˆ˜ ์žˆ์–ด ๋น ๋ฅด์ง€๋งŒ ์ผ๋ฐ˜์ ์œผ๋กœ ๋” ๋†’์€(๋” ๋‚˜์œ) PPL์„ ์‚ฐ์ถœํ•ฉ๋‹ˆ๋‹ค. ์™œ๋ƒํ•˜๋ฉด ๋Œ€๋ถ€๋ถ„์˜ ์˜ˆ์ธก ๋‹จ๊ณ„์—์„œ ๋ชจ๋ธ์˜ ์ปจํ…์ŠคํŠธ๊ฐ€ ์ ๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ๋Œ€์‹ , ๊ณ ์ • ๊ธธ์ด ๋ชจ๋ธ์˜ PPL์€ ์Šฌ๋ผ์ด๋”ฉ ์œˆ๋„์šฐ ์ „๋žต์œผ๋กœ ํ‰๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด ์ „๋žต์—๋Š” ์ปจํ…์ŠคํŠธ ์œˆ๋„์šฐ์„ ๋ฐ˜๋ณต์ ์œผ๋กœ ์Šฌ๋ผ์ด๋”ฉํ•ด ๋ชจ๋ธ์ด ๊ฐ ์˜ˆ์ธก์„ ์ˆ˜ํ–‰ํ•  ๋•Œ ๋” ๋งŽ์€ ์ปจํ…์ŠคํŠธ๋ฅผ ๊ฐ–๋„๋ก ํ•˜๋Š” ์ž‘์—…์ด ํฌํ•จ๋ฉ๋‹ˆ๋‹ค. <img width="600" alt="Sliding window PPL taking advantage of all available context" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/ppl_sliding.gif"/> ์ด๋Š” ์‹œํ€€์Šค ํ™•๋ฅ ์˜ ์‹ค์ œ ๋ถ„ํ•ด์— ๋” ๊ฐ€๊นŒ์šด ๊ทผ์‚ฌ์น˜์ด๋ฉฐ ์ผ๋ฐ˜์ ์œผ๋กœ ๋” ์œ ๋ฆฌํ•œ ์ ์ˆ˜๋ฅผ ์‚ฐ์ถœํ•ฉ๋‹ˆ๋‹ค. ๋‹จ์ ์€ ๋ง๋ญ‰์น˜์˜ ๊ฐ ํ† ํฐ์— ๋Œ€ํ•ด ๋ณ„๋„์˜ ํฌ์›Œ๋“œ ํŒจ์Šค๊ฐ€ ํ•„์š”ํ•˜๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ํ˜„์‹ค์ ์œผ๋กœ ์ข‹์€ ์ ˆ์ถฉ์•ˆ์€ ํ•œ ๋ฒˆ์— ํ•œ ํ† ํฐ์”ฉ ์Šฌ๋ผ์ด๋”ฉํ•˜๋Š” ๊ฒƒ์ด ์•„๋‹ˆ๋ผ ๋” ํฐ ๊ฐ„๊ฒฉ์œผ๋กœ ์ปจํ…์ŠคํŠธ๋ฅผ ์ด๋™ํ•˜๋Š” ์ŠคํŠธ๋ผ์ด๋“œ๊ฐ€ ์ ์šฉ๋œ ์Šฌ๋ผ์ด๋”ฉ ์œˆ๋„์šฐ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ๊ณ„์‚ฐ์„ ํ›จ์”ฌ ๋” ๋น ๋ฅด๊ฒŒ ์ง„ํ–‰ํ•˜๋ฉด์„œ๋„ ๋ชจ๋ธ์— ๊ฐ ๋‹จ๊ณ„์—์„œ ์˜ˆ์ธก์„ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ๋Š” ๊ธด ์ปจํ…์ŠคํŠธ๋ฅผ ์ œ๊ณตํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ## ์˜ˆ์ œ: ๐Ÿค— Transformers์—์„œ GPT-2๋กœ ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ(perplexity) ๊ณ„์‚ฐํ•˜๊ธฐ[[example-calculating-perplexity-with-gpt2-in-transformers]] ์ด์ œ GPT-2๋กœ ์œ„์˜ ๊ณผ์ •์„ ์‹œ์—ฐํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. 
```python from transformers import GPT2LMHeadModel, GPT2TokenizerFast device = "cuda" model_id = "gpt2-large" model = GPT2LMHeadModel.from_pretrained(model_id).to(device) tokenizer = GPT2TokenizerFast.from_pretrained(model_id) ``` WikiText-2 ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๊ฐ€์ ธ์˜ค๊ณ  ๋ช‡ ๊ฐ€์ง€ ์Šฌ๋ผ์ด๋”ฉ ์œˆ๋„์šฐ ์ „๋žต์„ ์‚ฌ์šฉํ•ด ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ๋ฅผ ๊ณ„์‚ฐํ•ด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ์ด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋Š” ํฌ๊ธฐ๊ฐ€ ์ž‘๊ณ  ํฌ์›Œ๋“œ ํŒจ์Šค ํ•œ ๋ฒˆ๋งŒ ์ˆ˜ํ–‰ํ•˜๊ธฐ ๋•Œ๋ฌธ์— ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋ฉ”๋ชจ๋ฆฌ์— ๊ฐ€์ ธ์˜ค๊ณ  ์ธ์ฝ”๋”ฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```python from datasets import load_dataset test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test") encodings = tokenizer("\n\n".join(test["text"]), return_tensors="pt") ``` ๐Ÿค— Transformers๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ๋ชจ๋ธ์˜ `labels`๋กœ `input_ids`๋ฅผ ์ „๋‹ฌํ•ด ๊ฐ ํ† ํฐ์— ๋Œ€ํ•œ ํ‰๊ท  ์Œ์˜ ์šฐ๋„ ๊ฐ’์„ ์†์‹ค๋กœ ๋ฐ˜ํ™˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์Šฌ๋ผ์ด๋”ฉ ์œˆ๋„์šฐ ๋ฐฉ์‹์„ ์‚ฌ์šฉํ•˜๋ฉด ๊ฐ ๋ฐ˜๋ณต๋งˆ๋‹ค ๋ชจ๋ธ์— ์ „๋‹ฌํ•˜๋Š” ํ† ํฐ์ด ๊ฒน์นฉ๋‹ˆ๋‹ค. ์ปจํ…์ŠคํŠธ๋กœ ์ฒ˜๋ฆฌํ•˜๋Š” ํ† ํฐ์— ๋Œ€ํ•œ ๋กœ๊ทธ ์šฐ๋„ ๊ฐ’์ด ์†์‹ค์— ํฌํ•จ๋˜๋Š” ๊ฒƒ์„ ์›ํ•˜์ง€ ์•Š๊ธฐ ๋•Œ๋ฌธ์— ์ด๋Ÿฌํ•œ ํ† ํฐ์˜ `input_ids`๋ฅผ `-100`์œผ๋กœ ์„ค์ •ํ•˜์—ฌ ๋ฌด์‹œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ์€ ์ŠคํŠธ๋ผ์ด๋“œ(stride)๋ฅผ `512`๋กœ ์‚ฌ์šฉํ•œ ์˜ˆ์‹œ์ž…๋‹ˆ๋‹ค. ์ฆ‰, ๋ชจ๋ธ์ด ํ•œ ํ† ํฐ์˜ ์กฐ๊ฑด๋ถ€ ์šฐ๋„ ๊ฐ’์„ ๊ณ„์‚ฐํ•  ๋•Œ ์ปจํ…์ŠคํŠธ์— ์ตœ์†Œํ•œ 512๊ฐœ์˜ ํ† ํฐ์ด ํฌํ•จ๋˜์–ด์žˆ๋‹ค๋Š” ์˜๋ฏธ์ž…๋‹ˆ๋‹ค (ํ•ด๋‹น ํ† ํฐ ์•ž์— 512๊ฐœ์˜ ํ† ํฐ์ด ์žˆ๋Š” ๊ฒฝ์šฐ). 
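๋ณธ๊ฒฉ์ ์ธ ๊ณ„์‚ฐ์— ์•ž์„œ, ์Šฌ๋ผ์ด๋”ฉ ์œˆ๋„์šฐ๊ฐ€ ์‹œํ€€์Šค๋ฅผ ์–ด๋–ป๊ฒŒ ๋‚˜๋ˆ„๋Š”์ง€ ์ž‘์€ ๊ฐ€์ƒ์˜ ๊ฐ’(`seq_len=10`, `max_length=4`, `stride=2`)์œผ๋กœ ๋จผ์ € ํ™•์ธํ•ด ๋ณผ ์ˆ˜ ์žˆ๋Š” ๊ฐ„๋‹จํ•œ ์Šค์ผ€์น˜์ž…๋‹ˆ๋‹ค:

```python
# ์ž‘์€ ๊ฐ€์ƒ์˜ ๊ฐ’์œผ๋กœ ์œˆ๋„์šฐ ๋ฐฐ์น˜๋ฅผ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค.
seq_len, max_length, stride = 10, 4, 2

windows = []
prev_end_loc = 0
for begin_loc in range(0, seq_len, stride):
    end_loc = min(begin_loc + max_length, seq_len)
    # ์ด ์œˆ๋„์šฐ์—์„œ ์‹ค์ œ๋กœ ์†์‹ค์„ ๊ณ„์‚ฐํ•˜๋Š”(๋ ˆ์ด๋ธ”์ด -100์ด ์•„๋‹Œ) ํ† ํฐ ์ˆ˜
    trg_len = end_loc - prev_end_loc
    windows.append((begin_loc, end_loc, trg_len))
    prev_end_loc = end_loc
    if end_loc == seq_len:
        break

print(windows)  # [(0, 4, 4), (2, 6, 2), (4, 8, 2), (6, 10, 2)]
```

๊ฐ ์œˆ๋„์šฐ์˜ `trg_len`์„ ๋ชจ๋‘ ๋”ํ•˜๋ฉด `seq_len`๊ณผ ๊ฐ™์•„์ ธ, ๋ชจ๋“  ํ† ํฐ์ด ์ •ํ™•ํžˆ ํ•œ ๋ฒˆ์”ฉ ์†์‹ค ๊ณ„์‚ฐ์— ํฌํ•จ๋˜๋ฉด์„œ๋„ ๊ฐ ์œˆ๋„์šฐ ์•ž๋ถ€๋ถ„์˜ ํ† ํฐ์€ ์ปจํ…์ŠคํŠธ๋กœ๋งŒ ์“ฐ์ธ๋‹ค๋Š” ๊ฒƒ์„ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.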
```python
import torch
from tqdm import tqdm

max_length = model.config.n_positions
stride = 512
seq_len = encodings.input_ids.size(1)

nlls = []
prev_end_loc = 0
for begin_loc in tqdm(range(0, seq_len, stride)):
    end_loc = min(begin_loc + max_length, seq_len)
    trg_len = end_loc - prev_end_loc  # ๋งˆ์ง€๋ง‰ ๋ฃจํ”„์˜ ์ŠคํŠธ๋ผ์ด๋“œ ๊ฐ’๊ณผ ๋‹ค๋ฅผ ์ˆ˜ ์žˆ์Œ
    input_ids = encodings.input_ids[:, begin_loc:end_loc].to(device)
    target_ids = input_ids.clone()
    target_ids[:, :-trg_len] = -100

    with torch.no_grad():
        outputs = model(input_ids, labels=target_ids)

        # ์†์‹ค์€ ๋ชจ๋“  ์œ ํšจํ•œ ๋ ˆ์ด๋ธ”์— ๋Œ€ํ•œ ํ‰๊ท ๊ฐ’์„ ๊ตฌํ•˜๋Š” ๊ต์ฐจ ์—”ํŠธ๋กœํ”ผ(cross entropy)๋กœ ๊ณ„์‚ฐ๋ฉ๋‹ˆ๋‹ค.
        # ์ฐธ๊ณ : ๋ชจ๋ธ์€ ๋‚ด๋ถ€์ ์œผ๋กœ ๋ ˆ์ด๋ธ”์„ ์™ผ์ชฝ์œผ๋กœ 1๊ฐœ์”ฉ ๋ฐ€๊ธฐ ๋•Œ๋ฌธ์—, (trg_len - 1)๊ฐœ์˜ ๋ ˆ์ด๋ธ”์— ๋Œ€ํ•ด์„œ๋งŒ ์†์‹ค์„ ๊ณ„์‚ฐํ•ฉ๋‹ˆ๋‹ค.
        neg_log_likelihood = outputs.loss

    nlls.append(neg_log_likelihood)

    prev_end_loc = end_loc
    if end_loc == seq_len:
        break

ppl = torch.exp(torch.stack(nlls).mean())
```

์ŠคํŠธ๋ผ์ด๋“œ๋ฅผ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด์™€ ๋™์ผํ•˜๊ฒŒ ์„ค์ •ํ•˜๋ฉด ์œ„์—์„œ ์„ค๋ช…ํ•œ ์ฐจ์„ ์ฑ…์ธ ๋น„์Šฌ๋ผ์ด๋”ฉ ์œˆ๋„์šฐ ์ „๋žต๊ณผ ๋™์ผํ•ฉ๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์œผ๋กœ ์ŠคํŠธ๋ผ์ด๋“œ๊ฐ€ ์ž‘์„์ˆ˜๋ก ๋ชจ๋ธ์ด ๊ฐ ์˜ˆ์ธก์„ ํ•  ๋•Œ ๋” ๋งŽ์€ ์ปจํ…์ŠคํŠธ๋ฅผ ๋ณผ ์ˆ˜ ์žˆ๊ฒŒ ๋˜์–ด ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ ๊ฐ’์ด ์ข‹์•„์ง‘๋‹ˆ๋‹ค.

์œ„์˜ ๊ณ„์‚ฐ์„ ํ† ํฐ์ด ๊ฒน์น˜์ง€ ์•Š๋„๋ก `stride = 1024`๋กœ ์„ค์ •ํ•˜๋ฉด PPL์€ `19.44`๋กœ, GPT-2 ๋…ผ๋ฌธ์—์„œ ๋ณด๊ณ ๋œ `19.93`๊ณผ ๊ฑฐ์˜ ๋™์ผํ•ฉ๋‹ˆ๋‹ค. `stride = 512`๋กœ ์Šฌ๋ผ์ด๋”ฉ ์œˆ๋„์šฐ ์ „๋žต์„ ์‚ฌ์šฉํ•˜๋ฉด PPL์€ `16.45`๋กœ ๋–จ์–ด์ง‘๋‹ˆ๋‹ค. ์ด๋Š” ๋” ์ข‹์€ ์ ์ˆ˜์ผ ๋ฟ๋งŒ ์•„๋‹ˆ๋ผ ์‹œํ€€์Šค ํ™•๋ฅ ์˜ ์‹ค์ œ ์ž๋™ ํšŒ๊ท€ ๋ถ„ํ•ด์— ๋” ๊ฐ€๊นŒ์šด ๋ฐฉ์‹์œผ๋กœ ๊ณ„์‚ฐ๋ฉ๋‹ˆ๋‹ค.
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๐Ÿค— Tokenizers ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ํ† ํฌ๋‚˜์ด์ € ์‚ฌ์šฉํ•˜๊ธฐ[[use-tokenizers-from-tokenizers]] [`PreTrainedTokenizerFast`]๋Š” [๐Ÿค— Tokenizers](https://huggingface.co/docs/tokenizers) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์— ๊ธฐ๋ฐ˜ํ•ฉ๋‹ˆ๋‹ค. ๐Ÿค— Tokenizers ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ํ† ํฌ๋‚˜์ด์ €๋Š” ๐Ÿค— Transformers๋กœ ๋งค์šฐ ๊ฐ„๋‹จํ•˜๊ฒŒ ๋ถˆ๋Ÿฌ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ตฌ์ฒด์ ์ธ ๋‚ด์šฉ์— ๋“ค์–ด๊ฐ€๊ธฐ ์ „์—, ๋ช‡ ์ค„์˜ ์ฝ”๋“œ๋กœ ๋”๋ฏธ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๋งŒ๋“ค์–ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```python >>> from tokenizers import Tokenizer >>> from tokenizers.models import BPE >>> from tokenizers.trainers import BpeTrainer >>> from tokenizers.pre_tokenizers import Whitespace >>> tokenizer = Tokenizer(BPE(unk_token="[UNK]")) >>> trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"]) >>> tokenizer.pre_tokenizer = Whitespace() >>> files = [...] >>> tokenizer.train(files, trainer) ``` ์šฐ๋ฆฌ๊ฐ€ ์ •์˜ํ•œ ํŒŒ์ผ์„ ํ†ตํ•ด ์ด์ œ ํ•™์Šต๋œ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๊ฐ–๊ฒŒ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ด ๋Ÿฐํƒ€์ž„์—์„œ ๊ณ„์† ์‚ฌ์šฉํ•˜๊ฑฐ๋‚˜ JSON ํŒŒ์ผ๋กœ ์ €์žฅํ•˜์—ฌ ๋‚˜์ค‘์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
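ํŒŒ์ผ ์—†์ด ๋น ๋ฅด๊ฒŒ ์‹คํ—˜ํ•ด ๋ณด๊ณ  ์‹ถ๋‹ค๋ฉด, `train_from_iterator` ๋ฉ”์†Œ๋“œ๋กœ ๋ฌธ์ž์—ด ๋ฆฌ์ŠคํŠธ์—์„œ ์ง์ ‘ ํ•™์Šตํ•œ ๋’ค ๊ฒฐ๊ณผ๋ฅผ ๋ฐ”๋กœ ํ™•์ธํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜ ๋ง๋ญ‰์น˜๋Š” ์„ค๋ช…์„ ์œ„ํ•ด ์ž„์˜๋กœ ๋งŒ๋“  ๊ฐ€์ƒ์˜ ์˜ˆ์‹œ์ž…๋‹ˆ๋‹ค:

```python
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()
trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])

# ํŒŒ์ผ ๋Œ€์‹  ๋ฌธ์ž์—ด ๋ฐ˜๋ณต์ž(iterator)๋กœ ํ•™์Šตํ•ฉ๋‹ˆ๋‹ค.
corpus = ["Hello world", "hello tokenizers", "the world of tokenizers"]
tokenizer.train_from_iterator(corpus, trainer)

# ํ•™์Šต๋œ ํ† ํฌ๋‚˜์ด์ €๋กœ ๋ฌธ์žฅ์„ ์ธ์ฝ”๋”ฉํ•ฉ๋‹ˆ๋‹ค.
encoding = tokenizer.encode("Hello tokenizers")
print(encoding.tokens)
print(encoding.ids)
```

`encode`๊ฐ€ ๋ฐ˜ํ™˜ํ•˜๋Š” `Encoding` ๊ฐ์ฒด์—์„œ ํ† ํฐ ๋ฌธ์ž์—ด(`tokens`)๊ณผ ์ •์ˆ˜ ID(`ids`)๋ฅผ ๋ชจ๋‘ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.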
## ํ† ํฌ๋‚˜์ด์ € ๊ฐ์ฒด๋กœ๋ถ€ํ„ฐ ์ง์ ‘ ๋ถˆ๋Ÿฌ์˜ค๊ธฐ[[loading-directly-from-the-tokenizer-object]] ๐Ÿค— Transformers ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ ์ด ํ† ํฌ๋‚˜์ด์ € ๊ฐ์ฒด๋ฅผ ํ™œ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. [`PreTrainedTokenizerFast`] ํด๋ž˜์Šค๋Š” ์ธ์Šคํ„ด์Šคํ™”๋œ *ํ† ํฌ๋‚˜์ด์ €* ๊ฐ์ฒด๋ฅผ ์ธ์ˆ˜๋กœ ๋ฐ›์•„ ์‰ฝ๊ฒŒ ์ธ์Šคํ„ด์Šคํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```python >>> from transformers import PreTrainedTokenizerFast >>> fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=tokenizer) ``` ์ด์ œ `fast_tokenizer` ๊ฐ์ฒด๋Š” ๐Ÿค— Transformers ํ† ํฌ๋‚˜์ด์ €์—์„œ ๊ณต์œ ํ•˜๋Š” ๋ชจ๋“  ๋ฉ”์†Œ๋“œ์™€ ํ•จ๊ป˜ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ์ž์„ธํ•œ ๋‚ด์šฉ์€ [ํ† ํฌ๋‚˜์ด์ € ํŽ˜์ด์ง€](main_classes/tokenizer)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ## JSON ํŒŒ์ผ์—์„œ ๋ถˆ๋Ÿฌ์˜ค๊ธฐ[[loading-from-a-JSON-file]] <!--In order to load a tokenizer from a JSON file, let's first start by saving our tokenizer:--> JSON ํŒŒ์ผ์—์„œ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๋ถˆ๋Ÿฌ์˜ค๊ธฐ ์œ„ํ•ด, ๋จผ์ € ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์ €์žฅํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```python >>> tokenizer.save("tokenizer.json") ``` JSON ํŒŒ์ผ์„ ์ €์žฅํ•œ ๊ฒฝ๋กœ๋Š” `tokenizer_file` ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ [`PreTrainedTokenizerFast`] ์ดˆ๊ธฐํ™” ๋ฉ”์†Œ๋“œ์— ์ „๋‹ฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```python >>> from transformers import PreTrainedTokenizerFast >>> fast_tokenizer = PreTrainedTokenizerFast(tokenizer_file="tokenizer.json") ``` ์ด์ œ `fast_tokenizer` ๊ฐ์ฒด๋Š” ๐Ÿค— Transformers ํ† ํฌ๋‚˜์ด์ €์—์„œ ๊ณต์œ ํ•˜๋Š” ๋ชจ๋“  ๋ฉ”์†Œ๋“œ์™€ ํ•จ๊ป˜ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ์ž์„ธํ•œ ๋‚ด์šฉ์€ [ํ† ํฌ๋‚˜์ด์ € ํŽ˜์ด์ง€](main_classes/tokenizer)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”.
<!--- Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Amazon SageMaker์—์„œ ํ•™์Šต ์‹คํ–‰ํ•˜๊ธฐ[[run-training-on-amazon-sagemaker]] ๋ฌธ์„œ๊ฐ€ [hf.co/docs/sagemaker](https://huggingface.co/docs/sagemaker)๋กœ ์ด๋™๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ด ํŽ˜์ด์ง€๋Š” `transformers` 5.0 ์—์„œ ์‚ญ์ œ๋  ์˜ˆ์ •์ž…๋‹ˆ๋‹ค. ### ๋ชฉ์ฐจ[[table-of-content]] - [Train Hugging Face models on Amazon SageMaker with the SageMaker Python SDK](https://huggingface.co/docs/sagemaker/train) - [Deploy Hugging Face models to Amazon SageMaker with the SageMaker Python SDK](https://huggingface.co/docs/sagemaker/inference)
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๋‘˜๋Ÿฌ๋ณด๊ธฐ [[quick-tour]] [[open-in-colab]] ๐Ÿค— Transformers๋ฅผ ์‹œ์ž‘ํ•ด๋ณด์„ธ์š”! ๊ฐœ๋ฐœํ•ด๋ณธ ์ ์ด ์—†๋”๋ผ๋„ ์‰ฝ๊ฒŒ ์ฝ์„ ์ˆ˜ ์žˆ๋„๋ก ์“ฐ์ธ ์ด ๊ธ€์€ [`pipeline`](./main_classes/pipelines)์„ ์‚ฌ์šฉํ•˜์—ฌ ์ถ”๋ก ํ•˜๊ณ , ์‚ฌ์ „ํ•™์Šต๋œ ๋ชจ๋ธ๊ณผ ์ „์ฒ˜๋ฆฌ๊ธฐ๋ฅผ [AutoClass](./model_doc/auto)๋กœ ๋กœ๋“œํ•˜๊ณ , PyTorch ๋˜๋Š” TensorFlow๋กœ ๋ชจ๋ธ์„ ๋น ๋ฅด๊ฒŒ ํ•™์Šต์‹œํ‚ค๋Š” ๋ฐฉ๋ฒ•์„ ์†Œ๊ฐœํ•ด ๋“œ๋ฆด ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋ณธ ๊ฐ€์ด๋“œ์—์„œ ์†Œ๊ฐœ๋˜๋Š” ๊ฐœ๋…์„ (ํŠนํžˆ ์ดˆ๋ณด์ž์˜ ๊ด€์ ์œผ๋กœ) ๋” ์นœ์ ˆํ•˜๊ฒŒ ์ ‘ํ•˜๊ณ  ์‹ถ๋‹ค๋ฉด, ํŠœํ† ๋ฆฌ์–ผ์ด๋‚˜ [์ฝ”์Šค](https://huggingface.co/course/chapter1/1)๋ฅผ ์ฐธ์กฐํ•˜๊ธฐ๋ฅผ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ๋ชจ๋‘ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash !pip install transformers datasets ``` ๋˜ํ•œ ์„ ํ˜ธํ•˜๋Š” ๋จธ์‹  ๋Ÿฌ๋‹ ํ”„๋ ˆ์ž„์›Œํฌ๋ฅผ ์„ค์น˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: <frameworkcontent> <pt> ```bash pip install torch ``` </pt> <tf> ```bash pip install tensorflow ``` </tf> </frameworkcontent> ## ํŒŒ์ดํ”„๋ผ์ธ [[pipeline]] <Youtube id="tiZFewofSLM"/> [`pipeline`](./main_classes/pipelines)์€ ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ๋กœ ์ถ”๋ก ํ•˜๊ธฐ์— ๊ฐ€์žฅ ์‰ฝ๊ณ  ๋น ๋ฅธ ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค. 
[`pipeline`]์€ ์—ฌ๋Ÿฌ ๋ชจ๋‹ฌ๋ฆฌํ‹ฐ์—์„œ ๋‹ค์–‘ํ•œ ๊ณผ์—…์„ ์‰ฝ๊ฒŒ ์ฒ˜๋ฆฌํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ, ์•„๋ž˜ ํ‘œ์— ํ‘œ์‹œ๋œ ๋ช‡ ๊ฐ€์ง€ ๊ณผ์—…์„ ๊ธฐ๋ณธ์ ์œผ๋กœ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค: <Tip> ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ ์ž‘์—…์˜ ์ „์ฒด ๋ชฉ๋ก์€ [Pipelines API ์ฐธ์กฐ](./main_classes/pipelines)๋ฅผ ํ™•์ธํ•˜์„ธ์š”. </Tip> | **ํƒœ์Šคํฌ** | **์„ค๋ช…** | **๋ชจ๋‹ฌ๋ฆฌํ‹ฐ** | **ํŒŒ์ดํ”„๋ผ์ธ ID** | |-----------------|----------------------------------------------------------------------|------------------|-----------------------------------------------| | ํ…์ŠคํŠธ ๋ถ„๋ฅ˜ | ํ…์ŠคํŠธ์— ์•Œ๋งž์€ ๋ ˆ์ด๋ธ” ๋ถ™์ด๊ธฐ | ์ž์—ฐ์–ด ์ฒ˜๋ฆฌ(NLP) | pipeline(task="sentiment-analysis") | | ํ…์ŠคํŠธ ์ƒ์„ฑ | ์ฃผ์–ด์ง„ ๋ฌธ์ž์—ด ์ž…๋ ฅ๊ณผ ์ด์–ด์ง€๋Š” ํ…์ŠคํŠธ ์ƒ์„ฑํ•˜๊ธฐ | ์ž์—ฐ์–ด ์ฒ˜๋ฆฌ(NLP) | pipeline(task="text-generation") | | ๊ฐœ์ฒด๋ช… ์ธ์‹ | ๋ฌธ์ž์—ด์˜ ๊ฐ ํ† ํฐ๋งˆ๋‹ค ์•Œ๋งž์€ ๋ ˆ์ด๋ธ” ๋ถ™์ด๊ธฐ (์ธ๋ฌผ, ์กฐ์ง, ์žฅ์†Œ ๋“ฑ๋“ฑ) | ์ž์—ฐ์–ด ์ฒ˜๋ฆฌ(NLP) | pipeline(task="ner") | | ์งˆ์˜์‘๋‹ต | ์ฃผ์–ด์ง„ ๋ฌธ๋งฅ๊ณผ ์งˆ๋ฌธ์— ๋”ฐ๋ผ ์˜ฌ๋ฐ”๋ฅธ ๋Œ€๋‹ตํ•˜๊ธฐ | ์ž์—ฐ์–ด ์ฒ˜๋ฆฌ(NLP) | pipeline(task="question-answering") | | ๋นˆ์นธ ์ฑ„์šฐ๊ธฐ | ๋ฌธ์ž์—ด์˜ ๋นˆ์นธ์— ์•Œ๋งž์€ ํ† ํฐ ๋งž์ถ”๊ธฐ | ์ž์—ฐ์–ด ์ฒ˜๋ฆฌ(NLP) | pipeline(task="fill-mask") | | ์š”์•ฝ | ํ…์ŠคํŠธ๋‚˜ ๋ฌธ์„œ๋ฅผ ์š”์•ฝํ•˜๊ธฐ | ์ž์—ฐ์–ด ์ฒ˜๋ฆฌ(NLP) | pipeline(task="summarization") | | ๋ฒˆ์—ญ | ํ…์ŠคํŠธ๋ฅผ ํ•œ ์–ธ์–ด์—์„œ ๋‹ค๋ฅธ ์–ธ์–ด๋กœ ๋ฒˆ์—ญํ•˜๊ธฐ | ์ž์—ฐ์–ด ์ฒ˜๋ฆฌ(NLP) | pipeline(task="translation") | | ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜ | ์ด๋ฏธ์ง€์— ์•Œ๋งž์€ ๋ ˆ์ด๋ธ” ๋ถ™์ด๊ธฐ | ์ปดํ“จํ„ฐ ๋น„์ „(CV) | pipeline(task="image-classification") | | ์ด๋ฏธ์ง€ ๋ถ„ํ•  | ์ด๋ฏธ์ง€์˜ ํ”ฝ์…€๋งˆ๋‹ค ๋ ˆ์ด๋ธ” ๋ถ™์ด๊ธฐ(์‹œ๋งจํ‹ฑ, ํŒŒ๋†‰ํ‹ฑ ๋ฐ ์ธ์Šคํ„ด์Šค ๋ถ„ํ•  ํฌํ•จ) | ์ปดํ“จํ„ฐ ๋น„์ „(CV) | pipeline(task="image-segmentation") | | ๊ฐ์ฒด ํƒ์ง€ | ์ด๋ฏธ์ง€ ์† ๊ฐ์ฒด์˜ ๊ฒฝ๊ณ„ ์ƒ์ž๋ฅผ ๊ทธ๋ฆฌ๊ณ  ํด๋ž˜์Šค๋ฅผ ์˜ˆ์ธกํ•˜๊ธฐ | ์ปดํ“จํ„ฐ ๋น„์ „(CV) | pipeline(task="object-detection") | | ์˜ค๋””์˜ค ๋ถ„๋ฅ˜ | ์˜ค๋””์˜ค ํŒŒ์ผ์— ์•Œ๋งž์€ ๋ ˆ์ด๋ธ” 
๋ถ™์ด๊ธฐ | ์˜ค๋””์˜ค | pipeline(task="audio-classification") | | ์ž๋™ ์Œ์„ฑ ์ธ์‹ | ์˜ค๋””์˜ค ํŒŒ์ผ ์† ์Œ์„ฑ์„ ํ…์ŠคํŠธ๋กœ ๋ฐ”๊พธ๊ธฐ | ์˜ค๋””์˜ค | pipeline(task="automatic-speech-recognition") | | ์‹œ๊ฐ ์งˆ์˜์‘๋‹ต | ์ฃผ์–ด์ง„ ์ด๋ฏธ์ง€์™€ ์งˆ๋ฌธ์— ๋Œ€ํ•ด ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ๋Œ€๋‹ตํ•˜๊ธฐ | ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ | pipeline(task="vqa") | | ๋ฌธ์„œ ์งˆ์˜์‘๋‹ต | ์ฃผ์–ด์ง„ ๋ฌธ์„œ์™€ ์งˆ๋ฌธ์— ๋Œ€ํ•ด ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ๋Œ€๋‹ตํ•˜๊ธฐ | ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ | pipeline(task="document-question-answering") | | ์ด๋ฏธ์ง€ ์บก์…˜ ๋‹ฌ๊ธฐ | ์ฃผ์–ด์ง„ ์ด๋ฏธ์ง€์˜ ์บก์…˜ ์ƒ์„ฑํ•˜๊ธฐ | ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ | pipeline(task="image-to-text") | ๋จผ์ € [`pipeline`]์˜ ์ธ์Šคํ„ด์Šค๋ฅผ ์ƒ์„ฑํ•˜๊ณ  ์‚ฌ์šฉํ•  ์ž‘์—…์„ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ๊ฐ์ • ๋ถ„์„์„ ์œ„ํ•ด [`pipeline`]์„ ์‚ฌ์šฉํ•˜๋Š” ์˜ˆ์ œ๋ฅผ ๋ณด์—ฌ๋“œ๋ฆฌ๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import pipeline >>> classifier = pipeline("sentiment-analysis") ``` [`pipeline`]์€ ๊ฐ์ • ๋ถ„์„์„ ์œ„ํ•œ [์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english)๊ณผ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์ž๋™์œผ๋กœ ๋‹ค์šด๋กœ๋“œํ•˜๊ณ  ์บ์‹œํ•ฉ๋‹ˆ๋‹ค. ์ด์ œ `classifier`๋ฅผ ๋Œ€์ƒ ํ…์ŠคํŠธ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> classifier("We are very happy to show you the ๐Ÿค— Transformers library.") [{'label': 'POSITIVE', 'score': 0.9998}] ``` ๋งŒ์•ฝ ์ž…๋ ฅ์ด ์—ฌ๋Ÿฌ ๊ฐœ ์žˆ๋Š” ๊ฒฝ์šฐ, ์ž…๋ ฅ์„ ๋ฆฌ์ŠคํŠธ๋กœ [`pipeline`]์— ์ „๋‹ฌํ•˜์—ฌ, ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์˜ ์ถœ๋ ฅ์„ ๋”•์…”๋„ˆ๋ฆฌ๋กœ ์ด๋ฃจ์–ด์ง„ ๋ฆฌ์ŠคํŠธ ํ˜•ํƒœ๋กœ ๋ฐ›์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> results = classifier(["We are very happy to show you the ๐Ÿค— Transformers library.", "We hope you don't hate it."]) >>> for result in results: ... print(f"label: {result['label']}, with score: {round(result['score'], 4)}") label: POSITIVE, with score: 0.9998 label: NEGATIVE, with score: 0.5309 ``` [`pipeline`]์€ ์ฃผ์–ด์ง„ ๊ณผ์—…์— ๊ด€๊ณ„์—†์ด ๋ฐ์ดํ„ฐ์…‹ ์ „๋ถ€๋ฅผ ์ˆœํšŒํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. 
์ด ์˜ˆ์ œ์—์„œ๋Š” ์ž๋™ ์Œ์„ฑ ์ธ์‹์„ ๊ณผ์—…์œผ๋กœ ์„ ํƒํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> import torch >>> from transformers import pipeline >>> speech_recognizer = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h") ``` ๋ฐ์ดํ„ฐ์…‹์„ ๋กœ๋“œํ•  ์ฐจ๋ก€์ž…๋‹ˆ๋‹ค. (์ž์„ธํ•œ ๋‚ด์šฉ์€ ๐Ÿค— Datasets [์‹œ์ž‘ํ•˜๊ธฐ](https://huggingface.co/docs/datasets/quickstart#audio)์„ ์ฐธ์กฐํ•˜์„ธ์š”) ์—ฌ๊ธฐ์—์„œ๋Š” [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) ๋ฐ์ดํ„ฐ์…‹์„ ๋กœ๋“œํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> from datasets import load_dataset, Audio >>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train") # doctest: +IGNORE_RESULT ``` ๋ฐ์ดํ„ฐ์…‹์˜ ์ƒ˜ํ”Œ๋ง ๋ ˆ์ดํŠธ๊ฐ€ ๊ธฐ์กด ๋ชจ๋ธ์ธ [`facebook/wav2vec2-base-960h`](https://huggingface.co/facebook/wav2vec2-base-960h)์˜ ํ›ˆ๋ จ ๋‹น์‹œ ์ƒ˜ํ”Œ๋ง ๋ ˆ์ดํŠธ์™€ ์ผ์น˜ํ•˜๋Š”์ง€ ํ™•์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```py >>> dataset = dataset.cast_column("audio", Audio(sampling_rate=speech_recognizer.feature_extractor.sampling_rate)) ``` `"audio"` ์—ด์„ ํ˜ธ์ถœํ•˜๋ฉด ์ž๋™์œผ๋กœ ์˜ค๋””์˜ค ํŒŒ์ผ์„ ๊ฐ€์ ธ์™€์„œ ๋ฆฌ์ƒ˜ํ”Œ๋งํ•ฉ๋‹ˆ๋‹ค. 
์ฒซ 4๊ฐœ ์ƒ˜ํ”Œ์—์„œ ์›์‹œ ์›จ์ด๋ธŒํผ ๋ฐฐ์—ด์„ ์ถ”์ถœํ•˜๊ณ  ํŒŒ์ดํ”„๋ผ์ธ์— ๋ฆฌ์ŠคํŠธ๋กœ ์ „๋‹ฌํ•˜์„ธ์š”: ```py >>> result = speech_recognizer(dataset[:4]["audio"]) >>> print([d["text"] for d in result]) ['I WOULD LIKE TO SET UP A JOINT ACCOUNT WITH MY PARTNER HOW DO I PROCEED WITH DOING THAT', "FONDERING HOW I'D SET UP A JOIN TO HELL T WITH MY WIFE AND WHERE THE AP MIGHT BE", "I I'D LIKE TOY SET UP A JOINT ACCOUNT WITH MY PARTNER I'M NOT SEEING THE OPTION TO DO IT ON THE APSO I CALLED IN TO GET SOME HELP CAN I JUST DO IT OVER THE PHONE WITH YOU AND GIVE YOU THE INFORMATION OR SHOULD I DO IT IN THE AP AN I'M MISSING SOMETHING UQUETTE HAD PREFERRED TO JUST DO IT OVER THE PHONE OF POSSIBLE THINGS", 'HOW DO I FURN A JOINA COUT'] ``` ์Œ์„ฑ์ด๋‚˜ ๋น„์ „๊ณผ ๊ฐ™์ด ์ž…๋ ฅ์ด ํฐ ๋Œ€๊ทœ๋ชจ ๋ฐ์ดํ„ฐ์…‹์˜ ๊ฒฝ์šฐ, ๋ชจ๋“  ์ž…๋ ฅ์„ ๋ฉ”๋ชจ๋ฆฌ์— ๋กœ๋“œํ•˜๋ ค๋ฉด ๋ฆฌ์ŠคํŠธ ๋Œ€์‹  ์ œ๋„ˆ๋ ˆ์ดํ„ฐ ํ˜•ํƒœ๋กœ ์ „๋‹ฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ž์„ธํ•œ ๋‚ด์šฉ์€ [Pipelines API ์ฐธ์กฐ](./main_classes/pipelines)๋ฅผ ํ™•์ธํ•˜์„ธ์š”. ### ํŒŒ์ดํ”„๋ผ์ธ์—์„œ ๋‹ค๋ฅธ ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ € ์‚ฌ์šฉํ•˜๊ธฐ [[use-another-model-and-tokenizer-in-the-pipeline]] [`pipeline`]์€ [Hub](https://huggingface.co/models)์˜ ๋ชจ๋“  ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๊ธฐ ๋•Œ๋ฌธ์—, [`pipeline`]์„ ๋‹ค๋ฅธ ์šฉ๋„์— ๋งž๊ฒŒ ์‰ฝ๊ฒŒ ์ˆ˜์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ํ”„๋ž‘์Šค์–ด ํ…์ŠคํŠธ๋ฅผ ์ฒ˜๋ฆฌํ•  ์ˆ˜ ์žˆ๋Š” ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด์„  Hub์˜ ํƒœ๊ทธ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ ์ ˆํ•œ ๋ชจ๋ธ์„ ํ•„ํ„ฐ๋งํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. 
ํ•„ํ„ฐ๋ง๋œ ๊ฒฐ๊ณผ์˜ ์ƒ์œ„ ํ•ญ๋ชฉ์œผ๋กœ๋Š” ํ”„๋ž‘์Šค์–ด ํ…์ŠคํŠธ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ๋‹ค๊ตญ์–ด [BERT ๋ชจ๋ธ](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment)์ด ๋ฐ˜ํ™˜๋ฉ๋‹ˆ๋‹ค: ```py >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" ``` <frameworkcontent> <pt> [`AutoModelForSequenceClassification`]๊ณผ [`AutoTokenizer`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ๊ณผ ๊ด€๋ จ๋œ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๋กœ๋“œํ•˜์„ธ์š” (๋‹ค์Œ ์„น์…˜์—์„œ [`AutoClass`]์— ๋Œ€ํ•ด ๋” ์ž์„ธํžˆ ์•Œ์•„๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค): ```py >>> from transformers import AutoTokenizer, AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained(model_name) >>> tokenizer = AutoTokenizer.from_pretrained(model_name) ``` </pt> <tf> [`TFAutoModelForSequenceClassification`]๊ณผ [`AutoTokenizer`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ๊ณผ ๊ด€๋ จ๋œ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๋กœ๋“œํ•˜์„ธ์š” (๋‹ค์Œ ์„น์…˜์—์„œ [`TFAutoClass`]์— ๋Œ€ํ•ด ๋” ์ž์„ธํžˆ ์•Œ์•„๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค): ```py >>> from transformers import AutoTokenizer, TFAutoModelForSequenceClassification >>> model = TFAutoModelForSequenceClassification.from_pretrained(model_name) >>> tokenizer = AutoTokenizer.from_pretrained(model_name) ``` </tf> </frameworkcontent> [`pipeline`]์—์„œ ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์ง€์ •ํ•˜๋ฉด, ์ด์ œ `classifier`๋ฅผ ํ”„๋ž‘์Šค์–ด ํ…์ŠคํŠธ์— ์ ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer) >>> classifier("Nous sommes trรจs heureux de vous prรฉsenter la bibliothรจque ๐Ÿค— Transformers.") [{'label': '5 stars', 'score': 0.7273}] ``` ๋งˆ๋•…ํ•œ ๋ชจ๋ธ์„ ์ฐพ์„ ์ˆ˜ ์—†๋Š” ๊ฒฝ์šฐ ๋ฐ์ดํ„ฐ๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ์กฐ์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ฏธ์„ธ์กฐ์ • ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [๋ฏธ์„ธ์กฐ์ • ํŠœํ† ๋ฆฌ์–ผ](./training)์„ ์ฐธ์กฐํ•˜์„ธ์š”. 
์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ์กฐ์ •ํ•œ ํ›„์—๋Š” ๋ชจ๋ธ์„ Hub์˜ ์ปค๋ฎค๋‹ˆํ‹ฐ์™€ ๊ณต์œ ํ•˜์—ฌ ๋จธ์‹ ๋Ÿฌ๋‹ ๋ฏผ์ฃผํ™”์— ๊ธฐ์—ฌํ•ด์ฃผ์„ธ์š”! ๐Ÿค— ## AutoClass [[autoclass]] <Youtube id="AhChOFRegn4"/> [`AutoModelForSequenceClassification`]๊ณผ [`AutoTokenizer`] ํด๋ž˜์Šค๋Š” ์œ„์—์„œ ๋‹ค๋ฃฌ [`pipeline`]์˜ ๊ธฐ๋Šฅ์„ ๊ตฌํ˜„ํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. [AutoClass](./model_doc/auto)๋Š” ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์˜ ์•„ํ‚คํ…์ฒ˜๋ฅผ ์ด๋ฆ„์ด๋‚˜ ๊ฒฝ๋กœ์—์„œ ์ž๋™์œผ๋กœ ๊ฐ€์ ธ์˜ค๋Š” '๋ฐ”๋กœ๊ฐ€๊ธฐ'์ž…๋‹ˆ๋‹ค. ๊ณผ์—…์— ์ ํ•ฉํ•œ `AutoClass`๋ฅผ ์„ ํƒํ•˜๊ณ  ํ•ด๋‹น ์ „์ฒ˜๋ฆฌ ํด๋ž˜์Šค๋ฅผ ์„ ํƒํ•˜๊ธฐ๋งŒ ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ์ด์ „ ์„น์…˜์˜ ์˜ˆ์ œ๋กœ ๋Œ์•„๊ฐ€์„œ [`pipeline`]์˜ ๊ฒฐ๊ณผ๋ฅผ `AutoClass`๋ฅผ ํ™œ์šฉํ•ด ๋ณต์ œํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ### AutoTokenizer [[autotokenizer]] ํ† ํฌ๋‚˜์ด์ €๋Š” ํ…์ŠคํŠธ๋ฅผ ๋ชจ๋ธ์˜ ์ž…๋ ฅ์œผ๋กœ ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด ์ˆซ์ž ๋ฐฐ์—ด ํ˜•ํƒœ๋กœ ์ „์ฒ˜๋ฆฌํ•˜๋Š” ์—ญํ• ์„ ๋‹ด๋‹นํ•ฉ๋‹ˆ๋‹ค. ํ† ํฐํ™” ๊ณผ์ •์—๋Š” ๋‹จ์–ด๋ฅผ ์–ด๋””์—์„œ ๋Š์„์ง€, ์–ด๋А ์ˆ˜์ค€๊นŒ์ง€ ๋‚˜๋ˆŒ์ง€์™€ ๊ฐ™์€ ์—ฌ๋Ÿฌ ๊ทœ์น™๋“ค์ด ์žˆ์Šต๋‹ˆ๋‹ค (ํ† ํฐํ™”์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [ํ† ํฌ๋‚˜์ด์ € ์š”์•ฝ](./tokenizer_summary)์„ ์ฐธ์กฐํ•˜์„ธ์š”). ๊ฐ€์žฅ ์ค‘์š”ํ•œ ์ ์€ ๋ชจ๋ธ์ด ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ๊ณผ ๋™์ผํ•œ ํ† ํฐํ™” ๊ทœ์น™์„ ์‚ฌ์šฉํ•˜๋„๋ก ๋™์ผํ•œ ๋ชจ๋ธ ์ด๋ฆ„์œผ๋กœ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์ธ์Šคํ„ด์Šคํ™”ํ•ด์•ผ ํ•œ๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. 
[`AutoTokenizer`]๋กœ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๋กœ๋“œํ•˜์„ธ์š”: ```py >>> from transformers import AutoTokenizer >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" >>> tokenizer = AutoTokenizer.from_pretrained(model_name) ``` ํ…์ŠคํŠธ๋ฅผ ํ† ํฌ๋‚˜์ด์ €์— ์ „๋‹ฌํ•˜์„ธ์š”: ```py >>> encoding = tokenizer("We are very happy to show you the ๐Ÿค— Transformers library.") >>> print(encoding) {'input_ids': [101, 11312, 10320, 12495, 19308, 10114, 11391, 10855, 10103, 100, 58263, 13299, 119, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} ``` ํ† ํฌ๋‚˜์ด์ €๋Š” ๋‹ค์Œ์„ ํฌํ•จํ•œ ๋”•์…”๋„ˆ๋ฆฌ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: * [input_ids](./glossary#input-ids): ํ† ํฐ์˜ ์ˆซ์ž ํ‘œํ˜„. * [attention_mask](.glossary#attention-mask): ์–ด๋–ค ํ† ํฐ์— ์ฃผ์˜๋ฅผ ๊ธฐ์šธ์—ฌ์•ผ ํ•˜๋Š”์ง€๋ฅผ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. ํ† ํฌ๋‚˜์ด์ €๋Š” ์ž…๋ ฅ์„ ๋ฆฌ์ŠคํŠธ ํ˜•ํƒœ๋กœ๋„ ๋ฐ›์„ ์ˆ˜ ์žˆ์œผ๋ฉฐ, ํ…์ŠคํŠธ๋ฅผ ํŒจ๋”ฉํ•˜๊ณ  ์ž˜๋ผ๋‚ด์–ด ์ผ์ •ํ•œ ๊ธธ์ด์˜ ๋ฌถ์Œ์„ ๋ฐ˜ํ™˜ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: <frameworkcontent> <pt> ```py >>> pt_batch = tokenizer( ... ["We are very happy to show you the ๐Ÿค— Transformers library.", "We hope you don't hate it."], ... padding=True, ... truncation=True, ... max_length=512, ... return_tensors="pt", ... ) ``` </pt> <tf> ```py >>> tf_batch = tokenizer( ... ["We are very happy to show you the ๐Ÿค— Transformers library.", "We hope you don't hate it."], ... padding=True, ... truncation=True, ... max_length=512, ... return_tensors="tf", ... ) ``` </tf> </frameworkcontent> <Tip> [์ „์ฒ˜๋ฆฌ](./preprocessing) ํŠœํ† ๋ฆฌ์–ผ์„ ์ฐธ์กฐํ•˜์‹œ๋ฉด ํ† ํฐํ™”์— ๋Œ€ํ•œ ์ž์„ธํ•œ ์„ค๋ช…๊ณผ ํ•จ๊ป˜ ์ด๋ฏธ์ง€, ์˜ค๋””์˜ค์™€ ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ ์ž…๋ ฅ์„ ์ „์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•œ [`AutoImageProcessor`]์™€ [`AutoFeatureExtractor`], [`AutoProcessor`]์˜ ์‚ฌ์šฉ๋ฐฉ๋ฒ•๋„ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
</Tip> ### AutoModel [[automodel]] <frameworkcontent> <pt> ๐Ÿค— Transformers๋Š” ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ์ธ์Šคํ„ด์Šค๋ฅผ ๊ฐ„๋‹จํ•˜๊ณ  ํ†ตํ•ฉ๋œ ๋ฐฉ๋ฒ•์œผ๋กœ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ฆ‰, [`AutoTokenizer`]์ฒ˜๋Ÿผ [`AutoModel`]์„ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์œ ์ผํ•œ ์ฐจ์ด์ ์€ ๊ณผ์—…์— ์•Œ๋งž์€ [`AutoModel`]์„ ์„ ํƒํ•ด์•ผ ํ•œ๋‹ค๋Š” ์ ์ž…๋‹ˆ๋‹ค. ํ…์ŠคํŠธ (๋˜๋Š” ์‹œํ€€์Šค) ๋ถ„๋ฅ˜์˜ ๊ฒฝ์šฐ [`AutoModelForSequenceClassification`]์„ ๋กœ๋“œํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForSequenceClassification >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" >>> pt_model = AutoModelForSequenceClassification.from_pretrained(model_name) ``` <Tip> [`AutoModel`] ํด๋ž˜์Šค์—์„œ ์ง€์›ํ•˜๋Š” ๊ณผ์—…์— ๋Œ€ํ•ด์„œ๋Š” [๊ณผ์—… ์š”์•ฝ](./task_summary)์„ ์ฐธ์กฐํ•˜์„ธ์š”. </Tip> ์ด์ œ ์ „์ฒ˜๋ฆฌ๋œ ์ž…๋ ฅ ๋ฌถ์Œ์„ ์ง์ ‘ ๋ชจ๋ธ์— ์ „๋‹ฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์•„๋ž˜์ฒ˜๋Ÿผ `**`๋ฅผ ์•ž์— ๋ถ™์—ฌ ๋”•์…”๋„ˆ๋ฆฌ๋ฅผ ํ’€์–ด์ฃผ๋ฉด ๋ฉ๋‹ˆ๋‹ค: ```py >>> pt_outputs = pt_model(**pt_batch) ``` ๋ชจ๋ธ์˜ ์ตœ์ข… ํ™œ์„ฑํ™” ํ•จ์ˆ˜ ์ถœ๋ ฅ์€ `logits` ์†์„ฑ์— ๋‹ด๊ฒจ์žˆ์Šต๋‹ˆ๋‹ค. `logits`์— softmax ํ•จ์ˆ˜๋ฅผ ์ ์šฉํ•˜์—ฌ ํ™•๋ฅ ์„ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> from torch import nn >>> pt_predictions = nn.functional.softmax(pt_outputs.logits, dim=-1) >>> print(pt_predictions) tensor([[0.0021, 0.0018, 0.0115, 0.2121, 0.7725], [0.2084, 0.1826, 0.1969, 0.1755, 0.2365]], grad_fn=<SoftmaxBackward0>) ``` </pt> <tf> ๐Ÿค— Transformers๋Š” ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ์ธ์Šคํ„ด์Šค๋ฅผ ๊ฐ„๋‹จํ•˜๊ณ  ํ†ตํ•ฉ๋œ ๋ฐฉ๋ฒ•์œผ๋กœ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ฆ‰, [`AutoTokenizer`]์ฒ˜๋Ÿผ [`TFAutoModel`]์„ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์œ ์ผํ•œ ์ฐจ์ด์ ์€ ๊ณผ์—…์— ์•Œ๋งž์€ [`TFAutoModel`]์„ ์„ ํƒํ•ด์•ผ ํ•œ๋‹ค๋Š” ์ ์ž…๋‹ˆ๋‹ค. 
ํ…์ŠคํŠธ (๋˜๋Š” ์‹œํ€€์Šค) ๋ถ„๋ฅ˜์˜ ๊ฒฝ์šฐ [`TFAutoModelForSequenceClassification`]์„ ๋กœ๋“œํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import TFAutoModelForSequenceClassification >>> model_name = "nlptown/bert-base-multilingual-uncased-sentiment" >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(model_name) ``` <Tip> [`AutoModel`] ํด๋ž˜์Šค์—์„œ ์ง€์›ํ•˜๋Š” ๊ณผ์—…์— ๋Œ€ํ•ด์„œ๋Š” [๊ณผ์—… ์š”์•ฝ](./task_summary)์„ ์ฐธ์กฐํ•˜์„ธ์š”. </Tip> ์ด์ œ ์ „์ฒ˜๋ฆฌ๋œ ์ž…๋ ฅ ๋ฌถ์Œ์„ ์ง์ ‘ ๋ชจ๋ธ์— ์ „๋‹ฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์•„๋ž˜์ฒ˜๋Ÿผ ๊ทธ๋Œ€๋กœ ํ…์„œ๋ฅผ ์ „๋‹ฌํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค: ```py >>> tf_outputs = tf_model(tf_batch) ``` ๋ชจ๋ธ์˜ ์ตœ์ข… ํ™œ์„ฑํ™” ํ•จ์ˆ˜ ์ถœ๋ ฅ์€ `logits` ์†์„ฑ์— ๋‹ด๊ฒจ์žˆ์Šต๋‹ˆ๋‹ค. `logits`์— softmax ํ•จ์ˆ˜๋ฅผ ์ ์šฉํ•˜์—ฌ ํ™•๋ฅ ์„ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> import tensorflow as tf >>> tf_predictions = tf.nn.softmax(tf_outputs.logits, axis=-1) >>> tf_predictions # doctest: +IGNORE_RESULT ``` </tf> </frameworkcontent> <Tip> ๋ชจ๋“  ๐Ÿค— Transformers ๋ชจ๋ธ(PyTorch ๋˜๋Š” TensorFlow)์€ (softmax์™€ ๊ฐ™์€) ์ตœ์ข… ํ™œ์„ฑํ™” ํ•จ์ˆ˜ *์ด์ „์—* ํ…์„œ๋ฅผ ์ถœ๋ ฅํ•ฉ๋‹ˆ๋‹ค. ์™œ๋ƒํ•˜๋ฉด ์ตœ์ข… ํ™œ์„ฑํ™” ํ•จ์ˆ˜์˜ ์ถœ๋ ฅ์€ ์ข…์ข… ์†์‹ค ํ•จ์ˆ˜ ์ถœ๋ ฅ๊ณผ ๊ฒฐํ•ฉ๋˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ ์ถœ๋ ฅ์€ ํŠน์ˆ˜ํ•œ ๋ฐ์ดํ„ฐ ํด๋ž˜์Šค์ด๋ฏ€๋กœ IDE์—์„œ ์ž๋™ ์™„์„ฑ๋ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ ์ถœ๋ ฅ์€ ํŠœํ”Œ์ด๋‚˜ ๋”•์…”๋„ˆ๋ฆฌ์ฒ˜๋Ÿผ ๋™์ž‘ํ•˜๋ฉฐ (์ •์ˆ˜, ์Šฌ๋ผ์ด์Šค ๋˜๋Š” ๋ฌธ์ž์—ด๋กœ ์ธ๋ฑ์‹ฑ ๊ฐ€๋Šฅ), None์ธ ์†์„ฑ์€ ๋ฌด์‹œ๋ฉ๋‹ˆ๋‹ค. 
</Tip> ### ๋ชจ๋ธ ์ €์žฅํ•˜๊ธฐ [[save-a-model]] <frameworkcontent> <pt> ๋ฏธ์„ธ์กฐ์ •๋œ ๋ชจ๋ธ์„ ํ† ํฌ๋‚˜์ด์ €์™€ ํ•จ๊ป˜ ์ €์žฅํ•˜๋ ค๋ฉด [`PreTrainedModel.save_pretrained`]๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”: ```py >>> pt_save_directory = "./pt_save_pretrained" >>> tokenizer.save_pretrained(pt_save_directory) # doctest: +IGNORE_RESULT >>> pt_model.save_pretrained(pt_save_directory) ``` ๋ชจ๋ธ์„ ๋‹ค์‹œ ์‚ฌ์šฉํ•˜๋ ค๋ฉด [`PreTrainedModel.from_pretrained`]๋กœ ๋ชจ๋ธ์„ ๋‹ค์‹œ ๋กœ๋“œํ•˜์„ธ์š”: ```py >>> pt_model = AutoModelForSequenceClassification.from_pretrained("./pt_save_pretrained") ``` </pt> <tf> ๋ฏธ์„ธ์กฐ์ •๋œ ๋ชจ๋ธ์„ ํ† ํฌ๋‚˜์ด์ €์™€ ํ•จ๊ป˜ ์ €์žฅํ•˜๋ ค๋ฉด [`TFPreTrainedModel.save_pretrained`]๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”: ```py >>> tf_save_directory = "./tf_save_pretrained" >>> tokenizer.save_pretrained(tf_save_directory) # doctest: +IGNORE_RESULT >>> tf_model.save_pretrained(tf_save_directory) ``` ๋ชจ๋ธ์„ ๋‹ค์‹œ ์‚ฌ์šฉํ•˜๋ ค๋ฉด [`TFPreTrainedModel.from_pretrained`]๋กœ ๋ชจ๋ธ์„ ๋‹ค์‹œ ๋กœ๋“œํ•˜์„ธ์š”: ```py >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained("./tf_save_pretrained") ``` </tf> </frameworkcontent> ๐Ÿค— Transformers์˜ ๋ฉ‹์ง„ ๊ธฐ๋Šฅ ์ค‘ ํ•˜๋‚˜๋Š” ๋ชจ๋ธ์„ PyTorch ๋˜๋Š” TensorFlow ๋ชจ๋ธ๋กœ ์ €์žฅํ•ด๋’€๋‹ค๊ฐ€ ๋‹ค๋ฅธ ํ”„๋ ˆ์ž„์›Œํฌ๋กœ ๋‹ค์‹œ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ๋Š” ์ ์ž…๋‹ˆ๋‹ค. 
`from_pt` ๋˜๋Š” `from_tf` ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํ•œ ํ”„๋ ˆ์ž„์›Œํฌ์—์„œ ๋‹ค๋ฅธ ํ”„๋ ˆ์ž„์›Œํฌ๋กœ ๋ณ€ํ™˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: <frameworkcontent> <pt> ```py >>> from transformers import AutoModel >>> tokenizer = AutoTokenizer.from_pretrained(tf_save_directory) >>> pt_model = AutoModelForSequenceClassification.from_pretrained(tf_save_directory, from_tf=True) ``` </pt> <tf> ```py >>> from transformers import TFAutoModel >>> tokenizer = AutoTokenizer.from_pretrained(pt_save_directory) >>> tf_model = TFAutoModelForSequenceClassification.from_pretrained(pt_save_directory, from_pt=True) ``` </tf> </frameworkcontent> ## ์ปค์Šคํ…€ ๋ชจ๋ธ ๊ตฌ์ถ•ํ•˜๊ธฐ [[custom-model-builds]] ๋ชจ๋ธ์˜ ๊ตฌ์„ฑ ํด๋ž˜์Šค๋ฅผ ์ˆ˜์ •ํ•˜์—ฌ ๋ชจ๋ธ์˜ ๊ตฌ์กฐ๋ฅผ ๋ฐ”๊ฟ€ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. (์€๋‹‰์ธต์ด๋‚˜ ์–ดํ…์…˜ ํ—ค๋“œ์˜ ์ˆ˜์™€ ๊ฐ™์€) ๋ชจ๋ธ์˜ ์†์„ฑ์€ ๊ตฌ์„ฑ์—์„œ ์ง€์ •๋˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ์ปค์Šคํ…€ ๊ตฌ์„ฑ ํด๋ž˜์Šค๋กœ ๋ชจ๋ธ์„ ๋งŒ๋“ค๋ฉด ์ฒ˜์Œ๋ถ€ํ„ฐ ์‹œ์ž‘ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ ์†์„ฑ์€ ๋ฌด์ž‘์œ„๋กœ ์ดˆ๊ธฐํ™”๋˜๋ฏ€๋กœ ์˜๋ฏธ ์žˆ๋Š” ๊ฒฐ๊ณผ๋ฅผ ์–ป์œผ๋ ค๋ฉด ๋จผ์ € ๋ชจ๋ธ์„ ํ›ˆ๋ จ์‹œ์ผœ์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋จผ์ € [`AutoConfig`]๋ฅผ ๊ฐ€์ ธ์˜ค๊ณ  ์ˆ˜์ •ํ•˜๊ณ  ์‹ถ์€ ์‚ฌ์ „ํ•™์Šต๋œ ๋ชจ๋ธ์„ ๋กœ๋“œํ•˜์„ธ์š”. 
[`AutoConfig.from_pretrained`] ๋‚ด๋ถ€์—์„œ (์–ดํ…์…˜ ํ—ค๋“œ ์ˆ˜์™€ ๊ฐ™์ด) ๋ณ€๊ฒฝํ•˜๋ ค๋Š” ์†์„ฑ๋ฅผ ์ง€์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoConfig >>> my_config = AutoConfig.from_pretrained("distilbert-base-uncased", n_heads=12) ``` <frameworkcontent> <pt> [`AutoModel.from_config`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ”๊พผ ๊ตฌ์„ฑ๋Œ€๋กœ ๋ชจ๋ธ์„ ์ƒ์„ฑํ•˜์„ธ์š”: ```py >>> from transformers import AutoModel >>> my_model = AutoModel.from_config(my_config) ``` </pt> <tf> [`TFAutoModel.from_config`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ”๊พผ ๊ตฌ์„ฑ๋Œ€๋กœ ๋ชจ๋ธ์„ ์ƒ์„ฑํ•˜์„ธ์š”: ```py >>> from transformers import TFAutoModel >>> my_model = TFAutoModel.from_config(my_config) ``` </tf> </frameworkcontent> ์ปค์Šคํ…€ ๊ตฌ์„ฑ์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [์ปค์Šคํ…€ ์•„ํ‚คํ…์ฒ˜ ๋งŒ๋“ค๊ธฐ](./create_a_model) ๊ฐ€์ด๋“œ๋ฅผ ํ™•์ธํ•˜์„ธ์š”. ## Trainer - PyTorch์— ์ตœ์ ํ™”๋œ ํ›ˆ๋ จ ๋ฃจํ”„ [[trainer-a-pytorch-optimized-training-loop]] ๋ชจ๋“  ๋ชจ๋ธ์€ [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)์ด๋ฏ€๋กœ ์ผ๋ฐ˜์ ์ธ ํ›ˆ๋ จ ๋ฃจํ”„์—์„œ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ง์ ‘ ํ›ˆ๋ จ ๋ฃจํ”„๋ฅผ ์ž‘์„ฑํ•  ์ˆ˜๋„ ์žˆ์ง€๋งŒ, ๐Ÿค— Transformers๋Š” PyTorch๋ฅผ ์œ„ํ•œ [`Trainer`] ํด๋ž˜์Šค๋ฅผ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์ด ํด๋ž˜์Šค์—๋Š” ๊ธฐ๋ณธ ํ›ˆ๋ จ ๋ฃจํ”„๊ฐ€ ํฌํ•จ๋˜์–ด ์žˆ์œผ๋ฉฐ ๋ถ„์‚ฐ ํ›ˆ๋ จ, ํ˜ผํ•ฉ ์ •๋ฐ€๋„ ๋“ฑ๊ณผ ๊ฐ™์€ ๊ธฐ๋Šฅ์„ ์ถ”๊ฐ€๋กœ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ๊ณผ์—…์— ๋”ฐ๋ผ ๋‹ค๋ฅด์ง€๋งŒ ์ผ๋ฐ˜์ ์œผ๋กœ [`Trainer`]์— ๋‹ค์Œ ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค: 1. [`PreTrainedModel`] ๋˜๋Š” [`torch.nn.Module`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)๋กœ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased") ``` 2. [`TrainingArguments`]๋Š” ํ•™์Šต๋ฅ , ๋ฐฐ์น˜ ํฌ๊ธฐ, ํ›ˆ๋ จํ•  ์—ํฌํฌ ์ˆ˜์™€ ๊ฐ™์€ ๋ชจ๋ธ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ํฌํ•จํ•ฉ๋‹ˆ๋‹ค. 
ํ›ˆ๋ จ ์ธ์ž๋ฅผ ์ง€์ •ํ•˜์ง€ ์•Š์œผ๋ฉด ๊ธฐ๋ณธ๊ฐ’์ด ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import TrainingArguments >>> training_args = TrainingArguments( ... output_dir="path/to/save/folder/", ... learning_rate=2e-5, ... per_device_train_batch_size=8, ... per_device_eval_batch_size=8, ... num_train_epochs=2, ... ) ``` 3. ํ† ํฌ๋‚˜์ด์ €, ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ, ํŠน์ง• ์ถ”์ถœ๊ธฐ(feature extractor) ๋˜๋Š” ํ”„๋กœ์„ธ์„œ์™€ ์ „์ฒ˜๋ฆฌ ํด๋ž˜์Šค๋ฅผ ๋กœ๋“œํ•˜์„ธ์š”: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") ``` 4. ๋ฐ์ดํ„ฐ์…‹์„ ๋กœ๋“œํ•˜์„ธ์š”: ```py >>> from datasets import load_dataset >>> dataset = load_dataset("rotten_tomatoes") # doctest: +IGNORE_RESULT ``` 5. ๋ฐ์ดํ„ฐ์…‹์„ ํ† ํฐํ™”ํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ์ƒ์„ฑํ•˜์„ธ์š”: ```py >>> def tokenize_dataset(dataset): ... return tokenizer(dataset["text"]) ``` ๊ทธ๋ฆฌ๊ณ  [`~datasets.Dataset.map`]๋กœ ๋ฐ์ดํ„ฐ์…‹ ์ „์ฒด์— ์ ์šฉํ•˜์„ธ์š”: ```py >>> dataset = dataset.map(tokenize_dataset, batched=True) ``` 6. [`DataCollatorWithPadding`]์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ์…‹์˜ ํ‘œ๋ณธ ๋ฌถ์Œ์„ ๋งŒ๋“œ์„ธ์š”: ```py >>> from transformers import DataCollatorWithPadding >>> data_collator = DataCollatorWithPadding(tokenizer=tokenizer) ``` ์ด์ œ ์œ„์˜ ๋ชจ๋“  ํด๋ž˜์Šค๋ฅผ [`Trainer`]๋กœ ๋ชจ์œผ์„ธ์š”: ```py >>> from transformers import Trainer >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=dataset["train"], ... eval_dataset=dataset["test"], ... tokenizer=tokenizer, ... data_collator=data_collator, ... ) # doctest: +SKIP ``` ์ค€๋น„๊ฐ€ ๋˜์—ˆ์œผ๋ฉด [`~Trainer.train`]์„ ํ˜ธ์ถœํ•˜์—ฌ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•˜์„ธ์š”: ```py >>> trainer.train() # doctest: +SKIP ``` <Tip> ๋ฒˆ์—ญ์ด๋‚˜ ์š”์•ฝ๊ณผ ๊ฐ™์ด ์‹œํ€€์Šค-์‹œํ€€์Šค ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ณผ์—…์—๋Š” [`Seq2SeqTrainer`] ๋ฐ [`Seq2SeqTrainingArguments`] ํด๋ž˜์Šค๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. 
</Tip>

[`Trainer`] ๋‚ด์˜ ๋ฉ”์„œ๋“œ๋ฅผ ์„œ๋ธŒํด๋ž˜์Šคํ™”ํ•˜์—ฌ ํ›ˆ๋ จ ๋ฃจํ”„๋ฅผ ๋ฐ”๊ฟ€ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ์†์‹ค ํ•จ์ˆ˜, ์˜ตํ‹ฐ๋งˆ์ด์ €, ์Šค์ผ€์ค„๋Ÿฌ์™€ ๊ฐ™์€ ๊ธฐ๋Šฅ ๋˜ํ•œ ๋ฐ”๊ฟ€ ์ˆ˜ ์žˆ๊ฒŒ ๋ฉ๋‹ˆ๋‹ค. ๋ณ€๊ฒฝ ๊ฐ€๋Šฅํ•œ ๋ฉ”์†Œ๋“œ์— ๋Œ€ํ•ด์„œ๋Š” [`Trainer`] ๋ฌธ์„œ๋ฅผ ์ฐธ๊ณ ํ•˜์„ธ์š”.

ํ›ˆ๋ จ ๋ฃจํ”„๋ฅผ ์ˆ˜์ •ํ•˜๋Š” ๋‹ค๋ฅธ ๋ฐฉ๋ฒ•์€ [Callbacks](./main_classes/callbacks)๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. Callbacks๋กœ ๋‹ค๋ฅธ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์™€ ํ†ตํ•ฉํ•˜๊ณ , ํ›ˆ๋ จ ๋ฃจํ”„๋ฅผ ์ฒดํฌํ•˜์—ฌ ์ง„ํ–‰ ์ƒํ™ฉ์„ ๋ณด๊ณ ๋ฐ›๊ฑฐ๋‚˜, ํ›ˆ๋ จ์„ ์กฐ๊ธฐ์— ์ค‘๋‹จํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. Callbacks๋Š” ํ›ˆ๋ จ ๋ฃจํ”„ ์ž์ฒด๋ฅผ ๋ฐ”๊พธ์ง€๋Š” ์•Š์Šต๋‹ˆ๋‹ค. ์†์‹ค ํ•จ์ˆ˜์™€ ๊ฐ™์€ ๊ฒƒ์„ ๋ฐ”๊พธ๋ ค๋ฉด [`Trainer`]๋ฅผ ์„œ๋ธŒํด๋ž˜์Šคํ™”ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค.

## TensorFlow๋กœ ํ›ˆ๋ จ์‹œํ‚ค๊ธฐ [[train-with-tensorflow]]

๋ชจ๋“  ๋ชจ๋ธ์€ [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model)์ด๋ฏ€๋กœ [Keras](https://keras.io/) API๋ฅผ ํ†ตํ•ด TensorFlow์—์„œ ํ›ˆ๋ จ์‹œํ‚ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๐Ÿค— Transformers๋Š” ๋ฐ์ดํ„ฐ์…‹์„ `tf.data.Dataset` ํ˜•ํƒœ๋กœ ์‰ฝ๊ฒŒ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ๋Š” [`~TFPreTrainedModel.prepare_tf_dataset`] ๋ฉ”์†Œ๋“œ๋ฅผ ์ œ๊ณตํ•˜๊ธฐ ๋•Œ๋ฌธ์—, Keras์˜ [`compile`](https://keras.io/api/models/model_training_apis/#compile-method) ๋ฐ [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) ๋ฉ”์†Œ๋“œ๋กœ ๋ฐ”๋กœ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

1. [`TFPreTrainedModel`] ๋˜๋Š” [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model)๋กœ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค:

   ```py
   >>> from transformers import TFAutoModelForSequenceClassification

   >>> model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
   ```

2. ํ† ํฌ๋‚˜์ด์ €, ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ, ํŠน์ง• ์ถ”์ถœ๊ธฐ(feature extractor) ๋˜๋Š” ํ”„๋กœ์„ธ์„œ์™€ ๊ฐ™์€ ์ „์ฒ˜๋ฆฌ ํด๋ž˜์Šค๋ฅผ ๋กœ๋“œํ•˜์„ธ์š”:

   ```py
   >>> from transformers import AutoTokenizer

   >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
   ```

3.
๋ฐ์ดํ„ฐ์…‹์„ ํ† ํฐํ™”ํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ์ƒ์„ฑํ•˜์„ธ์š”: ```py >>> def tokenize_dataset(dataset): ... return tokenizer(dataset["text"]) # doctest: +SKIP ``` 4. [`~datasets.Dataset.map`]์„ ์‚ฌ์šฉํ•˜์—ฌ ์ „์ฒด ๋ฐ์ดํ„ฐ์…‹์— ํ† ํฐํ™” ํ•จ์ˆ˜๋ฅผ ์ ์šฉํ•˜๊ณ , ๋ฐ์ดํ„ฐ์…‹๊ณผ ํ† ํฌ๋‚˜์ด์ €๋ฅผ [`~TFPreTrainedModel.prepare_tf_dataset`]์— ์ „๋‹ฌํ•˜์„ธ์š”. ๋ฐฐ์น˜ ํฌ๊ธฐ๋ฅผ ๋ณ€๊ฒฝํ•˜๊ฑฐ๋‚˜ ๋ฐ์ดํ„ฐ์…‹์„ ์„ž์„ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> dataset = dataset.map(tokenize_dataset) # doctest: +SKIP >>> tf_dataset = model.prepare_tf_dataset( ... dataset["train"], batch_size=16, shuffle=True, tokenizer=tokenizer ... ) # doctest: +SKIP ``` 5. ์ค€๋น„๋˜์—ˆ์œผ๋ฉด `compile` ๋ฐ `fit`๋ฅผ ํ˜ธ์ถœํ•˜์—ฌ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•˜์„ธ์š”. ๐Ÿค— Transformers์˜ ๋ชจ๋“  ๋ชจ๋ธ์€ ๊ณผ์—…๊ณผ ๊ด€๋ จ๋œ ๊ธฐ๋ณธ ์†์‹ค ํ•จ์ˆ˜๋ฅผ ๊ฐ€์ง€๊ณ  ์žˆ์œผ๋ฏ€๋กœ ๋ช…์‹œ์ ์œผ๋กœ ์ง€์ •ํ•˜์ง€ ์•Š์•„๋„ ๋ฉ๋‹ˆ๋‹ค: ```py >>> from tensorflow.keras.optimizers import Adam >>> model.compile(optimizer=Adam(3e-5)) # No loss argument! >>> model.fit(tf_dataset) # doctest: +SKIP ``` ## ๋‹ค์Œ ๋‹จ๊ณ„๋Š” ๋ฌด์—‡์ธ๊ฐ€์š”? [[whats-next]] ๐Ÿค— Transformers ๋‘˜๋Ÿฌ๋ณด๊ธฐ๋ฅผ ๋ชจ๋‘ ์ฝ์œผ์…จ๋‹ค๋ฉด, ๊ฐ€์ด๋“œ๋ฅผ ์‚ดํŽด๋ณด๊ณ  ๋” ๊ตฌ์ฒด์ ์ธ ๊ฒƒ์„ ์ˆ˜ํ–‰ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์•Œ์•„๋ณด์„ธ์š”. ์ด๋ฅผํ…Œ๋ฉด ์ปค์Šคํ…€ ๋ชจ๋ธ ๊ตฌ์ถ•ํ•˜๋Š” ๋ฐฉ๋ฒ•, ๊ณผ์—…์— ์•Œ๋งž๊ฒŒ ๋ชจ๋ธ์„ ๋ฏธ์„ธ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•, ์Šคํฌ๋ฆฝํŠธ๋กœ ๋ชจ๋ธ ํ›ˆ๋ จํ•˜๋Š” ๋ฐฉ๋ฒ• ๋“ฑ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ๐Ÿค— Transformers ํ•ต์‹ฌ ๊ฐœ๋…์— ๋Œ€ํ•ด ๋” ์•Œ์•„๋ณด๋ ค๋ฉด ์ปคํ”ผ ํ•œ ์ž” ๋“ค๊ณ  ๊ฐœ๋… ๊ฐ€์ด๋“œ๋ฅผ ์‚ดํŽด๋ณด์„ธ์š”!
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ํ…Œ์ŠคํŠธ[[testing]] ๋จผ์ € ๐Ÿค— Transformers ๋ชจ๋ธ์ด ์–ด๋–ป๊ฒŒ ํ…Œ์ŠคํŠธ๋˜๋Š”์ง€ ์‚ดํŽด๋ณด๊ณ , ์ƒˆ๋กœ์šด ํ…Œ์ŠคํŠธ๋ฅผ ์ž‘์„ฑ ๋ฐ ๊ธฐ์กด ํ…Œ์ŠคํŠธ๋ฅผ ๊ฐœ์„ ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์•Œ์•„๋ด…์‹œ๋‹ค. ์ด ์ €์žฅ์†Œ์—๋Š” 2๊ฐœ์˜ ํ…Œ์ŠคํŠธ ์Šค์œ„ํŠธ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค: 1. `tests` - ์ผ๋ฐ˜ API์— ๋Œ€ํ•œ ํ…Œ์ŠคํŠธ 2. `examples` - API์˜ ์ผ๋ถ€๊ฐ€ ์•„๋‹Œ ๋‹ค์–‘ํ•œ ์‘์šฉ ํ”„๋กœ๊ทธ๋žจ์— ๋Œ€ํ•œ ํ…Œ์ŠคํŠธ ## Transformers ํ…Œ์ŠคํŠธ ๋ฐฉ๋ฒ•[[how-transformers-are-tested]] 1. PR์ด ์ œ์ถœ๋˜๋ฉด 9๊ฐœ์˜ CircleCi ์ž‘์—…์œผ๋กœ ํ…Œ์ŠคํŠธ๊ฐ€ ์ง„ํ–‰๋ฉ๋‹ˆ๋‹ค. ํ•ด๋‹น PR์— ๋Œ€ํ•ด ์ƒˆ๋กœ์šด ์ปค๋ฐ‹์ด ์ƒ์„ฑ๋  ๋•Œ๋งˆ๋‹ค ํ…Œ์ŠคํŠธ๋Š” ๋‹ค์‹œ ์ง„ํ–‰๋ฉ๋‹ˆ๋‹ค. ์ด ์ž‘์—…๋“ค์€ ์ด [config ํŒŒ์ผ](https://github.com/huggingface/transformers/tree/main/.circleci/config.yml)์— ์ •์˜๋˜์–ด ์žˆ์œผ๋ฏ€๋กœ ํ•„์š”ํ•˜๋‹ค๋ฉด ์‚ฌ์šฉ์ž์˜ ๋กœ์ปฌ ํ™˜๊ฒฝ์—์„œ ๋™์ผํ•˜๊ฒŒ ์žฌํ˜„ํ•ด ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด CI ์ž‘์—…์€ `@slow` ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. 2. [github actions](https://github.com/huggingface/transformers/actions)์— ์˜ํ•ด ์‹คํ–‰๋˜๋Š” ์ž‘์—…์€ 3๊ฐœ์ž…๋‹ˆ๋‹ค: - [torch hub integration](https://github.com/huggingface/transformers/tree/main/.github/workflows/github-torch-hub.yml): torch hub integration์ด ์ž‘๋™ํ•˜๋Š”์ง€ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค. 
- [self-hosted (push)](https://github.com/huggingface/transformers/tree/main/.github/workflows/self-push.yml): `main` ๋ธŒ๋žœ์น˜์—์„œ ์ปค๋ฐ‹์ด ์—…๋ฐ์ดํŠธ๋œ ๊ฒฝ์šฐ์—๋งŒ GPU๋ฅผ ์ด์šฉํ•œ ๋น ๋ฅธ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” `src`, `tests`, `.github` ํด๋” ์ค‘ ํ•˜๋‚˜์— ์ฝ”๋“œ๊ฐ€ ์—…๋ฐ์ดํŠธ๋œ ๊ฒฝ์šฐ์—๋งŒ ์‹คํ–‰๋ฉ๋‹ˆ๋‹ค. (model card, notebook, ๊ธฐํƒ€ ๋“ฑ๋“ฑ์„ ์ถ”๊ฐ€ํ•œ ๊ฒฝ์šฐ ์‹คํ–‰๋˜์ง€ ์•Š๋„๋ก ํ•˜๊ธฐ ์œ„ํ•ด์„œ์ž…๋‹ˆ๋‹ค) - [self-hosted runner](https://github.com/huggingface/transformers/tree/main/.github/workflows/self-scheduled.yml): `tests` ๋ฐ `examples`์—์„œ GPU๋ฅผ ์ด์šฉํ•œ ์ผ๋ฐ˜ ํ…Œ์ŠคํŠธ, ๋А๋ฆฐ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. ```bash RUN_SLOW=1 pytest tests/ RUN_SLOW=1 pytest examples/ ``` ๊ฒฐ๊ณผ๋Š” [์—ฌ๊ธฐ](https://github.com/huggingface/transformers/actions)์—์„œ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ## ํ…Œ์ŠคํŠธ ์‹คํ–‰[[running-tests]] ### ์‹คํ–‰ํ•  ํ…Œ์ŠคํŠธ ์„ ํƒ[[choosing-which-tests-to-run]] ์ด ๋ฌธ์„œ๋Š” ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋Š” ๋‹ค์–‘ํ•œ ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด ์ž์„ธํžˆ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋“  ๋‚ด์šฉ์„ ์ฝ์€ ํ›„์—๋„, ๋” ์ž์„ธํ•œ ๋‚ด์šฉ์ด ํ•„์š”ํ•˜๋‹ค๋ฉด [์—ฌ๊ธฐ](https://docs.pytest.org/en/latest/usage.html)์—์„œ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ์€ ๊ฐ€์žฅ ์œ ์šฉํ•œ ํ…Œ์ŠคํŠธ ์‹คํ–‰ ๋ฐฉ๋ฒ• ๋ช‡ ๊ฐ€์ง€์ž…๋‹ˆ๋‹ค. ๋ชจ๋‘ ์‹คํ–‰: ```console pytest ``` ๋˜๋Š”: ```bash make test ``` ํ›„์ž๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ •์˜๋ฉ๋‹ˆ๋‹ค: ```bash python -m pytest -n auto --dist=loadfile -s -v ./tests/ ``` ์œ„์˜ ๋ช…๋ น์–ด๋Š” pytest์—๊ฒŒ ์•„๋ž˜์˜ ๋‚ด์šฉ์„ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค: - ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ CPU ์ฝ”์–ด ์ˆ˜๋งŒํผ ํ…Œ์ŠคํŠธ ํ”„๋กœ์„ธ์Šค๋ฅผ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. (RAM์ด ์ถฉ๋ถ„ํ•˜์ง€ ์•Š๋‹ค๋ฉด, ํ…Œ์ŠคํŠธ ํ”„๋กœ์„ธ์Šค ์ˆ˜๊ฐ€ ๋„ˆ๋ฌด ๋งŽ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค!) - ๋™์ผํ•œ ํŒŒ์ผ์˜ ๋ชจ๋“  ํ…Œ์ŠคํŠธ๋Š” ๋™์ผํ•œ ํ…Œ์ŠคํŠธ ํ”„๋กœ์„ธ์Šค์—์„œ ์‹คํ–‰๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. - ์ถœ๋ ฅ์„ ์บก์ฒ˜ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. - ์ž์„ธํ•œ ๋ชจ๋“œ๋กœ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. 
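์œ„์—์„œ ์–ธ๊ธ‰ํ–ˆ๋“ฏ RAM์ด ์ถฉ๋ถ„ํ•˜์ง€ ์•Š์€ ํ™˜๊ฒฝ์—์„œ๋Š” `-n auto`๊ฐ€ ์›Œ์ปค๋ฅผ ๋„ˆ๋ฌด ๋งŽ์ด ๋„์šธ ์ˆ�˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ๊ฐ€์šฉ ์ฝ”์–ด ์ˆ˜์˜ ์ ˆ๋ฐ˜๋งŒ ์›Œ์ปค๋กœ ์“ฐ๋Š” ๋ช…๋ น์–ด๋ฅผ ๋งŒ๋“ค์–ด ๋ณด๋Š” ๊ฐ„๋‹จํ•œ ์Šค์ผ€์น˜์ž…๋‹ˆ๋‹ค. "์ ˆ๋ฐ˜"์ด๋ผ๋Š” ๋น„์œจ์€ ๊ณต์‹ ๊ถŒ์žฅ๊ฐ’์ด ์•„๋‹ˆ๋ผ ์˜ˆ์‹œ๋ฅผ ์œ„ํ•ด ์ž„์˜๋กœ ์ •ํ•œ ๊ฐ€์ •์ž…๋‹ˆ๋‹ค:

```python
import os

# ๊ฐ€์šฉ CPU ์ฝ”์–ด ์ˆ˜๋ฅผ ๊ตฌํ•œ ๋’ค, ๊ทธ ์ ˆ๋ฐ˜(์ตœ์†Œ 1๊ฐœ)์„ ์›Œ์ปค ์ˆ˜๋กœ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค
cores = os.cpu_count() or 1
workers = max(1, cores // 2)

# ๋ณธ๋ฌธ์˜ `make test` ์ •์˜์™€ ๊ฐ™์€ ํ˜•ํƒœ๋กœ, -n ๊ฐ’๋งŒ ๋ฐ”๊พผ ๋ช…๋ น์–ด๋ฅผ ์กฐ๋ฆฝํ•ฉ๋‹ˆ๋‹ค
cmd = f"python -m pytest -n {workers} --dist=loadfile -s -v ./tests/"
print(cmd)
```

์ด๋ ‡๊ฒŒ ๋งŒ๋“  ๋ช…๋ น์–ด๋ฅผ ์…ธ์—์„œ ๊ทธ๋Œ€๋กœ ์‹คํ–‰ํ•˜๋ฉด, ๋ฉ”๋ชจ๋ฆฌ ์‚ฌ์šฉ๋Ÿ‰์„ ์ค„์ด๋ฉด์„œ๋„ ๋ณ‘๋ ฌ ์‹คํ–‰์˜ ์ด์ ์„ ์ผ๋ถ€ ์œ ์ง€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.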
### ๋ชจ๋“  ํ…Œ์ŠคํŠธ ๋ชฉ๋ก ๊ฐ€์ ธ์˜ค๊ธฐ[[getting-the-list-of-all-tests]] ํ…Œ์ŠคํŠธ ์Šค์œ„ํŠธ์˜ ๋ชจ๋“  ํ…Œ์ŠคํŠธ: ```bash pytest --collect-only -q ``` ์ง€์ •๋œ ํ…Œ์ŠคํŠธ ํŒŒ์ผ์˜ ๋ชจ๋“  ํ…Œ์ŠคํŠธ: ```bash pytest tests/test_optimization.py --collect-only -q ``` ### ํŠน์ • ํ…Œ์ŠคํŠธ ๋ชจ๋“ˆ ์‹คํ–‰[[run-a-specific-test-module]] ๊ฐœ๋ณ„ ํ…Œ์ŠคํŠธ ๋ชจ๋“ˆ ์‹คํ–‰ํ•˜๊ธฐ: ```bash pytest tests/utils/test_logging.py ``` ### ํŠน์ • ํ…Œ์ŠคํŠธ ์‹คํ–‰[[run-specific-tests]] ๋Œ€๋ถ€๋ถ„์˜ ํ…Œ์ŠคํŠธ ๋‚ด๋ถ€์—์„œ๋Š” unittest๊ฐ€ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ํŠน์ • ํ•˜์œ„ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋ ค๋ฉด ํ•ด๋‹น ํ…Œ์ŠคํŠธ๋ฅผ ํฌํ•จํ•˜๋Š” unittest ํด๋ž˜์Šค์˜ ์ด๋ฆ„์„ ์•Œ์•„์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๋‹ค์Œ๊ณผ ๊ฐ™์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash pytest tests/test_optimization.py::OptimizationTest::test_adam_w ``` ์œ„์˜ ๋ช…๋ น์–ด์˜ ์˜๋ฏธ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: - `tests/test_optimization.py` - ํ…Œ์ŠคํŠธ๊ฐ€ ์žˆ๋Š” ํŒŒ์ผ - `OptimizationTest` - ํด๋ž˜์Šค์˜ ์ด๋ฆ„ - `test_adam_w` - ํŠน์ • ํ…Œ์ŠคํŠธ ํ•จ์ˆ˜์˜ ์ด๋ฆ„ ํŒŒ์ผ์— ์—ฌ๋Ÿฌ ํด๋ž˜์Šค๊ฐ€ ํฌํ•จ๋œ ๊ฒฝ์šฐ, ํŠน์ • ํด๋ž˜์Šค์˜ ํ…Œ์ŠคํŠธ๋งŒ ์‹คํ–‰ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```bash pytest tests/test_optimization.py::OptimizationTest ``` ์ด ๋ช…๋ น์–ด๋Š” ํ•ด๋‹น ํด๋ž˜์Šค ๋‚ด๋ถ€์˜ ๋ชจ๋“  ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. ์•ž์—์„œ ์–ธ๊ธ‰ํ•œ ๊ฒƒ์ฒ˜๋Ÿผ `OptimizationTest` ํด๋ž˜์Šค์— ํฌํ•จ๋œ ํ…Œ์ŠคํŠธ๋ฅผ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```bash pytest tests/test_optimization.py::OptimizationTest --collect-only -q ``` ํ‚ค์›Œ๋“œ ํ‘œํ˜„์‹์„ ์‚ฌ์šฉํ•˜์—ฌ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. `adam`์ด๋ผ๋Š” ์ด๋ฆ„์„ ํฌํ•จํ•˜๋Š” ํ…Œ์ŠคํŠธ๋งŒ ์‹คํ–‰ํ•˜๋ ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```bash pytest -k adam tests/test_optimization.py ``` ๋…ผ๋ฆฌ ์—ฐ์‚ฐ์ž `and`์™€ `or`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋“  ํ‚ค์›Œ๋“œ๊ฐ€ ์ผ์น˜ํ•ด์•ผ ํ•˜๋Š”์ง€ ๋˜๋Š” ์–ด๋А ํ•˜๋‚˜๊ฐ€ ์ผ์น˜ํ•ด์•ผ ํ•˜๋Š”์ง€๋ฅผ ๋‚˜ํƒ€๋‚ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `not`์€ ๋ถ€์ •ํ•  ๋•Œ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
`adam`์ด๋ผ๋Š” ์ด๋ฆ„์„ ํฌํ•จํ•˜์ง€ ์•Š๋Š” ๋ชจ๋“  ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋ ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```bash pytest -k "not adam" tests/test_optimization.py ``` ๋‘ ๊ฐ€์ง€ ํŒจํ„ด์„ ํ•˜๋‚˜๋กœ ๊ฒฐํ•ฉํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash pytest -k "ada and not adam" tests/test_optimization.py ``` ์˜ˆ๋ฅผ ๋“ค์–ด `test_adafactor`์™€ `test_adam_w`๋ฅผ ๋ชจ๋‘ ์‹คํ–‰ํ•˜๋ ค๋ฉด ๋‹ค์Œ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash pytest -k "test_adam_w or test_adam_w" tests/test_optimization.py ``` ์—ฌ๊ธฐ์„œ `or`๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์— ์œ ์˜ํ•˜์„ธ์š”. ๋‘ ํ‚ค์›Œ๋“œ ์ค‘ ํ•˜๋‚˜๊ฐ€ ์ผ์น˜ํ•˜๋„๋ก ํ•˜๊ธฐ ์œ„ํ•œ ๋ชฉ์ ์œผ๋กœ ์‚ฌ์šฉํ•˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ๋‘ ํŒจํ„ด์ด ๋ชจ๋‘ ํฌํ•จ๋˜์–ด์•ผ ํ•˜๋Š” ํ…Œ์ŠคํŠธ๋งŒ ์‹คํ–‰ํ•˜๋ ค๋ฉด, `and`๋ฅผ ์‚ฌ์šฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```bash pytest -k "test and ada" tests/test_optimization.py ``` ### `accelerate` ํ…Œ์ŠคํŠธ ์‹คํ–‰[[run-`accelerate`-tests]] ๋ชจ๋ธ์—์„œ `accelerate` ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•ด์•ผ ํ•  ๋•Œ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฅผ ์œ„ํ•ด์„œ๋Š” ๋ช…๋ น์–ด์— `-m accelerate_tests`๋ฅผ ์ถ”๊ฐ€ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, `OPT`์—์„œ ์ด๋Ÿฌํ•œ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋ ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```bash RUN_SLOW=1 pytest -m accelerate_tests tests/models/opt/test_modeling_opt.py ``` ### ๋ฌธ์„œ ํ…Œ์ŠคํŠธ ์‹คํ–‰[[run-documentation-tests]] ์˜ˆ์‹œ ๋ฌธ์„œ๊ฐ€ ์˜ฌ๋ฐ”๋ฅธ์ง€ ํ…Œ์ŠคํŠธํ•˜๋ ค๋ฉด `doctests`๊ฐ€ ํ†ต๊ณผํ•˜๋Š”์ง€ ํ™•์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 
์˜ˆ๋ฅผ ๋“ค์–ด, [`WhisperModel.forward`'s docstring](https://github.com/huggingface/transformers/blob/main/src/transformers/models/whisper/modeling_whisper.py#L1017-L1035)๋ฅผ ์‚ฌ์šฉํ•ด ๋ด…์‹œ๋‹ค: ```python r""" Returns: Example: ```python >>> import torch >>> from transformers import WhisperModel, WhisperFeatureExtractor >>> from datasets import load_dataset >>> model = WhisperModel.from_pretrained("openai/whisper-base") >>> feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-base") >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> inputs = feature_extractor(ds[0]["audio"]["array"], return_tensors="pt") >>> input_features = inputs.input_features >>> decoder_input_ids = torch.tensor([[1, 1]]) * model.config.decoder_start_token_id >>> last_hidden_state = model(input_features, decoder_input_ids=decoder_input_ids).last_hidden_state >>> list(last_hidden_state.shape) [1, 2, 512] ```""" ``` ์›ํ•˜๋Š” ํŒŒ์ผ์˜ ๋ชจ๋“  docstring ์˜ˆ์ œ๋ฅผ ์ž๋™์œผ๋กœ ํ…Œ์ŠคํŠธํ•˜๋ ค๋ฉด ๋‹ค์Œ ๋ช…๋ น์„ ์‹คํ–‰ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค: ```bash pytest --doctest-modules <path_to_file_or_dir> ``` ํŒŒ์ผ์˜ ํ™•์žฅ์ž๊ฐ€ markdown์ธ ๊ฒฝ์šฐ `--doctest-glob="*.md"` ์ธ์ˆ˜๋ฅผ ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ### ์ˆ˜์ •๋œ ํ…Œ์ŠคํŠธ๋งŒ ์‹คํ–‰[[run-only-modified-tests]] ์ˆ˜์ •๋œ ํŒŒ์ผ ๋˜๋Š” ํ˜„์žฌ ๋ธŒ๋žœ์น˜ (Git ๊ธฐ์ค€)์™€ ๊ด€๋ จ๋œ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋ ค๋ฉด [pytest-picked](https://github.com/anapaulagomes/pytest-picked)์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Š” ๋ณ€๊ฒฝํ•œ ๋‚ด์šฉ์ด ํ…Œ์ŠคํŠธ์— ์˜ํ–ฅ์„ ์ฃผ์ง€ ์•Š์•˜๋Š”์ง€ ๋น ๋ฅด๊ฒŒ ํ™•์ธํ•  ์ˆ˜ ์žˆ๋Š” ์ข‹์€ ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค. ```bash pip install pytest-picked ``` ```bash pytest --picked ``` ์ˆ˜์ •๋˜์—ˆ์ง€๋งŒ, ์•„์ง ์ปค๋ฐ‹๋˜์ง€ ์•Š์€ ๋ชจ๋“  ํŒŒ์ผ ๋ฐ ํด๋”์—์„œ ํ…Œ์ŠคํŠธ๊ฐ€ ์‹คํ–‰๋ฉ๋‹ˆ๋‹ค. 
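`pytest-picked`๋ฅผ ์„ค์น˜ํ•  ์ˆ˜ ์—†๋Š” ํ™˜๊ฒฝ์ด๋ผ๋ฉด, git์˜ ๋ณ€๊ฒฝ ๋ชฉ๋ก๋งŒ์œผ๋กœ ๋น„์Šทํ•œ ํšจ๊ณผ๋ฅผ ํ‰๋‚ด ๋‚ผ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ๊ทธ๋Ÿฐ ๋ฐœ์ƒ์„ ๋ณด์—ฌ ์ฃผ๋Š” ์Šค์ผ€์น˜์ผ ๋ฟ์ด๋ฉฐ, `pytest-picked`์˜ ์‹ค์ œ ๋™์ž‘(untracked ํŒŒ์ผ ํฌํ•จ ๋“ฑ)๊ณผ ์™„์ „ํžˆ ๊ฐ™์ง€๋Š” ์•Š๋‹ค๋Š” ์ ์— ์œ ์˜ํ•˜์„ธ์š”:

```python
import subprocess

# ์ปค๋ฐ‹๋˜์ง€ ์•Š์€ ๋ณ€๊ฒฝ ์ค‘ tests/ ์•„๋ž˜์˜ .py ํŒŒ์ผ๋งŒ ๊ณจ๋ผ๋ƒ…๋‹ˆ๋‹ค
try:
    out = subprocess.run(
        ["git", "diff", "--name-only", "HEAD", "--", "tests"],
        capture_output=True, text=True, check=True,
    ).stdout
except (OSError, subprocess.CalledProcessError):
    out = ""  # git ์ €์žฅ์†Œ๊ฐ€ ์•„๋‹ˆ๊ฑฐ๋‚˜ git์ด ์—†๋Š” ๊ฒฝ์šฐ

changed = [f for f in out.splitlines() if f.endswith(".py")]
if changed:
    print("pytest " + " ".join(changed))
else:
    print("๋ณ€๊ฒฝ๋œ ํ…Œ์ŠคํŠธ ํŒŒ์ผ ์—†์Œ")
```

์ถœ๋ ฅ๋œ `pytest ...` ๋ช…๋ น์–ด๋ฅผ ์…ธ์—์„œ ์‹คํ–‰ํ•˜๋ฉด ๋ณ€๊ฒฝ๋œ ํ…Œ์ŠคํŠธ ํŒŒ์ผ๋งŒ ๋Œ๋ ค๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.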
### ์†Œ์Šค ์ˆ˜์ • ์‹œ ์‹คํŒจํ•œ ํ…Œ์ŠคํŠธ ์ž๋™ ์žฌ์‹คํ–‰[[automatically-rerun-failed-tests-on-source-modification]]

[pytest-xdist](https://github.com/pytest-dev/pytest-xdist)๋Š” ๋ชจ๋“  ์‹คํŒจํ•œ ํ…Œ์ŠคํŠธ๋ฅผ ๊ฐ์ง€ํ•˜๊ณ , ํŒŒ์ผ์„ ์ˆ˜์ •ํ•œ ํ›„์— ํŒŒ์ผ์„ ๊ณ„์† ์žฌ์‹คํ–‰ํ•˜์—ฌ ํ…Œ์ŠคํŠธ๊ฐ€ ์„ฑ๊ณตํ•  ๋•Œ๊นŒ์ง€ ๊ธฐ๋‹ค๋ฆฌ๋Š” ๋งค์šฐ ์œ ์šฉํ•œ ๊ธฐ๋Šฅ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์ˆ˜์ •ํ•œ ๋‚ด์šฉ์„ ํ™•์ธํ•œ ํ›„ pytest๋ฅผ ๋‹ค์‹œ ์‹œ์ž‘ํ•  ํ•„์š”๊ฐ€ ์—†์Šต๋‹ˆ๋‹ค. ๋ชจ๋“  ํ…Œ์ŠคํŠธ๊ฐ€ ํ†ต๊ณผ๋  ๋•Œ๊นŒ์ง€ ์ด ๊ณผ์ •์„ ๋ฐ˜๋ณตํ•œ ํ›„ ๋‹ค์‹œ ์ „์ฒด ์‹คํ–‰์ด ์ด๋ฃจ์–ด์ง‘๋‹ˆ๋‹ค.

```bash
pip install pytest-xdist
```

์žฌ๊ท€์  ๋ชจ๋“œ์˜ ์‚ฌ์šฉ: `pytest -f` ๋˜๋Š” `pytest --looponfail`

ํŒŒ์ผ์˜ ๋ณ€๊ฒฝ ์‚ฌํ•ญ์€ `looponfailroots` ๋ฃจํŠธ ๋””๋ ‰ํ„ฐ๋ฆฌ์™€ ํ•ด๋‹น ๋‚ด์šฉ์„ (์žฌ๊ท€์ ์œผ๋กœ) ํ™•์ธํ•˜์—ฌ ๊ฐ์ง€๋ฉ๋‹ˆ๋‹ค. ์ด ๊ฐ’์˜ ๊ธฐ๋ณธ๊ฐ’์ด ์ž‘๋™ํ•˜์ง€ ์•Š๋Š” ๊ฒฝ์šฐ, `setup.cfg`์˜ ์„ค์ • ์˜ต์…˜์„ ๋ณ€๊ฒฝํ•˜์—ฌ ํ”„๋กœ์ ํŠธ์—์„œ ๋ณ€๊ฒฝํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค:

```ini
[tool:pytest]
looponfailroots = transformers tests
```

๋˜๋Š” `pytest.ini`/`tox.ini` ํŒŒ์ผ:

```ini
[pytest]
looponfailroots = transformers tests
```

์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ini-file์˜ ๋””๋ ‰ํ„ฐ๋ฆฌ๋ฅผ ๊ธฐ์ค€์œผ๋กœ ์ƒ๋Œ€์ ์œผ๋กœ ์ง€์ •๋œ ๊ฐ ๋””๋ ‰ํ„ฐ๋ฆฌ์—์„œ ํŒŒ์ผ ๋ณ€๊ฒฝ ์‚ฌํ•ญ๋งŒ ์ฐพ๊ฒŒ ๋ฉ๋‹ˆ๋‹ค.

์ด ๊ธฐ๋Šฅ์„ ๋Œ€์ฒดํ•  ์ˆ˜ ์žˆ๋Š” ๊ตฌํ˜„ ๋ฐฉ๋ฒ•์ธ [pytest-watch](https://github.com/joeyespo/pytest-watch)๋„ ์žˆ์Šต๋‹ˆ๋‹ค.

### ํŠน์ • ํ…Œ์ŠคํŠธ ๋ชจ๋“ˆ ๊ฑด๋„ˆ๋›ฐ๊ธฐ[[skip-a-test-module]]

๋ชจ๋“  ํ…Œ์ŠคํŠธ ๋ชจ๋“ˆ์„ ์‹คํ–‰ํ•˜๋˜ ํŠน์ • ๋ชจ๋“ˆ์„ ์ œ์™ธํ•˜๋ ค๋ฉด, ์‹คํ–‰ํ•  ํ…Œ์ŠคํŠธ ๋ชฉ๋ก์„ ๋ช…์‹œ์ ์œผ๋กœ ์ง€์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
์˜ˆ๋ฅผ ๋“ค์–ด, `test_modeling_*.py` ํ…Œ์ŠคํŠธ๋ฅผ ์ œ์™ธํ•œ ๋ชจ๋“  ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋ ค๋ฉด ๋‹ค์Œ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค:

```bash
pytest $(ls -1 tests/*py | grep -v test_modeling)
```

### ์ƒํƒœ ์ดˆ๊ธฐํ™”[[clearing-state]]

CI ๋นŒ๋“œ ๋ฐ (์†๋„์— ๋Œ€ํ•œ) ๊ฒฉ๋ฆฌ๊ฐ€ ์ค‘์š”ํ•œ ๊ฒฝ์šฐ, ์บ์‹œ๋ฅผ ์ง€์›Œ์•ผ ํ•ฉ๋‹ˆ๋‹ค:

```bash
pytest --cache-clear tests
```

### ํ…Œ์ŠคํŠธ๋ฅผ ๋ณ‘๋ ฌ๋กœ ์‹คํ–‰[[running-tests-in-parallel]]

์ด์ „์— ์–ธ๊ธ‰ํ•œ ๊ฒƒ์ฒ˜๋Ÿผ `make test`๋Š” ํ…Œ์ŠคํŠธ๋ฅผ ๋ณ‘๋ ฌ๋กœ ์‹คํ–‰ํ•˜๊ธฐ ์œ„ํ•ด `pytest-xdist` ํ”Œ๋Ÿฌ๊ทธ์ธ(`-n X` ์ธ์ˆ˜, ์˜ˆ๋ฅผ ๋“ค์–ด `-n 2`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ 2๊ฐœ์˜ ๋ณ‘๋ ฌ ์ž‘์—… ์‹คํ–‰)์„ ํ†ตํ•ด ์‹คํ–‰๋ฉ๋‹ˆ๋‹ค.

`pytest-xdist`์˜ `--dist=` ์˜ต์…˜์„ ์‚ฌ์šฉํ•˜์—ฌ ํ…Œ์ŠคํŠธ๋ฅผ ์–ด๋–ป๊ฒŒ ๊ทธ๋ฃนํ™”ํ• ์ง€ ์ œ์–ดํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `--dist=loadfile`์€ ํ•˜๋‚˜์˜ ํŒŒ์ผ์— ์žˆ๋Š” ํ…Œ์ŠคํŠธ๋ฅผ ๋™์ผํ•œ ํ”„๋กœ์„ธ์Šค๋กœ ๊ทธ๋ฃนํ™”ํ•ฉ๋‹ˆ๋‹ค.
#### ํ…Œ์ŠคํŠธ๋ฅผ ๋ฐ˜๋ณต[[repeat-tests]] - [pytest-flakefinder](https://github.com/dropbox/pytest-flakefinder): ```bash pip install pytest-flakefinder ``` ๋ชจ๋“  ํ…Œ์ŠคํŠธ๋ฅผ ์—ฌ๋Ÿฌ ๋ฒˆ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค(๊ธฐ๋ณธ๊ฐ’์€ 50๋ฒˆ): ```bash pytest --flake-finder --flake-runs=5 tests/test_failing_test.py ``` <Tip> ์ด ํ”Œ๋Ÿฌ๊ทธ์ธ์€ `pytest-xdist`์˜ `-n` ํ”Œ๋ž˜๊ทธ์™€ ํ•จ๊ป˜ ์ž‘๋™ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. </Tip> <Tip> `pytest-repeat`๋ผ๋Š” ๋˜ ๋‹ค๋ฅธ ํ”Œ๋Ÿฌ๊ทธ์ธ๋„ ์žˆ์ง€๋งŒ `unittest`์™€ ํ•จ๊ป˜ ์ž‘๋™ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. </Tip> #### ํ…Œ์ŠคํŠธ๋ฅผ ์ž„์˜์˜ ์ˆœ์„œ๋กœ ์‹คํ–‰[[run-tests-in-a-random-order]] ```bash pip install pytest-random-order ``` ์ค‘์š”: `pytest-random-order`๊ฐ€ ์„ค์น˜๋˜๋ฉด ํ…Œ์ŠคํŠธ๊ฐ€ ์ž๋™์œผ๋กœ ์ž„์˜์˜ ์ˆœ์„œ๋กœ ์„ž์ž…๋‹ˆ๋‹ค. ๊ตฌ์„ฑ ๋ณ€๊ฒฝ์ด๋‚˜ ์ปค๋งจ๋“œ ๋ผ์ธ ์˜ต์…˜์ด ํ•„์š”ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ์•ž์„œ ์„ค๋ช…ํ•œ ๊ฒƒ์ฒ˜๋Ÿผ ์ด๋ฅผ ํ†ตํ•ด ํ•œ ํ…Œ์ŠคํŠธ์˜ ์ƒํƒœ๊ฐ€ ๋‹ค๋ฅธ ํ…Œ์ŠคํŠธ์˜ ์ƒํƒœ์— ์˜ํ–ฅ์„ ๋ฏธ์น˜๋Š” ๊ฒฐํ•ฉ๋œ ํ…Œ์ŠคํŠธ๋ฅผ ๊ฐ์ง€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `pytest-random-order`๊ฐ€ ์„ค์น˜๋˜๋ฉด ํ•ด๋‹น ์„ธ์…˜์—์„œ ์‚ฌ์šฉ๋œ ๋žœ๋ค ์‹œ๋“œ๊ฐ€ ์ถœ๋ ฅ๋˜๋ฉฐ ์˜ˆ๋ฅผ ๋“ค์–ด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```bash pytest tests [...] Using --random-order-bucket=module Using --random-order-seed=573663 ``` ๋”ฐ๋ผ์„œ ํŠน์ • ์‹œํ€€์Šค๊ฐ€ ์‹คํŒจํ•˜๋Š” ๊ฒฝ์šฐ์—๋Š” ์ •ํ™•ํ•œ ์‹œ๋“œ๋ฅผ ์ถ”๊ฐ€ํ•˜์—ฌ ์žฌํ˜„ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```bash pytest --random-order-seed=573663 [...] Using --random-order-bucket=module Using --random-order-seed=573663 ``` ์ •ํ™•ํžˆ ๋™์ผํ•œ ํ…Œ์ŠคํŠธ ๋ชฉ๋ก(๋˜๋Š” ๋ชฉ๋ก์ด ์—†์Œ)์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ์—๋งŒ ์ •ํ™•ํ•œ ์ˆœ์„œ๋ฅผ ์žฌํ˜„ํ•ฉ๋‹ˆ๋‹ค. ๋ชฉ๋ก์„ ์ˆ˜๋™์œผ๋กœ ์ขํžˆ๊ธฐ ์‹œ์ž‘ํ•˜๋ฉด ๋” ์ด์ƒ ์‹œ๋“œ์— ์˜์กดํ•  ์ˆ˜ ์—†๊ณ  ์‹คํŒจํ–ˆ๋˜ ์ •ํ™•ํ•œ ์ˆœ์„œ๋กœ ์ˆ˜๋™์œผ๋กœ ๋ชฉ๋ก์„ ๋‚˜์—ดํ•ด์•ผํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  `--random-order-bucket=none`์„ ์‚ฌ์šฉํ•˜์—ฌ pytest์—๊ฒŒ ์ˆœ์„œ๋ฅผ ์ž„์˜๋กœ ์„ค์ •ํ•˜์ง€ ์•Š๋„๋ก ์•Œ๋ ค์•ผ ํ•ฉ๋‹ˆ๋‹ค. 
์˜ˆ๋ฅผ ๋“ค์–ด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```bash pytest --random-order-bucket=none tests/test_a.py tests/test_c.py tests/test_b.py ``` ๋ชจ๋“  ํ…Œ์ŠคํŠธ์— ๋Œ€ํ•ด ์„ž๊ธฐ๋ฅผ ๋น„ํ™œ์„ฑํ™”ํ•˜๋ ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```bash pytest --random-order-bucket=none ``` ๊ธฐ๋ณธ์ ์œผ๋กœ `--random-order-bucket=module`์ด ๋‚ด์žฌ๋˜์–ด ์žˆ์œผ๋ฏ€๋กœ, ๋ชจ๋“ˆ ์ˆ˜์ค€์—์„œ ํŒŒ์ผ์„ ์„ž์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ `class`, `package`, `global` ๋ฐ `none` ์ˆ˜์ค€์—์„œ๋„ ์„ž์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ž์„ธํ•œ ๋‚ด์šฉ์€ ํ•ด๋‹น [๋ฌธ์„œ](https://github.com/jbasko/pytest-random-order)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ๋˜ ๋‹ค๋ฅธ ๋ฌด์ž‘์œ„ํ™”์˜ ๋Œ€์•ˆ์€ [`pytest-randomly`](https://github.com/pytest-dev/pytest-randomly)์ž…๋‹ˆ๋‹ค. ์ด ๋ชจ๋“ˆ์€ ๋งค์šฐ ์œ ์‚ฌํ•œ ๊ธฐ๋Šฅ/์ธํ„ฐํŽ˜์ด์Šค๋ฅผ ๊ฐ€์ง€๊ณ  ์žˆ์ง€๋งŒ, `pytest-random-order`์— ์žˆ๋Š” ๋ฒ„ํ‚ท ๋ชจ๋“œ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜๋Š” ์—†์Šต๋‹ˆ๋‹ค. ์„ค์น˜ ํ›„์—๋Š” ์ž๋™์œผ๋กœ ์ ์šฉ๋˜๋Š” ๋ฌธ์ œ๋„ ๋™์ผํ•˜๊ฒŒ ๊ฐ€์ง‘๋‹ˆ๋‹ค. ### ์™ธ๊ด€๊ณผ ๋А๋‚Œ์„ ๋ณ€๊ฒฝ[[look-and-feel-variations] #### pytest-sugar ์‚ฌ์šฉ[[pytest-sugar]] [pytest-sugar](https://github.com/Frozenball/pytest-sugar)๋Š” ํ…Œ์ŠคํŠธ๊ฐ€ ๋ณด์—ฌ์ง€๋Š” ํ˜•ํƒœ๋ฅผ ๊ฐœ์„ ํ•˜๊ณ , ์ง„ํ–‰ ์ƒํ™ฉ ๋ฐ”๋ฅผ ์ถ”๊ฐ€ํ•˜๋ฉฐ, ์‹คํŒจํ•œ ํ…Œ์ŠคํŠธ์™€ ๊ฒ€์ฆ์„ ์ฆ‰์‹œ ํ‘œ์‹œํ•˜๋Š” ํ”Œ๋Ÿฌ๊ทธ์ธ์ž…๋‹ˆ๋‹ค. ์„ค์น˜ํ•˜๋ฉด ์ž๋™์œผ๋กœ ํ™œ์„ฑํ™”๋ฉ๋‹ˆ๋‹ค. ```bash pip install pytest-sugar ``` pytest-sugar ์—†์ด ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋ ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```bash pytest -p no:sugar ``` ๋˜๋Š” ์ œ๊ฑฐํ•˜์„ธ์š”. #### ๊ฐ ํ•˜์œ„ ํ…Œ์ŠคํŠธ ์ด๋ฆ„๊ณผ ์ง„ํ–‰ ์ƒํ™ฉ ๋ณด๊ณ [[report-each-sub-test-name-and-its-progress]] `pytest`๋ฅผ ํ†ตํ•ด ๋‹จ์ผ ๋˜๋Š” ๊ทธ๋ฃน์˜ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋Š” ๊ฒฝ์šฐ(`pip install pytest-pspec` ์ดํ›„): ```bash pytest --pspec tests/test_optimization.py ``` #### ์‹คํŒจํ•œ ํ…Œ์ŠคํŠธ ์ฆ‰์‹œ ํ‘œ์‹œ[[instantly-shows-failed-tests]] [pytest-instafail](https://github.com/pytest-dev/pytest-instafail)์€ ํ…Œ์ŠคํŠธ ์„ธ์…˜์˜ ๋๊นŒ์ง€ ๊ธฐ๋‹ค๋ฆฌ์ง€ ์•Š๊ณ  ์‹คํŒจ ๋ฐ ์˜ค๋ฅ˜๋ฅผ ์ฆ‰์‹œ ํ‘œ์‹œํ•ฉ๋‹ˆ๋‹ค. 
```bash pip install pytest-instafail ``` ```bash pytest --instafail ``` ### GPU ์‚ฌ์šฉ ์—ฌ๋ถ€[[to-GPU-or-not-to-GPU]] GPU๊ฐ€ ํ™œ์„ฑํ™”๋œ ํ™˜๊ฒฝ์—์„œ, CPU ์ „์šฉ ๋ชจ๋“œ๋กœ ํ…Œ์ŠคํŠธํ•˜๋ ค๋ฉด `CUDA_VISIBLE_DEVICES=""`๋ฅผ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค: ```bash CUDA_VISIBLE_DEVICES="" pytest tests/utils/test_logging.py ``` ๋˜๋Š” ๋‹ค์ค‘ GPU๊ฐ€ ์žˆ๋Š” ๊ฒฝ์šฐ `pytest`์—์„œ ์‚ฌ์šฉํ•  GPU๋ฅผ ์ง€์ •ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, GPU `0` ๋ฐ `1`์ด ์žˆ๋Š” ๊ฒฝ์šฐ ๋‹ค์Œ์„ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash CUDA_VISIBLE_DEVICES="1" pytest tests/utils/test_logging.py ``` ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ๋‹ค๋ฅธ GPU์—์„œ ๋‹ค๋ฅธ ์ž‘์—…์„ ์‹คํ–‰ํ•˜๋ ค๋Š” ๊ฒฝ์šฐ ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ผ๋ถ€ ํ…Œ์ŠคํŠธ๋Š” ๋ฐ˜๋“œ์‹œ CPU ์ „์šฉ์œผ๋กœ ์‹คํ–‰ํ•ด์•ผ ํ•˜๋ฉฐ, ์ผ๋ถ€๋Š” CPU ๋˜๋Š” GPU ๋˜๋Š” TPU์—์„œ ์‹คํ–‰ํ•ด์•ผ ํ•˜๊ณ , ์ผ๋ถ€๋Š” ์—ฌ๋Ÿฌ GPU์—์„œ ์‹คํ–‰ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ ์Šคํ‚ต ๋ฐ์ฝ”๋ ˆ์ดํ„ฐ๋Š” ํ…Œ์ŠคํŠธ์˜ ์š”๊ตฌ ์‚ฌํ•ญ์„ CPU/GPU/TPU๋ณ„๋กœ ์„ค์ •ํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค: - `require_torch` - ์ด ํ…Œ์ŠคํŠธ๋Š” torch์—์„œ๋งŒ ์‹คํ–‰๋ฉ๋‹ˆ๋‹ค. - `require_torch_gpu` - `require_torch`์— ์ถ”๊ฐ€๋กœ ์ ์–ด๋„ 1๊ฐœ์˜ GPU๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. - `require_torch_multi_gpu` - `require_torch`์— ์ถ”๊ฐ€๋กœ ์ ์–ด๋„ 2๊ฐœ์˜ GPU๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. - `require_torch_non_multi_gpu` - `require_torch`์— ์ถ”๊ฐ€๋กœ 0๊ฐœ ๋˜๋Š” 1๊ฐœ์˜ GPU๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. - `require_torch_up_to_2_gpus` - `require_torch`์— ์ถ”๊ฐ€๋กœ 0๊ฐœ, 1๊ฐœ ๋˜๋Š” 2๊ฐœ์˜ GPU๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. - `require_torch_tpu` - `require_torch`์— ์ถ”๊ฐ€๋กœ ์ ์–ด๋„ 1๊ฐœ์˜ TPU๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. 
GPU ์š”๊ตฌ ์‚ฌํ•ญ์„ ํ‘œ๋กœ ์ •๋ฆฌํ•˜๋ฉด ์•„๋ž˜์™€ ๊ฐ™์Šต๋‹ˆ๋‹ค:

| n gpus | decorator                      |
|--------+--------------------------------|
| `>= 0` | `@require_torch`               |
| `>= 1` | `@require_torch_gpu`           |
| `>= 2` | `@require_torch_multi_gpu`     |
| `< 2`  | `@require_torch_non_multi_gpu` |
| `< 3`  | `@require_torch_up_to_2_gpus`  |

์˜ˆ๋ฅผ ๋“ค์–ด, 2๊ฐœ ์ด์ƒ์˜ GPU๊ฐ€ ์žˆ๊ณ  pytorch๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ์„ ๋•Œ์—๋งŒ ์‹คํ–‰๋˜์–ด์•ผ ํ•˜๋Š” ํ…Œ์ŠคํŠธ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค:

```python no-style
@require_torch_multi_gpu
def test_example_with_multi_gpu():
```

`tensorflow`๊ฐ€ ํ•„์š”ํ•œ ๊ฒฝ์šฐ `require_tf` ๋ฐ์ฝ”๋ ˆ์ดํ„ฐ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค:

```python no-style
@require_tf
def test_tf_thing_with_tensorflow():
```

์ด๋Ÿฌํ•œ ๋ฐ์ฝ”๋ ˆ์ดํ„ฐ๋Š” ์ค‘์ฒฉ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ๋А๋ฆฐ ํ…Œ์ŠคํŠธ๋กœ ์ง„ํ–‰๋˜๊ณ  pytorch์—์„œ ์ ์–ด๋„ ํ•˜๋‚˜์˜ GPU๊ฐ€ ํ•„์š”ํ•œ ๊ฒฝ์šฐ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์„ค์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค:

```python no-style
@require_torch_gpu
@slow
def test_example_slow_on_gpu():
```

`@parametrized`์™€ ๊ฐ™์€ ์ผ๋ถ€ ๋ฐ์ฝ”๋ ˆ์ดํ„ฐ๋Š” ํ…Œ์ŠคํŠธ ์ด๋ฆ„์„ ๋‹ค์‹œ ์ž‘์„ฑํ•˜๊ธฐ ๋•Œ๋ฌธ์— `@require_*` ์Šคํ‚ต ๋ฐ์ฝ”๋ ˆ์ดํ„ฐ๋Š” ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ์ž‘๋™ํ•˜๋ ค๋ฉด ํ•ญ์ƒ ๋งจ ๋งˆ์ง€๋ง‰์— ๋‚˜์—ด๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ์€ ์˜ฌ๋ฐ”๋ฅธ ์‚ฌ์šฉ ์˜ˆ์ž…๋‹ˆ๋‹ค:

```python no-style
@parameterized.expand(...)
@require_torch_multi_gpu
def test_integration_foo():
```

`@pytest.mark.parametrize`์—๋Š” ์ด๋Ÿฌํ•œ ์ˆœ์„œ ๋ฌธ์ œ๋Š” ์—†์œผ๋ฏ€๋กœ ์ฒ˜์Œ ํ˜น์€ ๋งˆ์ง€๋ง‰์— ์œ„์น˜์‹œํ‚ฌ ์ˆ˜ ์žˆ๊ณ  ์ด๋Ÿฌํ•œ ๊ฒฝ์šฐ์—๋„ ์ž˜ ์ž‘๋™ํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ unittest๊ฐ€ ์•„๋‹Œ ๊ฒฝ์šฐ์—๋งŒ ์ž‘๋™ํ•ฉ๋‹ˆ๋‹ค.

ํ…Œ์ŠคํŠธ ๋‚ด๋ถ€์—์„œ ๋‹ค์Œ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค:

- ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ GPU ์ˆ˜:

```python
from transformers.testing_utils import get_gpu_count

n_gpu = get_gpu_count()  # torch์™€ tf์™€ ํ•จ๊ป˜ ์ž‘๋™
```

### ๋ถ„์‚ฐ ํ›ˆ๋ จ[[distributed-training]]

`pytest`๋Š” ๋ถ„์‚ฐ ํ›ˆ๋ จ์„ ์ง์ ‘์ ์œผ๋กœ ๋‹ค๋ฃจ์ง€ ๋ชปํ•ฉ๋‹ˆ๋‹ค.
์ด๋ฅผ ์‹œ๋„ํ•˜๋ฉด ํ•˜์œ„ ํ”„๋กœ์„ธ์Šค๊ฐ€ ์˜ฌ๋ฐ”๋ฅธ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•˜์ง€ ์•Š๊ณ  `pytest`๋ผ๊ณ  ์ƒ๊ฐํ•˜๊ธฐ์— ํ…Œ์ŠคํŠธ ์Šค์œ„ํŠธ๋ฅผ ๋ฐ˜๋ณตํ•ด์„œ ์‹คํ–‰ํ•˜๊ฒŒ ๋ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ผ๋ฐ˜ ํ”„๋กœ์„ธ์Šค๋ฅผ ์ƒ์„ฑํ•œ ๋‹ค์Œ ์—ฌ๋Ÿฌ ์›Œ์ปค๋ฅผ ์ƒ์„ฑํ•˜๊ณ  IO ํŒŒ์ดํ”„๋ฅผ ๊ด€๋ฆฌํ•˜๋„๋ก ํ•˜๋ฉด ๋™์ž‘ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ์€ ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ ํ…Œ์ŠคํŠธ์ž…๋‹ˆ๋‹ค: - [test_trainer_distributed.py](https://github.com/huggingface/transformers/tree/main/tests/trainer/test_trainer_distributed.py) - [test_deepspeed.py](https://github.com/huggingface/transformers/tree/main/tests/deepspeed/test_deepspeed.py) ์‹คํ–‰ ์ง€์ ์œผ๋กœ ๋ฐ”๋กœ ์ด๋™ํ•˜๋ ค๋ฉด, ํ•ด๋‹น ํ…Œ์ŠคํŠธ์—์„œ `execute_subprocess_async` ํ˜ธ์ถœ์„ ๊ฒ€์ƒ‰ํ•˜์„ธ์š”. ์ด๋Ÿฌํ•œ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋ ค๋ฉด ์ ์–ด๋„ 2๊ฐœ์˜ GPU๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ```bash CUDA_VISIBLE_DEVICES=0,1 RUN_SLOW=1 pytest -sv tests/test_trainer_distributed.py ``` ### ์ถœ๋ ฅ ์บก์ฒ˜[[output-capture]] ํ…Œ์ŠคํŠธ ์‹คํ–‰ ์ค‘ `stdout` ๋ฐ `stderr`๋กœ ์ „์†ก๋œ ๋ชจ๋“  ์ถœ๋ ฅ์ด ์บก์ฒ˜๋ฉ๋‹ˆ๋‹ค. ํ…Œ์ŠคํŠธ๋‚˜ ์„ค์ • ๋ฉ”์†Œ๋“œ๊ฐ€ ์‹คํŒจํ•˜๋ฉด ์บก์ฒ˜๋œ ์ถœ๋ ฅ์€ ์ผ๋ฐ˜์ ์œผ๋กœ ์‹คํŒจ ์ถ”์  ์ •๋ณด์™€ ํ•จ๊ป˜ ํ‘œ์‹œ๋ฉ๋‹ˆ๋‹ค. 
์ถœ๋ ฅ ์บก์ฒ˜๋ฅผ ๋น„ํ™œ์„ฑํ™”ํ•˜๊ณ  `stdout` ๋ฐ `stderr`๋ฅผ ์ •์ƒ์ ์œผ๋กœ ๋ฐ›์œผ๋ ค๋ฉด `-s` ๋˜๋Š” `--capture=no`๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”:

```bash
pytest -s tests/utils/test_logging.py
```

ํ…Œ์ŠคํŠธ ๊ฒฐ๊ณผ๋ฅผ JUnit ํ˜•์‹์˜ ์ถœ๋ ฅ์œผ๋กœ ๋ณด๋‚ด๋ ค๋ฉด ๋‹ค์Œ์„ ์‚ฌ์šฉํ•˜์„ธ์š”:

```bash
py.test tests --junitxml=result.xml
```

### ์ƒ‰์ƒ ์กฐ์ ˆ[[color-control]]

์ƒ‰์ƒ์ด ์—†๊ฒŒ ํ•˜๋ ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์„ค์ •ํ•˜์„ธ์š”(์˜ˆ๋ฅผ ๋“ค์–ด ํฐ์ƒ‰ ๋ฐฐ๊ฒฝ์— ๋…ธ๋ž€์ƒ‰ ๊ธ€์”จ๋Š” ๊ฐ€๋…์„ฑ์ด ์ข‹์ง€ ์•Š์Šต๋‹ˆ๋‹ค):

```bash
pytest --color=no tests/utils/test_logging.py
```

### online pastebin service์— ํ…Œ์ŠคํŠธ ๋ณด๊ณ ์„œ ์ „์†ก[[sending-test-report-to-online-pastebin-service]]

๊ฐ ํ…Œ์ŠคํŠธ ์‹คํŒจ์— ๋Œ€ํ•œ URL์„ ๋งŒ๋“ญ๋‹ˆ๋‹ค:

```bash
pytest --pastebin=failed tests/utils/test_logging.py
```

์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ๊ฐ ์‹คํŒจ์— ๋Œ€ํ•œ URL์„ ์ œ๊ณตํ•˜๋Š” remote Paste service์— ํ…Œ์ŠคํŠธ ์‹คํ–‰ ์ •๋ณด๋ฅผ ์ œ์ถœํ•ฉ๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์ธ ํ…Œ์ŠคํŠธ๋ฅผ ์„ ํƒํ•  ์ˆ˜๋„ ์žˆ๊ณ  ํ˜น์€ ํŠน์ • ์‹คํŒจ๋งŒ ๋ณด๋‚ด๋ ค๋ฉด `-x`์™€ ๊ฐ™์ด ์ถ”๊ฐ€ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค.

์ „์ฒด ํ…Œ์ŠคํŠธ ์„ธ์…˜ ๋กœ๊ทธ์— ๋Œ€ํ•œ URL์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค:

```bash
pytest --pastebin=all tests/utils/test_logging.py
```

## ํ…Œ์ŠคํŠธ ์ž‘์„ฑ[[writing-tests]]

๐Ÿค— transformers ํ…Œ์ŠคํŠธ๋Š” ๋Œ€๋ถ€๋ถ„ `unittest`๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•˜์ง€๋งŒ, `pytest`์—์„œ ์‹คํ–‰๋˜๋ฏ€๋กœ ๋Œ€๋ถ€๋ถ„์˜ ๊ฒฝ์šฐ ๋‘ ์‹œ์Šคํ…œ์˜ ๊ธฐ๋Šฅ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

์ง€์›๋˜๋Š” ๊ธฐ๋Šฅ์— ๋Œ€ํ•ด [์—ฌ๊ธฐ](https://docs.pytest.org/en/stable/unittest.html)์—์„œ ํ™•์ธํ•  ์ˆ˜ ์žˆ์ง€๋งŒ, ๊ธฐ์–ตํ•ด์•ผ ํ•  ์ค‘์š”ํ•œ ์ ์€ ๋Œ€๋ถ€๋ถ„์˜ `pytest` fixture๊ฐ€ ์ž‘๋™ํ•˜์ง€ ์•Š๋Š”๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ํŒŒ๋ผ๋ฏธํ„ฐํ™”๋„ ์ž‘๋™ํ•˜์ง€ ์•Š์ง€๋งŒ, ์šฐ๋ฆฌ๋Š” ๋น„์Šทํ•œ ๋ฐฉ์‹์œผ๋กœ ์ž‘๋™ํ•˜๋Š” `parameterized` ๋ชจ๋“ˆ์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค.

### ๋งค๊ฐœ๋ณ€์ˆ˜ํ™”[[parametrization]]

๋™์ผํ•œ ํ…Œ์ŠคํŠธ๋ฅผ ๋‹ค๋ฅธ ์ธ์ˆ˜๋กœ ์—ฌ๋Ÿฌ ๋ฒˆ ์‹คํ–‰ํ•ด์•ผ ํ•˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ์ข…์ข… ์žˆ์Šต๋‹ˆ๋‹ค.
ํ…Œ์ŠคํŠธ ๋‚ด์—์„œ ์ด ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ์ง€๋งŒ, ๊ทธ๋ ‡๊ฒŒ ํ•˜๋ฉด ํ•˜๋‚˜์˜ ์ธ์ˆ˜ ์„ธํŠธ์— ๋Œ€ํ•ด ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•  ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค. ```python # test_this1.py import unittest from parameterized import parameterized class TestMathUnitTest(unittest.TestCase): @parameterized.expand( [ ("negative", -1.5, -2.0), ("integer", 1, 1.0), ("large fraction", 1.6, 1), ] ) def test_floor(self, name, input, expected): assert_equal(math.floor(input), expected) ``` ์ด์ œ ๊ธฐ๋ณธ์ ์œผ๋กœ ์ด ํ…Œ์ŠคํŠธ๋Š” `test_floor`์˜ ๋งˆ์ง€๋ง‰ 3๊ฐœ ์ธ์ˆ˜๊ฐ€ ๋งค๊ฐœ๋ณ€์ˆ˜ ๋ชฉ๋ก์˜ ํ•ด๋‹น ์ธ์ˆ˜์— ํ• ๋‹น๋˜๋Š” ๊ฒƒ์œผ๋กœ 3๋ฒˆ ์‹คํ–‰๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  `negative` ๋ฐ `integer` ๋งค๊ฐœ๋ณ€์ˆ˜ ์ง‘ํ•ฉ๋งŒ ์‹คํ–‰ํ•˜๋ ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash pytest -k "negative and integer" tests/test_mytest.py ``` ๋˜๋Š” `negative` ํ•˜์œ„ ํ…Œ์ŠคํŠธ๋ฅผ ์ œ์™ธํ•œ ๋ชจ๋“  ์„œ๋ธŒ ํ…Œ์ŠคํŠธ๋ฅผ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash pytest -k "not negative" tests/test_mytest.py ``` ์•ž์—์„œ ์–ธ๊ธ‰ํ•œ `-k` ํ•„ํ„ฐ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ ์™ธ์—๋„, ๊ฐ ์„œ๋ธŒ ํ…Œ์ŠคํŠธ์˜ ์ •ํ™•ํ•œ ์ด๋ฆ„์„ ํ™•์ธํ•œ ํ›„์— ์ผ๋ถ€ ํ˜น์€ ์ „์ฒด ์„œ๋ธŒ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```bash pytest test_this1.py --collect-only -q ``` ๊ทธ๋ฆฌ๊ณ  ๋‹ค์Œ์˜ ๋‚ด์šฉ์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค: ```bash test_this1.py::TestMathUnitTest::test_floor_0_negative test_this1.py::TestMathUnitTest::test_floor_1_integer test_this1.py::TestMathUnitTest::test_floor_2_large_fraction ``` 2๊ฐœ์˜ ํŠน์ •ํ•œ ์„œ๋ธŒ ํ…Œ์ŠคํŠธ๋งŒ ์‹คํ–‰ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash pytest test_this1.py::TestMathUnitTest::test_floor_0_negative test_this1.py::TestMathUnitTest::test_floor_1_integer ``` `transformers`์˜ ๊ฐœ๋ฐœ์ž ์ข…์†์„ฑ์— ์ด๋ฏธ ์žˆ๋Š” [parameterized](https://pypi.org/project/parameterized/) ๋ชจ๋“ˆ์€ `unittests`์™€ `pytest` ํ…Œ์ŠคํŠธ ๋ชจ๋‘์—์„œ ์ž‘๋™ํ•ฉ๋‹ˆ๋‹ค. 
๊ทธ๋Ÿฌ๋‚˜ ํ…Œ์ŠคํŠธ๊ฐ€ `unittest`๊ฐ€ ์•„๋‹Œ ๊ฒฝ์šฐ `pytest.mark.parametrize`๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค(์ด๋ฏธ ์žˆ๋Š” ์ผ๋ถ€ ํ…Œ์ŠคํŠธ์—์„œ ์‚ฌ์šฉ๋˜๋Š” ๊ฒฝ์šฐ๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์ฃผ๋กœ `examples` ํ•˜์œ„์— ์žˆ์Šต๋‹ˆ๋‹ค).

๋‹ค์Œ์€ `pytest`์˜ `parametrize` ๋งˆ์ปค๋ฅผ ์‚ฌ์šฉํ•œ ๋™์ผํ•œ ์˜ˆ์ž…๋‹ˆ๋‹ค:

```python
# test_this2.py
import math

import pytest


@pytest.mark.parametrize(
    "name, input, expected",
    [
        ("negative", -1.5, -2.0),
        ("integer", 1, 1.0),
        ("large fraction", 1.6, 1),
    ],
)
def test_floor(name, input, expected):
    assert math.floor(input) == expected
```

`parameterized`์™€ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ `pytest.mark.parametrize`๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด `-k` ํ•„ํ„ฐ๊ฐ€ ์ž‘๋™ํ•˜์ง€ ์•Š๋Š” ๊ฒฝ์šฐ์—๋„ ์‹คํ–‰ํ•  ์„œ๋ธŒ ํ…Œ์ŠคํŠธ๋ฅผ ์ •ํ™•ํ•˜๊ฒŒ ์ง€์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹จ, ์ด ๋งค๊ฐœ๋ณ€์ˆ˜ํ™” ํ•จ์ˆ˜๋Š” ์„œ๋ธŒ ํ…Œ์ŠคํŠธ์˜ ์ด๋ฆ„ ์ง‘ํ•ฉ์„ ์•ฝ๊ฐ„ ๋‹ค๋ฅด๊ฒŒ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋ชจ์Šต์ž…๋‹ˆ๋‹ค:

```bash
pytest test_this2.py --collect-only -q
```

๊ทธ๋ฆฌ๊ณ  ๋‹ค์Œ์˜ ๋‚ด์šฉ์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค:

```bash
test_this2.py::test_floor[integer-1-1.0]
test_this2.py::test_floor[negative--1.5--2.0]
test_this2.py::test_floor[large fraction-1.6-1]
```

ํŠน์ •ํ•œ ํ…Œ์ŠคํŠธ์— ๋Œ€ํ•ด์„œ๋งŒ ์‹คํ–‰ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค:

```bash
pytest test_this2.py::test_floor[negative--1.5--2.0] test_this2.py::test_floor[integer-1-1.0]
```

์ด์ „์˜ ์˜ˆ์‹œ์™€ ๊ฐ™์ด ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

### ํŒŒ์ผ ๋ฐ ๋””๋ ‰ํ„ฐ๋ฆฌ[[files-and-directories]]

ํ…Œ์ŠคํŠธ์—์„œ ์ข…์ข… ํ˜„์žฌ ํ…Œ์ŠคํŠธ ํŒŒ์ผ๊ณผ ๊ด€๋ จ๋œ ์ƒ๋Œ€์ ์ธ ์œ„์น˜๋ฅผ ์•Œ์•„์•ผ ํ•˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ํ…Œ์ŠคํŠธ๊ฐ€ ์—ฌ๋Ÿฌ ๋””๋ ‰ํ„ฐ๋ฆฌ์—์„œ ํ˜ธ์ถœ๋˜๊ฑฐ๋‚˜ ๊นŠ์ด๊ฐ€ ๋‹ค๋ฅธ ํ•˜์œ„ ๋””๋ ‰ํ„ฐ๋ฆฌ์— ์žˆ์„ ์ˆ˜ ์žˆ๊ธฐ ๋•Œ๋ฌธ์— ๊ทธ ์œ„์น˜๋ฅผ ์•„๋Š” ๊ฒƒ์€ ๊ฐ„๋‹จํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค.
`transformers.test_utils.TestCasePlus`๋ผ๋Š” ํ—ฌํผ ํด๋ž˜์Šค๋Š” ๋ชจ๋“  ๊ธฐ๋ณธ ๊ฒฝ๋กœ๋ฅผ ์ฒ˜๋ฆฌํ•˜๊ณ  ๊ฐ„๋‹จํ•œ ์•ก์„ธ์„œ๋ฅผ ์ œ๊ณตํ•˜์—ฌ ์ด ๋ฌธ์ œ๋ฅผ ํ•ด๊ฒฐํ•ฉ๋‹ˆ๋‹ค: - `pathlib` ๊ฐ์ฒด(์™„์ „ํžˆ ์ •ํ•ด์ง„ ๊ฒฝ๋กœ) - `test_file_path` - ํ˜„์žฌ ํ…Œ์ŠคํŠธ ํŒŒ์ผ ๊ฒฝ๋กœ (์˜ˆ: `__file__`) - test_file_dir` - ํ˜„์žฌ ํ…Œ์ŠคํŠธ ํŒŒ์ผ์ด ํฌํ•จ๋œ ๋””๋ ‰ํ„ฐ๋ฆฌ - tests_dir` - `tests` ํ…Œ์ŠคํŠธ ์Šค์œ„ํŠธ์˜ ๋””๋ ‰ํ„ฐ๋ฆฌ - examples_dir` - `examples` ํ…Œ์ŠคํŠธ ์Šค์œ„ํŠธ์˜ ๋””๋ ‰ํ„ฐ๋ฆฌ - repo_root_dir` - ์ €์žฅ์†Œ ๋””๋ ‰ํ„ฐ๋ฆฌ - src_dir` - `src`์˜ ๋””๋ ‰ํ„ฐ๋ฆฌ(์˜ˆ: `transformers` ํ•˜์œ„ ๋””๋ ‰ํ„ฐ๋ฆฌ๊ฐ€ ์žˆ๋Š” ๊ณณ) - ๋ฌธ์ž์—ด๋กœ ๋ณ€ํ™˜๋œ ๊ฒฝ๋กœ---์œ„์™€ ๋™์ผํ•˜์ง€๋งŒ, `pathlib` ๊ฐ์ฒด๊ฐ€ ์•„๋‹Œ ๋ฌธ์ž์—ด๋กœ ๊ฒฝ๋กœ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: - `test_file_path_str` - `test_file_dir_str` - `tests_dir_str` - `examples_dir_str` - `repo_root_dir_str` - `src_dir_str` ์œ„์˜ ๋‚ด์šฉ์„ ์‚ฌ์šฉํ•˜๋ ค๋ฉด ํ…Œ์ŠคํŠธ๊ฐ€ 'transformers.test_utils.TestCasePlus'์˜ ์„œ๋ธŒํด๋ž˜์Šค์— ์žˆ๋Š”์ง€ ํ™•์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```python from transformers.testing_utils import TestCasePlus class PathExampleTest(TestCasePlus): def test_something_involving_local_locations(self): data_dir = self.tests_dir / "fixtures/tests_samples/wmt_en_ro" ``` ๋งŒ์•ฝ `pathlib`๋ฅผ ํ†ตํ•ด ๊ฒฝ๋กœ๋ฅผ ์กฐ์ž‘ํ•  ํ•„์š”๊ฐ€ ์—†๊ฑฐ๋‚˜ ๊ฒฝ๋กœ๋ฅผ ๋ฌธ์ž์—ด๋กœ๋งŒ ํ•„์š”๋กœ ํ•˜๋Š” ๊ฒฝ์šฐ์—๋Š” `pathlib` ๊ฐ์ฒด์— `str()`์„ ํ˜ธ์ถœํ•˜๊ฑฐ๋‚˜ `_str`๋กœ ๋๋‚˜๋Š” ์ ‘๊ทผ์ž๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```python from transformers.testing_utils import TestCasePlus class PathExampleTest(TestCasePlus): def test_something_involving_stringified_locations(self): examples_dir = self.examples_dir_str ``` ### ์ž„์‹œ ํŒŒ์ผ ๋ฐ ๋””๋ ‰ํ„ฐ๋ฆฌ[[temporary-files-and-directories]] ๊ณ ์œ ํ•œ ์ž„์‹œ ํŒŒ์ผ ๋ฐ ๋””๋ ‰ํ„ฐ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์€ ๋ณ‘๋ ฌ ํ…Œ์ŠคํŠธ ์‹คํ–‰์— ์žˆ์–ด ํ•„์ˆ˜์ ์ž…๋‹ˆ๋‹ค. 
์ด๋ ‡๊ฒŒ ํ•จ์œผ๋กœ์จ ํ…Œ์ŠคํŠธ๋“ค์ด ์„œ๋กœ์˜ ๋ฐ์ดํ„ฐ๋ฅผ ๋ฎ์–ด์“ฐ์ง€ ์•Š๊ฒŒ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ ์šฐ๋ฆฌ๋Š” ์ƒ์„ฑ๋œ ํ…Œ์ŠคํŠธ์˜ ์ข…๋ฃŒ ๋‹จ๊ณ„์—์„œ ์ด๋Ÿฌํ•œ ์ž„์‹œ ํŒŒ์ผ ๋ฐ ๋””๋ ‰ํ„ฐ๋ฆฌ๋ฅผ ์ œ๊ฑฐํ•˜๊ณ  ์‹ถ์Šต๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์ด๋Ÿฌํ•œ ์š”๊ตฌ ์‚ฌํ•ญ์„ ์ถฉ์กฑ์‹œ์ผœ์ฃผ๋Š” `tempfile`๊ณผ ๊ฐ™์€ ํŒจํ‚ค์ง€๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ํ…Œ์ŠคํŠธ๋ฅผ ๋””๋ฒ„๊น…ํ•  ๋•Œ๋Š” ์ž„์‹œ ํŒŒ์ผ์ด๋‚˜ ๋””๋ ‰ํ„ฐ๋ฆฌ์— ๋“ค์–ด๊ฐ€๋Š” ๋‚ด์šฉ์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์–ด์•ผ ํ•˜๋ฉฐ, ์žฌ์‹คํ–‰๋˜๋Š” ๊ฐ ํ…Œ์ŠคํŠธ๋งˆ๋‹ค ์ž„์‹œ ํŒŒ์ผ์ด๋‚˜ ๋””๋ ‰ํ„ฐ๋ฆฌ์˜ ๊ฒฝ๋กœ์— ๋Œ€ํ•ด ๋ฌด์ž‘์œ„ ๊ฐ’์ด ์•„๋‹Œ ์ •ํ™•ํ•œ ๊ฐ’์„ ์•Œ๊ณ  ์‹ถ์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค. `transformers.test_utils.TestCasePlus`๋ผ๋Š” ๋„์šฐ๋ฏธ ํด๋ž˜์Šค๋Š” ์ด๋Ÿฌํ•œ ๋ชฉ์ ์— ๊ฐ€์žฅ ์ ํ•ฉํ•ฉ๋‹ˆ๋‹ค. ์ด ํด๋ž˜์Šค๋Š” `unittest.TestCase`์˜ ํ•˜์œ„ ํด๋ž˜์Šค์ด๋ฏ€๋กœ, ์šฐ๋ฆฌ๋Š” ์ด๊ฒƒ์„ ํ…Œ์ŠคํŠธ ๋ชจ๋“ˆ์—์„œ ์‰ฝ๊ฒŒ ์ƒ์†ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ์€ ํ•ด๋‹น ํด๋ž˜์Šค๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ์˜ˆ์‹œ์ž…๋‹ˆ๋‹ค: ```python from transformers.testing_utils import TestCasePlus class ExamplesTests(TestCasePlus): def test_whatever(self): tmp_dir = self.get_auto_remove_tmp_dir() ``` ์ด ์ฝ”๋“œ๋Š” ๊ณ ์œ ํ•œ ์ž„์‹œ ๋””๋ ‰ํ„ฐ๋ฆฌ๋ฅผ ์ƒ์„ฑํ•˜๊ณ  `tmp_dir`์„ ํ•ด๋‹น ์œ„์น˜๋กœ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค. - ๊ณ ์œ ํ•œ ์ž„์‹œ ๋””๋ ‰ํ„ฐ๋ฆฌ๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค: ```python def test_whatever(self): tmp_dir = self.get_auto_remove_tmp_dir() ``` `tmp_dir`์—๋Š” ์ƒ์„ฑ๋œ ์ž„์‹œ ๋””๋ ‰ํ„ฐ๋ฆฌ์˜ ๊ฒฝ๋กœ๊ฐ€ ํฌํ•จ๋ฉ๋‹ˆ๋‹ค. ์ด๋Š” ํ…Œ์ŠคํŠธ์˜ ์ข…๋ฃŒ ๋‹จ๊ณ„์—์„œ ์ž๋™์œผ๋กœ ์ œ๊ฑฐ๋ฉ๋‹ˆ๋‹ค. - ์„ ํƒํ•œ ๊ฒฝ๋กœ๋กœ ์ž„์‹œ ๋””๋ ‰ํ„ฐ๋ฆฌ ์ƒ์„ฑ ํ›„์— ํ…Œ์ŠคํŠธ ์‹œ์ž‘ ์ „์— ๋น„์–ด ์žˆ๋Š” ์ƒํƒœ์ธ์ง€ ํ™•์ธํ•˜๊ณ , ํ…Œ์ŠคํŠธ ํ›„์—๋Š” ๋น„์šฐ์ง€ ๋งˆ์„ธ์š”. ```python def test_whatever(self): tmp_dir = self.get_auto_remove_tmp_dir("./xxx") ``` ์ด๊ฒƒ์€ ๋””๋ฒ„๊น…ํ•  ๋•Œ ํŠน์ • ๋””๋ ‰ํ„ฐ๋ฆฌ๋ฅผ ๋ชจ๋‹ˆํ„ฐ๋งํ•˜๊ณ , ๊ทธ ๋””๋ ‰ํ„ฐ๋ฆฌ์— ์ด์ „์— ์‹คํ–‰๋œ ํ…Œ์ŠคํŠธ๊ฐ€ ๋ฐ์ดํ„ฐ๋ฅผ ๋‚จ๊ธฐ์ง€ ์•Š๋„๋ก ํ•˜๋Š” ๋ฐ์— ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. 
- `before` ๋ฐ `after` ์ธ์ˆ˜๋ฅผ ์ง์ ‘ ์˜ค๋ฒ„๋ผ์ด๋”ฉํ•˜์—ฌ ๊ธฐ๋ณธ ๋™์ž‘์„ ๋ณ€๊ฒฝํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ ๋‹ค์Œ ์ค‘ ํ•˜๋‚˜์˜ ๋™์ž‘์œผ๋กœ ์ด์–ด์ง‘๋‹ˆ๋‹ค: - `before=True`: ํ…Œ์ŠคํŠธ ์‹œ์ž‘ ์‹œ ์ž„์‹œ ๋””๋ ‰ํ„ฐ๋ฆฌ๊ฐ€ ํ•ญ์ƒ ์ง€์›Œ์ง‘๋‹ˆ๋‹ค. - `before=False`: ์ž„์‹œ ๋””๋ ‰ํ„ฐ๋ฆฌ๊ฐ€ ์ด๋ฏธ ์กด์žฌํ•˜๋Š” ๊ฒฝ์šฐ ๊ธฐ์กด ํŒŒ์ผ์€ ๊ทธ๋Œ€๋กœ ๋‚จ์Šต๋‹ˆ๋‹ค. - `after=True`: ํ…Œ์ŠคํŠธ ์ข…๋ฃŒ ์‹œ ์ž„์‹œ ๋””๋ ‰ํ„ฐ๋ฆฌ๊ฐ€ ํ•ญ์ƒ ์‚ญ์ œ๋ฉ๋‹ˆ๋‹ค. - `after=False`: ํ…Œ์ŠคํŠธ ์ข…๋ฃŒ ์‹œ ์ž„์‹œ ๋””๋ ‰ํ„ฐ๋ฆฌ๊ฐ€ ํ•ญ์ƒ ๊ทธ๋Œ€๋กœ ์œ ์ง€๋ฉ๋‹ˆ๋‹ค. <Tip> `rm -r`์— ํ•ด๋‹นํ•˜๋Š” ๋ช…๋ น์„ ์•ˆ์ „ํ•˜๊ฒŒ ์‹คํ–‰ํ•˜๊ธฐ ์œ„ํ•ด, ๋ช…์‹œ์ ์ธ `tmp_dir`์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ ํ”„๋กœ์ ํŠธ ์ €์žฅ์†Œ ์ฒดํฌ ์•„์›ƒ์˜ ํ•˜์œ„ ๋””๋ ‰ํ„ฐ๋ฆฌ๋งŒ ํ—ˆ์šฉ๋ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์‹ค์ˆ˜๋กœ `/tmp`๊ฐ€ ์•„๋‹Œ ์ค‘์š”ํ•œ ํŒŒ์ผ ์‹œ์Šคํ…œ์˜ ์ผ๋ถ€๊ฐ€ ์‚ญ์ œ๋˜์ง€ ์•Š๋„๋ก ํ•ญ์ƒ `./`๋กœ ์‹œ์ž‘ํ•˜๋Š” ๊ฒฝ๋กœ๋ฅผ ์ „๋‹ฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. </Tip> <Tip> ๊ฐ ํ…Œ์ŠคํŠธ๋Š” ์—ฌ๋Ÿฌ ๊ฐœ์˜ ์ž„์‹œ ๋””๋ ‰ํ„ฐ๋ฆฌ๋ฅผ ๋“ฑ๋กํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ, ๋ณ„๋„๋กœ ์š”์ฒญํ•˜์ง€ ์•Š๋Š” ํ•œ ๋ชจ๋‘ ์ž๋™์œผ๋กœ ์ œ๊ฑฐ๋ฉ๋‹ˆ๋‹ค. </Tip> ### ์ž„์‹œ sys.path ์˜ค๋ฒ„๋ผ์ด๋“œ[[temporary-sys.path-override]] `sys.path`๋ฅผ ๋‹ค๋ฅธ ํ…Œ์ŠคํŠธ๋กœ ์ž„์‹œ๋กœ ์˜ค๋ฒ„๋ผ์ด๋“œํ•˜๊ธฐ ์œ„ํ•ด ์˜ˆ๋ฅผ ๋“ค์–ด `ExtendSysPath` ์ปจํ…์ŠคํŠธ ๊ด€๋ฆฌ์ž๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```python import os from transformers.testing_utils import ExtendSysPath bindir = os.path.abspath(os.path.dirname(__file__)) with ExtendSysPath(f"{bindir}/.."): from test_trainer import TrainerIntegrationCommon # noqa ``` ### ํ…Œ์ŠคํŠธ ๊ฑด๋„ˆ๋›ฐ๊ธฐ[[skipping-tests]] ์ด๊ฒƒ์€ ๋ฒ„๊ทธ๊ฐ€ ๋ฐœ๊ฒฌ๋˜์–ด ์ƒˆ๋กœ์šด ํ…Œ์ŠคํŠธ๊ฐ€ ์ž‘์„ฑ๋˜์—ˆ์ง€๋งŒ ์•„์ง ๊ทธ ๋ฒ„๊ทธ๊ฐ€ ์ˆ˜์ •๋˜์ง€ ์•Š์€ ๊ฒฝ์šฐ์— ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด ํ…Œ์ŠคํŠธ๋ฅผ ์ฃผ ์ €์žฅ์†Œ์— ์ปค๋ฐ‹ํ•˜๋ ค๋ฉด `make test` ์ค‘์— ๊ฑด๋„ˆ๋›ฐ๋„๋ก ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 
๋ฐฉ๋ฒ•: - **skip**์€ ํ…Œ์ŠคํŠธ๊ฐ€ ์ผ๋ถ€ ์กฐ๊ฑด์ด ์ถฉ์กฑ๋  ๊ฒฝ์šฐ์—๋งŒ ํ†ต๊ณผ๋  ๊ฒƒ์œผ๋กœ ์˜ˆ์ƒ๋˜๊ณ , ๊ทธ๋ ‡์ง€ ์•Š์œผ๋ฉด pytest๊ฐ€ ์ „์ฒด ํ…Œ์ŠคํŠธ๋ฅผ ๊ฑด๋„ˆ๋›ฐ์–ด์•ผ ํ•จ์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์ธ ์˜ˆ๋กœ๋Š” Windows๊ฐ€ ์•„๋‹Œ ํ”Œ๋žซํผ์—์„œ Windows ์ „์šฉ ํ…Œ์ŠคํŠธ๋ฅผ ๊ฑด๋„ˆ๋›ฐ๊ฑฐ๋‚˜ ์™ธ๋ถ€ ๋ฆฌ์†Œ์Šค(์˜ˆ๋ฅผ ๋“ค์–ด ๋ฐ์ดํ„ฐ๋ฒ ์ด์Šค)์— ์˜์กดํ•˜๋Š” ํ…Œ์ŠคํŠธ๋ฅผ ๊ฑด๋„ˆ๋›ฐ๋Š” ๊ฒƒ์ด ์žˆ์Šต๋‹ˆ๋‹ค. - **xfail**์€ ํ…Œ์ŠคํŠธ๊ฐ€ ํŠน์ •ํ•œ ์ด์œ ๋กœ ์ธํ•ด ์‹คํŒจํ•  ๊ฒƒ์œผ๋กœ ์˜ˆ์ƒํ•˜๋Š” ๊ฒƒ์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์ธ ์˜ˆ๋กœ๋Š” ์•„์ง ๊ตฌํ˜„๋˜์ง€ ์•Š์€ ๊ธฐ๋Šฅ์ด๋‚˜ ์•„์ง ์ˆ˜์ •๋˜์ง€ ์•Š์€ ๋ฒ„๊ทธ์˜ ํ…Œ์ŠคํŠธ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. `xfail`๋กœ ํ‘œ์‹œ๋œ ํ…Œ์ŠคํŠธ๊ฐ€ ์˜ˆ์ƒ๋Œ€๋กœ ์‹คํŒจํ•˜์ง€ ์•Š๊ณ  ํ†ต๊ณผ๋œ ๊ฒฝ์šฐ, ์ด๊ฒƒ์€ xpass์ด๋ฉฐ ํ…Œ์ŠคํŠธ ๊ฒฐ๊ณผ ์š”์•ฝ์— ๊ธฐ๋ก๋ฉ๋‹ˆ๋‹ค. ๋‘ ๊ฐ€์ง€ ์ค‘์š”ํ•œ ์ฐจ์ด์  ์ค‘ ํ•˜๋‚˜๋Š” `skip`์€ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜์ง€ ์•Š์ง€๋งŒ `xfail`์€ ์‹คํ–‰ํ•œ๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์˜ค๋ฅ˜๊ฐ€ ์žˆ๋Š” ์ฝ”๋“œ๊ฐ€ ์ผ๋ถ€ ํ…Œ์ŠคํŠธ์— ์˜ํ–ฅ์„ ๋ฏธ์น  ์ˆ˜ ์žˆ๋Š” ๊ฒฝ์šฐ `xfail`์„ ์‚ฌ์šฉํ•˜์ง€ ๋งˆ์„ธ์š”. 
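์•ž์„œ ์„ค๋ช…ํ•œ `skip`๊ณผ `xfail`์˜ ํ•ต์‹ฌ ์ฐจ์ด(ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜์ง€ ์•Š์Œ vs ์‹คํ–‰ํ•˜๋˜ ์‹คํŒจ๋ฅผ ์˜ˆ์ƒํ•จ)๋Š” ํ‘œ์ค€ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ `unittest`๋งŒ์œผ๋กœ๋„ ์ง์ ‘ ํ™•์ธํ•ด ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” `@unittest.expectedFailure`๊ฐ€ pytest์˜ `xfail`์— ๋Œ€์‘ํ•˜๋Š” ๊ฐœ๋…์ด๋ผ๋Š” ๊ฐ€์ • ํ•˜์— ์ž‘์„ฑํ•œ ์ตœ์†Œํ•œ์˜ ์Šค์ผ€์น˜์ž…๋‹ˆ๋‹ค:

```python
import unittest


class SkipVsXfailDemo(unittest.TestCase):
    @unittest.skip("์•„์ง ์ˆ˜์ •๋˜์ง€ ์•Š์€ ๋ฒ„๊ทธ")
    def test_skipped(self):
        # skip๋œ ํ…Œ์ŠคํŠธ๋Š” ๋ณธ๋ฌธ์ด ์•„์˜ˆ ์‹คํ–‰๋˜์ง€ ์•Š์œผ๋ฏ€๋กœ ์ด ์˜ˆ์™ธ๋Š” ๋ฐœ์ƒํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค
        raise RuntimeError("never runs")

    @unittest.expectedFailure  # pytest์˜ xfail์— ํ•ด๋‹น: ์‹คํ–‰๋˜์ง€๋งŒ ์‹คํŒจ๊ฐ€ ์˜ˆ์ƒ๋ฉ๋‹ˆ๋‹ค
    def test_expected_failure(self):
        self.assertEqual(1, 2)


suite = unittest.defaultTestLoader.loadTestsFromTestCase(SkipVsXfailDemo)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(len(result.skipped), len(result.expectedFailures), result.wasSuccessful())
```

skip๋œ ํ…Œ์ŠคํŠธ๋Š” ๋ณธ๋ฌธ์ด ์‹คํ–‰๋˜์ง€ ์•Š์•„ `RuntimeError`๊ฐ€ ๋ฐœ์ƒํ•˜์ง€ ์•Š๊ณ , `expectedFailure`๋กœ ํ‘œ์‹œ๋œ ํ…Œ์ŠคํŠธ๋Š” ์‹ค์ œ๋กœ ์‹คํ–‰๋œ ๋’ค "์˜ˆ์ƒ๋œ ์‹คํŒจ"๋กœ ๊ธฐ๋ก๋˜๋ฉฐ, ์ „์ฒด ๊ฒฐ๊ณผ๋Š” ์—ฌ์ „ํžˆ ์„ฑ๊ณต์œผ๋กœ ์ทจ๊ธ‰๋ฉ๋‹ˆ๋‹ค.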
#### ๊ตฌํ˜„[[implementation]]

- ์ „์ฒด ํ…Œ์ŠคํŠธ๋ฅผ ๋ฌด์กฐ๊ฑด ๊ฑด๋„ˆ๋›ฐ๋ ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์ด ํ•  ์ˆ�˜ ์žˆ์Šต๋‹ˆ๋‹ค:

```python no-style
@unittest.skip("this bug needs to be fixed")
def test_feature_x():
```

๋˜๋Š” pytest๋ฅผ ํ†ตํ•ด:

```python no-style
@pytest.mark.skip(reason="this bug needs to be fixed")
```

๋˜๋Š” `xfail` ๋ฐฉ์‹์œผ๋กœ:

```python no-style
@pytest.mark.xfail
def test_feature_x():
```

- ํ…Œ์ŠคํŠธ ๋‚ด๋ถ€์—์„œ ๋‚ด๋ถ€ ํ™•์ธ์— ๋”ฐ๋ผ ํ…Œ์ŠคํŠธ๋ฅผ ๊ฑด๋„ˆ๋›ฐ๋Š” ๋ฐฉ๋ฒ•์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค:

```python
def test_feature_x():
    if not has_something():
        pytest.skip("unsupported configuration")
```

๋˜๋Š” ๋ชจ๋“ˆ ์ „์ฒด:

```python
import pytest

if not pytest.config.getoption("--custom-flag"):
    pytest.skip("--custom-flag is missing, skipping tests", allow_module_level=True)
```

๋˜๋Š” `xfail` ๋ฐฉ์‹์œผ๋กœ:

```python
def test_feature_x():
    pytest.xfail("expected to fail until bug XYZ is fixed")
```

- ๊ฐ€์ ธ์˜ค๋ ค๋Š”(import) ๋ชจ๋“ˆ์ด ์—†์„ ๋•Œ ๊ทธ ๋ชจ๋“ˆ์˜ ๋ชจ๋“  ํ…Œ์ŠคํŠธ๋ฅผ ๊ฑด๋„ˆ๋›ฐ๋Š” ๋ฐฉ๋ฒ•:

```python
docutils = pytest.importorskip("docutils", minversion="0.3")
```

- ์กฐ๊ฑด์— ๋”ฐ๋ผ ํ…Œ์ŠคํŠธ๋ฅผ ๊ฑด๋„ˆ๋›ฐ๋Š” ๋ฐฉ๋ฒ•:

```python no-style
@pytest.mark.skipif(sys.version_info < (3,6), reason="requires python3.6 or higher")
def test_feature_x():
```

๋˜๋Š”:

```python no-style
@unittest.skipIf(torch_device == "cpu", "Can't do half precision")
def test_feature_x():
```

๋˜๋Š” ๋ชจ๋“ˆ ์ „์ฒด๋ฅผ ๊ฑด๋„ˆ๋›ฐ๋Š” ๋ฐฉ๋ฒ•:

```python no-style
@pytest.mark.skipif(sys.platform == 'win32', reason="does not run on windows")
class TestClass():
    def test_feature_x(self):
```

๋ณด๋‹ค ์ž์„ธํ•œ ์˜ˆ์ œ ๋ฐ ๋ฐฉ๋ฒ•์€ [์—ฌ๊ธฐ](https://docs.pytest.org/en/latest/skipping.html)์—์„œ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

### ๋А๋ฆฐ ํ…Œ์ŠคํŠธ[[slow-tests]]

ํ…Œ์ŠคํŠธ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋Š” ์ง€์†์ ์œผ๋กœ ํ™•์žฅ๋˜๊ณ  ์žˆ์œผ๋ฉฐ, ์ผ๋ถ€ ํ…Œ์ŠคํŠธ๋Š” ์‹คํ–‰ํ•˜๋Š” ๋ฐ ๋ช‡ ๋ถ„์ด ๊ฑธ๋ฆฝ๋‹ˆ๋‹ค.
๊ทธ๋ฆฌ๊ณ  ์šฐ๋ฆฌ์—๊ฒŒ๋Š” ํ…Œ์ŠคํŠธ ์Šค์œ„ํŠธ๊ฐ€ CI๋ฅผ ํ†ตํ•ด ์™„๋ฃŒ๋˜๊ธฐ๊นŒ์ง€ ํ•œ ์‹œ๊ฐ„์„ ๊ธฐ๋‹ค๋ฆด ์—ฌ์œ ๊ฐ€ ์—†์Šต๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ํ•„์ˆ˜ ํ…Œ์ŠคํŠธ๋ฅผ ์œ„ํ•œ ์ผ๋ถ€ ์˜ˆ์™ธ๋ฅผ ์ œ์™ธํ•˜๊ณ  ๋А๋ฆฐ ํ…Œ์ŠคํŠธ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ํ‘œ์‹œํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```python no-style from transformers.testing_utils import slow @slow def test_integration_foo(): ``` `@slow`๋กœ ํ‘œ์‹œ๋œ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋ ค๋ฉด `RUN_SLOW=1` ํ™˜๊ฒฝ ๋ณ€์ˆ˜๋ฅผ ์„ค์ •ํ•˜์„ธ์š”. ์˜ˆ๋ฅผ ๋“ค์–ด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```bash RUN_SLOW=1 pytest tests ``` `@parameterized`์™€ ๊ฐ™์€ ๋ช‡ ๊ฐ€์ง€ ๋ฐ์ฝ”๋ ˆ์ดํ„ฐ๋Š” ํ…Œ์ŠคํŠธ ์ด๋ฆ„์„ ๋‹ค์‹œ ์ž‘์„ฑํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋ฏ€๋กœ `@slow`์™€ ๋‚˜๋จธ์ง€ ๊ฑด๋„ˆ๋›ฐ๊ธฐ ๋ฐ์ฝ”๋ ˆ์ดํ„ฐ `@require_*`๊ฐ€ ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ์ž‘๋™๋˜๋ ค๋ฉด ๋งˆ์ง€๋ง‰์— ๋‚˜์—ด๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ์€ ์˜ฌ๋ฐ”๋ฅธ ์‚ฌ์šฉ ์˜ˆ์ž…๋‹ˆ๋‹ค. ```python no-style @parameterized.expand(...) @slow def test_integration_foo(): ``` ์ด ๋ฌธ์„œ์˜ ์ดˆ๋ฐ˜๋ถ€์— ์„ค๋ช…๋œ ๊ฒƒ์ฒ˜๋Ÿผ ๋А๋ฆฐ ํ…Œ์ŠคํŠธ๋Š” PR์˜ CI ํ™•์ธ์ด ์•„๋‹Œ ์˜ˆ์•ฝ๋œ ์ผ์ • ๊ธฐ๋ฐ˜์œผ๋กœ ์‹คํ–‰๋ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ PR ์ œ์ถœ ์ค‘์— ์ผ๋ถ€ ๋ฌธ์ œ๋ฅผ ๋†“์นœ ์ฑ„๋กœ ๋ณ‘ํ•ฉ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋ฌธ์ œ๋“ค์€ ๋‹ค์Œ๋ฒˆ์˜ ์˜ˆ์ •๋œ CI ์ž‘์—… ์ค‘์— ๊ฐ์ง€๋ฉ๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ PR์„ ์ œ์ถœํ•˜๊ธฐ ์ „์— ์ž์‹ ์˜ ์ปดํ“จํ„ฐ์—์„œ ๋А๋ฆฐ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋Š” ๊ฒƒ ๋˜ํ•œ ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ๋А๋ฆฐ ํ…Œ์ŠคํŠธ๋กœ ํ‘œ์‹œํ•ด์•ผ ํ•˜๋Š”์ง€ ์—ฌ๋ถ€๋ฅผ ๊ฒฐ์ •ํ•˜๋Š” ๋Œ€๋žต์ ์ธ ๊ฒฐ์ • ๊ธฐ์ค€์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค. ๋งŒ์•ฝ ํ…Œ์ŠคํŠธ๊ฐ€ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ๋‚ด๋ถ€ ๊ตฌ์„ฑ ์š”์†Œ ์ค‘ ํ•˜๋‚˜์— ์ง‘์ค‘๋˜์–ด ์žˆ๋‹ค๋ฉด(์˜ˆ: ๋ชจ๋ธ๋ง ํŒŒ์ผ, ํ† ํฐํ™” ํŒŒ์ผ, ํŒŒ์ดํ”„๋ผ์ธ), ํ•ด๋‹น ํ…Œ์ŠคํŠธ๋ฅผ ๋А๋ฆฐ ํ…Œ์ŠคํŠธ ์Šค์œ„ํŠธ์—์„œ ์‹คํ–‰ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋งŒ์•ฝ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ๋‹ค๋ฅธ ์ธก๋ฉด(์˜ˆ: ๋ฌธ์„œ ๋˜๋Š” ์˜ˆ์ œ)์— ์ง‘์ค‘๋˜์–ด ์žˆ๋‹ค๋ฉด, ํ•ด๋‹น ํ…Œ์ŠคํŠธ๋ฅผ ๋А๋ฆฐ ํ…Œ์ŠคํŠธ ์Šค์œ„ํŠธ์—์„œ ์‹คํ–‰ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ์ด ์ ‘๊ทผ ๋ฐฉ์‹์„ ๋ณด์™„ํ•˜๊ธฐ ์œ„ํ•ด ์˜ˆ์™ธ๋ฅผ ๋งŒ๋“ค์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 
- ๋ฌด๊ฑฐ์šด ๊ฐ€์ค‘์น˜ ์„ธํŠธ๋‚˜ 50MB๋ณด๋‹ค ํฐ ๋ฐ์ดํ„ฐ์…‹์„ ๋‹ค์šด๋กœ๋“œํ•ด์•ผ ํ•˜๋Š” ๋ชจ๋“  ํ…Œ์ŠคํŠธ(์˜ˆ: ๋ชจ๋ธ ํ†ตํ•ฉ ํ…Œ์ŠคํŠธ, ํ† ํฌ๋‚˜์ด์ € ํ†ตํ•ฉ ํ…Œ์ŠคํŠธ, ํŒŒ์ดํ”„๋ผ์ธ ํ†ตํ•ฉ ํ…Œ์ŠคํŠธ)๋ฅผ ๋А๋ฆฐ ํ…Œ์ŠคํŠธ๋กœ ์„ค์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ƒˆ๋กœ์šด ๋ชจ๋ธ์„ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒฝ์šฐ ํ†ตํ•ฉ ํ…Œ์ŠคํŠธ์šฉ์œผ๋กœ ๋ฌด์ž‘์œ„ ๊ฐ€์ค‘์น˜๋กœ ์ž‘์€ ๋ฒ„์ „์„ ๋งŒ๋“ค์–ด ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด ๋‚ด์šฉ์€ ์•„๋ž˜ ๋‹จ๋ฝ์—์„œ ์„ค๋ช…๋ฉ๋‹ˆ๋‹ค. - ํŠน๋ณ„ํžˆ ๋น ๋ฅด๊ฒŒ ์‹คํ–‰๋˜๋„๋ก ์ตœ์ ํ™”๋˜์ง€ ์•Š์€ ํ•™์Šต์„ ์ˆ˜ํ–‰ํ•ด์•ผ ํ•˜๋Š” ํ…Œ์ŠคํŠธ๋Š” ๋А๋ฆฐ ํ…Œ์ŠคํŠธ๋กœ ์„ค์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. - ๋А๋ฆฌ์ง€ ์•Š์•„์•ผ ํ•  ํ…Œ์ŠคํŠธ ์ค‘ ์ผ๋ถ€๊ฐ€ ๊ทน๋„๋กœ ๋А๋ฆฐ ๊ฒฝ์šฐ ์˜ˆ์™ธ๋ฅผ ๋„์ž…ํ•˜๊ณ  ์ด๋ฅผ `@slow`๋กœ ์„ค์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋Œ€์šฉ๋Ÿ‰ ํŒŒ์ผ์„ ๋””์Šคํฌ์— ์ €์žฅํ•˜๊ณ  ๋ถˆ๋Ÿฌ์˜ค๋Š” ์ž๋™ ๋ชจ๋ธ๋ง ํ…Œ์ŠคํŠธ๋Š” `@slow`์œผ๋กœ ํ‘œ์‹œ๋œ ํ…Œ์ŠคํŠธ์˜ ์ข‹์€ ์˜ˆ์ž…๋‹ˆ๋‹ค. - CI์—์„œ 1์ดˆ ์ด๋‚ด์— ํ…Œ์ŠคํŠธ๊ฐ€ ์™„๋ฃŒ๋˜๋Š” ๊ฒฝ์šฐ(๋‹ค์šด๋กœ๋“œ ํฌํ•จ)์—๋Š” ๋А๋ฆฐ ํ…Œ์ŠคํŠธ๊ฐ€ ์•„๋‹ˆ์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋А๋ฆฐ ํ…Œ์ŠคํŠธ๊ฐ€ ์•„๋‹Œ ๊ฒฝ์šฐ์—๋Š” ๋‹ค์–‘ํ•œ ๋‚ด๋ถ€๋ฅผ ์™„์ „ํžˆ ์ปค๋ฒ„ํ•˜๋ฉด์„œ ๋น ๋ฅด๊ฒŒ ์œ ์ง€๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ๋ฌด์ž‘์œ„ ๊ฐ€์ค‘์น˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํŠน๋ณ„ํžˆ ์ƒ์„ฑ๋œ ์ž‘์€ ๋ชจ๋ธ๋กœ ํ…Œ์ŠคํŠธํ•˜๋ฉด ์ƒ๋‹นํ•œ ์ปค๋ฒ„๋ฆฌ์ง€๋ฅผ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋ชจ๋ธ์€ ์ตœ์†Œํ•œ์˜ ๋ ˆ์ด์–ด ์ˆ˜(์˜ˆ: 2), ์–ดํœ˜ ํฌ๊ธฐ(์˜ˆ: 1000) ๋“ฑ์˜ ์š”์†Œ๋งŒ ๊ฐ€์ง‘๋‹ˆ๋‹ค. ๊ทธ๋Ÿฐ ๋‹ค์Œ `@slow` ํ…Œ์ŠคํŠธ๋Š” ๋Œ€ํ˜• ๋А๋ฆฐ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜์—ฌ ์ •์„ฑ์ ์ธ ํ…Œ์ŠคํŠธ๋ฅผ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ์ž‘์€ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ™•์ธํ•˜๋ ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์ด *tiny* ๋ชจ๋ธ์„ ์ฐพ์•„๋ณด์„ธ์š”. ```bash grep tiny tests examples ``` ๋‹ค์Œ์€ ์ž‘์€ ๋ชจ๋ธ[stas/tiny-wmt19-en-de](https://huggingface.co/stas/tiny-wmt19-en-de)์„ ๋งŒ๋“  [script](https://github.com/huggingface/transformers/tree/main/scripts/fsmt/fsmt-make-tiny-model.py) ์˜ˆ์‹œ์ž…๋‹ˆ๋‹ค. ํŠน์ • ๋ชจ๋ธ์˜ ์•„ํ‚คํ…์ฒ˜์— ๋งž๊ฒŒ ์‰ฝ๊ฒŒ ์กฐ์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
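๋ ˆ์ด์–ด ์ˆ˜์™€ ์–ดํœ˜ ํฌ๊ธฐ๋ฅผ ์ค„์ด๋Š” ๊ฒƒ๋งŒ์œผ๋กœ ๋ชจ๋ธ์ด ์–ผ๋งˆ๋‚˜ ์ž‘์•„์ง€๋Š”์ง€ ๊ฐ์„ ์žก๊ธฐ ์œ„ํ•œ ๋Œ€๋žต์ ์ธ ๊ณ„์‚ฐ ์Šค์ผ€์น˜์ž…๋‹ˆ๋‹ค. ์•„๋ž˜ ์ˆ˜์‹๊ณผ ๊ธฐ๋ณธ๊ฐ’(`max_position=512` ๋“ฑ)์€ BERT ๊ณ„์—ด ๊ตฌ์กฐ๋ฅผ ๋‹จ์ˆœํ™”ํ•œ ๊ฐ€์ •์ด๋ฉฐ, bias์™€ LayerNorm ๋“ฑ์˜ ์ž‘์€ ํ•ญ์€ ์ƒ๋žตํ–ˆ์Šต๋‹ˆ๋‹ค:

```python
# ๊ฐ€์ •: BERT ๊ณ„์—ด ์•„ํ‚คํ…์ฒ˜์˜ ํŒŒ๋ผ๋ฏธํ„ฐ ์ˆ˜๋ฅผ ๋Œ€๋žต์ ์œผ๋กœ ์„ธ์–ด ๋ณด๋Š” ํ•จ์ˆ˜์ž…๋‹ˆ๋‹ค
def approx_param_count(vocab_size, hidden_size, num_layers, intermediate_size, max_position=512):
    # ๋‹จ์–ด·์œ„์น˜·ํ† ํฐ ํƒ€์ž… ์ž„๋ฒ ๋”ฉ
    embeddings = (vocab_size + max_position + 2) * hidden_size
    per_layer = (
        4 * hidden_size * hidden_size          # Q, K, V, ์ถœ๋ ฅ ํ”„๋กœ์ ์…˜
        + 2 * hidden_size * intermediate_size  # FFN ์™•๋ณต
    )
    return embeddings + num_layers * per_layer


# ์ผ๋ฐ˜์ ์ธ ํฌ๊ธฐ์˜ ์„ค์ • vs ๋ณธ๋ฌธ์—์„œ ๋งํ•œ "tiny" ์ˆ˜์ค€์˜ ์„ค์ •
full = approx_param_count(vocab_size=30522, hidden_size=768, num_layers=12, intermediate_size=3072)
tiny = approx_param_count(vocab_size=1000, hidden_size=32, num_layers=2, intermediate_size=64)
print(f"full ~ {full:,}, tiny ~ {tiny:,}")
```

์ด์ฒ˜๋Ÿผ ์–ดํœ˜ 1000, ๋ ˆ์ด์–ด 2๊ฐœ ์ˆ˜์ค€์˜ ์„ค์ •์€ ์ˆ˜๋งŒ ๊ฐœ์˜ ํŒŒ๋ผ๋ฏธํ„ฐ๋งŒ ๊ฐ–๊ฒŒ ๋˜๋ฏ€๋กœ, ๋ฌด์ž‘์œ„ ๊ฐ€์ค‘์น˜๋กœ๋„ ๋‹ค์–‘ํ•œ ์ฝ”๋“œ ๊ฒฝ๋กœ๋ฅผ ๋น ๋ฅด๊ฒŒ ์ปค๋ฒ„ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.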
์˜ˆ๋ฅผ ๋“ค์–ด ๋Œ€์šฉ๋Ÿ‰ ๋ชจ๋ธ์„ ๋‹ค์šด๋กœ๋“œํ•˜๋Š” ๊ฒฝ์šฐ ๋Ÿฐํƒ€์ž„์„ ์ž˜๋ชป ์ธก์ •ํ•˜๊ธฐ ์‰ฝ์ง€๋งŒ, ๋กœ์ปฌ์—์„œ ํ…Œ์ŠคํŠธํ•˜๋ฉด ๋‹ค์šด๋กœ๋“œํ•œ ํŒŒ์ผ์ด ์บ์‹œ๋˜์–ด ๋‹ค์šด๋กœ๋“œ ์‹œ๊ฐ„์ด ์ธก์ •๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๋Œ€์‹  CI ๋กœ๊ทธ์˜ ์‹คํ–‰ ์†๋„ ๋ณด๊ณ ์„œ๋ฅผ ํ™•์ธํ•˜์„ธ์š”(`pytest --durations=0 tests`์˜ ์ถœ๋ ฅ). ์ด ๋ณด๊ณ ์„œ๋Š” ๋А๋ฆฐ ์ด์ƒ๊ฐ’์œผ๋กœ ํ‘œ์‹œ๋˜์ง€ ์•Š๊ฑฐ๋‚˜ ๋น ๋ฅด๊ฒŒ ๋‹ค์‹œ ์ž‘์„ฑํ•ด์•ผ ํ•˜๋Š” ๋А๋ฆฐ ์ด์ƒ๊ฐ’์„ ์ฐพ๋Š” ๋ฐ๋„ ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. CI์—์„œ ํ…Œ์ŠคํŠธ ์Šค์œ„ํŠธ๊ฐ€ ๋А๋ ค์ง€๊ธฐ ์‹œ์ž‘ํ•˜๋ฉด ์ด ๋ณด๊ณ ์„œ์˜ ๋งจ ์œ„ ๋ชฉ๋ก์— ๊ฐ€์žฅ ๋А๋ฆฐ ํ…Œ์ŠคํŠธ๊ฐ€ ํ‘œ์‹œ๋ฉ๋‹ˆ๋‹ค. ### stdout/stderr ์ถœ๋ ฅ ํ…Œ์ŠคํŠธ[[testing-the-stdout/stderr-output]] `stdout` ๋ฐ/๋˜๋Š” `stderr`๋กœ ์“ฐ๋Š” ํ•จ์ˆ˜๋ฅผ ํ…Œ์ŠคํŠธํ•˜๋ ค๋ฉด `pytest`์˜ [capsys ์‹œ์Šคํ…œ](https://docs.pytest.org/en/latest/capture.html)์„ ์‚ฌ์šฉํ•˜์—ฌ ํ•ด๋‹น ์ŠคํŠธ๋ฆผ์— ์•ก์„ธ์Šคํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```python import sys def print_to_stdout(s): print(s) def print_to_stderr(s): sys.stderr.write(s) def test_result_and_stdout(capsys): msg = "Hello" print_to_stdout(msg) print_to_stderr(msg) out, err = capsys.readouterr() # ์บก์ฒ˜๋œ ์ถœ๋ ฅ ์ŠคํŠธ๋ฆผ ์‚ฌ์šฉ # ์„ ํƒ ์‚ฌํ•ญ: ์บก์ฒ˜๋œ ์ŠคํŠธ๋ฆผ ์žฌ์ƒ์„ฑ sys.stdout.write(out) sys.stderr.write(err) # ํ…Œ์ŠคํŠธ: assert msg in out assert msg in err ``` ๊ทธ๋ฆฌ๊ณ , ๋ฌผ๋ก  ๋Œ€๋ถ€๋ถ„์˜ ๊ฒฝ์šฐ์—๋Š” `stderr`๋Š” ์˜ˆ์™ธ์˜ ์ผ๋ถ€๋กœ ์ œ๊ณต๋ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋ฏ€๋กœ ํ•ด๋‹น ๊ฒฝ์šฐ์—๋Š” try/except๋ฅผ ์‚ฌ์šฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```python def raise_exception(msg): raise ValueError(msg) def test_something_exception(): msg = "Not a good value" error = "" try: raise_exception(msg) except Exception as e: error = str(e) assert msg in error, f"{msg} is in the exception:\n{error}" ``` `stdout`๋ฅผ ์บก์ฒ˜ํ•˜๋Š” ๋˜ ๋‹ค๋ฅธ ๋ฐฉ๋ฒ•์€ `contextlib.redirect_stdout`๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. 
```python
import sys
from io import StringIO
from contextlib import redirect_stdout


def print_to_stdout(s):
    print(s)


def test_result_and_stdout():
    msg = "Hello"
    buffer = StringIO()
    with redirect_stdout(buffer):
        print_to_stdout(msg)
    out = buffer.getvalue()
    # ์„ ํƒ ์‚ฌํ•ญ: ์บก์ฒ˜๋œ ์ŠคํŠธ๋ฆผ ์žฌ์ƒ์„ฑ
    sys.stdout.write(out)
    # ํ…Œ์ŠคํŠธ:
    assert msg in out
```

`stdout` ์บก์ฒ˜์— ๊ด€๋ จ๋œ ์ค‘์š”ํ•œ ๋ฌธ์ œ ์ค‘ ํ•˜๋‚˜๋Š” ๋ณดํ†ต `print`์—์„œ ์ด์ „์— ์ธ์‡„๋œ ๋‚ด์šฉ์„ ์žฌ์„ค์ •ํ•˜๋Š” `\r` ๋ฌธ์ž๊ฐ€ ํฌํ•จ๋  ์ˆ˜ ์žˆ๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. `pytest`์—์„œ๋Š” ๋ฌธ์ œ๊ฐ€ ์—†์ง€๋งŒ `pytest -s`์—์„œ๋Š” ์ด๋Ÿฌํ•œ ๋ฌธ์ž๊ฐ€ ๋ฒ„ํผ์— ํฌํ•จ๋˜๋ฏ€๋กœ, `-s`๊ฐ€ ์žˆ๊ฑฐ๋‚˜ ์—†๋Š” ์ƒํƒœ์—์„œ ๋ชจ๋‘ ํ…Œ์ŠคํŠธ๋ฅผ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ์œผ๋ ค๋ฉด ์บก์ฒ˜๋œ ์ถœ๋ ฅ์— ๋Œ€ํ•ด ์ถ”๊ฐ€์ ์ธ ์ •๋ฆฌ๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ์ด ๊ฒฝ์šฐ์—๋Š” `re.sub(r'~.*\r', '', buf, 0, re.M)`์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

ํ•˜์ง€๋งŒ ๋„์šฐ๋ฏธ ์ปจํ…์ŠคํŠธ ๊ด€๋ฆฌ์ž ๋ž˜ํผ๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ์ถœ๋ ฅ์— `\r`์ด ํฌํ•จ๋˜์–ด ์žˆ๋Š”์ง€์˜ ์—ฌ๋ถ€์— ๊ด€๊ณ„์—†์ด ๋ชจ๋“  ๊ฒƒ์„ ์ž๋™์œผ๋กœ ์ฒ˜๋ฆฌํ•˜๋ฏ€๋กœ ํŽธ๋ฆฌํ•ฉ๋‹ˆ๋‹ค.

```python
from transformers.testing_utils import CaptureStdout

with CaptureStdout() as cs:
    function_that_writes_to_stdout()
print(cs.out)
```

๋‹ค์Œ์€ ์ „์ฒด ํ…Œ์ŠคํŠธ ์˜ˆ์ œ์ž…๋‹ˆ๋‹ค.

```python
from transformers.testing_utils import CaptureStdout

msg = "Secret message\r"
final = "Hello World"
with CaptureStdout() as cs:
    print(msg + final)
assert cs.out == final + "\n", f"captured: {cs.out}, expecting {final}"
```

`stderr`๋ฅผ ์บก์ฒ˜ํ•˜๊ณ  ์‹ถ๋‹ค๋ฉด, ๋Œ€์‹  `CaptureStderr` ํด๋ž˜์Šค๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”.

```python
from transformers.testing_utils import CaptureStderr

with CaptureStderr() as cs:
    function_that_writes_to_stderr()
print(cs.err)
```

๋‘ ์ŠคํŠธ๋ฆผ์„ ๋™์‹œ์— ์บก์ฒ˜ํ•ด์•ผ ํ•œ๋‹ค๋ฉด, ๋ถ€๋ชจ `CaptureStd` ํด๋ž˜์Šค๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”.
```python
from transformers.testing_utils import CaptureStd

with CaptureStd() as cs:
    function_that_writes_to_stdout_and_stderr()
print(cs.err, cs.out)
```

๋˜ํ•œ, ํ…Œ์ŠคํŠธ์˜ ๋””๋ฒ„๊น…์„ ์ง€์›ํ•˜๊ธฐ ์œ„ํ•ด ์ด๋Ÿฌํ•œ ์ปจํ…์ŠคํŠธ ๊ด€๋ฆฌ์ž๋Š” ๊ธฐ๋ณธ์ ์œผ๋กœ ์ปจํ…์ŠคํŠธ์—์„œ ์ข…๋ฃŒํ•  ๋•Œ ์บก์ฒ˜๋œ ์ŠคํŠธ๋ฆผ์„ ์ž๋™์œผ๋กœ ๋‹ค์‹œ ์ถœ๋ ฅ(replay)ํ•ฉ๋‹ˆ๋‹ค.

### ๋กœ๊ฑฐ ์ŠคํŠธ๋ฆผ ์บก์ฒ˜[[capturing-logger-stream]]

๋กœ๊ฑฐ ์ถœ๋ ฅ์„ ๊ฒ€์ฆํ•ด์•ผ ํ•˜๋Š” ๊ฒฝ์šฐ `CaptureLogger`๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

```python
from transformers import logging
from transformers.testing_utils import CaptureLogger

msg = "Testing 1, 2, 3"
logging.set_verbosity_info()
logger = logging.get_logger("transformers.models.bart.tokenization_bart")
with CaptureLogger(logger) as cl:
    logger.info(msg)
assert cl.out == msg + "\n"
```

### ํ™˜๊ฒฝ ๋ณ€์ˆ˜๋ฅผ ์ด์šฉํ•˜์—ฌ ํ…Œ์ŠคํŠธ[[testing-with-environment-variables]]

ํŠน์ • ํ…Œ์ŠคํŠธ์˜ ํ™˜๊ฒฝ ๋ณ€์ˆ˜ ์˜ํ–ฅ์„ ๊ฒ€์ฆํ•˜๋ ค๋ฉด `transformers.testing_utils.mockenv`๋ผ๋Š” ๋„์šฐ๋ฏธ ๋ฐ์ฝ”๋ ˆ์ดํ„ฐ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

```python
from transformers.testing_utils import mockenv


class HfArgumentParserTest(unittest.TestCase):
    @mockenv(TRANSFORMERS_VERBOSITY="error")
    def test_env_override(self):
        env_level_str = os.getenv("TRANSFORMERS_VERBOSITY", None)
```

์ผ๋ถ€ ๊ฒฝ์šฐ์—๋Š” ์™ธ๋ถ€ ํ”„๋กœ๊ทธ๋žจ์„ ํ˜ธ์ถœํ•ด์•ผ ํ•  ์ˆ˜๋„ ์žˆ๋Š”๋ฐ, ์ด๋•Œ์—๋Š” ์—ฌ๋Ÿฌ ๊ฐœ์˜ ๋กœ์ปฌ ๊ฒฝ๋กœ๋ฅผ ํฌํ•จํ•˜๋Š” `os.environ`์—์„œ `PYTHONPATH`์˜ ์„ค์ •์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค.
ํ—ฌํผ ํด๋ž˜์Šค `transformers.test_utils.TestCasePlus`๊ฐ€ ๋„์›€์ด ๋ฉ๋‹ˆ๋‹ค: ```python from transformers.testing_utils import TestCasePlus class EnvExampleTest(TestCasePlus): def test_external_prog(self): env = self.get_env() # ์ด์ œ `env`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์™ธ๋ถ€ ํ”„๋กœ๊ทธ๋žจ ํ˜ธ์ถœ ``` ํ…Œ์ŠคํŠธ ํŒŒ์ผ์ด `tests` ํ…Œ์ŠคํŠธ ์Šค์œ„ํŠธ ๋˜๋Š” `examples`์— ์žˆ๋Š”์ง€์— ๋”ฐ๋ผ `env[PYTHONPATH]`๊ฐ€ ๋‘ ๋””๋ ‰ํ„ฐ๋ฆฌ ์ค‘ ํ•˜๋‚˜๋ฅผ ํฌํ•จํ•˜๋„๋ก ์„ค์ •๋˜๋ฉฐ, ํ˜„์žฌ ์ €์žฅ์†Œ์— ๋Œ€ํ•ด ํ…Œ์ŠคํŠธ๊ฐ€ ์ˆ˜ํ–‰๋˜๋„๋ก `src` ๋””๋ ‰ํ„ฐ๋ฆฌ๋„ ํฌํ•จ๋ฉ๋‹ˆ๋‹ค. ํ…Œ์ŠคํŠธ ํ˜ธ์ถœ ์ด์ „์— ์„ค์ •๋œ ๊ฒฝ์šฐ์—๋Š” `env[PYTHONPATH]`๋ฅผ ๊ทธ๋Œ€๋กœ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด ํ—ฌํผ ๋ฉ”์†Œ๋“œ๋Š” `os.environ` ๊ฐ์ฒด์˜ ์‚ฌ๋ณธ์„ ์ƒ์„ฑํ•˜๋ฏ€๋กœ ์›๋ณธ์€ ๊ทธ๋Œ€๋กœ ์œ ์ง€๋ฉ๋‹ˆ๋‹ค. ### ์žฌํ˜„ ๊ฐ€๋Šฅํ•œ ๊ฒฐ๊ณผ ์–ป๊ธฐ[[getting-reproducible-results]] ์ผ๋ถ€ ์ƒํ™ฉ์—์„œ ํ…Œ์ŠคํŠธ์—์„œ ์ž„์˜์„ฑ์„ ์ œ๊ฑฐํ•˜์—ฌ ๋™์ผํ•˜๊ฒŒ ์žฌํ˜„ ๊ฐ€๋Šฅํ•œ ๊ฒฐ๊ณผ๋ฅผ ์–ป๊ณ  ์‹ถ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฅผ ์œ„ํ•ด์„œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์‹œ๋“œ๋ฅผ ๊ณ ์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```python seed = 42 # ํŒŒ์ด์ฌ RNG import random random.seed(seed) # ํŒŒ์ดํ† ์น˜ RNG import torch torch.manual_seed(seed) torch.backends.cudnn.deterministic = True if torch.cuda.is_available(): torch.cuda.manual_seed_all(seed) # ๋„˜ํŒŒ์ด RNG import numpy as np np.random.seed(seed) # ํ…์„œํ”Œ๋กœ RNG tf.random.set_seed(seed) ``` ### ํ…Œ์ŠคํŠธ ๋””๋ฒ„๊น…[[debugging tests]] ๊ฒฝ๊ณ ๊ฐ€ ์žˆ๋Š” ๊ณณ์—์„œ ๋””๋ฒ„๊ฑฐ๋ฅผ ์‹œ์ž‘ํ•˜๋ ค๋ฉด ๋‹ค์Œ์„ ์ˆ˜ํ–‰ํ•˜์„ธ์š”. ```bash pytest tests/utils/test_logging.py -W error::UserWarning --pdb ``` ## Github Actions ์›Œํฌํ”Œ๋กœ์šฐ ์ž‘์—… ์ฒ˜๋ฆฌ[[working-with-github-actions-workflows]] ์…€ํ”„ ํ‘ธ์‹œ ์›Œํฌํ”Œ๋กœ์šฐ CI ์ž‘์—…์„ ํŠธ๋ฆฌ๊ฑฐํ•˜๋ ค๋ฉด, ๋‹ค์Œ์„ ์ˆ˜ํ–‰ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 1. `transformers` ์›๋ณธ์—์„œ ์ƒˆ ๋ธŒ๋žœ์น˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค(ํฌํฌ๊ฐ€ ์•„๋‹™๋‹ˆ๋‹ค!). 2. ๋ธŒ๋žœ์น˜ ์ด๋ฆ„์€ `ci_` ๋˜๋Š” `ci-`๋กœ ์‹œ์ž‘ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค(`main`๋„ ํŠธ๋ฆฌ๊ฑฐํ•˜์ง€๋งŒ `main`์—์„œ๋Š” PR์„ ํ•  ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค). 
๋˜ํ•œ ํŠน์ • ๊ฒฝ๋กœ์— ๋Œ€ํ•ด์„œ๋งŒ ํŠธ๋ฆฌ๊ฑฐ๋˜๋ฏ€๋กœ ์ด ๋ฌธ์„œ๊ฐ€ ์ž‘์„ฑ๋œ ํ›„์— ๋ณ€๊ฒฝ๋œ ๋‚ด์šฉ์€ [์—ฌ๊ธฐ](https://github.com/huggingface/transformers/blob/main/.github/workflows/self-push.yml)์˜ *push:*์—์„œ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 3. ์ด ๋ธŒ๋žœ์น˜์—์„œ PR์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค 4. ๊ทธ๋Ÿฐ ๋‹ค์Œ [์—ฌ๊ธฐ](https://github.com/huggingface/transformers/actions/workflows/self-push.yml)์—์„œ ์ž‘์—…์ด ๋‚˜ํƒ€๋‚˜๋Š”์ง€ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฐฑ๋กœ๊ทธ๊ฐ€ ์žˆ๋Š” ๊ฒฝ์šฐ, ๋ฐ”๋กœ ์‹คํ–‰๋˜์ง€ ์•Š์„ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ## ์‹คํ—˜์ ์ธ CI ๊ธฐ๋Šฅ ํ…Œ์ŠคํŠธ[[testing-Experimental-CI-Features]] CI ๊ธฐ๋Šฅ์„ ํ…Œ์ŠคํŠธํ•˜๋Š” ๊ฒƒ์€ ์ผ๋ฐ˜ CI ์ž‘๋™์— ๋ฐฉํ•ด๊ฐ€ ๋  ์ˆ˜ ์žˆ๊ธฐ ๋•Œ๋ฌธ์— ์ž ์žฌ์ ์œผ๋กœ ๋ฌธ์ œ๊ฐ€ ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์ƒˆ๋กœ์šด CI ๊ธฐ๋Šฅ์„ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒฝ์šฐ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ˆ˜ํ–‰ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 1. ํ…Œ์ŠคํŠธํ•ด์•ผ ํ•  ๋‚ด์šฉ์„ ํ…Œ์ŠคํŠธํ•˜๋Š” ์ƒˆ๋กœ์šด ์ „์šฉ ์ž‘์—…์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. 2. ์ƒˆ๋กœ์šด ์ž‘์—…์€ ํ•ญ์ƒ ์„ฑ๊ณตํ•ด์•ผ๋งŒ ๋…น์ƒ‰ โœ“๋ฅผ ๋ฐ›์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค(์•„๋ž˜์— ์ž์„ธํ•œ ๋‚ด์šฉ์ด ์žˆ์Šต๋‹ˆ๋‹ค). 3. ๋‹ค์–‘ํ•œ PR ์œ ํ˜•์— ๋Œ€ํ•œ ํ™•์ธ์„ ์œ„ํ•ด (์‚ฌ์šฉ์ž ํฌํฌ ๋ธŒ๋žœ์น˜, ํฌํฌ๋˜์ง€ ์•Š์€ ๋ธŒ๋žœ์น˜, github.com UI ์ง์ ‘ ํŒŒ์ผ ํŽธ์ง‘์—์„œ ์ƒ์„ฑ๋œ ๋ธŒ๋žœ์น˜, ๊ฐ•์ œ ํ‘ธ์‹œ ๋“ฑ PR์˜ ์œ ํ˜•์€ ์•„์ฃผ ๋‹ค์–‘ํ•ฉ๋‹ˆ๋‹ค.) ๋ฉฐ์น  ๋™์•ˆ ์‹คํ—˜ ์ž‘์—…์˜ ๋กœ๊ทธ๋ฅผ ๋ชจ๋‹ˆํ„ฐ๋งํ•˜๋ฉด์„œ ์‹คํ–‰ํ•ด๋ด…๋‹ˆ๋‹ค. (์˜๋„์ ์œผ๋กœ ํ•ญ์ƒ ๋…น์ƒ‰์„ ํ‘œ์‹œํ•˜๋ฏ€๋กœ ์ž‘์—… ์ „์ฒด๊ฐ€ ๋…น์ƒ‰์€ ์•„๋‹ˆ๋ผ๋Š” ์ ์— ์œ ์˜ํ•ฉ๋‹ˆ๋‹ค.) 4. ๋ชจ๋“  ๊ฒƒ์ด ์•ˆ์ •์ ์ธ์ง€ ํ™•์ธํ•œ ํ›„, ์ƒˆ๋กœ์šด ๋ณ€๊ฒฝ ์‚ฌํ•ญ์„ ๊ธฐ์กด ์ž‘์—…์— ๋ณ‘ํ•ฉํ•ฉ๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด CI ๊ธฐ๋Šฅ ์ž์ฒด์— ๋Œ€ํ•œ ์‹คํ—˜์ด ์ผ๋ฐ˜ ์ž‘์—… ํ๋ฆ„์— ๋ฐฉํ•ด๊ฐ€ ๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ƒˆ๋กœ์šด CI ๊ธฐ๋Šฅ์ด ๊ฐœ๋ฐœ ์ค‘์ธ ๋™์•ˆ, ํ•ญ์ƒ ์„ฑ๊ณตํ•˜๋„๋ก ํ•  ์ˆ˜ ์žˆ๋Š” ๋ฐฉ๋ฒ•์€ ๋ฌด์—‡์ผ๊นŒ์š”? 
TravisCI์™€ ๊ฐ™์€ ์ผ๋ถ€ CI๋Š” `ignore-step-failure`๋ฅผ ์ง€์›ํ•˜์—ฌ ์ „์ฒด ์ž‘์—…์„ ์„ฑ๊ณตํ•œ ๊ฒƒ์œผ๋กœ ๋ณด๊ณ ํ•  ์ˆ˜ ์žˆ์ง€๋งŒ, ํ˜„์žฌ ์šฐ๋ฆฌ๊ฐ€ ์‚ฌ์šฉํ•˜๋Š” CircleCI์™€ Github Actions๋Š” ์ด๋ฅผ ์ง€์›ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค.

๋”ฐ๋ผ์„œ ๋‹ค์Œ๊ณผ ๊ฐ™์€ ํ•ด๊ฒฐ์ฑ…์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

1. bash ์Šคํฌ๋ฆฝํŠธ์—์„œ ๊ฐ€๋Šฅํ•œ ๋งŽ์€ ์˜ค๋ฅ˜๋ฅผ ์–ต์ œํ•˜๊ธฐ ์œ„ํ•ด ์‹คํ–‰ ๋ช…๋ น์˜ ์‹œ์ž‘ ๋ถ€๋ถ„์— `set +euo pipefail`์„ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค.
2. ๋งˆ์ง€๋ง‰ ๋ช…๋ น์€ ๋ฐ˜๋“œ์‹œ ์„ฑ๊ณตํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. `echo "done"` ๋˜๋Š” `true`๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค.

์˜ˆ์‹œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค.

```yaml
- run:
    name: run CI experiment
    command: |
        set +euo pipefail
        echo "setting run-all-despite-any-errors-mode"
        this_command_will_fail
        echo "but bash continues to run"

        # emulate another failure
        false

        # but the last command must be a success
        echo "during experiment do not remove: reporting success to CI, even if there were failures"
```

๊ฐ„๋‹จํ•œ ๋ช…๋ น์˜ ๊ฒฝ์šฐ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ˆ˜ํ–‰ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค.

```bash
cmd_that_may_fail || true
```

๊ฒฐ๊ณผ์— ๋งŒ์กฑํ•œ ํ›„์—๋Š” ๋ฌผ๋ก , ์‹คํ—˜์ ์ธ ๋‹จ๊ณ„ ๋˜๋Š” ์ž‘์—…์„ ์ผ๋ฐ˜ ์ž‘์—…์˜ ๋‚˜๋จธ์ง€ ๋ถ€๋ถ„๊ณผ ํ†ตํ•ฉํ•˜๋ฉด์„œ `set +euo pipefail` ๋˜๋Š” ๊ธฐํƒ€ ์ถ”๊ฐ€ํ•œ ์š”์†Œ๋ฅผ ์ œ๊ฑฐํ•˜์—ฌ, ์‹คํ—˜ ์ž‘์—…์ด ์ผ๋ฐ˜ CI ์ž‘๋™์— ๋ฐฉํ•ด๋˜์ง€ ์•Š๋„๋ก ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค.

์ด ์ „๋ฐ˜์ ์ธ ๊ณผ์ •์€ ์‹คํ—˜ ๋‹จ๊ณ„๊ฐ€ PR์˜ ์ „๋ฐ˜์ ์ธ ์ƒํƒœ์— ์˜ํ–ฅ์„ ์ฃผ์ง€ ์•Š๊ณ  ์‹คํŒจํ•˜๋„๋ก `allow-failure`์™€ ๊ฐ™์€ ๊ธฐ๋Šฅ์„ ์„ค์ •ํ•  ์ˆ˜ ์žˆ๋‹ค๋ฉด ํ›จ์”ฌ ๋” ์‰ฌ์› ์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์•ž์—์„œ ์–ธ๊ธ‰ํ•œ ๋ฐ”์™€ ๊ฐ™์ด CircleCI์™€ Github Actions๋Š” ํ˜„์žฌ ์ด๋Ÿฌํ•œ ๊ธฐ๋Šฅ๋“ค์„ ์ง€์›ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค.

์ด ๊ธฐ๋Šฅ์˜ ์ง€์›์„ ์œ„ํ•œ ํˆฌํ‘œ์— ์ฐธ์—ฌํ•˜๊ณ , CI ๊ด€๋ จ ์Šค๋ ˆ๋“œ๋“ค์—์„œ ์ง„ํ–‰ ์ƒํ™ฉ์„ ํ™•์ธํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค.

- [Github Actions:](https://github.com/actions/toolkit/issues/399)
- [CircleCI:](https://ideas.circleci.com/ideas/CCI-I-344)
<!--- Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # ๐Ÿค— Transformers์— ๊ธฐ์—ฌํ•˜๊ธฐ [[contribute-to-transformers]] ๋ˆ„๊ตฌ๋‚˜ ๐Ÿค— Transformers์— ๊ธฐ์—ฌํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ, ์šฐ๋ฆฌ๋Š” ๋ชจ๋“  ์‚ฌ๋žŒ์˜ ๊ธฐ์—ฌ๋ฅผ ์†Œ์ค‘ํžˆ ์ƒ๊ฐํ•ฉ๋‹ˆ๋‹ค. ์ฝ”๋“œ ๊ธฐ์—ฌ๋Š” ์ปค๋ฎค๋‹ˆํ‹ฐ๋ฅผ ๋•๋Š” ์œ ์ผํ•œ ๋ฐฉ๋ฒ•์ด ์•„๋‹™๋‹ˆ๋‹ค. ์งˆ๋ฌธ์— ๋‹ตํ•˜๊ฑฐ๋‚˜ ๋‹ค๋ฅธ ์‚ฌ๋žŒ์„ ๋„์™€ ๋ฌธ์„œ๋ฅผ ๊ฐœ์„ ํ•˜๋Š” ๊ฒƒ๋„ ๋งค์šฐ ๊ฐ€์น˜๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ๐Ÿค— Transformers๋ฅผ ๋„๋ฆฌ ์•Œ๋ฆฌ๋Š” ๊ฒƒ๋„ ํฐ ๋„์›€์ด ๋ฉ๋‹ˆ๋‹ค! ๋ฉ‹์ง„ ํ”„๋กœ์ ํŠธ๋“ค์„ ๊ฐ€๋Šฅํ•˜๊ฒŒ ํ•œ ๐Ÿค— Transformers ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์— ๋Œ€ํ•ด ๋ธ”๋กœ๊ทธ ๊ฒŒ์‹œ๊ธ€์— ์–ธ๊ธ‰ํ•˜๊ฑฐ๋‚˜, ๋„์›€์ด ๋˜์—ˆ์„ ๋•Œ๋งˆ๋‹ค Twitter์— ์•Œ๋ฆฌ๊ฑฐ๋‚˜, ์ €์žฅ์†Œ์— โญ๏ธ ๋ฅผ ํ‘œ์‹œํ•˜์—ฌ ๊ฐ์‚ฌ ์ธ์‚ฌ๋ฅผ ์ „ํ•ด์ฃผ์„ธ์š”. ์–ด๋–ค ๋ฐฉ์‹์œผ๋กœ ๊ธฐ์—ฌํ•˜๋“  [ํ–‰๋™ ๊ทœ์น™](https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md)์„ ์ˆ™์ง€ํ•˜๊ณ  ์กด์ค‘ํ•ด์ฃผ์„ธ์š”. **์ด ์•ˆ๋‚ด์„œ๋Š” ๋ฉ‹์ง„ [scikit-learn ๊ธฐ์—ฌ ์•ˆ๋‚ด์„œ](https://github.com/scikit-learn/scikit-learn/blob/main/CONTRIBUTING.md)์—์„œ ํฐ ์˜๊ฐ์„ ๋ฐ›์•˜์Šต๋‹ˆ๋‹ค.** ## ๊ธฐ์—ฌํ•˜๋Š” ๋ฐฉ๋ฒ• [[ways-to-contribute]] ์—ฌ๋Ÿฌ ๊ฐ€์ง€ ๋ฐฉ๋ฒ•์œผ๋กœ ๐Ÿค— Transformers์— ๊ธฐ์—ฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: * ๊ธฐ์กด ์ฝ”๋“œ์˜ ๋ฏธํ•ด๊ฒฐ๋œ ๋ฌธ์ œ๋ฅผ ์ˆ˜์ •ํ•ฉ๋‹ˆ๋‹ค. * ๋ฒ„๊ทธ ๋˜๋Š” ์ƒˆ๋กœ ์ถ”๊ฐ€๋˜๊ธธ ์›ํ•˜๋Š” ๊ธฐ๋Šฅ๊ณผ ๊ด€๋ จ๋œ ์ด์Šˆ๋ฅผ ์ œ์ถœํ•ฉ๋‹ˆ๋‹ค. * ์ƒˆ๋กœ์šด ๋ชจ๋ธ์„ ๊ตฌํ˜„ํ•ฉ๋‹ˆ๋‹ค. * ์˜ˆ์ œ๋‚˜ ๋ฌธ์„œ์— ๊ธฐ์—ฌํ•ฉ๋‹ˆ๋‹ค. 
์–ด๋””์„œ๋ถ€ํ„ฐ ์‹œ์ž‘ํ• ์ง€ ๋ชจ๋ฅด๊ฒ ๋‹ค๋ฉด, [Good First Issue](https://github.com/huggingface/transformers/contribute) ๋ชฉ๋ก์„ ํ™•์ธํ•ด๋ณด์„ธ์š”. ์ด ๋ชฉ๋ก์€ ์ดˆ๋ณด์ž๋„ ์ฐธ์—ฌํ•˜๊ธฐ ์‰ฌ์šด ์˜คํ”ˆ ์ด์Šˆ ๋ชฉ๋ก์„ ์ œ๊ณตํ•˜๋ฉฐ, ๋‹น์‹ ์ด ์˜คํ”ˆ์†Œ์Šค์— ์ฒ˜์Œ์œผ๋กœ ๊ธฐ์—ฌํ•˜๋Š” ๋ฐ ํฐ ๋„์›€์ด ๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๊ทธ์ € ์ž‘์—…ํ•˜๊ณ  ์‹ถ์€ ์ด์Šˆ์— ๋Œ“๊ธ€๋งŒ ๋‹ฌ์•„์ฃผ๋ฉด ๋ฉ๋‹ˆ๋‹ค. ์กฐ๊ธˆ ๋” ๋„์ „์ ์ธ ์ž‘์—…์„ ์›ํ•œ๋‹ค๋ฉด, [Good Second Issue](https://github.com/huggingface/transformers/labels/Good%20Second%20Issue) ๋ชฉ๋ก๋„ ํ™•์ธํ•ด๋ณด์„ธ์š”. ์ด๋ฏธ ๋‹น์‹ ์ด ์ž˜ ํ•˜๊ณ  ์žˆ๋‹ค๊ณ  ์ƒ๊ฐ๋˜๋”๋ผ๋„, ํ•œ ๋ฒˆ ์‹œ๋„ํ•ด๋ณด์„ธ์š”! ์šฐ๋ฆฌ๋„ ์—ฌ๋Ÿฌ๋ถ„์„ ๋„์šธ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๐Ÿš€ > ์ปค๋ฎค๋‹ˆํ‹ฐ์— ์ด๋ฃจ์–ด์ง€๋Š” ๋ชจ๋“  ๊ธฐ์—ฌ๋Š” ๋˜‘๊ฐ™์ด ์†Œ์ค‘ํ•ฉ๋‹ˆ๋‹ค. ๐Ÿฅฐ ## ๋ฏธํ•ด๊ฒฐ๋œ ๋ฌธ์ œ ์ˆ˜์ •ํ•˜๊ธฐ [[fixing-outstanding-issues]] ๊ธฐ์กด ์ฝ”๋“œ์—์„œ ๋ฐœ๊ฒฌํ•œ ๋ฌธ์ œ์ ์— ๋Œ€ํ•œ ํ•ด๊ฒฐ์ฑ…์ด ๋– ์˜ค๋ฅธ ๊ฒฝ์šฐ, ์–ธ์ œ๋“ ์ง€ [๊ธฐ์—ฌ๋ฅผ ์‹œ์ž‘](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md/#create-a-pull-request)ํ•˜๊ณ  Pull Request๋ฅผ ์ƒ์„ฑํ•ด์ฃผ์„ธ์š”! ## ๋ฒ„๊ทธ ๊ด€๋ จ ์ด์Šˆ๋ฅผ ์ œ๊ธฐํ•˜๊ฑฐ๋‚˜ ์ƒˆ๋กœ์šด ๊ธฐ๋Šฅ ์š”์ฒญํ•˜๊ธฐ [[submitting-a-bugrelated-issue-or-feature-request]] ๋ฒ„๊ทธ ๊ด€๋ จ ์ด์Šˆ๋ฅผ ์ œ๊ธฐํ•˜๊ฑฐ๋‚˜ ์ƒˆ๋กœ์šด ๊ธฐ๋Šฅ์„ ์š”์ฒญํ•  ๋•Œ๋Š” ๋‹ค์Œ ๊ฐ€์ด๋“œ๋ผ์ธ์„ ์ตœ๋Œ€ํ•œ ์ค€์ˆ˜ํ•ด์ฃผ์„ธ์š”. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ์ข‹์€ ํ”ผ๋“œ๋ฐฑ๊ณผ ํ•จ๊ป˜ ๋น ๋ฅด๊ฒŒ ๋‹ต๋ณ€ํ•ด ๋“œ๋ฆด ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ### ๋ฒ„๊ทธ๋ฅผ ๋ฐœ๊ฒฌํ•˜์…จ๋‚˜์š”? [[did-you-find-a-bug]] ๐Ÿค— Transformers ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋Š” ์‚ฌ์šฉ ์ค‘์— ๊ฒช๋Š” ๋ฌธ์ œ๋ฅผ ๋ณด๊ณ ํ•ด์ฃผ๋Š” ์‚ฌ์šฉ์ž๋“ค ๋•๋ถ„์— ๋”์šฑ ๊ฒฌ๊ณ ํ•ด์ง€๊ณ  ์‹ ๋ขฐํ•  ์ˆ˜ ์žˆ๊ฒŒ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ด์Šˆ๋ฅผ ๋ณด๊ณ ํ•˜๊ธฐ ์ „์—, ๋ฒ„๊ทธ๊ฐ€ ์ด๋ฏธ **๋ณด๊ณ ๋˜์ง€ ์•Š์•˜๋Š”์ง€** ํ™•์ธํ•ด์ฃผ์„ธ์š”. (GitHub์˜ ์ด์Šˆ ํƒญ ์•„๋ž˜์˜ ๊ฒ€์ƒ‰ ๋ฐ”๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”). ์ด์Šˆ๋Š” ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ ์ž์ฒด์—์„œ ๋ฐœ์ƒํ•œ ๋ฒ„๊ทธ์–ด์•ผ ํ•˜๋ฉฐ, ์ฝ”๋“œ์˜ ๋‹ค๋ฅธ ๋ถ€๋ถ„๊ณผ ๊ด€๋ จ๋œ ๊ฒƒ์ด ์•„๋‹ˆ์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 
๋ฒ„๊ทธ๊ฐ€ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ ๋ฌธ์ œ๋กœ ๋ฐœ์ƒํ•˜์˜€๋Š”์ง€ ํ™•์‹คํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ ๋จผ์ € [ํฌ๋Ÿผ](https://discuss.huggingface.co/)์—์„œ ์งˆ๋ฌธํ•ด ์ฃผ์„ธ์š”. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ์ผ๋ฐ˜์ ์ธ ์งˆ๋ฌธ๋ณด๋‹ค ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์™€ ๊ด€๋ จ๋œ ๋ฌธ์ œ๋ฅผ ๋” ๋น ๋ฅด๊ฒŒ ํ•ด๊ฒฐํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฒ„๊ทธ๊ฐ€ ์ด๋ฏธ ๋ณด๊ณ ๋˜์ง€ ์•Š์•˜๋‹ค๋Š” ๊ฒƒ์„ ํ™•์ธํ–ˆ๋‹ค๋ฉด, ๋‹ค์Œ ์ •๋ณด๋ฅผ ํฌํ•จํ•˜์—ฌ ์ด์Šˆ๋ฅผ ์ œ์ถœํ•ด ์ฃผ์„ธ์š”. ๊ทธ๋Ÿฌ๋ฉด ์šฐ๋ฆฌ๊ฐ€ ๋น ๋ฅด๊ฒŒ ํ•ด๊ฒฐํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: * ์‚ฌ์šฉ ์ค‘์ธ **์šด์˜์ฒด์ œ ์ข…๋ฅ˜์™€ ๋ฒ„์ „**, ๊ทธ๋ฆฌ๊ณ  **Python**, **PyTorch** ๋˜๋Š” **TensorFlow** ๋ฒ„์ „. * ๋ฒ„๊ทธ๋ฅผ 30์ดˆ ์ด๋‚ด๋กœ ์žฌํ˜„ํ•  ์ˆ˜ ์žˆ๋Š” ๊ฐ„๋‹จํ•˜๊ณ  ๋…๋ฆฝ์ ์ธ ์ฝ”๋“œ ์Šค๋‹ˆํŽซ. * ์˜ˆ์™ธ๊ฐ€ ๋ฐœ์ƒํ•œ ๊ฒฝ์šฐ *์ „์ฒด* ํŠธ๋ ˆ์ด์Šค๋ฐฑ. * ์Šคํฌ๋ฆฐ์ƒท๊ณผ ๊ฐ™์ด ๋„์›€์ด ๋  ๊ฒƒ์œผ๋กœ ์ƒ๊ฐ๋˜๋Š” ์ถ”๊ฐ€ ์ •๋ณด๋ฅผ ์ฒจ๋ถ€ํ•ด ์ฃผ์„ธ์š”. ์šด์˜์ฒด์ œ์™€ ์†Œํ”„ํŠธ์›จ์–ด ๋ฒ„์ „์„ ์ž๋™์œผ๋กœ ๊ฐ€์ ธ์˜ค๋ ค๋ฉด ๋‹ค์Œ ๋ช…๋ น์„ ์‹คํ–‰ํ•˜์„ธ์š”: ```bash transformers-cli env ``` ์ €์žฅ์†Œ์˜ ๋ฃจํŠธ ๋””๋ ‰ํ„ฐ๋ฆฌ์—์„œ๋„ ๊ฐ™์€ ๋ช…๋ น์„ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash python src/transformers/commands/transformers_cli.py env ``` ### ์ƒˆ๋กœ์šด ๊ธฐ๋Šฅ์„ ์›ํ•˜์‹œ๋‚˜์š”? [[do-you-want-a-new-feature]] ๐Ÿค— Transformers์—์„œ ์‚ฌ์šฉํ•˜๊ณ  ์‹ถ์€ ์ƒˆ๋กœ์šด ๊ธฐ๋Šฅ์ด ์žˆ๋‹ค๋ฉด, ๋‹ค์Œ ๋‚ด์šฉ์„ ํฌํ•จํ•˜์—ฌ ์ด์Šˆ๋ฅผ ์ œ์ถœํ•ด ์ฃผ์„ธ์š”: 1. ์ด ๊ธฐ๋Šฅ์ด ํ•„์š”ํ•œ *์ด์œ *๋Š” ๋ฌด์—‡์ธ๊ฐ€์š”? ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์— ๋Œ€ํ•œ ๋ฌธ์ œ๋‚˜ ๋ถˆ๋งŒ๊ณผ ๊ด€๋ จ์ด ์žˆ๋‚˜์š”? ํ”„๋กœ์ ํŠธ์— ํ•„์š”ํ•œ ๊ธฐ๋Šฅ์ธ๊ฐ€์š”? ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๋„์›€์ด ๋  ๋งŒํ•œ ๊ธฐ๋Šฅ์ธ๊ฐ€์š”? ์–ด๋–ค ๋‚ด์šฉ์ด๋“  ์—ฌ๋Ÿฌ๋ถ„์˜ ์ด์•ผ๊ธฐ๋ฅผ ๋“ฃ๊ณ  ์‹ถ์Šต๋‹ˆ๋‹ค! 2. ์š”์ฒญํ•˜๋Š” ๊ธฐ๋Šฅ์„ ์ตœ๋Œ€ํ•œ ์ž์„ธํžˆ ์„ค๋ช…ํ•ด ์ฃผ์„ธ์š”. ๋” ๋งŽ์€ ์ •๋ณด๋ฅผ ์ œ๊ณตํ• ์ˆ˜๋ก ๋” ๋‚˜์€ ๋„์›€์„ ๋“œ๋ฆด ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 3. ํ•ด๋‹น ๊ธฐ๋Šฅ์˜ ์‚ฌ์šฉ๋ฒ•์„ ๋ณด์—ฌ์ฃผ๋Š” *์ฝ”๋“œ ์Šค๋‹ˆํŽซ*์„ ์ œ๊ณตํ•ด ์ฃผ์„ธ์š”. 4. ๊ธฐ๋Šฅ๊ณผ ๊ด€๋ จ๋œ ๋…ผ๋ฌธ์ด ์žˆ๋Š” ๊ฒฝ์šฐ ๋งํฌ๋ฅผ ํฌํ•จํ•ด ์ฃผ์„ธ์š”. 
์ด์Šˆ๊ฐ€ ์ž˜ ์ž‘์„ฑ๋˜์—ˆ๋‹ค๋ฉด ์ด์Šˆ๊ฐ€ ์ƒ์„ฑ๋œ ์ˆœ๊ฐ„, ์ด๋ฏธ 80% ์ •๋„์˜ ์ž‘์—…์ด ์™„๋ฃŒ๋œ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด์Šˆ๋ฅผ ์ œ๊ธฐํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋  ๋งŒํ•œ [ํ…œํ”Œ๋ฆฟ](https://github.com/huggingface/transformers/tree/main/templates)๋„ ์ค€๋น„๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ## ์ƒˆ๋กœ์šด ๋ชจ๋ธ์„ ๊ตฌํ˜„ํ•˜๊ณ  ์‹ถ์œผ์‹ ๊ฐ€์š”? [[do-you-want-to-implement-a-new-model]] ์ƒˆ๋กœ์šด ๋ชจ๋ธ์€ ๊ณ„์†ํ•ด์„œ ์ถœ์‹œ๋ฉ๋‹ˆ๋‹ค. ๋งŒ์•ฝ ์—ฌ๋Ÿฌ๋ถ„์ด ์ƒˆ๋กœ์šด ๋ชจ๋ธ์„ ๊ตฌํ˜„ํ•˜๊ณ  ์‹ถ๋‹ค๋ฉด ๋‹ค์Œ ์ •๋ณด๋ฅผ ์ œ๊ณตํ•ด ์ฃผ์„ธ์š”. * ๋ชจ๋ธ์— ๋Œ€ํ•œ ๊ฐ„๋‹จํ•œ ์„ค๋ช…๊ณผ ๋…ผ๋ฌธ ๋งํฌ. * ๊ตฌํ˜„์ด ๊ณต๊ฐœ๋˜์–ด ์žˆ๋‹ค๋ฉด ๊ตฌํ˜„ ๋งํฌ. * ๋ชจ๋ธ ๊ฐ€์ค‘์น˜๊ฐ€ ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•˜๋‹ค๋ฉด ๊ฐ€์ค‘์น˜ ๋งํฌ. ๋งŒ์•ฝ ๋ชจ๋ธ์„ ์ง์ ‘ ๊ธฐ์—ฌํ•˜๊ณ  ์‹ถ์œผ์‹œ๋‹ค๋ฉด, ์•Œ๋ ค์ฃผ์„ธ์š”. ๐Ÿค— Transformers์— ์ถ”๊ฐ€ํ•  ์ˆ˜ ์žˆ๋„๋ก ๋„์™€๋“œ๋ฆฌ๊ฒ ์Šต๋‹ˆ๋‹ค! ์ƒˆ๋กœ์šด ๋ชจ๋ธ์„ ์ถ”๊ฐ€ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ [์ƒ์„ธ ์•ˆ๋‚ด์„œ์™€ ํ…œํ”Œ๋ฆฟ](https://github.com/huggingface/transformers/tree/main/templates)์„ ์ œ๊ณตํ•˜๊ณ  ์žˆ์œผ๋ฉฐ, [๐Ÿค— Transformers์— ์ƒˆ๋กœ์šด ๋ชจ๋ธ์„ ์ถ”๊ฐ€ํ•˜๋Š” ๋ฐฉ๋ฒ•](https://huggingface.co/docs/transformers/add_new_model)์— ๋Œ€ํ•œ ๊ธฐ์ˆ ์ ์ธ ์•ˆ๋‚ด์„œ๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ## ๋ฌธ์„œ๋ฅผ ์ถ”๊ฐ€ํ•˜๊ณ  ์‹ถ์œผ์‹ ๊ฐ€์š”? [[do-you-want-to-add-documentation]] ์šฐ๋ฆฌ๋Š” ์–ธ์ œ๋‚˜ ๋” ๋ช…ํ™•ํ•˜๊ณ  ์ •ํ™•ํ•œ ๋ฌธ์„œ๋ฅผ ์ œ๊ณตํ•˜๊ธฐ ์œ„ํ•˜์—ฌ ๊ฐœ์„ ์ ์„ ์ฐพ๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ์˜คํƒˆ์ž๋‚˜ ๋ถ€์กฑํ•œ ๋‚ด์šฉ, ๋ถ„๋ช…ํ•˜์ง€ ์•Š๊ฑฐ๋‚˜ ๋ถ€์ •ํ™•ํ•œ ๋‚ด์šฉ ๋“ฑ์„ ์•Œ๋ ค์ฃผ์‹œ๋ฉด ๊ฐœ์„ ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋ฉ๋‹ˆ๋‹ค. ๊ด€์‹ฌ์ด ์žˆ์œผ์‹œ๋‹ค๋ฉด ๋ณ€๊ฒฝํ•˜๊ฑฐ๋‚˜ ๊ธฐ์—ฌํ•˜์‹ค ์ˆ˜ ์žˆ๋„๋ก ๋„์™€๋“œ๋ฆฌ๊ฒ ์Šต๋‹ˆ๋‹ค! ๋ฌธ์„œ๋ฅผ ์ƒ์„ฑ, ๋นŒ๋“œ ๋ฐ ์ž‘์„ฑํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [README](https://github.com/huggingface/transformers/tree/main/docs) ๋ฌธ์„œ๋ฅผ ํ™•์ธํ•ด ์ฃผ์„ธ์š”. 
## ํ’€ ๋ฆฌํ€˜์ŠคํŠธ(Pull Request) ์ƒ์„ฑํ•˜๊ธฐ [[create-a-pull-request]]

์ฝ”๋“œ๋ฅผ ์ž‘์„ฑํ•˜๊ธฐ ์ „์— ๊ธฐ์กด์˜ Pull Request๋‚˜ ์ด์Šˆ๋ฅผ ๊ฒ€์ƒ‰ํ•˜์—ฌ ๋ˆ„๊ตฐ๊ฐ€ ์ด๋ฏธ ๋™์ผํ•œ ์ž‘์—…์„ ํ•˜๊ณ  ์žˆ๋Š”์ง€ ํ™•์ธํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ํ™•์‹คํ•˜์ง€ ์•Š๋‹ค๋ฉด ํ”ผ๋“œ๋ฐฑ์„ ๋ฐ›๊ธฐ ์œ„ํ•ด ์ด์Šˆ๋ฅผ ์—ด์–ด๋ณด๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค.

๐Ÿค— Transformers์— ๊ธฐ์—ฌํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” ๊ธฐ๋ณธ์ ์ธ `git` ์‚ฌ์šฉ ๋Šฅ๋ ฅ์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. `git`์€ ์‚ฌ์šฉํ•˜๊ธฐ ์‰ฌ์šด ๋„๊ตฌ๋Š” ์•„๋‹ˆ์ง€๋งŒ, ๋งค์šฐ ํ›Œ๋ฅญํ•œ ๋งค๋‰ด์–ผ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์‰˜(shell)์—์„œ `git --help`์„ ์ž…๋ ฅํ•˜์—ฌ ํ™•์ธํ•ด๋ณด์„ธ์š”! ๋งŒ์•ฝ ์ฑ…์„ ์„ ํ˜ธํ•œ๋‹ค๋ฉด, [Pro Git](https://git-scm.com/book/en/v2)์€ ๋งค์šฐ ์ข‹์€ ์ฐธ๊ณ  ์ž๋ฃŒ๊ฐ€ ๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค.

๐Ÿค— Transformers์— ๊ธฐ์—ฌํ•˜๋ ค๋ฉด **[Python 3.8](https://github.com/huggingface/transformers/blob/main/setup.py#L426)** ์ด์ƒ์˜ ๋ฒ„์ „์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ๊ธฐ์—ฌ๋ฅผ ์‹œ์ž‘ํ•˜๋ ค๋ฉด ๋‹ค์Œ ๋‹จ๊ณ„๋ฅผ ๋”ฐ๋ฅด์„ธ์š”:

1. ์ €์žฅ์†Œ ํŽ˜์ด์ง€์—์„œ **[Fork](https://github.com/huggingface/transformers/fork)** ๋ฒ„ํŠผ์„ ํด๋ฆญํ•˜์—ฌ ์ €์žฅ์†Œ๋ฅผ ํฌํฌํ•˜์„ธ์š”. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ์ฝ”๋“œ์˜ ๋ณต์‚ฌ๋ณธ์ด ์—ฌ๋Ÿฌ๋ถ„์˜ GitHub ์‚ฌ์šฉ์ž ๊ณ„์ • ์•„๋ž˜์— ์ƒ์„ฑ๋ฉ๋‹ˆ๋‹ค.

2. ํฌํฌํ•œ ์ €์žฅ์†Œ๋ฅผ ๋กœ์ปฌ ๋””์Šคํฌ๋กœ ํด๋ก ํ•˜๊ณ , ๊ธฐ๋ณธ ์ €์žฅ์†Œ๋ฅผ ์›๊ฒฉ(remote)์œผ๋กœ ์ถ”๊ฐ€ํ•˜์„ธ์š”:

```bash
git clone git@github.com:<your Github handle>/transformers.git
cd transformers
git remote add upstream https://github.com/huggingface/transformers.git
```

3. ๊ฐœ๋ฐœ ๋ณ€๊ฒฝ ์‚ฌํ•ญ์„ ์ €์žฅํ•  ์ƒˆ ๋ธŒ๋žœ์น˜๋ฅผ ์ƒ์„ฑํ•˜์„ธ์š”:

```bash
git checkout -b a-descriptive-name-for-my-changes
```

๐Ÿšจ ์ ˆ๋Œ€ `main` ๋ธŒ๋žœ์น˜์—์„œ ์ž‘์—…ํ•˜์ง€ **๋งˆ์„ธ์š”!**

4. ๊ฐ€์ƒ ํ™˜๊ฒฝ์—์„œ ๋‹ค์Œ ๋ช…๋ น์„ ์‹คํ–‰ํ•˜์—ฌ ๊ฐœ๋ฐœ ํ™˜๊ฒฝ์„ ์„ค์ •ํ•˜์„ธ์š”:

```bash
pip install -e ".[dev]"
```

๋งŒ์•ฝ ์ด๋ฏธ ๊ฐ€์ƒ ํ™˜๊ฒฝ์— ๐Ÿค— Transformers๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ๋‹ค๋ฉด, `-e` ํ”Œ๋ž˜๊ทธ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์„ค์น˜ํ•˜๊ธฐ ์ „์— `pip uninstall transformers`๋กœ ์ œ๊ฑฐํ•ด์ฃผ์„ธ์š”.
์—ฌ๋Ÿฌ๋ถ„์˜ ์šด์˜์ฒด์ œ์— ๋”ฐ๋ผ์„œ, ๊ทธ๋ฆฌ๊ณ  ๐Ÿค— Transformers์˜ ์„ ํƒ์  ์˜์กด์„ฑ์˜ ์ˆ˜๊ฐ€ ์ฆ๊ฐ€ํ•˜๋ฉด์„œ, ์ด ๋ช…๋ น์ด ์‹คํŒจํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿด ๊ฒฝ์šฐ ์‚ฌ์šฉํ•˜๋ ค๋Š” ๋”ฅ๋Ÿฌ๋‹ ํ”„๋ ˆ์ž„์›Œํฌ(PyTorch, TensorFlow, ๊ทธ๋ฆฌ๊ณ /๋˜๋Š” Flax)๋ฅผ ์„ค์น˜ํ•œ ํ›„ ์•„๋ž˜ ๋ช…๋ น์„ ์‹คํ–‰ํ•ด์ฃผ์„ธ์š”: ```bash pip install -e ".[quality]" ``` ๋Œ€๋ถ€๋ถ„์˜ ๊ฒฝ์šฐ ์ด๊ฒƒ์œผ๋กœ ์ถฉ๋ถ„ํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. 5. ๋ธŒ๋žœ์น˜์—์„œ ๊ธฐ๋Šฅ์„ ๊ฐœ๋ฐœํ•˜์„ธ์š”. ์ฝ”๋“œ๋ฅผ ์ž‘์—…ํ•˜๋Š” ๋™์•ˆ ํ…Œ์ŠคํŠธ ์Šค์œ„ํŠธ(test suite)๊ฐ€ ํ†ต๊ณผํ•˜๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”. ๋‹ค์Œ๊ณผ ๊ฐ™์ด ๋ณ€๊ฒฝ ์‚ฌํ•ญ์— ์˜ํ–ฅ์„ ๋ฐ›๋Š” ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜์„ธ์š”: ```bash pytest tests/<TEST_TO_RUN>.py ``` ํ…Œ์ŠคํŠธ์— ๋Œ€ํ•œ ๋” ๋งŽ์€ ์ •๋ณด๋Š” [ํ…Œ์ŠคํŠธ](https://huggingface.co/docs/transformers/testing) ๊ฐ€์ด๋“œ๋ฅผ ํ™•์ธํ•˜์„ธ์š”. ๐Ÿค— Transformers๋Š” `black`๊ณผ `ruff`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์†Œ์Šค ์ฝ”๋“œ์˜ ํ˜•์‹์„ ์ผ๊ด€๋˜๊ฒŒ ์œ ์ง€ํ•ฉ๋‹ˆ๋‹ค. ๋ณ€๊ฒฝ ์‚ฌํ•ญ์„ ์ ์šฉํ•œ ํ›„์—๋Š” ๋‹ค์Œ ๋ช…๋ น์œผ๋กœ ์ž๋™์œผ๋กœ ์Šคํƒ€์ผ ๊ต์ • ๋ฐ ์ฝ”๋“œ ๊ฒ€์ฆ์„ ์ˆ˜ํ–‰ํ•˜์„ธ์š”: ```bash make fixup ``` ์ด๊ฒƒ์€ ๋˜ํ•œ ์ž‘์—… ์ค‘์ธ PR์—์„œ ์ˆ˜์ •ํ•œ ํŒŒ์ผ์—์„œ๋งŒ ์ž‘๋™ํ•˜๋„๋ก ์ตœ์ ํ™”๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ๊ฒ€์‚ฌ๋ฅผ ํ•˜๋‚˜์”ฉ ์‹คํ–‰ํ•˜๋ ค๋Š” ๊ฒฝ์šฐ, ๋‹ค์Œ ๋ช…๋ น์œผ๋กœ ์Šคํƒ€์ผ ๊ต์ •์„ ์ ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash make style ``` ๐Ÿค— Transformers๋Š” ๋˜ํ•œ `ruff`์™€ ๋ช‡ ๊ฐ€์ง€ ์‚ฌ์šฉ์ž ์ •์˜ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ฝ”๋”ฉ ์‹ค์ˆ˜๋ฅผ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค. CI๋ฅผ ํ†ตํ•ด ํ’ˆ์งˆ ๊ด€๋ฆฌ๊ฐ€ ์ˆ˜ํ–‰๋˜์ง€๋งŒ, ๋‹ค์Œ ๋ช…๋ น์œผ๋กœ ๋™์ผํ•œ ๊ฒ€์‚ฌ๋ฅผ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash make quality ``` ๋งˆ์ง€๋ง‰์œผ๋กœ, ์ƒˆ ๋ชจ๋ธ์„ ์ถ”๊ฐ€ํ•  ๋•Œ ์ผ๋ถ€ ํŒŒ์ผ์„ ์—…๋ฐ์ดํŠธํ•˜๋Š” ๊ฒƒ์„ ์žŠ์ง€ ์•Š๋„๋ก ํ•˜๊ธฐ ์œ„ํ•œ ๋งŽ์€ ์Šคํฌ๋ฆฝํŠธ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. 
๋‹ค์Œ ๋ช…๋ น์œผ๋กœ ์ด๋Ÿฌํ•œ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash make repo-consistency ``` ์ด๋Ÿฌํ•œ ๊ฒ€์‚ฌ์— ๋Œ€ํ•ด ์ž์„ธํžˆ ์•Œ์•„๋ณด๊ณ  ๊ด€๋ จ ๋ฌธ์ œ๋ฅผ ํ•ด๊ฒฐํ•˜๋Š” ๋ฐฉ๋ฒ•์€ [Pull Request์— ๋Œ€ํ•œ ๊ฒ€์‚ฌ](https://huggingface.co/docs/transformers/pr_checks) ๊ฐ€์ด๋“œ๋ฅผ ํ™•์ธํ•˜์„ธ์š”. ๋งŒ์•ฝ `docs/source` ๋””๋ ‰ํ„ฐ๋ฆฌ ์•„๋ž˜์˜ ๋ฌธ์„œ๋ฅผ ์ˆ˜์ •ํ•˜๋Š” ๊ฒฝ์šฐ, ๋ฌธ์„œ๊ฐ€ ๋นŒ๋“œ๋  ์ˆ˜ ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”. ์ด ๊ฒ€์‚ฌ๋Š” Pull Request๋ฅผ ์—ด ๋•Œ๋„ CI์—์„œ ์‹คํ–‰๋ฉ๋‹ˆ๋‹ค. ๋กœ์ปฌ ๊ฒ€์‚ฌ๋ฅผ ์‹คํ–‰ํ•˜๋ ค๋ฉด ๋ฌธ์„œ ๋นŒ๋”๋ฅผ ์„ค์น˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```bash pip install ".[docs]" ``` ์ €์žฅ์†Œ์˜ ๋ฃจํŠธ ๋””๋ ‰ํ„ฐ๋ฆฌ์—์„œ ๋‹ค์Œ ๋ช…๋ น์„ ์‹คํ–‰ํ•˜์„ธ์š”: ```bash doc-builder build transformers docs/source/en --build_dir ~/tmp/test-build ``` ์ด ๋ช…๋ น์€ `~/tmp/test-build` ํด๋”์— ๋ฌธ์„œ๋ฅผ ๋นŒ๋“œํ•˜๋ฉฐ, ์ƒ์„ฑ๋œ Markdown ํŒŒ์ผ์„ ์„ ํ˜ธํ•˜๋Š” ํŽธ์ง‘๊ธฐ๋กœ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. Pull Request๋ฅผ ์—ด ๋•Œ GitHub์—์„œ ๋ฌธ์„œ๋ฅผ ๋ฏธ๋ฆฌ ๋ณผ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ณ€๊ฒฝ ์‚ฌํ•ญ์— ๋งŒ์กฑํ•˜๋ฉด `git add`๋กœ ๋ณ€๊ฒฝ๋œ ํŒŒ์ผ์„ ์ถ”๊ฐ€ํ•˜๊ณ , `git commit`์œผ๋กœ ๋ณ€๊ฒฝ ์‚ฌํ•ญ์„ ๋กœ์ปฌ์— ๊ธฐ๋กํ•˜์„ธ์š”: ```bash git add modified_file.py git commit ``` [์ข‹์€ ์ปค๋ฐ‹ ๋ฉ”์‹œ์ง€](https://chris.beams.io/posts/git-commit/)๋ฅผ ์ž‘์„ฑํ•˜์—ฌ ๋ณ€๊ฒฝ ์‚ฌํ•ญ์„ ๋ช…ํ™•ํ•˜๊ฒŒ ์ „๋‹ฌํ•˜์„ธ์š”! ๋ณ€๊ฒฝ ์‚ฌํ•ญ์„ ํ”„๋กœ์ ํŠธ ์›๋ณธ ์ €์žฅ์†Œ์™€ ๋™๊ธฐํ™”ํ•˜๋ ค๋ฉด, PR์„ *์—ด๊ธฐ ์ „์—* ๋ธŒ๋žœ์น˜๋ฅผ `upstream/branch`๋กœ ๋ฆฌ๋ฒ ์ด์Šค(rebase)ํ•˜์„ธ์š”. ๋˜๋Š” ๊ด€๋ฆฌ์ž์˜ ์š”์ฒญ์— ์ด ์ž‘์—…์ด ํ•„์š”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash git fetch upstream git rebase upstream/main ``` ๋ณ€๊ฒฝ ์‚ฌํ•ญ์„ ๋ธŒ๋žœ์น˜์— ํ‘ธ์‹œํ•˜์„ธ์š”: ```bash git push -u origin a-descriptive-name-for-my-changes ``` ์ด๋ฏธ PR์„ ์—ด์—ˆ๋‹ค๋ฉด, `--force` ํ”Œ๋ž˜๊ทธ์™€ ํ•จ๊ป˜ ๊ฐ•์ œ ํ‘ธ์‹œํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์•„์ง PR์ด ์—ด๋ฆฌ์ง€ ์•Š์•˜๋‹ค๋ฉด ์ •์ƒ์ ์œผ๋กœ ๋ณ€๊ฒฝ ์‚ฌํ•ญ์„ ํ‘ธ์‹œํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. 6. 
์ด์ œ GitHub์—์„œ ํฌํฌํ•œ ์ €์žฅ์†Œ๋กœ ์ด๋™ํ•˜๊ณ  **Pull request(ํ’€ ๋ฆฌํ€˜์ŠคํŠธ)**๋ฅผ ํด๋ฆญํ•˜์—ฌ Pull Request๋ฅผ ์—ด ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜์˜ [์ฒดํฌ๋ฆฌ์ŠคํŠธ](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md/#pull-request-checklist)์—์„œ ๋ชจ๋“  ํ•ญ๋ชฉ์— ์ฒดํฌ ํ‘œ์‹œ๋ฅผ ํ•˜์„ธ์š”. ์ค€๋น„๊ฐ€ ์™„๋ฃŒ๋˜๋ฉด ํ”„๋กœ์ ํŠธ ๊ด€๋ฆฌ์ž์—๊ฒŒ ๋ณ€๊ฒฝ ์‚ฌํ•ญ์„ ๋ณด๋‚ด ๊ฒ€ํ† ๋ฅผ ์š”์ฒญํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 7. ๊ด€๋ฆฌ์ž๊ฐ€ ๋ณ€๊ฒฝ ์‚ฌํ•ญ์„ ์š”์ฒญํ•ด๋„ ๊ดœ์ฐฎ์Šต๋‹ˆ๋‹ค. ํ•ต์‹ฌ ๊ธฐ์—ฌ์ž๋“ค๋„ ๋™์ผํ•œ ์ƒํ™ฉ์„ ๊ฒช์Šต๋‹ˆ๋‹ค! ๋ชจ๋‘๊ฐ€ ๋ณ€๊ฒฝ ์‚ฌํ•ญ์„ Pull Request์—์„œ ๋ณผ ์ˆ˜ ์žˆ๋„๋ก, ๋กœ์ปฌ ๋ธŒ๋žœ์น˜์—์„œ ์ž‘์—…ํ•˜๊ณ  ๋ณ€๊ฒฝ ์‚ฌํ•ญ์„ ํฌํฌํ•œ ์ €์žฅ์†Œ๋กœ ํ‘ธ์‹œํ•˜์„ธ์š”. ๊ทธ๋Ÿฌ๋ฉด ๋ณ€๊ฒฝ ์‚ฌํ•ญ์ด ์ž๋™์œผ๋กœ Pull Request์— ๋‚˜ํƒ€๋‚ฉ๋‹ˆ๋‹ค. ### Pull Request ์ฒดํฌ๋ฆฌ์ŠคํŠธ [[pull-request-checklist]] โ˜ Pull Request ์ œ๋ชฉ์€ ๊ธฐ์—ฌ ๋‚ด์šฉ์„ ์š”์•ฝํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค.<br> โ˜ Pull Request๊ฐ€ ์ด์Šˆ๋ฅผ ํ•ด๊ฒฐํ•˜๋Š” ๊ฒฝ์šฐ, Pull Request ์„ค๋ช…์— ์ด์Šˆ ๋ฒˆํ˜ธ๋ฅผ ์–ธ๊ธ‰ํ•˜์—ฌ ์—ฐ๊ด€๋˜์–ด ์žˆ์Œ์„ ์•Œ๋ ค์ฃผ์„ธ์š”. (์ด์Šˆ๋ฅผ ํ™•์ธํ•˜๋Š” ์‚ฌ๋žŒ๋“ค์ด ํ•ด๋‹น ์ด์Šˆ์— ๋Œ€ํ•œ ์ž‘์—…์ด ์ง„ํ–‰ ์ค‘์ž„์„ ์•Œ ์ˆ˜ ์žˆ๊ฒŒ ํ•ฉ๋‹ˆ๋‹ค).<br> โ˜ ์ž‘์—…์ด ์ง„ํ–‰์ค‘์ด๋ผ๋ฉด ์ œ๋ชฉ ์•ž์— `[WIP]`๋ฅผ ๋ถ™์—ฌ์ฃผ์„ธ์š”. ์ค‘๋ณต ์ž‘์—…์„ ํ”ผํ•˜๊ณ  ๋ณ‘ํ•ฉํ•  ์ค€๋น„๊ฐ€ ๋œ PR๊ณผ ๊ตฌ๋ถ„ํ•˜๊ธฐ์— ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค.<br> โ˜ ๊ธฐ์กด ํ…Œ์ŠคํŠธ๋ฅผ ํ†ต๊ณผํ•˜๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”.<br> โ˜ ์ƒˆ๋กœ์šด ๊ธฐ๋Šฅ์„ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒฝ์šฐ, ํ•ด๋‹น ๊ธฐ๋Šฅ์— ๋Œ€ํ•œ ํ…Œ์ŠคํŠธ๋„ ์ถ”๊ฐ€ํ•˜์„ธ์š”.<br> - ์ƒˆ ๋ชจ๋ธ์„ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒฝ์šฐ, `ModelTester.all_model_classes = (MyModel, MyModelWithLMHead,...)`์„ ์‚ฌ์šฉํ•˜์—ฌ ์ผ๋ฐ˜์ ์ธ ํ…Œ์ŠคํŠธ๋ฅผ ํ™œ์„ฑํ™”ํ•˜์„ธ์š”. - ์ƒˆ `@slow` ํ…Œ์ŠคํŠธ๋ฅผ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒฝ์šฐ, ๋‹ค์Œ ๋ช…๋ น์œผ๋กœ ํ…Œ์ŠคํŠธ๋ฅผ ํ†ต๊ณผํ•˜๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: `RUN_SLOW=1 python -m pytest tests/models/my_new_model/test_my_new_model.py`. 
- ์ƒˆ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒฝ์šฐ, ํ…Œ์ŠคํŠธ๋ฅผ ์ž‘์„ฑํ•˜๊ณ  ๋‹ค์Œ ๋ช…๋ น์œผ๋กœ ํ…Œ์ŠคํŠธ๋ฅผ ํ†ต๊ณผํ•˜๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: `RUN_SLOW=1 python -m pytest tests/models/{your_model_name}/test_tokenization_{your_model_name}.py`. - CircleCI์—์„œ๋Š” ๋А๋ฆฐ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜์ง€ ์•Š์ง€๋งŒ, GitHub Actions์—์„œ๋Š” ๋งค์ผ ๋ฐค ์‹คํ–‰๋ฉ๋‹ˆ๋‹ค!<br> โ˜ ๋ชจ๋“  ๊ณต๊ฐœ ๋ฉ”์†Œ๋“œ๋Š” ์œ ์šฉํ•œ ๊ธฐ์ˆ ๋ฌธ์„œ๋ฅผ ๊ฐ€์ ธ์•ผ ํ•ฉ๋‹ˆ๋‹ค (์˜ˆ๋ฅผ ๋“ค์–ด [`modeling_bert.py`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/modeling_bert.py) ์ฐธ์กฐ).<br> โ˜ ์ €์žฅ์†Œ๊ฐ€ ๋น ๋ฅด๊ฒŒ ์„ฑ์žฅํ•˜๊ณ  ์žˆ์œผ๋ฏ€๋กœ ์ €์žฅ์†Œ์— ์ƒ๋‹นํ•œ ๋ถ€๋‹ด์„ ์ฃผ๋Š” ์ด๋ฏธ์ง€, ๋™์˜์ƒ ๋ฐ ๊ธฐํƒ€ ํ…์ŠคํŠธ๊ฐ€ ์•„๋‹Œ ํŒŒ์ผ์€ ์ถ”๊ฐ€ํ•˜์ง€ ๋งˆ์„ธ์š”. ๋Œ€์‹  [`hf-internal-testing`](https://huggingface.co/hf-internal-testing)๊ณผ ๊ฐ™์€ Hub ์ €์žฅ์†Œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ด๋Ÿฌํ•œ ํŒŒ์ผ์„ ํ˜ธ์ŠคํŒ…ํ•˜๊ณ  URL๋กœ ์ฐธ์กฐํ•˜์„ธ์š”. ๋ฌธ์„œ์™€ ๊ด€๋ จ๋œ ์ด๋ฏธ์ง€๋Š” ๋‹ค์Œ ์ €์žฅ์†Œ์— ๋ฐฐ์น˜ํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค: [huggingface/documentation-images](https://huggingface.co/datasets/huggingface/documentation-images). ์ด ๋ฐ์ดํ„ฐ์…‹ ์ €์žฅ์†Œ์—์„œ PR์„ ์—ด์–ด์„œ Hugging Face ๋ฉค๋ฒ„์—๊ฒŒ ๋ณ‘ํ•ฉ์„ ์š”์ฒญํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. Pull Request์—์„œ ์‹คํ–‰๋˜๋Š” ๊ฒ€์‚ฌ์— ๋Œ€ํ•œ ์ž์„ธํ•œ ์ •๋ณด๋Š” [Pull Request์— ๋Œ€ํ•œ ๊ฒ€์‚ฌ](https://huggingface.co/docs/transformers/pr_checks) ๊ฐ€์ด๋“œ๋ฅผ ํ™•์ธํ•˜์„ธ์š”. ### ํ…Œ์ŠคํŠธ [[tests]] ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ ๋™์ž‘๊ณผ ์—ฌ๋Ÿฌ ์˜ˆ์ œ๋ฅผ ํ…Œ์ŠคํŠธํ•  ์ˆ˜ ์žˆ๋Š” ๊ด‘๋ฒ”์œ„ํ•œ ํ…Œ์ŠคํŠธ ์Šค์œ„ํŠธ๊ฐ€ ํฌํ•จ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ ํ…Œ์ŠคํŠธ๋Š” [tests](https://github.com/huggingface/transformers/tree/main/tests) ํด๋”์—, ์˜ˆ์ œ ํ…Œ์ŠคํŠธ๋Š” [examples](https://github.com/huggingface/transformers/tree/main/examples) ํด๋”์— ์žˆ์Šต๋‹ˆ๋‹ค. ์†๋„๊ฐ€ ๋น ๋ฅธ `pytest`์™€ `pytest-xdist`๋ฅผ ์„ ํ˜ธํ•ฉ๋‹ˆ๋‹ค. 
์ €์žฅ์†Œ์˜ ๋ฃจํŠธ ๋””๋ ‰ํ„ฐ๋ฆฌ์—์„œ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•  *ํ•˜์œ„ ํด๋” ๊ฒฝ๋กœ ๋˜๋Š” ํ…Œ์ŠคํŠธ ํŒŒ์ผ ๊ฒฝ๋กœ*๋ฅผ ์ง€์ •ํ•˜์„ธ์š”. ```bash python -m pytest -n auto --dist=loadfile -s -v ./tests/models/my_new_model ``` ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ `examples` ๋””๋ ‰ํ„ฐ๋ฆฌ์—์„œ๋„ *ํ•˜์œ„ ํด๋” ๊ฒฝ๋กœ ๋˜๋Š” ํ…Œ์ŠคํŠธ ํŒŒ์ผ ๊ฒฝ๋กœ*๋ฅผ ์ง€์ •ํ•˜์„ธ์š”. ์˜ˆ๋ฅผ ๋“ค์–ด, ๋‹ค์Œ ๋ช…๋ น์€ PyTorch `examples` ๋””๋ ‰ํ„ฐ๋ฆฌ์˜ ํ…์ŠคํŠธ ๋ถ„๋ฅ˜ ํ•˜์œ„ ํด๋”๋ฅผ ํ…Œ์ŠคํŠธํ•ฉ๋‹ˆ๋‹ค: ```bash pip install -r examples/xxx/requirements.txt # only needed the first time python -m pytest -n auto --dist=loadfile -s -v ./examples/pytorch/text-classification ``` ์ด๊ฒƒ์ด ์‹ค์ œ๋กœ `make test` ๋ฐ `make test-examples` ๋ช…๋ น์ด ๊ตฌํ˜„๋˜๋Š” ๋ฐฉ์‹์ž…๋‹ˆ๋‹ค (`pip install`์€ ์ œ์™ธํ•ฉ๋‹ˆ๋‹ค)! ๋˜ํ•œ ํŠน์ • ๊ธฐ๋Šฅ๋งŒ ํ…Œ์ŠคํŠธํ•˜๊ธฐ ์œ„ํ•œ ๋” ์ž‘์€ ํ…Œ์ŠคํŠธ๋ฅผ ์ง€์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ธฐ๋ณธ์ ์œผ๋กœ ๋А๋ฆฐ ํ…Œ์ŠคํŠธ๋Š” ๊ฑด๋„ˆ๋›ฐ์ง€๋งŒ `RUN_SLOW` ํ™˜๊ฒฝ ๋ณ€์ˆ˜๋ฅผ `yes`๋กœ ์„ค์ •ํ•˜์—ฌ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ๋งŽ์€ ๊ธฐ๊ฐ€๋ฐ”์ดํŠธ ๋‹จ์œ„์˜ ๋ชจ๋ธ์ด ๋‹ค์šด๋กœ๋“œ๋˜๋ฏ€๋กœ ์ถฉ๋ถ„ํ•œ ๋””์Šคํฌ ๊ณต๊ฐ„, ์ข‹์€ ์ธํ„ฐ๋„ท ์—ฐ๊ฒฐ๊ณผ ๋งŽ์€ ์ธ๋‚ด๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค! <Tip warning={true}> ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋ ค๋ฉด *ํ•˜์œ„ ํด๋” ๊ฒฝ๋กœ ๋˜๋Š” ํ…Œ์ŠคํŠธ ํŒŒ์ผ ๊ฒฝ๋กœ*๋ฅผ ์ง€์ •ํ•˜์„ธ์š”. ๊ทธ๋ ‡์ง€ ์•Š์œผ๋ฉด `tests` ๋˜๋Š” `examples` ํด๋”์˜ ๋ชจ๋“  ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜๊ฒŒ ๋˜์–ด ๋งค์šฐ ๊ธด ์‹œ๊ฐ„์ด ๊ฑธ๋ฆฝ๋‹ˆ๋‹ค! </Tip> ```bash RUN_SLOW=yes python -m pytest -n auto --dist=loadfile -s -v ./tests/models/my_new_model RUN_SLOW=yes python -m pytest -n auto --dist=loadfile -s -v ./examples/pytorch/text-classification ``` ๋А๋ฆฐ ํ…Œ์ŠคํŠธ์™€ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ, ๋‹ค์Œ๊ณผ ๊ฐ™์ด ํ…Œ์ŠคํŠธ ์ค‘์— ๊ธฐ๋ณธ์ ์œผ๋กœ ํ™œ์„ฑํ™”๋˜์ง€ ์•Š๋Š” ๋‹ค๋ฅธ ํ™˜๊ฒฝ ๋ณ€์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: - `RUN_CUSTOM_TOKENIZERS`: ์‚ฌ์šฉ์ž ์ •์˜ ํ† ํฌ๋‚˜์ด์ € ํ…Œ์ŠคํŠธ๋ฅผ ํ™œ์„ฑํ™”ํ•ฉ๋‹ˆ๋‹ค. - `RUN_PT_FLAX_CROSS_TESTS`: PyTorch + Flax ํ†ตํ•ฉ ํ…Œ์ŠคํŠธ๋ฅผ ํ™œ์„ฑํ™”ํ•ฉ๋‹ˆ๋‹ค. 
- `RUN_PT_TF_CROSS_TESTS`: TensorFlow + PyTorch ํ†ตํ•ฉ ํ…Œ์ŠคํŠธ๋ฅผ ํ™œ์„ฑํ™”ํ•ฉ๋‹ˆ๋‹ค. ๋” ๋งŽ์€ ํ™˜๊ฒฝ ๋ณ€์ˆ˜์™€ ์ถ”๊ฐ€ ์ •๋ณด๋Š” [testing_utils.py](src/transformers/testing_utils.py)์—์„œ ์ฐพ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๐Ÿค— Transformers๋Š” ํ…Œ์ŠคํŠธ ์‹คํ–‰๊ธฐ๋กœ `pytest`๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ํ…Œ์ŠคํŠธ ์Šค์œ„ํŠธ ์ž์ฒด์—์„œ๋Š” `pytest` ๊ด€๋ จ ๊ธฐ๋Šฅ์„ ์‚ฌ์šฉํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ์ด๊ฒƒ์€ `unittest`๊ฐ€ ์™„์ „ํžˆ ์ง€์›๋œ๋‹ค๋Š” ๊ฒƒ์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ์€ `unittest`๋กœ ํ…Œ์ŠคํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋Š” ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค: ```bash python -m unittest discover -s tests -t . -v python -m unittest discover -s examples -t examples -v ``` ### ์Šคํƒ€์ผ ๊ฐ€์ด๋“œ [[style-guide]] ๋ฌธ์„œ๋Š” [Google Python ์Šคํƒ€์ผ ๊ฐ€์ด๋“œ](https://google.github.io/styleguide/pyguide.html)๋ฅผ ๋”ฐ๋ฆ…๋‹ˆ๋‹ค. ์ž์„ธํ•œ ์ •๋ณด๋Š” [๋ฌธ์„œ ์ž‘์„ฑ ๊ฐ€์ด๋“œ](https://github.com/huggingface/transformers/tree/main/docs#writing-documentation---specification)๋ฅผ ํ™•์ธํ•˜์„ธ์š”. ### Windows์—์„œ ๊ฐœ๋ฐœ [[develop-on-windows]] Windows์—์„œ ๊ฐœ๋ฐœํ•  ๊ฒฝ์šฐ([Windows Subsystem for Linux](https://learn.microsoft.com/en-us/windows/wsl/) ๋˜๋Š” WSL์—์„œ ์ž‘์—…ํ•˜์ง€ ์•Š๋Š” ํ•œ) Windows `CRLF` ์ค„ ๋ฐ”๊ฟˆ์„ Linux `LF` ์ค„ ๋ฐ”๊ฟˆ์œผ๋กœ ๋ณ€ํ™˜ํ•˜๋„๋ก git์„ ๊ตฌ์„ฑํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```bash git config core.autocrlf input ``` Windows์—์„œ `make` ๋ช…๋ น์„ ์‹คํ–‰ํ•˜๋Š” ํ•œ ๊ฐ€์ง€ ๋ฐฉ๋ฒ•์€ MSYS2๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค: 1. [MSYS2](https://www.msys2.org/)๋ฅผ ๋‹ค์šด๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค. `C:\msys64`์— ์„ค์น˜๋˜์—ˆ๋‹ค๊ณ  ๊ฐ€์ •ํ•ฉ๋‹ˆ๋‹ค. 2. CLI์—์„œ `C:\msys64\msys2.exe`๋ฅผ ์—ฝ๋‹ˆ๋‹ค (์‹œ์ž‘ ๋ฉ”๋‰ด์—์„œ ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•ด์•ผ ํ•จ). 3. ์‰˜์—์„œ ๋‹ค์Œ์„ ์‹คํ–‰ํ•˜์—ฌ: `pacman -Syu` ๋ฐ `pacman -S make`๋กœ `make`๋ฅผ ์„ค์น˜ํ•ฉ๋‹ˆ๋‹ค. 4. ํ™˜๊ฒฝ ๋ณ€์ˆ˜ PATH์— `C:\msys64\usr\bin`์„ ์ถ”๊ฐ€ํ•˜์„ธ์š”. ์ด์ œ ๋ชจ๋“  ํ„ฐ๋ฏธ๋„ (Powershell, cmd.exe ๋“ฑ)์—์„œ `make`๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! 
๐ŸŽ‰ ### ํฌํฌํ•œ ์ €์žฅ์†Œ๋ฅผ ์ƒ์œ„ ์›๋ณธ ๋ธŒ๋žœ์น˜(main)๊ณผ ๋™๊ธฐํ™”ํ•˜๊ธฐ (Hugging Face ์ €์žฅ์†Œ) [[sync-a-forked-repository-with-upstream-main-the-hugging-face-repository]] ํฌํฌํ•œ ์ €์žฅ์†Œ์˜ main ๋ธŒ๋žœ์น˜๋ฅผ ์—…๋ฐ์ดํŠธํ•  ๋•Œ, ๋‹ค์Œ ๋‹จ๊ณ„๋ฅผ ๋”ฐ๋ผ ์ˆ˜ํ–‰ํ•ด์ฃผ์„ธ์š”. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ๊ฐ upstream PR์— ์ฐธ์กฐ ๋…ธํŠธ๊ฐ€ ์ถ”๊ฐ€๋˜๋Š” ๊ฒƒ์„ ํ”ผํ•˜๊ณ  ์ด๋Ÿฌํ•œ PR์— ๊ด€์—ฌํ•˜๋Š” ๊ฐœ๋ฐœ์ž๋“ค์—๊ฒŒ ๋ถˆํ•„์š”ํ•œ ์•Œ๋ฆผ์ด ์ „์†ก๋˜๋Š” ๊ฒƒ์„ ๋ฐฉ์ง€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 1. ๊ฐ€๋Šฅํ•˜๋ฉด ํฌํฌ๋œ ์ €์žฅ์†Œ์˜ ๋ธŒ๋žœ์น˜ ๋ฐ PR์„ ์‚ฌ์šฉํ•˜์—ฌ upstream๊ณผ ๋™๊ธฐํ™”ํ•˜์ง€ ๋งˆ์„ธ์š”. ๋Œ€์‹  ํฌํฌ๋œ main ์ €์žฅ์†Œ์— ์ง์ ‘ ๋ณ‘ํ•ฉํ•˜์„ธ์š”. 2. PR์ด ๋ฐ˜๋“œ์‹œ ํ•„์š”ํ•œ ๊ฒฝ์šฐ, ๋ธŒ๋žœ์น˜๋ฅผ ํ™•์ธํ•œ ํ›„ ๋‹ค์Œ ๋‹จ๊ณ„๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”: ```bash git checkout -b your-branch-for-syncing git pull --squash --no-commit upstream main git commit -m '<your message without GitHub references>' git push --set-upstream origin your-branch-for-syncing ```
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๋””๋ฒ„๊น… [[debugging]] ## Multi-GPU ๋„คํŠธ์›Œํฌ ๋ฌธ์ œ ๋””๋ฒ„๊ทธ [[multigpu-network-issues-debug]] `DistributedDataParallel` ๋ฐ ๋‹ค์ค‘ GPU๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ›ˆ๋ จํ•˜๊ฑฐ๋‚˜ ์ถ”๋ก ํ•  ๋•Œ, ํ”„๋กœ์„ธ์Šค ๋ฐ/๋˜๋Š” ๋…ธ๋“œ ๊ฐ„์˜ ์ƒํ˜ธ ํ†ต์‹  ๋ฌธ์ œ๊ฐ€ ๋ฐœ์ƒํ•˜๋Š” ๊ฒฝ์šฐ, ๋‹ค์Œ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋„คํŠธ์›Œํฌ ๋ฌธ์ œ๋ฅผ ์ง„๋‹จํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```bash wget https://raw.githubusercontent.com/huggingface/transformers/main/scripts/distributed/torch-distributed-gpu-test.py ``` ์˜ˆ๋ฅผ ๋“ค์–ด, 2๊ฐœ์˜ GPU๊ฐ€ ์ƒํ˜ธ ์ž‘์šฉํ•˜๋Š” ๋ฐฉ์‹์„ ํ…Œ์ŠคํŠธํ•˜๋ ค๋ฉด ๋‹ค์Œ์„ ์‹คํ–‰ํ•˜์„ธ์š”: ```bash python -m torch.distributed.run --nproc_per_node 2 --nnodes 1 torch-distributed-gpu-test.py ``` ๋‘ ํ”„๋กœ์„ธ์Šค๊ฐ€ ์„œ๋กœ ํ†ต์‹ ํ•˜๊ณ  GPU ๋ฉ”๋ชจ๋ฆฌ๋ฅผ ํ• ๋‹นํ•˜๋Š” ๊ฒฝ์šฐ, ๊ฐ๊ฐ "OK" ์ƒํƒœ๋ฅผ ์ถœ๋ ฅํ•ฉ๋‹ˆ๋‹ค. ๋” ๋งŽ์€ GPU ๋˜๋Š” ๋…ธ๋“œ์˜ ๊ฒฝ์šฐ ์Šคํฌ๋ฆฝํŠธ์˜ ์ธ์ˆ˜๋ฅผ ์กฐ์ •ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ์ง„๋‹จ ์Šคํฌ๋ฆฝํŠธ ๋‚ด์—์„œ ๋” ๋งŽ์€ ์„ธ๋ถ€ ์ •๋ณด์™€ SLURM ํ™˜๊ฒฝ์—์„œ ์‹คํ–‰ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ๋ ˆ์‹œํ”ผ๋ฅผ ์ฐพ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
์ถ”๊ฐ€์ ์ธ ๋””๋ฒ„๊ทธ ์ˆ˜์ค€์€ ๋‹ค์Œ๊ณผ ๊ฐ™์ด `NCCL_DEBUG=INFO` ํ™˜๊ฒฝ ๋ณ€์ˆ˜๋ฅผ ์ถ”๊ฐ€ํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค: ```bash NCCL_DEBUG=INFO python -m torch.distributed.run --nproc_per_node 2 --nnodes 1 torch-distributed-gpu-test.py ``` ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด NCCL ๊ด€๋ จ ๋””๋ฒ„๊ทธ ์ •๋ณด๊ฐ€ ๋งŽ์ด ์ถœ๋ ฅ๋˜๋ฉฐ, ๋ฌธ์ œ๊ฐ€ ๋ณด๊ณ ๋œ ๊ฒฝ์šฐ์—๋Š” ์ธํ„ฐ๋„ท์—์„œ ๊ฒ€์ƒ‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋˜๋Š” ์ถœ๋ ฅ์„ ํ•ด์„ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์ž˜ ๋ชจ๋ฅด๋Š” ๊ฒฝ์šฐ ๋กœ๊ทธ ํŒŒ์ผ์„ ์ด์Šˆ์— ๊ณต์œ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ## ์–ธ๋”ํ”Œ๋กœ ๋ฐ ์˜ค๋ฒ„ํ”Œ๋กœ ๊ฐ์ง€ [[underflow-and-overflow-detection]] <Tip> ์ด ๊ธฐ๋Šฅ์€ ํ˜„์žฌ PyTorch์—์„œ๋งŒ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. </Tip> <Tip> ๋‹ค์ค‘ GPU ํ›ˆ๋ จ์„ ์œ„ํ•ด์„œ๋Š” DDP (`torch.distributed.launch`)๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. </Tip> <Tip> ์ด ๊ธฐ๋Šฅ์€ `nn.Module`์„ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•˜๋Š” ๋ชจ๋ธ๊ณผ ํ•จ๊ป˜ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. </Tip> `loss=NaN`์ด ๋‚˜ํƒ€๋‚˜๊ฑฐ๋‚˜ ๋ชจ๋ธ์ด `inf` ๋˜๋Š” `nan`์œผ๋กœ ์ธํ•ด ๋‹ค๋ฅธ ์ด์ƒํ•œ ๋™์ž‘์„ ํ•˜๋Š” ๊ฒฝ์šฐ, ์–ธ๋”ํ”Œ๋กœ ๋˜๋Š” ์˜ค๋ฒ„ํ”Œ๋กœ์˜ ์ฒซ ๋ฒˆ์งธ ๋ฐœ์ƒ ์œ„์น˜์™€ ๊ทธ ์›์ธ์„ ํŒŒ์•…ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋‹คํ–‰ํžˆ๋„ ์ด๋ฅผ ์ž๋™์œผ๋กœ ๊ฐ์ง€ํ•˜๋Š” ํŠน์ˆ˜ ๋ชจ๋“ˆ์„ ํ™œ์„ฑํ™”ํ•˜์—ฌ ์‰ฝ๊ฒŒ ์•Œ์•„๋‚ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [`Trainer`]๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ, ๋‹ค์Œ์„ ๊ธฐ์กด์˜ ๋ช…๋ น์ค„ ์ธ์ˆ˜์— ์ถ”๊ฐ€ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ```bash --debug underflow_overflow ``` ๋˜๋Š” [`TrainingArguments`] ๊ฐ์ฒด๋ฅผ ์ƒ์„ฑํ•  ๋•Œ `debug="underflow_overflow"`๋ฅผ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. ์ž์ฒด ํ›ˆ๋ จ ๋ฃจํ”„๋‚˜ ๋‹ค๋ฅธ Trainer๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ, ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```python from transformers.debug_utils import DebugUnderflowOverflow debug_overflow = DebugUnderflowOverflow(model) ``` [`~debug_utils.DebugUnderflowOverflow`]๋Š” ๋ชจ๋ธ์— ํ›„ํฌ๋ฅผ ์‚ฝ์ž…ํ•˜์—ฌ ๊ฐ forward ํ˜ธ์ถœ ์งํ›„์— ์ž…๋ ฅ ๋ฐ ์ถœ๋ ฅ ๋ณ€์ˆ˜ ๋ฐ ํ•ด๋‹น ๋ชจ๋“ˆ์˜ ๊ฐ€์ค‘์น˜๋ฅผ ํ…Œ์ŠคํŠธํ•ฉ๋‹ˆ๋‹ค. 
ํ™œ์„ฑํ™”๋‚˜ ๊ฐ€์ค‘์น˜์˜ ์ตœ์†Œํ•œ ํ•˜๋‚˜์˜ ์š”์†Œ์—์„œ `inf` ๋˜๋Š” `nan`์ด ๊ฐ์ง€๋˜๋ฉด ํ”„๋กœ๊ทธ๋žจ์ด ์–ด์„คํŠธ๋˜๊ณ  ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋ณด๊ณ ์„œ๊ฐ€ ์ถœ๋ ฅ๋ฉ๋‹ˆ๋‹ค. (์ด ์˜ˆ์ œ๋Š” fp16 ํ˜ผํ•ฉ ์ •๋ฐ€๋„์—์„œ `google/mt5-small`์—์„œ ์บก์ฒ˜๋œ ๊ฒƒ์ž…๋‹ˆ๋‹ค): ``` Detected inf/nan during batch_number=0 Last 21 forward frames: abs min abs max metadata encoder.block.1.layer.1.DenseReluDense.dropout Dropout 0.00e+00 2.57e+02 input[0] 0.00e+00 2.85e+02 output [...] encoder.block.2.layer.0 T5LayerSelfAttention 6.78e-04 3.15e+03 input[0] 2.65e-04 3.42e+03 output[0] None output[1] 2.25e-01 1.00e+04 output[2] encoder.block.2.layer.1.layer_norm T5LayerNorm 8.69e-02 4.18e-01 weight 2.65e-04 3.42e+03 input[0] 1.79e-06 4.65e+00 output encoder.block.2.layer.1.DenseReluDense.wi_0 Linear 2.17e-07 4.50e+00 weight 1.79e-06 4.65e+00 input[0] 2.68e-06 3.70e+01 output encoder.block.2.layer.1.DenseReluDense.wi_1 Linear 8.08e-07 2.66e+01 weight 1.79e-06 4.65e+00 input[0] 1.27e-04 2.37e+02 output encoder.block.2.layer.1.DenseReluDense.dropout Dropout 0.00e+00 8.76e+03 input[0] 0.00e+00 9.74e+03 output encoder.block.2.layer.1.DenseReluDense.wo Linear 1.01e-06 6.44e+00 weight 0.00e+00 9.74e+03 input[0] 3.18e-04 6.27e+04 output encoder.block.2.layer.1.DenseReluDense T5DenseGatedGeluDense 1.79e-06 4.65e+00 input[0] 3.18e-04 6.27e+04 output encoder.block.2.layer.1.dropout Dropout 3.18e-04 6.27e+04 input[0] 0.00e+00 inf output ``` ์˜ˆ์ œ ์ถœ๋ ฅ์€ ๊ฐ„๋žต์„ฑ์„ ์œ„ํ•ด ์ค‘๊ฐ„ ๋ถ€๋ถ„์ด ์ž˜๋ ค ์žˆ์Šต๋‹ˆ๋‹ค. ๋‘ ๋ฒˆ์งธ ์—ด์€ ์ ˆ๋Œ€์ ์œผ๋กœ ๊ฐ€์žฅ ํฐ ์š”์†Œ์˜ ๊ฐ’์ด๋ฉฐ, ๋”ฐ๋ผ์„œ ๋งˆ์ง€๋ง‰ ๋ช‡ ๊ฐœ์˜ ํ”„๋ ˆ์ž„์„ ์ž์„ธํžˆ ์‚ดํŽด๋ณด๋ฉด ์ž…๋ ฅ๊ณผ ์ถœ๋ ฅ์ด `1e4` ๋ฒ”์œ„์— ์žˆ์Œ์„ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์ด ํ›ˆ๋ จ์€ `fp16` ํ˜ผํ•ฉ ์ •๋ฐ€๋„๋กœ ์ˆ˜ํ–‰๋  ๋•Œ ๊ฐ€์žฅ ๋งˆ์ง€๋ง‰ ๋‹จ๊ณ„์—์„œ ์˜ค๋ฒ„ํ”Œ๋กœ์šฐ๊ฐ€ ๋ฐœ์ƒํ–ˆ์Šต๋‹ˆ๋‹ค (`fp16`์—์„œ `inf` ์ด์ „์˜ ๊ฐ€์žฅ ํฐ ์ˆซ์ž๋Š” `64e3`์ž…๋‹ˆ๋‹ค). 
`fp16` ์•„๋ž˜์—์„œ ์˜ค๋ฒ„ํ”Œ๋กœ์šฐ๋ฅผ ํ”ผํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” ํ™œ์„ฑํ™”๋Š” `1e4`๋ณด๋‹ค ํ›จ์”ฌ ์ž‘์•„์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์™œ๋ƒํ•˜๋ฉด `1e4 * 1e4 = 1e8`์ด๊ธฐ ๋•Œ๋ฌธ์— ํฐ ํ™œ์„ฑํ™”์™€์˜ ํ–‰๋ ฌ ๊ณฑ์€ ์ˆ˜์น˜์ ์ธ ์˜ค๋ฒ„ํ”Œ๋กœ์šฐ ์กฐ๊ฑด์œผ๋กœ ์ด์–ด์งˆ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ถ”์ ์˜ ๋งจ ์ฒ˜์Œ์—์„œ ์–ด๋А ๋ฐฐ์น˜ ๋ฒˆํ˜ธ์—์„œ ๋ฌธ์ œ๊ฐ€ ๋ฐœ์ƒํ–ˆ๋Š”์ง€ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค (์—ฌ๊ธฐ์„œ `Detected inf/nan during batch_number=0`์€ ๋ฌธ์ œ๊ฐ€ ์ฒซ ๋ฒˆ์งธ ๋ฐฐ์น˜์—์„œ ๋ฐœ์ƒํ–ˆ์Œ์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค). ๊ฐ ๋ณด๊ณ ๋œ ํ”„๋ ˆ์ž„์€ ํ•ด๋‹น ํ”„๋ ˆ์ž„์ด ๋ณด๊ณ ํ•˜๋Š” ํ•ด๋‹น ๋ชจ๋“ˆ์— ๋Œ€ํ•œ ์™„์ „ํ•œ ํ•ญ๋ชฉ์„ ์„ ์–ธํ•˜๋ฉฐ, ์ด ํ”„๋ ˆ์ž„๋งŒ ์‚ดํŽด๋ณด๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค. ``` encoder.block.2.layer.1.layer_norm T5LayerNorm 8.69e-02 4.18e-01 weight 2.65e-04 3.42e+03 input[0] 1.79e-06 4.65e+00 output ``` ์—ฌ๊ธฐ์„œ `encoder.block.2.layer.1.layer_norm`์€ ์ธ์ฝ”๋”์˜ ๋‘ ๋ฒˆ์งธ ๋ธ”๋ก์˜ ์ฒซ ๋ฒˆ์งธ ๋ ˆ์ด์–ด์— ๋Œ€ํ•œ ๋ ˆ์ด์–ด ์ •๊ทœํ™”๋ฅผ ์˜๋ฏธํ•˜๋ฉฐ, `forward`์˜ ํŠน์ • ํ˜ธ์ถœ์€ `T5LayerNorm`์ž…๋‹ˆ๋‹ค. ์ด ๋ณด๊ณ ์„œ์˜ ๋งˆ์ง€๋ง‰ ๋ช‡ ๊ฐœ ํ”„๋ ˆ์ž„์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ``` Detected inf/nan during batch_number=0 Last 21 forward frames: abs min abs max metadata [...] encoder.block.2.layer.1.DenseReluDense.wi_0 Linear 2.17e-07 4.50e+00 weight 1.79e-06 4.65e+00 input[0] 2.68e-06 3.70e+01 output encoder.block.2.layer.1.DenseReluDense.wi_1 Linear 8.08e-07 2.66e+01 weight 1.79e-06 4.65e+00 input[0] 1.27e-04 2.37e+02 output encoder.block.2.layer.1.DenseReluDense.wo Linear 1.01e-06 6.44e+00 weight 0.00e+00 9.74e+03 input[0] 3.18e-04 6.27e+04 output encoder.block.2.layer.1.DenseReluDense T5DenseGatedGeluDense 1.79e-06 4.65e+00 input[0] 3.18e-04 6.27e+04 output encoder.block.2.layer.1.dropout Dropout 3.18e-04 6.27e+04 input[0] 0.00e+00 inf output ``` ๋งˆ์ง€๋ง‰ ํ”„๋ ˆ์ž„์€ `Dropout.forward` ํ•จ์ˆ˜์— ๋Œ€ํ•œ ๋ณด๊ณ ์ž…๋‹ˆ๋‹ค. ์ฒซ ๋ฒˆ์งธ ํ•ญ๋ชฉ์€ ์œ ์ผํ•œ ์ž…๋ ฅ์„ ๋‚˜ํƒ€๋‚ด๊ณ  ๋‘ ๋ฒˆ์งธ ํ•ญ๋ชฉ์€ ์œ ์ผํ•œ ์ถœ๋ ฅ์„ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. 
์ด ํ•จ์ˆ˜๊ฐ€ `DenseReluDense` ํด๋ž˜์Šค ๋‚ด๋ถ€์˜ `dropout` ์†์„ฑ์—์„œ ํ˜ธ์ถœ๋œ ๊ฒƒ์„ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Š” ์ฒซ ๋ฒˆ์งธ ๋ ˆ์ด์–ด์˜ ๋‘ ๋ฒˆ์งธ ๋ธ”๋ก์—์„œ ์ฒซ ๋ฒˆ์งธ ๋ฐฐ์น˜ ์ค‘์— ๋ฐœ์ƒํ–ˆ๋‹ค๋Š” ๊ฒƒ์„ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋งˆ์ง€๋ง‰์œผ๋กœ, ์ ˆ๋Œ€์ ์œผ๋กœ ๊ฐ€์žฅ ํฐ ์ž…๋ ฅ ์š”์†Œ๋Š” `6.27e+04`์ด๊ณ  ์ถœ๋ ฅ๋„ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ `inf`์ž…๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์—์„œ๋Š” `T5DenseGatedGeluDense.forward`๊ฐ€ ์ถœ๋ ฅ ํ™œ์„ฑํ™”๋ฅผ ์ƒ์„ฑํ•˜๋Š”๋ฐ, ์ ˆ๋Œ€์ ์œผ๋กœ ๊ฐ€์žฅ ํฐ ๊ฐ’์ด ์•ฝ 62.7K์ธ ๊ฒƒ์„ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๊ฐ’์€ fp16์˜ ์ตœ๋Œ€ ์ œํ•œ์ธ 64K์— ๋งค์šฐ ๊ทผ์ ‘ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ ํ”„๋ ˆ์ž„์—์„œ๋Š” ์ผ๋ถ€ ์š”์†Œ๋ฅผ 0์œผ๋กœ ๋งŒ๋“  ํ›„ ๊ฐ€์ค‘์น˜๋ฅผ ์žฌ์ •๊ทœํ™”ํ•˜๋Š” `Dropout`์ด ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋กœ ์ธํ•ด ์ ˆ๋Œ€ ์ตœ๋Œ€๊ฐ’์ด 64K๋ฅผ ์ดˆ๊ณผํ•˜๊ณ  ์˜ค๋ฒ„ํ”Œ๋กœ์šฐ(`inf`)๊ฐ€ ๋ฐœ์ƒํ•ฉ๋‹ˆ๋‹ค. ๋ณด์‹œ๋‹ค์‹œํ”ผ, fp16 ์ˆซ์ž์˜ ๊ฒฝ์šฐ ์ˆซ์ž๊ฐ€ ๋งค์šฐ ์ปค์งˆ ๋•Œ ์ด์ „ ํ”„๋ ˆ์ž„์„ ์‚ดํŽด๋ณด์•„์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ณด๊ณ ์„œ๋ฅผ `models/t5/modeling_t5.py`์˜ ์ฝ”๋“œ์™€ ์ผ์น˜์‹œ์ผœ ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ```python class T5DenseGatedGeluDense(nn.Module): def __init__(self, config): super().__init__() self.wi_0 = nn.Linear(config.d_model, config.d_ff, bias=False) self.wi_1 = nn.Linear(config.d_model, config.d_ff, bias=False) self.wo = nn.Linear(config.d_ff, config.d_model, bias=False) self.dropout = nn.Dropout(config.dropout_rate) self.gelu_act = ACT2FN["gelu_new"] def forward(self, hidden_states): hidden_gelu = self.gelu_act(self.wi_0(hidden_states)) hidden_linear = self.wi_1(hidden_states) hidden_states = hidden_gelu * hidden_linear hidden_states = self.dropout(hidden_states) hidden_states = self.wo(hidden_states) return hidden_states ``` ์ด์ œ `dropout` ํ˜ธ์ถœ๊ณผ ์ด์ „์˜ ๋ชจ๋“  ํ˜ธ์ถœ์„ ์‰ฝ๊ฒŒ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ฐ์ง€๋Š” `forward` ํ›„ํฌ์—์„œ ๋ฐœ์ƒํ•˜๋ฏ€๋กœ, ์ด๋Ÿฌํ•œ ๋ณด๊ณ ์„œ๋Š” ๊ฐ `forward`๊ฐ€ ๋ฐ˜ํ™˜๋œ ์งํ›„์— ์ฆ‰์‹œ ์ถœ๋ ฅ๋ฉ๋‹ˆ๋‹ค. 
์ „์ฒด ๋ณด๊ณ ์„œ๋กœ ๋Œ์•„๊ฐ€์„œ ๋ฌธ์ œ์— ๋Œ€ํ•œ ์กฐ์น˜ ๋ฐ ์ˆ˜์ •์„ ํ•˜๋ ค๋ฉด, ์ˆซ์ž๊ฐ€ ์ฆ๊ฐ€ํ•˜๊ธฐ ์‹œ์ž‘ํ•œ ๋ช‡ ๊ฐœ์˜ ํ”„๋ ˆ์ž„ ์œ„๋กœ ์ด๋™ํ•ด์„œ ์—ฌ๊ธฐ์„œ `fp32` ๋ชจ๋“œ๋กœ ์ „ํ™˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•ด์•ผ ์ˆซ์ž๊ฐ€ ๊ณฑํ•ด์ง€๊ฑฐ๋‚˜ ํ•ฉ์ณ์งˆ ๋•Œ ์˜ค๋ฒ„ํ”Œ๋กœ์šฐ๋˜์ง€ ์•Š์„ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์Šต๋‹ˆ๋‹ค. ๋ฌผ๋ก  ๋‹ค๋ฅธ ํ•ด๊ฒฐ์ฑ…๋„ ์žˆ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, `amp`๊ฐ€ ํ™œ์„ฑํ™”๋œ ๊ฒฝ์šฐ ์ผ์‹œ์ ์œผ๋กœ ๋„๊ณ  ์›๋ž˜์˜ `forward`๋ฅผ ๋„์šฐ๋ฏธ ๋ž˜ํผ๋กœ ์ด๋™ํ•œ ํ›„ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```python def _forward(self, hidden_states): hidden_gelu = self.gelu_act(self.wi_0(hidden_states)) hidden_linear = self.wi_1(hidden_states) hidden_states = hidden_gelu * hidden_linear hidden_states = self.dropout(hidden_states) hidden_states = self.wo(hidden_states) return hidden_states import torch def forward(self, hidden_states): if torch.is_autocast_enabled(): with torch.cuda.amp.autocast(enabled=False): return self._forward(hidden_states) else: return self._forward(hidden_states) ``` ์ž๋™ ๊ฐ์ง€๊ธฐ๋Š” ์ „์ฒด ํ”„๋ ˆ์ž„์˜ ์ž…๋ ฅ๊ณผ ์ถœ๋ ฅ์— ๋Œ€ํ•ด์„œ๋งŒ ๋ณด๊ณ ํ•˜๋ฏ€๋กœ, ์–ด๋””๋ฅผ ์‚ดํŽด๋ด์•ผ ํ•˜๋Š”์ง€ ์•Œ๋ฉด ํŠน์ • `forward` ํ•จ์ˆ˜์˜ ์ค‘๊ฐ„ ๋‹จ๊ณ„๋„ ๋ถ„์„ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๊ฒฝ์šฐ์—๋Š” `detect_overflow` ๋„์šฐ๋ฏธ ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์›ํ•˜๋Š” ์œ„์น˜์— ๊ฐ์ง€๊ธฐ๋ฅผ ์‚ฝ์ž…ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด: ```python from debug_utils import detect_overflow class T5LayerFF(nn.Module): [...] def forward(self, hidden_states): forwarded_states = self.layer_norm(hidden_states) detect_overflow(forwarded_states, "after layer_norm") forwarded_states = self.DenseReluDense(forwarded_states) detect_overflow(forwarded_states, "after DenseReluDense") return hidden_states + self.dropout(forwarded_states) ``` ์—ฌ๊ธฐ์„œ๋Š” ์ด๋ฅผ ์ถ”๊ฐ€ํ•˜์—ฌ 2๊ฐœ์˜ ๊ฒƒ์„ ์ถ”์ ํ•˜๊ณ  ์ด์ œ `forwarded_states`์˜ `inf` ๋˜๋Š” `nan`์ด ์ค‘๊ฐ„์— ๊ฐ์ง€๋˜์—ˆ๋Š”์ง€๋ฅผ ์ถ”์ ํ•ฉ๋‹ˆ๋‹ค. 
์‹ค์ œ๋กœ ์œ„์˜ ์˜ˆ์ œ์—์„œ ๊ฐ ํ˜ธ์ถœ์ด `nn.Module`์ด๊ธฐ ๋•Œ๋ฌธ์— ํƒ์ง€๊ธฐ๊ฐ€ ์ด๋ฏธ ์ด๋ฅผ ๋ณด๊ณ ํ•ฉ๋‹ˆ๋‹ค. ๋กœ์ปฌ์—์„œ ์ง์ ‘ ๊ณ„์‚ฐํ•˜๋Š” ๊ฒฝ์šฐ ์ด๋ ‡๊ฒŒ ์ˆ˜ํ–‰ํ•œ๋‹ค๊ณ  ๊ฐ€์ •ํ•ด ๋ด…์‹œ๋‹ค. ๋˜ํ•œ, ์ž์ฒด ์ฝ”๋“œ์—์„œ ๋””๋ฒ„๊ฑฐ๋ฅผ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๋Š” ๊ฒฝ์šฐ ๊ธฐ๋ณธ๊ฐ’์—์„œ ์ถœ๋ ฅ๋˜๋Š” ํ”„๋ ˆ์ž„ ์ˆ˜๋ฅผ ์กฐ์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด: ```python from transformers.debug_utils import DebugUnderflowOverflow debug_overflow = DebugUnderflowOverflow(model, max_frames_to_save=100) ``` ### ํŠน์ • ๋ฐฐ์น˜์˜ ์ ˆ๋Œ“๊ฐ’ ์ตœ์†Œ ๋ฐ ์ตœ๋Œ€ ๊ฐ’ ์ถ”์  [[specific-batch-absolute-min-and-max-value-tracing]] ๋™์ผํ•œ ๋””๋ฒ„๊น… ํด๋ž˜์Šค๋Š” ์–ธ๋”ํ”Œ๋กœ์šฐ/์˜ค๋ฒ„ํ”Œ๋กœ์šฐ ๊ฐ์ง€ ๊ธฐ๋Šฅ์ด ๊บผ์ง„ ์ƒํƒœ์—์„œ ๋ฐฐ์น˜๋ณ„ ์ถ”์ ์—๋„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ํŠน์ • ๋ฐฐ์น˜์˜ ๊ฐ `forward` ํ˜ธ์ถœ์˜ ๋ชจ๋“  ๊ตฌ์„ฑ ์„ฑ๋ถ„์— ๋Œ€ํ•œ ์ ˆ๋Œ€ ์ตœ์†Ÿ๊ฐ’๊ณผ ์ตœ๋Œ“๊ฐ’์„ ํ™•์ธํ•˜๊ณ , ์ด๋ฅผ ๋ฐฐ์น˜ 1๊ณผ 3์— ๋Œ€ํ•ด์„œ๋งŒ ์ˆ˜ํ–‰ํ•˜๋ ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ด ํด๋ž˜์Šค๋ฅผ ์ธ์Šคํ„ด์Šคํ™”ํ•ฉ๋‹ˆ๋‹ค: ```python debug_overflow = DebugUnderflowOverflow(model, trace_batch_nums=[1, 3]) ``` ๊ทธ๋Ÿฌ๋ฉด ์ด์ œ ๋ฐฐ์น˜ 1๊ณผ 3 ์ „์ฒด๊ฐ€ ์–ธ๋”ํ”Œ๋กœ์šฐ/์˜ค๋ฒ„ํ”Œ๋กœ์šฐ ๊ฐ์ง€๊ธฐ์™€ ๋™์ผํ•œ ํ˜•์‹์œผ๋กœ ์ถ”์ ๋ฉ๋‹ˆ๋‹ค. ๋ฐฐ์น˜๋Š” 0๋ถ€ํ„ฐ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ํ”„๋กœ๊ทธ๋žจ์ด ํŠน์ • ๋ฐฐ์น˜ ๋ฒˆํ˜ธ ์ดํ›„์— ์˜ค์ž‘๋™ํ•˜๊ธฐ ์‹œ์ž‘ํ•˜๋Š” ๊ฒƒ์„ ์•Œ๊ณ  ์žˆ๋Š” ๊ฒฝ์šฐ์— ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋ ‡๊ธฐ ๋•Œ๋ฌธ์— ํ•ด๋‹น ์˜์—ญ์œผ๋กœ ๋ฐ”๋กœ ์ด๋™ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฐ ๊ตฌ์„ฑ์— ๋Œ€ํ•œ ์ƒ˜ํ”Œ ์ถ•์†Œ๋œ ์ถœ๋ ฅ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค. ``` *** Starting batch number=1 *** abs min abs max metadata shared Embedding 1.01e-06 7.92e+02 weight 0.00e+00 2.47e+04 input[0] 5.36e-05 7.92e+02 output [...] 
decoder.dropout Dropout 1.60e-07 2.27e+01 input[0] 0.00e+00 2.52e+01 output decoder T5Stack not a tensor output lm_head Linear 1.01e-06 7.92e+02 weight 0.00e+00 1.11e+00 input[0] 6.06e-02 8.39e+01 output T5ForConditionalGeneration not a tensor output *** Starting batch number=3 *** abs min abs max metadata shared Embedding 1.01e-06 7.92e+02 weight 0.00e+00 2.78e+04 input[0] 5.36e-05 7.92e+02 output [...] ``` ์—ฌ๊ธฐ์—์„œ๋Š” ๋ชจ๋ธ์˜ forward ํ˜ธ์ถœ ์ˆ˜์™€ ๋™์ผํ•œ ์ˆ˜์˜ ํ”„๋ ˆ์ž„์ด ๋คํ”„๋˜๋ฏ€๋กœ ๋งŽ์€ ์ˆ˜์˜ ํ”„๋ ˆ์ž„์ด ์ƒ์„ฑ๋ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์›ํ•˜๋Š” ๊ฒƒ์ผ ์ˆ˜๋„ ์žˆ๊ณ  ์•„๋‹ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ๋•Œ๋กœ๋Š” ์ผ๋ฐ˜ ๋””๋ฒ„๊ฑฐ๋ณด๋‹ค ๋””๋ฒ„๊น… ๋ชฉ์ ์œผ๋กœ ๋” ์‰ฝ๊ฒŒ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ๋ฌธ์ œ๊ฐ€ ๋ฐฐ์น˜ ๋ฒˆํ˜ธ 150์—์„œ ์‹œ์ž‘ํ•˜๋Š” ๊ฒฝ์šฐ 149์™€ 150์˜ ์ถ”์ ์„ ๋คํ”„ํ•˜๊ณ  ์ˆซ์ž๊ฐ€ ์–ด๋””์„œ๋ถ€ํ„ฐ ๋‹ค๋ฅด๊ฒŒ ๋˜์—ˆ๋Š”์ง€ ๋น„๊ตํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ, ํ›ˆ๋ จ์„ ์ค‘์ง€ํ•  ๋ฐฐ์น˜ ๋ฒˆํ˜ธ๋ฅผ ์ง€์ •ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ง€์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```python debug_overflow = DebugUnderflowOverflow(model, trace_batch_nums=[1, 3], abort_after_batch_num=3) ```
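`trace_batch_nums`์™€ `abort_after_batch_num` ์˜ต์…˜์ด ์˜๋ฏธํ•˜๋Š” ๋ฐ”๋ฅผ ๋‹จ์ˆœํ™”ํ•ด ๋ณด์—ฌ์ฃผ๋Š” ๊ฐ€์ƒ์˜ ์Šค์ผ€์น˜์ž…๋‹ˆ๋‹ค(์‹ค์ œ ๊ตฌํ˜„์ด ์•„๋‹ˆ๋ฉฐ, `plan_batches`๋Š” ์„ค๋ช…์„ ์œ„ํ•ด ๋งŒ๋“  ๊ฐ€์ƒ์˜ ํ•จ์ˆ˜์ž…๋‹ˆ๋‹ค):

```python
def plan_batches(num_batches, trace_batch_nums=(), abort_after_batch_num=None):
    """Hypothetical sketch of the batch trace/abort semantics:
    trace the listed batch numbers and stop after the abort batch."""
    traced, processed = [], []
    for batch_num in range(num_batches):  # batches start at 0
        processed.append(batch_num)
        if batch_num in trace_batch_nums:
            traced.append(batch_num)
        if abort_after_batch_num is not None and batch_num >= abort_after_batch_num:
            break  # stop training after the given batch number
    return traced, processed


print(plan_batches(10, trace_batch_nums=[1, 3], abort_after_batch_num=3))
# ([1, 3], [0, 1, 2, 3])
```

์ด ์Šค์ผ€์น˜์ฒ˜๋Ÿผ ๋ฐฐ์น˜ 1๊ณผ 3๋งŒ ์ถ”์ ๋˜๊ณ , ๋ฐฐ์น˜ 3 ์ดํ›„์—๋Š” ๋” ์ด์ƒ ์ง„ํ–‰๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค.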
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ์™€ ํ”„๋กฌํ”„ํŠธ[[custom-tools-and-prompts]] <Tip> Transformers์™€ ๊ด€๋ จํ•˜์—ฌ ์–ด๋–ค ๋„๊ตฌ์™€ ์—์ด์ „ํŠธ๊ฐ€ ์žˆ๋Š”์ง€ ์ž˜ ๋ชจ๋ฅด์‹ ๋‹ค๋ฉด [Transformers Agents](transformers_agents) ํŽ˜์ด์ง€๋ฅผ ๋จผ์ € ์ฝ์–ด๋ณด์‹œ๊ธฐ ๋ฐ”๋ž๋‹ˆ๋‹ค. </Tip> <Tip warning={true}> Transformers Agents๋Š” ์‹คํ—˜ ์ค‘์ธ API๋กœ ์–ธ์ œ๋“ ์ง€ ๋ณ€๊ฒฝ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. API ๋˜๋Š” ๊ธฐ๋ฐ˜ ๋ชจ๋ธ์ด ๋ณ€๊ฒฝ๋˜๊ธฐ ์‰ฝ๊ธฐ ๋•Œ๋ฌธ์— ์—์ด์ „ํŠธ๊ฐ€ ๋ฐ˜ํ™˜ํ•˜๋Š” ๊ฒฐ๊ณผ๋„ ๋‹ฌ๋ผ์งˆ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. </Tip> ์—์ด์ „ํŠธ์—๊ฒŒ ๊ถŒํ•œ์„ ๋ถ€์—ฌํ•˜๊ณ  ์ƒˆ๋กœ์šด ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•˜๊ฒŒ ํ•˜๋ ค๋ฉด ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ์™€ ํ”„๋กฌํ”„ํŠธ๋ฅผ ๋งŒ๋“ค๊ณ  ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ๋ฌด์—‡๋ณด๋‹ค ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋‚ด์šฉ์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: - ํ”„๋กฌํ”„ํŠธ๋ฅผ ์‚ฌ์šฉ์ž ์ •์˜ํ•˜๋Š” ๋ฐฉ๋ฒ• - ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ• - ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ๋ฅผ ๋งŒ๋“œ๋Š” ๋ฐฉ๋ฒ• ## ํ”„๋กฌํ”„ํŠธ๋ฅผ ์‚ฌ์šฉ์ž ์ •์˜ํ•˜๊ธฐ[[customizing-the-prompt]] [Transformers Agents](transformers_agents)์—์„œ ์„ค๋ช…ํ•œ ๊ฒƒ์ฒ˜๋Ÿผ ์—์ด์ „ํŠธ๋Š” [`~Agent.run`] ๋ฐ [`~Agent.chat`] ๋ชจ๋“œ์—์„œ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `run`(์‹คํ–‰) ๋ชจ๋“œ์™€ `chat`(์ฑ„ํŒ…) ๋ชจ๋“œ ๋ชจ๋‘ ๋™์ผํ•œ ๋กœ์ง์„ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•ฉ๋‹ˆ๋‹ค. 
์—์ด์ „ํŠธ๋ฅผ ๊ตฌ๋™ํ•˜๋Š” ์–ธ์–ด ๋ชจ๋ธ์€ ๊ธด ํ”„๋กฌํ”„ํŠธ์— ๋”ฐ๋ผ ์กฐ๊ฑด์ด ์ง€์ •๋˜๊ณ , ์ค‘์ง€ ํ† ํฐ์— ๋„๋‹ฌํ•  ๋•Œ๊นŒ์ง€ ๋‹ค์Œ ํ† ํฐ์„ ์ƒ์„ฑํ•˜์—ฌ ํ”„๋กฌํ”„ํŠธ๋ฅผ ์™„์ˆ˜ํ•ฉ๋‹ˆ๋‹ค. `chat` ๋ชจ๋“œ์—์„œ๋Š” ํ”„๋กฌํ”„ํŠธ๊ฐ€ ์ด์ „ ์‚ฌ์šฉ์ž ์ž…๋ ฅ ๋ฐ ๋ชจ๋ธ ์ƒ์„ฑ์œผ๋กœ ์—ฐ์žฅ๋œ๋‹ค๋Š” ์ ์ด ๋‘ ๋ชจ๋“œ์˜ ์œ ์ผํ•œ ์ฐจ์ด์ ์ž…๋‹ˆ๋‹ค. ์ด๋ฅผ ํ†ตํ•ด ์—์ด์ „ํŠธ๊ฐ€ ๊ณผ๊ฑฐ ์ƒํ˜ธ์ž‘์šฉ์— ์ ‘๊ทผํ•  ์ˆ˜ ์žˆ๊ฒŒ ๋˜๋ฏ€๋กœ ์—์ด์ „ํŠธ์—๊ฒŒ ์ผ์ข…์˜ ๋ฉ”๋ชจ๋ฆฌ๋ฅผ ์ œ๊ณตํ•˜๋Š” ์…ˆ์ž…๋‹ˆ๋‹ค. ### ํ”„๋กฌํ”„ํŠธ์˜ ๊ตฌ์กฐ[[structure-of-the-prompt]] ์–ด๋–ป๊ฒŒ ํ”„๋กฌํ”„ํŠธ ์‚ฌ์šฉ์ž ์ •์˜๋ฅผ ์ž˜ ํ•  ์ˆ˜ ์žˆ๋Š”์ง€ ์ดํ•ดํ•˜๊ธฐ ์œ„ํ•ด ํ”„๋กฌํ”„ํŠธ์˜ ๊ตฌ์กฐ๋ฅผ ์ž์„ธํžˆ ์‚ดํŽด๋ด…์‹œ๋‹ค. ํ”„๋กฌํ”„ํŠธ๋Š” ํฌ๊ฒŒ ๋„ค ๋ถ€๋ถ„์œผ๋กœ ๊ตฌ์„ฑ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. - 1. ๋„์ž…: ์—์ด์ „ํŠธ๊ฐ€ ์–ด๋–ป๊ฒŒ ํ–‰๋™ํ•ด์•ผ ํ•˜๋Š”์ง€, ๋„๊ตฌ์˜ ๊ฐœ๋…์— ๋Œ€ํ•œ ์„ค๋ช…. - 2. ๋ชจ๋“  ๋„๊ตฌ์— ๋Œ€ํ•œ ์„ค๋ช…. ์ด๋Š” ๋Ÿฐํƒ€์ž„์— ์‚ฌ์šฉ์ž๊ฐ€ ์ •์˜/์„ ํƒํ•œ ๋„๊ตฌ๋กœ ๋™์ ์œผ๋กœ ๋Œ€์ฒด๋˜๋Š” `<<all_tools>>` ํ† ํฐ์œผ๋กœ ์ •์˜๋ฉ๋‹ˆ๋‹ค. - 3. ์ž‘์—… ์˜ˆ์ œ ๋ฐ ํ•ด๋‹น ์†”๋ฃจ์…˜ ์„ธํŠธ. - 4. ํ˜„์žฌ ์˜ˆ์ œ ๋ฐ ํ•ด๊ฒฐ ์š”์ฒญ. ๊ฐ ๋ถ€๋ถ„์„ ๋” ์ž˜ ์ดํ•ดํ•  ์ˆ˜ ์žˆ๋„๋ก ์งง์€ ๋ฒ„์ „์„ ํ†ตํ•ด `run` ํ”„๋กฌํ”„ํŠธ๊ฐ€ ์–ด๋–ป๊ฒŒ ๋ณด์ด๋Š”์ง€ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ````text I will ask you to perform a task, your job is to come up with a series of simple commands in Python that will perform the task. [...] You can print intermediate results if it makes sense to do so. Tools: - document_qa: This is a tool that answers a question about a document (pdf). It takes an input named `document` which should be the document containing the information, as well as a `question` that is the question about the document. It returns a text that contains the answer to the question. - image_captioner: This is a tool that generates a description of an image. It takes an input named `image` which should be the image to the caption and returns a text that contains the description in English. [...] 
Task: "Answer the question in the variable `question` about the image stored in the variable `image`. The question is in French." I will use the following tools: `translator` to translate the question into English and then `image_qa` to answer the question on the input image. Answer: ```py translated_question = translator(question=question, src_lang="French", tgt_lang="English") print(f"The translated question is {translated_question}.") answer = image_qa(image=image, question=translated_question) print(f"The answer is {answer}") ``` Task: "Identify the oldest person in the `document` and create an image showcasing the result as a banner." I will use the following tools: `document_qa` to find the oldest person in the document, then `image_generator` to generate an image according to the answer. Answer: ```py answer = document_qa(document, question="What is the oldest person?") print(f"The answer is {answer}.") image = image_generator("A banner showing " + answer) ``` [...] Task: "Draw me a picture of rivers and lakes" I will use the following ```` ๋„์ž…(*"๋„๊ตฌ:"* ์•ž์˜ ํ…์ŠคํŠธ)์—์„œ๋Š” ๋ชจ๋ธ์ด ์–ด๋–ป๊ฒŒ ์ž‘๋™ํ•˜๊ณ  ๋ฌด์—‡์„ ํ•ด์•ผ ํ•˜๋Š”์ง€ ์ •ํ™•ํ•˜๊ฒŒ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. ์—์ด์ „ํŠธ๋Š” ํ•ญ์ƒ ๊ฐ™์€ ๋ฐฉ์‹์œผ๋กœ ์ž‘๋™ํ•ด์•ผ ํ•˜๋ฏ€๋กœ ์ด ๋ถ€๋ถ„์€ ์‚ฌ์šฉ์ž ์ •์˜ํ•  ํ•„์š”๊ฐ€ ์—†์„ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์Šต๋‹ˆ๋‹ค. ๋‘ ๋ฒˆ์งธ ๋ถ€๋ถ„(*"๋„๊ตฌ"* ์•„๋ž˜์˜ ๊ธ€๋จธ๋ฆฌ ๊ธฐํ˜ธ)์€ `run` ๋˜๋Š” `chat`์„ ํ˜ธ์ถœํ•  ๋•Œ ๋™์ ์œผ๋กœ ์ถ”๊ฐ€๋ฉ๋‹ˆ๋‹ค. ์ •ํ™•ํžˆ `agent.toolbox`์— ์žˆ๋Š” ๋„๊ตฌ ์ˆ˜๋งŒํผ ๊ธ€๋จธ๋ฆฌ ๊ธฐํ˜ธ๊ฐ€ ์žˆ๊ณ , ๊ฐ ๊ธ€๋จธ๋ฆฌ ๊ธฐํ˜ธ๋Š” ๋„๊ตฌ์˜ ์ด๋ฆ„๊ณผ ์„ค๋ช…์œผ๋กœ ๊ตฌ์„ฑ๋ฉ๋‹ˆ๋‹ค: ```text - <tool.name>: <tool.description> ``` ๋ฌธ์„œ ์งˆ์˜์‘๋‹ต ๋„๊ตฌ๋ฅผ ๊ฐ€์ ธ์˜ค๊ณ  ์ด๋ฆ„๊ณผ ์„ค๋ช…์„ ์ถœ๋ ฅํ•ด์„œ ๋น ๋ฅด๊ฒŒ ํ™•์ธํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. 
```py from transformers import load_tool document_qa = load_tool("document-question-answering") print(f"- {document_qa.name}: {document_qa.description}") ``` ๊ทธ๋Ÿฌ๋ฉด ๋‹ค์Œ ๊ฒฐ๊ณผ๊ฐ€ ์ถœ๋ ฅ๋ฉ๋‹ˆ๋‹ค: ```text - document_qa: This is a tool that answers a question about a document (pdf). It takes an input named `document` which should be the document containing the information, as well as a `question` that is the question about the document. It returns a text that contains the answer to the question. ``` ์—ฌ๊ธฐ์„œ ๋„๊ตฌ ์ด๋ฆ„์ด ์งง๊ณ  ์ •ํ™•ํ•˜๋‹ค๋Š” ๊ฒƒ์„ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์„ค๋ช…์€ ๋‘ ๋ถ€๋ถ„์œผ๋กœ ๊ตฌ์„ฑ๋˜์–ด ์žˆ๋Š”๋ฐ, ์ฒซ ๋ฒˆ์งธ ๋ถ€๋ถ„์—์„œ๋Š” ๋„๊ตฌ์˜ ๊ธฐ๋Šฅ์„ ์„ค๋ช…ํ•˜๊ณ  ๋‘ ๋ฒˆ์งธ ๋ถ€๋ถ„์—์„œ๋Š” ์˜ˆ์ƒ๋˜๋Š” ์ž…๋ ฅ ์ธ์ˆ˜์™€ ๋ฐ˜ํ™˜ ๊ฐ’์„ ๋ช…์‹œํ•ฉ๋‹ˆ๋‹ค. ์—์ด์ „ํŠธ๊ฐ€ ๋„๊ตฌ๋ฅผ ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ์‚ฌ์šฉํ•˜๋ ค๋ฉด ์ข‹์€ ๋„๊ตฌ ์ด๋ฆ„๊ณผ ๋„๊ตฌ ์„ค๋ช…์ด ๋งค์šฐ ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ์—์ด์ „ํŠธ๊ฐ€ ๋„๊ตฌ์— ๋Œ€ํ•ด ์•Œ ์ˆ˜ ์žˆ๋Š” ์œ ์ผํ•œ ์ •๋ณด๋Š” ์ด๋ฆ„๊ณผ ์„ค๋ช…๋ฟ์ด๋ฏ€๋กœ, ์ด ๋‘ ๊ฐ€์ง€๋ฅผ ์ •ํ™•ํ•˜๊ฒŒ ์ž‘์„ฑํ•˜๊ณ  ๋„๊ตฌ ์ƒ์ž์— ์žˆ๋Š” ๊ธฐ์กด ๋„๊ตฌ์˜ ์Šคํƒ€์ผ๊ณผ ์ผ์น˜ํ•˜๋Š”์ง€ ํ™•์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ํŠนํžˆ ์ด๋ฆ„์— ๋”ฐ๋ผ ์˜ˆ์ƒ๋˜๋Š” ๋ชจ๋“  ์ธ์ˆ˜๊ฐ€ ์„ค๋ช…์— ์ฝ”๋“œ ์Šคํƒ€์ผ๋กœ ์–ธ๊ธ‰๋˜์–ด ์žˆ๋Š”์ง€, ์˜ˆ์ƒ๋˜๋Š” ์œ ํ˜•๊ณผ ๊ทธ ์œ ํ˜•์ด ๋ฌด์—‡์ธ์ง€์— ๋Œ€ํ•œ ์„ค๋ช…์ด ํฌํ•จ๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”. <Tip> ๋„๊ตฌ์— ์–ด๋–ค ์ด๋ฆ„๊ณผ ์„ค๋ช…์ด ์žˆ์–ด์•ผ ํ•˜๋Š”์ง€ ์ดํ•ดํ•˜๋ ค๋ฉด ์—„์„ ๋œ Transformers ๋„๊ตฌ์˜ ์ด๋ฆ„๊ณผ ์„ค๋ช…์„ ํ™•์ธํ•˜์„ธ์š”. [`Agent.toolbox`] ์†์„ฑ์„ ๊ฐ€์ง„ ๋ชจ๋“  ๋„๊ตฌ๋ฅผ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. </Tip> ์„ธ ๋ฒˆ์งธ ๋ถ€๋ถ„์—๋Š” ์—์ด์ „ํŠธ๊ฐ€ ์–ด๋–ค ์ข…๋ฅ˜์˜ ์‚ฌ์šฉ์ž ์š”์ฒญ์— ๋Œ€ํ•ด ์–ด๋–ค ์ฝ”๋“œ๋ฅผ ์ƒ์„ฑํ•ด์•ผ ํ•˜๋Š”์ง€ ์ •ํ™•ํ•˜๊ฒŒ ๋ณด์—ฌ์ฃผ๋Š” ์—„์„ ๋œ ์˜ˆ์ œ ์„ธํŠธ๊ฐ€ ํฌํ•จ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ์—์ด์ „ํŠธ๋ฅผ ์ง€์›ํ•˜๋Š” ๋Œ€๊ทœ๋ชจ ์–ธ์–ด ๋ชจ๋ธ์€ ํ”„๋กฌํ”„ํŠธ์—์„œ ํŒจํ„ด์„ ์ธ์‹ํ•˜๊ณ  ์ƒˆ๋กœ์šด ๋ฐ์ดํ„ฐ๋กœ ํŒจํ„ด์„ ๋ฐ˜๋ณตํ•˜๋Š” ๋ฐ ๋งค์šฐ ๋Šฅ์ˆ™ํ•ฉ๋‹ˆ๋‹ค. 
๋”ฐ๋ผ์„œ ์—์ด์ „ํŠธ๊ฐ€ ์‹ค์ œ๋กœ ์˜ฌ๋ฐ”๋ฅธ ์‹คํ–‰ ๊ฐ€๋Šฅํ•œ ์ฝ”๋“œ๋ฅผ ์ƒ์„ฑํ•  ๊ฐ€๋Šฅ์„ฑ์„ ๊ทน๋Œ€ํ™”ํ•˜๋Š” ๋ฐฉ์‹์œผ๋กœ ์˜ˆ์ œ๋ฅผ ์ž‘์„ฑํ•˜๋Š” ๊ฒƒ์ด ๋งค์šฐ ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ํ•œ ๊ฐ€์ง€ ์˜ˆ๋ฅผ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ````text Task: "Identify the oldest person in the `document` and create an image showcasing the result as a banner." I will use the following tools: `document_qa` to find the oldest person in the document, then `image_generator` to generate an image according to the answer. Answer: ```py answer = document_qa(document, question="What is the oldest person?") print(f"The answer is {answer}.") image = image_generator("A banner showing " + answer) ``` ```` ์ž‘์—… ์„ค๋ช…, ์—์ด์ „ํŠธ๊ฐ€ ์ˆ˜ํ–‰ํ•˜๋ ค๋Š” ์ž‘์—…์— ๋Œ€ํ•œ ์„ค๋ช…, ๋งˆ์ง€๋ง‰์œผ๋กœ ์ƒ์„ฑ๋œ ์ฝ”๋“œ, ์ด ์„ธ ๋ถ€๋ถ„์œผ๋กœ ๊ตฌ์„ฑ๋œ ํ”„๋กฌํ”„ํŠธ๋Š” ๋ชจ๋ธ์— ๋ฐ˜๋ณตํ•˜์—ฌ ์ œ๊ณต๋ฉ๋‹ˆ๋‹ค. ํ”„๋กฌํ”„ํŠธ์˜ ์ผ๋ถ€์ธ ๋ชจ๋“  ์˜ˆ์ œ๋Š” ์ด๋Ÿฌํ•œ ์ •ํ™•ํ•œ ํŒจํ„ด์œผ๋กœ ๋˜์–ด ์žˆ์œผ๋ฏ€๋กœ, ์—์ด์ „ํŠธ๊ฐ€ ์ƒˆ ํ† ํฐ์„ ์ƒ์„ฑํ•  ๋•Œ ์ •ํ™•ํžˆ ๋™์ผํ•œ ํŒจํ„ด์„ ์žฌํ˜„ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ”„๋กฌํ”„ํŠธ ์˜ˆ์ œ๋Š” Transformers ํŒ€์ด ์„ ๋ณ„ํ•˜๊ณ  ์ผ๋ จ์˜ [problem statements](https://github.com/huggingface/transformers/blob/main/src/transformers/tools/evaluate_agent.py)์— ๋”ฐ๋ผ ์—„๊ฒฉํ•˜๊ฒŒ ํ‰๊ฐ€ํ•˜์—ฌ ์—์ด์ „ํŠธ์˜ ํ”„๋กฌํ”„ํŠธ๊ฐ€ ์—์ด์ „ํŠธ์˜ ์‹ค์ œ ์‚ฌ์šฉ ์‚ฌ๋ก€๋ฅผ ์ตœ๋Œ€ํ•œ ์ž˜ ํ•ด๊ฒฐํ•  ์ˆ˜ ์žˆ๋„๋ก ๋ณด์žฅํ•ฉ๋‹ˆ๋‹ค. ํ”„๋กฌํ”„ํŠธ์˜ ๋งˆ์ง€๋ง‰ ๋ถ€๋ถ„์€ ๋‹ค์Œ์— ํ•ด๋‹นํ•ฉ๋‹ˆ๋‹ค: ```text Task: "Draw me a picture of rivers and lakes" I will use the following ``` ์ด๋Š” ์—์ด์ „ํŠธ๊ฐ€ ์™„๋ฃŒํ•ด์•ผ ํ•  ์ตœ์ข…์ ์ธ ๋ฏธ์™„์„ฑ ์˜ˆ์ œ์ž…๋‹ˆ๋‹ค. ๋ฏธ์™„์„ฑ ์˜ˆ์ œ๋Š” ์‹ค์ œ ์‚ฌ์šฉ์ž ์ž…๋ ฅ์— ๋”ฐ๋ผ ๋™์ ์œผ๋กœ ๋งŒ๋“ค์–ด์ง‘๋‹ˆ๋‹ค. 
์œ„ ์˜ˆ์‹œ์˜ ๊ฒฝ์šฐ ์‚ฌ์šฉ์ž๊ฐ€ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์‹คํ–‰ํ–ˆ์Šต๋‹ˆ๋‹ค: ```py agent.run("Draw me a picture of rivers and lakes") ``` ์‚ฌ์šฉ์ž ์ž…๋ ฅ - *์ฆ‰* Task: *"Draw me a picture of rivers and lakes"*๊ฐ€ ํ”„๋กฌํ”„ํŠธ ํ…œํ”Œ๋ฆฟ์— ๋งž์ถฐ "Task: <task> \n\n I will use the following"๋กœ ์บ์ŠคํŒ…๋ฉ๋‹ˆ๋‹ค. ์ด ๋ฌธ์žฅ์€ ์—์ด์ „ํŠธ์—๊ฒŒ ์กฐ๊ฑด์ด ์ ์šฉ๋˜๋Š” ํ”„๋กฌํ”„ํŠธ์˜ ๋งˆ์ง€๋ง‰ ์ค„์„ ๊ตฌ์„ฑํ•˜๋ฏ€๋กœ ์—์ด์ „ํŠธ๊ฐ€ ์ด์ „ ์˜ˆ์ œ์—์„œ ์ˆ˜ํ–‰ํ•œ ๊ฒƒ๊ณผ ์ •ํ™•ํžˆ ๋™์ผํ•œ ๋ฐฉ์‹์œผ๋กœ ์˜ˆ์ œ๋ฅผ ์™„๋ฃŒํ•˜๋„๋ก ๊ฐ•๋ ฅํ•˜๊ฒŒ ์˜ํ–ฅ์„ ๋ฏธ์นฉ๋‹ˆ๋‹ค. ๋„ˆ๋ฌด ์ž์„ธํžˆ ์„ค๋ช…ํ•˜์ง€ ์•Š๋”๋ผ๋„ ์ฑ„ํŒ… ํ…œํ”Œ๋ฆฟ์˜ ํ”„๋กฌํ”„ํŠธ ๊ตฌ์กฐ๋Š” ๋™์ผํ•˜์ง€๋งŒ ์˜ˆ์ œ์˜ ์Šคํƒ€์ผ์ด ์•ฝ๊ฐ„ ๋‹ค๋ฆ…๋‹ˆ๋‹ค. *์˜ˆ๋ฅผ ๋“ค๋ฉด*: ````text [...] ===== Human: Answer the question in the variable `question` about the image stored in the variable `image`. Assistant: I will use the tool `image_qa` to answer the question on the input image. ```py answer = image_qa(text=question, image=image) print(f"The answer is {answer}") ``` Human: I tried this code, it worked but didn't give me a good result. The question is in French Assistant: In this case, the question needs to be translated first. I will use the tool `translator` to do this. ```py translated_question = translator(question=question, src_lang="French", tgt_lang="English") print(f"The translated question is {translated_question}.") answer = image_qa(text=translated_question, image=image) print(f"The answer is {answer}") ``` ===== [...] ```` `run` ํ”„๋กฌํ”„ํŠธ์˜ ์˜ˆ์™€๋Š” ๋ฐ˜๋Œ€๋กœ, ๊ฐ `chat` ํ”„๋กฌํ”„ํŠธ์˜ ์˜ˆ์—๋Š” *Human(์‚ฌ๋žŒ)*๊ณผ *Assistant(์–ด์‹œ์Šคํ„ดํŠธ)* ๊ฐ„์— ํ•˜๋‚˜ ์ด์ƒ์˜ ๊ตํ™˜์ด ์žˆ์Šต๋‹ˆ๋‹ค. ๋ชจ๋“  ๊ตํ™˜์€ `run` ํ”„๋กฌํ”„ํŠธ์˜ ์˜ˆ์™€ ์œ ์‚ฌํ•œ ๊ตฌ์กฐ๋กœ ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ์‚ฌ์šฉ์ž์˜ ์ž…๋ ฅ์ด *Human:* ๋’ค์— ์ถ”๊ฐ€๋˜๋ฉฐ, ์—์ด์ „ํŠธ์—๊ฒŒ ์ฝ”๋“œ๋ฅผ ์ƒ์„ฑํ•˜๊ธฐ ์ „์— ์ˆ˜ํ–‰ํ•ด์•ผ ํ•  ์ž‘์—…์„ ๋จผ์ € ์ƒ์„ฑํ•˜๋ผ๋Š” ๋ฉ”์‹œ์ง€๊ฐ€ ํ‘œ์‹œ๋ฉ๋‹ˆ๋‹ค. 
๊ตํ™˜์€ ์ด์ „ ๊ตํ™˜์„ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•  ์ˆ˜ ์žˆ์œผ๋ฏ€๋กœ ์œ„์™€ ๊ฐ™์ด ์‚ฌ์šฉ์ž๊ฐ€ "**์ด** ์ฝ”๋“œ๋ฅผ ์‹œ๋„ํ–ˆ์Šต๋‹ˆ๋‹ค"๋ผ๊ณ  ์ž…๋ ฅํ•˜๋ฉด ์ด์ „์— ์ƒ์„ฑ๋œ ์—์ด์ „ํŠธ์˜ ์ฝ”๋“œ๋ฅผ ์ฐธ์กฐํ•˜์—ฌ ๊ณผ๊ฑฐ ๊ตํ™˜์„ ์ฐธ์กฐํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `.chat`์„ ์‹คํ–‰ํ•˜๋ฉด ์‚ฌ์šฉ์ž์˜ ์ž…๋ ฅ ๋˜๋Š” *์ž‘์—…*์ด ๋ฏธ์™„์„ฑ๋œ ์–‘์‹์˜ ์˜ˆ์‹œ๋กœ ์บ์ŠคํŒ…๋ฉ๋‹ˆ๋‹ค: ```text Human: <user-input>\n\nAssistant: ``` ๊ทธ๋Ÿฌ๋ฉด ์—์ด์ „ํŠธ๊ฐ€ ์ด๋ฅผ ์™„์„ฑํ•ฉ๋‹ˆ๋‹ค. `run` ๋ช…๋ น๊ณผ ๋‹ฌ๋ฆฌ `chat` ๋ช…๋ น์€ ์™„๋ฃŒ๋œ ์˜ˆ์ œ๋ฅผ ํ”„๋กฌํ”„ํŠธ์— ์ถ”๊ฐ€ํ•˜์—ฌ ์—์ด์ „ํŠธ์—๊ฒŒ ๋‹ค์Œ `chat` ์ฐจ๋ก€์— ๋Œ€ํ•œ ๋” ๋งŽ์€ ๋ฌธ๋งฅ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์ด์ œ ํ”„๋กฌํ”„ํŠธ๊ฐ€ ์–ด๋–ป๊ฒŒ ๊ตฌ์„ฑ๋˜์–ด ์žˆ๋Š”์ง€ ์•Œ์•˜์œผ๋‹ˆ ์–ด๋–ป๊ฒŒ ์‚ฌ์šฉ์ž ์ •์˜ํ•  ์ˆ˜ ์žˆ๋Š”์ง€ ์‚ดํŽด๋ด…์‹œ๋‹ค! ### ์ข‹์€ ์‚ฌ์šฉ์ž ์ž…๋ ฅ ์ž‘์„ฑํ•˜๊ธฐ[[writing-good-user-inputs]] ๋Œ€๊ทœ๋ชจ ์–ธ์–ด ๋ชจ๋ธ์ด ์‚ฌ์šฉ์ž์˜ ์˜๋„๋ฅผ ์ดํ•ดํ•˜๋Š” ๋Šฅ๋ ฅ์ด ์ ์  ๋” ํ–ฅ์ƒ๋˜๊ณ  ์žˆ์ง€๋งŒ, ์—์ด์ „ํŠธ๊ฐ€ ์˜ฌ๋ฐ”๋ฅธ ์ž‘์—…์„ ์„ ํƒํ•  ์ˆ˜ ์žˆ๋„๋ก ์ตœ๋Œ€ํ•œ ์ •ํ™•์„ฑ์„ ์œ ์ง€ํ•˜๋Š” ๊ฒƒ์€ ํฐ ๋„์›€์ด ๋ฉ๋‹ˆ๋‹ค. ์ตœ๋Œ€ํ•œ ์ •ํ™•ํ•˜๋‹ค๋Š” ๊ฒƒ์€ ๋ฌด์—‡์„ ์˜๋ฏธํ• ๊นŒ์š”? ์—์ด์ „ํŠธ๋Š” ํ”„๋กฌํ”„ํŠธ์—์„œ ๋„๊ตฌ ์ด๋ฆ„ ๋ชฉ๋ก๊ณผ ํ•ด๋‹น ์„ค๋ช…์„ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋” ๋งŽ์€ ๋„๊ตฌ๊ฐ€ ์ถ”๊ฐ€๋ ์ˆ˜๋ก ์—์ด์ „ํŠธ๊ฐ€ ์˜ฌ๋ฐ”๋ฅธ ๋„๊ตฌ๋ฅผ ์„ ํƒํ•˜๊ธฐ๊ฐ€ ๋” ์–ด๋ ค์›Œ์ง€๊ณ  ์‹คํ–‰ํ•  ๋„๊ตฌ์˜ ์˜ฌ๋ฐ”๋ฅธ ์ˆœ์„œ๋ฅผ ์„ ํƒํ•˜๋Š” ๊ฒƒ์€ ๋”์šฑ ์–ด๋ ค์›Œ์ง‘๋‹ˆ๋‹ค. ์ผ๋ฐ˜์ ์ธ ์‹คํŒจ ์‚ฌ๋ก€๋ฅผ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ๋Š” ๋ถ„์„ํ•  ์ฝ”๋“œ๋งŒ ๋ฐ˜ํ™˜ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ```py from transformers import HfAgent agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder") agent.run("Show me a tree", return_code=True) ``` ๊ทธ๋Ÿฌ๋ฉด ๋‹ค์Œ ๊ฒฐ๊ณผ๊ฐ€ ์ถœ๋ ฅ๋ฉ๋‹ˆ๋‹ค: ```text ==Explanation from the agent== I will use the following tool: `image_segmenter` to create a segmentation mask for the image. 
==Code generated by the agent== mask = image_segmenter(image, prompt="tree") ``` ์šฐ๋ฆฌ๊ฐ€ ์›ํ–ˆ๋˜ ๊ฒฐ๊ณผ๊ฐ€ ์•„๋‹ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๋Œ€์‹  ๋‚˜๋ฌด ์ด๋ฏธ์ง€๊ฐ€ ์ƒ์„ฑ๋˜๊ธฐ๋ฅผ ์›ํ•  ๊ฐ€๋Šฅ์„ฑ์ด ๋” ๋†’์Šต๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์—์ด์ „ํŠธ๊ฐ€ ํŠน์ • ๋„๊ตฌ๋ฅผ ์‚ฌ์šฉํ•˜๋„๋ก ์œ ๋„ํ•˜๋ ค๋ฉด ๋„๊ตฌ์˜ ์ด๋ฆ„๊ณผ ์„ค๋ช…์— ์žˆ๋Š” ์ค‘์š”ํ•œ ํ‚ค์›Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ๋งค์šฐ ์œ ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•œ๋ฒˆ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ```py agent.toolbox["image_generator"].description ``` ```text 'This is a tool that creates an image according to a prompt, which is a text description. It takes an input named `prompt` which contains the image description and outputs an image. ``` ์ด๋ฆ„๊ณผ ์„ค๋ช…์€ "image", "prompt", "create" ๋ฐ "generate" ํ‚ค์›Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด ๋‹จ์–ด๋“ค์„ ์‚ฌ์šฉํ•˜๋ฉด ๋” ์ž˜ ์ž‘๋™ํ•  ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์Šต๋‹ˆ๋‹ค. ํ”„๋กฌํ”„ํŠธ๋ฅผ ์กฐ๊ธˆ ๋” ๊ตฌ์ฒดํ™”ํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ```py agent.run("Create an image of a tree", return_code=True) ``` ์ด ์ฝ”๋“œ๋Š” ๋‹ค์Œ ํ”„๋กฌํ”„ํŠธ๋ฅผ ๋งŒ๋“ค์–ด๋ƒ…๋‹ˆ๋‹ค: ```text ==Explanation from the agent== I will use the following tool `image_generator` to generate an image of a tree. ==Code generated by the agent== image = image_generator(prompt="tree") ``` ํ›จ์”ฌ ๋‚ซ๋„ค์š”! ์ €ํฌ๊ฐ€ ์›ํ–ˆ๋˜ ๊ฒƒ๊ณผ ๋น„์Šทํ•ด ๋ณด์ž…๋‹ˆ๋‹ค. ์ฆ‰, ์—์ด์ „ํŠธ๊ฐ€ ์ž‘์—…์„ ์˜ฌ๋ฐ”๋ฅธ ๋„๊ตฌ์— ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ๋งคํ•‘ํ•˜๋Š” ๋ฐ ์–ด๋ ค์›€์„ ๊ฒช๊ณ  ์žˆ๋‹ค๋ฉด ๋„๊ตฌ ์ด๋ฆ„๊ณผ ์„ค๋ช…์—์„œ ๊ฐ€์žฅ ๊ด€๋ จ์„ฑ์ด ๋†’์€ ํ‚ค์›Œ๋“œ๋ฅผ ์ฐพ์•„๋ณด๊ณ  ์ด๋ฅผ ํ†ตํ•ด ์ž‘์—… ์š”์ฒญ์„ ๊ตฌ์ฒดํ™”ํ•ด ๋ณด์„ธ์š”. ### ๋„๊ตฌ ์„ค๋ช… ์‚ฌ์šฉ์ž ์ •์˜ํ•˜๊ธฐ[[customizing-the-tool-descriptions]] ์•ž์„œ ์‚ดํŽด๋ณธ ๊ฒƒ์ฒ˜๋Ÿผ ์—์ด์ „ํŠธ๋Š” ๊ฐ ๋„๊ตฌ์˜ ์ด๋ฆ„๊ณผ ์„ค๋ช…์— ์•ก์„ธ์Šคํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ธฐ๋ณธ ๋„๊ตฌ์—๋Š” ๋งค์šฐ ์ •ํ™•ํ•œ ์ด๋ฆ„๊ณผ ์„ค๋ช…์ด ์žˆ์–ด์•ผ ํ•˜์ง€๋งŒ ํŠน์ • ์‚ฌ์šฉ ์‚ฌ๋ก€์— ๋งž๊ฒŒ ๋„๊ตฌ์˜ ์„ค๋ช…์ด๋‚˜ ์ด๋ฆ„์„ ๋ณ€๊ฒฝํ•˜๋Š” ๊ฒƒ์ด ๋„์›€์ด ๋  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. 
์ด๋Š” ๋งค์šฐ ์œ ์‚ฌํ•œ ์—ฌ๋Ÿฌ ๋„๊ตฌ๋ฅผ ์ถ”๊ฐ€ํ–ˆ๊ฑฐ๋‚˜ ํŠน์ • ๋„๋ฉ”์ธ(*์˜ˆ*: ์ด๋ฏธ์ง€ ์ƒ์„ฑ ๋ฐ ๋ณ€ํ™˜)์—๋งŒ ์—์ด์ „ํŠธ๋ฅผ ์‚ฌ์šฉํ•˜๋ ค๋Š” ๊ฒฝ์šฐ์— ํŠนํžˆ ์ค‘์š”ํ•ด์งˆ ์ˆ�˜ ์žˆ์Šต๋‹ˆ๋‹ค.
์ผ๋ฐ˜์ ์ธ ๋ฌธ์ œ๋Š” ์—์ด์ „ํŠธ๋ฅผ ์ด๋ฏธ์ง€ ์ƒ์„ฑ ์ž‘์—…์— ๋งŽ์ด ์‚ฌ์šฉํ•  ๋•Œ ์—์ด์ „ํŠธ๊ฐ€ ์ด๋ฏธ์ง€ ์ƒ์„ฑ๊ณผ ์ด๋ฏธ์ง€ ๋ณ€ํ™˜/์ˆ˜์ •์„ ํ˜ผ๋™ํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค.

*์˜ˆ๋ฅผ ๋“ค์–ด,*

```py
agent.run("Make an image of a house and a car", return_code=True)
```

๊ทธ๋Ÿฌ๋ฉด ๋‹ค์Œ ๊ฒฐ๊ณผ๊ฐ€ ์ถœ๋ ฅ๋ฉ๋‹ˆ๋‹ค:

```text
==Explanation from the agent==
I will use the following tools `image_generator` to generate an image of a house and `image_transformer` to transform the image of a car into the image of a house.

==Code generated by the agent==
house_image = image_generator(prompt="A house")
car_image = image_generator(prompt="A car")
house_car_image = image_transformer(image=car_image, prompt="A house")
```

๊ฒฐ๊ณผ๋ฌผ์ด ์šฐ๋ฆฌ๊ฐ€ ์—ฌ๊ธฐ์„œ ์›ํ•˜๋Š” ๊ฒƒ๊ณผ ์ •ํ™•ํžˆ ์ผ์น˜ํ•˜์ง€ ์•Š์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์—์ด์ „ํŠธ๊ฐ€ `image_generator`์™€ `image_transformer`์˜ ์ฐจ์ด์ ์„ ์ดํ•ดํ•˜๊ธฐ ์–ด๋ ค์›Œ์„œ ๋‘ ๊ฐ€์ง€๋ฅผ ํ•จ๊ป˜ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์€ ๊ฒƒ ๊ฐ™์Šต๋‹ˆ๋‹ค.

์—ฌ๊ธฐ์„œ `image_transformer`์˜ ๋„๊ตฌ ์ด๋ฆ„๊ณผ ์„ค๋ช…์„ ๋ณ€๊ฒฝํ•˜์—ฌ ์—์ด์ „ํŠธ๋ฅผ ๋„์šธ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. "image" ๋ฐ "prompt"์™€ ์•ฝ๊ฐ„ ๋ถ„๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด `modifier`๋ผ๊ณ  ๋Œ€์‹  ๋ถ€๋ฅด๊ฒ ์Šต๋‹ˆ๋‹ค:

```py
agent.toolbox["modifier"] = agent.toolbox.pop("image_transformer")
agent.toolbox["modifier"].description = agent.toolbox["modifier"].description.replace(
    "transforms an image according to a prompt", "modifies an image"
)
```

์ด์ œ "modify"๋Š” ์ƒˆ ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ์‚ฌ์šฉํ•˜๋ผ๋Š” ๊ฐ•๋ ฅํ•œ ์‹ ํ˜ธ์ด๋ฏ€๋กœ ์œ„์˜ ํ”„๋กฌํ”„ํŠธ์— ๋„์›€์ด ๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋‹ค์‹œ ์‹คํ–‰ํ•ด ๋ด…์‹œ๋‹ค.
```py agent.run("Make an image of a house and a car", return_code=True) ``` ์—ฌ๊ธฐ์„œ ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๊ฒฐ๊ณผ๋ฅผ ์–ป๊ฒŒ ๋ฉ๋‹ˆ๋‹ค: ```text ==Explanation from the agent== I will use the following tools: `image_generator` to generate an image of a house, then `image_generator` to generate an image of a car. ==Code generated by the agent== house_image = image_generator(prompt="A house") car_image = image_generator(prompt="A car") ``` ์šฐ๋ฆฌ๊ฐ€ ์—ผ๋‘์— ๋‘์—ˆ๋˜ ๊ฒƒ๊ณผ ํ™•์‹คํžˆ ๋” ๊ฐ€๊นŒ์›Œ์กŒ์Šต๋‹ˆ๋‹ค! ํ•˜์ง€๋งŒ ์ง‘๊ณผ ์ž๋™์ฐจ๊ฐ€ ๋ชจ๋‘ ๊ฐ™์€ ์ด๋ฏธ์ง€์— ํฌํ•จ๋˜๋ฉด ์ข‹๊ฒ ์Šต๋‹ˆ๋‹ค. ์ž‘์—…์„ ๋‹จ์ผ ์ด๋ฏธ์ง€ ์ƒ์„ฑ์— ๋” ์ง‘์ค‘ํ•˜๋ฉด ๋„์›€์ด ๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค: ```py agent.run("Create image: 'A house and car'", return_code=True) ``` ```text ==Explanation from the agent== I will use the following tool: `image_generator` to generate an image. ==Code generated by the agent== image = image_generator(prompt="A house and car") ``` <Tip warning={true}> ์—์ด์ „ํŠธ๋Š” ์—ฌ์ „ํžˆ ํŠนํžˆ ์—ฌ๋Ÿฌ ๊ฐœ์ฒด์˜ ์ด๋ฏธ์ง€๋ฅผ ์ƒ์„ฑํ•˜๋Š” ๊ฒƒ๊ณผ ๊ฐ™์ด ์•ฝ๊ฐ„ ๋” ๋ณต์žกํ•œ ์‚ฌ์šฉ ์‚ฌ๋ก€์—์„œ ์ทจ์•ฝํ•œ ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์Šต๋‹ˆ๋‹ค. ์•ž์œผ๋กœ ๋ช‡ ๋‹ฌ ์•ˆ์— ์—์ด์ „ํŠธ ์ž์ฒด์™€ ๊ธฐ๋ณธ ํ”„๋กฌํ”„ํŠธ๊ฐ€ ๋”์šฑ ๊ฐœ์„ ๋˜์–ด ์—์ด์ „ํŠธ๊ฐ€ ๋‹ค์–‘ํ•œ ์‚ฌ์šฉ์ž ์ž…๋ ฅ์— ๋”์šฑ ๊ฐ•๋ ฅํ•˜๊ฒŒ ๋Œ€์‘ํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•  ์˜ˆ์ •์ž…๋‹ˆ๋‹ค. </Tip> ### ์ „์ฒด ํ”„๋กฌํ”„ํŠธ ์‚ฌ์šฉ์ž ์ •์˜ํ•˜๊ธฐ[[customizing-the-whole-prompt]] ์‚ฌ์šฉ์ž์—๊ฒŒ ์ตœ๋Œ€ํ•œ์˜ ์œ ์—ฐ์„ฑ์„ ์ œ๊ณตํ•˜๊ธฐ ์œ„ํ•ด [์œ„](#structure-of-the-prompt)์— ์„ค๋ช…๋œ ์ „์ฒด ํ”„๋กฌํ”„ํŠธ ํ…œํ”Œ๋ฆฟ์„ ์‚ฌ์šฉ์ž๊ฐ€ ๋ฎ์–ด์“ธ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๊ฒฝ์šฐ ์‚ฌ์šฉ์ž ์ •์˜ ํ”„๋กฌํ”„ํŠธ์— ์†Œ๊ฐœ ์„น์…˜, ๋„๊ตฌ ์„น์…˜, ์˜ˆ์ œ ์„น์…˜ ๋ฐ ๋ฏธ์™„์„ฑ ์˜ˆ์ œ ์„น์…˜์ด ํฌํ•จ๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”. `run` ํ”„๋กฌํ”„ํŠธ ํ…œํ”Œ๋ฆฟ์„ ๋ฎ์–ด์“ฐ๋ ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์ด ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค: ```py template = """ [...] 
""" agent = HfAgent(your_endpoint, run_prompt_template=template) ``` <Tip warning={true}> ์—์ด์ „ํŠธ๊ฐ€ ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ ๋„๊ตฌ๋ฅผ ์ธ์‹ํ•˜๊ณ  ์‚ฌ์šฉ์ž์˜ ํ”„๋กฌํ”„ํŠธ๋ฅผ ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ์‚ฝ์ž…ํ•  ์ˆ˜ ์žˆ๋„๋ก `<<all_tools>>` ๋ฌธ์ž์—ด๊ณผ `<<prompt>>`๋ฅผ `template` ์–ด๋”˜๊ฐ€์— ์ •์˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. </Tip> ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ `chat` ํ”„๋กฌํ”„ํŠธ ํ…œํ”Œ๋ฆฟ์„ ๋ฎ์–ด์“ธ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `chat` ๋ชจ๋“œ์—์„œ๋Š” ํ•ญ์ƒ ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๊ตํ™˜ ํ˜•์‹์„ ์‚ฌ์šฉํ•œ๋‹ค๋Š” ์ ์— ์œ ์˜ํ•˜์„ธ์š”: ```text Human: <<task>> Assistant: ``` ๋”ฐ๋ผ์„œ ์‚ฌ์šฉ์ž ์ •์˜ `chat` ํ”„๋กฌํ”„ํŠธ ํ…œํ”Œ๋ฆฟ์˜ ์˜ˆ์ œ์—์„œ๋„ ์ด ํ˜•์‹์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ธ์Šคํ„ด์Šคํ™” ํ•  ๋•Œ `chat` ํ…œํ”Œ๋ฆฟ์„ ๋ฎ์–ด์“ธ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ``` template = """ [...] """ agent = HfAgent(url_endpoint=your_endpoint, chat_prompt_template=template) ``` <Tip warning={true}> ์—์ด์ „ํŠธ๊ฐ€ ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ ๋„๊ตฌ๋ฅผ ์ธ์‹ํ•  ์ˆ˜ ์žˆ๋„๋ก `<<all_tools>>` ๋ฌธ์ž์—ด์„ `template` ์–ด๋”˜๊ฐ€์— ์ •์˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. </Tip> ๋‘ ๊ฒฝ์šฐ ๋ชจ๋‘ ์ปค๋ฎค๋‹ˆํ‹ฐ์˜ ๋ˆ„๊ตฐ๊ฐ€๊ฐ€ ํ˜ธ์ŠคํŒ…ํ•˜๋Š” ํ…œํ”Œ๋ฆฟ์„ ์‚ฌ์šฉํ•˜๋ ค๋Š” ๊ฒฝ์šฐ ํ”„๋กฌํ”„ํŠธ ํ…œํ”Œ๋ฆฟ ๋Œ€์‹  ์ €์žฅ์†Œ ID๋ฅผ ์ „๋‹ฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ธฐ๋ณธ ํ”„๋กฌํ”„ํŠธ๋Š” [์ด ์ €์žฅ์†Œ](https://huggingface.co/datasets/huggingface-tools/default-prompts)๋ฅผ ์˜ˆ๋กœ ๋“ค ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. Hub์˜ ์ €์žฅ์†Œ์— ์‚ฌ์šฉ์ž ์ •์˜ ํ”„๋กฌํ”„ํŠธ๋ฅผ ์—…๋กœ๋“œํ•˜์—ฌ ์ปค๋ฎค๋‹ˆํ‹ฐ์™€ ๊ณต์œ ํ•˜๋ ค๋ฉด ๋‹ค์Œ์„ ํ™•์ธํ•˜์„ธ์š”: - ๋ฐ์ดํ„ฐ ์„ธํŠธ ์ €์žฅ์†Œ๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. - `run` ๋ช…๋ น์— ๋Œ€ํ•œ ํ”„๋กฌํ”„ํŠธ ํ…œํ”Œ๋ฆฟ์„ `run_prompt_template.txt`๋ผ๋Š” ํŒŒ์ผ์— ๋„ฃ์œผ์„ธ์š”. - `chat` ๋ช…๋ น์— ๋Œ€ํ•œ ํ”„๋กฌํ”„ํŠธ ํ…œํ”Œ๋ฆฟ์„ `chat_prompt_template.txt`๋ผ๋Š” ํŒŒ์ผ์— ๋„ฃ์œผ์„ธ์š”. 
## ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ ์‚ฌ์šฉํ•˜๊ธฐ[[using-custom-tools]] ์ด ์„น์…˜์—์„œ๋Š” ์ด๋ฏธ์ง€ ์ƒ์„ฑ์— ํŠนํ™”๋œ ๋‘ ๊ฐ€์ง€ ๊ธฐ์กด ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ๋ฅผ ํ™œ์šฉํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค: - ๋” ๋งŽ์€ ์ด๋ฏธ์ง€ ์ˆ˜์ •์„ ํ—ˆ์šฉํ•˜๊ธฐ ์œ„ํ•ด [huggingface-tools/image-transformation](https://huggingface.co/spaces/huggingface-tools/image-transformation)์„ [diffusers/controlnet-canny-tool](https://huggingface.co/spaces/diffusers/controlnet-canny-tool)๋กœ ๋Œ€์ฒดํ•ฉ๋‹ˆ๋‹ค. - ๊ธฐ๋ณธ ๋„๊ตฌ ์ƒ์ž์— ์ด๋ฏธ์ง€ ์—…์Šค์ผ€์ผ๋ง์„ ์œ„ํ•œ ์ƒˆ๋กœ์šด ๋„๊ตฌ๊ฐ€ ์ถ”๊ฐ€๋˜์—ˆ์Šต๋‹ˆ๋‹ค: [diffusers/latent-upscaler-tool](https://huggingface.co/spaces/diffusers/latent-upscaler-tool)๊ฐ€ ๊ธฐ์กด ์ด๋ฏธ์ง€ ๋ณ€ํ™˜ ๋„๊ตฌ๋ฅผ ๋Œ€์ฒดํ•ฉ๋‹ˆ๋‹ค. ํŽธ๋ฆฌํ•œ [`load_tool`] ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ๋ฅผ ๊ฐ€์ ธ์˜ค๋Š” ๊ฒƒ์œผ๋กœ ์‹œ์ž‘ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค: ```py from transformers import load_tool controlnet_transformer = load_tool("diffusers/controlnet-canny-tool") upscaler = load_tool("diffusers/latent-upscaler-tool") ``` ์—์ด์ „ํŠธ์—๊ฒŒ ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ๋ฅผ ์ถ”๊ฐ€ํ•˜๋ฉด ๋„๊ตฌ์˜ ์„ค๋ช…๊ณผ ์ด๋ฆ„์ด ์—์ด์ „ํŠธ์˜ ํ”„๋กฌํ”„ํŠธ์— ์ž๋™์œผ๋กœ ํฌํ•จ๋ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์—์ด์ „ํŠธ๊ฐ€ ์‚ฌ์šฉ ๋ฐฉ๋ฒ•์„ ์ดํ•ดํ•  ์ˆ˜ ์žˆ๋„๋ก ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ์˜ ์„ค๋ช…๊ณผ ์ด๋ฆ„์„ ์ž˜ ์ž‘์„ฑํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. `controlnet_transformer`์˜ ์„ค๋ช…๊ณผ ์ด๋ฆ„์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py print(f"Description: '{controlnet_transformer.description}'") print(f"Name: '{controlnet_transformer.name}'") ``` ๊ทธ๋Ÿฌ๋ฉด ๋‹ค์Œ ๊ฒฐ๊ณผ๊ฐ€ ์ถœ๋ ฅ๋ฉ๋‹ˆ๋‹ค: ```text Description: 'This is a tool that transforms an image with ControlNet according to a prompt. It takes two inputs: `image`, which should be the image to transform, and `prompt`, which should be the prompt to use to change it. It returns the modified image.' Name: 'image_transformer' ``` ์ด๋ฆ„๊ณผ ์„ค๋ช…์ด ์ •ํ™•ํ•˜๊ณ  [ํ๋ ˆ์ดํŒ… ๋œ ๋„๊ตฌ ์„ธํŠธ(curated set of tools)](./transformers_agents#a-curated-set-of-tools)์˜ ์Šคํƒ€์ผ์— ๋งž์Šต๋‹ˆ๋‹ค. 
๋‹ค์Œ์œผ๋กœ, `controlnet_transformer`์™€ `upscaler`๋กœ ์—์ด์ „ํŠธ๋ฅผ ์ธ์Šคํ„ด์Šคํ™”ํ•ด ๋ด…์‹œ๋‹ค: ```py tools = [controlnet_transformer, upscaler] agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder", additional_tools=tools) ``` ์ด ๋ช…๋ น์„ ์‹คํ–‰ํ•˜๋ฉด ๋‹ค์Œ ์ •๋ณด๊ฐ€ ํ‘œ์‹œ๋ฉ๋‹ˆ๋‹ค: ```text image_transformer has been replaced by <transformers_modules.diffusers.controlnet-canny-tool.bd76182c7777eba9612fc03c0 8718a60c0aa6312.image_transformation.ControlNetTransformationTool object at 0x7f1d3bfa3a00> as provided in `additional_tools` ``` ํ๋ ˆ์ดํŒ…๋œ ๋„๊ตฌ ์„ธํŠธ์—๋Š” ์ด๋ฏธ 'image_transformer' ๋„๊ตฌ๊ฐ€ ์žˆ์œผ๋ฉฐ, ์ด ๋„๊ตฌ๋Š” ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ๋กœ ๋Œ€์ฒด๋ฉ๋‹ˆ๋‹ค. <Tip> ๊ธฐ์กด ๋„๊ตฌ์™€ ๋˜‘๊ฐ™์€ ์ž‘์—…์— ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ๋ฅผ ์‚ฌ์šฉํ•˜๋ ค๋Š” ๊ฒฝ์šฐ ๊ธฐ์กด ๋„๊ตฌ๋ฅผ ๋ฎ์–ด์“ฐ๋Š” ๊ฒƒ์ด ์œ ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์—์ด์ „ํŠธ๊ฐ€ ํ•ด๋‹น ์ž‘์—…์— ๋Šฅ์ˆ™ํ•˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ์ด ๊ฒฝ์šฐ ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ๊ฐ€ ๋ฎ์–ด์“ด ๋„๊ตฌ์™€ ์ •ํ™•ํžˆ ๋™์ผํ•œ API๋ฅผ ๋”ฐ๋ผ์•ผ ํ•˜๋ฉฐ, ๊ทธ๋ ‡์ง€ ์•Š์œผ๋ฉด ํ•ด๋‹น ๋„๊ตฌ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๋ชจ๋“  ์˜ˆ์ œ๊ฐ€ ์—…๋ฐ์ดํŠธ๋˜๋„๋ก ํ”„๋กฌํ”„ํŠธ ํ…œํ”Œ๋ฆฟ์„ ์กฐ์ •ํ•ด์•ผ ํ•œ๋‹ค๋Š” ์ ์— ์œ ์˜ํ•˜์„ธ์š”. </Tip> ์—…์Šค์ผ€์ผ๋Ÿฌ ๋„๊ตฌ์— ์ง€์ •๋œ 'image_upscaler'๋ผ๋Š” ์ด๋ฆ„ ์•„์ง ๊ธฐ๋ณธ ๋„๊ตฌ ์ƒ์ž์—๋Š” ์กด์žฌํ•˜์ง€ ์•Š๊ธฐ ๋•Œ๋ฌธ์—, ๋„๊ตฌ ๋ชฉ๋ก์— ํ•ด๋‹น ์ด๋ฆ„์ด ๊ฐ„๋‹จํžˆ ์ถ”๊ฐ€๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์—์ด์ „ํŠธ๊ฐ€ ํ˜„์žฌ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ๋„๊ตฌ ์ƒ์ž๋Š” ์–ธ์ œ๋“ ์ง€ `agent.toolbox` ์†์„ฑ์„ ํ†ตํ•ด ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py print("\n".join([f"- {a}" for a in agent.toolbox.keys()])) ``` ```text - document_qa - image_captioner - image_qa - image_segmenter - transcriber - summarizer - text_classifier - text_qa - text_reader - translator - image_transformer - text_downloader - image_generator - video_generator - image_upscaler ``` ์—์ด์ „ํŠธ์˜ ๋„๊ตฌ ์ƒ์ž์— `image_upscaler`๊ฐ€ ์ถ”๊ฐ€๋œ ์ ์„ ์ฃผ๋ชฉํ•˜์„ธ์š”. ์ด์ œ ์ƒˆ๋กœ์šด ๋„๊ตฌ๋ฅผ ์‚ฌ์šฉํ•ด๋ด…์‹œ๋‹ค! 
[Transformers Agents Quickstart](./transformers_agents#single-execution-run)์—์„œ ์ƒ์„ฑํ•œ ์ด๋ฏธ์ง€๋ฅผ ๋‹ค์‹œ ์‚ฌ์šฉํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ```py from diffusers.utils import load_image image = load_image( "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes.png" ) ``` <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes.png" width=200> ์ด๋ฏธ์ง€๋ฅผ ์•„๋ฆ„๋‹ค์šด ๊ฒจ์šธ ํ’๊ฒฝ์œผ๋กœ ๋ฐ”๊ฟ” ๋ด…์‹œ๋‹ค: ```py image = agent.run("Transform the image: 'A frozen lake and snowy forest'", image=image) ``` ```text ==Explanation from the agent== I will use the following tool: `image_transformer` to transform the image. ==Code generated by the agent== image = image_transformer(image, prompt="A frozen lake and snowy forest") ``` <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes_winter.png" width=200> ์ƒˆ๋กœ์šด ์ด๋ฏธ์ง€ ์ฒ˜๋ฆฌ ๋„๊ตฌ๋Š” ์ด๋ฏธ์ง€๋ฅผ ๋งค์šฐ ๊ฐ•๋ ฅํ•˜๊ฒŒ ์ˆ˜์ •ํ•  ์ˆ˜ ์žˆ๋Š” ControlNet์„ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•ฉ๋‹ˆ๋‹ค. ๊ธฐ๋ณธ์ ์œผ๋กœ ์ด๋ฏธ์ง€ ์ฒ˜๋ฆฌ ๋„๊ตฌ๋Š” 512x512 ํ”ฝ์…€ ํฌ๊ธฐ์˜ ์ด๋ฏธ์ง€๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฅผ ์—…์Šค์ผ€์ผ๋งํ•  ์ˆ˜ ์žˆ๋Š”์ง€ ์‚ดํŽด๋ด…์‹œ๋‹ค. ```py image = agent.run("Upscale the image", image) ``` ```text ==Explanation from the agent== I will use the following tool: `image_upscaler` to upscale the image. ==Code generated by the agent== upscaled_image = image_upscaler(image) ``` <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes_winter_upscale.png" width=400> ์—์ด์ „ํŠธ๋Š” ์—…์Šค์ผ€์ผ๋Ÿฌ ๋„๊ตฌ์˜ ์„ค๋ช…๊ณผ ์ด๋ฆ„๋งŒ ๋ณด๊ณ  ๋ฐฉ๊ธˆ ์ถ”๊ฐ€ํ•œ ์—…์Šค์ผ€์ผ๋Ÿฌ ๋„๊ตฌ์— "์ด๋ฏธ์ง€ ์—…์Šค์ผ€์ผ๋ง"์ด๋ผ๋Š” ํ”„๋กฌํ”„ํŠธ๋ฅผ ์ž๋™์œผ๋กœ ๋งคํ•‘ํ•˜์—ฌ ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ์‹คํ–‰ํ–ˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ์œผ๋กœ ์ƒˆ ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ๋ฅผ ๋งŒ๋“œ๋Š” ๋ฐฉ๋ฒ•์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. 
### ์ƒˆ ๋„๊ตฌ ์ถ”๊ฐ€ํ•˜๊ธฐ[[adding-new-tools]] ์ด ์„น์…˜์—์„œ๋Š” ์—์ด์ „ํŠธ์—๊ฒŒ ์ถ”๊ฐ€ํ•  ์ˆ˜ ์žˆ๋Š” ์ƒˆ ๋„๊ตฌ๋ฅผ ๋งŒ๋“œ๋Š” ๋ฐฉ๋ฒ•์„ ๋ณด์—ฌ ๋“œ๋ฆฝ๋‹ˆ๋‹ค. #### ์ƒˆ ๋„๊ตฌ ๋งŒ๋“ค๊ธฐ[[creating-a-new-tool]] ๋จผ์ € ๋„๊ตฌ๋ฅผ ๋งŒ๋“œ๋Š” ๊ฒƒ๋ถ€ํ„ฐ ์‹œ์ž‘ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ํŠน์ • ์ž‘์—…์— ๋Œ€ํ•ด ๊ฐ€์žฅ ๋งŽ์€ ๋‹ค์šด๋กœ๋“œ๋ฅผ ๋ฐ›์€ Hugging Face Hub์˜ ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ค๋Š”, ๊ทธ๋‹ค์ง€ ์œ ์šฉํ•˜์ง€๋Š” ์•Š์ง€๋งŒ ์žฌ๋ฏธ์žˆ๋Š” ์ž‘์—…์„ ์ถ”๊ฐ€ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ ์ฝ”๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค: ```python from huggingface_hub import list_models task = "text-classification" model = next(iter(list_models(filter=task, sort="downloads", direction=-1))) print(model.id) ``` `text-classification`(ํ…์ŠคํŠธ ๋ถ„๋ฅ˜) ์ž‘์—…์˜ ๊ฒฝ์šฐ `'facebook/bart-large-mnli'`๋ฅผ ๋ฐ˜ํ™˜ํ•˜๊ณ , `translation`(๋ฒˆ์—ญ) ์ž‘์—…์˜ ๊ฒฝ์šฐ `'t5-base'`๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฅผ ์—์ด์ „ํŠธ๊ฐ€ ํ™œ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ๋„๊ตฌ๋กœ ๋ณ€ํ™˜ํ•˜๋ ค๋ฉด ์–ด๋–ป๊ฒŒ ํ•ด์•ผ ํ• ๊นŒ์š”? ๋ชจ๋“  ๋„๊ตฌ๋Š” ํ•„์š”ํ•œ ์ฃผ์š” ์†์„ฑ์„ ๋ณด์œ ํ•˜๋Š” ์Šˆํผํด๋ž˜์Šค `Tool`์— ์˜์กดํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฅผ ์ƒ์†ํ•˜๋Š” ํด๋ž˜์Šค๋ฅผ ๋งŒ๋“ค์–ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```python from transformers import Tool class HFModelDownloadsTool(Tool): pass ``` ์ด ํด๋ž˜์Šค์—๋Š” ๋ช‡ ๊ฐ€์ง€ ์š”๊ตฌ์‚ฌํ•ญ์ด ์žˆ์Šต๋‹ˆ๋‹ค: - ๋„๊ตฌ ์ž์ฒด์˜ ์ด๋ฆ„์— ํ•ด๋‹นํ•˜๋Š” `name` ์†์„ฑ. ์ˆ˜ํ–‰๋ช…์ด ์žˆ๋Š” ๋‹ค๋ฅธ ๋„๊ตฌ์™€ ํ˜ธํ™˜๋˜๋„๋ก `model_download_counter`๋กœ ์ด๋ฆ„์„ ์ง€์ •ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. - ์—์ด์ „ํŠธ์˜ ํ”„๋กฌํ”„ํŠธ๋ฅผ ์ฑ„์šฐ๋Š” ๋ฐ ์‚ฌ์šฉ๋˜๋Š” ์†์„ฑ `description`. - `inputs` ๋ฐ `outputs` ์†์„ฑ. ์ด๋ฅผ ์ •์˜ํ•˜๋ฉด Python ์ธํ„ฐํ”„๋ฆฌํ„ฐ๊ฐ€ ์œ ํ˜•์— ๋Œ€ํ•œ ์ •๋ณด์— ์ž…๊ฐํ•œ ์„ ํƒ์„ ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋˜๋ฉฐ, ๋„๊ตฌ๋ฅผ ํ—ˆ๋ธŒ์— ํ‘ธ์‹œํ•  ๋•Œ gradio ๋ฐ๋ชจ๋ฅผ ์ƒ์„ฑํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‘ ์†์„ฑ ๋ชจ๋‘ ๊ฐ’์€ 'ํ…์ŠคํŠธ', '์ด๋ฏธ์ง€' ๋˜๋Š” '์˜ค๋””์˜ค'๊ฐ€ ๋  ์ˆ˜ ์žˆ๋Š” ์˜ˆ์ƒ ๊ฐ’์˜ ๋ฆฌ์ŠคํŠธ์ž…๋‹ˆ๋‹ค. - ์ถ”๋ก  ์ฝ”๋“œ๊ฐ€ ํฌํ•จ๋œ `__call__` ๋ฉ”์†Œ๋“œ. ์ด๊ฒƒ์ด ์šฐ๋ฆฌ๊ฐ€ ์œ„์—์„œ ๋‹ค๋ฃจ์—ˆ๋˜ ์ฝ”๋“œ์ž…๋‹ˆ๋‹ค! 
์ด์ œ ํด๋ž˜์Šค์˜ ๋ชจ์Šต์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```python from transformers import Tool from huggingface_hub import list_models class HFModelDownloadsTool(Tool): name = "model_download_counter" description = ( "This is a tool that returns the most downloaded model of a given task on the Hugging Face Hub. " "It takes the name of the category (such as text-classification, depth-estimation, etc), and " "returns the name of the checkpoint." ) inputs = ["text"] outputs = ["text"] def __call__(self, task: str): model = next(iter(list_models(filter=task, sort="downloads", direction=-1))) return model.id ``` ์ด์ œ ๋„๊ตฌ๋ฅผ ์†์‰ฝ๊ฒŒ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๊ฒŒ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ๋„๊ตฌ๋ฅผ ํŒŒ์ผ์— ์ €์žฅํ•˜๊ณ  ๋ฉ”์ธ ์Šคํฌ๋ฆฝํŠธ์—์„œ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. ์ด ํŒŒ์ผ์˜ ์ด๋ฆ„์„ `model_downloads.py`๋กœ ์ง€์ •ํ•˜๋ฉด ๊ฒฐ๊ณผ์ ์œผ๋กœ ๊ฐ€์ ธ์˜ค๊ธฐ ์ฝ”๋“œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```python from model_downloads import HFModelDownloadsTool tool = HFModelDownloadsTool() ``` ๋‹ค๋ฅธ ์‚ฌ๋žŒ๋“ค์ด ์ด ๊ธฐ๋Šฅ์„ ํ™œ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•˜๊ณ  ์ดˆ๊ธฐํ™”๋ฅผ ๋” ๊ฐ„๋‹จํ•˜๊ฒŒ ํ•˜๋ ค๋ฉด ๋„ค์ž„์ŠคํŽ˜์ด์Šค ์•„๋ž˜์˜ Hub๋กœ ํ‘ธ์‹œํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ๊ทธ๋ ‡๊ฒŒ ํ•˜๋ ค๋ฉด `tool` ๋ณ€์ˆ˜์—์„œ `push_to_hub`๋ฅผ ํ˜ธ์ถœํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค: ```python tool.push_to_hub("hf-model-downloads") ``` ์ด์ œ ํ—ˆ๋ธŒ์— ์ฝ”๋“œ๊ฐ€ ์ƒ๊ฒผ์Šต๋‹ˆ๋‹ค! ๋งˆ์ง€๋ง‰ ๋‹จ๊ณ„์ธ ์—์ด์ „ํŠธ๊ฐ€ ์ฝ”๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๋„๋ก ํ•˜๋Š” ๋‹จ๊ณ„๋ฅผ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. 
#### ์—์ด์ „ํŠธ๊ฐ€ ๋„๊ตฌ๋ฅผ ์‚ฌ์šฉํ•˜๊ฒŒ ํ•˜๊ธฐ[[having-the-agent-use-the-tool]]

์ด์ œ ํ—ˆ๋ธŒ์— ์กด์žฌํ•˜๋Š” ๋„๊ตฌ๋ฅผ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ธ์Šคํ„ด์Šคํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค(๋„๊ตฌ์˜ ์‚ฌ์šฉ์ž ์ด๋ฆ„์€ ๋ณธ์ธ์˜ ๊ฒƒ์œผ๋กœ ๋ณ€๊ฒฝํ•˜์„ธ์š”):

```python
from transformers import load_tool

tool = load_tool("lysandre/hf-model-downloads")
```

์ด ๋„๊ตฌ๋ฅผ ์—์ด์ „ํŠธ์—์„œ ์‚ฌ์šฉํ•˜๋ ค๋ฉด ์—์ด์ „ํŠธ ์ดˆ๊ธฐํ™” ๋ฉ”์†Œ๋“œ์˜ `additional_tools` ๋งค๊ฐœ๋ณ€์ˆ˜์— ์ „๋‹ฌํ•˜๊ธฐ๋งŒ ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค:

```python
from transformers import HfAgent

agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder", additional_tools=[tool])
agent.run(
    "Can you read out loud the name of the model that has the most downloads in the 'text-to-video' task on the Hugging Face Hub?"
)
```

๊ทธ๋Ÿฌ๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๊ฒฐ๊ณผ๊ฐ€ ์ถœ๋ ฅ๋ฉ๋‹ˆ๋‹ค:

```text
==Code generated by the agent==
model = model_download_counter(task="text-to-video")
print(f"The model with the most downloads is {model}.")
audio_model = text_reader(model)

==Result==
The model with the most downloads is damo-vilab/text-to-video-ms-1.7b.
```

๊ทธ๋ฆฌ๊ณ  ๋‹ค์Œ ์˜ค๋””์˜ค๊ฐ€ ์ƒ์„ฑ๋ฉ๋‹ˆ๋‹ค.

| **Audio**                                                                                                                                            |
|------------------------------------------------------------------------------------------------------------------------------------------------------|
| <audio controls><source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/damo.wav" type="audio/wav"/> |

<Tip>

LLM์— ๋”ฐ๋ผ ์ผ๋ถ€๋Š” ๋งค์šฐ ์ทจ์•ฝํ•˜๊ธฐ ๋•Œ๋ฌธ์— ์ œ๋Œ€๋กœ ์ž‘๋™ํ•˜๋ ค๋ฉด ๋งค์šฐ ์ •ํ™•ํ•œ ํ”„๋กฌํ”„ํŠธ๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ์—์ด์ „ํŠธ๊ฐ€ ๋„๊ตฌ๋ฅผ ์ž˜ ํ™œ์šฉํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” ๋„๊ตฌ์˜ ์ด๋ฆ„๊ณผ ์„ค๋ช…์„ ์ž˜ ์ •์˜ํ•˜๋Š” ๊ฒƒ์ด ๋ฌด์—‡๋ณด๋‹ค ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค.
</Tip> ### ๊ธฐ์กด ๋„๊ตฌ ๋Œ€์ฒดํ•˜๊ธฐ[[replacing-existing-tools]] ์—์ด์ „ํŠธ์˜ ๋„๊ตฌ ์ƒ์ž์— ์ƒˆ ํ•ญ๋ชฉ์„ ๋ฐฐ์ •ํ•˜๊ธฐ๋งŒ ํ•˜๋ฉด ๊ธฐ์กด ๋„๊ตฌ๋ฅผ ๋Œ€์ฒดํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฐฉ๋ฒ•์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```python from transformers import HfAgent, load_tool agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder") agent.toolbox["image-transformation"] = load_tool("diffusers/controlnet-canny-tool") ``` <Tip> ๋‹ค๋ฅธ ๋„๊ตฌ๋กœ ๊ต์ฒดํ•  ๋•Œ๋Š” ์ฃผ์˜ํ•˜์„ธ์š”! ์ด ์ž‘์—…์œผ๋กœ ์—์ด์ „ํŠธ์˜ ํ”„๋กฌํ”„ํŠธ๋„ ์กฐ์ •๋ฉ๋‹ˆ๋‹ค. ์ž‘์—…์— ๋” ์ ํ•ฉํ•œ ํ”„๋กฌํ”„ํŠธ๊ฐ€ ์žˆ์œผ๋ฉด ์ข‹์„ ์ˆ˜ ์žˆ์ง€๋งŒ, ๋‹ค๋ฅธ ๋„๊ตฌ๋ณด๋‹ค ๋” ๋งŽ์ด ์„ ํƒ๋˜๊ฑฐ๋‚˜ ์ •์˜ํ•œ ๋„๊ตฌ ๋Œ€์‹  ๋‹ค๋ฅธ ๋„๊ตฌ๊ฐ€ ์„ ํƒ๋  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. </Tip> ## gradio-tools ์‚ฌ์šฉํ•˜๊ธฐ[[leveraging-gradio-tools]] [gradio-tools](https://github.com/freddyaboulton/gradio-tools)๋Š” Hugging Face Spaces๋ฅผ ๋„๊ตฌ๋กœ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ๊ฐ•๋ ฅํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์ž…๋‹ˆ๋‹ค. ๊ธฐ์กด์˜ ๋งŽ์€ Spaces๋ฟ๋งŒ ์•„๋‹ˆ๋ผ ์‚ฌ์šฉ์ž ์ •์˜ Spaces๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋””์ž์ธํ•  ์ˆ˜ ์žˆ๋„๋ก ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. ์šฐ๋ฆฌ๋Š” `Tool.from_gradio` ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ `gradio_tools`์— ๋Œ€ํ•œ ์ง€์›์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, ํ”„๋กฌํ”„ํŠธ๋ฅผ ๊ฐœ์„ ํ•˜๊ณ  ๋” ๋‚˜์€ ์ด๋ฏธ์ง€๋ฅผ ์ƒ์„ฑํ•˜๊ธฐ ์œ„ํ•ด `gradio-tools` ํˆดํ‚ท์—์„œ ์ œ๊ณต๋˜๋Š” `StableDiffusionPromptGeneratorTool` ๋„๊ตฌ๋ฅผ ํ™œ์šฉํ•˜๊ณ ์ž ํ•ฉ๋‹ˆ๋‹ค. ๋จผ์ € `gradio_tools`์—์„œ ๋„๊ตฌ๋ฅผ ๊ฐ€์ ธ์™€์„œ ์ธ์Šคํ„ด์Šคํ™”ํ•ฉ๋‹ˆ๋‹ค: ```python from gradio_tools import StableDiffusionPromptGeneratorTool gradio_tool = StableDiffusionPromptGeneratorTool() ``` ํ•ด๋‹น ์ธ์Šคํ„ด์Šค๋ฅผ `Tool.from_gradio` ๋ฉ”์†Œ๋“œ์— ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค: ```python from transformers import Tool tool = Tool.from_gradio(gradio_tool) ``` ์ด์ œ ์ผ๋ฐ˜์ ์ธ ์‚ฌ์šฉ์ž ์ •์˜ ๋„๊ตฌ์™€ ๋˜‘๊ฐ™์ด ๊ด€๋ฆฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
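`Tool.from_gradio`๊ฐ€ ํ•˜๋Š” ์ผ์„ ๊ฐœ๋…์ ์œผ๋กœ ๊ทธ๋ ค ๋ณด๋ฉด, gradio ๋„๊ตฌ์˜ ์ด๋ฆ„ยท์„ค๋ช…ยท`run()`์„ ์•ž์„œ ๋ณธ `Tool` ์ธํ„ฐํŽ˜์ด์Šค(`name`/`description`/`__call__`)๋กœ ๊ฐ์‹ธ๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ์‹ค์ œ ๊ตฌํ˜„๊ณผ ์„ธ๋ถ€๊ฐ€ ๋‹ค๋ฅผ ์ˆ˜ ์žˆ๋Š” ๊ฐ€์ •ํ•˜์˜ ์Šค์ผ€์น˜์ด๋ฉฐ, `FakeGradioTool`์€ ์„ค๋ช…์„ ์œ„ํ•ด ๋งŒ๋“  ๊ฐ€์ƒ์˜ ํด๋ž˜์Šค์ž…๋‹ˆ๋‹ค:

```python
class ToolAdapter:
    # gradio ๋„๊ตฌ๋ฅผ Tool ์ธํ„ฐํŽ˜์ด์Šค๋กœ ๋…ธ์ถœํ•˜๋Š” ๊ฐœ๋…์  ์–ด๋Œ‘ํ„ฐ
    def __init__(self, gradio_tool):
        self.name = gradio_tool.name
        self.description = gradio_tool.description
        self._tool = gradio_tool

    def __call__(self, *args, **kwargs):
        # ํ˜ธ์ถœ์„ ๊ทธ๋Œ€๋กœ gradio ๋„๊ตฌ์˜ run()์— ์œ„์ž„ํ•ฉ๋‹ˆ๋‹ค
        return self._tool.run(*args, **kwargs)


class FakeGradioTool:
    name = "StableDiffusionPromptGenerator"
    description = "Improves a prompt for image generation."

    def run(self, prompt):
        return prompt + ", highly detailed, trending on artstation"


tool = ToolAdapter(FakeGradioTool())
improved = tool("A rabbit wearing a space suit")
```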
์ด๋ฅผ ํ™œ์šฉํ•˜์—ฌ `a rabbit wearing a space suit`(์šฐ์ฃผ๋ณต์„ ์ž…์€ ํ† ๋ผ)๋ผ๋Š” ํ”„๋กฌํ”„ํŠธ๋ฅผ ๊ฐœ์„ ํ•ด ๋ด…์‹œ๋‹ค:

```python
from transformers import HfAgent

agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder", additional_tools=[tool])

agent.run("Generate an image of the `prompt` after improving it.", prompt="A rabbit wearing a space suit")
```

๋ชจ๋ธ์ด ๋„๊ตฌ๋ฅผ ์ ์ ˆํžˆ ํ™œ์šฉํ•ฉ๋‹ˆ๋‹ค:

```text
==Explanation from the agent==
I will use the following tools: `StableDiffusionPromptGenerator` to improve the prompt, then `image_generator` to generate an image according to the improved prompt.

==Code generated by the agent==
improved_prompt = StableDiffusionPromptGenerator(prompt)
print(f"The improved prompt is {improved_prompt}.")
image = image_generator(improved_prompt)
```

๊ทธ๋ฆฌ๊ณ  ๋งˆ์ง€๋ง‰์œผ๋กœ ๋‹ค์Œ ์ด๋ฏธ์ง€๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค:

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rabbit.png">

<Tip warning={true}>

gradio-tools๋Š” ๋‹ค๋ฅธ ๋ชจ๋‹ฌ๋ฆฌํ‹ฐ๋กœ ์ž‘์—…ํ•  ๋•Œ์—๋„ *ํ…์ŠคํŠธ* ์ž…๋ ฅ ๋ฐ ์ถœ๋ ฅ์„ ํ•„์š”๋กœ ํ•˜๋Š” ๋ฐ˜๋ฉด, ์ด ๊ตฌํ˜„์€ ์ด๋ฏธ์ง€ ๋ฐ ์˜ค๋””์˜ค ๊ฐ์ฒด๋ฅผ ์ง์ ‘ ๋‹ค๋ฃน๋‹ˆ๋‹ค. ํ˜„์žฌ๋Š” ์ด ๋‘ ๊ฐ€์ง€๊ฐ€ ํ˜ธํ™˜๋˜์ง€ ์•Š์ง€๋งŒ, ์ง€์› ๊ฐœ์„  ์ž‘์—…์„ ํ†ตํ•ด ๋น ๋ฅด๊ฒŒ ํ˜ธํ™˜๋˜๋„๋ก ํ•  ์˜ˆ์ •์ž…๋‹ˆ๋‹ค.

</Tip>

## ํ–ฅํ›„ Langchain๊ณผ์˜ ํ˜ธํ™˜์„ฑ[[future-compatibility-with-langchain]]

์ €ํฌ๋Š” Langchain์„ ์ข‹์•„ํ•˜๋ฉฐ ๋งค์šฐ ๋งค๋ ฅ์ ์ธ ๋„๊ตฌ ๋ชจ์Œ์„ ๊ฐ€์ง€๊ณ  ์žˆ๋‹ค๊ณ  ์ƒ๊ฐํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋„๊ตฌ๋ฅผ ์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด Langchain์€ ๋‹ค๋ฅธ ๋ชจ๋‹ฌ๋ฆฌํ‹ฐ์™€ ์ž‘์—…ํ•  ๋•Œ์—๋„ *ํ…์ŠคํŠธ* ์ž…๋ ฅ๊ณผ ์ถœ๋ ฅ์„ ํ•„์š”๋กœ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” ์ข…์ข… ๊ฐ์ฒด์˜ ์ง๋ ฌํ™”๋œ(์ฆ‰, ๋””์Šคํฌ์— ์ €์žฅ๋œ) ๋ฒ„์ „์ž…๋‹ˆ๋‹ค.

์ด ์ฐจ์ด๋กœ ์ธํ•ด transformers-agents์™€ Langchain ๊ฐ„์—๋Š” ๋ฉ€ํ‹ฐ ๋ชจ๋‹ฌ๋ฆฌํ‹ฐ๊ฐ€ ์ฒ˜๋ฆฌ๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค.
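"๊ฐ์ฒด์˜ ์ง๋ ฌํ™”๋œ ๋ฒ„์ „"์ด ์˜๋ฏธํ•˜๋Š” ๋ฐ”๋ฅผ ๊ฐ„๋‹จํžˆ ์Šค์ผ€์น˜ํ•˜๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค. base64 ์ธ์ฝ”๋”ฉ์€ ์ด์ง„ ๊ฐ์ฒด๋ฅผ ํ…์ŠคํŠธ๋กœ ๋‚˜ํƒ€๋‚ด๋Š” ์—ฌ๋Ÿฌ ๋ฐฉ๋ฒ• ์ค‘ ํ•˜๋‚˜์ผ ๋ฟ์ด๋ฉฐ, Langchain์ด ์‹ค์ œ๋กœ ์‚ฌ์šฉํ•˜๋Š” ์ง๋ ฌํ™” ๋ฐฉ์‹๊ณผ๋Š” ๋‹ค๋ฅผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค:

```python
import base64


def serialize_to_text(binary_payload: bytes) -> str:
    # ์ด๋ฏธ์ง€/์˜ค๋””์˜ค ๊ฐ™์€ ์ด์ง„ ๊ฐ์ฒด๋ฅผ ํ…์ŠคํŠธ ์ „์šฉ ์ธํ„ฐํŽ˜์ด์Šค๋กœ ๋„˜๊ธฐ๋ ค๋ฉด
    # ํ…์ŠคํŠธ ์ธ์ฝ”๋”ฉ์œผ๋กœ ์ง๋ ฌํ™”ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค
    return base64.b64encode(binary_payload).decode("ascii")


def deserialize_from_text(text: str) -> bytes:
    return base64.b64decode(text.encode("ascii"))


# ์„ค๋ช…์„ ์œ„ํ•œ ๊ฐ€์ƒ์˜ ์ด๋ฏธ์ง€ ๋ฐ”์ดํŠธ
fake_image_bytes = b"\x89PNG\r\n\x1a\n...pixels..."
as_text = serialize_to_text(fake_image_bytes)
round_trip = deserialize_from_text(as_text)
```

๋ฐ˜๋ฉด Transformers Agents๋Š” ์ด๋ฏธ์ง€ยท์˜ค๋””์˜ค ๊ฐ์ฒด๋ฅผ ํ…์ŠคํŠธ๋กœ ๋ณ€ํ™˜ํ•˜์ง€ ์•Š๊ณ  ๋„๊ตฌ ๊ฐ„์— ๊ทธ๋Œ€๋กœ ์ฃผ๊ณ ๋ฐ›๊ธฐ ๋•Œ๋ฌธ์— ์ด๋Ÿฐ ์ฐจ์ด๊ฐ€ ์ƒ๊น๋‹ˆ๋‹ค.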
ํ–ฅํ›„ ๋ฒ„์ „์—์„œ ์ด ์ œํ•œ์ด ํ•ด๊ฒฐ๋˜๊ธฐ๋ฅผ ๋ฐ”๋ผ๋ฉฐ, ์ด ํ˜ธํ™˜์„ฑ์„ ๋‹ฌ์„ฑํ•  ์ˆ˜ ์žˆ๋„๋ก ์—ด๋ ฌํ•œ Langchain ์‚ฌ์šฉ์ž์˜ ๋„์›€์„ ํ™˜์˜ํ•ฉ๋‹ˆ๋‹ค. ์ €ํฌ๋Š” ๋” ๋‚˜์€ ์ง€์›์„ ์ œ๊ณตํ•˜๊ณ ์ž ํ•ฉ๋‹ˆ๋‹ค. ๋„์›€์„ ์ฃผ๊ณ  ์‹ถ์œผ์‹œ๋‹ค๋ฉด, [์ด์Šˆ๋ฅผ ์—ด์–ด](https://github.com/huggingface/transformers/issues/new) ์˜๊ฒฌ์„ ๊ณต์œ ํ•ด ์ฃผ์„ธ์š”.
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ํ…์ŠคํŠธ ๋ถ„๋ฅ˜[[text-classification]] [[open-in-colab]] <Youtube id="leNG9fN9FQU"/> ํ…์ŠคํŠธ ๋ถ„๋ฅ˜๋Š” ์ž์—ฐ์–ด ์ฒ˜๋ฆฌ์˜ ์ผ์ข…์œผ๋กœ, ํ…์ŠคํŠธ์— ๋ ˆ์ด๋ธ” ๋˜๋Š” ํด๋ž˜์Šค๋ฅผ ์ง€์ •ํ•˜๋Š” ์ž‘์—…์ž…๋‹ˆ๋‹ค. ๋งŽ์€ ๋Œ€๊ธฐ์—…์ด ๋‹ค์–‘ํ•œ ์‹ค์šฉ์ ์ธ ์‘์šฉ ๋ถ„์•ผ์—์„œ ํ…์ŠคํŠธ ๋ถ„๋ฅ˜๋ฅผ ์šด์˜ํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๊ฐ€์žฅ ์ธ๊ธฐ ์žˆ๋Š” ํ…์ŠคํŠธ ๋ถ„๋ฅ˜ ํ˜•ํƒœ ์ค‘ ํ•˜๋‚˜๋Š” ๊ฐ์„ฑ ๋ถ„์„์œผ๋กœ, ํ…์ŠคํŠธ ์‹œํ€€์Šค์— ๐Ÿ™‚ ๊ธ์ •, ๐Ÿ™ ๋ถ€์ • ๋˜๋Š” ๐Ÿ˜ ์ค‘๋ฆฝ๊ณผ ๊ฐ™์€ ๋ ˆ์ด๋ธ”์„ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ ํ•™์Šตํ•  ๋‚ด์šฉ์€: 1. [IMDb](https://huggingface.co/datasets/imdb) ๋ฐ์ดํ„ฐ์…‹์—์„œ [DistilBERT](https://huggingface.co/distilbert-base-uncased)๋ฅผ ํŒŒ์ธ ํŠœ๋‹ํ•˜์—ฌ ์˜ํ™” ๋ฆฌ๋ทฐ๊ฐ€ ๊ธ์ •์ ์ธ์ง€ ๋ถ€์ •์ ์ธ์ง€ ํŒ๋‹จํ•ฉ๋‹ˆ๋‹ค. 2. ์ถ”๋ก ์„ ์œ„ํ•ด ํŒŒ์ธ ํŠœ๋‹ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. 
<Tip> ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ ์„ค๋ช…ํ•˜๋Š” ์ž‘์—…์€ ๋‹ค์Œ ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜์— ์˜ํ•ด ์ง€์›๋ฉ๋‹ˆ๋‹ค: <!--This tip is automatically generated by `make fix-copies`, do not fill manually!--> [ALBERT](../model_doc/albert), [BART](../model_doc/bart), [BERT](../model_doc/bert), [BigBird](../model_doc/big_bird), [BigBird-Pegasus](../model_doc/bigbird_pegasus), [BLOOM](../model_doc/bloom), [CamemBERT](../model_doc/camembert), [CANINE](../model_doc/canine), [ConvBERT](../model_doc/convbert), [CTRL](../model_doc/ctrl), [Data2VecText](../model_doc/data2vec-text), [DeBERTa](../model_doc/deberta), [DeBERTa-v2](../model_doc/deberta-v2), [DistilBERT](../model_doc/distilbert), [ELECTRA](../model_doc/electra), [ERNIE](../model_doc/ernie), [ErnieM](../model_doc/ernie_m), [ESM](../model_doc/esm), [FlauBERT](../model_doc/flaubert), [FNet](../model_doc/fnet), [Funnel Transformer](../model_doc/funnel), [GPT-Sw3](../model_doc/gpt-sw3), [OpenAI GPT-2](../model_doc/gpt2), [GPT Neo](../model_doc/gpt_neo), [GPT-J](../model_doc/gptj), [I-BERT](../model_doc/ibert), [LayoutLM](../model_doc/layoutlm), [LayoutLMv2](../model_doc/layoutlmv2), [LayoutLMv3](../model_doc/layoutlmv3), [LED](../model_doc/led), [LiLT](../model_doc/lilt), [LLaMA](../model_doc/llama), [Longformer](../model_doc/longformer), [LUKE](../model_doc/luke), [MarkupLM](../model_doc/markuplm), [mBART](../model_doc/mbart), [MEGA](../model_doc/mega), [Megatron-BERT](../model_doc/megatron-bert), [MobileBERT](../model_doc/mobilebert), [MPNet](../model_doc/mpnet), [MVP](../model_doc/mvp), [Nezha](../model_doc/nezha), [Nystrรถmformer](../model_doc/nystromformer), [OpenAI GPT](../model_doc/openai-gpt), [OPT](../model_doc/opt), [Perceiver](../model_doc/perceiver), [PLBart](../model_doc/plbart), [QDQBert](../model_doc/qdqbert), [Reformer](../model_doc/reformer), [RemBERT](../model_doc/rembert), [RoBERTa](../model_doc/roberta), [RoBERTa-PreLayerNorm](../model_doc/roberta-prelayernorm), [RoCBert](../model_doc/roc_bert), 
[RoFormer](../model_doc/roformer), [SqueezeBERT](../model_doc/squeezebert), [TAPAS](../model_doc/tapas), [Transformer-XL](../model_doc/transfo-xl), [XLM](../model_doc/xlm), [XLM-RoBERTa](../model_doc/xlm-roberta), [XLM-RoBERTa-XL](../model_doc/xlm-roberta-xl), [XLNet](../model_doc/xlnet), [X-MOD](../model_doc/xmod), [YOSO](../model_doc/yoso) <!--End of the generated tip--> </Tip> ์‹œ์ž‘ํ•˜๊ธฐ ์ „์—, ํ•„์š”ํ•œ ๋ชจ๋“  ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install transformers datasets evaluate ``` Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜์—ฌ ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๊ณต์œ ํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ๋ฉ”์‹œ์ง€๊ฐ€ ํ‘œ์‹œ๋˜๋ฉด, ํ† ํฐ์„ ์ž…๋ ฅํ•˜์—ฌ ๋กœ๊ทธ์ธํ•˜์„ธ์š”: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## IMDb ๋ฐ์ดํ„ฐ์…‹ ๊ฐ€์ ธ์˜ค๊ธฐ[[load-imdb-dataset]] ๋จผ์ € ๐Ÿค— Datasets ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ IMDb ๋ฐ์ดํ„ฐ์…‹์„ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค: ```py >>> from datasets import load_dataset >>> imdb = load_dataset("imdb") ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ์˜ˆ์‹œ๋ฅผ ์‚ดํŽด๋ด…์‹œ๋‹ค: ```py >>> imdb["test"][0] { "label": 0, "text": "I love sci-fi and am willing to put up with a lot. Sci-fi movies/TV are usually underfunded, under-appreciated and misunderstood. I tried to like this, I really did, but it is to good TV sci-fi as Babylon 5 is to Star Trek (the original). Silly prosthetics, cheap cardboard sets, stilted dialogues, CG that doesn't match the background, and painfully one-dimensional characters cannot be overcome with a 'sci-fi' setting. (I'm sure there are those of you out there who think Babylon 5 is good sci-fi TV. It's not. It's clichรฉd and uninspiring.) While US viewers might like emotion and character development, sci-fi is a genre that does not take itself seriously (cf. Star Trek). It may treat important issues, yet not as a serious philosophy. It's really difficult to care about the characters here as they are not simply foolish, just missing a spark of life. 
Their actions and reactions are wooden and predictable, often painful to watch. The makers of Earth KNOW it's rubbish as they have to always say \"Gene Roddenberry's Earth...\" otherwise people would not continue watching. Roddenberry's ashes must be turning in their orbit as this dull, cheap, poorly edited (watching it without advert breaks really brings this home) trudging Trabant of a show lumbers into space. Spoiler. So, kill off a main character. And then bring him back as another actor. Jeeez! Dallas all over again.", } ``` ์ด ๋ฐ์ดํ„ฐ์…‹์—๋Š” ๋‘ ๊ฐ€์ง€ ํ•„๋“œ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค: - `text`: ์˜ํ™” ๋ฆฌ๋ทฐ ํ…์ŠคํŠธ - `label`: `0`์€ ๋ถ€์ •์ ์ธ ๋ฆฌ๋ทฐ, `1`์€ ๊ธ์ •์ ์ธ ๋ฆฌ๋ทฐ๋ฅผ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. ## ์ „์ฒ˜๋ฆฌ[[preprocess]] ๋‹ค์Œ ๋‹จ๊ณ„๋Š” DistilBERT ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๊ฐ€์ ธ์™€์„œ `text` ํ•„๋“œ๋ฅผ ์ „์ฒ˜๋ฆฌํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") ``` `text`๋ฅผ ํ† ํฐํ™”ํ•˜๊ณ  ์‹œํ€€์Šค๊ฐ€ DistilBERT์˜ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด๋ณด๋‹ค ๊ธธ์ง€ ์•Š๋„๋ก ์ž๋ฅด๊ธฐ ์œ„ํ•œ ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์ƒ์„ฑํ•˜์„ธ์š”: ```py >>> def preprocess_function(examples): ... return tokenizer(examples["text"], truncation=True) ``` ์ „์ฒด ๋ฐ์ดํ„ฐ์…‹์— ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์ ์šฉํ•˜๋ ค๋ฉด, ๐Ÿค— Datasets [`~datasets.Dataset.map`] ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. ๋ฐ์ดํ„ฐ์…‹์˜ ์—ฌ๋Ÿฌ ์š”์†Œ๋ฅผ ํ•œ ๋ฒˆ์— ์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด `batched=True`๋กœ ์„ค์ •ํ•จ์œผ๋กœ์จ ๋ฐ์ดํ„ฐ์…‹ `map`๋ฅผ ๋” ๋น ๋ฅด๊ฒŒ ์ฒ˜๋ฆฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py tokenized_imdb = imdb.map(preprocess_function, batched=True) ``` ์ด์ œ [`DataCollatorWithPadding`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์˜ˆ์ œ ๋ฐฐ์น˜๋ฅผ ๋งŒ๋“ค์–ด๋ด…์‹œ๋‹ค. ๋ฐ์ดํ„ฐ์…‹ ์ „์ฒด๋ฅผ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ํŒจ๋”ฉํ•˜๋Š” ๋Œ€์‹ , *๋™์  ํŒจ๋”ฉ*์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐฐ์น˜์—์„œ ๊ฐ€์žฅ ๊ธด ๊ธธ์ด์— ๋งž๊ฒŒ ๋ฌธ์žฅ์„ ํŒจ๋”ฉํ•˜๋Š” ๊ฒƒ์ด ํšจ์œจ์ ์ž…๋‹ˆ๋‹ค. 
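๋™์  ํŒจ๋”ฉ์ด ์–ผ๋งˆ๋‚˜ ์ ˆ์•ฝ๋˜๋Š”์ง€๋Š” ๊ฐ„๋‹จํ•œ ๊ณ„์‚ฐ์œผ๋กœ ํ™•์ธํ•ด ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ๊ฐ€์ƒ์˜ ์‹œํ€€์Šค ๊ธธ์ด ๋ถ„ํฌ๋ฅผ ๊ฐ€์ •ํ•œ ์Šค์ผ€์น˜๋กœ, ๋ฐฐ์น˜๋ณ„ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ํŒจ๋”ฉํ•˜๋ฉด ์ „์ฒด๋ฅผ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ํŒจ๋”ฉํ•  ๋•Œ๋ณด๋‹ค ์ฒ˜๋ฆฌํ•  ํ† ํฐ ์ˆ˜๊ฐ€ ์ค„์–ด๋“œ๋Š” ๊ฒƒ์„ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค:

```python
def padded_token_count(lengths, batch_size, pad_to=None):
    # pad_to๊ฐ€ ์ฃผ์–ด์ง€๋ฉด ๋ชจ๋“  ์‹œํ€€์Šค๋ฅผ ๊ทธ ๊ธธ์ด๋กœ(์ •์  ํŒจ๋”ฉ),
    # ์—†์œผ๋ฉด ๋ฐฐ์น˜ ๋‚ด ์ตœ๋Œ€ ๊ธธ์ด๋กœ(๋™์  ํŒจ๋”ฉ) ํŒจ๋”ฉํ–ˆ์„ ๋•Œ์˜ ์ด ํ† ํฐ ์ˆ˜๋ฅผ ๊ณ„์‚ฐํ•ฉ๋‹ˆ๋‹ค
    total = 0
    for i in range(0, len(lengths), batch_size):
        batch = lengths[i:i + batch_size]
        target = pad_to if pad_to is not None else max(batch)
        total += target * len(batch)
    return total


# ๊ฐ€์ƒ์˜ ํ† ํฐํ™” ๊ธธ์ด ์˜ˆ์‹œ
lengths = [12, 48, 20, 512, 16, 300, 64, 32]
static = padded_token_count(lengths, batch_size=4, pad_to=512)   # ์ „์ฒด๋ฅผ 512๋กœ ํŒจ๋”ฉ
dynamic = padded_token_count(lengths, batch_size=4)              # ๋ฐฐ์น˜๋ณ„ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ํŒจ๋”ฉ
```

์งง์€ ์‹œํ€€์Šค๊ฐ€ ๊ฐ™์€ ๋ฐฐ์น˜์— ๋ชจ์ผ์ˆ˜๋ก ์ ˆ์•ฝ ํšจ๊ณผ๊ฐ€ ์ปค์ง€๋ฏ€๋กœ, ์‹ค์ œ๋กœ๋Š” ๊ธธ์ด๊ฐ€ ๋น„์Šทํ•œ ์˜ˆ์ œ๋ผ๋ฆฌ ๋ฌถ์–ด ์ฃผ๋ฉด ๋” ํฐ ์ด๋“์„ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.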
<frameworkcontent> <pt> ```py >>> from transformers import DataCollatorWithPadding >>> data_collator = DataCollatorWithPadding(tokenizer=tokenizer) ``` </pt> <tf> ```py >>> from transformers import DataCollatorWithPadding >>> data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf") ``` </tf> </frameworkcontent> ## ํ‰๊ฐ€ํ•˜๊ธฐ[[evaluate]] ํ›ˆ๋ จ ์ค‘ ๋ชจ๋ธ์˜ ์„ฑ๋Šฅ์„ ํ‰๊ฐ€ํ•˜๊ธฐ ์œ„ํ•ด ๋ฉ”ํŠธ๋ฆญ์„ ํฌํ•จํ•˜๋Š” ๊ฒƒ์ด ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. ๐Ÿค— [Evaluate](https://huggingface.co/docs/evaluate/index) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋น ๋ฅด๊ฒŒ ํ‰๊ฐ€ ๋ฐฉ๋ฒ•์„ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ์ž‘์—…์—์„œ๋Š” [accuracy](https://huggingface.co/spaces/evaluate-metric/accuracy) ๋ฉ”ํŠธ๋ฆญ์„ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. (๋ฉ”ํŠธ๋ฆญ์„ ๊ฐ€์ ธ์˜ค๊ณ  ๊ณ„์‚ฐํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด์„œ๋Š” ๐Ÿค— Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”): ```py >>> import evaluate >>> accuracy = evaluate.load("accuracy") ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ `compute_metrics` ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ค์–ด์„œ ์˜ˆ์ธก๊ณผ ๋ ˆ์ด๋ธ”์„ ๊ณ„์‚ฐํ•˜์—ฌ ์ •ํ™•๋„๋ฅผ ๊ณ„์‚ฐํ•˜๋„๋ก [`~evaluate.EvaluationModule.compute`]๋ฅผ ํ˜ธ์ถœํ•ฉ๋‹ˆ๋‹ค: ```py >>> import numpy as np >>> def compute_metrics(eval_pred): ... predictions, labels = eval_pred ... predictions = np.argmax(predictions, axis=1) ... return accuracy.compute(predictions=predictions, references=labels) ``` ์ด์ œ `compute_metrics` ํ•จ์ˆ˜๋Š” ์ค€๋น„๋˜์—ˆ๊ณ , ํ›ˆ๋ จ ๊ณผ์ •์„ ์„ค์ •ํ•  ๋•Œ ๋‹ค์‹œ ์‚ดํŽด๋ณผ ์˜ˆ์ •์ž…๋‹ˆ๋‹ค. ## ํ›ˆ๋ จ[[train]] ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๊ธฐ ์ „์—, `id2label`์™€ `label2id`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์˜ˆ์ƒ๋˜๋Š” id์™€ ๋ ˆ์ด๋ธ”์˜ ๋งต์„ ์ƒ์„ฑํ•˜์„ธ์š”: ```py >>> id2label = {0: "NEGATIVE", 1: "POSITIVE"} >>> label2id = {"NEGATIVE": 0, "POSITIVE": 1} ``` <frameworkcontent> <pt> <Tip> [`Trainer`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํŒŒ์ธ ํŠœ๋‹ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ์ต์ˆ™ํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ, [์—ฌ๊ธฐ](../training#train-with-pytorch-trainer)์˜ ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ์„ ํ™•์ธํ•˜์„ธ์š”! 
</Tip>

์ด์ œ ๋ชจ๋ธ์„ ํ›ˆ๋ จ์‹œํ‚ฌ ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! [`AutoModelForSequenceClassification`]๋กœ DistilBERT๋ฅผ ๊ฐ€์ ธ์˜ค๊ณ  ์˜ˆ์ƒ๋˜๋Š” ๋ ˆ์ด๋ธ” ์ˆ˜์™€ ๋ ˆ์ด๋ธ” ๋งคํ•‘์„ ์ง€์ •ํ•˜์„ธ์š”:

```py
>>> from transformers import AutoModelForSequenceClassification, TrainingArguments, Trainer

>>> model = AutoModelForSequenceClassification.from_pretrained(
...     "distilbert-base-uncased", num_labels=2, id2label=id2label, label2id=label2id
... )
```

์ด์ œ ์„ธ ๋‹จ๊ณ„๋งŒ ๊ฑฐ์น˜๋ฉด ๋์ž…๋‹ˆ๋‹ค:

1. [`TrainingArguments`]์—์„œ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ •์˜ํ•˜์„ธ์š”. `output_dir`๋Š” ๋ชจ๋ธ์„ ์ €์žฅํ•  ์œ„์น˜๋ฅผ ์ง€์ •ํ•˜๋Š” ์œ ์ผํ•œ ํ•„์ˆ˜ ํŒŒ๋ผ๋ฏธํ„ฐ์ž…๋‹ˆ๋‹ค. ์ด ๋ชจ๋ธ์„ Hub์— ์—…๋กœ๋“œํ•˜๊ธฐ ์œ„ํ•ด `push_to_hub=True`๋ฅผ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค. (๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ธฐ ์œ„ํ•ด Hugging Face์— ๋กœ๊ทธ์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค.) ๊ฐ ์—ํญ์ด ๋๋‚  ๋•Œ๋งˆ๋‹ค, [`Trainer`]๋Š” ์ •ํ™•๋„๋ฅผ ํ‰๊ฐ€ํ•˜๊ณ  ํ›ˆ๋ จ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์ €์žฅํ•ฉ๋‹ˆ๋‹ค.
2. [`Trainer`]์— ํ›ˆ๋ จ ์ธ์ˆ˜์™€ ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ์…‹, ํ† ํฌ๋‚˜์ด์ €, ๋ฐ์ดํ„ฐ ์ˆ˜์ง‘๊ธฐ ๋ฐ `compute_metrics` ํ•จ์ˆ˜๋ฅผ ์ „๋‹ฌํ•˜์„ธ์š”.
3. [`~Trainer.train`]๋ฅผ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ํŒŒ์ธ ํŠœ๋‹ํ•˜์„ธ์š”.

```py
>>> training_args = TrainingArguments(
...     output_dir="my_awesome_model",
...     learning_rate=2e-5,
...     per_device_train_batch_size=16,
...     per_device_eval_batch_size=16,
...     num_train_epochs=2,
...     weight_decay=0.01,
...     evaluation_strategy="epoch",
...     save_strategy="epoch",
...     load_best_model_at_end=True,
...     push_to_hub=True,
... )

>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=tokenized_imdb["train"],
...     eval_dataset=tokenized_imdb["test"],
...     tokenizer=tokenizer,
...     data_collator=data_collator,
...     compute_metrics=compute_metrics,
... )

>>> trainer.train()
```

<Tip>

[`Trainer`]๋Š” `tokenizer`๋ฅผ ์ „๋‹ฌํ•˜๋ฉด ๊ธฐ๋ณธ์ ์œผ๋กœ ๋™์  ํŒจ๋”ฉ์„ ์ ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด ๊ฒฝ์šฐ, ๋ช…์‹œ์ ์œผ๋กœ ๋ฐ์ดํ„ฐ ์ˆ˜์ง‘๊ธฐ๋ฅผ ์ง€์ •ํ•  ํ•„์š”๊ฐ€ ์—†์Šต๋‹ˆ๋‹ค.
</Tip>

ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด, [`~transformers.Trainer.push_to_hub`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ Hub์— ๊ณต์œ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

```py
>>> trainer.push_to_hub()
```
</pt>
<tf>
<Tip>

Keras๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํŒŒ์ธ ํŠœ๋‹ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ์ต์ˆ™ํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ, [์—ฌ๊ธฐ](../training#train-a-tensorflow-model-with-keras)์˜ ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ์„ ํ™•์ธํ•˜์„ธ์š”!

</Tip>

TensorFlow์—์„œ ๋ชจ๋ธ์„ ํŒŒ์ธ ํŠœ๋‹ํ•˜๋ ค๋ฉด, ๋จผ์ € ์˜ตํ‹ฐ๋งˆ์ด์ € ํ•จ์ˆ˜์™€ ํ•™์Šต๋ฅ  ์Šค์ผ€์ค„, ๊ทธ๋ฆฌ๊ณ  ์ผ๋ถ€ ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์„ค์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> from transformers import create_optimizer
>>> import tensorflow as tf

>>> batch_size = 16
>>> num_epochs = 5
>>> batches_per_epoch = len(tokenized_imdb["train"]) // batch_size
>>> total_train_steps = int(batches_per_epoch * num_epochs)
>>> optimizer, schedule = create_optimizer(init_lr=2e-5, num_warmup_steps=0, num_train_steps=total_train_steps)
```

๊ทธ๋Ÿฐ ๋‹ค์Œ [`TFAutoModelForSequenceClassification`]์„ ์‚ฌ์šฉํ•˜์—ฌ DistilBERT๋ฅผ ๋กœ๋“œํ•˜๊ณ , ์˜ˆ์ƒ๋˜๋Š” ๋ ˆ์ด๋ธ” ์ˆ˜์™€ ๋ ˆ์ด๋ธ” ๋งคํ•‘์„ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค:

```py
>>> from transformers import TFAutoModelForSequenceClassification

>>> model = TFAutoModelForSequenceClassification.from_pretrained(
...     "distilbert-base-uncased", num_labels=2, id2label=id2label, label2id=label2id
... )
```

[`~transformers.TFPreTrainedModel.prepare_tf_dataset`]์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ์…‹์„ `tf.data.Dataset` ํ˜•์‹์œผ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> tf_train_set = model.prepare_tf_dataset(
...     tokenized_imdb["train"],
...     shuffle=True,
...     batch_size=16,
...     collate_fn=data_collator,
... )

>>> tf_validation_set = model.prepare_tf_dataset(
...     tokenized_imdb["test"],
...     shuffle=False,
...     batch_size=16,
...     collate_fn=data_collator,
... 
)
```

[`compile`](https://keras.io/api/models/model_training_apis/#compile-method)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ›ˆ๋ จํ•  ๋ชจ๋ธ์„ ๊ตฌ์„ฑํ•ฉ๋‹ˆ๋‹ค:

```py
>>> import tensorflow as tf

>>> model.compile(optimizer=optimizer)
```

ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ์„ค์ •ํ•ด์•ผ ํ•  ๋งˆ์ง€๋ง‰ ๋‘ ๊ฐ€์ง€๋Š” ์˜ˆ์ธก์—์„œ ์ •ํ™•๋„๋ฅผ ๊ณ„์‚ฐํ•˜๊ณ , ๋ชจ๋ธ์„ Hub์— ์—…๋กœ๋“œํ•  ๋ฐฉ๋ฒ•์„ ์ œ๊ณตํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋ชจ๋‘ [Keras callbacks](../main_classes/keras_callbacks)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ˆ˜ํ–‰๋ฉ๋‹ˆ๋‹ค.

[`~transformers.KerasMetricCallback`]์— `compute_metrics`๋ฅผ ์ „๋‹ฌํ•˜์—ฌ ์ •ํ™•๋„๋ฅผ ๊ณ„์‚ฐํ•ฉ๋‹ˆ๋‹ค:

```py
>>> from transformers.keras_callbacks import KerasMetricCallback

>>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set)
```

[`~transformers.PushToHubCallback`]์—์„œ ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์—…๋กœ๋“œํ•  ์œ„์น˜๋ฅผ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> from transformers.keras_callbacks import PushToHubCallback

>>> push_to_hub_callback = PushToHubCallback(
...     output_dir="my_awesome_model",
...     tokenizer=tokenizer,
... )
```

๊ทธ๋Ÿฐ ๋‹ค์Œ ์ฝœ๋ฐฑ์„ ํ•จ๊ป˜ ๋ฌถ์Šต๋‹ˆ๋‹ค:

```py
>>> callbacks = [metric_callback, push_to_hub_callback]
```

๋“œ๋””์–ด, ๋ชจ๋ธ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! [`fit`](https://keras.io/api/models/model_training_apis/#fit-method)์— ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ์…‹, ๊ฒ€์ฆ ๋ฐ์ดํ„ฐ์…‹, ์—ํญ์˜ ์ˆ˜ ๋ฐ ์ฝœ๋ฐฑ์„ ์ „๋‹ฌํ•˜์—ฌ ํŒŒ์ธ ํŠœ๋‹ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=3, callbacks=callbacks)
```

ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด, ๋ชจ๋ธ์ด ์ž๋™์œผ๋กœ Hub์— ์—…๋กœ๋“œ๋˜์–ด ๋ชจ๋“  ์‚ฌ๋žŒ์ด ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค!
</tf>
</frameworkcontent>

<Tip>

ํ…์ŠคํŠธ ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•œ ๋ชจ๋ธ์„ ํŒŒ์ธ ํŠœ๋‹ํ•˜๋Š” ์ž์„ธํ•œ ์˜ˆ์ œ๋Š” ๋‹ค์Œ [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb) ๋˜๋Š” [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”.

</Tip>

## ์ถ”๋ก [[inference]]

์ข‹์•„์š”, ์ด์ œ ๋ชจ๋ธ์„ ํŒŒ์ธ ํŠœ๋‹ํ–ˆ์œผ๋‹ˆ ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค!

์ถ”๋ก ์„ ์ˆ˜ํ–‰ํ•˜๊ณ ์ž ํ•˜๋Š” ํ…์ŠคํŠธ๋ฅผ ๊ฐ€์ ธ์™€๋ด…์‹œ๋‹ค:

```py
>>> text = "This was a masterpiece. Not completely faithful to the books, but enthralling from beginning to end. Might be my favorite of the three."
```

ํŒŒ์ธ ํŠœ๋‹๋œ ๋ชจ๋ธ๋กœ ์ถ”๋ก ์„ ์‹œ๋„ํ•˜๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ [`pipeline`]์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ๋กœ ๊ฐ์ • ๋ถ„์„์„ ์œ„ํ•œ `pipeline`์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ณ , ํ…์ŠคํŠธ๋ฅผ ์ „๋‹ฌํ•ด๋ณด์„ธ์š”:

```py
>>> from transformers import pipeline

>>> classifier = pipeline("sentiment-analysis", model="stevhliu/my_awesome_model")
>>> classifier(text)
[{'label': 'POSITIVE', 'score': 0.9994940757751465}]
```

์›ํ•œ๋‹ค๋ฉด, `pipeline`์˜ ๊ฒฐ๊ณผ๋ฅผ ์ˆ˜๋™์œผ๋กœ ๋ณต์ œํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค.

<frameworkcontent>
<pt>
ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•˜๊ณ  PyTorch ํ…์„œ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค.

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_model")
>>> inputs = tokenizer(text, return_tensors="pt")
```

์ž…๋ ฅ์„ ๋ชจ๋ธ์— ์ „๋‹ฌํ•˜๊ณ  `logits`์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> import torch
>>> from transformers import AutoModelForSequenceClassification

>>> model = AutoModelForSequenceClassification.from_pretrained("stevhliu/my_awesome_model")
>>> with torch.no_grad():
...     
logits = model(**inputs).logits ``` ๊ฐ€์žฅ ๋†’์€ ํ™•๋ฅ ์„ ๊ฐ€์ง„ ํด๋ž˜์Šค๋ฅผ ๋ชจ๋ธ์˜ `id2label` ๋งคํ•‘์„ ์‚ฌ์šฉํ•˜์—ฌ ํ…์ŠคํŠธ ๋ ˆ์ด๋ธ”๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> predicted_class_id = logits.argmax().item() >>> model.config.id2label[predicted_class_id] 'POSITIVE' ``` </pt> <tf> ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•˜๊ณ  TensorFlow ํ…์„œ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_model") >>> inputs = tokenizer(text, return_tensors="tf") ``` ์ž…๋ ฅ๊ฐ’์„ ๋ชจ๋ธ์— ์ „๋‹ฌํ•˜๊ณ  `logits`์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import TFAutoModelForSequenceClassification >>> model = TFAutoModelForSequenceClassification.from_pretrained("stevhliu/my_awesome_model") >>> logits = model(**inputs).logits ``` ๊ฐ€์žฅ ๋†’์€ ํ™•๋ฅ ์„ ๊ฐ€์ง„ ํด๋ž˜์Šค๋ฅผ ๋ชจ๋ธ์˜ `id2label` ๋งคํ•‘์„ ์‚ฌ์šฉํ•˜์—ฌ ํ…์ŠคํŠธ ๋ ˆ์ด๋ธ”๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0]) >>> model.config.id2label[predicted_class_id] 'POSITIVE' ``` </tf> </frameworkcontent>
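์œ„ ์ˆ˜๋™ ์ถ”๋ก ์˜ ๋งˆ์ง€๋ง‰ ๋‹จ๊ณ„, ์ฆ‰ ๊ฐ€์žฅ ํฐ ๋กœ์ง“์˜ ์ธ๋ฑ์Šค๋ฅผ ์ฐพ์•„ `id2label` ๋งคํ•‘์œผ๋กœ ํ…์ŠคํŠธ ๋ ˆ์ด๋ธ”์„ ์–ป๋Š” ๊ณผ์ •์„ ํ”„๋ ˆ์ž„์›Œํฌ ์—†์ด ์ˆœ์ˆ˜ ํŒŒ์ด์ฌ์œผ๋กœ ์Šค์ผ€์น˜ํ•˜๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค. ์•„๋ž˜ ๋กœ์ง“ ๊ฐ’์€ ์„ค๋ช…์„ ์œ„ํ•ด ์ž„์˜๋กœ ๊ฐ€์ •ํ•œ ์˜ˆ์‹œ์ž…๋‹ˆ๋‹ค:

```py
# argmax + id2label ๋งคํ•‘ ์Šค์ผ€์น˜: ๊ฐ€์žฅ ํฐ ๋กœ์ง“์˜ ์ธ๋ฑ์Šค๋ฅผ ํ…์ŠคํŠธ ๋ ˆ์ด๋ธ”๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค.
def logits_to_label(logits, id2label):
    predicted_class_id = max(range(len(logits)), key=lambda i: logits[i])
    return id2label[predicted_class_id]

id2label = {0: "NEGATIVE", 1: "POSITIVE"}
print(logits_to_label([-1.3, 2.7], id2label))  # POSITIVE
```

`logits.argmax().item()`๊ณผ `model.config.id2label[...]`๊ฐ€ ํ•˜๋Š” ์ผ์ด ์ •ํ™•ํžˆ ์ด ๋‘ ๋‹จ๊ณ„์ž…๋‹ˆ๋‹ค.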
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๋ฒˆ์—ญ[[translation]] [[open-in-colab]] <Youtube id="1JvfrvZgi6c"/> ๋ฒˆ์—ญ์€ ํ•œ ์–ธ์–ด๋กœ ๋œ ์‹œํ€€์Šค๋ฅผ ๋‹ค๋ฅธ ์–ธ์–ด๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ๋ฒˆ์—ญ์ด๋‚˜ ์š”์•ฝ์€ ์ž…๋ ฅ์„ ๋ฐ›์•„ ์ผ๋ จ์˜ ์ถœ๋ ฅ์„ ๋ฐ˜ํ™˜ํ•˜๋Š” ๊ฐ•๋ ฅํ•œ ํ”„๋ ˆ์ž„์›Œํฌ์ธ ์‹œํ€€์Šค-ํˆฌ-์‹œํ€€์Šค ๋ฌธ์ œ๋กœ ๊ตฌ์„ฑํ•  ์ˆ˜ ์žˆ๋Š” ๋Œ€ํ‘œ์ ์ธ ํƒœ์Šคํฌ์ž…๋‹ˆ๋‹ค. ๋ฒˆ์—ญ ์‹œ์Šคํ…œ์€ ์ผ๋ฐ˜์ ์œผ๋กœ ๋‹ค๋ฅธ ์–ธ์–ด๋กœ ๋œ ํ…์ŠคํŠธ ๊ฐ„์˜ ๋ฒˆ์—ญ์— ์‚ฌ์šฉ๋˜์ง€๋งŒ, ์Œ์„ฑ ๊ฐ„์˜ ํ†ต์—ญ์ด๋‚˜ ํ…์ŠคํŠธ-์Œ์„ฑ ๋˜๋Š” ์Œ์„ฑ-ํ…์ŠคํŠธ์™€ ๊ฐ™์€ ์กฐํ•ฉ์—๋„ ์‚ฌ์šฉ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ ํ•™์Šตํ•  ๋‚ด์šฉ์€: 1. ์˜์–ด ํ…์ŠคํŠธ๋ฅผ ํ”„๋ž‘์Šค์–ด๋กœ ๋ฒˆ์—ญํ•˜๊ธฐ ์œ„ํ•ด [T5](https://huggingface.co/t5-small) ๋ชจ๋ธ์„ OPUS Books ๋ฐ์ดํ„ฐ์„ธํŠธ์˜ ์˜์–ด-ํ”„๋ž‘์Šค์–ด ํ•˜์œ„ ์ง‘ํ•ฉ์œผ๋กœ ํŒŒ์ธํŠœ๋‹ํ•˜๋Š” ๋ฐฉ๋ฒ•๊ณผ 2. ํŒŒ์ธํŠœ๋‹๋œ ๋ชจ๋ธ์„ ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค. <Tip> ์ด ํƒœ์Šคํฌ ๊ฐ€์ด๋“œ๋Š” ์•„๋ž˜ ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜์—๋„ ์‘์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
<!--This tip is automatically generated by `make fix-copies`, do not fill manually!--> [BART](../model_doc/bart), [BigBird-Pegasus](../model_doc/bigbird_pegasus), [Blenderbot](../model_doc/blenderbot), [BlenderbotSmall](../model_doc/blenderbot-small), [Encoder decoder](../model_doc/encoder-decoder), [FairSeq Machine-Translation](../model_doc/fsmt), [GPTSAN-japanese](../model_doc/gptsan-japanese), [LED](../model_doc/led), [LongT5](../model_doc/longt5), [M2M100](../model_doc/m2m_100), [Marian](../model_doc/marian), [mBART](../model_doc/mbart), [MT5](../model_doc/mt5), [MVP](../model_doc/mvp), [NLLB](../model_doc/nllb), [NLLB-MOE](../model_doc/nllb-moe), [Pegasus](../model_doc/pegasus), [PEGASUS-X](../model_doc/pegasus_x), [PLBart](../model_doc/plbart), [ProphetNet](../model_doc/prophetnet), [SwitchTransformers](../model_doc/switch_transformers), [T5](../model_doc/t5), [XLM-ProphetNet](../model_doc/xlm-prophetnet) <!--End of the generated tip--> </Tip> ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ๋ชจ๋‘ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install transformers datasets evaluate sacrebleu ``` ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ์ปค๋ฎค๋‹ˆํ‹ฐ์™€ ๊ณต์œ ํ•  ์ˆ˜ ์žˆ๋„๋ก Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ์ƒˆ๋กœ์šด ์ฐฝ์ด ํ‘œ์‹œ๋˜๋ฉด ํ† ํฐ์„ ์ž…๋ ฅํ•˜์—ฌ ๋กœ๊ทธ์ธํ•˜์„ธ์š”. ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## OPUS Books ๋ฐ์ดํ„ฐ์„ธํŠธ ๊ฐ€์ ธ์˜ค๊ธฐ[[load-opus-books-dataset]] ๋จผ์ € ๐Ÿค— Datasets ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ [OPUS Books](https://huggingface.co/datasets/opus_books) ๋ฐ์ดํ„ฐ์„ธํŠธ์˜ ์˜์–ด-ํ”„๋ž‘์Šค์–ด ํ•˜์œ„ ์ง‘ํ•ฉ์„ ๊ฐ€์ ธ์˜ค์„ธ์š”. ```py >>> from datasets import load_dataset >>> books = load_dataset("opus_books", "en-fr") ``` ๋ฐ์ดํ„ฐ์„ธํŠธ๋ฅผ [`~datasets.Dataset.train_test_split`] ๋ฉ”์„œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ›ˆ๋ จ ๋ฐ ํ…Œ์ŠคํŠธ ๋ฐ์ดํ„ฐ๋กœ ๋ถ„ํ• ํ•˜์„ธ์š”. ```py >>> books = books["train"].train_test_split(test_size=0.2) ``` ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ์—์„œ ์˜ˆ์‹œ๋ฅผ ์‚ดํŽด๋ณผ๊นŒ์š”? 
```py
>>> books["train"][0]
{'id': '90560',
 'translation': {'en': 'But this lofty plateau measured only a few fathoms, and soon we reentered Our Element.',
  'fr': 'Mais ce plateau élevé ne mesurait que quelques toises, et bientôt nous fûmes rentrés dans notre élément.'}}
```

๋ฐ˜ํ™˜๋œ ๋”•์…”๋„ˆ๋ฆฌ์˜ `translation` ํ‚ค๊ฐ€ ํ…์ŠคํŠธ์˜ ์˜์–ด, ํ”„๋ž‘์Šค์–ด ๋ฒ„์ „์„ ํฌํ•จํ•˜๊ณ  ์žˆ๋Š” ๊ฒƒ์„ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

## ์ „์ฒ˜๋ฆฌ[[preprocess]]

<Youtube id="XAR8jnZZuUs"/>

๋‹ค์Œ ๋‹จ๊ณ„๋กœ ์˜์–ด-ํ”„๋ž‘์Šค์–ด ์Œ์„ ์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด T5 ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๊ฐ€์ ธ์˜ค์„ธ์š”.

```py
>>> from transformers import AutoTokenizer

>>> checkpoint = "t5-small"
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
```

๋งŒ๋“ค ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋Š” ์•„๋ž˜ ์š”๊ตฌ์‚ฌํ•ญ์„ ์ถฉ์กฑํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค:

1. T5๊ฐ€ ๋ฒˆ์—ญ ํƒœ์Šคํฌ์ž„์„ ์ธ์ง€ํ•  ์ˆ˜ ์žˆ๋„๋ก ์ž…๋ ฅ ์•ž์— ํ”„๋กฌํ”„ํŠธ๋ฅผ ์ถ”๊ฐ€ํ•˜์„ธ์š”. ์—ฌ๋Ÿฌ NLP ํƒœ์Šคํฌ๋ฅผ ํ•  ์ˆ˜ ์žˆ๋Š” ๋ชจ๋ธ ์ค‘ ์ผ๋ถ€๋Š” ์ด๋ ‡๊ฒŒ ํƒœ์Šคํฌ ํ”„๋กฌํ”„ํŠธ๋ฅผ ๋ฏธ๋ฆฌ ์ค˜์•ผ ํ•ฉ๋‹ˆ๋‹ค.
2. ์›์–ด(์˜์–ด)์™€ ๋ฒˆ์—ญ์–ด(ํ”„๋ž‘์Šค์–ด)๋ฅผ ๋ณ„๋„๋กœ ํ† ํฐํ™”ํ•˜์„ธ์š”. ์˜์–ด ์–ดํœ˜๋กœ ์‚ฌ์ „ ํ•™์Šต๋œ ํ† ํฌ๋‚˜์ด์ €๋กœ ํ”„๋ž‘์Šค์–ด ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•  ์ˆ˜๋Š” ์—†๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค.
3. `max_length` ๋งค๊ฐœ๋ณ€์ˆ˜๋กœ ์„ค์ •ํ•œ ์ตœ๋Œ€ ๊ธธ์ด๋ณด๋‹ค ๊ธธ์ง€ ์•Š๋„๋ก ์‹œํ€€์Šค๋ฅผ truncateํ•˜์„ธ์š”.

```py
>>> source_lang = "en"
>>> target_lang = "fr"
>>> prefix = "translate English to French: "

>>> def preprocess_function(examples):
...     inputs = [prefix + example[source_lang] for example in examples["translation"]]
...     targets = [example[target_lang] for example in examples["translation"]]
...     model_inputs = tokenizer(inputs, text_target=targets, max_length=128, truncation=True)
...     return model_inputs
```

์ „์ฒด ๋ฐ์ดํ„ฐ์„ธํŠธ์— ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์ ์šฉํ•˜๋ ค๋ฉด ๐Ÿค— Datasets์˜ [`~datasets.Dataset.map`] ๋ฉ”์„œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”.
`map` ํ•จ์ˆ˜์˜ ์†๋„๋ฅผ ๋†’์ด๋ ค๋ฉด `batched=True`๋ฅผ ์„ค์ •ํ•˜์—ฌ ๋ฐ์ดํ„ฐ์„ธํŠธ์˜ ์—ฌ๋Ÿฌ ์š”์†Œ๋ฅผ ํ•œ ๋ฒˆ์— ์ฒ˜๋ฆฌํ•˜๋Š” ๋ฐฉ๋ฒ•์ด ์žˆ์Šต๋‹ˆ๋‹ค.

```py
>>> tokenized_books = books.map(preprocess_function, batched=True)
```

์ด์ œ [`DataCollatorForSeq2Seq`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์˜ˆ์ œ ๋ฐฐ์น˜๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ์„ธํŠธ์˜ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ์ „๋ถ€๋ฅผ paddingํ•˜๋Š” ๋Œ€์‹ , ๋ฐ์ดํ„ฐ ์ •๋ ฌ ์ค‘ ๊ฐ ๋ฐฐ์น˜์˜ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ๋ฌธ์žฅ์„ *๋™์ ์œผ๋กœ padding*ํ•˜๋Š” ๊ฒƒ์ด ๋” ํšจ์œจ์ ์ž…๋‹ˆ๋‹ค.

<frameworkcontent>
<pt>
```py
>>> from transformers import DataCollatorForSeq2Seq

>>> data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint)
```
</pt>
<tf>
```py
>>> from transformers import DataCollatorForSeq2Seq

>>> data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint, return_tensors="tf")
```
</tf>
</frameworkcontent>

## ํ‰๊ฐ€[[evaluate]]

ํ›ˆ๋ จ ์ค‘์— ๋ฉ”ํŠธ๋ฆญ์„ ํฌํ•จํ•˜๋ฉด ๋ชจ๋ธ์˜ ์„ฑ๋Šฅ์„ ํ‰๊ฐ€ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋ฉ๋‹ˆ๋‹ค. ๐Ÿค— [Evaluate](https://huggingface.co/docs/evaluate/index) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋กœ ํ‰๊ฐ€ ๋ฐฉ๋ฒ•(evaluation method)์„ ๋น ๋ฅด๊ฒŒ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ˜„์žฌ ํƒœ์Šคํฌ์— ์ ํ•ฉํ•œ SacreBLEU ๋ฉ”ํŠธ๋ฆญ์„ ๊ฐ€์ ธ์˜ค์„ธ์š”. (๋ฉ”ํŠธ๋ฆญ์„ ๊ฐ€์ ธ์˜ค๊ณ  ๊ณ„์‚ฐํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด ์ž์„ธํžˆ ์•Œ์•„๋ณด๋ ค๋ฉด ๐Ÿค— Evaluate [๋‘˜๋Ÿฌ๋ณด๊ธฐ](https://huggingface.co/docs/evaluate/a_quick_tour)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”):

```py
>>> import evaluate

>>> metric = evaluate.load("sacrebleu")
```

๊ทธ๋Ÿฐ ๋‹ค์Œ [`~evaluate.EvaluationModule.compute`]์— ์˜ˆ์ธก๊ฐ’๊ณผ ๋ ˆ์ด๋ธ”์„ ์ „๋‹ฌํ•˜์—ฌ SacreBLEU ์ ์ˆ˜๋ฅผ ๊ณ„์‚ฐํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ์ƒ์„ฑํ•˜์„ธ์š”:

```py
>>> import numpy as np

>>> def postprocess_text(preds, labels):
...     preds = [pred.strip() for pred in preds]
...     labels = [[label.strip()] for label in labels]

...     return preds, labels

>>> def compute_metrics(eval_preds):
...     preds, labels = eval_preds
...     if isinstance(preds, tuple):
...         preds = preds[0]
...     
decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)

...     labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
...     decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)

...     decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels)

...     result = metric.compute(predictions=decoded_preds, references=decoded_labels)
...     result = {"bleu": result["score"]}

...     prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds]
...     result["gen_len"] = np.mean(prediction_lens)
...     result = {k: round(v, 4) for k, v in result.items()}
...     return result
```

์ด์ œ `compute_metrics` ํ•จ์ˆ˜๋Š” ์ค€๋น„๋˜์—ˆ๊ณ , ํ›ˆ๋ จ ๊ณผ์ •์„ ์„ค์ •ํ•  ๋•Œ ๋‹ค์‹œ ์‚ดํŽด๋ณผ ์˜ˆ์ •์ž…๋‹ˆ๋‹ค.

## ํ›ˆ๋ จ[[train]]

<frameworkcontent>
<pt>
<Tip>

[`Trainer`]๋กœ ๋ชจ๋ธ์„ ํŒŒ์ธํŠœ๋‹ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด [์—ฌ๊ธฐ](../training#train-with-pytorch-trainer)์—์„œ ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ์„ ์‚ดํŽด๋ณด์‹œ๊ธฐ ๋ฐ”๋ž๋‹ˆ๋‹ค!

</Tip>

๋ชจ๋ธ์„ ํ›ˆ๋ จ์‹œํ‚ฌ ์ค€๋น„๊ฐ€ ๋˜์—ˆ๊ตฐ์š”! [`AutoModelForSeq2SeqLM`]์œผ๋กœ T5๋ฅผ ๋กœ๋“œํ•˜์„ธ์š”:

```py
>>> from transformers import AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments, Seq2SeqTrainer

>>> model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
```

์ด์ œ ์„ธ ๋‹จ๊ณ„๋งŒ ๊ฑฐ์น˜๋ฉด ๋์ž…๋‹ˆ๋‹ค:

1. [`Seq2SeqTrainingArguments`]์—์„œ ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ •์˜ํ•˜์„ธ์š”. ์œ ์ผํ•œ ํ•„์ˆ˜ ๋งค๊ฐœ๋ณ€์ˆ˜๋Š” ๋ชจ๋ธ์„ ์ €์žฅํ•  ์œ„์น˜์ธ `output_dir`์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ Hub์— ํ‘ธ์‹œํ•˜๊ธฐ ์œ„ํ•ด `push_to_hub=True`๋กœ ์„ค์ •ํ•˜์„ธ์š”. (๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๋ ค๋ฉด Hugging Face์— ๋กœ๊ทธ์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค.) [`Trainer`]๋Š” ์—ํญ์ด ๋๋‚  ๋•Œ๋งˆ๋‹ค SacreBLEU ๋ฉ”ํŠธ๋ฆญ์„ ํ‰๊ฐ€ํ•˜๊ณ  ํ›ˆ๋ จ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์ €์žฅํ•ฉ๋‹ˆ๋‹ค.
2. [`Seq2SeqTrainer`]์— ํ›ˆ๋ จ ์ธ์ˆ˜๋ฅผ ์ „๋‹ฌํ•˜์„ธ์š”. ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ ์„ธํŠธ, ํ† ํฌ๋‚˜์ด์ €, data collator ๋ฐ `compute_metrics` ํ•จ์ˆ˜๋„ ํ•จ๊ป˜ ์ „๋‹ฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค.
3. 
[`~Trainer.train`]์„ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ํŒŒ์ธํŠœ๋‹ํ•˜์„ธ์š”.

```py
>>> training_args = Seq2SeqTrainingArguments(
...     output_dir="my_awesome_opus_books_model",
...     evaluation_strategy="epoch",
...     learning_rate=2e-5,
...     per_device_train_batch_size=16,
...     per_device_eval_batch_size=16,
...     weight_decay=0.01,
...     save_total_limit=3,
...     num_train_epochs=2,
...     predict_with_generate=True,
...     fp16=True,
...     push_to_hub=True,
... )

>>> trainer = Seq2SeqTrainer(
...     model=model,
...     args=training_args,
...     train_dataset=tokenized_books["train"],
...     eval_dataset=tokenized_books["test"],
...     tokenizer=tokenizer,
...     data_collator=data_collator,
...     compute_metrics=compute_metrics,
... )

>>> trainer.train()
```

ํ•™์Šต์ด ์™„๋ฃŒ๋˜๋ฉด [`~transformers.Trainer.push_to_hub`] ๋ฉ”์„œ๋“œ๋กœ ๋ชจ๋ธ์„ Hub์— ๊ณต์œ ํ•˜์„ธ์š”. ์ด๋Ÿฌ๋ฉด ๋ˆ„๊ตฌ๋‚˜ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๊ฒŒ ๋ฉ๋‹ˆ๋‹ค:

```py
>>> trainer.push_to_hub()
```
</pt>
<tf>
<Tip>

Keras๋กœ ๋ชจ๋ธ์„ ํŒŒ์ธํŠœ๋‹ํ•˜๋Š” ๋ฐฉ๋ฒ•์ด ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด, [์—ฌ๊ธฐ](../training#train-a-tensorflow-model-with-keras)์—์„œ ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ์„ ์‚ดํŽด๋ณด์‹œ๊ธฐ ๋ฐ”๋ž๋‹ˆ๋‹ค!

</Tip>

TensorFlow์—์„œ ๋ชจ๋ธ์„ ํŒŒ์ธํŠœ๋‹ํ•˜๋ ค๋ฉด ์šฐ์„  optimizer ํ•จ์ˆ˜, ํ•™์Šต๋ฅ  ์Šค์ผ€์ค„ ๋“ฑ์˜ ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์„ค์ •ํ•˜์„ธ์š”:

```py
>>> from transformers import AdamWeightDecay

>>> optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01)
```

์ด์ œ [`TFAutoModelForSeq2SeqLM`]๋กœ T5๋ฅผ ๊ฐ€์ ธ์˜ค์„ธ์š”:

```py
>>> from transformers import TFAutoModelForSeq2SeqLM

>>> model = TFAutoModelForSeq2SeqLM.from_pretrained(checkpoint)
```

[`~transformers.TFPreTrainedModel.prepare_tf_dataset`]๋กœ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ `tf.data.Dataset` ํ˜•์‹์œผ๋กœ ๋ณ€ํ™˜ํ•˜์„ธ์š”:

```py
>>> tf_train_set = model.prepare_tf_dataset(
...     tokenized_books["train"],
...     shuffle=True,
...     batch_size=16,
...     collate_fn=data_collator,
... )

>>> tf_test_set = model.prepare_tf_dataset(
...     tokenized_books["test"],
...     shuffle=False,
...     
batch_size=16,
...     collate_fn=data_collator,
... )
```

ํ›ˆ๋ จํ•˜๊ธฐ ์œ„ํ•ด [`compile`](https://keras.io/api/models/model_training_apis/#compile-method) ๋ฉ”์„œ๋“œ๋กœ ๋ชจ๋ธ์„ ๊ตฌ์„ฑํ•˜์„ธ์š”:

```py
>>> import tensorflow as tf

>>> model.compile(optimizer=optimizer)
```

ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ์˜ˆ์ธก๊ฐ’์œผ๋กœ๋ถ€ํ„ฐ SacreBLEU ๋ฉ”ํŠธ๋ฆญ์„ ๊ณ„์‚ฐํ•˜๋Š” ๋ฐฉ๋ฒ•๊ณผ ๋ชจ๋ธ์„ Hub์— ์—…๋กœ๋“œํ•˜๋Š” ๋ฐฉ๋ฒ• ๋‘ ๊ฐ€์ง€๋ฅผ ๋ฏธ๋ฆฌ ์„ค์ •ํ•ด๋‘ฌ์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋‘˜ ๋‹ค [Keras callbacks](../main_classes/keras_callbacks)๋กœ ๊ตฌํ˜„ํ•˜์„ธ์š”.

[`~transformers.KerasMetricCallback`]์— `compute_metrics` ํ•จ์ˆ˜๋ฅผ ์ „๋‹ฌํ•˜์„ธ์š”.

```py
>>> from transformers.keras_callbacks import KerasMetricCallback

>>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_test_set)
```

๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์—…๋กœ๋“œํ•  ์œ„์น˜๋ฅผ [`~transformers.PushToHubCallback`]์—์„œ ์ง€์ •ํ•˜์„ธ์š”:

```py
>>> from transformers.keras_callbacks import PushToHubCallback

>>> push_to_hub_callback = PushToHubCallback(
...     output_dir="my_awesome_opus_books_model",
...     tokenizer=tokenizer,
... )
```

์ด์ œ ์ฝœ๋ฐฑ๋“ค์„ ํ•œ๋ฐ๋กœ ๋ฌถ์–ด์ฃผ์„ธ์š”:

```py
>>> callbacks = [metric_callback, push_to_hub_callback]
```

๋“œ๋””์–ด ๋ชจ๋ธ์„ ํ›ˆ๋ จ์‹œํ‚ฌ ๋ชจ๋“  ์ค€๋น„๋ฅผ ๋งˆ์ณค๊ตฐ์š”! ์ด์ œ ํ›ˆ๋ จ ๋ฐ ๊ฒ€์ฆ ๋ฐ์ดํ„ฐ ์„ธํŠธ์— [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) ๋ฉ”์„œ๋“œ๋ฅผ ์—ํญ ์ˆ˜์™€ ๋งŒ๋“ค์–ด๋‘” ์ฝœ๋ฐฑ๊ณผ ํ•จ๊ป˜ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ํŒŒ์ธํŠœ๋‹ํ•˜์„ธ์š”:

```py
>>> model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=callbacks)
```

ํ•™์Šต์ด ์™„๋ฃŒ๋˜๋ฉด ๋ชจ๋ธ์ด ์ž๋™์œผ๋กœ Hub์— ์—…๋กœ๋“œ๋˜๊ณ , ๋ˆ„๊ตฌ๋‚˜ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๊ฒŒ ๋ฉ๋‹ˆ๋‹ค!
</tf> </frameworkcontent> <Tip> ๋ฒˆ์—ญ์„ ์œ„ํ•ด ๋ชจ๋ธ์„ ํŒŒ์ธํŠœ๋‹ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ๋ณด๋‹ค ์ž์„ธํ•œ ์˜ˆ์ œ๋Š” ํ•ด๋‹น [PyTorch ๋…ธํŠธ๋ถ](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation.ipynb) ๋˜๋Š” [TensorFlow ๋…ธํŠธ๋ถ](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation-tf.ipynb)์„ ์ฐธ์กฐํ•˜์„ธ์š”. </Tip> ## ์ถ”๋ก [[inference]] ์ข‹์•„์š”, ์ด์ œ ๋ชจ๋ธ์„ ํŒŒ์ธํŠœ๋‹ํ–ˆ์œผ๋‹ˆ ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ๋‹ค๋ฅธ ์–ธ์–ด๋กœ ๋ฒˆ์—ญํ•˜๊ณ  ์‹ถ์€ ํ…์ŠคํŠธ๋ฅผ ์จ๋ณด์„ธ์š”. T5์˜ ๊ฒฝ์šฐ ์›ํ•˜๋Š” ํƒœ์Šคํฌ๋ฅผ ์ž…๋ ฅ์˜ ์ ‘๋‘์‚ฌ๋กœ ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ์˜์–ด์—์„œ ํ”„๋ž‘์Šค์–ด๋กœ ๋ฒˆ์—ญํ•˜๋Š” ๊ฒฝ์šฐ, ์•„๋ž˜์™€ ๊ฐ™์€ ์ ‘๋‘์‚ฌ๊ฐ€ ์ถ”๊ฐ€๋ฉ๋‹ˆ๋‹ค: ```py >>> text = "translate English to French: Legumes share resources with nitrogen-fixing bacteria." ``` ํŒŒ์ธํŠœ๋‹๋œ ๋ชจ๋ธ๋กœ ์ถ”๋ก ํ•˜๊ธฐ์— ์ œ์ผ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ [`pipeline`]์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ํ•ด๋‹น ๋ชจ๋ธ๋กœ ๋ฒˆ์—ญ `pipeline`์„ ๋งŒ๋“  ๋’ค, ํ…์ŠคํŠธ๋ฅผ ์ „๋‹ฌํ•˜์„ธ์š”: ```py >>> from transformers import pipeline >>> translator = pipeline("translation", model="my_awesome_opus_books_model") >>> translator(text) [{'translation_text': 'Legumes partagent des ressources avec des bactรฉries azotantes.'}] ``` ์›ํ•œ๋‹ค๋ฉด `pipeline`์˜ ๊ฒฐ๊ณผ๋ฅผ ์ง์ ‘ ๋ณต์ œํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: <frameworkcontent> <pt> ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•˜๊ณ  `input_ids`๋ฅผ PyTorch ํ…์„œ๋กœ ๋ฐ˜ํ™˜ํ•˜์„ธ์š”: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_opus_books_model") >>> inputs = tokenizer(text, return_tensors="pt").input_ids ``` [`~transformers.generation_utils.GenerationMixin.generate`] ๋ฉ”์„œ๋“œ๋กœ ๋ฒˆ์—ญ์„ ์ƒ์„ฑํ•˜์„ธ์š”. ๋‹ค์–‘ํ•œ ํ…์ŠคํŠธ ์ƒ์„ฑ ์ „๋žต ๋ฐ ์ƒ์„ฑ์„ ์ œ์–ดํ•˜๊ธฐ ์œ„ํ•œ ๋งค๊ฐœ๋ณ€์ˆ˜์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [Text Generation](../main_classes/text_generation) API๋ฅผ ์‚ดํŽด๋ณด์‹œ๊ธฐ ๋ฐ”๋ž๋‹ˆ๋‹ค. 
```py
>>> from transformers import AutoModelForSeq2SeqLM

>>> model = AutoModelForSeq2SeqLM.from_pretrained("my_awesome_opus_books_model")
>>> outputs = model.generate(inputs, max_new_tokens=40, do_sample=True, top_k=30, top_p=0.95)
```

์ƒ์„ฑ๋œ ํ† ํฐ ID๋“ค์„ ๋‹ค์‹œ ํ…์ŠคํŠธ๋กœ ๋””์ฝ”๋”ฉํ•˜์„ธ์š”:

```py
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
'Les lignées partagent des ressources avec des bactéries enfixant l'azote.'
```
</pt>
<tf>
ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•˜๊ณ  `input_ids`๋ฅผ TensorFlow ํ…์„œ๋กœ ๋ฐ˜ํ™˜ํ•˜์„ธ์š”:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_opus_books_model")
>>> inputs = tokenizer(text, return_tensors="tf").input_ids
```

[`~transformers.generation_tf_utils.TFGenerationMixin.generate`] ๋ฉ”์„œ๋“œ๋กœ ๋ฒˆ์—ญ์„ ์ƒ์„ฑํ•˜์„ธ์š”. ๋‹ค์–‘ํ•œ ํ…์ŠคํŠธ ์ƒ์„ฑ ์ „๋žต ๋ฐ ์ƒ์„ฑ์„ ์ œ์–ดํ•˜๊ธฐ ์œ„ํ•œ ๋งค๊ฐœ๋ณ€์ˆ˜์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [Text Generation](../main_classes/text_generation) API๋ฅผ ์‚ดํŽด๋ณด์‹œ๊ธฐ ๋ฐ”๋ž๋‹ˆ๋‹ค.

```py
>>> from transformers import TFAutoModelForSeq2SeqLM

>>> model = TFAutoModelForSeq2SeqLM.from_pretrained("my_awesome_opus_books_model")
>>> outputs = model.generate(inputs, max_new_tokens=40, do_sample=True, top_k=30, top_p=0.95)
```

์ƒ์„ฑ๋œ ํ† ํฐ ID๋“ค์„ ๋‹ค์‹œ ํ…์ŠคํŠธ๋กœ ๋””์ฝ”๋”ฉํ•˜์„ธ์š”:

```py
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
'Les lugumes partagent les ressources avec des bactéries fixatrices d'azote.'
```
</tf>
</frameworkcontent>
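์•ž์„  ํ‰๊ฐ€ ๋‹จ๊ณ„์—์„œ ์‚ฌ์šฉํ•œ SacreBLEU๋Š” ์˜ˆ์ธก ๋ฌธ์žฅ๊ณผ ์ฐธ์กฐ ๋ฌธ์žฅ ์‚ฌ์ด์˜ n-gram ์ •๋ฐ€๋„(precision)๋ฅผ ๋ฐ”ํƒ•์œผ๋กœ ํ•ฉ๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ์‹ค์ œ SacreBLEU ์•Œ๊ณ ๋ฆฌ์ฆ˜(ํ† ํฐํ™” ๊ทœ์น™, 4-gram๊นŒ์ง€์˜ ๊ธฐํ•˜ํ‰๊ท , ๊ฐ„๊ฒฐ์„ฑ ํŽ˜๋„ํ‹ฐ ๋“ฑ)์„ ํฌ๊ฒŒ ๋‹จ์ˆœํ™”ํ•œ, ๊ฐœ๋… ์ดํ•ด์šฉ ์œ ๋‹ˆ๊ทธ๋žจ ์ •๋ฐ€๋„ ์Šค์ผ€์น˜์ž…๋‹ˆ๋‹ค:

```py
from collections import Counter

# ์œ ๋‹ˆ๊ทธ๋žจ ์ •๋ฐ€๋„ ์Šค์ผ€์น˜: ์˜ˆ์ธก ํ† ํฐ ์ค‘ ์ฐธ์กฐ์—๋„ ๋“ฑ์žฅํ•˜๋Š” ํ† ํฐ์˜ ๋น„์œจ.
# ๊ฐ™์€ ํ† ํฐ์˜ ์ค‘๋ณต ๋งค์นญ์€ ์ฐธ์กฐ ๋‚ด ๋นˆ๋„๋กœ ์ œํ•œ(clipping)ํ•ฉ๋‹ˆ๋‹ค.
def unigram_precision(prediction, reference):
    pred_tokens = prediction.split()
    if not pred_tokens:
        return 0.0
    ref_counts = Counter(reference.split())
    matched = 0
    for token, count in Counter(pred_tokens).items():
        matched += min(count, ref_counts[token])
    return matched / len(pred_tokens)

print(unigram_precision("the cat sat", "the cat sat"))  # 1.0
```

"the the" ์ฒ˜๋Ÿผ ๊ฐ™์€ ๋‹จ์–ด๋ฅผ ๋ฐ˜๋ณตํ•œ ์˜ˆ์ธก์ด ์ ์ˆ˜๋ฅผ ๋ถ€ํ’€๋ฆฌ์ง€ ๋ชปํ•˜๋„๋ก clipping์„ ์ ์šฉํ•˜๋Š” ๊ฒƒ์ด ํ•ต์‹ฌ ์•„์ด๋””์–ด์ž…๋‹ˆ๋‹ค.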
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# ์ œ๋กœ์ƒท(zero-shot) ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜[[zeroshot-image-classification]]

[[open-in-colab]]

์ œ๋กœ์ƒท(zero-shot) ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜๋Š” ํŠน์ • ์นดํ…Œ๊ณ ๋ฆฌ์˜ ์˜ˆ์‹œ๊ฐ€ ํฌํ•จ๋œ ๋ฐ์ดํ„ฐ๋กœ ํ•™์Šต๋˜์ง€ ์•Š์€ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ด ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜๋ฅผ ์ˆ˜ํ–‰ํ•˜๋Š” ์ž‘์—…์ž…๋‹ˆ๋‹ค.

์ผ๋ฐ˜์ ์œผ๋กœ ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•ด์„œ๋Š” ๋ ˆ์ด๋ธ”์ด ๋‹ฌ๋ฆฐ ํŠน์ • ์ด๋ฏธ์ง€ ๋ฐ์ดํ„ฐ๋กœ ๋ชจ๋ธ ํ•™์Šต์ด ํ•„์š”ํ•˜๋ฉฐ, ์ด ๋ชจ๋ธ์€ ํŠน์ • ์ด๋ฏธ์ง€์˜ ํŠน์ง•์„ ๋ ˆ์ด๋ธ”์— "๋งคํ•‘"ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ํ•™์Šตํ•ฉ๋‹ˆ๋‹ค. ์ƒˆ๋กœ์šด ๋ ˆ์ด๋ธ”์ด ์žˆ๋Š” ๋ถ„๋ฅ˜ ์ž‘์—…์— ์ด๋Ÿฌํ•œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ด์•ผ ํ•˜๋Š” ๊ฒฝ์šฐ์—๋Š”, ๋ชจ๋ธ์„ "์žฌ๋ณด์ •"ํ•˜๊ธฐ ์œ„ํ•ด ๋ฏธ์„ธ ์กฐ์ •์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค.

์ด์™€ ๋Œ€์กฐ์ ์œผ๋กœ, ์ œ๋กœ์ƒท ๋˜๋Š” ๊ฐœ๋ฐฉํ˜• ์–ดํœ˜(open vocabulary) ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜ ๋ชจ๋ธ์€ ์ผ๋ฐ˜์ ์œผ๋กœ ๋Œ€๊ทœ๋ชจ ์ด๋ฏธ์ง€ ๋ฐ์ดํ„ฐ์™€ ํ•ด๋‹น ์„ค๋ช…์— ๋Œ€ํ•ด ํ•™์Šต๋œ ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ(multimodal) ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋ชจ๋ธ์€ ์ œ๋กœ์ƒท ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜๋ฅผ ํฌํ•จํ•œ ๋งŽ์€ ๋‹ค์šด์ŠคํŠธ๋ฆผ ์ž‘์—…์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ์ •๋ ฌ๋œ(aligned) ๋น„์ „ ์–ธ์–ด ํ‘œํ˜„์„ ํ•™์Šตํ•ฉ๋‹ˆ๋‹ค.
์ด๋Š” ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜์— ๋Œ€ํ•œ ๋ณด๋‹ค ์œ ์—ฐํ•œ ์ ‘๊ทผ ๋ฐฉ์‹์œผ๋กœ, ์ถ”๊ฐ€ ํ•™์Šต ๋ฐ์ดํ„ฐ ์—†์ด ์ƒˆ๋กœ์šด ๋ ˆ์ด๋ธ”์ด๋‚˜ ํ•™์Šตํ•˜์ง€ ๋ชปํ•œ ์นดํ…Œ๊ณ ๋ฆฌ์— ๋Œ€ํ•ด ๋ชจ๋ธ์„ ์ผ๋ฐ˜ํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ, ์‚ฌ์šฉ์ž๊ฐ€ ๋Œ€์ƒ ๊ฐœ์ฒด์— ๋Œ€ํ•œ ์ž์œ  ํ˜•์‹์˜ ํ…์ŠคํŠธ ์„ค๋ช…์œผ๋กœ ์ด๋ฏธ์ง€๋ฅผ ๊ฒ€์ƒ‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฒˆ ๊ฐ€์ด๋“œ์—์„œ ๋ฐฐ์šธ ๋‚ด์šฉ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: * ์ œ๋กœ์ƒท ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜ ํŒŒ์ดํ”„๋ผ์ธ ๋งŒ๋“ค๊ธฐ * ์ง์ ‘ ์ œ๋กœ์ƒท ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜ ๋ชจ๋ธ ์ถ”๋ก  ์‹คํ–‰ํ•˜๊ธฐ ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ๋ชจ๋‘ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install -q transformers ``` ## ์ œ๋กœ์ƒท(zero-shot) ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜ ํŒŒ์ดํ”„๋ผ์ธ[[zeroshot-image-classification-pipeline]] [`pipeline`]์„ ํ™œ์šฉํ•˜๋ฉด ๊ฐ€์žฅ ๊ฐ„๋‹จํ•˜๊ฒŒ ์ œ๋กœ์ƒท ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜๋ฅผ ์ง€์›ํ•˜๋Š” ๋ชจ๋ธ๋กœ ์ถ”๋ก ํ•ด๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [Hugging Face Hub์— ์—…๋กœ๋“œ๋œ ์ฒดํฌํฌ์ธํŠธ](https://huggingface.co/models?pipeline_tag=zero-shot-image-classification&sort=downloads)์—์„œ ํŒŒ์ดํ”„๋ผ์ธ์„ ์ธ์Šคํ„ด์Šคํ™”ํ•ฉ๋‹ˆ๋‹ค. ```python >>> from transformers import pipeline >>> checkpoint = "openai/clip-vit-large-patch14" >>> detector = pipeline(model=checkpoint, task="zero-shot-image-classification") ``` ๋‹ค์Œ์œผ๋กœ, ๋ถ„๋ฅ˜ํ•˜๊ณ  ์‹ถ์€ ์ด๋ฏธ์ง€๋ฅผ ์„ ํƒํ•˜์„ธ์š”. ```py >>> from PIL import Image >>> import requests >>> url = "https://unsplash.com/photos/g8oS8-82DxI/download?ixid=MnwxMjA3fDB8MXx0b3BpY3x8SnBnNktpZGwtSGt8fHx8fDJ8fDE2NzgxMDYwODc&force=true&w=640" >>> image = Image.open(requests.get(url, stream=True).raw) >>> image ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/owl.jpg" alt="Photo of an owl"/> </div> ์ด๋ฏธ์ง€์™€ ํ•ด๋‹น ์ด๋ฏธ์ง€์˜ ํ›„๋ณด ๋ ˆ์ด๋ธ”์ธ `candidate_labels`๋ฅผ ํŒŒ์ดํ”„๋ผ์ธ์œผ๋กœ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. 
์—ฌ๊ธฐ์„œ๋Š” ์ด๋ฏธ์ง€๋ฅผ ์ง์ ‘ ์ „๋‹ฌํ•˜์ง€๋งŒ, ์ปดํ“จํ„ฐ์— ์ €์žฅ๋œ ์ด๋ฏธ์ง€์˜ ๊ฒฝ๋กœ๋‚˜ url๋กœ ์ „๋‹ฌํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. `candidate_labels`๋Š” ์ด ์˜ˆ์‹œ์ฒ˜๋Ÿผ ๊ฐ„๋‹จํ•œ ๋‹จ์–ด์ผ ์ˆ˜๋„ ์žˆ๊ณ  ์ข€ ๋” ์„ค๋ช…์ ์ธ ๋‹จ์–ด์ผ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> predictions = classifier(image, candidate_labels=["fox", "bear", "seagull", "owl"]) >>> predictions [{'score': 0.9996670484542847, 'label': 'owl'}, {'score': 0.000199399160919711, 'label': 'seagull'}, {'score': 7.392891711788252e-05, 'label': 'fox'}, {'score': 5.96074532950297e-05, 'label': 'bear'}] ``` ## ์ง์ ‘ ์ œ๋กœ์ƒท(zero-shot) ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜ํ•˜๊ธฐ[[zeroshot-image-classification-by-hand]] ์ด์ œ ์ œ๋กœ์ƒท ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜ ํŒŒ์ดํ”„๋ผ์ธ ์‚ฌ์šฉ ๋ฐฉ๋ฒ•์„ ์‚ดํŽด๋ณด์•˜์œผ๋‹ˆ, ์‹คํ–‰ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. [Hugging Face Hub์— ์—…๋กœ๋“œ๋œ ์ฒดํฌํฌ์ธํŠธ](https://huggingface.co/models?pipeline_tag=zero-shot-image-classification&sort=downloads)์—์„œ ๋ชจ๋ธ๊ณผ ํ”„๋กœ์„ธ์„œ๋ฅผ ๊ฐ€์ ธ์˜ค๋Š” ๊ฒƒ์œผ๋กœ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ๋Š” ์ด์ „๊ณผ ๋™์ผํ•œ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoProcessor, AutoModelForZeroShotImageClassification >>> model = AutoModelForZeroShotImageClassification.from_pretrained(checkpoint) >>> processor = AutoProcessor.from_pretrained(checkpoint) ``` ๋‹ค๋ฅธ ์ด๋ฏธ์ง€๋ฅผ ์‚ฌ์šฉํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ```py >>> from PIL import Image >>> import requests >>> url = "https://unsplash.com/photos/xBRQfR2bqNI/download?ixid=MnwxMjA3fDB8MXxhbGx8fHx8fHx8fHwxNjc4Mzg4ODEx&force=true&w=640" >>> image = Image.open(requests.get(url, stream=True).raw) >>> image ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg" alt="Photo of a car"/> </div> ํ”„๋กœ์„ธ์„œ๋ฅผ ์‚ฌ์šฉํ•ด ๋ชจ๋ธ์˜ ์ž…๋ ฅ์„ ์ค€๋น„ํ•ฉ๋‹ˆ๋‹ค. 
ํ”„๋กœ์„ธ์„œ๋Š” ๋ชจ๋ธ์˜ ์ž…๋ ฅ์œผ๋กœ ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด ์ด๋ฏธ์ง€ ํฌ๊ธฐ๋ฅผ ๋ณ€ํ™˜ํ•˜๊ณ  ์ •๊ทœํ™”ํ•˜๋Š” ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ์™€ ํ…์ŠคํŠธ ์ž…๋ ฅ์„ ์ฒ˜๋ฆฌํ•˜๋Š” ํ† ํฌ๋‚˜์ด์ €๋กœ ๊ตฌ์„ฑ๋ฉ๋‹ˆ๋‹ค.

```py
>>> candidate_labels = ["tree", "car", "bike", "cat"]
>>> inputs = processor(images=image, text=candidate_labels, return_tensors="pt", padding=True)
```

๋ชจ๋ธ์— ์ž…๋ ฅ์„ ์ „๋‹ฌํ•˜๊ณ , ๊ฒฐ๊ณผ๋ฅผ ํ›„์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค:

```py
>>> import torch

>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> logits = outputs.logits_per_image[0]
>>> probs = logits.softmax(dim=-1).numpy()
>>> scores = probs.tolist()

>>> result = [
...     {"score": score, "label": candidate_label}
...     for score, candidate_label in sorted(zip(scores, candidate_labels), key=lambda x: -x[0])
... ]

>>> result
[{'score': 0.998572, 'label': 'car'},
 {'score': 0.0010570387, 'label': 'bike'},
 {'score': 0.0003393686, 'label': 'tree'},
 {'score': 3.1572064e-05, 'label': 'cat'}]
```
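์œ„ ํ›„์ฒ˜๋ฆฌ ๋‹จ๊ณ„(๋กœ์ง“ → softmax ํ™•๋ฅ  → ์ ์ˆ˜ ๋‚ด๋ฆผ์ฐจ์ˆœ ์ •๋ ฌ)๋ฅผ ํ”„๋ ˆ์ž„์›Œํฌ ์—†์ด ์ˆœ์ˆ˜ ํŒŒ์ด์ฌ์œผ๋กœ ์Šค์ผ€์น˜ํ•˜๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค. ์•„๋ž˜ ๋กœ์ง“ ๊ฐ’์€ ์„ค๋ช…์„ ์œ„ํ•ด ์ž„์˜๋กœ ๊ฐ€์ •ํ•œ ์˜ˆ์‹œ์ž…๋‹ˆ๋‹ค:

```py
import math

# softmax๋กœ ๋กœ์ง“์„ ํ™•๋ฅ ๋กœ ๋ณ€ํ™˜ํ•œ ๋’ค ์ ์ˆ˜๊ฐ€ ๋†’์€ ์ˆœ์œผ๋กœ ์ •๋ ฌํ•ฉ๋‹ˆ๋‹ค.
def rank_labels(logits, candidate_labels):
    exps = [math.exp(x - max(logits)) for x in logits]  # ์ˆ˜์น˜ ์•ˆ์ •์„ฑ์„ ์œ„ํ•ด ์ตœ๋Œ€๊ฐ’์„ ๋บŒ
    total = sum(exps)
    probs = [e / total for e in exps]
    return sorted(
        ({"score": p, "label": l} for p, l in zip(probs, candidate_labels)),
        key=lambda x: -x["score"],
    )

ranked = rank_labels([2.0, 0.0], ["car", "cat"])
print(ranked[0]["label"])  # car
```

softmax ์ถœ๋ ฅ์€ ํ•ญ์ƒ ํ•ฉ์ด 1์ธ ํ™•๋ฅ  ๋ถ„ํฌ์ด๋ฏ€๋กœ, ๊ฐ `score`๋Š” ํ›„๋ณด ๋ ˆ์ด๋ธ”๋“ค ์‚ฌ์ด์˜ ์ƒ๋Œ€์ ์ธ ํ™•์‹ ๋„๋กœ ์ฝ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.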
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# ์˜๋ฏธ์  ๋ถ„ํ• (Semantic segmentation)[[semantic-segmentation]]

[[open-in-colab]]

<Youtube id="dKE8SIt9C-w"/>

์˜๋ฏธ์  ๋ถ„ํ• (semantic segmentation)์€ ์ด๋ฏธ์ง€์˜ ๊ฐ ํ”ฝ์…€์— ๋ ˆ์ด๋ธ” ๋˜๋Š” ํด๋ž˜์Šค๋ฅผ ํ• ๋‹นํ•ฉ๋‹ˆ๋‹ค. ๋ถ„ํ• (segmentation)์—๋Š” ์—ฌ๋Ÿฌ ์ข…๋ฅ˜๊ฐ€ ์žˆ์œผ๋ฉฐ, ์˜๋ฏธ์  ๋ถ„ํ• ์˜ ๊ฒฝ์šฐ ๋™์ผํ•œ ๋ฌผ์ฒด์˜ ๊ณ ์œ  ์ธ์Šคํ„ด์Šค๋ฅผ ๊ตฌ๋ถ„ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ๋‘ ๋ฌผ์ฒด ๋ชจ๋‘ ๋™์ผํ•œ ๋ ˆ์ด๋ธ”์ด ์ง€์ •๋ฉ๋‹ˆ๋‹ค(์˜ˆ์‹œ๋กœ, "car-1" ๊ณผ "car-2" ๋Œ€์‹  "car"๋กœ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค).
์‹ค์ƒํ™œ์—์„œ ํ”ํžˆ ๋ณผ ์ˆ˜ ์žˆ๋Š” ์˜๋ฏธ์  ๋ถ„ํ• ์˜ ์ ์šฉ ์‚ฌ๋ก€๋กœ๋Š” ๋ณดํ–‰์ž์™€ ์ค‘์š”ํ•œ ๊ตํ†ต ์ •๋ณด๋ฅผ ์‹๋ณ„ํ•˜๋Š” ์ž์œจ ์ฃผํ–‰ ์ž๋™์ฐจ ํ•™์Šต, ์˜๋ฃŒ ์ด๋ฏธ์ง€์˜ ์„ธํฌ์™€ ์ด์ƒ ์ง•ํ›„ ์‹๋ณ„, ๊ทธ๋ฆฌ๊ณ  ์œ„์„ฑ ์ด๋ฏธ์ง€์˜ ํ™˜๊ฒฝ ๋ณ€ํ™” ๋ชจ๋‹ˆํ„ฐ๋ง ๋“ฑ์ด ์žˆ์Šต๋‹ˆ๋‹ค.

์ด๋ฒˆ ๊ฐ€์ด๋“œ์—์„œ ๋ฐฐ์šธ ๋‚ด์šฉ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค:

1. [SceneParse150](https://huggingface.co/datasets/scene_parse_150) ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์ด์šฉํ•ด [SegFormer](https://huggingface.co/docs/transformers/main/en/model_doc/segformer#segformer) ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ.
2. ๋ฏธ์„ธ ์กฐ์ •๋œ ๋ชจ๋ธ์„ ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๊ธฐ.
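์˜๋ฏธ์  ๋ถ„ํ• ์ด ๊ฐ ํ”ฝ์…€์— ํด๋ž˜์Šค๋ฅผ ํ• ๋‹นํ•œ๋‹ค๋Š” ๊ฐœ๋…์„ ์•„์ฃผ ์ž‘์€ ์˜ˆ๋กœ ์Šค์ผ€์น˜ํ•˜๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค. ์•„๋ž˜์˜ 2x2 ๋ถ„ํ•  ์ง€๋„์™€ ๋ ˆ์ด๋ธ” ๋งคํ•‘์€ ์„ค๋ช…์„ ์œ„ํ•ด ์ž„์˜๋กœ ๊ฐ€์ •ํ•œ ๊ฐ’์ž…๋‹ˆ๋‹ค:

```py
from collections import Counter

# ๋ถ„ํ•  ์ง€๋„(ํด๋ž˜์Šค ID์˜ 2์ฐจ์› ๋ฐฐ์—ด)์—์„œ ํด๋ž˜์Šค๋ณ„ ํ”ฝ์…€ ์ˆ˜๋ฅผ ์„ธ๋Š” ์Šค์ผ€์น˜์ž…๋‹ˆ๋‹ค.
def count_pixels_per_class(segmentation_map, id2label):
    counts = Counter(pixel for row in segmentation_map for pixel in row)
    return {id2label[class_id]: n for class_id, n in counts.items()}

# ๊ฐ€์ •ํ•œ ๋งคํ•‘: 0 = "road", 1 = "car"
print(count_pixels_per_class([[0, 0], [0, 1]], {0: "road", 1: "car"}))  # {'road': 3, 'car': 1}
```

์‹ค์ œ ๋ชจ๋ธ์˜ ์ถœ๋ ฅ๋„ ๊ฒฐ๊ตญ ์ด์™€ ๊ฐ™์€ "ํ”ฝ์…€๋ณ„ ํด๋ž˜์Šค ID์˜ 2์ฐจ์› ๋ฐฐ์—ด"(๋ถ„ํ•  ์ง€๋„)์ด๋ฉฐ, ๋’ค์—์„œ ๋‹ค๋ฃฐ `annotation` ์ด๋ฏธ์ง€๊ฐ€ ๋ฐ”๋กœ ์ด ํƒ€๊ฒŸ์— ํ•ด๋‹นํ•ฉ๋‹ˆ๋‹ค.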
<Tip> ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ ์„ค๋ช…ํ•˜๋Š” ์ž‘์—…์€ ๋‹ค์Œ ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜์—์„œ ์ง€์›๋ฉ๋‹ˆ๋‹ค: <!--This tip is automatically generated by `make fix-copies`, do not fill manually!--> [BEiT](../model_doc/beit), [Data2VecVision](../model_doc/data2vec-vision), [DPT](../model_doc/dpt), [MobileNetV2](../model_doc/mobilenet_v2), [MobileViT](../model_doc/mobilevit), [MobileViTV2](../model_doc/mobilevitv2), [SegFormer](../model_doc/segformer), [UPerNet](../model_doc/upernet) <!--End of the generated tip--> </Tip> ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ชจ๋“  ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ์„ค์น˜๋˜์—ˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install -q datasets transformers evaluate ``` ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ๊ณต์œ ํ•  ์ˆ˜ ์žˆ๋„๋ก Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ํ”„๋กฌํ”„ํŠธ๊ฐ€ ๋‚˜ํƒ€๋‚˜๋ฉด ํ† ํฐ์„ ์ž…๋ ฅํ•˜์—ฌ ๋กœ๊ทธ์ธํ•˜์„ธ์š”: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## SceneParse150 ๋ฐ์ดํ„ฐ ์„ธํŠธ ๋ถˆ๋Ÿฌ์˜ค๊ธฐ[[load-sceneparse150-dataset]] ๐Ÿค— Datasets ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ SceneParse150 ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ๋” ์ž‘์€ ๋ถ€๋ถ„ ์ง‘ํ•ฉ์„ ๊ฐ€์ ธ์˜ค๋Š” ๊ฒƒ์œผ๋กœ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ๋ฐ์ดํ„ฐ ์„ธํŠธ ์ „์ฒด์— ๋Œ€ํ•œ ํ›ˆ๋ จ์— ๋งŽ์€ ์‹œ๊ฐ„์„ ํ• ์• ํ•˜๊ธฐ ์ „์— ์‹คํ—˜์„ ํ†ตํ•ด ๋ชจ๋“  ๊ฒƒ์ด ์ œ๋Œ€๋กœ ์ž‘๋™ํ•˜๋Š”์ง€ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
```py
>>> from datasets import load_dataset

>>> ds = load_dataset("scene_parse_150", split="train[:50]")
```

๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ `train`์„ [`~datasets.Dataset.train_test_split`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ›ˆ๋ จ ๋ฐ ํ…Œ์ŠคํŠธ ์„ธํŠธ๋กœ ๋ถ„ํ• ํ•˜์„ธ์š”:

```py
>>> ds = ds.train_test_split(test_size=0.2)
>>> train_ds = ds["train"]
>>> test_ds = ds["test"]
```

๊ทธ๋ฆฌ๊ณ  ์˜ˆ์‹œ๋ฅผ ์‚ดํŽด๋ณด์„ธ์š”:

```py
>>> train_ds[0]
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=512x683 at 0x7F9B0C201F90>,
 'annotation': <PIL.PngImagePlugin.PngImageFile image mode=L size=512x683 at 0x7F9B0C201DD0>,
 'scene_category': 368}
```

- `image`: ์žฅ๋ฉด์˜ PIL ์ด๋ฏธ์ง€์ž…๋‹ˆ๋‹ค.
- `annotation`: ๋ถ„ํ•  ์ง€๋„(segmentation map)์˜ PIL ์ด๋ฏธ์ง€์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ์˜ ํƒ€๊ฒŸ์ด๊ธฐ๋„ ํ•ฉ๋‹ˆ๋‹ค.
- `scene_category`: "์ฃผ๋ฐฉ" ๋˜๋Š” "์‚ฌ๋ฌด์‹ค"๊ณผ ๊ฐ™์ด ์ด๋ฏธ์ง€ ์žฅ๋ฉด์„ ์„ค๋ช…ํ•˜๋Š” ์นดํ…Œ๊ณ ๋ฆฌ ID์ž…๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ๋‘˜ ๋‹ค PIL ์ด๋ฏธ์ง€์ธ `image`์™€ `annotation`๋งŒ์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค.

๋‚˜์ค‘์— ๋ชจ๋ธ์„ ์„ค์ •ํ•  ๋•Œ ์œ ์šฉํ•˜๊ฒŒ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก ๋ ˆ์ด๋ธ” ID๋ฅผ ๋ ˆ์ด๋ธ” ํด๋ž˜์Šค์— ๋งคํ•‘ํ•˜๋Š” ์‚ฌ์ „๋„ ๋งŒ๋“ค๊ณ  ์‹ถ์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค. Hub์—์„œ ๋งคํ•‘์„ ๋‹ค์šด๋กœ๋“œํ•˜๊ณ  `id2label` ๋ฐ `label2id` ์‚ฌ์ „์„ ๋งŒ๋“œ์„ธ์š”:

```py
>>> import json
>>> from huggingface_hub import hf_hub_download

>>> repo_id = "huggingface/label-files"
>>> filename = "ade20k-id2label.json"
>>> id2label = json.load(open(hf_hub_download(repo_id, filename, repo_type="dataset"), "r"))
>>> id2label = {int(k): v for k, v in id2label.items()}
>>> label2id = {v: k for k, v in id2label.items()}
>>> num_labels = len(id2label)
```

## ์ „์ฒ˜๋ฆฌํ•˜๊ธฐ[[preprocess]]

๋‹ค์Œ ๋‹จ๊ณ„๋Š” ๋ชจ๋ธ์— ์‚ฌ์šฉํ•  ์ด๋ฏธ์ง€์™€ ์ฃผ์„์„ ์ค€๋น„ํ•˜๊ธฐ ์œ„ํ•ด SegFormer ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ๋ถˆ๋Ÿฌ์˜ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์šฐ๋ฆฌ๊ฐ€ ์‚ฌ์šฉํ•˜๋Š” ๋ฐ์ดํ„ฐ ์„ธํŠธ์™€ ๊ฐ™์€ ์ผ๋ถ€ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋Š” ๋ฐฐ๊ฒฝ ํด๋ž˜์Šค๋กœ ์ œ๋กœ ์ธ๋ฑ์Šค๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค.
ํ•˜์ง€๋งŒ ๋ฐฐ๊ฒฝ ํด๋ž˜์Šค๋Š” 150๊ฐœ์˜ ํด๋ž˜์Šค์— ์‹ค์ œ๋กœ๋Š” ํฌํ•จ๋˜์ง€ ์•Š๊ธฐ ๋•Œ๋ฌธ์— `reduce_labels=True` ๋ฅผ ์„ค์ •ํ•ด ๋ชจ๋“  ๋ ˆ์ด๋ธ”์—์„œ ๋ฐฐ๊ฒฝ ํด๋ž˜์Šค๋ฅผ ์ œ๊ฑฐํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ œ๋กœ ์ธ๋ฑ์Šค๋Š” `255`๋กœ ๋Œ€์ฒด๋˜๋ฏ€๋กœ SegFormer์˜ ์†์‹ค ํ•จ์ˆ˜์—์„œ ๋ฌด์‹œ๋ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoImageProcessor >>> checkpoint = "nvidia/mit-b0" >>> image_processor = AutoImageProcessor.from_pretrained(checkpoint, reduce_labels=True) ``` <frameworkcontent> <pt> ์ด๋ฏธ์ง€ ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ๋ฐ์ดํ„ฐ ์ฆ๊ฐ•์„ ์ ์šฉํ•˜์—ฌ ๊ณผ์ ํ•ฉ์— ๋Œ€ํ•ด ๋ชจ๋ธ์„ ๋ณด๋‹ค ๊ฐ•๊ฑดํ•˜๊ฒŒ ๋งŒ๋“œ๋Š” ๊ฒƒ์ด ์ผ๋ฐ˜์ ์ž…๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” [torchvision](https://pytorch.org/vision/stable/index.html)์˜ [`ColorJitter`](https://pytorch.org/vision/stable/generated/torchvision.transforms.ColorJitter.html)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ด๋ฏธ์ง€์˜ ์ƒ‰์ƒ ์†์„ฑ์„ ์ž„์˜๋กœ ๋ณ€๊ฒฝํ•ฉ๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ, ์ž์‹ ์ด ์›ํ•˜๋Š” ์ด๋ฏธ์ง€ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> from torchvision.transforms import ColorJitter >>> jitter = ColorJitter(brightness=0.25, contrast=0.25, saturation=0.25, hue=0.1) ``` ์ด์ œ ๋ชจ๋ธ์— ์‚ฌ์šฉํ•  ์ด๋ฏธ์ง€์™€ ์ฃผ์„์„ ์ค€๋น„ํ•˜๊ธฐ ์œ„ํ•ด ๋‘ ๊ฐœ์˜ ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค. ์ด ํ•จ์ˆ˜๋“ค์€ ์ด๋ฏธ์ง€๋ฅผ `pixel_values`๋กœ, ์ฃผ์„์„ `labels`๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ํ›ˆ๋ จ ์„ธํŠธ์˜ ๊ฒฝ์šฐ ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ์— ์ด๋ฏธ์ง€๋ฅผ ์ œ๊ณตํ•˜๊ธฐ ์ „์— `jitter`๋ฅผ ์ ์šฉํ•ฉ๋‹ˆ๋‹ค. ํ…Œ์ŠคํŠธ ์„ธํŠธ์˜ ๊ฒฝ์šฐ ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋Š” `images`๋ฅผ ์ž๋ฅด๊ณ  ์ •๊ทœํ™”ํ•˜๋ฉฐ, ํ…Œ์ŠคํŠธ ์ค‘์—๋Š” ๋ฐ์ดํ„ฐ ์ฆ๊ฐ•์ด ์ ์šฉ๋˜์ง€ ์•Š์œผ๋ฏ€๋กœ `labels`๋งŒ ์ž๋ฆ…๋‹ˆ๋‹ค. ```py >>> def train_transforms(example_batch): ... images = [jitter(x) for x in example_batch["image"]] ... labels = [x for x in example_batch["annotation"]] ... inputs = image_processor(images, labels) ... return inputs >>> def val_transforms(example_batch): ... images = [x for x in example_batch["image"]] ... 
labels = [x for x in example_batch["annotation"]] ... inputs = image_processor(images, labels) ... return inputs ``` ๋ชจ๋“  ๋ฐ์ดํ„ฐ ์„ธํŠธ์— `jitter`๋ฅผ ์ ์šฉํ•˜๋ ค๋ฉด, ๐Ÿค— Datasets [`~datasets.Dataset.set_transform`] ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. ์ฆ‰์‹œ ๋ณ€ํ™˜์ด ์ ์šฉ๋˜๊ธฐ ๋•Œ๋ฌธ์— ๋” ๋น ๋ฅด๊ณ  ๋””์Šคํฌ ๊ณต๊ฐ„์„ ๋œ ์ฐจ์ง€ํ•ฉ๋‹ˆ๋‹ค: ```py >>> train_ds.set_transform(train_transforms) >>> test_ds.set_transform(val_transforms) ``` </pt> </frameworkcontent> <frameworkcontent> <tf> ์ด๋ฏธ์ง€ ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ๋ฐ์ดํ„ฐ ์ฆ๊ฐ•์„ ์ ์šฉํ•˜์—ฌ ๊ณผ์ ํ•ฉ์— ๋Œ€ํ•ด ๋ชจ๋ธ์„ ๋ณด๋‹ค ๊ฐ•๊ฑดํ•˜๊ฒŒ ๋งŒ๋“œ๋Š” ๊ฒƒ์ด ์ผ๋ฐ˜์ ์ž…๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” [`tf.image`](https://www.tensorflow.org/api_docs/python/tf/image)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ด๋ฏธ์ง€์˜ ์ƒ‰์ƒ ์†์„ฑ์„ ์ž„์˜๋กœ ๋ณ€๊ฒฝํ•ฉ๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ, ์ž์‹ ์ด ์›ํ•˜๋Š” ์ด๋ฏธ์ง€ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ณ„๊ฐœ์˜ ๋‘ ๋ณ€ํ™˜ ํ•จ์ˆ˜๋ฅผ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค: - ์ด๋ฏธ์ง€ ์ฆ๊ฐ•์„ ํฌํ•จํ•˜๋Š” ํ•™์Šต ๋ฐ์ดํ„ฐ ๋ณ€ํ™˜ - ๐Ÿค— Transformers์˜ ์ปดํ“จํ„ฐ ๋น„์ „ ๋ชจ๋ธ์€ ์ฑ„๋„ ์šฐ์„  ๋ ˆ์ด์•„์›ƒ์„ ๊ธฐ๋Œ€ํ•˜๊ธฐ ๋•Œ๋ฌธ์—, ์ด๋ฏธ์ง€๋งŒ ๋ฐ”๊พธ๋Š” ๊ฒ€์ฆ ๋ฐ์ดํ„ฐ ๋ณ€ํ™˜ ```py >>> import tensorflow as tf >>> def aug_transforms(image): ... image = tf.keras.utils.img_to_array(image) ... image = tf.image.random_brightness(image, 0.25) ... image = tf.image.random_contrast(image, 0.5, 2.0) ... image = tf.image.random_saturation(image, 0.75, 1.25) ... image = tf.image.random_hue(image, 0.1) ... image = tf.transpose(image, (2, 0, 1)) ... return image >>> def transforms(image): ... image = tf.keras.utils.img_to_array(image) ... image = tf.transpose(image, (2, 0, 1)) ... return image ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ๋ชจ๋ธ์„ ์œ„ํ•ด ๋‘ ๊ฐœ์˜ ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ค์–ด ์ด๋ฏธ์ง€ ๋ฐ ์ฃผ์„ ๋ฐฐ์น˜๋ฅผ ์ค€๋น„ํ•ฉ๋‹ˆ๋‹ค. ์ด ํ•จ์ˆ˜๋“ค์€ ์ด๋ฏธ์ง€ ๋ณ€ํ™˜์„ ์ ์šฉํ•˜๊ณ  ์ด์ „์— ๋กœ๋“œํ•œ `image_processor`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ด๋ฏธ์ง€๋ฅผ `pixel_values`๋กœ, ์ฃผ์„์„ `label`๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค. 
`ImageProcessor`๋Š” ์ด๋ฏธ์ง€์˜ ํฌ๊ธฐ ์กฐ์ •๊ณผ ์ •๊ทœํ™”๋„ ์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค.

```py
>>> def train_transforms(example_batch):
...     images = [aug_transforms(x.convert("RGB")) for x in example_batch["image"]]
...     labels = [x for x in example_batch["annotation"]]
...     inputs = image_processor(images, labels)
...     return inputs


>>> def val_transforms(example_batch):
...     images = [transforms(x.convert("RGB")) for x in example_batch["image"]]
...     labels = [x for x in example_batch["annotation"]]
...     inputs = image_processor(images, labels)
...     return inputs
```

์ „์ฒด ๋ฐ์ดํ„ฐ ์ง‘ํ•ฉ์— ์ „์ฒ˜๋ฆฌ ๋ณ€ํ™˜์„ ์ ์šฉํ•˜๋ ค๋ฉด ๐Ÿค— Datasets [`~datasets.Dataset.set_transform`] ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. ์ฆ‰์‹œ ๋ณ€ํ™˜์ด ์ ์šฉ๋˜๊ธฐ ๋•Œ๋ฌธ์— ๋” ๋น ๋ฅด๊ณ  ๋””์Šคํฌ ๊ณต๊ฐ„์„ ๋œ ์ฐจ์ง€ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> train_ds.set_transform(train_transforms)
>>> test_ds.set_transform(val_transforms)
```

</tf>
</frameworkcontent>

## ํ‰๊ฐ€ํ•˜๊ธฐ[[evaluate]]

ํ›ˆ๋ จ ์ค‘์— ๋ฉ”ํŠธ๋ฆญ์„ ํฌํ•จํ•˜๋ฉด ๋ชจ๋ธ์˜ ์„ฑ๋Šฅ์„ ํ‰๊ฐ€ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์Šต๋‹ˆ๋‹ค. ๐Ÿค— [Evaluate](https://huggingface.co/docs/evaluate/index) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ‰๊ฐ€ ๋ฐฉ๋ฒ•์„ ๋น ๋ฅด๊ฒŒ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ํƒœ์Šคํฌ์—์„œ๋Š” [mean Intersection over Union](https://huggingface.co/spaces/evaluate-metric/mean_iou) (IoU) ๋ฉ”ํŠธ๋ฆญ์„ ๋กœ๋“œํ•˜์„ธ์š” (๋ฉ”ํŠธ๋ฆญ์„ ๋กœ๋“œํ•˜๊ณ  ๊ณ„์‚ฐํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด ์ž์„ธํžˆ ์•Œ์•„๋ณด๋ ค๋ฉด ๐Ÿค— Evaluate [quick tour](https://huggingface.co/docs/evaluate/a_quick_tour)๋ฅผ ์‚ดํŽด๋ณด์„ธ์š”).

```py
>>> import evaluate

>>> metric = evaluate.load("mean_iou")
```

๊ทธ๋Ÿฐ ๋‹ค์Œ ๋ฉ”ํŠธ๋ฆญ์„ [`~evaluate.EvaluationModule.compute`]ํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค.
์˜ˆ์ธก์„ ๋จผ์ € ๋กœ์ง“์œผ๋กœ ๋ณ€ํ™˜ํ•œ ๋‹ค์Œ, ๋ ˆ์ด๋ธ”์˜ ํฌ๊ธฐ์— ๋งž๊ฒŒ ๋ชจ์–‘์„ ๋‹ค์‹œ ์ง€์ •ํ•ด์•ผ [`~evaluate.EvaluationModule.compute`]๋ฅผ ํ˜ธ์ถœํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: <frameworkcontent> <pt> ```py >>> import numpy as np >>> import torch >>> from torch import nn >>> def compute_metrics(eval_pred): ... with torch.no_grad(): ... logits, labels = eval_pred ... logits_tensor = torch.from_numpy(logits) ... logits_tensor = nn.functional.interpolate( ... logits_tensor, ... size=labels.shape[-2:], ... mode="bilinear", ... align_corners=False, ... ).argmax(dim=1) ... pred_labels = logits_tensor.detach().cpu().numpy() ... metrics = metric.compute( ... predictions=pred_labels, ... references=labels, ... num_labels=num_labels, ... ignore_index=255, ... reduce_labels=False, ... ) ... for key, value in metrics.items(): ... if isinstance(value, np.ndarray): ... metrics[key] = value.tolist() ... return metrics ``` </pt> </frameworkcontent> <frameworkcontent> <tf> ```py >>> def compute_metrics(eval_pred): ... logits, labels = eval_pred ... logits = tf.transpose(logits, perm=[0, 2, 3, 1]) ... logits_resized = tf.image.resize( ... logits, ... size=tf.shape(labels)[1:], ... method="bilinear", ... ) ... pred_labels = tf.argmax(logits_resized, axis=-1) ... metrics = metric.compute( ... predictions=pred_labels, ... references=labels, ... num_labels=num_labels, ... ignore_index=-1, ... reduce_labels=image_processor.do_reduce_labels, ... ) ... per_category_accuracy = metrics.pop("per_category_accuracy").tolist() ... per_category_iou = metrics.pop("per_category_iou").tolist() ... metrics.update({f"accuracy_{id2label[i]}": v for i, v in enumerate(per_category_accuracy)}) ... metrics.update({f"iou_{id2label[i]}": v for i, v in enumerate(per_category_iou)}) ... return {"val_" + k: v for k, v in metrics.items()} ``` </tf> </frameworkcontent> ์ด์ œ `compute_metrics` ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ํŠธ๋ ˆ์ด๋‹์„ ์„ค์ •ํ•  ๋•Œ ์ด ํ•จ์ˆ˜๋กœ ๋Œ์•„๊ฐ€๊ฒŒ ๋ฉ๋‹ˆ๋‹ค. 
## ํ•™์Šตํ•˜๊ธฐ[[train]] <frameworkcontent> <pt> <Tip> ๋งŒ์•ฝ [`Trainer`]๋ฅผ ์‚ฌ์šฉํ•ด ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๊ฒƒ์— ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด, [์—ฌ๊ธฐ](../training#finetune-with-trainer)์—์„œ ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ์„ ์‚ดํŽด๋ณด์„ธ์š”! </Tip> ์ด์ œ ๋ชจ๋ธ ํ•™์Šต์„ ์‹œ์ž‘ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! [`AutoModelForSemanticSegmentation`]๋กœ SegFormer๋ฅผ ๋ถˆ๋Ÿฌ์˜ค๊ณ , ๋ชจ๋ธ์— ๋ ˆ์ด๋ธ” ID์™€ ๋ ˆ์ด๋ธ” ํด๋ž˜์Šค ๊ฐ„์˜ ๋งคํ•‘์„ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForSemanticSegmentation, TrainingArguments, Trainer >>> model = AutoModelForSemanticSegmentation.from_pretrained(checkpoint, id2label=id2label, label2id=label2id) ``` ์ด์ œ ์„ธ ๋‹จ๊ณ„๋งŒ ๋‚จ์•˜์Šต๋‹ˆ๋‹ค: 1. ํ•™์Šต ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ [`TrainingArguments`]์— ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. `image` ์—ด์ด ์‚ญ์ œ๋˜๊ธฐ ๋•Œ๋ฌธ์— ์‚ฌ์šฉํ•˜์ง€ ์•Š๋Š” ์—ด์„ ์ œ๊ฑฐํ•˜์ง€ ์•Š๋Š” ๊ฒƒ์ด ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. `image` ์—ด์ด ์—†์œผ๋ฉด `pixel_values`์„ ์ƒ์„ฑํ•  ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฐ ๊ฒฝ์šฐ๋ฅผ ๋ฐฉ์ง€ํ•˜๋ ค๋ฉด `remove_unused_columns=False`๋กœ ์„ค์ •ํ•˜์„ธ์š”! ์œ ์ผํ•˜๊ฒŒ ํ•„์š”ํ•œ ๋‹ค๋ฅธ ๋งค๊ฐœ๋ณ€์ˆ˜๋Š” ๋ชจ๋ธ์„ ์ €์žฅํ•  ์œ„์น˜๋ฅผ ์ง€์ •ํ•˜๋Š” `output_dir`์ž…๋‹ˆ๋‹ค. `push_to_hub=True`๋ฅผ ์„ค์ •ํ•˜์—ฌ ์ด ๋ชจ๋ธ์„ Hub์— ํ‘ธ์‹œํ•ฉ๋‹ˆ๋‹ค(๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๋ ค๋ฉด Hugging Face์— ๋กœ๊ทธ์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค). ๊ฐ ์—ํฌํฌ๊ฐ€ ๋๋‚  ๋•Œ๋งˆ๋‹ค [`Trainer`]๊ฐ€ IoU ๋ฉ”ํŠธ๋ฆญ์„ ํ‰๊ฐ€ํ•˜๊ณ  ํ•™์Šต ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์ €์žฅํ•ฉ๋‹ˆ๋‹ค. 2. ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ ์„ธํŠธ, ํ† ํฌ๋‚˜์ด์ €, ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ, `compute_metrics` ํ•จ์ˆ˜์™€ ํ•จ๊ป˜ ํ•™์Šต ์ธ์ž๋ฅผ [`Trainer`]์— ์ „๋‹ฌํ•˜์„ธ์š”. 3. ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ ์œ„ํ•ด [`~Trainer.train`]๋ฅผ ํ˜ธ์ถœํ•˜์„ธ์š”. ```py >>> training_args = TrainingArguments( ... output_dir="segformer-b0-scene-parse-150", ... learning_rate=6e-5, ... num_train_epochs=50, ... per_device_train_batch_size=2, ... per_device_eval_batch_size=2, ... save_total_limit=3, ... evaluation_strategy="steps", ... save_strategy="steps", ... save_steps=20, ... eval_steps=20, ... 
logging_steps=1, ... eval_accumulation_steps=5, ... remove_unused_columns=False, ... push_to_hub=True, ... ) >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=train_ds, ... eval_dataset=test_ds, ... compute_metrics=compute_metrics, ... ) >>> trainer.train() ``` ํ•™์Šต์ด ์™„๋ฃŒ๋˜๋ฉด, ๋ˆ„๊ตฌ๋‚˜ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก [`~transformers.Trainer.push_to_hub`] ๋ฉ”์„œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ด Hub์— ๋ชจ๋ธ์„ ๊ณต์œ ํ•˜์„ธ์š”: ```py >>> trainer.push_to_hub() ``` </pt> </frameworkcontent> <frameworkcontent> <tf> <Tip> Keras๋กœ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐ ์ต์ˆ™ํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ, ๋จผ์ € [๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ](../training#train-a-tensorflow-model-with-keras)์„ ํ™•์ธํ•ด๋ณด์„ธ์š”! </Tip> TensorFlow์—์„œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋ ค๋ฉด ๋‹ค์Œ ๋‹จ๊ณ„๋ฅผ ๋”ฐ๋ฅด์„ธ์š”: 1. ํ•™์Šต ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ •์˜ํ•˜๊ณ  ์˜ตํ‹ฐ๋งˆ์ด์ €์™€ ํ•™์Šต๋ฅ  ์Šค์ผ€์ฅด๋Ÿฌ๋ฅผ ์„ค์ •ํ•˜์„ธ์š”. 2. ์‚ฌ์ „ ํ•™์Šต๋œ ๋ชจ๋ธ์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜์„ธ์š”. 3. ๐Ÿค— Dataset์„ `tf.data.Dataset`๋กœ ๋ณ€ํ™˜ํ•˜์„ธ์š”. 4. ๋ชจ๋ธ์„ ์ปดํŒŒ์ผํ•˜์„ธ์š”. 5. ์ฝœ๋ฐฑ์„ ์ถ”๊ฐ€ํ•˜์—ฌ ๋ฉ”ํŠธ๋ฆญ์„ ๊ณ„์‚ฐํ•˜๊ณ  ๐Ÿค— Hub์— ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜์„ธ์š”. 6. `fit()` ๋ฉ”์„œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ›ˆ๋ จ์„ ์‹คํ–‰ํ•˜์„ธ์š”. ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ, ์˜ตํ‹ฐ๋งˆ์ด์ €, ํ•™์Šต๋ฅ  ์Šค์ผ€์ฅด๋Ÿฌ๋ฅผ ์ •์˜ํ•˜๋Š” ๊ฒƒ์œผ๋กœ ์‹œ์ž‘ํ•˜์„ธ์š”: ```py >>> from transformers import create_optimizer >>> batch_size = 2 >>> num_epochs = 50 >>> num_train_steps = len(train_ds) * num_epochs >>> learning_rate = 6e-5 >>> weight_decay_rate = 0.01 >>> optimizer, lr_schedule = create_optimizer( ... init_lr=learning_rate, ... num_train_steps=num_train_steps, ... weight_decay_rate=weight_decay_rate, ... num_warmup_steps=0, ... ) ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ๋ ˆ์ด๋ธ” ๋งคํ•‘๊ณผ ํ•จ๊ป˜ [`TFAutoModelForSemanticSegmentation`]์„ ์‚ฌ์šฉํ•˜์—ฌ SegFormer๋ฅผ ๋ถˆ๋Ÿฌ์˜ค๊ณ  ์˜ตํ‹ฐ๋งˆ์ด์ €๋กœ ์ปดํŒŒ์ผํ•ฉ๋‹ˆ๋‹ค. 
ํŠธ๋žœ์Šคํฌ๋จธ ๋ชจ๋ธ์€ ๋ชจ๋‘ ๋””ํดํŠธ๋กœ ํƒœ์Šคํฌ ๊ด€๋ จ ์†์‹ค ํ•จ์ˆ˜๊ฐ€ ์žˆ์œผ๋ฏ€๋กœ ์›์น˜ ์•Š์œผ๋ฉด ์ง€์ •ํ•  ํ•„์š”๊ฐ€ ์—†์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import TFAutoModelForSemanticSegmentation >>> model = TFAutoModelForSemanticSegmentation.from_pretrained( ... checkpoint, ... id2label=id2label, ... label2id=label2id, ... ) >>> model.compile(optimizer=optimizer) # ์†์‹ค ํ•จ์ˆ˜ ์ธ์ž๊ฐ€ ์—†์Šต๋‹ˆ๋‹ค! ``` [`~datasets.Dataset.to_tf_dataset`] ์™€ [`DefaultDataCollator`]๋ฅผ ์‚ฌ์šฉํ•ด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ `tf.data.Dataset` ํฌ๋งท์œผ๋กœ ๋ณ€ํ™˜ํ•˜์„ธ์š”: ```py >>> from transformers import DefaultDataCollator >>> data_collator = DefaultDataCollator(return_tensors="tf") >>> tf_train_dataset = train_ds.to_tf_dataset( ... columns=["pixel_values", "label"], ... shuffle=True, ... batch_size=batch_size, ... collate_fn=data_collator, ... ) >>> tf_eval_dataset = test_ds.to_tf_dataset( ... columns=["pixel_values", "label"], ... shuffle=True, ... batch_size=batch_size, ... collate_fn=data_collator, ... ) ``` ์˜ˆ์ธก์œผ๋กœ ์ •ํ™•๋„๋ฅผ ๊ณ„์‚ฐํ•˜๊ณ  ๋ชจ๋ธ์„ ๐Ÿค— Hub๋กœ ํ‘ธ์‹œํ•˜๋ ค๋ฉด [Keras callbacks](../main_classes/keras_callbacks)๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. `compute_metrics` ํ•จ์ˆ˜๋ฅผ [`KerasMetricCallback`]์— ์ „๋‹ฌํ•˜๊ณ , ๋ชจ๋ธ ์—…๋กœ๋“œ๋ฅผ ์œ„ํ•ด [`PushToHubCallback`]๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”: ```py >>> from transformers.keras_callbacks import KerasMetricCallback, PushToHubCallback >>> metric_callback = KerasMetricCallback( ... metric_fn=compute_metrics, eval_dataset=tf_eval_dataset, batch_size=batch_size, label_cols=["labels"] ... ) >>> push_to_hub_callback = PushToHubCallback(output_dir="scene_segmentation", tokenizer=image_processor) >>> callbacks = [metric_callback, push_to_hub_callback] ``` ์ด์ œ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! ํ›ˆ๋ จ ๋ฐ ๊ฒ€์ฆ ๋ฐ์ดํ„ฐ ์„ธํŠธ, ์—ํฌํฌ ์ˆ˜์™€ ํ•จ๊ป˜ `fit()`์„ ํ˜ธ์ถœํ•˜๊ณ , ์ฝœ๋ฐฑ์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค: ```py >>> model.fit( ... tf_train_dataset, ... 
validation_data=tf_eval_dataset, ... callbacks=callbacks, ... epochs=num_epochs, ... ) ``` ์ถ•ํ•˜ํ•ฉ๋‹ˆ๋‹ค! ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ๐Ÿค— Hub์— ๊ณต์œ ํ–ˆ์Šต๋‹ˆ๋‹ค. ์ด์ œ ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! </tf> </frameworkcontent> ## ์ถ”๋ก ํ•˜๊ธฐ[[inference]] ์ด์ œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ–ˆ์œผ๋‹ˆ ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ์ถ”๋ก ํ•  ์ด๋ฏธ์ง€๋ฅผ ๋กœ๋“œํ•˜์„ธ์š”: ```py >>> image = ds[0]["image"] >>> image ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/semantic-seg-image.png" alt="Image of bedroom"/> </div> <frameworkcontent> <pt> ์ถ”๋ก ์„ ์œ„ํ•ด ๋ฏธ์„ธ ์กฐ์ •ํ•œ ๋ชจ๋ธ์„ ์‹œํ—˜ํ•ด ๋ณด๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ [`pipeline`]์—์„œ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜์—ฌ ์ด๋ฏธ์ง€ ๋ถ„ํ• ์„ ์œ„ํ•œ `pipeline`์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ณ  ์ด๋ฏธ์ง€๋ฅผ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import pipeline >>> segmenter = pipeline("image-segmentation", model="my_awesome_seg_model") >>> segmenter(image) [{'score': None, 'label': 'wall', 'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062690>}, {'score': None, 'label': 'sky', 'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062A50>}, {'score': None, 'label': 'floor', 'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062B50>}, {'score': None, 'label': 'ceiling', 'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062A10>}, {'score': None, 'label': 'bed ', 'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062E90>}, {'score': None, 'label': 'windowpane', 'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062390>}, {'score': None, 'label': 'cabinet', 'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062550>}, {'score': None, 'label': 'chair', 'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062D90>}, {'score': None, 'label': 'armchair', 'mask': <PIL.Image.Image image mode=L size=640x427 at 0x7FD5B2062E10>}] ``` 
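`pipeline`์ด ๋ฐ˜ํ™˜ํ•˜๋Š” ํด๋ž˜์Šค๋ณ„ ์ด์ง„ ๋งˆ์Šคํฌ๋ฅผ ํ•˜๋‚˜์˜ ๋ ˆ์ด๋ธ” ID ์ง€๋„๋กœ ํ•ฉ์น˜๊ณ  ์‹ถ๋‹ค๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์ด ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜ 2x2 ๋งˆ์Šคํฌ์™€ ๋ ˆ์ด๋ธ” ID๋Š” ์„ค๋ช…์„ ์œ„ํ•ด ์ž„์˜๋กœ ๋งŒ๋“  ๊ฐ€์ •์ด๋ฉฐ, ์‹ค์ œ ์ถœ๋ ฅ์˜ `mask`๋Š” PIL `L` ๋ชจ๋“œ ์ด๋ฏธ์ง€์ด๋ฏ€๋กœ `np.asarray()`๊ฐ€ ๊ทธ๋Œ€๋กœ ๋ฐฐ์—ด๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค:

```python
import numpy as np

def masks_to_label_map(results, label2id):
    """ํด๋ž˜์Šค๋ณ„ ์ด์ง„ ๋งˆ์Šคํฌ ๋ชฉ๋ก์„ ํ•˜๋‚˜์˜ ๋ ˆ์ด๋ธ” ID ์ง€๋„๋กœ ํ•ฉ์นฉ๋‹ˆ๋‹ค."""
    first = np.asarray(results[0]["mask"])
    label_map = np.zeros(first.shape, dtype=np.int32)
    for r in results:
        mask = np.asarray(r["mask"]) > 0  # 0์ด ์•„๋‹Œ ํ”ฝ์…€์ด ํ•ด๋‹น ํด๋ž˜์Šค
        # ๊ฒน์น˜๋Š” ํ”ฝ์…€์€ ๋’ค์— ์˜ค๋Š” ๋งˆ์Šคํฌ๊ฐ€ ๋ฎ์–ด์”๋‹ˆ๋‹ค
        label_map[mask] = label2id[r["label"]]
    return label_map

# ์„ค๋ช…์šฉ์œผ๋กœ ๋งŒ๋“  ๊ฐ€์ƒ์˜ 2x2 ๋งˆ์Šคํฌ (์‹ค์ œ pipeline ์ถœ๋ ฅ์ด ์•„๋‹™๋‹ˆ๋‹ค)
results = [
    {"label": "wall", "mask": np.array([[255, 255], [0, 0]], dtype=np.uint8)},
    {"label": "floor", "mask": np.array([[0, 0], [255, 255]], dtype=np.uint8)},
]
label_map = masks_to_label_map(results, {"wall": 1, "floor": 4})
```

์ด๋ ‡๊ฒŒ ๋งŒ๋“  ๋ ˆ์ด๋ธ” ID ์ง€๋„๋Š” ๋’ค์— ๋‚˜์˜ค๋Š” ์‹œ๊ฐํ™” ์ฝ”๋“œ์˜ `pred_seg`์™€ ๊ฐ™์€ ํ˜•ํƒœ๋กœ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.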
์›ํ•˜๋Š” ๊ฒฝ์šฐ `pipeline`์˜ ๊ฒฐ๊ณผ๋ฅผ ์ˆ˜๋™์œผ๋กœ ๋ณต์ œํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋กœ ์ด๋ฏธ์ง€๋ฅผ ์ฒ˜๋ฆฌํ•˜๊ณ  `pixel_values`์„ GPU์— ๋ฐฐ์น˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # ๊ฐ€๋Šฅํ•˜๋‹ค๋ฉด GPU๋ฅผ ์‚ฌ์šฉํ•˜๊ณ , ๊ทธ๋ ‡์ง€ ์•Š๋‹ค๋ฉด CPU๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š” >>> encoding = image_processor(image, return_tensors="pt") >>> pixel_values = encoding.pixel_values.to(device) ``` ๋ชจ๋ธ์— ์ž…๋ ฅ์„ ์ „๋‹ฌํ•˜๊ณ  `logits`๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> outputs = model(pixel_values=pixel_values) >>> logits = outputs.logits.cpu() ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ๋กœ์ง“์˜ ํฌ๊ธฐ๋ฅผ ์›๋ณธ ์ด๋ฏธ์ง€ ํฌ๊ธฐ๋กœ ๋‹ค์‹œ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค: ```py >>> upsampled_logits = nn.functional.interpolate( ... logits, ... size=image.size[::-1], ... mode="bilinear", ... align_corners=False, ... ) >>> pred_seg = upsampled_logits.argmax(dim=1)[0] ``` </pt> </frameworkcontent> <frameworkcontent> <tf> ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ๋กœ๋“œํ•˜์—ฌ ์ด๋ฏธ์ง€๋ฅผ ์ „์ฒ˜๋ฆฌํ•˜๊ณ  ์ž…๋ ฅ์„ TensorFlow ํ…์„œ๋กœ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoImageProcessor >>> image_processor = AutoImageProcessor.from_pretrained("MariaK/scene_segmentation") >>> inputs = image_processor(image, return_tensors="tf") ``` ๋ชจ๋ธ์— ์ž…๋ ฅ์„ ์ „๋‹ฌํ•˜๊ณ  `logits`๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import TFAutoModelForSemanticSegmentation >>> model = TFAutoModelForSemanticSegmentation.from_pretrained("MariaK/scene_segmentation") >>> logits = model(**inputs).logits ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ๋กœ๊ทธ๋ฅผ ์›๋ณธ ์ด๋ฏธ์ง€ ํฌ๊ธฐ๋กœ ์žฌ์กฐ์ •ํ•˜๊ณ  ํด๋ž˜์Šค ์ฐจ์›์— argmax๋ฅผ ์ ์šฉํ•ฉ๋‹ˆ๋‹ค: ```py >>> logits = tf.transpose(logits, [0, 2, 3, 1]) >>> upsampled_logits = tf.image.resize( ... logits, ... # `image.size`๊ฐ€ ๋„ˆ๋น„์™€ ๋†’์ด๋ฅผ ๋ฐ˜ํ™˜ํ•˜๊ธฐ ๋•Œ๋ฌธ์— `image`์˜ ๋ชจ์–‘์„ ๋ฐ˜์ „์‹œํ‚ต๋‹ˆ๋‹ค ... image.size[::-1], ... 
) >>> pred_seg = tf.math.argmax(upsampled_logits, axis=-1)[0] ``` </tf> </frameworkcontent> ๊ฒฐ๊ณผ๋ฅผ ์‹œ๊ฐํ™”ํ•˜๋ ค๋ฉด [dataset color palette](https://github.com/tensorflow/models/blob/3f1ca33afe3c1631b733ea7e40c294273b9e406d/research/deeplab/utils/get_dataset_colormap.py#L51)๋ฅผ ๊ฐ ํด๋ž˜์Šค๋ฅผ RGB ๊ฐ’์— ๋งคํ•‘ํ•˜๋Š” `ade_palette()`๋กœ ๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฐ ๋‹ค์Œ ์ด๋ฏธ์ง€์™€ ์˜ˆ์ธก๋œ ๋ถ„ํ•  ์ง€๋„(segmentation map)์„ ๊ฒฐํ•ฉํ•˜์—ฌ ๊ตฌ์„ฑํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> import matplotlib.pyplot as plt >>> import numpy as np >>> color_seg = np.zeros((pred_seg.shape[0], pred_seg.shape[1], 3), dtype=np.uint8) >>> palette = np.array(ade_palette()) >>> for label, color in enumerate(palette): ... color_seg[pred_seg == label, :] = color >>> color_seg = color_seg[..., ::-1] # BGR๋กœ ๋ณ€ํ™˜ >>> img = np.array(image) * 0.5 + color_seg * 0.5 # ๋ถ„ํ•  ์ง€๋„์œผ๋กœ ์ด๋ฏธ์ง€ ๊ตฌ์„ฑ >>> img = img.astype(np.uint8) >>> plt.figure(figsize=(15, 10)) >>> plt.imshow(img) >>> plt.show() ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/semantic-seg-preds.png" alt="Image of bedroom overlaid with segmentation map"/> </div>
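์œ„์—์„œ ํ˜ธ์ถœํ•˜๋Š” `ade_palette()`๋Š” ๋งํฌ๋œ ์ €์žฅ์†Œ์—์„œ ๊ฐ€์ ธ์™€์•ผ ํ•˜๋ฉฐ, ์ด ๊ฐ€์ด๋“œ์—๋Š” ์ •์˜๊ฐ€ ์‹ค๋ ค ์žˆ์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ์‹œ๊ฐํ™”๋งŒ ๋น ๋ฅด๊ฒŒ ํ™•์ธํ•˜๊ณ  ์‹ถ๋‹ค๋ฉด, ์•„๋ž˜์ฒ˜๋Ÿผ 150๊ฐœ ํด๋ž˜์Šค์— ์„œ๋กœ ๋‹ค๋ฅธ ์ƒ‰์„ ๊ฒฐ์ •์ ์œผ๋กœ ๋ถ€์—ฌํ•˜๋Š” ๋Œ€์ฒด ํŒ”๋ ˆํŠธ๋ฅผ ์“ธ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค(์‹ค์ œ ADE20K ํŒ”๋ ˆํŠธ์™€ ์ƒ‰์ƒ์€ ๋‹ค๋ฅด๋‹ค๋Š” ์ ์— ์œ ์˜ํ•˜์„ธ์š”):

```python
def ade_palette(num_classes=150):
    """ํด๋ž˜์Šค ID๋ฅผ RGB ๊ฐ’์— ๋งคํ•‘ํ•˜๋Š” ๊ฐ„๋‹จํ•œ ๋Œ€์ฒด ํŒ”๋ ˆํŠธ์ž…๋‹ˆ๋‹ค."""
    palette = []
    for i in range(num_classes):
        # 37์€ 256๊ณผ ์„œ๋กœ์†Œ์ด๋ฏ€๋กœ ์ฒซ ์„ฑ๋ถ„๋งŒ์œผ๋กœ๋„ 150๊ฐœ ์ƒ‰์ด ๋ชจ๋‘ ๊ตฌ๋ณ„๋ฉ๋‹ˆ๋‹ค
        palette.append([(i * 37) % 256, (i * 91) % 256, (i * 173) % 256])
    return palette

palette = ade_palette()
```

๋ณธ๋ฌธ ์ฝ”๋“œ์˜ `np.array(ade_palette())` ์ž๋ฆฌ์— ๊ทธ๋Œ€๋กœ ๋ผ์›Œ ๋„ฃ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.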
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์ด๋ฏธ์ง€ ์บก์…”๋‹[[image-captioning]] [[open-in-colab]] ์ด๋ฏธ์ง€ ์บก์…”๋‹(Image captioning)์€ ์ฃผ์–ด์ง„ ์ด๋ฏธ์ง€์— ๋Œ€ํ•œ ์บก์…˜์„ ์˜ˆ์ธกํ•˜๋Š” ์ž‘์—…์ž…๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€ ์บก์…”๋‹์€ ์‹œ๊ฐ ์žฅ์• ์ธ์ด ๋‹ค์–‘ํ•œ ์ƒํ™ฉ์„ ํƒ์ƒ‰ํ•˜๋Š” ๋ฐ ๋„์›€์„ ์ค„ ์ˆ˜ ์žˆ๋„๋ก ์‹œ๊ฐ ์žฅ์• ์ธ์„ ๋ณด์กฐํ•˜๋Š” ๋“ฑ ์‹ค์ƒํ™œ์—์„œ ํ”ํžˆ ํ™œ์šฉ๋ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์ด๋ฏธ์ง€ ์บก์…”๋‹์€ ์ด๋ฏธ์ง€๋ฅผ ์„ค๋ช…ํ•จ์œผ๋กœ์จ ์‚ฌ๋žŒ๋“ค์˜ ์ฝ˜ํ…์ธ  ์ ‘๊ทผ์„ฑ์„ ๊ฐœ์„ ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋ฉ๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ์†Œ๊ฐœํ•  ๋‚ด์šฉ์€ ์•„๋ž˜์™€ ๊ฐ™์Šต๋‹ˆ๋‹ค: * ์ด๋ฏธ์ง€ ์บก์…”๋‹ ๋ชจ๋ธ์„ ํŒŒ์ธํŠœ๋‹ํ•ฉ๋‹ˆ๋‹ค. * ํŒŒ์ธํŠœ๋‹๋œ ๋ชจ๋ธ์„ ์ถ”๋ก ์— ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ชจ๋“  ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install transformers datasets evaluate -q pip install jiwer -q ``` Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜๋ฉด ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๊ณต์œ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ† ํฐ์„ ์ž…๋ ฅํ•˜์—ฌ ๋กœ๊ทธ์ธํ•˜์„ธ์š”. 
```python
from huggingface_hub import notebook_login

notebook_login()
```

## ํฌ์ผ“๋ชฌ BLIP ์บก์…˜ ๋ฐ์ดํ„ฐ์„ธํŠธ ๊ฐ€์ ธ์˜ค๊ธฐ[[load-the-pokmon-blip-captions-dataset]]

{์ด๋ฏธ์ง€-์บก์…˜} ์Œ์œผ๋กœ ๊ตฌ์„ฑ๋œ ๋ฐ์ดํ„ฐ์„ธํŠธ๋ฅผ ๊ฐ€์ ธ์˜ค๋ ค๋ฉด ๐Ÿค— Datasets ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. PyTorch์—์„œ ์ž์‹ ๋งŒ์˜ ์ด๋ฏธ์ง€ ์บก์…˜ ๋ฐ์ดํ„ฐ์„ธํŠธ๋ฅผ ๋งŒ๋“ค๋ ค๋ฉด [์ด ๋…ธํŠธ๋ถ](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/GIT/Fine_tune_GIT_on_an_image_captioning_dataset.ipynb)์„ ์ฐธ์กฐํ•˜์„ธ์š”.

```python
from datasets import load_dataset

ds = load_dataset("lambdalabs/pokemon-blip-captions")
ds
```
```bash
DatasetDict({
    train: Dataset({
        features: ['image', 'text'],
        num_rows: 833
    })
})
```

์ด ๋ฐ์ดํ„ฐ์„ธํŠธ๋Š” `image`์™€ `text`๋ผ๋Š” ๋‘ ํŠน์„ฑ์„ ๊ฐ€์ง€๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค.

<Tip>

๋งŽ์€ ์ด๋ฏธ์ง€ ์บก์…˜ ๋ฐ์ดํ„ฐ์„ธํŠธ์—๋Š” ์ด๋ฏธ์ง€๋‹น ์—ฌ๋Ÿฌ ๊ฐœ์˜ ์บก์…˜์ด ํฌํ•จ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๊ฒฝ์šฐ, ์ผ๋ฐ˜์ ์œผ๋กœ ํ•™์Šต ์ค‘์— ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ ์บก์…˜ ์ค‘์—์„œ ๋ฌด์ž‘์œ„๋กœ ์ƒ˜ํ”Œ์„ ์ถ”์ถœํ•ฉ๋‹ˆ๋‹ค.

</Tip>

[`~datasets.Dataset.train_test_split`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ์„ธํŠธ์˜ ํ•™์Šต ๋ถ„ํ• ์„ ํ•™์Šต ๋ฐ ํ…Œ์ŠคํŠธ ์„ธํŠธ๋กœ ๋‚˜๋ˆ•๋‹ˆ๋‹ค:

```python
ds = ds["train"].train_test_split(test_size=0.1)
train_ds = ds["train"]
test_ds = ds["test"]
```

ํ•™์Šต ์„ธํŠธ์˜ ์ƒ˜ํ”Œ ๋ช‡ ๊ฐœ๋ฅผ ์‹œ๊ฐํ™”ํ•ด ๋ด…์‹œ๋‹ค.
```python from textwrap import wrap import matplotlib.pyplot as plt import numpy as np def plot_images(images, captions): plt.figure(figsize=(20, 20)) for i in range(len(images)): ax = plt.subplot(1, len(images), i + 1) caption = captions[i] caption = "\n".join(wrap(caption, 12)) plt.title(caption) plt.imshow(images[i]) plt.axis("off") sample_images_to_visualize = [np.array(train_ds[i]["image"]) for i in range(5)] sample_captions = [train_ds[i]["text"] for i in range(5)] plot_images(sample_images_to_visualize, sample_captions) ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/sample_training_images_image_cap.png" alt="Sample training images"/> </div> ## ๋ฐ์ดํ„ฐ์„ธํŠธ ์ „์ฒ˜๋ฆฌ[[preprocess-the-dataset]] ๋ฐ์ดํ„ฐ์„ธํŠธ์—๋Š” ์ด๋ฏธ์ง€์™€ ํ…์ŠคํŠธ๋ผ๋Š” ๋‘ ๊ฐ€์ง€ ์–‘์‹์ด ์žˆ๊ธฐ ๋•Œ๋ฌธ์—, ์ „์ฒ˜๋ฆฌ ํŒŒ์ดํ”„๋ผ์ธ์—์„œ ์ด๋ฏธ์ง€์™€ ์บก์…˜์„ ๋ชจ๋‘ ์ „์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค. ์ „์ฒ˜๋ฆฌ ์ž‘์—…์„ ์œ„ํ•ด, ํŒŒ์ธํŠœ๋‹ํ•˜๋ ค๋Š” ๋ชจ๋ธ์— ์—ฐ๊ฒฐ๋œ ํ”„๋กœ์„ธ์„œ ํด๋ž˜์Šค๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. ```python from transformers import AutoProcessor checkpoint = "microsoft/git-base" processor = AutoProcessor.from_pretrained(checkpoint) ``` ํ”„๋กœ์„ธ์„œ๋Š” ๋‚ด๋ถ€์ ์œผ๋กœ ํฌ๊ธฐ ์กฐ์ • ๋ฐ ํ”ฝ์…€ ํฌ๊ธฐ ์กฐ์ •์„ ํฌํ•จํ•œ ์ด๋ฏธ์ง€ ์ „์ฒ˜๋ฆฌ๋ฅผ ์ˆ˜ํ–‰ํ•˜๊ณ  ์บก์…˜์„ ํ† ํฐํ™”ํ•ฉ๋‹ˆ๋‹ค. ```python def transforms(example_batch): images = [x for x in example_batch["image"]] captions = [x for x in example_batch["text"]] inputs = processor(images=images, text=captions, padding="max_length") inputs.update({"labels": inputs["input_ids"]}) return inputs train_ds.set_transform(transforms) test_ds.set_transform(transforms) ``` ๋ฐ์ดํ„ฐ์„ธํŠธ๊ฐ€ ์ค€๋น„๋˜์—ˆ์œผ๋‹ˆ ์ด์ œ ํŒŒ์ธํŠœ๋‹์„ ์œ„ํ•ด ๋ชจ๋ธ์„ ์„ค์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
## ๊ธฐ๋ณธ ๋ชจ๋ธ ๊ฐ€์ ธ์˜ค๊ธฐ[[load-a-base-model]] ["microsoft/git-base"](https://huggingface.co/microsoft/git-base)๋ฅผ [`AutoModelForCausalLM`](https://huggingface.co/docs/transformers/model_doc/auto#transformers.AutoModelForCausalLM) ๊ฐ์ฒด๋กœ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. ```python from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained(checkpoint) ``` ## ํ‰๊ฐ€[[evaluate]] ์ด๋ฏธ์ง€ ์บก์…˜ ๋ชจ๋ธ์€ ์ผ๋ฐ˜์ ์œผ๋กœ [Rouge ์ ์ˆ˜](https://huggingface.co/spaces/evaluate-metric/rouge) ๋˜๋Š” [๋‹จ์–ด ์˜ค๋ฅ˜์œจ(Word Error Rate)](https://huggingface.co/spaces/evaluate-metric/wer)๋กœ ํ‰๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ๋‹จ์–ด ์˜ค๋ฅ˜์œจ(WER)์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฅผ ์œ„ํ•ด ๐Ÿค— Evaluate ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. WER์˜ ์ž ์žฌ์  ์ œํ•œ ์‚ฌํ•ญ ๋ฐ ๊ธฐํƒ€ ๋ฌธ์ œ์ ์€ [์ด ๊ฐ€์ด๋“œ](https://huggingface.co/spaces/evaluate-metric/wer)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ```python from evaluate import load import torch wer = load("wer") def compute_metrics(eval_pred): logits, labels = eval_pred predicted = logits.argmax(-1) decoded_labels = processor.batch_decode(labels, skip_special_tokens=True) decoded_predictions = processor.batch_decode(predicted, skip_special_tokens=True) wer_score = wer.compute(predictions=decoded_predictions, references=decoded_labels) return {"wer_score": wer_score} ``` ## ํ•™์Šต![[train!]] ์ด์ œ ๋ชจ๋ธ ํŒŒ์ธํŠœ๋‹์„ ์‹œ์ž‘ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฅผ ์œ„ํ•ด ๐Ÿค— [`Trainer`]๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ๋จผ์ €, [`TrainingArguments`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ•™์Šต ์ธ์ˆ˜๋ฅผ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. 
```python from transformers import TrainingArguments, Trainer model_name = checkpoint.split("/")[1] training_args = TrainingArguments( output_dir=f"{model_name}-pokemon", learning_rate=5e-5, num_train_epochs=50, fp16=True, per_device_train_batch_size=32, per_device_eval_batch_size=32, gradient_accumulation_steps=2, save_total_limit=3, evaluation_strategy="steps", eval_steps=50, save_strategy="steps", save_steps=50, logging_steps=50, remove_unused_columns=False, push_to_hub=True, label_names=["labels"], load_best_model_at_end=True, ) ``` ํ•™์Šต ์ธ์ˆ˜๋ฅผ ๋ฐ์ดํ„ฐ์„ธํŠธ, ๋ชจ๋ธ๊ณผ ํ•จ๊ป˜ ๐Ÿค— Trainer์— ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. ```python trainer = Trainer( model=model, args=training_args, train_dataset=train_ds, eval_dataset=test_ds, compute_metrics=compute_metrics, ) ``` ํ•™์Šต์„ ์‹œ์ž‘ํ•˜๋ ค๋ฉด [`Trainer`] ๊ฐ์ฒด์—์„œ [`~Trainer.train`]์„ ํ˜ธ์ถœํ•˜๊ธฐ๋งŒ ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ```python trainer.train() ``` ํ•™์Šต์ด ์ง„ํ–‰๋˜๋ฉด์„œ ํ•™์Šต ์†์‹ค์ด ์›ํ™œํ•˜๊ฒŒ ๊ฐ์†Œํ•˜๋Š” ๊ฒƒ์„ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•™์Šต์ด ์™„๋ฃŒ๋˜๋ฉด ๋ชจ๋“  ์‚ฌ๋žŒ์ด ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก [`~Trainer.push_to_hub`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ์— ๊ณต์œ ํ•˜์„ธ์š”: ```python trainer.push_to_hub() ``` ## ์ถ”๋ก [[inference]] `test_ds`์—์„œ ์ƒ˜ํ”Œ ์ด๋ฏธ์ง€๋ฅผ ๊ฐ€์ ธ์™€ ๋ชจ๋ธ์„ ํ…Œ์ŠคํŠธํ•ฉ๋‹ˆ๋‹ค. ```python from PIL import Image import requests url = "https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/pokemon.png" image = Image.open(requests.get(url, stream=True).raw) image ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/test_image_image_cap.png" alt="Test image"/> </div> ๋ชจ๋ธ์— ์‚ฌ์šฉํ•  ์ด๋ฏธ์ง€๋ฅผ ์ค€๋น„ํ•ฉ๋‹ˆ๋‹ค. ```python device = "cuda" if torch.cuda.is_available() else "cpu" inputs = processor(images=image, return_tensors="pt").to(device) pixel_values = inputs.pixel_values ``` [`generate`]๋ฅผ ํ˜ธ์ถœํ•˜๊ณ  ์˜ˆ์ธก์„ ๋””์ฝ”๋”ฉํ•ฉ๋‹ˆ๋‹ค. 
```python generated_ids = model.generate(pixel_values=pixel_values, max_length=50) generated_caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] print(generated_caption) ``` ```bash a drawing of a pink and blue pokemon ``` ํŒŒ์ธํŠœ๋‹๋œ ๋ชจ๋ธ์ด ๊ฝค ๊ดœ์ฐฎ์€ ์บก์…˜์„ ์ƒ์„ฑํ•œ ๊ฒƒ ๊ฐ™์Šต๋‹ˆ๋‹ค!
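์œ„ ํ‰๊ฐ€์— ์‚ฌ์šฉํ•œ ๋‹จ์–ด ์˜ค๋ฅ˜์œจ(WER)์€ ๋‹จ์–ด ๋‹จ์œ„ ํŽธ์ง‘ ๊ฑฐ๋ฆฌ(์‚ฝ์ž…, ์‚ญ์ œ, ๋Œ€์ฒด ํšŸ์ˆ˜)๋ฅผ ์ฐธ์กฐ ๋‹จ์–ด ์ˆ˜๋กœ ๋‚˜๋ˆˆ ๊ฐ’์ž…๋‹ˆ๋‹ค. ๐Ÿค— Evaluate์˜ `wer`๊ฐ€ ๊ณ„์‚ฐํ•˜๋Š” ๊ฐ’์„ ๊ฐ ์žก๊ธฐ ์œ„ํ•œ ์ˆœ์ˆ˜ ํŒŒ์ด์ฌ ์Šค์ผ€์น˜์ด๋ฉฐ, ์‹ค์ œ ํ‰๊ฐ€์—๋Š” ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ ๊ตฌํ˜„์„ ์‚ฌ์šฉํ•˜์„ธ์š”:

```python
def wer(reference, prediction):
    """๋‹จ์–ด ๋‹จ์œ„ ๋ ˆ๋ฒค์Šˆํƒ€์ธ ๊ฑฐ๋ฆฌ๋ฅผ ์ฐธ์กฐ ๋‹จ์–ด ์ˆ˜๋กœ ๋‚˜๋ˆˆ ๊ฐ’์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค."""
    ref, hyp = reference.split(), prediction.split()
    # ๋™์  ๊ณ„ํš๋ฒ•์œผ๋กœ ํŽธ์ง‘ ๊ฑฐ๋ฆฌ ํ…Œ์ด๋ธ”์„ ์ฑ„์›๋‹ˆ๋‹ค
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # ์‚ญ์ œ
                          d[i][j - 1] + 1,      # ์‚ฝ์ž…
                          d[i - 1][j - 1] + cost)  # ๋Œ€์ฒด ๋˜๋Š” ์ผ์น˜
    return d[len(ref)][len(hyp)] / len(ref)

# ์ฐธ์กฐ 8๋‹จ์–ด์—์„œ "pink"์™€ "and" ๋‘ ๋‹จ์–ด๊ฐ€ ๋น ์กŒ์œผ๋ฏ€๋กœ WER = 2/8 = 0.25
score = wer("a drawing of a pink and blue pokemon", "a drawing of a blue pokemon")
```

๊ฐ’์ด ๋‚ฎ์„์ˆ˜๋ก ์ƒ์„ฑ๋œ ์บก์…˜์ด ์ฐธ์กฐ ์บก์…˜๊ณผ ๊ฐ€๊น๋‹ค๋Š” ๋œป์ž…๋‹ˆ๋‹ค.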
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# ์˜์ƒ ๋ถ„๋ฅ˜ [[video-classification]]

[[open-in-colab]]

์˜์ƒ ๋ถ„๋ฅ˜๋Š” ์˜์ƒ ์ „์ฒด์— ๋ ˆ์ด๋ธ” ๋˜๋Š” ํด๋ž˜์Šค๋ฅผ ์ง€์ •ํ•˜๋Š” ์ž‘์—…์ž…๋‹ˆ๋‹ค. ๊ฐ ์˜์ƒ์—๋Š” ํ•˜๋‚˜์˜ ํด๋ž˜์Šค๊ฐ€ ์žˆ์„ ๊ฒƒ์œผ๋กœ ์˜ˆ์ƒ๋ฉ๋‹ˆ๋‹ค. ์˜์ƒ ๋ถ„๋ฅ˜ ๋ชจ๋ธ์€ ์˜์ƒ์„ ์ž…๋ ฅ์œผ๋กœ ๋ฐ›์•„ ์–ด๋А ํด๋ž˜์Šค์— ์†ํ•˜๋Š”์ง€์— ๋Œ€ํ•œ ์˜ˆ์ธก์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋ชจ๋ธ์€ ์˜์ƒ์ด ์–ด๋–ค ๋‚ด์šฉ์ธ์ง€ ๋ถ„๋ฅ˜ํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜์ƒ ๋ถ„๋ฅ˜์˜ ์‹ค์ œ ์‘์šฉ ์˜ˆ๋กœ๋Š” ํ”ผํŠธ๋‹ˆ์Šค ์•ฑ์—์„œ ์œ ์šฉํ•œ ๋™์ž‘/์šด๋™ ์ธ์‹ ์„œ๋น„์Šค๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Š” ๋˜ํ•œ ์‹œ๊ฐ ์žฅ์• ์ธ์ด ์ด๋™ํ•  ๋•Œ ๋ณด์กฐํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ๋‹ค์Œ์„ ์ˆ˜ํ–‰ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค:

1. [UCF101](https://www.crcv.ucf.edu/data/UCF101.php) ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ํ•˜์œ„ ์ง‘ํ•ฉ์„ ํ†ตํ•ด [VideoMAE](https://huggingface.co/docs/transformers/main/en/model_doc/videomae) ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ.
2. ๋ฏธ์„ธ ์กฐ์ •ํ•œ ๋ชจ๋ธ์„ ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๊ธฐ.
<Tip> ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ ์„ค๋ช…ํ•˜๋Š” ์ž‘์—…์€ ๋‹ค์Œ ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜์—์„œ ์ง€์›๋ฉ๋‹ˆ๋‹ค: <!--This tip is automatically generated by `make fix-copies`, do not fill manually!--> [TimeSformer](../model_doc/timesformer), [VideoMAE](../model_doc/videomae) <!--End of the generated tip--> </Tip> ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ชจ๋“  ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ์„ค์น˜๋˜์—ˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install -q pytorchvideo transformers evaluate ``` ์˜์ƒ์„ ์ฒ˜๋ฆฌํ•˜๊ณ  ์ค€๋น„ํ•˜๊ธฐ ์œ„ํ•ด [PyTorchVideo](https://pytorchvideo.org/)(์ดํ•˜ `pytorchvideo`)๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ๊ณต์œ ํ•  ์ˆ˜ ์žˆ๋„๋ก Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ํ”„๋กฌํ”„ํŠธ๊ฐ€ ๋‚˜ํƒ€๋‚˜๋ฉด ํ† ํฐ์„ ์ž…๋ ฅํ•˜์—ฌ ๋กœ๊ทธ์ธํ•˜์„ธ์š”: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## UCF101 ๋ฐ์ดํ„ฐ์…‹ ๋ถˆ๋Ÿฌ์˜ค๊ธฐ [[load-ufc101-dataset]] [UCF-101](https://www.crcv.ucf.edu/data/UCF101.php) ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ํ•˜์œ„ ์ง‘ํ•ฉ(subset)์„ ๋ถˆ๋Ÿฌ์˜ค๋Š” ๊ฒƒ์œผ๋กœ ์‹œ์ž‘ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ํ•™์Šตํ•˜๋Š”๋ฐ ๋” ๋งŽ์€ ์‹œ๊ฐ„์„ ํ• ์• ํ•˜๊ธฐ ์ „์— ๋ฐ์ดํ„ฐ์˜ ํ•˜์œ„ ์ง‘ํ•ฉ์„ ๋ถˆ๋Ÿฌ์™€ ๋ชจ๋“  ๊ฒƒ์ด ์ž˜ ์ž‘๋™ํ•˜๋Š”์ง€ ์‹คํ—˜ํ•˜๊ณ  ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> from huggingface_hub import hf_hub_download >>> hf_dataset_identifier = "sayakpaul/ucf101-subset" >>> filename = "UCF101_subset.tar.gz" >>> file_path = hf_hub_download(repo_id=hf_dataset_identifier, filename=filename, repo_type="dataset") ``` ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ํ•˜์œ„ ์ง‘ํ•ฉ์ด ๋‹ค์šด๋กœ๋“œ ๋˜๋ฉด, ์••์ถ•๋œ ํŒŒ์ผ์˜ ์••์ถ•์„ ํ•ด์ œํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```py >>> import tarfile >>> with tarfile.open(file_path) as t: ... t.extractall(".") ``` ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ๊ตฌ์„ฑ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ```bash UCF101_subset/ train/ BandMarching/ video_1.mp4 video_2.mp4 ... Archery video_1.mp4 video_2.mp4 ... ... val/ BandMarching/ video_1.mp4 video_2.mp4 ... Archery video_1.mp4 video_2.mp4 ... ... 
test/ BandMarching/ video_1.mp4 video_2.mp4 ... Archery video_1.mp4 video_2.mp4 ... ... ``` ์ •๋ ฌ๋œ ์˜์ƒ์˜ ๊ฒฝ๋กœ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: ```bash ... 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g07_c04.avi', 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g07_c06.avi', 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g08_c01.avi', 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g09_c02.avi', 'UCF101_subset/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g09_c06.avi' ... ``` ๋™์ผํ•œ ๊ทธ๋ฃน/์žฅ๋ฉด์— ์†ํ•˜๋Š” ์˜์ƒ ํด๋ฆฝ์€ ํŒŒ์ผ ๊ฒฝ๋กœ์—์„œ `g`๋กœ ํ‘œ์‹œ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค๋ฉด, `v_ApplyEyeMakeup_g07_c04.avi`์™€ `v_ApplyEyeMakeup_g07_c06.avi` ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๋‘˜์€ ๊ฐ™์€ ๊ทธ๋ฃน์ž…๋‹ˆ๋‹ค. ๊ฒ€์ฆ ๋ฐ ํ‰๊ฐ€ ๋ฐ์ดํ„ฐ ๋ถ„ํ• ์„ ํ•  ๋•Œ, [๋ฐ์ดํ„ฐ ๋ˆ„์ถœ(data leakage)](https://www.kaggle.com/code/alexisbcook/data-leakage)์„ ๋ฐฉ์ง€ํ•˜๊ธฐ ์œ„ํ•ด ๋™์ผํ•œ ๊ทธ๋ฃน / ์žฅ๋ฉด์˜ ์˜์ƒ ํด๋ฆฝ์„ ์‚ฌ์šฉํ•˜์ง€ ์•Š์•„์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ ์‚ฌ์šฉํ•˜๋Š” ํ•˜์œ„ ์ง‘ํ•ฉ์€ ์ด๋Ÿฌํ•œ ์ •๋ณด๋ฅผ ๊ณ ๋ คํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ ๋‹ค์Œ์œผ๋กœ, ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ์กด์žฌํ•˜๋Š” ๋ผ๋ฒจ์„ ์ถ”์ถœํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ, ๋ชจ๋ธ์„ ์ดˆ๊ธฐํ™”ํ•  ๋•Œ ๋„์›€์ด ๋  ๋”•์…”๋„ˆ๋ฆฌ(dictionary data type)๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. * `label2id`: ํด๋ž˜์Šค ์ด๋ฆ„์„ ์ •์ˆ˜์— ๋งคํ•‘ํ•ฉ๋‹ˆ๋‹ค. * `id2label`: ์ •์ˆ˜๋ฅผ ํด๋ž˜์Šค ์ด๋ฆ„์— ๋งคํ•‘ํ•ฉ๋‹ˆ๋‹ค. ```py >>> class_labels = sorted({str(path).split("/")[2] for path in all_video_file_paths}) >>> label2id = {label: i for i, label in enumerate(class_labels)} >>> id2label = {i: label for label, i in label2id.items()} >>> print(f"Unique classes: {list(label2id.keys())}.") # Unique classes: ['ApplyEyeMakeup', 'ApplyLipstick', 'Archery', 'BabyCrawling', 'BalanceBeam', 'BandMarching', 'BaseballPitch', 'Basketball', 'BasketballDunk', 'BenchPress']. ``` ์ด ๋ฐ์ดํ„ฐ ์„ธํŠธ์—๋Š” ์ด 10๊ฐœ์˜ ๊ณ ์œ ํ•œ ํด๋ž˜์Šค๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. 
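위의 `class_labels` 추출 코드에서 사용된 `all_video_file_paths`는 압축 해제된 디렉터리에서 수집한 영상 파일 경로의 목록입니다. 아래는 이를 어떻게 모을 수 있는지 보여주는 간단한 스케치입니다. 실제로는 압축 해제된 `UCF101_subset` 디렉터리를 그대로 사용하면 되며, 여기서는 설명을 위해 임시 디렉터리에 가상의 구조를 만듭니다:

```python
import os
import pathlib
import tempfile

# 설명용 가상 디렉터리 구조 생성 (실제로는 압축 해제된 UCF101_subset을 사용)
os.chdir(tempfile.mkdtemp())
for split, cls in [("train", "Archery"), ("train", "BandMarching"), ("val", "Archery")]:
    class_dir = pathlib.Path("UCF101_subset") / split / cls
    class_dir.mkdir(parents=True, exist_ok=True)
    (class_dir / f"v_{cls}_g01_c01.avi").touch()

dataset_root_path = pathlib.Path("UCF101_subset")
all_video_file_paths = (
    list(dataset_root_path.glob("train/*/*.avi"))
    + list(dataset_root_path.glob("val/*/*.avi"))
    + list(dataset_root_path.glob("test/*/*.avi"))
)

# 경로의 세 번째 구성 요소("UCF101_subset/train/<클래스>/...")가 클래스 이름입니다.
class_labels = sorted({str(path).split("/")[2] for path in all_video_file_paths})
label2id = {label: i for i, label in enumerate(class_labels)}
print(class_labels)  # ['Archery', 'BandMarching']
```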
๊ฐ ํด๋ž˜์Šค๋งˆ๋‹ค 30๊ฐœ์˜ ์˜์ƒ์ด ํ›ˆ๋ จ ์„ธํŠธ์— ์žˆ์Šต๋‹ˆ๋‹ค ## ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ ์œ„ํ•ด ๋ชจ๋ธ ๊ฐ€์ ธ์˜ค๊ธฐ [[load-a-model-to-fine-tune]] ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ์ฒดํฌํฌ์ธํŠธ์™€ ์ฒดํฌํฌ์ธํŠธ์— ์—ฐ๊ด€๋œ ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์˜์ƒ ๋ถ„๋ฅ˜ ๋ชจ๋ธ์„ ์ธ์Šคํ„ด์Šคํ™”ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์˜ ์ธ์ฝ”๋”์—๋Š” ๋ฏธ๋ฆฌ ํ•™์Šต๋œ ๋งค๊ฐœ๋ณ€์ˆ˜๊ฐ€ ์ œ๊ณต๋˜๋ฉฐ, ๋ถ„๋ฅ˜ ํ—ค๋“œ(๋ฐ์ดํ„ฐ๋ฅผ ๋ถ„๋ฅ˜ํ•˜๋Š” ๋งˆ์ง€๋ง‰ ๋ ˆ์ด์–ด)๋Š” ๋ฌด์ž‘์œ„๋กœ ์ดˆ๊ธฐํ™”๋ฉ๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์ „์ฒ˜๋ฆฌ ํŒŒ์ดํ”„๋ผ์ธ์„ ์ž‘์„ฑํ•  ๋•Œ๋Š” ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๊ฐ€ ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification >>> model_ckpt = "MCG-NJU/videomae-base" >>> image_processor = VideoMAEImageProcessor.from_pretrained(model_ckpt) >>> model = VideoMAEForVideoClassification.from_pretrained( ... model_ckpt, ... label2id=label2id, ... id2label=id2label, ... ignore_mismatched_sizes=True, # provide this in case you're planning to fine-tune an already fine-tuned checkpoint ... ) ``` ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ค๋Š” ๋™์•ˆ, ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๊ฒฝ๊ณ ๋ฅผ ๋งˆ์ฃผ์น  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```bash Some weights of the model checkpoint at MCG-NJU/videomae-base were not used when initializing VideoMAEForVideoClassification: [..., 'decoder.decoder_layers.1.attention.output.dense.bias', 'decoder.decoder_layers.2.attention.attention.key.weight'] - This IS expected if you are initializing VideoMAEForVideoClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing VideoMAEForVideoClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). 
Some weights of VideoMAEForVideoClassification were not initialized from the model checkpoint at MCG-NJU/videomae-base and are newly initialized: ['classifier.bias', 'classifier.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ``` ์œ„ ๊ฒฝ๊ณ ๋Š” ์šฐ๋ฆฌ๊ฐ€ ์ผ๋ถ€ ๊ฐ€์ค‘์น˜(์˜ˆ: `classifier` ์ธต์˜ ๊ฐ€์ค‘์น˜์™€ ํŽธํ–ฅ)๋ฅผ ๋ฒ„๋ฆฌ๊ณ  ์ƒˆ๋กœ์šด `classifier` ์ธต์˜ ๊ฐ€์ค‘์น˜์™€ ํŽธํ–ฅ์„ ๋ฌด์ž‘์œ„๋กœ ์ดˆ๊ธฐํ™”ํ•˜๊ณ  ์žˆ๋‹ค๋Š” ๊ฒƒ์„ ์•Œ๋ ค์ค๋‹ˆ๋‹ค. ์ด ๊ฒฝ์šฐ์—๋Š” ๋ฏธ๋ฆฌ ํ•™์Šต๋œ ๊ฐ€์ค‘์น˜๊ฐ€ ์—†๋Š” ์ƒˆ๋กœ์šด ํ—ค๋“œ๋ฅผ ์ถ”๊ฐ€ํ•˜๊ณ  ์žˆ์œผ๋ฏ€๋กœ, ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ๋ชจ๋ธ์„ ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๊ธฐ ์ „์— ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋ผ๊ณ  ๊ฒฝ๊ณ ๋ฅผ ๋ณด๋‚ด๋Š” ๊ฒƒ์€ ๋‹น์—ฐํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ์ด์ œ ์šฐ๋ฆฌ๋Š” ์ด ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•  ์˜ˆ์ •์ž…๋‹ˆ๋‹ค. **์ฐธ๊ณ ** ์ด [์ฒดํฌํฌ์ธํŠธ](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics)๋Š” ๋„๋ฉ”์ธ์ด ๋งŽ์ด ์ค‘์ฒฉ๋œ ์œ ์‚ฌํ•œ ๋‹ค์šด์ŠคํŠธ๋ฆผ ์ž‘์—…์— ๋Œ€ํ•ด ๋ฏธ์„ธ ์กฐ์ •ํ•˜์—ฌ ์–ป์€ ์ฒดํฌํฌ์ธํŠธ์ด๋ฏ€๋กœ ์ด ์ž‘์—…์—์„œ ๋” ๋‚˜์€ ์„ฑ๋Šฅ์„ ๋ณด์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `MCG-NJU/videomae-base-finetuned-kinetics` ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜์—ฌ ์–ป์€ [์ฒดํฌํฌ์ธํŠธ](https://huggingface.co/sayakpaul/videomae-base-finetuned-kinetics-finetuned-ucf101-subset)๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ## ํ›ˆ๋ จ์„ ์œ„ํ•œ ๋ฐ์ดํ„ฐ ์„ธํŠธ ์ค€๋น„ํ•˜๊ธฐ[[prepare-the-datasets-for-training]] ์˜์ƒ ์ „์ฒ˜๋ฆฌ๋ฅผ ์œ„ํ•ด [PyTorchVideo ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ](https://pytorchvideo.org/)๋ฅผ ํ™œ์šฉํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. ํ•„์š”ํ•œ ์ข…์†์„ฑ์„ ๊ฐ€์ ธ์˜ค๋Š” ๊ฒƒ์œผ๋กœ ์‹œ์ž‘ํ•˜์„ธ์š”. ```py >>> import pytorchvideo.data >>> from pytorchvideo.transforms import ( ... ApplyTransformToKey, ... Normalize, ... RandomShortSideScale, ... RemoveKey, ... ShortSideScale, ... UniformTemporalSubsample, ... ) >>> from torchvision.transforms import ( ... Compose, ... Lambda, ... RandomCrop, ... RandomHorizontalFlip, ... Resize, ... 
) ``` ํ•™์Šต ๋ฐ์ดํ„ฐ ์„ธํŠธ ๋ณ€ํ™˜์—๋Š” '๊ท ์ผํ•œ ์‹œ๊ฐ„ ์ƒ˜ํ”Œ๋ง(uniform temporal subsampling)', 'ํ”ฝ์…€ ์ •๊ทœํ™”(pixel normalization)', '๋žœ๋ค ์ž˜๋ผ๋‚ด๊ธฐ(random cropping)' ๋ฐ '๋žœ๋ค ์ˆ˜ํ‰ ๋’ค์ง‘๊ธฐ(random horizontal flipping)'์˜ ์กฐํ•ฉ์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ๊ฒ€์ฆ ๋ฐ ํ‰๊ฐ€ ๋ฐ์ดํ„ฐ ์„ธํŠธ ๋ณ€ํ™˜์—๋Š” '๋žœ๋ค ์ž˜๋ผ๋‚ด๊ธฐ'์™€ '๋žœ๋ค ๋’ค์ง‘๊ธฐ'๋ฅผ ์ œ์™ธํ•œ ๋™์ผํ•œ ๋ณ€ํ™˜ ์ฒด์ธ์„ ์œ ์ง€ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋ณ€ํ™˜์— ๋Œ€ํ•ด ์ž์„ธํžˆ ์•Œ์•„๋ณด๋ ค๋ฉด [PyTorchVideo ๊ณต์‹ ๋ฌธ์„œ](https://pytorchvideo.org)๋ฅผ ํ™•์ธํ•˜์„ธ์š”. ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ๊ณผ ๊ด€๋ จ๋œ ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋‹ค์Œ ์ •๋ณด๋ฅผ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: * ์˜์ƒ ํ”„๋ ˆ์ž„ ํ”ฝ์…€์„ ์ •๊ทœํ™”ํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋˜๋Š” ์ด๋ฏธ์ง€ ํ‰๊ท ๊ณผ ํ‘œ์ค€ ํŽธ์ฐจ * ์˜์ƒ ํ”„๋ ˆ์ž„์ด ์กฐ์ •๋  ๊ณต๊ฐ„ ํ•ด์ƒ๋„ ๋จผ์ €, ๋ช‡ ๊ฐ€์ง€ ์ƒ์ˆ˜๋ฅผ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. ```py >>> mean = image_processor.image_mean >>> std = image_processor.image_std >>> if "shortest_edge" in image_processor.size: ... height = width = image_processor.size["shortest_edge"] >>> else: ... height = image_processor.size["height"] ... width = image_processor.size["width"] >>> resize_to = (height, width) >>> num_frames_to_sample = model.config.num_frames >>> sample_rate = 4 >>> fps = 30 >>> clip_duration = num_frames_to_sample * sample_rate / fps ``` ์ด์ œ ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ํŠนํ™”๋œ ์ „์ฒ˜๋ฆฌ(transform)๊ณผ ๋ฐ์ดํ„ฐ ์„ธํŠธ ์ž์ฒด๋ฅผ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. ๋จผ์ € ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋กœ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค: ```py >>> train_transform = Compose( ... [ ... ApplyTransformToKey( ... key="video", ... transform=Compose( ... [ ... UniformTemporalSubsample(num_frames_to_sample), ... Lambda(lambda x: x / 255.0), ... Normalize(mean, std), ... RandomShortSideScale(min_size=256, max_size=320), ... RandomCrop(resize_to), ... RandomHorizontalFlip(p=0.5), ... ] ... ), ... ), ... ] ... ) >>> train_dataset = pytorchvideo.data.Ucf101( ... data_path=os.path.join(dataset_root_path, "train"), ... 
clip_sampler=pytorchvideo.data.make_clip_sampler("random", clip_duration), ... decode_audio=False, ... transform=train_transform, ... ) ``` ๊ฐ™์€ ๋ฐฉ์‹์˜ ์ž‘์—… ํ๋ฆ„์„ ๊ฒ€์ฆ๊ณผ ํ‰๊ฐ€ ์„ธํŠธ์—๋„ ์ ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> val_transform = Compose( ... [ ... ApplyTransformToKey( ... key="video", ... transform=Compose( ... [ ... UniformTemporalSubsample(num_frames_to_sample), ... Lambda(lambda x: x / 255.0), ... Normalize(mean, std), ... Resize(resize_to), ... ] ... ), ... ), ... ] ... ) >>> val_dataset = pytorchvideo.data.Ucf101( ... data_path=os.path.join(dataset_root_path, "val"), ... clip_sampler=pytorchvideo.data.make_clip_sampler("uniform", clip_duration), ... decode_audio=False, ... transform=val_transform, ... ) >>> test_dataset = pytorchvideo.data.Ucf101( ... data_path=os.path.join(dataset_root_path, "test"), ... clip_sampler=pytorchvideo.data.make_clip_sampler("uniform", clip_duration), ... decode_audio=False, ... transform=val_transform, ... ) ``` **์ฐธ๊ณ **: ์œ„์˜ ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ํŒŒ์ดํ”„๋ผ์ธ์€ [๊ณต์‹ ํŒŒ์ดํ† ์น˜ ์˜ˆ์ œ](https://pytorchvideo.org/docs/tutorial_classification#dataset)์—์„œ ๊ฐ€์ ธ์˜จ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์šฐ๋ฆฌ๋Š” UCF-101 ๋ฐ์ดํ„ฐ์…‹์— ๋งž๊ฒŒ [`pytorchvideo.data.Ucf101()`](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html#pytorchvideo.data.Ucf101) ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๋‚ด๋ถ€์ ์œผ๋กœ ์ด ํ•จ์ˆ˜๋Š” [`pytorchvideo.data.labeled_video_dataset.LabeledVideoDataset`](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html#pytorchvideo.data.LabeledVideoDataset) ๊ฐ์ฒด๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. `LabeledVideoDataset` ํด๋ž˜์Šค๋Š” PyTorchVideo ๋ฐ์ดํ„ฐ์…‹์—์„œ ๋ชจ๋“  ์˜์ƒ ๊ด€๋ จ ์ž‘์—…์˜ ๊ธฐ๋ณธ ํด๋ž˜์Šค์ž…๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ PyTorchVideo์—์„œ ๋ฏธ๋ฆฌ ์ œ๊ณตํ•˜์ง€ ์•Š๋Š” ์‚ฌ์šฉ์ž ์ง€์ • ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜๋ ค๋ฉด, ์ด ํด๋ž˜์Šค๋ฅผ ์ ์ ˆํ•˜๊ฒŒ ํ™•์žฅํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. 
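전처리에 사용된 `UniformTemporalSubsample`이 개념적으로 어떤 일을 하는지, 외부 라이브러리 없이 순수 파이썬으로 표현한 간단한 스케치입니다. 실제 pytorchvideo 구현의 인덱스 계산 방식과는 세부적으로 다를 수 있습니다:

```python
def uniform_temporal_subsample(frames, num_samples):
    # 시간축(프레임 목록)에서 균일한 간격으로 num_samples개의 프레임을 고릅니다.
    t = len(frames)
    indices = [round(i * (t - 1) / (num_samples - 1)) for i in range(num_samples)]
    return [frames[i] for i in indices]


# 64프레임짜리 가상 영상에서 16프레임을 균일하게 샘플링
frames = list(range(64))
sampled = uniform_temporal_subsample(frames, 16)
print(len(sampled), sampled[0], sampled[-1])  # 16 0 63
```

이렇게 하면 영상의 길이와 무관하게 모델이 기대하는 고정된 개수(`model.config.num_frames`)의 프레임을 얻을 수 있습니다.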
๋” ์ž์„ธํ•œ ์‚ฌํ•ญ์ด ์•Œ๊ณ  ์‹ถ๋‹ค๋ฉด `data` API [๋ฌธ์„œ](https://pytorchvideo.readthedocs.io/en/latest/api/data/data.html) ๋ฅผ ์ฐธ๊ณ ํ•˜์„ธ์š”. ๋˜ํ•œ ์œ„์˜ ์˜ˆ์‹œ์™€ ์œ ์‚ฌํ•œ ๊ตฌ์กฐ๋ฅผ ๊ฐ–๋Š” ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜๊ณ  ์žˆ๋‹ค๋ฉด, `pytorchvideo.data.Ucf101()` ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๋ฐ ๋ฌธ์ œ๊ฐ€ ์—†์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ์˜์ƒ์˜ ๊ฐœ์ˆ˜๋ฅผ ์•Œ๊ธฐ ์œ„ํ•ด `num_videos` ์ธ์ˆ˜์— ์ ‘๊ทผํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> print(train_dataset.num_videos, val_dataset.num_videos, test_dataset.num_videos) # (300, 30, 75) ``` ## ๋” ๋‚˜์€ ๋””๋ฒ„๊น…์„ ์œ„ํ•ด ์ „์ฒ˜๋ฆฌ ์˜์ƒ ์‹œ๊ฐํ™”ํ•˜๊ธฐ[[visualize-the-preprocessed-video-for-better-debugging]] ```py >>> import imageio >>> import numpy as np >>> from IPython.display import Image >>> def unnormalize_img(img): ... """Un-normalizes the image pixels.""" ... img = (img * std) + mean ... img = (img * 255).astype("uint8") ... return img.clip(0, 255) >>> def create_gif(video_tensor, filename="sample.gif"): ... """Prepares a GIF from a video tensor. ... ... The video tensor is expected to have the following shape: ... (num_frames, num_channels, height, width). ... """ ... frames = [] ... for video_frame in video_tensor: ... frame_unnormalized = unnormalize_img(video_frame.permute(1, 2, 0).numpy()) ... frames.append(frame_unnormalized) ... kargs = {"duration": 0.25} ... imageio.mimsave(filename, frames, "GIF", **kargs) ... return filename >>> def display_gif(video_tensor, gif_name="sample.gif"): ... """Prepares and displays a GIF from a video tensor.""" ... video_tensor = video_tensor.permute(1, 0, 2, 3) ... gif_filename = create_gif(video_tensor, gif_name) ... 
return Image(filename=gif_filename) >>> sample_video = next(iter(train_dataset)) >>> video_tensor = sample_video["video"] >>> display_gif(video_tensor) ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/sample_gif.gif" alt="Person playing basketball"/> </div> ## ๋ชจ๋ธ ํ›ˆ๋ จํ•˜๊ธฐ[[train-the-model]] ๐Ÿค— Transformers์˜ [`Trainer`](https://huggingface.co/docs/transformers/main_classes/trainer)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํ›ˆ๋ จ์‹œ์ผœ๋ณด์„ธ์š”. `Trainer`๋ฅผ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๋ ค๋ฉด ํ›ˆ๋ จ ์„ค์ •๊ณผ ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ์ •์˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๊ฐ€์žฅ ์ค‘์š”ํ•œ ๊ฒƒ์€ [`TrainingArguments`](https://huggingface.co/transformers/main_classes/trainer.html#transformers.TrainingArguments)์ž…๋‹ˆ๋‹ค. ์ด ํด๋ž˜์Šค๋Š” ํ›ˆ๋ จ์„ ๊ตฌ์„ฑํ•˜๋Š” ๋ชจ๋“  ์†์„ฑ์„ ํฌํ•จํ•˜๋ฉฐ, ํ›ˆ๋ จ ์ค‘ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์ €์žฅํ•  ์ถœ๋ ฅ ํด๋” ์ด๋ฆ„์„ ํ•„์š”๋กœ ํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ ๐Ÿค— Hub์˜ ๋ชจ๋ธ ์ €์žฅ์†Œ์˜ ๋ชจ๋“  ์ •๋ณด๋ฅผ ๋™๊ธฐํ™”ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋ฉ๋‹ˆ๋‹ค. ๋Œ€๋ถ€๋ถ„์˜ ํ›ˆ๋ จ ์ธ์ˆ˜๋Š” ๋”ฐ๋กœ ์„ค๋ช…ํ•  ํ•„์š”๋Š” ์—†์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์—ฌ๊ธฐ์—์„œ ์ค‘์š”ํ•œ ์ธ์ˆ˜๋Š” `remove_unused_columns=False` ์ž…๋‹ˆ๋‹ค. ์ด ์ธ์ž๋Š” ๋ชจ๋ธ์˜ ํ˜ธ์ถœ ํ•จ์ˆ˜์—์„œ ์‚ฌ์šฉ๋˜์ง€ ์•Š๋Š” ๋ชจ๋“  ์†์„ฑ ์—ด(columns)์„ ์‚ญ์ œํ•ฉ๋‹ˆ๋‹ค. ๊ธฐ๋ณธ๊ฐ’์€ ์ผ๋ฐ˜์ ์œผ๋กœ True์ž…๋‹ˆ๋‹ค. ์ด๋Š” ์‚ฌ์šฉ๋˜์ง€ ์•Š๋Š” ๊ธฐ๋Šฅ ์—ด์„ ์‚ญ์ œํ•˜๋Š” ๊ฒƒ์ด ์ด์ƒ์ ์ด๋ฉฐ, ์ž…๋ ฅ์„ ๋ชจ๋ธ์˜ ํ˜ธ์ถœ ํ•จ์ˆ˜๋กœ ํ’€๊ธฐ(unpack)๊ฐ€ ์‰ฌ์›Œ์ง€๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์ด ๊ฒฝ์šฐ์—๋Š” `pixel_values`(๋ชจ๋ธ์˜ ์ž…๋ ฅ์œผ๋กœ ํ•„์ˆ˜์ ์ธ ํ‚ค)๋ฅผ ์ƒ์„ฑํ•˜๊ธฐ ์œ„ํ•ด ์‚ฌ์šฉ๋˜์ง€ ์•Š๋Š” ๊ธฐ๋Šฅ('video'๊ฐ€ ํŠนํžˆ ๊ทธ๋ ‡์Šต๋‹ˆ๋‹ค)์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ remove_unused_columns์„ False๋กœ ์„ค์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import TrainingArguments, Trainer >>> model_name = model_ckpt.split("/")[-1] >>> new_model_name = f"{model_name}-finetuned-ucf101-subset" >>> num_epochs = 4 >>> args = TrainingArguments( ... 
new_model_name, ... remove_unused_columns=False, ... evaluation_strategy="epoch", ... save_strategy="epoch", ... learning_rate=5e-5, ... per_device_train_batch_size=batch_size, ... per_device_eval_batch_size=batch_size, ... warmup_ratio=0.1, ... logging_steps=10, ... load_best_model_at_end=True, ... metric_for_best_model="accuracy", ... push_to_hub=True, ... max_steps=(train_dataset.num_videos // batch_size) * num_epochs, ... ) ``` `pytorchvideo.data.Ucf101()` 함수로 반환되는 데이터 세트는 `__len__` 메소드가 구현되어 있지 않습니다. 따라서, `TrainingArguments`를 인스턴스화할 때 `max_steps`를 정의해야 합니다. 다음으로, 평가지표를 불러오고, 예측값에서 평가지표를 계산할 함수를 정의합니다. 필요한 전처리 작업은 예측된 로짓(logits)에 argmax 값을 취하는 것뿐입니다: ```py import evaluate metric = evaluate.load("accuracy") def compute_metrics(eval_pred): predictions = np.argmax(eval_pred.predictions, axis=1) return metric.compute(predictions=predictions, references=eval_pred.label_ids) ``` **평가에 대한 참고사항**: [VideoMAE 논문](https://arxiv.org/abs/2203.12602)에서 저자는 다음과 같은 평가 전략을 사용합니다. 테스트 영상에서 여러 클립을 선택하고 그 클립에 다양한 크롭을 적용하여 집계 점수를 보고합니다. 그러나 이번 튜토리얼에서는 간단함과 간결함을 위해 해당 전략을 고려하지 않습니다. 또한, 예제를 묶어서 배치를 형성하는 `collate_fn`을 정의해야 합니다. 각 배치는 `pixel_values`와 `labels`라는 2개의 키로 구성됩니다. ```py >>> def collate_fn(examples): ... # permute to (num_frames, num_channels, height, width) ... pixel_values = torch.stack( ... [example["video"].permute(1, 0, 2, 3) for example in examples] ... ) ... labels = torch.tensor([example["label"] for example in examples]) ... 
return {"pixel_values": pixel_values, "labels": labels} ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ์ด ๋ชจ๋“  ๊ฒƒ์„ ๋ฐ์ดํ„ฐ ์„ธํŠธ์™€ ํ•จ๊ป˜ `Trainer`์— ์ „๋‹ฌํ•˜๊ธฐ๋งŒ ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค: ```py >>> trainer = Trainer( ... model, ... args, ... train_dataset=train_dataset, ... eval_dataset=val_dataset, ... tokenizer=image_processor, ... compute_metrics=compute_metrics, ... data_collator=collate_fn, ... ) ``` ๋ฐ์ดํ„ฐ๋ฅผ ์ด๋ฏธ ์ฒ˜๋ฆฌํ–ˆ๋Š”๋ฐ๋„ ๋ถˆ๊ตฌํ•˜๊ณ  `image_processor`๋ฅผ ํ† ํฌ๋‚˜์ด์ € ์ธ์ˆ˜๋กœ ๋„ฃ์€ ์ด์œ ๋Š” JSON์œผ๋กœ ์ €์žฅ๋˜๋Š” ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ ๊ตฌ์„ฑ ํŒŒ์ผ์ด Hub์˜ ์ €์žฅ์†Œ์— ์—…๋กœ๋“œ๋˜๋„๋ก ํ•˜๊ธฐ ์œ„ํ•จ์ž…๋‹ˆ๋‹ค. `train` ๋ฉ”์†Œ๋“œ๋ฅผ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜์„ธ์š”: ```py >>> train_results = trainer.train() ``` ํ•™์Šต์ด ์™„๋ฃŒ๋˜๋ฉด, ๋ชจ๋ธ์„ [`~transformers.Trainer.push_to_hub`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ—ˆ๋ธŒ์— ๊ณต์œ ํ•˜์—ฌ ๋ˆ„๊ตฌ๋‚˜ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•ฉ๋‹ˆ๋‹ค: ```py >>> trainer.push_to_hub() ``` ## ์ถ”๋ก ํ•˜๊ธฐ[[inference]] ์ข‹์Šต๋‹ˆ๋‹ค. ์ด์ œ ๋ฏธ์„ธ ์กฐ์ •๋œ ๋ชจ๋ธ์„ ์ถ”๋ก ํ•˜๋Š” ๋ฐ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์˜์ƒ์„ ๋ถˆ๋Ÿฌ์˜ค์„ธ์š”: ```py >>> sample_test_video = next(iter(test_dataset)) ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/sample_gif_two.gif" alt="Teams playing basketball"/> </div> ๋ฏธ์„ธ ์กฐ์ •๋œ ๋ชจ๋ธ์„ ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ [`pipeline`](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.VideoClassificationPipeline)์—์„œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. 
๋ชจ๋ธ๋กœ ์˜์ƒ ๋ถ„๋ฅ˜๋ฅผ ํ•˜๊ธฐ ์œ„ํ•ด `pipeline`์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ณ  ์˜์ƒ์„ ์ „๋‹ฌํ•˜์„ธ์š”: ```py >>> from transformers import pipeline >>> video_cls = pipeline(model="my_awesome_video_cls_model") >>> video_cls("https://huggingface.co/datasets/sayakpaul/ucf101-subset/resolve/main/v_BasketballDunk_g14_c06.avi") [{'score': 0.9272987842559814, 'label': 'BasketballDunk'}, {'score': 0.017777055501937866, 'label': 'BabyCrawling'}, {'score': 0.01663011871278286, 'label': 'BalanceBeam'}, {'score': 0.009560945443809032, 'label': 'BandMarching'}, {'score': 0.0068979403004050255, 'label': 'BaseballPitch'}] ``` ๋งŒ์•ฝ ์›ํ•œ๋‹ค๋ฉด ์ˆ˜๋™์œผ๋กœ `pipeline`์˜ ๊ฒฐ๊ณผ๋ฅผ ์žฌํ˜„ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> def run_inference(model, video): ... # (num_frames, num_channels, height, width) ... perumuted_sample_test_video = video.permute(1, 0, 2, 3) ... inputs = { ... "pixel_values": perumuted_sample_test_video.unsqueeze(0), ... "labels": torch.tensor( ... [sample_test_video["label"]] ... ), # this can be skipped if you don't have labels available. ... } ... device = torch.device("cuda" if torch.cuda.is_available() else "cpu") ... inputs = {k: v.to(device) for k, v in inputs.items()} ... model = model.to(device) ... # forward pass ... with torch.no_grad(): ... outputs = model(**inputs) ... logits = outputs.logits ... return logits ``` ๋ชจ๋ธ์— ์ž…๋ ฅ๊ฐ’์„ ๋„ฃ๊ณ  `logits`์„ ๋ฐ˜ํ™˜๋ฐ›์œผ์„ธ์š”: ``` >>> logits = run_inference(trained_model, sample_test_video["video"]) ``` `logits`์„ ๋””์ฝ”๋”ฉํ•˜๋ฉด, ์šฐ๋ฆฌ๋Š” ๋‹ค์Œ ๊ฒฐ๊ณผ๋ฅผ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> predicted_class_idx = logits.argmax(-1).item() >>> print("Predicted class:", model.config.id2label[predicted_class_idx]) # Predicted class: BasketballDunk ```
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์‹œ๊ฐ์  ์งˆ์˜์‘๋‹ต (Visual Question Answering) [[open-in-colab]] ์‹œ๊ฐ์  ์งˆ์˜์‘๋‹ต(VQA)์€ ์ด๋ฏธ์ง€๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ๊ฐœ๋ฐฉํ˜• ์งˆ๋ฌธ์— ๋Œ€์‘ํ•˜๋Š” ์ž‘์—…์ž…๋‹ˆ๋‹ค. ์ด ์ž‘์—…์„ ์ง€์›ํ•˜๋Š” ๋ชจ๋ธ์˜ ์ž…๋ ฅ์€ ๋Œ€๋ถ€๋ถ„ ์ด๋ฏธ์ง€์™€ ์งˆ๋ฌธ์˜ ์กฐํ•ฉ์ด๋ฉฐ, ์ถœ๋ ฅ์€ ์ž์—ฐ์–ด๋กœ ๋œ ๋‹ต๋ณ€์ž…๋‹ˆ๋‹ค. VQA์˜ ์ฃผ์š” ์‚ฌ์šฉ ์‚ฌ๋ก€๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: * ์‹œ๊ฐ ์žฅ์• ์ธ์„ ์œ„ํ•œ ์ ‘๊ทผ์„ฑ ์• ํ”Œ๋ฆฌ์ผ€์ด์…˜์„ ๊ตฌ์ถ•ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. * ๊ต์œก: ๊ฐ•์˜๋‚˜ ๊ต๊ณผ์„œ์— ๋‚˜์˜จ ์‹œ๊ฐ ์ž๋ฃŒ์— ๋Œ€ํ•œ ์งˆ๋ฌธ์— ๋‹ตํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ ์ฒดํ—˜ํ˜• ์ „์‹œ์™€ ์œ ์  ๋“ฑ์—์„œ๋„ VQA๋ฅผ ํ™œ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. * ๊ณ ๊ฐ ์„œ๋น„์Šค ๋ฐ ์ „์ž์ƒ๊ฑฐ๋ž˜: VQA๋Š” ์‚ฌ์šฉ์ž๊ฐ€ ์ œํ’ˆ์— ๋Œ€ํ•ด ์งˆ๋ฌธํ•  ์ˆ˜ ์žˆ๊ฒŒ ํ•จ์œผ๋กœ์จ ์‚ฌ์šฉ์ž ๊ฒฝํ—˜์„ ํ–ฅ์ƒ์‹œํ‚ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. * ์ด๋ฏธ์ง€ ๊ฒ€์ƒ‰: VQA ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜์—ฌ ์›ํ•˜๋Š” ํŠน์„ฑ์„ ๊ฐ€์ง„ ์ด๋ฏธ์ง€๋ฅผ ๊ฒ€์ƒ‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ์‚ฌ์šฉ์ž๋Š” "๊ฐ•์•„์ง€๊ฐ€ ์žˆ์–ด?"๋ผ๊ณ  ๋ฌผ์–ด๋ด์„œ ์ฃผ์–ด์ง„ ์ด๋ฏธ์ง€ ๋ฌถ์Œ์—์„œ ๊ฐ•์•„์ง€๊ฐ€ ์žˆ๋Š” ๋ชจ๋“  ์ด๋ฏธ์ง€๋ฅผ ๋ฐ›์•„๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
์ด ๊ฐ€์ด๋“œ์—์„œ ํ•™์Šตํ•  ๋‚ด์šฉ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: - VQA ๋ชจ๋ธ ์ค‘ ํ•˜๋‚˜์ธ [ViLT](../../en/model_doc/vilt)๋ฅผ [`Graphcore/vqa` ๋ฐ์ดํ„ฐ์…‹](https://huggingface.co/datasets/Graphcore/vqa) ์—์„œ ๋ฏธ์„ธ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ• - ๋ฏธ์„ธ์กฐ์ •๋œ ViLT ๋ชจ๋ธ๋กœ ์ถ”๋ก ํ•˜๋Š” ๋ฐฉ๋ฒ• - BLIP-2 ๊ฐ™์€ ์ƒ์„ฑ ๋ชจ๋ธ๋กœ ์ œ๋กœ์ƒท VQA ์ถ”๋ก ์„ ์‹คํ–‰ํ•˜๋Š” ๋ฐฉ๋ฒ• ## ViLT ๋ฏธ์„ธ ์กฐ์ • [[finetuning-vilt]] ViLT๋Š” Vision Transformer (ViT) ๋‚ด์— ํ…์ŠคํŠธ ์ž„๋ฒ ๋”ฉ์„ ํฌํ•จํ•˜์—ฌ ๋น„์ „/์ž์—ฐ์–ด ์‚ฌ์ „ํ›ˆ๋ จ(VLP; Vision-and-Language Pretraining)์„ ์œ„ํ•œ ๊ธฐ๋ณธ ๋””์ž์ธ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ViLT ๋ชจ๋ธ์€ ๋น„์ „ ํŠธ๋žœ์Šคํฌ๋จธ(ViT)์— ํ…์ŠคํŠธ ์ž„๋ฒ ๋”ฉ์„ ๋„ฃ์–ด ๋น„์ „/์–ธ์–ด ์‚ฌ์ „ํ›ˆ๋ จ(VLP; Vision-and-Language Pre-training)์„ ์œ„ํ•œ ๊ธฐ๋ณธ์ ์ธ ๋””์ž์ธ์„ ๊ฐ–์ท„์Šต๋‹ˆ๋‹ค. ์ด ๋ชจ๋ธ์€ ์—ฌ๋Ÿฌ ๋‹ค์šด์ŠคํŠธ๋ฆผ ์ž‘์—…์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. VQA ํƒœ์Šคํฌ์—์„œ๋Š” (`[CLS]` ํ† ํฐ์˜ ์ตœ์ข… ์€๋‹‰ ์ƒํƒœ ์œ„์— ์„ ํ˜• ๋ ˆ์ด์–ด์ธ) ๋ถ„๋ฅ˜ ํ—ค๋”๊ฐ€ ์žˆ์œผ๋ฉฐ ๋ฌด์ž‘์œ„๋กœ ์ดˆ๊ธฐํ™”๋ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์—ฌ๊ธฐ์—์„œ ์‹œ๊ฐ์  ์งˆ์˜์‘๋‹ต์€ **๋ถ„๋ฅ˜ ๋ฌธ์ œ**๋กœ ์ทจ๊ธ‰๋ฉ๋‹ˆ๋‹ค. ์ตœ๊ทผ์˜ BLIP, BLIP-2, InstructBLIP์™€ ๊ฐ™์€ ๋ชจ๋ธ๋“ค์€ VQA๋ฅผ ์ƒ์„ฑํ˜• ์ž‘์—…์œผ๋กœ ๊ฐ„์ฃผํ•ฉ๋‹ˆ๋‹ค. ๊ฐ€์ด๋“œ์˜ ํ›„๋ฐ˜๋ถ€์—์„œ๋Š” ์ด๋Ÿฐ ๋ชจ๋ธ๋“ค์„ ์‚ฌ์šฉํ•˜์—ฌ ์ œ๋กœ์ƒท VQA ์ถ”๋ก ์„ ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด ์„ค๋ช…ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ์‹œ์ž‘ํ•˜๊ธฐ ์ „ ํ•„์š”ํ•œ ๋ชจ๋“  ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์„ค์น˜ํ–ˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”. ```bash pip install -q transformers datasets ``` ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๋ชจ๋ธ์„ ๊ณต์œ ํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅ ๋“œ๋ฆฝ๋‹ˆ๋‹ค. Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜์—ฌ ๐Ÿค— Hub์— ์—…๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฉ”์‹œ์ง€๊ฐ€ ๋‚˜ํƒ€๋‚˜๋ฉด ๋กœ๊ทธ์ธํ•  ํ† ํฐ์„ ์ž…๋ ฅํ•˜์„ธ์š”: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ๋ชจ๋ธ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์ „์—ญ ๋ณ€์ˆ˜๋กœ ์„ ์–ธํ•˜์„ธ์š”. 
```py >>> model_checkpoint = "dandelin/vilt-b32-mlm" ``` ## ๋ฐ์ดํ„ฐ ๊ฐ€์ ธ์˜ค๊ธฐ [[load-the-data]] ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” `Graphcore/vqa` ๋ฐ์ดํ„ฐ์„ธํŠธ์˜ ์ž‘์€ ์ƒ˜ํ”Œ์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ „์ฒด ๋ฐ์ดํ„ฐ์„ธํŠธ๋Š” [๐Ÿค— Hub](https://huggingface.co/datasets/Graphcore/vqa) ์—์„œ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [`Graphcore/vqa` ๋ฐ์ดํ„ฐ์„ธํŠธ](https://huggingface.co/datasets/Graphcore/vqa) ์˜ ๋Œ€์•ˆ์œผ๋กœ ๊ณต์‹ [VQA ๋ฐ์ดํ„ฐ์„ธํŠธ ํŽ˜์ด์ง€](https://visualqa.org/download.html) ์—์„œ ๋™์ผํ•œ ๋ฐ์ดํ„ฐ๋ฅผ ์ˆ˜๋™์œผ๋กœ ๋‹ค์šด๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ง์ ‘ ๊ณต์ˆ˜ํ•œ ๋ฐ์ดํ„ฐ๋กœ ํŠœํ† ๋ฆฌ์–ผ์„ ๋”ฐ๋ฅด๊ณ  ์‹ถ๋‹ค๋ฉด [์ด๋ฏธ์ง€ ๋ฐ์ดํ„ฐ์„ธํŠธ ๋งŒ๋“ค๊ธฐ](https://huggingface.co/docs/datasets/image_dataset#loading-script) ๋ผ๋Š” ๐Ÿค— Datasets ๋ฌธ์„œ๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ๊ฒ€์ฆ ๋ฐ์ดํ„ฐ์˜ ์ฒซ 200๊ฐœ ํ•ญ๋ชฉ์„ ๋ถˆ๋Ÿฌ์™€ ๋ฐ์ดํ„ฐ์„ธํŠธ์˜ ํŠน์„ฑ์„ ํ™•์ธํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```python >>> from datasets import load_dataset >>> dataset = load_dataset("Graphcore/vqa", split="validation[:200]") >>> dataset Dataset({ features: ['question', 'question_type', 'question_id', 'image_id', 'answer_type', 'label'], num_rows: 200 }) ``` ์˜ˆ์ œ๋ฅผ ํ•˜๋‚˜ ๋ฝ‘์•„ ๋ฐ์ดํ„ฐ์„ธํŠธ์˜ ํŠน์„ฑ์„ ์ดํ•ดํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. 
```py >>> dataset[0] {'question': 'Where is he looking?', 'question_type': 'none of the above', 'question_id': 262148000, 'image_id': '/root/.cache/huggingface/datasets/downloads/extracted/ca733e0e000fb2d7a09fbcc94dbfe7b5a30750681d0e965f8e0a23b1c2f98c75/val2014/COCO_val2014_000000262148.jpg', 'answer_type': 'other', 'label': {'ids': ['at table', 'down', 'skateboard', 'table'], 'weights': [0.30000001192092896, 1.0, 0.30000001192092896, 0.30000001192092896]}} ``` ๋ฐ์ดํ„ฐ์„ธํŠธ์—๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์€ ํŠน์„ฑ์ด ํฌํ•จ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค: * `question`: ์ด๋ฏธ์ง€์— ๋Œ€ํ•œ ์งˆ๋ฌธ * `image_id`: ์งˆ๋ฌธ๊ณผ ๊ด€๋ จ๋œ ์ด๋ฏธ์ง€์˜ ๊ฒฝ๋กœ * `label`: ๋ฐ์ดํ„ฐ์˜ ๋ ˆ์ด๋ธ” (annotations) ๋‚˜๋จธ์ง€ ํŠน์„ฑ๋“ค์€ ํ•„์š”ํ•˜์ง€ ์•Š๊ธฐ ๋•Œ๋ฌธ์— ์‚ญ์ œํ•ด๋„ ๋ฉ๋‹ˆ๋‹ค: ```py >>> dataset = dataset.remove_columns(['question_type', 'question_id', 'answer_type']) ``` ๋ณด์‹œ๋‹ค์‹œํ”ผ `label` ํŠน์„ฑ์€ ๊ฐ™์€ ์งˆ๋ฌธ๋งˆ๋‹ค ๋‹ต๋ณ€์ด ์—ฌ๋Ÿฌ ๊ฐœ ์žˆ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ชจ๋‘ ๋‹ค๋ฅธ ๋ฐ์ดํ„ฐ ๋ผ๋ฒจ๋Ÿฌ๋“ค๋กœ๋ถ€ํ„ฐ ์ˆ˜์ง‘๋˜์—ˆ๊ธฐ ๋•Œ๋ฌธ์ธ๋ฐ์š”. ์งˆ๋ฌธ์˜ ๋‹ต๋ณ€์€ ์ฃผ๊ด€์ ์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๊ฒฝ์šฐ ์งˆ๋ฌธ์€ "๊ทธ๋Š” ์–ด๋””๋ฅผ ๋ณด๊ณ  ์žˆ๋‚˜์š”?" ์˜€์ง€๋งŒ, ์–ด๋–ค ์‚ฌ๋žŒ๋“ค์€ "์•„๋ž˜"๋กœ ๋ ˆ์ด๋ธ”์„ ๋‹ฌ์•˜๊ณ , ๋‹ค๋ฅธ ์‚ฌ๋žŒ๋“ค์€ "ํ…Œ์ด๋ธ”" ๋˜๋Š” "์Šค์ผ€์ดํŠธ๋ณด๋“œ" ๋“ฑ์œผ๋กœ ์ฃผ์„์„ ๋‹ฌ์•˜์Šต๋‹ˆ๋‹ค. ์•„๋ž˜์˜ ์ด๋ฏธ์ง€๋ฅผ ๋ณด๊ณ  ์–ด๋–ค ๋‹ต๋ณ€์„ ์„ ํƒํ•  ๊ฒƒ์ธ์ง€ ์ƒ๊ฐํ•ด ๋ณด์„ธ์š”: ```python >>> from PIL import Image >>> image = Image.open(dataset[0]['image_id']) >>> image ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/vqa-example.png" alt="VQA Image Example"/> </div> ์งˆ๋ฌธ๊ณผ ๋‹ต๋ณ€์˜ ๋ชจํ˜ธ์„ฑ์œผ๋กœ ์ธํ•ด ์ด๋Ÿฌํ•œ ๋ฐ์ดํ„ฐ์„ธํŠธ๋Š” ์—ฌ๋Ÿฌ ๊ฐœ์˜ ๋‹ต๋ณ€์ด ๊ฐ€๋Šฅํ•˜๋ฏ€๋กœ ๋‹ค์ค‘ ๋ ˆ์ด๋ธ” ๋ถ„๋ฅ˜ ๋ฌธ์ œ๋กœ ์ฒ˜๋ฆฌ๋ฉ๋‹ˆ๋‹ค. 
๊ฒŒ๋‹ค๊ฐ€, ์›ํ•ซ(one-hot) ์ธ์ฝ”๋”ฉ ๋ฒกํ„ฐ๋ฅผ ์ƒ์„ฑํ•˜๊ธฐ๋ณด๋‹ค๋Š” ๋ ˆ์ด๋ธ”์—์„œ ํŠน์ • ๋‹ต๋ณ€์ด ๋‚˜ํƒ€๋‚˜๋Š” ํšŸ์ˆ˜๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ์†Œํ”„ํŠธ ์ธ์ฝ”๋”ฉ์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ์œ„์˜ ์˜ˆ์‹œ์—์„œ "์•„๋ž˜"๋ผ๋Š” ๋‹ต๋ณ€์ด ๋‹ค๋ฅธ ๋‹ต๋ณ€๋ณด๋‹ค ํ›จ์”ฌ ๋” ์ž์ฃผ ์„ ํƒ๋˜์—ˆ๊ธฐ ๋•Œ๋ฌธ์— ๋ฐ์ดํ„ฐ์„ธํŠธ์—์„œ `weight`๋ผ๊ณ  ๋ถˆ๋ฆฌ๋Š” ์ ์ˆ˜๋กœ 1.0์„ ๊ฐ€์ง€๋ฉฐ, ๋‚˜๋จธ์ง€ ๋‹ต๋ณ€๋“ค์€ 1.0 ๋ฏธ๋งŒ์˜ ์ ์ˆ˜๋ฅผ ๊ฐ€์ง‘๋‹ˆ๋‹ค. ์ ์ ˆํ•œ ๋ถ„๋ฅ˜ ํ—ค๋”๋กœ ๋ชจ๋ธ์„ ๋‚˜์ค‘์— ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ธฐ ์œ„ํ•ด ๋ ˆ์ด๋ธ”์„ ์ •์ˆ˜๋กœ ๋งคํ•‘ํ•œ ๋”•์…”๋„ˆ๋ฆฌ ํ•˜๋‚˜, ๋ฐ˜๋Œ€๋กœ ์ •์ˆ˜๋ฅผ ๋ ˆ์ด๋ธ”๋กœ ๋งคํ•‘ํ•œ ๋”•์…”๋„ˆ๋ฆฌ ํ•˜๋‚˜ ์ด 2๊ฐœ์˜ ๋”•์…”๋„ˆ๋ฆฌ๋ฅผ ์ƒ์„ฑํ•˜์„ธ์š”: ```py >>> import itertools >>> labels = [item['ids'] for item in dataset['label']] >>> flattened_labels = list(itertools.chain(*labels)) >>> unique_labels = list(set(flattened_labels)) >>> label2id = {label: idx for idx, label in enumerate(unique_labels)} >>> id2label = {idx: label for label, idx in label2id.items()} ``` ์ด์ œ ๋งคํ•‘์ด ์™„๋ฃŒ๋˜์—ˆ์œผ๋ฏ€๋กœ ๋ฌธ์ž์—ด ๋‹ต๋ณ€์„ ํ•ด๋‹น id๋กœ ๊ต์ฒดํ•˜๊ณ , ๋ฐ์ดํ„ฐ์„ธํŠธ์˜ ๋” ํŽธ๋ฆฌํ•œ ํ›„์ฒ˜๋ฆฌ๋ฅผ ์œ„ํ•ด ํŽธํ‰ํ™” ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```python >>> def replace_ids(inputs): ... inputs["label"]["ids"] = [label2id[x] for x in inputs["label"]["ids"]] ... return inputs >>> dataset = dataset.map(replace_ids) >>> flat_dataset = dataset.flatten() >>> flat_dataset.features {'question': Value(dtype='string', id=None), 'image_id': Value(dtype='string', id=None), 'label.ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'label.weights': Sequence(feature=Value(dtype='float64', id=None), length=-1, id=None)} ``` ## ๋ฐ์ดํ„ฐ ์ „์ฒ˜๋ฆฌ [[preprocessing-data]] ๋‹ค์Œ ๋‹จ๊ณ„๋Š” ๋ชจ๋ธ์„ ์œ„ํ•ด ์ด๋ฏธ์ง€์™€ ํ…์ŠคํŠธ ๋ฐ์ดํ„ฐ๋ฅผ ์ค€๋น„ํ•˜๊ธฐ ์œ„ํ•ด ViLT ํ”„๋กœ์„ธ์„œ๋ฅผ ๊ฐ€์ ธ์˜ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. 
[`ViltProcessor`]๋Š” BERT ํ† ํฌ๋‚˜์ด์ €์™€ ViLT ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ํŽธ๋ฆฌํ•˜๊ฒŒ ํ•˜๋‚˜์˜ ํ”„๋กœ์„ธ์„œ๋กœ ๋ฌถ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import ViltProcessor >>> processor = ViltProcessor.from_pretrained(model_checkpoint) ``` ๋ฐ์ดํ„ฐ๋ฅผ ์ „์ฒ˜๋ฆฌํ•˜๋ ค๋ฉด ์ด๋ฏธ์ง€์™€ ์งˆ๋ฌธ์„ [`ViltProcessor`]๋กœ ์ธ์ฝ”๋”ฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ํ”„๋กœ์„ธ์„œ๋Š” [`BertTokenizerFast`]๋กœ ํ…์ŠคํŠธ๋ฅผ ํ† ํฌ๋‚˜์ด์ฆˆํ•˜๊ณ  ํ…์ŠคํŠธ ๋ฐ์ดํ„ฐ๋ฅผ ์œ„ํ•ด `input_ids`, `attention_mask` ๋ฐ `token_type_ids`๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€๋Š” [`ViltImageProcessor`]๋กœ ์ด๋ฏธ์ง€๋ฅผ ํฌ๊ธฐ ์กฐ์ •ํ•˜๊ณ  ์ •๊ทœํ™”ํ•˜๋ฉฐ, `pixel_values`์™€ `pixel_mask`๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฐ ์ „์ฒ˜๋ฆฌ ๋‹จ๊ณ„๋Š” ๋ชจ๋‘ ๋‚ด๋ถ€์—์„œ ์ด๋ฃจ์–ด์ง€๋ฏ€๋กœ, `processor`๋ฅผ ํ˜ธ์ถœํ•˜๊ธฐ๋งŒ ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์•„์ง ํƒ€๊ฒŸ ๋ ˆ์ด๋ธ”์ด ์™„์„ฑ๋˜์ง€ ์•Š์•˜์Šต๋‹ˆ๋‹ค. ํƒ€๊ฒŸ์˜ ํ‘œํ˜„์—์„œ ๊ฐ ์š”์†Œ๋Š” ๊ฐ€๋Šฅํ•œ ๋‹ต๋ณ€(๋ ˆ์ด๋ธ”)์— ํ•ด๋‹นํ•ฉ๋‹ˆ๋‹ค. ์ •ํ™•ํ•œ ๋‹ต๋ณ€์˜ ์š”์†Œ๋Š” ํ•ด๋‹น ์ ์ˆ˜(weight)๋ฅผ ์œ ์ง€์‹œํ‚ค๊ณ  ๋‚˜๋จธ์ง€ ์š”์†Œ๋Š” 0์œผ๋กœ ์„ค์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์•„๋ž˜ ํ•จ์ˆ˜๊ฐ€ ์œ„์—์„œ ์„ค๋ช…ํ•œ๋Œ€๋กœ ์ด๋ฏธ์ง€์™€ ์งˆ๋ฌธ์— `processor`๋ฅผ ์ ์šฉํ•˜๊ณ  ๋ ˆ์ด๋ธ”์„ ํ˜•์‹์— ๋งž์ถฅ๋‹ˆ๋‹ค: ```py >>> import torch >>> def preprocess_data(examples): ... image_paths = examples['image_id'] ... images = [Image.open(image_path) for image_path in image_paths] ... texts = examples['question'] ... encoding = processor(images, texts, padding="max_length", truncation=True, return_tensors="pt") ... for k, v in encoding.items(): ... encoding[k] = v.squeeze() ... targets = [] ... for labels, scores in zip(examples['label.ids'], examples['label.weights']): ... target = torch.zeros(len(id2label)) ... for label, score in zip(labels, scores): ... target[label] = score ... targets.append(target) ... encoding["labels"] = targets ... return encoding ``` ์ „์ฒด ๋ฐ์ดํ„ฐ์„ธํŠธ์— ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์ ์šฉํ•˜๋ ค๋ฉด ๐Ÿค— Datasets์˜ [`~datasets.map`] ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์‹ญ์‹œ์˜ค. 
`batched=True`๋ฅผ ์„ค์ •ํ•˜์—ฌ ๋ฐ์ดํ„ฐ์„ธํŠธ์˜ ์—ฌ๋Ÿฌ ์š”์†Œ๋ฅผ ํ•œ ๋ฒˆ์— ์ฒ˜๋ฆฌํ•จ์œผ๋กœ์จ `map`์„ ๋” ๋น ๋ฅด๊ฒŒ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ์‹œ์ ์—์„œ ํ•„์š”ํ•˜์ง€ ์•Š์€ ์—ด์€ ์ œ๊ฑฐํ•˜์„ธ์š”. ```py >>> processed_dataset = flat_dataset.map(preprocess_data, batched=True, remove_columns=['question','question_type', 'question_id', 'image_id', 'answer_type', 'label.ids', 'label.weights']) >>> processed_dataset Dataset({ features: ['input_ids', 'token_type_ids', 'attention_mask', 'pixel_values', 'pixel_mask', 'labels'], num_rows: 200 }) ``` ๋งˆ์ง€๋ง‰ ๋‹จ๊ณ„๋กœ, [`DefaultDataCollator`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์˜ˆ์ œ๋กœ ์“ธ ๋ฐฐ์น˜๋ฅผ ์ƒ์„ฑํ•˜์„ธ์š”: ```py >>> from transformers import DefaultDataCollator >>> data_collator = DefaultDataCollator() ``` ## ๋ชจ๋ธ ํ›ˆ๋ จ [[train-the-model]] ์ด์ œ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๊ธฐ ์œ„ํ•ด ์ค€๋น„๋˜์—ˆ์Šต๋‹ˆ๋‹ค! [`ViltForQuestionAnswering`]์œผ๋กœ ViLT๋ฅผ ๊ฐ€์ ธ์˜ฌ ์ฐจ๋ก€์ž…๋‹ˆ๋‹ค. ๋ ˆ์ด๋ธ”์˜ ์ˆ˜์™€ ๋ ˆ์ด๋ธ” ๋งคํ•‘์„ ์ง€์ •ํ•˜์„ธ์š”: ```py >>> from transformers import ViltForQuestionAnswering >>> model = ViltForQuestionAnswering.from_pretrained(model_checkpoint, num_labels=len(id2label), id2label=id2label, label2id=label2id) ``` ์ด ์‹œ์ ์—์„œ๋Š” ๋‹ค์Œ ์„ธ ๋‹จ๊ณ„๋งŒ ๋‚จ์•˜์Šต๋‹ˆ๋‹ค: 1. [`TrainingArguments`]์—์„œ ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ •์˜ํ•˜์„ธ์š”: ```py >>> from transformers import TrainingArguments >>> repo_id = "MariaK/vilt_finetuned_200" >>> training_args = TrainingArguments( ... output_dir=repo_id, ... per_device_train_batch_size=4, ... num_train_epochs=20, ... save_steps=200, ... logging_steps=50, ... learning_rate=5e-5, ... save_total_limit=2, ... remove_unused_columns=False, ... push_to_hub=True, ... ) ``` 2. ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ์„ธํŠธ, ํ”„๋กœ์„ธ์„œ, ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ์™€ ํ•จ๊ป˜ ํ›ˆ๋ จ ์ธ์ˆ˜๋ฅผ [`Trainer`]์— ์ „๋‹ฌํ•˜์„ธ์š”: ```py >>> from transformers import Trainer >>> trainer = Trainer( ... model=model, ... args=training_args, ... data_collator=data_collator, ... train_dataset=processed_dataset, ... 
tokenizer=processor, ... ) ``` 3. [`~Trainer.train`]์„ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜์„ธ์š”: ```py >>> trainer.train() ``` ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด, [`~Trainer.push_to_hub`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๐Ÿค— Hub์— ๋ชจ๋ธ์„ ๊ณต์œ ํ•˜์„ธ์š”: ```py >>> trainer.push_to_hub() ``` ## ์ถ”๋ก  [[inference]] ViLT ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ๐Ÿค— Hub์— ์—…๋กœ๋“œํ–ˆ๋‹ค๋ฉด ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฏธ์„ธ ์กฐ์ •๋œ ๋ชจ๋ธ์„ ์ถ”๋ก ์— ์‚ฌ์šฉํ•ด๋ณด๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ [`Pipeline`]์—์„œ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ```py >>> from transformers import pipeline >>> pipe = pipeline("visual-question-answering", model="MariaK/vilt_finetuned_200") ``` ์ด ๊ฐ€์ด๋“œ์˜ ๋ชจ๋ธ์€ 200๊ฐœ์˜ ์˜ˆ์ œ์—์„œ๋งŒ ํ›ˆ๋ จ๋˜์—ˆ์œผ๋ฏ€๋กœ ๊ทธ๋‹ค์ง€ ๋งŽ์€ ๊ฒƒ์„ ๊ธฐ๋Œ€ํ•  ์ˆ˜๋Š” ์—†์Šต๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ์„ธํŠธ์˜ ์ฒซ ๋ฒˆ์งธ ์˜ˆ์ œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ถ”๋ก  ๊ฒฐ๊ณผ๋ฅผ ์„ค๋ช…ํ•ด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> example = dataset[0] >>> image = Image.open(example['image_id']) >>> question = example['question'] >>> print(question) >>> pipe(image, question, top_k=1) "Where is he looking?" [{'score': 0.5498199462890625, 'answer': 'down'}] ``` ๋น„๋ก ํ™•์‹ ์€ ๋ณ„๋กœ ์—†์ง€๋งŒ, ๋ชจ๋ธ์€ ์‹ค์ œ๋กœ ๋ฌด์–ธ๊ฐ€๋ฅผ ๋ฐฐ์› ์Šต๋‹ˆ๋‹ค. ๋” ๋งŽ์€ ์˜ˆ์ œ์™€ ๋” ๊ธด ํ›ˆ๋ จ ๊ธฐ๊ฐ„์ด ์ฃผ์–ด์ง„๋‹ค๋ฉด ๋ถ„๋ช… ๋” ๋‚˜์€ ๊ฒฐ๊ณผ๋ฅผ ์–ป์„ ์ˆ˜ ์žˆ์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค! ์›ํ•œ๋‹ค๋ฉด ํŒŒ์ดํ”„๋ผ์ธ์˜ ๊ฒฐ๊ณผ๋ฅผ ์ˆ˜๋™์œผ๋กœ ๋ณต์ œํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: 1. ์ด๋ฏธ์ง€์™€ ์งˆ๋ฌธ์„ ๊ฐ€์ ธ์™€์„œ ํ”„๋กœ์„ธ์„œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์— ์ค€๋น„ํ•ฉ๋‹ˆ๋‹ค. 2. ์ „์ฒ˜๋ฆฌ๋œ ๊ฒฐ๊ณผ๋ฅผ ๋ชจ๋ธ์— ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. 3. ๋กœ์ง“์—์„œ ๊ฐ€์žฅ ๊ฐ€๋Šฅ์„ฑ ์žˆ๋Š” ๋‹ต๋ณ€์˜ id๋ฅผ ๊ฐ€์ ธ์™€์„œ `id2label`์—์„œ ์‹ค์ œ ๋‹ต๋ณ€์„ ์ฐพ์Šต๋‹ˆ๋‹ค. 
```py >>> processor = ViltProcessor.from_pretrained("MariaK/vilt_finetuned_200") >>> image = Image.open(example['image_id']) >>> question = example['question'] >>> # prepare inputs >>> inputs = processor(image, question, return_tensors="pt") >>> model = ViltForQuestionAnswering.from_pretrained("MariaK/vilt_finetuned_200") >>> # forward pass >>> with torch.no_grad(): ... outputs = model(**inputs) >>> logits = outputs.logits >>> idx = logits.argmax(-1).item() >>> print("Predicted answer:", model.config.id2label[idx]) Predicted answer: down ``` ## ์ œ๋กœ์ƒท VQA [[zeroshot-vqa]] ์ด์ „ ๋ชจ๋ธ์€ VQA๋ฅผ ๋ถ„๋ฅ˜ ๋ฌธ์ œ๋กœ ์ฒ˜๋ฆฌํ–ˆ์Šต๋‹ˆ๋‹ค. BLIP, BLIP-2 ๋ฐ InstructBLIP์™€ ๊ฐ™์€ ์ตœ๊ทผ์˜ ๋ชจ๋ธ์€ VQA๋ฅผ ์ƒ์„ฑ ์ž‘์—…์œผ๋กœ ์ ‘๊ทผํ•ฉ๋‹ˆ๋‹ค. [BLIP-2](../../en/model_doc/blip-2)๋ฅผ ์˜ˆ๋กœ ๋“ค์–ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ์ด ๋ชจ๋ธ์€ ์‚ฌ์ „ํ›ˆ๋ จ๋œ ๋น„์ „ ์ธ์ฝ”๋”์™€ LLM์˜ ๋ชจ๋“  ์กฐํ•ฉ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ์ƒˆ๋กœ์šด ๋น„์ „-์ž์—ฐ์–ด ์‚ฌ์ „ ํ•™์Šต ํŒจ๋Ÿฌ๋‹ค์ž„์„ ๋„์ž…ํ–ˆ์Šต๋‹ˆ๋‹ค. ([BLIP-2 ๋ธ”๋กœ๊ทธ ํฌ์ŠคํŠธ](https://huggingface.co/blog/blip-2)๋ฅผ ํ†ตํ•ด ๋” ์ž์„ธํžˆ ์•Œ์•„๋ณผ ์ˆ˜ ์žˆ์–ด์š”) ์ด๋ฅผ ํ†ตํ•ด ์‹œ๊ฐ์  ์งˆ์˜์‘๋‹ต์„ ํฌํ•จํ•œ ์—ฌ๋Ÿฌ ๋น„์ „-์ž์—ฐ์–ด ์ž‘์—…์—์„œ SOTA๋ฅผ ๋‹ฌ์„ฑํ•  ์ˆ˜ ์žˆ์—ˆ์Šต๋‹ˆ๋‹ค. ์ด ๋ชจ๋ธ์„ ์–ด๋–ป๊ฒŒ VQA์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š”์ง€ ์„ค๋ช…ํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ๋จผ์ € ๋ชจ๋ธ์„ ๊ฐ€์ ธ์™€ ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ GPU๊ฐ€ ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ ๊ฒฝ์šฐ ๋ชจ๋ธ์„ ๋ช…์‹œ์ ์œผ๋กœ GPU๋กœ ์ „์†กํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค. 
์ด์ „์—๋Š” ํ›ˆ๋ จํ•  ๋•Œ ์“ฐ์ง€ ์•Š์€ ์ด์œ ๋Š” [`Trainer`]๊ฐ€ ์ด ๋ถ€๋ถ„์„ ์ž๋™์œผ๋กœ ์ฒ˜๋ฆฌํ•˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค: ```py >>> from transformers import AutoProcessor, Blip2ForConditionalGeneration >>> import torch >>> processor = AutoProcessor.from_pretrained("Salesforce/blip2-opt-2.7b") >>> model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16) >>> device = "cuda" if torch.cuda.is_available() else "cpu" >>> model.to(device) ``` ๋ชจ๋ธ์€ ์ด๋ฏธ์ง€์™€ ํ…์ŠคํŠธ๋ฅผ ์ž…๋ ฅ์œผ๋กœ ๋ฐ›์œผ๋ฏ€๋กœ, VQA ๋ฐ์ดํ„ฐ์„ธํŠธ์˜ ์ฒซ ๋ฒˆ์งธ ์˜ˆ์ œ์—์„œ์™€ ๋™์ผํ•œ ์ด๋ฏธ์ง€/์งˆ๋ฌธ ์Œ์„ ์‚ฌ์šฉํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> example = dataset[0] >>> image = Image.open(example['image_id']) >>> question = example['question'] ``` BLIP-2๋ฅผ ์‹œ๊ฐ์  ์งˆ์˜์‘๋‹ต ์ž‘์—…์— ์‚ฌ์šฉํ•˜๋ ค๋ฉด ํ…์ŠคํŠธ ํ”„๋กฌํ”„ํŠธ๊ฐ€ `Question: {} Answer:` ํ˜•์‹์„ ๋”ฐ๋ผ์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```py >>> prompt = f"Question: {question} Answer:" ``` ์ด์ œ ๋ชจ๋ธ์˜ ํ”„๋กœ์„ธ์„œ๋กœ ์ด๋ฏธ์ง€/ํ”„๋กฌํ”„ํŠธ๋ฅผ ์ „์ฒ˜๋ฆฌํ•˜๊ณ , ์ฒ˜๋ฆฌ๋œ ์ž…๋ ฅ์„ ๋ชจ๋ธ์„ ํ†ตํ•ด ์ „๋‹ฌํ•˜๊ณ , ์ถœ๋ ฅ์„ ๋””์ฝ”๋“œํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```py >>> inputs = processor(image, text=prompt, return_tensors="pt").to(device, torch.float16) >>> generated_ids = model.generate(**inputs, max_new_tokens=10) >>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip() >>> print(generated_text) "He is looking at the crowd" ``` ๋ณด์‹œ๋‹ค์‹œํ”ผ ๋ชจ๋ธ์€ ๊ตฐ์ค‘์„ ์ธ์‹ํ•˜๊ณ , ์–ผ๊ตด์˜ ๋ฐฉํ–ฅ(์•„๋ž˜์ชฝ์„ ๋ณด๊ณ  ์žˆ์Œ)์„ ์ธ์‹ํ–ˆ์ง€๋งŒ, ๊ตฐ์ค‘์ด ์Šค์ผ€์ดํ„ฐ ๋’ค์— ์žˆ๋‹ค๋Š” ์‚ฌ์‹ค์„ ๋†“์ณค์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์‚ฌ๋žŒ์ด ์ง์ ‘ ๋ผ๋ฒจ๋งํ•œ ๋ฐ์ดํ„ฐ์…‹์„ ์–ป์„ ์ˆ˜ ์—†๋Š” ๊ฒฝ์šฐ์—, ์ด ์ ‘๊ทผ๋ฒ•์€ ๋น ๋ฅด๊ฒŒ ์œ ์šฉํ•œ ๊ฒฐ๊ณผ๋ฅผ ์ƒ์„ฑํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜[[image-classification]] [[open-in-colab]] <Youtube id="tjAIM7BOYhw"/> ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜๋Š” ์ด๋ฏธ์ง€์— ๋ ˆ์ด๋ธ” ๋˜๋Š” ํด๋ž˜์Šค๋ฅผ ํ• ๋‹นํ•ฉ๋‹ˆ๋‹ค. ํ…์ŠคํŠธ ๋˜๋Š” ์˜ค๋””์˜ค ๋ถ„๋ฅ˜์™€ ๋‹ฌ๋ฆฌ ์ž…๋ ฅ์€ ์ด๋ฏธ์ง€๋ฅผ ๊ตฌ์„ฑํ•˜๋Š” ํ”ฝ์…€ ๊ฐ’์ž…๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜์—๋Š” ์ž์—ฐ์žฌํ•ด ํ›„ ํ”ผํ•ด ๊ฐ์ง€, ๋†์ž‘๋ฌผ ๊ฑด๊ฐ• ๋ชจ๋‹ˆํ„ฐ๋ง, ์˜๋ฃŒ ์ด๋ฏธ์ง€์—์„œ ์งˆ๋ณ‘์˜ ์ง•ํ›„ ๊ฒ€์‚ฌ ์ง€์› ๋“ฑ ๋‹ค์–‘ํ•œ ์‘์šฉ ์‚ฌ๋ก€๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ๋‹ค์Œ์„ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค: 1. [Food-101](https://huggingface.co/datasets/food101) ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ [ViT](model_doc/vit)๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜์—ฌ ์ด๋ฏธ์ง€์—์„œ ์‹ํ’ˆ ํ•ญ๋ชฉ์„ ๋ถ„๋ฅ˜ํ•ฉ๋‹ˆ๋‹ค. 2. ์ถ”๋ก ์„ ์œ„ํ•ด ๋ฏธ์„ธ ์กฐ์ • ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. 
<Tip> ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ ์„ค๋ช…ํ•˜๋Š” ์ž‘์—…์€ ๋‹ค์Œ ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜์— ์˜ํ•ด ์ง€์›๋ฉ๋‹ˆ๋‹ค: <!--This tip is automatically generated by `make fix-copies`, do not fill manually!--> [BEiT](../model_doc/beit), [BiT](../model_doc/bit), [ConvNeXT](../model_doc/convnext), [ConvNeXTV2](../model_doc/convnextv2), [CvT](../model_doc/cvt), [Data2VecVision](../model_doc/data2vec-vision), [DeiT](../model_doc/deit), [DiNAT](../model_doc/dinat), [EfficientFormer](../model_doc/efficientformer), [EfficientNet](../model_doc/efficientnet), [FocalNet](../model_doc/focalnet), [ImageGPT](../model_doc/imagegpt), [LeViT](../model_doc/levit), [MobileNetV1](../model_doc/mobilenet_v1), [MobileNetV2](../model_doc/mobilenet_v2), [MobileViT](../model_doc/mobilevit), [NAT](../model_doc/nat), [Perceiver](../model_doc/perceiver), [PoolFormer](../model_doc/poolformer), [RegNet](../model_doc/regnet), [ResNet](../model_doc/resnet), [SegFormer](../model_doc/segformer), [Swin Transformer](../model_doc/swin), [Swin Transformer V2](../model_doc/swinv2), [VAN](../model_doc/van), [ViT](../model_doc/vit), [ViT Hybrid](../model_doc/vit_hybrid), [ViTMSN](../model_doc/vit_msn) <!--End of the generated tip--> </Tip> ์‹œ์ž‘ํ•˜๊ธฐ ์ „์—, ํ•„์š”ํ•œ ๋ชจ๋“  ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install transformers datasets evaluate ``` Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜์—ฌ ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๊ณต์œ ํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ๋ฉ”์‹œ์ง€๊ฐ€ ํ‘œ์‹œ๋˜๋ฉด, ํ† ํฐ์„ ์ž…๋ ฅํ•˜์—ฌ ๋กœ๊ทธ์ธํ•˜์„ธ์š”: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## Food-101 ๋ฐ์ดํ„ฐ ์„ธํŠธ ๊ฐ€์ ธ์˜ค๊ธฐ[[load-food101-dataset]] ๐Ÿค— Datasets ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ Food-101 ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ๋” ์ž‘์€ ๋ถ€๋ถ„ ์ง‘ํ•ฉ์„ ๊ฐ€์ ธ์˜ค๋Š” ๊ฒƒ์œผ๋กœ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค. 
์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ๋Œ€ํ•œ ํ›ˆ๋ จ์— ๋งŽ์€ ์‹œ๊ฐ„์„ ํ• ์• ํ•˜๊ธฐ ์ „์— ์‹คํ—˜์„ ํ†ตํ•ด ๋ชจ๋“  ๊ฒƒ์ด ์ œ๋Œ€๋กœ ์ž‘๋™ํ•˜๋Š”์ง€ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> from datasets import load_dataset >>> food = load_dataset("food101", split="train[:5000]") ``` ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ `train`์„ [`~datasets.Dataset.train_test_split`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ›ˆ๋ จ ๋ฐ ํ…Œ์ŠคํŠธ ์„ธํŠธ๋กœ ๋ถ„ํ• ํ•˜์„ธ์š”: ```py >>> food = food.train_test_split(test_size=0.2) ``` ๊ทธ๋ฆฌ๊ณ  ์˜ˆ์‹œ๋ฅผ ์‚ดํŽด๋ณด์„ธ์š”: ```py >>> food["train"][0] {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=512x512 at 0x7F52AFC8AC50>, 'label': 79} ``` ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ๊ฐ ์˜ˆ์ œ์—๋Š” ๋‘ ๊ฐœ์˜ ํ•„๋“œ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค: - `image`: ์‹ํ’ˆ ํ•ญ๋ชฉ์˜ PIL ์ด๋ฏธ์ง€ - `label`: ์‹ํ’ˆ ํ•ญ๋ชฉ์˜ ๋ ˆ์ด๋ธ” ํด๋ž˜์Šค ๋ชจ๋ธ์ด ๋ ˆ์ด๋ธ” ID์—์„œ ๋ ˆ์ด๋ธ” ์ด๋ฆ„์„ ์‰ฝ๊ฒŒ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ๋„๋ก ๋ ˆ์ด๋ธ” ์ด๋ฆ„์„ ์ •์ˆ˜๋กœ ๋งคํ•‘ํ•˜๊ณ , ์ •์ˆ˜๋ฅผ ๋ ˆ์ด๋ธ” ์ด๋ฆ„์œผ๋กœ ๋งคํ•‘ํ•˜๋Š” ์‚ฌ์ „์„ ๋งŒ๋“œ์„ธ์š”: ```py >>> labels = food["train"].features["label"].names >>> label2id, id2label = dict(), dict() >>> for i, label in enumerate(labels): ... label2id[label] = str(i) ... id2label[str(i)] = label ``` ์ด์ œ ๋ ˆ์ด๋ธ” ID๋ฅผ ๋ ˆ์ด๋ธ” ์ด๋ฆ„์œผ๋กœ ๋ณ€ํ™˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> id2label[str(79)] 'prime_rib' ``` ## ์ „์ฒ˜๋ฆฌ[[preprocess]] ๋‹ค์Œ ๋‹จ๊ณ„๋Š” ์ด๋ฏธ์ง€๋ฅผ ํ…์„œ๋กœ ์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด ViT ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ๊ฐ€์ ธ์˜ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค: ```py >>> from transformers import AutoImageProcessor >>> checkpoint = "google/vit-base-patch16-224-in21k" >>> image_processor = AutoImageProcessor.from_pretrained(checkpoint) ``` <frameworkcontent> <pt> ์ด๋ฏธ์ง€์— ๋ช‡ ๊ฐ€์ง€ ์ด๋ฏธ์ง€ ๋ณ€ํ™˜์„ ์ ์šฉํ•˜์—ฌ ๊ณผ์ ํ•ฉ์— ๋Œ€ํ•ด ๋ชจ๋ธ์„ ๋” ๊ฒฌ๊ณ ํ•˜๊ฒŒ ๋งŒ๋“ญ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ Torchvision์˜ [`transforms`](https://pytorch.org/vision/stable/transforms.html) ๋ชจ๋“ˆ์„ ์‚ฌ์šฉํ•˜์ง€๋งŒ, ์›ํ•˜๋Š” ์ด๋ฏธ์ง€ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. 
์ด๋ฏธ์ง€์˜ ์ž„์˜ ๋ถ€๋ถ„์„ ํฌ๋กญํ•˜๊ณ  ํฌ๊ธฐ๋ฅผ ์กฐ์ •ํ•œ ๋‹ค์Œ, ์ด๋ฏธ์ง€ ํ‰๊ท ๊ณผ ํ‘œ์ค€ ํŽธ์ฐจ๋กœ ์ •๊ทœํ™”ํ•˜์„ธ์š”: ```py >>> from torchvision.transforms import RandomResizedCrop, Compose, Normalize, ToTensor >>> normalize = Normalize(mean=image_processor.image_mean, std=image_processor.image_std) >>> size = ( ... image_processor.size["shortest_edge"] ... if "shortest_edge" in image_processor.size ... else (image_processor.size["height"], image_processor.size["width"]) ... ) >>> _transforms = Compose([RandomResizedCrop(size), ToTensor(), normalize]) ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ค์–ด ๋ณ€ํ™˜์„ ์ ์šฉํ•˜๊ณ  ์ด๋ฏธ์ง€์˜ `pixel_values`(๋ชจ๋ธ์— ๋Œ€ํ•œ ์ž…๋ ฅ)๋ฅผ ๋ฐ˜ํ™˜ํ•˜์„ธ์š”: ```py >>> def transforms(examples): ... examples["pixel_values"] = [_transforms(img.convert("RGB")) for img in examples["image"]] ... del examples["image"] ... return examples ``` ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ์ „์ฒ˜๋ฆฌ ๊ธฐ๋Šฅ์„ ์ ์šฉํ•˜๋ ค๋ฉด ๐Ÿค— Datasets [`~datasets.Dataset.with_transform`]์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์š”์†Œ๋ฅผ ๊ฐ€์ ธ์˜ฌ ๋•Œ ๋ณ€ํ™˜์ด ์ฆ‰์‹œ ์ ์šฉ๋ฉ๋‹ˆ๋‹ค: ```py >>> food = food.with_transform(transforms) ``` ์ด์ œ [`DefaultDataCollator`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์˜ˆ์ œ ๋ฐฐ์น˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค. ๐Ÿค— Transformers์˜ ๋‹ค๋ฅธ ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ์™€ ๋‹ฌ๋ฆฌ, `DefaultDataCollator`๋Š” ํŒจ๋”ฉ๊ณผ ๊ฐ™์€ ์ถ”๊ฐ€์ ์ธ ์ „์ฒ˜๋ฆฌ๋ฅผ ์ ์šฉํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ```py >>> from transformers import DefaultDataCollator >>> data_collator = DefaultDataCollator() ``` </pt> </frameworkcontent> <frameworkcontent> <tf> ๊ณผ์ ํ•ฉ์„ ๋ฐฉ์ง€ํ•˜๊ณ  ๋ชจ๋ธ์„ ๋ณด๋‹ค ๊ฒฌ๊ณ ํ•˜๊ฒŒ ๋งŒ๋“ค๊ธฐ ์œ„ํ•ด ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ํ›ˆ๋ จ ๋ถ€๋ถ„์— ๋ฐ์ดํ„ฐ ์ฆ๊ฐ•์„ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ Keras ์ „์ฒ˜๋ฆฌ ๋ ˆ์ด์–ด๋กœ ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ์— ๋Œ€ํ•œ ๋ณ€ํ™˜(๋ฐ์ดํ„ฐ ์ฆ๊ฐ• ํฌํ•จ)๊ณผ ๊ฒ€์ฆ ๋ฐ์ดํ„ฐ์— ๋Œ€ํ•œ ๋ณ€ํ™˜(์ค‘์•™ ํฌ๋กœํ•‘, ํฌ๊ธฐ ์กฐ์ •, ์ •๊ทœํ™”๋งŒ)์„ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. `tf.image` ๋˜๋Š” ๋‹ค๋ฅธ ์›ํ•˜๋Š” ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
```py >>> from tensorflow import keras >>> from tensorflow.keras import layers >>> size = (image_processor.size["height"], image_processor.size["width"]) >>> train_data_augmentation = keras.Sequential( ... [ ... layers.RandomCrop(size[0], size[1]), ... layers.Rescaling(scale=1.0 / 127.5, offset=-1), ... layers.RandomFlip("horizontal"), ... layers.RandomRotation(factor=0.02), ... layers.RandomZoom(height_factor=0.2, width_factor=0.2), ... ], ... name="train_data_augmentation", ... ) >>> val_data_augmentation = keras.Sequential( ... [ ... layers.CenterCrop(size[0], size[1]), ... layers.Rescaling(scale=1.0 / 127.5, offset=-1), ... ], ... name="val_data_augmentation", ... ) ``` ๋‹ค์Œ์œผ๋กœ ํ•œ ๋ฒˆ์— ํ•˜๋‚˜์˜ ์ด๋ฏธ์ง€๊ฐ€ ์•„๋‹ˆ๋ผ ์ด๋ฏธ์ง€ ๋ฐฐ์น˜์— ์ ์ ˆํ•œ ๋ณ€ํ™˜์„ ์ ์šฉํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค. ```py >>> import numpy as np >>> import tensorflow as tf >>> from PIL import Image >>> def convert_to_tf_tensor(image: Image): ... np_image = np.array(image) ... tf_image = tf.convert_to_tensor(np_image) ... # `expand_dims()` is used to add a batch dimension since ... # the TF augmentation layers operates on batched inputs. ... return tf.expand_dims(tf_image, 0) >>> def preprocess_train(example_batch): ... """Apply train_transforms across a batch.""" ... images = [ ... train_data_augmentation(convert_to_tf_tensor(image.convert("RGB"))) for image in example_batch["image"] ... ] ... example_batch["pixel_values"] = [tf.transpose(tf.squeeze(image)) for image in images] ... return example_batch ... def preprocess_val(example_batch): ... """Apply val_transforms across a batch.""" ... images = [ ... val_data_augmentation(convert_to_tf_tensor(image.convert("RGB"))) for image in example_batch["image"] ... ] ... example_batch["pixel_values"] = [tf.transpose(tf.squeeze(image)) for image in images] ... 
return example_batch ``` ๐Ÿค— Datasets [`~datasets.Dataset.set_transform`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ฆ‰์‹œ ๋ณ€ํ™˜์„ ์ ์šฉํ•˜์„ธ์š”: ```py food["train"].set_transform(preprocess_train) food["test"].set_transform(preprocess_val) ``` ์ตœ์ข… ์ „์ฒ˜๋ฆฌ ๋‹จ๊ณ„๋กœ `DefaultDataCollator`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์˜ˆ์ œ ๋ฐฐ์น˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค. ๐Ÿค— Transformers์˜ ๋‹ค๋ฅธ ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ์™€ ๋‹ฌ๋ฆฌ `DefaultDataCollator`๋Š” ํŒจ๋”ฉ๊ณผ ๊ฐ™์€ ์ถ”๊ฐ€ ์ „์ฒ˜๋ฆฌ๋ฅผ ์ ์šฉํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ```py >>> from transformers import DefaultDataCollator >>> data_collator = DefaultDataCollator(return_tensors="tf") ``` </tf> </frameworkcontent> ## ํ‰๊ฐ€[[evaluate]] ํ›ˆ๋ จ ์ค‘์— ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ํฌํ•จํ•˜๋ฉด ๋ชจ๋ธ์˜ ์„ฑ๋Šฅ์„ ํ‰๊ฐ€ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์Šต๋‹ˆ๋‹ค. ๐Ÿค— [Evaluate](https://huggingface.co/docs/evaluate/index) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋กœ ํ‰๊ฐ€ ๋ฐฉ๋ฒ•์„ ๋น ๋ฅด๊ฒŒ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ์ž‘์—…์—์„œ๋Š” [accuracy](https://huggingface.co/spaces/evaluate-metric/accuracy) ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. (๐Ÿค— Evaluate [๋น ๋ฅธ ๋‘˜๋Ÿฌ๋ณด๊ธฐ](https://huggingface.co/docs/evaluate/a_quick_tour)๋ฅผ ์ฐธ์กฐํ•˜์—ฌ ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ๊ฐ€์ ธ์˜ค๊ณ  ๊ณ„์‚ฐํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด ์ž์„ธํžˆ ์•Œ์•„๋ณด์„ธ์š”): ```py >>> import evaluate >>> accuracy = evaluate.load("accuracy") ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ์˜ˆ์ธก๊ณผ ๋ ˆ์ด๋ธ”์„ [`~evaluate.EvaluationModule.compute`]์— ์ „๋‹ฌํ•˜์—ฌ ์ •ํ™•๋„๋ฅผ ๊ณ„์‚ฐํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค: ```py >>> import numpy as np >>> def compute_metrics(eval_pred): ... predictions, labels = eval_pred ... predictions = np.argmax(predictions, axis=1) ... return accuracy.compute(predictions=predictions, references=labels) ``` ์ด์ œ `compute_metrics` ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์œผ๋ฉฐ, ํ›ˆ๋ จ์„ ์„ค์ •ํ•˜๋ฉด ์ด ํ•จ์ˆ˜๋กœ ๋˜๋Œ์•„์˜ฌ ๊ฒƒ์ž…๋‹ˆ๋‹ค. 
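`compute_metrics`๊ฐ€ ๋ฌด์—‡์„ ๊ณ„์‚ฐํ•˜๋Š”์ง€๋Š” ๊ฐ€์งœ ๋กœ์ง“์œผ๋กœ ๋ฏธ๋ฆฌ ํ™•์ธํ•ด๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ๐Ÿค— Evaluate ์—†์ด numpy๋งŒ์œผ๋กœ ๊ฐ™์€ ๊ณ„์‚ฐ(๋กœ์ง“์— argmax๋ฅผ ์ ์šฉํ•œ ๋’ค ์ •ํ™•๋„ ๊ณ„์‚ฐ)์„ ํ‰๋‚ด ๋‚ธ ์Šค์ผ€์น˜๋กœ, `accuracy_from_logits`๋ผ๋Š” ํ•จ์ˆ˜ ์ด๋ฆ„๊ณผ ์˜ˆ์‹œ ๊ฐ’์€ ์„ค๋ช…์„ ์œ„ํ•œ ๊ฐ€์ •์ž…๋‹ˆ๋‹ค:

```python
import numpy as np


def accuracy_from_logits(logits: np.ndarray, labels: np.ndarray) -> float:
    # compute_metrics์™€ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ ๋กœ์ง“์—์„œ argmax๋กœ ์˜ˆ์ธก ํด๋ž˜์Šค๋ฅผ ๊ตฌํ•œ ๋’ค ์ •ํ™•๋„๋ฅผ ๊ณ„์‚ฐํ•ฉ๋‹ˆ๋‹ค
    predictions = np.argmax(logits, axis=1)
    return float((predictions == labels).mean())


# (๋ฐฐ์น˜ 4, ํด๋ž˜์Šค 3) ํฌ๊ธฐ์˜ ๊ฐ€์งœ ๋กœ์ง“
logits = np.array(
    [
        [2.0, 0.1, 0.3],  # ์˜ˆ์ธก 0
        [0.2, 1.5, 0.1],  # ์˜ˆ์ธก 1
        [0.1, 0.2, 3.0],  # ์˜ˆ์ธก 2
        [1.0, 0.9, 0.8],  # ์˜ˆ์ธก 0
    ]
)
labels = np.array([0, 1, 2, 1])  # ๋งˆ์ง€๋ง‰ ์˜ˆ์ œ๋งŒ ์˜ค๋‹ต
acc = accuracy_from_logits(logits, labels)
print(acc)  # 0.75
```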
## ํ›ˆ๋ จ[[train]]

<frameworkcontent>
<pt>
<Tip>

[`Trainer`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ์ต์ˆ™ํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ, [์—ฌ๊ธฐ](../training#train-with-pytorch-trainer)์—์„œ ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ์„ ํ™•์ธํ•˜์„ธ์š”!

</Tip>

์ด์ œ ๋ชจ๋ธ์„ ํ›ˆ๋ จ์‹œํ‚ฌ ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! [`AutoModelForImageClassification`]๋กœ ViT๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. ์˜ˆ์ƒ๋˜๋Š” ๋ ˆ์ด๋ธ” ์ˆ˜์™€ ๋ ˆ์ด๋ธ” ๋งคํ•‘์„ ์ง€์ •ํ•˜์„ธ์š”:

```py
>>> from transformers import AutoModelForImageClassification, TrainingArguments, Trainer

>>> model = AutoModelForImageClassification.from_pretrained(
...     checkpoint,
...     num_labels=len(labels),
...     id2label=id2label,
...     label2id=label2id,
... )
```

์ด์ œ ์„ธ ๋‹จ๊ณ„๋งŒ ๊ฑฐ์น˜๋ฉด ๋์ž…๋‹ˆ๋‹ค:

1. [`TrainingArguments`]์—์„œ ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ •์˜ํ•˜์„ธ์š”. ๋ฏธ์‚ฌ์šฉ ์—ด์„ ์ œ๊ฑฐํ•˜๋ฉด `image` ์—ด๋„ ํ•จ๊ป˜ ์‚ญ์ œ๋˜๋ฏ€๋กœ ๋ฏธ์‚ฌ์šฉ ์—ด์„ ์ œ๊ฑฐํ•˜์ง€ ์•Š๋Š” ๊ฒƒ์ด ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. `image` ์—ด์ด ์—†์œผ๋ฉด `pixel_values`๋ฅผ ์ƒ์„ฑํ•  ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค. ์ด ๋™์ž‘์„ ๋ฐฉ์ง€ํ•˜๋ ค๋ฉด `remove_unused_columns=False`๋กœ ์„ค์ •ํ•˜์„ธ์š”! ๋‹ค๋ฅธ ์œ ์ผํ•œ ํ•„์ˆ˜ ๋งค๊ฐœ๋ณ€์ˆ˜๋Š” ๋ชจ๋ธ ์ €์žฅ ์œ„์น˜๋ฅผ ์ง€์ •ํ•˜๋Š” `output_dir`์ž…๋‹ˆ๋‹ค. `push_to_hub=True`๋กœ ์„ค์ •ํ•˜๋ฉด ์ด ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ์— ํ‘ธ์‹œํ•ฉ๋‹ˆ๋‹ค(๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๋ ค๋ฉด Hugging Face์— ๋กœ๊ทธ์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค). ๊ฐ ์—ํญ์ด ๋๋‚  ๋•Œ๋งˆ๋‹ค, [`Trainer`]๊ฐ€ ์ •ํ™•๋„๋ฅผ ํ‰๊ฐ€ํ•˜๊ณ  ํ›ˆ๋ จ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์ €์žฅํ•ฉ๋‹ˆ๋‹ค.
2. [`Trainer`]์— ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ ์„ธํŠธ, ํ† ํฌ๋‚˜์ด์ €, ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ ๋ฐ `compute_metrics` ํ•จ์ˆ˜์™€ ํ•จ๊ป˜ ํ›ˆ๋ จ ์ธ์ˆ˜๋ฅผ ์ „๋‹ฌํ•˜์„ธ์š”.
3. [`~Trainer.train`]์„ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜์„ธ์š”.

```py
>>> training_args = TrainingArguments(
...     output_dir="my_awesome_food_model",
...     remove_unused_columns=False,
...     evaluation_strategy="epoch",
...     save_strategy="epoch",
...     learning_rate=5e-5,
...     per_device_train_batch_size=16,
...     gradient_accumulation_steps=4,
...     per_device_eval_batch_size=16,
...
num_train_epochs=3, ... warmup_ratio=0.1, ... logging_steps=10, ... load_best_model_at_end=True, ... metric_for_best_model="accuracy", ... push_to_hub=True, ... ) >>> trainer = Trainer( ... model=model, ... args=training_args, ... data_collator=data_collator, ... train_dataset=food["train"], ... eval_dataset=food["test"], ... tokenizer=image_processor, ... compute_metrics=compute_metrics, ... ) >>> trainer.train() ``` ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด, ๋ชจ๋“  ์‚ฌ๋žŒ์ด ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก [`~transformers.Trainer.push_to_hub`] ๋ฉ”์†Œ๋“œ๋กœ ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ์— ๊ณต์œ ํ•˜์„ธ์š”: ```py >>> trainer.push_to_hub() ``` </pt> </frameworkcontent> <frameworkcontent> <tf> <Tip> Keras๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ์ต์ˆ™ํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ, ๋จผ์ € [๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ](./training#train-a-tensorflow-model-with-keras)์„ ํ™•์ธํ•˜์„ธ์š”! </Tip> TensorFlow์—์„œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋ ค๋ฉด ๋‹ค์Œ ๋‹จ๊ณ„๋ฅผ ๋”ฐ๋ฅด์„ธ์š”: 1. ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ •์˜ํ•˜๊ณ  ์˜ตํ‹ฐ๋งˆ์ด์ €์™€ ํ•™์Šต๋ฅ  ์Šค์ผ€์ฅด์„ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค. 2. ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ์„ ์ธ์Šคํ„ด์Šคํ™”ํ•ฉ๋‹ˆ๋‹ค. 3. ๐Ÿค— Dataset์„ `tf.data.Dataset`์œผ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค. 4. ๋ชจ๋ธ์„ ์ปดํŒŒ์ผํ•ฉ๋‹ˆ๋‹ค. 5. ์ฝœ๋ฐฑ์„ ์ถ”๊ฐ€ํ•˜๊ณ  ํ›ˆ๋ จ์„ ์ˆ˜ํ–‰ํ•˜๊ธฐ ์œ„ํ•ด `fit()` ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. 6. ์ปค๋ฎค๋‹ˆํ‹ฐ์™€ ๊ณต์œ ํ•˜๊ธฐ ์œ„ํ•ด ๋ชจ๋ธ์„ ๐Ÿค— Hub์— ์—…๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค. ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ, ์˜ตํ‹ฐ๋งˆ์ด์ € ๋ฐ ํ•™์Šต๋ฅ  ์Šค์ผ€์ฅด์„ ์ •์˜ํ•˜๋Š” ๊ฒƒ์œผ๋กœ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import create_optimizer >>> batch_size = 16 >>> num_epochs = 5 >>> num_train_steps = len(food["train"]) * num_epochs >>> learning_rate = 3e-5 >>> weight_decay_rate = 0.01 >>> optimizer, lr_schedule = create_optimizer( ... init_lr=learning_rate, ... num_train_steps=num_train_steps, ... weight_decay_rate=weight_decay_rate, ... num_warmup_steps=0, ... 
)
```

๊ทธ๋Ÿฐ ๋‹ค์Œ ๋ ˆ์ด๋ธ” ๋งคํ•‘๊ณผ ํ•จ๊ป˜ [`TFAutoModelForImageClassification`]์œผ๋กœ ViT๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค:

```py
>>> from transformers import TFAutoModelForImageClassification

>>> model = TFAutoModelForImageClassification.from_pretrained(
...     checkpoint,
...     id2label=id2label,
...     label2id=label2id,
... )
```

๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ [`~datasets.Dataset.to_tf_dataset`]์™€ `data_collator`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ `tf.data.Dataset` ํ˜•์‹์œผ๋กœ ๋ณ€ํ™˜ํ•˜์„ธ์š”:

```py
>>> # converting our train dataset to tf.data.Dataset
>>> tf_train_dataset = food["train"].to_tf_dataset(
...     columns="pixel_values", label_cols="label", shuffle=True, batch_size=batch_size, collate_fn=data_collator
... )

>>> # converting our test dataset to tf.data.Dataset
>>> tf_eval_dataset = food["test"].to_tf_dataset(
...     columns="pixel_values", label_cols="label", shuffle=True, batch_size=batch_size, collate_fn=data_collator
... )
```

`compile()`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ›ˆ๋ จ ๋ชจ๋ธ์„ ๊ตฌ์„ฑํ•˜์„ธ์š”:

```py
>>> from tensorflow.keras.losses import SparseCategoricalCrossentropy

>>> loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
>>> model.compile(optimizer=optimizer, loss=loss)
```

์˜ˆ์ธก์—์„œ ์ •ํ™•๋„๋ฅผ ๊ณ„์‚ฐํ•˜๊ณ  ๋ชจ๋ธ์„ ๐Ÿค— Hub๋กœ ํ‘ธ์‹œํ•˜๋ ค๋ฉด [Keras callbacks](../main_classes/keras_callbacks)๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. `compute_metrics` ํ•จ์ˆ˜๋ฅผ [KerasMetricCallback](../main_classes/keras_callbacks#transformers.KerasMetricCallback)์— ์ „๋‹ฌํ•˜๊ณ , [PushToHubCallback](../main_classes/keras_callbacks#transformers.PushToHubCallback)์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค:

```py
>>> from transformers.keras_callbacks import KerasMetricCallback, PushToHubCallback

>>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_eval_dataset)

>>> push_to_hub_callback = PushToHubCallback(
...     output_dir="food_classifier",
...     tokenizer=image_processor,
...     save_strategy="no",
...
) >>> callbacks = [metric_callback, push_to_hub_callback] ``` ์ด์ œ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! ํ›ˆ๋ จ ๋ฐ ๊ฒ€์ฆ ๋ฐ์ดํ„ฐ ์„ธํŠธ, ์—ํญ ์ˆ˜์™€ ํ•จ๊ป˜ `fit()`์„ ํ˜ธ์ถœํ•˜๊ณ , ์ฝœ๋ฐฑ์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค: ```py >>> model.fit(tf_train_dataset, validation_data=tf_eval_dataset, epochs=num_epochs, callbacks=callbacks) Epoch 1/5 250/250 [==============================] - 313s 1s/step - loss: 2.5623 - val_loss: 1.4161 - accuracy: 0.9290 Epoch 2/5 250/250 [==============================] - 265s 1s/step - loss: 0.9181 - val_loss: 0.6808 - accuracy: 0.9690 Epoch 3/5 250/250 [==============================] - 252s 1s/step - loss: 0.3910 - val_loss: 0.4303 - accuracy: 0.9820 Epoch 4/5 250/250 [==============================] - 251s 1s/step - loss: 0.2028 - val_loss: 0.3191 - accuracy: 0.9900 Epoch 5/5 250/250 [==============================] - 238s 949ms/step - loss: 0.1232 - val_loss: 0.3259 - accuracy: 0.9890 ``` ์ถ•ํ•˜ํ•ฉ๋‹ˆ๋‹ค! ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ๐Ÿค— Hub์— ๊ณต์œ ํ–ˆ์Šต๋‹ˆ๋‹ค. ์ด์ œ ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! </tf> </frameworkcontent> <Tip> ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ์ž์„ธํ•œ ์˜ˆ์ œ๋Š” ๋‹ค์Œ [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb)์„ ์ฐธ์กฐํ•˜์„ธ์š”. </Tip> ## ์ถ”๋ก [[inference]] ์ข‹์•„์š”, ์ด์ œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ–ˆ์œผ๋‹ˆ ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ์ถ”๋ก ์„ ์ˆ˜ํ–‰ํ•˜๊ณ ์ž ํ•˜๋Š” ์ด๋ฏธ์ง€๋ฅผ ๊ฐ€์ ธ์™€๋ด…์‹œ๋‹ค: ```py >>> ds = load_dataset("food101", split="validation[:10]") >>> image = ds["image"][0] ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png" alt="image of beignets"/> </div> ๋ฏธ์„ธ ์กฐ์ • ๋ชจ๋ธ๋กœ ์ถ”๋ก ์„ ์‹œ๋„ํ•˜๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ [`pipeline`]์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. 
๋ชจ๋ธ๋กœ ์ด๋ฏธ์ง€ ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•œ `pipeline`์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ณ  ์ด๋ฏธ์ง€๋ฅผ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import pipeline >>> classifier = pipeline("image-classification", model="my_awesome_food_model") >>> classifier(image) [{'score': 0.31856709718704224, 'label': 'beignets'}, {'score': 0.015232225880026817, 'label': 'bruschetta'}, {'score': 0.01519392803311348, 'label': 'chicken_wings'}, {'score': 0.013022331520915031, 'label': 'pork_chop'}, {'score': 0.012728818692266941, 'label': 'prime_rib'}] ``` ์›ํ•œ๋‹ค๋ฉด, `pipeline`์˜ ๊ฒฐ๊ณผ๋ฅผ ์ˆ˜๋™์œผ๋กœ ๋ณต์ œํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: <frameworkcontent> <pt> ์ด๋ฏธ์ง€๋ฅผ ์ „์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ๊ฐ€์ ธ์˜ค๊ณ  `input`์„ PyTorch ํ…์„œ๋กœ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoImageProcessor >>> import torch >>> image_processor = AutoImageProcessor.from_pretrained("my_awesome_food_model") >>> inputs = image_processor(image, return_tensors="pt") ``` ์ž…๋ ฅ์„ ๋ชจ๋ธ์— ์ „๋‹ฌํ•˜๊ณ  logits์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForImageClassification >>> model = AutoModelForImageClassification.from_pretrained("my_awesome_food_model") >>> with torch.no_grad(): ... 
logits = model(**inputs).logits ``` ํ™•๋ฅ ์ด ๊ฐ€์žฅ ๋†’์€ ์˜ˆ์ธก ๋ ˆ์ด๋ธ”์„ ๊ฐ€์ ธ์˜ค๊ณ , ๋ชจ๋ธ์˜ `id2label` ๋งคํ•‘์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ ˆ์ด๋ธ”๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> predicted_label = logits.argmax(-1).item() >>> model.config.id2label[predicted_label] 'beignets' ``` </pt> </frameworkcontent> <frameworkcontent> <tf> ์ด๋ฏธ์ง€๋ฅผ ์ „์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ๊ฐ€์ ธ์˜ค๊ณ  `input`์„ TensorFlow ํ…์„œ๋กœ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoImageProcessor >>> image_processor = AutoImageProcessor.from_pretrained("MariaK/food_classifier") >>> inputs = image_processor(image, return_tensors="tf") ``` ์ž…๋ ฅ์„ ๋ชจ๋ธ์— ์ „๋‹ฌํ•˜๊ณ  logits์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import TFAutoModelForImageClassification >>> model = TFAutoModelForImageClassification.from_pretrained("MariaK/food_classifier") >>> logits = model(**inputs).logits ``` ํ™•๋ฅ ์ด ๊ฐ€์žฅ ๋†’์€ ์˜ˆ์ธก ๋ ˆ์ด๋ธ”์„ ๊ฐ€์ ธ์˜ค๊ณ , ๋ชจ๋ธ์˜ `id2label` ๋งคํ•‘์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ ˆ์ด๋ธ”๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0]) >>> model.config.id2label[predicted_class_id] 'beignets' ``` </tf> </frameworkcontent>
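`pipeline`์ด ๋ณด์—ฌ์ฃผ๋Š” `score`๋Š” ๋กœ์ง“์— softmax๋ฅผ ์ ์šฉํ•œ ํ™•๋ฅ ์ด๋ฏ€๋กœ, ์ˆ˜๋™ ์ถ”๋ก ์—์„œ๋„ ๊ฐ™์€ ํ˜•ํƒœ์˜ ์ƒ์œ„ k๊ฐœ ๊ฒฐ๊ณผ๋ฅผ ๋งŒ๋“ค ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ํ”„๋ ˆ์ž„์›Œํฌ์™€ ๋ฌด๊ด€ํ•˜๊ฒŒ numpy๋งŒ์œผ๋กœ ์ด ๊ณ„์‚ฐ์„ ํ‰๋‚ด ๋‚ธ ์Šค์ผ€์น˜์ด๋ฉฐ, `top_k_scores` ํ•จ์ˆ˜ ์ด๋ฆ„๊ณผ ์„ธ ๊ฐœ ํด๋ž˜์Šค๋กœ ์ถ•์†Œํ•œ `id2label` ์˜ˆ์‹œ๋Š” ์„ค๋ช…์„ ์œ„ํ•œ ๊ฐ€์ •์ž…๋‹ˆ๋‹ค:

```python
import numpy as np


def top_k_scores(logits, id2label, k=3):
    # pipeline์ด ๋ฐ˜ํ™˜ํ•˜๋Š” score์ฒ˜๋Ÿผ ๋กœ์ง“์„ softmax ํ™•๋ฅ ๋กœ ๋ฐ”๊พผ ๋’ค ์ƒ์œ„ k๊ฐœ๋ฅผ ๊ณ ๋ฆ…๋‹ˆ๋‹ค
    logits = np.asarray(logits, dtype=np.float64)
    exp = np.exp(logits - logits.max())  # ์ˆ˜์น˜ ์•ˆ์ •์„ฑ์„ ์œ„ํ•ด ์ตœ๋Œ“๊ฐ’์„ ๋บ๋‹ˆ๋‹ค
    probs = exp / exp.sum()
    top = np.argsort(probs)[::-1][:k]
    return [{"score": float(probs[i]), "label": id2label[str(i)]} for i in top]


# ์ถ•์†Œ๋œ ๋ ˆ์ด๋ธ” ๋งคํ•‘๊ณผ ๊ฐ€์งœ ๋กœ์ง“(๊ฐ€์ •)
id2label = {"0": "beignets", "1": "bruschetta", "2": "prime_rib"}
top2 = top_k_scores([3.2, 0.4, 1.1], id2label, k=2)
print(top2)
```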
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์ž๋™ ์Œ์„ฑ ์ธ์‹[[automatic-speech-recognition]] [[open-in-colab]] <Youtube id="TksaY_FDgnk"/> ์ž๋™ ์Œ์„ฑ ์ธ์‹(Automatic Speech Recognition, ASR)์€ ์Œ์„ฑ ์‹ ํ˜ธ๋ฅผ ํ…์ŠคํŠธ๋กœ ๋ณ€ํ™˜ํ•˜์—ฌ ์Œ์„ฑ ์ž…๋ ฅ ์‹œํ€€์Šค๋ฅผ ํ…์ŠคํŠธ ์ถœ๋ ฅ์— ๋งคํ•‘ํ•ฉ๋‹ˆ๋‹ค. Siri์™€ Alexa์™€ ๊ฐ™์€ ๊ฐ€์ƒ ์–ด์‹œ์Šคํ„ดํŠธ๋Š” ASR ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜์—ฌ ์ผ์ƒ์ ์œผ๋กœ ์‚ฌ์šฉ์ž๋ฅผ ๋•๊ณ  ์žˆ์œผ๋ฉฐ, ํšŒ์˜ ์ค‘ ๋ผ์ด๋ธŒ ์บก์…˜ ๋ฐ ๋ฉ”๋ชจ ์ž‘์„ฑ๊ณผ ๊ฐ™์€ ์œ ์šฉํ•œ ์‚ฌ์šฉ์ž ์นœํ™”์  ์‘์šฉ ํ”„๋กœ๊ทธ๋žจ๋„ ๋งŽ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ ์†Œ๊ฐœํ•  ๋‚ด์šฉ์€ ์•„๋ž˜์™€ ๊ฐ™์Šต๋‹ˆ๋‹ค: 1. [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ [Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base)๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜์—ฌ ์˜ค๋””์˜ค๋ฅผ ํ…์ŠคํŠธ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค. 2. ๋ฏธ์„ธ ์กฐ์ •ํ•œ ๋ชจ๋ธ์„ ์ถ”๋ก ์— ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. 
<Tip> ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ ์„ค๋ช…ํ•˜๋Š” ์ž‘์—…์€ ๋‹ค์Œ ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜์— ์˜ํ•ด ์ง€์›๋ฉ๋‹ˆ๋‹ค: <!--This tip is automatically generated by `make fix-copies`, do not fill manually!--> [Data2VecAudio](../model_doc/data2vec-audio), [Hubert](../model_doc/hubert), [M-CTC-T](../model_doc/mctct), [SEW](../model_doc/sew), [SEW-D](../model_doc/sew-d), [UniSpeech](../model_doc/unispeech), [UniSpeechSat](../model_doc/unispeech-sat), [Wav2Vec2](../model_doc/wav2vec2), [Wav2Vec2-Conformer](../model_doc/wav2vec2-conformer), [WavLM](../model_doc/wavlm) <!--End of the generated tip--> </Tip> ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ชจ๋“  ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install transformers datasets evaluate jiwer ``` Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜๋ฉด ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๊ณต์œ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ† ํฐ์„ ์ž…๋ ฅํ•˜์—ฌ ๋กœ๊ทธ์ธํ•˜์„ธ์š”. ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## MInDS-14 ๋ฐ์ดํ„ฐ ์„ธํŠธ ๊ฐ€์ ธ์˜ค๊ธฐ[[load-minds-14-dataset]] ๋จผ์ €, ๐Ÿค— Datasets ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์ผ๋ถ€๋ถ„์„ ๊ฐ€์ ธ์˜ค์„ธ์š”. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ๋Œ€ํ•œ ํ›ˆ๋ จ์— ์‹œ๊ฐ„์„ ๋“ค์ด๊ธฐ ์ „์— ๋ชจ๋“  ๊ฒƒ์ด ์ž‘๋™ํ•˜๋Š”์ง€ ์‹คํ—˜ํ•˜๊ณ  ๊ฒ€์ฆํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
```py >>> from datasets import load_dataset, Audio >>> minds = load_dataset("PolyAI/minds14", name="en-US", split="train[:100]") ``` [`~Dataset.train_test_split`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ `train`์„ ํ›ˆ๋ จ ์„ธํŠธ์™€ ํ…Œ์ŠคํŠธ ์„ธํŠธ๋กœ ๋‚˜๋ˆ„์„ธ์š”: ```py >>> minds = minds.train_test_split(test_size=0.2) ``` ๊ทธ๋ฆฌ๊ณ  ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ํ™•์ธํ•˜์„ธ์š”: ```py >>> minds DatasetDict({ train: Dataset({ features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'], num_rows: 16 }) test: Dataset({ features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'], num_rows: 4 }) }) ``` ๋ฐ์ดํ„ฐ ์„ธํŠธ์—๋Š” `lang_id`์™€ `english_transcription`๊ณผ ๊ฐ™์€ ์œ ์šฉํ•œ ์ •๋ณด๊ฐ€ ๋งŽ์ด ํฌํ•จ๋˜์–ด ์žˆ์ง€๋งŒ, ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” `audio`์™€ `transcription`์— ์ดˆ์ ์„ ๋งž์ถœ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋‹ค๋ฅธ ์—ด์€ [`~datasets.Dataset.remove_columns`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ œ๊ฑฐํ•˜์„ธ์š”: ```py >>> minds = minds.remove_columns(["english_transcription", "intent_class", "lang_id"]) ``` ์˜ˆ์‹œ๋ฅผ ๋‹ค์‹œ ํ•œ๋ฒˆ ํ™•์ธํ•ด๋ณด์„ธ์š”: ```py >>> minds["train"][0] {'audio': {'array': array([-0.00024414, 0. , 0. 
, ..., 0.00024414,
        0.00024414, 0.00024414], dtype=float32),
  'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav',
  'sampling_rate': 8000},
 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav',
 'transcription': "hi I'm trying to use the banking app on my phone and currently my checking and savings account balance is not refreshing"}
```

๋‘ ๊ฐœ์˜ ํ•„๋“œ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค:

- `audio`: ์˜ค๋””์˜ค ํŒŒ์ผ์„ ๊ฐ€์ ธ์˜ค๊ณ  ๋ฆฌ์ƒ˜ํ”Œ๋งํ•˜๊ธฐ ์œ„ํ•ด ํ˜ธ์ถœํ•ด์•ผ ํ•˜๋Š” ์Œ์„ฑ ์‹ ํ˜ธ์˜ 1์ฐจ์› `array(๋ฐฐ์—ด)`
- `transcription`: ๋ชฉํ‘œ ํ…์ŠคํŠธ

## ์ „์ฒ˜๋ฆฌ[[preprocess]]

๋‹ค์Œ์œผ๋กœ ์˜ค๋””์˜ค ์‹ ํ˜ธ๋ฅผ ์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•œ Wav2Vec2 ํ”„๋กœ์„ธ์„œ๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค:

```py
>>> from transformers import AutoProcessor

>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base")
```

MInDS-14 ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์ƒ˜ํ”Œ๋ง ๋ ˆ์ดํŠธ๋Š” 8kHz(์ดˆ๋‹น 8,000 ์ƒ˜ํ”Œ)์ด๋ฏ€๋กœ([๋ฐ์ดํ„ฐ ์„ธํŠธ ์นด๋“œ](https://huggingface.co/datasets/PolyAI/minds14)์—์„œ ํ™•์ธ), ์‚ฌ์ „ ํ›ˆ๋ จ๋œ Wav2Vec2 ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋ ค๋ฉด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ 16kHz๋กœ ๋ฆฌ์ƒ˜ํ”Œ๋งํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> minds = minds.cast_column("audio", Audio(sampling_rate=16_000))
>>> minds["train"][0]
{'audio': {'array': array([-2.38064706e-04, -1.58618059e-04, -5.43987835e-06, ...,
         2.78103951e-04, 2.38446111e-04, 1.18740834e-04], dtype=float32),
  'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav',
  'sampling_rate': 16000},
 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602ba9e2963e11ccd901cd4f.wav',
 'transcription': "hi I'm trying to use the banking app on my phone and currently
my checking and savings account balance is not refreshing"} ``` ์œ„์˜ 'transcription'์—์„œ ๋ณผ ์ˆ˜ ์žˆ๋“ฏ์ด ํ…์ŠคํŠธ๋Š” ๋Œ€๋ฌธ์ž์™€ ์†Œ๋ฌธ์ž๊ฐ€ ์„ž์—ฌ ์žˆ์Šต๋‹ˆ๋‹ค. Wav2Vec2 ํ† ํฌ๋‚˜์ด์ €๋Š” ๋Œ€๋ฌธ์ž ๋ฌธ์ž์— ๋Œ€ํ•ด์„œ๋งŒ ํ›ˆ๋ จ๋˜์–ด ์žˆ์œผ๋ฏ€๋กœ ํ…์ŠคํŠธ๊ฐ€ ํ† ํฌ๋‚˜์ด์ €์˜ ์–ดํœ˜์™€ ์ผ์น˜ํ•˜๋Š”์ง€ ํ™•์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```py >>> def uppercase(example): ... return {"transcription": example["transcription"].upper()} >>> minds = minds.map(uppercase) ``` ์ด์ œ ๋‹ค์Œ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•  ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ค์–ด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: 1. `audio` ์—ด์„ ํ˜ธ์ถœํ•˜์—ฌ ์˜ค๋””์˜ค ํŒŒ์ผ์„ ๊ฐ€์ ธ์˜ค๊ณ  ๋ฆฌ์ƒ˜ํ”Œ๋งํ•ฉ๋‹ˆ๋‹ค. 2. ์˜ค๋””์˜ค ํŒŒ์ผ์—์„œ `input_values`๋ฅผ ์ถ”์ถœํ•˜๊ณ  ํ”„๋กœ์„ธ์„œ๋กœ `transcription` ์—ด์„ ํ† ํฐํ™”ํ•ฉ๋‹ˆ๋‹ค. ```py >>> def prepare_dataset(batch): ... audio = batch["audio"] ... batch = processor(audio["array"], sampling_rate=audio["sampling_rate"], text=batch["transcription"]) ... batch["input_length"] = len(batch["input_values"][0]) ... return batch ``` ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์ ์šฉํ•˜๋ ค๋ฉด ๐Ÿค— Datasets [`~datasets.Dataset.map`] ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. `num_proc` ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ”„๋กœ์„ธ์Šค ์ˆ˜๋ฅผ ๋Š˜๋ฆฌ๋ฉด `map`์˜ ์†๋„๋ฅผ ๋†’์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [`~datasets.Dataset.remove_columns`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ•„์š”ํ•˜์ง€ ์•Š์€ ์—ด์„ ์ œ๊ฑฐํ•˜์„ธ์š”: ```py >>> encoded_minds = minds.map(prepare_dataset, remove_columns=minds.column_names["train"], num_proc=4) ``` ๐Ÿค— Transformers์—๋Š” ์ž๋™ ์Œ์„ฑ ์ธ์‹์šฉ ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ๊ฐ€ ์—†์œผ๋ฏ€๋กœ ์˜ˆ์ œ ๋ฐฐ์น˜๋ฅผ ์ƒ์„ฑํ•˜๋ ค๋ฉด [`DataCollatorWithPadding`]์„ ์กฐ์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ๋Š” ํ…์ŠคํŠธ์™€ ๋ ˆ์ด๋ธ”์„ ๋ฐฐ์น˜์—์„œ ๊ฐ€์žฅ ๊ธด ์š”์†Œ์˜ ๊ธธ์ด์— ๋™์ ์œผ๋กœ ํŒจ๋”ฉํ•˜์—ฌ ๊ธธ์ด๋ฅผ ๊ท ์ผํ•˜๊ฒŒ ํ•ฉ๋‹ˆ๋‹ค. `tokenizer` ํ•จ์ˆ˜์—์„œ `padding=True`๋ฅผ ์„ค์ •ํ•˜์—ฌ ํ…์ŠคํŠธ๋ฅผ ํŒจ๋”ฉํ•  ์ˆ˜ ์žˆ์ง€๋งŒ, ๋™์  ํŒจ๋”ฉ์ด ๋” ํšจ์œจ์ ์ž…๋‹ˆ๋‹ค. 
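๋™์  ํŒจ๋”ฉ์˜ ํ•ต์‹ฌ ์•„์ด๋””์–ด๋ฅผ ๊ฐ„๋‹จํžˆ ์Šค์ผ€์น˜ํ•˜๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค. ์•„๋ž˜ ์ฝ”๋“œ๋Š” ์‹ค์ œ ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ ๊ตฌํ˜„์ด ์•„๋‹ˆ๋ผ ๊ฐœ๋…๋งŒ ๋ณด์—ฌ์ฃผ๋Š” ์ˆœ์ˆ˜ ํŒŒ์ด์ฌ ์˜ˆ์‹œ์ด๋ฉฐ, `pad_batch` ํ•จ์ˆ˜ ์ด๋ฆ„์€ ์„ค๋ช…์„ ์œ„ํ•ด ์ž„์˜๋กœ ์ •ํ•œ ๊ฒƒ์ž…๋‹ˆ๋‹ค:

```py
def pad_batch(sequences, pad_value=0.0):
    # ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์ตœ๋Œ€ ๊ธธ์ด๊ฐ€ ์•„๋‹ˆ๋ผ, ํ˜„์žฌ ๋ฐฐ์น˜์—์„œ ๊ฐ€์žฅ ๊ธด ์‹œํ€€์Šค ๊ธธ์ด์— ๋งž์ถฅ๋‹ˆ๋‹ค
    max_len = max(len(seq) for seq in sequences)
    return [seq + [pad_value] * (max_len - len(seq)) for seq in sequences]

batch = [[0.1, 0.2], [0.3, 0.4, 0.5, 0.6], [0.7]]
padded = pad_batch(batch)
# ๋ฐฐ์น˜ ๋‚ด ๋ชจ๋“  ์‹œํ€€์Šค๊ฐ€ ๋ฐฐ์น˜ ์ตœ๋Œ€ ๊ธธ์ด(4)๋กœ ๋งž์ถฐ์ง‘๋‹ˆ๋‹ค
```

๋ฐฐ์น˜๋งˆ๋‹ค ์ตœ๋Œ€ ๊ธธ์ด๊ฐ€ ๋‹ฌ๋ผ์ง€๋ฏ€๋กœ, ์งง์€ ์‹œํ€€์Šค๊ฐ€ ๋ชจ์ธ ๋ฐฐ์น˜์—์„œ๋Š” ์ „์ฒด ์ตœ๋Œ€ ๊ธธ์ด๋กœ ํŒจ๋”ฉํ•  ๋•Œ๋ณด๋‹ค ๋‚ญ๋น„๋˜๋Š” ์—ฐ์‚ฐ์ด ์ค„์–ด๋“ญ๋‹ˆ๋‹ค.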
๋‹ค๋ฅธ ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ์™€ ๋‹ฌ๋ฆฌ ์ด ํŠน์ • ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ๋Š” `input_values`์™€ `labels`์— ๋Œ€ํ•ด ๋‹ค๋ฅธ ํŒจ๋”ฉ ๋ฐฉ๋ฒ•์„ ์ ์šฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```py >>> import torch >>> from dataclasses import dataclass, field >>> from typing import Any, Dict, List, Optional, Union >>> @dataclass ... class DataCollatorCTCWithPadding: ... processor: AutoProcessor ... padding: Union[bool, str] = "longest" ... def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]: ... # ์ž…๋ ฅ๊ณผ ๋ ˆ์ด๋ธ”์„ ๋ถ„ํ• ํ•ฉ๋‹ˆ๋‹ค ... # ๊ธธ์ด๊ฐ€ ๋‹ค๋ฅด๊ณ , ๊ฐ๊ฐ ๋‹ค๋ฅธ ํŒจ๋”ฉ ๋ฐฉ๋ฒ•์„ ์‚ฌ์šฉํ•ด์•ผ ํ•˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค ... input_features = [{"input_values": feature["input_values"][0]} for feature in features] ... label_features = [{"input_ids": feature["labels"]} for feature in features] ... batch = self.processor.pad(input_features, padding=self.padding, return_tensors="pt") ... labels_batch = self.processor.pad(labels=label_features, padding=self.padding, return_tensors="pt") ... # ํŒจ๋”ฉ์— ๋Œ€ํ•ด ์†์‹ค์„ ์ ์šฉํ•˜์ง€ ์•Š๋„๋ก -100์œผ๋กœ ๋Œ€์ฒดํ•ฉ๋‹ˆ๋‹ค ... labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100) ... batch["labels"] = labels ... return batch ``` ์ด์ œ `DataCollatorForCTCWithPadding`์„ ์ธ์Šคํ„ด์Šคํ™”ํ•ฉ๋‹ˆ๋‹ค: ```py >>> data_collator = DataCollatorCTCWithPadding(processor=processor, padding="longest") ``` ## ํ‰๊ฐ€ํ•˜๊ธฐ[[evaluate]] ํ›ˆ๋ จ ์ค‘์— ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ํฌํ•จํ•˜๋ฉด ๋ชจ๋ธ์˜ ์„ฑ๋Šฅ์„ ํ‰๊ฐ€ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์Šต๋‹ˆ๋‹ค. ๐Ÿค— [Evaluate](https://huggingface.co/docs/evaluate/index) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ํ‰๊ฐ€ ๋ฐฉ๋ฒ•์„ ๋น ๋ฅด๊ฒŒ ๋ถˆ๋Ÿฌ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ์ž‘์—…์—์„œ๋Š” [๋‹จ์–ด ์˜ค๋ฅ˜์œจ(Word Error Rate, WER)](https://huggingface.co/spaces/evaluate-metric/wer) ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. 
(ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ๋ถˆ๋Ÿฌ์˜ค๊ณ  ๊ณ„์‚ฐํ•˜๋Š” ๋ฐฉ๋ฒ•์€ ๐Ÿค— Evaluate [๋‘˜๋Ÿฌ๋ณด๊ธฐ](https://huggingface.co/docs/evaluate/a_quick_tour)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”): ```py >>> import evaluate >>> wer = evaluate.load("wer") ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ์˜ˆ์ธก๊ฐ’๊ณผ ๋ ˆ์ด๋ธ”์„ [`~evaluate.EvaluationModule.compute`]์— ์ „๋‹ฌํ•˜์—ฌ WER์„ ๊ณ„์‚ฐํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค: ```py >>> import numpy as np >>> def compute_metrics(pred): ... pred_logits = pred.predictions ... pred_ids = np.argmax(pred_logits, axis=-1) ... pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id ... pred_str = processor.batch_decode(pred_ids) ... label_str = processor.batch_decode(pred.label_ids, group_tokens=False) ... # ํ•จ์ˆ˜ ์•ˆ์—์„œ `wer`์— ๋‹ค์‹œ ํ• ๋‹นํ•˜๋ฉด ์ง€์—ญ ๋ณ€์ˆ˜๋กœ ์ฒ˜๋ฆฌ๋˜์–ด ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•˜๋ฏ€๋กœ ๋‹ค๋ฅธ ์ด๋ฆ„์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค ... wer_score = wer.compute(predictions=pred_str, references=label_str) ... return {"wer": wer_score} ``` ์ด์ œ `compute_metrics` ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์œผ๋ฉฐ, ํ›ˆ๋ จ์„ ์„ค์ •ํ•  ๋•Œ ์ด ํ•จ์ˆ˜๋กœ ๋˜๋Œ์•„์˜ฌ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ## ํ›ˆ๋ จํ•˜๊ธฐ[[train]] <frameworkcontent> <pt> <Tip> [`Trainer`]๋กœ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๊ฒƒ์ด ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด, [์—ฌ๊ธฐ](../training#train-with-pytorch-trainer)์—์„œ ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ์„ ํ™•์ธํ•ด๋ณด์„ธ์š”! </Tip> ์ด์ œ ๋ชจ๋ธ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! [`AutoModelForCTC`]๋กœ Wav2Vec2๋ฅผ ๊ฐ€์ ธ์˜ค์„ธ์š”. `ctc_loss_reduction` ๋งค๊ฐœ๋ณ€์ˆ˜๋กœ CTC ์†์‹ค์— ์ ์šฉํ•  ์ถ•์†Œ(reduction) ๋ฐฉ๋ฒ•์„ ์ง€์ •ํ•˜์„ธ์š”. ๊ธฐ๋ณธ๊ฐ’์ธ ํ•ฉ๊ณ„ ๋Œ€์‹  ํ‰๊ท ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ๋” ์ข‹์€ ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForCTC, TrainingArguments, Trainer >>> model = AutoModelForCTC.from_pretrained( ... "facebook/wav2vec2-base", ... ctc_loss_reduction="mean", ... pad_token_id=processor.tokenizer.pad_token_id, ... ) ``` ์ด์ œ ์„ธ ๋‹จ๊ณ„๋งŒ ๋‚จ์•˜์Šต๋‹ˆ๋‹ค: 1. [`TrainingArguments`]์—์„œ ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ •์˜ํ•˜์„ธ์š”. `output_dir`์€ ๋ชจ๋ธ์„ ์ €์žฅํ•  ๊ฒฝ๋กœ๋ฅผ ์ง€์ •ํ•˜๋Š” ์œ ์ผํ•œ ํ•„์ˆ˜ ๋งค๊ฐœ๋ณ€์ˆ˜์ž…๋‹ˆ๋‹ค.
`push_to_hub=True`๋ฅผ ์„ค์ •ํ•˜์—ฌ ๋ชจ๋ธ์„ Hub์— ์—…๋กœ๋“œ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค(๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๋ ค๋ฉด Hugging Face์— ๋กœ๊ทธ์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค). [`Trainer`]๋Š” ๊ฐ ์—ํญ๋งˆ๋‹ค WER์„ ํ‰๊ฐ€ํ•˜๊ณ  ํ›ˆ๋ จ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์ €์žฅํ•ฉ๋‹ˆ๋‹ค. 2. ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ ์„ธํŠธ, ํ† ํฌ๋‚˜์ด์ €, ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ, `compute_metrics` ํ•จ์ˆ˜์™€ ํ•จ๊ป˜ [`Trainer`]์— ํ›ˆ๋ จ ์ธ์ˆ˜๋ฅผ ์ „๋‹ฌํ•˜์„ธ์š”. 3. [`~Trainer.train`]์„ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜์„ธ์š”. ```py >>> training_args = TrainingArguments( ... output_dir="my_awesome_asr_mind_model", ... per_device_train_batch_size=8, ... gradient_accumulation_steps=2, ... learning_rate=1e-5, ... warmup_steps=500, ... max_steps=2000, ... gradient_checkpointing=True, ... fp16=True, ... group_by_length=True, ... evaluation_strategy="steps", ... per_device_eval_batch_size=8, ... save_steps=1000, ... eval_steps=1000, ... logging_steps=25, ... load_best_model_at_end=True, ... metric_for_best_model="wer", ... greater_is_better=False, ... push_to_hub=True, ... ) >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=encoded_minds["train"], ... eval_dataset=encoded_minds["test"], ... tokenizer=processor.feature_extractor, ... data_collator=data_collator, ... compute_metrics=compute_metrics, ... ) >>> trainer.train() ``` ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด ๋ชจ๋‘๊ฐ€ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก [`~transformers.Trainer.push_to_hub`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ Hub์— ๊ณต์œ ํ•˜์„ธ์š”: ```py >>> trainer.push_to_hub() ``` </pt> </frameworkcontent> <Tip> ์ž๋™ ์Œ์„ฑ ์ธ์‹์„ ์œ„ํ•ด ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋” ์ž์„ธํ•œ ์˜ˆ์ œ๋Š” ์˜์–ด ์ž๋™ ์Œ์„ฑ ์ธ์‹์„ ์œ„ํ•œ [๋ธ”๋กœ๊ทธ ํฌ์ŠคํŠธ](https://huggingface.co/blog/fine-tune-wav2vec2-english)์™€ ๋‹ค๊ตญ์–ด ์ž๋™ ์Œ์„ฑ ์ธ์‹์„ ์œ„ํ•œ [ํฌ์ŠคํŠธ](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. 
</Tip> ## ์ถ”๋ก ํ•˜๊ธฐ[[inference]] ์ข‹์•„์š”, ์ด์ œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ–ˆ์œผ๋‹ˆ ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์˜ค๋””์˜ค ํŒŒ์ผ์„ ๊ฐ€์ ธ์˜ค์„ธ์š”. ํ•„์š”ํ•œ ๊ฒฝ์šฐ ์˜ค๋””์˜ค ํŒŒ์ผ์˜ ์ƒ˜ํ”Œ๋ง ๋น„์œจ์„ ๋ชจ๋ธ์˜ ์ƒ˜ํ”Œ๋ง ๋ ˆ์ดํŠธ์— ๋งž๊ฒŒ ๋ฆฌ์ƒ˜ํ”Œ๋งํ•˜๋Š” ๊ฒƒ์„ ์žŠ์ง€ ๋งˆ์„ธ์š”! ```py >>> from datasets import load_dataset, Audio >>> dataset = load_dataset("PolyAI/minds14", "en-US", split="train") >>> dataset = dataset.cast_column("audio", Audio(sampling_rate=16000)) >>> sampling_rate = dataset.features["audio"].sampling_rate >>> audio_file = dataset[0]["audio"]["path"] ``` ์ถ”๋ก ์„ ์œ„ํ•ด ๋ฏธ์„ธ ์กฐ์ •๋œ ๋ชจ๋ธ์„ ์‹œํ—˜ํ•ด๋ณด๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ [`pipeline`]์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜์—ฌ ์ž๋™ ์Œ์„ฑ ์ธ์‹์„ ์œ„ํ•œ `pipeline`์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ณ  ์˜ค๋””์˜ค ํŒŒ์ผ์„ ์ „๋‹ฌํ•˜์„ธ์š”: ```py >>> from transformers import pipeline >>> transcriber = pipeline("automatic-speech-recognition", model="stevhliu/my_awesome_asr_minds_model") >>> transcriber(audio_file) {'text': 'I WOUD LIKE O SET UP JOINT ACOUNT WTH Y PARTNER'} ``` <Tip> ํ…์ŠคํŠธ๋กœ ๋ณ€ํ™˜๋œ ๊ฒฐ๊ณผ๊ฐ€ ๊ฝค ๊ดœ์ฐฎ์ง€๋งŒ ๋” ์ข‹์„ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค! ๋” ๋‚˜์€ ๊ฒฐ๊ณผ๋ฅผ ์–ป์œผ๋ ค๋ฉด ๋” ๋งŽ์€ ์˜ˆ์ œ๋กœ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜์„ธ์š”! </Tip> `pipeline`์˜ ๊ฒฐ๊ณผ๋ฅผ ์ˆ˜๋™์œผ๋กœ ์žฌํ˜„ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: <frameworkcontent> <pt> ์˜ค๋””์˜ค ํŒŒ์ผ๊ณผ ํ…์ŠคํŠธ๋ฅผ ์ „์ฒ˜๋ฆฌํ•˜๊ณ  PyTorch ํ…์„œ๋กœ `input`์„ ๋ฐ˜ํ™˜ํ•  ํ”„๋กœ์„ธ์„œ๋ฅผ ๊ฐ€์ ธ์˜ค์„ธ์š”: ```py >>> from transformers import AutoProcessor >>> processor = AutoProcessor.from_pretrained("stevhliu/my_awesome_asr_mind_model") >>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt") ``` ์ž…๋ ฅ์„ ๋ชจ๋ธ์— ์ „๋‹ฌํ•˜๊ณ  ๋กœ์ง“์„ ๋ฐ˜ํ™˜ํ•˜์„ธ์š”: ```py >>> import torch >>> from transformers import AutoModelForCTC >>> model = AutoModelForCTC.from_pretrained("stevhliu/my_awesome_asr_mind_model") >>> with torch.no_grad(): ...
logits = model(**inputs).logits ``` ๊ฐ€์žฅ ๋†’์€ ํ™•๋ฅ ์˜ `input_ids`๋ฅผ ์˜ˆ์ธกํ•˜๊ณ , ํ”„๋กœ์„ธ์„œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์˜ˆ์ธก๋œ `input_ids`๋ฅผ ๋‹ค์‹œ ํ…์ŠคํŠธ๋กœ ๋””์ฝ”๋”ฉํ•˜์„ธ์š”: ```py >>> import torch >>> predicted_ids = torch.argmax(logits, dim=-1) >>> transcription = processor.batch_decode(predicted_ids) >>> transcription ['I WOUL LIKE O SET UP JOINT ACOUNT WTH Y PARTNER'] ``` </pt> </frameworkcontent>
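์ฐธ๊ณ ๋กœ, CTC ๋””์ฝ”๋”ฉ์—์„œ `batch_decode`๊ฐ€ ์ˆ˜ํ–‰ํ•˜๋Š” ํ•ต์‹ฌ ๋‹จ๊ณ„, ์ฆ‰ ์—ฐ์†์œผ๋กœ ๋ฐ˜๋ณต๋œ ํ† ํฐ์„ ๋ณ‘ํ•ฉํ•œ ๋’ค ๋ธ”๋žญํฌ ํ† ํฐ์„ ์ œ๊ฑฐํ•˜๋Š” ๊ณผ์ •์„ ์ˆœ์ˆ˜ ํŒŒ์ด์ฌ์œผ๋กœ ์Šค์ผ€์น˜ํ•˜๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค. ์‹ค์ œ ํ† ํฌ๋‚˜์ด์ € ๊ตฌํ˜„๊ณผ๋Š” ์„ธ๋ถ€๊ฐ€ ๋‹ค๋ฅผ ์ˆ˜ ์žˆ์œผ๋ฉฐ, `ctc_collapse` ํ•จ์ˆ˜์™€ ๋ธ”๋žญํฌ ํ† ํฐ id ๊ฐ’์€ ์„ค๋ช…์„ ์œ„ํ•œ ๊ฐ€์ •์ž…๋‹ˆ๋‹ค:

```py
def ctc_collapse(ids, blank_id=0):
    # 1) ์—ฐ์†์œผ๋กœ ๋ฐ˜๋ณต๋œ ํ† ํฐ์„ ํ•˜๋‚˜๋กœ ๋ณ‘ํ•ฉํ•˜๊ณ  2) ๋ธ”๋žญํฌ ํ† ํฐ์„ ์ œ๊ฑฐํ•ฉ๋‹ˆ๋‹ค
    collapsed, previous = [], None
    for i in ids:
        if i != previous and i != blank_id:
            collapsed.append(i)
        previous = i
    return collapsed

# ํ”„๋ ˆ์ž„๋ณ„ argmax ๊ฒฐ๊ณผ ์˜ˆ์‹œ(0์ด ๋ธ”๋žญํฌ ํ† ํฐ์ด๋ผ๊ณ  ๊ฐ€์ •)
ctc_collapse([0, 5, 5, 0, 0, 7, 7, 7, 0, 5])
# [5, 7, 5]
```

๊ฐ™์€ ํ† ํฐ์ด ์—ฐ์†ํ•ด์„œ ๋‘ ๋ฒˆ ๋“ฑ์žฅํ•ด์•ผ ํ•˜๋Š” ๋‹จ์–ด(์˜ˆ: "ll")๋Š” ์ค‘๊ฐ„์— ๋ธ”๋žญํฌ ํ† ํฐ์ด ๋“ค์–ด๊ฐ€์•ผ ๊ตฌ๋ถ„๋˜๋Š”๋ฐ, ์œ„ ์Šค์ผ€์น˜์—์„œ ๋งˆ์ง€๋ง‰ `5`๊ฐ€ ๋ณ„๋„ ํ† ํฐ์œผ๋กœ ์‚ด์•„๋‚จ๋Š” ์ด์œ ๊ฐ€ ๋ฐ”๋กœ ๊ทธ ๊ทœ์น™์ž…๋‹ˆ๋‹ค.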
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ํ† ํฐ ๋ถ„๋ฅ˜[[token-classification]] [[open-in-colab]] <Youtube id="wVHdVlPScxA"/> ํ† ํฐ ๋ถ„๋ฅ˜๋Š” ๋ฌธ์žฅ์˜ ๊ฐœ๋ณ„ ํ† ํฐ์— ๋ ˆ์ด๋ธ”์„ ํ• ๋‹นํ•ฉ๋‹ˆ๋‹ค. ๊ฐ€์žฅ ์ผ๋ฐ˜์ ์ธ ํ† ํฐ ๋ถ„๋ฅ˜ ์ž‘์—… ์ค‘ ํ•˜๋‚˜๋Š” ๊ฐœ์ฒด๋ช… ์ธ์‹(Named Entity Recognition, NER)์ž…๋‹ˆ๋‹ค. ๊ฐœ์ฒด๋ช… ์ธ์‹์€ ๋ฌธ์žฅ์—์„œ ์‚ฌ๋žŒ, ์œ„์น˜ ๋˜๋Š” ์กฐ์ง๊ณผ ๊ฐ™์€ ๊ฐ ๊ฐœ์ฒด์˜ ๋ ˆ์ด๋ธ”์„ ์ฐพ์œผ๋ ค๊ณ  ์‹œ๋„ํ•ฉ๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ ํ•™์Šตํ•  ๋‚ด์šฉ์€: 1. [WNUT 17](https://huggingface.co/datasets/wnut_17) ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ [DistilBERT](https://huggingface.co/distilbert-base-uncased)๋ฅผ ํŒŒ์ธ ํŠœ๋‹ํ•˜์—ฌ ์ƒˆ๋กœ์šด ๊ฐœ์ฒด๋ฅผ ํƒ์ง€ํ•ฉ๋‹ˆ๋‹ค. 2. ์ถ”๋ก ์„ ์œ„ํ•ด ํŒŒ์ธ ํŠœ๋‹ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. 
<Tip> ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ ์„ค๋ช…ํ•˜๋Š” ์ž‘์—…์€ ๋‹ค์Œ ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜์— ์˜ํ•ด ์ง€์›๋ฉ๋‹ˆ๋‹ค: <!--This tip is automatically generated by `make fix-copies`, do not fill manually!--> [ALBERT](../model_doc/albert), [BERT](../model_doc/bert), [BigBird](../model_doc/big_bird), [BioGpt](../model_doc/biogpt), [BLOOM](../model_doc/bloom), [CamemBERT](../model_doc/camembert), [CANINE](../model_doc/canine), [ConvBERT](../model_doc/convbert), [Data2VecText](../model_doc/data2vec-text), [DeBERTa](../model_doc/deberta), [DeBERTa-v2](../model_doc/deberta-v2), [DistilBERT](../model_doc/distilbert), [ELECTRA](../model_doc/electra), [ERNIE](../model_doc/ernie), [ErnieM](../model_doc/ernie_m), [ESM](../model_doc/esm), [FlauBERT](../model_doc/flaubert), [FNet](../model_doc/fnet), [Funnel Transformer](../model_doc/funnel), [GPT-Sw3](../model_doc/gpt-sw3), [OpenAI GPT-2](../model_doc/gpt2), [GPTBigCode](../model_doc/gpt_bigcode), [I-BERT](../model_doc/ibert), [LayoutLM](../model_doc/layoutlm), [LayoutLMv2](../model_doc/layoutlmv2), [LayoutLMv3](../model_doc/layoutlmv3), [LiLT](../model_doc/lilt), [Longformer](../model_doc/longformer), [LUKE](../model_doc/luke), [MarkupLM](../model_doc/markuplm), [MEGA](../model_doc/mega), [Megatron-BERT](../model_doc/megatron-bert), [MobileBERT](../model_doc/mobilebert), [MPNet](../model_doc/mpnet), [Nezha](../model_doc/nezha), [Nystrรถmformer](../model_doc/nystromformer), [QDQBert](../model_doc/qdqbert), [RemBERT](../model_doc/rembert), [RoBERTa](../model_doc/roberta), [RoBERTa-PreLayerNorm](../model_doc/roberta-prelayernorm), [RoCBert](../model_doc/roc_bert), [RoFormer](../model_doc/roformer), [SqueezeBERT](../model_doc/squeezebert), [XLM](../model_doc/xlm), [XLM-RoBERTa](../model_doc/xlm-roberta), [XLM-RoBERTa-XL](../model_doc/xlm-roberta-xl), [XLNet](../model_doc/xlnet), [X-MOD](../model_doc/xmod), [YOSO](../model_doc/yoso) <!--End of the generated tip--> </Tip> ์‹œ์ž‘ํ•˜๊ธฐ ์ „์—, ํ•„์š”ํ•œ ๋ชจ๋“  ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ์„ค์น˜๋˜์–ด 
์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install transformers datasets evaluate seqeval ``` Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜์—ฌ ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๊ณต์œ ํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ๋ฉ”์‹œ์ง€๊ฐ€ ํ‘œ์‹œ๋˜๋ฉด, ํ† ํฐ์„ ์ž…๋ ฅํ•˜์—ฌ ๋กœ๊ทธ์ธํ•˜์„ธ์š”: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## WNUT 17 ๋ฐ์ดํ„ฐ ์„ธํŠธ ๊ฐ€์ ธ์˜ค๊ธฐ[[load-wnut-17-dataset]] ๋จผ์ € ๐Ÿค— Datasets ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ WNUT 17 ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค: ```py >>> from datasets import load_dataset >>> wnut = load_dataset("wnut_17") ``` ๋‹ค์Œ ์˜ˆ์ œ๋ฅผ ์‚ดํŽด๋ณด์„ธ์š”: ```py >>> wnut["train"][0] {'id': '0', 'ner_tags': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 7, 8, 8, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0], 'tokens': ['@paulwalk', 'It', "'s", 'the', 'view', 'from', 'where', 'I', "'m", 'living', 'for', 'two', 'weeks', '.', 'Empire', 'State', 'Building', '=', 'ESB', '.', 'Pretty', 'bad', 'storm', 'here', 'last', 'evening', '.'] } ``` `ner_tags`์˜ ๊ฐ ์ˆซ์ž๋Š” ๊ฐœ์ฒด๋ฅผ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. ์ˆซ์ž๋ฅผ ๋ ˆ์ด๋ธ” ์ด๋ฆ„์œผ๋กœ ๋ณ€ํ™˜ํ•˜์—ฌ ๊ฐœ์ฒด๊ฐ€ ๋ฌด์—‡์ธ์ง€ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค: ```py >>> label_list = wnut["train"].features[f"ner_tags"].feature.names >>> label_list [ "O", "B-corporation", "I-corporation", "B-creative-work", "I-creative-work", "B-group", "I-group", "B-location", "I-location", "B-person", "I-person", "B-product", "I-product", ] ``` ๊ฐ `ner_tag`์˜ ์•ž์— ๋ถ™์€ ๋ฌธ์ž๋Š” ๊ฐœ์ฒด์˜ ํ† ํฐ ์œ„์น˜๋ฅผ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค: - `B-`๋Š” ๊ฐœ์ฒด์˜ ์‹œ์ž‘์„ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. - `I-`๋Š” ํ† ํฐ์ด ๋™์ผํ•œ ๊ฐœ์ฒด ๋‚ด๋ถ€์— ํฌํ•จ๋˜์–ด ์žˆ์Œ์„ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค(์˜ˆ๋ฅผ ๋“ค์–ด `State` ํ† ํฐ์€ `Empire State Building`์™€ ๊ฐ™์€ ๊ฐœ์ฒด์˜ ์ผ๋ถ€์ž…๋‹ˆ๋‹ค). - `0`๋Š” ํ† ํฐ์ด ์–ด๋–ค ๊ฐœ์ฒด์—๋„ ํ•ด๋‹นํ•˜์ง€ ์•Š์Œ์„ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. 
## ์ „์ฒ˜๋ฆฌ[[preprocess]] <Youtube id="iY2AZYdZAr0"/> ๋‹ค์Œ์œผ๋กœ `tokens` ํ•„๋“œ๋ฅผ ์ „์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด DistilBERT ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") ``` ์œ„์˜ ์˜ˆ์ œ `tokens` ํ•„๋“œ๋ฅผ ๋ณด๋ฉด ์ž…๋ ฅ์ด ์ด๋ฏธ ํ† ํฐํ™”๋œ ๊ฒƒ์ฒ˜๋Ÿผ ๋ณด์ž…๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์‹ค์ œ๋กœ ์ž…๋ ฅ์€ ์•„์ง ํ† ํฐํ™”๋˜์ง€ ์•Š์•˜์œผ๋ฏ€๋กœ ๋‹จ์–ด๋ฅผ ํ•˜์œ„ ๋‹จ์–ด๋กœ ํ† ํฐํ™”ํ•˜๊ธฐ ์œ„ํ•ด `is_split_into_words=True`๋ฅผ ์„ค์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์˜ˆ์ œ๋กœ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค: ```py >>> example = wnut["train"][0] >>> tokenized_input = tokenizer(example["tokens"], is_split_into_words=True) >>> tokens = tokenizer.convert_ids_to_tokens(tokenized_input["input_ids"]) >>> tokens ['[CLS]', '@', 'paul', '##walk', 'it', "'", 's', 'the', 'view', 'from', 'where', 'i', "'", 'm', 'living', 'for', 'two', 'weeks', '.', 'empire', 'state', 'building', '=', 'es', '##b', '.', 'pretty', 'bad', 'storm', 'here', 'last', 'evening', '.', '[SEP]'] ``` ๊ทธ๋Ÿฌ๋‚˜ ์ด๋กœ ์ธํ•ด `[CLS]`๊ณผ `[SEP]`๋ผ๋Š” ํŠน์ˆ˜ ํ† ํฐ์ด ์ถ”๊ฐ€๋˜๊ณ , ํ•˜์œ„ ๋‹จ์–ด ํ† ํฐํ™”๋กœ ์ธํ•ด ์ž…๋ ฅ๊ณผ ๋ ˆ์ด๋ธ” ๊ฐ„์— ๋ถˆ์ผ์น˜๊ฐ€ ๋ฐœ์ƒํ•ฉ๋‹ˆ๋‹ค. ํ•˜๋‚˜์˜ ๋ ˆ์ด๋ธ”์— ํ•ด๋‹นํ•˜๋Š” ๋‹จ์ผ ๋‹จ์–ด๋Š” ์ด์ œ ๋‘ ๊ฐœ์˜ ํ•˜์œ„ ๋‹จ์–ด๋กœ ๋ถ„ํ• ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ† ํฐ๊ณผ ๋ ˆ์ด๋ธ”์„ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์žฌ์ •๋ ฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: 1. [`word_ids`](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.BatchEncoding.word_ids) ๋ฉ”์†Œ๋“œ๋กœ ๋ชจ๋“  ํ† ํฐ์„ ํ•ด๋‹น ๋‹จ์–ด์— ๋งคํ•‘ํ•ฉ๋‹ˆ๋‹ค. 2. ํŠน์ˆ˜ ํ† ํฐ `[CLS]`์™€ `[SEP]`์— `-100` ๋ ˆ์ด๋ธ”์„ ํ• ๋‹นํ•˜์—ฌ, PyTorch ์†์‹ค ํ•จ์ˆ˜๊ฐ€ ํ•ด๋‹น ํ† ํฐ์„ ๋ฌด์‹œํ•˜๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. 3. ์ฃผ์–ด์ง„ ๋‹จ์–ด์˜ ์ฒซ ๋ฒˆ์งธ ํ† ํฐ์—๋งŒ ๋ ˆ์ด๋ธ”์„ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค. ๊ฐ™์€ ๋‹จ์–ด์˜ ๋‹ค๋ฅธ ํ•˜์œ„ ํ† ํฐ์— `-100`์„ ํ• ๋‹นํ•ฉ๋‹ˆ๋‹ค. 
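์œ„ ์„ธ ๋‹จ๊ณ„๋ฅผ ํ† ํฌ๋‚˜์ด์ € ์—†์ด ๊ฐ„๋‹จํ•œ ์˜ˆ์‹œ๋กœ ์Šค์ผ€์น˜ํ•˜๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค. `align_labels` ํ•จ์ˆ˜์™€ `word_ids` ๊ฐ’์€ ์„ค๋ช…์„ ์œ„ํ•ด ์ž„์˜๋กœ ๊ฐ€์ •ํ•œ ๊ฒƒ์ž…๋‹ˆ๋‹ค:

```py
def align_labels(word_ids, word_labels):
    # ํŠน์ˆ˜ ํ† ํฐ(None)๊ณผ ๋‹จ์–ด์˜ ๋‘ ๋ฒˆ์งธ ์ดํ›„ ํ•˜์œ„ ํ† ํฐ์—๋Š” -100์„,
    # ๊ฐ ๋‹จ์–ด์˜ ์ฒซ ํ•˜์œ„ ํ† ํฐ์—๋งŒ ์‹ค์ œ ๋ ˆ์ด๋ธ”์„ ํ• ๋‹นํ•ฉ๋‹ˆ๋‹ค
    aligned, previous = [], None
    for idx in word_ids:
        if idx is None:
            aligned.append(-100)
        elif idx != previous:
            aligned.append(word_labels[idx])
        else:
            aligned.append(-100)
        previous = idx
    return aligned

# '@paulwalk'์ด ์„ธ ๊ฐœ ํ•˜์œ„ ํ† ํฐ('@', 'paul', '##walk')์œผ๋กœ ์ชผ๊ฐœ์กŒ๋‹ค๊ณ  ๊ฐ€์ •
word_ids = [None, 0, 0, 0, 1, None]  # [CLS], '@', 'paul', '##walk', 'it', [SEP]
word_labels = [0, 0]                 # ๋‹จ์–ด ๋‹จ์œ„ ๋ ˆ์ด๋ธ”
aligned = align_labels(word_ids, word_labels)
# [-100, 0, -100, -100, 0, -100]
```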
๋‹ค์Œ์€ ํ† ํฐ๊ณผ ๋ ˆ์ด๋ธ”์„ ์žฌ์ •๋ ฌํ•˜๊ณ  DistilBERT์˜ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด๋ณด๋‹ค ๊ธธ์ง€ ์•Š๋„๋ก ์‹œํ€€์Šค๋ฅผ ์ž˜๋ผ๋‚ด๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“œ๋Š” ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค: ```py >>> def tokenize_and_align_labels(examples): ... tokenized_inputs = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True) ... labels = [] ... for i, label in enumerate(examples[f"ner_tags"]): ... word_ids = tokenized_inputs.word_ids(batch_index=i) # Map tokens to their respective word. ... previous_word_idx = None ... label_ids = [] ... for word_idx in word_ids: # Set the special tokens to -100. ... if word_idx is None: ... label_ids.append(-100) ... elif word_idx != previous_word_idx: # Only label the first token of a given word. ... label_ids.append(label[word_idx]) ... else: ... label_ids.append(-100) ... previous_word_idx = word_idx ... labels.append(label_ids) ... tokenized_inputs["labels"] = labels ... return tokenized_inputs ``` ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์ ์šฉํ•˜๋ ค๋ฉด, ๐Ÿค— Datasets [`~datasets.Dataset.map`] ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. `batched=True`๋กœ ์„ค์ •ํ•˜์—ฌ ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์—ฌ๋Ÿฌ ์š”์†Œ๋ฅผ ํ•œ ๋ฒˆ์— ์ฒ˜๋ฆฌํ•˜๋ฉด `map` ํ•จ์ˆ˜์˜ ์†๋„๋ฅผ ๋†’์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> tokenized_wnut = wnut.map(tokenize_and_align_labels, batched=True) ``` ์ด์ œ [`DataCollatorWithPadding`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์˜ˆ์ œ ๋ฐฐ์น˜๋ฅผ ๋งŒ๋“ค์–ด๋ด…์‹œ๋‹ค. ๋ฐ์ดํ„ฐ ์„ธํŠธ ์ „์ฒด๋ฅผ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ํŒจ๋”ฉํ•˜๋Š” ๋Œ€์‹ , *๋™์  ํŒจ๋”ฉ*์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐฐ์น˜์—์„œ ๊ฐ€์žฅ ๊ธด ๊ธธ์ด์— ๋งž๊ฒŒ ๋ฌธ์žฅ์„ ํŒจ๋”ฉํ•˜๋Š” ๊ฒƒ์ด ํšจ์œจ์ ์ž…๋‹ˆ๋‹ค. 
<frameworkcontent> <pt> ```py >>> from transformers import DataCollatorForTokenClassification >>> data_collator = DataCollatorForTokenClassification(tokenizer=tokenizer) ``` </pt> <tf> ```py >>> from transformers import DataCollatorForTokenClassification >>> data_collator = DataCollatorForTokenClassification(tokenizer=tokenizer, return_tensors="tf") ``` </tf> </frameworkcontent> ## ํ‰๊ฐ€[[evaluation]] ํ›ˆ๋ จ ์ค‘ ๋ชจ๋ธ์˜ ์„ฑ๋Šฅ์„ ํ‰๊ฐ€ํ•˜๊ธฐ ์œ„ํ•ด ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ํฌํ•จํ•˜๋Š” ๊ฒƒ์ด ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. ๐Ÿค— [Evaluate](https://huggingface.co/docs/evaluate/index) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋น ๋ฅด๊ฒŒ ํ‰๊ฐ€ ๋ฐฉ๋ฒ•์„ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ์ž‘์—…์—์„œ๋Š” [seqeval](https://huggingface.co/spaces/evaluate-metric/seqeval) ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. (ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ๊ฐ€์ ธ์˜ค๊ณ  ๊ณ„์‚ฐํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด์„œ๋Š” ๐Ÿค— Evaluate [๋น ๋ฅธ ๋‘˜๋Ÿฌ๋ณด๊ธฐ](https://huggingface.co/docs/evaluate/a_quick_tour)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”). Seqeval์€ ์‹ค์ œ๋กœ ์ •๋ฐ€๋„, ์žฌํ˜„๋ฅ , F1 ๋ฐ ์ •ํ™•๋„์™€ ๊ฐ™์€ ์—ฌ๋Ÿฌ ์ ์ˆ˜๋ฅผ ์‚ฐ์ถœํ•ฉ๋‹ˆ๋‹ค. ```py >>> import evaluate >>> seqeval = evaluate.load("seqeval") ``` ๋จผ์ € NER ๋ ˆ์ด๋ธ”์„ ๊ฐ€์ ธ์˜จ ๋‹ค์Œ, [`~evaluate.EvaluationModule.compute`]์— ์‹ค์ œ ์˜ˆ์ธก๊ณผ ์‹ค์ œ ๋ ˆ์ด๋ธ”์„ ์ „๋‹ฌํ•˜์—ฌ ์ ์ˆ˜๋ฅผ ๊ณ„์‚ฐํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค: ```py >>> import numpy as np >>> labels = [label_list[i] for i in example[f"ner_tags"]] >>> def compute_metrics(p): ... predictions, labels = p ... predictions = np.argmax(predictions, axis=2) ... true_predictions = [ ... [label_list[p] for (p, l) in zip(prediction, label) if l != -100] ... for prediction, label in zip(predictions, labels) ... ] ... true_labels = [ ... [label_list[l] for (p, l) in zip(prediction, label) if l != -100] ... for prediction, label in zip(predictions, labels) ... ] ... results = seqeval.compute(predictions=true_predictions, references=true_labels) ... return { ... "precision": results["overall_precision"], ... "recall": results["overall_recall"], ... 
"f1": results["overall_f1"], ... "accuracy": results["overall_accuracy"], ... } ``` ์ด์ œ `compute_metrics` ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์œผ๋ฉฐ, ํ›ˆ๋ จ์„ ์„ค์ •ํ•˜๋ฉด ์ด ํ•จ์ˆ˜๋กœ ๋˜๋Œ์•„์˜ฌ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ## ํ›ˆ๋ จ[[train]] ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๊ธฐ ์ „์—, `id2label`์™€ `label2id`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์˜ˆ์ƒ๋˜๋Š” id์™€ ๋ ˆ์ด๋ธ”์˜ ๋งต์„ ์ƒ์„ฑํ•˜์„ธ์š”: ```py >>> id2label = { ... 0: "O", ... 1: "B-corporation", ... 2: "I-corporation", ... 3: "B-creative-work", ... 4: "I-creative-work", ... 5: "B-group", ... 6: "I-group", ... 7: "B-location", ... 8: "I-location", ... 9: "B-person", ... 10: "I-person", ... 11: "B-product", ... 12: "I-product", ... } >>> label2id = { ... "O": 0, ... "B-corporation": 1, ... "I-corporation": 2, ... "B-creative-work": 3, ... "I-creative-work": 4, ... "B-group": 5, ... "I-group": 6, ... "B-location": 7, ... "I-location": 8, ... "B-person": 9, ... "I-person": 10, ... "B-product": 11, ... "I-product": 12, ... } ``` <frameworkcontent> <pt> <Tip> [`Trainer`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํŒŒ์ธ ํŠœ๋‹ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ์ต์ˆ™ํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ, [์—ฌ๊ธฐ](../training#train-with-pytorch-trainer)์—์„œ ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ์„ ํ™•์ธํ•˜์„ธ์š”! </Tip> ์ด์ œ ๋ชจ๋ธ์„ ํ›ˆ๋ จ์‹œํ‚ฌ ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! [`AutoModelForTokenClassification`]๋กœ DistilBERT๋ฅผ ๊ฐ€์ ธ์˜ค๊ณ  ์˜ˆ์ƒ๋˜๋Š” ๋ ˆ์ด๋ธ” ์ˆ˜์™€ ๋ ˆ์ด๋ธ” ๋งคํ•‘์„ ์ง€์ •ํ•˜์„ธ์š”: ```py >>> from transformers import AutoModelForTokenClassification, TrainingArguments, Trainer >>> model = AutoModelForTokenClassification.from_pretrained( ... "distilbert-base-uncased", num_labels=13, id2label=id2label, label2id=label2id ... ) ``` ์ด์ œ ์„ธ ๋‹จ๊ณ„๋งŒ ๊ฑฐ์น˜๋ฉด ๋์ž…๋‹ˆ๋‹ค: 1. [`TrainingArguments`]์—์„œ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ •์˜ํ•˜์„ธ์š”. `output_dir`๋Š” ๋ชจ๋ธ์„ ์ €์žฅํ•  ์œ„์น˜๋ฅผ ์ง€์ •ํ•˜๋Š” ์œ ์ผํ•œ ๋งค๊ฐœ๋ณ€์ˆ˜์ž…๋‹ˆ๋‹ค.
์ด ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œํ•˜๊ธฐ ์œ„ํ•ด `push_to_hub=True`๋ฅผ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค(๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ธฐ ์œ„ํ•ด Hugging Face์— ๋กœ๊ทธ์ธํ•ด์•ผํ•ฉ๋‹ˆ๋‹ค.) ๊ฐ ์—ํญ์ด ๋๋‚  ๋•Œ๋งˆ๋‹ค, [`Trainer`]๋Š” seqeval ์ ์ˆ˜๋ฅผ ํ‰๊ฐ€ํ•˜๊ณ  ํ›ˆ๋ จ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์ €์žฅํ•ฉ๋‹ˆ๋‹ค. 2. [`Trainer`]์— ํ›ˆ๋ จ ์ธ์ˆ˜์™€ ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ ์„ธํŠธ, ํ† ํฌ๋‚˜์ด์ €, ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ ๋ฐ `compute_metrics` ํ•จ์ˆ˜๋ฅผ ์ „๋‹ฌํ•˜์„ธ์š”. 3. [`~Trainer.train`]๋ฅผ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ํŒŒ์ธ ํŠœ๋‹ํ•˜์„ธ์š”. ```py >>> training_args = TrainingArguments( ... output_dir="my_awesome_wnut_model", ... learning_rate=2e-5, ... per_device_train_batch_size=16, ... per_device_eval_batch_size=16, ... num_train_epochs=2, ... weight_decay=0.01, ... evaluation_strategy="epoch", ... save_strategy="epoch", ... load_best_model_at_end=True, ... push_to_hub=True, ... ) >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=tokenized_wnut["train"], ... eval_dataset=tokenized_wnut["test"], ... tokenizer=tokenizer, ... data_collator=data_collator, ... compute_metrics=compute_metrics, ... ) >>> trainer.train() ``` ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด, [`~transformers.Trainer.push_to_hub`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ์— ๊ณต์œ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> trainer.push_to_hub() ``` </pt> <tf> <Tip> Keras๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํŒŒ์ธ ํŠœ๋‹ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ์ต์ˆ™ํ•˜์ง€ ์•Š์€ ๊ฒฝ์šฐ, [์—ฌ๊ธฐ](../training#train-a-tensorflow-model-with-keras)์˜ ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ์„ ํ™•์ธํ•˜์„ธ์š”! </Tip> TensorFlow์—์„œ ๋ชจ๋ธ์„ ํŒŒ์ธ ํŠœ๋‹ํ•˜๋ ค๋ฉด, ๋จผ์ € ์˜ตํ‹ฐ๋งˆ์ด์ € ํ•จ์ˆ˜์™€ ํ•™์Šต๋ฅ  ์Šค์ผ€์ฅด, ๊ทธ๋ฆฌ๊ณ  ์ผ๋ถ€ ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์„ค์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import create_optimizer >>> batch_size = 16 >>> num_train_epochs = 3 >>> num_train_steps = (len(tokenized_wnut["train"]) // batch_size) * num_train_epochs >>> optimizer, lr_schedule = create_optimizer( ... init_lr=2e-5, ... num_train_steps=num_train_steps, ... 
weight_decay_rate=0.01, ... num_warmup_steps=0, ... ) ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ [`TFAutoModelForTokenClassification`]์„ ์‚ฌ์šฉํ•˜์—ฌ DistilBERT๋ฅผ ๊ฐ€์ ธ์˜ค๊ณ , ์˜ˆ์ƒ๋˜๋Š” ๋ ˆ์ด๋ธ” ์ˆ˜์™€ ๋ ˆ์ด๋ธ” ๋งคํ•‘์„ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import TFAutoModelForTokenClassification >>> model = TFAutoModelForTokenClassification.from_pretrained( ... "distilbert-base-uncased", num_labels=13, id2label=id2label, label2id=label2id ... ) ``` [`~transformers.TFPreTrainedModel.prepare_tf_dataset`]์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ `tf.data.Dataset` ํ˜•์‹์œผ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> tf_train_set = model.prepare_tf_dataset( ... tokenized_wnut["train"], ... shuffle=True, ... batch_size=16, ... collate_fn=data_collator, ... ) >>> tf_validation_set = model.prepare_tf_dataset( ... tokenized_wnut["validation"], ... shuffle=False, ... batch_size=16, ... collate_fn=data_collator, ... ) ``` [`compile`](https://keras.io/api/models/model_training_apis/#compile-method)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ›ˆ๋ จํ•  ๋ชจ๋ธ์„ ๊ตฌ์„ฑํ•ฉ๋‹ˆ๋‹ค: ```py >>> import tensorflow as tf >>> model.compile(optimizer=optimizer) ``` ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ์„ค์ •ํ•ด์•ผ ํ•  ๋งˆ์ง€๋ง‰ ๋‘ ๊ฐ€์ง€๋Š” ์˜ˆ์ธก์—์„œ seqeval ์ ์ˆ˜๋ฅผ ๊ณ„์‚ฐํ•˜๊ณ , ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œํ•  ๋ฐฉ๋ฒ•์„ ์ œ๊ณตํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋ชจ๋‘ [Keras callbacks](../main_classes/keras_callbacks)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ˆ˜ํ–‰๋ฉ๋‹ˆ๋‹ค. [`~transformers.KerasMetricCallback`]์— `compute_metrics` ํ•จ์ˆ˜๋ฅผ ์ „๋‹ฌํ•˜์„ธ์š”: ```py >>> from transformers.keras_callbacks import KerasMetricCallback >>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set) ``` [`~transformers.PushToHubCallback`]์—์„œ ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์—…๋กœ๋“œํ•  ์œ„์น˜๋ฅผ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers.keras_callbacks import PushToHubCallback >>> push_to_hub_callback = PushToHubCallback( ... output_dir="my_awesome_wnut_model", ... tokenizer=tokenizer, ...
) ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ์ฝœ๋ฐฑ์„ ํ•จ๊ป˜ ๋ฌถ์Šต๋‹ˆ๋‹ค: ```py >>> callbacks = [metric_callback, push_to_hub_callback] ``` ๋“œ๋””์–ด, ๋ชจ๋ธ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! [`fit`](https://keras.io/api/models/model_training_apis/#fit-method)์— ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ ์„ธํŠธ, ๊ฒ€์ฆ ๋ฐ์ดํ„ฐ ์„ธํŠธ, ์—ํญ์˜ ์ˆ˜ ๋ฐ ์ฝœ๋ฐฑ์„ ์ „๋‹ฌํ•˜์—ฌ ํŒŒ์ธ ํŠœ๋‹ํ•ฉ๋‹ˆ๋‹ค: ```py >>> model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=3, callbacks=callbacks) ``` ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด, ๋ชจ๋ธ์ด ์ž๋™์œผ๋กœ ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œ๋˜์–ด ๋ˆ„๊ตฌ๋‚˜ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! </tf> </frameworkcontent> <Tip> ํ† ํฐ ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•œ ๋ชจ๋ธ์„ ํŒŒ์ธ ํŠœ๋‹ํ•˜๋Š” ์ž์„ธํ•œ ์˜ˆ์ œ๋Š” ๋‹ค์Œ [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification.ipynb) ๋˜๋Š” [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. </Tip> ## ์ถ”๋ก [[inference]] ์ข‹์•„์š”, ์ด์ œ ๋ชจ๋ธ์„ ํŒŒ์ธ ํŠœ๋‹ํ–ˆ์œผ๋‹ˆ ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ์ถ”๋ก ์„ ์ˆ˜ํ–‰ํ•˜๊ณ ์ž ํ•˜๋Š” ํ…์ŠคํŠธ๋ฅผ ๊ฐ€์ ธ์™€๋ด…์‹œ๋‹ค: ```py >>> text = "The Golden State Warriors are an American professional basketball team based in San Francisco." ``` ํŒŒ์ธ ํŠœ๋‹๋œ ๋ชจ๋ธ๋กœ ์ถ”๋ก ์„ ์‹œ๋„ํ•˜๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ [`pipeline`]๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. 
๋ชจ๋ธ๋กœ NER์˜ `pipeline`์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ณ , ํ…์ŠคํŠธ๋ฅผ ์ „๋‹ฌํ•ด๋ณด์„ธ์š”: ```py >>> from transformers import pipeline >>> classifier = pipeline("ner", model="stevhliu/my_awesome_wnut_model") >>> classifier(text) [{'entity': 'B-location', 'score': 0.42658573, 'index': 2, 'word': 'golden', 'start': 4, 'end': 10}, {'entity': 'I-location', 'score': 0.35856336, 'index': 3, 'word': 'state', 'start': 11, 'end': 16}, {'entity': 'B-group', 'score': 0.3064001, 'index': 4, 'word': 'warriors', 'start': 17, 'end': 25}, {'entity': 'B-location', 'score': 0.65523505, 'index': 13, 'word': 'san', 'start': 80, 'end': 83}, {'entity': 'B-location', 'score': 0.4668663, 'index': 14, 'word': 'francisco', 'start': 84, 'end': 93}] ``` ์›ํ•œ๋‹ค๋ฉด, `pipeline`์˜ ๊ฒฐ๊ณผ๋ฅผ ์ˆ˜๋™์œผ๋กœ ๋ณต์ œํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: <frameworkcontent> <pt> ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•˜๊ณ  PyTorch ํ…์„œ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_wnut_model") >>> inputs = tokenizer(text, return_tensors="pt") ``` ์ž…๋ ฅ์„ ๋ชจ๋ธ์— ์ „๋‹ฌํ•˜๊ณ  `logits`์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForTokenClassification >>> model = AutoModelForTokenClassification.from_pretrained("stevhliu/my_awesome_wnut_model") >>> with torch.no_grad(): ... 
logits = model(**inputs).logits ``` ๊ฐ€์žฅ ๋†’์€ ํ™•๋ฅ ์„ ๊ฐ€์ง„ ํด๋ž˜์Šค๋ฅผ ๋ชจ๋ธ์˜ `id2label` ๋งคํ•‘์„ ์‚ฌ์šฉํ•˜์—ฌ ํ…์ŠคํŠธ ๋ ˆ์ด๋ธ”๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> predictions = torch.argmax(logits, dim=2) >>> predicted_token_class = [model.config.id2label[t.item()] for t in predictions[0]] >>> predicted_token_class ['O', 'O', 'B-location', 'I-location', 'B-group', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-location', 'B-location', 'O', 'O'] ``` </pt> <tf> ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•˜๊ณ  TensorFlow ํ…์„œ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_wnut_model") >>> inputs = tokenizer(text, return_tensors="tf") ``` ์ž…๋ ฅ๊ฐ’์„ ๋ชจ๋ธ์— ์ „๋‹ฌํ•˜๊ณ  `logits`์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import TFAutoModelForTokenClassification >>> model = TFAutoModelForTokenClassification.from_pretrained("stevhliu/my_awesome_wnut_model") >>> logits = model(**inputs).logits ``` ๊ฐ€์žฅ ๋†’์€ ํ™•๋ฅ ์„ ๊ฐ€์ง„ ํด๋ž˜์Šค๋ฅผ ๋ชจ๋ธ์˜ `id2label` ๋งคํ•‘์„ ์‚ฌ์šฉํ•˜์—ฌ ํ…์ŠคํŠธ ๋ ˆ์ด๋ธ”๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> predicted_token_class_ids = tf.math.argmax(logits, axis=-1) >>> predicted_token_class = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()] >>> predicted_token_class ['O', 'O', 'B-location', 'I-location', 'B-group', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-location', 'B-location', 'O', 'O'] ``` </tf> </frameworkcontent>
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์ธ๊ณผ ์–ธ์–ด ๋ชจ๋ธ๋ง[[causal-language-modeling]] [[open-in-colab]] ์–ธ์–ด ๋ชจ๋ธ๋ง์€ ์ธ๊ณผ์  ์–ธ์–ด ๋ชจ๋ธ๋ง๊ณผ ๋งˆ์Šคํฌ๋“œ ์–ธ์–ด ๋ชจ๋ธ๋ง, ๋‘ ๊ฐ€์ง€ ์œ ํ˜•์œผ๋กœ ๋‚˜๋‰ฉ๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ์ธ๊ณผ์  ์–ธ์–ด ๋ชจ๋ธ๋ง์„ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค. ์ธ๊ณผ ์–ธ์–ด ๋ชจ๋ธ์€ ํ…์ŠคํŠธ ์ƒ์„ฑ์— ์ž์ฃผ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ๋˜ ์ฐฝ์˜์ ์ธ ๋ฐฉํ–ฅ์œผ๋กœ ์‘์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ง์ ‘ ์‚ฌ์šฉํ•˜๋ฉฐ ์žฌ๋ฏธ์žˆ๋Š” ํƒ๊ตฌ๋ฅผ ํ•ด๋ณด๊ฑฐ๋‚˜, Copilot ๋˜๋Š” CodeParrot์™€ ๊ฐ™์€ ์ง€๋Šฅํ˜• ์ฝ”๋”ฉ ์–ด์‹œ์Šคํ„ดํŠธ์˜ ๊ธฐ๋ฐ˜์ด ๋˜๊ธฐ๋„ ํ•ฉ๋‹ˆ๋‹ค. <Youtube id="Vpjb1lu0MDk"/> ์ธ๊ณผ ์–ธ์–ด ๋ชจ๋ธ๋ง์€ ํ† ํฐ ์‹œํ€€์Šค์—์„œ ๋‹ค์Œ ํ† ํฐ์„ ์˜ˆ์ธกํ•˜๋ฉฐ, ๋ชจ๋ธ์€ ์™ผ์ชฝ์˜ ํ† ํฐ์—๋งŒ ์ ‘๊ทผํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Š” ๋ชจ๋ธ์ด ๋ฏธ๋ž˜์˜ ํ† ํฐ์„ ๋ณผ ์ˆ˜ ์—†๋‹ค๋Š” ๊ฒƒ์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค. ์ธ๊ณผ ์–ธ์–ด ๋ชจ๋ธ์˜ ์˜ˆ๋กœ GPT-2๊ฐ€ ์žˆ์ฃ . ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ๋‹ค์Œ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์•ˆ๋‚ดํ•ฉ๋‹ˆ๋‹ค: 1. [DistilGPT2](https://huggingface.co/distilgpt2) ๋ชจ๋ธ์„ [ELI5](https://huggingface.co/datasets/eli5) ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ [r/askscience](https://www.reddit.com/r/askscience/) ํ•˜์œ„ ์ง‘ํ•ฉ์œผ๋กœ ๋ฏธ์„ธ ์กฐ์ • 2. 
๋ฏธ์„ธ ์กฐ์ •๋œ ๋ชจ๋ธ์„ ์ถ”๋ก ์— ์‚ฌ์šฉ <Tip> ์ด ์•ˆ๋‚ด์„œ์˜ ๋‹จ๊ณ„์™€ ๋™์ผํ•œ ๋ฐฉ๋ฒ•์œผ๋กœ ์ธ๊ณผ ์–ธ์–ด ๋ชจ๋ธ๋ง์„ ์œ„ํ•ด ๋‹ค๋ฅธ ์•„ํ‚คํ…์ฒ˜๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ ์•„ํ‚คํ…์ฒ˜ ์ค‘ ํ•˜๋‚˜๋ฅผ ์„ ํƒํ•˜์„ธ์š”: <!--This tip is automatically generated by `make fix-copies`, do not fill manually!--> [BART](../model_doc/bart), [BERT](../model_doc/bert), [Bert Generation](../model_doc/bert-generation), [BigBird](../model_doc/big_bird), [BigBird-Pegasus](../model_doc/bigbird_pegasus), [BioGpt](../model_doc/biogpt), [Blenderbot](../model_doc/blenderbot), [BlenderbotSmall](../model_doc/blenderbot-small), [BLOOM](../model_doc/bloom), [CamemBERT](../model_doc/camembert), [CodeGen](../model_doc/codegen), [CPM-Ant](../model_doc/cpmant), [CTRL](../model_doc/ctrl), [Data2VecText](../model_doc/data2vec-text), [ELECTRA](../model_doc/electra), [ERNIE](../model_doc/ernie), [GIT](../model_doc/git), [GPT-Sw3](../model_doc/gpt-sw3), [OpenAI GPT-2](../model_doc/gpt2), [GPTBigCode](../model_doc/gpt_bigcode), [GPT Neo](../model_doc/gpt_neo), [GPT NeoX](../model_doc/gpt_neox), [GPT NeoX Japanese](../model_doc/gpt_neox_japanese), [GPT-J](../model_doc/gptj), [LLaMA](../model_doc/llama), [Marian](../model_doc/marian), [mBART](../model_doc/mbart), [MEGA](../model_doc/mega), [Megatron-BERT](../model_doc/megatron-bert), [MVP](../model_doc/mvp), [OpenLlama](../model_doc/open-llama), [OpenAI GPT](../model_doc/openai-gpt), [OPT](../model_doc/opt), [Pegasus](../model_doc/pegasus), [PLBart](../model_doc/plbart), [ProphetNet](../model_doc/prophetnet), [QDQBert](../model_doc/qdqbert), [Reformer](../model_doc/reformer), [RemBERT](../model_doc/rembert), [RoBERTa](../model_doc/roberta), [RoBERTa-PreLayerNorm](../model_doc/roberta-prelayernorm), [RoCBert](../model_doc/roc_bert), [RoFormer](../model_doc/roformer), [RWKV](../model_doc/rwkv), [Speech2Text2](../model_doc/speech_to_text_2), [Transformer-XL](../model_doc/transfo-xl), [TrOCR](../model_doc/trocr), 
[XGLM](../model_doc/xglm), [XLM](../model_doc/xlm), [XLM-ProphetNet](../model_doc/xlm-prophetnet), [XLM-RoBERTa](../model_doc/xlm-roberta), [XLM-RoBERTa-XL](../model_doc/xlm-roberta-xl), [XLNet](../model_doc/xlnet), [X-MOD](../model_doc/xmod) <!--End of the generated tip--> </Tip> ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ๋ชจ๋‘ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install transformers datasets evaluate ``` ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ๊ณต์œ ํ•˜๊ธฐ ์œ„ํ•ด Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ์•Œ๋ฆผ์ด ํ‘œ์‹œ๋˜๋ฉด ํ† ํฐ์„ ์ž…๋ ฅํ•˜์—ฌ ๋กœ๊ทธ์ธํ•˜์„ธ์š”: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## ELI5 ๋ฐ์ดํ„ฐ ์„ธํŠธ ๋ถˆ๋Ÿฌ์˜ค๊ธฐ[[load-eli5-dataset]] ๋จผ์ €, ๐Ÿค— Datasets ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ r/askscience์˜ ์ž‘์€ ํ•˜์œ„ ์ง‘ํ•ฉ์ธ ELI5 ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋ถˆ๋Ÿฌ์˜ต๋‹ˆ๋‹ค. ์ด๋ฅผ ํ†ตํ•ด ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ ํ•™์Šตํ•˜๋Š” ๋ฐ ๋” ๋งŽ์€ ์‹œ๊ฐ„์„ ํˆฌ์žํ•˜๊ธฐ ์ „์—, ์‹คํ—˜ํ•ด๋ด„์œผ๋กœ์จ ๋ชจ๋“  ๊ฒƒ์ด ์ž‘๋™ํ•˜๋Š”์ง€ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> from datasets import load_dataset >>> eli5 = load_dataset("eli5", split="train_asks[:5000]") ``` ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ `train_asks` ๋ถ„ํ• ์„ [`~datasets.Dataset.train_test_split`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ•™์Šต ๋ฐ ํ…Œ์ŠคํŠธ ์„ธํŠธ๋กœ ๋ถ„ํ• ํ•ฉ๋‹ˆ๋‹ค: ```py >>> eli5 = eli5.train_test_split(test_size=0.2) ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ์˜ˆ์ œ๋ฅผ ์‚ดํŽด๋ณด์„ธ์š”: ```py >>> eli5["train"][0] {'answers': {'a_id': ['c3d1aib', 'c3d4lya'], 'score': [6, 3], 'text': ["The velocity needed to remain in orbit is equal to the square root of Newton's constant times the mass of earth divided by the distance from the center of the earth. I don't know the altitude of that specific mission, but they're usually around 300 km. That means he's going 7-8 km/s.\n\nIn space there are no other forces acting on either the shuttle or the guy, so they stay in the same position relative to each other. 
If he were to become unable to return to the ship, he would presumably run out of oxygen, or slowly fall into the atmosphere and burn up.", "Hope you don't mind me asking another question, but why aren't there any stars visible in this photo?"]}, 'answers_urls': {'url': []}, 'document': '', 'q_id': 'nyxfp', 'selftext': '_URL_0_\n\nThis was on the front page earlier and I have a few questions about it. Is it possible to calculate how fast the astronaut would be orbiting the earth? Also how does he stay close to the shuttle so that he can return safely, i.e is he orbiting at the same speed and can therefore stay next to it? And finally if his propulsion system failed, would he eventually re-enter the atmosphere and presumably die?', 'selftext_urls': {'url': ['http://apod.nasa.gov/apod/image/1201/freeflyer_nasa_3000.jpg']}, 'subreddit': 'askscience', 'title': 'Few questions about this space walk photograph.', 'title_urls': {'url': []}} ``` ๋งŽ์•„ ๋ณด์ผ ์ˆ˜ ์žˆ์ง€๋งŒ, ์‹ค์ œ๋กœ๋Š” `text` ํ•„๋“œ๋งŒ ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ์–ธ์–ด ๋ชจ๋ธ๋ง ์ž‘์—…์˜ ์žฅ์ ์€ ๋ ˆ์ด๋ธ”์ด ํ•„์š”ํ•˜์ง€ ์•Š๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋‹ค์Œ ๋‹จ์–ด *์ž์ฒด๊ฐ€* ๋ ˆ์ด๋ธ”์ž…๋‹ˆ๋‹ค. (์ด๋ ‡๊ฒŒ ๋ ˆ์ด๋ธ”์„ ์ œ๊ณตํ•˜์ง€ ์•Š์•„๋„ ๋˜๋Š” ํ•™์Šต์„ ๋น„์ง€๋„ ํ•™์Šต์ด๋ผ๊ณ  ์ผ์ปซ์Šต๋‹ˆ๋‹ค) ## ์ „์ฒ˜๋ฆฌ[[preprocess]] <Youtube id="ma1TrR7gE7I"/> ๋‹ค์Œ ๋‹จ๊ณ„๋Š” `text` ํ•„๋“œ๋ฅผ ์ „์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด DistilGPT2 ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๋ถˆ๋Ÿฌ์˜ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("distilgpt2") ``` ์œ„์˜ ์˜ˆ์ œ์—์„œ ์•Œ ์ˆ˜ ์žˆ๋“ฏ์ด, `text` ํ•„๋“œ๋Š” `answers` ์•„๋ž˜์— ์ค‘์ฒฉ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ [`flatten`](https://huggingface.co/docs/datasets/process#flatten) ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ค‘์ฒฉ ๊ตฌ์กฐ์—์„œ `text` ํ•˜์œ„ ํ•„๋“œ๋ฅผ ์ถ”์ถœํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 
```py >>> eli5 = eli5.flatten() >>> eli5["train"][0] {'answers.a_id': ['c3d1aib', 'c3d4lya'], 'answers.score': [6, 3], 'answers.text': ["The velocity needed to remain in orbit is equal to the square root of Newton's constant times the mass of earth divided by the distance from the center of the earth. I don't know the altitude of that specific mission, but they're usually around 300 km. That means he's going 7-8 km/s.\n\nIn space there are no other forces acting on either the shuttle or the guy, so they stay in the same position relative to each other. If he were to become unable to return to the ship, he would presumably run out of oxygen, or slowly fall into the atmosphere and burn up.", "Hope you don't mind me asking another question, but why aren't there any stars visible in this photo?"], 'answers_urls.url': [], 'document': '', 'q_id': 'nyxfp', 'selftext': '_URL_0_\n\nThis was on the front page earlier and I have a few questions about it. Is it possible to calculate how fast the astronaut would be orbiting the earth? Also how does he stay close to the shuttle so that he can return safely, i.e is he orbiting at the same speed and can therefore stay next to it? And finally if his propulsion system failed, would he eventually re-enter the atmosphere and presumably die?', 'selftext_urls.url': ['http://apod.nasa.gov/apod/image/1201/freeflyer_nasa_3000.jpg'], 'subreddit': 'askscience', 'title': 'Few questions about this space walk photograph.', 'title_urls.url': []} ``` ๊ฐ ํ•˜์œ„ ํ•„๋“œ๋Š” ์ด์ œ `answers` ์ ‘๋‘์‚ฌ๋ฅผ ๊ฐ€์ง„ ๋ณ„๋„์˜ ์—ด๋กœ ๋‚˜๋‰˜์—ˆ์œผ๋ฉฐ, `text` ํ•„๋“œ๋Š” ์ด์ œ ๋ฆฌ์ŠคํŠธ์ž…๋‹ˆ๋‹ค. ๊ฐ ๋ฌธ์žฅ์„ ๊ฐœ๋ณ„์ ์œผ๋กœ ํ† ํฐํ™”ํ•˜๋Š” ๋Œ€์‹ , ๋จผ์ € ๋ฆฌ์ŠคํŠธ๋ฅผ ๋ฌธ์ž์—ด๋กœ ๋ณ€ํ™˜ํ•˜์—ฌ ํ•œ๊บผ๋ฒˆ์— ํ† ํฐํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ์€ ๋ฌธ์ž์—ด ๋ฆฌ์ŠคํŠธ๋ฅผ ๊ฒฐํ•ฉํ•˜๊ณ  ๊ฒฐ๊ณผ๋ฅผ ํ† ํฐํ™”ํ•˜๋Š” ์ฒซ ๋ฒˆ์งธ ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜์ž…๋‹ˆ๋‹ค: ```py >>> def preprocess_function(examples): ... 
return tokenizer([" ".join(x) for x in examples["answers.text"]]) ``` ์ด ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ์ ์šฉํ•˜๋ ค๋ฉด ๐Ÿค— Datasets [`~datasets.Dataset.map`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. `batched=True`๋กœ ์„ค์ •ํ•˜์—ฌ ๋ฐ์ดํ„ฐ์…‹์˜ ์—ฌ๋Ÿฌ ์š”์†Œ๋ฅผ ํ•œ ๋ฒˆ์— ์ฒ˜๋ฆฌํ•˜๊ณ , `num_proc`๋ฅผ ์ฆ๊ฐ€์‹œ์ผœ ํ”„๋กœ์„ธ์Šค ์ˆ˜๋ฅผ ๋Š˜๋ฆด ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•„์š” ์—†๋Š” ์—ด์€ ์ œ๊ฑฐํ•˜์„ธ์š”: ```py >>> tokenized_eli5 = eli5.map( ... preprocess_function, ... batched=True, ... num_proc=4, ... remove_columns=eli5["train"].column_names, ... ) ``` ์ด์ œ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋Š” ์‹œํ€€์Šค๊ฐ€ ํ† ํฐํ™”๋์ง€๋งŒ, ์ผ๋ถ€ ์‹œํ€€์Šค๋Š” ๋ชจ๋ธ์˜ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด๋ณด๋‹ค ๊ธธ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด์ œ ๋‘ ๋ฒˆ์งธ ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ - ๋ชจ๋“  ์‹œํ€€์Šค๋ฅผ ์—ฐ๊ฒฐํ•˜๊ณ , - `block_size`๋กœ ์ •์˜๋œ ๊ธธ์ด๋กœ ์—ฐ๊ฒฐ๋œ ์‹œํ€€์Šค๋ฅผ ์—ฌ๋Ÿฌ ๊ฐœ์˜ ์งง์€ ๋ฌถ์Œ์œผ๋กœ ๋‚˜๋ˆ•๋‹ˆ๋‹ค. ์ด ๊ฐ’์€ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด์™€ GPU RAM์„ ๊ณ ๋ คํ•ด ์ถฉ๋ถ„ํžˆ ์งง์•„์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```py >>> block_size = 128 >>> def group_texts(examples): ... # Concatenate all texts. ... concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()} ... total_length = len(concatenated_examples[list(examples.keys())[0]]) ... # We drop the small remainder, we could add padding if the model supported it instead of this drop, you can ... # customize this part to your needs. ... if total_length >= block_size: ... total_length = (total_length // block_size) * block_size ... # Split by chunks of block_size. ... result = { ... k: [t[i : i + block_size] for i in range(0, total_length, block_size)] ... for k, t in concatenated_examples.items() ... } ... result["labels"] = result["input_ids"].copy() ... return result ``` ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ์— `group_texts` ํ•จ์ˆ˜๋ฅผ ์ ์šฉํ•˜์„ธ์š”: ```py >>> lm_dataset = tokenized_eli5.map(group_texts, batched=True, num_proc=4) ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ [`DataCollatorForLanguageModeling`]์„ ์‚ฌ์šฉํ•˜์—ฌ ์˜ˆ์ œ์˜ ๋ฐฐ์น˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค. 
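์ฐธ๊ณ ๋กœ, ์œ„ `group_texts` ํ•จ์ˆ˜๊ฐ€ ์‹œํ€€์Šค๋ฅผ ์–ด๋–ป๊ฒŒ ์ด์–ด ๋ถ™์ด๊ณ  `block_size` ๋‹จ์œ„๋กœ ๋‚˜๋ˆ„๋Š”์ง€๋Š” ํ† ํฌ๋‚˜์ด์ € ์—†์ด๋„ ์ˆœ์ˆ˜ ํŒŒ์ด์ฌ์œผ๋กœ ํ™•์ธํ•ด๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜ `block_size=4`์™€ ์ž…๋ ฅ ํ† ํฐ ID๋Š” ์„ค๋ช…์„ ์œ„ํ•œ ๊ฐ€์ƒ์˜ ๊ฐ’์ž…๋‹ˆ๋‹ค:

```python
# group_texts์˜ ๋™์ž‘์„ ์ž‘์€ ์˜ˆ์‹œ๋กœ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค (๊ฐ€์ƒ์˜ ํ† ํฐ ID ์‚ฌ์šฉ)
block_size = 4

def group_texts(examples):
    # ๋ชจ๋“  ์‹œํ€€์Šค๋ฅผ ํ•˜๋‚˜๋กœ ์—ฐ๊ฒฐํ•ฉ๋‹ˆ๋‹ค
    concatenated = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = len(concatenated[list(examples.keys())[0]])
    # block_size์˜ ๋ฐฐ์ˆ˜๊ฐ€ ๋˜๋„๋ก ๋‚˜๋จธ์ง€๋Š” ๋ฒ„๋ฆฝ๋‹ˆ๋‹ค
    if total_length >= block_size:
        total_length = (total_length // block_size) * block_size
    # block_size ๊ธธ์ด์˜ ๋ฌถ์Œ์œผ๋กœ ๋‚˜๋ˆ•๋‹ˆ๋‹ค
    result = {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated.items()
    }
    # ์ธ๊ณผ ์–ธ์–ด ๋ชจ๋ธ๋ง์—์„œ ๋ ˆ์ด๋ธ”์€ ์ž…๋ ฅ ๊ทธ๋Œ€๋กœ์ด๊ณ , ์‹œํ”„ํŠธ๋Š” ๋ชจ๋ธ ๋‚ด๋ถ€์—์„œ ์ฒ˜๋ฆฌ๋ฉ๋‹ˆ๋‹ค
    result["labels"] = result["input_ids"].copy()
    return result

examples = {"input_ids": [[1, 2, 3], [4, 5, 6, 7], [8, 9, 10]]}
out = group_texts(examples)
print(out["input_ids"])  # [[1, 2, 3, 4], [5, 6, 7, 8]] - ๋‚จ์€ [9, 10]์€ ๋ฒ„๋ ค์ง
```

์ด์ฒ˜๋Ÿผ ์„œ๋กœ ๋‹ค๋ฅธ ์˜ˆ์ œ์˜ ๊ฒฝ๊ณ„๋ฅผ ๋„˜์–ด ํ† ํฐ์„ ์ด์–ด ๋ถ™์ด๋ฏ€๋กœ, ์งง์€ ์˜ˆ์ œ๊ฐ€ ๋งŽ์•„๋„ ํŒจ๋”ฉ ์—†์ด ๊ณ ์ • ๊ธธ์ด ๋ฌถ์Œ์„ ๋งŒ๋“ค ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.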
๋ฐ์ดํ„ฐ ์„ธํŠธ ์ „์ฒด๋ฅผ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ํŒจ๋”ฉํ•˜๋Š” ๊ฒƒ๋ณด๋‹ค, ์ทจํ•ฉ ๋‹จ๊ณ„์—์„œ ๊ฐ ๋ฐฐ์น˜์˜ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ๋ฌธ์žฅ์„ *๋™์ ์œผ๋กœ ํŒจ๋”ฉ*ํ•˜๋Š” ๊ฒƒ์ด ๋” ํšจ์œจ์ ์ž…๋‹ˆ๋‹ค. <frameworkcontent> <pt> ํŒจ๋”ฉ ํ† ํฐ์œผ๋กœ ์ข…๊ฒฐ ํ† ํฐ์„ ์‚ฌ์šฉํ•˜๊ณ  `mlm=False`๋กœ ์„ค์ •ํ•˜์„ธ์š”. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ์ž…๋ ฅ์„ ์˜ค๋ฅธ์ชฝ์œผ๋กœ ํ•œ ์นธ์”ฉ ์‹œํ”„ํŠธํ•œ ๊ฐ’์„ ๋ ˆ์ด๋ธ”๋กœ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import DataCollatorForLanguageModeling >>> tokenizer.pad_token = tokenizer.eos_token >>> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False) ``` </pt> <tf> ํŒจ๋”ฉ ํ† ํฐ์œผ๋กœ ์ข…๊ฒฐ ํ† ํฐ์„ ์‚ฌ์šฉํ•˜๊ณ  `mlm=False`๋กœ ์„ค์ •ํ•˜์„ธ์š”. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ์ž…๋ ฅ์„ ์˜ค๋ฅธ์ชฝ์œผ๋กœ ํ•œ ์นธ์”ฉ ์‹œํ”„ํŠธํ•œ ๊ฐ’์„ ๋ ˆ์ด๋ธ”๋กœ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import DataCollatorForLanguageModeling >>> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False, return_tensors="tf") ``` </tf> </frameworkcontent> ## ํ›ˆ๋ จ[[train]] <frameworkcontent> <pt> <Tip> [`Trainer`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์ž˜ ๋ชจ๋ฅด์‹ ๋‹ค๋ฉด [๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ](../training#train-with-pytorch-trainer)์„ ํ™•์ธํ•ด๋ณด์„ธ์š”! </Tip> ์ด์ œ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๊ธฐ ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! [`AutoModelForCausalLM`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ DistilGPT2๋ฅผ ๋ถˆ๋Ÿฌ์˜ต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForCausalLM, TrainingArguments, Trainer >>> model = AutoModelForCausalLM.from_pretrained("distilgpt2") ``` ์—ฌ๊ธฐ๊นŒ์ง€ ์ง„ํ–‰ํ•˜๋ฉด ์„ธ ๋‹จ๊ณ„๋งŒ ๋‚จ์•˜์Šต๋‹ˆ๋‹ค: 1. [`TrainingArguments`]์—์„œ ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ •์˜ํ•˜์„ธ์š”. `output_dir`์€ ์œ ์ผํ•œ ํ•„์ˆ˜ ๋งค๊ฐœ๋ณ€์ˆ˜๋กœ, ๋ชจ๋ธ์„ ์ €์žฅํ•  ์œ„์น˜๋ฅผ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค. (๋จผ์ € Hugging Face์— ๋กœ๊ทธ์ธ ํ•„์ˆ˜) `push_to_hub=True`๋กœ ์„ค์ •ํ•˜์—ฌ ์ด ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 2. ํ›ˆ๋ จ ์ธ์ˆ˜๋ฅผ [`Trainer`]์— ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ ์„ธํŠธ ๋ฐ ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ์™€ ํ•จ๊ป˜ ์ „๋‹ฌํ•˜์„ธ์š”. 3. 
[`~Trainer.train`]์„ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜์„ธ์š”. ```py >>> training_args = TrainingArguments( ... output_dir="my_awesome_eli5_clm-model", ... evaluation_strategy="epoch", ... learning_rate=2e-5, ... weight_decay=0.01, ... push_to_hub=True, ... ) >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=lm_dataset["train"], ... eval_dataset=lm_dataset["test"], ... data_collator=data_collator, ... ) >>> trainer.train() ``` ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด [`~transformers.Trainer.evaluate`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํ‰๊ฐ€ํ•˜๊ณ  ํผํ”Œ๋ ‰์„œํ‹ฐ๋ฅผ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> import math >>> eval_results = trainer.evaluate() >>> print(f"Perplexity: {math.exp(eval_results['eval_loss']):.2f}") Perplexity: 49.61 ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ [`~transformers.Trainer.push_to_hub`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ์— ๊ณต์œ ํ•˜์„ธ์š”. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ๋ˆ„๊ตฌ๋‚˜ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> trainer.push_to_hub() ``` </pt> <tf> <Tip> Keras๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด [๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ](../training#train-a-tensorflow-model-with-keras)์„ ํ™•์ธํ•ด๋ณด์„ธ์š”! </Tip> TensorFlow์—์„œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋ ค๋ฉด, ๋จผ์ € ์˜ตํ‹ฐ๋งˆ์ด์ € ํ•จ์ˆ˜, ํ•™์Šต๋ฅ  ์Šค์ผ€์ค„ ๋ฐ ์ผ๋ถ€ ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์„ค์ •ํ•˜์„ธ์š”: ```py >>> from transformers import create_optimizer, AdamWeightDecay >>> optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01) ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ [`TFAutoModelForCausalLM`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ DistilGPT2๋ฅผ ๋ถˆ๋Ÿฌ์˜ต๋‹ˆ๋‹ค: ```py >>> from transformers import TFAutoModelForCausalLM >>> model = TFAutoModelForCausalLM.from_pretrained("distilgpt2") ``` [`~transformers.TFPreTrainedModel.prepare_tf_dataset`]์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ `tf.data.Dataset` ํ˜•์‹์œผ๋กœ ๋ณ€ํ™˜ํ•˜์„ธ์š”: ```py >>> tf_train_set = model.prepare_tf_dataset( ... lm_dataset["train"], ... shuffle=True, ... batch_size=16, ... 
collate_fn=data_collator, ... ) >>> tf_test_set = model.prepare_tf_dataset( ... lm_dataset["test"], ... shuffle=False, ... batch_size=16, ... collate_fn=data_collator, ... ) ``` [`compile`](https://keras.io/api/models/model_training_apis/#compile-method)์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๊ธฐ ์œ„ํ•ด ๊ตฌ์„ฑํ•˜์„ธ์š”. Transformers ๋ชจ๋ธ์€ ๋ชจ๋‘ ๊ธฐ๋ณธ์ ์ธ ์ž‘์—… ๊ด€๋ จ ์†์‹ค ํ•จ์ˆ˜๋ฅผ ๊ฐ€์ง€๊ณ  ์žˆ์œผ๋ฏ€๋กœ, ์›ํ•œ๋‹ค๋ฉด ๋ณ„๋„๋กœ ์ง€์ •ํ•˜์ง€ ์•Š์•„๋„ ๋ฉ๋‹ˆ๋‹ค: ```py >>> import tensorflow as tf >>> model.compile(optimizer=optimizer) # ๋ณ„๋„๋กœ loss ์ธ์ž๋ฅผ ๋„ฃ์ง€ ์•Š์•˜์–ด์š”! ``` [`~transformers.PushToHubCallback`]์—์„œ ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์—…๋กœ๋“œํ•  ์œ„์น˜๋ฅผ ์ง€์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers.keras_callbacks import PushToHubCallback >>> callback = PushToHubCallback( ... output_dir="my_awesome_eli5_clm-model", ... tokenizer=tokenizer, ... ) ``` ๋งˆ์ง€๋ง‰์œผ๋กœ, ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•˜๊ธฐ ์œ„ํ•ด [`fit`](https://keras.io/api/models/model_training_apis/#fit-method)์„ ํ˜ธ์ถœํ•˜์„ธ์š”. ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ ์„ธํŠธ, ๊ฒ€์ฆ ๋ฐ์ดํ„ฐ ์„ธํŠธ, ์—ํญ ์ˆ˜ ๋ฐ ์ฝœ๋ฐฑ์„ ์ „๋‹ฌํ•˜์„ธ์š”: ```py >>> model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=[callback]) ``` ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด ๋ชจ๋ธ์ด ์ž๋™์œผ๋กœ ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œ๋˜์–ด ๋ชจ๋‘๊ฐ€ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! </tf> </frameworkcontent> <Tip> ์ธ๊ณผ ์–ธ์–ด ๋ชจ๋ธ๋ง์„ ์œ„ํ•ด ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋” ์ž์„ธํ•œ ์˜ˆ์ œ๋Š” ํ•ด๋‹นํ•˜๋Š” [PyTorch ๋…ธํŠธ๋ถ](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb) ๋˜๋Š” [TensorFlow ๋…ธํŠธ๋ถ](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb)์„ ์ฐธ์กฐํ•˜์„ธ์š”. </Tip> ## ์ถ”๋ก [[inference]] ์ข‹์•„์š”, ์ด์ œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ–ˆ์œผ๋ฏ€๋กœ ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! 
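๋ณธ๊ฒฉ์ ์œผ๋กœ ์ถ”๋ก ํ•˜๊ธฐ ์ „์—, ์ธ๊ณผ ์–ธ์–ด ๋ชจ๋ธ์ด ํ…์ŠคํŠธ๋ฅผ ๋งŒ๋“œ๋Š” ์ž๊ธฐํšŒ๊ท€(autoregressive) ๋ฃจํ”„๋ฅผ ์ˆœ์ˆ˜ ํŒŒ์ด์ฌ์œผ๋กœ ์Šค์ผ€์น˜ํ•ด๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์‹ค์ œ ๋ชจ๋ธ ๋Œ€์‹  ๊ฐ€์ƒ์˜ ๋ฐ”์ด๊ทธ๋žจ(bigram) ํ…Œ์ด๋ธ”์ด "๋‹ค์Œ ํ† ํฐ"์„ ๊ฒฐ์ •ํ•˜๋Š” ์žฅ๋‚œ๊ฐ ์˜ˆ์‹œ์ผ ๋ฟ, ๋ฏธ์„ธ ์กฐ์ •ํ•œ DistilGPT2์˜ ๋™์ž‘ ๊ทธ ์ž์ฒด๋Š” ์•„๋‹™๋‹ˆ๋‹ค:

```python
# ์ธ๊ณผ ์–ธ์–ด ๋ชจ๋ธ์˜ ์ž๊ธฐํšŒ๊ท€ ์ƒ์„ฑ ๋ฃจํ”„๋ฅผ ๋ณด์—ฌ์ฃผ๋Š” ์žฅ๋‚œ๊ฐ ์˜ˆ์‹œ์ž…๋‹ˆ๋‹ค.
# ์‹ค์ œ ๋ชจ๋ธ ๋Œ€์‹ , ๊ฐ€์ƒ์˜ ๋ฐ”์ด๊ทธ๋žจ ํ…Œ์ด๋ธ”์ด ๋‹ค์Œ ํ† ํฐ์„ ๊ฒฐ์ •ํ•ฉ๋‹ˆ๋‹ค.
next_token = {
    "Somatic": "hypermutation",
    "hypermutation": "allows",
    "allows": "the",
    "the": "immune",
    "immune": "system",
}

def generate(prompt_tokens, max_new_tokens):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        # ๋ชจ๋ธ์€ ํ•ญ์ƒ ์™ผ์ชฝ(์ด๋ฏธ ์ƒ์„ฑ๋œ) ํ† ํฐ๋งŒ ๋ณด๊ณ  ๋‹ค์Œ ํ† ํฐ์„ ์˜ˆ์ธกํ•œ ๋’ค,
        # ์˜ˆ์ธก๋œ ํ† ํฐ์„ ์‹œํ€€์Šค ๋์— ๋ถ™์—ฌ ๋‹ค์Œ ๋ฐ˜๋ณต์˜ ์ž…๋ ฅ์œผ๋กœ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค
        prediction = next_token.get(tokens[-1])
        if prediction is None:  # ์ข…๊ฒฐ ํ† ํฐ์— ํ•ด๋‹นํ•˜๋Š” ๊ฒฝ์šฐ
            break
        tokens.append(prediction)
    return tokens

print(generate(["Somatic"], max_new_tokens=4))
# ['Somatic', 'hypermutation', 'allows', 'the', 'immune']
```

์‹ค์ œ `generate` ๋ฉ”์†Œ๋“œ๋„ ๊ฐœ๋…์ ์œผ๋กœ๋Š” ์ด์™€ ๊ฐ™์€ ๋ฃจํ”„๋ฅผ ๋Œ๋ฉฐ, ๊ฐ ๋‹จ๊ณ„์—์„œ ๋ชจ๋ธ์ด ๋‚ด๋†“๋Š” ํ™•๋ฅ  ๋ถ„ํฌ์—์„œ ๋‹ค์Œ ํ† ํฐ์„ ๊ณ ๋ฅธ๋‹ค๋Š” ์ ๋งŒ ๋‹ค๋ฆ…๋‹ˆ๋‹ค.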
์ƒ์„ฑํ•  ํ…์ŠคํŠธ๋ฅผ ์œ„ํ•œ ํ”„๋กฌํ”„ํŠธ๋ฅผ ๋งŒ๋“ค์–ด๋ณด์„ธ์š”: ```py >>> prompt = "Somatic hypermutation allows the immune system to" ``` ์ถ”๋ก ์„ ์œ„ํ•ด ๋ฏธ์„ธ ์กฐ์ •๋œ ๋ชจ๋ธ์„ ๊ฐ„๋‹จํžˆ ์‚ฌ์šฉํ•˜๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ [`pipeline`]์—์„œ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ๊ณผ ํ•จ๊ป˜ ํ…์ŠคํŠธ ์ƒ์„ฑ์„ ์œ„ํ•œ `pipeline`์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ณ  ํ…์ŠคํŠธ๋ฅผ ์ „๋‹ฌํ•˜์„ธ์š”: ```py >>> from transformers import pipeline >>> generator = pipeline("text-generation", model="my_awesome_eli5_clm-model") >>> generator(prompt) [{'generated_text': "Somatic hypermutation allows the immune system to be able to effectively reverse the damage caused by an infection.\n\n\nThe damage caused by an infection is caused by the immune system's ability to perform its own self-correcting tasks."}] ``` <frameworkcontent> <pt> ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•˜๊ณ  `input_ids`๋ฅผ PyTorch ํ…์„œ๋กœ ๋ฐ˜ํ™˜ํ•˜์„ธ์š”: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_eli5_clm-model") >>> inputs = tokenizer(prompt, return_tensors="pt").input_ids ``` [`~transformers.generation_utils.GenerationMixin.generate`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ…์ŠคํŠธ๋ฅผ ์ƒ์„ฑํ•˜์„ธ์š”. ์ƒ์„ฑ์„ ์ œ์–ดํ•˜๋Š” ๋‹ค์–‘ํ•œ ํ…์ŠคํŠธ ์ƒ์„ฑ ์ „๋žต๊ณผ ๋งค๊ฐœ๋ณ€์ˆ˜์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [ํ…์ŠคํŠธ ์ƒ์„ฑ ์ „๋žต](../generation_strategies) ํŽ˜์ด์ง€๋ฅผ ํ™•์ธํ•˜์„ธ์š”. ```py >>> from transformers import AutoModelForCausalLM >>> model = AutoModelForCausalLM.from_pretrained("my_awesome_eli5_clm-model") >>> outputs = model.generate(inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95) ``` ์ƒ์„ฑ๋œ ํ† ํฐ ID๋ฅผ ๋‹ค์‹œ ํ…์ŠคํŠธ๋กœ ๋””์ฝ”๋”ฉํ•˜์„ธ์š”: ```py >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ["Somatic hypermutation allows the immune system to react to drugs with the ability to adapt to a different environmental situation. 
In other words, a system of 'hypermutation' can help the immune system to adapt to a different environmental situation or in some cases even a single life. In contrast, researchers at the University of Massachusetts-Boston have found that 'hypermutation' is much stronger in mice than in humans but can be found in humans, and that it's not completely unknown to the immune system. A study on how the immune system"] ``` </pt> <tf> ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•˜๊ณ  `input_ids`๋ฅผ TensorFlow ํ…์„œ๋กœ ๋ฐ˜ํ™˜ํ•˜์„ธ์š”: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_eli5_clm-model") >>> inputs = tokenizer(prompt, return_tensors="tf").input_ids ``` [`~transformers.generation_tf_utils.TFGenerationMixin.generate`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ…์ŠคํŠธ๋ฅผ ์ƒ์„ฑํ•˜์„ธ์š”. ์ƒ์„ฑ์„ ์ œ์–ดํ•˜๋Š” ๋‹ค์–‘ํ•œ ํ…์ŠคํŠธ ์ƒ์„ฑ ์ „๋žต๊ณผ ๋งค๊ฐœ๋ณ€์ˆ˜์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [ํ…์ŠคํŠธ ์ƒ์„ฑ ์ „๋žต](../generation_strategies) ํŽ˜์ด์ง€๋ฅผ ํ™•์ธํ•˜์„ธ์š”. ```py >>> from transformers import TFAutoModelForCausalLM >>> model = TFAutoModelForCausalLM.from_pretrained("my_awesome_eli5_clm-model") >>> outputs = model.generate(input_ids=inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95) ``` ์ƒ์„ฑ๋œ ํ† ํฐ ID๋ฅผ ๋‹ค์‹œ ํ…์ŠคํŠธ๋กœ ๋””์ฝ”๋”ฉํ•˜์„ธ์š”: ```py >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['Somatic hypermutation allows the immune system to detect the presence of other viruses as they become more prevalent. Therefore, researchers have identified a high proportion of human viruses. The proportion of virus-associated viruses in our study increases with age. Therefore, we propose a simple algorithm to detect the presence of these new viruses in our samples as a sign of improved immunity. 
A first study based on this algorithm, which will be published in Science on Friday, aims to show that this finding could translate into the development of a better vaccine that is more effective for'] ``` </tf> </frameworkcontent>
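๋งˆ์ง€๋ง‰์œผ๋กœ, ์œ„ `generate` ํ˜ธ์ถœ์— ์“ฐ์ธ `top_k`์™€ `top_p` ๋งค๊ฐœ๋ณ€์ˆ˜๊ฐ€ ์ƒ˜ํ”Œ๋ง ํ›„๋ณด๋ฅผ ์–ด๋–ป๊ฒŒ ์ถ”๋ฆฌ๋Š”์ง€ ์ˆœ์ˆ˜ ํŒŒ์ด์ฌ์œผ๋กœ ์Šค์ผ€์น˜ํ•ด๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜ ํ™•๋ฅ  ๋ถ„ํฌ๋Š” ์„ค๋ช…์„ ์œ„ํ•œ ๊ฐ€์ƒ์˜ ๊ฐ’์ด๊ณ , ์‹ค์ œ ๊ตฌํ˜„(๋กœ์ง“ ์ •๊ทœํ™”, ํ•„ํ„ฐ๋ง ํ›„ ํ™•๋ฅ ์  ์ƒ˜ํ”Œ๋ง ๋“ฑ)์„ ํฌ๊ฒŒ ๋‹จ์ˆœํ™”ํ•œ ๊ฒƒ์ž…๋‹ˆ๋‹ค:

```python
# top_k / top_p(nucleus) ํ•„ํ„ฐ๋ง์˜ ๊ฐœ๋…์„ ๋ณด์—ฌ์ฃผ๋Š” ์ˆœ์ˆ˜ ํŒŒ์ด์ฌ ์Šค์ผ€์น˜์ž…๋‹ˆ๋‹ค.
def filter_candidates(probs, top_k, top_p):
    # ํ™•๋ฅ ์ด ๋†’์€ ์ˆœ์„œ๋กœ ์ •๋ ฌํ•œ ๋’ค ์ƒ์œ„ top_k๊ฐœ๋งŒ ๋‚จ๊น๋‹ˆ๋‹ค
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    # ๋ˆ„์  ํ™•๋ฅ ์ด top_p๋ฅผ ๋„˜์„ ๋•Œ๊นŒ์ง€์˜ ํ† ํฐ๋งŒ ๋‚จ๊น๋‹ˆ๋‹ค
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append(token)
        cumulative += p
        if cumulative >= top_p:
            break
    return kept

# ๊ฐ€์ƒ์˜ ๋‹ค์Œ ํ† ํฐ ํ™•๋ฅ  ๋ถ„ํฌ
probs = {"react": 0.4, "adapt": 0.3, "detect": 0.2, "fly": 0.07, "melt": 0.03}
print(filter_candidates(probs, top_k=4, top_p=0.95))
# ['react', 'adapt', 'detect', 'fly'] - ๋‚จ์€ ํ›„๋ณด ์ค‘์—์„œ ๋ฌด์ž‘์œ„๋กœ ์ƒ˜ํ”Œ๋งํ•˜๊ฒŒ ๋ฉ๋‹ˆ๋‹ค
```

`do_sample=True`์ผ ๋•Œ ์‹ค์ œ `generate`๋Š” ์ด๋ ‡๊ฒŒ ์ถ”๋ ค์ง„ ๋ถ„ํฌ๋ฅผ ๋‹ค์‹œ ์ •๊ทœํ™”ํ•œ ๋’ค ๊ทธ ์•ˆ์—์„œ ํ™•๋ฅ ์ ์œผ๋กœ ๋‹ค์Œ ํ† ํฐ์„ ๋ฝ‘์Šต๋‹ˆ๋‹ค.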
hf_public_repos/transformers/docs/source/ko
hf_public_repos/transformers/docs/source/ko/tasks/summarization.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์š”์•ฝ[[summarization]] [[open-in-colab]] <Youtube id="yHnr5Dk2zCI"/> ์š”์•ฝ์€ ๋ฌธ์„œ๋‚˜ ๊ธฐ์‚ฌ์˜ ์ค‘์š”ํ•œ ์ •๋ณด๋Š” ๋ชจ๋‘ ๋‹ด์•„๋‚ด๋˜ ๋” ์งง๊ฒŒ ๋งŒ๋“œ๋Š” ์ž‘์—…์ž…๋‹ˆ๋‹ค. ๋ฒˆ์—ญ๊ณผ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ, ์‹œํ€€์Šค-ํˆฌ-์‹œํ€€์Šค ๋ฌธ์ œ๋กœ ๊ตฌ์„ฑํ•  ์ˆ˜ ์žˆ๋Š” ๋Œ€ํ‘œ์ ์ธ ์ž‘์—… ์ค‘ ํ•˜๋‚˜์ž…๋‹ˆ๋‹ค. ์š”์•ฝ์—๋Š” ์•„๋ž˜์™€ ๊ฐ™์ด ๋‘ ๊ฐ€์ง€ ์œ ํ˜•์ด ์žˆ์Šต๋‹ˆ๋‹ค: - ์ถ”์ถœ(Extractive) ์š”์•ฝ: ๋ฌธ์„œ์—์„œ ๊ฐ€์žฅ ๊ด€๋ จ์„ฑ ๋†’์€ ์ •๋ณด๋ฅผ ๊ทธ๋Œ€๋กœ ์ถ”์ถœํ•ฉ๋‹ˆ๋‹ค. - ์ƒ์„ฑ(Abstractive) ์š”์•ฝ: ๊ฐ€์žฅ ๊ด€๋ จ์„ฑ ๋†’์€ ์ •๋ณด๋ฅผ ๋‹ด์€ ์ƒˆ๋กœ์šด ํ…์ŠคํŠธ๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ์—์„œ ์†Œ๊ฐœํ•  ๋‚ด์šฉ์€ ์•„๋ž˜์™€ ๊ฐ™์Šต๋‹ˆ๋‹ค: 1. ์ƒ์„ฑ ์š”์•ฝ์„ ์œ„ํ•œ [BillSum](https://huggingface.co/datasets/billsum) ๋ฐ์ดํ„ฐ์…‹ ์ค‘ ์บ˜๋ฆฌํฌ๋‹ˆ์•„ ์ฃผ ๋ฒ•์•ˆ ํ•˜์œ„ ์ง‘ํ•ฉ์œผ๋กœ [T5](https://huggingface.co/t5-small)๋ฅผ ํŒŒ์ธํŠœ๋‹ํ•ฉ๋‹ˆ๋‹ค. 2. ํŒŒ์ธํŠœ๋‹๋œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜์—ฌ ์ถ”๋ก ํ•ฉ๋‹ˆ๋‹ค.
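์œ„์—์„œ ๋งํ•˜๋Š” ์ถ”์ถœ ์š”์•ฝ์˜ ๊ฐœ๋…์€ ์ˆœ์ˆ˜ ํŒŒ์ด์ฌ ์žฅ๋‚œ๊ฐ ์˜ˆ์‹œ๋กœ ๊ฐ๋ถ€ํ•ด๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฌธ์„œ ์ „์ฒด์˜ ๋‹จ์–ด ๋นˆ๋„์™€ ๊ฐ€์žฅ ๋งŽ์ด ๊ฒน์น˜๋Š” ๋ฌธ์žฅ ํ•˜๋‚˜๋ฅผ ๊ทธ๋Œ€๋กœ ๊ณจ๋ผ๋‚ด๋Š” ๋‹จ์ˆœํ™”๋œ ์Šค์ผ€์น˜์ด๋ฉฐ, ์•„๋ž˜ ์˜ˆ์‹œ ๋ฌธ์žฅ๊ณผ ์ ์ˆ˜ ๊ธฐ์ค€์€ ์„ค๋ช…์„ ์œ„ํ•œ ๊ฐ€์ƒ์˜ ๊ฒƒ์ž…๋‹ˆ๋‹ค. (์ด ๊ฐ€์ด๋“œ์—์„œ T5๋กœ ์ˆ˜ํ–‰ํ•˜๋Š” ๊ฒƒ์€ ์ƒ์„ฑ ์š”์•ฝ์ž…๋‹ˆ๋‹ค.)

```python
# ์ถ”์ถœ(extractive) ์š”์•ฝ์˜ ๊ฐœ๋…์„ ๋ณด์—ฌ์ฃผ๋Š” ์žฅ๋‚œ๊ฐ ์˜ˆ์‹œ์ž…๋‹ˆ๋‹ค.
from collections import Counter

def extractive_summary(sentences):
    # ๋ฌธ์„œ ์ „์ฒด์—์„œ ๊ฐ ๋‹จ์–ด๊ฐ€ ๋ช‡ ๋ฒˆ ๋‚˜์˜ค๋Š”์ง€ ์…‰๋‹ˆ๋‹ค
    freq = Counter(word for s in sentences for word in s.split())
    # ๋ฌธ์žฅ ์ ์ˆ˜ = ๋ฌธ์žฅ์— ํฌํ•จ๋œ ๋‹จ์–ด ๋นˆ๋„์˜ ํ•ฉ (๋งค์šฐ ๋‹จ์ˆœํ™”๋œ ๊ธฐ์ค€)
    return max(sentences, key=lambda s: sum(freq[word] for word in s.split()))

# ๊ฐ€์ƒ์˜ ์˜ˆ์‹œ ๋ฌธ์žฅ๋“ค
sentences = [
    "state contracts must not discriminate",
    "the rule applies to large state contracts",
    "it passed quickly",
]
print(extractive_summary(sentences))
# the rule applies to large state contracts
```

์ถ”์ถœ ์š”์•ฝ์ด ์›๋ฌธ์˜ ๋ฌธ์žฅ์„ ๊ทธ๋Œ€๋กœ ๋Œ๋ ค์ฃผ๋Š” ๋ฐ˜๋ฉด, ์•„๋ž˜์—์„œ ํŒŒ์ธํŠœ๋‹ํ•  T5๋Š” ์›๋ฌธ์— ์—†๋˜ ์ƒˆ๋กœ์šด ๋ฌธ์žฅ์„ ์ƒ์„ฑํ•ด ์š”์•ฝํ•ฉ๋‹ˆ๋‹ค.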
<Tip> ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ ์„ค๋ช…ํ•˜๋Š” ์ž‘์—…์€ ๋‹ค์Œ ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜์—์„œ ์ง€์›๋ฉ๋‹ˆ๋‹ค: <!--This tip is automatically generated by `make fix-copies`, do not fill manually!--> [BART](../model_doc/bart), [BigBird-Pegasus](../model_doc/bigbird_pegasus), [Blenderbot](../model_doc/blenderbot), [BlenderbotSmall](../model_doc/blenderbot-small), [Encoder decoder](../model_doc/encoder-decoder), [FairSeq Machine-Translation](../model_doc/fsmt), [GPTSAN-japanese](../model_doc/gptsan-japanese), [LED](../model_doc/led), [LongT5](../model_doc/longt5), [M2M100](../model_doc/m2m_100), [Marian](../model_doc/marian), [mBART](../model_doc/mbart), [MT5](../model_doc/mt5), [MVP](../model_doc/mvp), [NLLB](../model_doc/nllb), [NLLB-MOE](../model_doc/nllb-moe), [Pegasus](../model_doc/pegasus), [PEGASUS-X](../model_doc/pegasus_x), [PLBart](../model_doc/plbart), [ProphetNet](../model_doc/prophetnet), [SwitchTransformers](../model_doc/switch_transformers), [T5](../model_doc/t5), [XLM-ProphetNet](../model_doc/xlm-prophetnet) <!--End of the generated tip--> </Tip> ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ๋ชจ๋‘ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install transformers datasets evaluate rouge_score ``` Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜๋ฉด ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๊ณต์œ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ† ํฐ์„ ์ž…๋ ฅํ•˜์—ฌ ๋กœ๊ทธ์ธํ•˜์„ธ์š”. 
```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## BillSum ๋ฐ์ดํ„ฐ์…‹ ๊ฐ€์ ธ์˜ค๊ธฐ[[load-billsum-dataset]] ๐Ÿค— Datasets ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ BillSum ๋ฐ์ดํ„ฐ์…‹์˜ ์ž‘์€ ๋ฒ„์ „์ธ ์บ˜๋ฆฌํฌ๋‹ˆ์•„ ์ฃผ ๋ฒ•์•ˆ ํ•˜์œ„ ์ง‘ํ•ฉ์„ ๊ฐ€์ ธ์˜ค์„ธ์š”: ```py >>> from datasets import load_dataset >>> billsum = load_dataset("billsum", split="ca_test") ``` [`~datasets.Dataset.train_test_split`] ๋ฉ”์†Œ๋“œ๋กœ ๋ฐ์ดํ„ฐ์…‹์„ ํ•™์Šต์šฉ๊ณผ ํ…Œ์ŠคํŠธ์šฉ์œผ๋กœ ๋‚˜๋ˆ„์„ธ์š”: ```py >>> billsum = billsum.train_test_split(test_size=0.2) ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ์˜ˆ์‹œ๋ฅผ ํ•˜๋‚˜ ์‚ดํŽด๋ณด์„ธ์š”: ```py >>> billsum["train"][0] {'summary': 'Existing law authorizes state agencies to enter into contracts for the acquisition of goods or services upon approval by the Department of General Services. Existing law sets forth various requirements and prohibitions for those contracts, including, but not limited to, a prohibition on entering into contracts for the acquisition of goods or services of $100,000 or more with a contractor that discriminates between spouses and domestic partners or same-sex and different-sex couples in the provision of benefits. Existing law provides that a contract entered into in violation of those requirements and prohibitions is void and authorizes the state or any person acting on behalf of the state to bring a civil action seeking a determination that a contract is in violation and therefore void. Under existing law, a willful violation of those requirements and prohibitions is a misdemeanor.\nThis bill would also prohibit a state agency from entering into contracts for the acquisition of goods or services of $100,000 or more with a contractor that discriminates between employees on the basis of gender identity in the provision of benefits, as specified. 
By expanding the scope of a crime, this bill would impose a state-mandated local program.\nThe California Constitution requires the state to reimburse local agencies and school districts for certain costs mandated by the state. Statutory provisions establish procedures for making that reimbursement.\nThis bill would provide that no reimbursement is required by this act for a specified reason.', 'text': 'The people of the State of California do enact as follows:\n\n\nSECTION 1.\nSection 10295.35 is added to the Public Contract Code, to read:\n10295.35.\n(a) (1) Notwithstanding any other law, a state agency shall not enter into any contract for the acquisition of goods or services in the amount of one hundred thousand dollars ($100,000) or more with a contractor that, in the provision of benefits, discriminates between employees on the basis of an employeeโ€™s or dependentโ€™s actual or perceived gender identity, including, but not limited to, the employeeโ€™s or dependentโ€™s identification as transgender.\n(2) For purposes of this section, โ€œcontractโ€ includes contracts with a cumulative amount of one hundred thousand dollars ($100,000) or more per contractor in each fiscal year.\n(3) For purposes of this section, an employee health plan is discriminatory if the plan is not consistent with Section 1365.5 of the Health and Safety Code and Section 10140 of the Insurance Code.\n(4) The requirements of this section shall apply only to those portions of a contractorโ€™s operations that occur under any of the following conditions:\n(A) Within the state.\n(B) On real property outside the state if the property is owned by the state or if the state has a right to occupy the property, and if the contractorโ€™s presence at that location is connected to a contract with the state.\n(C) Elsewhere in the United States where work related to a state contract is being performed.\n(b) Contractors shall treat as confidential, to the maximum extent allowed by law or by the 
requirement of the contractorโ€™s insurance provider, any request by an employee or applicant for employment benefits or any documentation of eligibility for benefits submitted by an employee or applicant for employment.\n(c) After taking all reasonable measures to find a contractor that complies with this section, as determined by the state agency, the requirements of this section may be waived under any of the following circumstances:\n(1) There is only one prospective contractor willing to enter into a specific contract with the state agency.\n(2) The contract is necessary to respond to an emergency, as determined by the state agency, that endangers the public health, welfare, or safety, or the contract is necessary for the provision of essential services, and no entity that complies with the requirements of this section capable of responding to the emergency is immediately available.\n(3) The requirements of this section violate, or are inconsistent with, the terms or conditions of a grant, subvention, or agreement, if the agency has made a good faith attempt to change the terms or conditions of any grant, subvention, or agreement to authorize application of this section.\n(4) The contractor is providing wholesale or bulk water, power, or natural gas, the conveyance or transmission of the same, or ancillary services, as required for ensuring reliable services in accordance with good utility practice, if the purchase of the same cannot practically be accomplished through the standard competitive bidding procedures and the contractor is not providing direct retail services to end users.\n(d) (1) A contractor shall not be deemed to discriminate in the provision of benefits if the contractor, in providing the benefits, pays the actual costs incurred in obtaining the benefit.\n(2) If a contractor is unable to provide a certain benefit, despite taking reasonable measures to do so, the contractor shall not be deemed to discriminate in the provision of benefits.\n(e) 
(1) Every contract subject to this chapter shall contain a statement by which the contractor certifies that the contractor is in compliance with this section.\n(2) The department or other contracting agency shall enforce this section pursuant to its existing enforcement powers.\n(3) (A) If a contractor falsely certifies that it is in compliance with this section, the contract with that contractor shall be subject to Article 9 (commencing with Section 10420), unless, within a time period specified by the department or other contracting agency, the contractor provides to the department or agency proof that it has complied, or is in the process of complying, with this section.\n(B) The application of the remedies or penalties contained in Article 9 (commencing with Section 10420) to a contract subject to this chapter shall not preclude the application of any existing remedies otherwise available to the department or other contracting agency under its existing enforcement powers.\n(f) Nothing in this section is intended to regulate the contracting practices of any local jurisdiction.\n(g) This section shall be construed so as not to conflict with applicable federal laws, rules, or regulations. In the event that a court or agency of competent jurisdiction holds that federal law, rule, or regulation invalidates any clause, sentence, paragraph, or section of this code or the application thereof to any person or circumstances, it is the intent of the state that the court or agency sever that clause, sentence, paragraph, or section so that the remainder of this section shall remain in effect.\nSEC. 2.\nSection 10295.35 of the Public Contract Code shall not be construed to create any new enforcement authority or responsibility in the Department of General Services or any other contracting agency.\nSEC. 
3.\nNo reimbursement is required by this act pursuant to Section 6 of Article XIII\u2009B of the California Constitution because the only costs that may be incurred by a local agency or school district will be incurred because this act creates a new crime or infraction, eliminates a crime or infraction, or changes the penalty for a crime or infraction, within the meaning of Section 17556 of the Government Code, or changes the definition of a crime within the meaning of Section 6 of Article XIII\u2009B of the California Constitution.', 'title': 'An act to add Section 10295.35 to the Public Contract Code, relating to public contracts.'} ``` ์—ฌ๊ธฐ์„œ ๋‹ค์Œ ๋‘ ๊ฐœ์˜ ํ•„๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๊ฒŒ ๋ฉ๋‹ˆ๋‹ค: - `text`: ๋ชจ๋ธ์˜ ์ž…๋ ฅ์ด ๋  ๋ฒ•์•ˆ ํ…์ŠคํŠธ์ž…๋‹ˆ๋‹ค. - `summary`: `text`์˜ ๊ฐ„๋žตํ•œ ๋ฒ„์ „์œผ๋กœ ๋ชจ๋ธ์˜ ํƒ€๊ฒŸ์ด ๋ฉ๋‹ˆ๋‹ค. ## ์ „์ฒ˜๋ฆฌ[[preprocess]] ๋‹ค์Œ์œผ๋กœ `text`์™€ `summary`๋ฅผ ์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•œ T5 ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> checkpoint = "t5-small" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) ``` ์ƒ์„ฑํ•˜๋ ค๋Š” ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋Š” ์•„๋ž˜ ์กฐ๊ฑด์„ ๋งŒ์กฑํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: 1. ์ž…๋ ฅ ์•ž์— ํ”„๋กฌํ”„ํŠธ๋ฅผ ๋ถ™์—ฌ T5๊ฐ€ ์š”์•ฝ ์ž‘์—…์ž„์„ ์ธ์‹ํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. ์—ฌ๋Ÿฌ NLP ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ๋Š” ์ผ๋ถ€ ๋ชจ๋ธ์€ ํŠน์ • ์ž‘์—…์— ๋Œ€ํ•œ ํ”„๋กฌํ”„ํŠธ๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. 2. ๋ ˆ์ด๋ธ”์„ ํ† ํฐํ™”ํ•  ๋•Œ `text_target` ์ธ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. 3. `max_length` ๋งค๊ฐœ๋ณ€์ˆ˜๋กœ ์„ค์ •๋œ ์ตœ๋Œ€ ๊ธธ์ด๋ฅผ ๋„˜์ง€ ์•Š๋„๋ก ๊ธด ์‹œํ€€์Šค๋ฅผ ์ž˜๋ผ๋ƒ…๋‹ˆ๋‹ค. ```py >>> prefix = "summarize: " >>> def preprocess_function(examples): ... inputs = [prefix + doc for doc in examples["text"]] ... model_inputs = tokenizer(inputs, max_length=1024, truncation=True) ... labels = tokenizer(text_target=examples["summary"], max_length=128, truncation=True) ... model_inputs["labels"] = labels["input_ids"] ... 
return model_inputs ``` ์ „์ฒด ๋ฐ์ดํ„ฐ์…‹์— ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์ ์šฉํ•˜๋ ค๋ฉด ๐Ÿค— Datasets์˜ [`~datasets.Dataset.map`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. `batched=True`๋กœ ์„ค์ •ํ•˜์—ฌ ๋ฐ์ดํ„ฐ์…‹์˜ ์—ฌ๋Ÿฌ ์š”์†Œ๋ฅผ ํ•œ ๋ฒˆ์— ์ฒ˜๋ฆฌํ•˜๋ฉด `map` ํ•จ์ˆ˜์˜ ์†๋„๋ฅผ ๋†’์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> tokenized_billsum = billsum.map(preprocess_function, batched=True) ``` ์ด์ œ [`DataCollatorForSeq2Seq`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์˜ˆ์ œ ๋ฐฐ์น˜๋ฅผ ๋งŒ๋“œ์„ธ์š”. ์ „์ฒด ๋ฐ์ดํ„ฐ์…‹์„ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ํŒจ๋”ฉํ•˜๋Š” ๊ฒƒ๋ณด๋‹ค ๋ฐฐ์น˜๋งˆ๋‹ค ๊ฐ€์žฅ ๊ธด ๋ฌธ์žฅ ๊ธธ์ด์— ๋งž์ถฐ *๋™์  ํŒจ๋”ฉ*ํ•˜๋Š” ๊ฒƒ์ด ๋” ํšจ์œจ์ ์ž…๋‹ˆ๋‹ค. <frameworkcontent> <pt> ```py >>> from transformers import DataCollatorForSeq2Seq >>> data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint) ``` </pt> <tf> ```py >>> from transformers import DataCollatorForSeq2Seq >>> data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint, return_tensors="tf") ``` </tf> </frameworkcontent> ## ํ‰๊ฐ€[[evaluate]] ํ•™์Šต ์ค‘์— ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ํฌํ•จํ•˜๋ฉด ๋ชจ๋ธ์˜ ์„ฑ๋Šฅ์„ ํ‰๊ฐ€ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์Šต๋‹ˆ๋‹ค. ๐Ÿค— [Evaluate](https://huggingface.co/docs/evaluate/index) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜๋ฉด ํ‰๊ฐ€ ๋ฐฉ๋ฒ•์„ ๋น ๋ฅด๊ฒŒ ๋ถˆ๋Ÿฌ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ์ž‘์—…์—์„œ๋Š” [ROUGE](https://huggingface.co/spaces/evaluate-metric/rouge) ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. (ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ๋ถˆ๋Ÿฌ์˜ค๊ณ  ๊ณ„์‚ฐํ•˜๋Š” ๋ฐฉ๋ฒ•์€ ๐Ÿค— Evaluate [๋‘˜๋Ÿฌ๋ณด๊ธฐ](https://huggingface.co/docs/evaluate/a_quick_tour)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”.) ```py >>> import evaluate >>> rouge = evaluate.load("rouge") ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ ์˜ˆ์ธก๊ฐ’๊ณผ ๋ ˆ์ด๋ธ”์„ [`~evaluate.EvaluationModule.compute`]์— ์ „๋‹ฌํ•˜์—ฌ ROUGE ์ง€ํ‘œ๋ฅผ ๊ณ„์‚ฐํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค: ```py >>> import numpy as np >>> def compute_metrics(eval_pred): ... predictions, labels = eval_pred ... decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True) ... 
labels = np.where(labels != -100, labels, tokenizer.pad_token_id) ... decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True) ... result = rouge.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True) ... prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in predictions] ... result["gen_len"] = np.mean(prediction_lens) ... return {k: round(v, 4) for k, v in result.items()} ``` ์ด์ œ `compute_metrics` ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์œผ๋ฉฐ, ํ•™์Šต์„ ์„ค์ •ํ•  ๋•Œ ์ด ํ•จ์ˆ˜๋กœ ๋˜๋Œ์•„์˜ฌ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ## ํ•™์Šต[[train]] <frameworkcontent> <pt> <Tip> ๋ชจ๋ธ์„ [`Trainer`]๋กœ ํŒŒ์ธํŠœ๋‹ ํ•˜๋Š” ๊ฒƒ์ด ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด, [์—ฌ๊ธฐ](../training#train-with-pytorch-trainer)์—์„œ ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ์„ ํ™•์ธํ•ด๋ณด์„ธ์š”! </Tip> ์ด์ œ ๋ชจ๋ธ ํ•™์Šต์„ ์‹œ์ž‘ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! [`AutoModelForSeq2SeqLM`]๋กœ T5๋ฅผ ๊ฐ€์ ธ์˜ค์„ธ์š”: ```py >>> from transformers import AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments, Seq2SeqTrainer >>> model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint) ``` ์ด์ œ ์„ธ ๋‹จ๊ณ„๋งŒ ๋‚จ์•˜์Šต๋‹ˆ๋‹ค: 1. [`Seq2SeqTrainingArguments`]์—์„œ ํ•™์Šต ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ •์˜ํ•˜์„ธ์š”. ์œ ์ผํ•œ ํ•„์ˆ˜ ๋งค๊ฐœ๋ณ€์ˆ˜๋Š” ๋ชจ๋ธ์„ ์ €์žฅํ•  ์œ„์น˜๋ฅผ ์ง€์ •ํ•˜๋Š” `output_dir`์ž…๋‹ˆ๋‹ค. `push_to_hub=True`๋ฅผ ์„ค์ •ํ•˜์—ฌ ์ด ๋ชจ๋ธ์„ Hub์— ํ‘ธ์‹œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค(๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๋ ค๋ฉด Hugging Face์— ๋กœ๊ทธ์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค.) [`Trainer`]๋Š” ๊ฐ ์—ํญ์ด ๋๋‚  ๋•Œ๋งˆ๋‹ค ROUGE ์ง€ํ‘œ๋ฅผ ํ‰๊ฐ€ํ•˜๊ณ  ํ•™์Šต ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์ €์žฅํ•ฉ๋‹ˆ๋‹ค. 2. ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ์…‹, ํ† ํฌ๋‚˜์ด์ €, ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ ๋ฐ `compute_metrics` ํ•จ์ˆ˜์™€ ํ•จ๊ป˜ ํ•™์Šต ์ธ์ˆ˜๋ฅผ [`Seq2SeqTrainer`]์— ์ „๋‹ฌํ•˜์„ธ์š”. 3. [`~Trainer.train`]์„ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ํŒŒ์ธํŠœ๋‹ํ•˜์„ธ์š”. ```py >>> training_args = Seq2SeqTrainingArguments( ... output_dir="my_awesome_billsum_model", ... evaluation_strategy="epoch", ... learning_rate=2e-5, ... 
per_device_train_batch_size=16, ... per_device_eval_batch_size=16, ... weight_decay=0.01, ... save_total_limit=3, ... num_train_epochs=4, ... predict_with_generate=True, ... fp16=True, ... push_to_hub=True, ... ) >>> trainer = Seq2SeqTrainer( ... model=model, ... args=training_args, ... train_dataset=tokenized_billsum["train"], ... eval_dataset=tokenized_billsum["test"], ... tokenizer=tokenizer, ... data_collator=data_collator, ... compute_metrics=compute_metrics, ... ) >>> trainer.train() ``` ํ•™์Šต์ด ์™„๋ฃŒ๋˜๋ฉด, ๋ˆ„๊ตฌ๋‚˜ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก [`~transformers.Trainer.push_to_hub`] ๋ฉ”์†Œ๋“œ๋กœ Hub์— ๊ณต์œ ํ•ฉ๋‹ˆ๋‹ค: ```py >>> trainer.push_to_hub() ``` </pt> <tf> <Tip> Keras๋กœ ๋ชจ๋ธ ํŒŒ์ธํŠœ๋‹์„ ํ•˜๋Š” ๊ฒƒ์ด ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด, [์—ฌ๊ธฐ](../training#train-a-tensorflow-model-with-keras)์—์„œ ๊ธฐ๋ณธ์ ์ธ ํŠœํ† ๋ฆฌ์–ผ์„ ํ™•์ธํ•˜์„ธ์š”! </Tip> TensorFlow์—์„œ ๋ชจ๋ธ์„ ํŒŒ์ธํŠœ๋‹ํ•˜๋ ค๋ฉด, ๋จผ์ € ์˜ตํ‹ฐ๋งˆ์ด์ €, ํ•™์Šต๋ฅ  ์Šค์ผ€์ค„ ๊ทธ๋ฆฌ๊ณ  ๋ช‡ ๊ฐ€์ง€ ํ•™์Šต ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์„ค์ •ํ•˜์„ธ์š”: ```py >>> from transformers import create_optimizer, AdamWeightDecay >>> optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01) ``` ๊ทธ๋Ÿฐ ๋‹ค์Œ [`TFAutoModelForSeq2SeqLM`]์„ ์‚ฌ์šฉํ•˜์—ฌ T5๋ฅผ ๊ฐ€์ ธ์˜ค์„ธ์š”: ```py >>> from transformers import TFAutoModelForSeq2SeqLM >>> model = TFAutoModelForSeq2SeqLM.from_pretrained(checkpoint) ``` [`~transformers.TFPreTrainedModel.prepare_tf_dataset`]์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ์…‹์„ `tf.data.Dataset` ํ˜•์‹์œผ๋กœ ๋ณ€ํ™˜ํ•˜์„ธ์š”: ```py >>> tf_train_set = model.prepare_tf_dataset( ... tokenized_billsum["train"], ... shuffle=True, ... batch_size=16, ... collate_fn=data_collator, ... ) >>> tf_test_set = model.prepare_tf_dataset( ... tokenized_billsum["test"], ... shuffle=False, ... batch_size=16, ... collate_fn=data_collator, ... 
)
```

[`compile`](https://keras.io/api/models/model_training_apis/#compile-method)์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํ•™์Šตํ•  ์ˆ˜ ์žˆ๋„๋ก ๊ตฌ์„ฑํ•˜์„ธ์š”:

```py
>>> import tensorflow as tf

>>> model.compile(optimizer=optimizer)
```

ํ•™์Šต์„ ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ์„ค์ •ํ•ด์•ผ ํ•  ๋งˆ์ง€๋ง‰ ๋‘ ๊ฐ€์ง€๋Š” ์˜ˆ์ธก์—์„œ ROUGE ์ ์ˆ˜๋ฅผ ๊ณ„์‚ฐํ•˜๊ณ  ๋ชจ๋ธ์„ Hub์— ํ‘ธ์‹œํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์ œ๊ณตํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋‘ ์ž‘์—… ๋ชจ๋‘ [Keras callbacks](../main_classes/keras_callbacks)์œผ๋กœ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

[`~transformers.KerasMetricCallback`]์— `compute_metrics` ํ•จ์ˆ˜๋ฅผ ์ „๋‹ฌํ•˜์„ธ์š”:

```py
>>> from transformers.keras_callbacks import KerasMetricCallback

>>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_test_set)
```

[`~transformers.PushToHubCallback`]์—์„œ ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ํ‘ธ์‹œํ•  ์œ„์น˜๋ฅผ ์ง€์ •ํ•˜์„ธ์š”:

```py
>>> from transformers.keras_callbacks import PushToHubCallback

>>> push_to_hub_callback = PushToHubCallback(
...     output_dir="my_awesome_billsum_model",
...     tokenizer=tokenizer,
... )
```

๊ทธ๋Ÿฐ ๋‹ค์Œ ์ฝœ๋ฐฑ์„ ๋ฒˆ๋“ค๋กœ ๋ฌถ์–ด์ค๋‹ˆ๋‹ค:

```py
>>> callbacks = [metric_callback, push_to_hub_callback]
```

๋“œ๋””์–ด ๋ชจ๋ธ ํ•™์Šต์„ ์‹œ์ž‘ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! ํ•™์Šต ๋ฐ ๊ฒ€์ฆ ๋ฐ์ดํ„ฐ์…‹, ์—ํญ ์ˆ˜ ๋ฐ ์ฝœ๋ฐฑ๊ณผ ํ•จ๊ป˜ [`fit`](https://keras.io/api/models/model_training_apis/#fit-method)์„ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ํŒŒ์ธํŠœ๋‹ํ•˜์„ธ์š”.

```py
>>> model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=callbacks)
```

ํ•™์Šต์ด ์™„๋ฃŒ๋˜๋ฉด ๋ชจ๋ธ์ด ์ž๋™์œผ๋กœ Hub์— ์—…๋กœ๋“œ๋˜์–ด ๋ˆ„๊ตฌ๋‚˜ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๊ฒŒ ๋ฉ๋‹ˆ๋‹ค!
</tf> </frameworkcontent> <Tip> ์š”์•ฝ์„ ์œ„ํ•ด ๋ชจ๋ธ์„ ํŒŒ์ธํŠœ๋‹ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ๋” ์ž์„ธํ•œ ์˜ˆ์ œ๋ฅผ ๋ณด๋ ค๋ฉด [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization.ipynb) ๋˜๋Š” [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/summarization-tf.ipynb)์„ ์ฐธ๊ณ ํ•˜์„ธ์š”. </Tip> ## ์ถ”๋ก [[inference]] ์ข‹์•„์š”, ์ด์ œ ๋ชจ๋ธ์„ ํŒŒ์ธํŠœ๋‹ํ–ˆ์œผ๋‹ˆ ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! ์š”์•ฝํ•  ํ…์ŠคํŠธ๋ฅผ ์ž‘์„ฑํ•ด๋ณด์„ธ์š”. T5์˜ ๊ฒฝ์šฐ ์ž‘์—…์— ๋”ฐ๋ผ ์ž…๋ ฅ ์•ž์— ์ ‘๋‘์‚ฌ๋ฅผ ๋ถ™์—ฌ์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์š”์•ฝ์˜ ๊ฒฝ์šฐ, ์•„๋ž˜์™€ ๊ฐ™์€ ์ ‘๋‘์‚ฌ๋ฅผ ์ž…๋ ฅ ์•ž์— ๋ถ™์—ฌ์•ผ ํ•ฉ๋‹ˆ๋‹ค: ```py >>> text = "summarize: The Inflation Reduction Act lowers prescription drug costs, health care costs, and energy costs. It's the most aggressive action on tackling the climate crisis in American history, which will lift up American workers and create good-paying, union jobs across the country. It'll lower the deficit and ask the ultra-wealthy and corporations to pay their fair share. And no one making under $400,000 per year will pay a penny more in taxes." ``` ์ถ”๋ก ์„ ์œ„ํ•ด ํŒŒ์ธํŠœ๋‹ํ•œ ๋ชจ๋ธ์„ ์‹œํ—˜ํ•ด ๋ณด๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ [`pipeline`]์—์„œ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜์—ฌ ์š”์•ฝ์„ ์ˆ˜ํ–‰ํ•  [`pipeline`]์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ณ  ํ…์ŠคํŠธ๋ฅผ ์ „๋‹ฌํ•˜์„ธ์š”: ```py >>> from transformers import pipeline >>> summarizer = pipeline("summarization", model="stevhliu/my_awesome_billsum_model") >>> summarizer(text) [{"summary_text": "The Inflation Reduction Act lowers prescription drug costs, health care costs, and energy costs. 
It's the most aggressive action on tackling the climate crisis in American history, which will lift up American workers and create good-paying, union jobs across the country."}] ``` ์›ํ•œ๋‹ค๋ฉด ์ˆ˜๋™์œผ๋กœ ๋‹ค์Œ๊ณผ ๊ฐ™์€ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•˜์—ฌ [`pipeline`]์˜ ๊ฒฐ๊ณผ์™€ ๋™์ผํ•œ ๊ฒฐ๊ณผ๋ฅผ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: <frameworkcontent> <pt> ํ…์ŠคํŠธ๋ฅผ ํ† ํฌ๋‚˜์ด์ฆˆํ•˜๊ณ  `input_ids`๋ฅผ PyTorch ํ…์„œ๋กœ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_billsum_model") >>> inputs = tokenizer(text, return_tensors="pt").input_ids ``` ์š”์•ฝ๋ฌธ์„ ์ƒ์„ฑํ•˜๋ ค๋ฉด [`~transformers.generation_utils.GenerationMixin.generate`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. ํ…์ŠคํŠธ ์ƒ์„ฑ์— ๋Œ€ํ•œ ๋‹ค์–‘ํ•œ ์ „๋žต๊ณผ ์ƒ์„ฑ์„ ์ œ์–ดํ•˜๊ธฐ ์œ„ํ•œ ๋งค๊ฐœ๋ณ€์ˆ˜์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [ํ…์ŠคํŠธ ์ƒ์„ฑ](../main_classes/text_generation) API๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ```py >>> from transformers import AutoModelForSeq2SeqLM >>> model = AutoModelForSeq2SeqLM.from_pretrained("stevhliu/my_awesome_billsum_model") >>> outputs = model.generate(inputs, max_new_tokens=100, do_sample=False) ``` ์ƒ์„ฑ๋œ ํ† ํฐ ID๋ฅผ ํ…์ŠคํŠธ๋กœ ๋””์ฝ”๋”ฉํ•ฉ๋‹ˆ๋‹ค: ```py >>> tokenizer.decode(outputs[0], skip_special_tokens=True) 'the inflation reduction act lowers prescription drug costs, health care costs, and energy costs. it's the most aggressive action on tackling the climate crisis in american history. it will ask the ultra-wealthy and corporations to pay their fair share.' ``` </pt> <tf> ํ…์ŠคํŠธ๋ฅผ ํ† ํฌ๋‚˜์ด์ฆˆํ•˜๊ณ  `input_ids`๋ฅผ TensorFlow ํ…์„œ๋กœ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_billsum_model") >>> inputs = tokenizer(text, return_tensors="tf").input_ids ``` ์š”์•ฝ๋ฌธ์„ ์ƒ์„ฑํ•˜๋ ค๋ฉด [`~transformers.generation_tf_utils.TFGenerationMixin.generate`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. 
ํ…์ŠคํŠธ ์ƒ์„ฑ์— ๋Œ€ํ•œ ๋‹ค์–‘ํ•œ ์ „๋žต๊ณผ ์ƒ์„ฑ์„ ์ œ์–ดํ•˜๊ธฐ ์œ„ํ•œ ๋งค๊ฐœ๋ณ€์ˆ˜์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [ํ…์ŠคํŠธ ์ƒ์„ฑ](../main_classes/text_generation) API๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ```py >>> from transformers import TFAutoModelForSeq2SeqLM >>> model = TFAutoModelForSeq2SeqLM.from_pretrained("stevhliu/my_awesome_billsum_model") >>> outputs = model.generate(inputs, max_new_tokens=100, do_sample=False) ``` ์ƒ์„ฑ๋œ ํ† ํฐ ID๋ฅผ ํ…์ŠคํŠธ๋กœ ๋””์ฝ”๋”ฉํ•ฉ๋‹ˆ๋‹ค: ```py >>> tokenizer.decode(outputs[0], skip_special_tokens=True) 'the inflation reduction act lowers prescription drug costs, health care costs, and energy costs. it's the most aggressive action on tackling the climate crisis in american history. it will ask the ultra-wealthy and corporations to pay their fair share.' ``` </tf> </frameworkcontent>
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์งˆ์˜ ์‘๋‹ต(Question Answering)[[question-answering]] [[open-in-colab]] <Youtube id="ajPx5LwJD-I"/> ์งˆ์˜ ์‘๋‹ต ํƒœ์Šคํฌ๋Š” ์ฃผ์–ด์ง„ ์งˆ๋ฌธ์— ๋Œ€ํ•œ ๋‹ต๋ณ€์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. Alexa, Siri ๋˜๋Š” Google๊ณผ ๊ฐ™์€ ๊ฐ€์ƒ ๋น„์„œ์—๊ฒŒ ๋‚ ์”จ๊ฐ€ ์–ด๋–ค์ง€ ๋ฌผ์–ด๋ณธ ์ ์ด ์žˆ๋‹ค๋ฉด ์งˆ์˜ ์‘๋‹ต ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ด๋ณธ ์ ์ด ์žˆ์„ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์งˆ์˜ ์‘๋‹ต ํƒœ์Šคํฌ์—๋Š” ์ผ๋ฐ˜์ ์œผ๋กœ ๋‘ ๊ฐ€์ง€ ์œ ํ˜•์ด ์žˆ์Šต๋‹ˆ๋‹ค. - ์ถ”์ถœ์ (Extractive) ์งˆ์˜ ์‘๋‹ต: ์ฃผ์–ด์ง„ ๋ฌธ๋งฅ์—์„œ ๋‹ต๋ณ€์„ ์ถ”์ถœํ•ฉ๋‹ˆ๋‹ค. - ์ƒ์„ฑ์ (Abstractive) ์งˆ์˜ ์‘๋‹ต: ๋ฌธ๋งฅ์—์„œ ์งˆ๋ฌธ์— ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ๋‹ตํ•˜๋Š” ๋‹ต๋ณ€์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ์ด ๊ฐ€์ด๋“œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋ฐฉ๋ฒ•๋“ค์„ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค. 1. ์ถ”์ถœ์  ์งˆ์˜ ์‘๋‹ต์„ ํ•˜๊ธฐ ์œ„ํ•ด [SQuAD](https://huggingface.co/datasets/squad) ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ [DistilBERT](https://huggingface.co/distilbert-base-uncased) ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ 2. ์ถ”๋ก ์— ๋ฏธ์„ธ ์กฐ์ •๋œ ๋ชจ๋ธ ์‚ฌ์šฉํ•˜๊ธฐ <Tip> ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ ์„ค๋ช…ํ•˜๋Š” ํƒœ์Šคํฌ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜์—์„œ ์ง€์›๋ฉ๋‹ˆ๋‹ค. 
<!--This tip is automatically generated by `make fix-copies`, do not fill manually!--> [ALBERT](../model_doc/albert), [BART](../model_doc/bart), [BERT](../model_doc/bert), [BigBird](../model_doc/big_bird), [BigBird-Pegasus](../model_doc/bigbird_pegasus), [BLOOM](../model_doc/bloom), [CamemBERT](../model_doc/camembert), [CANINE](../model_doc/canine), [ConvBERT](../model_doc/convbert), [Data2VecText](../model_doc/data2vec-text), [DeBERTa](../model_doc/deberta), [DeBERTa-v2](../model_doc/deberta-v2), [DistilBERT](../model_doc/distilbert), [ELECTRA](../model_doc/electra), [ERNIE](../model_doc/ernie), [ErnieM](../model_doc/ernie_m), [FlauBERT](../model_doc/flaubert), [FNet](../model_doc/fnet), [Funnel Transformer](../model_doc/funnel), [GPT-J](../model_doc/gptj), [I-BERT](../model_doc/ibert), [LayoutLMv2](../model_doc/layoutlmv2), [LayoutLMv3](../model_doc/layoutlmv3), [LED](../model_doc/led), [LiLT](../model_doc/lilt), [Longformer](../model_doc/longformer), [LUKE](../model_doc/luke), [LXMERT](../model_doc/lxmert), [MarkupLM](../model_doc/markuplm), [mBART](../model_doc/mbart), [MEGA](../model_doc/mega), [Megatron-BERT](../model_doc/megatron-bert), [MobileBERT](../model_doc/mobilebert), [MPNet](../model_doc/mpnet), [MVP](../model_doc/mvp), [Nezha](../model_doc/nezha), [Nystrรถmformer](../model_doc/nystromformer), [OPT](../model_doc/opt), [QDQBert](../model_doc/qdqbert), [Reformer](../model_doc/reformer), [RemBERT](../model_doc/rembert), [RoBERTa](../model_doc/roberta), [RoBERTa-PreLayerNorm](../model_doc/roberta-prelayernorm), [RoCBert](../model_doc/roc_bert), [RoFormer](../model_doc/roformer), [Splinter](../model_doc/splinter), [SqueezeBERT](../model_doc/squeezebert), [XLM](../model_doc/xlm), [XLM-RoBERTa](../model_doc/xlm-roberta), [XLM-RoBERTa-XL](../model_doc/xlm-roberta-xl), [XLNet](../model_doc/xlnet), [X-MOD](../model_doc/xmod), [YOSO](../model_doc/yoso) <!--End of the generated tip--> </Tip> ์‹œ์ž‘ํ•˜๊ธฐ ์ „์—, ํ•„์š”ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ๋ชจ๋‘ ์„ค์น˜๋˜์–ด 
์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install transformers datasets evaluate ``` ์—ฌ๋Ÿฌ๋ถ„์˜ ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๊ณต์œ ํ•  ์ˆ˜ ์žˆ๋„๋ก Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ๋ฉ”์‹œ์ง€๊ฐ€ ํ‘œ์‹œ๋˜๋ฉด ํ† ํฐ์„ ์ž…๋ ฅํ•ด์„œ ๋กœ๊ทธ์ธํ•ฉ๋‹ˆ๋‹ค: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## SQuAD ๋ฐ์ดํ„ฐ ์„ธํŠธ ๊ฐ€์ ธ์˜ค๊ธฐ[[load-squad-dataset]] ๋จผ์ € ๐Ÿค— Datasets ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ SQuAD ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์ผ๋ถ€๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋กœ ํ›ˆ๋ จํ•˜๋ฉฐ ๋” ๋งŽ์€ ์‹œ๊ฐ„์„ ํ• ์• ํ•˜๊ธฐ ์ „์— ๋ชจ๋“  ๊ฒƒ์ด ์ž˜ ์ž‘๋™ํ•˜๋Š”์ง€ ์‹คํ—˜ํ•˜๊ณ  ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> from datasets import load_dataset >>> squad = load_dataset("squad", split="train[:5000]") ``` ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ๋ถ„ํ• ๋œ `train`์„ [`~datasets.Dataset.train_test_split`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ด ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ ์„ธํŠธ์™€ ํ…Œ์ŠคํŠธ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋กœ ๋‚˜๋ˆ„์–ด์ค๋‹ˆ๋‹ค: ```py >>> squad = squad.train_test_split(test_size=0.2) ``` ๊ทธ๋ฆฌ๊ณ ๋‚˜์„œ ์˜ˆ์‹œ๋กœ ๋ฐ์ดํ„ฐ๋ฅผ ํ•˜๋‚˜ ์‚ดํŽด๋ด…๋‹ˆ๋‹ค: ```py >>> squad["train"][0] {'answers': {'answer_start': [515], 'text': ['Saint Bernadette Soubirous']}, 'context': 'Architecturally, the school has a Catholic character. Atop the Main Building\'s gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend "Venite Ad Me Omnes". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. 
At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary.', 'id': '5733be284776f41900661182', 'question': 'To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?', 'title': 'University_of_Notre_Dame' } ``` ์ด ์ค‘์—์„œ ๋ช‡ ๊ฐ€์ง€ ์ค‘์š”ํ•œ ํ•ญ๋ชฉ์ด ์žˆ์Šต๋‹ˆ๋‹ค: - `answers`: ๋‹ต์•ˆ ํ† ํฐ์˜ ์‹œ์ž‘ ์œ„์น˜์™€ ๋‹ต์•ˆ ํ…์ŠคํŠธ - `context`: ๋ชจ๋ธ์ด ๋‹ต์„ ์ถ”์ถœํ•˜๋Š”๋ฐ ํ•„์š”ํ•œ ๋ฐฐ๊ฒฝ ์ง€์‹ - `question`: ๋ชจ๋ธ์ด ๋‹ตํ•ด์•ผ ํ•˜๋Š” ์งˆ๋ฌธ ## ์ „์ฒ˜๋ฆฌ[[preprocess]] <Youtube id="qgaM0weJHpA"/> ๋‹ค์Œ ๋‹จ๊ณ„์—์„œ๋Š” `question` ๋ฐ `context` ํ•ญ๋ชฉ์„ ์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด DistilBERT ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") ``` ์งˆ์˜ ์‘๋‹ต ํƒœ์Šคํฌ์™€ ๊ด€๋ จํ•ด์„œ ํŠนํžˆ ์œ ์˜ํ•ด์•ผํ•  ๋ช‡ ๊ฐ€์ง€ ์ „์ฒ˜๋ฆฌ ๋‹จ๊ณ„๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค: 1. ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์ผ๋ถ€ ์˜ˆ์ œ์—๋Š” ๋ชจ๋ธ์˜ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด๋ฅผ ์ดˆ๊ณผํ•˜๋Š” ๋งค์šฐ ๊ธด `context`๊ฐ€ ์žˆ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ธด ์‹œํ€€์Šค๋ฅผ ๋‹ค๋ฃจ๊ธฐ ์œ„ํ•ด์„œ๋Š”, `truncation="only_second"`๋กœ ์„ค์ •ํ•ด `context`๋งŒ ์ž˜๋ผ๋‚ด๋ฉด ๋ฉ๋‹ˆ๋‹ค. 2. ๊ทธ ๋‹ค์Œ, `return_offset_mapping=True`๋กœ ์„ค์ •ํ•ด ๋‹ต๋ณ€์˜ ์‹œ์ž‘๊ณผ ์ข…๋ฃŒ ์œ„์น˜๋ฅผ ์›๋ž˜์˜ `context`์— ๋งคํ•‘ํ•ฉ๋‹ˆ๋‹ค. 3. ๋งคํ•‘์„ ์™„๋ฃŒํ•˜๋ฉด, ์ด์ œ ๋‹ต๋ณ€์—์„œ ์‹œ์ž‘ ํ† ํฐ๊ณผ ์ข…๋ฃŒ ํ† ํฐ์„ ์ฐพ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜คํ”„์…‹์˜ ์–ด๋А ๋ถ€๋ถ„์ด `question`๊ณผ `context`์— ํ•ด๋‹นํ•˜๋Š”์ง€ ์ฐพ์„ ์ˆ˜ ์žˆ๋„๋ก [`~tokenizers.Encoding.sequence_ids`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. ๋‹ค์Œ์€ `answer`์˜ ์‹œ์ž‘ ํ† ํฐ๊ณผ ์ข…๋ฃŒ ํ† ํฐ์„ ์ž˜๋ผ๋‚ด์„œ `context`์— ๋งคํ•‘ํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“œ๋Š” ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค: ```py >>> def preprocess_function(examples): ... questions = [q.strip() for q in examples["question"]] ... inputs = tokenizer( ... questions, ... examples["context"], ... max_length=384, ... truncation="only_second", ... 
return_offsets_mapping=True, ... padding="max_length", ... ) ... offset_mapping = inputs.pop("offset_mapping") ... answers = examples["answers"] ... start_positions = [] ... end_positions = [] ... for i, offset in enumerate(offset_mapping): ... answer = answers[i] ... start_char = answer["answer_start"][0] ... end_char = answer["answer_start"][0] + len(answer["text"][0]) ... sequence_ids = inputs.sequence_ids(i) ... # Find the start and end of the context ... idx = 0 ... while sequence_ids[idx] != 1: ... idx += 1 ... context_start = idx ... while sequence_ids[idx] == 1: ... idx += 1 ... context_end = idx - 1 ... # If the answer is not fully inside the context, label it (0, 0) ... if offset[context_start][0] > end_char or offset[context_end][1] < start_char: ... start_positions.append(0) ... end_positions.append(0) ... else: ... # Otherwise it's the start and end token positions ... idx = context_start ... while idx <= context_end and offset[idx][0] <= start_char: ... idx += 1 ... start_positions.append(idx - 1) ... idx = context_end ... while idx >= context_start and offset[idx][1] >= end_char: ... idx -= 1 ... end_positions.append(idx + 1) ... inputs["start_positions"] = start_positions ... inputs["end_positions"] = end_positions ... return inputs ``` ๋ชจ๋“  ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ์ „์ฒ˜๋ฆฌ๋ฅผ ์ ์šฉํ•˜๋ ค๋ฉด, ๐Ÿค— Datasets [`~datasets.Dataset.map`] ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. `batched=True`๋กœ ์„ค์ •ํ•ด ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์—ฌ๋Ÿฌ ์š”์†Œ๋“ค์„ ํ•œ ๋ฒˆ์— ์ฒ˜๋ฆฌํ•˜๋ฉด `map` ํ•จ์ˆ˜์˜ ์†๋„๋ฅผ ๋น ๋ฅด๊ฒŒ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•„์š”ํ•˜์ง€ ์•Š์€ ์—ด์€ ๋ชจ๋‘ ์ œ๊ฑฐํ•ฉ๋‹ˆ๋‹ค: ```py >>> tokenized_squad = squad.map(preprocess_function, batched=True, remove_columns=squad["train"].column_names) ``` ์ด์ œ [`DefaultDataCollator`]๋ฅผ ์ด์šฉํ•ด ์˜ˆ์‹œ ๋ฐฐ์น˜๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. 
๐Ÿค— Transformers์˜ ๋‹ค๋ฅธ ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ(data collator)์™€ ๋‹ฌ๋ฆฌ, [`DefaultDataCollator`]๋Š” ํŒจ๋”ฉ๊ณผ ๊ฐ™์€ ์ถ”๊ฐ€ ์ „์ฒ˜๋ฆฌ๋ฅผ ์ ์šฉํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค: <frameworkcontent> <pt> ```py >>> from transformers import DefaultDataCollator >>> data_collator = DefaultDataCollator() ``` </pt> <tf> ```py >>> from transformers import DefaultDataCollator >>> data_collator = DefaultDataCollator(return_tensors="tf") ``` </tf> </frameworkcontent> ## ํ›ˆ๋ จ[[train]] <frameworkcontent> <pt> <Tip> [`Trainer`]๋ฅผ ์ด์šฉํ•ด ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๊ฒƒ์— ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด, [์—ฌ๊ธฐ](../training#train-with-pytorch-trainer)์—์„œ ๊ธฐ์ดˆ ํŠœํ† ๋ฆฌ์–ผ์„ ์‚ดํŽด๋ณด์„ธ์š”! </Tip> ์ด์ œ ๋ชจ๋ธ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! [`AutoModelForQuestionAnswering`]์œผ๋กœ DistilBERT๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForQuestionAnswering, TrainingArguments, Trainer >>> model = AutoModelForQuestionAnswering.from_pretrained("distilbert-base-uncased") ``` ์ด์ œ ์„ธ ๋‹จ๊ณ„๋งŒ ๋‚จ์•˜์Šต๋‹ˆ๋‹ค: 1. [`TrainingArguments`]์—์„œ ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ •ํ•ฉ๋‹ˆ๋‹ค. ๊ผญ ํ•„์š”ํ•œ ๋งค๊ฐœ๋ณ€์ˆ˜๋Š” ๋ชจ๋ธ์„ ์ €์žฅํ•  ์œ„์น˜๋ฅผ ์ง€์ •ํ•˜๋Š” `output_dir` ์ž…๋‹ˆ๋‹ค. `push_to_hub=True`๋กœ ์„ค์ •ํ•ด์„œ ์ด ๋ชจ๋ธ์„ Hub๋กœ ํ‘ธ์‹œํ•ฉ๋‹ˆ๋‹ค (๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๋ ค๋ฉด Hugging Face์— ๋กœ๊ทธ์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค). 2. ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ ์„ธํŠธ, ํ† ํฌ๋‚˜์ด์ €, ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ์™€ ํ•จ๊ป˜ [`Trainer`]์— ํ›ˆ๋ จ ์ธ์ˆ˜๋“ค์„ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. 3. [`~Trainer.train`]์„ ํ˜ธ์ถœํ•ด์„œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค. ```py >>> training_args = TrainingArguments( ... output_dir="my_awesome_qa_model", ... evaluation_strategy="epoch", ... learning_rate=2e-5, ... per_device_train_batch_size=16, ... per_device_eval_batch_size=16, ... num_train_epochs=3, ... weight_decay=0.01, ... push_to_hub=True, ... ) >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=tokenized_squad["train"], ... 
eval_dataset=tokenized_squad["test"],
...     tokenizer=tokenizer,
...     data_collator=data_collator,
... )

>>> trainer.train()
```

ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด, [`~transformers.Trainer.push_to_hub`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ด ๋ชจ๋ธ์„ Hub์— ๊ณต์œ ํ•ด์„œ ๋ชจ๋“  ์‚ฌ๋žŒ์ด ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๊ฒŒ ํ•ด์ฃผ์„ธ์š”:

```py
>>> trainer.push_to_hub()
```
</pt>
<tf>
<Tip>

Keras๋กœ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๊ฒƒ์— ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด, [์—ฌ๊ธฐ](../training#train-a-tensorflow-model-with-keras)์—์„œ ๊ธฐ์ดˆ ํŠœํ† ๋ฆฌ์–ผ์„ ์‚ดํŽด๋ณด์„ธ์š”!

</Tip>

TensorFlow๋กœ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋ ค๋ฉด ์˜ตํ‹ฐ๋งˆ์ด์ € ํ•จ์ˆ˜, ํ•™์Šต๋ฅ  ์Šค์ผ€์ค„ ๋ฐ ๋ช‡ ๊ฐ€์ง€ ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์„ค์ •ํ•˜๋Š” ๊ฒƒ๋ถ€ํ„ฐ ์‹œ์ž‘ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> from transformers import create_optimizer

>>> batch_size = 16
>>> num_epochs = 2
>>> total_train_steps = (len(tokenized_squad["train"]) // batch_size) * num_epochs
>>> optimizer, schedule = create_optimizer(
...     init_lr=2e-5,
...     num_warmup_steps=0,
...     num_train_steps=total_train_steps,
... )
```

๊ทธ ๋‹ค์Œ [`TFAutoModelForQuestionAnswering`]์œผ๋กœ DistilBERT๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค:

```py
>>> from transformers import TFAutoModelForQuestionAnswering

>>> model = TFAutoModelForQuestionAnswering.from_pretrained("distilbert-base-uncased")
```

[`~transformers.TFPreTrainedModel.prepare_tf_dataset`]์„ ์‚ฌ์šฉํ•ด์„œ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ `tf.data.Dataset` ํ˜•์‹์œผ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> tf_train_set = model.prepare_tf_dataset(
...     tokenized_squad["train"],
...     shuffle=True,
...     batch_size=16,
...     collate_fn=data_collator,
... )

>>> tf_validation_set = model.prepare_tf_dataset(
...     tokenized_squad["test"],
...     shuffle=False,
...     batch_size=16,
...     collate_fn=data_collator,
... )
```

[`compile`](https://keras.io/api/models/model_training_apis/#compile-method)๋กœ ํ›ˆ๋ จํ•  ๋ชจ๋ธ์„ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> import tensorflow as tf

>>> model.compile(optimizer=optimizer)
```

๋งˆ์ง€๋ง‰์œผ๋กœ ๋ชจ๋ธ์„ Hub๋กœ ํ‘ธ์‹œํ•  ๋ฐฉ๋ฒ•์„ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค.
[`~transformers.PushToHubCallback`]์—์„œ ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ํ‘ธ์‹œํ•  ๊ฒฝ๋กœ๋ฅผ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> from transformers.keras_callbacks import PushToHubCallback

>>> callback = PushToHubCallback(
...     output_dir="my_awesome_qa_model",
...     tokenizer=tokenizer,
... )
```

๋“œ๋””์–ด ๋ชจ๋ธ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ ์„ธํŠธ์™€ ํ‰๊ฐ€ ๋ฐ์ดํ„ฐ ์„ธํŠธ, ์—ํญ ์ˆ˜, ์ฝœ๋ฐฑ์„ ์„ค์ •ํ•œ ํ›„ [`fit`](https://keras.io/api/models/model_training_apis/#fit-method)์„ ์ด์šฉํ•ด ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค. ์˜ตํ‹ฐ๋งˆ์ด์ €์˜ ํ•™์Šต๋ฅ  ์Šค์ผ€์ค„์ด `num_epochs`๋ฅผ ๊ธฐ์ค€์œผ๋กœ ๊ณ„์‚ฐ๋˜์—ˆ์œผ๋ฏ€๋กœ ์—ํญ ์ˆ˜๋„ ๊ฐ™์€ ๊ฐ’์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค:

```py
>>> model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=num_epochs, callbacks=[callback])
```

ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด ๋ชจ๋ธ์ด ์ž๋™์œผ๋กœ Hub์— ์—…๋กœ๋“œ๋˜์–ด ๋ˆ„๊ตฌ๋‚˜ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค!
</tf>
</frameworkcontent>

<Tip>

์งˆ์˜ ์‘๋‹ต์„ ์œ„ํ•ด ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ๋” ์ž์„ธํ•œ ์˜ˆ์‹œ๋Š” [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb) ๋˜๋Š” [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb)์„ ์ฐธ์กฐํ•˜์„ธ์š”.

</Tip>

## ํ‰๊ฐ€[[evaluate]]

์งˆ์˜ ์‘๋‹ต์„ ํ‰๊ฐ€ํ•˜๋ ค๋ฉด ์ƒ๋‹นํ•œ ์–‘์˜ ํ›„์ฒ˜๋ฆฌ๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ์‹œ๊ฐ„์ด ๋„ˆ๋ฌด ๋งŽ์ด ๊ฑธ๋ฆฌ์ง€ ์•Š๋„๋ก ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ํ‰๊ฐ€ ๋‹จ๊ณ„๋ฅผ ์ƒ๋žตํ•ฉ๋‹ˆ๋‹ค. [`Trainer`]๋Š” ํ›ˆ๋ จ ๊ณผ์ •์—์„œ ํ‰๊ฐ€ ์†์‹ค(evaluation loss)์„ ๊ณ„์† ๊ณ„์‚ฐํ•˜๊ธฐ ๋•Œ๋ฌธ์— ๋ชจ๋ธ์˜ ์„ฑ๋Šฅ์„ ๋Œ€๋žต์ ์œผ๋กœ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

์‹œ๊ฐ„์— ์—ฌ์œ ๊ฐ€ ์žˆ๊ณ  ์งˆ์˜ ์‘๋‹ต ๋ชจ๋ธ์„ ํ‰๊ฐ€ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๊ด€์‹ฌ์ด ์žˆ๋‹ค๋ฉด ๐Ÿค— Hugging Face Course์˜ [Question answering](https://huggingface.co/course/chapter7/7?fw=pt#postprocessing) ์ฑ•ํ„ฐ๋ฅผ ์‚ดํŽด๋ณด์„ธ์š”!

## ์ถ”๋ก [[inference]]

์ด์ œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ–ˆ์œผ๋‹ˆ ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค!
์งˆ๋ฌธ๊ณผ ๋ชจ๋ธ์ด ์˜ˆ์ธกํ•˜๊ธฐ ์›ํ•˜๋Š” ๋ฌธ๋งฅ(context)๋ฅผ ์ƒ๊ฐํ•ด๋ณด์„ธ์š”: ```py >>> question = "How many programming languages does BLOOM support?" >>> context = "BLOOM has 176 billion parameters and can generate text in 46 languages natural languages and 13 programming languages." ``` ์ถ”๋ก ์„ ์œ„ํ•ด ๋ฏธ์„ธ ์กฐ์ •ํ•œ ๋ชจ๋ธ์„ ํ…Œ์ŠคํŠธํ•˜๋Š” ๊ฐ€์žฅ ์‰ฌ์šด ๋ฐฉ๋ฒ•์€ [`pipeline`]์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ ์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ด ์งˆ์˜ ์‘๋‹ต์„ ํ•˜๊ธฐ ์œ„ํ•ด์„œ `pipeline`์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ณ  ํ…์ŠคํŠธ๋ฅผ ์ž…๋ ฅํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import pipeline >>> question_answerer = pipeline("question-answering", model="my_awesome_qa_model") >>> question_answerer(question=question, context=context) {'score': 0.2058267742395401, 'start': 10, 'end': 95, 'answer': '176 billion parameters and can generate text in 46 languages natural languages and 13'} ``` ์›ํ•œ๋‹ค๋ฉด `pipeline`์˜ ๊ฒฐ๊ณผ๋ฅผ ์ง์ ‘ ๋ณต์ œํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: <frameworkcontent> <pt> ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•ด์„œ PyTorch ํ…์„œ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_qa_model") >>> inputs = tokenizer(question, context, return_tensors="pt") ``` ๋ชจ๋ธ์— ์ž…๋ ฅ์„ ์ „๋‹ฌํ•˜๊ณ  `logits`์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForQuestionAnswering >>> model = AutoModelForQuestionAnswering.from_pretrained("my_awesome_qa_model") >>> with torch.no_grad(): ... 
outputs = model(**inputs)
```

๋ชจ๋ธ ์ถœ๋ ฅ์—์„œ ์‹œ์ž‘ ์œ„์น˜์™€ ์ข…๋ฃŒ ์œ„์น˜ ๊ฐ๊ฐ์— ๋Œ€ํ•ด ๊ฐ€์žฅ ๋†’์€ ํ™•๋ฅ ์„ ๊ฐ€์ง„ ์ธ๋ฑ์Šค๋ฅผ ๊ตฌํ•ฉ๋‹ˆ๋‹ค:

```py
>>> answer_start_index = outputs.start_logits.argmax()
>>> answer_end_index = outputs.end_logits.argmax()
```

์˜ˆ์ธก๋œ ํ† ํฐ์„ ํ•ด๋…ํ•ด์„œ ๋‹ต์„ ์–ป์Šต๋‹ˆ๋‹ค:

```py
>>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
>>> tokenizer.decode(predict_answer_tokens)
'176 billion parameters and can generate text in 46 languages natural languages and 13'
```
</pt>
<tf>
ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•ด์„œ TensorFlow ํ…์„œ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_qa_model")
>>> inputs = tokenizer(question, context, return_tensors="tf")
```

๋ชจ๋ธ์— ์ž…๋ ฅ์„ ์ „๋‹ฌํ•˜๊ณ  `logits`์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> from transformers import TFAutoModelForQuestionAnswering

>>> model = TFAutoModelForQuestionAnswering.from_pretrained("my_awesome_qa_model")
>>> outputs = model(**inputs)
```

๋ชจ๋ธ ์ถœ๋ ฅ์—์„œ ์‹œ์ž‘ ์œ„์น˜์™€ ์ข…๋ฃŒ ์œ„์น˜ ๊ฐ๊ฐ์— ๋Œ€ํ•ด ๊ฐ€์žฅ ๋†’์€ ํ™•๋ฅ ์„ ๊ฐ€์ง„ ์ธ๋ฑ์Šค๋ฅผ ๊ตฌํ•ฉ๋‹ˆ๋‹ค:

```py
>>> import tensorflow as tf

>>> answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
>>> answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
```

์˜ˆ์ธก๋œ ํ† ํฐ์„ ํ•ด๋…ํ•ด์„œ ๋‹ต์„ ์–ป์Šต๋‹ˆ๋‹ค:

```py
>>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
>>> tokenizer.decode(predict_answer_tokens)
'176 billion parameters and can generate text in 46 languages natural languages and 13'
```
</tf>
</frameworkcontent>
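์•ž์„œ ํ‰๊ฐ€ ์„น์…˜์—์„œ ์–ธ๊ธ‰ํ–ˆ๋“ฏ ์งˆ์˜ ์‘๋‹ต ํ‰๊ฐ€์—๋Š” ํ›„์ฒ˜๋ฆฌ๊ฐ€ ํ•„์š”ํ•˜์ง€๋งŒ, ์ตœ์ข… ์ง€ํ‘œ์ธ Exact Match(EM)์™€ F1 ์ž์ฒด์˜ ๊ณ„์‚ฐ์€ ๋‹จ์ˆœํ•ฉ๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ๋‘ ์ง€ํ‘œ์˜ ํ•ต์‹ฌ ๊ณ„์‚ฐ์„ ์ˆœ์ˆ˜ ํŒŒ์ด์ฌ์œผ๋กœ ๋‹จ์ˆœํ™”ํ•œ ์Šค์ผ€์น˜์ž…๋‹ˆ๋‹ค. ์‹ค์ œ SQuAD ์ง€ํ‘œ๋Š” ๊ด€์‚ฌยท๊ตฌ๋‘์  ์ œ๊ฑฐ ๋“ฑ ์ถ”๊ฐ€ ์ •๊ทœํ™”๋ฅผ ์ ์šฉํ•˜๋ฏ€๋กœ, ์—ฌ๊ธฐ์„œ๋Š” ์†Œ๋ฌธ์ž ๋ณ€ํ™˜๊ณผ ๊ณต๋ฐฑ ๊ธฐ์ค€ ํ† ํฐํ™”๋งŒ ๊ฐ€์ •ํ•œ ์•ฝ์‹์ด๋ฉฐ, ์‹ค์ œ ํ‰๊ฐ€์—๋Š” ๐Ÿค— Evaluate์˜ `squad` ์ง€ํ‘œ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค:

```py
from collections import Counter


def exact_match(prediction: str, reference: str) -> float:
    # ๋‹จ์ˆœํ™”๋œ ์ •๊ทœํ™”: ์†Œ๋ฌธ์ž ๋ณ€ํ™˜๊ณผ ์–‘์ชฝ ๊ณต๋ฐฑ ์ œ๊ฑฐ๋งŒ ์ ์šฉ
    return float(prediction.strip().lower() == reference.strip().lower())


def f1(prediction: str, reference: str) -> float:
    # ๊ณต๋ฐฑ ๊ธฐ์ค€ ํ† ํฐํ™” ํ›„, ์˜ˆ์ธก๊ณผ ์ •๋‹ต์ด ๊ณต์œ ํ•˜๋Š” ํ† ํฐ ์ˆ˜๋กœ ์ •๋ฐ€๋„/์žฌํ˜„์œจ ๊ณ„์‚ฐ
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)


pred = "176 billion parameters"
ref = "176 billion"
print(exact_match(pred, ref))  # 0.0
print(round(f1(pred, ref), 2))  # 0.8
```

๋ถ€๋ถ„์ ์œผ๋กœ๋งŒ ๋งž์€ ๋‹ต๋ณ€์€ EM์—์„œ๋Š” 0์ ์ด์ง€๋งŒ F1์—์„œ๋Š” ๊ฒน์น˜๋Š” ํ† ํฐ๋งŒํผ ์ ์ˆ˜๋ฅผ ๋ฐ›๋Š”๋‹ค๋Š” ์ ์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.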
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# ์˜ค๋””์˜ค ๋ถ„๋ฅ˜[[audio_classification]]

[[open-in-colab]]

<Youtube id="KWwzcmG98Ds"/>

์˜ค๋””์˜ค ๋ถ„๋ฅ˜๋Š” ํ…์ŠคํŠธ ๋ถ„๋ฅ˜์™€ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ ์ž…๋ ฅ ๋ฐ์ดํ„ฐ์— ํด๋ž˜์Šค ๋ ˆ์ด๋ธ”์„ ํ• ๋‹นํ•ฉ๋‹ˆ๋‹ค. ์œ ์ผํ•œ ์ฐจ์ด์ ์€ ํ…์ŠคํŠธ ์ž…๋ ฅ ๋Œ€์‹  ์›์‹œ ์˜ค๋””์˜ค ํŒŒํ˜•์ด ์ž…๋ ฅ์œผ๋กœ ์“ฐ์ธ๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์˜ค๋””์˜ค ๋ถ„๋ฅ˜์˜ ์‹ค์ œ ์ ์šฉ ๋ถ„์•ผ์—๋Š” ํ™”์ž์˜ ์˜๋„ ํŒŒ์•…, ์–ธ์–ด ๋ถ„๋ฅ˜, ์†Œ๋ฆฌ๋กœ ๋™๋ฌผ ์ข…์„ ์‹๋ณ„ํ•˜๋Š” ๊ฒƒ ๋“ฑ์ด ์žˆ์Šต๋‹ˆ๋‹ค.

์ด ๊ฐ€์ด๋“œ์—์„œ ๋‹ค์Œ ๋ฐฉ๋ฒ•์„ ์•Œ์•„๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค:

1. ํ™”์ž์˜ ์˜๋„๋ฅผ ๋ถ„๋ฅ˜ํ•˜๋„๋ก [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ [Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base)๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค.
2. ๋ฏธ์„ธ ์กฐ์ •ํ•œ ๋ชจ๋ธ์„ ์ถ”๋ก ์— ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค.
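๋ชจ๋ธ์— ๋“ค์–ด๊ฐ€๋Š” "์›์‹œ ์˜ค๋””์˜ค ํŒŒํ˜•"์€ ๊ฒฐ๊ตญ ์ดˆ๋‹น ์ƒ˜ํ”Œ๋ง ์†๋„๋งŒํผ์˜ ์‹ค์ˆ˜ ๊ฐ’์„ ๋‹ด์€ 1์ฐจ์› ๋ฐฐ์—ด์ž…๋‹ˆ๋‹ค. ๊ฐ์„ ์žก๊ธฐ ์œ„ํ•ด 8kHz ์‚ฌ์ธํŒŒ 1์ดˆ ๋ถ„๋Ÿ‰์„ numpy๋กœ ๋งŒ๋“ค์–ด ๋ณด๋Š” ๊ฐ„๋‹จํ•œ ์Šค์ผ€์น˜์ž…๋‹ˆ๋‹ค. ๋’ค์—์„œ ๋ถˆ๋Ÿฌ์˜ฌ MInDS-14์˜ `audio.array`๋„ ์ด์™€ ๊ฐ™์€ ํ˜•ํƒœ์˜ `float32` ๋ฐฐ์—ด์ด๋ฉฐ, 440Hz๋ผ๋Š” ์ฃผํŒŒ์ˆ˜๋Š” ์ž„์˜๋กœ ๊ณ ๋ฅธ ์˜ˆ์‹œ ๊ฐ’์ž…๋‹ˆ๋‹ค:

```py
import numpy as np

sampling_rate = 8000  # MInDS-14 ์›๋ณธ ์ƒ˜ํ”Œ๋ง ์†๋„(8kHz)
duration = 1.0        # 1์ดˆ ๋ถ„๋Ÿ‰
t = np.linspace(0.0, duration, int(sampling_rate * duration), endpoint=False)

# 440Hz ์‚ฌ์ธํŒŒ: ์‹ค์ œ ์Œ์„ฑ ๋Œ€์‹  ์“ฐ๋Š” ๋‹จ์ˆœํ•œ ์˜ˆ์‹œ ์‹ ํ˜ธ
waveform = np.sin(2 * np.pi * 440.0 * t).astype(np.float32)

print(waveform.shape)  # (8000,) — ์ƒ˜ํ”Œ๋ง ์†๋„ × ๊ธธ์ด(์ดˆ)๋งŒํผ์˜ 1์ฐจ์› ๋ฐฐ์—ด
print(waveform.dtype)  # float32
```

์ฆ‰, ํ…์ŠคํŠธ ๋ถ„๋ฅ˜์—์„œ ํ† ํฐ ID ์‹œํ€€์Šค๊ฐ€ ํ•˜๋˜ ์—ญํ• ์„ ์—ฌ๊ธฐ์„œ๋Š” ์ด๋Ÿฐ ์‹ค์ˆ˜ ๋ฐฐ์—ด์ด ๋Œ€์‹ ํ•ฉ๋‹ˆ๋‹ค.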
<Tip> ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ ์„ค๋ช…ํ•˜๋Š” ์ž‘์—…์€ ์•„๋ž˜์˜ ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜์—์„œ ์ง€์›๋ฉ๋‹ˆ๋‹ค: <!--This tip is automatically generated by `make fix-copies`, do not fill manually!--> [Audio Spectrogram Transformer](../model_doc/audio-spectrogram-transformer), [Data2VecAudio](../model_doc/data2vec-audio), [Hubert](../model_doc/hubert), [SEW](../model_doc/sew), [SEW-D](../model_doc/sew-d), [UniSpeech](../model_doc/unispeech), [UniSpeechSat](../model_doc/unispeech-sat), [Wav2Vec2](../model_doc/wav2vec2), [Wav2Vec2-Conformer](../model_doc/wav2vec2-conformer), [WavLM](../model_doc/wavlm), [Whisper](../model_doc/whisper) <!--End of the generated tip--> </Tip> ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ๋ชจ๋‘ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install transformers datasets evaluate ``` ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ์ปค๋ฎค๋‹ˆํ‹ฐ์™€ ๊ณต์œ ํ•  ์ˆ˜ ์žˆ๋„๋ก ํ—ˆ๊น…ํŽ˜์ด์Šค ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. ๋ฉ”์‹œ์ง€๊ฐ€ ํ‘œ์‹œ๋˜๋ฉด ํ† ํฐ์„ ์ž…๋ ฅํ•˜์—ฌ ๋กœ๊ทธ์ธํ•ฉ๋‹ˆ๋‹ค: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## MInDS-14 ๋ฐ์ดํ„ฐ์…‹ ๋ถˆ๋Ÿฌ์˜ค๊ธฐ[[load_minds_14_dataset]] ๋จผ์ € ๐Ÿค— Datasets ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ MinDS-14 ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค: ```py >>> from datasets import load_dataset, Audio >>> minds = load_dataset("PolyAI/minds14", name="en-US", split="train") ``` ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ `train` ๋ถ„ํ• ์„ [`~datasets.Dataset.train_test_split`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋” ์ž‘์€ ํ›ˆ๋ จ ๋ฐ ํ…Œ์ŠคํŠธ ์ง‘ํ•ฉ์œผ๋กœ ๋ถ„ํ• ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ๋” ๋งŽ์€ ์‹œ๊ฐ„์„ ์†Œ๋น„ํ•˜๊ธฐ ์ „์— ๋ชจ๋“  ๊ฒƒ์ด ์ž‘๋™ํ•˜๋Š”์ง€ ์‹คํ—˜ํ•˜๊ณ  ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
```py >>> minds = minds.train_test_split(test_size=0.2) ``` ์ด์ œ ๋ฐ์ดํ„ฐ ์ง‘ํ•ฉ์„ ์‚ดํŽด๋ณผ๊ฒŒ์š”: ```py >>> minds DatasetDict({ train: Dataset({ features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'], num_rows: 450 }) test: Dataset({ features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'], num_rows: 113 }) }) ``` ๋ฐ์ดํ„ฐ ์„ธํŠธ์—๋Š” `lang_id` ๋ฐ `english_transcription`๊ณผ ๊ฐ™์€ ์œ ์šฉํ•œ ์ •๋ณด๊ฐ€ ๋งŽ์ด ํฌํ•จ๋˜์–ด ์žˆ์ง€๋งŒ ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” `audio` ๋ฐ `intent_class`์— ์ค‘์ ์„ ๋‘˜ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋‹ค๋ฅธ ์—ด์€ [`~datasets.Dataset.remove_columns`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ œ๊ฑฐํ•ฉ๋‹ˆ๋‹ค: ```py >>> minds = minds.remove_columns(["path", "transcription", "english_transcription", "lang_id"]) ``` ์˜ˆ์‹œ๋ฅผ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> minds["train"][0] {'audio': {'array': array([ 0. , 0. , 0. , ..., -0.00048828, -0.00024414, -0.00024414], dtype=float32), 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602b9a5fbb1e6d0fbce91f52.wav', 'sampling_rate': 8000}, 'intent_class': 2} ``` ๋‘ ๊ฐœ์˜ ํ•„๋“œ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค: - `audio`: ์˜ค๋””์˜ค ํŒŒ์ผ์„ ๊ฐ€์ ธ์˜ค๊ณ  ๋ฆฌ์ƒ˜ํ”Œ๋งํ•˜๊ธฐ ์œ„ํ•ด ํ˜ธ์ถœํ•ด์•ผ ํ•˜๋Š” ์Œ์„ฑ ์‹ ํ˜ธ์˜ 1์ฐจ์› `๋ฐฐ์—ด`์ž…๋‹ˆ๋‹ค. - `intent_class`: ํ™”์ž์˜ ์˜๋„์— ๋Œ€ํ•œ ํด๋ž˜์Šค ID๋ฅผ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. ๋ชจ๋ธ์ด ๋ ˆ์ด๋ธ” ID์—์„œ ๋ ˆ์ด๋ธ” ์ด๋ฆ„์„ ์‰ฝ๊ฒŒ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ๋„๋ก ๋ ˆ์ด๋ธ” ์ด๋ฆ„์„ ์ •์ˆ˜๋กœ ๋งคํ•‘ํ•˜๋Š” ์‚ฌ์ „์„ ๋งŒ๋“ค๊ฑฐ๋‚˜ ๊ทธ ๋ฐ˜๋Œ€๋กœ ๋งคํ•‘ํ•˜๋Š” ์‚ฌ์ „์„ ๋งŒ๋“ญ๋‹ˆ๋‹ค: ```py >>> labels = minds["train"].features["intent_class"].names >>> label2id, id2label = dict(), dict() >>> for i, label in enumerate(labels): ... label2id[label] = str(i) ... 
id2label[str(i)] = label
```

์ด์ œ ๋ ˆ์ด๋ธ” ID๋ฅผ ๋ ˆ์ด๋ธ” ์ด๋ฆ„์œผ๋กœ ๋ณ€ํ™˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค:

```py
>>> id2label[str(2)]
'app_error'
```

## ์ „์ฒ˜๋ฆฌ[[preprocess]]

๋‹ค์Œ ๋‹จ๊ณ„๋Š” ์˜ค๋””์˜ค ์‹ ํ˜ธ๋ฅผ ์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด Wav2Vec2 ํŠน์ง• ์ถ”์ถœ๊ธฐ๋ฅผ ๊ฐ€์ ธ์˜ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค:

```py
>>> from transformers import AutoFeatureExtractor

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
```

MinDS-14 ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์ƒ˜ํ”Œ๋ง ์†๋„๋Š” 8kHz์ด๋ฏ€๋กœ(์ด ์ •๋ณด๋Š” [๋ฐ์ดํ„ฐ ์„ธํŠธ ์นด๋“œ](https://huggingface.co/datasets/PolyAI/minds14)์—์„œ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค), ์‚ฌ์ „ ํ›ˆ๋ จ๋œ Wav2Vec2 ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋ ค๋ฉด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ 16kHz๋กœ ๋ฆฌ์ƒ˜ํ”Œ๋งํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> minds = minds.cast_column("audio", Audio(sampling_rate=16_000))
>>> minds["train"][0]
{'audio': {'array': array([ 2.2098757e-05,  4.6582241e-05, -2.2803260e-05, ...,
        -2.8419291e-04, -2.3305941e-04, -1.1425107e-04], dtype=float32),
  'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602b9a5fbb1e6d0fbce91f52.wav',
  'sampling_rate': 16000},
 'intent_class': 2}
```

์ด์ œ ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค:

1. `audio` ์—ด์„ ํ˜ธ์ถœํ•˜์—ฌ ์˜ค๋””์˜ค ํŒŒ์ผ์„ ๊ฐ€์ ธ์˜ค๊ณ , ํ•„์š”ํ•œ ๊ฒฝ์šฐ ๋ฆฌ์ƒ˜ํ”Œ๋งํ•ฉ๋‹ˆ๋‹ค.
2. ์˜ค๋””์˜ค ํŒŒ์ผ์˜ ์ƒ˜ํ”Œ๋ง ์†๋„๊ฐ€ ๋ชจ๋ธ์— ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ์˜ค๋””์˜ค ๋ฐ์ดํ„ฐ์˜ ์ƒ˜ํ”Œ๋ง ์†๋„์™€ ์ผ์น˜ํ•˜๋Š”์ง€ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค. ์ด ์ •๋ณด๋Š” Wav2Vec2 [๋ชจ๋ธ ์นด๋“œ](https://huggingface.co/facebook/wav2vec2-base)์—์„œ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
3. ๊ธด ์ž…๋ ฅ์ด ์ž˜๋ฆฌ์ง€ ์•Š๊ณ  ์ผ๊ด„ ์ฒ˜๋ฆฌ๋˜๋„๋ก ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด๋ฅผ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค.

```py
>>> def preprocess_function(examples):
...     audio_arrays = [x["array"] for x in examples["audio"]]
...     inputs = feature_extractor(
...         audio_arrays, sampling_rate=feature_extractor.sampling_rate, max_length=16000, truncation=True
...     )
...     return inputs
```

์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์ ์šฉํ•˜๋ ค๋ฉด ๐Ÿค— Datasets [`~datasets.Dataset.map`] ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. `batched=True`๋ฅผ ์„ค์ •ํ•˜์—ฌ ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์—ฌ๋Ÿฌ ์š”์†Œ๋ฅผ ํ•œ ๋ฒˆ์— ์ฒ˜๋ฆฌํ•˜๋ฉด `map`์˜ ์†๋„๋ฅผ ๋†’์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•„์š”ํ•˜์ง€ ์•Š์€ ์—ด์„ ์ œ๊ฑฐํ•˜๊ณ , `intent_class`์˜ ์ด๋ฆ„์„ ๋ชจ๋ธ์ด ์˜ˆ์ƒํ•˜๋Š” ์ด๋ฆ„์ธ `label`๋กœ ๋ณ€๊ฒฝํ•ฉ๋‹ˆ๋‹ค:

```py
>>> encoded_minds = minds.map(preprocess_function, remove_columns="audio", batched=True)
>>> encoded_minds = encoded_minds.rename_column("intent_class", "label")
```

## ํ‰๊ฐ€ํ•˜๊ธฐ[[evaluate]]

ํ›ˆ๋ จ ์ค‘์— ๋ฉ”ํŠธ๋ฆญ์„ ํฌํ•จํ•˜๋ฉด ๋ชจ๋ธ์˜ ์„ฑ๋Šฅ์„ ํ‰๊ฐ€ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์Šต๋‹ˆ๋‹ค. ๐Ÿค— [Evaluate](https://huggingface.co/docs/evaluate/index) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ‰๊ฐ€ ๋ฐฉ๋ฒ•์„ ๋น ๋ฅด๊ฒŒ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ์ž‘์—…์—์„œ๋Š” [accuracy(์ •ํ™•๋„)](https://huggingface.co/spaces/evaluate-metric/accuracy) ๋ฉ”ํŠธ๋ฆญ์„ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค(๋ฉ”ํŠธ๋ฆญ์„ ๊ฐ€์ ธ์˜ค๊ณ  ๊ณ„์‚ฐํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ ๐Ÿค— Evaluate [๋น ๋ฅธ ๋‘˜๋Ÿฌ๋ณด๊ธฐ](https://huggingface.co/docs/evaluate/a_quick_tour)๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”):

```py
>>> import evaluate

>>> accuracy = evaluate.load("accuracy")
```

๊ทธ๋Ÿฐ ๋‹ค์Œ ์˜ˆ์ธก๊ณผ ๋ ˆ์ด๋ธ”์„ [`~evaluate.EvaluationModule.compute`]์— ์ „๋‹ฌํ•˜์—ฌ ์ •ํ™•๋„๋ฅผ ๊ณ„์‚ฐํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค:

```py
>>> import numpy as np

>>> def compute_metrics(eval_pred):
...     predictions = np.argmax(eval_pred.predictions, axis=1)
...     return accuracy.compute(predictions=predictions, references=eval_pred.label_ids)
```

์ด์ œ `compute_metrics` ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์œผ๋ฉฐ, ํ›ˆ๋ จ์„ ์„ค์ •ํ•  ๋•Œ ์ด ํ•จ์ˆ˜๋ฅผ ๋‹ค์‹œ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค.

## ํ›ˆ๋ จ[[train]]

<frameworkcontent>
<pt>
<Tip>

[`Trainer`]๋กœ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐ ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ [์—ฌ๊ธฐ](../training#train-with-pytorch-trainer)๋ฅผ ์‚ดํŽด๋ณด์„ธ์š”!
</Tip> ์ด์ œ ๋ชจ๋ธ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! [`AutoModelForAudioClassification`]์„ ์ด์šฉํ•ด์„œ Wav2Vec2๋ฅผ ๋ถˆ๋Ÿฌ์˜ต๋‹ˆ๋‹ค. ์˜ˆ์ƒ๋˜๋Š” ๋ ˆ์ด๋ธ” ์ˆ˜์™€ ๋ ˆ์ด๋ธ” ๋งคํ•‘์„ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForAudioClassification, TrainingArguments, Trainer >>> num_labels = len(id2label) >>> model = AutoModelForAudioClassification.from_pretrained( ... "facebook/wav2vec2-base", num_labels=num_labels, label2id=label2id, id2label=id2label ... ) ``` ์ด์ œ ์„ธ ๋‹จ๊ณ„๋งŒ ๋‚จ์•˜์Šต๋‹ˆ๋‹ค: 1. ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ [`TrainingArguments`]์— ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. ์œ ์ผํ•œ ํ•„์ˆ˜ ๋งค๊ฐœ๋ณ€์ˆ˜๋Š” ๋ชจ๋ธ์„ ์ €์žฅํ•  ์œ„์น˜๋ฅผ ์ง€์ •ํ•˜๋Š” `output_dir`์ž…๋‹ˆ๋‹ค. `push_to_hub = True`๋ฅผ ์„ค์ •ํ•˜์—ฌ ์ด ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ๋กœ ํ‘ธ์‹œํ•ฉ๋‹ˆ๋‹ค(๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๋ ค๋ฉด ํ—ˆ๊น… ํŽ˜์ด์Šค์— ๋กœ๊ทธ์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค). ๊ฐ ์—ํญ์ด ๋๋‚  ๋•Œ๋งˆ๋‹ค [`Trainer`]๊ฐ€ ์ •ํ™•๋„๋ฅผ ํ‰๊ฐ€ํ•˜๊ณ  ํ›ˆ๋ จ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์ €์žฅํ•ฉ๋‹ˆ๋‹ค. 2. ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ ์„ธํŠธ, ํ† ํฌ๋‚˜์ด์ €, ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ, `compute_metrics` ํ•จ์ˆ˜์™€ ํ•จ๊ป˜ ํ›ˆ๋ จ ์ธ์ž๋ฅผ [`Trainer`]์— ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. 3. [`~Trainer.train`]์„ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค. ```py >>> training_args = TrainingArguments( ... output_dir="my_awesome_mind_model", ... evaluation_strategy="epoch", ... save_strategy="epoch", ... learning_rate=3e-5, ... per_device_train_batch_size=32, ... gradient_accumulation_steps=4, ... per_device_eval_batch_size=32, ... num_train_epochs=10, ... warmup_ratio=0.1, ... logging_steps=10, ... load_best_model_at_end=True, ... metric_for_best_model="accuracy", ... push_to_hub=True, ... ) >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=encoded_minds["train"], ... eval_dataset=encoded_minds["test"], ... tokenizer=feature_extractor, ... compute_metrics=compute_metrics, ... 
)

>>> trainer.train()
```

ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด ๋ชจ๋“  ์‚ฌ๋žŒ์ด ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก [`~transformers.Trainer.push_to_hub`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ์— ๊ณต์œ ํ•˜์„ธ์š”:

```py
>>> trainer.push_to_hub()
```

</pt>
</frameworkcontent>

<Tip>

์˜ค๋””์˜ค ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•ด ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋” ์ž์„ธํ•œ ์˜ˆ์ œ๋Š” ํ•ด๋‹น [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/audio_classification.ipynb)์„ ์ฐธ์กฐํ•˜์„ธ์š”.

</Tip>

## ์ถ”๋ก [[inference]]

์ด์ œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ–ˆ์œผ๋‹ˆ ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค!

์ถ”๋ก ์„ ์‹คํ–‰ํ•  ์˜ค๋””์˜ค ํŒŒ์ผ์„ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. ํ•„์š”ํ•œ ๊ฒฝ์šฐ ์˜ค๋””์˜ค ํŒŒ์ผ์˜ ์ƒ˜ํ”Œ๋ง ์†๋„๋ฅผ ๋ชจ๋ธ์˜ ์ƒ˜ํ”Œ๋ง ์†๋„์™€ ์ผ์น˜ํ•˜๋„๋ก ๋ฆฌ์ƒ˜ํ”Œ๋งํ•˜๋Š” ๊ฒƒ์„ ์žŠ์ง€ ๋งˆ์„ธ์š”!

```py
>>> from datasets import load_dataset, Audio

>>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")
>>> dataset = dataset.cast_column("audio", Audio(sampling_rate=16000))
>>> sampling_rate = dataset.features["audio"].sampling_rate
>>> audio_file = dataset[0]["audio"]["path"]
```

์ถ”๋ก ์„ ์œ„ํ•ด ๋ฏธ์„ธ ์กฐ์ •ํ•œ ๋ชจ๋ธ์„ ์‹œํ—˜ํ•ด ๋ณด๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ [`pipeline`]์—์„œ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค.
๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜์—ฌ ์˜ค๋””์˜ค ๋ถ„๋ฅ˜๋ฅผ ์œ„ํ•œ `pipeline`์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ณ  ์˜ค๋””์˜ค ํŒŒ์ผ์„ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import pipeline >>> classifier = pipeline("audio-classification", model="stevhliu/my_awesome_minds_model") >>> classifier(audio_file) [ {'score': 0.09766869246959686, 'label': 'cash_deposit'}, {'score': 0.07998877018690109, 'label': 'app_error'}, {'score': 0.0781070664525032, 'label': 'joint_account'}, {'score': 0.07667109370231628, 'label': 'pay_bill'}, {'score': 0.0755252093076706, 'label': 'balance'} ] ``` ์›ํ•˜๋Š” ๊ฒฝ์šฐ `pipeline`์˜ ๊ฒฐ๊ณผ๋ฅผ ์ˆ˜๋™์œผ๋กœ ๋ณต์ œํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: <frameworkcontent> <pt> ํŠน์ง• ์ถ”์ถœ๊ธฐ๋ฅผ ๊ฐ€์ ธ์™€์„œ ์˜ค๋””์˜ค ํŒŒ์ผ์„ ์ „์ฒ˜๋ฆฌํ•˜๊ณ  `์ž…๋ ฅ`์„ PyTorch ํ…์„œ๋กœ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoFeatureExtractor >>> feature_extractor = AutoFeatureExtractor.from_pretrained("stevhliu/my_awesome_minds_model") >>> inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt") ``` ๋ชจ๋ธ์— ์ž…๋ ฅ์„ ์ „๋‹ฌํ•˜๊ณ  ๋กœ์ง“์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForAudioClassification >>> model = AutoModelForAudioClassification.from_pretrained("stevhliu/my_awesome_minds_model") >>> with torch.no_grad(): ... logits = model(**inputs).logits ``` ํ™•๋ฅ ์ด ๊ฐ€์žฅ ๋†’์€ ํด๋ž˜์Šค๋ฅผ ๊ฐ€์ ธ์˜จ ๋‹ค์Œ ๋ชจ๋ธ์˜ `id2label` ๋งคํ•‘์„ ์‚ฌ์šฉํ•˜์—ฌ ์ด๋ฅผ ๋ ˆ์ด๋ธ”๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> import torch >>> predicted_class_ids = torch.argmax(logits).item() >>> predicted_label = model.config.id2label[predicted_class_ids] >>> predicted_label 'cash_deposit' ``` </pt> </frameworkcontent>
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ์ œ๋กœ์ƒท(zero-shot) ๊ฐ์ฒด ํƒ์ง€[[zeroshot-object-detection]] [[open-in-colab]] ์ผ๋ฐ˜์ ์œผ๋กœ [๊ฐ์ฒด ํƒ์ง€](object_detection)์— ์‚ฌ์šฉ๋˜๋Š” ๋ชจ๋ธ์„ ํ•™์Šตํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” ๋ ˆ์ด๋ธ”์ด ์ง€์ •๋œ ์ด๋ฏธ์ง€ ๋ฐ์ดํ„ฐ ์„ธํŠธ๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ํ•™์Šต ๋ฐ์ดํ„ฐ์— ์กด์žฌํ•˜๋Š” ํด๋ž˜์Šค(๋ ˆ์ด๋ธ”)๋งŒ ํƒ์ง€ํ•  ์ˆ˜ ์žˆ๋‹ค๋Š” ํ•œ๊ณ„์ ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค๋ฅธ ๋ฐฉ์‹์„ ์‚ฌ์šฉํ•˜๋Š” [OWL-ViT](../model_doc/owlvit) ๋ชจ๋ธ๋กœ ์ œ๋กœ์ƒท ๊ฐ์ฒด ํƒ์ง€๊ฐ€ ๊ฐ€๋Šฅํ•ฉ๋‹ˆ๋‹ค. OWL-ViT๋Š” ๊ฐœ๋ฐฉํ˜• ์–ดํœ˜(open-vocabulary) ๊ฐ์ฒด ํƒ์ง€๊ธฐ์ž…๋‹ˆ๋‹ค. ์ฆ‰, ๋ ˆ์ด๋ธ”์ด ์ง€์ •๋œ ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ๋ฏธ์„ธ ์กฐ์ •ํ•˜์ง€ ์•Š๊ณ  ์ž์œ  ํ…์ŠคํŠธ ์ฟผ๋ฆฌ๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ์ด๋ฏธ์ง€์—์„œ ๊ฐ์ฒด๋ฅผ ํƒ์ง€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. OWL-ViT ๋ชจ๋ธ์€ ๋ฉ€ํ‹ฐ ๋ชจ๋‹ฌ ํ‘œํ˜„์„ ํ™œ์šฉํ•ด ๊ฐœ๋ฐฉํ˜• ์–ดํœ˜ ํƒ์ง€(open-vocabulary detection)๋ฅผ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค. [CLIP](../model_doc/clip) ๋ชจ๋ธ์— ๊ฒฝ๋Ÿ‰ํ™”(lightweight)๋œ ๊ฐ์ฒด ๋ถ„๋ฅ˜์™€ ์ง€์—ญํ™”(localization) ํ—ค๋“œ๋ฅผ ๊ฒฐํ•ฉํ•ฉ๋‹ˆ๋‹ค. ๊ฐœ๋ฐฉํ˜• ์–ดํœ˜ ํƒ์ง€๋Š” CLIP์˜ ํ…์ŠคํŠธ ์ธ์ฝ”๋”๋กœ free-text ์ฟผ๋ฆฌ๋ฅผ ์ž„๋ฒ ๋”ฉํ•˜๊ณ , ๊ฐ์ฒด ๋ถ„๋ฅ˜์™€ ์ง€์—ญํ™” ํ—ค๋“œ์˜ ์ž…๋ ฅ์œผ๋กœ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. 
์ด๋ฏธ์ง€์™€ ํ•ด๋‹น ํ…์ŠคํŠธ ์„ค๋ช…์„ ์—ฐ๊ฒฐํ•˜๋ฉด ViT๊ฐ€ ์ด๋ฏธ์ง€ ํŒจ์น˜(image patches)๋ฅผ ์ž…๋ ฅ์œผ๋กœ ์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค. OWL-ViT ๋ชจ๋ธ์˜ ์ €์ž๋“ค์€ CLIP ๋ชจ๋ธ์„ ์ฒ˜์Œ๋ถ€ํ„ฐ(from scratch) ํ•™์Šตํ•œ ํ›„์—, bipartite matching loss๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ‘œ์ค€ ๊ฐ์ฒด ์ธ์‹ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋กœ OWL-ViT ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ–ˆ์Šต๋‹ˆ๋‹ค.

์ด ์ ‘๊ทผ ๋ฐฉ์‹์„ ์‚ฌ์šฉํ•˜๋ฉด ๋ชจ๋ธ์€ ๋ ˆ์ด๋ธ”์ด ์ง€์ •๋œ ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ๋Œ€ํ•œ ์‚ฌ์ „ ํ•™์Šต ์—†์ด๋„ ํ…์ŠคํŠธ ์„ค๋ช…์„ ๊ธฐ๋ฐ˜์œผ๋กœ ๊ฐ์ฒด๋ฅผ ํƒ์ง€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

์ด๋ฒˆ ๊ฐ€์ด๋“œ์—์„œ๋Š” OWL-ViT ๋ชจ๋ธ์˜ ์‚ฌ์šฉ๋ฒ•์„ ๋‹ค๋ฃฐ ๊ฒƒ์ž…๋‹ˆ๋‹ค:

- ํ…์ŠคํŠธ ํ”„๋กฌํ”„ํŠธ ๊ธฐ๋ฐ˜ ๊ฐ์ฒด ํƒ์ง€
- ์ผ๊ด„ ๊ฐ์ฒด ํƒ์ง€
- ์ด๋ฏธ์ง€ ๊ฐ€์ด๋“œ ๊ฐ์ฒด ํƒ์ง€

์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ๋ชจ๋‘ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”:

```bash
pip install -q transformers
```

## ์ œ๋กœ์ƒท(zero-shot) ๊ฐ์ฒด ํƒ์ง€ ํŒŒ์ดํ”„๋ผ์ธ[[zeroshot-object-detection-pipeline]]

[`pipeline`]์„ ํ™œ์šฉํ•˜๋ฉด ๊ฐ€์žฅ ๊ฐ„๋‹จํ•˜๊ฒŒ OWL-ViT ๋ชจ๋ธ์„ ์ถ”๋ก ํ•ด๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. [Hugging Face Hub์— ์—…๋กœ๋“œ๋œ ์ฒดํฌํฌ์ธํŠธ](https://huggingface.co/models?pipeline_tag=zero-shot-image-classification&sort=downloads)์—์„œ ์ œ๋กœ์ƒท(zero-shot) ๊ฐ์ฒด ํƒ์ง€์šฉ ํŒŒ์ดํ”„๋ผ์ธ์„ ์ธ์Šคํ„ด์Šคํ™”ํ•ฉ๋‹ˆ๋‹ค:

```python
>>> from transformers import pipeline

>>> checkpoint = "google/owlvit-base-patch32"
>>> detector = pipeline(model=checkpoint, task="zero-shot-object-detection")
```

๋‹ค์Œ์œผ๋กœ, ๊ฐ์ฒด๋ฅผ ํƒ์ง€ํ•˜๊ณ  ์‹ถ์€ ์ด๋ฏธ์ง€๋ฅผ ์„ ํƒํ•˜์„ธ์š”. ์—ฌ๊ธฐ์„œ๋Š” [NASA](https://www.nasa.gov/multimedia/imagegallery/index.html) Great Images ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์ผ๋ถ€์ธ ์šฐ์ฃผ๋น„ํ–‰์‚ฌ ์—์ผ๋ฆฐ ์ฝœ๋ฆฐ์Šค(Eileen Collins) ์‚ฌ์ง„์„ ์‚ฌ์šฉํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค.
```py >>> import skimage >>> import numpy as np >>> from PIL import Image >>> image = skimage.data.astronaut() >>> image = Image.fromarray(np.uint8(image)).convert("RGB") >>> image ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_1.png" alt="Astronaut Eileen Collins"/> </div> ์ด๋ฏธ์ง€์™€ ํ•ด๋‹น ์ด๋ฏธ์ง€์˜ ํ›„๋ณด ๋ ˆ์ด๋ธ”์„ ํŒŒ์ดํ”„๋ผ์ธ์œผ๋กœ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ๋Š” ์ด๋ฏธ์ง€๋ฅผ ์ง์ ‘ ์ „๋‹ฌํ•˜์ง€๋งŒ, ์ปดํ“จํ„ฐ์— ์ €์žฅ๋œ ์ด๋ฏธ์ง€์˜ ๊ฒฝ๋กœ๋‚˜ url๋กœ ์ „๋‹ฌํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. candidate_labels๋Š” ์ด ์˜ˆ์‹œ์ฒ˜๋Ÿผ ๊ฐ„๋‹จํ•œ ๋‹จ์–ด์ผ ์ˆ˜๋„ ์žˆ๊ณ  ์ข€ ๋” ์„ค๋ช…์ ์ธ ๋‹จ์–ด์ผ ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ, ์ด๋ฏธ์ง€๋ฅผ ๊ฒ€์ƒ‰(query)ํ•˜๋ ค๋Š” ๋ชจ๋“  ํ•ญ๋ชฉ์— ๋Œ€ํ•œ ํ…์ŠคํŠธ ์„ค๋ช…๋„ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. ```py >>> predictions = detector( ... image, ... candidate_labels=["human face", "rocket", "nasa badge", "star-spangled banner"], ... ) >>> predictions [{'score': 0.3571370542049408, 'label': 'human face', 'box': {'xmin': 180, 'ymin': 71, 'xmax': 271, 'ymax': 178}}, {'score': 0.28099656105041504, 'label': 'nasa badge', 'box': {'xmin': 129, 'ymin': 348, 'xmax': 206, 'ymax': 427}}, {'score': 0.2110239565372467, 'label': 'rocket', 'box': {'xmin': 350, 'ymin': -1, 'xmax': 468, 'ymax': 288}}, {'score': 0.13790413737297058, 'label': 'star-spangled banner', 'box': {'xmin': 1, 'ymin': 1, 'xmax': 105, 'ymax': 509}}, {'score': 0.11950037628412247, 'label': 'nasa badge', 'box': {'xmin': 277, 'ymin': 338, 'xmax': 327, 'ymax': 380}}, {'score': 0.10649408400058746, 'label': 'rocket', 'box': {'xmin': 358, 'ymin': 64, 'xmax': 424, 'ymax': 280}}] ``` ์ด์ œ ์˜ˆ์ธก๊ฐ’์„ ์‹œ๊ฐํ™”ํ•ด๋ด…์‹œ๋‹ค: ```py >>> from PIL import ImageDraw >>> draw = ImageDraw.Draw(image) >>> for prediction in predictions: ... box = prediction["box"] ... label = prediction["label"] ... score = prediction["score"] ... xmin, ymin, xmax, ymax = box.values() ... 
draw.rectangle((xmin, ymin, xmax, ymax), outline="red", width=1) ... draw.text((xmin, ymin), f"{label}: {round(score,2)}", fill="white") >>> image ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_2.png" alt="Visualized predictions on NASA image"/> </div> ## ํ…์ŠคํŠธ ํ”„๋กฌํ”„ํŠธ ๊ธฐ๋ฐ˜ ๊ฐ์ฒด ํƒ์ง€[[textprompted-zeroshot-object-detection-by-hand]] ์ œ๋กœ์ƒท ๊ฐ์ฒด ํƒ์ง€ ํŒŒ์ดํ”„๋ผ์ธ ์‚ฌ์šฉ๋ฒ•์— ๋Œ€ํ•ด ์‚ดํŽด๋ณด์•˜์œผ๋‹ˆ, ์ด์ œ ๋™์ผํ•œ ๊ฒฐ๊ณผ๋ฅผ ๋ณต์ œํ•ด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. [Hugging Face Hub์— ์—…๋กœ๋“œ๋œ ์ฒดํฌํฌ์ธํŠธ](https://huggingface.co/models?other=owlvit)์—์„œ ๊ด€๋ จ ๋ชจ๋ธ๊ณผ ํ”„๋กœ์„ธ์„œ๋ฅผ ๊ฐ€์ ธ์˜ค๋Š” ๊ฒƒ์œผ๋กœ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ๋Š” ์ด์ „๊ณผ ๋™์ผํ•œ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection >>> model = AutoModelForZeroShotObjectDetection.from_pretrained(checkpoint) >>> processor = AutoProcessor.from_pretrained(checkpoint) ``` ๋‹ค๋ฅธ ์ด๋ฏธ์ง€๋ฅผ ์‚ฌ์šฉํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> import requests >>> url = "https://unsplash.com/photos/oj0zeY2Ltk4/download?ixid=MnwxMjA3fDB8MXxzZWFyY2h8MTR8fHBpY25pY3xlbnwwfHx8fDE2Nzc0OTE1NDk&force=true&w=640" >>> im = Image.open(requests.get(url, stream=True).raw) >>> im ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_3.png" alt="Beach photo"/> </div> ํ”„๋กœ์„ธ์„œ๋ฅผ ์‚ฌ์šฉํ•ด ๋ชจ๋ธ์˜ ์ž…๋ ฅ์„ ์ค€๋น„ํ•ฉ๋‹ˆ๋‹ค. ํ”„๋กœ์„ธ์„œ๋Š” ๋ชจ๋ธ์˜ ์ž…๋ ฅ์œผ๋กœ ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด ์ด๋ฏธ์ง€ ํฌ๊ธฐ๋ฅผ ๋ณ€ํ™˜ํ•˜๊ณ  ์ •๊ทœํ™”ํ•˜๋Š” ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ์™€ ํ…์ŠคํŠธ ์ž…๋ ฅ์„ ์ฒ˜๋ฆฌํ•˜๋Š” [`CLIPTokenizer`]๋กœ ๊ตฌ์„ฑ๋ฉ๋‹ˆ๋‹ค. 
```py >>> text_queries = ["hat", "book", "sunglasses", "camera"] >>> inputs = processor(text=text_queries, images=im, return_tensors="pt") ``` ๋ชจ๋ธ์— ์ž…๋ ฅ์„ ์ „๋‹ฌํ•˜๊ณ  ๊ฒฐ๊ณผ๋ฅผ ํ›„์ฒ˜๋ฆฌ ๋ฐ ์‹œ๊ฐํ™”ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๊ฐ€ ๋ชจ๋ธ์— ์ด๋ฏธ์ง€๋ฅผ ์ž…๋ ฅํ•˜๊ธฐ ์ „์— ์ด๋ฏธ์ง€ ํฌ๊ธฐ๋ฅผ ์กฐ์ •ํ–ˆ๊ธฐ ๋•Œ๋ฌธ์—, [`~OwlViTImageProcessor.post_process_object_detection`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ด ์˜ˆ์ธก๊ฐ’์˜ ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค(bounding box)๊ฐ€ ์›๋ณธ ์ด๋ฏธ์ง€์˜ ์ขŒํ‘œ์™€ ์ƒ๋Œ€์ ์œผ๋กœ ๋™์ผํ•œ์ง€ ํ™•์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```py >>> import torch >>> with torch.no_grad(): ... outputs = model(**inputs) ... target_sizes = torch.tensor([im.size[::-1]]) ... results = processor.post_process_object_detection(outputs, threshold=0.1, target_sizes=target_sizes)[0] >>> draw = ImageDraw.Draw(im) >>> scores = results["scores"].tolist() >>> labels = results["labels"].tolist() >>> boxes = results["boxes"].tolist() >>> for box, score, label in zip(boxes, scores, labels): ... xmin, ymin, xmax, ymax = box ... draw.rectangle((xmin, ymin, xmax, ymax), outline="red", width=1) ... draw.text((xmin, ymin), f"{text_queries[label]}: {round(score,2)}", fill="white") >>> im ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_4.png" alt="Beach photo with detected objects"/> </div> ## ์ผ๊ด„ ์ฒ˜๋ฆฌ[[batch-processing]] ์—ฌ๋Ÿฌ ์ด๋ฏธ์ง€์™€ ํ…์ŠคํŠธ ์ฟผ๋ฆฌ๋ฅผ ์ „๋‹ฌํ•˜์—ฌ ์—ฌ๋Ÿฌ ์ด๋ฏธ์ง€์—์„œ ์„œ๋กœ ๋‹ค๋ฅธ(๋˜๋Š” ๋™์ผํ•œ) ๊ฐ์ฒด๋ฅผ ๊ฒ€์ƒ‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ผ๊ด„ ์ฒ˜๋ฆฌ๋ฅผ ์œ„ํ•ด์„œ ํ…์ŠคํŠธ ์ฟผ๋ฆฌ๋Š” ์ด์ค‘ ๋ฆฌ์ŠคํŠธ๋กœ, ์ด๋ฏธ์ง€๋Š” PIL ์ด๋ฏธ์ง€, PyTorch ํ…์„œ, ๋˜๋Š” NumPy ๋ฐฐ์—ด๋กœ ์ด๋ฃจ์–ด์ง„ ๋ฆฌ์ŠคํŠธ๋กœ ํ”„๋กœ์„ธ์„œ์— ์ „๋‹ฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```py >>> images = [image, im] >>> text_queries = [ ... ["human face", "rocket", "nasa badge", "star-spangled banner"], ... ["hat", "book", "sunglasses", "camera"], ... 
] >>> inputs = processor(text=text_queries, images=images, return_tensors="pt") ``` ์ด์ „์—๋Š” ํ›„์ฒ˜๋ฆฌ๋ฅผ ์œ„ํ•ด ๋‹จ์ผ ์ด๋ฏธ์ง€์˜ ํฌ๊ธฐ๋ฅผ ํ…์„œ๋กœ ์ „๋‹ฌํ–ˆ์ง€๋งŒ, ํŠœํ”Œ์„ ์ „๋‹ฌํ•  ์ˆ˜ ์žˆ๊ณ , ์—ฌ๋Ÿฌ ์ด๋ฏธ์ง€๋ฅผ ์ฒ˜๋ฆฌํ•˜๋Š” ๊ฒฝ์šฐ์—๋Š” ํŠœํ”Œ๋กœ ์ด๋ฃจ์–ด์ง„ ๋ฆฌ์ŠคํŠธ๋ฅผ ์ „๋‹ฌํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜ ๋‘ ์˜ˆ์ œ์— ๋Œ€ํ•œ ์˜ˆ์ธก์„ ์ƒ์„ฑํ•˜๊ณ , ๋‘ ๋ฒˆ์งธ ์ด๋ฏธ์ง€(`image_idx = 1`)๋ฅผ ์‹œ๊ฐํ™”ํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ```py >>> with torch.no_grad(): ... outputs = model(**inputs) ... target_sizes = [x.size[::-1] for x in images] ... results = processor.post_process_object_detection(outputs, threshold=0.1, target_sizes=target_sizes) >>> image_idx = 1 >>> draw = ImageDraw.Draw(images[image_idx]) >>> scores = results[image_idx]["scores"].tolist() >>> labels = results[image_idx]["labels"].tolist() >>> boxes = results[image_idx]["boxes"].tolist() >>> for box, score, label in zip(boxes, scores, labels): ... xmin, ymin, xmax, ymax = box ... draw.rectangle((xmin, ymin, xmax, ymax), outline="red", width=1) ... draw.text((xmin, ymin), f"{text_queries[image_idx][label]}: {round(score,2)}", fill="white") >>> images[image_idx] ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_4.png" alt="Beach photo with detected objects"/> </div> ## ์ด๋ฏธ์ง€ ๊ฐ€์ด๋“œ ๊ฐ์ฒด ํƒ์ง€[[imageguided-object-detection]] ํ…์ŠคํŠธ ์ฟผ๋ฆฌ๋ฅผ ์ด์šฉํ•œ ์ œ๋กœ์ƒท ๊ฐ์ฒด ํƒ์ง€ ์™ธ์—๋„ OWL-ViT ๋ชจ๋ธ์€ ์ด๋ฏธ์ง€ ๊ฐ€์ด๋“œ ๊ฐ์ฒด ํƒ์ง€ ๊ธฐ๋Šฅ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€๋ฅผ ์ฟผ๋ฆฌ๋กœ ์‚ฌ์šฉํ•ด ๋Œ€์ƒ ์ด๋ฏธ์ง€์—์„œ ์œ ์‚ฌํ•œ ๊ฐ์ฒด๋ฅผ ์ฐพ์„ ์ˆ˜ ์žˆ๋‹ค๋Š” ์˜๋ฏธ์ž…๋‹ˆ๋‹ค. ํ…์ŠคํŠธ ์ฟผ๋ฆฌ์™€ ๋‹ฌ๋ฆฌ ํ•˜๋‚˜์˜ ์˜ˆ์ œ ์ด๋ฏธ์ง€์—์„œ๋งŒ ๊ฐ€๋Šฅํ•ฉ๋‹ˆ๋‹ค. 
์†ŒํŒŒ์— ๊ณ ์–‘์ด ๋‘ ๋งˆ๋ฆฌ๊ฐ€ ์žˆ๋Š” ์ด๋ฏธ์ง€๋ฅผ ๋Œ€์ƒ ์ด๋ฏธ์ง€(target image)๋กœ, ๊ณ ์–‘์ด ํ•œ ๋งˆ๋ฆฌ๊ฐ€ ์žˆ๋Š” ์ด๋ฏธ์ง€๋ฅผ ์ฟผ๋ฆฌ๋กœ ์‚ฌ์šฉํ•ด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image_target = Image.open(requests.get(url, stream=True).raw) >>> query_url = "http://images.cocodataset.org/val2017/000000524280.jpg" >>> query_image = Image.open(requests.get(query_url, stream=True).raw) ``` ๋‹ค์Œ ์ด๋ฏธ์ง€๋ฅผ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> import matplotlib.pyplot as plt >>> fig, ax = plt.subplots(1, 2) >>> ax[0].imshow(image_target) >>> ax[1].imshow(query_image) ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_5.png" alt="Cats"/> </div> ์ „์ฒ˜๋ฆฌ ๋‹จ๊ณ„์—์„œ ํ…์ŠคํŠธ ์ฟผ๋ฆฌ ๋Œ€์‹ ์— `query_images`๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค: ```py >>> inputs = processor(images=image_target, query_images=query_image, return_tensors="pt") ``` ์˜ˆ์ธก์˜ ๊ฒฝ์šฐ, ๋ชจ๋ธ์— ์ž…๋ ฅ์„ ์ „๋‹ฌํ•˜๋Š” ๋Œ€์‹  [`~OwlViTForObjectDetection.image_guided_detection`]์— ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. ๋ ˆ์ด๋ธ”์ด ์—†๋‹ค๋Š” ์ ์„ ์ œ์™ธํ•˜๋ฉด ์ด์ „๊ณผ ๋™์ผํ•ฉ๋‹ˆ๋‹ค. ์ด์ „๊ณผ ๋™์ผํ•˜๊ฒŒ ์ด๋ฏธ์ง€๋ฅผ ์‹œ๊ฐํ™”ํ•ฉ๋‹ˆ๋‹ค. ```py >>> with torch.no_grad(): ... outputs = model.image_guided_detection(**inputs) ... target_sizes = torch.tensor([image_target.size[::-1]]) ... results = processor.post_process_image_guided_detection(outputs=outputs, target_sizes=target_sizes)[0] >>> draw = ImageDraw.Draw(image_target) >>> scores = results["scores"].tolist() >>> boxes = results["boxes"].tolist() >>> for box, score, label in zip(boxes, scores, labels): ... xmin, ymin, xmax, ymax = box ... 
draw.rectangle((xmin, ymin, xmax, ymax), outline="white", width=4) >>> image_target ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/zero-sh-obj-detection_6.png" alt="Cats with bounding boxes"/> </div> OWL-ViT ๋ชจ๋ธ์„ ์ถ”๋ก ํ•˜๊ณ  ์‹ถ๋‹ค๋ฉด ์•„๋ž˜ ๋ฐ๋ชจ๋ฅผ ํ™•์ธํ•˜์„ธ์š”: <iframe src="https://adirik-owl-vit.hf.space" frameborder="0" width="850" height="450" ></iframe>
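์ฐธ๊ณ ๋กœ, ์œ„์—์„œ ์‚ฌ์šฉํ•œ ํ›„์ฒ˜๋ฆฌ ๋ฉ”์†Œ๋“œ๋“ค์ด ์ˆ˜ํ–‰ํ•˜๋Š” ์ขŒํ‘œ ๋ณ€ํ™˜, ์ฆ‰ ๋ชจ๋ธ์ด ์ถœ๋ ฅํ•˜๋Š” ์ •๊ทœํ™”๋œ ์ค‘์‹ฌ ์ขŒํ‘œ ํ˜•์‹ `(cx, cy, w, h)`๋ฅผ ์›๋ณธ ์ด๋ฏธ์ง€ ํ”ฝ์…€ ๋‹จ์œ„์˜ ๋ชจ์„œ๋ฆฌ ์ขŒํ‘œ `(xmin, ymin, xmax, ymax)`๋กœ ๋ฐ”๊พธ๋Š” ๊ณผ์ •์„ ๊ฐœ๋…์ ์œผ๋กœ ์Šค์ผ€์น˜ํ•ด ๋ณด๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค. ์‹ค์ œ ๊ตฌํ˜„์€ ํ…์„œ ๋‹จ์œ„๋กœ ๋™์ž‘ํ•˜๊ณ  ์ ์ˆ˜ ์ž„๊ณ„๊ฐ’ ํ•„ํ„ฐ๋ง ๋“ฑ๋„ ํ•จ๊ป˜ ์ฒ˜๋ฆฌํ•˜๋ฏ€๋กœ, ์•„๋ž˜๋Š” ๋‹จ์ˆœํ™”๋œ ์˜ˆ์‹œ์ผ ๋ฟ์ž…๋‹ˆ๋‹ค:

```python
def center_to_corners(box, image_width, image_height):
    """์ •๊ทœํ™”๋œ (cx, cy, w, h) ๋ฐ•์Šค๋ฅผ ํ”ฝ์…€ ๋‹จ์œ„ (xmin, ymin, xmax, ymax)๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค."""
    cx, cy, w, h = box
    xmin = (cx - w / 2) * image_width
    ymin = (cy - h / 2) * image_height
    xmax = (cx + w / 2) * image_width
    ymax = (cy + h / 2) * image_height
    return [round(v, 1) for v in (xmin, ymin, xmax, ymax)]


# ์˜ˆ: 640x480 ์ด๋ฏธ์ง€ ํ•œ๊ฐ€์šด๋ฐ์— ์žˆ๋Š” ๋ฐ•์Šค
print(center_to_corners((0.5, 0.5, 0.25, 0.5), 640, 480))
# [240.0, 120.0, 400.0, 360.0]
```

`target_sizes`์— ์›๋ณธ ์ด๋ฏธ์ง€ ํฌ๊ธฐ๋ฅผ ๋„˜๊ฒจ์•ผ ํ•˜๋Š” ์ด์œ ๊ฐ€ ๋ฐ”๋กœ ์ด ์Šค์ผ€์ผ ๋ณต์› ๋‹จ๊ณ„ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค.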
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๋ฌธ์„œ ์งˆ์˜ ์‘๋‹ต(Document Question Answering) [[document_question_answering]] [[open-in-colab]] ๋ฌธ์„œ ์‹œ๊ฐ์  ์งˆ์˜ ์‘๋‹ต(Document Visual Question Answering)์ด๋ผ๊ณ ๋„ ํ•˜๋Š” ๋ฌธ์„œ ์งˆ์˜ ์‘๋‹ต(Document Question Answering)์€ ๋ฌธ์„œ ์ด๋ฏธ์ง€์— ๋Œ€ํ•œ ์งˆ๋ฌธ์— ๋‹ต๋ณ€์„ ์ฃผ๋Š” ํƒœ์Šคํฌ์ž…๋‹ˆ๋‹ค. ์ด ํƒœ์Šคํฌ๋ฅผ ์ง€์›ํ•˜๋Š” ๋ชจ๋ธ์˜ ์ž…๋ ฅ์€ ์ผ๋ฐ˜์ ์œผ๋กœ ์ด๋ฏธ์ง€์™€ ์งˆ๋ฌธ์˜ ์กฐํ•ฉ์ด๊ณ , ์ถœ๋ ฅ์€ ์ž์—ฐ์–ด๋กœ ๋œ ๋‹ต๋ณ€์ž…๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋ชจ๋ธ์€ ํ…์ŠคํŠธ, ๋‹จ์–ด์˜ ์œ„์น˜(๋ฐ”์šด๋”ฉ ๋ฐ•์Šค), ์ด๋ฏธ์ง€ ๋“ฑ ๋‹ค์–‘ํ•œ ๋ชจ๋‹ฌ๋ฆฌํ‹ฐ๋ฅผ ํ™œ์šฉํ•ฉ๋‹ˆ๋‹ค. 
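์œ„์—์„œ ์–ธ๊ธ‰ํ•œ ๋‹จ์–ด ์œ„์น˜(๋ฐ”์šด๋”ฉ ๋ฐ•์Šค) ๋ชจ๋‹ฌ๋ฆฌํ‹ฐ์™€ ๊ด€๋ จํ•ด, LayoutLM ๊ณ„์—ด ๋ชจ๋ธ์€ ์ผ๋ฐ˜์ ์œผ๋กœ ๋ฐ•์Šค ์ขŒํ‘œ๋ฅผ ์ด๋ฏธ์ง€ ํฌ๊ธฐ์— ๋Œ€ํ•ด 0~1000 ์ฒ™๋„๋กœ ์ •๊ทœํ™”ํ•œ ๊ฐ’์„ ๊ธฐ๋Œ€ํ•ฉ๋‹ˆ๋‹ค. ๋’ค์—์„œ ์‚ฌ์šฉํ•  ํ”„๋กœ์„ธ์„œ๊ฐ€ ์ด ๋ณ€ํ™˜์„ ๋Œ€์‹  ์ฒ˜๋ฆฌํ•ด ์ฃผ๋ฏ€๋กœ ์ง์ ‘ ๊ตฌํ˜„ํ•  ํ•„์š”๋Š” ์—†์ง€๋งŒ, ๊ฐœ๋…์„ ๋ณด์—ฌ์ฃผ๊ธฐ ์œ„ํ•œ ๊ฐ„๋‹จํ•œ ์Šค์ผ€์น˜๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค(์˜ˆ์‹œ ์ˆ˜์น˜๋Š” ์ž„์˜๋กœ ๊ฐ€์ •ํ•œ ๊ฐ’์ž…๋‹ˆ๋‹ค):

```python
def normalize_box(box, width, height):
    """ํ”ฝ์…€ ๋‹จ์œ„ (xmin, ymin, xmax, ymax) ๋ฐ•์Šค๋ฅผ 0~1000 ์ฒ™๋„๋กœ ์ •๊ทœํ™”ํ•ฉ๋‹ˆ๋‹ค."""
    return [
        int(1000 * box[0] / width),
        int(1000 * box[1] / height),
        int(1000 * box[2] / width),
        int(1000 * box[3] / height),
    ]


# ์˜ˆ: ๊ฐ€๋กœ 800, ์„ธ๋กœ 1000 ํ”ฝ์…€ ๋ฌธ์„œ์—์„œ ๊ฐ์ง€๋œ ๋‹จ์–ด ๋ฐ•์Šค
print(normalize_box((80, 100, 160, 120), 800, 1000))
# [100, 100, 200, 120]
```

์ด๋ ‡๊ฒŒ ์ •๊ทœํ™”ํ•ด ๋‘๋ฉด ๋ฌธ์„œ ์ด๋ฏธ์ง€์˜ ์‹ค์ œ ํ•ด์ƒ๋„์™€ ๋ฌด๊ด€ํ•˜๊ฒŒ ๊ฐ™์€ ์œ„์น˜์˜ ๋‹จ์–ด๊ฐ€ ๊ฐ™์€ ์ขŒํ‘œ๋ฅผ ๊ฐ–๊ฒŒ ๋ฉ๋‹ˆ๋‹ค.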
์ด ๊ฐ€์ด๋“œ๋Š” ๋‹ค์Œ ๋‚ด์šฉ์„ ์„ค๋ช…ํ•ฉ๋‹ˆ๋‹ค: - [DocVQA dataset](https://huggingface.co/datasets/nielsr/docvqa_1200_examples_donut)์„ ์‚ฌ์šฉํ•ด [LayoutLMv2](../model_doc/layoutlmv2) ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ - ์ถ”๋ก ์„ ์œ„ํ•ด ๋ฏธ์„ธ ์กฐ์ •๋œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๊ธฐ <Tip> ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ ์„ค๋ช…ํ•˜๋Š” ํƒœ์Šคํฌ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜์—์„œ ์ง€์›๋ฉ๋‹ˆ๋‹ค: <!--This tip is automatically generated by `make fix-copies`, do not fill manually!--> [LayoutLM](../model_doc/layoutlm), [LayoutLMv2](../model_doc/layoutlmv2), [LayoutLMv3](../model_doc/layoutlmv3) <!--End of the generated tip--> </Tip> LayoutLMv2๋Š” ํ† ํฐ์˜ ๋งˆ์ง€๋ง‰ ์€๋‹‰์ธต ์œ„์— ์งˆ์˜ ์‘๋‹ต ํ—ค๋“œ๋ฅผ ์ถ”๊ฐ€ํ•ด ๋‹ต๋ณ€์˜ ์‹œ์ž‘ ํ† ํฐ๊ณผ ๋ ํ† ํฐ์˜ ์œ„์น˜๋ฅผ ์˜ˆ์ธกํ•จ์œผ๋กœ์จ ๋ฌธ์„œ ์งˆ์˜ ์‘๋‹ต ํƒœ์Šคํฌ๋ฅผ ํ•ด๊ฒฐํ•ฉ๋‹ˆ๋‹ค. ์ฆ‰, ๋ฌธ๋งฅ์ด ์ฃผ์–ด์กŒ์„ ๋•Œ ์งˆ๋ฌธ์— ๋‹ตํ•˜๋Š” ์ •๋ณด๋ฅผ ์ถ”์ถœํ•˜๋Š” ์ถ”์ถœํ˜• ์งˆ์˜ ์‘๋‹ต(Extractive question answering)์œผ๋กœ ๋ฌธ์ œ๋ฅผ ์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค. ๋ฌธ๋งฅ์€ OCR ์—”์ง„์˜ ์ถœ๋ ฅ์—์„œ ๊ฐ€์ ธ์˜ค๋ฉฐ, ์—ฌ๊ธฐ์„œ๋Š” Google์˜ Tesseract๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ๋ชจ๋‘ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”. LayoutLMv2๋Š” detectron2, torchvision ๋ฐ ํ…Œ์„œ๋ž™ํŠธ๋ฅผ ํ•„์š”๋กœ ํ•ฉ๋‹ˆ๋‹ค. ```bash pip install -q transformers datasets ``` ```bash pip install 'git+https://github.com/facebookresearch/detectron2.git' pip install torchvision ``` ```bash sudo apt install tesseract-ocr pip install -q pytesseract ``` ํ•„์š”ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋“ค์„ ๋ชจ๋‘ ์„ค์น˜ํ•œ ํ›„ ๋Ÿฐํƒ€์ž„์„ ๋‹ค์‹œ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค. ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๋‹น์‹ ์˜ ๋ชจ๋ธ์„ ๊ณต์œ ํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•ด์„œ ๋ชจ๋ธ์„ ๐Ÿค— Hub์— ์—…๋กœ๋“œํ•˜์„ธ์š”. ํ”„๋กฌํ”„ํŠธ๊ฐ€ ์‹คํ–‰๋˜๋ฉด, ๋กœ๊ทธ์ธ์„ ์œ„ํ•ด ํ† ํฐ์„ ์ž…๋ ฅํ•˜์„ธ์š”: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ๋ช‡ ๊ฐ€์ง€ ์ „์—ญ ๋ณ€์ˆ˜๋ฅผ ์ •์˜ํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. 
```py >>> model_checkpoint = "microsoft/layoutlmv2-base-uncased" >>> batch_size = 4 ``` ## ๋ฐ์ดํ„ฐ ๋ถˆ๋Ÿฌ์˜ค๊ธฐ [[load-the-data]] ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ๐Ÿค— Hub์—์„œ ์ฐพ์„ ์ˆ˜ ์žˆ๋Š” ์ „์ฒ˜๋ฆฌ๋œ DocVQA์˜ ์ž‘์€ ์ƒ˜ํ”Œ์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. DocVQA์˜ ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜๊ณ  ์‹ถ๋‹ค๋ฉด, [DocVQA homepage](https://rrc.cvc.uab.es/?ch=17)์— ๊ฐ€์ž… ํ›„ ๋‹ค์šด๋กœ๋“œ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋‹ค์šด๋กœ๋“œ ํ–ˆ๋‹ค๋ฉด, ์ด ๊ฐ€์ด๋“œ๋ฅผ ๊ณ„์† ์ง„ํ–‰ํ•˜๊ธฐ ์œ„ํ•ด [๐Ÿค— dataset์— ํŒŒ์ผ์„ ๊ฐ€์ ธ์˜ค๋Š” ๋ฐฉ๋ฒ•](https://huggingface.co/docs/datasets/loading#local-and-remote-files)์„ ํ™•์ธํ•˜์„ธ์š”. ```py >>> from datasets import load_dataset >>> dataset = load_dataset("nielsr/docvqa_1200_examples") >>> dataset DatasetDict({ train: Dataset({ features: ['id', 'image', 'query', 'answers', 'words', 'bounding_boxes', 'answer'], num_rows: 1000 }) test: Dataset({ features: ['id', 'image', 'query', 'answers', 'words', 'bounding_boxes', 'answer'], num_rows: 200 }) }) ``` ๋ณด์‹œ๋‹ค์‹œํ”ผ, ๋ฐ์ดํ„ฐ ์„ธํŠธ๋Š” ์ด๋ฏธ ํ›ˆ๋ จ ์„ธํŠธ์™€ ํ…Œ์ŠคํŠธ ์„ธํŠธ๋กœ ๋‚˜๋ˆ„์–ด์ ธ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฌด์ž‘์œ„๋กœ ์˜ˆ์ œ๋ฅผ ์‚ดํŽด๋ณด๋ฉด์„œ ํŠน์„ฑ์„ ํ™•์ธํ•ด๋ณด์„ธ์š”. ```py >>> dataset["train"].features ``` ๊ฐ ํ•„๋“œ๊ฐ€ ๋‚˜ํƒ€๋‚ด๋Š” ๋‚ด์šฉ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: * `id`: ์˜ˆ์ œ์˜ id * `image`: ๋ฌธ์„œ ์ด๋ฏธ์ง€๋ฅผ ํฌํ•จํ•˜๋Š” PIL.Image.Image ๊ฐ์ฒด * `query`: ์งˆ๋ฌธ ๋ฌธ์ž์—ด - ์—ฌ๋Ÿฌ ์–ธ์–ด์˜ ์ž์—ฐ์–ด๋กœ ๋œ ์งˆ๋ฌธ * `answers`: ์‚ฌ๋žŒ์ด ์ฃผ์„์„ ๋‹จ ์ •๋‹ต ๋ฆฌ์ŠคํŠธ * `words` and `bounding_boxes`: OCR์˜ ๊ฒฐ๊ณผ๊ฐ’๋“ค์ด๋ฉฐ ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ์‚ฌ์šฉํ•˜์ง€ ์•Š์„ ์˜ˆ์ • * `answer`: ๋‹ค๋ฅธ ๋ชจ๋ธ๊ณผ ์ผ์น˜ํ•˜๋Š” ๋‹ต๋ณ€์ด๋ฉฐ ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ์‚ฌ์šฉํ•˜์ง€ ์•Š์„ ์˜ˆ์ • ์˜์–ด๋กœ ๋œ ์งˆ๋ฌธ๋งŒ ๋‚จ๊ธฐ๊ณ  ๋‹ค๋ฅธ ๋ชจ๋ธ์— ๋Œ€ํ•œ ์˜ˆ์ธก์„ ํฌํ•จํ•˜๋Š” `answer` ํŠน์„ฑ์„ ์‚ญ์ œํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  ์ฃผ์„ ์ž‘์„ฑ์ž๊ฐ€ ์ œ๊ณตํ•œ ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ ์ฒซ ๋ฒˆ์งธ ๋‹ต๋ณ€์„ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. ๋˜๋Š” ๋ฌด์ž‘์œ„๋กœ ์ƒ˜ํ”Œ์„ ์ถ”์ถœํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. 
```py >>> updated_dataset = dataset.map(lambda example: {"question": example["query"]["en"]}, remove_columns=["query"]) >>> updated_dataset = updated_dataset.map( ... lambda example: {"answer": example["answers"][0]}, remove_columns=["answer", "answers"] ... ) ``` ์ด ๊ฐ€์ด๋“œ์—์„œ ์‚ฌ์šฉํ•˜๋Š” LayoutLMv2 ์ฒดํฌํฌ์ธํŠธ๋Š” `max_position_embeddings = 512`๋กœ ํ›ˆ๋ จ๋˜์—ˆ์Šต๋‹ˆ๋‹ค(์ด ์ •๋ณด๋Š” [์ฒดํฌํฌ์ธํŠธ์˜ `config.json` ํŒŒ์ผ](https://huggingface.co/microsoft/layoutlmv2-base-uncased/blob/main/config.json#L18)์—์„œ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค). ๋ฐ”๋กœ ์˜ˆ์ œ๋ฅผ ์ž˜๋ผ๋‚ผ ์ˆ˜๋„ ์žˆ์ง€๋งŒ, ๊ธด ๋ฌธ์„œ์˜ ๋์— ๋‹ต๋ณ€์ด ์žˆ์–ด ์ž˜๋ฆฌ๋Š” ์ƒํ™ฉ์„ ํ”ผํ•˜๊ธฐ ์œ„ํ•ด ์—ฌ๊ธฐ์„œ๋Š” ์ž„๋ฒ ๋”ฉ์ด 512๋ณด๋‹ค ๊ธธ์–ด์งˆ ๊ฐ€๋Šฅ์„ฑ์ด ์žˆ๋Š” ๋ช‡ ๊ฐ€์ง€ ์˜ˆ์ œ๋ฅผ ์ œ๊ฑฐํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ์žˆ๋Š” ๋Œ€๋ถ€๋ถ„์˜ ๋ฌธ์„œ๊ฐ€ ๊ธด ๊ฒฝ์šฐ ์Šฌ๋ผ์ด๋”ฉ ์œˆ๋„์šฐ ๋ฐฉ๋ฒ•์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค - ์ž์„ธํ•œ ๋‚ด์šฉ์„ ํ™•์ธํ•˜๊ณ  ์‹ถ์œผ๋ฉด ์ด [๋…ธํŠธ๋ถ](https://github.com/huggingface/notebooks/blob/main/examples/question_answering.ipynb)์„ ํ™•์ธํ•˜์„ธ์š”. ```py >>> updated_dataset = updated_dataset.filter(lambda x: len(x["words"]) + len(x["question"].split()) < 512) ``` ์ด ์‹œ์ ์—์„œ ์ด ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ OCR ํŠน์„ฑ๋„ ์ œ๊ฑฐํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. OCR ํŠน์„ฑ์€ ๋‹ค๋ฅธ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ ์œ„ํ•œ ๊ฒƒ์œผ๋กœ, ์ด ๊ฐ€์ด๋“œ์—์„œ ์‚ฌ์šฉํ•˜๋Š” ๋ชจ๋ธ์˜ ์ž…๋ ฅ ์š”๊ตฌ ์‚ฌํ•ญ๊ณผ ์ผ์น˜ํ•˜์ง€ ์•Š๊ธฐ ๋•Œ๋ฌธ์— ์ด ํŠน์„ฑ์„ ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” ์ผ๋ถ€ ์ฒ˜๋ฆฌ๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ๋Œ€์‹ , ์›๋ณธ ๋ฐ์ดํ„ฐ์— [`LayoutLMv2Processor`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ OCR ๋ฐ ํ† ํฐํ™”๋ฅผ ๋ชจ๋‘ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ๋ชจ๋ธ์ด ์š”๊ตฌํ•˜๋Š” ์ž…๋ ฅ์„ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€๋ฅผ ์ˆ˜๋™์œผ๋กœ ์ฒ˜๋ฆฌํ•˜๋ ค๋ฉด, [`LayoutLMv2` model documentation](../model_doc/layoutlmv2)์—์„œ ๋ชจ๋ธ์ด ์š”๊ตฌํ•˜๋Š” ์ž…๋ ฅ ํฌ๋งท์„ ํ™•์ธํ•ด๋ณด์„ธ์š”. 
```py >>> updated_dataset = updated_dataset.remove_columns("words") >>> updated_dataset = updated_dataset.remove_columns("bounding_boxes") ``` ๋งˆ์ง€๋ง‰์œผ๋กœ, ๋ฐ์ดํ„ฐ ํƒ์ƒ‰์„ ์™„๋ฃŒํ•˜๊ธฐ ์œ„ํ•ด ์ด๋ฏธ์ง€ ์˜ˆ์‹œ๋ฅผ ์‚ดํŽด๋ด…์‹œ๋‹ค. ```py >>> updated_dataset["train"][11]["image"] ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/docvqa_example.jpg" alt="DocVQA Image Example"/> </div> ## ๋ฐ์ดํ„ฐ ์ „์ฒ˜๋ฆฌ [[preprocess-the-data]] ๋ฌธ์„œ ์งˆ์˜ ์‘๋‹ต ํƒœ์Šคํฌ๋Š” ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ ํƒœ์Šคํฌ์ด๋ฉฐ, ๊ฐ ๋ชจ๋‹ฌ๋ฆฌํ‹ฐ์˜ ์ž…๋ ฅ์ด ๋ชจ๋ธ์˜ ์š”๊ตฌ์— ๋งž๊ฒŒ ์ „์ฒ˜๋ฆฌ ๋˜์—ˆ๋Š”์ง€ ํ™•์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€ ๋ฐ์ดํ„ฐ๋ฅผ ์ฒ˜๋ฆฌํ•  ์ˆ˜ ์žˆ๋Š” ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ์™€ ํ…์ŠคํŠธ ๋ฐ์ดํ„ฐ๋ฅผ ์ธ์ฝ”๋”ฉํ•  ์ˆ˜ ์žˆ๋Š” ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๊ฒฐํ•ฉํ•œ [`LayoutLMv2Processor`]๋ฅผ ๊ฐ€์ ธ์˜ค๋Š” ๊ฒƒ๋ถ€ํ„ฐ ์‹œ์ž‘ํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ```py >>> from transformers import AutoProcessor >>> processor = AutoProcessor.from_pretrained(model_checkpoint) ``` ### ๋ฌธ์„œ ์ด๋ฏธ์ง€ ์ „์ฒ˜๋ฆฌ [[preprocessing-document-images]] ๋จผ์ €, ํ”„๋กœ์„ธ์„œ์˜ `image_processor`๋ฅผ ์‚ฌ์šฉํ•ด ๋ชจ๋ธ์— ๋Œ€ํ•œ ๋ฌธ์„œ ์ด๋ฏธ์ง€๋ฅผ ์ค€๋น„ํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ๊ธฐ๋ณธ๊ฐ’์œผ๋กœ, ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋Š” ์ด๋ฏธ์ง€ ํฌ๊ธฐ๋ฅผ 224x224๋กœ ์กฐ์ •ํ•˜๊ณ  ์ƒ‰์ƒ ์ฑ„๋„์˜ ์ˆœ์„œ๊ฐ€ ์˜ฌ๋ฐ”๋ฅธ์ง€ ํ™•์ธํ•œ ํ›„ ๋‹จ์–ด์™€ ์ •๊ทœํ™”๋œ ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค๋ฅผ ์–ป๊ธฐ ์œ„ํ•ด ํ…Œ์„œ๋ž™ํŠธ๋ฅผ ์‚ฌ์šฉํ•ด OCR๋ฅผ ์ ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ ์šฐ๋ฆฌ๊ฐ€ ํ•„์š”ํ•œ ๊ฒƒ๊ณผ ๊ธฐ๋ณธ๊ฐ’์€ ์™„์ „ํžˆ ๋™์ผํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€ ๋ฐฐ์น˜์— ๊ธฐ๋ณธ ์ด๋ฏธ์ง€ ์ฒ˜๋ฆฌ๋ฅผ ์ ์šฉํ•˜๊ณ  OCR์˜ ๊ฒฐ๊ณผ๋ฅผ ๋ณ€ํ™˜ํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ์ž‘์„ฑํ•ฉ๋‹ˆ๋‹ค. ```py >>> image_processor = processor.image_processor >>> def get_ocr_words_and_boxes(examples): ... images = [image.convert("RGB") for image in examples["image"]] ... encoded_inputs = image_processor(images) ... examples["image"] = encoded_inputs.pixel_values ... examples["words"] = encoded_inputs.words ... 
examples["boxes"] = encoded_inputs.boxes ... return examples ``` ์ด ์ „์ฒ˜๋ฆฌ๋ฅผ ๋ฐ์ดํ„ฐ ์„ธํŠธ ์ „์ฒด์— ๋น ๋ฅด๊ฒŒ ์ ์šฉํ•˜๋ ค๋ฉด [`~datasets.Dataset.map`]๋ฅผ ์‚ฌ์šฉํ•˜์„ธ์š”. ```py >>> dataset_with_ocr = updated_dataset.map(get_ocr_words_and_boxes, batched=True, batch_size=2) ``` ### ํ…์ŠคํŠธ ๋ฐ์ดํ„ฐ ์ „์ฒ˜๋ฆฌ [[preprocessing-text-data]] ์ด๋ฏธ์ง€์— OCR์„ ์ ์šฉํ–ˆ์œผ๋ฉด ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ํ…์ŠคํŠธ ๋ถ€๋ถ„์„ ๋ชจ๋ธ์— ๋งž๊ฒŒ ์ธ์ฝ”๋”ฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด ์ธ์ฝ”๋”ฉ์—๋Š” ์ด์ „ ๋‹จ๊ณ„์—์„œ ๊ฐ€์ ธ์˜จ ๋‹จ์–ด์™€ ๋ฐ•์Šค๋ฅผ ํ† ํฐ ์ˆ˜์ค€์˜ `input_ids`, `attention_mask`, `token_type_ids` ๋ฐ `bbox`๋กœ ๋ณ€ํ™˜ํ•˜๋Š” ์ž‘์—…์ด ํฌํ•จ๋ฉ๋‹ˆ๋‹ค. ํ…์ŠคํŠธ๋ฅผ ์ „์ฒ˜๋ฆฌํ•˜๋ ค๋ฉด ํ”„๋กœ์„ธ์„œ์˜ `tokenizer`๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ```py >>> tokenizer = processor.tokenizer ``` ์œ„์—์„œ ์–ธ๊ธ‰ํ•œ ์ „์ฒ˜๋ฆฌ ์™ธ์—๋„ ๋ชจ๋ธ์„ ์œ„ํ•ด ๋ ˆ์ด๋ธ”์„ ์ถ”๊ฐ€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๐Ÿค— Transformers์˜ `xxxForQuestionAnswering` ๋ชจ๋ธ์˜ ๊ฒฝ์šฐ, ๋ ˆ์ด๋ธ”์€ `start_positions`์™€ `end_positions`๋กœ ๊ตฌ์„ฑ๋˜๋ฉฐ ์–ด๋–ค ํ† ํฐ์ด ๋‹ต๋ณ€์˜ ์‹œ์ž‘๊ณผ ๋์— ์žˆ๋Š”์ง€๋ฅผ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. ๋ ˆ์ด๋ธ” ์ถ”๊ฐ€๋ฅผ ์œ„ํ•ด์„œ, ๋จผ์ € ๋” ํฐ ๋ฆฌ์ŠคํŠธ(๋‹จ์–ด ๋ฆฌ์ŠคํŠธ)์—์„œ ํ•˜์œ„ ๋ฆฌ์ŠคํŠธ(๋‹จ์–ด๋กœ ๋ถ„ํ• ๋œ ๋‹ต๋ณ€)์„ ์ฐพ์„ ์ˆ˜ ์žˆ๋Š” ํ—ฌํผ ํ•จ์ˆ˜๋ฅผ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. ์ด ํ•จ์ˆ˜๋Š” `words_list`์™€ `answer_list`, ์ด๋ ‡๊ฒŒ ๋‘ ๋ฆฌ์ŠคํŠธ๋ฅผ ์ž…๋ ฅ์œผ๋กœ ๋ฐ›์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฐ ๋‹ค์Œ `words_list`๋ฅผ ๋ฐ˜๋ณตํ•˜์—ฌ `words_list`์˜ ํ˜„์žฌ ๋‹จ์–ด(words_list[i])๊ฐ€ `answer_list`์˜ ์ฒซ ๋ฒˆ์งธ ๋‹จ์–ด(answer_list[0])์™€ ๊ฐ™์€์ง€, ํ˜„์žฌ ๋‹จ์–ด์—์„œ ์‹œ์ž‘ํ•ด `answer_list`์™€ ๊ฐ™์€ ๊ธธ์ด๋งŒํผ์˜ `words_list`์˜ ํ•˜์œ„ ๋ฆฌ์ŠคํŠธ๊ฐ€ `answer_list`์™€ ์ผ์น˜ํ•˜๋Š”์ง€ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค. ์ด ์กฐ๊ฑด์ด ์ฐธ์ด๋ผ๋ฉด ์ผ์น˜ํ•˜๋Š” ํ•ญ๋ชฉ์„ ๋ฐœ๊ฒฌํ–ˆ์Œ์„ ์˜๋ฏธํ•˜๋ฉฐ, ํ•จ์ˆ˜๋Š” ์ผ์น˜ ํ•ญ๋ชฉ, ์‹œ์ž‘ ์ธ๋ฑ์Šค(idx) ๋ฐ ์ข…๋ฃŒ ์ธ๋ฑ์Šค(idx + len(answer_list) - 1)๋ฅผ ๊ธฐ๋กํ•ฉ๋‹ˆ๋‹ค. ์ผ์น˜ํ•˜๋Š” ํ•ญ๋ชฉ์ด ๋‘ ๊ฐœ ์ด์ƒ ๋ฐœ๊ฒฌ๋˜๋ฉด ํ•จ์ˆ˜๋Š” ์ฒซ ๋ฒˆ์งธ ํ•ญ๋ชฉ๋งŒ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. 
์ผ์น˜ํ•˜๋Š” ํ•ญ๋ชฉ์ด ์—†๋‹ค๋ฉด ํ•จ์ˆ˜๋Š” (`None`, 0, 0)์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ```py >>> def subfinder(words_list, answer_list): ... matches = [] ... start_indices = [] ... end_indices = [] ... for idx, i in enumerate(range(len(words_list))): ... if words_list[i] == answer_list[0] and words_list[i : i + len(answer_list)] == answer_list: ... matches.append(answer_list) ... start_indices.append(idx) ... end_indices.append(idx + len(answer_list) - 1) ... if matches: ... return matches[0], start_indices[0], end_indices[0] ... else: ... return None, 0, 0 ``` ์ด ํ•จ์ˆ˜๊ฐ€ ์–ด๋–ป๊ฒŒ ์ •๋‹ต์˜ ์œ„์น˜๋ฅผ ์ฐพ๋Š”์ง€ ์„ค๋ช…ํ•˜๊ธฐ ์œ„ํ•ด ๋‹ค์Œ ์˜ˆ์ œ์—์„œ ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> example = dataset_with_ocr["train"][1] >>> words = [word.lower() for word in example["words"]] >>> match, word_idx_start, word_idx_end = subfinder(words, example["answer"].lower().split()) >>> print("Question: ", example["question"]) >>> print("Words:", words) >>> print("Answer: ", example["answer"]) >>> print("start_index", word_idx_start) >>> print("end_index", word_idx_end) Question: Who is in cc in this letter? 
Words: ['wie', 'baw', 'brown', '&', 'williamson', 'tobacco', 'corporation', 'research', '&', 'development', 'internal', 'correspondence', 'to:', 'r.', 'h.', 'honeycutt', 'ce:', 't.f.', 'riehl', 'from:', '.', 'c.j.', 'cook', 'date:', 'may', '8,', '1995', 'subject:', 'review', 'of', 'existing', 'brainstorming', 'ideas/483', 'the', 'major', 'function', 'of', 'the', 'product', 'innovation', 'graup', 'is', 'to', 'develop', 'marketable', 'nove!', 'products', 'that', 'would', 'be', 'profitable', 'to', 'manufacture', 'and', 'sell.', 'novel', 'is', 'defined', 'as:', 'of', 'a', 'new', 'kind,', 'or', 'different', 'from', 'anything', 'seen', 'or', 'known', 'before.', 'innovation', 'is', 'defined', 'as:', 'something', 'new', 'or', 'different', 'introduced;', 'act', 'of', 'innovating;', 'introduction', 'of', 'new', 'things', 'or', 'methods.', 'the', 'products', 'may', 'incorporate', 'the', 'latest', 'technologies,', 'materials', 'and', 'know-how', 'available', 'to', 'give', 'then', 'a', 'unique', 'taste', 'or', 'look.', 'the', 'first', 'task', 'of', 'the', 'product', 'innovation', 'group', 'was', 'to', 'assemble,', 'review', 'and', 'categorize', 'a', 'list', 'of', 'existing', 'brainstorming', 'ideas.', 'ideas', 'were', 'grouped', 'into', 'two', 'major', 'categories', 'labeled', 'appearance', 'and', 'taste/aroma.', 'these', 'categories', 'are', 'used', 'for', 'novel', 'products', 'that', 'may', 'differ', 'from', 'a', 'visual', 'and/or', 'taste/aroma', 'point', 'of', 'view', 'compared', 'to', 'canventional', 'cigarettes.', 'other', 'categories', 'include', 'a', 'combination', 'of', 'the', 'above,', 'filters,', 'packaging', 'and', 'brand', 'extensions.', 'appearance', 'this', 'category', 'is', 'used', 'for', 'novel', 'cigarette', 'constructions', 'that', 'yield', 'visually', 'different', 'products', 'with', 'minimal', 'changes', 'in', 'smoke', 'chemistry', 'two', 'cigarettes', 'in', 'cne.', 'emulti-plug', 'te', 'build', 'yaur', 'awn', 'cigarette.', 'eswitchable', 'menthol', 'or', 
'non', 'menthol', 'cigarette.', '*cigarettes', 'with', 'interspaced', 'perforations', 'to', 'enable', 'smoker', 'to', 'separate', 'unburned', 'section', 'for', 'future', 'smoking.', 'ยซshort', 'cigarette,', 'tobacco', 'section', '30', 'mm.', 'ยซextremely', 'fast', 'buming', 'cigarette.', 'ยซnovel', 'cigarette', 'constructions', 'that', 'permit', 'a', 'significant', 'reduction', 'iretobacco', 'weight', 'while', 'maintaining', 'smoking', 'mechanics', 'and', 'visual', 'characteristics.', 'higher', 'basis', 'weight', 'paper:', 'potential', 'reduction', 'in', 'tobacco', 'weight.', 'ยซmore', 'rigid', 'tobacco', 'column;', 'stiffing', 'agent', 'for', 'tobacco;', 'e.g.', 'starch', '*colored', 'tow', 'and', 'cigarette', 'papers;', 'seasonal', 'promotions,', 'e.g.', 'pastel', 'colored', 'cigarettes', 'for', 'easter', 'or', 'in', 'an', 'ebony', 'and', 'ivory', 'brand', 'containing', 'a', 'mixture', 'of', 'all', 'black', '(black', 'paper', 'and', 'tow)', 'and', 'ail', 'white', 'cigarettes.', '499150498'] Answer: T.F. Riehl start_index 17 end_index 18 ``` ํ•œํŽธ, ์œ„ ์˜ˆ์ œ๊ฐ€ ์ธ์ฝ”๋”ฉ๋˜๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์ด ํ‘œ์‹œ๋ฉ๋‹ˆ๋‹ค: ```py >>> encoding = tokenizer(example["question"], example["words"], example["boxes"]) >>> tokenizer.decode(encoding["input_ids"]) [CLS] who is in cc in this letter? [SEP] wie baw brown & williamson tobacco corporation research & development ... ``` ์ด์ œ ์ธ์ฝ”๋”ฉ๋œ ์ž…๋ ฅ์—์„œ ์ •๋‹ต์˜ ์œ„์น˜๋ฅผ ์ฐพ์•„์•ผ ํ•ฉ๋‹ˆ๋‹ค. * `token_type_ids`๋Š” ์–ด๋–ค ํ† ํฐ์ด ์งˆ๋ฌธ์— ์†ํ•˜๋Š”์ง€, ๊ทธ๋ฆฌ๊ณ  ์–ด๋–ค ํ† ํฐ์ด ๋ฌธ์„œ์˜ ๋‹จ์–ด์— ํฌํ•จ๋˜๋Š”์ง€๋ฅผ ์•Œ๋ ค์ค๋‹ˆ๋‹ค. * `tokenizer.cls_token_id` ์ž…๋ ฅ์˜ ์‹œ์ž‘ ๋ถ€๋ถ„์— ์žˆ๋Š” ํŠน์ˆ˜ ํ† ํฐ์„ ์ฐพ๋Š” ๋ฐ ๋„์›€์„ ์ค๋‹ˆ๋‹ค. * `word_ids`๋Š” ์›๋ณธ `words`์—์„œ ์ฐพ์€ ๋‹ต๋ณ€์„ ์ „์ฒด ์ธ์ฝ”๋”ฉ๋œ ์ž…๋ ฅ์˜ ๋™์ผํ•œ ๋‹ต๊ณผ ์ผ์น˜์‹œํ‚ค๊ณ  ์ธ์ฝ”๋”ฉ๋œ ์ž…๋ ฅ์—์„œ ๋‹ต๋ณ€์˜ ์‹œ์ž‘/๋ ์œ„์น˜๋ฅผ ๊ฒฐ์ •ํ•ฉ๋‹ˆ๋‹ค. 
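์ด ์„ธ ๊ฐ€์ง€๋ฅผ ์กฐํ•ฉํ•ด ๋‹จ์–ด ์ˆ˜์ค€ ์œ„์น˜๋ฅผ ํ† ํฐ ์ˆ˜์ค€ ์œ„์น˜๋กœ ์˜ฎ๊ธฐ๋Š” ํ•ต์‹ฌ ์•„์ด๋””์–ด๋Š” ๊ฐ€์ƒ์˜ `word_ids` ๋ฆฌ์ŠคํŠธ๋กœ ๊ฐ„๋‹จํžˆ ์Šค์ผ€์น˜ํ•ด ๋ณผ ์ˆ�˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜์˜ `word_span_to_token_span` ํ•จ์ˆ˜์™€ ์ˆซ์ž๋Š” ์„ค๋ช…์„ ์œ„ํ•ด ์ง€์–ด๋‚ธ ๊ฐ€์ƒ์˜ ์˜ˆ์ œ์ด๋ฉฐ, ์‹ค์ œ ์ธ์ฝ”๋”ฉ์—์„œ๋Š” ํ† ํฌ๋‚˜์ด์ €๊ฐ€ ๋ฐ˜ํ™˜ํ•˜๋Š” `word_ids`๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค:

```python
# word_ids๊ฐ€ ๋‹จ์–ด ์ธ๋ฑ์Šค๋ฅผ ํ† ํฐ ์ธ๋ฑ์Šค๋กœ ์˜ฎ๊ธฐ๋Š” ๋ฐฉ์‹์„ ๋ณด์—ฌ์ฃผ๋Š” ๊ฐ€์ƒ์˜ ํ—ฌํผ ํ•จ์ˆ˜์ž…๋‹ˆ๋‹ค
def word_span_to_token_span(doc_word_ids, offset, word_idx_start, word_idx_end):
    # doc_word_ids[i]๋Š” ๋ฌธ์„œ ์„ธ๊ทธ๋จผํŠธ์˜ i๋ฒˆ์งธ ํ† ํฐ์ด ๋ช‡ ๋ฒˆ์งธ ๋‹จ์–ด์—์„œ ์™”๋Š”์ง€๋ฅผ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค
    token_start = offset + min(i for i, w in enumerate(doc_word_ids) if w == word_idx_start)
    token_end = offset + max(i for i, w in enumerate(doc_word_ids) if w == word_idx_end)
    return token_start, token_end


# ๋‹จ์–ด 1์ด ๋‘ ๊ฐœ์˜ ํ† ํฐ์œผ๋กœ ์ชผ๊ฐœ์กŒ๋‹ค๊ณ  ๊ฐ€์ •ํ•œ ๋ฌธ์„œ ์„ธ๊ทธ๋จผํŠธ์˜ word_ids
doc_word_ids = [0, 1, 1, 2, 3]
# ๋ฌธ์„œ ์„ธ๊ทธ๋จผํŠธ๊ฐ€ ์ „์ฒด ์ธ์ฝ”๋”ฉ์˜ 5๋ฒˆ ํ† ํฐ๋ถ€ํ„ฐ ์‹œ์ž‘ํ•œ๋‹ค๊ณ  ๊ฐ€์ •ํ•˜๋ฉด,
# ๋‹จ์–ด 1~2์— ๊ฑธ์นœ ๋‹ต๋ณ€์€ ํ† ํฐ 6~8์— ํ•ด๋‹นํ•ฉ๋‹ˆ๋‹ค
print(word_span_to_token_span(doc_word_ids, offset=5, word_idx_start=1, word_idx_end=2))  # (6, 8)
```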
์œ„ ๋‚ด์šฉ๋“ค์„ ์—ผ๋‘์— ๋‘๊ณ  ๋ฐ์ดํ„ฐ ์„ธํŠธ ์˜ˆ์ œ์˜ ๋ฐฐ์น˜๋ฅผ ์ธ์ฝ”๋”ฉํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ค์–ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> def encode_dataset(examples, max_length=512): ... questions = examples["question"] ... words = examples["words"] ... boxes = examples["boxes"] ... answers = examples["answer"] ... # ์˜ˆ์ œ ๋ฐฐ์น˜๋ฅผ ์ธ์ฝ”๋”ฉํ•˜๊ณ  start_positions์™€ end_positions๋ฅผ ์ดˆ๊ธฐํ™”ํ•ฉ๋‹ˆ๋‹ค ... encoding = tokenizer(questions, words, boxes, max_length=max_length, padding="max_length", truncation=True) ... start_positions = [] ... end_positions = [] ... # ๋ฐฐ์น˜์˜ ์˜ˆ์ œ๋ฅผ ๋ฐ˜๋ณตํ•ฉ๋‹ˆ๋‹ค ... for i in range(len(questions)): ... cls_index = encoding["input_ids"][i].index(tokenizer.cls_token_id) ... # ์˜ˆ์ œ์˜ words์—์„œ ๋‹ต๋ณ€์˜ ์œ„์น˜๋ฅผ ์ฐพ์Šต๋‹ˆ๋‹ค ... words_example = [word.lower() for word in words[i]] ... answer = answers[i] ... match, word_idx_start, word_idx_end = subfinder(words_example, answer.lower().split()) ... if match: ... # ์ผ์น˜ํ•˜๋Š” ํ•ญ๋ชฉ์„ ๋ฐœ๊ฒฌํ•˜๋ฉด, `token_type_ids`๋ฅผ ์‚ฌ์šฉํ•ด ์ธ์ฝ”๋”ฉ์—์„œ ๋‹จ์–ด๊ฐ€ ์‹œ์ž‘ํ•˜๋Š” ์œ„์น˜๋ฅผ ์ฐพ์Šต๋‹ˆ๋‹ค ... token_type_ids = encoding["token_type_ids"][i] ... token_start_index = 0 ... while token_type_ids[token_start_index] != 1: ... token_start_index += 1 ... token_end_index = len(encoding["input_ids"][i]) - 1 ... while token_type_ids[token_end_index] != 1: ... token_end_index -= 1 ... word_ids = encoding.word_ids(i)[token_start_index : token_end_index + 1] ... start_position = cls_index ... end_position = cls_index ... # words์˜ ๋‹ต๋ณ€ ์œ„์น˜์™€ ์ผ์น˜ํ•  ๋•Œ๊นŒ์ง€ word_ids๋ฅผ ๋ฐ˜๋ณตํ•˜๊ณ  `token_start_index`๋ฅผ ๋Š˜๋ฆฝ๋‹ˆ๋‹ค ... # ์ผ์น˜ํ•˜๋ฉด `token_start_index`๋ฅผ ์ธ์ฝ”๋”ฉ์—์„œ ๋‹ต๋ณ€์˜ `start_position`์œผ๋กœ ์ €์žฅํ•ฉ๋‹ˆ๋‹ค ... for id in word_ids: ... if id == word_idx_start: ... start_position = token_start_index ... else: ... token_start_index += 1 ... # ๋น„์Šทํ•˜๊ฒŒ, ๋์—์„œ ์‹œ์ž‘ํ•ด `word_ids`๋ฅผ ๋ฐ˜๋ณตํ•˜๋ฉฐ ๋‹ต๋ณ€์˜ `end_position`์„ ์ฐพ์Šต๋‹ˆ๋‹ค ... 
for id in word_ids[::-1]: ... if id == word_idx_end: ... end_position = token_end_index ... else: ... token_end_index -= 1 ... start_positions.append(start_position) ... end_positions.append(end_position) ... else: ... start_positions.append(cls_index) ... end_positions.append(cls_index) ... encoding["image"] = examples["image"] ... encoding["start_positions"] = start_positions ... encoding["end_positions"] = end_positions ... return encoding ``` ์ด์ œ ์ด ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๊ฐ€ ์žˆ์œผ๋‹ˆ ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์ธ์ฝ”๋”ฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> encoded_train_dataset = dataset_with_ocr["train"].map( ... encode_dataset, batched=True, batch_size=2, remove_columns=dataset_with_ocr["train"].column_names ... ) >>> encoded_test_dataset = dataset_with_ocr["test"].map( ... encode_dataset, batched=True, batch_size=2, remove_columns=dataset_with_ocr["test"].column_names ... ) ``` ์ธ์ฝ”๋”ฉ๋œ ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ํŠน์„ฑ์ด ์–ด๋–ป๊ฒŒ ์ƒ๊ฒผ๋Š”์ง€ ํ™•์ธํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> encoded_train_dataset.features {'image': Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='uint8', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None), 'input_ids': Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None), 'token_type_ids': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None), 'attention_mask': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None), 'bbox': Sequence(feature=Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), length=-1, id=None), 'start_positions': Value(dtype='int64', id=None), 'end_positions': Value(dtype='int64', id=None)} ``` ## ํ‰๊ฐ€ [[evaluation]] ๋ฌธ์„œ ์งˆ์˜ ์‘๋‹ต์„ ํ‰๊ฐ€ํ•˜๋ ค๋ฉด ์ƒ๋‹นํ•œ ์–‘์˜ ํ›„์ฒ˜๋ฆฌ๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ์‹œ๊ฐ„์ด ๋„ˆ๋ฌด ๋งŽ์ด ๊ฑธ๋ฆฌ์ง€ ์•Š๋„๋ก ์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ํ‰๊ฐ€ ๋‹จ๊ณ„๋ฅผ ์ƒ๋žตํ•ฉ๋‹ˆ๋‹ค. 
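์ฐธ๊ณ ๋กœ, ๋‚˜์ค‘์— ์ง์ ‘ ํ‰๊ฐ€๋ฅผ ๊ตฌํ˜„ํ•œ๋‹ค๋ฉด ์ถ”์ถœ์  ์งˆ์˜ ์‘๋‹ต์—์„œ ํ”ํžˆ ์“ฐ์ด๋Š” exact match์™€ ํ† ํฐ ๋‹จ์œ„ F1์€ ๋Œ€๋žต ๋‹ค์Œ๊ณผ ๊ฐ™์€ ํ˜•ํƒœ์ž…๋‹ˆ๋‹ค. ์‹ค์ œ ํ‰๊ฐ€์—์„œ๋Š” ๊ตฌ๋‘์  ์ œ๊ฑฐ ๋“ฑ ๋” ์ •๊ตํ•œ ์ •๊ทœํ™”๊ฐ€ ํ•„์š”ํ•˜๋ฏ€๋กœ, ์–ด๋””๊นŒ์ง€๋‚˜ ๋‹จ์ˆœํ™”๋œ ์Šค์ผ€์น˜์ž…๋‹ˆ๋‹ค:

```python
from collections import Counter


def exact_match(prediction, truth):
    # ๊ณต๋ฐฑ ์ •๋ฆฌ์™€ ์†Œ๋ฌธ์žํ™”๋งŒ ๊ฑฐ์นœ ๋’ค ๋ฌธ์ž์—ด์ด ์™„์ „ํžˆ ๊ฐ™์€์ง€ ๋น„๊ตํ•ฉ๋‹ˆ๋‹ค
    return prediction.strip().lower() == truth.strip().lower()


def f1_score(prediction, truth):
    # ํ† ํฐ ๋‹จ์œ„ ๊ฒน์นจ์œผ๋กœ precision/recall์„ ๊ตฌํ•ด F1์„ ๊ณ„์‚ฐํ•ฉ๋‹ˆ๋‹ค
    pred_tokens = prediction.lower().split()
    truth_tokens = truth.lower().split()
    common = Counter(pred_tokens) & Counter(truth_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(truth_tokens)
    return 2 * precision * recall / (precision + recall)


print(exact_match("lee a. waller", "Lee A. Waller"))  # True
print(round(f1_score("vice president lee", "lee a. waller"), 2))  # 0.33
```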
[`Trainer`]๊ฐ€ ํ›ˆ๋ จ ๊ณผ์ •์—์„œ ํ‰๊ฐ€ ์†์‹ค(evaluation loss)์„ ๊ณ„์† ๊ณ„์‚ฐํ•˜๊ธฐ ๋•Œ๋ฌธ์— ๋ชจ๋ธ์˜ ์„ฑ๋Šฅ์„ ๋Œ€๋žต์ ์œผ๋กœ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ถ”์ถœ์ (Extractive) ์งˆ์˜ ์‘๋‹ต์€ ๋ณดํ†ต F1/exact match ๋ฐฉ๋ฒ•์„ ์‚ฌ์šฉํ•ด ํ‰๊ฐ€๋ฉ๋‹ˆ๋‹ค. ์ง์ ‘ ๊ตฌํ˜„ํ•ด๋ณด๊ณ  ์‹ถ์œผ์‹œ๋‹ค๋ฉด, Hugging Face course์˜ [Question Answering chapter](https://huggingface.co/course/chapter7/7?fw=pt#postprocessing)์„ ์ฐธ๊ณ ํ•˜์„ธ์š”. ## ํ›ˆ๋ จ [[train]] ์ถ•ํ•˜ํ•ฉ๋‹ˆ๋‹ค! ์ด ๊ฐ€์ด๋“œ์˜ ๊ฐ€์žฅ ์–ด๋ ค์šด ๋ถ€๋ถ„์„ ์„ฑ๊ณต์ ์œผ๋กœ ์ฒ˜๋ฆฌํ–ˆ์œผ๋‹ˆ ์ด์ œ ๋‚˜๋งŒ์˜ ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ํ›ˆ๋ จ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋‹จ๊ณ„๋กœ ์ด๋ฃจ์–ด์ ธ ์žˆ์Šต๋‹ˆ๋‹ค: * ์ „์ฒ˜๋ฆฌ์—์„œ์˜ ๋™์ผํ•œ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜๊ธฐ ์œ„ํ•ด [`AutoModelForDocumentQuestionAnswering`]์œผ๋กœ ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. * [`TrainingArguments`]๋กœ ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ •ํ•ฉ๋‹ˆ๋‹ค. * ์˜ˆ์ œ๋ฅผ ๋ฐฐ์น˜ ์ฒ˜๋ฆฌํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ๋Š” [`DefaultDataCollator`]๊ฐ€ ์ ๋‹นํ•ฉ๋‹ˆ๋‹ค. * ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ ์„ธํŠธ, ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ(Data collator)์™€ ํ•จ๊ป˜ [`Trainer`]์— ํ›ˆ๋ จ ์ธ์ˆ˜๋“ค์„ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. * [`~Trainer.train`]์„ ํ˜ธ์ถœํ•ด์„œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import AutoModelForDocumentQuestionAnswering >>> model = AutoModelForDocumentQuestionAnswering.from_pretrained(model_checkpoint) ``` [`TrainingArguments`]์—์„œ `output_dir`์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ์ €์žฅํ•  ์œ„์น˜๋ฅผ ์ง€์ •ํ•˜๊ณ , ์ ์ ˆํ•œ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ ์ปค๋ฎค๋‹ˆํ‹ฐ์™€ ๊ณต์œ ํ•˜๋ ค๋ฉด `push_to_hub`๋ฅผ `True`๋กœ ์„ค์ •ํ•˜์„ธ์š” (๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๋ ค๋ฉด Hugging Face์— ๋กœ๊ทธ์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค). ์ด ๊ฒฝ์šฐ `output_dir`์€ ๋ชจ๋ธ์˜ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ํ‘ธ์‹œํ•  ๋ ˆํฌ์ง€ํ† ๋ฆฌ์˜ ์ด๋ฆ„์ด ๋ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import TrainingArguments >>> # ๋ณธ์ธ์˜ ๋ ˆํฌ์ง€ํ† ๋ฆฌ ID๋กœ ๋ฐ”๊พธ์„ธ์š” >>> repo_id = "MariaK/layoutlmv2-base-uncased_finetuned_docvqa" >>> training_args = TrainingArguments( ... 
output_dir=repo_id, ... per_device_train_batch_size=4, ... num_train_epochs=20, ... save_steps=200, ... logging_steps=50, ... evaluation_strategy="steps", ... learning_rate=5e-5, ... save_total_limit=2, ... remove_unused_columns=False, ... push_to_hub=True, ... ) ``` ๊ฐ„๋‹จํ•œ ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ๋ฅผ ์ •์˜ํ•˜์—ฌ ์˜ˆ์ œ๋ฅผ ํ•จ๊ป˜ ๋ฐฐ์น˜ํ•ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import DefaultDataCollator >>> data_collator = DefaultDataCollator() ``` ๋งˆ์ง€๋ง‰์œผ๋กœ, ๋ชจ๋“  ๊ฒƒ์„ ํ•œ ๊ณณ์— ๋ชจ์•„ [`~Trainer.train`]์„ ํ˜ธ์ถœํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import Trainer >>> trainer = Trainer( ... model=model, ... args=training_args, ... data_collator=data_collator, ... train_dataset=encoded_train_dataset, ... eval_dataset=encoded_test_dataset, ... tokenizer=processor, ... ) >>> trainer.train() ``` ์ตœ์ข… ๋ชจ๋ธ์„ ๐Ÿค— Hub์— ์ถ”๊ฐ€ํ•˜๋ ค๋ฉด, ๋ชจ๋ธ ์นด๋“œ๋ฅผ ์ƒ์„ฑํ•˜๊ณ  `push_to_hub`๋ฅผ ํ˜ธ์ถœํ•ฉ๋‹ˆ๋‹ค: ```py >>> trainer.create_model_card() >>> trainer.push_to_hub() ``` ## ์ถ”๋ก  [[inference]] ์ด์ œ LayoutLMv2 ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ๐Ÿค— Hub์— ์—…๋กœ๋“œํ–ˆ์œผ๋‹ˆ ์ถ”๋ก ์—๋„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ถ”๋ก ์„ ์œ„ํ•ด ๋ฏธ์„ธ ์กฐ์ •๋œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ด ๋ณด๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ [`Pipeline`]์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ ์ž…๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> example = dataset["test"][2] >>> question = example["query"]["en"] >>> image = example["image"] >>> print(question) >>> print(example["answers"]) 'Who is โ€˜presidingโ€™ TRRF GENERAL SESSION (PART 1)?' ['TRRF Vice President', 'lee a. waller'] ``` ๊ทธ ๋‹ค์Œ, ๋ชจ๋ธ๋กœ ๋ฌธ์„œ ์งˆ์˜ ์‘๋‹ต์„ ํ•˜๊ธฐ ์œ„ํ•ด ํŒŒ์ดํ”„๋ผ์ธ์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ณ  ์ด๋ฏธ์ง€ + ์งˆ๋ฌธ ์กฐํ•ฉ์„ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import pipeline >>> qa_pipeline = pipeline("document-question-answering", model="MariaK/layoutlmv2-base-uncased_finetuned_docvqa") >>> qa_pipeline(image, question) [{'score': 0.9949808120727539, 'answer': 'Lee A. 
Waller', 'start': 55, 'end': 57}] ``` ์›ํ•œ๋‹ค๋ฉด ํŒŒ์ดํ”„๋ผ์ธ์˜ ๊ฒฐ๊ณผ๋ฅผ ์ˆ˜๋™์œผ๋กœ ๋ณต์ œํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค: 1. ์ด๋ฏธ์ง€์™€ ์งˆ๋ฌธ์„ ๊ฐ€์ ธ์™€ ๋ชจ๋ธ์˜ ํ”„๋กœ์„ธ์„œ๋ฅผ ์‚ฌ์šฉํ•ด ๋ชจ๋ธ์— ๋งž๊ฒŒ ์ค€๋น„ํ•ฉ๋‹ˆ๋‹ค. 2. ๋ชจ๋ธ์„ ํ†ตํ•ด ๊ฒฐ๊ณผ ๋˜๋Š” ์ „์ฒ˜๋ฆฌ๋ฅผ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. 3. ๋ชจ๋ธ์€ ์–ด๋–ค ํ† ํฐ์ด ๋‹ต๋ณ€์˜ ์‹œ์ž‘์— ์žˆ๋Š”์ง€, ์–ด๋–ค ํ† ํฐ์ด ๋‹ต๋ณ€์ด ๋์— ์žˆ๋Š”์ง€๋ฅผ ๋‚˜ํƒ€๋‚ด๋Š” `start_logits`์™€ `end_logits`๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ๋‘˜ ๋‹ค (batch_size, sequence_length) ํ˜•ํƒœ๋ฅผ ๊ฐ–์Šต๋‹ˆ๋‹ค. 4. `start_logits`์™€ `end_logits`์˜ ๋งˆ์ง€๋ง‰ ์ฐจ์›์„ ์ตœ๋Œ€๋กœ ๋งŒ๋“œ๋Š” ๊ฐ’์„ ์ฐพ์•„ ์˜ˆ์ƒ `start_idx`์™€ `end_idx`๋ฅผ ์–ป์Šต๋‹ˆ๋‹ค. 5. ํ† ํฌ๋‚˜์ด์ €๋กœ ๋‹ต๋ณ€์„ ๋””์ฝ”๋”ฉํ•ฉ๋‹ˆ๋‹ค. ```py >>> import torch >>> from transformers import AutoProcessor >>> from transformers import AutoModelForDocumentQuestionAnswering >>> processor = AutoProcessor.from_pretrained("MariaK/layoutlmv2-base-uncased_finetuned_docvqa") >>> model = AutoModelForDocumentQuestionAnswering.from_pretrained("MariaK/layoutlmv2-base-uncased_finetuned_docvqa") >>> with torch.no_grad(): ... encoding = processor(image.convert("RGB"), question, return_tensors="pt") ... outputs = model(**encoding) ... start_logits = outputs.start_logits ... end_logits = outputs.end_logits ... predicted_start_idx = start_logits.argmax(-1).item() ... predicted_end_idx = end_logits.argmax(-1).item() >>> processor.tokenizer.decode(encoding.input_ids.squeeze()[predicted_start_idx : predicted_end_idx + 1]) 'lee a. waller' ```
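์œ„ 4๋‹จ๊ณ„์˜ argmax์™€ ๋””์ฝ”๋”ฉ์ด ํ•˜๋Š” ์ผ์€ ๊ฐ€์ƒ์˜ ํ† ํฐ๊ณผ ๋กœ์ง“ ๋ฆฌ์ŠคํŠธ๋กœ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ๋‹จ์ˆœํ™”ํ•ด ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค(์•„๋ž˜ ํ† ํฐ๊ณผ ์ˆซ์ž๋Š” ์„ค๋ช…์šฉ์œผ๋กœ ์ง€์–ด๋‚ธ ๊ฐ’์ž…๋‹ˆ๋‹ค):

```python
# ๊ฐ€์ƒ์˜ ํ† ํฐ๊ณผ ๋กœ์ง“์œผ๋กœ ์‹œ์ž‘/๋ ์ธ๋ฑ์Šค ์„ ํƒ๊ณผ ์ŠคํŒฌ ์ถ”์ถœ์„ ํ‰๋‚ด ๋‚ธ ์Šค์ผ€์น˜์ž…๋‹ˆ๋‹ค
def decode_span(tokens, start_logits, end_logits):
    # ๊ฐ ๋กœ์ง“ ๋ฆฌ์ŠคํŠธ์—์„œ ๊ฐ’์ด ๊ฐ€์žฅ ํฐ ์œ„์น˜(argmax)๋ฅผ ๊ณ ๋ฆ…๋‹ˆ๋‹ค
    start_idx = max(range(len(start_logits)), key=start_logits.__getitem__)
    end_idx = max(range(len(end_logits)), key=end_logits.__getitem__)
    # ์‹œ์ž‘๋ถ€ํ„ฐ ๋๊นŒ์ง€์˜ ํ† ํฐ์„ ๋‹ต๋ณ€์œผ๋กœ ์ž˜๋ผ๋ƒ…๋‹ˆ๋‹ค
    return tokens[start_idx : end_idx + 1]


tokens = ["[CLS]", "who", "?", "[SEP]", "lee", "a", ".", "waller", "[SEP]"]
start_logits = [0.1, 0.0, 0.0, 0.0, 4.2, 0.3, 0.1, 0.2, 0.0]
end_logits = [0.1, 0.0, 0.0, 0.0, 0.2, 0.1, 0.3, 3.9, 0.0]
print(decode_span(tokens, start_logits, end_logits))  # ['lee', 'a', '.', 'waller']
```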
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # ๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง(Masked language modeling)[[masked-language-modeling]] [[open-in-colab]] <Youtube id="mqElG5QJWUg"/> ๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง์€ ์‹œํ€€์Šค์—์„œ ๋งˆ์Šคํ‚น๋œ ํ† ํฐ์„ ์˜ˆ์ธกํ•˜๋ฉฐ, ๋ชจ๋ธ์€ ์–‘๋ฐฉํ–ฅ์œผ๋กœ ํ† ํฐ์— ์•ก์„ธ์Šคํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ฆ‰, ๋ชจ๋ธ์€ ํ† ํฐ์˜ ์™ผ์ชฝ๊ณผ ์˜ค๋ฅธ์ชฝ ์–‘์ชฝ์—์„œ ์ ‘๊ทผํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง์€ ์ „์ฒด ์‹œํ€€์Šค์— ๋Œ€ํ•œ ๋ฌธ๋งฅ์  ์ดํ•ด๊ฐ€ ํ•„์š”ํ•œ ์ž‘์—…์— ์ ํ•ฉํ•˜๋ฉฐ, BERT๊ฐ€ ๊ทธ ์˜ˆ์— ํ•ด๋‹นํ•ฉ๋‹ˆ๋‹ค. ์ด๋ฒˆ ๊ฐ€์ด๋“œ์—์„œ ๋‹ค๋ฃฐ ๋‚ด์šฉ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: 1. [ELI5](https://huggingface.co/datasets/eli5) ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ [r/askscience](https://www.reddit.com/r/askscience/) ๋ถ€๋ถ„์„ ์‚ฌ์šฉํ•ด [DistilRoBERTa](https://huggingface.co/distilroberta-base) ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค. 2. ์ถ”๋ก  ์‹œ์— ์ง์ ‘ ๋ฏธ์„ธ ์กฐ์ •ํ•œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. <Tip> ์ด๋ฒˆ ๊ฐ€์ด๋“œ์—์„œ์ฒ˜๋Ÿผ ๋‹ค๋ฅธ ์•„ํ‚คํ…์ฒ˜๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•ด ๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง์„ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
๋‹ค์Œ ์•„ํ‚คํ…์ณ ์ค‘ ํ•˜๋‚˜๋ฅผ ์„ ํƒํ•˜์„ธ์š”: <!--This tip is automatically generated by `make fix-copies`, do not fill manually!--> [ALBERT](../model_doc/albert), [BART](../model_doc/bart), [BERT](../model_doc/bert), [BigBird](../model_doc/big_bird), [CamemBERT](../model_doc/camembert), [ConvBERT](../model_doc/convbert), [Data2VecText](../model_doc/data2vec-text), [DeBERTa](../model_doc/deberta), [DeBERTa-v2](../model_doc/deberta-v2), [DistilBERT](../model_doc/distilbert), [ELECTRA](../model_doc/electra), [ERNIE](../model_doc/ernie), [ESM](../model_doc/esm), [FlauBERT](../model_doc/flaubert), [FNet](../model_doc/fnet), [Funnel Transformer](../model_doc/funnel), [I-BERT](../model_doc/ibert), [LayoutLM](../model_doc/layoutlm), [Longformer](../model_doc/longformer), [LUKE](../model_doc/luke), [mBART](../model_doc/mbart), [MEGA](../model_doc/mega), [Megatron-BERT](../model_doc/megatron-bert), [MobileBERT](../model_doc/mobilebert), [MPNet](../model_doc/mpnet), [MVP](../model_doc/mvp), [Nezha](../model_doc/nezha), [Nystrรถmformer](../model_doc/nystromformer), [Perceiver](../model_doc/perceiver), [QDQBert](../model_doc/qdqbert), [Reformer](../model_doc/reformer), [RemBERT](../model_doc/rembert), [RoBERTa](../model_doc/roberta), [RoBERTa-PreLayerNorm](../model_doc/roberta-prelayernorm), [RoCBert](../model_doc/roc_bert), [RoFormer](../model_doc/roformer), [SqueezeBERT](../model_doc/squeezebert), [TAPAS](../model_doc/tapas), [Wav2Vec2](../model_doc/wav2vec2), [XLM](../model_doc/xlm), [XLM-RoBERTa](../model_doc/xlm-roberta), [XLM-RoBERTa-XL](../model_doc/xlm-roberta-xl), [X-MOD](../model_doc/xmod), [YOSO](../model_doc/yoso) <!--End of the generated tip--> </Tip> ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ๋ชจ๋‘ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install transformers datasets evaluate ``` Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜์—ฌ ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ์ปค๋ฎค๋‹ˆํ‹ฐ์™€์˜ ๊ณต์œ ๋ฅผ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. 
๋ฉ”์‹œ์ง€๊ฐ€ ํ‘œ์‹œ๋˜๋ฉด(When prompted) ํ† ํฐ์„ ์ž…๋ ฅํ•˜์—ฌ ๋กœ๊ทธ์ธํ•ฉ๋‹ˆ๋‹ค: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## ELI5 ๋ฐ์ดํ„ฐ ์„ธํŠธ ๊ฐ€์ ธ์˜ค๊ธฐ[[load-eli5-dataset]] ๋จผ์ € ๐Ÿค— Datasets ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ ELI5 ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ r/askscience ์ค‘ ์ผ๋ถ€๋งŒ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ ํ•™์Šต์— ๋” ๋งŽ์€ ์‹œ๊ฐ„์„ ํ• ์• ํ•˜๊ธฐ ์ „์— ๋ชจ๋“  ๊ฒƒ์ด ์ž‘๋™ํ•˜๋Š”์ง€ ์‹คํ—˜ํ•˜๊ณ  ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```py >>> from datasets import load_dataset >>> eli5 = load_dataset("eli5", split="train_asks[:5000]") ``` ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ `train_asks`๋ฅผ [`~datasets.Dataset.train_test_split`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ด ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ์™€ ํ…Œ์ŠคํŠธ ๋ฐ์ดํ„ฐ๋กœ ๋ถ„ํ• ํ•ฉ๋‹ˆ๋‹ค: ```py >>> eli5 = eli5.train_test_split(test_size=0.2) ``` ๊ทธ๋ฆฌ๊ณ  ์•„๋ž˜ ์˜ˆ์‹œ๋ฅผ ์‚ดํŽด๋ณด์„ธ์š”: ```py >>> eli5["train"][0] {'answers': {'a_id': ['c3d1aib', 'c3d4lya'], 'score': [6, 3], 'text': ["The velocity needed to remain in orbit is equal to the square root of Newton's constant times the mass of earth divided by the distance from the center of the earth. I don't know the altitude of that specific mission, but they're usually around 300 km. That means he's going 7-8 km/s.\n\nIn space there are no other forces acting on either the shuttle or the guy, so they stay in the same position relative to each other. If he were to become unable to return to the ship, he would presumably run out of oxygen, or slowly fall into the atmosphere and burn up.", "Hope you don't mind me asking another question, but why aren't there any stars visible in this photo?"]}, 'answers_urls': {'url': []}, 'document': '', 'q_id': 'nyxfp', 'selftext': '_URL_0_\n\nThis was on the front page earlier and I have a few questions about it. Is it possible to calculate how fast the astronaut would be orbiting the earth? 
Also how does he stay close to the shuttle so that he can return safely, i.e is he orbiting at the same speed and can therefore stay next to it? And finally if his propulsion system failed, would he eventually re-enter the atmosphere and presumably die?',
 'selftext_urls': {'url': ['http://apod.nasa.gov/apod/image/1201/freeflyer_nasa_3000.jpg']},
 'subreddit': 'askscience',
 'title': 'Few questions about this space walk photograph.',
 'title_urls': {'url': []}}
```

๋งŽ์•„ ๋ณด์ผ ์ˆ˜ ์žˆ์ง€๋งŒ ์‹ค์ œ๋กœ๋Š” `text` ํ•„๋“œ์—๋งŒ ์ง‘์ค‘ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ์–ธ์–ด ๋ชจ๋ธ๋ง ์ž‘์—…์˜ ๋ฉ‹์ง„ ์ ์€ (๋น„์ง€๋„ ํ•™์Šต์œผ๋กœ) *๋‹ค์Œ ๋‹จ์–ด๊ฐ€ ๋ ˆ์ด๋ธ”*์ด๊ธฐ ๋•Œ๋ฌธ์— ๋ ˆ์ด๋ธ”์ด ๋”ฐ๋กœ ํ•„์š”ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค.

## ์ „์ฒ˜๋ฆฌ[[preprocess]]

<Youtube id="8PmhEIXhBvI"/>

๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง์„ ์œ„ํ•ด, ๋‹ค์Œ ๋‹จ๊ณ„๋กœ DistilRoBERTa ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๊ฐ€์ ธ์™€์„œ `text` ํ•˜์œ„ ํ•„๋“œ๋ฅผ ์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("distilroberta-base")
```

์œ„์˜ ์˜ˆ์ œ์—์„œ์™€ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ, `text` ํ•„๋“œ๋Š” `answers` ์•ˆ์— ์ค‘์ฒฉ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์ค‘์ฒฉ๋œ ๊ตฌ์กฐ์—์„œ [`flatten`](https://huggingface.co/docs/datasets/process#flatten) ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ `text` ํ•˜์œ„ ํ•„๋“œ๋ฅผ ์ถ”์ถœํ•ฉ๋‹ˆ๋‹ค:

```py
>>> eli5 = eli5.flatten()
>>> eli5["train"][0]
{'answers.a_id': ['c3d1aib', 'c3d4lya'],
 'answers.score': [6, 3],
 'answers.text': ["The velocity needed to remain in orbit is equal to the square root of Newton's constant times the mass of earth divided by the distance from the center of the earth. I don't know the altitude of that specific mission, but they're usually around 300 km. That means he's going 7-8 km/s.\n\nIn space there are no other forces acting on either the shuttle or the guy, so they stay in the same position relative to each other.
If he were to become unable to return to the ship, he would presumably run out of oxygen, or slowly fall into the atmosphere and burn up.", "Hope you don't mind me asking another question, but why aren't there any stars visible in this photo?"], 'answers_urls.url': [], 'document': '', 'q_id': 'nyxfp', 'selftext': '_URL_0_\n\nThis was on the front page earlier and I have a few questions about it. Is it possible to calculate how fast the astronaut would be orbiting the earth? Also how does he stay close to the shuttle so that he can return safely, i.e is he orbiting at the same speed and can therefore stay next to it? And finally if his propulsion system failed, would he eventually re-enter the atmosphere and presumably die?', 'selftext_urls.url': ['http://apod.nasa.gov/apod/image/1201/freeflyer_nasa_3000.jpg'], 'subreddit': 'askscience', 'title': 'Few questions about this space walk photograph.', 'title_urls.url': []} ``` ์ด์ œ ๊ฐ ํ•˜์œ„ ํ•„๋“œ๋Š” `answers` ์ ‘๋‘์‚ฌ(prefix)๋กœ ํ‘œ์‹œ๋œ ๋Œ€๋กœ ๋ณ„๋„์˜ ์—ด์ด ๋˜๊ณ , `text` ํ•„๋“œ๋Š” ์ด์ œ ๋ฆฌ์ŠคํŠธ๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ๊ฐ ๋ฌธ์žฅ์„ ๊ฐœ๋ณ„์ ์œผ๋กœ ํ† ํฐํ™”ํ•˜๋Š” ๋Œ€์‹  ๋ฆฌ์ŠคํŠธ๋ฅผ ๋ฌธ์ž์—ด๋กœ ๋ณ€ํ™˜ํ•˜์—ฌ ํ•œ๋ฒˆ์— ํ† ํฐํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‹ค์Œ์€ ๊ฐ ์˜ˆ์ œ์— ๋Œ€ํ•ด ๋ฌธ์ž์—ด๋กœ ์ด๋ฃจ์–ด์ง„ ๋ฆฌ์ŠคํŠธ๋ฅผ `join`ํ•˜๊ณ  ๊ฒฐ๊ณผ๋ฅผ ํ† ํฐํ™”ํ•˜๋Š” ์ฒซ ๋ฒˆ์งธ ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜์ž…๋‹ˆ๋‹ค: ```py >>> def preprocess_function(examples): ... return tokenizer([" ".join(x) for x in examples["answers.text"]]) ``` ์ด ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ์ ์šฉํ•˜๊ธฐ ์œ„ํ•ด ๐Ÿค— Datasets [`~datasets.Dataset.map`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์—ฌ๋Ÿฌ ์š”์†Œ๋ฅผ ํ•œ ๋ฒˆ์— ์ฒ˜๋ฆฌํ•˜๋„๋ก `batched=True`๋ฅผ ์„ค์ •ํ•˜๊ณ  `num_proc`๋กœ ์ฒ˜๋ฆฌ ํšŸ์ˆ˜๋ฅผ ๋Š˜๋ฆฌ๋ฉด `map` ํ•จ์ˆ˜์˜ ์†๋„๋ฅผ ๋†’์ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•„์š”ํ•˜์ง€ ์•Š์€ ์—ด์€ ์ œ๊ฑฐํ•ฉ๋‹ˆ๋‹ค: ```py >>> tokenized_eli5 = eli5.map( ... preprocess_function, ... batched=True, ... num_proc=4, ... 
remove_columns=eli5["train"].column_names, ... ) ``` ์ด ๋ฐ์ดํ„ฐ ์„ธํŠธ์—๋Š” ํ† ํฐ ์‹œํ€€์Šค๊ฐ€ ํฌํ•จ๋˜์–ด ์žˆ์ง€๋งŒ ์ด ์ค‘ ์ผ๋ถ€๋Š” ๋ชจ๋ธ์˜ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด๋ณด๋‹ค ๊น๋‹ˆ๋‹ค. ์ด์ œ ๋‘ ๋ฒˆ์งธ ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•ด - ๋ชจ๋“  ์‹œํ€€์Šค๋ฅผ ์—ฐ๊ฒฐํ•˜๊ณ  - ์—ฐ๊ฒฐ๋œ ์‹œํ€€์Šค๋ฅผ ์ •์˜ํ•œ `block_size` ๋ณด๋‹ค ๋” ์งง์€ ๋ฉ์–ด๋ฆฌ๋กœ ๋ถ„ํ• ํ•˜๋Š”๋ฐ, ์ด ๋ฉ์–ด๋ฆฌ๋Š” ๋ชจ๋ธ์˜ ์ตœ๋Œ€ ์ž…๋ ฅ ๊ธธ์ด๋ณด๋‹ค ์งง๊ณ  GPU RAM์ด ์ˆ˜์šฉํ•  ์ˆ˜ ์žˆ๋Š” ๊ธธ์ด์—ฌ์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```py >>> block_size = 128 >>> def group_texts(examples): ... # Concatenate all texts. ... concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()} ... total_length = len(concatenated_examples[list(examples.keys())[0]]) ... # We drop the small remainder, we could add padding if the model supported it instead of this drop, you can ... # customize this part to your needs. ... if total_length >= block_size: ... total_length = (total_length // block_size) * block_size ... # Split by chunks of block_size. ... result = { ... k: [t[i : i + block_size] for i in range(0, total_length, block_size)] ... for k, t in concatenated_examples.items() ... } ... result["labels"] = result["input_ids"].copy() ... return result ``` ์ „์ฒด ๋ฐ์ดํ„ฐ ์„ธํŠธ์— `group_texts` ํ•จ์ˆ˜๋ฅผ ์ ์šฉํ•ฉ๋‹ˆ๋‹ค: ```py >>> lm_dataset = tokenized_eli5.map(group_texts, batched=True, num_proc=4) ``` ์ด์ œ [`DataCollatorForLanguageModeling`]์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ ์˜ˆ์ œ์˜ ๋ฐฐ์น˜๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ ์„ธํŠธ ์ „์ฒด๋ฅผ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ํŒจ๋”ฉํ•˜๋Š” ๊ฒƒ๋ณด๋‹ค collation ๋‹จ๊ณ„์—์„œ ๋งค ๋ฐฐ์น˜์•ˆ์—์„œ์˜ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ๋ฌธ์žฅ์„ *๋™์ ์œผ๋กœ ํŒจ๋”ฉ*ํ•˜๋Š” ๊ฒƒ์ด ๋” ํšจ์œจ์ ์ž…๋‹ˆ๋‹ค. 
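์•ž์„œ ์ •์˜ํ•œ `group_texts`๊ฐ€ ์‹œํ€€์Šค๋ฅผ ์–ด๋–ป๊ฒŒ ์ด์–ด ๋ถ™์ด๊ณ  ์ž˜๋ผ๋‚ด๋Š”์ง€๋Š” ์ž‘์€ ์ˆซ์ž ์˜ˆ์ œ๋กœ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ์„ค๋ช…์„ ์œ„ํ•ด `block_size`๋ฅผ 4๋กœ ์ค„์ธ ์Šค์ผ€์น˜์ด๋ฉฐ, ์ž…๋ ฅ ์ˆซ์ž๋Š” ์ง€์–ด๋‚ธ ํ† ํฐ ID์ž…๋‹ˆ๋‹ค:

```python
def group_texts(examples, block_size=4):
    # ๋ชจ๋“  ์‹œํ€€์Šค๋ฅผ ์ด์–ด ๋ถ™์ธ ๋’ค block_size ๋‹จ์œ„๋กœ ์ž˜๋ผ๋‚ด๊ณ , ๋‚จ๋Š” ๋ถ€๋ถ„์€ ๋ฒ„๋ฆฝ๋‹ˆ๋‹ค
    concatenated = {k: sum(examples[k], []) for k in examples}
    total_length = len(concatenated[list(examples)[0]])
    total_length = (total_length // block_size) * block_size
    result = {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated.items()
    }
    # ์–ธ์–ด ๋ชจ๋ธ๋ง์—์„œ๋Š” ์ž…๋ ฅ ์ž์ฒด๊ฐ€ ๋ ˆ์ด๋ธ”์ด ๋ฉ๋‹ˆ๋‹ค
    result["labels"] = result["input_ids"].copy()
    return result


# ๊ธธ์ด 3, 4, 3์ธ ์„ธ ์‹œํ€€์Šค(์ด 10๊ฐœ ํ† ํฐ) -> ๊ธธ์ด 4์งœ๋ฆฌ ๋ฉ์–ด๋ฆฌ 2๊ฐœ, ๋‚˜๋จธ์ง€ 2๊ฐœ๋Š” ๋ฒ„๋ ค์ง
toy = {"input_ids": [[1, 2, 3], [4, 5, 6, 7], [8, 9, 10]]}
out = group_texts(toy)
print(out["input_ids"])  # [[1, 2, 3, 4], [5, 6, 7, 8]]
```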
<frameworkcontent>
<pt>
์‹œํ€€์Šค ๋ ํ† ํฐ์„ ํŒจ๋”ฉ ํ† ํฐ์œผ๋กœ ์‚ฌ์šฉํ•˜๊ณ  ๋ฐ์ดํ„ฐ๋ฅผ ๋ฐ˜๋ณตํ•  ๋•Œ๋งˆ๋‹ค ํ† ํฐ์„ ๋ฌด์ž‘์œ„๋กœ ๋งˆ์Šคํ‚นํ•˜๋„๋ก `mlm_probability`๋ฅผ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> from transformers import DataCollatorForLanguageModeling

>>> tokenizer.pad_token = tokenizer.eos_token
>>> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
```
</pt>
<tf>
์‹œํ€€์Šค ๋ ํ† ํฐ์„ ํŒจ๋”ฉ ํ† ํฐ์œผ๋กœ ์‚ฌ์šฉํ•˜๊ณ  ๋ฐ์ดํ„ฐ๋ฅผ ๋ฐ˜๋ณตํ•  ๋•Œ๋งˆ๋‹ค ํ† ํฐ์„ ๋ฌด์ž‘์œ„๋กœ ๋งˆ์Šคํ‚นํ•˜๋„๋ก `mlm_probability`๋ฅผ ์ง€์ •ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> from transformers import DataCollatorForLanguageModeling

>>> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15, return_tensors="tf")
```
</tf>
</frameworkcontent>

## ํ›ˆ๋ จ[[train]]

<frameworkcontent>
<pt>
<Tip>

[`Trainer`]๋กœ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐ ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ [์—ฌ๊ธฐ](../training#train-with-pytorch-trainer)๋ฅผ ์‚ดํŽด๋ณด์„ธ์š”!

</Tip>

์ด์ œ ๋ชจ๋ธ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! [`AutoModelForMaskedLM`]๋ฅผ ์‚ฌ์šฉํ•ด DistilRoBERTa ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค:

```py
>>> from transformers import AutoModelForMaskedLM

>>> model = AutoModelForMaskedLM.from_pretrained("distilroberta-base")
```

์ด์ œ ์„ธ ๋‹จ๊ณ„๊ฐ€ ๋‚จ์•˜์Šต๋‹ˆ๋‹ค:

1. [`TrainingArguments`]์˜ ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ ์ €์žฅ ์œ„์น˜๋ฅผ ์ง€์ •ํ•˜๋Š” `output_dir`์€ ์œ ์ผํ•œ ํ•„์ˆ˜ ํŒŒ๋ผ๋ฏธํ„ฐ์ž…๋‹ˆ๋‹ค. `push_to_hub=True`๋ฅผ ์„ค์ •ํ•˜์—ฌ ์ด ๋ชจ๋ธ์„ Hub์— ์—…๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค (๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๋ ค๋ฉด Hugging Face์— ๋กœ๊ทธ์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค).
2. ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ ์„ธํŠธ ๋ฐ ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ(collator)์™€ ํ•จ๊ป˜ ํ›ˆ๋ จ ์ธ์ˆ˜๋ฅผ [`Trainer`]์— ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค.
3. [`~Trainer.train`]์„ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค.

```py
>>> from transformers import TrainingArguments, Trainer

>>> training_args = TrainingArguments(
...     output_dir="my_awesome_eli5_mlm_model",
...     evaluation_strategy="epoch",
...     learning_rate=2e-5,
...     num_train_epochs=3,
...     weight_decay=0.01,
...     push_to_hub=True,
... )

>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=lm_dataset["train"],
...     eval_dataset=lm_dataset["test"],
...     data_collator=data_collator,
... )

>>> trainer.train()
```

ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด [`~transformers.Trainer.evaluate`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํŽ„ํ”Œ๋ ‰์„œํ‹ฐ(perplexity)๋ฅผ ๊ณ„์‚ฐํ•˜๊ณ  ๋ชจ๋ธ์„ ํ‰๊ฐ€ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> import math

>>> eval_results = trainer.evaluate()
>>> print(f"Perplexity: {math.exp(eval_results['eval_loss']):.2f}")
Perplexity: 8.76
```

๊ทธ๋ฆฌ๊ณ  [`~transformers.Trainer.push_to_hub`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ด ๋‹ค๋ฅธ ์‚ฌ๋žŒ๋“ค์ด ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก, Hub๋กœ ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค.

```py
>>> trainer.push_to_hub()
```
</pt>
<tf>
<Tip>

Keras๋กœ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐ ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ [์—ฌ๊ธฐ](../training#train-a-tensorflow-model-with-keras)๋ฅผ ์‚ดํŽด๋ณด์„ธ์š”!

</Tip>

TensorFlow๋กœ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” ์˜ตํ‹ฐ๋งˆ์ด์ €(optimizer) ํ•จ์ˆ˜ ์„ค์ •, ํ•™์Šต๋ฅ (learning rate) ์Šค์ผ€์ค„๋ง, ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ ์„ค์ •๋ถ€ํ„ฐ ์‹œ์ž‘ํ•˜์„ธ์š”:

```py
>>> from transformers import create_optimizer, AdamWeightDecay

>>> optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01)
```

๋‹ค์Œ์œผ๋กœ [`TFAutoModelForMaskedLM`]๋ฅผ ์‚ฌ์šฉํ•ด DistilRoBERTa ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค:

```py
>>> from transformers import TFAutoModelForMaskedLM

>>> model = TFAutoModelForMaskedLM.from_pretrained("distilroberta-base")
```

[`~transformers.TFPreTrainedModel.prepare_tf_dataset`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ `tf.data.Dataset` ํ˜•์‹์œผ๋กœ ๋ณ€ํ™˜ํ•˜์„ธ์š”:

```py
>>> tf_train_set = model.prepare_tf_dataset(
...     lm_dataset["train"],
...     shuffle=True,
...     batch_size=16,
...     collate_fn=data_collator,
... )

>>> tf_test_set = model.prepare_tf_dataset(
...     lm_dataset["test"],
...     shuffle=False,
...     batch_size=16,
...     collate_fn=data_collator,
... )
```

[`compile`](https://keras.io/api/models/model_training_apis/#compile-method) ๋ฉ”์†Œ๋“œ๋ฅผ ํ†ตํ•ด ๋ชจ๋ธ ํ›ˆ๋ จ์„ ๊ตฌ์„ฑํ•ฉ๋‹ˆ๋‹ค:

```py
>>> import tensorflow as tf

>>> model.compile(optimizer=optimizer)
```

ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•˜๊ธฐ ์ „์—, ๋ชจ๋ธ์„ Hub์— ์—…๋กœ๋“œํ•  ๋ฐฉ๋ฒ•์„ ์„ค์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์—…๋กœ๋“œํ•  ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €์˜ ์œ„์น˜๋ฅผ [`~transformers.PushToHubCallback`]์— ์ง€์ •ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค:

```py
>>> from transformers.keras_callbacks import PushToHubCallback

>>> callback = PushToHubCallback(
...     output_dir="my_awesome_eli5_mlm_model",
...     tokenizer=tokenizer,
... )
```

๋“œ๋””์–ด ๋ชจ๋ธ์„ ํ›ˆ๋ จํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•  ๋•Œ ํ›ˆ๋ จ ๋ฐ ๊ฒ€์ฆ ๋ฐ์ดํ„ฐ ์„ธํŠธ, ์—ํฌํฌ ์ˆ˜, ์ฝœ๋ฐฑ์ด ํฌํ•จ๋œ [`fit`](https://keras.io/api/models/model_training_apis/#fit-method)์„ ํ˜ธ์ถœํ•ฉ๋‹ˆ๋‹ค:

```py
>>> model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=[callback])
```

ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด, ์ž๋™์œผ๋กœ Hub๋กœ ์—…๋กœ๋“œ๋˜์–ด ๋ˆ„๊ตฌ๋‚˜ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค!
</tf>
</frameworkcontent>

<Tip>

๋งˆ์Šคํ‚น๋œ ์–ธ์–ด ๋ชจ๋ธ๋ง์„ ์œ„ํ•ด ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ๋ณด๋‹ค ์‹ฌ์ธต์ ์ธ ์˜ˆ์ œ๋Š” [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb) ๋˜๋Š” [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb)์„ ์ฐธ์กฐํ•˜์„ธ์š”.

</Tip>

## ์ถ”๋ก [[inference]]

์ง€๊ธˆ๊นŒ์ง€ ๋ชจ๋ธ ๋ฏธ์„ธ ์กฐ์ •์„ ์ž˜ ํ–ˆ์œผ๋‹ˆ, ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค!

๋ชจ๋ธ์ด ๋นˆ์นธ์„ ์ฑ„์šธ ํ…์ŠคํŠธ๋ฅผ ์ŠคํŽ˜์…œ ํ† ํฐ(special token)์ธ `<mask>` ํ† ํฐ์œผ๋กœ ํ‘œ์‹œํ•ฉ๋‹ˆ๋‹ค:

```py
>>> text = "The Milky Way is a <mask> galaxy."
```

์ถ”๋ก ์„ ์œ„ํ•ด ๋ฏธ์„ธ ์กฐ์ •ํ•œ ๋ชจ๋ธ์„ ํ…Œ์ŠคํŠธํ•˜๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ [`pipeline`]์—์„œ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. `fill-mask` ํƒœ์Šคํฌ๋กœ `pipeline`์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ณ  ํ…์ŠคํŠธ๋ฅผ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค.
`top_k` ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ˜ํ™˜ํ•˜๋Š” ์˜ˆ์ธก์˜ ์ˆ˜๋ฅผ ์ง€์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค:

```py
>>> from transformers import pipeline

>>> mask_filler = pipeline("fill-mask", "stevhliu/my_awesome_eli5_mlm_model")
>>> mask_filler(text, top_k=3)
[{'score': 0.5150994658470154,
  'token': 21300,
  'token_str': ' spiral',
  'sequence': 'The Milky Way is a spiral galaxy.'},
 {'score': 0.07087188959121704,
  'token': 2232,
  'token_str': ' massive',
  'sequence': 'The Milky Way is a massive galaxy.'},
 {'score': 0.06434620916843414,
  'token': 650,
  'token_str': ' small',
  'sequence': 'The Milky Way is a small galaxy.'}]
```

<frameworkcontent>
<pt>
ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•˜๊ณ  `input_ids`๋ฅผ PyTorch ํ…์„œ ํ˜•ํƒœ๋กœ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค.
๋˜ํ•œ, `<mask>` ํ† ํฐ์˜ ์œ„์น˜๋ฅผ ์ง€์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> import torch
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_eli5_mlm_model")
>>> inputs = tokenizer(text, return_tensors="pt")
>>> mask_token_index = torch.where(inputs["input_ids"] == tokenizer.mask_token_id)[1]
```

๋ชจ๋ธ์— `inputs`๋ฅผ ์ž…๋ ฅํ•˜๊ณ , ๋งˆ์Šคํ‚น๋œ ํ† ํฐ์˜ `logits`๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> from transformers import AutoModelForMaskedLM

>>> model = AutoModelForMaskedLM.from_pretrained("stevhliu/my_awesome_eli5_mlm_model")
>>> logits = model(**inputs).logits
>>> mask_token_logits = logits[0, mask_token_index, :]
```

๊ทธ๋Ÿฐ ๋‹ค์Œ ๊ฐ€์žฅ ๋†’์€ ํ™•๋ฅ ์„ ๊ฐ€์ง„ ๋งˆ์Šคํฌ ํ† ํฐ 3๊ฐœ๋ฅผ ๋ฐ˜ํ™˜ํ•˜๊ณ , ์ถœ๋ ฅํ•ฉ๋‹ˆ๋‹ค:

```py
>>> top_3_tokens = torch.topk(mask_token_logits, 3, dim=1).indices[0].tolist()

>>> for token in top_3_tokens:
...     print(text.replace(tokenizer.mask_token, tokenizer.decode([token])))
The Milky Way is a spiral galaxy.
The Milky Way is a massive galaxy.
The Milky Way is a small galaxy.
```
</pt>
<tf>
ํ…์ŠคํŠธ๋ฅผ ํ† ํฐํ™”ํ•˜๊ณ  `input_ids`๋ฅผ TensorFlow ํ…์„œ ํ˜•ํƒœ๋กœ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค.
๋˜ํ•œ, `<mask>` ํ† ํฐ์˜ ์œ„์น˜๋ฅผ ์ง€์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> import tensorflow as tf
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_eli5_mlm_model")
>>> inputs = tokenizer(text, return_tensors="tf")
>>> mask_token_index = tf.where(inputs["input_ids"] == tokenizer.mask_token_id)[0, 1]
```

๋ชจ๋ธ์— `inputs`๋ฅผ ์ž…๋ ฅํ•˜๊ณ , ๋งˆ์Šคํ‚น๋œ ํ† ํฐ์˜ `logits`๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> from transformers import TFAutoModelForMaskedLM

>>> model = TFAutoModelForMaskedLM.from_pretrained("stevhliu/my_awesome_eli5_mlm_model")
>>> logits = model(**inputs).logits
>>> mask_token_logits = logits[0, mask_token_index, :]
```

๊ทธ๋Ÿฐ ๋‹ค์Œ ๊ฐ€์žฅ ๋†’์€ ํ™•๋ฅ ์„ ๊ฐ€์ง„ ๋งˆ์Šคํฌ ํ† ํฐ 3๊ฐœ๋ฅผ ๋ฐ˜ํ™˜ํ•˜๊ณ , ์ถœ๋ ฅํ•ฉ๋‹ˆ๋‹ค:

```py
>>> top_3_tokens = tf.math.top_k(mask_token_logits, 3).indices.numpy()

>>> for token in top_3_tokens:
...     print(text.replace(tokenizer.mask_token, tokenizer.decode([token])))
The Milky Way is a spiral galaxy.
The Milky Way is a massive galaxy.
The Milky Way is a small galaxy.
```
</tf>
</frameworkcontent>
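์œ„ ์˜ˆ์ œ๋“ค์€ ๋กœ์ง“(logits)์—์„œ ๋ฐ”๋กœ ์ƒ์œ„ ํ† ํฐ์„ ์„ ํƒํ–ˆ์ง€๋งŒ, ํŒŒ์ดํ”„๋ผ์ธ์ด ๋ฐ˜ํ™˜ํ•˜๋Š” `score`์ฒ˜๋Ÿผ ํ™•๋ฅ ๊ฐ’์ด ํ•„์š”ํ•˜๋‹ค๋ฉด ๋กœ์ง“์— ์†Œํ”„ํŠธ๋งฅ์Šค(softmax)๋ฅผ ์ ์šฉํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค. ๋‹ค์Œ์€ ์ด ๊ณ„์‚ฐ ๊ณผ์ •๋งŒ ๋ณด์—ฌ์ฃผ๋Š” ์ˆœ์ˆ˜ ํŒŒ์ด์ฌ ์Šค์ผ€์น˜์ด๋ฉฐ, ๋กœ์ง“ ๊ฐ’์€ ์‹ค์ œ ๋ชจ๋ธ ์ถœ๋ ฅ์ด ์•„๋‹ˆ๋ผ ์ž„์˜๋กœ ๋งŒ๋“  ์˜ˆ์‹œ์ž…๋‹ˆ๋‹ค:

```py
import math

def softmax(logits):
    # ์ˆ˜์น˜ ์•ˆ์ •์„ฑ์„ ์œ„ํ•ด ์ตœ๋Œ“๊ฐ’์„ ๋บ€ ๋’ค ์ง€์ˆ˜ํ™”ํ•˜๊ณ , ํ•ฉ์ด 1์ด ๋˜๋„๋ก ์ •๊ทœํ™”ํ•ฉ๋‹ˆ๋‹ค
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def top_k(scores, k):
    # (์ธ๋ฑ์Šค, ์ ์ˆ˜) ์Œ์„ ์ ์ˆ˜ ๊ธฐ์ค€ ๋‚ด๋ฆผ์ฐจ์ˆœ์œผ๋กœ ์ •๋ ฌํ•ด ์ƒ์œ„ k๊ฐœ๋งŒ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค
    ranked = sorted(enumerate(scores), key=lambda pair: pair[1], reverse=True)
    return ranked[:k]

# ์–ดํœ˜ 5๊ฐœ์งœ๋ฆฌ ์žฅ๋‚œ๊ฐ ์˜ˆ์‹œ ๋กœ์ง“ (๊ฐ€์ƒ์˜ ๊ฐ’์ž…๋‹ˆ๋‹ค)
logits = [2.0, 0.5, 1.0, -1.0, 0.0]
for idx, prob in top_k(softmax(logits), 3):
    print(idx, round(prob, 3))
```

`torch.topk`๋‚˜ `tf.math.top_k`๊ฐ€ ๋กœ์ง“ ํ…์„œ์— ๋Œ€ํ•ด ์ˆ˜ํ–‰ํ•˜๋Š” ์—ฐ์‚ฐ๋„ ๊ฐœ๋…์ ์œผ๋กœ๋Š” ์ด์™€ ๊ฐ™์Šต๋‹ˆ๋‹ค.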
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# ๋‹จ์ผ ์˜์ƒ ๊ธฐ๋ฐ˜ ๊นŠ์ด ์ถ”์ •[[monocular-depth-estimation]]

๋‹จ์ผ ์˜์ƒ ๊ธฐ๋ฐ˜ ๊นŠ์ด ์ถ”์ •์€ ํ•œ ์žฅ๋ฉด์˜ ๋‹จ์ผ ์ด๋ฏธ์ง€์—์„œ ์žฅ๋ฉด์˜ ๊นŠ์ด ์ •๋ณด๋ฅผ ์˜ˆ์ธกํ•˜๋Š” ์ปดํ“จํ„ฐ ๋น„์ „ ์ž‘์—…์ž…๋‹ˆ๋‹ค. ์ฆ‰, ๋‹จ์ผ ์นด๋ฉ”๋ผ ์‹œ์ ์˜ ์žฅ๋ฉด์— ์žˆ๋Š” ๋ฌผ์ฒด์˜ ๊ฑฐ๋ฆฌ๋ฅผ ์˜ˆ์ธกํ•˜๋Š” ๊ณผ์ •์ž…๋‹ˆ๋‹ค.

๋‹จ์ผ ์˜์ƒ ๊ธฐ๋ฐ˜ ๊นŠ์ด ์ถ”์ •์€ 3D ์žฌ๊ตฌ์„ฑ, ์ฆ๊ฐ• ํ˜„์‹ค, ์ž์œจ ์ฃผํ–‰, ๋กœ๋ด‡ ๊ณตํ•™ ๋“ฑ ๋‹ค์–‘ํ•œ ๋ถ„์•ผ์—์„œ ์‘์šฉ๋ฉ๋‹ˆ๋‹ค. ์กฐ๋ช… ์กฐ๊ฑด, ๊ฐ€๋ ค์ง, ํ…์Šค์ฒ˜์™€ ๊ฐ™์€ ์š”์†Œ์˜ ์˜ํ–ฅ์„ ๋ฐ›์„ ์ˆ˜ ์žˆ๋Š” ์žฅ๋ฉด ๋‚ด ๋ฌผ์ฒด์™€ ํ•ด๋‹น ๊นŠ์ด ์ •๋ณด ๊ฐ„์˜ ๋ณต์žกํ•œ ๊ด€๊ณ„๋ฅผ ๋ชจ๋ธ์ด ์ดํ•ดํ•ด์•ผ ํ•˜๋ฏ€๋กœ ๊นŒ๋‹ค๋กœ์šด ์ž‘์—…์ž…๋‹ˆ๋‹ค.

<Tip>

์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ ๋‹ค๋ฃจ๋Š” ์ž‘์—…์€ ๋‹ค์Œ ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜์—์„œ ์ง€์›๋ฉ๋‹ˆ๋‹ค:

<!--This tip is automatically generated by `make fix-copies`, do not fill manually!-->

[DPT](../model_doc/dpt), [GLPN](../model_doc/glpn)

<!--End of the generated tip-->

</Tip>

์ด๋ฒˆ ๊ฐ€์ด๋“œ์—์„œ ๋ฐฐ์šธ ๋‚ด์šฉ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค:

* ๊นŠ์ด ์ถ”์ • ํŒŒ์ดํ”„๋ผ์ธ ๋งŒ๋“ค๊ธฐ
* ์ง์ ‘ ๊นŠ์ด ์ถ”์ • ์ถ”๋ก ํ•˜๊ธฐ

์‹œ์ž‘ํ•˜๊ธฐ ์ „์—, ํ•„์š”ํ•œ ๋ชจ๋“  ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”:

```bash
pip install -q transformers
```

## ๊นŠ์ด ์ถ”์ • ํŒŒ์ดํ”„๋ผ์ธ[[depth-estimation-pipeline]]

๊นŠ์ด ์ถ”์ •์„ ์ถ”๋ก ํ•˜๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ ํ•ด๋‹น ๊ธฐ๋Šฅ์„ ์ œ๊ณตํ•˜๋Š” [`pipeline`]์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค.

[Hugging Face Hub ์ฒดํฌํฌ์ธํŠธ](https://huggingface.co/models?pipeline_tag=depth-estimation&sort=downloads)์—์„œ ํŒŒ์ดํ”„๋ผ์ธ์„ ์ดˆ๊ธฐํ™”ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> from transformers import pipeline

>>> checkpoint = "vinvino02/glpn-nyu"
>>> depth_estimator = pipeline("depth-estimation", model=checkpoint)
```

๋‹ค์Œ์œผ๋กœ, ๋ถ„์„ํ•  ์ด๋ฏธ์ง€๋ฅผ ํ•œ ์žฅ ์„ ํƒํ•˜์„ธ์š”:

```py
>>> from PIL import Image
>>> import requests

>>> url = "https://unsplash.com/photos/HwBAsSbPBDU/download?ixid=MnwxMjA3fDB8MXxzZWFyY2h8MzR8fGNhciUyMGluJTIwdGhlJTIwc3RyZWV0fGVufDB8MHx8fDE2Nzg5MDEwODg&force=true&w=640"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> image
```

<div class="flex justify-center">
     <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/depth-estimation-example.jpg" alt="Photo of a busy street"/>
</div>

์ด๋ฏธ์ง€๋ฅผ ํŒŒ์ดํ”„๋ผ์ธ์œผ๋กœ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค.

```py
>>> predictions = depth_estimator(image)
```

ํŒŒ์ดํ”„๋ผ์ธ์€ ๋‘ ๊ฐœ์˜ ํ•ญ๋ชฉ์„ ๊ฐ€์ง€๋Š” ๋”•์…”๋„ˆ๋ฆฌ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ์ฒซ ๋ฒˆ์งธ๋Š” `predicted_depth`๋กœ ๊ฐ ํ”ฝ์…€์˜ ๊นŠ์ด๋ฅผ ๋ฏธํ„ฐ๋กœ ํ‘œํ˜„ํ•œ ๊ฐ’์„ ๊ฐ€์ง€๋Š” ํ…์„œ์ž…๋‹ˆ๋‹ค.
๋‘ ๋ฒˆ์งธ๋Š” `depth`๋กœ ๊นŠ์ด ์ถ”์ • ๊ฒฐ๊ณผ๋ฅผ ์‹œ๊ฐํ™”ํ•˜๋Š” PIL ์ด๋ฏธ์ง€์ž…๋‹ˆ๋‹ค. ์ด์ œ ์‹œ๊ฐํ™”ํ•œ ๊ฒฐ๊ณผ๋ฅผ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> predictions["depth"] ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/depth-visualization.png" alt="Depth estimation visualization"/> </div> ## ์ง์ ‘ ๊นŠ์ด ์ถ”์ • ์ถ”๋ก ํ•˜๊ธฐ[[depth-estimation-inference-by-hand]] ์ด์ œ ๊นŠ์ด ์ถ”์ • ํŒŒ์ดํ”„๋ผ์ธ ์‚ฌ์šฉ๋ฒ•์„ ์‚ดํŽด๋ณด์•˜์œผ๋‹ˆ ๋™์ผํ•œ ๊ฒฐ๊ณผ๋ฅผ ๋ณต์ œํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์‚ดํŽด๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. [Hugging Face Hub ์ฒดํฌํฌ์ธํŠธ](https://huggingface.co/models?pipeline_tag=depth-estimation&sort=downloads)์—์„œ ๋ชจ๋ธ๊ณผ ๊ด€๋ จ ํ”„๋กœ์„ธ์„œ๋ฅผ ๊ฐ€์ ธ์˜ค๋Š” ๊ฒƒ๋ถ€ํ„ฐ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ ์ด์ „์— ์‚ฌ์šฉํ•œ ์ฒดํฌํฌ์ธํŠธ์™€ ๋™์ผํ•œ ๊ฒƒ์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoImageProcessor, AutoModelForDepthEstimation >>> checkpoint = "vinvino02/glpn-nyu" >>> image_processor = AutoImageProcessor.from_pretrained(checkpoint) >>> model = AutoModelForDepthEstimation.from_pretrained(checkpoint) ``` ํ•„์š”ํ•œ ์ด๋ฏธ์ง€ ๋ณ€ํ™˜์„ ์ฒ˜๋ฆฌํ•˜๋Š” `image_processor`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์— ๋Œ€ํ•œ ์ด๋ฏธ์ง€ ์ž…๋ ฅ์„ ์ค€๋น„ํ•ฉ๋‹ˆ๋‹ค. `image_processor`๋Š” ํฌ๊ธฐ ์กฐ์ • ๋ฐ ์ •๊ทœํ™” ๋“ฑ ํ•„์š”ํ•œ ์ด๋ฏธ์ง€ ๋ณ€ํ™˜์„ ์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค: ```py >>> pixel_values = image_processor(image, return_tensors="pt").pixel_values ``` ์ค€๋น„ํ•œ ์ž…๋ ฅ์„ ๋ชจ๋ธ๋กœ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค: ```py >>> import torch >>> with torch.no_grad(): ... outputs = model(pixel_values) ... predicted_depth = outputs.predicted_depth ``` ๊ฒฐ๊ณผ๋ฅผ ์‹œ๊ฐํ™”ํ•ฉ๋‹ˆ๋‹ค: ```py >>> import numpy as np >>> # ์›๋ณธ ์‚ฌ์ด์ฆˆ๋กœ ๋ณต์› >>> prediction = torch.nn.functional.interpolate( ... predicted_depth.unsqueeze(1), ... size=image.size[::-1], ... mode="bicubic", ... align_corners=False, ... 
).squeeze() >>> output = prediction.numpy() >>> formatted = (output * 255 / np.max(output)).astype("uint8") >>> depth = Image.fromarray(formatted) >>> depth ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/depth-visualization.png" alt="Depth estimation visualization"/> </div>
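์œ„ ์‹œ๊ฐํ™” ๋‹จ๊ณ„์˜ ํ•ต์‹ฌ์€ `output * 255 / np.max(output)` ๊ณต์‹์œผ๋กœ, ๊นŠ์ด ๊ฐ’์„ 0~255 ๋ฒ”์œ„๋กœ ์„ ํ˜• ์Šค์ผ€์ผ๋งํ•œ ๋’ค 8๋น„ํŠธ ์ •์ˆ˜๋กœ ๋ณ€ํ™˜ํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ž‘์€ ๋”๋ฏธ ๋ฐฐ์—ด(์ž„์˜๋กœ ๋งŒ๋“  ๊ฐ’)๋กœ ์ด ๋™์ž‘๋งŒ ๋”ฐ๋กœ ํ™•์ธํ•ด ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค:

```py
import numpy as np

# ๊ฐ€์ƒ์˜ 2x3 ๊นŠ์ด ๋งต (๊ฐ’์€ ์ž„์˜๋กœ ๋งŒ๋“  ์˜ˆ์‹œ์ž…๋‹ˆ๋‹ค)
output = np.array([[0.5, 1.0, 2.0],
                   [4.0, 8.0, 10.0]])

# ์ตœ๋Œ“๊ฐ’์ด 255๊ฐ€ ๋˜๋„๋ก ์„ ํ˜• ์Šค์ผ€์ผ๋งํ•œ ๋’ค 8๋น„ํŠธ ์ •์ˆ˜๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค
formatted = (output * 255 / np.max(output)).astype("uint8")
print(formatted)
```

์ด๋ ‡๊ฒŒ ๋งŒ๋“  ๋ฐฐ์—ด์„ `Image.fromarray`์— ๋„˜๊ธฐ๋ฉด ์œ„์™€ ๊ฐ™์€ ํšŒ์ƒ‰์กฐ ๊นŠ์ด ์ด๋ฏธ์ง€๋ฅผ ์–ป์Šต๋‹ˆ๋‹ค.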
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# ๊ฐ์ฒด ํƒ์ง€ [[object-detection]]

[[open-in-colab]]

๊ฐ์ฒด ํƒ์ง€๋Š” ์ด๋ฏธ์ง€์—์„œ ์ธ์Šคํ„ด์Šค(์˜ˆ: ์‚ฌ๋žŒ, ๊ฑด๋ฌผ ๋˜๋Š” ์ž๋™์ฐจ)๋ฅผ ๊ฐ์ง€ํ•˜๋Š” ์ปดํ“จํ„ฐ ๋น„์ „ ์ž‘์—…์ž…๋‹ˆ๋‹ค. ๊ฐ์ฒด ํƒ์ง€ ๋ชจ๋ธ์€ ์ด๋ฏธ์ง€๋ฅผ ์ž…๋ ฅ์œผ๋กœ ๋ฐ›๊ณ  ํƒ์ง€๋œ ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค์˜ ์ขŒํ‘œ์™€ ๊ด€๋ จ๋œ ๋ ˆ์ด๋ธ”์„ ์ถœ๋ ฅํ•ฉ๋‹ˆ๋‹ค. ํ•˜๋‚˜์˜ ์ด๋ฏธ์ง€์—๋Š” ์—ฌ๋Ÿฌ ๊ฐ์ฒด๊ฐ€ ์žˆ์„ ์ˆ˜ ์žˆ์œผ๋ฉฐ ๊ฐ๊ฐ์€ ์ž์ฒด์ ์ธ ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค์™€ ๋ ˆ์ด๋ธ”์„ ๊ฐ€์งˆ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค(์˜ˆ: ์ฐจ์™€ ๊ฑด๋ฌผ์ด ์žˆ๋Š” ์ด๋ฏธ์ง€). ๋˜ํ•œ ๊ฐ ๊ฐ์ฒด๋Š” ์ด๋ฏธ์ง€์˜ ๋‹ค๋ฅธ ๋ถ€๋ถ„์— ์กด์žฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค(์˜ˆ: ์ด๋ฏธ์ง€์— ์—ฌ๋Ÿฌ ๋Œ€์˜ ์ฐจ๊ฐ€ ์žˆ์„ ์ˆ˜ ์žˆ์Œ).

์ด ์ž‘์—…์€ ๋ณดํ–‰์ž, ๋„๋กœ ํ‘œ์ง€ํŒ, ์‹ ํ˜ธ๋“ฑ๊ณผ ๊ฐ™์€ ๊ฒƒ๋“ค์„ ๊ฐ์ง€ํ•˜๋Š” ์ž์œจ ์ฃผํ–‰์— ์ผ๋ฐ˜์ ์œผ๋กœ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค. ๋‹ค๋ฅธ ์‘์šฉ ๋ถ„์•ผ๋กœ๋Š” ์ด๋ฏธ์ง€ ๋‚ด ๊ฐ์ฒด ์ˆ˜ ๊ณ„์‚ฐ ๋ฐ ์ด๋ฏธ์ง€ ๊ฒ€์ƒ‰ ๋“ฑ์ด ์žˆ์Šต๋‹ˆ๋‹ค.

์ด ๊ฐ€์ด๋“œ์—์„œ ๋‹ค์Œ์„ ๋ฐฐ์šธ ๊ฒƒ์ž…๋‹ˆ๋‹ค:

1.
ํ•ฉ์„ฑ๊ณฑ ๋ฐฑ๋ณธ(์ธํ’‹ ๋ฐ์ดํ„ฐ์˜ ํŠน์„ฑ์„ ์ถ”์ถœํ•˜๋Š” ํ•ฉ์„ฑ๊ณฑ ๋„คํŠธ์›Œํฌ)๊ณผ ์ธ์ฝ”๋”-๋””์ฝ”๋” ํŠธ๋žœ์Šคํฌ๋จธ ๋ชจ๋ธ์„ ๊ฒฐํ•ฉํ•œ [DETR](https://huggingface.co/docs/transformers/model_doc/detr) ๋ชจ๋ธ์„ [CPPE-5](https://huggingface.co/datasets/cppe-5) ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ๋Œ€ํ•ด ๋ฏธ์„ธ์กฐ์ • ํ•˜๊ธฐ 2. ๋ฏธ์„ธ์กฐ์ • ํ•œ ๋ชจ๋ธ์„ ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๊ธฐ. <Tip> ์ด ํŠœํ† ๋ฆฌ์–ผ์˜ ํƒœ์Šคํฌ๋Š” ๋‹ค์Œ ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜์—์„œ ์ง€์›๋ฉ๋‹ˆ๋‹ค: <!--This tip is automatically generated by `make fix-copies`, do not fill manually!--> [Conditional DETR](../model_doc/conditional_detr), [Deformable DETR](../model_doc/deformable_detr), [DETA](../model_doc/deta), [DETR](../model_doc/detr), [Table Transformer](../model_doc/table-transformer), [YOLOS](../model_doc/yolos) <!--End of the generated tip--> </Tip> ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ชจ๋“  ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”: ```bash pip install -q datasets transformers evaluate timm albumentations ``` ํ—ˆ๊น…ํŽ˜์ด์Šค ํ—ˆ๋ธŒ์—์„œ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๊ฐ€์ ธ์˜ค๊ธฐ ์œ„ํ•œ ๐Ÿค— Datasets๊ณผ ๋ชจ๋ธ์„ ํ•™์Šตํ•˜๊ธฐ ์œ„ํ•œ ๐Ÿค— Transformers, ๋ฐ์ดํ„ฐ๋ฅผ ์ฆ๊ฐ•ํ•˜๊ธฐ ์œ„ํ•œ `albumentations`๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. DETR ๋ชจ๋ธ์˜ ํ•ฉ์„ฑ๊ณฑ ๋ฐฑ๋ณธ์„ ๊ฐ€์ ธ์˜ค๊ธฐ ์œ„ํ•ด์„œ๋Š” ํ˜„์žฌ `timm`์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ๊ณต์œ ํ•  ์ˆ˜ ์žˆ๋„๋ก Hugging Face ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. ํ”„๋กฌํ”„ํŠธ๊ฐ€ ๋‚˜ํƒ€๋‚˜๋ฉด ํ† ํฐ์„ ์ž…๋ ฅํ•˜์—ฌ ๋กœ๊ทธ์ธํ•˜์„ธ์š”: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## CPPE-5 ๋ฐ์ดํ„ฐ ์„ธํŠธ ๊ฐ€์ ธ์˜ค๊ธฐ [[load-the-CPPE-5-dataset]] [CPPE-5](https://huggingface.co/datasets/cppe-5) ๋ฐ์ดํ„ฐ ์„ธํŠธ๋Š” COVID-19 ๋Œ€์œ ํ–‰ ์ƒํ™ฉ์—์„œ ์˜๋ฃŒ ์ „๋ฌธ์ธ๋ ฅ ๋ณดํ˜ธ ์žฅ๋น„(PPE)๋ฅผ ์‹๋ณ„ํ•˜๋Š” ์–ด๋…ธํ…Œ์ด์…˜์ด ํฌํ•จ๋œ ์ด๋ฏธ์ง€๋ฅผ ๋‹ด๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. 
๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๊ฐ€์ ธ์˜ค์„ธ์š”: ```py >>> from datasets import load_dataset >>> cppe5 = load_dataset("cppe-5") >>> cppe5 DatasetDict({ train: Dataset({ features: ['image_id', 'image', 'width', 'height', 'objects'], num_rows: 1000 }) test: Dataset({ features: ['image_id', 'image', 'width', 'height', 'objects'], num_rows: 29 }) }) ``` ์ด ๋ฐ์ดํ„ฐ ์„ธํŠธ๋Š” ํ•™์Šต ์„ธํŠธ ์ด๋ฏธ์ง€ 1,000๊ฐœ์™€ ํ…Œ์ŠคํŠธ ์„ธํŠธ ์ด๋ฏธ์ง€ 29๊ฐœ๋ฅผ ๊ฐ–๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ์— ์ต์ˆ™ํ•ด์ง€๊ธฐ ์œ„ํ•ด, ์˜ˆ์‹œ๊ฐ€ ์–ด๋–ป๊ฒŒ ๊ตฌ์„ฑ๋˜์–ด ์žˆ๋Š”์ง€ ์‚ดํŽด๋ณด์„ธ์š”. ```py >>> cppe5["train"][0] {'image_id': 15, 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=943x663 at 0x7F9EC9E77C10>, 'width': 943, 'height': 663, 'objects': {'id': [114, 115, 116, 117], 'area': [3796, 1596, 152768, 81002], 'bbox': [[302.0, 109.0, 73.0, 52.0], [810.0, 100.0, 57.0, 28.0], [160.0, 31.0, 248.0, 616.0], [741.0, 68.0, 202.0, 401.0]], 'category': [4, 4, 0, 0]}} ``` ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ์žˆ๋Š” ์˜ˆ์‹œ๋Š” ๋‹ค์Œ์˜ ์˜์—ญ์„ ๊ฐ€์ง€๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค: - `image_id`: ์˜ˆ์‹œ ์ด๋ฏธ์ง€ id - `image`: ์ด๋ฏธ์ง€๋ฅผ ํฌํ•จํ•˜๋Š” `PIL.Image.Image` ๊ฐ์ฒด - `width`: ์ด๋ฏธ์ง€์˜ ๋„ˆ๋น„ - `height`: ์ด๋ฏธ์ง€์˜ ๋†’์ด - `objects`: ์ด๋ฏธ์ง€ ์•ˆ์˜ ๊ฐ์ฒด๋“ค์˜ ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค ๋ฉ”ํƒ€๋ฐ์ดํ„ฐ๋ฅผ ํฌํ•จํ•˜๋Š” ๋”•์…”๋„ˆ๋ฆฌ: - `id`: ์–ด๋…ธํ…Œ์ด์…˜ id - `area`: ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค์˜ ๋ฉด์  - `bbox`: ๊ฐ์ฒด์˜ ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค ([COCO ํฌ๋งท](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco)์œผ๋กœ) - `category`: ๊ฐ์ฒด์˜ ์นดํ…Œ๊ณ ๋ฆฌ, ๊ฐ€๋Šฅํ•œ ๊ฐ’์œผ๋กœ๋Š” `Coverall (0)`, `Face_Shield (1)`, `Gloves (2)`, `Goggles (3)` ๋ฐ `Mask (4)` ๊ฐ€ ํฌํ•จ๋ฉ๋‹ˆ๋‹ค. `bbox` ํ•„๋“œ๊ฐ€ DETR ๋ชจ๋ธ์ด ์š”๊ตฌํ•˜๋Š” COCO ํ˜•์‹์„ ๋”ฐ๋ฅธ๋‹ค๋Š” ๊ฒƒ์„ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ `objects` ๋‚ด๋ถ€์˜ ํ•„๋“œ ๊ทธ๋ฃน์€ DETR์ด ์š”๊ตฌํ•˜๋Š” ์–ด๋…ธํ…Œ์ด์…˜ ํ˜•์‹๊ณผ ๋‹ค๋ฆ…๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์ด ๋ฐ์ดํ„ฐ๋ฅผ ํ•™์Šต์— ์‚ฌ์šฉํ•˜๊ธฐ ์ „์— ์ „์ฒ˜๋ฆฌ๋ฅผ ์ ์šฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 
๋ฐ์ดํ„ฐ๋ฅผ ๋” ์ž˜ ์ดํ•ดํ•˜๊ธฐ ์œ„ํ•ด์„œ ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ ํ•œ ๊ฐ€์ง€ ์˜ˆ์‹œ๋ฅผ ์‹œ๊ฐํ™”ํ•˜์„ธ์š”. ```py >>> import numpy as np >>> import os >>> from PIL import Image, ImageDraw >>> image = cppe5["train"][0]["image"] >>> annotations = cppe5["train"][0]["objects"] >>> draw = ImageDraw.Draw(image) >>> categories = cppe5["train"].features["objects"].feature["category"].names >>> id2label = {index: x for index, x in enumerate(categories, start=0)} >>> label2id = {v: k for k, v in id2label.items()} >>> for i in range(len(annotations["id"])): ... box = annotations["bbox"][i - 1] ... class_idx = annotations["category"][i - 1] ... x, y, w, h = tuple(box) ... draw.rectangle((x, y, x + w, y + h), outline="red", width=1) ... draw.text((x, y), id2label[class_idx], fill="white") >>> image ``` <div class="flex justify-center"> <img src="https://i.imgur.com/TdaqPJO.png" alt="CPPE-5 Image Example"/> </div> ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค์™€ ์—ฐ๊ฒฐ๋œ ๋ ˆ์ด๋ธ”์„ ์‹œ๊ฐํ™”ํ•˜๋ ค๋ฉด ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ๋ฉ”ํƒ€ ๋ฐ์ดํ„ฐ, ํŠนํžˆ `category` ํ•„๋“œ์—์„œ ๋ ˆ์ด๋ธ”์„ ๊ฐ€์ ธ์™€์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ ๋ ˆ์ด๋ธ” ID๋ฅผ ๋ ˆ์ด๋ธ” ํด๋ž˜์Šค์— ๋งคํ•‘ํ•˜๋Š” `id2label`๊ณผ ๋ฐ˜๋Œ€๋กœ ๋งคํ•‘ํ•˜๋Š” `label2id` ๋”•์…”๋„ˆ๋ฆฌ๋ฅผ ๋งŒ๋“ค์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ ์„ค์ •ํ•  ๋•Œ ์ด๋Ÿฌํ•œ ๋งคํ•‘์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋งคํ•‘์€ ํ—ˆ๊น…ํŽ˜์ด์Šค ํ—ˆ๋ธŒ์—์„œ ๋ชจ๋ธ์„ ๊ณต์œ ํ–ˆ์„ ๋•Œ ๋‹ค๋ฅธ ์‚ฌ๋žŒ๋“ค์ด ์žฌ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ๋ฅผ ๋” ์ž˜ ์ดํ•ดํ•˜๊ธฐ ์œ„ํ•œ ์ตœ์ข… ๋‹จ๊ณ„๋กœ, ์ž ์žฌ์ ์ธ ๋ฌธ์ œ๋ฅผ ์ฐพ์•„๋ณด์„ธ์š”. ๊ฐ์ฒด ๊ฐ์ง€๋ฅผ ์œ„ํ•œ ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ ์ž์ฃผ ๋ฐœ์ƒํ•˜๋Š” ๋ฌธ์ œ ์ค‘ ํ•˜๋‚˜๋Š” ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค๊ฐ€ ์ด๋ฏธ์ง€์˜ ๊ฐ€์žฅ์ž๋ฆฌ๋ฅผ ๋„˜์–ด๊ฐ€๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค๋ฅผ "๋„˜์–ด๊ฐ€๋Š” ๊ฒƒ(run away)"์€ ํ›ˆ๋ จ ์ค‘์— ์˜ค๋ฅ˜๋ฅผ ๋ฐœ์ƒ์‹œํ‚ฌ ์ˆ˜ ์žˆ๊ธฐ์— ์ด ๋‹จ๊ณ„์—์„œ ์ฒ˜๋ฆฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ด ๋ฐ์ดํ„ฐ ์„ธํŠธ์—๋„ ๊ฐ™์€ ๋ฌธ์ œ๊ฐ€ ์žˆ๋Š” ๋ช‡ ๊ฐ€์ง€ ์˜ˆ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. 
์ด ๊ฐ€์ด๋“œ์—์„œ๋Š” ๊ฐ„๋‹จํ•˜๊ฒŒํ•˜๊ธฐ ์œ„ํ•ด ๋ฐ์ดํ„ฐ์—์„œ ์ด๋Ÿฌํ•œ ์ด๋ฏธ์ง€๋ฅผ ์ œ๊ฑฐํ•ฉ๋‹ˆ๋‹ค. ```py >>> remove_idx = [590, 821, 822, 875, 876, 878, 879] >>> keep = [i for i in range(len(cppe5["train"])) if i not in remove_idx] >>> cppe5["train"] = cppe5["train"].select(keep) ``` ## ๋ฐ์ดํ„ฐ ์ „์ฒ˜๋ฆฌํ•˜๊ธฐ [[preprocess-the-data]] ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ • ํ•˜๋ ค๋ฉด, ๋ฏธ๋ฆฌ ํ•™์Šต๋œ ๋ชจ๋ธ์—์„œ ์‚ฌ์šฉํ•œ ์ „์ฒ˜๋ฆฌ ๋ฐฉ์‹๊ณผ ์ •ํ™•ํ•˜๊ฒŒ ์ผ์น˜ํ•˜๋„๋ก ์‚ฌ์šฉํ•  ๋ฐ์ดํ„ฐ๋ฅผ ์ „์ฒ˜๋ฆฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. [`AutoImageProcessor`]๋Š” ์ด๋ฏธ์ง€ ๋ฐ์ดํ„ฐ๋ฅผ ์ฒ˜๋ฆฌํ•˜์—ฌ DETR ๋ชจ๋ธ์ด ํ•™์Šต์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” `pixel_values`, `pixel_mask`, ๊ทธ๋ฆฌ๊ณ  `labels`๋ฅผ ์ƒ์„ฑํ•˜๋Š” ์ž‘์—…์„ ๋‹ด๋‹นํ•ฉ๋‹ˆ๋‹ค. ์ด ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ์—๋Š” ๊ฑฑ์ •ํ•˜์ง€ ์•Š์•„๋„ ๋˜๋Š” ๋ช‡ ๊ฐ€์ง€ ์†์„ฑ์ด ์žˆ์Šต๋‹ˆ๋‹ค: - `image_mean = [0.485, 0.456, 0.406 ]` - `image_std = [0.229, 0.224, 0.225]` ์ด ๊ฐ’๋“ค์€ ๋ชจ๋ธ ์‚ฌ์ „ ํ›ˆ๋ จ ์ค‘ ์ด๋ฏธ์ง€๋ฅผ ์ •๊ทœํ™”ํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋˜๋Š” ํ‰๊ท ๊ณผ ํ‘œ์ค€ ํŽธ์ฐจ์ž…๋‹ˆ๋‹ค. ์ด ๊ฐ’๋“ค์€ ์ถ”๋ก  ๋˜๋Š” ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ์ด๋ฏธ์ง€ ๋ชจ๋ธ์„ ์„ธ๋ฐ€ํ•˜๊ฒŒ ์กฐ์ •ํ•  ๋•Œ ๋ณต์ œํ•ด์•ผ ํ•˜๋Š” ์ค‘์š”ํ•œ ๊ฐ’์ž…๋‹ˆ๋‹ค. ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ๊ณผ ๋™์ผํ•œ ์ฒดํฌํฌ์ธํŠธ์—์„œ ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋ฅผ ์ธ์Šคํ„ด์Šคํ™”ํ•ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import AutoImageProcessor >>> checkpoint = "facebook/detr-resnet-50" >>> image_processor = AutoImageProcessor.from_pretrained(checkpoint) ``` `image_processor`์— ์ด๋ฏธ์ง€๋ฅผ ์ „๋‹ฌํ•˜๊ธฐ ์ „์—, ๋ฐ์ดํ„ฐ ์„ธํŠธ์— ๋‘ ๊ฐ€์ง€ ์ „์ฒ˜๋ฆฌ๋ฅผ ์ ์šฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค: - ์ด๋ฏธ์ง€ ์ฆ๊ฐ• - DETR ๋ชจ๋ธ์˜ ์š”๊ตฌ์— ๋งž๊ฒŒ ์–ด๋…ธํ…Œ์ด์…˜์„ ๋‹ค์‹œ ํฌ๋งทํŒ… ์ฒซ์งธ๋กœ, ๋ชจ๋ธ์ด ํ•™์Šต ๋ฐ์ดํ„ฐ์— ๊ณผ์ ํ•ฉ ๋˜์ง€ ์•Š๋„๋ก ๋ฐ์ดํ„ฐ ์ฆ๊ฐ• ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ ์ค‘ ์•„๋ฌด๊ฑฐ๋‚˜ ์‚ฌ์šฉํ•˜์—ฌ ๋ณ€ํ™˜์„ ์ ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์—์„œ๋Š” [Albumentations](https://albumentations.ai/docs/) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค... 
์ด ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋Š” ๋ณ€ํ™˜์„ ์ด๋ฏธ์ง€์— ์ ์šฉํ•˜๊ณ  ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค๋ฅผ ์ ์ ˆํ•˜๊ฒŒ ์—…๋ฐ์ดํŠธํ•˜๋„๋ก ๋ณด์žฅํ•ฉ๋‹ˆ๋‹ค. ๐Ÿค— Datasets ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ ๋ฌธ์„œ์—๋Š” [๊ฐ์ฒด ํƒ์ง€๋ฅผ ์œ„ํ•ด ์ด๋ฏธ์ง€๋ฅผ ๋ณด๊ฐ•ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๊ฐ€์ด๋“œ](https://huggingface.co/docs/datasets/object_detection)๊ฐ€ ์žˆ์œผ๋ฉฐ, ์ด ์˜ˆ์ œ์™€ ์ •ํ™•ํžˆ ๋™์ผํ•œ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ๋Š” ๊ฐ ์ด๋ฏธ์ง€๋ฅผ (480, 480) ํฌ๊ธฐ๋กœ ์กฐ์ •ํ•˜๊ณ , ์ขŒ์šฐ๋กœ ๋’ค์ง‘๊ณ , ๋ฐ๊ธฐ๋ฅผ ๋†’์ด๋Š” ๋™์ผํ•œ ์ ‘๊ทผ๋ฒ•์„ ์ ์šฉํ•ฉ๋‹ˆ๋‹ค: ```py >>> import albumentations >>> import numpy as np >>> import torch >>> transform = albumentations.Compose( ... [ ... albumentations.Resize(480, 480), ... albumentations.HorizontalFlip(p=1.0), ... albumentations.RandomBrightnessContrast(p=1.0), ... ], ... bbox_params=albumentations.BboxParams(format="coco", label_fields=["category"]), ... ) ``` ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ๋Š” ์–ด๋…ธํ…Œ์ด์…˜์ด ๋‹ค์Œ๊ณผ ๊ฐ™์€ ํ˜•์‹์ผ ๊ฒƒ์œผ๋กœ ์˜ˆ์ƒํ•ฉ๋‹ˆ๋‹ค: `{'image_id': int, 'annotations': List[Dict]}`, ์—ฌ๊ธฐ์„œ ๊ฐ ๋”•์…”๋„ˆ๋ฆฌ๋Š” COCO ๊ฐ์ฒด ์–ด๋…ธํ…Œ์ด์…˜์ž…๋‹ˆ๋‹ค. ๋‹จ์ผ ์˜ˆ์ œ์— ๋Œ€ํ•ด ์–ด๋…ธํ…Œ์ด์…˜์˜ ํ˜•์‹์„ ๋‹ค์‹œ ์ง€์ •ํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ์ถ”๊ฐ€ํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> def formatted_anns(image_id, category, area, bbox): ... annotations = [] ... for i in range(0, len(category)): ... new_ann = { ... "image_id": image_id, ... "category_id": category[i], ... "isCrowd": 0, ... "area": area[i], ... "bbox": list(bbox[i]), ... } ... annotations.append(new_ann) ... return annotations ``` ์ด์ œ ์ด๋ฏธ์ง€์™€ ์–ด๋…ธํ…Œ์ด์…˜ ์ „์ฒ˜๋ฆฌ ๋ณ€ํ™˜์„ ๊ฒฐํ•ฉํ•˜์—ฌ ์˜ˆ์ œ ๋ฐฐ์น˜์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> # transforming a batch >>> def transform_aug_ann(examples): ... image_ids = examples["image_id"] ... images, bboxes, area, categories = [], [], [], [] ... for image, objects in zip(examples["image"], examples["objects"]): ... image = np.array(image.convert("RGB"))[:, :, ::-1] ... 
out = transform(image=image, bboxes=objects["bbox"], category=objects["category"]) ... area.append(objects["area"]) ... images.append(out["image"]) ... bboxes.append(out["bboxes"]) ... categories.append(out["category"]) ... targets = [ ... {"image_id": id_, "annotations": formatted_anns(id_, cat_, ar_, box_)} ... for id_, cat_, ar_, box_ in zip(image_ids, categories, area, bboxes) ... ] ... return image_processor(images=images, annotations=targets, return_tensors="pt") ``` ์ด์ „ ๋‹จ๊ณ„์—์„œ ๋งŒ๋“  ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ๐Ÿค— Datasets์˜ [`~datasets.Dataset.with_transform`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ ์„ธํŠธ ์ „์ฒด์— ์ ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด ๋ฉ”์†Œ๋“œ๋Š” ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์š”์†Œ๋ฅผ ๊ฐ€์ ธ์˜ฌ ๋•Œ๋งˆ๋‹ค ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋ฅผ ์ ์šฉํ•ฉ๋‹ˆ๋‹ค. ์ด ์‹œ์ ์—์„œ๋Š” ์ „์ฒ˜๋ฆฌ ํ›„ ๋ฐ์ดํ„ฐ ์„ธํŠธ์—์„œ ์˜ˆ์‹œ ํ•˜๋‚˜๋ฅผ ๊ฐ€์ ธ์™€์„œ ๋ณ€ํ™˜ ํ›„ ๋ชจ์–‘์ด ์–ด๋–ป๊ฒŒ ๋˜๋Š”์ง€ ํ™•์ธํ•ด ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋•Œ, `pixel_values` ํ…์„œ, `pixel_mask` ํ…์„œ, ๊ทธ๋ฆฌ๊ณ  `labels`๋กœ ๊ตฌ์„ฑ๋œ ํ…์„œ๊ฐ€ ์žˆ์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. 
```py >>> cppe5["train"] = cppe5["train"].with_transform(transform_aug_ann) >>> cppe5["train"][15] {'pixel_values': tensor([[[ 0.9132, 0.9132, 0.9132, ..., -1.9809, -1.9809, -1.9809], [ 0.9132, 0.9132, 0.9132, ..., -1.9809, -1.9809, -1.9809], [ 0.9132, 0.9132, 0.9132, ..., -1.9638, -1.9638, -1.9638], ..., [-1.5699, -1.5699, -1.5699, ..., -1.9980, -1.9980, -1.9980], [-1.5528, -1.5528, -1.5528, ..., -1.9980, -1.9809, -1.9809], [-1.5528, -1.5528, -1.5528, ..., -1.9980, -1.9809, -1.9809]], [[ 1.3081, 1.3081, 1.3081, ..., -1.8431, -1.8431, -1.8431], [ 1.3081, 1.3081, 1.3081, ..., -1.8431, -1.8431, -1.8431], [ 1.3081, 1.3081, 1.3081, ..., -1.8256, -1.8256, -1.8256], ..., [-1.3179, -1.3179, -1.3179, ..., -1.8606, -1.8606, -1.8606], [-1.3004, -1.3004, -1.3004, ..., -1.8606, -1.8431, -1.8431], [-1.3004, -1.3004, -1.3004, ..., -1.8606, -1.8431, -1.8431]], [[ 1.4200, 1.4200, 1.4200, ..., -1.6476, -1.6476, -1.6476], [ 1.4200, 1.4200, 1.4200, ..., -1.6476, -1.6476, -1.6476], [ 1.4200, 1.4200, 1.4200, ..., -1.6302, -1.6302, -1.6302], ..., [-1.0201, -1.0201, -1.0201, ..., -1.5604, -1.5604, -1.5604], [-1.0027, -1.0027, -1.0027, ..., -1.5604, -1.5430, -1.5430], [-1.0027, -1.0027, -1.0027, ..., -1.5604, -1.5430, -1.5430]]]), 'pixel_mask': tensor([[1, 1, 1, ..., 1, 1, 1], [1, 1, 1, ..., 1, 1, 1], [1, 1, 1, ..., 1, 1, 1], ..., [1, 1, 1, ..., 1, 1, 1], [1, 1, 1, ..., 1, 1, 1], [1, 1, 1, ..., 1, 1, 1]]), 'labels': {'size': tensor([800, 800]), 'image_id': tensor([756]), 'class_labels': tensor([4]), 'boxes': tensor([[0.7340, 0.6986, 0.3414, 0.5944]]), 'area': tensor([519544.4375]), 'iscrowd': tensor([0]), 'orig_size': tensor([480, 480])}} ``` ๊ฐ๊ฐ์˜ ์ด๋ฏธ์ง€๋ฅผ ์„ฑ๊ณต์ ์œผ๋กœ ์ฆ๊ฐ•ํ•˜๊ณ  ์ด๋ฏธ์ง€์˜ ์–ด๋…ธํ…Œ์ด์…˜์„ ์ค€๋น„ํ–ˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ „์ฒ˜๋ฆฌ๋Š” ์•„์ง ๋๋‚˜์ง€ ์•Š์•˜์Šต๋‹ˆ๋‹ค. ๋งˆ์ง€๋ง‰ ๋‹จ๊ณ„๋กœ, ์ด๋ฏธ์ง€๋ฅผ ๋ฐฐ์น˜๋กœ ๋งŒ๋“ค ์‚ฌ์šฉ์ž ์ •์˜ `collate_fn`์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. 
ํ•ด๋‹น ๋ฐฐ์น˜์—์„œ ๊ฐ€์žฅ ํฐ ์ด๋ฏธ์ง€์— ์ด๋ฏธ์ง€(ํ˜„์žฌ `pixel_values` ์ธ)๋ฅผ ํŒจ๋“œํ•˜๊ณ , ์‹ค์ œ ํ”ฝ์…€(1)๊ณผ ํŒจ๋”ฉ(0)์„ ๋‚˜ํƒ€๋‚ด๊ธฐ ์œ„ํ•ด ๊ทธ์— ํ•ด๋‹นํ•˜๋Š” ์ƒˆ๋กœ์šด `pixel_mask`๋ฅผ ์ƒ์„ฑํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ```py >>> def collate_fn(batch): ... pixel_values = [item["pixel_values"] for item in batch] ... encoding = image_processor.pad(pixel_values, return_tensors="pt") ... labels = [item["labels"] for item in batch] ... batch = {} ... batch["pixel_values"] = encoding["pixel_values"] ... batch["pixel_mask"] = encoding["pixel_mask"] ... batch["labels"] = labels ... return batch ``` ## DETR ๋ชจ๋ธ ํ•™์Šต์‹œํ‚ค๊ธฐ [[training-the-DETR-model]] ์ด์ „ ์„น์…˜์—์„œ ๋Œ€๋ถ€๋ถ„์˜ ์ž‘์—…์„ ์ˆ˜ํ–‰ํ•˜์—ฌ ์ด์ œ ๋ชจ๋ธ์„ ํ•™์Šตํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! ์ด ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ์ด๋ฏธ์ง€๋Š” ๋ฆฌ์‚ฌ์ด์ฆˆ ํ›„์—๋„ ์—ฌ์ „ํžˆ ์šฉ๋Ÿ‰์ด ํฌ๊ธฐ ๋•Œ๋ฌธ์—, ์ด ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ • ํ•˜๋ ค๋ฉด ์ ์–ด๋„ ํ•˜๋‚˜์˜ GPU๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ํ•™์Šต์€ ๋‹ค์Œ์˜ ๋‹จ๊ณ„๋ฅผ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค: 1. [`AutoModelForObjectDetection`]์„ ์‚ฌ์šฉํ•˜์—ฌ ์ „์ฒ˜๋ฆฌ์™€ ๋™์ผํ•œ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค. 2. [`TrainingArguments`]์—์„œ ํ•™์Šต ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. 3. ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ ์„ธํŠธ, ์ด๋ฏธ์ง€ ํ”„๋กœ์„ธ์„œ ๋ฐ ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ์™€ ํ•จ๊ป˜ [`Trainer`]์— ํ›ˆ๋ จ ์ธ์ˆ˜๋ฅผ ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. 4. [`~Trainer.train`]๋ฅผ ํ˜ธ์ถœํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ • ํ•ฉ๋‹ˆ๋‹ค. ์ „์ฒ˜๋ฆฌ์— ์‚ฌ์šฉํ•œ ์ฒดํฌํฌ์ธํŠธ์™€ ๋™์ผํ•œ ์ฒดํฌํฌ์ธํŠธ์—์„œ ๋ชจ๋ธ์„ ๊ฐ€์ ธ์˜ฌ ๋•Œ, ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ ๋ฉ”ํƒ€๋ฐ์ดํ„ฐ์—์„œ ๋งŒ๋“  `label2id`์™€ `id2label` ๋งคํ•‘์„ ์ „๋‹ฌํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ, `ignore_mismatched_sizes=True`๋ฅผ ์ง€์ •ํ•˜์—ฌ ๊ธฐ์กด ๋ถ„๋ฅ˜ ํ—ค๋“œ(๋ชจ๋ธ์—์„œ ๋ถ„๋ฅ˜์— ์‚ฌ์šฉ๋˜๋Š” ๋งˆ์ง€๋ง‰ ๋ ˆ์ด์–ด)๋ฅผ ์ƒˆ ๋ถ„๋ฅ˜ ํ—ค๋“œ๋กœ ๋Œ€์ฒดํ•ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import AutoModelForObjectDetection >>> model = AutoModelForObjectDetection.from_pretrained( ... checkpoint, ... id2label=id2label, ... label2id=label2id, ... 
ignore_mismatched_sizes=True, ... ) ``` [`TrainingArguments`]์—์„œ `output_dir`์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ์ €์žฅํ•  ์œ„์น˜๋ฅผ ์ง€์ •ํ•œ ๋‹ค์Œ, ํ•„์š”์— ๋”ฐ๋ผ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ๊ตฌ์„ฑํ•˜์„ธ์š”. ์‚ฌ์šฉํ•˜์ง€ ์•Š๋Š” ์—ด์„ ์ œ๊ฑฐํ•˜์ง€ ์•Š๋„๋ก ์ฃผ์˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋งŒ์•ฝ `remove_unused_columns`๊ฐ€ `True`์ผ ๊ฒฝ์šฐ ์ด๋ฏธ์ง€ ์—ด์ด ์‚ญ์ œ๋ฉ๋‹ˆ๋‹ค. ์ด๋ฏธ์ง€ ์—ด์ด ์—†๋Š” ๊ฒฝ์šฐ `pixel_values`๋ฅผ ์ƒ์„ฑํ•  ์ˆ˜ ์—†๊ธฐ ๋•Œ๋ฌธ์— `remove_unused_columns`๋ฅผ `False`๋กœ ์„ค์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ Hub์— ์—…๋กœ๋“œํ•˜์—ฌ ๊ณต์œ ํ•˜๋ ค๋ฉด `push_to_hub`๋ฅผ `True`๋กœ ์„ค์ •ํ•˜์‹ญ์‹œ์˜ค(ํ—ˆ๊น…ํŽ˜์ด์Šค์— ๋กœ๊ทธ์ธํ•˜์—ฌ ๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค). ```py >>> from transformers import TrainingArguments >>> training_args = TrainingArguments( ... output_dir="detr-resnet-50_finetuned_cppe5", ... per_device_train_batch_size=8, ... num_train_epochs=10, ... fp16=True, ... save_steps=200, ... logging_steps=50, ... learning_rate=1e-5, ... weight_decay=1e-4, ... save_total_limit=2, ... remove_unused_columns=False, ... push_to_hub=True, ... ) ``` ๋งˆ์ง€๋ง‰์œผ๋กœ `model`, `training_args`, `collate_fn`, `image_processor`์™€ ๋ฐ์ดํ„ฐ ์„ธํŠธ(`cppe5`)๋ฅผ ๋ชจ๋‘ ๊ฐ€์ ธ์˜จ ํ›„, [`~transformers.Trainer.train`]๋ฅผ ํ˜ธ์ถœํ•ฉ๋‹ˆ๋‹ค. ```py >>> from transformers import Trainer >>> trainer = Trainer( ... model=model, ... args=training_args, ... data_collator=collate_fn, ... train_dataset=cppe5["train"], ... tokenizer=image_processor, ... ) >>> trainer.train() ``` `training_args`์—์„œ `push_to_hub`๋ฅผ `True`๋กœ ์„ค์ •ํ•œ ๊ฒฝ์šฐ, ํ•™์Šต ์ฒดํฌํฌ์ธํŠธ๋Š” ํ—ˆ๊น…ํŽ˜์ด์Šค ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œ๋ฉ๋‹ˆ๋‹ค. ํ•™์Šต ์™„๋ฃŒ ํ›„, [`~transformers.Trainer.push_to_hub`] ๋ฉ”์†Œ๋“œ๋ฅผ ํ˜ธ์ถœํ•˜์—ฌ ์ตœ์ข… ๋ชจ๋ธ์„ ํ—ˆ๊น…ํŽ˜์ด์Šค ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค. ```py >>> trainer.push_to_hub() ``` ## ํ‰๊ฐ€ํ•˜๊ธฐ [[evaluate]] ๊ฐ์ฒด ํƒ์ง€ ๋ชจ๋ธ์€ ์ผ๋ฐ˜์ ์œผ๋กœ ์ผ๋ จ์˜ <a href="https://cocodataset.org/#detection-eval">COCO-์Šคํƒ€์ผ ์ง€ํ‘œ</a>๋กœ ํ‰๊ฐ€๋ฉ๋‹ˆ๋‹ค. 
๊ธฐ์กด์— ๊ตฌํ˜„๋œ ํ‰๊ฐ€ ์ง€ํ‘œ ์ค‘ ํ•˜๋‚˜๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜๋„ ์žˆ์ง€๋งŒ, ์—ฌ๊ธฐ์—์„œ๋Š” ํ—ˆ๊น…ํŽ˜์ด์Šค ํ—ˆ๋ธŒ์— ํ‘ธ์‹œํ•œ ์ตœ์ข… ๋ชจ๋ธ์„ ํ‰๊ฐ€ํ•˜๋Š” ๋ฐ `torchvision`์—์„œ ์ œ๊ณตํ•˜๋Š” ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. `torchvision` ํ‰๊ฐ€์ž(evaluator)๋ฅผ ์‚ฌ์šฉํ•˜๋ ค๋ฉด ์‹ค์ธก๊ฐ’์ธ COCO ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ์ค€๋น„ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. COCO ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ ๋นŒ๋“œํ•˜๋Š” API๋Š” ๋ฐ์ดํ„ฐ๋ฅผ ํŠน์ • ํ˜•์‹์œผ๋กœ ์ €์žฅํ•ด์•ผ ํ•˜๋ฏ€๋กœ, ๋จผ์ € ์ด๋ฏธ์ง€์™€ ์–ด๋…ธํ…Œ์ด์…˜์„ ๋””์Šคํฌ์— ์ €์žฅํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ํ•™์Šต์„ ์œ„ํ•ด ๋ฐ์ดํ„ฐ๋ฅผ ์ค€๋น„ํ•  ๋•Œ์™€ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ, cppe5["test"]์—์„œ์˜ ์–ด๋…ธํ…Œ์ด์…˜์€ ํฌ๋งท์„ ๋งž์ถฐ์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ด๋ฏธ์ง€๋Š” ๊ทธ๋Œ€๋กœ ์œ ์ง€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ํ‰๊ฐ€ ๋‹จ๊ณ„๋Š” ์•ฝ๊ฐ„์˜ ์ž‘์—…์ด ํ•„์š”ํ•˜์ง€๋งŒ, ํฌ๊ฒŒ ์„ธ ๊ฐ€์ง€ ์ฃผ์š” ๋‹จ๊ณ„๋กœ ๋‚˜๋ˆŒ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋จผ์ €, `cppe5["test"]` ์„ธํŠธ๋ฅผ ์ค€๋น„ํ•ฉ๋‹ˆ๋‹ค: ์–ด๋…ธํ…Œ์ด์…˜์„ ํฌ๋งท์— ๋งž๊ฒŒ ๋งŒ๋“ค๊ณ  ๋ฐ์ดํ„ฐ๋ฅผ ๋””์Šคํฌ์— ์ €์žฅํ•ฉ๋‹ˆ๋‹ค. ```py >>> import json >>> # format annotations the same as for training, no need for data augmentation >>> def val_formatted_anns(image_id, objects): ... annotations = [] ... for i in range(0, len(objects["id"])): ... new_ann = { ... "id": objects["id"][i], ... "category_id": objects["category"][i], ... "iscrowd": 0, ... "image_id": image_id, ... "area": objects["area"][i], ... "bbox": objects["bbox"][i], ... } ... annotations.append(new_ann) ... return annotations >>> # Save images and annotations into the files torchvision.datasets.CocoDetection expects >>> def save_cppe5_annotation_file_images(cppe5): ... output_json = {} ... path_output_cppe5 = f"{os.getcwd()}/cppe5/" ... if not os.path.exists(path_output_cppe5): ... os.makedirs(path_output_cppe5) ... path_anno = os.path.join(path_output_cppe5, "cppe5_ann.json") ... categories_json = [{"supercategory": "none", "id": id, "name": id2label[id]} for id in id2label] ... output_json["images"] = [] ... output_json["annotations"] = [] ... for example in cppe5: ... 
ann = val_formatted_anns(example["image_id"], example["objects"]) ... output_json["images"].append( ... { ... "id": example["image_id"], ... "width": example["image"].width, ... "height": example["image"].height, ... "file_name": f"{example['image_id']}.png", ... } ... ) ... output_json["annotations"].extend(ann) ... output_json["categories"] = categories_json ... with open(path_anno, "w") as file: ... json.dump(output_json, file, ensure_ascii=False, indent=4) ... for im, img_id in zip(cppe5["image"], cppe5["image_id"]): ... path_img = os.path.join(path_output_cppe5, f"{img_id}.png") ... im.save(path_img) ... return path_output_cppe5, path_anno ``` ๋‹ค์Œ์œผ๋กœ, `cocoevaluator`์™€ ํ•จ๊ป˜ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” `CocoDetection` ํด๋ž˜์Šค์˜ ์ธ์Šคํ„ด์Šค๋ฅผ ์ค€๋น„ํ•ฉ๋‹ˆ๋‹ค. ```py >>> import torchvision >>> class CocoDetection(torchvision.datasets.CocoDetection): ... def __init__(self, img_folder, image_processor, ann_file): ... super().__init__(img_folder, ann_file) ... self.image_processor = image_processor ... def __getitem__(self, idx): ... # read in PIL image and target in COCO format ... img, target = super(CocoDetection, self).__getitem__(idx) ... # preprocess image and target: converting target to DETR format, ... # resizing + normalization of both image and target) ... image_id = self.ids[idx] ... target = {"image_id": image_id, "annotations": target} ... encoding = self.image_processor(images=img, annotations=target, return_tensors="pt") ... pixel_values = encoding["pixel_values"].squeeze() # remove batch dimension ... target = encoding["labels"][0] # remove batch dimension ... 
return {"pixel_values": pixel_values, "labels": target} >>> im_processor = AutoImageProcessor.from_pretrained("devonho/detr-resnet-50_finetuned_cppe5") >>> path_output_cppe5, path_anno = save_cppe5_annotation_file_images(cppe5["test"]) >>> test_ds_coco_format = CocoDetection(path_output_cppe5, im_processor, path_anno) ``` ๋งˆ์ง€๋ง‰์œผ๋กœ, ํ‰๊ฐ€ ์ง€ํ‘œ๋ฅผ ๊ฐ€์ ธ์™€์„œ ํ‰๊ฐ€๋ฅผ ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค. ```py >>> import evaluate >>> from tqdm import tqdm >>> model = AutoModelForObjectDetection.from_pretrained("devonho/detr-resnet-50_finetuned_cppe5") >>> module = evaluate.load("ybelkada/cocoevaluate", coco=test_ds_coco_format.coco) >>> val_dataloader = torch.utils.data.DataLoader( ... test_ds_coco_format, batch_size=8, shuffle=False, num_workers=4, collate_fn=collate_fn ... ) >>> with torch.no_grad(): ... for idx, batch in enumerate(tqdm(val_dataloader)): ... pixel_values = batch["pixel_values"] ... pixel_mask = batch["pixel_mask"] ... labels = [ ... {k: v for k, v in t.items()} for t in batch["labels"] ... ] # these are in DETR format, resized + normalized ... # forward pass ... outputs = model(pixel_values=pixel_values, pixel_mask=pixel_mask) ... orig_target_sizes = torch.stack([target["orig_size"] for target in labels], dim=0) ... results = im_processor.post_process(outputs, orig_target_sizes) # convert outputs of model to COCO api ... module.add(prediction=results, reference=labels) ... del batch >>> results = module.compute() >>> print(results) Accumulating evaluation results... DONE (t=0.08s). 
IoU metric: bbox Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.352 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.681 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.292 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.168 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.208 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.429 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.274 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.484 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.501 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.191 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.323 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.590 ``` ์ด๋Ÿฌํ•œ ๊ฒฐ๊ณผ๋Š” [`~transformers.TrainingArguments`]์˜ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์กฐ์ •ํ•˜์—ฌ ๋”์šฑ ๊ฐœ์„ ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ํ•œ๋ฒˆ ์‹œ๋„ํ•ด ๋ณด์„ธ์š”! ## ์ถ”๋ก ํ•˜๊ธฐ [[inference]] DETR ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ • ๋ฐ ํ‰๊ฐ€ํ•˜๊ณ , ํ—ˆ๊น…ํŽ˜์ด์Šค ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œ ํ–ˆ์œผ๋ฏ€๋กœ ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฏธ์„ธ ์กฐ์ •๋œ ๋ชจ๋ธ์„ ์ถ”๋ก ์— ์‚ฌ์šฉํ•˜๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์€ [`pipeline`]์—์„œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. 
๋ชจ๋ธ๊ณผ ํ•จ๊ป˜ ๊ฐ์ฒด ํƒ์ง€๋ฅผ ์œ„ํ•œ ํŒŒ์ดํ”„๋ผ์ธ์„ ์ธ์Šคํ„ด์Šคํ™”ํ•˜๊ณ , ์ด๋ฏธ์ง€๋ฅผ ์ „๋‹ฌํ•˜์„ธ์š”: ```py >>> from transformers import pipeline >>> import requests >>> url = "https://i.imgur.com/2lnWoly.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> obj_detector = pipeline("object-detection", model="devonho/detr-resnet-50_finetuned_cppe5") >>> obj_detector(image) ``` ๋งŒ์•ฝ ์›ํ•œ๋‹ค๋ฉด ์ˆ˜๋™์œผ๋กœ `pipeline`์˜ ๊ฒฐ๊ณผ๋ฅผ ์žฌํ˜„ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> image_processor = AutoImageProcessor.from_pretrained("devonho/detr-resnet-50_finetuned_cppe5") >>> model = AutoModelForObjectDetection.from_pretrained("devonho/detr-resnet-50_finetuned_cppe5") >>> with torch.no_grad(): ... inputs = image_processor(images=image, return_tensors="pt") ... outputs = model(**inputs) ... target_sizes = torch.tensor([image.size[::-1]]) ... results = image_processor.post_process_object_detection(outputs, threshold=0.5, target_sizes=target_sizes)[0] >>> for score, label, box in zip(results["scores"], results["labels"], results["boxes"]): ... box = [round(i, 2) for i in box.tolist()] ... print( ... f"Detected {model.config.id2label[label.item()]} with confidence " ... f"{round(score.item(), 3)} at location {box}" ... ) Detected Coverall with confidence 0.566 at location [1215.32, 147.38, 4401.81, 3227.08] Detected Mask with confidence 0.584 at location [2449.06, 823.19, 3256.43, 1413.9] ``` ๊ฒฐ๊ณผ๋ฅผ ์‹œ๊ฐํ™”ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค: ```py >>> draw = ImageDraw.Draw(image) >>> for score, label, box in zip(results["scores"], results["labels"], results["boxes"]): ... box = [round(i, 2) for i in box.tolist()] ... x, y, x2, y2 = tuple(box) ... draw.rectangle((x, y, x2, y2), outline="red", width=1) ... draw.text((x, y), model.config.id2label[label.item()], fill="white") >>> image ``` <div class="flex justify-center"> <img src="https://i.imgur.com/4QZnf9A.png" alt="Object detection result on a new image"/> </div>
hf_public_repos/transformers/docs/source/ko/tasks/multiple_choice.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# ๊ฐ๊ด€์‹ ๋ฌธ์ œ[[multiple-choice]]

[[open-in-colab]]

๊ฐ๊ด€์‹ ๊ณผ์ œ๋Š” ๋ฌธ๋งฅ๊ณผ ํ•จ๊ป˜ ์—ฌ๋Ÿฌ ๊ฐœ์˜ ํ›„๋ณด ๋‹ต๋ณ€์ด ์ œ๊ณต๋˜๊ณ  ๋ชจ๋ธ์ด ์ •๋‹ต์„ ์„ ํƒํ•˜๋„๋ก ํ•™์Šต๋œ๋‹ค๋Š” ์ ์„ ์ œ์™ธํ•˜๋ฉด ์งˆ์˜์‘๋‹ต๊ณผ ์œ ์‚ฌํ•ฉ๋‹ˆ๋‹ค.

์ง„ํ–‰ํ•˜๋Š” ๋ฐฉ๋ฒ•์€ ์•„๋ž˜์™€ ๊ฐ™์Šต๋‹ˆ๋‹ค:

1. [SWAG](https://huggingface.co/datasets/swag) ๋ฐ์ดํ„ฐ ์„ธํŠธ์˜ 'regular' ๊ตฌ์„ฑ์œผ๋กœ [BERT](https://huggingface.co/bert-base-uncased)๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜์—ฌ ์—ฌ๋Ÿฌ ์˜ต์…˜๊ณผ ์ผ๋ถ€ ์ปจํ…์ŠคํŠธ๊ฐ€ ์ฃผ์–ด์กŒ์„ ๋•Œ ๊ฐ€์žฅ ์ ํ•ฉํ•œ ๋‹ต์„ ์„ ํƒํ•ฉ๋‹ˆ๋‹ค.
2. ์ถ”๋ก ์— ๋ฏธ์„ธ ์กฐ์ •๋œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค.
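์ฐธ๊ณ ๋กœ, ๊ฐ๊ด€์‹ ๋ชจ๋ธ์€ ๊ฐ (๋ฌธ๋งฅ, ํ›„๋ณด) ์Œ๋งˆ๋‹ค ์ ์ˆ˜(๋กœ์ง“)๋ฅผ ํ•˜๋‚˜์”ฉ ๊ณ„์‚ฐํ•œ ๋’ค ๊ฐ€์žฅ ๋†’์€ ์ ์ˆ˜์˜ ํ›„๋ณด๋ฅผ ์ •๋‹ต์œผ๋กœ ์„ ํƒํ•ฉ๋‹ˆ๋‹ค. ์ด ์„ ํƒ ๊ณผ์ •๋งŒ์„ ์ˆœ์ˆ˜ ํŒŒ์ด์ฌ์œผ๋กœ ํ‰๋‚ด ๋‚ธ ์Šค์ผ€์น˜์ž…๋‹ˆ๋‹ค(์•„๋ž˜ ๋กœ์ง“ ๊ฐ’์€ ์„ค๋ช…์„ ์œ„ํ•ด ์ž„์˜๋กœ ๊ฐ€์ •ํ•œ ๊ฒƒ์ž…๋‹ˆ๋‹ค):

```py
import math


def pick_answer(logits):
    # ํ›„๋ณด๋ณ„ ๋กœ์ง“์„ softmax ํ™•๋ฅ ๋กœ ๋ฐ”๊พธ๊ณ  ๊ฐ€์žฅ ๋†’์€ ํ›„๋ณด์˜ ์ธ๋ฑ์Šค๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]  # ์ˆ˜์น˜ ์•ˆ์ •์„ฑ์„ ์œ„ํ•ด ์ตœ๋Œ“๊ฐ’์„ ๋บ๋‹ˆ๋‹ค
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=lambda i: probs[i])
    return best, probs


# ๋„ค ๊ฐœ์˜ ์ข…๋ฃŒ ๊ตฌ์ ˆ์— ๋Œ€ํ•ด ๋ชจ๋ธ์ด ๋‚ด๋†“์•˜๋‹ค๊ณ  ๊ฐ€์ •ํ•œ ๋กœ์ง“
best, probs = pick_answer([2.1, -0.3, 0.4, -1.2])
print(best)
```

์•„๋ž˜ ํŠœํ† ๋ฆฌ์–ผ์—์„œ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ชจ๋ธ๋„ ๊ฒฐ๊ตญ ์ด์™€ ๊ฐ™์€ ๋ฐฉ์‹์œผ๋กœ ํ›„๋ณด๋ณ„ ๋กœ์ง“์„ ๋น„๊ตํ•˜์—ฌ ๋‹ต์„ ๊ณ ๋ฆ…๋‹ˆ๋‹ค.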
<Tip>

์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ ์„ค๋ช…ํ•˜๋Š” ์ž‘์—…์€ ๋‹ค์Œ ๋ชจ๋ธ ์•„ํ‚คํ…์ฒ˜์—์„œ ์ง€์›๋ฉ๋‹ˆ๋‹ค:

<!--This tip is automatically generated by `make fix-copies`, do not fill manually!-->

[ALBERT](../model_doc/albert), [BERT](../model_doc/bert), [BigBird](../model_doc/big_bird), [CamemBERT](../model_doc/camembert), [CANINE](../model_doc/canine), [ConvBERT](../model_doc/convbert), [Data2VecText](../model_doc/data2vec-text), [DeBERTa-v2](../model_doc/deberta-v2), [DistilBERT](../model_doc/distilbert), [ELECTRA](../model_doc/electra), [ERNIE](../model_doc/ernie), [ErnieM](../model_doc/ernie_m), [FlauBERT](../model_doc/flaubert), [FNet](../model_doc/fnet), [Funnel Transformer](../model_doc/funnel), [I-BERT](../model_doc/ibert), [Longformer](../model_doc/longformer), [LUKE](../model_doc/luke), [MEGA](../model_doc/mega), [Megatron-BERT](../model_doc/megatron-bert), [MobileBERT](../model_doc/mobilebert), [MPNet](../model_doc/mpnet), [Nezha](../model_doc/nezha), [Nyströmformer](../model_doc/nystromformer), [QDQBert](../model_doc/qdqbert), [RemBERT](../model_doc/rembert), [RoBERTa](../model_doc/roberta), [RoBERTa-PreLayerNorm](../model_doc/roberta-prelayernorm), [RoCBert](../model_doc/roc_bert), [RoFormer](../model_doc/roformer), [SqueezeBERT](../model_doc/squeezebert), [XLM](../model_doc/xlm), [XLM-RoBERTa](../model_doc/xlm-roberta), [XLM-RoBERTa-XL](../model_doc/xlm-roberta-xl), [XLNet](../model_doc/xlnet), [X-MOD](../model_doc/xmod), [YOSO](../model_doc/yoso)

<!--End of the generated tip-->

</Tip>

์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ํ•„์š”ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๊ฐ€ ๋ชจ๋‘ ์„ค์น˜๋˜์–ด ์žˆ๋Š”์ง€ ํ™•์ธํ•˜์„ธ์š”:

```bash
pip install transformers datasets evaluate
```

๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๊ณ  ์ปค๋ฎค๋‹ˆํ‹ฐ์™€ ๊ณต์œ ํ•  ์ˆ˜ ์žˆ๋„๋ก ํ—ˆ๊น…ํŽ˜์ด์Šค ๊ณ„์ •์— ๋กœ๊ทธ์ธํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค.
๋ฉ”์‹œ์ง€๊ฐ€ ํ‘œ์‹œ๋˜๋ฉด ํ† ํฐ์„ ์ž…๋ ฅํ•˜์—ฌ ๋กœ๊ทธ์ธํ•ฉ๋‹ˆ๋‹ค: ```py >>> from huggingface_hub import notebook_login >>> notebook_login() ``` ## SWAG ๋ฐ์ดํ„ฐ ์„ธํŠธ ๊ฐ€์ ธ์˜ค๊ธฐ[[load-swag-dataset]] ๋จผ์ € ๐Ÿค— Datasets ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์—์„œ SWAG ๋ฐ์ดํ„ฐ์…‹์˜ '์ผ๋ฐ˜' ๊ตฌ์„ฑ์„ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค: ```py >>> from datasets import load_dataset >>> swag = load_dataset("swag", "regular") ``` ์ด์ œ ๋ฐ์ดํ„ฐ๋ฅผ ์‚ดํŽด๋ด…๋‹ˆ๋‹ค: ```py >>> swag["train"][0] {'ending0': 'passes by walking down the street playing their instruments.', 'ending1': 'has heard approaching them.', 'ending2': "arrives and they're outside dancing and asleep.", 'ending3': 'turns the lead singer watches the performance.', 'fold-ind': '3416', 'gold-source': 'gold', 'label': 0, 'sent1': 'Members of the procession walk down the street holding small horn brass instruments.', 'sent2': 'A drum line', 'startphrase': 'Members of the procession walk down the street holding small horn brass instruments. A drum line', 'video-id': 'anetv_jkn6uvmqwh4'} ``` ์—ฌ๊ธฐ์—๋Š” ๋งŽ์€ ํ•„๋“œ๊ฐ€ ์žˆ๋Š” ๊ฒƒ์ฒ˜๋Ÿผ ๋ณด์ด์ง€๋งŒ ์‹ค์ œ๋กœ๋Š” ๋งค์šฐ ๊ฐ„๋‹จํ•ฉ๋‹ˆ๋‹ค: - `sent1` ๋ฐ `sent2`: ์ด ํ•„๋“œ๋Š” ๋ฌธ์žฅ์ด ์–ด๋–ป๊ฒŒ ์‹œ์ž‘๋˜๋Š”์ง€ ๋ณด์—ฌ์ฃผ๋ฉฐ, ์ด ๋‘ ํ•„๋“œ๋ฅผ ํ•ฉ์น˜๋ฉด `์‹œ์ž‘ ๊ตฌ์ ˆ(startphrase)` ํ•„๋“œ๊ฐ€ ๋ฉ๋‹ˆ๋‹ค. - `์ข…๋ฃŒ ๊ตฌ์ ˆ(ending)`: ๋ฌธ์žฅ์ด ์–ด๋–ป๊ฒŒ ๋๋‚  ์ˆ˜ ์žˆ๋Š”์ง€์— ๋Œ€ํ•œ ๊ฐ€๋Šฅํ•œ ์ข…๋ฃŒ ๊ตฌ์ ˆ๋ฅผ ์ œ์‹œํ•˜์ง€๋งŒ ๊ทธ ์ค‘ ํ•˜๋‚˜๋งŒ ์ •๋‹ต์ž…๋‹ˆ๋‹ค. - `๋ ˆ์ด๋ธ”(label)`: ์˜ฌ๋ฐ”๋ฅธ ๋ฌธ์žฅ ์ข…๋ฃŒ ๊ตฌ์ ˆ์„ ์‹๋ณ„ํ•ฉ๋‹ˆ๋‹ค. ## ์ „์ฒ˜๋ฆฌ[[preprocess]] ๋‹ค์Œ ๋‹จ๊ณ„๋Š” ๋ฌธ์žฅ์˜ ์‹œ์ž‘๊ณผ ๋„ค ๊ฐ€์ง€ ๊ฐ€๋Šฅํ•œ ๊ตฌ์ ˆ์„ ์ฒ˜๋ฆฌํ•˜๊ธฐ ์œ„ํ•ด BERT ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๋ถˆ๋Ÿฌ์˜ต๋‹ˆ๋‹ค: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") ``` ์ƒ์„ฑํ•˜๋ ค๋Š” ์ „์ฒ˜๋ฆฌ ํ•จ์ˆ˜๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์•„์•ผ ํ•ฉ๋‹ˆ๋‹ค: 1. `sent1` ํ•„๋“œ๋ฅผ ๋„ค ๊ฐœ ๋ณต์‚ฌํ•œ ๋‹ค์Œ ๊ฐ๊ฐ์„ `sent2`์™€ ๊ฒฐํ•ฉํ•˜์—ฌ ๋ฌธ์žฅ์ด ์‹œ์ž‘๋˜๋Š” ๋ฐฉ์‹์„ ์žฌํ˜„ํ•ฉ๋‹ˆ๋‹ค. 2. 
`sent2`๋ฅผ ๋„ค ๊ฐ€์ง€ ๊ฐ€๋Šฅํ•œ ๋ฌธ์žฅ ๊ตฌ์ ˆ ๊ฐ๊ฐ๊ณผ ๊ฒฐํ•ฉํ•ฉ๋‹ˆ๋‹ค.
3. ์ด ๋‘ ๋ชฉ๋ก์„ ํ† ํฐํ™”ํ•  ์ˆ˜ ์žˆ๋„๋ก ํ‰ํƒ„ํ™”(flatten)ํ•˜๊ณ , ๊ฐ ์˜ˆ์ œ์— ํ•ด๋‹นํ•˜๋Š” `input_ids`, `attention_mask` ๋ฐ `labels` ํ•„๋“œ๋ฅผ ๊ฐ–๋„๋ก ๋‹ค์ฐจ์›ํ™”(unflatten)ํ•ฉ๋‹ˆ๋‹ค.

```py
>>> ending_names = ["ending0", "ending1", "ending2", "ending3"]

>>> def preprocess_function(examples):
...     first_sentences = [[context] * 4 for context in examples["sent1"]]
...     question_headers = examples["sent2"]
...     second_sentences = [
...         [f"{header} {examples[end][i]}" for end in ending_names] for i, header in enumerate(question_headers)
...     ]

...     first_sentences = sum(first_sentences, [])
...     second_sentences = sum(second_sentences, [])

...     tokenized_examples = tokenizer(first_sentences, second_sentences, truncation=True)
...     return {k: [v[i : i + 4] for i in range(0, len(v), 4)] for k, v in tokenized_examples.items()}
```

์ „์ฒด ๋ฐ์ดํ„ฐ ์ง‘ํ•ฉ์— ์ „์ฒ˜๋ฆฌ ๊ธฐ๋Šฅ์„ ์ ์šฉํ•˜๋ ค๋ฉด ๐Ÿค— Datasets [`~datasets.Dataset.map`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. `batched=True`๋ฅผ ์„ค์ •ํ•˜์—ฌ ๋ฐ์ดํ„ฐ ์ง‘ํ•ฉ์˜ ์—ฌ๋Ÿฌ ์š”์†Œ๋ฅผ ํ•œ ๋ฒˆ์— ์ฒ˜๋ฆฌํ•˜๋ฉด `map` ํ•จ์ˆ˜์˜ ์†๋„๋ฅผ ๋†’์ผ ์ˆ�˜ ์žˆ์Šต๋‹ˆ๋‹ค:

```py
tokenized_swag = swag.map(preprocess_function, batched=True)
```

๐Ÿค— Transformers์—๋Š” ๊ฐ๊ด€์‹์šฉ ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ๊ฐ€ ์—†์œผ๋ฏ€๋กœ ์˜ˆ์ œ ๋ฐฐ์น˜๋ฅผ ๋งŒ๋“ค๋ ค๋ฉด [`DataCollatorWithPadding`]์„ ์กฐ์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ฐ์ดํ„ฐ ์ •๋ ฌ ์ค‘์— ์ „์ฒด ๋ฐ์ดํ„ฐ ์ง‘ํ•ฉ์„ ์ตœ๋Œ€ ๊ธธ์ด๋กœ ํŒจ๋”ฉํ•˜๋Š” ๋Œ€์‹  ๋ฐฐ์น˜ ์ค‘ ๊ฐ€์žฅ ๊ธด ๊ธธ์ด๋กœ ๋ฌธ์žฅ์„ *๋™์  ํŒจ๋”ฉ*ํ•˜๋Š” ๊ฒƒ์ด ๋” ํšจ์œจ์ ์ž…๋‹ˆ๋‹ค. `DataCollatorForMultipleChoice`๋Š” ๋ชจ๋“  ๋ชจ๋ธ ์ž…๋ ฅ์„ ํ‰ํƒ„ํ™”ํ•˜๊ณ  ํŒจ๋”ฉ์„ ์ ์šฉํ•˜๋ฉฐ ๊ทธ ๊ฒฐ๊ณผ๋ฅผ ๋‹ค์ฐจ์›ํ™”ํ•ฉ๋‹ˆ๋‹ค:

<frameworkcontent>
<pt>
```py
>>> from dataclasses import dataclass
>>> from transformers.tokenization_utils_base import PreTrainedTokenizerBase, PaddingStrategy
>>> from typing import Optional, Union
>>> import torch

>>> @dataclass
... 
class DataCollatorForMultipleChoice: ... """ ... Data collator that will dynamically pad the inputs for multiple choice received. ... """ ... tokenizer: PreTrainedTokenizerBase ... padding: Union[bool, str, PaddingStrategy] = True ... max_length: Optional[int] = None ... pad_to_multiple_of: Optional[int] = None ... def __call__(self, features): ... label_name = "label" if "label" in features[0].keys() else "labels" ... labels = [feature.pop(label_name) for feature in features] ... batch_size = len(features) ... num_choices = len(features[0]["input_ids"]) ... flattened_features = [ ... [{k: v[i] for k, v in feature.items()} for i in range(num_choices)] for feature in features ... ] ... flattened_features = sum(flattened_features, []) ... batch = self.tokenizer.pad( ... flattened_features, ... padding=self.padding, ... max_length=self.max_length, ... pad_to_multiple_of=self.pad_to_multiple_of, ... return_tensors="pt", ... ) ... batch = {k: v.view(batch_size, num_choices, -1) for k, v in batch.items()} ... batch["labels"] = torch.tensor(labels, dtype=torch.int64) ... return batch ``` </pt> <tf> ```py >>> from dataclasses import dataclass >>> from transformers.tokenization_utils_base import PreTrainedTokenizerBase, PaddingStrategy >>> from typing import Optional, Union >>> import tensorflow as tf >>> @dataclass ... class DataCollatorForMultipleChoice: ... """ ... Data collator that will dynamically pad the inputs for multiple choice received. ... """ ... tokenizer: PreTrainedTokenizerBase ... padding: Union[bool, str, PaddingStrategy] = True ... max_length: Optional[int] = None ... pad_to_multiple_of: Optional[int] = None ... def __call__(self, features): ... label_name = "label" if "label" in features[0].keys() else "labels" ... labels = [feature.pop(label_name) for feature in features] ... batch_size = len(features) ... num_choices = len(features[0]["input_ids"]) ... flattened_features = [ ... 
[{k: v[i] for k, v in feature.items()} for i in range(num_choices)] for feature in features ... ] ... flattened_features = sum(flattened_features, []) ... batch = self.tokenizer.pad( ... flattened_features, ... padding=self.padding, ... max_length=self.max_length, ... pad_to_multiple_of=self.pad_to_multiple_of, ... return_tensors="tf", ... ) ... batch = {k: tf.reshape(v, (batch_size, num_choices, -1)) for k, v in batch.items()} ... batch["labels"] = tf.convert_to_tensor(labels, dtype=tf.int64) ... return batch ``` </tf> </frameworkcontent> ## ํ‰๊ฐ€ ํ•˜๊ธฐ[[evaluate]] ํ›ˆ๋ จ ์ค‘์— ๋ฉ”ํŠธ๋ฆญ์„ ํฌํ•จํ•˜๋ฉด ๋ชจ๋ธ์˜ ์„ฑ๋Šฅ์„ ํ‰๊ฐ€ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์Šต๋‹ˆ๋‹ค. ๐Ÿค—[Evaluate](https://huggingface.co/docs/evaluate/index) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ‰๊ฐ€ ๋ฐฉ๋ฒ•์„ ๋น ๋ฅด๊ฒŒ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ์ž‘์—…์—์„œ๋Š” [accuracy](https://huggingface.co/spaces/evaluate-metric/accuracy) ์ง€ํ‘œ๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค(๐Ÿค— Evaluate [๋‘˜๋Ÿฌ๋ณด๊ธฐ](https://huggingface.co/docs/evaluate/a_quick_tour)๋ฅผ ์ฐธ์กฐํ•˜์—ฌ ์ง€ํ‘œ๋ฅผ ๊ฐ€์ ธ์˜ค๊ณ  ๊ณ„์‚ฐํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด ์ž์„ธํžˆ ์•Œ์•„๋ณด์„ธ์š”): ```py >>> import evaluate >>> accuracy = evaluate.load("accuracy") ``` ๊ทธ๋ฆฌ๊ณ  ์˜ˆ์ธก๊ณผ ๋ ˆ์ด๋ธ”์„ [`~evaluate.EvaluationModule.compute`]์— ์ „๋‹ฌํ•˜์—ฌ ์ •ํ™•๋„๋ฅผ ๊ณ„์‚ฐํ•˜๋Š” ํ•จ์ˆ˜๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค: ```py >>> import numpy as np >>> def compute_metrics(eval_pred): ... predictions, labels = eval_pred ... predictions = np.argmax(predictions, axis=1) ... return accuracy.compute(predictions=predictions, references=labels) ``` ์ด์ œ `compute_metrics` ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์œผ๋ฉฐ, ํ›ˆ๋ จ์„ ์„ค์ •ํ•  ๋•Œ ์ด ํ•จ์ˆ˜๋กœ ๋Œ์•„๊ฐ€๊ฒŒ ๋ฉ๋‹ˆ๋‹ค. ## ํ›ˆ๋ จ ํ•˜๊ธฐ[[train]] <frameworkcontent> <pt> <Tip> [`Trainer`]๋กœ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐ ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ [์—ฌ๊ธฐ](../training#train-with-pytorch-trainer)๋ฅผ ์‚ดํŽด๋ณด์„ธ์š”! </Tip> ์ด์ œ ๋ชจ๋ธ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•  ์ค€๋น„๊ฐ€ ๋˜์—ˆ์Šต๋‹ˆ๋‹ค! 
[`AutoModelForMultipleChoice`]๋กœ BERT๋ฅผ ๋กœ๋“œํ•ฉ๋‹ˆ๋‹ค: ```py >>> from transformers import AutoModelForMultipleChoice, TrainingArguments, Trainer >>> model = AutoModelForMultipleChoice.from_pretrained("bert-base-uncased") ``` ์ด์ œ ์„ธ ๋‹จ๊ณ„๋งŒ ๋‚จ์•˜์Šต๋‹ˆ๋‹ค: 1. ํ›ˆ๋ จ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ [`TrainingArguments`]์— ์ •์˜ํ•ฉ๋‹ˆ๋‹ค. ์œ ์ผํ•œ ํ•„์ˆ˜ ๋งค๊ฐœ๋ณ€์ˆ˜๋Š” ๋ชจ๋ธ์„ ์ €์žฅํ•  ์œ„์น˜๋ฅผ ์ง€์ •ํ•˜๋Š” `output_dir`์ž…๋‹ˆ๋‹ค. `push_to_hub=True`๋ฅผ ์„ค์ •ํ•˜์—ฌ ์ด ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ์— ํ‘ธ์‹œํ•ฉ๋‹ˆ๋‹ค(๋ชจ๋ธ์„ ์—…๋กœ๋“œํ•˜๋ ค๋ฉด ํ—ˆ๊น… ํŽ˜์ด์Šค์— ๋กœ๊ทธ์ธํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค). ๊ฐ ์—ํญ์ด ๋๋‚  ๋•Œ๋งˆ๋‹ค [`Trainer`]๊ฐ€ ์ •ํ™•๋„๋ฅผ ํ‰๊ฐ€ํ•˜๊ณ  ํ›ˆ๋ จ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์ €์žฅํ•ฉ๋‹ˆ๋‹ค. 2. ๋ชจ๋ธ, ๋ฐ์ดํ„ฐ ์„ธํŠธ, ํ† ํฌ๋‚˜์ด์ €, ๋ฐ์ดํ„ฐ ์ฝœ๋ ˆ์ดํ„ฐ, `compute_metrics` ํ•จ์ˆ˜์™€ ํ•จ๊ป˜ ํ›ˆ๋ จ ์ธ์ž๋ฅผ [`Trainer`]์— ์ „๋‹ฌํ•ฉ๋‹ˆ๋‹ค. 3. [`~Trainer.train`]์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค. ```py >>> training_args = TrainingArguments( ... output_dir="my_awesome_swag_model", ... evaluation_strategy="epoch", ... save_strategy="epoch", ... load_best_model_at_end=True, ... learning_rate=5e-5, ... per_device_train_batch_size=16, ... per_device_eval_batch_size=16, ... num_train_epochs=3, ... weight_decay=0.01, ... push_to_hub=True, ... ) >>> trainer = Trainer( ... model=model, ... args=training_args, ... train_dataset=tokenized_swag["train"], ... eval_dataset=tokenized_swag["validation"], ... tokenizer=tokenizer, ... data_collator=DataCollatorForMultipleChoice(tokenizer=tokenizer), ... compute_metrics=compute_metrics, ... 
) >>> trainer.train() ``` ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด ๋ชจ๋“  ์‚ฌ๋žŒ์ด ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก [`~transformers.Trainer.push_to_hub`] ๋ฉ”์†Œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ์— ๊ณต์œ ํ•˜์„ธ์š”: ```py >>> trainer.push_to_hub() ``` </pt> <tf> <Tip> Keras๋กœ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐ ์ต์ˆ™ํ•˜์ง€ ์•Š๋‹ค๋ฉด ๊ธฐ๋ณธ ํŠœํ† ๋ฆฌ์–ผ [์—ฌ๊ธฐ](../training#train-a-tensorflow-model-with-keras)๋ฅผ ์‚ดํŽด๋ณด์‹œ๊ธฐ ๋ฐ”๋ž๋‹ˆ๋‹ค! </Tip> TensorFlow์—์„œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋ ค๋ฉด ์ตœ์ ํ™” ํ•จ์ˆ˜, ํ•™์Šต๋ฅ  ์Šค์ผ€์ฅด ๋ฐ ๋ช‡ ๊ฐ€์ง€ ํ•™์Šต ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์„ค์ •ํ•˜๋Š” ๊ฒƒ๋ถ€ํ„ฐ ์‹œ์ž‘ํ•˜์„ธ์š”: ```py >>> from transformers import create_optimizer >>> batch_size = 16 >>> num_train_epochs = 2 >>> total_train_steps = (len(tokenized_swag["train"]) // batch_size) * num_train_epochs >>> optimizer, schedule = create_optimizer(init_lr=5e-5, num_warmup_steps=0, num_train_steps=total_train_steps) ``` ๊ทธ๋ฆฌ๊ณ  [`TFAutoModelForMultipleChoice`]๋กœ BERT๋ฅผ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```py >>> from transformers import TFAutoModelForMultipleChoice >>> model = TFAutoModelForMultipleChoice.from_pretrained("bert-base-uncased") ``` [`~transformers.TFPreTrainedModel.prepare_tf_dataset`]์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ ์„ธํŠธ๋ฅผ `tf.data.Dataset` ํ˜•์‹์œผ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค: ```py >>> data_collator = DataCollatorForMultipleChoice(tokenizer=tokenizer) >>> tf_train_set = model.prepare_tf_dataset( ... tokenized_swag["train"], ... shuffle=True, ... batch_size=batch_size, ... collate_fn=data_collator, ... ) >>> tf_validation_set = model.prepare_tf_dataset( ... tokenized_swag["validation"], ... shuffle=False, ... batch_size=batch_size, ... collate_fn=data_collator, ... 
) ``` [`compile`](https://keras.io/api/models/model_training_apis/#compile-method)์„ ์‚ฌ์šฉํ•˜์—ฌ ํ›ˆ๋ จ ๋ชจ๋ธ์„ ๊ตฌ์„ฑํ•ฉ๋‹ˆ๋‹ค: ```py >>> model.compile(optimizer=optimizer) ``` ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•˜๊ธฐ ์ „์— ์„ค์ •ํ•ด์•ผ ํ•  ๋งˆ์ง€๋ง‰ ๋‘ ๊ฐ€์ง€๋Š” ์˜ˆ์ธก์˜ ์ •ํ™•๋„๋ฅผ ๊ณ„์‚ฐํ•˜๊ณ  ๋ชจ๋ธ์„ ํ—ˆ๋ธŒ๋กœ ํ‘ธ์‹œํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์ œ๊ณตํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด ๋‘ ๊ฐ€์ง€ ์ž‘์—…์€ ๋ชจ๋‘ [Keras ์ฝœ๋ฐฑ](../main_classes/keras_callbacks)์„ ์‚ฌ์šฉํ•˜์—ฌ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. `compute_metrics`ํ•จ์ˆ˜๋ฅผ [`~transformers.KerasMetricCallback`]์— ์ „๋‹ฌํ•˜์„ธ์š”: ```py >>> from transformers.keras_callbacks import KerasMetricCallback >>> metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=tf_validation_set) ``` ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ์—…๋กœ๋“œํ•  ์œ„์น˜๋ฅผ [`~transformers.PushToHubCallback`]์—์„œ ์ง€์ •ํ•˜์„ธ์š”: ```py >>> from transformers.keras_callbacks import PushToHubCallback >>> push_to_hub_callback = PushToHubCallback( ... output_dir="my_awesome_model", ... tokenizer=tokenizer, ... ) ``` ๊ทธ๋ฆฌ๊ณ  ์ฝœ๋ฐฑ์„ ํ•จ๊ป˜ ๋ฌถ์Šต๋‹ˆ๋‹ค: ```py >>> callbacks = [metric_callback, push_to_hub_callback] ``` ์ด์ œ ๋ชจ๋ธ ํ›ˆ๋ จ์„ ์‹œ์ž‘ํ•ฉ๋‹ˆ๋‹ค! ํ›ˆ๋ จ ๋ฐ ๊ฒ€์ฆ ๋ฐ์ดํ„ฐ ์„ธํŠธ, ์—ํญ ์ˆ˜, ์ฝœ๋ฐฑ์„ ์‚ฌ์šฉํ•˜์—ฌ [`fit`](https://keras.io/api/models/model_training_apis/#fit-method)์„ ํ˜ธ์ถœํ•˜๊ณ  ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค: ```py >>> model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=2, callbacks=callbacks) ``` ํ›ˆ๋ จ์ด ์™„๋ฃŒ๋˜๋ฉด ๋ชจ๋ธ์ด ์ž๋™์œผ๋กœ ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œ๋˜์–ด ๋ˆ„๊ตฌ๋‚˜ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค! </tf> </frameworkcontent> <Tip> ๊ฐ๊ด€์‹ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ๋ณด๋‹ค ์‹ฌ์ธต์ ์ธ ์˜ˆ๋Š” ์•„๋ž˜ ๋ฌธ์„œ๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. 
[PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb)
๋˜๋Š” [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb).

</Tip>

## ์ถ”๋ก  ํ•˜๊ธฐ[[inference]]

์ด์ œ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ–ˆ์œผ๋‹ˆ ์ถ”๋ก ์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค!

ํ…์ŠคํŠธ์™€ ๋‘ ๊ฐœ์˜ ํ›„๋ณด ๋‹ต์•ˆ์„ ์ž‘์„ฑํ•ฉ๋‹ˆ๋‹ค:

```py
>>> prompt = "France has a bread law, Le Décret Pain, with strict rules on what is allowed in a traditional baguette."
>>> candidate1 = "The law does not apply to croissants and brioche."
>>> candidate2 = "The law applies to baguettes."
```

<frameworkcontent>
<pt>
๊ฐ ํ”„๋กฌํ”„ํŠธ์™€ ํ›„๋ณด ๋‹ต๋ณ€ ์Œ์„ ํ† ํฐํ™”ํ•˜์—ฌ PyTorch ํ…์„œ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ `labels`์„ ์ƒ์„ฑํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_swag_model")
>>> inputs = tokenizer([[prompt, candidate1], [prompt, candidate2]], return_tensors="pt", padding=True)
>>> labels = torch.tensor(0).unsqueeze(0)
```

์ž…๋ ฅ๊ณผ ๋ ˆ์ด๋ธ”์„ ๋ชจ๋ธ์— ์ „๋‹ฌํ•˜๊ณ  `logits`์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> from transformers import AutoModelForMultipleChoice

>>> model = AutoModelForMultipleChoice.from_pretrained("my_awesome_swag_model")
>>> outputs = model(**{k: v.unsqueeze(0) for k, v in inputs.items()}, labels=labels)
>>> logits = outputs.logits
```

๊ฐ€์žฅ ๋†’์€ ํ™•๋ฅ ์„ ๊ฐ€์ง„ ํด๋ž˜์Šค๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค:

```py
>>> predicted_class = logits.argmax().item()
>>> predicted_class
0
```
</pt>
<tf>
๊ฐ ํ”„๋กฌํ”„ํŠธ์™€ ํ›„๋ณด ๋‹ต์•ˆ ์Œ์„ ํ† ํฐํ™”ํ•˜์—ฌ ํ…์„œํ”Œ๋กœ์šฐ ํ…์„œ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_swag_model")
>>> inputs = tokenizer([[prompt, candidate1], [prompt, candidate2]], return_tensors="tf", padding=True)
```

๋ชจ๋ธ์— ์ž…๋ ฅ์„ ์ „๋‹ฌํ•˜๊ณ  `logits`๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค:

```py
>>> from transformers import TFAutoModelForMultipleChoice

>>> model = TFAutoModelForMultipleChoice.from_pretrained("my_awesome_swag_model")
>>> inputs = {k: tf.expand_dims(v, 0) for k, v in inputs.items()}
>>> outputs = model(inputs)
>>> logits = outputs.logits
```

๊ฐ€์žฅ ๋†’์€ ํ™•๋ฅ ์„ ๊ฐ€์ง„ ํด๋ž˜์Šค๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค:

```py
>>> predicted_class = int(tf.math.argmax(logits, axis=-1)[0])
>>> predicted_class
0
```
</tf>
</frameworkcontent>
hf_public_repos/transformers/docs/source/ko/model_doc/llama2.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Llama2 [[llama2]]

## ๊ฐœ์š” [[overview]]

Llama2 ๋ชจ๋ธ์€ Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom์˜ ๋…ผ๋ฌธ [Llama 2: Open Foundation and Fine-Tuned Chat Models](https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/)์—์„œ ์ œ์•ˆ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ฑ„ํŒ… ์–ดํ”Œ๋ฆฌ์ผ€์ด์…˜์— ๋งž๊ฒŒ ๋ฏธ์„ธ ์กฐ์ •๋œ ์ฒดํฌํฌ์ธํŠธ๊ฐ€ ํฌํ•จ๋œ, 7B์—์„œ 70B ๋ฒ”์œ„์˜ ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ๊ฐ€์ง„ ๊ธฐ์ดˆ ์–ธ์–ด ๋ชจ๋ธ ๋ชจ์Œ์ž…๋‹ˆ๋‹ค!

๋…ผ๋ฌธ์˜ ์ดˆ๋ก์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค:

*์ด ์—ฐ๊ตฌ์—์„œ ์šฐ๋ฆฌ๋Š” 70์–ต์—์„œ 700์–ต ํŒŒ๋ผ๋ฏธํ„ฐ์˜ ๋ฒ”์œ„์—์„œ ์‚ฌ์ „ ํ›ˆ๋ จ ๋ฐ ๋ฏธ์„ธ ์กฐ์ •๋œ ๋Œ€๊ทœ๋ชจ ์–ธ์–ด ๋ชจ๋ธ(LLMs)์˜ ๋ชจ์Œ์ธ Llama 2๋ฅผ ๊ฐœ๋ฐœ ๋ฐ ๊ณต๊ฐœํ•ฉ๋‹ˆ๋‹ค. Llama 2-Chat๋ผ๊ณ  ๋ถˆ๋ฆฌ๋Š” ๋ฏธ์„ธ ์กฐ์ •๋œ LLMs์€ ๋Œ€ํ™” ์‚ฌ์šฉ ์‚ฌ๋ก€์— ์ตœ์ ํ™”๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์šฐ๋ฆฌ์˜ ๋ชจ๋ธ์€ ํ…Œ์ŠคํŠธํ•œ ๋Œ€๋ถ€๋ถ„์˜ ๋ฒค์น˜๋งˆํฌ์—์„œ ์˜คํ”ˆ ์†Œ์Šค ์ฑ„ํŒ… ๋ชจ๋ธ๋ณด๋‹ค ์„ฑ๋Šฅ์ด ๋›ฐ์–ด๋‚˜๋ฉฐ, ์œ ์šฉ์„ฑ๊ณผ ์•ˆ์ „์„ฑ์— ๋Œ€ํ•œ ์ธ์  ํ‰๊ฐ€๋ฅผ ๋ฐ”ํƒ•์œผ๋กœ ๋น„๊ณต๊ฐœ ์†Œ์Šค ๋ชจ๋ธ์„ ๋Œ€์ฒดํ•  ์ˆ˜ ์žˆ๋Š” ์ ์ ˆํ•œ ๋Œ€์•ˆ์ด ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์šฐ๋ฆฌ๋Š” Llama 2-Chat์˜ ๋ฏธ์„ธ ์กฐ์ • ๋ฐ ์•ˆ์ „์„ฑ ํ–ฅ์ƒ์˜ ์ ‘๊ทผ ๋ฐฉ์‹์— ๋Œ€ํ•œ ์ž์„ธํ•œ ์„ค๋ช…์„ ์ œ๊ณตํ•˜์—ฌ ์ปค๋ฎค๋‹ˆํ‹ฐ๊ฐ€ ์šฐ๋ฆฌ์˜ ์ž‘์—…์„ ๊ธฐ๋ฐ˜์œผ๋กœ LLMs์˜ ์ฑ…์ž„์žˆ๋Š” ๊ฐœ๋ฐœ์— ๊ธฐ์—ฌํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•ฉ๋‹ˆ๋‹ค.*

[์—ฌ๊ธฐ](https://huggingface.co/models?search=llama2)์—์„œ ๋ชจ๋“  Llama2 ๋ชจ๋ธ์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

<Tip warning={true}>

`Llama2` ๋ชจ๋ธ์€ `bfloat16`์„ ์‚ฌ์šฉํ•˜์—ฌ ํ›ˆ๋ จ๋˜์—ˆ์ง€๋งŒ, ์›๋ž˜ ์ถ”๋ก ์€ `float16`์„ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. ํ—ˆ๋ธŒ์— ์—…๋กœ๋“œ๋œ ์ฒดํฌํฌ์ธํŠธ๋Š” `torch_dtype = 'float16'`์„ ์‚ฌ์šฉํ•˜๋ฉฐ, ์ด๋Š” `AutoModel` API์— ์˜ํ•ด ์ฒดํฌํฌ์ธํŠธ๋ฅผ `torch.float32`์—์„œ `torch.float16`์œผ๋กœ ์บ์ŠคํŒ…ํ•˜๋Š” ๋ฐ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค.

์˜จ๋ผ์ธ ๊ฐ€์ค‘์น˜์˜ `dtype`์€ `model = AutoModelForCausalLM.from_pretrained("path", torch_dtype = "auto")`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ์ดˆ๊ธฐํ™”ํ•  ๋•Œ `torch_dtype="auto"`๋ฅผ ์‚ฌ์šฉํ•˜์ง€ ์•Š๋Š” ํ•œ ๋Œ€๋ถ€๋ถ„ ๊ด€๋ จ์ด ์—†์Šต๋‹ˆ๋‹ค.
๊ทธ ์ด์œ ๋Š” ๋ชจ๋ธ์ด ๋จผ์ € ๋‹ค์šด๋กœ๋“œ๋  ๊ฒƒ์ด๊ณ  (์˜จ๋ผ์ธ ์ฒดํฌํฌ์ธํŠธ์˜ `dtype`์„ ์‚ฌ์šฉํ•˜์—ฌ) ๊ทธ๋‹ค์Œ์— ๊ธฐ๋ณธ `dtype`์ธ `torch`๋กœ ์บ์ŠคํŒ…ํ•˜๊ณ (`torch.float32`๊ฐ€ ๋จ), ๋งˆ์ง€๋ง‰์œผ๋กœ ๊ตฌ์„ฑ(configuration)์—์„œ ์ œ๊ณต๋œ `torch_dtype`์ด ์žˆ๋Š” ๊ฒฝ์šฐ ์ด๋ฅผ ์‚ฌ์šฉํ•˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ๋ชจ๋ธ์„ `float16`์—์„œ ํ›ˆ๋ จํ•˜๋Š” ๊ฒƒ์€ ๊ถŒ์žฅ๋˜์ง€ ์•Š์œผ๋ฉฐ `nan`์„ ์ƒ์„ฑํ•˜๋Š” ๊ฒƒ์œผ๋กœ ์•Œ๋ ค์ ธ ์žˆ์Šต๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ๋ชจ๋ธ์€ `bfloat16`์—์„œ ํ›ˆ๋ จ๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. </Tip> ๐Ÿฏ ํŒ: - Llama2 ๋ชจ๋ธ์˜ ๊ฐ€์ค‘์น˜๋Š” [์ด ์–‘์‹](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)์„ ์ž‘์„ฑํ•˜์—ฌ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. - ์•„ํ‚คํ…์ฒ˜๋Š” ์ฒ˜์Œ ๋ฒ„์ „์˜ Llama์™€ ๋งค์šฐ ์œ ์‚ฌํ•˜๋ฉฐ, [์ด ๋…ผ๋ฌธ](https://arxiv.org/pdf/2305.13245.pdf)์˜ ๋‚ด์šฉ์— ๋”ฐ๋ผ Grouped Query Attention (GQA)์ด ์ถ”๊ฐ€๋˜์—ˆ์Šต๋‹ˆ๋‹ค. - `config.pretraining_tp`๋ฅผ 1๊ณผ ๋‹ค๋ฅธ ๊ฐ’์œผ๋กœ ์„ค์ •ํ•˜๋ฉด ๋” ์ •ํ™•ํ•˜์ง€๋งŒ ๋А๋ฆฐ ์„ ํ˜• ๋ ˆ์ด์–ด ๊ณ„์‚ฐ์ด ํ™œ์„ฑํ™”๋˜์–ด ์›๋ณธ ๋กœ์ง“๊ณผ ๋” ์ž˜ ์ผ์น˜ํ•˜๊ฒŒ ๋ฉ๋‹ˆ๋‹ค. - ์›๋ž˜ ๋ชจ๋ธ์€ `pad_id = -1`์„ ์‚ฌ์šฉํ•˜๋Š”๋ฐ, ์ด๋Š” ํŒจ๋”ฉ ํ† ํฐ์ด ์—†์Œ์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค. ๋™์ผํ•œ ๋กœ์ง์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์—†์œผ๋ฏ€๋กœ `tokenizer.add_special_tokens({"pad_token":"<pad>"})`๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํŒจ๋”ฉ ํ† ํฐ์„ ์ถ”๊ฐ€ํ•˜๊ณ  ์ด์— ๋”ฐ๋ผ ํ† ํฐ ์ž„๋ฒ ๋”ฉ ํฌ๊ธฐ๋ฅผ ์กฐ์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ `model.config.pad_token_id`๋ฅผ ์„ค์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ชจ๋ธ์˜ `embed_tokens` ๋ ˆ์ด์–ด๋Š” `self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.config.padding_idx)`๋กœ ์ดˆ๊ธฐํ™”๋˜์–ด, ํŒจ๋”ฉ ํ† ํฐ ์ธ์ฝ”๋”ฉ์ด 0์„ ์ถœ๋ ฅํ•˜๋„๋ก ํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ์ดˆ๊ธฐํ™” ์‹œ์— ์ „๋‹ฌํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค. - ์–‘์‹์„ ์ž‘์„ฑํ•˜๊ณ  ๋ชจ๋ธ ์ฒดํฌํฌ์ธํŠธ ์ ‘๊ทผ ๊ถŒํ•œ์„ ์–ป์€ ํ›„์—๋Š” ์ด๋ฏธ ๋ณ€ํ™˜๋œ ์ฒดํฌํฌ์ธํŠธ๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
๊ทธ๋ ‡์ง€ ์•Š๊ณ  ์ž์‹ ์˜ ๋ชจ๋ธ์„ ์ง์ ‘ ๋ณ€ํ™˜ํ•˜๋ ค๋Š” ๊ฒฝ์šฐ, [๋ณ€ํ™˜ ์Šคํฌ๋ฆฝํŠธ](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py)๋ฅผ ์ž์œ ๋กญ๊ฒŒ ์‚ฌ์šฉํ•˜์„ธ์š”. ์Šคํฌ๋ฆฝํŠธ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์€ ์˜ˆ์‹œ์˜ ๋ช…๋ น์–ด๋กœ ํ˜ธ์ถœํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค:

```bash
python src/transformers/models/llama/convert_llama_weights_to_hf.py \
    --input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path
```

- ๋ณ€ํ™˜ ํ›„ ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค:

```python
from transformers import LlamaForCausalLM, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("/output/path")
model = LlamaForCausalLM.from_pretrained("/output/path")
```

์Šคํฌ๋ฆฝํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋ ค๋ฉด ๋ชจ๋ธ์„ float16 ์ •๋ฐ€๋„๋กœ ์ „๋ถ€ ํ˜ธ์ŠคํŠธํ•  ์ˆ˜ ์žˆ์„ ๋งŒํผ ์ถฉ๋ถ„ํ•œ CPU RAM์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค (๊ฐ€์žฅ ํฐ ๋ฒ„์ „์ด ์—ฌ๋Ÿฌ ์ฒดํฌํฌ์ธํŠธ๋กœ ์ œ๊ณต๋˜๋”๋ผ๋„ ๊ฐ ์ฒดํฌํฌ์ธํŠธ๋Š” ๋ชจ๋ธ ๊ฐ€์ค‘์น˜์˜ ์ผ๋ถ€๋งŒ์„ ํฌํ•จํ•˜๋ฏ€๋กœ ๋ชจ๋‘ RAM์— ๋กœ๋“œํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค). 70B ๋ชจ๋ธ์˜ ๊ฒฝ์šฐ, ์ด 145GB์˜ RAM์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค.

- LLaMA ํ† ํฌ๋‚˜์ด์ €๋Š” [sentencepiece](https://github.com/google/sentencepiece)๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•œ BPE ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. sentencepiece์˜ ํŠน์ง• ์ค‘ ํ•˜๋‚˜๋Š” ์‹œํ€€์Šค๋ฅผ ๋””์ฝ”๋”ฉํ•  ๋•Œ ์ฒซ ๋ฒˆ์งธ ํ† ํฐ์ด ๋‹จ์–ด์˜ ์‹œ์ž‘์ด๋ฉด (์˜ˆ: "Banana") ํ† ํฌ๋‚˜์ด์ €๋Š” ๋ฌธ์ž์—ด ์•ž์— ์ ‘๋‘ ๊ณต๋ฐฑ(prefix space)์„ ์ถ”๊ฐ€ํ•˜์ง€ ์•Š๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค.

์ด ๋ชจ๋ธ์€ [Arthur Zucker](https://huggingface.co/ArthurZ)๊ฐ€ [Lysandre Debut](https://huggingface.co/lysandre)์˜ ๋„์›€์„ ๋ฐ›์•„ ์ œ๊ณตํ•˜์˜€์Šต๋‹ˆ๋‹ค. Hugging Face์—์„œ์˜ ๊ตฌํ˜„ ์ฝ”๋“œ๋Š” [์—ฌ๊ธฐ](https://github.com/EleutherAI/gpt-neox)์˜ GPT-NeoX๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•ฉ๋‹ˆ๋‹ค. ์ €์ž์˜ ์›๋ž˜ ์ฝ”๋“œ๋Š” [์—ฌ๊ธฐ](https://github.com/facebookresearch/llama)์—์„œ ์ฐพ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
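์œ„ ํŒ์—์„œ ์„ค๋ช…ํ•œ "ํŒจ๋”ฉ ํ† ํฐ ์ถ”๊ฐ€ ํ›„ ํ† ํฐ ์ž„๋ฒ ๋”ฉ ํฌ๊ธฐ ์กฐ์ •"์˜ ํ๋ฆ„์„, transformers ํ˜ธ์ถœ ์—†์ด ์ˆœ์ˆ˜ ํŒŒ์ด์ฌ์œผ๋กœ ํ‰๋‚ด ๋‚ธ ์Šค์ผ€์น˜์ž…๋‹ˆ๋‹ค. ์•„๋ž˜์˜ ์–ดํœ˜์™€ ์ž„๋ฒ ๋”ฉ ๊ฐ’์€ ์„ค๋ช…์„ ์œ„ํ•ด ๊ฐ€์ •ํ•œ ๊ฐ€์ƒ์˜ ๊ฐ’์ด๋ฉฐ, ์‹ค์ œ๋กœ๋Š” `tokenizer.add_special_tokens`์™€ `model.resize_token_embeddings`๊ฐ€ ์ด ์—ญํ• ์„ ๋Œ€์‹ ํ•ฉ๋‹ˆ๋‹ค:

```py
# ๊ฐ€์ƒ์˜ ์ตœ์†Œ ์˜ˆ์‹œ: ์–ดํœ˜์— ํŒจ๋”ฉ ํ† ํฐ์„ ์ถ”๊ฐ€ํ•˜๊ณ  ์ž„๋ฒ ๋”ฉ ํ–‰๋ ฌ์„ ๊ทธ์— ๋งž๊ฒŒ ํ‚ค์›๋‹ˆ๋‹ค.
vocab = {"<s>": 0, "</s>": 1, "Hello": 2}
hidden_size = 4
# ๊ฐ ํ† ํฐ๋งˆ๋‹ค hidden_size ๊ธธ์ด์˜ ์ž„๋ฒ ๋”ฉ ๋ฒกํ„ฐ (์—ฌ๊ธฐ์„œ๋Š” ๋”๋ฏธ ๊ฐ’)
embeddings = [[0.1] * hidden_size for _ in vocab]


def add_pad_token(vocab, embeddings, pad_token="<pad>"):
    # ์ด๋ฏธ ์žˆ์œผ๋ฉด ๊ทธ๋Œ€๋กœ id๋งŒ ๋ฐ˜ํ™˜ (๋ฉฑ๋“ฑ)
    if pad_token not in vocab:
        vocab[pad_token] = len(vocab)
        # ํŒจ๋”ฉ ํ† ํฐ์˜ ์ž„๋ฒ ๋”ฉ์€ 0 ๋ฒกํ„ฐ๋กœ ์ดˆ๊ธฐํ™” (padding_idx์˜ "0 ์ถœ๋ ฅ" ๋™์ž‘๊ณผ ์œ ์‚ฌ)
        embeddings.append([0.0] * len(embeddings[0]))
    return vocab[pad_token]


pad_token_id = add_pad_token(vocab, embeddings)
print(pad_token_id, len(embeddings))
```

์‹ค์ œ ๋ชจ๋ธ์—์„œ๋Š” ์ด ๋’ค์— `model.config.pad_token_id = pad_token_id`์— ํ•ด๋‹นํ•˜๋Š” ์„ค์ •์ด ์ถ”๊ฐ€๋กœ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค.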
## ๋ฆฌ์†Œ์Šค [[resources]] LLaMA2๋ฅผ ์‹œ์ž‘ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋  Hugging Face์˜ ๊ณต์‹ ๋ฐ ์ปค๋ฎค๋‹ˆํ‹ฐ(๐ŸŒŽ๋กœ ํ‘œ์‹œ) ๋ฆฌ์†Œ์Šค ๋ชฉ๋ก์ž…๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์— ์ƒˆ๋กœ์šด ๋ฆฌ์†Œ์Šค๋ฅผ ์ถ”๊ฐ€ํ•˜๊ธฐ ์œ„ํ•ด์„œ Pull Request๋ฅผ ์—ด์–ด ์ฃผ์‹œ๋ฉด ๊ฒ€ํ† ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค! ๋ฆฌ์†Œ์Šค๋Š” ๊ธฐ์กด ๋ฆฌ์†Œ์Šค์™€ ์ค‘๋ณต๋˜์ง€ ์•Š๋Š” ์ƒˆ๋กœ์šด ๊ฒƒ์„ ๋ณด์—ฌ์ฃผ๋Š” ๊ฒƒ์ด ์ด์ƒ์ ์ž…๋‹ˆ๋‹ค. - [Llama 2 is here - get it on Hugging Face](https://huggingface.co/blog/llama2), Llama 2์— ๊ด€ํ•œ ๋ธ”๋กœ๊ทธ ํฌ์ŠคํŠธ์™€ ๐Ÿค— Transformers ๋ฐ ๐Ÿค— PEFT์™€ ํ•จ๊ป˜ ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ๋‚ด์šฉ์ž…๋‹ˆ๋‹ค. - [LLaMA 2 - Every Resource you need](https://www.philschmid.de/llama-2), LLaMA 2์— ๋Œ€ํ•ด ์•Œ์•„๋ณด๊ณ  ๋น ๋ฅด๊ฒŒ ์‹œ์ž‘ํ•˜๋Š” ๋ฐ ํ•„์š”ํ•œ ๊ด€๋ จ ๋ฆฌ์†Œ์Šค์˜ ๋ชจ์Œ์ž…๋‹ˆ๋‹ค. <PipelineTag pipeline="text-generation"/> - Google Colab์—์„œ QLoRA์™€ 4-bit ์ •๋ฐ€๋„๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ Llama 2๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ [๋…ธํŠธ๋ถ](https://colab.research.google.com/drive/1PEQyJO1-f6j0S_XJ8DV50NkpzasXkrzd?usp=sharing)์ž…๋‹ˆ๋‹ค. ๐ŸŒŽ - "Llama-v2-7b-guanaco" ๋ชจ๋ธ์„ 4-bit QLoRA๋กœ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  PDF์—์„œ Q&A ๋ฐ์ดํ„ฐ์…‹์„ ์ƒ์„ฑํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ [๋…ธํŠธ๋ถ](https://colab.research.google.com/drive/134o_cXcMe_lsvl15ZE_4Y75Kstepsntu?usp=sharing)์ž…๋‹ˆ๋‹ค. ๐ŸŒŽ โš—๏ธ ์ตœ์ ํ™” - [Llama 2๋ฅผ DPO๋กœ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://huggingface.co/blog/dpo-trl), TRL ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ DPO ๋ฐฉ๋ฒ•์„ ์‚ฌ์šฉํ•˜์—ฌ ํŠน์ • ๋ฐ์ดํ„ฐ์…‹์—์„œ Llama 2๋ฅผ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์•ˆ๋‚ดํ•˜๋Š” ๊ฐ€์ด๋“œ์ž…๋‹ˆ๋‹ค. - [ํ™•์žฅ ๊ฐ€์ด๋“œ: Llama 2 ๋ช…๋ น์–ด ์กฐ์ •](https://www.philschmid.de/instruction-tune-llama-2), ์ž…๋ ฅ์—์„œ ๋ช…๋ น์–ด๋ฅผ ์ƒ์„ฑํ•˜๋„๋ก Llama 2๋ฅผ ํ›ˆ๋ จ์‹œํ‚ค๋Š” ๋ฐฉ๋ฒ•์„ ์•ˆ๋‚ดํ•˜๋Š” ๊ฐ€์ด๋“œ๋กœ, ๋ช…๋ น์–ด๋ฅผ ๋”ฐ๋ฅด๋Š” ๋ชจ๋ธ์—์„œ ๋ช…๋ น์–ด๋ฅผ ์ฃผ๋Š” ๋ชจ๋ธ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค. 
- ๊ฐœ์ธ ์ปดํ“จํ„ฐ์—์„œ QLoRA์™€ TRL์„ ์‚ฌ์šฉํ•˜์—ฌ Llama 2 ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ [๋…ธํŠธ๋ถ](https://colab.research.google.com/drive/1SYpgFpcmtIUzdE7pxqknrM4ArCASfkFQ?usp=sharing)์ž…๋‹ˆ๋‹ค. ๐ŸŒŽ โšก๏ธ ์ถ”๋ก  - AutoGPTQ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ GPTQ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ Llama 2 ๋ชจ๋ธ์„ ์–‘์žํ™”ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ [๋…ธํŠธ๋ถ](https://colab.research.google.com/drive/1TC56ArKerXUpbgRy5vM3woRsbTEVNq7h?usp=sharing)์ž…๋‹ˆ๋‹ค. ๐ŸŒŽ - ๋กœ์ปฌ ์ปดํ“จํ„ฐ๋‚˜ Google Colab์—์„œ 4-bit ์–‘์žํ™”๋กœ Llama 2 ์ฑ„ํŒ… ๋ชจ๋ธ์„ ์‹คํ–‰ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ [๋…ธํŠธ๋ถ](https://colab.research.google.com/drive/1X1z9Q6domMKl2CnEM0QGHNwidLfR4dW2?usp=sharing)์ž…๋‹ˆ๋‹ค. ๐ŸŒŽ ๐Ÿš€ ๋ฐฐํฌ - [Amazon SageMaker์—์„œ LLaMA 2 (7-70B) ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ](https://www.philschmid.de/sagemaker-llama2-qlora), Amazon SageMaker์—์„œ QLoRA ๋ฏธ์„ธ ์กฐ์ • ๋ฐ ๋ฐฐํฌ์— ์ด๋ฅด๊ธฐ๊นŒ์ง€์˜ ์™„์ „ํ•œ ๊ฐ€์ด๋“œ์ž…๋‹ˆ๋‹ค. - [Amazon SageMaker์—์„œ Llama 2 7B/13B/70B ๋ฐฐํฌํ•˜๊ธฐ](https://www.philschmid.de/sagemaker-llama-llm), ์•ˆ์ „ํ•˜๊ณ  ํ™•์žฅ ๊ฐ€๋Šฅํ•œ ๋ฐฐํฌ๋ฅผ ์œ„ํ•ด Hugging Face์˜ LLM DLC ์ปจํ…Œ์ด๋„ˆ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ๊ฐ€์ด๋“œ์ž…๋‹ˆ๋‹ค. ## LlamaConfig [[llamaconfig]] [[autodoc]] LlamaConfig ## LlamaTokenizer [[llamatokenizer]] [[autodoc]] LlamaTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## LlamaTokenizerFast [[llamatokenizerfast]] [[autodoc]] LlamaTokenizerFast - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - update_post_processor - save_vocabulary ## LlamaModel [[llamamodel]] [[autodoc]] LlamaModel - forward ## LlamaForCausalLM [[llamaforcausallm]] [[autodoc]] LlamaForCausalLM - forward ## LlamaForSequenceClassification [[llamaforsequenceclassification]] [[autodoc]] LlamaForSequenceClassification - forward
hf_public_repos/transformers/docs/source/ko/model_doc/llama.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # LLaMA [[llama]] ## ๊ฐœ์š” [[overview]] LLaMA ๋ชจ๋ธ์€ Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothรฉe Lacroix, Baptiste Roziรจre, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample์— ์˜ํ•ด ์ œ์•ˆ๋œ [LLaMA: Open and Efficient Foundation Language Models](https://arxiv.org/abs/2302.13971)์—์„œ ์†Œ๊ฐœ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ด ๋ชจ๋ธ์€ 7B์—์„œ 65B๊ฐœ์˜ ํŒŒ๋ผ๋ฏธํ„ฐ๊นŒ์ง€ ๋‹ค์–‘ํ•œ ํฌ๊ธฐ์˜ ๊ธฐ์ดˆ ์–ธ์–ด ๋ชจ๋ธ์„ ๋ชจ์•„๋†“์€ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๋…ผ๋ฌธ์˜ ์ดˆ๋ก์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค: *"LLaMA๋Š” 7B์—์„œ 65B๊ฐœ์˜ ํŒŒ๋ผ๋ฏธํ„ฐ ์ˆ˜๋ฅผ ๊ฐ€์ง„ ๊ธฐ์ดˆ ์–ธ์–ด ๋ชจ๋ธ์˜ ๋ชจ์Œ์ž…๋‹ˆ๋‹ค. ์šฐ๋ฆฌ๋Š” ์ˆ˜์กฐ ๊ฐœ์˜ ํ† ํฐ์œผ๋กœ ๋ชจ๋ธ์„ ํ›ˆ๋ จ์‹œ์ผฐ๊ณ , ๊ณต๊ฐœ์ ์œผ๋กœ ์ด์šฉ ๊ฐ€๋Šฅํ•œ ๋ฐ์ดํ„ฐ์…‹๋งŒ์„ ์‚ฌ์šฉํ•˜์—ฌ ์ตœ๊ณ  ์ˆ˜์ค€์˜ ๋ชจ๋ธ์„ ํ›ˆ๋ จ์‹œํ‚ฌ ์ˆ˜ ์žˆ์Œ์„ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค. ํŠนํžˆ, LLaMA-13B ๋ชจ๋ธ์€ ๋Œ€๋ถ€๋ถ„์˜ ๋ฒค์น˜๋งˆํฌ์—์„œ GPT-3 (175B)๋ฅผ ๋Šฅ๊ฐ€ํ•˜๋ฉฐ, LLaMA-65B๋Š” ์ตœ๊ณ  ์ˆ˜์ค€์˜ ๋ชจ๋ธ์ธ Chinchilla-70B์™€ PaLM-540B์— ๋ฒ„๊ธˆ๊ฐ€๋Š” ์„ฑ๋Šฅ์„ ๋ณด์ž…๋‹ˆ๋‹ค. 
์šฐ๋ฆฌ๋Š” ๋ชจ๋“  ๋ชจ๋ธ์„ ์—ฐ๊ตฌ ์ปค๋ฎค๋‹ˆํ‹ฐ์— ๊ณต๊ฐœํ•ฉ๋‹ˆ๋‹ค."* ํŒ: - LLaMA ๋ชจ๋ธ์˜ ๊ฐ€์ค‘์น˜๋Š” [์ด ์–‘์‹](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform?usp=send_form)์„ ์ž‘์„ฑํ•˜์—ฌ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. - ๊ฐ€์ค‘์น˜๋ฅผ ๋‹ค์šด๋กœ๋“œํ•œ ํ›„์—๋Š” ์ด๋ฅผ [๋ณ€ํ™˜ ์Šคํฌ๋ฆฝํŠธ](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ Hugging Face Transformers ํ˜•์‹์œผ๋กœ ๋ณ€ํ™˜ํ•ด์•ผํ•ฉ๋‹ˆ๋‹ค. ๋ณ€ํ™˜ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์‹คํ–‰ํ•˜๋ ค๋ฉด ์•„๋ž˜์˜ ์˜ˆ์‹œ ๋ช…๋ น์–ด๋ฅผ ์ฐธ๊ณ ํ•˜์„ธ์š”: ```bash python src/transformers/models/llama/convert_llama_weights_to_hf.py \ --input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path ``` - ๋ณ€ํ™˜์„ ํ•˜์˜€๋‹ค๋ฉด ๋ชจ๋ธ๊ณผ ํ† ํฌ๋‚˜์ด์ €๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค: ```python from transformers import LlamaForCausalLM, LlamaTokenizer tokenizer = LlamaTokenizer.from_pretrained("/output/path") model = LlamaForCausalLM.from_pretrained("/output/path") ``` ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์‹คํ–‰ํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” ๋ชจ๋ธ์„ float16 ์ •๋ฐ€๋„๋กœ ์ „๋ถ€ ๋กœ๋“œํ•  ์ˆ˜ ์žˆ์„ ๋งŒํผ์˜ ์ถฉ๋ถ„ํ•œ CPU RAM์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. (๊ฐ€์žฅ ํฐ ๋ฒ„์ „์˜ ๋ชจ๋ธ์ด ์—ฌ๋Ÿฌ ์ฒดํฌํฌ์ธํŠธ๋กœ ๋‚˜๋‰˜์–ด ์žˆ๋”๋ผ๋„, ๊ฐ ์ฒดํฌํฌ์ธํŠธ๋Š” ๋ชจ๋ธ์˜ ๊ฐ ๊ฐ€์ค‘์น˜์˜ ์ผ๋ถ€๋ฅผ ํฌํ•จํ•˜๊ณ  ์žˆ๊ธฐ ๋•Œ๋ฌธ์— ๋ชจ๋“  ์ฒดํฌํฌ์ธํŠธ๋ฅผ RAM์— ๋กœ๋“œํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค) 65B ๋ชจ๋ธ์˜ ๊ฒฝ์šฐ, ์ด 130GB์˜ RAM์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. - LLaMA ํ† ํฌ๋‚˜์ด์ €๋Š” [sentencepiece](https://github.com/google/sentencepiece)๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•˜๋Š” BPE ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. sentencepiece์˜ ํŠน์ง• ์ค‘ ํ•˜๋‚˜๋Š” ์‹œํ€€์Šค๋ฅผ ๋””์ฝ”๋”ฉํ•  ๋•Œ ์ฒซ ํ† ํฐ์ด ๋‹จ์–ด์˜ ์‹œ์ž‘์ด๋ผ๋ฉด (์˜ˆ๋ฅผ ๋“ค์–ด "Banana"), ํ† ํฌ๋‚˜์ด์ €๋Š” ๋ฌธ์ž์—ด ์•ž์— ๊ณต๋ฐฑ์„ ์ถ”๊ฐ€ํ•˜์ง€ ์•Š๋Š”๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. 
์ด ๋ชจ๋ธ์€ [BlackSamorez](https://huggingface.co/BlackSamorez)์˜ ๊ธฐ์—ฌ์™€ ํ•จ๊ป˜, [zphang](https://huggingface.co/zphang)์— ์˜ํ•ด ์ œ๊ณต๋˜์—ˆ์Šต๋‹ˆ๋‹ค. Hugging Face์—์„œ์˜ ๊ตฌํ˜„ ์ฝ”๋“œ๋Š” GPT-NeoX๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•˜๋ฉฐ [์—ฌ๊ธฐ](https://github.com/EleutherAI/gpt-neox)์—์„œ ์ฐพ์„ ์ˆ˜ ์žˆ๊ณ , ์ €์ž์˜ ์ฝ”๋“œ ์›๋ณธ์€ [์—ฌ๊ธฐ](https://github.com/facebookresearch/llama)์—์„œ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์›๋ž˜ LLaMA ๋ชจ๋ธ์„ ๊ธฐ๋ฐ˜์œผ๋กœ Meta AI์—์„œ ๋ช‡ ๊ฐ€์ง€ ํ›„์† ์ž‘์—…์„ ๋ฐœํ‘œํ–ˆ์Šต๋‹ˆ๋‹ค: - **Llama2**: Llama2๋Š” ๊ตฌ์กฐ์ ์ธ ๋ช‡ ๊ฐ€์ง€ ์ˆ˜์ •(Grouped Query Attention)์„ ํ†ตํ•ด ๊ฐœ์„ ๋œ ๋ฒ„์ „์ด๋ฉฐ, 2์กฐ ๊ฐœ์˜ ํ† ํฐ์œผ๋กœ ์‚ฌ์ „ ํ›ˆ๋ จ์ด ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. Llama2์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ [์ด ๋ฌธ์„œ](llama2)๋ฅผ ์ฐธ๊ณ ํ•˜์„ธ์š”. ## ๋ฆฌ์†Œ์Šค [[resources]] LLaMA๋ฅผ ์‹œ์ž‘ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋  Hugging Face ๋ฐ ์ปค๋ฎค๋‹ˆํ‹ฐ(๐ŸŒŽ๋กœ ํ‘œ์‹œ)์˜ ๊ณต์‹ ์ž๋ฃŒ ๋ชฉ๋ก์ž…๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์— ์ž๋ฃŒ๋ฅผ ์ œ์ถœํ•˜๊ณ  ์‹ถ๋‹ค๋ฉด Pull Request๋ฅผ ์˜ฌ๋ ค์ฃผ์„ธ์š”! ์ถ”๊ฐ€ํ•  ์ž๋ฃŒ๋Š” ๊ธฐ์กด์˜ ์ž๋ฃŒ์™€ ์ค‘๋ณต๋˜์ง€ ์•Š๊ณ  ์ƒˆ๋กœ์šด ๋‚ด์šฉ์„ ๋ณด์—ฌ์ฃผ๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค. 
<PipelineTag pipeline="text-classification"/> - LLaMA ๋ชจ๋ธ์„ ํ…์ŠคํŠธ ๋ถ„๋ฅ˜ ์ž‘์—…์— ์ ์šฉํ•˜๊ธฐ ์œ„ํ•œ ํ”„๋กฌํ”„ํŠธ ํŠœ๋‹ ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ [๋…ธํŠธ๋ถ](https://colab.research.google.com/github/bigscience-workshop/petals/blob/main/examples/prompt-tuning-sst2.ipynb#scrollTo=f04ba4d2) ๐ŸŒŽ <PipelineTag pipeline="question-answering"/> - [Stack Exchange](https://stackexchange.com/)์—์„œ ์งˆ๋ฌธ์— ๋‹ตํ•˜๋Š” LLaMA๋ฅผ ํ›ˆ๋ จํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์œ„ํ•œ [StackLLaMA: RLHF๋กœ LLaMA๋ฅผ ํ›ˆ๋ จํ•˜๋Š” ์‹ค์ „ ๊ฐ€์ด๋“œ](https://huggingface.co/blog/stackllama#stackllama-a-hands-on-guide-to-train-llama-with-rlhf) ๐ŸŒŽ โš—๏ธ ์ตœ์ ํ™” - ์ œํ•œ๋œ ๋ฉ”๋ชจ๋ฆฌ๋ฅผ ๊ฐ€์ง„ GPU์—์„œ xturing ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ LLaMA ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ [๋…ธํŠธ๋ถ](https://colab.research.google.com/drive/1SQUXq1AMZPSLD4mk3A3swUIc6Y2dclme?usp=sharing) ๐ŸŒŽ โšก๏ธ ์ถ”๋ก  - ๐Ÿค— PEFT ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์˜ PeftModel์„ ์‚ฌ์šฉํ•˜์—ฌ LLaMA ๋ชจ๋ธ์„ ์‹คํ–‰ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ [๋…ธํŠธ๋ถ](https://colab.research.google.com/github/DominguesM/alpaca-lora-ptbr-7b/blob/main/notebooks/02%20-%20Evaluate.ipynb) ๐ŸŒŽ - LangChain์„ ์‚ฌ์šฉํ•˜์—ฌ PEFT ์–ด๋Œ‘ํ„ฐ LLaMA ๋ชจ๋ธ์„ ๋กœ๋“œํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ [๋…ธํŠธ๋ถ](https://colab.research.google.com/drive/1l2GiSSPbajVyp2Nk3CFT4t3uH6-5TiBe?usp=sharing) ๐ŸŒŽ ๐Ÿš€ ๋ฐฐํฌ - ๐Ÿค— PEFT ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์™€ ์‚ฌ์šฉ์ž ์นœํ™”์ ์ธ UI๋กœ LLaMA ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ [๋…ธํŠธ๋ถ](https://colab.research.google.com/github/lxe/simple-llama-finetuner/blob/master/Simple_LLaMA_FineTuner.ipynb#scrollTo=3PM_DilAZD8T) ๐ŸŒŽ - Amazon SageMaker์—์„œ ํ…์ŠคํŠธ ์ƒ์„ฑ์„ ์œ„ํ•ด Open-LLaMA ๋ชจ๋ธ์„ ๋ฐฐํฌํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ [๋…ธํŠธ๋ถ](https://github.com/aws/amazon-sagemaker-examples/blob/main/introduction_to_amazon_algorithms/jumpstart-foundation-models/text-generation-open-llama.ipynb) ๐ŸŒŽ ## LlamaConfig [[llamaconfig]] [[autodoc]] LlamaConfig ## LlamaTokenizer [[llamatokenizer]] [[autodoc]] LlamaTokenizer - 
build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## LlamaTokenizerFast [[llamatokenizerfast]] [[autodoc]] LlamaTokenizerFast - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - update_post_processor - save_vocabulary ## LlamaModel [[llamamodel]] [[autodoc]] LlamaModel - forward ## LlamaForCausalLM [[llamaforcausallm]] [[autodoc]] LlamaForCausalLM - forward ## LlamaForSequenceClassification [[llamaforsequenceclassification]] [[autodoc]] LlamaForSequenceClassification - forward
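์œ„์— ๋ฌธ์„œํ™”๋œ ํด๋ž˜์Šค ์ค‘ [`LlamaConfig`]๋Š” ๊ฐ€์ค‘์น˜ ๋‹ค์šด๋กœ๋“œ ์—†์ด๋„ ์ƒ์„ฑํ•ด ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์•„๋ž˜๋Š” ๊ธฐ๋ณธ ๊ตฌ์„ฑ๊ฐ’(7B ๋ชจ๋ธ๊ณผ ์œ ์‚ฌํ•œ ๊ตฌ์„ฑ)์„ ํ™•์ธํ•˜๋Š” ๊ฐ„๋‹จํ•œ ์˜ˆ์‹œ ์Šค์ผ€์น˜์ž…๋‹ˆ๋‹ค:

```python
from transformers import LlamaConfig

# ๊ธฐ๋ณธ LlamaConfig๋Š” LLaMA-7B์™€ ์œ ์‚ฌํ•œ ๊ตฌ์„ฑ์„ ๊ฐ€์ง‘๋‹ˆ๋‹ค.
config = LlamaConfig()
print(config.hidden_size)          # 4096
print(config.num_hidden_layers)    # 32
print(config.num_attention_heads)  # 32
```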
hf_public_repos/transformers/docs/source/ko/model_doc/whisper.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Whisper [[whisper]]

## ๊ฐœ์š” [[overview]]

Whisper ๋ชจ๋ธ์€ Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever์— ์˜ํ•ด [Robust Speech Recognition via Large-Scale Weak Supervision](https://cdn.openai.com/papers/whisper.pdf)์—์„œ ์ œ์•ˆ๋˜์—ˆ์Šต๋‹ˆ๋‹ค.

๋…ผ๋ฌธ์˜ ์ดˆ๋ก์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค:

*์šฐ๋ฆฌ๋Š” ์ธํ„ฐ๋„ท์—์„œ ๋Œ€๋Ÿ‰์˜ ์˜ค๋””์˜ค๋ฅผ ๊ธ€๋กœ ์˜ฎ๊ธด ๊ฒƒ์„ ์˜ˆ์ธกํ•˜๋„๋ก ๊ฐ„๋‹จํžˆ ํ›ˆ๋ จ๋œ ์Œ์„ฑ ์ฒ˜๋ฆฌ ์‹œ์Šคํ…œ์˜ ์„ฑ๋Šฅ์„ ์—ฐ๊ตฌํ•ฉ๋‹ˆ๋‹ค. 68๋งŒ ์‹œ๊ฐ„์˜ ๋‹ค๊ตญ์–ด ๋ฐ ๋‹ค์ค‘ ์ž‘์—… ์ง€๋„(multitask supervision)๋กœ ํ™•์žฅํ–ˆ์„ ๋•Œ, ๊ฒฐ๊ณผ ๋ชจ๋ธ์€ ํ‘œ์ค€ ๋ฒค์น˜๋งˆํฌ์— ์ž˜ ์ผ๋ฐ˜ํ™”๋˜๋ฉฐ, ๋ฏธ์„ธ ์กฐ์ •์ด ํ•„์š” ์—†๋Š” ์ œ๋กœ์ƒท ์ „์ด(zero-shot transfer) ์„ค์ •์—์„œ ์ด์ „์˜ ์™„์ „ ์ง€๋„(fully-supervised) ๋ฐฉ์‹์˜ ๊ฒฐ๊ณผ์™€ ๊ฒฝ์Ÿํ•  ์ˆ˜ ์žˆ๋Š” ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์Šต๋‹ˆ๋‹ค. ์‚ฌ๋žŒ๊ณผ ๋น„๊ตํ•˜๋ฉด, ์ด ๋ชจ๋ธ์€ ์‚ฌ๋žŒ์˜ ์ •ํ™•๋„์™€ ๊ฒฌ๊ณ ์„ฑ์— ๊ทผ์ ‘ํ•ฉ๋‹ˆ๋‹ค. ์šฐ๋ฆฌ๋Š” ๊ฐ•๋ ฅํ•œ ์Œ์„ฑ ์ฒ˜๋ฆฌ๋ฅผ ์œ„ํ•œ ์ถ”๊ฐ€ ์ž‘์—…์˜ ๊ธฐ๋ฐ˜์ด ๋  ๋ชจ๋ธ๊ณผ ์ถ”๋ก  ์ฝ”๋“œ๋ฅผ ๊ณต๊ฐœํ•ฉ๋‹ˆ๋‹ค.*

ํŒ:

- ์ด ๋ชจ๋ธ์€ ์ผ๋ฐ˜์ ์œผ๋กœ ๋ณ„๋„์˜ ๋ฏธ์„ธ ์กฐ์ • ์—†์ด๋„ ์ž˜ ์ž‘๋™ํ•ฉ๋‹ˆ๋‹ค.
- ์•„ํ‚คํ…์ฒ˜๋Š” ๊ณ ์ „์ ์ธ ์ธ์ฝ”๋”-๋””์ฝ”๋” ์•„ํ‚คํ…์ฒ˜๋ฅผ ๋”ฐ๋ฅด๊ธฐ ๋•Œ๋ฌธ์—, ์ถ”๋ก ์„ ์œ„ํ•ด [`~generation.GenerationMixin.generate`] ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. - ํ˜„์žฌ ์ถ”๋ก ์€ ์งง์€ ํ˜•์‹์—๋งŒ ๊ตฌํ˜„๋˜์–ด ์žˆ์œผ๋ฉฐ, ์˜ค๋””์˜ค๋Š” 30์ดˆ ๋ฏธ๋งŒ์˜ ์„ธ๊ทธ๋จผํŠธ๋กœ ๋ฏธ๋ฆฌ ๋ถ„ํ• ๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ํƒ€์ž„์Šคํƒฌํ”„๋ฅผ ํฌํ•จํ•œ ๊ธด ํ˜•์‹์— ๋Œ€ํ•œ ์ถ”๋ก ์€ ํ–ฅํ›„ ๋ฆด๋ฆฌ์Šค์—์„œ ๊ตฌํ˜„๋  ์˜ˆ์ •์ž…๋‹ˆ๋‹ค. - [`WhisperProcessor`]๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์— ์‚ฌ์šฉํ•  ์˜ค๋””์˜ค๋ฅผ ์ค€๋น„ํ•˜๊ณ , ์˜ˆ์ธก๋œ ID๋ฅผ ํ…์ŠคํŠธ๋กœ ๋””์ฝ”๋”ฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. - ๋ชจ๋ธ๊ณผ ํ”„๋กœ์„ธ์„œ๋ฅผ ๋ณ€ํ™˜ํ•˜๋ ค๋ฉด ๋‹ค์Œ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค: ```bash python src/transformers/models/whisper/convert_openai_to_hf.py --checkpoint_path "" --pytorch_dump_folder_path "Arthur/whisper-3" --convert_preprocessor True ``` ์Šคํฌ๋ฆฝํŠธ๋Š” OpenAI ์ฒดํฌํฌ์ธํŠธ์—์„œ ํ•„์š”ํ•œ ๋ชจ๋“  ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์ž๋™์œผ๋กœ ๊ฒฐ์ •ํ•ฉ๋‹ˆ๋‹ค. OpenAI ๋ณ€ํ™˜์„ ์ˆ˜ํ–‰ํ•˜๋ ค๋ฉด `tiktoken` ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์„ค์น˜ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์„ค์น˜ํ•ด์•ผ OpenAI ํ† ํฐํ™”๊ธฐ๋ฅผ `tokenizers` ๋ฒ„์ „์œผ๋กœ ๋ณ€ํ™˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๋ชจ๋ธ์€ [Arthur Zucker](https://huggingface.co/ArthurZ)์— ์˜ํ•ด ์ œ๊ณต๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ด ๋ชจ๋ธ์˜ Tensorflow ๋ฒ„์ „์€ [amyeroberts](https://huggingface.co/amyeroberts)์— ์˜ํ•ด ์ œ๊ณต๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์›๋ณธ ์ฝ”๋“œ๋Š” [์—ฌ๊ธฐ](https://github.com/openai/whisper)์—์„œ ์ฐพ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
## WhisperConfig [[whisperconfig]] [[autodoc]] WhisperConfig ## WhisperTokenizer [[whispertokenizer]] [[autodoc]] WhisperTokenizer - set_prefix_tokens - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## WhisperTokenizerFast [[whispertokenizerfast]] [[autodoc]] WhisperTokenizerFast - set_prefix_tokens - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary ## WhisperFeatureExtractor [[whisperfeatureextractor]] [[autodoc]] WhisperFeatureExtractor - __call__ ## WhisperProcessor [[whisperprocessor]] [[autodoc]] WhisperProcessor - __call__ - from_pretrained - save_pretrained - batch_decode - decode ## WhisperModel [[whispermodel]] [[autodoc]] WhisperModel - forward - _mask_input_features ## WhisperForConditionalGeneration [[whisperforconditionalgeneration]] [[autodoc]] WhisperForConditionalGeneration - forward ## WhisperForAudioClassification [[whisperforaudioclassification]] [[autodoc]] WhisperForAudioClassification - forward ## TFWhisperModel [[tfwhispermodel]] [[autodoc]] TFWhisperModel - call ## TFWhisperForConditionalGeneration [[tfwhisperforconditionalgeneration]] [[autodoc]] TFWhisperForConditionalGeneration - call ## FlaxWhisperModel [[flaxwhispermodel]] [[autodoc]] FlaxWhisperModel - __call__ ## FlaxWhisperForConditionalGeneration [[flaxwhisperforconditionalgeneration]] [[autodoc]] FlaxWhisperForConditionalGeneration - __call__ ## FlaxWhisperForAudioClassification [[flaxwhisperforaudioclassification]] [[autodoc]] FlaxWhisperForAudioClassification - __call__
hf_public_repos/transformers/docs/source/en/multilingual.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Multilingual models for inference [[open-in-colab]] There are several multilingual models in ๐Ÿค— Transformers, and their inference usage differs from monolingual models. Not *all* multilingual model usage is different though. Some models, like [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased), can be used just like a monolingual model. This guide will show you how to use multilingual models whose usage differs for inference. ## XLM XLM has ten different checkpoints, only one of which is monolingual. The nine remaining model checkpoints can be split into two categories: the checkpoints that use language embeddings and those that don't. 
### XLM with language embeddings The following XLM models use language embeddings to specify the language used at inference: - `xlm-mlm-ende-1024` (Masked language modeling, English-German) - `xlm-mlm-enfr-1024` (Masked language modeling, English-French) - `xlm-mlm-enro-1024` (Masked language modeling, English-Romanian) - `xlm-mlm-xnli15-1024` (Masked language modeling, XNLI languages) - `xlm-mlm-tlm-xnli15-1024` (Masked language modeling + translation, XNLI languages) - `xlm-clm-enfr-1024` (Causal language modeling, English-French) - `xlm-clm-ende-1024` (Causal language modeling, English-German) Language embeddings are represented as a tensor of the same shape as the `input_ids` passed to the model. The values in these tensors depend on the language used and are identified by the tokenizer's `lang2id` and `id2lang` attributes. In this example, load the `xlm-clm-enfr-1024` checkpoint (Causal language modeling, English-French): ```py >>> import torch >>> from transformers import XLMTokenizer, XLMWithLMHeadModel >>> tokenizer = XLMTokenizer.from_pretrained("xlm-clm-enfr-1024") >>> model = XLMWithLMHeadModel.from_pretrained("xlm-clm-enfr-1024") ``` The `lang2id` attribute of the tokenizer displays this model's languages and their ids: ```py >>> print(tokenizer.lang2id) {'en': 0, 'fr': 1} ``` Next, create an example input: ```py >>> input_ids = torch.tensor([tokenizer.encode("Wikipedia was used to")]) # batch size of 1 ``` Set the language id as `"en"` and use it to define the language embedding. The language embedding is a tensor filled with `0` since that is the language id for English. This tensor should be the same size as `input_ids`. 
```py >>> language_id = tokenizer.lang2id["en"] # 0 >>> langs = torch.tensor([language_id] * input_ids.shape[1]) # torch.tensor([0, 0, 0, ..., 0]) >>> # We reshape it to be of size (batch_size, sequence_length) >>> langs = langs.view(1, -1) # is now of shape [1, sequence_length] (we have a batch size of 1) ``` Now you can pass the `input_ids` and language embedding to the model: ```py >>> outputs = model(input_ids, langs=langs) ``` The [run_generation.py](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-generation/run_generation.py) script can generate text with language embeddings using the `xlm-clm` checkpoints. ### XLM without language embeddings The following XLM models do not require language embeddings during inference: - `xlm-mlm-17-1280` (Masked language modeling, 17 languages) - `xlm-mlm-100-1280` (Masked language modeling, 100 languages) These models are used for generic sentence representations, unlike the previous XLM checkpoints. ## BERT The following BERT models can be used for multilingual tasks: - `bert-base-multilingual-uncased` (Masked language modeling + Next sentence prediction, 102 languages) - `bert-base-multilingual-cased` (Masked language modeling + Next sentence prediction, 104 languages) These models do not require language embeddings during inference. They should identify the language from the context and infer accordingly. ## XLM-RoBERTa The following XLM-RoBERTa models can be used for multilingual tasks: - `xlm-roberta-base` (Masked language modeling, 100 languages) - `xlm-roberta-large` (Masked language modeling, 100 languages) XLM-RoBERTa was trained on 2.5TB of newly created and cleaned CommonCrawl data in 100 languages. It provides strong gains over previously released multilingual models like mBERT or XLM on downstream tasks like classification, sequence labeling, and question answering. 
## M2M100 The following M2M100 models can be used for multilingual translation: - `facebook/m2m100_418M` (Translation) - `facebook/m2m100_1.2B` (Translation) In this example, load the `facebook/m2m100_418M` checkpoint to translate from Chinese to English. You can set the source language in the tokenizer: ```py >>> from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer >>> en_text = "Do not meddle in the affairs of wizards, for they are subtle and quick to anger." >>> chinese_text = "ไธ่ฆๆ’ๆ‰‹ๅทซๅธซ็š„ไบ‹ๅ‹™, ๅ› ็‚บไป–ๅ€‘ๆ˜ฏๅพฎๅฆ™็š„, ๅพˆๅฟซๅฐฑๆœƒ็™ผๆ€’." >>> tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="zh") >>> model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M") ``` Tokenize the text: ```py >>> encoded_zh = tokenizer(chinese_text, return_tensors="pt") ``` M2M100 forces the target language id as the first generated token to translate to the target language. Set the `forced_bos_token_id` to `en` in the `generate` method to translate to English: ```py >>> generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id("en")) >>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) 'Do not interfere with the matters of the witches, because they are delicate and will soon be angry.' ``` ## MBart The following MBart models can be used for multilingual translation: - `facebook/mbart-large-50-one-to-many-mmt` (One-to-many multilingual machine translation, 50 languages) - `facebook/mbart-large-50-many-to-many-mmt` (Many-to-many multilingual machine translation, 50 languages) - `facebook/mbart-large-50-many-to-one-mmt` (Many-to-one multilingual machine translation, 50 languages) - `facebook/mbart-large-50` (Multilingual translation, 50 languages) - `facebook/mbart-large-cc25` In this example, load the `facebook/mbart-large-50-many-to-many-mmt` checkpoint to translate Finnish to English. 
You can set the source language in the tokenizer:

```py
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

>>> en_text = "Do not meddle in the affairs of wizards, for they are subtle and quick to anger."
>>> fi_text = "ร„lรค sekaannu velhojen asioihin, sillรค ne ovat hienovaraisia ja nopeasti vihaisia."

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-50-many-to-many-mmt", src_lang="fi_FI")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
```

Tokenize the Finnish text:

```py
>>> encoded_fi = tokenizer(fi_text, return_tensors="pt")
```

MBart forces the target language id as the first generated token to translate to the target language. Set the `forced_bos_token_id` to `en` in the `generate` method to translate to English:

```py
>>> generated_tokens = model.generate(**encoded_fi, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"])
>>> tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
"Don't interfere with the wizard's affairs, because they are subtle, will soon get angry."
```

If you are using the `facebook/mbart-large-50-many-to-one-mmt` checkpoint, you don't need to force the target language id as the first generated token; otherwise the usage is the same.
hf_public_repos/transformers/docs/source/en/generation_strategies.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Text generation strategies

Text generation is essential to many NLP tasks, such as open-ended text generation, summarization, translation, and more. It also plays a role in a variety of mixed-modality applications that have text as an output, like speech-to-text and vision-to-text. Some of the models that can generate text include GPT2, XLNet, OpenAI GPT, CTRL, TransformerXL, XLM, Bart, T5, GIT, and Whisper.

Check out a few examples that use the [`~transformers.generation_utils.GenerationMixin.generate`] method to produce text outputs for different tasks:
* [Text summarization](./tasks/summarization#inference)
* [Image captioning](./model_doc/git#transformers.GitForCausalLM.forward.example)
* [Audio transcription](./model_doc/whisper#transformers.WhisperForConditionalGeneration.forward.example)

Note that the inputs to the generate method depend on the model's modality. They are returned by the model's preprocessor class, such as AutoTokenizer or AutoProcessor. If a model's preprocessor creates more than one kind of input, pass all the inputs to generate(). You can learn more about the individual model's preprocessor in the corresponding model's documentation.
The process of selecting output tokens to generate text is known as decoding, and you can customize the decoding strategy that the `generate()` method will use. Modifying a decoding strategy does not change the values of any trainable parameters. However, it can have a noticeable impact on the quality of the generated output. It can help reduce repetition in the text and make it more coherent. This guide describes: * default generation configuration * common decoding strategies and their main parameters * saving and sharing custom generation configurations with your fine-tuned model on ๐Ÿค— Hub ## Default text generation configuration A decoding strategy for a model is defined in its generation configuration. When using pre-trained models for inference within a [`pipeline`], the models call the `PreTrainedModel.generate()` method that applies a default generation configuration under the hood. The default configuration is also used when no custom configuration has been saved with the model. When you load a model explicitly, you can inspect the generation configuration that comes with it through `model.generation_config`: ```python >>> from transformers import AutoModelForCausalLM >>> model = AutoModelForCausalLM.from_pretrained("distilgpt2") >>> model.generation_config GenerationConfig { "bos_token_id": 50256, "eos_token_id": 50256, } ``` Printing out the `model.generation_config` reveals only the values that are different from the default generation configuration, and does not list any of the default values. The default generation configuration limits the size of the output combined with the input prompt to a maximum of 20 tokens to avoid running into resource limitations. The default decoding strategy is greedy search, which is the simplest decoding strategy that picks a token with the highest probability as the next token. For many tasks and small output sizes this works well. 
However, when used to generate longer outputs, greedy search can start producing highly repetitive results.

## Customize text generation

You can override any `generation_config` by passing the parameters and their values directly to the [`generate`] method:

```python
>>> my_model.generate(**inputs, num_beams=4, do_sample=True)  # doctest: +SKIP
```

Even if the default decoding strategy mostly works for your task, you can still tweak a few things. Some of the commonly adjusted parameters include:

- `max_new_tokens`: the maximum number of tokens to generate. In other words, the size of the output sequence, not including the tokens in the prompt. As an alternative to using the output's length as a stopping criterion, you can choose to stop generation whenever the full generation exceeds some amount of time. To learn more, check [`StoppingCriteria`].
- `num_beams`: by specifying a number of beams higher than 1, you are effectively switching from greedy search to beam search. This strategy evaluates several hypotheses at each time step and eventually chooses the hypothesis that has the overall highest probability for the entire sequence. This has the advantage of identifying high-probability sequences that start with lower-probability initial tokens and would have been ignored by the greedy search.
- `do_sample`: if set to `True`, this parameter enables decoding strategies such as multinomial sampling, beam-search multinomial sampling, Top-K sampling and Top-p sampling. All these strategies select the next token from the probability distribution over the entire vocabulary with various strategy-specific adjustments.
- `num_return_sequences`: the number of sequence candidates to return for each input. This option is only available for the decoding strategies that support multiple sequence candidates, e.g. variations of beam search and sampling. Decoding strategies like greedy search and contrastive search return a single output sequence.
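As a quick illustration, the commonly adjusted parameters above can also be collected into a [`GenerationConfig`] object instead of being passed to `generate()` one by one. The values below are illustrative, not recommendations:

```python
from transformers import GenerationConfig

# Bundle the commonly adjusted generation parameters into one object.
# The specific values here are for illustration only.
generation_config = GenerationConfig(
    max_new_tokens=50,        # cap on generated tokens, excluding the prompt
    num_beams=4,              # switch from greedy search to beam search
    do_sample=True,           # enable sampling-based decoding
    num_return_sequences=2,   # return two candidate sequences per input
)
print(generation_config.num_beams)  # 4
```

A config object like this can be passed to `generate()` via its `generation_config` argument.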
## Save a custom decoding strategy with your model If you would like to share your fine-tuned model with a specific generation configuration, you can: * Create a [`GenerationConfig`] class instance * Specify the decoding strategy parameters * Save your generation configuration with [`GenerationConfig.save_pretrained`], making sure to leave its `config_file_name` argument empty * Set `push_to_hub` to `True` to upload your config to the model's repo ```python >>> from transformers import AutoModelForCausalLM, GenerationConfig >>> model = AutoModelForCausalLM.from_pretrained("my_account/my_model") # doctest: +SKIP >>> generation_config = GenerationConfig( ... max_new_tokens=50, do_sample=True, top_k=50, eos_token_id=model.config.eos_token_id ... ) >>> generation_config.save_pretrained("my_account/my_model", push_to_hub=True) # doctest: +SKIP ``` You can also store several generation configurations in a single directory, making use of the `config_file_name` argument in [`GenerationConfig.save_pretrained`]. You can later instantiate them with [`GenerationConfig.from_pretrained`]. This is useful if you want to store several generation configurations for a single model (e.g. one for creative text generation with sampling, and one for summarization with beam search). You must have the right Hub permissions to add configuration files to a model. ```python >>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, GenerationConfig >>> tokenizer = AutoTokenizer.from_pretrained("t5-small") >>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-small") >>> translation_generation_config = GenerationConfig( ... num_beams=4, ... early_stopping=True, ... decoder_start_token_id=0, ... eos_token_id=model.config.eos_token_id, ... pad_token=model.config.pad_token_id, ... 
) >>> # Tip: add `push_to_hub=True` to push to the Hub >>> translation_generation_config.save_pretrained("/tmp", "translation_generation_config.json") >>> # You could then use the named generation config file to parameterize generation >>> generation_config = GenerationConfig.from_pretrained("/tmp", "translation_generation_config.json") >>> inputs = tokenizer("translate English to French: Configuration files are easy to use!", return_tensors="pt") >>> outputs = model.generate(**inputs, generation_config=generation_config) >>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)) ['Les fichiers de configuration sont faciles ร  utiliser!'] ``` ## Streaming The `generate()` supports streaming, through its `streamer` input. The `streamer` input is compatible with any instance from a class that has the following methods: `put()` and `end()`. Internally, `put()` is used to push new tokens and `end()` is used to flag the end of text generation. <Tip warning={true}> The API for the streamer classes is still under development and may change in the future. </Tip> In practice, you can craft your own streaming class for all sorts of purposes! We also have basic streaming classes ready for you to use. For example, you can use the [`TextStreamer`] class to stream the output of `generate()` into your screen, one word at a time: ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer >>> tok = AutoTokenizer.from_pretrained("gpt2") >>> model = AutoModelForCausalLM.from_pretrained("gpt2") >>> inputs = tok(["An increasing sequence: one,"], return_tensors="pt") >>> streamer = TextStreamer(tok) >>> # Despite returning the usual output, the streamer will also print the generated text to stdout. 
>>> _ = model.generate(**inputs, streamer=streamer, max_new_tokens=20)
An increasing sequence: one, two, three, four, five, six, seven, eight, nine, ten, eleven,
```

## Decoding strategies

Certain combinations of the `generate()` parameters, and ultimately `generation_config`, can be used to enable specific decoding strategies. If you are new to this concept, we recommend reading [this blog post that illustrates how common decoding strategies work](https://huggingface.co/blog/how-to-generate).

Here, we'll show some of the parameters that control the decoding strategies and illustrate how you can use them.

### Greedy Search

[`generate`] uses greedy search decoding by default so you don't have to pass any parameters to enable it. This means that the parameter `num_beams` is set to 1 and `do_sample=False`.

```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer

>>> prompt = "I look forward to"
>>> checkpoint = "distilgpt2"

>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> inputs = tokenizer(prompt, return_tensors="pt")

>>> model = AutoModelForCausalLM.from_pretrained(checkpoint)
>>> outputs = model.generate(**inputs)
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
['I look forward to seeing you all again!\n\n\n\n\n\n\n\n\n\n\n']
```

### Contrastive search

The contrastive search decoding strategy was proposed in the 2022 paper [A Contrastive Framework for Neural Text Generation](https://arxiv.org/abs/2202.06417). It demonstrates superior results for generating non-repetitive yet coherent long outputs. To learn how contrastive search works, check out [this blog post](https://huggingface.co/blog/introducing-csearch).
The two main parameters that enable and control the behavior of contrastive search are `penalty_alpha` and `top_k`: ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM >>> checkpoint = "gpt2-large" >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> model = AutoModelForCausalLM.from_pretrained(checkpoint) >>> prompt = "Hugging Face Company is" >>> inputs = tokenizer(prompt, return_tensors="pt") >>> outputs = model.generate(**inputs, penalty_alpha=0.6, top_k=4, max_new_tokens=100) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['Hugging Face Company is a family owned and operated business. We pride ourselves on being the best in the business and our customer service is second to none.\n\nIf you have any questions about our products or services, feel free to contact us at any time. We look forward to hearing from you!'] ``` ### Multinomial sampling As opposed to greedy search that always chooses a token with the highest probability as the next token, multinomial sampling (also called ancestral sampling) randomly selects the next token based on the probability distribution over the entire vocabulary given by the model. Every token with a non-zero probability has a chance of being selected, thus reducing the risk of repetition. To enable multinomial sampling set `do_sample=True` and `num_beams=1`. 
```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM, set_seed
>>> set_seed(0)  # For reproducibility

>>> checkpoint = "gpt2-large"
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> model = AutoModelForCausalLM.from_pretrained(checkpoint)

>>> prompt = "Today was an amazing day because"
>>> inputs = tokenizer(prompt, return_tensors="pt")

>>> outputs = model.generate(**inputs, do_sample=True, num_beams=1, max_new_tokens=100)
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
['Today was an amazing day because when you go to the World Cup and you don\'t, or when you don\'t get invited, that\'s a terrible feeling."']
```

### Beam-search decoding

Unlike greedy search, beam-search decoding keeps several hypotheses at each time step and eventually chooses the hypothesis that has the overall highest probability for the entire sequence. This has the advantage of identifying high-probability sequences that start with lower probability initial tokens and would've been ignored by the greedy search.

To enable this decoding strategy, specify `num_beams` (the number of hypotheses to keep track of) with a value greater than 1.

```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer

>>> prompt = "It is astonishing how one can"
>>> checkpoint = "gpt2-medium"

>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> inputs = tokenizer(prompt, return_tensors="pt")

>>> model = AutoModelForCausalLM.from_pretrained(checkpoint)
>>> outputs = model.generate(**inputs, num_beams=5, max_new_tokens=50)
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
['It is astonishing how one can have such a profound impact on the lives of so many people in such a short period of time."\n\nHe added: "I am very proud of the work I have been able to do in the last few years.\n\n"I have']
```

### Beam-search multinomial sampling

As the name implies, this decoding strategy combines beam search with multinomial sampling.
To use this decoding strategy, set `num_beams` to a value greater than 1 and `do_sample=True`.

```python
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, set_seed
>>> set_seed(0)  # For reproducibility

>>> prompt = "translate English to German: The house is wonderful."
>>> checkpoint = "t5-small"

>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> inputs = tokenizer(prompt, return_tensors="pt")

>>> model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
>>> outputs = model.generate(**inputs, num_beams=5, do_sample=True)
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
'Das Haus ist wunderbar.'
```

### Diverse beam search decoding

The diverse beam search decoding strategy is an extension of the beam search strategy that allows for generating a more diverse set of beam sequences to choose from. To learn how it works, refer to [Diverse Beam Search: Decoding Diverse Solutions from Neural Sequence Models](https://arxiv.org/pdf/1610.02424.pdf). This approach has three main parameters: `num_beams`, `num_beam_groups`, and `diversity_penalty`. The diversity penalty ensures the outputs are distinct across groups, and beam search is used within each group.

```python
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

>>> checkpoint = "google/pegasus-xsum"
>>> prompt = (
...     "The Permaculture Design Principles are a set of universal design principles "
...     "that can be applied to any location, climate and culture, and they allow us to design "
...     "the most efficient and sustainable human habitation and food production systems. "
...     "Permaculture is a design system that encompasses a wide variety of disciplines, such "
...     "as ecology, landscape design, environmental science and energy conservation, and the "
...     "Permaculture design principles are drawn from these various disciplines. Each individual "
...     "design principle itself embodies a complete conceptual framework based on sound "
...     "scientific principles. When we bring all these separate principles together, we can "
...     "create a design system that both looks at whole systems, the parts that these systems "
...     "consist of, and how those parts interact with each other to create a complex, dynamic, "
...     "living system. Each design principle serves as a tool that allows us to integrate all "
...     "the separate parts of a design, referred to as elements, into a functional, synergistic, "
...     "whole system, where the elements harmoniously interact and work together in the most "
...     "efficient way possible."
... )

>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> inputs = tokenizer(prompt, return_tensors="pt")

>>> model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
>>> outputs = model.generate(**inputs, num_beams=5, num_beam_groups=5, max_new_tokens=30, diversity_penalty=1.0)
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
'The Design Principles are a set of universal design principles that can be applied to any location, climate and culture, and they allow us to design the'
```

This guide illustrates the main parameters that enable various decoding strategies. More advanced parameters exist for the [`generate`] method, giving you even further control over its behavior. For the complete list of the available parameters, refer to the [API documentation](./main_classes/text_generation.md).

### Assisted decoding

Assisted decoding is a modification of the decoding strategies above that uses an assistant model with the same tokenizer (ideally a much smaller model) to greedily generate a few candidate tokens. The main model then validates the candidate tokens in a single forward pass, which speeds up the decoding process. Currently, assisted decoding only supports greedy search and sampling, and it doesn't support batched inputs. To learn more about assisted decoding, check [this blog post](https://huggingface.co/blog/assisted-generation).
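The candidate-validation loop at the heart of assisted decoding can be sketched with toy lookup-table "models"; the names and token tables below are made up for illustration and are not the actual transformers implementation:

```python
# Toy "models": each maps the last token of the context to a next token.
# A real model predicts distributions over a large vocabulary instead.
assistant_next = {"Alice": "and", "and": "Bob", "Bob": "are", "are": "happy"}
main_next = {"Alice": "and", "and": "Bob", "Bob": "are", "are": "sitting"}


def assist_step(context, num_candidates=4):
    # 1. The small assistant model greedily drafts a few candidate tokens.
    candidates = []
    token = context[-1]
    for _ in range(num_candidates):
        token = assistant_next.get(token)
        if token is None:
            break
        candidates.append(token)

    # 2. The main model checks the whole draft at once ("one forward pass"):
    #    candidates are accepted up to the first disagreement, where the
    #    main model's own token is used instead.
    accepted = []
    token = context[-1]
    for cand in candidates:
        own = main_next.get(token)
        if own == cand:
            accepted.append(cand)
            token = cand
        else:
            if own is not None:
                accepted.append(own)
            break
    return context + accepted


print(assist_step(["Alice"]))  # ['Alice', 'and', 'Bob', 'are', 'sitting']
```

The speedup comes from the main model scoring several drafted tokens per forward pass instead of one, while the output stays identical to what it would have generated on its own.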
To enable assisted decoding, set the `assistant_model` argument with a model.

```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer

>>> prompt = "Alice and Bob"
>>> checkpoint = "EleutherAI/pythia-1.4b-deduped"
>>> assistant_checkpoint = "EleutherAI/pythia-160m-deduped"

>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> inputs = tokenizer(prompt, return_tensors="pt")

>>> model = AutoModelForCausalLM.from_pretrained(checkpoint)
>>> assistant_model = AutoModelForCausalLM.from_pretrained(assistant_checkpoint)
>>> outputs = model.generate(**inputs, assistant_model=assistant_model)
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
['Alice and Bob are sitting in a bar. Alice is drinking a beer and Bob is drinking a']
```

When using assisted decoding with sampling methods, you can use the `temperature` argument to control the randomness just like in multinomial sampling. However, in assisted decoding, reducing the temperature will help improve latency.

```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
>>> set_seed(42)  # For reproducibility

>>> prompt = "Alice and Bob"
>>> checkpoint = "EleutherAI/pythia-1.4b-deduped"
>>> assistant_checkpoint = "EleutherAI/pythia-160m-deduped"

>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> inputs = tokenizer(prompt, return_tensors="pt")

>>> model = AutoModelForCausalLM.from_pretrained(checkpoint)
>>> assistant_model = AutoModelForCausalLM.from_pretrained(assistant_checkpoint)
>>> outputs = model.generate(**inputs, assistant_model=assistant_model, do_sample=True, temperature=0.5)
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
['Alice and Bob are going to the same party. It is a small party, in a small']
```
hf_public_repos/transformers/docs/source/en/tflite.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Export to TFLite

[TensorFlow Lite](https://www.tensorflow.org/lite/guide) is a lightweight framework for deploying machine learning models on resource-constrained devices, such as mobile phones, embedded systems, and Internet of Things (IoT) devices. TFLite is designed to optimize and run models efficiently on these devices with limited computational power, memory, and power consumption. A TensorFlow Lite model is represented in a special efficient portable format identified by the `.tflite` file extension.

๐Ÿค— Optimum offers functionality to export ๐Ÿค— Transformers models to TFLite through the `exporters.tflite` module. For the list of supported model architectures, please refer to [๐Ÿค— Optimum documentation](https://huggingface.co/docs/optimum/exporters/tflite/overview).
To export a model to TFLite, install the required dependencies: ```bash pip install optimum[exporters-tf] ``` To check out all available arguments, refer to the [๐Ÿค— Optimum docs](https://huggingface.co/docs/optimum/main/en/exporters/tflite/usage_guides/export_a_model), or view help in command line: ```bash optimum-cli export tflite --help ``` To export a model's checkpoint from the ๐Ÿค— Hub, for example, `bert-base-uncased`, run the following command: ```bash optimum-cli export tflite --model bert-base-uncased --sequence_length 128 bert_tflite/ ``` You should see the logs indicating progress and showing where the resulting `model.tflite` is saved, like this: ```bash Validating TFLite model... -[โœ“] TFLite model output names match reference model (logits) - Validating TFLite Model output "logits": -[โœ“] (1, 128, 30522) matches (1, 128, 30522) -[x] values not close enough, max diff: 5.817413330078125e-05 (atol: 1e-05) The TensorFlow Lite export succeeded with the warning: The maximum absolute difference between the output of the reference model and the TFLite exported model is not within the set tolerance 1e-05: - logits: max diff = 5.817413330078125e-05. The exported model was saved at: bert_tflite ``` The example above illustrates exporting a checkpoint from ๐Ÿค— Hub. When exporting a local model, first make sure that you saved both the model's weights and tokenizer files in the same directory (`local_path`). When using CLI, pass the `local_path` to the `model` argument instead of the checkpoint name on ๐Ÿค— Hub.
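The tolerance check reported in a validation log like the one above boils down to comparing the maximum absolute difference between the reference and exported outputs against `atol`. A minimal sketch with made-up numbers (not real model logits):

```python
# Minimal sketch of the export-validation tolerance check.
# The "logits" below are made-up numbers, not real model outputs.
reference_logits = [0.10, -1.20, 3.40, 0.05]
tflite_logits = [0.10, -1.20, 3.40006, 0.05]

atol = 1e-05  # absolute tolerance, as in the log above

# Element-wise maximum absolute difference between the two outputs
max_diff = max(abs(a - b) for a, b in zip(reference_logits, tflite_logits))
within_tolerance = max_diff <= atol

print(f"max diff = {max_diff}, within atol: {within_tolerance}")
```

Here the difference of roughly `6e-05` exceeds `atol`, which is why the export above succeeds with a warning rather than failing outright: small numerical drift is expected when converting between runtimes.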
hf_public_repos/transformers/docs/source/en/preprocessing.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Preprocess

[[open-in-colab]]

Before you can train a model on a dataset, it needs to be preprocessed into the expected model input format. Whether your data is text, images, or audio, it needs to be converted and assembled into batches of tensors. ๐Ÿค— Transformers provides a set of preprocessing classes to help prepare your data for the model. In this tutorial, you'll learn that for:

* Text, use a [Tokenizer](./main_classes/tokenizer) to convert text into a sequence of tokens, create a numerical representation of the tokens, and assemble them into tensors.
* Speech and audio, use a [Feature extractor](./main_classes/feature_extractor) to extract sequential features from audio waveforms and convert them into tensors.
* Image inputs, use an [ImageProcessor](./main_classes/image) to convert images into tensors.
* Multimodal inputs, use a [Processor](./main_classes/processors) to combine a tokenizer and a feature extractor or image processor.

<Tip>

`AutoProcessor` **always** works and automatically chooses the correct class for the model you're using, whether you're using a tokenizer, image processor, feature extractor or processor.
</Tip> Before you begin, install ๐Ÿค— Datasets so you can load some datasets to experiment with: ```bash pip install datasets ``` ## Natural Language Processing <Youtube id="Yffk5aydLzg"/> The main tool for preprocessing textual data is a [tokenizer](main_classes/tokenizer). A tokenizer splits text into *tokens* according to a set of rules. The tokens are converted into numbers and then tensors, which become the model inputs. Any additional inputs required by the model are added by the tokenizer. <Tip> If you plan on using a pretrained model, it's important to use the associated pretrained tokenizer. This ensures the text is split the same way as the pretraining corpus, and uses the same corresponding tokens-to-index (usually referred to as the *vocab*) during pretraining. </Tip> Get started by loading a pretrained tokenizer with the [`AutoTokenizer.from_pretrained`] method. This downloads the *vocab* a model was pretrained with: ```py >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") ``` Then pass your text to the tokenizer: ```py >>> encoded_input = tokenizer("Do not meddle in the affairs of wizards, for they are subtle and quick to anger.") >>> print(encoded_input) {'input_ids': [101, 2079, 2025, 19960, 10362, 1999, 1996, 3821, 1997, 16657, 1010, 2005, 2027, 2024, 11259, 1998, 4248, 2000, 4963, 1012, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} ``` The tokenizer returns a dictionary with three important items: * [input_ids](glossary#input-ids) are the indices corresponding to each token in the sentence. * [attention_mask](glossary#attention-mask) indicates whether a token should be attended to or not. * [token_type_ids](glossary#token-type-ids) identifies which sequence a token belongs to when there is more than one sequence. 
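How these three fields fit together can be illustrated with a toy word-level vocabulary; the ids below are made up for illustration, whereas a real BERT tokenizer uses subword tokens and a vocabulary of about 30,000 entries:

```python
# Toy illustration of input_ids, attention_mask, and token_type_ids.
# The vocabulary and ids are made up, not those of a real tokenizer.
vocab = {"[CLS]": 101, "[SEP]": 102, "do": 1, "not": 2, "meddle": 3}

tokens = ["[CLS]", "do", "not", "meddle", "[SEP]"]

# input_ids: each token mapped to its index in the vocab
input_ids = [vocab[t] for t in tokens]

# attention_mask: 1 for every real token (0 would mark padding)
attention_mask = [1] * len(tokens)

# token_type_ids: all 0 because there is only a single sequence
token_type_ids = [0] * len(tokens)

print(input_ids)  # [101, 1, 2, 3, 102]
```

With two sequences (for instance, a question and a context passage), the `token_type_ids` of the second sequence would be 1 instead of 0.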
Return your input by decoding the `input_ids`: ```py >>> tokenizer.decode(encoded_input["input_ids"]) '[CLS] Do not meddle in the affairs of wizards, for they are subtle and quick to anger. [SEP]' ``` As you can see, the tokenizer added two special tokens - `CLS` and `SEP` (classifier and separator) - to the sentence. Not all models need special tokens, but if they do, the tokenizer automatically adds them for you. If there are several sentences you want to preprocess, pass them as a list to the tokenizer: ```py >>> batch_sentences = [ ... "But what about second breakfast?", ... "Don't think he knows about second breakfast, Pip.", ... "What about elevensies?", ... ] >>> encoded_inputs = tokenizer(batch_sentences) >>> print(encoded_inputs) {'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102], [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102], [101, 1327, 1164, 5450, 23434, 136, 102]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1]]} ``` ### Pad Sentences aren't always the same length which can be an issue because tensors, the model inputs, need to have a uniform shape. Padding is a strategy for ensuring tensors are rectangular by adding a special *padding token* to shorter sentences. Set the `padding` parameter to `True` to pad the shorter sequences in the batch to match the longest sequence: ```py >>> batch_sentences = [ ... "But what about second breakfast?", ... "Don't think he knows about second breakfast, Pip.", ... "What about elevensies?", ... 
]
>>> encoded_input = tokenizer(batch_sentences, padding=True)
>>> print(encoded_input)
{'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0],
               [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
               [101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]],
 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
                    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
                    [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]]}
```

The first and third sentences are now padded with `0`'s because they are shorter.

### Truncation

On the other end of the spectrum, sometimes a sequence may be too long for a model to handle. In this case, you'll need to truncate the sequence to a shorter length.

Set the `truncation` parameter to `True` to truncate a sequence to the maximum length accepted by the model:

```py
>>> batch_sentences = [
...     "But what about second breakfast?",
...     "Don't think he knows about second breakfast, Pip.",
...     "What about elevensies?",
... ]
>>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True)
>>> print(encoded_input)
{'input_ids': [[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0],
               [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
               [101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]],
 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
                    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
                    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
                    [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]]}
```

<Tip>

Check out the [Padding and truncation](./pad_truncation) concept guide to learn more about different padding and truncation arguments.
</Tip> ### Build tensors Finally, you want the tokenizer to return the actual tensors that get fed to the model. Set the `return_tensors` parameter to either `pt` for PyTorch, or `tf` for TensorFlow: <frameworkcontent> <pt> ```py >>> batch_sentences = [ ... "But what about second breakfast?", ... "Don't think he knows about second breakfast, Pip.", ... "What about elevensies?", ... ] >>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="pt") >>> print(encoded_input) {'input_ids': tensor([[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0], [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102], [101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]])} ``` </pt> <tf> ```py >>> batch_sentences = [ ... "But what about second breakfast?", ... "Don't think he knows about second breakfast, Pip.", ... "What about elevensies?", ... 
]
>>> encoded_input = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="tf")
>>> print(encoded_input)
{'input_ids': <tf.Tensor: shape=(3, 15), dtype=int32, numpy=
array([[101, 1252, 1184, 1164, 1248, 6462, 136, 102, 0, 0, 0, 0, 0, 0, 0],
       [101, 1790, 112, 189, 1341, 1119, 3520, 1164, 1248, 6462, 117, 21902, 1643, 119, 102],
       [101, 1327, 1164, 5450, 23434, 136, 102, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)>,
 'token_type_ids': <tf.Tensor: shape=(3, 15), dtype=int32, numpy=
array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)>,
 'attention_mask': <tf.Tensor: shape=(3, 15), dtype=int32, numpy=
array([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
       [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
       [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)>}
```
</tf>
</frameworkcontent>

## Audio

For audio tasks, you'll need a [feature extractor](main_classes/feature_extractor) to prepare your dataset for the model. The feature extractor is designed to extract features from raw audio data, and convert them into tensors.

Load the [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) dataset (see the ๐Ÿค— [Datasets tutorial](https://huggingface.co/docs/datasets/load_hub) for more details on how to load a dataset) to see how you can use a feature extractor with audio datasets:

```py
>>> from datasets import load_dataset, Audio

>>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")
```

Access the first element of the `audio` column to take a look at the input. Calling the `audio` column automatically loads and resamples the audio file:

```py
>>> dataset[0]["audio"]
{'array': array([ 0.        ,  0.00024414, -0.00024414, ..., -0.00024414,
         0.        ,  0.        ], dtype=float32),
 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav',
 'sampling_rate': 8000}
```

This returns three items:

* `array` is the speech signal loaded - and potentially resampled - as a 1D array.
* `path` points to the location of the audio file.
* `sampling_rate` refers to how many data points in the speech signal are measured per second.

For this tutorial, you'll use the [Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base) model. Take a look at the model card, and you'll learn Wav2Vec2 is pretrained on 16kHz sampled speech audio. It is important your audio data's sampling rate matches the sampling rate of the dataset used to pretrain the model. If your data's sampling rate isn't the same, then you need to resample your data.

1. Use ๐Ÿค— Datasets' [`~datasets.Dataset.cast_column`] method to upsample the sampling rate to 16kHz:

```py
>>> dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))
```

2. Call the `audio` column again to resample the audio file:

```py
>>> dataset[0]["audio"]
{'array': array([ 2.3443763e-05,  2.1729663e-04,  2.2145823e-04, ...,
         3.8356509e-05, -7.3497440e-06, -2.1754686e-05], dtype=float32),
 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav',
 'sampling_rate': 16000}
```

Next, load a feature extractor to normalize and pad the input. When padding textual data, a `0` is added for shorter sequences. The same idea applies to audio data. The feature extractor adds a `0` - interpreted as silence - to `array`.
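The zero-padding idea can be sketched in plain Python; the arrays below are made up for illustration, and a real feature extractor also normalizes the values:

```python
# Toy sketch of zero-padding audio arrays to a common length.
# The values are made up; a real feature extractor also normalizes them.

def pad_batch(arrays):
    max_len = max(len(a) for a in arrays)
    padded, masks = [], []
    for a in arrays:
        pad = [0.0] * (max_len - len(a))  # zeros are read as silence
        padded.append(list(a) + pad)
        masks.append([1] * len(a) + [0] * len(pad))
    return padded, masks

batch = [[0.1, -0.2, 0.3], [0.5]]
padded, masks = pad_batch(batch)
print(padded)  # [[0.1, -0.2, 0.3], [0.5, 0.0, 0.0]]
print(masks)   # [[1, 1, 1], [1, 0, 0]]
```

The accompanying mask plays the same role as the tokenizer's `attention_mask`: it tells the model which values are real audio and which are padding.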
Load the feature extractor with [`AutoFeatureExtractor.from_pretrained`]:

```py
>>> from transformers import AutoFeatureExtractor

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
```

Pass the audio `array` to the feature extractor. We also recommend adding the `sampling_rate` argument in the feature extractor in order to better debug any silent errors that may occur.

```py
>>> audio_input = [dataset[0]["audio"]["array"]]
>>> feature_extractor(audio_input, sampling_rate=16000)
{'input_values': [array([ 3.8106556e-04,  2.7506407e-03,  2.8015103e-03, ...,
         5.6335266e-04,  4.6588284e-06, -1.7142107e-04], dtype=float32)]}
```

Just like the tokenizer, you can apply padding or truncation to handle variable sequences in a batch. Take a look at the sequence length of these two audio samples:

```py
>>> dataset[0]["audio"]["array"].shape
(173398,)

>>> dataset[1]["audio"]["array"].shape
(106496,)
```

Create a function to preprocess the dataset so the audio samples have the same length. Specify a maximum sample length, and the feature extractor will either pad or truncate the sequences to match it:

```py
>>> def preprocess_function(examples):
...     audio_arrays = [x["array"] for x in examples["audio"]]
...     inputs = feature_extractor(
...         audio_arrays,
...         sampling_rate=16000,
...         padding=True,
...         max_length=100000,
...         truncation=True,
...     )
...     return inputs
```

Apply the `preprocess_function` to the first few examples in the dataset:

```py
>>> processed_dataset = preprocess_function(dataset[:5])
```

The sample lengths are now the same and match the specified maximum length. You can pass your processed dataset to the model now!

```py
>>> processed_dataset["input_values"][0].shape
(100000,)

>>> processed_dataset["input_values"][1].shape
(100000,)
```

## Computer vision

For computer vision tasks, you'll need an [image processor](main_classes/image_processor) to prepare your dataset for the model.
Image preprocessing consists of several steps that convert images into the input expected by the model. These steps include but are not limited to resizing, normalizing, color channel correction, and converting images to tensors. <Tip> Image preprocessing often follows some form of image augmentation. Both image preprocessing and image augmentation transform image data, but they serve different purposes: * Image augmentation alters images in a way that can help prevent overfitting and increase the robustness of the model. You can get creative in how you augment your data - adjust brightness and colors, crop, rotate, resize, zoom, etc. However, be mindful not to change the meaning of the images with your augmentations. * Image preprocessing guarantees that the images match the modelโ€™s expected input format. When fine-tuning a computer vision model, images must be preprocessed exactly as when the model was initially trained. You can use any library you like for image augmentation. For image preprocessing, use the `ImageProcessor` associated with the model. </Tip> Load the [food101](https://huggingface.co/datasets/food101) dataset (see the ๐Ÿค— [Datasets tutorial](https://huggingface.co/docs/datasets/load_hub) for more details on how to load a dataset) to see how you can use an image processor with computer vision datasets: <Tip> Use ๐Ÿค— Datasets `split` parameter to only load a small sample from the training split since the dataset is quite large! 
</Tip> ```py >>> from datasets import load_dataset >>> dataset = load_dataset("food101", split="train[:100]") ``` Next, take a look at the image with ๐Ÿค— Datasets [`Image`](https://huggingface.co/docs/datasets/package_reference/main_classes?highlight=image#datasets.Image) feature: ```py >>> dataset[0]["image"] ``` <div class="flex justify-center"> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/vision-preprocess-tutorial.png"/> </div> Load the image processor with [`AutoImageProcessor.from_pretrained`]: ```py >>> from transformers import AutoImageProcessor >>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224") ``` First, let's add some image augmentation. You can use any library you prefer, but in this tutorial, we'll use torchvision's [`transforms`](https://pytorch.org/vision/stable/transforms.html) module. If you're interested in using another data augmentation library, learn how in the [Albumentations](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification_albumentations.ipynb) or [Kornia notebooks](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification_kornia.ipynb). 1. Here we use [`Compose`](https://pytorch.org/vision/master/generated/torchvision.transforms.Compose.html) to chain together a couple of transforms - [`RandomResizedCrop`](https://pytorch.org/vision/main/generated/torchvision.transforms.RandomResizedCrop.html) and [`ColorJitter`](https://pytorch.org/vision/main/generated/torchvision.transforms.ColorJitter.html). Note that for resizing, we can get the image size requirements from the `image_processor`. For some models, an exact height and width are expected, for others only the `shortest_edge` is defined. ```py >>> from torchvision.transforms import RandomResizedCrop, ColorJitter, Compose >>> size = ( ... image_processor.size["shortest_edge"] ... 
if "shortest_edge" in image_processor.size
...     else (image_processor.size["height"], image_processor.size["width"])
... )
>>> _transforms = Compose([RandomResizedCrop(size), ColorJitter(brightness=0.5, hue=0.5)])
```

2. The model accepts [`pixel_values`](model_doc/visionencoderdecoder#transformers.VisionEncoderDecoderModel.forward.pixel_values) as its input. `ImageProcessor` can take care of normalizing the images, and generating appropriate tensors. Create a function that combines image augmentation and image preprocessing for a batch of images and generates `pixel_values`:

```py
>>> def transforms(examples):
...     images = [_transforms(img.convert("RGB")) for img in examples["image"]]
...     examples["pixel_values"] = image_processor(images, do_resize=False, return_tensors="pt")["pixel_values"]
...     return examples
```

<Tip>

In the example above we set `do_resize=False` because we have already resized the images in the image augmentation transformation, and leveraged the `size` attribute from the appropriate `image_processor`. If you do not resize images during image augmentation, leave this parameter out. By default, `ImageProcessor` will handle the resizing.

If you wish to normalize images as a part of the augmentation transformation, use the `image_processor.image_mean`, and `image_processor.image_std` values.

</Tip>

3. Then use ๐Ÿค— Datasets [`~datasets.Dataset.set_transform`] to apply the transforms on the fly:

```py
>>> dataset.set_transform(transforms)
```

4. Now when you access the image, you'll notice the image processor has added `pixel_values`. You can pass your processed dataset to the model now!

```py
>>> dataset[0].keys()
```

Here is what the image looks like after the transforms are applied. The image has been randomly cropped and its color properties are different.
```py
>>> import numpy as np
>>> import matplotlib.pyplot as plt

>>> img = dataset[0]["pixel_values"]
>>> plt.imshow(img.permute(1, 2, 0))
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/preprocessed_image.png"/>
</div>

<Tip>

For tasks like object detection, semantic segmentation, instance segmentation, and panoptic segmentation, `ImageProcessor` offers post-processing methods. These methods convert the model's raw outputs into meaningful predictions such as bounding boxes, or segmentation maps.

</Tip>

### Pad

In some cases, for instance, when fine-tuning [DETR](./model_doc/detr), the model applies scale augmentation at training time. This may cause images to be different sizes in a batch. You can use [`DetrImageProcessor.pad`] from [`DetrImageProcessor`] and define a custom `collate_fn` to batch images together.

```py
>>> def collate_fn(batch):
...     pixel_values = [item["pixel_values"] for item in batch]
...     encoding = image_processor.pad(pixel_values, return_tensors="pt")
...     labels = [item["labels"] for item in batch]
...     batch = {}
...     batch["pixel_values"] = encoding["pixel_values"]
...     batch["pixel_mask"] = encoding["pixel_mask"]
...     batch["labels"] = labels
...     return batch
```

## Multimodal

For tasks involving multimodal inputs, you'll need a [processor](main_classes/processors) to prepare your dataset for the model. A processor couples together two processing objects such as a tokenizer and a feature extractor.
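What a processor does can be sketched as routing each modality to the right preprocessing object and merging the results; the classes below are made up for illustration, not the actual transformers implementation:

```python
# Toy sketch of a processor combining a tokenizer and a feature extractor.
# All classes and behaviors here are made up for illustration.

class ToyTokenizer:
    vocab = {"hello": 1, "world": 2}

    def __call__(self, text):
        # map whitespace-split words to made-up ids, as target labels
        return {"labels": [self.vocab[w] for w in text.split()]}


class ToyFeatureExtractor:
    def __call__(self, audio):
        # pretend "feature extraction" is simple peak normalization
        peak = max(abs(x) for x in audio) or 1.0
        return {"input_values": [x / peak for x in audio]}


class ToyProcessor:
    def __init__(self, feature_extractor, tokenizer):
        self.feature_extractor = feature_extractor
        self.tokenizer = tokenizer

    def __call__(self, audio=None, text=None):
        # route each modality to its preprocessing object, merge outputs
        out = {}
        if audio is not None:
            out.update(self.feature_extractor(audio))
        if text is not None:
            out.update(self.tokenizer(text))
        return out


processor = ToyProcessor(ToyFeatureExtractor(), ToyTokenizer())
batch = processor(audio=[0.5, -1.0, 0.25], text="hello world")
print(batch)  # {'input_values': [0.5, -1.0, 0.25], 'labels': [1, 2]}
```

A real processor such as `Wav2Vec2Processor` follows the same pattern: one call produces both the audio inputs and the tokenized transcription labels.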
Load the [LJ Speech](https://huggingface.co/datasets/lj_speech) dataset (see the 🤗 [Datasets tutorial](https://huggingface.co/docs/datasets/load_hub) for more details on how to load a dataset) to see how you can use a processor for automatic speech recognition (ASR):

```py
>>> from datasets import load_dataset

>>> lj_speech = load_dataset("lj_speech", split="train")
```

For ASR, you're mainly focused on `audio` and `text` so you can remove the other columns:

```py
>>> lj_speech = lj_speech.map(remove_columns=["file", "id", "normalized_text"])
```

Now take a look at the `audio` and `text` columns:

```py
>>> lj_speech[0]["audio"]
{'array': array([-7.3242188e-04, -7.6293945e-04, -6.4086914e-04, ...,
         7.3242188e-04,  2.1362305e-04,  6.1035156e-05], dtype=float32),
 'path': '/root/.cache/huggingface/datasets/downloads/extracted/917ece08c95cf0c4115e45294e3cd0dee724a1165b7fc11798369308a465bd26/LJSpeech-1.1/wavs/LJ001-0001.wav',
 'sampling_rate': 22050}

>>> lj_speech[0]["text"]
'Printing, in the only sense with which we are at present concerned, differs from most if not from all the arts and crafts represented in the Exhibition'
```

Remember you should always [resample](preprocessing#audio) your audio dataset's sampling rate to match the sampling rate of the dataset used to pretrain a model!

```py
>>> from datasets import Audio

>>> lj_speech = lj_speech.cast_column("audio", Audio(sampling_rate=16_000))
```

Load a processor with [`AutoProcessor.from_pretrained`]:

```py
>>> from transformers import AutoProcessor

>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")
```

1. Create a function to process the audio data contained in `array` to `input_values`, and tokenize `text` to `labels`. These are the inputs to the model:

```py
>>> def prepare_dataset(example):
...     audio = example["audio"]

...     example.update(processor(audio=audio["array"], text=example["text"], sampling_rate=16000))

...     return example
```

2. Apply the `prepare_dataset` function to a sample:

```py
>>> prepare_dataset(lj_speech[0])
```

The processor has now added `input_values` and `labels`, and the sampling rate has also been correctly downsampled to 16kHz. You can pass your processed dataset to the model now!
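As an aside on the resampling step above, here is a back-of-the-envelope sketch in plain Python (not the actual DSP performed by the `Audio` feature): resampling changes the number of samples, but the clip duration is unchanged. The sample count used here is an illustrative assumption.

```python
# Toy arithmetic for resampling 22050 Hz audio (LJ Speech) to the 16000 Hz
# rate Wav2Vec2 was pretrained on. Only the sample count changes; the
# duration of the clip stays the same.
orig_sr = 22050        # LJ Speech native sampling rate
target_sr = 16000      # pretraining sampling rate of the model
orig_samples = 44100   # assumed example: a 2-second clip at 22050 Hz

duration = orig_samples / orig_sr                          # seconds of audio
resampled_samples = round(orig_samples * target_sr / orig_sr)

print(duration)                        # 2.0
print(resampled_samples)               # 32000
print(resampled_samples / target_sr)   # still 2.0 seconds
```

If the rates don't match, the model effectively "hears" the audio sped up or slowed down, which is why the `cast_column` resampling step matters.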
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Padding and truncation

Batched inputs are often different lengths, so they can't be converted to fixed-size tensors. Padding and truncation are strategies for dealing with this problem, to create rectangular tensors from batches of varying lengths. Padding adds a special **padding token** to ensure shorter sequences will have the same length as either the longest sequence in a batch or the maximum length accepted by the model. Truncation works in the other direction by truncating long sequences.

In most cases, padding your batch to the length of the longest sequence and truncating to the maximum length a model can accept works pretty well. However, the API supports more strategies if you need them. The three arguments you need to know are: `padding`, `truncation` and `max_length`.

The `padding` argument controls padding. It can be a boolean or a string:

- `True` or `'longest'`: pad to the longest sequence in the batch (no padding is applied if you only provide a single sequence).
- `'max_length'`: pad to a length specified by the `max_length` argument or the maximum length accepted by the model if no `max_length` is provided (`max_length=None`). Padding will still be applied if you only provide a single sequence.
- `False` or `'do_not_pad'`: no padding is applied. This is the default behavior.

The `truncation` argument controls truncation. It can be a boolean or a string:

- `True` or `'longest_first'`: truncate to a maximum length specified by the `max_length` argument or the maximum length accepted by the model if no `max_length` is provided (`max_length=None`). This will truncate token by token, removing a token from the longest sequence in the pair until the proper length is reached.
- `'only_second'`: truncate to a maximum length specified by the `max_length` argument or the maximum length accepted by the model if no `max_length` is provided (`max_length=None`). This will only truncate the second sentence of a pair if a pair of sequences (or a batch of pairs of sequences) is provided.
- `'only_first'`: truncate to a maximum length specified by the `max_length` argument or the maximum length accepted by the model if no `max_length` is provided (`max_length=None`). This will only truncate the first sentence of a pair if a pair of sequences (or a batch of pairs of sequences) is provided.
- `False` or `'do_not_truncate'`: no truncation is applied. This is the default behavior.

The `max_length` argument controls the length of the padding and truncation. It can be an integer or `None`, in which case it will default to the maximum length the model can accept. If the model has no specific maximum input length, truncation or padding to `max_length` is deactivated.

The following table summarizes the recommended way to set up padding and truncation. If you use pairs of input sequences in any of the following examples, you can replace `truncation=True` by a `STRATEGY` selected in `['only_first', 'only_second', 'longest_first']`, i.e. `truncation='only_second'` or `truncation='longest_first'` to control how both sequences in the pair are truncated as detailed before.
| Truncation                           | Padding                           | Instruction                                                                                 |
|--------------------------------------|-----------------------------------|---------------------------------------------------------------------------------------------|
| no truncation                        | no padding                        | `tokenizer(batch_sentences)`                                                                |
|                                      | padding to max sequence in batch  | `tokenizer(batch_sentences, padding=True)` or                                               |
|                                      |                                   | `tokenizer(batch_sentences, padding='longest')`                                             |
|                                      | padding to max model input length | `tokenizer(batch_sentences, padding='max_length')`                                          |
|                                      | padding to specific length        | `tokenizer(batch_sentences, padding='max_length', max_length=42)`                           |
|                                      | padding to a multiple of a value  | `tokenizer(batch_sentences, padding=True, pad_to_multiple_of=8)`                            |
| truncation to max model input length | no padding                        | `tokenizer(batch_sentences, truncation=True)` or                                            |
|                                      |                                   | `tokenizer(batch_sentences, truncation=STRATEGY)`                                           |
|                                      | padding to max sequence in batch  | `tokenizer(batch_sentences, padding=True, truncation=True)` or                              |
|                                      |                                   | `tokenizer(batch_sentences, padding=True, truncation=STRATEGY)`                             |
|                                      | padding to max model input length | `tokenizer(batch_sentences, padding='max_length', truncation=True)` or                      |
|                                      |                                   | `tokenizer(batch_sentences, padding='max_length', truncation=STRATEGY)`                     |
|                                      | padding to specific length        | Not possible                                                                                |
| truncation to specific length        | no padding                        | `tokenizer(batch_sentences, truncation=True, max_length=42)` or                             |
|                                      |                                   | `tokenizer(batch_sentences, truncation=STRATEGY, max_length=42)`                            |
|                                      | padding to max sequence in batch  | `tokenizer(batch_sentences, padding=True, truncation=True, max_length=42)` or               |
|                                      |                                   | `tokenizer(batch_sentences, padding=True, truncation=STRATEGY, max_length=42)`              |
|                                      | padding to max model input length | Not possible                                                                                |
|                                      | padding to specific length        | `tokenizer(batch_sentences, padding='max_length', truncation=True, max_length=42)` or       |
|                                      |                                   | `tokenizer(batch_sentences, padding='max_length', truncation=STRATEGY, max_length=42)`      |
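The order of operations behind the table can be mirrored with a toy implementation. The sketch below is pure Python operating on pre-tokenized ID lists, with an assumed `PAD_ID` of 0; it only illustrates that truncation happens before padding, and is not how the tokenizers implement this internally.

```python
# Minimal sketch of the padding/truncation strategies described above.
# PAD_ID is an assumed padding token ID for illustration.
PAD_ID = 0

def pad_and_truncate(batch, padding="longest", truncation=True, max_length=None):
    # Truncation first: clip every sequence to max_length if requested.
    if truncation and max_length is not None:
        batch = [seq[:max_length] for seq in batch]
    # Then padding, to the chosen target length.
    if padding == "longest":
        target = max(len(seq) for seq in batch)
    elif padding == "max_length":
        target = max_length
    else:  # no padding
        return batch
    return [seq + [PAD_ID] * (target - len(seq)) for seq in batch]

batch = [[101, 7592, 102], [101, 7592, 2088, 999, 102]]
print(pad_and_truncate(batch))                                      # pad to longest (length 5)
print(pad_and_truncate(batch, padding="max_length", max_length=8))  # both padded to length 8
print(pad_and_truncate(batch, max_length=4))                        # truncated to 4, padded to 4
```

As in the real API, `padding='longest'` uses the batch's longest sequence as the target, while `padding='max_length'` uses `max_length` regardless of the batch.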
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Quantization

Quantization techniques focus on representing data with less information while also trying to not lose too much accuracy. This often means converting a data type to represent the same information with fewer bits. For example, if your model weights are stored as 32-bit floating points and they're quantized to 16-bit floating points, this halves the model size which makes it easier to store and reduces memory-usage. Lower precision can also speed up inference because it takes less time to perform calculations with fewer bits.

Transformers supports several quantization schemes to help you run inference with large language models (LLMs) and finetune adapters on quantized models. This guide will show you how to use Activation-aware Weight Quantization (AWQ), AutoGPTQ, and bitsandbytes.

## AWQ

<Tip>

Try AWQ quantization with this [notebook](https://colab.research.google.com/drive/1HzZH89yAXJaZgwJDhQj9LqSBux932BvY)!

</Tip>

[Activation-aware Weight Quantization (AWQ)](https://hf.co/papers/2306.00978) doesn't quantize all the weights in a model, and instead, it preserves a small percentage of weights that are important for LLM performance.
This significantly reduces quantization loss such that you can run models in 4-bit precision without experiencing any performance degradation.

There are several libraries for quantizing models with the AWQ algorithm, such as [llm-awq](https://github.com/mit-han-lab/llm-awq), [autoawq](https://github.com/casper-hansen/AutoAWQ) or [optimum-intel](https://huggingface.co/docs/optimum/main/en/intel/optimization_inc). Transformers supports loading models quantized with the llm-awq and autoawq libraries. This guide will show you how to load models quantized with autoawq, but the process is similar for llm-awq quantized models.

Make sure you have autoawq installed:

```bash
pip install autoawq
```

AWQ-quantized models can be identified by checking the `quantization_config` attribute in the model's [config.json](https://huggingface.co/TheBloke/zephyr-7B-alpha-AWQ/blob/main/config.json) file:

```json
{
  "_name_or_path": "/workspace/process/huggingfaceh4_zephyr-7b-alpha/source",
  "architectures": [
    "MistralForCausalLM"
  ],
  ...
  ...
  ...
  "quantization_config": {
    "quant_method": "awq",
    "zero_point": true,
    "group_size": 128,
    "bits": 4,
    "version": "gemm"
  }
}
```

A quantized model is loaded with the [`~PreTrainedModel.from_pretrained`] method. If you loaded your model on the CPU, make sure to move it to a GPU device first. Use the `device_map` parameter to specify where to place the model:

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/zephyr-7B-alpha-AWQ"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="cuda:0")
```

Loading an AWQ-quantized model automatically sets other weights to fp16 by default for performance reasons.
If you want to load these other weights in a different format, use the `torch_dtype` parameter:

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/zephyr-7B-alpha-AWQ"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float32)
```

AWQ quantization can also be combined with [FlashAttention-2](perf_infer_gpu_one#flashattention-2) to further accelerate inference:

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("TheBloke/zephyr-7B-alpha-AWQ", use_flash_attention_2=True, device_map="cuda:0")
```

## AutoGPTQ

<Tip>

Try GPTQ quantization with PEFT in this [notebook](https://colab.research.google.com/drive/1_TIrmuKOFhuRRiTWN94iLKUFu6ZX4ceb?usp=sharing) and learn more about its details in this [blog post](https://huggingface.co/blog/gptq-integration)!

</Tip>

The [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) library implements the GPTQ algorithm, a post-training quantization technique where each row of the weight matrix is quantized independently to find a version of the weights that minimizes the error. These weights are quantized to int4, but they're restored to fp16 on the fly during inference. This can save your memory-usage by 4x because the int4 weights are dequantized in a fused kernel rather than a GPU's global memory, and you can also expect a speedup in inference because using a lower bitwidth takes less time to communicate.

Before you begin, make sure the following libraries are installed:

```bash
pip install auto-gptq
pip install git+https://github.com/huggingface/optimum.git
pip install git+https://github.com/huggingface/transformers.git
pip install --upgrade accelerate
```

To quantize a model (currently only supported for text models), you need to create a [`GPTQConfig`] class and set the number of bits to quantize to, a dataset to calibrate the weights for quantization, and a tokenizer to prepare the dataset.
```py
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "facebook/opt-125m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
gptq_config = GPTQConfig(bits=4, dataset="c4", tokenizer=tokenizer)
```

You could also pass your own dataset as a list of strings, but it is highly recommended to use the same dataset from the GPTQ paper.

```py
dataset = ["auto-gptq is an easy-to-use model quantization library with user-friendly apis, based on GPTQ algorithm."]
gptq_config = GPTQConfig(bits=4, dataset=dataset, tokenizer=tokenizer)
```

Load a model to quantize and pass the `gptq_config` to the [`~AutoModelForCausalLM.from_pretrained`] method. Set `device_map="auto"` to automatically offload the model to a CPU to help fit the model in memory, and allow the model modules to be moved between the CPU and GPU for quantization.

```py
quantized_model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", quantization_config=gptq_config)
```

Disk offloading is not supported, so if you're running out of memory because the dataset is too large, try passing the `max_memory` parameter to allocate the amount of memory to use on your device (GPU and CPU):

```py
quantized_model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", max_memory={0: "30GiB", 1: "46GiB", "cpu": "30GiB"}, quantization_config=gptq_config)
```

<Tip warning={true}>

Depending on your hardware, it can take some time to quantize a model from scratch. It can take ~5 minutes to quantize the [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) model on a free-tier Google Colab GPU, but it'll take ~4 hours to quantize a 175B parameter model on an NVIDIA A100. Before you quantize a model, it is a good idea to check the Hub if a GPTQ-quantized version of the model already exists.

</Tip>

Once your model is quantized, you can push the model and tokenizer to the Hub where it can be easily shared and accessed.
Use the [`~PreTrainedModel.push_to_hub`] method to save the [`GPTQConfig`]: ```py quantized_model.push_to_hub("opt-125m-gptq") tokenizer.push_to_hub("opt-125m-gptq") ``` You could also save your quantized model locally with the [`~PreTrainedModel.save_pretrained`] method. If the model was quantized with the `device_map` parameter, make sure to move the entire model to a GPU or CPU before saving it. For example, to save the model on a CPU: ```py quantized_model.save_pretrained("opt-125m-gptq") tokenizer.save_pretrained("opt-125m-gptq") # if quantized with device_map set quantized_model.to("cpu") quantized_model.save_pretrained("opt-125m-gptq") ``` Reload a quantized model with the [`~PreTrainedModel.from_pretrained`] method, and set `device_map="auto"` to automatically distribute the model on all available GPUs to load the model faster without using more memory than needed. ```py from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("{your_username}/opt-125m-gptq", device_map="auto") ``` ### ExLlama [ExLlama](https://github.com/turboderp/exllama) is a Python/C++/CUDA implementation of the [Llama](model_doc/llama) model that is designed for faster inference with 4-bit GPTQ weights (check out these [benchmarks](https://github.com/huggingface/optimum/tree/main/tests/benchmark#gptq-benchmark)). The ExLlama kernel is activated by default when you create a [`GPTQConfig`] object. 
To boost inference speed even further, use the [ExLlamaV2](https://github.com/turboderp/exllamav2) kernels by configuring the `exllama_config` parameter: ```py import torch from transformers import AutoModelForCausalLM, GPTQConfig gptq_config = GPTQConfig(bits=4, exllama_config={"version":2}) model = AutoModelForCausalLM.from_pretrained("{your_username}/opt-125m-gptq", device_map="auto", quantization_config=gptq_config) ``` <Tip warning={true}> Only 4-bit models are supported, and we recommend deactivating the ExLlama kernels if you're finetuning a quantized model with PEFT. </Tip> The ExLlama kernels are only supported when the entire model is on the GPU. If you're doing inference on a CPU with AutoGPTQ (version > 0.4.2), then you'll need to disable the ExLlama kernel. This overwrites the attributes related to the ExLlama kernels in the quantization config of the config.json file. ```py import torch from transformers import AutoModelForCausalLM, GPTQConfig gptq_config = GPTQConfig(bits=4, use_exllama=False) model = AutoModelForCausalLM.from_pretrained("{your_username}/opt-125m-gptq", device_map="cpu", quantization_config=gptq_config) ``` ## bitsandbytes [bitsandbytes](https://github.com/TimDettmers/bitsandbytes) is the easiest option for quantizing a model to 8 and 4-bit. 8-bit quantization multiplies outliers in fp16 with non-outliers in int8, converts the non-outlier values back to fp16, and then adds them together to return the weights in fp16. This reduces the degradative effect outlier values have on a model's performance. 4-bit quantization compresses a model even further, and it is commonly used with [QLoRA](https://hf.co/papers/2305.14314) to finetune quantized LLMs. 
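To make the 8-bit idea concrete, here is a toy illustration of absmax integer quantization, the basic mechanism applied to the non-outlier values. This is a conceptual sketch in plain Python, not the bitsandbytes implementation, which works blockwise on GPU tensors; the weight values are made up for illustration.

```python
# Toy absmax int8 quantization: map the largest magnitude to 127, round
# everything else onto that integer grid, and keep the scale to dequantize.
def quantize_absmax(values):
    scale = max(abs(v) for v in values) / 127
    q = [round(v / scale) for v in values]  # int8 codes in [-127, 127]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.12, -0.05, 0.33, -0.27, 0.08]  # made-up fp weights
q, scale = quantize_absmax(weights)
restored = dequantize(q, scale)

# The round-trip error is bounded by half a quantization step.
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(q)
print(max_err <= scale / 2 + 1e-12)  # True
```

For well-behaved values the rounding error is tiny; the outlier handling discussed below exists because a single very large value stretches `scale` and destroys this precision.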
To use bitsandbytes, make sure you have the following libraries installed:

<hfoptions id="bnb">
<hfoption id="8-bit">

```bash
pip install transformers accelerate "bitsandbytes>0.37.0"
```

</hfoption>
<hfoption id="4-bit">

```bash
pip install "bitsandbytes>=0.39.0"
pip install --upgrade accelerate
pip install --upgrade transformers
```

</hfoption>
</hfoptions>

Now you can quantize a model with the `load_in_8bit` or `load_in_4bit` parameters in the [`~PreTrainedModel.from_pretrained`] method. This works for any model in any modality, as long as it supports loading with Accelerate and contains `torch.nn.Linear` layers.

<hfoptions id="bnb">
<hfoption id="8-bit">

Quantizing a model in 8-bit halves the memory-usage, and for large models, set `device_map="auto"` to efficiently use the GPUs available:

```py
from transformers import AutoModelForCausalLM

model_8bit = AutoModelForCausalLM.from_pretrained("bigscience/bloom-1b7", device_map="auto", load_in_8bit=True)
```

By default, all the other modules such as `torch.nn.LayerNorm` are converted to `torch.float16`. You can change the data type of these modules with the `torch_dtype` parameter if you want:

```py
import torch
from transformers import AutoModelForCausalLM

model_8bit = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", load_in_8bit=True, torch_dtype=torch.float32)
model_8bit.model.decoder.layers[-1].final_layer_norm.weight.dtype
```

Once a model is quantized to 8-bit, you can't push the quantized weights to the Hub unless you're using the latest version of Transformers and bitsandbytes. If you have the latest versions, then you can push the 8-bit model to the Hub with the [`~PreTrainedModel.push_to_hub`] method. The quantization config.json file is pushed first, followed by the quantized model weights.
```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m", device_map="auto", load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")

model.push_to_hub("bloom-560m-8bit")
```

</hfoption>
<hfoption id="4-bit">

Quantizing a model in 4-bit reduces your memory-usage by 4x, and for large models, set `device_map="auto"` to efficiently use the GPUs available:

```py
from transformers import AutoModelForCausalLM

model_4bit = AutoModelForCausalLM.from_pretrained("bigscience/bloom-1b7", device_map="auto", load_in_4bit=True)
```

By default, all the other modules such as `torch.nn.LayerNorm` are converted to `torch.float16`. You can change the data type of these modules with the `torch_dtype` parameter if you want:

```py
import torch
from transformers import AutoModelForCausalLM

model_4bit = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", load_in_4bit=True, torch_dtype=torch.float32)
model_4bit.model.decoder.layers[-1].final_layer_norm.weight.dtype
```

Once a model is quantized to 4-bit, you can't push the quantized weights to the Hub.

</hfoption>
</hfoptions>

<Tip warning={true}>

Training with 8-bit and 4-bit weights is only supported for training *extra* parameters.

</Tip>

You can check your memory footprint with the `get_memory_footprint` method:

```py
print(model.get_memory_footprint())
```

Quantized models can be loaded from the [`~PreTrainedModel.from_pretrained`] method without needing to specify the `load_in_8bit` or `load_in_4bit` parameters:

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("{your_username}/bloom-560m-8bit", device_map="auto")
```

### 8-bit

<Tip>

Learn more about the details of 8-bit quantization in this [blog post](https://huggingface.co/blog/hf-bitsandbytes-integration)!
</Tip> This section explores some of the specific features of 8-bit models, such as offloading, outlier thresholds, skipping module conversion, and finetuning. #### Offloading 8-bit models can offload weights between the CPU and GPU to support fitting very large models into memory. The weights dispatched to the CPU are actually stored in **float32**, and aren't converted to 8-bit. For example, to enable offloading for the [bigscience/bloom-1b7](https://huggingface.co/bigscience/bloom-1b7) model, start by creating a [`BitsAndBytesConfig`]: ```py from transformers import AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(llm_int8_enable_fp32_cpu_offload=True) ``` Design a custom device map to fit everything on your GPU except for the `lm_head`, which you'll dispatch to the CPU: ```py device_map = { "transformer.word_embeddings": 0, "transformer.word_embeddings_layernorm": 0, "lm_head": "cpu", "transformer.h": 0, "transformer.ln_f": 0, } ``` Now load your model with the custom `device_map` and `quantization_config`: ```py model_8bit = AutoModelForCausalLM.from_pretrained( "bigscience/bloom-1b7", device_map=device_map, quantization_config=quantization_config, ) ``` #### Outlier threshold An "outlier" is a hidden state value greater than a certain threshold, and these values are computed in fp16. While the values are usually normally distributed ([-3.5, 3.5]), this distribution can be very different for large models ([-60, 6] or [6, 60]). 8-bit quantization works well for values ~5, but beyond that, there is a significant performance penalty. A good default threshold value is 6, but a lower threshold may be needed for more unstable models (small models or finetuning). 
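The effect of a single outlier on absmax quantization can be shown with a small toy calculation. This is an illustration of the problem only, not of what bitsandbytes does: the library instead routes outlier dimensions through an fp16 path, governed by the threshold discussed here. The values below are made up.

```python
# One large value stretches the quantization scale and wipes out precision
# for all the small values that share it.
def absmax_error(values):
    scale = max(abs(v) for v in values) / 127
    return max(abs(v - round(v / scale) * scale) for v in values)

normal = [0.5, -1.2, 2.0, -3.1, 3.4]   # well-behaved, roughly within [-3.5, 3.5]
with_outlier = normal + [60.0]         # one outlier, as seen in large models

print(absmax_error(normal))        # small rounding error
print(absmax_error(with_outlier))  # much larger error for the same small values
```

This is why values beyond the threshold are better kept in fp16 than forced onto the shared int8 grid.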
To find the best threshold for your model, we recommend experimenting with the `llm_int8_threshold` parameter in [`BitsAndBytesConfig`]:

```py
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model_id = "bigscience/bloom-1b7"

quantization_config = BitsAndBytesConfig(
    llm_int8_threshold=10,
)

model_8bit = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map=device_map,
    quantization_config=quantization_config,
)
```

#### Skip module conversion

For some models, like [Jukebox](model_doc/jukebox), you don't need to quantize every module to 8-bit, since doing so can actually cause instability. With Jukebox, there are several `lm_head` modules that should be skipped using the `llm_int8_skip_modules` parameter in [`BitsAndBytesConfig`]:

```py
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "bigscience/bloom-1b7"

quantization_config = BitsAndBytesConfig(
    llm_int8_skip_modules=["lm_head"],
)

model_8bit = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=quantization_config,
)
```

#### Finetuning

With the [PEFT](https://github.com/huggingface/peft) library, you can finetune large models like [flan-t5-large](https://huggingface.co/google/flan-t5-large) and [facebook/opt-6.7b](https://huggingface.co/facebook/opt-6.7b) with 8-bit quantization. You don't need to pass the `device_map` parameter for training because it'll automatically load your model on a GPU. However, you can still customize the device map with the `device_map` parameter if you want to (`device_map="auto"` should only be used for inference).

### 4-bit

<Tip>

Try 4-bit quantization in this [notebook](https://colab.research.google.com/drive/1ge2F1QSK8Q7h0hn3YKuBCOAS0bK8E0wf) and learn more about its details in this [blog post](https://huggingface.co/blog/4bit-transformers-bitsandbytes).
</Tip>

This section explores some of the specific features of 4-bit models, such as changing the compute data type, using the Normal Float 4 (NF4) data type, and using nested quantization.

#### Compute data type

To speed up computation, you can change the data type from float32 (the default value) to bf16 using the `bnb_4bit_compute_dtype` parameter in [`BitsAndBytesConfig`]:

```py
import torch
from transformers import BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
```

#### Normal Float 4 (NF4)

NF4 is a 4-bit data type from the [QLoRA](https://hf.co/papers/2305.14314) paper, adapted for weights initialized from a normal distribution. You should use NF4 for training 4-bit base models. This can be configured with the `bnb_4bit_quant_type` parameter in the [`BitsAndBytesConfig`]:

```py
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
)

model_nf4 = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=nf4_config)
```

For inference, the `bnb_4bit_quant_type` does not have a huge impact on performance. However, to remain consistent with the model weights, you should use the `bnb_4bit_compute_dtype` and `torch_dtype` values.

#### Nested quantization

Nested quantization is a technique that can save additional memory at no additional performance cost. This feature performs a second quantization of the already quantized weights to save an additional 0.4 bits/parameter. For example, with nested quantization, you can finetune a [Llama-13b](https://huggingface.co/meta-llama/Llama-2-13b) model on a 16GB NVIDIA T4 GPU with a sequence length of 1024, a batch size of 1, and enabling gradient accumulation with 4 steps.
```py
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

double_quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
)

model_double_quant = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-13b", quantization_config=double_quant_config)
```

## Optimum

The [Optimum](https://huggingface.co/docs/optimum/index) library supports quantization for Intel, Furiosa, ONNX Runtime, GPTQ, and lower-level PyTorch quantization functions. Consider using Optimum for quantization if you're using specific and optimized hardware like Intel CPUs, Furiosa NPUs or a model accelerator like ONNX Runtime.

## Benchmarks

To compare the speed, throughput, and latency of each quantization scheme, check the following benchmarks obtained from the [optimum-benchmark](https://github.com/huggingface/optimum-benchmark) library. The benchmark was run on an NVIDIA A1000 for the [TheBloke/Mistral-7B-v0.1-AWQ](https://huggingface.co/TheBloke/Mistral-7B-v0.1-AWQ) and [TheBloke/Mistral-7B-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-v0.1-GPTQ) models. These were also tested against the bitsandbytes quantization methods as well as a native fp16 model.
<div class="flex gap-4"> <div> <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/quantization/forward_memory_plot.png" alt="forward peak memory per batch size" /> <figcaption class="mt-2 text-center text-sm text-gray-500">forward peak memory/batch size</figcaption> </div> <div> <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/quantization/generate_memory_plot.png" alt="generate peak memory per batch size" /> <figcaption class="mt-2 text-center text-sm text-gray-500">generate peak memory/batch size</figcaption> </div> </div> <div class="flex gap-4"> <div> <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/quantization/generate_throughput_plot.png" alt="generate throughput per batch size" /> <figcaption class="mt-2 text-center text-sm text-gray-500">generate throughput/batch size</figcaption> </div> <div> <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/quantization/forward_latency_plot.png" alt="forward latency per batch size" /> <figcaption class="mt-2 text-center text-sm text-gray-500">forward latency/batch size</figcaption> </div> </div> The benchmarks indicate AWQ quantization is the fastest for inference, text generation, and has the lowest peak memory for text generation. However, AWQ has the largest forward latency per batch size. For a more detailed discussion about the pros and cons of each quantization method, read the [Overview of natively supported quantization schemes in ๐Ÿค— Transformers](https://huggingface.co/blog/overview-quantization-transformers) blog post.
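Independent of the benchmark numbers above, the weight-storage savings of each scheme follow directly from the bits per parameter. The sketch below is back-of-the-envelope arithmetic for an illustrative 7B-parameter model, counting weights only; activations, the KV cache, and framework overhead come on top.

```python
# Weight memory scales with bits per parameter: bytes = params * bits / 8.
params = 7_000_000_000
GiB = 2**30

for name, bits in [("fp16", 16), ("int8", 8), ("int4", 4)]:
    size_gib = params * bits / 8 / GiB
    print(f"{name}: {size_gib:.1f} GiB")
# fp16: 13.0 GiB, int8: 6.5 GiB, int4: 3.3 GiB
```

This is why a 7B model that doesn't fit on a consumer GPU in fp16 often does in 4-bit.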
<!--- Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> # Model training anatomy To understand performance optimization techniques that one can apply to improve efficiency of model training speed and memory utilization, it's helpful to get familiar with how GPU is utilized during training, and how compute intensity varies depending on an operation performed. Let's start by exploring a motivating example of GPU utilization and the training run of a model. For the demonstration, we'll need to install a few libraries: ```bash pip install transformers datasets accelerate nvidia-ml-py3 ``` The `nvidia-ml-py3` library allows us to monitor the memory usage of the models from within Python. You might be familiar with the `nvidia-smi` command in the terminal - this library allows to access the same information in Python directly. Then, we create some dummy data: random token IDs between 100 and 30000 and binary labels for a classifier. In total, we get 512 sequences each with length 512 and store them in a [`~datasets.Dataset`] with PyTorch format. ```py >>> import numpy as np >>> from datasets import Dataset >>> seq_len, dataset_size = 512, 512 >>> dummy_data = { ... "input_ids": np.random.randint(100, 30000, (dataset_size, seq_len)), ... "labels": np.random.randint(0, 1, (dataset_size)), ... 
}
>>> ds = Dataset.from_dict(dummy_data)
>>> ds.set_format("pt")
```

To print summary statistics for the GPU utilization and the training run with the [`Trainer`], we define two helper functions:

```py
>>> from pynvml import *


>>> def print_gpu_utilization():
...     nvmlInit()
...     handle = nvmlDeviceGetHandleByIndex(0)
...     info = nvmlDeviceGetMemoryInfo(handle)
...     print(f"GPU memory occupied: {info.used//1024**2} MB.")


>>> def print_summary(result):
...     print(f"Time: {result.metrics['train_runtime']:.2f}")
...     print(f"Samples/second: {result.metrics['train_samples_per_second']:.2f}")
...     print_gpu_utilization()
```

Let's verify that we start with free GPU memory:

```py
>>> print_gpu_utilization()
GPU memory occupied: 0 MB.
```

That looks good: the GPU memory is not occupied as we would expect before we load any models. If that's not the case on your machine, make sure to stop all processes that are using GPU memory. However, not all free GPU memory can be used by the user. When a model is loaded onto the GPU, the CUDA kernels are also loaded, which can take up 1-2GB of memory. To see how much memory this takes, we load a tiny tensor onto the GPU, which triggers the kernels to be loaded as well.

```py
>>> import torch


>>> torch.ones((1, 1)).to("cuda")
>>> print_gpu_utilization()
GPU memory occupied: 1343 MB.
```

We see that the kernels alone take up 1.3GB of GPU memory. Now let's see how much space the model uses.

## Load Model

First, we load the `bert-large-uncased` model. We load the model weights directly to the GPU so that we can check how much space just the weights use.

```py
>>> from transformers import AutoModelForSequenceClassification


>>> model = AutoModelForSequenceClassification.from_pretrained("bert-large-uncased").to("cuda")
>>> print_gpu_utilization()
GPU memory occupied: 2631 MB.
```

We can see that the model weights alone take up 1.3 GB of GPU memory. The exact number depends on the specific GPU you are using.
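We can sanity-check that number with quick arithmetic. Assuming `bert-large-uncased` has roughly 336M parameters stored in fp32 (4 bytes each) - an approximation, not an exact figure - the weights alone should account for about 1.3 GB, matching the jump from 1343 MB to 2631 MB:

```python
num_params = 336_000_000  # rough parameter count of bert-large-uncased (an assumption)
bytes_per_param_fp32 = 4  # one fp32 weight takes 4 bytes

weights_mb = num_params * bytes_per_param_fp32 / 1024**2
observed_mb = 2631 - 1343  # total occupied memory minus the CUDA kernel overhead

print(f"expected: {weights_mb:.0f} MB, observed: {observed_mb} MB")
```

The small gap between the two numbers is expected: the classification head adds a few extra parameters, and allocators round allocations up.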
Note that on newer GPUs a model can sometimes take up more space since the weights are loaded in an optimized fashion that speeds up the usage of the model. Now we can also quickly check if we get the same result as with the `nvidia-smi` CLI:

```bash
nvidia-smi
```

```bash
Tue Jan 11 08:58:05 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.91.03    Driver Version: 460.91.03    CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla V100-SXM2...  On   | 00000000:00:04.0 Off |                    0 |
| N/A   37C    P0    39W / 300W |   2631MiB / 16160MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      3721      C   ...nvs/codeparrot/bin/python     2629MiB |
+-----------------------------------------------------------------------------+
```

We get the same number as before, and you can also see that we are using a V100 GPU with 16GB of memory. So now we can start training the model and see how the GPU memory consumption changes. First, we set up a few standard training arguments:

```py
default_args = {
    "output_dir": "tmp",
    "evaluation_strategy": "steps",
    "num_train_epochs": 1,
    "log_level": "error",
    "report_to": "none",
}
```

<Tip>

If you plan to run multiple experiments, restart the Python kernel between runs in order to properly clear the memory.
</Tip>

## Memory utilization at vanilla training

Let's use the [`Trainer`] and train the model without using any GPU performance optimization techniques and a batch size of 4:

```py
>>> from transformers import TrainingArguments, Trainer, logging

>>> logging.set_verbosity_error()


>>> training_args = TrainingArguments(per_device_train_batch_size=4, **default_args)
>>> trainer = Trainer(model=model, args=training_args, train_dataset=ds)
>>> result = trainer.train()
>>> print_summary(result)
```

```
Time: 57.82
Samples/second: 8.86
GPU memory occupied: 14949 MB.
```

We see that even a relatively small batch size almost fills up our GPU's entire memory. However, a larger batch size can often result in faster model convergence or better end performance. So ideally we want to tune the batch size to our model's needs and not to the GPU limitations. What's interesting is that we use much more memory than the size of the model. To understand a bit better why this is the case, let's have a look at a model's operations and memory needs.

## Anatomy of Model's Operations

The Transformer architecture includes 3 main groups of operations, grouped below by compute intensity.

1. **Tensor Contractions**

    Linear layers and components of Multi-Head Attention all do batched **matrix-matrix multiplications**. These operations are the most compute-intensive part of training a transformer.

2. **Statistical Normalizations**

    Softmax and layer normalization are less compute-intensive than tensor contractions, and involve one or more **reduction operations**, the result of which is then applied via a map.

3. **Element-wise Operators**

    These are the remaining operators: **biases, dropout, activations, and residual connections**. These are the least compute-intensive operations.

This knowledge is helpful when analyzing performance bottlenecks.
This summary is derived from [Data Movement Is All You Need: A Case Study on Optimizing Transformers 2020](https://arxiv.org/abs/2007.00072)

## Anatomy of Model's Memory

We've seen that training the model uses much more memory than just putting the model on the GPU. This is because there are many components during training that use GPU memory. The components on GPU memory are the following:

1. model weights
2. optimizer states
3. gradients
4. forward activations saved for gradient computation
5. temporary buffers
6. functionality-specific memory

A typical model trained in mixed precision with AdamW requires 18 bytes per model parameter plus activation memory. For inference there are no optimizer states and gradients, so we can subtract those. And thus we end up with 6 bytes per model parameter for mixed precision inference, plus activation memory.

Let's look at the details.

**Model Weights:**

- 4 bytes * number of parameters for fp32 training
- 6 bytes * number of parameters for mixed precision training (maintains a model in fp32 and one in fp16 in memory)

**Optimizer States:**

- 8 bytes * number of parameters for normal AdamW (maintains 2 states)
- 2 bytes * number of parameters for 8-bit AdamW optimizers like [bitsandbytes](https://github.com/TimDettmers/bitsandbytes)
- 4 bytes * number of parameters for optimizers like SGD with momentum (maintains only 1 state)

**Gradients**

- 4 bytes * number of parameters for either fp32 or mixed precision training (gradients are always kept in fp32)

**Forward Activations**

- size depends on many factors, the key ones being sequence length, hidden size and batch size.

These include the inputs and outputs that are passed and returned by the forward and backward functions, as well as the forward activations saved for gradient computation.
**Temporary Memory**

Additionally, there are all kinds of temporary variables which get released once the calculation is done, but at that moment they can require additional memory and push you to OOM. Therefore, when coding it's crucial to think strategically about such temporary variables and sometimes to free them explicitly as soon as they are no longer needed.

**Functionality-specific memory**

Then, your software could have special memory needs. For example, when generating text using beam search, the software needs to maintain multiple copies of inputs and outputs.

**`forward` vs `backward` Execution Speed**

For convolutions and linear layers there are 2x flops in the backward compared to the forward, which generally translates into a backward pass that is roughly 2x slower (sometimes more, because sizes in the backward tend to be more awkward). Activations are usually bandwidth-limited, and it's typical for an activation to have to read more data in the backward than in the forward (e.g. the activation forward reads once and writes once, while the activation backward reads twice, gradOutput and the output of the forward, and writes once, gradInput).

As you can see, there are potentially a few places where we could save GPU memory or speed up operations. Now that you understand what affects GPU utilization and computation speed, refer to the [Methods and tools for efficient training on a single GPU](perf_train_gpu_one) documentation page to learn about performance optimization techniques.
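The byte-per-parameter accounting above can be turned into a quick back-of-the-envelope estimator. The helper below is a sketch that covers only the static components (weights, gradients, optimizer states) and deliberately ignores activations, temporary buffers, and kernel overhead; the function name and parameter count are illustrative, not part of any API:

```python
def static_training_memory_gb(num_params, mixed_precision=True, optimizer="adamw"):
    """Rough GPU memory for weights + gradients + optimizer states, excluding activations."""
    weights = 6 if mixed_precision else 4   # mixed precision keeps an fp32 master copy plus an fp16 copy
    gradients = 4                           # gradients are kept in fp32 either way
    optimizer_bytes = {"adamw": 8, "adamw_8bit": 2, "sgd_momentum": 4}[optimizer]
    total_bytes = num_params * (weights + gradients + optimizer_bytes)
    return total_bytes / 1024**3

# ~336M parameters at 18 bytes each comes out to roughly 5.6 GB before activations,
# consistent with the 14949 MB observed earlier once activations and buffers are added.
print(f"{static_training_memory_gb(336_000_000):.2f} GB")
```

Swapping `optimizer="adamw_8bit"` drops the per-parameter cost from 18 to 12 bytes, which is one of the levers discussed in the single-GPU optimization guide.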
hf_public_repos/transformers/docs/source/en/model_sharing.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->

# Share a model

The last two tutorials showed how you can fine-tune a model with PyTorch, Keras, and 🤗 Accelerate for distributed setups. The next step is to share your model with the community! At Hugging Face, we believe in openly sharing knowledge and resources to democratize artificial intelligence for everyone. We encourage you to consider sharing your model with the community to help others save time and resources.

In this tutorial, you will learn two methods for sharing a trained or fine-tuned model on the [Model Hub](https://huggingface.co/models):

- Programmatically push your files to the Hub.
- Drag-and-drop your files to the Hub with the web interface.

<iframe width="560" height="315" src="https://www.youtube.com/embed/XvSGPZFEjDY" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

<Tip>

To share a model with the community, you need an account on [huggingface.co](https://huggingface.co/join). You can also join an existing organization or create a new one.

</Tip>

## Repository features

Each repository on the Model Hub behaves like a typical GitHub repository.
Our repositories offer versioning, commit history, and the ability to visualize differences. The Model Hub's built-in versioning is based on git and [git-lfs](https://git-lfs.github.com/). In other words, you can treat one model as one repository, enabling greater access control and scalability. Version control allows *revisions*, a method for pinning a specific version of a model with a commit hash, tag or branch.

As a result, you can load a specific model version with the `revision` parameter:

```py
>>> model = AutoModel.from_pretrained(
...     "julien-c/EsperBERTo-small", revision="v2.0.1"  # tag name, or branch name, or commit hash
... )
```

Files are also easily edited in a repository, and you can view the commit history as well as the differences:

![vis_diff](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/vis_diff.png)

## Setup

Before sharing a model to the Hub, you will need your Hugging Face credentials. If you have access to a terminal, run the following command in the virtual environment where 🤗 Transformers is installed. This will store your access token in your Hugging Face cache folder (`~/.cache/` by default):

```bash
huggingface-cli login
```

If you are using a notebook like Jupyter or Colaboratory, make sure you have the [`huggingface_hub`](https://huggingface.co/docs/hub/adding-a-library) library installed. This library allows you to programmatically interact with the Hub.

```bash
pip install huggingface_hub
```

Then use `notebook_login` to sign in to the Hub, and follow the link [here](https://huggingface.co/settings/token) to generate a token to login with:

```py
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```

## Convert a model for all frameworks

To ensure your model can be used by someone working with a different framework, we recommend you convert and upload your model with both PyTorch and TensorFlow checkpoints.
While users are still able to load your model from a different framework if you skip this step, it will be slower because ๐Ÿค— Transformers will need to convert the checkpoint on-the-fly. Converting a checkpoint for another framework is easy. Make sure you have PyTorch and TensorFlow installed (see [here](installation) for installation instructions), and then find the specific model for your task in the other framework. <frameworkcontent> <pt> Specify `from_tf=True` to convert a checkpoint from TensorFlow to PyTorch: ```py >>> pt_model = DistilBertForSequenceClassification.from_pretrained("path/to/awesome-name-you-picked", from_tf=True) >>> pt_model.save_pretrained("path/to/awesome-name-you-picked") ``` </pt> <tf> Specify `from_pt=True` to convert a checkpoint from PyTorch to TensorFlow: ```py >>> tf_model = TFDistilBertForSequenceClassification.from_pretrained("path/to/awesome-name-you-picked", from_pt=True) ``` Then you can save your new TensorFlow model with its new checkpoint: ```py >>> tf_model.save_pretrained("path/to/awesome-name-you-picked") ``` </tf> <jax> If a model is available in Flax, you can also convert a checkpoint from PyTorch to Flax: ```py >>> flax_model = FlaxDistilBertForSequenceClassification.from_pretrained( ... "path/to/awesome-name-you-picked", from_pt=True ... ) ``` </jax> </frameworkcontent> ## Push a model during training <frameworkcontent> <pt> <Youtube id="Z1-XMy-GNLQ"/> Sharing a model to the Hub is as simple as adding an extra parameter or callback. Remember from the [fine-tuning tutorial](training), the [`TrainingArguments`] class is where you specify hyperparameters and additional training options. One of these training options includes the ability to push a model directly to the Hub. Set `push_to_hub=True` in your [`TrainingArguments`]: ```py >>> training_args = TrainingArguments(output_dir="my-awesome-model", push_to_hub=True) ``` Pass your training arguments as usual to [`Trainer`]: ```py >>> trainer = Trainer( ... 
model=model, ... args=training_args, ... train_dataset=small_train_dataset, ... eval_dataset=small_eval_dataset, ... compute_metrics=compute_metrics, ... ) ``` After you fine-tune your model, call [`~transformers.Trainer.push_to_hub`] on [`Trainer`] to push the trained model to the Hub. ๐Ÿค— Transformers will even automatically add training hyperparameters, training results and framework versions to your model card! ```py >>> trainer.push_to_hub() ``` </pt> <tf> Share a model to the Hub with [`PushToHubCallback`]. In the [`PushToHubCallback`] function, add: - An output directory for your model. - A tokenizer. - The `hub_model_id`, which is your Hub username and model name. ```py >>> from transformers import PushToHubCallback >>> push_to_hub_callback = PushToHubCallback( ... output_dir="./your_model_save_path", tokenizer=tokenizer, hub_model_id="your-username/my-awesome-model" ... ) ``` Add the callback to [`fit`](https://keras.io/api/models/model_training_apis/), and ๐Ÿค— Transformers will push the trained model to the Hub: ```py >>> model.fit(tf_train_dataset, validation_data=tf_validation_dataset, epochs=3, callbacks=push_to_hub_callback) ``` </tf> </frameworkcontent> ## Use the `push_to_hub` function You can also call `push_to_hub` directly on your model to upload it to the Hub. Specify your model name in `push_to_hub`: ```py >>> pt_model.push_to_hub("my-awesome-model") ``` This creates a repository under your username with the model name `my-awesome-model`. Users can now load your model with the `from_pretrained` function: ```py >>> from transformers import AutoModel >>> model = AutoModel.from_pretrained("your_username/my-awesome-model") ``` If you belong to an organization and want to push your model under the organization name instead, just add it to the `repo_id`: ```py >>> pt_model.push_to_hub("my-awesome-org/my-awesome-model") ``` The `push_to_hub` function can also be used to add other files to a model repository. 
For example, add a tokenizer to a model repository: ```py >>> tokenizer.push_to_hub("my-awesome-model") ``` Or perhaps you'd like to add the TensorFlow version of your fine-tuned PyTorch model: ```py >>> tf_model.push_to_hub("my-awesome-model") ``` Now when you navigate to your Hugging Face profile, you should see your newly created model repository. Clicking on the **Files** tab will display all the files you've uploaded to the repository. For more details on how to create and upload files to a repository, refer to the Hub documentation [here](https://huggingface.co/docs/hub/how-to-upstream). ## Upload with the web interface Users who prefer a no-code approach are able to upload a model through the Hub's web interface. Visit [huggingface.co/new](https://huggingface.co/new) to create a new repository: ![new_model_repo](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/new_model_repo.png) From here, add some information about your model: - Select the **owner** of the repository. This can be yourself or any of the organizations you belong to. - Pick a name for your model, which will also be the repository name. - Choose whether your model is public or private. - Specify the license usage for your model. Now click on the **Files** tab and click on the **Add file** button to upload a new file to your repository. Then drag-and-drop a file to upload and add a commit message. ![upload_file](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/upload_file.png) ## Add a model card To make sure users understand your model's capabilities, limitations, potential biases and ethical considerations, please add a model card to your repository. The model card is defined in the `README.md` file. You can add a model card by: * Manually creating and uploading a `README.md` file. * Clicking on the **Edit model card** button in your model repository. 
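A minimal `README.md` might look like the sketch below. The YAML block at the top is the model card metadata the Hub parses; the field values here (license, tags, dataset, and the model description) are illustrative placeholders to replace with the details of your own model:

```
---
language: en
license: apache-2.0
tags:
- text-classification
datasets:
- imdb
---

# my-awesome-model

A DistilBERT model fine-tuned for sentiment classification. Describe the training
data, intended uses, limitations, and potential biases here.
```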
Take a look at the DistilBert [model card](https://huggingface.co/distilbert-base-uncased) for a good example of the type of information a model card should include. For more details about other options you can control in the `README.md` file such as a model's carbon footprint or widget examples, refer to the documentation [here](https://huggingface.co/docs/hub/models-cards).
hf_public_repos/transformers/docs/source/en/create_a_model.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->

# Create a custom architecture

An [`AutoClass`](model_doc/auto) automatically infers the model architecture and downloads pretrained configuration and weights. Generally, we recommend using an `AutoClass` to produce checkpoint-agnostic code. But users who want more control over specific model parameters can create a custom 🤗 Transformers model from just a few base classes. This could be particularly useful for anyone who is interested in studying, training or experimenting with a 🤗 Transformers model. In this guide, dive deeper into creating a custom model without an `AutoClass`. Learn how to:

- Load and customize a model configuration.
- Create a model architecture.
- Create a slow and fast tokenizer for text.
- Create an image processor for vision tasks.
- Create a feature extractor for audio tasks.
- Create a processor for multimodal tasks.

## Configuration

A [configuration](main_classes/configuration) refers to a model's specific attributes. Each model configuration has different attributes; for instance, all NLP models have the `hidden_size`, `num_attention_heads`, `num_hidden_layers` and `vocab_size` attributes in common.
These attributes specify the number of attention heads or hidden layers to construct a model with. Get a closer look at [DistilBERT](model_doc/distilbert) by accessing [`DistilBertConfig`] to inspect its attributes:

```py
>>> from transformers import DistilBertConfig

>>> config = DistilBertConfig()
>>> print(config)
DistilBertConfig {
  "activation": "gelu",
  "attention_dropout": 0.1,
  "dim": 768,
  "dropout": 0.1,
  "hidden_dim": 3072,
  "initializer_range": 0.02,
  "max_position_embeddings": 512,
  "model_type": "distilbert",
  "n_heads": 12,
  "n_layers": 6,
  "pad_token_id": 0,
  "qa_dropout": 0.1,
  "seq_classif_dropout": 0.2,
  "sinusoidal_pos_embds": false,
  "transformers_version": "4.16.2",
  "vocab_size": 30522
}
```

[`DistilBertConfig`] displays all the default attributes used to build a base [`DistilBertModel`]. All attributes are customizable, creating space for experimentation. For example, you can customize a default model to:

- Try a different activation function with the `activation` parameter.
- Use a higher dropout ratio for the attention probabilities with the `attention_dropout` parameter.

```py
>>> my_config = DistilBertConfig(activation="relu", attention_dropout=0.4)

>>> print(my_config)
DistilBertConfig {
  "activation": "relu",
  "attention_dropout": 0.4,
  "dim": 768,
  "dropout": 0.1,
  "hidden_dim": 3072,
  "initializer_range": 0.02,
  "max_position_embeddings": 512,
  "model_type": "distilbert",
  "n_heads": 12,
  "n_layers": 6,
  "pad_token_id": 0,
  "qa_dropout": 0.1,
  "seq_classif_dropout": 0.2,
  "sinusoidal_pos_embds": false,
  "transformers_version": "4.16.2",
  "vocab_size": 30522
}
```

Pretrained model attributes can be modified in the [`~PretrainedConfig.from_pretrained`] function:

```py
>>> my_config = DistilBertConfig.from_pretrained("distilbert-base-uncased", activation="relu", attention_dropout=0.4)
```

Once you are satisfied with your model configuration, you can save it with [`~PretrainedConfig.save_pretrained`].
Your configuration file is stored as a JSON file in the specified save directory: ```py >>> my_config.save_pretrained(save_directory="./your_model_save_path") ``` To reuse the configuration file, load it with [`~PretrainedConfig.from_pretrained`]: ```py >>> my_config = DistilBertConfig.from_pretrained("./your_model_save_path/config.json") ``` <Tip> You can also save your configuration file as a dictionary or even just the difference between your custom configuration attributes and the default configuration attributes! See the [configuration](main_classes/configuration) documentation for more details. </Tip> ## Model The next step is to create a [model](main_classes/models). The model - also loosely referred to as the architecture - defines what each layer is doing and what operations are happening. Attributes like `num_hidden_layers` from the configuration are used to define the architecture. Every model shares the base class [`PreTrainedModel`] and a few common methods like resizing input embeddings and pruning self-attention heads. In addition, all models are also either a [`torch.nn.Module`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html), [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/Model) or [`flax.linen.Module`](https://flax.readthedocs.io/en/latest/api_reference/flax.linen/module.html) subclass. This means models are compatible with each of their respective framework's usage. <frameworkcontent> <pt> Load your custom configuration attributes into the model: ```py >>> from transformers import DistilBertModel >>> my_config = DistilBertConfig.from_pretrained("./your_model_save_path/config.json") >>> model = DistilBertModel(my_config) ``` This creates a model with random values instead of pretrained weights. You won't be able to use this model for anything useful yet until you train it. Training is a costly and time-consuming process. 
It is generally better to use a pretrained model to obtain better results faster, while using only a fraction of the resources required for training. Create a pretrained model with [`~PreTrainedModel.from_pretrained`]: ```py >>> model = DistilBertModel.from_pretrained("distilbert-base-uncased") ``` When you load pretrained weights, the default model configuration is automatically loaded if the model is provided by ๐Ÿค— Transformers. However, you can still replace - some or all of - the default model configuration attributes with your own if you'd like: ```py >>> model = DistilBertModel.from_pretrained("distilbert-base-uncased", config=my_config) ``` </pt> <tf> Load your custom configuration attributes into the model: ```py >>> from transformers import TFDistilBertModel >>> my_config = DistilBertConfig.from_pretrained("./your_model_save_path/my_config.json") >>> tf_model = TFDistilBertModel(my_config) ``` This creates a model with random values instead of pretrained weights. You won't be able to use this model for anything useful yet until you train it. Training is a costly and time-consuming process. It is generally better to use a pretrained model to obtain better results faster, while using only a fraction of the resources required for training. Create a pretrained model with [`~TFPreTrainedModel.from_pretrained`]: ```py >>> tf_model = TFDistilBertModel.from_pretrained("distilbert-base-uncased") ``` When you load pretrained weights, the default model configuration is automatically loaded if the model is provided by ๐Ÿค— Transformers. However, you can still replace - some or all of - the default model configuration attributes with your own if you'd like: ```py >>> tf_model = TFDistilBertModel.from_pretrained("distilbert-base-uncased", config=my_config) ``` </tf> </frameworkcontent> ### Model heads At this point, you have a base DistilBERT model which outputs the *hidden states*. The hidden states are passed as inputs to a model head to produce the final output. 
๐Ÿค— Transformers provides a different model head for each task as long as a model supports the task (i.e., you can't use DistilBERT for a sequence-to-sequence task like translation). <frameworkcontent> <pt> For example, [`DistilBertForSequenceClassification`] is a base DistilBERT model with a sequence classification head. The sequence classification head is a linear layer on top of the pooled outputs. ```py >>> from transformers import DistilBertForSequenceClassification >>> model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased") ``` Easily reuse this checkpoint for another task by switching to a different model head. For a question answering task, you would use the [`DistilBertForQuestionAnswering`] model head. The question answering head is similar to the sequence classification head except it is a linear layer on top of the hidden states output. ```py >>> from transformers import DistilBertForQuestionAnswering >>> model = DistilBertForQuestionAnswering.from_pretrained("distilbert-base-uncased") ``` </pt> <tf> For example, [`TFDistilBertForSequenceClassification`] is a base DistilBERT model with a sequence classification head. The sequence classification head is a linear layer on top of the pooled outputs. ```py >>> from transformers import TFDistilBertForSequenceClassification >>> tf_model = TFDistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased") ``` Easily reuse this checkpoint for another task by switching to a different model head. For a question answering task, you would use the [`TFDistilBertForQuestionAnswering`] model head. The question answering head is similar to the sequence classification head except it is a linear layer on top of the hidden states output. 
```py >>> from transformers import TFDistilBertForQuestionAnswering >>> tf_model = TFDistilBertForQuestionAnswering.from_pretrained("distilbert-base-uncased") ``` </tf> </frameworkcontent> ## Tokenizer The last base class you need before using a model for textual data is a [tokenizer](main_classes/tokenizer) to convert raw text to tensors. There are two types of tokenizers you can use with ๐Ÿค— Transformers: - [`PreTrainedTokenizer`]: a Python implementation of a tokenizer. - [`PreTrainedTokenizerFast`]: a tokenizer from our Rust-based [๐Ÿค— Tokenizer](https://huggingface.co/docs/tokenizers/python/latest/) library. This tokenizer type is significantly faster - especially during batch tokenization - due to its Rust implementation. The fast tokenizer also offers additional methods like *offset mapping* which maps tokens to their original words or characters. Both tokenizers support common methods such as encoding and decoding, adding new tokens, and managing special tokens. <Tip warning={true}> Not every model supports a fast tokenizer. Take a look at this [table](index#supported-frameworks) to check if a model has fast tokenizer support. </Tip> If you trained your own tokenizer, you can create one from your *vocabulary* file: ```py >>> from transformers import DistilBertTokenizer >>> my_tokenizer = DistilBertTokenizer(vocab_file="my_vocab_file.txt", do_lower_case=False, padding_side="left") ``` It is important to remember the vocabulary from a custom tokenizer will be different from the vocabulary generated by a pretrained model's tokenizer. You need to use a pretrained model's vocabulary if you are using a pretrained model, otherwise the inputs won't make sense. 
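To see why this matters, here is a toy illustration in plain Python (not the real WordPiece algorithm): the same sentence encodes to different IDs under two different vocabularies, so a model trained against one vocabulary would misread inputs encoded with the other.

```python
# Two toy vocabularies: the IDs assigned to the same tokens differ.
pretrained_vocab = {"[UNK]": 0, "hello": 1, "world": 2}
custom_vocab = {"[UNK]": 0, "world": 1, "hello": 2}

def encode(text, vocab):
    """Map whitespace-split tokens to IDs, falling back to the unknown token."""
    return [vocab.get(tok, vocab["[UNK]"]) for tok in text.lower().split()]

print(encode("Hello world", pretrained_vocab))  # [1, 2]
print(encode("Hello world", custom_vocab))      # [2, 1]
```

The model's embedding matrix is indexed by these IDs, so feeding IDs produced with a mismatched vocabulary retrieves the wrong embeddings entirely.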
Create a tokenizer with a pretrained model's vocabulary with the [`DistilBertTokenizer`] class: ```py >>> from transformers import DistilBertTokenizer >>> slow_tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased") ``` Create a fast tokenizer with the [`DistilBertTokenizerFast`] class: ```py >>> from transformers import DistilBertTokenizerFast >>> fast_tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased") ``` <Tip> By default, [`AutoTokenizer`] will try to load a fast tokenizer. You can disable this behavior by setting `use_fast=False` in `from_pretrained`. </Tip> ## Image Processor An image processor processes vision inputs. It inherits from the base [`~image_processing_utils.ImageProcessingMixin`] class. To use, create an image processor associated with the model you're using. For example, create a default [`ViTImageProcessor`] if you are using [ViT](model_doc/vit) for image classification: ```py >>> from transformers import ViTImageProcessor >>> vit_extractor = ViTImageProcessor() >>> print(vit_extractor) ViTImageProcessor { "do_normalize": true, "do_resize": true, "image_processor_type": "ViTImageProcessor", "image_mean": [ 0.5, 0.5, 0.5 ], "image_std": [ 0.5, 0.5, 0.5 ], "resample": 2, "size": 224 } ``` <Tip> If you aren't looking for any customization, just use the `from_pretrained` method to load a model's default image processor parameters. </Tip> Modify any of the [`ViTImageProcessor`] parameters to create your custom image processor: ```py >>> from transformers import ViTImageProcessor >>> my_vit_extractor = ViTImageProcessor(resample="PIL.Image.BOX", do_normalize=False, image_mean=[0.3, 0.3, 0.3]) >>> print(my_vit_extractor) ViTImageProcessor { "do_normalize": false, "do_resize": true, "image_processor_type": "ViTImageProcessor", "image_mean": [ 0.3, 0.3, 0.3 ], "image_std": [ 0.5, 0.5, 0.5 ], "resample": "PIL.Image.BOX", "size": 224 } ``` ## Feature Extractor A feature extractor processes audio inputs. 
It inherits from the base [`~feature_extraction_utils.FeatureExtractionMixin`] class, and may also inherit from the [`SequenceFeatureExtractor`] class for processing audio inputs. To use, create a feature extractor associated with the model you're using. For example, create a default [`Wav2Vec2FeatureExtractor`] if you are using [Wav2Vec2](model_doc/wav2vec2) for audio classification: ```py >>> from transformers import Wav2Vec2FeatureExtractor >>> w2v2_extractor = Wav2Vec2FeatureExtractor() >>> print(w2v2_extractor) Wav2Vec2FeatureExtractor { "do_normalize": true, "feature_extractor_type": "Wav2Vec2FeatureExtractor", "feature_size": 1, "padding_side": "right", "padding_value": 0.0, "return_attention_mask": false, "sampling_rate": 16000 } ``` <Tip> If you aren't looking for any customization, just use the `from_pretrained` method to load a model's default feature extractor parameters. </Tip> Modify any of the [`Wav2Vec2FeatureExtractor`] parameters to create your custom feature extractor: ```py >>> from transformers import Wav2Vec2FeatureExtractor >>> w2v2_extractor = Wav2Vec2FeatureExtractor(sampling_rate=8000, do_normalize=False) >>> print(w2v2_extractor) Wav2Vec2FeatureExtractor { "do_normalize": false, "feature_extractor_type": "Wav2Vec2FeatureExtractor", "feature_size": 1, "padding_side": "right", "padding_value": 0.0, "return_attention_mask": false, "sampling_rate": 8000 } ``` ## Processor For models that support multimodal tasks, ๐Ÿค— Transformers offers a processor class that conveniently wraps processing classes such as a feature extractor and a tokenizer into a single object. For example, let's use the [`Wav2Vec2Processor`] for an automatic speech recognition task (ASR). ASR transcribes audio to text, so you will need a feature extractor and a tokenizer. 
Create a feature extractor to handle the audio inputs:

```py
>>> from transformers import Wav2Vec2FeatureExtractor

>>> feature_extractor = Wav2Vec2FeatureExtractor(padding_value=1.0, do_normalize=True)
```

Create a tokenizer to handle the text inputs:

```py
>>> from transformers import Wav2Vec2CTCTokenizer

>>> tokenizer = Wav2Vec2CTCTokenizer(vocab_file="my_vocab_file.txt")
```

Combine the feature extractor and tokenizer in [`Wav2Vec2Processor`]:

```py
>>> from transformers import Wav2Vec2Processor

>>> processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)
```

With two basic classes - configuration and model - and an additional preprocessing class (tokenizer, image processor, feature extractor, or processor), you can create any of the models supported by 🤗 Transformers. Each of these base classes is configurable, allowing you to use the specific attributes you want. You can easily set up a model for training or modify an existing pretrained model to fine-tune.
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Optimizing LLMs for Speed and Memory

[[open-in-colab]]

Large Language Models (LLMs) such as GPT3/4, [Falcon](https://huggingface.co/tiiuae/falcon-40b), and [Llama](https://huggingface.co/meta-llama/Llama-2-70b-hf) are rapidly advancing in their ability to tackle human-centric tasks, establishing themselves as essential tools in modern knowledge-based industries.
Deploying these models in real-world tasks remains challenging, however:

- To exhibit near-human text understanding and generation capabilities, LLMs currently need to be composed of billions of parameters (see [Kaplan et al](https://arxiv.org/abs/2001.08361), [Wei et. al](https://arxiv.org/abs/2206.07682)). This consequently amplifies the memory demands for inference.
- In many real-world tasks, LLMs need to be given extensive contextual information. This necessitates the model's capability to manage very long input sequences during inference.

The crux of these challenges lies in augmenting the computational and memory capabilities of LLMs, especially when handling expansive input sequences.

In this guide, we will go over the effective techniques for efficient LLM deployment:

1.
**Lower Precision:** Research has shown that operating at reduced numerical precision, namely [8-bit and 4-bit](./main_classes/quantization.md), can achieve computational advantages without a considerable decline in model performance.

2. **Flash Attention:** Flash Attention is a variation of the attention algorithm that not only provides a more memory-efficient approach but also realizes increased efficiency due to optimized GPU memory utilization.

3. **Architectural Innovations:** Considering that LLMs are always deployed in the same way during inference, namely autoregressive text generation with a long input context, specialized model architectures have been proposed that allow for more efficient inference. The most important advancements in model architectures here are [Alibi](https://arxiv.org/abs/2108.12409), [Rotary embeddings](https://arxiv.org/abs/2104.09864), [Multi-Query Attention (MQA)](https://arxiv.org/abs/1911.02150) and [Grouped-Query-Attention (GQA)](https://arxiv.org/abs/2305.13245).

Throughout this guide, we will offer an analysis of auto-regressive generation from a tensor's perspective. We delve into the pros and cons of adopting lower precision, provide a comprehensive exploration of the latest attention algorithms, and discuss improved LLM architectures. While doing so, we run practical examples showcasing each of the feature improvements.

## 1. Lower Precision

Memory requirements of LLMs can be best understood by seeing the LLM as a set of weight matrices and vectors and the text inputs as a sequence of vectors. In the following, the definition *weights* will be used to signify all model weight matrices and vectors.

At the time of writing this guide, LLMs consist of at least a couple billion parameters. Each parameter is thereby made up of a decimal number, e.g.
`4.5689` which is usually stored in either [float32](https://en.wikipedia.org/wiki/Single-precision_floating-point_format), [bfloat16](https://en.wikipedia.org/wiki/Bfloat16_floating-point_format), or [float16](https://en.wikipedia.org/wiki/Half-precision_floating-point_format) format. This allows us to easily compute the memory requirement to load the LLM into memory:

> *Loading the weights of a model having X billion parameters requires roughly 4 * X GB of VRAM in float32 precision*

Nowadays, models are however rarely trained in full float32 precision, but usually in bfloat16 precision or less frequently in float16 precision. Therefore the rule of thumb becomes:

> *Loading the weights of a model having X billion parameters requires roughly 2 * X GB of VRAM in bfloat16/float16 precision*

For shorter text inputs (less than 1024 tokens), the memory requirement for inference is very much dominated by the memory requirement to load the weights. Therefore, for now, let's assume that the memory requirement for inference is equal to the memory requirement to load the model into the GPU VRAM.

To give some examples of how much VRAM it roughly takes to load a model in bfloat16:

- **GPT3** requires 2 \* 175 GB = **350 GB** VRAM
- [**Bloom**](https://huggingface.co/bigscience/bloom) requires 2 \* 176 GB = **352 GB** VRAM
- [**Llama-2-70b**](https://huggingface.co/meta-llama/Llama-2-70b-hf) requires 2 \* 70 GB = **140 GB** VRAM
- [**Falcon-40b**](https://huggingface.co/tiiuae/falcon-40b) requires 2 \* 40 GB = **80 GB** VRAM
- [**MPT-30b**](https://huggingface.co/mosaicml/mpt-30b) requires 2 \* 30 GB = **60 GB** VRAM
- [**bigcode/starcoder**](https://huggingface.co/bigcode/starcoder) requires 2 \* 15.5 GB = **31 GB** VRAM

As of writing this document, the largest GPU chips on the market are the A100 and H100, each offering 80GB of VRAM.
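As a quick sanity check, the rule of thumb above can be turned into a tiny helper (a hypothetical function for illustration, not part of any library):

```python
def weight_memory_gb(n_params_billion, bytes_per_param=2):
    """Rough VRAM (in GB) needed just to load the weights.

    bfloat16/float16 use 2 bytes per parameter, float32 uses 4.
    """
    return n_params_billion * bytes_per_param

print(weight_memory_gb(70))      # Llama-2-70b in bfloat16 -> 140
print(weight_memory_gb(175, 4))  # GPT3 in float32 -> 700
```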
Most of the models listed before require more than 80GB just to be loaded and therefore necessarily require [tensor parallelism](https://huggingface.co/docs/transformers/perf_train_gpu_many#tensor-parallelism) and/or [pipeline parallelism](https://huggingface.co/docs/transformers/perf_train_gpu_many#naive-model-parallelism-vertical-and-pipeline-parallelism).

🤗 Transformers does not support tensor parallelism out of the box as it requires the model architecture to be written in a specific way. If you're interested in writing models in a tensor-parallelism-friendly way, feel free to have a look at [the text-generation-inference library](https://github.com/huggingface/text-generation-inference/tree/main/server/text_generation_server/models/custom_modeling).

Naive pipeline parallelism is supported out of the box. For this, simply load the model with `device_map="auto"` which will automatically place the different layers on the available GPUs as explained [here](https://huggingface.co/docs/accelerate/v0.22.0/en/concept_guides/big_model_inference).
Note, however, that while very effective, this naive pipeline parallelism does not tackle the issues of GPU idling. For this, more advanced pipeline parallelism is required as explained [here](https://huggingface.co/docs/transformers/en/perf_train_gpu_many#naive-model-parallelism-vertical-and-pipeline-parallelism).

If you have access to an 8 x 80GB A100 node, you could load BLOOM as follows:

```bash
!pip install transformers accelerate bitsandbytes optimum
```

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("bigscience/bloom", device_map="auto", pad_token_id=0)
```

By using `device_map="auto"` the attention layers would be equally distributed over all available GPUs.

In this guide, we will use [bigcode/octocoder](https://huggingface.co/bigcode/octocoder) as it can be run on a single 40 GB A100 GPU.
Note that all memory and speed optimizations that we will apply going forward are equally applicable to models that require model or tensor parallelism.

Since the model is loaded in bfloat16 precision, using our rule of thumb above, we would expect the memory requirement to run inference with `bigcode/octocoder` to be around 31 GB VRAM. Let's give it a try.

We first load the model and tokenizer and then pass both to Transformers' [pipeline](https://huggingface.co/docs/transformers/main_classes/pipelines) object.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import torch

model = AutoModelForCausalLM.from_pretrained("bigcode/octocoder", torch_dtype=torch.bfloat16, device_map="auto", pad_token_id=0)
tokenizer = AutoTokenizer.from_pretrained("bigcode/octocoder")

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
```

```python
prompt = "Question: Please write a function in Python that transforms bytes to Giga bytes.\n\nAnswer:"

result = pipe(prompt, max_new_tokens=60)[0]["generated_text"][len(prompt):]
result
```

**Output**:
```
Here is a Python function that transforms bytes to Giga bytes:\n\n```python\ndef bytes_to_giga_bytes(bytes):\n    return bytes / 1024 / 1024 / 1024\n```\n\nThis function takes a single
```

Nice, we can now directly use the result to convert bytes into Gigabytes.

```python
def bytes_to_giga_bytes(bytes):
  return bytes / 1024 / 1024 / 1024
```

Let's call [`torch.cuda.max_memory_allocated`](https://pytorch.org/docs/stable/generated/torch.cuda.max_memory_allocated.html) to measure the peak GPU memory allocation.

```python
bytes_to_giga_bytes(torch.cuda.max_memory_allocated())
```

**Output**:

```bash
29.0260648727417
```

Close enough to our back-of-the-envelope computation! We can see the number is not exactly correct as going from bytes to kilobytes requires a multiplication of 1024 instead of 1000. Therefore the back-of-the-envelope formula can also be understood as an "at most X GB" computation.
Note that if we had tried to run the model in full float32 precision, a whopping 64 GB of VRAM would have been required.

> Almost all models are trained in bfloat16 nowadays, there is no reason to run the model in full float32 precision if [your GPU supports bfloat16](https://discuss.pytorch.org/t/bfloat16-native-support/117155/5). Float32 won't give better inference results than the precision that was used to train the model.

If you are unsure in which format the model weights are stored on the Hub, you can always look into the checkpoint's config under `"torch_dtype"`, *e.g.* [here](https://huggingface.co/meta-llama/Llama-2-7b-hf/blob/6fdf2e60f86ff2481f2241aaee459f85b5b0bbb9/config.json#L21). It is recommended to set the model to the same precision type as written in the config when loading with `from_pretrained(..., torch_dtype=...)` except when the original type is float32, in which case one can use either `float16` or `bfloat16` for inference.

Let's define a `flush(...)` function to free all allocated memory so that we can accurately measure the peak allocated GPU memory.

```python
del pipe
del model

import gc
import torch

def flush():
  gc.collect()
  torch.cuda.empty_cache()
  torch.cuda.reset_peak_memory_stats()
```

Let's call it now for the next experiment.

```python
flush()
```

In the recent version of the accelerate library, you can also use a utility method called `release_memory()`

```python
from accelerate.utils import release_memory
# ...

release_memory(model)
```

Now what if your GPU does not have 32 GB of VRAM? It has been found that model weights can be quantized to 8-bit or 4-bit without a significant loss in performance (see [Dettmers et al.](https://arxiv.org/abs/2208.07339)). Models can be quantized to even 3 or 2 bits with an acceptable loss in performance as shown in the recent [GPTQ paper](https://arxiv.org/abs/2210.17323) 🤯.
Without going into too many details, quantization schemes aim at reducing the precision of weights while trying to keep the model's inference results as accurate as possible (*a.k.a* as close as possible to bfloat16).
Note that quantization works especially well for text generation since all we care about is choosing the *set of most likely next tokens* and don't really care about the exact values of the next token *logit* distribution.
All that matters is that the next token *logit* distribution stays roughly the same so that an `argmax` or `topk` operation gives the same results.

There are various quantization techniques, which we won't discuss in detail here, but in general, all quantization techniques work as follows:

- 1. Quantize all weights to the target precision
- 2. Load the quantized weights, and pass the input sequence of vectors in bfloat16 precision
- 3. Dynamically dequantize weights to bfloat16 to perform the computation with their input vectors in bfloat16 precision

In a nutshell, this means that *inputs-weight matrix* multiplications, with \\( X \\) being the *inputs*, \\( W \\) being a weight matrix and \\( Y \\) being the output:

$$ Y = X * W $$

are changed to

$$ Y = X * \text{dequantize}(W) $$

for every matrix multiplication. Dequantization and re-quantization are performed sequentially for all weight matrices as the inputs run through the network graph.

Therefore, inference time is often **not** reduced when using quantized weights, but rather increases.
Enough theory, let's give it a try! To quantize the weights with Transformers, you need to make sure that the [`bitsandbytes`](https://github.com/TimDettmers/bitsandbytes) library is installed.

```bash
!pip install bitsandbytes
```

We can then load models in 8-bit quantization by simply adding a `load_in_8bit=True` flag to `from_pretrained`.
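To make the three steps above concrete, here is a minimal *absmax* int8 sketch in NumPy (the helper names are hypothetical, and this greatly simplifies what libraries like bitsandbytes actually do):

```python
import numpy as np

def quantize_absmax(w):
    # Step 1: map float weights to int8 with a single per-tensor scale
    scale = np.abs(w).max() / 127.0
    return np.round(w / scale).astype(np.int8), scale

def dequantize(w_q, scale):
    # Step 3: recover approximate float weights right before the matmul
    return w_q.astype(np.float32) * scale

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8)).astype(np.float32)  # a toy weight matrix
X = rng.normal(size=(2, 8)).astype(np.float32)  # inputs stay in float (step 2)

W_q, scale = quantize_absmax(W)
Y = X @ dequantize(W_q, scale)  # Y = X * dequantize(W)
```

Only the int8 `W_q` and a single float scale need to be stored, which is where the memory savings come from; the extra `dequantize` call per matmul is also why quantized inference is typically not faster.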
```python
model = AutoModelForCausalLM.from_pretrained("bigcode/octocoder", load_in_8bit=True, pad_token_id=0)
```

Now, let's run our example again and measure the memory usage.

```python
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

result = pipe(prompt, max_new_tokens=60)[0]["generated_text"][len(prompt):]
result
```

**Output**:
```
Here is a Python function that transforms bytes to Giga bytes:\n\n```python\ndef bytes_to_giga_bytes(bytes):\n    return bytes / 1024 / 1024 / 1024\n```\n\nThis function takes a single
```

Nice, we're getting the same result as before, so no loss in accuracy! Let's look at how much memory was used this time.

```python
bytes_to_giga_bytes(torch.cuda.max_memory_allocated())
```

**Output**:

```
15.219234466552734
```

Significantly less! We're down to just a bit over 15 GB and could therefore run this model on consumer GPUs like the 4090.

We're seeing a very nice gain in memory efficiency and more or less no degradation to the model's output. However, we can also notice a slight slow-down during inference.

We delete the models and flush the memory again.

```python
del model
del pipe
```

```python
flush()
```

Let's see what peak GPU memory consumption 4-bit quantization gives. Quantizing the model to 4-bit can be done with the same API as before - this time by passing `load_in_4bit=True` instead of `load_in_8bit=True`.

```python
model = AutoModelForCausalLM.from_pretrained("bigcode/octocoder", load_in_4bit=True, low_cpu_mem_usage=True, pad_token_id=0)

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

result = pipe(prompt, max_new_tokens=60)[0]["generated_text"][len(prompt):]
result
```

**Output**:
```
Here is a Python function that transforms bytes to Giga bytes:\n\n```\ndef bytes_to_gigabytes(bytes):\n    return bytes / 1024 / 1024 / 1024\n```\n\nThis function takes a single argument
```

We're almost seeing the same output text as before - just the `python` is missing just before the code snippet.
Let's see how much memory was required.

```python
bytes_to_giga_bytes(torch.cuda.max_memory_allocated())
```

**Output**:

```
9.543574333190918
```

Just 9.5GB! That's really not a lot for a >15 billion parameter model.

While we see very little degradation in accuracy for our model here, 4-bit quantization can in practice often lead to different results compared to 8-bit quantization or full `bfloat16` inference. It is up to the user to try it out.

Also note that inference here was again a bit slower compared to 8-bit quantization which is due to the more aggressive quantization method used for 4-bit quantization leading to \\( \text{quantize} \\) and \\( \text{dequantize} \\) taking longer during inference.

```python
del model
del pipe
```

```python
flush()
```

Overall, we saw that running OctoCoder in 8-bit precision reduced the required GPU VRAM from 32 GB to only 15 GB, and running the model in 4-bit precision further reduces the required GPU VRAM to just a bit over 9 GB.

4-bit quantization allows the model to be run on GPUs such as RTX3090, V100, and T4 which are quite accessible for most people.

For more information on quantization and to see how one can quantize models to require even less GPU VRAM memory than 4-bit, we recommend looking into the [`AutoGPTQ`](https://huggingface.co/docs/transformers/main/en/main_classes/quantization#autogptq-integration%60) implementation.

> As a conclusion, it is important to remember that model quantization trades improved memory efficiency against accuracy and in some cases inference time.

If GPU memory is not a constraint for your use case, there is often no need to look into quantization. However many GPUs simply can't run LLMs without quantization methods and in this case, 4-bit and 8-bit quantization schemes are extremely useful tools.
For more in-detail usage information, we strongly recommend taking a look at the [Transformers Quantization Docs](https://huggingface.co/docs/transformers/main_classes/quantization#general-usage).

Next, let's look into how we can improve computational and memory efficiency by using better algorithms and an improved model architecture.

## 2. Flash Attention

Today's top-performing LLMs share more or less the same fundamental architecture that consists of feed-forward layers, activation layers, layer normalization layers, and most crucially, self-attention layers.

Self-attention layers are central to Large Language Models (LLMs) in that they enable the model to understand the contextual relationships between input tokens.
However, the peak GPU memory consumption for self-attention layers grows *quadratically* both in compute and memory complexity with the number of input tokens (also called *sequence length*) that we denote in the following by \\( N \\) .
While this is not really noticeable for shorter input sequences (of up to 1000 input tokens), it becomes a serious problem for longer input sequences (at around 16000 input tokens).

Let's take a closer look. The formula to compute the output \\( \mathbf{O} \\) of a self-attention layer for an input \\( \mathbf{X} \\) of length \\( N \\) is:

$$ \textbf{O} = \text{Attn}(\mathbf{X}) = \mathbf{V} \times \text{Softmax}(\mathbf{QK}^T) \text{ with } \mathbf{Q} = \mathbf{W}_q \mathbf{X}, \mathbf{V} = \mathbf{W}_v \mathbf{X}, \mathbf{K} = \mathbf{W}_k \mathbf{X} $$

\\( \mathbf{X} = (\mathbf{x}_1, ... \mathbf{x}_{N}) \\) is thereby the input sequence to the attention layer. The projections \\( \mathbf{Q} \\) and \\( \mathbf{K} \\) will each consist of \\( N \\) vectors resulting in the \\( \mathbf{QK}^T \\) being of size \\( N^2 \\) .

LLMs usually have multiple attention heads, thus doing multiple self-attention computations in parallel.
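A naive single-head version of this formula can be sketched as follows (using a row-vector convention, so `X @ Wq` plays the role of \\( \mathbf{W}_q \mathbf{X} \\)); the point to notice is that the score matrix `Q @ K.T` has shape `(N, N)`:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Naive single-head attention: materializes the full N x N score matrix
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = softmax(Q @ K.T)  # shape (N, N) -> quadratic in sequence length
    return scores @ V, scores

rng = np.random.default_rng(0)
N, d = 6, 4  # toy sizes; real LLMs use N in the thousands
X = rng.normal(size=(N, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
O, scores = self_attention(X, Wq, Wk, Wv)
```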
Assuming the LLM has 40 attention heads and runs in bfloat16 precision, we can calculate the memory requirement to store the \\( \mathbf{QK^T} \\) matrices to be \\( 40 * 2 * N^2 \\) bytes. For \\( N=1000 \\) only around 50 MB of VRAM are needed, however, for \\( N=16000 \\) we would need 19 GB of VRAM, and for \\( N=100,000 \\) we would need almost 1TB just to store the \\( \mathbf{QK}^T \\) matrices.

Long story short, the default self-attention algorithm quickly becomes prohibitively memory-expensive for large input contexts.

As LLMs improve in text comprehension and generation, they are applied to increasingly complex tasks. While models once handled the translation or summarization of a few sentences, they now manage entire pages, demanding the capability to process extensive input lengths.

How can we get rid of the exorbitant memory requirements for large input lengths? We need a new way to compute the self-attention mechanism that gets rid of the \\( QK^T \\) matrix. [Tri Dao et al.](https://arxiv.org/abs/2205.14135) developed exactly such a new algorithm and called it **Flash Attention**.

In a nutshell, Flash Attention breaks the \\( \mathbf{V} \times \text{Softmax}(\mathbf{QK}^T) \\) computation apart and instead computes smaller chunks of the output by iterating over multiple softmax computation steps:

$$ \textbf{O}_i \leftarrow s^a_{ij} * \textbf{O}_i + s^b_{ij} * \mathbf{V}_{j} \times \text{Softmax}(\mathbf{QK}^T_{i,j}) \text{ for multiple } i, j \text{ iterations} $$

with \\( s^a_{ij} \\) and \\( s^b_{ij} \\) being some softmax normalization statistics that need to be recomputed for every \\( i \\) and \\( j \\) .

Please note that the whole Flash Attention is a bit more complex and is greatly simplified here as going in too much depth is out of scope for this guide. The reader is invited to take a look at the well-written [Flash Attention paper](https://arxiv.org/abs/2205.14135) for more details.
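These back-of-the-envelope numbers are easy to reproduce (here in GiB, i.e. dividing by 1024³):

```python
def qk_memory_gib(n_tokens, n_heads=40, bytes_per_element=2):
    # One N x N score matrix per head, 2 bytes per bfloat16 element
    return n_heads * bytes_per_element * n_tokens**2 / 1024**3

print(qk_memory_gib(16_000))   # ~19 GiB
print(qk_memory_gib(100_000))  # ~745 GiB, i.e. almost 1 TB
```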
The main takeaway here is:

> By keeping track of softmax normalization statistics and by using some smart mathematics, Flash Attention gives **numerically identical** outputs compared to the default self-attention layer at a memory cost that only increases linearly with \\( N \\) .

Looking at the formula, one would intuitively say that Flash Attention must be much slower compared to the default self-attention formula as more computation needs to be done. Indeed Flash Attention requires more FLOPs compared to normal attention as the softmax normalization statistics have to constantly be recomputed (see [paper](https://arxiv.org/abs/2205.14135) for more details if interested).

> However, Flash Attention is much faster in inference compared to default attention which comes from its ability to significantly reduce the demands on the slower, high-bandwidth memory of the GPU (VRAM), focusing instead on the faster on-chip memory (SRAM).

Essentially, Flash Attention makes sure that all intermediate write and read operations can be done using the fast *on-chip* SRAM memory instead of having to access the slower VRAM memory to compute the output vector \\( \mathbf{O} \\) .

In practice, there is currently absolutely no reason to **not** use Flash Attention if available. The algorithm gives mathematically the same outputs, and is both faster and more memory-efficient.

Let's look at a practical example.

Our OctoCoder model now gets a significantly longer input prompt which includes a so-called *system prompt*. System prompts are used to steer the LLM into a better assistant that is tailored to the users' task.
In the following, we use a system prompt that will make OctoCoder a better coding assistant.

```python
system_prompt = """Below are a series of dialogues between various people and an AI technical assistant.
The assistant tries to be helpful, polite, honest, sophisticated, emotionally aware, and humble but knowledgeable.
The assistant is happy to help with code questions and will do their best to understand exactly what is needed. It also tries to avoid giving false or misleading information, and it caveats when it isn't entirely sure about the right answer. That said, the assistant is practical really does its best, and doesn't let caution get too much in the way of being useful. The Starcoder models are a series of 15.5B parameter models trained on 80+ programming languages from The Stack (v1.2) (excluding opt-out requests). The model uses Multi Query Attention, was trained using the Fill-in-the-Middle objective, and with 8,192 tokens context window for a trillion tokens of heavily deduplicated data. ----- Question: Write a function that takes two lists and returns a list that has alternating elements from each input list. Answer: Sure. Here is a function that does that. def alternating(list1, list2): results = [] for i in range(len(list1)): results.append(list1[i]) results.append(list2[i]) return results Question: Can you write some test cases for this function? Answer: Sure, here are some tests. assert alternating([10, 20, 30], [1, 2, 3]) == [10, 1, 20, 2, 30, 3] assert alternating([True, False], [4, 5]) == [True, 4, False, 5] assert alternating([], []) == [] Question: Modify the function so that it returns all input elements when the lists have uneven length. The elements from the longer list should be at the end. Answer: Here is the modified function. def alternating(list1, list2): results = [] for i in range(min(len(list1), len(list2))): results.append(list1[i]) results.append(list2[i]) if len(list1) > len(list2): results.extend(list1[i+1:]) else: results.extend(list2[i+1:]) return results ----- """ ``` For demonstration purposes, we duplicate the system prompt by ten so that the input length is long enough to observe Flash Attention's memory savings. 
We append the original text prompt `"Question: Please write a function in Python that transforms bytes to Giga bytes.\n\nAnswer: Here"`

```python
long_prompt = 10 * system_prompt + prompt
```

We instantiate our model again in bfloat16 precision.

```python
model = AutoModelForCausalLM.from_pretrained("bigcode/octocoder", torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("bigcode/octocoder")

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
```

Let's now run the model just like before *without Flash Attention* and measure the peak GPU memory requirement and inference time.

```python
import time

start_time = time.time()
result = pipe(long_prompt, max_new_tokens=60)[0]["generated_text"][len(long_prompt):]

print(f"Generated in {time.time() - start_time} seconds.")
result
```

**Output**:
```
Generated in 10.96854019165039 seconds.
Sure. Here is a function that does that.\n\ndef bytes_to_giga(bytes):\n   return bytes / 1024 / 1024 / 1024\n\nAnswer: Sure. Here is a function that does that.\n\ndef
```

We're getting the same output as before, however this time, the model repeats the answer multiple times until the 60-token cut-off is reached. This is not surprising as we've repeated the system prompt ten times for demonstration purposes and thus cued the model to repeat itself.

**Note** that the system prompt should not be repeated ten times in real-world applications - one time is enough!

Let's measure the peak GPU memory requirement.

```python
bytes_to_giga_bytes(torch.cuda.max_memory_allocated())
```

**Output**:

```bash
37.668193340301514
```

As we can see the peak GPU memory requirement is now significantly higher than in the beginning, which is largely due to the longer input sequence. The generation also takes almost eleven seconds now.

We call `flush()` to free GPU memory for our next experiment.

```python
flush()
```

For comparison, let's run the same function, but enable Flash Attention instead.
To do so, we convert the model to [BetterTransformer](https://huggingface.co/docs/optimum/bettertransformer/overview), thereby enabling PyTorch's [SDPA self-attention](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention) which in turn is based on Flash Attention.

```python
model.to_bettertransformer()
```

Now we run the exact same code snippet as before and under the hood Transformers will make use of Flash Attention.

```py
start_time = time.time()
with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False):
    result = pipe(long_prompt, max_new_tokens=60)[0]["generated_text"][len(long_prompt):]

print(f"Generated in {time.time() - start_time} seconds.")
result
```

**Output**:
```
Generated in 3.0211617946624756 seconds.
Sure. Here is a function that does that.\n\ndef bytes_to_giga(bytes):\n   return bytes / 1024 / 1024 / 1024\n\nAnswer: Sure. Here is a function that does that.\n\ndef
```

We're getting the exact same result as before, but can observe a very significant speed-up thanks to Flash Attention.

Let's measure the memory consumption one last time.

```python
bytes_to_giga_bytes(torch.cuda.max_memory_allocated())
```

**Output**:

```
32.617331981658936
```

And we're almost back to our original 29GB peak GPU memory from the beginning.

We can observe that we only use roughly 3.6 GB more GPU memory when passing a very long input sequence with Flash Attention compared to passing a short input sequence as done in the beginning.

```py
flush()
```

For more information on how to use Flash Attention, please have a look at [this doc page](https://huggingface.co/docs/transformers/en/perf_infer_gpu_one#flashattention-2).

## 3.
Architectural Innovations

So far we have looked into improving computational and memory efficiency by:

- Casting the weights to a lower precision format
- Replacing the self-attention algorithm with a more memory- and compute-efficient version

Let's now look into how we can change the architecture of an LLM so that it is most effective and efficient for tasks that require long text inputs, *e.g.*:

- Retrieval-augmented Question Answering,
- Summarization,
- Chat

Note that *chat* not only requires the LLM to handle long text inputs, but it also necessitates that the LLM is able to efficiently handle the back-and-forth dialogue between user and assistant (such as ChatGPT).

Once trained, the fundamental LLM architecture is difficult to change, so it is important to make considerations about the LLM's tasks beforehand and accordingly optimize the model's architecture.

There are two important components of the model architecture that quickly become memory and/or performance bottlenecks for large input sequences.

- The positional embeddings
- The key-value cache

Let's go over each component in more detail.

### 3.1 Improving positional embeddings of LLMs

Self-attention puts each token in relation to all other tokens.
As an example, the \\( \text{Softmax}(\mathbf{QK}^T) \\) matrix of the text input sequence *"Hello", "I", "love", "you"* could look as follows:

![](/blog/assets/163_optimize_llm/self_attn_tokens.png)

Each word token is given a probability mass at which it attends to all other word tokens and, therefore, is put into relation with all other word tokens. E.g. the word *"love"* attends to the word *"Hello"* with 5%, to *"I"* with 30%, and to itself with 65%.

An LLM based on self-attention, but without position embeddings, would have great difficulties in understanding the positions of the text inputs relative to each other.
This is because the probability score computed by \\( \mathbf{QK}^T \\) relates each word token to each other word token in \\( O(1) \\) computations regardless of their relative positional distance to each other. Therefore, for the LLM without position embeddings each token appears to have the same distance to all other tokens, *e.g.* differentiating between *"Hello I love you"* and *"You love I hello"* would be very challenging.

For the LLM to understand sentence order, an additional *cue* is needed and is usually applied in the form of *positional encodings* (also called *positional embeddings*). Positional encodings encode the position of each token into a numerical representation that the LLM can leverage to better understand sentence order.

The authors of the [*Attention Is All You Need*](https://arxiv.org/abs/1706.03762) paper introduced sinusoidal positional embeddings \\( \mathbf{P} = \mathbf{p}_1, \ldots, \mathbf{p}_N \\), where each vector \\( \mathbf{p}_i \\) is computed as a sinusoidal function of its position \\( i \\). The positional encodings are then simply added to the input sequence vectors \\( \mathbf{\hat{X}} = \mathbf{\hat{x}}_1, \ldots, \mathbf{\hat{x}}_N \\) = \\( \mathbf{x}_1 + \mathbf{p}_1, \ldots, \mathbf{x}_N + \mathbf{p}_N \\), thereby cueing the model to better learn sentence order.

Instead of using fixed position embeddings, others (such as [Devlin et al.](https://arxiv.org/abs/1810.04805)) used learned positional encodings for which the positional embeddings \\( \mathbf{P} \\) are learned during training.

Sinusoidal and learned position embeddings used to be the predominant methods to encode sentence order into LLMs, but a couple of problems related to these positional encodings were found:

1. Sinusoidal and learned position embeddings are both absolute positional embeddings, *i.e.* encoding a unique embedding for each position id: \\( 0, \ldots, N \\).
As shown by [Huang et al.](https://arxiv.org/abs/2009.13658) and [Su et al.](https://arxiv.org/abs/2104.09864), absolute positional embeddings lead to poor LLM performance for long text inputs. For long text inputs, it is advantageous if the model learns the relative positional distance input tokens have to each other instead of their absolute position.

2. When using learned position embeddings, the LLM has to be trained on a fixed input length \\( N \\), which makes it difficult to extrapolate to an input length longer than what it was trained on.

Recently, relative positional embeddings that can tackle the above-mentioned problems have become more popular, most notably:

- [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864)
- [ALiBi](https://arxiv.org/abs/2108.12409)

Both *RoPE* and *ALiBi* argue that it's best to cue the LLM about sentence order directly in the self-attention algorithm, as it's there that word tokens are put into relation with each other. More specifically, sentence order should be cued by modifying the \\( \mathbf{QK}^T \\) computation.

Without going into too many details, *RoPE* notes that positional information can be encoded into query-key pairs, *e.g.* \\( \mathbf{q}_i \\) and \\( \mathbf{x}_j \\), by rotating each vector by an angle \\( \theta * i \\) and \\( \theta * j \\) respectively, with \\( i, j \\) describing each vector's sentence position:

$$ \mathbf{\hat{q}}_i^T \mathbf{\hat{x}}_j = \mathbf{{q}}_i^T \mathbf{R}_{\theta, i -j} \mathbf{{x}}_j. $$

\\( \mathbf{R}_{\theta, i - j} \\) thereby represents a rotational matrix. \\( \theta \\) is *not* learned during training, but instead set to a pre-defined value that depends on the maximum input sequence length during training.
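To build intuition for why rotating the vectors makes the score depend only on relative distance, here is a minimal 2-D sketch. It is a toy illustration, not the actual RoPE implementation (which rotates pairs of dimensions of the full query-key vectors with position-dependent frequencies):

```python
import math

def rotate(x, y, angle):
    # apply a 2-D rotation matrix to the vector (x, y)
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y)

def rope_score(q, k, theta, i, j):
    # rotate the query by theta * i and the key by theta * j, then take the dot product
    qi = rotate(q[0], q[1], theta * i)
    kj = rotate(k[0], k[1], theta * j)
    return qi[0] * kj[0] + qi[1] * kj[1]

theta = 0.1
q, k = (0.3, -1.2), (0.7, 0.5)

# positions (5, 2) and (9, 6) share the same relative distance i - j = 3,
# so the scores agree up to floating point error
score_a = rope_score(q, k, theta, 5, 2)
score_b = rope_score(q, k, theta, 9, 6)
assert abs(score_a - score_b) < 1e-12
```

Shifting both positions by the same offset leaves the score untouched, which is exactly the relative-position property that is cued to the model.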
> By doing so, the probability score between \\( \mathbf{q}_i \\) and \\( \mathbf{q}_j \\) is only affected if \\( i \ne j \\) and solely depends on the relative distance \\( i - j \\) regardless of each vector's specific positions \\( i \\) and \\( j \\).

*RoPE* is used in several of today's most important LLMs, such as:

- [**Falcon**](https://huggingface.co/tiiuae/falcon-40b)
- [**Llama**](https://arxiv.org/abs/2302.13971)
- [**PaLM**](https://arxiv.org/abs/2204.02311)

As an alternative, *ALiBi* proposes a much simpler relative position encoding scheme. The relative distance that input tokens have to each other is added as a negative integer scaled by a pre-defined value `m` to each query-key entry of the \\( \mathbf{QK}^T \\) matrix right before the softmax computation.

![](/blog/assets/163_optimize_llm/alibi.png)

As shown in the [ALiBi](https://arxiv.org/abs/2108.12409) paper, this simple relative positional encoding allows the model to retain a high performance even at very long text input sequences.

*ALiBi* is used in several of today's most important LLMs, such as:

- [**MPT**](https://huggingface.co/mosaicml/mpt-30b)
- [**BLOOM**](https://huggingface.co/bigscience/bloom)

Both *RoPE* and *ALiBi* position encodings can extrapolate to input lengths not seen during training, though extrapolation has been shown to work much better out-of-the-box for *ALiBi* than for *RoPE*. For ALiBi, one simply increases the values of the lower triangular position matrix to match the length of the input sequence. For *RoPE*, keeping the same \\( \theta \\) that was used during training leads to poor results when passing text inputs much longer than those seen during training, *c.f.* [Press et al.](https://arxiv.org/abs/2108.12409).
However, the community has found a couple of effective tricks that adapt \\( \theta \\), thereby allowing *RoPE* position embeddings to work well for extrapolated text input sequences (see [here](https://github.com/huggingface/transformers/pull/24653)).

> Both RoPE and ALiBi are relative positional embeddings that are *not* learned during training, but instead are based on the following intuitions:
- Positional cues about the text inputs should be given directly to the \\( QK^T \\) matrix of the self-attention layer
- The LLM should be incentivized to learn positional encodings based on the constant *relative* distance tokens have to each other
- The further text input tokens are from each other, the lower their query-key attention probability should be. Both RoPE and ALiBi lower the query-key probability of tokens far away from each other: RoPE by increasing the angle between the query-key vectors, which decreases their vector product; ALiBi by adding large negative numbers to the vector product

In conclusion, LLMs that are intended to be deployed in tasks that require handling large text inputs are better trained with relative positional embeddings, such as RoPE and ALiBi. Also note that even if an LLM with RoPE or ALiBi has been trained only on a fixed length of, say, \\( N_1 = 2048 \\), it can still be used in practice with text inputs much larger than \\( N_1 \\), like \\( N_2 = 8192 > N_1 \\), by extrapolating the positional embeddings.

### 3.2 The key-value cache

Auto-regressive text generation with LLMs works by iteratively putting in an input sequence, sampling the next token, appending the next token to the input sequence, and continuing to do so until the LLM produces a token that signifies that the generation has finished.

Please have a look at [Transformer's Generate Text Tutorial](https://huggingface.co/docs/transformers/llm_tutorial#generate-text) to get a more visual explanation of how auto-regressive generation works.
Let's run a quick code snippet to show how auto-regressive generation works in practice. We will simply take the most likely next token via `torch.argmax`.

```python
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to("cuda")

for _ in range(5):
    next_logits = model(input_ids)["logits"][:, -1:]
    next_token_id = torch.argmax(next_logits, dim=-1)

    input_ids = torch.cat([input_ids, next_token_id], dim=-1)
    print("shape of input_ids", input_ids.shape)

generated_text = tokenizer.batch_decode(input_ids[:, -5:])
generated_text
```

**Output**:
```
shape of input_ids torch.Size([1, 21])
shape of input_ids torch.Size([1, 22])
shape of input_ids torch.Size([1, 23])
shape of input_ids torch.Size([1, 24])
shape of input_ids torch.Size([1, 25])
[' Here is a Python function']
```

As we can see, the text input grows by the just-sampled token at every step.

With very few exceptions, LLMs are trained using the [causal language modeling objective](https://huggingface.co/docs/transformers/tasks/language_modeling#causal-language-modeling) and therefore mask the upper triangle matrix of the attention score - this is why in the two diagrams above the attention scores are left blank (*a.k.a* have 0 probability). For a quick recap on causal language modeling you can refer to the [*Illustrated Self Attention blog*](https://jalammar.github.io/illustrated-gpt2/#part-2-illustrated-self-attention).

As a consequence, tokens *never* depend on subsequent tokens, more specifically the \\( \mathbf{q}_i \\) vector is never put in relation with any key-value vectors \\( \mathbf{k}_j, \mathbf{v}_j \\) if \\( j > i \\). Instead, \\( \mathbf{q}_i \\) only attends to previous key-value vectors \\( \mathbf{k}_{m < i}, \mathbf{v}_{m < i} \text{ , for } m \in \{0, \ldots i - 1\} \\). In order to reduce unnecessary computation, one can therefore cache each layer's key-value vectors for all previous timesteps.
In the following, we will tell the LLM to make use of the key-value cache by retrieving and forwarding it for each forward pass. In Transformers, we can retrieve the key-value cache by passing the `use_cache` flag to the `forward` call and can then pass it with the current token.

```python
past_key_values = None  # past_key_values is the key-value cache
generated_tokens = []
next_token_id = tokenizer(prompt, return_tensors="pt")["input_ids"].to("cuda")

for _ in range(5):
    next_logits, past_key_values = model(next_token_id, past_key_values=past_key_values, use_cache=True).to_tuple()
    next_logits = next_logits[:, -1:]
    next_token_id = torch.argmax(next_logits, dim=-1)

    print("shape of input_ids", next_token_id.shape)
    print("length of key-value cache", len(past_key_values[0][0]))  # past_key_values are of shape [num_layers, 0 for k, 1 for v, batch_size, length, hidden_dim]
    generated_tokens.append(next_token_id.item())

generated_text = tokenizer.batch_decode(generated_tokens)
generated_text
```

**Output**:
```
shape of input_ids torch.Size([1, 1])
length of key-value cache 20
shape of input_ids torch.Size([1, 1])
length of key-value cache 21
shape of input_ids torch.Size([1, 1])
length of key-value cache 22
shape of input_ids torch.Size([1, 1])
length of key-value cache 23
shape of input_ids torch.Size([1, 1])
length of key-value cache 24
[' Here', ' is', ' a', ' Python', ' function']
```

As one can see, when using the key-value cache the text input tokens are *not* increased in length, but remain a single input vector. The length of the key-value cache on the other hand is increased by one at every decoding step.

> Making use of the key-value cache means that the \\( \mathbf{QK}^T \\) is essentially reduced to \\( \mathbf{q}_c\mathbf{K}^T \\) with \\( \mathbf{q}_c \\) being the query projection of the currently passed input token which is *always* just a single vector.
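As a back-of-the-envelope sketch (with hypothetical token counts, counting only query-key score computations and ignoring all other work), one can compare how many scores are computed with and without the cache:

```python
def scores_without_cache(prompt_len, n_generated):
    # every step re-computes the full QK^T matrix over the whole sequence: len^2 scores
    return sum((prompt_len + t) ** 2 for t in range(1, n_generated + 1))

def scores_with_cache(prompt_len, n_generated):
    # every step only computes q_c K^T: a single row of `len` scores
    return sum(prompt_len + t for t in range(1, n_generated + 1))

print(scores_without_cache(100, 100))  # 2348350
print(scores_with_cache(100, 100))     # 15050
```

For this toy setting the cached variant computes over 150x fewer scores, and the gap widens as the sequence length grows.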
Using the key-value cache has two advantages:

- Significant increase in computational efficiency as fewer computations are performed compared to computing the full \\( \mathbf{QK}^T \\) matrix. This leads to an increase in inference speed.
- The maximum required memory is not increased quadratically with the number of generated tokens, but only increases linearly.

> One should *always* make use of the key-value cache as it leads to identical results and a significant speed-up for longer input sequences. Transformers has the key-value cache enabled by default when making use of the text pipeline or the [`generate` method](https://huggingface.co/docs/transformers/main_classes/text_generation).

<Tip warning={true}>

Note that, despite our advice to use key-value caches, your LLM output may be slightly different when you use them. This is a property of the matrix multiplication kernels themselves -- you can read more about it [here](https://github.com/huggingface/transformers/issues/25420#issuecomment-1775317535).

</Tip>

#### 3.2.1 Multi-round conversation

The key-value cache is especially useful for applications such as chat where multiple passes of auto-regressive decoding are required. Let's look at an example.

```
User: How many people live in France?
Assistant: Roughly 75 million people live in France
User: And how many are in Germany?
Assistant: Germany has ca. 81 million inhabitants
```

In this chat, the LLM runs auto-regressive decoding twice:

1. The first time, the key-value cache is empty and the input prompt is `"User: How many people live in France?"` and the model auto-regressively generates the text `"Roughly 75 million people live in France"` while increasing the key-value cache at every decoding step.
2. The second time the input prompt is `"User: How many people live in France? \n Assistant: Roughly 75 million people live in France \n User: And how many in Germany?"`.
Thanks to the cache, all key-value vectors for the first two sentences are already computed. Therefore the input prompt only consists of `"User: And how many in Germany?"`. While processing the shortened input prompt, its computed key-value vectors are concatenated to the key-value cache of the first decoding. The second Assistant's answer `"Germany has ca. 81 million inhabitants"` is then auto-regressively generated with the key-value cache consisting of encoded key-value vectors of `"User: How many people live in France? \n Assistant: Roughly 75 million people live in France \n User: And how many are in Germany?"`.

Two things should be noted here:

1. Keeping all the context is crucial for LLMs deployed in chat so that the LLM understands all the previous context of the conversation. E.g. for the example above the LLM needs to understand that the user refers to the population when asking `"And how many are in Germany"`.
2. The key-value cache is extremely useful for chat as it allows us to continuously grow the encoded chat history instead of having to re-encode the chat history from scratch (as e.g. would be the case when using an encoder-decoder architecture).

In `transformers`, a `generate` call will return `past_key_values` when `return_dict_in_generate=True` is passed, in addition to the default `use_cache=True`. Note that it is not yet available through the `pipeline` interface.
```python
# Generation as usual
prompt = system_prompt + "Question: Please write a function in Python that transforms bytes to Giga bytes.\n\nAnswer: Here"
model_inputs = tokenizer(prompt, return_tensors='pt')
generation_output = model.generate(**model_inputs, max_new_tokens=60, return_dict_in_generate=True)
decoded_output = tokenizer.batch_decode(generation_output.sequences)[0]

# Piping the returned `past_key_values` to speed up the next conversation round
prompt = decoded_output + "\nQuestion: How can I modify the function above to return Mega bytes instead?\n\nAnswer: Here"
model_inputs = tokenizer(prompt, return_tensors='pt')
generation_output = model.generate(
    **model_inputs,
    past_key_values=generation_output.past_key_values,
    max_new_tokens=60,
    return_dict_in_generate=True
)
tokenizer.batch_decode(generation_output.sequences)[0][len(prompt):]
```

**Output**:
```
 is a modified version of the function that returns Mega bytes instead.

def bytes_to_megabytes(bytes):
   return bytes / 1024 / 1024

Answer: The function takes a number of bytes as input and returns the number of
```

Great, no additional time is spent recomputing the same key and values for the attention layer! There is however one catch. While the required peak memory for the \\( \mathbf{QK}^T \\) matrix is significantly reduced, holding the key-value cache in memory can become very memory expensive for long input sequences or multi-turn chat. Remember that the key-value cache needs to store the key-value vectors for all previous input vectors \\( \mathbf{x}_i \text{, for } i \in \{1, \ldots, c - 1\} \\) for all self-attention layers and for all attention heads.

Let's compute the number of float values that need to be stored in the key-value cache for the LLM `bigcode/octocoder` that we used before. The number of float values amounts to two times the sequence length times the number of attention heads times the attention head dimension and times the number of layers.
Computing this for our LLM at a hypothetical input sequence length of 16000 gives:

```python
config = model.config
2 * 16_000 * config.n_layer * config.n_head * config.n_embd // config.n_head
```

**Output**:
```
7864320000
```

Roughly 8 billion float values! Storing 8 billion float values in `float16` precision requires around 15 GB of RAM, which is roughly half as much as the model weights themselves! Researchers have proposed two methods that significantly reduce the memory cost of storing the key-value cache, which are explored in the next subsections.

#### 3.2.2 Multi-Query-Attention (MQA)

[Multi-Query-Attention](https://arxiv.org/abs/1911.02150) was proposed in Noam Shazeer's *Fast Transformer Decoding: One Write-Head is All You Need* paper. As the title says, Noam found out that instead of using `n_head` key-value projection weights, one can use a single key-value projection weight pair that is shared across all attention heads without the model's performance significantly degrading.

> By using a single key-value projection weight pair, the key-value vectors \\( \mathbf{k}_i, \mathbf{v}_i \\) have to be identical across all attention heads, which in turn means that we only need to store 1 key-value projection pair in the cache instead of `n_head` ones.

As most LLMs use between 20 and 100 attention heads, MQA significantly reduces the memory consumption of the key-value cache. For the LLM used in this notebook we could therefore reduce the required memory consumption from 15 GB to less than 400 MB at an input sequence length of 16000.

In addition to memory savings, MQA also leads to improved computational efficiency as explained in the following. In auto-regressive decoding, large key-value vectors need to be reloaded, concatenated with the current key-value vector pair, and then fed into the \\( \mathbf{q}_c\mathbf{K}^T \\) computation at every step.
For auto-regressive decoding, the required memory bandwidth for the constant reloading can become a serious time bottleneck. By reducing the size of the key-value vectors, less memory needs to be accessed, thus reducing the memory bandwidth bottleneck. For more detail, please have a look at [Noam's paper](https://arxiv.org/abs/1911.02150).

The important part to understand here is that reducing the number of key-value attention heads to 1 only makes sense if a key-value cache is used. The peak memory consumption of the model for a single forward pass without key-value cache stays unchanged, as every attention head still has a unique query vector so that each attention head still has a different \\( \mathbf{QK}^T \\) matrix.

MQA has seen wide adoption by the community and is now used by many of the most popular LLMs:

- [**Falcon**](https://huggingface.co/tiiuae/falcon-40b)
- [**PaLM**](https://arxiv.org/abs/2204.02311)
- [**MPT**](https://huggingface.co/mosaicml/mpt-30b)
- [**BLOOM**](https://huggingface.co/bigscience/bloom)

Also, the checkpoint used in this notebook - `bigcode/octocoder` - makes use of MQA.

#### 3.2.3 Grouped-Query-Attention (GQA)

[Grouped-Query-Attention](https://arxiv.org/abs/2305.13245), as proposed by Ainslie et al. from Google, found that using MQA can often lead to quality degradation compared to using vanilla multi-head key-value projections. The paper argues that more model performance can be kept by less drastically reducing the number of key-value head projection weights. Instead of using just a single key-value projection weight, `n < n_head` key-value projection weights should be used. By setting `n` to a value significantly smaller than `n_head`, such as 2, 4, or 8, almost all of the memory and speed gains from MQA can be kept while sacrificing less model capacity and thus arguably less performance.
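To get a feel for the trade-off, here is a hedged sketch of the key-value cache size as a function of the number of key-value heads `n`, using illustrative made-up model dimensions rather than any particular checkpoint:

```python
def kv_cache_bytes(seq_len, n_layers, n_kv_heads, head_dim, bytes_per_value=2):
    # 2x for keys and values; float16 -> 2 bytes per value
    return 2 * seq_len * n_layers * n_kv_heads * head_dim * bytes_per_value

# illustrative dimensions: 48 layers, 64 attention heads, head_dim 128, 16k context
seq_len, n_layers, head_dim = 16_000, 48, 128

mha = kv_cache_bytes(seq_len, n_layers, 64, head_dim)  # vanilla: n = n_head
gqa = kv_cache_bytes(seq_len, n_layers, 8, head_dim)   # GQA with n = 8
mqa = kv_cache_bytes(seq_len, n_layers, 1, head_dim)   # MQA with n = 1

for name, b in [("MHA", mha), ("GQA-8", gqa), ("MQA", mqa)]:
    print(f"{name}: {b / 1024**3:.2f} GB")
```

The cache shrinks linearly in `n`, so GQA with `n = 8` already recovers most of MQA's savings while keeping more capacity.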
Moreover, the authors of GQA found out that existing model checkpoints can be *uptrained* to have a GQA architecture with as little as 5% of the original pre-training compute. While 5% of the original pre-training compute can still be a massive amount, GQA *uptraining* allows existing checkpoints to be useful for longer input sequences.

GQA was only recently proposed, which is why there is less adoption at the time of writing this notebook. The most notable application of GQA is [Llama-v2](https://huggingface.co/meta-llama/Llama-2-70b-hf).

> As a conclusion, it is strongly recommended to make use of either GQA or MQA if the LLM is deployed with auto-regressive decoding and is required to handle large input sequences, as is the case for example for chat.

## Conclusion

The research community is constantly coming up with new, nifty ways to speed up inference time for ever-larger LLMs. As an example, one such promising research direction is [speculative decoding](https://arxiv.org/abs/2211.17192) where "easy tokens" are generated by smaller, faster language models and only "hard tokens" are generated by the LLM itself. Going into more detail is out of the scope of this notebook, but you can read more in this [nice blog post](https://huggingface.co/blog/assisted-generation).

The reason massive LLMs such as GPT3/4, Llama-2-70b, Claude, PaLM can run so quickly in chat-interfaces such as [Hugging Face Chat](https://huggingface.co/chat/) or ChatGPT is in large part thanks to the above-mentioned improvements in precision, algorithms, and architecture. Going forward, accelerators such as GPUs, TPUs, etc. will only get faster and allow for more memory, but one should nevertheless always make sure to use the best available algorithms and architectures to get the most bang for your buck 🤗
hf_public_repos/transformers/docs/source/en/tf_xla.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# XLA Integration for TensorFlow Models

[[open-in-colab]]

Accelerated Linear Algebra, dubbed XLA, is a compiler for accelerating the runtime of TensorFlow models. From the [official documentation](https://www.tensorflow.org/xla):

> XLA (Accelerated Linear Algebra) is a domain-specific compiler for linear algebra that can accelerate TensorFlow models with potentially no source code changes.

Using XLA in TensorFlow is simple - it comes packaged inside the `tensorflow` library, and it can be triggered with the `jit_compile` argument in any graph-creating function such as [`tf.function`](https://www.tensorflow.org/guide/intro_to_graphs). When using Keras methods like `fit()` and `predict()`, you can enable XLA simply by passing the `jit_compile` argument to `model.compile()`. However, XLA is not limited to these methods - it can also be used to accelerate any arbitrary `tf.function`.
Several TensorFlow methods in 🤗 Transformers have been rewritten to be XLA-compatible, including text generation for models such as [GPT2](https://huggingface.co/docs/transformers/model_doc/gpt2), [T5](https://huggingface.co/docs/transformers/model_doc/t5) and [OPT](https://huggingface.co/docs/transformers/model_doc/opt), as well as speech processing for models such as [Whisper](https://huggingface.co/docs/transformers/model_doc/whisper).

While the exact amount of speed-up is very much model-dependent, for TensorFlow text generation models inside 🤗 Transformers, we noticed a speed-up of ~100x. This document will explain how you can use XLA for these models to get the maximum amount of performance. We'll also provide links to additional resources if you're interested in learning more about the benchmarks and our design philosophy behind the XLA integration.

## Running TF functions with XLA

Let us consider the following model in TensorFlow:

```py
import tensorflow as tf

model = tf.keras.Sequential(
    [tf.keras.layers.Dense(10, input_shape=(10,), activation="relu"), tf.keras.layers.Dense(5, activation="softmax")]
)
```

The above model accepts inputs having a dimension of `(10, )`. We can use the model for running a forward pass like so:

```py
# Generate random inputs for the model.
batch_size = 16
input_vector_dim = 10
random_inputs = tf.random.normal((batch_size, input_vector_dim))

# Run a forward pass.
_ = model(random_inputs)
```

In order to run the forward pass with an XLA-compiled function, we'd need to do:

```py
xla_fn = tf.function(model, jit_compile=True)
_ = xla_fn(random_inputs)
```

The default `call()` function of the `model` is used for compiling the XLA graph.
But if there's any other model function you want to compile into XLA, that's also possible with:

```py
my_xla_fn = tf.function(model.my_xla_fn, jit_compile=True)
```

## Running a TF text generation model with XLA from 🤗 Transformers

To enable XLA-accelerated generation within 🤗 Transformers, you need to have a recent version of `transformers` installed. You can install it by running:

```bash
pip install transformers --upgrade
```

And then you can run the following code:

```py
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForCausalLM

# Will error if the minimal version of Transformers is not installed.
from transformers.utils import check_min_version

check_min_version("4.21.0")


tokenizer = AutoTokenizer.from_pretrained("gpt2", padding_side="left", pad_token="</s>")
model = TFAutoModelForCausalLM.from_pretrained("gpt2")
input_string = ["TensorFlow is"]

# One line to create an XLA generation function
xla_generate = tf.function(model.generate, jit_compile=True)

tokenized_input = tokenizer(input_string, return_tensors="tf")
generated_tokens = xla_generate(**tokenized_input, num_beams=2)

decoded_text = tokenizer.decode(generated_tokens[0], skip_special_tokens=True)
print(f"Generated -- {decoded_text}")
# Generated -- TensorFlow is an open-source, open-source, distributed-source application
# framework for the
```

As you can notice, enabling XLA on `generate()` is just a single line of code. The rest of the code remains unchanged. However, there are a couple of gotchas in the above code snippet that are specific to XLA. You need to be aware of those to realize the speed-ups that XLA can bring in. We discuss these in the following section.

## Gotchas to be aware of

When you are executing an XLA-enabled function (like `xla_generate()` above) for the first time, it will internally try to infer the computation graph, which is time-consuming.
This process is known as ["tracing"](https://www.tensorflow.org/guide/intro_to_graphs#when_is_a_function_tracing), so you might notice that the first generation is not fast. Successive calls of `xla_generate()` (or any other XLA-enabled function) won't have to infer the computation graph, given the inputs to the function follow the same shape with which the computation graph was initially built. While this is not a problem for modalities with fixed input shapes (e.g., images), you must pay attention if you are working with variable input shape modalities (e.g., text).

To ensure `xla_generate()` always operates with the same input shapes, you can specify the `padding` arguments when calling the tokenizer.

```py
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2", padding_side="left", pad_token="</s>")
model = TFAutoModelForCausalLM.from_pretrained("gpt2")
input_string = ["TensorFlow is"]

xla_generate = tf.function(model.generate, jit_compile=True)

# Here, we call the tokenizer with padding options.
tokenized_input = tokenizer(input_string, pad_to_multiple_of=8, padding=True, return_tensors="tf")

generated_tokens = xla_generate(**tokenized_input, num_beams=2)
decoded_text = tokenizer.decode(generated_tokens[0], skip_special_tokens=True)
print(f"Generated -- {decoded_text}")
```

This way, you can ensure that `xla_generate()` will always receive inputs with the shape it was traced with, thus leading to speed-ups in generation time.
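The reason `pad_to_multiple_of` helps can be sketched with plain Python: rounding lengths up to a multiple of 8 collapses many distinct input lengths onto a handful of shapes, so far fewer traces are needed (the token lengths below are made up for illustration):

```python
def padded_length(n, multiple=8):
    # round n up to the nearest multiple
    return ((n + multiple - 1) // multiple) * multiple

token_lengths = [3, 5, 7, 9, 12, 16, 17]

raw_shapes = {n for n in token_lengths}
padded_shapes = {padded_length(n) for n in token_lengths}

print(sorted(padded_shapes))  # [8, 16, 24]
assert len(padded_shapes) < len(raw_shapes)
```

Seven different raw lengths collapse to just three padded shapes, so at most three traces are triggered instead of seven.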
You can verify this with the code below:

```py
import time
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2", padding_side="left", pad_token="</s>")
model = TFAutoModelForCausalLM.from_pretrained("gpt2")

xla_generate = tf.function(model.generate, jit_compile=True)

for input_string in ["TensorFlow is", "TensorFlow is a", "TFLite is a"]:
    tokenized_input = tokenizer(input_string, pad_to_multiple_of=8, padding=True, return_tensors="tf")
    start = time.time_ns()
    generated_tokens = xla_generate(**tokenized_input, num_beams=2)
    end = time.time_ns()
    print(f"Execution time -- {(end - start) / 1e6:.1f} ms\n")
```

On a Tesla T4 GPU, you can expect outputs like so:

```bash
Execution time -- 30819.6 ms

Execution time -- 79.0 ms

Execution time -- 78.9 ms
```

The first call to `xla_generate()` is time-consuming because of tracing, but the successive calls are orders of magnitude faster. Keep in mind that any change in the generation options at any point will trigger re-tracing, leading to slow-downs in the generation time.

We didn't cover all the text generation options 🤗 Transformers provides in this document. We encourage you to read the documentation for advanced use cases.

## Additional Resources

Here, we leave you with some additional resources if you want to delve deeper into XLA in 🤗 Transformers and in general.

* [This Colab Notebook](https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/91_tf_xla_generate.ipynb) provides an interactive demonstration if you want to fiddle with the XLA-compatible encoder-decoder (like [T5](https://huggingface.co/docs/transformers/model_doc/t5)) and decoder-only (like [GPT2](https://huggingface.co/docs/transformers/model_doc/gpt2)) text generation models.
* [This blog post](https://huggingface.co/blog/tf-xla-generate) provides an overview of the comparison benchmarks for XLA-compatible models along with a friendly introduction to XLA in TensorFlow.
* [This blog post](https://blog.tensorflow.org/2022/11/how-hugging-face-improved-text-generation-performance-with-xla.html) discusses our design philosophy behind adding XLA support to the TensorFlow models in 🤗 Transformers.
* Recommended posts for learning more about XLA and TensorFlow graphs in general:
    * [XLA: Optimizing Compiler for Machine Learning](https://www.tensorflow.org/xla)
    * [Introduction to graphs and tf.function](https://www.tensorflow.org/guide/intro_to_graphs)
    * [Better performance with tf.function](https://www.tensorflow.org/guide/function)
hf_public_repos/transformers/docs/source/en/installation.md
<!--- Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Installation

Install 🤗 Transformers for whichever deep learning library you're working with, set up your cache, and optionally configure 🤗 Transformers to run offline.

🤗 Transformers is tested on Python 3.6+, PyTorch 1.1.0+, TensorFlow 2.0+, and Flax. Follow the installation instructions below for the deep learning library you are using:

* [PyTorch](https://pytorch.org/get-started/locally/) installation instructions.
* [TensorFlow 2.0](https://www.tensorflow.org/install/pip) installation instructions.
* [Flax](https://flax.readthedocs.io/en/latest/) installation instructions.

## Install with pip

You should install 🤗 Transformers in a [virtual environment](https://docs.python.org/3/library/venv.html). If you're unfamiliar with Python virtual environments, take a look at this [guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/). A virtual environment makes it easier to manage different projects, and avoid compatibility issues between dependencies.

Start by creating a virtual environment in your project directory:

```bash
python -m venv .env
```

Activate the virtual environment.
On Linux and macOS:

```bash
source .env/bin/activate
```

Activate the virtual environment on Windows:

```bash
.env/Scripts/activate
```

Now you're ready to install 🤗 Transformers with the following command:

```bash
pip install transformers
```

For CPU support only, you can conveniently install 🤗 Transformers and a deep learning library in one line. For example, install 🤗 Transformers and PyTorch with:

```bash
pip install 'transformers[torch]'
```

🤗 Transformers and TensorFlow 2.0:

```bash
pip install 'transformers[tf-cpu]'
```

<Tip warning={true}>

M1 / ARM users will need to install the following before installing TensorFlow 2.0:

```
brew install cmake
brew install pkg-config
```

</Tip>

🤗 Transformers and Flax:

```bash
pip install 'transformers[flax]'
```

Finally, check if 🤗 Transformers has been properly installed by running the following command. It will download a pretrained model:

```bash
python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))"
```

Then it prints out the label and score:

```bash
[{'label': 'POSITIVE', 'score': 0.9998704791069031}]
```

## Install from source

Install 🤗 Transformers from source with the following command:

```bash
pip install git+https://github.com/huggingface/transformers
```

This command installs the bleeding-edge `main` version rather than the latest `stable` version. The `main` version is useful for staying up to date with the latest developments, for instance if a bug has been fixed since the last official release but a new release hasn't been rolled out yet. However, this means the `main` version may not always be stable. We strive to keep the `main` version operational, and most issues are usually resolved within a few hours or a day. If you run into a problem, please open an [Issue](https://github.com/huggingface/transformers/issues) so we can fix it even sooner!
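Beyond the pipeline smoke test above, you can check which version of any installed package you have using only the Python standard library. The `installed_version` helper below is a small illustrative sketch (it is not part of 🤗 Transformers):

```python
from importlib.metadata import PackageNotFoundError, version
from typing import Optional


def installed_version(package: str) -> Optional[str]:
    """Return the installed version string of a package, or None if it is missing."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None


# For example, installed_version("transformers") returns the release you installed,
# and returns None if the package is absent from the environment.
print(installed_version("pip"))
```

This is handy when reporting issues, since maintainers usually ask which `transformers` version you are running.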
Check if ๐Ÿค— Transformers has been properly installed by running the following command: ```bash python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('I love you'))" ``` ## Editable install You will need an editable install if you'd like to: * Use the `main` version of the source code. * Contribute to ๐Ÿค— Transformers and need to test changes in the code. Clone the repository and install ๐Ÿค— Transformers with the following commands: ```bash git clone https://github.com/huggingface/transformers.git cd transformers pip install -e . ``` These commands will link the folder you cloned the repository to and your Python library paths. Python will now look inside the folder you cloned to in addition to the normal library paths. For example, if your Python packages are typically installed in `~/anaconda3/envs/main/lib/python3.7/site-packages/`, Python will also search the folder you cloned to: `~/transformers/`. <Tip warning={true}> You must keep the `transformers` folder if you want to keep using the library. </Tip> Now you can easily update your clone to the latest version of ๐Ÿค— Transformers with the following command: ```bash cd ~/transformers/ git pull ``` Your Python environment will find the `main` version of ๐Ÿค— Transformers on the next run. ## Install with conda Install from the conda channel `huggingface`: ```bash conda install -c huggingface transformers ``` ## Cache setup Pretrained models are downloaded and locally cached at: `~/.cache/huggingface/hub`. This is the default directory given by the shell environment variable `TRANSFORMERS_CACHE`. On Windows, the default directory is given by `C:\Users\username\.cache\huggingface\hub`. You can change the shell environment variables shown below - in order of priority - to specify a different cache directory: 1. Shell environment variable (default): `HUGGINGFACE_HUB_CACHE` or `TRANSFORMERS_CACHE`. 2. Shell environment variable: `HF_HOME`. 3. 
Shell environment variable: `XDG_CACHE_HOME` + `/huggingface`. <Tip> ๐Ÿค— Transformers will use the shell environment variables `PYTORCH_TRANSFORMERS_CACHE` or `PYTORCH_PRETRAINED_BERT_CACHE` if you are coming from an earlier iteration of this library and have set those environment variables, unless you specify the shell environment variable `TRANSFORMERS_CACHE`. </Tip> ## Offline mode Run ๐Ÿค— Transformers in a firewalled or offline environment with locally cached files by setting the environment variable `TRANSFORMERS_OFFLINE=1`. <Tip> Add [๐Ÿค— Datasets](https://huggingface.co/docs/datasets/) to your offline training workflow with the environment variable `HF_DATASETS_OFFLINE=1`. </Tip> ```bash HF_DATASETS_OFFLINE=1 TRANSFORMERS_OFFLINE=1 \ python examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --dataset_name wmt16 --dataset_config ro-en ... ``` This script should run without hanging or waiting to timeout because it won't attempt to download the model from the Hub. You can also bypass loading a model from the Hub from each [`~PreTrainedModel.from_pretrained`] call with the [`local_files_only`] parameter. When set to `True`, only local files are loaded: ```py from transformers import T5Model model = T5Model.from_pretrained("./path/to/local/directory", local_files_only=True) ``` ### Fetch models and tokenizers to use offline Another option for using ๐Ÿค— Transformers offline is to download the files ahead of time, and then point to their local path when you need to use them offline. There are three ways to do this: * Download a file through the user interface on the [Model Hub](https://huggingface.co/models) by clicking on the โ†“ icon. ![download-icon](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/download-icon.png) * Use the [`PreTrainedModel.from_pretrained`] and [`PreTrainedModel.save_pretrained`] workflow: 1. 
Download your files ahead of time with [`PreTrainedModel.from_pretrained`]:

```py
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

>>> tokenizer = AutoTokenizer.from_pretrained("bigscience/T0_3B")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_3B")
```

2. Save your files to a specified directory with [`PreTrainedModel.save_pretrained`]:

```py
>>> tokenizer.save_pretrained("./your/path/bigscience_t0")
>>> model.save_pretrained("./your/path/bigscience_t0")
```

3. Now when you're offline, reload your files with [`PreTrainedModel.from_pretrained`] from the specified directory:

```py
>>> tokenizer = AutoTokenizer.from_pretrained("./your/path/bigscience_t0")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("./your/path/bigscience_t0")
```

* Programmatically download files with the [huggingface_hub](https://github.com/huggingface/huggingface_hub/tree/main/src/huggingface_hub) library:

1. Install the `huggingface_hub` library in your virtual environment:

```bash
python -m pip install huggingface_hub
```

2. Use the [`hf_hub_download`](https://huggingface.co/docs/hub/adding-a-library#download-files-from-the-hub) function to download a file to a specific path. For example, the following command downloads the `config.json` file from the [T0](https://huggingface.co/bigscience/T0_3B) model to your desired path:

```py
>>> from huggingface_hub import hf_hub_download

>>> hf_hub_download(repo_id="bigscience/T0_3B", filename="config.json", cache_dir="./your/path/bigscience_t0")
```

Once your file is downloaded and locally cached, specify its local path to load and use it:

```py
>>> from transformers import AutoConfig

>>> config = AutoConfig.from_pretrained("./your/path/bigscience_t0/config.json")
```

<Tip>

See the [How to download files from the Hub](https://huggingface.co/docs/hub/how-to-downstream) section for more details on downloading files stored on the Hub.

</Tip>
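The cache-directory priority described in the cache setup section above (`TRANSFORMERS_CACHE`/`HUGGINGFACE_HUB_CACHE` first, then `HF_HOME`, then `XDG_CACHE_HOME`) can be sketched as a small resolution function. This is an illustrative approximation of the lookup order described in the docs, not the library's actual implementation:

```python
import os
from typing import Mapping


def resolve_cache_dir(env: Mapping[str, str]) -> str:
    """Pick a cache directory following the documented priority order."""
    if "TRANSFORMERS_CACHE" in env:
        return env["TRANSFORMERS_CACHE"]
    if "HUGGINGFACE_HUB_CACHE" in env:
        return env["HUGGINGFACE_HUB_CACHE"]
    if "HF_HOME" in env:
        return os.path.join(env["HF_HOME"], "hub")
    if "XDG_CACHE_HOME" in env:
        return os.path.join(env["XDG_CACHE_HOME"], "huggingface", "hub")
    # Default location used when none of the variables are set.
    return os.path.join(os.path.expanduser("~"), ".cache", "huggingface", "hub")


print(resolve_cache_dir(os.environ))
```

Passing a plain dict instead of `os.environ` makes the priority order easy to verify in isolation.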
hf_public_repos/transformers/docs/source/en/hpo_train.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Hyperparameter Search using Trainer API

🤗 Transformers provides a [`Trainer`] class optimized for training 🤗 Transformers models, making it easier to start training without manually writing your own training loop. The [`Trainer`] also provides an API for hyperparameter search. This doc shows how to enable it with an example.

## Hyperparameter Search backend

[`Trainer`] currently supports four hyperparameter search backends: [optuna](https://optuna.org/), [sigopt](https://sigopt.com/), [raytune](https://docs.ray.io/en/latest/tune/index.html) and [wandb](https://wandb.ai/site/sweeps).

You should install whichever of them you want to use before using it as the hyperparameter search backend:

```bash
pip install optuna/sigopt/wandb/ray[tune]
```

## How to enable Hyperparameter search in example

Define the hyperparameter search space; different backends need different formats.

For sigopt, see sigopt [object_parameter](https://docs.sigopt.com/ai-module-api-references/api_reference/objects/object_parameter). It looks like the following:

```py
>>> def sigopt_hp_space(trial):
...     return [
...         {"bounds": {"min": 1e-6, "max": 1e-4}, "name": "learning_rate", "type": "double"},
...         {
...             "categorical_values": ["16", "32", "64", "128"],
...             "name": "per_device_train_batch_size",
...             "type": "categorical",
...         },
...     ]
```

For optuna, see optuna [object_parameter](https://optuna.readthedocs.io/en/stable/tutorial/10_key_features/002_configurations.html#sphx-glr-tutorial-10-key-features-002-configurations-py). It looks like the following:

```py
>>> def optuna_hp_space(trial):
...     return {
...         "learning_rate": trial.suggest_float("learning_rate", 1e-6, 1e-4, log=True),
...         "per_device_train_batch_size": trial.suggest_categorical("per_device_train_batch_size", [16, 32, 64, 128]),
...     }
```

Optuna provides multi-objective HPO. You can pass `direction` in `hyperparameter_search` and define your own `compute_objective` to return multiple objective values. The Pareto Front (`List[BestRun]`) will be returned by `hyperparameter_search`; you should refer to the test case `TrainerHyperParameterMultiObjectOptunaIntegrationTest` in [test_trainer](https://github.com/huggingface/transformers/blob/main/tests/trainer/test_trainer.py). It looks like the following:

```py
>>> best_trials = trainer.hyperparameter_search(
...     direction=["minimize", "maximize"],
...     backend="optuna",
...     hp_space=optuna_hp_space,
...     n_trials=20,
...     compute_objective=compute_objective,
... )
```

For raytune, see raytune [object_parameter](https://docs.ray.io/en/latest/tune/api/search_space.html). It looks like the following:

```py
>>> def ray_hp_space(trial):
...     return {
...         "learning_rate": tune.loguniform(1e-6, 1e-4),
...         "per_device_train_batch_size": tune.choice([16, 32, 64, 128]),
...     }
```

For wandb, see wandb [object_parameter](https://docs.wandb.ai/guides/sweeps/configuration). It looks like the following:

```py
>>> def wandb_hp_space(trial):
...     return {
...         "method": "random",
...         "metric": {"name": "objective", "goal": "minimize"},
...         "parameters": {
...             "learning_rate": {"distribution": "uniform", "min": 1e-6, "max": 1e-4},
...             "per_device_train_batch_size": {"values": [16, 32, 64, 128]},
...         },
...     }
```

Define a `model_init` function and pass it to the [`Trainer`], as an example:

```py
>>> def model_init(trial):
...     return AutoModelForSequenceClassification.from_pretrained(
...         model_args.model_name_or_path,
...         from_tf=bool(".ckpt" in model_args.model_name_or_path),
...         config=config,
...         cache_dir=model_args.cache_dir,
...         revision=model_args.model_revision,
...         token=True if model_args.use_auth_token else None,
...     )
```

Create a [`Trainer`] with your `model_init` function, training arguments, training and test datasets, and evaluation function:

```py
>>> trainer = Trainer(
...     model=None,
...     args=training_args,
...     train_dataset=small_train_dataset,
...     eval_dataset=small_eval_dataset,
...     compute_metrics=compute_metrics,
...     tokenizer=tokenizer,
...     model_init=model_init,
...     data_collator=data_collator,
... )
```

Call hyperparameter search to get the best trial parameters. The backend can be `"optuna"`/`"sigopt"`/`"wandb"`/`"ray"`. `direction` can be `"minimize"` or `"maximize"`, which indicates whether to optimize for a lower or greater objective. You can define your own `compute_objective` function; if it is not defined, the default `compute_objective` is called, and the sum of eval metrics such as F1 is returned as the objective value.

```py
>>> best_trial = trainer.hyperparameter_search(
...     direction="maximize",
...     backend="optuna",
...     hp_space=optuna_hp_space,
...     n_trials=20,
...     compute_objective=compute_objective,
... )
```

## Hyperparameter search for DDP finetune

Currently, hyperparameter search for DDP is enabled for optuna and sigopt. Only the rank-zero process will generate the search trial and pass the argument to other ranks.
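A custom `compute_objective` is just a function from the evaluation metrics dictionary (what `trainer.evaluate()` returns) to the value being optimized. A minimal sketch, assuming your `compute_metrics` reports an `eval_f1` key (a hypothetical metric name for illustration):

```python
from typing import Dict


def compute_objective(metrics: Dict[str, float]) -> float:
    """Objective to maximize: here, simply the F1 score from evaluation.

    `metrics` is the dictionary produced during evaluation; the `eval_f1`
    key is an assumption about what your own `compute_metrics` emits.
    """
    return metrics["eval_f1"]


print(compute_objective({"eval_loss": 0.31, "eval_f1": 0.87}))
```

Returning a list of floats instead of a single float is what enables the multi-objective search described above.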
hf_public_repos/transformers/docs/source/en/big_models.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Instantiating a big model

When you want to use a very big pretrained model, one challenge is to minimize the use of RAM. The usual workflow from PyTorch is:

1. Create your model with random weights.
2. Load your pretrained weights.
3. Put those pretrained weights in your random model.

Steps 1 and 2 both require a full version of the model in memory, which is not a problem in most cases, but if your model starts weighing several gigabytes, those two copies can make you run out of RAM. Even worse, if you are using `torch.distributed` to launch a distributed training, each process will load the pretrained model and store these two copies in RAM.

<Tip>

Note that the randomly created model is initialized with "empty" tensors, which take space in memory without filling it (thus the random values are whatever was in this chunk of memory at a given time). The random initialization following the appropriate distribution for the kind of model/parameters instantiated (like a normal distribution, for instance) is only performed after step 3 on the non-initialized weights, to be as fast as possible!

</Tip>

In this guide, we explore the solutions Transformers offers to deal with this issue.
Note that this is an area of active development, so the APIs explained here may change slightly in the future.

## Sharded checkpoints

Since version 4.18.0, model checkpoints that end up taking more than 10GB of space are automatically sharded into smaller pieces. Instead of having one single checkpoint when you do `model.save_pretrained(save_dir)`, you will end up with several partial checkpoints (each of which being of size < 10GB) and an index that maps parameter names to the files they are stored in.

You can control the maximum size before sharding with the `max_shard_size` parameter, so for the sake of an example, we'll use a normal-size model with a small shard size: let's take a traditional BERT model.

```py
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-cased")
```

If you save it using [`~PreTrainedModel.save_pretrained`], you will get a new folder with two files: the config of the model and its weights:

```py
>>> import os
>>> import tempfile

>>> with tempfile.TemporaryDirectory() as tmp_dir:
...     model.save_pretrained(tmp_dir)
...     print(sorted(os.listdir(tmp_dir)))
['config.json', 'pytorch_model.bin']
```

Now let's use a maximum shard size of 200MB:

```py
>>> with tempfile.TemporaryDirectory() as tmp_dir:
...     model.save_pretrained(tmp_dir, max_shard_size="200MB")
...     print(sorted(os.listdir(tmp_dir)))
['config.json', 'pytorch_model-00001-of-00003.bin', 'pytorch_model-00002-of-00003.bin', 'pytorch_model-00003-of-00003.bin', 'pytorch_model.bin.index.json']
```

On top of the configuration of the model, we see three different weights files, and an `index.json` file which is our index. A checkpoint like this can be fully reloaded using the [`~PreTrainedModel.from_pretrained`] method:

```py
>>> with tempfile.TemporaryDirectory() as tmp_dir:
...     model.save_pretrained(tmp_dir, max_shard_size="200MB")
...     new_model = AutoModel.from_pretrained(tmp_dir)
```

The main advantage of doing this for big models is that during step 2 of the workflow shown above, each shard of the checkpoint is loaded after the previous one, capping the memory usage in RAM to the model size plus the size of the biggest shard.

Behind the scenes, the index file is used to determine which keys are in the checkpoint, and where the corresponding weights are stored. We can load that index like any json and get a dictionary:

```py
>>> import json

>>> with tempfile.TemporaryDirectory() as tmp_dir:
...     model.save_pretrained(tmp_dir, max_shard_size="200MB")
...     with open(os.path.join(tmp_dir, "pytorch_model.bin.index.json"), "r") as f:
...         index = json.load(f)

>>> print(index.keys())
dict_keys(['metadata', 'weight_map'])
```

The metadata just consists of the total size of the model for now. We plan to add other information in the future:

```py
>>> index["metadata"]
{'total_size': 433245184}
```

The weights map is the main part of this index, which maps each parameter name (as usually found in a PyTorch model `state_dict`) to the file it's stored in:

```py
>>> index["weight_map"]
{'embeddings.LayerNorm.bias': 'pytorch_model-00001-of-00003.bin',
 'embeddings.LayerNorm.weight': 'pytorch_model-00001-of-00003.bin',
 ...
```

If you want to directly load such a sharded checkpoint inside a model without using [`~PreTrainedModel.from_pretrained`] (like you would do `model.load_state_dict()` for a full checkpoint) you should use [`~modeling_utils.load_sharded_checkpoint`]:

```py
>>> from transformers.modeling_utils import load_sharded_checkpoint

>>> with tempfile.TemporaryDirectory() as tmp_dir:
...     model.save_pretrained(tmp_dir, max_shard_size="200MB")
...     load_sharded_checkpoint(model, tmp_dir)
```

## Low memory loading

Sharded checkpoints reduce the memory usage during step 2 of the workflow mentioned above, but in order to use that model in a low memory setting, we recommend leveraging our tools based on the Accelerate library.

Please read the following guide for more information: [Large model loading using Accelerate](./main_classes/model#large-model-loading)
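Since the index file is plain JSON, its `weight_map` can also be inspected with ordinary Python, for example to see which parameters live in each shard. The helper below is our own illustration, not a 🤗 Transformers API:

```python
from collections import defaultdict
from typing import Dict, List


def params_per_shard(index: dict) -> Dict[str, List[str]]:
    """Group parameter names by the shard file they are stored in (from weight_map)."""
    shards: Dict[str, List[str]] = defaultdict(list)
    for param_name, shard_file in index["weight_map"].items():
        shards[shard_file].append(param_name)
    return dict(shards)


# A toy index in the same shape as pytorch_model.bin.index.json:
toy_index = {
    "metadata": {"total_size": 433245184},
    "weight_map": {
        "embeddings.LayerNorm.bias": "pytorch_model-00001-of-00003.bin",
        "embeddings.LayerNorm.weight": "pytorch_model-00001-of-00003.bin",
        "pooler.dense.weight": "pytorch_model-00003-of-00003.bin",
    },
}
print({shard: len(names) for shard, names in params_per_shard(toy_index).items()})
```

This kind of inspection is useful for sanity-checking a checkpoint before loading it shard by shard.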
hf_public_repos/transformers/docs/source/en/philosophy.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Philosophy ๐Ÿค— Transformers is an opinionated library built for: - machine learning researchers and educators seeking to use, study or extend large-scale Transformers models. - hands-on practitioners who want to fine-tune those models or serve them in production, or both. - engineers who just want to download a pretrained model and use it to solve a given machine learning task. The library was designed with two strong goals in mind: 1. Be as easy and fast to use as possible: - We strongly limited the number of user-facing abstractions to learn, in fact, there are almost no abstractions, just three standard classes required to use each model: [configuration](main_classes/configuration), [models](main_classes/model), and a preprocessing class ([tokenizer](main_classes/tokenizer) for NLP, [image processor](main_classes/image_processor) for vision, [feature extractor](main_classes/feature_extractor) for audio, and [processor](main_classes/processors) for multimodal inputs). 
- All of these classes can be initialized in a simple and unified way from pretrained instances by using a common `from_pretrained()` method which downloads (if needed), caches and loads the related class instance and associated data (configurations' hyperparameters, tokenizers' vocabulary, and models' weights) from a pretrained checkpoint provided on [Hugging Face Hub](https://huggingface.co/models) or your own saved checkpoint.
- On top of those three base classes, the library provides two APIs: [`pipeline`] for quickly using a model for inference on a given task and [`Trainer`] to quickly train or fine-tune a PyTorch model (all TensorFlow models are compatible with `Keras.fit`).
- As a consequence, this library is NOT a modular toolbox of building blocks for neural nets. If you want to extend or build upon the library, just use regular Python, PyTorch, TensorFlow, or Keras modules and inherit from the base classes of the library to reuse functionalities like model loading and saving. If you'd like to learn more about our coding philosophy for models, check out our [Repeat Yourself](https://huggingface.co/blog/transformers-design-philosophy) blog post.

2. Provide state-of-the-art models with performance as close as possible to the original models:

- We provide at least one example for each architecture which reproduces a result provided by the official authors of said architecture.
- The code is usually as close to the original code base as possible, which means some PyTorch code may not be as *pytorchic* as it could be as a result of being converted from TensorFlow code, and vice versa.

A few other goals:

- Expose the models' internals as consistently as possible:
  - We give access, using a single API, to the full hidden-states and attention weights.
  - The preprocessing classes and base model APIs are standardized to easily switch between models.
- Incorporate a subjective selection of promising tools for fine-tuning and investigating these models: - A simple and consistent way to add new tokens to the vocabulary and embeddings for fine-tuning. - Simple ways to mask and prune Transformer heads. - Easily switch between PyTorch, TensorFlow 2.0 and Flax, allowing training with one framework and inference with another. ## Main concepts The library is built around three types of classes for each model: - **Model classes** can be PyTorch models ([torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)), Keras models ([tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model)) or JAX/Flax models ([flax.linen.Module](https://flax.readthedocs.io/en/latest/api_reference/flax.linen/module.html)) that work with the pretrained weights provided in the library. - **Configuration classes** store the hyperparameters required to build a model (such as the number of layers and hidden size). You don't always need to instantiate these yourself. In particular, if you are using a pretrained model without any modification, creating the model will automatically take care of instantiating the configuration (which is part of the model). - **Preprocessing classes** convert the raw data into a format accepted by the model. A [tokenizer](main_classes/tokenizer) stores the vocabulary for each model and provide methods for encoding and decoding strings in a list of token embedding indices to be fed to a model. [Image processors](main_classes/image_processor) preprocess vision inputs, [feature extractors](main_classes/feature_extractor) preprocess audio inputs, and a [processor](main_classes/processors) handles multimodal inputs. 
All these classes can be instantiated from pretrained instances, saved locally, and shared on the Hub with three methods: - `from_pretrained()` lets you instantiate a model, configuration, and preprocessing class from a pretrained version either provided by the library itself (the supported models can be found on the [Model Hub](https://huggingface.co/models)) or stored locally (or on a server) by the user. - `save_pretrained()` lets you save a model, configuration, and preprocessing class locally so that it can be reloaded using `from_pretrained()`. - `push_to_hub()` lets you share a model, configuration, and a preprocessing class to the Hub, so it is easily accessible to everyone.
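To make the round trip concrete: a configuration class, for example, serializes its hyperparameters to a JSON file on disk and rebuilds itself from that file. The toy class below mimics that save/load pattern in plain Python; it is a sketch of the idea, not the library's actual implementation (which also handles Hub uploads and many more fields):

```python
import json
import os


class ToyConfig:
    """Minimal stand-in illustrating the save_pretrained / from_pretrained round trip."""

    def __init__(self, hidden_size: int = 16, num_layers: int = 2):
        self.hidden_size = hidden_size
        self.num_layers = num_layers

    def save_pretrained(self, save_directory: str) -> None:
        """Serialize the hyperparameters to `save_directory/config.json`."""
        os.makedirs(save_directory, exist_ok=True)
        with open(os.path.join(save_directory, "config.json"), "w") as f:
            json.dump(self.__dict__, f)

    @classmethod
    def from_pretrained(cls, save_directory: str) -> "ToyConfig":
        """Rebuild the object from a previously saved `config.json`."""
        with open(os.path.join(save_directory, "config.json")) as f:
            return cls(**json.load(f))
```

Because saving and loading share one on-disk format, anything produced by `save_pretrained()` can be consumed by `from_pretrained()`, which is exactly the contract the real classes follow.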
hf_public_repos/transformers/docs/source/en/pipeline_tutorial.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. โš ๏ธ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. --> # Pipelines for inference The [`pipeline`] makes it simple to use any model from the [Hub](https://huggingface.co/models) for inference on any language, computer vision, speech, and multimodal tasks. Even if you don't have experience with a specific modality or aren't familiar with the underlying code behind the models, you can still use them for inference with the [`pipeline`]! This tutorial will teach you to: * Use a [`pipeline`] for inference. * Use a specific tokenizer or model. * Use a [`pipeline`] for audio, vision, and multimodal tasks. <Tip> Take a look at the [`pipeline`] documentation for a complete list of supported tasks and available parameters. </Tip> ## Pipeline usage While each task has an associated [`pipeline`], it is simpler to use the general [`pipeline`] abstraction which contains all the task-specific pipelines. The [`pipeline`] automatically loads a default model and a preprocessing class capable of inference for your task. Let's take the example of using the [`pipeline`] for automatic speech recognition (ASR), or speech-to-text. 1. 
Start by creating a [`pipeline`] and specify the inference task: ```py >>> from transformers import pipeline >>> transcriber = pipeline(task="automatic-speech-recognition") ``` 2. Pass your input to the [`pipeline`]. In the case of speech recognition, this is an audio input file: ```py >>> transcriber("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac") {'text': 'I HAVE A DREAM BUT ONE DAY THIS NATION WILL RISE UP LIVE UP THE TRUE MEANING OF ITS TREES'} ``` Not the result you had in mind? Check out some of the [most downloaded automatic speech recognition models](https://huggingface.co/models?pipeline_tag=automatic-speech-recognition&sort=trending) on the Hub to see if you can get a better transcription. Let's try the [Whisper large-v2](https://huggingface.co/openai/whisper-large) model from OpenAI. Whisper was released 2 years later than Wav2Vec2, and was trained on close to 10x more data. As such, it beats Wav2Vec2 on most downstream benchmarks. It also has the added benefit of predicting punctuation and casing, neither of which are possible with Wav2Vec2. Let's give it a try here to see how it performs: ```py >>> transcriber = pipeline(model="openai/whisper-large-v2") >>> transcriber("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac") {'text': ' I have a dream that one day this nation will rise up and live out the true meaning of its creed.'} ``` Now this result looks more accurate! For a deep-dive comparison on Wav2Vec2 vs Whisper, refer to the [Audio Transformers Course](https://huggingface.co/learn/audio-course/chapter5/asr_models). We really encourage you to check out the Hub for models in different languages, models specialized in your field, and more. You can check out and compare model results directly from your browser on the Hub to see if it fits or handles corner cases better than other ones. And if you don't find a model for your use case, you can always start [training](training) your own! 
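The transcriber returns a dict with a `'text'` key, and a list of such dicts when you pass several inputs at once. A tiny helper like the one below (our own convenience function, not part of the pipeline API) normalizes both shapes:

```python
from typing import Dict, List, Union


def extract_text(result: Union[Dict, List[Dict]]) -> Union[str, List[str]]:
    """Pull the transcription string(s) out of an ASR pipeline result."""
    if isinstance(result, list):
        return [item["text"] for item in result]
    return result["text"]


# Works for a single result dict as well as a batch of results.
print(extract_text({"text": "I have a dream..."}))
```

This keeps downstream code identical whether you transcribe one file or a whole list.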
If you have several inputs, you can pass your input as a list:

```py
transcriber(
    [
        "https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac",
        "https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/1.flac",
    ]
)
```

Pipelines are great for experimentation as switching from one model to another is trivial; however, there are some ways to optimize them for larger workloads than experimentation. See the following guides that dive into iterating over whole datasets or using pipelines in a webserver:

* [Using pipelines on a dataset](#using-pipelines-on-a-dataset)
* [Using pipelines for a webserver](./pipeline_webserver)

## Parameters

[`pipeline`] supports many parameters; some are task specific, and some are general to all pipelines. In general, you can specify parameters anywhere you want:

```py
transcriber = pipeline(model="openai/whisper-large-v2", my_parameter=1)

out = transcriber(...)  # This will use `my_parameter=1`.
out = transcriber(..., my_parameter=2)  # This will override and use `my_parameter=2`.
out = transcriber(...)  # This will go back to using `my_parameter=1`.
```

Let's check out 3 important ones:

### Device

If you use `device=n`, the pipeline automatically puts the model on the specified device. This will work regardless of whether you are using PyTorch or TensorFlow.

```py
transcriber = pipeline(model="openai/whisper-large-v2", device=0)
```

If the model is too large for a single GPU and you are using PyTorch, you can set `device_map="auto"` to automatically determine how to load and store the model weights.
Using the `device_map` argument requires the 🤗 [Accelerate](https://huggingface.co/docs/accelerate) package:

```bash
pip install --upgrade accelerate
```

The following code automatically loads and stores model weights across devices:

```py
transcriber = pipeline(model="openai/whisper-large-v2", device_map="auto")
```

Note that if `device_map="auto"` is passed, there is no need to add the argument `device=device` when instantiating your `pipeline`, as you may otherwise encounter unexpected behavior!

### Batch size

By default, pipelines will not batch inference for reasons explained in detail [here](https://huggingface.co/docs/transformers/main_classes/pipelines#pipeline-batching). The reason is that batching is not necessarily faster, and can actually be quite a bit slower in some cases.

But if it works in your use case, you can use:

```py
transcriber = pipeline(model="openai/whisper-large-v2", device=0, batch_size=2)
audio_filenames = [f"https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/{i}.flac" for i in range(1, 5)]
texts = transcriber(audio_filenames)
```

This runs the pipeline on the 4 provided audio files, but it will pass them in batches of 2 to the model (which is on a GPU, where batching is more likely to help) without requiring any further code from you. The output should always match what you would have received without batching. It is only meant as a way to help you get more speed out of a pipeline.

Pipelines can also alleviate some of the complexities of batching because, for some pipelines, a single item (like a long audio file) needs to be chunked into multiple parts to be processed by a model. The pipeline performs this [*chunk batching*](./main_classes/pipelines#pipeline-chunk-batching) for you.

### Task specific parameters

All tasks provide task specific parameters which allow for additional flexibility and options to help you get your job done.
For instance, the [`transformers.AutomaticSpeechRecognitionPipeline.__call__`] method has a `return_timestamps` parameter which sounds promising for subtitling videos:

```py
>>> transcriber = pipeline(model="openai/whisper-large-v2", return_timestamps=True)
>>> transcriber("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")
{'text': ' I have a dream that one day this nation will rise up and live out the true meaning of its creed.', 'chunks': [{'timestamp': (0.0, 11.88), 'text': ' I have a dream that one day this nation will rise up and live out the true meaning of its'}, {'timestamp': (11.88, 12.38), 'text': ' creed.'}]}
```

As you can see, the model inferred the text and also output **when** the various sentences were pronounced.

There are many parameters available for each task, so check out each task's API reference to see what you can tinker with!
For instance, the [`~transformers.AutomaticSpeechRecognitionPipeline`] has a `chunk_length_s` parameter which is helpful
for working on really long audio files (for example, subtitling entire movies or hour-long videos) that a model typically
cannot handle on its own:

```python
>>> transcriber = pipeline(model="openai/whisper-large-v2", chunk_length_s=30, return_timestamps=True)
>>> transcriber("https://huggingface.co/datasets/sanchit-gandhi/librispeech_long/resolve/main/audio.wav")
{'text': " Chapter 16. I might have told you of the beginning of this liaison in a few lines, but I wanted you to see every step by which we came. I, too, agree to whatever Marguerite wished, Marguerite to be unable to live apart from me. It was the day after the evening...
```

If you can't find a parameter that would really help you out, feel free to [request it](https://github.com/huggingface/transformers/issues/new?assignees=&labels=feature&template=feature-request.yml)!

## Using pipelines on a dataset

The pipeline can also run inference on a large dataset.
The easiest way we recommend doing this is by using an iterator:

```py
def data():
    for i in range(1000):
        yield f"My example {i}"


pipe = pipeline(model="gpt2", device=0)
generated_characters = 0
for out in pipe(data()):
    generated_characters += len(out[0]["generated_text"])
```

The iterator `data()` yields each input, and the pipeline automatically
recognizes the input is iterable and will start fetching the data while
it continues to process it on the GPU (this uses [DataLoader](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader) under the hood).
This is important because you don't have to allocate memory for the whole dataset
and you can feed the GPU as fast as possible.

Since batching could speed things up, it may be useful to try tuning the `batch_size` parameter here.

The simplest way to iterate over a dataset is to just load one from ๐Ÿค— [Datasets](https://github.com/huggingface/datasets/):

```py
# KeyDataset is a util that will just output the item we're interested in.
from transformers.pipelines.pt_utils import KeyDataset
from datasets import load_dataset

pipe = pipeline(model="hf-internal-testing/tiny-random-wav2vec2", device=0)
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation[:10]")

for out in pipe(KeyDataset(dataset, "audio")):
    print(out)
```

## Using pipelines for a webserver

<Tip>
Creating an inference engine is a complex topic which deserves its own page.
</Tip>

[Link](./pipeline_webserver)

## Vision pipeline

Using a [`pipeline`] for vision tasks is practically identical.

Specify your task and pass your image to the classifier. The image can be a link, a local path or a base64-encoded image. For example, what species of cat is shown below?
![pipeline-cat-chonk](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg)

```py
>>> from transformers import pipeline

>>> vision_classifier = pipeline(model="google/vit-base-patch16-224")
>>> preds = vision_classifier(
...     images="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
... )
>>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds]
>>> preds
[{'score': 0.4335, 'label': 'lynx, catamount'}, {'score': 0.0348, 'label': 'cougar, puma, catamount, mountain lion, painter, panther, Felis concolor'}, {'score': 0.0324, 'label': 'snow leopard, ounce, Panthera uncia'}, {'score': 0.0239, 'label': 'Egyptian cat'}, {'score': 0.0229, 'label': 'tiger cat'}]
```

## Text pipeline

Using a [`pipeline`] for NLP tasks is practically identical.

```py
>>> from transformers import pipeline

>>> # This model is a `zero-shot-classification` model.
>>> # It will classify text, except you are free to choose any label you might imagine
>>> classifier = pipeline(model="facebook/bart-large-mnli")
>>> classifier(
...     "I have a problem with my iphone that needs to be resolved asap!!",
...     candidate_labels=["urgent", "not urgent", "phone", "tablet", "computer"],
... )
{'sequence': 'I have a problem with my iphone that needs to be resolved asap!!', 'labels': ['urgent', 'phone', 'computer', 'not urgent', 'tablet'], 'scores': [0.504, 0.479, 0.013, 0.003, 0.002]}
```

## Multimodal pipeline

The [`pipeline`] supports more than one modality. For example, a visual question answering (VQA) task combines text and image. Feel free to use any image link you like and a question you want to ask about the image. The image can be a URL or a local path to the image.
For example, if you use this [invoice image](https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png):

```py
>>> from transformers import pipeline

>>> vqa = pipeline(model="impira/layoutlm-document-qa")
>>> vqa(
...     image="https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png",
...     question="What is the invoice number?",
... )
[{'score': 0.42515, 'answer': 'us-001', 'start': 16, 'end': 16}]
```

<Tip>

To run the example above you need to have [`pytesseract`](https://pypi.org/project/pytesseract/) installed in addition to ๐Ÿค— Transformers:

```bash
sudo apt install -y tesseract-ocr
pip install pytesseract
```

</Tip>

## Using `pipeline` on large models with ๐Ÿค— `accelerate`

You can easily run `pipeline` on large models using ๐Ÿค— `accelerate`! First, make sure you have installed `accelerate` with `pip install accelerate`.

Then load your model using `device_map="auto"`! We will use `facebook/opt-1.3b` for our example.

```py
# pip install accelerate
import torch
from transformers import pipeline

pipe = pipeline(model="facebook/opt-1.3b", torch_dtype=torch.bfloat16, device_map="auto")
output = pipe("This is a cool example!", do_sample=True, top_p=0.95)
```

You can also pass 8-bit loaded models if you install `bitsandbytes` and add the argument `load_in_8bit=True`:

```py
# pip install accelerate bitsandbytes
import torch
from transformers import pipeline

pipe = pipeline(model="facebook/opt-1.3b", device_map="auto", model_kwargs={"load_in_8bit": True})
output = pipe("This is a cool example!", do_sample=True, top_p=0.95)
```

Note that you can replace the checkpoint with any Hugging Face model that supports large model loading, such as BLOOM!
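To get a feel for why `torch_dtype=torch.bfloat16` and 8-bit loading matter, a back-of-the-envelope estimate of the memory taken by the weights alone is useful. The sketch below is illustrative only: it ignores activations, KV caches, and framework overhead, and the parameter count is a round figure.

```python
def approx_weight_memory_gb(n_params: float, bytes_per_param: float) -> float:
    """Rough memory footprint of a model's weights alone, in GiB."""
    return n_params * bytes_per_param / 1024**3


# facebook/opt-1.3b has roughly 1.3 billion parameters:
print(round(approx_weight_memory_gb(1.3e9, 4), 1))  # float32  -> 4.8 GiB
print(round(approx_weight_memory_gb(1.3e9, 2), 1))  # bfloat16 -> 2.4 GiB
print(round(approx_weight_memory_gb(1.3e9, 1), 1))  # 8-bit    -> 1.2 GiB
```

Halving the bytes per parameter halves the weight memory, which is exactly the trade-off `torch_dtype` and `load_in_8bit` expose.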
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# How to convert a ๐Ÿค— Transformers model to TensorFlow?

Having multiple frameworks available to use with ๐Ÿค— Transformers gives you the flexibility to play to their strengths when
designing your application, but it implies that compatibility must be added on a per-model basis. The good news is that
adding TensorFlow compatibility to an existing model is simpler than [adding a new model from scratch](add_new_model)!
Whether you wish to have a deeper understanding of large TensorFlow models, make a major open-source contribution, or
enable TensorFlow for your model of choice, this guide is for you.

This guide empowers you, a member of our community, to contribute TensorFlow model weights and/or
architectures to be used in ๐Ÿค— Transformers, with minimal supervision from the Hugging Face team. Writing a new model
is no small feat, but hopefully this guide will make it less of a rollercoaster ๐ŸŽข and more of a walk in the park ๐Ÿšถ.
Harnessing our collective experiences is absolutely critical to make this process increasingly easier, and thus we
highly encourage that you suggest improvements to this guide!
Before you dive deeper, it is recommended that you check the following resources if you're new to ๐Ÿค— Transformers:

- [General overview of ๐Ÿค— Transformers](add_new_model#general-overview-of-transformers)
- [Hugging Face's TensorFlow Philosophy](https://huggingface.co/blog/tensorflow-philosophy)

In the remainder of this guide, you will learn what's needed to add a new TensorFlow model architecture, the
procedure to convert PyTorch into TensorFlow model weights, and how to efficiently debug mismatches across ML
frameworks. Let's get started!

<Tip>

Are you unsure whether the model you wish to use already has a corresponding TensorFlow architecture?

Check the `model_type` field of the `config.json` of your model of choice
([example](https://huggingface.co/bert-base-uncased/blob/main/config.json#L14)). If the corresponding model folder in
๐Ÿค— Transformers has a file whose name starts with "modeling_tf", it means that it has a corresponding TensorFlow
architecture ([example](https://github.com/huggingface/transformers/tree/main/src/transformers/models/bert)).

</Tip>

## Step-by-step guide to add TensorFlow model architecture code

There are many ways to design a large model architecture, and multiple ways of implementing said design. However,
you might recall from our [general overview of ๐Ÿค— Transformers](add_new_model#general-overview-of-transformers)
that we are an opinionated bunch - the ease of use of ๐Ÿค— Transformers relies on consistent design choices. From
experience, we can tell you a few important things about adding TensorFlow models:

- Don't reinvent the wheel! More often than not, there are at least two reference implementations you should check:
the PyTorch equivalent of the model you are implementing and other TensorFlow models for the same class of problems.
- Great model implementations survive the test of time. This doesn't happen because the code is pretty, but rather
because the code is clear, easy to debug and build upon.
If you make the life of the maintainers easy with your TensorFlow implementation, by replicating the same patterns
as in other TensorFlow models and minimizing the mismatch to the PyTorch implementation, you ensure your contribution
will be long lived.
- Ask for help when you're stuck! The ๐Ÿค— Transformers team is here to help, and we've probably found solutions to the
same problems you're facing.

Here's an overview of the steps needed to add a TensorFlow model architecture:

1. Select the model you wish to convert
2. Prepare transformers dev environment
3. (Optional) Understand theoretical aspects and the existing implementation
4. Implement the model architecture
5. Implement model tests
6. Submit the pull request
7. (Optional) Build demos and share with the world

### 1.-3. Prepare your model contribution

**1. Select the model you wish to convert**

Let's start off with the basics: the first thing you need to know is the architecture you want to convert. If you
don't have your eyes set on a specific architecture, asking the ๐Ÿค— Transformers team for suggestions is a great way to
maximize your impact - we will guide you towards the most prominent architectures that are missing on the TensorFlow
side. If the specific model you want to use with TensorFlow already has a TensorFlow architecture implementation in
๐Ÿค— Transformers but is lacking weights, feel free to jump straight into the
[weight conversion section](#adding-tensorflow-weights-to-hub) of this page.

For simplicity, the remainder of this guide assumes you've decided to contribute with the TensorFlow version of
*BrandNewBert* (the same example as in the [guide](add_new_model) to add a new model from scratch).

<Tip>

Before starting the work on a TensorFlow model architecture, double-check that there is no ongoing effort to do so.
You can search for `BrandNewBert` on the
[pull request GitHub page](https://github.com/huggingface/transformers/pulls?q=is%3Apr) to confirm that there is no
TensorFlow-related pull request.

</Tip>

**2. Prepare transformers dev environment**

Having selected the model architecture, open a draft PR to signal your intention to work on it. Follow the
instructions below to set up your environment and open a draft PR.

1. Fork the [repository](https://github.com/huggingface/transformers) by clicking on the 'Fork' button on the
   repository's page. This creates a copy of the code under your GitHub user account.

2. Clone your `transformers` fork to your local disk, and add the base repository as a remote:

```bash
git clone https://github.com/[your Github handle]/transformers.git
cd transformers
git remote add upstream https://github.com/huggingface/transformers.git
```

3. Set up a development environment, for instance by running the following command:

```bash
python -m venv .env
source .env/bin/activate
pip install -e ".[dev]"
```

Depending on your OS, and since the number of optional dependencies of Transformers is growing, you might get a
failure with this command. If that's the case make sure to install TensorFlow then do:

```bash
pip install -e ".[quality]"
```

**Note:** You don't need to have CUDA installed. Making the new model work on CPU is sufficient.

4. Create a branch with a descriptive name from your main branch

```bash
git checkout -b add_tf_brand_new_bert
```

5. Fetch and rebase to current main

```bash
git fetch upstream
git rebase upstream/main
```

6. Add an empty `.py` file in `transformers/src/models/brandnewbert/` named `modeling_tf_brandnewbert.py`. This will
be your TensorFlow model file.

7. Push the changes to your account using:

```bash
git add .
git commit -m "initial commit"
git push -u origin add_tf_brand_new_bert
```

8. Once you are satisfied, go to the webpage of your fork on GitHub. Click on โ€œPull requestโ€.
Make sure to add the GitHub handle of some members of the Hugging Face team as reviewers, so that the Hugging
Face team gets notified for future changes.

9. Change the PR into a draft by clicking on โ€œConvert to draftโ€ on the right of the GitHub pull request web page.

Now you have set up a development environment to port *BrandNewBert* to TensorFlow in ๐Ÿค— Transformers.

**3. (Optional) Understand theoretical aspects and the existing implementation**

You should take some time to read *BrandNewBert's* paper, if such descriptive work exists. There might be large
sections of the paper that are difficult to understand. If this is the case, this is fine - don't worry! The goal is
not to get a deep theoretical understanding of the paper, but to extract the necessary information required to
effectively re-implement the model in ๐Ÿค— Transformers using TensorFlow. That being said, you don't have to spend too
much time on the theoretical aspects, but rather focus on the practical ones, namely the existing model documentation
page (e.g. [model docs for BERT](model_doc/bert)).

After you've grasped the basics of the models you are about to implement, it's important to understand the existing
implementation. This is a great chance to confirm that a working implementation matches your expectations for the
model, as well as to foresee technical challenges on the TensorFlow side.

It's perfectly natural that you feel overwhelmed with the amount of information that you've just absorbed. It is
definitely not a requirement that you understand all facets of the model at this stage. Nevertheless, we highly
encourage you to clear any pressing questions in our [forum](https://discuss.huggingface.co/).

### 4. Model implementation

Now it's time to finally start coding. Our suggested starting point is the PyTorch file itself: copy the contents of
`modeling_brand_new_bert.py` inside `src/transformers/models/brand_new_bert/` into
`modeling_tf_brand_new_bert.py`.
The goal of this section is to modify the file and update the import structure of
๐Ÿค— Transformers such that you can import `TFBrandNewBert` and
`TFBrandNewBert.from_pretrained(model_repo, from_pt=True)` successfully loads a working TensorFlow *BrandNewBert* model.

Sadly, there is no prescription to convert a PyTorch model into TensorFlow. You can, however, follow our selection of
tips to make the process as smooth as possible:

- Prepend `TF` to the name of all classes (e.g. `BrandNewBert` becomes `TFBrandNewBert`).
- Most PyTorch operations have a direct TensorFlow replacement. For example, `torch.nn.Linear` corresponds to
`tf.keras.layers.Dense`, `torch.nn.Dropout` corresponds to `tf.keras.layers.Dropout`, etc. If you're not sure about a
specific operation, you can use the [TensorFlow documentation](https://www.tensorflow.org/api_docs/python/tf) or the
[PyTorch documentation](https://pytorch.org/docs/stable/).
- Look for patterns in the ๐Ÿค— Transformers codebase. If you come across a certain operation that doesn't have a direct
replacement, the odds are that someone else already had the same problem.
- By default, keep the same variable names and structure as in PyTorch. This will make it easier to debug, track
issues, and add fixes down the line.
- Some layers have different default values in each framework. A notable example is the batch normalization layer's
epsilon (`1e-5` in [PyTorch](https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm2d.html#torch.nn.BatchNorm2d)
and `1e-3` in [TensorFlow](https://www.tensorflow.org/api_docs/python/tf/keras/layers/BatchNormalization)).
Double-check the documentation!
- PyTorch's `nn.Parameter` variables typically need to be initialized within TF Layer's `build()`.
See the following example:
[PyTorch](https://github.com/huggingface/transformers/blob/655f72a6896c0533b1bdee519ed65a059c2425ac/src/transformers/models/vit_mae/modeling_vit_mae.py#L212) /
[TensorFlow](https://github.com/huggingface/transformers/blob/655f72a6896c0533b1bdee519ed65a059c2425ac/src/transformers/models/vit_mae/modeling_tf_vit_mae.py#L220)
- If the PyTorch model has a `#copied from ...` on top of a function, the odds are that your TensorFlow model can also
borrow that function from the architecture it was copied from, assuming it has a TensorFlow architecture.
- Assigning the `name` attribute correctly in TensorFlow functions is critical to do the `from_pt=True` weight
cross-loading. `name` is almost always the name of the corresponding variable in the PyTorch code. If `name` is not
properly set, you will see it in the error message when loading the model weights.
- The logic of the base model class, `BrandNewBertModel`, will actually reside in `TFBrandNewBertMainLayer`, a Keras
layer subclass ([example](https://github.com/huggingface/transformers/blob/4fd32a1f499e45f009c2c0dea4d81c321cba7e02/src/transformers/models/bert/modeling_tf_bert.py#L719)).
`TFBrandNewBertModel` will simply be a wrapper around this layer.
- Keras models need to be built in order to load pretrained weights. For that reason, `TFBrandNewBertPreTrainedModel`
will need to hold an example of inputs to the model, the `dummy_inputs`
([example](https://github.com/huggingface/transformers/blob/4fd32a1f499e45f009c2c0dea4d81c321cba7e02/src/transformers/models/bert/modeling_tf_bert.py#L916)).
- If you get stuck, ask for help - we're here to help you! ๐Ÿค—

In addition to the model file itself, you will also need to add the pointers to the model classes and related
documentation pages. You can complete this part entirely following the patterns in other PRs
([example](https://github.com/huggingface/transformers/pull/18020/files)).
Here's a list of the needed manual changes:

- Include all public classes of *BrandNewBert* in `src/transformers/__init__.py`
- Add *BrandNewBert* classes to the corresponding Auto classes in `src/transformers/models/auto/modeling_tf_auto.py`
- Add the lazy loading classes related to *BrandNewBert* in `src/transformers/utils/dummy_tf_objects.py`
- Update the import structures for the public classes in `src/transformers/models/brand_new_bert/__init__.py`
- Add the documentation pointers to the public methods of *BrandNewBert* in
`docs/source/en/model_doc/brand_new_bert.md`
- Add yourself to the list of contributors to *BrandNewBert* in `docs/source/en/model_doc/brand_new_bert.md`
- Finally, add a green tick โœ… to the TensorFlow column of *BrandNewBert* in `docs/source/en/index.md`

When you're happy with your implementation, run the following checklist to confirm that your model architecture is
ready:

1. All layers that behave differently at train time (e.g. Dropout) are called with a `training` argument, which is
propagated all the way from the top-level classes
2. You have used `#copied from ...` whenever possible
3. `TFBrandNewBertMainLayer` and all classes that use it have their `call` function decorated with `@unpack_inputs`
4. `TFBrandNewBertMainLayer` is decorated with `@keras_serializable`
5. A TensorFlow model can be loaded from PyTorch weights using `TFBrandNewBert.from_pretrained(model_repo, from_pt=True)`
6. You can call the TensorFlow model using the expected input format

### 5. Add model tests

Hurray, you've implemented a TensorFlow model! Now it's time to add tests to make sure that your model behaves as
expected. As in the previous section, we suggest you start by copying the `test_modeling_brand_new_bert.py` file in
`tests/models/brand_new_bert/` into `test_modeling_tf_brand_new_bert.py`, and continue by making the necessary
TensorFlow replacements.
For now, in all `.from_pretrained()` calls, you should use the `from_pt=True` flag to load the existing PyTorch weights.

After you're done, it's time for the moment of truth: run the tests! ๐Ÿ˜ฌ

```bash
NVIDIA_TF32_OVERRIDE=0 RUN_SLOW=1 RUN_PT_TF_CROSS_TESTS=1 \
py.test -vv tests/models/brand_new_bert/test_modeling_tf_brand_new_bert.py
```

The most likely outcome is that you'll see a bunch of errors. Don't worry, this is expected! Debugging ML models is
notoriously hard, and the key ingredient to success is patience (and `breakpoint()`). In our experience, the hardest
problems arise from subtle mismatches between ML frameworks, for which we have a few pointers at the end of this guide.
In other cases, a general test might not be directly applicable to your model, in which case we suggest an override
at the model test class level. Regardless of the issue, don't hesitate to ask for help in your draft pull request if
you're stuck.

When all tests pass, congratulations, your model is nearly ready to be added to the ๐Ÿค— Transformers library! ๐ŸŽ‰

### 6.-7. Ensure everyone can use your model

**6. Submit the pull request**

Once you're done with the implementation and the tests, it's time to submit a pull request. Before pushing your code,
run our code formatting utility, `make fixup` ๐Ÿช„. This will automatically fix any formatting issues, which would cause
our automatic checks to fail.

It's now time to convert your draft pull request into a real pull request. To do so, click on the "Ready for
review" button and add Joao (`@gante`) and Matt (`@Rocketknight1`) as reviewers. A model pull request will need
at least 3 reviewers, but they will take care of finding appropriate additional reviewers for your model.

After all reviewers are happy with the state of your PR, the final action point is to remove the `from_pt=True` flag in
`.from_pretrained()` calls. Since there are no TensorFlow weights, you will have to add them!
Check the section below for instructions on how to do it.

Finally, when the TensorFlow weights get merged, you have at least 3 reviewer approvals, and all CI checks are
green, double-check the tests locally one last time

```bash
NVIDIA_TF32_OVERRIDE=0 RUN_SLOW=1 RUN_PT_TF_CROSS_TESTS=1 \
py.test -vv tests/models/brand_new_bert/test_modeling_tf_brand_new_bert.py
```

and we will merge your PR! Congratulations on the milestone ๐ŸŽ‰

**7. (Optional) Build demos and share with the world**

One of the hardest parts about open-source is discovery. How can the other users learn about the existence of your
fabulous TensorFlow contribution? With proper communication, of course! ๐Ÿ“ฃ

There are two main ways to share your model with the community:
- Build demos. These include Gradio demos, notebooks, and other fun ways to show off your model. We highly
  encourage you to add a notebook to our [community-driven demos](https://huggingface.co/docs/transformers/community).
- Share stories on social media like Twitter and LinkedIn. You should be proud of your work and share
  your achievement with the community - your model can now be used by thousands of engineers and researchers around
  the world ๐ŸŒ! We will be happy to retweet your posts and help you share your work with the community.

## Adding TensorFlow weights to ๐Ÿค— Hub

Assuming that the TensorFlow model architecture is available in ๐Ÿค— Transformers, converting PyTorch weights into
TensorFlow weights is a breeze!

Here's how to do it:
1. Make sure you are logged into your Hugging Face account in your terminal. You can log in using the command
`huggingface-cli login` (you can find your access tokens [here](https://huggingface.co/settings/tokens))
2. Run `transformers-cli pt-to-tf --model-name foo/bar`, where `foo/bar` is the name of the model repository
containing the PyTorch weights you want to convert
3. Tag `@joaogante` and `@Rocketknight1` in the ๐Ÿค— Hub PR the command above has just created

That's it!
๐ŸŽ‰

## Debugging mismatches across ML frameworks ๐Ÿ›

At some point, when adding a new architecture or when creating TensorFlow weights for an existing architecture, you
might come across errors complaining about mismatches between PyTorch and TensorFlow. You might even decide to open the
model architecture code for the two frameworks, and find that they look identical. What's going on? ๐Ÿค”

First of all, let's talk about why understanding these mismatches matters. Many community members will use ๐Ÿค—
Transformers models out of the box, and trust that our models behave as expected. When there is a large mismatch
between the two frameworks, it implies that the model is not following the reference implementation for at least one
of the frameworks. This might lead to silent failures, in which the model runs but has poor performance. This is
arguably worse than a model that fails to run at all! To that end, we aim at having a framework mismatch smaller than
`1e-5` at all stages of the model.

As in other numerical problems, the devil is in the details. And as in any detail-oriented craft, the secret
ingredient here is patience. Here is our suggested workflow for when you come across this type of issue:

1. Locate the source of mismatches. The model you're converting probably has near identical inner variables up to a
certain point. Place `breakpoint()` statements in the two frameworks' architectures, and compare the values of the
numerical variables in a top-down fashion until you find the source of the problems.
2. Now that you've pinpointed the source of the issue, get in touch with the ๐Ÿค— Transformers team. It is possible
that we've seen a similar problem before and can promptly provide a solution. As a fallback, scan popular pages
like StackOverflow and GitHub issues.
3. If there is no solution in sight, it means you'll have to go deeper.
The good news is that you've located the issue, so you can focus on the problematic instruction, abstracting away the
rest of the model! The bad news is that you'll have to venture into the source implementation of said instruction.
In some cases, you might find an issue with a reference implementation - don't abstain from opening an issue in the
upstream repository.

In some cases, in discussion with the ๐Ÿค— Transformers team, we might find that fixing the mismatch is infeasible.
When the mismatch is very small in the output layers of the model (but potentially large in the hidden states), we
might decide to ignore it in favor of distributing the model. The `pt-to-tf` CLI mentioned above has a `--max-error`
flag to override the error message at weight conversion time.
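The `1e-5` target mentioned above can be checked with a small, framework-agnostic helper like the one sketched below. This is illustrative only; in practice you would compare the actual PyTorch and TensorFlow tensors, for example with `numpy.testing.assert_allclose`.

```python
def max_abs_diff(a, b):
    """Largest element-wise absolute difference between two flat lists of
    floats, as a stand-in for comparing PyTorch and TensorFlow hidden
    states while hunting for the source of a mismatch."""
    assert len(a) == len(b), "outputs must have the same shape"
    return max(abs(x - y) for x, y in zip(a, b))


# Two hypothetical outputs that agree within the 1e-5 target:
pt_out = [0.1234567, -2.5000001, 0.99]
tf_out = [0.1234568, -2.5000000, 0.99]
print(max_abs_diff(pt_out, tf_out) < 1e-5)  # True
```

Applying this check top-down, layer by layer, is exactly the bisection strategy described in step 1 of the workflow above.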
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Summary of the tokenizers

[[open-in-colab]]

On this page, we will have a closer look at tokenization.

<Youtube id="VFp38yj8h3A"/>

As we saw in [the preprocessing tutorial](preprocessing), tokenizing a text is splitting it into words or
subwords, which then are converted to ids through a look-up table. Converting words or subwords to ids is
straightforward, so in this summary, we will focus on splitting a text into words or subwords (i.e. tokenizing a
text). More specifically, we will look at the three main types of tokenizers used in ๐Ÿค— Transformers:
[Byte-Pair Encoding (BPE)](#byte-pair-encoding), [WordPiece](#wordpiece), and [SentencePiece](#sentencepiece), and
show examples of which tokenizer type is used by which model.

Note that on each model page, you can look at the documentation of the associated tokenizer to know which tokenizer
type was used by the pretrained model. For instance, if we look at [`BertTokenizer`], we can see that the model uses
[WordPiece](#wordpiece).

## Introduction

Splitting a text into smaller chunks is a task that is harder than it looks, and there are multiple ways of doing so.
For instance, let's look at the sentence `"Don't you love ๐Ÿค— Transformers?
We sure do."`

<Youtube id="nhJxYji1aho"/>

A simple way of tokenizing this text is to split it by spaces, which would give:

```
["Don't", "you", "love", "๐Ÿค—", "Transformers?", "We", "sure", "do."]
```

This is a sensible first step, but if we look at the tokens `"Transformers?"` and `"do."`, we notice that the
punctuation is attached to the words `"Transformer"` and `"do"`, which is suboptimal. We should take the
punctuation into account so that a model does not have to learn a different representation of a word and every
possible punctuation symbol that could follow it, which would explode the number of representations the model has to
learn. Taking punctuation into account, tokenizing our exemplary text would give:

```
["Don", "'", "t", "you", "love", "๐Ÿค—", "Transformers", "?", "We", "sure", "do", "."]
```

Better. However, the way the tokenization dealt with the word `"Don't"` is suboptimal. `"Don't"` stands for
`"do not"`, so it would be better tokenized as `["Do", "n't"]`. This is where things start getting complicated, and
part of the reason each model has its own tokenizer type. Depending on the rules we apply for tokenizing a text, a
different tokenized output is generated for the same text. A pretrained model only performs properly if you feed it an
input that was tokenized with the same rules that were used to tokenize its training data.

[spaCy](https://spacy.io/) and [Moses](http://www.statmt.org/moses/?n=Development.GetStarted) are two popular
rule-based tokenizers. Applying them on our example, *spaCy* and *Moses* would output something like:

```
["Do", "n't", "you", "love", "๐Ÿค—", "Transformers", "?", "We", "sure", "do", "."]
```

As can be seen, space and punctuation tokenization, as well as rule-based tokenization, is used here. Space and
punctuation tokenization and rule-based tokenization are both examples of word tokenization, which is loosely defined
as splitting sentences into words.
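A toy version of space and punctuation tokenization can be written in a couple of lines. This is a deliberately simplistic sketch: real rule-based tokenizers like spaCy and Moses apply far richer, language-specific rules, including contraction handling such as `"Don't"` becoming `["Do", "n't"]`.

```python
import re


def naive_word_tokenize(text):
    # Split into runs of word characters, or single punctuation marks.
    # This reproduces the space-and-punctuation split, but NOT the
    # contraction handling of real rule-based tokenizers.
    return re.findall(r"\w+|[^\w\s]", text)


print(naive_word_tokenize("Don't you love Transformers? We sure do."))
# ['Don', "'", 't', 'you', 'love', 'Transformers', '?', 'We', 'sure', 'do', '.']
```

As the output shows, the apostrophe and the sentence-final punctuation become their own tokens, exactly as in the hand-written example above.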
While it's the most intuitive way to split texts into smaller chunks, this tokenization method can lead to problems for massive text corpora. In this case, space and punctuation tokenization usually generates a very big vocabulary (the set of all unique words and tokens used). *E.g.*, [Transformer XL](model_doc/transformerxl) uses space and punctuation tokenization, resulting in a vocabulary size of 267,735! Such a big vocabulary size forces the model to have an enormous embedding matrix as the input and output layer, which increases both memory use and time complexity. In general, transformers models rarely have a vocabulary size greater than 50,000, especially if they are pretrained only on a single language.

So if simple space and punctuation tokenization is unsatisfactory, why not simply tokenize on characters?

<Youtube id="ssLq_EK2jLE"/>

While character tokenization is very simple and would greatly reduce memory and time complexity, it makes it much harder for the model to learn meaningful input representations. *E.g.*, learning a meaningful context-independent representation for the letter `"t"` is much harder than learning a context-independent representation for the word `"today"`. Therefore, character tokenization is often accompanied by a loss of performance. So to get the best of both worlds, transformers models use a hybrid between word-level and character-level tokenization called **subword** tokenization.

## Subword tokenization

<Youtube id="zHvTiHr506c"/>

Subword tokenization algorithms rely on the principle that frequently used words should not be split into smaller subwords, but rare words should be decomposed into meaningful subwords. For instance, `"annoyingly"` might be considered a rare word and could be decomposed into `"annoying"` and `"ly"`. Both `"annoying"` and `"ly"` as stand-alone subwords would appear more frequently, while at the same time the meaning of `"annoyingly"` is kept by the composite meaning of `"annoying"` and `"ly"`.
This is especially useful in agglutinative languages such as Turkish, where you can form (almost) arbitrarily long complex words by stringing together subwords.

Subword tokenization allows the model to have a reasonable vocabulary size while being able to learn meaningful context-independent representations. In addition, subword tokenization enables the model to process words it has never seen before, by decomposing them into known subwords. For instance, the [`~transformers.BertTokenizer`] tokenizes `"I have a new GPU!"` as follows:

```py
>>> from transformers import BertTokenizer

>>> tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
>>> tokenizer.tokenize("I have a new GPU!")
["i", "have", "a", "new", "gp", "##u", "!"]
```

Because we are considering the uncased model, the sentence was lowercased first. We can see that the words `["i", "have", "a", "new"]` are present in the tokenizer's vocabulary, but the word `"gpu"` is not. Consequently, the tokenizer splits `"gpu"` into the known subwords `["gp", "##u"]`. The `"##"` means that the rest of the token should be attached to the previous one, without a space (for decoding or reversal of the tokenization).

As another example, [`~transformers.XLNetTokenizer`] tokenizes our earlier example text as follows:

```py
>>> from transformers import XLNetTokenizer

>>> tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
>>> tokenizer.tokenize("Don't you love 🤗 Transformers? We sure do.")
["▁Don", "'", "t", "▁you", "▁love", "▁", "🤗", "▁", "Transform", "ers", "?", "▁We", "▁sure", "▁do", "."]
```

We'll get back to the meaning of those `"▁"` symbols when we look at [SentencePiece](#sentencepiece). As one can see, the rare word `"Transformers"` has been split into the more frequent subwords `"Transform"` and `"ers"`.

Let's now look at how the different subword tokenization algorithms work.
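Before moving on, note that the `"##"` continuation marker in the BERT example above makes detokenization straightforward. A minimal sketch (illustrative only, not BERT's actual decoding code — it does not fix up spacing around punctuation):

```python
def wordpiece_detokenize(tokens):
    """Join WordPiece-style tokens back into a string.

    Tokens starting with "##" are glued to the previous token
    without a space; all other tokens start a new word.
    """
    words = []
    for token in tokens:
        if token.startswith("##") and words:
            words[-1] += token[2:]  # continuation: strip the marker, no space
        else:
            words.append(token)
    return " ".join(words)

print(wordpiece_detokenize(["i", "have", "a", "new", "gp", "##u", "!"]))
# i have a new gpu !
```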
Note that all of those tokenization algorithms rely on some form of training, which is usually done on the corpus the corresponding model will be trained on.

<a id='byte-pair-encoding'></a>

### Byte-Pair Encoding (BPE)

Byte-Pair Encoding (BPE) was introduced in [Neural Machine Translation of Rare Words with Subword Units (Sennrich et al., 2015)](https://arxiv.org/abs/1508.07909). BPE relies on a pre-tokenizer that splits the training data into words. Pre-tokenization can be as simple as space tokenization, e.g. [GPT-2](model_doc/gpt2) and [RoBERTa](model_doc/roberta). More advanced pre-tokenization includes rule-based tokenization, e.g. [XLM](model_doc/xlm) and [FlauBERT](model_doc/flaubert), which use Moses for most languages, or [GPT](model_doc/gpt), which uses spaCy and ftfy, to count the frequency of each word in the training corpus.

After pre-tokenization, a set of unique words has been created and the frequency with which each word occurred in the training data has been determined. Next, BPE creates a base vocabulary consisting of all symbols that occur in the set of unique words and learns merge rules to form a new symbol from two symbols of the base vocabulary. It does so until the vocabulary has attained the desired vocabulary size. Note that the desired vocabulary size is a hyperparameter to define before training the tokenizer.

As an example, let's assume that after pre-tokenization, the following set of words including their frequency has been determined:

```
("hug", 10), ("pug", 5), ("pun", 12), ("bun", 4), ("hugs", 5)
```

Consequently, the base vocabulary is `["b", "g", "h", "n", "p", "s", "u"]`. Splitting all words into symbols of the base vocabulary, we obtain:

```
("h" "u" "g", 10), ("p" "u" "g", 5), ("p" "u" "n", 12), ("b" "u" "n", 4), ("h" "u" "g" "s", 5)
```

BPE then counts the frequency of each possible symbol pair and picks the symbol pair that occurs most frequently.
In the example above, `"h"` followed by `"u"` is present _10 + 5 = 15_ times (10 times in the 10 occurrences of `"hug"`, 5 times in the 5 occurrences of `"hugs"`). However, the most frequent symbol pair is `"u"` followed by `"g"`, occurring _10 + 5 + 5 = 20_ times in total. Thus, the first merge rule the tokenizer learns is to group all `"u"` symbols followed by a `"g"` symbol together. Next, `"ug"` is added to the vocabulary. The set of words then becomes

```
("h" "ug", 10), ("p" "ug", 5), ("p" "u" "n", 12), ("b" "u" "n", 4), ("h" "ug" "s", 5)
```

BPE then identifies the next most common symbol pair. It's `"u"` followed by `"n"`, which occurs 16 times. `"u"`, `"n"` is merged to `"un"` and added to the vocabulary. The next most frequent symbol pair is `"h"` followed by `"ug"`, occurring 15 times. Again the pair is merged and `"hug"` can be added to the vocabulary.

At this stage, the vocabulary is `["b", "g", "h", "n", "p", "s", "u", "ug", "un", "hug"]` and our set of unique words is represented as

```
("hug", 10), ("p" "ug", 5), ("p" "un", 12), ("b" "un", 4), ("hug" "s", 5)
```

Assuming that the Byte-Pair Encoding training would stop at this point, the learned merge rules would then be applied to new words (as long as those new words do not include symbols that were not in the base vocabulary). For instance, the word `"bug"` would be tokenized to `["b", "ug"]` but `"mug"` would be tokenized as `["<unk>", "ug"]` since the symbol `"m"` is not in the base vocabulary. In general, single letters such as `"m"` are not replaced by the `"<unk>"` symbol because the training data usually includes at least one occurrence of each letter, but it is likely to happen for very special characters like emojis.

As mentioned earlier, the vocabulary size, *i.e.* the base vocabulary size + the number of merges, is a hyperparameter to choose.
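The merge procedure walked through above can be sketched in a few lines of Python. This is a toy implementation for the example word set, not the trainer used by 🤗 Transformers tokenizers:

```python
from collections import Counter

# Toy corpus: words split into base symbols, with their frequencies.
corpus = {
    ("h", "u", "g"): 10,
    ("p", "u", "g"): 5,
    ("p", "u", "n"): 12,
    ("b", "u", "n"): 4,
    ("h", "u", "g", "s"): 5,
}

def most_frequent_pair(corpus):
    """Count adjacent symbol pairs, weighted by word frequency."""
    pairs = Counter()
    for word, freq in corpus.items():
        for a, b in zip(word, word[1:]):
            pairs[(a, b)] += freq
    return pairs.most_common(1)[0]

def apply_merge(corpus, pair):
    """Replace every occurrence of the pair with the merged symbol."""
    merged = {}
    for word, freq in corpus.items():
        new_word, i = [], 0
        while i < len(word):
            if i < len(word) - 1 and (word[i], word[i + 1]) == pair:
                new_word.append(word[i] + word[i + 1])
                i += 2
            else:
                new_word.append(word[i])
                i += 1
        merged[tuple(new_word)] = freq
    return merged

pair, count = most_frequent_pair(corpus)
print(pair, count)                 # ('u', 'g') 20 -> the first merge rule
corpus = apply_merge(corpus, pair)
print(most_frequent_pair(corpus))  # (('u', 'n'), 16) -> the next merge
```

Repeating the loop until the vocabulary reaches the desired size reproduces the merges in the walkthrough above.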
For instance, [GPT](model_doc/gpt) has a vocabulary size of 40,478 since they have 478 base characters and chose to stop training after 40,000 merges.

#### Byte-level BPE

A base vocabulary that includes all possible base characters can be quite large if *e.g.* all unicode characters are considered as base characters. To have a better base vocabulary, [GPT-2](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) uses bytes as the base vocabulary, which is a clever trick to force the base vocabulary to be of size 256 while ensuring that every base character is included in the vocabulary. With some additional rules to deal with punctuation, the GPT-2 tokenizer can tokenize every text without the need for an `<unk>` symbol. [GPT-2](model_doc/gpt2) has a vocabulary size of 50,257, which corresponds to the 256 byte-level base tokens, a special end-of-text token, and the symbols learned with 50,000 merges.

<a id='wordpiece'></a>

### WordPiece

WordPiece is the subword tokenization algorithm used for [BERT](model_doc/bert), [DistilBERT](model_doc/distilbert), and [Electra](model_doc/electra). The algorithm was outlined in [Japanese and Korean Voice Search (Schuster et al., 2012)](https://static.googleusercontent.com/media/research.google.com/ja//pubs/archive/37842.pdf) and is very similar to BPE. WordPiece first initializes the vocabulary to include every character present in the training data and progressively learns a given number of merge rules. In contrast to BPE, WordPiece does not choose the most frequent symbol pair, but the one that maximizes the likelihood of the training data once added to the vocabulary.

So what does this mean exactly? Referring to the previous example, maximizing the likelihood of the training data is equivalent to finding the symbol pair whose probability divided by the probabilities of its first symbol followed by its second symbol is the greatest among all symbol pairs.
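Using the toy word set from the BPE section, this criterion can be sketched with the commonly used pair score freq(a, b) / (freq(a) · freq(b)) — a simplification of the likelihood argument, not the exact objective from the paper:

```python
from collections import Counter

corpus = {
    ("h", "u", "g"): 10,
    ("p", "u", "g"): 5,
    ("p", "u", "n"): 12,
    ("b", "u", "n"): 4,
    ("h", "u", "g", "s"): 5,
}

symbol_freq, pair_freq = Counter(), Counter()
for word, freq in corpus.items():
    for symbol in word:
        symbol_freq[symbol] += freq
    for pair in zip(word, word[1:]):
        pair_freq[pair] += freq

# WordPiece-style score: frequent pairs built from rare symbols win,
# unlike BPE, which only looks at the raw pair frequency.
scores = {
    pair: freq / (symbol_freq[pair[0]] * symbol_freq[pair[1]])
    for pair, freq in pair_freq.items()
}
best = max(scores, key=scores.get)
print(best, round(scores[best], 4))  # ('g', 's') 0.05
```

Note that under this score the first merge would be `("g", "s")` rather than the most frequent pair `("u", "g")`: the score penalizes pairs built from very common symbols.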
*E.g.* `"u"`, followed by `"g"` would only have been merged if the probability of `"ug"` divided by the probabilities of `"u"` and `"g"` had been greater than for any other symbol pair. Intuitively, WordPiece is slightly different from BPE in that it evaluates what it _loses_ by merging two symbols to ensure it's _worth it_.

<a id='unigram'></a>

### Unigram

Unigram is a subword tokenization algorithm introduced in [Subword Regularization: Improving Neural Network Translation Models with Multiple Subword Candidates (Kudo, 2018)](https://arxiv.org/pdf/1804.10959.pdf). In contrast to BPE or WordPiece, Unigram initializes its base vocabulary to a large number of symbols and progressively trims down each symbol to obtain a smaller vocabulary. The base vocabulary could for instance correspond to all pre-tokenized words and the most common substrings. Unigram is not used directly for any of the models in 🤗 Transformers, but it's used in conjunction with [SentencePiece](#sentencepiece).

At each training step, the Unigram algorithm defines a loss (often defined as the log-likelihood) over the training data given the current vocabulary and a unigram language model. Then, for each symbol in the vocabulary, the algorithm computes how much the overall loss would increase if the symbol were to be removed from the vocabulary. Unigram then removes p percent (with p usually being 10% or 20%) of the symbols whose loss increase is the lowest, *i.e.* those symbols that least affect the overall loss over the training data. This process is repeated until the vocabulary has reached the desired size. The Unigram algorithm always keeps the base characters so that any word can be tokenized.

Because Unigram is not based on merge rules (in contrast to BPE and WordPiece), the algorithm has several ways of tokenizing new text after training.
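One common choice — picking the most likely tokenization under the learned token probabilities — can be sketched with a small recursive search. The vocabulary and probabilities below are made up for illustration; a real Unigram/SentencePiece model stores learned values:

```python
import math
from functools import lru_cache

# Hypothetical token probabilities for a toy vocabulary.
probs = {
    "b": 0.04, "g": 0.05, "h": 0.06, "n": 0.07, "p": 0.05,
    "s": 0.06, "u": 0.08, "ug": 0.15, "un": 0.20, "hug": 0.30,
}

@lru_cache(maxsize=None)
def best_tokenization(word):
    """Return (log-probability, tokens) of the most likely split.

    Assumes every single character of `word` is in the vocabulary,
    which Unigram guarantees by always keeping the base characters.
    """
    if not word:
        return 0.0, ()
    candidates = []
    for i in range(1, len(word) + 1):
        prefix = word[:i]
        if prefix in probs:
            rest_logp, rest_tokens = best_tokenization(word[i:])
            candidates.append(
                (math.log(probs[prefix]) + rest_logp, (prefix,) + rest_tokens)
            )
    return max(candidates)

logp, tokens = best_tokenization("hugs")
print(tokens)  # ('hug', 's')
```

Sampling a tokenization according to its probability instead of taking the argmax is what enables the subword regularization described in the paper.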
As an example, if a trained Unigram tokenizer has the vocabulary:

```
["b", "g", "h", "n", "p", "s", "u", "ug", "un", "hug"],
```

`"hugs"` could be tokenized as `["hug", "s"]`, `["h", "ug", "s"]` or `["h", "u", "g", "s"]`. So which one to choose? Unigram saves the probability of each token in the training corpus on top of saving the vocabulary so that the probability of each possible tokenization can be computed after training. In practice, the algorithm simply picks the most likely tokenization, but it also offers the possibility to sample a possible tokenization according to its probability.

Those probabilities are defined by the loss the tokenizer is trained on. Assuming that the training data consists of the words \\(x_{1}, \dots, x_{N}\\) and that the set of all possible tokenizations for a word \\(x_{i}\\) is defined as \\(S(x_{i})\\), then the overall loss is defined as

$$\mathcal{L} = -\sum_{i=1}^{N} \log \left ( \sum_{x \in S(x_{i})} p(x) \right )$$

<a id='sentencepiece'></a>

### SentencePiece

All tokenization algorithms described so far have the same problem: it is assumed that the input text uses spaces to separate words. However, not all languages use spaces to separate words. One possible solution is to use language-specific pre-tokenizers, *e.g.* [XLM](model_doc/xlm) uses a specific Chinese, Japanese, and Thai pre-tokenizer. To solve this problem more generally, [SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing (Kudo et al., 2018)](https://arxiv.org/pdf/1808.06226.pdf) treats the input as a raw input stream, thus including the space in the set of characters to use. It then uses the BPE or unigram algorithm to construct the appropriate vocabulary.

The [`XLNetTokenizer`] uses SentencePiece, for example, which is also why in the example earlier the `"▁"` character was included in the vocabulary.
Decoding with SentencePiece is very easy since all tokens can just be concatenated and `"โ–"` is replaced by a space. All transformers models in the library that use SentencePiece use it in combination with unigram. Examples of models using SentencePiece are [ALBERT](model_doc/albert), [XLNet](model_doc/xlnet), [Marian](model_doc/marian), and [T5](model_doc/t5).
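As a closing illustration, the SentencePiece decoding rule — concatenate all tokens and turn `"▁"` back into spaces — fits in a couple of lines (the `sentencepiece` library's own `decode` handles this internally):

```python
tokens = ["▁Don", "'", "t", "▁you", "▁love", "▁", "🤗", "▁", "Transform", "ers", "?"]

# Concatenate everything, then turn the "▁" word-boundary marker back into spaces.
decoded = "".join(tokens).replace("▁", " ").lstrip()
print(decoded)  # Don't you love 🤗 Transformers?
```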
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Efficient Training on Multiple CPUs

When training on a single CPU is too slow, we can use multiple CPUs. This guide focuses on PyTorch-based DDP (DistributedDataParallel), which enables efficient distributed CPU training.

## Intel® oneCCL Bindings for PyTorch

[Intel® oneCCL](https://github.com/oneapi-src/oneCCL) (collective communications library) is a library for efficient distributed deep learning training, implementing collectives such as allreduce, allgather, and alltoall. For more information on oneCCL, please refer to the [oneCCL documentation](https://spec.oneapi.com/versions/latest/elements/oneCCL/source/index.html) and [oneCCL specification](https://spec.oneapi.com/versions/latest/elements/oneCCL/source/index.html).

The module `oneccl_bindings_for_pytorch` (`torch_ccl` before version 1.12) implements the PyTorch C10D ProcessGroup API and can be dynamically loaded as an external ProcessGroup; it currently only works on the Linux platform. Check [oneccl_bind_pt](https://github.com/intel/torch-ccl) for more detailed information.
### Intelยฎ oneCCL Bindings for PyTorch installation: Wheel files are available for the following Python versions: | Extension Version | Python 3.6 | Python 3.7 | Python 3.8 | Python 3.9 | Python 3.10 | | :---------------: | :--------: | :--------: | :--------: | :--------: | :---------: | | 1.13.0 | | โˆš | โˆš | โˆš | โˆš | | 1.12.100 | | โˆš | โˆš | โˆš | โˆš | | 1.12.0 | | โˆš | โˆš | โˆš | โˆš | | 1.11.0 | | โˆš | โˆš | โˆš | โˆš | | 1.10.0 | โˆš | โˆš | โˆš | โˆš | | ``` pip install oneccl_bind_pt=={pytorch_version} -f https://developer.intel.com/ipex-whl-stable-cpu ``` where `{pytorch_version}` should be your PyTorch version, for instance 1.13.0. Check more approaches for [oneccl_bind_pt installation](https://github.com/intel/torch-ccl). Versions of oneCCL and PyTorch must match. <Tip warning={true}> oneccl_bindings_for_pytorch 1.12.0 prebuilt wheel does not work with PyTorch 1.12.1 (it is for PyTorch 1.12.0) PyTorch 1.12.1 should work with oneccl_bindings_for_pytorch 1.12.100 </Tip> ## Intelยฎ MPI library Use this standards-based MPI implementation to deliver flexible, efficient, scalable cluster messaging on Intelยฎ architecture. This component is part of the Intelยฎ oneAPI HPC Toolkit. oneccl_bindings_for_pytorch is installed along with the MPI tool set. Need to source the environment before using it. for Intelยฎ oneCCL >= 1.12.0 ``` oneccl_bindings_for_pytorch_path=$(python -c "from oneccl_bindings_for_pytorch import cwd; print(cwd)") source $oneccl_bindings_for_pytorch_path/env/setvars.sh ``` for Intelยฎ oneCCL whose version < 1.12.0 ``` torch_ccl_path=$(python -c "import torch; import torch_ccl; import os; print(os.path.abspath(os.path.dirname(torch_ccl.__file__)))") source $torch_ccl_path/env/setvars.sh ``` #### IPEX installation: IPEX provides performance optimizations for CPU training with both Float32 and BFloat16, you could refer [single CPU section](./perf_train_cpu). 
The following "Usage in Trainer" takes mpirun in the Intel® MPI library as an example.

## Usage in Trainer

To enable multi-CPU distributed training in the Trainer with the ccl backend, users should add **`--ddp_backend ccl`** in the command arguments.

Let's see an example with the [question-answering example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering).

The following command enables training with 2 processes on one Xeon node, with one process running per socket. The variables OMP_NUM_THREADS/CCL_WORKER_COUNT can be tuned for optimal performance.

```shell script
export CCL_WORKER_COUNT=1
export MASTER_ADDR=127.0.0.1
mpirun -n 2 -genv OMP_NUM_THREADS=23 \
python3 run_qa.py \
 --model_name_or_path bert-large-uncased \
 --dataset_name squad \
 --do_train \
 --do_eval \
 --per_device_train_batch_size 12 \
 --learning_rate 3e-5 \
 --num_train_epochs 2 \
 --max_seq_length 384 \
 --doc_stride 128 \
 --output_dir /tmp/debug_squad/ \
 --no_cuda \
 --ddp_backend ccl \
 --use_ipex
```

The following command enables training with a total of four processes on two Xeons (node0 and node1, taking node0 as the main process); ppn (processes per node) is set to 2, with one process running per socket. The variables OMP_NUM_THREADS/CCL_WORKER_COUNT can be tuned for optimal performance.

In node0, you need to create a configuration file which contains the IP addresses of each node (for example hostfile) and pass that configuration file path as an argument.
```shell script cat hostfile xxx.xxx.xxx.xxx #node0 ip xxx.xxx.xxx.xxx #node1 ip ``` Now, run the following command in node0 and **4DDP** will be enabled in node0 and node1 with BF16 auto mixed precision: ```shell script export CCL_WORKER_COUNT=1 export MASTER_ADDR=xxx.xxx.xxx.xxx #node0 ip mpirun -f hostfile -n 4 -ppn 2 \ -genv OMP_NUM_THREADS=23 \ python3 run_qa.py \ --model_name_or_path bert-large-uncased \ --dataset_name squad \ --do_train \ --do_eval \ --per_device_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/debug_squad/ \ --no_cuda \ --ddp_backend ccl \ --use_ipex \ --bf16 ```
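As an aside, the thread-budget arithmetic behind `OMP_NUM_THREADS=23` in the commands above can be made explicit. The core counts below are assumptions for illustration (a node with two 24-core sockets); adjust them to your own hardware:

```python
cores_per_socket = 24   # assumed hardware; check the real value with `lscpu`
ccl_worker_count = 1    # cores reserved per process for oneCCL communication

# One training process per socket; each keeps a core free for its CCL worker,
# so the remaining cores are given to OpenMP compute threads.
omp_num_threads = cores_per_socket - ccl_worker_count
print(omp_num_threads)  # 23 -> matches OMP_NUM_THREADS=23 in the commands above
```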
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Transformers Agents

<Tip warning={true}>

Transformers Agents is an experimental API which is subject to change at any time. Results returned by the agents can vary as the APIs or underlying models are prone to change.

</Tip>

Transformers Agents was released with Transformers version v4.29.0, building on the concept of *tools* and *agents*. You can play with it in [this colab](https://colab.research.google.com/drive/1c7MHD-T1forUPGcC_jlwsIptOzpG3hSj).

In short, it provides a natural language API on top of transformers: we define a set of curated tools and design an agent to interpret natural language and to use these tools. It is extensible by design; we curated some relevant tools, but we'll show you how the system can be extended easily to use any tool developed by the community.

Let's start with a few examples of what can be achieved with this new API. It is particularly powerful when it comes to multimodal tasks, so let's take it for a spin to generate images and read text out loud.
```py agent.run("Caption the following image", image=image) ``` | **Input** | **Output** | |-----------------------------------------------------------------------------------------------------------------------------|-----------------------------------| | <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/beaver.png" width=200> | A beaver is swimming in the water | --- ```py agent.run("Read the following text out loud", text=text) ``` | **Input** | **Output** | |-------------------------------------------------------------------------------------------------------------------------|----------------------------------------------| | A beaver is swimming in the water | <audio controls><source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tts_example.wav" type="audio/wav"> your browser does not support the audio element. </audio> --- ```py agent.run( "In the following `document`, where will the TRRF Scientific Advisory Council Meeting take place?", document=document, ) ``` | **Input** | **Output** | |-----------------------------------------------------------------------------------------------------------------------------|----------------| | <img src="https://datasets-server.huggingface.co/assets/hf-internal-testing/example-documents/--/hf-internal-testing--example-documents/test/0/image/image.jpg" width=200> | ballroom foyer | ## Quickstart Before being able to use `agent.run`, you will need to instantiate an agent, which is a large language model (LLM). We provide support for openAI models as well as opensource alternatives from BigCode and OpenAssistant. The openAI models perform better (but require you to have an openAI API key, so cannot be used for free); Hugging Face is providing free access to endpoints for BigCode and OpenAssistant models. To start with, please install the `agents` extras in order to install all default dependencies. 
```bash pip install transformers[agents] ``` To use openAI models, you instantiate an [`OpenAiAgent`] after installing the `openai` dependency: ```bash pip install openai ``` ```py from transformers import OpenAiAgent agent = OpenAiAgent(model="text-davinci-003", api_key="<your_api_key>") ``` To use BigCode or OpenAssistant, start by logging in to have access to the Inference API: ```py from huggingface_hub import login login("<YOUR_TOKEN>") ``` Then, instantiate the agent ```py from transformers import HfAgent # Starcoder agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder") # StarcoderBase # agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoderbase") # OpenAssistant # agent = HfAgent(url_endpoint="https://api-inference.huggingface.co/models/OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5") ``` This is using the inference API that Hugging Face provides for free at the moment. If you have your own inference endpoint for this model (or another one) you can replace the URL above with your URL endpoint. <Tip> StarCoder and OpenAssistant are free to use and perform admirably well on simple tasks. However, the checkpoints don't hold up when handling more complex prompts. If you're facing such an issue, we recommend trying out the OpenAI model which, while sadly not open-source, performs better at this given time. </Tip> You're now good to go! Let's dive into the two APIs that you now have at your disposal. ### Single execution (run) The single execution method is when using the [`~Agent.run`] method of the agent: ```py agent.run("Draw me a picture of rivers and lakes.") ``` <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes.png" width=200> It automatically selects the tool (or tools) appropriate for the task you want to perform and runs them appropriately. 
It can perform one or several tasks in the same instruction (though the more complex your instruction, the more likely the agent is to fail). ```py agent.run("Draw me a picture of the sea then transform the picture to add an island") ``` <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/sea_and_island.png" width=200> <br/> Every [`~Agent.run`] operation is independent, so you can run it several times in a row with different tasks. Note that your `agent` is just a large-language model, so small variations in your prompt might yield completely different results. It's important to explain as clearly as possible the task you want to perform. We go more in-depth on how to write good prompts [here](custom_tools#writing-good-user-inputs). If you'd like to keep a state across executions or to pass non-text objects to the agent, you can do so by specifying variables that you would like the agent to use. For example, you could generate the first image of rivers and lakes, and ask the model to update that picture to add an island by doing the following: ```python picture = agent.run("Generate a picture of rivers and lakes.") updated_picture = agent.run("Transform the image in `picture` to add an island to it.", picture=picture) ``` <Tip> This can be helpful when the model is unable to understand your request and mixes tools. 
An example would be:

```py
agent.run("Draw me the picture of a capybara swimming in the sea")
```

Here, the model could interpret it in two ways:
- Have the `text-to-image` tool generate a capybara swimming in the sea
- Or, have the `text-to-image` tool generate a capybara, then use the `image-transformation` tool to have it swim in the sea

In case you would like to force the first scenario, you could do so by passing the prompt to it as an argument:

```py
agent.run("Draw me a picture of the `prompt`", prompt="a capybara swimming in the sea")
```

</Tip>

### Chat-based execution (chat)

The agent also has a chat-based approach, using the [`~Agent.chat`] method:

```py
agent.chat("Generate a picture of rivers and lakes")
```

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes.png" width=200>

```py
agent.chat("Transform the picture so that there is a rock in there")
```

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rivers_and_lakes_and_beaver.png" width=200>

<br/>

This is an interesting approach when you want to keep the state across instructions. It's better for experimentation, but it will tend to be much better with single instructions rather than complex ones (which the [`~Agent.run`] method is better at handling).

This method can also take arguments if you would like to pass non-text types or specific prompts.

### ⚠️ Remote execution

For demonstration purposes and so that it could be used with all setups, we created remote executors for several of the default tools the agent had access to for the release. These are created using [inference endpoints](https://huggingface.co/inference-endpoints).

We have turned these off for now, but in order to see how to set up remote executor tools yourself, we recommend reading the [custom tool guide](./custom_tools).

### What's happening here? What are tools, and what are agents?
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/diagram.png">

#### Agents

The "agent" here is a large language model, and we're prompting it so that it has access to a specific set of tools.

LLMs are pretty good at generating small samples of code, so this API takes advantage of that by prompting the LLM to give a small sample of code performing a task with a set of tools. This prompt is then completed by the task you give your agent and the description of the tools you give it. This way it gets access to the doc of the tools you are using, especially their expected inputs and outputs, and can generate the relevant code.

#### Tools

Tools are very simple: they're a single function, with a name, and a description. We then use these tools' descriptions to prompt the agent. Through the prompt, we show the agent how it would leverage tools to perform what was requested in the query.

This is using brand-new tools and not pipelines, because the agent writes better code with very atomic tools. Pipelines are more refactored and often combine several tasks in one. Tools are meant to be focused on one very simple task only.

#### Code-execution?!

This code is then executed with our small Python interpreter on the set of inputs passed along with your tools. We hear you screaming "Arbitrary code execution!" in the back, but let us explain why that is not the case.

The only functions that can be called are the tools you provided and the print function, so you're already limited in what can be executed. You should be safe if it's limited to Hugging Face tools.

Then, we don't allow any attribute lookup or imports (which shouldn't be needed anyway for passing along inputs/outputs to a small set of functions), so all the most obvious attacks (and you'd need to prompt the LLM to output them anyway) shouldn't be an issue.
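To illustrate the idea of such a restricted interpreter, here is a deliberately simplified sketch with an explicit whitelist of callables and no builtins. It is *not* the actual interpreter shipped in `transformers`, which instead interprets the generated code's syntax tree and rejects imports and attribute lookups outright:

```python
def run_restricted(code, tools):
    """Execute `code` with only the whitelisted tools and `print` available.

    Emptying `__builtins__` means that even `import` fails, since the
    `__import__` builtin is unavailable.
    """
    namespace = {"__builtins__": {}, "print": print, **tools}
    exec(code, namespace)
    return namespace

# A fake "tool" standing in for e.g. a captioning or text-to-speech tool.
def shout(text):
    return text.upper() + "!"

result_ns = run_restricted("result = shout('hello')", {"shout": shout})
print(result_ns["result"])  # HELLO!
```

Trying `run_restricted("import os", {})` raises an `ImportError`, which is the kind of restriction the real interpreter enforces far more thoroughly.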
If you want to be on the super safe side, you can execute the `run()` method with the additional argument `return_code=True`, in which case the agent will just return the code to execute and you can decide whether to do it or not.

The execution will stop at any line trying to perform an illegal operation, or if there is a regular Python error with the code generated by the agent.

### A curated set of tools

We identified a set of tools that can empower such agents. Here is an updated list of the tools we have integrated in `transformers`:

- **Document question answering**: given a document (such as a PDF) in image format, answer a question on this document ([Donut](./model_doc/donut))
- **Text question answering**: given a long text and a question, answer the question in the text ([Flan-T5](./model_doc/flan-t5))
- **Unconditional image captioning**: Caption the image! ([BLIP](./model_doc/blip))
- **Image question answering**: given an image, answer a question on this image ([VILT](./model_doc/vilt))
- **Image segmentation**: given an image and a prompt, output the segmentation mask of that prompt ([CLIPSeg](./model_doc/clipseg))
- **Speech to text**: given an audio recording of a person talking, transcribe the speech into text ([Whisper](./model_doc/whisper))
- **Text to speech**: convert text to speech ([SpeechT5](./model_doc/speecht5))
- **Zero-shot text classification**: given a text and a list of labels, identify to which label the text corresponds the most ([BART](./model_doc/bart))
- **Text summarization**: summarize a long text in one or a few sentences ([BART](./model_doc/bart))
- **Translation**: translate the text into a given language ([NLLB](./model_doc/nllb))

These tools have an integration in transformers, and can be used manually as well, for example:

```py
from transformers import load_tool

tool = load_tool("text-to-speech")
audio = tool("This is a text to speech tool")
```

### Custom tools

While we identified a curated set of tools, we strongly believe that
the main value provided by this implementation is the ability to quickly create and share custom tools.

By pushing the code of a tool to a Hugging Face Space or a model repository, you're then able to leverage the tool directly with the agent. We've added a few **transformers-agnostic** tools to the [`huggingface-tools` organization](https://huggingface.co/huggingface-tools):

- **Text downloader**: to download a text from a web URL
- **Text to image**: generate an image according to a prompt, leveraging stable diffusion
- **Image transformation**: modify an image given an initial image and a prompt, leveraging instruct pix2pix stable diffusion
- **Text to video**: generate a small video according to a prompt, leveraging damo-vilab

The text-to-image tool we have been using since the beginning is a remote tool that lives in [*huggingface-tools/text-to-image*](https://huggingface.co/spaces/huggingface-tools/text-to-image)! We will continue releasing such tools on this and other organizations, to further supercharge this implementation.

The agents have by default access to tools that reside on [`huggingface-tools`](https://huggingface.co/huggingface-tools). We explain how you can write and share your tools, as well as leverage any custom tool that resides on the Hub, in the [following guide](custom_tools).

### Code generation

So far we have shown how to use the agents to perform actions for you. However, the agent only generates code that we then execute using a very restricted Python interpreter. In case you would like to use the code generated in a different setting, the agent can be prompted to return the code, along with the tool definition and accurate imports.
For example, the following instruction

```python
agent.run("Draw me a picture of rivers and lakes", return_code=True)
```

returns the following code

```python
from transformers import load_tool

image_generator = load_tool("huggingface-tools/text-to-image")
image = image_generator(prompt="rivers and lakes")
```

that you can then modify and execute yourself.
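The "review, then execute" pattern this enables can be illustrated with a toy sketch in plain Python. Note that this is purely illustrative: the `returned_code` string and the bare-`exec` namespace below are assumptions for demonstration, not the agent's actual restricted interpreter, which performs much stricter checks.

```python
# Toy illustration of reviewing code returned by `run(..., return_code=True)`
# before deciding to execute it. Not the agent's real sandbox!
returned_code = """
from math import sqrt
result = sqrt(16) + 1
"""

# Step 1: inspect the code yourself
print(returned_code)

# Step 2: only after review, run it in a fresh namespace
namespace = {}
exec(returned_code, namespace)
print(namespace["result"])  # → 5.0
```

The point of the pattern is simply that nothing runs until you have read the generated code and chosen to execute it.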
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/en/perf_train_tpu_tf.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->

# Training on TPU with TensorFlow

<Tip>

If you don't need long explanations and just want TPU code samples to get started with, check out [our TPU example notebook!](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb)

</Tip>

### What is a TPU?

A TPU is a **Tensor Processing Unit.** They are hardware designed by Google, which are used to greatly speed up the tensor computations within neural networks, much like GPUs. They can be used for both network training and inference. They are generally accessed through Google's cloud services, but small TPUs can also be accessed directly for free through Google Colab and Kaggle Kernels.

Because [all TensorFlow models in 🤗 Transformers are Keras models](https://huggingface.co/blog/tensorflow-philosophy), most of the methods in this document are generally applicable to TPU training for any Keras model! However, there are a few points that are specific to the HuggingFace ecosystem (hug-o-system?) of Transformers and Datasets, and we'll make sure to flag them up when we get to them.

### What kinds of TPU are available?

New users are often very confused by the range of TPUs, and the different ways to access them.
The first key distinction to understand is the difference between **TPU Nodes** and **TPU VMs.**

When you use a **TPU Node**, you are effectively indirectly accessing a remote TPU. You will need a separate VM, which will initialize your network and data pipeline and then forward them to the remote node. When you use a TPU on Google Colab, you are accessing it in the **TPU Node** style.

Using TPU Nodes can have some quite unexpected behaviour for people who aren't used to them! In particular, because the TPU is located on a physically different system from the machine you're running your Python code on, your data cannot be local to your machine - any data pipeline that loads from your machine's internal storage will totally fail! Instead, data must be stored in Google Cloud Storage, where your data pipeline can still access it even when the pipeline is running on the remote TPU node.

<Tip>

If you can fit all your data in memory as `np.ndarray` or `tf.Tensor`, then you can `fit()` on that data even when using Colab or a TPU Node, without needing to upload it to Google Cloud Storage.

</Tip>

<Tip>

**🤗Specific Hugging Face Tip🤗:** The method `Dataset.to_tf_dataset()` and its higher-level wrapper `model.prepare_tf_dataset()`, which you will see throughout our TF code examples, will both fail on a TPU Node. The reason for this is that even though they create a `tf.data.Dataset`, it is not a "pure" `tf.data` pipeline and uses `tf.numpy_function` or `Dataset.from_generator()` to stream data from the underlying HuggingFace `Dataset`. This HuggingFace `Dataset` is backed by data that is on a local disk, which the remote TPU Node will not be able to read.

</Tip>

The second way to access a TPU is via a **TPU VM.** When using a TPU VM, you connect directly to the machine that the TPU is attached to, much like training on a GPU VM. TPU VMs are generally easier to work with, particularly when it comes to your data pipeline.
All of the above warnings do not apply to TPU VMs!

This is an opinionated document, so here's our opinion: **Avoid using TPU Node if possible.** It is more confusing and more difficult to debug than TPU VMs. It is also likely to be unsupported in the future - Google's latest TPU, TPUv4, can only be accessed as a TPU VM, which suggests that TPU Nodes are increasingly going to become a "legacy" access method. However, we understand that the only free TPU access is on Colab and Kaggle Kernels, which use TPU Node - so we'll try to explain how to handle it if you have to! Check the [TPU example notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb) for code samples that explain this in more detail.

### What sizes of TPU are available?

A single TPU (a v2-8/v3-8/v4-8) runs 8 replicas. TPUs exist in **pods** that can run hundreds or thousands of replicas simultaneously. When you use more than a single TPU but less than a whole pod (for example, a v3-32), your TPU fleet is referred to as a **pod slice.**

When you access a free TPU via Colab, you generally get a single v2-8 TPU.

### I keep hearing about this XLA thing. What's XLA, and how does it relate to TPUs?

XLA is an optimizing compiler, used by both TensorFlow and JAX. In JAX it is the only compiler, whereas in TensorFlow it is optional (but mandatory on TPU!). The easiest way to enable it when training a Keras model is to pass the argument `jit_compile=True` to `model.compile()`. If you don't get any errors and performance is good, that's a great sign that you're ready to move to TPU!

Debugging on TPU is generally a bit harder than on CPU/GPU, so we recommend getting your code running on CPU/GPU with XLA first before trying it on TPU. You don't have to train for long, of course - just for a few steps to make sure that your model and data pipeline are working like you expect them to.
<Tip>

XLA compiled code is usually faster - so even if you're not planning to run on TPU, adding `jit_compile=True` can improve your performance. Be sure to note the caveats below about XLA compatibility, though!

</Tip>

<Tip warning={true}>

**Tip born of painful experience:** Although using `jit_compile=True` is a good way to get a speed boost and test if your CPU/GPU code is XLA-compatible, it can actually cause a lot of problems if you leave it in when actually training on TPU. XLA compilation will happen implicitly on TPU, so remember to remove that line before actually running your code on a TPU!

</Tip>

### How do I make my model XLA compatible?

In many cases, your code is probably XLA-compatible already! However, there are a few things that work in normal TensorFlow that don't work in XLA. We've distilled them into three core rules below:

<Tip>

**🤗Specific HuggingFace Tip🤗:** We've put a lot of effort into rewriting our TensorFlow models and loss functions to be XLA-compatible. Our models and loss functions generally obey rules #1 and #2 by default, so you can skip over them if you're using `transformers` models. Don't forget about these rules when writing your own models and loss functions, though!

</Tip>

#### XLA Rule #1: Your code cannot have "data-dependent conditionals"

What that means is that any `if` statement cannot depend on values inside a `tf.Tensor`. For example, this code block cannot be compiled with XLA!

```python
if tf.reduce_sum(tensor) > 10:
    tensor = tensor / 2.0
```

This might seem very restrictive at first, but most neural net code doesn't need to do this.
You can often get around this restriction by using `tf.cond` (see the documentation [here](https://www.tensorflow.org/api_docs/python/tf/cond)) or by removing the conditional and finding a clever math trick with indicator variables instead, like so:

```python
sum_over_10 = tf.cast(tf.reduce_sum(tensor) > 10, tf.float32)
tensor = tensor / (1.0 + sum_over_10)
```

This code has exactly the same effect as the code above, but by avoiding a conditional, we ensure it will compile with XLA without problems!

#### XLA Rule #2: Your code cannot have "data-dependent shapes"

What this means is that the shape of all of the `tf.Tensor` objects in your code cannot depend on their values. For example, the function `tf.unique` cannot be compiled with XLA, because it returns a `tensor` containing one instance of each unique value in the input. The shape of this output will obviously be different depending on how repetitive the input `Tensor` was, and so XLA refuses to handle it!

In general, most neural network code obeys rule #2 by default. However, there are a few common cases where it becomes a problem. One very common one is when you use **label masking**, setting your labels to a negative value to indicate that those positions should be ignored when computing the loss. If you look at NumPy or PyTorch loss functions that support label masking, you will often see code like this that uses [boolean indexing](https://numpy.org/doc/stable/user/basics.indexing.html#boolean-array-indexing):

```python
label_mask = labels >= 0
masked_outputs = outputs[label_mask]
masked_labels = labels[label_mask]
loss = compute_loss(masked_outputs, masked_labels)
mean_loss = torch.mean(loss)
```

This code is totally fine in NumPy or PyTorch, but it breaks in XLA! Why?
Because the shape of `masked_outputs` and `masked_labels` depends on how many positions are masked - that makes it a **data-dependent shape.** However, just like for rule #1, we can often rewrite this code to yield exactly the same output without any data-dependent shapes.

```python
label_mask = tf.cast(labels >= 0, tf.float32)
loss = compute_loss(outputs, labels)
loss = loss * label_mask  # Set negative label positions to 0
mean_loss = tf.reduce_sum(loss) / tf.reduce_sum(label_mask)
```

Here, we avoid data-dependent shapes by computing the loss for every position, but zeroing out the masked positions in both the numerator and denominator when we calculate the mean, which yields exactly the same result as the first block while maintaining XLA compatibility. Note that we use the same trick as in rule #1 - converting a `tf.bool` to `tf.float32` and using it as an indicator variable. This is a really useful trick, so remember it if you need to convert your own code to XLA!

#### XLA Rule #3: XLA will need to recompile your model for every different input shape it sees

This is the big one. What this means is that if your input shapes are very variable, XLA will have to recompile your model over and over, which will create huge performance problems. This commonly arises in NLP models, where input texts have variable lengths after tokenization. In other modalities, static shapes are more common and this rule is much less of a problem.

How can you get around rule #3? The key is **padding** - if you pad all your inputs to the same length, and then use an `attention_mask`, you can get the same results as you'd get from variable shapes, but without any XLA issues. However, excessive padding can cause severe slowdown too - if you pad all your samples to the maximum length in the whole dataset, you might end up with batches consisting of endless padding tokens, which will waste a lot of compute and memory!

There isn't a perfect solution to this problem.
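Both indicator-variable rewrites above (the rule #1 conditional and the rule #2 masked loss) can be sanity-checked without TensorFlow at all. The plain-Python sketch below mimics the tensor math with lists - the function names and toy inputs are ours, purely for illustration - and confirms that each branchless version matches its branchy counterpart:

```python
def branchy_halve(values):
    # Rule #1, conditional version: halve everything if the sum exceeds 10
    if sum(values) > 10:
        return [v / 2.0 for v in values]
    return list(values)

def branchless_halve(values):
    # Rule #1, XLA-friendly version: indicator variable instead of `if`
    over_10 = float(sum(values) > 10)  # 1.0 or 0.0, like tf.cast(..., tf.float32)
    return [v / (1.0 + over_10) for v in values]

def masked_mean_by_indexing(losses, labels):
    # Rule #2, boolean-indexing version: output length is data-dependent
    kept = [l for l, lab in zip(losses, labels) if lab >= 0]
    return sum(kept) / len(kept)

def masked_mean_by_indicator(losses, labels):
    # Rule #2, XLA-friendly version: static shape, indicator in numerator and denominator
    mask = [float(lab >= 0) for lab in labels]
    return sum(l * m for l, m in zip(losses, mask)) / sum(mask)

for values in ([1.0, 2.0, 3.0], [8.0, 8.0]):  # sums of 6 (no halving) and 16 (halving)
    assert branchy_halve(values) == branchless_halve(values)

losses = [0.5, 1.5, 2.0, 4.0]
labels = [1, -100, 0, -100]  # -100 marks masked positions
assert masked_mean_by_indexing(losses, labels) == masked_mean_by_indicator(losses, labels)
print("branchless rewrites match")
```

The same equality holds for the real `tf.Tensor` versions; the value of checking it in plain Python first is that you can convince yourself of the math before touching the compiled graph.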
However, you can try some tricks. One very useful trick is to **pad batches of samples up to a multiple of a number like 32 or 64 tokens.** This often only increases the number of tokens by a small amount, but it hugely reduces the number of unique input shapes, because every input shape now has to be a multiple of 32 or 64. Fewer unique input shapes means fewer XLA compilations!

<Tip>

**🤗Specific HuggingFace Tip🤗:** Our tokenizers and data collators have methods that can help you here. You can use `padding="max_length"` or `padding="longest"` when calling tokenizers to get them to output padded data. Our tokenizers and data collators also have a `pad_to_multiple_of` argument that you can use to reduce the number of unique input shapes you see!

</Tip>

### How do I actually train my model on TPU?

Once your training is XLA-compatible and (if you're using TPU Node / Colab) your dataset has been prepared appropriately, running on TPU is surprisingly easy! All you really need to change in your code is to add a few lines to initialize your TPU, and to ensure that your model and dataset are created inside a `TPUStrategy` scope. Take a look at [our TPU example notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb) to see this in action!
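As a closing aside on rule #3: the shape-bucketing effect of padding to a multiple is easy to see with a little arithmetic. This plain-Python sketch is only a conceptual model of what `pad_to_multiple_of` achieves (the helper and the sample lengths are ours, not the tokenizers' actual implementation):

```python
def pad_to_multiple(length, multiple):
    # Round a sequence length up to the next multiple of `multiple`
    return ((length + multiple - 1) // multiple) * multiple

# Simulated tokenized lengths for a handful of batches
lengths = [17, 33, 64, 70, 9, 31, 58, 65]

raw_shapes = sorted(set(lengths))
bucketed_shapes = sorted({pad_to_multiple(l, 64) for l in lengths})

print(raw_shapes)       # 8 distinct shapes -> up to 8 XLA compilations
print(bucketed_shapes)  # only 2 distinct shapes -> 2 compilations
```

Here eight distinct raw lengths collapse into just two padded shapes (64 and 128), which is exactly why the trick cuts recompilation so dramatically.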
### Summary

There was a lot in here, so let's summarize with a quick checklist you can follow when you want to get your model ready for TPU training:

- Make sure your code follows the three rules of XLA
- Compile your model with `jit_compile=True` on CPU/GPU and confirm that you can train it with XLA
- Either load your dataset into memory or use a TPU-compatible dataset loading approach (see [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb))
- Migrate your code either to Colab (with accelerator set to "TPU") or a TPU VM on Google Cloud
- Add TPU initializer code (see [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb))
- Create your `TPUStrategy` and make sure dataset loading and model creation are inside the `strategy.scope()` (see [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/tpu_training-tf.ipynb))
- Don't forget to take `jit_compile=True` out again when you move to TPU!
- 🙏🙏🙏🥺🥺🥺
- Call `model.fit()`
- You did it!
hf_public_repos/transformers/docs/source
hf_public_repos/transformers/docs/source/en/_toctree.yml
- sections: - local: index title: ๐Ÿค— Transformers - local: quicktour title: Quick tour - local: installation title: Installation title: Get started - sections: - local: pipeline_tutorial title: Run inference with pipelines - local: autoclass_tutorial title: Write portable code with AutoClass - local: preprocessing title: Preprocess data - local: training title: Fine-tune a pretrained model - local: run_scripts title: Train with a script - local: accelerate title: Set up distributed training with ๐Ÿค— Accelerate - local: peft title: Load and train adapters with ๐Ÿค— PEFT - local: model_sharing title: Share your model - local: transformers_agents title: Agents - local: llm_tutorial title: Generation with LLMs title: Tutorials - sections: - isExpanded: false sections: - local: tasks/sequence_classification title: Text classification - local: tasks/token_classification title: Token classification - local: tasks/question_answering title: Question answering - local: tasks/language_modeling title: Causal language modeling - local: tasks/masked_language_modeling title: Masked language modeling - local: tasks/translation title: Translation - local: tasks/summarization title: Summarization - local: tasks/multiple_choice title: Multiple choice title: Natural Language Processing - isExpanded: false sections: - local: tasks/audio_classification title: Audio classification - local: tasks/asr title: Automatic speech recognition title: Audio - isExpanded: false sections: - local: tasks/image_classification title: Image classification - local: tasks/semantic_segmentation title: Image segmentation - local: tasks/video_classification title: Video classification - local: tasks/object_detection title: Object detection - local: tasks/zero_shot_object_detection title: Zero-shot object detection - local: tasks/zero_shot_image_classification title: Zero-shot image classification - local: tasks/monocular_depth_estimation title: Depth estimation - local: tasks/image_to_image title: 
Image-to-Image - local: tasks/knowledge_distillation_for_image_classification title: Knowledge Distillation for Computer Vision title: Computer Vision - isExpanded: false sections: - local: tasks/image_captioning title: Image captioning - local: tasks/document_question_answering title: Document Question Answering - local: tasks/visual_question_answering title: Visual Question Answering - local: tasks/text-to-speech title: Text to speech title: Multimodal - isExpanded: false sections: - local: generation_strategies title: Customize the generation strategy title: Generation - isExpanded: false sections: - local: tasks/idefics title: Image tasks with IDEFICS - local: tasks/prompting title: LLM prompting guide title: Prompting title: Task Guides - sections: - local: fast_tokenizers title: Use fast tokenizers from ๐Ÿค— Tokenizers - local: multilingual title: Run inference with multilingual models - local: create_a_model title: Use model-specific APIs - local: custom_models title: Share a custom model - local: chat_templating title: Templates for chat models - local: sagemaker title: Run training on Amazon SageMaker - local: serialization title: Export to ONNX - local: tflite title: Export to TFLite - local: torchscript title: Export to TorchScript - local: benchmarks title: Benchmarks - local: notebooks title: Notebooks with examples - local: community title: Community resources - local: custom_tools title: Custom Tools and Prompts - local: troubleshooting title: Troubleshoot title: Developer guides - sections: - local: performance title: Overview - local: quantization title: Quantization - sections: - local: perf_train_gpu_one title: Methods and tools for efficient training on a single GPU - local: perf_train_gpu_many title: Multiple GPUs and parallelism - local: perf_train_cpu title: Efficient training on CPU - local: perf_train_cpu_many title: Distributed CPU training - local: perf_train_tpu title: Training on TPUs - local: perf_train_tpu_tf title: Training on TPU 
with TensorFlow - local: perf_train_special title: Training on Specialized Hardware - local: perf_hardware title: Custom hardware for training - local: hpo_train title: Hyperparameter Search using Trainer API title: Efficient training techniques - sections: - local: perf_infer_cpu title: CPU inference - local: perf_infer_gpu_one title: GPU inference title: Optimizing inference - local: big_models title: Instantiating a big model - local: debugging title: Troubleshooting - local: tf_xla title: XLA Integration for TensorFlow Models - local: perf_torch_compile title: Optimize inference using `torch.compile()` title: Performance and scalability - sections: - local: contributing title: How to contribute to transformers? - local: add_new_model title: How to add a model to ๐Ÿค— Transformers? - local: add_tensorflow_model title: How to convert a ๐Ÿค— Transformers model to TensorFlow? - local: add_new_pipeline title: How to add a pipeline to ๐Ÿค— Transformers? - local: testing title: Testing - local: pr_checks title: Checks on a Pull Request title: Contribute - sections: - local: philosophy title: Philosophy - local: glossary title: Glossary - local: task_summary title: What ๐Ÿค— Transformers can do - local: tasks_explained title: How ๐Ÿค— Transformers solve tasks - local: model_summary title: The Transformer model family - local: tokenizer_summary title: Summary of the tokenizers - local: attention title: Attention mechanisms - local: pad_truncation title: Padding and truncation - local: bertology title: BERTology - local: perplexity title: Perplexity of fixed-length models - local: pipeline_webserver title: Pipelines for webserver inference - local: model_memory_anatomy title: Model training anatomy - local: llm_tutorial_optimization title: Getting the most out of LLMs title: Conceptual guides - sections: - sections: - local: main_classes/agent title: Agents and Tools - local: model_doc/auto title: Auto Classes - local: main_classes/callback title: Callbacks - local: 
main_classes/configuration title: Configuration - local: main_classes/data_collator title: Data Collator - local: main_classes/keras_callbacks title: Keras callbacks - local: main_classes/logging title: Logging - local: main_classes/model title: Models - local: main_classes/text_generation title: Text Generation - local: main_classes/onnx title: ONNX - local: main_classes/optimizer_schedules title: Optimization - local: main_classes/output title: Model outputs - local: main_classes/pipelines title: Pipelines - local: main_classes/processors title: Processors - local: main_classes/quantization title: Quantization - local: main_classes/tokenizer title: Tokenizer - local: main_classes/trainer title: Trainer - local: main_classes/deepspeed title: DeepSpeed Integration - local: main_classes/feature_extractor title: Feature Extractor - local: main_classes/image_processor title: Image Processor title: Main Classes - sections: - isExpanded: false sections: - local: model_doc/albert title: ALBERT - local: model_doc/bart title: BART - local: model_doc/barthez title: BARThez - local: model_doc/bartpho title: BARTpho - local: model_doc/bert title: BERT - local: model_doc/bert-generation title: BertGeneration - local: model_doc/bert-japanese title: BertJapanese - local: model_doc/bertweet title: Bertweet - local: model_doc/big_bird title: BigBird - local: model_doc/bigbird_pegasus title: BigBirdPegasus - local: model_doc/biogpt title: BioGpt - local: model_doc/blenderbot title: Blenderbot - local: model_doc/blenderbot-small title: Blenderbot Small - local: model_doc/bloom title: BLOOM - local: model_doc/bort title: BORT - local: model_doc/byt5 title: ByT5 - local: model_doc/camembert title: CamemBERT - local: model_doc/canine title: CANINE - local: model_doc/codegen title: CodeGen - local: model_doc/code_llama title: CodeLlama - local: model_doc/convbert title: ConvBERT - local: model_doc/cpm title: CPM - local: model_doc/cpmant title: CPMANT - local: model_doc/ctrl title: CTRL 
- local: model_doc/deberta title: DeBERTa - local: model_doc/deberta-v2 title: DeBERTa-v2 - local: model_doc/dialogpt title: DialoGPT - local: model_doc/distilbert title: DistilBERT - local: model_doc/dpr title: DPR - local: model_doc/electra title: ELECTRA - local: model_doc/encoder-decoder title: Encoder Decoder Models - local: model_doc/ernie title: ERNIE - local: model_doc/ernie_m title: ErnieM - local: model_doc/esm title: ESM - local: model_doc/falcon title: Falcon - local: model_doc/flan-t5 title: FLAN-T5 - local: model_doc/flan-ul2 title: FLAN-UL2 - local: model_doc/flaubert title: FlauBERT - local: model_doc/fnet title: FNet - local: model_doc/fsmt title: FSMT - local: model_doc/funnel title: Funnel Transformer - local: model_doc/fuyu title: Fuyu - local: model_doc/openai-gpt title: GPT - local: model_doc/gpt_neo title: GPT Neo - local: model_doc/gpt_neox title: GPT NeoX - local: model_doc/gpt_neox_japanese title: GPT NeoX Japanese - local: model_doc/gptj title: GPT-J - local: model_doc/gpt2 title: GPT2 - local: model_doc/gpt_bigcode title: GPTBigCode - local: model_doc/gptsan-japanese title: GPTSAN Japanese - local: model_doc/gpt-sw3 title: GPTSw3 - local: model_doc/herbert title: HerBERT - local: model_doc/ibert title: I-BERT - local: model_doc/jukebox title: Jukebox - local: model_doc/led title: LED - local: model_doc/llama title: LLaMA - local: model_doc/llama2 title: Llama2 - local: model_doc/longformer title: Longformer - local: model_doc/longt5 title: LongT5 - local: model_doc/luke title: LUKE - local: model_doc/m2m_100 title: M2M100 - local: model_doc/madlad-400 title: MADLAD-400 - local: model_doc/marian title: MarianMT - local: model_doc/markuplm title: MarkupLM - local: model_doc/mbart title: MBart and MBart-50 - local: model_doc/mega title: MEGA - local: model_doc/megatron-bert title: MegatronBERT - local: model_doc/megatron_gpt2 title: MegatronGPT2 - local: model_doc/mistral title: Mistral - local: model_doc/mluke title: mLUKE - local: 
model_doc/mobilebert title: MobileBERT - local: model_doc/mpnet title: MPNet - local: model_doc/mpt title: MPT - local: model_doc/mra title: MRA - local: model_doc/mt5 title: MT5 - local: model_doc/mvp title: MVP - local: model_doc/nezha title: NEZHA - local: model_doc/nllb title: NLLB - local: model_doc/nllb-moe title: NLLB-MoE - local: model_doc/nystromformer title: Nystrรถmformer - local: model_doc/open-llama title: Open-Llama - local: model_doc/opt title: OPT - local: model_doc/pegasus title: Pegasus - local: model_doc/pegasus_x title: PEGASUS-X - local: model_doc/persimmon title: Persimmon - local: model_doc/phi title: Phi - local: model_doc/phobert title: PhoBERT - local: model_doc/plbart title: PLBart - local: model_doc/prophetnet title: ProphetNet - local: model_doc/qdqbert title: QDQBert - local: model_doc/rag title: RAG - local: model_doc/realm title: REALM - local: model_doc/reformer title: Reformer - local: model_doc/rembert title: RemBERT - local: model_doc/retribert title: RetriBERT - local: model_doc/roberta title: RoBERTa - local: model_doc/roberta-prelayernorm title: RoBERTa-PreLayerNorm - local: model_doc/roc_bert title: RoCBert - local: model_doc/roformer title: RoFormer - local: model_doc/rwkv title: RWKV - local: model_doc/splinter title: Splinter - local: model_doc/squeezebert title: SqueezeBERT - local: model_doc/switch_transformers title: SwitchTransformers - local: model_doc/t5 title: T5 - local: model_doc/t5v1.1 title: T5v1.1 - local: model_doc/tapex title: TAPEX - local: model_doc/transfo-xl title: Transformer XL - local: model_doc/ul2 title: UL2 - local: model_doc/umt5 title: UMT5 - local: model_doc/xmod title: X-MOD - local: model_doc/xglm title: XGLM - local: model_doc/xlm title: XLM - local: model_doc/xlm-prophetnet title: XLM-ProphetNet - local: model_doc/xlm-roberta title: XLM-RoBERTa - local: model_doc/xlm-roberta-xl title: XLM-RoBERTa-XL - local: model_doc/xlm-v title: XLM-V - local: model_doc/xlnet title: XLNet - local: 
model_doc/yoso title: YOSO title: Text models - isExpanded: false sections: - local: model_doc/beit title: BEiT - local: model_doc/bit title: BiT - local: model_doc/conditional_detr title: Conditional DETR - local: model_doc/convnext title: ConvNeXT - local: model_doc/convnextv2 title: ConvNeXTV2 - local: model_doc/cvt title: CvT - local: model_doc/deformable_detr title: Deformable DETR - local: model_doc/deit title: DeiT - local: model_doc/deta title: DETA - local: model_doc/detr title: DETR - local: model_doc/dinat title: DiNAT - local: model_doc/dinov2 title: DINOV2 - local: model_doc/dit title: DiT - local: model_doc/dpt title: DPT - local: model_doc/efficientformer title: EfficientFormer - local: model_doc/efficientnet title: EfficientNet - local: model_doc/focalnet title: FocalNet - local: model_doc/glpn title: GLPN - local: model_doc/imagegpt title: ImageGPT - local: model_doc/levit title: LeViT - local: model_doc/mask2former title: Mask2Former - local: model_doc/maskformer title: MaskFormer - local: model_doc/mobilenet_v1 title: MobileNetV1 - local: model_doc/mobilenet_v2 title: MobileNetV2 - local: model_doc/mobilevit title: MobileViT - local: model_doc/mobilevitv2 title: MobileViTV2 - local: model_doc/nat title: NAT - local: model_doc/poolformer title: PoolFormer - local: model_doc/pvt title: Pyramid Vision Transformer (PVT) - local: model_doc/regnet title: RegNet - local: model_doc/resnet title: ResNet - local: model_doc/segformer title: SegFormer - local: model_doc/swiftformer title: SwiftFormer - local: model_doc/swin title: Swin Transformer - local: model_doc/swinv2 title: Swin Transformer V2 - local: model_doc/swin2sr title: Swin2SR - local: model_doc/table-transformer title: Table Transformer - local: model_doc/timesformer title: TimeSformer - local: model_doc/upernet title: UperNet - local: model_doc/van title: VAN - local: model_doc/videomae title: VideoMAE - local: model_doc/vit title: Vision Transformer (ViT) - local: model_doc/vit_hybrid title: 
        title: ViT Hybrid
      - local: model_doc/vitdet
        title: ViTDet
      - local: model_doc/vit_mae
        title: ViTMAE
      - local: model_doc/vitmatte
        title: ViTMatte
      - local: model_doc/vit_msn
        title: ViTMSN
      - local: model_doc/vivit
        title: ViViT
      - local: model_doc/yolos
        title: YOLOS
      title: Vision models
    - isExpanded: false
      sections:
      - local: model_doc/audio-spectrogram-transformer
        title: Audio Spectrogram Transformer
      - local: model_doc/bark
        title: Bark
      - local: model_doc/clap
        title: CLAP
      - local: model_doc/encodec
        title: EnCodec
      - local: model_doc/hubert
        title: Hubert
      - local: model_doc/mctct
        title: MCTCT
      - local: model_doc/mms
        title: MMS
      - local: model_doc/musicgen
        title: MusicGen
      - local: model_doc/pop2piano
        title: Pop2Piano
      - local: model_doc/seamless_m4t
        title: Seamless-M4T
      - local: model_doc/sew
        title: SEW
      - local: model_doc/sew-d
        title: SEW-D
      - local: model_doc/speech_to_text
        title: Speech2Text
      - local: model_doc/speech_to_text_2
        title: Speech2Text2
      - local: model_doc/speecht5
        title: SpeechT5
      - local: model_doc/unispeech
        title: UniSpeech
      - local: model_doc/unispeech-sat
        title: UniSpeech-SAT
      - local: model_doc/univnet
        title: UnivNet
      - local: model_doc/vits
        title: VITS
      - local: model_doc/wav2vec2
        title: Wav2Vec2
      - local: model_doc/wav2vec2-conformer
        title: Wav2Vec2-Conformer
      - local: model_doc/wav2vec2_phoneme
        title: Wav2Vec2Phoneme
      - local: model_doc/wavlm
        title: WavLM
      - local: model_doc/whisper
        title: Whisper
      - local: model_doc/xls_r
        title: XLS-R
      - local: model_doc/xlsr_wav2vec2
        title: XLSR-Wav2Vec2
      title: Audio models
    - isExpanded: false
      sections:
      - local: model_doc/align
        title: ALIGN
      - local: model_doc/altclip
        title: AltCLIP
      - local: model_doc/blip
        title: BLIP
      - local: model_doc/blip-2
        title: BLIP-2
      - local: model_doc/bridgetower
        title: BridgeTower
      - local: model_doc/bros
        title: BROS
      - local: model_doc/chinese_clip
        title: Chinese-CLIP
      - local: model_doc/clip
        title: CLIP
      - local: model_doc/clipseg
        title: CLIPSeg
      - local: model_doc/clvp
        title: CLVP
      - local: model_doc/data2vec
        title: Data2Vec
      - local: model_doc/deplot
        title: DePlot
      - local: model_doc/donut
        title: Donut
      - local: model_doc/flava
        title: FLAVA
      - local: model_doc/git
        title: GIT
      - local: model_doc/groupvit
        title: GroupViT
      - local: model_doc/idefics
        title: IDEFICS
      - local: model_doc/instructblip
        title: InstructBLIP
      - local: model_doc/kosmos-2
        title: KOSMOS-2
      - local: model_doc/layoutlm
        title: LayoutLM
      - local: model_doc/layoutlmv2
        title: LayoutLMV2
      - local: model_doc/layoutlmv3
        title: LayoutLMV3
      - local: model_doc/layoutxlm
        title: LayoutXLM
      - local: model_doc/lilt
        title: LiLT
      - local: model_doc/lxmert
        title: LXMERT
      - local: model_doc/matcha
        title: MatCha
      - local: model_doc/mgp-str
        title: MGP-STR
      - local: model_doc/nougat
        title: Nougat
      - local: model_doc/oneformer
        title: OneFormer
      - local: model_doc/owlvit
        title: OWL-ViT
      - local: model_doc/owlv2
        title: OWLv2
      - local: model_doc/perceiver
        title: Perceiver
      - local: model_doc/pix2struct
        title: Pix2Struct
      - local: model_doc/sam
        title: Segment Anything
      - local: model_doc/speech-encoder-decoder
        title: Speech Encoder Decoder Models
      - local: model_doc/tapas
        title: TAPAS
      - local: model_doc/trocr
        title: TrOCR
      - local: model_doc/tvlt
        title: TVLT
      - local: model_doc/tvp
        title: TVP
      - local: model_doc/vilt
        title: ViLT
      - local: model_doc/vision-encoder-decoder
        title: Vision Encoder Decoder Models
      - local: model_doc/vision-text-dual-encoder
        title: Vision Text Dual Encoder
      - local: model_doc/visual_bert
        title: VisualBERT
      - local: model_doc/xclip
        title: X-CLIP
      title: Multimodal models
    - isExpanded: false
      sections:
      - local: model_doc/decision_transformer
        title: Decision Transformer
      - local: model_doc/trajectory_transformer
        title: Trajectory Transformer
      title: Reinforcement learning models
    - isExpanded: false
      sections:
      - local: model_doc/autoformer
        title: Autoformer
      - local: model_doc/informer
        title: Informer
      - local: model_doc/time_series_transformer
        title: Time Series Transformer
      title: Time series models
    - isExpanded: false
      sections:
      - local: model_doc/graphormer
        title: Graphormer
      title: Graph models
    title: Models
  - sections:
    - local: internal/modeling_utils
      title: Custom Layers and Utilities
    - local: internal/pipelines_utils
      title: Utilities for pipelines
    - local: internal/tokenization_utils
      title: Utilities for Tokenizers
    - local: internal/trainer_utils
      title: Utilities for Trainer
    - local: internal/generation_utils
      title: Utilities for Generation
    - local: internal/image_processing_utils
      title: Utilities for Image Processors
    - local: internal/audio_utils
      title: Utilities for Audio processing
    - local: internal/file_utils
      title: General Utilities
    - local: internal/time_series_utils
      title: Utilities for Time Series
    title: Internal Helpers
  title: API
hf_public_repos/transformers/docs/source/en/benchmarks.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Benchmarks

<Tip warning={true}>

Hugging Face's benchmarking tools are deprecated, and it is advised to use external benchmarking libraries to measure the speed and
memory complexity of Transformer models.

</Tip>

[[open-in-colab]]

Let's take a look at how 🤗 Transformers models can be benchmarked, best practices, and already available benchmarks.

A notebook explaining in more detail how to benchmark 🤗 Transformers models can be found [here](https://github.com/huggingface/notebooks/tree/main/examples/benchmark.ipynb).

## How to benchmark 🤗 Transformers models

The classes [`PyTorchBenchmark`] and [`TensorFlowBenchmark`] allow you to flexibly benchmark 🤗 Transformers models. The
benchmark classes allow us to measure the _peak memory usage_ and _required time_ for both _inference_ and _training_.

<Tip>

Here, _inference_ is defined by a single forward pass, and _training_ is defined by a single forward pass and backward pass.

</Tip>

The benchmark classes [`PyTorchBenchmark`] and [`TensorFlowBenchmark`] expect an object of type [`PyTorchBenchmarkArguments`] and
[`TensorFlowBenchmarkArguments`], respectively, for instantiation.
[`PyTorchBenchmarkArguments`] and [`TensorFlowBenchmarkArguments`] are data classes and contain all relevant
configurations for their corresponding benchmark class. In the following example, it is shown how a BERT model of type
_bert-base-uncased_ can be benchmarked.

<frameworkcontent>
<pt>

```py
>>> from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments

>>> args = PyTorchBenchmarkArguments(models=["bert-base-uncased"], batch_sizes=[8], sequence_lengths=[8, 32, 128, 512])
>>> benchmark = PyTorchBenchmark(args)
```

</pt>
<tf>

```py
>>> from transformers import TensorFlowBenchmark, TensorFlowBenchmarkArguments

>>> args = TensorFlowBenchmarkArguments(
...     models=["bert-base-uncased"], batch_sizes=[8], sequence_lengths=[8, 32, 128, 512]
... )
>>> benchmark = TensorFlowBenchmark(args)
```

</tf>
</frameworkcontent>

Here, three arguments are given to the benchmark argument data classes, namely `models`, `batch_sizes`, and
`sequence_lengths`. The argument `models` is required and expects a `list` of model identifiers from the
[model hub](https://huggingface.co/models). The `list` arguments `batch_sizes` and `sequence_lengths` define
the size of the `input_ids` on which the model is benchmarked.

There are many more parameters that can be configured via the benchmark argument data classes. For more detail on these,
one can either directly consult the files `src/transformers/benchmark/benchmark_args_utils.py`,
`src/transformers/benchmark/benchmark_args.py` (for PyTorch) and `src/transformers/benchmark/benchmark_args_tf.py` (for
TensorFlow), or run the following shell commands from the repository root to print out a descriptive list of all
configurable parameters for PyTorch and TensorFlow respectively.

<frameworkcontent>
<pt>

```bash
python examples/pytorch/benchmarking/run_benchmark.py --help
```

An instantiated benchmark object can then simply be run by calling `benchmark.run()`.
```py
>>> results = benchmark.run()
>>> print(results)
====================       INFERENCE - SPEED - RESULT       ====================
--------------------------------------------------------------------------------
Model Name             Batch Size     Seq Length     Time in s
--------------------------------------------------------------------------------
bert-base-uncased          8               8             0.006
bert-base-uncased          8              32             0.006
bert-base-uncased          8             128             0.018
bert-base-uncased          8             512             0.088
--------------------------------------------------------------------------------

====================      INFERENCE - MEMORY - RESULT       ====================
--------------------------------------------------------------------------------
Model Name             Batch Size     Seq Length    Memory in MB
--------------------------------------------------------------------------------
bert-base-uncased          8               8             1227
bert-base-uncased          8              32             1281
bert-base-uncased          8             128             1307
bert-base-uncased          8             512             1539
--------------------------------------------------------------------------------

====================        ENVIRONMENT INFORMATION         ====================
- transformers_version: 2.11.0
- framework: PyTorch
- use_torchscript: False
- framework_version: 1.4.0
- python_version: 3.6.10
- system: Linux
- cpu: x86_64
- architecture: 64bit
- date: 2020-06-29
- time: 08:58:43.371351
- fp16: False
- use_multiprocessing: True
- only_pretrain_model: False
- cpu_ram_mb: 32088
- use_gpu: True
- num_gpus: 1
- gpu: TITAN RTX
- gpu_ram_mb: 24217
- gpu_power_watts: 280.0
- gpu_performance_state: 2
- use_tpu: False
```

</pt>
<tf>

```bash
python examples/tensorflow/benchmarking/run_benchmark_tf.py --help
```

An instantiated benchmark object can then simply be run by calling `benchmark.run()`.
```py
>>> results = benchmark.run()
>>> print(results)
====================       INFERENCE - SPEED - RESULT       ====================
--------------------------------------------------------------------------------
Model Name             Batch Size     Seq Length     Time in s
--------------------------------------------------------------------------------
bert-base-uncased          8               8             0.005
bert-base-uncased          8              32             0.008
bert-base-uncased          8             128             0.022
bert-base-uncased          8             512             0.105
--------------------------------------------------------------------------------

====================      INFERENCE - MEMORY - RESULT       ====================
--------------------------------------------------------------------------------
Model Name             Batch Size     Seq Length    Memory in MB
--------------------------------------------------------------------------------
bert-base-uncased          8               8             1330
bert-base-uncased          8              32             1330
bert-base-uncased          8             128             1330
bert-base-uncased          8             512             1770
--------------------------------------------------------------------------------

====================        ENVIRONMENT INFORMATION         ====================
- transformers_version: 2.11.0
- framework: Tensorflow
- use_xla: False
- framework_version: 2.2.0
- python_version: 3.6.10
- system: Linux
- cpu: x86_64
- architecture: 64bit
- date: 2020-06-29
- time: 09:26:35.617317
- fp16: False
- use_multiprocessing: True
- only_pretrain_model: False
- cpu_ram_mb: 32088
- use_gpu: True
- num_gpus: 1
- gpu: TITAN RTX
- gpu_ram_mb: 24217
- gpu_power_watts: 280.0
- gpu_performance_state: 2
- use_tpu: False
```

</tf>
</frameworkcontent>

By default, the _time_ and the _required memory_ for _inference_ are benchmarked. In the example output above the first
two sections show the result corresponding to _inference time_ and _inference memory_. In addition, all relevant
information about the computing environment, _e.g._ the GPU type, the system, the library versions, etc... are printed
out in the third section under _ENVIRONMENT INFORMATION_.
This information can optionally be saved in a _.csv_ file when adding the argument `save_to_csv=True` to
[`PyTorchBenchmarkArguments`] and [`TensorFlowBenchmarkArguments`] respectively. In this case, every section is saved in
a separate _.csv_ file. The path to each _.csv_ file can optionally be defined via the argument data classes.

Instead of benchmarking pre-trained models via their model identifier, _e.g._ `bert-base-uncased`, the user can
alternatively benchmark an arbitrary configuration of any available model class. In this case, a `list` of
configurations must be inserted with the benchmark args as follows.

<frameworkcontent>
<pt>

```py
>>> from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments, BertConfig

>>> args = PyTorchBenchmarkArguments(
...     models=["bert-base", "bert-384-hid", "bert-6-lay"], batch_sizes=[8], sequence_lengths=[8, 32, 128, 512]
... )
>>> config_base = BertConfig()
>>> config_384_hid = BertConfig(hidden_size=384)
>>> config_6_lay = BertConfig(num_hidden_layers=6)

>>> benchmark = PyTorchBenchmark(args, configs=[config_base, config_384_hid, config_6_lay])
>>> benchmark.run()
====================       INFERENCE - SPEED - RESULT       ====================
--------------------------------------------------------------------------------
Model Name             Batch Size     Seq Length       Time in s
--------------------------------------------------------------------------------
bert-base                  8               8             0.006
bert-base                  8              32             0.006
bert-base                  8             128             0.018
bert-base                  8             512             0.088
bert-384-hid               8               8             0.006
bert-384-hid               8              32             0.006
bert-384-hid               8             128             0.011
bert-384-hid               8             512             0.054
bert-6-lay                 8               8             0.003
bert-6-lay                 8              32             0.004
bert-6-lay                 8             128             0.009
bert-6-lay                 8             512             0.044
--------------------------------------------------------------------------------

====================      INFERENCE - MEMORY - RESULT       ====================
--------------------------------------------------------------------------------
Model Name             Batch Size     Seq Length    Memory in MB
--------------------------------------------------------------------------------
bert-base                  8               8             1277
bert-base                  8              32             1281
bert-base                  8             128             1307
bert-base                  8             512             1539
bert-384-hid               8               8             1005
bert-384-hid               8              32             1027
bert-384-hid               8             128             1035
bert-384-hid               8             512             1255
bert-6-lay                 8               8             1097
bert-6-lay                 8              32             1101
bert-6-lay                 8             128             1127
bert-6-lay                 8             512             1359
--------------------------------------------------------------------------------

====================        ENVIRONMENT INFORMATION         ====================
- transformers_version: 2.11.0
- framework: PyTorch
- use_torchscript: False
- framework_version: 1.4.0
- python_version: 3.6.10
- system: Linux
- cpu: x86_64
- architecture: 64bit
- date: 2020-06-29
- time: 09:35:25.143267
- fp16: False
- use_multiprocessing: True
- only_pretrain_model: False
- cpu_ram_mb: 32088
- use_gpu: True
- num_gpus: 1
- gpu: TITAN RTX
- gpu_ram_mb: 24217
- gpu_power_watts: 280.0
- gpu_performance_state: 2
- use_tpu: False
```

</pt>
<tf>

```py
>>> from transformers import TensorFlowBenchmark, TensorFlowBenchmarkArguments, BertConfig

>>> args = TensorFlowBenchmarkArguments(
...     models=["bert-base", "bert-384-hid", "bert-6-lay"], batch_sizes=[8], sequence_lengths=[8, 32, 128, 512]
... )
>>> config_base = BertConfig()
>>> config_384_hid = BertConfig(hidden_size=384)
>>> config_6_lay = BertConfig(num_hidden_layers=6)

>>> benchmark = TensorFlowBenchmark(args, configs=[config_base, config_384_hid, config_6_lay])
>>> benchmark.run()
====================       INFERENCE - SPEED - RESULT       ====================
--------------------------------------------------------------------------------
Model Name             Batch Size     Seq Length       Time in s
--------------------------------------------------------------------------------
bert-base                  8               8             0.005
bert-base                  8              32             0.008
bert-base                  8             128             0.022
bert-base                  8             512             0.106
bert-384-hid               8               8             0.005
bert-384-hid               8              32             0.007
bert-384-hid               8             128             0.018
bert-384-hid               8             512             0.064
bert-6-lay                 8               8             0.002
bert-6-lay                 8              32             0.003
bert-6-lay                 8             128             0.0011
bert-6-lay                 8             512             0.074
--------------------------------------------------------------------------------

====================      INFERENCE - MEMORY - RESULT       ====================
--------------------------------------------------------------------------------
Model Name             Batch Size     Seq Length    Memory in MB
--------------------------------------------------------------------------------
bert-base                  8               8             1330
bert-base                  8              32             1330
bert-base                  8             128             1330
bert-base                  8             512             1770
bert-384-hid               8               8             1330
bert-384-hid               8              32             1330
bert-384-hid               8             128             1330
bert-384-hid               8             512             1540
bert-6-lay                 8               8             1330
bert-6-lay                 8              32             1330
bert-6-lay                 8             128             1330
bert-6-lay                 8             512             1540
--------------------------------------------------------------------------------

====================        ENVIRONMENT INFORMATION         ====================
- transformers_version: 2.11.0
- framework: Tensorflow
- use_xla: False
- framework_version: 2.2.0
- python_version: 3.6.10
- system: Linux
- cpu: x86_64
- architecture: 64bit
- date: 2020-06-29
- time: 09:38:15.487125
- fp16: False
- use_multiprocessing: True
- only_pretrain_model: False
- cpu_ram_mb: 32088
- use_gpu: True
- num_gpus: 1
- gpu: TITAN RTX
- gpu_ram_mb: 24217
- gpu_power_watts: 280.0
- gpu_performance_state: 2
- use_tpu: False
```

</tf>
</frameworkcontent>

Again, _inference time_ and _required memory_ for _inference_ are measured, but this time for customized configurations
of the `BertModel` class. This feature can especially be helpful when deciding for which configuration the model should
be trained.

## Benchmark best practices

This section lists a couple of best practices one should be aware of when benchmarking a model.

- Currently, only single device benchmarking is supported. When benchmarking on GPU, it is recommended that the user
  specifies on which device the code should be run by setting the `CUDA_VISIBLE_DEVICES` environment variable in the
  shell, _e.g._ `export CUDA_VISIBLE_DEVICES=0` before running the code.
- The option `no_multi_processing` should only be set to `True` for testing and debugging. To ensure accurate memory
  measurement, it is recommended to run each memory benchmark in a separate process by making sure `no_multi_processing`
  is set to `True`.
- One should always state the environment information when sharing the results of a model benchmark. Results can vary
  heavily between different GPU devices, library versions, etc., so benchmark results on their own are not very useful
  for the community.

## Sharing your benchmark

Previously all available core models (10 at the time) have been benchmarked for _inference time_, across many different
settings: using PyTorch, with and without TorchScript, using TensorFlow, with and without XLA. All of those tests were
done across CPUs (except for TensorFlow XLA) and GPUs.

The approach is detailed in the [following blogpost](https://medium.com/huggingface/benchmarking-transformers-pytorch-and-tensorflow-e2917fb891c2)
and the results are available [here](https://docs.google.com/spreadsheets/d/1sryqufw2D0XlUH4sq3e9Wnxu5EAQkaohzrJbd5HdQ_w/edit?usp=sharing).
With the new _benchmark_ tools, it is easier than ever to share your benchmark results with the community:

- [PyTorch Benchmarking Results](https://github.com/huggingface/transformers/tree/main/examples/pytorch/benchmarking/README.md).
- [TensorFlow Benchmarking Results](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/benchmarking/README.md).
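Since the built-in benchmark classes are deprecated, here is a minimal, library-agnostic sketch of the timing half of a benchmark using only the Python standard library. Note that `benchmark_fn` and the toy workload are hypothetical stand-ins for illustration, not part of the 🤗 Transformers API; dedicated tools measure memory and warm up far more carefully.

```python
import timeit


def benchmark_fn(fn, repeat=5, number=10):
    """Time `fn`, returning the best per-call average over `repeat` runs.

    Taking the minimum over several runs reduces noise from other processes,
    which mirrors what dedicated benchmark suites do.
    """
    fn()  # warmup call, excluded from the measurement
    timings = timeit.repeat(fn, repeat=repeat, number=number)
    return min(timings) / number


# Hypothetical workload standing in for a model forward pass
result = benchmark_fn(lambda: sum(i * i for i in range(10_000)))
print(f"{result * 1e3:.3f} ms per call")
```

As with the deprecated classes, always report the environment (CPU/GPU, library versions) alongside numbers produced this way.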
hf_public_repos/transformers/docs/source/en/perf_train_cpu.md
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Efficient Training on CPU

This guide focuses on training large models efficiently on CPU.

## Mixed precision with IPEX

IPEX is optimized for CPUs with AVX-512 or above, and functionally works for CPUs with only AVX2. It is therefore
expected to bring a performance benefit on Intel CPU generations with AVX-512 or above, while CPUs with only AVX2
(e.g., AMD CPUs or older Intel CPUs) might see better performance under IPEX, but this is not guaranteed. IPEX provides
performance optimizations for CPU training with both Float32 and BFloat16. The usage of BFloat16 is the main focus of
the following sections.

The low-precision data type BFloat16 has been natively supported on the 3rd Generation Xeon® Scalable Processors (aka
Cooper Lake) with the AVX512 instruction set, and will be supported on the next generation of Intel® Xeon® Scalable
Processors with the Intel® Advanced Matrix Extensions (Intel® AMX) instruction set with further boosted performance.
Auto Mixed Precision for the CPU backend has been enabled since PyTorch 1.10. At the same time, support for Auto Mixed
Precision with BFloat16 for CPU and BFloat16 optimization of operators has been massively enabled in Intel® Extension
for PyTorch, and partially upstreamed to the PyTorch master branch.
Users can get better performance and user experience with IPEX Auto Mixed Precision. Check more detailed information
for [Auto Mixed Precision](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/features/amp.html).

### IPEX installation:

IPEX releases follow PyTorch. To install via pip:

| PyTorch Version   | IPEX version   |
| :---------------: | :------------: |
| 1.13              |  1.13.0+cpu    |
| 1.12              |  1.12.300+cpu  |
| 1.11              |  1.11.200+cpu  |
| 1.10              |  1.10.100+cpu  |

```
pip install intel_extension_for_pytorch==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu
```

Check more approaches for [IPEX installation](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/installation.html).

### Usage in Trainer

To enable auto mixed precision with IPEX in Trainer, users should add `use_ipex`, `bf16` and `no_cuda` to the training
command arguments.

Take an example of the use cases on [Transformers question-answering](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering)

- Training with IPEX using BF16 auto mixed precision on CPU:

<pre> python run_qa.py \
--model_name_or_path bert-base-uncased \
--dataset_name squad \
--do_train \
--do_eval \
--per_device_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/debug_squad/ \
<b>--use_ipex \</b>
<b>--bf16 --no_cuda</b></pre>

### Practice example

Blog: [Accelerating PyTorch Transformers with Intel Sapphire Rapids](https://huggingface.co/blog/intel-sapphire-rapids)
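As a rough illustration of the BFloat16 auto mixed precision that IPEX and the Trainer build on, here is a minimal sketch using PyTorch's native CPU autocast (assuming PyTorch >= 1.10). The toy linear layer stands in for a real model; this is not the IPEX or Trainer code path itself.

```python
import torch
from torch import nn

model = nn.Linear(16, 4)  # toy stand-in for a Transformer layer
x = torch.randn(2, 16)

# Inside autocast, eligible ops (e.g. linear/matmul) run in BFloat16 on CPU,
# while the master weights stay in Float32
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    y = model(x)

print(y.dtype)  # torch.bfloat16
```

Passing `--use_ipex --bf16 --no_cuda` to a Trainer script enables this kind of BFloat16 execution, with IPEX's additional operator optimizations on top.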
hf_public_repos/transformers/docs/source/en/_redirects.yml
# Optimizing inference

perf_infer_gpu_many: perf_infer_gpu_one
hf_public_repos/transformers/docs/source/en/chat_templating.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Templates for Chat Models

## Introduction

An increasingly common use case for LLMs is **chat**. In a chat context, rather than continuing a single string of
text (as is the case with a standard language model), the model instead continues a conversation that consists of one
or more **messages**, each of which includes a **role**, like "user" or "assistant", as well as message text.

Much like tokenization, different models expect very different input formats for chat. This is the reason we added
**chat templates** as a feature. Chat templates are part of the tokenizer. They specify how to convert conversations,
represented as lists of messages, into a single tokenizable string in the format that the model expects.

Let's make this concrete with a quick example using the `BlenderBot` model. BlenderBot has an extremely simple default
template, which mostly just adds whitespace between rounds of dialogue:

```python
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill")

>>> chat = [
...    {"role": "user", "content": "Hello, how are you?"},
...    {"role": "assistant", "content": "I'm doing great. How can I help you today?"},
...    {"role": "user", "content": "I'd like to show off how chat templating works!"},
... ]

>>> tokenizer.apply_chat_template(chat, tokenize=False)
" Hello, how are you? I'm doing great. How can I help you today? I'd like to show off how chat templating works!</s>"
```

Notice how the entire chat is condensed into a single string. If we use `tokenize=True`, which is the default setting,
that string will also be tokenized for us. To see a more complex template in action, though, let's use the
`mistralai/Mistral-7B-Instruct-v0.1` model.

```python
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")

>>> chat = [
...    {"role": "user", "content": "Hello, how are you?"},
...    {"role": "assistant", "content": "I'm doing great. How can I help you today?"},
...    {"role": "user", "content": "I'd like to show off how chat templating works!"},
... ]

>>> tokenizer.apply_chat_template(chat, tokenize=False)
"<s>[INST] Hello, how are you? [/INST]I'm doing great. How can I help you today?</s> [INST] I'd like to show off how chat templating works! [/INST]"
```

Note that this time, the tokenizer has added the control tokens [INST] and [/INST] to indicate the start and end of
user messages (but not assistant messages!). Mistral-instruct was trained with these tokens, but BlenderBot was not.

## How do I use chat templates?

As you can see in the example above, chat templates are easy to use. Simply build a list of messages, with `role` and
`content` keys, and then pass it to the [`~PreTrainedTokenizer.apply_chat_template`] method. Once you do that, you'll
get output that's ready to go! When using chat templates as input for model generation, it's also a good idea to use
`add_generation_prompt=True` to add a [generation prompt](#what-are-generation-prompts).
Here's an example of preparing input for `model.generate()`, using the `Zephyr` assistant model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceH4/zephyr-7b-beta"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)  # You may want to use bfloat16 and/or move to GPU here

messages = [
    {
        "role": "system",
        "content": "You are a friendly chatbot who always responds in the style of a pirate",
    },
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
print(tokenizer.decode(tokenized_chat[0]))
```

This will yield a string in the input format that Zephyr expects.

```text
<|system|>
You are a friendly chatbot who always responds in the style of a pirate</s>
<|user|>
How many helicopters can a human eat in one sitting?</s>
<|assistant|>
```

Now that our input is formatted correctly for Zephyr, we can use the model to generate a response to the user's
question:

```python
outputs = model.generate(tokenized_chat, max_new_tokens=128)
print(tokenizer.decode(outputs[0]))
```

This will yield:

```text
<|system|>
You are a friendly chatbot who always responds in the style of a pirate</s>
<|user|>
How many helicopters can a human eat in one sitting?</s>
<|assistant|>
Matey, I'm afraid I must inform ye that humans cannot eat helicopters. Helicopters are not food, they are flying machines. Food is meant to be eaten, like a hearty plate o' grog, a savory bowl o' stew, or a delicious loaf o' bread. But helicopters, they be for transportin' and movin' around, not for eatin'. So, I'd say none, me hearties. None at all.
```

Arr, 'twas easy after all!

## Is there an automated pipeline for chat?

Yes, there is: [`ConversationalPipeline`]. This pipeline is designed to make it easy to use chat models.
Let's try the `Zephyr` example again, but this time using the pipeline:

```python
from transformers import pipeline

pipe = pipeline("conversational", "HuggingFaceH4/zephyr-7b-beta")
messages = [
    {
        "role": "system",
        "content": "You are a friendly chatbot who always responds in the style of a pirate",
    },
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
print(pipe(messages))
```

```text
Conversation id: 76d886a0-74bd-454e-9804-0467041a63dc
system: You are a friendly chatbot who always responds in the style of a pirate
user: How many helicopters can a human eat in one sitting?
assistant: Matey, I'm afraid I must inform ye that humans cannot eat helicopters. Helicopters are not food, they are flying machines. Food is meant to be eaten, like a hearty plate o' grog, a savory bowl o' stew, or a delicious loaf o' bread. But helicopters, they be for transportin' and movin' around, not for eatin'. So, I'd say none, me hearties. None at all.
```

[`ConversationalPipeline`] will take care of all the details of tokenization and calling `apply_chat_template` for you -
once the model has a chat template, all you need to do is initialize the pipeline and pass it the list of messages!

## What are "generation prompts"?

You may have noticed that the `apply_chat_template` method has an `add_generation_prompt` argument. This argument tells
the template to add tokens that indicate the start of a bot response.
For example, consider the following chat:

```python
messages = [
    {"role": "user", "content": "Hi there!"},
    {"role": "assistant", "content": "Nice to meet you!"},
    {"role": "user", "content": "Can I ask a question?"}
]
```

Here's what this will look like without a generation prompt, using the ChatML template we saw in the Zephyr example:

```python
tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=False)
"""<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
"""
```

And here's what it looks like **with** a generation prompt:

```python
tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
"""<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
"""
```

Note that this time, we've added the tokens that indicate the start of a bot response. This ensures that when the model
generates text it will write a bot response instead of doing something unexpected, like continuing the user's message.

Remember, chat models are still just language models - they're trained to continue text, and chat is just a special
kind of text to them! You need to guide them with the appropriate control tokens so they know what they're supposed to
be doing.

Not all models require generation prompts. Some models, like BlenderBot and LLaMA, don't have any special tokens before
bot responses. In these cases, the `add_generation_prompt` argument will have no effect. The exact effect that
`add_generation_prompt` has will depend on the template being used.

## Can I use chat templates in training?

Yes! We recommend that you apply the chat template as a preprocessing step for your dataset. After this, you can simply
continue like any other language model training task.
When training, you should usually set `add_generation_prompt=False`, because the added tokens to prompt an assistant
response will not be helpful during training. Let's see an example:

```python
from transformers import AutoTokenizer
from datasets import Dataset

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")

chat1 = [
    {"role": "user", "content": "Which is bigger, the moon or the sun?"},
    {"role": "assistant", "content": "The sun."}
]
chat2 = [
    {"role": "user", "content": "Which is bigger, a virus or a bacterium?"},
    {"role": "assistant", "content": "A bacterium."}
]

dataset = Dataset.from_dict({"chat": [chat1, chat2]})
dataset = dataset.map(lambda x: {"formatted_chat": tokenizer.apply_chat_template(x["chat"], tokenize=False, add_generation_prompt=False)})
print(dataset['formatted_chat'][0])
```

And we get:

```text
<|user|>
Which is bigger, the moon or the sun?</s>
<|assistant|>
The sun.</s>
```

From here, just continue training like you would with a standard language modelling task, using the `formatted_chat`
column.

## Advanced: How do chat templates work?

The chat template for a model is stored on the `tokenizer.chat_template` attribute. If no chat template is set, the
default template for that model class is used instead. Let's take a look at the template for `BlenderBot`:

```python
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill")

>>> tokenizer.default_chat_template
"{% for message in messages %}{% if message['role'] == 'user' %}{{ ' ' }}{% endif %}{{ message['content'] }}{% if not loop.last %}{{ ' ' }}{% endif %}{% endfor %}{{ eos_token }}"
```

That's kind of intimidating. Let's add some newlines and indentation to make it more readable. Note that the first
newline after each block as well as any preceding whitespace before a block are ignored by default, using the Jinja
`trim_blocks` and `lstrip_blocks` flags.
However, be cautious - although leading whitespace on each line is stripped, spaces between blocks on the same line are
not. We strongly recommend checking that your template isn't printing extra spaces where it shouldn't be!

```
{% for message in messages %}
    {% if message['role'] == 'user' %}
        {{ ' ' }}
    {% endif %}
    {{ message['content'] }}
    {% if not loop.last %}
        {{ ' ' }}
    {% endif %}
{% endfor %}
{{ eos_token }}
```

If you've never seen one of these before, this is a [Jinja template](https://jinja.palletsprojects.com/en/3.1.x/templates/).
Jinja is a templating language that allows you to write simple code that generates text. In many ways, the code and
syntax resembles Python. In pure Python, this template would look something like this:

```python
for idx, message in enumerate(messages):
    if message['role'] == 'user':
        print(' ')
    print(message['content'])
    if not idx == len(messages) - 1:  # Check for the last message in the conversation
        print(' ')
print(eos_token)
```

Effectively, the template does three things:

1. For each message, if the message is a user message, add a blank space before it, otherwise print nothing.
2. Add the message content.
3. If the message is not the last message, add two spaces after it. After the final message, print the EOS token.

This is a pretty simple template - it doesn't add any control tokens, and it doesn't support "system" messages, which
are a common way to give the model directives about how it should behave in the subsequent conversation. But Jinja
gives you a lot of flexibility to do those things! Let's see a Jinja template that can format inputs similarly to the
way LLaMA formats them (note that the real LLaMA template includes handling for default system messages and slightly
different system message handling in general - don't use this one in your actual code!)
```
{% for message in messages %}
    {% if message['role'] == 'user' %}
        {{ bos_token + '[INST] ' + message['content'] + ' [/INST]' }}
    {% elif message['role'] == 'system' %}
        {{ '<<SYS>>\\n' + message['content'] + '\\n<</SYS>>\\n\\n' }}
    {% elif message['role'] == 'assistant' %}
        {{ ' ' + message['content'] + ' ' + eos_token }}
    {% endif %}
{% endfor %}
```

Hopefully if you stare at this for a little bit, you can see what this template is doing - it adds specific tokens based on the "role" of each message, which represents who sent it. User, assistant and system messages are clearly distinguishable to the model because of the tokens they're wrapped in.

## Advanced: Adding and editing chat templates

### How do I create a chat template?

Simple: just write a Jinja template and set `tokenizer.chat_template`. You may find it easier to start with an existing template from another model and simply edit it for your needs! For example, we could take the LLaMA template above and add "[ASST]" and "[/ASST]" to assistant messages:

```
{% for message in messages %}
    {% if message['role'] == 'user' %}
        {{ bos_token + '[INST] ' + message['content'].strip() + ' [/INST]' }}
    {% elif message['role'] == 'system' %}
        {{ '<<SYS>>\\n' + message['content'].strip() + '\\n<</SYS>>\\n\\n' }}
    {% elif message['role'] == 'assistant' %}
        {{ '[ASST] ' + message['content'] + ' [/ASST]' + eos_token }}
    {% endif %}
{% endfor %}
```

Now, simply set the `tokenizer.chat_template` attribute. Next time you use [`~PreTrainedTokenizer.apply_chat_template`], it will use your new template! This attribute will be saved in the `tokenizer_config.json` file, so you can use [`~utils.PushToHubMixin.push_to_hub`] to upload your new template to the Hub and make sure everyone's using the right template for your model!
```python template = tokenizer.chat_template template = template.replace("SYS", "SYSTEM") # Change the system token tokenizer.chat_template = template # Set the new template tokenizer.push_to_hub("model_name") # Upload your new template to the Hub! ``` The method [`~PreTrainedTokenizer.apply_chat_template`] which uses your chat template is called by the [`ConversationalPipeline`] class, so once you set the correct chat template, your model will automatically become compatible with [`ConversationalPipeline`]. ### What are "default" templates? Before the introduction of chat templates, chat handling was hardcoded at the model class level. For backwards compatibility, we have retained this class-specific handling as default templates, also set at the class level. If a model does not have a chat template set, but there is a default template for its model class, the `ConversationalPipeline` class and methods like `apply_chat_template` will use the class template instead. You can find out what the default template for your tokenizer is by checking the `tokenizer.default_chat_template` attribute. This is something we do purely for backward compatibility reasons, to avoid breaking any existing workflows. Even when the class template is appropriate for your model, we strongly recommend overriding the default template by setting the `chat_template` attribute explicitly to make it clear to users that your model has been correctly configured for chat, and to future-proof in case the default templates are ever altered or deprecated. ### What template should I use? When setting the template for a model that's already been trained for chat, you should ensure that the template exactly matches the message formatting that the model saw during training, or else you will probably experience performance degradation. This is true even if you're training the model further - you will probably get the best performance if you keep the chat tokens constant. 
This is very analogous to tokenization - you generally get the best performance for inference or fine-tuning when you precisely match the tokenization used during training. If you're training a model from scratch, or fine-tuning a base language model for chat, on the other hand, you have a lot of freedom to choose an appropriate template! LLMs are smart enough to learn to handle lots of different input formats. Our default template for models that don't have a class-specific template follows the [ChatML format](https://github.com/openai/openai-python/blob/main/chatml.md), and this is a good, flexible choice for many use-cases. It looks like this: ``` {% for message in messages %} {{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}} {% endfor %} ``` If you like this one, here it is in one-liner form, ready to copy into your code. The one-liner also includes handy support for [generation prompts](#what-are-generation-prompts), but note that it doesn't add BOS or EOS tokens! If your model expects those, they won't be added automatically by `apply_chat_template` - in other words, the text will be tokenized with `add_special_tokens=False`. This is to avoid potential conflicts between the template and the `add_special_tokens` logic. If your model expects special tokens, make sure to add them to the template! ``` tokenizer.chat_template = "{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}" ``` This template wraps each message in `<|im_start|>` and `<|im_end|>` tokens, and simply writes the role as a string, which allows for flexibility in the roles you train with. 
The output looks like this: ```text <|im_start|>system You are a helpful chatbot that will do its best not to say anything so stupid that people tweet about it.<|im_end|> <|im_start|>user How are you?<|im_end|> <|im_start|>assistant I'm doing great!<|im_end|> ``` The "user", "system" and "assistant" roles are the standard for chat, and we recommend using them when it makes sense, particularly if you want your model to operate well with [`ConversationalPipeline`]. However, you are not limited to these roles - templating is extremely flexible, and any string can be a role. ### I want to add some chat templates! How should I get started? If you have any chat models, you should set their `tokenizer.chat_template` attribute and test it using [`~PreTrainedTokenizer.apply_chat_template`], then push the updated tokenizer to the Hub. This applies even if you're not the model owner - if you're using a model with an empty chat template, or one that's still using the default class template, please open a [pull request](https://huggingface.co/docs/hub/repositories-pull-requests-discussions) to the model repository so that this attribute can be set properly! Once the attribute is set, that's it, you're done! `tokenizer.apply_chat_template` will now work correctly for that model, which means it is also automatically supported in places like `ConversationalPipeline`! By ensuring that models have this attribute, we can make sure that the whole community gets to use the full power of open-source models. Formatting mismatches have been haunting the field and silently harming performance for too long - it's time to put an end to them! ## Advanced: Template writing tips If you're unfamiliar with Jinja, we generally find that the easiest way to write a chat template is to first write a short Python script that formats messages the way you want, and then convert that script into a template. 
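For example, the ChatML format shown earlier could first be prototyped as a plain Python function and only translated into Jinja afterwards. Here is a minimal sketch of that approach (`format_chat` is a hypothetical helper written for illustration, not part of the transformers API):

```python
def format_chat(messages, add_generation_prompt=False):
    """Hypothetical plain-Python prototype of a ChatML-style formatter,
    written first in Python so it can be translated into a Jinja template."""
    text = ""
    for message in messages:
        # Each concatenation here maps onto a {{ ... }} expression in the template
        text += "<|im_start|>" + message["role"] + "\n" + message["content"] + "<|im_end|>" + "\n"
    if add_generation_prompt:
        # Maps onto the {% if add_generation_prompt %} block in the template
        text += "<|im_start|>assistant\n"
    return text

chat = [
    {"role": "user", "content": "How are you?"},
    {"role": "assistant", "content": "I'm doing great!"},
]
print(format_chat(chat, add_generation_prompt=True))
```

Each `+` concatenation and `if` branch in the function corresponds one-to-one with a `{{ ... }}` expression or `{% if %}` block, which makes the final translation into Jinja mostly mechanical.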
Remember that the template handler will receive the conversation history as a variable called `messages`. Each message is a dictionary with two keys, `role` and `content`. You will be able to access `messages` in your template just like you can in Python, which means you can loop over it with `{% for message in messages %}` or access individual messages with, for example, `{{ messages[0] }}`.

You can also use the following tips to convert your code to Jinja:

### For loops

For loops in Jinja look like this:

```
{% for message in messages %}
{{ message['content'] }}
{% endfor %}
```

Note that whatever's inside the {{ expression block }} will be printed to the output. You can use operators like `+` to combine strings inside expression blocks.

### If statements

If statements in Jinja look like this:

```
{% if message['role'] == 'user' %}
{{ message['content'] }}
{% endif %}
```

Note that whereas Python uses whitespace to mark the beginnings and ends of `for` and `if` blocks, Jinja requires you to explicitly end them with `{% endfor %}` and `{% endif %}`.

### Special variables

Inside your template, you will have access to the list of `messages`, but you can also access several other special variables. These include special tokens like `bos_token` and `eos_token`, as well as the `add_generation_prompt` variable that we discussed above. You can also use the `loop` variable to access information about the current loop iteration, for example using `{% if loop.last %}` to check if the current message is the last message in the conversation. Here's an example that puts these ideas together to add a generation prompt at the end of the conversation if `add_generation_prompt` is `True`:

```
{% if loop.last and add_generation_prompt %}
{{ bos_token + 'Assistant:\n' }}
{% endif %}
```

### Notes on whitespace

As much as possible, we've tried to get Jinja to ignore whitespace outside of {{ expressions }}.
However, be aware that Jinja is a general-purpose templating engine, and it may treat whitespace between blocks on the same line as significant and print it to the output. We **strongly** recommend checking that your template isn't printing extra spaces where it shouldn't be before you upload it!
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# How to add a model to ๐Ÿค— Transformers?

The ๐Ÿค— Transformers library is often able to offer new models thanks to community contributors. But this can be a challenging project and requires in-depth knowledge of the ๐Ÿค— Transformers library and the model to implement. At Hugging Face, we're trying to empower more of the community to actively add models and we've put together this guide to walk you through the process of adding a PyTorch model (make sure you have [PyTorch installed](https://pytorch.org/get-started/locally/)).

<Tip>

If you're interested in implementing a TensorFlow model, take a look at the [How to convert a ๐Ÿค— Transformers model to TensorFlow](add_tensorflow_model) guide!

</Tip>

Along the way, you'll:

- get insights into open-source best practices
- understand the design principles behind one of the most popular deep learning libraries
- learn how to efficiently test large models
- learn how to integrate Python utilities like `black`, `ruff`, and `make fix-copies` to ensure clean and readable code

A Hugging Face team member will be available to help you along the way so you'll never be alone.
๐Ÿค— โค๏ธ To get started, open a [New model addition](https://github.com/huggingface/transformers/issues/new?assignees=&labels=New+model&template=new-model-addition.yml) issue for the model you want to see in ๐Ÿค— Transformers. If you're not especially picky about contributing a specific model, you can filter by the [New model label](https://github.com/huggingface/transformers/labels/New%20model) to see if there are any unclaimed model requests and work on it. Once you've opened a new model request, the first step is to get familiar with ๐Ÿค— Transformers if you aren't already! ## General overview of ๐Ÿค— Transformers First, you should get a general overview of ๐Ÿค— Transformers. ๐Ÿค— Transformers is a very opinionated library, so there is a chance that you don't agree with some of the library's philosophies or design choices. From our experience, however, we found that the fundamental design choices and philosophies of the library are crucial to efficiently scale ๐Ÿค— Transformers while keeping maintenance costs at a reasonable level. A good first starting point to better understand the library is to read the [documentation of our philosophy](philosophy). As a result of our way of working, there are some choices that we try to apply to all models: - Composition is generally favored over-abstraction - Duplicating code is not always bad if it strongly improves the readability or accessibility of a model - Model files are as self-contained as possible so that when you read the code of a specific model, you ideally only have to look into the respective `modeling_....py` file. In our opinion, the library's code is not just a means to provide a product, *e.g.* the ability to use BERT for inference, but also as the very product that we want to improve. Hence, when adding a model, the user is not only the person who will use your model, but also everybody who will read, try to understand, and possibly tweak your code. 
With this in mind, let's go a bit deeper into the general library design.

### Overview of models

To successfully add a model, it is important to understand the interaction between your model and its config, [`PreTrainedModel`], and [`PretrainedConfig`]. For illustration purposes, we will call the model to be added to ๐Ÿค— Transformers `BrandNewBert`.

Let's take a look:

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_overview.png"/>

As you can see, we do make use of inheritance in ๐Ÿค— Transformers, but we keep the level of abstraction to an absolute minimum. There are never more than two levels of abstraction for any model in the library. `BrandNewBertModel` inherits from `BrandNewBertPreTrainedModel` which in turn inherits from [`PreTrainedModel`] and that's it. As a general rule, we want to make sure that a new model only depends on [`PreTrainedModel`]. The important functionalities that are automatically provided to every new model are [`~PreTrainedModel.from_pretrained`] and [`~PreTrainedModel.save_pretrained`], which are used for serialization and deserialization. All of the other important functionalities, such as `BrandNewBertModel.forward`, should be completely defined in the new `modeling_brand_new_bert.py` script. Next, we want to make sure that a model with a specific head layer, such as `BrandNewBertForMaskedLM`, does not inherit from `BrandNewBertModel`, but rather uses `BrandNewBertModel` as a component that can be called in its forward pass, to keep the level of abstraction low. Every new model requires a configuration class, called `BrandNewBertConfig`.
This configuration is always stored as an attribute in [`PreTrainedModel`], and thus can be accessed via the `config` attribute for all classes inheriting from `BrandNewBertPreTrainedModel`:

```python
model = BrandNewBertModel.from_pretrained("brandy/brand_new_bert")
model.config  # model has access to its config
```

Similar to the model, the configuration inherits basic serialization and deserialization functionalities from [`PretrainedConfig`]. Note that the configuration and the model are always serialized into two different formats - the model to a *pytorch_model.bin* file and the configuration to a *config.json* file. Calling [`~PreTrainedModel.save_pretrained`] will automatically call [`~PretrainedConfig.save_pretrained`], so that both model and configuration are saved.

### Code style

When coding your new model, keep in mind that Transformers is an opinionated library and we have a few quirks of our own regarding how code should be written :-)

1. The forward pass of your model should be fully written in the modeling file while being fully independent of other models in the library. If you want to reuse a block from another model, copy the code and paste it with a `# Copied from` comment on top (see [here](https://github.com/huggingface/transformers/blob/v4.17.0/src/transformers/models/roberta/modeling_roberta.py#L160) for a good example and [there](pr_checks#check-copies) for more documentation on Copied from).
2. The code should be fully understandable, even by a non-native English speaker. This means you should pick descriptive variable names and avoid abbreviations. As an example, `activation` is preferred to `act`. One-letter variable names are strongly discouraged unless it's an index in a for loop.
3. More generally, we prefer longer, explicit code to a short, magical one.
4.
Avoid subclassing `nn.Sequential` in PyTorch; instead subclass `nn.Module` and write the forward pass, so that anyone using your code can quickly debug it by adding print statements or breakpoints.
5. Your function signatures should be type-annotated. For the rest, good variable names are way more readable and understandable than type annotations.

### Overview of tokenizers

Not quite ready yet :-( This section will be added soon!

## Step-by-step recipe to add a model to ๐Ÿค— Transformers

Everyone has different preferences for how to port a model, so it can be very helpful for you to take a look at summaries of how other contributors ported models to Hugging Face. Here is a list of community blog posts on how to port a model:

1. [Porting GPT2 Model](https://medium.com/huggingface/from-tensorflow-to-pytorch-265f40ef2a28) by [Thomas](https://huggingface.co/thomwolf)
2. [Porting WMT19 MT Model](https://huggingface.co/blog/porting-fsmt) by [Stas](https://huggingface.co/stas)

From experience, we can tell you that the most important things to keep in mind when adding a model are:

- Don't reinvent the wheel! Most parts of the code you will add for the new ๐Ÿค— Transformers model already exist somewhere in ๐Ÿค— Transformers. Take some time to find similar, already existing models and tokenizers you can copy from. [grep](https://www.gnu.org/software/grep/) and [rg](https://github.com/BurntSushi/ripgrep) are your friends. Note that it might very well happen that your model's tokenizer is based on one model implementation, and your model's modeling code on another one. *E.g.* FSMT's modeling code is based on BART, while FSMT's tokenizer code is based on XLM.
- It's more of an engineering challenge than a scientific challenge. You should spend more time creating an efficient debugging environment rather than trying to understand all theoretical aspects of the model in the paper.
- Ask for help when you're stuck!
Models are the core component of ๐Ÿค— Transformers so we at Hugging Face are more than happy to help you at every step to add your model. Don't hesitate to ask if you notice you are not making progress.

In the following, we try to give you a general recipe that we found most useful when porting a model to ๐Ÿค— Transformers.

The following list is a summary of everything that has to be done to add a model and can be used by you as a To-Do List:

โ˜ (Optional) Understood the model's theoretical aspects<br>
โ˜ Prepared ๐Ÿค— Transformers dev environment<br>
โ˜ Set up debugging environment of the original repository<br>
โ˜ Created script that successfully runs the `forward()` pass using the original repository and checkpoint<br>
โ˜ Successfully added the model skeleton to ๐Ÿค— Transformers<br>
โ˜ Successfully converted original checkpoint to ๐Ÿค— Transformers checkpoint<br>
โ˜ Successfully ran `forward()` pass in ๐Ÿค— Transformers that gives identical output to original checkpoint<br>
โ˜ Finished model tests in ๐Ÿค— Transformers<br>
โ˜ Successfully added tokenizer in ๐Ÿค— Transformers<br>
โ˜ Ran end-to-end integration tests<br>
โ˜ Finished docs<br>
โ˜ Uploaded model weights to the Hub<br>
โ˜ Submitted the pull request<br>
โ˜ (Optional) Added a demo notebook

To begin with, we usually recommend starting by getting a good theoretical understanding of `BrandNewBert`. However, if you prefer to understand the theoretical aspects of the model *on-the-job*, then it is totally fine to directly dive into `BrandNewBert`'s code-base. This option might suit you better if your engineering skills are better than your theoretical skills, if you have trouble understanding `BrandNewBert`'s paper, or if you just enjoy programming much more than reading scientific papers.

### 1. (Optional) Theoretical aspects of BrandNewBert

You should take some time to read *BrandNewBert's* paper, if such descriptive work exists.
There might be large sections of the paper that are difficult to understand. If this is the case, this is fine - don't worry! The goal is not to get a deep theoretical understanding of the paper, but to extract the necessary information required to effectively re-implement the model in ๐Ÿค— Transformers. That being said, you don't have to spend too much time on the theoretical aspects, but rather focus on the practical ones, namely:

- What type of model is *brand_new_bert*? BERT-like encoder-only model? GPT2-like decoder-only model? BART-like encoder-decoder model? Look at the [model_summary](model_summary) if you're not familiar with the differences between those.
- What are the applications of *brand_new_bert*? Text classification? Text generation? Seq2Seq tasks, *e.g.,* summarization?
- What is the novel feature of the model that makes it different from BERT/GPT-2/BART?
- Which of the already existing [๐Ÿค— Transformers models](https://huggingface.co/transformers/#contents) is most similar to *brand_new_bert*?
- What type of tokenizer is used? A SentencePiece tokenizer? A WordPiece tokenizer? Is it the same tokenizer as used for BERT or BART?

After you feel like you have gotten a good overview of the architecture of the model, you might want to write to the Hugging Face team with any questions you might have. This might include questions regarding the model's architecture, its attention layer, etc. We will be more than happy to help you.

### 2. Next prepare your environment

1. Fork the [repository](https://github.com/huggingface/transformers) by clicking on the 'Fork' button on the repository's page. This creates a copy of the code under your GitHub user account.

2. Clone your `transformers` fork to your local disk, and add the base repository as a remote:

```bash
git clone https://github.com/[your Github handle]/transformers.git
cd transformers
git remote add upstream https://github.com/huggingface/transformers.git
```

3.
Set up a development environment, for instance by running the following command:

```bash
python -m venv .env
source .env/bin/activate
pip install -e ".[dev]"
```

Depending on your OS, and since the number of optional dependencies of Transformers is growing, you might get a failure with this command. If that's the case, make sure to install the Deep Learning framework you are working with (PyTorch, TensorFlow and/or Flax), then do:

```bash
pip install -e ".[quality]"
```

which should be enough for most use cases. You can then return to the parent directory:

```bash
cd ..
```

4. We recommend adding the PyTorch version of *brand_new_bert* to Transformers. To install PyTorch, please follow the instructions on https://pytorch.org/get-started/locally/.

**Note:** You don't need to have CUDA installed. Making the new model work on CPU is sufficient.

5. To port *brand_new_bert*, you will also need access to its original repository:

```bash
git clone https://github.com/org_that_created_brand_new_bert_org/brand_new_bert.git
cd brand_new_bert
pip install -e .
```

Now you have set up a development environment to port *brand_new_bert* to ๐Ÿค— Transformers.

### 3.-4. Run a pretrained checkpoint using the original repository

At first, you will work on the original *brand_new_bert* repository. Often, the original implementation is very "researchy", meaning that documentation might be lacking and the code can be difficult to understand. But this should be exactly your motivation to reimplement *brand_new_bert*. At Hugging Face, one of our main goals is to *make people stand on the shoulders of giants*, which translates here very well into taking a working model and rewriting it to make it as **accessible, user-friendly, and beautiful** as possible. This is the number-one motivation to re-implement models into ๐Ÿค— Transformers - trying to make complex new NLP technology accessible to **everybody**.

You should therefore start by diving into the original repository.
Successfully running the official pretrained model in the original repository is often **the most difficult** step. From our experience, it is very important to spend some time getting familiar with the original code-base. You need to figure out the following:

- Where to find the pretrained weights?
- How to load the pretrained weights into the corresponding model?
- How to run the tokenizer independently from the model?
- Trace one forward pass so that you know which classes and functions are required for a simple forward pass. Usually, you only have to reimplement those functions.
- Be able to locate the important components of the model: Where is the model's class? Are there model sub-classes, *e.g.* EncoderModel, DecoderModel? Where is the self-attention layer? Are there multiple different attention layers, *e.g.* *self-attention*, *cross-attention*...?
- How can you debug the model in the original environment of the repo? Do you have to add *print* statements, can you work with an interactive debugger like *ipdb*, or should you use an efficient IDE to debug the model, like PyCharm?

It is very important that before you start the porting process, you can **efficiently** debug code in the original repository! Also, remember that you are working with an open-source library, so do not hesitate to open an issue, or even a pull request, in the original repository. The maintainers of this repository are most likely very happy about someone looking into their code!

At this point, it is really up to you which debugging environment and strategy you prefer to use to debug the original model. We strongly advise against setting up a costly GPU environment; instead, simply work on a CPU both when starting to dive into the original repository and when starting to write the ๐Ÿค— Transformers implementation of the model. Only at the very end, when the model has already been successfully ported to ๐Ÿค— Transformers, should you verify that the model also works as expected on GPU.
In general, there are two possible debugging environments for running the original model:

- [Jupyter notebooks](https://jupyter.org/) / [google colab](https://colab.research.google.com/notebooks/intro.ipynb)
- Local python scripts.

Jupyter notebooks have the advantage that they allow for cell-by-cell execution, which can be helpful to better split logical components from one another and to have faster debugging cycles as intermediate results can be stored. Also, notebooks are often easier to share with other contributors, which might be very helpful if you want to ask the Hugging Face team for help. If you are familiar with Jupyter notebooks, we strongly recommend you work with them.

The obvious disadvantage of Jupyter notebooks is that if you are not used to working with them you will have to spend some time adjusting to the new programming environment and you might not be able to use your known debugging tools anymore, like `ipdb`.

For each code-base, a good first step is always to load a **small** pretrained checkpoint and to be able to reproduce a single forward pass using a dummy integer vector of input IDs as an input. Such a script could look like this (in pseudocode):

```python
model = BrandNewBertModel.load_pretrained_checkpoint("/path/to/checkpoint/")
input_ids = [0, 4, 5, 2, 3, 7, 9]  # vector of input ids
original_output = model.predict(input_ids)
```

Next, regarding the debugging strategy, there are generally a few to choose from:

- Decompose the original model into many small testable components and run a forward pass on each of those for verification
- Decompose the original model only into the original *tokenizer* and the original *model*, run a forward pass on those, and use intermediate print statements or breakpoints for verification

Again, it is up to you which strategy to choose. Often, one or the other is advantageous depending on the original code base.
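Whichever strategy you pick, the verification step itself is the same: check that corresponding outputs agree numerically within a tolerance. A minimal, framework-agnostic sketch of such a check (plain nested lists stand in for tensors here; with real models you would use something like `torch.allclose` with `atol=1e-3` instead):

```python
def max_abs_diff(a, b):
    """Largest element-wise absolute difference between two same-shaped nested lists."""
    if isinstance(a, (int, float)):
        return abs(a - b)
    return max(max_abs_diff(x, y) for x, y in zip(a, b))

# Outputs of the same layer from the original repo and the ported version
original_output = [[-0.1465, -0.6501, 0.1993], [-0.4417, -0.5920, 0.3450]]
ported_output = [[-0.1464, -0.6501, 0.1994], [-0.4417, -0.5921, 0.3450]]

diff = max_abs_diff(original_output, ported_output)
# 1e-3 is the tolerance the ๐Ÿค— Transformers integration tests use
assert diff < 1e-3, f"outputs diverge by {diff}"
```

Running exactly this kind of check at several intermediate layers lets you bisect a mismatch down to the first layer where the two implementations diverge.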
If the original code-base allows you to decompose the model into smaller sub-components, *e.g.* if the original code-base can easily be run in eager mode, it is usually worth the effort to do so. There are some important advantages to taking the more difficult road in the beginning:

- at a later stage when comparing the original model to the Hugging Face implementation, you can verify automatically for each component individually that the corresponding component of the ๐Ÿค— Transformers implementation matches instead of relying on visual comparison via print statements
- it gives you a way to decompose the big problem of porting a model into smaller problems of just porting individual components and thus structure your work better
- separating the model into logical meaningful components will help you to get a better overview of the model's design and thus to better understand the model
- at a later stage those component-by-component tests help you to ensure that no regression occurs as you continue changing your code

[Lysandre's](https://gist.github.com/LysandreJik/db4c948f6b4483960de5cbac598ad4ed) integration checks for ELECTRA give a nice example of how this can be done.

However, if the original code-base is very complex or only allows intermediate components to be run in a compiled mode, it might be too time-consuming or even impossible to separate the model into smaller testable sub-components. A good example is [T5's MeshTensorFlow](https://github.com/tensorflow/mesh/tree/master/mesh_tensorflow) library which is very complex and does not offer a simple way to decompose the model into its sub-components. For such libraries, one often relies on verifying print statements.

No matter which strategy you choose, the recommended procedure is the same: you should start by debugging the starting layers first and the ending layers last.
It is recommended that you retrieve the output, either by print statements or sub-component functions, of the following layers in the following order:

1. Retrieve the input IDs passed to the model
2. Retrieve the word embeddings
3. Retrieve the input of the first Transformer layer
4. Retrieve the output of the first Transformer layer
5. Retrieve the output of the following n - 1 Transformer layers
6. Retrieve the output of the whole BrandNewBert Model

Input IDs should thereby consist of an array of integers, *e.g.* `input_ids = [0, 4, 4, 3, 2, 4, 1, 7, 19]`

The outputs of the following layers often consist of multi-dimensional float arrays and can look like this:

```
[[
 [-0.1465, -0.6501,  0.1993,  ...,  0.1451,  0.3430,  0.6024],
 [-0.4417, -0.5920,  0.3450,  ..., -0.3062,  0.6182,  0.7132],
 [-0.5009, -0.7122,  0.4548,  ..., -0.3662,  0.6091,  0.7648],
 ...,
 [-0.5613, -0.6332,  0.4324,  ..., -0.3792,  0.7372,  0.9288],
 [-0.5416, -0.6345,  0.4180,  ..., -0.3564,  0.6992,  0.9191],
 [-0.5334, -0.6403,  0.4271,  ..., -0.3339,  0.6533,  0.8694]]],
```

We expect that every model added to ๐Ÿค— Transformers passes a couple of integration tests, meaning that the original model and the reimplemented version in ๐Ÿค— Transformers have to give the exact same output up to a precision of 0.001! Since it is normal that the exact same model written in different libraries can give a slightly different output depending on the library framework, we accept an error tolerance of 1e-3 (0.001). It is not enough if the model gives nearly the same output; the outputs have to be almost identical. Therefore, you will certainly compare the intermediate outputs of the ๐Ÿค— Transformers version multiple times against the intermediate outputs of the original implementation of *brand_new_bert*, in which case an **efficient** debugging environment of the original repository is absolutely important. Here is some advice to make your debugging environment as efficient as possible.
- Find the best way of debugging intermediate results. Is the original repository written in PyTorch? Then you should probably take the time to write a longer script that decomposes the original model into smaller sub-components to retrieve intermediate values. Is the original repository written in TensorFlow 1? Then you might have to rely on TensorFlow print operations like [tf.print](https://www.tensorflow.org/api_docs/python/tf/print) to output intermediate values. Is the original repository written in Jax? Then make sure that the model is **not jitted** when running the forward pass, *e.g.* check out [this link](https://github.com/google/jax/issues/196).
- Use the smallest pretrained checkpoint you can find. The smaller the checkpoint, the faster your debug cycle becomes. It is not efficient if your pretrained model is so big that your forward pass takes more than 10 seconds. In case only very large checkpoints are available, it might make more sense to create a dummy model in the new environment with randomly initialized weights and save those weights for comparison with the ๐Ÿค— Transformers version of your model.
- Make sure you are using the easiest way of calling a forward pass in the original repository. Ideally, you want to find the function in the original repository that **only** calls a single forward pass, *i.e.* a function that is often called `predict`, `evaluate`, `forward` or `__call__`. You don't want to debug a function that calls `forward` multiple times, *e.g.* to generate text, like `autoregressive_sample` or `generate`.
- Try to separate the tokenization from the model's *forward* pass. If the original repository shows examples where you have to input a string, then try to find out where in the forward call the string input is changed to input ids and start from this point. This might mean that you have to write a small script yourself or change the original code so that you can directly input the ids instead of an input string.
- Make sure that the model in your debugging setup is **not** in training mode, which often causes the model to yield random outputs due to the multiple dropout layers in the model. Make sure that the forward pass in your debugging environment is **deterministic** so that the dropout layers are not used. Or use *transformers.utils.set_seed* if the old and new implementations are in the same framework.

The following section gives you more specific details/tips on how you can do this for *brand_new_bert*.

### 5.-14. Port BrandNewBert to ๐Ÿค— Transformers

Next, you can finally start adding new code to ๐Ÿค— Transformers. Go into the clone of your ๐Ÿค— Transformers' fork:

```bash
cd transformers
```

In the special case that you are adding a model whose architecture exactly matches the model architecture of an existing model, you only have to add a conversion script as described in [this section](#write-a-conversion-script). In this case, you can just re-use the whole model architecture of the already existing model.

Otherwise, let's start generating a new model. You have two choices here:

- `transformers-cli add-new-model-like` to add a new model like an existing one
- `transformers-cli add-new-model` to add a new model from our template (will look like BERT or Bart depending on the type of model you select)

In both cases, you will be prompted with a questionnaire to fill in the basic information of your model. The second command requires you to install `cookiecutter`; you can find more information on it [here](https://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model).

**Open a Pull Request on the main huggingface/transformers repo**

Before starting to adapt the automatically generated code, now is the time to open a โ€œWork in progress (WIP)โ€ pull request, *e.g.* โ€œ[WIP] Add *brand_new_bert*โ€, in ๐Ÿค— Transformers so that you and the Hugging Face team can work side-by-side on integrating the model into ๐Ÿค— Transformers.
You should do the following:

1. Create a branch with a descriptive name from your main branch

```bash
git checkout -b add_brand_new_bert
```

2. Commit the automatically generated code:

```bash
git add .
git commit
```

3. Fetch and rebase to current main

```bash
git fetch upstream
git rebase upstream/main
```

4. Push the changes to your account using:

```bash
git push -u origin add_brand_new_bert
```

5. Once you are satisfied, go to the webpage of your fork on GitHub. Click on โ€œPull requestโ€. Make sure to add the GitHub handle of some members of the Hugging Face team as reviewers, so that the Hugging Face team gets notified for future changes.

6. Change the PR into a draft by clicking on โ€œConvert to draftโ€ on the right of the GitHub pull request web page.

In the following, whenever you have made some progress, don't forget to commit your work and push it to your account so that it shows in the pull request. Additionally, you should make sure to update your work with the current main from time to time by doing:

```bash
git fetch upstream
git merge upstream/main
```

In general, all questions you might have regarding the model or your implementation should be asked in your PR and discussed/solved in the PR. This way, the Hugging Face team will always be notified when you are committing new code or if you have a question. It is often very helpful to point the Hugging Face team to your added code so that the Hugging Face team can efficiently understand your problem or question. To do so, you can go to the โ€œFiles changedโ€ tab where you see all of your changes, go to a line regarding which you want to ask a question, and click on the โ€œ+โ€ symbol to add a comment. Whenever a question or problem has been solved, you can click on the โ€œResolveโ€ button of the created comment. In the same way, the Hugging Face team will open comments when reviewing your code. We recommend asking most questions on GitHub on your PR.
For some very general questions that are not very useful for the public, feel free to ping the Hugging Face team by Slack or email.

**5. Adapt the generated models code for brand_new_bert**

At first, we will focus only on the model itself and not care about the tokenizer. All the relevant code should be found in the generated files `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py` and `src/transformers/models/brand_new_bert/configuration_brand_new_bert.py`.

Now you can finally start coding :). The generated code in `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py` will either have the same architecture as BERT if it's an encoder-only model or BART if it's an encoder-decoder model. At this point, you should remind yourself what you've learned in the beginning about the theoretical aspects of the model: *How is the model different from BERT or BART?* Implement those changes, which often means changing the *self-attention* layer, the order of the normalization layer, etcโ€ฆ Again, it is often useful to look at the similar architecture of already existing models in Transformers to get a better feeling of how your model should be implemented.

**Note** that at this point, you don't have to be very sure that your code is fully correct or clean. Rather, it is advised to add a first *unclean*, copy-pasted version of the original code to `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py` until you feel like all the necessary code is added. From our experience, it is much more efficient to quickly add a first version of the required code and improve/correct the code iteratively with the conversion script as described in the next section.
The only thing that has to work at this point is that you can instantiate the ๐Ÿค— Transformers implementation of *brand_new_bert*, *i.e.* the following command should work:

```python
from transformers import BrandNewBertModel, BrandNewBertConfig

model = BrandNewBertModel(BrandNewBertConfig())
```

The above command will create a model according to the default parameters as defined in `BrandNewBertConfig()` with random weights, thus making sure that the `init()` methods of all components work.

Note that all random initialization should happen in the `_init_weights` method of your `BrandNewBertPreTrainedModel` class. It should initialize all leaf modules depending on the variables of the config. Here is an example with the BERT `_init_weights` method:

```py
def _init_weights(self, module):
    """Initialize the weights"""
    if isinstance(module, nn.Linear):
        module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
        if module.bias is not None:
            module.bias.data.zero_()
    elif isinstance(module, nn.Embedding):
        module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
        if module.padding_idx is not None:
            module.weight.data[module.padding_idx].zero_()
    elif isinstance(module, nn.LayerNorm):
        module.bias.data.zero_()
        module.weight.data.fill_(1.0)
```

You can have some more custom schemes if you need a special initialization for some modules. For instance, in `Wav2Vec2ForPreTraining`, the last two linear layers need to have the initialization of the regular PyTorch `nn.Linear` but all the other ones should use an initialization as above.
This is coded like this:

```py
def _init_weights(self, module):
    """Initialize the weights"""
    if isinstance(module, Wav2Vec2ForPreTraining):
        module.project_hid.reset_parameters()
        module.project_q.reset_parameters()
        module.project_hid._is_hf_initialized = True
        module.project_q._is_hf_initialized = True
    elif isinstance(module, nn.Linear):
        module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
        if module.bias is not None:
            module.bias.data.zero_()
```

The `_is_hf_initialized` flag is internally used to make sure we only initialize a submodule once. By setting it to `True` for `module.project_q` and `module.project_hid`, we make sure the custom initialization we did is not overridden later on, as the `_init_weights` function won't be applied to them.

**6. Write a conversion script**

Next, you should write a conversion script that lets you convert the checkpoint you used to debug *brand_new_bert* in the original repository to a checkpoint compatible with your just created ๐Ÿค— Transformers implementation of *brand_new_bert*. It is not advised to write the conversion script from scratch, but rather to look through already existing conversion scripts in ๐Ÿค— Transformers for one that has been used to convert a similar model that was written in the same framework as *brand_new_bert*. Usually, it is enough to copy an already existing conversion script and slightly adapt it for your use case. Don't hesitate to ask the Hugging Face team to point you to a similar already existing conversion script for your model.
- If you are porting a model from TensorFlow to PyTorch, a good starting point might be BERT's conversion script [here](https://github.com/huggingface/transformers/blob/7acfa95afb8194f8f9c1f4d2c6028224dbed35a2/src/transformers/models/bert/modeling_bert.py#L91)
- If you are porting a model from PyTorch to PyTorch, a good starting point might be BART's conversion script [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bart/convert_bart_original_pytorch_checkpoint_to_pytorch.py)

In the following, we'll quickly explain how PyTorch models store layer weights and define layer names. In PyTorch, the name of a layer is defined by the name of the class attribute you give the layer. Let's define a dummy model in PyTorch, called `SimpleModel`, as follows:

```python
from torch import nn


class SimpleModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.dense = nn.Linear(10, 10)
        self.intermediate = nn.Linear(10, 10)
        self.layer_norm = nn.LayerNorm(10)
```

Now we can create an instance of this model definition, which will initialize all layers (`dense`, `intermediate`, `layer_norm`) with random weights. We can print the model to see its architecture:

```python
model = SimpleModel()

print(model)
```

This will print out the following:

```
SimpleModel(
  (dense): Linear(in_features=10, out_features=10, bias=True)
  (intermediate): Linear(in_features=10, out_features=10, bias=True)
  (layer_norm): LayerNorm((10,), eps=1e-05, elementwise_affine=True)
)
```

We can see that the layer names are defined by the name of the class attribute in PyTorch.
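These attribute names also determine the keys of the model's `state_dict`, which is the dictionary a conversion script ultimately has to fill. A small sketch for the `SimpleModel` above (repeated here so the snippet is self-contained):

```python
from torch import nn


class SimpleModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.dense = nn.Linear(10, 10)
        self.intermediate = nn.Linear(10, 10)
        self.layer_norm = nn.LayerNorm(10)


model = SimpleModel()

# Every parameter shows up under "<attribute name>.<parameter name>"
for name, tensor in model.state_dict().items():
    print(name, tuple(tensor.shape))
```

Matching these keys against the names stored in the original checkpoint is the core of every conversion script.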
You can print out the weight values of a specific layer:

```python
print(model.dense.weight.data)
```

to see that the weights were randomly initialized:

```
tensor([[-0.0818,  0.2207, -0.0749, -0.0030,  0.0045, -0.1569, -0.1598,  0.0212, -0.2077,  0.2157],
        [ 0.1044,  0.0201,  0.0990,  0.2482,  0.3116,  0.2509,  0.2866, -0.2190,  0.2166, -0.0212],
        [-0.2000,  0.1107, -0.1999, -0.3119,  0.1559,  0.0993,  0.1776, -0.1950, -0.1023, -0.0447],
        [-0.0888, -0.1092,  0.2281,  0.0336,  0.1817, -0.0115,  0.2096,  0.1415, -0.1876, -0.2467],
        [ 0.2208, -0.2352, -0.1426, -0.2636, -0.2889, -0.2061, -0.2849, -0.0465,  0.2577,  0.0402],
        [ 0.1502,  0.2465,  0.2566,  0.0693,  0.2352, -0.0530,  0.1859, -0.0604,  0.2132,  0.1680],
        [ 0.1733, -0.2407, -0.1721,  0.1484,  0.0358, -0.0633, -0.0721, -0.0090,  0.2707, -0.2509],
        [-0.1173,  0.1561,  0.2945,  0.0595, -0.1996,  0.2988, -0.0802,  0.0407,  0.1829, -0.1568],
        [-0.1164, -0.2228, -0.0403,  0.0428,  0.1339,  0.0047,  0.1967,  0.2923,  0.0333, -0.0536],
        [-0.1492, -0.1616,  0.1057,  0.1950, -0.2807, -0.2710, -0.1586,  0.0739,  0.2220,  0.2358]])
```

In the conversion script, you should fill those randomly initialized weights with the exact weights of the corresponding layer in the checkpoint. *E.g.*

```python
# retrieve matching layer weights, e.g. by
# recursive algorithm
layer_name = "dense"
pretrained_weight = array_of_dense_layer

model_pointer = getattr(model, "dense")

model_pointer.weight.data = torch.from_numpy(pretrained_weight)
```

While doing so, you must verify that each randomly initialized weight of your PyTorch model and its corresponding pretrained checkpoint weight exactly match in both **shape and name**. To do so, it is **necessary** to add assert statements for the shape and print out the names of the checkpoint weights. *E.g.*
you should add statements like:

```python
assert (
    model_pointer.weight.shape == pretrained_weight.shape
), f"Pointer shape of random weight {model_pointer.weight.shape} and array shape of checkpoint weight {pretrained_weight.shape} mismatched"
```

Besides, you should also print out the names of both weights to make sure they match, *e.g.*

```python
logger.info(f"Initialize PyTorch weight {layer_name} from {pretrained_weight.name}")
```

If either the shape or the name doesn't match, you probably assigned the wrong checkpoint weight to a randomly initialized layer of the ๐Ÿค— Transformers implementation.

An incorrect shape is most likely due to an incorrect setting of the config parameters in `BrandNewBertConfig()` that do not exactly match those that were used for the checkpoint you want to convert. However, it could also be that PyTorch's implementation of a layer requires the weight to be transposed beforehand.

Finally, you should also check that **all** required weights are initialized and print out all checkpoint weights that were not used for initialization to make sure the model is correctly converted. It is completely normal that the conversion trials fail with either a wrong shape statement or a wrong name assignment. This is most likely because either you used incorrect parameters in `BrandNewBertConfig()`, have a wrong architecture in the ๐Ÿค— Transformers implementation, have a bug in the `init()` functions of one of the components of the ๐Ÿค— Transformers implementation, or need to transpose one of the checkpoint weights.

This step should be iterated with the previous step until all weights of the checkpoint are correctly loaded in the Transformers model.
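The shape/name bookkeeping described above can be wrapped into a small loop. The sketch below is a toy stand-in for a real conversion script: `SimpleModel` is the dummy model from before, and `checkpoint` is a hypothetical in-memory checkpoint dictionary (a real script would load it from disk):

```python
import torch
from torch import nn


class SimpleModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.dense = nn.Linear(10, 10)
        self.layer_norm = nn.LayerNorm(10)


model = SimpleModel()

# Hypothetical checkpoint: one tensor per parameter name, plus one unused entry
checkpoint = {name: torch.randn_like(t) for name, t in model.state_dict().items()}
checkpoint["unused.bias"] = torch.zeros(10)

used = set()
for name, param in model.named_parameters():
    pretrained = checkpoint[name]
    # Shape check before assignment, as recommended above
    assert param.shape == pretrained.shape, (
        f"Shape mismatch for {name}: {param.shape} vs {pretrained.shape}"
    )
    param.data = pretrained  # fill the randomly initialized weight
    used.add(name)

# Checkpoint weights that were never used usually indicate a conversion bug
leftover = set(checkpoint) - used
print(leftover)
```

Printing `leftover` at the end of a conversion run is an easy way to catch weights that were silently dropped.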
Having correctly loaded the checkpoint into the ๐Ÿค— Transformers implementation, you can then save the model under a folder of your choice `/path/to/converted/checkpoint/folder` that should then contain both a `pytorch_model.bin` file and a `config.json` file:

```python
model.save_pretrained("/path/to/converted/checkpoint/folder")
```

**7. Implement the forward pass**

Having managed to correctly load the pretrained weights into the ๐Ÿค— Transformers implementation, you should now make sure that the forward pass is correctly implemented. In [Get familiar with the original repository](#34-run-a-pretrained-checkpoint-using-the-original-repository), you have already created a script that runs a forward pass of the model using the original repository. Now you should write an analogous script using the ๐Ÿค— Transformers implementation instead of the original one. It should look as follows:

```python
model = BrandNewBertModel.from_pretrained("/path/to/converted/checkpoint/folder")
input_ids = [0, 4, 4, 3, 2, 4, 1, 7, 19]
output = model(input_ids).last_hidden_state
```

It is very likely that the ๐Ÿค— Transformers implementation and the original model implementation don't give the exact same output the very first time or that the forward pass throws an error. Don't be disappointed - it's expected! First, you should make sure that the forward pass doesn't throw any errors. It often happens that the wrong dimensions are used, leading to a *Dimensionality mismatch* error, or that the wrong data type object is used, *e.g.* `torch.long` instead of `torch.float32`. Don't hesitate to ask the Hugging Face team for help if you don't manage to solve certain errors.

The final part to make sure the ๐Ÿค— Transformers implementation works correctly is to ensure that the outputs are equivalent to a precision of `1e-3`.
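A frequent cause of the dtype errors mentioned above is feeding token ids as a float tensor or as an unbatched list. A minimal sketch of preparing the hard-coded ids correctly (the embedding here is a toy stand-in, not the real model):

```python
import torch
from torch import nn

embed = nn.Embedding(32, 8)  # toy stand-in for the model's word embeddings

# Models expect a batched integer tensor of shape (batch_size, sequence_length);
# embedding lookups fail on float tensors such as torch.tensor([0.0, 4.0, ...])
input_ids = torch.tensor([[0, 4, 4, 3, 2, 4, 1, 7, 19]], dtype=torch.long)
hidden = embed(input_ids)
print(input_ids.shape, hidden.shape)
```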
First, you should ensure that the output shapes are identical, *i.e.* `outputs.shape` should yield the same value for the script of the ๐Ÿค— Transformers implementation and the original implementation. Next, you should make sure that the output values are identical as well. This is one of the most difficult parts of adding a new model. Common reasons why the outputs are not identical are:

- Some layers were not added, *e.g.* an *activation* layer was not added, or the residual connection was forgotten
- The word embedding matrix was not tied
- The wrong positional embeddings are used because the original implementation uses an offset
- Dropout is applied during the forward pass. To fix this make sure *model.training is False* and that no dropout layer is falsely activated during the forward pass, *i.e.* pass *self.training* to [PyTorch's functional dropout](https://pytorch.org/docs/stable/nn.functional.html?highlight=dropout#torch.nn.functional.dropout)

The best way to fix the problem is usually to look at the forward pass of the original implementation and the ๐Ÿค— Transformers implementation side-by-side and check if there are any differences. Ideally, you should debug/print out intermediate outputs of both implementations of the forward pass to find the exact position in the network where the ๐Ÿค— Transformers implementation shows a different output than the original implementation. First, make sure that the hard-coded `input_ids` in both scripts are identical. Next, verify that the outputs of the first transformation of the `input_ids` (usually the word embeddings) are identical. And then work your way up to the very last layer of the network. At some point, you will notice a difference between the two implementations, which should point you to the bug in the ๐Ÿค— Transformers implementation.
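Working "from the embeddings up" can be partly automated once you have the intermediate outputs of both implementations collected in the same order. A small helper that reports the first layer at which they diverge (the two tensor lists below are dummies standing in for the captured outputs):

```python
import torch


def first_divergence(original_outputs, ported_outputs, atol=1e-3):
    """Return the index of the first intermediate output that differs, or None."""
    for i, (a, b) in enumerate(zip(original_outputs, ported_outputs)):
        if a.shape != b.shape or not torch.allclose(a, b, atol=atol):
            return i
    return None


torch.manual_seed(0)
reference = [torch.randn(1, 9, 8) for _ in range(4)]  # stand-in for original outputs
ported = [t.clone() for t in reference]
ported[2] += 1.0  # simulate a bug introduced in layer 2

print(first_divergence(reference, ported))
```

The returned index points you directly at the first layer to inspect side-by-side.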
From our experience, a simple and efficient way is to add many print statements in both the original implementation and the ๐Ÿค— Transformers implementation, at the same positions in the network respectively, and to successively remove print statements showing the same values for intermediate representations.

When you're confident that both implementations yield the same output, verify the outputs with `torch.allclose(original_output, output, atol=1e-3)` - then you're done with the most difficult part! Congratulations - the work left to be done should be a cakewalk ๐Ÿ˜Š.

**8. Adding all necessary model tests**

At this point, you have successfully added a new model. However, it is very much possible that the model does not yet fully comply with the required design. To make sure the implementation is fully compatible with ๐Ÿค— Transformers, all common tests should pass. The Cookiecutter should have automatically added a test file for your model, probably under `tests/models/brand_new_bert/test_modeling_brand_new_bert.py`. Run this test file to verify that all common tests pass:

```bash
pytest tests/models/brand_new_bert/test_modeling_brand_new_bert.py
```

Having fixed all common tests, it is now crucial to ensure that all the nice work you have done is well tested, so that

- a) The community can easily understand your work by looking at specific tests of *brand_new_bert*
- b) Future changes to your model will not break any important feature of the model.

At first, integration tests should be added. Those integration tests essentially do the same as the debugging scripts you used earlier to implement the model in ๐Ÿค— Transformers. A template of those model tests has already been added by the Cookiecutter, called `BrandNewBertModelIntegrationTests`, and only has to be filled out by you.
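Such an integration test always has the same shape: run the converted model on a hard-coded input and compare a slice of the output, with an `atol` of `1e-3`, against values copied from the verified original implementation. A sketch of the pattern with a toy model standing in for *brand_new_bert* (class and variable names here are illustrative, not the real test API):

```python
import unittest

import torch
from torch import nn

torch.manual_seed(0)
toy_model = nn.Linear(4, 4).eval()  # stand-in for the converted model

# In a real test this slice is hard-coded, copied from the verified original model
with torch.no_grad():
    EXPECTED_SLICE = toy_model(torch.ones(1, 4))[0, :3].clone()


class ToyModelIntegrationTest(unittest.TestCase):
    def test_inference(self):
        with torch.no_grad():
            output = toy_model(torch.ones(1, 4))
        # Check shape first, then values up to the 1e-3 tolerance
        self.assertEqual(output.shape, (1, 4))
        self.assertTrue(torch.allclose(output[0, :3], EXPECTED_SLICE, atol=1e-3))


result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(ToyModelIntegrationTest)
)
```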
To ensure that those tests are passing, run

```bash
RUN_SLOW=1 pytest -sv tests/models/brand_new_bert/test_modeling_brand_new_bert.py::BrandNewBertModelIntegrationTests
```

<Tip>

In case you are using Windows, you should replace `RUN_SLOW=1` with `SET RUN_SLOW=1`

</Tip>

Second, all features that are special to *brand_new_bert* should be tested additionally in a separate test under `BrandNewBertModelTester`/`BrandNewBertModelTest`. This part is often forgotten but is extremely useful in two ways:

- It helps to transfer the knowledge you have acquired during the model addition to the community by showing how the special features of *brand_new_bert* should work.
- Future contributors can quickly test changes to the model by running those special tests.

**9. Implement the tokenizer**

Next, we should add the tokenizer of *brand_new_bert*. Usually, the tokenizer is equivalent to or very similar to an already existing tokenizer of ๐Ÿค— Transformers.

It is very important to find/extract the original tokenizer file and to manage to load this file into the ๐Ÿค— Transformers' implementation of the tokenizer.

To ensure that the tokenizer works correctly, it is recommended to first create a script in the original repository that inputs a string and returns the `input_ids`. It could look similar to this (in pseudo-code):

```python
input_str = "This is a long example input string containing special characters .$?-, numbers 2872 234 12 and words."
model = BrandNewBertModel.load_pretrained_checkpoint("/path/to/checkpoint/")
input_ids = model.tokenize(input_str)
```

You might have to take a deeper look again into the original repository to find the correct tokenizer function or you might even have to do changes to your clone of the original repository to only output the `input_ids`. Having written a functional tokenization script that uses the original repository, an analogous script for ๐Ÿค— Transformers should be created.
It should look similar to this:

```python
from transformers import BrandNewBertTokenizer

input_str = "This is a long example input string containing special characters .$?-, numbers 2872 234 12 and words."

tokenizer = BrandNewBertTokenizer.from_pretrained("/path/to/tokenizer/folder/")

input_ids = tokenizer(input_str).input_ids
```

When both `input_ids` yield the same values, as a final step a tokenizer test file should also be added.

Analogous to the modeling test files of *brand_new_bert*, the tokenization test files of *brand_new_bert* should contain a couple of hard-coded integration tests.

**10. Run End-to-end integration tests**

Having added the tokenizer, you should also add a couple of end-to-end integration tests using both the model and the tokenizer to `tests/models/brand_new_bert/test_modeling_brand_new_bert.py` in ๐Ÿค— Transformers. Such a test should show on a meaningful text-to-text sample that the ๐Ÿค— Transformers implementation works as expected. A meaningful text-to-text sample can include *e.g.* a source-to-target-translation pair, an article-to-summary pair, a question-to-answer pair, etcโ€ฆ If none of the ported checkpoints has been fine-tuned on a downstream task it is enough to simply rely on the model tests. In a final step to ensure that the model is fully functional, it is advised that you also run all tests on GPU. It can happen that you forgot to add some `.to(self.device)` statements to internal tensors of the model, which in such a test would show up as an error. In case you have no access to a GPU, the Hugging Face team can take care of running those tests for you.

**11. Add Docstring**

Now, all the necessary functionality for *brand_new_bert* is added - you're almost done! The only thing left to add is a nice docstring and a doc page. The Cookiecutter should have added a template file called `docs/source/model_doc/brand_new_bert.md` that you should fill out.
Users of your model will usually first look at this page before using your model. Hence, the documentation must be understandable and concise. It is very useful for the community to add some *Tips* to show how the model should be used. Don't hesitate to ping the Hugging Face team regarding the docstrings.

Next, make sure that the docstring added to `src/transformers/models/brand_new_bert/modeling_brand_new_bert.py` is correct and includes all necessary inputs and outputs. We have a detailed guide about writing documentation and our docstring format [here](writing-documentation). It is always good to remind oneself that documentation should be treated at least as carefully as the code in ๐Ÿค— Transformers since the documentation is usually the first contact point of the community with the model.

**Code refactor**

Great, now you have added all the necessary code for *brand_new_bert*. At this point, you should correct some potential incorrect code style by running:

```bash
make style
```

and verify that your coding style passes the quality check:

```bash
make quality
```

There are a couple of other very strict design tests in ๐Ÿค— Transformers that might still be failing, which show up in the tests of your pull request. This is often because of some missing information in the docstring or some incorrect naming. The Hugging Face team will surely help you if you're stuck here.

Lastly, it is always a good idea to refactor one's code after having ensured that the code works correctly. With all tests passing, now it's a good time to go over the added code again and do some refactoring.

You have now finished the coding part, congratulations! ๐ŸŽ‰ You are Awesome! ๐Ÿ˜Ž

**12. Upload the models to the model hub**

In this final part, you should convert and upload all checkpoints to the model hub and add a model card for each uploaded model checkpoint. You can get familiar with the hub functionalities by reading our [Model sharing and uploading Page](model_sharing).
You should work alongside the Hugging Face team here to decide on a fitting name for each checkpoint and to get the required access rights to be able to upload the model under the author's organization of *brand_new_bert*. The `push_to_hub` method, present in all models in `transformers`, is a quick and efficient way to push your checkpoint to the hub. A little snippet is pasted below:

```python
brand_new_bert.push_to_hub("brand_new_bert")
# Uncomment the following line to push to an organization.
# brand_new_bert.push_to_hub("<organization>/brand_new_bert")
```

It is worth spending some time to create fitting model cards for each checkpoint. The model cards should highlight the specific characteristics of each particular checkpoint, *e.g.*: On which dataset was the checkpoint pretrained/fine-tuned? On which downstream task should the model be used? They should also include some code on how to correctly use the model.

**13. (Optional) Add notebook**

It is very helpful to add a notebook that showcases in detail how *brand_new_bert* can be used for inference and/or fine-tuned on a downstream task. This is not mandatory to merge your PR, but very useful for the community.

**14. Submit your finished PR**

You're done programming now and can move to the last step, which is getting your PR merged into main. Usually, the Hugging Face team should have helped you already at this point, but it is worth taking some time to give your finished PR a nice description and eventually add comments to your code, if you want to point out certain design choices to your reviewer.

### Share your work!!

Now, it's time to get some credit from the community for your work! Having completed a model addition is a major contribution to Transformers and the whole NLP community. Your code and the ported pre-trained models will certainly be used by hundreds and possibly even thousands of developers and researchers. You should be proud of your work and share your achievements with the community.
**You have made another model that is super easy to access for everyone in the community! ๐Ÿคฏ**
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

โš ๏ธ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Train with a script

Along with the ๐Ÿค— Transformers [notebooks](./notebooks/README), there are also example scripts demonstrating how to train a model for a task with [PyTorch](https://github.com/huggingface/transformers/tree/main/examples/pytorch), [TensorFlow](https://github.com/huggingface/transformers/tree/main/examples/tensorflow), or [JAX/Flax](https://github.com/huggingface/transformers/tree/main/examples/flax).

You will also find scripts we've used in our [research projects](https://github.com/huggingface/transformers/tree/main/examples/research_projects) and [legacy examples](https://github.com/huggingface/transformers/tree/main/examples/legacy) which are mostly community contributed. These scripts are not actively maintained and require a specific version of ๐Ÿค— Transformers that will most likely be incompatible with the latest version of the library.

The example scripts are not expected to work out-of-the-box on every problem, and you may need to adapt the script to the problem you're trying to solve. To help you with this, most of the scripts fully expose how data is preprocessed, allowing you to edit it as necessary for your use case.
For any feature you'd like to implement in an example script, please discuss it on the [forum](https://discuss.huggingface.co/) or in an [issue](https://github.com/huggingface/transformers/issues) before submitting a Pull Request. While we welcome bug fixes, it is unlikely we will merge a Pull Request that adds more functionality at the cost of readability. This guide will show you how to run an example summarization training script in [PyTorch](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization) and [TensorFlow](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/summarization). All examples are expected to work with both frameworks unless otherwise specified. ## Setup To successfully run the latest version of the example scripts, you have to **install ๐Ÿค— Transformers from source** in a new virtual environment: ```bash git clone https://github.com/huggingface/transformers cd transformers pip install . ``` For older versions of the example scripts, click on the toggle below: <details> <summary>Examples for older versions of ๐Ÿค— Transformers</summary> <ul> <li><a href="https://github.com/huggingface/transformers/tree/v4.5.1/examples">v4.5.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.4.2/examples">v4.4.2</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.3.3/examples">v4.3.3</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.2.2/examples">v4.2.2</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.1.1/examples">v4.1.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v4.0.1/examples">v4.0.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.5.1/examples">v3.5.1</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.4.0/examples">v3.4.0</a></li> <li><a href="https://github.com/huggingface/transformers/tree/v3.3.1/examples">v3.3.1</a></li> <li><a 
href="https://github.com/huggingface/transformers/tree/v3.2.0/examples">v3.2.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v3.1.0/examples">v3.1.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v3.0.2/examples">v3.0.2</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.11.0/examples">v2.11.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.10.0/examples">v2.10.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.9.1/examples">v2.9.1</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.8.0/examples">v2.8.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.7.0/examples">v2.7.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.6.0/examples">v2.6.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.5.1/examples">v2.5.1</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.4.0/examples">v2.4.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.3.0/examples">v2.3.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.2.0/examples">v2.2.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.1.0/examples">v2.1.1</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v2.0.0/examples">v2.0.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v1.2.0/examples">v1.2.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v1.1.0/examples">v1.1.0</a></li>
<li><a href="https://github.com/huggingface/transformers/tree/v1.0.0/examples">v1.0.0</a></li>
</ul>
</details>

Then switch your current clone of 🤗 Transformers to a specific version, like v3.5.1 for example:

```bash
git checkout tags/v3.5.1
```

After you've set up the correct library version, navigate to the example folder of your choice and install the example-specific
requirements:

```bash
pip install -r requirements.txt
```

## Run a script

<frameworkcontent>
<pt>
The example script downloads and preprocesses a dataset from the 🤗 [Datasets](https://huggingface.co/docs/datasets/) library. Then the script fine-tunes a model on the dataset with the [Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) using an architecture that supports summarization. The following example shows how to fine-tune [T5-small](https://huggingface.co/t5-small) on the [CNN/DailyMail](https://huggingface.co/datasets/cnn_dailymail) dataset. The T5 model requires an additional `source_prefix` argument due to how it was trained. This prompt lets T5 know this is a summarization task.

```bash
python examples/pytorch/summarization/run_summarization.py \
    --model_name_or_path t5-small \
    --do_train \
    --do_eval \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size=4 \
    --per_device_eval_batch_size=4 \
    --overwrite_output_dir \
    --predict_with_generate
```
</pt>
<tf>
The example script downloads and preprocesses a dataset from the 🤗 [Datasets](https://huggingface.co/docs/datasets/) library. Then the script fine-tunes a model on the dataset with Keras using an architecture that supports summarization. The following example shows how to fine-tune [T5-small](https://huggingface.co/t5-small) on the [CNN/DailyMail](https://huggingface.co/datasets/cnn_dailymail) dataset. The T5 model requires an additional `source_prefix` argument due to how it was trained. This prompt lets T5 know this is a summarization task.
```bash python examples/tensorflow/summarization/run_summarization.py \ --model_name_or_path t5-small \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size 8 \ --per_device_eval_batch_size 16 \ --num_train_epochs 3 \ --do_train \ --do_eval ``` </tf> </frameworkcontent> ## Distributed training and mixed precision The [Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) supports distributed training and mixed precision, which means you can also use it in a script. To enable both of these features: - Add the `fp16` argument to enable mixed precision. - Set the number of GPUs to use with the `nproc_per_node` argument. ```bash torchrun \ --nproc_per_node 8 pytorch/summarization/run_summarization.py \ --fp16 \ --model_name_or_path t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate ``` TensorFlow scripts utilize a [`MirroredStrategy`](https://www.tensorflow.org/guide/distributed_training#mirroredstrategy) for distributed training, and you don't need to add any additional arguments to the training script. The TensorFlow script will use multiple GPUs by default if they are available. ## Run a script on a TPU <frameworkcontent> <pt> Tensor Processing Units (TPUs) are specifically designed to accelerate performance. PyTorch supports TPUs with the [XLA](https://www.tensorflow.org/xla) deep learning compiler (see [here](https://github.com/pytorch/xla/blob/master/README.md) for more details). To use a TPU, launch the `xla_spawn.py` script and use the `num_cores` argument to set the number of TPU cores you want to use. 
```bash python xla_spawn.py --num_cores 8 \ summarization/run_summarization.py \ --model_name_or_path t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate ``` </pt> <tf> Tensor Processing Units (TPUs) are specifically designed to accelerate performance. TensorFlow scripts utilize a [`TPUStrategy`](https://www.tensorflow.org/guide/distributed_training#tpustrategy) for training on TPUs. To use a TPU, pass the name of the TPU resource to the `tpu` argument. ```bash python run_summarization.py \ --tpu name_of_tpu_resource \ --model_name_or_path t5-small \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size 8 \ --per_device_eval_batch_size 16 \ --num_train_epochs 3 \ --do_train \ --do_eval ``` </tf> </frameworkcontent> ## Run a script with ๐Ÿค— Accelerate ๐Ÿค— [Accelerate](https://huggingface.co/docs/accelerate) is a PyTorch-only library that offers a unified method for training a model on several types of setups (CPU-only, multiple GPUs, TPUs) while maintaining complete visibility into the PyTorch training loop. Make sure you have ๐Ÿค— Accelerate installed if you don't already have it: > Note: As Accelerate is rapidly developing, the git version of accelerate must be installed to run the scripts ```bash pip install git+https://github.com/huggingface/accelerate ``` Instead of the `run_summarization.py` script, you need to use the `run_summarization_no_trainer.py` script. ๐Ÿค— Accelerate supported scripts will have a `task_no_trainer.py` file in the folder. 
Begin by running the following command to create and save a configuration file: ```bash accelerate config ``` Test your setup to make sure it is configured correctly: ```bash accelerate test ``` Now you are ready to launch the training: ```bash accelerate launch run_summarization_no_trainer.py \ --model_name_or_path t5-small \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir ~/tmp/tst-summarization ``` ## Use a custom dataset The summarization script supports custom datasets as long as they are a CSV or JSON Line file. When you use your own dataset, you need to specify several additional arguments: - `train_file` and `validation_file` specify the path to your training and validation files. - `text_column` is the input text to summarize. - `summary_column` is the target text to output. A summarization script using a custom dataset would look like this: ```bash python examples/pytorch/summarization/run_summarization.py \ --model_name_or_path t5-small \ --do_train \ --do_eval \ --train_file path_to_csv_or_jsonlines_file \ --validation_file path_to_csv_or_jsonlines_file \ --text_column text_column_name \ --summary_column summary_column_name \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --overwrite_output_dir \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --predict_with_generate ``` ## Test a script It is often a good idea to run your script on a smaller number of dataset examples to ensure everything works as expected before committing to an entire dataset which may take hours to complete. 
Use the following arguments to truncate the dataset to a maximum number of samples: - `max_train_samples` - `max_eval_samples` - `max_predict_samples` ```bash python examples/pytorch/summarization/run_summarization.py \ --model_name_or_path t5-small \ --max_train_samples 50 \ --max_eval_samples 50 \ --max_predict_samples 50 \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate ``` Not all example scripts support the `max_predict_samples` argument. If you aren't sure whether your script supports this argument, add the `-h` argument to check: ```bash examples/pytorch/summarization/run_summarization.py -h ``` ## Resume training from checkpoint Another helpful option to enable is resuming training from a previous checkpoint. This will ensure you can pick up where you left off without starting over if your training gets interrupted. There are two methods to resume training from a checkpoint. The first method uses the `output_dir previous_output_dir` argument to resume training from the latest checkpoint stored in `output_dir`. In this case, you should remove `overwrite_output_dir`: ```bash python examples/pytorch/summarization/run_summarization.py --model_name_or_path t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --output_dir previous_output_dir \ --predict_with_generate ``` The second method uses the `resume_from_checkpoint path_to_specific_checkpoint` argument to resume training from a specific checkpoint folder. 
```bash python examples/pytorch/summarization/run_summarization.py --model_name_or_path t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --resume_from_checkpoint path_to_specific_checkpoint \ --predict_with_generate ``` ## Share your model All scripts can upload your final model to the [Model Hub](https://huggingface.co/models). Make sure you are logged into Hugging Face before you begin: ```bash huggingface-cli login ``` Then add the `push_to_hub` argument to the script. This argument will create a repository with your Hugging Face username and the folder name specified in `output_dir`. To give your repository a specific name, use the `push_to_hub_model_id` argument to add it. The repository will be automatically listed under your namespace. The following example shows how to upload a model with a specific repository name: ```bash python examples/pytorch/summarization/run_summarization.py --model_name_or_path t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --push_to_hub \ --push_to_hub_model_id finetuned-t5-cnn_dailymail \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate ```
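Finally, a note on the custom dataset support described earlier: the scripts expect CSV or JSON Lines files. If your data lives in another format, a few lines of standard-library Python are enough to produce a compatible JSON Lines file. The column names `text` and `summary` below are placeholders — they just need to match whatever you pass as `text_column` and `summary_column`:

```python
import json
import os
import tempfile

# Each line of a JSON Lines file is one standalone JSON object.
examples = [
    {"text": "The quick brown fox jumps over the lazy dog.", "summary": "Fox jumps dog."},
    {"text": "Transformers provides thousands of pretrained models.", "summary": "Many pretrained models."},
]

path = os.path.join(tempfile.gettempdir(), "train.json")
with open(path, "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")

# Reading it back, one JSON object per line:
with open(path, encoding="utf-8") as f:
    rows = [json.loads(line) for line in f]
print(rows[0]["summary"])  # Fox jumps dog.
```

You would then point `--train_file` (and a similarly produced `--validation_file`) at files like this one.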
hf_public_repos/transformers/docs/source/en/_config.py
# docstyle-ignore INSTALL_CONTENT = """ # Transformers installation ! pip install transformers datasets # To install from source instead of the last release, comment the command above and uncomment the following one. # ! pip install git+https://github.com/huggingface/transformers.git """ notebook_first_cells = [{"type": "code", "content": INSTALL_CONTENT}] black_avoid_patterns = { "{processor_class}": "FakeProcessorClass", "{model_class}": "FakeModelClass", "{object_class}": "FakeObjectClass", }
hf_public_repos/transformers/docs/source/en/task_summary.md
<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# What 🤗 Transformers can do

🤗 Transformers is a library of pretrained state-of-the-art models for natural language processing (NLP), computer vision, and audio and speech processing tasks. Not only does the library contain Transformer models, but it also has non-Transformer models like modern convolutional networks for computer vision tasks. If you look at some of the most popular consumer products today, like smartphones, apps, and televisions, odds are that some kind of deep learning technology is behind them. Want to remove a background object from a picture taken by your smartphone? This is an example of a panoptic segmentation task (don't worry if you don't know what this means yet, we'll describe it in the following sections!).

This page provides an overview of the different speech and audio, computer vision, and NLP tasks that can be solved with the 🤗 Transformers library in just three lines of code!

## Audio

Audio and speech processing tasks are a little different from the other modalities mainly because audio as an input is a continuous signal. Unlike text, a raw audio waveform can't be neatly split into discrete chunks the way a sentence can be divided into words.
To get around this, the raw audio signal is typically sampled at regular intervals. If you take more samples within an interval, the sampling rate is higher, and the audio more closely resembles the original audio source. Previous approaches preprocessed the audio to extract useful features from it. It is now more common to start audio and speech processing tasks by directly feeding the raw audio waveform to a feature encoder to extract an audio representation. This simplifies the preprocessing step and allows the model to learn the most essential features. ### Audio classification Audio classification is a task that labels audio data from a predefined set of classes. It is a broad category with many specific applications, some of which include: * acoustic scene classification: label audio with a scene label ("office", "beach", "stadium") * acoustic event detection: label audio with a sound event label ("car horn", "whale calling", "glass breaking") * tagging: label audio containing multiple sounds (birdsongs, speaker identification in a meeting) * music classification: label music with a genre label ("metal", "hip-hop", "country") ```py >>> from transformers import pipeline >>> classifier = pipeline(task="audio-classification", model="superb/hubert-base-superb-er") >>> preds = classifier("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac") >>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds] >>> preds [{'score': 0.4532, 'label': 'hap'}, {'score': 0.3622, 'label': 'sad'}, {'score': 0.0943, 'label': 'neu'}, {'score': 0.0903, 'label': 'ang'}] ``` ### Automatic speech recognition Automatic speech recognition (ASR) transcribes speech into text. It is one of the most common audio tasks due partly to speech being such a natural form of human communication. Today, ASR systems are embedded in "smart" technology products like speakers, phones, and cars. 
We can ask our virtual assistants to play music, set reminders, and tell us the weather. But one of the key challenges Transformer architectures have helped with is low-resource languages. Thanks to pretraining on large amounts of speech data, finetuning the model on only one hour of labeled speech data in a low-resource language can still produce high-quality results compared to previous ASR systems trained on 100x more labeled data.

```py
>>> from transformers import pipeline

>>> transcriber = pipeline(task="automatic-speech-recognition", model="openai/whisper-small")
>>> transcriber("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")
{'text': ' I have a dream that one day this nation will rise up and live out the true meaning of its creed.'}
```

## Computer vision

One of the earliest successful computer vision tasks was recognizing images of zip code numbers using a [convolutional neural network (CNN)](glossary#convolution). An image is composed of pixels, and each pixel has a numerical value. This makes it easy to represent an image as a matrix of pixel values. Each particular combination of pixel values describes the colors of an image.

Two general ways computer vision tasks can be solved are:

1. Use convolutions to learn the hierarchical features of an image from low-level features to high-level abstract things.
2. Split an image into patches and use a Transformer to gradually learn how the image patches relate to each other to form an image. Unlike the bottom-up approach favored by a CNN, this is kind of like starting out with a blurry image and then gradually bringing it into focus.

### Image classification

Image classification labels an entire image from a predefined set of classes.
Like most classification tasks, there are many practical use cases for image classification, some of which include: * healthcare: label medical images to detect disease or monitor patient health * environment: label satellite images to monitor deforestation, inform wildland management or detect wildfires * agriculture: label images of crops to monitor plant health or satellite images for land use monitoring * ecology: label images of animal or plant species to monitor wildlife populations or track endangered species ```py >>> from transformers import pipeline >>> classifier = pipeline(task="image-classification") >>> preds = classifier( ... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg" ... ) >>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds] >>> print(*preds, sep="\n") {'score': 0.4335, 'label': 'lynx, catamount'} {'score': 0.0348, 'label': 'cougar, puma, catamount, mountain lion, painter, panther, Felis concolor'} {'score': 0.0324, 'label': 'snow leopard, ounce, Panthera uncia'} {'score': 0.0239, 'label': 'Egyptian cat'} {'score': 0.0229, 'label': 'tiger cat'} ``` ### Object detection Unlike image classification, object detection identifies multiple objects within an image and the objects' positions in an image (defined by the bounding box). Some example applications of object detection include: * self-driving vehicles: detect everyday traffic objects such as other vehicles, pedestrians, and traffic lights * remote sensing: disaster monitoring, urban planning, and weather forecasting * defect detection: detect cracks or structural damage in buildings, and manufacturing defects ```py >>> from transformers import pipeline >>> detector = pipeline(task="object-detection") >>> preds = detector( ... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg" ... 
) >>> preds = [{"score": round(pred["score"], 4), "label": pred["label"], "box": pred["box"]} for pred in preds] >>> preds [{'score': 0.9865, 'label': 'cat', 'box': {'xmin': 178, 'ymin': 154, 'xmax': 882, 'ymax': 598}}] ``` ### Image segmentation Image segmentation is a pixel-level task that assigns every pixel in an image to a class. It differs from object detection, which uses bounding boxes to label and predict objects in an image because segmentation is more granular. Segmentation can detect objects at a pixel-level. There are several types of image segmentation: * instance segmentation: in addition to labeling the class of an object, it also labels each distinct instance of an object ("dog-1", "dog-2") * panoptic segmentation: a combination of semantic and instance segmentation; it labels each pixel with a semantic class **and** each distinct instance of an object Segmentation tasks are helpful in self-driving vehicles to create a pixel-level map of the world around them so they can navigate safely around pedestrians and other vehicles. It is also useful for medical imaging, where the task's finer granularity can help identify abnormal cells or organ features. Image segmentation can also be used in ecommerce to virtually try on clothes or create augmented reality experiences by overlaying objects in the real world through your camera. ```py >>> from transformers import pipeline >>> segmenter = pipeline(task="image-segmentation") >>> preds = segmenter( ... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg" ... ) >>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds] >>> print(*preds, sep="\n") {'score': 0.9879, 'label': 'LABEL_184'} {'score': 0.9973, 'label': 'snow'} {'score': 0.9972, 'label': 'cat'} ``` ### Depth estimation Depth estimation predicts the distance of each pixel in an image from the camera. 
This computer vision task is especially important for scene understanding and reconstruction. For example, in self-driving cars, vehicles need to understand how far objects like pedestrians, traffic signs, and other vehicles are to avoid obstacles and collisions. Depth information is also helpful for constructing 3D representations from 2D images and can be used to create high-quality 3D representations of biological structures or buildings.

There are two approaches to depth estimation:

* stereo: depths are estimated by comparing two images of the same scene taken from slightly different angles
* monocular: depths are estimated from a single image

```py
>>> from transformers import pipeline

>>> depth_estimator = pipeline(task="depth-estimation")
>>> preds = depth_estimator(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
... )
```

## Natural language processing

NLP tasks are among the most common types of tasks because text is such a natural way for us to communicate. To get text into a format recognized by a model, it needs to be tokenized. This means dividing a sequence of text into separate words or subwords (tokens) and then converting these tokens into numbers. As a result, you can represent a sequence of text as a sequence of numbers, and once you have a sequence of numbers, it can be input into a model to solve all sorts of NLP tasks!

### Text classification

Like classification tasks in any modality, text classification labels a sequence of text (it can be sentence-level, a paragraph, or a document) from a predefined set of classes.
There are many practical applications for text classification, some of which include: * sentiment analysis: label text according to some polarity like `positive` or `negative` which can inform and support decision-making in fields like politics, finance, and marketing * content classification: label text according to some topic to help organize and filter information in news and social media feeds (`weather`, `sports`, `finance`, etc.) ```py >>> from transformers import pipeline >>> classifier = pipeline(task="sentiment-analysis") >>> preds = classifier("Hugging Face is the best thing since sliced bread!") >>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds] >>> preds [{'score': 0.9991, 'label': 'POSITIVE'}] ``` ### Token classification In any NLP task, text is preprocessed by separating the sequence of text into individual words or subwords. These are known as [tokens](/glossary#token). Token classification assigns each token a label from a predefined set of classes. Two common types of token classification are: * named entity recognition (NER): label a token according to an entity category like organization, person, location or date. NER is especially popular in biomedical settings, where it can label genes, proteins, and drug names. * part-of-speech tagging (POS): label a token according to its part-of-speech like noun, verb, or adjective. POS is useful for helping translation systems understand how two identical words are grammatically different (bank as a noun versus bank as a verb). ```py >>> from transformers import pipeline >>> classifier = pipeline(task="ner") >>> preds = classifier("Hugging Face is a French company based in New York City.") >>> preds = [ ... { ... "entity": pred["entity"], ... "score": round(pred["score"], 4), ... "index": pred["index"], ... "word": pred["word"], ... "start": pred["start"], ... "end": pred["end"], ... } ... for pred in preds ... 
] >>> print(*preds, sep="\n") {'entity': 'I-ORG', 'score': 0.9968, 'index': 1, 'word': 'Hu', 'start': 0, 'end': 2} {'entity': 'I-ORG', 'score': 0.9293, 'index': 2, 'word': '##gging', 'start': 2, 'end': 7} {'entity': 'I-ORG', 'score': 0.9763, 'index': 3, 'word': 'Face', 'start': 8, 'end': 12} {'entity': 'I-MISC', 'score': 0.9983, 'index': 6, 'word': 'French', 'start': 18, 'end': 24} {'entity': 'I-LOC', 'score': 0.999, 'index': 10, 'word': 'New', 'start': 42, 'end': 45} {'entity': 'I-LOC', 'score': 0.9987, 'index': 11, 'word': 'York', 'start': 46, 'end': 50} {'entity': 'I-LOC', 'score': 0.9992, 'index': 12, 'word': 'City', 'start': 51, 'end': 55} ``` ### Question answering Question answering is another token-level task that returns an answer to a question, sometimes with context (open-domain) and other times without context (closed-domain). This task happens whenever we ask a virtual assistant something like whether a restaurant is open. It can also provide customer or technical support and help search engines retrieve the relevant information you're asking for. There are two common types of question answering: * extractive: given a question and some context, the answer is a span of text from the context the model must extract * abstractive: given a question and some context, the answer is generated from the context; this approach is handled by the [`Text2TextGenerationPipeline`] instead of the [`QuestionAnsweringPipeline`] shown below ```py >>> from transformers import pipeline >>> question_answerer = pipeline(task="question-answering") >>> preds = question_answerer( ... question="What is the name of the repository?", ... context="The name of the repository is huggingface/transformers", ... ) >>> print( ... f"score: {round(preds['score'], 4)}, start: {preds['start']}, end: {preds['end']}, answer: {preds['answer']}" ... 
) score: 0.9327, start: 30, end: 54, answer: huggingface/transformers ``` ### Summarization Summarization creates a shorter version of a text from a longer one while trying to preserve most of the meaning of the original document. Summarization is a sequence-to-sequence task; it outputs a shorter text sequence than the input. There are a lot of long-form documents that can be summarized to help readers quickly understand the main points. Legislative bills, legal and financial documents, patents, and scientific papers are a few examples of documents that could be summarized to save readers time and serve as a reading aid. Like question answering, there are two types of summarization: * extractive: identify and extract the most important sentences from the original text * abstractive: generate the target summary (which may include new words not in the input document) from the original text; the [`SummarizationPipeline`] uses the abstractive approach ```py >>> from transformers import pipeline >>> summarizer = pipeline(task="summarization") >>> summarizer( ... "In this work, we presented the Transformer, the first sequence transduction model based entirely on attention, replacing the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention. For translation tasks, the Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers. On both WMT 2014 English-to-German and WMT 2014 English-to-French translation tasks, we achieve a new state of the art. In the former task our best model outperforms even all previously reported ensembles." ... ) [{'summary_text': ' The Transformer is the first sequence transduction model based entirely on attention . It replaces the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention . 
For translation tasks, the Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers .'}] ``` ### Translation Translation converts a sequence of text in one language to another. It is important in helping people from different backgrounds communicate with each other, help translate content to reach wider audiences, and even be a learning tool to help people learn a new language. Along with summarization, translation is a sequence-to-sequence task, meaning the model receives an input sequence and returns a target output sequence. In the early days, translation models were mostly monolingual, but recently, there has been increasing interest in multilingual models that can translate between many pairs of languages. ```py >>> from transformers import pipeline >>> text = "translate English to French: Hugging Face is a community-based open-source platform for machine learning." >>> translator = pipeline(task="translation", model="t5-small") >>> translator(text) [{'translation_text': "Hugging Face est une tribune communautaire de l'apprentissage des machines."}] ``` ### Language modeling Language modeling is a task that predicts a word in a sequence of text. It has become a very popular NLP task because a pretrained language model can be finetuned for many other downstream tasks. Lately, there has been a lot of interest in large language models (LLMs) which demonstrate zero- or few-shot learning. This means the model can solve tasks it wasn't explicitly trained to do! Language models can be used to generate fluent and convincing text, though you need to be careful since the text may not always be accurate. There are two types of language modeling: * causal: the model's objective is to predict the next token in a sequence, and future tokens are masked ```py >>> from transformers import pipeline >>> prompt = "Hugging Face is a community-based open-source platform for machine learning." 
>>> generator = pipeline(task="text-generation") >>> generator(prompt) # doctest: +SKIP ``` * masked: the model's objective is to predict a masked token in a sequence with full access to the tokens in the sequence ```py >>> text = "Hugging Face is a community-based open-source <mask> for machine learning." >>> fill_mask = pipeline(task="fill-mask") >>> preds = fill_mask(text, top_k=1) >>> preds = [ ... { ... "score": round(pred["score"], 4), ... "token": pred["token"], ... "token_str": pred["token_str"], ... "sequence": pred["sequence"], ... } ... for pred in preds ... ] >>> preds [{'score': 0.2236, 'token': 1761, 'token_str': ' platform', 'sequence': 'Hugging Face is a community-based open-source platform for machine learning.'}] ``` ## Multimodal Multimodal tasks require a model to process multiple data modalities (text, image, audio, video) to solve a particular problem. Image captioning is an example of a multimodal task where the model takes an image as input and outputs a sequence of text describing the image or some properties of the image. Although multimodal models work with different data types or modalities, internally, the preprocessing steps help the model convert all the data types into embeddings (vectors or list of numbers that holds meaningful information about the data). For a task like image captioning, the model learns relationships between image embeddings and text embeddings. ### Document question answering Document question answering is a task that answers natural language questions from a document. Unlike a token-level question answering task which takes text as input, document question answering takes an image of a document as input along with a question about the document and returns an answer. Document question answering can be used to parse structured documents and extract key information from it. In the example below, the total amount and change due can be extracted from a receipt. 
```py >>> from transformers import pipeline >>> from PIL import Image >>> import requests >>> url = "https://datasets-server.huggingface.co/assets/hf-internal-testing/example-documents/--/hf-internal-testing--example-documents/test/2/image/image.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> doc_question_answerer = pipeline("document-question-answering", model="magorshunov/layoutlm-invoices") >>> preds = doc_question_answerer( ... question="What is the total amount?", ... image=image, ... ) >>> preds [{'score': 0.8531, 'answer': '17,000', 'start': 4, 'end': 4}] ``` Hopefully, this page has given you some more background information about all the types of tasks in each modality and the practical importance of each one. In the next [section](tasks_explained), you'll learn **how** ๐Ÿค— Transformers work to solve these tasks.
# Export to ONNX

Deploying ๐Ÿค— Transformers models in production environments often requires, or can benefit from, exporting the models into a serialized format that can be loaded and executed on specialized runtimes and hardware.

๐Ÿค— Optimum is an extension of Transformers that enables exporting models from PyTorch or TensorFlow to serialized formats such as ONNX and TFLite through its `exporters` module. ๐Ÿค— Optimum also provides a set of performance optimization tools to train and run models on targeted hardware with maximum efficiency.

This guide demonstrates how you can export ๐Ÿค— Transformers models to ONNX with ๐Ÿค— Optimum. For the guide on exporting models to TFLite, please refer to the [Export to TFLite page](tflite).

## Export to ONNX

[ONNX (Open Neural Network eXchange)](http://onnx.ai) is an open standard that defines a common set of operators and a common file format to represent deep learning models in a wide variety of frameworks, including PyTorch and TensorFlow. When a model is exported to the ONNX format, these operators are used to construct a computational graph (often called an _intermediate representation_) which represents the flow of data through the neural network.
By exposing a graph with standardized operators and data types, ONNX makes it easy to switch between frameworks. For example, a model trained in PyTorch can be exported to ONNX format and then imported in TensorFlow (and vice versa).

Once exported to ONNX format, a model can be:
- optimized for inference via techniques such as [graph optimization](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/optimization) and [quantization](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/quantization).
- run with ONNX Runtime via [`ORTModelForXXX` classes](https://huggingface.co/docs/optimum/onnxruntime/package_reference/modeling_ort), which follow the same `AutoModel` API as the one you are used to in ๐Ÿค— Transformers.
- run with [optimized inference pipelines](https://huggingface.co/docs/optimum/main/en/onnxruntime/usage_guides/pipelines), which have the same API as the [`pipeline`] function in ๐Ÿค— Transformers.

๐Ÿค— Optimum provides support for the ONNX export by leveraging configuration objects. These configuration objects come ready-made for a number of model architectures, and are designed to be easily extendable to other architectures. For the list of ready-made configurations, please refer to the [๐Ÿค— Optimum documentation](https://huggingface.co/docs/optimum/exporters/onnx/overview).

There are two ways to export a ๐Ÿค— Transformers model to ONNX; here we show both:

- export with ๐Ÿค— Optimum via CLI.
- export with ๐Ÿค— Optimum with `optimum.onnxruntime`.
### Exporting a ๐Ÿค— Transformers model to ONNX with CLI

To export a ๐Ÿค— Transformers model to ONNX, first install an extra dependency:

```bash
pip install optimum[exporters]
```

To check out all available arguments, refer to the [๐Ÿค— Optimum docs](https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/export_a_model#exporting-a-model-to-onnx-using-the-cli), or view the help in the command line:

```bash
optimum-cli export onnx --help
```

To export a model checkpoint from the ๐Ÿค— Hub, for example, `distilbert-base-uncased-distilled-squad`, run the following command:

```bash
optimum-cli export onnx --model distilbert-base-uncased-distilled-squad distilbert_base_uncased_squad_onnx/
```

You should see logs indicating progress and showing where the resulting `model.onnx` is saved, like this:

```bash
Validating ONNX model distilbert_base_uncased_squad_onnx/model.onnx...
	-[โœ“] ONNX model output names match reference model (start_logits, end_logits)
	- Validating ONNX Model output "start_logits":
		-[โœ“] (2, 16) matches (2, 16)
		-[โœ“] all values close (atol: 0.0001)
	- Validating ONNX Model output "end_logits":
		-[โœ“] (2, 16) matches (2, 16)
		-[โœ“] all values close (atol: 0.0001)
The ONNX export succeeded and the exported model was saved at: distilbert_base_uncased_squad_onnx
```

The example above illustrates exporting a checkpoint from the ๐Ÿค— Hub. When exporting a local model, first make sure that you saved both the model's weights and tokenizer files in the same directory (`local_path`). When using the CLI, pass the `local_path` to the `model` argument instead of the checkpoint name on the ๐Ÿค— Hub and provide the `--task` argument. You can review the list of supported tasks in the [๐Ÿค— Optimum documentation](https://huggingface.co/docs/optimum/exporters/task_manager). If the `task` argument is not provided, it will default to the model architecture without any task-specific head.
```bash
optimum-cli export onnx --model local_path --task question-answering distilbert_base_uncased_squad_onnx/
```

The resulting `model.onnx` file can then be run on one of the [many accelerators](https://onnx.ai/supported-tools.html#deployModel) that support the ONNX standard. For example, we can load and run the model with [ONNX Runtime](https://onnxruntime.ai/) as follows:

```python
>>> from transformers import AutoTokenizer
>>> from optimum.onnxruntime import ORTModelForQuestionAnswering

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert_base_uncased_squad_onnx")
>>> model = ORTModelForQuestionAnswering.from_pretrained("distilbert_base_uncased_squad_onnx")
>>> inputs = tokenizer("What am I using?", "Using DistilBERT with ONNX Runtime!", return_tensors="pt")
>>> outputs = model(**inputs)
```

The process is identical for TensorFlow checkpoints on the Hub. For instance, here's how you would export a pure TensorFlow checkpoint from the [Keras organization](https://huggingface.co/keras-io):

```bash
optimum-cli export onnx --model keras-io/transformers-qa distilbert_base_cased_squad_onnx/
```

### Exporting a ๐Ÿค— Transformers model to ONNX with `optimum.onnxruntime`

As an alternative to the CLI, you can export a ๐Ÿค— Transformers model to ONNX programmatically like so:

```python
>>> from optimum.onnxruntime import ORTModelForSequenceClassification
>>> from transformers import AutoTokenizer

>>> model_checkpoint = "distilbert-base-uncased-distilled-squad"
>>> save_directory = "onnx/"

>>> # Load a model from transformers and export it to ONNX
>>> ort_model = ORTModelForSequenceClassification.from_pretrained(model_checkpoint, export=True)
>>> tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)

>>> # Save the onnx model and tokenizer
>>> ort_model.save_pretrained(save_directory)
>>> tokenizer.save_pretrained(save_directory)
```

### Exporting a model for an unsupported architecture

If you wish to contribute by adding support for a model that cannot currently be exported, you
should first check if it is supported in [`optimum.exporters.onnx`](https://huggingface.co/docs/optimum/exporters/onnx/overview), and if it is not, [contribute to ๐Ÿค— Optimum](https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/contribute) directly.

### Exporting a model with `transformers.onnx`

<Tip warning={true}>

`transformers.onnx` is no longer maintained, please export models with ๐Ÿค— Optimum as described above. This section will be removed in future versions.

</Tip>

To export a ๐Ÿค— Transformers model to ONNX with `transformers.onnx`, install extra dependencies:

```bash
pip install transformers[onnx]
```

Use the `transformers.onnx` package as a Python module to export a checkpoint using a ready-made configuration:

```bash
python -m transformers.onnx --model=distilbert-base-uncased onnx/
```

This exports an ONNX graph of the checkpoint defined by the `--model` argument. Pass any checkpoint on the ๐Ÿค— Hub or one that's stored locally. The resulting `model.onnx` file can then be run on one of the many accelerators that support the ONNX standard. For example, load and run the model with ONNX Runtime as follows:

```python
>>> from transformers import AutoTokenizer
>>> from onnxruntime import InferenceSession

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
>>> session = InferenceSession("onnx/model.onnx")
>>> # ONNX Runtime expects NumPy arrays as input
>>> inputs = tokenizer("Using DistilBERT with ONNX Runtime!", return_tensors="np")
>>> outputs = session.run(output_names=["last_hidden_state"], input_feed=dict(inputs))
```

The required output names (like `["last_hidden_state"]`) can be obtained by taking a look at the ONNX configuration of each model.
For example, for DistilBERT we have:

```python
>>> from transformers.models.distilbert import DistilBertConfig, DistilBertOnnxConfig

>>> config = DistilBertConfig()
>>> onnx_config = DistilBertOnnxConfig(config)
>>> print(list(onnx_config.outputs.keys()))
['last_hidden_state']
```

The process is identical for TensorFlow checkpoints on the Hub. For example, export a pure TensorFlow checkpoint like so:

```bash
python -m transformers.onnx --model=keras-io/transformers-qa onnx/
```

To export a model that's stored locally, save the model's weights and tokenizer files in the same directory (e.g. `local-pt-checkpoint`), then export it to ONNX by pointing the `--model` argument of the `transformers.onnx` package to the desired directory:

```bash
python -m transformers.onnx --model=local-pt-checkpoint onnx/
```
# Glossary

This glossary defines general machine learning and ๐Ÿค— Transformers terms to help you better understand the documentation.

## A

### attention mask

The attention mask is an optional argument used when batching sequences together.

<Youtube id="M6adb1j2jPI"/>

This argument indicates to the model which tokens should be attended to, and which should not.

For example, consider these two sequences:

```python
>>> from transformers import BertTokenizer

>>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased")

>>> sequence_a = "This is a short sequence."
>>> sequence_b = "This is a rather long sequence. It is at least longer than the sequence A."

>>> encoded_sequence_a = tokenizer(sequence_a)["input_ids"]
>>> encoded_sequence_b = tokenizer(sequence_b)["input_ids"]
```

The encoded versions have different lengths:

```python
>>> len(encoded_sequence_a), len(encoded_sequence_b)
(8, 19)
```

Therefore, we can't put them together in the same tensor as-is. The first sequence needs to be padded up to the length of the second one, or the second one needs to be truncated down to the length of the first one.

In the first case, the list of IDs will be extended by the padding indices.
We can pass a list to the tokenizer and ask it to pad like this:

```python
>>> padded_sequences = tokenizer([sequence_a, sequence_b], padding=True)
```

We can see that 0s have been added on the right of the first sentence to make it the same length as the second one:

```python
>>> padded_sequences["input_ids"]
[[101, 1188, 1110, 170, 1603, 4954, 119, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [101, 1188, 1110, 170, 1897, 1263, 4954, 119, 1135, 1110, 1120, 1655, 2039, 1190, 1103, 4954, 138, 119, 102]]
```

This can then be converted into a tensor in PyTorch or TensorFlow. The attention mask is a binary tensor indicating the position of the padded indices so that the model does not attend to them. For the [`BertTokenizer`], `1` indicates a value that should be attended to, while `0` indicates a padded value. This attention mask is in the dictionary returned by the tokenizer under the key "attention_mask":

```python
>>> padded_sequences["attention_mask"]
[[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]
```

### autoencoding models

See [encoder models](#encoder-models) and [masked language modeling](#masked-language-modeling-mlm)

### autoregressive models

See [causal language modeling](#causal-language-modeling) and [decoder models](#decoder-models)

## B

### backbone

The backbone is the network (embeddings and layers) that outputs the raw hidden states or features. It is usually connected to a [head](#head) which accepts the features as its input to make a prediction. For example, [`ViTModel`] is a backbone without a specific head on top. Other models can also use [`ViTModel`] as a backbone such as [DPT](model_doc/dpt).

## C

### causal language modeling

A pretraining task where the model reads the texts in order and has to predict the next word. It's usually done by reading the whole sentence but using a mask inside the model to hide the future tokens at a certain timestep.
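The future-token masking described above can be pictured as a lower-triangular matrix. The sketch below is a toy illustration (the `causal_mask` helper is invented for this example, not a Transformers API); in practice, models typically apply the mask by adding a large negative value to the attention scores of masked positions before the softmax:

```python
# Toy sketch of a causal attention mask: position i may only attend to
# positions j <= i, so all future positions are masked out.
def causal_mask(seq_length):
    """Return a seq_length x seq_length binary mask (1 = may attend)."""
    return [[1 if j <= i else 0 for j in range(seq_length)] for i in range(seq_length)]

for row in causal_mask(4):
    print(row)
```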
### channel Color images are made up of some combination of values in three channels - red, green, and blue (RGB) - and grayscale images only have one channel. In ๐Ÿค— Transformers, the channel can be the first or last dimension of an image's tensor: [`n_channels`, `height`, `width`] or [`height`, `width`, `n_channels`]. ### connectionist temporal classification (CTC) An algorithm which allows a model to learn without knowing exactly how the input and output are aligned; CTC calculates the distribution of all possible outputs for a given input and chooses the most likely output from it. CTC is commonly used in speech recognition tasks because speech doesn't always cleanly align with the transcript for a variety of reasons such as a speaker's different speech rates. ### convolution A type of layer in a neural network where the input matrix is multiplied element-wise by a smaller matrix (kernel or filter) and the values are summed up in a new matrix. This is known as a convolutional operation which is repeated over the entire input matrix. Each operation is applied to a different segment of the input matrix. Convolutional neural networks (CNNs) are commonly used in computer vision. ## D ### DataParallel (DP) Parallelism technique for training on multiple GPUs where the same setup is replicated multiple times, with each instance receiving a distinct data slice. The processing is done in parallel and all setups are synchronized at the end of each training step. Learn more about how DataParallel works [here](perf_train_gpu_many#dataparallel-vs-distributeddataparallel). ### decoder input IDs This input is specific to encoder-decoder models, and contains the input IDs that will be fed to the decoder. These inputs should be used for sequence to sequence tasks, such as translation or summarization, and are usually built in a way specific to each model. Most encoder-decoder models (BART, T5) create their `decoder_input_ids` on their own from the `labels`. 
In such models, passing the `labels` is the preferred way to handle training.

Please check each model's docs to see how they handle these input IDs for sequence to sequence training.

### decoder models

Also referred to as autoregressive models, decoder models involve a pretraining task (called causal language modeling) where the model reads the texts in order and has to predict the next word. It's usually done by reading the whole sentence with a mask to hide future tokens at a certain timestep.

<Youtube id="d_ixlCubqQw"/>

### deep learning (DL)

Machine learning algorithms which use neural networks with several layers.

## E

### encoder models

Also known as autoencoding models, encoder models take an input (such as text or images) and transform them into a condensed numerical representation called an embedding. Oftentimes, encoder models are pretrained using techniques like [masked language modeling](#masked-language-modeling-mlm), which masks parts of the input sequence and forces the model to create more meaningful representations.

<Youtube id="H39Z_720T5s"/>

## F

### feature extraction

The process of selecting and transforming raw data into a set of features that are more informative and useful for machine learning algorithms. Some examples of feature extraction include transforming raw text into word embeddings and extracting important features such as edges or shapes from image/video data.

### feed forward chunking

In each residual attention block in transformers the self-attention layer is usually followed by 2 feed forward layers. The intermediate embedding size of the feed forward layers is often bigger than the hidden size of the model (e.g., for `bert-base-uncased`).

For an input of size `[batch_size, sequence_length]`, the memory required to store the intermediate feed forward embeddings `[batch_size, sequence_length, config.intermediate_size]` can account for a large fraction of the memory use.
The authors of [Reformer: The Efficient Transformer](https://arxiv.org/abs/2001.04451) noticed that since the computation is independent of the `sequence_length` dimension, it is mathematically equivalent to compute the output embeddings of both feed forward layers `[batch_size, config.hidden_size]_0, ..., [batch_size, config.hidden_size]_n` individually and concatenate them afterward to `[batch_size, sequence_length, config.hidden_size]` with `n = sequence_length`, which trades increased computation time against reduced memory use, but yields a mathematically **equivalent** result.

For models employing the function [`apply_chunking_to_forward`], the `chunk_size` defines the number of output embeddings that are computed in parallel and thus defines the trade-off between memory and time complexity. If `chunk_size` is set to 0, no feed forward chunking is done.

### finetuned models

Finetuning is a form of transfer learning which involves taking a pretrained model, freezing its weights, and replacing the output layer with a newly added [model head](#head). The model head is trained on your target dataset.

See the [Fine-tune a pretrained model](https://huggingface.co/docs/transformers/training) tutorial for more details, and learn how to fine-tune models with ๐Ÿค— Transformers.

## H

### head

The model head refers to the last layer of a neural network that accepts the raw hidden states and projects them onto a different dimension. There is a different model head for each task. For example:

* [`GPT2ForSequenceClassification`] is a sequence classification head - a linear layer - on top of the base [`GPT2Model`].
* [`ViTForImageClassification`] is an image classification head - a linear layer on top of the final hidden state of the `CLS` token - on top of the base [`ViTModel`].
* [`Wav2Vec2ForCTC`] is a language modeling head with [CTC](#connectionist-temporal-classification-(CTC)) on top of the base [`Wav2Vec2Model`].
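As a rough illustration of what such a head does, the sketch below projects a single hidden-state vector onto two class logits with one linear layer. All dimensions, weights, and the `linear_head` helper are made up for this example; this is not code from any Transformers class:

```python
# Toy sketch of a classification head: one linear projection from the
# backbone's hidden size (4 here, chosen arbitrarily) to the label space (2 labels).
def linear_head(hidden_state, weight, bias):
    """Project a hidden_size vector onto a num_labels vector of logits."""
    return [
        sum(w * h for w, h in zip(row, hidden_state)) + b
        for row, b in zip(weight, bias)
    ]

hidden_state = [0.5, -1.0, 0.25, 2.0]  # final hidden state from a backbone
weight = [[0.1, 0.2, 0.3, 0.4], [0.4, 0.3, 0.2, 0.1]]  # num_labels x hidden_size
bias = [0.0, 0.0]

logits = linear_head(hidden_state, weight, bias)
print(logits)
```

In the [finetuned models](#finetuned-models) sense above, this projection is the newly added layer that gets trained on the target dataset.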
## I

### image patch

Vision-based Transformers models split an image into smaller patches which are linearly embedded, and then passed as a sequence to the model. You can find the `patch_size` - or resolution - of the model in its configuration.

### inference

Inference is the process of evaluating a model on new data after training is complete. See the [Pipeline for inference](https://huggingface.co/docs/transformers/pipeline_tutorial) tutorial to learn how to perform inference with ๐Ÿค— Transformers.

### input IDs

The input ids are often the only required parameters to be passed to the model as input. They are token indices, numerical representations of tokens building the sequences that will be used as input by the model.

<Youtube id="VFp38yj8h3A"/>

Each tokenizer works differently but the underlying mechanism remains the same. Here's an example using the BERT tokenizer, which is a [WordPiece](https://arxiv.org/pdf/1609.08144.pdf) tokenizer:

```python
>>> from transformers import BertTokenizer

>>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased")

>>> sequence = "A Titan RTX has 24GB of VRAM"
```

The tokenizer takes care of splitting the sequence into tokens available in the tokenizer vocabulary.

```python
>>> tokenized_sequence = tokenizer.tokenize(sequence)
```

The tokens are either words or subwords. Here for instance, "VRAM" wasn't in the model vocabulary, so it's been split into "V", "RA" and "M". To indicate those tokens are not separate words but parts of the same word, a double-hash prefix is added for "RA" and "M":

```python
>>> print(tokenized_sequence)
['A', 'Titan', 'R', '##T', '##X', 'has', '24', '##GB', 'of', 'V', '##RA', '##M']
```

These tokens can then be converted into IDs which are understandable by the model. This can be done by directly feeding the sentence to the tokenizer, which leverages the Rust implementation of [๐Ÿค— Tokenizers](https://github.com/huggingface/tokenizers) for peak performance.
```python >>> inputs = tokenizer(sequence) ``` The tokenizer returns a dictionary with all the arguments necessary for its corresponding model to work properly. The token indices are under the key `input_ids`: ```python >>> encoded_sequence = inputs["input_ids"] >>> print(encoded_sequence) [101, 138, 18696, 155, 1942, 3190, 1144, 1572, 13745, 1104, 159, 9664, 2107, 102] ``` Note that the tokenizer automatically adds "special tokens" (if the associated model relies on them) which are special IDs the model sometimes uses. If we decode the previous sequence of ids, ```python >>> decoded_sequence = tokenizer.decode(encoded_sequence) ``` we will see ```python >>> print(decoded_sequence) [CLS] A Titan RTX has 24GB of VRAM [SEP] ``` because this is the way a [`BertModel`] is going to expect its inputs. ## L ### labels The labels are an optional argument which can be passed in order for the model to compute the loss itself. These labels should be the expected prediction of the model: it will use the standard loss in order to compute the loss between its predictions and the expected value (the label). These labels are different according to the model head, for example: - For sequence classification models, ([`BertForSequenceClassification`]), the model expects a tensor of dimension `(batch_size)` with each value of the batch corresponding to the expected label of the entire sequence. - For token classification models, ([`BertForTokenClassification`]), the model expects a tensor of dimension `(batch_size, seq_length)` with each value corresponding to the expected label of each individual token. - For masked language modeling, ([`BertForMaskedLM`]), the model expects a tensor of dimension `(batch_size, seq_length)` with each value corresponding to the expected label of each individual token: the labels being the token ID for the masked token, and values to be ignored for the rest (usually -100). 
- For sequence to sequence tasks, ([`BartForConditionalGeneration`], [`MBartForConditionalGeneration`]), the model expects a tensor of dimension `(batch_size, tgt_seq_length)` with each value corresponding to the target sequences associated with each input sequence. During training, both BART and T5 will make the appropriate `decoder_input_ids` and decoder attention masks internally. They usually do not need to be supplied. This does not apply to models leveraging the Encoder-Decoder framework. - For image classification models, ([`ViTForImageClassification`]), the model expects a tensor of dimension `(batch_size)` with each value of the batch corresponding to the expected label of each individual image. - For semantic segmentation models, ([`SegformerForSemanticSegmentation`]), the model expects a tensor of dimension `(batch_size, height, width)` with each value of the batch corresponding to the expected label of each individual pixel. - For object detection models, ([`DetrForObjectDetection`]), the model expects a list of dictionaries with a `class_labels` and `boxes` key where each value of the batch corresponds to the expected label and number of bounding boxes of each individual image. - For automatic speech recognition models, ([`Wav2Vec2ForCTC`]), the model expects a tensor of dimension `(batch_size, target_length)` with each value corresponding to the expected label of each individual token. <Tip> Each model's labels may be different, so be sure to always check the documentation of each model for more information about their specific labels! </Tip> The base models ([`BertModel`]) do not accept labels, as these are the base transformer models, simply outputting features. ### large language models (LLM) A generic term that refers to transformer language models (GPT-3, BLOOM, OPT) that were trained on a large quantity of data. These models also tend to have a large number of learnable parameters (e.g. 175 billion for GPT-3). 
## M

### masked language modeling (MLM)

A pretraining task where the model sees a corrupted version of the texts, usually done by masking some tokens randomly, and has to predict the original text.

### multimodal

A task that combines text with other kinds of input (for instance images).

## N

### Natural language generation (NLG)

All tasks related to generating text (for instance, [Write With Transformers](https://transformer.huggingface.co/), translation).

### Natural language processing (NLP)

A generic way to say "deal with texts".

### Natural language understanding (NLU)

All tasks related to understanding what is in a text (for instance classifying the whole text, individual words).

## P

### pipeline

A pipeline in ๐Ÿค— Transformers is an abstraction referring to a series of steps that are executed in a specific order to preprocess and transform data and return a prediction from a model. Some example stages found in a pipeline might be data preprocessing, feature extraction, and normalization.

For more details, see [Pipelines for inference](https://huggingface.co/docs/transformers/pipeline_tutorial).

### PipelineParallel (PP)

Parallelism technique in which the model is split up vertically (layer-level) across multiple GPUs, so that only one or several layers of the model are placed on a single GPU. Each GPU processes different stages of the pipeline in parallel, working on a small chunk of the batch. Learn more about how PipelineParallel works [here](perf_train_gpu_many#from-naive-model-parallelism-to-pipeline-parallelism).

### pixel values

A tensor of the numerical representations of an image that is passed to a model. The pixel values have a shape of [`batch_size`, `num_channels`, `height`, `width`], and are generated from an image processor.

### pooling

An operation that reduces a matrix into a smaller matrix, either by taking the maximum or average of the pooled dimension(s).
Pooling layers are commonly found between convolutional layers to downsample the feature representation. ### position IDs Contrary to RNNs that have the position of each token embedded within them, transformers are unaware of the position of each token. Therefore, the position IDs (`position_ids`) are used by the model to identify each token's position in the list of tokens. They are an optional parameter. If no `position_ids` are passed to the model, the IDs are automatically created as absolute positional embeddings. Absolute positional embeddings are selected in the range `[0, config.max_position_embeddings - 1]`. Some models use other types of positional embeddings, such as sinusoidal position embeddings or relative position embeddings. ### preprocessing The task of preparing raw data into a format that can be easily consumed by machine learning models. For example, text is typically preprocessed by tokenization. To gain a better idea of what preprocessing looks like for other input types, check out the [Preprocess](https://huggingface.co/docs/transformers/preprocessing) tutorial. ### pretrained model A model that has been pretrained on some data (for instance all of Wikipedia). Pretraining methods involve a self-supervised objective, which can be reading the text and trying to predict the next word (see [causal language modeling](#causal-language-modeling)) or masking some words and trying to predict them (see [masked language modeling](#masked-language-modeling-mlm)). Speech and vision models have their own pretraining objectives. For example, Wav2Vec2 is a speech model pretrained on a contrastive task which requires the model to identify the "true" speech representation from a set of "false" speech representations. On the other hand, BEiT is a vision model pretrained on a masked image modeling task which masks some of the image patches and requires the model to predict the masked patches (similar to the masked language modeling objective). 
## R ### recurrent neural network (RNN) A type of model that uses a loop over a layer to process texts. ### representation learning A subfield of machine learning which focuses on learning meaningful representations of raw data. Some examples of representation learning techniques include word embeddings, autoencoders, and Generative Adversarial Networks (GANs). ## S ### sampling rate A measurement in hertz of the number of samples (the audio signal) taken per second. The sampling rate is a result of discretizing a continuous signal such as speech. ### self-attention Each element of the input finds out which other elements of the input they should attend to. ### self-supervised learning A category of machine learning techniques in which a model creates its own learning objective from unlabeled data. It differs from [unsupervised learning](#unsupervised-learning) and [supervised learning](#supervised-learning) in that the learning process is supervised, but not explicitly from the user. One example of self-supervised learning is [masked language modeling](#masked-language-modeling-mlm), where a model is passed sentences with a proportion of its tokens removed and learns to predict the missing tokens. ### semi-supervised learning A broad category of machine learning training techniques that leverages a small amount of labeled data with a larger quantity of unlabeled data to improve the accuracy of a model, unlike [supervised learning](#supervised-learning) and [unsupervised learning](#unsupervised-learning). An example of a semi-supervised learning approach is "self-training", in which a model is trained on labeled data, and then used to make predictions on the unlabeled data. The portion of the unlabeled data that the model predicts with the most confidence gets added to the labeled dataset and used to retrain the model. 
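The self-training loop described above can be sketched in plain Python. The toy one-dimensional threshold "model" and the confidence rule below are invented purely for illustration:

```python
# Toy self-training sketch: fit a 1-D threshold classifier, label the
# unlabeled points it is confident about, then refit on the enlarged set.
def fit_threshold(labeled):
    """Fit a threshold halfway between the two class means."""
    xs0 = [x for x, y in labeled if y == 0]
    xs1 = [x for x, y in labeled if y == 1]
    return (sum(xs0) / len(xs0) + sum(xs1) / len(xs1)) / 2

labeled = [(0.0, 0), (1.0, 0), (9.0, 1), (10.0, 1)]
unlabeled = [0.5, 9.5, 5.2]

threshold = fit_threshold(labeled)
for x in unlabeled:
    confidence = abs(x - threshold)  # distance from the decision boundary
    if confidence > 2.0:             # keep only confident predictions
        labeled.append((x, int(x > threshold)))

threshold = fit_threshold(labeled)   # retrain on the enlarged labeled set
```

The ambiguous point near the boundary (`5.2`) is left out, mirroring how only the most confident predictions get promoted to the labeled dataset.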
### sequence-to-sequence (seq2seq)

Models that generate a new sequence from an input, like translation models, or summarization models (such as [Bart](model_doc/bart) or [T5](model_doc/t5)).

### Sharded DDP

Another name for the foundational [ZeRO](#zero-redundancy-optimizer--zero-) concept as used by various other implementations of ZeRO.

### stride

In [convolution](#convolution) or [pooling](#pooling), the stride refers to the distance the kernel is moved over a matrix. A stride of 1 means the kernel is moved one pixel over at a time, and a stride of 2 means the kernel is moved two pixels over at a time.

### supervised learning

A form of model training that directly uses labeled data to correct and instruct model performance. Data is fed into the model being trained, and its predictions are compared to the known labels. The model updates its weights based on how incorrect its predictions were, and the process is repeated to optimize model performance.

## T

### Tensor Parallelism (TP)

Parallelism technique for training on multiple GPUs in which each tensor is split up into multiple chunks, so instead of having the whole tensor reside on a single GPU, each shard of the tensor resides on its designated GPU. Shards get processed separately and in parallel on different GPUs and the results are synced at the end of the processing step. This is what is sometimes called horizontal parallelism, as the splitting happens on a horizontal level.
Learn more about Tensor Parallelism [here](perf_train_gpu_many#tensor-parallelism).

### token

A part of a sentence, usually a word, but can also be a subword (non-common words are often split in subwords) or a punctuation symbol.

### token Type IDs

Some models' purpose is to do classification on pairs of sentences or question answering.
<Youtube id="0u3ioSwev3s"/>

These require two different sequences to be joined in a single "input_ids" entry, which usually is performed with the help of special tokens, such as the classifier (`[CLS]`) and separator (`[SEP]`) tokens. For example, the BERT model builds its two sequence input as such:

```python
>>> # [CLS] SEQUENCE_A [SEP] SEQUENCE_B [SEP]
```

We can use our tokenizer to automatically generate such a sentence by passing the two sequences to `tokenizer` as two arguments (and not a list, like before) like this:

```python
>>> from transformers import BertTokenizer

>>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
>>> sequence_a = "HuggingFace is based in NYC"
>>> sequence_b = "Where is HuggingFace based?"

>>> encoded_dict = tokenizer(sequence_a, sequence_b)
>>> decoded = tokenizer.decode(encoded_dict["input_ids"])
```

which will return:

```python
>>> print(decoded)
[CLS] HuggingFace is based in NYC [SEP] Where is HuggingFace based? [SEP]
```

This is enough for some models to understand where one sequence ends and where another begins. However, other models, such as BERT, also deploy token type IDs (also called segment IDs). They are represented as a binary mask identifying the two types of sequence in the model.

The tokenizer returns this mask as the "token_type_ids" entry:

```python
>>> encoded_dict["token_type_ids"]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1]
```

The first sequence, the "context" used for the question, has all its tokens represented by a `0`, whereas the second sequence, corresponding to the "question", has all its tokens represented by a `1`.

Some models, like [`XLNetModel`] use an additional token represented by a `2`.

### transfer learning

A technique that involves taking a pretrained model and adapting it to a dataset specific to your task. Instead of training a model from scratch, you can leverage knowledge obtained from an existing model as a starting point.
This speeds up the learning process and reduces the amount of training data needed.

### transformer

Self-attention-based deep learning model architecture.

## U

### unsupervised learning

A form of model training in which data provided to the model is not labeled. Unsupervised learning techniques leverage statistical information of the data distribution to find patterns useful for the task at hand.

## Z

### Zero Redundancy Optimizer (ZeRO)

Parallelism technique which performs sharding of the tensors somewhat similar to [TensorParallel](#tensorparallel--tp-), except the whole tensor gets reconstructed in time for a forward or backward computation, therefore the model doesn't need to be modified. This method also supports various offloading techniques to compensate for limited GPU memory.
Learn more about ZeRO [here](perf_train_gpu_many#zero-data-parallelism).
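The core ZeRO idea above — shard state across devices and reconstruct a full copy only when a computation needs it — can be caricatured in plain Python. The helper names are invented for illustration; real implementations such as DeepSpeed operate on GPU tensors with collective communication primitives:

```python
def shard(params, world_size):
    """Split a flat parameter list into one shard per device (ZeRO-style)."""
    chunk = (len(params) + world_size - 1) // world_size
    return [params[i * chunk:(i + 1) * chunk] for i in range(world_size)]


def all_gather(shards):
    """Reconstruct the full parameter list just in time for a forward/backward pass."""
    return [p for piece in shards for p in piece]


params = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
shards = shard(params, world_size=3)  # each "GPU" stores only 2 of the 6 parameters
full = all_gather(shards)             # full tensor rebuilt only while it is needed

print(shards)  # [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]
print(full == params)  # True
```

Because each device holds only its shard between computations, per-device memory scales down with the number of devices while the model code itself stays unchanged.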