LLM ์ถ”๋ก  ์ตœ์ ํ™” [[llm-inference-optimization]]

๋Œ€๊ทœ๋ชจ ์–ธ์–ด ๋ชจ๋ธ(LLM)์€ ์ฑ„ํŒ… ๋ฐ ์ฝ”๋“œ ์™„์„ฑ ๋ชจ๋ธ๊ณผ ๊ฐ™์€ ํ…์ŠคํŠธ ์ƒ์„ฑ ์‘์šฉ ํ”„๋กœ๊ทธ๋žจ์„ ํ•œ ๋‹จ๊ณ„ ๋Œ์–ด์˜ฌ๋ฆฌ๋ฉฐ, ๋†’์€ ์ˆ˜์ค€์˜ ์ดํ•ด๋ ฅ๊ณผ ์œ ์ฐฝํ•จ์„ ๋ณด์—ฌ์ฃผ๋Š” ํ…์ŠคํŠธ๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ LLM์„ ๊ฐ•๋ ฅํ•˜๊ฒŒ ๋งŒ๋“œ๋Š” ์š”์†Œ์ธ ๊ทธ๋“ค์˜ ํฌ๊ธฐ๋Š” ๋™์‹œ์— ์ถ”๋ก  ๊ณผ์ •์—์„œ ๋„์ „ ๊ณผ์ œ๊ฐ€ ๋˜๊ธฐ๋„ ํ•ฉ๋‹ˆ๋‹ค.

๊ธฐ๋ณธ์ ์ธ ์ถ”๋ก ์€ ๋А๋ฆฝ๋‹ˆ๋‹ค, ์™œ๋ƒํ•˜๋ฉด LLM์ด ๋‹ค์Œ ํ† ํฐ์„ ์ƒ์„ฑํ•˜๊ธฐ ์œ„ํ•ด ๋ฐ˜๋ณต์ ์œผ๋กœ ํ˜ธ์ถœ๋˜์–ด์•ผ ํ•˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ์ƒ์„ฑ์ด ์ง„ํ–‰๋จ์— ๋”ฐ๋ผ ์ž…๋ ฅ ์‹œํ€€์Šค๊ฐ€ ๊ธธ์–ด์ ธ ์ฒ˜๋ฆฌ ์‹œ๊ฐ„์ด ์ ์  ๊ธธ์–ด์ง‘๋‹ˆ๋‹ค. ๋˜ํ•œ, LLM์€ ์ˆ˜์‹ญ์–ต ๊ฐœ์˜ ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ๊ฐ€์ง€๊ณ  ์žˆ์–ด ๋ชจ๋“  ๊ฐ€์ค‘์น˜๋ฅผ ๋ฉ”๋ชจ๋ฆฌ์— ์ €์žฅํ•˜๊ณ  ์ฒ˜๋ฆฌํ•˜๋Š” ๋ฐ ์–ด๋ ค์›€์ด ์žˆ์Šต๋‹ˆ๋‹ค.

์ด ๊ฐ€์ด๋“œ๋Š” LLM ์ถ”๋ก ์„ ๊ฐ€์†ํ•˜๊ธฐ ์œ„ํ•ด Transformers์—์„œ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ์ตœ์ ํ™” ๊ธฐ์ˆ ์„ ์‚ฌ์šฉํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋ณด์—ฌ์ค๋‹ˆ๋‹ค.

Hugging Face also provides Text Generation Inference (TGI), a library dedicated to deploying and serving LLMs optimized for inference. It includes deployment-oriented optimization features not included in Transformers, such as continuous batching for increasing throughput and tensor parallelism for multi-GPU inference.

Static kv-cache and torch.compile [[static-kv-cache-and-torchcompile]]

During decoding, an LLM computes the key-value (kv) values for each input token. Since the LLM is autoregressive, the generated output becomes part of the input at every step, so the same kv values are recomputed over and over, which is inefficient.

์ด๋ฅผ ์ตœ์ ํ™”ํ•˜๊ธฐ ์œ„ํ•ด, ์ด์ „ ํ‚ค(key)์™€ ๊ฐ’(value)์„ ์žฌ๊ณ„์‚ฐํ•˜์ง€ ์•Š๊ณ  ์ €์žฅํ•˜๋Š” kv-cache๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ kv-cache๋Š” ๊ฐ ์ƒ์„ฑ ๋‹จ๊ณ„์—์„œ ์ฆ๊ฐ€ํ•˜๋ฉฐ ๋™์ ์ด๊ธฐ ๋•Œ๋ฌธ์— PyTorch ์ฝ”๋“œ๋ฅผ ๋น ๋ฅด๊ณ  ์ตœ์ ํ™”๋œ ์ปค๋„๋กœ ํ†ตํ•ฉํ•˜๋Š” ๊ฐ•๋ ฅํ•œ ์ตœ์ ํ™” ๋„๊ตฌ์ธ torch.compile์„ ์‚ฌ์šฉํ•˜๋Š” ๋ฐ ์ œ์•ฝ์ด ์žˆ์Šต๋‹ˆ๋‹ค.

The static kv-cache solves this issue by pre-allocating the cache to its maximum size, which lets you combine it with torch.compile for up to a 4x speedup. Your speedup may vary depending on the model size (larger models see a smaller speedup) and hardware.

Currently, only Llama and a few other models support the static kv-cache and torch.compile. Check this issue for a live list of model compatibility.

์ž‘์—…์˜ ๋ณต์žก์„ฑ์— ๋”ฐ๋ผ ์„ธ ๊ฐ€์ง€ ๋ฐฉ์‹์˜ ์ •์  kv-cache ์‚ฌ์šฉ ๋ฐฉ๋ฒ•์ด ์žˆ์Šต๋‹ˆ๋‹ค: 1. ๊ธฐ๋ณธ ์‚ฌ์šฉ๋ฒ•: generation_config์—์„œ ํ”Œ๋ž˜๊ทธ๋ฅผ ์„ค์ •ํ•˜๊ธฐ๋งŒ ํ•˜๋ฉด ๋ฉ๋‹ˆ๋‹ค(๊ถŒ์žฅ); 2. ๊ณ ๊ธ‰ ์‚ฌ์šฉ๋ฒ•: ์—ฌ๋Ÿฌ ๋ฒˆ์˜ ์ƒ์„ฑ์ด๋‚˜ ๋งž์ถคํ˜• ์ƒ์„ฑ ๋ฃจํ”„๋ฅผ ์œ„ํ•ด ์บ์‹œ ๊ฐ์ฒด๋ฅผ ์ฒ˜๋ฆฌํ•ฉ๋‹ˆ๋‹ค; 3. ๊ณ ๊ธ‰ ์‚ฌ์šฉ๋ฒ•: ๋‹จ์ผ ๊ทธ๋ž˜ํ”„๊ฐ€ ํ•„์š”ํ•œ ๊ฒฝ์šฐ, ์ „์ฒด generate ํ•จ์ˆ˜๋ฅผ ํ•˜๋‚˜์˜ ๊ทธ๋ž˜ํ”„๋กœ ์ปดํŒŒ์ผํ•ฉ๋‹ˆ๋‹ค.

์˜ฌ๋ฐ”๋ฅธ ํƒญ์„ ์„ ํƒํ•˜์—ฌ ๊ฐ ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ์ถ”๊ฐ€ ์ง€์นจ์„ ํ™•์ธํ•˜์„ธ์š”.

torch.compile์„ ์‚ฌ์šฉํ•  ๋•Œ ์–ด๋–ค ์ „๋žต์„ ์‚ฌ์šฉํ•˜๋“ , LLM ์ž…๋ ฅ์„ ์ œํ•œ๋œ ๊ฐ’ ์„ธํŠธ๋กœ ์™ผ์ชฝ์— ํŒจ๋”ฉํ•˜๋ฉด ๋ชจ์–‘๊ณผ ๊ด€๋ จ๋œ ์žฌ์ปดํŒŒ์ผ์„ ํ”ผํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. pad_to_multiple_of tokenizer flag๊ฐ€ ์œ ์šฉํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค!

์ด ์˜ˆ์ œ์—์„œ๋Š” Gemma ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ํ•„์š”ํ•œ ์ž‘์—…์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค:

  1. ๋ชจ๋ธ์˜ generation_config ์†์„ฑ์— ์ ‘๊ทผํ•˜์—ฌ cache_implementation์„ "static"์œผ๋กœ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค;
  2. ๋ชจ๋ธ์˜ forward ํŒจ์Šค๋ฅผ ์ •์  kv-cache์™€ ํ•จ๊ป˜ ์ปดํŒŒ์ผํ•˜๊ธฐ ์œ„ํ•ด torch.compile์„ ํ˜ธ์ถœํ•ฉ๋‹ˆ๋‹ค.

And that's it!

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
import os
os.environ["TOKENIZERS_PARALLELISM"] = "false"  # To prevent long warnings :)

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto")

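# 1. Use the static kv-cache instead of the default, dynamically growing cache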
model.generation_config.cache_implementation = "static"

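# 2. Compile the forward pass; "reduce-overhead" mode also uses CUDA graphs to cut launch overhead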
model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)
input_text = "The theory of special relativity states "
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
['The theory of special relativity states 1. The speed of light is constant in all inertial reference']

Under the hood, generate will attempt to reuse the same cache object, removing the need for recompilation at each call. Avoiding recompilation is critical to get the most out of torch.compile, and you should be aware of the following:

  1. ๋ฐฐ์น˜ ํฌ๊ธฐ๊ฐ€ ๋ณ€๊ฒฝ๋˜๊ฑฐ๋‚˜ ํ˜ธ์ถœ ๊ฐ„ ์ตœ๋Œ€ ์ถœ๋ ฅ ๊ธธ์ด๊ฐ€ ์ฆ๊ฐ€ํ•˜๋ฉด ์บ์‹œ๋ฅผ ๋‹ค์‹œ ์ดˆ๊ธฐํ™”ํ•ด์•ผ ํ•˜๋ฉฐ, ์ด๋กœ ์ธํ•ด ์ƒˆ๋กœ ์ปดํŒŒ์ผ์„ ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค;
  2. ์ปดํŒŒ์ผ๋œ ํ•จ์ˆ˜์˜ ์ฒซ ๋ช‡ ๋ฒˆ์˜ ํ˜ธ์ถœ์€ ํ•จ์ˆ˜๊ฐ€ ์ปดํŒŒ์ผ๋˜๋Š” ๋™์•ˆ ๋” ๋А๋ฆฝ๋‹ˆ๋‹ค.

For more advanced usage of the static cache, such as multi-turn conversations, we recommend instantiating and manipulating the cache object outside [~GenerationMixin.generate]. See the advanced usage tab.

[StaticCache] ๊ฐ์ฒด๋Š” past_key_values ์ธ์ˆ˜๋กœ ๋ชจ๋ธ์˜ [~GenerationMixin.generate] ํ•จ์ˆ˜์— ์ „๋‹ฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๊ฐ์ฒด๋Š” ์บ์‹œ ๋‚ด์šฉ์„ ์œ ์ง€ํ•˜๋ฏ€๋กœ, ๋™์  ์บ์‹œ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์ฒ˜๋Ÿผ ์ƒˆ๋กœ์šด [~GenerationMixin.generate] ํ˜ธ์ถœ์— ์ด๋ฅผ ์ „๋‹ฌํ•˜์—ฌ ์ƒ์„ฑ์„ ๊ณ„์†ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

from transformers import AutoTokenizer, AutoModelForCausalLM, StaticCache
import torch
import os
os.environ["TOKENIZERS_PARALLELISM"] = "false"  # To prevent long warnings :)

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto")

model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)
input_text = "The theory of special relativity states "
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
prompt_length = input_ids.input_ids.shape[1]
model.generation_config.max_new_tokens = 16

past_key_values = StaticCache(
    config=model.config,
    batch_size=1,
    # ์บ์‹œ๋ฅผ ์žฌ์‚ฌ์šฉํ•  ๊ณ„ํš์ด ์žˆ๋Š” ๊ฒฝ์šฐ, ๋ชจ๋“  ๊ฒฝ์šฐ์— ์ถฉ๋ถ„ํ•œ ์บ์‹œ ๊ธธ์ด๋ฅผ ์„ค์ •ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค
    max_cache_len=prompt_length+(model.generation_config.max_new_tokens*2),
    device=model.device,
    dtype=model.dtype
)
outputs = model.generate(**input_ids, past_key_values=past_key_values)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
['The theory of special relativity states 1. The speed of light is constant in all inertial reference frames. 2']

# ์ƒ์„ฑ๋œ ํ…์ŠคํŠธ์™€ ๋™์ผํ•œ ์บ์‹œ ๊ฐ์ฒด๋ฅผ ์ „๋‹ฌํ•˜์—ฌ, ์ค‘๋‹จํ•œ ๊ณณ์—์„œ ์ƒ์„ฑ์„ ๊ณ„์†ํ•ฉ๋‹ˆ๋‹ค. 
# ๋‹ค์ค‘ ํ„ด ๋Œ€ํ™”์˜ ๊ฒฝ์šฐ, ์ƒ์„ฑ๋œ ํ…์ŠคํŠธ์— ์ƒˆ๋กœ์šด ์‚ฌ์šฉ์ž ์ž…๋ ฅ์„ ์ถ”๊ฐ€ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
new_input_ids = outputs
outputs = model.generate(new_input_ids, past_key_values=past_key_values)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
['The theory of special relativity states 1. The speed of light is constant in all inertial reference frames. 2. The speed of light is constant in all inertial reference frames. 3.']

๋™์ผํ•œ [StaticCache] ๊ฐ์ฒด๋ฅผ ์ƒˆ๋กœ์šด ํ”„๋กฌํ”„ํŠธ์— ์‚ฌ์šฉํ•˜๋ ค๋ฉด, ํ˜ธ์ถœ ๊ฐ„์— .reset() ๋ฉ”์„œ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๊ทธ ๋‚ด์šฉ์„ ์ดˆ๊ธฐํ™”ํ•˜๋Š” ๊ฒƒ์ด ์ข‹์Šต๋‹ˆ๋‹ค.

๋” ๊นŠ์ด ๋“ค์–ด๊ฐ€๊ณ  ์‹ถ๋‹ค๋ฉด, [StaticCache] ๊ฐ์ฒด๋ฅผ ๋ชจ๋ธ์˜ forward ํŒจ์Šค์— ๋™์ผํ•œ past_key_values ์ธ์ˆ˜๋กœ ์ „๋‹ฌํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ์ „๋žต์„ ์‚ฌ์šฉํ•˜๋ฉด, ํ˜„์žฌ ํ† ํฐ๊ณผ ์ด์ „์— ์ƒ์„ฑ๋œ ํ† ํฐ์˜ ์œ„์น˜ ๋ฐ ์บ์‹œ ์œ„์น˜๋ฅผ ๋ฐ”ํƒ•์œผ๋กœ ๋‹ค์Œ ํ† ํฐ์„ ๋””์ฝ”๋”ฉํ•˜๋Š” ์ž์ฒด ํ•จ์ˆ˜๋ฅผ ์ž‘์„ฑํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

from transformers import LlamaTokenizer, LlamaForCausalLM, StaticCache
import torch

prompts = [
    "Simply put, the theory of relativity states that ",
    "My favorite all time favorite condiment is ketchup.",
]

NUM_TOKENS_TO_GENERATE = 40
torch_device = "cuda"

tokenizer = LlamaTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf", pad_token="</s>", padding_side="right")
model = LlamaForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", device_map="sequential")
inputs = tokenizer(prompts, return_tensors="pt", padding=True).to(model.device)

def decode_one_tokens(model, cur_token, input_pos, cache_position, past_key_values):
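    # One forward step: feed only the newest token together with its explicit
    # position and cache-position indices, then greedily pick the argmax token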
    logits = model(
        cur_token,
        position_ids=input_pos,
        cache_position=cache_position,
        past_key_values=past_key_values,
        return_dict=False,
        use_cache=True
    )[0]
    new_token = torch.argmax(logits[:, -1], dim=-1)[:, None]
    return new_token

There are a few important things you must do to enable the static kv-cache and torch.compile with [StaticCache]:

  1. Initialize the [StaticCache] instance before using the model for inference. There you can configure parameters like the maximum batch size and sequence length.
  2. Call torch.compile on the model to compile the forward pass with the static kv-cache.
  3. Set enable_math=True in the torch.backends.cuda.sdp_kernel context manager to enable the native PyTorch C++ implementation of scaled dot product attention to further speed up inference.

batch_size, seq_length = inputs["input_ids"].shape
with torch.no_grad():
    past_key_values = StaticCache(
        config=model.config, max_batch_size=2, max_cache_len=4096, device=torch_device, dtype=model.dtype
    )
    cache_position = torch.arange(seq_length, device=torch_device)
    generated_ids = torch.zeros(
        batch_size, seq_length + NUM_TOKENS_TO_GENERATE + 1, dtype=torch.int, device=torch_device
    )
    generated_ids[:, cache_position] = inputs["input_ids"].to(torch_device).to(torch.int)

    logits = model(
        **inputs, cache_position=cache_position, past_key_values=past_key_values, return_dict=False, use_cache=True
    )[0]
    next_token = torch.argmax(logits[:, -1], dim=-1)[:, None]
    generated_ids[:, seq_length] = next_token[:, 0]

    decode_one_tokens = torch.compile(decode_one_tokens, mode="reduce-overhead", fullgraph=True)
    cache_position = torch.tensor([seq_length + 1], device=torch_device)
    for _ in range(1, NUM_TOKENS_TO_GENERATE):
        with torch.backends.cuda.sdp_kernel(enable_flash=False, enable_mem_efficient=False, enable_math=True):
            next_token = decode_one_tokens(model, next_token.clone(), None, cache_position, past_key_values)
            generated_ids[:, cache_position] = next_token.int()
        cache_position += 1

text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
text
['Simply put, the theory of relativity states that 1) the speed of light is constant, 2) the speed of light is the same for all observers, and 3) the laws of physics are the same for all observers.',
 'My favorite all time favorite condiment is ketchup. I love it on everything. I love it on my eggs, my fries, my chicken, my burgers, my hot dogs, my sandwiches, my salads, my p']

Compiling the entire generate function, in terms of code, is even simpler than the basic usage: call torch.compile on generate to compile the entire function. There is no need to specify the static cache: although it is compatible, the dynamic cache (default) was faster in our benchmarks.

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
import os
os.environ["TOKENIZERS_PARALLELISM"] = "false"  # To prevent long warnings :)

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto")

model.generate = torch.compile(model.generate, mode="reduce-overhead", fullgraph=True)
input_text = "The theory of special relativity states "
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
['The theory of special relativity states 1. The speed of light is constant in all inertial reference']

์ด ๋ฐฉ๋ฒ•์„ ํ†ตํ•ด ๋ชจ๋ธ์˜ forward ํŒจ์Šค๋ฟ๋งŒ ์•„๋‹ˆ๋ผ, ์ž…๋ ฅ ์ค€๋น„, logit ์ฒ˜๋ฆฌ๊ธฐ ์ž‘์—… ๋“ฑ์„ ํฌํ•จํ•œ ๋ชจ๋“  ๊ฒƒ์„ ์ปดํŒŒ์ผํ•ฉ๋‹ˆ๋‹ค. ๊ธฐ๋ณธ ์‚ฌ์šฉ ์˜ˆ์ œ์— ๋น„ํ•ด generate ํ˜ธ์ถœ์ด ์•ฝ๊ฐ„ ๋” ๋น ๋ฅผ ์ˆ˜ ์žˆ์œผ๋ฉฐ, ์ปดํŒŒ์ผ๋œ ๊ทธ๋ž˜ํ”„๋Š” ๋” ํŠน์ดํ•œ ํ•˜๋“œ์›จ์–ด ์žฅ์น˜๋‚˜ ์‚ฌ์šฉ ์‚ฌ๋ก€์— ์ ํ•ฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ด ์ ‘๊ทผ ๋ฐฉ์‹์„ ์‚ฌ์šฉํ•˜๋Š” ๋ฐ๋Š” ๋ช‡ ๊ฐ€์ง€ ํฐ ๋‹จ์ ์ด ์žˆ์Šต๋‹ˆ๋‹ค:

  1. ์ปดํŒŒ์ผ ์†๋„๊ฐ€ ํ›จ์”ฌ ๋А๋ฆฝ๋‹ˆ๋‹ค;
  2. generate์˜ ๋ชจ๋“  ๋งค๊ฐœ๋ณ€์ˆ˜ ์„ค์ •์€ generation_config๋ฅผ ํ†ตํ•ด์„œ๋งŒ ๊ฐ€๋Šฅํ•ฉ๋‹ˆ๋‹ค;
  3. ๋งŽ์€ ๊ฒฝ๊ณ ์™€ ์˜ˆ์™ธ๊ฐ€ ์–ต์ œ๋ฉ๋‹ˆ๋‹ค. -- ๋จผ์ € ์ปดํŒŒ์ผ ๋˜์ง€ ์•Š์€ ํ˜•ํƒœ๋กœ ํ…Œ์ŠคํŠธํ•˜๋Š” ๊ฒƒ์„ ๊ถŒ์žฅํ•ฉ๋‹ˆ๋‹ค;
  4. ํ˜„์žฌ ์ž‘์—… ์ค‘์ด์ง€๋งŒ ๊ธฐ๋Šฅ ์ œํ•œ์ด ์‹ฌํ•ฉ๋‹ˆ๋‹ค(์˜ˆ: ์ž‘์„ฑ ์‹œ์ ์—์„œ๋Š” EOS ํ† ํฐ์ด ์„ ํƒ๋˜์–ด๋„ ์ƒ์„ฑ์ด ์ค‘๋‹จ๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค).
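
A hedged sketch of driving a compiled generate entirely through generation_config, continuing the snippet above (the option values are illustrative):

# With a compiled generate, set options on generation_config beforehand
# instead of passing them as per-call keyword arguments
model.generation_config.max_new_tokens = 32
model.generation_config.do_sample = True
model.generation_config.temperature = 0.7
outputs = model.generate(**input_ids)  # no generation kwargs here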

Speculative decoding [[speculative-decoding]]

๋ณด๋‹ค ์‹ฌ์ธต์ ์ธ ์„ค๋ช…์„ ์›ํ•œ๋‹ค๋ฉด, Assisted Generation: a new direction toward low-latency text generation ๋ธ”๋กœ๊ทธ ๊ฒŒ์‹œ๋ฌผ์„ ํ™•์ธํ•˜์‹ญ์‹œ์˜ค!

์ž๊ธฐ ํšŒ๊ท€์˜ ๋˜ ๋‹ค๋ฅธ ๋ฌธ์ œ๋Š” ๊ฐ ์ž…๋ ฅ ํ† ํฐ์— ๋Œ€ํ•ด ์ˆœ์ „ํŒŒ ์ค‘์— ๋ชจ๋ธ ๊ฐ€์ค‘์น˜๋ฅผ ๋งค๋ฒˆ ๋กœ๋“œํ•ด์•ผ ํ•œ๋‹ค๋Š” ์ ์ž…๋‹ˆ๋‹ค. ์ด๋Š” ์ˆ˜์‹ญ์–ต ๊ฐœ์˜ ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ๊ฐ€์ง„ LLM์—๋Š” ๋А๋ฆฌ๊ณ  ๋ฒˆ๊ฑฐ๋กญ์Šต๋‹ˆ๋‹ค. ์ถ”์ • ๋””์ฝ”๋”ฉ(speculative decoding)์€ ๋” ์ž‘๊ณ  ๋น ๋ฅธ ๋ณด์กฐ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜์—ฌ ํ›„๋ณด ํ† ํฐ์„ ์ƒ์„ฑํ•˜๊ณ , ์ด๋ฅผ ํฐ LLM์ด ๋‹จ์ผ ์ˆœ์ „ํŒŒ์—์„œ ๊ฒ€์ฆํ•˜์—ฌ ์ด ์†๋„ ์ €ํ•˜๋ฅผ ์™„ํ™”ํ•ฉ๋‹ˆ๋‹ค. ๊ฒ€์ฆ๋œ ํ† ํฐ์ด ์ •ํ™•ํ•˜๋‹ค๋ฉด, LLM์€ ๋ณธ๋ž˜ ์ž์ฒด์ ์œผ๋กœ ์ƒ์„ฑํ•˜๋Š” ๊ฒƒ์ฒ˜๋Ÿผ ํ† ํฐ์„ ์–ป์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ „๋ฐฉ ํŒจ์Šค๊ฐ€ ๋™์ผํ•œ ์ถœ๋ ฅ์„ ๋ณด์žฅํ•˜๊ธฐ ๋•Œ๋ฌธ์— ์ •ํ™•๋„ ์ €ํ•˜๊ฐ€ ์—†์Šต๋‹ˆ๋‹ค.

๊ฐ€์žฅ ํฐ ์†๋„ ํ–ฅ์ƒ์„ ์–ป๊ธฐ ์œ„ํ•ด, ๋ณด์กฐ ๋ชจ๋ธ์€ ๋น ๋ฅด๊ฒŒ ํ† ํฐ์„ ์ƒ์„ฑํ•  ์ˆ˜ ์žˆ๋„๋ก LLM๋ณด๋‹ค ํ›จ์”ฌ ์ž‘์•„์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ณด์กฐ ๋ชจ๋ธ๊ณผ LLM ๋ชจ๋ธ์€ ํ† ํฐ์„ ๋‹ค์‹œ ์ธ์ฝ”๋”ฉํ•˜๊ณ  ๋””์ฝ”๋”ฉํ•˜์ง€ ์•Š๋„๋ก ๋™์ผํ•œ ํ† ํฌ๋‚˜์ด์ €๋ฅผ ๊ณต์œ ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค.

์ถ”์ • ๋””์ฝ”๋”ฉ์€ ํƒ์š• ๊ฒ€์ƒ‰๊ณผ ์ƒ˜ํ”Œ๋ง ๋””์ฝ”๋”ฉ ์ „๋žต์—์„œ๋งŒ ์ง€์›๋˜๋ฉฐ, ๋ฐฐ์น˜ ์ž…๋ ฅ์„ ์ง€์›ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค.

๋ณด์กฐ ๋ชจ๋ธ์„ ๋กœ๋“œํ•˜๊ณ  ์ด๋ฅผ [~GenerationMixin.generate] ๋ฉ”์„œ๋“œ์— ์ „๋‹ฌํ•˜์—ฌ ์ถ”์ • ๋””์ฝ”๋”ฉ์„ ํ™œ์„ฑํ™”ํ•˜์‹ญ์‹œ์˜ค.

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
inputs = tokenizer("Einstein's theory of relativity states", return_tensors="pt").to(device)

model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b").to(device)
assistant_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m").to(device)
outputs = model.generate(**inputs, assistant_model=assistant_model)
tokenizer.batch_decode(outputs, skip_special_tokens=True)
["Einstein's theory of relativity states that the speed of light is constant.    "]

For speculative sampling decoding, add the do_sample and temperature parameters to the [~GenerationMixin.generate] method in addition to the assistant model.

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
inputs = tokenizer("Einstein's theory of relativity states", return_tensors="pt").to(device)

model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b").to(device)
assistant_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m").to(device)
outputs = model.generate(**inputs, assistant_model=assistant_model, do_sample=True, temperature=0.7)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
["Einstein's theory of relativity states that motion in the universe is not a straight line.\n"]

Prompt lookup decoding [[prompt-lookup-decoding]]

ํ”„๋กฌํ”„ํŠธ ์กฐํšŒ ๋””์ฝ”๋”ฉ์€ ํƒ์š• ๊ฒ€์ƒ‰๊ณผ ์ƒ˜ํ”Œ๋ง๊ณผ๋„ ํ˜ธํ™˜๋˜๋Š” ์ถ”์ • ๋””์ฝ”๋”ฉ์˜ ๋ณ€ํ˜•์ž…๋‹ˆ๋‹ค. ํ”„๋กฌํ”„ํŠธ ์กฐํšŒ๋Š” ์š”์•ฝ๊ณผ ๊ฐ™์€ ์ž…๋ ฅ ๊ธฐ๋ฐ˜ ์ž‘์—…์— ํŠนํžˆ ์ž˜ ์ž‘๋™ํ•ฉ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ๋Š” ํ”„๋กฌํ”„ํŠธ์™€ ์ถœ๋ ฅ ๊ฐ„์— ์ข…์ข… ๊ฒน์น˜๋Š” ๋‹จ์–ด๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๊ฒน์น˜๋Š” n-๊ทธ๋žจ์ด LLM ํ›„๋ณด ํ† ํฐ์œผ๋กœ ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค.

ํ”„๋กฌํ”„ํŠธ ์กฐํšŒ ๋””์ฝ”๋”ฉ์„ ํ™œ์„ฑํ™”ํ•˜๋ ค๋ฉด prompt_lookup_num_tokens ๋งค๊ฐœ๋ณ€์ˆ˜์— ๊ฒน์น˜๋Š” ํ† ํฐ ์ˆ˜๋ฅผ ์ง€์ •ํ•˜์‹ญ์‹œ์˜ค. ๊ทธ๋Ÿฐ ๋‹ค์Œ ์ด ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ [~GenerationMixin.generate] ๋ฉ”์„œ๋“œ์— ์ „๋‹ฌํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
inputs = tokenizer("The second law of thermodynamics states", return_tensors="pt").to(device)

model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b").to(device)
outputs = model.generate(**inputs, prompt_lookup_num_tokens=3)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
['The second law of thermodynamics states that entropy increases with temperature.      ']

์ƒ˜ํ”Œ๋ง๊ณผ ํ•จ๊ป˜ ํ”„๋กฌํ”„ํŠธ ์กฐํšŒ ๋””์ฝ”๋”ฉ์„ ์‚ฌ์šฉํ•˜๋ ค๋ฉด, [~GenerationMixin.generate] ๋ฉ”์„œ๋“œ์— do_sample ๋ฐ temperature ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ ์ถ”๊ฐ€ํ•˜์‹ญ์‹œ์˜ค.

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
inputs = tokenizer("The second law of thermodynamics states", return_tensors="pt").to(device)

model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b").to(device)
outputs = model.generate(**inputs, prompt_lookup_num_tokens=3, do_sample=True, temperature=0.7)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
["The second law of thermodynamics states that energy cannot be created nor destroyed. It's not a"]

์–ดํ…์…˜ ์ตœ์ ํ™” [[attention-optimizations]]

ํŠธ๋žœ์Šคํฌ๋จธ ๋ชจ๋ธ์˜ ์•Œ๋ ค์ง„ ๋ฌธ์ œ๋Š” ์…€ํ”„ ์–ดํ…์…˜ ๋ฉ”์ปค๋‹ˆ์ฆ˜์ด ์ž…๋ ฅ ํ† ํฐ ์ˆ˜์™€ ํ•จ๊ป˜ ๊ณ„์‚ฐ ๋ฐ ๋ฉ”๋ชจ๋ฆฌ๊ฐ€ ์ œ๊ณฑ์œผ๋กœ ์ฆ๊ฐ€ํ•œ๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ด ์ œํ•œ์€ ํ›จ์”ฌ ๋” ๊ธด ์‹œํ€€์Šค๋ฅผ ์ฒ˜๋ฆฌํ•˜๋Š” LLM์—์„œ๋Š” ๋”์šฑ ์ปค์ง‘๋‹ˆ๋‹ค. ์ด๋ฅผ ํ•ด๊ฒฐํ•˜๊ธฐ ์œ„ํ•ด FlashAttention2 ๋˜๋Š” PyTorch์˜ ์Šค์ผ€์ผ๋œ ์ ๊ณฑ ์–ดํ…์…˜์„ ์‚ฌ์šฉํ•ด ๋ณด์‹ญ์‹œ์˜ค. ์ด๋“ค์€ ๋” ๋ฉ”๋ชจ๋ฆฌ ํšจ์œจ์ ์ธ ์–ดํ…์…˜ ๊ตฌํ˜„์œผ๋กœ ์ถ”๋ก ์„ ๊ฐ€์†ํ™”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

FlashAttention-2 [[flashattention-2]]

FlashAttention and FlashAttention-2 break up the attention computation into smaller chunks and reduce the number of intermediate read/write operations to speed up inference. FlashAttention-2 improves on the original FlashAttention algorithm by also parallelizing over the sequence length dimension and better partitioning work on the hardware to reduce synchronization and communication overhead.

To use FlashAttention-2, set attn_implementation="flash_attention_2" in the [~PreTrainedModel.from_pretrained] method.

from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import torch

quant_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2b",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
)

PyTorch ์Šค์ผ€์ผ๋œ ์ ๊ณฑ ์–ดํ…์…˜(scaled dot product attention) [[pytorch-scaled-dot-product-attention]]

์Šค์ผ€์ผ๋œ ์ ๊ณฑ ์–ดํ…์…˜(SDPA)๋Š” PyTorch 2.0์—์„œ ์ž๋™์œผ๋กœ ํ™œ์„ฑํ™”๋˜๋ฉฐ, FlashAttention, xFormers, PyTorch์˜ C++ ๊ตฌํ˜„์„ ์ง€์›ํ•ฉ๋‹ˆ๋‹ค. SDPA๋Š” CUDA ๋ฐฑ์—”๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ ๊ฐ€์žฅ ์„ฑ๋Šฅ์ด ์ข‹์€ ์–ดํ…์…˜ ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ์„ ํƒํ•ฉ๋‹ˆ๋‹ค. ๋‹ค๋ฅธ ๋ฐฑ์—”๋“œ์—์„œ๋Š” SDPA๊ฐ€ PyTorch C++ ๊ตฌํ˜„์œผ๋กœ ๊ธฐ๋ณธ ์„ค์ •๋ฉ๋‹ˆ๋‹ค.

SDPA also supports FlashAttention-2 as long as you have the latest PyTorch version installed.
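
SDPA is picked automatically, but you can also request it explicitly when loading a model. A small sketch, assuming a Transformers version that accepts the attn_implementation argument:

import torch
from transformers import AutoModelForCausalLM

# Explicitly request the SDPA attention implementation
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2b",
    torch_dtype=torch.bfloat16,
    attn_implementation="sdpa",
)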

Use the torch.backends.cuda.sdp_kernel context manager to explicitly enable or disable any of the three attention algorithms. For example, set enable_flash=True to enable FlashAttention.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2b",
    torch_dtype=torch.bfloat16,
).to("cuda")

inputs = tokenizer("The theory of special relativity states ", return_tensors="pt").to("cuda")

# Run generation with only the FlashAttention kernel enabled
with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False):
    outputs = model.generate(**inputs)

์–‘์žํ™” [[quantization]]

์–‘์žํ™”๋Š” LLM ๊ฐ€์ค‘์น˜๋ฅผ ๋” ๋‚ฎ์€ ์ •๋ฐ€๋„๋กœ ์ €์žฅํ•˜์—ฌ ํฌ๊ธฐ๋ฅผ ์ค„์ž…๋‹ˆ๋‹ค. ์ด๋Š” ๋ฉ”๋ชจ๋ฆฌ ์‚ฌ์šฉ๋Ÿ‰์„ ์ค„์ด๋ฉฐ GPU ๋ฉ”๋ชจ๋ฆฌ์— ์ œ์•ฝ์ด ์žˆ๋Š” ๊ฒฝ์šฐ ์ถ”๋ก ์„ ์œ„ํ•ด LLM์„ ๋กœ๋“œํ•˜๋Š” ๊ฒƒ์„ ๋” ์šฉ์ดํ•˜๊ฒŒ ํ•ฉ๋‹ˆ๋‹ค. GPU๊ฐ€ ์ถฉ๋ถ„ํ•˜๋‹ค๋ฉด, ๋ชจ๋ธ์„ ์–‘์žํ™”ํ•  ํ•„์š”๋Š” ์—†์Šต๋‹ˆ๋‹ค. ์ถ”๊ฐ€์ ์ธ ์–‘์žํ™” ๋ฐ ์–‘์žํ™” ํ•ด์ œ ๋‹จ๊ณ„๋กœ ์ธํ•ด ์•ฝ๊ฐ„์˜ ์ง€์—ฐ์ด ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค(AWQ ๋ฐ ์œตํ•ฉ AWQ ๋ชจ๋“ˆ ์ œ์™ธ).

๋‹ค์–‘ํ•œ ์–‘์žํ™” ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ(์ž์„ธํ•œ ๋‚ด์šฉ์€ Quantization ๊ฐ€์ด๋“œ๋ฅผ ์ฐธ์กฐํ•˜์‹ญ์‹œ์˜ค)๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์—๋Š” Quanto, AQLM, VPTQ, AWQ ๋ฐ AutoGPTQ๊ฐ€ ํฌํ•จ๋ฉ๋‹ˆ๋‹ค. ์‚ฌ์šฉ ์‚ฌ๋ก€์— ๊ฐ€์žฅ ์ž˜ ๋งž๋Š” ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋ฅผ ์‚ฌ์šฉํ•ด ๋ณด์‹ญ์‹œ์˜ค. ๋˜ํ•œ AutoGPTQ์™€ bitsandbytes๋ฅผ ๋น„๊ตํ•˜๋Š” Overview of natively supported quantization schemes in ๐Ÿค— Transformers ๋ธ”๋กœ๊ทธ ๊ฒŒ์‹œ๋ฌผ์„ ์ฝ์–ด๋ณด๋Š” ๊ฒƒ์„ ์ถ”์ฒœํ•ฉ๋‹ˆ๋‹ค.

์•„๋ž˜์˜ ๋ชจ๋ธ ๋ฉ”๋ชจ๋ฆฌ ๊ณ„์‚ฐ๊ธฐ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋กœ๋“œํ•˜๋Š” ๋ฐ ํ•„์š”ํ•œ ๋ฉ”๋ชจ๋ฆฌ๋ฅผ ์ถ”์ •ํ•˜๊ณ  ๋น„๊ตํ•ด ๋ณด์‹ญ์‹œ์˜ค. ์˜ˆ๋ฅผ ๋“ค์–ด Mistral-7B-v0.1๋ฅผ ๋กœ๋“œํ•˜๋Š” ๋ฐ ํ•„์š”ํ•œ ๋ฉ”๋ชจ๋ฆฌ๋ฅผ ์ถ”์ •ํ•ด ๋ณด์‹ญ์‹œ์˜ค.

Mistral-7B-v0.1์„ ๋ฐ˜์ •๋ฐ€๋„๋กœ ๋กœ๋“œํ•˜๋ ค๋ฉด [~transformers.AutoModelForCausalLM.from_pretrained] ๋ฉ”์„œ๋“œ์—์„œ torch_dtype ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ torch.bfloat16์œผ๋กœ ์„ค์ •ํ•˜์‹ญ์‹œ์˜ค. ์ด ๊ฒฝ์šฐ 13.74GB์˜ ๋ฉ”๋ชจ๋ฆฌ๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค.

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.bfloat16, device_map="auto",
)

์ถ”๋ก ์„ ์œ„ํ•ด ์–‘์žํ™”๋œ ๋ชจ๋ธ(8๋น„ํŠธ ๋˜๋Š” 4๋น„ํŠธ)์„ ๋กœ๋“œํ•˜๋ ค๋ฉด bitsandbytes๋ฅผ ์‚ฌ์šฉํ•˜๊ณ  load_in_4bit ๋˜๋Š” load_in_8bit ๋งค๊ฐœ๋ณ€์ˆ˜๋ฅผ True๋กœ ์„ค์ •ํ•˜์‹ญ์‹œ์˜ค. ๋ชจ๋ธ์„ 8๋น„ํŠธ๋กœ ๋กœ๋“œํ•˜๋Š” ๋ฐ๋Š” 6.87GB์˜ ๋ฉ”๋ชจ๋ฆฌ๋งŒ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค.

from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch

quant_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", quantization_config=quant_config, device_map="auto"
)