---
license: apache-2.0
---

# **TinyWand-DPO**
<p align="left">
  <img src="./TinyWand.png" width="150"/>
</p>
| | |
# **Model Description**

**1.63B: how about an SLM at a sensible size?**

## **Introduction**
**TinyWand-DPO** is a 1.63B-parameter SLM. Thanks to its small size, it can run on edge devices and reach high throughput (tokens/s) while still delivering strong performance.

## **License**
Open: Apache-2.0 (see the metadata above).

## **Performance**
TBD

## **Training**
TBD

## **Usage**

**VRAM required for inference**

| Quantization | Input tokens | Output tokens | Memory usage |
|---|---|---|---|
| bf16 (base) | 64 | 256 | 3,888 MiB |
| q4_K_M | 64 | 256 | 1,788 MiB |

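The totals in the table include KV-cache and activation overhead on top of the raw weights. As a rough sanity check, weight memory alone can be estimated from the parameter count and bits per weight; the ~4.5 bits/weight figure for q4_K_M below is a common approximation, not a number from this card:

```python
def weight_mib(n_params: float, bits_per_weight: float) -> float:
    """Approximate memory for the model weights alone, in MiB."""
    return n_params * bits_per_weight / 8 / 2**20

# 1.63B parameters in bf16 (16 bits/weight): ~3,109 MiB of weights,
# consistent with the 3,888 MiB total above once cache/overhead is added.
print(round(weight_mib(1.63e9, 16)))   # 3109

# q4_K_M averages roughly 4.5 bits/weight (approximation): ~874 MiB of weights.
print(round(weight_mib(1.63e9, 4.5)))  # 874
```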
**Prompt template**

This model uses the Alpaca prompt template.

The template is applied with `apply_chat_template()`; see the [Hugging Face chat templating guide](https://huggingface.co/docs/transformers/main/chat_templating).

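The authoritative template string ships with the tokenizer config and is applied by `apply_chat_template()`; for reference, the sketch below reproduces the conventional single-turn Alpaca layout by hand, for illustration only:

```python
def alpaca_prompt(instruction: str) -> str:
    """Hand-rolled sketch of the conventional Alpaca single-turn format."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

print(alpaca_prompt("Summarize the text."))
```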
**You can load and run the model with the Python code below.**
*Requires `transformers` and `torch` to be installed (e.g. `pip install transformers torch`).*

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # for NVIDIA GPUs

tokenizer = AutoTokenizer.from_pretrained("maywell/TinyWand-DPO")
model = AutoModelForCausalLM.from_pretrained(
    "maywell/TinyWand-DPO",
    device_map="auto",
    torch_dtype=torch.bfloat16,  # switch to torch.float16 if your hardware does not support bfloat16
)

messages = [
    {"role": "system", "content": "Below is an instruction that describes a task. Write a response that appropriately completes the request."},  # applied identically even if left empty
    {"role": "user", "content": "What are the advantages of a language model with a small parameter count?"},
]

model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(device)

# device_map="auto" already places the model, so no explicit model.to(device) is needed
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
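Note that `batch_decode` returns the prompt and the completion together, because `generate()` echoes the input ids before appending new tokens. To print only the newly generated text, slice off the prompt length first; a minimal sketch with plain lists (the same slicing applies to the tensors as `generated_ids[:, model_inputs.shape[1]:]`):

```python
# Stand-ins for one row of model_inputs / generated_ids (hypothetical token ids)
prompt_ids = [1, 529, 6112, 29958]          # the encoded prompt
generated  = [1, 529, 6112, 29958, 42, 7]   # generate() echoes the prompt, then appends new tokens

new_tokens = generated[len(prompt_ids):]    # keep only the completion
print(new_tokens)  # [42, 7]
```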