# Korean GPT

A Korean-language GPT model.

## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model; trust_remote_code=True is required
# because the repository ships custom model code.
tokenizer = AutoTokenizer.from_pretrained(
    "oz1115/korean-gpt-quick-test",
    trust_remote_code=True
)
model = AutoModelForCausalLM.from_pretrained(
    "oz1115/korean-gpt-quick-test",
    trust_remote_code=True
)

# Encode a Korean prompt ("Hello") and generate a continuation.
inputs = tokenizer("안녕하세요", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
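For quick experiments, the same checkpoint can also be driven through the `pipeline` API. This is a minimal sketch, assuming the repository's custom code works with the standard text-generation pipeline; the sampling parameters shown are illustrative values, not settings recommended by the model authors.

```python
from transformers import pipeline

# Text-generation pipeline wrapping the same checkpoint.
# trust_remote_code=True is assumed to be needed here as well.
generator = pipeline(
    "text-generation",
    model="oz1115/korean-gpt-quick-test",
    trust_remote_code=True
)

# Sample a continuation; temperature/top_p are illustrative values.
result = generator(
    "안녕하세요",  # "Hello" in Korean
    max_new_tokens=50,
    do_sample=True,
    temperature=0.8,
    top_p=0.95
)
print(result[0]["generated_text"])
```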
## Model Information

- Vocabulary size: 32,000
- Hidden size: 512
- Layers: 8
- Attention heads: 8
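These hyperparameters can be verified against the checkpoint's configuration without downloading the weights. A minimal sketch, assuming the custom config exposes the conventional `vocab_size`, `hidden_size`, `num_hidden_layers`, and `num_attention_heads` attributes (the repository's remote code may use different names):

```python
from transformers import AutoConfig

# Load only the configuration (no weights) to inspect model size.
config = AutoConfig.from_pretrained(
    "oz1115/korean-gpt-quick-test",
    trust_remote_code=True
)

# Attribute names below follow transformers conventions; the custom
# config class in this repository may name them differently.
print("Vocabulary:", config.vocab_size)                 # expected 32,000
print("Hidden size:", config.hidden_size)               # expected 512
print("Layers:", config.num_hidden_layers)              # expected 8
print("Attention heads:", config.num_attention_heads)   # expected 8
```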