# Korean GPT

A Korean-language GPT model.

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained(
    "oz1115/korean-gpt-quick-test",
    trust_remote_code=True
)
model = AutoModelForCausalLM.from_pretrained(
    "oz1115/korean-gpt-quick-test",
    trust_remote_code=True
)

inputs = tokenizer("안녕하세요", return_tensors="pt")
outputs = model.generate(**inputs, max_length=50)
print(tokenizer.decode(outputs[0]))
```

## Model Details

- Vocabulary size: 32,000
- Hidden size: 512
- Layers: 8
- Attention heads: 8
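From the hyperparameters above, a rough parameter count can be estimated. This is a back-of-the-envelope sketch that assumes a standard GPT-2-style block (4x MLP expansion, tied input/output embeddings, biases and layer norms ignored); the model's actual custom architecture, loaded via `trust_remote_code`, may differ.

```python
# Rough parameter estimate, assuming a standard GPT-2-style block.
# The actual architecture of oz1115/korean-gpt-quick-test may differ.
vocab_size, hidden, layers, heads = 32_000, 512, 8, 8

embedding = vocab_size * hidden            # token embedding table (tied with output head)
attn_per_layer = 4 * hidden * hidden       # Q, K, V, and output projections
mlp_per_layer = 2 * hidden * (4 * hidden)  # up- and down-projections, 4x expansion
per_layer = attn_per_layer + mlp_per_layer # ignoring biases and layer norms

total = embedding + layers * per_layer
print(f"~{total / 1e6:.1f}M parameters")   # ~41.5M
print(f"head dimension: {hidden // heads}") # 512 / 8 = 64
```

At roughly 40M parameters, this is a small model suited to quick experiments, consistent with the repository name.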