---
language:
- en
- ko
tags:
- text-generation
- code
- lua
- maple
- lora
license: apache-2.0
datasets:
- maple-api-examples
base_model: nuprl/MultiPL-T-StarCoderBase_1b
---
# MapleStory Worlds Lua Fine-tuned Language Model
## Model Overview
This model is fine-tuned on MapleStory Worlds Lua API sample code.
It is optimized for game script automation, code generation, and context-aware API usage.
## How to Use
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Replace 'your-hf-id/model-name' with the actual repository id.
tokenizer = AutoTokenizer.from_pretrained('your-hf-id/model-name')
model = AutoModelForCausalLM.from_pretrained('your-hf-id/model-name')

inputs = tokenizer("local currentTargetEntity = self.Entity.AI", return_tensors='pt')
outputs = model.generate(**inputs, max_new_tokens=64)

# generate() returns a batch of sequences; decode the first one.
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training & Experiment Settings
- Batch size: 1
- Gradient accumulation steps: 4 (effective batch size 4)
- Epochs: 3
- Learning rate: 1.2e-4
- Optimizer: AdamW with fp16 mixed precision
- LoRA (PEFT) fine-tuning
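The settings above can be sketched as a `peft`/`transformers` configuration. This is a minimal illustration only: the LoRA rank, alpha, and target modules are assumptions not stated in this card; only the trainer hyperparameters come from the list above.

```python
from peft import LoraConfig
from transformers import TrainingArguments

# Hypothetical LoRA hyperparameters: r, lora_alpha, and target_modules
# are illustrative assumptions, not values reported in this card.
lora_config = LoraConfig(
    r=16,                       # assumed adapter rank
    lora_alpha=32,              # assumed scaling factor
    target_modules=["c_attn"],  # assumed attention projection name
    task_type="CAUSAL_LM",
)

# Trainer settings taken from the list above.
training_args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,  # effective batch size 4
    num_train_epochs=3,
    learning_rate=1.2e-4,
    fp16=True,                      # AdamW is the Trainer's default optimizer
)

# The adapter would then be attached with peft.get_peft_model(base_model, lora_config)
# before passing the model to a transformers.Trainer.
```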
## Performance

| Metric | Before | After | Change |
|------------|--------|-------|--------|
| Perplexity | 46.14 | 5.34 | ↓8.6x |
| Eval loss | 3.83 | 1.68 | ↓ |
| Speed (sec) | 1.30 | 1.28 | - |

Perplexity measures how well a language model predicts held-out text; it is the exponential of the evaluation cross-entropy loss, so lower values mean more accurate predictions.
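The two metrics in the table are consistent with each other, which can be checked directly since perplexity = exp(eval loss):

```python
import math

# Perplexity is the exponential of the cross-entropy eval loss.
before = math.exp(3.83)  # ≈ 46.1, matching the pre-finetuning perplexity of 46.14
after = math.exp(1.68)   # ≈ 5.37, matching the post-finetuning perplexity of 5.34

print(round(before, 2), round(after, 2))
```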
## Data
- Official MapleStory Worlds Developer API sample code
- [API Reference](https://maplestoryworlds-creators.nexon.com/ko/apiReference/How-to-use-API-Reference)
## License
Apache 2.0 (declared in the metadata above).
Base model: [nuprl/MultiPL-T-StarCoderBase_1b](https://huggingface.co/nuprl/MultiPL-T-StarCoderBase_1b)
## Contact
Name: bangill
Email: [95potter95@gmail.com](mailto:95potter95@gmail.com)