---
language:
- en
- ko
tags:
- text-generation
- code
- lua
- maple
- lora
license: apache-2.0
datasets:
- maple-api-examples
base_model: nuprl/MultiPL-T-StarCoderBase_1b
---

# MapleStory Worlds Lua Fine-tuned Language Model

## Model Overview

This model is fine-tuned on MapleStory Worlds Lua API sample code. It is optimized for game-script automation, code generation, and context-aware API usage.

## How to Use

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained('your-hf-id/model-name')
model = AutoModelForCausalLM.from_pretrained('your-hf-id/model-name')

inputs = tokenizer("local currentTargetEntity = self.Entity.AI", return_tensors='pt')
outputs = model.generate(**inputs, max_new_tokens=64)
# generate() returns a batch of sequences; decode the first one
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Training & Experiment Settings

- Batch size: 1
- Gradient accumulation steps: 4
- Epochs: 3
- Learning rate: 1.2e-4
- Optimizer: AdamW (fp16 mixed precision)
- Fine-tuning method: LoRA (PEFT)
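
The settings above can be sketched with the `peft` and `transformers` libraries. This is a minimal configuration sketch, not the exact training script: `r`, `lora_alpha`, `lora_dropout`, `target_modules`, and `output_dir` are illustrative assumptions not stated in this card.

```python
from peft import LoraConfig
from transformers import TrainingArguments

# LoRA adapter settings; r, lora_alpha, lora_dropout, and target_modules
# are illustrative assumptions not stated in this card
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["c_attn"],  # assumed GPT-BigCode-style attention projection
    task_type="CAUSAL_LM",
)

# Hyperparameters taken from the list above
training_args = TrainingArguments(
    output_dir="mapleworlds-lua-lora",  # placeholder path
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,
    num_train_epochs=3,
    learning_rate=1.2e-4,
    fp16=True,  # as listed above; requires a CUDA device
)
```

These objects would then be passed to `get_peft_model` and a `Trainer` together with the base model and the dataset.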

## Performance

| Metric | Before | After | Change |
|--------|--------|-------|--------|
| Perplexity | 46.14 | 5.34 | ↓8.6x |
| Eval loss | 3.83 | 1.68 | ↓ |
| Inference time (s) | 1.30 | 1.28 | - |
Perplexity measures prediction difficulty for language models. Lower values mean more accurate predictions.
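
Concretely, perplexity is the exponential of the cross-entropy eval loss, which is consistent with the numbers in the table:

```python
import math

# Perplexity = exp(cross-entropy loss); the table's values line up:
print(math.exp(3.83))  # ≈ 46.1 (before fine-tuning, reported ppl 46.14)
print(math.exp(1.68))  # ≈ 5.37 (after fine-tuning, reported ppl 5.34)
```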

## Data

- Official MapleStory Worlds Developer API sample code
- [API Reference](https://maplestoryworlds-creators.nexon.com/ko/apiReference/How-to-use-API-Reference)

## License

- License: Apache-2.0
- Base model: nuprl/MultiPL-T-StarCoderBase_1b
- Hugging Face: [nuprl/MultiPL-T-StarCoderBase_1b](https://huggingface.co/nuprl/MultiPL-T-StarCoderBase_1b)

## Contact

- Name: bangill
- Email: [95potter95@gmail.com](mailto:95potter95@gmail.com)