---
language:
- en
- ko
tags:
- text-generation
- code
- lua
- maple
- lora
license: apache-2.0
datasets:
- maple-api-examples
base_model: nuprl/MultiPL-T-StarCoderBase_1b
---
# MapleStory Worlds Lua Fine-tuned Language Model
## πŸ“– Model Overview
This model is fine-tuned on MapleStory Worlds Lua API sample code.
It is optimized for game script automation, code generation, and context-aware API usage.
## πŸ€– How to Use
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained('your-hf-id/model-name')
model = AutoModelForCausalLM.from_pretrained('your-hf-id/model-name')

inputs = tokenizer("local currentTargetEntity = self.Entity.AI", return_tensors='pt')
outputs = model.generate(**inputs, max_new_tokens=64)
# generate() returns a batch of token-id sequences; decode the first one
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## βš™οΈ Training & Experiment Settings
- Batch size: 1
- Gradient accumulation steps: 4
- Epochs: 3
- Learning rate: 1.2e-4
- Optimizer: AdamW, fp16
- LoRA (PEFT) fine-tuning
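The settings above can be expressed as a PEFT configuration. This is a minimal sketch assuming the `peft` and `transformers` libraries; the LoRA rank, alpha, dropout, and `target_modules` are illustrative assumptions, since the card does not report them:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

# LoRA adapter config. Rank/alpha/dropout/target_modules are assumptions,
# not values reported in this card.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["c_attn"],  # assumption: StarCoder-style fused QKV projection
    task_type="CAUSAL_LM",
)

# Hyperparameters taken from the list above.
training_args = TrainingArguments(
    output_dir="maple-lua-lora",
    per_device_train_batch_size=1,   # batch size 1
    gradient_accumulation_steps=4,   # effective batch size of 4
    num_train_epochs=3,
    learning_rate=1.2e-4,
    fp16=True,
    optim="adamw_torch",             # AdamW
)

model = AutoModelForCausalLM.from_pretrained("nuprl/MultiPL-T-StarCoderBase_1b")
model = get_peft_model(model, lora_config)  # only LoRA weights are trainable
```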
## πŸ“Š Performance
| Metric | Before | After | Change |
|------------|--------|-------|-----------|
| Perplexity | 46.14 | 5.34 | ↓ ~8.6× |
| Eval loss | 3.83 | 1.68 | ↓ |
| Eval speed | 1.30 s | 1.28 s | unchanged |
Perplexity measures prediction difficulty for language models. Lower values mean more accurate predictions.
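The perplexity and eval-loss rows are consistent with each other, since perplexity is the exponential of the mean cross-entropy loss:

```python
import math

# perplexity = exp(mean cross-entropy loss)
ppl_before = math.exp(3.83)  # ~46.1, matching the "Before" row
ppl_after = math.exp(1.68)   # ~5.4, matching the "After" row
print(round(ppl_before, 1), round(ppl_after, 1))
```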
## πŸ—ƒοΈ Data
- Official MapleStory Worlds Developer API sample code
- [API Reference](https://maplestoryworlds-creators.nexon.com/ko/apiReference/How-to-use-API-Reference)
## πŸ“„ License
Base model: nuprl/MultiPL-T-StarCoderBase_1b
Hugging Face: [nuprl/MultiPL-T-StarCoderBase_1b](https://huggingface.co/nuprl/MultiPL-T-StarCoderBase_1b)
## Contact
name: bangill
mail: [95potter95@gmail.com](mailto:95potter95@gmail.com)