---

language:
  - en
  - ko
tags:
  - text-generation
  - code
  - lua
  - maple
  - lora
license: apache-2.0
datasets:
  - maple-api-examples
base_model: nuprl/MultiPL-T-StarCoderBase_1b
---


# MapleStory Worlds Lua Fine-tuned Language Model

## πŸ“– Model Overview
This model is fine-tuned on MapleStory Worlds Lua API sample code.
It is optimized for game script automation, code generation, and context-aware API usage.

## πŸ€– How to Use
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained('your-hf-id/model-name')
model = AutoModelForCausalLM.from_pretrained('your-hf-id/model-name')

inputs = tokenizer("local currentTargetEntity = self.Entity.AI", return_tensors='pt')
outputs = model.generate(**inputs, max_new_tokens=64)

# generate() returns a batch of token-id sequences; decode the first one
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```


## βš™οΈ Training & Experiment Settings
- Batch size: 1
- Gradient accumulation steps: 4 (effective batch size: 4)
- Epochs: 3
- Learning rate: 1.2e-4
- Optimizer: AdamW with fp16 mixed precision
- Method: LoRA (PEFT) fine-tuning
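LoRA keeps the base model frozen and trains only low-rank adapter matrices, which is why a 1B-parameter model can be fine-tuned at batch size 1. As a rough illustration (the dimensions and rank below are assumptions, not this model's actual adapter configuration), the trainable fraction for a d×k weight adapted at rank r is r(d+k)/(d·k):

```python
def lora_trainable_fraction(d: int, k: int, r: int) -> float:
    """Fraction of a d x k weight's parameters that are trained when it is
    adapted with rank-r LoRA factors B (d x r) and A (r x k)."""
    return r * (d + k) / (d * k)

# Hypothetical example: a 2048 x 2048 projection adapted at rank 8
print(lora_trainable_fraction(2048, 2048, 8))  # 0.0078125 (< 1% trained)
```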

## πŸ“Š Performance

| Metric         | Before | After | Change |
|----------------|--------|-------|--------|
| Perplexity     | 46.14  | 5.34  | ↓ 8.6× |
| Eval loss      | 3.83   | 1.68  | ↓ 2.15 |
| Eval speed (s) | 1.30   | 1.28  | ≈ same |

Perplexity measures how difficult a language model finds the text it is predicting; it is the exponential of the evaluation (cross-entropy) loss, so lower values mean more accurate predictions.
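Because perplexity is the exponential of the cross-entropy loss, the two rows of the table can be cross-checked against each other:

```python
import math

# perplexity = exp(cross-entropy loss)
ppl_before = math.exp(3.83)  # eval loss before fine-tuning
ppl_after = math.exp(1.68)   # eval loss after fine-tuning

# ~46.06 and ~5.37, matching the reported 46.14 and 5.34 up to loss rounding
print(round(ppl_before, 2), round(ppl_after, 2))
print(round(ppl_before / ppl_after, 1))  # the ~8.6x improvement
```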

## πŸ—ƒοΈ Data
- Official MapleStory Worlds Developer API sample code
- [API Reference](https://maplestoryworlds-creators.nexon.com/ko/apiReference/How-to-use-API-Reference)

## πŸ“„ License
License: Apache 2.0

Base model: [nuprl/MultiPL-T-StarCoderBase_1b](https://huggingface.co/nuprl/MultiPL-T-StarCoderBase_1b)



## Contact

Name: bangill

Email: [95potter95@gmail.com](mailto:95potter95@gmail.com)


