---
base_model: unsloth/qwen2.5-coder-7b-instruct-bnb-4bit
library_name: peft
pipeline_tag: text-generation
tags:
- minecraft
- java
- spigot
- papermc
- lora
- unsloth
- qwen2.5-coder
---

# toncode-v1: Minecraft Plugin Coder

This model is a fine-tuned LoRA adapter for **Qwen2.5-Coder-7B-Instruct**, specialized in generating high-quality Java code for Minecraft server plugins (Spigot/Paper API).

## Model Details
- **Developed by:** Akahsizrr
- **Model type:** LoRA Adapter (PEFT)
- **Base Model:** Qwen/Qwen2.5-Coder-7B-Instruct
- **Language(s):** English, Java (Minecraft Spigot/Paper API)
- **License:** Apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-coder-7b-instruct-bnb-4bit

## Training Details
The model was trained with **Unsloth** on a Minecraft-specific dataset of plugin logic and event-handling examples. A sketch of the training configuration appears after the list below.

- **Training Steps:** 100
- **Optimizer:** AdamW 8-bit
- **Learning Rate:** 2e-4
- **Hardware:** 2x NVIDIA T4 (Kaggle)
- **Batch Size:** 1 (gradient accumulation steps: 8, for an effective batch size of 8)
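
The training script itself is not published; the following is a minimal sketch reconstructed from the hyperparameters above, using Unsloth with TRL's `SFTTrainer` (older TRL argument style). The LoRA rank, target modules, and the one-example placeholder dataset are assumptions, not values from the actual run.

```python
from datasets import Dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load the 4-bit quantized base model
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/qwen2.5-coder-7b-instruct-bnb-4bit",
    max_seq_length = 2048,
    load_in_4bit = True,
)

# Attach LoRA adapters; rank and target modules are assumed (common Unsloth defaults)
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    lora_alpha = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
)

# Placeholder dataset: one chat-formatted example standing in for the real data
dataset = Dataset.from_dict({"text": [
    tokenizer.apply_chat_template(
        [{"role": "user", "content": "Write a Spigot PlayerJoinEvent listener."},
         {"role": "assistant", "content": "// plugin code here"}],
        tokenize = False,
    )
]})

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = 2048,
    args = TrainingArguments(
        per_device_train_batch_size = 1,  # batch size 1
        gradient_accumulation_steps = 8,  # effective batch size 8
        max_steps = 100,                  # 100 training steps
        learning_rate = 2e-4,
        optim = "adamw_8bit",             # AdamW 8-bit
        output_dir = "outputs",
    ),
)
trainer.train()
```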

## How to Get Started
To use this model, load the adapter on top of the base Qwen2.5-Coder model with either the `unsloth` library (shown first) or plain `peft`/`transformers` (shown after).

```python
from unsloth import FastLanguageModel

# Load the 4-bit quantized base model
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/qwen2.5-coder-7b-instruct-bnb-4bit",
    max_seq_length = 2048,
    load_in_4bit = True,
)

# Attach the fine-tuned adapter first, then switch the model to inference mode
model.load_adapter("Akahsizrr/toncode-v1")
model = FastLanguageModel.for_inference(model)

# Test prompt
instruction = "Create a listener that gives a player a Diamond Sword when they first join the server."
messages = [{"role": "user", "content": instruction}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to("cuda")

outputs = model.generate(input_ids=inputs, max_new_tokens=512)
print(tokenizer.batch_decode(outputs)[0])
```
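
The adapter can also be loaded without Unsloth, using plain `transformers` plus `peft`. This is a minimal sketch assuming the full-precision base model `Qwen/Qwen2.5-Coder-7B-Instruct`; the prompt is illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the full-precision base model and its tokenizer
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-Coder-7B-Instruct",
    torch_dtype = "auto",
    device_map = "auto",
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-Coder-7B-Instruct")

# Wrap the base model with the LoRA adapter weights
model = PeftModel.from_pretrained(base, "Akahsizrr/toncode-v1")

messages = [{"role": "user", "content": "Write a Spigot /heal command executor."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids=inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```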