---
base_model: codellama/CodeLlama-7b-Instruct-hf
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:codellama/CodeLlama-7b-Instruct-hf
- lora
- transformers
- luau
- roblox
license: apache-2.0
language:
- en
---
# Model Card for CodeLlama-7B-Instruct-Luau
A fine-tuned version of `codellama/CodeLlama-7b-Instruct-hf` targeting the **Luau** programming language, Roblox’s Lua-derived scripting language.
This model is distributed as a **LoRA adapter** and is intended to improve the base model’s performance on Roblox-specific scripting tasks.
---
## Model Details
### Model Description
This model is a parameter-efficient (LoRA) fine-tune of CodeLlama 7B Instruct, specialized for generating, explaining, and refactoring **Luau** code.
The fine-tuning focuses on Roblox development patterns, including common services, APIs, gameplay scripting idioms, and client/server logic. The model is designed to assist developers during prototyping, learning, and general scripting workflows.
- **Developed by:** darwinkernelpanic
- **Funded by:** Not applicable
- **Shared by:** darwinkernelpanic
- **Model type:** Causal Language Model (decoder-only, LoRA adapter)
- **Language(s) (NLP):** English
- **License:** Apache-2.0
- **Finetuned from model:** codellama/CodeLlama-7b-Instruct-hf
### Model Sources
- **Repository:** https://huggingface.co/darwinkernelpanic/CodeLlama-7b-Instruct-hf-luau
- **Paper:** *Code Llama: Large Language Models for Code* (Meta AI)
- **Demo:** Not available
---
## Uses
### Direct Use
This model can be used directly for:
- Writing Luau scripts for Roblox
- Explaining Roblox APIs and services
- Refactoring or debugging Luau code
- Prototyping gameplay systems and utilities
- Learning Luau and Roblox scripting concepts
The model is intended as a **developer assistant**, not an autonomous system.
### Downstream Use
Potential downstream uses include:
- Further fine-tuning on proprietary Roblox frameworks
- Integration into IDEs or editor tooling
- Chat-based assistants for Roblox development
- Educational or documentation tooling
### Out-of-Scope Use
This model should **not** be used for:
- Safety-critical or production-critical systems
- Legal, medical, or financial advice
- Malware, exploit, or cheat development
- Fully automated code deployment without review
---
## Bias, Risks, and Limitations
- Inherits biases and limitations from the base CodeLlama model
- May hallucinate Roblox APIs or outdated behaviors
- Does not validate code at runtime
- Output correctness depends on prompt quality
### Recommendations
Users should:
- Review all generated code manually
- Test scripts in Roblox Studio
- Cross-check with official Roblox documentation
- Treat outputs as suggestions rather than authoritative solutions
---
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_model = "codellama/CodeLlama-7b-Instruct-hf"
adapter_model = "darwinkernelpanic/CodeLlama-7b-Instruct-hf-luau"

# Load the frozen base model, then attach the LoRA adapter on top of it.
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)
model = PeftModel.from_pretrained(model, adapter_model)

prompt = "Write a Luau function that creates a Part and parents it to Workspace."
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=300,
    temperature=0.7,
    do_sample=True,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
---
## Training Details
### Training Data
The model was fine-tuned on a curated mixture of:
* Luau scripts
* Roblox API usage examples
* Open-source Roblox projects
* Synthetic instruction-style prompts
All data was filtered to avoid private, proprietary, or sensitive content.
### Training Procedure
The model was trained using parameter-efficient fine-tuning with LoRA while keeping the base model weights frozen.
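The exact LoRA hyperparameters are not published here; the `adapter_config.json` shipped with the adapter records the actual values. For orientation, a typical PEFT configuration for this kind of run looks like the following sketch (all values illustrative, not the training configuration used for this checkpoint):

```python
from peft import LoraConfig

# Illustrative LoRA configuration -- rank, alpha, and target modules
# are assumptions, not the published settings for this adapter.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
```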
#### Preprocessing
* Code formatting normalization
* Instruction-style prompt structuring
* Removal of low-quality or irrelevant samples
#### Training Hyperparameters
* **Training regime:** fp16 mixed precision
#### Speeds, Sizes, Times
* **Base model size:** ~7B parameters
* **Trainable parameters:** <1% (LoRA adapters only)
* **Adapter checkpoint size:** ~100–200 MB
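As a sanity check on the "<1%" figure, a back-of-the-envelope count with assumed values (rank 16, four adapted attention projections per layer) lands around 0.25% of the base parameters:

```python
# Back-of-the-envelope LoRA parameter count -- the rank and the set of
# adapted projections below are illustrative assumptions, not the
# published training configuration.
d_model = 4096            # CodeLlama-7B hidden size
n_layers = 32             # CodeLlama-7B decoder layers
r = 16                    # hypothetical LoRA rank
adapted_per_layer = 4     # e.g. q/k/v/o attention projections

# Each adapted d x d weight gains A (r x d) and B (d x r): 2 * d * r params.
lora_params = n_layers * adapted_per_layer * 2 * d_model * r
base_params = 6_738_000_000   # approximate CodeLlama-7B parameter count

print(f"{lora_params:,} trainable ({lora_params / base_params:.2%})")
```

At fp16, roughly 16.8M adapter parameters also matches the ~100–200 MB checkpoint size quoted above to within a small factor.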
---
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
* Hand-written Luau prompts
* Roblox-specific scripting scenarios
#### Factors
* Luau syntax correctness
* Roblox API familiarity
* Instruction-following behavior
#### Metrics
* Qualitative human evaluation
* Manual code review and comparison with base model
### Results
The LoRA adapter demonstrates improved performance over the base model in:
* Generating idiomatic Luau
* Correct Roblox service usage
* Following game-development-oriented instructions
#### Summary
The model performs best when used as a Roblox development assistant and is not intended for general-purpose natural language tasks.
---
## Model Examination
No formal interpretability or probing analysis was conducted.
---
## Environmental Impact
Carbon emissions were not formally measured.
* **Hardware Type:** Consumer-grade GPU
* **Hours used:** < 24 hours
* **Cloud Provider:** None (local training)
* **Compute Region:** Not applicable
* **Carbon Emitted:** Not estimated
---
## Technical Specifications
### Model Architecture and Objective
* Decoder-only Transformer
* Next-token prediction objective
* LoRA adapters applied to attention layers
### Compute Infrastructure
#### Hardware
* Single consumer-grade GPU
#### Software
* PyTorch
* Transformers
* PEFT
---
## Citation
**BibTeX:**
```bibtex
@misc{darwinkernelpanic2025luau,
title={CodeLlama 7B Instruct Luau LoRA},
author={darwinkernelpanic},
year={2025},
howpublished={Hugging Face},
note={LoRA fine-tuned for Luau / Roblox scripting}
}
```
**APA:**
darwinkernelpanic. (2025). *CodeLlama 7B Instruct Luau LoRA*. Hugging Face.
---
## Model Card Authors
darwinkernelpanic
## Model Card Contact
Open a discussion in the Hugging Face repository’s Community tab, or contact the author via their profile.
---
### Framework versions
* PEFT 0.18.0