Efficient Training of Language Models to Fill in the Middle
Paper
•
2207.14255
•
Published
•
1
A specialized fine tune of Qwen/Qwen3-4B-Instruct-2507 trained specifically for FIM Luau code based on "Efficient Training of Language Models to Fill in the Middle" by Mohammad Bavarian et al., 2022. Instead of being a chatbot, it performs Luau autocomplete.
<|repo_name|> and <|file_sep|> are technically optional, but you will get better responses when they are included.
If using a chat API:
[
{ "role": "system", "content": "You are a code completion assistant." },
{ "role": "user", "content": f"<|repo_name|>{reponame}<|file_sep|>{filename}<|fim_suffix|>{suffix}<|fim_prefix|>{prefix}<|fim_middle|>" }
]
If using a completions API, you'll need to essentially bake the chat template in:
prompt = f"<|im_start|>system\nYou are a code completion assistant.<|im_end|>\n<|im_start|>user\n<|repo_name|>{reponame}<|file_sep|>{filename}<|fim_suffix|>{suffix}<|fim_prefix|>{prefix}<|fim_middle|><|im_end|>\n<|im_start|>assistant\n"
Here is an example config.yaml for using this with Continue.dev for autocomplete in VSCode backed by LM Studio:
name: Local Autocomplete
version: 1.0.0
schema: v1
models:
- name: Luau Qwen3 4B FIM v0.1
provider: lmstudio
apiBase: http://localhost:1234/v1
model: luau-qwen3-4b-fim-v0.1
roles:
- autocomplete
defaultCompletionOptions:
stop: [
"<|im_end|>",
"</s>",
"<|repo_name|>",
"<|file_sep|>",
"```"
]
promptTemplates:
autocomplete: "<|im_start|>system\nYou are a code completion assistant.<|im_end|>\n<|im_start|>user\n<|repo_name|>{{{reponame}}}<|file_sep|>{{{filename}}}<|fim_suffix|>{{{suffix}}}<|fim_prefix|>{{{prefix}}}<|fim_middle|><|im_end|>\n<|im_start|>assistant\n"
Source: TorpedoSoftware/the-luau-stack
Dynamic GGUF quantizations provided at sizes ranging approximately from 2 to 4 GB.