Paper: QLoRA: Efficient Finetuning of Quantized LLMs (arXiv:2305.14314)
This model is a QLoRA fine-tuned version of deepseek-ai/deepseek-coder-1.3b-instruct, designed for the domain of Verilog RTL synthesis. It accepts natural-language descriptions of digital circuits and generates Verilog code modules.
Trained with the `transformers` Trainer API. Libraries: `transformers`, `peft`, `accelerate`, `bitsandbytes`.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("louijiec/veriforge-deepseek-coder-1.3b-instruct")
tokenizer = AutoTokenizer.from_pretrained("louijiec/veriforge-deepseek-coder-1.3b-instruct")

prompt = """### Task: Synthesize Verilog
Design a 2-to-1 multiplexer using behavioral modeling.
### Verilog Code:"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
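Since the model was fine-tuned with QLoRA, it can also be loaded in 4-bit via `bitsandbytes` to cut memory use. A minimal sketch; the quantization settings below are common NF4 defaults chosen for illustration, not values confirmed by this card:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Assumed 4-bit settings (NF4 with double quantization), not taken from the training run.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

# Requires `accelerate` for device_map="auto".
model = AutoModelForCausalLM.from_pretrained(
    "louijiec/veriforge-deepseek-coder-1.3b-instruct",
    quantization_config=bnb_config,
    device_map="auto",
)
```

Generation then works exactly as in the snippet above; only the loading step changes.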
This model has been sanity-checked with prompt-based outputs that are expected to include the structural Verilog keywords (`module`, `input`, `output`, `assign`, `endmodule`). For functional verification, use Icarus Verilog or Verilator to simulate the generated output.
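The keyword sanity check described above can be sketched as a small helper; the function name and sample module here are illustrative, not part of the released code:

```python
# Structural Verilog keywords the card says a valid generation should contain.
EXPECTED_KEYWORDS = ["module", "input", "output", "assign", "endmodule"]

def passes_sanity_check(generated: str) -> bool:
    """Hypothetical check: True if every expected keyword appears in the output."""
    return all(kw in generated for kw in EXPECTED_KEYWORDS)

# Example generated output for the 2-to-1 mux prompt (illustrative).
sample = """module mux2to1(input a, input b, input sel, output y);
  assign y = sel ? b : a;
endmodule"""

print(passes_sanity_check(sample))  # True
```

A check like this only confirms surface structure; simulating the module with Icarus Verilog or Verilator remains the real functional test.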