🔌 AutonomusHDL: Verilog-Finetuned Qwen2.5-Coder-14B (GGUF)

AutonomusHDL is a fine-tuned version of Qwen2.5-Coder-14B-Instruct specifically optimized for Hardware Description Language (HDL) tasks, with a focus on Verilog code generation, completion, and reasoning. The model is provided in GGUF format for efficient local inference via llama.cpp and compatible runtimes.


📦 Available Files

| File | Quantization | Size | Use Case |
|------|--------------|------|----------|
| qwen2.5_coder_14b_instruct_verilog_finetuned_q8.gguf | Q8_0 | 15.7 GB | Highest quality; needs more VRAM/RAM |
| Qwen2.5 coder-14B-Q3_K_L.gguf | Q3_K_L | 7.9 GB | Lighter and faster; lower memory footprint |

Recommendation: use the Q8_0 model if you have ≥16 GB of RAM/VRAM for the best output quality; use Q3_K_L on systems with limited resources.


🧠 Model Details

| Property | Value |
|----------|-------|
| Base Model | Qwen2.5-Coder-14B-Instruct |
| Fine-tune Domain | Verilog / HDL code generation |
| Format | GGUF |
| License | Apache 2.0 |
| Parameters | 14B |
| Context Length | Up to 128K tokens (inherited from the base model) |

🚀 Quickstart

With llama.cpp

```shell
# Clone and build llama.cpp (the project builds with CMake)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release

# Run inference
./build/bin/llama-cli \
  -m qwen2.5_coder_14b_instruct_verilog_finetuned_q8.gguf \
  -p "Write a Verilog module for a 4-bit synchronous counter with reset." \
  -n 512 \
  --temp 0.2
```
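When passing a raw prompt to llama-cli with -p, output quality is usually better if the prompt is wrapped in the chat template the instruct model was trained on. A minimal sketch of Qwen2.5's ChatML format (the helper name and system message here are illustrative, not part of this model card):

```python
def to_chatml(user_prompt: str,
              system: str = "You are a Verilog coding assistant.") -> str:
    """Wrap a raw prompt in Qwen2.5's ChatML chat template."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user_prompt}<|im_end|>\n"
        "<|im_start|>assistant\n"  # the model continues generating from here
    )

prompt = to_chatml("Write a Verilog module for a 4-bit synchronous counter with reset.")
```

Runtimes such as Ollama and LM Studio apply this template automatically, so manual wrapping is only needed for raw completion calls.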

With Ollama

```shell
# Create a Modelfile
echo 'FROM ./qwen2.5_coder_14b_instruct_verilog_finetuned_q8.gguf' > Modelfile

# Import and run
ollama create autonomusHDL -f Modelfile
ollama run autonomusHDL
```
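The one-line Modelfile works, but Ollama also lets you bake in sampling parameters and the chat template. A fuller sketch (the temperature and context values are suggestions, not tuned settings):

```
FROM ./qwen2.5_coder_14b_instruct_verilog_finetuned_q8.gguf

# Low temperature suits deterministic HDL generation
PARAMETER temperature 0.2
PARAMETER num_ctx 8192

# Qwen2.5's ChatML chat template
TEMPLATE """<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""
SYSTEM "You are a Verilog coding assistant."
```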

With LM Studio

  1. Download one of the .gguf files above.
  2. Open LM Studio → Load Model → select the downloaded file.
  3. Start chatting with Verilog prompts directly.

💡 Example Prompts

Module generation:

Write a Verilog module for a parameterized FIFO with configurable depth and width.

Debugging:

The following Verilog code has a timing issue. Identify and fix it:
[paste your code]

Testbench generation:

Generate a SystemVerilog testbench for a 32-bit ALU module with add, sub, AND, OR, and XOR operations.

FSM design:

Implement a Moore FSM in Verilog for a traffic light controller with states: RED, GREEN, YELLOW.
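Prompts like these can also be scripted against a local OpenAI-compatible endpoint (llama.cpp's llama-server and Ollama both expose /v1/chat/completions; the model name below is an assumption for illustration). A sketch of building the request payloads:

```python
import json

PROMPTS = [
    "Write a Verilog module for a parameterized FIFO with configurable depth and width.",
    "Generate a SystemVerilog testbench for a 32-bit ALU module with add, sub, AND, OR, and XOR operations.",
]

def build_request(prompt: str, model: str = "autonomusHDL") -> dict:
    """Build a /v1/chat/completions payload; low temperature suits HDL generation."""
    return {
        "model": model,
        "temperature": 0.2,
        "messages": [{"role": "user", "content": prompt}],
    }

payloads = [json.dumps(build_request(p)) for p in PROMPTS]
```

Each payload can then be POSTed to the running server with any HTTP client.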

🎯 Intended Use Cases

  • RTL design and Verilog code generation
  • HDL code completion and auto-suggestions
  • Testbench and assertion generation
  • Debugging and explaining existing Verilog/VHDL code
  • Learning and educational HDL workflows
  • Integration into EDA tool pipelines

βš™οΈ Hardware Requirements

| Quantization | Min RAM/VRAM | Recommended |
|--------------|--------------|-------------|
| Q8_0 (15.7 GB) | 16 GB | 24 GB+ |
| Q3_K_L (7.9 GB) | 8 GB | 12 GB+ |

For CPU-only inference, ensure you have sufficient system RAM. GPU offloading via llama.cpp is supported with CUDA/Metal/Vulkan.
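As a rough rule of thumb, total memory use is approximately the GGUF file size plus a few gigabytes for the KV cache and runtime overhead (the 2 GB figure below is an assumption; actual KV-cache size grows with context length). A quick estimate:

```python
def estimated_ram_gb(gguf_size_gb: float, overhead_gb: float = 2.0) -> float:
    """Rough memory estimate: model weights plus KV-cache/runtime overhead."""
    return round(gguf_size_gb + overhead_gb, 1)

print(estimated_ram_gb(15.7))  # Q8_0 file
print(estimated_ram_gb(7.9))   # Q3_K_L file
```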


📜 License

This model is released under the Apache 2.0 License. The base model weights are subject to the Qwen2.5 license.


πŸ™ Acknowledgements


βœ‰οΈ Contact

For questions, issues, or collaboration, reach out via the Community tab on this repository.
