---
license: apache-2.0
base_model: Qwen/Qwen2.5-Coder-14B-Instruct
---
# AutonomusHDL – Verilog-Finetuned Qwen2.5-Coder-14B (GGUF)
**AutonomusHDL** is a fine-tuned version of [Qwen2.5-Coder-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-14B-Instruct) specifically optimized for **Hardware Description Language (HDL)** tasks, with a focus on **Verilog** code generation, completion, and reasoning. The model is provided in GGUF format for efficient local inference via `llama.cpp` and compatible runtimes.
---
## Available Files
| File | Quantization | Size | Use Case |
|---|---|---|---|
| `qwen2.5_coder_14b_instruct_verilog_finetuned_q8.gguf` | Q8_0 | 15.7 GB | Highest quality; requires more RAM/VRAM |
| `Qwen2.5 coder-14B-Q3_K_L.gguf` | Q3_K_L | 7.9 GB | Lighter, faster, lower memory footprint |
> **Recommendation:** Use the **Q8** model if you have ≥16 GB RAM/VRAM for best output quality. Use **Q3_K_L** for systems with limited resources.
---
## Model Details
| Property | Value |
|---|---|
| **Base Model** | Qwen2.5-Coder-14B-Instruct |
| **Fine-tune Domain** | Verilog / HDL Code Generation |
| **Format** | GGUF |
| **License** | Apache 2.0 |
| **Parameters** | 14B |
| **Context Length** | Up to 128K tokens (base model) |
---
## Quickstart
### With `llama.cpp`
```bash
# Clone and build llama.cpp
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp && make
# Run inference
./llama-cli \
-m qwen2.5_coder_14b_instruct_verilog_finetuned_q8.gguf \
-p "Write a Verilog module for a 4-bit synchronous counter with reset." \
-n 512 \
--temp 0.2
```
### With Ollama
```bash
# Create a Modelfile
echo 'FROM ./qwen2.5_coder_14b_instruct_verilog_finetuned_q8.gguf' > Modelfile
# Import and run
ollama create autonomusHDL -f Modelfile
ollama run autonomusHDL
```
### With LM Studio
1. Download one of the `.gguf` files above.
2. Open **LM Studio** → Load Model → select the downloaded file.
3. Start chatting with Verilog prompts directly.
---
## Example Prompts
**Module generation:**
```
Write a Verilog module for a parameterized FIFO with configurable depth and width.
```
**Debugging:**
```
The following Verilog code has a timing issue. Identify and fix it:
[paste your code]
```
**Testbench generation:**
```
Generate a SystemVerilog testbench for a 32-bit ALU module with add, sub, AND, OR, and XOR operations.
```
**FSM design:**
```
Implement a Moore FSM in Verilog for a traffic light controller with states: RED, GREEN, YELLOW.
```
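Prompts like these can also be sent programmatically. A minimal sketch against Ollama's local REST API, assuming the model was imported as `autonomusHDL` as shown above (endpoint and payload fields follow Ollama's `/api/generate` API; a running Ollama server is required for the actual call):

```python
import json
import urllib.request


def build_request(prompt: str, model: str = "autonomusHDL") -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one complete response object instead of a token stream
        "options": {"temperature": 0.2},  # low temperature for more deterministic RTL
    }


def generate(prompt: str, host: str = "http://localhost:11434") -> str:
    """Send a prompt to a local Ollama server and return the reply text."""
    payload = build_request(prompt)
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Usage (requires a running Ollama server):
# reply = generate("Write a Verilog module for a 4-bit synchronous counter with reset.")
```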
---
## Intended Use Cases
- RTL design and Verilog code generation
- HDL code completion and auto-suggestions
- Testbench and assertion generation
- Debugging and explaining existing Verilog/VHDL code
- Learning and educational HDL workflows
- Integration into EDA tool pipelines
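For pipeline use, chat replies usually need the RTL extracted from the surrounding prose before they can be fed to a simulator or linter. A small post-processing sketch (the `module ... endmodule` heuristic is an assumption about typical chat output, not a guarantee of the model's reply format):

```python
import re


def extract_verilog(reply: str) -> str:
    """Pull the first `module ... endmodule` span out of a chat reply.

    Chat models typically wrap code in explanatory prose; downstream
    tools want only the RTL. Returns the reply unchanged (stripped)
    if no module is found.
    """
    match = re.search(r"\bmodule\b.*?\bendmodule\b", reply, re.DOTALL)
    return match.group(0) if match else reply.strip()
```

The extracted text can then be syntax-checked, e.g. with Icarus Verilog (`iverilog -t null design.v`), before it enters a larger flow.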
---
## Hardware Requirements
| Quantization | Min RAM/VRAM | Recommended |
|---|---|---|
| Q8_0 (15.7 GB) | 16 GB | 24 GB+ |
| Q3_K_L (7.9 GB) | 8 GB | 12 GB+ |
For CPU-only inference, ensure you have sufficient system RAM. GPU offloading via `llama.cpp` is supported with CUDA/Metal/Vulkan.
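As a sketch, layer offloading in `llama.cpp` is controlled with `-ngl` / `--n-gpu-layers`; the layer count below is an illustrative guess, not a tuned value (Metal is enabled by default on macOS, while CUDA/Vulkan require the corresponding build options — see the llama.cpp build docs):

```bash
# Offload 40 of the model's layers to the GPU; lower this if you run out of VRAM
./llama-cli \
  -m qwen2.5_coder_14b_instruct_verilog_finetuned_q8.gguf \
  -ngl 40 \
  -p "Write a Verilog module for a parameterized FIFO." \
  -n 512 --temp 0.2
```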
---
## License
This model is released under the **Apache 2.0 License**. The base model weights are subject to the [Qwen2.5 license](https://huggingface.co/Qwen/Qwen2.5-Coder-14B-Instruct/blob/main/LICENSE).
---
## Acknowledgements
- Base model: [Qwen2.5-Coder-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-14B-Instruct) by Alibaba Cloud
- GGUF conversion tooling: [llama.cpp](https://github.com/ggerganov/llama.cpp) by Georgi Gerganov
---
## Contact
For questions, issues, or collaboration, reach out via the [Community tab](https://huggingface.co/Vishvjit2001/autonomusHDL/discussions) on this repository.