---
license: apache-2.0
base_model: Qwen/Qwen2.5-Coder-14B-Instruct
---

# AutonomusHDL – Verilog-Finetuned Qwen2.5-Coder-14B (GGUF)

**AutonomusHDL** is a fine-tuned version of [Qwen2.5-Coder-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-14B-Instruct) optimized for **Hardware Description Language (HDL)** tasks, with a focus on **Verilog** code generation, completion, and reasoning. The model is provided in GGUF format for efficient local inference via `llama.cpp` and compatible runtimes.

---

## Available Files

| File | Quantization | Size | Use Case |
|---|---|---|---|
| `qwen2.5_coder_14b_instruct_verilog_finetuned_q8.gguf` | Q8_0 | 15.7 GB | Highest quality; needs more VRAM/RAM |
| `Qwen2.5 coder-14B-Q3_K_L.gguf` | Q3_K_L | 7.9 GB | Lighter, faster, lower memory footprint |

> **Recommendation:** Use the **Q8_0** model if you have ≥16 GB RAM/VRAM for the best output quality. Use **Q3_K_L** on systems with limited resources.

---

## Model Details

| Property | Value |
|---|---|
| **Base Model** | Qwen2.5-Coder-14B-Instruct |
| **Fine-tune Domain** | Verilog / HDL code generation |
| **Format** | GGUF |
| **License** | Apache 2.0 |
| **Parameters** | 14B |
| **Context Length** | Up to 128K tokens (inherited from the base model) |

---

## Quickstart

### With `llama.cpp`

```bash
# Clone and build llama.cpp (recent versions build with CMake; the old `make` target was removed)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build && cmake --build build --config Release

# Run inference
./build/bin/llama-cli \
  -m qwen2.5_coder_14b_instruct_verilog_finetuned_q8.gguf \
  -p "Write a Verilog module for a 4-bit synchronous counter with reset." \
  -n 512 \
  --temp 0.2
```
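When prompting with a raw `-p` string, output quality is usually better if the prompt follows the base model's ChatML chat template (Qwen2.5 uses `<|im_start|>` / `<|im_end|>` markers). A minimal sketch of a formatter — the helper name `chatml_prompt` and the system message are ours, not part of any library:

```python
def chatml_prompt(user_msg: str,
                  system_msg: str = "You are an expert Verilog/RTL engineer.") -> str:
    """Wrap a request in Qwen2.5's ChatML template for raw-mode prompting."""
    return (
        f"<|im_start|>system\n{system_msg}<|im_end|>\n"
        f"<|im_start|>user\n{user_msg}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(chatml_prompt("Write a Verilog module for a 4-bit synchronous counter with reset."))
```

Alternatively, recent `llama-cli` builds can apply the chat template embedded in the GGUF automatically via conversational mode (`-cnv`), which avoids hand-formatting entirely.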

### With Ollama

```bash
# Create a Modelfile
echo 'FROM ./qwen2.5_coder_14b_instruct_verilog_finetuned_q8.gguf' > Modelfile

# Import and run
ollama create autonomusHDL -f Modelfile
ollama run autonomusHDL
```
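Once imported, the model can also be queried programmatically through Ollama's local REST API (`POST /api/generate` on port 11434). A minimal sketch using only the standard library; the model name matches the `ollama create` step above, and the `build_request` helper is ours:

```python
import json
import urllib.request

def build_request(prompt: str, model: str = "autonomusHDL") -> urllib.request.Request:
    """Build a non-streaming generate request for the local Ollama server."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

if __name__ == "__main__":
    req = build_request("Write a Verilog module for a 2-to-1 multiplexer.")
    with urllib.request.urlopen(req) as resp:  # requires a running `ollama serve`
        print(json.loads(resp.read())["response"])
```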

### With LM Studio

1. Download one of the `.gguf` files above.
2. Open **LM Studio** → Load Model → select the downloaded file.
3. Start chatting with Verilog prompts directly.

---

## Example Prompts

**Module generation:**
```
Write a Verilog module for a parameterized FIFO with configurable depth and width.
```

**Debugging:**
```
The following Verilog code has a timing issue. Identify and fix it:
[paste your code]
```

**Testbench generation:**
```
Generate a SystemVerilog testbench for a 32-bit ALU module with add, sub, AND, OR, and XOR operations.
```

**FSM design:**
```
Implement a Moore FSM in Verilog for a traffic light controller with states: RED, GREEN, YELLOW.
```

---

## Intended Use Cases

- RTL design and Verilog code generation
- HDL code completion and auto-suggestions
- Testbench and assertion generation
- Debugging and explaining existing Verilog/VHDL code
- Learning and educational HDL workflows
- Integration into EDA tool pipelines

---

## Hardware Requirements

| Quantization | Min RAM/VRAM | Recommended |
|---|---|---|
| Q8_0 (15.7 GB) | 16 GB | 24 GB+ |
| Q3_K_L (7.9 GB) | 8 GB | 12 GB+ |

For CPU-only inference, ensure you have sufficient free system RAM for the chosen quantization. GPU offloading via `llama.cpp` is supported with the CUDA, Metal, and Vulkan backends.
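As a rough rule of thumb, total memory is the model file size plus the KV cache, which grows with context length. A back-of-the-envelope sketch — the architecture defaults (48 layers, 8 KV heads of dimension 128, fp16 cache) are our assumption taken from the base model's published config and should be verified there, and the function is illustrative, not a `llama.cpp` API:

```python
def estimated_memory_gib(model_file_gib: float,
                         context_tokens: int,
                         n_layers: int = 48,
                         n_kv_heads: int = 8,
                         head_dim: int = 128,
                         kv_bytes: int = 2) -> float:
    """Rough total: model weights + fp16 KV cache (runtime overhead excluded)."""
    kv_per_token = 2 * n_layers * n_kv_heads * head_dim * kv_bytes  # K and V
    return model_file_gib + (kv_per_token * context_tokens) / 2**30

# Q8_0 file (15.7 GiB) with an 8K context: about 17.2 GiB plus overhead.
print(round(estimated_memory_gib(15.7, 8192), 1))
```

Quantized KV caches (`--cache-type-k` / `--cache-type-v` in `llama.cpp`) can shrink the cache term further at some quality cost.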

---

## License

This model is released under the **Apache 2.0 License**. The base model weights are subject to the [Qwen2.5 license](https://huggingface.co/Qwen/Qwen2.5-Coder-14B-Instruct/blob/main/LICENSE).

---

## Acknowledgements

- Base model: [Qwen2.5-Coder-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-14B-Instruct) by Alibaba Cloud
- GGUF conversion tooling: [llama.cpp](https://github.com/ggerganov/llama.cpp) by Georgi Gerganov

---

## Contact

For questions, issues, or collaboration, reach out via the [Community tab](https://huggingface.co/Vishvjit2001/autonomusHDL/discussions) on this repository.