---
license: apache-2.0
base_model: syntheticlab/fix-json
tags:
- llama-cpp
- gguf
- lora
- merged-model
---

# fix-json (LoRA Merged GGUF)

**Model creator:** [syntheticlab](https://huggingface.co/syntheticlab)
**Base model required:** [syntheticlab/fix-json](https://huggingface.co/syntheticlab/fix-json) (Meta-Llama-3.1-8B-Instruct)
**GGUF conversion & quantization:** by [CronoBJS](https://huggingface.co/CronoBJS) using [`llama.cpp`](https://github.com/ggerganov/llama.cpp)

⚠️ **Important:** This file is a **merged LoRA adapter in GGUF format**. It is **not** a standalone model. You **must** also have the original base GGUF model (e.g., `Meta-Llama-3.1-8B-Instruct.Q4_K_M.gguf`) to run it. Use the `--lora` or `--lora-scaled` flags in `llama.cpp` to apply it at runtime.

---

## Special thanks

🙏 Thanks to [Georgi Gerganov](https://github.com/ggerganov) and the [llama.cpp team](https://github.com/ggerganov/llama.cpp) for making all of this possible.

---

## Running the Model

### 1️⃣ llama.cpp CLI

```bash
llama-cli -m "path/to/Meta-Llama-3.1-8B-Instruct.Q4_K_M.gguf" \
  --lora "path/to/syntheticlab-fix-json-lora.gguf" \
  -c 2048 -n 256 \
  -p "Your prompt here"
```
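
If you want to control how strongly the adapter influences the base model, you can use `--lora-scaled` instead of `--lora`; it takes the adapter path followed by a scale factor. A minimal sketch (paths are placeholders, and a scale of `1.0` behaves like plain `--lora`):

```bash
# Apply the fix-json adapter at half strength.
llama-cli -m "path/to/Meta-Llama-3.1-8B-Instruct.Q4_K_M.gguf" \
  --lora-scaled "path/to/syntheticlab-fix-json-lora.gguf" 0.5 \
  -c 2048 -n 256 \
  -p "Your prompt here"
```

Lower scales blend less of the fine-tune into the base model's behavior at inference time; values above `1.0` amplify it, though very high scales can degrade output quality.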
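
The same flags are also accepted by `llama-server`, the HTTP server bundled with `llama.cpp`, if you would rather serve the model over an API. An untested sketch with placeholder paths:

```bash
# Serve the base model with the adapter applied, listening on port 8080.
llama-server -m "path/to/Meta-Llama-3.1-8B-Instruct.Q4_K_M.gguf" \
  --lora "path/to/syntheticlab-fix-json-lora.gguf" \
  -c 2048 --port 8080
```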