---
license: apache-2.0
tags:
- llama
- gguf
- quantized
library_name: transformers
---

# ⚠️ No model files are provided in this repository. This card is generated text only.

# TinyLlama PHP Fine-tuned GGUF

This is a GGUF conversion of the TinyLlama model fine-tuned for PHP code generation.

## Model Details

- **Base Model**: TinyLlama
- **Fine-tuned for**: PHP code generation
- **Format**: GGUF, quantized to q4_0 (see the conversion sketch after this list)
- **Use with**: llama.cpp, Ollama, or other GGUF-compatible inference engines

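For context, a q4_0 GGUF of this kind is typically produced with llama.cpp's own tooling. The sketch below is hypothetical: it assumes a recent llama.cpp checkout, and the paths and output filenames are placeholders, since the actual conversion steps for this repository are not documented.

```bash
# Hypothetical reproduction sketch (paths and filenames are placeholders).
# Convert the fine-tuned Hugging Face model to a full-precision GGUF...
python convert_hf_to_gguf.py ./tinyllama-php --outfile tinyllama-php-f16.gguf

# ...then quantize it to q4_0 with llama.cpp's quantize tool
./llama-quantize tinyllama-php-f16.gguf tinyllama-php-q4_0.gguf q4_0
```
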
## Usage

### With llama.cpp:

```bash
./main -m model.gguf -p "Write a PHP function to"
```

(Newer llama.cpp builds name this binary `llama-cli`.)

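llama.cpp can also serve the model over HTTP, which is convenient for scripted generation. A minimal sketch, assuming a recent build where the binaries carry the `llama-` prefix; the port, prompt, and token count are arbitrary examples:

```bash
# Serve the GGUF over HTTP (assumes a recent llama.cpp build)
./llama-server -m model.gguf --port 8080

# In another shell, request a completion from the server's /completion endpoint
curl http://localhost:8080/completion \
  -d '{"prompt": "Write a PHP function to validate an email address:", "n_predict": 128}'
```
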
### With Ollama:

1. Create a Modelfile:

```
FROM ./model.gguf
```

2. Create and run the model:

```bash
ollama create tinyllama-php -f Modelfile
ollama run tinyllama-php
```

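Once registered, the model is also reachable through Ollama's local HTTP API, which is useful for scripting generations instead of using the interactive prompt. A short example against the standard `/api/generate` endpoint (the prompt is illustrative):

```bash
# Query the registered model via Ollama's HTTP API (default port 11434)
curl http://localhost:11434/api/generate \
  -d '{"model": "tinyllama-php", "prompt": "Write a PHP function to slugify a string", "stream": false}'
```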