# gemma3-270m-leetcode-gguf

**Original model**: [Codingstark/gemma3-270m-leetcode](https://huggingface.co/Codingstark/gemma3-270m-leetcode)
**Format**: GGUF
**Quantization**: bf16

This is a GGUF conversion of the Codingstark/gemma3-270m-leetcode model, optimized for use with applications like LM Studio, Ollama, and other GGUF-compatible inference engines.

## Usage

Load this model in any GGUF-compatible application by referencing the `.gguf` file.
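As a concrete starting point, the commands below sketch one way to fetch the file and run it with llama.cpp's CLI. The `.gguf` filename is an assumption (check this repository's file listing for the actual name), and `<this-repo-id>` is a placeholder for this repo's Hub id.

```shell
# Download the GGUF file from this repo. The filename here is an
# assumption -- use the actual .gguf file listed in the repository.
huggingface-cli download <this-repo-id> gemma3-270m-leetcode-bf16.gguf --local-dir .

# Run a one-off prompt with llama.cpp's CLI (recent llama.cpp builds
# ship the binary as `llama-cli`; older builds named it `main`).
llama-cli -m gemma3-270m-leetcode-bf16.gguf \
  -p "Write a Python function that reverses a linked list." \
  -n 256
```

In LM Studio, no command line is needed: point the app at the downloaded `.gguf` file (or search for this repo in its model browser) and load it directly.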

## Model Details

- **Original Repository**: Codingstark/gemma3-270m-leetcode
- **Converted Format**: GGUF
- **Quantization Level**: bf16
- **Compatible With**: LM Studio, Ollama, llama.cpp, and other GGUF inference engines
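
For Ollama specifically, a minimal `Modelfile` is enough to import the converted weights. A sketch, assuming the `.gguf` filename used below (substitute the actual file from this repo; the model name `gemma3-leetcode` is arbitrary):

```shell
# Write a minimal Modelfile pointing at the converted weights
# (filename is an assumption; use the actual .gguf from this repo).
cat > Modelfile <<'EOF'
FROM ./gemma3-270m-leetcode-bf16.gguf
EOF

# Register the model locally under a name of your choosing, then run it.
ollama create gemma3-leetcode -f Modelfile
ollama run gemma3-leetcode "Implement two-sum in Python."
```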

## Conversion Process

This model was converted using the llama.cpp conversion scripts with the following settings:
- Input format: Hugging Face Transformers
- Output format: GGUF
- Quantization: bf16
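
The settings above correspond to llama.cpp's standard Hugging Face-to-GGUF conversion flow. A hedged sketch of that flow is below; the script name and flags match recent llama.cpp checkouts (older checkouts used `convert-hf-to-gguf.py` with hyphens), and the local directory and output filename are assumptions:

```shell
# Get llama.cpp for its conversion script and Python dependencies.
git clone https://github.com/ggerganov/llama.cpp
pip install -r llama.cpp/requirements.txt

# Convert a local download of the original checkpoint to GGUF,
# keeping bf16 weights (paths/filenames here are placeholders).
python llama.cpp/convert_hf_to_gguf.py ./gemma3-270m-leetcode \
  --outfile gemma3-270m-leetcode-bf16.gguf \
  --outtype bf16
```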

## License

Please refer to the original model's license terms.