Commit ec8ff41 by Fredithefish (verified; parent: 0d68f27): Create README.md

---
base_model:
- vicharai/Vicoder-html-32B-preview
library_name: transformers
tags:
- gguf
- code
- llamacpp
---
# ViCoder-HTML-32B-preview Quantizations

## Overview

[`ViCoder-HTML-32B-preview`](https://huggingface.co/vicharai/ViCoder-html-32B-preview) is a code-generation model that produces complete websites, including HTML, Tailwind CSS, and JavaScript. This repository provides GGUF quantizations of the model for llama.cpp-based runtimes.
## Model Quantizations

This model is available in several quantizations, each trading file size against output quality. Choose the one that best fits your memory budget and quality requirements.

| **Quantization** | **Size (GB)** | **Expected Quality** | **Notes** |
|-----------------------|---------------|--------------------------------------------------------|---------------------------------------------------------|
| **Q8_0** | 34.8 | 🟢 *Very good – nearly full precision* | 8-bit quantization; very close to full precision for most tasks. |
| **Q6_K** | 26.9 | 🟢 *Good – retains most performance* | 6-bit quantization; high quality, efficient for most applications. |
| **Q4_K_M** | 19.9 | 🟡 *Moderate – usable with minor degradation* | 4-bit quantization; good trade-off between quality and size. |
| **Q3_K_M** | 15.9 | 🟠 *Lower – may lose accuracy; best for small RAM* | 3-bit quantization; lowest quality of the set, for minimal memory use. |

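The sizes above track the effective bits per weight of each scheme. As a rough sanity check (a sketch, not official figures — the bits-per-weight values below are approximations typical of llama.cpp k-quants, not measurements from this repository):

```python
# Rough GGUF size estimate: size_bytes ≈ parameter_count * bits_per_weight / 8.
# The effective bits-per-weight values are assumptions based on typical
# llama.cpp quantization overheads; they are not taken from this repo.
PARAMS = 32e9  # ~32B parameters

approx_bits = {"Q8_0": 8.5, "Q6_K": 6.6, "Q4_K_M": 4.85, "Q3_K_M": 3.9}

for name, bits in approx_bits.items():
    gb = PARAMS * bits / 8 / 1e9
    print(f"{name}: ~{gb:.1f} GB")
```

The estimates land within about 1 GB of the table's sizes, which is a useful rule of thumb when deciding whether a given quantization will fit in your RAM or VRAM.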
## Features

- **Full Website Generation**: Produces HTML with Tailwind CSS and JavaScript for modern, responsive websites.
- **Flexible Quantization**: Choose from several quantization variants to fit your hardware and performance requirements.
- **Ease of Use**: Straightforward to run with [llama.cpp](https://github.com/ggerganov/llama.cpp) and [Ollama](https://github.com/ollama/ollama).
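
To run one of these quantizations with Ollama, a minimal Modelfile is enough. This is a sketch: the GGUF filename and the temperature setting below are assumptions, so substitute the actual file you downloaded from this repository.

```
# Modelfile — minimal sketch; filename and parameter are assumptions.
FROM ./ViCoder-html-32B-preview.Q4_K_M.gguf
PARAMETER temperature 0.7
```

With a Modelfile like this, `ollama create vicoder-html -f Modelfile` registers the model (the name `vicoder-html` is an arbitrary choice) and `ollama run vicoder-html` starts an interactive session. Alternatively, the same GGUF file can be passed directly to llama.cpp's `llama-cli` via its `-m` flag.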