zenpeach committed
Commit 40e27e9 · verified · 1 parent: 48afbb3

Update README.md

Files changed (1): README.md (+1 -2)
README.md CHANGED
@@ -12,7 +12,7 @@ The models provided here are intended for **local inference** and are suitable f
 ## Model Details
 
 - Base models: IBM Granite 4
-- Variants provided: 1B and 3B
+- Variants provided: Micro 3B
 - Format: GGUF
 - Quantization: 4-bit (model-specific, see table below)
 - Intended use: Local inference, code understanding, general-purpose chat
@@ -23,7 +23,6 @@ The models provided here are intended for **local inference** and are suitable f
 
 ## Quantization Process
 
-- The **1B Granite 4** model is provided directly from **IBM’s GGUF release**.
 - The **3B Granite 4 Micro** model is quantized using **Unsloth** tooling.
 - No additional fine-tuning, rebalancing, or prompt modification was applied.
 - Quantization parameters were not altered from their original sources.