Update README.md
README.md CHANGED

@@ -7,7 +7,7 @@ license: cc-by-sa-4.0
 <!-- Provide a quick summary of what the model is/does. -->


-**slim-summary-tool** is a 4_K_M quantized GGUF version of slim-summary, providing a small, fast inference implementation,
+**slim-summary-tool** is a 4_K_M quantized GGUF version of slim-summary, providing a small, fast inference implementation for high-quality summarization of complex business documents on a small, specialized, locally deployable model, with the summary output structured as a Python list of key points.

 The size of the self-contained GGUF model binary is 1.71 GB, which is small enough to run locally on a CPU with reasonable inference speed, and the model has been optimized to maximize output quality while remaining deployable on a local machine.
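
As a quick, hedged illustration of how the quantized binary might be run locally, the sketch below uses llama-cpp-python to load the GGUF file on CPU and parse the returned list of key points. The model file name, prompt template, sample document, and generation settings here are illustrative assumptions, not values taken from this model card; consult the repository documentation for the exact invocation.

```python
import ast

from llama_cpp import Llama  # pip install llama-cpp-python

# Load the self-contained GGUF binary on CPU; the file name below is an
# assumption -- substitute the actual name of the downloaded 1.71 GB file.
llm = Llama(model_path="slim-summary-tool.gguf", n_ctx=2048, verbose=False)

# Hypothetical sample input standing in for a real business document.
document = (
    "Acme Corp reported third-quarter revenue of $12.4M, up 9% year over "
    "year, driven by growth in its subscription business, while operating "
    "costs rose 4% on increased hiring in the engineering organization."
)

# Placeholder prompt format -- the exact template the model was trained on
# should be taken from the model card, not from this sketch.
prompt = f"<human>: {document}\n<summary>\n<bot>:"

out = llm(prompt, max_tokens=256, temperature=0.0)
raw = out["choices"][0]["text"].strip()

# The card describes the summary output as a Python list of key points,
# so the raw completion can be parsed directly into a list object.
key_points = ast.literal_eval(raw)
for point in key_points:
    print("-", point)
```

Because the output is structured as a plain Python list, it drops into downstream pipelines (deduplication, ranking, storage) with no custom parsing beyond `ast.literal_eval`.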