Update README.md

README.md CHANGED

```diff
@@ -2,6 +2,7 @@
 license: mit
 base_model:
 - inclusionAI/Ling-mini-2.0
+base_model_relation: quantized
 pipeline_tag: text-generation
 tags:
 - chatllm.cpp
@@ -24,7 +25,7 @@ language:
 
 # Ling‑Mini‑2.0 — ChatLLM.cpp Quantizations (Q4_0 and Q8_0)
 
-Author and distribution: [Riverkan](https://
+Author and distribution: [Riverkan](https://riverkan.com)
 
 This repository provides CPU/GPU-friendly quantized builds of Ling‑Mini‑2.0 for [ChatLLM.cpp](https://github.com/foldl/chatllm.cpp). It is not a LLaMA model, is not affiliated with Meta, and does not use the LLaMA license. Files are distributed in ChatLLM.cpp’s GGML-based format (.bin), ready for local inference.
 
```
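The .bin files described above are meant to be downloaded from the Hub and fed straight to a local ChatLLM.cpp build. A minimal sketch, assuming hypothetical repo and file names (check the repository's actual file list) and chatllm.cpp's documented `-m`/`-i` flags:

```shell
# Minimal sketch: locate a quantized build and run it locally.
# REPO and FILE are hypothetical placeholders -- check the repo's
# file list on Hugging Face for the actual names.
REPO="Riverkan/Ling-Mini-2.0-chatllm.cpp"
FILE="ling-mini-2.0-q4_0.bin"

# Build the direct download URL (Hugging Face's resolve endpoint).
URL="https://huggingface.co/${REPO}/resolve/main/${FILE}"
echo "$URL"

# Fetch the file and run it with a locally built chatllm.cpp binary
# (-m selects the model file, -i starts interactive chat; verify the
# flag names against your build's --help):
#   curl -LO "$URL"
#   ./build/bin/main -m "$FILE" -i
```

As a rule of thumb, Q4_0 trades some output quality for a smaller footprint, while Q8_0 is larger but stays closer to the original weights.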