---
license: apache-2.0
base_model:
- LLM360/K2-V2-Instruct
tags:
- llmcompressor
---
|
|
This is [LLM360/K2-V2-Instruct](https://huggingface.co/LLM360/K2-V2-Instruct) quantized with [LLM Compressor](https://github.com/vllm-project/llm-compressor) using the recipe in the `recipe.yaml` file. The model is compatible with vLLM (tested with v0.12.0 on an RTX Pro 6000).
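A minimal offline-inference sketch with vLLM, assuming vLLM v0.12.0 (or newer) is installed and a GPU with sufficient VRAM is available. The model ID below is a placeholder; replace it with this repository's ID:

```python
# Hypothetical usage sketch; "<this-repo-id>" is a placeholder for this
# repository's Hugging Face model ID, not a real identifier.
from vllm import LLM, SamplingParams

# Load the quantized checkpoint; vLLM picks up the compression format
# from the model's config.
llm = LLM(model="<this-repo-id>")

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["What is quantization in one sentence?"], params)
print(outputs[0].outputs[0].text)
```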
- **Developed by:** [The Kaitchup](https://kaitchup.substack.com/) |
- **License:** Apache 2.0
## How to Support My Work |
Subscribe to [The Kaitchup](https://kaitchup.substack.com/subscribe). Subscriptions help me a lot to continue quantizing and evaluating models for free. Or you can "[buy me a coffee on Ko-fi](https://ko-fi.com/bnjmn_marie)".