---
license: apache-2.0
base_model:
- LiquidAI/LFM2.5-1.2B-Instruct
tags:
- llm-compressor
---
|
|
This is [LFM2.5-1.2B-Instruct](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct) quantized to FP8 with [llm-compressor](https://github.com/vllm-project/llm-compressor). The model is compatible with vLLM (tested with v0.13.0 on an RTX 4090).
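
A minimal inference sketch with vLLM is shown below. The repository id is a placeholder (an assumption, since this card does not state the checkpoint's id); replace it with this model's actual Hugging Face id.

```python
from vllm import LLM, SamplingParams

# Placeholder repo id (assumption): substitute this model's actual Hugging Face id.
llm = LLM(model="kaitchup/LFM2.5-1.2B-Instruct-FP8")

# Chat-style generation; the FP8 checkpoint loads like any other vLLM model.
messages = [{"role": "user", "content": "Explain FP8 quantization in two sentences."}]
params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.chat(messages, params)
print(outputs[0].outputs[0].text)
```

Note that native FP8 activation kernels require an FP8-capable GPU (Ada or Hopper, such as the RTX 4090 used for testing); on older GPUs, vLLM generally falls back to weight-only FP8 kernels.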
|
- **Developed by:** [The Kaitchup](https://kaitchup.substack.com/)
- **License:** Apache 2.0
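
For reproducibility, the sketch below shows a typical data-free FP8 one-shot run with llm-compressor. The `FP8_DYNAMIC` scheme (static per-channel FP8 weights, dynamic per-token FP8 activations) and the save path are assumptions; the exact recipe used for this checkpoint is not included in this card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "LiquidAI/LFM2.5-1.2B-Instruct"
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Assumed recipe: FP8 weights and dynamic FP8 activations on all Linear layers,
# keeping lm_head in higher precision. FP8_DYNAMIC is data-free, so no
# calibration dataset is needed.
recipe = QuantizationModifier(targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"])
oneshot(model=model, recipe=recipe)

SAVE_DIR = "LFM2.5-1.2B-Instruct-FP8"  # illustrative output path
model.save_pretrained(SAVE_DIR, save_compressed=True)
tokenizer.save_pretrained(SAVE_DIR)
```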
|
## How to Support My Work
|
|
"[buy me a kofi](https://ko-fi.com/bnjmn_marie)" |
|
|
|
|
|
Subscribe to [The Kaitchup](https://kaitchup.substack.com/subscribe). It helps me a lot to keep quantizing and evaluating models for free.
|
|
|