---
license: apache-2.0
base_model:
- LiquidAI/LFM2.5-1.2B-Instruct
tags:
- llm-compressor
---
This is [LFM2.5-1.2B-Instruct](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct) quantized to FP8 with [llm-compressor](https://github.com/vllm-project/llm-compressor). The model is compatible with vLLM (tested with v0.13.0 on an RTX 4090).
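
Because llm-compressor stores quantization parameters in the checkpoint's config (compressed-tensors format), vLLM can load the model directly with no extra flags. A minimal serving sketch; the model ID below is a placeholder, so substitute this repository's actual name:

```shell
# Launch an OpenAI-compatible server; vLLM detects the FP8
# compressed-tensors format from the checkpoint's config.
# "your-org/LFM2.5-1.2B-Instruct-FP8" is a placeholder model ID.
vllm serve your-org/LFM2.5-1.2B-Instruct-FP8 --max-model-len 4096

# Query the server via the OpenAI-compatible chat endpoint:
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "your-org/LFM2.5-1.2B-Instruct-FP8",
       "messages": [{"role": "user", "content": "Hello!"}]}'
```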


- **Developed by:** [The Kaitchup](https://kaitchup.substack.com/)
- **License:** Apache 2.0

## How to Support My Work
[Buy me a coffee on Ko-fi](https://ko-fi.com/bnjmn_marie)

Subscribe to [The Kaitchup](https://kaitchup.substack.com/subscribe). Subscriptions help me continue quantizing and evaluating models for free.