Quantized using lmdeploy's auto_awq. Going to run benchmarks to see performance.
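For anyone wanting to reproduce the quantization, here is a minimal sketch of the lmdeploy auto_awq invocation. The source repo path and work dir are placeholders, and the 4-bit / group-size-128 settings are lmdeploy's common defaults, not confirmed settings for this particular run:

```bash
# Quantize the source FP16 model to 4-bit AWQ with lmdeploy.
# <source-model> is a placeholder for the original model repo or local path.
lmdeploy lite auto_awq <source-model> \
  --w-bits 4 \
  --w-group-size 128 \
  --work-dir ./model-awq
```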
Ran it on 4x A4000 GPUs with 32k context. Got 30 t/s per request.
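A sketch of a serving setup matching the benchmark above, assuming the standard lmdeploy api_server CLI; tensor parallelism across the four A4000s and the 32k session length are set explicitly:

```bash
# Serve the AWQ weights across 4 GPUs with a 32k context window.
lmdeploy serve api_server ./model-awq \
  --model-format awq \
  --tp 4 \
  --session-len 32768
```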