Mixtral-8x7B-Instruct-v0.1-W4A8-AWQ
This repository contains Mixtral-8x7B-Instruct-v0.1 quantized to W4A8 (4-bit weights, 8-bit activations) using AWQ (Activation-aware Weight Quantization) with NVIDIA's TensorRT Model Optimizer (ModelOpt).