Kyutai Moshika 7B GGUF
This model was converted and quantized from kyutai/moshika-pytorch-bf16 to GGUF using moshi.cpp.
To use the model and learn more, see the moshi.cpp project.
Model tree for Codes4Fun/moshika-q4_k-GGUF
- Base model: kyutai/moshika-pytorch-bf16