16-bit GGUF version of https://huggingface.co/tiiuae/falcon-7b-instruct. For quantized versions, see https://huggingface.co/models?search=thebloke/falcon-7b
Model tree for pcuenq/falcon-7b-instruct-gguf
Base model: tiiuae/falcon-7b-instruct