# tiny-Qwen2ForCausalLM-2.5 - GGUF

This is a GGUF conversion of `trl-internal-testing/tiny-Qwen2ForCausalLM-2.5`.
## Conversion Info

- Precision: F16 (half precision)
- Tool: llama.cpp `convert_hf_to_gguf.py`
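A conversion like the one described above can be reproduced with llama.cpp's `convert_hf_to_gguf.py`. The paths below are placeholders, not the ones actually used for this repo:

```shell
# Clone llama.cpp to get the conversion script and its requirements
git clone https://github.com/ggerganov/llama.cpp
pip install -r llama.cpp/requirements.txt

# Convert a local Hugging Face checkpoint to GGUF at F16 precision
# (./tiny-Qwen2ForCausalLM-2.5 is an assumed local snapshot of the source model)
python llama.cpp/convert_hf_to_gguf.py ./tiny-Qwen2ForCausalLM-2.5 \
  --outfile tiny-Qwen2ForCausalLM-2.5-f16.gguf \
  --outtype f16
```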
## Usage

Download the GGUF file and use it with llama.cpp or any GGUF-compatible inference engine.
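For example, with llama.cpp's CLI the model can be downloaded and run roughly as follows. The filename is an assumption; check the repo's file list for the exact GGUF name:

```shell
# Fetch the GGUF file from the Hub (requires huggingface_hub)
pip install -U huggingface_hub
huggingface-cli download <this-repo-id> \
  tiny-Qwen2ForCausalLM-2.5-f16.gguf --local-dir .

# Run a quick prompt with llama.cpp's CLI
./llama-cli -m tiny-Qwen2ForCausalLM-2.5-f16.gguf -p "Hello" -n 32
```

Note that this is a tiny testing model, so its outputs are not expected to be coherent; it is mainly useful for pipeline and tooling tests.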