Tags: Transformers, Safetensors, llama, speculative-decoding, eagle3, draft-model, kimi-k2.5, fp8, amd-quark, quantized, no-lm-head-quantization, text-generation-inference, quark
Instructions for using amd/kimi-k2.5-eagle3-fp8 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
  - Transformers
How to use amd/kimi-k2.5-eagle3-fp8 with Transformers:
```python
# Load model directly
from transformers import AutoTokenizer, LlamaForCausalLMEagle3

tokenizer = AutoTokenizer.from_pretrained("amd/kimi-k2.5-eagle3-fp8")
model = LlamaForCausalLMEagle3.from_pretrained("amd/kimi-k2.5-eagle3-fp8")
```
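
Once the checkpoint loads, it behaves like any other causal LM in Transformers. The sketch below is a minimal, non-authoritative generation example, assuming the `LlamaForCausalLMEagle3` class from the snippet above is available in your installed Transformers version; the prompt, dtype, and device settings are illustrative. As the tags indicate, this is an EAGLE3 draft model intended for speculative decoding alongside the full Kimi-K2.5 target model, so standalone generation is mainly a quick sanity check.

```python
# Minimal generation sketch (assumes the load snippet above works in your environment).
import torch
from transformers import AutoTokenizer, LlamaForCausalLMEagle3

model_id = "amd/kimi-k2.5-eagle3-fp8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = LlamaForCausalLMEagle3.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the quantized/low-precision weights as stored
    device_map="auto",    # place the model on available GPUs, if any
)

# Run a short standalone generation as a sanity check; in a speculative-decoding
# setup this draft model would instead propose tokens for the target model.
inputs = tokenizer("Speculative decoding works by", return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```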
- Notebooks
  - Google Colab
  - Kaggle