Improve model card: Add SuffixDecoding context, vLLM usage, project & repo links
#5 opened by nielsr (HF Staff)
This PR improves the model card for Llama-3.1-SwiftKV-8B-Instruct by integrating information about its acceleration with SuffixDecoding.
Specifically, it adds:
- The `pipeline_tag: text-generation` metadata, to enhance discoverability.
- The `library_name: vllm` metadata, reflecting the model's primary usage context and enabling automated vLLM code snippets within the Hub.
- A clear link to the SuffixDecoding paper.
- A link to the SuffixDecoding project page.
- A link to the Arctic Inference GitHub repository, which implements SuffixDecoding and SwiftKV.
- A detailed "Sample Usage" section with vLLM code snippets, directly sourced from the project's GitHub README, to guide users on how to deploy and use the model with these optimizations.
- Updated internal links, an added SuffixDecoding overview, and improved readability of the evaluation tables.
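The two metadata additions amount to a small YAML front-matter block at the top of the model card's README; a minimal sketch containing just the fields named in this PR:

```yaml
---
pipeline_tag: text-generation
library_name: vllm
---
```

For the "Sample Usage" section, the authoritative snippets are the ones in the Arctic Inference README; as a rough illustration, offline inference with vLLM's standard `LLM` API looks like the sketch below (the model ID is assumed from the card's title, and a CUDA-capable GPU with vLLM installed is required):

```python
from vllm import LLM, SamplingParams

# Model ID assumed from the model card; see the Arctic Inference README
# for the exact deployment flags enabling SwiftKV and SuffixDecoding.
llm = LLM(model="Snowflake/Llama-3.1-SwiftKV-8B-Instruct")
params = SamplingParams(temperature=0.0, max_tokens=128)
outputs = llm.generate(["What is SwiftKV?"], params)
print(outputs[0].outputs[0].text)
```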
These changes ensure the model card provides comprehensive information and actionable steps for users interested in leveraging these inference optimizations.
Please review and merge if everything looks good.