Improve model card: Add Tequila paper, Transformers usage, license, and updated tags
#2 by nielsr (HF Staff)
This PR significantly enhances the model card by:
- Linking to the paper Tequila: Trapping-free Ternary Quantization for Large Language Models and the specific Tequila implementation on GitHub.
- Adding `pipeline_tag: text-generation` for better discoverability.
- Specifying `library_name: transformers` to enable the automated inference widget and reflect the model's compatibility.
- Updating the `license` to `apache-2.0`, as per the project's GitHub repository.
- Adding relevant `tags` such as `llama`, `tequila`, `quantization`, `ternary-quantization`, and `speculative-decoding` to accurately describe the model's characteristics and methods.
- Including a "Sample Usage" section with a Python code snippet for `eagenerate` (speculative decoding), taken directly from the AngelSlim GitHub README, demonstrating how to use the model with `transformers.AutoTokenizer`.
- Updating the "Latest Updates" section to include the Tequila implementation release.
- Adjusting the Table of Contents for the new "Sample Usage" section.
These improvements provide more comprehensive and structured information for users.
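For reference, the metadata changes described above correspond to a model-card YAML front matter block along these lines (a sketch based on the fields listed in this PR; the exact tag list and field order in the actual diff may differ):

```yaml
---
pipeline_tag: text-generation
library_name: transformers
license: apache-2.0
tags:
  - llama
  - tequila
  - quantization
  - ternary-quantization
  - speculative-decoding
---
```

The Hub reads this block at the top of `README.md` to populate the model page: `pipeline_tag` controls where the model appears in task filters, `library_name` selects the inference widget and the auto-generated usage snippet, and `license` and `tags` feed search and filtering.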