Improve model card: Add GGUF usage, paper link, and correct metadata
#1
opened by nielsr (HF Staff)
This PR improves the model card for the QuantFactory/Ahma-3B-GGUF model by:
- Adding `library_name: llama.cpp` to the metadata, which is appropriate for a GGUF artifact and enables the relevant "how to use" section.
- Removing the incorrect `inference: false` metadata tag.
- Adding `gguf` to the model tags for better discoverability.
- Prominently linking to the paper "Scaling Data-Constrained Language Models", which influenced the training of the base `Ahma-3B` model.
- Providing a clear usage example for the GGUF model using `llama-cpp-python`.
- Clarifying the existing `transformers` usage section to indicate it is for the original (non-GGUF) `Finnish-NLP/Ahma-3B` model.
- Adding a direct link to the `llama.cpp` GitHub repository in the introduction.
These changes help users understand how best to use this specific GGUF model and provide clearer context for its development.
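The usage example added by the PR is not reproduced in this thread; a minimal sketch of loading a GGUF quant with `llama-cpp-python` might look like the following. The quant filename pattern and the prompt are assumptions for illustration, not taken from the repository's actual file list.

```python
# Hypothetical sketch: run a GGUF quant of Ahma-3B with llama-cpp-python.
# Install first with: pip install llama-cpp-python
from llama_cpp import Llama

# from_pretrained downloads a matching GGUF file from the Hub;
# the Q4_K_M filename glob is an assumed quant level, adjust to what the repo offers.
llm = Llama.from_pretrained(
    repo_id="QuantFactory/Ahma-3B-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=2048,
)

# Simple text completion; Ahma is a Finnish model, so a Finnish prompt is used here.
out = llm("Suomi on maa, jossa", max_tokens=64)
print(out["choices"][0]["text"])
```

`Llama.from_pretrained` caches the download locally, so subsequent runs reuse the same file; passing `model_path=` to `Llama(...)` directly also works if the GGUF file has already been fetched.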
munish0838 changed pull request status to merged