Improve model card: Add pipeline tag, paper/code links, usage, and detailed info
#1 opened by nielsr (HF Staff)
This PR significantly improves the model card for `siqi00/MetaMath-Mistral-7B-DFT2` by:
- Adding `pipeline_tag: text-generation` to the metadata, which ensures proper categorization and discoverability on the Hugging Face Hub.
- Including direct links to the paper (Discriminative Finetuning of Generative Large Language Models without Reward Models and Human Preference Data) and the official GitHub repository (https://github.com/PenGuln/DFT).
- Populating the "Model description", "Intended uses & limitations", and "Training and evaluation data" sections with detailed information extracted from the paper abstract and the project's GitHub README.
- Adding comprehensive "Performance" tables for both mathematical reasoning and general language tasks, making the model's capabilities clear at a glance.
- Providing a practical "Usage" example to help users quickly get started with text generation and chat completion.
- Including detailed sections on "Installation", "Generating negative samples", "Evaluation", and "Precompute Log-likelihood" from the GitHub repository, enhancing reproducibility and practical utility.
- Adding the BibTeX "Citation" for proper academic attribution.
- Removing the automatically generated comment at the top.
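
For reference, the metadata change is a single key added to the YAML front matter at the top of the model card (a minimal sketch; any existing metadata keys in the card are kept as-is):

```yaml
---
pipeline_tag: text-generation
---
```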
This update makes the model card much more informative and user-friendly for the community.
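
As a sketch of what the added "Usage" section covers, building a prompt for this model might look like the following. The Alpaca-style template below is an assumption based on the format commonly used by MetaMath-family models, not quoted from the card; `build_metamath_prompt` is a hypothetical helper name.

```python
def build_metamath_prompt(question: str) -> str:
    """Build an Alpaca-style prompt (assumed format for MetaMath-family models)."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{question}\n\n"
        "### Response: Let's think step by step."
    )

prompt = build_metamath_prompt("What is 15% of 240?")
print(prompt)

# To generate with the model itself (requires the transformers library and GPU memory
# for a 7B model), one could pass the prompt to a text-generation pipeline:
# from transformers import pipeline
# pipe = pipeline("text-generation", model="siqi00/MetaMath-Mistral-7B-DFT2")
# print(pipe(prompt, max_new_tokens=256)[0]["generated_text"])
```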