Improve model card: Add pipeline tag, library, project page, usage example, and update title/links

#1 opened by nielsr (HF Staff)

This PR significantly enhances the model card for the helehan/topic-overwrite-llava-7b-full model by:

  • Updating the main title to reflect the paper's name: Systematic Reward Gap Optimization for Mitigating VLM Hallucinations.
  • Adding pipeline_tag: image-text-to-text to the metadata, improving model discoverability on the Hugging Face Hub.
  • Specifying library_name: transformers in the metadata, enabling the automated "Use in Transformers" code snippet and inference widget.
  • Correcting the GitHub repository link to https://github.com/tpr-dpo/tpr-dpo.
  • Including a direct link to the project page: https://tpr-dpo.github.io.
  • Expanding the "Model Description" with details from the paper abstract.
  • Replacing the placeholder "Usage" section with a concrete Python code snippet from the official GitHub repository, so users can run inference directly.
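
The metadata changes above amount to a small addition to the model card's YAML front matter. A sketch of what the updated header might look like (only the two fields named in this PR are shown; the card may contain other metadata as well):

```yaml
---
pipeline_tag: image-text-to-text
library_name: transformers
---
```

On the Hub, `pipeline_tag` controls which task filter the model appears under, and `library_name` drives the automated "Use in Transformers" snippet and the inference widget.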

These changes provide a more comprehensive and user-friendly model card.

