Improve model card: add metadata, paper, and code links

#1 opened by nielsr (HF Staff)

This PR enhances the model card for the MIRG-RL model by:

  • Adding library_name: transformers to the metadata, enabling the automated "How to use" widget on the Hugging Face Hub, as indicated by transformers_version and Qwen2VLForConditionalGeneration in the config files.
  • Adding pipeline_tag: image-text-to-text to the metadata, making the model discoverable for multi-modal tasks involving image input and text output, consistent with "Multi-Image Reasoning and Grounding with Reinforcement Learning".
  • Including a direct link to the paper MIRG-RL: Multi-Image Reasoning and Grounding with Reinforcement Learning.
  • Providing a link to the official GitHub repository for the project (https://github.com/ZEUS2035/MIRG-RL).
  • Integrating key information from the GitHub README regarding datasets, model details, training, and evaluation to provide a more comprehensive overview of the model.
  • Removing the internal "File information" section from the model card.
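Taken together, the metadata changes above would yield model-card front matter along these lines (a sketch based on the tags named in this PR; the surrounding fields are illustrative):

```yaml
---
library_name: transformers        # enables the automated "How to use" widget
pipeline_tag: image-text-to-text  # makes the model discoverable for multi-modal tasks
---
```

On the Hub, this YAML block sits at the very top of the model card's `README.md`, before the prose body where the paper and GitHub links are added.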
