Improve model card for SHARE Dialogue Model

#1 opened by nielsr (HF Staff)

This PR significantly enhances the model card for the SHARE Dialogue Model. It adds detailed information about the model, including:

  • A comprehensive summary of the model and its purpose, based on the paper abstract.
  • A direct link to the associated paper, "SHARE: Shared Memory-Aware Open-Domain Long-Term Dialogue Dataset Constructed from Movie Script".
  • The appropriate pipeline_tag (text-generation), so users can discover the model through this filter on the Hub.
  • Relevant tags such as dialogue, long-term-dialogue, and shared-memory for better discoverability.
  • meta-llama/Meta-Llama-3.1-8B-Instruct set as the base_model, as indicated by adapter_config.json.
  • A sample usage code snippet for loading the PEFT (LoRA) adapter with the base model using the transformers library.
  • A BibTeX citation populated with the correct author information from the paper.

The paper abstract states that the dataset and code are publicly available. Since no specific GitHub or project-page link was provided in the input context, users are advised to consult the paper for the exact URL of the code and dataset.

This makes the model much more discoverable and provides essential information for potential users.

