Improve Model Card for SHARE LoRA Adapter

#1 opened by nielsr (HF Staff)

This PR significantly enhances the model card for the SHARE model by providing comprehensive details derived from its associated paper, "SHARE: Shared Memory-Aware Open-Domain Long-Term Dialogue Dataset Constructed from Movie Script".

Key updates include:

  • A detailed model description, clarifying its role as a LoRA adapter for meta-llama/Meta-Llama-3.1-8B-Instruct within the EPISODE framework.
  • Linking to the official Hugging Face paper page: SHARE: Shared Memory-Aware Open-Domain Long-Term Dialogue Dataset Constructed from Movie Script.
  • Adding the appropriate pipeline_tag: feature-extraction to improve discoverability, reflecting its function in managing and extracting shared memories.
  • Specifying the base_model, language, datasets, and license in the metadata.
  • Including relevant tags such as dialogue, long-term-dialogue, memory, conversational, llama, llm-adapter, and peft.
  • Providing a clear usage example for loading and interacting with the LoRA adapter.
  • Populating the Bias, Risks, and Limitations section based on the nature of the training data (movie scripts).
  • Adding a BibTeX citation for the paper.
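To illustrate the metadata changes described above, the model card's YAML front matter might look like the following sketch. Only the base model, pipeline tag, and tag list are named in this PR; the language, license, and dataset values below are placeholders, not confirmed values:

```yaml
# Sketch of the model card front matter; placeholder values are marked.
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
pipeline_tag: feature-extraction
language:
- en                      # assumed; confirm against the model card
license: llama3.1         # placeholder; use the license actually specified
datasets:
- your-org/SHARE          # placeholder dataset repo id
tags:
- dialogue
- long-term-dialogue
- memory
- conversational
- llama
- llm-adapter
- peft
```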
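The usage example mentioned above could follow the standard PEFT pattern for attaching a LoRA adapter to its base model. This is a minimal sketch, not the model card's actual snippet; the adapter repo id is a placeholder, and the imports are kept inside the function so the sketch reads without the libraries installed:

```python
# Hypothetical sketch of loading the SHARE LoRA adapter on top of
# meta-llama/Meta-Llama-3.1-8B-Instruct using PEFT.
BASE_MODEL_ID = "meta-llama/Meta-Llama-3.1-8B-Instruct"
ADAPTER_ID = "your-org/SHARE-lora-adapter"  # placeholder, not the real repo id


def load_share_model():
    """Return (tokenizer, model) with the LoRA weights attached."""
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL_ID)
    base = AutoModelForCausalLM.from_pretrained(BASE_MODEL_ID, device_map="auto")
    # PeftModel.from_pretrained layers the adapter onto the frozen base model.
    model = PeftModel.from_pretrained(base, ADAPTER_ID)
    return tokenizer, model
```

Note that the base model is gated, so a Hugging Face access token with approved access is required before either download will succeed.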

Please review and merge this PR to make the model more informative and usable for the community.
