Improve Model Card for SHARE LoRA Adapter

#1
by nielsr (HF Staff) - opened
Files changed (1)
  1. README.md +91 -141
README.md CHANGED
@@ -1,199 +1,149 @@
  ---
  library_name: transformers
- tags: []
  ---

- # Model Card for Model ID
-
- <!-- Provide a quick summary of what the model is/does. -->


  ## Model Details

  ### Model Description

- <!-- Provide a longer summary of what this model is. -->
-
- This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
-
- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]

  ### Model Sources [optional]

- <!-- Provide the basic links for the model. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]

  ## Uses

- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
  ### Direct Use
-
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]

  ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]

  ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]

  ## Bias, Risks, and Limitations

- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]

  ### Recommendations

- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

  ## How to Get Started with the Model

- Use the code below to get started with the model.
-
- [More Information Needed]

  ## Training Details

  ### Training Data

- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]

  ### Training Procedure

- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
  #### Preprocessing [optional]
-
  [More Information Needed]

-
  #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]

  ## Evaluation

- <!-- This section describes the evaluation protocols and provides the results. -->
-
  ### Testing Data, Factors & Metrics

  #### Testing Data
-
- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]

  #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]

  #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]

  ### Results

- [More Information Needed]
-
- #### Summary
-
-
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]
-
- #### Software
-
- [More Information Needed]
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Model Card Authors [optional]
-
- [More Information Needed]

- ## Model Card Contact

- [More Information Needed]
  ---
  library_name: transformers
+ license: mit
+ pipeline_tag: feature-extraction
+ language: en
+ datasets:
+ - SHARE
+ base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
+ tags:
+ - dialogue
+ - long-term-dialogue
+ - memory
+ - conversational
+ - llama
+ - llm-adapter
+ - peft
  ---

+ # SHARE: Shared Memory-Aware Open-Domain Long-Term Dialogue Model

+ This model is a LoRA adapter for `meta-llama/Meta-Llama-3.1-8B-Instruct`, developed as part of the research presented in the paper [SHARE: Shared Memory-Aware Open-Domain Long-Term Dialogue Dataset Constructed from Movie Script](https://huggingface.co/papers/2410.20682).

+ The paper introduces **SHARE**, a long-term dialogue dataset constructed from movie scripts to leverage shared memories for more engaging conversations, and **EPISODE**, a long-term dialogue framework that draws on those shared experiences. This adapter extracts shared-memory features from dialogue for long-term dialogue understanding and generation, and is a core component of the EPISODE framework.

  ## Model Details

  ### Model Description
+ This model is a PEFT (LoRA) adapter fine-tuned from [`meta-llama/Meta-Llama-3.1-8B-Instruct`](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct). It is designed to work with the SHARE dataset and the EPISODE framework for leveraging shared memories in long-term dialogue. Its primary function is to extract contextual features that capture "shared memories", supporting more engaging and sustainable long-term conversations.

+ - **Model type:** LoRA adapter for a causal language model (Llama)
+ - **Language(s) (NLP):** English
+ - **License:** MIT
+ - **Finetuned from model:** `meta-llama/Meta-Llama-3.1-8B-Instruct`

  ### Model Sources [optional]

+ - **Repository:** [More Information Needed]
+ - **Paper:** [SHARE: Shared Memory-Aware Open-Domain Long-Term Dialogue Dataset Constructed from Movie Script](https://huggingface.co/papers/2410.20682)
+ - **Demo [optional]:** [More Information Needed]

  ## Uses

  ### Direct Use
+ Loaded together with its base LLM, this adapter can extract features that represent shared memories and contextual information from dialogue inputs. These features can then be used within the EPISODE framework to make long-term dialogue applications more engaging and coherent.
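+
+ A minimal sketch of such feature extraction, assuming mean-pooled last-layer hidden states as the dialogue representation (the paper's exact extraction method may differ):
+
+ ```python
+ import torch
+
+ # Assumes `model` (base model + LoRA adapter) and `tokenizer` are already
+ # loaded as shown in "How to Get Started with the Model" below.
+ dialogue = "Character A: Do you remember the lighthouse trip? Character B: Of course!"
+ inputs = tokenizer(dialogue, return_tensors="pt").to(model.device)
+
+ with torch.no_grad():
+     outputs = model(**inputs, output_hidden_states=True)
+
+ # Mean-pool the final hidden layer over the sequence dimension to obtain a
+ # single feature vector for the dialogue turn (an illustrative choice).
+ features = outputs.hidden_states[-1].mean(dim=1)  # shape: (1, hidden_size)
+ ```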

  ### Downstream Use [optional]
+ Potential downstream uses include building context-aware conversational AI systems that maintain long-term context and leverage past interactions, improving chatbots, and advancing research on dialogue systems that require persistent memory.

  ### Out-of-Scope Use
+ This model is trained specifically for shared-memory-aware long-term dialogue and may not perform well on general-purpose text generation or other NLP tasks outside that domain. Outputs should be evaluated carefully, and the model must not be used to generate harmful, biased, or misleading content.

  ## Bias, Risks, and Limitations

+ The model's behavior and potential biases are tied to its training data, the SHARE dataset, which is constructed from movie scripts. It may therefore inherit biases present in cinematic narratives, such as stereotypes, oversimplified portrayals of human relationships, or a focus on particular kinds of interactions. Evaluation on more diverse data is recommended to identify and mitigate such limitations.

  ### Recommendations

+ Users (both direct and downstream) should be made aware of the model's training-data source and its potential to reflect biases present in movie scripts. Ethical review and bias mitigation should precede deployment in real-world, especially sensitive, applications.

  ## How to Get Started with the Model

+ To use this LoRA adapter, first load the base model (`meta-llama/Meta-Llama-3.1-8B-Instruct`), then load the adapter on top of it with the `peft` library.
+
+ ```python
+ import torch
+ from peft import PeftModel
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ # Replace 'your_org/your_model_name' with the actual adapter ID on the Hub.
+ peft_model_id = "your_org/your_model_name"
+
+ # Load the tokenizer and base model; torch_dtype and device_map keep memory
+ # use manageable on supported hardware.
+ base_model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
+ tokenizer = AutoTokenizer.from_pretrained(base_model_id)
+ base_model = AutoModelForCausalLM.from_pretrained(
+     base_model_id, torch_dtype=torch.bfloat16, device_map="auto"
+ )
+
+ # Load the LoRA adapter on top of the base model.
+ model = PeftModel.from_pretrained(base_model, peft_model_id)
+ # Optionally merge the LoRA weights into the base model if no further
+ # training is planned:
+ # model = model.merge_and_unload()
+
+ # Basic text generation (works on the PeftModel directly or after merging):
+ prompt = "Character A: Hi, do you remember our trip to the old lighthouse? Character B: Oh, yes! That stormy day..."
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+ outputs = model.generate(**inputs, max_new_tokens=100)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+
+ # Accessing hidden states for feature extraction: how "shared memory"
+ # features are derived from these follows the EPISODE framework in the paper.
+ inputs = tokenizer("A dialogue turn that might contain shared memory cues.", return_tensors="pt").to(model.device)
+ with torch.no_grad():
+     outputs = model(**inputs, output_hidden_states=True)
+ all_hidden_states = outputs.hidden_states  # tuple of per-layer activations
+ ```

  ## Training Details

  ### Training Data

+ The model was fine-tuned on the **SHARE dataset**, a long-term dialogue dataset constructed from movie scripts that is rich in explicit persona information, event summaries, and shared memories that can be extracted from the conversations between participants.
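+
+ For intuition only, a SHARE-style example might pair dialogue turns with persona and shared-memory annotations along these lines; the field names and values are hypothetical, not the dataset's actual schema:
+
+ ```python
+ # Hypothetical illustration of a SHARE-style record; consult the paper or
+ # the dataset release for the real schema.
+ example = {
+     "speakers": ["Character A", "Character B"],
+     "dialogue": [
+         {"speaker": "Character A", "utterance": "Remember our trip to the old lighthouse?"},
+         {"speaker": "Character B", "utterance": "Of course, that stormy day!"},
+     ],
+     "personas": {
+         "Character A": ["loves the sea"],
+         "Character B": ["is afraid of storms"],
+     },
+     "event_summary": "A and B once visited an old lighthouse together.",
+     "shared_memories": ["A and B were caught in a storm at the lighthouse."],
+ }
+ ```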

  ### Training Procedure

  #### Preprocessing [optional]
  [More Information Needed]

  #### Training Hyperparameters
+ The adapter was trained with LoRA (Low-Rank Adaptation). Specific hyperparameters and the training regime are expected to be detailed in the associated paper or the project's official code repository.
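+
+ For reference, LoRA adapters for Llama-style models are commonly configured with `peft` along these lines; the values below are illustrative, not the hyperparameters actually used for this adapter:
+
+ ```python
+ from peft import LoraConfig, get_peft_model
+
+ # Illustrative LoRA configuration (hypothetical values, not from the paper).
+ lora_config = LoraConfig(
+     r=16,                                 # low-rank dimension of the update matrices
+     lora_alpha=32,                        # scaling factor applied to the LoRA update
+     target_modules=["q_proj", "v_proj"],  # attention projections to adapt
+     lora_dropout=0.05,
+     bias="none",
+     task_type="CAUSAL_LM",
+ )
+
+ # Wrap a loaded base model for training:
+ # peft_model = get_peft_model(base_model, lora_config)
+ ```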

  ## Evaluation

  ### Testing Data, Factors & Metrics

  #### Testing Data
+ The paper reports experiments on the SHARE dataset that examine whether shared memories make long-term dialogues more engaging and sustainable.

  #### Factors
+ The evaluation focuses on the impact of shared memories on dialogue engagement and sustainability.

  #### Metrics
+ Effectiveness is measured by whether shared memories make dialogues more engaging and sustainable, and by how well shared memories are managed over the course of a conversation.

  ### Results
+ The paper shows that shared memories between two individuals make long-term dialogues more engaging and sustainable, and that the EPISODE framework manages shared memories effectively during dialogue.

+ ## Citation

+ If you find the SHARE dataset, the EPISODE framework, or this model helpful in your research, please consider citing the original paper:

+ ```bibtex
+ @article{share2024longterm,
+   title={SHARE: Shared Memory-Aware Open-Domain Long-Term Dialogue Dataset Constructed from Movie Script},
+   author={Anonymous}, % see the arXiv page for the full author list
+   journal={arXiv preprint arXiv:2410.20682},
+   year={2024},
+   url={https://arxiv.org/abs/2410.20682}
+ }
+ ```