Improve model card for SHARE: Shared Memory-Aware Open-Domain Long-Term Dialogue Model

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +105 -91
README.md CHANGED
@@ -1,160 +1,176 @@
  ---
  base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
  library_name: peft
  ---

- # Model Card for Model ID
-
- <!-- Provide a quick summary of what the model is/does. -->
-

  ## Model Details

  ### Model Description

- <!-- Provide a longer summary of what this model is. -->
-
-
-
- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]

  ### Model Sources [optional]

- <!-- Provide the basic links for the model. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]

  ## Uses

- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
  ### Direct Use
-
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]

  ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]

  ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]

  ## Bias, Risks, and Limitations

- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]

  ### Recommendations

- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

  ## How to Get Started with the Model

- Use the code below to get started with the model.
-
- [More Information Needed]

  ## Training Details

  ### Training Data
-
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]

  ### Training Procedure
-
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

  #### Preprocessing [optional]

  [More Information Needed]

-
  #### Training Hyperparameters

- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

  #### Speeds, Sizes, Times [optional]

- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
  [More Information Needed]

  ## Evaluation

- <!-- This section describes the evaluation protocols and provides the results. -->
-
  ### Testing Data, Factors & Metrics

  #### Testing Data

- <!-- This should link to a Dataset Card if possible. -->
-
  [More Information Needed]

  #### Factors

- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
  [More Information Needed]

  #### Metrics

- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]

  ### Results
-
- [More Information Needed]

  #### Summary

-
-
  ## Model Examination [optional]

- <!-- Relevant interpretability work for the model goes here -->
-
  [More Information Needed]

  ## Environmental Impact

- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
  Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]

  ## Technical Specifications [optional]

  ### Model Architecture and Objective
-
- [More Information Needed]

  ### Compute Infrastructure

@@ -166,24 +182,25 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]

  #### Software

- [More Information Needed]

  ## Citation [optional]

- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
  **BibTeX:**

- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]

  ## Glossary [optional]

- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
  [More Information Needed]

  ## More Information [optional]
@@ -196,7 +213,4 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]

  ## Model Card Contact

- [More Information Needed]
- ### Framework versions
-
- - PEFT 0.12.0
 
  ---
  base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
  library_name: peft
+ pipeline_tag: text-generation
+ license: cc-by-4.0
+ datasets:
+ - SHARE
+ tags:
+ - long-form-dialogue
+ - dialogue-generation
  ---

+ # Model Card for SHARE: Shared Memory-Aware Open-Domain Long-Term Dialogue Model

+ This model is a fine-tuned version of `meta-llama/Meta-Llama-3.1-8B-Instruct` trained on the SHARE dataset, designed to enhance long-term dialogue engagement by leveraging shared memories. SHARE is a new long-term dialogue dataset, constructed from movie scripts, that provides explicit persona information, event summaries, and implicitly extractable shared memories between conversational participants. The associated EPISODE framework uses these shared experiences to support more engaging and sustainable conversations.

  ## Model Details

  ### Model Description
+ Shared memories between two individuals strengthen their bond and are crucial for sustaining their ongoing conversations. This model aims to make long-term dialogue more engaging by leveraging such shared memories. It is fine-tuned on SHARE, a new long-term dialogue dataset constructed from movie scripts, which are a rich source of shared memories across a variety of relationships. The underlying research also introduces EPISODE, a long-term dialogue framework built on SHARE that manages shared memories during dialogue. Experiments demonstrate that shared memories make long-term dialogues more engaging and sustainable.

+ - **Developed by:** The authors of the paper "SHARE: Shared Memory-Aware Open-Domain Long-Term Dialogue Dataset Constructed from Movie Script"
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** LoRA adapter for a causal language model (`meta-llama/Meta-Llama-3.1-8B-Instruct`), designed for long-term dialogue
+ - **Language(s) (NLP):** English
+ - **License:** [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/)
+ - **Finetuned from model [optional]:** `meta-llama/Meta-Llama-3.1-8B-Instruct`

  ### Model Sources [optional]

+ - **Repository:** [https://github.com/share-dialogue/SHARE](https://github.com/share-dialogue/SHARE)
+ - **Paper [optional]:** [SHARE: Shared Memory-Aware Open-Domain Long-Term Dialogue Dataset Constructed from Movie Script](https://huggingface.co/papers/2410.20682)
+ - **Demo [optional]:** [More Information Needed]

  ## Uses

  ### Direct Use
+ This model is intended for research and development in long-term dialogue systems. It can be used to generate conversational responses that draw on shared memories between participants, yielding more coherent, engaging, and contextually rich interactions over extended conversations.
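+
+ The exact prompt format used by the EPISODE framework is defined in the GitHub repository; the sketch below is purely illustrative (the memory strings and system-prompt wording are hypothetical, not the framework's actual format) and shows one way retrieved shared memories could be surfaced through a system message:
+
+ ```python
+ # Hypothetical illustration: inject retrieved shared memories into the system
+ # prompt. The real EPISODE prompt format is defined in
+ # https://github.com/share-dialogue/SHARE.
+ shared_memories = [
+     "We watched 'Inception' together last month.",
+     "We debated whether the ending was a dream.",
+ ]
+ system_prompt = (
+     "You are a long-term conversation partner.\n"
+     "Shared memories with the user:\n- " + "\n- ".join(shared_memories)
+ )
+ messages = [
+     {"role": "system", "content": system_prompt},
+     {"role": "user", "content": "Remember that movie we watched together?"},
+ ]
+ # `messages` can then be fed to tokenizer.apply_chat_template as in the
+ # "How to Get Started with the Model" section below.
+ ```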
 
 
 
  ### Downstream Use [optional]
+ The model can serve as a foundation for further research and applications in personalized dialogue, conversational AI agents with memory, and interactive storytelling where consistent character memory is crucial.

  ### Out-of-Scope Use
+ This model is not intended for generating harmful, biased, or inappropriate content. As a generative language model, it may produce outputs that reflect biases present in its training data or the base model, and it should not be deployed in high-stakes applications without rigorous further testing, fine-tuning, and safety measures.

  ## Bias, Risks, and Limitations

+ The model's performance and outputs are shaped by its training data (movie scripts) and the underlying `Meta-Llama-3.1-8B-Instruct` base model. Biases, stereotypes, or limitations present in these sources may surface in generated dialogues. The paper's abstract does not detail ethical considerations beyond this technical scope.

  ### Recommendations

+ Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. Continuous monitoring and evaluation of model outputs are recommended, especially when integrating the model into sensitive applications.

  ## How to Get Started with the Model

+ Use the code below to get started with the model. This model is a PEFT (LoRA) adapter, so you need to load the base model (`meta-llama/Meta-Llama-3.1-8B-Instruct`) and then apply this adapter on top.
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from peft import PeftModel, PeftConfig
+
+ # Replace "your_repo_id/this_model" with the actual model ID on the Hub
+ model_id = "your_repo_id/this_model"
+
+ # Load the PEFT config to get the base model name
+ config = PeftConfig.from_pretrained(model_id)
+
+ # Load the base model and tokenizer
+ base_model = AutoModelForCausalLM.from_pretrained(
+     config.base_model_name_or_path,
+     torch_dtype=torch.bfloat16,  # or torch.float16 / torch.float32 depending on your GPU
+     device_map="auto",
+ )
+ tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
+
+ # Load the PEFT adapter on top of the base model
+ model = PeftModel.from_pretrained(base_model, model_id)
+
+ # For inference you may want to merge the adapter weights into the base model.
+ # This makes the model behave like a fully fine-tuned model and can speed up
+ # generation on some hardware.
+ # model = model.merge_and_unload()
+
+ # Example for text generation (using the Llama 3.1 chat template)
+ messages = [
+     {"role": "user", "content": "Hello, do you remember our conversation about the movie 'Inception'?"},
+     {"role": "assistant", "content": "Yes, I do! We talked about its complex narrative structure and dream-within-a-dream concept. What specifically about it are you thinking about now?"},
+     {"role": "user", "content": "I was wondering, what was the name of the character who guided people through the dream layers?"}
+ ]
+
+ # Apply the chat template and tokenize
+ input_ids = tokenizer.apply_chat_template(
+     messages,
+     add_generation_prompt=True,
+     return_tensors="pt"
+ ).to(model.device)
+
+ # Generate a response
+ outputs = model.generate(
+     input_ids,
+     max_new_tokens=256,
+     eos_token_id=tokenizer.eos_token_id,
+     do_sample=True,
+     temperature=0.7,
+     top_p=0.9,
+ )
+
+ # Decode only the newly generated tokens
+ generated_text = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)
+ print(generated_text)
+ ```
 
  ## Training Details

  ### Training Data
+ The model was fine-tuned on the new **SHARE dataset**, constructed from movie scripts. For each pair of speakers, it contains summaries of the persona information and events explicitly revealed in their conversation, along with implicitly extractable shared memories.

  ### Training Procedure
+ The training procedure follows the **EPISODE** framework, a long-term dialogue framework built on the SHARE dataset that utilizes shared experiences between individuals. The model was fine-tuned with Parameter-Efficient Fine-Tuning (PEFT), specifically LoRA, as indicated by the `adapter_config.json`. The objective is to enable the model to manage shared memories during dialogue and produce more engaging, sustainable long-term conversations.
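+
+ As a minimal sketch (not described in the paper), the LoRA hyperparameters recorded in this adapter's `adapter_config.json` can be inspected with PEFT; `"your_repo_id/this_model"` is the same placeholder repo ID used in the getting-started example above:
+
+ ```python
+ from peft import PeftConfig
+
+ # Loads adapter_config.json; for a LoRA adapter this returns a LoraConfig
+ config = PeftConfig.from_pretrained("your_repo_id/this_model")
+
+ print(config.base_model_name_or_path)  # meta-llama/Meta-Llama-3.1-8B-Instruct
+ print(config.r, config.lora_alpha)     # LoRA rank and scaling factor
+ print(config.target_modules)           # modules the adapter was applied to
+ ```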
 
  #### Preprocessing [optional]

  [More Information Needed]

  #### Training Hyperparameters

+ - **Training regime:** [More Information Needed] <!-- fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
 
  #### Speeds, Sizes, Times [optional]

  [More Information Needed]

  ## Evaluation

  ### Testing Data, Factors & Metrics
+ Experiments on the SHARE dataset evaluate whether shared memories between two individuals make long-term dialogues more engaging and sustainable, and whether the EPISODE framework effectively manages these shared memories during dialogue.
 
  #### Testing Data

  [More Information Needed]

  #### Factors

  [More Information Needed]

  #### Metrics

+ Specific evaluation metrics are not detailed in the abstract; typical dialogue evaluation metrics (e.g., coherence, fluency, engagement, consistency) would likely apply.

  ### Results
+ The research demonstrates that shared memories between two individuals lead to more engaging and sustainable long-term dialogues, and that the EPISODE framework effectively manages shared memories throughout conversations.
 
  #### Summary

  ## Model Examination [optional]

  [More Information Needed]

  ## Environmental Impact

  Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
 
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]

  ## Technical Specifications [optional]

  ### Model Architecture and Objective
+ The model is a PEFT (LoRA) adaptation of the `meta-llama/Meta-Llama-3.1-8B-Instruct` large language model. Its primary objective is to support long-term, open-domain dialogue by explicitly leveraging shared memories and contextual information, aiming for more engaging and coherent conversations over time.

  ### Compute Infrastructure

  #### Software

+ - PEFT 0.12.0
+ - Transformers (a version with Llama 3.1 support, e.g., >= 4.43)
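+
+ A quick environment check (a minimal sketch; the Transformers version floor is an assumption, since the card only records PEFT 0.12.0):
+
+ ```python
+ import peft
+ import transformers
+
+ print(peft.__version__)          # expected: 0.12.0
+ print(transformers.__version__)  # needs Llama 3.1 support (assumed >= 4.43)
+ ```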
 
  ## Citation [optional]

  **BibTeX:**

+ <!-- Replace the author field with the paper's actual author list -->
+ ```bibtex
+ @article{share2024longterm,
+   title={SHARE: Shared Memory-Aware Open-Domain Long-Term Dialogue Dataset Constructed from Movie Script},
+   author={[More Information Needed]},
+   journal={arXiv preprint arXiv:2410.20682},
+   year={2024},
+   url={https://huggingface.co/papers/2410.20682}
+ }
+ ```
 
  ## Glossary [optional]

  [More Information Needed]

  ## More Information [optional]

  ## Model Card Contact

+ [More Information Needed]