SallySims committed
Commit d83cc4a · verified · 1 Parent(s): 8ba3175

Update README.md

Files changed (1): README.md (+7 −21)
README.md CHANGED
@@ -4,6 +4,8 @@ language:
 - en
 base_model:
 - meta-llama/Llama-3.2-1B-Instruct
+datasets:
+- SallySims/AnthroBotdata
 ---
 
 # Model Card for AnthroBot (Llama-3.2-1B-Instruct Fine-tuned)
@@ -48,14 +50,13 @@ This is the model card of a 🤗 transformers model that has been pushed on the
 The model is intended to analyze structured health-related user inputs and return conversational,
 personalized feedback. It is designed for educational, wellness, or research purposes.
 
-[More Information Needed]
 
 ### Downstream Use [optional]
 
 <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
 This model can be incorporated into chatbot systems or mobile health platforms that require
 health-data-aware natural language interaction.
-[More Information Needed]
+
 
 ### Out-of-Scope Use
 
@@ -66,7 +67,6 @@ health-data-aware natural language interaction.
 
 * Inputs in languages other than English
 
-[More Information Needed]
 
 ## Bias, Risks, and Limitations
 
@@ -74,7 +74,6 @@ health-data-aware natural language interaction.
 The model is trained on 20,000 observations based on anthropometric data collected during the WHO STEPS survey and not in clinical settings.
 Outputs may reflect biases present in the training prompts or may misinterpret edge cases.
 
-[More Information Needed]
 
 ### Recommendations
 
@@ -100,7 +99,6 @@ output = pipe(input_text, max_new_tokens=150, do_sample=True)
 print(output[0]['generated_text'])
 
 
-[More Information Needed]
 
 ## Training Details
 
@@ -110,7 +108,6 @@ print(output[0]['generated_text'])
 Custom-curated, structured anthropometric prompts designed to simulate
 health-focused instruction-following behavior.
 
-[More Information Needed]
 
 ### Training Procedure
 
@@ -145,7 +142,6 @@ Enabled llm_int8_enable_fp32_cpu_offload
 
 <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
 
-[More Information Needed]
 
 ## Evaluation
 
@@ -159,20 +155,17 @@ Enabled llm_int8_enable_fp32_cpu_offload
 Evaluation performed on held-out anthropometric indices and recommendations prompts
 with expected interpretive outputs.
 
-[More Information Needed]
 
 #### Factors
 
 <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
 
-[More Information Needed]
 
 #### Metrics
 
 <!-- These are the evaluation metrics being used, ideally with a description of why. -->
 Human-judged relevance, clarity, and accuracy.
 
-[More Information Needed]
 
 ### Results
 
@@ -187,7 +180,6 @@ Some rare edge cases may produce vague or overly generic responses.
 
 <!-- Relevant interpretability work for the model goes here -->
 
-[More Information Needed]
 
 ## Environmental Impact
 
@@ -209,11 +201,10 @@ Decoder-only transformer based on the Llama 3.2 1B architecture.
 
 ### Compute Infrastructure
 
-[More Information Needed]
 
 #### Hardware
 
-Google Colab (A100)
+Google Colab (A100)
 
 #### Software
 
@@ -231,25 +222,20 @@ PyTorch, Hugging Face Transformers, PEFT, BitsAndBytes
 
 **BibTeX:**
 
-[More Information Needed]
-
 **APA:**
 
-[More Information Needed]
-
 ## Glossary [optional]
 
 <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
 
-[More Information Needed]
-
+NA
 ## More Information [optional]
 
-[More Information Needed]
+NA
 
 ## Model Card Authors [optional]
 
-[More Information Needed]
+NA
 
 ## Model Card Contact
 
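The usage snippet quoted in the diff context (`output = pipe(input_text, max_new_tokens=150, do_sample=True)`) takes a "structured health-related user input". A minimal sketch of building such an input string; the field names, prompt wording, and BMI calculation below are illustrative assumptions, not taken from the card:

```python
# Hypothetical prompt builder for the "structured anthropometric" inputs the
# card describes; field names and wording are illustrative assumptions.
def build_prompt(height_cm: float, weight_kg: float, age: int) -> str:
    bmi = weight_kg / (height_cm / 100) ** 2  # standard BMI formula
    return (
        f"Height: {height_cm} cm, Weight: {weight_kg} kg, Age: {age}, "
        f"BMI: {bmi:.1f}. Interpret these anthropometric measurements."
    )

input_text = build_prompt(170, 65, 30)
print(input_text)
```

The resulting string would then be passed as `input_text` to the `pipe(...)` call shown in the diff.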
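The training-procedure hunk context mentions "Enabled llm_int8_enable_fp32_cpu_offload", and the software section lists BitsAndBytes. A sketch of what such an 8-bit loading setup might look like — the card does not give the full configuration, so every value here is an assumption:

```python
# Sketch of 8-bit model loading consistent with the card's mention of
# llm_int8_enable_fp32_cpu_offload; the exact config is not given in the
# card, so treat these settings as assumptions.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_enable_fp32_cpu_offload=True,  # flag named in the diff context
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-1B-Instruct",  # base model from the card metadata
    quantization_config=bnb_config,
    device_map="auto",  # lets accelerate place overflow layers on CPU in fp32
)
```

With `llm_int8_enable_fp32_cpu_offload=True`, layers that do not fit on the GPU are kept on the CPU in fp32 rather than quantized, which matches running a 1B model on constrained Colab sessions.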