---
library_name: transformers
language:
- en
base_model:
- meta-llama/Llama-3.2-1B-Instruct
datasets:
- SallySims/AnthroBotdata
---
# Model Card for AnthroBot (Llama-3.2-1B-Instruct Fine-tuned)
This model is a fine-tuned version of meta-llama/Llama-3.2-1B-Instruct, adapted for reasoning and generating contextual insights from anthropometric data (e.g., age, sex, weight, height, waist circumference).
It can summarise or comment on health-related metrics conversationally.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 Transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** Sally S. Simmons
- **Funded by [optional]:** NA
- **Shared by [optional]:** https://huggingface.co/SallySims
- **Model type:** Causal Language Model (LLM) with Instruction Tuning
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model [optional]:** meta-llama/Llama-3.2-1B-Instruct
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://huggingface.co/SallySims/AnthroBot_Model_Lora
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
The model is intended to analyze structured health-related user inputs and return conversational,
personalized feedback. It is designed for educational, wellness, or research purposes.
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
This model can be incorporated into chatbot systems or mobile health platforms that require
health-data-aware natural language interaction.
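For example, a hosting application might wrap the structured fields in the Llama 3.2 chat template before generation. The sketch below is illustrative only; the system prompt and the exact input format are assumptions, not a documented interface of this model.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "SallySims/AnthroBot_Model_Lora"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Hypothetical chat wrapping: the system prompt is an assumption, not part
# of the released training format.
messages = [
    {"role": "system",
     "content": "You are a wellness assistant that explains anthropometric measurements in plain English."},
    {"role": "user",
     "content": "Age: 30, Sex: female, Height: 150.5 cm, Weight: 75.3 kg, WC: 68.0 cm"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=150, do_sample=True)
# Print only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```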
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
- Medical diagnosis or treatment
- Critical healthcare decision-making
- Inputs in languages other than English
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
The model was trained on 20,000 observations of anthropometric data collected during the WHO STEPS survey and 32,000 synthetic records; none of the data were collected in clinical settings.
Outputs may reflect biases present in the training prompts or may misinterpret edge cases.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Seek professional guidance in addition to the outputs produced by the model.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

model_id = "SallySims/AnthroBot_Model_Lora"

# Load the tokenizer and model, placing weights on the available device(s)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a text-generation pipeline around the loaded model
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Structured anthropometric input in the format used during training
input_text = "Age: 30, Sex: female, Height: 150.5 cm, Weight: 75.3 kg, WC: 68.0 cm"
output = pipe(input_text, max_new_tokens=150, do_sample=True)
print(output[0]["generated_text"])
```
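Note that the repository name suggests LoRA adapter weights. If the repository holds only the adapter rather than merged weights, a loading path like the following may be needed instead. This is a sketch under that assumption, and it presumes access to the gated base model meta-llama/Llama-3.2-1B-Instruct.

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Assumption: the repo contains a LoRA adapter; PEFT resolves and loads the
# base model referenced in the adapter config, then attaches the adapter.
adapter_id = "SallySims/AnthroBot_Model_Lora"
tokenizer = AutoTokenizer.from_pretrained(adapter_id)
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
```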
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
Custom-curated, structured anthropometric prompts designed to simulate
health-focused instruction-following behavior (see the dataset [SallySims/AnthroBotdata](https://huggingface.co/datasets/SallySims/AnthroBotdata)).
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
Prompts were normalised for consistent numerical formats and tokenization performance.
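A minimal sketch of what such normalisation could look like is shown below. The field names and formatting rules are assumptions for illustration; the preprocessing script itself is not published.

```python
# Illustrative only: coerce numeric fields into a consistent
# "key: value unit" prompt format before tokenization.
def normalise_prompt(record: dict) -> str:
    return (
        f"Age: {int(record['age'])}, "
        f"Sex: {record['sex'].strip().lower()}, "
        f"Height: {float(record['height_cm']):.1f} cm, "
        f"Weight: {float(record['weight_kg']):.1f} kg, "
        f"WC: {float(record['waist_cm']):.1f} cm"
    )

print(normalise_prompt({"age": 30, "sex": "Female", "height_cm": "150.5",
                        "weight_kg": 75.3, "waist_cm": 68}))
# Age: 30, Sex: female, Height: 150.5 cm, Weight: 75.3 kg, WC: 68.0 cm
```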
#### Training Hyperparameters
- **Training regime:** fp16 / bf16 mixed precision
- **Epochs:** 5
- **Batch size:** 2 (gradient accumulation: 4 steps)
- **Learning rate:** 2e-4
- **LoRA parameters:** r=16, alpha=32, dropout=0.05
- **Quantization:** 4-bit quantization via `BitsAndBytesConfig`, with `llm_int8_enable_fp32_cpu_offload` enabled (see the sketch below)
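The following sketch shows how the configuration above could be assembled with Transformers and PEFT. It is a reconstruction from the listed hyperparameters, not the original training script; the output directory and compute dtype are assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig, get_peft_model

# 4-bit quantization with CPU offload enabled, as listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumption: bf16 compute
    llm_int8_enable_fp32_cpu_offload=True,
)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-1B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA adapter with the reported r / alpha / dropout
lora_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(base, lora_config)

training_args = TrainingArguments(
    output_dir="anthrobot-lora",      # illustrative path
    num_train_epochs=5,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
    bf16=True,                        # or fp16=True, per the card
)
```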
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
Evaluation was performed on held-out anthropometric-indices and recommendation prompts
with expected interpretive outputs.
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
Human-judged relevance, clarity, and accuracy.
### Results
Manual inspection shows clear, concise, and useful summaries in the majority of cases.
Some rare edge cases may produce vague or overly generic responses.
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** NVIDIA T4 GPU
- **Hours used:** ~ 2 hours
- **Cloud Provider:** Google Colab
- **Compute Region:** USA
- **Carbon Emitted:** ~1.2 kg CO₂eq (approx.)
## Technical Specifications [optional]
### Model Architecture and Objective
Decoder-only transformer based on the Llama 3.2 (1B) architecture.
### Compute Infrastructure
#### Hardware
Google Colab (A100)
#### Software
PyTorch, Hugging Face Transformers, PEFT, BitsAndBytes
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**

```bibtex
@misc{AnthroBot2025,
  author = {Sally Sonia Simmons},
  title  = {AnthroBot: Instruction-Tuned LLaMA-3.2-1B for Anthropometric Reasoning},
  year   = {2025},
  url    = {https://huggingface.co/SallySims/AnthroBot_Model_Lora}
}
```

**APA:**

Simmons, S. S. (2025). *AnthroBot: Instruction-Tuned LLaMA-3.2-1B for Anthropometric Reasoning*. Hugging Face. https://huggingface.co/SallySims/AnthroBot_Model_Lora
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
NA
## More Information [optional]
NA
## Model Card Authors [optional]
NA
## Model Card Contact
simmonssallysonia@gmail.com