---
library_name: transformers
language:
- en
base_model:
- meta-llama/Llama-3.2-1B-Instruct
datasets:
- SallySims/AnthroBotdata
---
# Model Card for AnthroBot (Llama-3.2-1B-Instruct Fine-tuned)

This model is a fine-tuned version of meta-llama/Llama-3.2-1B-Instruct, adapted to reason about and generate contextual insights from anthropometric data (e.g., age, sex, weight, height, waist circumference). It can summarise and comment on health-related metrics conversationally.
## Model Details

### Model Description
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by: Sally S. Simmons
- Funded by [optional]: NA
- Shared by [optional]: https://huggingface.co/SallySims
- Model type: Causal Language Model (LLM) with Instruction Tuning
- Language(s) (NLP): English
- License: Apache 2.0
- Finetuned from model [optional]: meta-llama/Llama-3.2-1B-Instruct
### Model Sources [optional]
- Repository: https://huggingface.co/SallySims/AnthroBot_Model_Lora
- Paper [optional]: [More Information Needed]
- Demo [optional]: [More Information Needed]
## Uses

### Direct Use

The model is intended to analyze structured health-related user inputs and return conversational, personalized feedback. It is designed for educational, wellness, and research purposes.
### Downstream Use [optional]
This model can be incorporated into chatbot systems or mobile health platforms that require health-data-aware natural language interaction.
### Out-of-Scope Use

- Medical diagnosis or treatment
- Critical healthcare decision-making
- Inputs in languages other than English
## Bias, Risks, and Limitations

The model was trained on 20,000 observations of anthropometric data collected during the WHO STEPS survey, together with 32,000 synthetic records generated outside clinical settings. Outputs may reflect biases present in the training prompts or may misinterpret edge cases.
### Recommendations

Seek professional guidance in addition to the outcomes produced by the model.
## How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

model_id = "SallySims/AnthroBot_Model_Lora"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

input_text = "Age: 30, Sex: female, Height: 150.5 cm, Weight: 75.3 kg, WC: 68.0 cm"
output = pipe(input_text, max_new_tokens=150, do_sample=True)
print(output[0]["generated_text"])
```
## Training Details

### Training Data
Custom curated structured anthropometric prompts designed to simulate health-focused instruction-following behavior.
### Training Procedure

#### Preprocessing [optional]
Prompts were normalised for consistent numerical formats and tokenization performance.
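The normalisation step is not published with the card; as a hypothetical sketch (not the actual training script), numeric fields might be rewritten to a consistent decimal format like this:

```python
import re

def normalise_prompt(prompt: str) -> str:
    """Rewrite every number in an anthropometric prompt to one decimal place.

    Illustrative example of the kind of preprocessing described above,
    not the project's actual code.
    """
    def fix_number(match: re.Match) -> str:
        return f"{float(match.group(0)):.1f}"

    # e.g. "150" -> "150.0", "75.30" -> "75.3"
    return re.sub(r"\d+(?:\.\d+)?", fix_number, prompt)

print(normalise_prompt("Height: 150 cm, Weight: 75.30 kg"))
# Height: 150.0 cm, Weight: 75.3 kg
```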
#### Training Hyperparameters

- Training regime: Mixed precision (fp16 / bf16)
- Epochs: 5
- Batch size: 2 (gradient accumulation: 4 steps)
- Learning rate: 2e-4
- LoRA parameters: r=16, alpha=32, dropout=0.05
- Quantization: 4-bit quantization using `BitsAndBytesConfig`, with `llm_int8_enable_fp32_cpu_offload` enabled
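The hyperparameters above correspond to a PEFT / bitsandbytes setup along these lines (a sketch under the stated settings; variable names are illustrative, and this is not the project's actual training code):

```python
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit quantization with fp32 CPU offload, as listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    llm_int8_enable_fp32_cpu_offload=True,
)

# LoRA adapter settings matching r=16, alpha=32, dropout=0.05
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
```

These config objects would then be passed to `from_pretrained(..., quantization_config=bnb_config)` and `get_peft_model(model, lora_config)` respectively.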
#### Speeds, Sizes, Times [optional]
## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

Evaluation was performed on held-out prompts covering anthropometric indices and recommendations, with expected interpretive outputs.

#### Factors

#### Metrics
Human-judged relevance, clarity, and accuracy.
### Results
Manual inspection shows clear, concise, and useful summaries in the majority of cases. Some rare edge cases may produce vague or overly generic responses.
#### Summary

## Model Examination [optional]

## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: NVIDIA T4 GPU
- Hours used: ~2 hours
- Cloud Provider: Google Colab
- Compute Region: USA
- Carbon Emitted: ~1.2 kg CO₂eq
## Technical Specifications [optional]

### Model Architecture and Objective

Decoder-only transformer based on the Llama 3.2 1B architecture.
### Compute Infrastructure

#### Hardware

Google Colab (A100)

#### Software

PyTorch, Hugging Face Transformers, PEFT, BitsAndBytes
## Citation [optional]

**BibTeX:**

```bibtex
@misc{AnthroBot2025,
  author = {Sally Sonia Simmons},
  title  = {AnthroBot: Instruction-Tuned LLaMA-3.2-1B for Anthropometric Reasoning},
  year   = {2025},
  url    = {https://huggingface.co/SallySimmons/AnthroBot_Model_Lora}
}
```

**APA:**
## Glossary [optional]

NA

## More Information [optional]

NA

## Model Card Authors [optional]

NA