Model Card for sarif747/tinyLlama-heartfailure-education-chat
Model Details
Model Description
This model is a fine-tuned version of TinyLlama/TinyLlama-1.1B-Chat-v1.0, specialized for heart failure patient education and self-management guidance. It is designed to provide clear, compassionate, and evidence-based answers about symptoms, medications, lifestyle, and rehabilitation for patients and caregivers.
Developed by: Mohammad Arif Shaik
Funded by: Independent research (academic, non-commercial)
Shared by: Mohammad Arif Shaik (Hugging Face: sarif747)
Model type: Causal Language Model (Decoder-only, chat-tuned)
Language: English
License: CC-BY-NC 4.0 (Non-commercial use)
Finetuned from: TinyLlama/TinyLlama-1.1B-Chat-v1.0
Model Sources
Repository: sarif747/tinyLlama-heartfailure-education-chat
Dataset: sarif747/heart-failure-education-qa
Base model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
Direct Use
This model can be used for:
Conversational AI assistants focused on heart failure education
Generating patient-friendly explanations of heart health concepts
Supporting caregiver and patient self-management tools
Integrating into healthcare chatbots for educational content
Downstream Use
Further fine-tuning on broader cardiovascular education topics
Integration into multimodal healthcare support systems
Out-of-Scope Use
Clinical diagnosis or emergency decision-making
Medical advice substitution without professional supervision
Use in high-risk healthcare settings without human oversight
Bias, Risks, and Limitations
The dataset is derived from AHA educational materials, which reflect standard U.S. health guidelines; cultural or regional variations in care may not be represented.
Model responses are educational, not prescriptive.
Potential simplifications may occur in medical terminology to improve readability.
The model may produce incomplete or inaccurate advice if prompted outside its intended context.
Recommendations
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. In particular, users and developers should:
Use the model for patient education support, not for medical decision-making.
Review generated outputs for accuracy and readability.
Disclose that the model is AI-assisted and non-clinical when deployed publicly.
How to Get Started with the Model
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
from peft import PeftModel
import torch

base_model = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
adapter = "sarif747/tinyLlama-heartfailure-education-chat"

# Load the base model in fp16 and attach the LoRA adapter
tokenizer = AutoTokenizer.from_pretrained(base_model, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
    base_model, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter, torch_dtype=torch.float16)
model.eval()

# Build a text-generation pipeline around the adapter-wrapped model
# (the model is already dispatched, so no device_map is needed here)
chat = pipeline("text-generation", model=model, tokenizer=tokenizer)

prompt = "When is it safe for a heart failure patient to start exercising?"
response = chat(prompt, max_new_tokens=150)
print(response[0]["generated_text"])
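TinyLlama-1.1B-Chat ships with a chat template, so wrapping the question in it usually yields cleaner answers than passing a raw string. A minimal sketch of that variant:

# Optional: format the prompt with the base model's built-in chat template
messages = [{"role": "user", "content": prompt}]
formatted = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
response = chat(formatted, max_new_tokens=150)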
Training Details
Training Data
Extracted educational content from American Heart Association (AHA) PDFs
Converted into question-answer pairs using OpenAI GPT-based prompt generation (a sketch of this step follows this list)
Cleaned and validated to ensure medical accuracy and literacy alignment
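A minimal sketch of that conversion step, assuming the official openai Python client; the model choice, prompt wording, and make_qa_pair helper are illustrative, not the exact pipeline used:

# Hypothetical sketch: turn a chunk of AHA educational text into a QA pair.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def make_qa_pair(passage: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice, not stated in this card
        messages=[{
            "role": "user",
            "content": (
                "Write one patient-friendly question and answer pair "
                "based on this heart failure education text:\n\n" + passage
            ),
        }],
    )
    return response.choices[0].message.content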
Training Procedure
Base model: TinyLlama-1.1B-Chat-v1.0
Fine-tuning framework: PEFT (Parameter-Efficient Fine-Tuning) using LoRA adapters
Tokenization: TinyLlama tokenizer (SentencePiece)
Training regime (a configuration sketch using these values follows this list):
Learning rate: 2e-4
Batch size: 64
Epochs: 3
Optimizer: AdamW
Mixed-precision (fp16)
Context length: 2048 tokens
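A hedged sketch of how this setup maps onto peft and transformers. Only the hyperparameters listed above come from this card; the LoRA rank, alpha, and target modules are assumptions:

import torch
from transformers import AutoModelForCausalLM, TrainingArguments
from peft import LoraConfig, get_peft_model

base = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16)

# LoRA adapter configuration; r, alpha, and target modules are assumptions
lora_config = LoraConfig(
    r=16,                                 # assumed rank (not stated in this card)
    lora_alpha=32,                        # assumed scaling factor
    target_modules=["q_proj", "v_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Hyperparameters as listed above; a Trainer (or trl's SFTTrainer) would consume these
args = TrainingArguments(
    output_dir="tinyllama-hf-education",
    learning_rate=2e-4,
    per_device_train_batch_size=64,
    num_train_epochs=3,
    optim="adamw_torch",
    fp16=True,  # mixed precision
)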
Speeds, Sizes, Times
Model size: ~1.1B parameters
Adapter size: ~150 MB
Training duration: ~4 hours on an NVIDIA T4 GPU
Evaluation
Testing Data
A 10% held-out subset of the dataset covering:
Diet and sodium restriction
Exercise recommendations
Symptom monitoring and emergency response
Medication adherence and fluid management
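A minimal sketch of the 10% hold-out split using the datasets library; the seed is an assumption:

from datasets import load_dataset

ds = load_dataset("sarif747/heart-failure-education-qa")
split = ds["train"].train_test_split(test_size=0.1, seed=42)  # seed assumed
train_data, test_data = split["train"], split["test"]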
Factors
Clarity
Accuracy
Empathy in responses
Health literacy alignment (6th–8th grade reading level)
Metrics
BLEU score: 0.82
ROUGE-L: 0.87
Human evaluation (accuracy): 92%
Human evaluation (readability): 95%
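The overlap metrics above can be reproduced with the Hugging Face evaluate library; a sketch with illustrative strings (the real evaluation ran over the held-out split):

import evaluate

bleu = evaluate.load("bleu")
rouge = evaluate.load("rouge")

# Illustrative prediction/reference pair, not real evaluation data
predictions = ["Most patients should limit sodium to about 1,500 mg per day."]
references = ["Heart failure patients are usually advised to keep sodium near 1,500 mg daily."]

print(bleu.compute(predictions=predictions, references=references)["bleu"])
print(rouge.compute(predictions=predictions, references=references)["rougeL"])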
Results
The fine-tuned model demonstrates strong accuracy and patient-friendly communication. Responses are consistent, context-aware, and align with AHA educational guidance.
Metric                      Score
BLEU                        0.82
ROUGE-L                     0.87
Accuracy (Human Eval)       92%
Readability (Human Eval)    95%
Summary
TinyLlama-HeartFailure-Education-Chat is a lightweight, fine-tuned conversational model for heart failure education and caregiver support. It leverages evidence-based materials and compassionate tone to improve patient understanding while maintaining low computational cost.