Iredteam committed on
Commit
606f90b
·
1 Parent(s): ca47a28

first push

Files changed (5)
  1. README.md +73 -0
  2. getpowershell.ps1 +20 -0
  3. helathcare_chatbot.py +141 -0
  4. requirements.txt +8 -0
  5. train_data.pkl +3 -0
README.md ADDED
@@ -0,0 +1,73 @@
+ ---
+ license: mit
+ ---
+ Healthcare Chatbot (FLAN-T5)
+
+ 📌 Overview
+
+ The Healthcare Chatbot is a medical question-answering AI powered by FLAN-T5, a fine-tuned language model. It can provide general guidance on medical topics, symptoms, and treatment suggestions based on a pre-trained dataset.
+
+ 🚨 Note: This chatbot is for informational purposes only and should not be used as a substitute for professional medical advice. Always consult a doctor for health-related concerns.
+
+ 📷 Screenshot
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6791349f0df2a77530968217/klDNYjR9JZlRKLmlHHZWP.png)
+
+ 🚀 How to Install & Run
+
+ 🔹 Step 1: Download the Project
+
+ Option 1: Clone from Hugging Face
+
+ git clone https://huggingface.co/alecmoran/Healthcare_Chatbot
+ cd Healthcare_Chatbot
+
+ Option 2: Download as a ZIP
+
+ Go to the Hugging Face model page, click "Download", and extract the ZIP file.
+
+ 🔹 Step 2: Download & Prepare the Model
+
+ The chatbot requires FLAN-T5 to be stored locally before running.
+
+ For Windows Users 🖥️
+
+ Open PowerShell in the project directory and run the download script:
+
+ ./getpowershell.ps1
+
+ Once the model is downloaded, run the chatbot:
+
+ python helathcare_chatbot.py
+
+ For macOS/Linux Users 💻
+
+ Open Terminal in the project directory and run the download script:
+
+ chmod +x get_model.sh && ./get_model.sh
+
+ Once the model is downloaded, run the chatbot:
+
+ python3 helathcare_chatbot.py
+
+ 💡 Features
+
+ ✅ Local Model Loading - Runs FLAN-T5 from your system for faster response times.
+ ✅ Medical Q&A Dataset - Includes common questions about symptoms and treatments.
+ ✅ Voice Input & Text-to-Speech - Allows users to speak their questions & hear responses.
+ ✅ Streamlit UI - Simple and interactive web-based interface.
+
+ ⚠️ Disclaimer
+
+ This chatbot provides general medical information but is not a replacement for professional healthcare advice. Always consult a licensed physician for medical concerns.
+
+ 📩 Contact & Support
+
+ For issues or improvements, open an issue on the Hugging Face repo.
+
getpowershell.ps1 ADDED
@@ -0,0 +1,20 @@
+ # Create a directory for the model
+ New-Item -ItemType Directory -Path .\flan-t5-small -Force
+
+ # Define the list of model files to fetch
+ $files = @(
+     "config.json",
+     "pytorch_model.bin",
+     "tokenizer.json",
+     "tokenizer_config.json",
+     "special_tokens_map.json",
+     "spiece.model"  # T5 ships a SentencePiece model, not a vocab.txt
+ )
+
+ # Base URL for the model files
+ $base_url = "https://huggingface.co/google/flan-t5-small/resolve/main/"
+
+ # Download each file into the local model directory
+ foreach ($file in $files) {
+     $url = "$base_url$file"
+     $output = ".\flan-t5-small\$file"
+     Invoke-WebRequest -Uri $url -OutFile $output
+     Write-Host "Downloaded: $file"
+ }
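The README points macOS/Linux users at a get_model.sh, but the commit's file list does not include one. Since the download logic is just "fetch a fixed list of files from a base URL", a cross-platform Python sketch of the same steps is easy to write; the file names below mirror the PowerShell script above, not an authoritative list:

```python
import urllib.request
from pathlib import Path

BASE_URL = "https://huggingface.co/google/flan-t5-small/resolve/main/"
FILES = [
    "config.json",
    "pytorch_model.bin",
    "tokenizer.json",
    "tokenizer_config.json",
    "special_tokens_map.json",
    "spiece.model",  # T5's SentencePiece model
]

def model_file_urls(base_url: str = BASE_URL) -> list[str]:
    """Build the full download URL for each model file."""
    return [base_url + name for name in FILES]

def download_model(target_dir: str = "flan-t5-small") -> None:
    """Download each model file into target_dir (created if missing)."""
    out = Path(target_dir)
    out.mkdir(exist_ok=True)
    for url in model_file_urls():
        dest = out / url.rsplit("/", 1)[-1]
        urllib.request.urlretrieve(url, dest)
        print(f"Downloaded: {dest.name}")
```

Running `download_model()` from the project directory produces the same `flan-t5-small/` layout the chatbot expects.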
helathcare_chatbot.py ADDED
@@ -0,0 +1,141 @@
+ import pickle
+
+ import streamlit as st
+ import speech_recognition as sr
+ import pyttsx3
+
+ try:
+     import torch
+     from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
+ except ModuleNotFoundError as e:
+     st.error(f"❌ Missing dependency: {e}. Please install required packages.")
+     st.stop()
+
+ # ==============================
+ # 1) Model Configuration
+ # ==============================
+ MODEL_DIR = "flan-t5-small"
+
+ # Use half precision on GPU; fall back to full precision on CPU
+ device = "cuda" if torch.cuda.is_available() else "cpu"
+ torch_dtype = torch.float16 if device == "cuda" else torch.float32
+
+ def load_model():
+     """Load the FLAN-T5 model from local storage."""
+     st.write("🚀 Loading FLAN-T5 model from local storage...")
+     try:
+         tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, local_files_only=True)
+         model = AutoModelForSeq2SeqLM.from_pretrained(
+             MODEL_DIR,
+             torch_dtype=torch_dtype,
+         ).to(device)
+         st.write("✅ Model loaded successfully from local storage!")
+         return tokenizer, model
+     except Exception as e:
+         st.error(f"❌ Model failed to load: {e}")
+         st.stop()
+
+ # ==============================
+ # 2) Initialize Streamlit UI
+ # ==============================
+ st.title("🩺 Healthcare Chatbot (FLAN-T5)")
+
+ # Load model (load_model already stops the app on failure)
+ tokenizer, model = load_model()
+
+ # ==============================
+ # 3) Load Medical Q&A Data
+ # ==============================
+ try:
+     st.write("📂 Loading medical Q&A data...")
+     with open("train_data.pkl", "rb") as file:
+         medical_qna = pickle.load(file)
+     st.write("✅ Q&A data loaded!")
+ except FileNotFoundError:
+     st.error("❌ 'train_data.pkl' not found. Please ensure it exists.")
+     st.stop()
+ except Exception as e:
+     st.error(f"❌ Failed to load Q&A data: {e}")
+     st.stop()
+
+ # ==============================
+ # 4) Chatbot Response Logic
+ # ==============================
+ def chatbot_response(user_input: str) -> str:
+     """
+     1. Check if user_input matches any question in medical_qna.
+     2. If so, return that pre-written answer.
+     3. Otherwise, feed FLAN-T5 a prompt to generate a medical response.
+     """
+     # Check the Q&A dataset first (simple case-insensitive substring match)
+     for qa in medical_qna:
+         if user_input.lower() in qa["question"].lower():
+             return qa["answer"]
+
+     # System-style prompt for medical answers
+     prompt = (
+         "You are a helpful medical assistant. The user asked:\n"
+         f"Question: {user_input}\n\n"
+         "Answer in a concise, accurate way. If you're unsure, advise seeing a doctor."
+     )
+
+     # Encode, then move the inputs to the same device as the model
+     inputs = tokenizer(prompt, return_tensors="pt", truncation=True).to(device)
+     # Generate
+     outputs = model.generate(
+         **inputs,
+         max_length=256,
+         num_beams=2,  # beam search for more deterministic, coherent responses
+         no_repeat_ngram_size=2,
+     )
+     # Decode
+     return tokenizer.decode(outputs[0], skip_special_tokens=True)
+
+ # ==============================
+ # 5) Text-to-Speech
+ # ==============================
+ tts_engine = pyttsx3.init()
+
+ def speak(text: str):
+     """Speak text aloud."""
+     tts_engine.say(text)
+     tts_engine.runAndWait()
+
+ # ==============================
+ # 6) Streamlit UI Logic
+ # ==============================
+ if st.button("What can you help me with?"):
+     st.write(
+         "I can provide general information about medical symptoms, treatments, "
+         "and offer guidance. If you have serious concerns, please contact a doctor."
+     )
+
+ user_input = st.text_input("Ask me a medical question:")
+ if st.button("Get Answer"):
+     if user_input.strip():
+         response = chatbot_response(user_input)
+         st.write(f"**Bot:** {response}")
+         speak(response)
+     else:
+         st.warning("Please enter a question.")
+
+ if st.button("🎙 Use Voice Input"):
+     recognizer = sr.Recognizer()
+     with sr.Microphone() as source:
+         st.write("🎤 Listening...")
+         audio = recognizer.listen(source)
+     try:
+         voice_input = recognizer.recognize_google(audio)
+         st.write(f"🗣 You said: {voice_input}")
+         response = chatbot_response(voice_input)
+         st.write(f"**Bot:** {response}")
+         speak(response)
+     except sr.UnknownValueError:
+         st.write("❌ Could not understand audio.")
+     except sr.RequestError:
+         st.write("❌ Speech recognition service error.")
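The substring lookup in chatbot_response implies that train_data.pkl holds a list of dicts with "question" and "answer" keys. A minimal self-contained sketch of building and querying such a file (the sample entries below are illustrative, not taken from the shipped dataset):

```python
import pickle

# Illustrative entries; the real train_data.pkl ships via Git LFS
sample_qna = [
    {"question": "What are the symptoms of the flu?",
     "answer": "Common flu symptoms include fever, cough, sore throat, and fatigue."},
    {"question": "How can I treat a mild headache?",
     "answer": "Rest, hydration, and over-the-counter pain relievers often help."},
]

# Write and re-read the pickle, mirroring how the app loads its data
with open("sample_train_data.pkl", "wb") as f:
    pickle.dump(sample_qna, f)

with open("sample_train_data.pkl", "rb") as f:
    medical_qna = pickle.load(f)

def lookup(user_input: str):
    """Case-insensitive substring match, mirroring chatbot_response."""
    for qa in medical_qna:
        if user_input.lower() in qa["question"].lower():
            return qa["answer"]
    return None  # caller falls through to the FLAN-T5 model
```

Note the match is a loose substring test: "flu" alone would hit the first entry, so short queries can shadow the generative fallback.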
requirements.txt ADDED
@@ -0,0 +1,8 @@
+ torch
+ transformers
+ accelerate
+ bitsandbytes
+ streamlit
+ speechrecognition
+ pyttsx3
+ huggingface_hub
train_data.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:003162f9e8e8a6050d79e427ec97880cfcad0bc2531e4c7768a28000278c4953
+ size 291