Tiffany Degbotse committed on
Commit 87cab8b · 1 Parent(s): 119706d

Deploy EchoML directly from Query_Your_Model

Files changed (7)
  1. Dockerfile +14 -0
  2. README.md +11 -0
  3. __pycache__/app.cpython-313.pyc +0 -0
  4. app.py +16 -0
  5. app/app_chat.py +3 -3
  6. requirements.txt +3 -1
  7. runtime.txt +1 -0
Dockerfile ADDED
@@ -0,0 +1,14 @@
+ FROM python:3.10-slim
+
+ WORKDIR /app
+
+ # Copy all project files
+ COPY . .
+
+ # Install dependencies
+ RUN pip install --no-cache-dir -r Query_Your_Model/requirements.txt
+
+ EXPOSE 7860
+
+ # Command to run both FastAPI and Streamlit (same as your app.py)
+ CMD ["python", "Query_Your_Model/app.py"]
README.md CHANGED
@@ -1,3 +1,14 @@
+ ---
+ title: Query Your Model
+ emoji: 💬
+ colorFrom: indigo
+ colorTo: blue
+ sdk: streamlit
+ sdk_version: "1.39.0"
+ app_file: Query_Your_Model/app.py
+ pinned: false
+ ---
+
  # Query Your Model — Retrieval-Augmented Explainability

  This project implements an end-to-end framework for interacting with machine learning models through natural-language–style queries. It combines FastAPI (for backend explainability services), Streamlit (for a conversational UI), and explainability methods like SHAP with retrieval-augmented reasoning to make predictions more transparent and interpretable. The system allows you to upload a trained model, input feature values for a new instance, generate explanations using SHAP feature attributions, retrieve similar past reasoning cases for context, and interact with the model via a simple chat-style interface.
__pycache__/app.cpython-313.pyc ADDED
Binary file (973 Bytes).
 
app.py ADDED
@@ -0,0 +1,16 @@
+ import subprocess
+ import threading
+ import time
+
+ def run_fastapi():
+     subprocess.Popen(["uvicorn", "Query_Your_Model.app.api_fastapi:app", "--host", "0.0.0.0", "--port", "8000"])
+
+ def run_streamlit():
+     time.sleep(3)
+     subprocess.run([
+         "streamlit", "run", "Query_Your_Model/app/ui_streamlit.py",
+         "--server.port=7860", "--server.address=0.0.0.0"
+     ])
+
+ threading.Thread(target=run_fastapi).start()
+ run_streamlit()
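The launcher above waits a fixed `time.sleep(3)` before starting Streamlit and hopes the FastAPI backend is up. A more robust sketch (my addition, not part of this commit) polls the backend port until it accepts connections; `wait_for_port` is a hypothetical helper name:

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Return True once host:port accepts a TCP connection, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # Attempt a short TCP connect; success means the server is listening.
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            time.sleep(0.2)  # back off briefly before retrying
    return False
```

In `run_streamlit`, `time.sleep(3)` could then be replaced with `wait_for_port("127.0.0.1", 8000)`, matching the port uvicorn binds in `run_fastapi`.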
app/app_chat.py CHANGED
@@ -14,7 +14,7 @@ st.title("💬 Chat with Your Model")
  # ----------------------------------
  with st.sidebar:
      st.header("Settings")
-     api_url = st.text_input("FastAPI endpoint", value="http://127.0.0.1:8000/explain")
+     api_url = st.text_input("FastAPI endpoint", value="https://<tiffany101>.hf.space/explain")
      model_path = st.text_input("Model path", value="Query_Your_Model/model_data/model.pkl")
      feat_names_str = st.text_input(
          "Feature names (comma-separated)",
@@ -27,7 +27,7 @@ with st.sidebar:
  feat_names = [s.strip() for s in feat_names_str.split(",")]

  # OpenAI client (set OPENAI_API_KEY in .streamlit/secrets.toml)
- client = OpenAI(api_key="")
+ client = OpenAI(api_key=st.secrets["OPENAI_API_KEY"])

  # ------------------------------------------------
  # Helpers
@@ -271,7 +271,7 @@ def apply_what_if(user_q, feat_names, current_features):
  # ------------------------------------------------
  # Step 1: Enter features & predict
  # ------------------------------------------------
- st.subheader("Step 1 – Enter features to generate a prediction")
+ st.subheader("Step 1 – Enter features to generate a prediction, in the order: sepal length, sepal width, petal length, petal width")
  user_features = st.text_input("Enter feature values (comma-separated)", "")
  predict_btn = st.button("🔍 Predict and Explain")
requirements.txt CHANGED
@@ -8,4 +8,6 @@ python-multipart==0.0.9
  streamlit==1.39.0
  pydantic==2.9.2
  joblib==1.4.2
- matplotlib==3.9.2S
+ matplotlib==3.9.2
+ requests==2.32.3
+ openai==1.40.6
runtime.txt ADDED
@@ -0,0 +1 @@
+ python-3.10