Health AI (ML-Only) Pipeline
This project trains ML models and serves predictions with FastAPI + SQLAlchemy (SQLite/MySQL).
Setup
- For local runs, MySQL is the default (loaded from `.env` or `.env.example`). Set `DB_TYPE=mysql` plus `DB_HOST`, `DB_PORT`, `DB_NAME`, `DB_USER`, `DB_PASSWORD` (or provide `DATABASE_URL`). To use SQLite instead, set `DB_TYPE=sqlite` and `DB_PATH`. Use `DB_AUTO_CREATE=1` if your MySQL user can create databases.
- Install dependencies:
pip install -r requirements.txt
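The variables above can be combined into a single connection URL. A minimal sketch, assuming a SQLAlchemy-style URL and fallback defaults that may not match the project's actual loader:

```python
import os

# Hypothetical sketch of how a SQLAlchemy URL could be assembled from the
# DB_* variables described above; the project's real logic may differ.
def build_database_url() -> str:
    # An explicit DATABASE_URL wins over individual settings.
    if os.getenv("DATABASE_URL"):
        return os.environ["DATABASE_URL"]
    if os.getenv("DB_TYPE", "mysql") == "sqlite":
        return f"sqlite:///{os.getenv('DB_PATH', 'health_ai.db')}"
    # MySQL branch; the defaults here are illustrative assumptions.
    user = os.getenv("DB_USER", "root")
    password = os.getenv("DB_PASSWORD", "")
    host = os.getenv("DB_HOST", "localhost")
    port = os.getenv("DB_PORT", "3306")
    name = os.getenv("DB_NAME", "health_ai")
    return f"mysql+pymysql://{user}:{password}@{host}:{port}/{name}"
```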
Required Training Columns
Your training CSV must include labels for all outputs:
- Heart attack risk: `heart_attack_risk` (0/1 or YES/NO)
- Temperature value: `temperature_value` or `temperature`
- Temperature status: `temperature_status` or `temp_status` or `temp_class`
- Blood pressure status: `bp_status` or `blood_pressure_status`
- Oxygen value: `spo2` or `oxygen_value` or `oxygen_level`
- Oxygen status: `oxygen_status` or `oxygen_level_status` or `spo2_status`
- Calories: `calories_burned` or `kcal`
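As a quick sanity check before training, you can verify a CSV header against the accepted column aliases above. An illustrative helper (not part of the project's code):

```python
# Accepted column names per required output, per the list above.
REQUIRED = {
    "heart_attack_risk": ["heart_attack_risk"],
    "temperature_value": ["temperature_value", "temperature"],
    "temperature_status": ["temperature_status", "temp_status", "temp_class"],
    "bp_status": ["bp_status", "blood_pressure_status"],
    "oxygen_value": ["spo2", "oxygen_value", "oxygen_level"],
    "oxygen_status": ["oxygen_status", "oxygen_level_status", "spo2_status"],
    "calories": ["calories_burned", "kcal"],
}

def missing_labels(columns) -> list:
    """Return the outputs for which no accepted column name is present."""
    cols = set(columns)
    return [out for out, names in REQUIRED.items()
            if not cols.intersection(names)]
```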
Sensor inputs are taken from the BLE CSV columns (temperature, PPG counts, accel, HR, RR, steps, cadence, eda, etc.).
Train Models
python train_models.py --csv "C:\AI Projects\Prediction for ml\Zoro_20260105053511_20260105110629_9a3a6241_b96f.csv"
Upload Dataset to Hugging Face
Upload dataset files to the Hugging Face dataset repo Sanjay2198/health_ai. Keep datasets out of this repo and push them to HF instead.
CLI:
pip install -U huggingface_hub
huggingface-cli login
huggingface-cli upload --repo-type dataset Sanjay2198/health_ai data/
Python:
from huggingface_hub import login, upload_folder
login()
upload_folder(folder_path=".", repo_id="Sanjay2198/health_ai", repo_type="dataset")
Set `folder_path` to your dataset directory (for example `data/`) to avoid uploading project files.
Git (HTTPS):
git lfs install
git clone https://huggingface.co/datasets/Sanjay2198/health_ai
# copy your dataset files into the cloned repo
git add .
git commit -m "Add dataset files"
git push
Git (SSH):
git lfs install
git clone git@hf.co:datasets/Sanjay2198/health_ai
# copy your dataset files into the cloned repo
git add .
git commit -m "Add dataset files"
git push
You can also upload directly from the Hugging Face website using the File Uploader.
Hugging Face Pull Request Flow
If you are contributing via a Hugging Face pull request, use the PR refs workflow:
git clone https://huggingface.co/datasets/Sanjay2198/health_ai
cd health_ai
git fetch origin refs/pr/1:pr/1
git checkout pr/1
hf auth login
# make your changes
git push origin pr/1:refs/pr/1
Use an access token as the git password/credential, then hit Publish on Hugging Face when ready to merge.
Apache Spark (End-to-End)
Spark training and batch prediction run fully in Python with PySpark. This is a separate path from the FastAPI/sklearn models.
Requirements:
- Java 11+ installed and `JAVA_HOME` set.
- `pip install pyspark` (already included in `requirements.txt`).
Train with Spark:
python spark_train_models.py --csv "C:\AI Projects\Prediction for ml\Zoro_20260105053511_20260105110629_9a3a6241_b96f.csv" --model-dir models_spark
Predict with Spark:
python spark_predict_from_csv.py --input-csv "C:\Users\CSC\OneDrive\Documents\GitHub\Prediction_ml\Zoro_20260105053511_20260105110629_9a3a6241_b96f.csv" --model-dir models_spark
The Spark prediction script writes a single CSV named *_spark_predictions.csv by default.
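The output naming convention can be sketched with a small helper; this is a hypothetical illustration of the `*_spark_predictions.csv` pattern, not the script's actual code:

```python
from pathlib import Path

# Hypothetical helper mirroring the "*_spark_predictions.csv" naming
# convention described above; spark_predict_from_csv.py may differ.
def spark_output_path(input_csv: str) -> str:
    p = Path(input_csv)
    return str(p.with_name(p.stem + "_spark_predictions.csv"))
```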
Run API
uvicorn app.main:app --reload
# python -m uvicorn main:app --reload --port 9000
Render Deploy Note
For deployment on Render:
- Build Command: `pip install -r requirements.txt`
- Start Command: `./start.sh`
- Add environment variables in Render's Environment settings: `DB_HOST`, `DB_PORT`, `DB_NAME`, `DB_USER`, `DB_PASSWORD`, `DB_AUTO_CREATE`, `MODEL_DIR` (use production database details).
- Ensure the `.env` file is not included in the repository (environment variables are set in the dashboard).
History & Frontend Support
- `GET /predictions?limit=<n>` returns the most recent predictions (default 20, max 500) so the frontend can remember what was already generated.
- `GET /predictions/summary` returns totals, averages, and status distributions for the stored predictions, which is handy for simple visualizations and dashboards.
- Both endpoints are cached on the API side (list cache TTL 20 s, summary cache TTL 15 s) and invalidated after new predictions, so the UI stays responsive even under load.
- The frontend includes a "Session overview" card with four quick metrics and a trend line that plots heart-risk probability, temperature, and oxygen over the latest predictions.
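The server-side caching behavior can be sketched as a tiny TTL cache. A minimal illustration under the TTLs stated above, not the project's actual implementation:

```python
import time

# Minimal TTL cache sketch: one cached value with an expiry, plus an
# invalidate() hook called after new predictions are written.
class TTLCache:
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._value = None
        self._stored_at = None

    def get(self):
        # Return None when nothing is cached or the entry has expired.
        if self._stored_at is None or time.monotonic() - self._stored_at > self.ttl:
            return None
        return self._value

    def set(self, value):
        self._value = value
        self._stored_at = time.monotonic()

    def invalidate(self):
        # Drop the cached entry, e.g. after a new prediction is stored.
        self._stored_at = None

predictions_cache = TTLCache(20.0)  # list cache, 20 s TTL
summary_cache = TTLCache(15.0)      # summary cache, 15 s TTL
```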
Frontend preview
- Run `uvicorn app.main:app --reload --port 9000` and visit `http://localhost:9000/` to load the UI directly from the backend; the server serves the static dashboard, and `http://localhost:9000/frontend/index.html` still works if you prefer the standalone file.
- (Optional) Alternatively, serve `frontend/index.html` by running `python -m http.server` and visiting `http://localhost:8000/frontend/index.html`.
- The embedded dashboard caches the last payloads (predictions + summary) in `localStorage` and throttles API calls to once every ~20 seconds unless you force-refresh with the button, so it loads faster on repeat visits and avoids hammering the backend.
- The page calls `GET /predictions` and `/predictions/summary`, caches the payloads in `localStorage`, and renders summary cards, a doughnut risk chart, a bar chart of statuses, and a table of recent predictions.
Predict from CSV
python predict_from_csv.py --input-csv "C:\Users\CSC\OneDrive\Documents\GitHub\Prediction_ml\Zoro_20260105053511_20260105110629_9a3a6241_b96f.csv"
Add --write-db to store predictions in the configured database.
Clinical Reference Ranges
Blood pressure (if you pass `systolic` and `diastolic`, the API overrides `blood_pressure`):
Category,Systolic (Top),Diastolic (Bottom)
Normal,< 120,< 80
Elevated,120-129,< 80
Hypertension (Stage 1),130-139,80-89
Hypertension (Stage 2),140+,90+
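The table above can be expressed as a simple classifier. This is an illustrative sketch using the status codes listed later in this README; the API's actual boundary handling may differ:

```python
# Illustrative BP classifier for the table above. A reading is bumped to
# the higher category if either number qualifies.
def classify_bp(systolic: float, diastolic: float) -> str:
    if systolic >= 140 or diastolic >= 90:
        return "STAGE2"    # Hypertension (Stage 2)
    if systolic >= 130 or diastolic >= 80:
        return "STAGE1"    # Hypertension (Stage 1)
    if systolic >= 120:
        return "ELEVATED"  # Elevated (120-129 systolic, diastolic < 80)
    return "NORMAL"        # Normal (< 120 / < 80)
```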
Temperature (if you pass `temperature_c` or `temperature_f`, the API overrides `temperature.status` and adds `temperature.interpretation`):
Category,Celsius (C),Fahrenheit (F),Interpretation
Hypothermia,Below 35.0C,Below 95.0F,Dangerously Low: Seek medical help.
Normal,36.1C - 37.2C,97.0F - 99.0F,Healthy resting range.
Low-Grade Fever,37.3C - 38.0C,99.1F - 100.4F,Slightly elevated; monitor symptoms.
Fever (High),38.1C - 39.4C,100.5F - 103.0F,Body is likely fighting an infection.
Hyperpyrexia,Above 41.1C,Above 106.0F,Dangerously High: Medical emergency.
Status codes used in the API:
- Blood pressure: `NORMAL`, `ELEVATED`, `STAGE1`, `STAGE2`
- Temperature: `HYPOTHERM`, `NORMAL`, `LOW_FEVER`, `FEVER_HIGH`, `HYPERPYR`
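The temperature table and status codes above can likewise be sketched as a classifier. Note the table leaves gaps (e.g. 35.0-36.0 C and 39.5-41.0 C), so how those ranges are binned here is an assumption, not the API's confirmed behavior:

```python
# Illustrative Celsius classifier for the temperature table above; gap
# ranges are assigned to the nearest lower category as an assumption.
def classify_temperature_c(temp_c: float) -> str:
    if temp_c < 35.0:
        return "HYPOTHERM"   # Hypothermia: dangerously low
    if temp_c <= 37.2:
        return "NORMAL"      # Healthy resting range
    if temp_c <= 38.0:
        return "LOW_FEVER"   # Low-grade fever: monitor symptoms
    if temp_c <= 41.1:
        return "FEVER_HIGH"  # Fever: likely fighting an infection
    return "HYPERPYR"        # Hyperpyrexia: medical emergency
```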
Apache Kafka (Best Flow)
End-to-end Kafka flow uses:
- Producer (optional): API publishes predictions to Kafka.
- Consumer: `kafka_worker.py` consumes sensor payloads, runs ML, stores results in the DB, and publishes predictions.
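Per message, the worker's core step (decode the sensor payload, predict, re-encode the event) might look like this sketch. The field names and the injected `predict` callable are assumptions, not `kafka_worker.py`'s actual interface:

```python
import json

# Hypothetical per-message handler: decode a raw Kafka value, run the
# supplied prediction function, and serialize the outgoing event.
def handle_message(raw_bytes: bytes, predict) -> bytes:
    payload = json.loads(raw_bytes.decode("utf-8"))   # sensor readings
    result = predict(payload)                         # ML layer, injected
    event = {"input": payload, "prediction": result}  # assumed event shape
    return json.dumps(event).encode("utf-8")          # ready to publish
```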
Start Kafka locally:
docker compose -f docker-compose.kafka.yml up -d
Train models (if not done yet):
python train_models.py --csv "C:\AI Projects\Prediction for ml\Zoro_20260105053511_20260105110629_9a3a6241_b96f.csv"
Run the Kafka worker:
python kafka_worker.py --model-dir models
Send a sample payload:
python kafka_produce_sample.py
Enable API publishing to Kafka (optional):
- Set `KAFKA_PUBLISH=1` in your environment.
- The API will publish prediction events to `KAFKA_OUTPUT_TOPIC`.
Kafka env vars (see `.env.example`):
- `KAFKA_BOOTSTRAP_SERVERS` (default `localhost:9092`)
- `KAFKA_INPUT_TOPIC` (default `health_input`)
- `KAFKA_OUTPUT_TOPIC` (default `health_predictions`)
- `KAFKA_GROUP_ID` (default `health_ai_worker`)