
Code4Change: Intelligent Court Scheduling System

Purpose-built for hackathon evaluation. This repository runs out of the box using the Streamlit dashboard and the uv tool.

Requirements

  • Python 3.11+
  • uv (required)
    • macOS/Linux: curl -LsSf https://astral.sh/uv/install.sh | sh
    • Windows (PowerShell): irm https://astral.sh/uv/install.ps1 | iex

Quick Start (Dashboard)

  1. Install uv (see above) and ensure Python 3.11+ is available.
  2. Clone this repository.
  3. Place ISDMHack_Cases_WPfinal.csv and ISDMHack_Hear.csv (CSV format, exact names required) in the Data/ folder, or provide court_data.duckdb there instead.
  4. Launch the dashboard:
uv run streamlit run scheduler/dashboard/app.py

Then open http://localhost:8501 in your browser.

The dashboard provides:

  • Run EDA pipeline (process raw data and extract parameters)
  • Explore data and parameters
  • Generate cases and run simulations
  • Review cause lists and judge overrides
  • Compare performance and export reports

Command Line (optional)

All operations are available via CLI as well:

uv run court-scheduler --help

# End-to-end workflow
uv run court-scheduler workflow --cases 10000 --days 384
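For scripted runs (e.g. sweeping case volumes), the workflow command can be composed programmatically. A minimal sketch; the `build_workflow_cmd` and `run_workflow` helpers are illustrative and not part of the repository:

```python
import subprocess

def build_workflow_cmd(cases: int, days: int) -> list[str]:
    """Compose the end-to-end workflow invocation shown above."""
    return [
        "uv", "run", "court-scheduler", "workflow",
        "--cases", str(cases),
        "--days", str(days),
    ]

def run_workflow(cases: int, days: int) -> None:
    """Shell out to the CLI; raises CalledProcessError if the run fails."""
    subprocess.run(build_workflow_cmd(cases, days), check=True)
```

For example, `run_workflow(10000, 384)` reproduces the command above.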

For a detailed walkthrough tailored for judges, see docs/HACKATHON_SUBMISSION.md.

Run with Docker (recommended for judges)

If you prefer not to install Python or uv locally, use the provided Docker image.

  1. Build the image (run in repo root):
docker build -t code4change-analysis .
  2. Show CLI help (Windows PowerShell example with volume mounts):
docker run --rm `
  -v ${PWD}\Data:/app/Data `
  -v ${PWD}\outputs:/app/outputs `
  code4change-analysis court-scheduler --help
  3. Example CLI workflow:
docker run --rm `
  -v ${PWD}\Data:/app/Data `
  -v ${PWD}\outputs:/app/outputs `
  code4change-analysis court-scheduler workflow --cases 10000 --days 384
  4. Run the Streamlit dashboard:
docker run --rm -p 8501:8501 `
  -v ${PWD}\Data:/app/Data `
  -v ${PWD}\outputs:/app/outputs `
  code4change-analysis `
  streamlit run scheduler/dashboard/app.py --server.address=0.0.0.0

Then open http://localhost:8501.

Note for Windows CMD: use ^ for line continuation and replace ${PWD} with the repository's full path.
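If hand-translating the mounts between shells is error-prone, the docker run invocation can be built with absolute paths from any shell. A stdlib-only sketch; the `docker_run_cmd` helper is hypothetical, not part of the repository:

```python
from pathlib import Path

def docker_run_cmd(image: str = "code4change-analysis", *args: str) -> list[str]:
    """Build a docker run command mounting Data/ and outputs/ with absolute
    paths, so the same invocation works from CMD, PowerShell, or a POSIX shell."""
    root = Path.cwd()
    cmd = ["docker", "run", "--rm"]
    for sub in ("Data", "outputs"):
        cmd += ["-v", f"{root / sub}:/app/{sub}"]
    return cmd + [image, *args]
```

For example, `print(" ".join(docker_run_cmd("code4change-analysis", "court-scheduler", "--help")))` prints a ready-to-paste command for the current working directory.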

Data (DuckDB-first)

This repository uses a DuckDB snapshot as the canonical raw dataset.

  • Preferred source: Data/court_data.duckdb (tables: cases, hearings). If this file is present, the EDA step will load directly from it.
  • CSV fallback: If the DuckDB file is missing, place the two organizer CSVs in Data/ with the exact names below and the EDA step will load them automatically:
    • ISDMHack_Cases_WPfinal.csv
    • ISDMHack_Hear.csv

No manual pre-processing is required; launch the dashboard and click “Run EDA Pipeline.”
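The DuckDB-first resolution order described above can be sketched as a small stdlib-only check (the function name and error message are illustrative, not the repository's actual loader):

```python
from pathlib import Path

REQUIRED_CSVS = ("ISDMHack_Cases_WPfinal.csv", "ISDMHack_Hear.csv")

def resolve_raw_source(data_dir: str = "Data") -> str:
    """Mirror the DuckDB-first policy: prefer the snapshot, fall back to the
    two organizer CSVs, and fail loudly if neither input is present."""
    data = Path(data_dir)
    if (data / "court_data.duckdb").is_file():
        return "duckdb"
    if all((data / name).is_file() for name in REQUIRED_CSVS):
        return "csv"
    missing = [name for name in REQUIRED_CSVS if not (data / name).is_file()]
    raise FileNotFoundError(
        f"Put court_data.duckdb or the organizer CSVs in {data}/ (missing: {missing})"
    )
```

Running this before launching the dashboard surfaces a naming or placement mistake immediately instead of partway through the EDA step.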

Notes

  • This submission intentionally focuses on the end-to-end demo path. Internal development notes, enhancements, and bug fix logs have been removed from the README.
  • uv is enforced by the dashboard for a consistent, reproducible environment.