---
title: XAI Term Project
emoji: 👁
colorFrom: red
colorTo: red
sdk: streamlit
sdk_version: 1.40.2
app_file: app.py
pinned: false
---

# Explainable AI for Housing Estimates

## Instructions to Run

1. Clone the repo:
   ```shell
   git clone https://huggingface.co/spaces/keesephillips/xai_term_project
   ```
2. Install the package dependencies:
   ```shell
   pip install -r requirements.txt
   ```
3. Run the application via Streamlit:
   ```shell
   streamlit run app.py
   ```
4. Access the app at http://localhost:8501 (Streamlit's default port).

## Background

This application demonstrates how complex models can be explained using Generalized Additive Models (GAMs) and LIME, and how the two techniques, used together, can often illuminate a model's decision boundary regardless of its complexity.

## Generalized Additive Models

GAMs are an extension of linear regression models that allow for non-linear relationships between the features and the target.

GAMs use "smooth functions of our feature variables, which can take on a great many forms, with more detail on what that means in the following section" (Clark). This is where GAMs differ from generalized linear models like linear regression: a GAM is "composed of a sum of smooth functions of features instead of or in addition to the standard linear feature contributions" (Clark). These smooth functions, or spline functions, can produce non-linear decision boundaries. Each spline shows the partial dependence of one feature on the target variable, enabling a high-level understanding of the model's predictions.

## Local Interpretable Model-agnostic Explanations (LIME)

- **Local**: the explanation reflects the behavior of the classifier in the neighborhood of the instance being predicted
- **Interpretable**: a human is able to make sense of it
- **Model-agnostic**: it can be applied to any machine learning model
- **Explanation**: an artifact that helps interpret the model

LIME is a powerful tool for explaining individual predictions of a black-box model such as a neural network. However, LIME only explains the model's behavior in a local region around the instance being explained; it does not characterize the model's entire decision space. This is an important limitation to keep in mind: each LIME explanation maps only a small subset of a black-box model's decision space.

## Purpose

This project aims to provide insight into Generalized Additive Models and how they can be used to model non-linear relationships.

## Models

- **Neural Network**: 3 linear layers, implemented in PyTorch
- **GAM**: linear GAM, implemented with pyGAM
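A minimal sketch of a 3-linear-layer regression network like the one listed above. The hidden-layer sizes and input width are assumptions; the app's actual architecture in `app.py` may differ.

```python
# Sketch: a 3-linear-layer PyTorch network for a housing-price estimate.
import torch
import torch.nn as nn

class HousingNet(nn.Module):
    def __init__(self, n_features: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64),   # hidden sizes are illustrative
            nn.ReLU(),
            nn.Linear(64, 32),
            nn.ReLU(),
            nn.Linear(32, 1),            # single price estimate
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = HousingNet(n_features=4)
out = model(torch.randn(8, 4))
print(out.shape)  # torch.Size([8, 1])
```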

## Author

Created by Keese Phillips

## Contact

For any questions or inquiries, please contact keese.phillips@duke.edu

## References

Clark, Michael. "Generalized Additive Models." Generalized Additive Models, m-clark.github.io/generalized-additive-models/introduction.html. Accessed 2 Dec. 2024.

## Attribution

Dr. Brinnae Bent