---
title: Human Hallucination Prediction
emoji: 👁️
colorFrom: blue
colorTo: purple
sdk: gradio
sdk_version: 5.23.1
app_file: app.py
pinned: false
license: mit
---
# Human Hallucination Prediction
This Gradio demo predicts whether humans will experience visual hallucinations or illusions when viewing specific images. Using adversarially robust neural networks, this tool can forecast perceptual phenomena like illusory contours, figure-ground reversals, and other Gestalt effects before humans report them.
## How It Works
This tool uses generative inference with adversarially robust neural networks to predict human visual hallucinations. Robust models trained with adversarial examples develop more human-like perceptual biases, allowing them to predict when humans will perceive the following (a minimal sketch of the inference loop appears after this list):
- Illusory contours (Kanizsa shapes, Ehrenstein illusion)
- Figure-ground ambiguity (Rubin's vase, bistable images)
- Color spreading effects (Neon color illusion)
- Gestalt grouping (Continuity, proximity)
- Brightness illusions (Cornsweet effect)
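A minimal sketch of such an inference loop, assuming the input is a 3×H×W float tensor in [0, 1]. The names (`generative_inference`, `num_steps`, `step_size`) are illustrative, and a standard torchvision ResNet50 stands in for the robust checkpoint the demo actually loads:

```python
# Minimal sketch of generative inference: gradient ascent on a class logit.
# `image` is a 3xHxW float tensor in [0, 1]; all names here are illustrative.
import torch
import torchvision.models as models

model = models.resnet50(weights="IMAGENET1K_V1").eval()  # stand-in for the robust checkpoint

def generative_inference(image, target_class, num_steps=50, step_size=0.01):
    x = image.clone().requires_grad_(True)
    for _ in range(num_steps):
        logits = model(x.unsqueeze(0))
        confidence = logits[0, target_class]          # the model's confidence signal
        grad, = torch.autograd.grad(confidence, x)    # how the image "wants" to change
        with torch.no_grad():
            x += step_size * grad / (grad.norm() + 1e-8)  # normalized ascent step
            x.clamp_(0, 1)                                # keep a valid image
    return x.detach()
```

Each step nudges the image toward whatever the model is most confident about; with a robust model, those nudges tend to fill in the same contours and surfaces that humans report perceiving.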
## Features
- Predict hallucinations from uploaded images or example illusions
- Visualize the prediction process step-by-step
- Compare different models (robust vs. standard)
- Adjust prediction parameters for different perceptual phenomena
- Pre-configured examples of classic visual illusions
## Usage
1. Choose an input: Pick a pre-configured example illusion from the dropdown, or upload your own image.
2. Load parameters: Click "Load Parameters" to fill in optimal prediction settings for that example, or adjust them manually.
3. Select the affected part of the visual field: Click on the input image or the mask preview to set the mask center, then fine-tune Mask center X/Y, Mask radius, and Mask sigma in the Adaptive Gaussian mask section if needed. The preview circle shows the region that will receive the stronger constraint during inference. (A sketch of one way to build such a mask follows this list.)
4. Run inference: Click "Run Generative Inference" to start the prediction. Progress and intermediate steps are shown in the interface.
5. View results: Inspect the predicted perceptual effects, visualizations, and any generated outputs in the result panels.
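The mask controls map onto a soft circular window over the image. Below is a minimal sketch of one way such a mask could be built from the center, radius, and sigma settings; it is an assumption for illustration, not the app's actual implementation:

```python
# Hedged sketch of an adaptive Gaussian mask: a soft disk of radius `radius`
# centred at (cx, cy), with edges smoothed by `sigma`.
import numpy as np

def gaussian_mask(height, width, cx, cy, radius, sigma):
    ys, xs = np.mgrid[0:height, 0:width]
    dist = np.sqrt((xs - cx) ** 2 + (ys - cy) ** 2)
    # 1.0 inside the radius, Gaussian falloff outside it
    mask = np.where(dist <= radius, 1.0,
                    np.exp(-((dist - radius) ** 2) / (2 * sigma ** 2)))
    return mask.astype(np.float32)

# Example: a 224x224 mask centred in the middle of the image
m = gaussian_mask(224, 224, cx=112, cy=112, radius=40, sigma=15)
```

Values are 1.0 inside the circle and fall off smoothly outside it, so the constraint fades rather than cutting off sharply.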
## Scientific Background
This demo is based on research showing that adversarially robust neural networks develop perceptual representations similar to human vision. By using generative inference (optimizing images to maximize model confidence), we can reveal what perceptual structures the network expects to see—which often matches what humans hallucinate or perceive in ambiguous images.
## Prerequisites
- Python 3.8 or higher
- pip (Python package manager)
## Installation
Clone the repository:

```bash
git clone https://huggingface.co/spaces/ttoosi/Human_Hallucination_Prediction
cd Human_Hallucination_Prediction
```

Create a virtual environment (recommended):

```bash
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
```

Install dependencies:

```bash
pip install -r requirements.txt
```

Run the app:

```bash
python app.py
```

Optional: specify a port with `--port` (default is 7860):

```bash
python app.py --port 8861
```
The web app will be available at http://localhost:7860 (or the port you specified).
Note: Model weights (e.g., the robust ResNet50) are downloaded automatically from Hugging Face on first run and cached in the `models/` directory. The app also creates a `stimuli/` directory for example images.
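If you want to fetch the weights yourself, here is a hedged sketch using `huggingface_hub`; the repo id and filename below are placeholders, not the app's real configuration:

```python
# Hedged sketch of the first-run weight download. The repo id and filename
# are placeholders for illustration only.
from huggingface_hub import hf_hub_download

weights_path = hf_hub_download(
    repo_id="ttoosi/Human_Hallucination_Prediction",  # placeholder repo id
    filename="robust_resnet50.pt",                    # placeholder filename
    local_dir="models",                               # cache alongside the app
)
```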
## Running with Docker
```bash
docker build -t human-hallucination-prediction .
docker run -p 7860:7860 human-hallucination-prediction
```
Then open http://localhost:7860 in your browser.
## The Prediction Process
1. Input: You provide an ambiguous or illusion-inducing image, or use a built-in example.
2. Generative inference: The adversarially robust network iteratively updates the image to maximize its confidence, guided by your chosen parameters (model, layer, noise, step size, and so on; a sketch of such a parameter set follows this list).
3. Prediction: The resulting changes reveal the perceptual structures the network expects, which correspond to what humans tend to hallucinate or perceive in such images.
4. Visualization: The interface shows the predicted hallucination and intermediate steps as the optimization runs.
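As a reference point, the kind of parameter set these steps describe might look like the following; every key and value is a placeholder for illustration, not the app's actual schema:

```python
# Illustrative parameter set for a run; all keys and values are placeholders
# showing the kind of knobs the interface exposes.
params = {
    "model": "robust_resnet50",   # robust vs. standard backbone
    "layer": "layer4",            # which representation guides the inference
    "noise_level": 0.05,          # perturbation added between steps
    "step_size": 0.01,            # gradient ascent step
    "num_steps": 50,              # optimization length
    "mask": {"cx": 112, "cy": 112, "radius": 40, "sigma": 15},
}
```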
## Models
- Robust ResNet50: Trained with adversarial examples (ε=3.0); develops human-like perceptual biases (a loading sketch follows)
- Standard ResNet50: Standard ImageNet training, without adversarial robustness
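A hedged sketch of how these two backbones could be loaded, assuming the robust weights are a plain `state_dict` checkpoint; the path is a placeholder:

```python
# Hedged sketch of loading the two backbones. The robust checkpoint path is a
# placeholder; the standard model comes straight from torchvision.
import torch
import torchvision.models as models

standard = models.resnet50(weights="IMAGENET1K_V1").eval()

robust = models.resnet50(weights=None)
state = torch.load("models/robust_resnet50.pt", map_location="cpu")  # placeholder path
robust.load_state_dict(state)
robust.eval()
```

Note that published robust checkpoints sometimes wrap the weights in a container module, in which case the state dict keys need to be unwrapped before loading.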
## Citation
If you use this work in your research, please cite:
```bibtex
@article{toosi2024hallucination,
  title={Predicting Human Visual Hallucinations with Robust Neural Networks},
  author={Toosi, Tahereh},
  year={2024}
}
```
## About
Developed by Tahereh Toosi
This demo shows how adversarially robust neural networks can predict human perceptual hallucinations before they occur.
## License
This project is licensed under the MIT License - see the LICENSE file for details.