---
title: Human Hallucination Prediction
emoji: 👁️
colorFrom: blue
colorTo: purple
sdk: gradio
sdk_version: 5.23.1
app_file: app.py
pinned: false
license: mit
---

# Human Hallucination Prediction

This Gradio demo predicts whether humans will experience visual hallucinations or illusions when viewing specific images. Using adversarially robust neural networks, this tool can forecast perceptual phenomena such as illusory contours, figure-ground reversals, and other Gestalt effects before humans report them.

## How It Works

This tool uses **generative inference** with adversarially robust neural networks to predict human visual hallucinations. Robust models trained with adversarial examples develop more human-like perceptual biases, allowing them to predict when humans will perceive:

- **Illusory contours** (Kanizsa shapes, Ehrenstein illusion)
- **Figure-ground ambiguity** (Rubin's vase, bistable images)
- **Color spreading effects** (neon color spreading illusion)
- **Gestalt grouping** (continuity, proximity)
- **Brightness illusions** (Cornsweet effect)

## Features

- **Predict hallucinations** from uploaded images or example illusions
- **Visualize the prediction process** step by step
- **Compare different models** (robust vs. standard)
- **Adjust prediction parameters** for different perceptual phenomena
- **Pre-configured examples** of classic visual illusions

## Usage

1. **Select an example illusion** or upload your own image
2. **Click "Load Parameters"** to set optimal prediction settings
3. **Click "Run Generative Inference"** to predict the hallucination
4. **View the results**: the model shows what perceptual effects it predicts humans will experience

## Scientific Background

This demo is based on research showing that adversarially robust neural networks develop perceptual representations similar to human vision.
By using generative inference (optimizing images to maximize model confidence), we can reveal what perceptual structures the network expects to see, which often matches what humans hallucinate or perceive in ambiguous images.

## Installation

To run this demo locally:

```bash
# Clone the repository
git clone https://huggingface.co/spaces/ttoosi/Human_Hallucination_Prediction
cd Human_Hallucination_Prediction

# Install dependencies
pip install -r requirements.txt

# Run the app
python app.py
```

The web app will be available at http://localhost:7860.

## The Prediction Process

1. **Input**: Start with an ambiguous or illusion-inducing image
2. **Generative inference**: The robust neural network iteratively modifies the image to maximize its confidence
3. **Prediction**: The modifications reveal what perceptual structures the network expects, predicting what humans will hallucinate
4. **Visualization**: View the predicted hallucination emerging step by step

## Models

- **Robust ResNet50**: Trained with adversarial examples (ε = 3.0); develops human-like perceptual biases
- **Standard ResNet50**: Standard ImageNet training without adversarial robustness

## Citation

If you use this work in your research, please cite:

```bibtex
@article{toosi2024hallucination,
  title={Predicting Human Visual Hallucinations with Robust Neural Networks},
  author={Toosi, Tahereh},
  year={2024}
}
```

## About

**Developed by [Tahereh Toosi](https://toosi.github.io)**

This demo demonstrates how adversarially robust neural networks can predict human perceptual hallucinations before they occur.

## License

This project is licensed under the MIT License; see the LICENSE file for details.