---
title: NATO ASI AI Disaster Management
emoji: π
colorFrom: blue
colorTo: red
sdk: gradio
sdk_version: 6.0.0
app_file: app.py
pinned: false
license: mit
tags:
- disaster-response
- geospatial-ai
- nato
- education
- satellite-imagery
short_description: Interactive educational platform for AI in Disaster Response
---

# NATO ASI - AI for Disaster Management: Interactive Gradio App

**A state-of-the-art interactive web application for exploring AI-powered disaster management techniques.**

---

## Overview

This Gradio application provides an **interactive learning platform** for participants of the NATO Advanced Study Institute on *"AI for Disaster Management"*. It showcases the key concepts, models, and techniques taught throughout the 7-day curriculum.

### Key Features

- **Curriculum Explorer**: Navigate through the complete 7-day course structure
- **Building Damage Detection**: Interactive CNN-based damage classification demo
- **Flood Mapping**: Semantic segmentation with U-Net for flood extent mapping
- **Transfer Learning**: Compare pre-trained models (ResNet50, EfficientNet, etc.)
- **Deployment & Ethics**: Learn about production deployment and responsible AI
- **Resources**: Comprehensive links to datasets, papers, and learning materials

---

## Quick Start

### Prerequisites

- Python 3.8 or higher
- pip package manager

### Installation

1. **Clone the repository** (if you haven't already):

   ```bash
   git clone https://github.com/AI4DM/Geospatial-AI-for-Humanitarian-Response.git
   cd Geospatial-AI-for-Humanitarian-Response
   ```

2. **Install dependencies**:

   ```bash
   pip install -r requirements.txt
   ```

3. **Launch the app**:

   ```bash
   python gradio_app.py
   ```

4. **Access the app**:
   - Open your browser and navigate to `http://localhost:7860`
   - Or use the public Gradio link printed in the terminal (if sharing is enabled)

### Docker Installation (Alternative)

```bash
# Build the Docker image
docker build -t nato-asi-gradio .

# Run the container
docker run -p 7860:7860 nato-asi-gradio
```
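Whether launched directly or in a container, the server port may need to change (for example, when 7860 is taken). Gradio honors the `GRADIO_SERVER_PORT` environment variable; a minimal sketch of reading it explicitly in `gradio_app.py` (`resolve_port` is a hypothetical helper, not part of the repo):

```python
import os

def resolve_port(default: int = 7860) -> int:
    """Return the server port: environment override first, then the default."""
    return int(os.environ.get("GRADIO_SERVER_PORT", default))

# In gradio_app.py this could feed demo.launch(server_port=resolve_port())
print(f"Would launch on http://localhost:{resolve_port()}")
```

This keeps the port configurable per deployment without editing the source.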
---

## Application Structure

### Tab 1: Welcome
- Overview of the NATO ASI curriculum
- Learning philosophy and objectives
- Quick links to different sections

### Tab 2: Curriculum
- Detailed day-by-day breakdown
- Learning outcomes for each module
- Key concepts and techniques

### Tab 3: Damage Detection
- **Functionality**: Upload building images or generate samples
- **Model**: CNN-based classification (4 damage levels)
- **Output**: Damage level prediction with confidence scores
- **Learning**: Days 2-3 content (CNN basics and production systems)

### Tab 4: Flood Mapping
- **Functionality**: Upload satellite imagery or generate flood scenarios
- **Model**: U-Net semantic segmentation
- **Output**: Pixel-wise flood extent maps with IoU scores
- **Learning**: Days 4-5 content (semantic segmentation)

### Tab 5: Transfer Learning
- **Functionality**: Compare different pre-trained architectures
- **Models**: ResNet50, VGG16, MobileNetV2, EfficientNetB0
- **Output**: Performance metrics and training time comparisons
- **Learning**: Day 6 content (transfer learning techniques)

### Tab 6: Deployment & Ethics
- Model optimization techniques (TFLite, quantization)
- Deployment strategies (cloud, edge, hybrid)
- Human-in-the-loop workflows
- Ethical AI principles for disaster management

### Tab 7: Resources
- Curated datasets for practice
- Online courses and tutorials
- Academic papers and research
- Humanitarian organizations and communities

---

## Customization & Extension

### Integrating Your Trained Models

The app currently uses **simulated predictions** for demonstration purposes. To integrate your actual trained models:

#### 1. Load Your Model

```python
import tensorflow as tf

# Load your trained models
damage_model = tf.keras.models.load_model('path/to/your/damage_model.h5')
flood_model = tf.keras.models.load_model('path/to/your/flood_model.h5')
```
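Before wiring a model in, it helps to factor out the resize/normalize/batch steps that a 224×224-input classifier expects. A minimal sketch (`preprocess_for_model` is an assumed helper name, not a function from the repo):

```python
import numpy as np
from PIL import Image

def preprocess_for_model(image: Image.Image, size=(224, 224)) -> np.ndarray:
    """Resize to the model's input size, scale to [0, 1], add a batch axis."""
    img = image.convert("RGB").resize(size)
    arr = np.asarray(img, dtype=np.float32) / 255.0
    return np.expand_dims(arr, axis=0)  # shape: (1, 224, 224, 3)
```

The batched array can then be passed straight to `damage_model.predict(...)`.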
#### 2. Replace Simulation Functions

Update the `simulate_damage_detection()` and `simulate_flood_segmentation()` functions:

```python
import numpy as np

def real_damage_detection(image, confidence_threshold=0.7):
    """Real damage detection using the trained model."""
    # Preprocess image
    img_array = np.array(image.resize((224, 224))) / 255.0
    img_array = np.expand_dims(img_array, axis=0)

    # Predict
    predictions = damage_model.predict(img_array)
    damage_level = np.argmax(predictions[0])
    confidence = predictions[0][damage_level]

    # Visualize results
    result_img = create_visualization(image, damage_level, confidence)
    return result_img, damage_level, confidence
```

#### 3. Update the Gradio Interface

Replace the function calls in the Gradio interface:

```python
detect_btn.click(
    fn=real_damage_detection,  # changed from simulate_damage_detection
    inputs=[input_image, confidence_slider],
    outputs=[output_image, damage_output, confidence_output]
)
```

### Adding New Features

**Example: Add a New Tab for Temporal Analysis**

```python
def create_temporal_analysis_tab():
    with gr.Column():
        gr.Markdown("# Temporal Change Detection")
        with gr.Row():
            before_image = gr.Image(type="pil", label="Before Disaster")
            after_image = gr.Image(type="pil", label="After Disaster")
        analyze_btn = gr.Button("Analyze Changes")
        change_map = gr.Image(type="pil", label="Change Detection Map")

        analyze_btn.click(
            fn=detect_changes,  # your change-detection function
            inputs=[before_image, after_image],
            outputs=[change_map]
        )

# Add to the main app
with gr.Tab("Change Detection"):
    create_temporal_analysis_tab()
```

---

## Educational Use

### For Instructors

This app is designed to complement the Jupyter notebooks:

1. **Pre-Session**: Show the app during the course introduction to demonstrate what students will build
2. **During Session**: Use interactive demos to visualize concepts before coding
3. **Post-Session**: Let students experiment with different parameters and scenarios
4. **Assessment**: Have students integrate their trained models into the app

### For Students

Recommended learning workflow:

1. **Explore** → Use the app to understand what you'll be building
2. **Learn** → Work through the corresponding Jupyter notebook
3. **Build** → Train your own models using the notebooks
4. **Deploy** → Integrate your models into this Gradio app
5. **Share** → Demonstrate your results to peers and instructors

---

## Deployment Options

### Local Development

```bash
python gradio_app.py
# Access at http://localhost:7860
```

### Public Sharing (Gradio)

The app automatically creates a public link when launched:

```bash
python gradio_app.py
# Look for: "Running on public URL: https://xxxxx.gradio.live"
```

### Hugging Face Spaces

Deploy to Hugging Face Spaces for permanent hosting:

1. Create a new Space at https://huggingface.co/spaces
2. Upload `gradio_app.py` and `requirements.txt`
3. The Space will automatically detect and run the Gradio app

### Google Cloud Run

```bash
# Build the container
gcloud builds submit --tag gcr.io/PROJECT_ID/nato-asi-app

# Deploy
gcloud run deploy nato-asi-app \
  --image gcr.io/PROJECT_ID/nato-asi-app \
  --platform managed \
  --region us-central1 \
  --allow-unauthenticated
```

### AWS EC2

```bash
# SSH into the EC2 instance
ssh -i your-key.pem ec2-user@your-instance-ip

# Install dependencies
sudo yum update -y
sudo yum install python3 -y
pip3 install -r requirements.txt

# Run with nohup for persistent execution
nohup python3 gradio_app.py > gradio.log 2>&1 &
```

---

## Technical Details

### Architecture

```
gradio_app.py
├── CONSTANTS & CONFIGURATION
│   ├── CURRICULUM_DAYS (course structure)
│   └── DAMAGE_LEVELS (classification labels)
│
├── UTILITY FUNCTIONS
│   ├── create_sample_building_image() - Generate synthetic buildings
│   ├── create_flood_map_sample() - Generate flood scenarios
│   ├── simulate_damage_detection() - Mock AI predictions
│   └── simulate_flood_segmentation() - Mock segmentation
│
├── GRADIO INTERFACE COMPONENTS
│   ├── create_welcome_tab()
│   ├── create_curriculum_tab()
│   ├── create_damage_detection_tab()
│   ├── create_flood_mapping_tab()
│   ├── create_transfer_learning_tab()
│   ├── create_deployment_tab()
│   └── create_resources_tab()
│
└── MAIN APPLICATION
    └── create_app() - Assembles all components
```
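TensorFlow and the geospatial libraries are optional, so imports can be guarded at startup and the app can fall back to simulated predictions when they are missing. A sketch of that guard (`optional_import` and `USE_REAL_MODELS` are assumed names, not identifiers from the repo):

```python
import importlib

def optional_import(name: str):
    """Return the named module if it is installed, else None."""
    try:
        return importlib.import_module(name)
    except ImportError:
        return None

# Demo tabs could branch on this flag: real inference vs. simulation
tf = optional_import("tensorflow")
USE_REAL_MODELS = tf is not None
```

This keeps the core app runnable with only `gradio`, `numpy`, and `Pillow` installed.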
### Dependencies

**Core** (required):
- `gradio` - Web interface framework
- `numpy` - Numerical computations
- `Pillow` - Image processing

**Optional** (for real model integration):
- `tensorflow` / `keras` - Deep learning
- `rasterio` - Geospatial raster data
- `geopandas` - Vector geospatial data

### Performance

- **Launch time**: ~2-3 seconds
- **Inference** (simulated): <100 ms
- **Inference** (real TensorFlow model): 50-200 ms on CPU, 10-50 ms on GPU
- **Memory**: ~200 MB base, +2-4 GB with loaded models

---

## Troubleshooting

### Issue: Port 7860 already in use

```bash
# Kill the existing process
lsof -ti:7860 | xargs kill -9

# Or use a different port
python gradio_app.py --server-port 7861
```

### Issue: Module not found errors

```bash
# Ensure all dependencies are installed
pip install --upgrade -r requirements.txt

# If using conda
conda install -c conda-forge gradio numpy pillow
```

### Issue: Images not displaying

- Check that PIL/Pillow is properly installed
- Verify that image file paths are correct
- Ensure uploaded images are in supported formats (JPG, PNG)

### Issue: Slow performance

- Use GPU acceleration for model inference (requires TensorFlow GPU)
- Reduce image resolution before processing
- Enable model caching for repeated predictions

---

## Usage Analytics

To track how participants use the app, you can integrate analytics:

```python
from datetime import datetime

def track_interaction(action, details):
    timestamp = datetime.now().isoformat()
    log_entry = f"{timestamp} | {action} | {details}\n"
    with open("usage_analytics.log", "a") as f:
        f.write(log_entry)

# Example usage: log the interaction, then run the real handler
detect_btn.click(
    fn=lambda img, threshold: (
        track_interaction("damage_detection", f"threshold={threshold}"),
        detect_damage(img, threshold)
    )[1],
    inputs=[input_image, confidence_slider],
    outputs=[output_image, damage_output, confidence_output]
)
```
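The tuple-indexing lambda above works, but a decorator keeps event handlers unchanged and the logging reusable across tabs. A sketch under the same assumptions (`logged` is a hypothetical helper, not part of the repo):

```python
import functools
from datetime import datetime

def logged(action, log_path="usage_analytics.log"):
    """Append a timestamped log line, then call the wrapped handler."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            with open(log_path, "a") as f:
                f.write(f"{datetime.now().isoformat()} | {action}\n")
            return fn(*args, **kwargs)
        return inner
    return wrap

# Usage: detect_btn.click(fn=logged("damage_detection")(detect_damage), ...)
```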
---

## Contributing

We welcome contributions from the community!

### How to Contribute

1. **Fork** the repository
2. **Create** a feature branch: `git checkout -b feature/new-demo`
3. **Make** your changes
4. **Test** thoroughly
5. **Commit**: `git commit -m "Add new demo for landslide detection"`
6. **Push**: `git push origin feature/new-demo`
7. **Open** a Pull Request

### Contribution Ideas

- **New Demos**: Landslide detection, wildfire mapping, infrastructure damage
- **UI Improvements**: Better visualizations, responsive design
- **Documentation**: Tutorials, video guides, translations
- **Features**: Real-time inference, batch processing, API endpoints
- **Bug Fixes**: Report issues or submit fixes

---

## License

This project is licensed under the **MIT License**. See [LICENSE](LICENSE) for details.

---

## Acknowledgments

**Developed for**: NATO Advanced Study Institute on AI for Disaster Management

**Special thanks**:
- Disaster response professionals who provided real-world insights
- Open-source contributors of Gradio, TensorFlow, and geospatial libraries
- Organizations sharing satellite imagery and disaster datasets

---
## Support

### Questions or Issues?

- **GitHub Issues**: [Report bugs or request features](https://github.com/AI4DM/Geospatial-AI-for-Humanitarian-Response/issues)
- **Email**: Contact the course instructor
- **Slack/Discord**: Join the NATO ASI community channel

### Citation

If you use this application in your work:

```bibtex
@software{nato_asi_gradio_2025,
  title={NATO ASI - AI for Disaster Management: Interactive Gradio App},
  author={Bulent Soykan},
  year={2025},
  url={https://github.com/AI4DM/Geospatial-AI-for-Humanitarian-Response}
}
```

---

## Final Note

This application demonstrates that **AI for disaster management** is not just about algorithms; it's about creating accessible, interpretable, and actionable tools that empower humanitarian responders to save lives.

**Every feature in this app reflects a real-world need in disaster response.**