---
title: Geodect
emoji: 💻
colorFrom: gray
colorTo: green
sdk: gradio
sdk_version: 6.9.0
app_file: app.py
pinned: false
license: mit
short_description: AI-assisted computer vision for 3D geological modeling.
---

# ⚒️ Geodect: Geological 3D & Mapping

Geodect is an AI-assisted computer vision tool that converts field video of geological surfaces or rock samples into clean 3D models and high-resolution 2D orthomosaic maps.

## 🌟 Key Features

* **2D Orthomosaic Mapping:** Automatically extracts, aligns, and stitches video frames into a single high-resolution, undistorted map.
* **3D Sample Reconstruction:** Transforms video orbits of rock samples into interactive `.obj` meshes using Structure from Motion (SfM).
* **Intelligent Background Removal:** Segments and removes unwanted objects (such as hands or tools) so processing focuses on the geological surface.
* **CPU Optimized:** Built to run efficiently within a Hugging Face Space without requiring high-end GPU resources.

## 🚀 Technical Workflow

1. **Extraction:** OpenCV extracts optimal frames from the uploaded field video.
2. **Processing:**
   * **Orthomosaic Mode:** Detects features via SIFT/ORB and estimates a homography for seamless stitching.
   * **3D Mode:** Estimates camera poses and generates a dense point cloud, which is converted into a surface mesh via Trimesh.
3. **Visualization:** Interactive 3D rendering and high-resolution image downloads are provided through the Gradio interface.

## 📂 Architecture

This project is implemented as a lightweight four-file Python application:

* `app.py`: The main Gradio interface and pipeline controller.
* `Orthomosaic.py`: The 2D stitching and blending engine.
* `ThreeDimagery.py`: The 3D reconstruction and SfM module.
* `requirements.txt`: Environment dependencies, including OpenCV, SciPy, and Trimesh.
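The extraction step above can be sketched in a few lines. The helper below is illustrative, not the actual `app.py` code: it picks evenly spaced frame indices from a clip, a simple stand-in for Geodect's "optimal frame" selection (which may additionally score frames for sharpness or overlap).

```python
def select_frame_indices(total_frames: int, target: int) -> list[int]:
    """Pick `target` evenly spaced frame indices from a clip of
    `total_frames` frames (hypothetical stand-in for the extraction step)."""
    if total_frames <= 0 or target <= 0:
        return []
    if target >= total_frames:
        return list(range(total_frames))
    step = total_frames / target
    # Sample the middle of each interval to avoid duplicate endpoints.
    return [int(step * i + step / 2) for i in range(target)]

# Example: choose 5 frames from a 300-frame clip.
print(select_frame_indices(300, 5))  # → [30, 90, 150, 210, 270]
```

In practice each selected index would be handed to OpenCV's `VideoCapture` to decode just that frame, keeping CPU and memory use low.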
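The orthomosaic mode's homography step can be illustrated with plain NumPy. The sketch below (function names are hypothetical, not the `Orthomosaic.py` API) shows the core geometry: warping a frame's corners by an estimated 3x3 homography `H` to size the stitched canvas.

```python
import numpy as np

def warp_points(H: np.ndarray, pts: np.ndarray) -> np.ndarray:
    """Apply a 3x3 homography H to an (N, 2) array of pixel coordinates."""
    ones = np.ones((pts.shape[0], 1))
    homog = np.hstack([pts, ones]) @ H.T  # lift to homogeneous coordinates
    return homog[:, :2] / homog[:, 2:3]   # perspective divide

def stitched_canvas_size(H: np.ndarray, w: int, h: int) -> tuple[int, int]:
    """Bounding box of a w x h frame's corners after warping by H,
    merged with the reference frame's own extent."""
    corners = np.array([[0, 0], [w, 0], [w, h], [0, h]], dtype=float)
    all_pts = np.vstack([corners, warp_points(H, corners)])
    x_max, y_max = all_pts.max(axis=0)
    return int(np.ceil(x_max)), int(np.ceil(y_max))

# A pure-translation homography: the second frame sits 100 px to the right.
H = np.array([[1.0, 0.0, 100.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
print(stitched_canvas_size(H, 640, 480))  # (740, 480)
```

In the full pipeline, `H` would come from SIFT/ORB matches via `cv2.findHomography` with RANSAC, and `cv2.warpPerspective` would paint each frame onto the shared canvas before blending.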
## 🤝 Credits & Acknowledgments

* **Developer:** Adedoyin Ifeoluwa James
* **X:** https://x.com/IAdedoyin64700
* **Portfolio:** https://adedoyinjames-portfolio.vercel.app
* **Blog:** https://theadedoyinjournal.wordpress.com

**Technical Resources:** This project was developed with the technical support and computational resources of **NORA RESEARCH LAB**.

---