Upload 4 files
fountain-2d-gaussian-splat-w-biplet-colmap.ipynb
ADDED
fountain-3d-gaussian-splat-w-biplet-colmap.ipynb
ADDED
fountain-mip-gaussian-splat-w-biplet-colmap.ipynb
ADDED
fountain-scafford-gaussian-splat-w-biplet-colmap.ipynb
ADDED
{"metadata":{"kernelspec":{"name":"python3","display_name":"Python 3","language":"python"},"language_info":{"name":"python","version":"3.12.12","mimetype":"text/x-python","codemirror_mode":{"name":"ipython","version":3},"pygments_lexer":"ipython3","nbconvert_exporter":"python","file_extension":".py"},"kaggle":{"accelerator":"nvidiaTeslaT4","dataSources":[{"sourceId":49349,"databundleVersionId":5447706,"sourceType":"competition"}],"dockerImageVersionId":31260,"isInternetEnabled":true,"language":"python","sourceType":"notebook","isGpuEnabled":true}},"nbformat_minor":4,"nbformat":4,"cells":[{"cell_type":"markdown","source":"# **Fountain: Scaffold Gaussian Splatting w/biplet,colmap**\n### **biplet-colmap-scaffold-gs**\n\n","metadata":{}},{"cell_type":"markdown","source":"\n\n---\n\n## Overview: What is 3D Gaussian Splatting (3DGS)?\n\nTraditional methods like NeRF use a continuous neural network to represent a scene. In contrast, **3D Gaussian Splatting** is a point-based rendering technique. It represents a 3D scene using millions of overlapping, semi-transparent \"blobs\" (Gaussians).\n\nEach Gaussian is defined by:\n\n* **Position (Mean):** Where it is in space (its mean, μ).\n* **Covariance:** Its shape, size, and orientation (defined via scaling and rotation).\n* **Opacity (α):** How transparent it is.\n* **Color (Spherical Harmonics):** How its color changes based on the viewing angle.\n\n**Why it’s a big deal:** It allows for **real-time rendering** (100+ FPS) and faster training times compared to NeRF because it uses \"splatting\"—rasterizing the Gaussians onto the 2D image plane using high-speed GPU kernels.\n\n---\n\n## Comparison of 3DGS Variants\n\nHere is a breakdown of the three variants explored in this notebook series and how they differ from the original 3DGS.\n\n### 1. 
Scaffold (Scaffold-GS)\n\n**Focus:** Efficiency and Sparse Data.\n\n* **The Difference:** While 3DGS places Gaussians somewhat randomly, Scaffold-GS uses a **sparse voxel grid (a \"scaffold\")** to guide the distribution.\n* **Key Features:**\n* **Anchor Points:** Gaussians are \"attached\" to anchor points within the grid.\n* **Neural Weighting:** It uses a small MLP to predict the properties of the Gaussians based on the local neighborhood.\n* **Storage:** It is significantly more storage-efficient than 3DGS because it doesn't need to save every attribute for every single Gaussian independently.\n\n\n* **Best For:** Large-scale scenes where memory usage is a concern.\n\n### 2. 2D Gaussian Splatting (2DGS)\n\n**Focus:** Surface Reconstruction and Consistency.\n\n* **The Difference:** Instead of 3D \"volumes,\" it uses **2D oriented disks (flat Gaussians)**.\n* **Key Features:**\n* **Thin Geometry:** By flattening the Gaussians, it forces the model to represent surfaces more accurately. Standard 3DGS often struggles with \"popping\" artifacts or fuzzy surfaces.\n* **Normal Mapping:** It provides well-defined surface normals, which makes it much easier to export the splat into a standard 3D mesh (like a .obj or .ply file).\n\n\n* **Best For:** When you need to turn your splat into a high-quality 3D mesh for gaming or CAD.\n\n### 3. Mip-Splatting\n\n**Focus:** Anti-aliasing and Multi-scale Rendering.\n\n* **The Difference:** 3DGS suffers from \"aliasing\" (jagged edges or flickering) when you zoom out or change resolutions. 
Mip-Splatting introduces **3D smoothing filters**.\n* **Key Features:**\n* **Frequency Constraint:** It limits the frequency of the Gaussians to match the pixel size of the image.\n* **Scale-Adaptive:** When you move the camera further away, the Gaussians are effectively \"blurred\" or filtered so they don't become smaller than a single pixel, preventing the \"salt-and-pepper\" noise seen in original 3DGS.\n\n\n* **Best For:** High-quality cinematics where the camera moves across different distances.\n\n---\n\n## Summary Table\n\n| Feature | 3DGS (Original) | Scaffold-GS | 2DGS | Mip-Splatting |\n| --- | --- | --- | --- | --- |\n| **Primitive** | 3D Ellipsoid | Anchored 3D Ellipsoid | 2D Thin Disk | Filtered 3D Ellipsoid |\n| **Main Advantage** | Speed & Simplicity | Memory Efficiency | Better Surfaces/Meshing | No Aliasing/Blurring |\n| **VRAM Usage** | High | **Low** | Medium | Medium |\n| **Visual Quality** | Great (except zoom) | Great | Sharp Surfaces | **Best** (consistent) |\n\n---\n\n","metadata":{}},{"cell_type":"code","source":"import os\nimport sys\nimport subprocess\nimport shutil\nfrom pathlib import Path\nimport cv2\nfrom PIL import Image\nimport glob\n\nIMAGE_PATH=\"/kaggle/input/competitions/image-matching-challenge-2023/train/haiper/fountain/images\"\nWORK_DIR = '/kaggle/working/scaffold-gs' \nOUTPUT_DIR = '/kaggle/working/output'\nCOLMAP_DIR = '/kaggle/working/colmap'","metadata":{"trusted":true,"execution":{"execution_failed":"2026-02-14T08:31:25.061Z"}},"outputs":[],"execution_count":null},{"cell_type":"code","source":"!pip install torch_scatter -f https://data.pyg.org/whl/torch-2.8.0+cu126.html","metadata":{"trusted":true},"outputs":[],"execution_count":null},{"cell_type":"code","source":"def install_submodule(name, url, base_dir):\n\n print(f\"\\n{'='*70}\")\n print(f\"Installing {name}\")\n print(f\"{'='*70}\")\n\n # Use absolute path\n path = os.path.abspath(os.path.join(base_dir, \"submodules\", name))\n print(f\" > Target path: 
{path}\")\n\n # Step 1: Remove existing directory\n if os.path.exists(path):\n print(f\" > Removing old {name}...\")\n shutil.rmtree(path)\n\n # Step 2: Clone repository\n print(f\" > Cloning from {url}...\")\n os.makedirs(os.path.dirname(path), exist_ok=True)\n try:\n run_cmd([\"git\", \"clone\", url, path])\n except subprocess.CalledProcessError as e:\n print(f\"❌ Failed to clone {name}\")\n print(e.stderr)\n return False\n\n # Step 3: Verify files\n print(f\" > Checking cloned files...\")\n files = os.listdir(path)\n print(f\" > Files in {name}: {files[:10]}...\")\n\n # Step 4: Clear build cache\n build_dir = os.path.join(path, \"build\")\n if os.path.exists(build_dir):\n print(f\" > Cleaning build cache...\")\n shutil.rmtree(build_dir)\n\n # Step 5: Install\n print(f\" > Installing {name} (This may take a few minutes)...\")\n\n # Explicitly pass environment variables\n current_env = os.environ.copy()\n result = subprocess.run(\n [sys.executable, \"-m\", \"pip\", \"install\", \"-e\", \".\", \"--no-build-isolation\", \"-v\"],\n cwd=path,\n env=current_env,\n capture_output=True,\n text=True\n )\n\n if result.returncode != 0:\n print(f\"❌ Failed to install {name}\")\n # C++/CUDA build errors often appear in stdout, so output both\n print(\"\\n--- STDOUT (Build Logs) ---\")\n stdout_lines = result.stdout.split('\\n')\n print('\\n'.join(stdout_lines[-60:])) # Show last 60 lines\n print(\"\\n--- STDERR (Error Details) ---\")\n print(result.stderr)\n return False\n\n print(f\"✅ Successfully installed {name}\")\n return True\n\n\ndef install_mipsplatting_submodule(name, base_dir):\n \"\"\"Install submodules included in mip-splatting (no cloning required)\"\"\"\n print(f\"\\n{'='*70}\")\n print(f\"Installing {name} (from mip-splatting submodules)\")\n print(f\"{'='*70}\")\n\n # Submodule path\n path = os.path.abspath(os.path.join(base_dir, \"submodules\", name))\n print(f\" > Target path: {path}\")\n\n # Verify path existence\n if not os.path.exists(path):\n print(f\"❌ 
Path not found: {path}\")\n return False\n\n # Verify setup.py existence\n setup_py = os.path.join(path, \"setup.py\")\n if not os.path.exists(setup_py):\n print(f\"❌ setup.py not found: {setup_py}\")\n return False\n\n print(f\" > Checking files...\")\n files = os.listdir(path)\n print(f\" > Files in {name}: {files[:10]}...\")\n\n # Clear build cache\n build_dir = os.path.join(path, \"build\")\n if os.path.exists(build_dir):\n print(f\" > Cleaning build cache...\")\n shutil.rmtree(build_dir)\n\n # Install\n print(f\" > Installing {name} (This may take a few minutes)...\")\n\n current_env = os.environ.copy()\n result = subprocess.run(\n [sys.executable, \"-m\", \"pip\", \"install\", \"-e\", \".\", \"--no-build-isolation\", \"-v\"],\n cwd=path,\n env=current_env,\n capture_output=True,\n text=True\n )\n\n if result.returncode != 0:\n print(f\"❌ Failed to install {name}\")\n print(\"\\n--- STDOUT (Build Logs) ---\")\n stdout_lines = result.stdout.split('\\n')\n print('\\n'.join(stdout_lines[-60:]))\n print(\"\\n--- STDERR (Error Details) ---\")\n print(result.stderr)\n return False\n\n print(f\"✅ Successfully installed {name}\")\n return True\n\n\n\n\ndef run_cmd(cmd, check=True, capture=False):\n \"\"\"Run command with better error handling\"\"\"\n print(f\"Running: {' '.join(cmd)}\")\n result = subprocess.run(\n cmd,\n capture_output=capture,\n text=True,\n check=False\n )\n if check and result.returncode != 0:\n print(f\"❌ Command failed with code {result.returncode}\")\n if capture:\n print(f\"STDOUT: {result.stdout}\")\n print(f\"STDERR: {result.stderr}\")\n return result\n\n\ndef setup_environment():\n print(\"🚀 Setting up environment for Scaffold-GS (FIXED VERSION)\")\n\n WORK_DIR = \"scaffold-gs\"\n os.environ[\"CUDA_HOME\"] = \"/usr/local/cuda\"\n\n print(\"\\n\" + \"=\"*70)\n print(\"STEP 0: Fix NumPy\")\n print(\"=\"*70)\n\n run_cmd([sys.executable, \"-m\", \"pip\", \"uninstall\", \"-y\", \"numpy\"])\n run_cmd([sys.executable, \"-m\", \"pip\", \"install\", 
\"numpy==1.26.4\"])\n\n\n print('------numpy version----------------------')\n !pip show numpy | grep Version\n print('----------------------------')\n\n\n\n \n print(\"\\n\" + \"=\"*70)\n print(\"STEP 1: System packages\")\n print(\"=\"*70)\n\n run_cmd([\"apt-get\", \"update\", \"-qq\"])\n run_cmd([\n \"apt-get\", \"install\", \"-y\", \"-qq\",\n \"colmap\",\n \"build-essential\",\n \"cmake\",\n \"git\",\n \"libopenblas-dev\",\n \"xvfb\"\n ])\n\n os.environ[\"QT_QPA_PLATFORM\"] = \"offscreen\"\n os.environ[\"DISPLAY\"] = \":99\"\n subprocess.Popen(\n [\"Xvfb\", \":99\", \"-screen\", \"0\", \"1024x768x24\"],\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL\n )\n\n\n print('------numpy version----------------------')\n !pip show numpy | grep Version\n print('----------------------------')\n\n\n \n print(\"\\n\" + \"=\"*70)\n print(\"STEP 2: Clone Scaffold-GS\")\n print(\"=\"*70)\n\n if not os.path.exists(WORK_DIR):\n run_cmd([\n \"git\", \"clone\", \"--recursive\",\n \"https://github.com/city-super/Scaffold-GS.git\",\n WORK_DIR\n ])\n else:\n print(\"✓ Repository already exists\")\n\n print('------numpy version----------------------')\n !pip show numpy | grep Version\n print('----------------------------')\n\n\n \n print(\"\\n\" + \"=\"*70)\n print(\"STEP 3: Python packages\")\n print(\"=\"*70)\n\n print(\"\\n📦 Installing PyTorch...\")\n run_cmd([\n sys.executable, \"-m\", \"pip\", \"install\",\n \"torch\", \"torchvision\", \"torchaudio\"\n ])\n\n print(\"\\n📦 Installing core utilities...\")\n run_cmd([\n sys.executable, \"-m\", \"pip\", \"install\",\n \"opencv-python\",\n \"pillow\",\n \"imageio\",\n \"imageio-ffmpeg\",\n \"plyfile\",\n \"tqdm\",\n \"tensorboard\"\n ])\n\n print(\"\\n📦 Installing additional dependencies...\")\n run_cmd([sys.executable, \"-m\", \"pip\", \"install\", \"kornia\"])\n run_cmd([sys.executable, \"-m\", \"pip\", \"install\", \"h5py\"])\n run_cmd([sys.executable, \"-m\", \"pip\", \"install\", \"matplotlib\"])\n 
run_cmd([sys.executable, \"-m\", \"pip\", \"install\", \"pycolmap\"])\n \n # CRITICAL: Scaffold-GS specific dependencies\n print(\"\\n📦 Installing Scaffold-GS specific dependencies...\")\n run_cmd([sys.executable, \"-m\", \"pip\", \"install\", \"lpips\"]) # LPIPS for perceptual loss\n\n\n print('------numpy version----------------------')\n !pip show numpy | grep Version\n print('----------------------------')\n\n\n run_cmd([sys.executable, \"-m\", \"pip\", \"uninstall\", \"-y\", \"numpy\"])\n run_cmd([sys.executable, \"-m\", \"pip\", \"install\", \"numpy==1.26.4\"])\n\n\n\n \n print(\"\\n\" + \"=\"*70)\n print(\"STEP 4: Build Scaffold-GS submodules\")\n print(\"=\"*70)\n\n \n # simple-knn: Using a proven fixed version (re-cloning)\n success_knn = install_submodule(\n \"simple-knn\",\n \"https://github.com/tztechno/simple-knn.git\",\n WORK_DIR\n )\n\n if not success_knn:\n print(\"❌ Failed to install simple-knn\")\n return None\n\n # diff-gaussian-rasterization: Use the version included in mip-splatting\n # (Do not re-clone as this version supports kernel_size)\n success_rast = install_mipsplatting_submodule(\n \"diff-gaussian-rasterization\",\n WORK_DIR\n )\n\n if not success_rast:\n print(\"❌ Failed to install diff-gaussian-rasterization\")\n return None\n\n \n print('------numpy version----------------------')\n !pip show numpy | grep Version\n print('----------------------------')\n\n\n return WORK_DIR\n\n\nif __name__ == \"__main__\":\n setup_environment()","metadata":{"trusted":true,"execution":{"execution_failed":"2026-02-14T08:31:25.061Z"}},"outputs":[],"execution_count":null},{"cell_type":"code","source":"import os\nimport glob\nimport cv2\nimport numpy as np\nfrom PIL import Image\n\ndef normalize_image_sizes_biplet(input_dir, output_dir=None, size=1024, max_images=None):\n \"\"\"\n Generates two square crops (Left & Right or Top & Bottom)\n from each image in a directory and returns the output directory\n and the list of generated file paths.\n \"\"\"\n 
if output_dir is None:\n output_dir = 'output/images_biplet'\n os.makedirs(output_dir, exist_ok=True)\n\n print(f\"--- Step 1: Biplet-Square Normalization ---\")\n print(f\"Generating 2 cropped squares (Left/Right or Top/Bottom) for each image...\")\n print()\n\n generated_paths = []\n converted_count = 0\n size_stats = {}\n\n image_files = sorted([f for f in os.listdir(input_dir)\n if f.lower().endswith(('.jpg', '.jpeg', '.png'))])\n\n if max_images is not None:\n image_files = image_files[:max_images]\n print(f\"Processing limited to {max_images} source images (will generate {max_images * 2} cropped images)\")\n\n for img_file in image_files:\n input_path = os.path.join(input_dir, img_file)\n try:\n img = Image.open(input_path)\n original_size = img.size\n\n size_key = f\"{original_size[0]}x{original_size[1]}\"\n size_stats[size_key] = size_stats.get(size_key, 0) + 1\n\n crops = generate_two_crops(img, size)\n base_name, ext = os.path.splitext(img_file)\n\n for mode, cropped_img in crops.items():\n output_path = os.path.join(output_dir, f\"{base_name}_{mode}{ext}\")\n cropped_img.save(output_path, quality=95)\n generated_paths.append(output_path)\n\n converted_count += 1\n print(f\" ✓ {img_file}: {original_size} → 2 square images generated\")\n\n except Exception as e:\n print(f\" ✗ Error processing {img_file}: {e}\")\n\n print(f\"\\nProcessing complete: {converted_count} source images processed\")\n print(f\"Total output images: {len(generated_paths)}\")\n print(f\"Original size distribution: {size_stats}\")\n\n return output_dir, generated_paths\n\n\ndef generate_two_crops(img, size):\n \"\"\"\n Crops the image into a square and returns 2 variations\n (Left/Right for landscape, Top/Bottom for portrait).\n \"\"\"\n width, height = img.size\n crop_size = min(width, height)\n crops = {}\n\n if width > height:\n positions = {\n 'left': 0,\n 'right': width - crop_size\n }\n for mode, x_offset in positions.items():\n box = (x_offset, 0, x_offset + crop_size, 
crop_size)\n crops[mode] = img.crop(box).resize(\n (size, size),\n Image.Resampling.LANCZOS\n )\n\n else:\n positions = {\n 'top': 0,\n 'bottom': height - crop_size\n }\n for mode, y_offset in positions.items():\n box = (0, y_offset, crop_size, y_offset + crop_size)\n crops[mode] = img.crop(box).resize(\n (size, size),\n Image.Resampling.LANCZOS\n )\n\n return crops","metadata":{"trusted":true,"execution":{"execution_failed":"2026-02-14T08:31:25.062Z"}},"outputs":[],"execution_count":null},{"cell_type":"code","source":"def run_colmap_reconstruction(image_dir, colmap_dir):\n \"\"\"Estimate camera poses and 3D point cloud with COLMAP\"\"\"\n print(\"Running SfM reconstruction with COLMAP...\")\n\n database_path = os.path.join(colmap_dir, \"database.db\")\n sparse_dir = os.path.join(colmap_dir, \"sparse\")\n os.makedirs(sparse_dir, exist_ok=True)\n\n env = os.environ.copy()\n env['QT_QPA_PLATFORM'] = 'offscreen'\n\n print(\"1/4: Extracting features...\")\n subprocess.run([\n 'colmap', 'feature_extractor',\n '--database_path', database_path,\n '--image_path', image_dir,\n '--ImageReader.single_camera', '1',\n '--ImageReader.camera_model', 'OPENCV',\n '--SiftExtraction.use_gpu', '0'\n ], check=True, env=env)\n\n print(\"2/4: Matching features...\")\n subprocess.run([\n 'colmap', 'exhaustive_matcher',\n '--database_path', database_path,\n '--SiftMatching.use_gpu', '0'\n ], check=True, env=env)\n\n print(\"3/4: Sparse reconstruction...\")\n subprocess.run([\n 'colmap', 'mapper',\n '--database_path', database_path,\n '--image_path', image_dir,\n '--output_path', sparse_dir,\n '--Mapper.ba_global_max_num_iterations', '20',\n '--Mapper.ba_local_max_num_iterations', '10'\n ], check=True, env=env)\n\n print(\"4/4: Exporting to text format...\")\n model_dir = os.path.join(sparse_dir, '0')\n if not os.path.exists(model_dir):\n subdirs = [d for d in os.listdir(sparse_dir) if os.path.isdir(os.path.join(sparse_dir, d))]\n if subdirs:\n model_dir = os.path.join(sparse_dir, 
subdirs[0])\n else:\n raise FileNotFoundError(\"COLMAP reconstruction failed\")\n\n subprocess.run([\n 'colmap', 'model_converter',\n '--input_path', model_dir,\n '--output_path', model_dir,\n '--output_type', 'TXT'\n ], check=True, env=env)\n\n print(f\"COLMAP reconstruction complete: {model_dir}\")\n return model_dir\n\n\ndef convert_cameras_to_pinhole(input_file, output_file):\n \"\"\"Convert camera model to PINHOLE format\"\"\"\n print(f\"Reading camera file: {input_file}\")\n\n with open(input_file, 'r') as f:\n lines = f.readlines()\n\n converted_count = 0\n with open(output_file, 'w') as f:\n for line in lines:\n if line.startswith('#') or line.strip() == '':\n f.write(line)\n else:\n parts = line.strip().split()\n if len(parts) >= 4:\n cam_id = parts[0]\n model = parts[1]\n width = parts[2]\n height = parts[3]\n params = parts[4:]\n\n if model == \"PINHOLE\":\n f.write(line)\n elif model == \"OPENCV\":\n fx = params[0]\n fy = params[1]\n cx = params[2]\n cy = params[3]\n f.write(f\"{cam_id} PINHOLE {width} {height} {fx} {fy} {cx} {cy}\\n\")\n converted_count += 1\n else:\n fx = fy = max(float(width), float(height))\n cx = float(width) / 2\n cy = float(height) / 2\n f.write(f\"{cam_id} PINHOLE {width} {height} {fx} {fy} {cx} {cy}\\n\")\n converted_count += 1\n else:\n f.write(line)\n\n print(f\"Converted {converted_count} cameras to PINHOLE format\")\n\n\ndef prepare_scaffold_gs_data(image_dir, colmap_model_dir):\n \"\"\"Prepare data for Scaffold-GS (same as 3DGS)\"\"\"\n print(\"Preparing data for Scaffold-GS...\")\n\n data_dir = f\"{WORK_DIR}/data/video\"\n os.makedirs(f\"{data_dir}/sparse/0\", exist_ok=True)\n os.makedirs(f\"{data_dir}/images\", exist_ok=True)\n\n print(\"Copying images...\")\n img_count = 0\n for img_file in os.listdir(image_dir):\n if img_file.lower().endswith(('.jpg', '.jpeg', '.png')):\n shutil.copy(\n os.path.join(image_dir, img_file),\n f\"{data_dir}/images/{img_file}\"\n )\n img_count += 1\n print(f\"Copied {img_count} 
images\")\n\n print(\"Converting camera model to PINHOLE format...\")\n convert_cameras_to_pinhole(\n os.path.join(colmap_model_dir, 'cameras.txt'),\n f\"{data_dir}/sparse/0/cameras.txt\"\n )\n\n for filename in ['images.txt', 'points3D.txt']:\n src = os.path.join(colmap_model_dir, filename)\n dst = f\"{data_dir}/sparse/0/{filename}\"\n if os.path.exists(src):\n shutil.copy(src, dst)\n print(f\"Copied {filename}\")\n else:\n print(f\"Warning: {filename} not found\")\n\n print(f\"Data preparation complete: {data_dir}\")\n return data_dir\n\n\ndef train_scaffold_gs(data_dir, iterations=30000, voxel_size=0, update_init_factor=16):\n \"\"\"Train the Scaffold-GS model with specific parameters\"\"\"\n print(f\"Training Scaffold-GS model for {iterations} iterations...\")\n print(f\"Parameters: voxel_size={voxel_size}, update_init_factor={update_init_factor}\")\n\n model_path = f\"{WORK_DIR}/output/video\"\n\n cmd = [\n sys.executable, 'train.py',\n '-s', data_dir,\n '-m', model_path,\n '--iterations', str(iterations),\n '--voxel_size', str(voxel_size), \n '--update_init_factor', str(update_init_factor), \n '--eval'\n ]\n\n subprocess.run(cmd, cwd=WORK_DIR, check=True)\n\n return model_path\n\n\ndef render_video(model_path, output_video_path, iteration=30000):\n \"\"\"Generate video from the trained Scaffold-GS model\"\"\"\n print(\"Rendering video with Scaffold-GS...\")\n\n cmd = [\n sys.executable, 'render.py',\n '-m', model_path,\n '--iteration', str(iteration)\n ]\n\n subprocess.run(cmd, cwd=WORK_DIR, check=True)\n\n possible_dirs = [\n f\"{model_path}/test/ours_{iteration}/renders\",\n f\"{model_path}/train/ours_{iteration}/renders\",\n ]\n\n render_dir = None\n for test_dir in possible_dirs:\n if os.path.exists(test_dir):\n render_dir = test_dir\n print(f\"Rendering directory found: {render_dir}\")\n break\n\n if render_dir and os.path.exists(render_dir):\n render_imgs = sorted([f for f in os.listdir(render_dir) if f.endswith('.png')])\n\n if render_imgs:\n 
print(f\"Found {len(render_imgs)} rendered images\")\n\n subprocess.run([\n 'ffmpeg', '-y',\n '-framerate', '30',\n '-pattern_type', 'glob',\n '-i', f\"{render_dir}/*.png\",\n '-c:v', 'libx264',\n '-pix_fmt', 'yuv420p',\n '-crf', '18',\n output_video_path\n ], check=True)\n\n print(f\"Video saved: {output_video_path}\")\n return True\n\n print(\"Error: Rendering directory not found\")\n return False\n\n\ndef create_gif(video_path, gif_path):\n \"\"\"Create GIF from MP4\"\"\"\n print(\"Creating animated GIF...\")\n\n subprocess.run([\n 'ffmpeg', '-y',\n '-i', video_path,\n '-vf', 'setpts=8*PTS,fps=10,scale=720:-1:flags=lanczos',\n '-loop', '0',\n gif_path\n ], check=True)\n\n if os.path.exists(gif_path):\n size_mb = os.path.getsize(gif_path) / (1024 * 1024)\n print(f\"GIF creation complete: {gif_path} ({size_mb:.2f} MB)\")\n return True\n\n return False","metadata":{"trusted":true,"execution":{"execution_failed":"2026-02-14T08:31:25.062Z"}},"outputs":[],"execution_count":null},{"cell_type":"code","source":"def main_pipeline(image_dir, output_dir, square_size=1024, max_images=100, \n iterations=30000, voxel_size=0, update_init_factor=16):\n \"\"\"\n Main pipeline for Scaffold-GS training and rendering\n \n Args:\n image_dir: Input image directory\n output_dir: Output directory for results\n square_size: Image size for preprocessing\n max_images: Maximum number of images to process\n iterations: Training iterations (default: 30000 for Scaffold-GS)\n voxel_size: Voxel size for anchor placement (0 = auto)\n update_init_factor: Initial resolution for anchor growing\n \"\"\"\n\n try:\n print(\"=\"*60)\n print(\"Scaffold-GS Pipeline\")\n print(\"=\"*60)\n \n print(\"=\"*60)\n print(\"Step 1: Normalizing and preprocessing images\")\n print(\"=\"*60)\n\n frame_dir = os.path.join(COLMAP_DIR, \"images\")\n os.makedirs(frame_dir, exist_ok=True)\n\n num_processed = normalize_image_sizes_biplet(\n input_dir=image_dir,\n output_dir=frame_dir, \n size=square_size,\n 
max_images=max_images\n )\n\n # normalize_image_sizes_biplet returns (output_dir, generated_paths)\n print(f\"Processed images: {len(num_processed[1])}\")\n\n print(\"=\"*60)\n print(\"Step 2: Running COLMAP reconstruction\")\n print(\"=\"*60)\n colmap_model_dir = run_colmap_reconstruction(frame_dir, COLMAP_DIR)\n\n print(\"=\"*60)\n print(\"Step 3: Preparing Scaffold-GS data\")\n print(\"=\"*60)\n data_dir = prepare_scaffold_gs_data(frame_dir, colmap_model_dir)\n\n print(\"=\"*60)\n print(\"Step 4: Training Scaffold-GS model\")\n print(f\" - Iterations: {iterations}\")\n print(f\" - Voxel size: {voxel_size} (0=auto)\")\n print(f\" - Update init factor: {update_init_factor}\")\n print(\"=\"*60)\n model_path = train_scaffold_gs(\n data_dir, \n iterations=iterations,\n voxel_size=voxel_size,\n update_init_factor=update_init_factor\n )\n\n print(\"=\"*60)\n print(\"Step 5: Rendering video\")\n print(\"=\"*60)\n os.makedirs(OUTPUT_DIR, exist_ok=True)\n output_video = os.path.join(OUTPUT_DIR, \"scaffold_gs_video.mp4\")\n\n success = render_video(model_path, output_video, iteration=iterations)\n\n if success:\n print(\"=\"*60)\n print(f\"Success! 
Scaffold-GS video generation complete: {output_video}\")\n print(\"=\"*60)\n\n output_gif = os.path.join(OUTPUT_DIR, \"scaffold_gs_video.gif\")\n create_gif(output_video, output_gif)\n\n from IPython.display import Image, display\n display(Image(open(output_gif, 'rb').read()))\n\n return output_video, output_gif\n else:\n print(\"Warning: Rendering complete, but video was not generated\")\n return None, None\n\n except Exception as e:\n print(f\"Error: {str(e)}\")\n import traceback\n traceback.print_exc()\n return None, None\n\n\nif __name__ == \"__main__\":\n IMAGE_DIR = \"/kaggle/input/competitions/image-matching-challenge-2023/train/haiper/fountain/images\"\n OUTPUT_DIR = \"/kaggle/working/output\"\n COLMAP_DIR = \"/kaggle/working/colmap\"\n\n # Scaffold-GS specific parameters\n video_path, gif_path = main_pipeline(\n image_dir=IMAGE_DIR,\n output_dir=OUTPUT_DIR,\n square_size=1024,\n max_images=20,\n iterations=3000, \n voxel_size=0, \n update_init_factor=16 \n )","metadata":{"trusted":true,"execution":{"execution_failed":"2026-02-14T08:31:25.062Z"}},"outputs":[],"execution_count":null},{"cell_type":"code","source":"","metadata":{"trusted":true},"outputs":[],"execution_count":null}]}
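The crop-box arithmetic behind `normalize_image_sizes_biplet` / `generate_two_crops` in the notebook above can be sketched without PIL. This is a minimal illustration, not the notebook's code; `biplet_boxes` is a hypothetical helper that returns the two square crop boxes (left/right for landscape, top/bottom for portrait or square) that the notebook then resizes to `size`×`size`:

```python
# Hypothetical sketch of the "biplet" crop logic: each image yields two
# maximal square crops anchored at opposite ends of its longer axis.
def biplet_boxes(width: int, height: int) -> dict:
    """Return {name: (left, top, right, bottom)} crop boxes in pixels."""
    s = min(width, height)  # side of the largest inscribed square
    if width > height:      # landscape: squares at the left and right edges
        return {"left": (0, 0, s, s), "right": (width - s, 0, width, s)}
    # portrait (or square): squares at the top and bottom edges
    return {"top": (0, 0, s, s), "bottom": (0, height - s, s, height)}
```

Each box has the same `(left, top, right, bottom)` shape that `PIL.Image.crop` expects, matching how the notebook applies them before the LANCZOS resize.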
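The `cameras.txt` rewrite performed by `convert_cameras_to_pinhole` in the notebook above condenses to a single line-transform. A minimal sketch, assuming COLMAP's text format (`CAM_ID MODEL WIDTH HEIGHT PARAMS...`); `to_pinhole_line` is a hypothetical name, and the fallback focal guess mirrors the notebook's `max(width, height)` heuristic:

```python
# Hypothetical re-implementation of the per-line conversion done by
# convert_cameras_to_pinhole: OPENCV rows keep fx fy cx cy and drop the
# distortion terms; unknown models fall back to a centered pinhole guess.
def to_pinhole_line(line: str) -> str:
    if line.startswith("#") or not line.strip():
        return line  # comments and blank lines pass through unchanged
    cam_id, model, width, height, *params = line.split()
    if model == "PINHOLE":
        return line
    if model == "OPENCV":
        fx, fy, cx, cy = params[:4]  # discard k1 k2 p1 p2 distortion
    else:  # assumption: focal = max image dimension, centered principal point
        fx = fy = str(max(float(width), float(height)))
        cx, cy = str(float(width) / 2), str(float(height) / 2)
    return f"{cam_id} PINHOLE {width} {height} {fx} {fy} {cx} {cy}\n"
```

Note that for `OPENCV` cameras this keeps only `fx fy cx cy` and silently drops the distortion coefficients, so the resulting PINHOLE model is only a good approximation when lens distortion is small.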