stpete2 committed on
Commit
9c6a396
·
verified ·
1 Parent(s): 1444508

Delete asmk-mast3r-gs-kg-01.ipynb

Files changed (1)
  1. asmk-mast3r-gs-kg-01.ipynb +0 -1
asmk-mast3r-gs-kg-01.ipynb DELETED
@@ -1 +0,0 @@
- {"metadata":{"kernelspec":{"name":"python3","display_name":"Python 3"},"language_info":{"name":"python"},"colab":{"provenance":[],"gpuType":"T4"},"accelerator":"GPU","kaggle":{"accelerator":"none","dataSources":[{"sourceId":14554378,"sourceType":"datasetVersion","datasetId":1429416}],"dockerImageVersionId":31236,"isInternetEnabled":true,"language":"python","sourceType":"notebook","isGpuEnabled":false}},"nbformat_minor":4,"nbformat":4,"cells":[{"cell_type":"markdown","source":"# **asmk-mast3r-gs-kg** \n\n","metadata":{"id":"qDQLX3PArmh8"}},{"cell_type":"markdown","source":"https://www.kaggle.com/code/stpeteishii/dino-mast3r-gs-kg-34","metadata":{}},{"cell_type":"markdown","source":"\n---\n\n## **Changes: Switching from DINO to MASt3R ASMK**\n\n## Overview\n\nWe have replaced the DINO-based pair selection in the original code with MASt3R’s proprietary **ASMK (Aggregated Selective Match Kernels)** retrieval model.\n\n---\n\n## Key Changes\n\n### 1. New Dependencies\n\n```bash\n# New installation required\npip install cython\ngit clone https://github.com/jenicek/asmk\ncd asmk && pip install .\n\n```\n\n**Note:** You must have the **ASMK package** installed for the new pipeline to function.\n\n### 2. Model Downloads\n\n```bash\n# ASMK Retrieval Model (Two files required)\nwget MASt3R_ViTLarge_BaseDecoder_512_catmlpdpt_metric_retrieval_trainingfree.pth\nwget MASt3R_ViTLarge_BaseDecoder_512_catmlpdpt_metric_retrieval_codebook.pkl\n\n```\n\n### 3. 
Pair Selection Mechanism Updates\n\n#### **Before (Using DINO):**\n\n```python\ndef get_image_pairs_dino(image_paths, max_pairs=None):\n # Uses DINOv2 model\n processor = AutoImageProcessor.from_pretrained(\"facebook/dinov2-base\")\n model = AutoModel.from_pretrained(\"facebook/dinov2-base\")\n \n # Global feature extraction\n global_feats = extract_dino_global(image_paths, model, device)\n \n # Pair creation via Cosine Similarity\n sim = global_feats @ global_feats.T\n pairs = build_topk_pairs(sim, topk=20)\n\n```\n\n#### **After (Using ASMK):**\n\n```python\ndef get_image_pairs_asmk(image_paths, max_pairs=None):\n # Uses MASt3R encoder\n model, codebook = load_asmk_retrieval_model(device)\n \n # Token feature extraction (Local features)\n features = extract_mast3r_features(model, image_paths, device)\n \n # Similarity calculation via ASMK\n similarity_matrix = compute_asmk_similarity(features, codebook)\n \n # Pair creation from similarity matrix\n pairs = build_pairs_from_similarity(similarity_matrix, topk=20)\n\n```\n\n---\n\n## How ASMK Works\n\n### Global Features (DINO) vs. Local Features (ASMK)\n\n| Feature | DINO | ASMK |\n| --- | --- | --- |\n| **Feature Type** | Global (1 Image = 1 Vector) | Local (1 Image = Multiple Tokens) |\n| **Matching Precision** | Overall image similarity | Considers local correspondences |\n| **Scalability** | Fast (Simple dot product) | Fast (Binary representation) |\n| **3D Reconstruction Synergy** | Medium | **High** (Pre-trained for 3D matching) |\n\n### ASMK Processing Flow\n\n1. **Extract Token Features** from the MASt3R Encoder.\n* Result: `[N_images, N_tokens, Feature_dim]`\n\n\n2. **Quantize** each token using the Codebook (Pre-trained via K-means).\n3. **Aggregate and Binarize** residuals to create a high-dimensional sparse representation.\n4. 
**Compute Similarity** rapidly by comparing binary vectors.\n\n---\n\n## Code Usage Example\n\n```python\nif __name__ == \"__main__\":\n IMAGE_DIR = \"/path/to/images\"\n OUTPUT_DIR = \"/kaggle/working/output\"\n \n # Execute pipeline using ASMK\n gs_output = main_pipeline(\n image_dir=IMAGE_DIR,\n output_dir=OUTPUT_DIR,\n square_size=1024,\n iterations=4000,\n max_images=None,\n max_pairs=1000, # Max pairs selected by ASMK\n max_points=100000\n )\n\n```\n\n---\n\n## Performance Comparison\n\n### Memory Usage\n\n* **DINO:** ~2-3 GB (Transformers model)\n* **ASMK:** ~3-4 GB (MASt3R Encoder + Codebook)\n\n### Processing Speed\n\n* **DINO:** Fast (Simple global features)\n* **ASMK:** Moderate (Requires token-level processing)\n\n### Matching Quality\n\n* **DINO:** Based on general visual similarity.\n* **ASMK:** **Considers 3D geometric consistency**, leading to more accurate reconstructions.\n\n---\n\n## Benefits\n\n1. **Seamless MASt3R Integration:** Uses the same encoder for both feature extraction and matching. No need for an external DINO model.\n2. **Optimized for 3D Matching:** The MASt3R encoder is specifically trained for 3D reconstruction tasks, helping find more reliable correspondences.\n3. **Scalability:** Binary representation allows for high-speed retrieval even with large datasets without requiring heavy spatial verification.\n\n---\n\n## Important Notes\n\n### ASMK Installation\n\n```bash\npip install cython\ngit clone https://github.com/jenicek/asmk\ncd asmk/cython && cythonize *.pyx\ncd .. 
&& pip install .\n\n```\n\n### Model Files\n\nEnsure both `*_retrieval_trainingfree.pth` (weights) and `*_retrieval_codebook.pkl` (Codebook) are present in the **same directory**.\n\n---\n\n## Troubleshooting\n\n### ASMK Fallback\n\nIf the code fails to import the ASMK package, it will automatically fall back to a simplified similarity calculation:\n\n```python\nexcept ImportError:\n print(\"⚠️ ASMK package not found, using simplified cosine similarity\")\n # Use global features (Average Pooling)\n global_features = torch.stack([f.mean(dim=1).squeeze() for f in features])\n similarity_matrix = (global_features @ global_features.T).numpy()\n\n```\n\n---\n\n## Summary\n\nThe pipeline transformation is as follows:\n\n**Old Pipeline:**\n`Normalize` β†’ `DINO` β†’ `MASt3R` β†’ `Gaussian Splatting`\n\n**New Pipeline:**\n`Normalize` β†’ **`MASt3R ASMK`** β†’ `MASt3R` β†’ `Gaussian Splatting`\n\nBy using ASMK, the pipeline fully leverages MASt3R’s capabilities, resulting in a more unified and robust 3D reconstruction process.\n\n---\n","metadata":{}},{"cell_type":"code","source":"!pip install pycolmap\n!pip install cython\n!git clone https://github.com/jenicek/asmk\n!cd asmk && pip install .","metadata":{"trusted":true},"outputs":[],"execution_count":null},{"cell_type":"code","source":"\"\"\"\nMASt3R ASMK-based Gaussian Splatting Pipeline\n\"\"\"\n\nimport os\nimport sys\nimport gc\nimport numpy as np\nimport torch\nimport torch.nn.functional as F\nfrom tqdm import tqdm\nfrom pathlib import Path\nimport subprocess\nfrom PIL import Image\nimport pycolmap\nimport struct\nimport pickle\n\n# ============================================================================\n# Configuration\n# ============================================================================\nclass Config:\n # Feature extraction\n IMAGE_SIZE = 1024\n \n # Pair selection using ASMK\n RETRIEVAL_TOPK = 20 # Top-K similar images per image\n MIN_MATCHES = 10\n \n # Paths\n MAST3R_MODEL = 
\"/kaggle/working/mast3r/checkpoints/MASt3R_ViTLarge_BaseDecoder_512_catmlpdpt_metric.pth\"\n RETRIEVAL_MODEL = \"/kaggle/working/mast3r/checkpoints/MASt3R_ViTLarge_BaseDecoder_512_catmlpdpt_metric_retrieval_trainingfree.pth\"\n RETRIEVAL_CODEBOOK = \"/kaggle/working/mast3r/checkpoints/MASt3R_ViTLarge_BaseDecoder_512_catmlpdpt_metric_retrieval_codebook.pkl\"\n MAST3R_IMAGE_SIZE = 224\n \n # Device\n DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n\n# ============================================================================\n# Memory Management\n# ============================================================================\ndef clear_memory():\n gc.collect()\n if torch.cuda.is_available():\n torch.cuda.empty_cache()\n torch.cuda.synchronize()\n\ndef get_memory_info():\n if torch.cuda.is_available():\n allocated = torch.cuda.memory_allocated() / 1024**3\n reserved = torch.cuda.memory_reserved() / 1024**3\n print(f\"GPU Memory - Allocated: {allocated:.2f}GB, Reserved: {reserved:.2f}GB\")\n \n import psutil\n cpu_mem = psutil.virtual_memory().percent\n print(f\"CPU Memory Usage: {cpu_mem:.1f}%\")\n\n# ============================================================================\n# Environment Setup\n# ============================================================================\ndef run_cmd(cmd, check=True, capture=False):\n print(f\"Running: {' '.join(cmd)}\")\n result = subprocess.run(cmd, capture_output=capture, text=True, check=False)\n if check and result.returncode != 0:\n print(f\"❌ Command failed with code {result.returncode}\")\n if capture:\n print(f\"STDOUT: {result.stdout}\")\n print(f\"STDERR: {result.stderr}\")\n return result\n\ndef setup_base_environment():\n print(\"\\n=== Setting up Base Environment ===\")\n \n print(\"\\nπŸ“¦ Fixing NumPy...\")\n run_cmd([sys.executable, \"-m\", \"pip\", \"uninstall\", \"-y\", \"numpy\"])\n run_cmd([sys.executable, \"-m\", \"pip\", \"install\", \"numpy==1.26.4\"])\n \n print(\"\\nπŸ“¦ 
Installing PyTorch...\")\n run_cmd([sys.executable, \"-m\", \"pip\", \"install\", \"torch\", \"torchvision\", \"torchaudio\"])\n \n print(\"\\nπŸ“¦ Installing core utilities...\")\n run_cmd([sys.executable, \"-m\", \"pip\", \"install\", \"opencv-python\", \"pillow\", \n \"imageio\", \"imageio-ffmpeg\", \"plyfile\", \"tqdm\", \"tensorboard\", \n \"scipy\", \"psutil\"])\n \n print(\"\\nπŸ“¦ Installing pycolmap...\")\n run_cmd([sys.executable, \"-m\", \"pip\", \"install\", \"pycolmap\"])\n \n print(\"βœ“ Base environment setup complete!\")\n\ndef setup_mast3r_with_asmk():\n \"\"\"Install MASt3R with ASMK retrieval support\"\"\"\n print(\"\\n=== Setting up MASt3R with ASMK ===\")\n \n os.chdir('/kaggle/working')\n \n if os.path.exists('mast3r'):\n print(\"Removing existing MASt3R installation...\")\n os.system('rm -rf mast3r')\n \n print(\"Cloning MASt3R repository...\")\n os.system('git clone --recursive https://github.com/naver/mast3r')\n os.chdir('/kaggle/working/mast3r')\n \n print(\"Installing dust3r...\")\n os.system('cd dust3r && python -m pip install -e .')\n \n print(\"Installing croco...\")\n os.system('cd dust3r/croco && python -m pip install -e .')\n \n print(\"Installing MASt3R requirements...\")\n os.system('pip install -r requirements.txt')\n \n # Install ASMK\n print(\"\\nπŸ“¦ Installing ASMK for image retrieval...\")\n os.system('pip install cython')\n os.system('git clone https://github.com/jenicek/asmk')\n os.chdir('/kaggle/working/mast3r/asmk')\n os.system('cd cython && cythonize *.pyx')\n os.chdir('/kaggle/working/mast3r/asmk')\n os.system('pip install .')\n \n os.chdir('/kaggle/working/mast3r')\n \n print(\"Downloading model weights...\")\n os.system('mkdir -p checkpoints')\n os.system('wget -P checkpoints/ https://download.europe.naverlabs.com/ComputerVision/MASt3R/MASt3R_ViTLarge_BaseDecoder_512_catmlpdpt_metric.pth')\n \n # Download ASMK retrieval model\n print(\"\\nπŸ“¦ Downloading ASMK retrieval model...\")\n os.system('wget -P checkpoints/ 
https://download.europe.naverlabs.com/ComputerVision/MASt3R/MASt3R_ViTLarge_BaseDecoder_512_catmlpdpt_metric_retrieval_trainingfree.pth')\n os.system('wget -P checkpoints/ https://download.europe.naverlabs.com/ComputerVision/MASt3R/MASt3R_ViTLarge_BaseDecoder_512_catmlpdpt_metric_retrieval_codebook.pkl')\n \n print(\"Installing additional dependencies...\")\n os.system('pip install trimesh matplotlib roma')\n \n sys.path.insert(0, '/kaggle/working/mast3r')\n sys.path.insert(0, '/kaggle/working/mast3r/dust3r')\n \n print(\"\\nπŸ” Verifying MASt3R installation...\")\n try:\n from mast3r.model import AsymmetricMASt3R\n print(\" βœ“ MASt3R import: OK\")\n except Exception as e:\n print(f\" ❌ MASt3R import failed: {e}\")\n raise\n \n print(\"βœ“ MASt3R with ASMK setup complete!\")\n\n# ============================================================================\n# Step 0: Biplet-Square Normalization (PRESERVED)\n# ============================================================================\ndef normalize_image_sizes_biplet(input_dir, output_dir=None, size=1024):\n if output_dir is None:\n output_dir = input_dir\n \n os.makedirs(output_dir, exist_ok=True)\n \n print(f\"Generating 2 cropped squares for each image...\")\n converted_count = 0\n size_stats = {}\n \n for img_file in sorted(os.listdir(input_dir)):\n if not img_file.lower().endswith(('.jpg', '.jpeg', '.png')):\n continue\n \n input_path = os.path.join(input_dir, img_file)\n \n try:\n img = Image.open(input_path)\n original_size = img.size\n \n size_key = f\"{original_size[0]}x{original_size[1]}\"\n size_stats[size_key] = size_stats.get(size_key, 0) + 1\n \n crops = generate_two_crops(img, size)\n \n base_name, ext = os.path.splitext(img_file)\n for mode, cropped_img in crops.items():\n output_path = os.path.join(output_dir, f\"{base_name}_{mode}{ext}\")\n cropped_img.save(output_path, quality=95)\n \n converted_count += 1\n print(f\" βœ“ {img_file}: {original_size} β†’ 2 square images\")\n \n except Exception 
as e:\n print(f\" βœ— Error processing {img_file}: {e}\")\n \n print(f\"\\nProcessing complete: {converted_count} source images processed\")\n return converted_count\n\ndef generate_two_crops(img, size):\n width, height = img.size\n crop_size = min(width, height)\n crops = {}\n \n if width > height:\n positions = {'left': 0, 'right': width - crop_size}\n for mode, x_offset in positions.items():\n box = (x_offset, 0, x_offset + crop_size, crop_size)\n crops[mode] = img.crop(box).resize((size, size), Image.Resampling.LANCZOS)\n else:\n positions = {'top': 0, 'bottom': height - crop_size}\n for mode, y_offset in positions.items():\n box = (0, y_offset, crop_size, y_offset + crop_size)\n crops[mode] = img.crop(box).resize((size, size), Image.Resampling.LANCZOS)\n \n return crops\n\n# ============================================================================\n# Step 1: MASt3R ASMK-based Pair Selection (REPLACES DINO)\n# ============================================================================\ndef load_asmk_retrieval_model(device='cuda'):\n \"\"\"Load MASt3R model with ASMK retrieval\"\"\"\n print(\"\\n=== Loading MASt3R ASMK Retrieval Model ===\")\n \n from mast3r.model import AsymmetricMASt3R\n \n # Load main model\n model = AsymmetricMASt3R.from_pretrained(Config.MAST3R_MODEL).to(device)\n model.eval()\n \n # Load retrieval model weights if available\n if os.path.exists(Config.RETRIEVAL_MODEL):\n print(f\"Loading retrieval weights from {Config.RETRIEVAL_MODEL}\")\n retrieval_weights = torch.load(Config.RETRIEVAL_MODEL, map_location=device)\n # Note: Retrieval model may be integrated into encoder\n # The exact loading mechanism depends on MASt3R implementation\n \n # Load ASMK codebook\n codebook = None\n if os.path.exists(Config.RETRIEVAL_CODEBOOK):\n print(f\"Loading ASMK codebook from {Config.RETRIEVAL_CODEBOOK}\")\n with open(Config.RETRIEVAL_CODEBOOK, 'rb') as f:\n codebook = pickle.load(f)\n \n print(f\"βœ“ MASt3R ASMK model loaded on {device}\")\n return 
model, codebook\n\ndef extract_mast3r_features(model, image_paths, device='cuda', batch_size=4):\n \"\"\"Extract features from MASt3R encoder for ASMK retrieval\"\"\"\n print(\"\\n=== Extracting MASt3R Features for ASMK ===\")\n print(\"Initial memory state:\")\n get_memory_info()\n \n from dust3r.utils.image import load_images\n \n all_features = []\n \n for i in tqdm(range(0, len(image_paths), batch_size), desc=\"Extracting features\"):\n batch_paths = image_paths[i:i+batch_size]\n \n # Load images at small size for feature extraction\n images = load_images(batch_paths, size=Config.MAST3R_IMAGE_SIZE, verbose=False)\n \n batch_features = []\n for img_dict in images:\n img_tensor = img_dict['img'].unsqueeze(0).to(device)\n \n with torch.no_grad():\n # Extract encoder features (token features for ASMK)\n features = model.encoder(img_tensor)\n # Take last hidden state tokens (excluding CLS token if present)\n token_features = features[:, 1:, :] # [B, N_tokens, D]\n batch_features.append(token_features.cpu())\n \n del img_tensor\n \n all_features.extend(batch_features)\n clear_memory()\n \n print(f\"βœ“ Extracted features for {len(all_features)} images\")\n print(\"After feature extraction:\")\n get_memory_info()\n \n return all_features\n\ndef compute_asmk_similarity(features, codebook=None):\n \"\"\"Compute ASMK-based similarity matrix\"\"\"\n print(\"\\n=== Computing ASMK Similarity Matrix ===\")\n \n try:\n import asmk\n from asmk import asmk_method, kernel\n \n # Convert features to ASMK format\n # Features should be list of [N_tokens, D] arrays\n features_np = [f.squeeze(0).numpy() for f in features]\n \n # If codebook is provided, use it; otherwise create one\n if codebook is None:\n print(\"Creating ASMK codebook (this may take time)...\")\n # Simple k-means clustering for codebook\n all_tokens = np.vstack(features_np)\n from sklearn.cluster import MiniBatchKMeans\n n_clusters = min(10000, len(all_tokens) // 10)\n kmeans = 
MiniBatchKMeans(n_clusters=n_clusters, random_state=42, batch_size=1000)\n kmeans.fit(all_tokens)\n codebook = kmeans.cluster_centers_\n \n # Compute similarity using ASMK\n N = len(features_np)\n similarity_matrix = np.zeros((N, N))\n \n print(\"Computing pairwise similarities...\")\n for i in tqdm(range(N), desc=\"ASMK similarity\"):\n for j in range(i+1, N):\n # Simplified ASMK: quantize and compare\n feat_i = features_np[i]\n feat_j = features_np[j]\n \n # Compute distances to codebook\n dist_i = np.linalg.norm(feat_i[:, None, :] - codebook[None, :, :], axis=2)\n dist_j = np.linalg.norm(feat_j[:, None, :] - codebook[None, :, :], axis=2)\n \n # Assign to nearest cluster\n assign_i = np.argmin(dist_i, axis=1)\n assign_j = np.argmin(dist_j, axis=1)\n \n # Compute overlap (simple version)\n overlap = len(np.intersect1d(assign_i, assign_j))\n score = overlap / max(len(assign_i), len(assign_j))\n \n similarity_matrix[i, j] = score\n similarity_matrix[j, i] = score\n \n except ImportError:\n print(\"⚠️ ASMK package not found, using simplified cosine similarity\")\n # Fallback: use global features (average pooling)\n global_features = torch.stack([f.mean(dim=1).squeeze() for f in features])\n global_features = F.normalize(global_features, dim=1)\n similarity_matrix = (global_features @ global_features.T).cpu().numpy()\n \n # Set diagonal to -1 to avoid self-matching\n np.fill_diagonal(similarity_matrix, -1)\n \n print(f\"βœ“ Computed similarity matrix: {similarity_matrix.shape}\")\n return similarity_matrix\n\ndef build_pairs_from_similarity(similarity_matrix, topk=20):\n \"\"\"Build image pairs from similarity matrix\"\"\"\n print(f\"\\n=== Building Image Pairs (top-{topk}) ===\")\n \n N = similarity_matrix.shape[0]\n pairs = []\n \n for i in range(N):\n # Get top-k most similar images for image i\n similarities = similarity_matrix[i]\n top_indices = np.argsort(similarities)[-topk:][::-1]\n \n for j in top_indices:\n if i < j and similarities[j] > 0: # Avoid 
duplicates and negative scores\n pairs.append((i, j))\n \n # Remove duplicates\n pairs = list(set(pairs))\n \n print(f\"βœ“ Generated {len(pairs)} pairs from similarity matrix\")\n return pairs\n\ndef get_image_pairs_asmk(image_paths, max_pairs=None):\n \"\"\"ASMK-based pair selection using MASt3R features\"\"\"\n device = Config.DEVICE\n \n # Load model with ASMK\n model, codebook = load_asmk_retrieval_model(device)\n \n # Extract features\n features = extract_mast3r_features(model, image_paths, device)\n \n # Compute ASMK similarity\n similarity_matrix = compute_asmk_similarity(features, codebook)\n \n # Build pairs\n pairs = build_pairs_from_similarity(similarity_matrix, Config.RETRIEVAL_TOPK)\n \n print(f\"Initial pairs from ASMK: {len(pairs)}\")\n \n # Apply max_pairs limit if specified\n if max_pairs and len(pairs) > max_pairs:\n print(f\"Limiting to {max_pairs} pairs...\")\n import random\n random.seed(42)\n pairs = random.sample(pairs, max_pairs)\n \n # Clear model from memory\n del model, features, similarity_matrix\n clear_memory()\n \n return pairs\n\n# ============================================================================\n# Step 2: MASt3R Reconstruction (SAME AS BEFORE)\n# ============================================================================\ndef load_mast3r_model(device='cuda'):\n from mast3r.model import AsymmetricMASt3R\n model = AsymmetricMASt3R.from_pretrained(Config.MAST3R_MODEL).to(device)\n model.eval()\n print(f\"βœ“ MASt3R model loaded on {device}\")\n return model\n\ndef load_images_for_mast3r(image_paths, size=224):\n print(f\"\\n=== Loading images for MASt3R (size={size}) ===\")\n from dust3r.utils.image import load_images\n images = load_images(image_paths, size=size, verbose=True)\n return images\n\ndef run_mast3r_pairs(model, image_paths, pairs, device='cuda', batch_size=1):\n print(\"\\n=== Running MASt3R Reconstruction ===\")\n print(\"Initial memory state:\")\n get_memory_info()\n \n from dust3r.inference import 
inference\n from dust3r.cloud_opt import global_aligner, GlobalAlignerMode\n \n print(f\"Processing {len(pairs)} pairs...\")\n print(f\"Loading {len(image_paths)} images at {Config.MAST3R_IMAGE_SIZE}x{Config.MAST3R_IMAGE_SIZE}...\")\n images = load_images_for_mast3r(image_paths, size=Config.MAST3R_IMAGE_SIZE)\n \n print(f\"Creating image pairs...\")\n mast3r_pairs = [(images[i], images[j]) for i, j in tqdm(pairs, desc=\"Preparing pairs\")]\n \n print(f\"Running MASt3R inference...\")\n output = inference(mast3r_pairs, model, device, batch_size=batch_size, verbose=True)\n \n del mast3r_pairs\n clear_memory()\n \n print(\"βœ“ MASt3R inference complete\")\n \n print(\"Running global alignment...\")\n scene = global_aligner(output, device=device, mode=GlobalAlignerMode.PointCloudOptimizer)\n \n del output\n clear_memory()\n \n print(\"Computing global alignment...\")\n loss = scene.compute_global_alignment(init=\"mst\", niter=150, schedule='cosine', lr=0.01)\n \n print(f\"βœ“ Global alignment complete (final loss: {loss:.6f})\")\n return scene, images\n\n# ============================================================================\n# Step 3: Extract COLMAP Data\n# ============================================================================\ndef extract_colmap_data(scene, image_paths, max_points=1000000):\n print(\"\\n=== Extracting COLMAP-compatible data ===\")\n \n pts_all = scene.get_pts3d()\n \n if isinstance(pts_all, list):\n pts_all = torch.stack([p if isinstance(p, torch.Tensor) else torch.tensor(p) \n for p in pts_all])\n \n if len(pts_all.shape) == 4:\n B, H, W, _ = pts_all.shape\n pts3d = pts_all.reshape(-1, 3).detach().cpu().numpy()\n \n colors = []\n for img_path in image_paths:\n img = Image.open(img_path).resize((W, H))\n colors.append(np.array(img))\n colors = np.stack(colors).reshape(-1, 3) / 255.0\n else:\n pts3d = pts_all.detach().cpu().numpy() if isinstance(pts_all, torch.Tensor) else pts_all\n colors = np.ones((len(pts3d), 3)) * 0.5\n \n 
print(f\"βœ“ Extracted {len(pts3d)} 3D points from {len(image_paths)} images\")\n \n # Downsample points\n if len(pts3d) > max_points:\n print(f\"\\n⚠ Downsampling from {len(pts3d)} to {max_points} points...\")\n valid_mask = ~(np.isnan(pts3d).any(axis=1) | np.isinf(pts3d).any(axis=1))\n pts3d_valid = pts3d[valid_mask]\n colors_valid = colors[valid_mask]\n \n indices = np.random.choice(len(pts3d_valid), size=max_points, replace=False)\n pts3d = pts3d_valid[indices]\n colors = colors_valid[indices]\n print(f\"βœ“ Downsampled to {len(pts3d)} points\")\n \n # Extract camera parameters\n print(\"Extracting camera parameters...\")\n poses_c2w = scene.get_im_poses().detach().cpu().numpy()\n \n # Convert camera-to-world to world-to-camera\n poses = []\n for pose_c2w in poses_c2w:\n pose_w2c = np.linalg.inv(pose_c2w)\n poses.append(pose_w2c)\n poses = np.array(poses)\n \n focals = scene.get_focals().detach().cpu().numpy()\n pp = scene.get_principal_points().detach().cpu().numpy()\n \n mast3r_size = 224.0\n \n cameras = []\n for i, img_path in enumerate(image_paths):\n img = Image.open(img_path)\n W, H = img.size\n scale = W / mast3r_size\n \n if focals.shape[1] == 1:\n focal_mast3r = float(focals[i, 0])\n fx = fy = focal_mast3r * scale\n else:\n fx = float(focals[i, 0]) * scale\n fy = float(focals[i, 1]) * scale\n \n cx = float(pp[i, 0]) * scale\n cy = float(pp[i, 1]) * scale\n \n camera = {\n 'camera_id': i + 1,\n 'model': 'PINHOLE',\n 'width': W,\n 'height': H,\n 'params': [fx, fy, cx, cy]\n }\n cameras.append(camera)\n \n print(f\"\\nβœ“ Extracted {len(cameras)} cameras and {len(poses)} poses\")\n return pts3d, colors, cameras, poses\n\n# ============================================================================\n# COLMAP Binary Format Writers\n# ============================================================================\ndef save_colmap_reconstruction(pts3d, colors, cameras, poses, image_paths, output_dir):\n print(\"\\n=== Saving COLMAP reconstruction ===\")\n \n 
sparse_dir = Path(output_dir) / 'sparse' / '0'\n sparse_dir.mkdir(parents=True, exist_ok=True)\n \n write_cameras_binary(cameras, sparse_dir / 'cameras.bin')\n print(f\" βœ“ Wrote {len(cameras)} cameras\")\n \n write_images_binary(image_paths, cameras, poses, sparse_dir / 'images.bin')\n print(f\" βœ“ Wrote {len(image_paths)} images\")\n \n num_points = write_points3d_binary(pts3d, colors, sparse_dir / 'points3D.bin')\n print(f\" βœ“ Wrote {num_points} 3D points\")\n \n return sparse_dir\n\ndef write_cameras_binary(cameras, output_file):\n with open(output_file, 'wb') as f:\n f.write(struct.pack('Q', len(cameras)))\n for i, cam in enumerate(cameras):\n camera_id = cam.get('camera_id', i + 1)\n model_id = 1 # PINHOLE\n f.write(struct.pack('i', camera_id))\n f.write(struct.pack('i', model_id))\n f.write(struct.pack('Q', cam['width']))\n f.write(struct.pack('Q', cam['height']))\n for param in cam['params'][:4]:\n f.write(struct.pack('d', param))\n\ndef write_images_binary(image_paths, cameras, poses, output_file):\n with open(output_file, 'wb') as f:\n f.write(struct.pack('Q', len(image_paths)))\n for i, (img_path, pose) in enumerate(zip(image_paths, poses)):\n image_id = i + 1\n camera_id = cameras[i].get('camera_id', i + 1)\n image_name = os.path.basename(img_path)\n \n R = pose[:3, :3]\n t = pose[:3, 3]\n qvec = rotmat2qvec(R)\n \n f.write(struct.pack('i', image_id))\n for q in qvec:\n f.write(struct.pack('d', float(q)))\n for tv in t:\n f.write(struct.pack('d', float(tv)))\n f.write(struct.pack('i', camera_id))\n f.write(image_name.encode('utf-8') + b'\\x00')\n f.write(struct.pack('Q', 0))\n\ndef write_points3d_binary(pts3d, colors, output_file):\n valid_indices = [i for i, pt in enumerate(pts3d) \n if not (np.isnan(pt).any() or np.isinf(pt).any())]\n \n with open(output_file, 'wb') as f:\n f.write(struct.pack('Q', len(valid_indices)))\n for idx, point_id in enumerate(valid_indices):\n pt = pts3d[point_id]\n color = colors[point_id]\n \n f.write(struct.pack('Q', 
point_id))\n for coord in pt:\n f.write(struct.pack('d', float(coord)))\n col_int = (color * 255).astype(np.uint8)\n for c in col_int:\n f.write(struct.pack('B', int(c)))\n f.write(struct.pack('d', 0.0))\n f.write(struct.pack('Q', 0))\n \n if (idx + 1) % 1000000 == 0:\n print(f\" Wrote {idx + 1} / {len(valid_indices)} points...\")\n \n return len(valid_indices)\n\ndef rotmat2qvec(R):\n R = np.asarray(R, dtype=np.float64)\n trace = np.trace(R)\n \n if trace > 0:\n s = 0.5 / np.sqrt(trace + 1.0)\n w = 0.25 / s\n x = (R[2, 1] - R[1, 2]) * s\n y = (R[0, 2] - R[2, 0]) * s\n z = (R[1, 0] - R[0, 1]) * s\n elif R[0, 0] > R[1, 1] and R[0, 0] > R[2, 2]:\n s = 2.0 * np.sqrt(1.0 + R[0, 0] - R[1, 1] - R[2, 2])\n w = (R[2, 1] - R[1, 2]) / s\n x = 0.25 * s\n y = (R[0, 1] + R[1, 0]) / s\n z = (R[0, 2] + R[2, 0]) / s\n elif R[1, 1] > R[2, 2]:\n s = 2.0 * np.sqrt(1.0 + R[1, 1] - R[0, 0] - R[2, 2])\n w = (R[0, 2] - R[2, 0]) / s\n x = (R[0, 1] + R[1, 0]) / s\n y = 0.25 * s\n z = (R[1, 2] + R[2, 1]) / s\n else:\n s = 2.0 * np.sqrt(1.0 + R[2, 2] - R[0, 0] - R[1, 1])\n w = (R[1, 0] - R[0, 1]) / s\n x = (R[0, 2] + R[2, 0]) / s\n y = (R[1, 2] + R[2, 1]) / s\n z = 0.25 * s\n \n qvec = np.array([w, x, y, z], dtype=np.float64)\n return qvec / np.linalg.norm(qvec)\n\n# ============================================================================\n# Step 4: Gaussian Splatting\n# ============================================================================\ndef setup_gaussian_splatting():\n print(\"\\n=== Setting up Gaussian Splatting ===\")\n os.chdir('/kaggle/working')\n \n WORK_DIR = \"gaussian-splatting\"\n if not os.path.exists(WORK_DIR):\n run_cmd([\"git\", \"clone\", \"--recursive\",\n \"https://github.com/graphdeco-inria/gaussian-splatting.git\", WORK_DIR])\n \n os.chdir(WORK_DIR)\n run_cmd([sys.executable, \"-m\", \"pip\", \"install\", \"-r\", \"requirements.txt\"])\n \n submodules = {\n \"diff-gaussian-rasterization\": 
\"https://github.com/graphdeco-inria/diff-gaussian-rasterization.git\",\n \"simple-knn\": \"https://github.com/camenduru/simple-knn.git\"\n }\n \n for name, repo in submodules.items():\n path = os.path.join(\"submodules\", name)\n if not os.path.exists(path):\n run_cmd([\"git\", \"clone\", repo, path])\n run_cmd([sys.executable, \"-m\", \"pip\", \"install\", path])\n \n print(\"βœ“ Gaussian Splatting setup complete!\")\n\ndef train_gaussian_splatting(colmap_dir, image_dir, output_dir, iterations=2000):\n print(\"\\n=== Training Gaussian Splatting ===\")\n \n cmd = [\n 'python', 'train.py',\n '-s', colmap_dir,\n '--images', image_dir,\n '-m', output_dir,\n '--iterations', str(iterations),\n '--test_iterations', '1000', str(iterations),\n '--save_iterations', '1000', str(iterations),\n '--resolution', '2',\n '--densify_grad_threshold', '0.001',\n '--densification_interval', '200',\n '--opacity_reset_interval', '5000',\n ]\n \n result = subprocess.run(cmd, cwd='/kaggle/working/gaussian-splatting',\n capture_output=True, text=True)\n \n print(result.stdout)\n if result.returncode != 0:\n print(\"STDERR:\", result.stderr)\n raise RuntimeError(\"Gaussian Splatting training failed\")\n \n print(f\"\\nβœ“ Training completed: {output_dir}\")\n return output_dir\n\n# ============================================================================\n# Main Pipeline\n# ============================================================================\ndef main_pipeline(image_dir, output_dir, square_size=224, iterations=2000, \n max_images=None, max_pairs=10000, max_points=1000000):\n os.makedirs(output_dir, exist_ok=True)\n \n setup_base_environment()\n clear_memory()\n \n setup_mast3r_with_asmk()\n clear_memory()\n \n setup_gaussian_splatting()\n clear_memory()\n \n print(\"\\n\" + \"=\"*70)\n print(\"Step 1: Biplet-Square Normalization\")\n print(\"=\"*70)\n \n processed_image_dir = os.path.join(output_dir, \"processed_images\")\n original_image_paths = sorted([\n 
os.path.join(image_dir, f) for f in os.listdir(image_dir)\n if f.lower().endswith(('.jpg', '.jpeg', '.png'))\n ])\n \n if max_images and len(original_image_paths) > max_images:\n print(f\"\\n⚠️ Limiting to {max_images} original images\")\n original_image_paths = original_image_paths[:max_images]\n \n temp_dir = os.path.join(output_dir, \"temp_originals\")\n os.makedirs(temp_dir, exist_ok=True)\n \n import shutil\n for img_path in original_image_paths:\n shutil.copy(img_path, temp_dir)\n \n normalize_image_sizes_biplet(temp_dir, processed_image_dir, square_size)\n shutil.rmtree(temp_dir)\n \n image_paths = sorted([\n os.path.join(processed_image_dir, f)\n for f in os.listdir(processed_image_dir)\n if f.lower().endswith(('.jpg', '.jpeg', '.png'))\n ])\n \n print(f\"\\nπŸ“Έ Processing {len(image_paths)} images\")\n \n print(\"\\n\" + \"=\"*70)\n print(\"Step 2: MASt3R ASMK Pair Selection\")\n print(\"=\"*70)\n \n pairs = get_image_pairs_asmk(image_paths, max_pairs=max_pairs)\n clear_memory()\n \n print(\"\\n\" + \"=\"*70)\n print(\"Step 3: MASt3R Reconstruction\")\n print(\"=\"*70)\n \n device = Config.DEVICE\n model = load_mast3r_model(device)\n scene, mast3r_images = run_mast3r_pairs(model, image_paths, pairs, device)\n \n del model\n clear_memory()\n \n print(\"\\n\" + \"=\"*70)\n print(\"Step 4: Converting to COLMAP Format\")\n print(\"=\"*70)\n \n pts3d, colors, cameras, poses = extract_colmap_data(scene, image_paths, max_points)\n \n del scene, mast3r_images\n clear_memory()\n \n colmap_dir = os.path.join(output_dir, 'colmap')\n sparse_dir = save_colmap_reconstruction(pts3d, colors, cameras, poses, \n image_paths, colmap_dir)\n \n del pts3d, colors, cameras, poses\n clear_memory()\n \n print(\"\\n\" + \"=\"*70)\n print(\"Step 5: Training Gaussian Splatting\")\n print(\"=\"*70)\n \n gs_output = train_gaussian_splatting(colmap_dir, processed_image_dir, \n output_dir, iterations)\n \n print(\"\\n\" + \"=\"*70)\n print(\"βœ… Pipeline Completed!\")\n 
print(\"=\"*70)\n return gs_output\n\n# ============================================================================\n# Entry Point\n# ============================================================================\nif __name__ == \"__main__\":\n IMAGE_DIR = \"/kaggle/input/two-dogs/fountain80/fountain80\"\n OUTPUT_DIR = \"/kaggle/working/output\"\n \n gs_output = main_pipeline(\n image_dir=IMAGE_DIR,\n output_dir=OUTPUT_DIR,\n square_size=1024,\n iterations=4000,\n max_images=None,\n max_pairs=1000,\n max_points=100000\n )\n \n print(f\"\\nGaussian Splatting output: {gs_output}\")","metadata":{"trusted":true},"outputs":[],"execution_count":null},{"cell_type":"code","source":"","metadata":{"trusted":true},"outputs":[],"execution_count":null}]}