stpete2 committed
Commit cc9dead · verified · 1 Parent(s): 3cc61f4

Upload 2 files

mast3r-to-colmap-pipeline.ipynb ADDED
The diff for this file is too large to render.
mast3r-to-colmap-ps1-ps2-comparison.ipynb ADDED
@@ -0,0 +1 @@
 
 
+ {"metadata":{"kernelspec":{"language":"python","display_name":"Python 3","name":"python3"},"language_info":{"name":"python","version":"3.12.12","mimetype":"text/x-python","codemirror_mode":{"name":"ipython","version":3},"pygments_lexer":"ipython3","nbconvert_exporter":"python","file_extension":".py"},"kaggle":{"accelerator":"none","dataSources":[{"sourceId":293774821,"sourceType":"kernelVersion"}],"dockerImageVersionId":31259,"isInternetEnabled":true,"language":"python","sourceType":"notebook","isGpuEnabled":false}},"nbformat_minor":4,"nbformat":4,"cells":[{"cell_type":"code","source":"","metadata":{"trusted":true},"outputs":[],"execution_count":null},{"cell_type":"markdown","source":"# **MASt3R to COLMAP ps1/ps2 Comparison**\n\nps1: called traditional/process1<br>\nps2: called process2<br>\n\nhttps://www.kaggle.com/code/stpeteishii/mast3r-to-colmap-pipeline\n\nhttps://www.kaggle.com/code/stpeteishii/mast3r-to-colmap-ps1-ps2-comparison","metadata":{}},{"cell_type":"markdown","source":"https://arxiv.org/abs/2406.09756","metadata":{}},{"cell_type":"markdown","source":"## **What is MASt3R?**\nMASt3R is a 3D image matching model built to find visual correspondences between photos and understand scenes in three dimensions. It fundamentally treats matching as a 3D geometry problem, unlike older methods that treat it as a 2D task. This makes it exceptionally robust for challenging real-world scenarios.","metadata":{}},{"cell_type":"markdown","source":"\n## **About this notebook**\n\nWhen performing Gaussian Splatting, the traditional approach is to feed the output of COLMAP directly into the Gaussian Splatting pipeline.\n\nMASt3R can replace the role that COLMAP has traditionally played. 
However, the output format of MASt3R differs from that of COLMAP, so it is necessary to convert MASt3R’s output into COLMAP format.\n\nIn this notebook, we consider two possible approaches for converting MASt3R output into COLMAP format.\n\nThe first approach is the one used to generate submission data with MASt3R in IMC2025. In this notebook, this approach is referred to as **process2**.\n\nThe second approach is referred to as **traditional** or **process1** in this notebook.\n\n* **Process2** directly merges and filters the 3D point clouds (pts3d) and returns them as the function output.\n* **Traditional / process1** does not return 3D point cloud data. Instead, it focuses on decomposing and organizing camera poses into rotations and translations.\n\nWe performed Gaussian Splatting using both approaches and observed a clear difference in the resulting 3DGS reconstructions. The results produced by the traditional/process1 approach were superior.\n\nBased on these observations, when performing Gaussian Splatting with MASt3R, we recommend using the traditional/process1 approach.\nFurthermore, for submissions to IMC2025 using MASt3R, adopting the traditional/process1 approach is also expected to yield better results.\n\n\n\n","metadata":{}},{"cell_type":"code","source":"import numpy as np\nimport pandas as pd\nfrom collections import namedtuple\nimport struct\nimport matplotlib.pyplot as plt\n\ndef quantify_pose_differences_from_bins(sparse_dir_trad, sparse_dir_proc2):\n \"\"\"\n Compare camera poses directly from COLMAP binary files.\n Calculates differences in position (meters) and rotation (degrees).\n \"\"\"\n \n def read_images_binary(path_to_model_file):\n \"\"\"\n Read COLMAP images.bin file\n \"\"\"\n Image = namedtuple(\n \"Image\", [\"id\", \"qvec\", \"tvec\", \"camera_id\", \"name\", \"xys\", \"point3D_ids\"]\n )\n \n images = {}\n with open(path_to_model_file, \"rb\") as fid:\n num_reg_images = struct.unpack(\"<Q\", fid.read(8))[0]\n for _ in 
range(num_reg_images):\n binary_image_properties = struct.unpack(\"<I\", fid.read(4))\n image_id = binary_image_properties[0]\n \n qvec = struct.unpack(\"<dddd\", fid.read(32))\n tvec = struct.unpack(\"<ddd\", fid.read(24))\n camera_id = struct.unpack(\"<I\", fid.read(4))[0]\n \n image_name = \"\"\n current_char = struct.unpack(\"<c\", fid.read(1))[0]\n while current_char != b\"\\x00\":\n image_name += current_char.decode(\"utf-8\")\n current_char = struct.unpack(\"<c\", fid.read(1))[0]\n \n num_points2D = struct.unpack(\"<Q\", fid.read(8))[0]\n x_y_id_s = struct.unpack(\n \"<\" + \"ddq\" * num_points2D, fid.read(24 * num_points2D)\n )\n xys = np.column_stack(\n [tuple(map(float, x_y_id_s[0::3])), tuple(map(float, x_y_id_s[1::3]))]\n )\n point3D_ids = np.array(tuple(map(int, x_y_id_s[2::3])))\n \n images[image_id] = Image(\n id=image_id,\n qvec=np.array(qvec),\n tvec=np.array(tvec),\n camera_id=camera_id,\n name=image_name,\n xys=xys,\n point3D_ids=point3D_ids,\n )\n return images\n \n # Load images.bin from both Traditional and Process2\n images_trad = read_images_binary(f'{sparse_dir_trad}/images.bin')\n images_proc2 = read_images_binary(f'{sparse_dir_proc2}/images.bin')\n \n print(f\"Loaded {len(images_trad)} images from Traditional\")\n print(f\"Loaded {len(images_proc2)} images from Process2\")\n \n def qvec2rotmat(qvec):\n \"\"\"Convert quaternion to rotation matrix.\"\"\"\n qvec = qvec / np.linalg.norm(qvec)\n w, x, y, z = qvec\n \n R = np.array([\n [1 - 2*y*y - 2*z*z, 2*x*y - 2*z*w, 2*x*z + 2*y*w],\n [2*x*y + 2*z*w, 1 - 2*x*x - 2*z*z, 2*y*z - 2*x*w],\n [2*x*z - 2*y*w, 2*y*z + 2*x*w, 1 - 2*x*x - 2*y*y]\n ])\n return R\n \n def compute_camera_center(qvec, tvec):\n \"\"\"Compute camera center in world coordinates.\"\"\"\n R = qvec2rotmat(qvec)\n center = -R.T @ tvec\n return center\n \n # Compare Poses\n pose_diffs = []\n \n for img_id in images_trad:\n if img_id not in images_proc2:\n continue\n \n img_trad = images_trad[img_id]\n img_proc2 = 
images_proc2[img_id]\n \n # Calculate Camera Centers\n center_trad = compute_camera_center(img_trad.qvec, img_trad.tvec)\n center_proc2 = compute_camera_center(img_proc2.qvec, img_proc2.tvec)\n pos_diff = np.linalg.norm(center_trad - center_proc2)\n \n # Calculate Rotation Differences (using quaternion dot product)\n qvec_trad = img_trad.qvec / np.linalg.norm(img_trad.qvec)\n qvec_proc2 = img_proc2.qvec / np.linalg.norm(img_proc2.qvec)\n \n dot = np.abs(np.dot(qvec_trad, qvec_proc2))\n dot = np.clip(dot, -1.0, 1.0)\n angle_diff = 2 * np.arccos(dot) * 180 / np.pi\n \n # Calculate Optical Axis Direction\n R_trad = qvec2rotmat(qvec_trad)\n R_proc2 = qvec2rotmat(qvec_proc2)\n \n direction_trad = R_trad @ np.array([0, 0, 1])\n direction_proc2 = R_proc2 @ np.array([0, 0, 1])\n \n direction_angle = np.arccos(\n np.clip(np.dot(direction_trad, direction_proc2), -1, 1)\n ) * 180 / np.pi\n \n pose_diffs.append({\n 'image_id': img_id,\n 'image_name': img_trad.name,\n 'position_diff': pos_diff,\n 'rotation_diff': angle_diff,\n 'direction_angle_diff': direction_angle,\n 'center_trad': center_trad,\n 'center_proc2': center_proc2,\n })\n \n df = pd.DataFrame(pose_diffs)\n \n # Print Statistics\n print(\"\\n\" + \"=\"*60)\n print(\"CAMERA POSE DIFFERENCES\")\n print(\"=\"*60)\n print(f\"\\nPosition (Camera Center) Differences:\")\n print(f\" Mean: {df['position_diff'].mean():.6f} units\")\n print(f\" Median: {df['position_diff'].median():.6f} units\")\n print(f\" Max: {df['position_diff'].max():.6f} units\")\n print(f\" Std: {df['position_diff'].std():.6f} units\")\n \n print(f\"\\nRotation (Quaternion) Differences:\")\n print(f\" Mean: {df['rotation_diff'].mean():.6f}°\")\n print(f\" Median: {df['rotation_diff'].median():.6f}°\")\n print(f\" Max: {df['rotation_diff'].max():.6f}°\")\n print(f\" Std: {df['rotation_diff'].std():.6f}°\")\n \n print(f\"\\nCamera Direction (Optical Axis) Differences:\")\n print(f\" Mean: {df['direction_angle_diff'].mean():.6f}°\")\n print(f\" Median: 
{df['direction_angle_diff'].median():.6f}°\")\n print(f\" Max: {df['direction_angle_diff'].max():.6f}°\")\n print(f\" Std: {df['direction_angle_diff'].std():.6f}°\")\n \n print(f\"\\n{'='*60}\")\n print(\"Top 5 images with LARGEST ROTATION differences:\")\n print(\"=\"*60)\n top5 = df.nlargest(5, 'rotation_diff')[['image_name', 'rotation_diff', 'direction_angle_diff', 'position_diff']]\n for idx, row in top5.iterrows():\n print(f\" {row['image_name']}: rot={row['rotation_diff']:.6f}°, dir={row['direction_angle_diff']:.6f}°, pos={row['position_diff']:.6f}\")\n \n # Visualization\n fig, axes = plt.subplots(2, 2, figsize=(14, 10))\n \n # Plot Position Differences\n axes[0, 0].hist(df['position_diff'], bins=30, edgecolor='black', alpha=0.7)\n axes[0, 0].set_xlabel('Position Difference', fontsize=12)\n axes[0, 0].set_ylabel('Count', fontsize=12)\n axes[0, 0].set_title('Camera Position Differences', fontsize=14, fontweight='bold')\n axes[0, 0].axvline(df['position_diff'].mean(), color='red', linestyle='--', linewidth=2,\n label=f'Mean: {df[\"position_diff\"].mean():.6f}')\n axes[0, 0].legend()\n axes[0, 0].grid(alpha=0.3)\n \n # Plot Rotation Differences\n axes[0, 1].hist(df['rotation_diff'], bins=30, edgecolor='black', alpha=0.7, color='orange')\n axes[0, 1].set_xlabel('Rotation Difference (degrees)', fontsize=12)\n axes[0, 1].set_ylabel('Count', fontsize=12)\n axes[0, 1].set_title('Camera Rotation Differences', fontsize=14, fontweight='bold')\n axes[0, 1].axvline(df['rotation_diff'].mean(), color='red', linestyle='--', linewidth=2,\n label=f'Mean: {df[\"rotation_diff\"].mean():.6f}°')\n axes[0, 1].legend()\n axes[0, 1].grid(alpha=0.3)\n \n # Plot Direction Differences\n axes[1, 0].hist(df['direction_angle_diff'], bins=30, edgecolor='black', alpha=0.7, color='green')\n axes[1, 0].set_xlabel('Direction Angle Difference (degrees)', fontsize=12)\n axes[1, 0].set_ylabel('Count', fontsize=12)\n axes[1, 0].set_title('Camera Direction Differences', fontsize=14, 
fontweight='bold')\n axes[1, 0].axvline(df['direction_angle_diff'].mean(), color='red', linestyle='--', linewidth=2,\n label=f'Mean: {df[\"direction_angle_diff\"].mean():.6f}°')\n axes[1, 0].legend()\n axes[1, 0].grid(alpha=0.3)\n \n # Scatter Plot: Position vs Rotation\n scatter = axes[1, 1].scatter(df['position_diff'], df['rotation_diff'], \n c=df['direction_angle_diff'], cmap='viridis', \n s=50, alpha=0.6, edgecolors='black', linewidth=0.5)\n axes[1, 1].set_xlabel('Position Difference', fontsize=12)\n axes[1, 1].set_ylabel('Rotation Difference (degrees)', fontsize=12)\n axes[1, 1].set_title('Position vs Rotation Differences', fontsize=14, fontweight='bold')\n axes[1, 1].grid(alpha=0.3)\n cbar = plt.colorbar(scatter, ax=axes[1, 1])\n cbar.set_label('Direction Angle Diff (°)', fontsize=10)\n \n plt.tight_layout()\n plt.savefig('camera_pose_comparison.png', dpi=200, bbox_inches='tight')\n print(f\"\\n✓ Saved visualization to: camera_pose_comparison.png\")\n plt.show()\n \n # Save to CSV\n df.to_csv('camera_pose_differences.csv', index=False)\n print(f\"✓ Saved detailed data to: camera_pose_differences.csv\")\n \n return df","metadata":{"trusted":true},"outputs":[],"execution_count":null},{"cell_type":"code","source":"sparse_dir_trad='/kaggle/input/biplet-dino-mast3r-ps1-ps2-cp06/output/sparse_traditional/0'\nsparse_dir_proc2='/kaggle/input/biplet-dino-mast3r-ps1-ps2-cp06/output/sparse_new/0'","metadata":{"trusted":true,"execution":{"iopub.status.busy":"2026-01-25T06:51:22.676640Z","iopub.execute_input":"2026-01-25T06:51:22.677002Z","iopub.status.idle":"2026-01-25T06:51:22.693646Z","shell.execute_reply.started":"2026-01-25T06:51:22.676958Z","shell.execute_reply":"2026-01-25T06:51:22.692575Z"}},"outputs":[],"execution_count":null},{"cell_type":"code","source":"pose_diff_df = quantify_pose_differences_from_bins(sparse_dir_trad, 
sparse_dir_proc2)","metadata":{"trusted":true,"execution":{"iopub.status.busy":"2026-01-25T06:51:22.695576Z","iopub.execute_input":"2026-01-25T06:51:22.695897Z","iopub.status.idle":"2026-01-25T06:51:24.596112Z","shell.execute_reply.started":"2026-01-25T06:51:22.695868Z","shell.execute_reply":"2026-01-25T06:51:24.595301Z"}},"outputs":[],"execution_count":null},{"cell_type":"code","source":"import numpy as np\nfrom collections import namedtuple\nimport struct\n\ndef analyze_specific_cameras(sparse_dir_trad, sparse_dir_proc2, camera_ids):\n \"\"\"Perform detailed comparative analysis of specific camera poses.\"\"\"\n \n def read_images_binary(path):\n \"\"\"Read COLMAP images.bin file.\"\"\"\n Image = namedtuple(\"Image\", [\"id\", \"qvec\", \"tvec\", \"camera_id\", \"name\", \"xys\", \"point3D_ids\"])\n images = {}\n with open(path, \"rb\") as fid:\n num_reg_images = struct.unpack(\"<Q\", fid.read(8))[0]\n for _ in range(num_reg_images):\n binary_image_properties = struct.unpack(\"<I\", fid.read(4))\n image_id = binary_image_properties[0]\n qvec = struct.unpack(\"<dddd\", fid.read(32))\n tvec = struct.unpack(\"<ddd\", fid.read(24))\n camera_id = struct.unpack(\"<I\", fid.read(4))[0]\n image_name = \"\"\n current_char = struct.unpack(\"<c\", fid.read(1))[0]\n while current_char != b\"\\x00\":\n image_name += current_char.decode(\"utf-8\")\n current_char = struct.unpack(\"<c\", fid.read(1))[0]\n num_points2D = struct.unpack(\"<Q\", fid.read(8))[0]\n x_y_id_s = struct.unpack(\"<\" + \"ddq\" * num_points2D, fid.read(24 * num_points2D))\n xys = np.column_stack([tuple(map(float, x_y_id_s[0::3])), tuple(map(float, x_y_id_s[1::3]))])\n point3D_ids = np.array(tuple(map(int, x_y_id_s[2::3])))\n images[image_id] = Image(image_id, np.array(qvec), np.array(tvec), \n camera_id, image_name, xys, point3D_ids)\n return images\n \n def qvec2rotmat(qvec):\n \"\"\"Convert quaternion to rotation matrix.\"\"\"\n qvec = qvec / np.linalg.norm(qvec)\n w, x, y, z = qvec\n return np.array([\n 
[1 - 2*y*y - 2*z*z, 2*x*y - 2*z*w, 2*x*z + 2*y*w],\n [2*x*y + 2*z*w, 1 - 2*x*x - 2*z*z, 2*y*z - 2*x*w],\n [2*x*z - 2*y*w, 2*y*z + 2*x*w, 1 - 2*x*x - 2*y*y]\n ])\n \n images_trad = read_images_binary(f'{sparse_dir_trad}/images.bin')\n images_proc2 = read_images_binary(f'{sparse_dir_proc2}/images.bin')\n \n for cam_id in camera_ids:\n if cam_id not in images_trad or cam_id not in images_proc2:\n print(f\"\\n[Warning] Camera ID {cam_id} not found in one of the models.\")\n continue\n\n print(f\"\\n{'='*70}\")\n print(f\"CAMERA ID {cam_id}: {images_trad[cam_id].name}\")\n print(\"=\"*70)\n \n img_trad = images_trad[cam_id]\n img_proc2 = images_proc2[cam_id]\n \n # Calculate Camera Centers\n R_trad = qvec2rotmat(img_trad.qvec)\n R_proc2 = qvec2rotmat(img_proc2.qvec)\n \n center_trad = -R_trad.T @ img_trad.tvec\n center_proc2 = -R_proc2.T @ img_proc2.tvec\n \n print(f\"\\nCamera Center (World Coordinates):\")\n print(f\" Traditional: [{center_trad[0]:+.6f}, {center_trad[1]:+.6f}, {center_trad[2]:+.6f}]\")\n print(f\" Process2: [{center_proc2[0]:+.6f}, {center_proc2[1]:+.6f}, {center_proc2[2]:+.6f}]\")\n print(f\" Difference: [{(center_trad-center_proc2)[0]:+.6f}, {(center_trad-center_proc2)[1]:+.6f}, {(center_trad-center_proc2)[2]:+.6f}]\")\n print(f\" Euclidean Dist: {np.linalg.norm(center_trad - center_proc2):.6f} units\")\n \n # Rotation (Quaternion)\n print(f\"\\nQuaternion (w, x, y, z):\")\n print(f\" Traditional: [{img_trad.qvec[0]:+.6f}, {img_trad.qvec[1]:+.6f}, {img_trad.qvec[2]:+.6f}, {img_trad.qvec[3]:+.6f}]\")\n print(f\" Process2: [{img_proc2.qvec[0]:+.6f}, {img_proc2.qvec[1]:+.6f}, {img_proc2.qvec[2]:+.6f}, {img_proc2.qvec[3]:+.6f}]\")\n \n # Camera Direction\n direction_trad = R_trad @ np.array([0, 0, 1])\n direction_proc2 = R_proc2 @ np.array([0, 0, 1])\n \n print(f\"\\nCamera Direction (Optical Axis):\")\n print(f\" Traditional: [{direction_trad[0]:+.6f}, {direction_trad[1]:+.6f}, {direction_trad[2]:+.6f}]\")\n print(f\" Process2: 
[{direction_proc2[0]:+.6f}, {direction_proc2[1]:+.6f}, {direction_proc2[2]:+.6f}]\")\n \n angle = np.arccos(np.clip(np.dot(direction_trad, direction_proc2), -1, 1)) * 180 / np.pi\n print(f\" Angular Deviation: {angle:.6f}°\")","metadata":{"trusted":true},"outputs":[],"execution_count":null},{"cell_type":"code","source":"analyze_specific_cameras(\n sparse_dir_trad, sparse_dir_proc2,\n camera_ids=[1, 2, 3]\n)","metadata":{"trusted":true,"execution":{"iopub.status.busy":"2026-01-25T06:51:24.616607Z","iopub.execute_input":"2026-01-25T06:51:24.616979Z","iopub.status.idle":"2026-01-25T06:51:24.640217Z","shell.execute_reply.started":"2026-01-25T06:51:24.616941Z","shell.execute_reply":"2026-01-25T06:51:24.639168Z"}},"outputs":[],"execution_count":null},{"cell_type":"code","source":"import numpy as np\nimport struct\nfrom collections import namedtuple\nimport matplotlib.pyplot as plt\n\ndef compare_point_cloud_quality(sparse_dir_trad, sparse_dir_proc2):\n \"\"\"Analyze qualitative and quantitative differences between two point clouds.\"\"\"\n \n def read_points3D_binary(path):\n \"\"\"Read COLMAP points3D.bin binary file.\"\"\"\n Point3D = namedtuple(\"Point3D\", [\"id\", \"xyz\", \"rgb\", \"error\", \"image_ids\", \"point2D_idxs\"])\n \n points3D = {}\n with open(path, \"rb\") as fid:\n num_points = struct.unpack(\"<Q\", fid.read(8))[0]\n for _ in range(num_points):\n point3D_id = struct.unpack(\"<Q\", fid.read(8))[0]\n xyz = struct.unpack(\"<ddd\", fid.read(24))\n rgb = struct.unpack(\"<BBB\", fid.read(3))\n error = struct.unpack(\"<d\", fid.read(8))[0]\n track_length = struct.unpack(\"<Q\", fid.read(8))[0]\n track_elems = struct.unpack(\"<\" + \"ii\" * track_length, fid.read(8 * track_length))\n image_ids = np.array(tuple(map(int, track_elems[0::2])))\n point2D_idxs = np.array(tuple(map(int, track_elems[1::2])))\n \n points3D[point3D_id] = Point3D(\n id=point3D_id,\n xyz=np.array(xyz),\n rgb=np.array(rgb),\n error=error,\n image_ids=image_ids,\n 
point2D_idxs=point2D_idxs\n )\n return points3D\n \n # Load point cloud data\n points_trad = read_points3D_binary(f'{sparse_dir_trad}/points3D.bin')\n points_proc2 = read_points3D_binary(f'{sparse_dir_proc2}/points3D.bin')\n \n print(\"=\"*70)\n print(\"POINT CLOUD QUALITY COMPARISON\")\n print(\"=\"*70)\n \n # Basic Point Statistics\n print(f\"\\nTotal Number of Points:\")\n print(f\" Traditional: {len(points_trad):,}\")\n print(f\" Process2: {len(points_proc2):,}\")\n \n # Reprojection Error Statistics\n errors_trad = [p.error for p in points_trad.values()]\n errors_proc2 = [p.error for p in points_proc2.values()]\n \n print(f\"\\nReprojection Error (pixels):\")\n print(f\" Traditional:\")\n print(f\" Mean: {np.mean(errors_trad):.6f}\")\n print(f\" Median: {np.median(errors_trad):.6f}\")\n print(f\" Max: {np.max(errors_trad):.6f}\")\n print(f\" Std: {np.std(errors_trad):.6f}\")\n \n print(f\" Process2:\")\n print(f\" Mean: {np.mean(errors_proc2):.6f}\")\n print(f\" Median: {np.median(errors_proc2):.6f}\")\n print(f\" Max: {np.max(errors_proc2):.6f}\")\n print(f\" Std: {np.std(errors_proc2):.6f}\")\n \n # Track Length (Observations per point)\n track_lengths_trad = [len(p.image_ids) for p in points_trad.values()]\n track_lengths_proc2 = [len(p.image_ids) for p in points_proc2.values()]\n \n print(f\"\\nTrack Length (Observations per point):\")\n print(f\" Traditional:\")\n print(f\" Mean: {np.mean(track_lengths_trad):.2f}\")\n print(f\" Median: {np.median(track_lengths_trad):.1f}\")\n print(f\" Max: {np.max(track_lengths_trad)}\")\n \n print(f\" Process2:\")\n print(f\" Mean: {np.mean(track_lengths_proc2):.2f}\")\n print(f\" Median: {np.median(track_lengths_proc2):.1f}\")\n print(f\" Max: {np.max(track_lengths_proc2)}\")\n \n # Centroid and Scale\n xyz_trad = np.array([p.xyz for p in points_trad.values()])\n xyz_proc2 = np.array([p.xyz for p in points_proc2.values()])\n \n center_trad = xyz_trad.mean(axis=0)\n center_proc2 = xyz_proc2.mean(axis=0)\n \n 
print(f\"\\nPoint Cloud Centroid:\")\n print(f\" Traditional: [{center_trad[0]:+.6f}, {center_trad[1]:+.6f}, {center_trad[2]:+.6f}]\")\n print(f\" Process2: [{center_proc2[0]:+.6f}, {center_proc2[1]:+.6f}, {center_proc2[2]:+.6f}]\")\n print(f\" Difference: [{(center_trad-center_proc2)[0]:+.6f}, {(center_trad-center_proc2)[1]:+.6f}, {(center_trad-center_proc2)[2]:+.6f}]\")\n print(f\" Distance: {np.linalg.norm(center_trad - center_proc2):.6f} units\")\n \n # Spatial Dispersion\n std_trad = xyz_trad.std(axis=0)\n std_proc2 = xyz_proc2.std(axis=0)\n \n print(f\"\\nPoint Cloud Dispersion (Std Dev):\")\n print(f\" Traditional: [{std_trad[0]:.6f}, {std_trad[1]:.6f}, {std_trad[2]:.6f}]\")\n print(f\" Process2: [{std_proc2[0]:.6f}, {std_proc2[1]:.6f}, {std_proc2[2]:.6f}]\")\n \n # --- Visualization ---\n fig, axes = plt.subplots(2, 2, figsize=(14, 10))\n \n # 1. Reprojection Error Distribution\n axes[0, 0].hist(errors_trad, bins=50, alpha=0.7, label='Traditional', edgecolor='black')\n axes[0, 0].hist(errors_proc2, bins=50, alpha=0.7, label='Process2', edgecolor='black')\n axes[0, 0].set_xlabel('Reprojection Error (pixels)')\n axes[0, 0].set_ylabel('Count')\n axes[0, 0].set_title('Reprojection Error Distribution', fontweight='bold')\n axes[0, 0].legend()\n # Limit X-axis to focus on the significant error range\n max_err_limit = min(5, max(np.max(errors_trad), np.max(errors_proc2)))\n axes[0, 0].set_xlim(0, max_err_limit)\n axes[0, 0].grid(alpha=0.3)\n \n # 2. Track Length Distribution\n axes[0, 1].hist(track_lengths_trad, bins=range(1, 20), alpha=0.7, label='Traditional', edgecolor='black')\n axes[0, 1].hist(track_lengths_proc2, bins=range(1, 20), alpha=0.7, label='Process2', edgecolor='black')\n axes[0, 1].set_xlabel('Track Length (Number of Observations)')\n axes[0, 1].set_ylabel('Count')\n axes[0, 1].set_title('Track Length Distribution', fontweight='bold')\n axes[0, 1].legend()\n axes[0, 1].grid(alpha=0.3)\n \n # 3. 
Top View Projection (X-Y Plane)\n sample_size = min(5000, len(xyz_trad))\n indices_trad = np.random.choice(len(xyz_trad), sample_size, replace=False)\n indices_proc2 = np.random.choice(len(xyz_proc2), sample_size, replace=False)\n \n axes[1, 0].scatter(xyz_trad[indices_trad, 0], xyz_trad[indices_trad, 1], \n s=1, alpha=0.5, label='Traditional', c='blue')\n axes[1, 0].scatter(xyz_proc2[indices_proc2, 0], xyz_proc2[indices_proc2, 1], \n s=1, alpha=0.5, label='Process2', c='red')\n axes[1, 0].scatter(center_trad[0], center_trad[1], s=100, c='blue', marker='x', linewidths=3, label='Centroid Trad')\n axes[1, 0].scatter(center_proc2[0], center_proc2[1], s=100, c='red', marker='x', linewidths=3, label='Centroid Proc2')\n axes[1, 0].set_xlabel('X (units)')\n axes[1, 0].set_ylabel('Y (units)')\n axes[1, 0].set_title('Point Cloud (Top View / X-Y Projection)', fontweight='bold')\n axes[1, 0].legend(markerscale=5)\n axes[1, 0].grid(alpha=0.3)\n axes[1, 0].axis('equal')\n \n # 4. Side View Projection (X-Z Plane)\n axes[1, 1].scatter(xyz_trad[indices_trad, 0], xyz_trad[indices_trad, 2], \n s=1, alpha=0.5, label='Traditional', c='blue')\n axes[1, 1].scatter(xyz_proc2[indices_proc2, 0], xyz_proc2[indices_proc2, 2], \n s=1, alpha=0.5, label='Process2', c='red')\n axes[1, 1].scatter(center_trad[0], center_trad[2], s=100, c='blue', marker='x', linewidths=3)\n axes[1, 1].scatter(center_proc2[0], center_proc2[2], s=100, c='red', marker='x', linewidths=3)\n axes[1, 1].set_xlabel('X (units)')\n axes[1, 1].set_ylabel('Z (units)')\n axes[1, 1].set_title('Point Cloud (Side View / X-Z Projection)', fontweight='bold')\n axes[1, 1].legend(markerscale=5)\n axes[1, 1].grid(alpha=0.3)\n axes[1, 1].axis('equal')\n \n plt.tight_layout()\n plt.savefig('point_cloud_quality_comparison.png', dpi=200, bbox_inches='tight')\n print(f\"\\n✓ Saved visualization to: point_cloud_quality_comparison.png\")\n plt.show()\n \n return {\n 'trad': {'errors': errors_trad, 'track_lengths': track_lengths_trad, 'xyz': 
xyz_trad},\n 'proc2': {'errors': errors_proc2, 'track_lengths': track_lengths_proc2, 'xyz': xyz_proc2}\n }","metadata":{"trusted":true},"outputs":[],"execution_count":null},{"cell_type":"code","source":"point_quality = compare_point_cloud_quality(\n sparse_dir_trad, sparse_dir_proc2,\n)","metadata":{"trusted":true,"execution":{"iopub.status.busy":"2026-01-25T06:51:24.670747Z","iopub.execute_input":"2026-01-25T06:51:24.671131Z","iopub.status.idle":"2026-01-25T06:52:21.739233Z","shell.execute_reply.started":"2026-01-25T06:51:24.671104Z","shell.execute_reply":"2026-01-25T06:52:21.738478Z"}},"outputs":[],"execution_count":null},{"cell_type":"code","source":"errors_trad=point_quality[\"trad\"][\"errors\"]\nerrors_proc2=point_quality[\"proc2\"][\"errors\"]","metadata":{"trusted":true,"execution":{"iopub.status.busy":"2026-01-25T06:52:21.740281Z","iopub.execute_input":"2026-01-25T06:52:21.740541Z","iopub.status.idle":"2026-01-25T06:52:21.744628Z","shell.execute_reply.started":"2026-01-25T06:52:21.740518Z","shell.execute_reply":"2026-01-25T06:52:21.743846Z"}},"outputs":[],"execution_count":null},{"cell_type":"code","source":"def analyze_error_distribution(errors_trad, errors_proc2):\n \"\"\"Perform detailed statistical analysis of reprojection error distributions.\"\"\"\n import numpy as np\n from scipy import stats\n from scipy.spatial.distance import jensenshannon\n \n print(\"=\"*70)\n print(\"REPROJECTION ERROR DISTRIBUTION ANALYSIS\")\n print(\"=\"*70)\n \n # Basic Statistics\n print(\"\\nBasic Statistics:\")\n print(f\" Traditional: Mean={np.mean(errors_trad):.4f}, Std Dev={np.std(errors_trad):.4f}\")\n print(f\" Process2: Mean={np.mean(errors_proc2):.4f}, Std Dev={np.std(errors_proc2):.4f}\")\n \n # Percentiles\n print(\"\\nPercentiles:\")\n for p in [50, 75, 90, 95, 99]:\n trad_p = np.percentile(errors_trad, p)\n proc2_p = np.percentile(errors_proc2, p)\n print(f\" {p}th Percentile: Trad={trad_p:.4f}, Proc2={proc2_p:.4f}\")\n \n # Outlier / Threshold Analysis\n 
print(\"\\nPercentage of Points Above Threshold:\")\n for threshold in [0.1, 0.3, 0.5, 0.7, 1.0]:\n trad_pct = np.sum(np.array(errors_trad) > threshold) / len(errors_trad) * 100\n proc2_pct = np.sum(np.array(errors_proc2) > threshold) / len(errors_proc2) * 100\n print(f\" >{threshold:.1f} pixels: Trad={trad_pct:.2f}%, Proc2={proc2_pct:.2f}%\")\n \n # Distribution Shape (Skewness and Kurtosis)\n print(\"\\nDistribution Shape Analysis:\")\n trad_skew = stats.skew(errors_trad)\n proc2_skew = stats.skew(errors_proc2)\n trad_kurt = stats.kurtosis(errors_trad)\n proc2_kurt = stats.kurtosis(errors_proc2)\n \n print(f\" Skewness: Trad={trad_skew:.4f}, Proc2={proc2_skew:.4f}\")\n print(f\" Kurtosis: Trad={trad_kurt:.4f}, Proc2={proc2_kurt:.4f}\")\n \n # Kolmogorov-Smirnov Test (Statistical validation of differences)\n ks_stat, p_value = stats.ks_2samp(errors_trad, errors_proc2)\n print(f\"\\nKolmogorov-Smirnov Test (KS Test):\")\n print(f\" Statistic: {ks_stat:.4f}\")\n print(f\" p-value: {p_value:.2e}\")\n if p_value < 0.001:\n print(\" → Result: Distributions are statistically different (p < 0.001)\")\n \n # Jensen-Shannon Divergence (Similarity measure)\n hist_trad, bins = np.histogram(errors_trad, bins=50, range=(0, 1), density=True)\n hist_proc2, _ = np.histogram(errors_proc2, bins=bins, density=True)\n \n # Normalization for Divergence Calculation\n hist_trad = hist_trad / (hist_trad.sum() + 1e-10)\n hist_proc2 = hist_proc2 / (hist_proc2.sum() + 1e-10)\n \n js_dist = jensenshannon(hist_trad, hist_proc2)\n print(f\"\\nJensen-Shannon Distance: {js_dist:.4f}\")\n print(f\" (Scale: 0 = identical distributions, 1 = completely different)\")\n\n# Execute\nanalyze_error_distribution(errors_trad, errors_proc2)","metadata":{"trusted":true},"outputs":[],"execution_count":null},{"cell_type":"code","source":"def read_points3D_binary(path_to_model_file):\n \"\"\"\n Read COLMAP points3D.bin file.\n \n Returns:\n dict: point3D_id -> Point3D(id, xyz, rgb, error, image_ids, 
point2D_idxs)\n \"\"\"\n import struct\n import numpy as np\n from collections import namedtuple\n \n # Define Point3D namedtuple structure\n Point3D = namedtuple(\n \"Point3D\", \n [\"id\", \"xyz\", \"rgb\", \"error\", \"image_ids\", \"point2D_idxs\"]\n )\n \n points3D = {}\n \n with open(path_to_model_file, \"rb\") as fid:\n # Read total number of points\n num_points = struct.unpack(\"<Q\", fid.read(8))[0]\n \n for _ in range(num_points):\n # Point3D ID\n point3D_id = struct.unpack(\"<Q\", fid.read(8))[0]\n \n # XYZ coordinates (3 doubles)\n xyz = struct.unpack(\"<ddd\", fid.read(24))\n xyz = np.array(xyz)\n \n # RGB color values (3 unsigned chars)\n rgb = struct.unpack(\"<BBB\", fid.read(3))\n rgb = np.array(rgb)\n \n # Reprojection error (double)\n error = struct.unpack(\"<d\", fid.read(8))[0]\n \n # Track length (number of images observing this point)\n track_length = struct.unpack(\"<Q\", fid.read(8))[0]\n \n # Track elements (pairs of image_id and point2D_idx)\n track_elems = struct.unpack(\n \"<\" + \"ii\" * track_length,\n fid.read(8 * track_length)\n )\n \n # Separate into image_ids and point2D_idxs arrays\n image_ids = np.array(tuple(map(int, track_elems[0::2])))\n point2D_idxs = np.array(tuple(map(int, track_elems[1::2])))\n \n # Create Point3D object and add to dictionary\n points3D[point3D_id] = Point3D(\n id=point3D_id,\n xyz=xyz,\n rgb=rgb,\n error=error,\n image_ids=image_ids,\n point2D_idxs=point2D_idxs\n )\n \n return points3D","metadata":{"trusted":true},"outputs":[],"execution_count":null},{"cell_type":"code","source":"# Load points\npoints3D = read_points3D_binary(f'{sparse_dir_trad}/points3D.bin')\n\nprint(f\"Total points: {len(points3D):,}\")\n\n# Display the first 5 points\nfor i, (point_id, point) in enumerate(list(points3D.items())[:5]):\n print(f\"\\nPoint ID {point_id}:\")\n print(f\" XYZ: {point.xyz}\")\n print(f\" RGB: {point.rgb}\")\n print(f\" Reprojection Error: {point.error:.6f} pixels\")\n print(f\" Observed by 
{len(point.image_ids)} images (Image IDs: {point.image_ids})\")\n\n# Descriptive Statistics\nimport numpy as np\n\nerrors = [p.error for p in points3D.values()]\ntrack_lengths = [len(p.image_ids) for p in points3D.values()]\n\nprint(f\"\\n=== Quality Metrics Summary ===\")\nprint(f\"Reprojection Error (pixels):\")\nprint(f\" Mean: {np.mean(errors):.6f}\")\nprint(f\" Median: {np.median(errors):.6f}\")\nprint(f\" Max: {np.max(errors):.6f}\")\n\nprint(f\"\\nTrack Length (observations per point):\")\nprint(f\" Mean: {np.mean(track_lengths):.2f}\")\nprint(f\" Median: {np.median(track_lengths):.0f}\")\nprint(f\" Max: {np.max(track_lengths)}\")","metadata":{"trusted":true},"outputs":[],"execution_count":null},{"cell_type":"code","source":"def identify_problematic_points(sparse_dir_proc2):\n \"\"\"Automatically analyze the spatial distribution and concentration of high-error points.\"\"\"\n \n # Load 3D points\n points = read_points3D_binary(f'{sparse_dir_proc2}/points3D.bin')\n \n # Extract coordinates for all points\n xyz_all = np.array([p.xyz for p in points.values()])\n \n # Filter points by reprojection error (Threshold: > 0.5 pixels)\n high_error_points = [(p.xyz, p.error) for p in points.values() if p.error > 0.5]\n \n if not high_error_points:\n print(\"No high-error points (>0.5 pixels) detected.\")\n return\n\n xyz_high_error = np.array([p[0] for p in high_error_points])\n \n print(f\"High-error points (>0.5 pixels): {len(high_error_points)}\")\n print(f\"Percentage of noise: {len(high_error_points)/len(points)*100:.1f}%\")\n \n # [Automation Logic] Calculate the centroid of high-error points\n error_center = xyz_high_error.mean(axis=0)\n print(f\"\\nDetected High-error centroid: {error_center}\")\n print(f\"Global centroid (all points): {xyz_all.mean(axis=0)}\")\n \n # Count high-error points within a specific radius to check for spatial concentration\n # Note: 0.3 is a heuristic value; adjust based on your model's coordinate scale\n distances = 
np.linalg.norm(xyz_high_error - error_center, axis=1)\n nearby = np.sum(distances < 0.3) \n \n print(f\"\\nPoints concentrated near the error centroid (<0.3 units): {nearby}\")\n print(f\"Concentration rate: {nearby/len(high_error_points)*100:.1f}%\")\n \n # --- Visualization ---\n fig = plt.figure(figsize=(12, 8))\n ax = fig.add_subplot(111, projection='3d')\n \n # Plot all points (Faint blue for context)\n ax.scatter(xyz_all[::10, 0], xyz_all[::10, 1], xyz_all[::10, 2], \n s=1, alpha=0.05, c='blue', label='All points (subsampled)')\n \n # Plot high-error points (Red for visibility)\n ax.scatter(xyz_high_error[:, 0], xyz_high_error[:, 1], xyz_high_error[:, 2],\n s=5, alpha=0.2, c='red', label='High error (>0.5px)')\n \n # Highlight the detected error centroid (Yellow star)\n ax.scatter(*error_center, s=200, c='yellow', marker='*', \n edgecolors='black', linewidth=1, label='Detected Error Centroid')\n \n ax.set_title('Spatial Distribution of High-Error Points')\n ax.set_xlabel('X')\n ax.set_ylabel('Y')\n ax.set_zlabel('Z')\n ax.legend()\n 
plt.show()","metadata":{"trusted":true},"outputs":[],"execution_count":null},{"cell_type":"code","source":"identify_problematic_points(sparse_dir_trad)","metadata":{"trusted":true,"execution":{"iopub.status.busy":"2026-01-25T07:56:20.499112Z","iopub.execute_input":"2026-01-25T07:56:20.499445Z","iopub.status.idle":"2026-01-25T07:56:39.483760Z","shell.execute_reply.started":"2026-01-25T07:56:20.499417Z","shell.execute_reply":"2026-01-25T07:56:39.482834Z"}},"outputs":[],"execution_count":null},{"cell_type":"code","source":"identify_problematic_points(sparse_dir_proc2)","metadata":{"trusted":true,"execution":{"iopub.status.busy":"2026-01-25T07:56:39.485498Z","iopub.execute_input":"2026-01-25T07:56:39.486198Z","iopub.status.idle":"2026-01-25T07:57:17.583330Z","shell.execute_reply.started":"2026-01-25T07:56:39.486161Z","shell.execute_reply":"2026-01-25T07:57:17.582161Z"}},"outputs":[],"execution_count":null},{"cell_type":"markdown","source":"3DGS via ps1 (traditional/process1)\n![](https://huggingface.co/datasets/stpete2/npy/resolve/main/fountain_dino_mast3r_ps1.png)","metadata":{}},{"cell_type":"markdown","source":"3DGS via ps2 (process2)\n![](https://huggingface.co/datasets/stpete2/npy/resolve/main/fountain_dino_mast3r_ps2.png)","metadata":{}},{"cell_type":"markdown","source":"","metadata":{}},{"cell_type":"code","source":"","metadata":{"trusted":true},"outputs":[],"execution_count":null},{"cell_type":"markdown","source":"","metadata":{}}]}
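The pose comparison in the notebook above reduces to two quantities per image: the distance between camera centers (C = -Rᵀt, in COLMAP's world-to-camera convention) and the rotation difference derived from the quaternion dot product. A minimal, self-contained sketch of just that math, mirroring the notebook's `qvec2rotmat` and its `2·arccos(|q_a·q_b|)` rotation metric but with made-up input poses rather than data read from `images.bin`:

```python
import numpy as np

def qvec2rotmat(qvec):
    """Convert a COLMAP-style (w, x, y, z) quaternion to a rotation matrix."""
    w, x, y, z = qvec / np.linalg.norm(qvec)
    return np.array([
        [1 - 2*y*y - 2*z*z, 2*x*y - 2*z*w,     2*x*z + 2*y*w],
        [2*x*y + 2*z*w,     1 - 2*x*x - 2*z*z, 2*y*z - 2*x*w],
        [2*x*z - 2*y*w,     2*y*z + 2*x*w,     1 - 2*x*x - 2*y*y],
    ])

def pose_difference(qvec_a, tvec_a, qvec_b, tvec_b):
    """Return (camera-center distance, rotation difference in degrees)."""
    # Camera center in world coordinates: C = -R^T t
    center_a = -qvec2rotmat(qvec_a).T @ tvec_a
    center_b = -qvec2rotmat(qvec_b).T @ tvec_b
    pos_diff = np.linalg.norm(center_a - center_b)
    # Rotation difference from the quaternion dot product:
    # theta = 2 * arccos(|q_a . q_b|)
    qa = qvec_a / np.linalg.norm(qvec_a)
    qb = qvec_b / np.linalg.norm(qvec_b)
    dot = np.clip(abs(np.dot(qa, qb)), -1.0, 1.0)
    rot_diff = np.degrees(2 * np.arccos(dot))
    return pos_diff, rot_diff

# Identity pose vs. a 90-degree rotation about the z-axis, same translation
q_id = np.array([1.0, 0.0, 0.0, 0.0])
q_z90 = np.array([np.cos(np.pi/4), 0.0, 0.0, np.sin(np.pi/4)])
t = np.zeros(3)
pos, rot = pose_difference(q_id, t, q_z90, t)
print(pos, rot)  # rotation difference is ~90 degrees
```

The absolute value of the dot product handles the double cover of rotations by unit quaternions (q and -q describe the same rotation), which is why the notebook also takes `np.abs` before `arccos`.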