stpete2 committed
Commit ef1635f · verified · 1 Parent(s): b63b849

Upload 2 files
biplet_dino_mast3r_ps2_gs_colab_11ox.ipynb ADDED
@@ -0,0 +1,1840 @@
1
+ {
2
+ "metadata": {
3
+ "kernelspec": {
4
+ "display_name": "Python 3",
5
+ "name": "python3"
6
+ },
7
+ "language_info": {
8
+ "name": "python",
9
+ "version": "3.12.12",
10
+ "mimetype": "text/x-python",
11
+ "codemirror_mode": {
12
+ "name": "ipython",
13
+ "version": 3
14
+ },
15
+ "pygments_lexer": "ipython3",
16
+ "nbconvert_exporter": "python",
17
+ "file_extension": ".py"
18
+ },
19
+ "kaggle": {
20
+ "accelerator": "none",
21
+ "dataSources": [],
22
+ "dockerImageVersionId": 31259,
23
+ "isInternetEnabled": true,
24
+ "language": "python",
25
+ "sourceType": "notebook",
26
+ "isGpuEnabled": false
27
+ },
28
+ "colab": {
29
+ "provenance": [],
30
+ "gpuType": "T4"
31
+ },
32
+ "accelerator": "GPU"
33
+ },
34
+ "nbformat_minor": 0,
35
+ "nbformat": 4,
36
+ "cells": [
37
+ {
38
+ "cell_type": "code",
39
+ "source": [],
40
+ "metadata": {
41
+ "_uuid": "8f2839f25d086af736a60e9eeb907d3b93b6e0e5",
42
+ "_cell_guid": "b1076dfc-b9ad-4769-8c92-a6c4dae69d19",
43
+ "trusted": true,
44
+ "execution": {
45
+ "iopub.status.busy": "2026-01-22T11:23:22.240664Z",
46
+ "iopub.execute_input": "2026-01-22T11:23:22.240957Z",
47
+ "iopub.status.idle": "2026-01-22T11:23:22.246018Z",
48
+ "shell.execute_reply.started": "2026-01-22T11:23:22.240936Z",
49
+ "shell.execute_reply": "2026-01-22T11:23:22.245074Z"
50
+ },
51
+ "id": "yhVNR6GETKyA"
52
+ },
53
+ "outputs": [],
54
+ "execution_count": null
55
+ },
56
+ {
57
+ "cell_type": "code",
58
+ "source": [
59
+ "# =====================================================================\n",
60
+ "# biplet_dino_mast3r_ps2_gs_colab_01.ipynb\n",
61
+ "# Version with ASMK replaced by DINO\n",
62
+ "# =====================================================================\n",
63
+ "\n",
64
+ "# =====================================================================\n",
65
+ "# CELL 1: Install Dependencies\n",
66
+ "# =====================================================================\n",
67
+ "!pip install roma einops timm huggingface_hub\n",
68
+ "!pip install opencv-python pillow tqdm pyaml cython plyfile\n",
69
+ "!pip install pycolmap trimesh\n",
70
+ "!pip install transformers==4.40.0 # required for DINO\n",
71
+ "!pip uninstall -y numpy scipy\n",
72
+ "!pip install numpy==1.26.4 scipy==1.11.4\n",
73
+ "break # intentional SyntaxError: halts this cell so the runtime can be restarted after the installs"
74
+ ],
75
+ "metadata": {
76
+ "trusted": true,
77
+ "id": "6C3QGJD8TKyC",
78
+ "colab": {
79
+ "base_uri": "https://localhost:8080/",
80
+ "height": 1000
81
+ },
82
+ "outputId": "b362f97d-fbc1-474f-f2cb-b84b565acdb9"
83
+ },
84
+ "outputs": [
85
+ {
86
+ "output_type": "stream",
87
+ "name": "stdout",
88
+ "text": [
89
+ "Collecting roma\n",
90
+ " Downloading roma-1.5.4-py3-none-any.whl.metadata (5.5 kB)\n",
91
+ "Requirement already satisfied: einops in /usr/local/lib/python3.12/dist-packages (0.8.1)\n",
92
+ "Requirement already satisfied: timm in /usr/local/lib/python3.12/dist-packages (1.0.24)\n",
93
+ "Requirement already satisfied: huggingface_hub in /usr/local/lib/python3.12/dist-packages (0.36.0)\n",
94
+ "Requirement already satisfied: torch in /usr/local/lib/python3.12/dist-packages (from timm) (2.9.0+cu126)\n",
95
+ "Requirement already satisfied: torchvision in /usr/local/lib/python3.12/dist-packages (from timm) (0.24.0+cu126)\n",
96
+ "Requirement already satisfied: pyyaml in /usr/local/lib/python3.12/dist-packages (from timm) (6.0.3)\n",
97
+ "Requirement already satisfied: safetensors in /usr/local/lib/python3.12/dist-packages (from timm) (0.7.0)\n",
98
+ "Requirement already satisfied: filelock in /usr/local/lib/python3.12/dist-packages (from huggingface_hub) (3.20.3)\n",
99
+ "Requirement already satisfied: fsspec>=2023.5.0 in /usr/local/lib/python3.12/dist-packages (from huggingface_hub) (2025.3.0)\n",
100
+ "Requirement already satisfied: packaging>=20.9 in /usr/local/lib/python3.12/dist-packages (from huggingface_hub) (25.0)\n",
101
+ "Requirement already satisfied: requests in /usr/local/lib/python3.12/dist-packages (from huggingface_hub) (2.32.4)\n",
102
+ "Requirement already satisfied: tqdm>=4.42.1 in /usr/local/lib/python3.12/dist-packages (from huggingface_hub) (4.67.1)\n",
103
+ "Requirement already satisfied: typing-extensions>=3.7.4.3 in /usr/local/lib/python3.12/dist-packages (from huggingface_hub) (4.15.0)\n",
104
+ "Requirement already satisfied: hf-xet<2.0.0,>=1.1.3 in /usr/local/lib/python3.12/dist-packages (from huggingface_hub) (1.2.0)\n",
105
+ "Requirement already satisfied: charset_normalizer<4,>=2 in /usr/local/lib/python3.12/dist-packages (from requests->huggingface_hub) (3.4.4)\n",
106
+ "Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.12/dist-packages (from requests->huggingface_hub) (3.11)\n",
107
+ "Requirement already satisfied: urllib3<3,>=1.21.1 in /usr/local/lib/python3.12/dist-packages (from requests->huggingface_hub) (2.5.0)\n",
108
+ "Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.12/dist-packages (from requests->huggingface_hub) (2026.1.4)\n",
109
+ "Requirement already satisfied: setuptools in /usr/local/lib/python3.12/dist-packages (from torch->timm) (75.2.0)\n",
110
+ "Requirement already satisfied: sympy>=1.13.3 in /usr/local/lib/python3.12/dist-packages (from torch->timm) (1.14.0)\n",
111
+ "Requirement already satisfied: networkx>=2.5.1 in /usr/local/lib/python3.12/dist-packages (from torch->timm) (3.6.1)\n",
112
+ "Requirement already satisfied: jinja2 in /usr/local/lib/python3.12/dist-packages (from torch->timm) (3.1.6)\n",
113
+ "Requirement already satisfied: nvidia-cuda-nvrtc-cu12==12.6.77 in /usr/local/lib/python3.12/dist-packages (from torch->timm) (12.6.77)\n",
114
+ "Requirement already satisfied: nvidia-cuda-runtime-cu12==12.6.77 in /usr/local/lib/python3.12/dist-packages (from torch->timm) (12.6.77)\n",
115
+ "Requirement already satisfied: nvidia-cuda-cupti-cu12==12.6.80 in /usr/local/lib/python3.12/dist-packages (from torch->timm) (12.6.80)\n",
116
+ "Requirement already satisfied: nvidia-cudnn-cu12==9.10.2.21 in /usr/local/lib/python3.12/dist-packages (from torch->timm) (9.10.2.21)\n",
117
+ "Requirement already satisfied: nvidia-cublas-cu12==12.6.4.1 in /usr/local/lib/python3.12/dist-packages (from torch->timm) (12.6.4.1)\n",
118
+ "Requirement already satisfied: nvidia-cufft-cu12==11.3.0.4 in /usr/local/lib/python3.12/dist-packages (from torch->timm) (11.3.0.4)\n",
119
+ "Requirement already satisfied: nvidia-curand-cu12==10.3.7.77 in /usr/local/lib/python3.12/dist-packages (from torch->timm) (10.3.7.77)\n",
120
+ "Requirement already satisfied: nvidia-cusolver-cu12==11.7.1.2 in /usr/local/lib/python3.12/dist-packages (from torch->timm) (11.7.1.2)\n",
121
+ "Requirement already satisfied: nvidia-cusparse-cu12==12.5.4.2 in /usr/local/lib/python3.12/dist-packages (from torch->timm) (12.5.4.2)\n",
122
+ "Requirement already satisfied: nvidia-cusparselt-cu12==0.7.1 in /usr/local/lib/python3.12/dist-packages (from torch->timm) (0.7.1)\n",
123
+ "Requirement already satisfied: nvidia-nccl-cu12==2.27.5 in /usr/local/lib/python3.12/dist-packages (from torch->timm) (2.27.5)\n",
124
+ "Requirement already satisfied: nvidia-nvshmem-cu12==3.3.20 in /usr/local/lib/python3.12/dist-packages (from torch->timm) (3.3.20)\n",
125
+ "Requirement already satisfied: nvidia-nvtx-cu12==12.6.77 in /usr/local/lib/python3.12/dist-packages (from torch->timm) (12.6.77)\n",
126
+ "Requirement already satisfied: nvidia-nvjitlink-cu12==12.6.85 in /usr/local/lib/python3.12/dist-packages (from torch->timm) (12.6.85)\n",
127
+ "Requirement already satisfied: nvidia-cufile-cu12==1.11.1.6 in /usr/local/lib/python3.12/dist-packages (from torch->timm) (1.11.1.6)\n",
128
+ "Requirement already satisfied: triton==3.5.0 in /usr/local/lib/python3.12/dist-packages (from torch->timm) (3.5.0)\n",
129
+ "Requirement already satisfied: numpy in /usr/local/lib/python3.12/dist-packages (from torchvision->timm) (2.0.2)\n",
130
+ "Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in /usr/local/lib/python3.12/dist-packages (from torchvision->timm) (11.3.0)\n",
131
+ "Requirement already satisfied: mpmath<1.4,>=1.1.0 in /usr/local/lib/python3.12/dist-packages (from sympy>=1.13.3->torch->timm) (1.3.0)\n",
132
+ "Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.12/dist-packages (from jinja2->torch->timm) (3.0.3)\n",
133
+ "Downloading roma-1.5.4-py3-none-any.whl (25 kB)\n",
134
+ "Installing collected packages: roma\n",
135
+ "Successfully installed roma-1.5.4\n",
136
+ "Requirement already satisfied: opencv-python in /usr/local/lib/python3.12/dist-packages (4.12.0.88)\n",
137
+ "Requirement already satisfied: pillow in /usr/local/lib/python3.12/dist-packages (11.3.0)\n",
138
+ "Requirement already satisfied: tqdm in /usr/local/lib/python3.12/dist-packages (4.67.1)\n",
139
+ "Collecting pyaml\n",
140
+ " Downloading pyaml-25.7.0-py3-none-any.whl.metadata (12 kB)\n",
141
+ "Requirement already satisfied: cython in /usr/local/lib/python3.12/dist-packages (3.0.12)\n",
142
+ "Collecting plyfile\n",
143
+ " Downloading plyfile-1.1.3-py3-none-any.whl.metadata (43 kB)\n",
144
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m43.3/43.3 kB\u001b[0m \u001b[31m2.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
145
+ "\u001b[?25hRequirement already satisfied: numpy<2.3.0,>=2 in /usr/local/lib/python3.12/dist-packages (from opencv-python) (2.0.2)\n",
146
+ "Requirement already satisfied: PyYAML in /usr/local/lib/python3.12/dist-packages (from pyaml) (6.0.3)\n",
147
+ "Downloading pyaml-25.7.0-py3-none-any.whl (26 kB)\n",
148
+ "Downloading plyfile-1.1.3-py3-none-any.whl (36 kB)\n",
149
+ "Installing collected packages: pyaml, plyfile\n",
150
+ "Successfully installed plyfile-1.1.3 pyaml-25.7.0\n",
151
+ "Collecting pycolmap\n",
152
+ " Downloading pycolmap-3.13.0-cp312-cp312-manylinux_2_28_x86_64.whl.metadata (10 kB)\n",
153
+ "Collecting trimesh\n",
154
+ " Downloading trimesh-4.11.1-py3-none-any.whl.metadata (13 kB)\n",
155
+ "Requirement already satisfied: numpy in /usr/local/lib/python3.12/dist-packages (from pycolmap) (2.0.2)\n",
156
+ "Downloading pycolmap-3.13.0-cp312-cp312-manylinux_2_28_x86_64.whl (20.3 MB)\n",
157
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m20.3/20.3 MB\u001b[0m \u001b[31m58.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
158
+ "\u001b[?25hDownloading trimesh-4.11.1-py3-none-any.whl (740 kB)\n",
159
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m740.4/740.4 kB\u001b[0m \u001b[31m62.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
160
+ "\u001b[?25hInstalling collected packages: trimesh, pycolmap\n",
161
+ "Successfully installed pycolmap-3.13.0 trimesh-4.11.1\n",
162
+ "Collecting transformers==4.40.0\n",
163
+ " Downloading transformers-4.40.0-py3-none-any.whl.metadata (137 kB)\n",
164
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m137.6/137.6 kB\u001b[0m \u001b[31m5.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
165
+ "\u001b[?25hRequirement already satisfied: filelock in /usr/local/lib/python3.12/dist-packages (from transformers==4.40.0) (3.20.3)\n",
166
+ "Requirement already satisfied: huggingface-hub<1.0,>=0.19.3 in /usr/local/lib/python3.12/dist-packages (from transformers==4.40.0) (0.36.0)\n",
167
+ "Requirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.12/dist-packages (from transformers==4.40.0) (2.0.2)\n",
168
+ "Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.12/dist-packages (from transformers==4.40.0) (25.0)\n",
169
+ "Requirement already satisfied: pyyaml>=5.1 in /usr/local/lib/python3.12/dist-packages (from transformers==4.40.0) (6.0.3)\n",
170
+ "Requirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.12/dist-packages (from transformers==4.40.0) (2025.11.3)\n",
171
+ "Requirement already satisfied: requests in /usr/local/lib/python3.12/dist-packages (from transformers==4.40.0) (2.32.4)\n",
172
+ "Collecting tokenizers<0.20,>=0.19 (from transformers==4.40.0)\n",
173
+ " Downloading tokenizers-0.19.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (6.7 kB)\n",
174
+ "Requirement already satisfied: safetensors>=0.4.1 in /usr/local/lib/python3.12/dist-packages (from transformers==4.40.0) (0.7.0)\n",
175
+ "Requirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.12/dist-packages (from transformers==4.40.0) (4.67.1)\n",
176
+ "Requirement already satisfied: fsspec>=2023.5.0 in /usr/local/lib/python3.12/dist-packages (from huggingface-hub<1.0,>=0.19.3->transformers==4.40.0) (2025.3.0)\n",
177
+ "Requirement already satisfied: typing-extensions>=3.7.4.3 in /usr/local/lib/python3.12/dist-packages (from huggingface-hub<1.0,>=0.19.3->transformers==4.40.0) (4.15.0)\n",
178
+ "Requirement already satisfied: hf-xet<2.0.0,>=1.1.3 in /usr/local/lib/python3.12/dist-packages (from huggingface-hub<1.0,>=0.19.3->transformers==4.40.0) (1.2.0)\n",
179
+ "Requirement already satisfied: charset_normalizer<4,>=2 in /usr/local/lib/python3.12/dist-packages (from requests->transformers==4.40.0) (3.4.4)\n",
180
+ "Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.12/dist-packages (from requests->transformers==4.40.0) (3.11)\n",
181
+ "Requirement already satisfied: urllib3<3,>=1.21.1 in /usr/local/lib/python3.12/dist-packages (from requests->transformers==4.40.0) (2.5.0)\n",
182
+ "Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.12/dist-packages (from requests->transformers==4.40.0) (2026.1.4)\n",
183
+ "Downloading transformers-4.40.0-py3-none-any.whl (9.0 MB)\n",
184
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m9.0/9.0 MB\u001b[0m \u001b[31m71.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
185
+ "\u001b[?25hDownloading tokenizers-0.19.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.6 MB)\n",
186
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m3.6/3.6 MB\u001b[0m \u001b[31m87.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
187
+ "\u001b[?25hInstalling collected packages: tokenizers, transformers\n",
188
+ " Attempting uninstall: tokenizers\n",
189
+ " Found existing installation: tokenizers 0.22.2\n",
190
+ " Uninstalling tokenizers-0.22.2:\n",
191
+ " Successfully uninstalled tokenizers-0.22.2\n",
192
+ " Attempting uninstall: transformers\n",
193
+ " Found existing installation: transformers 4.57.6\n",
194
+ " Uninstalling transformers-4.57.6:\n",
195
+ " Successfully uninstalled transformers-4.57.6\n",
196
+ "\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\n",
197
+ "sentence-transformers 5.2.0 requires transformers<6.0.0,>=4.41.0, but you have transformers 4.40.0 which is incompatible.\u001b[0m\u001b[31m\n",
198
+ "\u001b[0mSuccessfully installed tokenizers-0.19.1 transformers-4.40.0\n",
199
+ "Found existing installation: numpy 2.0.2\n",
200
+ "Uninstalling numpy-2.0.2:\n",
201
+ " Successfully uninstalled numpy-2.0.2\n",
202
+ "Found existing installation: scipy 1.16.3\n",
203
+ "Uninstalling scipy-1.16.3:\n",
204
+ " Successfully uninstalled scipy-1.16.3\n",
205
+ "Collecting numpy==1.26.4\n",
206
+ " Downloading numpy-1.26.4-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (61 kB)\n",
207
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m61.0/61.0 kB\u001b[0m \u001b[31m3.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
208
+ "\u001b[?25hCollecting scipy==1.11.4\n",
209
+ " Downloading scipy-1.11.4-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (60 kB)\n",
210
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m60.4/60.4 kB\u001b[0m \u001b[31m6.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
211
+ "\u001b[?25hDownloading numpy-1.26.4-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (18.0 MB)\n",
212
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m18.0/18.0 MB\u001b[0m \u001b[31m71.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
213
+ "\u001b[?25hDownloading scipy-1.11.4-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (35.8 MB)\n",
214
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m35.8/35.8 MB\u001b[0m \u001b[31m20.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
215
+ "\u001b[?25hInstalling collected packages: numpy, scipy\n",
216
+ "\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\n",
217
+ "mapclassify 2.10.0 requires scipy>=1.12, but you have scipy 1.11.4 which is incompatible.\n",
218
+ "opencv-python 4.12.0.88 requires numpy<2.3.0,>=2; python_version >= \"3.9\", but you have numpy 1.26.4 which is incompatible.\n",
219
+ "tsfresh 0.21.1 requires scipy>=1.14.0; python_version >= \"3.10\", but you have scipy 1.11.4 which is incompatible.\n",
220
+ "pytensor 2.36.3 requires numpy>=2.0, but you have numpy 1.26.4 which is incompatible.\n",
221
+ "spopt 0.7.0 requires scipy>=1.12.0, but you have scipy 1.11.4 which is incompatible.\n",
222
+ "opencv-python-headless 4.12.0.88 requires numpy<2.3.0,>=2; python_version >= \"3.9\", but you have numpy 1.26.4 which is incompatible.\n",
223
+ "opencv-contrib-python 4.12.0.88 requires numpy<2.3.0,>=2; python_version >= \"3.9\", but you have numpy 1.26.4 which is incompatible.\n",
224
+ "shap 0.50.0 requires numpy>=2, but you have numpy 1.26.4 which is incompatible.\n",
225
+ "sentence-transformers 5.2.0 requires transformers<6.0.0,>=4.41.0, but you have transformers 4.40.0 which is incompatible.\n",
226
+ "jax 0.7.2 requires numpy>=2.0, but you have numpy 1.26.4 which is incompatible.\n",
227
+ "jax 0.7.2 requires scipy>=1.13, but you have scipy 1.11.4 which is incompatible.\n",
228
+ "libpysal 4.14.1 requires scipy>=1.12.0, but you have scipy 1.11.4 which is incompatible.\n",
229
+ "rasterio 1.5.0 requires numpy>=2, but you have numpy 1.26.4 which is incompatible.\n",
230
+ "access 1.1.10.post3 requires scipy>=1.14.1, but you have scipy 1.11.4 which is incompatible.\n",
231
+ "tobler 0.13.0 requires numpy>=2.0, but you have numpy 1.26.4 which is incompatible.\n",
232
+ "tobler 0.13.0 requires scipy>=1.13, but you have scipy 1.11.4 which is incompatible.\n",
233
+ "esda 2.8.1 requires scipy>=1.12, but you have scipy 1.11.4 which is incompatible.\n",
234
+ "inequality 1.1.2 requires scipy>=1.12, but you have scipy 1.11.4 which is incompatible.\n",
235
+ "giddy 2.3.8 requires scipy>=1.12, but you have scipy 1.11.4 which is incompatible.\n",
236
+ "jaxlib 0.7.2 requires numpy>=2.0, but you have numpy 1.26.4 which is incompatible.\n",
237
+ "jaxlib 0.7.2 requires scipy>=1.13, but you have scipy 1.11.4 which is incompatible.\u001b[0m\u001b[31m\n",
238
+ "\u001b[0mSuccessfully installed numpy-1.26.4 scipy-1.11.4\n"
239
+ ]
240
+ },
241
+ {
242
+ "output_type": "display_data",
243
+ "data": {
244
+ "application/vnd.colab-display-data+json": {
245
+ "pip_warning": {
246
+ "packages": [
247
+ "numpy"
248
+ ]
249
+ },
250
+ "id": "c6df9411f82f41ceb400a90d4bec5f90"
251
+ }
252
+ },
253
+ "metadata": {}
254
+ },
255
+ {
256
+ "output_type": "error",
257
+ "ename": "SyntaxError",
258
+ "evalue": "'break' outside loop (ipython-input-2150635115.py, line 15)",
259
+ "traceback": [
260
+ "\u001b[0;36m File \u001b[0;32m\"/tmp/ipython-input-2150635115.py\"\u001b[0;36m, line \u001b[0;32m15\u001b[0m\n\u001b[0;31m break\u001b[0m\n\u001b[0m ^\u001b[0m\n\u001b[0;31mSyntaxError\u001b[0m\u001b[0;31m:\u001b[0m 'break' outside loop\n"
261
+ ]
262
+ }
263
+ ],
264
+ "execution_count": 1
265
+ },
266
+ {
267
+ "cell_type": "code",
268
+ "source": [],
269
+ "metadata": {
270
+ "id": "49QM1qVmdm4k"
271
+ },
272
+ "execution_count": null,
273
+ "outputs": []
274
+ },
275
+ {
276
+ "cell_type": "code",
277
+ "source": [],
278
+ "metadata": {
279
+ "id": "bSUbLgHpeeJ4"
280
+ },
281
+ "execution_count": null,
282
+ "outputs": []
283
+ },
284
+ {
285
+ "cell_type": "code",
286
+ "source": [],
287
+ "metadata": {
288
+ "id": "TPcj5qcmedBw"
289
+ },
290
+ "execution_count": 6,
291
+ "outputs": []
292
+ },
293
+ {
294
+ "cell_type": "code",
295
+ "source": [
296
+ "# Restart the runtime, then run from this cell\n",
297
+ "# =====================================================================\n",
298
+ "# CELL 2: Mount Drive and Verify\n",
299
+ "# =====================================================================\n",
300
+ "from google.colab import drive\n",
301
+ "drive.mount('/content/drive')\n",
302
+ "\n",
303
+ "import numpy as np\n",
304
+ "print(f\"✓ np: {np.__version__} - {np.__file__}\")\n",
305
+ "!pip show numpy | grep Version\n",
306
+ "\n",
307
+ "try:\n",
308
+ " import roma\n",
309
+ " print(\"✓ roma is installed\")\n",
310
+ "except ModuleNotFoundError:\n",
311
+ " print(\"⚠️ roma not found, installing...\")\n",
312
+ " !pip install roma\n",
313
+ " import roma\n",
314
+ " print(\"✓ roma installed\")\n",
315
+ "\n",
316
+ "# =====================================================================\n",
317
+ "# CELL 3: Clone Repositories\n",
318
+ "# =====================================================================\n",
319
+ "import os\n",
320
+ "import sys\n",
321
+ "\n",
322
+ "# Clone MASt3R\n",
323
+ "if not os.path.exists('/content/mast3r'):\n",
324
+ " print(\"Cloning MASt3R repository...\")\n",
325
+ " !git clone --recursive https://github.com/naver/mast3r.git /content/mast3r\n",
326
+ " print(\"✓ MASt3R cloned\")\n",
327
+ "else:\n",
328
+ " print(\"✓ MASt3R already exists\")\n",
329
+ "\n",
330
+ "# Clone DUSt3R (needed inside MASt3R)\n",
331
+ "if not os.path.exists('/content/mast3r/dust3r'):\n",
332
+ " print(\"Cloning DUSt3R repository...\")\n",
333
+ " !git clone --recursive https://github.com/naver/dust3r.git /content/mast3r/dust3r\n",
334
+ " print(\"✓ DUSt3R cloned\")\n",
335
+ "else:\n",
336
+ " print(\"✓ DUSt3R already exists\")\n",
337
+ "\n",
338
+ "# Add to sys.path\n",
339
+ "sys.path.insert(0, '/content/mast3r')\n",
340
+ "sys.path.insert(0, '/content/mast3r/dust3r')\n",
341
+ "\n",
342
+ "# Verify\n",
343
+ "try:\n",
344
+ " from dust3r.model import AsymmetricCroCo3DStereo\n",
345
+ " print(\"✓ dust3r.model imported successfully\")\n",
346
+ "except ImportError as e:\n",
347
+ " print(f\"✗ Import error: {e}\")\n",
348
+ "\n",
349
+ "# Also clone CroCo (a MASt3R dependency)\n",
350
+ "if not os.path.exists('/content/mast3r/croco'):\n",
351
+ " print(\"Cloning CroCo repository...\")\n",
352
+ " !git clone --recursive https://github.com/naver/croco.git /content/mast3r/croco\n",
353
+ " print(\"✓ CroCo cloned\")\n",
354
+ "\n",
355
+ "# =====================================================================\n",
356
+ "# CELL 4: Clone and Build Gaussian Splatting\n",
357
+ "# =====================================================================\n",
358
+ "print(\"\\n\" + \"=\"*70)\n",
359
+ "print(\"STEP: Clone Gaussian Splatting\")\n",
360
+ "print(\"=\"*70)\n",
361
+ "WORK_DIR = \"/content/gaussian-splatting\"\n",
362
+ "\n",
363
+ "import subprocess\n",
364
+ "if not os.path.exists(WORK_DIR):\n",
365
+ " subprocess.run([\n",
366
+ " \"git\", \"clone\", \"--recursive\",\n",
367
+ " \"https://github.com/graphdeco-inria/gaussian-splatting.git\",\n",
368
+ " WORK_DIR\n",
369
+ " ], capture_output=True)\n",
370
+ " print(\"✓ Cloned\")\n",
371
+ "else:\n",
372
+ " print(\"✓ Already exists\")\n",
373
+ "\n",
374
+ "# Directories that need to be pip-installed\n",
375
+ "submodules = [\n",
376
+ " \"/content/gaussian-splatting/submodules/diff-gaussian-rasterization\",\n",
377
+ " \"/content/gaussian-splatting/submodules/simple-knn\"\n",
378
+ "]\n",
379
+ "\n",
380
+ "for path in submodules:\n",
381
+ " print(f\"Installing {path}...\")\n",
382
+ " subprocess.run([\"pip\", \"install\", path], check=True)\n",
383
+ "\n",
384
+ "print(\"✓ Custom CUDA modules installed.\")\n",
385
+ "\n",
386
+ "print(f\"✓ np: {np.__version__} - {np.__file__}\")\n",
387
+ "!pip show numpy | grep Version\n",
388
+ "\n",
389
+ "# =====================================================================\n",
390
+ "# CELL 5: Import Core Libraries and Configure Memory\n",
391
+ "# =====================================================================\n",
392
+ "import os\n",
393
+ "import sys\n",
394
+ "import gc\n",
395
+ "import torch\n",
396
+ "import numpy as np\n",
397
+ "from pathlib import Path\n",
398
+ "from tqdm import tqdm\n",
399
+ "import torch.nn.functional as F\n",
400
+ "import shutil\n",
401
+ "from PIL import Image\n",
402
+ "from transformers import AutoImageProcessor, AutoModel\n",
403
+ "\n",
404
+ "# MEMORY MANAGEMENT\n",
405
+ "os.environ['PYTORCH_CUDA_ALLOC_CONF'] = 'expandable_segments:True'\n",
406
+ "\n",
407
+ "def clear_memory():\n",
408
+ " \"\"\"Clear GPU/CPU memory\"\"\"\n",
409
+ " gc.collect()\n",
410
+ " if torch.cuda.is_available():\n",
411
+ " torch.cuda.empty_cache()\n",
412
+ " torch.cuda.synchronize()\n",
413
+ "\n",
414
+ "def get_memory_info():\n",
415
+ " \"\"\"Get current memory usage\"\"\"\n",
416
+ " if torch.cuda.is_available():\n",
417
+ " allocated = torch.cuda.memory_allocated() / 1024**3\n",
418
+ " reserved = torch.cuda.memory_reserved() / 1024**3\n",
419
+ " print(f\"GPU Memory - Allocated: {allocated:.2f}GB, Reserved: {reserved:.2f}GB\")\n",
420
+ "\n",
421
+ " import psutil\n",
422
+ " cpu_mem = psutil.virtual_memory().percent\n",
423
+ " print(f\"CPU Memory Usage: {cpu_mem:.1f}%\")\n",
424
+ "\n",
425
+ "# CONFIGURATION\n",
426
+ "class Config:\n",
427
+ " DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n",
428
+ " MAST3R_WEIGHTS = \"naver/MASt3R_ViTLarge_BaseDecoder_512_catmlpdpt_metric\"\n",
429
+ " DUST3R_WEIGHTS = \"naver/DUSt3R_ViTLarge_BaseDecoder_512_dpt\"\n",
430
+ "\n",
431
+ " # DINO settings\n",
432
+ " DINO_MODEL = \"facebook/dinov2-base\"\n",
433
+ " GLOBAL_TOPK = 20 # top-K pairing partners for each image\n",
434
+ "\n",
435
+ " IMAGE_SIZE = 224\n",
436
+ "\n",
437
+ "# =====================================================================\n",
+ "# CELL 6: Image Preprocessing Functions (Biplet)\n",
+ "# =====================================================================\n",
+ "def normalize_image_sizes_biplet(input_dir, output_dir=None, size=1024):\n",
+ "    \"\"\"\n",
+ "    Generates two square crops (Left & Right or Top & Bottom)\n",
+ "    from each image in a directory.\n",
+ "    \"\"\"\n",
+ "    if output_dir is None:\n",
+ "        output_dir = input_dir + \"_biplet\"\n",
+ "\n",
+ "    os.makedirs(output_dir, exist_ok=True)\n",
+ "\n",
+ "    print(f\"\\n=== Generating Biplet Crops ({size}x{size}) ===\")\n",
+ "\n",
+ "    converted_count = 0\n",
+ "    size_stats = {}\n",
+ "\n",
+ "    for img_file in tqdm(sorted(os.listdir(input_dir)), desc=\"Creating biplets\"):\n",
+ "        if not img_file.lower().endswith(('.jpg', '.jpeg', '.png')):\n",
+ "            continue\n",
+ "\n",
+ "        input_path = os.path.join(input_dir, img_file)\n",
+ "\n",
+ "        try:\n",
+ "            img = Image.open(input_path)\n",
+ "            original_size = img.size\n",
+ "\n",
+ "            size_key = f\"{original_size[0]}x{original_size[1]}\"\n",
+ "            size_stats[size_key] = size_stats.get(size_key, 0) + 1\n",
+ "\n",
+ "            # Generate 2 crops\n",
+ "            crops = generate_two_crops(img, size)\n",
+ "\n",
+ "            base_name, ext = os.path.splitext(img_file)\n",
+ "            for mode, cropped_img in crops.items():\n",
+ "                output_path = os.path.join(output_dir, f\"{base_name}_{mode}{ext}\")\n",
+ "                cropped_img.save(output_path, quality=95)\n",
+ "\n",
+ "            converted_count += 1\n",
+ "\n",
+ "        except Exception as e:\n",
+ "            print(f\"  ✗ Error processing {img_file}: {e}\")\n",
+ "\n",
+ "    print(f\"\\n✓ Biplet generation complete:\")\n",
+ "    print(f\"  Source images: {converted_count}\")\n",
+ "    print(f\"  Biplet crops generated: {converted_count * 2}\")\n",
+ "    print(f\"  Original size distribution: {size_stats}\")\n",
+ "\n",
+ "    return output_dir\n",
+ "\n",
+ "\n",
+ "def generate_two_crops(img, size):\n",
+ "    \"\"\"\n",
+ "    Crop the image into two squares (left/right or top/bottom) and return both.\n",
+ "    \"\"\"\n",
+ "    width, height = img.size\n",
+ "    crop_size = min(width, height)\n",
+ "    crops = {}\n",
+ "\n",
+ "    if width > height:\n",
+ "        # Landscape → Left & Right\n",
+ "        positions = {\n",
+ "            'left': 0,\n",
+ "            'right': width - crop_size\n",
+ "        }\n",
+ "        for mode, x_offset in positions.items():\n",
+ "            box = (x_offset, 0, x_offset + crop_size, crop_size)\n",
+ "            crops[mode] = img.crop(box).resize(\n",
+ "                (size, size),\n",
+ "                Image.Resampling.LANCZOS\n",
+ "            )\n",
+ "    else:\n",
+ "        # Portrait or Square → Top & Bottom\n",
+ "        positions = {\n",
+ "            'top': 0,\n",
+ "            'bottom': height - crop_size\n",
+ "        }\n",
+ "        for mode, y_offset in positions.items():\n",
+ "            box = (0, y_offset, crop_size, y_offset + crop_size)\n",
+ "            crops[mode] = img.crop(box).resize(\n",
+ "                (size, size),\n",
+ "                Image.Resampling.LANCZOS\n",
+ "            )\n",
+ "\n",
+ "    return crops\n",
+ "\n",
+ "# =====================================================================\n",
+ "# CELL 7: Image Loading Function\n",
+ "# =====================================================================\n",
+ "def load_images_from_directory(image_dir, max_images=200):\n",
+ "    \"\"\"Load images from a directory\"\"\"\n",
+ "    print(f\"\\nLoading images from: {image_dir}\")\n",
+ "\n",
+ "    valid_extensions = {'.jpg', '.jpeg', '.png', '.bmp'}\n",
+ "    image_paths = []\n",
+ "\n",
+ "    for ext in valid_extensions:\n",
+ "        image_paths.extend(sorted(Path(image_dir).glob(f'*{ext}')))\n",
+ "        image_paths.extend(sorted(Path(image_dir).glob(f'*{ext.upper()}')))\n",
+ "\n",
+ "    image_paths = sorted(set(str(p) for p in image_paths))\n",
+ "\n",
+ "    if len(image_paths) > max_images:\n",
+ "        print(f\"⚠️ Limiting from {len(image_paths)} to {max_images} images\")\n",
+ "        image_paths = image_paths[:max_images]\n",
+ "\n",
+ "    print(f\"✓ Found {len(image_paths)} images\")\n",
+ "    return image_paths\n",
+ "\n",
+ "# =====================================================================\n",
+ "# CELL 8: MASt3R Model Loading\n",
+ "# =====================================================================\n",
+ "def load_mast3r_model(device):\n",
+ "    \"\"\"Load the MASt3R model\"\"\"\n",
+ "    print(\"\\n=== Loading MASt3R Model ===\")\n",
+ "\n",
+ "    if '/content/mast3r' not in sys.path:\n",
+ "        sys.path.insert(0, '/content/mast3r')\n",
+ "    if '/content/mast3r/dust3r' not in sys.path:\n",
+ "        sys.path.insert(0, '/content/mast3r/dust3r')\n",
+ "\n",
+ "    from dust3r.model import AsymmetricCroCo3DStereo\n",
+ "\n",
+ "    try:\n",
+ "        print(f\"Attempting to load: {Config.MAST3R_WEIGHTS}\")\n",
+ "        model = AsymmetricCroCo3DStereo.from_pretrained(Config.MAST3R_WEIGHTS).to(device)\n",
+ "        print(\"✓ Loaded MASt3R model\")\n",
+ "    except Exception as e:\n",
+ "        print(f\"⚠️ Failed to load MASt3R: {e}\")\n",
+ "        print(f\"Trying DUSt3R instead: {Config.DUST3R_WEIGHTS}\")\n",
+ "        model = AsymmetricCroCo3DStereo.from_pretrained(Config.DUST3R_WEIGHTS).to(device)\n",
+ "        print(\"✓ Loaded DUSt3R model as fallback\")\n",
+ "\n",
+ "    model.eval()\n",
+ "    print(f\"✓ Model loaded on {device}\")\n",
+ "    return model\n",
+ "\n",
+ "# =====================================================================\n",
+ "# CELL 9: DINO Pair Selection (REPLACES ASMK)\n",
+ "# =====================================================================\n",
+ "def load_torch_image(fname, device):\n",
+ "    \"\"\"Load image as torch tensor\"\"\"\n",
+ "    import torchvision.transforms as T\n",
+ "\n",
+ "    img = Image.open(fname).convert('RGB')\n",
+ "    transform = T.Compose([\n",
+ "        T.ToTensor(),\n",
+ "    ])\n",
+ "    return transform(img).unsqueeze(0).to(device)\n",
+ "\n",
+ "def extract_dino_global(image_paths, model_path, device):\n",
+ "    \"\"\"Extract DINO global descriptors with memory management\"\"\"\n",
+ "    print(\"\\n=== Extracting DINO Global Features ===\")\n",
+ "    print(\"Initial memory state:\")\n",
+ "    get_memory_info()\n",
+ "\n",
+ "    processor = AutoImageProcessor.from_pretrained(model_path)\n",
+ "    model = AutoModel.from_pretrained(model_path).eval().to(device)\n",
+ "\n",
+ "    global_descs = []\n",
+ "    batch_size = 4  # Small batch to save memory\n",
+ "\n",
+ "    for i in tqdm(range(0, len(image_paths), batch_size), desc=\"DINO extraction\"):\n",
+ "        batch_paths = image_paths[i:i+batch_size]\n",
+ "        batch_imgs = []\n",
+ "\n",
+ "        for img_path in batch_paths:\n",
+ "            img = load_torch_image(img_path, device)\n",
+ "            batch_imgs.append(img)\n",
+ "\n",
+ "        # Note: torch.cat requires all images in the batch to share one size\n",
+ "        # (guaranteed here by the square biplet preprocessing)\n",
+ "        batch_tensor = torch.cat(batch_imgs, dim=0)\n",
+ "\n",
+ "        with torch.no_grad():\n",
+ "            inputs = processor(images=batch_tensor, return_tensors=\"pt\", do_rescale=False).to(device)\n",
+ "            outputs = model(**inputs)\n",
+ "            desc = F.normalize(outputs.last_hidden_state[:, 1:].max(dim=1)[0], dim=1, p=2)\n",
+ "            global_descs.append(desc.cpu())\n",
+ "\n",
+ "        # Clear batch memory\n",
+ "        del batch_tensor, inputs, outputs, desc\n",
+ "        clear_memory()\n",
+ "\n",
+ "    global_descs = torch.cat(global_descs, dim=0)\n",
+ "\n",
+ "    del model, processor\n",
+ "    clear_memory()\n",
+ "\n",
+ "    print(\"After DINO extraction:\")\n",
+ "    get_memory_info()\n",
+ "\n",
+ "    return global_descs\n",
+ "\n",
+ "def build_topk_pairs(global_feats, k, device):\n",
+ "    \"\"\"Build top-k similar pairs from global features\"\"\"\n",
+ "    g = global_feats.to(device)\n",
+ "    sim = g @ g.T\n",
+ "    sim.fill_diagonal_(-1)\n",
+ "\n",
+ "    N = sim.size(0)\n",
+ "    k = min(k, N - 1)\n",
+ "\n",
+ "    topk_indices = torch.topk(sim, k, dim=1).indices.cpu()\n",
+ "\n",
+ "    # Store each edge as (min, max) so a match found in either direction\n",
+ "    # is kept (filtering on i < j alone would drop half the edges)\n",
+ "    pairs = []\n",
+ "    for i in range(N):\n",
+ "        for j in topk_indices[i]:\n",
+ "            j = j.item()\n",
+ "            pairs.append((min(i, j), max(i, j)))\n",
+ "\n",
+ "    # Remove duplicates (sorted for deterministic order)\n",
+ "    pairs = sorted(set(pairs))\n",
+ "\n",
+ "    return pairs\n",
+ "\n",
+ "def select_diverse_pairs(pairs, max_pairs, num_images):\n",
+ "    \"\"\"\n",
+ "    Select diverse pairs to ensure good image coverage\n",
+ "    \"\"\"\n",
+ "    import random\n",
+ "    random.seed(42)\n",
+ "\n",
+ "    if len(pairs) <= max_pairs:\n",
+ "        return pairs\n",
+ "\n",
+ "    print(f\"Selecting {max_pairs} diverse pairs from {len(pairs)} candidates...\")\n",
+ "\n",
+ "    # Count how many times each image appears in pairs\n",
+ "    image_counts = {i: 0 for i in range(num_images)}\n",
+ "    for i, j in pairs:\n",
+ "        image_counts[i] += 1\n",
+ "        image_counts[j] += 1\n",
+ "\n",
+ "    # Sort pairs so that pairs involving less-connected images come first\n",
+ "    def pair_score(pair):\n",
+ "        i, j = pair\n",
+ "        return image_counts[i] + image_counts[j]\n",
+ "\n",
+ "    pairs_scored = [(pair, pair_score(pair)) for pair in pairs]\n",
+ "    pairs_scored.sort(key=lambda x: x[1])\n",
+ "\n",
+ "    # Select pairs greedily to maximize coverage\n",
+ "    selected = []\n",
+ "    selected_images = set()\n",
+ "\n",
+ "    # Phase 1: Select pairs that add new images\n",
+ "    for pair, score in pairs_scored:\n",
+ "        if len(selected) >= max_pairs:\n",
+ "            break\n",
+ "        i, j = pair\n",
+ "        if i not in selected_images or j not in selected_images:\n",
+ "            selected.append(pair)\n",
+ "            selected_images.add(i)\n",
+ "            selected_images.add(j)\n",
+ "\n",
+ "    # Phase 2: Fill remaining slots (and keep the coverage count accurate)\n",
+ "    if len(selected) < max_pairs:\n",
+ "        remaining = [p for p, s in pairs_scored if p not in selected]\n",
+ "        random.shuffle(remaining)\n",
+ "        extra = remaining[:max_pairs - len(selected)]\n",
+ "        selected.extend(extra)\n",
+ "        for i, j in extra:\n",
+ "            selected_images.update((i, j))\n",
+ "\n",
+ "    print(f\"Selected pairs cover {len(selected_images)} / {num_images} images ({100*len(selected_images)/num_images:.1f}%)\")\n",
+ "\n",
+ "    return selected\n",
+ "\n",
+ "def get_image_pairs_dino(image_paths, max_pairs=None):\n",
+ "    \"\"\"DINO-based pair selection\"\"\"\n",
+ "    device = Config.DEVICE\n",
+ "\n",
+ "    # DINO global features\n",
+ "    global_feats = extract_dino_global(image_paths, Config.DINO_MODEL, device)\n",
+ "    pairs = build_topk_pairs(global_feats, Config.GLOBAL_TOPK, device)\n",
+ "\n",
+ "    print(f\"Initial pairs from DINO: {len(pairs)}\")\n",
+ "\n",
+ "    # Apply intelligent pair selection if a limit is specified\n",
+ "    if max_pairs and len(pairs) > max_pairs:\n",
+ "        pairs = select_diverse_pairs(pairs, max_pairs, len(image_paths))\n",
+ "\n",
+ "    return pairs\n",
+ "\n",
+ "# =====================================================================\n",
+ "# CELL 10: MASt3R Reconstruction\n",
+ "# =====================================================================\n",
+ "def run_mast3r_pairs(model, image_paths, pairs, device, batch_size=1, max_pairs=None):\n",
+ "    \"\"\"Run MASt3R on selected pairs with memory management\"\"\"\n",
+ "    print(\"\\n=== Running MASt3R Reconstruction ===\")\n",
+ "    print(\"Initial memory state:\")\n",
+ "    get_memory_info()\n",
+ "\n",
+ "    from dust3r.inference import inference\n",
+ "    from dust3r.cloud_opt import global_aligner, GlobalAlignerMode\n",
+ "    from dust3r.utils.image import load_images\n",
+ "\n",
+ "    # Limit number of pairs if specified\n",
+ "    if max_pairs and len(pairs) > max_pairs:\n",
+ "        print(f\"Limiting pairs from {len(pairs)} to {max_pairs}\")\n",
+ "        step = max(1, len(pairs) // max_pairs)\n",
+ "        pairs = pairs[::step][:max_pairs]\n",
+ "\n",
+ "    print(f\"Processing {len(pairs)} pairs...\")\n",
+ "\n",
+ "    # Load images at a reduced size\n",
+ "    print(f\"Loading {len(image_paths)} images at {Config.IMAGE_SIZE}x{Config.IMAGE_SIZE}...\")\n",
+ "    images = load_images(image_paths, size=Config.IMAGE_SIZE)\n",
+ "\n",
+ "    print(f\"Loaded {len(images)} images\")\n",
+ "    print(\"After loading images:\")\n",
+ "    get_memory_info()\n",
+ "\n",
+ "    # Create all image pairs\n",
+ "    print(f\"Creating {len(pairs)} image pairs...\")\n",
+ "    mast3r_pairs = []\n",
+ "    for idx1, idx2 in tqdm(pairs, desc=\"Preparing pairs\"):\n",
+ "        mast3r_pairs.append((images[idx1], images[idx2]))\n",
+ "\n",
+ "    print(f\"Running MASt3R inference on {len(mast3r_pairs)} pairs...\")\n",
+ "\n",
+ "    # Run inference\n",
+ "    output = inference(mast3r_pairs, model, device, batch_size=batch_size, verbose=True)\n",
+ "\n",
+ "    del mast3r_pairs\n",
+ "    clear_memory()\n",
+ "\n",
+ "    print(\"✓ MASt3R inference complete\")\n",
+ "    print(\"After inference:\")\n",
+ "    get_memory_info()\n",
+ "\n",
+ "    # Global alignment\n",
+ "    print(\"Running global alignment...\")\n",
+ "    scene = global_aligner(\n",
+ "        output,\n",
+ "        device=device,\n",
+ "        mode=GlobalAlignerMode.PointCloudOptimizer\n",
+ "    )\n",
+ "\n",
+ "    del output\n",
+ "    clear_memory()\n",
+ "\n",
+ "    print(\"Computing global alignment...\")\n",
+ "    loss = scene.compute_global_alignment(\n",
+ "        init=\"mst\",\n",
+ "        niter=50,  # Reduced iterations\n",
+ "        schedule='cosine',\n",
+ "        lr=0.01\n",
+ "    )\n",
+ "\n",
+ "    print(f\"✓ Global alignment complete (final loss: {loss:.6f})\")\n",
+ "    print(\"Final memory state:\")\n",
+ "    get_memory_info()\n",
+ "\n",
+ "    return scene, images\n",
+ "\n",
+ "\n",
+ "\n"
793
+ ],
+ "metadata": {
+ "trusted": true,
+ "id": "OWJEB1oQTKyD",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "outputId": "0334296d-b136-45dc-ad4d-e6cc6e3ce9b8"
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount(\"/content/drive\", force_remount=True).\n",
+ "✓ np: 1.26.4 - /usr/local/lib/python3.12/dist-packages/numpy/__init__.py\n",
+ "Version: 1.26.4\n",
+ "Version 3.1, 31 March 2009\n",
+ " Version 3, 29 June 2007\n",
+ "  5. Conveying Modified Source Versions.\n",
+ "  14. Revised Versions of this License.\n",
+ "✓ roma is installed\n",
+ "✓ MASt3R already exists\n",
+ "✓ DUSt3R already exists\n",
+ "✓ dust3r.model imported successfully\n",
+ "\n",
+ "======================================================================\n",
+ "STEP: Clone Gaussian Splatting\n",
+ "======================================================================\n",
+ "✓ Already exists\n",
+ "Installing /content/gaussian-splatting/submodules/diff-gaussian-rasterization...\n",
+ "Installing /content/gaussian-splatting/submodules/simple-knn...\n",
+ "✓ Custom CUDA modules installed.\n",
+ "✓ np: 1.26.4 - /usr/local/lib/python3.12/dist-packages/numpy/__init__.py\n",
+ "Version: 1.26.4\n",
+ "Version 3.1, 31 March 2009\n",
+ " Version 3, 29 June 2007\n",
+ "  5. Conveying Modified Source Versions.\n",
+ "  14. Revised Versions of this License.\n"
+ ]
+ }
+ ],
+ "execution_count": 7
+ },
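The biplet crop geometry used in CELL 6 (left/right squares for landscape, top/bottom for portrait or square) can be sketched as pure box arithmetic, independent of PIL. `two_crop_boxes` below is a hypothetical helper written for illustration, not part of the notebook; it only computes the `(left, upper, right, lower)` boxes that `generate_two_crops` passes to `img.crop`:

```python
def two_crop_boxes(width, height):
    """Return the two square crop boxes as (left, upper, right, lower),
    mirroring the branch logic of generate_two_crops."""
    crop = min(width, height)
    if width > height:
        # Landscape -> left & right squares
        return {'left': (0, 0, crop, crop),
                'right': (width - crop, 0, width, crop)}
    # Portrait or square -> top & bottom squares
    return {'top': (0, 0, crop, crop),
            'bottom': (0, height - crop, crop, height)}

print(two_crop_boxes(300, 100))
# {'left': (0, 0, 100, 100), 'right': (200, 0, 300, 100)}
```

Note that for a square input both boxes coincide, so the two saved crops are identical; the notebook accepts this rather than special-casing squares.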
837
+ {
+ "cell_type": "code",
+ "source": [
+ "\n",
+ "# =====================================================================\n",
+ "# CELL 11: Camera Parameter Extraction\n",
+ "# =====================================================================\n",
+ "def extract_camera_params_process2(scene, image_paths, conf_threshold=1.5):\n",
+ "    \"\"\"Extract camera parameters and 3D points from the scene\"\"\"\n",
+ "    print(\"\\n=== Extracting Camera Parameters ===\")\n",
+ "\n",
+ "    cameras_dict = {}\n",
+ "    all_pts3d = []\n",
+ "    all_confidence = []\n",
+ "\n",
+ "    try:\n",
+ "        if hasattr(scene, 'get_im_poses'):\n",
+ "            poses = scene.get_im_poses()\n",
+ "        elif hasattr(scene, 'im_poses'):\n",
+ "            poses = scene.im_poses\n",
+ "        else:\n",
+ "            poses = None\n",
+ "\n",
+ "        if hasattr(scene, 'get_focals'):\n",
+ "            focals = scene.get_focals()\n",
+ "        elif hasattr(scene, 'im_focals'):\n",
+ "            focals = scene.im_focals\n",
+ "        else:\n",
+ "            focals = None\n",
+ "\n",
+ "        if hasattr(scene, 'get_principal_points'):\n",
+ "            pps = scene.get_principal_points()\n",
+ "        elif hasattr(scene, 'im_pp'):\n",
+ "            pps = scene.im_pp\n",
+ "        else:\n",
+ "            pps = None\n",
+ "    except Exception as e:\n",
+ "        print(f\"⚠️ Error getting camera parameters: {e}\")\n",
+ "        poses = None\n",
+ "        focals = None\n",
+ "        pps = None\n",
+ "\n",
+ "    n_images = min(len(poses) if poses is not None else len(image_paths), len(image_paths))\n",
+ "\n",
+ "    for idx in range(n_images):\n",
+ "        img_name = os.path.basename(image_paths[idx])\n",
+ "\n",
+ "        try:\n",
+ "            # Get pose\n",
+ "            if poses is not None and idx < len(poses):\n",
+ "                pose = poses[idx]\n",
+ "                if isinstance(pose, torch.Tensor):\n",
+ "                    pose = pose.detach().cpu().numpy()\n",
+ "                if not isinstance(pose, np.ndarray) or pose.shape != (4, 4):\n",
+ "                    pose = np.eye(4)\n",
+ "            else:\n",
+ "                pose = np.eye(4)\n",
+ "\n",
+ "            # Get focal length\n",
+ "            if focals is not None and idx < len(focals):\n",
+ "                focal = focals[idx]\n",
+ "                if isinstance(focal, torch.Tensor):\n",
+ "                    focal = focal.detach().cpu().item()\n",
+ "                else:\n",
+ "                    focal = float(focal)\n",
+ "            else:\n",
+ "                focal = 1000.0\n",
+ "\n",
+ "            # Get principal point\n",
+ "            if pps is not None and idx < len(pps):\n",
+ "                pp = pps[idx]\n",
+ "                if isinstance(pp, torch.Tensor):\n",
+ "                    pp = pp.detach().cpu().numpy()\n",
+ "            else:\n",
+ "                pp = np.array([112.0, 112.0])\n",
+ "\n",
+ "            # Store camera parameters\n",
+ "            cameras_dict[img_name] = {\n",
+ "                'focal': focal,\n",
+ "                'pp': pp,\n",
+ "                'pose': pose,\n",
+ "                'rotation': pose[:3, :3],\n",
+ "                'translation': pose[:3, 3],\n",
+ "                'width': Config.IMAGE_SIZE * 4,\n",
+ "                'height': Config.IMAGE_SIZE * 4\n",
+ "            }\n",
+ "\n",
+ "            # Get 3D points\n",
+ "            if hasattr(scene, 'im_pts3d') and idx < len(scene.im_pts3d):\n",
+ "                pts3d_img = scene.im_pts3d[idx]\n",
+ "            elif hasattr(scene, 'get_pts3d'):\n",
+ "                pts3d_all = scene.get_pts3d()\n",
+ "                if idx < len(pts3d_all):\n",
+ "                    pts3d_img = pts3d_all[idx]\n",
+ "                else:\n",
+ "                    pts3d_img = None\n",
+ "            else:\n",
+ "                pts3d_img = None\n",
+ "\n",
+ "            # Get confidence\n",
+ "            if hasattr(scene, 'im_conf') and idx < len(scene.im_conf):\n",
+ "                conf_img = scene.im_conf[idx]\n",
+ "            elif hasattr(scene, 'get_conf'):\n",
+ "                conf_all = scene.get_conf()\n",
+ "                if idx < len(conf_all):\n",
+ "                    conf_img = conf_all[idx]\n",
+ "                else:\n",
+ "                    conf_img = None\n",
+ "            else:\n",
+ "                conf_img = None\n",
+ "\n",
+ "            # Process 3D points and confidence\n",
+ "            if pts3d_img is not None:\n",
+ "                if isinstance(pts3d_img, torch.Tensor):\n",
+ "                    pts3d_img = pts3d_img.detach().cpu().numpy()\n",
+ "\n",
+ "                if pts3d_img.ndim == 3:\n",
+ "                    pts3d_flat = pts3d_img.reshape(-1, 3)\n",
+ "                else:\n",
+ "                    pts3d_flat = pts3d_img\n",
+ "\n",
+ "                all_pts3d.append(pts3d_flat)\n",
+ "\n",
+ "                # Process confidence\n",
+ "                if conf_img is not None:\n",
+ "                    if isinstance(conf_img, list):\n",
+ "                        conf_img = np.array(conf_img)\n",
+ "                    elif isinstance(conf_img, torch.Tensor):\n",
+ "                        conf_img = conf_img.detach().cpu().numpy()\n",
+ "\n",
+ "                    if conf_img.ndim > 1:\n",
+ "                        conf_flat = conf_img.reshape(-1)\n",
+ "                    else:\n",
+ "                        conf_flat = conf_img\n",
+ "\n",
+ "                    if len(conf_flat) != len(pts3d_flat):\n",
+ "                        conf_flat = np.ones(len(pts3d_flat))\n",
+ "\n",
+ "                    all_confidence.append(conf_flat)\n",
+ "                else:\n",
+ "                    all_confidence.append(np.ones(len(pts3d_flat)))\n",
+ "\n",
+ "        except Exception as e:\n",
+ "            print(f\"⚠️ Error processing image {idx} ({img_name}): {e}\")\n",
+ "            cameras_dict[img_name] = {\n",
+ "                'focal': 1000.0,\n",
+ "                'pp': np.array([112.0, 112.0]),\n",
+ "                'pose': np.eye(4),\n",
+ "                'rotation': np.eye(3),\n",
+ "                'translation': np.zeros(3),\n",
+ "                'width': Config.IMAGE_SIZE * 4,\n",
+ "                'height': Config.IMAGE_SIZE * 4\n",
+ "            }\n",
+ "            continue\n",
+ "\n",
+ "    # Concatenate all 3D points\n",
+ "    if all_pts3d:\n",
+ "        pts3d = np.vstack(all_pts3d)\n",
+ "        confidence = np.concatenate(all_confidence)\n",
+ "    else:\n",
+ "        pts3d = np.zeros((0, 3))\n",
+ "        confidence = np.zeros(0)\n",
+ "\n",
+ "    print(f\"✓ Extracted camera parameters for {len(cameras_dict)} images\")\n",
+ "    print(f\"✓ Total 3D points: {len(pts3d)}\")\n",
+ "\n",
+ "    # Filter by confidence\n",
+ "    if len(confidence) > 0:\n",
+ "        valid_mask = confidence > conf_threshold\n",
+ "        pts3d = pts3d[valid_mask]\n",
+ "        confidence = confidence[valid_mask]\n",
+ "        print(f\"✓ After confidence filtering (>{conf_threshold}): {len(pts3d)} points\")\n",
+ "\n",
+ "    return cameras_dict, pts3d, confidence\n",
+ "\n"
+ ],
+ "metadata": {
+ "id": "YSt2RDqmviUa"
+ },
+ "execution_count": 8,
+ "outputs": []
+ },
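The confidence filtering at the end of `extract_camera_params_process2` is a plain threshold mask over parallel point/confidence arrays. The same idea in a minimal, dependency-free sketch (the point coordinates and confidence values below are made up for illustration; the notebook does this with a numpy boolean mask instead of a list comprehension):

```python
# Synthetic 3D points paired with per-point confidences
pts3d = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0), (2.0, 2.0, 2.0)]
confidence = [0.8, 1.2, 2.0]
conf_threshold = 1.5  # same default as extract_camera_params_process2

# Keep only points whose confidence exceeds the threshold,
# mirroring `valid_mask = confidence > conf_threshold` above
kept = [(p, c) for p, c in zip(pts3d, confidence) if c > conf_threshold]
pts3d = [p for p, _ in kept]
confidence = [c for _, c in kept]
print(len(pts3d))  # 1
```

Note the pipeline's `main_pipeline` passes a much looser `conf_threshold=1.001`, so in practice almost all points with above-baseline confidence survive.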
1019
+ {
+ "cell_type": "code",
+ "source": [
+ "# =====================================================================\n",
+ "# CELL 12: COLMAP Export Functions\n",
+ "# =====================================================================\n",
+ "\n",
+ "import struct\n",
+ "import numpy as np\n",
+ "from pathlib import Path\n",
+ "\n",
+ "def rotmat_to_qvec(R):\n",
+ "    \"\"\"\n",
+ "    Convert a rotation matrix to a quaternion\n",
+ "\n",
+ "    Args:\n",
+ "        R: 3x3 rotation matrix\n",
+ "\n",
+ "    Returns:\n",
+ "        qvec: quaternion as [qw, qx, qy, qz]\n",
+ "    \"\"\"\n",
+ "    # Shepperd's method\n",
+ "    trace = np.trace(R)\n",
+ "\n",
+ "    if trace > 0:\n",
+ "        s = 0.5 / np.sqrt(trace + 1.0)\n",
+ "        w = 0.25 / s\n",
+ "        x = (R[2, 1] - R[1, 2]) * s\n",
+ "        y = (R[0, 2] - R[2, 0]) * s\n",
+ "        z = (R[1, 0] - R[0, 1]) * s\n",
+ "    elif R[0, 0] > R[1, 1] and R[0, 0] > R[2, 2]:\n",
+ "        s = 2.0 * np.sqrt(1.0 + R[0, 0] - R[1, 1] - R[2, 2])\n",
+ "        w = (R[2, 1] - R[1, 2]) / s\n",
+ "        x = 0.25 * s\n",
+ "        y = (R[0, 1] + R[1, 0]) / s\n",
+ "        z = (R[0, 2] + R[2, 0]) / s\n",
+ "    elif R[1, 1] > R[2, 2]:\n",
+ "        s = 2.0 * np.sqrt(1.0 + R[1, 1] - R[0, 0] - R[2, 2])\n",
+ "        w = (R[0, 2] - R[2, 0]) / s\n",
+ "        x = (R[0, 1] + R[1, 0]) / s\n",
+ "        y = 0.25 * s\n",
+ "        z = (R[1, 2] + R[2, 1]) / s\n",
+ "    else:\n",
+ "        s = 2.0 * np.sqrt(1.0 + R[2, 2] - R[0, 0] - R[1, 1])\n",
+ "        w = (R[1, 0] - R[0, 1]) / s\n",
+ "        x = (R[0, 2] + R[2, 0]) / s\n",
+ "        y = (R[1, 2] + R[2, 1]) / s\n",
+ "        z = 0.25 * s\n",
+ "\n",
+ "    return np.array([w, x, y, z])\n",
+ "\n",
+ "\n",
+ "def write_cameras_binary(cameras_dict, image_size, output_file):\n",
+ "    \"\"\"\n",
+ "    Write COLMAP cameras.bin\n",
+ "\n",
+ "    Binary format:\n",
+ "    - num_cameras (uint64)\n",
+ "    - For each camera:\n",
+ "        - camera_id (uint32)\n",
+ "        - model_id (int32)  # SIMPLE_PINHOLE = 0\n",
+ "        - width (uint64)\n",
+ "        - height (uint64)\n",
+ "        - params (double[])  # focal, cx, cy\n",
+ "\n",
+ "    Args:\n",
+ "        cameras_dict: dict of camera parameters\n",
+ "        image_size: (width, height) image size\n",
+ "        output_file: output file path\n",
+ "    \"\"\"\n",
+ "    width, height = image_size\n",
+ "    num_cameras = len(cameras_dict)\n",
+ "\n",
+ "    # COLMAP camera models\n",
+ "    SIMPLE_PINHOLE = 0\n",
+ "\n",
+ "    with open(output_file, 'wb') as f:\n",
+ "        # Number of cameras\n",
+ "        f.write(struct.pack('Q', num_cameras))\n",
+ "\n",
+ "        # Per-camera records\n",
+ "        for camera_id, (img_id, cam_params) in enumerate(cameras_dict.items(), start=1):\n",
+ "            focal = cam_params['focal']\n",
+ "            cx = width / 2.0\n",
+ "            cy = height / 2.0\n",
+ "\n",
+ "            # camera_id\n",
+ "            f.write(struct.pack('I', camera_id))\n",
+ "            # model_id (SIMPLE_PINHOLE)\n",
+ "            f.write(struct.pack('i', SIMPLE_PINHOLE))\n",
+ "            # width\n",
+ "            f.write(struct.pack('Q', width))\n",
+ "            # height\n",
+ "            f.write(struct.pack('Q', height))\n",
+ "            # params: focal, cx, cy\n",
+ "            f.write(struct.pack('d', focal))\n",
+ "            f.write(struct.pack('d', cx))\n",
+ "            f.write(struct.pack('d', cy))\n",
+ "\n",
+ "    print(f\"COLMAP cameras.bin saved to {output_file}\")\n",
+ "\n",
+ "\n",
+ "def write_images_binary(cameras_dict, output_file):\n",
+ "    \"\"\"\n",
+ "    Write COLMAP images.bin\n",
+ "\n",
+ "    Binary format:\n",
+ "    - num_images (uint64)\n",
+ "    - For each image:\n",
+ "        - image_id (uint32)\n",
+ "        - qvec (double[4])  # qw, qx, qy, qz\n",
+ "        - tvec (double[3])  # tx, ty, tz\n",
+ "        - camera_id (uint32)\n",
+ "        - name (string with null terminator)\n",
+ "        - num_points2D (uint64)\n",
+ "        - points2D (x, y, point3D_id) * num_points2D\n",
+ "\n",
+ "    Args:\n",
+ "        cameras_dict: dict of camera parameters\n",
+ "        output_file: output file path\n",
+ "    \"\"\"\n",
+ "    num_images = len(cameras_dict)\n",
+ "\n",
+ "    with open(output_file, 'wb') as f:\n",
+ "        # Number of images\n",
+ "        f.write(struct.pack('Q', num_images))\n",
+ "\n",
+ "        # Per-image records\n",
+ "        for image_id, (img_id, cam_params) in enumerate(cameras_dict.items(), start=1):\n",
+ "            # Convert the rotation matrix to a quaternion (using our own helper)\n",
+ "            R = cam_params['rotation']\n",
+ "            quat = rotmat_to_qvec(R)  # [qw, qx, qy, qz]\n",
+ "\n",
+ "            # Translation vector\n",
+ "            t = cam_params['translation']\n",
+ "\n",
+ "            # Camera ID is the same as the image ID\n",
+ "            camera_id = image_id\n",
+ "\n",
+ "            # image_id\n",
+ "            f.write(struct.pack('I', image_id))\n",
+ "\n",
+ "            # qvec (qw, qx, qy, qz)\n",
+ "            for q in quat:\n",
+ "                f.write(struct.pack('d', q))\n",
+ "\n",
+ "            # tvec (tx, ty, tz)\n",
+ "            for ti in t:\n",
+ "                f.write(struct.pack('d', ti))\n",
+ "\n",
+ "            # camera_id\n",
+ "            f.write(struct.pack('I', camera_id))\n",
+ "\n",
+ "            # name (null-terminated string)\n",
+ "            name_bytes = img_id.encode('utf-8') + b'\\x00'\n",
+ "            f.write(name_bytes)\n",
+ "\n",
+ "            # num_points2D (0 for now)\n",
+ "            f.write(struct.pack('Q', 0))\n",
+ "\n",
+ "    print(f\"COLMAP images.bin saved to {output_file}\")\n",
+ "\n",
+ "\n",
+ "def write_points3D_binary(pts3d, confidence, output_file):\n",
+ "    \"\"\"\n",
+ "    Write COLMAP points3D.bin\n",
+ "\n",
+ "    Binary format:\n",
+ "    - num_points (uint64)\n",
+ "    - For each point:\n",
+ "        - point3D_id (uint64)\n",
+ "        - xyz (double[3])\n",
+ "        - rgb (uint8[3])\n",
+ "        - error (double)\n",
+ "        - track_length (uint64)\n",
+ "        - track (image_id, point2D_idx) * track_length\n",
+ "\n",
+ "    Args:\n",
+ "        pts3d: array of 3D points [N, 3]\n",
+ "        confidence: array of confidences [N]\n",
+ "        output_file: output file path\n",
+ "    \"\"\"\n",
+ "    num_points = len(pts3d)\n",
+ "\n",
+ "    with open(output_file, 'wb') as f:\n",
+ "        # Number of points\n",
+ "        f.write(struct.pack('Q', num_points))\n",
+ "\n",
+ "        # Per-point records\n",
+ "        for point_id, pt in enumerate(pts3d, start=1):\n",
+ "            x, y, z = pt\n",
+ "\n",
+ "            # point3D_id\n",
+ "            f.write(struct.pack('Q', point_id))\n",
+ "\n",
+ "            # xyz\n",
+ "            f.write(struct.pack('d', x))\n",
+ "            f.write(struct.pack('d', y))\n",
+ "            f.write(struct.pack('d', z))\n",
+ "\n",
+ "            # rgb (default gray)\n",
+ "            f.write(struct.pack('B', 128))\n",
+ "            f.write(struct.pack('B', 128))\n",
+ "            f.write(struct.pack('B', 128))\n",
+ "\n",
+ "            # error (inverse of confidence, clamped to avoid division by zero)\n",
+ "            if confidence is not None and point_id <= len(confidence):\n",
+ "                error = 1.0 / max(confidence[point_id-1], 0.001)\n",
+ "            else:\n",
+ "                error = 1.0\n",
+ "            f.write(struct.pack('d', error))\n",
+ "\n",
+ "            # track_length (0 for now)\n",
+ "            f.write(struct.pack('Q', 0))\n",
+ "\n",
+ "    print(f\"COLMAP points3D.bin saved to {output_file}\")\n",
+ "\n",
+ "\n",
+ "def export_colmap_binary(cameras_dict, pts3d, confidence, image_size, output_dir):\n",
+ "    \"\"\"\n",
+ "    Write the COLMAP binary files (cameras.bin, images.bin, points3D.bin)\n",
+ "\n",
+ "    Args:\n",
+ "        cameras_dict: dict of camera parameters\n",
+ "        pts3d: array of 3D points [N, 3]\n",
+ "        confidence: array of confidences [N]\n",
+ "        image_size: (width, height) image size\n",
+ "        output_dir: output directory path\n",
+ "    \"\"\"\n",
+ "    output_path = Path(output_dir)\n",
+ "    output_path.mkdir(parents=True, exist_ok=True)\n",
+ "\n",
+ "    # cameras.bin\n",
+ "    write_cameras_binary(\n",
+ "        cameras_dict,\n",
+ "        image_size,\n",
+ "        output_path / 'cameras.bin'\n",
+ "    )\n",
+ "\n",
+ "    # images.bin\n",
+ "    write_images_binary(\n",
+ "        cameras_dict,\n",
+ "        output_path / 'images.bin'\n",
+ "    )\n",
+ "\n",
+ "    # points3D.bin\n",
+ "    write_points3D_binary(\n",
+ "        pts3d,\n",
+ "        confidence,\n",
+ "        output_path / 'points3D.bin'\n",
+ "    )\n",
+ "\n",
+ "    print(f\"\\nCOLMAP binary files exported to {output_dir}/\")\n",
+ "    print(f\"  - cameras.bin: {len(cameras_dict)} cameras\")\n",
+ "    print(f\"  - images.bin: {len(cameras_dict)} images\")\n",
+ "    print(f\"  - points3D.bin: {len(pts3d)} points\")"
+ ],
+ "metadata": {
+ "id": "jNk5C0k1zkLD"
+ },
+ "execution_count": 9,
+ "outputs": []
+ },
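The per-camera record layout written by `write_cameras_binary` can be checked with a `struct` round-trip. The combined format string below is an assumption that matches the cell's sequence of per-field packs (`'I'`, `'i'`, `'Q'`, `'Q'`, three `'d'`s), made explicit as little-endian, which is the byte order COLMAP uses on common platforms:

```python
import struct

# One SIMPLE_PINHOLE record: camera_id, model_id, width, height, f, cx, cy.
# '<IiQQddd' concatenates the same fields write_cameras_binary packs one by one.
record = struct.pack('<IiQQddd', 1, 0, 640, 480, 500.0, 320.0, 240.0)
fields = struct.unpack('<IiQQddd', record)
print(fields)       # (1, 0, 640, 480, 500.0, 320.0, 240.0)
print(len(record))  # 48 bytes per camera record
```

A round-trip like this is a cheap sanity check before pointing COLMAP or the Gaussian Splatting loader at the generated `cameras.bin`.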
1282
+ {
+ "cell_type": "code",
+ "source": [
+ "# =====================================================================\n",
+ "# CELL 13: Gaussian Splatting Runner\n",
+ "# =====================================================================\n",
+ "def run_gaussian_splatting(source_dir, output_dir, iterations=30000):\n",
+ "    \"\"\"Run Gaussian Splatting\"\"\"\n",
+ "    print(\"\\n=== Running Gaussian Splatting ===\")\n",
+ "\n",
+ "    os.makedirs(output_dir, exist_ok=True)\n",
+ "\n",
+ "    cmd = [\n",
+ "        \"python\", \"/content/gaussian-splatting/train.py\",\n",
+ "        \"-s\", source_dir,\n",
+ "        \"-m\", output_dir,\n",
+ "        \"--iterations\", str(iterations),\n",
+ "        \"--eval\"\n",
+ "    ]\n",
+ "\n",
+ "    print(f\"Command: {' '.join(cmd)}\")\n",
+ "    print(f\"  Source: {source_dir}\")\n",
+ "    print(f\"  Output: {output_dir}\")\n",
+ "\n",
+ "    result = subprocess.run(cmd, capture_output=False, text=True)\n",
+ "\n",
+ "    if result.returncode == 0:\n",
+ "        print(f\"\\n✓ Gaussian Splatting complete\")\n",
+ "\n",
+ "        point_cloud_dir = os.path.join(output_dir, \"point_cloud\")\n",
+ "        if os.path.exists(point_cloud_dir):\n",
+ "            print(f\"\\n✓ Point cloud directory found: {point_cloud_dir}\")\n",
+ "\n",
+ "            for item in sorted(os.listdir(point_cloud_dir)):\n",
+ "                item_path = os.path.join(point_cloud_dir, item)\n",
+ "                if os.path.isdir(item_path) and item.startswith(\"iteration_\"):\n",
+ "                    ply_file = os.path.join(item_path, \"point_cloud.ply\")\n",
+ "                    if os.path.exists(ply_file):\n",
+ "                        file_size = os.path.getsize(ply_file) / (1024 * 1024)\n",
+ "                        print(f\"  ✓ {item}/point_cloud.ply ({file_size:.2f} MB)\")\n",
+ "    else:\n",
+ "        print(f\"\\n✗ Gaussian Splatting failed with return code {result.returncode}\")\n",
+ "\n",
+ "    return output_dir"
+ ],
+ "metadata": {
+ "id": "o0n2RL3Ep5_Y"
+ },
+ "execution_count": 10,
+ "outputs": []
+ },
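`run_gaussian_splatting` shells out to `train.py` and branches on the child's exit status. The same pattern in isolation, with a trivial command standing in for the real trainer so it runs anywhere:

```python
import subprocess
import sys

# Run a child process and inspect its exit status, as run_gaussian_splatting
# does with train.py (a trivial command stands in for the trainer here).
cmd = [sys.executable, '-c', 'print("ok")']
result = subprocess.run(cmd, capture_output=True, text=True)

if result.returncode == 0:
    print('success:', result.stdout.strip())  # success: ok
else:
    print('failed with return code', result.returncode)
```

Note the notebook deliberately passes `capture_output=False` so the trainer's progress output streams straight to the Colab cell; capture it (as above) only when you want to inspect stdout programmatically.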
1333
+ {
+ "cell_type": "code",
+ "source": [
+ "# =====================================================================\n",
+ "# CELL 14: Main Pipeline\n",
+ "# =====================================================================\n",
+ "def main_pipeline(image_dir, output_dir, square_size=1024, iterations=30000,\n",
+ "                  max_images=200, max_pairs=100, max_points=500000,\n",
+ "                  conf_threshold=1.001, preprocess_mode='none'):\n",
+ "    \"\"\"Main pipeline (DINO + CELL 11/12 version)\"\"\"\n",
+ "\n",
+ "    # STEP 0: Image Preprocessing\n",
+ "    if preprocess_mode == 'biplet':\n",
+ "        print(\"=\"*70)\n",
+ "        print(\"STEP 0: Image Preprocessing (Biplet Crops)\")\n",
+ "        print(\"=\"*70)\n",
+ "\n",
+ "        temp_biplet_dir = os.path.join(output_dir, \"temp_biplet\")\n",
+ "        biplet_dir = normalize_image_sizes_biplet(image_dir, temp_biplet_dir, size=square_size)\n",
+ "\n",
+ "        images_dir = os.path.join(output_dir, \"images\")\n",
+ "        os.makedirs(images_dir, exist_ok=True)\n",
+ "\n",
+ "        biplet_suffixes = ['_left', '_right', '_top', '_bottom']\n",
+ "        copied_count = 0\n",
+ "\n",
+ "        for img_file in os.listdir(temp_biplet_dir):\n",
+ "            if any(suffix in img_file for suffix in biplet_suffixes):\n",
+ "                src = os.path.join(temp_biplet_dir, img_file)\n",
+ "                dst = os.path.join(images_dir, img_file)\n",
+ "                shutil.copy2(src, dst)\n",
+ "                copied_count += 1\n",
+ "\n",
+ "        print(f\"✓ Copied {copied_count} biplet images to {images_dir}\")\n",
+ "\n",
+ "        original_images_dir = os.path.join(output_dir, \"original_images\")\n",
+ "        os.makedirs(original_images_dir, exist_ok=True)\n",
+ "\n",
+ "        original_count = 0\n",
+ "        valid_extensions = ('.jpg', '.jpeg', '.png', '.bmp')\n",
+ "        for img_file in os.listdir(image_dir):\n",
+ "            if img_file.lower().endswith(valid_extensions):\n",
+ "                src = os.path.join(image_dir, img_file)\n",
+ "                dst = os.path.join(original_images_dir, img_file)\n",
+ "                shutil.copy2(src, dst)\n",
+ "                original_count += 1\n",
+ "\n",
+ "        print(f\"✓ Saved {original_count} original images to {original_images_dir}\")\n",
+ "        shutil.rmtree(temp_biplet_dir)\n",
+ "        image_dir = images_dir\n",
+ "        clear_memory()\n",
+ "    else:\n",
+ "        images_dir = os.path.join(output_dir, \"images\")\n",
+ "        if not os.path.exists(images_dir):\n",
+ "            print(\"=\"*70)\n",
+ "            print(\"STEP 0: Copying images to output directory\")\n",
+ "            print(\"=\"*70)\n",
+ "            shutil.copytree(image_dir, images_dir)\n",
+ "            print(f\"✓ Copied images to {images_dir}\")\n",
+ "        image_dir = images_dir\n",
+ "\n",
+ "    # STEP 1: Loading Images\n",
+ "    print(\"\\n\" + \"=\"*70)\n",
+ "    print(\"STEP 1: Loading and Preparing Images\")\n",
+ "    print(\"=\"*70)\n",
+ "\n",
+ "    image_paths = load_images_from_directory(image_dir, max_images=max_images)\n",
+ "    print(f\"Loaded {len(image_paths)} images\")\n",
+ "    clear_memory()\n",
+ "\n",
1403
+ " # STEP 2: Image Pair Selection (DINO)\n",
1404
+ " print(\"\\n\" + \"=\"*70)\n",
1405
+ " print(\"STEP 2: Image Pair Selection (DINO)\")\n",
1406
+ " print(\"=\"*70)\n",
1407
+ "\n",
1408
+ " max_pairs = min(max_pairs, 50)\n",
1409
+ " pairs = get_image_pairs_dino(image_paths, max_pairs=max_pairs)\n",
1410
+ " print(f\"Selected {len(pairs)} image pairs\")\n",
1411
+ " clear_memory()\n",
1412
+ "\n",
1413
+ " # STEP 3: MASt3R 3D Reconstruction\n",
1414
+ " print(\"\\n\" + \"=\"*70)\n",
1415
+ " print(\"STEP 3: MASt3R 3D Reconstruction\")\n",
1416
+ " print(\"=\"*70)\n",
1417
+ "\n",
1418
+ " device = Config.DEVICE\n",
1419
+ " model = load_mast3r_model(device)\n",
1420
+ " scene, mast3r_images = run_mast3r_pairs(model, image_paths, pairs, device)\n",
1421
+ "\n",
1422
+ " del model\n",
1423
+ " clear_memory()\n",
1424
+ "\n",
1425
+ "\n",
1426
+ "\n",
1427
+ " # STEP 4: Converting to COLMAP (CELL 11/12使用)\n",
1428
+ " print(\"\\n\" + \"=\"*70)\n",
1429
+ " print(\"STEP 4: Converting to COLMAP (PINHOLE)\")\n",
1430
+ " print(\"=\"*70)\n",
1431
+ "\n",
1432
+ " # 画像ファイル名のリストを作成\n",
1433
+ " image_names = [os.path.basename(p) for p in image_paths]\n",
1434
+ "\n",
1435
+ " # CELL 11: カメラパラメータの抽出(修正版関数を使用)\n",
1436
+ " cameras_dict, pts3d, confidence = extract_camera_params_process2(\n",
1437
+ " scene=scene,\n",
1438
+ " image_paths=image_paths,\n",
1439
+ " conf_threshold=conf_threshold\n",
1440
+ " )\n",
1441
+ "\n",
1442
+ " print(f\"Extracted {len(cameras_dict)} cameras with conf >= {conf_threshold}\")\n",
1443
+ "\n",
1444
+ " # 画像サイズを取得(最初の画像から)\n",
1445
+ " from PIL import Image\n",
1446
+ " first_img = Image.open(image_paths[0])\n",
1447
+ " image_size = (first_img.width, first_img.height)\n",
1448
+ " first_img.close()\n",
1449
+ "\n",
1450
+ " # COLMAP出力ディレクトリ\n",
1451
+ " colmap_dir = os.path.join(output_dir, \"sparse/0\")\n",
1452
+ " os.makedirs(colmap_dir, exist_ok=True)\n",
1453
+ "\n",
1454
+ " # CELL 12: COLMAPバイナリ形式でエクスポート(修正版関数を使用)\n",
1455
+ " export_colmap_binary(\n",
1456
+ " cameras_dict=cameras_dict,\n",
1457
+ " pts3d=pts3d,\n",
1458
+ " confidence=confidence,\n",
1459
+ " image_size=image_size,\n",
1460
+ " output_dir=colmap_dir\n",
1461
+ " )\n",
1462
+ "\n",
1463
+ " del scene\n",
1464
+ " clear_memory()\n",
1465
+ "\n",
1466
+ "\n",
1467
+ "\n",
1468
+ " # STEP 5: Running Gaussian Splatting\n",
1469
+ " print(\"\\n\" + \"=\"*70)\n",
1470
+ " print(\"STEP 5: Running Gaussian Splatting\")\n",
1471
+ " print(\"=\"*70)\n",
1472
+ "\n",
1473
+ " source_dir = output_dir\n",
1474
+ " model_output_dir = os.path.join(output_dir, \"gaussian_splatting\")\n",
1475
+ "\n",
1476
+ " gs_output = run_gaussian_splatting(\n",
1477
+ " source_dir=source_dir,\n",
1478
+ " output_dir=model_output_dir,\n",
1479
+ " iterations=iterations\n",
1480
+ " )\n",
1481
+ "\n",
1482
+ " # STEP 6: Verify Output\n",
1483
+ " print(\"\\n\" + \"=\"*70)\n",
1484
+ " print(\"PIPELINE COMPLETE\")\n",
1485
+ " print(\"=\"*70)\n",
1486
+ "\n",
1487
+ " ply_path = os.path.join(\n",
1488
+ " model_output_dir,\n",
1489
+ " \"point_cloud\",\n",
1490
+ " f\"iteration_{iterations}\",\n",
1491
+ " \"point_cloud.ply\"\n",
1492
+ " )\n",
1493
+ "\n",
1494
+ " if os.path.exists(ply_path):\n",
1495
+ " file_size = os.path.getsize(ply_path) / (1024 * 1024)\n",
1496
+ " print(f\"✓ Point cloud generated: {ply_path}\")\n",
1497
+ " print(f\" Size: {file_size:.2f} MB\")\n",
1498
+ " else:\n",
1499
+ " print(f\"⚠️ Point cloud not found at: {ply_path}\")\n",
1500
+ "\n",
1501
+ " print(f\"\\nOutput directory structure:\")\n",
1502
+ " print(f\" {output_dir}/\")\n",
1503
+ " print(f\" ├── images/ (processed images)\")\n",
1504
+ " if preprocess_mode == 'biplet':\n",
1505
+ " print(f\" ├── original_images/ (original source images)\")\n",
1506
+ " print(f\" ├── sparse/0/ (COLMAP data)\")\n",
1507
+ " print(f\" │ ├── cameras.bin\")\n",
1508
+ " print(f\" │ ├── images.bin\")\n",
1509
+ " print(f\" │ └── points3D.bin\")\n",
1510
+ " print(f\" └── gaussian_splatting/ (GS output)\")\n",
1511
+ "\n",
1512
+ " return gs_output"
1513
+ ],
+ "metadata": {
+ "trusted": true,
+ "id": "U7Lk41hLTKyF"
+ },
+ "outputs": [],
+ "execution_count": 11
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "# =====================================================================\n",
+ "# CELL 15: Run Pipeline\n",
+ "# =====================================================================\n",
+ "if __name__ == \"__main__\":\n",
1528
+ " IMAGE_DIR = \"/content/drive/MyDrive/your_folder/fountain\"\n",
1529
+ " OUTPUT_DIR = \"/content/output\"\n",
1530
+ "\n",
1531
+ "\n",
1532
+ " gs_output = main_pipeline(\n",
1533
+ " image_dir=IMAGE_DIR,\n",
1534
+ " output_dir=OUTPUT_DIR,\n",
1535
+ " square_size=512,\n",
1536
+ " iterations=1000,\n",
1537
+ " max_images=30,\n",
1538
+ " max_pairs=300,\n",
1539
+ " max_points=1000000,\n",
1540
+ " conf_threshold=0.5,\n",
1541
+ " preprocess_mode='biplet' # or 'none'\n",
1542
+ " )\n",
1543
+ "\n",
1544
+ " print(\"\\n\" + \"=\"*70)\n",
1545
+ " print(\"PIPELINE COMPLETE\")\n",
1546
+ " print(\"=\"*70)\n",
1547
+ " print(f\"Output directory: {gs_output}\")"
1548
+ ],
+ "metadata": {
+ "trusted": true,
+ "id": "_-8kDLieTKyG",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "outputId": "393d6ec3-9e40-4b17-a4bb-5cc8ab67b737"
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "======================================================================\n",
+ "STEP 0: Image Preprocessing (Biplet Crops)\n",
+ "======================================================================\n",
+ "\n",
+ "=== Generating Biplet Crops (512x512) ===\n"
+ ]
+ },
+ {
+ "output_type": "stream",
+ "name": "stderr",
+ "text": [
+ "Creating biplets: 100%|██████████| 30/30 [00:02<00:00, 11.18it/s]\n"
+ ]
+ },
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "\n",
+ "✓ Biplet generation complete:\n",
+ "  Source images: 30\n",
+ "  Biplet crops generated: 60\n",
+ "  Original size distribution: {'1440x1920': 30}\n",
+ "✓ Copied 60 biplet images to /content/output/images\n",
+ "✓ Saved 30 original images to /content/output/original_images\n",
+ "\n",
+ "======================================================================\n",
+ "STEP 1: Loading and Preparing Images\n",
+ "======================================================================\n",
+ "\n",
+ "Loading images from: /content/output/images\n",
+ "⚠️ Limiting from 60 to 30 images\n",
+ "✓ Found 30 images\n",
+ "Loaded 30 images\n",
+ "\n",
+ "======================================================================\n",
+ "STEP 2: Image Pair Selection (DINO)\n",
+ "======================================================================\n",
+ "\n",
+ "=== Extracting DINO Global Features ===\n",
+ "Initial memory state:\n",
+ "GPU Memory - Allocated: 0.16GB, Reserved: 0.23GB\n",
+ "CPU Memory Usage: 42.7%\n"
+ ]
+ },
+ {
+ "output_type": "stream",
+ "name": "stderr",
+ "text": [
+ "/usr/local/lib/python3.12/dist-packages/huggingface_hub/file_download.py:942: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.\n",
+ "  warnings.warn(\n",
+ "DINO extraction: 100%|██████████| 8/8 [00:04<00:00, 1.82it/s]\n"
+ ]
+ },
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "After DINO extraction:\n",
+ "GPU Memory - Allocated: 0.17GB, Reserved: 0.25GB\n",
+ "CPU Memory Usage: 40.2%\n",
+ "Initial pairs from DINO: 304\n",
+ "Selecting 50 diverse pairs from 304 candidates...\n",
+ "Selected pairs cover 30 / 30 images (100.0%)\n",
+ "Selected 50 image pairs\n",
+ "\n",
+ "======================================================================\n",
+ "STEP 3: MASt3R 3D Reconstruction\n",
+ "======================================================================\n",
+ "\n",
+ "=== Loading MASt3R Model ===\n",
+ "Attempting to load: naver/MASt3R_ViTLarge_BaseDecoder_512_catmlpdpt_metric\n",
+ "⚠️ Failed to load MASt3R: tried to load naver/MASt3R_ViTLarge_BaseDecoder_512_catmlpdpt_metric from huggingface, but failed\n",
+ "Trying DUSt3R instead: naver/DUSt3R_ViTLarge_BaseDecoder_512_dpt\n",
+ "✓ Loaded DUSt3R model as fallback\n",
+ "✓ Model loaded on cuda\n",
+ "\n",
+ "=== Running MASt3R Reconstruction ===\n",
+ "Initial memory state:\n",
+ "GPU Memory - Allocated: 2.29GB, Reserved: 2.31GB\n",
+ "CPU Memory Usage: 42.7%\n",
+ "Processing 50 pairs...\n",
+ "Loading 30 images at 224x224...\n",
+ ">> Loading a list of 30 images\n",
+ " - adding /content/output/images/image_001_bottom.jpeg with resolution 512x512 --> 224x224\n",
+ " - adding /content/output/images/image_001_top.jpeg with resolution 512x512 --> 224x224\n",
+ " - adding /content/output/images/image_002_bottom.jpeg with resolution 512x512 --> 224x224\n",
+ " - adding /content/output/images/image_002_top.jpeg with resolution 512x512 --> 224x224\n",
+ " - adding /content/output/images/image_003_bottom.jpeg with resolution 512x512 --> 224x224\n",
+ " - adding /content/output/images/image_003_top.jpeg with resolution 512x512 --> 224x224\n",
+ " - adding /content/output/images/image_004_bottom.jpeg with resolution 512x512 --> 224x224\n",
+ " - adding /content/output/images/image_004_top.jpeg with resolution 512x512 --> 224x224\n",
+ " - adding /content/output/images/image_005_bottom.jpeg with resolution 512x512 --> 224x224\n",
+ " - adding /content/output/images/image_005_top.jpeg with resolution 512x512 --> 224x224\n",
+ " - adding /content/output/images/image_006_bottom.jpeg with resolution 512x512 --> 224x224\n",
+ " - adding /content/output/images/image_006_top.jpeg with resolution 512x512 --> 224x224\n",
+ " - adding /content/output/images/image_007_bottom.jpeg with resolution 512x512 --> 224x224\n",
+ " - adding /content/output/images/image_007_top.jpeg with resolution 512x512 --> 224x224\n",
+ " - adding /content/output/images/image_008_bottom.jpeg with resolution 512x512 --> 224x224\n",
+ " - adding /content/output/images/image_008_top.jpeg with resolution 512x512 --> 224x224\n",
+ " - adding /content/output/images/image_009_bottom.jpeg with resolution 512x512 --> 224x224\n",
+ " - adding /content/output/images/image_009_top.jpeg with resolution 512x512 --> 224x224\n",
+ " - adding /content/output/images/image_010_bottom.jpeg with resolution 512x512 --> 224x224\n",
+ " - adding /content/output/images/image_010_top.jpeg with resolution 512x512 --> 224x224\n",
+ " - adding /content/output/images/image_011_bottom.jpeg with resolution 512x512 --> 224x224\n",
+ " - adding /content/output/images/image_011_top.jpeg with resolution 512x512 --> 224x224\n",
+ " - adding /content/output/images/image_012_bottom.jpeg with resolution 512x512 --> 224x224\n",
+ " - adding /content/output/images/image_012_top.jpeg with resolution 512x512 --> 224x224\n",
+ " - adding /content/output/images/image_013_bottom.jpeg with resolution 512x512 --> 224x224\n",
+ " - adding /content/output/images/image_013_top.jpeg with resolution 512x512 --> 224x224\n",
+ " - adding /content/output/images/image_014_bottom.jpeg with resolution 512x512 --> 224x224\n",
+ " - adding /content/output/images/image_014_top.jpeg with resolution 512x512 --> 224x224\n",
+ " - adding /content/output/images/image_015_bottom.jpeg with resolution 512x512 --> 224x224\n",
+ " - adding /content/output/images/image_015_top.jpeg with resolution 512x512 --> 224x224\n",
+ " (Found 30 images)\n",
+ "Loaded 30 images\n",
+ "After loading images:\n",
+ "GPU Memory - Allocated: 2.29GB, Reserved: 2.31GB\n",
+ "CPU Memory Usage: 42.7%\n",
+ "Creating 50 image pairs...\n"
+ ]
+ },
+ {
+ "output_type": "stream",
+ "name": "stderr",
+ "text": [
+ "Preparing pairs: 100%|██████████| 50/50 [00:00<00:00, 472331.53it/s]\n"
+ ]
+ },
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Running MASt3R inference on 50 pairs...\n",
+ ">> Inference with model on 50 image pairs\n"
+ ]
+ },
+ {
+ "output_type": "stream",
+ "name": "stderr",
+ "text": [
+ "\r 0%| | 0/50 [00:00<?, ?it/s]/content/mast3r/dust3r/dust3r/inference.py:44: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.\n",
+ "  with torch.cuda.amp.autocast(enabled=bool(use_amp)):\n",
+ "/content/mast3r/dust3r/dust3r/model.py:206: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.\n",
+ "  with torch.cuda.amp.autocast(enabled=False):\n",
+ "/content/mast3r/dust3r/dust3r/inference.py:48: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.\n",
+ "  with torch.cuda.amp.autocast(enabled=False):\n",
+ "100%|██████████| 50/50 [00:11<00:00, 4.45it/s]\n"
+ ]
+ },
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "✓ MASt3R inference complete\n",
+ "After inference:\n",
+ "GPU Memory - Allocated: 2.29GB, Reserved: 2.31GB\n",
+ "CPU Memory Usage: 42.5%\n",
+ "Running global alignment...\n",
+ "Computing global alignment...\n",
+ " init edge (0*,16*) score=42.95501708984375\n",
+ " init edge (16,26*) score=24.80443572998047\n",
+ " init edge (12*,26) score=23.980571746826172\n",
+ " init edge (10*,16) score=18.896928787231445\n",
+ " init edge (12,14*) score=16.737760543823242\n",
+ " init edge (12,18*) score=15.57262897491455\n",
+ " init edge (2*,16) score=15.040303230285645\n",
+ " init edge (9*,16) score=14.898500442504883\n",
+ " init edge (10,22*) score=21.800180435180664\n",
+ " init edge (6*,18) score=17.21731185913086\n",
+ " init edge (4*,18) score=16.68398094177246\n",
+ " init edge (8*,18) score=16.66911506652832\n",
+ " init edge (7*,18) score=16.407312393188477\n",
+ " init edge (22,24*) score=13.573594093322754\n",
+ " init edge (8,20*) score=13.542624473571777\n",
+ " init edge (13*,24) score=4.096213340759277\n",
+ " init edge (22,28*) score=25.894927978515625\n",
+ " init edge (3*,24) score=17.53987693786621\n",
+ " init edge (3,5*) score=16.29656410217285\n",
+ " init edge (19*,20) score=15.545378684997559\n",
+ " init edge (3,11*) score=13.509073257446289\n",
+ " init edge (11,21*) score=18.695545196533203\n",
+ " init edge (11,23*) score=18.40506935119629\n",
+ " init edge (1*,23) score=16.854169845581055\n",
+ " init edge (1,15*) score=14.627829551696777\n",
+ " init edge (23,25*) score=8.872193336486816\n",
+ " init edge (25,27*) score=10.639114379882812\n",
+ " init edge (27,29*) score=9.701958656311035\n",
+ " init edge (17*,27) score=5.2988691329956055\n",
+ " init loss = 0.019614549353718758\n",
+ "Global alignement - optimizing for:\n",
+ "['pw_poses', 'im_depthmaps', 'im_poses', 'im_focals']\n"
+ ]
+ },
+ {
+ "output_type": "stream",
+ "name": "stderr",
+ "text": [
+ "100%|██████████| 50/50 [00:02<00:00, 21.33it/s, lr=1.08654e-05 loss=0.0145464]\n"
+ ]
+ },
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "✓ Global alignment complete (final loss: 0.014546)\n",
+ "Final memory state:\n",
+ "GPU Memory - Allocated: 2.47GB, Reserved: 2.84GB\n",
+ "CPU Memory Usage: 42.5%\n",
+ "\n",
+ "======================================================================\n",
+ "STEP 4: Converting to COLMAP (PINHOLE)\n",
+ "======================================================================\n",
+ "\n",
+ "=== Extracting Camera Parameters ===\n",
+ "✓ Extracted camera parameters for 30 images\n",
+ "✓ Total 3D points: 1505280\n",
+ "✓ After confidence filtering (>0.5): 1505280 points\n",
+ "Extracted 30 cameras with conf >= 0.5\n",
+ "COLMAP cameras.bin saved to /content/output/sparse/0/cameras.bin\n",
+ "COLMAP images.bin saved to /content/output/sparse/0/images.bin\n",
+ "COLMAP points3D.bin saved to /content/output/sparse/0/points3D.bin\n",
+ "\n",
+ "COLMAP binary files exported to /content/output/sparse/0/\n",
+ "  - cameras.bin: 30 cameras\n",
+ "  - images.bin: 30 images\n",
+ "  - points3D.bin: 1505280 points\n",
+ "\n",
+ "======================================================================\n",
+ "STEP 5: Running Gaussian Splatting\n",
+ "======================================================================\n",
+ "\n",
+ "=== Running Gaussian Splatting ===\n",
+ "Command: python /content/gaussian-splatting/train.py -s /content/output -m /content/output/gaussian_splatting --iterations 1000 --eval\n",
+ "  Source: /content/output\n",
+ "  Output: /content/output/gaussian_splatting\n",
+ "\n",
+ "✓ Gaussian Splatting complete\n",
+ "\n",
+ "✓ Point cloud directory found: /content/output/gaussian_splatting/point_cloud\n",
+ "  ✓ iteration_1000/point_cloud.ply (118.53 MB)\n",
+ "\n",
+ "======================================================================\n",
+ "PIPELINE COMPLETE\n",
+ "======================================================================\n",
+ "✓ Point cloud generated: /content/output/gaussian_splatting/point_cloud/iteration_1000/point_cloud.ply\n",
+ "  Size: 118.53 MB\n",
+ "\n",
+ "Output directory structure:\n",
+ "  /content/output/\n",
+ "  ├── images/ (processed images)\n",
+ "  ├── original_images/ (original source images)\n",
+ "  ├── sparse/0/ (COLMAP data)\n",
+ "  │   ├── cameras.bin\n",
+ "  │   ├── images.bin\n",
+ "  │   └── points3D.bin\n",
+ "  └── gaussian_splatting/ (GS output)\n",
+ "\n",
+ "======================================================================\n",
+ "PIPELINE COMPLETE\n",
+ "======================================================================\n",
+ "Output directory: /content/output/gaussian_splatting\n"
+ ]
+ }
+ ],
+ "execution_count": 17
+ },
+ {
+ "cell_type": "code",
+ "source": [],
+ "metadata": {
+ "trusted": true,
+ "id": "vVlwllleTKyG"
+ },
+ "outputs": [],
+ "execution_count": 12
+ }
+ ]
+ }