LarryD123 committed
Commit d2efaca · verified · 1 Parent(s): 0d47ead

Upload folder using huggingface_hub

This view is limited to 50 files because the commit contains too many changes. See the raw diff for the full change set.
Files changed (50)
  1. .gitattributes +7 -0
  2. .github/copilot-instructions.md +106 -0
  3. .gitignore +39 -0
  4. DataProcessing.jpg +3 -0
  5. DataProcessing1.jpg +3 -0
  6. FInal Project Intro.pdf +3 -0
  7. Release_baseline_model.ipynb +0 -0
  8. Release_baseline_model_team1.ipynb +583 -0
  9. Release_post_process.ipynb +245 -0
  10. data/train/IMG_1784.JPG +3 -0
  11. data/train/IMG_1785.JPG +3 -0
  12. data/train/IMG_1786.JPG +3 -0
  13. data/train/IMG_1787.JPG +3 -0
  14. data/train/IMG_1788.JPG +3 -0
  15. data/train/IMG_1789.JPG +3 -0
  16. data/train/IMG_1790.JPG +3 -0
  17. data/train/IMG_1791.JPG +3 -0
  18. data/train/IMG_1792.JPG +3 -0
  19. data/train/IMG_1793.JPG +3 -0
  20. data/train/IMG_1794.JPG +3 -0
  21. data/train/IMG_1795.JPG +3 -0
  22. data/train/IMG_1796.JPG +3 -0
  23. data/train/IMG_1797.JPG +3 -0
  24. data/train/IMG_1798.JPG +3 -0
  25. data/train/IMG_1800.JPG +3 -0
  26. data/train/IMG_1801.JPG +3 -0
  27. data/train/IMG_1802.JPG +3 -0
  28. data/train/IMG_1803.JPG +3 -0
  29. data/train/IMG_1804.JPG +3 -0
  30. data/train/IMG_1805.JPG +3 -0
  31. data/train/IMG_1806.JPG +3 -0
  32. data/train/IMG_1807.JPG +3 -0
  33. data/train/IMG_1808.JPG +3 -0
  34. data/train/IMG_1809.JPG +3 -0
  35. data/train/IMG_1811.JPG +3 -0
  36. data/train/IMG_1812.JPG +3 -0
  37. data/train/IMG_1813.JPG +3 -0
  38. data/train/IMG_1814.JPG +3 -0
  39. data/train/IMG_1815.JPG +3 -0
  40. data/train/IMG_1817.JPG +3 -0
  41. data/train/IMG_1818.JPG +3 -0
  42. data/train/IMG_1819.JPG +3 -0
  43. data/train/IMG_1820.JPG +3 -0
  44. data/train/IMG_1821.JPG +3 -0
  45. data/train/IMG_1823.JPG +3 -0
  46. data/train/IMG_1824.JPG +3 -0
  47. data/train/IMG_1825.JPG +3 -0
  48. data/train/IMG_1826.JPG +3 -0
  49. data/train/IMG_1827.JPG +3 -0
  50. data/train/IMG_1828.JPG +3 -0
.gitattributes CHANGED
@@ -57,3 +57,10 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  # Video files - compressed
  *.mp4 filter=lfs diff=lfs merge=lfs -text
  *.webm filter=lfs diff=lfs merge=lfs -text
+ demo-frontend/node_modules/@esbuild/win32-x64/esbuild.exe filter=lfs diff=lfs merge=lfs -text
+ demo-frontend/node_modules/@rollup/rollup-win32-x64-gnu/rollup.win32-x64-gnu.node filter=lfs diff=lfs merge=lfs -text
+ demo-frontend/node_modules/@rollup/rollup-win32-x64-msvc/rollup.win32-x64-msvc.node filter=lfs diff=lfs merge=lfs -text
+ FInal[[:space:]]Project[[:space:]]Intro.pdf filter=lfs diff=lfs merge=lfs -text
+ tmp-demo-frontend/node_modules/@esbuild/win32-x64/esbuild.exe filter=lfs diff=lfs merge=lfs -text
+ tmp-demo-frontend/node_modules/@rollup/rollup-win32-x64-gnu/rollup.win32-x64-gnu.node filter=lfs diff=lfs merge=lfs -text
+ tmp-demo-frontend/node_modules/@rollup/rollup-win32-x64-msvc/rollup.win32-x64-msvc.node filter=lfs diff=lfs merge=lfs -text
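
Note that the added rules track Windows build binaries (esbuild.exe, rollup *.node) inside node_modules, which suggests the frontend's dependency folders were swept into the upload. For reference, a minimal sketch of the huggingface_hub call behind a commit like this one; the repo_id value and the ignore_patterns argument (which would keep node_modules out of the repo so these LFS rules never become necessary) are assumptions, not read from the commit:

from huggingface_hub import HfApi

api = HfApi()  # assumes a prior `huggingface-cli login` with a write token

# Hypothetical re-run of the upload that skips dependency folders entirely,
# so binaries such as esbuild.exe never need LFS tracking.
api.upload_folder(
    folder_path=".",
    repo_id="LarryD123/Image2GPS",  # assumption: the target repo id
    repo_type="model",
    commit_message="Upload folder using huggingface_hub",
    ignore_patterns=["**/node_modules/**", "tmp-demo-frontend/**"],
)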
.github/copilot-instructions.md ADDED
@@ -0,0 +1,106 @@
+ <!-- Use this file to provide workspace-specific custom instructions to Copilot. For more details, visit https://code.visualstudio.com/docs/copilot/copilot-customization#_use-a-githubcopilotinstructionsmd-file -->
+ - [x] Verify that the copilot-instructions.md file in the .github directory is created. (Created file with checklist template.)
+
+ - [x] Clarify Project Requirements (Frontend demo: Vite React TS with Tailwind, faux AI output, map view, no backend.)
+ <!-- Ask for project type, language, and frameworks if not specified. Skip if already provided. -->
+
+ - [ ] Scaffold the Project
+ <!--
+ Ensure that the previous step has been marked as completed.
+ Call the project setup tool with the projectType parameter.
+ Run the scaffolding command to create project files and folders.
+ Use '.' as the working directory.
+ If no appropriate projectType is available, search documentation using available tools.
+ Otherwise, create the project structure manually using available file creation tools.
+ -->
+
+ - [ ] Customize the Project
+ <!--
+ Verify that all previous steps have been completed successfully and you have marked the step as completed.
+ Develop a plan to modify the codebase according to user requirements.
+ Apply modifications using appropriate tools and user-provided references.
+ Skip this step for "Hello World" projects.
+ -->
+
+ - [ ] Install Required Extensions
+ <!-- ONLY install extensions mentioned in get_project_setup_info. Skip this step otherwise and mark it as completed. -->
+
+ - [ ] Compile the Project
+ <!--
+ Verify that all previous steps have been completed.
+ Install any missing dependencies.
+ Run diagnostics and resolve any issues.
+ Check for markdown files in the project folder for relevant instructions on how to do this.
+ -->
+
+ - [ ] Create and Run Task
+ <!--
+ Verify that all previous steps have been completed.
+ Check https://code.visualstudio.com/docs/debugtest/tasks to determine if the project needs a task. If so, use create_and_run_task to create and launch a task based on package.json, README.md, and project structure.
+ Skip this step otherwise.
+ -->
+
+ - [ ] Launch the Project
+ <!--
+ Verify that all previous steps have been completed.
+ Prompt the user for debug mode; launch only if confirmed.
+ -->
+
+ - [ ] Ensure Documentation is Complete
+ <!--
+ Verify that all previous steps have been completed.
+ Verify that README.md and the copilot-instructions.md file in the .github directory exist and contain current project information.
+ Clean up the copilot-instructions.md file in the .github directory by removing all HTML comments.
+ -->
+
+ <!--
+ ## Execution Guidelines
+ PROGRESS TRACKING:
+ - If any tools are available to manage the above todo list, use them to track progress through this checklist.
+ - After completing each step, mark it complete and add a summary.
+ - Read the current todo list status before starting each new step.
+
+ COMMUNICATION RULES:
+ - Avoid verbose explanations or printing full command outputs.
+ - If a step is skipped, state that briefly (e.g. "No extensions needed").
+ - Do not explain project structure unless asked.
+ - Keep explanations concise and focused.
+
+ DEVELOPMENT RULES:
+ - Use '.' as the working directory unless the user specifies otherwise.
+ - Avoid adding media or external links unless explicitly requested.
+ - Use placeholders only with a note that they should be replaced.
+ - Use the VS Code API tool only for VS Code extension projects.
+ - Once the project is created, it is already open in Visual Studio Code; do not suggest commands to open the project in Visual Studio again.
+ - If the project setup information has additional rules, follow them strictly.
+
+ FOLDER CREATION RULES:
+ - Always use the current directory as the project root.
+ - If you are running any terminal commands, use the '.' argument to ensure that the current working directory is ALWAYS used.
+ - Do not create a new folder unless the user explicitly requests it, aside from a .vscode folder for a tasks.json file.
+ - If any of the scaffolding commands mention that the folder name is not correct, let the user know to create a new folder with the correct name and then reopen it in VS Code.
+
+ EXTENSION INSTALLATION RULES:
+ - Only install extensions specified by the get_project_setup_info tool. DO NOT INSTALL any other extensions.
+
+ PROJECT CONTENT RULES:
+ - If the user has not specified project details, assume they want a "Hello World" project as a starting point.
+ - Avoid adding links of any type (URLs, files, folders, etc.) or integrations that are not explicitly required.
+ - Avoid generating images, videos, or any other media files unless explicitly requested.
+ - If you need to use any media assets as placeholders, let the user know that these are placeholders and should be replaced with the actual assets later.
+ - Ensure all generated components serve a clear purpose within the user's requested workflow.
+ - If a feature is assumed but not confirmed, prompt the user for clarification before including it.
+ - If you are working on a VS Code extension, use the VS Code API tool with a query to find relevant VS Code API references and samples related to that query.
+
+ TASK COMPLETION RULES:
+ - Your task is complete when:
+   - The project is successfully scaffolded and compiled without errors
+   - The copilot-instructions.md file in the .github directory exists in the project
+   - The README.md file exists and is up to date
+   - The user is provided with clear instructions to debug/launch the project
+
+ Before starting a new task in the above plan, update progress in the plan.
+ -->
+ - Work through each checklist item systematically.
+ - Keep communication concise and focused.
+ - Follow development best practices.
.gitignore ADDED
@@ -0,0 +1,39 @@
+ # Byte-compiled / optimized / DLL files
+ __pycache__/
+ *.py[cod]
+ *.pyo
+ *.pyd
+
+ # C extensions
+ *.so
+
+ # Distribution / packaging
+ *.egg-info/
+ *.egg
+ dist/
+ build/
+
+ # Virtual environments
+ venv/
+ env/
+ .venv/
+ .env
+
+ # Jupyter
+ .ipynb_checkpoints
+
+ # VS Code
+ .vscode/
+
+ # Data (large files)
+ data/raw_images/
+ data/*.tar
+ data/*.zip
+ data/*.npz
+ data/*.h5
+ data/*.db
+
+ # Logs and OS files
+ *.log
+ .DS_Store
+ Thumbs.db
DataProcessing.jpg ADDED

Git LFS Details

  • SHA256: 3fcecc220ef9b7cae4a8d6540c45010c1b910cb25ae306bd954e1899778977ce
  • Pointer size: 131 Bytes
  • Size of remote file: 838 kB
DataProcessing1.jpg ADDED

Git LFS Details

  • SHA256: d869e8a44920a0ddbe36f671b6609f59947d4e501e6c8292ecc3207973b2eaec
  • Pointer size: 131 Bytes
  • Size of remote file: 220 kB
FInal Project Intro.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6742d8b4e83b38cbbb564269fb1b3de312082c1c3b83114d821087f72016153d
+ size 594116
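
The three added lines are the entire Git LFS pointer that Git stores in place of the 594 kB PDF; the "Pointer size" figures in the file entries elsewhere on this page describe pointers of the same shape. A minimal sketch of parsing such a pointer in Python (the path is illustrative):

def read_lfs_pointer(path):
    """Parse a Git LFS pointer file into a dict of its key/value lines."""
    fields = {}
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            key, _, value = line.strip().partition(" ")
            fields[key] = value
    return fields

# Hypothetical usage: for the pointer above this yields
# {'version': 'https://git-lfs.github.com/spec/v1', 'oid': 'sha256:6742...', 'size': '594116'}
pointer = read_lfs_pointer("FInal Project Intro.pdf")
print(pointer["oid"], pointer["size"])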
Release_baseline_model.ipynb ADDED
The diff for this file is too large to render. See raw diff
 
Release_baseline_model_team1.ipynb ADDED
@@ -0,0 +1,583 @@
+ {
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Y0bKqI_UZNso"
+ },
+ "source": [
+ "# Installing dependencies\n",
+ "\n",
+ "Please make a copy of this notebook.\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "VBj9R3VCAYkv"
+ },
+ "outputs": [],
+ "source": [
+ "!pip install geopy > delete.txt\n",
+ "!pip install datasets > delete.txt\n",
+ "!pip install torch torchvision datasets > delete.txt\n",
+ "!pip install huggingface_hub > delete.txt\n",
+ "!rm delete.txt"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "_d8uxEsCZTBc"
+ },
+ "source": [
+ "# Hugging Face login\n",
+ "You will need your personal access token."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "N5vOXHY5Ifv2"
+ },
+ "outputs": [],
+ "source": [
+ "!huggingface-cli login"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "sv_nG_P8W7Bn"
+ },
+ "source": [
+ "# Data"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "0GokzmxKZupF"
+ },
+ "source": [
+ "## Downloading the train and test dataset"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "import os\n",
+ "import pandas as pd\n",
+ "from PIL import Image\n",
+ "from tqdm.auto import tqdm\n",
+ "from datasets import load_dataset\n",
+ "\n",
+ "LAT_MIN, LAT_MAX = 39.9500, 39.9530\n",
+ "LON_MIN, LON_MAX = -75.1930, -75.1900\n",
+ "TARGET_SIZE = (224, 224)\n",
+ "OUTPUT_DIR = \"/content/small_data\"\n",
+ "\n",
+ "def process_dataset():\n",
+ "    print(\"Starting data processing...\")\n",
+ "    try:\n",
+ "        ds_train = load_dataset(\"LarryD123/Image2GPS_combined\", split=\"train\", streaming=True)\n",
+ "    except Exception as e:\n",
+ "        print(f\"Error loading dataset: {e}\")\n",
+ "        return\n",
+ "\n",
+ "    os.makedirs(OUTPUT_DIR, exist_ok=True)\n",
+ "    metadata = []\n",
+ "    count_kept = 0\n",
+ "    count_rejected = 0\n",
+ "\n",
+ "    for i, item in enumerate(tqdm(ds_train)):\n",
+ "        lat = item.get('Latitude') or item.get('latitude')\n",
+ "        lon = item.get('Longitude') or item.get('longitude')\n",
+ "\n",
+ "        if lat and lon and LAT_MIN <= lat <= LAT_MAX and LON_MIN <= lon <= LON_MAX:\n",
+ "            try:\n",
+ "                img = item['image'].convert('RGB').resize(TARGET_SIZE, Image.LANCZOS)\n",
+ "                file_name = f\"small_{i:05d}.jpg\"\n",
+ "                img.save(os.path.join(OUTPUT_DIR, file_name), quality=95)\n",
+ "\n",
+ "                metadata.append({\n",
+ "                    \"file_name\": file_name,\n",
+ "                    \"Latitude\": lat,\n",
+ "                    \"Longitude\": lon\n",
+ "                })\n",
+ "                count_kept += 1\n",
+ "            except Exception:\n",
+ "                continue\n",
+ "        else:\n",
+ "            count_rejected += 1\n",
+ "\n",
+ "        if count_kept >= 5000:\n",
+ "            break\n",
+ "\n",
+ "    df = pd.DataFrame(metadata)\n",
+ "    df.to_csv(os.path.join(OUTPUT_DIR, \"metadata.csv\"), index=False)\n",
+ "\n",
+ "    print(\"-\" * 30)\n",
+ "    print(\"Processing complete.\")\n",
+ "    print(f\"Saved images: {count_kept}\")\n",
+ "    print(f\"Rejected images: {count_rejected}\")\n",
+ "    print(f\"Metadata saved to {OUTPUT_DIR}/metadata.csv\")\n",
+ "    print(\"-\" * 30)\n",
+ "\n",
+ "if __name__ == \"__main__\":\n",
+ "    process_dataset()"
+ ],
+ "metadata": {
+ "id": "jgPn6XrpLs_U"
+ },
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Ua-UFAGyaFvL"
+ },
+ "source": [
+ "## Creating dataloaders and visualizing the data"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "import torch\n",
+ "import numpy as np\n",
+ "from torch.utils.data import Dataset, DataLoader\n",
+ "from torchvision import transforms\n",
+ "from datasets import load_dataset\n",
+ "\n",
+ "LAT_MIN, LAT_MAX = 39.9495, 39.9550\n",
+ "LON_MIN, LON_MAX = -75.1940, -75.1905\n",
+ "TARGET_SIZE = (224, 224)\n",
+ "BATCH_SIZE = 32\n",
+ "NUM_WORKERS = 2\n",
+ "\n",
+ "class GPSDataset(Dataset):\n",
+ "    def __init__(self, hf_dataset, transform=None):\n",
+ "        self.hf_dataset = hf_dataset\n",
+ "        self.transform = transform\n",
+ "\n",
+ "    def __len__(self):\n",
+ "        return len(self.hf_dataset)\n",
+ "\n",
+ "    def __getitem__(self, idx):\n",
+ "        item = self.hf_dataset[idx]\n",
+ "        image = item['image'].convert('RGB')\n",
+ "        if self.transform:\n",
+ "            image = self.transform(image)\n",
+ "        return image, torch.tensor([item['Latitude'], item['Longitude']], dtype=torch.float32)\n",
+ "\n",
+ "def standardize_columns(dataset):\n",
+ "    if 'latitude' in dataset.column_names:\n",
+ "        dataset = dataset.rename_column(\"latitude\", \"Latitude\")\n",
+ "    if 'longitude' in dataset.column_names:\n",
+ "        dataset = dataset.rename_column(\"longitude\", \"Longitude\")\n",
+ "    return dataset\n",
+ "\n",
+ "def get_dataloaders():\n",
+ "    # Expose the normalization statistics; the training and evaluation cells below rely on these globals.\n",
+ "    global lat_mean, lat_std, lon_mean, lon_std\n",
+ "\n",
+ "    try:\n",
+ "        ds_train = load_dataset(\"LarryD123/Image2GPS\", split=\"train\")\n",
+ "        ds_val = load_dataset(\"gydou/released_img\", split=\"train\")\n",
+ "    except Exception as e:\n",
+ "        print(f\"Error loading datasets: {e}\")\n",
+ "        return None, None\n",
+ "\n",
+ "    ds_train = standardize_columns(ds_train)\n",
+ "    ds_val = standardize_columns(ds_val)\n",
+ "\n",
+ "    ds_train = ds_train.filter(lambda x: LAT_MIN <= x['Latitude'] <= LAT_MAX and\n",
+ "                               LON_MIN <= x['Longitude'] <= LON_MAX)\n",
+ "\n",
+ "    lats = np.array(ds_train['Latitude'])\n",
+ "    lons = np.array(ds_train['Longitude'])\n",
+ "    lat_mean, lat_std = np.mean(lats), np.std(lats)\n",
+ "    lon_mean, lon_std = np.mean(lons), np.std(lons)\n",
+ "\n",
+ "    print(f\"Lat Mean: {lat_mean:.6f}, Std: {lat_std:.6f}\")\n",
+ "    print(f\"Lon Mean: {lon_mean:.6f}, Std: {lon_std:.6f}\")\n",
+ "\n",
+ "    train_transform = transforms.Compose([\n",
+ "        transforms.Resize(TARGET_SIZE),\n",
+ "        transforms.RandomHorizontalFlip(),\n",
+ "        transforms.RandomRotation(degrees=15),\n",
+ "        transforms.ColorJitter(brightness=0.2, contrast=0.2),\n",
+ "        transforms.ToTensor(),\n",
+ "        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])\n",
+ "    ])\n",
+ "\n",
+ "    val_transform = transforms.Compose([\n",
+ "        transforms.Resize(TARGET_SIZE),\n",
+ "        transforms.ToTensor(),\n",
+ "        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])\n",
+ "    ])\n",
+ "\n",
+ "    train_dataset = GPSDataset(ds_train, transform=train_transform)\n",
+ "    val_dataset = GPSDataset(ds_val, transform=val_transform)\n",
+ "\n",
+ "    train_loader = DataLoader(train_dataset, batch_size=BATCH_SIZE, shuffle=True, num_workers=NUM_WORKERS)\n",
+ "    val_loader = DataLoader(val_dataset, batch_size=BATCH_SIZE, shuffle=False, num_workers=NUM_WORKERS)\n",
+ "\n",
+ "    return train_loader, val_loader\n",
+ "\n",
+ "if __name__ == \"__main__\":\n",
+ "    # Name these to match the later cells, which reference train_dataloader/val_dataloader.\n",
+ "    train_dataloader, val_dataloader = get_dataloaders()"
+ ],
+ "metadata": {
+ "id": "I47GPJ-3QDUM"
+ },
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "i028XRKnh6-v"
+ },
+ "outputs": [],
+ "source": [
+ "# Verify loading\n",
+ "for images, gps_coords in train_dataloader:\n",
+ "    print(images.size(), gps_coords.size())\n",
+ "    break"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "DkaQtacFiWK8"
+ },
+ "outputs": [],
+ "source": [
+ "import matplotlib.pyplot as plt\n",
+ "import torchvision.transforms as transforms\n",
+ "import numpy as np\n",
+ "\n",
+ "def denormalize(tensor, mean, std):\n",
+ "    mean = np.array(mean)\n",
+ "    std = np.array(std)\n",
+ "    tensor = tensor.numpy().transpose((1, 2, 0))  # Convert from C x H x W to H x W x C\n",
+ "    tensor = std * tensor + mean  # Denormalize\n",
+ "    tensor = np.clip(tensor, 0, 1)  # Clip to keep pixel values between 0 and 1\n",
+ "    return tensor\n",
+ "\n",
+ "data_iter = iter(train_dataloader)\n",
+ "images, gps_coords = next(data_iter)  # Get a batch of images and labels\n",
+ "# Denormalize each image in the batch for display\n",
+ "itr = 0\n",
+ "for im in images:\n",
+ "    image = denormalize(im, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])\n",
+ "\n",
+ "    # Plot the image\n",
+ "    plt.imshow(image)\n",
+ "    plt.title(f'Latitude: {gps_coords[itr][0].item():.4f}, Longitude: {gps_coords[itr][1].item():.4f}')\n",
+ "    plt.axis('off')\n",
+ "    plt.show()\n",
+ "    itr += 1"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Fi0I3kbV0iTp"
+ },
+ "source": [
+ "# Swin Transformer"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "lh5AiZc4gKXl"
+ },
+ "outputs": [],
+ "source": [
+ "import torch\n",
+ "import torch.nn as nn\n",
+ "import torch.optim as optim\n",
+ "from torch.optim.lr_scheduler import StepLR\n",
+ "import torchvision.models as models\n",
+ "import numpy as np\n",
+ "from geopy.distance import geodesic\n",
+ "from tqdm.auto import tqdm\n",
+ "from torch.cuda.amp import GradScaler, autocast\n",
+ "\n",
+ "print(\"Initializing Swin Transformer Base (Torchvision)...\")\n",
+ "\n",
+ "model = models.swin_b(weights='DEFAULT')\n",
+ "model.head = nn.Linear(1024, 2)\n",
+ "\n",
+ "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n",
+ "model = model.to(device)\n",
+ "\n",
+ "criterion = nn.MSELoss()\n",
+ "optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.05)\n",
+ "scheduler = StepLR(optimizer, step_size=7, gamma=0.5)\n",
+ "scaler = GradScaler(enabled=True)\n",
+ "\n",
+ "best_val_dist = float('inf')\n",
+ "best_epoch = -1\n",
+ "save_path = \"model.pt\"\n",
+ "num_epochs = 50\n",
+ "\n",
+ "print(f\"Starting training for {num_epochs} epochs...\")\n",
+ "\n",
+ "for epoch in range(1, num_epochs + 1):\n",
+ "    model.train()\n",
+ "    running_loss = 0.0\n",
+ "\n",
+ "    train_bar = tqdm(train_dataloader, desc=f\"Epoch {epoch}/{num_epochs} [Train]\", leave=False)\n",
+ "\n",
+ "    for images, coords in train_bar:\n",
+ "        images = images.to(device)\n",
+ "        coords = coords.to(device)\n",
+ "\n",
+ "        # Standardize targets with the statistics computed in get_dataloaders.\n",
+ "        target_lat = (coords[:, 0] - lat_mean) / lat_std\n",
+ "        target_lon = (coords[:, 1] - lon_mean) / lon_std\n",
+ "        targets = torch.stack([target_lat, target_lon], dim=1)\n",
+ "\n",
+ "        optimizer.zero_grad()\n",
+ "\n",
+ "        with autocast(enabled=True):\n",
+ "            outputs = model(images)\n",
+ "            loss = criterion(outputs, targets)\n",
+ "\n",
+ "        scaler.scale(loss).backward()\n",
+ "        scaler.step(optimizer)\n",
+ "        scaler.update()\n",
+ "\n",
+ "        running_loss += loss.item() * images.size(0)\n",
+ "        train_bar.set_postfix({'loss': f\"{loss.item():.4f}\"})\n",
+ "\n",
+ "    scheduler.step()\n",
+ "    epoch_loss = running_loss / len(train_dataloader.dataset)\n",
+ "\n",
+ "    model.eval()\n",
+ "    distances = []\n",
+ "\n",
+ "    with torch.no_grad():\n",
+ "        val_bar = tqdm(val_dataloader, desc=f\"Epoch {epoch}/{num_epochs} [Val]\", leave=False)\n",
+ "\n",
+ "        for images, coords in val_bar:\n",
+ "            images = images.to(device)\n",
+ "            coords = coords.to(device)\n",
+ "\n",
+ "            with autocast(enabled=True):\n",
+ "                outputs = model(images)\n",
+ "\n",
+ "            preds = outputs.cpu().numpy() * np.array([lat_std, lon_std]) + np.array([lat_mean, lon_mean])\n",
+ "            actuals = coords.cpu().numpy()\n",
+ "\n",
+ "            for p, a in zip(preds, actuals):\n",
+ "                try:\n",
+ "                    dist = geodesic((a[0], a[1]), (p[0], p[1])).meters\n",
+ "                    distances.append(dist)\n",
+ "                except ValueError:\n",
+ "                    continue\n",
+ "\n",
+ "    avg_dist = np.mean(distances)\n",
+ "    print(f\"Epoch [{epoch}/{num_epochs}] | Train Loss: {epoch_loss:.6f} | Val Avg Distance: {avg_dist:.2f}m\")\n",
+ "\n",
+ "    if avg_dist < best_val_dist:\n",
+ "        best_val_dist = avg_dist\n",
+ "        best_epoch = epoch\n",
+ "        torch.save(model.state_dict(), save_path)\n",
+ "        print(f\"New best model saved. Distance: {best_val_dist:.2f}m\")\n",
+ "\n",
+ "print(\"Training complete.\")\n",
+ "print(f\"Best model saved to {save_path} (Epoch {best_epoch}) with Distance: {best_val_dist:.2f}m\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "CdLEWIKhaX_o"
+ },
+ "source": [
+ "# Testing the model"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "RXlnLpHvjdpd"
+ },
+ "outputs": [],
+ "source": [
+ "from sklearn.metrics import mean_absolute_error, mean_squared_error\n",
+ "import numpy as np\n",
+ "from geopy.distance import geodesic\n",
+ "import torch\n",
+ "\n",
+ "# Ensure model is in evaluation mode\n",
+ "model.eval()\n",
+ "\n",
+ "all_preds = []\n",
+ "all_actuals = []\n",
+ "distances = []\n",
+ "\n",
+ "print(\"Starting model evaluation on validation set...\")\n",
+ "\n",
+ "with torch.no_grad():\n",
+ "    for images, gps_coords in val_dataloader:\n",
+ "        images = images.to(device)\n",
+ "        gps_coords = gps_coords.to(device)\n",
+ "\n",
+ "        # Forward pass\n",
+ "        outputs = model(images)\n",
+ "\n",
+ "        # Denormalize predictions using the previously calculated statistics:\n",
+ "        # lat_std, lon_std, lat_mean, lon_mean\n",
+ "        normalization_std = np.array([lat_std, lon_std])\n",
+ "        normalization_mean = np.array([lat_mean, lon_mean])\n",
+ "\n",
+ "        preds = outputs.cpu().numpy() * normalization_std + normalization_mean\n",
+ "        actuals = gps_coords.cpu().numpy()\n",
+ "\n",
+ "        all_preds.append(preds)\n",
+ "        all_actuals.append(actuals)\n",
+ "\n",
+ "        # Calculate geodesic distance (in meters) for each sample\n",
+ "        for pred, actual in zip(preds, actuals):\n",
+ "            try:\n",
+ "                # geodesic() takes coordinates in (latitude, longitude) order\n",
+ "                dist = geodesic((actual[0], actual[1]), (pred[0], pred[1])).meters\n",
+ "                distances.append(dist)\n",
+ "            except ValueError:\n",
+ "                continue\n",
+ "\n",
+ "# Concatenate all batches\n",
+ "all_preds = np.concatenate(all_preds)\n",
+ "all_actuals = np.concatenate(all_actuals)\n",
+ "\n",
+ "# Calculate final metrics\n",
+ "mae = mean_absolute_error(all_actuals, all_preds)\n",
+ "rmse = np.sqrt(mean_squared_error(all_actuals, all_preds))\n",
+ "avg_distance = np.mean(distances)\n",
+ "\n",
+ "print(\"-\" * 30)\n",
+ "print(\"Evaluation Results:\")\n",
+ "print(f\"Mean Absolute Error (Degrees): {mae:.6f}\")\n",
+ "print(f\"Root Mean Squared Error (Degrees): {rmse:.6f}\")\n",
+ "print(\"-\" * 30)\n",
+ "print(f\"Average Distance Error: {avg_distance:.2f} meters\")\n",
+ "print(\"-\" * 30)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "12BMgT0CXVu7"
+ },
+ "outputs": [],
+ "source": [
+ "import matplotlib.pyplot as plt\n",
+ "\n",
+ "plt.figure(figsize=(10, 6))\n",
+ "\n",
+ "plt.scatter(all_actuals[:, 1], all_actuals[:, 0],\n",
+ "            label='Actual Ground Truth', color='blue', alpha=0.6, s=20)\n",
+ "\n",
+ "plt.scatter(all_preds[:, 1], all_preds[:, 0],\n",
+ "            label='Model Prediction', color='red', alpha=0.6, s=20)\n",
+ "\n",
+ "print(\"Drawing error lines...\")\n",
+ "for i in range(len(all_actuals)):\n",
+ "    plt.plot(\n",
+ "        [all_actuals[i, 1], all_preds[i, 1]],\n",
+ "        [all_actuals[i, 0], all_preds[i, 0]],\n",
+ "        color='gray', linewidth=0.5, alpha=0.5\n",
+ "    )\n",
+ "\n",
+ "plt.legend()\n",
+ "plt.xlabel('Longitude')\n",
+ "plt.ylabel('Latitude')\n",
+ "plt.title('Actual vs. Predicted GPS Coordinates (Validation Set)')\n",
+ "plt.grid(True, linestyle='--', alpha=0.3)\n",
+ "\n",
+ "plt.tight_layout()\n",
+ "plt.show()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "V-nfMsIf0MHd"
+ },
+ "source": [
+ "# 2. Pushing the model to the Hugging Face Hub (Torchvision Model)\n",
+ "\n",
+ "Use this code if you loaded the model from Torchvision or built the model from scratch using PyTorch. If you built your model from scratch, make sure to follow the guidelines described here - https://huggingface.co/docs/hub/en/models-uploading#upload-a-pytorch-model-using-huggingfacehub\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "import os\n",
+ "from huggingface_hub import HfApi\n",
+ "\n",
+ "HF_USERNAME = \"HyadesL\"\n",
+ "REPO_NAME = \"swin-transformer-gps-penn\"\n",
+ "MODEL_FILENAME = \"model.pt\"\n",
+ "\n",
+ "def upload_to_huggingface():\n",
+ "    if not os.path.exists(MODEL_FILENAME):\n",
+ "        print(f\"Error: {MODEL_FILENAME} not found in the current directory.\")\n",
+ "        return\n",
+ "\n",
+ "    repo_id = f\"{HF_USERNAME}/{REPO_NAME}\"\n",
+ "    api = HfApi()\n",
+ "\n",
+ "    try:\n",
+ "        print(f\"Initializing repository: {repo_id}\")\n",
+ "        api.create_repo(repo_id=repo_id, exist_ok=True)\n",
+ "\n",
+ "        print(f\"Uploading {MODEL_FILENAME}...\")\n",
+ "        api.upload_file(\n",
+ "            path_or_fileobj=MODEL_FILENAME,\n",
+ "            path_in_repo=MODEL_FILENAME,\n",
+ "            repo_id=repo_id,\n",
+ "            repo_type=\"model\",\n",
+ "            commit_message=\"Upload torchvision Swin Transformer weights\"\n",
+ "        )\n",
+ "\n",
+ "        print(f\"Upload successful: https://huggingface.co/{repo_id}/blob/main/{MODEL_FILENAME}\")\n",
+ "\n",
+ "    except Exception as e:\n",
+ "        print(f\"Upload failed: {e}\")\n",
+ "        print(\"Ensure you are logged in via 'huggingface-cli login' with a write token.\")\n",
+ "\n",
+ "if __name__ == \"__main__\":\n",
+ "    upload_to_huggingface()"
+ ],
+ "metadata": {
+ "id": "RcYKYGdToQE6"
+ },
+ "execution_count": null,
+ "outputs": []
+ }
+ ],
+ "metadata": {
+ "colab": {
+ "provenance": []
+ },
+ "kernelspec": {
+ "display_name": "Python 3",
+ "name": "python3"
+ },
+ "language_info": {
+ "name": "python"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 0
+ }
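
The notebook saves and uploads only the state_dict and never shows the reload path. A minimal sketch of loading model.pt back for single-image inference, assuming the same torchvision swin_b architecture with the replaced head and the lat/lon statistics computed in get_dataloaders (the image path and the statistics arguments are illustrative):

import torch
import torch.nn as nn
import torchvision.models as models
from torchvision import transforms
from PIL import Image

# Rebuild the exact architecture used in training before loading weights.
model = models.swin_b(weights=None)
model.head = nn.Linear(1024, 2)
model.load_state_dict(torch.load("model.pt", map_location="cpu"))
model.eval()

# Same preprocessing as the notebook's val_transform.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def predict_gps(image_path, lat_mean, lat_std, lon_mean, lon_std):
    # The model predicts z-scored coordinates; undo the standardization.
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        out = model(img)[0]
    return (out[0].item() * lat_std + lat_mean,
            out[1].item() * lon_std + lon_mean)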
Release_post_process.ipynb ADDED
@@ -0,0 +1,245 @@
+ {
+ "cells": [
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "5638b0d1",
+ "metadata": {
+ "id": "5638b0d1"
+ },
+ "outputs": [],
+ "source": [
+ "!pip install exifread"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "# Extracting GPS Information from Images\n",
+ "\n",
+ "(You will need to modify this script based on how your dataset is stored in order to execute the code.)\n"
+ ],
+ "metadata": {
+ "id": "yT4UQMosNN02"
+ },
+ "id": "yT4UQMosNN02"
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "import os\n",
+ "\n",
+ "# train_or_test_or_validation can be \"train\", \"test\", or \"validation\".\n",
+ "train_or_test_or_validation = \"train\"\n",
+ "PATH_TO_YOUR_DATA_FOLDER = \"{PATH TO YOUR DATA FOLDER}\"\n",
+ "directory_path = f\"{PATH_TO_YOUR_DATA_FOLDER}/{train_or_test_or_validation}\"\n",
+ "output_csv = \"metadata.csv\"\n",
+ "output_csv = os.path.join(directory_path, output_csv)"
+ ],
+ "metadata": {
+ "id": "7QcjgXtAQB5Q"
+ },
+ "id": "7QcjgXtAQB5Q",
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "e1713e4e-a848-42e9-b8d8-d1273f1b9689",
+ "metadata": {
+ "id": "e1713e4e-a848-42e9-b8d8-d1273f1b9689"
+ },
+ "outputs": [],
+ "source": [
+ "import exifread, csv, json\n",
+ "\n",
+ "def get_exif_data(image_path):\n",
+ "    with open(image_path, 'rb') as image_file:\n",
+ "        tags = exifread.process_file(image_file)\n",
+ "    return tags\n",
+ "\n",
+ "def export_exif_to_json(exif_data, output_file):\n",
+ "    # Convert tags to a serializable format\n",
+ "    exif_data_serializable = {str(tag): str(value) for tag, value in exif_data.items()}\n",
+ "    with open(output_file, 'w') as json_file:\n",
+ "        json.dump(exif_data_serializable, json_file, indent=4)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "80fb378d",
+ "metadata": {
+ "id": "80fb378d"
+ },
+ "outputs": [],
+ "source": [
+ "# Convert GPS coordinates in degrees, minutes, and seconds to decimal degrees\n",
+ "def convert_to_decimal_degrees(value):\n",
+ "    d, m, s = value.values\n",
+ "    return d.num / d.den + (m.num / m.den) / 60 + (s.num / s.den) / 3600"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "### Create subfolders in {PATH_TO_YOUR_DATA_FOLDER} for each split (train/test/validation, or just train/test). Place the corresponding images into each split after randomly shuffling them, then create a metadata.csv file for each split and place it in the corresponding directory. Note that the current code only works for JPEG images; if the exported images are in another format, convert them to .jpg before running this code."
+ ],
+ "metadata": {
+ "id": "yaoJPVKrNq9N"
+ },
+ "id": "yaoJPVKrNq9N"
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "be3c347d",
+ "metadata": {
+ "id": "be3c347d"
+ },
+ "outputs": [],
+ "source": [
+ "with open(output_csv, mode='w', newline='') as csv_file:\n",
+ "    fieldnames = ['file_name', 'Latitude', 'Longitude']\n",
+ "    writer = csv.DictWriter(csv_file, fieldnames=fieldnames)\n",
+ "\n",
+ "    # Write the header row\n",
+ "    writer.writeheader()\n",
+ "    for filename in os.listdir(directory_path):\n",
+ "        if os.path.isfile(os.path.join(directory_path, filename)):\n",
+ "            exif_data = get_exif_data(os.path.join(directory_path, filename))\n",
+ "            if exif_data:\n",
+ "                gps_latitude = exif_data.get('GPS GPSLatitude', None)\n",
+ "                gps_latitude_ref = exif_data.get('GPS GPSLatitudeRef', None)\n",
+ "                gps_longitude = exif_data.get('GPS GPSLongitude', None)\n",
+ "                gps_longitude_ref = exif_data.get('GPS GPSLongitudeRef', None)\n",
+ "                if gps_latitude and gps_longitude:\n",
+ "                    # Convert latitude and longitude to decimal degrees\n",
+ "                    latitude = convert_to_decimal_degrees(gps_latitude)\n",
+ "                    longitude = convert_to_decimal_degrees(gps_longitude)\n",
+ "\n",
+ "                    # Adjust for N/S and E/W reference\n",
+ "                    if gps_latitude_ref.values[0] == 'S':\n",
+ "                        latitude = -latitude\n",
+ "                    if gps_longitude_ref.values[0] == 'W':\n",
+ "                        longitude = -longitude\n",
+ "\n",
+ "                    # Write the data to the CSV file\n",
+ "                    writer.writerow({'file_name': filename, 'Latitude': latitude, 'Longitude': longitude})"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "# Uploading and Reading a Dataset on Hugging Face"
+ ],
+ "metadata": {
+ "id": "WEkVLwUcQ7PV"
+ },
+ "id": "WEkVLwUcQ7PV"
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "!pip install datasets"
+ ],
+ "metadata": {
+ "id": "xtmhkAUSRBxp",
+ "outputId": "cab82646-49d6-431e-bb91-a5fee48296f6",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ }
+ },
+ "id": "xtmhkAUSRBxp",
+ "execution_count": null,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Collecting datasets\n",
+ " Downloading datasets-3.1.0-py3-none-any.whl.metadata (20 kB)\n",
+ "Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from datasets) (3.16.1)\n",
+ "Requirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.10/dist-packages (from datasets) (1.26.4)\n",
+ "Requirement already satisfied: pyarrow>=15.0.0 in /usr/local/lib/python3.10/dist-packages (from datasets) (17.0.0)\n",
+ "Collecting dill<0.3.9,>=0.3.0 (from datasets)\n",
+ " Downloading dill-0.3.8-py3-none-any.whl.metadata (10 kB)\n",
+ "Requirement already satisfied: pandas in /usr/local/lib/python3.10/dist-packages (from datasets) (2.2.2)\n",
+ "Requirement already satisfied: requests>=2.32.2 in /usr/local/lib/python3.10/dist-packages (from datasets) (2.32.3)\n",
+ "Requirement already satisfied: tqdm>=4.66.3 in /usr/local/lib/python3.10/dist-packages (from datasets) (4.66.6)\n",
+ "Collecting xxhash (from datasets)\n",
+ " Downloading xxhash-3.5.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (12 kB)\n",
+ "Collecting multiprocess<0.70.17 (from datasets)\n",
+ " Downloading multiprocess-0.70.16-py310-none-any.whl.metadata (7.2 kB)\n",
+ "Collecting fsspec<=2024.9.0,>=2023.1.0 (from fsspec[http]<=2024.9.0,>=2023.1.0->datasets)\n",
+ " Downloading fsspec-2024.9.0-py3-none-any.whl.metadata (11 kB)\n",
+ "Requirement already satisfied: aiohttp in /usr/local/lib/python3.10/dist-packages (from datasets) (3.10.10)\n",
+ "Requirement already satisfied: huggingface-hub>=0.23.0 in /usr/local/lib/python3.10/dist-packages (from datasets) (0.26.2)\n",
+ "Requirement already satisfied: packaging in /usr/local/lib/python3.10/dist-packages (from datasets) (24.2)\n",
+ "Requirement already satisfied: pyyaml>=5.1 in /usr/local/lib/python3.10/dist-packages (from datasets) (6.0.2)\n",
+ "Requirement already satisfied: aiohappyeyeballs>=2.3.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets) (2.4.3)\n",
+ "Requirement already satisfied: aiosignal>=1.1.2 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets) (1.3.1)\n",
+ "Requirement already satisfied: attrs>=17.3.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets) (24.2.0)\n",
+ "Requirement already satisfied: frozenlist>=1.1.1 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets) (1.5.0)\n",
+ "Requirement already satisfied: multidict<7.0,>=4.5 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets) (6.1.0)\n",
+ "Requirement already satisfied: yarl<2.0,>=1.12.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets) (1.17.1)\n",
+ "Requirement already satisfied: async-timeout<5.0,>=4.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets) (4.0.3)\n",
+ "Requirement already satisfied: typing-extensions>=3.7.4.3 in /usr/local/lib/python3.10/dist-packages (from huggingface-hub>=0.23.0->datasets) (4.12.2)\n",
+ "Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/dist-packages (from requests>=2.32.2->datasets) (3.4.0)\n",
+ "Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests>=2.32.2->datasets) (3.10)\n",
+ "Requirement already satisfied: urllib3<3,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests>=2.32.2->datasets) (2.2.3)\n",
+ "Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/dist-packages (from requests>=2.32.2->datasets) (2024.8.30)\n",
+ "Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.10/dist-packages (from pandas->datasets) (2.8.2)\n",
+ "Requirement already satisfied: pytz>=2020.1 in /usr/local/lib/python3.10/dist-packages (from pandas->datasets) (2024.2)\n",
+ "Requirement already satisfied: tzdata>=2022.7 in /usr/local/lib/python3.10/dist-packages (from pandas->datasets) (2024.2)\n",
+ "Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.10/dist-packages (from python-dateutil>=2.8.2->pandas->datasets) (1.16.0)\n",
+ "Requirement already satisfied: propcache>=0.2.0 in /usr/local/lib/python3.10/dist-packages (from yarl<2.0,>=1.12.0->aiohttp->datasets) (0.2.0)\n",
+ "Downloading datasets-3.1.0-py3-none-any.whl (480 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m480.6/480.6 kB\u001b[0m \u001b[31m6.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading dill-0.3.8-py3-none-any.whl (116 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m116.3/116.3 kB\u001b[0m \u001b[31m8.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading fsspec-2024.9.0-py3-none-any.whl (179 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m179.3/179.3 kB\u001b[0m \u001b[31m10.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading multiprocess-0.70.16-py310-none-any.whl (134 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m134.8/134.8 kB\u001b[0m \u001b[31m11.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading xxhash-3.5.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (194 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m194.1/194.1 kB\u001b[0m \u001b[31m14.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hInstalling collected packages: xxhash, fsspec, dill, multiprocess, datasets\n",
+ " Attempting uninstall: fsspec\n",
+ " Found existing installation: fsspec 2024.10.0\n",
+ " Uninstalling fsspec-2024.10.0:\n",
+ " Successfully uninstalled fsspec-2024.10.0\n",
+ "\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\n",
+ "gcsfs 2024.10.0 requires fsspec==2024.10.0, but you have fsspec 2024.9.0 which is incompatible.\u001b[0m\u001b[31m\n",
+ "\u001b[0mSuccessfully installed datasets-3.1.0 dill-0.3.8 fsspec-2024.9.0 multiprocess-0.70.16 xxhash-3.5.0\n"
+ ]
+ }
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.4"
+ },
+ "colab": {
+ "provenance": []
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+ }
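
The final section installs datasets but stops before the actual upload. A minimal sketch of the missing step, assuming the metadata.csv convention built above (the imagefolder builder pairs each image with the metadata.csv row whose file_name matches, so Latitude and Longitude become dataset columns; the dataset repo id is an assumption):

from datasets import load_dataset

# Reads each split folder (train/test/validation) together with its metadata.csv.
ds = load_dataset("imagefolder", data_dir="{PATH TO YOUR DATA FOLDER}")

# Requires a prior `huggingface-cli login` with a write token.
ds.push_to_hub("LarryD123/Image2GPS")  # assumption: the target dataset repo id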
data/train/IMG_1784.JPG ADDED

Git LFS Details

  • SHA256: ddde531620c258ccbc73cc3be806ab6d8fe35b06852e8eca1152b2cad940556c
  • Pointer size: 132 Bytes
  • Size of remote file: 8.18 MB
data/train/IMG_1785.JPG ADDED

Git LFS Details

  • SHA256: cbef0df6b4587d30264fc75ce1d3d14654b0910527c3b5dd71e75b0374ef6df6
  • Pointer size: 132 Bytes
  • Size of remote file: 6.24 MB
data/train/IMG_1786.JPG ADDED

Git LFS Details

  • SHA256: 4164b5c87d8d233aa22685100eb826af14ac2574b6346cfbfb44068c4b262882
  • Pointer size: 132 Bytes
  • Size of remote file: 8.5 MB
data/train/IMG_1787.JPG ADDED

Git LFS Details

  • SHA256: ed5cce21e848f127a79534d0e623298939048518675df75c6d5267521d5c146c
  • Pointer size: 132 Bytes
  • Size of remote file: 8.63 MB
data/train/IMG_1788.JPG ADDED

Git LFS Details

  • SHA256: b885b40945647a018d2e6b8b8c7f6e949b84e1df5d16523128f907e49d02e879
  • Pointer size: 132 Bytes
  • Size of remote file: 7.75 MB
data/train/IMG_1789.JPG ADDED

Git LFS Details

  • SHA256: 664cada856c6e0c200eff0eaeb94809c5529abdf93e81fc810cd3d3d2fca3083
  • Pointer size: 132 Bytes
  • Size of remote file: 7.58 MB
data/train/IMG_1790.JPG ADDED

Git LFS Details

  • SHA256: 690052d87c84d72406394126b96873823e2d9b9820ddceeb7f773605b7f0def7
  • Pointer size: 132 Bytes
  • Size of remote file: 8.56 MB
data/train/IMG_1791.JPG ADDED

Git LFS Details

  • SHA256: da3d745eec3d1de6978a5b3ffbcfc8551ff4b5c49153a98ee402edd8123c9255
  • Pointer size: 132 Bytes
  • Size of remote file: 7.86 MB
data/train/IMG_1792.JPG ADDED

Git LFS Details

  • SHA256: d6bf1ce8c84cdc7e8a4ec759c64a2c4b7204741f97d91c91a07c74ae65c175cd
  • Pointer size: 132 Bytes
  • Size of remote file: 7.95 MB
data/train/IMG_1793.JPG ADDED

Git LFS Details

  • SHA256: dcda3c42bf4d44460481dfdf9b9f6d534b791db75b395e6061517ecd8dd3fe94
  • Pointer size: 132 Bytes
  • Size of remote file: 7.84 MB
data/train/IMG_1794.JPG ADDED

Git LFS Details

  • SHA256: b64b4d6eab57f513aaa4962fbbd7e03f2f0fcf692fac69dff6ed7ba15788f9bc
  • Pointer size: 132 Bytes
  • Size of remote file: 8.25 MB
data/train/IMG_1795.JPG ADDED

Git LFS Details

  • SHA256: 7e504fa47bf3eefe3e236b27b28dcecc71a79cf327895bf48d040f7ff6c6e579
  • Pointer size: 132 Bytes
  • Size of remote file: 8.63 MB
data/train/IMG_1796.JPG ADDED

Git LFS Details

  • SHA256: ad4bb79126cce6bfa204eeca22c390de0dc6b8bca22cbb128a9b347d0f36411f
  • Pointer size: 132 Bytes
  • Size of remote file: 7.87 MB
data/train/IMG_1797.JPG ADDED

Git LFS Details

  • SHA256: 373bc11d717d88b0f9ae5c8baecb8a0878475852fe6d4fc805565ed6ec32b88a
  • Pointer size: 132 Bytes
  • Size of remote file: 6.82 MB
data/train/IMG_1798.JPG ADDED

Git LFS Details

  • SHA256: 3837af940041e0a9a40993c175a77cf6f3b829511c971b9686f00066ff880fd9
  • Pointer size: 132 Bytes
  • Size of remote file: 7.07 MB
data/train/IMG_1800.JPG ADDED

Git LFS Details

  • SHA256: b0916f76d0285f133fb5a2e1a8d0cefd3688027887addfe53ca5a28a455dc7fa
  • Pointer size: 132 Bytes
  • Size of remote file: 6.44 MB
data/train/IMG_1801.JPG ADDED

Git LFS Details

  • SHA256: 6f8dafb2b27ac8944bee1b43010510467a8c6d0f0d5f1958411b4cf3d6b44294
  • Pointer size: 132 Bytes
  • Size of remote file: 6.07 MB
data/train/IMG_1802.JPG ADDED

Git LFS Details

  • SHA256: d283f5bd0b979f05698e83b02abd10a4d1cb7554ae0fc744e2f2911726897218
  • Pointer size: 132 Bytes
  • Size of remote file: 5.65 MB
data/train/IMG_1803.JPG ADDED

Git LFS Details

  • SHA256: fb938d98779e2f7780c86ffa8c889bd67548e9338717e2ebc2adde55066a8534
  • Pointer size: 132 Bytes
  • Size of remote file: 6.7 MB
data/train/IMG_1804.JPG ADDED

Git LFS Details

  • SHA256: e3e73b876a2f3a21483689f821bb471388cbe50a3fe25b5ffde649a56a57767e
  • Pointer size: 132 Bytes
  • Size of remote file: 5.98 MB
data/train/IMG_1805.JPG ADDED

Git LFS Details

  • SHA256: 584762ff030457eb2a8fb2660580a86e8962f890bf9690c517824a804004b57a
  • Pointer size: 132 Bytes
  • Size of remote file: 8.05 MB
data/train/IMG_1806.JPG ADDED

Git LFS Details

  • SHA256: 02465975d1f2a2a2cedab443989758d77fa70ce50b754742feab88c61420cccf
  • Pointer size: 132 Bytes
  • Size of remote file: 9 MB
data/train/IMG_1807.JPG ADDED

Git LFS Details

  • SHA256: 57505e409dd27c8ebc5efca25a229a0f19c8dcf771b7f0b140729ac1f470011d
  • Pointer size: 132 Bytes
  • Size of remote file: 5.81 MB
data/train/IMG_1808.JPG ADDED

Git LFS Details

  • SHA256: 2be089ae5af84bba625a7bb11815f390e535d69afccdba0c8edb5a41919d58b2
  • Pointer size: 132 Bytes
  • Size of remote file: 6.12 MB
data/train/IMG_1809.JPG ADDED

Git LFS Details

  • SHA256: 83e5f727cfc19e1c43991a53417f23c106c4927541a3c8a746695cb66a18112f
  • Pointer size: 132 Bytes
  • Size of remote file: 5.77 MB
data/train/IMG_1811.JPG ADDED

Git LFS Details

  • SHA256: df93e02e257c3229f9050e00467810b0d51ff2bba77da5a9b6a846f748956766
  • Pointer size: 132 Bytes
  • Size of remote file: 5.61 MB
data/train/IMG_1812.JPG ADDED

Git LFS Details

  • SHA256: 2964607727a32426c04aefbda1c89a46fef6ac680cb1df2b507fc96c3c4a63f1
  • Pointer size: 132 Bytes
  • Size of remote file: 6.34 MB
data/train/IMG_1813.JPG ADDED

Git LFS Details

  • SHA256: 6e7b75c719c3098c24805a0a8865687974eda3c57df2a951e0b2237f85ed58f3
  • Pointer size: 132 Bytes
  • Size of remote file: 6.12 MB
data/train/IMG_1814.JPG ADDED

Git LFS Details

  • SHA256: 11a4f8085e1e9e0d42029257b92bad2000c4bdf36972cb9dc089ec4d02566301
  • Pointer size: 132 Bytes
  • Size of remote file: 7.06 MB
data/train/IMG_1815.JPG ADDED

Git LFS Details

  • SHA256: 7a74b1725c27edceb74f4c506afa7b53fc03ab2fc8e65d90ab63e8ed69d0cec3
  • Pointer size: 132 Bytes
  • Size of remote file: 6.54 MB
data/train/IMG_1817.JPG ADDED

Git LFS Details

  • SHA256: cf6a2df27e557e3ed74e8174c756061851c03a96692f13046e02cc10b156f50b
  • Pointer size: 132 Bytes
  • Size of remote file: 6.65 MB
data/train/IMG_1818.JPG ADDED

Git LFS Details

  • SHA256: e326003e286e51fea92999607c05090fc19ab7ee1c907a6d532a37852c4c2417
  • Pointer size: 132 Bytes
  • Size of remote file: 6.97 MB
data/train/IMG_1819.JPG ADDED

Git LFS Details

  • SHA256: 7f1e9788ae571fa530a7f821b964e0b244cb492f5aa4b4e3cd86b2e556fdf3c2
  • Pointer size: 132 Bytes
  • Size of remote file: 7.73 MB
data/train/IMG_1820.JPG ADDED

Git LFS Details

  • SHA256: e8dac04edbb07c2017433c99d375502eaf965fd3515de239b4c3b27b094e540d
  • Pointer size: 132 Bytes
  • Size of remote file: 7.34 MB
data/train/IMG_1821.JPG ADDED

Git LFS Details

  • SHA256: 0ca5ced760ad7d79b900130ca49d63f56e36c41273fb0823323384d73bcb2b64
  • Pointer size: 132 Bytes
  • Size of remote file: 7.05 MB
data/train/IMG_1823.JPG ADDED

Git LFS Details

  • SHA256: f105da231305655518409e3ab4fc7814461a2207977ab9b14c3c83d978eb76b3
  • Pointer size: 132 Bytes
  • Size of remote file: 8.06 MB
data/train/IMG_1824.JPG ADDED

Git LFS Details

  • SHA256: 2870682d1aa71f73d3ffd2bc8c795cf7f4a925f9001cfaa32d6345aeaf9befef
  • Pointer size: 132 Bytes
  • Size of remote file: 8.9 MB
data/train/IMG_1825.JPG ADDED

Git LFS Details

  • SHA256: 2a1a38c8d1181c4657a3ec0670a992ec4eaef6a7f7bb806f6c50063dc8253e1b
  • Pointer size: 132 Bytes
  • Size of remote file: 8.13 MB
data/train/IMG_1826.JPG ADDED

Git LFS Details

  • SHA256: 30561f4399f2053d7ff57c1db9884eef925be3fb99bc91d4c8ac067d7ea768d4
  • Pointer size: 132 Bytes
  • Size of remote file: 7.39 MB
data/train/IMG_1827.JPG ADDED

Git LFS Details

  • SHA256: f11ac48e8f2d58322098ea7ffd1fb97a8eb771e7ba28fcc77d8340ac7bf740f1
  • Pointer size: 132 Bytes
  • Size of remote file: 7.78 MB
data/train/IMG_1828.JPG ADDED

Git LFS Details

  • SHA256: dd77f9fcbd81a1dbcd52075ba6a0b1fb25111bfafd4b51aa434d98ca1d2fecd7
  • Pointer size: 132 Bytes
  • Size of remote file: 8.41 MB