Datasets:
The tensor size in the dataset does not match the embeddings generated by the "05-Generate-Major-TOM-Embeddings" notebook
Hello!
I noticed that the embedding vectors generated using the code in "05-Generate-Major-TOM-Embeddings" with the DINOv2 model have a dimensionality of 768, whereas the vectors provided in the "Core-S2RGB-DINOv2" dataset have a dimensionality of 1024. Could you please clarify the reason for this discrepancy?
Hi there,
I've resolved the previous issue by switching to a different embedding model. It turns out that the script uses DINOv2-base, while the "Core-S2RGB-DINOv2" dataset was generated using DINOv2-large. However, I've encountered another issue: even when I use DINOv2-large to generate the embeddings (which now have the same dimensionality), the resulting vectors are still different from those in the dataset. All other metadata—such as pixel bbox, grid cell, grid row u, and grid row r—remains the same.
I'm wondering whether the discrepancy might be due to differences in the data preprocessing pipeline or some internal variation in the model setup?
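For anyone hitting the same mismatch: the base/large size difference can be checked from the model configs alone, without downloading any weights. A minimal sketch, assuming the `transformers` library is installed and the Hugging Face Hub is reachable:

```python
# Sketch: compare hidden sizes of the two DINOv2 variants via their configs.
# Requires network access to fetch config files from the Hugging Face Hub.
from transformers import AutoConfig

base_cfg = AutoConfig.from_pretrained("facebook/dinov2-base")
large_cfg = AutoConfig.from_pretrained("facebook/dinov2-large")

print(base_cfg.hidden_size)   # 768  -> what the notebook script produces
print(large_cfg.hidden_size)  # 1024 -> what Core-S2RGB-DINOv2 contains
```

So the 768-dimensional vectors come from DINOv2-base, while the dataset was built with DINOv2-large.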
I also noticed the mismatched dimension of the DINOv2 embeddings. Here is a Colab notebook that reproduces the issue:
https://colab.research.google.com/github/ESA-PhiLab/Major-TOM/blob/main/03-Filtering-in-Colab.ipynb
DINOv2-large worked well for me! I recently built an S2 image retrieval app using the official DINOv2 embeddings, and the performance was quite reasonable.
Regarding the discrepancy between your generated vectors and the dataset's: it's quite common for independently computed embeddings not to be bit-for-bit identical. Possible causes include numerical precision differences across GPU hardware and kernels, differences in the data preprocessing pipeline (resizing, normalization), and inference precision (e.g. fp16 vs fp32).
While the raw numbers might differ, the cosine similarity between your computed embeddings and the official ones should be close to 1.0.
Here are the embeddings I computed, along with a sample of the official embeddings: https://huggingface.co/datasets/ML4Sustain/EarthEmbeddings.