v3.3 license check (#5), opened by egrace479
- .gitattributes +10 -0
- README.md +6 -0
- data/eol_files/catalog-media.csv +3 -0
- data/eol_files/catalog_missing_media_pages.csv +3 -0
- data/eol_files/dec6_pages.csv +3 -0
- data/eol_files/eol_cp_not_media.csv +3 -0
- data/eol_files/eol_licenses.csv +3 -0
- data/eol_files/eol_licenses_missing_owner.csv +3 -0
- data/eol_files/eol_media_cargo_names.csv +3 -0
- data/eol_files/eol_media_duplicates.csv +3 -0
- data/eol_files/eol_missing_content_ids.csv +3 -0
- data/eol_files/eol_taxa_df_num_missing_pg.csv +3 -0
- data/eol_files/eol_taxa_missing_content_ids.csv +3 -0
- data/eol_files/jul26_pages.csv +3 -0
- data/eol_files/media_content_not_catalog.csv +3 -0
- data/eol_files/media_manifest_missing_licenses_jul26.csv +3 -0
- data/eol_files/media_old_pages.csv +3 -0
- data/licenses.csv +3 -0
- notebooks/BioCLIP_taxa_viz.ipynb +3 -0
- notebooks/BioCLIP_taxa_viz.py +184 -0
- notebooks/ToL_catalog_EDA.ipynb +209 -16
- notebooks/ToL_catalog_EDA.py +39 -1
- notebooks/ToL_license_check.ipynb +0 -0
- notebooks/ToL_license_check.py +588 -0
- notebooks/ToL_media_mismatch.ipynb +1792 -0
- notebooks/ToL_media_mismatch.py +303 -0
- scripts/make_licenses.py +101 -0
- scripts/match_owners.py +215 -0
- visuals/{fullData_phyla.png → category-v1-visuals/fullData_phyla.png} +0 -0
- visuals/{inat_phyla.png → category-v1-visuals/inat_phyla.png} +0 -0
- visuals/{num_images_class_y.png → category-v1-visuals/num_images_class_y.png} +0 -0
- visuals/{num_images_kingdom.png → category-v1-visuals/num_images_kingdom.png} +0 -0
- visuals/{num_images_order_y.png → category-v1-visuals/num_images_order_y.png} +0 -0
- visuals/{num_images_phylum_y.png → category-v1-visuals/num_images_phylum_y.png} +0 -0
- visuals/{num_phyla_kingdom.png → category-v1-visuals/num_phyla_kingdom.png} +0 -0
- visuals/{num_species_kingdom.png → category-v1-visuals/num_species_kingdom.png} +0 -0
- visuals/{phyla_ToL.pdf → category-v1-visuals/phyla_ToL.pdf} +0 -0
- visuals/{phyla_ToL.png → category-v1-visuals/phyla_ToL.png} +0 -0
- visuals/{phyla_ToL_scale1.pdf → category-v1-visuals/phyla_ToL_scale1.pdf} +0 -0
- visuals/category-v1-visuals/phyla_ToL_tree_cat-v1.html +0 -0
- visuals/{phyla_ToL_wh.pdf → category-v1-visuals/phyla_ToL_wh.pdf} +0 -0
- visuals/{phyla_iNat21.png → category-v1-visuals/phyla_iNat21.png} +0 -0
- visuals/kingdom_ToL_tree.html +0 -0
- visuals/kingdom_ToL_tree.pdf +3 -0
- visuals/phyla_ToL_tree.html +0 -0
- visuals/phyla_ToL_tree.pdf +3 -0
.gitattributes
CHANGED
@@ -53,9 +53,19 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.jpg filter=lfs diff=lfs merge=lfs -text
 *.jpeg filter=lfs diff=lfs merge=lfs -text
 *.webp filter=lfs diff=lfs merge=lfs -text
+
 # Large notebooks and data
 data/v1-dev-names.csv filter=lfs diff=lfs merge=lfs -text
 notebooks/BioCLIP_taxa_viz_bySource.ipynb filter=lfs diff=lfs merge=lfs -text
+notebooks/BioCLIP_taxa_viz.ipynb filter=lfs diff=lfs merge=lfs -text
 statistics.csv filter=lfs diff=lfs merge=lfs -text
 data/catalog*.csv filter=lfs diff=lfs merge=lfs -text
 data/predicted-catalog.csv filter=lfs diff=lfs merge=lfs -text
+data/licenses.csv filter=lfs diff=lfs merge=lfs -text
+
+# license and eol source related files
+data/eol_files/*.csv filter=lfs diff=lfs merge=lfs -text
+
+# visualizations
+visuals/kingdom_ToL_tree.pdf filter=lfs diff=lfs merge=lfs -text
+visuals/phyla_ToL_tree.pdf filter=lfs diff=lfs merge=lfs -text
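The new `.gitattributes` rules route the large CSVs, notebooks, and PDFs through Git LFS. As a rough illustration of which repo paths the added globs cover, here is a sketch using Python's `fnmatch` — this only approximates Git's attribute-pattern semantics (Git matches bare patterns like `*.jpg` against the basename at any depth and anchors patterns containing `/`), but it is close enough for these simple rules:

```python
from fnmatch import fnmatch

# A few of the LFS rules added in this PR's .gitattributes.
rules = ["*.jpg", "data/catalog*.csv", "data/eol_files/*.csv",
         "visuals/phyla_ToL_tree.pdf"]

def is_lfs(path: str) -> bool:
    """Approximate Git attribute matching: bare patterns (no '/') are
    matched against the basename, patterns with '/' against the full path."""
    basename = path.rsplit("/", 1)[-1]
    return any(
        fnmatch(path if "/" in rule else basename, rule)
        for rule in rules
    )

for path in ["data/eol_files/eol_licenses.csv", "data/catalog.csv",
             "visuals/phyla_ToL_tree.pdf", "README.md"]:
    print(path, "->", "LFS" if is_lfs(path) else "regular")
```

Anything matching one of these rules is stored as an LFS pointer rather than as regular blob content; `README.md` matches none of them and stays a normal file.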
README.md
CHANGED
@@ -29,6 +29,7 @@ The `notebooks` folder contains
 - `ToL_catalog_EDA.py`: py file paired to `ToL_catalog_EDA.ipynb` to facilitate diff checking in case of cell text changes in notebook.
 - `ToL_predicted-catalog_EDA.ipynb`: more full EDA of TreeOfLife10M dataset using `predicted-catalog.csv`. To be updated as `predicted-catalog.csv` is updated, i.e., as the dataset is updated.
 - `ToL_predicted-catalog_EDA.py`: py file paired to `ToL_predicted-catalog_EDA.ipynb` to facilitate diff checking in case of cell text changes in notebook.
+- `BioCLIP_taxa_viz.ipynb`: generates data visualizations, in particular the treemaps in the `visuals` folder; also includes a histogram of kingdoms. The treemaps produced in the notebook are interactive, and interactive `HTML` versions can also be found in [`visuals`](https://huggingface.co/datasets/imageomics/ToL-EDA/tree/main/visuals).
 
 - `BioCLIP_data_viz.ipynb`: notebook with quick basic stats for `catalog-v1-dev.csv`, generates `taxa_counts.csv`.
 - `BioCLIP_taxa_viz_bySource.ipynb`: generates data visualizations, in particular the visualizations in the `visuals` folder and some histograms. The treemaps produced in the notebook are interactive.
@@ -41,4 +42,9 @@ direction of standardization efforts. Maintained for v1 reference, should not be
 ### Visuals
 
 Visualizations generated to demonstrate the distribution and diversity within the phyla of TreeOfLife10M.
+- `category-v1-visuals`: visualizations made for the original `catalog-v1-dev.csv` using `BioCLIP_taxa_viz_bySource.ipynb`.
 There is also one for just the iNat21 data included.
+- `kingdom_ToL_tree.html`: interactive treemap from `kingdom` to `family` to demonstrate the distribution of the data. 2:1 aspect ratio.
+- `kingdom_ToL_tree.pdf`: static treemap from `kingdom` to `family` to demonstrate the distribution of the data. 2:1 aspect ratio.
+- `phyla_ToL_tree.html`: interactive treemap from `phylum` to `family` to demonstrate the distribution of the data. 2:1 aspect ratio.
+- `phyla_ToL_tree.pdf`: static treemap from `phylum` to `family` to demonstrate the distribution of the data. 2:1 aspect ratio.
data/eol_files/catalog-media.csv
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7463ca45b32422816bdfef47a0054e0be9ff20e37abaad021067f8cf03b49ea5
+size 1673454822

data/eol_files/catalog_missing_media_pages.csv
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:325d2fc8fc7be58a0474fc0bd1ad06834277046fc9552a66fe45ae058427d604
+size 78302

data/eol_files/dec6_pages.csv
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:29476173ff95b7cd3199bc1401f2f3efe6b34700436945d5e4cb6358851199c5
+size 139323931

data/eol_files/eol_cp_not_media.csv
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:25fb2f25447f09ac53ea1a180c653eb60b31dcddc304ab2c54efc0e5914b2c7b
+size 6009977

data/eol_files/eol_licenses.csv
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:94ffc572135891302dc4a2cc02aaac591167ca00ffea6393c710282c01335f5e
+size 542763601

data/eol_files/eol_licenses_missing_owner.csv
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:eeafb084f984c8d0bee9f26a186fe7310de7b93517ad18a48c8cc0ecba3a82a4
+size 41734670

data/eol_files/eol_media_cargo_names.csv
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f529b9508771df466e15df5d1a77fc21fe90657ae435b3e2e9d8e4a09425d0cd
+size 248486139

data/eol_files/eol_media_duplicates.csv
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0884b5791bbf351ac9fc7807577a6c6fd3be7527409fa1e36ae6e210a7c54ee1
+size 4138028

data/eol_files/eol_missing_content_ids.csv
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:baf3944705e57a46ea9232e2dacbb8abfff88202f6180e3d5d34b097371cc028
+size 7031298

data/eol_files/eol_taxa_df_num_missing_pg.csv
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:67482c95c346ad0ef439195f59920ba185ed55f8ad250d0d83031e1924d7ec9d
+size 1443558

data/eol_files/eol_taxa_missing_content_ids.csv
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:53a5a12fbf62515ad18a4451fa1e4dc6724e3dbd354f38f272ccc39dcb3c25f6
+size 16906564

data/eol_files/jul26_pages.csv
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5a09f69b060635578ec5c205ded5149babfaa7016e053b1a6c5ed0d2bd93e2ed
+size 139322911

data/eol_files/media_content_not_catalog.csv
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6c2bd3358f4e6fca94aa9ebc4214a1003b4552616d5dfba2727210268a3fe490
+size 95707673

data/eol_files/media_manifest_missing_licenses_jul26.csv
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c6bdf21c098dc70743963844bfde3719b3d79577b905e73946590ff95f9c057f
+size 117190109

data/eol_files/media_old_pages.csv
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f207122e446586d59df2d5e3f466535703f9bfe8e62f6250816141458e740cd7
+size 126220

data/licenses.csv
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:40a9690d1455d234acf74e706469a2f99a0ee645d59d712af83bc2616782230f
+size 1963786228

notebooks/BioCLIP_taxa_viz.ipynb
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:71f9c4c70f74a43c869568b2eb5f46a782512391875833096ba8678b888f09ee
+size 237174718
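Each of the files added above is stored as a Git LFS pointer: a three-line stub recording the spec version, the SHA-256 digest of the real content, and its byte size, while the content itself lives in LFS storage. A minimal sketch of reading one such pointer (the example values are the `data/licenses.csv` pointer from this PR):

```python
# Parse a Git LFS pointer file (the three-line stub Git stores in place of
# the real file) into a dict of its key/value fields.
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:40a9690d1455d234acf74e706469a2f99a0ee645d59d712af83bc2616782230f
size 1963786228
"""
info = parse_lfs_pointer(pointer)
print(info["size"])  # byte size of the real data/licenses.csv (~1.96 GB)
```

The `size` field is why the diff shows `+3 -0` for multi-gigabyte CSVs: only the pointer is committed.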
notebooks/BioCLIP_taxa_viz.py
ADDED
@@ -0,0 +1,184 @@
+# ---
+# jupyter:
+#   jupytext:
+#     formats: ipynb,py:percent
+#     text_representation:
+#       extension: .py
+#       format_name: percent
+#       format_version: '1.3'
+#     jupytext_version: 1.15.2
+#   kernelspec:
+#     display_name: viz
+#     language: python
+#     name: python3
+# ---
+
+# %%
+import pandas as pd
+import seaborn as sns
+import plotly.express as px
+
+sns.set_style("whitegrid")
+sns.set(rc = {'figure.figsize': (10,10)})
+
+# %% [markdown]
+# # Number of Images by Taxonomic Rank
+
+# %%
+df = pd.read_csv("../data/catalog.csv")
+
+# %%
+# Add data_source column for easier slicing
+df.loc[df['inat21_filename'].notna(), 'data_source'] = 'iNat21'
+df.loc[df['bioscan_filename'].notna(), 'data_source'] = 'BIOSCAN'
+df.loc[df['eol_content_id'].notna(), 'data_source'] = 'EOL'
+
+# %%
+taxa = list(df.columns[9:16])
+taxa
+
+# %% [markdown]
+# Shrink down to just columns we may want for graphing.
+
+# %%
+columns = taxa.copy()
+columns.insert(0, 'data_source')
+columns.append('common')
+
+# %%
+df_taxa = df[columns]
+df_taxa.head()
+
+# %% [markdown]
+# Since the pie charts didn't show much change for phylum, let's try a treemap so we also get a sense of all the diversity inside.
+
+# %%
+# Drop null phylum values
+df_phylum = df_taxa.loc[df_taxa.phylum.notna()]
+
+# %%
+# Fill null lower ranks with 'unknown' for graphing purposes
+df_phylum = df_phylum.fillna('unknown')
+
+# %% [markdown]
+# Get list of all phyla and set color scheme. We'll then assign a color to each phylum so they're consistent across the two charts.
+
+# %%
+phyla = list(df_phylum.phylum.unique())
+colors = px.colors.qualitative.Bold
+
+# %%
+color_map = {}
+i = 0
+for phylum in phyla:
+    # There are only 10 colors in the sequence, so we'll need to loop through it a few times to assign all 49 phyla
+    i = i%10
+    color_map[phylum] = colors[i]
+    i += 1
+
+# %%
+# Distribution of Phyla and Lower Taxa (to family) in TreeOfLife10M
+# Minimize margins, set aspect ratio to 2:1
+
+fig_phyla = px.treemap(df_phylum, path = ['phylum', 'class', 'order', 'family'],
+                       color = 'phylum',
+                       color_discrete_map = color_map)
+
+fig_phyla.update_scenes(aspectratio = {'x': 2, 'y': 1})
+fig_phyla.update_layout(font = {'size': 18},
+                        margin = {
+                            'l': 0,
+                            'r': 0,
+                            't': 0,
+                            'b': 0
+                        })
+fig_phyla.show()
+
+# %%
+fig_phyla.write_html("../visuals/phyla_ToL_tree.html")
+
+# %% [markdown]
+# Aspect ratio set in the plot doesn't work for export (unless using the png export on the graph itself), so we'll set the size manually.
+
+# %%
+fig_phyla.write_image("../visuals/phyla_ToL_tree.pdf", width = 900, height = 450)
+
+# %% [markdown]
+# ## Images by Kingdom
+
+# %%
+df_kingdom = df_taxa.loc[df_taxa.kingdom.notna()]
+df_kingdom.head()
+
+# %%
+# Drop null kingdom values
+df_kingdom = df_taxa.loc[df_taxa.kingdom.notna()]
+
+# %%
+# Fill null lower ranks with 'unknown' for graphing purposes
+df_kingdom = df_kingdom.fillna('unknown')
+
+# %% [markdown]
+# Get list of all kingdoms and set color scheme. We'll then assign a color to each kingdom so they're consistent across the two charts.
+
+# %%
+kingdoms = list(df_kingdom.kingdom.unique())
+#colors = px.colors.qualitative.Bold
+
+# %%
+king_color_map = {}
+i = 0
+for kingdom in kingdoms:
+    # There are only 10 colors in the sequence, so we'll need to loop through it once to assign all 12 kingdoms
+    i = i%10
+    king_color_map[kingdom] = colors[i]
+    i += 1
+
+# %%
+# Distribution of Kingdoms and Lower Taxa in TreeOfLife10M
+# Minimize margins, set aspect ratio to 2:1
+
+fig_king = px.treemap(df_kingdom, path = ['kingdom', 'phylum', 'class', 'order', 'family'],
+                      color = 'kingdom',
+                      color_discrete_map = king_color_map)
+
+fig_king.update_scenes(aspectratio = {'x': 2, 'y': 1})
+fig_king.update_layout(font = {'size': 14},
+                       margin = {
+                           'l': 0,
+                           'r': 0,
+                           't': 0,
+                           'b': 0
+                       })
+fig_king.show()
+
+# %%
+fig_king.write_html("../visuals/kingdom_ToL_tree.html")
+
+# %% [markdown]
+# Aspect ratio set in the plot doesn't work for export (unless using the png export on the graph itself), so we'll set the size manually.
+
+# %%
+fig_king.write_image("../visuals/kingdom_ToL_tree.pdf", width = 900, height = 450)
+
+# %% [markdown]
+# ### Histograms for Kingdom
+
+# %%
+fig = px.histogram(df_kingdom,
+                   x = 'kingdom',
+                   #y = 'num_species',
+                   color = 'kingdom',
+                   color_discrete_sequence = px.colors.qualitative.Bold,
+                   labels = {
+                       'kingdom': "Kingdom",
+                       #'num_phyla' : "Number of distinct species"
+                   },
+                   #text_auto=False
+                   )
+fig.update_layout(title = "Number of Images by Kingdom",
+                  yaxis_title = "Number of Images")
+
+fig.show()
+
+# %%
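A detail worth noting in the notebook above: plotly's `Bold` qualitative palette has only 10 colors, so the loop assigns colors to the 49 phyla by cycling the index with `i % 10`. A standalone sketch of the same idea (palette entries and phylum names here are placeholders, not the real Bold colors or taxa):

```python
# Cycle a fixed palette over an arbitrary number of categories, as the
# notebook does for its 49 phyla with plotly's 10-color Bold palette.
def build_color_map(categories, palette):
    return {cat: palette[i % len(palette)] for i, cat in enumerate(categories)}

palette = [f"color{i}" for i in range(10)]   # stand-in for the 10 Bold colors
phyla = [f"phylum_{i}" for i in range(49)]   # stand-in for the 49 phyla
color_map = build_color_map(phyla, palette)
print(color_map["phylum_0"], color_map["phylum_10"])  # both "color0": palette repeats every 10
```

The trade-off is that categories 10 apart share a color; passing the resulting dict as `color_discrete_map` keeps the assignment consistent across both treemaps, which is the point of building it once.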
notebooks/ToL_catalog_EDA.ipynb
CHANGED
|
@@ -22,7 +22,7 @@
|
|
| 22 |
"name": "stderr",
|
| 23 |
"output_type": "stream",
|
| 24 |
"text": [
|
| 25 |
-
"/var/folders/nv/f0fq1p1n1_3b11x579py_0q80000gq/T/
|
| 26 |
" df = pd.read_csv(\"../data/catalog.csv\")\n"
|
| 27 |
]
|
| 28 |
}
|
|
@@ -269,7 +269,7 @@
|
|
| 269 |
},
|
| 270 |
{
|
| 271 |
"cell_type": "code",
|
| 272 |
-
"execution_count":
|
| 273 |
"metadata": {},
|
| 274 |
"outputs": [],
|
| 275 |
"source": [
|
|
@@ -278,7 +278,7 @@
|
|
| 278 |
},
|
| 279 |
{
|
| 280 |
"cell_type": "code",
|
| 281 |
-
"execution_count":
|
| 282 |
"metadata": {},
|
| 283 |
"outputs": [
|
| 284 |
{
|
|
@@ -431,7 +431,7 @@
|
|
| 431 |
},
|
| 432 |
{
|
| 433 |
"cell_type": "code",
|
| 434 |
-
"execution_count":
|
| 435 |
"metadata": {},
|
| 436 |
"outputs": [
|
| 437 |
{
|
|
@@ -440,7 +440,7 @@
|
|
| 440 |
"['kingdom', 'phylum', 'class', 'order', 'family', 'genus', 'species']"
|
| 441 |
]
|
| 442 |
},
|
| 443 |
-
"execution_count":
|
| 444 |
"metadata": {},
|
| 445 |
"output_type": "execute_result"
|
| 446 |
}
|
|
@@ -459,7 +459,7 @@
|
|
| 459 |
},
|
| 460 |
{
|
| 461 |
"cell_type": "code",
|
| 462 |
-
"execution_count":
|
| 463 |
"metadata": {},
|
| 464 |
"outputs": [
|
| 465 |
{
|
|
@@ -735,7 +735,7 @@
|
|
| 735 |
},
|
| 736 |
{
|
| 737 |
"cell_type": "code",
|
| 738 |
-
"execution_count":
|
| 739 |
"metadata": {},
|
| 740 |
"outputs": [],
|
| 741 |
"source": [
|
|
@@ -784,7 +784,7 @@
|
|
| 784 |
},
|
| 785 |
{
|
| 786 |
"cell_type": "code",
|
| 787 |
-
"execution_count":
|
| 788 |
"metadata": {},
|
| 789 |
"outputs": [
|
| 790 |
{
|
|
@@ -793,7 +793,7 @@
|
|
| 793 |
"439910"
|
| 794 |
]
|
| 795 |
},
|
| 796 |
-
"execution_count":
|
| 797 |
"metadata": {},
|
| 798 |
"output_type": "execute_result"
|
| 799 |
}
|
|
@@ -804,7 +804,7 @@
|
|
| 804 |
},
|
| 805 |
{
|
| 806 |
"cell_type": "code",
|
| 807 |
-
"execution_count":
|
| 808 |
"metadata": {},
|
| 809 |
"outputs": [
|
| 810 |
{
|
|
@@ -813,7 +813,7 @@
|
|
| 813 |
"9947"
|
| 814 |
]
|
| 815 |
},
|
| 816 |
-
"execution_count":
|
| 817 |
"metadata": {},
|
| 818 |
"output_type": "execute_result"
|
| 819 |
}
|
|
@@ -824,7 +824,7 @@
|
|
| 824 |
},
|
| 825 |
{
|
| 826 |
"cell_type": "code",
|
| 827 |
-
"execution_count":
|
| 828 |
"metadata": {},
|
| 829 |
"outputs": [
|
| 830 |
{
|
|
@@ -833,7 +833,7 @@
|
|
| 833 |
"7758"
|
| 834 |
]
|
| 835 |
},
|
| 836 |
-
"execution_count":
|
| 837 |
"metadata": {},
|
| 838 |
"output_type": "execute_result"
|
| 839 |
}
|
|
@@ -862,7 +862,7 @@
|
|
| 862 |
},
|
| 863 |
{
|
| 864 |
"cell_type": "code",
|
| 865 |
-
"execution_count":
|
| 866 |
"metadata": {},
|
| 867 |
"outputs": [],
|
| 868 |
"source": [
|
|
@@ -1007,7 +1007,7 @@
|
|
| 1007 |
},
|
| 1008 |
{
|
| 1009 |
"cell_type": "code",
|
| 1010 |
-
"execution_count":
|
| 1011 |
"metadata": {},
|
| 1012 |
"outputs": [],
|
| 1013 |
"source": [
|
|
@@ -3641,6 +3641,13 @@
|
|
| 3641 |
"That's a good number of images, so unlikely to be the cause."
|
| 3642 |
]
|
| 3643 |
},
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 3644 |
{
|
| 3645 |
"cell_type": "markdown",
|
| 3646 |
"metadata": {},
|
|
@@ -3710,11 +3717,197 @@
|
|
| 3710 |
"cell_type": "markdown",
|
| 3711 |
"metadata": {},
|
| 3712 |
"source": [
|
| 3713 |
-
"BIOSCAN and iNat21's overlap of genera is completely contained in EOL.\n",
|
| 3714 |
"\n",
|
| 3715 |
"No changes here."
|
| 3716 |
]
|
| 3717 |
},
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 3718 |
{
|
| 3719 |
"cell_type": "markdown",
|
| 3720 |
"metadata": {},
|
|
|
|
| 22 |
"name": "stderr",
|
| 23 |
"output_type": "stream",
|
| 24 |
"text": [
|
| 25 |
+
"/var/folders/nv/f0fq1p1n1_3b11x579py_0q80000gq/T/ipykernel_11858/2566980770.py:1: DtypeWarning: Columns (5,6,7) have mixed types. Specify dtype option on import or set low_memory=False.\n",
|
| 26 |
" df = pd.read_csv(\"../data/catalog.csv\")\n"
|
| 27 |
]
|
| 28 |
}
|
|
|
|
| 269 |
},
|
| 270 |
{
|
| 271 |
"cell_type": "code",
|
| 272 |
+
"execution_count": 3,
|
| 273 |
"metadata": {},
|
| 274 |
"outputs": [],
|
| 275 |
"source": [
|
|
|
|
| 278 |
},
|
| 279 |
{
|
| 280 |
"cell_type": "code",
|
| 281 |
+
"execution_count": 4,
|
| 282 |
"metadata": {},
|
| 283 |
"outputs": [
|
| 284 |
{
|
|
|
|
| 431 |
},
|
| 432 |
{
|
| 433 |
"cell_type": "code",
|
| 434 |
+
"execution_count": 5,
|
| 435 |
"metadata": {},
|
| 436 |
"outputs": [
|
| 437 |
{
|
|
|
|
| 440 |
"['kingdom', 'phylum', 'class', 'order', 'family', 'genus', 'species']"
|
| 441 |
]
|
| 442 |
},
|
| 443 |
+
"execution_count": 5,
|
| 444 |
"metadata": {},
|
| 445 |
"output_type": "execute_result"
|
| 446 |
}
|
|
|
|
| 459 |
},
|
| 460 |
{
|
| 461 |
"cell_type": "code",
|
| 462 |
+
"execution_count": 6,
|
| 463 |
"metadata": {},
|
| 464 |
"outputs": [
|
| 465 |
{
|
|
|
|
| 735 |
},
|
| 736 |
{
|
| 737 |
"cell_type": "code",
|
| 738 |
+
"execution_count": 7,
|
| 739 |
"metadata": {},
|
| 740 |
"outputs": [],
|
| 741 |
"source": [
|
|
|
|
| 784 |
},
|
| 785 |
{
|
| 786 |
"cell_type": "code",
|
| 787 |
+
"execution_count": 8,
|
| 788 |
"metadata": {},
|
| 789 |
"outputs": [
|
| 790 |
{
|
|
|
|
| 793 |
"439910"
|
| 794 |
]
|
| 795 |
},
|
| 796 |
+
"execution_count": 8,
|
| 797 |
"metadata": {},
|
| 798 |
"output_type": "execute_result"
|
| 799 |
}
|
|
|
|
| 804 |
},
|
| 805 |
{
|
| 806 |
"cell_type": "code",
|
| 807 |
+
"execution_count": 9,
|
| 808 |
"metadata": {},
|
| 809 |
"outputs": [
|
| 810 |
{
|
|
|
|
| 813 |
"9947"
|
| 814 |
]
|
| 815 |
},
|
| 816 |
+
"execution_count": 9,
|
| 817 |
"metadata": {},
|
| 818 |
"output_type": "execute_result"
|
| 819 |
}
|
|
|
|
| 824 |
},
|
| 825 |
{
|
| 826 |
"cell_type": "code",
|
| 827 |
+
"execution_count": 10,
|
| 828 |
"metadata": {},
|
| 829 |
"outputs": [
|
| 830 |
{
|
|
|
|
| 833 |
"7758"
|
| 834 |
]
|
| 835 |
},
|
| 836 |
+
"execution_count": 10,
|
| 837 |
"metadata": {},
|
| 838 |
"output_type": "execute_result"
|
| 839 |
}
|
|
|
|
| 862 |
},
|
| 863 |
{
|
| 864 |
"cell_type": "code",
|
| 865 |
+
"execution_count": 11,
|
| 866 |
"metadata": {},
|
| 867 |
"outputs": [],
|
| 868 |
"source": [
|
|
|
|
| 1007 |
},
|
| 1008 |
{
|
| 1009 |
"cell_type": "code",
|
| 1010 |
+
"execution_count": 12,
|
| 1011 |
"metadata": {},
|
| 1012 |
"outputs": [],
|
| 1013 |
"source": [
|
|
|
|
| 3641 |
"That's a good number of images, so unlikely to be the cause."
|
| 3642 |
]
|
| 3643 |
},
|
| 3644 |
+
{
|
| 3645 |
+
"cell_type": "markdown",
|
| 3646 |
+
"metadata": {},
|
| 3647 |
+
"source": [
|
| 3648 |
+
"## Diversity Between Datasets"
|
| 3649 |
+
]
|
| 3650 |
+
},
|
| 3651 |
{
|
| 3652 |
"cell_type": "markdown",
|
| 3653 |
"metadata": {},
|
|
|
|
| 3717 |
"cell_type": "markdown",
|
| 3718 |
"metadata": {},
|
| 3719 |
"source": [
|
| 3720 |
+
"BIOSCAN and iNat21's overlap of genera is almost completely contained in EOL.\n",
|
| 3721 |
"\n",
|
| 3722 |
"No changes here."
|
| 3723 |
]
|
| 3724 |
},
|
| 3725 |
+
{
|
| 3726 |
+
"cell_type": "markdown",
|
| 3727 |
+
"metadata": {},
|
| 3728 |
+
"source": [
|
| 3729 |
+
"#### More thorough diversity check with 7-tuple taxa\n",
|
| 3730 |
+
"\n",
|
| 3731 |
+
"We'll first filter down to all unique 7-tuple taxa (by data source). Then, we'll reduce down to just EOL and iNat21 to determine how much EOL adds to iNat21's diversity. The remainder from the total is added by BIOSCAN."
|
| 3732 |
+
]
|
| 3733 |
+
},
|
| 3734 |
+
{
|
| 3735 |
+
"cell_type": "code",
|
| 3736 |
+
"execution_count": 13,
|
| 3737 |
+
"metadata": {},
|
| 3738 |
+
"outputs": [],
|
| 3739 |
+
"source": [
|
| 3740 |
+
"source_taxa = ['data_source', 'kingdom', 'phylum', 'class', 'order', 'family', 'genus', 'species']"
|
| 3741 |
+
]
|
| 3742 |
+
},
|
| 3743 |
+
{
|
| 3744 |
+
"cell_type": "code",
|
| 3745 |
+
"execution_count": 14,
|
| 3746 |
+
"metadata": {},
|
| 3747 |
+
"outputs": [
|
| 3748 |
+
{
|
| 3749 |
+
"name": "stdout",
|
| 3750 |
+
"output_type": "stream",
|
| 3751 |
+
"text": [
|
| 3752 |
+
"<class 'pandas.core.frame.DataFrame'>\n",
|
| 3753 |
+
"Index: 466741 entries, 956203 to 11000902\n",
|
| 3754 |
+
"Data columns (total 19 columns):\n",
|
| 3755 |
+
" # Column Non-Null Count Dtype \n",
|
| 3756 |
+
"--- ------ -------------- ----- \n",
|
| 3757 |
+
" 0 split 466741 non-null object \n",
|
| 3758 |
+
" 1 treeoflife_id 466741 non-null object \n",
|
| 3759 |
+
" 2 eol_content_id 448910 non-null float64\n",
|
| 3760 |
+
" 3 eol_page_id 448910 non-null float64\n",
|
| 3761 |
+
" 4 bioscan_part 7831 non-null float64\n",
|
| 3762 |
+
" 5 bioscan_filename 7831 non-null object \n",
|
| 3763 |
+
" 6 inat21_filename 10000 non-null object \n",
|
| 3764 |
+
" 7 inat21_cls_name 10000 non-null object \n",
|
| 3765 |
+
" 8 inat21_cls_num 10000 non-null float64\n",
|
| 3766 |
+
" 9 kingdom 437587 non-null object \n",
|
| 3767 |
+
" 10 phylum 438050 non-null object \n",
|
| 3768 |
+
" 11 class 436934 non-null object \n",
|
| 3769 |
+
" 12 order 437280 non-null object \n",
|
| 3770 |
+
" 13 family 437137 non-null object \n",
|
| 3771 |
+
" 14 genus 439711 non-null object \n",
|
| 3772 |
+
" 15 species 424855 non-null object \n",
|
| 3773 |
+
" 16 common 466741 non-null object \n",
|
| 3774 |
+
" 17 data_source 466741 non-null object \n",
|
| 3775 |
+
" 18 duplicate 466741 non-null bool \n",
|
| 3776 |
+
"dtypes: bool(1), float64(4), object(14)\n",
|
| 3777 |
+
"memory usage: 68.1+ MB\n"
|
| 3778 |
+
]
|
| 3779 |
+
}
|
| 3780 |
+
],
|
| 3781 |
+
"source": [
|
| 3782 |
+
"df['duplicate'] = df.duplicated(subset = source_taxa, keep = 'first')\n",
|
| 3783 |
+
"df_unique_taxa = df.loc[~df['duplicate']]\n",
|
| 3784 |
+
"df_unique_taxa.info(show_counts=True)"
|
| 3785 |
+
]
|
| 3786 |
+
},
|
| 3787 |
+
{
|
| 3788 |
+
"cell_type": "markdown",
|
| 3789 |
+
"metadata": {},
|
| 3790 |
+
"source": [
|
| 3791 |
+
"We have 466,741 unique taxa by source (i.e., the sum of unique 7-tuples within EOL, iNat21, and BIOSCAN, without considering overlaps between them). Our actual unique 7-tuple taxa count for the full dataset is 454,103, so we have about 12,600 taxa shared between our constituent parts.\n",
|
| 3792 |
+
"\n",
|
| 3793 |
+
"Now, let's reduce this down to just EOL and iNat21."
|
| 3794 |
+
]
|
| 3795 |
+
},
|
| 3796 |
+
{
|
| 3797 |
+
"cell_type": "code",
|
| 3798 |
+
"execution_count": 15,
|
| 3799 |
+
"metadata": {},
|
| 3800 |
+
"outputs": [
|
| 3801 |
+
{
|
| 3802 |
+
"name": "stdout",
|
| 3803 |
+
"output_type": "stream",
|
| 3804 |
+
"text": [
|
| 3805 |
+
"<class 'pandas.core.frame.DataFrame'>\n",
|
| 3806 |
+
"Index: 458910 entries, 956203 to 11000902\n",
|
| 3807 |
+
"Data columns (total 19 columns):\n",
|
| 3808 |
+
" # Column Non-Null Count Dtype \n",
|
| 3809 |
+
"--- ------ -------------- ----- \n",
|
| 3810 |
+
" 0 split 458910 non-null object \n",
|
| 3811 |
+
" 1 treeoflife_id 458910 non-null object \n",
|
| 3812 |
+
" 2 eol_content_id 448910 non-null float64\n",
|
| 3813 |
+
" 3 eol_page_id 448910 non-null float64\n",
|
| 3814 |
+
" 4 bioscan_part 0 non-null float64\n",
|
| 3815 |
+
" 5 bioscan_filename 0 non-null object \n",
|
| 3816 |
+
" 6 inat21_filename 10000 non-null object \n",
|
| 3817 |
+
" 7 inat21_cls_name 10000 non-null object \n",
|
| 3818 |
+
" 8 inat21_cls_num 10000 non-null float64\n",
|
| 3819 |
+
" 9 kingdom 429756 non-null object \n",
|
| 3820 |
+
" 10 phylum 430219 non-null object \n",
|
| 3821 |
+
" 11 class 429103 non-null object \n",
|
| 3822 |
+
" 12 order 429449 non-null object \n",
|
| 3823 |
+
" 13 family 429320 non-null object \n",
|
| 3824 |
+
" 14 genus 432307 non-null object \n",
|
| 3825 |
+
" 15 species 419345 non-null object \n",
|
| 3826 |
+
" 16 common 458910 non-null object \n",
|
| 3827 |
+
" 17 data_source 458910 non-null object \n",
|
| 3828 |
+
" 18 duplicate 458910 non-null bool \n",
|
| 3829 |
+
"dtypes: bool(1), float64(4), object(14)\n",
|
| 3830 |
+
"memory usage: 67.0+ MB\n"
|
| 3831 |
+
]
|
| 3832 |
+
}
|
| 3833 |
+
],
|
| 3834 |
+
"source": [
|
| 3835 |
+
"df_taxa_Enat = df_unique_taxa.loc[df_unique_taxa.data_source != \"BIOSCAN\"]\n",
|
| 3836 |
+
"df_taxa_Enat.info(show_counts = True)"
|
| 3837 |
+
]
|
| 3838 |
+
},
|
| 3839 |
+
{
|
| 3840 |
+
"cell_type": "markdown",
|
| 3841 |
+
"metadata": {},
|
| 3842 |
+
"source": [
|
| 3843 |
+
"We have 458,910 unique taxa for EOL and iNat21. Now, we remove taxa duplicates between the two datasets."
|
| 3844 |
+
]
|
| 3845 |
+
},
|
| 3846 |
+
{
|
| 3847 |
+
"cell_type": "code",
|
| 3848 |
+
"execution_count": 16,
|
| 3849 |
+
"metadata": {},
|
| 3850 |
+
"outputs": [
|
| 3851 |
+
{
|
| 3852 |
+
"name": "stdout",
|
| 3853 |
+
"output_type": "stream",
|
| 3854 |
+
"text": [
|
| 3855 |
+
"<class 'pandas.core.frame.DataFrame'>\n",
|
| 3856 |
+
"Index: 450284 entries, 956203 to 11000902\n",
|
| 3857 |
+
"Data columns (total 19 columns):\n",
|
| 3858 |
+
" # Column Non-Null Count Dtype \n",
|
| 3859 |
+
"--- ------ -------------- ----- \n",
|
| 3860 |
+
" 0 split 450284 non-null object \n",
|
| 3861 |
+
" 1 treeoflife_id 450284 non-null object \n",
|
| 3862 |
+
" 2 eol_content_id 448492 non-null float64\n",
|
| 3863 |
+
" 3 eol_page_id 448492 non-null float64\n",
|
| 3864 |
+
" 4 bioscan_part 0 non-null float64\n",
|
| 3865 |
+
" 5 bioscan_filename 0 non-null object \n",
|
| 3866 |
+
" 6 inat21_filename 1792 non-null object \n",
|
| 3867 |
+
" 7 inat21_cls_name 1792 non-null object \n",
|
| 3868 |
+
" 8 inat21_cls_num 1792 non-null float64\n",
|
| 3869 |
+
" 9 kingdom 421130 non-null object \n",
|
| 3870 |
+
" 10 phylum 421593 non-null object \n",
|
| 3871 |
+
" 11 class 420477 non-null object \n",
|
| 3872 |
+
" 12 order 420823 non-null object \n",
|
| 3873 |
+
" 13 family 420694 non-null object \n",
|
| 3874 |
+
" 14 genus 423681 non-null object \n",
|
| 3875 |
+
" 15 species 410719 non-null object \n",
|
| 3876 |
+
" 16 common 450284 non-null object \n",
|
| 3877 |
+
" 17 data_source 450284 non-null object \n",
|
| 3878 |
+
" 18 duplicate 450284 non-null bool \n",
|
| 3879 |
+
"dtypes: bool(1), float64(4), object(14)\n",
|
| 3880 |
+
"memory usage: 65.7+ MB\n"
|
| 3881 |
+
]
|
| 3882 |
+
},
|
| 3883 |
+
{
|
| 3884 |
+
"name": "stderr",
|
| 3885 |
+
"output_type": "stream",
|
| 3886 |
+
"text": [
|
| 3887 |
+
"/var/folders/nv/f0fq1p1n1_3b11x579py_0q80000gq/T/ipykernel_11858/532979333.py:1: SettingWithCopyWarning: \n",
|
| 3888 |
+
"A value is trying to be set on a copy of a slice from a DataFrame.\n",
|
| 3889 |
+
"Try using .loc[row_indexer,col_indexer] = value instead\n",
|
| 3890 |
+
"\n",
|
| 3891 |
+
"See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n",
|
| 3892 |
+
" df_taxa_Enat['duplicate'] = df_taxa_Enat.duplicated(subset = taxa, keep = 'first')\n"
|
| 3893 |
+
]
|
| 3894 |
+
}
|
| 3895 |
+
],
|
| 3896 |
+
"source": [
|
| 3897 |
+
"df_taxa_Enat['duplicate'] = df_taxa_Enat.duplicated(subset = taxa, keep = 'first')\n",
|
| 3898 |
+
"df_unique_taxa_Enat = df_taxa_Enat.loc[~df_taxa_Enat['duplicate']]\n",
|
| 3899 |
+
"df_unique_taxa_Enat.info(show_counts = True)"
|
| 3900 |
+
]
|
| 3901 |
+
},
|
| 3902 |
+
{
|
| 3903 |
+
"cell_type": "markdown",
|
| 3904 |
+
"metadata": {},
|
| 3905 |
+
"source": [
|
| 3906 |
+
"Between iNat21 and EOL we have 450,284 unique taxa. That means we have 3,819 unique 7-tuple taxa added by BIOSCAN, and there are 8,626 taxa (7-tuples) shared between EOL and iNat21 (86% of iNat21).\n",
|
| 3907 |
+
"\n",
|
| 3908 |
+
"EOL has 448,910 unique 7-tuple taxa, so it adds 440,284 more unique taxa to iNat21, then the addition of BIOSCAN adds another 3,819 unique taxa."
|
| 3909 |
+
]
|
| 3910 |
+
},
|
| 3911 |
{
|
| 3912 |
"cell_type": "markdown",
|
| 3913 |
"metadata": {},
|
notebooks/ToL_catalog_EDA.py
CHANGED
|
@@ -491,6 +491,9 @@ eol_long_all_taxa[taxa].info(show_counts = True)
|
|
| 491 |
# %% [markdown]
|
| 492 |
# That's a good number of images, so unlikely to be the cause.
|
| 493 |
| 494 |
# %% [markdown]
|
| 495 |
# ### Label Overlap Check
|
| 496 |
|
|
@@ -516,10 +519,45 @@ print(f"There are {len(list(set(inat21_genera) & set(bioscan_genera)))} genera s
|
|
| 516 |
print(f"There are {len(list(set(gen_overlap) & set(bioscan_genera)))} genera shared between all three data sources.")
|
| 517 |
|
| 518 |
# %% [markdown]
|
| 519 |
-
# BIOSCAN and iNat21's overlap of genera is completely contained in EOL.
|
| 520 |
#
|
| 521 |
# No changes here.
|
| 522 |
| 523 |
# %% [markdown]
|
| 524 |
# ## Overall Stats
|
| 525 |
#
|
| 491 |
# %% [markdown]
|
| 492 |
# That's a good number of images, so unlikely to be the cause.
|
| 493 |
|
| 494 |
+
# %% [markdown]
|
| 495 |
+
# ## Diversity Between Datasets
|
| 496 |
+
|
| 497 |
# %% [markdown]
|
| 498 |
# ### Label Overlap Check
|
| 519 |
print(f"There are {len(list(set(gen_overlap) & set(bioscan_genera)))} genera shared between all three data sources.")
|
| 520 |
|
| 521 |
# %% [markdown]
|
| 522 |
+
# BIOSCAN and iNat21's overlap of genera is almost completely contained in EOL.
|
| 523 |
#
|
| 524 |
# No changes here.
|
| 525 |
|
| 526 |
+
# %% [markdown]
|
| 527 |
+
# #### More thorough diversity check with 7-tuple taxa
|
| 528 |
+
#
|
| 529 |
+
# We'll first filter down to all unique 7-tuple taxa (by data source). Then, we'll reduce down to just EOL and iNat21 to determine how much EOL adds to iNat21's diversity. The remainder from the total is added by BIOSCAN.
|
| 530 |
+
|
| 531 |
+
# %%
|
| 532 |
+
source_taxa = ['data_source', 'kingdom', 'phylum', 'class', 'order', 'family', 'genus', 'species']
|
| 533 |
+
|
| 534 |
+
# %%
|
| 535 |
+
df['duplicate'] = df.duplicated(subset = source_taxa, keep = 'first')
|
| 536 |
+
df_unique_taxa = df.loc[~df['duplicate']]
|
| 537 |
+
df_unique_taxa.info(show_counts=True)
|
| 538 |
+
|
| 539 |
+
# %% [markdown]
|
| 540 |
+
# We have 466,741 unique taxa by source (i.e., the sum of unique 7-tuples within EOL, iNat21, and BIOSCAN, without considering overlaps between them). Our actual unique 7-tuple taxa count for the full dataset is 454,103, so we have about 12,600 taxa shared between our constituent parts.
|
| 541 |
+
#
|
| 542 |
+
# Now, let's reduce this down to just EOL and iNat21.
|
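The unique-taxa counts above can also be computed without the helper `duplicate` column by using `drop_duplicates` directly. A minimal sketch with toy data (the column lists stand in for the full 8-column `source_taxa` and 7-tuple `taxa` lists):

```python
import pandas as pd

# Toy stand-in for the catalog: two sources sharing one species.
df = pd.DataFrame({
    "data_source": ["EOL", "EOL", "iNat21", "iNat21"],
    "kingdom": ["Animalia"] * 4,
    "species": ["Apis mellifera", "Apis mellifera",
                "Apis mellifera", "Bombus terrestris"],
})

source_taxa = ["data_source", "kingdom", "species"]
taxa = ["kingdom", "species"]

# Unique taxa *per source*: EOL's bee + iNat21's bee + iNat21's bumblebee = 3
per_source = df.drop_duplicates(subset=source_taxa)
# Unique taxa overall: bee + bumblebee = 2
overall = df.drop_duplicates(subset=taxa)

# Taxa counted in more than one source
shared = len(per_source) - len(overall)
print(len(per_source), len(overall), shared)  # 3 2 1
```

This is the same arithmetic used above: per-source uniques (466,741) minus overall uniques (454,103) gives the cross-source overlap.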
| 543 |
+
|
| 544 |
+
# %%
|
| 545 |
+
df_taxa_Enat = df_unique_taxa.loc[df_unique_taxa.data_source != "BIOSCAN"]
|
| 546 |
+
df_taxa_Enat.info(show_counts = True)
|
| 547 |
+
|
| 548 |
+
# %% [markdown]
|
| 549 |
+
# We have 458,910 unique taxa for EOL and iNat21. Now, we remove taxa duplicates between the two datasets.
|
| 550 |
+
|
| 551 |
+
# %%
|
| 552 |
+
df_taxa_Enat['duplicate'] = df_taxa_Enat.duplicated(subset = taxa, keep = 'first')
|
| 553 |
+
df_unique_taxa_Enat = df_taxa_Enat.loc[~df_taxa_Enat['duplicate']]
|
| 554 |
+
df_unique_taxa_Enat.info(show_counts = True)
|
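Aside: assigning the helper column on this filtered slice is what triggers the `SettingWithCopyWarning` visible in the notebook output. Taking an explicit `.copy()` when slicing makes the intent unambiguous and silences the warning. A sketch with toy data:

```python
import pandas as pd

df = pd.DataFrame({
    "data_source": ["EOL", "iNat21", "BIOSCAN"],
    "species": ["a", "a", "b"],
})

# Slicing with .loc returns a view-like object; assigning a new column on it
# raises SettingWithCopyWarning. An explicit copy avoids the ambiguity:
df_enat = df.loc[df.data_source != "BIOSCAN"].copy()
df_enat["duplicate"] = df_enat.duplicated(subset=["species"], keep="first")
print(df_enat["duplicate"].tolist())  # [False, True]
```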
| 555 |
+
|
| 556 |
+
# %% [markdown]
|
| 557 |
+
# Between iNat21 and EOL we have 450,284 unique taxa. That means we have 3,819 unique 7-tuple taxa added by BIOSCAN, and there are 8,626 taxa (7-tuples) shared between EOL and iNat21 (86% of iNat21).
|
| 558 |
+
#
|
| 559 |
+
# EOL has 448,910 unique 7-tuple taxa, so it adds 440,284 more unique taxa to iNat21, then the addition of BIOSCAN adds another 3,819 unique taxa.
|
| 560 |
+
|
| 561 |
# %% [markdown]
|
| 562 |
# ## Overall Stats
|
| 563 |
#
|
notebooks/ToL_license_check.ipynb
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
notebooks/ToL_license_check.py
ADDED
|
@@ -0,0 +1,588 @@
|
| 1 |
+
# ---
|
| 2 |
+
# jupyter:
|
| 3 |
+
# jupytext:
|
| 4 |
+
# formats: ipynb,py:percent
|
| 5 |
+
# text_representation:
|
| 6 |
+
# extension: .py
|
| 7 |
+
# format_name: percent
|
| 8 |
+
# format_version: '1.3'
|
| 9 |
+
# jupytext_version: 1.16.0
|
| 10 |
+
# kernelspec:
|
| 11 |
+
# display_name: Python 3 (ipykernel)
|
| 12 |
+
# language: python
|
| 13 |
+
# name: python3
|
| 14 |
+
# ---
|
| 15 |
+
|
| 16 |
+
# %%
|
| 17 |
+
import pandas as pd
|
| 18 |
+
import seaborn as sns
|
| 19 |
+
|
| 20 |
+
sns.set_style("whitegrid")
|
| 21 |
+
sns.set(rc = {'figure.figsize': (10,10)})
|
| 22 |
+
|
| 23 |
+
# %% [markdown]
|
| 24 |
+
# Load in full images to ease process.
|
| 25 |
+
|
| 26 |
+
# %%
|
| 27 |
+
df = pd.read_csv("../data/predicted-catalog.csv", low_memory = False)
|
| 28 |
+
|
| 29 |
+
# %%
|
| 30 |
+
df.head()
|
| 31 |
+
|
| 32 |
+
# %%
|
| 33 |
+
df.info(show_counts = True)
|
| 34 |
+
|
| 35 |
+
# %% [markdown]
|
| 36 |
+
# The `train_small` split consists of duplicates of `train`, so we will drop those to analyze the full training set plus val.
|
| 37 |
+
|
| 38 |
+
# %% [markdown]
|
| 39 |
+
# `predicted-catalog` doesn't include `train_small`; hence, it's a smaller file.
|
| 40 |
+
|
| 41 |
+
# %% [markdown]
|
| 42 |
+
# Let's add a column indicating the original data source so we can also get some stats by datasource, specifically focusing on EOL since we know licensing for BIOSCAN-1M and iNat21.
|
| 43 |
+
|
| 44 |
+
# %%
|
| 45 |
+
# Add data_source column for easier slicing
|
| 46 |
+
df.loc[df['inat21_filename'].notna(), 'data_source'] = 'iNat21'
|
| 47 |
+
df.loc[df['bioscan_filename'].notna(), 'data_source'] = 'BIOSCAN'
|
| 48 |
+
df.loc[df['eol_content_id'].notna(), 'data_source'] = 'EOL'
|
| 49 |
+
|
| 50 |
+
# %% [markdown]
|
| 51 |
+
# #### Get just EOL CSV for license addition
|
| 52 |
+
|
| 53 |
+
# %%
|
| 54 |
+
eol_df = df.loc[df['data_source'] == 'EOL']
|
| 55 |
+
|
| 56 |
+
# %%
|
| 57 |
+
eol_df.head()
|
| 58 |
+
|
| 59 |
+
# %% [markdown]
|
| 60 |
+
# We don't need the BIOSCAN or iNat21 columns, nor the taxa columns.
|
| 61 |
+
|
| 62 |
+
# %%
|
| 63 |
+
eol_license_cols = eol_df.columns[1:4]
|
| 64 |
+
eol_license_cols
|
| 65 |
+
|
| 66 |
+
# %%
|
| 67 |
+
eol_license_df = eol_df[eol_license_cols]
|
| 68 |
+
#eol_license_df["license"] = None
|
| 69 |
+
|
| 70 |
+
# %%
|
| 71 |
+
eol_license_df.head()
|
| 72 |
+
|
| 73 |
+
# %%
|
| 74 |
+
#eol_license_df.to_csv("../data/eol_files/eol_licenses.csv", index = False)
|
| 75 |
+
|
| 76 |
+
# %% [markdown]
|
| 77 |
+
# ### Merge with Media Manifest to Check for Licenses
|
| 78 |
+
# Previous license files (retained below) are missing files; let's merge with the [media manifest](https://huggingface.co/datasets/imageomics/eol/blob/be7b7e6c372f6547e30030e9576d9cc638320099/data/interim/media_manifest.csv) that all these images should have been downloaded from to see if there are any here that don't exist in it. From there, we'll check licensing info.
|
| 79 |
+
|
| 80 |
+
# %%
|
| 81 |
+
media = pd.read_csv("../data/media_manifest (july 26).csv", dtype = {"EOL content ID": "int64", "EOL page ID": "int64"}, low_memory = False)
|
| 82 |
+
media.info(show_counts = True)
|
| 83 |
+
|
| 84 |
+
# %%
|
| 85 |
+
# Read eol license df back in with type int64 for ID columns
|
| 86 |
+
eol_license_df = pd.read_csv("../data/eol_files/eol_licenses.csv",
|
| 87 |
+
dtype = {"eol_content_id": "int64", "eol_page_id": "int64"},
|
| 88 |
+
low_memory = False)
|
| 89 |
+
|
| 90 |
+
# %%
|
| 91 |
+
eol_license_df.shape
|
| 92 |
+
|
| 93 |
+
# %%
|
| 94 |
+
eol_df = eol_df.astype({"eol_content_id": "int64", "eol_page_id": "int64"})
|
| 95 |
+
eol_df.info()
|
| 96 |
+
|
| 97 |
+
# %%
|
| 98 |
+
eol_license_df = eol_df[eol_license_cols]
|
| 99 |
+
|
| 100 |
+
# %% [markdown]
|
| 101 |
+
# Notice that we have about 300K more entries in the media manifest, which is about expected from the [comparison of predicted-catalog to the original full list](https://huggingface.co/datasets/imageomics/ToL-EDA/blob/main/notebooks/ToL_predicted-catalog_EDA.ipynb).
|
| 102 |
+
|
| 103 |
+
# %%
|
| 104 |
+
media.rename(columns = {"EOL content ID": "eol_content_id"}, inplace = True)
|
| 105 |
+
|
| 106 |
+
# %%
|
| 107 |
+
eol_df_media = pd.merge(eol_license_df, media, how = "left", on = "eol_content_id")
|
| 108 |
+
|
| 109 |
+
# %%
|
| 110 |
+
eol_df_media.info(show_counts = True)
|
| 111 |
+
|
| 112 |
+
# %% [markdown]
|
| 113 |
+
# We have about 620K images missing copyright owner.
|
| 114 |
+
|
| 115 |
+
# %%
|
| 116 |
+
eol_df_media.head()
|
| 117 |
+
|
| 118 |
+
# %%
|
| 119 |
+
eol_df_media.loc[eol_df_media["Copyright Owner"].isna()].nunique()
|
| 120 |
+
|
| 121 |
+
# %% [markdown]
|
| 122 |
+
# The missing info is distributed across 116,609 pages.
|
| 123 |
+
#
|
| 124 |
+
# There also seems to be a discrepancy in the number of page IDs between these. This led to duplicated records... definitely something's off.
|
| 125 |
+
|
| 126 |
+
# %% [markdown]
|
| 127 |
+
# Check consistency of merge when matching both `eol_content_id` and `eol_page_id`.
|
| 128 |
+
|
| 129 |
+
# %%
|
| 130 |
+
media.rename(columns = {"EOL page ID": "eol_page_id"}, inplace = True)
|
| 131 |
+
|
| 132 |
+
# %%
|
| 133 |
+
merge_cols = ["eol_content_id", "eol_page_id"]
|
| 134 |
+
|
| 135 |
+
# %%
|
| 136 |
+
eol_df_media_cp = pd.merge(eol_license_df, media, how = "inner", left_on = merge_cols, right_on = merge_cols)
|
| 137 |
+
eol_df_media_cp.info(show_counts = True)
|
| 138 |
+
|
| 139 |
+
# %% [markdown]
|
| 140 |
+
# Okay, so we do have a mismatch of about 113K images where the content IDs and page IDs don't both match.
|
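pandas can surface this kind of mismatch directly: `merge(..., indicator=True)` adds a `_merge` column labeling each row `both`, `left_only`, or `right_only`, so you can count unmatched records without comparing merge shapes by hand. A sketch with toy IDs (not the real catalog values):

```python
import pandas as pd

catalog = pd.DataFrame({"eol_content_id": [1, 2, 3],
                        "eol_page_id":    [10, 20, 30]})
manifest = pd.DataFrame({"eol_content_id": [1, 2, 4],
                         "eol_page_id":    [10, 99, 40]})

# Outer merge on both keys; _merge shows where each row came from.
# Content ID 2 has mismatched page IDs (20 vs 99), so it fails to pair.
merged = pd.merge(catalog, manifest, how="outer",
                  on=["eol_content_id", "eol_page_id"], indicator=True)
counts = merged["_merge"].value_counts()
print(counts.to_dict())
```

`merge` also accepts `validate="one_to_one"` (and similar), which raises immediately if the keys unexpectedly duplicate rows, the symptom noted earlier.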
| 141 |
+
|
| 142 |
+
# %%
|
| 143 |
+
eol_df_media_cp.to_csv("../data/eol_files/eol_cp_match_media.csv", index = False)
|
| 144 |
+
|
| 145 |
+
# %%
|
| 146 |
+
tol_ids_in_media = list(eol_df_media_cp.treeoflife_id)
|
| 147 |
+
tol_ids_in_media[:5]
|
| 148 |
+
|
| 149 |
+
# %%
|
| 150 |
+
eol_license_df.head()
|
| 151 |
+
|
| 152 |
+
# %% [markdown]
|
| 153 |
+
# Let's save a copy of the EOL section with content and page IDs that are mismatched.
|
| 154 |
+
|
| 155 |
+
# %%
|
| 156 |
+
eol_df_missing_media = eol_license_df.loc[~eol_license_df.treeoflife_id.isin(tol_ids_in_media)]
|
| 157 |
+
eol_df_missing_media.info(show_counts = True)
|
| 158 |
+
|
| 159 |
+
# %%
|
| 160 |
+
eol_df_missing_media.to_csv("../data/eol_files/eol_cp_not_media.csv", index = False)
|
| 161 |
+
|
| 162 |
+
# %% [markdown]
|
| 163 |
+
# ### Save Record of Missing Content IDs & Compare to Older Media Manifest
|
| 164 |
+
# Let's save a record of the missing content IDs, then we'll compare them to the [July 6 media manifest](https://huggingface.co/datasets/imageomics/eol/blob/eaa00a48fa188f12906c5b8074d60aa8e67eb135/data/interim/media_manifest.csv) to see if any are in there. The July 6 media manifest is smaller, but we'll still check.
|
| 165 |
+
|
| 166 |
+
# %%
|
| 167 |
+
eol_missing_content_ids = eol_df_media.loc[eol_df_media["Medium Source URL"].isna()]
|
| 168 |
+
eol_missing_content_ids.head()
|
| 169 |
+
|
| 170 |
+
# %% [markdown]
|
| 171 |
+
# The pages exist (`eol.org/pages/<eol_page_id>`), but the content IDs do not (`eol.org/media/<eol_content_id>` produces 404).
|
| 172 |
+
|
| 173 |
+
# %%
|
| 174 |
+
#eol_missing_content_ids.to_csv("../data/eol_files/eol_missing_content_ids.csv", index = False)
|
| 175 |
+
|
| 176 |
+
# %%
|
| 177 |
+
media_old = pd.read_csv("../data/media_manifest.csv", dtype = {"EOL content ID": "int64", "EOL page ID": "int64"}, low_memory = False)
|
| 178 |
+
media_old.info(show_counts = True)
|
| 179 |
+
|
| 180 |
+
# %%
|
| 181 |
+
media_old.rename(columns = {"EOL content ID": "eol_content_id"}, inplace = True)
|
| 182 |
+
|
| 183 |
+
# %%
|
| 184 |
+
eol_df_media_old = pd.merge(eol_missing_content_ids[eol_license_cols], media_old, how = "left", on = "eol_content_id")
|
| 185 |
+
|
| 186 |
+
# %%
|
| 187 |
+
eol_df_media_old.info(show_counts = True)
|
| 188 |
+
|
| 189 |
+
# %% [markdown]
|
| 190 |
+
# No, we do not have any of the missing ones in the older media manifest.
|
| 191 |
+
|
| 192 |
+
# %% [markdown]
|
| 193 |
+
# ### Check how this compares to Catalog
|
| 194 |
+
# Let's see if these are all images in TreeOfLife-10M, or a mix between it and Rare Species.
|
| 195 |
+
|
| 196 |
+
# %%
|
| 197 |
+
cat_df = pd.read_csv("../data/catalog.csv", low_memory = False)
|
| 198 |
+
# Remove duplicates in train_small
|
| 199 |
+
cat_df = cat_df.loc[cat_df.split != 'train_small']
|
| 200 |
+
|
| 201 |
+
# %%
|
| 202 |
+
# Add data_source column for easier slicing
|
| 203 |
+
cat_df.loc[cat_df['inat21_filename'].notna(), 'data_source'] = 'iNat21'
|
| 204 |
+
cat_df.loc[cat_df['bioscan_filename'].notna(), 'data_source'] = 'BIOSCAN'
|
| 205 |
+
cat_df.loc[cat_df['eol_content_id'].notna(), 'data_source'] = 'EOL'
|
| 206 |
+
|
| 207 |
+
# %%
|
| 208 |
+
eol_cat_df = cat_df.loc[cat_df.data_source == "EOL"]
|
| 209 |
+
|
| 210 |
+
# %%
|
| 211 |
+
eol_cat_df_media = pd.merge(eol_cat_df[eol_license_cols], media, how = "left", on = "eol_content_id")
|
| 212 |
+
eol_cat_df_media.info(show_counts = True)
|
| 213 |
+
|
| 214 |
+
# %% [markdown]
|
| 215 |
+
# Looks like the problem is distributed across both datasets.
|
| 216 |
+
|
| 217 |
+
# %%
|
| 218 |
+
eol_cat_df_media.loc[eol_cat_df_media["Medium Source URL"].isna()].nunique()
|
| 219 |
+
|
| 220 |
+
# %% [markdown]
|
| 221 |
+
# For `catalog` the missing information is distributed across 9,634 pages, so that's 128 pages (of 400) in the Rare Species dataset that we can't currently match.
|
| 222 |
+
|
| 223 |
+
# %% [markdown]
|
| 224 |
+
# ### What are the taxa of the missing images?
|
| 225 |
+
#
|
| 226 |
+
# Let's bring back a version with the taxa and see what we're dealing with on that end without needing to open the pages.
|
| 227 |
+
|
| 228 |
+
# %%
|
| 229 |
+
cols_of_interest = ['treeoflife_id', 'eol_content_id', 'eol_page_id',
|
| 230 |
+
'kingdom', 'phylum', 'class', 'order', 'family',
|
| 231 |
+
'genus', 'species', 'common']
|
| 232 |
+
|
| 233 |
+
# %%
|
| 234 |
+
taxa_cols = ['kingdom', 'phylum', 'class', 'order', 'family',
|
| 235 |
+
'genus', 'species', 'common']
|
| 236 |
+
|
| 237 |
+
# %%
|
| 238 |
+
eol_taxa_df_media = pd.merge(eol_df[cols_of_interest], media, how = "left", on = "eol_content_id")
|
| 239 |
+
|
| 240 |
+
# %%
|
| 241 |
+
eol_taxa_df_media.loc[eol_taxa_df_media["Medium Source URL"].isna()].nunique()
|
| 242 |
+
|
| 243 |
+
# %%
|
| 244 |
+
eol_taxa_df_media.loc[eol_taxa_df_media["Medium Source URL"].isna()].info(show_counts = True)
|
| 245 |
+
|
| 246 |
+
# %%
|
| 247 |
+
eol_taxa_df_media.loc[eol_taxa_df_media["Medium Source URL"].isna()].sample(7)
|
| 248 |
+
|
| 249 |
+
# %% [markdown]
|
| 250 |
+
# Save a copy of the missing content IDs with taxa info as well.
|
| 251 |
+
|
| 252 |
+
# %%
|
| 253 |
+
#eol_taxa_df_media.loc[eol_taxa_df_media["Medium Source URL"].isna()].to_csv("../data/eol_files/eol_taxa_missing_content_ids.csv", index = False)
|
| 254 |
+
|
| 255 |
+
# %% [markdown]
|
| 256 |
+
# And in `catalog`...
|
| 257 |
+
|
| 258 |
+
# %%
|
| 259 |
+
eol_cat_df_taxa_media = pd.merge(eol_cat_df[cols_of_interest], media, how = "left", on = "eol_content_id")
|
| 260 |
+
eol_cat_df_taxa_media.loc[eol_cat_df_taxa_media["Medium Source URL"].isna()].nunique()
|
| 261 |
+
|
| 262 |
+
# %% [markdown]
|
| 263 |
+
# Alright, so it's 2 orders in Rare Species.
|
| 264 |
+
|
| 265 |
+
# %%
|
| 266 |
+
eol_cat_df_taxa_media.loc[eol_cat_df_taxa_media["Medium Source URL"].isna()].info(show_counts = True)
|
| 267 |
+
|
| 268 |
+
# %%
|
| 269 |
+
eol_cat_df_taxa_media.loc[eol_cat_df_taxa_media["Medium Source URL"].isna()].sample(4)
|
| 270 |
+
|
| 271 |
+
# %% [markdown]
|
| 272 |
+
# ## Compare Media Cargo
|
| 273 |
+
# The media cargo is all images we downloaded from EOL on 29 July 2023, so it should match `predicted-catalog`.
|
| 274 |
+
|
| 275 |
+
# %%
|
| 276 |
+
cargo = pd.read_csv("../data/eol_media_cargo_names.csv", dtype = {"EOL content ID": "int64", "EOL page ID": "int64"})
|
| 277 |
+
cargo.info(show_counts = True)
|
| 278 |
+
|
| 279 |
+
# %%
|
| 280 |
+
cargo.nunique()
|
| 281 |
+
|
| 282 |
+
# %%
|
| 283 |
+
cargo.head()
|
| 284 |
+
|
| 285 |
+
# %%
|
| 286 |
+
cargo.rename(columns = {"EOL content ID": "eol_content_id"}, inplace = True)
|
| 287 |
+
eol_df_cargo = pd.merge(eol_license_df, cargo, how = "left", on = "eol_content_id")
|
| 288 |
+
|
| 289 |
+
# %%
|
| 290 |
+
eol_df_cargo.info(show_counts = True)
|
| 291 |
+
|
| 292 |
+
# %% [markdown]
|
| 293 |
+
# There seem to be 633 images here that aren't listed in the media cargo.
|
| 294 |
+
#
|
| 295 |
+
# What about in catalog?
|
| 296 |
+
|
| 297 |
+
# %%
|
| 298 |
+
eol_cat_cargo = pd.merge(eol_cat_df[eol_license_cols], cargo, how = "left", on = "eol_content_id")
|
| 299 |
+
eol_cat_cargo.info(show_counts = True)
|
| 300 |
+
|
| 301 |
+
# %% [markdown]
|
| 302 |
+
# Still missing 633 images... so we know they're not part of the Rare Species dataset, but are in TreeOfLife-10M...
|
| 303 |
+
|
| 304 |
+
# %%
|
| 305 |
+
media_in_cargo = pd.merge(cargo, media, how = "right", on = "eol_content_id")
|
| 306 |
+
media_in_cargo.info(show_counts = True)
|
| 307 |
+
|
| 308 |
+
# %% [markdown]
|
| 309 |
+
# But there are 26,868 images in the media manifest that are not in the cargo (or at least their content IDs aren't), despite the media cargo having 154K more images listed.
|
| 310 |
+
|
| 311 |
+
# %% [markdown]
|
| 312 |
+
# ## Compare to Newer Media Manifest
|
| 313 |
+
#
|
| 314 |
+
# We will load in a [new media manifest](https://huggingface.co/datasets/imageomics/eol/blob/3aa274067fc4a18877fb394b1d49a92962c57ed8/data/interim/media_manifest_Dec6.csv) (downloaded Dec. 6) to match up `page_id`s for the missing `content_id`s. This way we can download the images and compare via MD5 checksums to hopefully map the new `content_id`s to the old. (See [discussion #18](https://huggingface.co/datasets/imageomics/eol/discussions/18) in [EOL Repo](https://huggingface.co/datasets/imageomics/eol).)
|
| 315 |
+
|
| 316 |
+
# %%
|
| 317 |
+
media_new = pd.read_csv("../data/media_manifest_Dec6.csv", dtype = {"EOL content ID": "int64", "EOL page ID": "int64"}, low_memory = False)
|
| 318 |
+
media_new.info(show_counts = True)
|
| 319 |
+
|
| 320 |
+
# %%
|
| 321 |
+
media_new.head()
|
| 322 |
+
|
| 323 |
+
# %% [markdown]
|
| 324 |
+
# To allow an easier sanity check on the matches, we'll use the version of the missing-info list with taxa included.
|
| 325 |
+
|
| 326 |
+
# %%
|
| 327 |
+
eol_taxa_df_missing_media = eol_taxa_df_media.loc[eol_taxa_df_media["Medium Source URL"].isna()]
|
| 328 |
+
eol_taxa_df_missing_media.head()
|
| 329 |
+
|
| 330 |
+
# %% [markdown]
|
| 331 |
+
# Rename `EOL content ID` and `EOL page ID` columns to match our `eol_taxa_df_missing_media` for easier merging.
|
| 332 |
+
|
| 333 |
+
# %%
|
| 334 |
+
media_new.rename(columns = {"EOL content ID": "eol_content_id", "EOL page ID": "eol_page_id"}, inplace = True)
|
| 335 |
+
|
| 336 |
+
# %% [markdown]
|
| 337 |
+
# First check for any matching content IDs
|
| 338 |
+
|
| 339 |
+
# %%
|
| 340 |
+
eol_taxa_df_missing_media_new_check = pd.merge(eol_taxa_df_missing_media[cols_of_interest], media_new, how = "left", on = "eol_content_id")
|
| 341 |
+
eol_taxa_df_missing_media_new_check.info(show_counts = True)
|
| 342 |
+
|
| 343 |
+
# %% [markdown]
|
| 344 |
+
# As expected, there are no matching content IDs here.
|
| 345 |
+
#
|
| 346 |
+
# Now, let's get our match on page IDs to check they are all listed still for download.
|
| 347 |
+
|
| 348 |
+
# %%
|
| 349 |
+
pg_ids_missing_content = set(eol_taxa_df_missing_media.eol_page_id)
|
| 350 |
+
pg_ids_media_new = set(media_new.eol_page_id)
|
| 351 |
+
|
| 352 |
+
print(f"There are {len(pg_ids_missing_content)} unique page ids that have missing content ids, and there are {len(pg_ids_media_new)} total page ids in the new media manifest.")
|
| 353 |
+
|
| 354 |
+
|
| 355 |
+
# %%
|
| 356 |
+
missing_pgs = []
|
| 357 |
+
for pg in pg_ids_missing_content:
|
| 358 |
+
if pg not in pg_ids_media_new:
|
| 359 |
+
missing_pgs.append(pg)
|
| 360 |
+
print(len(missing_pgs))
|
| 361 |
+
|
| 362 |
+
# %%
|
| 363 |
+
media.rename(columns = {"EOL page ID": "eol_page_id"}, inplace = True)
|
| 364 |
+
pg_ids_media = set(media.eol_page_id)
|
| 365 |
+
|
| 366 |
+
print(f"There are {len(pg_ids_media)} total page ids in the July 26 media manifest.")
|
| 367 |
+
|
| 368 |
+
missing_pgs_jul26 = []
|
| 369 |
+
for pg in pg_ids_missing_content:
|
| 370 |
+
if pg not in pg_ids_media:
|
| 371 |
+
missing_pgs_jul26.append(pg)
|
| 372 |
+
print(len(missing_pgs_jul26))
|
| 373 |
+
|
| 374 |
+
# %% [markdown]
|
| 375 |
+
# There seem to be 152 page IDs that don't match to either manifest.
|
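The membership loops above are equivalent to a set difference, which is both shorter and avoids the per-element `in` checks. A sketch with toy page IDs (not the real values):

```python
# Toy page-ID sets standing in for the real manifests.
pg_ids_missing_content = {1, 2, 3, 4, 5}
pg_ids_media_new = {2, 3, 9}

# Same result as appending pages not found in the manifest:
missing_pgs = pg_ids_missing_content - pg_ids_media_new
print(sorted(missing_pgs))  # [1, 4, 5]
```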
| 376 |
+
|
| 377 |
+
# %%
|
| 378 |
+
missing_pgs[:10]
|
| 379 |
+
|
| 380 |
+
# %%
|
| 381 |
+
# Why are these floats...does it matter?
|
| 382 |
+
missing_pgs_int = [int(pg) for pg in missing_pgs]
|
| 383 |
+
|
| 384 |
+
int_missing_pgs = []
|
| 385 |
+
for pg in missing_pgs_int:
|
| 386 |
+
if pg not in pg_ids_media_new:
|
| 387 |
+
int_missing_pgs.append(pg)
|
| 388 |
+
print(len(int_missing_pgs))
|
| 389 |
+
print(int_missing_pgs[:5])
|
| 390 |
+
|
| 391 |
+
# %% [markdown]
|
| 392 |
+
# It does not matter. There seem to be 152 missing pages; let's try making a couple into URLs...
|
| 393 |
+
#
|
| 394 |
+
# The first has a page (https://eol.org/pages/47186210) without images. Let's compare these 152 page IDs to our `catalog.csv` page IDs. Maybe these were not added because there were no images (still odd that they exist in `predicted-catalog.csv`, but not in the manifest).
|
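The reason the float/int distinction doesn't matter: Python guarantees that numerically equal values hash identically across numeric types, so set membership ignores whether the page ID is stored as `int` or `float`. A quick check:

```python
# Numerically equal int and float hash the same, so both land in the
# same set bucket and `in` checks agree.
pg_float = 47186210.0
pg_int = 47186210

assert pg_float == pg_int
assert hash(pg_float) == hash(pg_int)
assert pg_int in {pg_float}  # int found in a set of floats
print("int/float set membership agrees")
```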
| 395 |
+
|
| 396 |
+
# %%
|
| 397 |
+
cat_pgs = set(eol_cat_df.eol_page_id)
|
| 398 |
+
|
| 399 |
+
print(f"There are {len(cat_pgs)} total page ids in the EOL portion of the catalog.")
|
| 400 |
+
|
| 401 |
+
missing_cat_pgs = []
|
| 402 |
+
for pg in missing_pgs:
|
| 403 |
+
if pg not in cat_pgs:
|
| 404 |
+
missing_cat_pgs.append(pg)
|
| 405 |
+
print(len(missing_cat_pgs))
|
| 406 |
+
|
| 407 |
+
# %% [markdown]
|
| 408 |
+
# Nope, these are all in `catalog.csv`.
|
| 409 |
+
#
|
| 410 |
+
# Another no image page (https://eol.org/pages/47186225), [this](https://eol.org/pages/46334362) has more data, but still no images. https://eol.org/pages/47186380 & https://eol.org/pages/47121005 also don't show any images.
|
| 411 |
+
#
|
| 412 |
+
# Seems the images for all of these were removed or moved to other pages...
|
| 413 |
+
#
|
| 414 |
+
# Let's make a CSV for the missing pages to check that we do indeed have the images (sanity check), and we can compare the taxa!
|
| 415 |
+
|
| 416 |
+
# %%
|
| 417 |
+
missing_pgs_df = eol_taxa_df_missing_media.loc[eol_taxa_df_missing_media["eol_page_id"].isin(missing_pgs)]
|
| 418 |
+
missing_pgs_df = missing_pgs_df[cols_of_interest]
|
| 419 |
+
missing_pgs_df.info()
|
| 420 |
+
|
| 421 |
+
# %%
|
| 422 |
+
missing_pgs_df.sample(10)
|
| 423 |
+
|
| 424 |
+
# %%
|
| 425 |
+
#missing_pgs_df.to_csv("../data/eol_files/catalog_missing_media_pages.csv", index = False)
|
| 426 |
+
|
| 427 |
+
# %% [markdown]
|
| 428 |
+
# ### Save File with EOL Page IDs & Number of missing content IDs associated with each
|
| 429 |
+
|
| 430 |
+
# %%
|
| 431 |
+
# Count and record number of content IDs for each page ID
|
| 432 |
+
for pg_id in pg_ids_missing_content:
|
| 433 |
+
content_id_list = ['{}'.format(i) for i in eol_taxa_df_missing_media.loc[eol_taxa_df_missing_media['eol_page_id'] == pg_id]['eol_content_id'].unique()]
|
| 434 |
+
eol_taxa_df_missing_media.loc[eol_taxa_df_missing_media['eol_page_id'] == pg_id, "num_content_ids_missing"] = len(content_id_list)
|
| 435 |
+
|
| 436 |
+
cols_of_interest.append("num_content_ids_missing")
|
| 437 |
+
eol_taxa_df_missing_media[cols_of_interest].head()
|
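The per-page loop above can be replaced by a single vectorized pass: `groupby().transform('nunique')` counts the unique content IDs per page and broadcasts the count back onto every row. A sketch with toy data (same column names assumed):

```python
import pandas as pd

# Toy stand-in: page 10 has two missing content IDs, page 20 has one.
eol_missing = pd.DataFrame({
    "eol_page_id":    [10, 10, 10, 20, 20],
    "eol_content_id": [1, 2, 2, 3, 3],
})

# Unique content IDs per page, broadcast back to each row in one step.
eol_missing["num_content_ids_missing"] = (
    eol_missing.groupby("eol_page_id")["eol_content_id"].transform("nunique")
)
print(eol_missing["num_content_ids_missing"].tolist())  # [2, 2, 2, 1, 1]
```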
| 438 |
+
|
| 439 |
+
# %%
|
| 440 |
+
#unique page_ids
|
| 441 |
+
eol_taxa_df_missing_media['duplicate'] = eol_taxa_df_missing_media.duplicated(subset = "eol_page_id", keep = 'first')
|
| 442 |
+
eol_taxa_df_num_missing_pg = eol_taxa_df_missing_media.loc[~eol_taxa_df_missing_media['duplicate']]
|
| 443 |
+
eol_taxa_df_num_missing_pg.info()
|
| 444 |
+
|
| 445 |
+
# %% [markdown]
|
| 446 |
+
# This file has the number of missing content IDs per page ID; the content ID included is just the first instance for each page.
|
| 447 |
+
|
| 448 |
+
# %%
|
| 449 |
+
#eol_taxa_df_num_missing_pg[cols_of_interest].to_csv("../data/eol_files/eol_taxa_df_num_missing_pg.csv", index = False)
|
| 450 |
+
|
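The flag-then-filter pattern above is equivalent to a single `drop_duplicates` call, which avoids the helper `duplicate` column. A small sketch on hypothetical data:

```python
import pandas as pd

df = pd.DataFrame({"eol_page_id": [1, 1, 2], "eol_content_id": [10, 11, 12]})

# Flag-and-filter, as in the notebook
df["duplicate"] = df.duplicated(subset="eol_page_id", keep="first")
kept = df.loc[~df["duplicate"]]

# Equivalent single call, no helper column needed
kept2 = df.drop_duplicates(subset="eol_page_id", keep="first")

assert kept[["eol_page_id", "eol_content_id"]].equals(kept2[["eol_page_id", "eol_content_id"]])
```

Both keep the first row per `eol_page_id`, matching the "first instance of such a content ID" behavior described above.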
+# %%
+jul26_page_df = media.loc[media.eol_page_id.isin(pg_ids_missing_content)]
+jul26_page_df.info()
+
+# %%
+jul26_page_df.nunique()
+
+# %% [markdown]
+# Yes, that's the expected number of unique page IDs. Let's save to CSV for download.
+
+# %%
+#jul26_page_df.to_csv("../data/eol_files/jul26_pages.csv", index = False)
+
+# %%
+dec6_page_df = media_new.loc[media_new.eol_page_id.isin(pg_ids_missing_content)]
+dec6_page_df.info()
+
+# %% [markdown]
+# Okay, we have 5 more entries here, so let's compare unique counts and consider this one.
+
+# %%
+dec6_page_df.nunique()
+
+# %%
+#dec6_page_df.to_csv("../data/eol_files/dec6_pages.csv", index = False)
+
+# %% [markdown]
+# #### Check Older Media Manifest for Missing Pages
+#
+# Let's take a look at the July 6th media manifest to see if these pages are there.
+
+# %%
+media_old.rename(columns = {"EOL page ID": "eol_page_id"}, inplace = True)
+pg_ids_media_old = set(media_old.eol_page_id)
+
+print(f"There are {len(pg_ids_media_old)} total page ids in the July 6 media manifest.")
+
+missing_pgs_jul6 = []
+for pg in missing_pgs:
+    if pg not in pg_ids_media_old:
+        missing_pgs_jul6.append(pg)
+print(len(missing_pgs_jul6))
+
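Since both collections here are sets, the membership loop above collapses to a set difference. A minimal sketch with hypothetical IDs:

```python
# Membership loop vs. set difference: same contents, one expression
missing_pgs = {4446364, 123, 456}
pg_ids_media_old = {123, 456, 789}   # hypothetical page IDs

missing_pgs_jul6 = []
for pg in missing_pgs:
    if pg not in pg_ids_media_old:
        missing_pgs_jul6.append(pg)

# The set difference gives the same elements without the loop
assert set(missing_pgs_jul6) == missing_pgs - pg_ids_media_old  # {4446364}
```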
+
# %% [markdown]
|
| 495 |
+
# It seems the missing pages are in the _older_ media manifest!
|
| 496 |
+
#
|
| 497 |
+
# Let's merge this with the `missing_pgs_df`, so we can get URLs to download from the pages there.
|
| 498 |
+
|
| 499 |
+
# %%
|
| 500 |
+
# Count and record number of content IDs for each page ID
|
| 501 |
+
for pg_id in missing_pgs:
|
| 502 |
+
content_id_list_mp = ['{}'.format(i) for i in missing_pgs_df.loc[missing_pgs_df['eol_page_id'] == pg_id]['eol_content_id'].unique()]
|
| 503 |
+
missing_pgs_df.loc[missing_pgs_df['eol_page_id'] == pg_id, "num_content_ids_missing"] = len(content_id_list_mp)
|
| 504 |
+
|
| 505 |
+
missing_pgs_df.head()
|
| 506 |
+
|
| 507 |
+
# %%
|
| 508 |
+
#unique page_ids
|
| 509 |
+
missing_pgs_df['duplicate'] = missing_pgs_df.duplicated(subset = "eol_page_id", keep = 'first')
|
| 510 |
+
eol_taxa_num_missing_pgs_df = missing_pgs_df.loc[~missing_pgs_df['duplicate']]
|
| 511 |
+
eol_taxa_num_missing_pgs_df.info()
|
| 512 |
+
|
+# %%
+older_page_df = media_old.loc[media_old.eol_page_id.isin(missing_pgs)]
+older_page_df.info()
+
+# %%
+older_page_df.loc[older_page_df.eol_page_id.astype(str) == "4446364.0"]
+
+# %% [markdown]
+# Looks good, let's save to CSV.
+
+# %%
+#older_page_df.to_csv("../data/eol_files/media_old_pages.csv", index = False)
+
+# %% [markdown]
+# ## Check EOL License File(s)
+#
+# First we'll look at `eol_licenses.csv` from Sam, which only covers `catalog.csv`, so we load both to make sure we've got full coverage for all included images (Matt's first match attempt from the file created above couldn't find ~113K based on `eol_content_id`).
+
+# %%
+cat_df = pd.read_csv("../data/catalog.csv", dtype = {"eol_content_id": "int64", "eol_page_id": "int64"}, low_memory = False)
+license_df = pd.read_csv("../data/eol_files/eol_licenses.csv",
+                         dtype = {"eol_content_id": "int64", "eol_page_id": "int64"},
+                         low_memory = False)
+
+# %% [markdown]
+# The `train_small` split duplicates images from `train`, so we drop it to analyze the full training set plus val.
+
+# %%
+cat_df = cat_df.loc[cat_df.split != 'train_small']
+
+# %%
+# Add data_source column for easier slicing
+cat_df.loc[cat_df['inat21_filename'].notna(), 'data_source'] = 'iNat21'
+cat_df.loc[cat_df['bioscan_filename'].notna(), 'data_source'] = 'BIOSCAN'
+cat_df.loc[cat_df['eol_content_id'].notna(), 'data_source'] = 'EOL'
+
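One subtlety of the sequential `.loc` assignments: later masks overwrite earlier ones wherever they overlap, so the order encodes precedence (EOL wins), and rows matching no mask are left as NaN. A minimal sketch with hypothetical rows:

```python
import pandas as pd

# Hypothetical rows: one per source, plus one row matched by no mask
df = pd.DataFrame({
    "inat21_filename": ["a.jpg", None, None, None],
    "bioscan_filename": [None, "b.jpg", None, None],
    "eol_content_id": [None, None, 1.0, None],
})

# Later assignments overwrite earlier ones on any overlap
df.loc[df["inat21_filename"].notna(), "data_source"] = "iNat21"
df.loc[df["bioscan_filename"].notna(), "data_source"] = "BIOSCAN"
df.loc[df["eol_content_id"].notna(), "data_source"] = "EOL"

# Rows matching none of the masks stay NaN -- worth checking
print(df["data_source"].isna().sum())  # 1
```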
+# %%
+eol_df = cat_df.loc[cat_df.data_source == "EOL"]
+
+# %%
+license_df.head()
+
+# %%
+license_df.shape
+
+# %%
+eol_df.shape
+
+# %% [markdown]
+# Yeah, we're missing about 23K images in the license file.
+
+# %%
+license_df.info(show_counts = True)
+
+# %%
+license_df.loc[license_df["owner"].isna(), "license"].value_counts()
+
+# %% [markdown]
+# CC BY licenses without an `owner` indicated are rather problematic.
+
+# %%
+license_df.loc[license_df["owner"].isna()].sample(5)
+
+# %% [markdown]
+# Tracked down `eol_content_id` [14796160](https://eol.org/media/14796160): the original source is [BioImages](https://www.bioimages.org.uk/image.php?id=79950), with copyright Malcolm Storey, like 99% of the images on the site (see their [conditions of use](https://www.bioimages.org.uk/cright.htm)). He is listed as "compiler" on the EOL media page.
+
+# %%
+license_df.license.value_counts()
+
+# %%
+license_df.loc[license_df["license"] == "No known copyright restrictions"].sample(5)
+
+# %%
+#license_df.loc[license_df["owner"].isna()].to_csv("../data/eol_files/eol_licenses_missing_owner.csv", index = False)
+
+# %%
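Rather than comparing shapes, the ~23K catalog entries absent from the license file can be pulled out directly with a left merge and `indicator=True` (an anti-join). A minimal sketch on hypothetical toy frames:

```python
import pandas as pd

# Hypothetical miniature catalog vs. license coverage check
eol_df = pd.DataFrame({"eol_content_id": [1, 2, 3, 4]})
license_df = pd.DataFrame({"eol_content_id": [1, 2], "license": ["cc-by", "cc0"]})

# indicator=True adds a _merge column marking where each row matched
merged = eol_df.merge(license_df, on="eol_content_id", how="left", indicator=True)

# "left_only" rows are catalog entries with no license record
unlicensed = merged.loc[merged["_merge"] == "left_only"]
print(len(unlicensed))  # 2
```

This also yields the unmatched rows themselves, which is handy for writing out a follow-up CSV like the `eol_licenses_missing_owner.csv` exports above.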
notebooks/ToL_media_mismatch.ipynb
ADDED
|
@@ -0,0 +1,1792 @@
+{
+ "cells": [
+  {
+   "cell_type": "code",
+   "execution_count": 1,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import pandas as pd"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Load in full images to ease process."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 2,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "df = pd.read_csv(\"../data/predicted-catalog.csv\", low_memory = False)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 3,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       " split treeoflife_id eol_content_id eol_page_id \\\n",
+       "0 train f2f0aa29-e87b-469c-bf5b-51a3611ab001 22131926.0 269504.0 \n",
+       "1 train 5faa4f55-32e9-4872-953d-567e5d232e52 22291283.0 6101931.0 \n",
+       "2 train 2282f2bf-2b52-4522-b588-dd6f356d5fd6 21802775.0 45513632.0 \n",
+       "3 train 76b57c36-2181-4e6d-a5c4-b40e22a09449 12784812.0 51655800.0 \n",
+       "4 train f57d3ab6-2cf5-484b-a590-e2a3d49a3ca2 29713643.0 45515896.0 \n",
+       "\n",
+       " bioscan_part bioscan_filename inat21_filename inat21_cls_name \\\n",
+       "0 NaN NaN NaN NaN \n",
+       "1 NaN NaN NaN NaN \n",
+       "2 NaN NaN NaN NaN \n",
+       "3 NaN NaN NaN NaN \n",
+       "4 NaN NaN NaN NaN \n",
+       "\n",
+       " inat21_cls_num kingdom phylum class order \\\n",
+       "0 NaN Animalia Arthropoda Insecta Lepidoptera \n",
+       "1 NaN Plantae Tracheophyta Polypodiopsida Polypodiales \n",
+       "2 NaN Animalia Chordata Aves Passeriformes \n",
+       "3 NaN NaN NaN NaN NaN \n",
+       "4 NaN Animalia Chordata Aves Casuariiformes \n",
+       "\n",
+       " family genus species common \n",
+       "0 Lycaenidae Orthomiella rantaizana Chinese Straight-wing Blue \n",
+       "1 Woodsiaceae Woodsia subcordata Woodsia subcordata \n",
+       "2 Laniidae Lanius minor Lesser Grey Shrike \n",
+       "3 NaN NaN tenuis Tenuis \n",
+       "4 Casuariidae Casuarius casuarius Southern Cassowary "
+      ]
+     },
+     "execution_count": 3,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "df.head()"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 4,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "<class 'pandas.core.frame.DataFrame'>\n",
+      "RangeIndex: 10092530 entries, 0 to 10092529\n",
+      "Data columns (total 17 columns):\n",
+      " # Column Non-Null Count Dtype \n",
+      "--- ------ -------------- ----- \n",
+      " 0 split 10092530 non-null object \n",
+      " 1 treeoflife_id 10092530 non-null object \n",
+      " 2 eol_content_id 6277374 non-null float64\n",
+      " 3 eol_page_id 6277374 non-null float64\n",
+      " 4 bioscan_part 1128313 non-null float64\n",
+      " 5 bioscan_filename 1128313 non-null object \n",
+      " 6 inat21_filename 2686843 non-null object \n",
+      " 7 inat21_cls_name 2686843 non-null object \n",
+      " 8 inat21_cls_num 2686843 non-null float64\n",
+      " 9 kingdom 9831721 non-null object \n",
+      " 10 phylum 9833317 non-null object \n",
+      " 11 class 9813548 non-null object \n",
+      " 12 order 9807409 non-null object \n",
+      " 13 family 9775447 non-null object \n",
+      " 14 genus 8908268 non-null object \n",
+      " 15 species 8749857 non-null object \n",
+      " 16 common 10092530 non-null object \n",
+      "dtypes: float64(4), object(13)\n",
+      "memory usage: 1.3+ GB\n"
+     ]
+    }
+   ],
+   "source": [
+    "df.info(show_counts = True)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The `train_small` split duplicates images from `train`, so we drop it to analyze the full training set plus val."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "`predicted-catalog` doesn't have `train_small`; hence, it's a smaller file."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Let's add a column indicating the original data source so we can also get some stats by data source, specifically focusing on EOL since we know the licensing for BIOSCAN-1M and iNat21."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 3,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Add data_source column for easier slicing\n",
+    "df.loc[df['inat21_filename'].notna(), 'data_source'] = 'iNat21'\n",
+    "df.loc[df['bioscan_filename'].notna(), 'data_source'] = 'BIOSCAN'\n",
+    "df.loc[df['eol_content_id'].notna(), 'data_source'] = 'EOL'"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### Get just EOL CSV for Media Manifest Merge"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 4,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "eol_df = df.loc[df['data_source'] == 'EOL']"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 7,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
| 462 |
+
"text/plain": [
|
| 463 |
+
" split treeoflife_id eol_content_id eol_page_id \\\n",
|
| 464 |
+
"0 train f2f0aa29-e87b-469c-bf5b-51a3611ab001 22131926.0 269504.0 \n",
|
| 465 |
+
"1 train 5faa4f55-32e9-4872-953d-567e5d232e52 22291283.0 6101931.0 \n",
|
| 466 |
+
"2 train 2282f2bf-2b52-4522-b588-dd6f356d5fd6 21802775.0 45513632.0 \n",
|
| 467 |
+
"3 train 76b57c36-2181-4e6d-a5c4-b40e22a09449 12784812.0 51655800.0 \n",
|
| 468 |
+
"4 train f57d3ab6-2cf5-484b-a590-e2a3d49a3ca2 29713643.0 45515896.0 \n",
|
| 469 |
+
"\n",
|
| 470 |
+
" bioscan_part bioscan_filename inat21_filename inat21_cls_name \\\n",
|
| 471 |
+
"0 NaN NaN NaN NaN \n",
|
| 472 |
+
"1 NaN NaN NaN NaN \n",
|
| 473 |
+
"2 NaN NaN NaN NaN \n",
|
| 474 |
+
"3 NaN NaN NaN NaN \n",
|
| 475 |
+
"4 NaN NaN NaN NaN \n",
|
| 476 |
+
"\n",
|
| 477 |
+
" inat21_cls_num kingdom phylum class order \\\n",
|
| 478 |
+
"0 NaN Animalia Arthropoda Insecta Lepidoptera \n",
|
| 479 |
+
"1 NaN Plantae Tracheophyta Polypodiopsida Polypodiales \n",
|
| 480 |
+
"2 NaN Animalia Chordata Aves Passeriformes \n",
|
| 481 |
+
"3 NaN NaN NaN NaN NaN \n",
|
| 482 |
+
"4 NaN Animalia Chordata Aves Casuariiformes \n",
|
| 483 |
+
"\n",
|
| 484 |
+
" family genus species common \\\n",
|
| 485 |
+
"0 Lycaenidae Orthomiella rantaizana Chinese Straight-wing Blue \n",
|
| 486 |
+
"1 Woodsiaceae Woodsia subcordata Woodsia subcordata \n",
|
| 487 |
+
"2 Laniidae Lanius minor Lesser Grey Shrike \n",
|
| 488 |
+
"3 NaN NaN tenuis Tenuis \n",
|
| 489 |
+
"4 Casuariidae Casuarius casuarius Southern Cassowary \n",
|
| 490 |
+
"\n",
|
| 491 |
+
" data_source \n",
|
| 492 |
+
"0 EOL \n",
|
| 493 |
+
"1 EOL \n",
|
| 494 |
+
"2 EOL \n",
|
| 495 |
+
"3 EOL \n",
|
| 496 |
+
"4 EOL "
|
| 497 |
+
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"eol_df.head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We don't need the BIOSCAN or iNat21 columns, nor the taxa columns."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Index(['treeoflife_id', 'eol_content_id', 'eol_page_id'], dtype='object')"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"eol_license_cols = eol_df.columns[1:4]\n",
"eol_license_cols"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"eol_df = eol_df[eol_license_cols]"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"treeoflife_id     6277374\n",
"eol_content_id    6277374\n",
"eol_page_id        504018\n",
"dtype: int64"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"eol_df.nunique()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"Number of unique `eol_content_id`s and `treeoflife_id`s match, and match with total number of `eol_content_id`s shown above in the info for the full dataset."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Merge with Media Manifest\n",
"Let's merge with the [media manifest](https://huggingface.co/datasets/imageomics/eol/blob/be7b7e6c372f6547e30030e9576d9cc638320099/data/interim/media_manifest.csv), from which all these images should have been downloaded, to get a clear picture of what is or isn't in the manifest."
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<class 'pandas.core.frame.DataFrame'>\n",
"RangeIndex: 6574224 entries, 0 to 6574223\n",
"Data columns (total 6 columns):\n",
" #   Column                  Non-Null Count    Dtype \n",
"---  ------                  --------------    ----- \n",
" 0   EOL content ID          6574224 non-null  int64 \n",
" 1   EOL page ID             6574224 non-null  int64 \n",
" 2   Medium Source URL       6574222 non-null  object\n",
" 3   EOL Full-Size Copy URL  6574224 non-null  object\n",
" 4   License Name            6574224 non-null  object\n",
" 5   Copyright Owner         5942181 non-null  object\n",
"dtypes: int64(2), object(4)\n",
"memory usage: 300.9+ MB\n"
]
}
],
"source": [
"media = pd.read_csv(\"../data/media_manifest (july 26).csv\", dtype = {\"EOL content ID\": \"int64\", \"EOL page ID\": \"int64\"}, low_memory = False)\n",
"media.info(show_counts = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We want to make sure the EOL content and page IDs have matching types, so we'll set them to `int64` in `eol_df` too."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<class 'pandas.core.frame.DataFrame'>\n",
"Index: 6277374 entries, 0 to 6277373\n",
"Data columns (total 3 columns):\n",
" #   Column          Dtype \n",
"---  ------          ----- \n",
" 0   treeoflife_id   object\n",
" 1   eol_content_id  int64 \n",
" 2   eol_page_id     int64 \n",
"dtypes: int64(2), object(1)\n",
"memory usage: 191.6+ MB\n"
]
}
],
"source": [
"eol_df = eol_df.astype({\"eol_content_id\": \"int64\", \"eol_page_id\": \"int64\"})\n",
"eol_df.info()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Notice that we have about 300K more entries in the media manifest, which is roughly what we'd expect from the [comparison of predicted-catalog to the original full list](https://huggingface.co/datasets/imageomics/ToL-EDA/blob/main/notebooks/ToL_predicted-catalog_EDA.ipynb)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Rename media columns for easier matching."
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"media.rename(columns = {\"EOL content ID\": \"eol_content_id\", \"EOL page ID\": \"eol_page_id\"}, inplace = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Check consistency of merge when matching both `eol_content_id` and `eol_page_id`."
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"merge_cols = [\"eol_content_id\", \"eol_page_id\"]"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<class 'pandas.core.frame.DataFrame'>\n",
"RangeIndex: 6163903 entries, 0 to 6163902\n",
"Data columns (total 7 columns):\n",
" #   Column                  Non-Null Count    Dtype \n",
"---  ------                  --------------    ----- \n",
" 0   treeoflife_id           6163903 non-null  object\n",
" 1   eol_content_id          6163903 non-null  int64 \n",
" 2   eol_page_id             6163903 non-null  int64 \n",
" 3   Medium Source URL       6163903 non-null  object\n",
" 4   EOL Full-Size Copy URL  6163903 non-null  object\n",
" 5   License Name            6163903 non-null  object\n",
" 6   Copyright Owner         5549428 non-null  object\n",
"dtypes: int64(2), object(5)\n",
"memory usage: 329.2+ MB\n"
]
}
],
"source": [
"eol_df_media_cp = pd.merge(eol_df, media, how = \"inner\", left_on = merge_cols, right_on = merge_cols)\n",
"eol_df_media_cp.info(show_counts = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Okay, so we do have a mismatch of about 113K images where the content IDs and page IDs don't both match.\n",
"\n",
"Let's save this to a CSV."
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [],
"source": [
"eol_df_media_cp.to_csv(\"../data/eol_files/eol_cp_match_media.csv\", index = False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that merging on just content IDs is going to give the same numbers."
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<class 'pandas.core.frame.DataFrame'>\n",
"RangeIndex: 6163903 entries, 0 to 6163902\n",
"Data columns (total 8 columns):\n",
" #   Column                  Non-Null Count    Dtype \n",
"---  ------                  --------------    ----- \n",
" 0   treeoflife_id           6163903 non-null  object\n",
" 1   eol_content_id          6163903 non-null  int64 \n",
" 2   eol_page_id_x           6163903 non-null  int64 \n",
" 3   eol_page_id_y           6163903 non-null  int64 \n",
" 4   Medium Source URL       6163903 non-null  object\n",
" 5   EOL Full-Size Copy URL  6163903 non-null  object\n",
" 6   License Name            6163903 non-null  object\n",
" 7   Copyright Owner         5549428 non-null  object\n",
"dtypes: int64(3), object(5)\n",
"memory usage: 376.2+ MB\n"
]
}
],
"source": [
"eol_media_content = pd.merge(eol_df,\n",
"                             media,\n",
"                             how = \"inner\",\n",
"                             left_on = \"eol_content_id\",\n",
"                             right_on = \"eol_content_id\")\n",
"eol_media_content.info(show_counts = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Things get interesting when we look at uniqueness: there are fewer _**unique**_ `Medium Source URL`s than rows, suggesting duplicated images that have different content IDs and unique `EOL Full-Size Copy URL`s, so EOL presumably hosts them in duplicate."
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"treeoflife_id             6163903\n",
"eol_content_id            6163903\n",
"eol_page_id                503865\n",
"Medium Source URL         6153828\n",
"EOL Full-Size Copy URL    6163903\n",
"License Name                   16\n",
"Copyright Owner            345470\n",
"dtype: int64"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"eol_df_media_cp.nunique()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We'll look into this a little further down. First, let's get a list of all the `treeoflife_id`s that do match to the media manifest so we can make a CSV with all the images that _**aren't**_ matching."
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['f2f0aa29-e87b-469c-bf5b-51a3611ab001',\n",
" '5faa4f55-32e9-4872-953d-567e5d232e52',\n",
" '2282f2bf-2b52-4522-b588-dd6f356d5fd6',\n",
" '76b57c36-2181-4e6d-a5c4-b40e22a09449',\n",
" 'f57d3ab6-2cf5-484b-a590-e2a3d49a3ca2']"
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tol_ids_in_media = list(eol_df_media_cp.treeoflife_id)\n",
"tol_ids_in_media[:5]"
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
"    .dataframe tbody tr th:only-of-type {\n",
"        vertical-align: middle;\n",
"    }\n",
"\n",
"    .dataframe tbody tr th {\n",
"        vertical-align: top;\n",
"    }\n",
"\n",
"    .dataframe thead th {\n",
"        text-align: right;\n",
"    }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
"  <thead>\n",
"    <tr style=\"text-align: right;\">\n",
"      <th></th>\n",
"      <th>treeoflife_id</th>\n",
"      <th>eol_content_id</th>\n",
"      <th>eol_page_id</th>\n",
"    </tr>\n",
"  </thead>\n",
"  <tbody>\n",
"    <tr>\n",
"      <th>0</th>\n",
"      <td>f2f0aa29-e87b-469c-bf5b-51a3611ab001</td>\n",
"      <td>22131926</td>\n",
"      <td>269504</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <th>1</th>\n",
"      <td>5faa4f55-32e9-4872-953d-567e5d232e52</td>\n",
"      <td>22291283</td>\n",
"      <td>6101931</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <th>2</th>\n",
"      <td>2282f2bf-2b52-4522-b588-dd6f356d5fd6</td>\n",
"      <td>21802775</td>\n",
"      <td>45513632</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <th>3</th>\n",
"      <td>76b57c36-2181-4e6d-a5c4-b40e22a09449</td>\n",
"      <td>12784812</td>\n",
"      <td>51655800</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <th>4</th>\n",
"      <td>f57d3ab6-2cf5-484b-a590-e2a3d49a3ca2</td>\n",
"      <td>29713643</td>\n",
"      <td>45515896</td>\n",
"    </tr>\n",
"  </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
"                          treeoflife_id  eol_content_id  eol_page_id\n",
"0  f2f0aa29-e87b-469c-bf5b-51a3611ab001        22131926       269504\n",
"1  5faa4f55-32e9-4872-953d-567e5d232e52        22291283      6101931\n",
"2  2282f2bf-2b52-4522-b588-dd6f356d5fd6        21802775     45513632\n",
"3  76b57c36-2181-4e6d-a5c4-b40e22a09449        12784812     51655800\n",
"4  f57d3ab6-2cf5-484b-a590-e2a3d49a3ca2        29713643     45515896"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"eol_df.head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's save a copy of the EOL section with content and page IDs that are mismatched."
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<class 'pandas.core.frame.DataFrame'>\n",
"Index: 113471 entries, 126 to 6277290\n",
"Data columns (total 3 columns):\n",
" #   Column          Non-Null Count   Dtype \n",
"---  ------          --------------   ----- \n",
" 0   treeoflife_id   113471 non-null  object\n",
" 1   eol_content_id  113471 non-null  int64 \n",
" 2   eol_page_id     113471 non-null  int64 \n",
"dtypes: int64(2), object(1)\n",
"memory usage: 3.5+ MB\n"
]
}
],
"source": [
"eol_df_missing_media = eol_df.loc[~eol_df.treeoflife_id.isin(tol_ids_in_media)]\n",
"eol_df_missing_media.info(show_counts = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"How many pages are these distributed across?"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"treeoflife_id     113471\n",
"eol_content_id    113471\n",
"eol_page_id         9762\n",
"dtype: int64"
]
},
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"eol_df_missing_media.nunique()"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [],
"source": [
"eol_df_missing_media.to_csv(\"../data/eol_files/eol_cp_not_media.csv\", index = False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Get Content IDs in Media Manifest that didn't match \n"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[22131926, 22291283, 21802775, 12784812, 29713643]"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"content_ids_in_catalog = list(eol_df_media_cp.eol_content_id)\n",
"content_ids_in_catalog[:5]"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<class 'pandas.core.frame.DataFrame'>\n",
"Index: 410321 entries, 65 to 6574223\n",
"Data columns (total 6 columns):\n",
" #   Column                  Non-Null Count   Dtype \n",
"---  ------                  --------------   ----- \n",
" 0   eol_content_id          410321 non-null  int64 \n",
" 1   eol_page_id             410321 non-null  int64 \n",
" 2   Medium Source URL       410319 non-null  object\n",
" 3   EOL Full-Size Copy URL  410321 non-null  object\n",
" 4   License Name            410321 non-null  object\n",
" 5   Copyright Owner         392753 non-null  object\n",
"dtypes: int64(2), object(4)\n",
"memory usage: 21.9+ MB\n"
]
}
],
"source": [
"media_missing_tol = media.loc[~media.eol_content_id.isin(content_ids_in_catalog)]\n",
"media_missing_tol.info(show_counts = True)"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [],
"source": [
"# Save media manifest content IDs that didn't match to predicted-catalog\n",
"media_missing_tol.to_csv(\"../data/eol_files/media_content_not_catalog.csv\", index = False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Compare to Dec 6 Media Manifest"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<class 'pandas.core.frame.DataFrame'>\n",
"RangeIndex: 6576247 entries, 0 to 6576246\n",
"Data columns (total 6 columns):\n",
" #   Column                  Non-Null Count    Dtype \n",
"---  ------                  --------------    ----- \n",
" 0   EOL content ID          6576247 non-null  int64 \n",
" 1   EOL page ID             6576247 non-null  int64 \n",
" 2   Medium Source URL       6576245 non-null  object\n",
" 3   EOL Full-Size Copy URL  6576247 non-null  object\n",
" 4   License Name            6576247 non-null  object\n",
" 5   Copyright Owner         5944184 non-null  object\n",
"dtypes: int64(2), object(4)\n",
"memory usage: 301.0+ MB\n"
]
}
],
"source": [
"dec_media = pd.read_csv(\"../data/media_manifest_Dec6.csv\", dtype = {\"EOL content ID\": \"int64\", \"EOL page ID\": \"int64\"}, low_memory = False)\n",
"dec_media.info(show_counts = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Only about 2,000 more images than the July 26 media manifest."
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [],
"source": [
"dec_media.rename(columns = {\"EOL content ID\": \"eol_content_id\", \"EOL page ID\": \"eol_page_id\"}, inplace = True)"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<class 'pandas.core.frame.DataFrame'>\n",
"RangeIndex: 5796323 entries, 0 to 5796322\n",
"Data columns (total 7 columns):\n",
" #   Column                  Non-Null Count    Dtype \n",
"---  ------                  --------------    ----- \n",
" 0   treeoflife_id           5796323 non-null  object\n",
" 1   eol_content_id          5796323 non-null  int64 \n",
" 2   eol_page_id             5796323 non-null  int64 \n",
" 3   Medium Source URL       5796323 non-null  object\n",
" 4   EOL Full-Size Copy URL  5796323 non-null  object\n",
" 5   License Name            5796323 non-null  object\n",
" 6   Copyright Owner         5189325 non-null  object\n",
"dtypes: int64(2), object(5)\n",
"memory usage: 309.6+ MB\n"
]
}
],
"source": [
"eol_dec_media_cp = pd.merge(eol_df, dec_media, how = \"inner\", left_on = merge_cols, right_on = merge_cols)\n",
"eol_dec_media_cp.info(show_counts = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And we have _fewer_ matches. Let's compare this to the July 26 manifest and see if there are content IDs only in the Dec manifest that do match to the predicted catalog."
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<class 'pandas.core.frame.DataFrame'>\n",
"RangeIndex: 6021614 entries, 0 to 6021613\n",
"Data columns (total 10 columns):\n",
" #   Column                    Non-Null Count    Dtype \n",
"---  ------                    --------------    ----- \n",
" 0   eol_content_id            6021614 non-null  int64 \n",
" 1   eol_page_id               6021614 non-null  int64 \n",
" 2   Medium Source URL_x       6021612 non-null  object\n",
" 3   EOL Full-Size Copy URL_x  6021614 non-null  object\n",
" 4   License Name_x            6021614 non-null  object\n",
" 5   Copyright Owner_x         5399986 non-null  object\n",
" 6   Medium Source URL_y       6021612 non-null  object\n",
" 7   EOL Full-Size Copy URL_y  6021614 non-null  object\n",
" 8   License Name_y            6021614 non-null  object\n",
" 9   Copyright Owner_y         5399986 non-null  object\n",
"dtypes: int64(2), object(8)\n",
"memory usage: 459.4+ MB\n"
]
}
],
"source": [
"media_merge = pd.merge(dec_media, media, how = \"inner\", left_on = merge_cols, right_on = merge_cols)\n",
"media_merge.info(show_counts = True)"
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[5470022, 5470023, 5470024, 5470025, 5470026]"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"content_ids_both_media = list(media_merge.eol_content_id)\n",
"content_ids_both_media[:5]"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<class 'pandas.core.frame.DataFrame'>\n",
"Index: 554633 entries, 6021614 to 6576246\n",
"Data columns (total 6 columns):\n",
" #   Column                  Non-Null Count   Dtype \n",
"---  ------                  --------------   ----- \n",
" 0   eol_content_id          554633 non-null  int64 \n",
" 1   eol_page_id             554633 non-null  int64 \n",
" 2   Medium Source URL       554633 non-null  object\n",
" 3   EOL Full-Size Copy URL  554633 non-null  object\n",
" 4   License Name            554633 non-null  object\n",
" 5   Copyright Owner         544198 non-null  object\n",
"dtypes: int64(2), object(4)\n",
"memory usage: 29.6+ MB\n"
]
}
],
"source": [
"media_dec_notJuly = dec_media.loc[~dec_media.eol_content_id.isin(content_ids_both_media)]\n",
"media_dec_notJuly.info(show_counts = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's see if any of these are in our predicted catalog."
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<class 'pandas.core.frame.DataFrame'>\n",
"RangeIndex: 0 entries\n",
"Data columns (total 7 columns):\n",
" #   Column                  Non-Null Count  Dtype \n",
"---  ------                  --------------  ----- \n",
" 0   treeoflife_id           0 non-null      object\n",
" 1   eol_content_id          0 non-null      int64 \n",
" 2   eol_page_id             0 non-null      int64 \n",
" 3   Medium Source URL       0 non-null      object\n",
" 4   EOL Full-Size Copy URL  0 non-null      object\n",
" 5   License Name            0 non-null      object\n",
" 6   Copyright Owner         0 non-null      object\n",
"dtypes: int64(2), object(5)\n",
"memory usage: 132.0+ bytes\n"
]
}
],
"source": [
"eol_dec_only = pd.merge(eol_df, media_dec_notJuly, how = \"inner\", left_on = merge_cols, right_on = merge_cols)\n",
"eol_dec_only.info(show_counts = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Okay, no matches here, so we'll stick with the July 26 file above; this doesn't help us recoup anything.\n",
"\n",
"The old media manifest (July 6) won't read the EOL content and page IDs in properly, so we can't check it against these."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Check out the Duplication of Medium Source URLs"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [],
"source": [
"# Identify unique Medium Source URLs\n",
"eol_df_media_cp['duplicate'] = eol_df_media_cp.duplicated(subset = \"Medium Source URL\", keep = 'first')\n",
"eol_df_media_unique = eol_df_media_cp.loc[~eol_df_media_cp['duplicate']]"
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<class 'pandas.core.frame.DataFrame'>\n",
"Index: 6153828 entries, 0 to 6163902\n",
"Data columns (total 8 columns):\n",
" #   Column                  Non-Null Count    Dtype \n",
"---  ------                  --------------    ----- \n",
" 0   treeoflife_id           6153828 non-null  object\n",
" 1   eol_content_id          6153828 non-null  int64 \n",
" 2   eol_page_id             6153828 non-null  int64 \n",
" 3   Medium Source URL       6153828 non-null  object\n",
" 4   EOL Full-Size Copy URL  6153828 non-null  object\n",
" 5   License Name            6153828 non-null  object\n",
" 6   Copyright Owner         5539739 non-null  object\n",
" 7   duplicate               6153828 non-null  bool  \n",
"dtypes: bool(1), int64(2), object(5)\n",
"memory usage: 381.5+ MB\n"
]
}
],
"source": [
"eol_df_media_unique.info(show_counts = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
|
| 1345 |
+
"About 10K images are duplicated. Let's see how many unique `Medium Source URL`s that covers."
|
| 1346 |
+
]
|
| 1347 |
+
},
|
| 1348 |
+
{
|
| 1349 |
+
"cell_type": "code",
|
| 1350 |
+
"execution_count": 26,
|
| 1351 |
+
"metadata": {},
|
| 1352 |
+
"outputs": [
|
| 1353 |
+
{
|
| 1354 |
+
"data": {
|
| 1355 |
+
"text/plain": [
|
| 1356 |
+
"treeoflife_id 10075\n",
|
| 1357 |
+
"eol_content_id 10075\n",
|
| 1358 |
+
"eol_page_id 5391\n",
|
| 1359 |
+
"Medium Source URL 5833\n",
|
| 1360 |
+
"EOL Full-Size Copy URL 10075\n",
|
| 1361 |
+
"License Name 9\n",
|
| 1362 |
+
"Copyright Owner 545\n",
|
| 1363 |
+
"duplicate 1\n",
|
| 1364 |
+
"dtype: int64"
|
| 1365 |
+
]
|
| 1366 |
+
},
|
| 1367 |
+
"execution_count": 26,
|
| 1368 |
+
"metadata": {},
|
| 1369 |
+
"output_type": "execute_result"
|
| 1370 |
+
}
|
| 1371 |
+
],
|
| 1372 |
+
"source": [
|
| 1373 |
+
"eol_df_media_cp.loc[eol_df_media_cp['duplicate']].nunique()"
|
| 1374 |
+
]
|
| 1375 |
+
},
|
| 1376 |
+
{
|
| 1377 |
+
"cell_type": "markdown",
|
| 1378 |
+
"metadata": {},
|
| 1379 |
+
"source": [
|
| 1380 |
+
"There are 5,833 unique `Medium Source URLs` that are duplicated."
|
| 1381 |
+
]
|
| 1382 |
+
},
|
| 1383 |
+
{
|
| 1384 |
+
"cell_type": "markdown",
|
| 1385 |
+
"metadata": {},
|
| 1386 |
+
"source": [
|
| 1387 |
+
"### Check how this compares to Catalog \n",
|
| 1388 |
+
"Let's see if the missing images are all in TreeOfLife-10M, or a mix between it and Rare Species."
|
| 1389 |
+
]
|
| 1390 |
+
},
|
| 1391 |
+
{
|
| 1392 |
+
"cell_type": "code",
|
| 1393 |
+
"execution_count": 27,
|
| 1394 |
+
"metadata": {},
|
| 1395 |
+
"outputs": [],
|
| 1396 |
+
"source": [
|
| 1397 |
+
"cat_df = pd.read_csv(\"../data/catalog.csv\", low_memory = False)\n",
|
| 1398 |
+
"# Remove duplicates in train_small\n",
|
| 1399 |
+
"cat_df = cat_df.loc[cat_df.split != 'train_small']"
|
| 1400 |
+
]
|
| 1401 |
+
},
|
| 1402 |
+
{
|
| 1403 |
+
"cell_type": "code",
|
| 1404 |
+
"execution_count": 28,
|
| 1405 |
+
"metadata": {},
|
| 1406 |
+
"outputs": [],
|
| 1407 |
+
"source": [
|
| 1408 |
+
"# Add data_source column for easier slicing\n",
|
| 1409 |
+
"cat_df.loc[cat_df['inat21_filename'].notna(), 'data_source'] = 'iNat21'\n",
|
| 1410 |
+
"cat_df.loc[cat_df['bioscan_filename'].notna(), 'data_source'] = 'BIOSCAN'\n",
|
| 1411 |
+
"cat_df.loc[cat_df['eol_content_id'].notna(), 'data_source'] = 'EOL'"
|
| 1412 |
+
]
|
| 1413 |
+
},
|
| 1414 |
+
{
|
| 1415 |
+
"cell_type": "code",
|
| 1416 |
+
"execution_count": 29,
|
| 1417 |
+
"metadata": {},
|
| 1418 |
+
"outputs": [],
|
| 1419 |
+
"source": [
|
| 1420 |
+
"eol_cat_df = cat_df.loc[cat_df.data_source == \"EOL\"]"
|
| 1421 |
+
]
|
| 1422 |
+
},
|
| 1423 |
+
{
|
| 1424 |
+
"cell_type": "markdown",
|
| 1425 |
+
"metadata": {},
|
| 1426 |
+
"source": [
|
| 1427 |
+
"Reduce down to just relevant columns and recast the EOL content and page IDs as `int64`."
|
| 1428 |
+
]
|
| 1429 |
+
},
|
| 1430 |
+
{
|
| 1431 |
+
"cell_type": "code",
|
| 1432 |
+
"execution_count": 30,
|
| 1433 |
+
"metadata": {},
|
| 1434 |
+
"outputs": [],
|
| 1435 |
+
"source": [
|
| 1436 |
+
"eol_cat_df = eol_cat_df[eol_license_cols]"
|
| 1437 |
+
]
|
| 1438 |
+
},
|
| 1439 |
+
{
|
| 1440 |
+
"cell_type": "code",
|
| 1441 |
+
"execution_count": 31,
|
| 1442 |
+
"metadata": {},
|
| 1443 |
+
"outputs": [],
|
| 1444 |
+
"source": [
|
| 1445 |
+
"eol_cat_df = eol_cat_df.astype({\"eol_content_id\": \"int64\", \"eol_page_id\": \"int64\"})"
|
| 1446 |
+
]
|
| 1447 |
+
},
|
| 1448 |
+
{
|
| 1449 |
+
"cell_type": "code",
|
| 1450 |
+
"execution_count": 32,
|
| 1451 |
+
"metadata": {},
|
| 1452 |
+
"outputs": [
|
| 1453 |
+
{
|
| 1454 |
+
"name": "stdout",
|
| 1455 |
+
"output_type": "stream",
|
| 1456 |
+
"text": [
|
| 1457 |
+
"<class 'pandas.core.frame.DataFrame'>\n",
|
| 1458 |
+
"Index: 6250420 entries, 956715 to 11000930\n",
|
| 1459 |
+
"Data columns (total 3 columns):\n",
|
| 1460 |
+
" # Column Dtype \n",
|
| 1461 |
+
"--- ------ ----- \n",
|
| 1462 |
+
" 0 treeoflife_id object\n",
|
| 1463 |
+
" 1 eol_content_id int64 \n",
|
| 1464 |
+
" 2 eol_page_id int64 \n",
|
| 1465 |
+
"dtypes: int64(2), object(1)\n",
|
| 1466 |
+
"memory usage: 190.7+ MB\n"
|
| 1467 |
+
]
|
| 1468 |
+
}
|
| 1469 |
+
],
|
| 1470 |
+
"source": [
|
| 1471 |
+
"eol_cat_df.info()"
|
| 1472 |
+
]
|
| 1473 |
+
},
|
| 1474 |
+
{
|
| 1475 |
+
"cell_type": "code",
|
| 1476 |
+
"execution_count": 33,
|
| 1477 |
+
"metadata": {},
|
| 1478 |
+
"outputs": [
|
| 1479 |
+
{
|
| 1480 |
+
"name": "stdout",
|
| 1481 |
+
"output_type": "stream",
|
| 1482 |
+
"text": [
|
| 1483 |
+
"<class 'pandas.core.frame.DataFrame'>\n",
|
| 1484 |
+
"Index: 112575 entries, 956761 to 10998986\n",
|
| 1485 |
+
"Data columns (total 3 columns):\n",
|
| 1486 |
+
" # Column Non-Null Count Dtype \n",
|
| 1487 |
+
"--- ------ -------------- ----- \n",
|
| 1488 |
+
" 0 treeoflife_id 112575 non-null object\n",
|
| 1489 |
+
" 1 eol_content_id 112575 non-null int64 \n",
|
| 1490 |
+
" 2 eol_page_id 112575 non-null int64 \n",
|
| 1491 |
+
"dtypes: int64(2), object(1)\n",
|
| 1492 |
+
"memory usage: 3.4+ MB\n"
|
| 1493 |
+
]
|
| 1494 |
+
}
|
| 1495 |
+
],
|
| 1496 |
+
"source": [
|
| 1497 |
+
"eol_cat_df.loc[eol_cat_df[\"treeoflife_id\"].isin(list(eol_df_missing_media.treeoflife_id))].info(show_counts = True)"
|
| 1498 |
+
]
|
| 1499 |
+
},
|
| 1500 |
+
{
|
| 1501 |
+
"cell_type": "markdown",
|
| 1502 |
+
"metadata": {},
|
| 1503 |
+
"source": [
|
| 1504 |
+
"They are _**almost**_ entirely in TreeOfLife-10M, but _some_ may be in Rare Species.\n",
|
| 1505 |
+
"\n",
|
| 1506 |
+
"#### Quick check for the duplicates here"
|
| 1507 |
+
]
|
| 1508 |
+
},
|
| 1509 |
+
{
|
| 1510 |
+
"cell_type": "code",
|
| 1511 |
+
"execution_count": 34,
|
| 1512 |
+
"metadata": {},
|
| 1513 |
+
"outputs": [
|
| 1514 |
+
{
|
| 1515 |
+
"data": {
|
| 1516 |
+
"text/plain": [
|
| 1517 |
+
"['e37fc4b8-73ef-4a8c-8a65-cf65f9f1174e',\n",
|
| 1518 |
+
" '5e3edcd1-8150-4534-8f69-f63c447afd7d',\n",
|
| 1519 |
+
" '776a596f-96a1-47d8-b510-db8fb41be44d',\n",
|
| 1520 |
+
" '7ce491fa-7573-46e8-b11a-ebac6d702bda',\n",
|
| 1521 |
+
" 'd4ca1530-685d-46e8-969c-44a74f0a00d4']"
|
| 1522 |
+
]
|
| 1523 |
+
},
|
| 1524 |
+
"execution_count": 34,
|
| 1525 |
+
"metadata": {},
|
| 1526 |
+
"output_type": "execute_result"
|
| 1527 |
+
}
|
| 1528 |
+
],
|
| 1529 |
+
"source": [
|
| 1530 |
+
"tol_ids_duplicated = list(eol_df_media_cp.loc[eol_df_media_cp['duplicate'], \"treeoflife_id\"].values)\n",
|
| 1531 |
+
"tol_ids_duplicated[:5]"
|
| 1532 |
+
]
|
| 1533 |
+
},
|
| 1534 |
+
{
|
| 1535 |
+
"cell_type": "code",
|
| 1536 |
+
"execution_count": 35,
|
| 1537 |
+
"metadata": {},
|
| 1538 |
+
"outputs": [
|
| 1539 |
+
{
|
| 1540 |
+
"data": {
|
| 1541 |
+
"text/html": [
|
| 1542 |
+
"<div>\n",
|
| 1543 |
+
"<style scoped>\n",
|
| 1544 |
+
" .dataframe tbody tr th:only-of-type {\n",
|
| 1545 |
+
" vertical-align: middle;\n",
|
| 1546 |
+
" }\n",
|
| 1547 |
+
"\n",
|
| 1548 |
+
" .dataframe tbody tr th {\n",
|
| 1549 |
+
" vertical-align: top;\n",
|
| 1550 |
+
" }\n",
|
| 1551 |
+
"\n",
|
| 1552 |
+
" .dataframe thead th {\n",
|
| 1553 |
+
" text-align: right;\n",
|
| 1554 |
+
" }\n",
|
| 1555 |
+
"</style>\n",
|
| 1556 |
+
"<table border=\"1\" class=\"dataframe\">\n",
|
| 1557 |
+
" <thead>\n",
|
| 1558 |
+
" <tr style=\"text-align: right;\">\n",
|
| 1559 |
+
" <th></th>\n",
|
| 1560 |
+
" <th>treeoflife_id</th>\n",
|
| 1561 |
+
" <th>eol_content_id</th>\n",
|
| 1562 |
+
" <th>eol_page_id</th>\n",
|
| 1563 |
+
" <th>Medium Source URL</th>\n",
|
| 1564 |
+
" <th>EOL Full-Size Copy URL</th>\n",
|
| 1565 |
+
" <th>License Name</th>\n",
|
| 1566 |
+
" <th>Copyright Owner</th>\n",
|
| 1567 |
+
" <th>duplicate</th>\n",
|
| 1568 |
+
" </tr>\n",
|
| 1569 |
+
" </thead>\n",
|
| 1570 |
+
" <tbody>\n",
|
| 1571 |
+
" <tr>\n",
|
| 1572 |
+
" <th>33275</th>\n",
|
| 1573 |
+
" <td>e37fc4b8-73ef-4a8c-8a65-cf65f9f1174e</td>\n",
|
| 1574 |
+
" <td>13611057</td>\n",
|
| 1575 |
+
" <td>37146541</td>\n",
|
| 1576 |
+
" <td>https://pensoft.net/J_FILES/1/articles/5492/ex...</td>\n",
|
| 1577 |
+
" <td>https://content.eol.org/data/media/d4/f0/a9/58...</td>\n",
|
| 1578 |
+
" <td>cc-by-3.0</td>\n",
|
| 1579 |
+
" <td>James K. Liebherr</td>\n",
|
| 1580 |
+
" <td>True</td>\n",
|
| 1581 |
+
" </tr>\n",
|
| 1582 |
+
" <tr>\n",
|
| 1583 |
+
" <th>36445</th>\n",
|
| 1584 |
+
" <td>5e3edcd1-8150-4534-8f69-f63c447afd7d</td>\n",
|
| 1585 |
+
" <td>13620019</td>\n",
|
| 1586 |
+
" <td>16355052</td>\n",
|
| 1587 |
+
" <td>https://pensoft.net/J_FILES/1/articles/7546/ex...</td>\n",
|
| 1588 |
+
" <td>https://content.eol.org/data/media/d5/13/ac/58...</td>\n",
|
| 1589 |
+
" <td>cc-by-3.0</td>\n",
|
| 1590 |
+
" <td>Jin-Kyung Choi, Jong-Wook Lee</td>\n",
|
| 1591 |
+
" <td>True</td>\n",
|
| 1592 |
+
" </tr>\n",
|
| 1593 |
+
" <tr>\n",
|
| 1594 |
+
" <th>52304</th>\n",
|
| 1595 |
+
" <td>776a596f-96a1-47d8-b510-db8fb41be44d</td>\n",
|
| 1596 |
+
" <td>13610902</td>\n",
|
| 1597 |
+
" <td>732357</td>\n",
|
| 1598 |
+
" <td>https://pensoft.net/J_FILES/1/articles/5352/ex...</td>\n",
|
| 1599 |
+
" <td>https://content.eol.org/data/media/d4/f0/11/58...</td>\n",
|
| 1600 |
+
" <td>cc-by-3.0</td>\n",
|
| 1601 |
+
" <td>Mary Liz Jameson, Alain Drumont</td>\n",
|
| 1602 |
+
" <td>True</td>\n",
|
| 1603 |
+
" </tr>\n",
|
| 1604 |
+
" <tr>\n",
|
| 1605 |
+
" <th>67099</th>\n",
|
| 1606 |
+
" <td>7ce491fa-7573-46e8-b11a-ebac6d702bda</td>\n",
|
| 1607 |
+
" <td>14119729</td>\n",
|
| 1608 |
+
" <td>62672726</td>\n",
|
| 1609 |
+
" <td>https://live.staticflickr.com/4302/35924815981...</td>\n",
|
| 1610 |
+
" <td>https://content.eol.org/data/media/d7/93/6e/54...</td>\n",
|
| 1611 |
+
" <td>cc-publicdomain</td>\n",
|
| 1612 |
+
" <td>Biodiversity Heritage Library</td>\n",
|
| 1613 |
+
" <td>True</td>\n",
|
| 1614 |
+
" </tr>\n",
|
| 1615 |
+
" <tr>\n",
|
| 1616 |
+
" <th>73915</th>\n",
|
| 1617 |
+
" <td>d4ca1530-685d-46e8-969c-44a74f0a00d4</td>\n",
|
| 1618 |
+
" <td>13613433</td>\n",
|
| 1619 |
+
" <td>60227621</td>\n",
|
| 1620 |
+
" <td>https://pensoft.net/J_FILES/1/articles/5999/ex...</td>\n",
|
| 1621 |
+
" <td>https://content.eol.org/data/media/d4/f9/f4/58...</td>\n",
|
| 1622 |
+
" <td>cc-by-3.0</td>\n",
|
| 1623 |
+
" <td>Oleg Pekarsky</td>\n",
|
| 1624 |
+
" <td>True</td>\n",
|
| 1625 |
+
" </tr>\n",
|
| 1626 |
+
" </tbody>\n",
|
| 1627 |
+
"</table>\n",
|
| 1628 |
+
"</div>"
|
| 1629 |
+
],
|
| 1630 |
+
"text/plain": [
|
| 1631 |
+
" treeoflife_id eol_content_id eol_page_id \\\n",
|
| 1632 |
+
"33275 e37fc4b8-73ef-4a8c-8a65-cf65f9f1174e 13611057 37146541 \n",
|
| 1633 |
+
"36445 5e3edcd1-8150-4534-8f69-f63c447afd7d 13620019 16355052 \n",
|
| 1634 |
+
"52304 776a596f-96a1-47d8-b510-db8fb41be44d 13610902 732357 \n",
|
| 1635 |
+
"67099 7ce491fa-7573-46e8-b11a-ebac6d702bda 14119729 62672726 \n",
|
| 1636 |
+
"73915 d4ca1530-685d-46e8-969c-44a74f0a00d4 13613433 60227621 \n",
|
| 1637 |
+
"\n",
|
| 1638 |
+
" Medium Source URL \\\n",
|
| 1639 |
+
"33275 https://pensoft.net/J_FILES/1/articles/5492/ex... \n",
|
| 1640 |
+
"36445 https://pensoft.net/J_FILES/1/articles/7546/ex... \n",
|
| 1641 |
+
"52304 https://pensoft.net/J_FILES/1/articles/5352/ex... \n",
|
| 1642 |
+
"67099 https://live.staticflickr.com/4302/35924815981... \n",
|
| 1643 |
+
"73915 https://pensoft.net/J_FILES/1/articles/5999/ex... \n",
|
| 1644 |
+
"\n",
|
| 1645 |
+
" EOL Full-Size Copy URL License Name \\\n",
|
| 1646 |
+
"33275 https://content.eol.org/data/media/d4/f0/a9/58... cc-by-3.0 \n",
|
| 1647 |
+
"36445 https://content.eol.org/data/media/d5/13/ac/58... cc-by-3.0 \n",
|
| 1648 |
+
"52304 https://content.eol.org/data/media/d4/f0/11/58... cc-by-3.0 \n",
|
| 1649 |
+
"67099 https://content.eol.org/data/media/d7/93/6e/54... cc-publicdomain \n",
|
| 1650 |
+
"73915 https://content.eol.org/data/media/d4/f9/f4/58... cc-by-3.0 \n",
|
| 1651 |
+
"\n",
|
| 1652 |
+
" Copyright Owner duplicate \n",
|
| 1653 |
+
"33275 James K. Liebherr True \n",
|
| 1654 |
+
"36445 Jin-Kyung Choi, Jong-Wook Lee True \n",
|
| 1655 |
+
"52304 Mary Liz Jameson, Alain Drumont True \n",
|
| 1656 |
+
"67099 Biodiversity Heritage Library True \n",
|
| 1657 |
+
"73915 Oleg Pekarsky True "
|
| 1658 |
+
]
|
| 1659 |
+
},
|
| 1660 |
+
"execution_count": 35,
|
| 1661 |
+
"metadata": {},
|
| 1662 |
+
"output_type": "execute_result"
|
| 1663 |
+
}
|
| 1664 |
+
],
|
| 1665 |
+
"source": [
|
| 1666 |
+
"eol_df_media_cp.loc[eol_df_media_cp['duplicate']].head()"
|
| 1667 |
+
]
|
| 1668 |
+
},
|
| 1669 |
+
{
|
| 1670 |
+
"cell_type": "code",
|
| 1671 |
+
"execution_count": 36,
|
| 1672 |
+
"metadata": {},
|
| 1673 |
+
"outputs": [
|
| 1674 |
+
{
|
| 1675 |
+
"name": "stdout",
|
| 1676 |
+
"output_type": "stream",
|
| 1677 |
+
"text": [
|
| 1678 |
+
"<class 'pandas.core.frame.DataFrame'>\n",
|
| 1679 |
+
"Index: 10068 entries, 956913 to 10996963\n",
|
| 1680 |
+
"Data columns (total 3 columns):\n",
|
| 1681 |
+
" # Column Non-Null Count Dtype \n",
|
| 1682 |
+
"--- ------ -------------- ----- \n",
|
| 1683 |
+
" 0 treeoflife_id 10068 non-null object\n",
|
| 1684 |
+
" 1 eol_content_id 10068 non-null int64 \n",
|
| 1685 |
+
" 2 eol_page_id 10068 non-null int64 \n",
|
| 1686 |
+
"dtypes: int64(2), object(1)\n",
|
| 1687 |
+
"memory usage: 314.6+ KB\n"
|
| 1688 |
+
]
|
| 1689 |
+
}
|
| 1690 |
+
],
|
| 1691 |
+
"source": [
|
| 1692 |
+
"eol_cat_df.loc[eol_cat_df[\"treeoflife_id\"].isin(tol_ids_duplicated)].info(show_counts = True)"
|
| 1693 |
+
]
|
| 1694 |
+
},
|
| 1695 |
+
{
|
| 1696 |
+
"cell_type": "markdown",
|
| 1697 |
+
"metadata": {},
|
| 1698 |
+
"source": [
|
| 1699 |
+
"All but 7 of the duplicates are here too."
|
| 1700 |
+
]
|
| 1701 |
+
},
|
| 1702 |
+
{
|
| 1703 |
+
"cell_type": "markdown",
|
| 1704 |
+
"metadata": {},
|
| 1705 |
+
"source": [
|
| 1706 |
+
"Let's save a version of the merged manifest with all duplicates (as in, _**every**_ image that's duplicated is listed, not just the 2nd occurrence onward)."
|
| 1707 |
+
]
|
| 1708 |
+
},
|
| 1709 |
+
{
|
| 1710 |
+
"cell_type": "code",
|
| 1711 |
+
"execution_count": 37,
|
| 1712 |
+
"metadata": {},
|
| 1713 |
+
"outputs": [
|
| 1714 |
+
{
|
| 1715 |
+
"name": "stdout",
|
| 1716 |
+
"output_type": "stream",
|
| 1717 |
+
"text": [
|
| 1718 |
+
"<class 'pandas.core.frame.DataFrame'>\n",
|
| 1719 |
+
"Index: 15908 entries, 1691 to 6163695\n",
|
| 1720 |
+
"Data columns (total 8 columns):\n",
|
| 1721 |
+
" # Column Non-Null Count Dtype \n",
|
| 1722 |
+
"--- ------ -------------- ----- \n",
|
| 1723 |
+
" 0 treeoflife_id 15908 non-null object\n",
|
| 1724 |
+
" 1 eol_content_id 15908 non-null int64 \n",
|
| 1725 |
+
" 2 eol_page_id 15908 non-null int64 \n",
|
| 1726 |
+
" 3 Medium Source URL 15908 non-null object\n",
|
| 1727 |
+
" 4 EOL Full-Size Copy URL 15908 non-null object\n",
|
| 1728 |
+
" 5 License Name 15908 non-null object\n",
|
| 1729 |
+
" 6 Copyright Owner 15148 non-null object\n",
|
| 1730 |
+
" 7 duplicate 15908 non-null bool \n",
|
| 1731 |
+
"dtypes: bool(1), int64(2), object(5)\n",
|
| 1732 |
+
"memory usage: 1009.8+ KB\n"
|
| 1733 |
+
]
|
| 1734 |
+
}
|
| 1735 |
+
],
|
| 1736 |
+
"source": [
|
| 1737 |
+
"# Identify unique Medium Source URLs\n",
|
| 1738 |
+
"eol_df_media_copies = eol_df_media_cp.copy()\n",
|
| 1739 |
+
"eol_df_media_copies['duplicate'] = eol_df_media_copies.duplicated(subset = \"Medium Source URL\", keep = False)\n",
|
| 1740 |
+
"eol_df_media_duplicates = eol_df_media_copies.loc[eol_df_media_copies['duplicate']]\n",
|
| 1741 |
+
"eol_df_media_duplicates.info(show_counts = True)"
|
| 1742 |
+
]
|
| 1743 |
+
},
|
| 1744 |
+
{
|
| 1745 |
+
"cell_type": "markdown",
|
| 1746 |
+
"metadata": {},
|
| 1747 |
+
"source": [
|
| 1748 |
+
"Now we'll save this to CSV (without the duplicate column since they're all duplicates)."
|
| 1749 |
+
]
|
| 1750 |
+
},
|
| 1751 |
+
{
|
| 1752 |
+
"cell_type": "code",
|
| 1753 |
+
"execution_count": 38,
|
| 1754 |
+
"metadata": {},
|
| 1755 |
+
"outputs": [],
|
| 1756 |
+
"source": [
|
| 1757 |
+
"eol_df_media_duplicates[eol_df_media_duplicates.columns[:7]].to_csv(\"../data/eol_files/eol_media_duplicates.csv\", index = False)"
|
| 1758 |
+
]
|
| 1759 |
+
},
|
| 1760 |
+
{
|
| 1761 |
+
"cell_type": "code",
|
| 1762 |
+
"execution_count": null,
|
| 1763 |
+
"metadata": {},
|
| 1764 |
+
"outputs": [],
|
| 1765 |
+
"source": []
|
| 1766 |
+
}
|
| 1767 |
+
],
|
| 1768 |
+
"metadata": {
|
| 1769 |
+
"jupytext": {
|
| 1770 |
+
"formats": "ipynb,py:percent"
|
| 1771 |
+
},
|
| 1772 |
+
"kernelspec": {
|
| 1773 |
+
"display_name": "Python 3 (ipykernel)",
|
| 1774 |
+
"language": "python",
|
| 1775 |
+
"name": "python3"
|
| 1776 |
+
},
|
| 1777 |
+
"language_info": {
|
| 1778 |
+
"codemirror_mode": {
|
| 1779 |
+
"name": "ipython",
|
| 1780 |
+
"version": 3
|
| 1781 |
+
},
|
| 1782 |
+
"file_extension": ".py",
|
| 1783 |
+
"mimetype": "text/x-python",
|
| 1784 |
+
"name": "python",
|
| 1785 |
+
"nbconvert_exporter": "python",
|
| 1786 |
+
"pygments_lexer": "ipython3",
|
| 1787 |
+
"version": "3.11.3"
|
| 1788 |
+
}
|
| 1789 |
+
},
|
| 1790 |
+
"nbformat": 4,
|
| 1791 |
+
"nbformat_minor": 4
|
| 1792 |
+
}
|
notebooks/ToL_media_mismatch.py
ADDED
|
@@ -0,0 +1,303 @@
|
| 1 |
+
# ---
|
| 2 |
+
# jupyter:
|
| 3 |
+
# jupytext:
|
| 4 |
+
# formats: ipynb,py:percent
|
| 5 |
+
# text_representation:
|
| 6 |
+
# extension: .py
|
| 7 |
+
# format_name: percent
|
| 8 |
+
# format_version: '1.3'
|
| 9 |
+
# jupytext_version: 1.16.0
|
| 10 |
+
# kernelspec:
|
| 11 |
+
# display_name: Python 3 (ipykernel)
|
| 12 |
+
# language: python
|
| 13 |
+
# name: python3
|
| 14 |
+
# ---
|
| 15 |
+
|
| 16 |
+
# %%
|
| 17 |
+
import pandas as pd
|
| 18 |
+
|
| 19 |
+
# %% [markdown]
|
| 20 |
+
# Load in the full catalog to ease the process.
|
| 21 |
+
|
| 22 |
+
# %%
|
| 23 |
+
df = pd.read_csv("../data/predicted-catalog.csv", low_memory = False)
|
| 24 |
+
|
| 25 |
+
# %%
|
| 26 |
+
df.head()
|
| 27 |
+
|
| 28 |
+
# %%
|
| 29 |
+
df.info(show_counts = True)
|
| 30 |
+
|
| 31 |
+
# %% [markdown]
|
| 32 |
+
# The `train_small` split consists of duplicates from `train`, so we will drop those to analyze the full training set plus val.
|
| 33 |
+
|
| 34 |
+
# %% [markdown]
|
| 35 |
+
# `predicted-catalog` doesn't have `train_small`; hence, it's a smaller file.
|
| 36 |
+
|
| 37 |
+
# %% [markdown]
|
| 38 |
+
# Let's add a column indicating the original data source so we can also get some stats by data source, specifically focusing on EOL, since we already know the licensing for BIOSCAN-1M and iNat21.
|
| 39 |
+
|
| 40 |
+
# %%
|
| 41 |
+
# Add data_source column for easier slicing
|
| 42 |
+
df.loc[df['inat21_filename'].notna(), 'data_source'] = 'iNat21'
|
| 43 |
+
df.loc[df['bioscan_filename'].notna(), 'data_source'] = 'BIOSCAN'
|
| 44 |
+
df.loc[df['eol_content_id'].notna(), 'data_source'] = 'EOL'
|
| 45 |
+
|
| 46 |
+
# %% [markdown]
|
| 47 |
+
# #### Get just EOL CSV for Media Manifest Merge
|
| 48 |
+
|
| 49 |
+
# %%
|
| 50 |
+
eol_df = df.loc[df['data_source'] == 'EOL']
|
| 51 |
+
|
| 52 |
+
# %%
|
| 53 |
+
eol_df.head()
|
| 54 |
+
|
| 55 |
+
# %% [markdown]
|
| 56 |
+
# We don't need the BIOSCAN or iNat21 columns, nor the taxa columns.
|
| 57 |
+
|
| 58 |
+
# %%
|
| 59 |
+
eol_license_cols = eol_df.columns[1:4]
|
| 60 |
+
eol_license_cols
|
| 61 |
+
|
| 62 |
+
# %%
|
| 63 |
+
eol_df = eol_df[eol_license_cols]
|
| 64 |
+
|
| 65 |
+
# %%
|
| 66 |
+
eol_df.nunique()
|
| 67 |
+
|
| 68 |
+
# %% [markdown]
|
| 69 |
+
# Number of unique `eol_content_id`s and `treeoflife_id`s match, and match with total number of `eol_content_id`s shown above in the info for the full dataset.
|
| 70 |
+
|
| 71 |
+
# %% [markdown]
|
| 72 |
+
# ### Merge with Media Manifest
|
| 73 |
+
# Let's merge with the [media manifest](https://huggingface.co/datasets/imageomics/eol/blob/be7b7e6c372f6547e30030e9576d9cc638320099/data/interim/media_manifest.csv) from which all these images should have been downloaded to get a clear picture of what is or isn't in the manifest.
|
| 74 |
+
|
| 75 |
+
# %%
|
| 76 |
+
media = pd.read_csv("../data/media_manifest (july 26).csv", dtype = {"EOL content ID": "int64", "EOL page ID": "int64"}, low_memory = False)
|
| 77 |
+
media.info(show_counts = True)
|
| 78 |
+
|
| 79 |
+
# %% [markdown]
|
| 80 |
+
# We want to make sure the EOL content and page IDs have matching types, so we'll set them to `int64` in `eol_df` too.
|
| 81 |
+
|
| 82 |
+
# %%
|
| 83 |
+
eol_df = eol_df.astype({"eol_content_id": "int64", "eol_page_id": "int64"})
|
| 84 |
+
eol_df.info()
|
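Why the explicit `astype` matters: merge keys that differ in dtype (e.g. `float64` from a NaN-padded column vs `int64`) can mismatch or raise. A minimal sketch with invented toy values:

```python
import pandas as pd

# Toy frames; the IDs and values are invented for illustration.
a = pd.DataFrame({"eol_content_id": [1.0, 2.0]})  # float64, e.g. from a NaN-padded column
b = pd.DataFrame({"eol_content_id": [1, 2]})      # int64

# Cast the key to a common dtype before merging, as the notebook does with astype
a = a.astype({"eol_content_id": "int64"})
merged = pd.merge(a, b, on="eol_content_id", how="inner")
```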
| 85 |
+
|
| 86 |
+
# %% [markdown]
|
| 87 |
+
# Notice that we have about 300K more entries in the media manifest, which is about expected from the [comparison of predicted-catalog to the original full list](https://huggingface.co/datasets/imageomics/ToL-EDA/blob/main/notebooks/ToL_predicted-catalog_EDA.ipynb).
|
| 88 |
+
|
| 89 |
+
# %% [markdown]
|
| 90 |
+
# Rename media columns for easier matching.
|
| 91 |
+
|
| 92 |
+
# %%
|
| 93 |
+
media.rename(columns = {"EOL content ID": "eol_content_id", "EOL page ID": "eol_page_id"}, inplace = True)
|
| 94 |
+
|
| 95 |
+
# %% [markdown]
|
| 96 |
+
# Check consistency of merge when matching both `eol_content_id` and `eol_page_id`.
|
| 97 |
+
|
| 98 |
+
# %%
|
| 99 |
+
merge_cols = ["eol_content_id", "eol_page_id"]
|
| 100 |
+
|
| 101 |
+
# %%
|
| 102 |
+
eol_df_media_cp = pd.merge(eol_df, media, how = "inner", left_on = merge_cols, right_on = merge_cols)
|
| 103 |
+
eol_df_media_cp.info(show_counts = True)
|
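An inner merge only reveals mismatches through the drop in row count. As a side note, pandas' `indicator=True` on an outer merge surfaces the matches and both kinds of mismatches in one pass; a minimal sketch with invented toy IDs:

```python
import pandas as pd

# Toy stand-ins for the catalog slice and the media manifest (IDs invented).
catalog = pd.DataFrame({
    "treeoflife_id": ["a", "b", "c"],
    "eol_content_id": [1, 2, 3],
    "eol_page_id": [10, 20, 30],
})
manifest = pd.DataFrame({
    "eol_content_id": [1, 2, 4],
    "eol_page_id": [10, 20, 40],
})
merge_cols = ["eol_content_id", "eol_page_id"]

# indicator=True adds a _merge column tagging each row's provenance, so one
# outer merge yields the matches and both kinds of mismatches at once.
merged = pd.merge(catalog, manifest, how="outer", on=merge_cols, indicator=True)
matched = merged.loc[merged["_merge"] == "both"]
catalog_only = merged.loc[merged["_merge"] == "left_only"]
```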
| 104 |
+
|
| 105 |
+
# %% [markdown]
|
| 106 |
+
# Okay, so we do have a mismatch of about 113K images where the content IDs and page IDs don't both match.
|
| 107 |
+
#
|
| 108 |
+
# Let's save this to a CSV.
|
| 109 |
+
|
| 110 |
+
# %%
|
| 111 |
+
eol_df_media_cp.to_csv("../data/eol_files/eol_cp_match_media.csv", index = False)
|
| 112 |
+
|
| 113 |
+
# %% [markdown]
|
| 114 |
+
# Note that merging on just content IDs is going to give the same numbers.
|
| 115 |
+
|
| 116 |
+
# %%
|
| 117 |
+
eol_media_content = pd.merge(eol_df,
|
| 118 |
+
media,
|
| 119 |
+
how = "inner",
|
| 120 |
+
left_on = "eol_content_id",
|
| 121 |
+
right_on = "eol_content_id")
|
| 122 |
+
eol_media_content.info(show_counts = True)
|
| 123 |
+
|
| 124 |
+
# %% [markdown]
|
| 125 |
+
# The interesting thing is when we look at the uniqueness. There are fewer _**unique**_ `Medium Source URL`s, suggesting that there are duplicated images that have different content IDs and unique `EOL Full-Size Copy URL`s, so EOL presumably has them duplicated.
|
| 126 |
+
|
| 127 |
+
# %%
|
| 128 |
+
eol_df_media_cp.nunique()
|
| 129 |
+
|
| 130 |
+
# %% [markdown]
|
| 131 |
+
# We'll look into this a little further down. First, let's get a list of all the `treeoflife_id`s that do match to the media manifest so we can make a CSV with all the images that _**aren't**_ matching.
|
| 132 |
+
|
| 133 |
+
# %%
|
| 134 |
+
tol_ids_in_media = list(eol_df_media_cp.treeoflife_id)
|
| 135 |
+
tol_ids_in_media[:5]
|
| 136 |
+
|
| 137 |
+
# %%
|
| 138 |
+
eol_df.head()
|
| 139 |
+
|
| 140 |
+
# %% [markdown]
|
| 141 |
+
# Let's save a copy of the EOL section with content and page IDs that are mismatched.
|
| 142 |
+
|
| 143 |
+
# %%
|
| 144 |
+
eol_df_missing_media = eol_df.loc[~eol_df.treeoflife_id.isin(tol_ids_in_media)]
|
| 145 |
+
eol_df_missing_media.info(show_counts = True)
|
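The `~isin` filter above is a plain anti-join; a minimal sketch with invented toy IDs (note `isin` also accepts a `set`, which makes the per-row membership test constant-time):

```python
import pandas as pd

# Toy catalog; IDs invented for illustration.
catalog = pd.DataFrame({"treeoflife_id": ["a", "b", "c"]})
matched_ids = {"a"}  # hypothetical IDs that did merge with the manifest

# ~isin keeps the rows whose ID never matched (the anti-join)
missing = catalog.loc[~catalog["treeoflife_id"].isin(matched_ids)]
```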
| 146 |
+
|
| 147 |
+
# %% [markdown]
|
| 148 |
+
# How many pages are these distributed across?
|
| 149 |
+
|
| 150 |
+
# %%
|
| 151 |
+
eol_df_missing_media.nunique()
|
| 152 |
+
|
| 153 |
+
# %%
|
| 154 |
+
eol_df_missing_media.to_csv("../data/eol_files/eol_cp_not_media.csv", index = False)
|
| 155 |
+
|
| 156 |
+
# %% [markdown]
|
| 157 |
+
# ### Get Content IDs in Media Manifest that didn't match
|
| 158 |
+
#
|
| 159 |
+
|
| 160 |
+
# %%
|
| 161 |
+
content_ids_in_catalog = list(eol_df_media_cp.eol_content_id)
|
| 162 |
+
content_ids_in_catalog[:5]
|
| 163 |
+
|
| 164 |
+
# %%
|
| 165 |
+
media_missing_tol = media.loc[~media.eol_content_id.isin(content_ids_in_catalog)]
|
| 166 |
+
media_missing_tol.info(show_counts = True)
|
| 167 |
+
|
| 168 |
+
# %%
|
| 169 |
+
# Save media manifest content IDs that didn't match to predicted-catalog
|
| 170 |
+
media_missing_tol.to_csv("../data/eol_files/media_content_not_catalog.csv", index = False)
|
| 171 |
+
|
| 172 |
+
# %% [markdown]
|
| 173 |
+
# #### Compare to Dec 6 Media Manifest
|
| 174 |
+
|
| 175 |
+
# %%
|
| 176 |
+
dec_media = pd.read_csv("../data/media_manifest_Dec6.csv", dtype = {"EOL content ID": "int64", "EOL page ID": "int64"}, low_memory = False)
|
| 177 |
+
dec_media.info(show_counts = True)
|
| 178 |
+
|
| 179 |
+
# %% [markdown]
|
| 180 |
+
# Only about 2,000 more images than the July 26 media manifest.
|
| 181 |
+
|
| 182 |
+
# %%
|
| 183 |
+
dec_media.rename(columns = {"EOL content ID": "eol_content_id", "EOL page ID": "eol_page_id"}, inplace = True)
|
| 184 |
+
|
| 185 |
+
# %%
|
| 186 |
+
eol_dec_media_cp = pd.merge(eol_df, dec_media, how = "inner", left_on = merge_cols, right_on = merge_cols)
|
| 187 |
+
eol_dec_media_cp.info(show_counts = True)
|
| 188 |
+
|
| 189 |
+
# %% [markdown]
|
| 190 |
+
# And we have _fewer_ matches... Let's compare this to the July 26 manifest and see if there are content IDs only in the Dec manifest that do match predicted-catalog.
|
| 191 |
+
|
| 192 |
+
# %%
|
| 193 |
+
media_merge = pd.merge(dec_media, media, how = "inner", left_on = merge_cols, right_on = merge_cols)
|
| 194 |
+
media_merge.info(show_counts = True)
|
| 195 |
+
|
| 196 |
+
# %%
|
| 197 |
+
content_ids_both_media = list(media_merge.eol_content_id)
|
| 198 |
+
content_ids_both_media[:5]
|
| 199 |
+
|
| 200 |
+
# %%
|
| 201 |
+
media_dec_notJuly = dec_media.loc[~dec_media.eol_content_id.isin(content_ids_both_media)]
|
| 202 |
+
media_dec_notJuly.info(show_counts = True)
|
| 203 |
+
|
| 204 |
+
# %% [markdown]
|
| 205 |
+
# Let's see if any of these are in our predicted catalog.
|
| 206 |
+
|
| 207 |
+
# %%
|
| 208 |
+
eol_dec_only = pd.merge(eol_df, media_dec_notJuly, how = "inner", left_on = merge_cols, right_on = merge_cols)
|
| 209 |
+
eol_dec_only.info(show_counts = True)
|
| 210 |
+
|
| 211 |
+
# %% [markdown]
|
| 212 |
+
# Okay, no matches here, so we'll stick with the July 26 file above; this doesn't help us recoup anything.
|
| 213 |
+
#
|
| 214 |
+
# The old media manifest (July 6) won't read its EOL content and page IDs in properly, so we can't check it against these.
|
| 215 |
+
|
| 216 |
+
# %% [markdown]
|
| 217 |
+
# ### Check out the Duplication of Medium Source URLs
|
| 218 |
+
|
| 219 |
+
# %%
|
| 220 |
+
# Identify unique Medium Source URLs
|
| 221 |
+
eol_df_media_cp['duplicate'] = eol_df_media_cp.duplicated(subset = "Medium Source URL", keep = 'first')
|
| 222 |
+
eol_df_media_unique = eol_df_media_cp.loc[~eol_df_media_cp['duplicate']]
|
| 223 |
+
|
| 224 |
+
# %%
|
| 225 |
+
eol_df_media_unique.info(show_counts = True)
|
| 226 |
+
|
| 227 |
+
# %% [markdown]
|
| 228 |
+
# About 10K images are duplicated. Let's see how many unique `Medium Source URL`s that covers.
|
| 229 |
+
|
| 230 |
+
# %%
|
| 231 |
+
eol_df_media_cp.loc[eol_df_media_cp['duplicate']].nunique()
|
| 232 |
+
|
| 233 |
+
# %% [markdown]
|
| 234 |
+
# There are 5,833 unique `Medium Source URLs` that are duplicated.
|
| 235 |
+
|
| 236 |
+
# %% [markdown]
|
| 237 |
+
# ### Check how this compares to Catalog
|
| 238 |
+
# Let's see if the missing images are all in TreeOfLife-10M, or a mix between it and Rare Species.
|
| 239 |
+
|
| 240 |
+
# %%
|
| 241 |
+
cat_df = pd.read_csv("../data/catalog.csv", low_memory = False)
|
| 242 |
+
# Remove duplicates in train_small
|
| 243 |
+
cat_df = cat_df.loc[cat_df.split != 'train_small']
|
| 244 |
+
|
| 245 |
+
# %%
|
| 246 |
+
# Add data_source column for easier slicing
|
| 247 |
+
cat_df.loc[cat_df['inat21_filename'].notna(), 'data_source'] = 'iNat21'
|
| 248 |
+
cat_df.loc[cat_df['bioscan_filename'].notna(), 'data_source'] = 'BIOSCAN'
|
| 249 |
+
cat_df.loc[cat_df['eol_content_id'].notna(), 'data_source'] = 'EOL'

# %%
eol_cat_df = cat_df.loc[cat_df.data_source == "EOL"]

# %% [markdown]
# Reduce down to just the relevant columns and recast the EOL content and page IDs as `int64`.

# %%
eol_cat_df = eol_cat_df[eol_license_cols]

# %%
eol_cat_df = eol_cat_df.astype({"eol_content_id": "int64", "eol_page_id": "int64"})

# %%
eol_cat_df.info()

# %%
eol_cat_df.loc[eol_cat_df["treeoflife_id"].isin(list(eol_df_missing_media.treeoflife_id))].info(show_counts = True)

# %% [markdown]
# They are _**almost**_ entirely in TreeOfLife-10M, but _some_ may be in Rare Species.
#
# #### Quick check for the duplicates here

# %%
tol_ids_duplicated = list(eol_df_media_cp.loc[eol_df_media_cp['duplicate'], "treeoflife_id"].values)
tol_ids_duplicated[:5]

# %%
eol_df_media_cp.loc[eol_df_media_cp['duplicate']].head()

# %%
eol_cat_df.loc[eol_cat_df["treeoflife_id"].isin(tol_ids_duplicated)].info(show_counts = True)

# %% [markdown]
# All but 7 of the duplicates are here too.

# %% [markdown]
# Let's save a version of the merged manifest with all duplicates (as in, _**every**_ image that's duplicated is listed, not just the 2nd through however many to appear).

# %%
# Flag every copy of a duplicated Medium Source URL (keep = False)
eol_df_media_copies = eol_df_media_cp.copy()
eol_df_media_copies['duplicate'] = eol_df_media_copies.duplicated(subset = "Medium Source URL", keep = False)
eol_df_media_duplicates = eol_df_media_copies.loc[eol_df_media_copies['duplicate']]
eol_df_media_duplicates.info(show_counts = True)

# %% [markdown]
# Now we'll save this to CSV (without the duplicate column, since every row in it is a duplicate).

# %%
eol_df_media_duplicates[eol_df_media_duplicates.columns[:7]].to_csv("../data/eol_files/eol_media_duplicates.csv", index = False)

# %%
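The two duplicate passes above differ only in `duplicated`'s `keep` argument: `keep='first'` flags just the second-and-later copies of a URL (used to build the de-duplicated frame), while `keep=False` flags every copy of a repeated URL (used for the exported listing). A minimal sketch on toy data:

```python
import pandas as pd

urls = pd.DataFrame({"Medium Source URL": ["u1", "u1", "u2", "u1"]})

# keep='first' marks only the 2nd-onward occurrences of each URL
first_dupes = urls.duplicated(subset="Medium Source URL", keep="first")
# keep=False marks every row whose URL appears more than once
all_dupes = urls.duplicated(subset="Medium Source URL", keep=False)

print(list(first_dupes))  # [False, True, False, True]
print(list(all_dupes))    # [True, True, False, True]
```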
scripts/make_licenses.py
ADDED
@@ -0,0 +1,101 @@
import pandas as pd
from pathlib import Path
import argparse
import sys

# Output of match_owners.py (catalog-media.csv) should be fed in.
# Will also work with the media manifest.

CC_URL = "https://creativecommons.org/"


def get_license_url(license_version):
    """
    Function to generate the appropriate Creative Commons URL for a given license.
    All licenses in the media manifest are "cc-XX", a variation on "publicdomain", or "No known copyright restrictions".

    Parameters:
    -----------
    license_version - String. License (e.g., "cc-by-nc-3.0").

    Returns:
    --------
    license_url - String. Creative Commons URL associated with the license_version.

    """
    # First check for a version number and isolate it
    if "." in license_version:
        version = license_version.split(sep="-")[-1]
        license_name = license_version.split(sep="-" + version)[0]
    else:
        # No version specified, so default to the latest version (4.0)
        license_name = license_version
        version = "4.0"
    # Check which type of license
    if license_name[:5] == "cc-by":
        by_x = license_name.split(sep="cc-")[1]
        license_url = CC_URL + "licenses/" + by_x + "/" + version
    elif (license_name[:4] == "cc-0") or ("public" in license_name):
        license_url = CC_URL + "publicdomain/zero/1.0"
    else:
        # "No known copyright restrictions"
        license_url = None
    return license_url


def main(src_csv, dest_dir):
    # Check CSV compatibility
    try:
        print("Reading CSV")
        df = pd.read_csv(src_csv, low_memory=False)
    except Exception as e:
        sys.exit(e)

    # Check for "License Name" or "license_name" column
    print("Processing data")
    df_cols = list(df.columns)
    if "License Name" not in df_cols:
        if "license_name" not in df_cols:
            sys.exit("Source CSV does not have a column labeled License Name or license_name.")
        license_col = "license_name"
    else:
        license_col = "License Name"

    # Check filepath given by user
    dest_dir_path = Path(dest_dir)
    if not dest_dir_path.is_absolute():
        # Use ToL-EDA as reference folder
        base_path = Path(__file__).parent.parent.resolve()
        filepath = base_path / dest_dir_path
    else:
        filepath = dest_dir_path

    filepath.mkdir(parents=True, exist_ok=True)

    # Add links for licenses to catalog manifest
    print("Getting URLs for licenses")
    df["license_link"] = df[license_col].apply(get_license_url)

    # Save license file
    df.to_csv(str(filepath / "licenses.csv"), index=False)

    # Print counts of licenses for which a URL was not resolved
    print(
        "Licenses with no URL: \n",
        df.loc[df["license_link"].isna(), license_col].value_counts(),
    )


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="Add license URLs to catalog-media file"
    )
    parser.add_argument("input_file", help="Path to the input CSV file")
    parser.add_argument(
        "--output_path",
        default="data",
        required=False,
        help="Path to the folder for the output license file",
    )
    args = parser.parse_args()

    main(args.input_file, args.output_path)
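As a sanity check on the version parsing in `get_license_url`, the same branches can be exercised standalone. This sketch inlines the logic (it is a copy for illustration, not an import) and assumes license strings follow the hyphenated form, e.g. `cc-by-nc-3.0`:

```python
CC_URL = "https://creativecommons.org/"

def license_url_for(license_version):
    # Same parsing as get_license_url above, inlined for a standalone check.
    if "." in license_version:
        version = license_version.split("-")[-1]                 # "cc-by-nc-3.0" -> "3.0"
        license_name = license_version.split("-" + version)[0]   # -> "cc-by-nc"
    else:
        license_name, version = license_version, "4.0"           # default to latest
    if license_name[:5] == "cc-by":
        return CC_URL + "licenses/" + license_name.split("cc-")[1] + "/" + version
    if license_name[:4] == "cc-0" or "public" in license_name:
        return CC_URL + "publicdomain/zero/1.0"
    return None  # e.g. "No known copyright restrictions"

print(license_url_for("cc-by-nc-3.0"))  # https://creativecommons.org/licenses/by-nc/3.0
print(license_url_for("cc-by"))         # https://creativecommons.org/licenses/by/4.0
print(license_url_for("cc-0"))          # https://creativecommons.org/publicdomain/zero/1.0
```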
scripts/match_owners.py
ADDED
@@ -0,0 +1,215 @@
import pandas as pd
from tqdm import tqdm
from pathlib import Path
import argparse
import sys

# This file can be used on predicted-catalog or rarespecies-catalog by changing the CATALOG_PATH.
CATALOG_PATH = "data/catalog.csv"
LICENSE_COLS = ["treeoflife_id", "eol_content_id", "eol_page_id"]
EXPECTED_COLS = [
    "EOL content ID",
    "EOL page ID",
    "Medium Source URL",
    "EOL Full-Size Copy URL",
    "License Name",
    "Copyright Owner",
]


def merge_dfs(df, media):
    """
    Function to process and merge the ToL catalog with the media manifest.

    Parameters:
    -----------
    df - DataFrame composed of catalog entries, includes columns "treeoflife_id", "eol_content_id", and "eol_page_id".
    media - DataFrame of EOL media manifest with EXPECTED_COLS.

    Returns:
    --------
    cat_media - DataFrame with media manifest information attached to treeoflife_ids in catalog.

    """
    # Reduce to just EOL entries
    eol_df = df.loc[df["eol_content_id"].notna()].copy()
    eol_df = eol_df[LICENSE_COLS]

    # Set content and page ID types to int64
    eol_df = eol_df.astype({"eol_content_id": "int64", "eol_page_id": "int64"})
    # Rename media versions to match (already int64)
    media.rename(
        columns={"EOL content ID": "eol_content_id", "EOL page ID": "eol_page_id"},
        inplace=True,
    )

    # Merge dataframes on EOL content and page IDs
    cat_media = pd.merge(
        eol_df, media, how="inner", left_on=LICENSE_COLS[1:], right_on=LICENSE_COLS[1:]
    )

    return cat_media


def merge_owners(cat_media, owners):
    """
    Function to process and merge the owner fix DataFrame with cat_media for owner matching.

    Parameters:
    -----------
    cat_media - DataFrame with media manifest information attached to treeoflife_ids in catalog.
    owners - DataFrame of media manifest entries that had missing owners, updated with their information.

    Returns:
    --------
    cat_owners - DataFrame with media manifest information from missing owners attached to treeoflife_ids.

    """
    # Rename owner EOL content and page IDs to match cat_media (already int64)
    owners.rename(
        columns={"EOL content ID": "eol_content_id", "EOL page ID": "eol_page_id"},
        inplace=True,
    )
    # Set columns to merge on
    merge_cols = list(owners.columns)[:5]
    cat_owners = pd.merge(
        cat_media, owners, how="inner", left_on=merge_cols, right_on=merge_cols
    )

    return cat_owners
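Because both merges above use `how="inner"`, catalog rows with no counterpart in the media manifest (or owner-fix file) are silently dropped rather than kept with NaNs. A toy sketch of that behavior (hypothetical mini-frames, not the real manifests):

```python
import pandas as pd

# One catalog row (t2) has no media match, so an inner merge drops it.
eol_df = pd.DataFrame({
    "treeoflife_id": ["t1", "t2"],
    "eol_content_id": [1, 2],
    "eol_page_id": [10, 20],
})
media = pd.DataFrame({
    "eol_content_id": [1],
    "eol_page_id": [10],
    "License Name": ["cc-by"],
})
cat_media = pd.merge(eol_df, media, how="inner",
                     left_on=["eol_content_id", "eol_page_id"],
                     right_on=["eol_content_id", "eol_page_id"])
print(len(cat_media))  # 1
```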


def get_owners_titles(cat_media, cat_owners):
    """
    Function to attach owner names and image titles to catalog entries in cat_media.
    Fills empty "Copyright Owner" and "title" values with "not provided".

    Parameters:
    -----------
    cat_media - DataFrame with media manifest information attached to treeoflife_ids in catalog.
    cat_owners - DataFrame with media manifest information from missing owners attached to treeoflife_ids.

    Returns:
    --------
    cat_media - DataFrame with media manifest information attached to treeoflife_ids in catalog with missing owners resolved.
    """
    missing_owners = list(
        cat_media.loc[cat_media["Copyright Owner"].isna(), "treeoflife_id"]
    )
    for tol_id in tqdm(missing_owners):
        temp = cat_owners.loc[cat_owners.treeoflife_id == tol_id]
        copyright_owner = temp["Copyright Owner_y"].values
        title = temp.title.values
        cat_media.loc[
            cat_media["treeoflife_id"] == tol_id, "Copyright Owner"
        ] = copyright_owner
        cat_media.loc[cat_media["treeoflife_id"] == tol_id, "title"] = title

    # Print counts of licenses for which owner info was not resolved
    print(
        "Licenses still missing Copyright Owners: \n",
        cat_media.loc[
            cat_media["Copyright Owner"].isna(), "License Name"
        ].value_counts(),
    )

    # Fill null "Copyright Owner" and "title" values with "not provided"
    cat_media["Copyright Owner"].fillna("not provided", inplace=True)
    cat_media["title"].fillna("not provided", inplace=True)

    return cat_media


def update_owners(df, media, owners, filepath):
    """
    Function to fetch and attach the missing owner and title information to EOL catalog entries and save the catalog-media file.

    Parameters:
    -----------
    df - DataFrame composed of catalog entries.
    media - DataFrame of EOL media manifest.
    owners - DataFrame of media manifest entries that had missing owners, updated with their information.
    """
    cat_media = merge_dfs(df, media)
    cat_owners = merge_owners(cat_media, owners)
    cat_media = get_owners_titles(cat_media, cat_owners)

    # Save updated catalog media file to chosen location
    cat_media.to_csv(filepath, index=False)


def main(media_csv, owner_csv, dest_dir):
    # Check CSV compatibility
    try:
        print("Reading catalog CSV")
        df = pd.read_csv(CATALOG_PATH, low_memory=False)

        # Read in media manifest and owner fix CSVs with EOL content and page IDs as type "int64".
        print("Reading media manifest CSV")
        media = pd.read_csv(
            media_csv,
            dtype={"EOL content ID": "int64", "EOL page ID": "int64"},
            low_memory=False,
        )
        print("Reading owner fix CSV")
        owners = pd.read_csv(
            owner_csv,
            dtype={"EOL content ID": "int64", "EOL page ID": "int64"},
            low_memory=False,
        )
    except Exception as e:
        sys.exit(e)

    # Check for columns
    print("Processing data")
    missing_media_cols = []
    missing_owner_cols = []
    for col in EXPECTED_COLS:
        if col not in list(media.columns):
            missing_media_cols.append(col)
        if col not in list(owners.columns):
            missing_owner_cols.append(col)
    if len(missing_media_cols) > 0:
        sys.exit(f"Media CSV does not have {missing_media_cols} columns.")
    if len(missing_owner_cols) > 0:
        sys.exit(f"Owners CSV does not have {missing_owner_cols} columns.")

    # If split column included, remove "train_small" entries as they are duplicates from "train".
    if "split" in list(df.columns):
        df = df.loc[df.split != "train_small"]

    # Check filepath given by user
    dest_dir_path = Path(dest_dir)
    if not dest_dir_path.is_absolute():
        # Use ToL-EDA as reference folder
        base_path = Path(__file__).parent.parent.resolve()
        filepath = base_path / dest_dir_path
    else:
        filepath = dest_dir_path

    filepath.mkdir(parents=True, exist_ok=True)

    # Make and save updated manifest to chosen filepath
    update_owners(df, media, owners, str(filepath / "catalog-media.csv"))


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Attach missing owner info to catalog")
    parser.add_argument(
        "-m", "--media_input_file", help="Path to the media manifest input CSV file"
    )
    parser.add_argument(
        "-o", "--owner_input_file", help="Path to the owner fix input CSV file"
    )
    parser.add_argument(
        "--output_path",
        default="data",
        required=False,
        help="Path to the folder for the output catalog-media file",
    )
    args = parser.parse_args()

    main(args.media_input_file, args.owner_input_file, args.output_path)
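The per-ID loop in `get_owners_titles` rescans `cat_media` for every missing ID; on large catalogs the same fill can be expressed as a single `map`. A sketch on toy frames (column names mirror the ones above; this is an alternative, not the script's method):

```python
import pandas as pd

# Toy stand-ins: cat_media is missing some owners; cat_owners carries the fixes.
cat_media = pd.DataFrame({
    "treeoflife_id": ["a", "b", "c"],
    "Copyright Owner": ["Alice", None, None],
})
cat_owners = pd.DataFrame({
    "treeoflife_id": ["b", "c"],
    "Copyright Owner_y": ["Bob", "Carol"],
})

# Vectorized equivalent of the per-ID loop: map the fix table onto the
# rows still missing an owner, then fill anything left over.
fixes = cat_owners.set_index("treeoflife_id")["Copyright Owner_y"]
missing = cat_media["Copyright Owner"].isna()
cat_media.loc[missing, "Copyright Owner"] = cat_media.loc[missing, "treeoflife_id"].map(fixes)
cat_media["Copyright Owner"] = cat_media["Copyright Owner"].fillna("not provided")

print(list(cat_media["Copyright Owner"]))  # ['Alice', 'Bob', 'Carol']
```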
visuals/{fullData_phyla.png → category-v1-visuals/fullData_phyla.png}
RENAMED
File without changes

visuals/{inat_phyla.png → category-v1-visuals/inat_phyla.png}
RENAMED
File without changes

visuals/{num_images_class_y.png → category-v1-visuals/num_images_class_y.png}
RENAMED
File without changes

visuals/{num_images_kingdom.png → category-v1-visuals/num_images_kingdom.png}
RENAMED
File without changes

visuals/{num_images_order_y.png → category-v1-visuals/num_images_order_y.png}
RENAMED
File without changes

visuals/{num_images_phylum_y.png → category-v1-visuals/num_images_phylum_y.png}
RENAMED
File without changes

visuals/{num_phyla_kingdom.png → category-v1-visuals/num_phyla_kingdom.png}
RENAMED
File without changes

visuals/{num_species_kingdom.png → category-v1-visuals/num_species_kingdom.png}
RENAMED
File without changes

visuals/{phyla_ToL.pdf → category-v1-visuals/phyla_ToL.pdf}
RENAMED
File without changes

visuals/{phyla_ToL.png → category-v1-visuals/phyla_ToL.png}
RENAMED
File without changes

visuals/{phyla_ToL_scale1.pdf → category-v1-visuals/phyla_ToL_scale1.pdf}
RENAMED
File without changes

visuals/category-v1-visuals/phyla_ToL_tree_cat-v1.html
ADDED
The diff for this file is too large to render.

visuals/{phyla_ToL_wh.pdf → category-v1-visuals/phyla_ToL_wh.pdf}
RENAMED
File without changes

visuals/{phyla_iNat21.png → category-v1-visuals/phyla_iNat21.png}
RENAMED
File without changes

visuals/kingdom_ToL_tree.html
ADDED
The diff for this file is too large to render.

visuals/kingdom_ToL_tree.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:68df18d4984fdac00ab5373e3c460784093e6a057410bedb60d2a454e5817f5e
size 1187148

visuals/phyla_ToL_tree.html
ADDED
The diff for this file is too large to render.

visuals/phyla_ToL_tree.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:06e80b5ca147c493383554ce5fb2b23c0ec0b691c46c4d99475563a3a79b3a9a
size 1076084