# ---
# jupyter:
#   jupytext:
#     formats: ipynb,py:percent
#     text_representation:
#       extension: .py
#       format_name: percent
#       format_version: '1.3'
#     jupytext_version: 1.16.0
#   kernelspec:
#     display_name: std
#     language: python
#     name: python3
# ---
# %%
import pandas as pd
import seaborn as sns
sns.set_style("whitegrid")
# %%
df = pd.read_csv("../data/lila_image_urls_and_labels.csv", low_memory = False)
df.head()
# %%
df.columns
# %%
df.annotation_level.value_counts()
# %% [markdown]
# Annotation level indicates image vs. sequence (or unknown); it is not analogous to `taxonomy_level` from lila-taxonomy-mapping_release.csv. It seems `original_label` may be the analogous column.
#
# We'll likely want to pull out the image-level annotations before doing any sequence checks and such, since those should be "clean" images. Though we will want to label them with how many distinct species are in each image first.
#
# We now have 66 fewer sequence-level annotations and 2,517,374 more image-level annotations! That's quite the update! The unknown count has not changed.
#
# ### Check Dataset Counts
#
# 1. Make sure we have all datasets expected.
# 2. Check which/how many datasets are labeled to the image level (and check for match to [Andrey's spreadsheet](https://docs.google.com/spreadsheets/d/1sC90DolAvswDUJ1lNSf0sk_norR24LwzX2O4g9OxMZE/edit?usp=drive_link)).
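#
# As a sketch of check 1 (hedged: `expected_datasets` below is a placeholder with a couple of names pulled from the notes in this notebook; the real list should come from Andrey's spreadsheet), we can compare the dataset names present in the CSV against the expected set:
# %%
# Placeholder set of expected dataset names -- hypothetical; fill in from the spreadsheet.
expected_datasets = {
    "NACTI",
    "Desert Lion Conservation Camera Traps",
    "Trail Camera Images of New Zealand Animals",
}
present = set(df.dataset_name.unique())
print("Expected but missing from CSV:", expected_datasets - present)
print("In CSV but not in expected list:", present - expected_datasets)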
# %%
df.dataset_name.value_counts()
# %%
df.groupby(["dataset_name"]).annotation_level.value_counts()
# %% [markdown]
# It seems all the unknown annotation level images are in NACTI (North American Camera Trap Images). At first glance I don't see annotation level information on HF or on [their LILA page](https://lila.science/datasets/nacti)--this will require more investigation.
#
# Desert Lion Conservation Camera Traps & Trail Camera Images of New Zealand Animals are _not_ included in the [Hugging Face dataset](https://huggingface.co/datasets/society-ethics/lila_camera_traps).
#
# There are definitely more in [Andrey's spreadsheet](https://docs.google.com/spreadsheets/d/1sC90DolAvswDUJ1lNSf0sk_norR24LwzX2O4g9OxMZE/edit?usp=drive_link) that aren't included here. We'll have him go through those too.
# %%
df.sample(10)
# %% [markdown]
# Observe that we also now get multiple URL options; `url_aws` will likely be best/fastest for use with [`distributed-downloader`](https://github.com/Imageomics/distributed-downloader) to get the images.
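#
# As a minimal sketch (assuming outbound network access and a writable working directory; the full-scale job would go through `distributed-downloader`), a single image can be pulled from its `url_aws` like so:
# %%
import requests

# Fetch one sample image from its AWS URL and write it locally (sketch only;
# "sample_image.jpg" is just an illustrative output path).
sample_url = df["url_aws"].dropna().iloc[0]
resp = requests.get(sample_url, timeout=30)
resp.raise_for_status()
with open("sample_image.jpg", "wb") as f:
    f.write(resp.content)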
# %%
df.info(show_counts = True)
# %% [markdown]
# The overall dataset has grown by about 3 million images; we'll see how much of this is non-empty. I'm encouraged that the number of non-null `scientific_name` values also seems to have grown by about 3 million; most of these also seem to have genus now.
#
# We'll definitely want to check on the scientific name choices where genus and species aren't available (similarly for other ranks), since the name is only guaranteed to be as specific as kingdom (which is hopefully aligned with all non-empty images).
#
# No licensing info, we'll get that from HF or the datasets themselves (Andrey can check this; most seem to be [Community Data License Agreement (permissive variant)](https://cdla.io/permissive-1-0/)).
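#
# A small sketch of that scientific-name check (hedged; this just samples rows where `species` is missing but `scientific_name` is present, to see which rank the name actually reflects):
# %%
# Rows with a scientific name but no species-level label; inspect a few.
missing_sp = df.loc[df["species"].isna() & df["scientific_name"].notna()]
missing_sp[["scientific_name", "kingdom", "class", "order", "family", "genus"]].sample(10)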
# %%
df.nunique()
# %% [markdown]
# We have 739 unique species indicated, though the 908 unique `scientific_name` values are likely more indicative of the diversity.
#
# It's also interesting to note that there are duplicate URLs here; these would be the indicators of multiple species in an image, since the number of unique URLs corresponds to the number of unique image IDs. We'll check this out once we remove the images labeled as "empty".
# %%
# check for humans
df.loc[df.species == "homo sapien"]
# %% [markdown]
# Let's start by removing entries with `original_label`: `empty`.
# %%
df_cleaned = df.loc[df.original_label != "empty"].copy()
# %% [markdown]
# ## Save the Reduced Data (no more "empty" labels)
# %%
df_cleaned.to_csv("../data/lila_image_urls_and_labels.csv", index = False)
# %% [markdown]
# Let's check where we are with annotations now that we've removed all the images labeled as empty.
# %%
df.groupby(["dataset_name"]).annotation_level.value_counts()
# %% [markdown]
# We started with 19,351,156 entries and are left with 10,965,902 after removing all labeled as `empty`, so more than half the images remain; it's an increase of about 2.5M from the last version.
#
# Note that there are still about 3.4 million that don't have the species label and 1.5 million that are missing the genus designation. 10,192,703 of them have scientific and common name, though! That's nearly all of them.
# %%
df_cleaned.info(show_counts = True)
# %%
df_cleaned.nunique()
# %%
print(df_cleaned.phylum.value_counts())
print()
print(df_cleaned["class"].value_counts())
# %% [markdown]
# We have 10,965,902 total entries but only 10,864,013 unique URLs, suggesting that at most 101,889 images have more than one species in them. That's only about 1% of our images here, and it will be even smaller at the scale we're targeting for the next ToL dataset. It is interesting to note, though, and we should explore this more.
#
# I'm curious about the single "variety", since I thought that was more of a plant label and these are all animals.
#
# All images are in Animalia, as expected; we have 2 phyla represented and 8 classes:
# - Predominantly Chordata, and within that phylum, Mammalia is the vast majority, though Aves is about 10%.
# - Note that not every image with a phylum label has a class label.
# - Insecta, Malacostraca, Arachnida, and Diplopoda are all in the phylum Arthropoda.
#
# ### Label Multi-Species Images
# We'll go by both the URL and image ID, which do seem to correspond to the same images (for uniqueness).
# %%
df_cleaned["multi_species"] = df_cleaned.duplicated(subset = ["url_aws", "image_id"], keep = False)
df_cleaned.loc[df_cleaned["multi_species"]].nunique()
# %% [markdown]
# We've got just under 100K images that have multiple species. We can figure out how many species each of them has, and then move on to looking at images per sequence and other labeling info.
# %%
multi_sp_imgs = list(df_cleaned.loc[df_cleaned["multi_species"], "image_id"].unique())
# %%
for img in multi_sp_imgs:
df_cleaned.loc[df_cleaned["image_id"] == img, "num_species"] = df_cleaned.loc[df_cleaned["image_id"] == img].shape[0]
df_cleaned.head()
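# %% [markdown]
# As an aside, the loop above rescans the full frame once per multi-species image. A vectorized `groupby(...).transform("size")` should give the same counts in a single pass (a sketch, included only as a consistency check, assuming as the loop does that `image_id` identifies an image):
# %%
# Row count per image_id, aligned to df_cleaned's index; should equal num_species on multi-species rows.
vec_counts = df_cleaned.groupby("image_id")["image_id"].transform("size")
(vec_counts[df_cleaned["multi_species"]] == df_cleaned.loc[df_cleaned["multi_species"], "num_species"]).all()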
# %% [markdown]
# #### Save this to CSV now that we have those counts
# %%
df_cleaned.to_csv("../data/lila_image_urls_and_labels.csv", index = False)
# %%
df_cleaned.loc[df_cleaned["multi_species"]].head()
# %% [markdown]
# How many different species do we generally have when we have multiple species in an image?
# %%
df_cleaned.num_species.value_counts()
# %% [markdown]
# We have 97,567 images with 2 different species (most multi-species instances), 2,023 with 3 different species, and 92 with 4.
#
# We will want to dedicate some more time to exploring some of these taxonomic counts, but we'll first look at the number of unique taxa, by Linnean 7-rank (`unique_7_tuple`) and then by all available taxonomic labels (`unique_taxa`). We'll compare these to the number of unique scientific and common names, then perhaps add a count of the number of creatures based on one of those labels. At that point we may save another copy of this CSV and start a new analysis notebook.
# %%
df_cleaned.annotation_level.value_counts()
# %% [markdown]
# We've got ~3M labeled to the image level and another ~3M with unknown labeling (all from NACTI, which Andrey will check on), leaving ~5M labeled only at the sequence level. This _should_ give Jianyang something to work with to start exploring near-duplicate de-duplication.
#
# Let's update the non-multi species images to show 1 in the `num_species` column, then move on to checking the taxonomy strings.
# %%
df_cleaned.loc[df_cleaned["num_species"].isna(), "num_species"] = 1.0
df_cleaned.num_species.value_counts()
# %% [markdown]
# ### Taxonomic String Exploration
# %%
lin_taxa = ['kingdom', 'phylum', 'class', 'order', 'family', 'genus', 'species']
all_taxa = [
    'kingdom', 'phylum', 'subphylum', 'superclass', 'class', 'subclass',
    'infraclass', 'superorder', 'order', 'suborder', 'infraorder',
    'superfamily', 'family', 'subfamily', 'tribe', 'genus', 'species',
    'subspecies', 'variety'
]
# %% [markdown]
# #### How many have all 7 Linnean ranks?
# %%
df_all_taxa = df_cleaned.dropna(subset = lin_taxa)
df_all_taxa[all_taxa].info(show_counts = True)
# %% [markdown]
# That's pretty good coverage: 7,521,712 out of 10,965,902. It looks like many of them also have the other taxonomic ranks too. Now how many different 7-tuples are there?
#
# #### How many unique 7-tuples?
# %%
# number of unique 7-tuples in full dataset
df_cleaned['lin_duplicate'] = df_cleaned.duplicated(subset = lin_taxa, keep = 'first')
df_unique_lin_taxa = df_cleaned.loc[~df_cleaned['lin_duplicate']].copy()
df_unique_lin_taxa.info(show_counts = True)
# %% [markdown]
# Interesting: we have 891 unique 7-tuple taxonomic strings, but one scientific name and one common name seem to be missing.
# What's the uniqueness count here?
# %%
df_unique_lin_taxa.nunique()
# %% [markdown]
# They're across all datasets. We have 890 unique scientific names and 886 unique common names (from 885 original labels).
# %%
df_unique_lin_taxa.loc[(df_unique_lin_taxa["scientific_name"].isna()) | (df_unique_lin_taxa["common_name"].isna())]
# %% [markdown]
# It's a car...We need to remove cars...
# %%
df_cleaned.loc[df_cleaned["original_label"] == "car"].shape
# %%
df_cleaned.loc[df_cleaned["original_label"] == "car", "dataset_name"].value_counts()
# %% [markdown]
# #### How many unique full taxa (sub ranks included)?
# %%
# number of unique full taxonomic strings (all ranks) in full dataset
df_cleaned['full_duplicate'] = df_cleaned.duplicated(subset = all_taxa, keep = 'first')
df_unique_all_taxa = df_cleaned.loc[~df_cleaned['full_duplicate']].copy()
df_unique_all_taxa.info(show_counts = True)
# %% [markdown]
# When we consider the sub-ranks as well we wind up with 909 unique taxa (still with one scientific and common name missing--the car!).
# %%
df_unique_all_taxa.nunique()
# %% [markdown]
# We have now captured all 908 unique scientific names, but only 901 of the 999 unique common names.
# %%
df_unique_all_taxa.loc[(df_unique_all_taxa["scientific_name"].isna()) | (df_unique_all_taxa["common_name"].isna())]
# %% [markdown]
# #### Let's remove those cars
# %%
df_cleaned = df_cleaned[df_cleaned["original_label"] != "car"].copy()
df_cleaned[["original_label", "scientific_name", "common_name", "kingdom"]].info(show_counts = True)
# %% [markdown]
# Now we have 10,961,185 instead of 10,965,902 images; they all have `original_label`, but only 10,192,703 of them have `scientific_name`, `common_name`, and `kingdom`. What are the `original_label`s for those ~800K images?
# %%
no_taxa = df_cleaned.loc[(df_cleaned["scientific_name"].isna()) & (df_cleaned["common_name"].isna()) & (df_cleaned["kingdom"].isna())].copy()
print(no_taxa[["dataset_name", "original_label"]].nunique())
no_taxa[["dataset_name", "original_label"]].info(show_counts = True)
# %% [markdown]
# What are these 24 other labels and how are the 768,482 images with them distributed across these 12 datasets?
# %%
no_taxa["original_label"].value_counts()
# %%
no_taxa["dataset_name"].value_counts()
# %%
no_taxa.groupby(["dataset_name"])["original_label"].value_counts()
# %% [markdown]
# Interesting. It seems like all of these should also be removed. Vegetation obstruction could of course be labeled in Plantae, but we're not going to be labeling 7K images for this project.
#
# Let's remove them, then we should have 10,192,703 images.
# %%
non_taxa_labels = list(no_taxa["original_label"].unique())
# %%
df_clean = df_cleaned.loc[~df_cleaned["original_label"].isin(non_taxa_labels)].copy()
df_clean.info(show_counts = True)
# %%
df_clean.nunique()
# %% [markdown]
# Let's check out our top ten labels, scientific names, and common names. Then we'll save this cleaned metadata file.
# %%
df_clean["original_label"].value_counts()[:10]
# %%
df_clean["scientific_name"].value_counts()[:10]
# %%
df_clean["common_name"].value_counts()[:10]
# %% [markdown]
# There are also 257,159 humans in here! Glad the number agrees across labels. We'll probably need to remove the humans, though I may save a copy that still includes them on the HF repo (it is just our dev repo). Which datasets have them? I thought humans were filtered out previously (though I could be mistaken, as they seem to be in 15 of the 20 datasets).
# %%
df_clean.loc[df_clean["original_label"] == "human", "dataset_name"].value_counts()
# %% [markdown]
# What do human labels look like (as in do they have the full taxa structure)?
# %%
df_clean.loc[df_clean["original_label"] == "human"].sample(5)
# %% [markdown]
# They do seem to have the full taxa structure...interesting.
# %%
df_clean.to_csv("../data/lila_image_urls_and_labels_wHumans.csv", index = False)
# %%
df_clean.loc[df_clean["original_label"] != "human"].to_csv("../data/lila_image_urls_and_labels.csv", index = False)
# %%
taxa = [col for col in list(df_clean.columns) if col in all_taxa or col == "original_label"]
df_taxa = df_clean[taxa].copy()
df_taxa.loc[df_taxa["original_label"] == "human"].sample(7)
# %%
df_clean.loc[df_clean["original_label"] != "human"].info(show_counts = True)
# %%
df_clean.loc[df_clean["original_label"] != "human"].nunique()
# %% [markdown]
# We have 1,198,696 distinct sequence IDs for the 9,849,119 unique image IDs, suggesting an average of about 8 images per sequence?
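#
# A quick check of that ratio (a sketch; assuming the sequence ID column is named `sequence_id`):
# %%
no_humans = df_clean.loc[df_clean["original_label"] != "human"]
# Mean number of unique images per sequence (column name assumed).
no_humans.groupby("sequence_id")["image_id"].nunique().mean()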
# %%
df_clean.loc[df_clean["original_label"] != "human", "annotation_level"].value_counts()
# %% [markdown]
# #### Check Number of Images per Scientific Name?
# %%
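# A sketch for the heading above: image counts per scientific name (top 20, humans excluded),
# to get a sense of how skewed the coverage is.
df_clean.loc[df_clean["original_label"] != "human", "scientific_name"].value_counts()[:20]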
# %%
# %%
sns.histplot(df_clean.loc[df_clean["original_label"] != "human"], y = 'class')
# %%
sns.histplot(df_clean.loc[df_clean["original_label"] != "human"], y = 'order')
# %%