
Dataset Card for Ancient Mortars

Dataset Summary

The Ancient Mortars dataset consists of ~7 million 224 x 224 px images of micro-particles sampled from various archaeological sites around the world (Italy, Israel, etc.). The data comes from a particle analyzer that takes rolling video of each particle as it falls from a chute, capturing around 600,000–700,000 photos of each particle.

Supported tasks

  • image-classification: The goal of this task is to classify a given image of a particle into one of the following 14 classes:
    • sand_zif
    • kurkar_dor_4
    • kurkar_dor_2
    • kurkar_nahal_4
    • volcanicash_pozzuoli
    • volcanicash_procida
    • olivepress_1
    • sand_beach
    • kurkar_nahal_3
    • kurkar_dor_1
    • kurkar_nahal_1
    • sand_herodian
    • kurkar_nahal_2
    • kurkar_dor_3

Naming Configurations

Each file is named as follows: particlenameLVL1_particlenameLVL2_particlenameLVL3-oldfilename.bmp

  • particlenameLVL1 is the "highest" level of delineation of particles, e.g. sand, kurkar, olivepress, volcanicash.
  • particlenameLVL2 (optional) is the "second-order" level of delineation, e.g. location names (dor vs. nahal taninim, procida vs. pozzuoli).
  • particlenameLVL3 (optional) is the "lowest" level of delineation, e.g. sample numbers within the same location (1 vs. 2 for two different collection sites within the same quarry).

For example, the class kurkar_dor_4 indicates a kurkar particle from Dor, collected specifically at site 4 within Dor.
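The naming convention above can be parsed mechanically. A minimal sketch (the function name and example file names are hypothetical; the actual old file names in the dataset will differ):

```python
def parse_particle_name(filename):
    """Split a file name of the form
    particlenameLVL1_particlenameLVL2_particlenameLVL3-oldfilename.bmp
    into its hierarchy levels (LVL2 and LVL3 are optional)."""
    class_part = filename.split("-", 1)[0]  # drop "-oldfilename.bmp"
    levels = class_part.split("_")
    return {
        "lvl1": levels[0],
        "lvl2": levels[1] if len(levels) > 1 else None,
        "lvl3": levels[2] if len(levels) > 2 else None,
    }

parse_particle_name("kurkar_dor_4-IMG0001.bmp")
# {'lvl1': 'kurkar', 'lvl2': 'dor', 'lvl3': '4'}
```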

Motivation for naming configuration

Because classifying each image by particle type AND location AND specific collection site within a location may be beyond current computer vision techniques, this naming configuration was chosen so that particle classes can easily be "collapsed" to remove some of the data's specificity.

For example, if models cannot distinguish kurkar_dor_1 from kurkar_dor_2, one may want to collapse the classes so that all kurkar_dor particles are treated as a single class (kurkar_dor_1 through kurkar_dor_4 become simply kurkar_dor).
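Because the class names are underscore-delimited by level, collapsing is a one-liner. A minimal sketch (the function name is hypothetical):

```python
def collapse_class(name, keep_levels=2):
    """Collapse a full class name to its first `keep_levels`
    underscore-separated naming levels."""
    return "_".join(name.split("_")[:keep_levels])

collapse_class("kurkar_dor_2", keep_levels=2)  # 'kurkar_dor'
collapse_class("kurkar_dor_2", keep_levels=1)  # 'kurkar'
```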

HuggingFace Dataset

The HuggingFace ancient-mortars dataset repo has been set up with three .zip files and one text file under the /data subdirectory.

  • train.zip contains a collection of .bmp files across many samples to be used for training.
  • valid.zip contains a collection of .bmp files across many samples to be used for validation.
  • test.zip contains a collection of .bmp files across many samples to be used for testing.
  • particle_names.txt is a list of the lowest-level particle names for labeling (more on this below).

Some of our experimental samples are the same type of particle from different locations (e.g. volcanic ash from Pozzuoli, Italy vs. volcanic ash from Procida, Italy), and sometimes from different parts of the same location (e.g. kurkar from a quarry at Nahal Taninim, collection location 1 vs. location 2).

We also have multiple images of each particle in our dataset. In building the splits, we must ensure that these multiple images of the same particle are not divided among different train/validation/test splits, so that our results are not inflated. The datasets below are therefore set up such that all images of a given particle are always in the same split.
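The group-safe splitting described above can be sketched in plain Python: splitting is done over particle IDs rather than over individual images, so all images of a particle land in one split. The function name, fractions, and the `particle_id` extractor are hypothetical; the real pipeline would derive particle identity from the actual file names.

```python
import random

def group_split(image_files, particle_id, seed=0, fractions=(0.8, 0.1, 0.1)):
    """Assign images to train/valid/test such that all images of the
    same particle (as identified by particle_id) share one split."""
    ids = sorted({particle_id(f) for f in image_files})
    rng = random.Random(seed)
    rng.shuffle(ids)
    n_train = int(fractions[0] * len(ids))
    n_valid = int(fractions[1] * len(ids))
    split_of = {}
    for i, pid in enumerate(ids):
        if i < n_train:
            split_of[pid] = "train"
        elif i < n_train + n_valid:
            split_of[pid] = "valid"
        else:
            split_of[pid] = "test"
    return {name: [f for f in image_files if split_of[particle_id(f)] == name]
            for name in ("train", "valid", "test")}
```

Shuffling the particle IDs (not the images) with a fixed seed keeps the assignment reproducible while preventing any particle's images from leaking across splits.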

ACCRE Dataset

3 datasets have been built and stored in the ancient-mortars directory on ACCRE, each corresponding to a different data configuration from the data on HuggingFace:

  • lowest_config.hf (default) contains all data splits (train/test/valid) with labels at the lowest naming level: particlenameLVL1, particlenameLVL2, and particlenameLVL3 all kept (e.g. kurkar_nahaltaninim_1 is distinct from kurkar_nahaltaninim_2).
  • middle_config.hf contains all data splits (train/test/valid) with labels at the middle naming level: only particlenameLVL1 and particlenameLVL2 kept (e.g. kurkar_nahaltaninim_1 and kurkar_nahaltaninim_2 are merged under kurkar_nahaltaninim, but remain distinct from kurkar_dor).
  • highest_config.hf contains all data splits (train/test/valid) with labels at the highest naming level: only particlenameLVL1 kept (e.g. labels are just kurkar, sand, volcanicash, etc.).

An additional, balanced version of the dataset, built with the "lowest" naming configuration, is also uploaded to ACCRE under the name config_balanced.hf. This configuration contains an equal number of particles in each class, and is therefore significantly smaller than the other datasets, since the full dataset is highly imbalanced; the number of olivepress images in particular is far lower than for other particle classes.

Dataset Structure

Data Instances

The dataset is structured as a standard HuggingFace DatasetDict object.

A data point comprises an image file path, the image, and its label:

{
  'image_file_path': '/example/file/path',
  'image': <PIL.PngImagePlugin.PngImageFile image mode=L size=224x224>,
  'label': 5
}

Data Fields

  • image_file_path: The image's file path on the ACCRE directory
  • image: A PIL.Image.Image object containing the 224x224 image. Note that accessing the image column (dataset[0]["image"]) automatically decodes the image file. Decoding a large number of image files can take a significant amount of time, so it is important to index the sample before the "image" column: dataset[0]["image"] should always be preferred over dataset["image"][0].
  • label: an integer representing the particle class (use the label feature's .int2str() method, e.g. dataset.features["label"].int2str(example["label"]), to convert the integer representation to a string).
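The int-to-string conversion can also be mimicked without the datasets library. A minimal sketch, assuming the class order below (which is hypothetical; the authoritative order is whatever particle_names.txt and the saved ClassLabel feature define):

```python
# Hypothetical class ordering; the authoritative list lives in
# particle_names.txt / the dataset's ClassLabel feature.
particle_names = [
    "sand_zif", "kurkar_dor_4", "kurkar_dor_2", "kurkar_nahal_4",
    "volcanicash_pozzuoli", "volcanicash_procida", "olivepress_1",
    "sand_beach", "kurkar_nahal_3", "kurkar_dor_1", "kurkar_nahal_1",
    "sand_herodian", "kurkar_nahal_2", "kurkar_dor_3",
]

def int2str(label):
    """Map an integer label to its class name."""
    return particle_names[label]

def str2int(name):
    """Map a class name back to its integer label."""
    return particle_names.index(name)

int2str(5)  # 'volcanicash_procida' (under the hypothetical ordering above)
```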

Data Splits

Due to the very large size of the full dataset, we use a pared-down version for model training.

  • For training: We will make use of the "validation" split of the dataset. (contains ~400k images)
  • For validation: We will make use of the "test" split of the dataset. (contains ~400k images)
  • For testing: We will make use of the "train" split of the dataset, from the balanced version of the dataset. (contains ~100k images)
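The split remapping above can be sketched with plain dicts standing in for the loaded objects (in practice these would be DatasetDict objects loaded with datasets.load_from_disk; the variable names and placeholder values here are hypothetical):

```python
# Stand-ins for the loaded datasets; in practice these would be
# DatasetDict objects returned by datasets.load_from_disk(...).
lowest_config = {"train": "full-train", "validation": "full-valid", "test": "full-test"}
config_balanced = {"train": "balanced-train", "validation": "balanced-valid", "test": "balanced-test"}

# Remap the splits as described above: train on "validation" (~400k),
# validate on "test" (~400k), and test on the balanced "train" split (~100k).
training_data = lowest_config["validation"]
validation_data = lowest_config["test"]
testing_data = config_balanced["train"]
```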

Dataset Creation

Dr. Markus Eberl, archaeologist and Vanderbilt University Associate Professor of Anthropology
