FFHQ-Aging Dataset

This dataset repository contains the FFHQ-Aging Dataset sourced from its official repository. The images provided by the authors are hosted on Google Drive as a single zip file of approximately 90GB, which makes them inconvenient to download.

In this repository, we have downloaded the original zip file from the provided Google Drive link, unzipped it, and re-zipped the images into smaller chunks so that they can be distributed easily via Hugging Face.



Directory Tree

We provide the zip files of the original 1024x1024 images in images_zip/images1024x1024. You may run the scripts below to unzip or resize them.

πŸ“¦ ffhq_aging_dataset/
β”œβ”€β”€ πŸ“ images/
β”‚   β”œβ”€β”€ πŸ“ images512x512/
β”‚   └── πŸ“ images1024x1024/
β”œβ”€β”€ πŸ“ images_zip/
β”‚   └── πŸ“ images1024x1024/
β”œβ”€β”€ πŸ“ labels/
β”‚   β”œβ”€β”€ πŸ“„ ffhq_aging_labels.csv
β”‚   └── πŸ“„ ffhq_aging_labels.json
β”œβ”€β”€ πŸ“ logs/
β”œβ”€β”€ πŸ“ scripts/
β”‚   β”œβ”€β”€ πŸ’» 01_zip_images.sh
β”‚   β”œβ”€β”€ πŸ’» 02_unzip_images.sh
β”‚   β”œβ”€β”€ πŸ’» 03_batch_image_resize.sh
β”‚   └── πŸ’» 04_process_labels.sh
β”œβ”€β”€ πŸ“ src/
β”‚   β”œβ”€β”€ 🐍 batch_image_resize.py
β”‚   β”œβ”€β”€ 🐍 process_labels.py
β”‚   β”œβ”€β”€ 🐍 unzip_images.py
β”‚   └── 🐍 zip_images.py
β”œβ”€β”€ πŸ“„ .gitignore
└── πŸ“„ README.md


Scripts

Zipping the Images

We have zipped the images into chunks of 1000 images each, and provide them in images_zip/images1024x1024. However, if you need to modify the images and transfer them somewhere, feel free to re-zip them via the script below:

./scripts/01_zip_images.sh
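For reference, the chunked zipping can be sketched in Python roughly as follows. The function name, chunk naming scheme, and paths are illustrative assumptions, not the exact logic of src/zip_images.py:

```python
import os
import zipfile

def zip_in_chunks(src_dir, out_dir, chunk_size=1000):
    """Pack the images in src_dir into zip archives of chunk_size files each."""
    os.makedirs(out_dir, exist_ok=True)
    files = sorted(
        f for f in os.listdir(src_dir)
        if f.lower().endswith((".png", ".jpg"))
    )
    for i in range(0, len(files), chunk_size):
        chunk = files[i:i + chunk_size]
        # Hypothetical naming: one archive per chunk index, e.g. chunk_0000.zip
        archive = os.path.join(out_dir, f"chunk_{i // chunk_size:04d}.zip")
        # ZIP_STORED skips recompression, since PNG/JPG are already compressed
        with zipfile.ZipFile(archive, "w", zipfile.ZIP_STORED) as zf:
            for name in chunk:
                zf.write(os.path.join(src_dir, name), arcname=name)
```

Storing rather than deflating keeps the archives fast to create and extract with no real size penalty for already-compressed images.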

Unzipping the Images

Unzipping the zipped chunks is as easy as running the command below:

./scripts/02_unzip_images.sh
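Extracting all chunk archives back into one directory can be sketched as below. The function name and paths are hypothetical; the actual logic lives in src/unzip_images.py:

```python
import glob
import os
import zipfile

def unzip_chunks(zip_dir, out_dir):
    """Extract every *.zip chunk in zip_dir into a single output directory."""
    os.makedirs(out_dir, exist_ok=True)
    for archive in sorted(glob.glob(os.path.join(zip_dir, "*.zip"))):
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(out_dir)
```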

Resizing the Images

If you need to resize the images for other uses, configure the image sizes and output directory in scripts/03_batch_image_resize.sh, and run:

./scripts/03_batch_image_resize.sh
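The resizing step can be sketched with Pillow roughly as follows. This assumes Pillow is installed; the function name, target size, and file filter are illustrative, not the exact behavior of src/batch_image_resize.py:

```python
import os
from PIL import Image

def batch_resize(src_dir, out_dir, size=(512, 512)):
    """Resize every image in src_dir to the given size, writing to out_dir."""
    os.makedirs(out_dir, exist_ok=True)
    for name in sorted(os.listdir(src_dir)):
        if not name.lower().endswith((".png", ".jpg")):
            continue
        with Image.open(os.path.join(src_dir, name)) as im:
            # Lanczos resampling preserves detail well when downscaling photos
            im.resize(size, Image.LANCZOS).save(os.path.join(out_dir, name))
```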

Converting the Original CSV Label File into JSON

We have also converted the original label file from CSV to JSON for convenience, and provide it in labels/ffhq_aging_labels.json. Similarly, if you need to re-compile the JSON label file, configure the parameters in src/process_labels.py, and run:

./scripts/04_process_labels.sh
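The CSV-to-JSON conversion itself can be sketched with the standard library as below. The helper name and output layout (a list of row dicts keyed by the CSV header) are illustrative; see src/process_labels.py for the actual parameters:

```python
import csv
import json

def csv_labels_to_json(csv_path, json_path):
    """Convert a CSV label file into a JSON list of per-row dicts."""
    with open(csv_path, newline="") as f:
        # DictReader keys each row by the header line of the CSV
        rows = list(csv.DictReader(f))
    with open(json_path, "w") as f:
        json.dump(rows, f, indent=2)
```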