---
annotations_creators:
  - manual
language:
  - en
license: mit
multilinguality: monolingual
pretty_name: TTI Model Attribution Dataset
tags:
  - text-to-image
  - diffusion
  - model-attribution
  - sfw
  - ai-generated-images
task_categories:
  - image-classification
task_ids:
  - multi-class-image-classification
dataset_creator: course_project_team
---

# Text-to-Image Model Attribution Dataset

This dataset is distilled from two comprehensive sources:

- A 2-year snapshot of the CivitAI SFW (Safe-for-Work) image dataset, containing metadata for generated images.
- A complete export of all models published on CivitAI, including metadata such as model names, types, and version identifiers.

By matching the image-level resource IDs (the resources used to generate each image) against the model version IDs from the model export, we identified and extracted four distinct subsets of images, each generated exclusively with one of the following prominent TTI (Text-to-Image) models:

- `flux_df` – FLUX
- `dreamshaper_df` – DreamShaper
- `juggernaut_df` – Juggernaut XL
- `pony_df` – Pony Diffusion V6 XL
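The matching step described above can be sketched as a join between the two exports. This is a minimal illustration, not the actual pipeline: the column names (`resource_ids`, `version_id`, `model_name`) and the toy values are assumptions, since the real CivitAI exports use their own schema.

```python
import pandas as pd

# Toy stand-ins for the two source exports; column names are hypothetical.
images = pd.DataFrame({
    "image_id": [1, 2, 3],
    "resource_ids": [[101], [102, 555], [103]],  # resources used per image
})
models = pd.DataFrame({
    "version_id": [101, 102, 103],
    "model_name": ["FLUX", "DreamShaper", "Juggernaut XL"],
})

# Explode the per-image resource list so each row carries one resource ID,
# then inner-join against the model export on the version identifier.
# Resource IDs with no matching model version (e.g. 555) simply drop out.
matched = (
    images.explode("resource_ids")
          .merge(models, left_on="resource_ids", right_on="version_id")
)
```

Grouping `matched` by `model_name` then yields the per-model image subsets that become `flux_df`, `dreamshaper_df`, and so on.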

## Objective

As part of a course project, we investigate whether a classifier can be trained to predict which model generated a given image, based solely on its visual content.

## Dataset Filtering

To ensure the integrity and clarity of model attribution:

- We filtered out images that used more than one LoRA (Low-Rank Adaptation) model, minimizing visual distortion or blending caused by mixed model influences.
- We limited image entries to those using at most two total resource IDs, increasing the likelihood that a single base model was the dominant generator.

The goal is to isolate the visual "style signature" of each individual base model.
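The two filtering rules can be expressed as a simple predicate over per-image resource lists. This is a sketch under assumed field names (`resources`, `type`, `"lora"`); the actual metadata schema in the snapshot may differ.

```python
# Hypothetical record shape; field names are assumptions, not the dataset schema.
records = [
    {"image_id": 1, "resources": [{"id": 101, "type": "checkpoint"}]},
    {"image_id": 2, "resources": [{"id": 102, "type": "checkpoint"},
                                  {"id": 200, "type": "lora"},
                                  {"id": 201, "type": "lora"}]},
    {"image_id": 3, "resources": [{"id": 103, "type": "checkpoint"},
                                  {"id": 202, "type": "lora"}]},
]

def keep(record):
    """Apply both filtering rules from the README."""
    resources = record["resources"]
    n_loras = sum(r["type"] == "lora" for r in resources)
    # Rule 1: at most one LoRA. Rule 2: at most two resource IDs in total.
    return n_loras <= 1 and len(resources) <= 2

filtered = [r for r in records if keep(r)]
```

In this toy example, image 2 is dropped (two LoRAs and three resources), while images 1 and 3 pass both rules.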


This dataset is intended for academic, educational, and non-commercial research use only.