
TIGAS Dataset

License: MIT

A comprehensive dataset for training AI-generated image detection models

TIGAS Model • GitHub Repository

Dataset Description

The TIGAS Dataset is a large-scale collection of real and AI-generated images designed for training and evaluating AI-generated image detection models. It contains 142,902 images from diverse sources, including state-of-the-art generative models.

Key Features

  • Binary classification task: Real (label=0) vs AI-Generated/Fake (label=1)
  • Diverse generators: 19 different image sources including GANs and diffusion models
  • Near-balanced split: ~54% real, ~46% fake images
  • High-quality annotations: CSV format with image paths and labels
  • Ready-to-use: Compatible with PyTorch and standard ML pipelines

Dataset Statistics

Overall Distribution

| Split | Total Images | Real (label=0) | Fake (label=1) | Real % |
|-------|-------------:|---------------:|---------------:|-------:|
| Train | 128,776 | 69,772 | 59,004 | 54.2% |
| Test | 14,126 | 7,037 | 7,089 | 49.8% |
| Total | 142,902 | 76,809 | 66,093 | 53.7% |
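
As a quick sanity check, the per-split counts above are internally consistent (numbers copied directly from the table):

```python
# Real/fake counts per split, copied from the table above.
splits = {"Train": (69_772, 59_004), "Test": (7_037, 7_089)}

totals = {}
for name, (real, fake) in splits.items():
    totals[name] = real + fake
    print(f"{name}: {totals[name]:,} images, {100 * real / totals[name]:.1f}% real")
# Train: 128,776 images, 54.2% real
# Test: 14,126 images, 49.8% real
```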

Image Sources (Train Split)

The dataset includes images from the following generators and sources:

| Source | Images | Type | Description |
|--------|-------:|------|-------------|
| art002_4 | 10,986 | Mixed | Artistic images subset 4 |
| art002_1 | 10,801 | Mixed | Artistic images subset 1 |
| VQDM | 9,518 | Generated | Vector Quantized Diffusion Model |
| sd14 | 9,517 | Generated | Stable Diffusion 1.4 |
| Midjourney | 9,516 | Generated | Midjourney AI |
| Glide | 9,513 | Generated | OpenAI GLIDE |
| wuk | 9,510 | Mixed | Mixed source images |
| art002_3 | 8,295 | Mixed | Artistic images subset 3 |
| gaugan | 7,992 | Generated | NVIDIA GauGAN |
| art002_2 | 6,911 | Mixed | Artistic images subset 2 |
| sd15_1 | 6,353 | Generated | Stable Diffusion 1.5 subset 1 |
| sd15_2 | 6,349 | Generated | Stable Diffusion 1.5 subset 2 |
| art001 | 5,966 | Mixed | Artistic images |
| ADM | 4,756 | Mixed | Ablated Diffusion Model (ImageNet) |
| biggan | 3,200 | Generated | BigGAN |
| stargan | 3,198 | Generated | StarGAN (face manipulation) |
| sd_xl | 3,196 | Generated | Stable Diffusion XL |
| face | 1,600 | Mixed | Face images |
| DALLE2 | — | Generated | DALL-E 2 (fake only in subset) |
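
Per-source subsets can be selected directly from the annotations, since the generator name is the second component of `image_path`. A minimal sketch over hypothetical rows (the real CSV follows the same layout):

```python
import pandas as pd

# Hypothetical annotation rows mimicking the real CSV layout.
df = pd.DataFrame({
    "image_path": [
        "images\\Midjourney\\1_fake\\img_001.png",
        "images\\ADM\\0_real\\ILSVRC2012_val_00000005.JPEG",
        "images\\Midjourney\\1_fake\\img_002.png",
    ],
    "label": [1, 0, 1],
})

# The source/generator is the second path component.
df["source"] = df["image_path"].str.split("\\").str[1]
midjourney = df[df["source"] == "Midjourney"]
print(len(midjourney))  # 2
```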

Image Formats

| Format | Count | Percentage |
|--------|------:|-----------:|
| PNG | 48,130 | 37.4% |
| JPG | 44,414 | 34.5% |
| JPEG | 34,632 | 26.9% |
| jpeg | 1,600 | 1.2% |
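
Note that `JPEG` and `jpeg` are counted separately: extensions in the annotation paths are case-sensitive, so code that filters by extension should normalize case. For example (hypothetical paths):

```python
from collections import Counter

# Hypothetical paths showing the mixed-case extensions seen in the table.
paths = [
    "images\\sd14\\1_fake\\a.png",
    "images\\ADM\\0_real\\b.JPEG",
    "images\\face\\0_real\\c.jpeg",
]

raw = Counter(p.rsplit(".", 1)[-1] for p in paths)
normalized = Counter(p.rsplit(".", 1)[-1].lower() for p in paths)
print(dict(raw))         # {'png': 1, 'JPEG': 1, 'jpeg': 1}
print(dict(normalized))  # {'png': 1, 'jpeg': 2}
```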

Dataset Structure

TIGAS/
├── LICENSE                     # MIT License
├── README.md                   # This file
├── train/
│   ├── annotations01.csv       # Training annotations (128,776 entries)
│   └── images/
│       ├── ADM/
│       │   ├── 0_real/         # Real images from ImageNet
│       │   └── 1_fake/         # Generated by ADM
│       ├── art001/
│       │   ├── 0_real/
│       │   └── 1_fake/
│       ├── art002_1/ ... art002_4/
│       ├── biggan/
│       ├── DALLE2/
│       ├── face/
│       ├── gaugan/
│       ├── Glide/
│       ├── Midjourney/
│       ├── sd_xl/
│       ├── sd14/
│       ├── sd15_1/
│       ├── sd15_2/
│       ├── stargan/
│       ├── VQDM/
│       └── wuk/
└── test/
    └── annotations01.csv       # Test annotations (14,126 entries)

Annotation Format

The CSV files contain two columns:

image_path,label
images\ADM\0_real\ILSVRC2012_val_00000005.JPEG,0
images\Midjourney\1_fake\image_001.png,1
  • image_path: Relative path to the image file (Windows-style backslashes)
  • label: Binary label where:
    • 0 = Real/Natural image
    • 1 = AI-Generated/Fake image

Note: The test split uses the same images/ directory as train but with different image subsets defined in its annotation file.
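
The backslash-separated paths can also be normalized portably with `pathlib` rather than plain string replacement; a small sketch (the example path is taken from the CSV excerpt above):

```python
from pathlib import PurePosixPath, PureWindowsPath

def to_posix(win_path: str) -> str:
    # Parse as a Windows path (which accepts either separator), then
    # re-join the components with forward slashes.
    return str(PurePosixPath(*PureWindowsPath(win_path).parts))

print(to_posix("images\\ADM\\0_real\\ILSVRC2012_val_00000005.JPEG"))
# images/ADM/0_real/ILSVRC2012_val_00000005.JPEG
```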

Usage

Loading with Python

import pandas as pd
from pathlib import Path
from PIL import Image

# Load annotations
data_root = Path("TIGAS")
train_df = pd.read_csv(data_root / "train" / "annotations01.csv")
test_df = pd.read_csv(data_root / "test" / "annotations01.csv")

# Convert Windows paths to current OS format
train_df['image_path'] = train_df['image_path'].str.replace('\\', '/', regex=False)

# Load an image
def load_image(row):
    img_path = data_root / "train" / row['image_path']
    image = Image.open(img_path).convert('RGB')
    label = row['label']
    return image, label

# Example
image, label = load_image(train_df.iloc[0])
print(f"Label: {'Real' if label == 0 else 'Fake'}")

Loading with PyTorch

import torch
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms
import pandas as pd
from PIL import Image
from pathlib import Path

class TIGASDataset(Dataset):
    def __init__(self, root_dir, split='train', transform=None):
        self.root_dir = Path(root_dir)
        self.split = split
        self.transform = transform
        
        # Load annotations
        ann_path = self.root_dir / split / "annotations01.csv"
        self.annotations = pd.read_csv(ann_path)
        self.annotations['image_path'] = self.annotations['image_path'].str.replace('\\', '/', regex=False)
        
        # Images are always in train/images/
        self.images_dir = self.root_dir / "train"
    
    def __len__(self):
        return len(self.annotations)
    
    def __getitem__(self, idx):
        row = self.annotations.iloc[idx]
        img_path = self.images_dir / row['image_path']
        image = Image.open(img_path).convert('RGB')
        label = row['label']
        
        if self.transform:
            image = self.transform(image)
        
        return image, label

# Example usage
transform = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
])

train_dataset = TIGASDataset("TIGAS", split='train', transform=transform)
test_dataset = TIGASDataset("TIGAS", split='test', transform=transform)

train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True, num_workers=4)
test_loader = DataLoader(test_dataset, batch_size=32, shuffle=False, num_workers=4)
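
Because the train split is only roughly balanced (~54% real / ~46% fake), exact class balance can be enforced by swapping `shuffle=True` for a `torch.utils.data.WeightedRandomSampler` with per-sample weights inversely proportional to class frequency. A sketch of the weight computation over toy labels (hypothetical values; the real labels come from the annotations):

```python
from collections import Counter

# Toy labels standing in for the dataset's label column (hypothetical).
labels = [0, 0, 0, 1, 1]
counts = Counter(labels)

# Each sample is weighted by the inverse of its class frequency, so both
# classes contribute equally in expectation when sampling with replacement.
weights = [1.0 / counts[lbl] for lbl in labels]
print(weights)  # [0.3333333333333333, 0.3333333333333333, 0.3333333333333333, 0.5, 0.5]
```

These weights would be passed as `sampler=WeightedRandomSampler(weights, num_samples=len(weights), replacement=True)` in place of `shuffle=True`.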

Using with TIGAS Model

from tigas import TIGAS

# Initialize model (auto-downloads from HuggingFace)
tigas = TIGAS(auto_download=True, device='cuda')

# Evaluate on dataset images
from pathlib import Path
import pandas as pd

data_root = Path("TIGAS")
test_df = pd.read_csv(data_root / "test" / "annotations01.csv")
test_df['image_path'] = test_df['image_path'].str.replace('\\', '/', regex=False)

# Evaluate first 10 images
for i, row in test_df.head(10).iterrows():
    img_path = data_root / "train" / row['image_path']
    score = tigas(str(img_path))
    true_label = "Real" if row['label'] == 0 else "Fake"
    pred_label = "Real" if score > 0.5 else "Fake"
    print(f"{img_path.name}: Score={score:.4f}, True={true_label}, Pred={pred_label}")
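
To aggregate the per-image results into a metric, predictions follow the same convention as above (a score over the threshold predicts Real). A small helper, hypothetical and assuming that score convention:

```python
def accuracy(scores, true_labels, threshold=0.5):
    # Same convention as the loop above: score > threshold predicts Real (0).
    preds = [0 if s > threshold else 1 for s in scores]
    return sum(p == t for p, t in zip(preds, true_labels)) / len(true_labels)

# Hypothetical scores and labels; in practice, collect them in the loop.
print(accuracy([0.9, 0.2, 0.7, 0.4], [0, 1, 0, 1]))  # 1.0
```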

Generators Included

Diffusion Models

  • Stable Diffusion 1.4, 1.5, XL - Open-source text-to-image diffusion models
  • DALL-E 2 - OpenAI's text-to-image model
  • Midjourney - Commercial text-to-image service
  • GLIDE - OpenAI's guided language-to-image diffusion
  • ADM - Ablated Diffusion Model (class-conditional on ImageNet)
  • VQDM - Vector Quantized Diffusion Model

GANs (Generative Adversarial Networks)

  • BigGAN - Large-scale class-conditional GAN
  • GauGAN - NVIDIA's semantic image synthesis
  • StarGAN - Multi-domain face manipulation

Citation

If you use this dataset in your research, please cite:

@dataset{tigas_dataset_2025,
  title={TIGAS Dataset: A Comprehensive Collection for AI-Generated Image Detection},
  author={Morgenshtern, Dmitrij},
  year={2025},
  publisher={HuggingFace},
  url={https://huggingface.co/datasets/H1merka/TIGAS-dataset}
}

License

This dataset is released under the MIT License.

Important: Individual images in this dataset may be derived from or generated using various models with their own licensing terms:

  • ImageNet images (in 0_real folders) are subject to ImageNet terms of use
  • Generated images are outputs of the respective models (Stable Diffusion, Midjourney, etc.)

The annotations and dataset organization are MIT licensed.

Changelog

  • v1.0 (December 2025): Initial release with 142,902 images from 19 sources