---
license: unknown
language:
  - en
metrics:
  - accuracy
tags:
  - art
base_model: google/vit-base-patch16-224
datasets:
  - DataScienceProject/Art_Images_Ai_And_Real_
pipeline_tag: image-classification
library_name: transformers
---

# Model Card for ViT Real vs. AI-Generated Art Classifier

This model classifies images as either 'real art' or 'fake art' (AI-generated). It is built on the Vision Transformer (ViT) architecture, fine-tuned to capture the patterns and artifacts that help distinguish human-made artwork from AI-generated imagery.

## Model Details

### Model Description

This model leverages the Vision Transformer (ViT) architecture, which applies self-attention mechanisms to process images. The model classifies images into two categories: 'real art' and 'fake art'. It captures intricate patterns and features that help in distinguishing between the two categories without the need for Convolutional Neural Networks (CNNs).

## Direct Use

This model can be used to classify images as 'real art' or 'fake art' based on visual features learned by the Vision Transformer.

## Out-of-Scope Use

The model may not perform optimally on images outside the art domain or on artworks with significantly different visual characteristics compared to those in the training dataset. It is not suitable for medical imaging or other non-artistic visual tasks.

## Bias, Risks, and Limitations

Users should be mindful of the model's limitations and potential biases, particularly regarding artworks that differ significantly from the training data. Regular updates and evaluations may be necessary to ensure the model remains accurate and effective.

### Recommendations

## How to Get Started with the Model

1. **Prepare data:** Organize your images into appropriate folders, ensuring they are resized and normalized.
2. **Train the model:** Use the provided code to train the Vision Transformer model on your dataset.
3. **Evaluate:** Assess the model's performance on a separate test set of images.
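For inference, a minimal sketch using the `transformers` image-classification pipeline might look like the following. The repository id `benjaminStreltzin/Vit` is an assumption inferred from this repo's path; substitute your own model id if it differs. The import is deferred so the helper can be defined without `transformers` installed.

```python
def classify_artwork(image_path: str, model_id: str = "benjaminStreltzin/Vit"):
    """Return label/score predictions for a single image file.

    NOTE: `model_id` defaults to an assumed repo id; adjust as needed.
    Requires `transformers` and a backend such as `torch` to be installed.
    """
    from transformers import pipeline  # deferred: optional heavy dependency

    clf = pipeline("image-classification", model=model_id)
    # Returns a list of {"label": ..., "score": ...} dicts, highest score first.
    return clf(image_path)
```

Calling `classify_artwork("my_painting.jpg")` downloads the model weights on first use and returns the predicted labels with confidence scores.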

## Training Details

### Training Data

- **Dataset:** [DataScienceProject/Art_Images_Ai_And_Real_](https://huggingface.co/datasets/DataScienceProject/Art_Images_Ai_And_Real_)
- **Preprocessing:** Images are resized, normalized, and prepared for input to the Vision Transformer.

### Training Procedure

Images are resized to a uniform dimension and normalized. The Vision Transformer model is then trained on these preprocessed images.
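As a sketch, the resize-and-normalize step could look like the following, assuming the `google/vit-base-patch16-224` preprocessor defaults (224×224 input, per-channel mean and std of 0.5). Pillow and NumPy are used here for illustration; in practice the model's own image processor from `transformers` handles this.

```python
import numpy as np
from PIL import Image

# Assumed defaults matching the google/vit-base-patch16-224 preprocessor.
IMG_SIZE = 224
MEAN = np.array([0.5, 0.5, 0.5])
STD = np.array([0.5, 0.5, 0.5])

def preprocess(image: Image.Image) -> np.ndarray:
    """Resize and normalize an image into a (3, 224, 224) float array."""
    img = image.convert("RGB").resize((IMG_SIZE, IMG_SIZE), Image.BILINEAR)
    arr = np.asarray(img, dtype=np.float32) / 255.0  # scale pixels to [0, 1]
    arr = (arr - MEAN) / STD                         # normalize per channel
    return arr.transpose(2, 0, 1)                    # HWC -> CHW for the model
```

Batching these arrays and feeding them to the ViT backbone with the two-class head is then the core of the training loop.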

#### Training Hyperparameters

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

#### Factors

#### Metrics

### Results

#### Summary