---
license: apache-2.0
tags:
- vision
- image-classification
- vit
datasets:
- imagenet-1k
- imagenet-21k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
  example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
language:
- en
library_name: transformers
pipeline_tag: image-classification
---
# Model Overview:
The Vision Transformer (ViT) is a transformer encoder model for image recognition, introduced in [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929). It was pretrained on ImageNet-21k, a dataset of 14 million images spanning 21,843 classes, and fine-tuned on ImageNet 2012, which consists of 1 million images across 1,000 classes.

# How It Works:

Input Representation: Images are split into fixed-size patches (16x16 pixels), which are flattened and linearly embedded. A special [CLS] token is prepended to the sequence of patch embeddings for use in classification.
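
As a quick sanity check on the sequence length this produces (a minimal arithmetic sketch; the 224x224 input size comes from the preprocessing details below):

```python
# Sequence length for a 224x224 image split into 16x16 patches (ViT-Base defaults)
image_size = 224
patch_size = 16

patches_per_side = image_size // patch_size  # 14 patches along each side
num_patches = patches_per_side ** 2          # 196 patch tokens
sequence_length = num_patches + 1            # +1 for the [CLS] token -> 197

print(num_patches, sequence_length)  # 196 197
```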

Transformer Encoder: The model uses a transformer encoder architecture, similar to BERT for text, to process the image patches. Absolute position embeddings are added to encode spatial information before the sequence is fed into the transformer layers.

Classification: After processing through the transformer layers, the output from the [CLS] token is used for image classification. This token's final hidden state serves as a representation of the entire image.
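
To make the role of the [CLS] token concrete, the sketch below extracts its final hidden state from a plain ViT encoder. It uses the generic `google/vit-base-patch16-224-in21k` checkpoint purely for illustration, and the shapes assume the ViT-Base configuration (hidden size 768):

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, ViTModel

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
model = ViTModel.from_pretrained("google/vit-base-patch16-224-in21k")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state has shape (batch, 197, 768); position 0 is the [CLS] token
cls_embedding = outputs.last_hidden_state[:, 0]
print(cls_embedding.shape)  # torch.Size([1, 768])
```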

# Intended Uses:

Image Classification: ViT can be directly used for image classification tasks. By adding a linear layer on top of the [CLS] token, the model can classify images into one of the 1,000 ImageNet classes.
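
If you want to reuse the backbone with your own label set instead of the 1,000 ImageNet classes, `transformers` can replace the pretrained head with a freshly initialized linear layer. The sketch below is illustrative only; the three labels are hypothetical placeholders.

```python
from transformers import AutoModelForImageClassification

# Swap the pretrained head for a new 3-way linear layer on top of the [CLS] token.
# The label names here are hypothetical placeholders for your own dataset.
model = AutoModelForImageClassification.from_pretrained(
    "Sreekanth3096/vit-coco-image-classification",
    num_labels=3,
    id2label={0: "cat", 1: "dog", 2: "bird"},
    label2id={"cat": 0, "dog": 1, "bird": 2},
    ignore_mismatched_sizes=True,  # the checkpoint's head has a different output size
)
```
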
# Limitations:

Resolution Dependency: The model was fine-tuned on ImageNet at 224x224 resolution, but better results are typically reported at higher resolutions such as 384x384 (see the sketch below). Larger model variants also tend to perform better, at the cost of more compute.
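
One way to run the checkpoint at a higher resolution is to resize inputs in the processor and let the model interpolate its position embeddings. This is a sketch: it assumes a `transformers` version whose ViT implementation supports the `interpolate_pos_encoding` argument.

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("Sreekanth3096/vit-coco-image-classification")
model = AutoModelForImageClassification.from_pretrained("Sreekanth3096/vit-coco-image-classification")

# Resize to 384x384 instead of the default 224x224
inputs = processor(images=image, size={"height": 384, "width": 384}, return_tensors="pt")

# interpolate_pos_encoding stretches the learned position embeddings
# to cover the longer patch sequence (24x24 = 576 patches + [CLS])
with torch.no_grad():
    outputs = model(**inputs, interpolate_pos_encoding=True)

print(model.config.id2label[outputs.logits.argmax(-1).item()])
```
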
# Training Details:

Preprocessing: Images are resized to 224x224 pixels and normalized across RGB channels.

Training: Pretraining was conducted on TPUv3 hardware with a batch size of 4096 and learning rate warmup. Gradient clipping was applied during training to enhance stability.
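
# Usage:

The snippet below downloads an image from a URL and classifies it with this checkpoint; it assumes the `transformers`, `torch`, `Pillow`, and `requests` packages are installed.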
```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import requests
import torch

# Load the processor and model once so repeated calls don't re-instantiate them
processor = AutoImageProcessor.from_pretrained("Sreekanth3096/vit-coco-image-classification")
model = AutoModelForImageClassification.from_pretrained("Sreekanth3096/vit-coco-image-classification")

def predict_image_from_url(url):
    # Fetch and decode the image
    image = Image.open(requests.get(url, stream=True).raw)

    # Preprocess the image and run inference without tracking gradients
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)

    # Map the highest-scoring logit to its human-readable class label
    predicted_class_idx = outputs.logits.argmax(-1).item()
    return model.config.id2label[predicted_class_idx]

# Example usage
if __name__ == "__main__":
    url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
    print(f"Predicted class: {predict_image_from_url(url)}")
```
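
Loading the processor and model at module level, as above, keeps repeated calls from re-downloading and re-instantiating them; for batch workloads you can also pass a list of images to the processor in a single call.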

For more code examples, see the [documentation](https://huggingface.co/transformers/model_doc/vit.html#).

# Training Data:

The ViT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes, and fine-tuned on [ImageNet](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes. 

# Evaluation Results:

Performance: Detailed evaluation results on various benchmarks are reported in the tables of the original paper. Fine-tuning at a higher resolution typically improves classification accuracy.