Sreekanth3096 committed
Commit 6317bd3 · verified · 1 Parent(s): e45a2a4

Update README.md

Files changed (1):
  1. README.md +62 -5
README.md CHANGED
@@ -1,8 +1,28 @@
-
- Model Overview:
  The Vision Transformer (ViT) is a transformer encoder model designed for image recognition tasks. It was pretrained on ImageNet-21k, a dataset of 14 million images spanning 21,843 classes, and fine-tuned on ImageNet 2012, which consists of 1 million images across 1,000 classes.

- How It Works:

  Input Representation: Images are split into fixed-size patches (16x16 pixels) and linearly embedded. A special [CLS] token is prepended to the sequence for use in classification.

@@ -10,7 +30,7 @@ Transformer Encoder: The model uses a transformer encoder architecture, similar

  Classification: After processing through the transformer layers, the output from the [CLS] token is used for image classification. This token's final hidden state represents the features of the entire image.

- Intended Uses:

  Image Classification: ViT can be used directly for image classification tasks. By adding a linear layer on top of the [CLS] token, the model can classify images into one of the 1,000 ImageNet classes.
  Limitations:
@@ -21,7 +41,44 @@ Training Details:
  Preprocessing: Images are resized to 224x224 pixels and normalized across RGB channels.

  Training: Pretraining was conducted on TPUv3 hardware with a batch size of 4096 and learning rate warmup. Gradient clipping was applied during training to enhance stability.

- Evaluation Results:

  Performance: Detailed evaluation results on various benchmarks can be found in the tables of the original paper. Fine-tuning the model at higher resolutions typically improves classification accuracy.
 
+ ---
+ license: apache-2.0
+ tags:
+ - vision
+ - image-classification
+ - vit
+ datasets:
+ - imagenet-1k
+ - imagenet-21k
+ widget:
+ - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
+   example_title: Tiger
+ - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
+   example_title: Teapot
+ - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
+   example_title: Palace
+ language:
+ - en
+ library_name: transformers
+ pipeline_tag: image-classification
+ ---
+ # Model Overview:
  The Vision Transformer (ViT) is a transformer encoder model designed for image recognition tasks. It was pretrained on ImageNet-21k, a dataset of 14 million images spanning 21,843 classes, and fine-tuned on ImageNet 2012, which consists of 1 million images across 1,000 classes.

+ # How It Works:

  Input Representation: Images are split into fixed-size patches (16x16 pixels) and linearly embedded. A special [CLS] token is prepended to the sequence for use in classification.

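+ To make the patch arithmetic concrete (a sketch, not part of the original card): a 224x224 image split into 16x16 patches yields (224/16)^2 = 196 patches, so the encoder processes 197 tokens once [CLS] is prepended. The snippet below checks this against the model's hidden states, reusing the checkpoint from the example further down.
+
+ ```python
+ from transformers import ViTImageProcessor, ViTModel
+ from PIL import Image
+ import requests
+ import torch
+
+ # Load a sample image and the bare ViT encoder (no classification head needed here)
+ url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
+ image = Image.open(requests.get(url, stream=True).raw)
+ processor = ViTImageProcessor.from_pretrained('google/vit-base-patch16-224')
+ model = ViTModel.from_pretrained('google/vit-base-patch16-224')
+
+ inputs = processor(images=image, return_tensors="pt")
+ with torch.no_grad():
+     outputs = model(**inputs)
+
+ # 196 patch tokens + 1 [CLS] token = 197 positions, hidden size 768
+ print(outputs.last_hidden_state.shape)  # torch.Size([1, 197, 768])
+ ```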
 

  Classification: After processing through the transformer layers, the output from the [CLS] token is used for image classification. This token's final hidden state represents the features of the entire image.

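+ The following sketch (illustrative, assuming the `google/vit-base-patch16-224-in21k` backbone checkpoint) shows the pattern described above: take the [CLS] token's final hidden state and feed it to a linear layer, which is the same wiring `ViTForImageClassification` uses internally.
+
+ ```python
+ import torch
+ from transformers import ViTModel
+
+ # Pretrained backbone plus a fresh, untrained linear head
+ backbone = ViTModel.from_pretrained('google/vit-base-patch16-224-in21k')
+ head = torch.nn.Linear(backbone.config.hidden_size, 1000)  # 1,000 ImageNet classes
+
+ pixel_values = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed image
+ with torch.no_grad():
+     hidden_states = backbone(pixel_values=pixel_values).last_hidden_state
+ cls_state = hidden_states[:, 0]  # final hidden state of the [CLS] token
+ logits = head(cls_state)
+ print(logits.shape)  # torch.Size([1, 1000])
+ ```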
+ # Intended Uses:

  Image Classification: ViT can be used directly for image classification tasks. By adding a linear layer on top of the [CLS] token, the model can classify images into one of the 1,000 ImageNet classes.
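+ For quick experiments, the high-level `pipeline` API wraps the same model and preprocessing in a single call (a minimal sketch using this card's checkpoint):
+
+ ```python
+ from transformers import pipeline
+
+ # The image-classification pipeline accepts URLs, file paths, or PIL images
+ classifier = pipeline("image-classification", model="google/vit-base-patch16-224")
+ print(classifier('http://images.cocodataset.org/val2017/000000039769.jpg'))
+ ```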
  Limitations:
 
  Preprocessing: Images are resized to 224x224 pixels and normalized across RGB channels.

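+ The exact resize and normalization settings ship with the checkpoint's image processor, so they can be inspected rather than hard-coded (a small sketch):
+
+ ```python
+ from transformers import ViTImageProcessor
+
+ # The processor carries the preprocessing recipe used at training time
+ processor = ViTImageProcessor.from_pretrained('google/vit-base-patch16-224')
+ print(processor.size)        # target resolution, e.g. {'height': 224, 'width': 224}
+ print(processor.image_mean)  # per-channel normalization mean
+ print(processor.image_std)   # per-channel normalization std
+ ```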
  Training: Pretraining was conducted on TPUv3 hardware with a batch size of 4096 and learning rate warmup. Gradient clipping was applied during training to enhance stability.
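+ In plain PyTorch terms, learning rate warmup and gradient clipping look like the sketch below. This is illustrative only; the hyperparameters are placeholders, not the original TPU recipe.
+
+ ```python
+ import torch
+
+ model = torch.nn.Linear(768, 1000)  # stand-in module for the ViT
+ optimizer = torch.optim.AdamW(model.parameters(), lr=3e-3)
+ # Linearly ramp the LR from 1% to 100% over the first 10,000 steps
+ scheduler = torch.optim.lr_scheduler.LinearLR(optimizer, start_factor=0.01, total_iters=10_000)
+
+ def training_step(inputs, targets):
+     optimizer.zero_grad()
+     loss = torch.nn.functional.cross_entropy(model(inputs), targets)
+     loss.backward()
+     # Gradient clipping, as mentioned above, caps the global gradient norm
+     torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
+     optimizer.step()
+     scheduler.step()
+     return loss.item()
+ ```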
+ A complete end-to-end inference example with this checkpoint:
+
+ ```python
+ from transformers import ViTImageProcessor, ViTForImageClassification
+ from PIL import Image
+ import requests
+ import torch
+
+ def predict_image_from_url(url):
+     # Load image from URL
+     image = Image.open(requests.get(url, stream=True).raw)
+
+     # Initialize processor and model
+     processor = ViTImageProcessor.from_pretrained('google/vit-base-patch16-224')
+     model = ViTForImageClassification.from_pretrained('google/vit-base-patch16-224')
+
+     # Preprocess the image and run inference without tracking gradients
+     inputs = processor(images=image, return_tensors="pt")
+     with torch.no_grad():
+         outputs = model(**inputs)
+
+     # Map the highest-scoring logit to its ImageNet class label
+     logits = outputs.logits
+     predicted_class_idx = logits.argmax(-1).item()
+     predicted_class = model.config.id2label[predicted_class_idx]
+
+     return predicted_class
+
+ # Example usage
+ if __name__ == "__main__":
+     url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
+     predicted_class = predict_image_from_url(url)
+     print(f"Predicted class: {predicted_class}")
+ ```
+
+ For more code examples, see the [documentation](https://huggingface.co/transformers/model_doc/vit.html).
+
+ ## Training data
+
+ The ViT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes, and fine-tuned on [ImageNet](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes.

+ # Evaluation Results:

  Performance: Detailed evaluation results on various benchmarks can be found in the tables of the original paper. Fine-tuning the model at higher resolutions typically improves classification accuracy.
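+ The resolution point above can be tried directly: recent `transformers` versions let ViT run on larger inputs by interpolating its position embeddings (an assumption to verify against your installed version; the flag below is `interpolate_pos_encoding`). A minimal sketch:
+
+ ```python
+ import torch
+ from transformers import ViTForImageClassification
+
+ model = ViTForImageClassification.from_pretrained('google/vit-base-patch16-224')
+ pixel_values = torch.randn(1, 3, 384, 384)  # stand-in for a 384x384 preprocessed image
+ with torch.no_grad():
+     # Interpolate the 224-resolution position embeddings to fit the 384x384 input
+     logits = model(pixel_values=pixel_values, interpolate_pos_encoding=True).logits
+ print(logits.shape)  # torch.Size([1, 1000])
+ ```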