michaelgathara committed · fc3f269 · verified · 1 Parent(s): 776fe62

Upload README.md with huggingface_hub

Files changed (1): README.md ADDED (+44 −0)
---
tags:
- image-classification
- pytorch
- huggingface
- vit
- emotion-recognition
datasets:
- zenodo
base_model: trpakov/vit-face-expression
library_name: transformers
---

# ViT Face Expression (Fine-tuned on Zenodo Dataset)

This model is a fine-tuned version of [trpakov/vit-face-expression](https://huggingface.co/trpakov/vit-face-expression) on the [IFEED dataset (Zenodo Record 7963451)](https://zenodo.org/record/7963451).

## Model Description

- **Architecture**: Vision Transformer (ViT)
- **Task**: Facial Emotion Recognition
- **Emotions**: Angry, Disgust, Fear, Happy, Neutral, Sad, Surprise

## Usage

```python
from transformers import ViTImageProcessor, ViTForImageClassification
from PIL import Image
import requests

# Replace this URL with an image containing a face; the COCO sample below
# is only a placeholder for demonstrating the loading pattern.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

repo_name = "michaelgathara/vit-face-zenodo"

processor = ViTImageProcessor.from_pretrained(repo_name)
model = ViTForImageClassification.from_pretrained(repo_name)

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits

# The model predicts one of the 7 emotions
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
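
If you want a confidence score for each emotion rather than only the top label, the raw logits can be passed through a softmax. A minimal sketch, using a dummy logits tensor in place of `outputs.logits` (the dummy values below are illustrative, not model output):

```python
import torch

# Dummy logits standing in for outputs.logits: batch of 1, 7 emotion classes
logits = torch.tensor([[0.1, 2.5, -1.0, 0.3, 0.0, 1.2, -0.4]])

# Softmax turns raw scores into probabilities that sum to 1
probs = torch.softmax(logits, dim=-1)

# Index and probability of the most likely class; with a real model,
# map the index to a label via model.config.id2label[top_idx.item()]
top_prob, top_idx = probs.max(dim=-1)
print(f"Top class index: {top_idx.item()}, confidence: {top_prob.item():.3f}")
```

The same `probs` tensor can be sorted to rank all seven emotions by confidence instead of reporting only the argmax.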