Dc-4nderson committed on
Commit 1e38540 · verified · Parent(s): a2b8ca5

Update README.md

Files changed (1): README.md (+57 −1)
README.md CHANGED
@@ -13,4 +13,60 @@ tags:
- school
metrics:
- accuracy
---

# 🤖 ViT Emotion Classifier

This is a lightweight [Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit) model fine-tuned to classify **emotions** from facial images, using a custom dataset of school-aged individuals. It supports 8 emotion categories and is designed to work well with small datasets and limited compute.
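
For a quick first test, the generic `transformers` image-classification pipeline should also work with this checkpoint, since it is a standard `ViTForImageClassification` model (a minimal sketch; `your_image.jpg` is a placeholder path):

```python
from transformers import pipeline

# Build an image-classification pipeline around this checkpoint;
# it applies the saved image processor and label mapping for you.
classifier = pipeline("image-classification", model="Dc-4nderson/vit-emotion-classifier")

# Returns a list of {"label": ..., "score": ...} dicts, highest score first
print(classifier("your_image.jpg"))
```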
---

## 🧠 Supported Emotions

The model predicts one of the following emotional states:

| Label ID | Emotion         |
|----------|-----------------|
| 0        | anxious-fearful |
| 1        | bored           |
| 2        | confused        |
| 3        | discouraged     |
| 4        | frustrated      |
| 5        | neutral         |
| 6        | positive        |
| 7        | suprised        |

---

## 📦 Model Details

- **Model Type**: `ViTForImageClassification`
- **Backbone**: `vit-small-patch16-224`
- **Dataset**: [`Dc-4nderson/feelings_classfication_dataset`](https://huggingface.co/datasets/Dc-4nderson/feelings_classfication_dataset)
- **Framework**: PyTorch
- **Labels**: 8 emotions (defined in `config.json`; see the sketch below)
- **Trained on**: Google Colab, with fewer than 600 images
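
To confirm that the emotion table above matches what ships in `config.json`, you can load just the configuration and print its label mapping (a small sketch; it downloads only the config file, not the weights):

```python
from transformers import AutoConfig

# Fetch only config.json from the Hub
config = AutoConfig.from_pretrained("Dc-4nderson/vit-emotion-classifier")

# id2label maps integer class ids (0-7) to emotion names
for idx, name in sorted(config.id2label.items()):
    print(idx, name)
```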

---

## 🧪 Usage

```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import torch

# Load the model and its image processor from the Hub
processor = AutoImageProcessor.from_pretrained("Dc-4nderson/vit-emotion-classifier")
model = AutoModelForImageClassification.from_pretrained("Dc-4nderson/vit-emotion-classifier")

# Load an image and preprocess it into model-ready tensors
image = Image.open("your_image.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

# Run inference without tracking gradients
with torch.no_grad():
    outputs = model(**inputs)

# Map the highest-scoring logit to its emotion label
pred = torch.argmax(outputs.logits, dim=1).item()
label = model.config.id2label[pred]  # keys are ints after from_pretrained, not strings

print("🧠 Predicted Emotion:", label)
```
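
`argmax` discards the model's confidence. To see the full probability distribution over all 8 emotions, you can softmax the logits from the snippet above (a short sketch continuing from the variables defined there):

```python
# Softmax turns the raw logits into probabilities over the 8 classes
probs = torch.softmax(outputs.logits, dim=1)[0]

# Print each emotion with its confidence, highest first
for idx in torch.argsort(probs, descending=True).tolist():
    print(f"{model.config.id2label[idx]}: {probs[idx].item():.3f}")
```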