bobs24 committed · verified
Commit 6ad7197 · 1 Parent(s): 5e2e036

Update README.md

Files changed (1): README.md (+78 −11)
README.md CHANGED
@@ -1,11 +1,78 @@
- ---
- language:
- - en
- metrics:
- - accuracy
- - precision
- - recall
- base_model:
- - facebook/deit-base-patch16-224
- pipeline_tag: image-classification
- ---
# DeiT-Classification-Apparel 🏷️👕
_A Deep Learning Model for Apparel Image Classification using DeiT_

## 📝 Model Overview
**DeiT-Classification-Apparel** is a **Data-efficient Image Transformer (DeiT)** fine-tuned to classify different types of apparel. The DeiT architecture delivers strong image-recognition accuracy with comparatively modest computational resources.

- **Architecture**: Vision Transformer (DeiT)
- **Use Case**: Apparel classification
- **Framework**: PyTorch
- **Model Size**: 343 MB
- **Files**:
  - `DeiT_Model_Parameter.pth` – trained model weights
  - `label_encoder.pkl` – label encoder for class mapping

## 📂 Files and Usage

### 1️⃣ Load the Model
```python
import pickle

import torch
from PIL import Image
from torchvision import transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Instantiate the DeiT architecture first, then load the trained weights
# (a state dict cannot be loaded without a model instance).
model = torch.hub.load("facebookresearch/deit:main", "deit_base_patch16_224", pretrained=False)
# If the checkpoint's classifier head differs from the 1000-class default,
# replace the head to match the number of apparel classes before loading.
model.load_state_dict(torch.load("DeiT_Model_Parameter.pth", map_location=device))
model.to(device)
model.eval()

# Load Label Encoder
with open("label_encoder.pkl", "rb") as f:
    label_encoder = pickle.load(f)
```
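The card does not say how `label_encoder.pkl` was produced; the inference code below suggests it unpickles to a scikit-learn `LabelEncoder` (an assumption). A minimal illustration of that mapping, with hypothetical class names:

```python
from sklearn.preprocessing import LabelEncoder

# Hypothetical stand-in for label_encoder.pkl: a LabelEncoder maps class
# names to the integer indices the classifier head predicts, and back.
encoder = LabelEncoder()
encoder.fit(["dress", "jacket", "pants", "shirt"])  # hypothetical apparel classes

print(encoder.transform(["shirt"]))    # name -> index (classes are sorted alphabetically)
print(encoder.inverse_transform([0]))  # index -> name
```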

### 2️⃣ Perform Inference
```python
def predict(image_path):
    # Load and preprocess image
    image = Image.open(image_path).convert("RGB")
    transform = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        # Add the same normalization used during training here, if any.
    ])
    input_tensor = transform(image).unsqueeze(0).to(device)

    # Make prediction
    with torch.no_grad():
        output = model(input_tensor)
        predicted_label = output.argmax(1).item()

    return label_encoder.inverse_transform([predicted_label])[0]

# Example Usage
image_path = "sample.jpg"
prediction = predict(image_path)
print(f"Predicted Apparel: {prediction}")
```
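`predict` returns only the single argmax class. When a confidence score is useful (e.g. to flag low-certainty items for review), a softmax over the logits gives ranked probabilities. A sketch, where the function name and the dummy classes/logits are illustrative and not part of the model card:

```python
import torch
import torch.nn.functional as F

def top_k_predictions(logits, class_names, k=3):
    """Return the k most likely classes with their softmax probabilities."""
    probs = F.softmax(logits, dim=1)
    top_probs, top_idx = probs.topk(k, dim=1)
    return [(class_names[i], p)
            for i, p in zip(top_idx[0].tolist(), top_probs[0].tolist())]

# Demo with dummy logits over four hypothetical apparel classes
classes = ["shirt", "pants", "dress", "jacket"]
logits = torch.tensor([[2.0, 0.5, 1.0, -1.0]])
print(top_k_predictions(logits, classes, k=2))
```

In practice `logits` would be the `output` tensor from the inference snippet and `class_names` the encoder's `label_encoder.classes_`.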

## 📌 Applications
- ✅ Fashion e-commerce product categorization
- ✅ Retail inventory management
- ✅ Virtual try-on solutions
- ✅ Automated fashion recommendation

## 🛠️ Training Details
- **Dataset**: Custom apparel dataset
- **Optimizer**: Adam
- **Loss Function**: CrossEntropyLoss
- **Hardware Used**: NVIDIA T4 GPU

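The card names Adam and CrossEntropyLoss but does not publish the training script. The skeleton below sketches how those two pieces typically fit together; a tiny linear layer and random tensors stand in for the DeiT backbone and the custom apparel dataset:

```python
import torch
from torch import nn

torch.manual_seed(0)
model = nn.Linear(16, 4)   # stand-in for the DeiT backbone + classifier head
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

inputs = torch.randn(8, 16)          # dummy batch of image features
labels = torch.randint(0, 4, (8,))   # dummy apparel class indices

losses = []
for step in range(50):
    optimizer.zero_grad()
    loss = criterion(model(inputs), labels)  # cross-entropy on raw logits
    loss.backward()
    optimizer.step()
    losses.append(loss.item())

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```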
## 📢 Citation
If you use this model, please cite:
```bibtex
@misc{bobs24_deit_classification_2024,
  author       = {bobs24},
  title        = {DeiT-Classification-Apparel},
  year         = {2024},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/bobs24/DeiT-Classification-Apparel}}
}
```