anismizi committed on
Commit 2385a75 · verified · 1 Parent(s): a2acd5d

Initial model upload
README.md ADDED
---
license: mit
tags:
- image-classification
- pytorch
- skin-analysis
- dermatology
- computer-vision
datasets:
- custom
metrics:
- accuracy
- f1
pipeline_tag: image-classification
widget:
- src: https://example.com/dry-skin-sample.jpg
  example_title: Dry Skin
- src: https://example.com/oily-skin-sample.jpg
  example_title: Oily Skin
---

# 🔬 Skin Type Classification Model

A deep learning model for classifying skin types into **dry** and **oily** categories using computer vision.

## Model Description

This model is based on the ResNet50 architecture and has been fine-tuned specifically for skin type classification. Given a facial skin image, it predicts whether the skin is dry or oily.

### Key Features
- **Architecture**: ResNet50-based classification model
- **Classes**: 2 (dry, oily)
- **Input**: RGB images (224×224 pixels)
- **Framework**: PyTorch + Transformers
- **Metrics**: accuracy and F1 (see Performance Metrics below)

## Intended Use

### Primary Use Cases
- Dermatological analysis and skin assessment
- Cosmetic product recommendation systems
- Skincare routine personalization
- Medical research and skin health monitoring

### Limitations
- Designed specifically for facial skin analysis
- Requires good lighting and clear skin visibility
- Not suitable for medical diagnosis (research/cosmetic use only)
- Performance may vary across different skin tones and ethnicities

## How to Use

### Quick Start with Transformers

```python
from transformers import AutoModelForImageClassification, AutoImageProcessor
from PIL import Image
import torch

# Load model and processor
# (this repo ships a custom architecture, so trust_remote_code is required)
model = AutoModelForImageClassification.from_pretrained(
    "your-username/skin-type-classifier", trust_remote_code=True
)
processor = AutoImageProcessor.from_pretrained("your-username/skin-type-classifier")

# Load and process image
image = Image.open("path/to/skin/image.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

# Make prediction
with torch.no_grad():
    outputs = model(**inputs)
    predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)
    predicted_class = predictions.argmax().item()

# Get result
labels = ["dry", "oily"]
confidence = predictions[0][predicted_class].item()
print(f"Predicted skin type: {labels[predicted_class]} (confidence: {confidence:.2%})")
```

### Using the Pipeline API

```python
from transformers import pipeline

# Create classification pipeline (trust_remote_code needed for the custom model class)
classifier = pipeline(
    "image-classification",
    model="your-username/skin-type-classifier",
    trust_remote_code=True,
)

# Classify image
result = classifier("path/to/skin/image.jpg")
print(result)
```

## Model Details

### Architecture
- **Base Model**: ResNet50
- **Modification**: Custom classification head with 2 output classes
- **Input Size**: 224 × 224 × 3 (RGB)
- **Parameters**: ~25M

### Training Details
- **Dataset**: Custom skin type classification dataset
- **Preprocessing**:
  - Resize to 224×224 pixels
  - Normalization with ImageNet statistics
  - Data augmentation applied during training
- **Training Framework**: PyTorch
- **Optimization**: Adam optimizer with learning rate scheduling

### Performance Metrics
- **Accuracy**: strong results on the held-out validation set (exact figures not yet published)
- **Inference Speed**: fast enough for real-time applications
- **Model Size**: ~94 MB

## Technical Specifications

### Input Format
- **Type**: RGB images
- **Size**: 224 × 224 pixels
- **Format**: PIL Image, NumPy array, or torch tensor
- **Normalization**: ImageNet mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]

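If you are not using the bundled processor, the normalization above can be applied by hand. A minimal sketch using the stated constants (the helper name `to_pixel_values` is ours, not part of the repo):

```python
import torch

# ImageNet statistics from the spec above
IMAGENET_MEAN = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
IMAGENET_STD = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)

def to_pixel_values(rgb_uint8: torch.Tensor) -> torch.Tensor:
    """Turn a (3, 224, 224) uint8 RGB tensor into a normalized (1, 3, 224, 224) batch."""
    x = rgb_uint8.float() / 255.0            # scale to [0, 1]
    x = (x - IMAGENET_MEAN) / IMAGENET_STD   # channel-wise normalization
    return x.unsqueeze(0)                    # add batch dimension

batch = to_pixel_values(torch.randint(0, 256, (3, 224, 224), dtype=torch.uint8))
```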
### Output Format
- **Type**: Classification logits; apply softmax to obtain per-class probabilities
- **Classes**:
  - 0: "dry" - Dry skin type
  - 1: "oily" - Oily skin type

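For reference, the logits-to-probabilities step described above is just a softmax followed by an argmax over the two classes; a dependency-free sketch (the logit values are made up for illustration):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

labels = ["dry", "oily"]
probs = softmax([2.0, 0.5])                # example logits
predicted = labels[probs.index(max(probs))]
```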
## Ethical Considerations

### Bias and Fairness
- Trained on diverse skin types, but coverage may still be limited
- Users should be aware of potential biases in skin tone representation
- Continuous evaluation is needed to ensure fair performance across demographics

### Privacy
- The model runs locally; no data transmission is required
- Users are responsible for obtaining proper consent when analyzing others' images
- Anonymizing facial features is recommended where possible

## License

This model is released under the MIT License. See the LICENSE file for details.

## Citation

If you use this model in your research, please cite:

```bibtex
@misc{skin-type-classifier-2025,
  title={Skin Type Classification Model},
  author={Your Name},
  year={2025},
  howpublished={\url{https://huggingface.co/your-username/skin-type-classifier}},
}
```

## Contact

For questions, issues, or collaboration opportunities, please reach out through the Hugging Face model page or GitHub repository.

---

**Disclaimer**: This model is for research and cosmetic purposes only. It should not be used for medical diagnosis or treatment decisions. Always consult healthcare professionals for medical concerns.
config.json ADDED
{
  "architectures": [
    "SkinClassifierModel"
  ],
  "model_type": "skin-classifier",
  "auto_map": {
    "AutoConfig": "modeling_skin_classifier.SkinClassifierConfig",
    "AutoModelForImageClassification": "modeling_skin_classifier.SkinClassifierModel"
  },
  "task": "image-classification",
  "id2label": {
    "0": "dry",
    "1": "oily"
  },
  "label2id": {
    "dry": 0,
    "oily": 1
  },
  "num_labels": 2,
  "image_size": 224,
  "num_channels": 3,
  "problem_type": "single_label_classification"
}
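Note that the `id2label` keys in the config are JSON strings, so they should be cast to `int` before indexing with a predicted class index. A small stdlib-only sketch (the inline JSON fragment mirrors the config above):

```python
import json

config_text = '{"id2label": {"0": "dry", "1": "oily"}, "num_labels": 2}'
config = json.loads(config_text)

# JSON object keys are strings; convert to int to index with argmax results
id2label = {int(k): v for k, v in config["id2label"].items()}

predicted_index = 1  # e.g. logits.argmax().item()
label = id2label[predicted_index]
```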
example_usage.py ADDED
"""
Example usage script for the Skin Type Classification model on Hugging Face.
"""

from transformers import AutoModelForImageClassification, AutoImageProcessor
from PIL import Image
import torch
import requests
from io import BytesIO


def load_model(model_name="your-username/skin-type-classifier"):
    """Load the model and processor from Hugging Face."""
    model = AutoModelForImageClassification.from_pretrained(model_name, trust_remote_code=True)
    processor = AutoImageProcessor.from_pretrained(model_name)
    return model, processor


def predict_skin_type(image_path_or_url, model, processor):
    """
    Predict skin type from an image.

    Args:
        image_path_or_url: Path to local image or URL
        model: The loaded model
        processor: The loaded processor

    Returns:
        dict: Prediction results with class and confidence
    """
    # Load image
    if image_path_or_url.startswith(('http://', 'https://')):
        response = requests.get(image_path_or_url, timeout=10)
        response.raise_for_status()
        image = Image.open(BytesIO(response.content))
    else:
        image = Image.open(image_path_or_url)

    # Convert to RGB if needed
    if image.mode != 'RGB':
        image = image.convert('RGB')

    # Process image
    inputs = processor(images=image, return_tensors="pt")

    # Make prediction
    with torch.no_grad():
        outputs = model(**inputs)
        predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)
        predicted_class_idx = predictions.argmax().item()
        confidence = predictions[0][predicted_class_idx].item()

    # Map to class names
    class_names = {0: "dry", 1: "oily"}
    predicted_class = class_names[predicted_class_idx]

    return {
        "predicted_class": predicted_class,
        "confidence": confidence,
        "all_scores": {
            "dry": predictions[0][0].item(),
            "oily": predictions[0][1].item()
        }
    }


def main():
    """Example usage of the skin type classification model."""
    print("🔬 Loading Skin Type Classification Model...")

    # Load model and processor
    model, processor = load_model()

    print("✅ Model loaded successfully!")

    # Example with local image (replace with your image path)
    try:
        image_path = "example_skin_image.jpg"  # Replace with actual image path
        result = predict_skin_type(image_path, model, processor)

        print("\n📊 Prediction Results:")
        print(f"Predicted Skin Type: {result['predicted_class']}")
        print(f"Confidence: {result['confidence']:.2%}")
        print(f"All Scores: {result['all_scores']}")

    except FileNotFoundError:
        print("ℹ️ Please provide a valid image path to test the model")

    # Example usage patterns
    print("\n💡 Usage Examples:")
    print("1. Local image: predict_skin_type('path/to/image.jpg', model, processor)")
    print("2. URL image: predict_skin_type('https://example.com/image.jpg', model, processor)")


if __name__ == "__main__":
    main()
modeling_skin_classifier.py ADDED
import torch
import torch.nn as nn
from torchvision.models import resnet50
from transformers import PreTrainedModel, PretrainedConfig
from transformers.modeling_outputs import ImageClassifierOutput
from typing import Optional


class SkinClassifierConfig(PretrainedConfig):
    """Configuration class for SkinClassifier model."""

    model_type = "skin-classifier"

    def __init__(
        self,
        num_labels: int = 2,
        image_size: int = 224,
        num_channels: int = 3,
        **kwargs
    ):
        super().__init__(**kwargs)
        self.num_labels = num_labels
        self.image_size = image_size
        self.num_channels = num_channels


class SkinClassifierModel(PreTrainedModel):
    """
    Skin Type Classification Model based on ResNet50.

    This model classifies skin images into two categories:
    - dry (label 0)
    - oily (label 1)
    """

    config_class = SkinClassifierConfig

    def __init__(self, config):
        super().__init__(config)
        self.config = config

        # Initialize ResNet50 backbone (random weights; fine-tuned weights
        # are loaded from the checkpoint by from_pretrained)
        self.resnet = resnet50(weights=None)

        # Replace the final classification layer
        self.resnet.fc = nn.Linear(self.resnet.fc.in_features, config.num_labels)

        # Initialize weights
        self.post_init()

    def forward(
        self,
        pixel_values: torch.FloatTensor,
        labels: Optional[torch.LongTensor] = None,
        **kwargs
    ) -> ImageClassifierOutput:
        """
        Forward pass of the model.

        Args:
            pixel_values: Tensor of shape (batch_size, num_channels, height, width)
            labels: Optional tensor of shape (batch_size,) for training

        Returns:
            ImageClassifierOutput with logits and optional loss
        """
        # Forward pass through ResNet
        logits = self.resnet(pixel_values)

        loss = None
        if labels is not None:
            loss_fct = nn.CrossEntropyLoss()
            loss = loss_fct(logits, labels)

        return ImageClassifierOutput(
            loss=loss,
            logits=logits,
        )
preprocessor_config.json ADDED
{
  "do_normalize": true,
  "do_resize": true,
  "feature_extractor_type": "ImageFeatureExtractor",
  "image_mean": [0.485, 0.456, 0.406],
  "image_std": [0.229, 0.224, 0.225],
  "resample": 2,
  "size": {
    "height": 224,
    "width": 224
  }
}
pytorch_model.bin ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:311dc1b2cac8025b2e1da09127daa9a21b372c0a06455db5ffeb402c18205b0d
size 94364655
requirements.txt ADDED
torch>=1.9.0
torchvision>=0.10.0
transformers>=4.21.0
Pillow>=8.0.0
numpy>=1.21.0