alpingo23 committed on
Commit b555b7e · verified · 1 Parent(s): a89e608

Upload 4 files

Files changed (4):
  1. Dockerfile +6 -23
  2. README.md +90 -72
  3. app.py +118 -98
  4. requirements.txt +8 -7
Dockerfile CHANGED
@@ -1,28 +1,11 @@
- FROM python:3.11-slim
-
- WORKDIR /app
-
- # Set cache directories BEFORE installing dependencies
- ENV HF_HOME=/tmp/.cache
- ENV TRANSFORMERS_CACHE=/tmp/.cache
- ENV TORCH_HOME=/tmp/.cache
-
- # Create cache directory
- RUN mkdir -p /tmp/.cache && chmod 777 /tmp/.cache
-
- # Install dependencies
- COPY requirements.txt requirements.txt
- RUN pip install --no-cache-dir -r requirements.txt
-
- # Copy app
- COPY app.py app.py
-
- # Expose port
- EXPOSE 7860
-
- # Set environment
- ENV PORT=7860
- ENV PYTHONUNBUFFERED=1
-
- # Run app
- CMD ["python", "app.py"]
+ FROM python:3.9
+
+ WORKDIR /code
+
+ COPY ./requirements.txt /code/requirements.txt
+
+ RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
+
+ COPY ./app.py /code/
+
+ CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "7860"]
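The new Dockerfile's `CMD` is the only process the container runs. As a quick sanity check outside Docker, the same invocation can be assembled and launched from Python; this is a minimal sketch, and `uvicorn_cmd` is a hypothetical helper name, not part of this repo:

```python
# Sketch (assumption, not repo code): rebuild the container's CMD as an argv
# list, e.g. to run the same server locally via subprocess.run(uvicorn_cmd()).
def uvicorn_cmd(app_ref: str = "app:app", host: str = "0.0.0.0", port: int = 7860) -> list:
    """Return the exact argv that the Dockerfile's CMD executes."""
    return ["uvicorn", app_ref, "--host", host, "--port", str(port)]

print(" ".join(uvicorn_cmd()))
# uvicorn app:app --host 0.0.0.0 --port 7860
```

Port 7860 matters here: it is the port Hugging Face Spaces expects a Docker app to listen on.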
README.md CHANGED
@@ -1,105 +1,123 @@
- ---
- title: Dog Breed Classification API
- emoji: 🐕
- colorFrom: blue
- colorTo: green
- sdk: docker
- app_port: 7860
- pinned: false
- ---
-
- # 🐕 Dog Breed Classification API
-
- An API that predicts dog breeds with the ConvNextV2-large-DogBreed model.
-
- ## 🚀 Usage
-
- ### Endpoint
-
- ```
- POST /predict_pet
- ```
-
- ### Request
-
- **Content-Type:** `multipart/form-data`
-
- **Body:**
- - `image` (file): Dog photo (JPEG, PNG, WebP)
-
- ### Response
-
- ```json
- {
-   "breed": "Doberman_pinscher",
-   "confidence": 0.533,
-   "top_5": [
-     {"breed": "Doberman_pinscher", "confidence": 0.533},
-     {"breed": "Beauceron", "confidence": 0.065},
-     {"breed": "German_pinscher", "confidence": 0.041},
-     {"breed": "Black_and_tan_coonhound", "confidence": 0.023},
-     {"breed": "Greater_swiss_mountain_dog", "confidence": 0.011}
-   ],
-   "model": "ConvNextV2-large-DogBreed",
-   "accuracy": "91.39%"
- }
- ```
-
- ## 📝 Examples
-
- ### Python
-
- ```python
- import requests
-
- url = "https://YOUR-SPACE-URL.hf.space/predict_pet"
-
- with open("dog.jpg", "rb") as f:
-     files = {"image": f}
-     response = requests.post(url, files=files)
-
- result = response.json()
- print(f"Breed: {result['breed']}")
- print(f"Confidence: {result['confidence']:.2%}")
- ```
-
- ### cURL
-
- ```bash
- curl -X POST https://YOUR-SPACE-URL.hf.space/predict_pet \
-   -F "image=@dog.jpg"
- ```
-
- ### JavaScript (Fetch)
-
- ```javascript
- const formData = new FormData();
- formData.append('image', fileInput.files[0]);
-
- const response = await fetch('https://YOUR-SPACE-URL.hf.space/predict_pet', {
-   method: 'POST',
-   body: formData
- });
-
- const result = await response.json();
- console.log(result.breed, result.confidence);
- ```
-
- ## ℹ️ Model Info
-
- - **Model:** [Pavarissy/ConvNextV2-large-DogBreed](https://huggingface.co/Pavarissy/ConvNextV2-large-DogBreed)
- - **Accuracy:** 91.39% (validation set)
- - **Architecture:** ConvNextV2-large-22k-224
- - **Training:** 50 epochs, Stanford Dogs Dataset
- - **Classes:** 120 dog breeds
-
- ## 🔧 Performance
-
- - **First request:** 10-15 seconds (model loading)
- - **Subsequent requests:** 2-4 seconds
- - **Hardware:** CPU basic (HF Spaces free tier)
-
- ## 📄 License
-
- MIT
+ # Dog Breed Classification API
+
+ This Hugging Face Space is a FastAPI backend that performs dog breed classification using the **Pavarissy/ConvNextV2-large-DogBreed** model.
+
+ ## 🚀 Setup Steps
+
+ ### 1. Creating the Hugging Face Space
+
+ 1. Go to https://huggingface.co/spaces
+ 2. Click the "Create new Space" button
+ 3. Enter a Space name: `petbackend` (or any name you prefer)
+ 4. Choose **"Docker"** as the SDK, not "Gradio" (required for FastAPI)
+ 5. Create the Space
+
+ ### 2. Uploading the Files
+
+ Once the Space is created, upload the files in this folder to the Space:
+
+ - `app.py` - The main FastAPI application
+ - `requirements.txt` - Python dependencies
+ - `Dockerfile` - Docker configuration (below)
+
+ ### 3. Creating the Dockerfile
+
+ You need to add a `Dockerfile` to your Space:
+
+ ```dockerfile
+ FROM python:3.9
+
+ WORKDIR /code
+
+ COPY ./requirements.txt /code/requirements.txt
+
+ RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
+
+ COPY ./app.py /code/
+
+ CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "7860"]
+ ```
+
+ ### 4. Alternative: Using the Gradio SDK
+
+ If you would rather use the Gradio SDK instead of Docker:
+
+ 1. Choose "Gradio" as the SDK when creating the Space
+ 2. Upload only the `app.py` and `requirements.txt` files
+ 3. Hugging Face will run FastAPI automatically
+
+ ## 📡 API Endpoints
+
+ ### GET /
+ Root endpoint - shows the API status
+
+ ```bash
+ curl https://alpingo23-petbackend.hf.space/
+ ```
+
+ ### POST /predict_pet
+ Predicts the dog breed
+
+ ```bash
+ curl -X POST https://alpingo23-petbackend.hf.space/predict_pet \
+   -F "image=@dog_image.jpg"
+ ```
+
+ **Response Format:**
+ ```json
+ {
+   "predicted_label": "n02085620-Chihuahua",
+   "confidence": 0.95,
+   "detection": {
+     "box": {
+       "x": 50.0,
+       "y": 50.0,
+       "width": 400.0,
+       "height": 400.0
+     }
+   },
+   "imageDimensions": {
+     "width": 500,
+     "height": 500
+   }
+ }
+ ```
+
+ ### GET /health
+ Health check endpoint
+
+ ```bash
+ curl https://alpingo23-petbackend.hf.space/health
+ ```
+
+ ## 🔧 React Native Integration
+
+ Your App.js file already uses the correct endpoint:
+
+ ```javascript
+ const res = await axios.post('https://alpingo23-petbackend.hf.space/predict_pet', formData, {
+   headers: { 'Content-Type': 'multipart/form-data' },
+ });
+ ```
+
+ ## 🐛 Debugging
+
+ If the Space isn't working:
+
+ 1. **Check the logs**: Open the "Logs" tab on the Space page
+ 2. **Build status**: Check whether it is still in the "Building" state
+ 3. **Model loading**: The model is downloaded on first startup and can take 2-3 minutes
+ 4. **API test**: Test the `/health` endpoint
+
+ ## 📝 Notes
+
+ - The first startup downloads the model, so it can take 2-3 minutes
+ - CORS is open to requests from all origins (should be tightened in production)
+ - The detection box is simulated; no real object detection is performed
+ - The model can recognize 120+ dog breeds
+
+ ## 🔗 Links
+
+ - Model: https://huggingface.co/Pavarissy/ConvNextV2-large-DogBreed
+ - Space URL: https://alpingo23-petbackend.hf.space
+ - Docs: https://alpingo23-petbackend.hf.space/docs (automatic FastAPI docs)
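The curl and axios snippets in the README cover the request side; the response contract can be exercised in Python too. This is an illustrative sketch: `parse_prediction` is a hypothetical helper (not part of this repo), and the sample dict is the README's own example payload.

```python
# Illustrative helper (assumption, not repo code) for consuming /predict_pet
# responses from Python.
def parse_prediction(resp: dict) -> tuple:
    """Split the Stanford Dogs style label ("n02085620-Chihuahua") into a
    readable breed name and return it with the confidence score."""
    breed = resp["predicted_label"].split("-", 1)[-1]  # drop the WordNet ID prefix
    return breed, float(resp["confidence"])

# Sample response taken from the README above
sample = {
    "predicted_label": "n02085620-Chihuahua",
    "confidence": 0.95,
    "detection": {"box": {"x": 50.0, "y": 50.0, "width": 400.0, "height": 400.0}},
    "imageDimensions": {"width": 500, "height": 500},
}

breed, conf = parse_prediction(sample)
print(f"{breed}: {conf:.0%}")  # Chihuahua: 95%
```

A live request would mirror the curl example, e.g. `requests.post(url, files={"image": open("dog_image.jpg", "rb")}).json()`, then feed the result to the helper.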
app.py CHANGED
@@ -1,106 +1,126 @@
- """
- Clean Pet Backend for Hugging Face Spaces
- Using ConvNextV2-large-DogBreed model
- """
-
- from flask import Flask, request, jsonify
- from flask_cors import CORS
- from PIL import Image
- import io
- import os
-
- # Set cache directories FIRST
- os.environ['HF_HOME'] = '/tmp/.cache'
- os.environ['TRANSFORMERS_CACHE'] = '/tmp/.cache'
- os.environ['TORCH_HOME'] = '/tmp/.cache'
-
- app = Flask(__name__)
- CORS(app)
-
- # Global model variables
- model = None
- processor = None
-
- def load_model():
-     """Load model on first request"""
-     global model, processor
-
-     if model is not None:
-         return
-
-     print("🔄 Loading ConvNextV2-large-DogBreed model...")
-
-     from transformers import AutoImageProcessor, AutoModelForImageClassification
-     import torch
-
-     model_name = "Pavarissy/ConvNextV2-large-DogBreed"
-
-     processor = AutoImageProcessor.from_pretrained(model_name)
-     model = AutoModelForImageClassification.from_pretrained(model_name)
-
-     device = "cuda" if torch.cuda.is_available() else "cpu"
-     model = model.to(device)
-     model.eval()
-
-     print(f"✅ Model loaded on {device}")
-
- @app.route('/', methods=['GET'])
- def health():
-     return jsonify({
-         'status': 'healthy',
-         'model': 'ConvNextV2-large-DogBreed',
-         'accuracy': '91.39%'
-     })
-
- @app.route('/predict_pet', methods=['POST'])
- def predict_pet():
-     try:
-         # Load model if needed
-         load_model()
-
-         # Get image
-         if 'image' not in request.files:
-             return jsonify({'error': 'No image provided'}), 400
-
-         file = request.files['image']
-         image = Image.open(io.BytesIO(file.read())).convert('RGB')
-
-         # Process and predict
-         import torch
-         inputs = processor(image, return_tensors="pt")
-
-         device = next(model.parameters()).device
-         inputs = {k: v.to(device) for k, v in inputs.items()}
-
-         with torch.no_grad():
-             outputs = model(**inputs)
-             logits = outputs.logits
-
-         # Get top prediction
-         probs = torch.nn.functional.softmax(logits, dim=-1)[0]
-         top_prob, top_idx = torch.max(probs, dim=0)
-
-         predicted_breed = model.config.id2label[top_idx.item()]
-         confidence = float(top_prob.item())
-
-         print(f"✅ Predicted: {predicted_breed} ({confidence:.2%})")
-
-         # Return in format expected by frontend
-         return jsonify({
-             'predicted_label': predicted_breed,
-             'breed': predicted_breed,
-             'confidence': confidence,
-             'detection': {'box': {'x': 0, 'y': 0, 'width': 0, 'height': 0}},
-             'gender': 'Unknown'
-         })
-
-     except Exception as e:
-         print(f"❌ Error: {str(e)}")
-         import traceback
-         traceback.print_exc()
-         return jsonify({'error': str(e)}), 500
-
- if __name__ == '__main__':
-     port = int(os.environ.get('PORT', 7860))
-     print(f"🚀 Starting server on port {port}")
-     app.run(host='0.0.0.0', port=port, debug=False)
+ from fastapi import FastAPI, File, UploadFile, HTTPException
+ from fastapi.middleware.cors import CORSMiddleware
+ from PIL import Image
+ import io
+ import torch
+ from transformers import AutoImageProcessor, AutoModelForImageClassification
+ import numpy as np
+
+ app = FastAPI()
+
+ # CORS settings - allow requests from all origins
+ app.add_middleware(
+     CORSMiddleware,
+     allow_origins=["*"],
+     allow_credentials=True,
+     allow_methods=["*"],
+     allow_headers=["*"],
+ )
+
+ # Load the model and processor
+ MODEL_NAME = "Pavarissy/ConvNextV2-large-DogBreed"
+ print(f"🔄 Loading model: {MODEL_NAME}")
+
+ try:
+     processor = AutoImageProcessor.from_pretrained(MODEL_NAME)
+     model = AutoModelForImageClassification.from_pretrained(MODEL_NAME)
+     model.eval()
+     print("✅ Model loaded successfully!")
+ except Exception as e:
+     print(f"❌ Error loading model: {e}")
+     raise e
+
+ @app.get("/")
+ async def root():
+     """Root endpoint - shows the API status"""
+     return {
+         "message": "Dog Breed Classification API",
+         "model": MODEL_NAME,
+         "status": "ready",
+         "endpoints": {
+             "predict": "/predict_pet (POST)",
+             "health": "/health (GET)"
+         }
+     }
+
+ @app.post("/predict_pet")
+ async def predict_pet(image: UploadFile = File(...)):
+     """
+     Pet (dog breed) prediction endpoint
+
+     Expected response format:
+     {
+         "predicted_label": "n02085620-Chihuahua",
+         "confidence": 0.95,
+         "detection": {
+             "box": {"x": 50, "y": 50, "width": 400, "height": 400}
+         }
+     }
+     """
+     try:
+         # Read the image and convert it to RGB
+         image_bytes = await image.read()
+         img = Image.open(io.BytesIO(image_bytes)).convert('RGB')
+
+         # Original image dimensions
+         width, height = img.size
+         print(f"📸 Image received: {width}x{height}")
+
+         # Preprocessing for the model
+         inputs = processor(images=img, return_tensors="pt")
+
+         # Run inference
+         with torch.no_grad():
+             outputs = model(**inputs)
+             logits = outputs.logits
+
+         # Compute probabilities with softmax
+         probabilities = torch.nn.functional.softmax(logits, dim=-1)
+         confidence, predicted_idx = torch.max(probabilities, dim=-1)
+
+         # Get the predicted class
+         predicted_label = model.config.id2label[predicted_idx.item()]
+         confidence_score = confidence.item()
+
+         # Build a simple detection box (80% of the image, centered)
+         box_margin = 0.1
+         detection_box = {
+             "x": float(width * box_margin),
+             "y": float(height * box_margin),
+             "width": float(width * (1 - 2 * box_margin)),
+             "height": float(height * (1 - 2 * box_margin))
+         }
+
+         # Prepare the response (in the format the React Native app expects)
+         response = {
+             "predicted_label": predicted_label,
+             "confidence": float(confidence_score),
+             "detection": {
+                 "box": detection_box
+             },
+             "imageDimensions": {
+                 "width": width,
+                 "height": height
+             }
+         }
+
+         print(f"✅ Prediction: {predicted_label} (confidence: {confidence_score:.4f})")
+         return response
+
+     except Exception as e:
+         print(f"❌ Error during prediction: {str(e)}")
+         raise HTTPException(status_code=500, detail=f"Prediction error: {str(e)}")
+
+ @app.get("/health")
+ async def health_check():
+     """Health check endpoint"""
+     return {
+         "status": "healthy",
+         "model_loaded": model is not None,
+         "model_name": MODEL_NAME
+     }
+
+ # On a Hugging Face Space, uvicorn runs automatically
+ if __name__ == "__main__":
+     import uvicorn
+     uvicorn.run(app, host="0.0.0.0", port=7860)
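The detection box in the new app.py is simulated: a centered region with a 10% margin per side, i.e. 80% of each dimension. That arithmetic can be checked in isolation; this is a standalone sketch of the same logic, and `simulated_box` is an illustrative name, not a function defined in app.py.

```python
def simulated_box(width: int, height: int, box_margin: float = 0.1) -> dict:
    """Mirror app.py's placeholder box: box_margin on each side, so the box
    spans the central (1 - 2*box_margin) fraction of the image in both axes."""
    return {
        "x": float(width * box_margin),
        "y": float(height * box_margin),
        "width": float(width * (1 - 2 * box_margin)),
        "height": float(height * (1 - 2 * box_margin)),
    }

# A 500x500 upload yields the box shown in the README's sample response:
print(simulated_box(500, 500))
# {'x': 50.0, 'y': 50.0, 'width': 400.0, 'height': 400.0}
```

Swapping in a real detector (e.g. an object-detection model) would only require replacing this function's output with actual box coordinates; the response schema stays the same.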
requirements.txt CHANGED
@@ -1,7 +1,8 @@
- flask==3.0.0
- flask-cors==4.0.0
- transformers==4.35.0
- torch==2.1.0
- torchvision==0.16.0
- pillow==10.1.0
- accelerate==0.24.0
+ fastapi
+ uvicorn[standard]
+ python-multipart
+ pillow
+ torch
+ torchvision
+ transformers
+ numpy