---
license: cc-by-nc-2.0
---

<h1 style="font-size: 60px;">🍝 RistoNet</h1>

![Food!!!](https://freerangestock.com/sample/159570/assorted-gourmet-snacks-on-a-platter.jpg)

**RistoNet** is an **EfficientNet** model trained on the **Gourmet Photography Dataset** for **food image aesthetic assessment**. It helps **designers, restaurants, and e-commerce teams** evaluate their dish photos and select the **most appealing shots**. Perfect for making menus, websites, or social media shine! πŸ˜ŽπŸ•  

---

## πŸš€ Features
- Assess the aesthetic quality of food images  
- Helps pick the most appealing photos for menus, websites, or e-commerce  
- Lightweight and fast thanks to EfficientNet  
- Ideal for designers, chefs, and food entrepreneurs  

---

## πŸ“Š Model Performance

| Dataset | Accuracy |
|----------------------|----------|
| GPD (test split) | βœ… 91.17% |

---

## πŸ–ΌοΈ Quick Usage

```python
import torch
from PIL import Image
from torchvision import models, transforms
from huggingface_hub import hf_hub_download

# ------------------------------
# 1. Load model
# ------------------------------
MODEL_REPO = "Orkidee/RistoNet"
MODEL_FILE = "ristonet.pth"

model = models.efficientnet_b0(weights=None)  # no pretrained weights
num_features = model.classifier[1].in_features
model.classifier[1] = torch.nn.Linear(num_features, 2)  # 2 classes

# Download weights from Hub and load
model_path = hf_hub_download(repo_id=MODEL_REPO, filename=MODEL_FILE)
model.load_state_dict(torch.load(model_path, map_location="cpu"))
model.eval()

# ------------------------------
# 2. Define preprocessing
# ------------------------------
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225])
])

# ------------------------------
# 3. Run inference on an image
# ------------------------------
image = Image.open("my_food.jpg").convert("RGB")
input_tensor = transform(image).unsqueeze(0)  # add batch dim

with torch.no_grad():
    outputs = model(input_tensor)
    probs = torch.nn.functional.softmax(outputs, dim=1)
    predicted_class = probs.argmax(dim=1).item()

# Class labels: 0 = Negative, 1 = Positive
print("Predicted class:", predicted_class)
print("Probabilities:", probs.numpy())
```
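
Since the model outputs a per-image probability, picking the best photo from a set of candidates reduces to scoring each one and sorting. A minimal sketch of that ranking step (the `rank_photos` helper and the `score_fn` callable are illustrative names, not part of RistoNet's API):

```python
from typing import Callable, Iterable


def rank_photos(paths: Iterable[str],
                score_fn: Callable[[str], float]) -> list[tuple[str, float]]:
    """Score every candidate image and return (path, score) pairs,
    most appealing photo first."""
    scored = [(path, score_fn(path)) for path in paths]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

With RistoNet, `score_fn` would wrap the preprocessing and inference from the snippet above for a single path and return the Positive-class probability (`probs[0, 1].item()`); the first element of the result is then the shot to publish.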