---
license: mit
language:
- en
pipeline_tag: text-generation
tags:
- Recipe
- Beurk
- Les recettes loufoques
- assemblage
- fait moi une glace a la viande
---

# 🍓 Ice-Clem 🍓

![Ice](http://www.image-heberg.fr/files/17608910093371498398.jpg)

Welcome to the documentation for the AI model Ice-Clem.
This very simple little model was created and trained in five minutes,
in response to a flash of an idea that struck me while I was, I admit, rather tired 🤣.

The goal of this model is to generate wacky combinations of foods that have nothing to do with each other,
to make you imagine disgusting dishes and make you smile (or laugh, I hope).

The model was trained on a set of keywords (the ingredients) and,
starting from an initial ingredient,
generates a completely random sequence of further ingredients to build your wacky dish (not to be actually cooked).

The ingredient names are written in English.

## 🌸 Ingredient list 🌸

Here is the list of starting ingredients you can use:

- pizza
- sushi
- pasta
- soup
- curry
- steak
- salad
- burger
- tacos
- noodles
- rice
- bread
- cake
- cookies
- pie
- chocolate
- vanilla
- strawberry
- spicy
- sour

The model will generate the rest of the sequence randomly, but intelligently.
;) 🔥

# 🩵 Usage example 🩵

```python
from huggingface_hub import hf_hub_download
import torch
import json

# Define your Hugging Face repository name and the filenames
repo_name = "Clemylia/Ice-Clem" # Make sure this matches the repository name you used
model_filename = "pytorch_model.bin"
word_to_index_filename = "word_to_index.json"
index_to_word_filename = "index_to_word.json"

# Download the files from the Hugging Face Hub
try:
    model_path = hf_hub_download(repo_id=repo_name, filename=model_filename)
    word_to_index_path = hf_hub_download(repo_id=repo_name, filename=word_to_index_filename)
    index_to_word_path = hf_hub_download(repo_id=repo_name, filename=index_to_word_filename)

    print(f"Downloaded model to: {model_path}")
    print(f"Downloaded word_to_index to: {word_to_index_path}")
    print(f"Downloaded index_to_word to: {index_to_word_path}")

    # Load the word_to_index and index_to_word mappings
    with open(word_to_index_path, 'r') as f:
        word_to_index = json.load(f)

    with open(index_to_word_path, 'r') as f:
        index_to_word = json.load(f)
        # Convert keys back to integers if they were saved as strings
        index_to_word = {int(k): v for k, v in index_to_word.items()}


    # Define the model architecture (must match the architecture used for training)
    # You'll need the same hyperparameters as before
    vocab_size = len(word_to_index)
    embedding_dim = 100 # Make sure this matches your training parameter
    hidden_dim = 256 # Make sure this matches your training parameter
    output_dim = vocab_size

    class FoodCombinerModel(torch.nn.Module):
        def __init__(self, vocab_size, embedding_dim, hidden_dim, output_dim):
            super(FoodCombinerModel, self).__init__()
            self.embedding = torch.nn.Embedding(vocab_size, embedding_dim)
            self.lstm = torch.nn.LSTM(embedding_dim, hidden_dim, batch_first=True)
            self.fc = torch.nn.Linear(hidden_dim, output_dim)

        def forward(self, x):
            embedded = self.embedding(x)
            lstm_out, _ = self.lstm(embedded)
            output = self.fc(lstm_out[:, -1, :])
            return output

    # Instantiate the model
    loaded_model = FoodCombinerModel(vocab_size, embedding_dim, hidden_dim, output_dim)

    # Load the saved state dictionary
    # map_location="cpu" lets the weights load even on a machine without a GPU
    loaded_model.load_state_dict(torch.load(model_path, map_location="cpu"))

    # Set the model to evaluation mode
    loaded_model.eval()

    print("Model loaded successfully!")

    # Now you can use the loaded_model for generation
    # Make sure the generate_combination function is defined in a previous cell and accessible

    # Generate a combination using the loaded model
    starting_phrase_loaded = "sushi" # You can use any word from your vocabulary
    generated_combination_loaded = generate_combination(loaded_model, starting_phrase_loaded, word_to_index, index_to_word)
    print(f"\nGenerated combination using loaded model starting with '{starting_phrase_loaded}': {generated_combination_loaded}")

    starting_phrase_loaded_2 = "pizza"
    generated_combination_loaded_2 = generate_combination(loaded_model, starting_phrase_loaded_2, word_to_index, index_to_word)
    print(f"Generated combination using loaded model starting with '{starting_phrase_loaded_2}': {generated_combination_loaded_2}")

except Exception as e:
    print(f"An error occurred: {e}")
    print("Please ensure the repository name is correct and the files exist on Hugging Face Hub.")
```
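The snippet above calls a `generate_combination` function that it assumes is already defined (for example, in a previous notebook cell); its actual definition is not shown in this card. A minimal sketch of what such a function might look like is given below, assuming sampled next-token generation with the LSTM model defined above. The function name, parameters, and sampling strategy are assumptions, not the model's official generation code:

```python
import torch

def generate_combination(model, start_word, word_to_index, index_to_word, max_length=6):
    """Generate a sequence of ingredient words starting from `start_word`.

    Hypothetical sketch: the original training notebook may define
    `generate_combination` differently (e.g. different length or sampling).
    """
    model.eval()
    words = [start_word]
    # Encode the starting word as a (batch=1, seq_len=1) tensor of indices
    current = torch.tensor([[word_to_index[start_word]]], dtype=torch.long)
    with torch.no_grad():
        for _ in range(max_length - 1):
            logits = model(current)                       # shape: (1, vocab_size)
            probs = torch.softmax(logits, dim=-1)
            # Sample the next ingredient from the predicted distribution
            next_idx = torch.multinomial(probs, num_samples=1).item()
            words.append(index_to_word[next_idx])
            # Append the sampled index so the LSTM sees the growing context
            current = torch.cat(
                [current, torch.tensor([[next_idx]], dtype=torch.long)], dim=1
            )
    return " ".join(words)
```

Sampling with `torch.multinomial` (rather than always taking the argmax) is what keeps each generated dish random, matching the "random but intelligent" behavior described above.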