---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-recipe1m-ALL
  results: []
widget:
- text: "This is a great [MASK]."
---

# RecipeBERT

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the food-domain [Recipe1M+ dataset](http://pic2recipe.csail.mit.edu/).
Recipe1M+ contains over one million records of distinct food names together with their ingredients and recipes; more details are available on the [project website](http://pic2recipe.csail.mit.edu/).
We used the whole Recipe1M+ dataset, 1,029,720 records in total, holding out 10% as an evaluation set. Each record consists of a food name followed by its ingredients and recipe.

It achieves the following results on the evaluation set:
- Loss: 0.6230
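
Since the base model is BERT and the card's widget uses a `[MASK]` prompt, the model can also be queried through the `fill-mask` pipeline. A minimal sketch (the prompt below is our own illustration, not from the training data):

```python
from transformers import pipeline

# Query the fine-tuned masked-language-modeling head
fill_mask = pipeline('fill-mask', model='alexdseo/RecipeBERT')

# Print the top predictions for the masked token
for pred in fill_mask("Mix the flour with the [MASK] and knead the dough."):
    print(f"{pred['token_str']}: {pred['score']:.3f}")
```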

## Usage

You can use this model to produce embeddings (representations) of food-related text, which you can then feed into your downstream tasks.

```python
from transformers import pipeline

# Feature-extraction pipeline returns the per-token hidden states
embedding = pipeline('feature-extraction', model='alexdseo/RecipeBERT', framework='pt')

# Your food-related data
food_data = "Hawaiian Pizza"

# Mean-pool the token embeddings into a single fixed-size vector
food_rep = embedding(food_data, return_tensors='pt')[0].numpy().mean(axis=0)
```
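
As a quick follow-up (a sketch of ours, not part of the original card), the pooled vectors can be compared with cosine similarity to check that related foods land close together:

```python
import numpy as np
from transformers import pipeline

embedding = pipeline('feature-extraction', model='alexdseo/RecipeBERT', framework='pt')

def embed(text):
    # Mean-pool the token embeddings into one vector per input string
    return embedding(text, return_tensors='pt')[0].numpy().mean(axis=0)

a = embed("Hawaiian Pizza")
b = embed("Pepperoni Pizza")
print(f"Cosine similarity: {np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)):.3f}")
```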

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
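
For reference, these settings map onto a standard `Trainer` masked-language-modeling setup along the following lines. This is a hedged sketch, not the exact training script; the dataset variables are placeholders:

```python
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

args = TrainingArguments(
    output_dir="bert-base-uncased-finetuned-recipe1m-ALL",
    learning_rate=1e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",  # Adam betas/epsilon above are the defaults
    num_train_epochs=3.0,
    fp16=True,                   # Native AMP mixed precision
)

train_ds = eval_ds = None  # placeholders: the tokenized Recipe1M+ train/eval splits
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer),  # 15% masking by default
)
# trainer.train()
```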

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.7914        | 1.0   | 13286 | 0.7377          |
| 0.6945        | 2.0   | 26572 | 0.6569          |
| 0.6574        | 3.0   | 39858 | 0.6216          |


### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.0+cu121
- Datasets 2.11.0
- Tokenizers 0.14.1