Add model card
MODEL_CARD.md
---
language: en
license: apache-2.0
tags:
- bert
- fine-tuned
- recipe-bot
- demo
datasets:
- local: data/train.csv
metrics:
- name: train_loss
  type: scalar
  value: 0.622 # example from 1-epoch run
model-index:
- name: recipe-bert
  results:
  - task: text-classification
    dataset: local/data/train.csv
    metrics:
    - name: train_loss
      type: scalar
      value: 0.622
---

# recipe-bert

## Short description

A small demo fine-tuned BERT model intended for learning and prototyping. Trained on a tiny synthetic dataset; not suitable for production.

## Training details

- Base model: bert-base-uncased
- Task: binary text classification
- Dataset: local CSV `data/train.csv` (columns: `text,label`)
- Hyperparameters (demo):
  - epochs: 1
  - batch size: 8
  - optimizer: AdamW (default)
  - seed: 42
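The training data is just a two-column CSV. As a minimal sketch of what the expected file looks like and how it could be parsed (the sample rows and the `load_examples` helper are illustrative, not part of the repo):

```python
import csv
import io

# Hypothetical two-row sample in the card's stated schema (columns: text,label);
# the real data would live in data/train.csv.
sample_csv = """text,label
The pizza was great,1
The soup was cold,0
"""

def load_examples(fp):
    """Read (text, int label) pairs from a text,label CSV file object."""
    reader = csv.DictReader(fp)
    return [(row["text"], int(row["label"])) for row in reader]

examples = load_examples(io.StringIO(sample_csv))
print(examples)  # [('The pizza was great', 1), ('The soup was cold', 0)]
```

For a real run, replace the `io.StringIO` wrapper with `open("data/train.csv", newline="")`.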
## How to use

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

model_id = "x2-world/recipe-bert"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
clf = pipeline('text-classification', model=model, tokenizer=tokenizer)
print(clf('The pizza was great'))
```
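The pipeline call returns a list of dicts with `label` and `score` keys. Assuming the checkpoint keeps the default `LABEL_0`/`LABEL_1` names (the positive/negative mapping below is an assumption, not something this card specifies), a small helper can make the output readable:

```python
# Assumed mapping for this demo model: LABEL_1 = positive, LABEL_0 = negative.
ID2NAME = {"LABEL_0": "negative", "LABEL_1": "positive"}

def pretty(preds):
    """Map raw pipeline output dicts to (name, rounded score) tuples."""
    return [(ID2NAME.get(p["label"], p["label"]), round(p["score"], 3)) for p in preds]

# Hand-written prediction dict whose shape matches the pipeline's output:
print(pretty([{"label": "LABEL_1", "score": 0.87231}]))  # [('positive', 0.872)]
```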

## Limitations and license

- Trained on a toy dataset. Expect poor performance on real data.
- License: Apache-2.0