kingking111009 committed on
Commit 3368719 · 1 Parent(s): f832e63

Upload GPT-2 LoRA model for recipe recommendations


- Added LoRA adapter weights (adapter_model.safetensors)
- Added adapter configuration (adapter_config.json)
- Added tokenizer files (tokenizer.json, vocab.json, merges.txt)
- Added model documentation and usage examples

README.md ADDED
@@ -0,0 +1,83 @@
+ ---
+ license: mit
+ language: en
+ tags:
+ - recipe-recommendation
+ - gpt2
+ - lora
+ - cooking
+ - food
+ base_model: gpt2
+ ---
+
+ # Recipe GPT-2 LoRA Model
+
+ A fine-tuned GPT-2 model for recipe recommendations using LoRA (Low-Rank Adaptation).
+
+ ## Model Description
+
+ This model generates personalized recipe suggestions based on:
+ - Available ingredients
+ - Dietary preferences
+ - Cooking time constraints
+ - Cuisine preferences
+
+ ## Usage
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ from peft import PeftModel
+
+ # Load base model and tokenizer
+ base_model = AutoModelForCausalLM.from_pretrained("gpt2")
+ tokenizer = AutoTokenizer.from_pretrained("gpt2")
+
+ # Load LoRA adapter
+ model = PeftModel.from_pretrained(base_model, "nutrientartcd/recipe-gpt2-lora")
+
+ # Generate a recipe suggestion. do_sample=True is needed for temperature to
+ # take effect; GPT-2 has no pad token, so the EOS token id is reused.
+ prompt = "User: I have chicken, garlic, rice. I'm looking for something ready in about 30 minutes.\nAssistant: "
+ inputs = tokenizer(prompt, return_tensors="pt")
+ outputs = model.generate(
+     **inputs,
+     max_new_tokens=100,
+     do_sample=True,
+     temperature=0.7,
+     pad_token_id=tokenizer.eos_token_id,
+ )
+ response = tokenizer.decode(outputs[0], skip_special_tokens=True)
+ print(response)
+ ```
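+
+ For deployment, the adapter can optionally be folded into the base weights so
+ inference no longer needs the PEFT wrapper (a minimal sketch using PEFT's
+ standard `merge_and_unload`; the output path is illustrative):
+
+ ```python
+ # Merge the LoRA weights into the base model and save a plain GPT-2 checkpoint
+ merged = model.merge_and_unload()
+ merged.save_pretrained("recipe-gpt2-merged")      # illustrative path
+ tokenizer.save_pretrained("recipe-gpt2-merged")
+ ```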
+
+ ## Training Data
+
+ Fine-tuned on a recipe dataset with user interactions and ratings to provide conversational recipe recommendations.
+
+ ## Model Details
+
+ - **Base Model**: GPT-2
+ - **Fine-tuning Method**: LoRA (Low-Rank Adaptation)
+ - **Training Framework**: Transformers + PEFT
+ - **Language**: English
+ - **Task**: Conversational recipe recommendation
+
+ ## Limitations
+
+ - English language only
+ - May generate fictional recipe names
+ - Nutritional information not guaranteed to be accurate
+ - Requires the prompt format shown below for best results
+
+ ## Prompt Format
+
+ The model expects prompts in this conversation format:
+ ```
+ User: I have [ingredients]. I'm looking for something ready in about [time] minutes. Preferences: [preferences].
+ Assistant:
+ ```
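+
+ A minimal helper for assembling that template (illustrative; the argument
+ values are placeholders, not part of the model):
+
+ ```python
+ def build_prompt(ingredients, minutes, preferences):
+     """Format a request in the conversation template the model was tuned on."""
+     return (f"User: I have {', '.join(ingredients)}. I'm looking for something "
+             f"ready in about {minutes} minutes. Preferences: {preferences}.\nAssistant: ")
+
+ prompt = build_prompt(["chicken", "garlic", "rice"], 30, "low-carb")
+ ```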
+
+ ## Citation
+
+ If you use this model, please cite:
+ ```
+ @misc{nutrient-recipe-gpt2-lora,
+   title={Recipe GPT-2 LoRA Model},
+   author={NutrientAI},
+   year={2025},
+   url={https://huggingface.co/nutrientartcd/recipe-gpt2-lora}
+ }
+ ```
adapter_config.json ADDED
@@ -0,0 +1,38 @@
+ {
+   "alpha_pattern": {},
+   "auto_mapping": null,
+   "base_model_name_or_path": "gpt2",
+   "bias": "none",
+   "corda_config": null,
+   "eva_config": null,
+   "exclude_modules": null,
+   "fan_in_fan_out": true,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "layer_replication": null,
+   "layers_pattern": null,
+   "layers_to_transform": null,
+   "loftq_config": {},
+   "lora_alpha": 16,
+   "lora_bias": false,
+   "lora_dropout": 0.05,
+   "megatron_config": null,
+   "megatron_core": "megatron.core",
+   "modules_to_save": null,
+   "peft_type": "LORA",
+   "qalora_group_size": 16,
+   "r": 8,
+   "rank_pattern": {},
+   "revision": null,
+   "target_modules": [
+     "c_fc",
+     "c_proj",
+     "c_attn"
+   ],
+   "target_parameters": null,
+   "task_type": "CAUSAL_LM",
+   "trainable_token_indices": null,
+   "use_dora": false,
+   "use_qalora": false,
+   "use_rslora": false
+ }
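For reference, a PEFT `LoraConfig` along these lines would reproduce this adapter configuration (a sketch, not the authors' training script):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Rank-8 LoRA on GPT-2's attention and MLP projections; fan_in_fan_out=True
# because GPT-2 stores these layers as Conv1D (transposed) weights.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    fan_in_fan_out=True,
    target_modules=["c_attn", "c_proj", "c_fc"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(AutoModelForCausalLM.from_pretrained("gpt2"), config)
model.print_trainable_parameters()  # ~1.2M trainable vs ~124M total
```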
adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c7c81ce9c1d21ec3d2c94185dbfce7b58bb54f0c498d9638a7b25a06f38c9584
+ size 4730632
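The ~4.7 MB size is consistent with rank-8 float32 adapters on all twelve GPT-2 blocks (a back-of-the-envelope check, assuming float32 weights; each adapted weight of shape d_in x d_out contributes r * (d_in + d_out) parameters for its A and B factors):

```python
# target_modules matches c_attn, the attention c_proj, c_fc, and the MLP c_proj
r, n_layers, n_bytes = 8, 12, 4    # rank, GPT-2 blocks, float32 bytes
per_layer = r * ((768 + 2304)      # c_attn
                 + (768 + 768)     # attention c_proj
                 + (768 + 3072)    # c_fc
                 + (3072 + 768))   # MLP c_proj
print(n_layers * per_layer * n_bytes)  # 4,718,592 -- file is 4,730,632 with header
```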
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
special_tokens_map.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "bos_token": "<|endoftext|>",
+   "eos_token": "<|endoftext|>",
+   "pad_token": "<|endoftext|>",
+   "unk_token": "<|endoftext|>"
+ }
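GPT-2 defines only `<|endoftext|>`, so all four special-token roles map to it here. For batched generation it still helps to set the pad token explicitly and left-pad (a sketch; the example prompts are illustrative):

```python
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"  # keep prompts right-aligned for generation
batch = tokenizer(
    ["User: I have eggs, spinach.\nAssistant: ",
     "User: I have tofu, rice, soy sauce.\nAssistant: "],
    return_tensors="pt", padding=True,
)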
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,21 @@
+ {
+   "add_prefix_space": false,
+   "added_tokens_decoder": {
+     "50256": {
+       "content": "<|endoftext|>",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "bos_token": "<|endoftext|>",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "<|endoftext|>",
+   "extra_special_tokens": {},
+   "model_max_length": 1024,
+   "pad_token": "<|endoftext|>",
+   "tokenizer_class": "GPT2Tokenizer",
+   "unk_token": "<|endoftext|>"
+ }
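Note that `model_max_length` is 1024, GPT-2's context window, so long ingredient lists should be truncated to leave room for the generated tokens (a sketch):

```python
# Reserve space for up to 100 generated tokens within the 1024-token window
inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=1024 - 100)
```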
vocab.json ADDED
The diff for this file is too large to render. See raw diff