AlbertoB12 committed
Commit 80aa828 · verified · 1 Parent(s): 2b1a285

Update README.md

Files changed (1)
  1. README.md +89 -39

README.md CHANGED
@@ -1,58 +1,108 @@
  ---
- base_model: meta-llama/Llama-3.2-3B-Instruct
- library_name: transformers
- model_name: Llama-3.2-3B-Instruct-MeditationGuide
  tags:
- - generated_from_trainer
- - trl
- - sft
- licence: license
  ---

- # Model Card for Llama-3.2-3B-Instruct-MeditationGuide

- This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct).
- It has been trained using [TRL](https://github.com/huggingface/trl).

- ## Quick start

- ```python
- from transformers import pipeline
-
- question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
- generator = pipeline("text-generation", model="AlbertoB12/Llama-3.2-3B-Instruct-MeditationGuide", device="cuda")
- output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
- print(output["generated_text"])
- ```

- ## Training procedure
-
- This model was trained with SFT.

- ### Framework versions

- - TRL: 0.21.0
- - Transformers: 4.55.4
- - Pytorch: 2.8.0+cu126
- - Datasets: 4.0.0
- - Tokenizers: 0.21.4

- ## Citations

- Cite TRL as:
-
- ```bibtex
- @misc{vonwerra2022trl,
-     title        = {{TRL: Transformer Reinforcement Learning}},
-     author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
-     year         = 2020,
-     journal      = {GitHub repository},
-     publisher    = {GitHub},
-     howpublished = {\url{https://github.com/huggingface/trl}}
- }
  ```
  ---
+ language: en
+ license: apache-2.0
  tags:
+ - llama-3.2
+ - fine-tuning
+ - meditation
+ - guided-meditation
+ - wellness
+ - text-generation
+ base_model: meta-llama/Llama-3.2-3B-Instruct
+ datasets:
+ - AlbertoB12/GuidedMeditations1
  ---

+ # Meditation Guide (Llama 3.2 - 3B)

+ This is a fine-tuned version of `meta-llama/Llama-3.2-3B-Instruct`, specifically adapted to generate guided meditation scripts. The model was trained on the `AlbertoB12/GuidedMeditations1` dataset, a collection of diverse guided meditation texts.

+ The goal of this project is to provide a specialized AI tool for creating content in the wellness and mindfulness space. It can generate complete meditation scripts from a simple prompt, focusing on themes such as relaxation, anxiety relief, focus, and gratitude.

+ ## Model Description
+
+ - **Base Model**: `meta-llama/Llama-3.2-3B-Instruct`
+ - **Language**: English (en)
+ - **Task**: Text Generation, Guided Meditation Scripting
+ - **Trained on**: [AlbertoB12/GuidedMeditations1](https://huggingface.co/datasets/AlbertoB12/GuidedMeditations1)
+
+ The model adopts a calm, encouraging, and guiding tone suited to meditation, and it follows instructions about pacing, focus points (e.g., breath, body sensations), and common meditation themes.
+
+ ## Intended Uses & Limitations
+
+ ### Intended Uses

+ This model is designed for:
+ - **Content Creation**: Generating scripts for wellness apps, YouTube channels, or personal mindfulness practice.
+ - **Personalization**: Creating custom meditation scripts tailored to specific needs (e.g., "a 5-minute meditation for morning focus").
+ - **Creative Assistance**: A tool for mindfulness teachers and practitioners to brainstorm and develop new meditation content.

+ > **Disclaimer:** This model is for informational and creative purposes only. The content it generates is **not** a substitute for professional medical or psychological advice, diagnosis, or treatment.

+ ### Limitations

+ - **Narrow Domain**: The model is highly specialized. It may not perform well on topics outside of meditation, mindfulness, and general wellness.
+ - **Potential for Hallucination**: Like all LLMs, it may occasionally generate text that is nonsensical or not perfectly aligned with the prompt.
+ - **Bias**: The model's output will reflect the styles and potential biases present in the `GuidedMeditations1` dataset.
+
+ ## How to Use
+
+ To use this model, make sure you have accepted the Llama 3.2 terms of use on the base model page, [`meta-llama/Llama-3.2-3B-Instruct`](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct). The model should be used with the Llama 3.2 chat template, as in the example below.
+
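+ If you are not yet authenticated with the Hugging Face Hub, you can run `huggingface-cli login` in a terminal or log in programmatically. The following is a minimal sketch, assuming your access token is stored in an `HF_TOKEN` environment variable (the same variable the example below reads):
+
+ ```python
+ # Optional: authenticate with the Hugging Face Hub before downloading the model.
+ # Assumes a token with access to the gated Llama 3.2 repositories is set in HF_TOKEN.
+ import os
+ from huggingface_hub import login
+
+ login(token=os.getenv("HF_TOKEN"))
+ ```
+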
+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ import os
+
+ # --- Configuration ---
+ # Set your Hugging Face token (if the model is private or requires authentication)
+ # For HF Spaces, set this as a secret named HF_TOKEN
+ hf_token = os.getenv("HF_TOKEN")
+ model_id = "AlbertoB12/Llama-3.2-3B-Instruct-MeditationGuide"
+
+ # --- Load Tokenizer and Model ---
+ tokenizer = AutoTokenizer.from_pretrained(model_id, token=hf_token, trust_remote_code=True)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     torch_dtype=torch.bfloat16,
+     device_map="auto",
+     token=hf_token,
+     trust_remote_code=True
+ )
+ model.eval()
+
+ # --- Prepare the Prompt ---
+ # Use the official chat template for Llama 3.2
+ messages = [
+     {
+         "role": "system",
+         "content": "You are a helpful meditation guide. Your purpose is to generate calm, soothing, and effective guided meditation scripts based on the user's request."
+     },
+     {
+         "role": "user",
+         "content": "Write a 5-minute guided meditation script focused on releasing anxiety."
+     },
+ ]
+
+ prompt = tokenizer.apply_chat_template(
+     messages,
+     tokenize=False,
+     add_generation_prompt=True
+ )
+
+ # --- Generate the Response ---
+ # add_special_tokens=False: the chat template already prepends <|begin_of_text|>
+ inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)
+
+ outputs = model.generate(
+     **inputs,
+     max_new_tokens=1024,
+     do_sample=True,
+     temperature=0.7,
+     top_p=0.95,
+     eos_token_id=tokenizer.eos_token_id
+ )
+
+ # --- Decode and Print ---
+ response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
+ print(response)
  ```
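+
+ For quick experiments, the high-level `pipeline` API from `transformers` (used in the earlier version of this card) also works. A minimal sketch, assuming a CUDA GPU is available:
+
+ ```python
+ # Minimal alternative using the transformers text-generation pipeline.
+ from transformers import pipeline
+
+ generator = pipeline(
+     "text-generation",
+     model="AlbertoB12/Llama-3.2-3B-Instruct-MeditationGuide",
+     device="cuda",  # assumes a GPU; replace with device_map="auto" otherwise
+ )
+
+ messages = [{"role": "user", "content": "Write a short guided meditation for morning focus."}]
+ output = generator(messages, max_new_tokens=512, return_full_text=False)[0]
+ print(output["generated_text"])
+ ```
+
+ Generation settings such as `do_sample=True`, `temperature=0.7`, and `top_p=0.95` from the example above can be passed to the pipeline call in the same way.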