---
language: en
license: apache-2.0
tags:
- llama-3.2
- fine-tuning
- meditation
- guided-meditation
- wellness
- text-generation
base_model: meta-llama/Llama-3.2-3B-Instruct
datasets:
- AlbertoB12/GuidedMeditations1
---
# Meditation Guide (Llama 3.2 - 3B)
This is a fine-tuned version of `meta-llama/Llama-3.2-3B-Instruct`, adapted specifically to generate guided meditation scripts. The model was trained on the `AlbertoB12/GuidedMeditations1` dataset, a collection of diverse guided meditation texts.

The goal of this project is to provide a specialized AI tool for creating content in the wellness and mindfulness space. It can generate complete meditation scripts from a simple prompt, covering themes such as relaxation, anxiety relief, focus, and gratitude.
## Model Description
- **Base Model**: `meta-llama/Llama-3.2-3B-Instruct`
- **Language**: English (en)
- **Task**: Text Generation, Guided Meditation Scripting
- **Trained on**: [https://huggingface.co/datasets/AlbertoB12/GuidedMeditations1](https://huggingface.co/datasets/AlbertoB12/GuidedMeditations1)
The model excels at adopting a calm, encouraging, and guiding tone suitable for meditation. It understands instructions related to pacing, focus points (e.g., breath, body sensations), and common meditation themes.
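For illustration, prompts in the following style play to those strengths (these examples are hypothetical and not drawn from the dataset):
```python
# Hypothetical example prompts (not from the training data):
example_prompts = [
    "Write a slow-paced 10-minute body-scan meditation for deep sleep.",
    "Write a 3-minute breathing meditation to regain focus during a busy workday.",
    "Write a gratitude meditation that opens with three deep breaths and closes with a short affirmation.",
]
```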
## Intended Uses & Limitations
### Intended Uses
This model is designed for:
- **Content Creation**: Generating scripts for wellness apps, YouTube channels, or personal mindfulness practice.
- **Personalization**: Creating custom meditation scripts tailored to specific needs (e.g., "a 5-minute meditation for morning focus").
- **Creative Assistance**: A tool for mindfulness teachers and practitioners to brainstorm and develop new meditation content.
> **Disclaimer:** This model is for informational and creative purposes only. The content it generates is **not** a substitute for professional medical or psychological advice, diagnosis, or treatment.
### Limitations
- **Narrow Domain**: The model is highly specialized. It may not perform well on topics outside of meditation, mindfulness, and general wellness.
- **Potential for Hallucination**: Like all LLMs, it may occasionally generate text that is nonsensical or not perfectly aligned with the prompt.
- **Bias**: The model's output will reflect the styles and potential biases present in the `GuidedMeditations1` dataset.
## How to Use
To use this model, ensure you have accepted the Llama 3.2 license terms on the `meta-llama/Llama-3.2-3B-Instruct` model page. The model should be used with the Llama 3.2 chat template, as shown below.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
import os
# --- Configuration ---
# Set your Hugging Face token (if the model is private or requires authentication)
# For HF Spaces, set this as a secret named HF_TOKEN
hf_token = os.getenv("HF_TOKEN")
model_id = "AlbertoB12/Llama-3.2-3B-Instruct-MeditationGuide"
# --- Load Tokenizer and Model ---
tokenizer = AutoTokenizer.from_pretrained(model_id, token=hf_token, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
token=hf_token,
trust_remote_code=True
)
model.eval()
# --- Prepare the Prompt ---
# Use the official chat template for Llama 3.2
messages = [
{
"role": "system",
"content": "You are a helpful meditation guide. Your purpose is to generate calm, soothing, and effective guided meditation scripts based on the user's request."
},
{
"role": "user",
"content": "Write a 5-minute guided meditation script focused on releasing anxiety."
},
]
prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
# --- Generate the Response ---
# The chat template already includes the BOS token, so don't add special tokens again
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)
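# do_sample=True with temperature 0.7 and top_p 0.95 yields varied but coherent
# scripts; lower the temperature for more deterministic output.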
outputs = model.generate(
**inputs,
max_new_tokens=1024,
do_sample=True,
temperature=0.7,
top_p=0.95,
eos_token_id=tokenizer.eos_token_id
)
# --- Decode and Print ---
response = tokenizer.decode(outputs[0][inputs['input_ids'].shape[1]:], skip_special_tokens=True)
print(response)
```
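For quick experiments, the high-level `pipeline` API is a lighter-weight alternative. This is a minimal sketch, assuming a recent `transformers` release in which text-generation pipelines accept chat-style message lists directly:
```python
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="AlbertoB12/Llama-3.2-3B-Instruct-MeditationGuide",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful meditation guide."},
    {"role": "user", "content": "Write a short morning meditation for focus."},
]

# The pipeline applies the chat template automatically; the assistant's
# reply is the last message in the returned conversation.
result = generator(messages, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95)
print(result[0]["generated_text"][-1]["content"])
```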