---
base_model: meta-llama/Llama-3.1-8B
library_name: peft
---

## Introduction
This project presents a Large Language Model (LLM) designed to recommend hobbies tailored to individual user profiles, including interests, background, location, available time, and budget. The model collects user information through structured prompts and provides hobby suggestions that match the given constraints. In addition to the hobby recommendation, the model offers basic practical information, such as estimated costs and required materials, to help users get started. To overcome the limitations of existing LLMs in personalizing recommendations, the model is fine-tuned on synthetic instruction-response data representing a broad range of user scenarios, following recent approaches in LLM-based recommendation systems.
## Training Data
The training dataset consists of 1,000 synthetic user profiles generated to capture a diverse range of ages, sexes, interests, and practical constraints such as budget and weekly availability. Each profile was paired with an LLM-generated, step-by-step hobby recommendation using structured prompts designed to elicit logical and grounded suggestions. The data was randomly split into 80% training and 20% validation sets using a fixed random seed (42) to ensure reproducibility. All data was formatted in instruction-response pairs to align with supervised fine-tuning requirements for language models.
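
As a rough illustration of this split (the filename `hobby_profiles.jsonl` and the JSON Lines storage format are hypothetical; only the 80/20 ratio and seed 42 are stated above), the preparation could look like:

```python
from datasets import load_dataset

# Load the synthetic instruction-response pairs (filename is hypothetical).
dataset = load_dataset("json", data_files="hobby_profiles.jsonl", split="train")

# 80% train / 20% validation, with the fixed seed noted above for reproducibility.
splits = dataset.train_test_split(test_size=0.2, seed=42)
train_data, val_data = splits["train"], splits["test"]
```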
## Model Description
The base Llama 3.1-8B model was fine-tuned with the parameter-efficient Low-Rank Adaptation (LoRA) approach. Training ran for 2 epochs with a learning rate of 1e-5 and evaluation every 150 steps. The Hugging Face Trainer class managed the training loop, data collation, and checkpointing, with the best model saved at the end; the default optimizer (AdamW) was used. Training and validation splits were derived from the synthetic instruction-response dataset, and all key hyperparameters (output directory, number of epochs, learning rate, and evaluation interval) were specified explicitly to support reproducibility.
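
A minimal sketch of this setup, reusing the `train_data`/`val_data` split from the previous section. The LoRA rank, alpha, and target modules are illustrative assumptions, since the card does not state them, and tokenization/collation of the instruction-response pairs is omitted for brevity:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, Trainer, TrainingArguments

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B")

# Attach a LoRA adapter; r, lora_alpha, and target_modules are assumed values.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Hyperparameters stated in the card: 2 epochs, lr 1e-5, evaluation every
# 150 steps, best checkpoint kept at the end; AdamW is the Trainer default.
args = TrainingArguments(
    output_dir="hobby_recommendation_lora",
    num_train_epochs=2,
    learning_rate=1e-5,
    eval_strategy="steps",  # "evaluation_strategy" in older transformers versions
    eval_steps=150,
    save_strategy="steps",
    save_steps=150,
    load_best_model_at_end=True,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_data,  # assumed to be tokenized before training
    eval_dataset=val_data,
)
trainer.train()
```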
## Evaluation
To evaluate both the general reasoning abilities and the domain-specific performance of the model, three benchmarks from the lm_eval suite were selected: ARC-Challenge, HellaSwag, and Winogrande. ARC-Challenge tests the model's advanced reasoning and problem-solving skills on complex grade-school science questions. HellaSwag measures the ability to select contextually appropriate continuations, which reflects commonsense reasoning. Winogrande evaluates how well the model understands language and context by testing its ability to determine which person or object a pronoun (like “he,” “she,” or “it”) refers to in challenging sentences. These benchmarks were chosen because they are standard in the field for assessing language model capabilities and provide a meaningful comparison of general language understanding, reasoning, and commonsense knowledge. Additionally, a domain-specific Hobby Test Set was included to measure how well the model generates personalized hobby recommendations.

| Model Name                  | ARC-Challenge | HellaSwag | Winogrande | Hobby Test Set |
|-----------------------------|:-------------:|:---------:|:----------:|:--------------:|
| Llama-3.1-8B (base)         | 51.4%         | 59.9%     | 73.1%      | 62.1%          |
| Hobby_Recommendation model  | 53.7%         | 59.9%     | 73.2%      | 88.5%          |
| Falcon-7B-Instruct          | --.-%         | --.-%     | --.-%      | --.-%          |
| Mistral-7B-Instruct         | --.-%         | --.-%     | --.-%      | --.-%          |
**Model Performance Summary:** fine-tuning preserves general benchmark performance (with a small gain on ARC-Challenge) while raising the domain-specific Hobby Test Set score from 62.1% to 88.5%.
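
For reference, the three public benchmarks can be run with EleutherAI's `lm_eval` harness; a sketch is shown below. The `peft=` argument assumes the checkpoint is stored as a PEFT adapter on top of the base model, and the custom Hobby Test Set is not part of the public harness:

```python
import lm_eval

# Evaluate the adapter on top of the base model on the three public benchmarks.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=meta-llama/Llama-3.1-8B,peft=enemydw/Hobby_Recommendation",
    tasks=["arc_challenge", "hellaswag", "winogrande"],
)
print(results["results"])
```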
## Usage and Intended Uses
This model is designed to provide personalized hobby recommendations based on user profiles, including information such as age, location, interests, budget, and weekly available time. It is suitable for integration into digital assistants, hobby recommendation tools, wellness or lifestyle apps, and any platform where tailored activity suggestions can enhance user engagement. The model is best used in applications that require generating concise and relevant hobby suggestions in response to structured user input.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("enemydw/Hobby_Recommendation")
model = AutoModelForCausalLM.from_pretrained("enemydw/Hobby_Recommendation")

# Llama tokenizers ship without a pad token; reuse the EOS token so that
# batched inputs can be padded
tokenizer.pad_token = tokenizer.eos_token
model.config.pad_token_id = tokenizer.pad_token_id
```
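
A hypothetical end-to-end call using the prompt format described in the next section (the profile values and generation settings are made up for illustration):

```python
prompt = (
    "Location: Austin, TX\n"
    "Age: 29\n"
    "Sex: Female\n"
    "Monthly Budget: $50\n"
    "Interests: coding, board games\n"
    "Weekly Available Time: 5\n"
    "Instruction: Think step by step. Consider multiple possible hobbies "
    "based on the user's preferences and constraints.\n"
    "Hobby Recommendation:"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)

# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:],
                       skip_special_tokens=True))
```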
## Prompt Format
Each prompt is structured as a user profile followed by an instruction for the model to generate a relevant hobby recommendation. The prompt includes fields for location, age, sex, monthly budget, interests, and weekly available time, and ends with a standardized instruction line. The instruction "Think step by step" is included to guide the model toward chain-of-thought reasoning in its answers.

```
Location: [City, State]
Age: [User's Age]
Sex: [Male/Female]
Monthly Budget: [Amount in Dollars]
Interests: [Comma-separated interests]
Weekly Available Time: [Hours]
Instruction: Think step by step. Consider multiple possible hobbies based on the user's preferences and constraints.
Hobby Recommendation:
```
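
A small helper for filling in this template programmatically (a sketch; the function name and dict keys are hypothetical):

```python
def build_prompt(profile: dict) -> str:
    """Render a user profile into the structured prompt format above."""
    return (
        f"Location: {profile['location']}\n"
        f"Age: {profile['age']}\n"
        f"Sex: {profile['sex']}\n"
        f"Monthly Budget: {profile['budget']}\n"
        f"Interests: {', '.join(profile['interests'])}\n"
        f"Weekly Available Time: {profile['weekly_hours']}\n"
        "Instruction: Think step by step. Consider multiple possible hobbies "
        "based on the user's preferences and constraints.\n"
        "Hobby Recommendation:"
    )
```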
## Expected Output Format
The expected output is a concise hobby recommendation with no extraneous commentary. Because the prompt asks the model to think step by step, the recommendation may be followed by a brief reason grounding the suggestion in the user's profile, as in the example below.

```
coding, board games
```

```
Reason: Board game design is a hobby that aligns well with the user's interests in coding and board games, while also considering their limited time and budget constraints. By designing their own board games, the user can combine their love for coding and board games into a creative and engaging hobby. Additionally, board game design requires minimal equipment and can be done at home, making it a cost-effective option for the user.
```
## Limitations
The model is focused on the US and does not cover information about other countries. Hobbies that were absent from the training data may not be recommended. The model does not collect, store, or use any real personal information; recommendations are based solely on user-provided inputs and synthetic data, which limits its understanding of unique or highly specific personal circumstances.