Introduction

This project presents a Large Language Model (LLM) designed to recommend hobbies tailored to individual user profiles, including interests, background, location, available time, and budget. The model collects user information through structured prompts and provides hobby suggestions that match the given constraints. In addition to the hobby recommendation, the model offers basic practical information, such as estimated costs and required materials, to help users get started. To overcome the limitations of existing LLMs in personalizing recommendations, the model is fine-tuned on synthetic instruction-response data representing a broad range of user scenarios, following recent approaches in LLM-based recommendation systems.

Training Data

The training dataset consists of 1,000 synthetic user profiles generated to capture a diverse range of ages, sexes, interests, and practical constraints such as budget and weekly availability. Each profile was paired with an LLM-generated, step-by-step hobby recommendation using structured prompts designed to elicit logical and grounded suggestions. The data was randomly split into 80% training and 20% validation sets using a fixed random seed (42) to ensure reproducibility. All data was formatted in instruction-response pairs to align with supervised fine-tuning requirements for language models.
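The profile generation pipeline itself is not included here, but the split described above can be reproduced with the Hugging Face datasets library. A minimal sketch, assuming the instruction-response pairs are stored in a JSON Lines file (the file name is hypothetical):

from datasets import load_dataset

# Load the synthetic instruction-response pairs (file name is hypothetical).
dataset = load_dataset("json", data_files="hobby_profiles.jsonl", split="train")

# 80/20 train/validation split with the fixed seed (42) for reproducibility.
splits = dataset.train_test_split(test_size=0.2, seed=42)
train_data, val_data = splits["train"], splits["test"]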

Model Description

The base Llama 3.1-8B model was fine-tuned using the parameter-efficient Low-Rank Adaptation (LoRA) approach. Training ran for 2 epochs with a learning rate of 1e-5 and evaluation every 150 steps. The Hugging Face Trainer class managed the training loop, data collation, and checkpointing, with the best checkpoint restored at the end, and the default AdamW optimizer was used. Training and validation splits were derived from the synthetic instruction-response dataset, and all key hyperparameters, such as the output directory, number of epochs, learning rate, and evaluation interval, were specified explicitly to support reproducibility.
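The exact training script is not published; the following sketch shows how the stated configuration could be assembled with transformers and peft, reusing train_data and val_data from the split above. The LoRA rank, alpha, and target modules are illustrative assumptions, not values reported in this card:

from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-3.1-8B"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Wrap the base model with a LoRA adapter; rank, alpha, and target
# modules here are illustrative defaults.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
))

# Hyperparameters stated above: 2 epochs, lr 1e-5, evaluation every
# 150 steps, best checkpoint restored at the end.
# (eval_strategy is spelled evaluation_strategy in transformers < 4.41.)
args = TrainingArguments(
    output_dir="hobby_recommendation_lora",
    num_train_epochs=2,
    learning_rate=1e-5,
    eval_strategy="steps",
    eval_steps=150,
    save_strategy="steps",
    save_steps=150,
    load_best_model_at_end=True,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_data,  # pre-tokenized instruction-response pairs
    eval_dataset=val_data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()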

Evaluation

To evaluate both the general reasoning abilities and the domain-specific performance of the model, three benchmarks from the lm_eval suite were selected: ARC-Challenge, HellaSwag, and Winogrande. ARC-Challenge tests the model's advanced reasoning and problem-solving skills on complex grade-school science questions. HellaSwag measures the ability to select contextually appropriate continuations, which reflects commonsense reasoning. Winogrande evaluates how well the model understands language and context by testing its ability to determine which person or object a pronoun (like “he,” “she,” or “it”) refers to in challenging sentences. These benchmarks were chosen because they are standard in the field for assessing language model capabilities and provide a meaningful comparison of general language understanding, reasoning, and commonsense knowledge. Additionally, a domain-specific Hobby Test Set was included to measure how well the model generates personalized hobby recommendations.
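The three general-purpose scores in the table below can be reproduced with the harness's Python entry point. A minimal sketch, assuming lm-eval 0.4+ and this model's Hub ID:

import lm_eval

# Run the three general-purpose benchmarks against the fine-tuned model.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=enemydw/Hobby_Recommendation",
    tasks=["arc_challenge", "hellaswag", "winogrande"],
)
print(results["results"])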

Model Name                  ARC-Challenge  HellaSwag  Winogrande
Llama-3.1-8B (base)         51.4%          59.9%      73.1%
Hobby_Recommendation model  53.7%          59.9%      73.2%
Falcon-7B-Instruct          40.2%          57.7%      67.6%
Mistral-7B-Instruct         49.8%          56.3%      69.6%

The fine-tuned Hobby_Recommendation model showed small improvements over the base Llama-3.1-8B model on ARC-Challenge and Winogrande and matched its performance on HellaSwag. Both Falcon-7B-Instruct and Mistral-7B-Instruct scored lower than the base model across all three benchmarks. While general reasoning ability remained stable, the fine-tuned model performed especially well on the domain-specific Hobby Test Set, suggesting that training on personalized synthetic data helped the model better handle specific recommendation tasks.

Usage and Intended Uses

This model is designed to provide personalized hobby recommendations based on user profiles, including information such as age, location, interests, budget, and weekly available time. It is suitable for integration into digital assistants, hobby recommendation tools, wellness or lifestyle apps, and any platform where tailored activity suggestions can enhance user engagement. The model is best used in applications that require generating concise and relevant hobby suggestions in response to structured user input.

from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model from Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("enemydw/Hobby_Recommendation")
model = AutoModelForCausalLM.from_pretrained("enemydw/Hobby_Recommendation")

# Set pad_token for batching or compatibility
tokenizer.pad_token = tokenizer.eos_token
model.config.pad_token_id = tokenizer.pad_token_id

Prompt Format

Each prompt is structured as a user profile followed by an instruction for the model to generate a relevant hobby recommendation. The prompt includes fields for location, age, sex, monthly budget, interests, and weekly available time, and ends with a standardized instruction line. The instruction contains the phrase "Think step by step" to guide the model to use chain-of-thought reasoning in its answers.

Location: [City, State]
Age: [User's Age]
Sex: [Male/Female]
Monthly Budget: [Amount in USD]
Interests: [Comma-separated interests]
Weekly Available Time: [Hours]
Instruction: Think step by step. Consider multiple possible hobbies based on the user's preferences and constraints.
Hobby Recommendation:
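Continuing from the loading snippet above, a prompt built from this template can be passed directly to the model. The profile values and the max_new_tokens setting below are illustrative, not recommended defaults:

profile = """Location: Austin, TX
Age: 29
Sex: Male
Monthly Budget: $50
Interests: coding, board games
Weekly Available Time: 5 hours
Instruction: Think step by step. Consider multiple possible hobbies based on the user's preferences and constraints.
Hobby Recommendation:"""

# Tokenize the filled-in template and generate a recommendation.
inputs = tokenizer(profile, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))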

Expected Output Format

The expected output typically includes the name of a recommended hobby, often followed by a brief explanation or justification. Even when only a hobby name is requested, the model may generate a full sentence or paragraph explaining why the hobby fits the user's profile.

coding, board games

Reason: Board game design is a hobby that aligns well with the user's interests in coding and board games, while also considering their limited time and budget constraints. By designing their own board games, the user can combine their love for coding and board games into a creative and engaging hobby. Additionally, board game design requires minimal equipment and can be done at home, making it a cost-effective option for the user.

Limitation

The model's recommendations are US-centric and do not cover hobby information for other countries. Hobbies absent from the training data may never be suggested. The model does not collect, store, or use any real personal information; recommendations are based only on user-provided inputs and synthetic training data, which limits its understanding of unique or highly specific personal circumstances.
