---
library_name: transformers
license: gpl-3.0
language:
- en
base_model:
- Qwen/Qwen2.5-7B-Instruct
---
## Introduction
According to the August 2025 jobs report, overall unemployment has risen, with the unemployment rate for workers aged 16-24 reaching 10.5% (Bureau of Labor Statistics, 2025). This age range includes many recent college graduates, who often carry student loan debt and are unable to find stable, long-term employment. While this could be attributed to any of the various economic challenges facing the US today, there is speculation that insufficient job-hunting and interview skills play a role. Many resources seek to fill this gap, including interview-prep LLMs such as InterviewsPilot (InterviewsPilot, 2025). However, no existing LLM combines multiple features into an end-to-end, user-friendly application specifically designed to improve an applicant's chances of successfully completing the job-application cycle. Current LLMs struggle to provide accurate interview preparation for specific jobs and are not finetuned on the user's profile. They tend to hallucinate and often fail to incorporate specific user details into answers to interview questions, resulting in generic responses.

Given these limitations, my interview-prep career assistant LLM seeks to provide a full user experience by generating practice interview questions based on the description of the job the user is applying for. It also provides the user with an 'optimal' answer to each interview question based on their profile and resume. The model is finetuned from Qwen2.5-7B-Instruct using LoRA with hyperparameters rank: 64, alpha: 128, and dropout: 0.15; this combination resulted in the lowest validation loss (2.055938). The model was trained on a synthetic dataset that I generated from user job data.
After finetuning, the LLM scored 21.578 on the SQuAD v2 benchmark, 0.597 on the HumanEval benchmark, and a BLEU score of 5.040 on the E2E NLG Challenge benchmark. On a train/test split of my own dataset, it achieved a BERTScore mean precision of 0.813, mean recall of 0.848, and mean F1 of 0.830. The BERTScore results in particular indicate strong alignment between generated and expected responses.
## Data
I found a training dataset of job postings on Kaggle (Arshkon, 2023), under a project labeled 'LinkedIn Job Postings 2023 Data Analysis'. The dataset contains ~15,000 jobs from LinkedIn, including the company, job title, and a description that lists required skills. Its variety of jobs and descriptions allowed my LLM to be trained on the wide range of job descriptions that users may input. Because no suitable datasets were available, I synthetically generated the other two datasets. For both, I used the Llama-3.2-1B-Instruct model, for its ability to efficiently produce accurate natural-language answers as well as technical answers, which was necessary for the interview questions. I used the job postings dataset with few-shot prompting to create the user-profile dataset: for each job posting, I had the model create a 'great', a 'mediocre', and a 'bad' user profile. An example of the few-shot prompt:
```
Job_Title: Software Engineer
Profile_Type: Great
Applicant_Profile:
Education: Master's in Computer Science
Experience: 5 years building backend systems
Skills: Python, SQL, Git, CI/CD
Certifications: AWS Developer
Training: Agile methodology
Additional Qualifications: Mentorship experience
```
Because the generated user profiles were long, the resulting CSV was over 2 GB, too large for Excel to handle. I used a Python script to randomly sample 5,000 rows. With this subset, I used the Llama-3.2-1B-Instruct model again to create the interview question/answer data. For each job posting and user profile pair, I had the model create an interview question based on the job description, then an optimal answer to the question based on the user profile. An example few-shot prompt is below.
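The random subsampling step can be sketched as follows. This is an illustrative sketch, not the actual project script: the file path, function name, and seed are my assumptions.

```python
import pandas as pd

# Illustrative sketch of sampling a fixed-size subset from the large
# generated CSV; the path, helper name, and seed are assumptions.
def sample_subset(csv_path: str, n: int = 5000, seed: int = 42) -> pd.DataFrame:
    df = pd.read_csv(csv_path)
    # Randomly choose n rows without replacement, reproducibly.
    return df.sample(n=n, random_state=seed).reset_index(drop=True)
```

Fixing the random seed makes the subset reproducible across runs.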
```
Job Title: Data Scientist
Job Description: Analyze data and build predictive models.
Applicant Profile: Experienced in Python, R, and ML models.
Interview Question: Tell me about a machine learning project you are proud of.
Optimal Answer: I developed a predictive model using Python and scikit-learn to forecast customer churn, achieving 85% accuracy by carefully preprocessing the data and tuning hyperparameters.
```
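A few-shot prompt like the one above can be assembled programmatically. This is a minimal sketch: the exemplar text is the one shown above, but the helper function and its name are my assumptions, not the actual generation script.

```python
# One worked exemplar (copied from the example above) prepended to each
# new job posting/profile pair before querying the generator model.
FEW_SHOT_EXAMPLE = """Job Title: Data Scientist
Job Description: Analyze data and build predictive models.
Applicant Profile: Experienced in Python, R, and ML models.
Interview Question: Tell me about a machine learning project you are proud of.
Optimal Answer: I developed a predictive model using Python and scikit-learn to forecast customer churn, achieving 85% accuracy by carefully preprocessing the data and tuning hyperparameters."""

def build_prompt(job_title: str, job_description: str, profile: str) -> str:
    # End with "Interview Question:" so the model continues the pattern.
    return (
        FEW_SHOT_EXAMPLE
        + "\n\nJob Title: " + job_title
        + "\nJob Description: " + job_description
        + "\nApplicant Profile: " + profile
        + "\nInterview Question:"
    )
```

The trailing "Interview Question:" cue encourages the model to complete the pattern established by the exemplar.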
After creating this dataset, I uploaded it to my project notebook and reformatted it to make it easier to train on. I created an 'Instruct' column containing each row's job title, description, and applicant profile, followed by the prompt 'Generate a relevant interview question and provide an optimal answer using the information from this applicant's profile. Interview Question and Optimal Answer:'. I then combined the interview question and optimal answer into one column labeled 'Answer'.
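The reformatting step can be sketched with pandas. The column names (`job_title`, `description`, `profile`, `question`, `answer`) are illustrative assumptions; only the 'Instruct'/'Answer' targets and the prompt string come from this card.

```python
import pandas as pd

PROMPT = ("Generate a relevant interview question and provide an optimal "
          "answer using the information from this applicant's profile. "
          "Interview Question and Optimal Answer:")

# Input column names are illustrative; the actual dataset may differ.
def add_training_columns(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    df["Instruct"] = (
        "Job Title: " + df["job_title"]
        + "\nJob Description: " + df["description"]
        + "\nApplicant Profile: " + df["profile"]
        + "\n" + PROMPT
    )
    # Merge question and answer into a single training target.
    df["Answer"] = (
        "Interview Question: " + df["question"]
        + "\nOptimal Answer: " + df["answer"]
    )
    return df
```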
I established a training, validation, and testing split using scikit-learn's train_test_split function and pandas .sample() method for shuffling. The proportions are as follows:
- Training: 3,200 examples (64% of total)
- Validation: 800 examples (16% of total)
- Testing: 1,000 examples (20% of total)
- Random seed: 42
## Methodology
Finetuning: The model was finetuned from Qwen/Qwen2.5-7B-Instruct using LoRA with hyperparameters rank: 64, alpha: 128, and dropout: 0.15. This combination produced the lowest validation loss (2.055938) of the configurations tested, and these values should be sufficient to reproduce the results.
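The LoRA setup can be expressed as a PEFT configuration. This is a sketch, not the exact training script: the rank, alpha, and dropout values are the ones reported in this card, but the target modules shown are a common choice for Qwen2-style attention layers and are my assumption.

```python
from peft import LoraConfig

# r, lora_alpha, and lora_dropout come from this card; target_modules is
# an assumed (typical) choice for Qwen2-family attention projections.
lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    lora_dropout=0.15,
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```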
## Evaluation
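The benchmark results are summarized in the introduction (SQuAD v2: 21.578; HumanEval: 0.597; E2E NLG BLEU: 5.040; BERTScore precision/recall/F1: 0.813/0.848/0.830). As a minimal illustration of the precision/recall/F1 framing, here is a token-overlap F1 in the SQuAD style; this is a simpler surface-level analogue of the embedding-based BERTScore actually reported, not the evaluation code used.

```python
from collections import Counter

# SQuAD-style token-overlap precision/recall/F1: a surface-level
# analogue of BERTScore's embedding-based P/R/F1.
def token_f1(prediction: str, reference: str):
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0, 0.0, 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```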
## Usage and Intended Use
The model is intended to help job applicants prepare for interviews: given a job description and an applicant profile, it generates a practice interview question and an 'optimal' answer tailored to that applicant's profile and resume.
### Prompt Format
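Prompts should mirror the 'Instruct' column used during finetuning: job title, job description, and applicant profile, followed by the fixed instruction string. The helper below is a sketch (the function name and f-string layout are mine; the field labels and instruction text come from this card).

```python
# Mirrors the 'Instruct' training column; helper name is illustrative.
def format_prompt(job_title: str, job_description: str, profile: str) -> str:
    return (
        f"Job Title: {job_title}\n"
        f"Job Description: {job_description}\n"
        f"Applicant Profile: {profile}\n"
        "Generate a relevant interview question and provide an optimal "
        "answer using the information from this applicant's profile. "
        "Interview Question and Optimal Answer:"
    )
```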
### Expected Output Format
The model responds in the same format as the 'Answer' column it was trained on: an interview question introduced by 'Interview Question:', followed by a tailored answer introduced by 'Optimal Answer:'.
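The snippet below shows the expected response structure and how to separate the two parts. The sample text is illustrative, not an actual model generation.

```python
# Illustrative response (not a real generation) in the expected
# "Interview Question: ... Optimal Answer: ..." format.
sample_response = (
    "Interview Question: How have you used Python to solve a data problem?\n"
    "Optimal Answer: In my last role I built a churn-prediction pipeline in "
    "Python, improving retention targeting."
)

# Split once on the answer marker, then strip the question marker.
question_part, answer_part = sample_response.split("Optimal Answer:")
question = question_part.replace("Interview Question:", "").strip()
answer = answer_part.strip()
```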
## Limitations
The model's benchmark performance outside its target task is modest: 21.578 on SQuAD v2 and a BLEU score of 5.040 on the E2E NLG Challenge. Its training data is entirely synthetic, generated with Llama-3.2-1B-Instruct, so the quality and diversity of its interview questions and answers are bounded by that smaller model's outputs. The job postings come from a single source (LinkedIn, 2023), which may limit coverage of other job markets and more recent roles. Finally, like the base model, it may still hallucinate details that are not present in the user's profile.