# Resume Skill Extractor (LoRA Adapters)
This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct. It is specifically trained to act as an expert technical recruiter, analyzing job descriptions to extract concise summaries and bulleted lists of required skills.
## Model Details
- Base Model: Meta Llama 3 8B Instruct
- Task: Text Generation / Information Extraction
- Language: English
- Fine-Tuning Method: QLoRA (4-bit precision, Rank 16, Alpha 32)
- Target Modules: `q_proj`, `k_proj`, `v_proj`, `o_proj`, `gate_proj`, `up_proj`, `down_proj`
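To put the rank-16 configuration in perspective, here is a back-of-the-envelope count of the trainable LoRA parameters it implies. The layer shapes below (32 decoder layers, hidden size 4096, MLP size 14336, 1024-dim grouped-query k/v projections) are standard Llama 3 8B dimensions and are assumptions, not figures from this card:

```python
# Rough count of trainable LoRA parameters for rank r=16 on the
# seven target modules listed above. Layer shapes are assumed
# standard Llama 3 8B dimensions, not taken from this model card.
r = 16  # LoRA rank

# (in_features, out_features) of each targeted projection in one decoder layer
targets = {
    "q_proj": (4096, 4096),
    "k_proj": (4096, 1024),   # grouped-query attention: smaller k/v
    "v_proj": (4096, 1024),
    "o_proj": (4096, 4096),
    "gate_proj": (4096, 14336),
    "up_proj": (4096, 14336),
    "down_proj": (14336, 4096),
}

# Each LoRA pair adds r*in (the A matrix) + r*out (the B matrix) parameters
per_layer = sum(r * (d_in + d_out) for d_in, d_out in targets.values())
total = per_layer * 32  # 32 decoder layers
print(f"{total:,} trainable parameters")  # ~42M, well under 1% of the 8B base
```

This is why QLoRA fits on a single 24GB GPU: only the small adapter matrices receive gradients while the 4-bit base weights stay frozen.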
## Training Data
The model was fine-tuned on a custom dataset of 3,050 Data Science and AI job descriptions.
## Training Hardware & Hyperparameters
- Hardware: 1x NVIDIA L4 GPU (24GB VRAM) on Lightning AI
- Epochs: 3
- Batch Size: 2 (with 4 gradient accumulation steps, for an effective batch size of 8)
- Learning Rate: 2e-4
- Optimizer: AdamW
- Precision: bfloat16
- Final Evaluation Loss: 1.407
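The training schedule implied by these settings can be sketched with a little arithmetic (the 3,050-example dataset size is from this card; assuming each job description is one training row):

```python
import math

# Optimizer step count implied by the hyperparameters above,
# assuming one job description per training row.
examples, epochs = 3050, 3
effective_batch = 2 * 4  # per-device batch size x gradient accumulation steps
steps_per_epoch = math.ceil(examples / effective_batch)
total_steps = steps_per_epoch * epochs
print(effective_batch, steps_per_epoch, total_steps)
```

So each epoch is roughly 380 optimizer steps, a workload that comfortably fits a single L4 in bfloat16 with 4-bit base weights.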
## Example Usage
This repository contains a PEFT LoRA adapter, not full model weights. To run inference, load the base model in 4-bit and attach the adapter with the `peft` library; you can optionally merge the adapter into the base weights afterwards with `merge_and_unload()`.
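A minimal loading-and-inference sketch is below. The adapter repo id (`your-username/resume-skill-extractor`) and the system prompt are placeholders, since the card does not state them; everything else uses standard `transformers`/`peft` APIs:

```python
# Sketch: load the base model in 4-bit and attach this LoRA adapter.
# Requires: pip install transformers peft bitsandbytes accelerate
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

BASE = "meta-llama/Meta-Llama-3-8B-Instruct"
ADAPTER = "your-username/resume-skill-extractor"  # placeholder adapter repo id

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(
    BASE, quantization_config=bnb, device_map="auto"
)
model = PeftModel.from_pretrained(model, ADAPTER)

job_description = "Senior Data Scientist: Python, SQL, PyTorch, MLOps..."
messages = [
    # Placeholder system prompt; adjust to match the training format.
    {"role": "system", "content": "You are an expert technical recruiter."},
    {"role": "user", "content": "Extract the required skills:\n" + job_description},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

For repeated inference without the adapter indirection, call `model = model.merge_and_unload()` after loading to fold the LoRA weights into the base model.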