---
library_name: peft
license: apache-2.0
base_model: meta-llama/Llama-2-7b-hf
tags:
- resume-screening
- hr-tech
- llama2
- lora
- peft
- fine-tuned
---

# Advanced Resume Screening Model

## Model Description

This is a LoRA (Low-Rank Adaptation) fine-tuned version of Llama-2-7B specifically optimized for resume screening and candidate evaluation tasks. The model can analyze resumes, extract key information, and provide structured assessments of candidate qualifications.

- **Developed by:** kiritps
- **Model type:** Causal Language Model (LoRA Fine-tuned)
- **Language(s):** English
- **License:** Apache 2.0
- **Finetuned from model:** meta-llama/Llama-2-7b-hf

## Model Sources

- **Repository:** https://huggingface.co/kiritps/Advanced-resume-screening

## Uses

### Direct Use

This model is designed for HR professionals and recruitment systems to:
- Analyze and screen resumes automatically
- Extract key qualifications and skills
- Provide structured candidate assessments
- Filter candidates based on specific criteria
- Generate summaries of candidate profiles

### Downstream Use

The model can be integrated into (a minimal wrapper sketch follows the list):
- Applicant Tracking Systems (ATS)
- HR management platforms
- Recruitment automation tools
- Candidate matching systems
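
As a rough illustration of what such an integration might look like, here is a minimal, hypothetical wrapper; the `screen_resume` helper and its prompt format are assumptions for illustration, not part of this repository:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

def screen_resume(model, tokenizer, resume_text: str) -> str:
    """Hypothetical helper an ATS or recruitment pipeline could call."""
    prompt = f"Analyze this resume and provide key qualifications: {resume_text}"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=True,
        temperature=0.7,
    )
    # Return only the newly generated tokens, not the echoed prompt.
    generated = outputs[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(generated, skip_special_tokens=True)
```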

### Out-of-Scope Use

- Should not be used as the sole decision-maker in hiring processes
- Not intended for discriminatory screening based on protected characteristics
- Not suitable for general-purpose text generation outside of resume/HR context

## How to Get Started with the Model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load base model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "kiritps/Advanced-resume-screening")

# Example usage
prompt = "Analyze this resume and provide key qualifications: [RESUME TEXT HERE]"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=512, do_sample=True, temperature=0.7)
# Decode the first (and only) sequence in the batch
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
```
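
Because the adapter was trained against a 4-bit quantized base model, you may prefer to load the base model in 4-bit for inference as well. A minimal sketch, assuming a CUDA GPU with `bitsandbytes` and `accelerate` installed, mirroring the quantization settings reported under Training Details:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# 4-bit NF4 with double quantization and bfloat16 compute,
# matching the configuration reported in this card.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base_model, "kiritps/Advanced-resume-screening")
```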

## Training Details

### Training Data

The model was fine-tuned on a curated dataset of resume-response pairs, designed to teach the model how to:
- Extract relevant information from resumes
- Provide structured analysis of candidate qualifications
- Generate appropriate screening responses
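
The dataset itself is not published. As a purely hypothetical illustration of what a resume-response training pair could look like (field names and content are assumptions, not the actual format):

```python
# Hypothetical training pair; the real dataset format is not published.
example_pair = {
    "prompt": "Analyze this resume and provide key qualifications: <resume text>",
    "response": "Key qualifications: 5 years of Python development, ...",
}
```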

### Training Procedure

#### Training Hyperparameters

- **Training regime:** 4-bit quantization with bfloat16 mixed precision
- **LoRA rank:** 64
- **LoRA alpha:** 16
- **Learning rate:** 2e-4
- **Batch size:** 4
- **Gradient accumulation steps:** 4
- **Checkpoints:** saved at 3840, 4320, 4800, 5280, and 5760 training steps
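
A sketch of how the listed hyperparameters might be expressed with `peft` and `transformers`; only the values reported above are from this card, and everything else (output path, target modules left at defaults, and so on) is an assumption:

```python
from peft import LoraConfig
from transformers import TrainingArguments

# r and lora_alpha taken from the values reported above.
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    task_type="CAUSAL_LM",
)

# Learning rate, batch size, and gradient accumulation from the list above;
# output_dir is a hypothetical placeholder.
training_args = TrainingArguments(
    output_dir="outputs",
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    bf16=True,  # bfloat16 mixed precision, as reported
)
```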

#### Quantization Configuration

- **Quantization method:** bitsandbytes
- **Load in 4bit:** True
- **Quantization type:** nf4
- **Double quantization:** True
- **Compute dtype:** bfloat16

## Bias, Risks, and Limitations

### Limitations

- Model responses should be reviewed by human recruiters
- May exhibit biases present in training data
- Performance may vary across different industries or job types
- Requires careful prompt engineering for optimal results

### Recommendations

- Use as a screening aid, not a replacement for human judgment
- Regularly audit outputs for potential bias
- Combine with diverse evaluation methods
- Ensure compliance with local employment laws and regulations

## Technical Specifications

### Model Architecture

- **Parameter Count:** ~7B parameters (base) + LoRA adapters
- **Quantization:** 4-bit NF4 quantization

### Compute Infrastructure

#### Hardware
- GPU training environment
- Compatible with consumer and enterprise GPUs

#### Software
- **Framework:** PyTorch
- **PEFT Version:** 0.6.2
- **Transformers:** Latest compatible version
- **Quantization:** bitsandbytes

## Training Procedure

The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16

### Framework Versions
- PEFT 0.6.2
- Transformers (compatible version)
- PyTorch (latest stable)
- bitsandbytes (for quantization)

## Model Card Authors

kiritps

## Model Card Contact

For questions or issues regarding this model, please open an issue in the model repository.