Upload README.md with huggingface_hub

README.md
---
language: en
tags:
- text-classification
- resume
- job-description
- recruitment
- bge-m3
license: mit
---

# Resume Job Fit Classifier

A cross-encoder model that predicts whether a resume is a fit for a given job description.

## Model Description

This model fine-tunes [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) as a cross-encoder classifier on resume and job description pairs. It takes a resume and a job description as input and predicts one of three classes: **Good Fit**, **No Fit**, or **Potential Fit**.

The input is structured as:

```
[CLS] resume_text [SEP] job_description_text [SEP]
```

Because both texts are encoded in a single forward pass, the transformer's attention mechanism lets every resume token attend to every JD token, making this a true comparison model rather than a similarity score over independently computed embeddings.
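To see the pair encoding concretely, you can tokenize a resume/JD pair and decode it back. A minimal sketch (bge-m3 uses an XLM-R tokenizer, so the actual special tokens are `<s>`/`</s>`, the equivalents of `[CLS]`/`[SEP]`):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("BAAI/bge-m3")

# Passing two strings makes the tokenizer build the [CLS] A [SEP] B [SEP] pair layout.
enc = tok("resume text...", "job description text...")
print(tok.decode(enc["input_ids"]))  # shows both segments wrapped in special tokens
```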

## Datasets

Two datasets were used for training:

1. [cnamuangtoun/resume-job-description-fit](https://huggingface.co/datasets/cnamuangtoun/resume-job-description-fit)
   - Train: 5,616 pairs
   - Test: 1,759 pairs (used as the evaluation benchmark)
   - Labels: Good Fit, No Fit, Potential Fit

2. [kens1ang/resume-job-fit-augmented](https://huggingface.co/datasets/kens1ang/resume-job-fit-augmented)
   - Train: 31,205 pairs
   - Labels: Good Fit, No Fit, Potential Fit

Combined training set: ~36,800 pairs

Label distribution (combined):
- No Fit: 50.4%
- Good Fit: 24.7%
- Potential Fit: 24.9%
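To reproduce the combined training set, something like the following should work; the column names and schema compatibility are assumptions, so check each dataset card on the Hub first:

```python
from collections import Counter
from datasets import load_dataset, concatenate_datasets

ds1 = load_dataset("cnamuangtoun/resume-job-description-fit")
ds2 = load_dataset("kens1ang/resume-job-fit-augmented")

# Assumes both datasets expose matching columns; align/rename them first if not.
train = concatenate_datasets([ds1["train"], ds2["train"]])  # ~36,800 pairs

# Assumes a "label" column; expect roughly 50% No Fit, 25% Good Fit, 25% Potential Fit.
print(Counter(train["label"]))
```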

## Training Details

- **Base model:** BAAI/bge-m3 (570M parameters, supports up to 8192 tokens)
- **Max sequence length:** 8192 tokens (resume: 4096, JD: 4000)
- **Optimizer:** AdamW with layer-wise learning rates (sketched below)
  - Bottom layers: LR / 10
  - Top layers: full LR
  - Classifier head: full LR
- **Learning rate:** 8e-6 with a cosine scheduler
- **Warmup ratio:** 15%
- **Batch size:** 1 per device, gradient accumulation steps: 32 (effective batch: 32)
- **Epochs:** 40 max, with early stopping (patience 6)
- **Loss:** weighted CrossEntropyLoss to handle class imbalance (No Fit = 50%)
- **Sampling:** WeightedRandomSampler to oversample minority classes
- **Good Fit weight boost:** 2x, to prioritize finding the best candidates
- **Label smoothing:** 0.1
- **Dropout:** 0.3 classifier, 0.15 hidden layers
- **Precision:** fp16 mixed precision
- **Gradient checkpointing:** enabled
- **Hardware:** NVIDIA RTX 4090 (24GB VRAM)
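A minimal sketch of the layer-wise learning rates and the class-imbalance handling described above. The bottom/top layer split, the class-id order, and the exact weighting scheme are assumptions, not the released training script:

```python
import torch
from torch.utils.data import WeightedRandomSampler
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("BAAI/bge-m3", num_labels=3)

# Layer-wise learning rates: bottom half of the encoder at LR/10, top half and
# classifier head at the full LR. (Embeddings omitted here for brevity.)
base_lr = 8e-6
layers = model.base_model.encoder.layer
half = len(layers) // 2
optimizer = torch.optim.AdamW([
    {"params": [p for l in layers[:half] for p in l.parameters()], "lr": base_lr / 10},
    {"params": [p for l in layers[half:] for p in l.parameters()], "lr": base_lr},
    {"params": model.classifier.parameters(), "lr": base_lr},
])

# Weighted loss: inverse-frequency class weights, with a 2x boost on Good Fit.
# Class-id order assumed: 0=Good Fit, 1=No Fit, 2=Potential Fit (matches Usage below).
freq = torch.tensor([0.247, 0.504, 0.249])
weights = 1.0 / freq
weights[0] *= 2.0
loss_fn = torch.nn.CrossEntropyLoss(weight=weights, label_smoothing=0.1)

# WeightedRandomSampler: oversample minority classes in the DataLoader.
labels = [0, 1, 1, 2, 1, 0]  # placeholder integer labels for the training set
sample_weights = [weights[y].item() for y in labels]
sampler = WeightedRandomSampler(sample_weights, num_samples=len(labels), replacement=True)
```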

## Results

| Metric | Eval | Test |
|---|---|---|
| Accuracy | 97.06% | 54.80% |
| Macro F1 | 96.96% | 52.13% |
| F1 Good Fit | 97.21% | 42.46% |
| F1 No Fit | 97.38% | 67.43% |
| F1 Potential Fit | 96.30% | 46.50% |

Here, Eval is the validation split held out from the combined training data, and Test is the 1,759-pair test split of the original dataset.
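For reference, the per-class and macro F1 figures above can be computed with scikit-learn from arrays of true and predicted class ids (the arrays below are placeholders):

```python
from sklearn.metrics import accuracy_score, classification_report

y_true = [0, 1, 2, 1, 0]  # placeholder ground-truth class ids
y_pred = [0, 1, 1, 1, 0]  # placeholder model predictions

print(accuracy_score(y_true, y_pred))
# Per-class F1 plus the "macro avg" row reported in the table above.
print(classification_report(y_true, y_pred, target_names=["Good Fit", "No Fit", "Potential Fit"]))
```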

## Known Limitations & Open Problem

There is a significant gap between eval (97%) and test (52% macro F1) performance. After extensive experimentation, this appears to be caused by **label inconsistency between the two training datasets**: the augmented dataset uses different labeling criteria than the original dataset, and the test set follows the original dataset's labeling logic. The model learns contradictory rules and fails to generalize.

**Things that were tried:**
- Full fine-tuning vs. frozen layers
- 2-class (Fit/No Fit) vs. 3-class classification (2 classes gave 69% test F1)
- Layer-wise learning rates
- Weighted loss + weighted sampling
- Various dropout, weight decay, and label smoothing values
- Training on the original dataset only (best test F1: 69%, 2 classes)
- Training on the combined datasets (test F1 dropped to 52%)

**If you have ideas on how to close this gap, contributions and suggestions are welcome.** Possible directions:
- A cleaner dataset labeled consistently by human recruiters
- A base model pretrained specifically on recruitment text (e.g. JobBERT)
- A better data-mixing strategy to handle label inconsistency between datasets
- Confidence thresholding at inference time (sketched below)
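As an example of the last idea, a hypothetical thresholding wrapper (not part of the released model) could route low-confidence predictions to human review:

```python
def predict_with_threshold(probs, id2label, threshold=0.7):
    """Return the predicted label, or flag the pair for review if confidence is low."""
    top = max(range(len(probs)), key=probs.__getitem__)
    if probs[top] < threshold:
        return "Needs human review"
    return id2label[top]

print(predict_with_threshold([0.45, 0.30, 0.25], {0: "Good Fit", 1: "No Fit", 2: "Potential Fit"}))
```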

## Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import numpy as np

model = AutoModelForSequenceClassification.from_pretrained("med2425/bge-resume-fit")
tokenizer = AutoTokenizer.from_pretrained("med2425/bge-resume-fit")

model.eval()
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

resume = """
John Smith | Senior ML Engineer
6 years experience building production ML systems.
Skills: Python, PyTorch, TensorFlow, NLP, AWS, Docker.
Built NLP pipelines processing 10M documents/day at TechCorp (2020-Present).
Fine-tuned BERT models achieving 94% accuracy on document classification.
B.Sc. Computer Science, State University 2018.
"""

jd = """
Senior Machine Learning Engineer
Requirements: 5+ years ML experience, strong Python,
PyTorch or TensorFlow, NLP experience, production deployment on AWS/GCP/Azure,
Bachelor in Computer Science or related field.
"""

# Encode the pair as [CLS] resume [SEP] jd [SEP]; the model supports up to 8192 tokens.
inputs = tokenizer(resume, jd, return_tensors="pt", truncation=True, max_length=8192).to(device)

# Class probabilities over the three labels.
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1).squeeze().tolist()

id2label = {0: "Good Fit", 1: "No Fit", 2: "Potential Fit"}
for i, p in enumerate(probs):
    print(f"{id2label[i]}: {p:.2%}")
print(f"Prediction: {id2label[np.argmax(probs)]}")
```

> **Note:** Use full-length, realistic resumes and job descriptions for best results.
> The model was trained on resumes averaging 700 words and JDs averaging 400 words.
> Very short inputs may produce unreliable predictions.