Updated streamlit_apps & readme.
- README.md +45 -21
- streamlit_app.py +29 -85
- streamlit_second.py +354 -0
README.md
CHANGED
Old version:

```diff
@@ -4,7 +4,7 @@
 # Resume Matcher
-## AI Based Resume Matcher to tailor your resume to a job description. Find the
 </div>
@@ -19,37 +19,61 @@
 [](https://www.resumematcher.fyi)
 </div>
-the belong to.
-For this :-
-2. id2word, and doc2word algorithms are used on the Documents (from Gensim Library).
-3. [LDA](https://en.wikipedia.org/wiki/Latent_Dirichlet_allocation) (Latent Dirichlet Allocation) is done to extract the Topics from the Document set. (In this case Resumes)
-4. Additional Plots are done to gain more insights about the document.
 <br/>
 ---
-###
```
New version:

# Resume Matcher

## AI Based Free & Open Source ATS, Resume Matcher to tailor your resume to a job description. Find the best keywords, and gain deep insights into your resume.

</div>

[](https://www.resumematcher.fyi)

[](https://resume-matcher.streamlit.app/)

</div>

### How does it work?

The Resume Matcher takes your resume and job descriptions as input, parses them using Python, and mimics the functionality of an ATS, providing you with insights and suggestions to make your resume ATS-friendly.

The process is as follows:

1. **Parsing**: The system uses Python to parse both your resume and the provided job description, just like an ATS would. Parsing is critical, as it transforms your documents into a format the system can readily analyze.

2. **Keyword Extraction**: The tool uses advanced machine learning algorithms to extract the most relevant keywords from the job description. These keywords represent the skills, qualifications, and experience the employer seeks.

3. **Key Terms Extraction**: Beyond keyword extraction, the tool uses textacy to identify the main key terms or themes in the job description. This step helps in understanding the broader context of what the resume is about.

4. **Vector Similarity Using Qdrant**: The tool uses Qdrant, a highly efficient vector similarity search tool, to measure how closely your resume matches the job description. Your resume and the job description are represented as vectors in a high-dimensional space, and their cosine similarity is calculated. The more similar they are, the higher the likelihood that your resume will pass the ATS screening.

On top of that, there are various data visualizations that I've added to help you get started.
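The cosine-similarity idea in step 4 can be sketched independently of Qdrant. This is an illustrative snippet, not the project's actual code: it uses simple bag-of-words counts instead of sentence-encoder vectors, and the two example texts are made up.

```python
import math
from collections import Counter

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two texts."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    # Dot product over the shared vocabulary only
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical resume/job-description pair
resume = "python developer with streamlit and qdrant experience"
jd = "looking for a python developer with qdrant experience"
score = cosine_similarity(resume, jd)
```

A higher `score` (closer to 1.0) means the two texts share more of their vocabulary; the real pipeline applies the same measure to learned embedding vectors rather than word counts.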
#### PRs Welcomed 🤗

<br/>

---

<div align="center">

## How to install

</div>

1. Clone the project.
2. Create a Python virtual environment.
3. Activate the virtual environment.
4. Run `pip install -r requirements.txt` to install all dependencies.
5. Put your resumes in PDF format in the `Data/Resumes` folder. (Delete the existing contents.)
6. Put your job descriptions in PDF format in the `Data/JobDescription` folder. (Delete the existing contents.)
7. Run `python run_first.py`; this will parse all the resumes to JSON.
8. Run `streamlit run streamlit_app.py`.
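After step 7, each resume lives as a JSON file under `Data/Processed/Resumes`, and the Streamlit apps read the fields `clean_data`, `extracted_keywords`, and `keyterms` from those files. A minimal sanity check of one parsed file could look like this (the loader function and sample values are illustrative, not part of the repo):

```python
import json
import os
import tempfile

def load_parsed_resume(path: str) -> dict:
    """Load one parsed resume produced by run_first.py and verify
    the fields the Streamlit apps rely on are present."""
    with open(path) as f:
        parsed = json.load(f)
    for field in ("clean_data", "extracted_keywords", "keyterms"):
        assert field in parsed, f"missing field: {field}"
    return parsed

# Round-trip a hypothetical parsed resume through a temp file as a smoke test
sample = {
    "clean_data": "python developer with streamlit experience",
    "extracted_keywords": ["python", "developer"],
    "keyterms": [["python", 0.42]],
}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(sample, f)
    tmp_path = f.name
parsed = load_parsed_resume(tmp_path)
os.remove(tmp_path)
```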
**Note**: For local runs, don't run `streamlit_second.py`; it's for deploying to Streamlit.

Note: The vector similarity part is precomputed here, as sentence encoders require a heavy GPU and lots of memory (RAM). I am working on a blog post that will show how you can leverage that in a Google Colab environment for free.
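Because the similarity scores are precomputed, the deployed app only has to group plain (resume, query, score) records per job description and sort each group by score, which the code does with pandas. The same logic in plain Python, using a few of the score values that appear in the app's data list:

```python
from collections import defaultdict

# A subset of the precomputed records used by the app
records = [
    {"text": "Alfred Pennyworth", "query": "Product Manager", "score": 0.62658},
    {"text": "Barry Allen", "query": "Product Manager", "score": 0.43777737},
    {"text": "Barry Allen", "query": "Front End Engineer", "score": 0.76813436},
    {"text": "Bruce Wayne", "query": "Front End Engineer", "score": 0.60440844},
]

# Group by query, then sort each group by descending score
by_query = defaultdict(list)
for rec in records:
    by_query[rec["query"]].append(rec)
ranking = {q: sorted(rs, key=lambda r: r["score"], reverse=True)
           for q, rs in by_query.items()}

best_pm = ranking["Product Manager"][0]["text"]  # highest-scoring resume
```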
<br/>

---

### Note 📝

Thanks for the support 💙. This is an ongoing project that I want to build with the open source community. There are many ways in which this tool can be upgraded, including (but not limited to):

- Creating a better dashboard instead of Streamlit.
- Adding more features, like uploading and parsing of resumes.
- Adding a Docker image for easy usage.
- Contributing a better parsing algorithm.
- Contributing to a blog on how to make this work.

streamlit_app.py
CHANGED
Old version:

```diff
@@ -8,6 +8,7 @@ import plotly.graph_objects as go
 from scripts.utils.ReadFiles import get_filenames_from_dir
 from streamlit_extras import add_vertical_space as avs
 from annotated_text import annotated_text, parameters
 import nltk
 nltk.download('punkt')
@@ -115,6 +116,22 @@ def tokenize_string(input_string):
 st.image('Assets/img/header_image.jpg')
 avs.add_vertical_space(5)
 resume_names = get_filenames_from_dir("Data/Processed/Resumes")
@@ -233,91 +250,18 @@ fig = px.treemap(df2, path=['keyword'], values='value',
                  title='Key Terms/Topics Extracted from the selected Job Description')
 st.write(fig)
-avs.add_vertical_space(
-st.divider()
-st.markdown("## Vector Similarity Scores")
-st.caption("Powered by Qdrant Vector Search")
-st.info("These are pre-computed queries", icon="ℹ")
-st.warning(
-    "Running Qdrant or Sentence Transformers without having capacity is not recommended", icon="⚠")
-# Your data
-data = [
-    {'text': "{'resume': 'Alfred Pennyworth",
-     'query': 'Job Description Product Manager', 'score': 0.62658},
-    {'text': "{'resume': 'Barry Allen",
-     'query': 'Job Description Product Manager', 'score': 0.43777737},
-    {'text': "{'resume': 'Bruce Wayne ",
-     'query': 'Job Description Product Manager', 'score': 0.39835533},
-    {'text': "{'resume': 'JOHN DOE",
-     'query': 'Job Description Product Manager', 'score': 0.3915512},
-    {'text': "{'resume': 'Harvey Dent",
-     'query': 'Job Description Product Manager', 'score': 0.3519544},
-    {'text': "{'resume': 'Barry Allen",
-     'query': 'Job Description Senior Full Stack Engineer', 'score': 0.6541866},
-    {'text': "{'resume': 'Alfred Pennyworth",
-     'query': 'Job Description Senior Full Stack Engineer', 'score': 0.59806436},
-    {'text': "{'resume': 'JOHN DOE",
-     'query': 'Job Description Senior Full Stack Engineer', 'score': 0.5951386},
-    {'text': "{'resume': 'Bruce Wayne ",
-     'query': 'Job Description Senior Full Stack Engineer', 'score': 0.57700855},
-    {'text': "{'resume': 'Harvey Dent",
-     'query': 'Job Description Senior Full Stack Engineer', 'score': 0.38489106},
-    {'text': "{'resume': 'Barry Allen",
-     'query': 'Job Description Front End Engineer', 'score': 0.76813436},
-    {'text': "{'resume': 'Bruce Wayne'",
-     'query': 'Job Description Front End Engineer', 'score': 0.60440844},
-    {'text': "{'resume': 'JOHN DOE",
-     'query': 'Job Description Front End Engineer', 'score': 0.56080043},
-    {'text': "{'resume': 'Alfred Pennyworth",
-     'query': 'Job Description Front End Engineer', 'score': 0.5395049},
-    {'text': "{'resume': 'Harvey Dent",
-     'query': 'Job Description Front End Engineer', 'score': 0.3859515},
-    {'text': "{'resume': 'JOHN DOE",
-     'query': 'Job Description Java Developer', 'score': 0.5449441},
-    {'text': "{'resume': 'Alfred Pennyworth",
-     'query': 'Job Description Java Developer', 'score': 0.53476423},
-    {'text': "{'resume': 'Barry Allen",
-     'query': 'Job Description Java Developer', 'score': 0.5313871},
-    {'text': "{'resume': 'Bruce Wayne ",
-     'query': 'Job Description Java Developer', 'score': 0.44446343},
-    {'text': "{'resume': 'Harvey Dent",
-     'query': 'Job Description Java Developer', 'score': 0.3616274}
-]
-# Create a DataFrame
-df = pd.DataFrame(data)
-# Create different DataFrames based on the query and sort by score
-df1 = df[df['query'] ==
-         'Job Description Product Manager'].sort_values(by='score', ascending=False)
-df2 = df[df['query'] ==
-         'Job Description Senior Full Stack Engineer'].sort_values(by='score', ascending=False)
-df3 = df[df['query'] == 'Job Description Front End Engineer'].sort_values(
-    by='score', ascending=False)
-df4 = df[df['query'] == 'Job Description Java Developer'].sort_values(
-    by='score', ascending=False)
-def plot_df(df, title):
-    fig = px.bar(df, x='text', y=df['score']*100, title=title)
-    st.plotly_chart(fig)
-st.markdown("### Bar plots of scores based on similarity to Job Description.")
-st.
-st.
-st.
-plot_df(df4, 'Job Description Java Developer 3 Years of Experien')
```
New version:

```diff
@@ -8,6 +8,7 @@ import plotly.graph_objects as go
 from scripts.utils.ReadFiles import get_filenames_from_dir
 from streamlit_extras import add_vertical_space as avs
 from annotated_text import annotated_text, parameters
+from streamlit_extras.badges import badge
 import nltk
 nltk.download('punkt')
@@ -115,6 +116,22 @@ def tokenize_string(input_string):
 st.image('Assets/img/header_image.jpg')
+st.title(':blue[Resume Matcher]')
+st.subheader(
+    'Free and Open Source ATS to help your resume pass the screening stage.')
+st.markdown(
+    "Check the website [www.resumematcher.fyi](https://www.resumematcher.fyi/)")
+st.markdown(
+    '⭐ Give Resume Matcher a Star on [GitHub](https://github.com/srbhr/Naive-Resume-Matching/)')
+badge(type="github", name="srbhr/Naive-Resume-Matching")
+
+st.text('For updates follow me on Twitter.')
+badge(type="twitter", name="_srbhr_")
+
+st.markdown(
+    'If you like the project and would like to further help in development please consider 👇')
+badge(type="buymeacoffee", name="srbhr")
+
 avs.add_vertical_space(5)
 resume_names = get_filenames_from_dir("Data/Processed/Resumes")
@@ -233,91 +250,18 @@
                  title='Key Terms/Topics Extracted from the selected Job Description')
 st.write(fig)
+avs.add_vertical_space(3)
+st.title(':blue[Resume Matcher]')
+st.subheader(
+    'Free and Open Source ATS to help your resume pass the screening stage.')
+st.markdown(
+    '⭐ Give Resume Matcher a Star on [GitHub](https://github.com/srbhr/Naive-Resume-Matching/)')
+badge(type="github", name="srbhr/Naive-Resume-Matching")
+
+st.text('For updates follow me on Twitter.')
+badge(type="twitter", name="_srbhr_")
+
+st.markdown(
+    'If you like the project and would like to further help in development please consider 👇')
+badge(type="buymeacoffee", name="srbhr")
```
streamlit_second.py
ADDED
|
@@ -0,0 +1,354 @@
```python
import networkx as nx
from typing import List
import streamlit as st
import pandas as pd
import json
import plotly.express as px
import plotly.graph_objects as go
from scripts.utils.ReadFiles import get_filenames_from_dir
from streamlit_extras import add_vertical_space as avs
from annotated_text import annotated_text, parameters
from streamlit_extras.badges import badge
import nltk
nltk.download('punkt')

parameters.SHOW_LABEL_SEPARATOR = False
parameters.BORDER_RADIUS = 3
parameters.PADDING = "0.5 0.25rem"


def create_star_graph(nodes_and_weights, title):
    # Create an empty graph
    G = nx.Graph()

    # Add the central node
    central_node = "resume"
    G.add_node(central_node)

    # Add nodes and edges with weights to the graph
    for node, weight in nodes_and_weights:
        G.add_node(node)
        G.add_edge(central_node, node, weight=weight * 100)

    # Get position layout for nodes
    pos = nx.spring_layout(G)

    # Create edge trace
    edge_x = []
    edge_y = []
    for edge in G.edges():
        x0, y0 = pos[edge[0]]
        x1, y1 = pos[edge[1]]
        edge_x.extend([x0, x1, None])
        edge_y.extend([y0, y1, None])

    edge_trace = go.Scatter(x=edge_x, y=edge_y, line=dict(
        width=0.5, color='#888'), hoverinfo='none', mode='lines')

    # Create node trace
    node_x = []
    node_y = []
    for node in G.nodes():
        x, y = pos[node]
        node_x.append(x)
        node_y.append(y)

    node_trace = go.Scatter(x=node_x, y=node_y, mode='markers', hoverinfo='text',
                            marker=dict(showscale=True, colorscale='Rainbow', reversescale=True, color=[], size=10,
                                        colorbar=dict(thickness=15, title='Node Connections', xanchor='left',
                                                      titleside='right'), line_width=2))

    # Color node points by number of connections
    node_adjacencies = []
    node_text = []
    for node in G.nodes():
        adjacencies = list(G.adj[node])
        node_adjacencies.append(len(adjacencies))
        node_text.append(f'{node}<br># of connections: {len(adjacencies)}')

    node_trace.marker.color = node_adjacencies
    node_trace.text = node_text

    # Create the figure
    fig = go.Figure(data=[edge_trace, node_trace],
                    layout=go.Layout(title=title, titlefont_size=16, showlegend=False,
                                     hovermode='closest', margin=dict(b=20, l=5, r=5, t=40),
                                     xaxis=dict(
                                         showgrid=False, zeroline=False, showticklabels=False),
                                     yaxis=dict(showgrid=False, zeroline=False, showticklabels=False)))

    # Show the figure
    st.plotly_chart(fig)


def create_annotated_text(input_string: str, word_list: List[str], annotation: str, color_code: str):
    # Tokenize the input string
    tokens = nltk.word_tokenize(input_string)

    # Convert the list to a set for quick lookups
    word_set = set(word_list)

    # Initialize an empty list to hold the annotated text
    annotated_text = []

    for token in tokens:
        # Check if the token is in the set
        if token in word_set:
            # If it is, append a tuple with the token, annotation, and color code
            annotated_text.append((token, annotation, color_code))
        else:
            # If it's not, just append the token as a string
            annotated_text.append(token)

    return annotated_text


def read_json(filename):
    with open(filename) as f:
        data = json.load(f)
    return data


def tokenize_string(input_string):
    tokens = nltk.word_tokenize(input_string)
    return tokens


st.image('Assets/img/header_image.jpg')

st.title(':blue[Resume Matcher]')
st.subheader(
    'Free and Open Source ATS to help your resume pass the screening stage.')
st.markdown(
    "Check the website [www.resumematcher.fyi](https://www.resumematcher.fyi/)")
st.markdown(
    '⭐ Give Resume Matcher a Star on [GitHub](https://github.com/srbhr/Naive-Resume-Matching/)')
badge(type="github", name="srbhr/Naive-Resume-Matching")

st.text('For updates follow me on Twitter.')
badge(type="twitter", name="_srbhr_")

st.markdown(
    'If you like the project and would like to further help in development please consider 👇')
badge(type="buymeacoffee", name="srbhr")

avs.add_vertical_space(5)

resume_names = get_filenames_from_dir("Data/Processed/Resumes")

st.write("There are", len(resume_names),
         "resumes present. Please select one from the menu below:")
output = st.slider('Select Resume Number', 0, len(resume_names) - 1, 2)

avs.add_vertical_space(5)

st.write("You have selected", resume_names[output], "- printing the resume")
selected_file = read_json("Data/Processed/Resumes/" + resume_names[output])

avs.add_vertical_space(2)
st.markdown("#### Parsed Resume Data")
st.caption(
    "This text is parsed from your resume. This is how it will look after getting parsed by an ATS.")
st.caption("Use this to understand how to make your resume ATS-friendly.")
avs.add_vertical_space(3)
# st.json(selected_file)
st.write(selected_file["clean_data"])

avs.add_vertical_space(3)
st.write("Now let's take a look at the extracted keywords from the resume.")

annotated_text(create_annotated_text(
    selected_file["clean_data"], selected_file["extracted_keywords"],
    "KW", "#0B666A"))

avs.add_vertical_space(5)
st.write("Now let's take a look at the extracted entities from the resume.")

# Call the function with your data
create_star_graph(selected_file['keyterms'], "Entities from Resume")

df2 = pd.DataFrame(selected_file['keyterms'], columns=["keyword", "value"])

# Create the dictionary
keyword_dict = {}
for keyword, value in selected_file['keyterms']:
    keyword_dict[keyword] = value * 100

fig = go.Figure(data=[go.Table(header=dict(values=["Keyword", "Value"],
                                           font=dict(size=12),
                                           fill_color='#070A52'),
                               cells=dict(values=[list(keyword_dict.keys()),
                                                  list(keyword_dict.values())],
                                          line_color='darkslategray',
                                          fill_color='#6DA9E4'))
                      ])
st.plotly_chart(fig)

st.divider()

fig = px.treemap(df2, path=['keyword'], values='value',
                 color_continuous_scale='Rainbow',
                 title='Key Terms/Topics Extracted from your Resume')
st.write(fig)

avs.add_vertical_space(5)

job_descriptions = get_filenames_from_dir("Data/Processed/JobDescription")

st.write("There are", len(job_descriptions),
         "job descriptions present. Please select one from the menu below:")
output = st.slider('Select Job Description Number',
                   0, len(job_descriptions) - 1, 2)

avs.add_vertical_space(5)

st.write("You have selected",
         job_descriptions[output], "- printing the job description")
selected_jd = read_json(
    "Data/Processed/JobDescription/" + job_descriptions[output])

avs.add_vertical_space(2)
st.markdown("#### Job Description")
st.caption(
    "Currently in the pipeline I'm parsing this from PDF, but it'll be from txt or copy-paste.")
avs.add_vertical_space(3)
# st.json(selected_jd)
st.write(selected_jd["clean_data"])

st.markdown("#### Common words between the job description and the resume, highlighted.")

annotated_text(create_annotated_text(
    selected_file["clean_data"], selected_jd["extracted_keywords"],
    "JD", "#F24C3D"))

st.write("Now let's take a look at the extracted entities from the job description.")

# Call the function with your data
create_star_graph(selected_jd['keyterms'], "Entities from Job Description")

df2 = pd.DataFrame(selected_jd['keyterms'], columns=["keyword", "value"])

# Create the dictionary
keyword_dict = {}
for keyword, value in selected_jd['keyterms']:
    keyword_dict[keyword] = value * 100

fig = go.Figure(data=[go.Table(header=dict(values=["Keyword", "Value"],
                                           font=dict(size=12),
                                           fill_color='#070A52'),
                               cells=dict(values=[list(keyword_dict.keys()),
                                                  list(keyword_dict.values())],
                                          line_color='darkslategray',
                                          fill_color='#6DA9E4'))
                      ])
st.plotly_chart(fig)

st.divider()

fig = px.treemap(df2, path=['keyword'], values='value',
                 color_continuous_scale='Rainbow',
                 title='Key Terms/Topics Extracted from the selected Job Description')
st.write(fig)

avs.add_vertical_space(5)

st.divider()

st.markdown("## Vector Similarity Scores")
st.caption("Powered by Qdrant Vector Search")
st.info("These are pre-computed queries", icon="ℹ")
st.warning(
    "Running Qdrant or Sentence Transformers without enough capacity is not recommended", icon="⚠")


# Precomputed similarity scores
data = [
    {'text': "{'resume': 'Alfred Pennyworth",
     'query': 'Job Description Product Manager', 'score': 0.62658},
    {'text': "{'resume': 'Barry Allen",
     'query': 'Job Description Product Manager', 'score': 0.43777737},
    {'text': "{'resume': 'Bruce Wayne ",
     'query': 'Job Description Product Manager', 'score': 0.39835533},
    {'text': "{'resume': 'JOHN DOE",
     'query': 'Job Description Product Manager', 'score': 0.3915512},
    {'text': "{'resume': 'Harvey Dent",
     'query': 'Job Description Product Manager', 'score': 0.3519544},
    {'text': "{'resume': 'Barry Allen",
     'query': 'Job Description Senior Full Stack Engineer', 'score': 0.6541866},
    {'text': "{'resume': 'Alfred Pennyworth",
     'query': 'Job Description Senior Full Stack Engineer', 'score': 0.59806436},
    {'text': "{'resume': 'JOHN DOE",
     'query': 'Job Description Senior Full Stack Engineer', 'score': 0.5951386},
    {'text': "{'resume': 'Bruce Wayne ",
     'query': 'Job Description Senior Full Stack Engineer', 'score': 0.57700855},
    {'text': "{'resume': 'Harvey Dent",
     'query': 'Job Description Senior Full Stack Engineer', 'score': 0.38489106},
    {'text': "{'resume': 'Barry Allen",
     'query': 'Job Description Front End Engineer', 'score': 0.76813436},
    {'text': "{'resume': 'Bruce Wayne'",
     'query': 'Job Description Front End Engineer', 'score': 0.60440844},
    {'text': "{'resume': 'JOHN DOE",
     'query': 'Job Description Front End Engineer', 'score': 0.56080043},
    {'text': "{'resume': 'Alfred Pennyworth",
     'query': 'Job Description Front End Engineer', 'score': 0.5395049},
    {'text': "{'resume': 'Harvey Dent",
     'query': 'Job Description Front End Engineer', 'score': 0.3859515},
    {'text': "{'resume': 'JOHN DOE",
     'query': 'Job Description Java Developer', 'score': 0.5449441},
    {'text': "{'resume': 'Alfred Pennyworth",
     'query': 'Job Description Java Developer', 'score': 0.53476423},
    {'text': "{'resume': 'Barry Allen",
     'query': 'Job Description Java Developer', 'score': 0.5313871},
    {'text': "{'resume': 'Bruce Wayne ",
     'query': 'Job Description Java Developer', 'score': 0.44446343},
    {'text': "{'resume': 'Harvey Dent",
     'query': 'Job Description Java Developer', 'score': 0.3616274}
]

# Create a DataFrame
df = pd.DataFrame(data)

# Create different DataFrames based on the query and sort by score
df1 = df[df['query'] ==
         'Job Description Product Manager'].sort_values(by='score', ascending=False)
df2 = df[df['query'] ==
         'Job Description Senior Full Stack Engineer'].sort_values(by='score', ascending=False)
df3 = df[df['query'] == 'Job Description Front End Engineer'].sort_values(
    by='score', ascending=False)
df4 = df[df['query'] == 'Job Description Java Developer'].sort_values(
    by='score', ascending=False)


def plot_df(df, title):
    fig = px.bar(df, x='text', y=df['score'] * 100, title=title)
    st.plotly_chart(fig)


st.markdown("### Bar plots of scores based on similarity to the job description.")

st.subheader(":blue[Legend]")
st.text("Alfred Pennyworth : Product Manager")
st.text("Barry Allen : Front End Developer")
st.text("Harvey Dent : Machine Learning Engineer")
st.text("Bruce Wayne : Fullstack Developer (MERN)")
st.text("John Doe : Fullstack Developer (Java)")


plot_df(df1, 'Job Description Product Manager 10+ Years of Exper')
plot_df(df2, 'Job Description Senior Full Stack Engineer 5+ Year')
plot_df(df3, 'Job Description Front End Engineer 2 Years of Expe')
plot_df(df4, 'Job Description Java Developer 3 Years of Experien')


avs.add_vertical_space(3)

st.markdown(
    '⭐ Give Resume Matcher a Star on [GitHub](https://github.com/srbhr/Naive-Resume-Matching/)')
badge(type="github", name="srbhr/Naive-Resume-Matching")

st.text('For updates follow me on Twitter.')
badge(type="twitter", name="_srbhr_")

st.markdown(
    'If you like the project and would like to further help in development please consider 👇')
badge(type="buymeacoffee", name="srbhr")
```
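The `create_annotated_text` helper in streamlit_second.py drives the keyword highlighting: keyword tokens become `(token, label, color)` tuples, everything else stays a plain string. Its core logic can be exercised without nltk or Streamlit; this sketch swaps in a simple whitespace tokenizer (the real helper uses `nltk.word_tokenize`), and the input text and keyword list are made up.

```python
from typing import List, Tuple, Union

def annotate_tokens(text: str, keywords: List[str], label: str,
                    color: str) -> List[Union[str, Tuple[str, str, str]]]:
    """Whitespace-tokenize text; wrap keyword tokens in (token, label, color)
    tuples, mirroring create_annotated_text's output format."""
    keyword_set = set(keywords)  # set for O(1) membership checks
    return [(tok, label, color) if tok in keyword_set else tok
            for tok in text.split()]

out = annotate_tokens("senior python developer", ["python"], "KW", "#0B666A")
# → ["senior", ("python", "KW", "#0B666A"), "developer"]
```

The resulting mixed list of strings and tuples is exactly the shape that `annotated_text(...)` renders, which is why the app can pass the helper's return value straight through.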