damndeepesh committed
Commit 6db7601 · 1 Parent(s): b226b08

Add application file

.gitignore ADDED
@@ -0,0 +1,2 @@
+ .env
+ /env
LICENSE ADDED
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
README.md CHANGED
@@ -1,13 +1,75 @@
- ---
- title: ResumeAnalyserGroq
- emoji: 🏃
- colorFrom: green
- colorTo: green
- sdk: streamlit
- sdk_version: 1.43.2
- app_file: app.py
- pinned: false
- license: apache-2.0
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # Resume ATS Scoring Application
+
+ An advanced NLP-based Resume ATS (Applicant Tracking System) scoring application built with Streamlit. This application helps users evaluate and improve their resumes for specific job roles.
+
+ ## Features
+
+ - **Resume Analysis**: Upload your resume and get an ATS compatibility score
+ - **Job Role Matching**: Specify your target job role for tailored analysis
+ - **Visualization**: View detailed graphs and charts of your resume's performance
+ - **Word Cloud**: Visual representation of key terms in your resume
+ - **History Comparison**: Compare previous resume versions to track improvements
+ - **Advanced NLP**: Powered by the Groq API for sophisticated natural language processing
+
+ ## Setup
+
+ ### Prerequisites
+
+ - Python 3.8+
+ - Groq API key
+
+ ### Installation
+
+ 1. Clone this repository
+ 2. Create and activate a virtual environment:
+    ```
+    python -m venv env
+    source env/bin/activate  # On Windows: env\Scripts\activate
+    ```
+ 3. Install the required packages:
+    ```
+    pip install -r requirements.txt
+    ```
+ 4. Create a `.env` file in the project root and add your Groq API key:
+    ```
+    GROQ_API_KEY=your_api_key_here
+    ```
+ 5. Run the application:
+    ```
+    streamlit run app.py
+    ```
+
+ ## Project Structure
+
+ ```
+ ├── app.py                # Main Streamlit application
+ ├── requirements.txt      # Project dependencies
+ ├── .env                  # Environment variables (not tracked by git)
+ ├── .gitignore            # Git ignore file
+ ├── README.md             # Project documentation
+ └── src/                  # Source code
+     ├── __init__.py       # Package initialization
+     ├── analyzer.py       # Resume analysis logic
+     ├── groq_client.py    # Groq API integration
+     ├── utils.py          # Utility functions
+     ├── visualization.py  # Data visualization components
+     └── pages/            # Streamlit pages
+         ├── __init__.py       # Package initialization
+         ├── home.py           # Home page
+         ├── analysis.py       # Analysis page
+         ├── visualization.py  # Visualization page
+         ├── suggestions.py    # Suggestions page
+         └── history.py        # History comparison page
+ ```
+
+ ## Usage
+
+ 1. Navigate to the home page
+ 2. Upload your resume (PDF, DOCX, or TXT format)
+ 3. Enter your target job role
+ 4. View your ATS score and detailed analysis
+ 5. Explore visualizations and recommendations
+ 6. Save your results for future comparison
+
+ ## License
+
+ Apache-2.0 license
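Step 4 of the setup above relies on `python-dotenv` reading `GROQ_API_KEY` out of the `.env` file at startup. As a rough sketch of what that lookup amounts to (not the library's actual implementation; `load_env_file` is a hypothetical stand-in), a `KEY=value` file can be parsed with the standard library alone:

```python
import os

def load_env_file(path=".env"):
    """Minimal stand-in for dotenv.load_dotenv: parse KEY=value lines
    into os.environ, skipping blank lines and '#' comments. Existing
    environment variables are not overwritten (setdefault)."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# In this project, app.py calls load_dotenv() instead, and
# src/groq_client.py then reads the key via os.getenv("GROQ_API_KEY").
```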
app.py ADDED
@@ -0,0 +1,80 @@
+ import streamlit as st
+ import os
+ from dotenv import load_dotenv
+
+ # Load environment variables
+ load_dotenv()
+
+ # Import page modules
+ from src.pages.home import show_home_page
+ from src.pages.analysis import show_analysis_page
+ from src.pages.visualization import show_visualization_page
+ from src.pages.history import show_history_page
+ from src.pages.suggestions import show_suggestions_page
+
+ # App configuration
+ st.set_page_config(
+     page_title="Resume ATS Scorer",
+     page_icon="📝",
+     layout="wide",
+     initial_sidebar_state="expanded"
+ )
+
+ # Custom CSS
+ st.markdown("""
+ <style>
+     .main-header {
+         font-size: 2.5rem;
+         color: #4a86e8;
+         text-align: center;
+         margin-bottom: 1rem;
+     }
+     .sub-header {
+         font-size: 1.5rem;
+         color: #666;
+         text-align: center;
+         margin-bottom: 2rem;
+     }
+ </style>
+ """, unsafe_allow_html=True)
+
+ # App title
+ st.markdown('<h1 class="main-header">Resume ATS Scoring System</h1>', unsafe_allow_html=True)
+ st.markdown('<p class="sub-header">Optimize your resume for Applicant Tracking Systems</p>', unsafe_allow_html=True)
+
+ # Session state initialization
+ if 'resume_data' not in st.session_state:
+     st.session_state.resume_data = None
+ if 'job_role' not in st.session_state:
+     st.session_state.job_role = ""
+ if 'analysis_results' not in st.session_state:
+     st.session_state.analysis_results = None
+ if 'history' not in st.session_state:
+     st.session_state.history = []
+
+ # Navigation tabs
+ tabs = st.tabs(["Home", "Analysis", "Visualization", "Suggestions", "History"])
+
+ # Display the appropriate page based on the selected tab
+ with tabs[0]:
+     show_home_page()
+
+ with tabs[1]:
+     show_analysis_page()
+
+ with tabs[2]:
+     show_visualization_page()
+
+ with tabs[3]:
+     show_suggestions_page()
+
+ with tabs[4]:
+     show_history_page()
+
+ # Footer
+ st.markdown("---")
+ st.markdown("""
+ <div style="text-align: center; color: #888; padding: 10px;">
+     <p>Resume ATS Scoring System | Powered by Groq API</p>
+ </div>
+ """, unsafe_allow_html=True)
requirements.txt ADDED
@@ -0,0 +1,22 @@
+ # Core dependencies
+ streamlit==1.22.0
+ python-dotenv==1.0.0
+ groq==0.18.0
+
+ # Document processing
+ pypdf2==3.0.1
+ python-docx==0.8.11
+
+ # Data processing and visualization
+ pandas==2.0.1
+ numpy==1.24.3
+ matplotlib==3.7.1
+ seaborn==0.12.2
+ wordcloud==1.9.2
+
+ # NLP libraries
+ spacy==3.6.0
+ scikit-learn==1.2.2
+
+ # Storage
+ pickle-mixin==1.0.2
src/__init__.py ADDED
@@ -0,0 +1,2 @@
+ # Resume ATS Scoring Application
+ # This package contains all the source code for the application
src/__pycache__/__init__.cpython-39.pyc ADDED
Binary file (140 Bytes).
src/__pycache__/groq_client.cpython-39.pyc ADDED
Binary file (6.12 kB).
src/analyzer.py ADDED
@@ -0,0 +1,173 @@
+ import os
+ import re
+ import spacy
+ from sklearn.feature_extraction.text import CountVectorizer
+ from src.groq_client import analyze_resume
+
+ # Load spaCy model
+ try:
+     nlp = spacy.load("en_core_web_sm")
+ except OSError:
+     # If the model is not installed, provide instructions
+     print("The spaCy model 'en_core_web_sm' is not installed.")
+     print("Please install it using: python3 -m spacy download en_core_web_sm")
+     # Create a simple placeholder model for basic functionality
+     nlp = spacy.blank("en")
+
+ def preprocess_text(text):
+     """Preprocess resume text for analysis
+
+     Args:
+         text (str): Raw text extracted from resume
+
+     Returns:
+         str: Preprocessed text
+     """
+     # Remove special characters and extra whitespace
+     text = re.sub(r'[^\w\s]', ' ', text)
+     text = re.sub(r'\s+', ' ', text).strip()
+
+     # Convert to lowercase
+     text = text.lower()
+
+     return text
+
+ def extract_keywords(text, job_role):
+     """Extract keywords from resume text
+
+     Args:
+         text (str): Preprocessed resume text
+         job_role (str): Target job role
+
+     Returns:
+         list: Extracted keywords
+     """
+     # Process the text with spaCy
+     doc = nlp(text)
+
+     # Extract nouns, proper nouns, and skill-related words
+     keywords = [token.text for token in doc if token.pos_ in ["NOUN", "PROPN"] and len(token.text) > 2]
+
+     # Use CountVectorizer to get the most common terms
+     vectorizer = CountVectorizer(max_features=50, stop_words='english', ngram_range=(1, 2))
+     X = vectorizer.fit_transform([text])
+     common_terms = vectorizer.get_feature_names_out()
+
+     # Combine and remove duplicates
+     all_keywords = list(set(keywords + list(common_terms)))
+
+     return all_keywords
+
+ def analyze_resume_local(resume_text, job_role):
+     """Perform local analysis on resume text before calling the Groq API
+
+     Args:
+         resume_text (str): Raw text extracted from resume
+         job_role (str): Target job role
+
+     Returns:
+         dict: Local analysis results
+     """
+     # Preprocess the text
+     processed_text = preprocess_text(resume_text)
+
+     # Extract keywords
+     keywords = extract_keywords(processed_text, job_role)
+
+     # Perform basic format analysis
+     format_score = calculate_format_score(resume_text)
+
+     # Perform basic readability analysis
+     readability_score = calculate_readability_score(resume_text)
+
+     return {
+         "local_keywords": keywords,
+         "local_format_score": format_score,
+         "local_readability_score": readability_score
+     }
+
+ def calculate_format_score(text):
+     """Calculate a basic format score for the resume
+
+     Args:
+         text (str): Resume text
+
+     Returns:
+         int: Format score (0-100)
+     """
+     score = 70  # Base score
+
+     # Check for section headers
+     section_patterns = ["experience", "education", "skills", "projects", "certifications", "summary"]
+     found_sections = 0
+     for pattern in section_patterns:
+         if re.search(r'\b' + pattern + r'\b', text.lower()):
+             found_sections += 1
+
+     # Adjust score based on sections found
+     section_score = min(found_sections * 5, 20)
+     score += section_score
+
+     # Check for bullet points
+     bullet_count = text.count('•') + text.count('·') + text.count('-')
+     bullet_score = min(bullet_count, 10)
+     score += bullet_score
+
+     return min(score, 100)  # Cap at 100
+
+ def calculate_readability_score(text):
+     """Calculate a basic readability score for the resume
+
+     Args:
+         text (str): Resume text
+
+     Returns:
+         int: Readability score (0-100)
+     """
+     # Base score
+     score = 70
+
+     # Split into sentences and words
+     sentences = re.split(r'[.!?]+', text)
+     sentences = [s.strip() for s in sentences if s.strip()]
+
+     # Calculate average sentence length
+     if sentences:
+         words = []
+         for sentence in sentences:
+             words.extend(sentence.split())
+
+         avg_sentence_length = len(words) / len(sentences)
+
+         # Penalize very long sentences
+         if avg_sentence_length > 25:
+             score -= 10
+         elif avg_sentence_length < 10:
+             score += 5
+
+     return min(max(score, 0), 100)  # Keep between 0-100
+
+ def get_resume_analysis(resume_text, job_role, job_description=None):
+     """Main function to analyze a resume
+
+     Args:
+         resume_text (str): Text extracted from resume
+         job_role (str): Target job role
+         job_description (str, optional): Specific job description for enhanced analysis
+
+     Returns:
+         dict: Complete analysis results
+     """
+     # First perform local analysis
+     local_results = analyze_resume_local(resume_text, job_role)
+
+     # Then call the Groq API for advanced analysis
+     groq_results = analyze_resume(resume_text, job_role, job_description)
+
+     # Combine results
+     combined_results = {
+         **groq_results,
+         "local_keywords": local_results["local_keywords"]
+     }
+
+     return combined_results
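The readability heuristic in `calculate_readability_score` above reduces to one rule: start from a base of 70, add 5 when the average sentence is under 10 words, subtract 10 when it exceeds 25, and clamp to 0-100. A condensed, self-contained restatement of that logic (mirroring the diff, not importing from it):

```python
import re

def readability_score(text):
    """Condensed mirror of analyzer.calculate_readability_score:
    base 70, +5 for short average sentences (<10 words),
    -10 for long ones (>25), clamped to 0-100."""
    score = 70
    # Split on sentence-ending punctuation, dropping empty fragments
    sentences = [s.strip() for s in re.split(r'[.!?]+', text) if s.strip()]
    if sentences:
        words = [w for s in sentences for w in s.split()]
        avg = len(words) / len(sentences)
        if avg > 25:
            score -= 10
        elif avg < 10:
            score += 5
    return min(max(score, 0), 100)
```

So a resume written in short, punchy bullet-like sentences scores 75, one with very long sentences drops to 60, and anything in between stays at the base of 70.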
src/groq_client.py ADDED
@@ -0,0 +1,172 @@
+ import os
+ import json
+ from groq import Groq
+
+ # Initialize Groq client
+ def get_groq_client():
+     """Initialize and return a Groq client using the API key from environment variables"""
+     api_key = os.getenv("GROQ_API_KEY")
+     if not api_key:
+         raise ValueError("GROQ_API_KEY environment variable not set. Please add it to your .env file.")
+     return Groq(api_key=api_key)
+
+ def analyze_resume(resume_text, job_role, job_description=None):
+     """Analyze a resume using the Groq API
+
+     Args:
+         resume_text (str): The extracted text from the resume
+         job_role (str): The target job role
+         job_description (str, optional): Specific job description for enhanced analysis
+
+     Returns:
+         dict: Analysis results including scores and recommendations
+     """
+     client = get_groq_client()
+
+     # Prepare the prompt for the Groq API
+     job_desc_text = ""
+     if job_description:
+         job_desc_text = f"""
+     JOB DESCRIPTION:
+     {job_description}
+
+     Please analyze the resume specifically against this job description, identifying exact matches and gaps.
+     """
+
+     prompt = f"""
+     You are an expert ATS (Applicant Tracking System) analyzer and resume consultant with deep knowledge of industry trends.
+     Please analyze the following resume for the role of {job_role}.
+
+     RESUME TEXT:
+     {resume_text}
+     {job_desc_text}
+     Provide a comprehensive analysis in the following JSON format:
+     {{
+         "ats_score": <overall score from 0-100>,
+         "keyword_match": <keyword match score from 0-100>,
+         "format_score": <format and structure score from 0-100>,
+         "readability_score": <readability score from 0-100>,
+         "document_structure_score": <score for document organization and flow from 0-100>,
+         "section_headers_score": <score for section header clarity and formatting from 0-100>,
+         "content_organization_score": <score for content layout and organization from 0-100>,
+         "visual_layout_score": <score for visual presentation and spacing from 0-100>,
+         "strengths": [<list of 3-5 strengths>],
+         "improvements": [<list of 3-5 areas for improvement>],
+         "missing_keywords": [<list of important keywords missing from the resume>],
+         "present_keywords": [<list of important keywords present in the resume>],
+         "recommendations": [<list of 3-5 specific recommendations to improve the resume>],
+         "skill_gaps": [<list of specific skills mentioned in the job description but missing from the resume>],
+         "experience_gaps": [<list of experience requirements mentioned in the job description but not evident in the resume>],
+         "resume_enhancement_tips": [<list of 3-5 specific ways to enhance the resume for this exact job>],
+         "format_tips": [<list of specific formatting recommendations>],
+         "industry_trends": [<list of 3-5 current trends in the industry relevant to the role>],
+         "career_recommendations": [<list of 3-5 career development suggestions based on the resume and role>],
+         "recommended_certifications": [<list of relevant certifications that would enhance the candidate's profile>]
+     }}
+
+     Ensure your analysis is detailed, specific to the {job_role} role, and actionable. Include current industry trends, career development paths, and relevant certifications for the role.
+     """
+
+     # Call the Groq API
+     try:
+         chat_completion = client.chat.completions.create(
+             messages=[
+                 {"role": "system", "content": "You are an expert ATS analyzer and resume consultant."},
+                 {"role": "user", "content": prompt}
+             ],
+             model="llama3-70b-8192",  # Using Llama 3 70B model for high-quality analysis
+             temperature=0.2,  # Low temperature for more consistent results
+             max_tokens=4000,  # Allow for detailed analysis
+             top_p=0.9
+         )
+
+         # Extract and parse the response
+         response_text = chat_completion.choices[0].message.content
+
+         # Find the JSON part in the response
+         try:
+             # Try to parse the entire response as JSON first
+             analysis_results = json.loads(response_text)
+         except json.JSONDecodeError:
+             # If that fails, try to extract JSON from the text
+             import re
+             json_match = re.search(r'\{\s*"ats_score".*\}', response_text, re.DOTALL)
+             if json_match:
+                 analysis_results = json.loads(json_match.group(0))
+             else:
+                 raise ValueError("Could not extract valid JSON from the API response")
+
+         # Ensure all required fields are present
+         required_fields = [
+             "ats_score", "keyword_match", "format_score", "readability_score",
+             "document_structure_score", "section_headers_score", "content_organization_score", "visual_layout_score",
+             "strengths", "improvements", "missing_keywords", "present_keywords", "recommendations", "format_tips",
+             "industry_trends", "career_recommendations", "recommended_certifications"
+         ]
+
+         for field in required_fields:
+             if field not in analysis_results:
+                 if field in ["missing_keywords", "present_keywords"]:
+                     analysis_results[field] = []
+                 elif field in ["strengths", "improvements", "recommendations"]:
+                     analysis_results[field] = ["No specific " + field + " identified."]
+                 else:
+                     analysis_results[field] = 70  # Default score
+
+         return analysis_results
+
+     except Exception as e:
+         # Handle API errors
+         error_msg = f"Error calling Groq API: {str(e)}"
+         print(error_msg)
+         raise Exception(error_msg)
+
+ # Example of how the analysis results should look
+ SAMPLE_ANALYSIS = {
+     "ats_score": 78,
+     "keyword_match": 72,
+     "format_score": 85,
+     "readability_score": 80,
+     "document_structure_score": 82,
+     "section_headers_score": 88,
+     "content_organization_score": 85,
+     "visual_layout_score": 80,
+     "strengths": [
+         "Clear section headings that are ATS-friendly",
+         "Good use of action verbs and quantifiable achievements",
+         "Relevant technical skills clearly listed",
+         "Consistent formatting throughout the document"
+     ],
+     "improvements": [
+         "Missing some key industry-specific keywords",
+         "Contact information could be more prominently displayed",
+         "Some bullet points are too lengthy for optimal ATS parsing",
+         "Education section lacks detail about relevant coursework"
+     ],
+     "missing_keywords": [
+         "project management",
+         "agile methodology",
+         "cross-functional",
+         "stakeholder management",
+         "KPI tracking"
+     ],
+     "present_keywords": [
+         "data analysis",
+         "team leadership",
+         "strategic planning",
+         "budget management"
+     ],
+     "recommendations": [
+         "Add more industry-specific keywords relevant to the job description",
+         "Shorten bullet points to 1-2 lines for better readability",
+         "Include a skills section with both technical and soft skills",
+         "Quantify more achievements with specific metrics and results",
+         "Ensure consistent date formatting throughout the resume"
+     ],
+     "format_tips": [
+         "Use consistent font sizes for section headers",
+         "Maintain standard margins throughout the document",
+         "Ensure proper spacing between sections",
+         "Use bullet points consistently for better readability"
+     ]
+ }
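The fallback branch in `analyze_resume` above recovers the JSON object when the model wraps its answer in prose. The same two-step parse in isolation, applied to a made-up response string (`reply` is an illustrative example, not real API output):

```python
import json
import re

def extract_json(response_text):
    """Mirror of the fallback parsing in groq_client.analyze_resume:
    try the whole text as JSON first, then regex out the object that
    starts at the "ats_score" key."""
    try:
        return json.loads(response_text)
    except json.JSONDecodeError:
        match = re.search(r'\{\s*"ats_score".*\}', response_text, re.DOTALL)
        if match:
            return json.loads(match.group(0))
        raise ValueError("Could not extract valid JSON from the API response")

# Hypothetical model output with surrounding prose:
reply = 'Here is the analysis:\n{ "ats_score": 78, "keyword_match": 72 }\nThanks.'
```

Note the greedy `.*\}` assumes the JSON object's closing brace is the last `}` in the response; if the model appended text containing braces, the extracted span would fail to parse.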
src/pages/__init__.py ADDED
@@ -0,0 +1,2 @@
+ # Pages package for the Resume ATS Scoring Application
+ # Contains all the page modules for the Streamlit application
src/pages/__pycache__/__init__.cpython-39.pyc ADDED
Binary file (146 Bytes).
src/pages/__pycache__/analysis.cpython-39.pyc ADDED
Binary file (5.27 kB).
src/pages/__pycache__/history.cpython-39.pyc ADDED
Binary file (3.7 kB).
src/pages/__pycache__/home.cpython-39.pyc ADDED
Binary file (4.38 kB).
src/pages/__pycache__/suggestions.cpython-39.pyc ADDED
Binary file (4.74 kB). View file
 
src/pages/__pycache__/visualization.cpython-39.pyc ADDED
Binary file (5.95 kB). View file
 
src/pages/analysis.py ADDED
@@ -0,0 +1,179 @@
+ import streamlit as st
+ import pandas as pd
+ import time
+ from datetime import datetime
+ import os
+ from src.groq_client import analyze_resume
+
+ def show_analysis_page():
+     """Display the analysis page with resume scoring and feedback"""
+     st.header("Resume Analysis")
+
+     # Check if resume data is available
+     if st.session_state.resume_data is None:
+         st.info("Please upload your resume on the Home page first.")
+         return
+
+     # Check if job role is available
+     if not st.session_state.job_role:
+         st.warning("Please specify your target job role on the Home page.")
+         return
+
+     # Display resume and job role information
+     st.subheader("Analysis Information")
+     col1, col2 = st.columns(2)
+     with col1:
+         st.markdown(f"**Resume**: {st.session_state.resume_data['filename']}")
+     with col2:
+         st.markdown(f"**Target Job Role**: {st.session_state.job_role}")
+
+     # Check if analysis results are already available
+     if st.session_state.analysis_results is None:
+         # Show analysis in progress
+         with st.spinner("Analyzing your resume... This may take a moment."):
+             try:
+                 # Call the Groq API to analyze the resume
+                 analysis_results = analyze_resume(
+                     resume_text=st.session_state.resume_data["text"],
+                     job_role=st.session_state.job_role,
+                     job_description=st.session_state.get("job_description", None)
+                 )
+
+                 # Store the analysis results in session state
+                 st.session_state.analysis_results = {
+                     **analysis_results,
+                     "timestamp": datetime.now().strftime("%Y-%m-%d %H:%M:%S")
+                 }
+
+                 # Add to history for comparison
+                 st.session_state.history.append({
+                     "filename": st.session_state.resume_data["filename"],
+                     "job_role": st.session_state.job_role,
+                     "timestamp": st.session_state.analysis_results["timestamp"],
+                     "ats_score": st.session_state.analysis_results["ats_score"],
+                     "keyword_match": st.session_state.analysis_results["keyword_match"],
+                     "format_score": st.session_state.analysis_results["format_score"],
+                     "readability_score": st.session_state.analysis_results["readability_score"]
+                 })
+
+                 # Success message
+                 st.success("Analysis completed successfully!")
+
+             except Exception as e:
+                 st.error(f"Error analyzing resume: {str(e)}")
+                 return
+
+     # Display the analysis results
+     results = st.session_state.analysis_results
+
+     # Overall ATS Score
+     st.subheader("ATS Compatibility Score")
+     score_col1, score_col2 = st.columns([1, 3])
+     with score_col1:
+         st.markdown(
+             f"<div style='background-color: {'#4CAF50' if results['ats_score'] >= 80 else '#FFC107' if results['ats_score'] >= 60 else '#F44336'}; "
+             f"padding: 20px; border-radius: 10px; text-align: center;'>"
+             f"<h1 style='color: white; margin: 0;'>{results['ats_score']}/100</h1>"
+             f"</div>",
+             unsafe_allow_html=True
+         )
+     with score_col2:
+         st.markdown(f"**Analysis Date**: {results['timestamp']}")
+         st.markdown(f"**Keyword Match**: {results['keyword_match']}/100")
+         st.markdown(f"**Format & Structure**: {results['format_score']}/100")
+         st.markdown(f"**Readability**: {results['readability_score']}/100")
+
+     # Detailed Format Scores
+     st.markdown("### Detailed Format Scores")
+     format_scores = {
+         "Document Structure": results.get('document_structure_score', 0),
+         "Section Headers": results.get('section_headers_score', 0),
+         "Content Organization": results.get('content_organization_score', 0),
+         "Visual Layout": results.get('visual_layout_score', 0)
+     }
+
+     for score_name, score in format_scores.items():
+         st.markdown(
+             f"<div style='margin-bottom: 10px;'>"
+             f"<span style='color: #666;'>{score_name}:</span> "
+             f"<span style='color: {'#4CAF50' if score >= 80 else '#FFC107' if score >= 60 else '#F44336'};'>"
+             f"{score}/100</span></div>",
+             unsafe_allow_html=True
+         )
+
+     # Format Tips
+     if format_tips := results.get('format_tips', []):
+         with st.expander("Format Improvement Tips", expanded=True):
+             for tip in format_tips:
+                 st.markdown(f"- {tip}")
+
+     # Detailed Analysis
+     st.subheader("Detailed Analysis")
+     with st.expander("Strengths", expanded=True):
+         for strength in results['strengths']:
+             st.markdown(f"- {strength}")
+
+     with st.expander("Areas for Improvement", expanded=True):
+         for improvement in results['improvements']:
+             st.markdown(f"- {improvement}")
+
+     # Missing Keywords
+     st.subheader("Missing Keywords")
+     st.markdown("These keywords are commonly found in job descriptions for your target role but are missing from your resume:")
+     missing_keywords = results['missing_keywords']
+     if missing_keywords:
+         # Display as a table
+         keyword_df = pd.DataFrame({
+             "Keyword": missing_keywords,
+             "Importance": ["High" if i < len(missing_keywords)//3 else
+                            "Medium" if i < 2*len(missing_keywords)//3 else
+                            "Low" for i in range(len(missing_keywords))]
+         })
+         st.dataframe(keyword_df, use_container_width=True)
+     else:
+         st.info("Great job! Your resume contains all the important keywords for this role.")
+
+     # Advanced Resume Review (when job description is provided)
+     if st.session_state.get("job_description"):
+         st.subheader("Advanced Resume Review")
+
+         # Create tabs for different aspects of the advanced review
+         review_tabs = st.tabs(["Skill Gaps", "Experience Gaps", "Enhancement Tips"])
+
+         # Skill Gaps tab
+         with review_tabs[0]:
+             st.markdown("### Skill Gaps")
+             st.markdown("These are specific skills mentioned in the job description that are missing from your resume:")
+             skill_gaps = results.get('skill_gaps', [])
+             if skill_gaps:
+                 for skill in skill_gaps:
+                     st.markdown(f"- {skill}")
+             else:
+                 st.success("Great job! Your resume covers all the key skills mentioned in the job description.")
+
+         # Experience Gaps tab
+         with review_tabs[1]:
+             st.markdown("### Experience Gaps")
+             st.markdown("These are experience requirements from the job description that aren't clearly demonstrated in your resume:")
+             experience_gaps = results.get('experience_gaps', [])
+             if experience_gaps:
+                 for exp in experience_gaps:
+                     st.markdown(f"- {exp}")
+             else:
+                 st.success("Your experience appears to match the job requirements well!")
+
+         # Enhancement Tips tab
+         with review_tabs[2]:
+             st.markdown("### Resume Enhancement Tips")
+             st.markdown("Specific ways to enhance your resume for this exact job:")
+             enhancement_tips = results.get('resume_enhancement_tips', [])
+             if enhancement_tips:
+                 for tip in enhancement_tips:
+                     st.markdown(f"- {tip}")
+
+     # Recommendations
+     st.subheader("Recommendations")
+     for recommendation in results['recommendations']:
+         st.markdown(f"- {recommendation}")
+
+     # Action buttons section removed as requested
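The `Importance` column above buckets missing keywords into thirds by list position, and the same comprehension is repeated in `suggestions.py`. Pulling it into a small helper would make the rule testable and remove the duplication; `tier_keywords` is a hypothetical refactor, not part of this commit:

```python
def tier_keywords(keywords):
    """Bucket keywords into High/Medium/Low by their position in the list.

    The first third is High, the middle third Medium, the rest Low --
    the same integer-division rule the pages use inline.
    """
    n = len(keywords)
    tiers = []
    for i, keyword in enumerate(keywords):
        if i < n // 3:
            tiers.append((keyword, "High"))
        elif i < 2 * n // 3:
            tiers.append((keyword, "Medium"))
        else:
            tiers.append((keyword, "Low"))
    return tiers
```

Note the tiering is purely positional: it assumes the model returns keywords roughly ordered by relevance, which is worth stating in the prompt if the ordering is meant to carry meaning.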
src/pages/history.py ADDED
@@ -0,0 +1,132 @@
+ import streamlit as st
+ import pandas as pd
+ import matplotlib.pyplot as plt
+ import seaborn as sns
+ from datetime import datetime
+
+ def show_history_page():
+     """Display the history page with previous resume analyses"""
+     st.header("Resume Analysis History")
+
+     # Check if there's any history data
+     if not st.session_state.history:
+         st.info("You haven't analyzed any resumes yet. Upload and analyze a resume to see your history.")
+         return
+
+     # Display history data
+     st.subheader("Previous Analyses")
+
+     # Create a DataFrame from history data
+     history_df = pd.DataFrame(st.session_state.history)
+
+     # Format the DataFrame for display
+     display_df = history_df[['timestamp', 'filename', 'job_role', 'ats_score', 'keyword_match', 'format_score', 'readability_score']].copy()
+     display_df.columns = ['Timestamp', 'Resume', 'Job Role', 'ATS Score', 'Keyword Match', 'Format Score', 'Readability Score']
+
+     # Display the history table
+     st.dataframe(display_df, use_container_width=True)
+
+     # Option to clear history
+     if st.button("Clear History", type="secondary"):
+         st.session_state.history = []
+         st.success("History cleared successfully!")
+         st.rerun()
+
+     # Trend analysis (if there are multiple entries)
+     if len(st.session_state.history) > 1:
+         st.subheader("Score Trend Analysis")
+
+         # Create trend data
+         trend_data = history_df.sort_values('timestamp')
+
+         # Plot the trend
+         fig, ax = plt.subplots(figsize=(12, 6))
+
+         # Plot each score type
+         ax.plot(trend_data['timestamp'], trend_data['ats_score'], marker='o', linewidth=2, label='ATS Score')
+         ax.plot(trend_data['timestamp'], trend_data['keyword_match'], marker='s', linewidth=2, label='Keyword Match')
+         ax.plot(trend_data['timestamp'], trend_data['format_score'], marker='^', linewidth=2, label='Format Score')
+         ax.plot(trend_data['timestamp'], trend_data['readability_score'], marker='d', linewidth=2, label='Readability Score')
+
+         # Customize the plot
+         ax.set_title('Resume Score Trends Over Time')
+         ax.set_ylim(0, 100)
+         ax.set_ylabel('Score')
+         ax.set_xlabel('Analysis Date')
+         ax.grid(True, linestyle='--', alpha=0.7)
+         ax.legend()
+
+         # Rotate x-axis labels for better readability
+         plt.xticks(rotation=45)
+         plt.tight_layout()
+
+         # Display the plot
+         st.pyplot(fig)
+
+         # Improvement analysis
+         if len(st.session_state.history) >= 2:
+             st.subheader("Improvement Analysis")
+
+             # Get the first and last entries
+             first_entry = trend_data.iloc[0]
+             last_entry = trend_data.iloc[-1]
+
+             # Calculate improvements
+             ats_improvement = last_entry['ats_score'] - first_entry['ats_score']
+             keyword_improvement = last_entry['keyword_match'] - first_entry['keyword_match']
+             format_improvement = last_entry['format_score'] - first_entry['format_score']
+             readability_improvement = last_entry['readability_score'] - first_entry['readability_score']
+
+             # Display improvements
+             col1, col2, col3, col4 = st.columns(4)
+
+             with col1:
+                 st.metric(
+                     "ATS Score Improvement",
+                     f"{last_entry['ats_score']}/100",
+                     delta=f"{ats_improvement:+.1f}"
+                 )
+
+             with col2:
+                 st.metric(
+                     "Keyword Match Improvement",
+                     f"{last_entry['keyword_match']}/100",
+                     delta=f"{keyword_improvement:+.1f}"
+                 )
+
+             with col3:
+                 st.metric(
+                     "Format Score Improvement",
+                     f"{last_entry['format_score']}/100",
+                     delta=f"{format_improvement:+.1f}"
+                 )
+
+             with col4:
+                 st.metric(
+                     "Readability Score Improvement",
+                     f"{last_entry['readability_score']}/100",
+                     delta=f"{readability_improvement:+.1f}"
+                 )
+
+             # Overall assessment
+             if ats_improvement > 0:
+                 st.success("Your resume has improved since your first analysis! Keep up the good work.")
+             elif ats_improvement == 0:
+                 st.info("Your overall score has remained the same. Check the specific metrics to see where you can improve.")
+             else:
+                 st.warning("Your overall score has decreased. This might be because you're targeting a different job role or have made changes that reduced ATS compatibility.")
+
+     # Tips for improvement
+     st.subheader("Tips for Improving Your Score")
+     st.markdown("""
+     ### How to Improve Your ATS Score
+
+     1. **Add missing keywords** from your target job descriptions
+     2. **Use standard section headings** that ATS systems can easily recognize
+     3. **Simplify formatting** by removing tables, text boxes, and complex layouts
+     4. **Quantify achievements** with numbers and metrics
+     5. **Use industry-standard terminology** relevant to your target role
+     6. **Tailor your resume** for each specific job application
+     7. **Use a clean, simple design** with standard fonts
+     8. **Save as PDF or DOCX** formats that are ATS-friendly
+     """)
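The improvement analysis in `history.py` sorts the history by timestamp and diffs the first and last entries. The same delta computation in isolation, with illustrative data (the column names match the history entries the app stores):

```python
import pandas as pd

# Two illustrative history entries, as show_analysis_page() would append them
history = pd.DataFrame([
    {"timestamp": "2024-01-01 10:00:00", "ats_score": 62, "keyword_match": 55},
    {"timestamp": "2024-02-01 10:00:00", "ats_score": 78, "keyword_match": 70},
])

# Sort by timestamp, then compare the earliest and latest entries
trend = history.sort_values("timestamp")
first, last = trend.iloc[0], trend.iloc[-1]
deltas = {col: last[col] - first[col] for col in ("ats_score", "keyword_match")}
```

One caveat: the timestamps are `%Y-%m-%d %H:%M:%S` strings, so the sort is lexicographic; that format happens to sort chronologically, which is why the code works without parsing to datetimes.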
src/pages/home.py ADDED
@@ -0,0 +1,155 @@
+ import streamlit as st
+ import os
+ import tempfile
+ from PyPDF2 import PdfReader
+ import docx
+ from datetime import datetime
+
+ def extract_text_from_pdf(file):
+     """Extract text from a PDF file"""
+     pdf_reader = PdfReader(file)
+     text = ""
+     for page in pdf_reader.pages:
+         text += page.extract_text() + "\n"
+     return text
+
+ def extract_text_from_docx(file):
+     """Extract text from a DOCX file"""
+     doc = docx.Document(file)
+     text = ""
+     for paragraph in doc.paragraphs:
+         text += paragraph.text + "\n"
+     return text
+
+ def show_home_page():
+     """Display the home page with resume upload and job role input"""
+     # Initialize session state variables if they don't exist
+     if "job_role" not in st.session_state:
+         st.session_state.job_role = ""
+     if "job_description" not in st.session_state:
+         st.session_state.job_description = ""
+     if "resume_data" not in st.session_state:
+         st.session_state.resume_data = None
+     if "active_tab" not in st.session_state:
+         st.session_state.active_tab = 0
+
+     st.header("Upload Your Resume")
+
+     # File uploader for resume
+     uploaded_file = st.file_uploader(
+         "Upload your resume (PDF or DOCX)",
+         type=["pdf", "docx", "txt"],
+         help="Upload your resume to get an ATS compatibility score"
+     )
+
+     # Job role input
+     job_role = st.text_input(
+         "Enter your target job role",
+         value=st.session_state.job_role if st.session_state.job_role else "",
+         placeholder="e.g., Data Scientist, Software Engineer, Product Manager",
+         help="Specify the job role you're applying for to get tailored analysis"
+     )
+
+     # Job description toggle and input
+     job_desc_toggle = st.checkbox(
+         "Add specific job description for enhanced analysis",
+         help="Toggle this to add a specific job description for more accurate ATS analysis"
+     )
+
+     job_description = ""
+     if job_desc_toggle:
+         job_description = st.text_area(
+             "Paste the job description",
+             value=st.session_state.get("job_description", ""),
+             height=200,
+             placeholder="Paste the full job description here for more precise keyword matching and tailored recommendations...",
+             help="Adding the actual job description significantly improves analysis accuracy"
+         )
+
+     # Process the uploaded file
+     if uploaded_file is not None:
+         try:
+             # Create a temporary file to store the uploaded file
+             with tempfile.NamedTemporaryFile(delete=False, suffix=f'.{uploaded_file.name.split(".")[-1]}') as tmp_file:
+                 tmp_file.write(uploaded_file.getvalue())
+                 tmp_file_path = tmp_file.name
+
+             # Extract text from the file based on its type
+             if uploaded_file.name.endswith('.pdf'):
+                 resume_text = extract_text_from_pdf(tmp_file_path)
+             elif uploaded_file.name.endswith('.docx'):
+                 resume_text = extract_text_from_docx(tmp_file_path)
+             else:  # Assume it's a text file
+                 resume_text = uploaded_file.getvalue().decode("utf-8")
+
+             # Clean up the temporary file
+             os.unlink(tmp_file_path)
+
+             # Store the resume data, job role, and job description in session state
+             st.session_state.resume_data = {
+                 "filename": uploaded_file.name,
+                 "text": resume_text,
+                 "upload_time": datetime.now().strftime("%Y-%m-%d %H:%M:%S")
+             }
+             st.session_state.job_role = job_role
+
+             # Store job description if provided
+             if job_desc_toggle and job_description:
+                 st.session_state.job_description = job_description
+             elif not job_desc_toggle:
+                 # Clear job description if toggle is off
+                 st.session_state.job_description = ""
+
+             # Display success message
+             st.success(f"Resume '{uploaded_file.name}' uploaded successfully!")
+
+             # Preview the extracted text
+             with st.expander("Preview Extracted Text"):
+                 st.text_area("Extracted Text", resume_text, height=300)
+
+         except Exception as e:
+             st.error(f"Error processing the file: {str(e)}")
+
+     # Instructions and tips
+     with st.expander("Tips for ATS-Friendly Resumes"):
+         st.markdown("""
+         ### Tips to Make Your Resume ATS-Friendly
+
+         1. **Use standard section headings** (e.g., Education, Experience, Skills)
+         2. **Include relevant keywords** from the job description
+         3. **Avoid using tables, headers, footers, and text boxes**
+         4. **Use standard fonts** like Arial, Calibri, or Times New Roman
+         5. **Save your resume as a simple PDF or DOCX file**
+         6. **Include your contact information** at the top of the resume
+         7. **Quantify your achievements** with numbers and metrics
+         8. **Proofread carefully** for spelling and grammar errors
+         """)
+
+     # Action buttons at the bottom of the page
+     if uploaded_file is not None and job_role:
+         col1, col2 = st.columns(2)
+         with col1:
+             if st.button("Save to History", type="primary", use_container_width=True, key="save_history_bottom"):
+                 # Create a basic entry with available information
+                 if "history" not in st.session_state:
+                     st.session_state.history = []
+
+                 history_entry = {
+                     "filename": st.session_state.resume_data["filename"],
+                     "job_role": st.session_state.job_role,
+                     "timestamp": st.session_state.resume_data["upload_time"],
+                     "ats_score": 0,  # Placeholder until analysis is done
+                     "keyword_match": 0,
+                     "format_score": 0,
+                     "readability_score": 0
+                 }
+
+                 st.session_state.history.append(history_entry)
+                 st.success("Resume saved to history!")
+         with col2:
+             if st.button("Analyze Resume", type="primary", use_container_width=True, key="analyze_bottom"):
+                 if job_role:
+                     st.session_state.active_tab = 1  # Switch to Analysis tab
+                     st.rerun()
+                 else:
+                     st.warning("Please enter your target job role before analyzing.")
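`home.py` dispatches on the uploaded file's extension before extracting text. A dependency-free sketch of the same dispatch, with only the `.txt` fallback shown end to end (the PDF and DOCX branches would call the `PyPDF2`/`python-docx` helpers above; `extract_text` itself is a hypothetical consolidation, not part of this commit):

```python
import os
import tempfile

def extract_text(path: str) -> str:
    """Dispatch on file extension, mirroring the home-page upload logic."""
    ext = os.path.splitext(path)[1].lower()
    if ext == ".pdf":
        raise NotImplementedError("would call extract_text_from_pdf(path)")
    if ext == ".docx":
        raise NotImplementedError("would call extract_text_from_docx(path)")
    # Fall back to reading the file as UTF-8 text, as home.py does
    with open(path, encoding="utf-8") as fh:
        return fh.read()

# Exercise the text branch with a throwaway file
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False, encoding="utf-8") as tmp:
    tmp.write("Data Scientist resume\n")
    tmp_path = tmp.name

text = extract_text(tmp_path)
os.unlink(tmp_path)
```

Dispatching on `os.path.splitext` rather than `str.endswith` also handles uppercase extensions like `.PDF`, which the current checks would miss.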
src/pages/suggestions.py ADDED
@@ -0,0 +1,155 @@
+ import streamlit as st
+ import pandas as pd
+
+ def show_suggestions_page():
+     """Display suggestions and recommendations for resume improvement"""
+     st.header("Resume Suggestions & Recommendations")
+
+     # Check if resume data is available
+     if st.session_state.resume_data is None:
+         st.info("Please upload your resume on the Home page first.")
+         return
+
+     # Check if analysis results are available
+     if st.session_state.analysis_results is None:
+         st.warning("Please analyze your resume first to get personalized suggestions.")
+         return
+
+     # Get analysis results
+     results = st.session_state.analysis_results
+
+     # Resume Overview Section
+     st.subheader("Resume Overview")
+     overview_col1, overview_col2 = st.columns(2)
+
+     with overview_col1:
+         st.markdown("### Current Resume Status")
+         st.markdown(f"**Overall ATS Score**: {results['ats_score']}/100")
+         st.markdown(f"**Keyword Match Rate**: {results['keyword_match']}/100")
+         st.markdown(f"**Format Score**: {results['format_score']}/100")
+         st.markdown(f"**Readability Score**: {results['readability_score']}/100")
+
+     with overview_col2:
+         st.markdown("### Target Job Role")
+         st.markdown(f"**Role**: {st.session_state.job_role}")
+         if st.session_state.get('job_description'):
+             st.markdown("✅ Using specific job description for analysis")
+         else:
+             st.markdown("ℹ️ Using general role requirements for analysis")
+
+     # Detailed Suggestions Tabs
+     suggestion_tabs = st.tabs(["Content Enhancement", "Skills & Keywords", "Format & Structure", "Industry Insights"])
+
+     # Content Enhancement Tab
+     with suggestion_tabs[0]:
+         st.markdown("### Content Enhancement Suggestions")
+
+         # Current Strengths
+         st.markdown("#### Current Strengths")
+         for strength in results['strengths']:
+             st.markdown(f"✅ {strength}")
+
+         # Areas for Improvement
+         st.markdown("#### Areas for Improvement")
+         for improvement in results['improvements']:
+             st.markdown(f"🔄 {improvement}")
+
+         # Specific Recommendations
+         st.markdown("#### Action Items")
+         for recommendation in results['recommendations']:
+             st.markdown(f"📌 {recommendation}")
+
+     # Skills & Keywords Tab
+     with suggestion_tabs[1]:
+         st.markdown("### Skills & Keywords Analysis")
+
+         # Present Keywords
+         st.markdown("#### Present Keywords")
+         present_keywords = results.get('present_keywords', [])
+         if present_keywords:
+             for keyword in present_keywords:
+                 st.markdown(f"✅ {keyword}")
+         else:
+             st.info("No keywords were identified in your resume. Consider adding relevant keywords from the job description.")
+
+         # Missing Keywords with Importance
+         st.markdown("#### Missing Keywords")
+         missing_keywords = results.get('missing_keywords', [])
+         if missing_keywords:
+             keyword_df = pd.DataFrame({
+                 "Keyword": missing_keywords,
+                 "Importance": ["High" if i < len(missing_keywords)//3 else
+                                "Medium" if i < 2*len(missing_keywords)//3 else
+                                "Low" for i in range(len(missing_keywords))]
+             })
+             st.dataframe(keyword_df, use_container_width=True)
+         else:
+             st.success("Great job! Your resume appears to contain all the important keywords for this role.")
+
+         # Industry-Specific Skills
+         st.markdown("#### Recommended Industry Skills")
+         if 'recommended_skills' in results and results['recommended_skills']:
+             for skill in results['recommended_skills']:
+                 st.markdown(f"💡 {skill}")
+         else:
+             st.info("No additional industry-specific skills recommendations available for your target role.")
+
+     # Format & Structure Tab
+     with suggestion_tabs[2]:
+         st.markdown("### Format & Structure Analysis")
+
+         # Format Score Breakdown
+         st.markdown("#### Format Score Components")
+         format_components = [
+             "Document Structure",
+             "Section Headers",
+             "Content Organization",
+             "Visual Layout"
+         ]
+         has_scores = False
+         for component in format_components:
+             score = results.get(f"{component.lower().replace(' ', '_')}_score", 0)
+             if score > 0:
+                 has_scores = True
+                 st.progress(score/100)
+                 st.markdown(f"**{component}**: {score}/100")
+
+         if not has_scores:
+             st.info("Detailed format scoring is not available. Please ensure your resume has been properly analyzed.")
+
+         # Format Recommendations
+         st.markdown("#### Format Improvement Tips")
+         format_tips = results.get('format_tips', [])
+         if format_tips:
+             for tip in format_tips:
+                 st.markdown(f"🔧 {tip}")
+         else:
+             st.info("No specific format improvement tips available. Your resume format may already be well-structured.")
+
+     # Industry Insights Tab
+     with suggestion_tabs[3]:
+         st.markdown("### Industry Insights & Trends")
+
+         # Job Market Trends
+         st.markdown("#### Current Job Market Trends")
+         if 'industry_trends' in results and results['industry_trends']:
+             for trend in results['industry_trends']:
+                 st.markdown(f"📈 {trend}")
+         else:
+             st.info("Industry trend data is not available at the moment.")
+
+         # Career Development Suggestions
+         st.markdown("#### Career Development Recommendations")
+         if 'career_recommendations' in results and results['career_recommendations']:
+             for rec in results['career_recommendations']:
+                 st.markdown(f"🎯 {rec}")
+         else:
+             st.info("Career development recommendations will be available after analyzing your resume against specific job requirements.")
+
+         # Certification Recommendations
+         st.markdown("#### Recommended Certifications")
+         if 'recommended_certifications' in results and results['recommended_certifications']:
+             for cert in results['recommended_certifications']:
+                 st.markdown(f"🏅 {cert}")
+         else:
+             st.info("No specific certification recommendations available for your target role at this time.")
src/pages/visualization.py ADDED
@@ -0,0 +1,241 @@
+ import streamlit as st
+ import pandas as pd
+ import numpy as np
+ import matplotlib.pyplot as plt
+ import seaborn as sns
+ from wordcloud import WordCloud
+ import re
+
+ def show_visualization_page():
+     """Display visualizations of resume analysis results"""
+     st.header("Resume Analysis Visualizations")
+
+     # Check if analysis results are available
+     if st.session_state.analysis_results is None:
+         st.info("Please analyze your resume on the Analysis page first.")
+         return
+
+     # Get analysis results
+     results = st.session_state.analysis_results
+     resume_text = st.session_state.resume_data["text"]
+     job_role = st.session_state.job_role
+
+     # Create tabs for different visualizations
+     viz_tabs = st.tabs(["Score Breakdown", "Word Cloud", "Keyword Analysis", "Comparison"])
+
+     # Score Breakdown tab
+     with viz_tabs[0]:
+         st.subheader("ATS Score Breakdown")
+
+         # Create data for the score breakdown chart
+         score_data = pd.DataFrame({
+             'Category': ['Overall ATS Score', 'Keyword Match', 'Format & Structure', 'Readability'],
+             'Score': [
+                 results['ats_score'],
+                 results['keyword_match'],
+                 results['format_score'],
+                 results['readability_score']
+             ]
+         })
+
+         # Create a horizontal bar chart
+         fig, ax = plt.subplots(figsize=(10, 6))
+         bars = ax.barh(
+             score_data['Category'],
+             score_data['Score'],
+             color=['#4a86e8', '#ff9900', '#6aa84f', '#e06666']
+         )
+
+         # Add score labels to the bars
+         for bar in bars:
+             width = bar.get_width()
+             ax.text(
+                 width + 2,
+                 bar.get_y() + bar.get_height()/2,
+                 f'{width}/100',
+                 va='center'
+             )
+
+         # Customize the chart
+         ax.set_xlim(0, 105)
+         ax.set_xlabel('Score')
+         ax.set_title('Resume ATS Score Breakdown')
+         ax.grid(axis='x', linestyle='--', alpha=0.7)
+
+         # Display the chart
+         st.pyplot(fig)
+
+         # Add explanation
+         st.markdown("""
+         ### Score Explanation
+
+         - **Overall ATS Score**: Composite score indicating how well your resume would perform in an ATS system
+         - **Keyword Match**: How well your resume matches keywords for the target job role
+         - **Format & Structure**: Assessment of your resume's formatting and structure for ATS compatibility
+         - **Readability**: How easy your resume is to read and understand
+         """)
+
+     # Word Cloud tab
+     with viz_tabs[1]:
+         st.subheader("Resume Word Cloud")
+
+         # Process text for word cloud
+         def preprocess_text(text):
+             # Remove special characters and numbers
+             text = re.sub(r'[^\w\s]', '', text)
+             text = re.sub(r'\d+', '', text)
+             # Convert to lowercase
+             text = text.lower()
+             # Remove common stop words (simplified version)
+             stop_words = ['and', 'the', 'to', 'of', 'in', 'a', 'for', 'with', 'on', 'at', 'from', 'by', 'an', 'is', 'was', 'were', 'are', 'be', 'been', 'being', 'have', 'has', 'had', 'do', 'does', 'did', 'but', 'or', 'as', 'if', 'while', 'because', 'so', 'than', 'that', 'this', 'these', 'those', 'then', 'not', 'no']
+             words = text.split()
+             filtered_words = [word for word in words if word not in stop_words and len(word) > 2]
+             return ' '.join(filtered_words)
+
+         processed_text = preprocess_text(resume_text)
+
+         # Generate word cloud
+         wordcloud = WordCloud(
+             width=800,
+             height=400,
+             background_color='white',
+             colormap='viridis',
+             max_words=100,
+             contour_width=1,
+             contour_color='steelblue'
+         ).generate(processed_text)
+
+         # Display word cloud
+         fig, ax = plt.subplots(figsize=(12, 8))
+         ax.imshow(wordcloud, interpolation='bilinear')
+         ax.axis('off')
+         st.pyplot(fig)
+
+         st.markdown("""
+         ### Word Cloud Analysis
+
+         The word cloud visualizes the most frequently used words in your resume.
+         Larger words appear more frequently. This can help you identify:
+
+         - Which terms are most prominent in your resume
+         - Whether your resume emphasizes the right skills and experiences
+         - Potential overused words that could be replaced with more impactful terms
+         """)
+
+     # Keyword Analysis tab
+     with viz_tabs[2]:
+         st.subheader("Keyword Analysis")
+
+         # Create data for present and missing keywords
+         present_keywords = results.get('present_keywords', [])
+         missing_keywords = results.get('missing_keywords', [])
+
+         # Display keyword match percentage
+         st.metric(
+             "Keyword Match Rate",
+             f"{results['keyword_match']}%",
+             delta=f"{len(present_keywords)} present, {len(missing_keywords)} missing"
+         )
+
+         # Create columns for present and missing keywords
+         col1, col2 = st.columns(2)
+
+         with col1:
+             st.markdown("### Present Keywords")
+             if present_keywords:
+                 for keyword in present_keywords:
+                     st.markdown(f"✅ {keyword}")
+             else:
+                 st.info("No matching keywords found.")
+
+         with col2:
+             st.markdown("### Missing Keywords")
+             if missing_keywords:
+                 for keyword in missing_keywords:
+                     st.markdown(f"❌ {keyword}")
+             else:
+                 st.success("Great job! Your resume contains all important keywords.")
+
+         # Keyword distribution chart
+         if present_keywords or missing_keywords:
+             st.subheader("Keyword Distribution")
+
+             # Create data for pie chart
+             labels = ['Present', 'Missing']
+             sizes = [len(present_keywords), len(missing_keywords)]
+             colors = ['#4CAF50', '#F44336']
+
+             # Create pie chart
+             fig, ax = plt.subplots(figsize=(8, 8))
+             ax.pie(
+                 sizes,
+                 labels=labels,
+                 colors=colors,
+                 autopct='%1.1f%%',
+                 startangle=90,
+                 shadow=True,
+                 explode=(0.05, 0)
+             )
+             ax.axis('equal')  # Equal aspect ratio ensures that pie is drawn as a circle
+
+             # Display the chart
+             st.pyplot(fig)
+
+     # Comparison tab (if history exists)
+     with viz_tabs[3]:
+         st.subheader("Historical Comparison")
+
+         if len(st.session_state.history) <= 1:
+             st.info("You need at least two resume analyses to see a comparison. Save your current analysis and upload a different version of your resume to compare.")
+         else:
+             # Create data for the comparison chart
+             history_data = pd.DataFrame(st.session_state.history)
+
+             # Select which versions to compare
+             selected_versions = st.multiselect(
+                 "Select resume versions to compare",
+                 options=history_data['timestamp'].tolist(),
+                 default=history_data['timestamp'].tolist()[-2:]
+             )
+
+             if selected_versions:
+                 # Filter data based on selection
+                 filtered_data = history_data[history_data['timestamp'].isin(selected_versions)]
+
+                 # Create a comparison chart
+                 st.subheader("Score Comparison")
+
+                 # Reshape data for grouped bar chart
+                 chart_data = pd.melt(
+                     filtered_data,
+                     id_vars=['timestamp', 'filename'],
+                     value_vars=['ats_score', 'keyword_match', 'format_score'],
+                     var_name='Score Type',
+                     value_name='Score'
+                 )
+
+                 # Create a grouped bar chart
+                 fig, ax = plt.subplots(figsize=(12, 8))
+                 sns.barplot(
+                     x='Score Type',
+                     y='Score',
+                     hue='timestamp',
+                     data=chart_data,
+                     palette='viridis'
+                 )
+
+                 # Customize the chart
+                 ax.set_title('Resume Score Comparison')
+                 ax.set_ylim(0, 100)
+                 ax.set_xlabel('Score Category')
+                 ax.set_ylabel('Score')
+                 ax.legend(title='Version')
+
+                 # Display the chart
+                 st.pyplot(fig)
+
+                 # Display a table with detailed comparison
+                 st.subheader("Detailed Comparison")
+                 comparison_table = filtered_data[['timestamp', 'filename', 'job_role', 'ats_score', 'keyword_match', 'format_score']]
+                 comparison_table.columns = ['Timestamp', 'Filename', 'Job Role', 'ATS Score', 'Keyword Match', 'Format Score']
241
+ st.dataframe(comparison_table, use_container_width=True)
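The `pd.melt` call in this hunk is what turns the wide history table (one column per score) into the long form that `sns.barplot` needs for a grouped chart. A minimal standalone sketch of that reshape, using hypothetical score values with the same field names as the diff:

```python
import pandas as pd

# Two hypothetical history entries mirroring the fields used above
filtered_data = pd.DataFrame([
    {"timestamp": "2024-01-01 10:00", "filename": "resume_v1.pdf",
     "ats_score": 70, "keyword_match": 60, "format_score": 80},
    {"timestamp": "2024-01-02 10:00", "filename": "resume_v2.pdf",
     "ats_score": 85, "keyword_match": 75, "format_score": 90},
])

# Wide -> long: one row per (version, score type) pair
chart_data = pd.melt(
    filtered_data,
    id_vars=["timestamp", "filename"],
    value_vars=["ats_score", "keyword_match", "format_score"],
    var_name="Score Type",
    value_name="Score",
)

print(chart_data.shape)  # (6, 4): 2 versions x 3 score types
```

Each original row yields three rows in `chart_data`, so `hue='timestamp'` groups the bars by resume version within each score category.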
src/utils.py ADDED
@@ -0,0 +1,96 @@
+ import os
+ import re
+ import pickle
+ from datetime import datetime
+ import pandas as pd
+
+ def save_analysis_to_history(analysis_results, resume_data, job_role):
+ """Create a history entry from the current analysis results
+
+ Args:
+ analysis_results (dict): The analysis results from the Groq API
+ resume_data (dict): The resume data, including filename and text
+ job_role (str): The target job role
+
+ Returns:
+ dict: The history entry that was created
+ """
+ # Build a history entry from the key scores
+ history_entry = {
+ "filename": resume_data["filename"],
+ "job_role": job_role,
+ "timestamp": datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
+ "ats_score": analysis_results["ats_score"],
+ "keyword_match": analysis_results["keyword_match"],
+ "format_score": analysis_results["format_score"],
+ "readability_score": analysis_results["readability_score"]
+ }
+
+ return history_entry
+
+ def format_timestamp(timestamp_str):
+ """Format a timestamp string for display
+
+ Args:
+ timestamp_str (str): Timestamp string in format '%Y-%m-%d %H:%M:%S'
+
+ Returns:
+ str: Formatted timestamp for display
+ """
+ try:
+ dt = datetime.strptime(timestamp_str, "%Y-%m-%d %H:%M:%S")
+ return dt.strftime("%b %d, %Y at %I:%M %p")
+ except ValueError:
+ return timestamp_str
+
+ def get_score_color(score):
+ """Get a color based on a score value
+
+ Args:
+ score (int): Score value (0-100)
+
+ Returns:
+ str: Hex color code
+ """
+ if score >= 80:
+ return "#4CAF50" # Green
+ elif score >= 60:
+ return "#FFC107" # Yellow/Amber
+ else:
+ return "#F44336" # Red
+
+ def clean_text(text):
+ """Clean and normalize text for analysis
+
+ Args:
+ text (str): Raw text
+
+ Returns:
+ str: Cleaned text
+ """
+ # Collapse runs of whitespace into single spaces
+ text = re.sub(r'\s+', ' ', text).strip()
+
+ # Remove special characters that might interfere with analysis
+ text = re.sub(r'[^\w\s.,;:!?\-\(\)]', ' ', text)
+
+ return text
+
+ def export_analysis_to_csv(history_data, filename="resume_analysis_history.csv"):
+ """Export analysis history to a CSV file
+
+ Args:
+ history_data (list): List of history entries
+ filename (str): Output filename
+
+ Returns:
+ str: Path to the saved file
+ """
+ # Convert to DataFrame
+ df = pd.DataFrame(history_data)
+
+ # Save to CSV
+ output_path = os.path.join(os.getcwd(), filename)
+ df.to_csv(output_path, index=False)
+
+ return output_path
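The helpers in `src/utils.py` are small and pure, so their behavior is easy to sanity-check in isolation. A quick standalone sketch restating `format_timestamp` and `get_score_color` (logic copied from the diff above, with the `strptime` failure handled via `ValueError`) so it runs without the module:

```python
from datetime import datetime

# Restated from src/utils.py so this snippet runs standalone
def format_timestamp(timestamp_str):
    try:
        dt = datetime.strptime(timestamp_str, "%Y-%m-%d %H:%M:%S")
        return dt.strftime("%b %d, %Y at %I:%M %p")
    except ValueError:
        return timestamp_str  # fall back to the raw string

def get_score_color(score):
    if score >= 80:
        return "#4CAF50"  # green
    elif score >= 60:
        return "#FFC107"  # yellow/amber
    return "#F44336"      # red

print(format_timestamp("2024-03-05 14:30:00"))  # Mar 05, 2024 at 02:30 PM
print(get_score_color(75))                      # #FFC107
```

Note the threshold boundaries: 80 and above maps to green, 60-79 to amber, everything below 60 to red, matching the score badges rendered in the app.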