Duplicate from Mehyaar/Annotated_NER_PDF_Resumes
Co-authored-by: MehyarMlaweh <Mehyaar@users.noreply.huggingface.co>
- .gitattributes +55 -0
- README.md +90 -0
- ResumesJsonAnnotated.zip +3 -0
- ResumesPDF.zip +3 -0
.gitattributes
ADDED
@@ -0,0 +1,55 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.lz4 filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
# Audio files - uncompressed
*.pcm filter=lfs diff=lfs merge=lfs -text
*.sam filter=lfs diff=lfs merge=lfs -text
*.raw filter=lfs diff=lfs merge=lfs -text
# Audio files - compressed
*.aac filter=lfs diff=lfs merge=lfs -text
*.flac filter=lfs diff=lfs merge=lfs -text
*.mp3 filter=lfs diff=lfs merge=lfs -text
*.ogg filter=lfs diff=lfs merge=lfs -text
*.wav filter=lfs diff=lfs merge=lfs -text
# Image files - uncompressed
*.bmp filter=lfs diff=lfs merge=lfs -text
*.gif filter=lfs diff=lfs merge=lfs -text
*.png filter=lfs diff=lfs merge=lfs -text
*.tiff filter=lfs diff=lfs merge=lfs -text
# Image files - compressed
*.jpg filter=lfs diff=lfs merge=lfs -text
*.jpeg filter=lfs diff=lfs merge=lfs -text
*.webp filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,90 @@
---
license: mit
language:
- en
size_categories:
- 1K<n<10K
---
**IT Skills Named Entity Recognition (NER) Dataset**

## Description

This dataset contains **5,029** curriculum vitae (CV) samples, each annotated with IT skills for **Named Entity Recognition (NER)**. The text was extracted from PDF resumes, the skills were labeled manually, and the data is provided in JSON format. The dataset is well suited to training and evaluating NER models that extract IT skills from CVs.

## Highlights

- **5,029 CV samples** with annotated IT skills
- **Manual annotations** of IT skills for Named Entity Recognition (NER)
- **Text extracted from PDFs** and annotated for IT skills
- **JSON format** for easy integration with NLP tools such as spaCy
- **A solid resource** for training and evaluating IT-skill extraction models

## Dataset Details

- **Total CVs:** 5,029
- **Data Format:** JSON files
- **Annotations:** IT skills labeled for Named Entity Recognition

## Data Description

Each JSON file in the dataset contains the following fields:

| Field         | Description                                                                        |
|---------------|------------------------------------------------------------------------------------|
| `text`        | The text extracted from the CV PDF                                                 |
| `annotations` | A list of IT skills annotated in the text, stored as `[start, end, label]` triples |

Each annotation consists of:

- `start`: starting position of the skill in the text (zero-based index)
- `end`: ending position of the skill in the text (zero-based index, exclusive)
- `label`: the type of the entity (IT skill)

### Example JSON File

Here is an example of the JSON structure used in the dataset:

```json
{
  "text": "One97 Communications Limited \nData Scientist Jan 2019 to Till Date \nDetect important information from images and redact\nrequired fields. YOLO CNN Object-detection, OCR\nInsights, find anomaly or performance drop in all\npossible sub-space. \nPredict the Insurance claim probability. Estimate the\npremium amount to be charged\nB.Tech(Computer Science) from SGBAU university in\n2017. \nM.Tech (Computer Science Engineering) from Indian\nInstitute of Technology (IIT), Kanpur in 2019WORK EXPERIENCE\nEDUCATIONMACY WILLIAMS\nDATA SCIENTIST\nData Scientist working on problems related to market research and customer analysis. I want to expand my arsenal of\napplication building and work on different kinds of problems. Looking for a role where I can work with a coordinative team\nand exchange knowledge during the process.\nJava, C++, Python, Machine Learning, Algorithms, Natural Language Processing, Deep Learning, Computer Vision, Pattern\nRecognition, Data Science, Data Analysis, Software Engineer, Data Analyst, C, PySpark, Kubeflow.ABOUT\nSKILLS\nCustomer browsing patterns.\nPredict potential RTO(Return To Origin) orders for e-\ncommerce.\nObject Detection.PROJECTS\nACTIVITES",
  "annotations": [
    [657, 665, "SKILL: Building"],
    [822, 828, "SKILL: python"],
    [811, 815, "SKILL: java"],
    [781, 790, "SKILL: Knowledge"],
    [877, 887, "SKILL: Processing"],
    [194, 205, "SKILL: performance"],
    [442, 452, "SKILL: Technology"],
    [1007, 1014, "SKILL: PySpark"],
    [30, 44, "SKILL: Data Scientist"],
    ...
  ]
}
```
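
Because `start`/`end` are zero-based character offsets with an exclusive end, `text[start:end]` recovers exactly the annotated span. A minimal sketch with a synthetic record (not taken from the dataset) illustrating the convention:

```python
# Synthetic record for illustration; annotations follow the dataset's
# [start, end, label] triple layout.
text = "Skilled in Python and PySpark."
annotations = [
    [11, 17, "SKILL: Python"],
    [22, 29, "SKILL: PySpark"],
]

# Slicing with the annotation's offsets yields the labeled span,
# since `end` is exclusive.
for start, end, label in annotations:
    print(f"{text[start:end]!r} -> {label}")
```

Slicing like this is also a quick sanity check that offsets line up with the text as extracted from the PDF.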

## Usage

This dataset can be used for:

- Training Named Entity Recognition (NER) models to identify IT skills in text.
- Evaluating NER models on extracting IT skills from CVs.
- Developing NLP applications for skill extraction and job matching.

## How to Load and Use the Data

To load the data, you can use the following Python code:

```python
import json
import os

# Path to the directory containing the JSON files
directory_path = "path/to/your/json/files"

# Load all JSON files in the directory
data = []
for filename in os.listdir(directory_path):
    if filename.endswith(".json"):
        with open(os.path.join(directory_path, filename), "r", encoding="utf-8") as file:
            data.append(json.load(file))

# Access the first CV's text and annotations
first_cv = data[0]
text = first_cv["text"]
annotations = first_cv["annotations"]

print(f"Text: {text}")
print(f"Annotations: {annotations}")
```
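
Since the annotations are plain character offsets, a record converts directly into the `(text, {"entities": [...]})` tuples used by spaCy v3 training pipelines. A minimal sketch; the helper name and the sample record below are hypothetical:

```python
def to_spacy_tuple(record):
    """Map a dataset record to spaCy's (text, annotations-dict) tuple."""
    entities = [(start, end, label) for start, end, label in record["annotations"]]
    return record["text"], {"entities": entities}

# Hypothetical sample record in the dataset's layout.
record = {
    "text": "Skilled in PySpark.",
    "annotations": [[11, 18, "SKILL: PySpark"]],
}

train_example = to_spacy_tuple(record)
print(train_example)
```

From there, `spacy.training.Example.from_dict` (together with a `Doc` from your pipeline's tokenizer) builds training examples; note that spans which do not align with token boundaries after PDF extraction may need adjustment.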
ResumesJsonAnnotated.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:337012944890fe4a89d53733c100ea2bb37e387532826d10b9ebca6e94664bcd
size 17076627
ResumesPDF.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1142ad7d67bf4419e17261fe6cce4a8f977c463c26ac7232d5375ceced1659a7
size 585751064