---
license: apache-2.0
language:
- en
library_name: transformers
---
# Model Card: bart_fine_tuned_model-v2
## Model Name

bart_fine_tuned_model-v2

### Model Description

This model is a fine-tuned version of facebook/bart-large, adapted for resume summarization. It has been trained to generate concise, relevant summaries from long resume texts; the fine-tuning process specialized the original BART model for summarization on a domain-specific dataset.
### Model Information

- **Base Model:** facebook/bart-large
- **Fine-tuning Dataset:** To be made available in the future.
### Training Parameters

- **Evaluation Strategy:** epoch
- **Learning Rate:** 5e-5
- **Per Device Train Batch Size:** 8
- **Per Device Eval Batch Size:** 8
- **Weight Decay:** 0.01
- **Save Total Limit:** 5
- **Number of Training Epochs:** 10
- **Predict with Generate:** True
- **Gradient Accumulation Steps:** 1
- **Optimizer:** paged_adamw_32bit
- **Learning Rate Scheduler Type:** cosine
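These values correspond to the fields of `Seq2SeqTrainingArguments` in the transformers library. A sketch of how they might be set (the `output_dir` is a hypothetical placeholder; this is an illustration, not the exact training script used for this model):

```python
from transformers import Seq2SeqTrainingArguments

# Hyperparameters copied from the list above; output_dir is hypothetical.
training_args = Seq2SeqTrainingArguments(
    output_dir="bart_fine_tuned_model-v2",
    evaluation_strategy="epoch",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    weight_decay=0.01,
    save_total_limit=5,
    num_train_epochs=10,
    predict_with_generate=True,
    gradient_accumulation_steps=1,
    optim="paged_adamw_32bit",
    lr_scheduler_type="cosine",
)
```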
## How to Use

**1.** Install the transformers library:

```bash
pip install transformers
```
**2.** Import the necessary modules:

```python
import torch
from transformers import BartTokenizer, BartForConditionalGeneration
```

**3.** Initialize the model and tokenizer:

```python
model_name = 'derekiya/bart_fine_tuned_model-v2'
tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)
```
**4.** Prepare the text for summarization:

```python
text = 'Your resume text here'
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding="max_length")
```

**5.** Generate the summary:

```python
min_length_threshold = 55
summary_ids = model.generate(
    inputs["input_ids"],
    num_beams=4,
    min_length=min_length_threshold,
    max_length=150,
    early_stopping=True,
)
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
```

**6.** Output the summary:

```python
print("Summary:", summary)
```
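Note that BART-large accepts at most 1024 input tokens, so the `truncation=True` flag above silently drops the tail of very long resumes. One workaround is to split a long resume into chunks and summarize each chunk separately. A minimal stdlib sketch (the word-count threshold is an assumed heuristic standing in for the token limit, not part of this model):

```python
def chunk_words(text, max_words=700):
    """Split text into chunks of at most max_words words each.

    Word count is a rough proxy for BART's 1024-token input limit;
    the 700-word threshold is an assumed heuristic.
    """
    words = text.split()
    return [
        " ".join(words[i:i + max_words])
        for i in range(0, len(words), max_words)
    ]

# A 1500-word resume becomes three chunks (700 + 700 + 100 words),
# each of which can be passed through the tokenizer/generate steps above.
chunks = chunk_words("word " * 1500)
print(len(chunks))  # 3
```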
## Model Card Authors

Dereje Hinsermu

## Model Card Contact