---
library_name: transformers
license: mit
language:
- en
---

### Model Description

This model generates a reply template from the body of an email or message. It uses Microsoft's Phi-2 as the base model and was fine-tuned for 2 epochs on a Google Colab Tesla T4 GPU.

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** Anupam Wagle
- **Model type:** Text Generation
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** Microsoft Phi-2
## Uses

Use this model to generate a reply message based on the body of a previous email or message.
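A minimal inference sketch is shown below. The repository id `anupamwagle/phi2-email-template` and the prompt template are placeholders (assumptions, not documented in this card); substitute the actual Hub repo id and the format used during fine-tuning.

```python
def build_prompt(input_email: str) -> str:
    # Hypothetical prompt template -- the exact format used during
    # fine-tuning is not documented in this card.
    return f"### Input Email:\n{input_email}\n\n### Generated Email:\n"


def generate_reply(
    input_email: str,
    model_id: str = "anupamwagle/phi2-email-template",  # placeholder repo id
    max_new_tokens: int = 200,
) -> str:
    # Imported lazily so build_prompt can be used without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    inputs = tokenizer(build_prompt(input_email), return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

Calling `generate_reply("Hello Adam,\n\nCan you come to the party tonight?")` would return the prompt followed by the model's drafted reply.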
## Bias, Risks, and Limitations

The model was fine-tuned on a small dataset for only 2 epochs, so generated replies may be generic or off-target. For better results, increase the size of the dataset and the number of training epochs.
## Training Details

### Training Data

The dataset used for fine-tuning has the following JSON format:

```json
[
  {
    "input_email": "Hello Adam,\n\nCan you come to the party tonight after 6 PM?\nBest,\nSubash",
    "generated_email": "Hi Eve,\n\nThank you for the invitation. I'd love to come to the party tonight after 6 PM. Looking forward to it!\n\nBest,\nAdam"
  },
  ...
]
```
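Entries in this format can be joined into single training strings before tokenization. A minimal sketch follows; the separator headings are illustrative assumptions, not the exact tokens used to train this model.

```python
import json


def to_training_text(example: dict) -> str:
    # Join the input/output pair into one training string; the separator
    # headings here are illustrative, not the exact ones used for this model.
    return (f"### Input Email:\n{example['input_email']}\n\n"
            f"### Generated Email:\n{example['generated_email']}")


# Toy example record in the dataset's JSON format.
sample = json.loads(
    '{"input_email": "Hello Adam,\\n\\nCan you come to the party tonight?", '
    '"generated_email": "Hi,\\n\\nYes, I would love to come!"}'
)
print(to_training_text(sample))
```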
## Technical Specifications

This model was fine-tuned on a Google Colab Tesla T4 GPU for a total of 2 epochs.

### Model Architecture and Objective

The base model is Microsoft's Phi-2, quantized using the bitsandbytes library. Its primary objective is to generate reply messages based on previous messages.
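Loading the base model with bitsandbytes quantization might look like the sketch below. The 4-bit NF4 settings are assumptions; the exact quantization configuration used for this fine-tune is not documented in the card.

```python
def load_quantized_phi2():
    # Sketch of 4-bit NF4 loading via transformers + bitsandbytes; the exact
    # settings used for this model's fine-tuning are assumed, not documented.
    import torch
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig

    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.float16,
    )
    return AutoModelForCausalLM.from_pretrained(
        "microsoft/phi-2",
        quantization_config=bnb_config,
        device_map="auto",
    )
```

Quantizing the base model this way is what makes fine-tuning feasible within a T4's 16 GB of memory.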