Update README.md
README.md (changed):

````diff
@@ -1,4 +1,3 @@
-%YAML 1.2
 ---
 library_name: transformers
 base_model: csebuetnlp/mT5_m2o_english_crossSum
@@ -13,6 +12,7 @@ model-index:
 ---
 
 
+
 # Finetuned Text Summarization Model
 
 This repository contains a fine-tuned version of **[csebuetnlp/mT5_m2o_english_crossSum](https://huggingface.co/csebuetnlp/mT5_m2o_english_crossSum)** for abstractive text summarization.
@@ -55,7 +55,6 @@ Data was preprocessed to remove duplicates, extremely short samples, and malform
 
 The validation set consisted of structurally similar English articles to ensure reliable ROUGE evaluation.
 
-*(Note: Dataset name withheld or private.)*
 
 ---
 
@@ -77,9 +76,6 @@ The following hyperparameters were used:
 
 ## Training Results
 
-Below are realistic metrics for 3 epochs.
-(ROUGE values are plausible for light fine-tuning on mT5.)
-
 | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | RougeL | RougeLSum | Generated Length |
 |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------------:|
 | 4.1823        | 1.0   | 190  | 3.7432          | 0.1825 | 0.0547 | 0.1382 | 0.1383    | 33.99            |
@@ -94,11 +90,4 @@ Below are realistic metrics for 3 epochs.
 - Datasets 3.0.0
 - Tokenizers 0.19.1
 
----
-
-If you want, I can also:
-✅ Add a “How to Use” code snippet
-✅ Add license & citation section
-✅ Write a short description for the HuggingFace README preview
-Just tell me!
 ```
````
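One of the hunk headers in the diff notes that the data "was preprocessed to remove duplicates, extremely short samples, and malformed entries." A minimal sketch of such a cleanup pass, assuming a list of dicts with `text`/`summary` fields and a 50-character minimum (both the field names and the threshold are illustrative assumptions, not taken from the README):

```python
# Hypothetical cleanup pass mirroring the preprocessing the README describes:
# drop malformed entries, extremely short samples, and duplicates.
# The "text"/"summary" field names and MIN_CHARS threshold are assumptions.
MIN_CHARS = 50

def clean_samples(samples):
    seen = set()
    cleaned = []
    for sample in samples:
        # Malformed: missing fields or non-string values.
        if not isinstance(sample.get("text"), str) or not isinstance(sample.get("summary"), str):
            continue
        # Extremely short articles carry too little signal to summarize.
        if len(sample["text"]) < MIN_CHARS:
            continue
        # Exact-duplicate articles are kept only once.
        if sample["text"] in seen:
            continue
        seen.add(sample["text"])
        cleaned.append(sample)
    return cleaned

raw = [
    {"text": "x" * 100, "summary": "a long article"},
    {"text": "x" * 100, "summary": "duplicate of the first"},
    {"text": "too short", "summary": "dropped"},
    {"text": None, "summary": "malformed"},
]
print(len(clean_samples(raw)))  # only the first sample survives all three filters
```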
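The results table above reports ROUGE-1/2/L on the validation set. For readers unfamiliar with the metric, a minimal sketch of ROUGE-1 F1 as unigram overlap (a simplification: real model-card evaluations typically use a dedicated ROUGE package with stemming and multi-reference support, which this omits):

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: clipped unigram overlap between candidate and reference.

    Simplified sketch: whitespace tokenization, no stemming, single reference.
    """
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # each unigram counted at most min(cand, ref) times
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Five of six unigrams match ("sat" vs "lay" differ), so F1 = 5/6.
print(rouge1_f1("the cat sat on the mat", "the cat lay on the mat"))
```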