Update README.md
- Max Input Length: 512 tokens
- Max Output Length: 64 tokens
### Full-shot learning model

For a more general-purpose summarization model, check out the full model trained on the entire XSUM dataset: [fulltrain-xsum-bart](https://huggingface.co/bhargavis/fulltrain-xsum-bart).
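A minimal usage sketch for the full-dataset checkpoint linked above. The `summarize` helper name, its defaults, and the lazy import are illustrative assumptions, not part of this card; only the repository ID and the 512/64 token limits come from the text.

```python
def summarize(text, model_id="bhargavis/fulltrain-xsum-bart", max_summary_tokens=64):
    """Summarize `text` with the full-dataset XSUM checkpoint (sketch)."""
    from transformers import pipeline  # lazy import; downloads the model on first call
    summarizer = pipeline("summarization", model=model_id)
    # `truncation=True` clips over-long inputs at the encoder limit (512 tokens here);
    # `max_length` caps the generated summary at 64 tokens, matching the card.
    return summarizer(text, truncation=True, max_length=max_summary_tokens)[0]["summary_text"]
```

Swapping `model_id` for the few-shot checkpoint's repository ID would apply the same limits described above.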
### Performance
Due to the few-shot nature of this model, its performance is not directly comparable to models trained on the full XSUM dataset. However, it demonstrates the potential of few-shot learning for summarization tasks. Key metrics on the validation set (50 samples) include: