Update README.md
The Longformer Encoder-Decoder model is impressive! We would like to contribute by updating the README to include the `base_model` information, which addresses missing details in the model card.
README.md
CHANGED

```diff
@@ -1,6 +1,8 @@
 ---
 language: en
 license: apache-2.0
+base_model:
+- facebook/bart-base
 ---
 
 ## Introduction
@@ -13,4 +15,4 @@ This model is especially interesting for long-range summarization and question a
 
 ## Fine-tuning for down-stream task
 
-[This notebook](https://colab.research.google.com/drive/12LjJazBl7Gam0XBPy_y0CTOJZeZ34c2v?usp=sharing) shows how *led-base-16384* can effectively be fine-tuned on a downstream task.
+[This notebook](https://colab.research.google.com/drive/12LjJazBl7Gam0XBPy_y0CTOJZeZ34c2v?usp=sharing) shows how *led-base-16384* can effectively be fine-tuned on a downstream task.
```
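For reference, a sketch of what the YAML front matter at the top of README.md would look like after this change, reconstructed from the diff (field order and values taken directly from the added lines):

```yaml
---
language: en
license: apache-2.0
# base_model links this model card back to the checkpoint it was derived from
base_model:
- facebook/bart-base
---
```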