Upload README.md with huggingface_hub
README.md
CHANGED
@@ -8,8 +8,6 @@ configs:
   path: data/test-*
 dataset_info:
   features:
-  - name: 'Unnamed: 0'
-    dtype: int64
   - name: AssignmentId
     dtype: string
   - name: docId
@@ -21,9 +19,9 @@ dataset_info:
   - name: tid2
     dtype: int64
   - name: words1
-    dtype:
+    dtype: string
   - name: words2
-    dtype:
+    dtype: string
   - name: phrases1
     dtype: string
   - name: phrases2
@@ -36,15 +34,17 @@ dataset_info:
     dtype: string
   - name: summary2
     dtype: string
+  - name: __index_level_0__
+    dtype: int64
   splits:
   - name: train
-    num_bytes:
+    num_bytes: 11384802
     num_examples: 2400
   - name: test
-    num_bytes:
+    num_bytes: 2979312
     num_examples: 600
-  download_size:
-  dataset_size:
+  download_size: 7539242
+  dataset_size: 14364114
 ---
 # NORTS - Norwegian Topic-based Summarization Dataset
 Translated from NEWTS (NEWs Topic-based Summarization Dataset, https://github.com/ali-bahrainian/NEWTS) using the 1.3B NLLB model (https://huggingface.co/facebook/nllb-200-distilled-1.3B)
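The dropped `'Unnamed: 0'` column is the usual artifact of round-tripping a pandas DataFrame through CSV without `index=False`: the index is written as an unnamed first column, which `read_csv` then labels `Unnamed: 0`. A minimal sketch (column names are illustrative, not taken from this dataset's pipeline):

```python
import io

import pandas as pd

df = pd.DataFrame({"docId": ["d1", "d2"], "summary1": ["a", "b"]})

# Default to_csv writes the index as an unnamed leading column;
# read_csv gives that column the placeholder name 'Unnamed: 0'.
buf = io.StringIO()
df.to_csv(buf)
buf.seek(0)
round_tripped = pd.read_csv(buf)

# Passing index=False on write keeps the schema clean.
buf2 = io.StringIO()
df.to_csv(buf2, index=False)
buf2.seek(0)
clean = pd.read_csv(buf2)
```

Removing the column from the card, as this commit does, matches the fix of re-exporting the data without the stray index.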
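The added `__index_level_0__` feature is the Arrow-side counterpart of the same issue: when a pandas DataFrame with a non-default index (e.g. after a filter or train/test split) is converted to an Arrow table, the index is preserved as a `__index_level_0__` column by default. A minimal sketch with pyarrow, assuming this dataset's Parquet files were produced via pandas:

```python
import pandas as pd
import pyarrow as pa

df = pd.DataFrame({"docId": ["d1", "d2", "d3"], "tid1": [1, 2, 3]})
subset = df[df["tid1"] > 1]  # index is now [1, 2], no longer a default RangeIndex

# With the default preserve_index=None, a non-trivial index is kept
# as an extra '__index_level_0__' column in the Arrow table.
table = pa.Table.from_pandas(subset)

# Resetting the index (or passing preserve_index=False) avoids the column.
clean = pa.Table.from_pandas(subset.reset_index(drop=True))
```

Declaring the column in the card's `features`, as this commit does, simply makes the metadata match the Parquet schema that was actually uploaded.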