Commit d4c87c2 (verified) · 0 parent(s)
Super-squash branch 'main' using huggingface_hub
.gitattributes
ADDED
@@ -0,0 +1,55 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.lz4 filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
# Audio files - uncompressed
*.pcm filter=lfs diff=lfs merge=lfs -text
*.sam filter=lfs diff=lfs merge=lfs -text
*.raw filter=lfs diff=lfs merge=lfs -text
# Audio files - compressed
*.aac filter=lfs diff=lfs merge=lfs -text
*.flac filter=lfs diff=lfs merge=lfs -text
*.mp3 filter=lfs diff=lfs merge=lfs -text
*.ogg filter=lfs diff=lfs merge=lfs -text
*.wav filter=lfs diff=lfs merge=lfs -text
# Image files - uncompressed
*.bmp filter=lfs diff=lfs merge=lfs -text
*.gif filter=lfs diff=lfs merge=lfs -text
*.png filter=lfs diff=lfs merge=lfs -text
*.tiff filter=lfs diff=lfs merge=lfs -text
# Image files - compressed
*.jpg filter=lfs diff=lfs merge=lfs -text
*.jpeg filter=lfs diff=lfs merge=lfs -text
*.webp filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,112 @@
---
dataset_info:
- config_name: cleaned
  features:
  - name: input
    dtype: string
  - name: output
    list: string
  - name: metadata
    dtype: string
  splits:
  - name: train
    num_bytes: 175281573
    num_examples: 1256
  - name: validation
    num_bytes: 23257908
    num_examples: 166
  - name: test
    num_bytes: 12708144
    num_examples: 86
  download_size: 128398306
  dataset_size: 211247625
- config_name: default
  features:
  - name: input
    dtype: string
  - name: output
    list: string
  - name: metadata
    dtype: string
  splits:
  - name: train
    num_bytes: 266540793
    num_examples: 1256
  - name: validation
    num_bytes: 35881749
    num_examples: 166
  - name: test
    num_bytes: 19669178
    num_examples: 86
  download_size: 142100992
  dataset_size: 322091720
configs:
- config_name: cleaned
  data_files:
  - split: train
    path: cleaned/train-*
  - split: validation
    path: cleaned/validation-*
  - split: test
    path: cleaned/test-*
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
license: odc-by
task_categories:
- text-classification
language:
- en
size_categories:
- 1K<n<10K
tags:
- long-context
- movie
- film
- screenplay
- narrative
---

# MulD: Movie Character Type Classification

This is the Movie Character Types task from [MuLD](https://arxiv.org/abs/2202.07362):

- Task: Classify characters as Hero/Protagonist or Villain/Antagonist
- Data: Movie scripts matched with Wikipedia plot summaries
- Method: Amazon Mechanical Turk annotation based on plot summaries
- Average length: ~45,000 tokens
- Challenge: Character-role understanding from the full narrative context

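For quick orientation, here is a minimal loading sketch using the `datasets` library. The split counts are taken from the card above; the Hub repository id is not shown in this commit view, so `"<org>/<repo>"` below is a placeholder, not the real path.

```python
# Split sizes as stated on the dataset card (shared by both configs).
splits = {"train": 1256, "validation": 166, "test": 86}
total = sum(splits.values())
print(f"total examples: {total}")  # 1256 + 166 + 86 = 1508

# Loading sketch (assumes the `datasets` library is installed;
# "<org>/<repo>" is a placeholder for the actual Hub repository id):
#
#   from datasets import load_dataset
#   ds = load_dataset("<org>/<repo>", name="cleaned")  # or name="default"
#   example = ds["train"][0]
#   # example["input"] holds the script text; example["output"] is a
#   # list of strings; example["metadata"] is a string.
```

The `cleaned` config points at `cleaned/*` parquet files and the `default` config at `data/*`, per the `configs` block in the frontmatter.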
```
@inproceedings{hudson-al-moubayed-2022-muld,
    title = "{M}u{LD}: The Multitask Long Document Benchmark",
    author = "Hudson, George and
      Al Moubayed, Noura",
    editor = "Calzolari, Nicoletta and
      B{\'e}chet, Fr{\'e}d{\'e}ric and
      Blache, Philippe and
      Choukri, Khalid and
      Cieri, Christopher and
      Declerck, Thierry and
      Goggi, Sara and
      Isahara, Hitoshi and
      Maegaard, Bente and
      Mariani, Joseph and
      Mazo, H{\'e}l{\`e}ne and
      Odijk, Jan and
      Piperidis, Stelios",
    booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.lrec-1.392/",
    pages = "3675--3685",
    abstract = "The impressive progress in NLP techniques has been driven by the development of multi-task benchmarks such as GLUE and SuperGLUE. While these benchmarks focus on tasks for one or two input sentences, there has been exciting work in designing efficient techniques for processing much longer inputs. In this paper, we present MuLD: a new long document benchmark consisting of only documents over 10,000 tokens. By modifying existing NLP tasks, we create a diverse benchmark which requires models to successfully model long-term dependencies in the text. We evaluate how existing models perform, and find that our benchmark is much more challenging than their `short document' equivalents. Furthermore, by evaluating both regular and efficient transformers, we show that models with increased context length are better able to solve the tasks presented, suggesting that future improvements in these models are vital for solving similar long document problems. We release the data and code for baselines to encourage further research on efficient NLP models."
}
```
cleaned/test-00000-of-00001.parquet
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ea45cdd68e30f45d58a73888571ad1a776781a6736e78b053a0c9b8b7672d588
size 7694867
cleaned/train-00000-of-00001.parquet
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c853b4b5b00a4d93c4ae5b5dfb9aa5d0fe75f762de06e0d3ef23b855aa484b72
size 106372800
cleaned/validation-00000-of-00001.parquet
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1f96b9e5262b8bcc880b42b253b12e2c904b68a4921acb9ca79ca6df5a877079
size 14330639
data/test-00000-of-00001.parquet
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c1bff830c307c4a5611ca0c90cf6e24488af916c7d351424b005393fab1df31d
size 8513064
data/train-00000-of-00001.parquet
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f15ca6b7ec2a4dc90dd8320601d35d7966f386aded77d2980233a88b5a12e07a
size 117729892
data/validation-00000-of-00001.parquet
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c6456a2ded50a22ae6b5bbd6528b262e1c05cd9a231ce7109c9469b1a2687a56
size 15858036