Column schema (from the dataset viewer header):
- `id`: int64 (599M to 3.26B)
- `number`: int64 (1 to 7.7k)
- `title`: string (length 1 to 290)
- `body`: string (length 0 to 228k)
- `state`: string (2 classes)
- `html_url`: string (length 46 to 51)
- `created_at`: timestamp[s] (2020-04-14 10:18:02 to 2025-07-23 08:04:53)
- `updated_at`: timestamp[s] (2020-04-27 16:04:17 to 2025-07-23 18:53:44)
- `closed_at`: timestamp[s] (2020-04-14 12:01:40 to 2025-07-23 16:44:42)
- `user`: dict
- `labels`: list (length 0 to 4)
- `is_pull_request`: bool (2 classes)
- `comments`: list (length 0 to 0)
740,071,697
| 831
|
[GEM] Add WebNLG dataset
|
## Adding a Dataset
- **Name:** WebNLG
- **Description:** WebNLG consists of Data/Text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation of these triples (16,095 data inputs and 42,873 data-text pairs). The data is available in English and Russian.
- **Paper:** https://www.aclweb.org/anthology/P17-1017.pdf
- **Data:** https://webnlg-challenge.loria.fr/download/
- **Motivation:** Included in the GEM shared task, multilingual
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
|
closed
|
https://github.com/huggingface/datasets/issues/831
| 2020-11-10T16:46:48
| 2020-12-03T13:38:01
| 2020-12-03T13:38:01
|
{
"login": "yjernite",
"id": 10469459,
"type": "User"
}
|
[
{
"name": "dataset request",
"color": "e99695"
}
] | false
|
[] |
740,065,376
| 830
|
[GEM] add ToTTo Table-to-text dataset
|
## Adding a Dataset
- **Name:** ToTTo
- **Description:** ToTTo is an open-domain English table-to-text dataset with over 120,000 training examples that proposes a controlled generation task: given a Wikipedia table and a set of highlighted table cells, produce a one-sentence description.
- **Paper:** https://arxiv.org/abs/2004.14373
- **Data:** https://github.com/google-research-datasets/totto
- **Motivation:** Included in the GEM shared task
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
|
closed
|
https://github.com/huggingface/datasets/issues/830
| 2020-11-10T16:38:34
| 2020-12-10T13:06:02
| 2020-12-10T13:06:01
|
{
"login": "yjernite",
"id": 10469459,
"type": "User"
}
|
[
{
"name": "dataset request",
"color": "e99695"
}
] | false
|
[] |
740,061,699
| 829
|
[GEM] add Schema-Guided Dialogue
|
## Adding a Dataset
- **Name:** The Schema-Guided Dialogue Dataset
- **Description:** The Schema-Guided Dialogue (SGD) dataset consists of over 20k annotated multi-domain, task-oriented conversations between a human and a virtual assistant. These conversations involve interactions with services and APIs spanning 20 domains, ranging from banks and events to media, calendar, travel, and weather.
- **Paper:** https://arxiv.org/pdf/2002.01359.pdf https://arxiv.org/pdf/2004.15006.pdf
- **Data:** https://github.com/google-research-datasets/dstc8-schema-guided-dialogue
- **Motivation:** Included in the GEM shared task
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
|
closed
|
https://github.com/huggingface/datasets/issues/829
| 2020-11-10T16:33:44
| 2020-12-03T13:37:50
| 2020-12-03T13:37:50
|
{
"login": "yjernite",
"id": 10469459,
"type": "User"
}
|
[
{
"name": "dataset request",
"color": "e99695"
}
] | false
|
[] |
740,008,683
| 828
|
Add writer_batch_size attribute to GeneratorBasedBuilder
|
As specified in #741, one would need to specify a custom ArrowWriter batch size to avoid filling the RAM. Indeed, the default buffer size is 10,000 examples, but for multimodal datasets that contain images or videos we may want to reduce that.
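The idea can be sketched in plain Python (this is not the actual `ArrowWriter` API; the function name is made up for illustration): flushing a buffer every `writer_batch_size` examples bounds how many examples sit in RAM at once.

```python
def write_in_batches(examples, flush, writer_batch_size=10_000):
    """Buffer examples and flush every `writer_batch_size` items so RAM
    usage stays bounded -- the idea behind a configurable writer batch size."""
    buffer = []
    for example in examples:
        buffer.append(example)
        if len(buffer) >= writer_batch_size:
            flush(buffer)
            buffer = []
    if buffer:  # flush the remainder
        flush(buffer)

batches = []
write_in_batches(range(25), batches.append, writer_batch_size=10)
print([len(b) for b in batches])  # [10, 10, 5]
```

A smaller `writer_batch_size` trades more (smaller) flushes for a lower peak memory footprint, which is the point for image/video datasets.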
|
closed
|
https://github.com/huggingface/datasets/pull/828
| 2020-11-10T15:28:19
| 2020-11-10T16:27:36
| 2020-11-10T16:27:36
|
{
"login": "lhoestq",
"id": 42851186,
"type": "User"
}
|
[] | true
|
[] |
739,983,024
| 827
|
[GEM] MultiWOZ dialogue dataset
|
## Adding a Dataset
- **Name:** MultiWOZ (Multi-Domain Wizard-of-Oz)
- **Description:** 10k annotated human-human dialogues. Each dialogue consists of a goal, multiple user and system utterances as well as a belief state. Only system utterances are annotated with dialogue acts; there are no annotations from the user side.
- **Paper:** https://arxiv.org/pdf/2007.12720.pdf
- **Data:** https://github.com/budzianowski/multiwoz
- **Motivation:** Will likely be part of the GEM shared task
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
|
closed
|
https://github.com/huggingface/datasets/issues/827
| 2020-11-10T14:57:50
| 2022-10-05T12:31:13
| 2022-10-05T12:31:13
|
{
"login": "yjernite",
"id": 10469459,
"type": "User"
}
|
[
{
"name": "dataset request",
"color": "e99695"
}
] | false
|
[] |
739,976,716
| 826
|
[GEM] Add E2E dataset
|
## Adding a Dataset
- **Name:** E2E NLG dataset (for End-to-end natural language generation)
- **Description:** A dataset for training end-to-end, data-driven natural language generation systems in the restaurant domain. It consists of 5,751 dialogue-act Meaning Representations (structured data) with 8.1 reference free-text utterances per dialogue-act on average.
- **Paper:** https://arxiv.org/pdf/1706.09254.pdf https://arxiv.org/abs/1901.07931
- **Data:** http://www.macs.hw.ac.uk/InteractionLab/E2E/#data
- **Motivation:** This dataset will likely be included in the GEM shared task
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
|
closed
|
https://github.com/huggingface/datasets/issues/826
| 2020-11-10T14:50:40
| 2020-12-03T13:37:57
| 2020-12-03T13:37:57
|
{
"login": "yjernite",
"id": 10469459,
"type": "User"
}
|
[
{
"name": "dataset request",
"color": "e99695"
}
] | false
|
[] |
739,925,960
| 825
|
Add accuracy, precision, recall and F1 metrics
|
This PR adds several single metrics, namely:
- Accuracy
- Precision
- Recall
- F1
They all use the sklearn metrics of the same name under the hood. They provide several useful features when training a multilabel/multiclass model:
- macro/micro/per-label/weighted/binary/per-sample scores
- scoring only the selected labels (usually what we call the positive labels) while ignoring the negative ones. For example, in a Named Entity Recognition task, the positive labels are `PERSON`, `LOCATION`, and `ORGANIZATION`, and the negative one is `O`.
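For the second bullet, here is a plain-Python sketch of per-label scoring with ignored negative labels (no sklearn; the function name and shapes are made up for illustration):

```python
from collections import defaultdict

def per_label_scores(predictions, references, ignore_labels=()):
    """Per-label precision/recall/F1 that skips 'negative' labels such as
    'O' in NER -- a plain-Python sketch, not the sklearn-backed metric."""
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for pred, ref in zip(predictions, references):
        if pred == ref:
            tp[ref] += 1
        else:
            fp[pred] += 1  # predicted this label wrongly
            fn[ref] += 1   # missed the true label
    scores = {}
    for label in set(tp) | set(fp) | set(fn):
        if label in ignore_labels:
            continue
        p = tp[label] / (tp[label] + fp[label]) if tp[label] + fp[label] else 0.0
        r = tp[label] / (tp[label] + fn[label]) if tp[label] + fn[label] else 0.0
        f1 = 2 * p * r / (p + r) if p + r else 0.0
        scores[label] = {"precision": p, "recall": r, "f1": f1}
    return scores

scores = per_label_scores(
    ["PERSON", "O", "LOCATION", "O"],
    ["PERSON", "LOCATION", "LOCATION", "O"],
    ignore_labels=("O",),
)
```

Ignoring `O` keeps the scores from being dominated by the (usually overwhelming) negative class.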
|
closed
|
https://github.com/huggingface/datasets/pull/825
| 2020-11-10T13:50:35
| 2020-11-11T19:23:48
| 2020-11-11T19:23:43
|
{
"login": "jplu",
"id": 959590,
"type": "User"
}
|
[] | true
|
[] |
739,896,526
| 824
|
Discussion using datasets in offline mode
|
`datasets.load_dataset("csv", ...)` breaks if you have no connection (there is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too.
I'm creating this ticket to discuss a bit and gather what you have in mind or other propositions.
Here are some points to open the discussion:
- if you want to prepare your code/datasets on your machine (with an internet connection) but run it on another, offline machine, it won't work as is, even if you have all the files locally on that machine.
- AFAIK, you can make it work if you manually put the python files (csv.py for example) on this offline machine and change your code to `datasets.load_dataset("MY_PATH/csv.py", ...)`. But it would be much better if you could run the same code without modification when the files are available locally.
- I've also been considering the requirement of downloading Python code and executing it on your machine in order to use datasets. This can be an issue in a professional context. Downloading a CSV/H5 file is acceptable, but downloading an executable script can open many security issues. We certainly need a mechanism to at least "freeze" the dataset code you retrieved once, so that you can review it if you want and then be sure you use this one everywhere and not a version downloaded from the internet.
WDYT? (thanks)
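The behaviour asked for in the first two points amounts to a local-first resolver. A minimal sketch (this is not the actual `cached_path` implementation; all names are made up):

```python
import os
import tempfile

def cached_path_offline_first(url, cache_dir, download):
    """Resolve a remote file, preferring an existing local copy so the same
    code runs unchanged on an offline machine (hypothetical sketch)."""
    local_path = os.path.join(cache_dir, os.path.basename(url))
    if os.path.exists(local_path):
        return local_path  # offline machine: reuse the frozen copy
    return download(url, local_path)  # online machine: fetch and cache

# demo: a pre-cached file is reused and the network is never touched
cache_dir = tempfile.mkdtemp()
cached = os.path.join(cache_dir, "csv.py")
open(cached, "w").close()

def fail_download(url, path):
    raise ConnectionError("offline!")

path = cached_path_offline_first("https://example.com/csv.py", cache_dir, fail_download)
print(path == cached)  # True
```

Freezing the reviewed script in `cache_dir` and always resolving from there first would also address the security point about re-downloading executable code.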
|
closed
|
https://github.com/huggingface/datasets/issues/824
| 2020-11-10T13:10:51
| 2023-10-26T09:26:26
| 2022-02-15T10:32:36
|
{
"login": "mandubian",
"id": 77193,
"type": "User"
}
|
[
{
"name": "enhancement",
"color": "a2eeef"
},
{
"name": "generic discussion",
"color": "c5def5"
}
] | false
|
[] |
739,815,763
| 823
|
how processing in batch works in datasets
|
Hi,
I need to process my datasets in batches before they are passed to the dataloader; here is my code:
```python
from abc import ABC
from typing import Callable, Dict, Mapping

import torch
from datasets import load_dataset


class AbstractTask(ABC):
task_name: str = NotImplemented
preprocessor: Callable = NotImplemented
split_to_data_split: Mapping[str, str] = NotImplemented
tokenizer: Callable = NotImplemented
max_source_length: str = NotImplemented
max_target_length: str = NotImplemented
# TODO: should not be a task item, but cannot see other ways.
tpu_num_cores: int = None
# The arguments set are for all tasks and needs to be kept common.
def __init__(self, config):
self.max_source_length = config['max_source_length']
self.max_target_length = config['max_target_length']
self.tokenizer = config['tokenizer']
self.tpu_num_cores = config['tpu_num_cores']
def _encode(self, batch) -> Dict[str, torch.Tensor]:
batch_encoding = self.tokenizer.prepare_seq2seq_batch(
[x["src_texts"] for x in batch],
tgt_texts=[x["tgt_texts"] for x in batch],
max_length=self.max_source_length,
max_target_length=self.max_target_length,
padding="max_length" if self.tpu_num_cores is not None else "longest", # TPU hack
return_tensors="pt"
)
return batch_encoding.data
def data_split(self, split):
return self.split_to_data_split[split]
def get_dataset(self, split, n_obs=None):
split = self.data_split(split)
if n_obs is not None:
split = split+"[:{}]".format(n_obs)
dataset = load_dataset(self.task_name, split=split)
dataset = dataset.map(self.preprocessor, remove_columns=dataset.column_names)
dataset = dataset.map(lambda batch: self._encode(batch), batched=True)
dataset.set_format(type="torch", columns=['input_ids', 'token_type_ids', 'attention_mask', 'label'])
return dataset
```
I call it like:
`AutoTask.get(task, train_dataset_config).get_dataset(split="train", n_obs=data_args.n_train)`
This gives the following error. I believe it's because the data inside `dataset.map(lambda batch: self._encode(batch), batched=True)` is not processed in batches. Could you tell me how I can process the dataset in batches inside my function? Thanks
```
File "finetune_multitask_trainer.py", line 192, in main
    if training_args.do_train else None
File "finetune_multitask_trainer.py", line 191, in <dictcomp>
    split="train", n_obs=data_args.n_train) for task in data_args.task}
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 56, in get_dataset
    dataset = dataset.map(lambda batch: self._encode(batch), batched=True)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1236, in map
    update_data = does_function_return_dict(test_inputs, test_indices)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1207, in does_function_return_dict
    function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 56, in <lambda>
    dataset = dataset.map(lambda batch: self._encode(batch), batched=True)
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 37, in _encode
    [x["src_texts"] for x in batch],
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 37, in <listcomp>
    [x["src_texts"] for x in batch],
TypeError: string indices must be integers
```
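For reference, with `batched=True` the mapped function receives a dict of lists (one list per column), not a list of row dicts — which is exactly what triggers `TypeError: string indices must be integers` in `_encode`. A minimal illustration with a stand-in batch:

```python
# With batched=True, `batch` looks like this (one list per column):
batch = {"src_texts": ["hello", "world"], "tgt_texts": ["bonjour", "monde"]}

# Wrong: iterating `batch` yields the column *names* (strings), so
# x["src_texts"] indexes a string with a string and raises TypeError.
try:
    srcs = [x["src_texts"] for x in batch]
except TypeError as e:
    print(e)  # string indices must be integers

# Right: index the columns directly.
srcs = batch["src_texts"]
tgts = batch["tgt_texts"]
```

So inside `_encode`, `batch["src_texts"]` and `batch["tgt_texts"]` should be passed to the tokenizer instead of the list comprehensions.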
|
closed
|
https://github.com/huggingface/datasets/issues/823
| 2020-11-10T11:11:17
| 2020-11-10T13:11:10
| 2020-11-10T13:11:09
|
{
"login": "rabeehkarimimahabadi",
"id": 73364383,
"type": "User"
}
|
[
{
"name": "dataset request",
"color": "e99695"
}
] | false
|
[] |
739,579,314
| 822
|
datasets freezes
|
Hi, I want to load these two datasets and convert them to `Dataset` format in torch, but the code freezes for me. Could you have a look please? Thanks
```python
dataset1 = load_dataset("squad", split="train[:10]")
dataset1 = dataset1.set_format(type='torch', columns=['context', 'answers', 'question'])
dataset2 = load_dataset("imdb", split="train[:10]")
dataset2 = dataset2.set_format(type="torch", columns=["text", "label"])
print(len(dataset1))
```
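One likely culprit in the snippet above: in this version of the library `set_format` works in place and returns `None`, so reassigning `dataset1 = dataset1.set_format(...)` replaces the dataset with `None`. The pattern, illustrated with a stand-in class (not the real `datasets.Dataset`):

```python
class FakeDataset:
    """Stand-in for a Dataset whose set_format mutates in place and
    returns None (like list.sort) -- a sketch, not the real class."""
    def __init__(self):
        self.format = "python"

    def set_format(self, type=None, columns=None):
        self.format = type  # mutates in place, returns None

ds = FakeDataset()
result = ds.set_format(type="torch", columns=["text", "label"])
print(result)     # None -- so `ds = ds.set_format(...)` would lose the dataset
print(ds.format)  # torch
```

Calling `dataset1.set_format(...)` without reassignment avoids the problem.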
|
closed
|
https://github.com/huggingface/datasets/issues/822
| 2020-11-10T05:10:19
| 2023-07-20T16:08:14
| 2023-07-20T16:08:13
|
{
"login": "rabeehkarimimahabadi",
"id": 73364383,
"type": "User"
}
|
[
{
"name": "dataset bug",
"color": "2edb81"
}
] | false
|
[] |
739,506,859
| 821
|
`kor_nli` dataset isn't being loaded properly
|
There are two issues with the `kor_nli` dataset:
1. `csv.DictReader` fails to split features on tabs
- The label feature should not contain `None` values, but it does.
```python
kor_nli_train['train'].unique('gold_label')
# ['neutral', 'entailment', 'contradiction', None]
```
- I found the reason why there are `None` values in the label feature with the following code:
```python
from datasets import load_dataset
kor_nli_train = load_dataset('kor_nli', 'multi_nli')
for idx, example in enumerate(kor_nli_train['train']):
if example['gold_label'] is None:
print(idx, example)
break
# 16835 {'gold_label': None, 'sentence1': '<one very long field: dozens of tab- and newline-separated Korean examples merged into a single sentence1 value; Korean text garbled in this dump>', 'sentence2': 'contradiction'}
```
2. (Optional) It would be preferable to rename the features for compatibility with `run_glue.py` in 🤗 Transformers
- the `kor_nli` dataset has the same data structure as `multi_nli` and `xnli`
- renaming the features and changing the feature type of `gold_label` to `ClassLabel` might be helpful
```python
def _info(self):
return datasets.DatasetInfo(
description=_DESCRIPTION,
features=datasets.Features(
{
"premise": datasets.Value("string"),
"hypothesis": datasets.Value("string"),
"label": datasets.features.ClassLabel(names=["entailment", "neutral", "contradiction"]),
}
),
```
If you don't mind, I would like to fix this.
Thanks!
|
closed
|
https://github.com/huggingface/datasets/issues/821
| 2020-11-10T02:04:12
| 2020-11-16T13:59:12
| 2020-11-16T13:59:12
|
{
"login": "sackoh",
"id": 30492059,
"type": "User"
}
|
[] | false
|
[] |
739,387,617
| 820
|
Update quail dataset to v1.3
|
Updated quail to most recent version, to address the problem originally discussed [here](https://github.com/huggingface/datasets/issues/806).
|
closed
|
https://github.com/huggingface/datasets/pull/820
| 2020-11-09T21:49:26
| 2020-11-10T09:06:35
| 2020-11-10T09:06:35
|
{
"login": "ngdodd",
"id": 4889636,
"type": "User"
}
|
[] | true
|
[] |
739,250,624
| 819
|
Make save function use deterministic global vars order
|
The `dumps` function needs to be deterministic for the caching mechanism.
However, in #816 I noticed that one of dill's methods to recursively check the globals of a function may return the globals in a different order each time it's used. To fix that, I sort the globals by key in the `globs` dictionary.
I had to add a rectified `save_function` to the saving functions registry of the Pickler to make it work.
This should fix #816
|
closed
|
https://github.com/huggingface/datasets/pull/819
| 2020-11-09T18:12:03
| 2021-11-30T13:34:09
| 2020-11-11T15:20:51
|
{
"login": "lhoestq",
"id": 42851186,
"type": "User"
}
|
[] | true
|
[] |
739,173,861
| 818
|
Fix type hints pickling in python 3.6
|
Type hints can't be properly pickled in python 3.6. This was causing errors in the `run_mlm.py` script from `transformers` with python 3.6.
However cloudpickle proposed a [fix](https://github.com/cloudpipe/cloudpickle/pull/318/files) to make it work anyway.
The idea is just to implement the pickling/unpickling of parameterized type hints. There is one detail though: since in python 3.6 we can't use `isinstance` on type hints, we can't use the pickle saving functions registry directly. Therefore we just wrap the `save_global` method of the Pickler.
This should fix https://github.com/huggingface/transformers/issues/8212 for python 3.6 and make `run_mlm.py` support python 3.6
cc @sgugger
|
closed
|
https://github.com/huggingface/datasets/pull/818
| 2020-11-09T16:27:47
| 2020-11-10T09:07:03
| 2020-11-10T09:07:02
|
{
"login": "lhoestq",
"id": 42851186,
"type": "User"
}
|
[] | true
|
[] |
739,145,369
| 817
|
Add MRQA dataset
|
## Adding a Dataset
- **Name:** MRQA
- **Description:** Collection of different (subsets of) QA datasets all converted to the same format to evaluate out-of-domain generalization (the datasets come from different domains, distributions, etc.). Some datasets are used for training and others are used for evaluation. This dataset was collected as part of MRQA 2019's shared task
- **Paper:** https://arxiv.org/abs/1910.09753
- **Data:** https://github.com/mrqa/MRQA-Shared-Task-2019
- **Motivation:** Out-of-domain generalization is becoming (has become) a de-facto evaluation for NLU systems
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
|
closed
|
https://github.com/huggingface/datasets/issues/817
| 2020-11-09T15:52:19
| 2020-12-04T15:44:42
| 2020-12-04T15:44:41
|
{
"login": "VictorSanh",
"id": 16107619,
"type": "User"
}
|
[
{
"name": "dataset request",
"color": "e99695"
}
] | false
|
[] |
739,102,686
| 816
|
[Caching] Dill globalvars() output order is not deterministic and can cause cache issues.
|
Dill uses `dill.detect.globalvars` to get the globals used by a function in a recursive dump. `globalvars` returns a dictionary of all the globals that a dumped function needs. However the order of the keys in this dict is not deterministic and can cause caching issues.
To fix that one could register an implementation of dill's `save_function` in the `datasets` pickler that sorts the globals keys before dumping a function.
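The core of the proposed fix can be sketched in plain Python (using `pickle` instead of dill for the demonstration): sorting the globals by key before dumping makes the serialized bytes, and hence any cache hash computed from them, identical across runs.

```python
import pickle

def dump_globals_deterministically(globs):
    """Serialize a function's globals with keys sorted, so the bytes are
    stable regardless of dict iteration order -- a sketch of the fix,
    using pickle rather than dill's save_function registry."""
    return pickle.dumps(sorted(globs.items()))

# The same globals in different insertion orders produce identical bytes.
a = dump_globals_deterministically({"x": 1, "y": 2})
b = dump_globals_deterministically({"y": 2, "x": 1})
print(a == b)  # True
```

In `datasets` itself this would live in a custom `save_function` registered on the Pickler, as the issue suggests.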
|
closed
|
https://github.com/huggingface/datasets/issues/816
| 2020-11-09T15:01:20
| 2020-11-11T15:20:50
| 2020-11-11T15:20:50
|
{
"login": "lhoestq",
"id": 42851186,
"type": "User"
}
|
[] | false
|
[] |
738,842,092
| 815
|
Is dataset iterative or not?
|
Hi
I want to use your library for large-scale training, and I am not sure whether the datasets are implemented as iterable datasets.
Could you provide me with an example of how to use datasets as iterable datasets?
Thanks
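For context, "iterable" here means yielding one example at a time instead of materializing everything in memory. A plain-Python sketch of that behaviour (not the library's API):

```python
import os
import tempfile

def stream_examples(file_paths):
    """Lazily yield one example at a time from a list of text files, so
    memory use stays constant however large the files are (a sketch of
    'iterable dataset' behaviour, not the datasets library itself)."""
    for path in file_paths:
        with open(path) as f:
            for line in f:
                yield {"text": line.rstrip("\n")}

# demo with a throwaway file
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
    tmp.write("first\nsecond\n")
examples = list(stream_examples([tmp.name]))
os.unlink(tmp.name)
print(examples)  # [{'text': 'first'}, {'text': 'second'}]
```

(The `datasets` library of this era is memory-mapped rather than iterable, which is why the question comes up.)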
|
closed
|
https://github.com/huggingface/datasets/issues/815
| 2020-11-09T09:11:48
| 2020-11-10T10:50:03
| 2020-11-10T10:50:03
|
{
"login": "rabeehkarimimahabadi",
"id": 73364383,
"type": "User"
}
|
[
{
"name": "dataset request",
"color": "e99695"
}
] | false
|
[] |
738,500,443
| 814
|
Joining multiple datasets
|
Hi
I have multiple iterable datasets from your library with different sizes, and I want to join them so that each dataset is sampled equally (smaller datasets more often, larger ones less often). Could you tell me how to implement this in PyTorch? Thanks
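One common way to do this is a two-stage draw: first pick a dataset uniformly, then pick an example within it. A plain-Python sketch (names made up; a PyTorch version would wrap this in an `IterableDataset`):

```python
import random

def mixed_sampler(dataset_lists, num_samples, seed=0):
    """Draw examples so each dataset is picked with equal probability
    regardless of its size -- smaller datasets get oversampled."""
    rng = random.Random(seed)
    for _ in range(num_samples):
        ds = rng.choice(dataset_lists)  # uniform over datasets, not examples
        yield rng.choice(ds)            # then uniform within the chosen dataset

small = ["s0", "s1"]
large = [f"l{i}" for i in range(1000)]
draws = list(mixed_sampler([small, large], 10_000))
share_small = sum(d.startswith("s") for d in draws) / len(draws)
print(round(share_small, 2))  # close to 0.5 despite the 500x size difference
```

Replacing the uniform dataset choice with size-dependent weights (e.g. temperature sampling) gives intermediate mixes.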
|
closed
|
https://github.com/huggingface/datasets/issues/814
| 2020-11-08T16:19:30
| 2020-11-08T19:38:48
| 2020-11-08T19:38:48
|
{
"login": "rabeehkarimimahabadi",
"id": 73364383,
"type": "User"
}
|
[
{
"name": "dataset request",
"color": "e99695"
}
] | false
|
[] |
738,489,852
| 813
|
How to implement DistributedSampler with datasets
|
Hi,
I am using your datasets to define my dataloaders, and I am training finetune_trainer.py from the huggingface repo on them.
I need a DistributedSampler to be able to train the models on TPUs, distributing the load across the TPU cores. Could you tell me how I can implement the distributed sampler when using datasets, given that the datasets are iterable? To give you more context, I have multiple datasets and I need to write a sampler for this case. Thanks.
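The core of what `torch.utils.data.DistributedSampler` does can be sketched in plain Python: a deterministic per-epoch shuffle (same seed on every worker) followed by striding so each rank gets a disjoint slice. This is a simplified sketch; the real sampler also pads so all shards are exactly equal in length.

```python
import random

def shard_indices(dataset_len, rank, world_size, epoch=0):
    """Give each worker a disjoint slice of the (shuffled) indices --
    a simplified sketch of DistributedSampler, without padding."""
    indices = list(range(dataset_len))
    random.Random(epoch).shuffle(indices)  # same seed => same order on every worker
    return indices[rank::world_size]       # disjoint slice per rank

shards = [shard_indices(10, rank, world_size=4) for rank in range(4)]
covered = sorted(i for shard in shards for i in shard)
print(covered)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] -- each index on exactly one worker
```

Passing a different `epoch` reshuffles identically on all workers, which is what `set_epoch` does in the real sampler.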
|
closed
|
https://github.com/huggingface/datasets/issues/813
| 2020-11-08T15:27:11
| 2022-10-05T12:54:23
| 2022-10-05T12:54:23
|
{
"login": "rabeehkarimimahabadi",
"id": 73364383,
"type": "User"
}
|
[
{
"name": "dataset request",
"color": "e99695"
}
] | false
|
[] |
738,340,217
| 812
|
Too much logging
|
I'm doing this at the beginning of my script:
```python
from datasets.utils import logging as datasets_logging
datasets_logging.set_verbosity_warning()
```
but I'm still getting these logs:
```
[2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1c145a0a210d5b9cdce2b60002831e6ed0edc7ab9275d6f0d48.1bd4ccbce9de3dad0698d84674a19d6cc66a84db736a6398110bd196795dde7e.py.lock
[2020-11-07 15:45:41,909][filelock][INFO] - Lock 139958278886176 released on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1c145a0a210d5b9cdce2b60002831e6ed0edc7ab9275d6f0d48.1bd4ccbce9de3dad0698d84674a19d6cc66a84db736a6398110bd196795dde7e.py.lock
```
using datasets version = 1.1.2
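A likely explanation: those INFO lines come from the third-party `filelock` package's own logger, which the datasets verbosity setting (in this version) may not touch. Raising that logger's level directly with the stdlib `logging` module silences them:

```python
import logging

# Silence the filelock package's INFO messages without touching
# the datasets library's own verbosity.
logging.getLogger("filelock").setLevel(logging.WARNING)

print(logging.getLogger("filelock").isEnabledFor(logging.INFO))  # False
```

This is a generic workaround; whether datasets should propagate its verbosity to `filelock` is the actual question of the issue.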
|
closed
|
https://github.com/huggingface/datasets/issues/812
| 2020-11-07T23:56:30
| 2021-01-26T14:31:34
| 2020-11-16T17:06:42
|
{
"login": "dspoka",
"id": 6183050,
"type": "User"
}
|
[] | false
|
[] |
738,280,132
| 811
|
nlp viewer error
|
Hello,
when I select amazon_us_reviews in the nlp viewer, it shows an error.
https://huggingface.co/nlp/viewer/?dataset=amazon_us_reviews

|
closed
|
https://github.com/huggingface/datasets/issues/811
| 2020-11-07T17:08:58
| 2022-02-15T10:51:44
| 2022-02-14T15:24:20
|
{
"login": "jc-hou",
"id": 30210529,
"type": "User"
}
|
[
{
"name": "nlp-viewer",
"color": "94203D"
}
] | false
|
[] |
737,878,370
| 810
|
Fix seqeval metric
|
The current seqeval metric returns the following error when computed:
```
~/.cache/huggingface/modules/datasets_modules/metrics/seqeval/78a944d83252b5a16c9a2e49f057f4c6e02f18cc03349257025a8c9aea6524d8/seqeval.py in _compute(self, predictions, references, suffix)
102 scores = {}
103 for type_name, score in report.items():
--> 104 scores[type_name]["precision"] = score["precision"]
105 scores[type_name]["recall"] = score["recall"]
106 scores[type_name]["f1"] = score["f1-score"]
KeyError: 'LOC'
```
This is because the current code basically tries to do:
```
scores = {}
scores["LOC"]["precision"] = some_value
```
which does not work in python. This PR fixes that while keeping the previous nested structure of results, with the same keys.
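The shape of the fix, sketched with a stand-in `report` dict (the real values come from seqeval's classification report): create the inner dict for each entity type before assigning into it.

```python
# Stand-in for the per-type report seqeval produces.
report = {
    "LOC": {"precision": 0.9, "recall": 0.8, "f1-score": 0.85},
    "PER": {"precision": 1.0, "recall": 0.5, "f1-score": 0.67},
}

scores = {}
for type_name, score in report.items():
    # Buggy version: scores[type_name]["precision"] = ... raises KeyError
    # because scores[type_name] was never created. Build the inner dict:
    scores[type_name] = {
        "precision": score["precision"],
        "recall": score["recall"],
        "f1": score["f1-score"],
    }

print(scores["LOC"]["f1"])  # 0.85
```

This keeps the previous nested structure of results with the same keys, as the PR describes.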
|
closed
|
https://github.com/huggingface/datasets/pull/810
| 2020-11-06T16:11:43
| 2020-11-09T14:04:29
| 2020-11-09T14:04:28
|
{
"login": "sgugger",
"id": 35901082,
"type": "User"
}
|
[] | true
|
[] |
737,832,701
| 809
|
Add Google Taskmaster dataset
|
## Adding a Dataset
- **Name:** Taskmaster
- **Description:** A large dataset of task-oriented dialogue with annotated goals (55K dialogues covering entertainment and travel reservations)
- **Paper:** https://arxiv.org/abs/1909.05358
- **Data:** https://github.com/google-research-datasets/Taskmaster
- **Motivation:** One of few annotated datasets of this size for goal-oriented dialogue
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
|
closed
|
https://github.com/huggingface/datasets/issues/809
| 2020-11-06T15:10:41
| 2021-04-20T13:09:26
| 2021-04-20T13:09:26
|
{
"login": "yjernite",
"id": 10469459,
"type": "User"
}
|
[
{
"name": "dataset request",
"color": "e99695"
}
] | false
|
[] |
737,638,942
| 808
|
dataset(dgs): initial dataset loading script
|
When trying to create dummy data I get:
> Dataset datasets with config None seems to already open files in the method `_split_generators(...)`. You might consider to instead only open files in the method `_generate_examples(...)` instead. If this is not possible the dummy data has to be created with less guidance. Make sure you create the file dummy_data.
I am not sure how to manually create the dummy_data (what exactly it should contain)
Also note, this library says:
> ImportError: To be able to use this dataset, you need to install the following dependencies['pympi'] using 'pip install pympi' for instance'
When you actually need to `pip install pympi-ling`
|
closed
|
https://github.com/huggingface/datasets/pull/808
| 2020-11-06T10:14:43
| 2021-03-23T06:18:55
| 2021-03-23T06:18:55
|
{
"login": "AmitMY",
"id": 5757359,
"type": "User"
}
|
[] | true
|
[] |
737,509,954
| 807
|
load_dataset for LOCAL CSV files report CONNECTION ERROR
|
## load_dataset for LOCAL CSV files report CONNECTION ERROR
- **Description:**
A local demo csv file:
```
import pandas as pd
import numpy as np
from datasets import load_dataset
import torch
import transformers
df = pd.DataFrame(np.arange(1200).reshape(300,4))
df.to_csv('test.csv', header=False, index=False)
print('datasets version: ', datasets.__version__)
print('pytorch version: ', torch.__version__)
print('transformers version: ', transformers.__version__)
# output:
datasets version: 1.1.2
pytorch version: 1.5.0
transformers version: 3.2.0
```
when I load data through `dataset`:
```
dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False)
```
Error infos:
```
ConnectionError Traceback (most recent call last)
<ipython-input-17-bbdadb9a0c78> in <module>
----> 1 dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False)
~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)
588 # Download/copy dataset processing script
589 module_path, hash = prepare_module(
--> 590 path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True
591 )
592
~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs)
266 file_path = hf_github_url(path=path, name=name, dataset=dataset, version=script_version)
267 try:
--> 268 local_path = cached_path(file_path, download_config=download_config)
269 except FileNotFoundError:
270 if script_version is not None:
~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)
306 user_agent=download_config.user_agent,
307 local_files_only=download_config.local_files_only,
--> 308 use_etag=download_config.use_etag,
309 )
310 elif os.path.exists(url_or_filename):
~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag)
473 elif response is not None and response.status_code == 404:
474 raise FileNotFoundError("Couldn't find file at {}".format(url))
--> 475 raise ConnectionError("Couldn't reach {}".format(url))
476
477 # Try a second time
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py
```
And I try to connect to the site with requests:
```
import requests
requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py")
```
Similarly Error occurs:
```
---------------------------------------------------------------------------
ConnectionRefusedError Traceback (most recent call last)
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self)
159 conn = connection.create_connection(
--> 160 (self._dns_host, self.port), self.timeout, **extra_kw
161 )
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options)
83 if err is not None:
---> 84 raise err
85
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options)
73 sock.bind(source_address)
---> 74 sock.connect(sa)
75 return sock
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
NewConnectionError Traceback (most recent call last)
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
676 headers=headers,
--> 677 chunked=chunked,
678 )
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw)
380 try:
--> 381 self._validate_conn(conn)
382 except (SocketTimeout, BaseSSLError) as e:
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _validate_conn(self, conn)
975 if not getattr(conn, "sock", None): # AppEngine might not have `.sock`
--> 976 conn.connect()
977
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in connect(self)
307 # Add certificate verification
--> 308 conn = self._new_conn()
309 hostname = self.host
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self)
171 raise NewConnectionError(
--> 172 self, "Failed to establish a new connection: %s" % e
173 )
NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
MaxRetryError Traceback (most recent call last)
~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies)
448 retries=self.max_retries,
--> 449 timeout=timeout
450 )
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
724 retries = retries.increment(
--> 725 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
726 )
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/retry.py in increment(self, method, url, response, error, _pool, _stacktrace)
438 if new_retry.is_exhausted():
--> 439 raise MaxRetryError(_pool, url, error or ResponseError(cause))
440
MaxRetryError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',))
During handling of the above exception, another exception occurred:
ConnectionError Traceback (most recent call last)
<ipython-input-20-18cc3eb4a049> in <module>
1 import requests
2
----> 3 requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py")
~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in head(url, **kwargs)
102
103 kwargs.setdefault('allow_redirects', False)
--> 104 return request('head', url, **kwargs)
105
106
~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in request(method, url, **kwargs)
59 # cases, and look like a memory leak in others.
60 with sessions.Session() as session:
---> 61 return session.request(method=method, url=url, **kwargs)
62
63
~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)
528 }
529 send_kwargs.update(settings)
--> 530 resp = self.send(prep, **send_kwargs)
531
532 return resp
~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in send(self, request, **kwargs)
641
642 # Send the request
--> 643 r = adapter.send(request, **kwargs)
644
645 # Total elapsed time of the request (approximately)
~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies)
514 raise SSLError(e, request=request)
515
--> 516 raise ConnectionError(e, request=request)
517
518 except ClosedPoolError as e:
ConnectionError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',))
```
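Until the remote `csv.py` processing script can be fetched, one stopgap (pure standard library, not the `datasets` API) is to parse the local file directly, for example:

```python
import csv

def read_local_csv(path):
    # Stopgap while the csv.py processing script is unreachable:
    # parse rows locally instead of going through load_dataset.
    with open(path, newline="") as f:
        return [row for row in csv.reader(f)]
```

This obviously loses the `datasets` features (caching, Arrow backing), but lets work continue offline.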
|
closed
|
https://github.com/huggingface/datasets/issues/807
| 2020-11-06T06:33:04
| 2021-01-11T01:30:27
| 2020-11-14T05:30:34
|
{
"login": "shexuan",
"id": 25664170,
"type": "User"
}
|
[] | false
|
[] |
737,215,430
| 806
|
Quail dataset urls are out of date
|
<h3>Code</h3>
```
from datasets import load_dataset
quail = load_dataset('quail')
```
<h3>Error</h3>
```
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/text-machine-lab/quail/master/quail_v1.2/xml/ordered/quail_1.2_train.xml
```
As per [quail v1.3 commit](https://github.com/text-machine-lab/quail/commit/506501cfa34d9ec6c042d31026ba6fea6bcec8ff) it looks like the location and suggested ordering have changed. In [https://github.com/huggingface/datasets/blob/master/datasets/quail/quail.py#L52-L58](https://github.com/huggingface/datasets/blob/master/datasets/quail/quail.py#L52-L58) the quail v1.2 datasets are being pointed to, which don't exist anymore.
|
closed
|
https://github.com/huggingface/datasets/issues/806
| 2020-11-05T19:40:19
| 2020-11-10T14:02:51
| 2020-11-10T14:02:51
|
{
"login": "ngdodd",
"id": 4889636,
"type": "User"
}
|
[] | false
|
[] |
737,019,360
| 805
|
On loading a metric from datasets, I get the following error
|
`from datasets import load_metric`
`metric = load_metric('bleurt')`
Traceback:
```
210 class _ArrayXDExtensionType(pa.PyExtensionType):
211
212     ndims: int = None

AttributeError: module 'pyarrow' has no attribute 'PyExtensionType'
```
Any help will be appreciated. Thank you.
|
closed
|
https://github.com/huggingface/datasets/issues/805
| 2020-11-05T15:14:38
| 2022-02-14T15:32:59
| 2022-02-14T15:32:59
|
{
"login": "laibamehnaz",
"id": 36405283,
"type": "User"
}
|
[] | false
|
[] |
736,858,507
| 804
|
Empty output/answer in TriviaQA test set (both in 'kilt_tasks' and 'trivia_qa')
|
# The issue
It's all in the title; it appears to be fine on the train and validation sets.
Is there some kind of mapping to do like for the questions (see https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md) ?
# How to reproduce
```py
from datasets import load_dataset
kilt_tasks = load_dataset("kilt_tasks")
trivia_qa = load_dataset('trivia_qa', 'unfiltered.nocontext')
# both in "kilt_tasks"
In [18]: any([output['answer'] for output in kilt_tasks['test_triviaqa']['output']])
Out[18]: False
# and "trivia_qa"
In [13]: all([answer['value'] == '<unk>' for answer in trivia_qa['test']['answer']])
Out[13]: True
# appears to be fine on the train and validation sets.
In [14]: all([answer['value'] == '<unk>' for answer in trivia_qa['train']['answer']])
Out[14]: False
In [15]: all([answer['value'] == '<unk>' for answer in trivia_qa['validation']['answer']])
Out[15]: False
In [16]: any([output['answer'] for output in kilt_tasks['train_triviaqa']['output']])
Out[16]: True
In [17]: any([output['answer'] for output in kilt_tasks['validation_triviaqa']['output']])
Out[17]: True
```
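A minimal way to spot this across splits, assuming rows shaped like the `trivia_qa` answers above (the helper name is illustrative, not a library API), is:

```python
def splits_with_placeholder_answers(dataset_dict):
    # Flag splits where every answer value is the "<unk>" placeholder,
    # mirroring the all(...) checks run in the session above.
    return [
        name for name, answers in dataset_dict.items()
        if answers and all(ans["value"] == "<unk>" for ans in answers)
    ]

toy = {
    "train": [{"value": "Paris"}, {"value": "<unk>"}],
    "test": [{"value": "<unk>"}, {"value": "<unk>"}],
}
print(splits_with_placeholder_answers(toy))  # ['test']
```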
|
closed
|
https://github.com/huggingface/datasets/issues/804
| 2020-11-05T11:38:01
| 2020-11-09T14:14:59
| 2020-11-09T14:14:58
|
{
"login": "PaulLerner",
"id": 25532159,
"type": "User"
}
|
[] | false
|
[] |
736,818,917
| 803
|
fix: typos in tutorial to map KILT and TriviaQA
|
closed
|
https://github.com/huggingface/datasets/pull/803
| 2020-11-05T10:42:00
| 2020-11-10T09:08:07
| 2020-11-10T09:08:07
|
{
"login": "PaulLerner",
"id": 25532159,
"type": "User"
}
|
[] | true
|
[] |
|
736,296,343
| 802
|
Add XGlue
|
Dataset is ready to merge. An important feature of this dataset is that for each config the train data is in English, while dev and test data are in multiple languages. Therefore, @lhoestq and I decided offline that we will give the dataset the following API, *e.g.* for
```python
load_dataset("xglue", "ner") # would give the splits 'train', 'validation.en', 'test.en', 'validation.es', 'test.es', ...
```
=> therefore one can load a single language test via
```python
load_dataset("xglue", "ner", split="test.es")
```
Close #749.
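The split layout described above can be sketched as a small helper that enumerates the expected split names (purely illustrative, not part of the library's API):

```python
def xglue_split_names(languages):
    # English-only training data, per-language validation and test splits,
    # matching the API agreed on above.
    names = ["train"]
    for lang in languages:
        names.append(f"validation.{lang}")
        names.append(f"test.{lang}")
    return names

print(xglue_split_names(["en", "es"]))
# ['train', 'validation.en', 'test.en', 'validation.es', 'test.es']
```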
|
closed
|
https://github.com/huggingface/datasets/pull/802
| 2020-11-04T17:29:54
| 2022-04-28T08:15:36
| 2020-12-01T15:58:27
|
{
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
}
|
[] | true
|
[] |
735,790,876
| 801
|
How to join two datasets?
|
Hi,
I'm wondering if it's possible to join two (preprocessed) datasets with the same number of rows but different labels?
I'm currently trying to create paired sentences for BERT from `wikipedia/'20200501.en`, and I couldn't figure out a way to create a paired sentence using `.map()` where the second sentence is **not** the next sentence (i.e., from a different article) of the first sentence.
Thanks!
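One workaround, assuming the goal is random (non-adjacent) sentence pairs rather than a true join, is to shuffle a copy of the sentence list and zip it against the original — a plain-Python sketch, not the `datasets` API:

```python
import random

def make_random_pairs(sentences, seed=0):
    # Pair each sentence with one drawn from elsewhere in the corpus,
    # so the second sentence is usually not the one that follows it.
    rng = random.Random(seed)
    partners = list(sentences)
    rng.shuffle(partners)
    return list(zip(sentences, partners))

pairs = make_random_pairs(["a", "b", "c", "d"])
```

The pairing is then a simple index lookup, which could also be done inside `.map()` with `with_indices=True`.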
|
closed
|
https://github.com/huggingface/datasets/issues/801
| 2020-11-04T03:53:11
| 2020-12-23T14:02:58
| 2020-12-23T14:02:58
|
{
"login": "shangw-nvidia",
"id": 66387198,
"type": "User"
}
|
[] | false
|
[] |
735,772,775
| 800
|
Update loading_metrics.rst
|
Minor bug
|
closed
|
https://github.com/huggingface/datasets/pull/800
| 2020-11-04T02:57:11
| 2020-11-11T15:28:32
| 2020-11-11T15:28:32
|
{
"login": "ayushidalmia",
"id": 5400513,
"type": "User"
}
|
[] | true
|
[] |
735,551,165
| 799
|
switch amazon reviews class label order
|
Switches the label order to be more intuitive for amazon reviews, #791.
|
closed
|
https://github.com/huggingface/datasets/pull/799
| 2020-11-03T18:38:58
| 2020-11-03T18:44:14
| 2020-11-03T18:44:10
|
{
"login": "joeddav",
"id": 9353833,
"type": "User"
}
|
[] | true
|
[] |
735,518,805
| 798
|
Cannot load TREC dataset: ConnectionError
|
## Problem
I cannot load the "trec" dataset; it results in a ConnectionError as shown below. I've tried both on Google Colab and locally.
* `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>.
* `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label', allow_redirects=True)` raises `requests.exceptions.TooManyRedirects: Exceeded 30 redirects.`
* Opening `http://cogcomp.org/Data/QA/QC/train_5500.label` in a browser works, but opens a different address
* Increasing max_redirects to 100 doesn't help
Also, while debugging I saw that requesting 'https://storage.googleapis.com/huggingface-nlp/cache/datasets/trec/default/1.1.0/dataset_info.json' returned <Response [404]>, but that didn't raise any errors. Not sure if that's relevant.
* datasets.__version__ == '1.1.2'
* requests.__version__ == '2.24.0'
## Error trace
```
>>> import datasets
>>> datasets.__version__
'1.1.2'
>>> dataset = load_dataset("trec", split="train")
Using custom data configuration default
Downloading and preparing dataset trec/default (download: 350.79 KiB, generated: 403.39 KiB, post-processed: Unknown size, total: 754.18 KiB) to /home/przemyslaw/.cache/huggingface/datasets/trec/default/1.1.0/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset
ignore_verifications=ignore_verifications,
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/przemyslaw/.cache/huggingface/modules/datasets_modules/datasets/trec/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7/trec.py", line 140, in _split_generators
dl_files = dl_manager.download_and_extract(_URLs)
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download
num_proc=download_config.num_proc,
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp>
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested
return function(data_struct)
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path
use_etag=download_config.use_etag,
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label
```
I would appreciate some suggestions here.
|
closed
|
https://github.com/huggingface/datasets/issues/798
| 2020-11-03T17:45:22
| 2022-02-14T15:34:22
| 2022-02-14T15:34:22
|
{
"login": "kaletap",
"id": 25740957,
"type": "User"
}
|
[
{
"name": "dataset bug",
"color": "2edb81"
}
] | false
|
[] |
735,420,332
| 797
|
Token classification labels are strings and we don't have the list of labels
|
Not sure if this is an issue we want to fix or not, putting it here so it's not forgotten. Right now, in token classification datasets, the labels for NER, POS and the like are typed as `Sequence` of `strings`, which is wrong in my opinion. These should be `Sequence` of `ClassLabel` or some type that gives easy access to the underlying labels.
The main problem for preprocessing those datasets is that the list of possible labels is not stored inside the `Dataset` object which makes converting the labels to IDs quite difficult (you either have to know the list of labels in advance or run a full pass through the dataset to get the list of labels, the `unique` method being useless with the type `Sequence[str]`).
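Until then, the full pass the issue mentions can be done with a small helper over the flattened tag sequences (illustrative, not a library API):

```python
from itertools import chain

def collect_label_list(tag_sequences):
    # One full pass over the Sequence[str] column to recover the label
    # vocabulary, since `unique` cannot flatten nested string lists.
    return sorted(set(chain.from_iterable(tag_sequences)))

labels = collect_label_list([["O", "B-PER", "I-PER"], ["O", "B-LOC"]])
label2id = {label: i for i, label in enumerate(labels)}
print(labels)  # ['B-LOC', 'B-PER', 'I-PER', 'O']
```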
|
closed
|
https://github.com/huggingface/datasets/issues/797
| 2020-11-03T15:33:30
| 2022-02-14T15:41:54
| 2022-02-14T15:41:53
|
{
"login": "sgugger",
"id": 35901082,
"type": "User"
}
|
[
{
"name": "enhancement",
"color": "a2eeef"
},
{
"name": "Dataset discussion",
"color": "72f99f"
}
] | false
|
[] |
735,198,265
| 795
|
Descriptions of raw and processed versions of wikitext are inverted
|
Nothing of importance, but it looks like the descriptions of wikitext-n-v1 and wikitext-n-raw-v1 are inverted for both n=2 and n=103. I just verified by loading them and the `<unk>` tokens are present in the non-raw versions, which confirms that it's a mere inversion of the descriptions and not of the datasets themselves.
Also it would be nice if those descriptions appeared in the dataset explorer.
https://github.com/huggingface/datasets/blob/87bd0864845ea0a1dd7167918dc5f341bf807bd3/datasets/wikitext/wikitext.py#L52
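The verification step described above — checking for `<unk>` tokens — can be sketched as a simple heuristic (the samples below are made up for illustration):

```python
def looks_tokenized(text):
    # The non-raw WikiText variants replace rare words with "<unk>",
    # so its presence suggests the tokenized (v1) version was loaded.
    return "<unk>" in text

print(looks_tokenized("The <unk> was founded in 1852 ."))   # True
print(looks_tokenized("The château was founded in 1852."))  # False
```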
|
closed
|
https://github.com/huggingface/datasets/issues/795
| 2020-11-03T10:24:51
| 2022-02-14T15:46:21
| 2022-02-14T15:46:21
|
{
"login": "fraboniface",
"id": 16835358,
"type": "User"
}
|
[
{
"name": "dataset bug",
"color": "2edb81"
}
] | false
|
[] |
735,158,725
| 794
|
self.options cannot be converted to a Python object for pickling
|
Hi,
Currently I am trying to load a CSV file with customized `read_options`, and the latest master seems broken if we pass a ReadOptions object.
Here is a code snippet
```python
from datasets import load_dataset
from pyarrow.csv import ReadOptions
load_dataset("csv", data_files=["out.csv"], read_options=ReadOptions(block_size=16*1024*1024))
```
error is `self.options cannot be converted to a Python object for pickling`
Would you mind taking a look? Thanks!
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-28-ab83fec2ded4> in <module>
----> 1 load_dataset("csv", data_files=["out.csv"], read_options=ReadOptions(block_size=16*1024*1024))
/tmp/datasets/src/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)
602 hash=hash,
603 features=features,
--> 604 **config_kwargs,
605 )
606
/tmp/datasets/src/datasets/builder.py in __init__(self, cache_dir, name, hash, features, **config_kwargs)
162 name,
163 custom_features=features,
--> 164 **config_kwargs,
165 )
166
/tmp/datasets/src/datasets/builder.py in _create_builder_config(self, name, custom_features, **config_kwargs)
281 )
282 else:
--> 283 suffix = Hasher.hash(config_kwargs_to_add_to_suffix)
284
285 if builder_config.data_files is not None:
/tmp/datasets/src/datasets/fingerprint.py in hash(cls, value)
51 return cls.dispatch[type(value)](cls, value)
52 else:
---> 53 return cls.hash_default(value)
54
55 def update(self, value):
/tmp/datasets/src/datasets/fingerprint.py in hash_default(cls, value)
44 @classmethod
45 def hash_default(cls, value):
---> 46 return cls.hash_bytes(dumps(value))
47
48 @classmethod
/tmp/datasets/src/datasets/utils/py_utils.py in dumps(obj)
365 file = StringIO()
366 with _no_cache_fields(obj):
--> 367 dump(obj, file)
368 return file.getvalue()
369
/tmp/datasets/src/datasets/utils/py_utils.py in dump(obj, file)
337 def dump(obj, file):
338 """pickle an object to a file"""
--> 339 Pickler(file, recurse=True).dump(obj)
340 return
341
~/.local/lib/python3.6/site-packages/dill/_dill.py in dump(self, obj)
444 raise PicklingError(msg)
445 else:
--> 446 StockPickler.dump(self, obj)
447 stack.clear() # clear record of 'recursion-sensitive' pickled objects
448 return
/usr/lib/python3.6/pickle.py in dump(self, obj)
407 if self.proto >= 4:
408 self.framer.start_framing()
--> 409 self.save(obj)
410 self.write(STOP)
411 self.framer.end_framing()
/usr/lib/python3.6/pickle.py in save(self, obj, save_persistent_id)
474 f = self.dispatch.get(t)
475 if f is not None:
--> 476 f(self, obj) # Call unbound method with explicit self
477 return
478
~/.local/lib/python3.6/site-packages/dill/_dill.py in save_module_dict(pickler, obj)
931 # we only care about session the first pass thru
932 pickler._session = False
--> 933 StockPickler.save_dict(pickler, obj)
934 log.info("# D2")
935 return
/usr/lib/python3.6/pickle.py in save_dict(self, obj)
819
820 self.memoize(obj)
--> 821 self._batch_setitems(obj.items())
822
823 dispatch[dict] = save_dict
/usr/lib/python3.6/pickle.py in _batch_setitems(self, items)
850 k, v = tmp[0]
851 save(k)
--> 852 save(v)
853 write(SETITEM)
854 # else tmp is empty, and we're done
/usr/lib/python3.6/pickle.py in save(self, obj, save_persistent_id)
494 reduce = getattr(obj, "__reduce_ex__", None)
495 if reduce is not None:
--> 496 rv = reduce(self.proto)
497 else:
498 reduce = getattr(obj, "__reduce__", None)
~/.local/lib/python3.6/site-packages/pyarrow/_csv.cpython-36m-x86_64-linux-gnu.so in pyarrow._csv.ReadOptions.__reduce_cython__()
TypeError: self.options cannot be converted to a Python object for pickling
```
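The root cause is that `ReadOptions` defines a `__reduce__` that raises, so it cannot be pickled for cache hashing. A stdlib-only reproduction with a stand-in class, plus a hypothetical fallback (not the library's actual fix), might look like:

```python
import pickle

class UnpicklableOptions:
    # Stand-in for pyarrow.csv.ReadOptions, whose __reduce__ raises.
    def __reduce__(self):
        raise TypeError("self.options cannot be converted to a Python object for pickling")

def cache_key_safe(obj):
    # Hypothetical fallback: describe unpicklable config objects as a
    # plain dict so they can still be hashed for the dataset cache.
    try:
        pickle.dumps(obj)
        return obj
    except TypeError:
        return {"type": type(obj).__name__}

print(cache_key_safe(UnpicklableOptions()))  # {'type': 'UnpicklableOptions'}
print(cache_key_safe({"block_size": 16 * 1024 * 1024}))
```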
|
closed
|
https://github.com/huggingface/datasets/issues/794
| 2020-11-03T09:27:34
| 2020-11-19T17:35:38
| 2020-11-19T17:35:38
|
{
"login": "hzqjyyx",
"id": 9635713,
"type": "User"
}
|
[
{
"name": "bug",
"color": "d73a4a"
}
] | false
|
[] |
735,105,907
| 793
|
[Datasets] fix discofuse links
|
The discofuse links were changed: https://github.com/google-research-datasets/discofuse/commit/d27641016eb5b3eb2af03c7415cfbb2cbebe8558.
The old links are broken
I changed the links and created the new dataset_infos.json.
Pinging @thomwolf @lhoestq for notification.
|
closed
|
https://github.com/huggingface/datasets/pull/793
| 2020-11-03T08:03:45
| 2020-11-03T08:16:41
| 2020-11-03T08:16:40
|
{
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
}
|
[] | true
|
[] |
734,693,652
| 792
|
KILT dataset: empty string in triviaqa input field
|
# What happened
Both train and test splits of the triviaqa dataset (part of the KILT benchmark) seem to have empty strings in their input field (unlike the natural questions dataset, part of the same benchmark).
# Versions
KILT version is `1.0.0`
`datasets` version is `1.1.2`
[more here](https://gist.github.com/PaulLerner/3768c8d25f723edbac20d99b6a4056c1)
# How to reproduce
```py
In [1]: from datasets import load_dataset
In [4]: dataset = load_dataset("kilt_tasks")
# everything works fine, removed output for better readability
Dataset kilt_tasks downloaded and prepared to /people/lerner/.cache/huggingface/datasets/kilt_tasks/all_tasks/1.0.0/821c4295a2c35db2847585918d9c47d7f028f1a26b78825d8e77cd3aeb2621a1. Subsequent calls will reuse this data.
# empty string in triviaqa input field
In [36]: dataset['train_triviaqa'][0]
Out[36]:
{'id': 'dpql_5197',
'input': '',
'meta': {'left_context': '',
'mention': '',
'obj_surface': {'text': []},
'partial_evidence': {'end_paragraph_id': [],
'meta': [],
'section': [],
'start_paragraph_id': [],
'title': [],
'wikipedia_id': []},
'right_context': '',
'sub_surface': {'text': []},
'subj_aliases': {'text': []},
'template_questions': {'text': []}},
'output': {'answer': ['five Β£', '5 Β£', 'Β£5', 'five Β£'],
'meta': [],
'provenance': [{'bleu_score': [1.0],
'end_character': [248],
'end_paragraph_id': [30],
'meta': [],
'section': ['Section::::Question of legal tender.\n'],
'start_character': [246],
'start_paragraph_id': [30],
'title': ['Banknotes of the pound sterling'],
'wikipedia_id': ['270680']}]}}
In [35]: dataset['train_triviaqa']['input'][:10]
Out[35]: ['', '', '', '', '', '', '', '', '', '']
# same with test set
In [37]: dataset['test_triviaqa']['input'][:10]
Out[37]: ['', '', '', '', '', '', '', '', '', '']
# works fine with natural questions
In [34]: dataset['train_nq']['input'][:10]
Out[34]:
['how i.met your mother who is the mother',
'who had the most wins in the nfl',
'who played mantis guardians of the galaxy 2',
'what channel is the premier league on in france',
"god's not dead a light in the darkness release date",
'who is the current president of un general assembly',
'when do the eclipse supposed to take place',
'what is the name of the sea surrounding dubai',
'who holds the nba record for most points in a career',
'when did the new maze runner movie come out']
```
Stay safe :)
|
closed
|
https://github.com/huggingface/datasets/issues/792
| 2020-11-02T17:33:54
| 2020-11-05T10:34:59
| 2020-11-05T10:34:59
|
{
"login": "PaulLerner",
"id": 25532159,
"type": "User"
}
|
[] | false
|
[] |
734,656,518
| 791
|
add amazon reviews
|
Adds the Amazon US Reviews dataset as requested in #353. Converted from [TensorFlow Datasets](https://www.tensorflow.org/datasets/catalog/amazon_us_reviews). cc @clmnt @sshleifer
|
closed
|
https://github.com/huggingface/datasets/pull/791
| 2020-11-02T16:42:57
| 2020-11-03T20:15:06
| 2020-11-03T16:43:57
|
{
"login": "joeddav",
"id": 9353833,
"type": "User"
}
|
[] | true
|
[] |
734,470,197
| 790
|
Error running pip install -e ".[dev]" on MacOS 10.13.6: faiss/python does not exist
|
I was following along with https://huggingface.co/docs/datasets/share_dataset.html#adding-tests-and-metadata-to-the-dataset when I ran into this error.
```sh
git clone https://github.com/huggingface/datasets
cd datasets
virtualenv venv -p python3 --system-site-packages
source venv/bin/activate
pip install -e ".[dev]"
```


Python 3.7.7
|
closed
|
https://github.com/huggingface/datasets/issues/790
| 2020-11-02T12:36:35
| 2020-11-10T14:05:02
| 2020-11-10T14:05:02
|
{
"login": "shawwn",
"id": 59632,
"type": "User"
}
|
[] | false
|
[] |
734,237,839
| 789
|
dataset(ncslgr): add initial loading script
|
It's a small dataset, but it's heavily annotated
https://www.bu.edu/asllrp/ncslgr.html

|
closed
|
https://github.com/huggingface/datasets/pull/789
| 2020-11-02T06:50:10
| 2020-12-01T13:41:37
| 2020-12-01T13:41:36
|
{
"login": "AmitMY",
"id": 5757359,
"type": "User"
}
|
[] | true
|
[] |
734,136,124
| 788
|
failed to reuse cache
|
I wrapped `load_dataset` in a method of a class and cached the data in a directory. But when I import the class and use the method, the data still has to be downloaded again. The message (Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to ******) logged to the terminal shows the path correctly points to the cache directory, but the files still have to be downloaded again.
|
closed
|
https://github.com/huggingface/datasets/issues/788
| 2020-11-02T02:42:36
| 2020-11-02T12:26:15
| 2020-11-02T12:26:15
|
{
"login": "WangHexie",
"id": 31768052,
"type": "User"
}
|
[] | false
|
[] |
734,070,162
| 787
|
Adding nli_tr dataset
|
Hello,
In this pull request, we have implemented the necessary interface to add our recent dataset [NLI-TR](https://github.com/boun-tabi/NLI-TR). The dataset will be presented in a full paper at EMNLP 2020 this month. [[arXiv link]](https://arxiv.org/pdf/2004.14963.pdf)
The dataset is the neural machine translation of SNLI and MultiNLI datasets into Turkish. So, we followed a similar format with the original datasets hosted in the HuggingFace datasets hub.
Our dataset is designed to be accessed as follows by following the interface of the GLUE dataset that provides multiple datasets in a single interface over the HuggingFace datasets hub.
```
from datasets import load_dataset
multinli_tr = load_dataset("nli_tr", "multinli_tr")
snli_tr = load_dataset("nli_tr", "snli_tr")
```
Thanks for your help in reviewing our pull request.
|
closed
|
https://github.com/huggingface/datasets/pull/787
| 2020-11-01T21:49:44
| 2020-11-12T19:06:02
| 2020-11-12T19:06:02
|
{
"login": "e-budur",
"id": 2246791,
"type": "User"
}
|
[] | true
|
[] |
733,761,717
| 786
|
feat(dataset): multiprocessing _generate_examples
|
forking this out of #741, this issue is only regarding multiprocessing
I'd love if there was a dataset configuration parameter `workers`, where when it is `1` it behaves as it does right now, and when its `>1` maybe `_generate_examples` can also get the `pool` and return an iterable using the pool.
In my use case, I would instead of:
```python
for datum in data:
yield self.load_datum(datum)
```
do:
```python
return pool.map(self.load_datum, data)
```
As the dataset in question, as an example, has **only** 7000 rows, and takes 10 seconds to load each row on average, it takes almost 20 hours to load the entire dataset.
If this was a larger dataset (and many such datasets exist), it would take multiple days to complete.
Using multiprocessing, for example, 40 cores, could speed it up dramatically. For this dataset, hopefully to fully load in under an hour.
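The interface proposed above could be sketched roughly as follows — shown here with a thread pool instead of a process pool purely to keep the example self-contained; the real implementation would presumably hand a process pool to `_generate_examples`:

```python
from multiprocessing.pool import ThreadPool

def load_datum(datum):
    # Placeholder for the expensive (~10 s per row) work described above.
    return {"id": datum, "value": datum * 2}

def generate_examples(data, workers=1):
    # workers == 1 keeps today's sequential behaviour;
    # workers > 1 fans the per-row work out across a pool.
    if workers == 1:
        for datum in data:
            yield load_datum(datum)
    else:
        with ThreadPool(workers) as pool:
            yield from pool.map(load_datum, data)

rows = list(generate_examples(range(4), workers=2))
```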
|
closed
|
https://github.com/huggingface/datasets/issues/786
| 2020-10-31T16:52:16
| 2023-01-16T10:59:13
| 2023-01-16T10:59:13
|
{
"login": "AmitMY",
"id": 5757359,
"type": "User"
}
|
[] | false
|
[] |
733,719,419
| 785
|
feat(aslg_pc12): add dev and test data splits
|
For reproducibility's sake, it's best if there are defined dev and test splits.
The original paper's author did not define splits, neither for the entire dataset nor for the sample loaded via this library, so I decided to define:
- 5/7th for train
- 1/7th for dev
- 1/7th for test
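The proposed proportions can be sketched as index ranges (illustrative only; the PR itself uses the library's split machinery):

```python
def split_bounds(n):
    # 5/7 train, 1/7 dev, 1/7 test, per the scheme proposed above.
    train_end = n * 5 // 7
    dev_end = n * 6 // 7
    return (0, train_end), (train_end, dev_end), (dev_end, n)

print(split_bounds(7))    # ((0, 5), (5, 6), (6, 7))
print(split_bounds(100))  # ((0, 71), (71, 85), (85, 100))
```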
|
closed
|
https://github.com/huggingface/datasets/pull/785
| 2020-10-31T13:25:38
| 2020-11-10T15:29:30
| 2020-11-10T15:29:30
|
{
"login": "AmitMY",
"id": 5757359,
"type": "User"
}
|
[] | true
|
[] |
733,700,463
| 784
|
Issue with downloading Wikipedia data for low resource language
|
Hi, I tried to download Sundanese and Javanese wikipedia data with the following snippet
```
jv_wiki = datasets.load_dataset('wikipedia', '20200501.jv', beam_runner='DirectRunner')
su_wiki = datasets.load_dataset('wikipedia', '20200501.su', beam_runner='DirectRunner')
```
And I get the following error for these two languages:
Javanese
```
FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/jvwiki/20200501/dumpstatus.json
```
Sundanese
```
FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/suwiki/20200501/dumpstatus.json
```
I found from https://github.com/huggingface/datasets/issues/577#issuecomment-688435085 that for small languages, they are directly downloaded and parsed from the Wikipedia dump site, but both of `https://dumps.wikimedia.org/jvwiki/20200501/dumpstatus.json` and `https://dumps.wikimedia.org/suwiki/20200501/dumpstatus.json` are no longer valid.
Any suggestions on how to handle this issue? Thanks!
|
closed
|
https://github.com/huggingface/datasets/issues/784
| 2020-10-31T11:40:00
| 2022-02-09T17:50:16
| 2020-11-25T15:42:13
|
{
"login": "SamuelCahyawijaya",
"id": 2826602,
"type": "User"
}
|
[] | false
|
[] |
733,536,254
| 783
|
updated links to v1.3 of quail, fixed the description
|
updated links to v1.3 of quail, fixed the description
|
closed
|
https://github.com/huggingface/datasets/pull/783
| 2020-10-30T21:47:33
| 2020-11-29T23:05:19
| 2020-11-29T23:05:18
|
{
"login": "annargrs",
"id": 1450322,
"type": "User"
}
|
[] | true
|
[] |
733,316,463
| 782
|
Fix metric deletion when attributes are missing
|
When you call `del` on a metric we want to make sure that the arrow attributes are not already deleted.
I just added `if hasattr(...)` to make sure it doesn't crash
|
closed
|
https://github.com/huggingface/datasets/pull/782
| 2020-10-30T16:16:10
| 2020-10-30T16:47:53
| 2020-10-30T16:47:52
|
{
"login": "lhoestq",
"id": 42851186,
"type": "User"
}
|
[] | true
|
[] |
733,168,609
| 781
|
Add XNLI train set
|
I added the train set that was built using the translated MNLI.
Now you can load the dataset specifying one language:
```python
from datasets import load_dataset
xnli_en = load_dataset("xnli", "en")
print(xnli_en["train"][0])
# {'hypothesis': 'Product and geography are what make cream skimming work .', 'label': 1, 'premise': 'Conceptually cream skimming has two basic dimensions - product and geography .'}
print(xnli_en["test"][0])
# {'hypothesis': 'I havent spoken to him again.', 'label': 2, 'premise': "Well, I wasn't even thinking about that, but I was so frustrated, and, I ended up talking to him again."}
```
Cc @sgugger
|
closed
|
https://github.com/huggingface/datasets/pull/781
| 2020-10-30T13:21:53
| 2022-06-09T23:26:46
| 2020-11-09T18:22:49
|
{
"login": "lhoestq",
"id": 42851186,
"type": "User"
}
|
[] | true
|
[] |
732,738,647
| 780
|
Add ASNQ dataset
|
This pull request adds the ASNQ dataset. It is a dataset for answer sentence selection derived from Google Natural Questions (NQ) dataset (Kwiatkowski et al. 2019). The dataset details can be found in the paper at https://arxiv.org/abs/1911.04118
The dataset is authored by Siddhant Garg, Thuy Vu and Alessandro Moschitti.
_Please note that I have no affiliation with the authors._
Repo: https://github.com/alexa/wqa_tanda
|
closed
|
https://github.com/huggingface/datasets/pull/780
| 2020-10-29T23:31:56
| 2020-11-10T09:26:23
| 2020-11-10T09:26:23
|
{
"login": "mkserge",
"id": 2992022,
"type": "User"
}
|
[] | true
|
[] |
732,514,887
| 779
|
Feature/fidelity metrics from emnlp2020 evaluating and characterizing human rationales
|
This metric computes fidelity (Yu et al. 2019, DeYoung et al. 2019) and normalized fidelity (Carton et al. 2020).
|
closed
|
https://github.com/huggingface/datasets/pull/779
| 2020-10-29T17:31:14
| 2023-07-11T09:36:30
| 2023-07-11T09:36:30
|
{
"login": "rathoreanirudh",
"id": 11327413,
"type": "User"
}
|
[
{
"name": "transfer-to-evaluate",
"color": "E3165C"
}
] | true
|
[] |
732,449,652
| 778
|
Unexpected behavior when loading cached csv file?
|
I read a csv file from disk and forgot to specify the right delimiter. When I read the csv file again specifying the right delimiter, it had no effect since it was using the cached dataset. I am not sure if this is unwanted behavior, since I can always specify `download_mode="force_redownload"`. But I think it would be nice if information about which `delimiter` or `column_names` were used influenced the identifier of the cached dataset.
Small snippet to reproduce the behavior:
```python
import datasets
with open("dummy_data.csv", "w") as file:
file.write("test,this;text\n")
print(datasets.load_dataset("csv", data_files="dummy_data.csv", split="train").column_names)
# ["test", "this;text"]
print(datasets.load_dataset("csv", data_files="dummy_data.csv", split="train", delimiter=";").column_names)
# still ["test", "this;text"]
```
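The root cause is that the cache identifier doesn't depend on the builder parameters. A minimal stdlib sketch of a parameter-aware cache key (illustrative only, not the actual `datasets` internals):

```python
import hashlib
import json

def cache_fingerprint(path, **load_kwargs):
    # Hash the file path together with every parameter that affects parsing,
    # so changing e.g. the delimiter produces a different cache entry.
    # (Illustrative sketch only -- not the actual `datasets` cache logic.)
    payload = json.dumps({"path": path, **load_kwargs}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()[:16]

default_key = cache_fingerprint("dummy_data.csv")
semicolon_key = cache_fingerprint("dummy_data.csv", delimiter=";")
print(default_key != semicolon_key)  # True
```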
By the way, thanks a lot for this amazing library! :)
|
closed
|
https://github.com/huggingface/datasets/issues/778
| 2020-10-29T16:06:10
| 2020-10-29T21:21:27
| 2020-10-29T21:21:27
|
{
"login": "dcfidalgo",
"id": 15979778,
"type": "User"
}
|
[] | false
|
[] |
732,376,648
| 777
|
Better error message for uninitialized metric
|
When calling `metric.compute()` without having called `metric.add` or `metric.add_batch` at least once, the error was quite cryptic. I added a better error message
Fix #729
|
closed
|
https://github.com/huggingface/datasets/pull/777
| 2020-10-29T14:42:50
| 2020-10-29T15:18:26
| 2020-10-29T15:18:24
|
{
"login": "lhoestq",
"id": 42851186,
"type": "User"
}
|
[] | true
|
[] |
732,343,550
| 776
|
Allow custom split names in text dataset
|
The `text` dataset used to return only splits like train, test and validation. Other splits were ignored.
Now any split name is allowed.
I did the same for `json`, `pandas` and `csv`
Fix #735
|
closed
|
https://github.com/huggingface/datasets/pull/776
| 2020-10-29T14:04:06
| 2020-10-30T13:46:45
| 2020-10-30T13:23:52
|
{
"login": "lhoestq",
"id": 42851186,
"type": "User"
}
|
[] | true
|
[] |
732,287,504
| 775
|
Properly delete metrics when a process is killed
|
Tests are flaky when using metrics in distributed setup.
This is because of one test that makes sure that using two possibly incompatible metric computations (same exp id) either works or raises the right error.
However, if the error is raised, all the processes of the metric are killed, and the open files (arrow + lock files) are not closed correctly. This causes a PermissionError on Windows when deleting the temporary directory.
To fix that I added a `finally` clause in the function passed to multiprocess to properly close the files when the process exits.
|
closed
|
https://github.com/huggingface/datasets/pull/775
| 2020-10-29T12:52:07
| 2020-10-29T14:01:20
| 2020-10-29T14:01:19
|
{
"login": "lhoestq",
"id": 42851186,
"type": "User"
}
|
[] | true
|
[] |
732,265,741
| 774
|
[ROUGE] Add description to Rouge metric
|
Add information about case sensitivity to ROUGE.
|
closed
|
https://github.com/huggingface/datasets/pull/774
| 2020-10-29T12:19:32
| 2020-10-29T17:55:50
| 2020-10-29T17:55:48
|
{
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
}
|
[] | true
|
[] |
731,684,153
| 773
|
Adding CC-100: Monolingual Datasets from Web Crawl Data
|
## Adding a Dataset
- **Name:** CC-100: Monolingual Datasets from Web Crawl Data
- **Description:** https://twitter.com/alex_conneau/status/1321507120848625665
- **Paper:** https://arxiv.org/abs/1911.02116
- **Data:** http://data.statmt.org/cc-100/
- **Motivation:** A large scale multi-lingual language modeling dataset. Text is de-duplicated and filtered by how "Wikipedia-like" it is, hopefully helping avoid some of the worst parts of the common crawl.
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
|
closed
|
https://github.com/huggingface/datasets/issues/773
| 2020-10-28T18:20:41
| 2022-01-26T13:22:54
| 2020-12-14T10:20:07
|
{
"login": "yjernite",
"id": 10469459,
"type": "User"
}
|
[
{
"name": "dataset request",
"color": "e99695"
}
] | false
|
[] |
731,612,430
| 772
|
Fix metric with cache dir
|
The cache_dir provided by the user was concatenated twice, causing FileNotFound errors.
The tests didn't cover the case of providing `cache_dir=` for metrics because of a stupid issue (it was not using the right parameter).
I removed the double concatenation and fixed the tests
Fix #728
|
closed
|
https://github.com/huggingface/datasets/pull/772
| 2020-10-28T16:43:13
| 2020-10-29T09:34:44
| 2020-10-29T09:34:43
|
{
"login": "lhoestq",
"id": 42851186,
"type": "User"
}
|
[] | true
|
[] |
731,482,213
| 771
|
Using `Dataset.map` with `num_procs>1` prints multiple progress bars
|
When using `Dataset.map` with `n_proc > 1`, only one of the processes should print a progress bar (to make the output readable). Right now, `n_proc` progress bars are printed.
|
closed
|
https://github.com/huggingface/datasets/issues/771
| 2020-10-28T14:13:27
| 2023-02-13T20:16:39
| 2023-02-13T20:16:39
|
{
"login": "sgugger",
"id": 35901082,
"type": "User"
}
|
[] | false
|
[] |
731,445,222
| 770
|
Fix custom builder caching
|
The cache directory of a dataset didn't take into account additional parameters that the user could specify such as `features` or any parameter of the builder configuration kwargs (ex: `encoding` for the `text` dataset).
To fix that, the cache directory name now has a suffix that depends on all of them.
Fix #730
Fix #750
|
closed
|
https://github.com/huggingface/datasets/pull/770
| 2020-10-28T13:32:24
| 2020-10-29T09:36:03
| 2020-10-29T09:36:01
|
{
"login": "lhoestq",
"id": 42851186,
"type": "User"
}
|
[] | true
|
[] |
731,257,104
| 769
|
How to choose proper download_mode in function load_dataset?
|
Hi, I am a beginner to datasets and I try to use datasets to load my csv file.
my csv file looks like this
```
text,label
"Effective but too-tepid biopic",3
"If you sometimes like to go to the movies to have fun , Wasabi is a good place to start .",4
"Emerges as something rare , an issue movie that 's so honest and keenly observed that it does n't feel like one .",5
```
First I try to use this command to load my csv file .
``` python
dataset=load_dataset('csv', data_files=['sst_test.csv'])
```
It seems good, but when I try to overwrite the convert_options to convert the 'label' column from int64 to float32 like this.
``` python
import pyarrow as pa
from pyarrow import csv
read_options = csv.ReadOptions(block_size=1024*1024)
parse_options = csv.ParseOptions()
convert_options = csv.ConvertOptions(column_types={'text': pa.string(), 'label': pa.float32()})
dataset = load_dataset('csv', data_files=['sst_test.csv'], read_options=read_options,
parse_options=parse_options, convert_options=convert_options)
```
It stays the same:
```shell
Dataset(features: {'text': Value(dtype='string', id=None), 'label': Value(dtype='int64', id=None)}, num_rows: 2210)
```
I think this issue is caused by the parameter `download_mode`, which defaults to REUSE_DATASET_IF_EXISTS, because after I delete the cache_dir it works as expected.
Is it a bug? How should I choose the proper download_mode to avoid this issue?
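Until the cache takes these options into account, one workaround is to cast the column after loading instead of at parse time; a stdlib sketch of the idea (with `datasets`, the equivalent would be a `map` call, or passing `download_mode="force_redownload"` to bypass the cache):

```python
import csv
import io

raw = 'text,label\n"Effective but too-tepid biopic",3\n'
rows = list(csv.DictReader(io.StringIO(raw)))

# Cast the label column to float after loading, sidestepping the cache entirely.
for row in rows:
    row["label"] = float(row["label"])

print(rows[0]["label"])  # 3.0
```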
|
closed
|
https://github.com/huggingface/datasets/issues/769
| 2020-10-28T09:16:19
| 2022-02-22T12:22:52
| 2022-02-22T12:22:52
|
{
"login": "jzq2000",
"id": 48550398,
"type": "User"
}
|
[] | false
|
[] |
730,908,060
| 768
|
Add a `lazy_map` method to `Dataset` and `DatasetDict`
|
The library is great, but it would be even more awesome with a `lazy_map` method implemented on `Dataset` and `DatasetDict`. This would apply a function to a given item, but only when the item is requested. Two use cases:
1. load image on the fly
2. apply a random function and get different outputs at each epoch (like data augmentation or randomly masking a part of a sentence for BERT-like objectives).
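A minimal sketch of what such a lazy wrapper could look like (hypothetical, illustrative API — not part of the library):

```python
class LazyMapDataset:
    """Sketch of the requested `lazy_map`: the function runs when an item is
    requested, not when the map is declared (hypothetical, illustrative only)."""

    def __init__(self, data, fn):
        self.data, self.fn = data, fn

    def __len__(self):
        return len(self.data)

    def __getitem__(self, i):
        # Re-applied on every access, so random augmentation differs per epoch.
        return self.fn(self.data[i])

ds = LazyMapDataset([1, 2, 3], lambda x: x * 2)
print(ds[1], len(ds))  # 4 3
```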
|
open
|
https://github.com/huggingface/datasets/issues/768
| 2020-10-27T22:33:03
| 2020-10-28T08:58:13
| null |
{
"login": "sgugger",
"id": 35901082,
"type": "User"
}
|
[
{
"name": "enhancement",
"color": "a2eeef"
}
] | false
|
[] |
730,771,610
| 767
|
Add option for named splits when using ds.train_test_split
|
### Feature Request π
Can we add a way to name your splits when using the `.train_test_split` function?
In almost every use case I've come across, I have a `train` and a `test` split in my `DatasetDict`, and I want to create a `validation` split. Therefore, it's kinda useless to get a `test` split back from `train_test_split`, as it'll just overwrite my real `test` split that I intended to keep.
### Workaround
this is my hack for dealing with this, for now :slightly_smiling_face:
```python
from datasets import load_dataset
ds = load_dataset('imdb')
ds['train'], ds['validation'] = ds['train'].train_test_split(.1).values()
```
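A hypothetical helper with the requested naming behavior could look like this pure-Python sketch (the name and signature are illustrative):

```python
import random

def train_validation_split(rows, valid_fraction=0.1, seed=0, names=("train", "validation")):
    # Hypothetical helper mirroring the requested API: the caller names the
    # resulting splits instead of always getting back "train"/"test".
    rng = random.Random(seed)
    shuffled = list(rows)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * valid_fraction)
    return {names[1]: shuffled[:cut], names[0]: shuffled[cut:]}

splits = train_validation_split(range(10), valid_fraction=0.2)
print(sorted(splits), len(splits["validation"]))  # ['train', 'validation'] 2
```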
|
open
|
https://github.com/huggingface/datasets/issues/767
| 2020-10-27T19:59:44
| 2020-11-10T14:05:21
| null |
{
"login": "nateraw",
"id": 32437151,
"type": "User"
}
|
[
{
"name": "enhancement",
"color": "a2eeef"
}
] | false
|
[] |
730,669,596
| 766
|
[GEM] add DART data-to-text generation dataset
|
## Adding a Dataset
- **Name:** DART
- **Description:** DART consists of 82,191 examples across different domains with each input being a semantic RDF triple set derived from data records in tables and the tree ontology of the schema, annotated with sentence descriptions that cover all facts in the triple set.
- **Paper:** https://arxiv.org/abs/2007.02871v1
- **Data:** https://github.com/Yale-LILY/dart
- **Motivation:** the dataset will likely be included in the GEM benchmark
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
|
closed
|
https://github.com/huggingface/datasets/issues/766
| 2020-10-27T17:34:04
| 2020-12-03T13:37:18
| 2020-12-03T13:37:18
|
{
"login": "yjernite",
"id": 10469459,
"type": "User"
}
|
[
{
"name": "dataset request",
"color": "e99695"
}
] | false
|
[] |
730,668,332
| 765
|
[GEM] Add DART data-to-text generation dataset
|
## Adding a Dataset
- **Name:** DART
- **Description:** DART consists of 82,191 examples across different domains with each input being a semantic RDF triple set derived from data records in tables and the tree ontology of the schema, annotated with sentence descriptions that cover all facts in the triple set.
- **Paper:** https://arxiv.org/abs/2007.02871v1
- **Data:** https://github.com/Yale-LILY/dart
- **Motivation:** It will likely be included in the GEM generation evaluation benchmark
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
|
closed
|
https://github.com/huggingface/datasets/issues/765
| 2020-10-27T17:32:23
| 2020-10-27T17:34:21
| 2020-10-27T17:34:21
|
{
"login": "yjernite",
"id": 10469459,
"type": "User"
}
|
[
{
"name": "dataset request",
"color": "e99695"
}
] | false
|
[] |
730,617,828
| 764
|
Adding Issue Template for Dataset Requests
|
adding .github/ISSUE_TEMPLATE/add-dataset.md
|
closed
|
https://github.com/huggingface/datasets/pull/764
| 2020-10-27T16:37:08
| 2020-10-27T17:25:26
| 2020-10-27T17:25:25
|
{
"login": "yjernite",
"id": 10469459,
"type": "User"
}
|
[] | true
|
[] |
730,593,631
| 763
|
Fixed errors in bertscore related to custom baseline
|
[bertscore version 0.3.6](https://github.com/Tiiiger/bert_score) added support for custom baseline files. This update added an extra argument `baseline_path` to the BERTScorer class, as well as some extra boolean parameters such as `use_custom_baseline` in functions like `get_hash(model, num_layers, idf, rescale_with_baseline, use_custom_baseline)`.
This PR fixes those matching errors in the bertscore metric implementation.
|
closed
|
https://github.com/huggingface/datasets/pull/763
| 2020-10-27T16:08:35
| 2020-10-28T17:59:25
| 2020-10-28T17:59:25
|
{
"login": "juanjucm",
"id": 36761132,
"type": "User"
}
|
[] | true
|
[] |
730,586,972
| 762
|
[GEM] Add Czech Restaurant data-to-text generation dataset
|
- Paper: https://www.aclweb.org/anthology/W19-8670.pdf
- Data: https://github.com/UFAL-DSG/cs_restaurant_dataset
- The dataset will likely be part of the GEM benchmark
|
closed
|
https://github.com/huggingface/datasets/issues/762
| 2020-10-27T16:00:47
| 2020-12-03T13:37:44
| 2020-12-03T13:37:44
|
{
"login": "yjernite",
"id": 10469459,
"type": "User"
}
|
[
{
"name": "dataset request",
"color": "e99695"
}
] | false
|
[] |
729,898,867
| 761
|
Downloaded datasets are not usable offline
|
I've been trying to use the IMDB dataset offline, but after downloading it and turning off the internet, it still raises an error from the `requests` library trying to reach the online dataset.
Is this the intended behavior?
(Sorry, I wrote the first version of this issue while still on nlp 0.3.0.)
|
closed
|
https://github.com/huggingface/datasets/issues/761
| 2020-10-26T20:54:46
| 2022-02-15T10:32:28
| 2022-02-15T10:32:28
|
{
"login": "ghazi-f",
"id": 25091538,
"type": "User"
}
|
[] | false
|
[] |
729,637,917
| 760
|
Add meta-data to the HANS dataset
|
The current version of the [HANS dataset](https://github.com/huggingface/datasets/blob/master/datasets/hans/hans.py) is missing the additional information provided for each example, including the sentence parses, heuristic and subcase.
|
closed
|
https://github.com/huggingface/datasets/issues/760
| 2020-10-26T14:56:53
| 2020-12-03T13:38:34
| 2020-12-03T13:38:34
|
{
"login": "yjernite",
"id": 10469459,
"type": "User"
}
|
[
{
"name": "good first issue",
"color": "7057ff"
},
{
"name": "dataset bug",
"color": "2edb81"
}
] | false
|
[] |
729,046,916
| 759
|
(Load dataset failure) ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py
|
Hey, I want to load the cnn-dailymail dataset for fine-tuning.
I write the code like this
from datasets import load_dataset
test_dataset = load_dataset("cnn_dailymail", "3.0.0", split="train")
And I got the following errors.
Traceback (most recent call last):
File "test.py", line 7, in
test_dataset = load_dataset("cnn_dailymail", "3.0.0", split="test")
File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\load.py", line 589, in load_dataset
module_path, hash = prepare_module(
File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\load.py", line 268, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\utils\file_utils.py", line 300, in cached_path
output_path = get_from_cache(
File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\utils\file_utils.py", line 475, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py
How can I fix this?
|
closed
|
https://github.com/huggingface/datasets/issues/759
| 2020-10-25T15:34:57
| 2023-09-13T23:56:51
| 2021-08-04T18:10:09
|
{
"login": "AI678",
"id": 63541083,
"type": "User"
}
|
[] | false
|
[] |
728,638,559
| 758
|
Process 0 very slow when using num_procs with map to tokenizer
|
<img width="721" alt="image" src="https://user-images.githubusercontent.com/17930170/97066109-776d0d00-15ed-11eb-8bba-bb4d2e0fcc33.png">
The code I am using is
```
dataset = load_dataset("text", data_files=[file_path], split='train')
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True,
truncation=True, max_length=args.block_size), num_proc=8)
dataset.set_format(type='torch', columns=['input_ids'])
dataset.save_to_disk(file_path+'.arrow')
```
|
closed
|
https://github.com/huggingface/datasets/issues/758
| 2020-10-24T02:40:20
| 2020-10-28T03:59:46
| 2020-10-28T03:59:45
|
{
"login": "ksjae",
"id": 17930170,
"type": "User"
}
|
[] | false
|
[] |
728,241,494
| 757
|
CUDA out of memory
|
With your dataset, CUDA runs out of memory as soon as the trainer begins;
however, without changing any other element/parameter, just switching the dataset to `LineByLineTextDataset` makes everything OK.
|
closed
|
https://github.com/huggingface/datasets/issues/757
| 2020-10-23T13:57:00
| 2020-12-23T14:06:29
| 2020-12-23T14:06:29
|
{
"login": "li1117heex",
"id": 47059217,
"type": "User"
}
|
[] | false
|
[] |
728,211,373
| 756
|
Start community-provided dataset docs
|
Continuation of #736 with clean fork.
#### Old description
This is what I did to get the pseudo-labels updated. Not sure if it generalizes, but I figured I would write it down. It was pretty easy because all I had to do was make properly formatted directories and change URLs.
In slack @thomwolf called it a user-namespace dataset, but the docs call it community dataset.
I think the first naming is clearer, but I didn't address that here.
I didn't add metadata, will try that.
|
closed
|
https://github.com/huggingface/datasets/pull/756
| 2020-10-23T13:17:41
| 2020-10-26T12:55:20
| 2020-10-26T12:55:19
|
{
"login": "sshleifer",
"id": 6045025,
"type": "User"
}
|
[] | true
|
[] |
728,203,821
| 755
|
Start community-provided dataset docs V2
|
closed
|
https://github.com/huggingface/datasets/pull/755
| 2020-10-23T13:07:30
| 2020-10-23T13:15:37
| 2020-10-23T13:15:37
|
{
"login": "sshleifer",
"id": 6045025,
"type": "User"
}
|
[] | true
|
[] |
|
727,863,105
| 754
|
Use full released xsum dataset
|
#672 Fix xsum to expand coverage and include IDs
Code based on parser from older version of `datasets/xsum/xsum.py`
@lhoestq
|
closed
|
https://github.com/huggingface/datasets/pull/754
| 2020-10-23T03:29:49
| 2021-01-01T03:11:56
| 2020-10-26T12:56:58
|
{
"login": "jbragg",
"id": 2238344,
"type": "User"
}
|
[] | true
|
[] |
727,434,935
| 753
|
Fix doc links to viewer
|
It seems #733 forgot some links in the doc :)
|
closed
|
https://github.com/huggingface/datasets/pull/753
| 2020-10-22T14:20:16
| 2020-10-23T08:42:11
| 2020-10-23T08:42:11
|
{
"login": "Pierrci",
"id": 5020707,
"type": "User"
}
|
[] | true
|
[] |
726,917,801
| 752
|
Clicking on a metric in the search page points to datasets page giving "Missing dataset" warning
|
Hi! Sorry if this isn't the right place to talk about the website, I just didn't know exactly where to write this.
Searching for a metric in https://huggingface.co/metrics gives the right results, but clicking on a metric (e.g. ROUGE) points to https://huggingface.co/datasets/rouge. Clicking on a metric without searching points to the right page.
Thanks for all the great work!
|
closed
|
https://github.com/huggingface/datasets/issues/752
| 2020-10-21T22:56:23
| 2020-10-22T16:19:42
| 2020-10-22T16:19:42
|
{
"login": "ogabrielluiz",
"id": 24829397,
"type": "User"
}
|
[] | false
|
[] |
726,820,191
| 751
|
Error loading ms_marco v2.1 using load_dataset()
|
Code:
`dataset = load_dataset('ms_marco', 'v2.1')`
Error:
```
---------------------------------------------------------------------------
JSONDecodeError Traceback (most recent call last)
<ipython-input-16-34378c057212> in <module>()
9
10 # Downloading and loading a dataset
---> 11 dataset = load_dataset('ms_marco', 'v2.1')
10 frames
/usr/lib/python3.6/json/decoder.py in raw_decode(self, s, idx)
353 """
354 try:
--> 355 obj, end = self.scan_once(s, idx)
356 except StopIteration as err:
357 raise JSONDecodeError("Expecting value", s, err.value) from None
JSONDecodeError: Unterminated string starting at: line 1 column 388988661 (char 388988660)
```
|
closed
|
https://github.com/huggingface/datasets/issues/751
| 2020-10-21T19:54:43
| 2020-11-05T01:31:57
| 2020-11-05T01:31:57
|
{
"login": "JainSahit",
"id": 30478979,
"type": "User"
}
|
[] | false
|
[] |
726,589,446
| 750
|
load_dataset doesn't include `features` in its hash
|
It looks like the function `load_dataset` does not include what's passed in the `features` argument when creating a hash for a given dataset. As a result, if a user includes new features from an already downloaded dataset, those are ignored.
Example: some models on the hub have a different ordering for the labels than what `datasets` uses for MNLI so I'd like to do something along the lines of:
```
dataset = load_dataset("glue", "mnli")
features = dataset["train"].features
features["label"] = ClassLabel(names = ['entailment', 'contradiction', 'neutral']) # new label order
dataset = load_dataset("glue", "mnli", features=features)
```
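Until `features` is taken into account, the reordering can also be done with an explicit label mapping applied per example; a sketch with assumed label orders (the actual orders may differ):

```python
old_names = ["entailment", "neutral", "contradiction"]   # assumed existing order
new_names = ["entailment", "contradiction", "neutral"]   # order the model expects

# Map each old integer label to its index in the new ordering, e.g. for use
# in a call like dataset.map(lambda ex: {"label": remap[ex["label"]]}).
remap = {i: new_names.index(name) for i, name in enumerate(old_names)}
print(remap)  # {0: 0, 1: 2, 2: 1}
```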
|
closed
|
https://github.com/huggingface/datasets/issues/750
| 2020-10-21T15:16:41
| 2020-10-29T09:36:01
| 2020-10-29T09:36:01
|
{
"login": "sgugger",
"id": 35901082,
"type": "User"
}
|
[] | false
|
[] |
726,366,062
| 749
|
[XGLUE] Adding new dataset
|
XGLUE is a multilingual GLUE-like dataset proposed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance
|
closed
|
https://github.com/huggingface/datasets/issues/749
| 2020-10-21T10:51:36
| 2022-09-30T11:35:30
| 2021-01-06T10:02:55
|
{
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
}
|
[
{
"name": "dataset request",
"color": "e99695"
}
] | false
|
[] |
726,196,589
| 748
|
New version of CompGuessWhat?! with refined annotations
|
This pull request introduces a few fixes to the annotations for VisualGenome in the CompGuessWhat?! original split.
|
closed
|
https://github.com/huggingface/datasets/pull/748
| 2020-10-21T06:55:41
| 2020-10-21T08:52:42
| 2020-10-21T08:46:19
|
{
"login": "aleSuglia",
"id": 1479733,
"type": "User"
}
|
[] | true
|
[] |
725,884,704
| 747
|
Add Quail question answering dataset
|
QuAIL is a multi-domain RC dataset featuring news, blogs, fiction and user stories. Each domain is represented by 200 texts, which gives us a 4-way data split. The texts are 300-350 word excerpts from CC-licensed texts that were hand-picked so as to make sense to human readers without larger context. Domain diversity mitigates the issue of possible overlap between training and test data of large pre-trained models, which the current SOTA systems are based on. For instance, BERT is trained on Wikipedia + BookCorpus, and was tested on Wikipedia-based SQuAD (Devlin, Chang, Lee, & Toutanova, 2019).
https://text-machine-lab.github.io/blog/2020/quail/ @annargrs
|
closed
|
https://github.com/huggingface/datasets/pull/747
| 2020-10-20T19:33:14
| 2020-10-21T08:35:15
| 2020-10-21T08:35:15
|
{
"login": "sai-prasanna",
"id": 3595526,
"type": "User"
}
|
[] | true
|
[] |
725,627,235
| 746
|
dataset(ngt): add ngt dataset initial loading script
|
Currently this only makes the paths to the annotation ELAN (eaf) files and videos available.
This is the first accessible way to download this dataset that does not require manual, file-by-file downloading.
Only downloading the necessary files, the annotation files are very small, 20MB for all of them, but the video files are large, 100GB in total, saved in `mpg` format.
I do not intend to actually store these as an uncompressed array of frames, because it will be huge.
Future updates may add pose estimation files for all videos, making it easier to work with this data
|
closed
|
https://github.com/huggingface/datasets/pull/746
| 2020-10-20T14:04:58
| 2021-03-23T06:19:38
| 2021-03-23T06:19:38
|
{
"login": "AmitMY",
"id": 5757359,
"type": "User"
}
|
[] | true
|
[] |
725,589,352
| 745
|
Fix emotion description
|
Fixes the description of the emotion dataset to reflect the class names observed in the data, not the ones described in the paper.
I also took the liberty to make use of `ClassLabel` for the emotion labels.
|
closed
|
https://github.com/huggingface/datasets/pull/745
| 2020-10-20T13:28:39
| 2021-04-22T14:47:31
| 2020-10-21T08:38:27
|
{
"login": "lewtun",
"id": 26859204,
"type": "User"
}
|
[] | true
|
[] |
724,918,448
| 744
|
Dataset Explorer Doesn't Work for squad_es and squad_it
|
https://huggingface.co/nlp/viewer/?dataset=squad_es
https://huggingface.co/nlp/viewer/?dataset=squad_it
Both pages show "OSError: [Errno 28] No space left on device".
|
closed
|
https://github.com/huggingface/datasets/issues/744
| 2020-10-19T19:34:12
| 2020-10-26T16:36:17
| 2020-10-26T16:36:17
|
{
"login": "gaotongxiao",
"id": 22607038,
"type": "User"
}
|
[
{
"name": "nlp-viewer",
"color": "94203D"
}
] | false
|
[] |
724,703,980
| 743
|
load_dataset for CSV files not working
|
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
```python
from datasets import load_dataset

dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
```
Displayed error:
```
...
ArrowInvalid: CSV parse error: Expected 2 columns, got 1
```
I should mention that when I've tried to read data from `https://github.com/lhoestq/transformers/tree/custom-dataset-in-rag-retriever/examples/rag/test_data/my_knowledge_dataset.csv` it worked without a problem. I've read that there might be some problems with the `\r` character, so I've removed them from the custom dataset, but the problem still remains.
I've added a colab reproducing the bug, but unfortunately I cannot provide the dataset.
https://colab.research.google.com/drive/1Qzu7sC-frZVeniiWOwzoCe_UHZsrlxu8?usp=sharing
Are there any workarounds for it?
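One way to debug `Expected 2 columns, got 1` is to check that the delimiter actually matches the file; a stdlib sketch using `csv.Sniffer` on made-up sample data:

```python
import csv
import io

# Made-up tab-separated sample standing in for the real file.
sample = "title\ttext\nA\tfirst row\nB\tsecond row\n"

# Let the sniffer pick the delimiter from a set of plausible candidates.
dialect = csv.Sniffer().sniff(sample, delimiters=",\t;")
rows = list(csv.reader(io.StringIO(sample), dialect))
print(dialect.delimiter == "\t", rows[0])
```

If the sniffed delimiter disagrees with the one passed to `load_dataset`, that mismatch is the likely cause of the parse error.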
Thank you
|
open
|
https://github.com/huggingface/datasets/issues/743
| 2020-10-19T14:53:51
| 2025-04-24T06:35:25
| null |
{
"login": "iliemihai",
"id": 2815308,
"type": "User"
}
|
[] | false
|
[] |
724,509,974
| 742
|
Add OCNLI, a new CLUE dataset
|
OCNLI stands for Original Chinese Natural Language Inference. It is a corpus for
Chinese Natural Language Inference, collected following closely the procedures of MNLI,
but with enhanced strategies aiming for more challenging inference pairs. We want to
emphasize we did not use human/machine translation in creating the dataset, and thus
our Chinese texts are original and not translated.
|
closed
|
https://github.com/huggingface/datasets/pull/742
| 2020-10-19T11:06:33
| 2020-10-22T16:19:49
| 2020-10-22T16:19:48
|
{
"login": "JetRunner",
"id": 22514219,
"type": "User"
}
|
[] | true
|
[] |
723,924,275
| 741
|
Creating dataset consumes too much memory
|
Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, base_path, split):
""" Yields examples. """
filepath = os.path.join(base_path, "annotations", "manual", "PHOENIX-2014-T." + split + ".corpus.csv")
images_path = os.path.join(base_path, "features", "fullFrame-210x260px", split)
with open(filepath, "r", encoding="utf-8") as f:
data = csv.DictReader(f, delimiter="|", quoting=csv.QUOTE_NONE)
for row in data:
frames_path = os.path.join(images_path, row["video"])[:-7]
np_frames = []
for frame_name in os.listdir(frames_path):
frame_path = os.path.join(frames_path, frame_name)
im = Image.open(frame_path)
np_frames.append(np.asarray(im))
im.close()
yield row["name"], {"video": np_frames}
```
The dataset creation process runs out of memory on a machine with 500GB RAM.
I was under the impression that the "generator" here is exactly for that, to avoid memory constraints.
However, even if you want the entire dataset in memory, it would be in the worst case
`260x210x3 x 400 max length x 7000 samples` in bytes (uint8) = 458.64 gigabytes
So I'm not sure why it's taking more than 500GB.
And the dataset creation fails after 170 examples on a machine with 120gb RAM, and after 672 examples on a machine with 500GB RAM.
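One way to keep memory bounded during creation is to yield frame *paths* and defer image decoding to training time; a sketch under that assumption (the `frame_paths` field name is illustrative):

```python
import os
import tempfile

def generate_examples(frames_root):
    # Lower-memory variant: yield the frame paths and decode lazily at
    # training time, instead of materializing every frame as a numpy array
    # during dataset creation.
    for video in sorted(os.listdir(frames_root)):
        frames_dir = os.path.join(frames_root, video)
        frame_paths = [os.path.join(frames_dir, f) for f in sorted(os.listdir(frames_dir))]
        yield video, {"frame_paths": frame_paths}

# Tiny demo on a throwaway directory structure.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "video_0"))
open(os.path.join(root, "video_0", "frame_000.png"), "w").close()
print(list(generate_examples(root)))
```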
---
## Info that might help:
Iterating over examples is extremely slow.

If I perform this iteration in my own custom loop (without saving to file), it runs at 8-9 examples/sec
And you can see at this state it is using 94% of the memory:

And it is only using one CPU core, which is probably why it's so slow:

|
closed
|
https://github.com/huggingface/datasets/issues/741
| 2020-10-18T06:07:06
| 2022-02-15T17:03:10
| 2022-02-15T17:03:10
|
{
"login": "AmitMY",
"id": 5757359,
"type": "User"
}
|
[] | false
|
[] |
723,047,958
| 740
|
Fix TREC urls
|
The old TREC urls are now redirections.
I updated the urls to the new ones, since we don't support redirections for downloads.
Fix #737
|
closed
|
https://github.com/huggingface/datasets/pull/740
| 2020-10-16T09:11:28
| 2020-10-19T08:54:37
| 2020-10-19T08:54:36
|
{
"login": "lhoestq",
"id": 42851186,
"type": "User"
}
|
[] | true
|
[] |
723,044,066
| 739
|
Add wiki dpr multiset embeddings
|
There are two DPR encoders, one trained on Natural Questions and one trained on a multiset/hybrid dataset.
Previously only the embeddings from the encoder trained on NQ were available. I'm adding the ones from the encoder trained on the multiset/hybrid dataset.
In the configuration you can now specify `embeddings_name="nq"` or `embeddings_name="multiset"`
|
closed
|
https://github.com/huggingface/datasets/pull/739
| 2020-10-16T09:05:49
| 2020-11-26T14:02:50
| 2020-11-26T14:02:49
|
{
"login": "lhoestq",
"id": 42851186,
"type": "User"
}
|
[] | true
|
[] |
723,033,923
| 738
|
Replace seqeval code with original classification_report for simplicity
|
Recently, the original seqeval has added support for retrieving per-type scores and overall scores as a dictionary.
This PR replaces the current code with the original function(`classification_report`) to simplify it.
Also, the original code has been updated to fix #352.
- Related issue: https://github.com/chakki-works/seqeval/pull/38
```python
from datasets import load_metric
metric = load_metric("seqeval")
y_true = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
y_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
metric.compute(predictions=y_pred, references=y_true)
# Output: {'MISC': {'precision': 0.0, 'recall': 0.0, 'f1': 0, 'number': 1}, 'PER': {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1}, 'overall_precision': 0.5, 'overall_recall': 0.5, 'overall_f1': 0.5, 'overall_accuracy': 0.8}
```
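As a quick sanity check on the overall scores in the example output, the overall F1 follows from overall precision and recall via the standard harmonic-mean formula (this is plain arithmetic, not library code):

```python
def f1(precision, recall):
    # Harmonic mean of precision and recall; 0.0 when both are zero.
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

# Overall precision and recall from the example output above.
print(f1(0.5, 0.5))  # 0.5
```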
|
closed
|
https://github.com/huggingface/datasets/pull/738
| 2020-10-16T08:51:45
| 2021-01-21T16:07:15
| 2020-10-19T10:31:12
|
{
"login": "Hironsan",
"id": 6737785,
"type": "User"
}
|
[] | true
|
[] |
722,463,923
| 737
|
Trec Dataset Connection Error
|
**Datasets Version:**
1.1.2
**Python Version:**
3.6/3.7
**Code:**
```python
from datasets import load_dataset
load_dataset("trec")
```
**Expected behavior:**
Download Trec dataset and load Dataset object
**Current Behavior:**
Get a connection error saying it couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label (but the link doesn't seem broken)
<details>
<summary>Error Logs</summary>
```
Using custom data configuration default
Downloading and preparing dataset trec/default (download: 350.79 KiB, generated: 403.39 KiB, post-processed: Unknown size, total: 754.18 KiB) to /root/.cache/huggingface/datasets/trec/default/1.1.0/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7...
---------------------------------------------------------------------------
ConnectionError                           Traceback (most recent call last)
<ipython-input-8-66bf1242096e> in <module>()
----> 1 load_dataset("trec")
10 frames
/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag)
    473         elif response is not None and response.status_code == 404:
    474             raise FileNotFoundError("Couldn't find file at {}".format(url))
--> 475         raise ConnectionError("Couldn't reach {}".format(url))
    476
    477     # Try a second time
ConnectionError: Couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label
```
</details>
|
closed
|
https://github.com/huggingface/datasets/issues/737
| 2020-10-15T15:57:53
| 2020-10-19T08:54:36
| 2020-10-19T08:54:36
|
{
"login": "aychang95",
"id": 10554495,
"type": "User"
}
|
[] | false
|
[] |
722,348,191
| 736
|
Start community-provided dataset docs
|
This is one I did to get the pseudo-labels updated. Not sure if it generalizes, but I figured I would write it down. It was pretty easy because all I had to do was make properly formatted directories and change URLs.
+ In Slack @thomwolf called it a `user-namespace` dataset, but the docs call it a `community dataset`.
I think the first naming is clearer, but I didn't address that here.
+ I didn't add metadata, will try that.
|
closed
|
https://github.com/huggingface/datasets/pull/736
| 2020-10-15T13:41:39
| 2020-10-23T13:15:28
| 2020-10-23T13:15:28
|
{
"login": "sshleifer",
"id": 6045025,
"type": "User"
}
|
[] | true
|
[] |
722,225,270
| 735
|
Throw error when an unexpected key is used in data_files
|
I have found that only "train", "validation" and "test" are valid keys in the `data_files` argument. When you use any other key, the attached files are silently ignored, leading to unexpected behaviour for users.
So the following, unintuitively, returns only one key (namely `train`).
```python
datasets = load_dataset("text", data_files={"train": train_f, "valid": valid_f})
print(datasets.keys())
# dict_keys(['train'])
```
whereas using `validation` instead, does return the expected result:
```python
datasets = load_dataset("text", data_files={"train": train_f, "validation": valid_f})
print(datasets.keys())
# dict_keys(['train', 'validation'])
```
I would like to see more freedom in which keys one can use, but if that is not possible at least an error should be thrown when using an unexpected key.
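A minimal sketch of the kind of guard being requested (the function and constant names here are hypothetical illustrations, not the actual `datasets` internals):

```python
# Hypothetical validation helper: reject data_files keys that would
# otherwise be silently dropped by load_dataset.
ALLOWED_SPLITS = {"train", "validation", "test"}


def check_data_files_keys(data_files):
    """Raise a ValueError for split names outside the allowed set."""
    unexpected = set(data_files) - ALLOWED_SPLITS
    if unexpected:
        raise ValueError(
            f"Unexpected keys in data_files: {sorted(unexpected)}; "
            f"allowed keys are {sorted(ALLOWED_SPLITS)}"
        )
```

With such a check, `{"train": ..., "valid": ...}` would fail loudly instead of returning only a `train` split.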
|
closed
|
https://github.com/huggingface/datasets/issues/735
| 2020-10-15T10:55:27
| 2020-10-30T13:23:52
| 2020-10-30T13:23:52
|
{
"login": "BramVanroy",
"id": 2779410,
"type": "User"
}
|
[] | false
|
[] |
721,767,848
| 734
|
Fix GLUE metric description
|
Small typo: the description says translation instead of prediction.
|
closed
|
https://github.com/huggingface/datasets/pull/734
| 2020-10-14T20:44:14
| 2020-10-15T09:27:43
| 2020-10-15T09:27:42
|
{
"login": "sgugger",
"id": 35901082,
"type": "User"
}
|
[] | true
|
[] |
721,366,744
| 733
|
Update link to dataset viewer
|
Change 404 error links in quick tour to working ones
|
closed
|
https://github.com/huggingface/datasets/pull/733
| 2020-10-14T11:13:23
| 2020-10-14T14:07:31
| 2020-10-14T14:07:31
|
{
"login": "negedng",
"id": 12969168,
"type": "User"
}
|
[] | true
|
[] |
721,359,448
| 732
|
dataset(wlasl): initial loading script
|
It takes about 9-10 hours to download all of the videos for the dataset, but it does finish :)
|
closed
|
https://github.com/huggingface/datasets/pull/732
| 2020-10-14T11:01:42
| 2021-03-23T06:19:43
| 2021-03-23T06:19:43
|
{
"login": "AmitMY",
"id": 5757359,
"type": "User"
}
|
[] | true
|
[] |
721,142,985
| 731
|
dataset(aslg_pc12): initial loading script
|
This contains the only current public part of this corpus.
The rest of the corpus has not yet been made public, but this sample is still being used by researchers.
|
closed
|
https://github.com/huggingface/datasets/pull/731
| 2020-10-14T05:14:37
| 2020-10-28T15:27:06
| 2020-10-28T15:27:06
|
{
"login": "AmitMY",
"id": 5757359,
"type": "User"
}
|
[] | true
|
[] |