Dataset schema (from the dataset viewer): `title` (string, 1–290 chars), `body` (string, 0–228k chars, nullable), `html_url` (string, 46–51 chars), `comments` (list), `pull_request` (dict), `number` (int64, 1–5.59k), `is_pull_request` (bool, 2 classes).
Add seed in metrics
With #361 we noticed that some metrics were not deterministic. In this PR I allow the user to specify numpy's seed when instantiating a metric with `load_metric`. The seed is set only when `compute` is called, and reset afterwards. Moreover, when calling `compute` with the same metric instance (i.e. the same experiment_id), the metric will always return the same results given the same inputs. This is the case even if the seed was not specified by the user, as the previous seed is going to be reused. However, instantiating a metric twice (two different experiments) without specifying a seed can produce different results.
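A minimal sketch of the behaviour described above, assuming the `seed` keyword introduced in this PR (the metric name and the exact `compute` signature are only illustrative):

```python
import nlp

# Hypothetical usage of the seeding added in this PR; `seed` and the exact
# `compute` signature may differ in the released API.
rouge = nlp.load_metric("rouge", seed=42)

predictions = ["the cat sat on the mat"]
references = ["the cat was sitting on the mat"]

# Two calls on the same metric instance (same experiment_id) with the same
# inputs should return identical scores.
first = rouge.compute(predictions, references)
second = rouge.compute(predictions, references)
assert first == second
```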
https://github.com/huggingface/datasets/pull/404
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/404", "html_url": "https://github.com/huggingface/datasets/pull/404", "diff_url": "https://github.com/huggingface/datasets/pull/404.diff", "patch_url": "https://github.com/huggingface/datasets/pull/404.patch", "merged_at": "2020-07-20T10:12:34" }
404
true
return python objects instead of arrays by default
We were using `to_pandas()` to convert from arrow types; however, it returns numpy arrays instead of python lists. I fixed it by using `to_pydict`/`to_pylist` instead. Fix #387. It was mentioned in https://github.com/huggingface/transformers/issues/5729
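For illustration, a small sketch of the difference (not the library's internal code), using pyarrow directly:

```python
import pyarrow as pa

# A list column converted through to_pandas() comes back as numpy arrays,
# while to_pydict() returns plain python lists.
table = pa.table({"input_ids": [[101, 102], [101, 103, 102]]})

print(type(table.to_pandas()["input_ids"][0]))  # <class 'numpy.ndarray'>
print(type(table.to_pydict()["input_ids"][0]))  # <class 'list'>
```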
https://github.com/huggingface/datasets/pull/403
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/403", "html_url": "https://github.com/huggingface/datasets/pull/403", "diff_url": "https://github.com/huggingface/datasets/pull/403.diff", "patch_url": "https://github.com/huggingface/datasets/pull/403.patch", "merged_at": "2020-07-17T11:37:00" }
403
true
Search qa
add SearchQA dataset #336
https://github.com/huggingface/datasets/pull/402
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/402", "html_url": "https://github.com/huggingface/datasets/pull/402", "diff_url": "https://github.com/huggingface/datasets/pull/402.diff", "patch_url": "https://github.com/huggingface/datasets/pull/402.patch", "merged_at": "2020-07-16T14:26:59" }
402
true
add web_questions
add the Web Questions dataset #336. Maybe @patrickvonplaten you can help with the dummy_data structure? It's still broken.
https://github.com/huggingface/datasets/pull/401
[ "What does the `nlp-cli dummy_data` command returns ?", "`test.json` -> `test` \r\nand \r\n`train.json` -> `train`\r\n\r\nas shown by the `nlp-cli dummy_data` command ;-)", "LGTM for merge @lhoestq - I let you merge if you want to." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/401", "html_url": "https://github.com/huggingface/datasets/pull/401", "diff_url": "https://github.com/huggingface/datasets/pull/401.diff", "patch_url": "https://github.com/huggingface/datasets/pull/401.patch", "merged_at": "2020-08-06T06:16:19" }
401
true
Web questions
add the WebQuestion dataset #336
https://github.com/huggingface/datasets/pull/400
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/400", "html_url": "https://github.com/huggingface/datasets/pull/400", "diff_url": "https://github.com/huggingface/datasets/pull/400.diff", "patch_url": "https://github.com/huggingface/datasets/pull/400.patch", "merged_at": null }
400
true
Spelling mistake
In "Formatting the dataset" part, "The two toehr modifications..." should be "The two other modifications..." ,the word "other" wrong spelled as "toehr".
https://github.com/huggingface/datasets/pull/399
[ "Thanks!" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/399", "html_url": "https://github.com/huggingface/datasets/pull/399", "diff_url": "https://github.com/huggingface/datasets/pull/399.diff", "patch_url": "https://github.com/huggingface/datasets/pull/399.patch", "merged_at": "2020-07-16T06:49:37" }
399
true
Add inline links
Add inline links to `Contributing.md`
https://github.com/huggingface/datasets/pull/398
[ "Do you mind adding a link to the much more extended pages on adding and sharing a dataset in the new documentation?", "Sure, I will do that too" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/398", "html_url": "https://github.com/huggingface/datasets/pull/398", "diff_url": "https://github.com/huggingface/datasets/pull/398.diff", "patch_url": "https://github.com/huggingface/datasets/pull/398.patch", "merged_at": "2020-07-22T10:14:22" }
398
true
Add contiguous sharding
This makes dset.shard() play nice with nlp.concatenate_datasets(). When I originally wrote the shard() method, I was thinking about a distributed training scenario, but https://github.com/huggingface/nlp/pull/389 also uses it for splitting the dataset for distributed preprocessing. Usage: ``` nlp.concatenate_datasets([dset.shard(n, i, contiguous=True) for i in range(n)]) ```
https://github.com/huggingface/datasets/pull/397
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/397", "html_url": "https://github.com/huggingface/datasets/pull/397", "diff_url": "https://github.com/huggingface/datasets/pull/397.diff", "patch_url": "https://github.com/huggingface/datasets/pull/397.patch", "merged_at": "2020-07-17T16:59:30" }
397
true
Fix memory issue when doing select
We were passing the `nlp.Dataset` object to get the hash for the new dataset's file name. Fix #395
https://github.com/huggingface/datasets/pull/396
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/396", "html_url": "https://github.com/huggingface/datasets/pull/396", "diff_url": "https://github.com/huggingface/datasets/pull/396.diff", "patch_url": "https://github.com/huggingface/datasets/pull/396.patch", "merged_at": "2020-07-16T08:07:30" }
396
true
Memory issue when doing select
As noticed in #389, the following code loads the entire wikipedia dataset in memory. ```python import nlp w = nlp.load_dataset("wikipedia", "20200501.en", split="train") w.select([0]) ``` This is caused by [this line](https://github.com/huggingface/nlp/blob/master/src/nlp/arrow_dataset.py#L626), which for some reason tries to serialize the function together with all the wikipedia data. It's not the case with `.map` or `.filter`. However, functions that are based on `.select`, like `.shuffle`, `.shard`, `.train_test_split` and `.sort`, are affected.
https://github.com/huggingface/datasets/issues/395
[]
null
395
false
Remove remaining nested dict
This PR deletes the remaining unnecessary nested dict #378
https://github.com/huggingface/datasets/pull/394
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/394", "html_url": "https://github.com/huggingface/datasets/pull/394", "diff_url": "https://github.com/huggingface/datasets/pull/394.diff", "patch_url": "https://github.com/huggingface/datasets/pull/394.patch", "merged_at": "2020-07-16T07:39:51" }
394
true
Fix extracted files directory for the DownloadManager
The cache dir was often cluttered by extracted files because of the download manager. For downloaded files, we are using the `downloads` directory to make things easier to navigate, but extracted files were still placed at the root of the cache directory. To fix that I changed the directory for extracted files to cache_dir/downloads/extracted.
https://github.com/huggingface/datasets/pull/393
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/393", "html_url": "https://github.com/huggingface/datasets/pull/393", "diff_url": "https://github.com/huggingface/datasets/pull/393.diff", "patch_url": "https://github.com/huggingface/datasets/pull/393.patch", "merged_at": "2020-07-17T17:02:14" }
393
true
Style change detection
Another [PAN task](https://pan.webis.de/clef20/pan20-web/style-change-detection.html). This time about identifying when the style/author changes in documents. - There's the possibility of adding the [PAN19](https://zenodo.org/record/3577602) and PAN18 style change detection tasks too (these are datasets whose labels are a subset of PAN20's). These would probably make more sense as separate datasets (like wmt is now) - I've converted the integer 0,1 values to a boolean - Using manually downloaded data again. This might be changed at some point following the discussion in https://github.com/huggingface/nlp/pull/349.
https://github.com/huggingface/datasets/pull/392
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/392", "html_url": "https://github.com/huggingface/datasets/pull/392", "diff_url": "https://github.com/huggingface/datasets/pull/392.diff", "patch_url": "https://github.com/huggingface/datasets/pull/392.patch", "merged_at": "2020-07-17T17:13:23" }
392
true
Concatenate datasets
I'm constructing the "WikiBooks" dataset, which is a concatenation of Wikipedia & BookCorpus. So I implemented the `Dataset.from_concat()` method, which concatenates two datasets with the same schema. This would also be useful if someone wants to pretrain on a large generic dataset + their own custom dataset. Not in love with the method name, so would love to hear suggestions. Usage: ```python from nlp import Dataset, load_dataset data1, data2 = {"id": [0, 1, 2]}, {"id": [3, 4, 5]} dset1, dset2 = Dataset.from_dict(data1), Dataset.from_dict(data2) dset_concat = Dataset.from_concat([dset1, dset2]) print(dset_concat) # Dataset(schema: {'id': 'int64'}, num_rows: 6) ```
https://github.com/huggingface/datasets/pull/390
[ "Looks cool :)\r\n\r\nI feel like \r\n```python\r\nconcatenated_dataset = dataset1.concatenate(dataset2)\r\n```\r\ncould be more natural. What do you think ?\r\n\r\nAlso could you also concatenate the `nlp.Dataset._data_files` ?\r\n```python\r\nreturn cls(table, info=info, split=split, data_files=self._data_files +...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/390", "html_url": "https://github.com/huggingface/datasets/pull/390", "diff_url": "https://github.com/huggingface/datasets/pull/390.diff", "patch_url": "https://github.com/huggingface/datasets/pull/390.patch", "merged_at": "2020-07-22T09:49:58" }
390
true
Fix pickling of SplitDict
It would be nice to pickle and unpickle Datasets, as done in [this tutorial](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb). Example: ``` wiki = nlp.load_dataset('wikipedia', split='train') def sentencize(examples): ... wiki = wiki.map(sentencize, batched=True) torch.save(wiki, 'sentencized_wiki_dataset.pt') ``` However, upon unpickling the dataset via torch.load(...), this error is raised: ``` ValueError("Cannot add elem. Use .add() instead.") ``` On line [492 of splits.py](https://github.com/huggingface/nlp/blob/master/src/nlp/splits.py#L492). This is because SplitDict subclasses dict, and pickle treats dicts specially. Pickle expects access to `dict.__setitem__`, but this is disallowed by the class. The workaround is to provide an explicit interface for pickle to call when pickling and unpickling, thereby avoiding the use of `__setitem__`. Testing: - Manually pickled and unpickled a modified wikipedia dataset. - Ran `make style` I would be happy to run any other tests, but I couldn't find any in the contributing guidelines.
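For illustration, a minimal sketch (not the actual `nlp.SplitDict` code) of the kind of workaround described: a dict subclass that forbids `__setitem__` breaks the default pickling path, so an explicit `__reduce__` hook rebuilds it without going through `__setitem__`:

```python
import pickle

def _rebuild_guarded_dict(items):
    # Rebuild the object through its sanctioned .add() interface.
    obj = GuardedDict()
    for key, value in items.items():
        obj.add(key, value)
    return obj


class GuardedDict(dict):
    def __setitem__(self, key, value):
        raise ValueError("Cannot add elem. Use .add() instead.")

    def add(self, key, value):
        super().__setitem__(key, value)

    def __reduce__(self):
        # Tell pickle how to rebuild from a plain dict, bypassing __setitem__.
        return (_rebuild_guarded_dict, (dict(self),))


d = GuardedDict()
d.add("train", 100)
assert pickle.loads(pickle.dumps(d)) == d
```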
https://github.com/huggingface/datasets/pull/389
[ "By the way, the reason this is an issue for me is because I want to be able to \"save\" changes made to a dataset by writing something to disk. In this case, I would like to pre-process my dataset once, and then train multiple models on the dataset later without having to re-process the data. \r\n\r\nIs pickling/u...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/389", "html_url": "https://github.com/huggingface/datasets/pull/389", "diff_url": "https://github.com/huggingface/datasets/pull/389.diff", "patch_url": "https://github.com/huggingface/datasets/pull/389.patch", "merged_at": null }
389
true
πŸ› [Dataset] Cannot download wmt14, wmt15 and wmt17
1. I tried downloading `wmt14`, `wmt15`, `wmt17`, `wmt19` with the following code: ``` nlp.load_dataset('wmt14','de-en') nlp.load_dataset('wmt15','de-en') nlp.load_dataset('wmt17','de-en') nlp.load_dataset('wmt19','de-en') ``` The code runs but the download speed is **extremely slow**; the same behaviour is not observed on `wmt16` and `wmt18`. 2. When trying to download `wmt17 zh-en`, I got the following error: > ConnectionError: Couldn't reach https://storage.googleapis.com/tfdataset-data/downloadataset/uncorpus/UNv1.0.en-zh.tar.gz
https://github.com/huggingface/datasets/issues/388
[ "similar slow download speed here for nlp.load_dataset('wmt14', 'fr-en')\r\n`\r\nDownloading: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 658M/658M [1:00:42<00:00, 181kB/s]\r\nDownloading: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 918M/918M [1:39:38<00:00, 154kB/s]\r\nDow...
null
388
false
Conversion through to_pandas outputs numpy arrays for lists instead of python objects
In a related question, the conversion through to_pandas output numpy arrays for the lists instead of python objects. Here is an example: ```python >>> dataset._data.slice(key, 1).to_pandas().to_dict("list") {'sentence1': ['Amrozi accused his brother , whom he called " the witness " , of deliberately distorting his evidence .'], 'sentence2': ['Referring to him as only " the witness " , Amrozi accused his brother of deliberately distorting his evidence .'], 'label': [1], 'idx': [0], 'input_ids': [array([ 101, 7277, 2180, 5303, 4806, 1117, 1711, 117, 2292, 1119, 1270, 107, 1103, 7737, 107, 117, 1104, 9938, 4267, 12223, 21811, 1117, 2554, 119, 102])], 'token_type_ids': [array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])], 'attention_mask': [array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])]} >>> type(dataset._data.slice(key, 1).to_pandas().to_dict("list")['input_ids'][0]) <class 'numpy.ndarray'> >>> dataset._data.slice(key, 1).to_pydict() {'sentence1': ['Amrozi accused his brother , whom he called " the witness " , of deliberately distorting his evidence .'], 'sentence2': ['Referring to him as only " the witness " , Amrozi accused his brother of deliberately distorting his evidence .'], 'label': [1], 'idx': [0], 'input_ids': [[101, 7277, 2180, 5303, 4806, 1117, 1711, 117, 2292, 1119, 1270, 107, 1103, 7737, 107, 117, 1104, 9938, 4267, 12223, 21811, 1117, 2554, 119, 102]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]} ```
https://github.com/huggingface/datasets/issues/387
[ "To convert from arrow type we have three options: to_numpy, to_pandas and to_pydict/to_pylist.\r\n\r\n- to_numpy and to_pandas return numpy arrays instead of lists but are very fast.\r\n- to_pydict/to_pylist can be 100x slower and become the bottleneck for reading data, but at least they return lists.\r\n\r\nMaybe...
null
387
false
Update dataset loading and features - Add TREC dataset
This PR: - adds a template for a new dataset script - updates the caching structure so that the path to the cached data files is also a function of the dataset loading script hash. This way, when you update a loading script the data will be automatically updated instead of falling back to the previous version (which is usually outdated). This makes it in particular easier to iterate when writing a new dataset loading script. - fixes a bug in the `ClassLabel` feature and makes it more flexible so that its methods `str2int` and `int2str` can also accept lists, numpy arrays and PyTorch/TensorFlow tensors. - adds the TREC-6 dataset
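A small sketch of the more flexible `ClassLabel` behaviour described above (the TREC-6 label names here are assumed, and the accepted input types depend on the released version):

```python
import nlp

# Assumed TREC-6 coarse label set; str2int/int2str accepting lists is the
# added flexibility described in this PR.
labels = nlp.features.ClassLabel(names=["DESC", "ENTY", "ABBR", "HUM", "NUM", "LOC"])

print(labels.str2int("HUM"))            # 3
print(labels.str2int(["DESC", "LOC"]))  # [0, 5]
print(labels.int2str([0, 5]))           # ['DESC', 'LOC']
```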
https://github.com/huggingface/datasets/pull/386
[ "I just copied the files that are on google storage to follow the new `_relative_data_dir ` format. It should be good to merge now :)\r\n\r\nWell actually it seems there are some merge conflicts to fix first" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/386", "html_url": "https://github.com/huggingface/datasets/pull/386", "diff_url": "https://github.com/huggingface/datasets/pull/386.diff", "patch_url": "https://github.com/huggingface/datasets/pull/386.patch", "merged_at": "2020-07-16T08:17:58" }
386
true
Remove unnecessary nested dict
This PR removes the unnecessary nested dictionaries used in some datasets. For now the following datasets are updated: - MLQA - RACE Will be adding more if necessary. #378
https://github.com/huggingface/datasets/pull/385
[ "We can probably scan the dataset scripts with a regexpr to try to identify this pattern cc @patrickvonplaten maybe", "@mariamabarham This script should work. I tested it for a couple of datasets. There might be exceptions where the script breaks - did not test everything.\r\n\r\n```python\r\n#!/usr/bin/env pytho...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/385", "html_url": "https://github.com/huggingface/datasets/pull/385", "diff_url": "https://github.com/huggingface/datasets/pull/385.diff", "patch_url": "https://github.com/huggingface/datasets/pull/385.patch", "merged_at": "2020-07-15T10:03:53" }
385
true
Adding the Linguistic Code-switching Evaluation (LinCE) benchmark
Hi, First of all, this library is really cool! Thanks for putting all of this together! This PR contains the [Linguistic Code-switching Evaluation (LinCE) benchmark](https://ritual.uh.edu/lince). As described in the official website (FAQ): > 1. Why do we need LinCE? >LinCE brings 10 code-switching datasets together for 4 tasks and 4 language pairs with 5 leaderboards in a single evaluation platform. We examined each dataset and fixed major issues on the partitions (or even define official partitions) with a comprehensive stratification method (see our paper for more details). >Besides, we believe that online benchmarks like LinCE bring steady research progress and allow to compare state-of-the-art models at the pace of the progress in NLP. We expect to benefit greatly the code-switching community with this benchmark. The data comes from social media and here's the summary table of tasks per language pair: | Language Pairs | LID | POS | NER | SA | |----------------------------------------|-----|-----|-----|----| | Spanish-English | βœ… | βœ… | βœ… | βœ… | | Hindi-English | βœ… | βœ… | βœ… | | | Modern Standard Arabic-Egyptian Arabic | βœ… | | βœ… | | | Nepali-English | βœ… | | | | The tasks are as follows: * LID: token-level language identification * POS: part-of-speech tagging * NER: named entity recognition * SA: sentiment analysis With the exception of MSA-EA, the rest of the datasets contain token-level LID labels. ## Usage For Spanish-English LID, we can load the data as follows: ``` import nlp data = nlp.load_dataset('./datasets/lince/lince.py', 'lid_spaeng') for split in data: print(data[split]) ``` Here's the output: ``` Dataset(schema: {'idx': 'int32', 'tokens': 'list<item: string>', 'lid': 'list<item: string>'}, num_rows: 21030) Dataset(schema: {'idx': 'int32', 'tokens': 'list<item: string>', 'lid': 'list<item: string>'}, num_rows: 3332) Dataset(schema: {'idx': 'int32', 'tokens': 'list<item: string>', 'lid': 'list<item: string>'}, num_rows: 8289) ``` Here's the list of shortcut names for every dataset available in LinCE: * `lid_spaeng` * `lid_hineng` * `lid_nepeng` * `lid_msaea` * `pos_spaeng` * `pos_hineng` * `ner_spaeng` * `ner_hineng` * `ner_msaea` * `sa_spaeng` All the numbers match with Table 3 in the LinCE [paper](https://www.aclweb.org/anthology/2020.lrec-1.223.pdf). Also, note that the MSA-EA datasets use the Persian script while the other datasets use the Roman script. 
## Features Here is how the features look in the case of language identification (LID) tasks: | LID Feature | Type | Description | |----------------------|---------------|-------------------------------------------| | `idx` | `int` | Dataset index of current sentence | | `tokens` | `list<str>` | List of tokens (string) of a sentence | | `lid` | `list<str>` | List of LID labels (string) of a sentence | For part-of-speech (POS) tagging: | POS Feature | Type | Description | |----------------------|---------------|-------------------------------------------| | `idx` | `int` | Dataset index of current sentence | | `tokens` | `list<str>` | List of tokens (string) of a sentence | | `lid` | `list<str>` | List of LID labels (string) of a sentence | | `pos` | `list<str>` | List of POS tags (string) of a sentence | For named entity recognition (NER): | NER Feature | Type | Description | |----------------------|---------------|-------------------------------------------| | `idx` | `int` | Dataset index of current sentence | | `tokens` | `list<str>` | List of tokens (string) of a sentence | | `lid` | `list<str>` | List of LID labels (string) of a sentence | | `ner` | `list<str>` | List of NER labels (string) of a sentence | **NOTE**: the MSA-EA NER dataset does not contain the `lid` feature. For sentiment analysis (SA): | SA Feature | Type | Description | |---------------------|-------------|-------------------------------------------| | `idx` | `int` | Dataset index of current sentence | | `tokens` | `list<str>` | List of tokens (string) of a sentence | | `lid` | `list<str>` | List of LID labels (string) of a sentence | | `sa` | `str` | Sentiment label (string) of a sentence |
https://github.com/huggingface/datasets/pull/383
[ "I am checking the details of the CI log for the failed test, but I don't see how the error relates to the code I added; the error is coming from a config builder different than the `LinceConfig`, and it crashes when `self.config.data_files` because is self.config is None. I would appreciate if someone could help m...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/383", "html_url": "https://github.com/huggingface/datasets/pull/383", "diff_url": "https://github.com/huggingface/datasets/pull/383.diff", "patch_url": "https://github.com/huggingface/datasets/pull/383.patch", "merged_at": "2020-07-16T16:19:46" }
383
true
1080
https://github.com/huggingface/datasets/issues/382
[]
null
382
false
NLp
https://github.com/huggingface/datasets/issues/381
[]
null
381
false
[dataset] Structure of MLQA seems unecessary nested
The features of the MLQA dataset comprise several nested dictionaries with a single element inside (for `questions` and `ids`): https://github.com/huggingface/nlp/blob/master/datasets/mlqa/mlqa.py#L90-L97 Should we keep this @mariamabarham @patrickvonplaten? Was this added for compatibility with tfds? ```python features=nlp.Features( { "context": nlp.Value("string"), "questions": nlp.features.Sequence({"question": nlp.Value("string")}), "answers": nlp.features.Sequence( {"text": nlp.Value("string"), "answer_start": nlp.Value("int32"),} ), "ids": nlp.features.Sequence({"idx": nlp.Value("string")}) ```
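For comparison, a hedged sketch of what an un-nested feature definition could look like (the schema actually adopted is decided in the linked PRs):

```python
import nlp

# Possible flattened schema; not necessarily the final one merged for MLQA.
features = nlp.Features(
    {
        "context": nlp.Value("string"),
        "questions": nlp.features.Sequence(nlp.Value("string")),
        "answers": nlp.features.Sequence(
            {"text": nlp.Value("string"), "answer_start": nlp.Value("int32")}
        ),
        "ids": nlp.features.Sequence(nlp.Value("string")),
    }
)
```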
https://github.com/huggingface/datasets/issues/378
[ "Same for the RACE dataset: https://github.com/huggingface/nlp/blob/master/datasets/race/race.py\r\n\r\nShould we scan all the datasets to remove this pattern of un-necessary nesting?", "You're right, I think we don't need to use the nested dictionary. \r\n" ]
null
378
false
Iyy!!!
https://github.com/huggingface/datasets/issues/377
[]
null
377
false
to_pandas conversion doesn't always work
For some complex nested types, the conversion from Arrow to python dict through pandas doesn't seem to be possible. Here is an example using the official SQUAD v2 JSON file. This example was found while investigating #373. ```python >>> squad = load_dataset('json', data_files={nlp.Split.TRAIN: ["./train-v2.0.json"]}, download_mode=nlp.GenerateMode.FORCE_REDOWNLOAD, version="1.0.0", field='data') >>> squad['train'] Dataset(schema: {'title': 'string', 'paragraphs': 'list<item: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string>>'}, num_rows: 442) >>> squad['train'][0] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/thomwolf/Documents/GitHub/datasets/src/nlp/arrow_dataset.py", line 589, in __getitem__ format_kwargs=self._format_kwargs, File "/Users/thomwolf/Documents/GitHub/datasets/src/nlp/arrow_dataset.py", line 529, in _getitem outputs = self._unnest(self._data.slice(key, 1).to_pandas().to_dict("list")) File "pyarrow/array.pxi", line 559, in pyarrow.lib._PandasConvertible.to_pandas File "pyarrow/table.pxi", line 1367, in pyarrow.lib.Table._to_pandas File "/Users/thomwolf/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/pandas_compat.py", line 766, in table_to_blockmanager blocks = _table_to_blocks(options, table, categories, ext_columns_dtypes) File "/Users/thomwolf/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/pandas_compat.py", line 1101, in _table_to_blocks list(extension_columns.keys())) File "pyarrow/table.pxi", line 881, in pyarrow.lib.table_to_blocks File "pyarrow/error.pxi", line 105, in pyarrow.lib.check_status pyarrow.lib.ArrowNotImplementedError: Not implemented type for Arrow list to pandas: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string> ``` cc @lhoestq would we have a way to detect this from the schema maybe? 
Here is the schema for this pretty complex JSON: ```python >>> squad['train'].schema title: string paragraphs: list<item: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string>> child 0, item: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string> child 0, qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>> child 0, item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>> child 0, question: string child 1, id: string child 2, answers: list<item: struct<text: string, answer_start: int64>> child 0, item: struct<text: string, answer_start: int64> child 0, text: string child 1, answer_start: int64 child 3, is_impossible: bool child 4, plausible_answers: list<item: struct<text: string, answer_start: int64>> child 0, item: struct<text: string, answer_start: int64> child 0, text: string child 1, answer_start: int64 child 1, context: string ```
https://github.com/huggingface/datasets/issues/376
[ "**Edit**: other topic previously in this message moved to a new issue: https://github.com/huggingface/nlp/issues/387", "Could you try to update pyarrow to >=0.17.0 ? It should fix the `to_pandas` bug\r\n\r\nAlso I'm not sure that structures like list<struct> are fully supported in the lib (none of the datasets u...
null
376
false
TypeError when computing bertscore
Hi, I installed nlp 0.3.0 via pip, and my python version is 3.7. When I tried to compute bertscore with the code: ``` import nlp bertscore = nlp.load_metric('bertscore') # load hyps and refs ... print (bertscore.compute(hyps, refs, lang='en')) ``` I got the following error. ``` Traceback (most recent call last): File "bert_score_evaluate.py", line 16, in <module> print (bertscore.compute(hyps, refs, lang='en')) File "/home/willywsm/anaconda3/envs/torcher/lib/python3.7/site-packages/nlp/metric.py", line 200, in compute output = self._compute(predictions=predictions, references=references, **metrics_kwargs) File "/home/willywsm/anaconda3/envs/torcher/lib/python3.7/site-packages/nlp/metrics/bertscore/fb176889831bf0ce995ed197edc94b2e9a83f647a869bb8c9477dbb2d04d0f08/bertscore.py", line 105, in _compute hashcode = bert_score.utils.get_hash(model_type, num_layers, idf, rescale_with_baseline) TypeError: get_hash() takes 3 positional arguments but 4 were given ``` It seems like there is something wrong with get_hash() function?
https://github.com/huggingface/datasets/issues/375
[ "I am not able to reproduce this issue on my side.\r\nCould you give us more details about the inputs you used ?\r\n\r\nI do get another error though:\r\n```\r\n~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/bert_score/utils.py in bert_cos_score_idf(model, refs, hyps, tokenizer, idf_dict, verbose, batch_siz...
null
375
false
Add dataset post processing for faiss indexes
# Post processing of datasets for faiss indexes Now that we can have datasets with embeddings (see `wiki_dpr` for example), we can allow users to load the dataset + get the Faiss index that comes with it to do nearest neighbors queries. ## Implementation proposition - Faiss indexes have to be added to the `nlp.Dataset` object, and therefore it's in a different scope than the `_split_generators` and `_generate_examples` methods of `nlp.DatasetBuilder`. Therefore I added a new method for post processing of the `nlp.Dataset` object called `_post_process` (name could change) - The role of `_post_process` is to apply dataset transforms (filter/map etc.) or indexing functions (add_faiss_index) to modify/enrich the `nlp.Dataset` object. It is not part of the `download_and_prepare` process (that is focused on arrow files creation), so the post processing is run inside the `as_dataset` method. - `_post_process` can generate new files (cached files from dataset transforms or serialized faiss indexes) and their names are defined by `_post_processing_resources` - as we know what the post processing resources are, we can download them automatically from google storage instead of computing them if they're available (as we do for arrow files) I'd be happy to discuss these choices! ## The `wiki_dpr` index It takes 1h20 and ~7GB of memory to compute. The final index is 1.42GB and takes ~1.5GB of memory. This is pretty cool given that a naive flat index would take 170GB of memory to store the 21M vectors of dim 768. I couldn't directly use the Faiss `index_factory` as I needed to set the metric to inner product. ## Example of usage ```python import nlp dset = nlp.load_dataset( "wiki_dpr", "psgs_w100_with_nq_embeddings", split="train", with_index=True ) print(len(dset), dset.list_indexes()) # (21015300, ['embeddings']) ``` (it also works with the dataset configuration without the embeddings because I added the index file in google storage for this one too) ## Demo You can also check a demo on google colab that shows how to use it with the DPRQuestionEncoder from transformers: https://colab.research.google.com/drive/1FakNU8W5EPMcWff7iP1H6REg3XSS0YLp?usp=sharing
https://github.com/huggingface/datasets/pull/374
[ "I changed the `wiki_dpr` script to ignore the last 24 examples for now. Hopefully we'll have the full version soon.\r\nThe datasets_infos.json and the data on GCS are updated.\r\n\r\nAnd I also added a check to make sure we don't have post processing resources in sub-directories.", "I added a dummy config that c...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/374", "html_url": "https://github.com/huggingface/datasets/pull/374", "diff_url": "https://github.com/huggingface/datasets/pull/374.diff", "patch_url": "https://github.com/huggingface/datasets/pull/374.patch", "merged_at": "2020-07-13T13:44:01" }
374
true
Segmentation fault when loading local JSON dataset as of #372
The last issue was closed (#369) once the #372 update was merged. However, I'm still not able to load a SQuAD formatted JSON file. Instead of the previously recorded pyarrow error, I now get a segmentation fault. ``` dataset = nlp.load_dataset('json', data_files={nlp.Split.TRAIN: ["./datasets/train-v2.0.json"]}, field='data') ``` causes ``` Using custom data configuration default Downloading and preparing dataset json/default (download: Unknown size, generated: Unknown size, total: Unknown size) to /home/XXX/.cache/huggingface/datasets/json/default/0.0.0... 0 tables [00:00, ? tables/s]Segmentation fault (core dumped) ``` where `./datasets/train-v2.0.json` is downloaded directly from https://rajpurkar.github.io/SQuAD-explorer/. This is consistent with other SQuAD-formatted JSON files. When attempting to load the dataset again, I get the following: ``` Using custom data configuration default Traceback (most recent call last): File "dataloader.py", line 6, in <module> 'json', data_files={nlp.Split.TRAIN: ["./datasets/train-v2.0.json"]}, field='data') File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/load.py", line 524, in load_dataset save_infos=save_infos, File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 382, in download_and_prepare with incomplete_dir(self._cache_dir) as tmp_data_dir: File "/home/XXX/.conda/envs/torch/lib/python3.7/contextlib.py", line 112, in __enter__ return next(self.gen) File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 368, in incomplete_dir os.makedirs(tmp_dir) File "/home/XXX/.conda/envs/torch/lib/python3.7/os.py", line 223, in makedirs mkdir(name, mode) FileExistsError: [Errno 17] File exists: '/home/XXX/.cache/huggingface/datasets/json/default/0.0.0.incomplete' ``` (Not sure if you wanted this in the previous issue #369 or not as it was closed.)
https://github.com/huggingface/datasets/issues/373
[ "I've seen this sort of thing before -- it might help to delete the directory -- I've also noticed that there is an error with the json Dataloader for any data I've tried to load. I've replaced it with this, which skips over the data feature population step:\r\n\r\n\r\n```python\r\nimport os\r\n\r\nimport pyarrow.j...
null
373
false
Make the json script more flexible
Fix https://github.com/huggingface/nlp/issues/359 Fix https://github.com/huggingface/nlp/issues/369 The JSON script can now accept JSON files containing a single dict with the records as a list in one attribute of the dict (previously it only accepted JSON files containing records as rows of dicts in the file). In this case, you should indicate, using `field=XXX`, the name of the field in the JSON structure which contains the records you want to load. The records can be a dict of lists or a list of dicts. E.g. to load the SQuAD dataset JSON (without using the `squad` specific dataset loading script), in which the data rows are in the `data` field of the JSON dict, you can do: ```python from nlp import load_dataset dataset = load_dataset('json', data_files='/PATH/TO/JSON', field='data') ```
https://github.com/huggingface/datasets/pull/372
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/372", "html_url": "https://github.com/huggingface/datasets/pull/372", "diff_url": "https://github.com/huggingface/datasets/pull/372.diff", "patch_url": "https://github.com/huggingface/datasets/pull/372.patch", "merged_at": "2020-07-10T14:52:05" }
372
true
Fix cached file path for metrics with different config names
The config name was not taken into account to build the cached file path. It should fix #368
https://github.com/huggingface/datasets/pull/371
[ "Thanks for the fast fix!" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/371", "html_url": "https://github.com/huggingface/datasets/pull/371", "diff_url": "https://github.com/huggingface/datasets/pull/371.diff", "patch_url": "https://github.com/huggingface/datasets/pull/371.patch", "merged_at": "2020-07-10T13:45:20" }
371
true
Allow indexing Dataset via np.ndarray
https://github.com/huggingface/datasets/pull/370
[ "Looks like a flaky CI, failed download from S3." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/370", "html_url": "https://github.com/huggingface/datasets/pull/370", "diff_url": "https://github.com/huggingface/datasets/pull/370.diff", "patch_url": "https://github.com/huggingface/datasets/pull/370.patch", "merged_at": "2020-07-10T14:05:43" }
370
true
can't load local dataset: pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries
Trying to load a local SQuAD-formatted dataset (from a JSON file, about 60MB): ``` dataset = nlp.load_dataset(path='json', data_files={nlp.Split.TRAIN: ["./path/to/file.json"]}) ``` causes ``` Traceback (most recent call last): File "dataloader.py", line 9, in <module> ["./path/to/file.json"]}) File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/load.py", line 524, in load_dataset save_infos=save_infos, File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 432, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 483, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 719, in _prepare_split for key, table in utils.tqdm(generator, unit=" tables", leave=False): File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/tqdm/std.py", line 1129, in __iter__ for obj in iterable: File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/datasets/json/88c1bc5c68489f7eda549ed05a5a738527c613b3e7a4ee3524d9d233353a949b/json.py", line 53, in _generate_tables file, read_options=self.config.pa_read_options, parse_options=self.config.pa_parse_options, File "pyarrow/_json.pyx", line 191, in pyarrow._json.read_json File "pyarrow/error.pxi", line 85, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?) ``` I haven't been able to find any reports of this specific pyarrow error here or elsewhere.
https://github.com/huggingface/datasets/issues/369
[ "I am able to reproduce this with the official SQuAD `train-v2.0.json` file downloaded directly from https://rajpurkar.github.io/SQuAD-explorer/", "I am facing this issue in transformers library 3.0.2 while reading a csv using datasets.\r\nIs this fixed in latest version? \r\nI updated the latest version 4.0.1 bu...
null
369
false
load_metric can't acquire lock anymore
I can't load metric (glue) anymore after an error in a previous run. I even removed the whole cache folder `/home/XXX/.cache/huggingface/`, and the issue persisted. What are the steps to fix this? Traceback (most recent call last): File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/nlp/metric.py", line 101, in __init__ self.filelock.acquire(timeout=1) File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/filelock.py", line 278, in acquire raise Timeout(self._lock_file) filelock.Timeout: The file lock '/home/XXX/.cache/huggingface/metrics/glue/1.0.0/1-glue-0.arrow.lock' could not be acquired. During handling of the above exception, another exception occurred: Traceback (most recent call last): File "examples_huggingface_nlp.py", line 268, in <module> main() File "examples_huggingface_nlp.py", line 242, in main dataset, metric = get_dataset_metric(glue_task) File "examples_huggingface_nlp.py", line 77, in get_dataset_metric metric = nlp.load_metric('glue', glue_config, experiment_id=1) File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/nlp/load.py", line 440, in load_metric **metric_init_kwargs, File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/nlp/metric.py", line 104, in __init__ "Cannot acquire lock, caching file might be used by another process, " ValueError: Cannot acquire lock, caching file might be used by another process, you should setup a unique 'experiment_id' for this run. I0709 15:54:41.008838 139854118430464 filelock.py:318] Lock 139852058030936 released on /home/XXX/.cache/huggingface/metrics/glue/1.0.0/1-glue-0.arrow.lock
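A short sketch of the workaround suggested by the error message, giving each run its own `experiment_id` (the identifier value here is just an example):

```python
import nlp

# A distinct experiment_id per concurrent run/process avoids clashing on the
# same cached .arrow.lock file.
metric = nlp.load_metric("glue", "mrpc", experiment_id="run_2020_07_09_a")
```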
https://github.com/huggingface/datasets/issues/368
[ "I found that, in the same process (or the same interactive session), if I do\r\n\r\nimport nlp\r\n\r\nm1 = nlp.load_metric('glue', 'mrpc')\r\nm2 = nlp.load_metric('glue', 'sst2')\r\n\r\nI will get the same error `ValueError: Cannot acquire lock, caching file might be used by another process, you should setup a uni...
null
368
false
Update Xtreme to add PAWS-X es
This PR adds `PAWS-X.es` to the Xtreme dataset #362
https://github.com/huggingface/datasets/pull/367
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/367", "html_url": "https://github.com/huggingface/datasets/pull/367", "diff_url": "https://github.com/huggingface/datasets/pull/367.diff", "patch_url": "https://github.com/huggingface/datasets/pull/367.patch", "merged_at": "2020-07-09T12:37:10" }
367
true
Add quora dataset
Added the [Quora question pairs dataset](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs). Implementation Notes: - I used the original version provided on the quora website. There's also a [Kaggle competition](https://www.kaggle.com/c/quora-question-pairs) which has a nice train/test split but I can't find an easy way to download it. - I've made the questions into a list: ```python { "questions": [ {"id":0, "text": "Is this an example question?"}, {"id":1, "text": "Is this a sample question?"}, ], ... } ``` rather than: ```python { "question1": "Is this an example question?", "question2": "Is this a sample question?" "qid0": 0 "qid1": 1 ... } ``` Not sure if this was the right call. - Can't find a good citation for this dataset
https://github.com/huggingface/datasets/pull/366
[ "Tests seem to be failing because of pandas", "Kaggle needs authentification to download datasets. We don't have a way to handle that in the lib for now" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/366", "html_url": "https://github.com/huggingface/datasets/pull/366", "diff_url": "https://github.com/huggingface/datasets/pull/366.diff", "patch_url": "https://github.com/huggingface/datasets/pull/366.patch", "merged_at": "2020-07-13T17:35:21" }
366
true
How to augment data ?
Is there any clean way to augment data? For now my work-around is to use batched map, like this: ```python def aug(samples): # Simply copy the existing data to have x2 amount of data for k, v in samples.items(): samples[k].extend(v) return samples dataset = dataset.map(aug, batched=True) ```
https://github.com/huggingface/datasets/issues/365
[ "Using batched map is probably the easiest way at the moment.\r\nWhat kind of augmentation would you like to do ?", "Some samples in the dataset are too long, I want to divide them in several samples.", "Using batched map is the way to go then.\r\nWe'll make it clearer in the docs that map could be used for aug...
null
365
false
add MS MARCO dataset
This PR adds the MS MARCO dataset as requested in issue #336. MS MARCO has multiple tasks including: - Passage and Document Retrieval - Keyphrase Extraction - QA and NLG This PR only adds the 2 versions of the QA and NLG task dataset, which were released with the original paper here: https://arxiv.org/pdf/1611.09268.pdf Tests are failing because of the dummy data. I tried to fix it without success. Can you please have a look at it? @patrickvonplaten, @lhoestq
https://github.com/huggingface/datasets/pull/364
[ "The dummy data for v2.1 is missing as far as I can see. I think running the dummy data command should work correctly here. ", "Also, it might be that the structure of the dummy data is wrong - looking at `generate_examples` the structure does not look too easy.", "The fact that the dummy data for v2.1 is miss...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/364", "html_url": "https://github.com/huggingface/datasets/pull/364", "diff_url": "https://github.com/huggingface/datasets/pull/364.diff", "patch_url": "https://github.com/huggingface/datasets/pull/364.patch", "merged_at": "2020-08-06T06:15:48" }
364
true
Adding support for generic multi dimensional tensors and auxiliary image data for multimodal datasets
nlp/features.py: The main factory class is MultiArray. Every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py. src/nlp/arrow_writer.py: I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written get mixed up (the pyarrow datatype in the schema only refers to an ExtensionArray, but each ExtensionArray subclass has a different shape)... possibly I am missing something here and would be grateful if anyone else could take a look! datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py: I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490) hosted here: (https://github.com/airsplay/lxmert). The reason I am not pulling from the source of truth for each individual dataset is that it seems there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ). For now, this is just being used to test and run edge-cases for the MultiArray feature, so I've labeled it as "beta_pretraining"! (still working on the pretraining, just wanted to push out the new functionality sooner rather than later)
https://github.com/huggingface/datasets/pull/363
[ "Thank you! I just marked this as a draft PR. It probably would be better to create specific Array2D and Array3D classes as needed instead of a generic MultiArray for now, it should simplify the code a lot too so, I'll update it as such. Also i was meaning to reply earlier, but I wanted to thank you for the testing...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/363", "html_url": "https://github.com/huggingface/datasets/pull/363", "diff_url": "https://github.com/huggingface/datasets/pull/363.diff", "patch_url": "https://github.com/huggingface/datasets/pull/363.patch", "merged_at": "2020-08-24T09:59:35" }
363
true
[dataset subset missing] xtreme paws-x
I tried `nlp.load_dataset('xtreme', 'PAWS-X.es')` but got a ValueError. It turns out that the subset for Spanish is missing: https://github.com/google-research-datasets/paws/tree/master/pawsx
https://github.com/huggingface/datasets/issues/362
[ "You're right, thanks for pointing it out. We will update it " ]
null
362
false
πŸ› [Metrics] ROUGE is non-deterministic
If I run the ROUGE metric 2 times with the same predictions / references, the scores are slightly different. Refer to [this Colab notebook](https://colab.research.google.com/drive/1wRssNXgb9ldcp4ulwj-hMJn0ywhDOiDy?usp=sharing) for reproducing the problem. Example of F-scores for ROUGE-1, ROUGE-2, ROUGE-L in 2 different runs: > ['0.3350', '0.1470', '0.2329'] ['0.3358', '0.1451', '0.2332'] --- Why is ROUGE not deterministic?
https://github.com/huggingface/datasets/issues/361
[ "Hi, can you give a full self-contained example to reproduce this behavior?", "> Hi, can you give a full self-contained example to reproduce this behavior?\r\n\r\nThere is a notebook in the post ;)", "> If I run the ROUGE metric 2 times, with same predictions / references, the scores are slightly different.\r\n...
null
361
false
[Feature request] Add dataset.ragged_map() function for many-to-many transformations
`dataset.map()` enables one-to-one transformations. Input one example and output one example. This is helpful for tokenizing and cleaning individual lines. `dataset.filter()` enables one-to-(one-or-none) transformations. Input one example and output either zero/one example. This is helpful for removing portions from the dataset. However, some dataset transformations are many-to-many. Consider constructing BERT training examples from a dataset of sentences, where you map `["a", "b", "c"] -> ["a[SEP]b", "a[SEP]c", "b[SEP]c", "c[SEP]b", ...]` I propose a more general `ragged_map()` method that takes in a batch of examples of length `N` and returns a batch of `M` examples. This is different from the `map(batched=True)` method, which takes examples of length `N` and returns a batch of length `N`, processing individual examples in parallel. I don't have a clear vision of how this would be implemented efficiently and lazily, but would love to hear the community's feedback on this. My specific use case is creating an end-to-end ELECTRA data pipeline. I would like to take the raw WikiText data and generate training examples from this using the `ragged_map()` method, then export to TFRecords and train quickly. This would be a reproducible pipeline with no bash scripts. Currently I'm relying on scripts like https://github.com/google-research/electra/blob/master/build_pretraining_dataset.py, which are less general.
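For reference, a hedged sketch of the many-to-many pattern using the existing `map(batched=True)` API, which (as noted in the comments below) can already return a different number of examples than it receives:

```python
import nlp

dset = nlp.Dataset.from_dict({"text": ["a", "b", "c"]})

def make_pairs(batch):
    # From N input sentences, emit all ordered [SEP]-joined pairs (M != N).
    texts = batch["text"]
    return {"text": [f"{x}[SEP]{y}" for x in texts for y in texts if x != y]}

pairs = dset.map(make_pairs, batched=True, batch_size=len(dset))
print(len(pairs))  # 6 examples produced from 3
```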
https://github.com/huggingface/datasets/issues/360
[ "Actually `map(batched=True)` can already change the size of the dataset.\r\nIt can accept examples of length `N` and returns a batch of length `M` (can be null or greater than `N`).\r\n\r\nI'll make that explicit in the doc that I'm currently writing.", "You're two steps ahead of me :) In my testing, it also wor...
null
360
false
ArrowBasedBuilder _prepare_split parse_schema breaks on nested structures
I tried using the Json dataloader to load some JSON lines files. but get an exception in the parse_schema function. ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-23-9aecfbee53bd> in <module> 55 from nlp import load_dataset 56 ---> 57 ds = load_dataset("../text2struct/model/dataset_builder.py", data_files=rel_datafiles) 58 59 ~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 522 download_mode=download_mode, 523 ignore_verifications=ignore_verifications, --> 524 save_infos=save_infos, 525 ) 526 ~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 430 verify_infos = not save_infos and not ignore_verifications 431 self._download_and_prepare( --> 432 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 433 ) 434 # Sync info ~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 481 try: 482 # Prepare split will record examples associated to the split --> 483 self._prepare_split(split_generator, **prepare_split_kwargs) 484 except OSError: 485 raise OSError("Cannot find data file. " + (self.manual_download_instructions or "")) ~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in _prepare_split(self, split_generator) 736 schema_dict[field.name] = Value(str(field.type)) 737 --> 738 parse_schema(writer.schema, features) 739 self.info.features = Features(features) 740 ~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in parse_schema(schema, schema_dict) 734 parse_schema(field.type.value_type, schema_dict[field.name]) 735 else: --> 736 schema_dict[field.name] = Value(str(field.type)) 737 738 parse_schema(writer.schema, features) <string> in __init__(self, dtype, id, _type) ~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/features.py in __post_init__(self) 55 56 def __post_init__(self): ---> 57 self.pa_type = string_to_arrow(self.dtype) 58 59 def __call__(self): ~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/features.py in string_to_arrow(type_str) 32 if str(type_str + "_") not in pa.__dict__: 33 raise ValueError( ---> 34 f"Neither {type_str} nor {type_str + '_'} seems to be a pyarrow data type. " 35 f"Please make sure to use a correct data type, see: " 36 f"https://arrow.apache.org/docs/python/api/datatypes.html#factory-functions" ValueError: Neither list<item: string> nor list<item: string>_ seems to be a pyarrow data type. Please make sure to use a correct data type, see: https://arrow.apache.org/docs/python/api/datatypes.html#factory-functions ``` If I create the dataset imperatively, using a pyarrow table, the dataset is created correctly. If I override the `_prepare_split` method to avoid calling the validate schema, the dataset can load as well.
https://github.com/huggingface/datasets/issues/359
[ "Hi, it depends on what it is in your `dataset_builder.py` file. Can you share it?\r\n\r\nIf you are just loading `json` files, you can also directly use the `json` script (which will find the schema/features from your JSON structure):\r\n\r\n```python\r\nfrom nlp import load_dataset\r\nds = load_dataset(\"json\", ...
null
359
false
Starting to add some real doc
Adding a lot of documentation for: - load a dataset - explore the dataset object - process data with the dataset - add a new dataset script - share a dataset script - full package reference This version of the doc can be explored here: https://2219-250213286-gh.circle-artifacts.com/0/docs/_build/html/index.html Also: - fix a bug in `train_test_split` - update the `csv` script - add a verbose argument to the dataset processing methods Still missing: - doc for the metrics - how to directly upload a community provided dataset with the CLI - clean up more docstrings - add the `features` argument to `load_dataset` (should be another PR)
https://github.com/huggingface/datasets/pull/358
[ "Ok this is starting to be really big so it's probably good to merge this first version of the doc and continue in another PR :)\r\n\r\nThis first version of the doc can be explored here: https://2219-250213286-gh.circle-artifacts.com/0/docs/_build/html/index.html" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/358", "html_url": "https://github.com/huggingface/datasets/pull/358", "diff_url": "https://github.com/huggingface/datasets/pull/358.diff", "patch_url": "https://github.com/huggingface/datasets/pull/358.patch", "merged_at": "2020-07-14T09:58:15" }
358
true
Add hashes to cnn_dailymail
The URL hashes are helpful for comparing results from other sources.
https://github.com/huggingface/datasets/pull/357
[ "Looks you to me :)\r\n\r\nCould you also update the json file that goes with the dataset script by doing \r\n```\r\nnlp-cli test ./datasets/cnn_dailymail --save_infos --all_configs\r\n```\r\nIt will update the features metadata and the size of the dataset with your changes.", "@lhoestq I ran that command.\r\n\r\...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/357", "html_url": "https://github.com/huggingface/datasets/pull/357", "diff_url": "https://github.com/huggingface/datasets/pull/357.diff", "patch_url": "https://github.com/huggingface/datasets/pull/357.patch", "merged_at": "2020-07-13T14:16:38" }
357
true
Add text dataset
Usage: ```python from nlp import load_dataset dset = load_dataset("text", data_files="/path/to/file.txt")["train"] ``` I created a dummy_data.zip which contains three files: `train.txt`, `test.txt`, `dev.txt`. Each of these contains two lines. It passes ```bash RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_text ``` but I would like a second set of eyes to ensure I did it right.
https://github.com/huggingface/datasets/pull/356
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/356", "html_url": "https://github.com/huggingface/datasets/pull/356", "diff_url": "https://github.com/huggingface/datasets/pull/356.diff", "patch_url": "https://github.com/huggingface/datasets/pull/356.patch", "merged_at": "2020-07-10T14:19:03" }
356
true
can't load SNLI dataset
`nlp` seems to load `snli` from some URL based on nlp.stanford.edu. This subdomain is frequently down -- including right now, when I'd like to load `snli` in a Colab notebook, but can't. Is there a plan to move these datasets to huggingface servers for a more stable solution? Btw, here's the stack trace: ``` File "/content/nlp/src/nlp/builder.py", line 432, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/content/nlp/src/nlp/builder.py", line 466, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/content/nlp/src/nlp/datasets/snli/e417f6f2e16254938d977a17ed32f3998f5b23e4fcab0f6eb1d28784f23ea60d/snli.py", line 76, in _split_generators dl_dir = dl_manager.download_and_extract(_DATA_URL) File "/content/nlp/src/nlp/utils/download_manager.py", line 217, in download_and_extract return self.extract(self.download(url_or_urls)) File "/content/nlp/src/nlp/utils/download_manager.py", line 156, in download lambda url: cached_path(url, download_config=self._download_config,), url_or_urls, File "/content/nlp/src/nlp/utils/py_utils.py", line 190, in map_nested return function(data_struct) File "/content/nlp/src/nlp/utils/download_manager.py", line 156, in <lambda> lambda url: cached_path(url, download_config=self._download_config,), url_or_urls, File "/content/nlp/src/nlp/utils/file_utils.py", line 198, in cached_path local_files_only=download_config.local_files_only, File "/content/nlp/src/nlp/utils/file_utils.py", line 356, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://nlp.stanford.edu/projects/snli/snli_1.0.zip ```
https://github.com/huggingface/datasets/issues/355
[ "I just added the processed files of `snli` on our google storage, so that when you do `load_dataset` it can download the processed files from there :)\r\n\r\nWe are thinking about having available those processed files for more datasets in the future, because sometimes files aren't available (like for `snli`), or ...
null
355
false
More faiss control
Allow users to specify a faiss index they created themselves, since indexes can sometimes be composite, for example.
https://github.com/huggingface/datasets/pull/354
[ "> Ok, so we're getting rid of the `FaissGpuOptions`?\r\n\r\nWe support `device=...` because it's simple, but faiss GPU options can be used in so many ways (you can set different gpu options for the different parts of your index for example) that it's probably better to let the user create and configure its index a...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/354", "html_url": "https://github.com/huggingface/datasets/pull/354", "diff_url": "https://github.com/huggingface/datasets/pull/354.diff", "patch_url": "https://github.com/huggingface/datasets/pull/354.patch", "merged_at": "2020-07-09T09:54:51" }
354
true
[Dataset requests] New datasets for Text Classification
We are missing a few datasets for Text Classification which is an important field. Namely, it would be really nice to add: - [x] TREC-6 dataset (see here for instance: https://pytorchnlp.readthedocs.io/en/latest/source/torchnlp.datasets.html#torchnlp.datasets.trec_dataset) **[done]** - #386 - [x] Yelp-5 - #1315 - [x] Movie review (Movie Review (MR) dataset [156]) **[done (same as rotten_tomatoes)]** - [x] SST (Stanford Sentiment Treebank) **[include in glue]** - #1934 - [ ] Multi-Perspective Question Answering (MPQA) dataset **[require authentication (indeed manual download)]** - [x] Amazon. This is a popular corpus of product reviews collected from the Amazon website [159]. It contains labels for both binary classification and multi-class (5-class) classification - #791 - #1389 - [x] 20 Newsgroups. The 20 Newsgroups dataset **[done]** - #410 - [x] Sogou News dataset **[done]** - #450 - [x] Reuters news. The Reuters-21578 dataset [165] **[done]** - #471 - [x] DBpedia. The DBpedia dataset [170] - #1116 - [ ] Ohsumed. The Ohsumed collection [171] is a subset of the MEDLINE database - [ ] EUR-Lex. The EUR-Lex dataset - [x] WOS. The Web Of Science (WOS) dataset **[done]** - #424 - [ ] PubMed. PubMed [173] - [x] TREC-QA: TREC-6 + TREC-50 - See above: TREC-6 dataset - [x] Quora. The Quora dataset [180] - #366 All these datasets are cited in https://arxiv.org/abs/2004.03705
https://github.com/huggingface/datasets/issues/353
[ "Pinging @mariamabarham as well", "- `nlp` has MR! It's called `rotten_tomatoes`\r\n- SST is part of GLUE, or is that just SST-2?\r\n- `nlp` also has `ag_news`, a popular news classification dataset\r\n\r\nI'd also like to see:\r\n- the Yahoo Answers topic classification dataset\r\n- the Kaggle Fake News classifi...
null
353
false
πŸ›[BugFix]fix seqeval
Fix how seqeval processes labels such as 'B' and 'B-ARGM-LOC'
https://github.com/huggingface/datasets/pull/352
[ "I think this is good but can you detail a bit the behavior before and after your fix?", "examples:\r\n\r\ninput: `['B', 'I', 'I', 'O', 'B', 'I']`\r\nbefore: `[('B', 0, 0), ('I', 1, 2), ('B', 4, 4), ('I', 5, 5)]`\r\nafter: `[('_', 0, 2), ('_', 4, 5)]`\r\n\r\ninput: `['B-ARGM-LOC', 'I-ARGM-LOC', 'I-ARGM-LOC', 'O',...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/352", "html_url": "https://github.com/huggingface/datasets/pull/352", "diff_url": "https://github.com/huggingface/datasets/pull/352.diff", "patch_url": "https://github.com/huggingface/datasets/pull/352.patch", "merged_at": "2020-07-16T08:26:46" }
352
true
add pandas dataset
Create a dataset from serialized pandas dataframes. Usage: ```python from nlp import load_dataset dset = load_dataset("pandas", data_files="df.pkl")["train"] ```
https://github.com/huggingface/datasets/pull/351
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/351", "html_url": "https://github.com/huggingface/datasets/pull/351", "diff_url": "https://github.com/huggingface/datasets/pull/351.diff", "patch_url": "https://github.com/huggingface/datasets/pull/351.patch", "merged_at": "2020-07-08T14:15:15" }
351
true
add from_pandas and from_dict
I added two new methods to the `Dataset` class: - `from_pandas()` to create a dataset from a pandas dataframe - `from_dict()` to create a dataset from a dictionary (keys = columns) It uses the `pa.Table.from_pandas` and `pa.Table.from_pydict` functions to do so. It is also possible to specify the feature types via `features=...` if there are ambiguities (null/nan values), otherwise the arrow schema is inferred from the data automatically by pyarrow. One question that I have right now: + Should we also add a `save()` method that would write the dataset to disk? Right now if we create a `Dataset` using those two new methods, the data is kept in RAM. Then to reload it we can call the `from_file()` method.
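A minimal usage sketch of the two methods described above (the toy columns and values are made up for illustration, not taken from the PR):
```python
import pandas as pd
import nlp

# Hypothetical toy data for illustration only.
df = pd.DataFrame({"text": ["hello", "world"], "label": [0, 1]})

# Both methods build an in-memory Arrow table, so the data stays in RAM as noted above.
dset_from_df = nlp.Dataset.from_pandas(df)
dset_from_dict = nlp.Dataset.from_dict({"text": ["hello", "world"], "label": [0, 1]})

print(dset_from_df.column_names)  # ['text', 'label']
print(dset_from_dict[0])          # {'text': 'hello', 'label': 0}
```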
https://github.com/huggingface/datasets/pull/350
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/350", "html_url": "https://github.com/huggingface/datasets/pull/350", "diff_url": "https://github.com/huggingface/datasets/pull/350.diff", "patch_url": "https://github.com/huggingface/datasets/pull/350.patch", "merged_at": "2020-07-08T14:14:32" }
350
true
Hyperpartisan news detection
Adding the hyperpartisan news detection dataset from PAN. This contains news article text, labelled with whether they're hyper-partisan and what kinds of biases they display. Implementation notes: - As with many PAN tasks, the data is hosted on [Zenodo](https://zenodo.org/record/1489920) and must be requested before use. I've used the manual download stuff for this, although the dataset is provided under a Creative Commons Attribution 4.0 International License, so we could host a version if we wanted to? - The 'bias' attribute doesn't exist for the 'byarticle' configuration. I've added an empty string to the class labels to deal with this. Is there a more standard value for empty data? - Should we always subclass `nlp.BuilderConfig`?
https://github.com/huggingface/datasets/pull/349
[ "Thank you so much for working on this! This is awesome!\r\n\r\nHow much would it help you if we would remove the manual request?\r\n\r\nWe are naturally interested in getting some broad idea of how many people and who are using our dataset. But if you consider hosting the dataset yourself, I would rather remove th...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/349", "html_url": "https://github.com/huggingface/datasets/pull/349", "diff_url": "https://github.com/huggingface/datasets/pull/349.diff", "patch_url": "https://github.com/huggingface/datasets/pull/349.patch", "merged_at": "2020-07-07T14:57:11" }
349
true
Add OSCAR dataset
I don't know if tests pass, when I run them it tries to download the whole corpus which is around 3.5TB compressed and I don't have that kind of space. I'll really need some help with it πŸ˜… Thanks!
https://github.com/huggingface/datasets/pull/348
[ "@pjox I think the tests don't pass because you haven't provided any dummy data (`dummy_data.zip`).\r\n\r\n ", "> @pjox I think the tests don't pass because you haven't provided any dummy data (`dummy_data.zip`).\r\n\r\nBut can I do the dummy data without running `python nlp-cli test datasets/<your-dataset-folder...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/348", "html_url": "https://github.com/huggingface/datasets/pull/348", "diff_url": "https://github.com/huggingface/datasets/pull/348.diff", "patch_url": "https://github.com/huggingface/datasets/pull/348.patch", "merged_at": null }
348
true
'cp950' codec error from load_dataset('xtreme', 'tydiqa')
![image](https://user-images.githubusercontent.com/50871412/86744744-67481680-c06c-11ea-8612-b77eba92a392.png) I guess the error is related to python source encoding issue that my PC is trying to decode the source code with wrong encoding-decoding tools, perhaps : https://www.python.org/dev/peps/pep-0263/ I guess the error was triggered by the code " module = importlib.import_module(module_path)" at line 57 in the source code: nlp/src/nlp/load.py / (https://github.com/huggingface/nlp/blob/911d5596f9b500e39af8642fe3d1b891758999c7/src/nlp/load.py#L51) Any ideas? p.s. tried the same code on colab, that runs perfectly
https://github.com/huggingface/datasets/issues/347
[ "This is probably a Windows issue, we need to specify the encoding when `load_dataset()` reads the original CSV file.\r\nTry to find the `open()` statement called by `load_dataset()` and add an `encoding='utf-8'` parameter.\r\nSee issues #242 and #307 ", "It should be in `xtreme.py:L755`:\r\n```python\r\n ...
null
347
false
Add emotion dataset
Hello πŸ€— team! I am trying to add an emotion classification dataset ([link](https://github.com/dair-ai/emotion_dataset)) to `nlp` but I am a bit stuck about what I should do when the URL for the dataset is not a ZIP file, but just a pickled `pandas.DataFrame` (see [here](https://www.dropbox.com/s/607ptdakxuh5i4s/merged_training.pkl)). With the current implementation, running ```bash python nlp-cli test datasets/emotion --save_infos --all_configs ``` throws a `_pickle.UnpicklingError: invalid load key, '<'.` error (full stack trace below). The strange thing is that the path to the file does not carry the `.pkl` extension and instead appears to be some md5 hash (see the `FILE PATH` print statement in the stack trace). Note: I have checked that the `merged_training.pkl` file is not corrupted when I download it with `wget`. Any pointers on what I'm doing wrong would be greatly appreciated! **Stack trace** ``` INFO:nlp.load:Checking datasets/emotion/emotion.py for additional imports. INFO:filelock:Lock 140330435928512 acquired on datasets/emotion/emotion.py.lock INFO:nlp.load:Found main folder for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion INFO:nlp.load:Creating specific version folder for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b INFO:nlp.load:Copying script file from datasets/emotion/emotion.py to /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.py INFO:nlp.load:Couldn't find dataset infos file at datasets/emotion/dataset_infos.json INFO:nlp.load:Creating metadata file for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.json INFO:filelock:Lock 140330435928512 released on datasets/emotion/emotion.py.lock INFO:nlp.builder:Generating dataset emotion (/Users/lewtun/.cache/huggingface/datasets/emotion/emotion/1.0.0) INFO:nlp.builder:Dataset not on Hf google storage. Downloading and preparing it from source Downloading and preparing dataset emotion/emotion (download: Unknown size, generated: Unknown size, total: Unknown size) to /Users/lewtun/.cache/huggingface/datasets/emotion/emotion/1.0.0... INFO:nlp.builder:Generating split train 0 examples [00:00, ? examples/s]FILE PATH /Users/lewtun/.cache/huggingface/datasets/3615dcb52b7ba052ef63e1571894c4b67e8e12a6ab1ef2f756ec3c380bf48490 Traceback (most recent call last): File "nlp-cli", line 37, in <module> service.run() File "/Users/lewtun/git/nlp/src/nlp/commands/test.py", line 83, in run builder.download_and_prepare( File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 431, in download_and_prepare self._download_and_prepare( File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 483, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 664, in _prepare_split for key, record in utils.tqdm(generator, unit=" examples", total=split_info.num_examples, leave=False): File "/Users/lewtun/miniconda3/envs/nlp/lib/python3.8/site-packages/tqdm/std.py", line 1129, in __iter__ for obj in iterable: File "/Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.py", line 87, in _generate_examples data = pickle.load(f) _pickle.UnpicklingError: invalid load key, '<'. ```
https://github.com/huggingface/datasets/pull/346
[ "I've tried it and am getting the same error as you.\r\n\r\nYou could use the text files rather than the pickle:\r\n```\r\nhttps://www.dropbox.com/s/ikkqxfdbdec3fuj/test.txt\r\nhttps://www.dropbox.com/s/1pzkadrvffbqw6o/train.txt\r\nhttps://www.dropbox.com/s/2mzialpsgf9k5l3/val.txt\r\n```\r\n\r\nThen you would get a...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/346", "html_url": "https://github.com/huggingface/datasets/pull/346", "diff_url": "https://github.com/huggingface/datasets/pull/346.diff", "patch_url": "https://github.com/huggingface/datasets/pull/346.patch", "merged_at": "2020-07-13T14:39:38" }
346
true
Supporting documents in ELI5
I was attempting to use the ELI5 dataset, when I realized that huggingface does not provide the supporting documents (the source documents from the common crawl). Without the supporting documents, this makes the dataset about as useful for my project as a block of cheese, or some other more apt metaphor. According to facebook, the entire document collection is quite large. However, it would still be helpful to at least include a subset of the supporting documents i.e., having some data is better than having a block of cheese, in my case at least. If you choose not to include them, it would be helpful to have documentation mentioning this specifically. It is especially confusing because the hf nlp ELI5 dataset has the key `'document'` but there are no documents to be found :(
https://github.com/huggingface/datasets/issues/345
[ "Hi @saverymax ! For licensing reasons, the original team was unable to release pre-processed CommonCrawl documents. Instead, they provided a script to re-create them from a CommonCrawl dump, but it unfortunately requires access to a medium-large size cluster:\r\nhttps://github.com/facebookresearch/ELI5#downloading...
null
345
false
Search qa
This PR adds the Search QA dataset used in **SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine**. The dataset has the following config names: - raw_jeopardy: raw data - train_test_val: the split version #336
https://github.com/huggingface/datasets/pull/344
[ "Could you rebase from master just to make sure we won't break anything for `fever` pls @mariamabarham ?" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/344", "html_url": "https://github.com/huggingface/datasets/pull/344", "diff_url": "https://github.com/huggingface/datasets/pull/344.diff", "patch_url": "https://github.com/huggingface/datasets/pull/344.patch", "merged_at": null }
344
true
Fix nested tensorflow format
In #339 and #337 we are thinking about adding a way to export datasets to tfrecords. However I noticed that it was not possible to do `dset.set_format("tensorflow")` on datasets with nested features like `squad`. I fixed that using nested map operations to convert features to `tf.ragged.constant`. I also added tests on the `set_format` function.
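For reference, a tiny standalone sketch of what `tf.ragged.constant` does with a nested, variable-length feature; this is plain TensorFlow, not the library internals:
```python
import tensorflow as tf

# e.g. per-example answer spans of different lengths, as in squad-like nested features
nested = [[1, 2, 3], [4], [5, 6]]

ragged = tf.ragged.constant(nested)
print(ragged.shape)  # (3, None) -- the inner dimension is ragged
```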
https://github.com/huggingface/datasets/pull/343
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/343", "html_url": "https://github.com/huggingface/datasets/pull/343", "diff_url": "https://github.com/huggingface/datasets/pull/343.diff", "patch_url": "https://github.com/huggingface/datasets/pull/343.patch", "merged_at": "2020-07-06T13:11:51" }
343
true
Features should be updated when `map()` changes schema
`dataset.map()` can change the schema and column names. We should update the features in this case (with what is possible to infer).
https://github.com/huggingface/datasets/issues/342
[ "`dataset.column_names` are being updated but `dataset.features` aren't indeed..." ]
null
342
false
add fever dataset
This PR adds the FEVER dataset (https://fever.ai/) used in the paper: FEVER: a large-scale dataset for Fact Extraction and VERification (https://arxiv.org/pdf/1803.05355.pdf). #336
https://github.com/huggingface/datasets/pull/341
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/341", "html_url": "https://github.com/huggingface/datasets/pull/341", "diff_url": "https://github.com/huggingface/datasets/pull/341.diff", "patch_url": "https://github.com/huggingface/datasets/pull/341.patch", "merged_at": "2020-07-06T13:03:47" }
341
true
Update cfq.py
Make the dataset name consistent with in the paper: Compositional Freebase Question => Compositional Freebase Questions.
https://github.com/huggingface/datasets/pull/340
[ "Thanks @brainshawn for this update" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/340", "html_url": "https://github.com/huggingface/datasets/pull/340", "diff_url": "https://github.com/huggingface/datasets/pull/340.diff", "patch_url": "https://github.com/huggingface/datasets/pull/340.patch", "merged_at": "2020-07-03T12:33:50" }
340
true
Add dataset.export() to TFRecords
Fixes https://github.com/huggingface/nlp/issues/337 Some design decisions: - Simplified the function API to not handle sharding. It writes the entire dataset as a single TFRecord file. This simplifies the function logic and users can use other functions (`select`, `shard`, etc) to handle custom sharding or splitting. - Use `from_generator()` instead of `from_tensor_slices()` to address the memory issues discussed in https://github.com/huggingface/nlp/issues/315 and https://github.com/huggingface/nlp/issues/193. - Performs introspection using the values from `dataset.set_format()` to identify the TF datatypes. Currently it supports string, float, and int. If this should be extended for other datatypes, let me know. - There are quite a few helper functions required within the `export()` method. If these are better placed in a utils file somewhere, let me know. Also, I noticed that ```python dataset = dataset.select(indices) dataset.set_format("tensorflow") # dataset._format_type is "tensorflow" ``` gives a different output than ```python dataset.set_format("tensorflow") dataset = dataset.select(indices) # dataset._format_type is None ``` The latter loses the format of its parent dataset. Is there interest in making `set_format` a functional method that returns itself (can be chained), and that derived datasets maintain the format of their parent?
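A rough sketch of the generator-based serialization idea described above; the helper names are hypothetical, and it assumes integer-valued columns such as `input_ids` exposed as Python lists:
```python
import tensorflow as tf

def example_generator(dataset, columns):
    # Yield one serialized tf.train.Example per row, so nothing is materialized up front.
    for row in dataset:
        feature = {
            col: tf.train.Feature(int64_list=tf.train.Int64List(value=row[col]))
            for col in columns
        }
        yield tf.train.Example(features=tf.train.Features(feature=feature)).SerializeToString()

def export_to_tfrecord(dataset, filename, columns):
    # Hypothetical helper: write the whole dataset to a single TFRecord file.
    with tf.io.TFRecordWriter(filename) as writer:
        for serialized in example_generator(dataset, columns):
            writer.write(serialized)
```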
https://github.com/huggingface/datasets/pull/339
[ "Really cool @jarednielsen !\r\nDo you think we can make it work with dataset with nested features like `squad` ?\r\n\r\nI just did a PR to fix `.set_format` for datasets with nested features, but as soon as it's merged we could try to make the conversion work on a dataset like `squad`.", "For datasets with neste...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/339", "html_url": "https://github.com/huggingface/datasets/pull/339", "diff_url": "https://github.com/huggingface/datasets/pull/339.diff", "patch_url": "https://github.com/huggingface/datasets/pull/339.patch", "merged_at": "2020-07-22T09:16:11" }
339
true
Run `make style`
These files get changed when I run `make style` on an unrelated PR. Upstreaming these changes so development on a different branch can be easier.
https://github.com/huggingface/datasets/pull/338
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/338", "html_url": "https://github.com/huggingface/datasets/pull/338", "diff_url": "https://github.com/huggingface/datasets/pull/338.diff", "patch_url": "https://github.com/huggingface/datasets/pull/338.patch", "merged_at": "2020-07-02T18:03:10" }
338
true
[Feature request] Export Arrow dataset to TFRecords
The TFRecord generation process is error-prone and requires complex separate Python scripts to download and preprocess the data. I propose to combine the user-friendly features of `nlp` with the speed and efficiency of TFRecords. Sample API: ```python # use these existing methods ds = load_dataset("wikitext", "wikitext-2-raw-v1", split="train") ds = ds.map(lambda ex: tokenizer(ex)) ds.set_format("tensorflow", columns=["input_ids", "token_type_ids", "attention_mask"]) # then add this method ds.export(folder="/my/tfrecords", prefix="myrecord", num_shards=8, format="tfrecord") ``` which would create files like so: ```bash /my/tfrecords/myrecord_1.tfrecord /my/tfrecords/myrecord_2.tfrecord ... ``` I would be happy to contribute this method. We could use a similar approach for PyTorch. Thoughts?
https://github.com/huggingface/datasets/issues/337
[]
null
337
false
[Dataset requests] New datasets for Open Question Answering
We are still missing a few datasets for Open Question Answering, which is currently a field in strong development. Namely, it would be really nice to add: - WebQuestions (Berant et al., 2013) [done] - CuratedTrec (Baudis et al. 2015) [not open-source] - MS-MARCO (Nguyen et al. 2016) [done] - SearchQA (Dunn et al. 2017) [done] - FEVER (Thorne et al. 2018) [done] All these datasets are cited in http://arxiv.org/abs/2005.11401
https://github.com/huggingface/datasets/issues/336
[]
null
336
false
BioMRC Dataset presented in BioNLP 2020 ACL Workshop
https://github.com/huggingface/datasets/pull/335
[ "I fixed the issues that you pointed out, re-run all the test and pushed the fixed code :-)", "```\r\n=================================== FAILURES ===================================\r\n___________________ AWSDatasetTest.test_load_dataset_pandas ____________________\r\n\r\nself = <tests.test_dataset_common.AWSDat...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/335", "html_url": "https://github.com/huggingface/datasets/pull/335", "diff_url": "https://github.com/huggingface/datasets/pull/335.diff", "patch_url": "https://github.com/huggingface/datasets/pull/335.patch", "merged_at": "2020-07-15T08:02:07" }
335
true
Add dataset.shard() method
Fixes https://github.com/huggingface/nlp/issues/312
https://github.com/huggingface/datasets/pull/334
[ "Great, done!" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/334", "html_url": "https://github.com/huggingface/datasets/pull/334", "diff_url": "https://github.com/huggingface/datasets/pull/334.diff", "patch_url": "https://github.com/huggingface/datasets/pull/334.patch", "merged_at": "2020-07-06T12:35:36" }
334
true
fix variable name typo
https://github.com/huggingface/datasets/pull/333
[ "Good catch :)\r\nI think there is another occurence that needs to be fixed in the second gist (line 4924 of the notebook file):\r\n```python\r\nbleu = nlp.load_metric(...)\r\n```", "Was fixed in e16f79b5f7fc12a6a30c777722be46897a272e6f\r\nClosing it." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/333", "html_url": "https://github.com/huggingface/datasets/pull/333", "diff_url": "https://github.com/huggingface/datasets/pull/333.diff", "patch_url": "https://github.com/huggingface/datasets/pull/333.patch", "merged_at": null }
333
true
Add wiki_dpr
Presented in the [Dense Passage Retrieval paper](https://arxiv.org/pdf/2004.04906.pdf), this dataset consists of 21M passages from the English Wikipedia along with their 768-dim embeddings computed using DPR's context encoder. Notes on the implementation: - There are two configs: with and without the embeddings (73GB vs 14GB) - I used a non-fixed-size sequence of floats to describe the feature format of the embeddings. I wanted to use fixed-size sequences but I had issues with reading the arrow file afterwards (for example `dataset[0]` was crashing) - I added the case for lists of urls as input to the download_manager
https://github.com/huggingface/datasets/pull/332
[ "The two configurations don't have the same sizes, I may change that so that they both have 21015300 examples for convenience, even though it's supposed to have 21015324 examples in total.\r\n\r\nOne configuration only has 21015300 examples because it seems that the embeddings of the last 24 examples are missing.",...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/332", "html_url": "https://github.com/huggingface/datasets/pull/332", "diff_url": "https://github.com/huggingface/datasets/pull/332.diff", "patch_url": "https://github.com/huggingface/datasets/pull/332.patch", "merged_at": "2020-07-06T12:21:16" }
332
true
Loading CNN/Daily Mail dataset produces `nlp.utils.info_utils.NonMatchingSplitsSizesError`
``` >>> import nlp >>> nlp.load_dataset('cnn_dailymail', '3.0.0') Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.26 GiB, total: 1.81 GiB) to /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/p/qdata/jm8wx/datasets/nlp/src/nlp/load.py", line 520, in load_dataset builder_instance.download_and_prepare( File "/p/qdata/jm8wx/datasets/nlp/src/nlp/builder.py", line 431, in download_and_prepare self._download_and_prepare( File "/p/qdata/jm8wx/datasets/nlp/src/nlp/builder.py", line 488, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/p/qdata/jm8wx/datasets/nlp/src/nlp/utils/info_utils.py", line 70, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) nlp.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='test', num_bytes=49424491, num_examples=11490, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='test', num_bytes=48931393, num_examples=11379, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='train', num_bytes=1249178681, num_examples=287113, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='train', num_bytes=1240618482, num_examples=285161, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='validation', num_bytes=57149241, num_examples=13368, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='validation', num_bytes=56637485, num_examples=13255, dataset_name='cnn_dailymail')}] ```
https://github.com/huggingface/datasets/issues/331
[ "I couldn't reproduce on my side.\r\nIt looks like you were not able to generate all the examples, and you have the problem for each split train-test-validation.\r\nCould you try to enable logging, try again and send the logs ?\r\n```python\r\nimport logging\r\nlogging.basicConfig(level=logging.INFO)\r\n```", "he...
null
331
false
Doc red
Adding [DocRED](https://github.com/thunlp/DocRED) - a relation extraction dataset which tests document-level RE. A few implementation notes: - There are 2 separate versions of the training set - *annotated* and *distant*. Instead of `nlp.Split.Train` I've used the splits `"train_annotated"` and `"train_distant"` to reflect this. - As well as the relation id, the full relation name is mapped from `rel_info.json` - I renamed the 'h', 'r', 't' keys to 'head', 'relation' and 'tail' to make them more readable. - Used the fix from #319 to allow nested sequences of dicts.
https://github.com/huggingface/datasets/pull/330
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/330", "html_url": "https://github.com/huggingface/datasets/pull/330", "diff_url": "https://github.com/huggingface/datasets/pull/330.diff", "patch_url": "https://github.com/huggingface/datasets/pull/330.patch", "merged_at": "2020-07-05T12:27:29" }
330
true
[Bug] FileLock dependency incompatible with filesystem
I'm downloading a dataset successfully with `load_dataset("wikitext", "wikitext-2-raw-v1")` But when I attempt to cache it on an external volume, it hangs indefinitely: `load_dataset("wikitext", "wikitext-2-raw-v1", cache_dir="/fsx") # /fsx is an external volume mount` The filesystem when hanging looks like this: ```bash /fsx ----downloads ----94be...73.lock ----wikitext ----wikitext-2-raw ----wikitext-2-raw-1.0.0.incomplete ``` It appears that on this filesystem, the FileLock object is forever stuck in its "acquire" stage. I have verified that the issue lies specifically with the `filelock` dependency: ```python open("/fsx/hello.txt").write("hello") # succeeds from filelock import FileLock with FileLock("/fsx/hello.lock"): open("/fsx/hello.txt").write("hello") # hangs indefinitely ``` Has anyone else run into this issue? I'd raise it directly on the FileLock repo, but that project appears abandoned with the last update over a year ago. Or if there's a solution that would remove the FileLock dependency from the project, I would appreciate that.
https://github.com/huggingface/datasets/issues/329
[ "Hi, can you give details on your environment/os/packages versions/etc?", "Environment is Ubuntu 18.04, Python 3.7.5, nlp==0.3.0, filelock=3.0.12.\r\n\r\nThe external volume is Amazon FSx for Lustre, and it by default creates files with limited permissions. My working theory is that FileLock creates a lockfile th...
null
329
false
Fork dataset
We have a multi-task learning model training setup that I'm trying to convert to the Arrow-based nlp dataset. We're currently training a custom TensorFlow model, but the nlp paradigm should be a bridge for us to be able to use the wealth of pre-trained models in Transformers. Our preprocessing flow parses raw text and json with Entity and Relations annotations and creates 2 datasets for training NER and Relations prediction heads. Is there a good way to "fork" a dataset, e.g. 1. text + json -> Dataset1 1. Dataset1 -> DatasetNER 1. Dataset1 -> DatasetREL or 1. text + json -> Dataset1 1. Dataset1 -> DatasetNER 1. Dataset1 + DatasetNER -> DatasetREL
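A sketch of how the first variant could look with the existing API; the `load_dataset("json", ...)` call comes from the discussion in the comments, while the file and column names are made up:
```python
import nlp

# Hypothetical: base dataset built from the parsed text + json annotations.
dataset1 = nlp.load_dataset("json", data_files="annotations.json")["train"]

# Each "fork" is just a new dataset derived with map(); both come from the same Arrow table.
dataset_ner = dataset1.map(lambda ex: {"ner_tags": ex["entities"]})          # assumes an "entities" column
dataset_rel = dataset1.map(lambda ex: {"relation_labels": ex["relations"]})  # assumes a "relations" column
```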
https://github.com/huggingface/datasets/issues/328
[ "To be able to generate the Arrow dataset you need to either use our csv or json utilities `load_dataset(\"json\", data_files=my_json_files)` OR write your own custom dataset script (you can find some inspiration from the [squad](https://github.com/huggingface/nlp/blob/master/datasets/squad/squad.py) script for exa...
null
328
false
set seed for suffling tests
Some tests were randomly failing because of a missing seed in a test for `train_test_split(shuffle=True)`
https://github.com/huggingface/datasets/pull/327
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/327", "html_url": "https://github.com/huggingface/datasets/pull/327", "diff_url": "https://github.com/huggingface/datasets/pull/327.diff", "patch_url": "https://github.com/huggingface/datasets/pull/327.patch", "merged_at": "2020-07-02T08:34:04" }
327
true
Large dataset in Squad2-format
At the moment we are building a large question answering dataset and are thinking about sharing it with the huggingface community. Because of the computing power required, we split it into multiple tiles, but they are all in the same format. Right now the most important facts about it are: - Contexts: 1.047.671 - Questions: 1.677.732 - Answers: 6.742.406 - Unanswerable: 377.398 It is already cleaned <pre><code> train_data = [ { 'context': "this is the context", 'qas': [ { 'id': "00002", 'is_impossible': False, 'question': "whats is this", 'answers': [ { 'text': "answer", 'answer_start': 0 } ] }, { 'id': "00003", 'is_impossible': False, 'question': "question2", 'answers': [ { 'text': "answer2", 'answer_start': 1 } ] } ] } ] </code></pre> Because it is growing every day, we are thinking about a structure like this: we host a JSON file containing all the download links, and the script can load it dynamically. At the moment it is around ~20GB. Any advice on how to handle this, or a ready-to-use template?
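A very rough sketch of the index-file approach being described; the URL, field names and builder class are placeholders, not real resources:
```python
import json
import nlp

_INDEX_URL = "https://example.com/dataset_index.json"  # placeholder: JSON listing all tile download links

class LargeQADataset(nlp.GeneratorBasedBuilder):
    def _info(self):
        return nlp.DatasetInfo(
            features=nlp.Features({"context": nlp.Value("string"), "question": nlp.Value("string")})
        )

    def _split_generators(self, dl_manager):
        # Download the small index first, then every tile it points to.
        index_path = dl_manager.download_and_extract(_INDEX_URL)
        with open(index_path, encoding="utf-8") as f:
            tile_urls = json.load(f)["train"]
        tile_paths = dl_manager.download_and_extract(tile_urls)
        return [nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={"filepaths": tile_paths})]

    def _generate_examples(self, filepaths):
        for path in filepaths:
            with open(path, encoding="utf-8") as f:
                for article in json.load(f):
                    for qa in article["qas"]:
                        yield qa["id"], {"context": article["context"], "question": qa["question"]}
```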
https://github.com/huggingface/datasets/issues/326
[ "I'm pretty sure you can get some inspiration from the squad_v2 script. It looks like the dataset is quite big so it will take some time for the users to generate it, but it should be reasonable.\r\n\r\nAlso you are saying that you are still making the dataset grow in size right ?\r\nIt's probably good practice to ...
null
326
false
Add SQuADShifts dataset
This PR adds the four new variants of the SQuAD dataset used in [The Effect of Natural Distribution Shift on Question Answering Models](https://arxiv.org/abs/2004.14444) to facilitate evaluating model robustness to distribution shift.
https://github.com/huggingface/datasets/pull/325
[ "Very cool to have this dataset, thank you for adding it :)" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/325", "html_url": "https://github.com/huggingface/datasets/pull/325", "diff_url": "https://github.com/huggingface/datasets/pull/325.diff", "patch_url": "https://github.com/huggingface/datasets/pull/325.patch", "merged_at": "2020-06-30T17:07:31" }
325
true
Error when calculating glue score
I was trying glue score along with other metrics here. But glue gives me this error; ``` import nlp glue_metric = nlp.load_metric('glue',name="cola") glue_score = glue_metric.compute(predictions, references) ``` ``` --------------------------------------------------------------------------- --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-8-b9210a524504> in <module>() ----> 1 glue_score = glue_metric.compute(predictions, references) 6 frames /usr/local/lib/python3.6/dist-packages/nlp/metric.py in compute(self, predictions, references, timeout, **metrics_kwargs) 191 """ 192 if predictions is not None: --> 193 self.add_batch(predictions=predictions, references=references) 194 self.finalize(timeout=timeout) 195 /usr/local/lib/python3.6/dist-packages/nlp/metric.py in add_batch(self, predictions, references, **kwargs) 207 if self.writer is None: 208 self._init_writer() --> 209 self.writer.write_batch(batch) 210 211 def add(self, prediction=None, reference=None, **kwargs): /usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size) 155 if self.pa_writer is None: 156 self._build_writer(pa_table=pa.Table.from_pydict(batch_examples)) --> 157 pa_table: pa.Table = pa.Table.from_pydict(batch_examples, schema=self._schema) 158 if writer_batch_size is None: 159 writer_batch_size = self.writer_batch_size /usr/local/lib/python3.6/dist-packages/pyarrow/types.pxi in __iter__() /usr/local/lib/python3.6/dist-packages/pyarrow/array.pxi in pyarrow.lib.asarray() /usr/local/lib/python3.6/dist-packages/pyarrow/array.pxi in pyarrow.lib.array() /usr/local/lib/python3.6/dist-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array() TypeError: an integer is required (got type str) ``` I'm not sure whether I'm doing this wrong or whether it's an issue. I would like to know a workaround. Thank you.
https://github.com/huggingface/datasets/issues/324
[ "The glue metric for cola is a metric for classification. It expects label ids as integers as inputs.", "I want to evaluate a sentence pair whether they are semantically equivalent, so I used MRPC and it gives the same error, does that mean we have to encode the sentences and parse as input?\r\n\r\nusing BertToke...
null
324
false
Add package path to sys when downloading package as github archive
This fixes the `coval.py` metric so that imports within the downloaded module work correctly. We can use a similar trick to add the BLEURT metric (@ankparikh). @thomwolf not sure how you feel about adding to the `PYTHONPATH` from the script. This is the only way I could make it work with my understanding of `importlib`, but there might be a more elegant method. This PR fixes https://github.com/huggingface/nlp/issues/305
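For context, a minimal sketch of the trick in question using only the standard library (paths and module names are illustrative):
```python
import importlib
import sys

def import_from_downloaded_repo(unpacked_dir, module_name):
    # Putting the unpacked repository on sys.path lets the module's own imports
    # (e.g. `from conll import mention` in the coval repo) resolve correctly.
    if unpacked_dir not in sys.path:
        sys.path.insert(0, unpacked_dir)
    return importlib.import_module(module_name)

# Hypothetical usage:
# reader = import_from_downloaded_repo("/path/to/coval_repo", "conll.reader")
```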
https://github.com/huggingface/datasets/pull/323
[ "Sorry for the long diff, everything after the imports comes from `black` for code quality :/ ", " I think it's fine and I can't think of another way to make the import work anyways.\r\n\r\nMaybe we can have the `sys.path` behavior inside `prepare_module` instead ? Currently it seems to come out of nowhere in the...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/323", "html_url": "https://github.com/huggingface/datasets/pull/323", "diff_url": "https://github.com/huggingface/datasets/pull/323.diff", "patch_url": "https://github.com/huggingface/datasets/pull/323.patch", "merged_at": null }
323
true
output nested dict in get_nearest_examples
As we are using a columnar format like arrow as the backend for datasets, we expect to have a dictionary of columns when we slice a dataset like in this example: ```python my_examples = dataset[0:10] print(type(my_examples)) # >>> dict print(my_examples["my_column"][0] # >>> this is the first element of the column 'my_column' ``` Therefore I wanted to keep this logic when calling `get_nearest_examples` that returns the top 10 nearest examples: ```python dataset.add_faiss_index(column="embeddings") scores, examples = dataset.get_nearest_examples("embeddings", query=my_numpy_embedding) print(type(examples)) # >>> dict ``` Previously it was returning a list[dict]. It was the only place that was using this output format. To make it work I had to implement `__getitem__(key)` where `key` is a list. This is different from `.select` because `.select` is a dataset transform (it returns a new dataset object) while `__getitem__` is an extraction method (it returns python dictionaries).
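A small usage sketch of the new output format; it assumes a dataset with 768-dim vectors in an "embeddings" column, and the other names are illustrative:
```python
import numpy as np

dataset.add_faiss_index(column="embeddings")

my_numpy_embedding = np.random.rand(768).astype("float32")  # hypothetical query vector
scores, examples = dataset.get_nearest_examples("embeddings", query=my_numpy_embedding)

# `examples` is now a dict of columns, consistent with slicing like dataset[0:10]
print(type(examples))               # <class 'dict'>
print(len(examples["embeddings"]))  # number of retrieved neighbors (top 10 by default)
```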
https://github.com/huggingface/datasets/pull/322
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/322", "html_url": "https://github.com/huggingface/datasets/pull/322", "diff_url": "https://github.com/huggingface/datasets/pull/322.diff", "patch_url": "https://github.com/huggingface/datasets/pull/322.patch", "merged_at": "2020-07-02T08:33:32" }
322
true
ERROR:root:mwparserfromhell
Hi, I am trying to download some wikipedia data but I got this error for Spanish "es" (there may be other languages with the same error; I haven't tried all of them). `ERROR:root:mwparserfromhell ParseError: This is a bug and should be reported. Info: C tokenizer exited with non-empty token stack.` The code I used was: `dataset = load_dataset('wikipedia', '20200501.es', beam_runner='DirectRunner')`
https://github.com/huggingface/datasets/issues/321
[ "It looks like it comes from `mwparserfromhell`.\r\n\r\nWould it be possible to get the bad `section` that causes this issue ? The `section` string is from `datasets/wikipedia.py:L548` ? You could just add a `try` statement and print the section if the line `section_text.append(section.strip_code().strip())` crashe...
null
321
false
Blog Authorship Corpus, Non Matching Splits Sizes Error, nlp viewer
Selecting `blog_authorship_corpus` in the nlp viewer throws the following error: ``` NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=610252351, num_examples=532812, dataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='train', num_bytes=614706451, num_examples=535568, dataset_name='blog_authorship_corpus')}, {'expected': SplitInfo(name='validation', num_bytes=37500394, num_examples=31277, dataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='validation', num_bytes=32553710, num_examples=28521, dataset_name='blog_authorship_corpus')}] Traceback: File "/home/sasha/streamlit/lib/streamlit/ScriptRunner.py", line 322, in _run_script exec(code, module.__dict__) File "/home/sasha/nlp-viewer/run.py", line 172, in <module> dts, fail = get(str(option.id), str(conf_option.name) if conf_option else None) File "/home/sasha/streamlit/lib/streamlit/caching.py", line 591, in wrapped_func return get_or_create_cached_value() File "/home/sasha/streamlit/lib/streamlit/caching.py", line 575, in get_or_create_cached_value return_value = func(*args, **kwargs) File "/home/sasha/nlp-viewer/run.py", line 132, in get builder_instance.download_and_prepare() File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/builder.py", line 432, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/builder.py", line 488, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/utils/info_utils.py", line 70, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) ``` @srush @lhoestq
https://github.com/huggingface/datasets/issues/320
[ "I wonder if this means downloading failed? That corpus has a really slow server.", "This dataset seems to have a decoding problem that results in inconsistencies in the number of generated examples.\r\nSee #215.\r\nThat's why we end up with a `NonMatchingSplitsSizesError `." ]
null
320
false
Nested sequences with dicts
Am pretty much finished [adding a dataset](https://github.com/ghomasHudson/nlp/blob/DocRED/datasets/docred/docred.py) for [DocRED](https://github.com/thunlp/DocRED), but am getting an error when trying to add a nested `nlp.features.sequence(nlp.features.sequence({key:value,...}))`. The original data is in this format: ```python { 'title': "Title of wiki page", 'vertexSet': [ [ { 'name': "mention_name", 'sent_id': "mention in which sentence", 'pos': ["postion of mention in a sentence"], 'type': "NER_type"}, {another mention} ], [another entity] ] ... } ``` So to represent this I've attempted to write: ``` ... features=nlp.Features({ "title": nlp.Value("string"), "vertexSet": nlp.features.Sequence(nlp.features.Sequence({ "name": nlp.Value("string"), "sent_id": nlp.Value("int32"), "pos": nlp.features.Sequence(nlp.Value("int32")), "type": nlp.Value("string"), })), ... }), ... ``` This is giving me the error: ``` pyarrow.lib.ArrowTypeError: Could not convert [{'pos': [[0,2], [2,4], [3,5]], "type": ["ORG", "ORG", "ORG"], "name": ["Lark Force", "Lark Force", "Lark Force", "sent_id": [0, 3, 4]}..... with type list: was not a dict, tuple, or recognized null value for conversion to struct type ``` Do we expect the pyarrow stuff to break when doing this deeper nesting? I've checked that it still works when you do `nlp.features.Sequence(nlp.features.Sequence(nlp.Value("string"))` or `nlp.features.Sequence({key:value,...})` just not nested sequences with a dict. If it's not possible, I can always convert it to a shallower structure. I'd rather not change the DocRED authors' structure if I don't have to though.
https://github.com/huggingface/datasets/issues/319
[ "Oh yes, this is a backward compatibility feature with tensorflow_dataset in which a `Sequence` or `dict` is converted in a `dict` of `lists`, unfortunately it is not very intuitive, see here: https://github.com/huggingface/nlp/blob/master/src/nlp/features.py#L409\r\n\r\nTo avoid this behavior, you can just define ...
null
319
false
Multitask
Following our discussion in #217, I've implemented a first working version of `MultiDataset`. There's a function `build_multitask()` which takes either individual `nlp.Dataset`s or `dicts` of splits and constructs `MultiDataset`(s). I've added a notebook with example usage. I've implemented many of the `nlp.Dataset` methods (cache_files, columns, nbytes, num_columns, num_rows, column_names, schema, shape). Some of the other methods are complicated as they change the number of examples. These raise `NotImplementedError`s at the moment. This will need some tests which I haven't written yet. There's definitely room for improvements but I think the general approach is sound.
https://github.com/huggingface/datasets/pull/318
[ "It's definitely going in the right direction ! Thanks for giving it a try\r\n\r\nI really like the API.\r\nIMO it's fine right now if we don't have all the dataset transforms (map, filter, etc.) as it can be done before building the multitask dataset, but it will be important to have them in the end.\r\nAll the fo...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/318", "html_url": "https://github.com/huggingface/datasets/pull/318", "diff_url": "https://github.com/huggingface/datasets/pull/318.diff", "patch_url": "https://github.com/huggingface/datasets/pull/318.patch", "merged_at": null }
318
true
Adding a dataset with multiple subtasks
I intend to add the datasets of the MT Quality Estimation shared tasks to `nlp`. However, they have different subtasks -- such as word-level, sentence-level and document-level quality estimation -- each of which has different language pairs, and some of the data is reused across subtasks. For example, in [QE 2019](http://www.statmt.org/wmt19/qe-task.html) we had the same English-Russian and English-German data for word-level and sentence-level QE. I suppose these datasets could have both their word- and sentence-level labels inside `nlp.Features`; but what about other subtasks? Should they be considered different datasets altogether? I read the discussion in #217 but the case of QE seems a lot simpler.
https://github.com/huggingface/datasets/issues/317
[ "For one dataset you can have different configurations that each have their own `nlp.Features`.\r\nWe imagine having one configuration per subtask for example.\r\nThey are loaded with `nlp.load_dataset(\"my_dataset\", \"my_config\")`.\r\n\r\nFor example the `glue` dataset has many configurations. It is a bit differ...
null
317
false
add AG News dataset
adds support for the AG-News topic classification dataset
https://github.com/huggingface/datasets/pull/316
[ "Thanks @jxmorris12 for adding this adding. \r\nCan you please add a small description of the PR?" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/316", "html_url": "https://github.com/huggingface/datasets/pull/316", "diff_url": "https://github.com/huggingface/datasets/pull/316.diff", "patch_url": "https://github.com/huggingface/datasets/pull/316.patch", "merged_at": "2020-06-30T08:31:55" }
316
true
[Question] Best way to batch a large dataset?
I'm training on large datasets such as Wikipedia and BookCorpus. Following the instructions in [the tutorial notebook](https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb), I see the following recommended for TensorFlow: ```python train_tf_dataset = train_tf_dataset.filter(remove_none_values, load_from_cache_file=False) columns = ['input_ids', 'token_type_ids', 'attention_mask', 'start_positions', 'end_positions'] train_tf_dataset.set_format(type='tensorflow', columns=columns) features = {x: train_tf_dataset[x].to_tensor(default_value=0, shape=[None, tokenizer.max_len]) for x in columns[:3]} labels = {"output_1": train_tf_dataset["start_positions"].to_tensor(default_value=0, shape=[None, 1])} labels["output_2"] = train_tf_dataset["end_positions"].to_tensor(default_value=0, shape=[None, 1]) ### Question about this last line ### tfdataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(8) ``` This code works for something like WikiText-2. However, scaling up to WikiText-103, the last line takes 5-10 minutes to run. I assume it is because tf.data.Dataset.from_tensor_slices() is pulling everything into memory, not lazily loading. This approach won't scale up to datasets 25x larger such as Wikipedia. So I tried manual batching using `dataset.select()`: ```python idxs = np.random.randint(len(dataset), size=bsz) batch = dataset.select(idxs).map(lambda example: {"input_ids": tokenizer(example["text"])}) tf_batch = tf.constant(batch["ids"], dtype=tf.int64) ``` This appears to create a new Apache Arrow dataset with every batch I grab, and then tries to cache it. The runtime of `dataset.select([0, 1])` appears to be much worse than `dataset[:2]`. So using `select()` doesn't seem to be performant enough for a training loop. Is there a performant scalable way to lazily load batches of nlp Datasets?
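One possible direction, echoing the generator-based workaround discussed in the comments (a sketch, assuming `train_dataset` only contains the three integer columns listed):
```python
import tensorflow as tf

output_types = {"input_ids": tf.int64, "token_type_ids": tf.int64, "attention_mask": tf.int64}

def train_dataset_gen():
    # Yield one example at a time so tf.data pulls batches lazily instead of
    # materializing the whole dataset with from_tensor_slices().
    for i in range(len(train_dataset)):
        yield train_dataset[i]

tf_dataset = tf.data.Dataset.from_generator(train_dataset_gen, output_types=output_types).batch(8)
```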
https://github.com/huggingface/datasets/issues/315
[ "Update: I think I've found a solution.\r\n\r\n```python\r\noutput_types = {\"input_ids\": tf.int64, \"token_type_ids\": tf.int64, \"attention_mask\": tf.int64}\r\ndef train_dataset_gen():\r\n for i in range(len(train_dataset)):\r\n yield train_dataset[i]\r\ntf_dataset = tf.data.Dataset.from_generator(tra...
null
315
false
Fixed singular very minor spelling error
An instance of "independantly" was changed to "independently". That's all.
https://github.com/huggingface/datasets/pull/314
[ "Thank you BatJeti! The storm-joker, aka the typo, finally got caught!" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/314", "html_url": "https://github.com/huggingface/datasets/pull/314", "diff_url": "https://github.com/huggingface/datasets/pull/314.diff", "patch_url": "https://github.com/huggingface/datasets/pull/314.patch", "merged_at": "2020-06-25T12:43:59" }
314
true
Add MWSC
Adding the [Modified Winograd Schema Challenge](https://github.com/salesforce/decaNLP/blob/master/local_data/schema.txt) dataset which formed part of the [decaNLP](http://decanlp.com/) benchmark. Not sure how much use people would find for it outside of the benchmark, but it is general purpose. Code is heavily borrowed from the [decaNLP repo](https://github.com/salesforce/decaNLP/blob/1e9605f246b9e05199b28bde2a2093bc49feeeaa/text/torchtext/datasets/generic.py#L773-L877). There are a few (possibly overly opinionated) design choices I made: - I used the train/test/dev split [buried in the decaNLP code](https://github.com/salesforce/decaNLP/blob/1e9605f246b9e05199b28bde2a2093bc49feeeaa/text/torchtext/datasets/generic.py#L852-L855) - I split out each example into the 2 alternatives. Originally the data uses the format: ``` The city councilmen refused the demonstrators a permit because they [feared/advocated] violence. Who [feared/advocated] violence? councilmen/demonstrators ``` I split into the 2 variants: ``` The city councilmen refused the demonstrators a permit because they feared violence. Who feared violence? councilmen/demonstrators The city councilmen refused the demonstrators a permit because they advocated violence. Who advocated violence? councilmen/demonstrators ``` I can't see any use for having the options combined into a single example (splitting them is [the way decaNLP processes them](https://github.com/salesforce/decaNLP/blob/1e9605f246b9e05199b28bde2a2093bc49feeeaa/text/torchtext/datasets/generic.py#L846-L850)). You can't train on both versions with them combined, and splitting the examples later would be a pain to do. I think [winogrande.py](https://github.com/huggingface/nlp/blob/master/datasets/winogrande/winogrande.py) presents the data in this way? - I've not used the decaNLP framing (appending the options to the question e.g. `Who feared violence? -- councilmen or demonstrators?`) but left it more generic by adding the options as a new key: `"options":["councilmen","demonstrators"]` This should be an easy thing to change using `map` if needed by a specific application. Dataset is working as-is but if anyone has any thoughts/preferences on the design decisions here, I'm definitely open to different choices.
https://github.com/huggingface/datasets/pull/313
[ "Looks good to me" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/313", "html_url": "https://github.com/huggingface/datasets/pull/313", "diff_url": "https://github.com/huggingface/datasets/pull/313.diff", "patch_url": "https://github.com/huggingface/datasets/pull/313.patch", "merged_at": "2020-06-30T08:28:10" }
313
true
[Feature request] Add `shard()` method to dataset
Currently, to shard a dataset into 10 pieces on different ranks, you can run ```python rank = 3 # for example size = 10 dataset = nlp.load_dataset('wikitext', 'wikitext-2-raw-v1', split=f"train[{rank*10}%:{(rank+1)*10}%]") ``` However, this breaks down if you have a number of ranks that doesn't divide cleanly into 100, such as 64 ranks. Is there interest in adding a method shard() that looks like this? ```python rank = 3 size = 64 dataset = nlp.load_dataset("wikitext", "wikitext-2-raw-v1", split="train").shard(rank=rank, size=size) ``` TensorFlow has a similar API: https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shard. I'd be happy to contribute this code.
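For illustration, a `shard()` with this signature could be approximated today by combining `select()` with strided indices (a sketch, not a proposed implementation):
```python
import numpy as np
import nlp

dataset = nlp.load_dataset("wikitext", "wikitext-2-raw-v1", split="train")

rank, size = 3, 64  # this worker's rank and the total number of shards
shard_indices = np.arange(rank, len(dataset), size)  # every `size`-th example, offset by `rank`
my_shard = dataset.select(shard_indices)
```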
https://github.com/huggingface/datasets/issues/312
[ "Hi Jared,\r\nInteresting, thanks for raising this question. You can also do that after loading with `dataset.select()` or `dataset.filter()` which let you keep only a specific subset of rows in a dataset.\r\nWhat is your use-case for sharding?", "Thanks for the pointer to those functions! It's still a little mor...
null
312
false
Add qa_zre
Adding the QA-ZRE dataset from ["Zero-Shot Relation Extraction via Reading Comprehension"](http://nlp.cs.washington.edu/zeroshot/). A common processing step seems to be replacing the `XXX` placeholder with the `subject`. I've left this out as it's something you could easily do with `map`.
https://github.com/huggingface/datasets/pull/311
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/311", "html_url": "https://github.com/huggingface/datasets/pull/311", "diff_url": "https://github.com/huggingface/datasets/pull/311.diff", "patch_url": "https://github.com/huggingface/datasets/pull/311.patch", "merged_at": "2020-06-29T16:37:38" }
311
true
add wikisql
Adding the [WikiSQL](https://github.com/salesforce/WikiSQL) dataset. Interesting things to note: - Have copied the function (`_convert_to_human_readable`) which converts the SQL query to a human-readable (string) format as this is what most people will want when actually using this dataset for NLP applications. - `conds` was originally a tuple but is converted to a dictionary to support differing types. Would be nice to add the logical_form metrics too at some point.
https://github.com/huggingface/datasets/pull/310
[ "That's great work @ghomasHudson !" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/310", "html_url": "https://github.com/huggingface/datasets/pull/310", "diff_url": "https://github.com/huggingface/datasets/pull/310.diff", "patch_url": "https://github.com/huggingface/datasets/pull/310.patch", "merged_at": "2020-06-25T12:32:25" }
310
true
Add narrative qa
Test cases for dummy data don't pass. Only contains data for summaries (not the whole story).
https://github.com/huggingface/datasets/pull/309
[ "Does it make sense to download the full stories? I remember attempting to implement this dataset a while ago and ended up with something like:\r\n```python\r\n def _split_generators(self, dl_manager):\r\n \"\"\"Returns SplitGenerators.\"\"\"\r\n\r\n dl_dir = dl_manager.download_and_extract(_DOWNLO...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/309", "html_url": "https://github.com/huggingface/datasets/pull/309", "diff_url": "https://github.com/huggingface/datasets/pull/309.diff", "patch_url": "https://github.com/huggingface/datasets/pull/309.patch", "merged_at": null }
309
true
Specify utf-8 encoding for MRPC files
Fixes #307, again probably a Windows-related issue.
https://github.com/huggingface/datasets/pull/308
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/308", "html_url": "https://github.com/huggingface/datasets/pull/308", "diff_url": "https://github.com/huggingface/datasets/pull/308.diff", "patch_url": "https://github.com/huggingface/datasets/pull/308.patch", "merged_at": "2020-06-25T12:16:09" }
308
true
Specify encoding for MRPC
Same as #242, but with MRPC: on Windows, I get a `UnicodeDecodeError` when I try to download the dataset: ```python dataset = nlp.load_dataset('glue', 'mrpc') ``` ```python Downloading and preparing dataset glue/mrpc (download: Unknown size, generated: Unknown size, total: Unknown size) to C:\Users\Python\.cache\huggingface\datasets\glue\mrpc\1.0.0... --------------------------------------------------------------------------- UnicodeDecodeError Traceback (most recent call last) ~\Miniconda3\envs\nlp\lib\site-packages\nlp\builder.py in incomplete_dir(dirname) 369 try: --> 370 yield tmp_dir 371 if os.path.isdir(dirname): ~\Miniconda3\envs\nlp\lib\site-packages\nlp\builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 430 verify_infos = not save_infos and not ignore_verifications --> 431 self._download_and_prepare( 432 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs ~\Miniconda3\envs\nlp\lib\site-packages\nlp\builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 482 # Prepare split will record examples associated to the split --> 483 self._prepare_split(split_generator, **prepare_split_kwargs) 484 except OSError: ~\Miniconda3\envs\nlp\lib\site-packages\nlp\builder.py in _prepare_split(self, split_generator) 663 generator = self._generate_examples(**split_generator.gen_kwargs) --> 664 for key, record in utils.tqdm(generator, unit=" examples", total=split_info.num_examples, leave=False): 665 example = self.info.features.encode_example(record) ~\Miniconda3\envs\nlp\lib\site-packages\tqdm\notebook.py in __iter__(self, *args, **kwargs) 217 try: --> 218 for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs): 219 # return super(tqdm...) will not catch exception ~\Miniconda3\envs\nlp\lib\site-packages\tqdm\std.py in __iter__(self) 1128 try: -> 1129 for obj in iterable: 1130 yield obj ~\Miniconda3\envs\nlp\lib\site-packages\nlp\datasets\glue\7fc58099eb3983a04c8dac8500b70d27e6eceae63ffb40d7900c977897bb58c6\glue.py in _generate_examples(self, data_file, split, mrpc_files) 514 examples = self._generate_example_mrpc_files(mrpc_files=mrpc_files, split=split) --> 515 for example in examples: 516 yield example["idx"], example ~\Miniconda3\envs\nlp\lib\site-packages\nlp\datasets\glue\7fc58099eb3983a04c8dac8500b70d27e6eceae63ffb40d7900c977897bb58c6\glue.py in _generate_example_mrpc_files(self, mrpc_files, split) 576 reader = csv.DictReader(f, delimiter="\t", quoting=csv.QUOTE_NONE) --> 577 for n, row in enumerate(reader): 578 is_row_in_dev = [row["#1 ID"], row["#2 ID"]] in dev_ids ~\Miniconda3\envs\nlp\lib\csv.py in __next__(self) 110 self.fieldnames --> 111 row = next(self.reader) 112 self.line_num = self.reader.line_num ~\Miniconda3\envs\nlp\lib\encodings\cp1252.py in decode(self, input, final) 22 def decode(self, input, final=False): ---> 23 return codecs.charmap_decode(input,self.errors,decoding_table)[0] 24 UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 1180: character maps to <undefined> ``` The fix is the same: specify `utf-8` encoding when opening the file. The previous fix didn't work as MRPC's download process is different from the others in GLUE. I am going to propose a new PR :)
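The fix boils down to making the `open()` call explicit about the encoding, roughly like this (the path variable is illustrative):
```python
import csv

# Without an explicit encoding, Windows falls back to cp1252 and chokes on some bytes.
with open(mrpc_file_path, encoding="utf-8") as f:
    reader = csv.DictReader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
    for n, row in enumerate(reader):
        ...
```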
https://github.com/huggingface/datasets/issues/307
[]
null
307
false
add pg19 dataset
See https://github.com/huggingface/nlp/issues/274. Add functioning PG19 dataset with dummy data. `cos_e.py` was just auto-linted by `make style`.
https://github.com/huggingface/datasets/pull/306
[ "@lucidrains - Thanks a lot for making the PR - PG19 is a super important dataset! Thanks for making it. Many people are asking for PG-19, so it would be great to have that in the library as soon as possible @thomwolf .", "@mariamabarham yup! around 11GB!", "I'm looking forward to our first deep learning writte...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/306", "html_url": "https://github.com/huggingface/datasets/pull/306", "diff_url": "https://github.com/huggingface/datasets/pull/306.diff", "patch_url": "https://github.com/huggingface/datasets/pull/306.patch", "merged_at": "2020-07-06T07:55:59" }
306
true
Importing downloaded package repository fails
The `get_imports` function in `src/nlp/load.py` has a feature to download a package as a zip archive of the github repository and import functions from the unpacked directory. This is used for example in the `metrics/coval.py` file, and would be useful to add BLEURT (@ankparikh). Currently however, the code seems to have trouble with imports within the package. For example: ``` import nlp coval = nlp.load_metric('coval') ``` yields: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/yacine/Code/nlp/src/nlp/load.py", line 432, in load_metric metric_cls = import_main_class(module_path, dataset=False) File "/home/yacine/Code/nlp/src/nlp/load.py", line 57, in import_main_class module = importlib.import_module(module_path) File "/home/yacine/anaconda3/lib/python3.7/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1006, in _gcd_import File "<frozen importlib._bootstrap>", line 983, in _find_and_load File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 677, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 728, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/yacine/Code/nlp/src/nlp/metrics/coval/a78807df33ac45edbb71799caf2b3b47e55df4fd690267808fe963a5e8b30952/coval.py", line 21, in <module> from .coval_backend.conll import reader # From: https://github.com/ns-moosavi/coval File "/home/yacine/Code/nlp/src/nlp/metrics/coval/a78807df33ac45edbb71799caf2b3b47e55df4fd690267808fe963a5e8b30952/coval_backend/conll/reader.py", line 2, in <module> from conll import mention ModuleNotFoundError: No module named 'conll' ``` Not sure what the fix would be there.
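To illustrate the failure mode, here is a rough sketch of one possible workaround (paths are placeholders, and this is not necessarily how the library should solve it): modules inside the downloaded repository use absolute imports such as `from conll import mention`, which only resolve if the unpacked repository root is on `sys.path`.

```python
import sys

# Placeholder for wherever the zip archive of the coval repository was unpacked.
coval_repo_root = "/path/to/cache/metrics/coval/<hash>/coval_backend"

# Putting the repo root on sys.path lets the repo's own absolute imports
# (e.g. `from conll import mention` inside conll/reader.py) resolve.
if coval_repo_root not in sys.path:
    sys.path.insert(0, coval_repo_root)

from conll import reader  # would now import, assuming the repo exists at that path
```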
https://github.com/huggingface/datasets/issues/305
[]
null
305
false
Problem while printing doc string when instantiating multiple metrics.
When I load more than one metric and try to print the doc string of a particular metric, it shows the doc strings of all imported metrics one after the other, which looks quite confusing and clumsy. Attached [Colab](https://colab.research.google.com/drive/13H0ZgyQ2se0mqJ2yyew0bNEgJuHaJ8H3?usp=sharing) notebook for problem clarification.
https://github.com/huggingface/datasets/issues/304
[]
null
304
false
allow to move files across file systems
Users are allowed to use whatever `cache_dir` they want, so it can happen that we try to move files across filesystems. We were using `os.rename`, which doesn't allow that, so I changed some of those calls to `shutil.move`. This should fix #301.
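A small illustration of the difference, with placeholder paths standing in for a download on one drive and a `cache_dir` on another filesystem:

```python
import os
import shutil

# Hypothetical paths assumed to live on two different filesystems.
downloaded_file = "/home/user/.cache/huggingface/tmp_dataset_info.json"
cache_dir = "/data/nlp/some_dataset/1.0.0.incomplete"

# os.rename(downloaded_file, os.path.join(cache_dir, "dataset_info.json"))
# raises OSError: [Errno 18] Invalid cross-device link when the source and
# destination are on different devices.
# shutil.move copies and then removes the source in that case, so it also
# works across filesystems.
shutil.move(downloaded_file, os.path.join(cache_dir, "dataset_info.json"))
```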
https://github.com/huggingface/datasets/pull/303
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/303", "html_url": "https://github.com/huggingface/datasets/pull/303", "diff_url": "https://github.com/huggingface/datasets/pull/303.diff", "patch_url": "https://github.com/huggingface/datasets/pull/303.patch", "merged_at": "2020-06-23T15:08:43" }
303
true
Question - Sign Language Datasets
An emerging field in NLP is SLP - sign language processing. I was wondering about adding datasets here, specifically because the field is shaping up to be large and easily usable. The metrics for sign-language-to-text translation are the same. So, what do you think about me (or others) adding datasets here? An example dataset would be [RWTH-PHOENIX-Weather 2014 T](https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/). For every item in the dataset, the data object includes: 1. video_path - path to an mp4 file 2. pose_path - a path to a `.pose` file with human pose landmarks 3. openpose_path - a path to a `.json` file with human pose landmarks 4. gloss - string 5. text - string 6. video_metadata - height, width, frames, framerate ------ To make it a tad more complicated - what if sign language libraries add requirements to `nlp`? For example, sign language is commonly annotated using `ilex`, `eaf`, or `srt` files, which are all loadable as text, but there is no reason for the dataset to parse those files itself if libraries exist to do so.
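To make the proposal concrete, a hypothetical sketch of how such an item could be declared with the library's feature types (field names follow the list above; the exact types, especially for `video_metadata`, are assumptions):

```python
import nlp

features = nlp.Features(
    {
        "video_path": nlp.Value("string"),
        "pose_path": nlp.Value("string"),
        "openpose_path": nlp.Value("string"),
        "gloss": nlp.Value("string"),
        "text": nlp.Value("string"),
        "video_metadata": {
            "height": nlp.Value("int32"),
            "width": nlp.Value("int32"),
            "frames": nlp.Value("int32"),
            "framerate": nlp.Value("float32"),
        },
    }
)
```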
https://github.com/huggingface/datasets/issues/302
[ "Even more complicating - \r\n\r\nAs I see it, datasets can have \"addons\".\r\nFor example, the WebNLG dataset is a dataset for data-to-text. However, a work of mine and other works enriched this dataset with text plans / underlying text structures. In that case, I see a need to load the dataset \"WebNLG\" with \"...
null
302
false
Setting cache_dir gives error on wikipedia download
First of all thank you for a super handy library! I'd like to download large files to a specific drive so I set `cache_dir=my_path`. This works fine with e.g. imdb and squad. But on wikipedia I get an error: ``` nlp.load_dataset('wikipedia', '20200501.de', split = 'train', cache_dir=my_path) ``` ``` OSError Traceback (most recent call last) <ipython-input-2-23551344d7bc> in <module> 1 import nlp ----> 2 nlp.load_dataset('wikipedia', '20200501.de', split = 'train', cache_dir=path) ~/anaconda3/envs/fastai2/lib/python3.7/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 522 download_mode=download_mode, 523 ignore_verifications=ignore_verifications, --> 524 save_infos=save_infos, 525 ) 526 ~/anaconda3/envs/fastai2/lib/python3.7/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 385 with utils.temporary_assignment(self, "_cache_dir", tmp_data_dir): 386 reader = ArrowReader(self._cache_dir, self.info) --> 387 reader.download_from_hf_gcs(self._cache_dir, self._relative_data_dir(with_version=True)) 388 downloaded_info = DatasetInfo.from_directory(self._cache_dir) 389 self.info.update(downloaded_info) ~/anaconda3/envs/fastai2/lib/python3.7/site-packages/nlp/arrow_reader.py in download_from_hf_gcs(self, cache_dir, relative_data_dir) 231 remote_dataset_info = os.path.join(remote_cache_dir, "dataset_info.json") 232 downloaded_dataset_info = cached_path(remote_dataset_info) --> 233 os.rename(downloaded_dataset_info, os.path.join(cache_dir, "dataset_info.json")) 234 if self._info is not None: 235 self._info.update(self._info.from_directory(cache_dir)) OSError: [Errno 18] Invalid cross-device link: '/home/local/NTU/nn/.cache/huggingface/datasets/025fa4fd4f04aaafc9e939260fbc8f0bb190ce14c61310c8ae1ddd1dcb31f88c.9637f367b6711a79ca478be55fe6989b8aea4941b7ef7adc67b89ff403020947' -> '/data/nn/nlp/wikipedia/20200501.de/1.0.0.incomplete/dataset_info.json' ```
https://github.com/huggingface/datasets/issues/301
[ "Whoops didn't mean to close this one.\r\nI did some changes, could you try to run it from the master branch ?", "Now it works, thanks!" ]
null
301
false