Schema of the records below:

| column | type | range / values |
| --- | --- | --- |
| id | int64 | 599M – 3.26B |
| number | int64 | 1 – 7.7k |
| title | string | length 1 – 290 |
| body | string | length 0 – 228k, nullable |
| state | string | 2 classes |
| html_url | string | length 46 – 51 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 – 2025-07-23 08:04:53 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 – 2025-07-23 18:53:44 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 – 2025-07-23 16:44:42, nullable |
| user | dict | |
| labels | list | length 0 – 4 |
| is_pull_request | bool | 2 classes |
| comments | list | length 0 – 0 |
618,864,284
124
Xsum, require manual download of some files
closed
https://github.com/huggingface/datasets/pull/124
2020-05-15T10:26:13
2020-05-15T11:04:48
2020-05-15T11:04:46
{ "login": "mariamabarham", "id": 38249783, "type": "User" }
[]
true
[]
618,820,140
123
[Tests] Local => aws
## Change default test from local => aws As a default we set `aws=True`, `local=False`, `slow=False` ### 1. RUN_AWS=1 (default) This runs 4 tests per dataset script. a) Does the dataset script have a valid etag / Can it be reached on AWS? b) Can we load its `builder_class`? c) Can we load **all** dataset configs? d) _Most importantly_: Can we load the dataset? Important - we currently only test the first config of each dataset to reduce test time. Total test time is around 1min20s. ### 2. RUN_LOCAL=1 RUN_AWS=0 ***This should be done when debugging dataset scripts of the ./datasets folder*** This only runs 1 test per dataset script, which is equivalent to aws d) - Can we load the dataset from the local `datasets` directory? ### 3. RUN_SLOW=1 We should set this up to run maybe once per week? @thomwolf The `slow` tests include two more important tests. e) Can we load the dataset with all possible configs? This test will probably fail at the moment because a lot of dummy data is missing. We should add the dummy data step by step to be sure that all configs work. f) Test that the actual dataset can be loaded. This will take quite some time to run, but is important to make sure that the "real" data can be loaded. It will also test whether the dataset script has the correct checksums file, which is currently not tested with `aws=True`. @lhoestq - is there an easy way to check cheaply whether the `dataset_info.json` is correct for each dataset script?
closed
https://github.com/huggingface/datasets/pull/123
2020-05-15T09:12:25
2020-05-15T10:06:12
2020-05-15T10:03:26
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
618,813,182
122
Final cleanup of readme and metrics
closed
https://github.com/huggingface/datasets/pull/122
2020-05-15T09:00:52
2021-09-03T19:40:09
2020-05-15T09:02:22
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[]
true
[]
618,790,040
121
make style
closed
https://github.com/huggingface/datasets/pull/121
2020-05-15T08:23:36
2020-05-15T08:25:39
2020-05-15T08:25:38
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
618,737,783
120
πŸ› `map` not working
I'm trying to run a basic example (mapping function to add a prefix). [Here is the colab notebook I'm using.](https://colab.research.google.com/drive/1YH4JCAy0R1MMSc-k_Vlik_s1LEzP_t1h?usp=sharing) ```python import nlp dataset = nlp.load_dataset('squad', split='validation[:10%]') def test(sample): sample['title'] = "test prefix @@@ " + sample["title"] return sample print(dataset[0]['title']) dataset.map(test) print(dataset[0]['title']) ``` Output : > Super_Bowl_50 Super_Bowl_50 Expected output : > Super_Bowl_50 test prefix @@@ Super_Bowl_50
closed
https://github.com/huggingface/datasets/issues/120
2020-05-15T06:43:08
2020-05-15T07:02:38
2020-05-15T07:02:38
{ "login": "astariul", "id": 43774355, "type": "User" }
[]
false
[]
618,652,145
119
πŸ› Colab : type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array'
I'm trying to load CNN/DM dataset on Colab. [Colab notebook](https://colab.research.google.com/drive/11Mf7iNhIyt6GpgA1dBEtg3cyMHmMhtZS?usp=sharing) But I meet this error : > AttributeError: type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array'
closed
https://github.com/huggingface/datasets/issues/119
2020-05-15T02:27:26
2020-05-15T05:11:22
2020-05-15T02:45:28
{ "login": "astariul", "id": 43774355, "type": "User" }
[]
false
[]
618,643,088
118
❓ How to apply a map to all subsets?
I'm working with CNN/DM dataset, where I have 3 subsets : `train`, `test`, `validation`. Should I apply my map function on the subsets one by one ? ```python import nlp cnn_dm = nlp.load_dataset('cnn_dailymail') for corpus in ['train', 'test', 'validation']: cnn_dm[corpus] = cnn_dm[corpus].map(my_func) ``` Or is there a better way to do this ?
closed
https://github.com/huggingface/datasets/issues/118
2020-05-15T01:58:52
2020-05-15T07:05:49
2020-05-15T07:04:25
{ "login": "astariul", "id": 43774355, "type": "User" }
[]
false
[]
618,632,573
117
❓ How to remove specific rows of a dataset?
I saw on the [example notebook](https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb#scrollTo=efFhDWhlvSVC) how to remove a specific column: ```python dataset.drop('id') ``` But I didn't find how to remove a specific row. **For example, how can I remove all samples with `id` < 10?**
closed
https://github.com/huggingface/datasets/issues/117
2020-05-15T01:25:06
2022-07-15T08:36:44
2020-05-15T07:04:32
{ "login": "astariul", "id": 43774355, "type": "User" }
[]
false
[]
618,628,264
116
πŸ› Trying to use ROUGE metric : pyarrow.lib.ArrowInvalid: Column 1 named references expected length 534 but got length 323
I'm trying to use the rouge metric. I have two files: `test.pred.tokenized` and `test.gold.tokenized` with each line containing a sentence. I tried: ```python import nlp rouge = nlp.load_metric('rouge') with open("test.pred.tokenized") as p, open("test.gold.tokenized") as g: for lp, lg in zip(p, g): rouge.add(lp, lg) ``` But I meet the following error: > pyarrow.lib.ArrowInvalid: Column 1 named references expected length 534 but got length 323 --- Full stack-trace: ``` Traceback (most recent call last): File "<stdin>", line 3, in <module> File "/home/me/.venv/transformers/lib/python3.6/site-packages/nlp/metric.py", line 224, in add self.writer.write_batch(batch) File "/home/me/.venv/transformers/lib/python3.6/site-packages/nlp/arrow_writer.py", line 148, in write_batch pa_table: pa.Table = pa.Table.from_pydict(batch_examples, schema=self._schema) File "pyarrow/table.pxi", line 1550, in pyarrow.lib.Table.from_pydict File "pyarrow/table.pxi", line 1503, in pyarrow.lib.Table.from_arrays File "pyarrow/public-api.pxi", line 390, in pyarrow.lib.pyarrow_wrap_table File "pyarrow/error.pxi", line 85, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Column 1 named references expected length 534 but got length 323 ``` (`nlp` installed from source)
closed
https://github.com/huggingface/datasets/issues/116
2020-05-15T01:12:06
2020-05-28T23:43:07
2020-05-28T23:43:07
{ "login": "astariul", "id": 43774355, "type": "User" }
[ { "name": "metric bug", "color": "25b21e" } ]
false
[]
618,615,855
115
AttributeError: 'dict' object has no attribute 'info'
I'm trying to access the information of CNN/DM dataset : ```python cnn_dm = nlp.load_dataset('cnn_dailymail') print(cnn_dm.info) ``` returns : > AttributeError: 'dict' object has no attribute 'info'
closed
https://github.com/huggingface/datasets/issues/115
2020-05-15T00:29:47
2020-05-17T13:11:00
2020-05-17T13:11:00
{ "login": "astariul", "id": 43774355, "type": "User" }
[]
false
[]
618,611,310
114
Couldn't reach CNN/DM dataset
I can't get CNN / DailyMail dataset. ```python import nlp assert "cnn_dailymail" in [dataset.id for dataset in nlp.list_datasets()] cnn_dm = nlp.load_dataset('cnn_dailymail') ``` [Colab notebook](https://colab.research.google.com/drive/1zQ3bYAVzm1h0mw0yWPqKAg_4EUlSx5Ex?usp=sharing) gives following error : ``` ConnectionError: Couldn't reach https://s3.amazonaws.com/datasets.huggingface.co/nlp/cnn_dailymail/cnn_dailymail.py ```
closed
https://github.com/huggingface/datasets/issues/114
2020-05-15T00:16:17
2020-05-15T00:19:52
2020-05-15T00:19:51
{ "login": "astariul", "id": 43774355, "type": "User" }
[]
false
[]
618,590,562
113
Adding docstrings and some doc
Some doc
closed
https://github.com/huggingface/datasets/pull/113
2020-05-14T23:14:41
2020-05-14T23:22:45
2020-05-14T23:22:44
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[]
true
[]
618,569,195
112
Qa4mre - add dataset
Added dummy data test only for the first config. Will do the rest later. I had to add some minor hacks to an important function to make it work. There might be a cleaner way to handle it - can you take a look @thomwolf?
closed
https://github.com/huggingface/datasets/pull/112
2020-05-14T22:17:51
2020-05-15T09:16:43
2020-05-15T09:16:42
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
618,528,060
111
[Clean-up] remove under construction datasets
closed
https://github.com/huggingface/datasets/pull/111
2020-05-14T20:52:13
2020-05-14T20:52:23
2020-05-14T20:52:22
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
618,520,325
110
fix reddit tifu dummy data
closed
https://github.com/huggingface/datasets/pull/110
2020-05-14T20:37:37
2020-05-14T20:40:14
2020-05-14T20:40:13
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
618,508,359
109
[Reclor] fix reclor
- That's probably on me. Could have made the manual data test more flexible. @mariamabarham
closed
https://github.com/huggingface/datasets/pull/109
2020-05-14T20:16:26
2020-05-14T20:19:09
2020-05-14T20:19:08
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
618,386,394
108
convert can use manual dir as second argument
@mariamabarham
closed
https://github.com/huggingface/datasets/pull/108
2020-05-14T16:52:32
2020-05-14T16:52:43
2020-05-14T16:52:42
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
618,373,045
107
add writer_batch_size to GeneratorBasedBuilder
You can now specify `writer_batch_size` in the builder arguments or directly in `load_dataset`
closed
https://github.com/huggingface/datasets/pull/107
2020-05-14T16:35:39
2020-05-14T16:50:30
2020-05-14T16:50:29
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
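A hedged illustration of the `writer_batch_size` option described in PR 107 above. The PR text says it can be specified in the builder arguments or directly in `load_dataset`; the dataset name and value below are assumptions made for the example.

```python
import nlp

# Assumed usage per the PR description: forward writer_batch_size through
# load_dataset so the arrow writer flushes smaller batches while preparing the data.
dataset = nlp.load_dataset("squad", split="train", writer_batch_size=1_000)
```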
618,361,418
106
Add data dir test command
closed
https://github.com/huggingface/datasets/pull/106
2020-05-14T16:18:39
2020-05-14T16:49:11
2020-05-14T16:49:10
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
618,345,191
105
[New structure on AWS] Adapt paths
Some small changes so that we have the correct paths. @julien-c
closed
https://github.com/huggingface/datasets/pull/105
2020-05-14T15:55:57
2020-05-14T15:56:28
2020-05-14T15:56:27
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
618,277,081
104
Add trivia_q
Currently tested only for one config to pass tests. Needs to add more dummy data later.
closed
https://github.com/huggingface/datasets/pull/104
2020-05-14T14:27:19
2020-07-12T05:34:20
2020-05-14T20:23:32
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
618,233,637
103
[Manual downloads] add logic proposal for manual downloads and add wikihow
Wikihow is an example that requires manually downloading two files, as stated in: https://github.com/mahnazkoupaee/WikiHow-Dataset. The user can then store these files under hard-coded names (`wikihowAll.csv` and `wikihowSep.csv` in this case) in a directory of their choice, e.g. `~/wikihow/manual_dir`. The dataset can then be loaded via: ```python import nlp nlp.load_dataset("wikihow", data_dir="~/wikihow/manual_dir") ``` I added/changed the code so that there are explicit error messages when using manually downloaded files.
closed
https://github.com/huggingface/datasets/pull/103
2020-05-14T13:30:36
2020-05-14T14:27:41
2020-05-14T14:27:40
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
618,231,216
102
Run save infos
I replaced the old checksum file with the new `dataset_infos.json` by running the script on almost all the datasets we have. The only one that is still running on my side is the cornell dialog
closed
https://github.com/huggingface/datasets/pull/102
2020-05-14T13:27:26
2020-05-14T15:43:04
2020-05-14T15:43:03
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
618,111,651
101
[Reddit] add reddit
- Everything worked fine @mariamabarham. Made my computer nearly crash, but all seems to be working :-)
closed
https://github.com/huggingface/datasets/pull/101
2020-05-14T10:25:02
2020-05-14T10:27:25
2020-05-14T10:27:24
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
618,081,602
100
Add per type scores in seqeval metric
This PR adds a bit more detail to the seqeval metric. Now the usage and output are: ```python import nlp met = nlp.load_metric('metrics/seqeval') references = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']] predictions = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']] met.compute(predictions, references) #Output: {'PER': {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1}, 'MISC': {'precision': 0.0, 'recall': 0.0, 'f1': 0, 'number': 1}, 'overall_precision': 0.5, 'overall_recall': 0.5, 'overall_f1': 0.5, 'overall_accuracy': 0.8} ``` It is also possible to compute scores for non-IOB notations; POS tagging, for example, doesn't use this kind of notation. Add the `suffix` parameter: ```python import nlp met = nlp.load_metric('metrics/seqeval') references = [['O', 'O', 'O', 'MISC', 'MISC', 'MISC', 'O'], ['PER', 'PER', 'O']] predictions = [['O', 'O', 'MISC', 'MISC', 'MISC', 'MISC', 'O'], ['PER', 'PER', 'O']] met.compute(predictions, references, metrics_kwargs={"suffix": True}) #Output: {'PER': {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1}, 'MISC': {'precision': 0.0, 'recall': 0.0, 'f1': 0, 'number': 1}, 'overall_precision': 0.5, 'overall_recall': 0.5, 'overall_f1': 0.5, 'overall_accuracy': 0.9} ```
closed
https://github.com/huggingface/datasets/pull/100
2020-05-14T09:37:52
2020-05-14T23:21:35
2020-05-14T23:21:34
{ "login": "jplu", "id": 959590, "type": "User" }
[]
true
[]
618,026,700
99
[Cmrc 2018] fix cmrc2018
closed
https://github.com/huggingface/datasets/pull/99
2020-05-14T08:22:03
2020-05-14T08:49:42
2020-05-14T08:49:41
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
617,957,739
98
Webis tl-dr
Add the Webis TL;DR dataset.
closed
https://github.com/huggingface/datasets/pull/98
2020-05-14T06:22:18
2020-09-03T10:00:21
2020-05-14T20:54:16
{ "login": "jplu", "id": 959590, "type": "User" }
[]
true
[]
617,809,431
97
[Csv] add tests for csv dataset script
Adds dummy data tests for csv.
closed
https://github.com/huggingface/datasets/pull/97
2020-05-13T23:06:11
2020-05-13T23:23:16
2020-05-13T23:23:15
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
617,739,521
96
lm1b
Add lm1b dataset.
closed
https://github.com/huggingface/datasets/pull/96
2020-05-13T20:38:44
2020-05-14T14:13:30
2020-05-14T14:13:29
{ "login": "jplu", "id": 959590, "type": "User" }
[]
true
[]
617,703,037
95
Replace checksums files by Dataset infos json
### Better verifications when loading a dataset I replaced the `urls_checksums` directory that used to contain `checksums.txt` and `cached_sizes.txt`, by a single file `dataset_infos.json`. It's just a dict `config_name` -> `DatasetInfo`. It simplifies and improves how verifications of checksums and splits sizes are done, as they're all stored in `DatasetInfo` (one per config). Also, having already access to `DatasetInfo` enables to check disk space before running `download_and_prepare` for a given config. The dataset infos json file is user readable, you can take a look at the squad one that I generated in this PR. ### Renaming According to these changes, I did some renaming: `save_checksums` -> `save_infos` `ignore_checksums` -> `ignore_verifications` for example, when you are creating a dataset you have to run ```nlp-cli test path/to/my/dataset --save_infos --all_configs``` instead of ```nlp-cli test path/to/my/dataset --save_checksums --all_configs``` ### And now, the fun part We'll have to rerun the `nlp-cli test ... --save_infos --all_configs` for all the datasets ----------------- feedback appreciated !
closed
https://github.com/huggingface/datasets/pull/95
2020-05-13T19:36:16
2020-05-14T08:58:43
2020-05-14T08:58:42
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
617,571,340
94
Librispeech
Add librispeech dataset and remove some useless content.
closed
https://github.com/huggingface/datasets/pull/94
2020-05-13T16:04:14
2020-05-13T21:29:03
2020-05-13T21:29:02
{ "login": "jplu", "id": 959590, "type": "User" }
[]
true
[]
617,522,029
93
Cleanup notebooks and various fixes
Fixes on datasets (more flexible), metrics (fix) and general clean-ups
closed
https://github.com/huggingface/datasets/pull/93
2020-05-13T14:58:58
2020-05-13T15:01:48
2020-05-13T15:01:47
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[]
true
[]
617,341,505
92
[WIP] add wmt14
WMT14 takes forever to download :-/ - WMT is the first dataset that uses an abstract class IMO, so I had to modify the `load_dataset_module` a bit.
closed
https://github.com/huggingface/datasets/pull/92
2020-05-13T10:42:03
2020-05-16T11:17:38
2020-05-16T11:17:37
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
617,339,484
91
[Paracrawl] add paracrawl
- Huge dataset - took ~1h to download - Also this PR reformats all dataset scripts and adds `datasets` to `make style`
closed
https://github.com/huggingface/datasets/pull/91
2020-05-13T10:39:00
2020-05-13T10:40:15
2020-05-13T10:40:14
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
617,311,877
90
Add download gg drive
We can now add datasets that download from google drive
closed
https://github.com/huggingface/datasets/pull/90
2020-05-13T09:56:02
2020-05-13T12:46:28
2020-05-13T10:05:31
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
617,295,069
89
Add list and inspect methods - cleanup hf_api
Add a bunch of methods to easily list and inspect the processing scripts up-loaded on S3: ```python nlp.list_datasets() nlp.list_metrics() # Copy and prepare the scripts at `local_path` for easy inspection/modification. nlp.inspect_dataset(path, local_path) # Copy and prepare the scripts at `local_path` for easy inspection/modification. nlp.inspect_metric(path, local_path) ``` Also clean up the `HfAPI` to use `dataclasses` for better user-experience
closed
https://github.com/huggingface/datasets/pull/89
2020-05-13T09:30:15
2020-05-13T14:05:00
2020-05-13T09:33:10
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[]
true
[]
617,284,664
88
Add wiki40b
This one is a beam dataset that downloads files using tensorflow. I tested it on a small config and it works fine
closed
https://github.com/huggingface/datasets/pull/88
2020-05-13T09:16:01
2020-05-13T12:31:55
2020-05-13T12:31:54
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
617,267,118
87
Add Flores
Beautiful language for sure!
closed
https://github.com/huggingface/datasets/pull/87
2020-05-13T08:51:29
2020-05-13T09:23:34
2020-05-13T09:23:33
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
617,260,972
86
[Load => load_dataset] change naming
Rename leftovers @thomwolf
closed
https://github.com/huggingface/datasets/pull/86
2020-05-13T08:43:00
2020-05-13T08:50:58
2020-05-13T08:50:57
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
617,253,428
85
Add boolq
I just added the dummy data for this dataset. This one uses `tf.io.gfile.copy` to download the data, but I added support for custom downloads in the mock_download_manager. I also had to add a `tensorflow` dependency for tests.
closed
https://github.com/huggingface/datasets/pull/85
2020-05-13T08:32:27
2020-05-13T09:09:39
2020-05-13T09:09:38
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
617,249,815
84
[TedHrLr] add left dummy data
closed
https://github.com/huggingface/datasets/pull/84
2020-05-13T08:27:20
2020-05-13T08:29:22
2020-05-13T08:29:21
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
616,863,601
83
New datasets
closed
https://github.com/huggingface/datasets/pull/83
2020-05-12T18:22:27
2020-05-12T18:22:47
2020-05-12T18:22:45
{ "login": "mariamabarham", "id": 38249783, "type": "User" }
[]
true
[]
616,805,194
82
[Datasets] add ted_hrlr
@thomwolf - After looking at `xnli` I think it's better to leave the translation features and add a `translation` key to make them work in our framework. The result looks like this: ![Screenshot from 2020-05-12 18-34-43](https://user-images.githubusercontent.com/23423619/81721933-ee1faf00-9480-11ea-9e95-d6557cbd0ce0.png) You can see that each split has a `translation` key whose value is the nlp.features.Translation object. That's a simple change. If it's ok for you, I will add dummy data for the other configs and treat the other translation scripts in the same way.
closed
https://github.com/huggingface/datasets/pull/82
2020-05-12T16:46:50
2020-05-13T07:52:54
2020-05-13T07:52:53
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
616,793,010
81
add tests
Tests for py_utils functions and for the BaseReader used to read from arrow and parquet. I also removed unused utils functions.
closed
https://github.com/huggingface/datasets/pull/81
2020-05-12T16:28:19
2020-05-13T07:43:57
2020-05-13T07:43:56
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
616,786,803
80
Add nbytes + nexamples check
### Save size and number of examples Now when you do `save_checksums`, it also creates `cached_sizes.txt` right next to the checksum file. This new file stores the byte sizes and the number of examples of each split that has been prepared and stored in the cache. Example: ``` # Cached sizes: <full_config_name> <num_bytes> <num_examples> hansards/house/1.0.0/test 22906629 122290 hansards/house/1.0.0/train 191459584 947969 hansards/senate/1.0.0/test 5711686 25553 hansards/senate/1.0.0/train 40324278 182135 ``` ### Check processing output If there is a `cached_sizes.txt`, then each time we run `download_and_prepare` it will make sure that the sizes match. You can set `ignore_checksums=True` if you don't want that to happen. ### Fill Dataset Info All the split infos and the checksums are now stored correctly in DatasetInfo after `download_and_prepare` ### Check space on disk before running `download_and_prepare` Check if the available space is lower than the sum of the sizes of the files in `checksums.txt` and `cached_files.txt`. This is not ideal though as it considers the files for all configs. TODO: A better way to do it would be to save the `DatasetInfo` instead of the `checksums.txt` and `cached_sizes.txt`, in order to have one file per dataset config (and therefore consider only the sizes of the files for one config and not all of them). It can also be the occasion to factorize all the `download_and_prepare` verifications. Maybe in the next PR?
closed
https://github.com/huggingface/datasets/pull/80
2020-05-12T16:18:43
2020-05-13T07:52:34
2020-05-13T07:52:33
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
616,785,613
79
[Convert] add new pattern
closed
https://github.com/huggingface/datasets/pull/79
2020-05-12T16:16:51
2020-05-12T16:17:10
2020-05-12T16:17:09
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
616,774,275
78
[Tests] skip beam dataset tests for now
For now we will skip tests for Beam Datasets
closed
https://github.com/huggingface/datasets/pull/78
2020-05-12T16:00:58
2020-05-12T16:16:24
2020-05-12T16:16:22
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
616,674,601
77
New datasets
closed
https://github.com/huggingface/datasets/pull/77
2020-05-12T13:51:59
2020-05-12T14:02:16
2020-05-12T14:02:15
{ "login": "mariamabarham", "id": 38249783, "type": "User" }
[]
true
[]
616,579,228
76
pin flake 8
Flake 8's new version does not like our format. Pinning the version for now.
closed
https://github.com/huggingface/datasets/pull/76
2020-05-12T11:25:29
2020-05-12T11:27:35
2020-05-12T11:27:34
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
616,520,163
75
WIP adding metrics
Adding the following metrics as identified by @mariamabarham: 1. BLEU: BiLingual Evaluation Understudy: https://github.com/tensorflow/nmt/blob/master/nmt/scripts/bleu.py, https://github.com/chakki-works/sumeval/blob/master/sumeval/metrics/bleu.py (multilingual) 2. GLEU: Google-BLEU: https://github.com/cnap/gec-ranking/blob/master/scripts/compute_gleu 3. Sacrebleu: https://pypi.org/project/sacrebleu/1.4.8/ (pypi package), https://github.com/mjpost/sacrebleu (github implementation) 4. ROUGE: Recall-Oriented Understudy for Gisting Evaluation: https://github.com/google-research/google-research/tree/master/rouge, https://github.com/chakki-works/sumeval/blob/master/sumeval/metrics/rouge.py (multilingual) 5. Seqeval: https://github.com/chakki-works/seqeval (github implementation), https://pypi.org/project/seqeval/0.0.12/ (pypi package) 6. Coval: coreference evaluation package for the CoNLL and ARRAU datasets https://github.com/ns-moosavi/coval 7. SQuAD v1 evaluation script 8. SQuAD V2 evaluation script: https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/ 9. GLUE 10. XNLI Not now: 1. Perplexity: https://github.com/allenai/allennlp/blob/master/allennlp/training/metrics/perplexity.py 2. Spearman: https://github.com/allenai/allennlp/blob/master/allennlp/training/metrics/spearman_correlation.py 3. F1_measure: https://github.com/allenai/allennlp/blob/master/allennlp/training/metrics/f1_measure.py 4. Pearson_corelation: https://github.com/allenai/allennlp/blob/master/allennlp/training/metrics/pearson_correlation.py 5. AUC: https://github.com/allenai/allennlp/blob/master/allennlp/training/metrics/auc.py 6. Entropy: https://github.com/allenai/allennlp/blob/master/allennlp/training/metrics/entropy.py
closed
https://github.com/huggingface/datasets/pull/75
2020-05-12T09:52:00
2020-05-13T07:44:12
2020-05-13T07:44:10
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[]
true
[]
616,511,101
74
fix overflow check
I did some tests and unfortunately the test ``` pa_array.nbytes > MAX_BATCH_BYTES ``` doesn't work. Indeed for a StructArray, `nbytes` can be less than 2GB even if there is an overflow (it loops...). I don't think we can do a proper overflow test for the limit of 2GB... For now I replaced it with a sanity check on the first element.
closed
https://github.com/huggingface/datasets/pull/74
2020-05-12T09:38:01
2020-05-12T10:04:39
2020-05-12T10:04:38
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
616,417,845
73
JSON script
Add a JSON script to read JSON datasets from files.
closed
https://github.com/huggingface/datasets/pull/73
2020-05-12T07:11:22
2020-05-18T06:50:37
2020-05-18T06:50:36
{ "login": "jplu", "id": 959590, "type": "User" }
[]
true
[]
616,225,010
72
[README dummy data tests] README to better understand how the dummy data structure works
In this PR a README.md is added to tests to shine more light on how the dummy data structure works. I try to explain the different possible cases. IMO the best way to understand the logic is to check out the dummy data structure of the different datasets I mention in the README.md since those are the "edge cases". @mariamabarham @thomwolf @lhoestq @jplu - I'd be happy if you could check out the dummy data structure and give some feedback on possible improvements.
closed
https://github.com/huggingface/datasets/pull/72
2020-05-11T22:19:03
2020-05-11T22:26:03
2020-05-11T22:26:01
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
615,942,180
71
Fix arrow writer for big datasets using writer_batch_size
This PR fixes Yacine's bug. According to [this](https://github.com/apache/arrow/blob/master/docs/source/cpp/arrays.rst#size-limitations-and-recommendations), it is not recommended to have pyarrow arrays bigger than 2GB. Therefore I set a default batch size of 100,000 examples per batch. In general it shouldn't exceed 2GB. If it does, I reduce the batch_size on the fly, and I notify the user with a warning.
closed
https://github.com/huggingface/datasets/pull/71
2020-05-11T14:45:36
2020-05-11T20:09:47
2020-05-11T20:00:38
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
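The PR 71 description above says the writer falls back to a smaller batch size when a batch would exceed the ~2GB pyarrow recommendation. Below is a rough, self-contained sketch of that idea; the names and the halving strategy are illustrative assumptions, not the actual implementation.

```python
DEFAULT_WRITER_BATCH_SIZE = 100_000
MAX_BATCH_BYTES = 2 << 30  # ~2GB, the pyarrow array size recommendation cited in the PR


def adjusted_batch_size(approx_bytes_per_example: int) -> int:
    """Halve the batch size until a full batch stays under the ~2GB limit (assumed strategy)."""
    batch_size = DEFAULT_WRITER_BATCH_SIZE
    while batch_size > 1 and batch_size * approx_bytes_per_example > MAX_BATCH_BYTES:
        batch_size //= 2
    return batch_size


# Example: ~50kB per example forces a much smaller batch than the default.
print(adjusted_batch_size(50_000))
```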
615,679,102
70
adding RACE, QASC, Super_glue and Tiny_shakespear datasets
closed
https://github.com/huggingface/datasets/pull/70
2020-05-11T08:07:49
2020-05-12T13:21:52
2020-05-12T13:21:51
{ "login": "mariamabarham", "id": 38249783, "type": "User" }
[]
true
[]
615,450,534
69
fix cache dir in builder tests
minor fix
closed
https://github.com/huggingface/datasets/pull/69
2020-05-10T18:39:21
2020-05-11T07:19:30
2020-05-11T07:19:28
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
614,882,655
68
[CSV] re-add csv
Re-adding csv under the datasets under construction to keep circle ci happy - will have to see how to include it in the tests. @lhoestq noticed that I accidentally deleted it in https://github.com/huggingface/nlp/pull/63#discussion_r422263729.
closed
https://github.com/huggingface/datasets/pull/68
2020-05-08T17:38:29
2020-05-08T17:40:48
2020-05-08T17:40:46
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
614,798,483
67
[Tests] Test files locally
This PR adds an `aws` and a `local` decorator to the tests so that tests now run on the local datasets. By default, `aws` is deactivated, `local` is activated and `slow` is deactivated, so that only 1 test per dataset runs on circle ci. **When local is activated all folders in `./datasets` are tested.** **Important** When adding a dataset, we should no longer upload it to AWS. The steps are: 1. Open a PR 2. Add a dataset as described in `datasets/README.md` 3. If all tests pass, push to master Currently we have 49 functional datasets in our code base. We have 6 datasets "under-construction" that don't pass the tests - so I put them in a folder "datasets_under_construction" - it would be nice to open a PR to fix them and put them in the `datasets` folder. **Important** when running tests locally, the datasets are cached, so to rerun them delete your local cache via: `rm -r ~/.cache/huggingface/datasets/*` @thomwolf @mariamabarham @lhoestq
closed
https://github.com/huggingface/datasets/pull/67
2020-05-08T15:02:43
2020-05-08T19:50:47
2020-05-08T15:17:00
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
614,748,552
66
[Datasets] ReadME
closed
https://github.com/huggingface/datasets/pull/66
2020-05-08T13:37:43
2020-05-08T13:39:23
2020-05-08T13:39:22
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
614,746,516
65
fix math dataset and xcopa
- fixes math dataset and xcopa, uploaded both of them to S3
closed
https://github.com/huggingface/datasets/pull/65
2020-05-08T13:33:55
2020-05-08T13:35:41
2020-05-08T13:35:40
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
614,737,057
64
[Datasets] Make master ready for datasets adding
Add all relevant files so that datasets can now be added on master
closed
https://github.com/huggingface/datasets/pull/64
2020-05-08T13:17:00
2020-05-08T13:17:31
2020-05-08T13:17:30
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
614,666,365
63
[Dataset scripts] add all datasets scripts
As mentioned, we can have the canonical datasets on master. For now I also want to include all the data as present on S3 to make the synchronization easier when uploading new datasets. @mariamabarham @lhoestq @thomwolf - what do you think? If this is ok for you, I can sync up master with the `add_dataset` branch: https://github.com/huggingface/nlp/pull/37 so that master is up to date.
closed
https://github.com/huggingface/datasets/pull/63
2020-05-08T10:50:15
2020-05-08T17:39:22
2020-05-08T11:34:00
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
614,630,830
62
[Cached Path] Better error message
IMO returning `None` in this function only leads to confusion and is never helpful.
closed
https://github.com/huggingface/datasets/pull/62
2020-05-08T09:39:47
2020-05-08T09:45:47
2020-05-08T09:45:47
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
614,607,474
61
[Load] rename setup_module to prepare_module
Rename setup_module to prepare_module due to issues with pytest's `setup_module` function. See: PR #59.
closed
https://github.com/huggingface/datasets/pull/61
2020-05-08T08:54:22
2020-05-08T08:56:32
2020-05-08T08:56:16
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
614,372,553
60
Update to simplify some datasets conversion
This PR updates the encoding of `Values` like `integers`, `boolean` and `float` to use python casting and avoid having to cast in the dataset scripts, as mentioned here: https://github.com/huggingface/nlp/pull/37#discussion_r420176626 We could also change (not included in this PR yet): - `supervized_keys` to make them a NamedTuple instead of a dataclass, and - handle specifically the `Translation` features. as mentioned here: https://github.com/huggingface/nlp/pull/37#discussion_r421740236 @patrickvonplaten @mariamabarham tell me if you want these two last changes as well.
closed
https://github.com/huggingface/datasets/pull/60
2020-05-07T22:02:24
2020-05-08T10:38:32
2020-05-08T10:18:24
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[]
true
[]
614,366,045
59
Fix tests
@patrickvonplaten I've broken a bit the tests with #25 while simplifying and re-organizing the `load.py` and `download_manager.py` scripts. I'm trying to fix them here but I have a weird error, do you think you can have a look? ```bash (datasets) MacBook-Pro-de-Thomas:datasets thomwolf$ python -m pytest -sv ./tests/test_dataset_common.py::DatasetTest::test_builder_class_snli ============================================================================= test session starts ============================================================================= platform darwin -- Python 3.7.7, pytest-5.4.1, py-1.8.1, pluggy-0.13.1 -- /Users/thomwolf/miniconda2/envs/datasets/bin/python cachedir: .pytest_cache rootdir: /Users/thomwolf/Documents/GitHub/datasets plugins: xdist-1.31.0, forked-1.1.3 collected 1 item tests/test_dataset_common.py::DatasetTest::test_builder_class_snli ERROR =================================================================================== ERRORS ==================================================================================== ____________________________________________________________ ERROR at setup of DatasetTest.test_builder_class_snli ____________________________________________________________ file_path = <module 'tests.test_dataset_common' from '/Users/thomwolf/Documents/GitHub/datasets/tests/test_dataset_common.py'> download_config = DownloadConfig(cache_dir=None, force_download=False, resume_download=False, local_files_only=False, proxies=None, user_agent=None, extract_compressed_file=True, force_extract=True) download_kwargs = {} def setup_module(file_path: str, download_config: Optional[DownloadConfig] = None, **download_kwargs,) -> DatasetBuilder: r""" Download/extract/cache a dataset to add to the lib from a path or url which can be: - a path to a local directory containing the dataset processing python script - an url to a S3 directory with a dataset processing python script Dataset codes are cached inside the lib to allow easy import (avoid ugly sys.path tweaks) and using cloudpickle (among other things). 
Return: tuple of the unique id associated to the dataset the local path to the dataset """ if download_config is None: download_config = DownloadConfig(**download_kwargs) download_config.extract_compressed_file = True download_config.force_extract = True > name = list(filter(lambda x: x, file_path.split("/")))[-1] + ".py" E AttributeError: module 'tests.test_dataset_common' has no attribute 'split' src/nlp/load.py:169: AttributeError ============================================================================== warnings summary =============================================================================== /Users/thomwolf/miniconda2/envs/datasets/lib/python3.7/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py:15 /Users/thomwolf/miniconda2/envs/datasets/lib/python3.7/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py:15: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses import imp -- Docs: https://docs.pytest.org/en/latest/warnings.html =========================================================================== short test summary info =========================================================================== ERROR tests/test_dataset_common.py::DatasetTest::test_builder_class_snli - AttributeError: module 'tests.test_dataset_common' has no attribute 'split' ========================================================================= 1 warning, 1 error in 3.63s ========================================================================= ```
closed
https://github.com/huggingface/datasets/pull/59
2020-05-07T21:48:09
2020-05-08T10:57:57
2020-05-08T10:46:51
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[]
true
[]
614,362,308
58
Aborted PR - Fix tests
@patrickvonplaten I've broken a bit the tests with #25 while simplifying and re-organizing the `load.py` and `download_manager.py` scripts. I'm trying to fix them here but I have a weird error, do you think you can have a look? ```bash (datasets) MacBook-Pro-de-Thomas:datasets thomwolf$ python -m pytest -sv ./tests/test_dataset_common.py::DatasetTest::test_builder_class_snli ============================================================================= test session starts ============================================================================= platform darwin -- Python 3.7.7, pytest-5.4.1, py-1.8.1, pluggy-0.13.1 -- /Users/thomwolf/miniconda2/envs/datasets/bin/python cachedir: .pytest_cache rootdir: /Users/thomwolf/Documents/GitHub/datasets plugins: xdist-1.31.0, forked-1.1.3 collected 1 item tests/test_dataset_common.py::DatasetTest::test_builder_class_snli ERROR =================================================================================== ERRORS ==================================================================================== ____________________________________________________________ ERROR at setup of DatasetTest.test_builder_class_snli ____________________________________________________________ file_path = <module 'tests.test_dataset_common' from '/Users/thomwolf/Documents/GitHub/datasets/tests/test_dataset_common.py'> download_config = DownloadConfig(cache_dir=None, force_download=False, resume_download=False, local_files_only=False, proxies=None, user_agent=None, extract_compressed_file=True, force_extract=True) download_kwargs = {} def setup_module(file_path: str, download_config: Optional[DownloadConfig] = None, **download_kwargs,) -> DatasetBuilder: r""" Download/extract/cache a dataset to add to the lib from a path or url which can be: - a path to a local directory containing the dataset processing python script - an url to a S3 directory with a dataset processing python script Dataset codes are cached inside the lib to allow easy import (avoid ugly sys.path tweaks) and using cloudpickle (among other things). 
Return: tuple of the unique id associated to the dataset the local path to the dataset """ if download_config is None: download_config = DownloadConfig(**download_kwargs) download_config.extract_compressed_file = True download_config.force_extract = True > name = list(filter(lambda x: x, file_path.split("/")))[-1] + ".py" E AttributeError: module 'tests.test_dataset_common' has no attribute 'split' src/nlp/load.py:169: AttributeError ============================================================================== warnings summary =============================================================================== /Users/thomwolf/miniconda2/envs/datasets/lib/python3.7/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py:15 /Users/thomwolf/miniconda2/envs/datasets/lib/python3.7/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py:15: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses import imp -- Docs: https://docs.pytest.org/en/latest/warnings.html =========================================================================== short test summary info =========================================================================== ERROR tests/test_dataset_common.py::DatasetTest::test_builder_class_snli - AttributeError: module 'tests.test_dataset_common' has no attribute 'split' ========================================================================= 1 warning, 1 error in 3.63s ========================================================================= ```
closed
https://github.com/huggingface/datasets/pull/58
2020-05-07T21:40:19
2020-05-07T21:48:01
2020-05-07T21:41:27
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[]
true
[]
614,261,638
57
Better cached path
### Changes: - The `cached_path` no longer returns None if the file is missing/the url doesn't work. Instead, it can raise `FileNotFoundError` (missing file), `ConnectionError` (no cache and unreachable url) or `ValueError` (parsing error) - Fix requests to the firebase API that doesn't handle HEAD requests... - Allow custom download in dataset scripts: it allows using `tf.io.gfile.copy`, for example, to download from google storage. I added an example: the `boolq` script
closed
https://github.com/huggingface/datasets/pull/57
2020-05-07T18:36:00
2020-05-08T13:20:30
2020-05-08T13:20:28
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
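A minimal sketch of handling the error behaviour that PR 57 above describes for `cached_path`. Only the exception types come from the PR text; the import path is an assumption about where the helper lived at the time.

```python
from nlp.utils.file_utils import cached_path  # assumed module path

try:
    local_file = cached_path("https://example.com/data/train.json")
except FileNotFoundError:
    print("the file is missing at the given location")
except ConnectionError:
    print("no cached copy and the url is unreachable")
except ValueError:
    print("the path or url could not be parsed")
```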
614,236,869
56
[Dataset] Tester add mock function
need to add an empty `extract()` function to make `hansard` dataset test work.
closed
https://github.com/huggingface/datasets/pull/56
2020-05-07T17:51:37
2020-05-07T17:52:51
2020-05-07T17:52:50
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
613,968,072
55
Beam datasets
# Beam datasets ## Intro Beam Datasets are using beam pipelines for preprocessing (basically lots of `.map` over objects called PCollections). The advantage of apache beam is that you can choose which type of runner you want to use to preprocess your data. The main runners are: - the `DirectRunner` to run the pipeline locally (default). However I encountered memory issues for big datasets (like the french or english wikipedia). Small dataset work fine - Google Dataflow. I didn't play with it. - Spark or Flink, two well known data processing frameworks. I tried to use the Spark/Flink local runners provided by apache beam for python and wasn't able to make them work properly though... ## From tfds beam datasets to our own beam datasets Tensorflow datasets used beam and a complicated pipeline to shard the TFRecords files. To allow users to download beam datasets and not having to preprocess them, they also allow to download the already preprocessed datasets from their google storage (the beam pipeline doesn't run in that case). On our side, we replace TFRecords by something else. Arrow or Parquet do the job but I chose Parquet as: 1) there is a builtin apache beam parquet writer that is quite convenient, and 2) reading parquet from the pyarrow library is also simple and effective (there is a mmap option !) Moreover we don't shard datasets in many many files like tfds (they were doing probably doing that mainly because of the limit of 2Gb per TFRecord file). Therefore we have a simpler pipeline that saves each split into one parquet file. We also removed the utilities to use their google storage (for now maybe ? we'll have to discuss it). ## Main changes - Added a BeamWriter to save the output of beam pipelines into parquet files and fill dataset infos - Create a ParquetReader and refactor a bit the arrow_reader.py \> **With this, we can now try to add beam datasets from tfds** I already added the wikipedia one, and I will also try to add the Wiki40b dataset ## Test the wikipedia script You can download and run the beam pipeline for wikipedia (using the `DirectRunner` by default) like this: ``` >>> import nlp >>> nlp.load("datasets/nlp/wikipedia", dataset_config="20200501.frr") ``` This wikipedia dataset (lang: frr, North Frisian) is a small one (~10Mb), but feel free to try bigger ones (and fill 20Gb of swap memory if you try the english one lol) ## Next Should we allow to download preprocessed datasets from the tfds google storage ? Should we try to optimize the beam pipelines to run locally without memory issues ? Should we try other data processing frameworks for big datasets, like spark ? ## About this PR It should be merged after #25 ----------------- I'd be happy to have your feedback and your ideas to improve the processing of big datasets like wikipedia :)
closed
https://github.com/huggingface/datasets/pull/55
2020-05-07T11:04:32
2020-05-11T07:20:02
2020-05-11T07:20:00
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
613,513,348
54
[Tests] Improved Error message for dummy folder structure
Improved Error message
closed
https://github.com/huggingface/datasets/pull/54
2020-05-06T18:11:48
2020-05-06T18:13:00
2020-05-06T18:12:59
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
613,436,158
53
[Features] Typo in generate_from_dict
Change `isinstance` test in features when generating features from dict.
closed
https://github.com/huggingface/datasets/pull/53
2020-05-06T16:05:23
2020-05-07T15:28:46
2020-05-07T15:28:45
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
613,339,071
52
allow dummy folder structure to handle dict of lists
`esnli.py` needs that extension of the dummy data testing.
closed
https://github.com/huggingface/datasets/pull/52
2020-05-06T13:54:35
2020-05-06T13:55:19
2020-05-06T13:55:18
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
613,266,668
51
[Testing] Improved testing structure
This PR refactors the test design a bit and puts the mock download manager in the `utils` files as it is just a test helper class. as @mariamabarham pointed out, creating a dummy folder structure can be quite hard to grasp. This PR tries to change that to some extent. It follows the following logic for the `dummy` folder structure now: 1.) The data bulider has no config -> the `dummy` folder structure is: `dummy/<version>/dummy_data.zip` 2) The data builder has >= 1 configs -> the `dummy` folder structure is: `dummy/<config_name_1>/<version>/dummy_data.zip` `dummy/<config_name_2>/<version>/dummy_data.zip` Now, the difficult part is how to create the `dummy_data.zip` file. There are two cases: A) The `data_urs` parameter inserted into the `download_and_extract` fn is a **string**: -> the `dummy_data.zip` file zips the folder: `dummy_data/<relative_path_of_folder_structure_of_url>` B) The `data_urs` parameter inserted into the `download_and_extract` fn is a **dict**: -> the `dummy_data.zip` file zips the folder: `dummy_data/<relative_path_of_folder_structure_of_url_behind _key_1>` `dummy_data/<relative_path_of_folder_structure_of_url_behind _key_2>` By relative folder structure I mean `url_path.split('./')[-1]`. As an example the dataset **xquad** by deepmind has the following url path behind the key `de`: `https://github.com/deepmind/xquad/blob/master/xquad.de.json` -> This means that the relative url path should be `xquad.de.json`. @mariamabarham B) is a change from how is was before and I think is makes more sense. While before the `dummy_data.zip` file for xquad with config `de` looked like: `dummy_data/de` it would now look like `dummy_data/xquad.de.json`. I think this is better and easier to understand. Therefore there are currently 6 tests that would have to have changed their dummy folder structure, but which can easily be done (30min). I also added a function: `print_dummy_data_folder_structure` that prints out the expected structures when testing which should be quite helpful.
closed
https://github.com/huggingface/datasets/pull/51
2020-05-06T12:03:07
2020-05-07T22:07:19
2020-05-06T13:20:18
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
612,583,126
50
[Tests] test only for fast test as a default
Test only one config on circle ci to speed up testing. Add the all-configs test as a slow test. @mariamabarham @thomwolf
closed
https://github.com/huggingface/datasets/pull/50
2020-05-05T12:59:22
2020-05-05T13:02:18
2020-05-05T13:02:16
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
612,545,483
49
fix flatten nested
closed
https://github.com/huggingface/datasets/pull/49
2020-05-05T11:55:13
2020-05-05T13:59:26
2020-05-05T13:59:25
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
612,504,687
48
[Command Convert] remove tensorflow import
Remove all tensorflow import statements.
closed
https://github.com/huggingface/datasets/pull/48
2020-05-05T10:41:00
2020-05-05T11:13:58
2020-05-05T11:13:56
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
612,446,493
47
[PyArrow Feature] fix py arrow bool
To me it seems that `bool` can only be accessed with `bool_` when looking at the pyarrow types: https://arrow.apache.org/docs/python/api/datatypes.html.
closed
https://github.com/huggingface/datasets/pull/47
2020-05-05T08:56:28
2020-05-05T10:40:28
2020-05-05T10:40:27
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
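A small illustration of the pyarrow detail mentioned in PR 47 above: the Arrow boolean type is exposed as `pa.bool_()` (trailing underscore), since `bool` is a Python builtin. The field names are made up for the example.

```python
import pyarrow as pa

# The boolean type factory is bool_(); plain `pa.bool` does not exist.
schema = pa.schema([("is_pull_request", pa.bool_())])
arr = pa.array([True, False, True], type=pa.bool_())
print(schema, arr.type)
```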
612,398,190
46
[Features] Strip str key before dict look-up
The dataset `anli.py` currently fails because it tries to look up a key `1\n` in a dict that only has the key `1`. Added an if statement to strip the key if it cannot be found in the dict.
closed
https://github.com/huggingface/datasets/pull/46
2020-05-05T07:31:45
2020-05-05T08:37:45
2020-05-05T08:37:44
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
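A hedged sketch of the key-stripping fallback that PR 46 above describes for `anli.py`; the function and variable names are illustrative, not taken from the actual patch.

```python
def lookup_label(label_to_id: dict, key: str):
    # Fall back to a stripped key (e.g. "1\n" -> "1") when the raw key is absent.
    if key not in label_to_id:
        key = key.strip()
    return label_to_id[key]


assert lookup_label({"1": 1}, "1\n") == 1
```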
612,386,583
45
[Load] Separate Module kwargs and builder kwargs.
Kwargs for the `load_module` fn should be passed as `module_xxxx` in the `builder_kwargs` of the `load` fn. This is a follow-up PR to: https://github.com/huggingface/nlp/pull/41
closed
https://github.com/huggingface/datasets/pull/45
2020-05-05T07:09:54
2022-10-04T09:32:11
2020-05-08T09:51:22
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
611,873,486
44
[Tests] Fix tests for datasets with no config
Forgot to fix the `None` problem for datasets that have no config in this PR: https://github.com/huggingface/nlp/pull/42
closed
https://github.com/huggingface/datasets/pull/44
2020-05-04T13:25:38
2020-05-04T13:28:04
2020-05-04T13:28:03
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
611,773,279
43
[Checksums] If no configs exist prevent to run over empty list
`movie_rationales`, for example, has no configs.
closed
https://github.com/huggingface/datasets/pull/43
2020-05-04T10:39:42
2022-10-04T09:32:02
2020-05-04T13:18:03
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
611,754,343
42
[Tests] allow tests for builders without config
Some dataset scripts have no configs - the tests have to be adapted for this case. In this case the dummy data will be saved as `natural_questions/dummy/1.0.0/dummy_data.zip` (where `1.0.0` is the version number).
closed
https://github.com/huggingface/datasets/pull/42
2020-05-04T10:06:22
2020-05-04T13:10:50
2020-05-04T13:10:48
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
611,739,219
41
[Load module] allow kwargs into load module
Currently it is not possible to force a re-download of the dataset script. This simple change allows passing ``force_reload=True`` as ``builder_kwargs`` in the ``load.py`` function.
closed
https://github.com/huggingface/datasets/pull/41
2020-05-04T09:42:11
2020-05-04T19:39:07
2020-05-04T19:39:06
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
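An illustrative sketch of the `force_reload` option added in PR 41 above. At that point the entry point was still called `nlp.load` (it was renamed later, see PR 86); the exact call shape below is an assumption based on the PR description.

```python
import nlp

# Assumed usage: force a re-download of the dataset processing script instead of
# reusing the locally cached copy, forwarded through the builder kwargs.
dataset = nlp.load("squad", force_reload=True)
```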
611,721,308
40
Update remote checksums instead of overwrite
When the user uploads a dataset on S3, checksums are also uploaded with the `--upload_checksums` parameter. If the user uploads the dataset in several steps, then the remote checksums file was previously overwritten. Now it's going to be updated with the new checksums.
closed
https://github.com/huggingface/datasets/pull/40
2020-05-04T09:13:14
2020-05-04T11:51:51
2020-05-04T11:51:49
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
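A trivial sketch of the merge-instead-of-overwrite behaviour described in PR 40 above; the function name and dict shapes are illustrative assumptions.

```python
def update_remote_checksums(remote: dict, new: dict) -> dict:
    # Merge freshly computed checksums into the existing remote contents
    # instead of replacing the whole file, so earlier uploads are preserved.
    merged = dict(remote)
    merged.update(new)
    return merged


print(update_remote_checksums({"a.txt": "123"}, {"b.txt": "456"}))
```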
611,712,135
39
[Test] improve slow testing
closed
https://github.com/huggingface/datasets/pull/39
2020-05-04T08:58:33
2020-05-04T08:59:50
2020-05-04T08:59:49
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
611,677,656
38
[Checksums] Error for some datasets
The checksums command works very nicely for `squad`. But for `crime_and_punish` and `xnli`, the same bug happens: When running: ``` python nlp-cli nlp-cli test xnli --save_checksums ``` leads to: ``` File "nlp-cli", line 33, in <module> service.run() File "/home/patrick/python_bin/nlp/commands/test.py", line 61, in run ignore_checksums=self._ignore_checksums, File "/home/patrick/python_bin/nlp/builder.py", line 383, in download_and_prepare self._download_and_prepare(dl_manager=dl_manager, download_config=download_config) File "/home/patrick/python_bin/nlp/builder.py", line 627, in _download_and_prepare dl_manager=dl_manager, max_examples_per_split=download_config.max_examples_per_split, File "/home/patrick/python_bin/nlp/builder.py", line 431, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/patrick/python_bin/nlp/datasets/xnli/8bf4185a2da1ef2a523186dd660d9adcf0946189e7fa5942ea31c63c07b68a7f/xnli.py", line 95, in _split_generators dl_dir = dl_manager.download_and_extract(_DATA_URL) File "/home/patrick/python_bin/nlp/utils/download_manager.py", line 246, in download_and_extract return self.extract(self.download(url_or_urls)) File "/home/patrick/python_bin/nlp/utils/download_manager.py", line 186, in download self._record_sizes_checksums(url_or_urls, downloaded_path_or_paths) File "/home/patrick/python_bin/nlp/utils/download_manager.py", line 166, in _record_sizes_checksums self._recorded_sizes_checksums[url] = get_size_checksum(path) File "/home/patrick/python_bin/nlp/utils/checksums_utils.py", line 81, in get_size_checksum with open(path, "rb") as f: TypeError: expected str, bytes or os.PathLike object, not tuple ```
closed
https://github.com/huggingface/datasets/issues/38
2020-05-04T08:00:16
2020-05-04T09:48:20
2020-05-04T09:48:20
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
false
[]
611,670,295
37
[Datasets ToDo-List] add datasets
## Description This PR acts as a dashboard to see which datasets are added to the library and work. Cicle-ci should always be green so that we can be sure that newly added datasets are functional. This PR should not be merged. ## Progress **For the following datasets the test commands**: ``` RUN_SLOW=1 pytest tests/test_dataset_common.py::DatasetTest::test_load_real_dataset_<your-dataset-name> ``` and ``` RUN_SLOW=1 pytest tests/test_dataset_common.py::DatasetTest::test_load_dataset_all_configs_<your-dataset-name> ``` **passes**. - [x] Squad - [x] Sentiment140 - [x] XNLI - [x] Crime_and_Punish - [x] movie_rationales - [x] ai2_arc - [x] anli - [x] event2Mind - [x] Fquad - [x] blimp - [x] empathetic_dialogues - [x] cosmos_qa - [x] xquad - [x] blog_authorship_corpus - [x] SNLI - [x] break_data - [x] SQuAD v2 - [x] cfq - [x] eraser_multi_rc - [x] Glue - [x] Tydiqa - [x] wiki_qa - [x] wikitext - [x] winogrande - [x] wiqa - [x] esnli - [x] civil_comments - [x] commonsense_qa - [x] com_qa - [x] coqa - [x] wiki_split - [x] cos_e - [x] xcopa - [x] quarel - [x] quartz - [x] squad_it - [x] quoref - [x] squad_pt - [x] cornell_movie_dialog - [x] SciQ - [x] Scifact - [x] hellaswag - [x] ted_multi (in translate) - [x] Aeslc (summarization) - [x] drop - [x] gap - [x] hansard - [x] opinosis - [x] MLQA - [x] math_dataset ## How-To-Add a dataset **Before adding a dataset make sure that your branch is up to date**: 1. `git checkout add_datasets` 2. `git pull` **Add a dataset via the `convert_dataset.sh` bash script:** Running `bash convert_dataset.sh <file/to/tfds/datascript.py>` (*e.g.* `bash convert_dataset.sh ../tensorflow-datasets/tensorflow_datasets/text/movie_rationales.py`) will automatically run all the steps mentioned in **Add a dataset manually** below. Make sure that you run `convert_dataset.sh` from the root folder of `nlp`. The conversion script should work almost always for step 1): "convert dataset script from tfds to nlp format" and 2) "create checksum file" and step 3) "make style". It can also sometimes automatically run step 4) "create the correct dummy data from tfds", but this will only work if a) there is either no config name or only one config name and b) the `tfds testing/test_data/fake_example` is in the correct form. Nevertheless, the script should always be run in the beginning until an error occurs to be more efficient. If the conversion script does not work or fails at some step, then you can run the steps manually as follows: **Add a dataset manually** Make sure you run all of the following commands from the root of your `nlp` git clone. Also make sure that you changed to this branch: ``` git checkout add_datasets ``` 1) the tfds datascript file should be converted to `nlp` style: ``` python nlp-cli convert --tfds_path <path/to/tensorflow_datasets/text/your_dataset_name>.py --nlp_directory datasets/nlp ``` This will convert the tdfs script and create a folder with the correct name. 2) the checksum file should be added. Use the command: ``` python nlp-cli test datasets/nlp/<your-dataset-folder> --save_checksums --all_configs ``` A checksums.txt file should be created in your folder and the structure should look as follows: squad/ β”œβ”€β”€ squad.py/ └── urls_checksums/ ...........└── checksums.txt Delete the created `*.lock` file afterward - it should not be uploaded to AWS. 3) run black and isort on your newly added datascript files so that they look nice: ``` make style ``` 4) the dummy data should be added. 
For this it might be useful to take a look at the structure of other examples as shown in this PR, and at `<path/to/tensorflow_datasets/testing/test_data/test_data/fake_examples>`, to see whether the same data can be used. 5) the data can be uploaded to AWS using the command ``` aws s3 cp datasets/nlp/<your-dataset-folder> s3://datasets.huggingface.co/nlp/<your-dataset-folder> --recursive ``` 6) check whether everything works as expected using: ``` RUN_SLOW=1 pytest tests/test_dataset_common.py::DatasetTest::test_load_real_dataset_<your-dataset-name> ``` and ``` RUN_SLOW=1 pytest tests/test_dataset_common.py::DatasetTest::test_load_dataset_all_configs_<your-dataset-name> ``` 7) push to this PR and rerun the Circle CI workflow to check whether it stays green. 8) Edit this comment and tick off your newly added dataset :-) ## TODO-list Maybe we can add a TODO list here for everybody who feels like adding new datasets, so that we do not add the same datasets twice. Here is a link to available datasets: https://docs.google.com/spreadsheets/d/1zOtEqOrnVQwdgkC4nJrTY6d-Av02u0XFzeKAtBM2fUI/edit#gid=0 Patrick: - [ ] boolq - *weird download link* - [ ] c4 - *beam dataset*
closed
https://github.com/huggingface/datasets/pull/37
2020-05-04T07:47:39
2022-10-04T09:32:17
2020-05-08T13:48:23
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
611,528,349
36
Metrics - refactoring, adding support for download and distributed metrics
Refactoring metrics to have a loading API similar to the one for datasets, and improving the import system. # Import system The import system has been upgraded. There are now three types of imports allowed: 1. `library` imports (identified as "absolute imports") ```python import seqeval ``` => we'll test all the imports before running the scripts, and if one cannot be imported we'll display an error message like this one: `ImportError: To be able to use this metric/dataset, you need to install the following dependencies ['seqeval'] using 'pip install seqeval' for instance'` 2. `internal` imports (identified as "relative imports") ```python from . import c4_utils ``` => we'll assume this points to a file in the same directory/S3-directory as the main script and download this file. 3. `external` imports (identified as "relative imports" with a comment starting with `# From:`) ```python from .nmt_bleu import compute_bleu # From: https://github.com/tensorflow/nmt/blob/master/nmt/scripts/bleu.py ``` => we'll assume this points to the URL of a Python script (if it's a link to a GitHub file, we'll take the raw file automatically). => the script is downloaded and renamed to the import name (here above renamed from `bleu.py` to `nmt_bleu.py`). Renaming the file can be necessary if the distant file has the same name as the dataset/metric processing script. If you forget to rename the distant script and it has the same name as the dataset/metric, you'll get an explicit error message asking you to rename the import. # Hosting metrics Metrics are hosted on an S3 bucket like the dataset processing scripts. # Metrics scripts Metric scripts have a lot in common with dataset processing scripts. They also have a `metric.info` including citations, descriptions and links to relevant pages. Metrics have more documentation to supply to ensure they are used well. Four examples are already included for reference in [./metrics](./metrics): BLEU, ROUGE, SacreBLEU and SeqEVAL. # Automatic support for distributed/multi-processing metric computation We've also added support for automatic distributed/multi-processing metric computation (e.g. when using DistributedDataParallel). We leverage our own dataset format for smart caching in this case. Here is a quick gist of a standard use of metrics (the simplest usage): ```python import nlp bleu_metric = nlp.load_metric('bleu') # If you only have a single iteration, you can easily compute the score like this predictions = model(inputs) score = bleu_metric.compute(predictions, references) # If you have a loop, you can "add" your predictions and references at each iteration instead of having to save them yourself (the metric object stores them efficiently for you) for batch in dataloader: model_inputs, targets = batch predictions = model(model_inputs) bleu_metric.add(predictions, targets) score = bleu_metric.compute() # Compute the score from all the stored predictions/references ``` Here is a quick gist of a use in a distributed torch setup (should work for any Python multi-process setup actually). 
It's pretty much identical to the second example above: ```python import nlp # You need to give the total number of parallel Python processes (num_process) and the id of each process (process_id) bleu = nlp.load_metric('bleu', process_id=torch.distributed.get_rank(), num_process=torch.distributed.get_world_size()) for batch in dataloader: model_inputs, targets = batch predictions = model(model_inputs) bleu.add(predictions, targets) score = bleu.compute() # Compute the score on the first node by default (can be set to compute on each node as well) ```
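For illustration, here is a rough sketch of how an external `# From:` import could be resolved into a local module name and a raw download URL. The function name and regex below are assumptions for illustration only, not the library's actual implementation:

```python
# Hedged sketch only: illustrates the "# From:" convention described above,
# not the actual resolution code used in the library.
import re

def resolve_external_import(line: str):
    """Return (local_filename, raw_url) for an external import line, or None."""
    match = re.match(r"\s*from\s+\.(\w+)\s+import\s+.*#\s*From:\s*(\S+)", line)
    if match is None:
        return None
    module_name, url = match.groups()
    # For GitHub "blob" links, switch to the raw file URL before downloading.
    url = url.replace("github.com", "raw.githubusercontent.com").replace("/blob/", "/")
    return module_name + ".py", url

print(resolve_external_import(
    "from .nmt_bleu import compute_bleu  # From: https://github.com/tensorflow/nmt/blob/master/nmt/scripts/bleu.py"
))
# ('nmt_bleu.py', 'https://raw.githubusercontent.com/tensorflow/nmt/master/nmt/scripts/bleu.py')
```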
closed
https://github.com/huggingface/datasets/pull/36
2020-05-03T23:00:17
2020-05-11T08:16:02
2020-05-11T08:16:00
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[]
true
[]
611,413,731
35
[Tests] fix typo
@lhoestq - currently the slow test fail with: ``` _____________________________________________________________________________________ DatasetTest.test_load_real_dataset_xnli _____________________________________________________________________________________ self = <tests.test_dataset_common.DatasetTest testMethod=test_load_real_dataset_xnli>, dataset_name = 'xnli' @slow def test_load_real_dataset(self, dataset_name): with tempfile.TemporaryDirectory() as temp_data_dir: > dataset = load(dataset_name, data_dir=temp_data_dir) tests/test_dataset_common.py:153: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../../python_bin/nlp/load.py:497: in load dbuilder.download_and_prepare(**download_and_prepare_kwargs) ../../python_bin/nlp/builder.py:383: in download_and_prepare self._download_and_prepare(dl_manager=dl_manager, download_config=download_config) ../../python_bin/nlp/builder.py:627: in _download_and_prepare dl_manager=dl_manager, max_examples_per_split=download_config.max_examples_per_split, ../../python_bin/nlp/builder.py:431: in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) ../../python_bin/nlp/datasets/xnli/8bf4185a2da1ef2a523186dd660d9adcf0946189e7fa5942ea31c63c07b68a7f/xnli.py:95: in _split_generators dl_dir = dl_manager.download_and_extract(_DATA_URL) ../../python_bin/nlp/utils/download_manager.py:246: in download_and_extract return self.extract(self.download(url_or_urls)) ../../python_bin/nlp/utils/download_manager.py:186: in download self._record_sizes_checksums(url_or_urls, downloaded_path_or_paths) ../../python_bin/nlp/utils/download_manager.py:166: in _record_sizes_checksums self._recorded_sizes_checksums[url] = get_size_checksum(path) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ path = ('', '/tmp/tmpkajlg9yc/downloads/c0f7773c480a3f2d85639d777e0e17e65527460310d80760fd3fc2b2f2960556.c952a63cb17d3d46e412ceb7dbcd656ce2b15cc9ef17f50c28f81c48a7c853b5') def get_size_checksum(path: str) -> Tuple[int, str]: """Compute the file size and the sha256 checksum of a file""" m = sha256() > with open(path, "rb") as f: E TypeError: expected str, bytes or os.PathLike object, not tuple ../../python_bin/nlp/utils/checksums_utils.py:81: TypeError ``` - the checksums probably need to be updated no? And we should also think about how to write a test for the checksums.
closed
https://github.com/huggingface/datasets/pull/35
2020-05-03T13:23:49
2020-05-03T13:24:21
2020-05-03T13:24:20
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
611,385,516
34
[Tests] add slow tests
This PR adds a slow test that downloads the "real" dataset. The test is decorated as "slow" so that it does not automatically run on Circle CI. Before uploading a dataset, one should manually check that this test passes by running ``` RUN_SLOW=1 pytest tests/test_dataset_common.py::DatasetTest::test_load_real_dataset_<your-dataset-script-name> ``` This PR should be merged after PR #33.
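For context, the `slow` decorator presumably just checks an environment variable; a minimal sketch (assumed behavior, not necessarily the exact implementation used in the test suite) could look like this:

```python
import os
import unittest

def slow(test_case):
    """Skip the decorated test unless RUN_SLOW=1 is set in the environment (sketch only)."""
    if os.environ.get("RUN_SLOW", "0").upper() not in ("1", "TRUE", "YES"):
        return unittest.skip("test is slow; set RUN_SLOW=1 to run it")(test_case)
    return test_case
```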
closed
https://github.com/huggingface/datasets/pull/34
2020-05-03T11:01:22
2020-05-03T12:18:30
2020-05-03T12:18:29
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
611,052,081
33
Big cleanup/refactoring for clean serialization
This PR cleans many base classes to re-build them as `dataclasses`. We can thus use a simple serialization workflow for `DatasetInfo`, including its `Features` and `SplitDict`, based on `dataclasses.asdict()`. The resulting code is a lot shorter, can be easily serialized/deserialized, the dataset info is human-readable, and we can get rid of the `dataclass_json` dependency. The scripts have breaking changes and the conversion tool is updated. Example of dataset info in the SQuAD script now: ```python def _info(self): return nlp.DatasetInfo( description=_DESCRIPTION, features=nlp.Features({ "id": nlp.Value('string'), "title": nlp.Value('string'), "context": nlp.Value('string'), "question": nlp.Value('string'), "answers": nlp.Sequence({ "text": nlp.Value('string'), "answer_start": nlp.Value('int32'), }), }), # No default supervised_keys (as we have to pass both question # and context as input). supervised_keys=None, homepage="https://rajpurkar.github.io/SQuAD-explorer/", citation=_CITATION, ) ``` Example of serialized dataset info: ```bash { "description": "Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.\n", "citation": "@article{2016arXiv160605250R,\n author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},\n Konstantin and {Liang}, Percy},\n title = \"{SQuAD: 100,000+ Questions for Machine Comprehension of Text}\",\n journal = {arXiv e-prints},\n year = 2016,\n eid = {arXiv:1606.05250},\n pages = {arXiv:1606.05250},\narchivePrefix = {arXiv},\n eprint = {1606.05250},\n}\n", "homepage": "https://rajpurkar.github.io/SQuAD-explorer/", "license": "", "features": { "id": { "dtype": "string", "_type": "Value" }, "title": { "dtype": "string", "_type": "Value" }, "context": { "dtype": "string", "_type": "Value" }, "question": { "dtype": "string", "_type": "Value" }, "answers": { "feature": { "text": { "dtype": "string", "_type": "Value" }, "answer_start": { "dtype": "int32", "_type": "Value" } }, "length": -1, "_type": "Sequence" } }, "supervised_keys": null, "name": "squad", "version": { "version_str": "1.0.0", "description": "New split API (https://tensorflow.org/datasets/splits)", "nlp_version_to_prepare": null, "major": 1, "minor": 0, "patch": 0 }, "splits": { "train": { "name": "train", "num_bytes": 79426386, "num_examples": 87599, "dataset_name": "squad" }, "validation": { "name": "validation", "num_bytes": 10491883, "num_examples": 10570, "dataset_name": "squad" } }, "size_in_bytes": 0, "download_size": 35142551, "download_checksums": [] } ```
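To illustrate the serialization workflow, here is a simplified, self-contained sketch; the toy dataclass below is only an illustration and not the real `DatasetInfo`, which has many more fields:

```python
import json
from dataclasses import asdict, dataclass, field

@dataclass
class ToyDatasetInfo:
    description: str = ""
    citation: str = ""
    homepage: str = ""
    features: dict = field(default_factory=dict)

info = ToyDatasetInfo(description="Toy example", homepage="https://example.com")

# dataclasses.asdict() gives a plain dict that dumps to human-readable JSON.
serialized = json.dumps(asdict(info), indent=2)

# The JSON maps straight back onto the dataclass fields.
restored = ToyDatasetInfo(**json.loads(serialized))
assert restored == info
```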
closed
https://github.com/huggingface/datasets/pull/33
2020-05-01T23:45:57
2020-05-03T12:17:34
2020-05-03T12:17:33
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[]
true
[]
610,715,580
32
Fix map caching notebooks
Previously, caching results with `.map()` didn't work in notebooks. To reuse a result, `.map()` serializes the function with `dill.dumps` and then hashes the result. The problem is that when using `dill.dumps` to serialize a function, it also saves its origin (filename + line no.) and the origin of all the `globals` this function needs. However, for notebooks and shells, the filename looks like \<ipython-input-13-9ed2afe61d25\> and the line no. changes often. To fix the problem, I added a new dispatch function for code objects that ignores the origin of the code if it comes from a notebook or a Python shell. I tested these cases in a notebook: - lambda functions - named functions - methods - classmethods - staticmethods - classes that implement `__call__` The caching now works as expected for all of them :) I also tested the caching in the demo notebook and it works fine!
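A rough sketch of the two ingredients involved is below; the helper names are made up for illustration, and the real fix registers a custom `dill` dispatch for code objects rather than wrapping `dill.dumps`:

```python
# Hedged sketch, not the actual implementation.
import re
from hashlib import md5

import dill

def comes_from_notebook_or_shell(filename: str) -> bool:
    """Heuristic for code objects whose origin should be ignored when hashing (assumption)."""
    return bool(re.match(r"<ipython-input-\d+-[0-9a-f]+>", filename)) or filename in ("<stdin>", "<string>")

def fingerprint(func) -> str:
    """Hash a function's dill serialization, roughly what .map() uses as a cache key."""
    return md5(dill.dumps(func)).hexdigest()

print(comes_from_notebook_or_shell("<ipython-input-13-9ed2afe61d25>"))  # True
print(fingerprint(lambda x: x + 1))  # only stable across cells once code-object origins are ignored
```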
closed
https://github.com/huggingface/datasets/pull/32
2020-05-01T11:55:26
2020-05-03T12:15:58
2020-05-03T12:15:57
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
610,677,641
31
[Circle ci] Install a virtual env before running tests
Install a virtual env before running the tests to avoid running into sudo issues when dynamically downloading files. The same number of tests now pass / fail as on my local computer: ![Screenshot from 2020-05-01 12-14-44](https://user-images.githubusercontent.com/23423619/80798814-8a0a0a80-8ba5-11ea-8db8-599d33bbfccd.png)
closed
https://github.com/huggingface/datasets/pull/31
2020-05-01T10:11:17
2020-05-01T22:06:16
2020-05-01T22:06:15
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
610,549,072
30
add metrics which require download files from github
To download files from GitHub, I copied `load_dataset_module` and its dependencies (without the builder) from `load.py` to `metrics/metric_utils.py`. I made the following changes: - copy the needed files into a `<metric_name>` folder - delete all other files that are not needed For metrics that require an external import, I first create a `<metric_name>_imports.py` file which contains all external URLs. Then I create a `<metric_name>.py` in which I load the external files using `<metric_name>_imports.py`.
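As a purely hypothetical illustration of this pattern (the file layout and variable names are assumptions, not necessarily what this PR uses), a `bleu_imports.py` could simply collect the external URLs:

```python
# bleu_imports.py (hypothetical sketch): one place listing the external files the metric needs.
EXTERNAL_IMPORT_URLS = {
    # local module name -> raw file it should be downloaded from
    "nmt_bleu": "https://raw.githubusercontent.com/tensorflow/nmt/master/nmt/scripts/bleu.py",
}

# bleu.py (hypothetical sketch) would then download each URL next to itself and use a
# plain relative import such as:
#   from .nmt_bleu import compute_bleu
```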
closed
https://github.com/huggingface/datasets/pull/30
2020-05-01T04:13:22
2022-10-04T09:31:58
2020-05-11T08:19:54
{ "login": "mariamabarham", "id": 38249783, "type": "User" }
[]
true
[]
610,243,997
29
Hf_api small changes
From Patrick: ```python from nlp import hf_api api = hf_api.HfApi() api.dataset_list() ``` works :-)
closed
https://github.com/huggingface/datasets/pull/29
2020-04-30T17:06:43
2020-04-30T19:51:45
2020-04-30T19:51:44
{ "login": "julien-c", "id": 326577, "type": "User" }
[]
true
[]
610,241,907
28
[Circle ci] Adds circle ci config
@thomwolf can you take a look and set up circle ci on: https://app.circleci.com/projects/project-dashboard/github/huggingface I think for `nlp` only admins can set it up, which I guess is you :-)
closed
https://github.com/huggingface/datasets/pull/28
2020-04-30T17:03:35
2020-04-30T19:51:09
2020-04-30T19:51:08
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
610,230,476
27
[Cleanup] Removes all files in testing except test_dataset_common
As far as I know, all files in `tests` were old tfds test files, so I removed them. We can still look them up in the other library.
closed
https://github.com/huggingface/datasets/pull/27
2020-04-30T16:45:21
2020-04-30T17:39:25
2020-04-30T17:39:23
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
610,226,047
26
[Tests] Clean tests
The abseil testing library (https://abseil.io/docs/python/quickstart.html) is better than the one I had before, so I decided to switch to it and changed the `setup.py` config file. Abseil is better supported and has a cleaner API for parametrized testing, I think. I added a list of all dataset scripts that are currently on AWS, but will replace it once the API is integrated into this lib. One can now easily run a single test function for a single dataset with: `tests/test_dataset_common.py::DatasetTest::test_load_dataset_wikipedia` NOTE: This PR is rebased on PR #29, so it should be merged after it.
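For reference, a parametrized test with `absl` looks roughly like the sketch below; the dataset names are placeholders, not the actual list added in this PR:

```python
from absl.testing import parameterized

class DatasetTest(parameterized.TestCase):

    @parameterized.named_parameters(
        ("squad", "squad"),                # each tuple: (test name suffix, dataset_name)
        ("sentiment140", "sentiment140"),  # placeholder dataset names
    )
    def test_load_dataset(self, dataset_name):
        # generates test_load_dataset_squad, test_load_dataset_sentiment140, ...
        self.assertIsInstance(dataset_name, str)
```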
closed
https://github.com/huggingface/datasets/pull/26
2020-04-30T16:38:29
2020-04-30T20:12:04
2020-04-30T20:12:03
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
609,708,863
25
Add script csv datasets
This PR allows creating datasets from local CSV files. A usage might be: ```python import nlp ds = nlp.load( path="csv", name="bbc", dataset_files={ nlp.Split.TRAIN: ["datasets/dummy_data/csv/train.csv"], nlp.Split.TEST: ["datasets/dummy_data/csv/test.csv"] }, csv_kwargs={ "skip_rows": 0, "delimiter": ",", "quote_char": "\"", "header_as_column_names": True } ) ``` ``` Downloading and preparing dataset bbc/1.0.0 (download: Unknown size, generated: Unknown size, total: Unknown size) to /home/jplu/.cache/huggingface/datasets/bbc/1.0.0... Dataset bbc downloaded and prepared to /home/jplu/.cache/huggingface/datasets/bbc/1.0.0. Subsequent calls will reuse this data. {'test': Dataset(schema: {'category': 'string', 'text': 'string'}, num_rows: 49), 'train': Dataset(schema: {'category': 'string', 'text': 'string'}, num_rows: 99), 'validation': Dataset(schema: {'category': 'string', 'text': 'string'}, num_rows: 0)} ``` How it is read: - `path`: the `csv` word means "I want to create a CSV dataset" - `name`: the name of this dataset is `bbc` - `dataset_files`: a dictionary where each key is a split and each value is the list of files for that split. - `csv_kwargs`: the keyword arguments that "explain" how to read the CSV files * `skip_rows`: number of rows to skip, starting from the beginning of the file * `delimiter`: which delimiter is used to separate the columns * `quote_char`: which quote character is used to wrap a column value in which the delimiter appears * `header_as_column_names`: will use the first row (header) of the file as names for the features. Otherwise the names will be automatically generated as `f1`, `f2`, etc... Will be applied after the `skip_rows` parameter. **TODO**: for now `csv.py` is copied as `ds_name.py` each time we create a new dataset; this behavior will be modified so that the `csv.py` script is copied only once, instead of once per CSV dataset.
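Under the hood these `csv_kwargs` presumably map onto `pyarrow.csv` options; the standalone sketch below shows an equivalent read, but this mapping is an assumption and not necessarily what `csv.py` does:

```python
import pyarrow.csv as pac

# Roughly equivalent to skip_rows=0, delimiter=",", quote_char='"', header_as_column_names=True.
table = pac.read_csv(
    "datasets/dummy_data/csv/train.csv",
    read_options=pac.ReadOptions(skip_rows=0),   # first non-skipped row is used as the header by default
    parse_options=pac.ParseOptions(delimiter=",", quote_char='"'),
)
print(table.schema)
```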
closed
https://github.com/huggingface/datasets/pull/25
2020-04-30T08:28:08
2022-10-04T09:32:13
2020-05-07T21:14:49
{ "login": "jplu", "id": 959590, "type": "User" }
[]
true
[]