Columns:
- title: string (lengths 1 to 290)
- body: string (lengths 0 to 228k, nullable)
- html_url: string (lengths 46 to 51)
- comments: list
- pull_request: dict
- number: int64 (1 to 5.59k)
- is_pull_request: bool (2 classes)
[ArrowWriter] Set schema at first write example
Right now, if the schema was not specified when instantiating `ArrowWriter`, it can be set by the first `write_table` call, for example (that method calls `self._build_writer()` to do so). I noticed that this was not done when the first example is added via `.write`, so I added it there for consistency.
https://github.com/huggingface/datasets/pull/200
[ "Good point!\r\n\r\nI guess we could add this to `write_batch` as well (before using `self._schema` in the first line of this method)?" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/200", "html_url": "https://github.com/huggingface/datasets/pull/200", "diff_url": "https://github.com/huggingface/datasets/pull/200.diff", "patch_url": "https://github.com/huggingface/datasets/pull/200.patch", "merged_at": "2020-05-27T09:07:53" }
200
true
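A minimal sketch of the lazy-schema behavior discussed in #200 above, using hypothetical names rather than the actual `ArrowWriter` internals: whichever write path is hit first builds the underlying writer, from the user-provided schema if there is one, otherwise from the first data it sees.
```python
import pyarrow as pa

class LazySchemaWriter:
    """Toy writer: the schema is fixed at the first write, whichever path is hit."""

    def __init__(self, sink, schema=None):
        self.sink = sink
        self._schema = schema
        self._writer = None

    def _build_writer(self, inferred_schema):
        # Use the user-provided schema if there is one, otherwise the inferred one.
        self._schema = self._schema if self._schema is not None else inferred_schema
        self._writer = pa.RecordBatchStreamWriter(self.sink, self._schema)

    def write(self, example: dict):
        # Single-example path: build the writer on first use, like write_table does.
        table = pa.Table.from_pydict({k: [v] for k, v in example.items()}, schema=self._schema)
        if self._writer is None:
            self._build_writer(table.schema)
        self._writer.write_table(table)

    def write_table(self, table: pa.Table):
        if self._writer is None:
            self._build_writer(table.schema)
        self._writer.write_table(table)

    def finalize(self):
        if self._writer is not None:
            self._writer.close()
```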
Fix GermEval 2014 dataset infos
Hi, this PR just removes the `dataset_info.json` file and adds a newly generated `dataset_infos.json` file.
https://github.com/huggingface/datasets/pull/199
[ "Hopefully. this also fixes the dataset view on https://huggingface.co/nlp/viewer/ :)", "Oh good catch ! This should fix it indeed" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/199", "html_url": "https://github.com/huggingface/datasets/pull/199", "diff_url": "https://github.com/huggingface/datasets/pull/199.diff", "patch_url": "https://github.com/huggingface/datasets/pull/199.patch", "merged_at": "2020-05-26T21:50:24" }
199
true
Index outside of table length
The offset input box warns of numbers larger than a limit (like 2000) but then the errors start at a smaller value than that limit (like 1955).

> ValueError: Index (2000) outside of table length (2000).
> Traceback:
> File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/ScriptRunner.py", line 322, in _run_script
>     exec(code, module.__dict__)
> File "/home/sasha/nlp_viewer/run.py", line 116, in <module>
>     v = d[item][k]
> File "/home/sasha/.local/lib/python3.7/site-packages/nlp/arrow_dataset.py", line 338, in __getitem__
>     output_all_columns=self._output_all_columns,
> File "/home/sasha/.local/lib/python3.7/site-packages/nlp/arrow_dataset.py", line 290, in _getitem
>     raise ValueError(f"Index ({key}) outside of table length ({self._data.num_rows}).")
https://github.com/huggingface/datasets/issues/198
[ "Sounds like something related to the nlp viewer @srush ", "Fixed. " ]
null
198
false
Scientific Papers only downloading Pubmed
Hi! I have been playing around with this module, and I am a bit confused about the `scientific_papers` dataset. I thought that it would download two separate datasets, arxiv and pubmed. But when I run the following: ``` dataset = nlp.load_dataset('scientific_papers', data_dir='.', cache_dir='.') Downloading: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 5.05k/5.05k [00:00<00:00, 2.66MB/s] Downloading: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 4.90k/4.90k [00:00<00:00, 2.42MB/s] Downloading and preparing dataset scientific_papers/pubmed (download: 4.20 GiB, generated: 2.33 GiB, total: 6.53 GiB) to ./scientific_papers/pubmed/1.1.1... Downloading: 3.62GB [00:40, 90.5MB/s] Downloading: 880MB [00:08, 101MB/s] Dataset scientific_papers downloaded and prepared to ./scientific_papers/pubmed/1.1.1. Subsequent calls will reuse this data. ``` only a pubmed folder is created. There doesn't seem to be something for arxiv. Are these two datasets merged? Or have I misunderstood something? Thanks!
https://github.com/huggingface/datasets/issues/197
[ "Hi so there are indeed two configurations in the datasets as you can see [here](https://github.com/huggingface/nlp/blob/master/datasets/scientific_papers/scientific_papers.py#L81-L82).\r\n\r\nYou can load either one with:\r\n```python\r\ndataset = nlp.load_dataset('scientific_papers', 'pubmed')\r\ndataset = nlp.lo...
null
197
false
Check invalid config name
As said in #194, we should raise an error if the config name contains bad characters. Bad characters are those that are not allowed in directory names on Windows.
https://github.com/huggingface/datasets/pull/196
[ "I think that's not related to the config name but the filenames in the dummy data. Mostly it occurs with files downloaded from drive. In that case the dummy file name is extracted from the google drive link and it corresponds to what comes after `https://drive.google.com/`\r\n\r\n", "> I think that's not related...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/196", "html_url": "https://github.com/huggingface/datasets/pull/196", "diff_url": "https://github.com/huggingface/datasets/pull/196.diff", "patch_url": "https://github.com/huggingface/datasets/pull/196.patch", "merged_at": "2020-05-26T21:04:55" }
196
true
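A rough sketch of the proposed validation for #196, with an assumed character set and error message rather than the exact ones merged in the PR: reject config names containing characters Windows forbids in directory names, while still allowing names like `mode=first,char_skip=25`.
```python
import re

# Characters that Windows forbids in file and directory names.
_WINDOWS_FORBIDDEN = re.compile(r'[<>:"/\\|?*]')

def check_config_name(name: str) -> None:
    match = _WINDOWS_FORBIDDEN.search(name)
    if match:
        raise ValueError(
            f"Bad config name {name!r}: character {match.group()!r} is not allowed "
            "in Windows directory names."
        )

check_config_name("mode=first,char_skip=25")  # fine: '=' and ',' are allowed
try:
    check_config_name("uc?export=download")
except ValueError as err:
    print(err)  # '?' is rejected
```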
[Dummy data command] add new case to command
Qanta: #194 introduces a case that was not noticed before. This change in code helps community users to have an easier time creating the dummy data.
https://github.com/huggingface/datasets/pull/195
[ "@lhoestq - tiny change in the dummy data command, should be good to merge." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/195", "html_url": "https://github.com/huggingface/datasets/pull/195", "diff_url": "https://github.com/huggingface/datasets/pull/195.diff", "patch_url": "https://github.com/huggingface/datasets/pull/195.patch", "merged_at": "2020-05-26T14:38:27" }
195
true
Add Dataset: Qanta
Fixes dummy data for #169 @EntilZha
https://github.com/huggingface/datasets/pull/194
[ "@lhoestq - the config name is rather special here: *E.g.* `mode=first,char_skip=25`. It includes `=` and `,` - will that be a problem for windows folders, you think? \r\n\r\nApart from that good to merge for me.", "It's ok to have `=` and `,`.\r\nWindows doesn't like things like `?`, `:`, `/` etc.\r\n\r\nI'll ad...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/194", "html_url": "https://github.com/huggingface/datasets/pull/194", "diff_url": "https://github.com/huggingface/datasets/pull/194.diff", "patch_url": "https://github.com/huggingface/datasets/pull/194.patch", "merged_at": "2020-05-26T13:16:20" }
194
true
[Tensorflow] Use something else than `from_tensor_slices()`
In the example notebook, the TF Dataset is built using `from_tensor_slices()` : ```python columns = ['input_ids', 'token_type_ids', 'attention_mask', 'start_positions', 'end_positions'] train_tf_dataset.set_format(type='tensorflow', columns=columns) features = {x: train_tf_dataset[x] for x in columns[:3]} labels = {"output_1": train_tf_dataset["start_positions"]} labels["output_2"] = train_tf_dataset["end_positions"] tfdataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(8) ``` But according to [official tensorflow documentation](https://www.tensorflow.org/guide/data#consuming_numpy_arrays), this will load the entire dataset to memory. **This defeats one purpose of this library, which is lazy loading.** Is there any other way to load the `nlp` dataset into TF dataset lazily ? --- For example, is it possible to use [Arrow dataset](https://www.tensorflow.org/io/api_docs/python/tfio/arrow/ArrowDataset) ? If yes, is there any code example ?
https://github.com/huggingface/datasets/issues/193
[ "I guess we can use `tf.data.Dataset.from_generator` instead. I'll give it a try.", "Is `tf.data.Dataset.from_generator` working on TPU ?", "`from_generator` is not working on TPU, I met the following error :\r\n\r\n```\r\nFile \"/usr/local/lib/python3.6/contextlib.py\", line 88, in __exit__\r\n next(self.ge...
null
193
false
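A hedged sketch of the `from_generator` route suggested in the comments of #193; it reuses the `train_tf_dataset` and column names from the snippet above, and the numpy format and int32 output types are assumptions. The point is that rows are streamed from the Arrow-backed dataset instead of being materialized up front with `from_tensor_slices`.
```python
import tensorflow as tf

columns = ['input_ids', 'token_type_ids', 'attention_mask', 'start_positions', 'end_positions']
train_tf_dataset.set_format(type='numpy', columns=columns)

def gen():
    # Rows are fetched one by one from the memory-mapped Arrow table,
    # so nothing is loaded into memory up front.
    for i in range(len(train_tf_dataset)):
        row = train_tf_dataset[i]
        features = {name: row[name] for name in columns[:3]}
        labels = {"output_1": row["start_positions"], "output_2": row["end_positions"]}
        yield features, labels

tfdataset = tf.data.Dataset.from_generator(
    gen,
    output_types=(
        {name: tf.int32 for name in columns[:3]},
        {"output_1": tf.int32, "output_2": tf.int32},
    ),
).batch(8)
```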
[Question] Create Apache Arrow dataset from raw text file
Hi guys, I have gathered and preprocessed about 2GB of COVID papers from the CORD dataset on Kaggle. I have seen you have a text dataset, "Crime and Punishment", in Apache Arrow format. Do you have any script or guide to do this from a raw txt file (preprocessed BERT-style)? Would it be worth sending it to you to add to the NLP library? Thanks, Manu
https://github.com/huggingface/datasets/issues/192
[ "We store every dataset in the Arrow format. This is convenient as it supports nested types and memory mapping. If you are curious feel free to check the [pyarrow documentation](https://arrow.apache.org/docs/python/)\r\n\r\nYou can use this library to load your covid papers by creating a dataset script. You can fin...
null
192
false
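As a hedged illustration of the comment on #192 above (the file names and the single `text` column are assumptions; the library itself does this through a dataset script), writing a preprocessed raw text file into an Arrow file with plain pyarrow could look roughly like this:
```python
import pyarrow as pa

# One document (or line) per row of a single "text" column.
with open("cord_papers.txt", encoding="utf-8") as f:
    lines = [line.rstrip("\n") for line in f]

table = pa.Table.from_pydict({"text": lines})

# Write an Arrow IPC file that can later be memory-mapped.
sink = pa.OSFile("cord_papers.arrow", "wb")
writer = pa.RecordBatchFileWriter(sink, table.schema)
writer.write_table(table)
writer.close()
sink.close()
```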
[Squad es] add dataset_infos
@mariamabarham - was still about to upload this. Should have waited with my comment a bit more :D
https://github.com/huggingface/datasets/pull/191
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/191", "html_url": "https://github.com/huggingface/datasets/pull/191", "diff_url": "https://github.com/huggingface/datasets/pull/191.diff", "patch_url": "https://github.com/huggingface/datasets/pull/191.patch", "merged_at": "2020-05-25T16:39:58" }
191
true
add squad Spanish v1 and v2
This PR adds the Spanish SQuAD versions 1 and 2 datasets. Fixes #164
https://github.com/huggingface/datasets/pull/190
[ "Nice ! :) \r\nCan we group them into one dataset with two versions, instead of having two datasets ?", "Yes sure, I can use the version as config name", "@lhoestq can you check? I grouped them", "Awesome :) feel free to merge after fixing the test in the CI", "@mariamabarham - feel free to merge when you'r...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/190", "html_url": "https://github.com/huggingface/datasets/pull/190", "diff_url": "https://github.com/huggingface/datasets/pull/190.diff", "patch_url": "https://github.com/huggingface/datasets/pull/190.patch", "merged_at": "2020-05-25T16:28:45" }
190
true
[Question] BERT-style multiple choice formatting
Hello, I am wondering what the equivalent formatting of a dataset should be to allow for multiple-choice answering prediction, BERT-style. Previously, this was done by passing a list of `InputFeatures` to the dataloader instead of a list of `InputFeature`, where `InputFeatures` contained lists of length equal to the number of answer choices in the MCQ instead of single items. I'm a bit confused on what the output of my feature conversion function should be when using `dataset.map()` to ensure similar behavior. Thanks!
https://github.com/huggingface/datasets/issues/189
[ "Hi @sarahwie, can you details this a little more?\r\n\r\nI'm not sure I understand what you refer to and what you mean when you say \"Previously, this was done by passing a list of InputFeatures to the dataloader instead of a list of InputFeature\"", "I think I've resolved it. For others' reference: to convert f...
null
189
false
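A hedged sketch of what such a feature-conversion function for `dataset.map()` could return in #189, following the list-per-choice convention the issue describes; `tokenizer` and the column names are assumed, and this is not an official recipe.
```python
def convert_to_mc_features(example):
    # One MCQ example -> each feature becomes a list of length num_choices.
    encodings = [
        tokenizer.encode_plus(
            example["question"], choice, max_length=128, pad_to_max_length=True
        )
        for choice in example["choices"]
    ]
    return {
        "input_ids": [enc["input_ids"] for enc in encodings],
        "attention_mask": [enc["attention_mask"] for enc in encodings],
        "label": example["label"],
    }

dataset = dataset.map(convert_to_mc_features)
```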
When will the remaining math_dataset modules be added as dataset objects
Currently only the algebra_linear_1d module is supported. Is there a timeline for making the other modules supported? If no timeline is established, how can I help?
https://github.com/huggingface/datasets/issues/188
[ "On a similar note it would be nice to differentiate between train-easy, train-medium, and train-hard", "Hi @tylerroost, we don't have a timeline for this at the moment.\r\nIf you want to give it a look we would be happy to review a PR on it.\r\nAlso, the library is one week old so everything is quite barebones, ...
null
188
false
[Question] How to load wikipedia ? Beam runner ?
When `nlp.load_dataset('wikipedia')`, I got * `WARNING:nlp.builder:Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided. Please pass a nlp.DownloadConfig(beam_runner=...) object to the builder.download_and_prepare(download_config=...) method. Default values will be used.` * `AttributeError: 'NoneType' object has no attribute 'size'` Could somebody tell me what should I do ? # Env On Colab, ``` git clone https://github.com/huggingface/nlp cd nlp pip install -q . ``` ``` %pip install -q apache_beam mwparserfromhell -> ERROR: pydrive 1.3.1 has requirement oauth2client>=4.0.0, but you'll have oauth2client 3.0.0 which is incompatible. ERROR: google-api-python-client 1.7.12 has requirement httplib2<1dev,>=0.17.0, but you'll have httplib2 0.12.0 which is incompatible. ERROR: chainer 6.5.0 has requirement typing-extensions<=3.6.6, but you'll have typing-extensions 3.7.4.2 which is incompatible. ``` ``` pip install -q apache-beam[interactive] ERROR: google-colab 1.0.0 has requirement ipython~=5.5.0, but you'll have ipython 5.10.0 which is incompatible. ``` # The whole message ``` WARNING:nlp.builder:Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided. Please pass a nlp.DownloadConfig(beam_runner=...) object to the builder.download_and_prepare(download_config=...) method. Default values will be used. Downloading and preparing dataset wikipedia/20200501.aa (download: Unknown size, generated: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/wikipedia/20200501.aa/1.0.0... --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) /usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnRunner.process() 44 frames /usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.PerWindowInvoker.invoke_process() /usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window() /usr/local/lib/python3.6/dist-packages/apache_beam/io/iobase.py in process(self, element, init_result) 1081 writer.write(e) -> 1082 return [window.TimestampedValue(writer.close(), timestamp.MAX_TIMESTAMP)] 1083 /usr/local/lib/python3.6/dist-packages/apache_beam/io/filebasedsink.py in close(self) 422 def close(self): --> 423 self.sink.close(self.temp_handle) 424 return self.temp_shard_path /usr/local/lib/python3.6/dist-packages/apache_beam/io/parquetio.py in close(self, writer) 537 if len(self._buffer[0]) > 0: --> 538 self._flush_buffer() 539 if self._record_batches_byte_size > 0: /usr/local/lib/python3.6/dist-packages/apache_beam/io/parquetio.py in _flush_buffer(self) 569 for b in x.buffers(): --> 570 size = size + b.size 571 self._record_batches_byte_size = self._record_batches_byte_size + size AttributeError: 'NoneType' object has no attribute 'size' During handling of the above exception, another exception occurred: AttributeError Traceback (most recent call last) <ipython-input-9-340aabccefff> in <module>() ----> 1 dset = nlp.load_dataset('wikipedia') /usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 518 download_mode=download_mode, 519 
ignore_verifications=ignore_verifications, --> 520 save_infos=save_infos, 521 ) 522 /usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, dl_manager, **download_and_prepare_kwargs) 370 verify_infos = not save_infos and not ignore_verifications 371 self._download_and_prepare( --> 372 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 373 ) 374 # Sync info /usr/local/lib/python3.6/dist-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos) 770 with beam.Pipeline(runner=beam_runner, options=beam_options,) as pipeline: 771 super(BeamBasedBuilder, self)._download_and_prepare( --> 772 dl_manager, pipeline=pipeline, verify_infos=False 773 ) # TODO{beam} verify infos 774 /usr/local/lib/python3.6/dist-packages/apache_beam/pipeline.py in __exit__(self, exc_type, exc_val, exc_tb) 501 def __exit__(self, exc_type, exc_val, exc_tb): 502 if not exc_type: --> 503 self.run().wait_until_finish() 504 505 def visit(self, visitor): /usr/local/lib/python3.6/dist-packages/apache_beam/pipeline.py in run(self, test_runner_api) 481 return Pipeline.from_runner_api( 482 self.to_runner_api(use_fake_coders=True), self.runner, --> 483 self._options).run(False) 484 485 if self._options.view_as(TypeOptions).runtime_type_check: /usr/local/lib/python3.6/dist-packages/apache_beam/pipeline.py in run(self, test_runner_api) 494 finally: 495 shutil.rmtree(tmpdir) --> 496 return self.runner.run_pipeline(self, self._options) 497 498 def __enter__(self): /usr/local/lib/python3.6/dist-packages/apache_beam/runners/direct/direct_runner.py in run_pipeline(self, pipeline, options) 128 runner = BundleBasedDirectRunner() 129 --> 130 return runner.run_pipeline(pipeline, options) 131 132 /usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in run_pipeline(self, pipeline, options) 553 554 self._latest_run_result = self.run_via_runner_api( --> 555 pipeline.to_runner_api(default_environment=self._default_environment)) 556 return self._latest_run_result 557 /usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in run_via_runner_api(self, pipeline_proto) 563 # TODO(pabloem, BEAM-7514): Create a watermark manager (that has access to 564 # the teststream (if any), and all the stages). 
--> 565 return self.run_stages(stage_context, stages) 566 567 @contextlib.contextmanager /usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in run_stages(self, stage_context, stages) 704 stage, 705 pcoll_buffers, --> 706 stage_context.safe_coders) 707 metrics_by_stage[stage.name] = stage_results.process_bundle.metrics 708 monitoring_infos_by_stage[stage.name] = ( /usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in _run_stage(self, worker_handler_factory, pipeline_components, stage, pcoll_buffers, safe_coders) 1071 cache_token_generator=cache_token_generator) 1072 -> 1073 result, splits = bundle_manager.process_bundle(data_input, data_output) 1074 1075 def input_for(transform_id, input_id): /usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in process_bundle(self, inputs, expected_outputs) 2332 2333 with UnboundedThreadPoolExecutor() as executor: -> 2334 for result, split_result in executor.map(execute, part_inputs): 2335 2336 split_result_list += split_result /usr/lib/python3.6/concurrent/futures/_base.py in result_iterator() 584 # Careful not to keep a reference to the popped future 585 if timeout is None: --> 586 yield fs.pop().result() 587 else: 588 yield fs.pop().result(end_time - time.monotonic()) /usr/lib/python3.6/concurrent/futures/_base.py in result(self, timeout) 430 raise CancelledError() 431 elif self._state == FINISHED: --> 432 return self.__get_result() 433 else: 434 raise TimeoutError() /usr/lib/python3.6/concurrent/futures/_base.py in __get_result(self) 382 def __get_result(self): 383 if self._exception: --> 384 raise self._exception 385 else: 386 return self._result /usr/local/lib/python3.6/dist-packages/apache_beam/utils/thread_pool_executor.py in run(self) 42 # If the future wasn't cancelled, then attempt to execute it. 43 try: ---> 44 self._future.set_result(self._fn(*self._fn_args, **self._fn_kwargs)) 45 except BaseException as exc: 46 # Even though Python 2 futures library has #set_exection(), /usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in execute(part_map) 2329 self._registered, 2330 cache_token_generator=self._cache_token_generator) -> 2331 return bundle_manager.process_bundle(part_map, expected_outputs) 2332 2333 with UnboundedThreadPoolExecutor() as executor: /usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in process_bundle(self, inputs, expected_outputs) 2243 process_bundle_descriptor_id=self._bundle_descriptor.id, 2244 cache_tokens=[next(self._cache_token_generator)])) -> 2245 result_future = self._worker_handler.control_conn.push(process_bundle_req) 2246 2247 split_results = [] # type: List[beam_fn_api_pb2.ProcessBundleSplitResponse] /usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in push(self, request) 1557 self._uid_counter += 1 1558 request.instruction_id = 'control_%s' % self._uid_counter -> 1559 response = self.worker.do_instruction(request) 1560 return ControlFuture(request.instruction_id, response) 1561 /usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/sdk_worker.py in do_instruction(self, request) 413 # E.g. 
if register is set, this will call self.register(request.register)) 414 return getattr(self, request_type)( --> 415 getattr(request, request_type), request.instruction_id) 416 else: 417 raise NotImplementedError /usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/sdk_worker.py in process_bundle(self, request, instruction_id) 448 with self.maybe_profile(instruction_id): 449 delayed_applications, requests_finalization = ( --> 450 bundle_processor.process_bundle(instruction_id)) 451 monitoring_infos = bundle_processor.monitoring_infos() 452 monitoring_infos.extend(self.state_cache_metrics_fn()) /usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/bundle_processor.py in process_bundle(self, instruction_id) 837 for data in data_channel.input_elements(instruction_id, 838 expected_transforms): --> 839 input_op_by_transform_id[data.transform_id].process_encoded(data.data) 840 841 # Finish all operations. /usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/bundle_processor.py in process_encoded(self, encoded_windowed_values) 214 decoded_value = self.windowed_coder_impl.decode_from_stream( 215 input_stream, True) --> 216 self.output(decoded_value) 217 218 def try_split(self, fraction_of_remainder, total_buffer_size): /usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.Operation.output() /usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.Operation.output() /usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.SingletonConsumerSet.receive() /usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.DoOperation.process() /usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.DoOperation.process() /usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnRunner.process() /usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnRunner._reraise_augmented() /usr/local/lib/python3.6/dist-packages/future/utils/__init__.py in raise_with_traceback(exc, traceback) 417 if traceback == Ellipsis: 418 _, _, traceback = sys.exc_info() --> 419 raise exc.with_traceback(traceback) 420 421 else: /usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnRunner.process() /usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.PerWindowInvoker.invoke_process() /usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window() /usr/local/lib/python3.6/dist-packages/apache_beam/io/iobase.py in process(self, element, init_result) 1080 for e in bundle[1]: # values 1081 writer.write(e) -> 1082 return [window.TimestampedValue(writer.close(), timestamp.MAX_TIMESTAMP)] 1083 1084 /usr/local/lib/python3.6/dist-packages/apache_beam/io/filebasedsink.py in close(self) 421 422 def close(self): --> 423 
self.sink.close(self.temp_handle) 424 return self.temp_shard_path /usr/local/lib/python3.6/dist-packages/apache_beam/io/parquetio.py in close(self, writer) 536 def close(self, writer): 537 if len(self._buffer[0]) > 0: --> 538 self._flush_buffer() 539 if self._record_batches_byte_size > 0: 540 self._write_batches(writer) /usr/local/lib/python3.6/dist-packages/apache_beam/io/parquetio.py in _flush_buffer(self) 568 for x in arrays: 569 for b in x.buffers(): --> 570 size = size + b.size 571 self._record_batches_byte_size = self._record_batches_byte_size + size AttributeError: 'NoneType' object has no attribute 'size' [while running 'train/Save to parquet/Write/WriteImpl/WriteBundles'] ```
https://github.com/huggingface/datasets/issues/187
[ "I have seen that somebody is hard working on easierly loadable wikipedia. #129 \r\nMaybe I should wait a few days for that version ?", "Yes we (well @lhoestq) are very actively working on this." ]
null
187
false
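For reference, a hedged sketch of what the warning quoted in #187 is asking for; the exact keyword plumbing may differ between versions, and a full Wikipedia dump is likely too large for the local DirectRunner anyway.
```python
import nlp

# Hand an Apache Beam runner to the builder, as the warning message suggests.
download_config = nlp.DownloadConfig(beam_runner="DirectRunner")
wiki = nlp.load_dataset("wikipedia", "20200501.aa", download_config=download_config)
```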
Weird-ish: Not creating unique caches for different phases
Sample code:
```python
import nlp
dataset = nlp.load_dataset('boolq')

def func1(x):
    return x

def func2(x):
    return None

train_output = dataset["train"].map(func1)
valid_output = dataset["validation"].map(func1)
print()
print(len(train_output), len(valid_output))
# Output: 9427 9427
```
The map method in both cases seems to be pointing to the same cache, so the latter call based on the validation data will return the processed train data cache. What's weird is that the following doesn't seem to be an issue:
```python
train_output = dataset["train"].map(func2)
valid_output = dataset["validation"].map(func2)
print()
print(len(train_output), len(valid_output))
# 9427 3270
```
https://github.com/huggingface/datasets/issues/186
[ "Looks like a duplicate of #120.\r\nThis is already fixed on master. We'll do a new release on pypi soon", "Good catch, it looks fixed.\r\n" ]
null
186
false
[Commands] In-detail instructions to create dummy data folder
### Dummy data command
This PR adds a new command `python nlp-cli dummy_data <path_to_dataset_folder>` that gives detailed instructions on how to add the dummy data files. It would be great if you could try it out by moving the current dummy_data folder of any dataset in `./datasets` with `mv datasets/<dataset_script>/dummy_data datasets/<dataset_name>/dummy_data_copy` and running the command `python nlp-cli dummy_data ./datasets/<dataset_name>` to see if you like the instructions.

### CONTRIBUTING.md
Also, the CONTRIBUTING.md is made cleaner, including a new section on "How to add a dataset".

### Current PRs
It would be nice to check whether this command helps current PRs, *e.g.* #169, to add a dataset. I will comment on those PRs.
https://github.com/huggingface/datasets/pull/185
[ "awesome !" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/185", "html_url": "https://github.com/huggingface/datasets/pull/185", "diff_url": "https://github.com/huggingface/datasets/pull/185.diff", "patch_url": "https://github.com/huggingface/datasets/pull/185.patch", "merged_at": "2020-05-22T14:06:34" }
185
true
Use IndexError instead of ValueError when index out of range
**`default __iter__ needs IndexError`**. When I wanted to create a wrapper of an arrow dataset to adapt it to fastai, I didn't know how to initialize it, so I used object composition instead of inheritance. I wrote something like this:
```
class HF_dataset():
    def __init__(self, arrow_dataset):
        self.dset = arrow_dataset
    def __getitem__(self, i):
        return self.my_get_item(self.dset)
```
But `for sample in my_dataset:` gave me `ValueError(f"Index ({key}) outside of table length ({self._data.num_rows}).")`. This is because the default `__iter__` only stops when it catches `IndexError`. You can also see my [work](https://github.com/richardyy1188/Pretrain-MLM-and-finetune-on-GLUE-with-fastai/blob/master/GLUE_with_fastai.ipynb) that uses fastai2 to show/load batches from huggingface/nlp GLUE datasets. So I hope we can raise `IndexError` instead, so that other people who want to wrap a dataset for any purpose won't be caught by this caveat. BTW, I super appreciate your work, both transformers and nlp save my life. πŸ’–πŸ’–πŸ’–πŸ’–πŸ’–πŸ’–πŸ’–
https://github.com/huggingface/datasets/pull/184
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/184", "html_url": "https://github.com/huggingface/datasets/pull/184", "diff_url": "https://github.com/huggingface/datasets/pull/184.diff", "patch_url": "https://github.com/huggingface/datasets/pull/184.patch", "merged_at": "2020-05-28T08:31:18" }
184
true
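The caveat described in #184 comes from Python's legacy iteration protocol: when a class defines `__getitem__` but not `__iter__`, a `for` loop calls `__getitem__` with 0, 1, 2, ... and stops cleanly only on `IndexError`. A toy sketch (not the library's code):
```python
class Wrapper:
    def __init__(self, items):
        self._items = items

    def __getitem__(self, i):
        # Raising IndexError (not ValueError) lets the default iteration stop cleanly.
        if i >= len(self._items):
            raise IndexError(f"Index ({i}) outside of table length ({len(self._items)}).")
        return self._items[i]

for sample in Wrapper(["a", "b", "c"]):
    print(sample)  # prints a, b, c and then stops; a ValueError here would propagate
```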
[Bug] labels of glue/ax are all -1
``` ax = nlp.load_dataset('glue', 'ax') for i in range(30): print(ax['test'][i]['label'], end=', ') ``` ``` -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, ```
https://github.com/huggingface/datasets/issues/183
[ "This is the test set given by the Glue benchmark. The labels are not provided, and therefore set to -1.", "Ah, yeah. Why it didn’t occur to me. πŸ˜‚\nThank you for your comment." ]
null
183
false
Update newsroom.py
Updated the URL for Newsroom download so it's more robust to future changes.
https://github.com/huggingface/datasets/pull/182
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/182", "html_url": "https://github.com/huggingface/datasets/pull/182", "diff_url": "https://github.com/huggingface/datasets/pull/182.diff", "patch_url": "https://github.com/huggingface/datasets/pull/182.patch", "merged_at": "2020-05-22T16:38:23" }
182
true
Cannot upload my own dataset
I looked into `nlp-cli` and `user.py` to learn how to upload my own data. It is supposed to work like this:
- Register to get a username and password at huggingface.co
- `nlp-cli login` and type the username and password
- I have a single file to upload at `./ttc/ttc_freq_extra.csv`
- `nlp-cli upload ttc/ttc_freq_extra.csv`

But I got this error.
```
2020-05-21 16:33:52.722464: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
About to upload file /content/ttc/ttc_freq_extra.csv to S3 under filename ttc/ttc_freq_extra.csv and namespace korakot
Proceed? [Y/n] y
Uploading... This might take a while if files are large
Traceback (most recent call last):
  File "/usr/local/bin/nlp-cli", line 33, in <module>
    service.run()
  File "/usr/local/lib/python3.6/dist-packages/nlp/commands/user.py", line 234, in run
    token=token, filename=filename, filepath=filepath, organization=self.args.organization
  File "/usr/local/lib/python3.6/dist-packages/nlp/hf_api.py", line 141, in presign_and_upload
    urls = self.presign(token, filename=filename, organization=organization)
  File "/usr/local/lib/python3.6/dist-packages/nlp/hf_api.py", line 132, in presign
    return PresignedUrl(**d)
TypeError: __init__() got an unexpected keyword argument 'cdn'
```
https://github.com/huggingface/datasets/issues/181
[ "It's my misunderstanding. I cannot just upload a csv. I need to write a dataset loading script too.", "I now try with the sample `datasets/csv` folder. \r\n\r\n nlp-cli upload csv\r\n\r\nThe error is still the same\r\n\r\n```\r\n2020-05-21 17:20:56.394659: I tensorflow/stream_executor/platform/default/dso_loa...
null
181
false
Add hall of fame
powered by https://github.com/sourcerer-io/hall-of-fame
https://github.com/huggingface/datasets/pull/180
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/180", "html_url": "https://github.com/huggingface/datasets/pull/180", "diff_url": "https://github.com/huggingface/datasets/pull/180.diff", "patch_url": "https://github.com/huggingface/datasets/pull/180.patch", "merged_at": "2020-05-22T16:35:14" }
180
true
[Feature request] separate split name and split instructions
Currently, the name of an nlp.NamedSplit is parsed in arrow_reader.py and used as the instruction. This makes it impossible to have several training sets, which can occur when:
- A dataset corresponds to a collection of sub-datasets
- A dataset was built in stages, adding new examples at each stage

Would it be possible to have two separate fields in the Split class: a name/instruction and a unique ID that is used as the key in the builder's split_dict?
https://github.com/huggingface/datasets/issues/179
[ "If your dataset is a collection of sub-datasets, you should probably consider having one config per sub-dataset. For example for Glue, we have sst2, mnli etc.\r\nIf you want to have multiple train sets (for example one per stage). The easiest solution would be to name them `nlp.Split(\"train_stage1\")`, `nlp.Split...
null
179
false
[Manual data] improve error message for manual data in general
`nlp.load("xsum")` now leads to the following error message: ![Screenshot from 2020-05-20 20-05-28](https://user-images.githubusercontent.com/23423619/82481825-3587ea00-9ad6-11ea-9ca2-5794252c6ac7.png) I guess the manual download instructions for `xsum` can also be improved.
https://github.com/huggingface/datasets/pull/178
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/178", "html_url": "https://github.com/huggingface/datasets/pull/178", "diff_url": "https://github.com/huggingface/datasets/pull/178.diff", "patch_url": "https://github.com/huggingface/datasets/pull/178.patch", "merged_at": "2020-05-20T18:18:50" }
178
true
Xsum manual download instruction
https://github.com/huggingface/datasets/pull/177
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/177", "html_url": "https://github.com/huggingface/datasets/pull/177", "diff_url": "https://github.com/huggingface/datasets/pull/177.diff", "patch_url": "https://github.com/huggingface/datasets/pull/177.patch", "merged_at": "2020-05-20T18:16:49" }
177
true
[Tests] Refactor MockDownloadManager
Clean mock download manager class. The print function was not of much help I think. We should think about adding a command that creates the dummy folder structure for the user.
https://github.com/huggingface/datasets/pull/176
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/176", "html_url": "https://github.com/huggingface/datasets/pull/176", "diff_url": "https://github.com/huggingface/datasets/pull/176.diff", "patch_url": "https://github.com/huggingface/datasets/pull/176.patch", "merged_at": "2020-05-20T18:17:18" }
176
true
[Manual data dir] Error message: nlp.load_dataset('xsum') -> TypeError
v 0.1.0 from pip ```python import nlp xsum = nlp.load_dataset('xsum') ``` Issue is `dl_manager.manual_dir`is `None` ```python --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-42-8a32f066f3bd> in <module> ----> 1 xsum = nlp.load_dataset('xsum') ~/miniconda3/envs/nb/lib/python3.7/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 515 download_mode=download_mode, 516 ignore_verifications=ignore_verifications, --> 517 save_infos=save_infos, 518 ) 519 ~/miniconda3/envs/nb/lib/python3.7/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, dl_manager, **download_and_prepare_kwargs) 361 verify_infos = not save_infos and not ignore_verifications 362 self._download_and_prepare( --> 363 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 364 ) 365 # Sync info ~/miniconda3/envs/nb/lib/python3.7/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 397 split_dict = SplitDict(dataset_name=self.name) 398 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 399 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 400 # Checksums verification 401 if verify_infos: ~/miniconda3/envs/nb/lib/python3.7/site-packages/nlp/datasets/xsum/5c5fca23aaaa469b7a1c6f095cf12f90d7ab99bcc0d86f689a74fd62634a1472/xsum.py in _split_generators(self, dl_manager) 102 with open(dl_path, "r") as json_file: 103 split_ids = json.load(json_file) --> 104 downloaded_path = os.path.join(dl_manager.manual_dir, "xsum-extracts-from-downloads") 105 return [ 106 nlp.SplitGenerator( ~/miniconda3/envs/nb/lib/python3.7/posixpath.py in join(a, *p) 78 will be discarded. An empty last part will result in a path that 79 ends with a separator.""" ---> 80 a = os.fspath(a) 81 sep = _get_sep(a) 82 path = a TypeError: expected str, bytes or os.PathLike object, not NoneType ```
https://github.com/huggingface/datasets/issues/175
[]
null
175
false
nlp.load_dataset('xsum') -> TypeError
https://github.com/huggingface/datasets/issues/174
[]
null
174
false
Rm extracted test dirs
All the dummy data used for tests were duplicated. For each dataset, we had one zip file but also its extracted directory. I removed all these directories. Furthermore, instead of extracting next to the dummy_data.zip file, we now extract into the temp `cached_dir` used for tests, so that all the extracted directories get removed after testing. Finally, there was a bug in the `mock_download_manager` that would let it create directories with invalid names, as in #172. I fixed that by encoding URL arguments. I had to rename the dummy data for `scientific_papers` and `cnn_dailymail` (the AWS tests don't pass for those two in this PR, but they will once AWS is synced, as the local ones do). Let me know if this sounds good to you @patrickvonplaten. I'm still not entirely familiar with the mock downloader.
https://github.com/huggingface/datasets/pull/173
[ "Thanks for cleaning up the extracted dummy data folders! Instead of changing the file_utils we could also just put these folders under `.gitignore` (or maybe already done?).", "Awesome! I guess you might have to add the changes for the MockDLManager now in a different file though because of my last PR - sorry!" ...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/173", "html_url": "https://github.com/huggingface/datasets/pull/173", "diff_url": "https://github.com/huggingface/datasets/pull/173.diff", "patch_url": "https://github.com/huggingface/datasets/pull/173.patch", "merged_at": "2020-05-22T16:34:35" }
173
true
Clone not working on Windows environment
Cloning in a Windows environment is not working because of the use of the special character '?' in a folder name. Please consider changing the folder name.

Reference to the folder: nlp/datasets/cnn_dailymail/dummy/3.0.0/3.0.0/dummy_data-zip-extracted/dummy_data/uc?export=download&id=0BwmD_VLjROrfM1BxdkxVaTY2bWs/dailymail/stories/

Error log:
fatal: cannot create directory at 'datasets/cnn_dailymail/dummy/3.0.0/3.0.0/dummy_data-zip-extracted/dummy_data/uc?export=download&id=0BwmD_VLjROrfM1BxdkxVaTY2bWs': Invalid argument
https://github.com/huggingface/datasets/issues/172
[ "Should be fixed on master now :)", "Thanks @lhoestq πŸ‘ Now I can uninstall WSL and get back to work with windows.πŸ™‚" ]
null
172
false
fix squad metric format
The format of the squad metric was wrong. This should fix #143. I tested with:
```python3
predictions = [
    {'id': '56be4db0acb8001400a502ec', 'prediction_text': 'Denver Broncos'}
]
references = [
    {'answers': [{'text': 'Denver Broncos'}], 'id': '56be4db0acb8001400a502ec'}
]
```
https://github.com/huggingface/datasets/pull/171
[ "One thing for SQuAD is that I wanted to be able to use the SQuAD dataset directly in the metrics and I'm not sure it will be possible with this format.\r\n\r\n(maybe it's not really possible in general though)", "This is kinda related to one thing I had in mind which is that we may want to be able to dump our mo...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/171", "html_url": "https://github.com/huggingface/datasets/pull/171", "diff_url": "https://github.com/huggingface/datasets/pull/171.diff", "patch_url": "https://github.com/huggingface/datasets/pull/171.patch", "merged_at": "2020-05-22T13:36:48" }
171
true
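For context, a hedged sketch of feeding the format from #171 to the metric (the `load_metric`/`compute` call signature at the time is an assumption):
```python
import nlp

squad_metric = nlp.load_metric("squad")

predictions = [
    {"id": "56be4db0acb8001400a502ec", "prediction_text": "Denver Broncos"}
]
references = [
    {"answers": [{"text": "Denver Broncos"}], "id": "56be4db0acb8001400a502ec"}
]

score = squad_metric.compute(predictions, references)
print(score)  # expected to report exact_match / f1
```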
Rename anli dataset
What we have now as the `anli` dataset is actually the Ξ±NLI dataset from the ART challenge. This name is confusing because `anli` is also the name of adversarial NLI (see [https://github.com/facebookresearch/anli](https://github.com/facebookresearch/anli)). I renamed the current `anli` dataset to `art`.
https://github.com/huggingface/datasets/pull/170
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/170", "html_url": "https://github.com/huggingface/datasets/pull/170", "diff_url": "https://github.com/huggingface/datasets/pull/170.diff", "patch_url": "https://github.com/huggingface/datasets/pull/170.patch", "merged_at": "2020-05-20T12:23:07" }
170
true
Adding Qanta (Quizbowl) Dataset
This PR adds the qanta question answering datasets from [Quizbowl: The Case for Incremental Question Answering](https://arxiv.org/abs/1904.04792) and [Trick Me If You Can: Human-in-the-loop Generation of Adversarial Question Answering Examples](https://www.aclweb.org/anthology/Q19-1029/) (adversarial fold) This partially continues a discussion around fixing dummy data from https://github.com/huggingface/nlp/issues/161 I ran the following code to double check that it works and did some sanity checks on the output. The majority of the code itself is from our `allennlp` version of the dataset reader. ```python import nlp # Default is full question data = nlp.load_dataset('./datasets/qanta') # Four configs # Primarily useful for training data = nlp.load_dataset('./datasets/qanta', 'mode=sentences,char_skip=25') # Primarily used in evaluation data = nlp.load_dataset('./datasets/qanta', 'mode=first,char_skip=25') data = nlp.load_dataset('./datasets/qanta', 'mode=full,char_skip=25') # Primarily useful in evaluation and "live" play data = nlp.load_dataset('./datasets/qanta', 'mode=runs,char_skip=25') ```
https://github.com/huggingface/datasets/pull/169
[ "Hi @EntilZha - sorry for waiting so long until taking action here. We created a new command and a new recipe of how to add dummy_data. Can you maybe rebase to `master` as explained in 7. of https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-contribute-to-nlp and check that your dummy data is cor...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/169", "html_url": "https://github.com/huggingface/datasets/pull/169", "diff_url": "https://github.com/huggingface/datasets/pull/169.diff", "patch_url": "https://github.com/huggingface/datasets/pull/169.patch", "merged_at": null }
169
true
Loading 'wikitext' dataset fails
Loading the 'wikitext' dataset fails with Attribute error: Code to reproduce (From example notebook): import nlp wikitext_dataset = nlp.load_dataset('wikitext') Error: --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-17-d5d9df94b13c> in <module>() 11 12 # Load a dataset and print the first examples in the training set ---> 13 wikitext_dataset = nlp.load_dataset('wikitext') 14 print(wikitext_dataset['train'][0]) 6 frames /usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 518 download_mode=download_mode, 519 ignore_verifications=ignore_verifications, --> 520 save_infos=save_infos, 521 ) 522 /usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, dl_manager, **download_and_prepare_kwargs) 363 verify_infos = not save_infos and not ignore_verifications 364 self._download_and_prepare( --> 365 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 366 ) 367 # Sync info /usr/local/lib/python3.6/dist-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 416 try: 417 # Prepare split will record examples associated to the split --> 418 self._prepare_split(split_generator, **prepare_split_kwargs) 419 except OSError: 420 raise OSError("Cannot find data file. " + (self.MANUAL_DOWNLOAD_INSTRUCTIONS or "")) /usr/local/lib/python3.6/dist-packages/nlp/builder.py in _prepare_split(self, split_generator) 594 example = self.info.features.encode_example(record) 595 writer.write(example) --> 596 num_examples, num_bytes = writer.finalize() 597 598 assert num_examples == num_examples, f"Expected to write {split_info.num_examples} but wrote {num_examples}" /usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in finalize(self, close_stream) 173 def finalize(self, close_stream=True): 174 if self.pa_writer is not None: --> 175 self.write_on_file() 176 self.pa_writer.close() 177 if close_stream: /usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in write_on_file(self) 124 else: 125 # All good --> 126 self._write_array_on_file(pa_array) 127 self.current_rows = [] 128 /usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in _write_array_on_file(self, pa_array) 93 def _write_array_on_file(self, pa_array): 94 """Write a PyArrow Array""" ---> 95 pa_batch = pa.RecordBatch.from_struct_array(pa_array) 96 self._num_bytes += pa_array.nbytes 97 self.pa_writer.write_batch(pa_batch) AttributeError: type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array'
https://github.com/huggingface/datasets/issues/168
[ "Hi, make sure you have a recent version of pyarrow.\r\n\r\nAre you using it in Google Colab? In this case, this error is probably the same as #128", "Thanks!\r\n\r\nYes I'm using Google Colab, it seems like a duplicate then.", "Closing as it is a duplicate", "Hi,\r\nThe squad bug seems to be fixed, but the l...
null
168
false
[Tests] refactor tests
This PR separates AWS and Local tests to remove these ugly statements in the script: ```python if "/" not in dataset_name: logging.info("Skip {} because it is a canonical dataset") return ``` To run a `aws` test, one should now run the following command: ```python pytest -s tests/test_dataset_common.py::AWSDatasetTest::test_builder_class_wmt14 ``` The same `local` test, can be run with: ```python pytest -s tests/test_dataset_common.py::LocalDatasetTest::test_builder_class_wmt14 ```
https://github.com/huggingface/datasets/pull/167
[ "Nice !" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/167", "html_url": "https://github.com/huggingface/datasets/pull/167", "diff_url": "https://github.com/huggingface/datasets/pull/167.diff", "patch_url": "https://github.com/huggingface/datasets/pull/167.patch", "merged_at": "2020-05-19T16:17:10" }
167
true
Add a method to shuffle a dataset
Could maybe be a `dataset.shuffle(generator=None, seed=None)` signature method. Also, we could maybe have a clear indication of which methods modify a dataset in-place and which methods return/cache a modified dataset. I kinda like the torch convention of having an underscore suffix for all the methods which modify a dataset in-place. What do you think?
https://github.com/huggingface/datasets/issues/166
[ "+1 for the naming convention\r\n\r\nAbout the `shuffle` method, from my understanding it should be done in `Dataloader` (better separation between dataset processing - usage)", "+1 for shuffle in `Dataloader`. \r\nSome `Dataloader` just store idxs of dataset and just shuffle those idxs, which might(?) be faster ...
null
166
false
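Until such a method exists, a hedged workaround sketch for #166 (assuming plain integer indexing into the Arrow-backed dataset; this is not the proposed API) is to permute indices outside the library:
```python
import numpy as np

rng = np.random.default_rng(seed=42)
permutation = rng.permutation(len(dataset))

for i in permutation:
    example = dataset[int(i)]  # rows are fetched lazily from the Arrow table
    # ... feed `example` to the training loop
```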
ANLI
Can I recommend the following: For ANLI, use https://github.com/facebookresearch/anli. As that paper says, "Our dataset is not to be confused with abductive NLI (Bhagavatula et al., 2019), which calls itself Ξ±NLI, or ART.". Indeed, the paper cited under what is currently called anli says in the abstract "We introduce a challenge dataset, ART". The current naming will confuse people :)
https://github.com/huggingface/datasets/issues/165
[]
null
165
false
Add Spanish POS and NER Datasets
Hi guys, in order to improve multilingual support, a small step could be adding the standard datasets used for Spanish NER and POS tasks. I can provide them in raw and preprocessed formats.
https://github.com/huggingface/datasets/issues/164
[ "Hello @mrm8488, are these datasets official datasets published in an NLP/CL/ML venue?", "What about this one: https://github.com/ccasimiro88/TranslateAlignRetrieve?" ]
null
164
false
[Feature request] Add cos-e v1.0
I noticed the second release of cos-e (v1.11) is included in this repo. I wanted to request inclusion of v1.0, since this is the version on which results are reported in [the paper](https://www.aclweb.org/anthology/P19-1487/), and v1.11 has noted [annotation](https://github.com/salesforce/cos-e/issues/2) [issues](https://arxiv.org/pdf/2004.14546.pdf).
https://github.com/huggingface/datasets/issues/163
[ "Sounds good, @mariamabarham do you want to give a look?\r\nI think we should have two configurations so we can allow either version of the dataset to be loaded with the `1.0` version being the default maybe.\r\n\r\nCc some authors of the great cos-e: @nazneenrajani @bmccann", "cos_e v1.0 is related to CQA v1.0 b...
null
163
false
fix prev files hash in map
Fix the `.map` issue in #160. This makes sure the previous files are taken into account when computing the hash.
https://github.com/huggingface/datasets/pull/162
[ "Awesome! ", "Hi, yes, this seems to fix #160 -- I cloned the branch locally and verified", "Perfect then :)" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/162", "html_url": "https://github.com/huggingface/datasets/pull/162", "diff_url": "https://github.com/huggingface/datasets/pull/162.diff", "patch_url": "https://github.com/huggingface/datasets/pull/162.patch", "merged_at": "2020-05-18T21:36:20" }
162
true
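A hedged sketch of the idea behind the fix in #162 (the function name and fingerprint recipe are assumptions, not the merged code): the cache file name produced by `.map()` should depend on the input split's own files as well as the mapped function, so train/validation/test never share a cache.
```python
import hashlib

def map_cache_file_name(previous_files, function_bytes):
    # Mix the split's existing arrow/cache file paths into the hash together
    # with the serialized mapped function, so two splits get distinct caches.
    h = hashlib.md5()
    for path in previous_files:
        h.update(path.encode("utf-8"))
    h.update(function_bytes)
    return f"cache-{h.hexdigest()}.arrow"
```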
Discussion on version identifier & MockDataLoaderManager for test data
Hi, I'm working on adding a dataset and ran into an error due to `download` not being defined on `MockDataLoaderManager`, even though it is defined in `nlp/utils/download_manager.py`. The README step that runs `RUN_SLOW=1 pytest tests/test_dataset_common.py::DatasetTest::test_load_real_dataset_localmydatasetname` triggers the error. If I can get something to work, I can include it in my data PR once I'm done.
https://github.com/huggingface/datasets/issues/161
[ "usually you can replace `download` in your dataset script with `download_and_prepare()` - could you share the code for your dataset here? :-) ", "I have an initial version here: https://github.com/EntilZha/nlp/tree/master/datasets/qanta Thats pretty close to what I'll do as a PR, but still want to do some more s...
null
161
false
caching in map causes same result to be returned for train, validation and test
hello, I am working on a program that uses the `nlp` library with the `SST2` dataset. The rough outline of the program is: ``` import nlp as nlp_datasets ... parser.add_argument('--dataset', help='HuggingFace Datasets id', default=['glue', 'sst2'], nargs='+') ... dataset = nlp_datasets.load_dataset(*args.dataset) ... # Create feature vocabs vocabs = create_vocabs(dataset.values(), vectorizers) ... # Create a function to vectorize based on vectorizers and vocabs: print('TS', train_set.num_rows) print('VS', valid_set.num_rows) print('ES', test_set.num_rows) # factory method to create a `convert_to_features` function based on vocabs convert_to_features = create_featurizer(vectorizers, vocabs) train_set = train_set.map(convert_to_features, batched=True) train_set.set_format(type='torch', columns=list(vectorizers.keys()) + ['y', 'lengths']) train_loader = torch.utils.data.DataLoader(train_set, batch_size=args.batchsz) valid_set = valid_set.map(convert_to_features, batched=True) valid_set.set_format(type='torch', columns=list(vectorizers.keys()) + ['y', 'lengths']) valid_loader = torch.utils.data.DataLoader(valid_set, batch_size=args.batchsz) test_set = test_set.map(convert_to_features, batched=True) test_set.set_format(type='torch', columns=list(vectorizers.keys()) + ['y', 'lengths']) test_loader = torch.utils.data.DataLoader(test_set, batch_size=args.batchsz) print('TS', train_set.num_rows) print('VS', valid_set.num_rows) print('ES', test_set.num_rows) ``` Im not sure if Im using it incorrectly, but the results are not what I expect. Namely, the `.map()` seems to grab the datset from the cache and then loses track of what the specific dataset is, instead using my training data for all datasets: ``` TS 67349 VS 872 ES 1821 TS 67349 VS 67349 ES 67349 ``` The behavior changes if I turn off the caching but then the results fail: ``` train_set = train_set.map(convert_to_features, batched=True, load_from_cache_file=False) ... valid_set = valid_set.map(convert_to_features, batched=True, load_from_cache_file=False) ... test_set = test_set.map(convert_to_features, batched=True, load_from_cache_file=False) ``` Now I get the right set of features back... 
``` TS 67349 VS 872 ES 1821 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 68/68 [00:00<00:00, 92.78it/s] 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:00<00:00, 75.47it/s] 0%| | 0/2 [00:00<?, ?it/s]TS 67349 VS 872 ES 1821 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2/2 [00:00<00:00, 77.19it/s] ``` but I think its losing track of the original training set: ``` Traceback (most recent call last): File "/home/dpressel/dev/work/baseline/api-examples/layers-classify-hf-datasets.py", line 148, in <module> for x in train_loader: File "/home/dpressel/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 345, in __next__ data = self._next_data() File "/home/dpressel/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 385, in _next_data data = self._dataset_fetcher.fetch(index) # may raise StopIteration File "/home/dpressel/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch data = [self.dataset[idx] for idx in possibly_batched_index] File "/home/dpressel/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp> data = [self.dataset[idx] for idx in possibly_batched_index] File "/home/dpressel/anaconda3/lib/python3.7/site-packages/nlp/arrow_dataset.py", line 338, in __getitem__ output_all_columns=self._output_all_columns, File "/home/dpressel/anaconda3/lib/python3.7/site-packages/nlp/arrow_dataset.py", line 294, in _getitem outputs = self._unnest(self._data.slice(key, 1).to_pydict()) File "pyarrow/table.pxi", line 1211, in pyarrow.lib.Table.slice File "pyarrow/public-api.pxi", line 390, in pyarrow.lib.pyarrow_wrap_table File "pyarrow/error.pxi", line 85, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Column 3: In chunk 0: Invalid: Length spanned by list offsets (15859698) larger than values array (length 100000) Process finished with exit code 1 ``` The full-example program (minus the print stmts) is here: https://github.com/dpressel/mead-baseline/pull/620/files
https://github.com/huggingface/datasets/issues/160
[ "Hi @dpressel, \r\n\r\nthanks for posting your issue! Can you maybe add a complete code snippet that we can copy paste to reproduce the error? For example, I'm not sure where the variable `train_set` comes from in your code and it seems like you are loading multiple datasets at once? ", "Hi, the full example was...
null
160
false
How can we add more datasets to the nlp library?
https://github.com/huggingface/datasets/issues/159
[ "Found it. https://github.com/huggingface/nlp/tree/master/datasets" ]
null
159
false
add Toronto Books Corpus
This PR adds the Toronto Books Corpus. It only considers the TMX and plain text (Moses) files defined in the **Statistics and TMX/Moses Downloads** table [here](http://opus.nlpl.eu/Books.php).
https://github.com/huggingface/datasets/pull/158
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/158", "html_url": "https://github.com/huggingface/datasets/pull/158", "diff_url": "https://github.com/huggingface/datasets/pull/158.diff", "patch_url": "https://github.com/huggingface/datasets/pull/158.patch", "merged_at": null }
158
true
nlp.load_dataset() gives "TypeError: list_() takes exactly one argument (2 given)"
I'm trying to load datasets from nlp but there seems to be an error saying "TypeError: list_() takes exactly one argument (2 given)". A gist can be found here: https://gist.github.com/saahiluppal/c4b878f330b10b9ab9762bc0776c0a6a
https://github.com/huggingface/datasets/issues/157
[ "You can just run: \r\n`val = nlp.load_dataset('squad')` \r\n\r\nif you want to have just the validation script you can also do:\r\n\r\n`val = nlp.load_dataset('squad', split=\"validation\")`", "If you want to load a local dataset, make sure you include a `./` before the folder name. ", "This happens by just do...
null
157
false
SyntaxError with WMT datasets
The following snippet produces a syntax error: ``` import nlp dataset = nlp.load_dataset('wmt14') print(dataset['train'][0]) ``` ``` Traceback (most recent call last): File "/home/tom/.local/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3326, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-8-3206959998b9>", line 3, in <module> dataset = nlp.load_dataset('wmt14') File "/home/tom/.local/lib/python3.6/site-packages/nlp/load.py", line 505, in load_dataset builder_cls = import_main_class(module_path, dataset=True) File "/home/tom/.local/lib/python3.6/site-packages/nlp/load.py", line 56, in import_main_class module = importlib.import_module(module_path) File "/usr/lib/python3.6/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 994, in _gcd_import File "<frozen importlib._bootstrap>", line 971, in _find_and_load File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 665, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 678, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/tom/.local/lib/python3.6/site-packages/nlp/datasets/wmt14/c258d646f4f5870b0245f783b7aa0af85c7117e06aacf1e0340bd81935094de2/wmt14.py", line 21, in <module> from .wmt_utils import Wmt, WmtConfig File "/home/tom/.local/lib/python3.6/site-packages/nlp/datasets/wmt14/c258d646f4f5870b0245f783b7aa0af85c7117e06aacf1e0340bd81935094de2/wmt_utils.py", line 659 <<<<<<< HEAD ^ SyntaxError: invalid syntax ``` Python version: `3.6.9 (default, Apr 18 2020, 01:56:04) [GCC 8.4.0]` Running on Ubuntu 18.04, via a Jupyter notebook
https://github.com/huggingface/datasets/issues/156
[ "Jeez - don't know what happened there :D Should be fixed now! \r\n\r\nThanks a lot for reporting this @tomhosking !", "Hi @patrickvonplaten!\r\n\r\nI'm now getting the below error:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError ...
null
156
false
Include more links in README, fix typos
Include more links and fix typos in README
https://github.com/huggingface/datasets/pull/155
[ "I fixed a conflict :) thanks !" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/155", "html_url": "https://github.com/huggingface/datasets/pull/155", "diff_url": "https://github.com/huggingface/datasets/pull/155.diff", "patch_url": "https://github.com/huggingface/datasets/pull/155.patch", "merged_at": "2020-05-28T08:31:57" }
155
true
add Ubuntu Dialogs Corpus datasets
This PR adds the Ubuntu Dialog Corpus datasets version 2.0.
https://github.com/huggingface/datasets/pull/154
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/154", "html_url": "https://github.com/huggingface/datasets/pull/154", "diff_url": "https://github.com/huggingface/datasets/pull/154.diff", "patch_url": "https://github.com/huggingface/datasets/pull/154.patch", "merged_at": "2020-05-18T10:12:27" }
154
true
Meta-datasets (GLUE/XTREME/...) – Special care to attributions and citations
Meta-datasets are interesting in terms of standardized benchmarks but they also have specific behaviors, in particular in terms of attribution and authorship. It's very important that each specific dataset inside a meta dataset is properly referenced and the citation/specific homepage/etc are very visible and accessible and not only the generic citation of the meta-dataset itself. Let's take GLUE as an example: The configuration has the citation for each dataset included (e.g. [here](https://github.com/huggingface/nlp/blob/master/datasets/glue/glue.py#L154-L161)) but it should be copied inside the dataset info so that, when people access `dataset.info.citation` they get both the citation for GLUE and the citation for the specific datasets inside GLUE that they have loaded.
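In user-facing terms, the requested behavior would look roughly like this (a sketch of the desired outcome, not of current behavior):

```python
import nlp

mrpc = nlp.load_dataset('glue', 'mrpc')

# Desired: the citation should contain both the generic GLUE bibtex entry
# and the MRPC-specific entry, not only the former.
print(mrpc['train'].info.citation)
```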
https://github.com/huggingface/datasets/issues/153
[ "As @yoavgo suggested, there should be the possibility to call a function like nlp.bib that outputs all bibtex ref from the datasets and models actually used and eventually nlp.bib.forreadme that would output the same info + versions numbers so they can be included in a readme.md file.", "Actually, double checki...
null
153
false
Add GLUE config name check
Fixes #130 by adding a name check to the Glue class
https://github.com/huggingface/datasets/pull/152
[ "If tests are being added, any guidance on where to add tests would be helpful!\r\n\r\nTagging @thomwolf for review", "Looks good to me. Is this compatible with the way we are doing tests right now @patrickvonplaten ?", "If the tests pass it should be fine :-) \r\n\r\n@Bharat123rox could you check whether the t...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/152", "html_url": "https://github.com/huggingface/datasets/pull/152", "diff_url": "https://github.com/huggingface/datasets/pull/152.diff", "patch_url": "https://github.com/huggingface/datasets/pull/152.patch", "merged_at": null }
152
true
Fix JSON tests.
https://github.com/huggingface/datasets/pull/151
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/151", "html_url": "https://github.com/huggingface/datasets/pull/151", "diff_url": "https://github.com/huggingface/datasets/pull/151.diff", "patch_url": "https://github.com/huggingface/datasets/pull/151.patch", "merged_at": "2020-05-18T07:21:51" }
151
true
Add WNUT 17 NER dataset
Hi, this PR adds the WNUT 17 dataset to `nlp`. > Emerging and Rare entity recognition > This shared task focuses on identifying unusual, previously-unseen entities in the context of emerging discussions. Named entities form the basis of many modern approaches to other tasks (like event clustering and summarisation), but recall on them is a real problem in noisy text - even among annotators. This drop tends to be due to novel entities and surface forms. Take for example the tweet β€œso.. kktny in 30 mins?” - even human experts find entity kktny hard to detect and resolve. This task will evaluate the ability to detect and classify novel, emerging, singleton named entities in noisy text. > > The goal of this task is to provide a definition of emerging and of rare entities, and based on that, also datasets for detecting these entities. More information about the dataset can be found on the [shared task page](https://noisy-text.github.io/2017/emerging-rare-entities.html). The dataset is taken from their [GitHub repository](https://github.com/leondz/emerging_entities_17), because the data provided in this repository contains minor fixes in the dataset format. ## Usage The WNUT 17 dataset can then be used in `nlp` like this: ```python import nlp wnut_17 = nlp.load_dataset("./datasets/wnut_17/wnut_17.py") print(wnut_17) ``` This outputs: ```txt 'train': Dataset(schema: {'id': 'string', 'tokens': 'list<item: string>', 'labels': 'list<item: string>'}, num_rows: 3394) 'validation': Dataset(schema: {'id': 'string', 'tokens': 'list<item: string>', 'labels': 'list<item: string>'}, num_rows: 1009) 'test': Dataset(schema: {'id': 'string', 'tokens': 'list<item: string>', 'labels': 'list<item: string>'}, num_rows: 1287) ``` Numbers are identical to the ones in [this paper](https://www.ijcai.org/Proceedings/2019/0702.pdf) and are the same as when using the `dataset` reader in Flair. ## Features The following feature format is used to represent a sentence in the WNUT 17 dataset: | Feature | Example | Description | ---- | ---- | ----------------- | `id` | `0` | Number (id) of current sentence | `tokens` | `["AHFA", "extends", "deadline"]` | List of tokens (strings) for a sentence | `labels` | `["B-group", "O", "O"]` | List of labels (outer span) The following labels are used in WNUT 17: ```txt O B-corporation I-corporation B-location I-location B-product I-product B-person I-person B-group I-group B-creative-work I-creative-work ```
https://github.com/huggingface/datasets/pull/150
[ "The PR looks awesome! \r\nSince you have already added a dataset I imagine the tests as described in 5. of https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-add-a-dataset all pass, right @stefan-it ?\r\n\r\nI think we are then good to merge this :-) @lhoestq ", "Nice !\r\n\r\nOne thing though...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/150", "html_url": "https://github.com/huggingface/datasets/pull/150", "diff_url": "https://github.com/huggingface/datasets/pull/150.diff", "patch_url": "https://github.com/huggingface/datasets/pull/150.patch", "merged_at": "2020-05-26T20:37:59" }
150
true
[Feature request] Add Ubuntu Dialogue Corpus dataset
https://github.com/rkadlec/ubuntu-ranking-dataset-creator or http://dataset.cs.mcgill.ca/ubuntu-corpus-1.0/
https://github.com/huggingface/datasets/issues/149
[ "@AlphaMycelium the Ubuntu Dialogue Corpus [version 2]( https://github.com/rkadlec/ubuntu-ranking-dataset-creator) is added. Note that it requires a manual download by following the download instructions in the [repos]( https://github.com/rkadlec/ubuntu-ranking-dataset-creator).\r\nMaybe we can close this issue for...
null
149
false
_download_and_prepare() got an unexpected keyword argument 'verify_infos'
# Reproduce In Colab, ``` %pip install -q nlp %pip install -q apache_beam mwparserfromhell dataset = nlp.load_dataset('wikipedia') ``` get ``` Downloading and preparing dataset wikipedia/20200501.aa (download: Unknown size, generated: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/wikipedia/20200501.aa/1.0.0... --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-6-52471d2a0088> in <module>() ----> 1 dataset = nlp.load_dataset('wikipedia') 1 frames /usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 515 download_mode=download_mode, 516 ignore_verifications=ignore_verifications, --> 517 save_infos=save_infos, 518 ) 519 /usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, dl_manager, **download_and_prepare_kwargs) 361 verify_infos = not save_infos and not ignore_verifications 362 self._download_and_prepare( --> 363 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 364 ) 365 # Sync info TypeError: _download_and_prepare() got an unexpected keyword argument 'verify_infos' ```
https://github.com/huggingface/datasets/issues/148
[ "Same error for dataset 'wiki40b'", "Should be fixed on master :)" ]
null
148
false
Error with sklearn train_test_split
It would be nice if we could use sklearn `train_test_split` to quickly generate subsets from the dataset objects returned by `nlp.load_dataset`. At the moment the code: ```python data = nlp.load_dataset('imdb', cache_dir=data_cache) f_half, s_half = train_test_split(data['train'], test_size=0.5, random_state=seed) ``` throws: ``` ValueError: Can only get row(s) (int or slice) or columns (string). ``` It's not a big deal, since there are other ways to split the data, but it would be a cool thing to have.
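Until such a helper exists, a sketch of a workaround that relies on the split slicing syntax instead of sklearn (the percentages are just an example):

```python
import nlp

# Two disjoint halves of the training split, built at load time.
first_half = nlp.load_dataset('imdb', split='train[:50%]')
second_half = nlp.load_dataset('imdb', split='train[50%:]')
```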
https://github.com/huggingface/datasets/issues/147
[ "Indeed. Probably we will want to have a similar method directly in the library", "Related: #166 " ]
null
147
false
Add BERTScore to metrics
This PR adds [BERTScore](https://arxiv.org/abs/1904.09675) to metrics. Here is an example of how to use it. ```sh import nlp bertscore = nlp.load_metric('metrics/bertscore') # or simply nlp.load_metric('bertscore') after this is added to huggingface's s3 bucket predictions = ['example', 'fruit'] references = [['this is an example.', 'this is one example.'], ['apple']] results = bertscore.compute(predictions, references, lang='en') print(results) ```
https://github.com/huggingface/datasets/pull/146
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/146", "html_url": "https://github.com/huggingface/datasets/pull/146", "diff_url": "https://github.com/huggingface/datasets/pull/146.diff", "patch_url": "https://github.com/huggingface/datasets/pull/146.patch", "merged_at": "2020-05-17T22:22:09" }
146
true
[AWS Tests] Follow-up PR from #144
I forgot to add this line in PR #145 .
https://github.com/huggingface/datasets/pull/145
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/145", "html_url": "https://github.com/huggingface/datasets/pull/145", "diff_url": "https://github.com/huggingface/datasets/pull/145.diff", "patch_url": "https://github.com/huggingface/datasets/pull/145.patch", "merged_at": "2020-05-16T13:54:22" }
145
true
[AWS tests] AWS test should not run for canonical datasets
AWS tests should in general not run for canonical datasets. Only local tests will run in this case. This way a PR is able to pass when adding a new dataset. This PR changes the logic to the following: 1) - All datasets that are present in `nlp/datasets` are tested only locally. This way when one adds a canonical dataset, the PR includes their dataset in the tests. 2) - All datasets that are only present on AWS, such as `webis/tl_dr` at the moment, are tested only on AWS. I think the testing structure might need a bigger refactoring and better documentation very soon. Merging for now to unblock new PRs @thomwolf @mariamabarham.
https://github.com/huggingface/datasets/pull/144
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/144", "html_url": "https://github.com/huggingface/datasets/pull/144", "diff_url": "https://github.com/huggingface/datasets/pull/144.diff", "patch_url": "https://github.com/huggingface/datasets/pull/144.patch", "merged_at": "2020-05-16T13:44:33" }
144
true
ArrowTypeError in squad metrics
`squad_metric.compute` is giving following error ``` ArrowTypeError: Could not convert [{'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}] with type list: was not a dict, tuple, or recognized null value for conversion to struct type ``` This is how my predictions and references look like ``` predictions[0] # {'id': '56be4db0acb8001400a502ec', 'prediction_text': 'Denver Broncos'} ``` ``` references[0] # {'answers': [{'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}], 'id': '56be4db0acb8001400a502ec'} ``` These are structured as per the `squad_metric.compute` help string.
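For reference, a sketch of the structure that ends up working, mirroring the `answers` feature of the squad dataset (the `answer_start` value here is made up for illustration):

```python
predictions = [
    {'id': '56be4db0acb8001400a502ec', 'prediction_text': 'Denver Broncos'},
]
references = [
    {
        'id': '56be4db0acb8001400a502ec',
        # answers as a dict of lists rather than a list of dicts
        'answers': {'text': ['Denver Broncos'], 'answer_start': [177]},
    },
]
squad_metric.compute(predictions, references)
```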
https://github.com/huggingface/datasets/issues/143
[ "There was an issue in the format, thanks.\r\nNow you can do\r\n```python3\r\nsquad_dset = nlp.load_dataset(\"squad\")\r\nsquad_metric = nlp.load_metric(\"/Users/quentinlhoest/Desktop/hf/nlp-bis/metrics/squad\")\r\npredictions = [\r\n {\"id\": v[\"id\"], \"prediction_text\": v[\"answers\"][\"text\"][0]} # take ...
null
143
false
[WMT] Add all wmt
This PR adds all WMT dataset scripts. At the moment the script is **not** functional for the language pairs "cs-en", "ru-en" and "hi-en" because apparently it takes up to a week to get the manual data for these datasets: see http://ufal.mff.cuni.cz/czeng. The datasets are fully functional though for the "big" language pairs "de-en" and "fr-en". Overall I think the scripts are very messy and might need a big refactoring at some point. For now I think they are good to merge (most dataset configs can be used). I will add "cs", "ru" and "hi" when the manual data is available.
https://github.com/huggingface/datasets/pull/142
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/142", "html_url": "https://github.com/huggingface/datasets/pull/142", "diff_url": "https://github.com/huggingface/datasets/pull/142.diff", "patch_url": "https://github.com/huggingface/datasets/pull/142.patch", "merged_at": "2020-05-17T12:18:20" }
142
true
[Clean up] remove bogus folder
@mariamabarham - I think you accidentally placed it there.
https://github.com/huggingface/datasets/pull/141
[ "Same for the dataset_infos.json at the project root no ?", "Sorry guys, I haven't noticed. Thank you for mentioning it." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/141", "html_url": "https://github.com/huggingface/datasets/pull/141", "diff_url": "https://github.com/huggingface/datasets/pull/141.diff", "patch_url": "https://github.com/huggingface/datasets/pull/141.patch", "merged_at": "2020-05-16T13:24:25" }
141
true
[Tests] run local tests as default
This PR also enables local tests by default. I think it's safer for now to enable both local and AWS tests for every commit. The problem currently is that when we do a PR to add a dataset, the dataset is not yet on AWS and therefore not tested on the PR itself. Thus the PR will always be green even if the datasets are not correct. This PR aims at fixing this. ## Suggestion on how to commit to the repo from now on: Now since the repo is "online", I think we should adopt a couple of best practices: 1) - No direct committing to the repo anymore. Every change should be opened in a PR and be well documented so that we can find it later. 2) - Every PR has to be reviewed by at least x people (I guess @thomwolf you should decide here) because we now have to be much more careful when doing changes to the API for backward compatibility, etc...
https://github.com/huggingface/datasets/pull/140
[ "You are right and I think those are usual best practice :) I'm 100% fine with this^^", "Merging this for now to unblock other PRs." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/140", "html_url": "https://github.com/huggingface/datasets/pull/140", "diff_url": "https://github.com/huggingface/datasets/pull/140.diff", "patch_url": "https://github.com/huggingface/datasets/pull/140.patch", "merged_at": "2020-05-16T13:21:43" }
140
true
Add GermEval 2014 NER dataset
Hi, this PR adds the GermEval 2014 NER dataset πŸ˜ƒ > The GermEval 2014 NER Shared Task builds on a new dataset with German Named Entity annotation [1] with the following properties: > - The data was sampled from German Wikipedia and News Corpora as a collection of citations. > - The dataset covers over 31,000 sentences corresponding to over 590,000 tokens. > - The NER annotation uses the NoSta-D guidelines, which extend the TΓΌbingen Treebank guidelines, using four main NER categories with sub-structure, and annotating embeddings among NEs such as [ORG FC Kickers [LOC Darmstadt]]. Dataset will be downloaded from the [official GermEval 2014 website](https://sites.google.com/site/germeval2014ner/data). ## Dataset format Here's an example of the dataset format from the original dataset: ```tsv # http://de.wikipedia.org/wiki/Manfred_Korfmann [2009-10-17] 1 Aufgrund O O 2 seiner O O 3 Initiative O O 4 fand O O 5 2001/2002 O O 6 in O O 7 Stuttgart B-LOC O 8 , O O 9 Braunschweig B-LOC O 10 und O O 11 Bonn B-LOC O 12 eine O O 13 große O O 14 und O O 15 publizistisch O O 16 vielbeachtete O O 17 Troia-Ausstellung B-LOCpart O 18 statt O O 19 , O O 20 β€ž O O 21 Troia B-OTH B-LOC 22 - I-OTH O 23 Traum I-OTH O 24 und I-OTH O 25 Wirklichkeit I-OTH O 26 β€œ O O 27 . O O ``` The sentence is encoded as one token per line (tab separated columns. The first column contains either a `#`, which signals the source the sentence is cited from and the date it was retrieved, or the token number within the sentence. The second column contains the token. Column three and four contain the named entity (in IOB2 scheme). Outer spans are encoded in the third column, embedded/nested spans in the fourth column. ## Features I decided to keep most information from the dataset. That means the so called "source" information (where the sentences come from + date information) is also returned for each sentence in the feature vector. For each sentence in the dataset, one feature vector (`nlp.Features` definition) will be returned: | Feature | Example | Description | ---- | ---- | ----------------- | `id` | `0` | Number (id) of current sentence | `source` | `http://de.wikipedia.org/wiki/Manfred_Korfmann [2009-10-17]` | URL and retrieval date as string | `tokens` | `["Schwartau", "sagte", ":"]` | List of tokens (strings) for a sentence | `labels` | `["B-PER", "O", "O"]` | List of labels (outer span) | `nested-labels` | `["O", "O", "O"]` | List of labels for nested span ## Example The following command downloads the dataset from the official GermEval 2014 page and pre-processed it: ```bash python nlp-cli test datasets/germeval_14 --all_configs ``` It then outputs the number for training, development and testset. The training set consists of 24,000 sentences, the development set of 2,200 and the test of 5,100 sentences. Now it can be imported and used with `nlp`: ```python import nlp germeval = nlp.load_dataset("./datasets/germeval_14/germeval_14.py") assert len(germeval["train"]) == 24000 # Show first sentence of training set: germeval["train"][0] ```
https://github.com/huggingface/datasets/pull/139
[ "Had really fun playing around with this new library :heart: ", "That's awesome - thanks @stefan-it :-) \r\n\r\nCould you maybe rebase to master and check if all dummy data tests are fine. I should have included the local tests directly in the test suite so that all PRs are fully checked: #140 - sorry :D ", "@p...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/139", "html_url": "https://github.com/huggingface/datasets/pull/139", "diff_url": "https://github.com/huggingface/datasets/pull/139.diff", "patch_url": "https://github.com/huggingface/datasets/pull/139.patch", "merged_at": "2020-05-16T13:56:22" }
139
true
Consider renaming to nld
Hey :) Just making a thread here recording what I said on Twitter, as it's impossible to follow discussion there. It's also just really not a good way to talk about this sort of thing. The issue is that modules go into the global namespace, so you shouldn't use variable names that conflict with module names. This means the package makes `nlp` a bad variable name everywhere in the codebase. I've always used `nlp` as the canonical variable name of spaCy's `Language` objects, and this is a convention that a lot of other code has followed (Stanza, flair, etc). And actually, your `transformers` library uses `nlp` as the name for its `Pipeline` instance in your readme. If you stick with the `nlp` name for this package, if anyone uses it then they should rewrite all of that code. If `nlp` is a bad choice of variable anywhere, it's a bad choice of variable everywhere --- because you shouldn't have to notice whether some other function uses a module when you're naming variables within a function. You want to have one convention that you can stick to everywhere. If people use your `nlp` package and continue to use the `nlp` variable name, they'll find themselves with confusing bugs. There will be many many bits of code cut-and-paste from tutorials that give confusing results when combined with the data loading from the `nlp` library. The problem will be especially bad for shadowed modules (people might reasonably have a module named `nlp.py` within their codebase) and notebooks, as people might run notebook cells for data loading out-of-order. I don't think it's an exaggeration to say that if your library becomes popular, we'll all be answering issues around this about once a week for the next few years. That seems pretty unideal, so I do hope you'll reconsider. I suggest `nld` as a better name. It more accurately represents what the package actually does. It's pretty unideal to have a package named `nlp` that doesn't do any processing, and contains data about natural language generation or other non-NLP tasks. The name is equally short, and is sort of a visual pun on `nlp`, since a d is a rotated p.
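To make the shadowing problem concrete, a small illustration (the spaCy model name is only an example):

```python
import nlp    # the datasets package
import spacy

nlp = spacy.load("en_core_web_sm")   # the usual convention for a pipeline object

# The variable now shadows the module, so later data-loading code breaks:
dataset = nlp.load_dataset("squad")  # AttributeError: the pipeline object has no 'load_dataset'
```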
https://github.com/huggingface/datasets/issues/138
[ "I would suggest `nlds`. NLP is a very general, broad and ambiguous term, the library is not about NLP (as in processing) per se, it is about accessing Natural Language related datasets. So the name should reflect its purpose.\r\n", "Chiming in to second everything @honnibal said, and to add that I think the curr...
null
138
false
Update README.md
small typo
https://github.com/huggingface/datasets/pull/136
[ "Thanks, this was fixed with #135 :)" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/136", "html_url": "https://github.com/huggingface/datasets/pull/136", "diff_url": "https://github.com/huggingface/datasets/pull/136.diff", "patch_url": "https://github.com/huggingface/datasets/pull/136.patch", "merged_at": null }
136
true
Fix print statement in READ.md
The print statement was displaying a generator object instead of printing the names of the available datasets/metrics.
https://github.com/huggingface/datasets/pull/135
[ "Indeed, thanks!" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/135", "html_url": "https://github.com/huggingface/datasets/pull/135", "diff_url": "https://github.com/huggingface/datasets/pull/135.diff", "patch_url": "https://github.com/huggingface/datasets/pull/135.patch", "merged_at": "2020-05-17T12:14:05" }
135
true
Update README.md
https://github.com/huggingface/datasets/pull/134
[ "the readme got removed, closing this one" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/134", "html_url": "https://github.com/huggingface/datasets/pull/134", "diff_url": "https://github.com/huggingface/datasets/pull/134.diff", "patch_url": "https://github.com/huggingface/datasets/pull/134.patch", "merged_at": null }
134
true
[Question] Using/adding a local dataset
Users may want to either create/modify a local copy of a dataset, or use a custom-built dataset with the same `Dataset` API as externally downloaded datasets. It appears to be possible to point to a local dataset path rather than downloading the external ones, but I'm not exactly sure how to go about doing this. A notebook/example script demonstrating this would be very helpful.
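For illustration, a sketch of what this looks like with a local dataset script (all paths are placeholders):

```python
import nlp

# Point load_dataset at a local processing script instead of a canonical dataset name.
dataset = nlp.load_dataset("./datasets/squad/squad.py")

# Scripts that rely on manually downloaded files also take a data_dir argument.
dataset = nlp.load_dataset("./my_dataset/my_dataset.py", data_dir="~/my_dataset/manual_dir")
```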
https://github.com/huggingface/datasets/issues/133
[ "Hi @zphang,\r\n\r\nSo you can just give the local path to a dataset script file and it should work.\r\n\r\nHere is an example:\r\n- you can download one of the scripts in the `datasets` folder of the present repo (or clone the repo)\r\n- then you can load it with `load_dataset('PATH/TO/YOUR/LOCAL/SCRIPT.py')`\r\n\...
null
133
false
[Feature Request] Add the OpenWebText dataset
The OpenWebText dataset is an open clone of OpenAI's WebText dataset. It can be used to train ELECTRA as is specified in the [README](https://www.github.com/google-research/electra). More information and the download link are available [here](https://skylion007.github.io/OpenWebTextCorpus/).
https://github.com/huggingface/datasets/issues/132
[ "We're experimenting with hosting the OpenWebText corpus on Zenodo for easier downloading. https://zenodo.org/record/3834942#.Xs1w8i-z2J8", "Closing since it's been added in #660 " ]
null
132
false
[Feature request] Add Toronto BookCorpus dataset
I know the copyright/distribution of this one is complex, but it would be great to have! That, combined with the existing `wikitext`, would provide a complete dataset for pretraining models like BERT.
https://github.com/huggingface/datasets/issues/131
[ "As far as I understand, `wikitext` is refer to `WikiText-103` and `WikiText-2` that created by researchers in Salesforce, and mostly used in traditional language modeling.\r\n\r\nYou might want to say `wikipedia`, a dump from wikimedia foundation.\r\n\r\nAlso I would like to have Toronto BookCorpus too ! Though it...
null
131
false
Loading GLUE dataset loads CoLA by default
If I run: ```python dataset = nlp.load_dataset('glue') ``` The resultant dataset seems to be CoLA be default, without throwing any error. This is in contrast to calling: ```python metric = nlp.load_metric("glue") ``` which throws an error telling the user that they need to specify a task in GLUE. Should the same apply for loading datasets?
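For comparison, explicitly naming the task already works, e.g. (a short sketch):

```python
import nlp

# The second argument selects the GLUE sub-dataset explicitly.
cola = nlp.load_dataset('glue', 'cola')
sst2 = nlp.load_dataset('glue', 'sst2')
```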
https://github.com/huggingface/datasets/issues/130
[ "As a follow-up to this: It looks like the actual GLUE task name is supplied as the `name` argument. Is there a way to check what `name`s/sub-datasets are available under a grouping like GLUE? That information doesn't seem to be readily available in info from `nlp.list_datasets()`.\r\n\r\nEdit: I found the info und...
null
130
false
[Feature request] Add Google Natural Question dataset
Would be great to have https://github.com/google-research-datasets/natural-questions as an alternative to SQuAD.
https://github.com/huggingface/datasets/issues/129
[ "Indeed, I think this one is almost ready cc @lhoestq ", "I'm doing the latest adjustments to make the processing of the dataset run on Dataflow", "Is there an update to this? It will be very beneficial for the QA community!", "Still work in progress :)\r\nThe idea is to have the dataset already processed som...
null
129
false
Some error inside nlp.load_dataset()
First of all, nice work! I am going through [this overview notebook](https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb) In simple step `dataset = nlp.load_dataset('squad', split='validation[:10%]')` I get an error, which is connected with some inner code, I think: ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-8-d848d3a99b8c> in <module>() 1 # Downloading and loading a dataset 2 ----> 3 dataset = nlp.load_dataset('squad', split='validation[:10%]') 8 frames /usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 515 download_mode=download_mode, 516 ignore_verifications=ignore_verifications, --> 517 save_infos=save_infos, 518 ) 519 /usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, dl_manager, **download_and_prepare_kwargs) 361 verify_infos = not save_infos and not ignore_verifications 362 self._download_and_prepare( --> 363 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 364 ) 365 # Sync info /usr/local/lib/python3.6/dist-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 414 try: 415 # Prepare split will record examples associated to the split --> 416 self._prepare_split(split_generator, **prepare_split_kwargs) 417 except OSError: 418 raise OSError("Cannot find data file. " + (self.MANUAL_DOWNLOAD_INSTRUCTIONS or "")) /usr/local/lib/python3.6/dist-packages/nlp/builder.py in _prepare_split(self, split_generator) 585 fname = "{}-{}.arrow".format(self.name, split_generator.name) 586 fpath = os.path.join(self._cache_dir, fname) --> 587 examples_type = self.info.features.type 588 writer = ArrowWriter(data_type=examples_type, path=fpath, writer_batch_size=self._writer_batch_size) 589 /usr/local/lib/python3.6/dist-packages/nlp/features.py in type(self) 460 @property 461 def type(self): --> 462 return get_nested_type(self) 463 464 @classmethod /usr/local/lib/python3.6/dist-packages/nlp/features.py in get_nested_type(schema) 370 # Nested structures: we allow dict, list/tuples, sequences 371 if isinstance(schema, dict): --> 372 return pa.struct({key: get_nested_type(value) for key, value in schema.items()}) 373 elif isinstance(schema, (list, tuple)): 374 assert len(schema) == 1, "We defining list feature, you should just provide one example of the inner type" /usr/local/lib/python3.6/dist-packages/nlp/features.py in <dictcomp>(.0) 370 # Nested structures: we allow dict, list/tuples, sequences 371 if isinstance(schema, dict): --> 372 return pa.struct({key: get_nested_type(value) for key, value in schema.items()}) 373 elif isinstance(schema, (list, tuple)): 374 assert len(schema) == 1, "We defining list feature, you should just provide one example of the inner type" /usr/local/lib/python3.6/dist-packages/nlp/features.py in get_nested_type(schema) 379 # We allow to reverse list of dict => dict of list for compatiblity with tfds 380 if isinstance(inner_type, pa.StructType): --> 381 return pa.struct(dict((f.name, pa.list_(f.type, schema.length)) for f in inner_type)) 382 return pa.list_(inner_type, schema.length) 383 /usr/local/lib/python3.6/dist-packages/nlp/features.py in <genexpr>(.0) 379 # We allow to reverse list of dict => dict of 
list for compatiblity with tfds 380 if isinstance(inner_type, pa.StructType): --> 381 return pa.struct(dict((f.name, pa.list_(f.type, schema.length)) for f in inner_type)) 382 return pa.list_(inner_type, schema.length) 383 TypeError: list_() takes exactly one argument (2 given) ```
https://github.com/huggingface/datasets/issues/128
[ "Google colab has an old version of Apache Arrow built-in.\r\nBe sure you execute the \"pip install\" cell and restart the notebook environment if the colab asks for it.", "Thanks for reply, worked fine!\r\n" ]
null
128
false
Update Overview.ipynb
update notebook
https://github.com/huggingface/datasets/pull/127
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/127", "html_url": "https://github.com/huggingface/datasets/pull/127", "diff_url": "https://github.com/huggingface/datasets/pull/127.diff", "patch_url": "https://github.com/huggingface/datasets/pull/127.patch", "merged_at": "2020-05-15T11:47:25" }
127
true
remove webis
Remove webis from dataset folder. Our first dataset script that only lives on AWS :-) https://s3.console.aws.amazon.com/s3/buckets/datasets.huggingface.co/nlp/datasets/webis/tl_dr/?region=us-east-1 @julien-c @jplu
https://github.com/huggingface/datasets/pull/126
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/126", "html_url": "https://github.com/huggingface/datasets/pull/126", "diff_url": "https://github.com/huggingface/datasets/pull/126.diff", "patch_url": "https://github.com/huggingface/datasets/pull/126.patch", "merged_at": "2020-05-15T11:30:26" }
126
true
[Newsroom] add newsroom
I checked it with the data link of the mail you forwarded @thomwolf => works well!
https://github.com/huggingface/datasets/pull/125
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/125", "html_url": "https://github.com/huggingface/datasets/pull/125", "diff_url": "https://github.com/huggingface/datasets/pull/125.diff", "patch_url": "https://github.com/huggingface/datasets/pull/125.patch", "merged_at": "2020-05-15T10:37:02" }
125
true
Xsum, require manual download of some files
https://github.com/huggingface/datasets/pull/124
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/124", "html_url": "https://github.com/huggingface/datasets/pull/124", "diff_url": "https://github.com/huggingface/datasets/pull/124.diff", "patch_url": "https://github.com/huggingface/datasets/pull/124.patch", "merged_at": "2020-05-15T11:04:46" }
124
true
[Tests] Local => aws
## Change default test from local => AWS As a default we set `aws=True`, `local=False`, `slow=False` ### 1. RUN_AWS=1 (default) This runs 4 tests per dataset script. a) Does the dataset script have a valid etag / Can it be reached on AWS? b) Can we load its `builder_class`? c) Can we load **all** dataset configs? d) _Most importantly_: Can we load the dataset? Important - we currently only test the first config of each dataset to reduce test time. Total test time is around 1min20s. ### 2. RUN_LOCAL=1 RUN_AWS=0 ***This should be done when debugging dataset scripts of the ./datasets folder*** This only runs 1 test per dataset script, which is equivalent to AWS test d) - Can we load the dataset from the local `datasets` directory? ### 3. RUN_SLOW=1 We should set these up to run maybe once per week? @thomwolf The `slow` tests include two more important tests. e) Can we load the dataset with all possible configs? This test will probably fail at the moment because a lot of dummy data is missing. We should add the dummy data step by step to be sure that all configs work. f) Test that the actual dataset can be loaded. This will take quite some time to run, but is important to make sure that the "real" data can be loaded. It will also test whether the dataset script has the correct checksums file which is currently not tested with `aws=True`. @lhoestq - is there an easy way to check cheaply whether the `dataset_info.json` is correct for each dataset script?
https://github.com/huggingface/datasets/pull/123
[ "For each dataset, If there exist a `dataset_info.json`, then the command `nlp-cli test path/to/my/dataset --al_configs` is successful only if the `dataset_infos.json` is correct. The infos are correct if the size and checksums of the downloaded file are correct, and if the number of examples in each split are corr...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/123", "html_url": "https://github.com/huggingface/datasets/pull/123", "diff_url": "https://github.com/huggingface/datasets/pull/123.diff", "patch_url": "https://github.com/huggingface/datasets/pull/123.patch", "merged_at": "2020-05-15T10:03:26" }
123
true
Final cleanup of readme and metrics
https://github.com/huggingface/datasets/pull/122
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/122", "html_url": "https://github.com/huggingface/datasets/pull/122", "diff_url": "https://github.com/huggingface/datasets/pull/122.diff", "patch_url": "https://github.com/huggingface/datasets/pull/122.patch", "merged_at": "2020-05-15T09:02:22" }
122
true
make style
https://github.com/huggingface/datasets/pull/121
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/121", "html_url": "https://github.com/huggingface/datasets/pull/121", "diff_url": "https://github.com/huggingface/datasets/pull/121.diff", "patch_url": "https://github.com/huggingface/datasets/pull/121.patch", "merged_at": "2020-05-15T08:25:38" }
121
true
πŸ› `map` not working
I'm trying to run a basic example (mapping function to add a prefix). [Here is the colab notebook I'm using.](https://colab.research.google.com/drive/1YH4JCAy0R1MMSc-k_Vlik_s1LEzP_t1h?usp=sharing) ```python import nlp dataset = nlp.load_dataset('squad', split='validation[:10%]') def test(sample): sample['title'] = "test prefix @@@ " + sample["title"] return sample print(dataset[0]['title']) dataset.map(test) print(dataset[0]['title']) ``` Output : > Super_Bowl_50 Super_Bowl_50 Expected output : > Super_Bowl_50 test prefix @@@ Super_Bowl_50
https://github.com/huggingface/datasets/issues/120
[ "I didn't assign the output πŸ€¦β€β™‚οΈ\r\n\r\n```python\r\ndataset.map(test)\r\n```\r\n\r\nshould be :\r\n\r\n```python\r\ndataset = dataset.map(test)\r\n```" ]
null
120
false
πŸ› Colab : type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array'
I'm trying to load CNN/DM dataset on Colab. [Colab notebook](https://colab.research.google.com/drive/11Mf7iNhIyt6GpgA1dBEtg3cyMHmMhtZS?usp=sharing) But I meet this error : > AttributeError: type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array'
https://github.com/huggingface/datasets/issues/119
[ "It's strange, after installing `nlp` on Colab, the `pyarrow` version seems fine from `pip` but not from python :\r\n\r\n```python\r\nimport pyarrow\r\n\r\n!pip show pyarrow\r\nprint(\"version = {}\".format(pyarrow.__version__))\r\n```\r\n\r\n> Name: pyarrow\r\nVersion: 0.17.0\r\nSummary: Python library for Apache ...
null
119
false
❓ How to apply a map to all subsets ?
I'm working with CNN/DM dataset, where I have 3 subsets : `train`, `test`, `validation`. Should I apply my map function on the subsets one by one ? ```python import nlp cnn_dm = nlp.load_dataset('cnn_dailymail') for corpus in ['train', 'test', 'validation']: cnn_dm[corpus] = cnn_dm[corpus].map(my_func) ``` Or is there a better way to do this ?
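As confirmed below, iterating over the splits is the intended pattern; a slightly more compact sketch of the same idea:

```python
import nlp

cnn_dm = nlp.load_dataset('cnn_dailymail')

# Apply the same function to every split without hard-coding the split names.
cnn_dm = {split: ds.map(my_func) for split, ds in cnn_dm.items()}
```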
https://github.com/huggingface/datasets/issues/118
[ "That's the way!" ]
null
118
false
❓ How to remove specific rows of a dataset ?
I saw on the [example notebook](https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb#scrollTo=efFhDWhlvSVC) how to remove a specific column : ```python dataset.drop('id') ``` But I didn't find how to remove a specific row. **For example, how can I remove all sample with `id` < 10 ?**
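As noted below, this was not possible at the time; later releases of the library expose a `filter` method that covers this use case. A sketch:

```python
# Keep only the rows whose id is at least 10 (Dataset.filter in later releases).
dataset = dataset.filter(lambda example: example['id'] >= 10)
```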
https://github.com/huggingface/datasets/issues/117
[ "Hi, you can't do that at the moment.", "Can you do it by now? Coz it would be awfully helpful!", "you can convert dataset object to pandas and remove a feature and convert back to dataset .", "That's what I ended up doing too. but it feels like a workaround to a feature that should be added to the datasets c...
null
117
false
πŸ› Trying to use ROUGE metric : pyarrow.lib.ArrowInvalid: Column 1 named references expected length 534 but got length 323
I'm trying to use the rouge metric. I have two files: `test.pred.tokenized` and `test.gold.tokenized`, with each line containing a sentence. I tried: ```python import nlp rouge = nlp.load_metric('rouge') with open("test.pred.tokenized") as p, open("test.gold.tokenized") as g: for lp, lg in zip(p, g): rouge.add(lp, lg) ``` But I get the following error: > pyarrow.lib.ArrowInvalid: Column 1 named references expected length 534 but got length 323 --- Full stack-trace: ``` Traceback (most recent call last): File "<stdin>", line 3, in <module> File "/home/me/.venv/transformers/lib/python3.6/site-packages/nlp/metric.py", line 224, in add self.writer.write_batch(batch) File "/home/me/.venv/transformers/lib/python3.6/site-packages/nlp/arrow_writer.py", line 148, in write_batch pa_table: pa.Table = pa.Table.from_pydict(batch_examples, schema=self._schema) File "pyarrow/table.pxi", line 1550, in pyarrow.lib.Table.from_pydict File "pyarrow/table.pxi", line 1503, in pyarrow.lib.Table.from_arrays File "pyarrow/public-api.pxi", line 390, in pyarrow.lib.pyarrow_wrap_table File "pyarrow/error.pxi", line 85, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Column 1 named references expected length 534 but got length 323 ``` (`nlp` installed from source)
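One workaround sketch (not necessarily the root-cause fix) is to collect the full lists and hand them to `compute` in one call rather than adding line by line:

```python
import nlp

rouge = nlp.load_metric('rouge')

# Read both files fully, then score everything in a single compute() call.
with open("test.pred.tokenized") as p, open("test.gold.tokenized") as g:
    predictions = [line.strip() for line in p]
    references = [line.strip() for line in g]

score = rouge.compute(predictions, references)
```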
https://github.com/huggingface/datasets/issues/116
[ "Can you share your data files or a minimally reproducible example?", "Sure, [here is a Colab notebook](https://colab.research.google.com/drive/1uiS89fnHMG7HV_cYxp3r-_LqJQvNNKs9?usp=sharing) reproducing the error.\r\n\r\n> ArrowInvalid: Column 1 named references expected length 36 but got length 56", "This is b...
null
116
false
AttributeError: 'dict' object has no attribute 'info'
I'm trying to access the information of the CNN/DM dataset: ```python cnn_dm = nlp.load_dataset('cnn_dailymail') print(cnn_dm.info) ``` This returns: > AttributeError: 'dict' object has no attribute 'info'
https://github.com/huggingface/datasets/issues/115
[ "I could access the info by first accessing the different splits :\r\n\r\n```python\r\nimport nlp\r\n\r\ncnn_dm = nlp.load_dataset('cnn_dailymail')\r\nprint(cnn_dm['train'].info)\r\n```\r\n\r\nInformation seems to be duplicated between the subsets :\r\n\r\n```python\r\nprint(cnn_dm[\"train\"].info == cnn_dm[\"test\...
null
115
false
Couldn't reach CNN/DM dataset
I can't get the CNN/DailyMail dataset. ```python import nlp assert "cnn_dailymail" in [dataset.id for dataset in nlp.list_datasets()] cnn_dm = nlp.load_dataset('cnn_dailymail') ``` [Colab notebook](https://colab.research.google.com/drive/1zQ3bYAVzm1h0mw0yWPqKAg_4EUlSx5Ex?usp=sharing) gives the following error: ``` ConnectionError: Couldn't reach https://s3.amazonaws.com/datasets.huggingface.co/nlp/cnn_dailymail/cnn_dailymail.py ```
https://github.com/huggingface/datasets/issues/114
[ "Installing from source (instead of Pypi package) solved the problem." ]
null
114
false
Adding docstrings and some doc
Some doc
https://github.com/huggingface/datasets/pull/113
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/113", "html_url": "https://github.com/huggingface/datasets/pull/113", "diff_url": "https://github.com/huggingface/datasets/pull/113.diff", "patch_url": "https://github.com/huggingface/datasets/pull/113.patch", "merged_at": "2020-05-14T23:22:44" }
113
true
Qa4mre - add dataset
Added dummy data test only for the first config. Will do the rest later. I had to add some minor hacks to an important function to make it work. There might be a cleaner way to handle it - can you take a look @thomwolf ?
https://github.com/huggingface/datasets/pull/112
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/112", "html_url": "https://github.com/huggingface/datasets/pull/112", "diff_url": "https://github.com/huggingface/datasets/pull/112.diff", "patch_url": "https://github.com/huggingface/datasets/pull/112.patch", "merged_at": "2020-05-15T09:16:42" }
112
true
[Clean-up] remove under construction datasets
https://github.com/huggingface/datasets/pull/111
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/111", "html_url": "https://github.com/huggingface/datasets/pull/111", "diff_url": "https://github.com/huggingface/datasets/pull/111.diff", "patch_url": "https://github.com/huggingface/datasets/pull/111.patch", "merged_at": "2020-05-14T20:52:22" }
111
true
fix reddit tifu dummy data
https://github.com/huggingface/datasets/pull/110
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/110", "html_url": "https://github.com/huggingface/datasets/pull/110", "diff_url": "https://github.com/huggingface/datasets/pull/110.diff", "patch_url": "https://github.com/huggingface/datasets/pull/110.patch", "merged_at": "2020-05-14T20:40:13" }
110
true
[Reclor] fix reclor
- That's probably on me. I could have made the manual data test more flexible. @mariamabarham
https://github.com/huggingface/datasets/pull/109
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/109", "html_url": "https://github.com/huggingface/datasets/pull/109", "diff_url": "https://github.com/huggingface/datasets/pull/109.diff", "patch_url": "https://github.com/huggingface/datasets/pull/109.patch", "merged_at": "2020-05-14T20:19:08" }
109
true
convert can use manual dir as second argument
@mariamabarham
https://github.com/huggingface/datasets/pull/108
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/108", "html_url": "https://github.com/huggingface/datasets/pull/108", "diff_url": "https://github.com/huggingface/datasets/pull/108.diff", "patch_url": "https://github.com/huggingface/datasets/pull/108.patch", "merged_at": "2020-05-14T16:52:42" }
108
true
add writer_batch_size to GeneratorBasedBuilder
You can now specify `writer_batch_size` in the builder arguments or directly in `load_dataset`.
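A short sketch of the `load_dataset` form (the value is arbitrary):

```python
import nlp

# Smaller values keep fewer examples in memory before each write to the Arrow file.
dataset = nlp.load_dataset('squad', writer_batch_size=1000)
```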
https://github.com/huggingface/datasets/pull/107
[ "Awesome that's great!" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/107", "html_url": "https://github.com/huggingface/datasets/pull/107", "diff_url": "https://github.com/huggingface/datasets/pull/107.diff", "patch_url": "https://github.com/huggingface/datasets/pull/107.patch", "merged_at": "2020-05-14T16:50:29" }
107
true
Add data dir test command
https://github.com/huggingface/datasets/pull/106
[ "Nice - I think we can merge this. I will update the checksums for `wikihow` then as well" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/106", "html_url": "https://github.com/huggingface/datasets/pull/106", "diff_url": "https://github.com/huggingface/datasets/pull/106.diff", "patch_url": "https://github.com/huggingface/datasets/pull/106.patch", "merged_at": "2020-05-14T16:49:10" }
106
true
[New structure on AWS] Adapt paths
Some small changes so that we have the correct paths. @julien-c
https://github.com/huggingface/datasets/pull/105
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/105", "html_url": "https://github.com/huggingface/datasets/pull/105", "diff_url": "https://github.com/huggingface/datasets/pull/105.diff", "patch_url": "https://github.com/huggingface/datasets/pull/105.patch", "merged_at": "2020-05-14T15:56:27" }
105
true
Add trivia_q
Currently tested only for one config so that the tests pass. More dummy data needs to be added later.
https://github.com/huggingface/datasets/pull/104
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/104", "html_url": "https://github.com/huggingface/datasets/pull/104", "diff_url": "https://github.com/huggingface/datasets/pull/104.diff", "patch_url": "https://github.com/huggingface/datasets/pull/104.patch", "merged_at": "2020-05-14T20:23:32" }
104
true
[Manual downloads] add logic proposal for manual downloads and add wikihow
Wikihow is an example that requires manually downloading two files, as stated in: https://github.com/mahnazkoupaee/WikiHow-Dataset. The user can then store these files under hard-coded names: `wikihowAll.csv` and `wikihowSep.csv` in this case, in a directory of their choice, e.g. `~/wikihow/manual_dir`. The dataset can then be loaded via: ```python import nlp nlp.load_dataset("wikihow", data_dir="~/wikihow/manual_dir") ``` I added/changed the logic so that there are explicit error messages when using manually downloaded files.
https://github.com/huggingface/datasets/pull/103
[ "> Wikihow is an example that needs to manually download two files as stated in: https://github.com/mahnazkoupaee/WikiHow-Dataset.\r\n> \r\n> The user can then store these files under a hard-coded name: `wikihowAll.csv` and `wikihowSep.csv` in this case in a directory of his choice, e.g. `~/wikihow/manual_dir`.\r\n...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/103", "html_url": "https://github.com/huggingface/datasets/pull/103", "diff_url": "https://github.com/huggingface/datasets/pull/103.diff", "patch_url": "https://github.com/huggingface/datasets/pull/103.patch", "merged_at": "2020-05-14T14:27:40" }
103
true
Run save infos
I replaced the old checksum file with the new `dataset_infos.json` by running the script on almost all the datasets we have. The only one that is still running on my side is the Cornell dialog dataset.
https://github.com/huggingface/datasets/pull/102
[ "Haha that cornell dialogue dataset - that ran for 3h on my computer as well. The `generate_examples` method in this script is one of the most inefficient code samples I've ever seen :D ", "Indeed it's been 3 hours already\r\n```73111 examples [3:07:48, 2.40 examples/s]```" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/102", "html_url": "https://github.com/huggingface/datasets/pull/102", "diff_url": "https://github.com/huggingface/datasets/pull/102.diff", "patch_url": "https://github.com/huggingface/datasets/pull/102.patch", "merged_at": "2020-05-14T15:43:03" }
102
true
[Reddit] add reddit
- Everything worked fine @mariamabarham. Made my computer nearly crash, but all seems to be working :-)
https://github.com/huggingface/datasets/pull/101
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/101", "html_url": "https://github.com/huggingface/datasets/pull/101", "diff_url": "https://github.com/huggingface/datasets/pull/101.diff", "patch_url": "https://github.com/huggingface/datasets/pull/101.patch", "merged_at": "2020-05-14T10:27:24" }
101
true
Add per type scores in seqeval metric
This PR add a bit more detail in the seqeval metric. Now the usage and output are: ```python import nlp met = nlp.load_metric('metrics/seqeval') references = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']] predictions = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']] met.compute(predictions, references) #Output: {'PER': {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1}, 'MISC': {'precision': 0.0, 'recall': 0.0, 'f1': 0, 'number': 1}, 'overall_precision': 0.5, 'overall_recall': 0.5, 'overall_f1': 0.5, 'overall_accuracy': 0.8} ``` It is also possible to compute scores for non IOB notations, POS tagging for example hasn't this kind of notation. Add `suffix` parameter: ```python import nlp met = nlp.load_metric('metrics/seqeval') references = [['O', 'O', 'O', 'MISC', 'MISC', 'MISC', 'O'], ['PER', 'PER', 'O']] predictions = [['O', 'O', 'MISC', 'MISC', 'MISC', 'MISC', 'O'], ['PER', 'PER', 'O']] met.compute(predictions, references, metrics_kwargs={"suffix": True}) #Output: {'PER': {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1}, 'MISC': {'precision': 0.0, 'recall': 0.0, 'f1': 0, 'number': 1}, 'overall_precision': 0.5, 'overall_recall': 0.5, 'overall_f1': 0.5, 'overall_accuracy': 0.9} ```
https://github.com/huggingface/datasets/pull/100
[ "LGTM :-) Some small suggestions to shorten the code a bit :-) ", "Can you put the kwargs as normal kwargs instead of a dict? (And add them to the kwargs description As well)", "@thom Is-it what you meant?", "Yes and there is a dynamically generated doc string in the metric script KWARGS DESCRIPTION" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/100", "html_url": "https://github.com/huggingface/datasets/pull/100", "diff_url": "https://github.com/huggingface/datasets/pull/100.diff", "patch_url": "https://github.com/huggingface/datasets/pull/100.patch", "merged_at": "2020-05-14T23:21:34" }
100
true