html_url (string) | title (string) | comments (list) | body (string) | number (int64) |
|---|---|---|---|---|
https://github.com/huggingface/datasets/issues/565 | No module named 'nlp.logging' | [
"Thanks for reporting.\r\n\r\nApparently this is a versioning issue: the lib downloaded the `bleurt` script from the master branch where we did this change recently. We'll fix that in a new release this week or early next week. Cc @thomwolf \r\n\r\nUntil that, I'd suggest you to download the right bleurt folder fro... | Hi, I am using nlp version 0.4.0. Trying to use bleurt as an eval metric, however, the bleurt script imports nlp.logging which creates the following error. What am I missing?
```
>>> import nlp
2020-09-02 13:47:09.210310: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
>>> bleurt = nlp.load_metric("bleurt")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/melody/anaconda3/envs/transformers/lib/python3.6/site-packages/nlp/load.py", line 443, in load_metric
metric_cls = import_main_class(module_path, dataset=False)
File "/home/melody/anaconda3/envs/transformers/lib/python3.6/site-packages/nlp/load.py", line 61, in import_main_class
module = importlib.import_module(module_path)
File "/home/melody/anaconda3/envs/transformers/lib/python3.6/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/melody/anaconda3/envs/transformers/lib/python3.6/site-packages/nlp/metrics/bleurt/43448cf2959ea81d3ae0e71c5c8ee31dc15eed9932f197f5f50673cbcecff2b5/bleurt.py", line 20, in <module>
from nlp.logging import get_logger
ModuleNotFoundError: No module named 'nlp.logging'
```
Just to show once again that I can't import the logging module:
```
>>> import nlp
2020-09-02 13:48:38.190621: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
>>> nlp.__version__
'0.4.0'
>>> from nlp.logging import get_logger
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'nlp.logging'
``` | 565 |
https://github.com/huggingface/datasets/issues/560 | Using custom DownloadConfig results in an error | [
"From my limited understanding, part of the issue seems related to the `prepare_module` and `download_and_prepare` functions each handling the case where no config is passed. For example, `prepare_module` does mutate the object passed and forces the flags `extract_compressed_file` and `force_extract` to `True`.\r\... | ## Version / Environment
Ubuntu 18.04
Python 3.6.8
nlp 0.4.0
## Description
Loading the `imdb` dataset works fine when I don't specify any `download_config` argument. When I create a custom `DownloadConfig` object and pass it to the `nlp.load_dataset` function, it results in an error.
## How to reproduce
### Example without DownloadConfig --> works
```python
import os
os.environ["HF_HOME"] = "/data/hf-test-without-dl-config-01/"
import logging
import nlp
logging.basicConfig(level=logging.INFO)
if __name__ == "__main__":
imdb = nlp.load_dataset(path="imdb")
```
### Example with DownloadConfig --> doesn't work
```python
import os
os.environ["HF_HOME"] = "/data/hf-test-with-dl-config-01/"
import logging
import nlp
from nlp.utils import DownloadConfig
logging.basicConfig(level=logging.INFO)
if __name__ == "__main__":
download_config = DownloadConfig()
imdb = nlp.load_dataset(path="imdb", download_config=download_config)
```
Error traceback:
```
Traceback (most recent call last):
File "/.../example_with_dl_config.py", line 13, in <module>
imdb = nlp.load_dataset(path="imdb", download_config=download_config)
File "/.../python3.6/python3.6/site-packages/nlp/load.py", line 549, in load_dataset
download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,
File "/.../python3.6/python3.6/site-packages/nlp/builder.py", line 463, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/.../python3.6/python3.6/site-packages/nlp/builder.py", line 518, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/.../python3.6/python3.6/site-packages/nlp/datasets/imdb/76cdbd7249ea3548c928bbf304258dab44d09cd3638d9da8d42480d1d1be3743/imdb.py", line 86, in _split_generators
arch_path = dl_manager.download_and_extract(_DOWNLOAD_URL)
File "/.../python3.6/python3.6/site-packages/nlp/utils/download_manager.py", line 220, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/.../python3.6/python3.6/site-packages/nlp/utils/download_manager.py", line 158, in download
self._record_sizes_checksums(url_or_urls, downloaded_path_or_paths)
File "/.../python3.6/python3.6/site-packages/nlp/utils/download_manager.py", line 108, in _record_sizes_checksums
self._recorded_sizes_checksums[url] = get_size_checksum_dict(path)
File "/.../python3.6/python3.6/site-packages/nlp/utils/info_utils.py", line 79, in get_size_checksum_dict
with open(path, "rb") as f:
IsADirectoryError: [Errno 21] Is a directory: '/data/hf-test-with-dl-config-01/datasets/extracted/b6802c5b61824b2c1f7dbf7cda6696b5f2e22214e18d171ce1ed3be90c931ce5'
```
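For illustration, here is a minimal sketch of the mutation described in the first comment (assuming the `extract_compressed_file` and `force_extract` fields of `DownloadConfig` in nlp 0.4.0): the config object handed to `nlp.load_dataset` reportedly comes back with its extraction flags forced on, which is why the checksum step later sees an extracted directory instead of the downloaded archive.
```python
import nlp
from nlp.utils import DownloadConfig

download_config = DownloadConfig()
print(download_config.extract_compressed_file, download_config.force_extract)  # False False
# The call below is said to mutate `download_config` in place (both flags become True)
# before failing with the IsADirectoryError shown above.
# imdb = nlp.load_dataset(path="imdb", download_config=download_config)
```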
| 560 |
https://github.com/huggingface/datasets/issues/554 | nlp downloads to its module path | [
"Indeed this is a known issue arising from the fact that we try to be compatible with cloupickle.\r\n\r\nDoes this also happen if you are installing in a virtual environment?",
"> Indeed this is a know issue with the fact that we try to be compatible with cloupickle.\r\n> \r\n> Does this also happen if you are in... | I am trying to package `nlp` for Nix, because it is now an optional dependency for `transformers`. The problem that I encounter is that the `nlp` library downloads to the module path, which is typically not writable in most package management systems:
```
>>> import nlp
>>> squad_dataset = nlp.load_dataset('squad')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/nix/store/2yhik0hhqayksmkkfb0ylqp8cf5wa5wp-python3-3.8.5-env/lib/python3.8/site-packages/nlp/load.py", line 530, in load_dataset
module_path, hash = prepare_module(path, download_config=download_config, dataset=True)
File "/nix/store/2yhik0hhqayksmkkfb0ylqp8cf5wa5wp-python3-3.8.5-env/lib/python3.8/site-packages/nlp/load.py", line 329, in prepare_module
os.makedirs(main_folder_path, exist_ok=True)
File "/nix/store/685kq8pyhrvajah1hdsfn4q7gm3j4yd4-python3-3.8.5/lib/python3.8/os.py", line 223, in makedirs
mkdir(name, mode)
OSError: [Errno 30] Read-only file system: '/nix/store/2yhik0hhqayksmkkfb0ylqp8cf5wa5wp-python3-3.8.5-env/lib/python3.8/site-packages/nlp/datasets/squad'
```
Do you have any suggested workaround for this issue?
Perhaps overriding the default value for `force_local_path` of `prepare_module`? | 554 |
https://github.com/huggingface/datasets/issues/546 | Very slow data loading on large dataset | [
"When you load a text file for the first time with `nlp`, the file is converted into Apache Arrow format. Arrow allows to use memory-mapping, which means that you can load an arbitrary large dataset.\r\n\r\nNote that as soon as the conversion has been done once, the next time you'll load the dataset it will be much... | I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data.
It has been 8 hours, and it is still on the loading step.
It does work when the text dataset is small (about 1 GB), but it doesn't scale.
It also uses a single thread during the data loading step.
```
train_files = glob.glob("xxx/*.txt",recursive=True)
random.shuffle(train_files)
print(train_files)
dataset = nlp.load_dataset('text',
data_files=train_files,
name="customDataset",
version="1.0.0",
cache_dir="xxx/nlp")
```
Is there something that I am missing? | 546 |
https://github.com/huggingface/datasets/issues/545 | New release coming up for this library | [
"Update: release is planed mid-next week."
] | Hi all,
A few words on the roadmap for this library.
The next release will be a big one and is planned for the end of this week.
In addition to support for indexed datasets (useful for non-parametric models like REALM, RAG, DPR, knn-LM and many other fast dataset-retrieval techniques), it will:
- have support for multi-modal datasets
- include various significant improvements in speed for standard processing (map, shuffling, ...)
- have better support for metrics (better caching, and a robust API) and a bigger focus on reproducibility
- change the name to the final name (voted by the community): `datasets`
- be the 1.0.0 release as we think the API will be mostly stabilized from now on | 545 |
https://github.com/huggingface/datasets/issues/543 | nlp.load_dataset is not safe for multi processes when loading from local files | [
"I'll take a look!"
] | Loading from local files, e.g., `dataset = nlp.load_dataset('csv', data_files=['file_1.csv', 'file_2.csv'])`
concurrently from multiple processes, will raise `FileExistsError` from builder's line 430, https://github.com/huggingface/nlp/blob/6655008c738cb613c522deb3bd18e35a67b2a7e5/src/nlp/builder.py#L423-L438
Likely because multiple processes step into download_and_prepare, https://github.com/huggingface/nlp/blob/6655008c738cb613c522deb3bd18e35a67b2a7e5/src/nlp/load.py#L550-L554
This can happen when launching distributed training with commands like `python -m torch.distributed.launch --nproc_per_node 4` on a new collection of files never loaded before.
I can create a PR that puts in some file locks. It would be helpful if I could be informed of the convention for naming and placing the lock. | 543 |
https://github.com/huggingface/datasets/issues/541 | Best practices for training tokenizers with nlp | [
"Docs that explain how to train a tokenizer with `datasets` are available here: https://huggingface.co/docs/tokenizers/training_from_memory#using-the-datasets-library"
] | Hi, thank you for developing this library.
What do you think are the best practices for training tokenizers using `nlp`? In the documentation and examples, I could only find pre-trained tokenizers being used.
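For reference, a minimal sketch of one way to do it with the `tokenizers` library, streaming batches of text out of a dataset loaded with `nlp` (the dataset name and hyperparameters are placeholders, and `train_from_iterator` assumes a reasonably recent `tokenizers` version):
```python
import nlp
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

dataset = nlp.load_dataset("wikitext", "wikitext-2-raw-v1", split="train")

def batch_iterator(batch_size=1000):
    # Slicing a dataset returns a dict of columns; yield batches of raw text.
    for i in range(0, len(dataset), batch_size):
        yield dataset[i : i + batch_size]["text"]

tokenizer = Tokenizer(models.BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()
trainer = trainers.BpeTrainer(vocab_size=25000, special_tokens=["[UNK]", "[PAD]"])
tokenizer.train_from_iterator(batch_iterator(), trainer=trainer)
```
 | 541 |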
https://github.com/huggingface/datasets/issues/539 | [Dataset] `NonMatchingChecksumError` due to an update in the LinCE benchmark data | [
"Hi @gaguilar \r\n\r\nIf you want to take care of this, it very simple, you just need to regenerate the `dataset_infos.json` file as indicated [in the doc](https://huggingface.co/nlp/share_dataset.html#adding-metadata) by [installing from source](https://huggingface.co/nlp/installation.html#installing-from-source) ... | Hi,
There is a `NonMatchingChecksumError` for the `lid_msaea` (language identification for Modern Standard Arabic - Egyptian Arabic) dataset from the LinCE benchmark, due to a minor update on that dataset.
How can I update the checksum in the library to solve this issue? The error is below, and it also appears in the [nlp viewer](https://huggingface.co/nlp/viewer/?dataset=lince&config=lid_msaea):
```python
import nlp
nlp.load_dataset('lince', 'lid_msaea')
```
Output:
```
NonMatchingChecksumError: ['https://ritual.uh.edu/lince/libaccess/eyJ1c2VybmFtZSI6ICJodWdnaW5nZmFjZSBubHAiLCAidXNlcl9pZCI6IDExMSwgImVtYWlsIjogImR1bW15QGVtYWlsLmNvbSJ9/lid_msaea.zip']
Traceback:
File "/home/sasha/streamlit/lib/streamlit/ScriptRunner.py", line 322, in _run_script
exec(code, module.__dict__)
File "/home/sasha/nlp-viewer/run.py", line 196, in <module>
dts, fail = get(str(option.id), str(conf_option.name) if conf_option else None)
File "/home/sasha/streamlit/lib/streamlit/caching.py", line 591, in wrapped_func
return get_or_create_cached_value()
File "/home/sasha/streamlit/lib/streamlit/caching.py", line 575, in get_or_create_cached_value
return_value = func(*args, **kwargs)
File "/home/sasha/nlp-viewer/run.py", line 150, in get
builder_instance.download_and_prepare()
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/builder.py", line 432, in download_and_prepare
download_config.force_download = download_mode == FORCE_REDOWNLOAD
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/builder.py", line 469, in _download_and_prepare
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/utils/info_utils.py", line 36, in verify_checksums
raise NonMatchingChecksumError(str(bad_urls))
```
Thank you in advance!
@lhoestq | 539 |
https://github.com/huggingface/datasets/issues/537 | [Dataset] RACE dataset Checksums error | [
"`NonMatchingChecksumError` means that the checksum of the downloaded file is not the expected one.\r\nEither the file you downloaded was corrupted along the way, or the host updated the file.\r\nCould you try to clear your cache and run `load_dataset` again ? If the error is still there, it means that there was an... | Hi there, I just would like to use this awesome lib to perform a dataset fine-tuning on RACE dataset. I have performed the following steps:
```
dataset = nlp.load_dataset("race")
len(dataset["train"]), len(dataset["validation"])
```
But then I got the following error:
```
---------------------------------------------------------------------------
NonMatchingChecksumError Traceback (most recent call last)
<ipython-input-15-8bf7603ce0ed> in <module>
----> 1 dataset = nlp.load_dataset("race")
2 len(dataset["train"]), len(dataset["validation"])
~/miniconda3/envs/masters/lib/python3.8/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
546
547 # Download and prepare data
--> 548 builder_instance.download_and_prepare(
549 download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,
550 )
~/miniconda3/envs/masters/lib/python3.8/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
460 logger.info("Dataset not on Hf google storage. Downloading and preparing it from source")
461 if not downloaded_from_gcs:
--> 462 self._download_and_prepare(
463 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
464 )
~/miniconda3/envs/masters/lib/python3.8/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
519 # Checksums verification
520 if verify_infos:
--> 521 verify_checksums(
522 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
523 )
~/miniconda3/envs/masters/lib/python3.8/site-packages/nlp/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
36 if len(bad_urls) > 0:
37 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 38 raise NonMatchingChecksumError(error_msg + str(bad_urls))
39 logger.info("All the checksums matched successfully" + for_verification_name)
40
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['http://www.cs.cmu.edu/~glai1/data/race/RACE.tar.gz']
``` | 537 |
https://github.com/huggingface/datasets/issues/534 | `list_datasets()` is broken. | [
"Thanks for reporting !\r\nThis has been fixed in #475 and the fix will be available in the next release",
"What you can do instead to get the list of the datasets is call\r\n\r\n```python\r\nprint([dataset.id for dataset in nlp.list_datasets()])\r\n```",
"Thanks @lhoestq . "
] | version = '0.4.0'
`list_datasets()` is broken. It results in the following error:
```
In [3]: nlp.list_datasets()
Out[3]: ---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
~/.virtualenvs/san-lgUCsFg_/lib/python3.8/site-packages/IPython/core/formatters.py in __call__(self, obj)
700 type_pprinters=self.type_printers,
701 deferred_pprinters=self.deferred_printers)
--> 702 printer.pretty(obj)
703 printer.flush()
704 return stream.getvalue()
~/.virtualenvs/san-lgUCsFg_/lib/python3.8/site-packages/IPython/lib/pretty.py in pretty(self, obj)
375 if cls in self.type_pprinters:
376 # printer registered in self.type_pprinters
--> 377 return self.type_pprinters[cls](obj, self, cycle)
378 else:
379 # deferred printer
~/.virtualenvs/san-lgUCsFg_/lib/python3.8/site-packages/IPython/lib/pretty.py in inner(obj, p, cycle)
553 p.text(',')
554 p.breakable()
--> 555 p.pretty(x)
556 if len(obj) == 1 and type(obj) is tuple:
557 # Special case for 1-item tuples.
~/.virtualenvs/san-lgUCsFg_/lib/python3.8/site-packages/IPython/lib/pretty.py in pretty(self, obj)
392 if cls is not object \
393 and callable(cls.__dict__.get('__repr__')):
--> 394 return _repr_pprint(obj, self, cycle)
395
396 return _default_pprint(obj, self, cycle)
~/.virtualenvs/san-lgUCsFg_/lib/python3.8/site-packages/IPython/lib/pretty.py in _repr_pprint(obj, p, cycle)
698 """A pprint that just redirects to the normal repr function."""
699 # Find newlines and replace them with p.break_()
--> 700 output = repr(obj)
701 lines = output.splitlines()
702 with p.group():
~/.virtualenvs/san-lgUCsFg_/lib/python3.8/site-packages/nlp/hf_api.py in __repr__(self)
110
111 def __repr__(self):
--> 112 single_line_description = self.description.replace("\n", "")
113 return f"nlp.ObjectInfo(id='{self.id}', description='{single_line_description}', files={self.siblings})"
114
AttributeError: 'NoneType' object has no attribute 'replace'
``` | 534 |
https://github.com/huggingface/datasets/issues/532 | File exists error when used with TPU | [
"I am facing probably facing similar issues with \r\n\r\n`wiki40b_en_100_0`",
"Could you try to run `dataset = load_dataset(\"text\", data_files=file_path, split=\"train\")` once before calling the script ?\r\n\r\nIt looks like several processes try to create the dataset in arrow format at the same time. If the d... | Hi,
I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8).
I modified [line 131 in the original `run_language_modeling.py`](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py#L131) as follows:
```python
# line 131: return LineByLineTextDataset(tokenizer=tokenizer, file_path=file_path, block_size=args.block_size)
dataset = load_dataset("text", data_files=file_path, split="train")
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True,
truncation=True, max_length=args.block_size), batched=True)
dataset.set_format(type='torch', columns=['input_ids'])
return dataset
```
When I run this with [`xla_spawn.py`](https://github.com/huggingface/transformers/blob/master/examples/xla_spawn.py), I get the following error (it produces one message per TPU core, which I believe is fine).
It seems the current version doesn't take into account distributed training processes as in [this example](https://github.com/huggingface/transformers/blob/a573777901e662ec2e565be312ffaeedef6effec/src/transformers/data/datasets/language_modeling.py#L35-L38)?
```
08/25/2020 13:59:41 - WARNING - nlp.builder - Using custom data configuration default
08/25/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d)
08/25/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d)
08/25/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d)
08/25/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d)
08/25/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d)
08/25/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d)
08/25/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d)
08/25/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d)
Downloading and preparing dataset text/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/
447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d...
Downloading and preparing dataset text/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/
447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d...
Downloading and preparing dataset text/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/
447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d...
Downloading and preparing dataset text/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/
447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d...
Downloading and preparing dataset text/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/
447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d...
Downloading and preparing dataset text/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/
447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d...
Exception in device=TPU:6: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'
Exception in device=TPU:4: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'
Exception in device=TPU:1: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'
Downloading and preparing dataset text/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/
447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d...
Exception in device=TPU:7: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'
Exception in device=TPU:3: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'
Downloading and preparing dataset text/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/
447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d...
Exception in device=TPU:2: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'
Exception in device=TPU:0: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'
Traceback (most recent call last):
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn
fn(gindex, *args)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn
fn(gindex, *args)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn
fn(gindex, *args)
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 300, in _mp_fn
main()
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 300, in _mp_fn
main()
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 300, in _mp_fn
main()
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 240, in main
train_dataset = get_dataset(data_args, tokenizer=tokenizer) if training_args.do_train else None
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 240, in main
train_dataset = get_dataset(data_args, tokenizer=tokenizer) if training_args.do_train else None
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 240, in main
train_dataset = get_dataset(data_args, tokenizer=tokenizer) if training_args.do_train else None
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 134, in get_dataset
dataset = load_dataset("text", data_files=file_path, split="train")
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/load.py", line 546, in load_dataset
download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 134, in get_dataset
dataset = load_dataset("text", data_files=file_path, split="train")
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 134, in get_dataset
dataset = load_dataset("text", data_files=file_path, split="train")
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 450, in download_and_prepare
with incomplete_dir(self._cache_dir) as tmp_data_dir:
Traceback (most recent call last):
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/load.py", line 546, in load_dataset
download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/contextlib.py", line 81, in __enter__
return next(self.gen)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/load.py", line 546, in load_dataset
download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn
fn(gindex, *args)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 450, in download_and_prepare
with incomplete_dir(self._cache_dir) as tmp_data_dir:
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 422, in incomplete_dir
os.makedirs(tmp_dir)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 450, in download_and_prepare
with incomplete_dir(self._cache_dir) as tmp_data_dir:
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 300, in _mp_fn
main()
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/contextlib.py", line 81, in __enter__
return next(self.gen)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/os.py", line 220, in makedirs
mkdir(name, mode)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/contextlib.py", line 81, in __enter__
return next(self.gen)
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 240, in main
train_dataset = get_dataset(data_args, tokenizer=tokenizer) if training_args.do_train else None
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 422, in incomplete_dir
os.makedirs(tmp_dir)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn
fn(gindex, *args)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 422, in incomplete_dir
os.makedirs(tmp_dir)
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 134, in get_dataset
dataset = load_dataset("text", data_files=file_path, split="train")
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/os.py", line 220, in makedirs
mkdir(name, mode)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/load.py", line 546, in load_dataset
download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,
FileExistsError: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 300, in _mp_fn
main()
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 450, in download_and_prepare
with incomplete_dir(self._cache_dir) as tmp_data_dir:
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/os.py", line 220, in makedirs
mkdir(name, mode)
FileExistsError: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 240, in main
train_dataset = get_dataset(data_args, tokenizer=tokenizer) if training_args.do_train else None
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/contextlib.py", line 81, in __enter__
return next(self.gen)
FileExistsError: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 134, in get_dataset
dataset = load_dataset("text", data_files=file_path, split="train")
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 422, in incomplete_dir
os.makedirs(tmp_dir)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/load.py", line 546, in load_dataset
download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/os.py", line 220, in makedirs
mkdir(name, mode)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 450, in download_and_prepare
with incomplete_dir(self._cache_dir) as tmp_data_dir:
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/contextlib.py", line 81, in __enter__
return next(self.gen)
FileExistsError: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 422, in incomplete_dir
os.makedirs(tmp_dir)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/os.py", line 220, in makedirs
mkdir(name, mode)
Traceback (most recent call last):
FileExistsError: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'
Traceback (most recent call last):
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn
fn(gindex, *args)
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 300, in _mp_fn
main()
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 240, in main
train_dataset = get_dataset(data_args, tokenizer=tokenizer) if training_args.do_train else None
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 134, in get_dataset
dataset = load_dataset("text", data_files=file_path, split="train")
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn
fn(gindex, *args)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/load.py", line 546, in load_dataset
download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 450, in download_and_prepare
with incomplete_dir(self._cache_dir) as tmp_data_dir:
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 300, in _mp_fn
main()
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 240, in main
train_dataset = get_dataset(data_args, tokenizer=tokenizer) if training_args.do_train else None
File "/home/*****/huggingface_roberta/run_language_modeling.py", line 134, in get_dataset
dataset = load_dataset("text", data_files=file_path, split="train")
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/load.py", line 546, in load_dataset
download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 450, in download_and_prepare
with incomplete_dir(self._cache_dir) as tmp_data_dir:
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/contextlib.py", line 81, in __enter__
return next(self.gen)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 422, in incomplete_dir
os.makedirs(tmp_dir)
File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/os.py", line 220, in makedirs
mkdir(name, mode)
FileExistsError: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'
```
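One way to avoid this race, sketched under the assumption that `torch_xla` is available and reusing the `tokenizer`/`args` objects from the script above (this is not the fix that later landed in the library, just the usual master-ordinal barrier pattern): let the master process build the Arrow cache first and make the other cores wait.
```python
import torch_xla.core.xla_model as xm
from nlp import load_dataset

def get_dataset(file_path, tokenizer, block_size):
    if not xm.is_master_ordinal():
        xm.rendezvous("dataset_ready")  # non-master cores wait here
    dataset = load_dataset("text", data_files=file_path, split="train")
    dataset = dataset.map(
        lambda ex: tokenizer(ex["text"], add_special_tokens=True,
                             truncation=True, max_length=block_size),
        batched=True,
    )
    dataset.set_format(type="torch", columns=["input_ids"])
    if xm.is_master_ordinal():
        xm.rendezvous("dataset_ready")  # master releases the waiting cores
    return dataset
```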
| 532 |
https://github.com/huggingface/datasets/issues/525 | wmt download speed example | [
"Thanks for creating the issue :)\r\nThe download link for wmt-en-de raw looks like a mirror. We should use that instead of the current url.\r\nIs this mirror official ?\r\n\r\nAlso it looks like for `ro-en` it tried to download other languages. If we manage to only download the one that is asked it'd be cool\r\n\r... | Continuing from the slack 1.0 roadmap thread w @lhoestq , I realized the slow downloads is only a thing sometimes. Here are a few examples, I suspect there are multiple issues. All commands were run from the same gcp us-central-1f machine.
```
import nlp
nlp.load_dataset('wmt16', 'de-en')
```
Downloads at 49.1 KB/s
Whereas
```
pip install gdown # download from google drive
!gdown https://drive.google.com/uc?id=1iO7um-HWoNoRKDtw27YUSgyeubn9uXqj
```
Downloads at 127 MB/s. (The file is a copy of wmt-en-de raw).
```
nlp.load_dataset('wmt16', 'ro-en')
```
goes at 27 MB/s, much faster.
If we wget the same data from S3, it is the same download speed, but ¼ the file size:
```
wget https://s3.amazonaws.com/datasets.huggingface.co/translation/wmt_en_ro_packed_200_rand.tgz
```
Finally,
```
nlp.load_dataset('wmt19', 'zh-en')
```
Starts fast, but broken. (duplicate of #493 )
| 525 |
https://github.com/huggingface/datasets/issues/524 | Some docs are missing parameter names | [
"Indeed, good catch!"
] | See https://huggingface.co/nlp/master/package_reference/main_classes.html#nlp.Dataset.map. I believe this is because the parameter names are enclosed in backticks in the docstrings; maybe it's an old docstring format that doesn't work with the current Sphinx version. | 524 |
https://github.com/huggingface/datasets/issues/522 | dictionnary typo in docs | [
"Thanks!"
] | In many places, "dictionary" is spelled "dictionnary"; not sure if it's on purpose or not.
Fixed in this PR:
https://github.com/huggingface/nlp/pull/521 | 522 |
https://github.com/huggingface/datasets/issues/519 | [BUG] Metrics throwing new error on master since 0.4.0 | [
"Update - maybe this is only failing on bleu because I was not tokenizing inputs to the metric",
"Closing - seems to be just forgetting to tokenize. And found the helpful discussion in huggingface/evaluate#105 "
] | The following error occurs when passing in references of type `List[List[str]]` to metrics like bleu.
This wasn't happening on 0.4.0, but it is happening now on master.
```
File "/usr/local/lib/python3.7/site-packages/nlp/metric.py", line 226, in compute
self.add_batch(predictions=predictions, references=references)
File "/usr/local/lib/python3.7/site-packages/nlp/metric.py", line 242, in add_batch
batch = self.info.features.encode_batch(batch)
File "/usr/local/lib/python3.7/site-packages/nlp/features.py", line 527, in encode_batch
encoded_batch[key] = [encode_nested_example(self[key], cast_to_python_objects(obj)) for obj in column]
File "/usr/local/lib/python3.7/site-packages/nlp/features.py", line 527, in <listcomp>
encoded_batch[key] = [encode_nested_example(self[key], cast_to_python_objects(obj)) for obj in column]
File "/usr/local/lib/python3.7/site-packages/nlp/features.py", line 456, in encode_nested_example
raise ValueError("Got a string but expected a list instead: '{}'".format(obj))
``` | 519 |
https://github.com/huggingface/datasets/issues/517 | add MLDoc dataset | [
"Any updates on this?",
"This request is still an open issue waiting to be addressed by any community member, @GuillemGSubies."
] | Hi,
I am recommending that someone add MLDoc, a multilingual news topic classification dataset.
- Here's a link to the Github: https://github.com/facebookresearch/MLDoc
- and the paper: http://www.lrec-conf.org/proceedings/lrec2018/pdf/658.pdf
Looks like the dataset contains news stories in multiple languages that can be classified into four hierarchical groups: CCAT (Corporate/Industrial), ECAT (Economics), GCAT (Government/Social) and MCAT (Markets). There are 13 languages: Dutch, French, German, Chinese, Japanese, Russian, Portuguese, Spanish, Latin American Spanish, Italian, Danish, Norwegian, and Swedish | 517 |
https://github.com/huggingface/datasets/issues/514 | dataset.shuffle(keep_in_memory=True) is never allowed | [
"This seems to be fixed in #513 for the filter function, replacing `cache_file_name` with `indices_cache_file_name` in the assert. Although not for the `map()` function @thomwolf ",
"Maybe I'm a bit tired but I fail to see the issue here.\r\n\r\nSince `cache_file_name` is `None` by default, if you set `keep_in_me... | As of commit ef4aac2, the usage of the parameter `keep_in_memory=True` is never possible: `dataset.select(keep_in_memory=True)`
The commit added the lines
```python
# lines 994-996 in src/nlp/arrow_dataset.py
assert (
not keep_in_memory or cache_file_name is None
), "Please use either `keep_in_memory` or `cache_file_name` but not both."
```
This affects both `shuffle()` (since `select()` is a sub-routine of it) and `map()`, which has the same check.
I'd love to fix this myself, but I'm unsure what the intention of the assert is, given the rest of the logic in the function concerning `cache_file_name` and `keep_in_memory`. | 514 |
https://github.com/huggingface/datasets/issues/511 | dataset.shuffle() and select() resets format. Intended? | [
"Hi @vegarab yes feel free to open a discussion here.\r\n\r\nThis design choice was not very much thought about.\r\n\r\nSince `dataset.select()` (like all the method without a trailing underscore) is non-destructive and returns a new dataset it has most of its properties initialized from scratch (except the table a... | Calling `dataset.shuffle()` or `dataset.select()` on a dataset resets its format set by `dataset.set_format()`. Is this intended or an oversight?
When working on quite large datasets that require a lot of preprocessing, I find it convenient to save the processed dataset to file using `torch.save("dataset.pt")`. Later I load the dataset object using `torch.load("dataset.pt")`, which preserves the format defined before saving.
I do shuffling and selecting (for controlling dataset size) after loading the data from .pt-file, as it's convenient whenever you train multiple models with varying sizes of the same dataset.
The obvious workaround for this is to set the format again after using `dataset.select()` or `dataset.shuffle()`.
_I guess this is more of a discussion on the design philosophy of the functions. Please let me know if this is not the right channel for these kinds of discussions or if they are not wanted at all!_
#### How to reproduce:
```python
import nlp
from transformers import T5Tokenizer
tokenizer = T5Tokenizer.from_pretrained("t5-base")
def create_features(batch):
context_encoding = tokenizer.batch_encode_plus(batch["context"])
return {"input_ids": context_encoding["input_ids"]}
dataset = nlp.load_dataset("cosmos_qa", split="train")
dataset = dataset.map(create_features, batched=True)
dataset.set_format(type="torch", columns=["input_ids"])
dataset[0]
# {'input_ids': tensor([ 1804, 3525, 1602, ... 0, 0])}
dataset = dataset.shuffle()
dataset[0]
# {'id': '3Q9(...)20', 'context': "Good Old War an (...) play ?', 'answer0': 'None of the above choices .', 'answer1': 'This person likes music and likes to see the show , they will see other bands play .', (...) 'input_ids': [1804, 3525, 1602, ... , 0, 0]}
```
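The workaround mentioned above, spelled out (shuffle/select return a fresh `Dataset`, so the format just has to be re-applied):
```python
dataset = dataset.shuffle()
dataset.set_format(type="torch", columns=["input_ids"])  # re-apply after shuffle/select
dataset[0]
# {'input_ids': tensor([...])}
```
 | 511 |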
https://github.com/huggingface/datasets/issues/510 | Version of numpy to use the library | [
"Seems like this method was added in 1.17. I'll add a requirement on this.",
"Thank you so much. After upgrading the numpy library, it worked."
] | Thank you so much for your excellent work! I would like to use the nlp library in my project. While importing nlp, I am receiving the following error: `AttributeError: module 'numpy.random' has no attribute 'Generator'`. The numpy version in my project is 1.16.0. May I ask which numpy version is required for the nlp library?
Thanks in advance. | 510 |
https://github.com/huggingface/datasets/issues/509 | Converting TensorFlow dataset example | [
"Do you want to convert a dataset script to the tfds format ?\r\nIf so, we currently have a comversion script nlp/commands/convert.py but it is a conversion script that goes from tfds to nlp.\r\nI think it shouldn't be too hard to do the changes in reverse (at some manual adjustments).\r\nIf you manage to make it w... | Hi,
I want to use TensorFlow datasets with this repo. I noticed you made some conversion script;
can you give a simple example of using it?
Thanks
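For anyone landing here, a hedged sketch of one way to get TFDS data into `nlp` without the conversion script mentioned in the comments (it materializes a small split in memory, and assumes `tensorflow_datasets` is installed and that your `nlp` version provides `Dataset.from_dict`):
```python
import tensorflow_datasets as tfds
import nlp

tf_ds = tfds.load("imdb_reviews", split="train[:1%]")
examples = {"text": [], "label": []}
for ex in tfds.as_numpy(tf_ds):
    examples["text"].append(ex["text"].decode("utf-8"))
    examples["label"].append(int(ex["label"]))

hf_ds = nlp.Dataset.from_dict(examples)
```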
| 509 |
https://github.com/huggingface/datasets/issues/508 | TypeError: Receiver() takes no arguments | [
"Which version of Apache Beam do you have (can you copy your full environment info here)?",
"apache-beam==2.23.0\r\nnlp==0.4.0\r\n\r\nFor me this was resolved by running the same python script on Linux (or really WSL). ",
"Do you manage to run a dummy beam pipeline with python on windows ? \r\nYou can test a du... | I am trying to load a wikipedia data set
```
import nlp
from nlp import load_dataset
dataset = load_dataset("wikipedia", "20200501.en", split="train", cache_dir=data_path, beam_runner='DirectRunner')
#dataset = load_dataset('wikipedia', '20200501.sv', cache_dir=data_path, beam_runner='DirectRunner')
```
This fails in the apache beam runner.
```
Traceback (most recent call last):
File "D:/ML/wikiembedding/gpt2_sv.py", line 36, in <module>
dataset = load_dataset("wikipedia", "20200501.en", split="train", cache_dir=my_cache_dir, beam_runner='DirectRunner')
File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\nlp\load.py", line 548, in load_dataset
builder_instance.download_and_prepare(
File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\nlp\builder.py", line 462, in download_and_prepare
self._download_and_prepare(
File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\nlp\builder.py", line 969, in _download_and_prepare
pipeline_results = pipeline.run()
File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\apache_beam\pipeline.py", line 534, in run
return self.runner.run_pipeline(self, self._options)
....
File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\apache_beam\runners\worker\bundle_processor.py", line 218, in process_encoded
self.output(decoded_value)
File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\apache_beam\runners\worker\operations.py", line 332, in output
cython.cast(Receiver, self.receivers[output_index]).receive(windowed_value)
File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\Cython\Shadow.py", line 167, in cast
return type(*args)
TypeError: Receiver() takes no arguments
```
This is run on a Windows 10 machine with Python 3.8. I get the same error loading the Swedish Wikipedia dump. | 508 |
https://github.com/huggingface/datasets/issues/507 | Errors when I use | [
"Looks like an issue with 3.0.2 transformers version. Works fine when I use \"master\" version of transformers."
] | I tried the following example code from https://huggingface.co/deepset/roberta-base-squad2 and got errors
I am using **transformers 3.0.2**.
from transformers.pipelines import pipeline
from transformers.modeling_auto import AutoModelForQuestionAnswering
from transformers.tokenization_auto import AutoTokenizer
model_name = "deepset/roberta-base-squad2"
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Why is model conversion important?',
'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
The errors are:
res = nlp(QA_input)
File ".local/lib/python3.6/site-packages/transformers/pipelines.py", line 1316, in __call__
for s, e, score in zip(starts, ends, scores)
File ".local/lib/python3.6/site-packages/transformers/pipelines.py", line 1316, in <listcomp>
for s, e, score in zip(starts, ends, scores)
KeyError: 0
| 507 |
https://github.com/huggingface/datasets/issues/501 | Caching doesn't work for map (non-deterministic) | [
"Thanks for reporting !\r\n\r\nTo store the cache file, we compute a hash of the function given in `.map`, using our own hashing function.\r\nThe hash doesn't seem to stay the same over sessions for the tokenizer.\r\nApparently this is because of the regex at `tokenizer.pat` is not well supported by our hashing fun... | The caching functionality doesn't work reliably when tokenizing a dataset. Here's a small example to reproduce it.
```python
import nlp
import transformers
def main():
    ds = nlp.load_dataset("reddit", split="train[:500]")
    tokenizer = transformers.AutoTokenizer.from_pretrained("gpt2")

    def convert_to_features(example_batch):
        input_str = example_batch["body"]
        encodings = tokenizer(input_str, add_special_tokens=True, truncation=True)
        return encodings

    ds = ds.map(convert_to_features, batched=True)

if __name__ == "__main__":
    main()
```
Roughly 3/10 times, this example recomputes the tokenization.
Is this expected behaviour?
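In the meantime, a hedged workaround sketch (not the eventual library fix): pin the cache file explicitly so `.map()` reuses it across sessions even if the automatically computed fingerprint of the tokenizing function changes. The path below is just a placeholder.
```python
ds = ds.map(
    convert_to_features,
    batched=True,
    cache_file_name="./reddit_tokenized.arrow",  # hypothetical fixed cache path
)
```
 | 501 |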
https://github.com/huggingface/datasets/issues/492 | nlp.Features does not distinguish between nullable and non-nullable types in PyArrow schema | [
"In 0.4.0, the assertion in `concatenate_datasets ` is on the features, and not the schema.\r\nCould you try to update `nlp` ?\r\n\r\nAlso, since 0.4.0, you can use `dset_wikipedia.cast_(dset_books.features)` to avoid the schema cast hack.",
"Or maybe the assertion comes from elsewhere ?",
"I'm using the master... | Here's the code I'm trying to run:
```python
dset_wikipedia = nlp.load_dataset("wikipedia", "20200501.en", split="train", cache_dir=args.cache_dir)
dset_wikipedia.drop(columns=["title"])
dset_wikipedia.features.pop("title")
dset_books = nlp.load_dataset("bookcorpus", split="train", cache_dir=args.cache_dir)
dset = nlp.concatenate_datasets([dset_wikipedia, dset_books])
```
This fails because they have different schemas, despite having identical features.
```python
assert dset_wikipedia.features == dset_books.features # True
assert dset_wikipedia._data.schema == dset_books._data.schema # False
```
The Wikipedia dataset has 'text: string', while the BookCorpus dataset has 'text: string not null'. Currently I hack together a working schema match with the following line, but it would be better if this was handled in Features themselves.
```python
dset_wikipedia._data = dset_wikipedia.data.cast(dset_books._data.schema)
```
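A small standalone `pyarrow` illustration of the mismatch (not nlp-specific): two schemas with identical types compare unequal when one field is non-nullable.
```python
import pyarrow as pa

s1 = pa.schema([pa.field("text", pa.string())])                   # 'text: string'
s2 = pa.schema([pa.field("text", pa.string(), nullable=False)])   # 'text: string not null'
print(s1 == s2)  # False
```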
| 492 |
https://github.com/huggingface/datasets/issues/491 | No 0.4.0 release on GitHub | [
"I did the release on github, and updated the doc :)\r\nSorry for the delay",
"Thanks!"
] | 0.4.0 was released on PyPi, but not on GitHub. This means [the documentation](https://huggingface.co/nlp/) is still displaying from 0.3.0, and that there's no tag to easily clone the 0.4.0 version of the repo. | 491 |
https://github.com/huggingface/datasets/issues/490 | Loading preprocessed Wikipedia dataset requires apache_beam | [] | Running
`nlp.load_dataset("wikipedia", "20200501.en", split="train", dir="/tmp/wikipedia")`
gives an error if apache_beam is not installed, stemming from
https://github.com/huggingface/nlp/blob/38eb2413de54ee804b0be81781bd65ac4a748ced/src/nlp/builder.py#L981-L988
This succeeded without the dependency in version 0.3.0. This seems like an unnecessary dependency to process some dataset info if you're using the already-preprocessed version. Could it be removed? | 490 |
https://github.com/huggingface/datasets/issues/489 | ug | [
"whoops",
"please delete this"
] |  | 489 |
https://github.com/huggingface/datasets/issues/488 | issues with downloading datasets for wmt16 and wmt19 | [
"I found `UNv1.0.en-ru.tar.gz` here: https://conferences.unite.un.org/uncorpus/en/downloadoverview, so it can be reconstructed with:\r\n```\r\nwget -c https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-ru.tar.gz.00\r\nwget -c https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-ru.tar.... | I have encountered multiple issues while trying to:
```
import nlp
dataset = nlp.load_dataset('wmt16', 'ru-en')
metric = nlp.load_metric('wmt16')
```
1. I had to do `pip install -e ".[dev]"` on master; the currently released nlp didn't work (sorry, didn't save the error). I went back to the released version and now it worked, so it must have been some outdated dependencies that `pip install -e ".[dev]"` fixed.
2. It was downloading at 60 KB/s - almost 5 hours to get the dataset. It was downloading all pairs and not just the one I asked for.
I tried the same code with `wmt19` in parallel and it took a few seconds to download, and it only fetched data for the requested pair (but it failed too, see below).
3. My machine crashed, and when I retried I got:
```
Traceback (most recent call last):
File "./download.py", line 9, in <module>
dataset = nlp.load_dataset('wmt16', 'ru-en')
File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/load.py", line 549, in load_dataset
download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,
File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/builder.py", line 449, in download_and_prepare
with incomplete_dir(self._cache_dir) as tmp_data_dir:
File "/home/stas/anaconda3/envs/main/lib/python3.7/contextlib.py", line 112, in __enter__
return next(self.gen)
File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/builder.py", line 422, in incomplete_dir
os.makedirs(tmp_dir)
File "/home/stas/anaconda3/envs/main/lib/python3.7/os.py", line 221, in makedirs
mkdir(name, mode)
FileExistsError: [Errno 17] File exists: '/home/stas/.cache/huggingface/datasets/wmt16/ru-en/1.0.0/4d8269cdd971ed26984a9c0e4a158e0c7afc8135fac8fb8ee43ceecf38fd422d.incomplete'
```
It can't handle resumes, but it doesn't allow a fresh start either. I had to delete it manually.
4. And finally, when it downloaded the dataset, it then failed to fetch the metrics:
```
Traceback (most recent call last):
File "./download.py", line 15, in <module>
metric = nlp.load_metric('wmt16')
File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/load.py", line 442, in load_metric
module_path, hash = prepare_module(path, download_config=download_config, dataset=False)
File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/load.py", line 258, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/utils/file_utils.py", line 198, in cached_path
local_files_only=download_config.local_files_only,
File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/utils/file_utils.py", line 356, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://s3.amazonaws.com/datasets.huggingface.co/nlp/metrics/wmt16/wmt16.py
```
5. If I run the same code with `wmt19`, it fails too:
```
ConnectionError: Couldn't reach https://storage.googleapis.com/tfdataset-data/downloadataset/uncorpus/UNv1.0.en-ru.tar.gz
``` | 488 |
https://github.com/huggingface/datasets/issues/486 | Bookcorpus data contains pretokenized text | [
"Yes indeed it looks like some `'` and spaces are missing (for example in `dont` or `didnt`).\r\nDo you know if there exist some copies without this issue ?\r\nHow would you fix this issue on the current data exactly ? I can see that the data is raw text (not tokenized) so I'm not sure I understand how you would do... | It seem that the bookcoprus data downloaded through the library was pretokenized with NLTK's Treebank tokenizer, which changes the text in incompatible ways to how, for instance, BERT's wordpiece tokenizer works. For example, "didn't" becomes "did" + "n't", and double quotes are changed to `` and '' for start and end quotes, respectively.
On my own projects, I just run the data through NLTK's TreebankWordDetokenizer to reverse the tokenization (as best as possible). I think it would be beneficial to apply this transformation directly on your remote cached copy of the dataset. If you choose to do so, I would also suggest using my fork of NLTK, which fixes several bugs in their detokenizer (I've opened a pull request, but they've yet to respond): https://github.com/nltk/nltk/pull/2575
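A small sketch of that detokenization step with stock NLTK (the fork mentioned above fixes additional edge cases, so the output here is only approximate):
```python
from nltk.tokenize.treebank import TreebankWordDetokenizer

detok = TreebankWordDetokenizer()
tokens = "she did n't say `` hello '' .".split()
print(detok.detokenize(tokens))  # roughly: she didn't say "hello".
```
 | 486 |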
https://github.com/huggingface/datasets/issues/485 | PAWS dataset first item is header | [] | ```
import nlp
dataset = nlp.load_dataset('xtreme', 'PAWS-X.en')
dataset['test'][0]
```
prints the following
```
{'label': 'label', 'sentence1': 'sentence1', 'sentence2': 'sentence2'}
```
dataset['test'][0] should probably be the first item in the dataset, not just a dictionary mapping the column names to themselves. Probably just need to ignore the first row in the dataset by default or something like that. | 485 |
https://github.com/huggingface/datasets/issues/483 | rotten tomatoes movie review dataset taken down | [
"found a mirror: https://storage.googleapis.com/seldon-datasets/sentence_polarity_v1/rt-polaritydata.tar.gz",
"fixed in #484 ",
"Closing this one. Thanks again @jxmorris12 for taking care of this :)"
] | In an interesting twist of events, the individual who created the movie review dataset seems to have left Cornell, and their webpage has been removed, along with the dataset itself (http://www.cs.cornell.edu/people/pabo/movie-review-data/rt-polaritydata.tar.gz). It's not downloadable anymore. | 483 |
https://github.com/huggingface/datasets/issues/482 | Bugs : dataset.map() is frozen on ELI5 | [
"This comes from an overflow in pyarrow's array.\r\nIt is stuck inside the loop that reduces the batch size to avoid the overflow.\r\nI'll take a look",
"I created a PR to fix the issue.\r\nIt was due to an overflow check that handled badly an empty list.\r\n\r\nYou can try the changes by using \r\n```\r\n!pip in... | Hi Huggingface Team!
Thank you guys once again for this amazing repo.
I have tried to prepare ELI5 to train with T5, based on [this wonderful notebook of Suraj Patil](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb)
However, when I run `dataset.map()` on ELI5 to prepare `input_text, target_text`, `dataset.map` is **frozen** within the first few hundred examples. On the contrary, this works totally fine on SQUAD (80,000 examples). Both `nlp` versions 0.3.0 and 0.4.0 cause the frozen process, and trying various `pyarrow` versions (0.16.0 / 0.17.0 / 1.0.0) results in the same frozen process.
Reproducible code can be found on [this colab notebook ](https://colab.research.google.com/drive/14wttOTv3ky74B_c0kv5WrbgQjCF2fYQk?usp=sharing), where I also show that the same mapping function works fine on SQUAD, so the problem is likely due to ELI5 somehow.
----------------------------------------
**More Info:** instead of `map`, if I run a `for` loop and apply the function myself, there's no error and it finishes within 10 seconds (see the sketch below). However, an `nlp` dataset is immutable (I couldn't manually assign a new key-value pair to the `dataset` object).
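For concreteness, the manual loop looks roughly like this (a sketch; `convert_to_features` stands in for the same mapping function passed to `.map()`, and the column names are assumptions):
```python
# Sketch of the manual loop: apply the same function outside of dataset.map()
# and collect the results in plain Python lists (this runs fine, but the results
# cannot be assigned back onto the immutable nlp dataset).
processed = {"input_text": [], "target_text": []}
for example in dataset:
    out = convert_to_features(example)  # the same function passed to .map()
    processed["input_text"].append(out["input_text"])
    processed["target_text"].append(out["target_text"])
```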
I also notice that SQUAD texts are quite clean while ELI5 texts contain many special characters, not sure if this is the cause ? | 482 |
https://github.com/huggingface/datasets/issues/478 | Export TFRecord to GCP bucket | [
"Nevermind, I restarted my python session and it worked fine...\r\n\r\n---\r\n\r\nI had an authentification error, and I authenticated from another terminal. After that, no more error but it was not working. Restarting the sessions makes it work :)"
] | Previously, I was writing TFRecords manually to GCP bucket with : `with tf.io.TFRecordWriter('gs://my_bucket/x.tfrecord')`
Since `0.4.0` is out with the `export()` function, I tried it. But it seems TFRecords cannot be directly written to GCP bucket.
`dataset.export('local.tfrecord')` works fine,
but `dataset.export('gs://my_bucket/x.tfrecord')` does not work.
There is no error message, I just can't find the file on my bucket...
---
Looking at the code, `nlp` is using `tf.data.experimental.TFRecordWriter`, while I was using `tf.io.TFRecordWriter`.
**What's the difference between those 2 ? How can I write TFRecords files directly to GCP bucket ?**
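In the meantime, a hedged workaround sketch: serialize examples by hand and write them with `tf.io.TFRecordWriter`, which accepts `gs://` paths when TensorFlow's GCS filesystem support is available (the `input_ids` feature name is an assumption about the dataset's columns):
```python
import tensorflow as tf

def serialize_example(example):
    # Build a tf.train.Example by hand; adjust the features to match your columns.
    feature = {
        "input_ids": tf.train.Feature(int64_list=tf.train.Int64List(value=example["input_ids"])),
    }
    return tf.train.Example(features=tf.train.Features(feature=feature)).SerializeToString()

with tf.io.TFRecordWriter("gs://my_bucket/x.tfrecord") as writer:
    for example in dataset:
        writer.write(serialize_example(example))
```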
@jarednielsen @lhoestq | 478 |
https://github.com/huggingface/datasets/issues/477 | Overview.ipynb throws exceptions with nlp 0.4.0 | [
"Thanks for reporting this issue\r\n\r\nThere was a bug where numpy arrays would get returned instead of tensorflow tensors.\r\nThis is fixed on master.\r\n\r\nI tried to re-run the colab and encountered this error instead:\r\n\r\n```\r\nAttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no at... | with nlp 0.4.0, the TensorFlow example in Overview.ipynb throws the following exceptions:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-5-48907f2ad433> in <module>
----> 1 features = {x: train_tf_dataset[x].to_tensor(default_value=0, shape=[None, tokenizer.max_len]) for x in columns[:3]}
2 labels = {"output_1": train_tf_dataset["start_positions"].to_tensor(default_value=0, shape=[None, 1])}
3 labels["output_2"] = train_tf_dataset["end_positions"].to_tensor(default_value=0, shape=[None, 1])
4 tfdataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(8)
<ipython-input-5-48907f2ad433> in <dictcomp>(.0)
----> 1 features = {x: train_tf_dataset[x].to_tensor(default_value=0, shape=[None, tokenizer.max_len]) for x in columns[:3]}
2 labels = {"output_1": train_tf_dataset["start_positions"].to_tensor(default_value=0, shape=[None, 1])}
3 labels["output_2"] = train_tf_dataset["end_positions"].to_tensor(default_value=0, shape=[None, 1])
4 tfdataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(8)
AttributeError: 'numpy.ndarray' object has no attribute 'to_tensor' | 477 |
https://github.com/huggingface/datasets/issues/474 | test_load_real_dataset when config has BUILDER_CONFIGS that matter | [
"The `data_dir` parameter has been removed. Now the error is `ValueError: Config name is missing`\r\n\r\nAs mentioned in #470 I think we can have one test with the first config of BUILDER_CONFIGS, and another test that runs all of the configs in BUILDER_CONFIGS",
"This was fixed in #527 \r\n\r\nClosing this one, ... | It a dataset has custom `BUILDER_CONFIGS` with non-keyword arguments (or keyword arguments with non default values), the config is not loaded during the test and causes an error.
I think the problem is that `test_load_real_dataset` calls `load_dataset` with `data_dir=temp_data_dir` ([here](https://github.com/huggingface/nlp/blob/master/tests/test_dataset_common.py#L200)). This causes [this line](https://github.com/huggingface/nlp/blob/master/src/nlp/builder.py#L201) to always be false because `config_kwargs` is not `None`. [This line](https://github.com/huggingface/nlp/blob/master/src/nlp/builder.py#L222) will be run instead, which doesn't use `BUILDER_CONFIGS`.
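To illustrate, a config with required constructor arguments looks roughly like this (a hypothetical sketch modeled on the lince error below), so the generic code path cannot instantiate it from `config_kwargs` alone:
```python
class LinceConfig(nlp.BuilderConfig):
    # Hypothetical sketch: these three arguments have no defaults, so building a
    # config without passing them explicitly raises the TypeError shown below.
    def __init__(self, colnames, classes, label_column, **kwargs):
        super(LinceConfig, self).__init__(**kwargs)
        self.colnames = colnames
        self.classes = classes
        self.label_column = label_column
```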
For an example, you can try running the test for lince:
` RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_lince`
which yields
> E TypeError: __init__() missing 3 required positional arguments: 'colnames', 'classes', and 'label_column' | 474 |
https://github.com/huggingface/datasets/issues/469 | invalid data type 'str' at _convert_outputs in arrow_dataset.py | [
"Hi ! Did you try to set the output format to pytorch ? (or tensorflow if you're using tensorflow)\r\nIt can be done with `dataset.set_format(\"torch\", columns=columns)` (or \"tensorflow\").\r\n\r\nNote that for pytorch, string columns can't be converted to `torch.Tensor`, so you have to specify in `columns=` the... | I trying to build multi label text classifier model using Transformers lib.
I'm using Transformers with `nlp` to load the dataset, and when calling the `trainer.train()` method it throws the following error:
File "C:\***\arrow_dataset.py", line 343, in _convert_outputs
v = command(v)
TypeError: new(): invalid data type 'str'
I'm using pyarrow 1.0.0, and I have a simple custom dataset with a text column and an integer label.
Ex: Data
Text , Label #Column Header
I'm facing an Network issue, 1
I forgot my password, 2
Error StackTrace:
File "C:\**\transformers\trainer.py", line 492, in train
for step, inputs in enumerate(epoch_iterator):
File "C:\**\tqdm\std.py", line 1104, in __iter__
for obj in iterable:
File "C:\**\torch\utils\data\dataloader.py", line 345, in __next__
data = self._next_data()
File "C:\**\torch\utils\data\dataloader.py", line 385, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "C:\**\torch\utils\data\_utils\fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "C:\**\torch\utils\data\_utils\fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "C:\**\nlp\arrow_dataset.py", line 414, in __getitem__
output_all_columns=self._output_all_columns,
File "C:\**\nlp\arrow_dataset.py", line 403, in _getitem
outputs, format_type=format_type, format_columns=format_columns, output_all_columns=output_all_columns
File "C:\**\nlp\arrow_dataset.py", line 343, in _convert_outputs
v = command(v)
TypeError: new(): invalid data type 'str'
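For reference, a hedged workaround sketch based on the discussion above: restrict the torch-formatted columns to ones that can actually be converted to tensors, so string columns such as the raw text are left out (the column names are assumptions):
```python
# Only numeric columns go into the torch format; string columns like the raw
# text would otherwise be passed to torch.tensor() and raise this TypeError.
dataset.set_format(type="torch", columns=["input_ids", "attention_mask", "label"])
```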
| 469 |
https://github.com/huggingface/datasets/issues/468 | UnicodeDecodeError while loading PAN-X task of XTREME dataset | [
"Indeed. Solution 1 is the simplest.\r\n\r\nThis is actually a recurring problem.\r\nI think we should scan all the datasets with regexpr to fix the use of `open()` without encodings.\r\nAnd probably add a test in the CI to forbid using this in the future.",
"I'm happy to tackle the broader problem - will open a ... | Hi 🤗 team!
## Description of the problem
I'm running into a `UnicodeDecodeError` while trying to load the PAN-X subset the XTREME dataset:
```
---------------------------------------------------------------------------
UnicodeDecodeError Traceback (most recent call last)
<ipython-input-5-1d61f439b843> in <module>
----> 1 dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data')
/usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
528 ignore_verifications = ignore_verifications or save_infos
529 # Download/copy dataset processing script
--> 530 module_path, hash = prepare_module(path, download_config=download_config, dataset=True)
531
532 # Get dataset builder class from the processing script
/usr/local/lib/python3.6/dist-packages/nlp/load.py in prepare_module(path, download_config, dataset, force_local_path, **download_kwargs)
265
266 # Download external imports if needed
--> 267 imports = get_imports(local_path)
268 local_imports = []
269 library_imports = []
/usr/local/lib/python3.6/dist-packages/nlp/load.py in get_imports(file_path)
156 lines = []
157 with open(file_path, mode="r") as f:
--> 158 lines.extend(f.readlines())
159
160 logger.info("Checking %s for additional imports.", file_path)
/usr/lib/python3.6/encodings/ascii.py in decode(self, input, final)
24 class IncrementalDecoder(codecs.IncrementalDecoder):
25 def decode(self, input, final=False):
---> 26 return codecs.ascii_decode(input, self.errors)[0]
27
28 class StreamWriter(Codec,codecs.StreamWriter):
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 111: ordinal not in range(128)
```
## Steps to reproduce
Install from nlp's master branch
```python
pip install git+https://github.com/huggingface/nlp.git
```
then run
```python
from nlp import load_dataset
# AmazonPhotos.zip is located in data/
dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data')
```
## OS / platform details
- `nlp` version: latest from master
- Platform: Linux-4.15.0-72-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): 2.1.0 (True)
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: False
## Proposed solution
Either change [line 762](https://github.com/huggingface/nlp/blob/7ada00b1d62f94eee22a7df38c6b01e3f27194b7/datasets/xtreme/xtreme.py#L762) in `xtreme.py` to include UTF-8 encoding:
```
# old
with open(filepath) as f
# new
with open(filepath, encoding='utf-8') as f
```
or raise a warning that suggests setting the locale explicitly, e.g.
```python
import locale
locale.setlocale(locale.LC_ALL, 'C.UTF-8')
```
I have a preference for the first solution. Let me know if you agree and I'll be happy to implement the simple fix! | 468 |
https://github.com/huggingface/datasets/issues/445 | DEFAULT_TOKENIZER import error in sacrebleu | [
"This issue was resolved by #447 "
] | Latest Version 0.3.0
When loading the metric "sacrebleu" there is an import error due to the wrong path

| 445 |
https://github.com/huggingface/datasets/issues/444 | Keep loading old file even I specify a new file in load_dataset | [
"Same here !",
"This is the only fix I could come up with without touching the repo's code.\r\n```python\r\nfrom nlp.builder import FORCE_REDOWNLOAD\r\ndataset = load_dataset('csv', data_file='./a.csv', download_mode=FORCE_REDOWNLOAD, version='0.0.1')\r\n```\r\nYou'll have to change the version each time you want... | I used load a file called 'a.csv' by
```
dataset = load_dataset('csv', data_file='./a.csv')
```
And after a while, I tried to load another csv called 'b.csv'
```
dataset = load_dataset('csv', data_file='./b.csv')
```
However, the new dataset still seems to contain the old 'a.csv' data instead of loading the new csv file.
Even worse, after I load a.csv once, the load_dataset function keeps loading 'a.csv' from then on.
Is this a cache problem?
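For reference, a hedged workaround sketch based on the discussion above: force a re-download so the cached Arrow files from 'a.csv' are not reused (the version string just needs to change between runs):
```python
import nlp

dataset = nlp.load_dataset('csv', data_file='./b.csv',
                           download_mode=nlp.GenerateMode.FORCE_REDOWNLOAD, version='0.0.2')
```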
| 444 |
https://github.com/huggingface/datasets/issues/443 | Cannot unpickle saved .pt dataset with torch.save()/load() | [
"This seems to be fixed in a non-released version. \r\n\r\nInstalling nlp from source\r\n```\r\ngit clone https://github.com/huggingface/nlp\r\ncd nlp\r\npip install .\r\n```\r\nsolves the issue. "
] | Saving a formatted torch dataset to file using `torch.save()`. Loading the same file fails during unpickling:
```python
>>> import torch
>>> import nlp
>>> squad = nlp.load_dataset("squad.py", split="train")
>>> squad
Dataset(features: {'source_text': Value(dtype='string', id=None), 'target_text': Value(dtype='string', id=None)}, num_rows: 87599)
>>> squad = squad.map(create_features, batched=True)
>>> squad.set_format(type="torch", columns=["source_ids", "target_ids", "attention_mask"])
>>> torch.save(squad, "squad.pt")
>>> squad_pt = torch.load("squad.pt")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/torch/serialization.py", line 593, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/torch/serialization.py", line 773, in _legacy_load
result = unpickler.load()
File "/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/nlp/splits.py", line 493, in __setitem__
raise ValueError("Cannot add elem. Use .add() instead.")
ValueError: Cannot add elem. Use .add() instead.
```
where `create_features` is a function that tokenizes the data using `batch_encode_plus` and returns a Dict with `input_ids`, `target_ids` and `attention_mask`.
```python
def create_features(batch):
    source_text_encoding = tokenizer.batch_encode_plus(
        batch["source_text"],
        max_length=max_source_length,
        pad_to_max_length=True,
        truncation=True)
    target_text_encoding = tokenizer.batch_encode_plus(
        batch["target_text"],
        max_length=max_target_length,
        pad_to_max_length=True,
        truncation=True)
    features = {
        "source_ids": source_text_encoding["input_ids"],
        "target_ids": target_text_encoding["input_ids"],
        "attention_mask": source_text_encoding["attention_mask"]
    }
    return features
```
I found a similar issue in [issue 5267 in the huggingface/transformers repo](https://github.com/huggingface/transformers/issues/5267) which was solved by downgrading to `nlp==0.2.0`. That did not solve this problem, however. | 443 |
https://github.com/huggingface/datasets/issues/442 | [Suggestion] Glue Diagnostic Data with Labels | [] | Hello! First of all, thanks for setting up this useful project!
I've just realised you provide the [Glue Diagnostics Data](https://huggingface.co/nlp/viewer/?dataset=glue&config=ax) without labels, indicating in the `GlueConfig` that you only have a test set.
Yet, the data with labels is available, too (see also [here](https://gluebenchmark.com/diagnostics#introduction)):
https://www.dropbox.com/s/ju7d95ifb072q9f/diagnostic-full.tsv?dl=1
Have you considered incorporating it? | 442 |
https://github.com/huggingface/datasets/issues/439 | Issues: Adding a FAISS or Elastic Search index to a Dataset | [
"`DPRContextEncoder` and `DPRContextEncoderTokenizer` will be available in the next release of `transformers`.\r\n\r\nRight now you can experiment with it by installing `transformers` from the master branch.\r\nYou can also check the docs of DPR [here](https://huggingface.co/transformers/master/model_doc/dpr.html).... | It seems the DPRContextEncoder, DPRContextEncoderTokenizer cited[ in this documentation](https://huggingface.co/nlp/faiss_and_ea.html) is not implemented ? It didnot work with the standard nlp installation . Also, I couldn't find or use it with the latest nlp install from github in Colab. Is there any dependency on the latest PyArrow 1.0.0 ? Is it yet to be made generally available ? | 439 |
https://github.com/huggingface/datasets/issues/438 | New Datasets: IWSLT15+, ITTB | [
"Thanks Sam, we now have a very detailed tutorial and template on how to add a new dataset to the library. It typically take 1-2 hours to add one. Do you want to give it a try ?\r\nThe tutorial on writing a new dataset loading script is here: https://huggingface.co/nlp/add_dataset.html\r\nAnd the part on how to sha... | **Links:**
[iwslt](https://pytorchnlp.readthedocs.io/en/latest/_modules/torchnlp/datasets/iwslt.html)
Don't know if that link is up to date.
[ittb](http://www.cfilt.iitb.ac.in/iitb_parallel/)
**Motivation**: replicate mbart finetuning results (table below)

For future readers, we already have the following language pairs in the wmt namespaces:
```
wmt14: ['cs-en', 'de-en', 'fr-en', 'hi-en', 'ru-en']
wmt15: ['cs-en', 'de-en', 'fi-en', 'fr-en', 'ru-en']
wmt16: ['cs-en', 'de-en', 'fi-en', 'ro-en', 'ru-en', 'tr-en']
wmt17: ['cs-en', 'de-en', 'fi-en', 'lv-en', 'ru-en', 'tr-en', 'zh-en']
wmt18: ['cs-en', 'de-en', 'et-en', 'fi-en', 'kk-en', 'ru-en', 'tr-en', 'zh-en']
wmt19: ['cs-en', 'de-en', 'fi-en', 'gu-en', 'kk-en', 'lt-en', 'ru-en', 'zh-en', 'fr-de']
``` | 438 |
https://github.com/huggingface/datasets/issues/436 | Google Colab - load_dataset - PyArrow exception | [
"Indeed, we’ll make a new PyPi release next week to solve this. Cc @lhoestq ",
"+1! this is the reason our tests are failing at [TextAttack](https://github.com/QData/TextAttack) \r\n\r\n(Though it's worth noting if we fixed the version number of pyarrow to 0.16.0 that would fix our problem too. But in this case w... | With latest PyArrow 1.0.0 installed, I get the following exception . Restarting colab has the same issue
ImportWarning: To use `nlp`, the module `pyarrow>=0.16.0` is required, and the current version of `pyarrow` doesn't match this condition. If you are running this in a Google Colab, you should probably just restart the runtime to use the right version of `pyarrow`.
The error only goes away when I install version 0.16.0,
i.e. !pip install pyarrow==0.16.0 | 436 |
https://github.com/huggingface/datasets/issues/435 | ImportWarning for pyarrow 1.0.0 | [
"This was fixed in #434 \r\nWe'll do a release later this week to include this fix.\r\nThanks for reporting",
"I dont know if the fix was made but the problem is still present : \r\nInstaled with pip : NLP 0.3.0 // pyarrow 1.0.0 \r\nOS : archlinux with kernel zen 5.8.5",
"Yes it was fixed in `nlp>=0.4.0`\r\nYou... | The following PR raised ImportWarning at `pyarrow ==1.0.0` https://github.com/huggingface/nlp/pull/265/files | 435 |
https://github.com/huggingface/datasets/issues/433 | How to reuse functionality of a (generic) dataset? | [
"Hi @ArneBinder, we have a few \"generic\" datasets which are intended to load data files with a predefined format:\r\n- csv: https://github.com/huggingface/nlp/tree/master/datasets/csv\r\n- json: https://github.com/huggingface/nlp/tree/master/datasets/json\r\n- text: https://github.com/huggingface/nlp/tree/master/... | I have written a generic dataset for corpora created with the Brat annotation tool ([specification](https://brat.nlplab.org/standoff.html), [dataset code](https://github.com/ArneBinder/nlp/blob/brat/datasets/brat/brat.py)). Now I wonder how to use that to create specific dataset instances. What's the recommended way to reuse formats and loading functionality for datasets with a common format?
In my case, it took a bit of time to create the Brat dataset and I think others would appreciate to not have to think about that again. Also, I assume there are other formats (e.g. conll) that are widely used, so having this would really ease dataset onboarding and adoption of the library. | 433 |
https://github.com/huggingface/datasets/issues/426 | [FEATURE REQUEST] Multiprocessing with for dataset.map, dataset.filter | [
"Yes that's definitely something we plan to add ^^",
"Yes, that would be nice. We could take a look at what tensorflow `tf.data` does under the hood for instance.",
"So `tf.data.Dataset.map()` returns a `ParallelMapDataset` if `num_parallel_calls is not None` [link](https://github.com/tensorflow/tensorflow/blob... | It would be nice to be able to speed up `dataset.map` or `dataset.filter`. Perhaps this is as easy as sharding the dataset sending each shard to a process/thread/dask pool and using the new `nlp.concatenate_dataset()` function to join them all together? | 426 |
https://github.com/huggingface/datasets/issues/425 | Correct data structure for PAN-X task in XTREME dataset? | [
"Thanks for noticing ! This looks more reasonable indeed.\r\nFeel free to open a PR",
"Hi @lhoestq \r\nI made the proposed changes to the `xtreme.py` script. I noticed that I also need to change the schema in the `dataset_infos.json` file. More specifically the `\"features\"` part of the PAN-X.LANG dataset:\r\n\... | Hi 🤗 team!
## Description of the problem
Thanks to the fix from #416 I am now able to load the NER task in the XTREME dataset as follows:
```python
from nlp import load_dataset
# AmazonPhotos.zip is located in data/
dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data')
dataset_train = dataset['train']
```
However, I am not sure that `load_dataset()` is returning the correct data structure for NER.
Currently, every row in `dataset_train` is of the form
```python
{'word': str, 'ner_tag': str, 'lang': str}
```
but I think we actually want something like
```python
{'words': List[str], 'ner_tags': List[str], 'langs': List[str]}
```
so that each row corresponds to a _sequence_ of words associated with each example. With the current data structure I do not think it is possible to transform `dataset_train` into a form suitable for training because we do not know the boundaries between examples.
Indeed, [this line](https://github.com/google-research/xtreme/blob/522434d1aece34131d997a97ce7e9242a51a688a/third_party/utils_tag.py#L58) in the XTREME repo, processes the texts as lists of sentences, tags, and languages.
## Proposed solution
Replace
```python
with open(filepath) as f:
    data = csv.reader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
    for id_, row in enumerate(data):
        if row:
            lang, word = row[0].split(":")[0], row[0].split(":")[1]
            tag = row[1]
            yield id_, {"word": word, "ner_tag": tag, "lang": lang}
```
from [these lines](https://github.com/huggingface/nlp/blob/ce7d3a1d630b78fe27188d1706f3ea980e8eec43/datasets/xtreme/xtreme.py#L881-L887) of the `_generate_examples()` function with something like
```python
guid_index = 1
with open(filepath, encoding="utf-8") as f:
    words = []
    ner_tags = []
    langs = []
    for line in f:
        if line.startswith("-DOCSTART-") or line == "" or line == "\n":
            if words:
                yield guid_index, {"words": words, "ner_tags": ner_tags, "langs": langs}
                guid_index += 1
                words = []
                ner_tags = []
                langs = []
        else:
            # pan-x data is tab separated
            splits = line.split("\t")
            # strip out en: prefix
            langs.append(splits[0][:2])
            words.append(splits[0][3:])
            if len(splits) > 1:
                ner_tags.append(splits[-1].replace("\n", ""))
            else:
                # examples have no label in test set
                ner_tags.append("O")
```
If you agree, me or @lvwerra would be happy to implement this and create a PR. | 425 |
https://github.com/huggingface/datasets/issues/418 | Addition of google drive links to dl_manager | [
"I think the problem is the way you wrote your urls. Try the following structure to see `https://drive.google.com/uc?export=download&id=your_file_id` . \r\n\r\n@lhoestq ",
"Oh sorry, I think `_get_drive_url` is doing that. \r\n\r\nHave you tried to use `dl_manager.download_and_extract(_get_drive_url(_TRAIN_URL)`... | Hello there, I followed the template to create a download script of my own, which works fine for me, although I had to shun the dl_manager because it was downloading nothing from the drive links and instead use gdown.
This is the script for me:
```python
class EmoConfig(nlp.BuilderConfig):
    """BuilderConfig for SQUAD."""

    def __init__(self, **kwargs):
        """BuilderConfig for EmoContext.
        Args:
          **kwargs: keyword arguments forwarded to super.
        """
        super(EmoConfig, self).__init__(**kwargs)

_TEST_URL = "https://drive.google.com/file/d/1Hn5ytHSSoGOC4sjm3wYy0Dh0oY_oXBbb/view?usp=sharing"
_TRAIN_URL = "https://drive.google.com/file/d/12Uz59TYg_NtxOy7SXraYeXPMRT7oaO7X/view?usp=sharing"

class EmoDataset(nlp.GeneratorBasedBuilder):
    """ SemEval-2019 Task 3: EmoContext Contextual Emotion Detection in Text. Version 1.0.0 """

    VERSION = nlp.Version("1.0.0")
    force = False

    def _info(self):
        return nlp.DatasetInfo(
            description=_DESCRIPTION,
            features=nlp.Features(
                {
                    "text": nlp.Value("string"),
                    "label": nlp.features.ClassLabel(names=["others", "happy", "sad", "angry"]),
                }
            ),
            supervised_keys=None,
            homepage="https://www.aclweb.org/anthology/S19-2005/",
            citation=_CITATION,
        )

    def _get_drive_url(self, url):
        base_url = 'https://drive.google.com/uc?id='
        split_url = url.split('/')
        return base_url + split_url[5]

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        if not os.path.exists("emo-train.json") or self.force:
            gdown.download(self._get_drive_url(_TRAIN_URL), "emo-train.json", quiet=True)
        if not os.path.exists("emo-test.json") or self.force:
            gdown.download(self._get_drive_url(_TEST_URL), "emo-test.json", quiet=True)
        return [
            nlp.SplitGenerator(
                name=nlp.Split.TRAIN,
                gen_kwargs={
                    "filepath": "emo-train.json",
                    "split": "train",
                },
            ),
            nlp.SplitGenerator(
                name=nlp.Split.TEST,
                gen_kwargs={"filepath": "emo-test.json", "split": "test"},
            ),
        ]

    def _generate_examples(self, filepath, split):
        """ Yields examples. """
        with open(filepath, 'rb') as f:
            data = json.load(f)
            for id_, text, label in zip(data["text"].keys(), data["text"].values(), data["Label"].values()):
                yield id_, {
                    "text": text,
                    "label": label,
                }
```
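For reference, a hedged sketch of the alternative suggested in the discussion above: rewrite the share link into a direct-download URL and pass it to the default dl_manager instead of gdown (not verified for large files that trigger Google Drive's confirmation page):
```python
def _get_drive_url(self, url):
    # e.g. https://drive.google.com/file/d/<id>/view?usp=sharing -> uc?export=download&id=<id>
    return "https://drive.google.com/uc?export=download&id=" + url.split("/")[5]

def _split_generators(self, dl_manager):
    train_path = dl_manager.download_and_extract(self._get_drive_url(_TRAIN_URL))
    test_path = dl_manager.download_and_extract(self._get_drive_url(_TEST_URL))
    return [
        nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={"filepath": train_path, "split": "train"}),
        nlp.SplitGenerator(name=nlp.Split.TEST, gen_kwargs={"filepath": test_path, "split": "test"}),
    ]
```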
Can someone help me in adding gdrive links to be used with default dl_manager or adding gdown as another dl_manager, because I'd like to add this dataset to nlp's official database. | 418 |
https://github.com/huggingface/datasets/issues/415 | Something is wrong with WMT 19 kk-en dataset | [] | The translation in the `train` set does not look right:
```
>>>import nlp
>>>from nlp import load_dataset
>>>dataset = load_dataset('wmt19', 'kk-en')
>>>dataset["train"]["translation"][0]
{'kk': 'Trumpian Uncertainty', 'en': 'Трамптық белгісіздік'}
>>>dataset["validation"]["translation"][0]
{'kk': 'Ақша-несие саясатының сценарийін қайта жазсақ', 'en': 'Rewriting the Monetary-Policy Script'}
``` | 415 |
https://github.com/huggingface/datasets/issues/414 | from_dict delete? | [
"`from_dict` was added in #350 that was unfortunately not included in the 0.3.0 release. It's going to be included in the next release that will be out pretty soon though.\r\nRight now if you want to use `from_dict` you have to install the package from the master branch\r\n```\r\npip install git+https://github.com/... | AttributeError: type object 'Dataset' has no attribute 'from_dict' | 414 |
https://github.com/huggingface/datasets/issues/413 | Is there a way to download only NQ dev? | [
"Unfortunately it's not possible to download only the dev set of NQ.\r\n\r\nI think we could add a way to download only the test set by adding a custom configuration to the processing script though.",
"Ok, got it. I think this could be a valuable feature - especially for large datasets like NQ, but potentially al... | Maybe I missed that in the docs, but is there a way to only download the dev set of natural questions (~1 GB)?
As we want to benchmark QA models on different datasets, I would like to avoid downloading the 41GB of training data.
I tried
```
dataset = nlp.load_dataset('natural_questions', split="validation", beam_runner="DirectRunner")
```
But this still triggered a big download of presumably the whole dataset. Is there any way of doing this or are splits / slicing options only available after downloading?
Thanks! | 413 |
https://github.com/huggingface/datasets/issues/412 | Unable to load XTREME dataset from disk | [
"Hi @lewtun, you have to provide the full path to the downloaded file for example `/home/lewtum/..`",
"I was able to repro. Opening a PR to fix that.\r\nThanks for reporting this issue !",
"Thanks for the rapid fix @lhoestq!"
] | Hi 🤗 team!
## Description of the problem
Following the [docs](https://huggingface.co/nlp/loading_datasets.html?highlight=xtreme#manually-downloading-files) I'm trying to load the `PAN-X.fr` dataset from the [XTREME](https://github.com/google-research/xtreme) benchmark.
I have manually downloaded the `AmazonPhotos.zip` file from [here](https://www.amazon.com/clouddrive/share/d3KGCRCIYwhKJF0H3eWA26hjg2ZCRhjpEQtDL70FSBN?_encoding=UTF8&%2AVersion%2A=1&%2Aentries%2A=0&mgh=1) and am running into a `FileNotFoundError` when I point to the location of the dataset.
As far as I can tell, the problem is that `AmazonPhotos.zip` decompresses to `panx_dataset` and `load_dataset()` is not looking in the correct path:
```
# path where load_dataset is looking for fr.tar.gz
/root/.cache/huggingface/datasets/9b8c4f1578e45cb2539332c79738beb3b54afbcd842b079cabfd79e3ed6704f6/
# path where it actually exists
/root/.cache/huggingface/datasets/9b8c4f1578e45cb2539332c79738beb3b54afbcd842b079cabfd79e3ed6704f6/panx_dataset/
```
## Steps to reproduce the problem
1. Manually download the XTREME benchmark from [here](https://www.amazon.com/clouddrive/share/d3KGCRCIYwhKJF0H3eWA26hjg2ZCRhjpEQtDL70FSBN?_encoding=UTF8&%2AVersion%2A=1&%2Aentries%2A=0&mgh=1)
2. Run the following code snippet
```python
from nlp import load_dataset
# AmazonPhotos.zip is in the root of the folder
dataset = load_dataset("xtreme", "PAN-X.fr", data_dir='./')
```
3. Here is the stack trace
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
<ipython-input-4-26786bb5fa93> in <module>
----> 1 dataset = load_dataset("xtreme", "PAN-X.fr", data_dir='./')
/usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
522 download_mode=download_mode,
523 ignore_verifications=ignore_verifications,
--> 524 save_infos=save_infos,
525 )
526
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
430 verify_infos = not save_infos and not ignore_verifications
431 self._download_and_prepare(
--> 432 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
433 )
434 # Sync info
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
464 split_dict = SplitDict(dataset_name=self.name)
465 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 466 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
467 # Checksums verification
468 if verify_infos:
/usr/local/lib/python3.6/dist-packages/nlp/datasets/xtreme/b8c2ed3583a7a7ac60b503576dfed3271ac86757628897e945bd329c43b8a746/xtreme.py in _split_generators(self, dl_manager)
725 panx_dl_dir = dl_manager.extract(panx_path)
726 lang = self.config.name.split(".")[1]
--> 727 lang_folder = dl_manager.extract(os.path.join(panx_dl_dir, lang + ".tar.gz"))
728 return [
729 nlp.SplitGenerator(
/usr/local/lib/python3.6/dist-packages/nlp/utils/download_manager.py in extract(self, path_or_paths)
196 """
197 return map_nested(
--> 198 lambda path: cached_path(path, extract_compressed_file=True, force_extract=False), path_or_paths,
199 )
200
/usr/local/lib/python3.6/dist-packages/nlp/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_tuple)
170 return tuple(mapped)
171 # Singleton
--> 172 return function(data_struct)
173
174
/usr/local/lib/python3.6/dist-packages/nlp/utils/download_manager.py in <lambda>(path)
196 """
197 return map_nested(
--> 198 lambda path: cached_path(path, extract_compressed_file=True, force_extract=False), path_or_paths,
199 )
200
/usr/local/lib/python3.6/dist-packages/nlp/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)
203 elif urlparse(url_or_filename).scheme == "":
204 # File, but it doesn't exist.
--> 205 raise FileNotFoundError("Local file {} doesn't exist".format(url_or_filename))
206 else:
207 # Something unknown
FileNotFoundError: Local file /root/.cache/huggingface/datasets/9b8c4f1578e45cb2539332c79738beb3b54afbcd842b079cabfd79e3ed6704f6/fr.tar.gz doesn't exist
```
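For reference, the suggestion from the discussion above is to pass the full, absolute path to the folder that actually contains the downloaded archive rather than a relative one (a hedged sketch; the exact path is an assumption):
```python
from nlp import load_dataset

# Absolute path to the directory holding AmazonPhotos.zip
dataset = load_dataset("xtreme", "PAN-X.fr", data_dir="/absolute/path/to/folder")
```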
## OS and hardware
```
- `nlp` version: 0.3.0
- Platform: Linux-4.15.0-72-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): 2.1.0 (True)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
``` | 412 |
https://github.com/huggingface/datasets/issues/409 | train_test_split error: 'dict' object has no attribute 'deepcopy' | [
"It was fixed in 2ddd18d139d3047c9c3abe96e1e7d05bb360132c.\r\nCould you pull the latest changes from master @morganmcg1 ?",
"Thanks @lhoestq, works fine now!"
] | `train_test_split` is giving me an error when I try and call it:
`'dict' object has no attribute 'deepcopy'`
## To reproduce
```
dataset = load_dataset('glue', 'mrpc', split='train')
dataset = dataset.train_test_split(test_size=0.2)
```
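Until the fix lands, a hedged workaround sketch is to do the split manually with `shuffle` + `select` (assuming those methods behave as documented in this release):
```python
# Manual 80/20 split as a stand-in for train_test_split
dataset = dataset.shuffle(seed=42)
n_test = int(0.2 * len(dataset))
test_dataset = dataset.select(list(range(n_test)))
train_dataset = dataset.select(list(range(n_test, len(dataset))))
```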
## Full Stacktrace
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-12-feb740dbec9a> in <module>
1 dataset = load_dataset('glue', 'mrpc', split='train')
----> 2 dataset = dataset.train_test_split(test_size=0.2)
~/anaconda3/envs/fastai2_me/lib/python3.7/site-packages/nlp/arrow_dataset.py in train_test_split(self, test_size, train_size, shuffle, seed, generator, keep_in_memory, load_from_cache_file, train_cache_file_name, test_cache_file_name, writer_batch_size)
1032 "writer_batch_size": writer_batch_size,
1033 }
-> 1034 train_kwargs = cache_kwargs.deepcopy()
1035 train_kwargs["split"] = "train"
1036 test_kwargs = cache_kwargs.deepcopy()
AttributeError: 'dict' object has no attribute 'deepcopy'
``` | 409 |
https://github.com/huggingface/datasets/issues/407 | MissingBeamOptions for Wikipedia 20200501.en | [
"Fixed. Could you try again @mitchellgordon95 ?\r\nIt was due a file not being updated on S3.\r\n\r\nWe need to make sure all the datasets scripts get updated properly @julien-c ",
"Works for me! Thanks.",
"I found the same issue with almost any language other than English. (For English, it works). Will someone... | There may or may not be a regression for the pre-processed Wikipedia dataset. This was working fine 10 commits ago (without having Apache Beam available):
```
nlp.load_dataset('wikipedia', "20200501.en", split='train')
```
And now, having pulled master, I get:
```
Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, total: 34.06 GiB) to /home/hltcoe/mgordon/.cache/huggingface/datasets/wikipedia/20200501.en/1.0.0/76b0b2747b679bb0ee7a1621e50e5a6378477add0c662668a324a5bc07d516dd...
Traceback (most recent call last):
File "scripts/download.py", line 11, in <module>
fire.Fire(download_pretrain)
File "/home/hltcoe/mgordon/.conda/envs/huggingface/lib/python3.6/site-packages/fire/core.py", line 138, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/home/hltcoe/mgordon/.conda/envs/huggingface/lib/python3.6/site-packages/fire/core.py", line 468, in _Fire
target=component.__name__)
File "/home/hltcoe/mgordon/.conda/envs/huggingface/lib/python3.6/site-packages/fire/core.py", line 672, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "scripts/download.py", line 6, in download_pretrain
nlp.load_dataset('wikipedia', "20200501.en", split='train')
File "/exp/mgordon/nlp/src/nlp/load.py", line 534, in load_dataset
save_infos=save_infos,
File "/exp/mgordon/nlp/src/nlp/builder.py", line 460, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/exp/mgordon/nlp/src/nlp/builder.py", line 870, in _download_and_prepare
"\n\t`{}`".format(usage_example)
nlp.builder.MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, S
park, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/
If you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory).
Example of usage:
`load_dataset('wikipedia', '20200501.en', beam_runner='DirectRunner')`
``` | 407 |
https://github.com/huggingface/datasets/issues/406 | Faster Shuffling? | [
"I think the slowness here probably come from the fact that we are copying from and to python.\r\n\r\n@lhoestq for all the `select`-based methods I think we should stay in Arrow format and update the writer so that it can accept Arrow tables or batches as well. What do you think?",
"> @lhoestq for all the `select... | Consider shuffling bookcorpus:
```
dataset = nlp.load_dataset('bookcorpus', split='train')
dataset.shuffle()
```
According to tqdm, this will take around 2.5 hours on my machine to complete (even with the faster version of select from #405). I've also tried with `keep_in_memory=True` and `writer_batch_size=1000`.
But I can also just write the lines to a text file:
```
batch_size = 100000
with open('tmp.txt', 'w+') as out_f:
for i in tqdm(range(0, len(dataset), batch_size)):
batch = dataset[i:i+batch_size]['text']
print("\n".join(batch), file=out_f)
```
Which completes in a couple minutes, followed by `shuf tmp.txt > tmp2.txt` which completes in under a minute. And finally,
```
dataset = nlp.load_dataset('text', data_files='tmp2.txt')
```
Which completes in under 10 minutes. I read up on Apache Arrow this morning, and it seems like the columnar data format is not especially well-suited to shuffling rows, since moving items around requires a lot of book-keeping.
Is shuffle inherently slow, or am I just using it wrong? And if it is slow, would it make sense to try converting the data to a row-based format on disk and then shuffling? (Instead of calling select with a random permutation, as is currently done.) | 406 |
https://github.com/huggingface/datasets/issues/395 | Memory issue when doing select | [] | As noticed in #389, the following code loads the entire wikipedia in memory.
```python
import nlp
w = nlp.load_dataset("wikipedia", "20200501.en", split="train")
w.select([0])
```
This is caused by [this line](https://github.com/huggingface/nlp/blob/master/src/nlp/arrow_dataset.py#L626) for some reason, that tries to serialize the function with all the wikipedia data with it.
It's not the case with `.map` or `.filter`.
However functions that are based on `.select` like `.shuffle`, `.shard`, `.train_test_split`, `.sort` are affected.
| 395 |
https://github.com/huggingface/datasets/issues/388 | 🐛 [Dataset] Cannot download wmt14, wmt15 and wmt17 | [
"similar slow download speed here for nlp.load_dataset('wmt14', 'fr-en')\r\n`\r\nDownloading: 100%|██████████████████████████████████████████████████████████| 658M/658M [1:00:42<00:00, 181kB/s]\r\nDownloading: 100%|██████████████████████████████████████████████████████████| 918M/918M [1:39:38<00:00, 154kB/s]\r\nDow... | 1. I try downloading `wmt14`, `wmt15`, `wmt17`, `wmt19` with the following code:
```
nlp.load_dataset('wmt14','de-en')
nlp.load_dataset('wmt15','de-en')
nlp.load_dataset('wmt17','de-en')
nlp.load_dataset('wmt19','de-en')
```
The code runs but the download speed is **extremely slow**, the same behaviour is not observed on `wmt16` and `wmt18`
2. When trying to download `wmt17 zh-en`, I got the following error:
> ConnectionError: Couldn't reach https://storage.googleapis.com/tfdataset-data/downloadataset/uncorpus/UNv1.0.en-zh.tar.gz | 388 |
https://github.com/huggingface/datasets/issues/387 | Conversion through to_pandas output numpy arrays for lists instead of python objects | [
"To convert from arrow type we have three options: to_numpy, to_pandas and to_pydict/to_pylist.\r\n\r\n- to_numpy and to_pandas return numpy arrays instead of lists but are very fast.\r\n- to_pydict/to_pylist can be 100x slower and become the bottleneck for reading data, but at least they return lists.\r\n\r\nMaybe... | In a related question, the conversion through to_pandas output numpy arrays for the lists instead of python objects.
Here is an example:
```python
>>> dataset._data.slice(key, 1).to_pandas().to_dict("list")
{'sentence1': ['Amrozi accused his brother , whom he called " the witness " , of deliberately distorting his evidence .'], 'sentence2': ['Referring to him as only " the witness " , Amrozi accused his brother of deliberately distorting his evidence .'], 'label': [1], 'idx': [0], 'input_ids': [array([ 101, 7277, 2180, 5303, 4806, 1117, 1711, 117, 2292,
1119, 1270, 107, 1103, 7737, 107, 117, 1104, 9938,
4267, 12223, 21811, 1117, 2554, 119, 102])], 'token_type_ids': [array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0])], 'attention_mask': [array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1])]}
>>> type(dataset._data.slice(key, 1).to_pandas().to_dict("list")['input_ids'][0])
<class 'numpy.ndarray'>
>>> dataset._data.slice(key, 1).to_pydict()
{'sentence1': ['Amrozi accused his brother , whom he called " the witness " , of deliberately distorting his evidence .'], 'sentence2': ['Referring to him as only " the witness " , Amrozi accused his brother of deliberately distorting his evidence .'], 'label': [1], 'idx': [0], 'input_ids': [[101, 7277, 2180, 5303, 4806, 1117, 1711, 117, 2292, 1119, 1270, 107, 1103, 7737, 107, 117, 1104, 9938, 4267, 12223, 21811, 1117, 2554, 119, 102]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]}
``` | 387 |
https://github.com/huggingface/datasets/issues/382 | 1080 | [] | 382 | |
https://github.com/huggingface/datasets/issues/381 | NLp | [] | 381 | |
https://github.com/huggingface/datasets/issues/378 | [dataset] Structure of MLQA seems unecessary nested | [
"Same for the RACE dataset: https://github.com/huggingface/nlp/blob/master/datasets/race/race.py\r\n\r\nShould we scan all the datasets to remove this pattern of un-necessary nesting?",
"You're right, I think we don't need to use the nested dictionary. \r\n"
] | The features of the MLQA dataset comprise several nested dictionaries with a single element inside (for `questions` and `ids`): https://github.com/huggingface/nlp/blob/master/datasets/mlqa/mlqa.py#L90-L97
Should we keep this @mariamabarham @patrickvonplaten? Was this added for compatibility with tfds?
```python
features=nlp.Features(
{
"context": nlp.Value("string"),
"questions": nlp.features.Sequence({"question": nlp.Value("string")}),
"answers": nlp.features.Sequence(
{"text": nlp.Value("string"), "answer_start": nlp.Value("int32"),}
),
"ids": nlp.features.Sequence({"idx": nlp.Value("string")})
``` | 378 |
https://github.com/huggingface/datasets/issues/377 | Iyy!!! | [] | 377 | |
https://github.com/huggingface/datasets/issues/376 | to_pandas conversion doesn't always work | [
"**Edit**: other topic previously in this message moved to a new issue: https://github.com/huggingface/nlp/issues/387",
"Could you try to update pyarrow to >=0.17.0 ? It should fix the `to_pandas` bug\r\n\r\nAlso I'm not sure that structures like list<struct> are fully supported in the lib (none of the datasets u... | For some complex nested types, the conversion from Arrow to python dict through pandas doesn't seem to be possible.
Here is an example using the official SQUAD v2 JSON file.
This example was found while investigating #373.
```python
>>> squad = load_dataset('json', data_files={nlp.Split.TRAIN: ["./train-v2.0.json"]}, download_mode=nlp.GenerateMode.FORCE_REDOWNLOAD, version="1.0.0", field='data')
>>> squad['train']
Dataset(schema: {'title': 'string', 'paragraphs': 'list<item: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string>>'}, num_rows: 442)
>>> squad['train'][0]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/thomwolf/Documents/GitHub/datasets/src/nlp/arrow_dataset.py", line 589, in __getitem__
format_kwargs=self._format_kwargs,
File "/Users/thomwolf/Documents/GitHub/datasets/src/nlp/arrow_dataset.py", line 529, in _getitem
outputs = self._unnest(self._data.slice(key, 1).to_pandas().to_dict("list"))
File "pyarrow/array.pxi", line 559, in pyarrow.lib._PandasConvertible.to_pandas
File "pyarrow/table.pxi", line 1367, in pyarrow.lib.Table._to_pandas
File "/Users/thomwolf/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/pandas_compat.py", line 766, in table_to_blockmanager
blocks = _table_to_blocks(options, table, categories, ext_columns_dtypes)
File "/Users/thomwolf/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/pandas_compat.py", line 1101, in _table_to_blocks
list(extension_columns.keys()))
File "pyarrow/table.pxi", line 881, in pyarrow.lib.table_to_blocks
File "pyarrow/error.pxi", line 105, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Not implemented type for Arrow list to pandas: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string>
```
cc @lhoestq would we have a way to detect this from the schema maybe?
Here is the schema for this pretty complex JSON:
```python
>>> squad['train'].schema
title: string
paragraphs: list<item: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string>>
child 0, item: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string>
child 0, qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>
child 0, item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>
child 0, question: string
child 1, id: string
child 2, answers: list<item: struct<text: string, answer_start: int64>>
child 0, item: struct<text: string, answer_start: int64>
child 0, text: string
child 1, answer_start: int64
child 3, is_impossible: bool
child 4, plausible_answers: list<item: struct<text: string, answer_start: int64>>
child 0, item: struct<text: string, answer_start: int64>
child 0, text: string
child 1, answer_start: int64
child 1, context: string
``` | 376 |
https://github.com/huggingface/datasets/issues/375 | TypeError when computing bertscore | [
"I am not able to reproduce this issue on my side.\r\nCould you give us more details about the inputs you used ?\r\n\r\nI do get another error though:\r\n```\r\n~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/bert_score/utils.py in bert_cos_score_idf(model, refs, hyps, tokenizer, idf_dict, verbose, batch_siz... | Hi,
I installed nlp 0.3.0 via pip, and my python version is 3.7.
When I tried to compute bertscore with the code:
```
import nlp
bertscore = nlp.load_metric('bertscore')
# load hyps and refs
...
print (bertscore.compute(hyps, refs, lang='en'))
```
I got the following error.
```
Traceback (most recent call last):
File "bert_score_evaluate.py", line 16, in <module>
print (bertscore.compute(hyps, refs, lang='en'))
File "/home/willywsm/anaconda3/envs/torcher/lib/python3.7/site-packages/nlp/metric.py", line 200, in compute
output = self._compute(predictions=predictions, references=references, **metrics_kwargs)
File "/home/willywsm/anaconda3/envs/torcher/lib/python3.7/site-packages/nlp/metrics/bertscore/fb176889831bf0ce995ed197edc94b2e9a83f647a869bb8c9477dbb2d04d0f08/bertscore.py", line 105, in _compute
hashcode = bert_score.utils.get_hash(model_type, num_layers, idf, rescale_with_baseline)
TypeError: get_hash() takes 3 positional arguments but 4 were given
```
It seems like there is something wrong with get_hash() function? | 375 |
https://github.com/huggingface/datasets/issues/373 | Segmentation fault when loading local JSON dataset as of #372 | [
"I've seen this sort of thing before -- it might help to delete the directory -- I've also noticed that there is an error with the json Dataloader for any data I've tried to load. I've replaced it with this, which skips over the data feature population step:\r\n\r\n\r\n```python\r\nimport os\r\n\r\nimport pyarrow.j... | The last issue was closed (#369) once the #372 update was merged. However, I'm still not able to load a SQuAD formatted JSON file. Instead of the previously recorded pyarrow error, I now get a segmentation fault.
```
dataset = nlp.load_dataset('json', data_files={nlp.Split.TRAIN: ["./datasets/train-v2.0.json"]}, field='data')
```
causes
```
Using custom data configuration default
Downloading and preparing dataset json/default (download: Unknown size, generated: Unknown size, total: Unknown size) to /home/XXX/.cache/huggingface/datasets/json/default/0.0.0...
0 tables [00:00, ? tables/s]Segmentation fault (core dumped)
```
where `./datasets/train-v2.0.json` is downloaded directly from https://rajpurkar.github.io/SQuAD-explorer/.
This is consistent with other SQuAD-formatted JSON files.
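One hedged workaround I can think of (an assumption, not verified against the library): flatten the nested SQuAD file into line-delimited JSON first, since the pyarrow JSON reader behind the `json` script copes better with one record per line:
```python
import json

# Convert the single large JSON object into one article per line
with open("./datasets/train-v2.0.json") as f:
    articles = json.load(f)["data"]

with open("./datasets/train-v2.0.jsonl", "w") as f:
    for article in articles:
        f.write(json.dumps(article) + "\n")

# then: nlp.load_dataset('json', data_files={nlp.Split.TRAIN: ["./datasets/train-v2.0.jsonl"]})
```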
When attempting to load the dataset again, I get the following:
```
Using custom data configuration default
Traceback (most recent call last):
File "dataloader.py", line 6, in <module>
'json', data_files={nlp.Split.TRAIN: ["./datasets/train-v2.0.json"]}, field='data')
File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/load.py", line 524, in load_dataset
save_infos=save_infos,
File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 382, in download_and_prepare
with incomplete_dir(self._cache_dir) as tmp_data_dir:
File "/home/XXX/.conda/envs/torch/lib/python3.7/contextlib.py", line 112, in __enter__
return next(self.gen)
File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 368, in incomplete_dir
os.makedirs(tmp_dir)
File "/home/XXX/.conda/envs/torch/lib/python3.7/os.py", line 223, in makedirs
mkdir(name, mode)
FileExistsError: [Errno 17] File exists: '/home/XXX/.cache/huggingface/datasets/json/default/0.0.0.incomplete'
```
(Not sure if you wanted this in the previous issue #369 or not as it was closed.) | 373 |
https://github.com/huggingface/datasets/issues/369 | can't load local dataset: pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries | [
"I am able to reproduce this with the official SQuAD `train-v2.0.json` file downloaded directly from https://rajpurkar.github.io/SQuAD-explorer/",
"I am facing this issue in transformers library 3.0.2 while reading a csv using datasets.\r\nIs this fixed in latest version? \r\nI updated the latest version 4.0.1 bu... | Trying to load a local SQuAD-formatted dataset (from a JSON file, about 60MB):
```
dataset = nlp.load_dataset(path='json', data_files={nlp.Split.TRAIN: ["./path/to/file.json"]})
```
causes
```
Traceback (most recent call last):
File "dataloader.py", line 9, in <module>
["./path/to/file.json"]})
File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/load.py", line 524, in load_dataset
save_infos=save_infos,
File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 432, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 483, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 719, in _prepare_split
for key, table in utils.tqdm(generator, unit=" tables", leave=False):
File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/tqdm/std.py", line 1129, in __iter__
for obj in iterable:
File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/datasets/json/88c1bc5c68489f7eda549ed05a5a738527c613b3e7a4ee3524d9d233353a949b/json.py", line 53, in _generate_tables
file, read_options=self.config.pa_read_options, parse_options=self.config.pa_parse_options,
File "pyarrow/_json.pyx", line 191, in pyarrow._json.read_json
File "pyarrow/error.pxi", line 85, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)
```
I haven't been able to find any reports of this specific pyarrow error here or elsewhere. | 369 |
https://github.com/huggingface/datasets/issues/368 | load_metric can't acquire lock anymore | [
"I found that, in the same process (or the same interactive session), if I do\r\n\r\nimport nlp\r\n\r\nm1 = nlp.load_metric('glue', 'mrpc')\r\nm2 = nlp.load_metric('glue', 'sst2')\r\n\r\nI will get the same error `ValueError: Cannot acquire lock, caching file might be used by another process, you should setup a uni... | I can't load metric (glue) anymore after an error in a previous run. I even removed the whole cache folder `/home/XXX/.cache/huggingface/`, and the issue persisted. What are the steps to fix this?
Traceback (most recent call last):
File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/nlp/metric.py", line 101, in __init__
self.filelock.acquire(timeout=1)
File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/filelock.py", line 278, in acquire
raise Timeout(self._lock_file)
filelock.Timeout: The file lock '/home/XXX/.cache/huggingface/metrics/glue/1.0.0/1-glue-0.arrow.lock' could not be acquired.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "examples_huggingface_nlp.py", line 268, in <module>
main()
File "examples_huggingface_nlp.py", line 242, in main
dataset, metric = get_dataset_metric(glue_task)
File "examples_huggingface_nlp.py", line 77, in get_dataset_metric
metric = nlp.load_metric('glue', glue_config, experiment_id=1)
File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/nlp/load.py", line 440, in load_metric
**metric_init_kwargs,
File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/nlp/metric.py", line 104, in __init__
"Cannot acquire lock, caching file might be used by another process, "
ValueError: Cannot acquire lock, caching file might be used by another process, you should setup a unique 'experiment_id' for this run.
I0709 15:54:41.008838 139854118430464 filelock.py:318] Lock 139852058030936 released on /home/XXX/.cache/huggingface/metrics/glue/1.0.0/1-glue-0.arrow.lock
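For reference, the error message itself points at a workaround: give each metric instantiation its own `experiment_id` so the cache files and their locks don't collide (a sketch; the id values are arbitrary):
```python
import nlp

m1 = nlp.load_metric('glue', 'mrpc', experiment_id='run-mrpc')
m2 = nlp.load_metric('glue', 'sst2', experiment_id='run-sst2')
```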
| 368 |
https://github.com/huggingface/datasets/issues/365 | How to augment data ? | [
"Using batched map is probably the easiest way at the moment.\r\nWhat kind of augmentation would you like to do ?",
"Some samples in the dataset are too long, I want to divide them in several samples.",
"Using batched map is the way to go then.\r\nWe'll make it clearer in the docs that map could be used for aug... | Is there any clean way to augment data ?
For now my work-around is to use batched map, like this :
```python
def aug(samples):
    # Simply copy the existing data to have x2 amount of data
    for k, v in samples.items():
        samples[k].extend(v)
    return samples

dataset = dataset.map(aug, batched=True)
``` | 365 |
https://github.com/huggingface/datasets/issues/362 | [dateset subset missing] xtreme paws-x | [
"You're right, thanks for pointing it out. We will update it "
] | I tried nlp.load_dataset('xtreme', 'PAWS-X.es') but got a ValueError.
It turns out that the subset for Spanish is missing
https://github.com/google-research-datasets/paws/tree/master/pawsx | 362 |
https://github.com/huggingface/datasets/issues/361 | 🐛 [Metrics] ROUGE is non-deterministic | [
"Hi, can you give a full self-contained example to reproduce this behavior?",
"> Hi, can you give a full self-contained example to reproduce this behavior?\r\n\r\nThere is a notebook in the post ;)",
"> If I run the ROUGE metric 2 times, with same predictions / references, the scores are slightly different.\r\n... | If I run the ROUGE metric 2 times, with same predictions / references, the scores are slightly different.
Refer to [this Colab notebook](https://colab.research.google.com/drive/1wRssNXgb9ldcp4ulwj-hMJn0ywhDOiDy?usp=sharing) for reproducing the problem.
Example of F-scores for ROUGE-1, ROUGE-2, ROUGE-L in 2 different runs:
> ['0.3350', '0.1470', '0.2329']
['0.3358', '0.1451', '0.2332']
---
Why is ROUGE not deterministic?
https://github.com/huggingface/datasets/issues/360 | [Feature request] Add dataset.ragged_map() function for many-to-many transformations | [
"Actually `map(batched=True)` can already change the size of the dataset.\r\nIt can accept examples of length `N` and returns a batch of length `M` (can be null or greater than `N`).\r\n\r\nI'll make that explicit in the doc that I'm currently writing.",
"You're two steps ahead of me :) In my testing, it also wor... | `dataset.map()` enables one-to-one transformations. Input one example and output one example. This is helpful for tokenizing and cleaning individual lines.
`dataset.filter()` enables one-to-(one-or-none) transformations. Input one example and output either zero/one example. This is helpful for removing portions from the dataset.
However, some dataset transformations are many-to-many. Consider constructing BERT training examples from a dataset of sentences, where you map `["a", "b", "c"] -> ["a[SEP]b", "a[SEP]c", "b[SEP]c", "c[SEP]b", ...]`
I propose a more general `ragged_map()` method that takes in a batch of examples of length `N` and returns a batch of examples of length `M`. This is different from the `map(batched=True)` method, which takes examples of length `N` and returns a batch of length `N`, processing individual examples in parallel. I don't have a clear vision of how this would be implemented efficiently and lazily, but would love to hear the community's feedback on this.
My specific use case is creating an end-to-end ELECTRA data pipeline. I would like to take the raw WikiText data and generate training examples from this using the `ragged_map()` method, then export to TFRecords and train quickly. This would be a reproducible pipeline with no bash scripts. Currently I'm relying on scripts like https://github.com/google-research/electra/blob/master/build_pretraining_dataset.py, which are less general.
| 360 |
https://github.com/huggingface/datasets/issues/359 | ArrowBasedBuilder _prepare_split parse_schema breaks on nested structures | [
"Hi, it depends on what it is in your `dataset_builder.py` file. Can you share it?\r\n\r\nIf you are just loading `json` files, you can also directly use the `json` script (which will find the schema/features from your JSON structure):\r\n\r\n```python\r\nfrom nlp import load_dataset\r\nds = load_dataset(\"json\", ... | I tried using the Json dataloader to load some JSON lines files. but get an exception in the parse_schema function.
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-23-9aecfbee53bd> in <module>
55 from nlp import load_dataset
56
---> 57 ds = load_dataset("../text2struct/model/dataset_builder.py", data_files=rel_datafiles)
58
59
~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
522 download_mode=download_mode,
523 ignore_verifications=ignore_verifications,
--> 524 save_infos=save_infos,
525 )
526
~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
430 verify_infos = not save_infos and not ignore_verifications
431 self._download_and_prepare(
--> 432 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
433 )
434 # Sync info
~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
481 try:
482 # Prepare split will record examples associated to the split
--> 483 self._prepare_split(split_generator, **prepare_split_kwargs)
484 except OSError:
485 raise OSError("Cannot find data file. " + (self.manual_download_instructions or ""))
~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in _prepare_split(self, split_generator)
736 schema_dict[field.name] = Value(str(field.type))
737
--> 738 parse_schema(writer.schema, features)
739 self.info.features = Features(features)
740
~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in parse_schema(schema, schema_dict)
734 parse_schema(field.type.value_type, schema_dict[field.name])
735 else:
--> 736 schema_dict[field.name] = Value(str(field.type))
737
738 parse_schema(writer.schema, features)
<string> in __init__(self, dtype, id, _type)
~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/features.py in __post_init__(self)
55
56 def __post_init__(self):
---> 57 self.pa_type = string_to_arrow(self.dtype)
58
59 def __call__(self):
~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/features.py in string_to_arrow(type_str)
32 if str(type_str + "_") not in pa.__dict__:
33 raise ValueError(
---> 34 f"Neither {type_str} nor {type_str + '_'} seems to be a pyarrow data type. "
35 f"Please make sure to use a correct data type, see: "
36 f"https://arrow.apache.org/docs/python/api/datatypes.html#factory-functions"
ValueError: Neither list<item: string> nor list<item: string>_ seems to be a pyarrow data type. Please make sure to use a correct data type, see: https://arrow.apache.org/docs/python/api/datatypes.html#factory-functions
```
If I create the dataset imperatively, using a pyarrow table, the dataset is created correctly. If I override the `_prepare_split` method to skip the schema validation, the dataset loads as well. | 359 |
https://github.com/huggingface/datasets/issues/355 | can't load SNLI dataset | [
"I just added the processed files of `snli` on our google storage, so that when you do `load_dataset` it can download the processed files from there :)\r\n\r\nWe are thinking about having available those processed files for more datasets in the future, because sometimes files aren't available (like for `snli`), or ... | `nlp` seems to load `snli` from some URL based on nlp.stanford.edu. This subdomain is frequently down -- including right now, when I'd like to load `snli` in a Colab notebook, but can't.
Is there a plan to move these datasets to huggingface servers for a more stable solution?
Btw, here's the stack trace:
```
File "/content/nlp/src/nlp/builder.py", line 432, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/content/nlp/src/nlp/builder.py", line 466, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/content/nlp/src/nlp/datasets/snli/e417f6f2e16254938d977a17ed32f3998f5b23e4fcab0f6eb1d28784f23ea60d/snli.py", line 76, in _split_generators
dl_dir = dl_manager.download_and_extract(_DATA_URL)
File "/content/nlp/src/nlp/utils/download_manager.py", line 217, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/content/nlp/src/nlp/utils/download_manager.py", line 156, in download
lambda url: cached_path(url, download_config=self._download_config,), url_or_urls,
File "/content/nlp/src/nlp/utils/py_utils.py", line 190, in map_nested
return function(data_struct)
File "/content/nlp/src/nlp/utils/download_manager.py", line 156, in <lambda>
lambda url: cached_path(url, download_config=self._download_config,), url_or_urls,
File "/content/nlp/src/nlp/utils/file_utils.py", line 198, in cached_path
local_files_only=download_config.local_files_only,
File "/content/nlp/src/nlp/utils/file_utils.py", line 356, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://nlp.stanford.edu/projects/snli/snli_1.0.zip
``` | 355 |
https://github.com/huggingface/datasets/issues/353 | [Dataset requests] New datasets for Text Classification | [
"Pinging @mariamabarham as well",
"- `nlp` has MR! It's called `rotten_tomatoes`\r\n- SST is part of GLUE, or is that just SST-2?\r\n- `nlp` also has `ag_news`, a popular news classification dataset\r\n\r\nI'd also like to see:\r\n- the Yahoo Answers topic classification dataset\r\n- the Kaggle Fake News classifi... | We are missing a few datasets for Text Classification which is an important field.
Namely, it would be really nice to add:
- [x] TREC-6 dataset (see here for instance: https://pytorchnlp.readthedocs.io/en/latest/source/torchnlp.datasets.html#torchnlp.datasets.trec_dataset) **[done]**
- #386
- [x] Yelp-5
- #1315
- [x] Movie review (Movie Review (MR) dataset [156]) **[done (same as rotten_tomatoes)]**
- [x] SST (Stanford Sentiment Treebank) **[include in glue]**
- #1934
- [ ] Multi-Perspective Question Answering (MPQA) dataset **[require authentication (indeed manual download)]**
- [x] Amazon. This is a popular corpus of product reviews collected from the Amazon website [159]. It contains labels for both binary classification and multi-class (5-class) classification
- #791
- #1389
- [x] 20 Newsgroups. The 20 Newsgroups dataset **[done]**
- #410
- [x] Sogou News dataset **[done]**
- #450
- [x] Reuters news. The Reuters-21578 dataset [165] **[done]**
- #471
- [x] DBpedia. The DBpedia dataset [170]
- #1116
- [ ] Ohsumed. The Ohsumed collection [171] is a subset of the MEDLINE database
- [ ] EUR-Lex. The EUR-Lex dataset
- [x] WOS. The Web Of Science (WOS) dataset **[done]**
- #424
- [ ] PubMed. PubMed [173]
- [x] TREC-QA: TREC-6 + TREC-50
- See above: TREC-6 dataset
- [x] Quora. The Quora dataset [180]
- #366
All these datasets are cited in https://arxiv.org/abs/2004.03705 | 353 |
https://github.com/huggingface/datasets/issues/347 | 'cp950' codec error from load_dataset('xtreme', 'tydiqa') | [
"This is probably a Windows issue, we need to specify the encoding when `load_dataset()` reads the original CSV file.\r\nTry to find the `open()` statement called by `load_dataset()` and add an `encoding='utf-8'` parameter.\r\nSee issues #242 and #307 ",
"It should be in `xtreme.py:L755`:\r\n```python\r\n ... | 
I guess the error is related to a Python source encoding issue: my PC is trying to decode the source code with the wrong encoding-decoding tools, perhaps:
https://www.python.org/dev/peps/pep-0263/
I guess the error was triggered by the code " module = importlib.import_module(module_path)" at line 57 in the source code: nlp/src/nlp/load.py / (https://github.com/huggingface/nlp/blob/911d5596f9b500e39af8642fe3d1b891758999c7/src/nlp/load.py#L51)
Any ideas?
P.S. I tried the same code on Colab, and it runs perfectly.
| 347 |
https://github.com/huggingface/datasets/issues/345 | Supporting documents in ELI5 | [
"Hi @saverymax ! For licensing reasons, the original team was unable to release pre-processed CommonCrawl documents. Instead, they provided a script to re-create them from a CommonCrawl dump, but it unfortunately requires access to a medium-large size cluster:\r\nhttps://github.com/facebookresearch/ELI5#downloading... | I was attempting to use the ELI5 dataset, when I realized that huggingface does not provide the supporting documents (the source documents from the common crawl). Without the supporting documents, this makes the dataset about as useful for my project as a block of cheese, or some other more apt metaphor. According to facebook, the entire document collection is quite large. However, it would still be helpful to at least include a subset of the supporting documents i.e., having some data is better than having a block of cheese, in my case at least.
If you choose not to include them, it would be helpful to have documentation mentioning this specifically. It is especially confusing because the hf nlp ELI5 dataset has the key `'document'` but there are no documents to be found :( | 345 |
https://github.com/huggingface/datasets/issues/342 | Features should be updated when `map()` changes schema | [
"`dataset.column_names` are being updated but `dataset.features` aren't indeed..."
] | `dataset.map()` can change the schema and column names.
We should update the features in this case (with what is possible to infer). | 342 |
https://github.com/huggingface/datasets/issues/337 | [Feature request] Export Arrow dataset to TFRecords | [] | The TFRecord generation process is error-prone and requires complex separate Python scripts to download and preprocess the data. I propose to combine the user-friendly features of `nlp` with the speed and efficiency of TFRecords. Sample API:
```python
# use these existing methods
ds = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
ds = ds.map(lambda ex: tokenizer(ex))
ds.set_format("tensorflow", columns=["input_ids", "token_type_ids", "attention_mask"])
# then add this method
ds.export(folder="/my/tfrecords", prefix="myrecord", num_shards=8, format="tfrecord")
```
which would create files like so:
```bash
/my/tfrecords/myrecord_1.tfrecord
/my/tfrecords/myrecord_2.tfrecord
...
```
I would be happy to contribute this method. We could use a similar approach for PyTorch. Thoughts? | 337 |
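A rough sketch of what such an `export()` might do internally for one shard, assuming the rows are plain Python dicts of integer lists (i.e. before calling `set_format`); this is not an existing `nlp` API, just an illustration of the serialization step:
```python
import tensorflow as tf

def to_tf_example(row):
    # Serialize one row (a dict of int lists) into a tf.train.Example.
    feature = {
        name: tf.train.Feature(int64_list=tf.train.Int64List(value=values))
        for name, values in row.items()
    }
    return tf.train.Example(features=tf.train.Features(feature=feature))

with tf.io.TFRecordWriter("/my/tfrecords/myrecord_1.tfrecord") as writer:
    for row in ds:  # ds is the tokenized dataset from the snippet above
        writer.write(to_tf_example(row).SerializeToString())
```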
https://github.com/huggingface/datasets/issues/336 | [Dataset requests] New datasets for Open Question Answering | [] | We are still missing a few datasets for Open-Question Answering, which is currently a field in strong development.
Namely, it would be really nice to add:
- WebQuestions (Berant et al., 2013) [done]
- CuratedTrec (Baudis et al. 2015) [not open-source]
- MS-MARCO (NGuyen et al. 2016) [done]
- SearchQA (Dunn et al. 2017) [done]
- FEVER (Thorne et al. 2018) - [ done]
All these datasets are cited in http://arxiv.org/abs/2005.11401 | 336 |
https://github.com/huggingface/datasets/issues/331 | Loading CNN/Daily Mail dataset produces `nlp.utils.info_utils.NonMatchingSplitsSizesError` | [
"I couldn't reproduce on my side.\r\nIt looks like you were not able to generate all the examples, and you have the problem for each split train-test-validation.\r\nCould you try to enable logging, try again and send the logs ?\r\n```python\r\nimport logging\r\nlogging.basicConfig(level=logging.INFO)\r\n```",
"he... | ```
>>> import nlp
>>> nlp.load_dataset('cnn_dailymail', '3.0.0')
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.26 GiB, total: 1.81 GiB) to /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/p/qdata/jm8wx/datasets/nlp/src/nlp/load.py", line 520, in load_dataset
builder_instance.download_and_prepare(
File "/p/qdata/jm8wx/datasets/nlp/src/nlp/builder.py", line 431, in download_and_prepare
self._download_and_prepare(
File "/p/qdata/jm8wx/datasets/nlp/src/nlp/builder.py", line 488, in _download_and_prepare
verify_splits(self.info.splits, split_dict)
File "/p/qdata/jm8wx/datasets/nlp/src/nlp/utils/info_utils.py", line 70, in verify_splits
raise NonMatchingSplitsSizesError(str(bad_splits))
nlp.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='test', num_bytes=49424491, num_examples=11490, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='test', num_bytes=48931393, num_examples=11379, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='train', num_bytes=1249178681, num_examples=287113, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='train', num_bytes=1240618482, num_examples=285161, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='validation', num_bytes=57149241, num_examples=13368, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='validation', num_bytes=56637485, num_examples=13255, dataset_name='cnn_dailymail')}]
``` | 331 |
https://github.com/huggingface/datasets/issues/329 | [Bug] FileLock dependency incompatible with filesystem | [
"Hi, can you give details on your environment/os/packages versions/etc?",
"Environment is Ubuntu 18.04, Python 3.7.5, nlp==0.3.0, filelock=3.0.12.\r\n\r\nThe external volume is Amazon FSx for Lustre, and it by default creates files with limited permissions. My working theory is that FileLock creates a lockfile th... | I'm downloading a dataset successfully with
`load_dataset("wikitext", "wikitext-2-raw-v1")`
But when I attempt to cache it on an external volume, it hangs indefinitely:
`load_dataset("wikitext", "wikitext-2-raw-v1", cache_dir="/fsx") # /fsx is an external volume mount`
The filesystem when hanging looks like this:
```bash
/fsx
----downloads
----94be...73.lock
----wikitext
----wikitext-2-raw
----wikitext-2-raw-1.0.0.incomplete
```
It appears that on this filesystem, the FileLock object is forever stuck in its "acquire" stage. I have verified that the issue lies specifically with the `filelock` dependency:
```python
open("/fsx/hello.txt").write("hello") # succeeds
from filelock import FileLock
with FileLock("/fsx/hello.lock"):
open("/fsx/hello.txt").write("hello") # hangs indefinitely
```
Has anyone else run into this issue? I'd raise it directly on the FileLock repo, but that project appears abandoned with the last update over a year ago. Or if there's a solution that would remove the FileLock dependency from the project, I would appreciate that. | 329 |
https://github.com/huggingface/datasets/issues/328 | Fork dataset | [
"To be able to generate the Arrow dataset you need to either use our csv or json utilities `load_dataset(\"json\", data_files=my_json_files)` OR write your own custom dataset script (you can find some inspiration from the [squad](https://github.com/huggingface/nlp/blob/master/datasets/squad/squad.py) script for exa... | We have a multi-task learning model training I'm trying to convert to using the Arrow-based nlp dataset.
We're currently training a custom TensorFlow model but the nlp paradigm should be a bridge for us to be able to use the wealth of pre-trained models in Transformers.
Our preprocessing flow parses raw text and json with Entity and Relations annotations and creates 2 datasets for training NER and Relations prediction heads.
Is there some good way to "fork" a dataset?
E.g.:
1. text + json -> Dataset1
1. Dataset1 -> DatasetNER
1. Dataset1 -> DatasetREL
or
1. text + json -> Dataset1
1. Dataset1 -> DatasetNER
1. Dataset1 + DatasetNER -> DatasetREL
| 328 |
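Sketch of how the second pattern above might look with the existing API, assuming the base data is loadable with the `json` utility and that `make_ner_example` / `make_rel_example` are hypothetical per-example converters:
```python
import nlp

# Hypothetical paths/helpers, for illustration only.
dataset1 = nlp.load_dataset("json", data_files=my_json_files, split="train")

# Each "fork" is just a map over the shared base dataset.
dataset_ner = dataset1.map(make_ner_example)
dataset_rel = dataset1.map(make_rel_example)
```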
https://github.com/huggingface/datasets/issues/326 | Large dataset in Squad2-format | [
"I'm pretty sure you can get some inspiration from the squad_v2 script. It looks like the dataset is quite big so it will take some time for the users to generate it, but it should be reasonable.\r\n\r\nAlso you are saying that you are still making the dataset grow in size right ?\r\nIt's probably good practice to ... | At the moment we are building an large question answering dataset and think about sharing it with the huggingface community.
Because of the computing power required, we split it into multiple tiles, but they are all in the same format.
Right now, the most important facts about it are these:
- Contexts: 1.047.671
- questions: 1.677.732
- Answers: 6.742.406
- unanswerable: 377.398
It is already cleaned
<pre><code>
train_data = [
{
'context': "this is the context",
'qas': [
{
'id': "00002",
'is_impossible': False,
'question': "whats is this",
'answers': [
{
'text': "answer",
'answer_start': 0
}
]
},
{
'id': "00003",
'is_impossible': False,
'question': "question2",
'answers': [
{
'text': "answer2",
'answer_start': 1
}
]
}
]
}
]
</code></pre>
Because it is growing every day, we are thinking about a structure like this:
We host a JSON file containing all the download links, and the script can load it dynamically.
At the moment it is around ~20GB
Any advice on how to handle this, or a ready-to-use template? | 326 |
https://github.com/huggingface/datasets/issues/324 | Error when calculating glue score | [
"The glue metric for cola is a metric for classification. It expects label ids as integers as inputs.",
"I want to evaluate a sentence pair whether they are semantically equivalent, so I used MRPC and it gives the same error, does that mean we have to encode the sentences and parse as input?\r\n\r\nusing BertToke... | I was trying glue score along with other metrics here. But glue gives me this error;
```
import nlp
glue_metric = nlp.load_metric('glue',name="cola")
glue_score = glue_metric.compute(predictions, references)
```
```
---------------------------------------------------------------------------
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-8-b9210a524504> in <module>()
----> 1 glue_score = glue_metric.compute(predictions, references)
6 frames
/usr/local/lib/python3.6/dist-packages/nlp/metric.py in compute(self, predictions, references, timeout, **metrics_kwargs)
191 """
192 if predictions is not None:
--> 193 self.add_batch(predictions=predictions, references=references)
194 self.finalize(timeout=timeout)
195
/usr/local/lib/python3.6/dist-packages/nlp/metric.py in add_batch(self, predictions, references, **kwargs)
207 if self.writer is None:
208 self._init_writer()
--> 209 self.writer.write_batch(batch)
210
211 def add(self, prediction=None, reference=None, **kwargs):
/usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size)
155 if self.pa_writer is None:
156 self._build_writer(pa_table=pa.Table.from_pydict(batch_examples))
--> 157 pa_table: pa.Table = pa.Table.from_pydict(batch_examples, schema=self._schema)
158 if writer_batch_size is None:
159 writer_batch_size = self.writer_batch_size
/usr/local/lib/python3.6/dist-packages/pyarrow/types.pxi in __iter__()
/usr/local/lib/python3.6/dist-packages/pyarrow/array.pxi in pyarrow.lib.asarray()
/usr/local/lib/python3.6/dist-packages/pyarrow/array.pxi in pyarrow.lib.array()
/usr/local/lib/python3.6/dist-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array()
TypeError: an integer is required (got type str)
```
I'm not sure whether I'm doing this wrong or whether it's an issue. I would like to know a workaround. Thank you. | 324 |
https://github.com/huggingface/datasets/issues/321 | ERROR:root:mwparserfromhell | [
"It looks like it comes from `mwparserfromhell`.\r\n\r\nWould it be possible to get the bad `section` that causes this issue ? The `section` string is from `datasets/wikipedia.py:L548` ? You could just add a `try` statement and print the section if the line `section_text.append(section.strip_code().strip())` crashe... | Hi,
I am trying to download some wikipedia data but I got this error for spanish "es" (but there are maybe some others languages which have the same error I haven't tried all of them ).
`ERROR:root:mwparserfromhell ParseError: This is a bug and should be reported. Info: C tokenizer exited with non-empty token stack.`
The code I have use was :
`dataset = load_dataset('wikipedia', '20200501.es', beam_runner='DirectRunner')`
| 321 |
https://github.com/huggingface/datasets/issues/320 | Blog Authorship Corpus, Non Matching Splits Sizes Error, nlp viewer | [
"I wonder if this means downloading failed? That corpus has a really slow server.",
"This dataset seems to have a decoding problem that results in inconsistencies in the number of generated examples.\r\nSee #215.\r\nThat's why we end up with a `NonMatchingSplitsSizesError `."
] | Selecting `blog_authorship_corpus` in the nlp viewer throws the following error:
```
NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=610252351, num_examples=532812, dataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='train', num_bytes=614706451, num_examples=535568, dataset_name='blog_authorship_corpus')}, {'expected': SplitInfo(name='validation', num_bytes=37500394, num_examples=31277, dataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='validation', num_bytes=32553710, num_examples=28521, dataset_name='blog_authorship_corpus')}]
Traceback:
File "/home/sasha/streamlit/lib/streamlit/ScriptRunner.py", line 322, in _run_script
exec(code, module.__dict__)
File "/home/sasha/nlp-viewer/run.py", line 172, in <module>
dts, fail = get(str(option.id), str(conf_option.name) if conf_option else None)
File "/home/sasha/streamlit/lib/streamlit/caching.py", line 591, in wrapped_func
return get_or_create_cached_value()
File "/home/sasha/streamlit/lib/streamlit/caching.py", line 575, in get_or_create_cached_value
return_value = func(*args, **kwargs)
File "/home/sasha/nlp-viewer/run.py", line 132, in get
builder_instance.download_and_prepare()
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/builder.py", line 432, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/builder.py", line 488, in _download_and_prepare
verify_splits(self.info.splits, split_dict)
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/utils/info_utils.py", line 70, in verify_splits
raise NonMatchingSplitsSizesError(str(bad_splits))
```
@srush @lhoestq | 320 |
https://github.com/huggingface/datasets/issues/319 | Nested sequences with dicts | [
"Oh yes, this is a backward compatibility feature with tensorflow_dataset in which a `Sequence` or `dict` is converted in a `dict` of `lists`, unfortunately it is not very intuitive, see here: https://github.com/huggingface/nlp/blob/master/src/nlp/features.py#L409\r\n\r\nTo avoid this behavior, you can just define ... | Am pretty much finished [adding a dataset](https://github.com/ghomasHudson/nlp/blob/DocRED/datasets/docred/docred.py) for [DocRED](https://github.com/thunlp/DocRED), but am getting an error when trying to add a nested `nlp.features.sequence(nlp.features.sequence({key:value,...}))`.
The original data is in this format:
```python
{
'title': "Title of wiki page",
'vertexSet': [
[
{ 'name': "mention_name",
'sent_id': "mention in which sentence",
       'pos': ["position of mention in a sentence"],
'type': "NER_type"},
{another mention}
],
[another entity]
]
...
}
```
So to represent this I've attempted to write:
```
...
features=nlp.Features({
"title": nlp.Value("string"),
"vertexSet": nlp.features.Sequence(nlp.features.Sequence({
"name": nlp.Value("string"),
"sent_id": nlp.Value("int32"),
"pos": nlp.features.Sequence(nlp.Value("int32")),
"type": nlp.Value("string"),
})),
...
}),
...
```
This is giving me the error:
```
pyarrow.lib.ArrowTypeError: Could not convert [{'pos': [[0,2], [2,4], [3,5]], "type": ["ORG", "ORG", "ORG"], "name": ["Lark Force", "Lark Force", "Lark Force", "sent_id": [0, 3, 4]}..... with type list: was not a dict, tuple, or recognized null value for conversion to struct type
```
Do we expect the pyarrow stuff to break when doing this deeper nesting? I've checked that it still works when you do `nlp.features.Sequence(nlp.features.Sequence(nlp.Value("string"))` or `nlp.features.Sequence({key:value,...})` just not nested sequences with a dict.
If it's not possible, I can always convert it to a shallower structure. I'd rather not change the DocRED authors' structure if I don't have to though. | 319 |
https://github.com/huggingface/datasets/issues/317 | Adding a dataset with multiple subtasks | [
"For one dataset you can have different configurations that each have their own `nlp.Features`.\r\nWe imagine having one configuration per subtask for example.\r\nThey are loaded with `nlp.load_dataset(\"my_dataset\", \"my_config\")`.\r\n\r\nFor example the `glue` dataset has many configurations. It is a bit differ... | I intent to add the datasets of the MT Quality Estimation shared tasks to `nlp`. However, they have different subtasks -- such as word-level, sentence-level and document-level quality estimation, each of which having different language pairs, and some of the data reused in different subtasks.
For example, in [QE 2019,](http://www.statmt.org/wmt19/qe-task.html) we had the same English-Russian and English-German data for word-level and sentence-level QE.
I suppose these datasets could have both their word and sentence-level labels inside `nlp.Features`; but what about other subtasks? Should they be considered a different dataset altogether?
I read the discussion on #217 but the case of QE seems a lot simpler. | 317 |
https://github.com/huggingface/datasets/issues/315 | [Question] Best way to batch a large dataset? | [
"Update: I think I've found a solution.\r\n\r\n```python\r\noutput_types = {\"input_ids\": tf.int64, \"token_type_ids\": tf.int64, \"attention_mask\": tf.int64}\r\ndef train_dataset_gen():\r\n for i in range(len(train_dataset)):\r\n yield train_dataset[i]\r\ntf_dataset = tf.data.Dataset.from_generator(tra... | I'm training on large datasets such as Wikipedia and BookCorpus. Following the instructions in [the tutorial notebook](https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb), I see the following recommended for TensorFlow:
```python
train_tf_dataset = train_tf_dataset.filter(remove_none_values, load_from_cache_file=False)
columns = ['input_ids', 'token_type_ids', 'attention_mask', 'start_positions', 'end_positions']
train_tf_dataset.set_format(type='tensorflow', columns=columns)
features = {x: train_tf_dataset[x].to_tensor(default_value=0, shape=[None, tokenizer.max_len]) for x in columns[:3]}
labels = {"output_1": train_tf_dataset["start_positions"].to_tensor(default_value=0, shape=[None, 1])}
labels["output_2"] = train_tf_dataset["end_positions"].to_tensor(default_value=0, shape=[None, 1])
### Question about this last line ###
tfdataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(8)
```
This code works for something like WikiText-2. However, scaling up to WikiText-103, the last line takes 5-10 minutes to run. I assume it is because tf.data.Dataset.from_tensor_slices() is pulling everything into memory, not lazily loading. This approach won't scale up to datasets 25x larger such as Wikipedia.
So I tried manual batching using `dataset.select()`:
```python
idxs = np.random.randint(len(dataset), size=bsz)
batch = dataset.select(idxs).map(lambda example: {"input_ids": tokenizer(example["text"])})
tf_batch = tf.constant(batch["ids"], dtype=tf.int64)
```
This appears to create a new Apache Arrow dataset with every batch I grab, and then tries to cache it. The runtime of `dataset.select([0, 1])` appears to be much worse than `dataset[:2]`. So using `select()` doesn't seem to be performant enough for a training loop.
Is there a performant scalable way to lazily load batches of nlp Datasets? | 315 |
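The truncated comment above proposes a generator-based route; a completed version could look like this (assuming the dataset is already tokenized, padded to a fixed length, and contains the three columns below):
```python
import tensorflow as tf

output_types = {"input_ids": tf.int64, "token_type_ids": tf.int64, "attention_mask": tf.int64}

def train_dataset_gen():
    for i in range(len(train_dataset)):
        yield train_dataset[i]  # one example at a time, never materializing the full set

tf_dataset = tf.data.Dataset.from_generator(train_dataset_gen, output_types=output_types).batch(8)
```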
https://github.com/huggingface/datasets/issues/312 | [Feature request] Add `shard()` method to dataset | [
"Hi Jared,\r\nInteresting, thanks for raising this question. You can also do that after loading with `dataset.select()` or `dataset.filter()` which let you keep only a specific subset of rows in a dataset.\r\nWhat is your use-case for sharding?",
"Thanks for the pointer to those functions! It's still a little mor... | Currently, to shard a dataset into 10 pieces on different ranks, you can run
```python
rank = 3 # for example
size = 10
dataset = nlp.load_dataset('wikitext', 'wikitext-2-raw-v1', split=f"train[{rank*10}%:{(rank+1)*10}%]")
```
However, this breaks down if you have a number of ranks that doesn't divide cleanly into 100, such as 64 ranks. Is there interest in adding a method shard() that looks like this?
```python
rank = 3
size = 64
dataset = nlp.load_dataset("wikitext", "wikitext-2-raw-v1", split="train").shard(rank=rank, size=size)
```
TensorFlow has a similar API: https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shard. I'd be happy to contribute this code. | 312 |
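Until a dedicated `shard()` exists, a workaround sketch using the existing `select()` (mentioned in the comments) with a strided index list, which works for any number of ranks:
```python
import nlp

rank, size = 3, 64
dataset = nlp.load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
# Every `size`-th example starting at `rank`; no need for `size` to divide 100.
my_shard = dataset.select(list(range(rank, len(dataset), size)))
```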
https://github.com/huggingface/datasets/issues/307 | Specify encoding for MRPC | [] | Same as #242, but with MRPC: on Windows, I get a `UnicodeDecodeError` when I try to download the dataset:
```python
dataset = nlp.load_dataset('glue', 'mrpc')
```
```python
Downloading and preparing dataset glue/mrpc (download: Unknown size, generated: Unknown size, total: Unknown size) to C:\Users\Python\.cache\huggingface\datasets\glue\mrpc\1.0.0...
---------------------------------------------------------------------------
UnicodeDecodeError Traceback (most recent call last)
~\Miniconda3\envs\nlp\lib\site-packages\nlp\builder.py in incomplete_dir(dirname)
369 try:
--> 370 yield tmp_dir
371 if os.path.isdir(dirname):
~\Miniconda3\envs\nlp\lib\site-packages\nlp\builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
430 verify_infos = not save_infos and not ignore_verifications
--> 431 self._download_and_prepare(
432 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
~\Miniconda3\envs\nlp\lib\site-packages\nlp\builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
482 # Prepare split will record examples associated to the split
--> 483 self._prepare_split(split_generator, **prepare_split_kwargs)
484 except OSError:
~\Miniconda3\envs\nlp\lib\site-packages\nlp\builder.py in _prepare_split(self, split_generator)
663 generator = self._generate_examples(**split_generator.gen_kwargs)
--> 664 for key, record in utils.tqdm(generator, unit=" examples", total=split_info.num_examples, leave=False):
665 example = self.info.features.encode_example(record)
~\Miniconda3\envs\nlp\lib\site-packages\tqdm\notebook.py in __iter__(self, *args, **kwargs)
217 try:
--> 218 for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs):
219 # return super(tqdm...) will not catch exception
~\Miniconda3\envs\nlp\lib\site-packages\tqdm\std.py in __iter__(self)
1128 try:
-> 1129 for obj in iterable:
1130 yield obj
~\Miniconda3\envs\nlp\lib\site-packages\nlp\datasets\glue\7fc58099eb3983a04c8dac8500b70d27e6eceae63ffb40d7900c977897bb58c6\glue.py in _generate_examples(self, data_file, split, mrpc_files)
514 examples = self._generate_example_mrpc_files(mrpc_files=mrpc_files, split=split)
--> 515 for example in examples:
516 yield example["idx"], example
~\Miniconda3\envs\nlp\lib\site-packages\nlp\datasets\glue\7fc58099eb3983a04c8dac8500b70d27e6eceae63ffb40d7900c977897bb58c6\glue.py in _generate_example_mrpc_files(self, mrpc_files, split)
576 reader = csv.DictReader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
--> 577 for n, row in enumerate(reader):
578 is_row_in_dev = [row["#1 ID"], row["#2 ID"]] in dev_ids
~\Miniconda3\envs\nlp\lib\csv.py in __next__(self)
110 self.fieldnames
--> 111 row = next(self.reader)
112 self.line_num = self.reader.line_num
~\Miniconda3\envs\nlp\lib\encodings\cp1252.py in decode(self, input, final)
22 def decode(self, input, final=False):
---> 23 return codecs.charmap_decode(input,self.errors,decoding_table)[0]
24
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 1180: character maps to <undefined>
```
The fix is the same: specify `utf-8` encoding when opening the file. The previous fix didn't work as MRPC's download process is different from the others in GLUE.
I am going to propose a new PR :) | 307 |
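The proposed fix boils down to forcing UTF-8 where `glue.py` opens the MRPC files (sketch only; `mrpc_file` is an illustrative variable name based on the traceback above):
```python
import csv

# Forcing UTF-8 avoids the cp1252 fallback that Windows uses by default.
with open(mrpc_file, encoding="utf-8") as f:
    reader = csv.DictReader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
    for n, row in enumerate(reader):
        ...  # unchanged processing of each MRPC row
```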
https://github.com/huggingface/datasets/issues/305 | Importing downloaded package repository fails | [] | The `get_imports` function in `src/nlp/load.py` has a feature to download a package as a zip archive of the github repository and import functions from the unpacked directory. This is used for example in the `metrics/coval.py` file, and would be useful to add BLEURT (@ankparikh).
Currently however, the code seems to have trouble with imports within the package. For example:
```
import nlp
coval = nlp.load_metric('coval')
```
yields:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/yacine/Code/nlp/src/nlp/load.py", line 432, in load_metric
metric_cls = import_main_class(module_path, dataset=False)
File "/home/yacine/Code/nlp/src/nlp/load.py", line 57, in import_main_class
module = importlib.import_module(module_path)
File "/home/yacine/anaconda3/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/yacine/Code/nlp/src/nlp/metrics/coval/a78807df33ac45edbb71799caf2b3b47e55df4fd690267808fe963a5e8b30952/coval.py", line 21, in <module>
from .coval_backend.conll import reader # From: https://github.com/ns-moosavi/coval
File "/home/yacine/Code/nlp/src/nlp/metrics/coval/a78807df33ac45edbb71799caf2b3b47e55df4fd690267808fe963a5e8b30952/coval_backend/conll/reader.py", line 2, in <module>
from conll import mention
ModuleNotFoundError: No module named 'conll'
```
Not sure what the fix would be there. | 305 |
https://github.com/huggingface/datasets/issues/304 | Problem while printing doc string when instantiating multiple metrics. | [] | When I load more than one metric and try to print doc string of a particular metric,. It shows the doc strings of all imported metric one after the other which looks quite confusing and clumsy.
Attached [Colab](https://colab.research.google.com/drive/13H0ZgyQ2se0mqJ2yyew0bNEgJuHaJ8H3?usp=sharing) Notebook for problem clarification.. | 304 |
https://github.com/huggingface/datasets/issues/302 | Question - Sign Language Datasets | [
"Even more complicating - \r\n\r\nAs I see it, datasets can have \"addons\".\r\nFor example, the WebNLG dataset is a dataset for data-to-text. However, a work of mine and other works enriched this dataset with text plans / underlying text structures. In that case, I see a need to load the dataset \"WebNLG\" with \"... | An emerging field in NLP is SLP - sign language processing.
I was wondering about adding datasets here, specifically because it's shaping up to be large and easily usable.
The metrics for sign language to text translation are the same.
So, what do you think about (me, or others) adding datasets here?
An example dataset would be [RWTH-PHOENIX-Weather 2014 T](https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/)
For every item in the dataset, the data object includes:
1. video_path - path to mp4 file
2. pose_path - a path to `.pose` file with human pose landmarks
3. openpose_path - a path to a `.json` file with human pose landmarks
4. gloss - string
5. text - string
6. video_metadata - height, width, frames, framerate
------
To make it a tad more complicated - what if sign language libraries add requirements to `nlp`? for example, sign language is commonly annotated using `ilex`, `eaf`, or `srt` files, which are all loadable as text, but there is no reason for the dataset to parse that file by itself, if libraries exist to do so. | 302 |
https://github.com/huggingface/datasets/issues/301 | Setting cache_dir gives error on wikipedia download | [
"Whoops didn't mean to close this one.\r\nI did some changes, could you try to run it from the master branch ?",
"Now it works, thanks!"
] | First of all thank you for a super handy library! I'd like to download large files to a specific drive so I set `cache_dir=my_path`. This works fine with e.g. imdb and squad. But on wikipedia I get an error:
```
nlp.load_dataset('wikipedia', '20200501.de', split = 'train', cache_dir=my_path)
```
```
OSError Traceback (most recent call last)
<ipython-input-2-23551344d7bc> in <module>
1 import nlp
----> 2 nlp.load_dataset('wikipedia', '20200501.de', split = 'train', cache_dir=path)
~/anaconda3/envs/fastai2/lib/python3.7/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
522 download_mode=download_mode,
523 ignore_verifications=ignore_verifications,
--> 524 save_infos=save_infos,
525 )
526
~/anaconda3/envs/fastai2/lib/python3.7/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
385 with utils.temporary_assignment(self, "_cache_dir", tmp_data_dir):
386 reader = ArrowReader(self._cache_dir, self.info)
--> 387 reader.download_from_hf_gcs(self._cache_dir, self._relative_data_dir(with_version=True))
388 downloaded_info = DatasetInfo.from_directory(self._cache_dir)
389 self.info.update(downloaded_info)
~/anaconda3/envs/fastai2/lib/python3.7/site-packages/nlp/arrow_reader.py in download_from_hf_gcs(self, cache_dir, relative_data_dir)
231 remote_dataset_info = os.path.join(remote_cache_dir, "dataset_info.json")
232 downloaded_dataset_info = cached_path(remote_dataset_info)
--> 233 os.rename(downloaded_dataset_info, os.path.join(cache_dir, "dataset_info.json"))
234 if self._info is not None:
235 self._info.update(self._info.from_directory(cache_dir))
OSError: [Errno 18] Invalid cross-device link: '/home/local/NTU/nn/.cache/huggingface/datasets/025fa4fd4f04aaafc9e939260fbc8f0bb190ce14c61310c8ae1ddd1dcb31f88c.9637f367b6711a79ca478be55fe6989b8aea4941b7ef7adc67b89ff403020947' -> '/data/nn/nlp/wikipedia/20200501.de/1.0.0.incomplete/dataset_info.json'
``` | 301 |
https://github.com/huggingface/datasets/issues/297 | Error in Demo for Specific Datasets | [
"Thanks for reporting these errors :)\r\n\r\nI can actually see two issues here.\r\n\r\nFirst, datasets like `natural_questions` require apache_beam to be processed. Right now the import is not at the right place so we have this error message. However, even the imports are fixed, the nlp viewer doesn't actually hav... | Selecting `natural_questions` or `newsroom` dataset in the online demo results in an error similar to the following.

| 297 |
https://github.com/huggingface/datasets/issues/296 | snli -1 labels | [
"@jxmorris12 , we use `-1` to label examples for which `gold label` is missing (`gold label = -` in the original dataset). ",
"Thanks @mariamabarham! so the original dataset is missing some labels? That is weird. Is standard practice just to discard those examples training/eval?",
"Yes the original dataset is... | I'm trying to train a model on the SNLI dataset. Why does it have so many -1 labels?
```
import nlp
from collections import Counter
data = nlp.load_dataset('snli')['train']
print(Counter(data['label']))
Counter({0: 183416, 2: 183187, 1: 182764, -1: 785})
```
| 296 |
https://github.com/huggingface/datasets/issues/295 | Improve input warning for evaluation metrics | [] | Hi,
I am the author of `bert_score`. Recently, we received [ an issue ](https://github.com/Tiiiger/bert_score/issues/62) reporting a problem in using `bert_score` from the `nlp` package (also see #238 in this repo). After looking into this, I realized that the problem arises from the format `nlp.Metric` takes input.
Here is a minimal example:
```python
import nlp
scorer = nlp.load_metric("bertscore")
with open("pred.txt") as p, open("ref.txt") as g:
for lp, lg in zip(p, g):
scorer.add(lp, lg)
score = scorer.compute(lang="en")
```
The problem in the above code is that `scorer.add()` expects a list of strings as input for the references. As a result, the `scorer` here would take a list of characters in `lg` to be the references. The correct implementation would be calling
```python
scorer.add(lp, [lg])
```
I just want to raise this issue to you to prevent future user errors of a similar kind. I assume some simple type checking can prevent this from happening?
Thanks! | 295 |
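A sketch of the kind of type check suggested above (a hypothetical helper, not part of the `nlp` API) that would catch a bare string passed where a list of reference strings is expected:
```python
def _check_references(references):
    # A bare string would otherwise be iterated character by character.
    if isinstance(references, str):
        raise TypeError(
            "references should be a list of strings, not a single string; "
            "wrap it in a list, e.g. scorer.add(prediction, [reference])"
        )
    return references
```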
https://github.com/huggingface/datasets/issues/294 | Cannot load arxiv dataset on MacOS? | [
"I couldn't replicate this issue on my macbook :/\r\nCould you try to play with different encodings in `with open(path, encoding=...) as f` in scientific_papers.py:L108 ?",
"I was able to track down the file causing the problem by adding the following to `scientific_papers.py` (starting at line 116):\r\n\r\n```py... | I am having trouble loading the `"arxiv"` config from the `"scientific_papers"` dataset on MacOS. When I try loading the dataset with:
```python
arxiv = nlp.load_dataset("scientific_papers", "arxiv")
```
I get the following stack trace:
```bash
JSONDecodeError Traceback (most recent call last)
<ipython-input-2-8e00c55d5a59> in <module>
----> 1 arxiv = nlp.load_dataset("scientific_papers", "arxiv")
~/miniconda3/envs/t2t/lib/python3.7/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
522 download_mode=download_mode,
523 ignore_verifications=ignore_verifications,
--> 524 save_infos=save_infos,
525 )
526
~/miniconda3/envs/t2t/lib/python3.7/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
430 verify_infos = not save_infos and not ignore_verifications
431 self._download_and_prepare(
--> 432 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
433 )
434 # Sync info
~/miniconda3/envs/t2t/lib/python3.7/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
481 try:
482 # Prepare split will record examples associated to the split
--> 483 self._prepare_split(split_generator, **prepare_split_kwargs)
484 except OSError:
485 raise OSError("Cannot find data file. " + (self.manual_download_instructions or ""))
~/miniconda3/envs/t2t/lib/python3.7/site-packages/nlp/builder.py in _prepare_split(self, split_generator)
662
663 generator = self._generate_examples(**split_generator.gen_kwargs)
--> 664 for key, record in utils.tqdm(generator, unit=" examples", total=split_info.num_examples, leave=False):
665 example = self.info.features.encode_example(record)
666 writer.write(example)
~/miniconda3/envs/t2t/lib/python3.7/site-packages/tqdm/std.py in __iter__(self)
1106 fp_write=getattr(self.fp, 'write', sys.stderr.write))
1107
-> 1108 for obj in iterable:
1109 yield obj
1110 # Update and possibly print the progressbar.
~/miniconda3/envs/t2t/lib/python3.7/site-packages/nlp/datasets/scientific_papers/107a416c0e1958cb846f5934b5aae292f7884a5b27e86af3f3ef1a093e058bbc/scientific_papers.py in _generate_examples(self, path)
114 # "section_names": list[str], list of section names.
115 # "sections": list[list[str]], list of sections (list of paragraphs)
--> 116 d = json.loads(line)
117 summary = "\n".join(d["abstract_text"])
118 # In original paper, <S> and </S> are not used in vocab during training
~/miniconda3/envs/t2t/lib/python3.7/json/__init__.py in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
346 parse_int is None and parse_float is None and
347 parse_constant is None and object_pairs_hook is None and not kw):
--> 348 return _default_decoder.decode(s)
349 if cls is None:
350 cls = JSONDecoder
~/miniconda3/envs/t2t/lib/python3.7/json/decoder.py in decode(self, s, _w)
335
336 """
--> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end())
338 end = _w(s, end).end()
339 if end != len(s):
~/miniconda3/envs/t2t/lib/python3.7/json/decoder.py in raw_decode(self, s, idx)
351 """
352 try:
--> 353 obj, end = self.scan_once(s, idx)
354 except StopIteration as err:
355 raise JSONDecodeError("Expecting value", s, err.value) from None
JSONDecodeError: Unterminated string starting at: line 1 column 46983 (char 46982)
163502 examples [02:10, 2710.68 examples/s]
```
I am not sure how to trace back to the specific JSON file that has the "Unterminated string". Also, I do not get this error on colab so I suspect it may be MacOS specific. Copy pasting the relevant lines from `transformers-cli env` below:
- Platform: Darwin-19.5.0-x86_64-i386-64bit
- Python version: 3.7.5
- PyTorch version (GPU?): 1.5.0 (False)
- Tensorflow version (GPU?): 2.2.0 (False)
Any ideas? | 294 |