| column | type | min | max |
|---|---|---|---|
| id | int64 | 599M | 3.26B |
| number | int64 | 1 | 7.7k |
| title | string (length) | 1 | 290 |
| body | string (length) | 0 | 228k |
| state | string (2 values) | | |
| html_url | string (length) | 46 | 51 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 | 2025-07-23 08:04:53 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 | 2025-07-23 18:53:44 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 | 2025-07-23 16:44:42 |
| user | dict | | |
| labels | list (length) | 0 | 4 |
| is_pull_request | bool (2 classes) | | |
| comments | list (length) | 0 | 0 |
721,073,812
730
Possible caching bug
The following code with `test1.txt` containing just "🤗🤗🤗": ``` dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1") print(dataset[0]) dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8") print(dataset[0]) ``` produces this output: ``` Downloading and preparing dataset text/default-15600e4d83254059 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155... Dataset text downloaded and prepared to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155. Subsequent calls will reuse this data. {'text': 'ð\x9f¤\x97ð\x9f¤\x97ð\x9f¤\x97'} Using custom data configuration default Reusing dataset text (/home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155) {'text': 'ð\x9f¤\x97ð\x9f¤\x97ð\x9f¤\x97'} ``` Just changing the order (and deleting the temp files): ``` dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8") print(dataset[0]) dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1") print(dataset[0]) ``` produces this: ``` Using custom data configuration default Downloading and preparing dataset text/default-15600e4d83254059 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155... Dataset text downloaded and prepared to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155. Subsequent calls will reuse this data. {'text': '🤗🤗🤗'} Using custom data configuration default Reusing dataset text (/home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155) {'text': '🤗🤗🤗'} ``` Is it intended that the cache path does not depend on the config entries? tested with datasets==1.1.2 and python==3.8.5
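For reference, two possible workarounds until the loader config (e.g. `encoding`) is part of the cache path; both are untested assumptions against datasets 1.1.2, not confirmed behavior:

```python
import datasets

# Workaround sketch (untested assumption for datasets 1.1.2): keep one cache directory
# per encoding so the two configurations can no longer collide in the cache.
latin = datasets.load_dataset("text", data_files=["test1.txt"], split="train",
                              encoding="latin_1", cache_dir="cache_latin_1")
utf8 = datasets.load_dataset("text", data_files=["test1.txt"], split="train",
                             encoding="utf-8", cache_dir="cache_utf_8")

# Alternative sketch: force the dataset to be regenerated instead of reusing the cache.
utf8 = datasets.load_dataset("text", data_files=["test1.txt"], split="train",
                             encoding="utf-8", download_mode="force_redownload")
```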
closed
https://github.com/huggingface/datasets/issues/730
2020-10-14T02:02:34
2022-11-22T01:45:54
2020-10-29T09:36:01
{ "login": "ArneBinder", "id": 3375489, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
719,558,876
729
Better error message when one forgets to call `add_batch` before `compute`
When using metrics, if for some reason a user forgets to call `add_batch` to a metric before `compute` (with no arguments), the error message is a bit cryptic and could probably be made clearer. ## Reproducer ```python import datasets import torch from datasets import Metric class GatherMetric(Metric): def _info(self): return datasets.MetricInfo( description="description", citation="citation", inputs_description="kwargs", features=datasets.Features({ 'predictions': datasets.Value('int64'), 'references': datasets.Value('int64'), }), codebase_urls=[], reference_urls=[], format='numpy' ) def _compute(self, predictions, references): return {"predictions": predictions, "labels": references} metric = GatherMetric(cache_dir="test-metric") inputs = torch.randint(0, 2, (1024,)) targets = torch.randint(0, 2, (1024,)) batch_size = 8 for i in range(0, 1024, batch_size): pass # User forgets to call `add_batch` result = metric.compute() ``` ## Stack trace: ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-13-267729d187fa> in <module> 3 pass 4 # metric.add_batch(predictions=inputs[i:i+batch_size], references=targets[i:i+batch_size]) ----> 5 result = metric.compute() ~/git/datasets/src/datasets/metric.py in compute(self, *args, **kwargs) 380 if predictions is not None: 381 self.add_batch(predictions=predictions, references=references) --> 382 self._finalize() 383 384 self.cache_file_name = None ~/git/datasets/src/datasets/metric.py in _finalize(self) 343 elif self.process_id == 0: 344 # Let's acquire a lock on each node files to be sure they are finished writing --> 345 file_paths, filelocks = self._get_all_cache_files() 346 347 # Read the predictions and references ~/git/datasets/src/datasets/metric.py in _get_all_cache_files(self) 280 filelocks = [] 281 for process_id, file_path in enumerate(file_paths): --> 282 filelock = FileLock(file_path + ".lock") 283 try: 284 filelock.acquire(timeout=self.timeout) TypeError: unsupported operand type(s) for +: 'NoneType' and 'str' ```
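Until a clearer message exists, a hypothetical user-side guard could catch this case explicitly (the helper name and the attribute check are assumptions, not part of the `datasets` API):

```python
# Hypothetical helper (not part of the datasets API): fail with a readable message
# instead of the TypeError above when nothing was ever added to the metric.
def safe_compute(metric, **kwargs):
    if not kwargs and getattr(metric, "cache_file_name", None) is None:
        raise ValueError(
            "Nothing to compute: call metric.add_batch(...) first, "
            "or pass predictions=... and references=... directly to compute()."
        )
    return metric.compute(**kwargs)
```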
closed
https://github.com/huggingface/datasets/issues/729
2020-10-12T17:59:22
2020-10-29T15:18:24
2020-10-29T15:18:24
{ "login": "sgugger", "id": 35901082, "type": "User" }
[]
false
[]
719,555,780
728
Passing `cache_dir` to a metric does not work
When passing `cache_dir` to a custom metric, the folder is concatenated to itself at some point and this results in a FileNotFoundError: ## Reproducer ```python import datasets import torch from datasets import Metric class GatherMetric(Metric): def _info(self): return datasets.MetricInfo( description="description", citation="citation", inputs_description="kwargs", features=datasets.Features({ 'predictions': datasets.Value('int64'), 'references': datasets.Value('int64'), }), codebase_urls=[], reference_urls=[], format='numpy' ) def _compute(self, predictions, references): return {"predictions": predictions, "labels": references} metric = GatherMetric(cache_dir="test-metric") inputs = torch.randint(0, 2, (1024,)) targets = torch.randint(0, 2, (1024,)) batch_size = 8 for i in range(0, 1024, batch_size): metric.add_batch(predictions=inputs[i:i+batch_size], references=targets[i:i+batch_size]) result = metric.compute() ``` ## Stack trace: ``` --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) ~/git/datasets/src/datasets/metric.py in _finalize(self) 349 reader = ArrowReader(path=self.data_dir, info=DatasetInfo(features=self.features)) --> 350 self.data = Dataset(**reader.read_files([{"filename": f} for f in file_paths])) 351 except FileNotFoundError: ~/git/datasets/src/datasets/arrow_reader.py in read_files(self, files, original_instructions) 227 # Prepend path to filename --> 228 pa_table = self._read_files(files) 229 files = copy.deepcopy(files) ~/git/datasets/src/datasets/arrow_reader.py in _read_files(self, files) 166 for f_dict in files: --> 167 pa_table: pa.Table = self._get_dataset_from_filename(f_dict) 168 pa_tables.append(pa_table) ~/git/datasets/src/datasets/arrow_reader.py in _get_dataset_from_filename(self, filename_skip_take) 291 ) --> 292 mmap = pa.memory_map(filename) 293 f = pa.ipc.open_stream(mmap) ~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.memory_map() ~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.MemoryMappedFile._open() ~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() ~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() FileNotFoundError: [Errno 2] Failed to open local file 'test-metric/gather_metric/default/test-metric/gather_metric/default/default_experiment-1-0.arrow'. Detail: [errno 2] No such file or directory During handling of the above exception, another exception occurred: ValueError Traceback (most recent call last) <ipython-input-17-e42d43cc981f> in <module> 2 for i in range(0, 1024, batch_size): 3 metric.add_batch(predictions=inputs[i:i+batch_size], references=targets[i:i+batch_size]) ----> 4 result = metric.compute() ~/git/datasets/src/datasets/metric.py in compute(self, *args, **kwargs) 380 if predictions is not None: 381 self.add_batch(predictions=predictions, references=references) --> 382 self._finalize() 383 384 self.cache_file_name = None ~/git/datasets/src/datasets/metric.py in _finalize(self) 351 except FileNotFoundError: 352 raise ValueError( --> 353 "Error in finalize: another metric instance is already using the local cache file. " 354 "Please specify an experiment_id to avoid colision between distributed metric instances." 355 ) ValueError: Error in finalize: another metric instance is already using the local cache file. 
Please specify an experiment_id to avoid colision between distributed metric instances. ``` The code works when we remove the `cache_dir=...` from the metric.
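A hypothetical illustration of how the doubled prefix in the traceback can arise when a relative directory is prepended to a path that already contains it (this is a guess about the symptom, not the actual `datasets` code):

```python
import os

# Guess at the symptom (not the actual datasets code): if the stored file path already
# includes the relative data_dir and the reader prepends data_dir again, the cache
# directory shows up twice, exactly as in the FileNotFoundError above.
data_dir = "test-metric/gather_metric/default"
file_path = os.path.join(data_dir, "default_experiment-1-0.arrow")
print(os.path.join(data_dir, file_path))
# -> test-metric/gather_metric/default/test-metric/gather_metric/default/default_experiment-1-0.arrow
```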
closed
https://github.com/huggingface/datasets/issues/728
2020-10-12T17:55:14
2020-10-29T09:34:42
2020-10-29T09:34:42
{ "login": "sgugger", "id": 35901082, "type": "User" }
[]
false
[]
719,386,366
727
Parallel downloads progress bar flickers
When there are parallel downloads using the download manager, the tqdm progress bars flicker since they are all on the same line. To fix that we could simply specify `position=i` (for i from 0 to n-1, where n is the number of files to download) when instantiating each tqdm progress bar. Another way would be to have one "master" progress bar that tracks the number of finished downloads, and then one progress bar per process that shows its current download.
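A small sketch of the `position=i` suggestion (file names and totals are made up):

```python
from tqdm import tqdm

# Sketch of the position=i suggestion: give each parallel download its own line so the
# bars stop overwriting each other (file names and totals here are made up).
files = ["file_a.zip", "file_b.zip", "file_c.zip"]
bars = [tqdm(total=100, position=i, desc=name) for i, name in enumerate(files)]
for bar in bars:
    bar.update(100)  # in practice each worker updates its own bar as bytes arrive
    bar.close()
```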
open
https://github.com/huggingface/datasets/issues/727
2020-10-12T13:36:05
2020-10-12T13:36:05
null
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
false
[]
719,313,754
726
"Checksums didn't match for dataset source files" error while loading openwebtext dataset
Hi, I have encountered this problem while loading the openwebtext dataset: ``` >>> dataset = load_dataset('openwebtext') Downloading and preparing dataset openwebtext/plain_text (download: 12.00 GiB, generated: 37.04 GiB, post-processed: Unknown size, total: 49.03 GiB) to /home/admin/.cache/huggingface/datasets/openwebtext/plain_text/1.0.0/5c636399c7155da97c982d0d70ecdce30fbca66a4eb4fc768ad91f8331edac02... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/admin/workspace/anaconda3/envs/torch1.6-py3.7/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/home/admin/workspace/anaconda3/envs/torch1.6-py3.7/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/admin/workspace/anaconda3/envs/torch1.6-py3.7/lib/python3.7/site-packages/datasets/builder.py", line 536, in _download_and_prepare self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files" File "/home/admin/workspace/anaconda3/envs/torch1.6-py3.7/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 39, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://zenodo.org/record/3834942/files/openwebtext.tar.xz'] ``` I think this problem is caused by a change in the released dataset. Or should I download the dataset manually? Sorry for releasing the unfinished issue by mistake.
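A possible stopgap while the recorded checksum metadata is outdated (note that this also disables the integrity check on the downloaded file):

```python
from datasets import load_dataset

# Possible stopgap while the recorded checksums are outdated: skip the verification.
# Use with care, since this also disables the integrity check on the downloaded file.
dataset = load_dataset("openwebtext", ignore_verifications=True)
```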
closed
https://github.com/huggingface/datasets/issues/726
2020-10-12T11:45:10
2022-02-17T17:53:54
2022-02-15T10:38:57
{ "login": "SparkJiao", "id": 16469472, "type": "User" }
[]
false
[]
718,985,641
725
pretty print dataset objects
Currently, if I do: ``` from datasets import load_dataset load_dataset("wikihow", 'all', data_dir="/hf/pegasus-datasets/wikihow/") ``` I get: ``` DatasetDict({'train': Dataset(features: {'text': Value(dtype='string', id=None), 'headline': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None)}, num_rows: 157252), 'validation': Dataset(features: {'text': Value(dtype='string', id=None), 'headline': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None)}, num_rows: 5599), 'test': Dataset(features: {'text': Value(dtype='string', id=None), 'headline': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None)}, num_rows: 5577)}) ``` This is not very readable. Can we either have a better `__repr__` or have a custom method to nicely pprint the dataset object? Here is my very simple attempt. With this PR, it produces: ``` DatasetDict({ train: Dataset({ features: ['text', 'headline', 'title'], num_rows: 157252 }) validation: Dataset({ features: ['text', 'headline', 'title'], num_rows: 5599 }) test: Dataset({ features: ['text', 'headline', 'title'], num_rows: 5577 }) }) ``` I did omit the data types on purpose to make it more readable, but it shouldn't be too difficult to integrate those too. note that this PR also fixes the inconsistency in output that in master misses enclosing `{}` for Dataset, but it is there for `DatasetDict` - or perhaps it was by design. I'm totally not attached to this format, just wanting something more readable. One approach could be to serialize to `json.dumps` or something similar. It'd make the indentation simpler. Thank you.
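Not the code from this PR, just a minimal sketch of the kind of indented, nested rendering shown above:

```python
import textwrap

# Minimal sketch (not the actual PR code) of the indented, nested rendering shown above.
def pretty(name, fields, indent=4):
    body = "\n".join(f"{key}: {value}" for key, value in fields.items())
    return f"{name}({{\n{textwrap.indent(body, ' ' * indent)}\n}})"

print(pretty("Dataset", {"features": ["text", "headline", "title"], "num_rows": 157252}))
```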
closed
https://github.com/huggingface/datasets/pull/725
2020-10-12T02:03:46
2020-10-23T16:24:35
2020-10-23T09:00:46
{ "login": "stas00", "id": 10676103, "type": "User" }
[]
true
[]
718,947,700
724
need to redirect /nlp to /datasets and remove outdated info
It looks like the website still has all the `nlp` data, e.g.: https://huggingface.co/nlp/viewer/?dataset=wikihow&config=all should probably redirect to: https://huggingface.co/datasets/wikihow. Also, for some reason the new information is slightly broken: the old page was nicely formatted and had the links marked up, while the new one is just a jumble of text in one chunk with no markup for links (i.e. not clickable).
closed
https://github.com/huggingface/datasets/issues/724
2020-10-11T23:12:12
2020-10-14T17:00:12
2020-10-14T17:00:12
{ "login": "stas00", "id": 10676103, "type": "User" }
[]
false
[]
718,926,723
723
Adding pseudo-labels to datasets
I recently [uploaded pseudo-labels](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/precomputed_pseudo_labels.md) for CNN/DM, XSUM and WMT16-en-ro to S3, and thom mentioned I should add them to this repo. Since pseudo-labels are just a large model's generations on an existing dataset, what is the right way to structure this contribution? I read https://huggingface.co/docs/datasets/add_dataset.html, but it doesn't really cover this type of contribution. I could, for example, make a new directory such as `xsum_bart_pseudolabels` for each set of pseudo-labels, or add some sort of parametrization to `xsum.py`: https://github.com/huggingface/datasets/blob/5f4c6e830f603830117877b8990a0e65a2386aa6/datasets/xsum/xsum.py What do you think @lhoestq ?
closed
https://github.com/huggingface/datasets/issues/723
2020-10-11T21:05:45
2021-08-03T05:11:51
2021-08-03T05:11:51
{ "login": "sshleifer", "id": 6045025, "type": "User" }
[]
false
[]
718,689,117
722
datasets(RWTH-PHOENIX-Weather 2014 T): add initial loading script
This is the first sign language dataset in this repo as far as I know. It follows an old issue I opened: https://github.com/huggingface/datasets/issues/302. I added the dataset's official README file, but I see it's not very standard, so it can be removed.
closed
https://github.com/huggingface/datasets/pull/722
2020-10-10T19:44:08
2022-09-30T14:53:37
2022-09-30T14:53:37
{ "login": "AmitMY", "id": 5757359, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
718,647,147
721
feat(dl_manager): add support for ftp downloads
I am working on a new dataset (#302) and encounter a problem downloading it. ```python # This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/ _URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz" dl_manager.download_and_extract(_URL) ``` I get an error: > ValueError: unable to parse ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz as a URL or as a local path I checked, and indeed you don't consider `ftp` as a remote file. https://github.com/huggingface/datasets/blob/4c2af707a6955cf4b45f83ac67990395327c5725/src/datasets/utils/file_utils.py#L188 Adding `ftp` to that list does not immediately solve the issue, so there probably needs to be some extra work.
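A rough sketch of the kind of standard-library fallback that could fetch such links (hypothetical helper, not the `dl_manager` implementation):

```python
import shutil
import urllib.request
from urllib.parse import urlparse

# Hypothetical helper (not the dl_manager implementation): urllib can already open
# ftp:// URLs, so a download fallback for them only needs the standard library.
def download_ftp(url: str, local_path: str) -> str:
    assert urlparse(url).scheme == "ftp"
    with urllib.request.urlopen(url) as response, open(local_path, "wb") as out_file:
        shutil.copyfileobj(response, out_file)
    return local_path
```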
closed
https://github.com/huggingface/datasets/issues/721
2020-10-10T15:50:20
2022-02-15T10:44:44
2022-02-15T10:44:43
{ "login": "AmitMY", "id": 5757359, "type": "User" }
[]
false
[]
716,581,266
720
OSError: Cannot find data file when not using the dummy dataset in RAG
## Environment info transformers version: 3.3.1 Platform: Linux-4.19 Python version: 3.7.7 PyTorch version (GPU?): 1.6.0 Tensorflow version (GPU?): No Using GPU in script?: Yes Using distributed or parallel set-up in script?: No ## To reproduce Steps to reproduce the behaviour: ``` import os os.environ['HF_DATASETS_CACHE'] = '/workspace/notebooks/POCs/cache' from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq") retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=False) ``` Plese note that I'm using the whole dataset: **use_dummy_dataset=False** After around 4 hours (downloading and some other things) this is returned: ``` Downloading and preparing dataset wiki_dpr/psgs_w100.nq.exact (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /workspace/notebooks/POCs/cache/wiki_dpr/psgs_w100.nq.exact/0.0.0/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2... --------------------------------------------------------------------------- UnpicklingError Traceback (most recent call last) /opt/conda/lib/python3.7/site-packages/numpy/lib/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding) 459 try: --> 460 return pickle.load(fid, **pickle_kwargs) 461 except Exception: UnpicklingError: pickle data was truncated During handling of the above exception, another exception occurred: OSError Traceback (most recent call last) /opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 552 # Prepare split will record examples associated to the split --> 553 self._prepare_split(split_generator, **prepare_split_kwargs) 554 except OSError: /opt/conda/lib/python3.7/site-packages/datasets/builder.py in _prepare_split(self, split_generator) 840 for key, record in utils.tqdm( --> 841 generator, unit=" examples", total=split_info.num_examples, leave=False, disable=not_verbose 842 ): /opt/conda/lib/python3.7/site-packages/tqdm/notebook.py in __iter__(self, *args, **kwargs) 217 try: --> 218 for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs): 219 # return super(tqdm...) 
will not catch exception /opt/conda/lib/python3.7/site-packages/tqdm/std.py in __iter__(self) 1128 try: -> 1129 for obj in iterable: 1130 yield obj ~/.cache/huggingface/modules/datasets_modules/datasets/wiki_dpr/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2/wiki_dpr.py in _generate_examples(self, data_file, vectors_files) 131 break --> 132 vecs = np.load(open(vectors_files.pop(0), "rb"), allow_pickle=True) 133 vec_idx = 0 /opt/conda/lib/python3.7/site-packages/numpy/lib/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding) 462 raise IOError( --> 463 "Failed to interpret file %s as a pickle" % repr(file)) 464 finally: OSError: Failed to interpret file <_io.BufferedReader name='/workspace/notebooks/POCs/cache/downloads/f34d5f091294259b4ca90e813631e69a6ded660d71b6cbedf89ddba50df94448'> as a pickle During handling of the above exception, another exception occurred: OSError Traceback (most recent call last) <ipython-input-10-f28df370ac47> in <module> 1 # ln -s /workspace/notebooks/POCs/cache /root/.cache/huggingface/datasets ----> 2 retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=False) /opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in from_pretrained(cls, retriever_name_or_path, **kwargs) 307 generator_tokenizer = rag_tokenizer.generator 308 return cls( --> 309 config, question_encoder_tokenizer=question_encoder_tokenizer, generator_tokenizer=generator_tokenizer 310 ) 311 /opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in __init__(self, config, question_encoder_tokenizer, generator_tokenizer) 298 self.config = config 299 if self._init_retrieval: --> 300 self.init_retrieval() 301 302 @classmethod /opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in init_retrieval(self) 324 325 logger.info("initializing retrieval") --> 326 self.index.init_index() 327 328 def postprocess_docs(self, docs, input_strings, prefix, n_docs, return_tensors=None): /opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in init_index(self) 238 split=self.dataset_split, 239 index_name=self.index_name, --> 240 dummy=self.use_dummy_dataset, 241 ) 242 self.dataset.set_format("numpy", columns=["embeddings"], output_all_columns=True) /opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 609 download_config=download_config, 610 download_mode=download_mode, --> 611 ignore_verifications=ignore_verifications, 612 ) 613 /opt/conda/lib/python3.7/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 474 if not downloaded_from_gcs: 475 self._download_and_prepare( --> 476 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 477 ) 478 # Sync info /opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 553 self._prepare_split(split_generator, **prepare_split_kwargs) 554 except OSError: --> 555 raise OSError("Cannot find data file. " + (self.manual_download_instructions or "")) 556 557 if verify_infos: OSError: Cannot find data file. ``` Thanks
closed
https://github.com/huggingface/datasets/issues/720
2020-10-07T14:27:13
2020-12-23T14:04:31
2020-12-23T14:04:31
{ "login": "josemlopez", "id": 4112135, "type": "User" }
[]
false
[]
716,492,263
719
Fix train_test_split output format
There was an issue in the `transmit_format` wrapper that returned bad formats when using train_test_split. This was due to `column_names` being handled as a List[str] instead of a Dict[str, List[str]] when the dataset transform (train_test_split) returns a DatasetDict (one set of column names per split). This should fix @timothyjlaurent's issue in #620 and fix #676. I added tests for `transmit_format` so that it doesn't happen again.
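A simplified sketch of the distinction being fixed (hypothetical helper, not the actual `transmit_format` code):

```python
# Simplified sketch (not the actual transmit_format code): after train_test_split the
# result is a DatasetDict, so column names arrive as one list per split rather than a
# flat list, and the format columns have to be filtered per split.
def format_columns_per_split(column_names, format_columns):
    if isinstance(column_names, dict):  # DatasetDict: {"train": [...], "test": [...]}
        return {split: [c for c in format_columns if c in cols]
                for split, cols in column_names.items()}
    return [c for c in format_columns if c in column_names]  # single Dataset
```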
closed
https://github.com/huggingface/datasets/pull/719
2020-10-07T12:39:01
2020-10-07T13:38:08
2020-10-07T13:38:06
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
715,694,709
718
Don't use tqdm 4.50.0
tqdm 4.50.0 introduced permission errors on Windows; see [here](https://app.circleci.com/pipelines/github/huggingface/datasets/235/workflows/cfb6a39f-68eb-4802-8b17-2cd5e8ea7369/jobs/1111) for the error details. For now I just added `<4.50.0` in the setup.py. Hopefully we can find out what's wrong with this version soon.
closed
https://github.com/huggingface/datasets/pull/718
2020-10-06T13:45:53
2020-10-06T13:49:24
2020-10-06T13:49:22
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
714,959,268
717
Fixes #712 Error in the Overview.ipynb notebook
Fixes #712 Error in the Overview.ipynb notebook by adding `with_details=True` parameter to `list_datasets` function in Cell 3 of **overview** notebook
closed
https://github.com/huggingface/datasets/pull/717
2020-10-05T15:50:41
2020-10-06T06:31:43
2020-10-05T16:25:41
{ "login": "subhrm", "id": 850012, "type": "User" }
[]
true
[]
714,952,888
716
Fixes #712 Attribute error in cell 3 of the overview notebook
Fixes the Attribute error in cell 3 of the overview notebook
closed
https://github.com/huggingface/datasets/pull/716
2020-10-05T15:42:09
2020-10-05T15:46:38
2020-10-05T15:46:32
{ "login": "subhrm", "id": 850012, "type": "User" }
[]
true
[]
714,690,192
715
Use python read for text dataset
As mentioned in #622 the pandas reader used for text dataset doesn't work properly when there are \r characters in the text file. Instead I switched to pure python using `open` and `read`. From my benchmark on a 100MB text file, it's the same speed as the previous pandas reader.
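A small sketch of the plain-Python reading pattern described, using `read(chunksize)` plus `readline()` so chunks always end on a line boundary (the chunk size is an arbitrary assumption):

```python
# Sketch of the plain-Python reading pattern: read a fixed-size block, then finish the
# partially read last line with readline() so every yielded example is a whole line.
def iter_text_examples(path, chunksize=10 << 20, encoding="utf-8"):
    with open(path, encoding=encoding) as f:
        while True:
            batch = f.read(chunksize)
            if not batch:
                break
            batch += f.readline()  # complete the last, possibly partial, line
            for line in batch.splitlines():
                yield {"text": line}
```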
closed
https://github.com/huggingface/datasets/pull/715
2020-10-05T09:47:55
2020-10-05T13:13:18
2020-10-05T13:13:17
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
714,487,881
714
Add the official dependabot implementation
This will keep dependencies up to date. It will require a PR label `dependencies` to be created in order to function correctly.
closed
https://github.com/huggingface/datasets/pull/714
2020-10-05T03:49:45
2020-10-12T11:49:21
2020-10-12T11:49:21
{ "login": "ALazyMeme", "id": 12804673, "type": "User" }
[]
true
[]
714,475,732
713
Fix reading text files with carriage return symbols
The new pandas-based text reader isn't able to work properly with files that contain carriage return symbols (`\r`). It fails with the following error message: ``` ... File "pandas/_libs/parsers.pyx", line 847, in pandas._libs.parsers.TextReader.read File "pandas/_libs/parsers.pyx", line 874, in pandas._libs.parsers.TextReader._read_low_memory File "pandas/_libs/parsers.pyx", line 918, in pandas._libs.parsers.TextReader._read_rows File "pandas/_libs/parsers.pyx", line 905, in pandas._libs.parsers.TextReader._tokenize_rows File "pandas/_libs/parsers.pyx", line 2042, in pandas._libs.parsers.raise_parser_error pandas.errors.ParserError: Error tokenizing data. C error: Buffer overflow caught - possible malformed input file. ``` ___ I figured out that pandas uses those symbols as line terminators, and this eventually causes the error. Explicitly specifying the `lineterminator` fixes the issue and everything works fine. Please consider this PR, as it seems to be a common issue to solve.
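A small self-contained illustration of what the explicit `lineterminator` changes (the sample file is made up):

```python
import pandas as pd

# Illustration with a made-up sample file: by default the C parser also treats "\r" as a
# row break, while an explicit lineterminator="\n" keeps stray carriage returns inside
# the field instead of splitting (or, on malformed input, crashing) on them.
with open("sample.txt", "w", newline="", encoding="utf-8") as f:
    f.write("first line with a stray carriage return\rstill the same line\n")
    f.write("second line\n")

default_df = pd.read_csv("sample.txt", names=["text"], header=None)
fixed_df = pd.read_csv("sample.txt", names=["text"], header=None, lineterminator="\n")
# with the defaults the \r starts a new row; with lineterminator="\n" it stays in the field
print(len(default_df), len(fixed_df))
```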
closed
https://github.com/huggingface/datasets/pull/713
2020-10-05T03:07:03
2020-10-09T05:58:25
2020-10-05T13:49:29
{ "login": "mozharovsky", "id": 6762769, "type": "User" }
[]
true
[]
714,242,316
712
Error in the notebooks/Overview.ipynb notebook
Hi, I got the following error in **cell number 3** while exploring the **Overview.ipynb** notebook in google colab. I used the [link ](https://colab.research.google.com/github/huggingface/datasets/blob/master/notebooks/Overview.ipynb) provided in the main README file to open it in colab. ```python # You can access various attributes of the datasets before downloading them squad_dataset = list_datasets()[datasets.index('squad')] pprint(squad_dataset.__dict__) # It's a simple python dataclass ``` Error message ``` --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-5-8dc805c4949c> in <module>() 2 squad_dataset = list_datasets()[datasets.index('squad')] 3 ----> 4 pprint(squad_dataset.__dict__) # It's a simple python dataclass AttributeError: 'str' object has no attribute '__dict__' ``` The object `squad_dataset` is a `str` not a `dataclass` .
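For reference, a sketch of the fixed cell using `with_details=True` (see #717); it assumes the detailed listing returns objects with an `id` attribute rather than plain strings:

```python
from pprint import pprint
from datasets import list_datasets

# Sketch of the fixed cell (see #717): request detailed objects instead of plain names.
# Assumes the detailed listing returns objects exposing an `id` attribute.
datasets_list = list_datasets(with_details=True)
squad_dataset = [ds for ds in datasets_list if ds.id == "squad"][0]
pprint(squad_dataset.__dict__)
```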
closed
https://github.com/huggingface/datasets/issues/712
2020-10-04T05:58:31
2020-10-05T16:25:40
2020-10-05T16:25:40
{ "login": "subhrm", "id": 850012, "type": "User" }
[]
false
[]
714,236,408
711
New Update bertscore.py
closed
https://github.com/huggingface/datasets/pull/711
2020-10-04T05:13:09
2020-10-05T16:26:51
2020-10-05T16:26:51
{ "login": "DayasagarRSalian", "id": 51692618, "type": "User" }
[]
true
[]
714,186,999
710
fix README typos/ consistency
closed
https://github.com/huggingface/datasets/pull/710
2020-10-03T22:20:56
2020-10-17T09:52:45
2020-10-17T09:52:45
{ "login": "discdiver", "id": 7703961, "type": "User" }
[]
true
[]
714,067,902
709
How to use similarity settings other then "BM25" in Elasticsearch index ?
**QUESTION : How should we use other similarity algorithms supported by Elasticsearch other than "BM25" ?** **ES Reference** https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules-similarity.html **HF doc reference:** https://huggingface.co/docs/datasets/faiss_and_ea.html **context :** ======== I used the latest Elasticsearch server version 7.9.2 When I set DFR which is one of the other similarity algorithms supported by elasticsearch in the mapping, I get an error For example DFR that I had tried in the first instance in mappings as below., `"mappings": {"properties": {"text": {"type": "text", "analyzer": "standard", "similarity": "DFR"}}},` I get the following error RequestError: RequestError(400, 'mapper_parsing_exception', 'Unknown Similarity type [DFR] for field [text]') The other thing as another option I had tried was to declare "similarity": "my_similarity" within settings and then assigning "my_similarity" inside the mappings as below `es_config = { "settings": { "number_of_shards": 1, **"similarity": "my_similarity"**: { "type": "DFR", "basic_model": "g", "after_effect": "l", "normalization": "h2", "normalization.h2.c": "3.0" } , "analysis": {"analyzer": {"stop_standard": {"type": "standard", " stopwords": "_english_"}}}, }, "mappings": {"properties": {"text": {"type": "text", "analyzer": "standard", "similarity": "my_similarity"}}}, }` For this , I got the following error RequestError: RequestError(400, 'illegal_argument_exception', 'unknown setting [index.similarity] please check that any required plugins are installed, or check the breaking changes documentation for removed settings')
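For what it's worth, a hedged guess at the intended configuration, following the Elasticsearch similarity-module docs linked above: the custom similarity is declared as a named object under `index.similarity` and referenced by name in the mapping (not verified against the datasets ES integration):

```python
# Hedged guess based on the Elasticsearch similarity-module docs linked above: declare
# the custom DFR similarity as a named object under index.similarity, then reference it
# by name ("my_similarity") in the field mapping.
es_config = {
    "settings": {
        "index": {
            "similarity": {
                "my_similarity": {
                    "type": "DFR",
                    "basic_model": "g",
                    "after_effect": "l",
                    "normalization": "h2",
                    "normalization.h2.c": "3.0",
                }
            }
        },
        "analysis": {"analyzer": {"stop_standard": {"type": "standard", "stopwords": "_english_"}}},
    },
    "mappings": {
        "properties": {"text": {"type": "text", "analyzer": "standard", "similarity": "my_similarity"}}
    },
}
```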
closed
https://github.com/huggingface/datasets/issues/709
2020-10-03T11:18:49
2022-10-04T17:19:37
2022-10-04T17:19:37
{ "login": "nsankar", "id": 431890, "type": "User" }
[]
false
[]
714,020,953
708
Datasets performance slow? - 6.4x slower than in memory dataset
I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset. Now, this is expected I guess, due to memory mapping data using arrow files, and you don't get anything for free. But I was surprised at how much slower. For example, in the `yelp_polarity` dataset (560000 datapoints, or 17500 batches of 32), it was taking me 3:31 to just get process the data and get it on the GPU (no model involved). Whereas, the equivalent in-memory dataset would finish in just 0:33. Is this expected? Given that one of the goals of this project is also accelerate dataset processing, this seems a bit slower than I would expect. I understand the advantages of being able to work on datasets that exceed memory, and that's very exciting to me, but thought I'd open this issue to discuss. For reference I'm running a AMD Ryzen Threadripper 1900X 8-Core Processor CPU, with 128 GB of RAM and an NVME SSD Samsung 960 EVO. I'm running with an RTX Titan 24GB GPU. I can see with `iotop` that the dataset gets quickly loaded into the system read buffers, and thus doesn't incur any additional IO reads. Thus in theory, all the data *should* be in RAM, but in my benchmark code below it's still 6.4 times slower. What am I doing wrong? And is there a way to force the datasets to completely load into memory instead of being memory mapped in cases where you want maximum performance? At 3:31 for 17500 batches, that's 12ms per batch. Does this 12ms just become insignificant as a proportion of forward and backward passes in practice, and thus it's not worth worrying about this in practice? In any case, here's my code `benchmark.py`. If you run it with an argument of `memory` it will copy the data into memory before executing the same test. ``` py import sys from datasets import load_dataset from transformers import DataCollatorWithPadding, BertTokenizerFast from torch.utils.data import DataLoader from tqdm import tqdm if __name__ == '__main__': tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased') collate_fn = DataCollatorWithPadding(tokenizer, padding=True) ds = load_dataset('yelp_polarity') def do_tokenize(x): return tokenizer(x['text'], truncation=True) ds = ds.map(do_tokenize, batched=True) ds.set_format('torch', ['input_ids', 'token_type_ids', 'attention_mask']) if len(sys.argv) == 2 and sys.argv[1] == 'memory': # copy to memory - probably a faster way to do this - but demonstrates the point # approximately 530 batches per second - 17500 batches in 0:33 print('using memory') _ds = [data for data in tqdm(ds['train'])] else: # approximately 83 batches per second - 17500 batches in 3:31 print('using datasets') _ds = ds['train'] dl = DataLoader(_ds, shuffle=True, collate_fn=collate_fn, batch_size=32, num_workers=4) for data in tqdm(dl): for k, v in data.items(): data[k] = v.to('cuda') ``` For reference, my conda environment is [here](https://gist.github.com/05b6101518ff70ed42a858b302a0405d) Once again, I'm very excited about this library, and how easy it is to load datasets, and to do so without worrying about system memory constraints. Thanks for all your great work.
closed
https://github.com/huggingface/datasets/issues/708
2020-10-03T06:44:07
2021-02-12T14:13:28
2021-02-12T14:13:28
{ "login": "eugeneware", "id": 38154, "type": "User" }
[]
false
[]
713,954,666
707
Requirements should specify pyarrow<1
I was looking at the docs on [Perplexity](https://huggingface.co/transformers/perplexity.html) via GPT2. When you use datasets and try to load Wikitext, you get the error: ``` module 'pyarrow' has no attribute 'PyExtensionType' ``` I traced it back to datasets having installed PyArrow 1.0.1, but there's no pin in the setup file. https://github.com/huggingface/datasets/blob/e86a2a8f869b91654e782c9133d810bb82783200/setup.py#L68 Downgrading with `pip install "pyarrow<1"` resolved the issue.
closed
https://github.com/huggingface/datasets/issues/707
2020-10-02T23:39:39
2020-12-04T08:22:39
2020-10-04T20:50:28
{ "login": "mathcass", "id": 918541, "type": "User" }
[]
false
[]
713,721,959
706
Fix config creation for data files with NamedSplit
During config creation, we need to iterate through the data files of all the splits to compute a hash. To make sure the hash is unique for a given combination of files/splits, we sort the split names. However, `NamedSplit` objects can't be passed to `sorted` and this currently raises an error: we need to sort by the string of their names instead. Fix #705
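A minimal sketch of the change described (sorting by the string names):

```python
from datasets import Split

# Minimal sketch of the fix described above: NamedSplit objects are not orderable, so
# sort the split keys by their string names instead of comparing the objects directly.
data_files = {Split.TRAIN: ["train.csv"], Split.TEST: ["test.csv"]}
# sorted(data_files.keys()) raises: '<' not supported between instances of 'NamedSplit'
sorted_keys = sorted(data_files.keys(), key=str)
print([str(key) for key in sorted_keys])  # ['test', 'train']
```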
closed
https://github.com/huggingface/datasets/pull/706
2020-10-02T15:46:49
2020-10-05T08:15:00
2020-10-05T08:14:59
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
713,709,100
705
TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit'
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.3.1 (installed from master) - `datasets` version: 1.0.2 (installed as a dependency from transformers) - Platform: Linux-4.15.0-118-generic-x86_64-with-debian-stretch-sid - Python version: 3.7.9 I'm testing my own text classification dataset using [this example](https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-generic-text-classification-script-in-tensorflow) from transformers. The dataset is split into train / dev / test, and in csv format, containing just a text and a label columns, using comma as sep. Here's a sample: ``` text,label "Registra-se a presença do acadêmico <name> . <REL_SEP> Ao me deparar com a descrição de dois autores no polo ativo da ação junto ao PJe , margem esquerda foi informado pela procuradora do reclamante que se trata de uma reclamação trabalhista individual . <REL_SEP> Diante disso , face a ausência injustificada do autor <name> , determina-se o ARQUIVAMENTO do presente processo , com relação a este , nos termos do [[ art . 844 da CLT ]] . <REL_SEP> CUSTAS AUTOR - DISPENSADO <REL_SEP> Custas pelo autor no importe de R $326,82 , calculadas sobre R $16.341,03 , dispensadas na forma da lei , em virtude da concessão dos benefícios da Justiça Gratuita , ora deferida . <REL_SEP> Cientes os presentes . <REL_SEP> Audiência encerrada às 8h42min . <REL_SEP> <name> <REL_SEP> Juíza do Trabalho <REL_SEP> Ata redigida por << <name> >> , Secretário de Audiência .",NO_RELATION ``` However, @Santosh-Gupta reported in #7351 that he had the exact same problem using the ChemProt dataset. His colab notebook is referenced in the following section. ## To reproduce Steps to reproduce the behavior: 1. Created a new conda environment using conda env -n transformers python=3.7 2. Cloned transformers master, `cd` into it and installed using pip install --editable . -r examples/requirements.txt 3. Installed tensorflow with `pip install tensorflow` 3. Ran `run_tf_text_classification.py` with the following parameters: ``` --train_file <DATASET_PATH>/train.csv \ --dev_file <DATASET_PATH>/dev.csv \ --test_file <DATASET_PATH>/test.csv \ --label_column_id 1 \ --model_name_or_path neuralmind/bert-base-portuguese-cased \ --output_dir <OUTPUT_PATH> \ --num_train_epochs 4 \ --per_device_train_batch_size 4 \ --per_device_eval_batch_size 4 \ --do_train \ --do_eval \ --do_predict \ --logging_steps 1000 \ --evaluate_during_training \ --save_steps 1000 \ --overwrite_output_dir \ --overwrite_cache ``` I have also copied [@Santosh-Gupta 's colab notebook](https://colab.research.google.com/drive/11APei6GjphCZbH5wD9yVlfGvpIkh8pwr?usp=sharing) as a reference. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> Here is the stack trace: ``` 2020-10-02 07:33:41.622011: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 /media/discoD/repositorios/transformers_pedro/src/transformers/training_args.py:333: FutureWarning: The `evaluate_during_training` argument is deprecated in favor of `evaluation_strategy` (which has more options) FutureWarning, 2020-10-02 07:33:43.471648: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1 2020-10-02 07:33:43.471791: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.472664: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce GTX 1070 computeCapability: 6.1 coreClock: 1.7085GHz coreCount: 15 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 238.66GiB/s 2020-10-02 07:33:43.472684: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 2020-10-02 07:33:43.472765: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10 2020-10-02 07:33:43.472809: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10 2020-10-02 07:33:43.472848: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10 2020-10-02 07:33:43.474209: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10 2020-10-02 07:33:43.474276: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10 2020-10-02 07:33:43.561219: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7 2020-10-02 07:33:43.561397: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.562345: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.563219: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0 2020-10-02 07:33:43.563595: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2020-10-02 07:33:43.570091: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 3591830000 Hz 2020-10-02 07:33:43.570494: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x560842432400 initialized for platform Host (this does not guarantee that XLA will be used). 
Devices: 2020-10-02 07:33:43.570511: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version 2020-10-02 07:33:43.570702: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.571599: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce GTX 1070 computeCapability: 6.1 coreClock: 1.7085GHz coreCount: 15 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 238.66GiB/s 2020-10-02 07:33:43.571633: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 2020-10-02 07:33:43.571645: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10 2020-10-02 07:33:43.571654: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10 2020-10-02 07:33:43.571664: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10 2020-10-02 07:33:43.571691: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10 2020-10-02 07:33:43.571704: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10 2020-10-02 07:33:43.571718: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7 2020-10-02 07:33:43.571770: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.572641: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.573475: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0 2020-10-02 07:33:47.139227: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix: 2020-10-02 07:33:47.139265: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263] 0 2020-10-02 07:33:47.139272: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0: N 2020-10-02 07:33:47.140323: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:47.141248: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:47.142085: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:47.142854: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5371 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1) 2020-10-02 07:33:47.146317: I tensorflow/compiler/xla/service/service.cc:168] XLA 
service 0x5608b95dc5c0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices: 2020-10-02 07:33:47.146336: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): GeForce GTX 1070, Compute Capability 6.1 10/02/2020 07:33:47 - INFO - __main__ - n_replicas: 1, distributed training: False, 16-bits training: False 10/02/2020 07:33:47 - INFO - __main__ - Training/evaluation parameters TFTrainingArguments(output_dir='/media/discoD/models/datalawyer/pedidos/transformers_tf', overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=True, evaluate_during_training=True, evaluation_strategy=<EvaluationStrategy.STEPS: 'steps'>, prediction_loss_only=False, per_device_train_batch_size=4, per_device_eval_batch_size=4, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=4.0, max_steps=-1, warmup_steps=0, logging_dir='runs/Oct02_07-33-43_user-XPS-8700', logging_first_step=False, logging_steps=1000, save_steps=1000, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=1000, dataloader_num_workers=0, past_index=-1, run_name='/media/discoD/models/datalawyer/pedidos/transformers_tf', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=False, tpu_name=None, xla=False) 10/02/2020 07:33:53 - INFO - filelock - Lock 140407857405776 acquired on /home/user/.cache/huggingface/datasets/e0f1e9ed46db1e2429189f06b479cbd4075c0976104c1aacf8f77d9a53d2ad87.03756fef6da334f50a7ff73608e21b5018229944ca250416ce7352e25d84a552.py.lock 10/02/2020 07:33:53 - INFO - filelock - Lock 140407857405776 released on /home/user/.cache/huggingface/datasets/e0f1e9ed46db1e2429189f06b479cbd4075c0976104c1aacf8f77d9a53d2ad87.03756fef6da334f50a7ff73608e21b5018229944ca250416ce7352e25d84a552.py.lock Using custom data configuration default Traceback (most recent call last): File "run_tf_text_classification.py", line 283, in <module> main() File "run_tf_text_classification.py", line 222, in main max_seq_length=data_args.max_seq_length, File "run_tf_text_classification.py", line 43, in get_tfds ds = datasets.load_dataset("csv", data_files=files) File "/media/discoD/anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/load.py", line 604, in load_dataset **config_kwargs, File "/media/discoD/anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/builder.py", line 158, in __init__ **config_kwargs, File "/media/discoD/anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/builder.py", line 269, in _create_builder_config for key in sorted(data_files.keys()): TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit' ``` ## Expected behavior Should be able to run the text-classification example as described in [https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-generic-text-classification-script-in-tensorflow](https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-generic-text-classification-script-in-tensorflow) Originally opened this issue at transformers' repository: [https://github.com/huggingface/transformers/issues/7535](https://github.com/huggingface/transformers/issues/7535). 
@jplu instructed me to open here, since according to [this](https://github.com/huggingface/transformers/issues/7535#issuecomment-702778885) evidence, the problem is from datasets. Thanks!
closed
https://github.com/huggingface/datasets/issues/705
2020-10-02T15:27:55
2020-10-05T08:14:59
2020-10-05T08:14:59
{ "login": "pvcastro", "id": 12713359, "type": "User" }
[]
false
[]
713,572,556
704
Fix remote tests for new datasets
When adding a new dataset, the remote tests fail because they try to get the new dataset from the master branch (i.e., where the dataset doesn't exist yet). To fix that I reverted to using the HF API that fetches the available datasets on S3, which is synced with the master branch.
closed
https://github.com/huggingface/datasets/pull/704
2020-10-02T12:08:04
2020-10-02T12:12:02
2020-10-02T12:12:01
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
713,559,718
703
Add hotpot QA
Added the [HotpotQA](https://github.com/hotpotqa/hotpot) multi-hop question answering dataset.
closed
https://github.com/huggingface/datasets/pull/703
2020-10-02T11:44:28
2020-10-02T12:54:41
2020-10-02T12:54:41
{ "login": "ghomasHudson", "id": 13795113, "type": "User" }
[]
true
[]
713,499,628
702
Complete rouge kwargs
In #701 we noticed that some kwargs were missing for rouge
closed
https://github.com/huggingface/datasets/pull/702
2020-10-02T09:59:01
2020-10-02T10:11:04
2020-10-02T10:11:03
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
713,485,757
701
Add rouge 2 and rouge Lsum to rouge metric outputs
Continuation of #700 Rouge 2 and Rouge Lsum were missing in Rouge's outputs. Rouge Lsum is also useful to evaluate Rouge L for sentences with `\n` Fix #617
closed
https://github.com/huggingface/datasets/pull/701
2020-10-02T09:35:46
2020-10-02T09:55:14
2020-10-02T09:52:18
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
713,450,295
700
Add rouge-2 in rouge_types for metric calculation
The description of the ROUGE metric says, ``` _KWARGS_DESCRIPTION = """ Calculates average rouge scores for a list of hypotheses and references Args: predictions: list of predictions to score. Each predictions should be a string with tokens separated by spaces. references: list of reference for each prediction. Each reference should be a string with tokens separated by spaces. Returns: rouge1: rouge_1 f1, rouge2: rouge_2 f1, rougeL: rouge_l f1, rougeLsum: rouge_l precision """ ``` but the `rouge_types` argument defaults to `rouge_types = ["rouge1", "rougeL"]`; this PR adds `rouge2` to the list so that the output reflects the description card.
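For reference, a usage sketch that requests all four variants explicitly, so the output matches the description regardless of the default list:

```python
from datasets import load_metric

# Usage sketch: request all four ROUGE variants explicitly so the output matches the
# description card regardless of the default rouge_types list.
rouge = load_metric("rouge")
scores = rouge.compute(
    predictions=["the cat sat on the mat"],
    references=["the cat sat on the mat"],
    rouge_types=["rouge1", "rouge2", "rougeL", "rougeLsum"],
)
print(sorted(scores.keys()))
```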
closed
https://github.com/huggingface/datasets/pull/700
2020-10-02T08:36:45
2020-10-02T11:08:49
2020-10-02T09:59:05
{ "login": "Shashi456", "id": 18056781, "type": "User" }
[]
true
[]
713,395,642
699
XNLI dataset is not loading
`dataset = datasets.load_dataset(path='xnli')` shows the error below: ``` /opt/conda/lib/python3.7/site-packages/nlp/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name) 36 if len(bad_urls) > 0: 37 error_msg = "Checksums didn't match" + for_verification_name + ":\n" ---> 38 raise NonMatchingChecksumError(error_msg + str(bad_urls)) 39 logger.info("All the checksums matched successfully" + for_verification_name) 40 NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://www.nyu.edu/projects/bowman/xnli/XNLI-1.0.zip'] ``` I think the URL has now changed to "https://cims.nyu.edu/~sbowman/xnli/XNLI-MT-1.0.zip"
closed
https://github.com/huggingface/datasets/issues/699
2020-10-02T06:53:16
2020-10-03T17:45:52
2020-10-03T17:43:37
{ "login": "imadarsh1001", "id": 14936525, "type": "User" }
[]
false
[]
712,979,029
697
Update README.md
Hey, I was just telling my subscribers to check out your repositories. Thank you.
closed
https://github.com/huggingface/datasets/pull/697
2020-10-01T16:02:42
2020-10-01T16:12:00
2020-10-01T16:12:00
{ "login": "bishug", "id": 71011306, "type": "User" }
[]
true
[]
712,942,977
696
Elasticsearch index docs
I added the docs for ES indexes. I also added a `load_elasticsearch_index` method to load an index that has already been built. I checked the tests for the ES index and we have tests that mock ElasticSearch. I think this is good for now but at some point it would be cool to have an end-to-end test with a real ES running.
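For reference, a small usage sketch of the indexing API described (the host/port values and the `es_index_name` are assumptions for a local Elasticsearch instance):

```python
from datasets import load_dataset

# Usage sketch; host/port and the es_index_name assume a local Elasticsearch instance.
squad = load_dataset("squad", split="validation")
squad.add_elasticsearch_index("context", host="localhost", port=9200,
                              es_index_name="hf_squad_context")
scores, examples = squad.get_nearest_examples("context", "machine learning", k=5)

# In a later session, re-attach the already-built index instead of rebuilding it.
squad.load_elasticsearch_index("context", es_index_name="hf_squad_context",
                               host="localhost", port=9200)
```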
closed
https://github.com/huggingface/datasets/pull/696
2020-10-01T15:18:58
2020-10-02T07:48:19
2020-10-02T07:48:18
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
712,843,949
695
Update XNLI download link
The old link isn't working anymore. I updated it with the new official link. Fix #690
closed
https://github.com/huggingface/datasets/pull/695
2020-10-01T13:27:22
2020-10-01T14:01:15
2020-10-01T14:01:14
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
712,827,751
694
Use GitHub instead of aws in remote dataset tests
Recently we switched from AWS S3 to GitHub to download dataset scripts. However, in the tests the dummy data were still downloaded from S3, so I changed that to download them from GitHub instead, in the MockDownloadManager. Moreover, I noticed that `anli`'s dummy data were quite heavy (18MB compressed, i.e. the entire dataset), so I replaced them with dummy data containing just a few examples.
closed
https://github.com/huggingface/datasets/pull/694
2020-10-01T13:07:50
2020-10-02T07:47:28
2020-10-02T07:47:27
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
712,822,200
693
Rachel ker add dataset/mlsum
.
closed
https://github.com/huggingface/datasets/pull/693
2020-10-01T13:01:10
2023-09-24T09:48:23
2020-10-01T17:01:13
{ "login": "pdhg", "id": 32742136, "type": "User" }
[]
true
[]
712,818,968
692
Update README.md
closed
https://github.com/huggingface/datasets/pull/692
2020-10-01T12:57:22
2020-10-02T11:01:59
2020-10-02T11:01:59
{ "login": "mayank1897", "id": 62796466, "type": "User" }
[]
true
[]
712,389,499
691
Add UI filter to filter datasets based on task
This is great work, so huge shoutout to contributors and huggingface. The [/nlp/viewer](https://huggingface.co/nlp/viewer/) is great and the [/datasets](https://huggingface.co/datasets) page is great. I was wondering if in both or either places we can have a filter that selects if a dataset is good for the following tasks (non exhaustive list) - Classification - Multi label - Multi class - Q&A - Summarization - Translation I believe this feature might have some value, for folks trying to find datasets for a particular task, and then testing their model capabilities. Thank you :)
closed
https://github.com/huggingface/datasets/issues/691
2020-10-01T00:56:18
2022-02-15T10:46:50
2022-02-15T10:46:50
{ "login": "praateekmahajan", "id": 7589415, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
712,150,321
690
XNLI dataset: NonMatchingChecksumError
Hi, I tried to download "xnli" dataset in colab using `xnli = load_dataset(path='xnli')` but got 'NonMatchingChecksumError' error `NonMatchingChecksumError Traceback (most recent call last) <ipython-input-27-a87bedc82eeb> in <module>() ----> 1 xnli = load_dataset(path='xnli') 3 frames /usr/local/lib/python3.6/dist-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name) 37 if len(bad_urls) > 0: 38 error_msg = "Checksums didn't match" + for_verification_name + ":\n" ---> 39 raise NonMatchingChecksumError(error_msg + str(bad_urls)) 40 logger.info("All the checksums matched successfully" + for_verification_name) 41 NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://www.nyu.edu/projects/bowman/xnli/XNLI-1.0.zip']` The same code worked well several days ago in colab but stopped working now. Thanks!
closed
https://github.com/huggingface/datasets/issues/690
2020-09-30T17:50:03
2020-10-01T17:15:08
2020-10-01T14:01:14
{ "login": "xiey1", "id": 13307358, "type": "User" }
[]
false
[]
712,095,262
689
Switch to pandas reader for text dataset
Following the discussion in #622, it appears that there's no appropriate way to use the pyarrow csv reader to read text files because of the separator. In this PR I switched to pandas to read the file. Moreover pandas allows reading the file by chunk, which means that you can build the arrow dataset from a text file that is bigger than RAM (we used to have to shard text files, as mentioned in https://github.com/huggingface/datasets/issues/610#issuecomment-691672919). From a test that I did locally on a 1GB text file, the pyarrow reader used to run in 150ms while the new one takes 650ms (multithreading off for pyarrow). This is probably due to chunking, since I get the same speed difference by calling `read()` versus calling `read(chunksize)` + `readline()` to read the text file.
closed
https://github.com/huggingface/datasets/pull/689
2020-09-30T16:28:12
2020-09-30T16:45:32
2020-09-30T16:45:31
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
711,804,828
688
Disable tokenizers parallelism in multiprocessed map
It was reported in #620 that using multiprocessing with a tokenizer shows this message: ``` The current process just got forked. Disabling parallelism to avoid deadlocks... To disable this warning, please explicitly set TOKENIZERS_PARALLELISM=(true | false) ``` This message is shown when TOKENIZERS_PARALLELISM is unset. Moreover, if it is set to `true`, then the program just hangs. To hide the message (if TOKENIZERS_PARALLELISM is unset) and avoid hanging (if TOKENIZERS_PARALLELISM is `true`), I set TOKENIZERS_PARALLELISM to `false` when forking the process. After forking, it gets set back to its original value. Also I added a warning if TOKENIZERS_PARALLELISM was `true` and is set to `false`: ``` Setting TOKENIZERS_PARALLELISM=false for forked processes. ``` cc @n1t0
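As a side note, a minimal sketch of the environment variable involved (this is just the standard way to set it yourself, not code from the PR):

```python
import os

# Must be set before the fast tokenizer is used in forked worker processes;
# otherwise the tokenizers library prints the warning above (or hangs if set to "true").
os.environ["TOKENIZERS_PARALLELISM"] = "false"
```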
closed
https://github.com/huggingface/datasets/pull/688
2020-09-30T09:53:34
2020-10-01T08:45:46
2020-10-01T08:45:45
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
711,664,810
687
`ArrowInvalid` occurs while running `Dataset.map()` function
It seems to fail to process the final batch. This [colab](https://colab.research.google.com/drive/1_byLZRHwGP13PHMkJWo62Wp50S_Z2HMD?usp=sharing) can reproduce the error. Code: ```python # train_ds = Dataset(features: { # 'title': Value(dtype='string', id=None), # 'score': Value(dtype='float64', id=None) # }, num_rows: 99999) # suggested in #665 class PicklableTokenizer(BertJapaneseTokenizer): def __getstate__(self): state = dict(self.__dict__) state['do_lower_case'] = self.word_tokenizer.do_lower_case state['never_split'] = self.word_tokenizer.never_split del state['word_tokenizer'] return state def __setstate(self): do_lower_case = state.pop('do_lower_case') never_split = state.pop('never_split') self.__dict__ = state self.word_tokenizer = MecabTokenizer( do_lower_case=do_lower_case, never_split=never_split ) t = PicklableTokenizer.from_pretrained('bert-base-japanese-whole-word-masking') encoded = train_ds.map( lambda examples: {'tokens': t.encode(examples['title'], max_length=1000)}, batched=True, batch_size=1000 ) ``` Error Message: ``` 99% 99/100 [00:22<00:00, 39.07ba/s] --------------------------------------------------------------------------- ArrowInvalid Traceback (most recent call last) <timed exec> in <module> /usr/local/lib/python3.6/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint) 1242 fn_kwargs=fn_kwargs, 1243 new_fingerprint=new_fingerprint, -> 1244 update_data=update_data, 1245 ) 1246 else: /usr/local/lib/python3.6/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs) 151 "output_all_columns": self._output_all_columns, 152 } --> 153 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 154 if new_format["columns"] is not None: 155 new_format["columns"] = list(set(new_format["columns"]) & set(out.column_names)) /usr/local/lib/python3.6/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs) 161 # Call actual function 162 --> 163 out = func(self, *args, **kwargs) 164 165 # Update fingerprint of in-place transforms + update in-place history of transforms /usr/local/lib/python3.6/site-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, update_data) 1496 if update_data: 1497 batch = cast_to_python_objects(batch) -> 1498 writer.write_batch(batch) 1499 if update_data: 1500 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file /usr/local/lib/python3.6/site-packages/datasets/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size) 271 typed_sequence = TypedSequence(batch_examples[col], type=col_type, try_type=col_try_type) 272 typed_sequence_examples[col] = typed_sequence --> 273 pa_table = pa.Table.from_pydict(typed_sequence_examples) 274 self.write_table(pa_table) 275 /usr/local/lib/python3.6/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_pydict() /usr/local/lib/python3.6/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_arrays() /usr/local/lib/python3.6/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.validate() /usr/local/lib/python3.6/site-packages/pyarrow/error.pxi in 
pyarrow.lib.check_status() ArrowInvalid: Column 4 named tokens expected length 999 but got length 1000 ```
closed
https://github.com/huggingface/datasets/issues/687
2020-09-30T06:16:50
2020-09-30T09:53:03
2020-09-30T09:53:03
{ "login": "peinan", "id": 5601012, "type": "User" }
[]
false
[]
711,385,739
686
Dataset browser url is still https://huggingface.co/nlp/viewer/
Might be worth updating to https://huggingface.co/datasets/viewer/
closed
https://github.com/huggingface/datasets/issues/686
2020-09-29T19:21:52
2021-01-08T18:29:26
2021-01-08T18:29:26
{ "login": "jarednielsen", "id": 4564897, "type": "User" }
[]
false
[]
711,182,185
685
Add features parameter to CSV
Add support for the `features` parameter when loading a csv dataset: ```python from datasets import load_dataset, Features features = Features({...}) csv_dataset = load_dataset("csv", data_files=["path/to/my/file.csv"], features=features) ``` I added tests to make sure that it is also compatible with the caching system Fix #623
closed
https://github.com/huggingface/datasets/pull/685
2020-09-29T14:43:36
2020-09-30T08:39:56
2020-09-30T08:39:54
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
711,080,947
684
Fix column order issue in cast
Previously, the order of the columns in the features passed to `cast_` mattered. Even when the features passed to `cast_` had the same order as the dataset features, it could fail because the schema that was built was always in alphabetical order. This issue was reported by @lewtun in #623. To fix that I changed the schema to follow the order of the arrow table columns. I also added the possibility to give features that are not ordered the same way as the dataset features.
closed
https://github.com/huggingface/datasets/pull/684
2020-09-29T12:49:13
2020-09-29T15:56:46
2020-09-29T15:56:45
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
710,942,704
683
Fix wrong delimiter in text dataset
The delimiter is set to the bell character, as it is usually used nowhere in text files. However in the text dataset the delimiter was set to `\b`, which is backspace in python, while the bell character is `\a`. I replaced `\b` with `\a`. Hopefully it fixes the issues mentioned by some users in #622
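For illustration, a quick check in a Python shell shows the difference between the two escape sequences:

```python
# "\b" is backspace (0x08), while the bell character is "\a" (0x07)
print(repr("\b"))  # '\x08'
print(repr("\a"))  # '\x07'
```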
closed
https://github.com/huggingface/datasets/pull/683
2020-09-29T09:43:24
2021-05-05T18:24:31
2020-09-29T09:44:06
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
710,325,399
682
Update navbar chapter titles color
Consistency with the color change that was done in transformers at https://github.com/huggingface/transformers/pull/7423 It makes the background-color of the chapter titles in the docs navbar darker, to differentiate them from the inner sections. see changes [here](https://691-250213286-gh.circle-artifacts.com/0/docs/_build/html/index.html)
closed
https://github.com/huggingface/datasets/pull/682
2020-09-28T14:35:17
2020-09-28T17:30:13
2020-09-28T17:30:12
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
710,075,721
681
Adding missing @property (+2 small flake8 fixes).
Fixes #678
closed
https://github.com/huggingface/datasets/pull/681
2020-09-28T08:53:53
2020-09-28T10:26:13
2020-09-28T10:26:09
{ "login": "Narsil", "id": 204321, "type": "User" }
[]
true
[]
710,066,138
680
Fix bug related to boolean in GAP dataset.
### Why I did this The values in `row["A-coref"]` and `row["B-coref"]` are `'TRUE'` or `'FALSE'`. Their type is `string`, and `bool('FALSE')` evaluates to `True` in Python, so both values were being converted to `True`. I fixed this problem. ### What I did I changed `bool(row["A-coref"])` and `bool(row["B-coref"])` to `row["A-coref"] == "TRUE"` and `row["B-coref"] == "TRUE"`. Thank you!
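A small illustration of the Python behaviour behind the bug (not taken from the PR itself):

```python
row = {"A-coref": "FALSE"}

# Any non-empty string is truthy, so the old conversion was always True:
print(bool(row["A-coref"]))      # True
# Comparing against the literal string gives the intended boolean:
print(row["A-coref"] == "TRUE")  # False
```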
closed
https://github.com/huggingface/datasets/pull/680
2020-09-28T08:39:39
2020-09-29T15:54:47
2020-09-29T15:54:47
{ "login": "otakumesi", "id": 14996977, "type": "User" }
[]
true
[]
710,065,838
679
Fix negative ids when slicing with an array
```python from datasets import Dataset d = Dataset.from_dict({"a": range(10)}) print(d[[0, -1]]) # OverflowError ``` raises an error because of the negative id. This PR fixes that. Fix #668
closed
https://github.com/huggingface/datasets/pull/679
2020-09-28T08:39:08
2020-09-28T14:42:20
2020-09-28T14:42:19
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
710,060,497
678
The download instructions for c4 datasets are not contained in the error message
The manual download instructions are not clear ```The dataset c4 with config en requires manual data. Please follow the manual download instructions: <bound method C4.manual_download_instructions of <datasets_modules.datasets.c4.830b0c218bd41fed439812c8dd19dbd4767d2a3faa385eb695cf8666c982b1b3.c4.C4 object at 0x7ff8c5969760>>. Manual data can be loaded with `datasets.load_dataset(c4, data_dir='<path/to/manual/data>') ``` Either `@property` could be added to C4.manual_download_instructions (or it could be made a real property), or the manual_download_instructions function needs to be called, I think. Let me know if you want a PR for this, but I'm not sure which possible fix is the correct one.
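A minimal sketch of the difference between the two suggested fixes (class names here are illustrative only):

```python
class WithoutProperty:
    def manual_download_instructions(self):
        return "Please download the data manually from ..."

class WithProperty:
    @property
    def manual_download_instructions(self):
        return "Please download the data manually from ..."

print(WithoutProperty().manual_download_instructions)  # <bound method ...>, as in the error message
print(WithProperty().manual_download_instructions)     # the actual instructions string
```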
closed
https://github.com/huggingface/datasets/issues/678
2020-09-28T08:30:54
2020-09-28T10:26:09
2020-09-28T10:26:09
{ "login": "Narsil", "id": 204321, "type": "User" }
[]
false
[]
710,055,239
677
Move cache dir root creation in builder's init
We use lock files in the builder initialization, but sometimes the cache directory they're supposed to live in had not been created yet. To fix that I moved the creation of the builder's cache dir root into the builder's init. Fix #671
closed
https://github.com/huggingface/datasets/pull/677
2020-09-28T08:22:46
2020-09-28T14:42:43
2020-09-28T14:42:42
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
710,014,319
676
train_test_split returns empty dataset item
I try to split my dataset with `train_test_split`, but after that the items in the `train` and `test` `Dataset`s are empty. The code: ``` yelp_data = datasets.load_from_disk('/home/ssd4/huanglianzhe/test_yelp') print(yelp_data[0]) yelp_data = yelp_data.train_test_split(test_size=0.1) print(yelp_data) print(yelp_data['test']) print(yelp_data['test'][0]) ``` The output: ``` {'stars': 2.0, 'text': 'xxxx'} Loading cached split indices for dataset at /home/ssd4/huanglianzhe/test_yelp/cache-f9b22d8b9d5a7346.arrow and /home/ssd4/huanglianzhe/test_yelp/cache-4aa26fa4005059d1.arrow DatasetDict({'train': Dataset(features: {'stars': Value(dtype='float64', id=None), 'text': Value(dtype='string', id=None)}, num_rows: 7219009), 'test': Dataset(features: {'stars': Value(dtype='float64', id=None), 'text': Value(dtype='string', id=None)}, num_rows: 802113)}) Dataset(features: {'stars': Value(dtype='float64', id=None), 'text': Value(dtype='string', id=None)}, num_rows: 802113) {} # yelp_data['test'][0] is empty ```
closed
https://github.com/huggingface/datasets/issues/676
2020-09-28T07:19:33
2020-10-07T13:46:33
2020-10-07T13:38:06
{ "login": "mojave-pku", "id": 26648528, "type": "User" }
[]
false
[]
709,818,725
675
Add custom dataset to NLP?
Is it possible to add a custom dataset such as a .csv to the NLP library? Thanks.
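For context, loading a local CSV already works through the generic `csv` loader; a minimal sketch (the file path is just an example):

```python
from datasets import load_dataset

# The column names from the CSV header become the dataset's features.
dataset = load_dataset("csv", data_files="path/to/my_data.csv")
print(dataset["train"][0])
```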
closed
https://github.com/huggingface/datasets/issues/675
2020-09-27T21:22:50
2020-10-20T09:08:49
2020-10-20T09:08:49
{ "login": "timpal0l", "id": 6556710, "type": "User" }
[]
false
[]
709,661,006
674
load_dataset() won't download in Windows
I don't know if this is just me or Windows. Maybe other Windows users can chime in if they don't have this problem. I've been trying to get some of the tutorials working on Windows, but when I use the load_dataset() function, it just stalls and the script keeps running indefinitely without downloading anything. I've waited upwards of 18 hours to download the 'multi-news' dataset (which isn't very big), and still nothing. I've tried running it through different IDE's and the command line, but it had the same behavior. I've also tried it with all virus and malware protection turned off. I've made sure python and all IDE's are exceptions to the firewall and all the requisite permissions are enabled. Additionally, I checked to see if other packages could download content such as an nltk corpus, and they could. I've also run the same script using Ubuntu and it downloaded fine (and quickly). When I copied the downloaded datasets from my Ubuntu drive to my Windows .cache folder it worked fine by reusing the already-downloaded dataset, but it's cumbersome to do that for every dataset I want to try in my Windows environment. Could this be a bug, or is there something I'm doing wrong or not thinking of? Thanks.
closed
https://github.com/huggingface/datasets/issues/674
2020-09-27T03:56:25
2020-10-05T08:28:18
2020-10-05T08:28:18
{ "login": "ThisDavehead", "id": 34422661, "type": "User" }
[]
false
[]
709,603,989
673
blog_authorship_corpus crashed
This is just to report that When I pick blog_authorship_corpus in https://huggingface.co/nlp/viewer/?dataset=blog_authorship_corpus I get this: ![image](https://user-images.githubusercontent.com/7553188/94349542-4364f300-0013-11eb-897d-b25660a449f0.png)
closed
https://github.com/huggingface/datasets/issues/673
2020-09-26T20:15:28
2022-02-15T10:47:58
2022-02-15T10:47:58
{ "login": "Moshiii", "id": 7553188, "type": "User" }
[ { "name": "nlp-viewer", "color": "94203D" } ]
false
[]
709,575,527
672
Questions about XSUM
Hi there ✋ I'm looking into your `xsum` dataset and I have several questions about it. So here is how I loaded the data: ``` >>> data = datasets.load_dataset('xsum', version='1.0.1') >>> data['train'] Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, num_rows: 204017) >>> data['test'] Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, num_rows: 11333) ``` The first issue is that the instance counts don’t match what I see on [the dataset's website](https://github.com/EdinburghNLP/XSum/tree/master/XSum-Dataset#what-builds-the-xsum-dataset) (11,333 vs 11,334 for the test set; 204,017 vs 204,045 for the training set): ``` … training (90%, 204,045), validation (5%, 11,332), and test (5%, 11,334) set. ``` Any thoughts why? Perhaps @mariamabarham could help here, since she recently had a PR on this dataset https://github.com/huggingface/datasets/pull/289 (reviewed by @patrickvonplaten). Another issue is that the instances don't seem to have IDs. The original dataset provides IDs for the instances: https://github.com/EdinburghNLP/XSum/blob/master/XSum-Dataset/XSum-TRAINING-DEV-TEST-SPLIT-90-5-5.json but to be able to use them, the dataset sizes need to match. CC @jbragg
closed
https://github.com/huggingface/datasets/issues/672
2020-09-26T17:16:24
2022-10-04T17:30:17
2022-10-04T17:30:17
{ "login": "danyaljj", "id": 2441454, "type": "User" }
[]
false
[]
709,093,151
671
[BUG] No such file or directory
This happens when both 1. Huggingface datasets cache dir does not exist 2. Try to load a local dataset script builder.py throws an error when trying to create a filelock in a directory (cache/datasets) that does not exist https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L177 Tested on v1.0.2 @lhoestq
closed
https://github.com/huggingface/datasets/issues/671
2020-09-25T16:38:54
2020-09-28T14:42:42
2020-09-28T14:42:42
{ "login": "jbragg", "id": 2238344, "type": "User" }
[]
false
[]
709,061,231
670
Fix SQuAD metric kwargs description
The `answer_start` field was missing in the kwargs docstring. This should fix #657. FYI another fix was proposed by @tshrjn in #658, which suggests removing this field. However IMO `answer_start` is useful to match the squad dataset format for consistency, even though it is not used in the metric computation. I think it's better to keep it this way, so that you can just give references=squad["answers"] to .compute(). Let me know what sounds best to you.
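A sketch of the reference format under discussion (the values are made up, but the field layout follows the SQuAD metric's features):

```python
from datasets import load_metric

squad_metric = load_metric("squad")
predictions = [{"id": "123", "prediction_text": "Denver Broncos"}]
references = [{"id": "123",
               "answers": {"text": ["Denver Broncos"], "answer_start": [177]}}]
# `answer_start` is kept for consistency with the dataset format,
# even though the metric only compares the answer texts.
print(squad_metric.compute(predictions=predictions, references=references))
```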
closed
https://github.com/huggingface/datasets/pull/670
2020-09-25T16:08:57
2020-09-29T15:57:39
2020-09-29T15:57:38
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
708,857,595
669
How to skip a example when running dataset.map
In the processing function, I process examples and detect some invalid ones, which I do not want added to the train dataset. However, I did not find how to skip these recognized invalid examples when doing dataset.map.
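One possible workaround (a sketch, not an official answer from this thread) is to drop the invalid examples with `filter` and only then apply `map`:

```python
def is_valid(example):
    # hypothetical validity check -- replace with your own logic
    return example["text"] is not None and len(example["text"]) > 0

dataset = dataset.filter(is_valid)       # removes the invalid examples
dataset = dataset.map(process_example)   # process_example is your processing function
```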
closed
https://github.com/huggingface/datasets/issues/669
2020-09-25T11:17:53
2022-06-17T21:45:03
2020-10-05T16:28:13
{ "login": "xixiaoyao", "id": 24541791, "type": "User" }
[]
false
[]
708,310,956
668
OverflowError when slicing with an array containing negative ids
```python from datasets import Dataset d = ds.Dataset.from_dict({"a": range(10)}) print(d[0]) # {'a': 0} print(d[-1]) # {'a': 9} print(d[[0, -1]]) # OverflowError ``` results in ``` --------------------------------------------------------------------------- OverflowError Traceback (most recent call last) <ipython-input-5-863dc3555598> in <module> ----> 1 d[[0, -1]] ~/Desktop/hf/nlp/src/datasets/arrow_dataset.py in __getitem__(self, key) 1070 format_columns=self._format_columns, 1071 output_all_columns=self._output_all_columns, -> 1072 format_kwargs=self._format_kwargs, 1073 ) 1074 ~/Desktop/hf/nlp/src/datasets/arrow_dataset.py in _getitem(self, key, format_type, format_columns, output_all_columns, format_kwargs) 1025 indices = key 1026 -> 1027 indices_array = pa.array([int(i) for i in indices], type=pa.uint64()) 1028 1029 # Check if we need to convert indices ~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib.array() ~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array() OverflowError: can't convert negative value to unsigned int ```
closed
https://github.com/huggingface/datasets/issues/668
2020-09-24T16:27:14
2020-09-28T14:42:19
2020-09-28T14:42:19
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
false
[]
708,258,392
667
Loss not decrease with Datasets and Transformers
HI, The following script is used to fine-tune a BertForSequenceClassification model on SST2. The script is adapted from [this colab](https://colab.research.google.com/github/huggingface/datasets/blob/master/notebooks/Overview.ipynb) that presents an example of fine-tuning BertForQuestionAnswering using squad dataset. In that colab, loss works fine. When I adapt it to SST2, the loss fails to decrease as it should. I attach the adapted script below and appreciate anyone pointing out what I miss? ```python import torch from datasets import load_dataset from transformers import BertForSequenceClassification from transformers import BertTokenizerFast # Load our training dataset and tokenizer dataset = load_dataset("glue", 'sst2') tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased') del dataset["test"] # let's remove it in this demo # Tokenize our training dataset def convert_to_features(example_batch): encodings = tokenizer(example_batch["sentence"]) encodings.update({"labels": example_batch["label"]}) return encodings encoded_dataset = dataset.map(convert_to_features, batched=True) # Format our dataset to outputs torch.Tensor to train a pytorch model columns = ['input_ids', 'token_type_ids', 'attention_mask', 'labels'] encoded_dataset.set_format(type='torch', columns=columns) # Instantiate a PyTorch Dataloader around our dataset # Let's do dynamic batching (pad on the fly with our own collate_fn) def collate_fn(examples): return tokenizer.pad(examples, return_tensors='pt') dataloader = torch.utils.data.DataLoader(encoded_dataset['train'], collate_fn=collate_fn, batch_size=8) # Now let's train our model device = 'cuda' if torch.cuda.is_available() else 'cpu' # Let's load a pretrained Bert model and a simple optimizer model = BertForSequenceClassification.from_pretrained('bert-base-cased', return_dict=True) optimizer = torch.optim.Adam(model.parameters(), lr=1e-5) model.train().to(device) for i, batch in enumerate(dataloader): batch.to(device) outputs = model(**batch) loss = outputs.loss loss.backward() optimizer.step() model.zero_grad() print(f'Step {i} - loss: {loss:.3}') ``` In case needed. - datasets == 1.0.2 - transformers == 3.2.0
closed
https://github.com/huggingface/datasets/issues/667
2020-09-24T15:14:43
2021-01-01T20:01:25
2021-01-01T20:01:25
{ "login": "wangcongcong123", "id": 23032865, "type": "User" }
[]
false
[]
707,608,578
666
Do both 'bookcorpus' and 'wikipedia' belong to the datasets which Google used for pretraining BERT?
closed
https://github.com/huggingface/datasets/issues/666
2020-09-23T19:02:25
2020-10-27T15:19:25
2020-10-27T15:19:25
{ "login": "wahab4114", "id": 31090427, "type": "User" }
[]
false
[]
707,037,738
665
runing dataset.map, it raises TypeError: can't pickle Tokenizer objects
I load squad dataset. Then want to process data use following function with `Huggingface Transformers LongformerTokenizer`. ``` def convert_to_features(example): # Tokenize contexts and questions (as pairs of inputs) input_pairs = [example['question'], example['context']] encodings = tokenizer.encode_plus(input_pairs, pad_to_max_length=True, max_length=512) context_encodings = tokenizer.encode_plus(example['context']) # Compute start and end tokens for labels using Transformers's fast tokenizers alignement methodes. # this will give us the position of answer span in the context text start_idx, end_idx = get_correct_alignement(example['context'], example['answers']) start_positions_context = context_encodings.char_to_token(start_idx) end_positions_context = context_encodings.char_to_token(end_idx-1) # here we will compute the start and end position of the answer in the whole example # as the example is encoded like this <s> question</s></s> context</s> # and we know the postion of the answer in the context # we can just find out the index of the sep token and then add that to position + 1 (+1 because there are two sep tokens) # this will give us the position of the answer span in whole example sep_idx = encodings['input_ids'].index(tokenizer.sep_token_id) start_positions = start_positions_context + sep_idx + 1 end_positions = end_positions_context + sep_idx + 1 if end_positions > 512: start_positions, end_positions = 0, 0 encodings.update({'start_positions': start_positions, 'end_positions': end_positions, 'attention_mask': encodings['attention_mask']}) return encodings ``` Then I run `dataset.map(convert_to_features)`, it raise ``` In [59]: a.map(convert_to_features) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-59-c453b508761d> in <module> ----> 1 a.map(convert_to_features) /opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint) 1242 fn_kwargs=fn_kwargs, 1243 new_fingerprint=new_fingerprint, -> 1244 update_data=update_data, 1245 ) 1246 else: /opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs) 151 "output_all_columns": self._output_all_columns, 152 } --> 153 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 154 if new_format["columns"] is not None: 155 new_format["columns"] = list(set(new_format["columns"]) & set(out.column_names)) /opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs) 156 kwargs_for_fingerprint["fingerprint_name"] = fingerprint_name 157 kwargs[fingerprint_name] = update_fingerprint( --> 158 self._fingerprint, transform, kwargs_for_fingerprint 159 ) 160 /opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py in update_fingerprint(fingerprint, transform, transform_args) 103 for key in sorted(transform_args): 104 hasher.update(key) --> 105 hasher.update(transform_args[key]) 106 return hasher.hexdigest() 107 /opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py in update(self, value) 55 def update(self, value): 56 self.m.update(f"=={type(value)}==".encode("utf8")) ---> 57 self.m.update(self.hash(value).encode("utf-8")) 58 59 def hexdigest(self): /opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py in 
hash(cls, value) 51 return cls.dispatch[type(value)](cls, value) 52 else: ---> 53 return cls.hash_default(value) 54 55 def update(self, value): /opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py in hash_default(cls, value) 44 @classmethod 45 def hash_default(cls, value): ---> 46 return cls.hash_bytes(dumps(value)) 47 48 @classmethod /opt/conda/lib/python3.7/site-packages/datasets/utils/py_utils.py in dumps(obj) 365 file = StringIO() 366 with _no_cache_fields(obj): --> 367 dump(obj, file) 368 return file.getvalue() 369 /opt/conda/lib/python3.7/site-packages/datasets/utils/py_utils.py in dump(obj, file) 337 def dump(obj, file): 338 """pickle an object to a file""" --> 339 Pickler(file, recurse=True).dump(obj) 340 return 341 /opt/conda/lib/python3.7/site-packages/dill/_dill.py in dump(self, obj) 444 raise PicklingError(msg) 445 else: --> 446 StockPickler.dump(self, obj) 447 stack.clear() # clear record of 'recursion-sensitive' pickled objects 448 return /opt/conda/lib/python3.7/pickle.py in dump(self, obj) 435 if self.proto >= 4: 436 self.framer.start_framing() --> 437 self.save(obj) 438 self.write(STOP) 439 self.framer.end_framing() /opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id) 502 f = self.dispatch.get(t) 503 if f is not None: --> 504 f(self, obj) # Call unbound method with explicit self 505 return 506 /opt/conda/lib/python3.7/site-packages/dill/_dill.py in save_function(pickler, obj) 1436 globs, obj.__name__, 1437 obj.__defaults__, obj.__closure__, -> 1438 obj.__dict__, fkwdefaults), obj=obj) 1439 else: 1440 _super = ('super' in getattr(obj.func_code,'co_names',())) and (_byref is not None) and getattr(pickler, '_recurse', False) /opt/conda/lib/python3.7/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj) 636 else: 637 save(func) --> 638 save(args) 639 write(REDUCE) 640 /opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id) 502 f = self.dispatch.get(t) 503 if f is not None: --> 504 f(self, obj) # Call unbound method with explicit self 505 return 506 /opt/conda/lib/python3.7/pickle.py in save_tuple(self, obj) 787 write(MARK) 788 for element in obj: --> 789 save(element) 790 791 if id(obj) in memo: /opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id) 502 f = self.dispatch.get(t) 503 if f is not None: --> 504 f(self, obj) # Call unbound method with explicit self 505 return 506 /opt/conda/lib/python3.7/site-packages/dill/_dill.py in save_module_dict(pickler, obj) 931 # we only care about session the first pass thru 932 pickler._session = False --> 933 StockPickler.save_dict(pickler, obj) 934 log.info("# D2") 935 return /opt/conda/lib/python3.7/pickle.py in save_dict(self, obj) 857 858 self.memoize(obj) --> 859 self._batch_setitems(obj.items()) 860 861 dispatch[dict] = save_dict /opt/conda/lib/python3.7/pickle.py in _batch_setitems(self, items) 883 for k, v in tmp: 884 save(k) --> 885 save(v) 886 write(SETITEMS) 887 elif n: /opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id) 547 548 # Save the reduce() output and finally memoize the object --> 549 self.save_reduce(obj=obj, *rv) 550 551 def persistent_id(self, obj): /opt/conda/lib/python3.7/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj) 660 661 if state is not None: --> 662 save(state) 663 write(BUILD) 664 /opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id) 502 f = self.dispatch.get(t) 503 if f is not None: --> 504 f(self, obj) # Call unbound method with explicit self 
505 return 506 /opt/conda/lib/python3.7/site-packages/dill/_dill.py in save_module_dict(pickler, obj) 931 # we only care about session the first pass thru 932 pickler._session = False --> 933 StockPickler.save_dict(pickler, obj) 934 log.info("# D2") 935 return /opt/conda/lib/python3.7/pickle.py in save_dict(self, obj) 857 858 self.memoize(obj) --> 859 self._batch_setitems(obj.items()) 860 861 dispatch[dict] = save_dict /opt/conda/lib/python3.7/pickle.py in _batch_setitems(self, items) 883 for k, v in tmp: 884 save(k) --> 885 save(v) 886 write(SETITEMS) 887 elif n: /opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id) 547 548 # Save the reduce() output and finally memoize the object --> 549 self.save_reduce(obj=obj, *rv) 550 551 def persistent_id(self, obj): /opt/conda/lib/python3.7/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj) 660 661 if state is not None: --> 662 save(state) 663 write(BUILD) 664 /opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id) 502 f = self.dispatch.get(t) 503 if f is not None: --> 504 f(self, obj) # Call unbound method with explicit self 505 return 506 /opt/conda/lib/python3.7/site-packages/dill/_dill.py in save_module_dict(pickler, obj) 931 # we only care about session the first pass thru 932 pickler._session = False --> 933 StockPickler.save_dict(pickler, obj) 934 log.info("# D2") 935 return /opt/conda/lib/python3.7/pickle.py in save_dict(self, obj) 857 858 self.memoize(obj) --> 859 self._batch_setitems(obj.items()) 860 861 dispatch[dict] = save_dict /opt/conda/lib/python3.7/pickle.py in _batch_setitems(self, items) 883 for k, v in tmp: 884 save(k) --> 885 save(v) 886 write(SETITEMS) 887 elif n: /opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id) 522 reduce = getattr(obj, "__reduce_ex__", None) 523 if reduce is not None: --> 524 rv = reduce(self.proto) 525 else: 526 reduce = getattr(obj, "__reduce__", None) TypeError: can't pickle Tokenizer objects ```
closed
https://github.com/huggingface/datasets/issues/665
2020-09-23T04:28:14
2020-10-08T09:32:16
2020-10-08T09:32:16
{ "login": "xixiaoyao", "id": 24541791, "type": "User" }
[]
false
[]
707,017,791
664
load_dataset from local squad.py, raise error: TypeError: 'NoneType' object is not callable
version: 1.0.2 ``` train_dataset = datasets.load_dataset('squad') ``` The above code works. However, when I download squad.py from your server and save it locally as `my_squad.py`, running the following raises an error. ``` train_dataset = datasets.load_dataset('./my_squad.py') ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-28-25a84b4d1581> in <module> ----> 1 train_dataset = nlp.load_dataset('./my_squad.py') /opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 602 hash=hash, 603 features=features, --> 604 **config_kwargs, 605 ) 606 TypeError: 'NoneType' object is not callable
closed
https://github.com/huggingface/datasets/issues/664
2020-09-23T03:53:36
2023-04-17T09:31:20
2020-10-20T09:06:13
{ "login": "xixiaoyao", "id": 24541791, "type": "User" }
[]
false
[]
706,732,636
663
Created dataset card snli.md
First draft of a dataset card using the SNLI corpus as an example. This is mostly based on the [Google Doc draft](https://docs.google.com/document/d/1dKPGP-dA2W0QoTRGfqQ5eBp0CeSsTy7g2yM8RseHtos/edit), but I added a few sections and moved some things around. - I moved **Who Was Involved** to follow **Language**, both because I thought the authors should be presented more towards the front and because I think it makes sense to present the speakers close to the language so it doesn't have to be repeated. - I created a section I called **Data Characteristics** by pulling some things out of the other sections. I was thinking that this would be more about the language use in context of the specific task construction. That name isn't very descriptive though and could probably be improved. -- Domain and language type out of **Language**. I particularly wanted to keep the Language section as simple and as abstracted from the task as possible. -- 'How was the data collected' out of **Who Was Involved** -- Normalization out of **Features/Dataset Structure** -- I also added an annotation process section. - I kept the **Features** section mostly the same as the Google Doc, but I renamed it **Dataset Structure** to more clearly separate it from the language use, and added some links to the documentation pages. - I also kept **Tasks Supported**, **Known Limitations**, and **Licensing Information** mostly the same. Looking at it again though, maybe **Tasks Supported** should come before **Data Characteristics**? The trickiest part about writing a dataset card for the SNLI corpus specifically is that it's built on datasets which are themselves built on datasets so I had to dig in a lot of places to find information. I think this will be easier with other datasets and once there is more uptake of dataset cards so they can just link to each other. (Maybe that needs to be an added section?) I also made an effort not to repeat information across the sections or to refer to a previous section if the information was relevant in a later one. Is there too much repetition still?
closed
https://github.com/huggingface/datasets/pull/663
2020-09-22T22:29:37
2020-10-13T17:05:20
2020-10-12T20:26:52
{ "login": "mcmillanmajora", "id": 26722925, "type": "User" }
[ { "name": "Dataset discussion", "color": "72f99f" } ]
true
[]
706,689,866
662
Created dataset card snli.md
First draft of a dataset card using the SNLI corpus as an example
closed
https://github.com/huggingface/datasets/pull/662
2020-09-22T21:00:17
2023-09-24T09:50:16
2020-09-22T21:26:21
{ "login": "mcmillanmajora", "id": 26722925, "type": "User" }
[ { "name": "Dataset discussion", "color": "72f99f" } ]
true
[]
706,465,936
661
Replace pa.OSFile by open
It should fix #643
closed
https://github.com/huggingface/datasets/pull/661
2020-09-22T15:05:59
2021-05-05T18:24:36
2020-09-22T15:15:25
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
706,324,032
660
add openwebtext
This adds [The OpenWebText Corpus](https://skylion007.github.io/OpenWebTextCorpus/), which is a clean and large text corpus for nlp pretraining. It is an open source effort to reproduce OpenAI’s WebText dataset used by GPT-2, and it is also needed to reproduce ELECTRA. It solves #132 . ### Besides dataset building script, I made some changes to the library. 1. Extract large amount of compressed files with multi processing I add a `num_proc` argument to `DownloadManager.extract` and pass this `num_proc` to `map_nested`. So I can decompress 20 thousands compressed files faster. `num_proc` I add is default to `None`, so it shouldn't break any other thing. 2. In `cached_path`, I change the order to deal with different kind of compressed files (zip, tar, gzip) Because there is no way to 100% detect a file is a zip file (see [this](https://stackoverflow.com/questions/18194688/how-can-i-determine-if-a-file-is-a-zip-file)), I found it wrongly detect `'./datasets/downloads/extracted/58764bd6898fa339b25d92e7fbbc3d8dbf64fb504edff1a30a1d7d99d1561027/openwebtext/urlsf_subset13-630_data.xz'` as a zip and try decompress it with zip, sure it will get error. So I made it detect wheter the file is tar or gzip first and detect zip in the last. 3. `MockDownloadManager.extract` Cuz I pass `num_proc` to `DownloadManager.extract`, I also have to make `MockDownloadManager.extract` to accept extra keywork arguments. So I make it `extract(path, *args, **kwargs)`, but just return the path as original implementation. **Note**: If there is better way for points mentioned above, thought I would like to help, unless we can solve point4 (make dataset building fast), I may not be able to afford rebuild the dataset again because of change of the dataset script (Building the dataset cost me 4 days). ### There is something I think we can improve 4. Long time to decompress compressed files Even I decompress those 20 thousands compressed files with 12 process on my 16 core 3.x Ghz server. It still took about 3 ~ 4days to complete dataset building. Most of time spent on decompress those files. ### Info about the source data The source data is an tar.xz file with following structure, files/directory beyond compressed file is what can we get after decompress it. ``` openwebtext.tar.xz |__ openwebtext |__subset000.xz | |__ ....txt | |__ ....txt | ... |__ subset001.xz | .... ``` And this the structure of dummy data, same as the original one. ``` dummy_data.zip |__ dummy_data |__ openwebtext |__fake_subset-1_data-dirxz # actually it is a directory | |__ ....txt | |__ ....txt |__ fake_subset-2_data-dirxz |__ ....txt |__ ....txt ```
closed
https://github.com/huggingface/datasets/pull/660
2020-09-22T12:05:22
2020-10-06T09:20:10
2020-09-28T09:07:26
{ "login": "richarddwang", "id": 17963619, "type": "User" }
[]
true
[]
706,231,506
659
Keep new columns in transmit format
When a dataset is formatted with a list of columns that `__getitem__` should return, calling `map` to add new columns doesn't add the new columns to this list. It caused `KeyError` issues in #620. I changed the logic to add those new columns to the list that `__getitem__` should return.
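A small sketch of the behaviour this PR changes (column names are illustrative, and `dataset` is assumed to already have an "input_ids" column):

```python
dataset.set_format(type="numpy", columns=["input_ids"])
dataset = dataset.map(lambda example: {"length": len(example["input_ids"])})

# Previously the new "length" column was not in the list of formatted columns,
# so __getitem__ dropped it (hence the KeyError reports); now it is returned too.
print(dataset[0].keys())
```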
closed
https://github.com/huggingface/datasets/pull/659
2020-09-22T09:47:23
2020-09-22T10:07:22
2020-09-22T10:07:20
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
706,206,247
658
Fix squad metric's Features
Resolves issue [657](https://github.com/huggingface/datasets/issues/657).
closed
https://github.com/huggingface/datasets/pull/658
2020-09-22T09:09:52
2020-09-29T15:58:30
2020-09-29T15:58:30
{ "login": "tshrjn", "id": 8372098, "type": "User" }
[]
true
[]
706,204,383
657
Squad Metric Description & Feature Mismatch
The [description](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L39) doesn't mention `answer_start` in squad. However the `datasets.features` require [it](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L68). It's also not used in the evaluation.
closed
https://github.com/huggingface/datasets/issues/657
2020-09-22T09:07:00
2020-10-13T02:16:56
2020-09-29T15:57:38
{ "login": "tshrjn", "id": 8372098, "type": "User" }
[]
false
[]
705,736,319
656
Use multiprocess from pathos for multiprocessing
[Multiprocess](https://github.com/uqfoundation/multiprocess) (from the [pathos](https://github.com/uqfoundation/pathos) project) allows using lambda functions in a multiprocessed map. Using it was suggested by @kandorm. We're already using dill, which is its only dependency.
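The practical effect is that lambdas can be passed to a multiprocessed `map`; a minimal sketch (assuming `dataset` has a "text" column):

```python
# With the stdlib multiprocessing module this lambda could not be pickled;
# multiprocess serializes it with dill, so num_proc > 1 works here.
dataset = dataset.map(lambda example: {"length": len(example["text"])}, num_proc=4)
```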
closed
https://github.com/huggingface/datasets/pull/656
2020-09-21T16:12:19
2020-09-28T14:45:40
2020-09-28T14:45:39
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
705,672,208
655
added Winogrande debiased subset
The [Winogrande](https://arxiv.org/abs/1907.10641) paper mentions a `debiased` subset that wasn't in the first release; this PR adds it.
closed
https://github.com/huggingface/datasets/pull/655
2020-09-21T14:51:08
2020-09-21T16:20:40
2020-09-21T16:16:04
{ "login": "TevenLeScao", "id": 26709476, "type": "User" }
[]
true
[]
705,511,058
654
Allow empty inputs in metrics
There was an arrow error when trying to compute a metric with empty inputs. The error was occurring when reading the arrow file, before calling metric._compute.
closed
https://github.com/huggingface/datasets/pull/654
2020-09-21T11:26:36
2020-10-06T03:51:48
2020-09-21T16:13:38
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
705,482,391
653
handle data alteration when trying type
Fix #649 The bug came from the type inference that didn't handle a weird case in Pyarrow. Indeed this code runs without error but alters the data in arrow: ```python import pyarrow as pa type = pa.struct({"a": pa.struct({"b": pa.string()})}) array_with_altered_data = pa.array([{"a": {"b": "foo", "c": "bar"}}] * 10, type=type) print(array_with_altered_data[0].as_py()) # {'a': {'b': 'foo'}} -> the sub-field "c" is missing ``` (I don't know if this is intended in pyarrow tbh) We didn't take this case into account during type inference. By default it was keeping the old features, which could alter the data. To fix that I added a line that checks that the first element of the array is not altered.
closed
https://github.com/huggingface/datasets/pull/653
2020-09-21T10:41:49
2020-09-21T16:13:06
2020-09-21T16:13:05
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
705,390,850
652
handle connection error in download_prepared_from_hf_gcs
Fix #647
closed
https://github.com/huggingface/datasets/pull/652
2020-09-21T08:21:11
2020-09-21T08:28:43
2020-09-21T08:28:42
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
705,212,034
651
Problem with JSON dataset format
I have a local json dataset with the following form: { 'id01234': {'key1': value1, 'key2': value2, 'key3': value3}, 'id01235': {'key1': value1, 'key2': value2, 'key3': value3}, . . . 'id09999': {'key1': value1, 'key2': value2, 'key3': value3} } Note that instead of a list of records it's basically a dictionary of key-value pairs, with the keys being the record_ids and the values being the corresponding records. Reading this with json: ``` data = datasets.load('json', data_files='path_to_local.json') ``` Throws an error and asks me to choose a field. What's the right way to handle this?
open
https://github.com/huggingface/datasets/issues/651
2020-09-20T23:57:14
2020-09-21T12:14:24
null
{ "login": "vikigenius", "id": 12724810, "type": "User" }
[]
false
[]
704,861,844
650
dummy data testing can't test datasets using `dl_manager.extract` in `_split_generators`
Hi, I recently want to add a dataset whose source data is like this ``` openwebtext.tar.xz |__ openwebtext |__subset000.xz | |__ ....txt | |__ ....txt | ... |__ subset001.xz | .... ``` So I wrote `openwebtext.py` like this ``` def _split_generators(self, dl_manager): dl_dir = dl_manager.download_and_extract(_URL) owt_dir = os.path.join(dl_dir, 'openwebtext') subset_xzs = [ os.path.join(owt_dir, file_name) for file_name in os.listdir(owt_dir) if file_name.endswith('xz') # filter out ...xz.lock ] ex_dirs = dl_manager.extract(subset_xzs, num_proc=round(os.cpu_count()*0.75)) nested_txt_files = [ [ os.path.join(ex_dir,txt_file_name) for txt_file_name in os.listdir(ex_dir) if txt_file_name.endswith('txt') ] for ex_dir in ex_dirs ] txt_files = chain(*nested_txt_files) return [ datasets.SplitGenerator( name=datasets.Split.TRAIN, gen_kwargs={"txt_files": txt_files} ), ] ``` All went good, I can load and use real openwebtext, except when I try to test with dummy data. The problem is `MockDownloadManager.extract` do nothing, so `ex_dirs = dl_manager.extract(subset_xzs)` won't decompress `subset_xxx.xz`s for me. How should I do ? Or you can modify `MockDownloadManager` to make it like a real `DownloadManager` ?
closed
https://github.com/huggingface/datasets/issues/650
2020-09-19T11:07:03
2020-09-22T11:54:10
2020-09-22T11:54:09
{ "login": "richarddwang", "id": 17963619, "type": "User" }
[]
false
[]
704,838,415
649
Inconsistent behavior in map
I'm observing inconsistent behavior when applying .map(). This happens specifically when I'm incrementally adding onto a feature that is a nested dictionary. Here's a simple example that reproduces the problem. ```python import datasets # Dataset with a single feature called 'field' consisting of two examples dataset = datasets.Dataset.from_dict({'field': ['a', 'b']}) print(dataset[0]) # outputs {'field': 'a'} # Map this dataset to create another feature called 'otherfield', which is a dictionary containing a key called 'capital' dataset = dataset.map(lambda example: {'otherfield': {'capital': example['field'].capitalize()}}) print(dataset[0]) # output is okay {'field': 'a', 'otherfield': {'capital': 'A'}} # Now I want to map again to modify 'otherfield', by adding another key called 'append_x' to the dictionary under 'otherfield' print(dataset.map(lambda example: {'otherfield': {'append_x': example['field'] + 'x'}})[0]) # printing out the first example after applying the map shows that the new key 'append_x' doesn't get added # it also messes up the value stored at 'capital' {'field': 'a', 'otherfield': {'capital': None}} # Instead, I try to do the same thing by using a different mapped fn print(dataset.map(lambda example: {'otherfield': {'append_x': example['field'] + 'x', 'capital': example['otherfield']['capital']}})[0]) # this preserves the value under capital, but still no 'append_x' {'field': 'a', 'otherfield': {'capital': 'A'}} # Instead, I try to pass 'otherfield' to remove_columns print(dataset.map(lambda example: {'otherfield': {'append_x': example['field'] + 'x', 'capital': example['otherfield']['capital']}}, remove_columns=['otherfield'])[0]) # this still doesn't fix the problem {'field': 'a', 'otherfield': {'capital': 'A'}} # Alternately, here's what happens if I just directly map both 'capital' and 'append_x' on a fresh dataset. # Recreate the dataset dataset = datasets.Dataset.from_dict({'field': ['a', 'b']}) # Now map the entire 'otherfield' dict directly, instead of incrementally as before print(dataset.map(lambda example: {'otherfield': {'append_x': example['field'] + 'x', 'capital': example['field'].capitalize()}})[0]) # This looks good! {'field': 'a', 'otherfield': {'append_x': 'ax', 'capital': 'A'}} ``` This might be a new issue, because I didn't see this behavior in the `nlp` library. Any help is appreciated!
closed
https://github.com/huggingface/datasets/issues/649
2020-09-19T08:41:12
2020-09-21T16:13:05
2020-09-21T16:13:05
{ "login": "krandiash", "id": 10166085, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
704,753,123
648
offset overflow when multiprocessing batched map on large datasets.
It only happened when "multiprocessing" + "batched" + "large dataset" at the same time. ``` def bprocess(examples): examples['len'] = [] for text in examples['text']: examples['len'].append(len(text)) return examples wiki.map(brpocess, batched=True, num_proc=8) ``` ``` --------------------------------------------------------------------------- RemoteTraceback Traceback (most recent call last) RemoteTraceback: """ Traceback (most recent call last): File "/home/yisiang/miniconda3/envs/ml/lib/python3.7/multiprocessing/pool.py", line 121, in worker result = (True, func(*args, **kwds)) File "/home/yisiang/datasets/src/datasets/arrow_dataset.py", line 153, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/yisiang/datasets/src/datasets/fingerprint.py", line 163, in wrapper out = func(self, *args, **kwargs) File "/home/yisiang/datasets/src/datasets/arrow_dataset.py", line 1486, in _map_single batch = self[i : i + batch_size] File "/home/yisiang/datasets/src/datasets/arrow_dataset.py", line 1071, in __getitem__ format_kwargs=self._format_kwargs, File "/home/yisiang/datasets/src/datasets/arrow_dataset.py", line 972, in _getitem data_subset = self._data.take(indices_array) File "pyarrow/table.pxi", line 1145, in pyarrow.lib.Table.take File "/home/yisiang/miniconda3/envs/ml/lib/python3.7/site-packages/pyarrow/compute.py", line 268, in take return call_function('take', [data, indices], options) File "pyarrow/_compute.pyx", line 298, in pyarrow._compute.call_function File "pyarrow/_compute.pyx", line 192, in pyarrow._compute.Function.call File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays """ The above exception was the direct cause of the following exception: ArrowInvalid Traceback (most recent call last) in 30 owt = datasets.load_dataset('/home/yisiang/datasets/datasets/openwebtext/openwebtext.py', cache_dir='./datasets')['train'] 31 print('load/create data from OpenWebText Corpus for ELECTRA') ---> 32 e_owt = ELECTRAProcessor(owt, apply_cleaning=False).map(cache_file_name=f"electra_owt_{c.max_length}.arrow") 33 dsets.append(e_owt) 34 ~/Reexamine_Attention/electra_pytorch/_utils/utils.py in map(self, **kwargs) 126 writer_batch_size=10**4, 127 num_proc=num_proc, --> 128 **kwargs 129 ) 130 ~/hugdatafast/hugdatafast/transform.py in my_map(self, *args, **kwargs) 21 if not cache_file_name.endswith('.arrow'): cache_file_name += '.arrow' 22 if '/' not in cache_file_name: cache_file_name = os.path.join(self.cache_directory(), cache_file_name) ---> 23 return self.map(*args, cache_file_name=cache_file_name, **kwargs) 24 25 @patch ~/datasets/src/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint) 1285 logger.info("Spawning {} processes".format(num_proc)) 1286 results = [pool.apply_async(self.__class__._map_single, kwds=kwds) for kwds in kwds_per_shard] -> 1287 transformed_shards = [r.get() for r in results] 1288 logger.info("Concatenating {} shards from multiprocessing".format(num_proc)) 1289 result = concatenate_datasets(transformed_shards) ~/datasets/src/datasets/arrow_dataset.py in (.0) 1285 logger.info("Spawning {} processes".format(num_proc)) 1286 results = 
[pool.apply_async(self.__class__._map_single, kwds=kwds) for kwds in kwds_per_shard] -> 1287 transformed_shards = [r.get() for r in results] 1288 logger.info("Concatenating {} shards from multiprocessing".format(num_proc)) 1289 result = concatenate_datasets(transformed_shards) ~/miniconda3/envs/ml/lib/python3.7/multiprocessing/pool.py in get(self, timeout) 655 return self._value 656 else: --> 657 raise self._value 658 659 def _set(self, i, obj): ArrowInvalid: offset overflow while concatenating arrays ```
closed
https://github.com/huggingface/datasets/issues/648
2020-09-19T02:15:11
2025-06-17T12:56:07
2020-09-19T16:46:31
{ "login": "richarddwang", "id": 17963619, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
704,734,764
647
Cannot download dataset_info.json
I am running my job on a cloud server that does not allow connections from the standard compute nodes to outside resources. Hence, when I use `datasets.load_dataset()` to load data, I get an error like this: ``` ConnectionError: Couldn't reach https://storage.googleapis.com/huggingface-nlp/cache/datasets/text/default-53ee3045f07ba8ca/0.0.0/dataset_info.json ``` I tried to open this link manually, but I cannot access this file. How can I download this file and pass it to `datasets.load_dataset()` manually? Versions: Python version 3.7.3 PyTorch version 1.6.0 TensorFlow version 2.3.0 datasets version: 1.0.1
closed
https://github.com/huggingface/datasets/issues/647
2020-09-19T01:35:15
2020-09-21T08:28:42
2020-09-21T08:28:42
{ "login": "chiyuzhang94", "id": 33407613, "type": "User" }
[]
false
[]
704,607,371
646
Fix docs typos
This PR fixes a few typos in the docs and the error in the code snippet in the set_format section in docs/source/torch_tensorflow.rst. `torch.utils.data.DataLoader` expects padded batches, so it throws an error due to not being able to stack the unpadded tensors. If we follow the Quick tour from the docs, where they add the `truncation=True, padding='max_length'` arguments to the tokenizer before passing data to the DataLoader, we can easily fix the issue.
closed
https://github.com/huggingface/datasets/pull/646
2020-09-18T19:32:27
2020-09-21T16:30:54
2020-09-21T16:14:12
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
704,542,234
645
Don't use take on dataset table in pyarrow 1.0.x
Fix #615
closed
https://github.com/huggingface/datasets/pull/645
2020-09-18T17:31:34
2023-09-19T07:59:19
2020-09-19T16:46:31
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
704,534,501
644
Better windows support
There are a few differences in the behavior of python and pyarrow on windows. For example, there are restrictions when accessing/deleting files that are open. Fix #590
closed
https://github.com/huggingface/datasets/pull/644
2020-09-18T17:17:36
2020-09-25T14:02:30
2020-09-25T14:02:28
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
704,477,164
643
Caching processed dataset at wrong folder
Hi guys, I run this on my Colab (PRO): ```python from datasets import load_dataset dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train') def encode(examples): return tokenizer(examples['text'], truncation=True, padding='max_length') dataset = dataset.map(encode, batched=True) ``` The file is about 4 GB, so I cannot process it on the Colab HD because there is not enough space. So I decided to mount my Google Drive fs and do it there. The dataset is cached in the right place, but processing it (applying the `encode` function) seems to use a different folder, because the Colab HD starts to grow and it crashes, when it should all be done in the Drive fs. What drives me crazy is that it prints that it is processing/encoding the dataset in the right folder: ``` Testing the mapped function outputs Testing finished, running the mapping function on the dataset Caching processed dataset at /content/drive/My Drive/text/default-ad3e69d6242ee916/0.0.0/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7/cache-b16341780a59747d.arrow ```
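A possible workaround while this is investigated is to point the processed cache file explicitly at the Drive folder via the `cache_file_name` argument of `map` (the path below is illustrative):

```python
dataset = dataset.map(
    encode,
    batched=True,
    cache_file_name="/content/drive/My Drive/text/encoded-corpus.arrow",  # hypothetical location
)
```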
closed
https://github.com/huggingface/datasets/issues/643
2020-09-18T15:41:26
2022-02-16T14:53:29
2022-02-16T14:53:29
{ "login": "mrm8488", "id": 3653789, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
704,397,499
642
Rename wnut fields
As mentioned in #641 it would be cool to have it follow the naming of the other NER datasets
closed
https://github.com/huggingface/datasets/pull/642
2020-09-18T13:51:31
2020-09-18T17:18:31
2020-09-18T17:18:30
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
704,373,940
641
Add Polyglot-NER Dataset
Adds the [Polyglot-NER dataset](https://sites.google.com/site/rmyeid/projects/polylgot-ner) with named entity tags for 40 languages. I include separate configs for each language as well as a `combined` config which lumps them all together.
closed
https://github.com/huggingface/datasets/pull/641
2020-09-18T13:21:44
2020-09-20T03:04:43
2020-09-20T03:04:43
{ "login": "joeddav", "id": 9353833, "type": "User" }
[]
true
[]
704,311,758
640
Make shuffle compatible with temp_seed
This code used to return a different dataset at each run: ```python import datasets as ds dataset = ... with ds.temp_seed(42): shuffled = dataset.shuffle() ``` Now it returns the same one since the seed is set
closed
https://github.com/huggingface/datasets/pull/640
2020-09-18T11:38:58
2020-09-18T11:47:51
2020-09-18T11:47:50
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
704,217,963
639
Update glue QQP checksum
Fix #638
closed
https://github.com/huggingface/datasets/pull/639
2020-09-18T09:08:15
2020-09-18T11:37:08
2020-09-18T11:37:07
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
704,146,956
638
GLUE/QQP dataset: NonMatchingChecksumError
Hi @lhoestq, I know you are busy and there are also other important issues. But if this is easy to fix, I am shamelessly wondering if you can give me some help, so I can evaluate my models and restart my development cycle ASAP. 😚

datasets version: editable install of master at 9/17

`datasets.load_dataset('glue','qqp', cache_dir='./datasets')`
```
Downloading and preparing dataset glue/qqp (download: 57.73 MiB, generated: 107.02 MiB, post-processed: Unknown size, total: 164.75 MiB) to ./datasets/glue/qqp/1.0.0/7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4...
---------------------------------------------------------------------------
NonMatchingChecksumError                  Traceback (most recent call last)
in
----> 1 datasets.load_dataset('glue','qqp', cache_dir='./datasets')

~/datasets/src/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)
    609         download_config=download_config,
    610         download_mode=download_mode,
--> 611         ignore_verifications=ignore_verifications,
    612     )
    613

~/datasets/src/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
    467         if not downloaded_from_gcs:
    468             self._download_and_prepare(
--> 469                 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
    470             )
    471         # Sync info

~/datasets/src/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
    527             if verify_infos:
    528                 verify_checksums(
--> 529                     self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
    530                 )
    531

~/datasets/src/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
     37     if len(bad_urls) > 0:
     38         error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 39         raise NonMatchingChecksumError(error_msg + str(bad_urls))
     40     logger.info("All the checksums matched successfully" + for_verification_name)
     41

NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://dl.fbaipublicfiles.com/glue/data/QQP-clean.zip']
```
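While waiting for the checksum to be updated (see #639), one possible stopgap, sketched below, is to skip the verification step; note that `ignore_verifications=True` bypasses the integrity checks, so this is only a temporary workaround:
```python
import datasets

# Temporary workaround sketch: skip checksum/size verification until the
# recorded checksum for QQP-clean.zip is updated.
dataset = datasets.load_dataset(
    "glue", "qqp",
    cache_dir="./datasets",
    ignore_verifications=True,
)
```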
closed
https://github.com/huggingface/datasets/issues/638
2020-09-18T07:09:10
2020-09-18T11:37:07
2020-09-18T11:37:07
{ "login": "richarddwang", "id": 17963619, "type": "User" }
[]
false
[]
703,539,909
637
Add MATINF
closed
https://github.com/huggingface/datasets/pull/637
2020-09-17T12:24:53
2020-09-17T13:23:18
2020-09-17T13:23:17
{ "login": "JetRunner", "id": 22514219, "type": "User" }
[]
true
[]
702,883,989
636
Consistent ner features
As discussed in #613, this PR aims at making NER feature names consistent across datasets. I changed the feature names of LinCE and XTREME/PAN-X.
closed
https://github.com/huggingface/datasets/pull/636
2020-09-16T15:56:25
2020-09-17T09:52:59
2020-09-17T09:52:58
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
702,822,439
635
Loglevel
Continuation of #618
closed
https://github.com/huggingface/datasets/pull/635
2020-09-16T14:37:53
2020-09-17T09:52:19
2020-09-17T09:52:18
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
702,676,041
634
Add CoNLL-2000 dataset
Adds the CoNLL-2000 dataset, used for text chunking. See https://www.clips.uantwerpen.be/conll2000/chunking/ for details and [motivation](https://github.com/huggingface/transformers/pull/7041#issuecomment-692710948) behind this PR.
closed
https://github.com/huggingface/datasets/pull/634
2020-09-16T11:14:11
2020-09-17T10:38:10
2020-09-17T10:38:10
{ "login": "vblagoje", "id": 458335, "type": "User" }
[]
true
[]
702,440,484
633
Load large text file for LM pre-training resulting in OOM
I tried to pretrain Longformer using transformers and datasets, but I got OOM issues when loading a large text file. My script is almost like this:
```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

import torch
from datasets import load_dataset
from transformers import DataCollatorForLanguageModeling, Trainer

# `tokenizer`, `model`, `args` and `model_path` are defined earlier in the script

@dataclass
class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling):
    """
    Data collator used for language modeling based on DataCollatorForLazyLanguageModeling
    - collates batches of tensors, honoring their tokenizer's pad_token
    - preprocesses batches for masked language modeling
    """

    block_size: int = 512

    def __call__(self, examples: List[dict]) -> Dict[str, torch.Tensor]:
        examples = [example['text'] for example in examples]
        batch, attention_mask = self._tensorize_batch(examples)
        if self.mlm:
            inputs, labels = self.mask_tokens(batch)
            return {"input_ids": inputs, "labels": labels}
        else:
            labels = batch.clone().detach()
            if self.tokenizer.pad_token_id is not None:
                labels[labels == self.tokenizer.pad_token_id] = -100
            return {"input_ids": batch, "labels": labels}

    def _tensorize_batch(self, examples: List[str]) -> Tuple[torch.Tensor, torch.Tensor]:
        if self.tokenizer._pad_token is None:
            raise ValueError(
                "You are attempting to pad samples but the tokenizer you are using"
                f" ({self.tokenizer.__class__.__name__}) does not have one."
            )
        tensor_examples = self.tokenizer.batch_encode_plus(
            [ex for ex in examples if ex],
            max_length=self.block_size,
            return_tensors="pt",
            pad_to_max_length=True,
            return_attention_mask=True,
            truncation=True,
        )
        input_ids, attention_mask = tensor_examples["input_ids"], tensor_examples["attention_mask"]
        return input_ids, attention_mask

dataset = load_dataset('text', data_files='train.txt', cache_dir="./", split='train')
data_collator = DataCollatorForDatasetsLanguageModeling(
    tokenizer=tokenizer,
    mlm=True,
    mlm_probability=0.15,
    block_size=tokenizer.max_len,
)
trainer = Trainer(
    model=model,
    args=args,
    data_collator=data_collator,
    train_dataset=dataset,
    prediction_loss_only=True,
)
trainer.train(model_path=model_path)
```
This train.txt is about 1.1 GB and has 90k lines, where each line is a sequence of 4k words. During training, the memory usage increased quickly, as shown in the following graph, and resulted in OOM before training finished.

![image](https://user-images.githubusercontent.com/29704017/93292112-5576b280-f817-11ea-8da2-b2db9bf35665.png)

Could you please give me any suggestions on why this happened and how to fix it? Thanks.
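One commonly suggested mitigation for this kind of OOM, sketched below under the assumption of a `text` column and an already-created `tokenizer`, is to tokenize once with `Dataset.map` so the encoded data lives in an Arrow file that is memory-mapped from disk, instead of tokenizing raw text inside the collator:
```python
from datasets import load_dataset

# Sketch of a lower-memory setup: tokenize up front, keep the result on disk.
dataset = load_dataset("text", data_files="train.txt", split="train")

def tokenize(batch):
    # 512 is an arbitrary example length, not taken from the report
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=512)

# The tokenized dataset is cached as an Arrow file and memory-mapped,
# so it does not have to fit in RAM.
dataset = dataset.map(tokenize, batched=True, remove_columns=["text"])
dataset.set_format(type="torch", columns=["input_ids", "attention_mask"])
```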
open
https://github.com/huggingface/datasets/issues/633
2020-09-16T04:33:15
2021-02-16T12:02:01
null
{ "login": "leethu2012", "id": 29704017, "type": "User" }
[]
false
[]
702,358,124
632
Fix typos in the loading datasets docs
This PR fixes two typos in the loading datasets docs, one of them being a broken link to the `load_dataset` function.
closed
https://github.com/huggingface/datasets/pull/632
2020-09-16T00:27:41
2020-09-21T16:31:11
2020-09-16T06:52:44
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
701,711,255
631
Fix text delimiter
I changed the delimiter in the `text` dataset script. It should fix the `pyarrow.lib.ArrowInvalid: CSV parse error` from #622. The new delimiter is an unused ASCII character that is not present in text files: `\b`.
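For context, a rough sketch of the idea (the exact options used in the script may differ): with a delimiter that never appears in the data, pyarrow's CSV reader parses each line as a single one-column value:
```python
import pyarrow.csv as pa_csv

# Rough sketch: read a plain-text file line by line through the CSV reader,
# with a delimiter ("\b") that should never occur in the text and quoting disabled.
read_options = pa_csv.ReadOptions(column_names=["text"])
parse_options = pa_csv.ParseOptions(delimiter="\b", quote_char=False)
table = pa_csv.read_csv("train.txt", read_options=read_options, parse_options=parse_options)
print(table.num_rows)  # one row per line of train.txt
```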
closed
https://github.com/huggingface/datasets/pull/631
2020-09-15T08:08:42
2020-09-22T15:03:06
2020-09-15T08:26:25
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
701,636,350
630
Text dataset not working with large files
```
Traceback (most recent call last):
  File "examples/language-modeling/run_language_modeling.py", line 333, in <module>
    main()
  File "examples/language-modeling/run_language_modeling.py", line 262, in main
    get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_train else None
  File "examples/language-modeling/run_language_modeling.py", line 144, in get_dataset
    dataset = load_dataset("text", data_files=file_path, split='train+test')
  File "/home/ksjae/.local/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
    ignore_verifications=ignore_verifications,
  File "/home/ksjae/.local/lib/python3.7/site-packages/datasets/builder.py", line 469, in download_and_prepare
    dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
  File "/home/ksjae/.local/lib/python3.7/site-packages/datasets/builder.py", line 546, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/home/ksjae/.local/lib/python3.7/site-packages/datasets/builder.py", line 888, in _prepare_split
    for key, table in utils.tqdm(generator, unit=" tables", leave=False, disable=not_verbose):
  File "/home/ksjae/.local/lib/python3.7/site-packages/tqdm/std.py", line 1129, in __iter__
    for obj in iterable:
  File "/home/ksjae/.cache/huggingface/modules/datasets_modules/datasets/text/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7/text.py", line 104, in _generate_tables
    convert_options=self.config.convert_options,
  File "pyarrow/_csv.pyx", line 714, in pyarrow._csv.read_csv
  File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
```
**pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)**

It gives the same message for both the 200 MB and 10 GB .txt files, but not for the 700 MB file. I can't upload the files due to size & copyright problems, sorry.
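For reference, the pyarrow message points at the CSV reader's block size; a rough sketch of what increasing it looks like when reading the file directly with pyarrow (the 100 MiB value is an arbitrary example, not the setting used by the `text` script):
```python
import pyarrow.csv as pa_csv

# Sketch: raise the read block size so a single very long line does not
# straddle two read blocks (the value here is an arbitrary example).
read_options = pa_csv.ReadOptions(column_names=["text"], block_size=100 << 20)  # ~100 MiB
table = pa_csv.read_csv("train.txt", read_options=read_options)
print(table.num_rows)
```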
closed
https://github.com/huggingface/datasets/issues/630
2020-09-15T06:02:36
2020-09-25T22:21:43
2020-09-25T22:21:43
{ "login": "ksjae", "id": 17930170, "type": "User" }
[]
false
[]