| column | type | min / max (value or length) |
|---|---|---|
| id | int64 | 599M to 3.26B |
| number | int64 | 1 to 7.7k |
| title | stringlengths | 1 to 290 |
| body | stringlengths | 0 to 228k (nullable) |
| state | stringclasses | 2 values |
| html_url | stringlengths | 46 to 51 |
| created_at | timestamp[s]date | 2020-04-14 10:18:02 to 2025-07-23 08:04:53 |
| updated_at | timestamp[s]date | 2020-04-27 16:04:17 to 2025-07-23 18:53:44 |
| closed_at | timestamp[s]date | 2020-04-14 12:01:40 to 2025-07-23 16:44:42 (nullable) |
| user | dict | |
| labels | listlengths | 0 to 4 |
| is_pull_request | bool | 2 classes |
| comments | listlengths | 0 to 0 |
946,470,815
2,662
Load Dataset from the Hub (NO DATASET SCRIPT)
## Load the data from any Dataset repository on the Hub

This PR adds support for loading datasets from any dataset repository on the Hub, without requiring any dataset script. As a user it's now possible to create a repo, upload some csv/json/text/parquet files, and then be able to load the data in one line. Here is an example with the `allenai/c4` repository that contains a lot of compressed JSON Lines files:

```python
from datasets import load_dataset

data_files = {"train": "en/c4-train.*.json.gz"}
c4 = load_dataset("allenai/c4", data_files=data_files, split="train", streaming=True)
print(c4.n_shards)  # 1024
print(next(iter(c4)))  # {'text': 'Beginners BBQ Class Takin...'}
```

By default it loads all the files, but as shown in the example you can choose the ones you want with Unix-style patterns. Of course it's still possible to use dataset scripts, since they offer the most flexibility.

## Implementation details

It uses `huggingface_hub` to list the files in a dataset repository. If you provide a path to a local directory instead of a repository name, it works the same way but it uses `glob`. Depending on the data files available, or passed in the `data_files` parameter, one of the available builders will be used among the csv, json, text and parquet builders.

Because of this, it's not possible to load both csv and json files at once. In this case you have to load them separately and then concatenate the two datasets, for example.

## TODO

- [x] tests
- [x] docs
- [x] when huggingface_hub gets a new release, update the CI and the setup.py

Close https://github.com/huggingface/datasets/issues/2629
closed
https://github.com/huggingface/datasets/pull/2662
2021-07-16T17:21:58
2021-08-25T14:53:01
2021-08-25T14:18:08
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
946,446,967
2,661
Add SD task for SUPERB
Include the SD (Speaker Diarization) task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051) and `s3prl` [instructions](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sd-speaker-diarization). TODO: - [x] Generate the LibriMix corpus - [x] Prepare the corpus for diarization - [x] Upload these files to the superb-data repo - [x] Transcribe the corresponding s3prl processing of these files into our superb loading script - [x] README: tags + description sections - ~~Add DER metric~~ (we leave the DER metric for a follow-up PR) Related to #2619. Close #2653. cc: @lewtun
closed
https://github.com/huggingface/datasets/pull/2661
2021-07-16T16:43:21
2021-08-04T17:03:53
2021-08-04T17:03:53
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
946,316,180
2,660
Move checks from _map_single to map
The goal of this PR is to remove duplicated checks in the `map` logic to execute them only once whenever possible (`fn_kwargs`, `input_columns`, ...). Additionally, this PR improves the consistency (to align it with `input_columns`) of the `remove_columns` check by adding support for a single string value, which is then wrapped into a list.
closed
https://github.com/huggingface/datasets/pull/2660
2021-07-16T13:53:33
2021-09-06T14:12:23
2021-09-06T14:12:23
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
946,155,407
2,659
Allow dataset config kwargs to be None
Close https://github.com/huggingface/datasets/issues/2658 The dataset config kwargs that were set to None were simply ignored. This was an issue because None has a meaning for certain parameters of certain builders, like the `sep` parameter of the "csv" builder, which allows the separator to be inferred. cc @SBrandeis
closed
https://github.com/huggingface/datasets/pull/2659
2021-07-16T10:25:38
2021-07-16T12:46:07
2021-07-16T12:46:07
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
946,139,532
2,658
Can't pass `sep=None` to load_dataset("csv", ...) to infer the separator via pandas.read_csv
When doing `load_dataset("csv", sep=None)`, the `sep` passed to `pd.read_csv` is still the default `sep=","` instead, which makes it impossible to make the csv loader infer the separator. Related to https://github.com/huggingface/datasets/pull/2656 cc @SBrandeis
closed
https://github.com/huggingface/datasets/issues/2658
2021-07-16T10:05:44
2021-07-16T12:46:06
2021-07-16T12:46:06
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
false
[]
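The separator inference that the two entries above rely on is a plain pandas feature: passing `sep=None` together with the Python parsing engine makes `read_csv` sniff the delimiter. A minimal, standalone sketch (the sample data and column names are illustrative only):

```python
import io

import pandas as pd

# A small semicolon-separated sample; in practice this would be a file path.
csv_text = "idx;sentence;label\n0;hello world;1\n1;good morning;0\n"

# sep=None asks pandas to infer the delimiter via csv.Sniffer;
# this only works with the slower Python engine, not the default C engine.
df = pd.read_csv(io.StringIO(csv_text), sep=None, engine="python")
print(df.columns.tolist())  # ['idx', 'sentence', 'label']
```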
945,822,829
2,657
`to_json` reporting enhancements
While using `to_json`, 2 things came to mind that would have made the experience easier on the user:

1. Could we have a `desc` arg for the tqdm use, with a fallback to just `to_json`, so that it'd be clear to the user what's happening? Surely, one can just print the description before calling `to_json`, but I thought perhaps it'd help to have it self-identify like you did for other progress bars recently.

2. It took me a while to make sense of the reported numbers:

```
22%|β–ˆβ–ˆβ– | 1536/7076 [12:30:57<44:09:42, 28.70s/it]
```

One iteration here happens to be 10K samples, and the total is 70M records. But the user doesn't know that, so the progress bar is fine, but the numbers it reports are meaningless until one discovers that 1 it = 10K samples. And one still has to convert these in one's head, so it's not quick. Not exactly sure what's the best way to approach this; perhaps it can be part of `desc`, or report M or K, so it'd be built in if it were to print, e.g.:

```
22%|β–ˆβ–ˆβ– | 15360K/70760K [12:30:57<44:09:42, 28.70s/it]
```

or

```
22%|β–ˆβ–ˆβ– | 15.36M/70.76M [12:30:57<44:09:42, 28.70s/it]
```

(while of course remaining friendly to small datasets). I forget if tqdm lets you add a magnitude identifier to the running count. Thank you!
open
https://github.com/huggingface/datasets/issues/2657
2021-07-15T23:32:18
2021-07-15T23:33:53
null
{ "login": "stas00", "id": 10676103, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
945,421,790
2,656
Change `from_csv` default arguments
Passing `sep=None` to pandas's `read_csv` lets pandas guess the CSV file's separator This PR allows users to use this pandas's feature by passing `sep=None` to `Dataset.from_csv`: ```python Dataset.from_csv( ..., sep=None ) ```
closed
https://github.com/huggingface/datasets/pull/2656
2021-07-15T14:09:06
2023-09-24T09:56:44
2021-07-16T10:23:26
{ "login": "SBrandeis", "id": 33657802, "type": "User" }
[]
true
[]
945,382,723
2,655
Allow the selection of multiple columns at once
**Is your feature request related to a problem? Please describe.**
Similar to pandas, it would be great if we could select multiple columns at once.

**Describe the solution you'd like**
```python
my_dataset = ...  # Has columns ['idx', 'sentence', 'label']
idx, label = my_dataset[['idx', 'label']]
```

**Describe alternatives you've considered**
We can do `[dataset[col] for col in ('idx', 'label')]`.

**Additional context**
This is of course very minor.
closed
https://github.com/huggingface/datasets/issues/2655
2021-07-15T13:30:45
2024-01-09T15:11:27
2024-01-09T07:46:28
{ "login": "Dref360", "id": 8976546, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
945,167,231
2,654
Give a user feedback if the dataset he loads is streamable or not
**Is your feature request related to a problem? Please describe.**
I would love to know whether a `dataset` is streamable or not with the current implementation.

**Describe the solution you'd like**
We could show a warning when a dataset is loaded with `load_dataset('...', streaming=True)` and it is not streamable, e.g. if it is an archive (a user-side fallback sketch follows this record).

**Describe alternatives you've considered**
Add a new metadata tag for "streaming".
open
https://github.com/huggingface/datasets/issues/2654
2021-07-15T09:07:27
2021-08-02T11:03:21
null
{ "login": "philschmid", "id": 32632186, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
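Pending built-in feedback like the warning requested above, a user-side workaround is simply to probe the streaming dataset once and fall back to a normal download if iteration fails. This is an illustrative sketch only; the exact exception raised for non-streamable datasets depends on the dataset and the `datasets` version, so a broad except is used here.

```python
from datasets import load_dataset

def load_streaming_or_fallback(path, split="train", **kwargs):
    """Probe streaming mode once; fall back to a full download if it does not work."""
    try:
        ds = load_dataset(path, split=split, streaming=True, **kwargs)
        next(iter(ds))  # pull a single example to check that streaming actually works
        return ds       # iterating again later simply restarts the stream
    except Exception as err:  # the concrete error type varies by dataset, so catch broadly
        print(f"{path!r} does not seem to be streamable ({err!r}); downloading it instead")
        return load_dataset(path, split=split, **kwargs)
```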
945,102,321
2,653
Add SD task for SUPERB
Include the SD (Speaker Diarization) task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051) and `s3prl` [instructions](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sd-speaker-diarization). Steps: - [x] Generate the LibriMix corpus - [x] Prepare the corpus for diarization - [x] Upload these files to the superb-data repo - [x] Transcribe the corresponding s3prl processing of these files into our superb loading script - [ ] README: tags + description sections Related to #2619. cc: @lewtun
closed
https://github.com/huggingface/datasets/issues/2653
2021-07-15T07:51:40
2021-08-04T17:03:52
2021-08-04T17:03:52
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
944,865,924
2,652
Fix logging docstring
Remove "no tqdm bars" from the docstring in the logging module to align it with the changes introduced in #2534.
closed
https://github.com/huggingface/datasets/pull/2652
2021-07-14T23:19:58
2021-07-18T11:41:06
2021-07-15T09:57:31
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
944,796,961
2,651
Setting log level higher than warning does not suppress progress bar
## Describe the bug

I would like to disable progress bars for the `.map` method (and other methods like `.filter` and `load_dataset` as well). According to #1627 one can suppress them by setting the log level higher than `warning`, however doing so doesn't suppress them with version 1.9.0. I also tried to set the `DATASETS_VERBOSITY` environment variable to `error` or `critical` but it also didn't work.

## Steps to reproduce the bug

```python
import datasets
from datasets.utils.logging import set_verbosity_error
set_verbosity_error()

def dummy_map(batch):
    return batch

common_voice_train = datasets.load_dataset("common_voice", "de", split="train")
common_voice_test = datasets.load_dataset("common_voice", "de", split="test")

common_voice_train.map(dummy_map)
```

## Expected results

- The progress bar for the `.map` call won't be shown

## Actual results

- The progress bar for `.map` is still shown

## Environment info

- `datasets` version: 1.9.0
- Platform: Linux-5.4.0-1045-aws-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.5
- PyArrow version: 4.0.1
closed
https://github.com/huggingface/datasets/issues/2651
2021-07-14T21:06:51
2022-07-08T14:51:57
2021-07-15T03:41:35
{ "login": "Isa-rentacs", "id": 1147443, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
944,672,565
2,650
[load_dataset] shard and parallelize the process
- Some huge datasets take forever to build the first time (e.g. oscar/en), as it's done in a single CPU core.
- If the build crashes, everything done up to that point gets lost.

Request: shard the build over multiple arrow files, which would enable:

- a much faster build by parallelizing the build process
- if the process crashes, the completed arrow files don't need to be re-built again

Thank you! @lhoestq
closed
https://github.com/huggingface/datasets/issues/2650
2021-07-14T18:04:58
2023-11-28T19:11:41
2023-11-28T19:11:40
{ "login": "stas00", "id": 10676103, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
944,651,229
2,649
adding progress bar / ETA for `load_dataset`
Please consider: ``` Downloading and preparing dataset oscar/unshuffled_deduplicated_en (download: 462.40 GiB, generated: 1.18 TiB, post-processed: Unknown size, total: 1.63 TiB) to cache/oscar/unshuffled_deduplicated_en/1.0.0/84838bd49d2295f62008383b05620571535451d84545037bb94d6f3501651df2... HF google storage unreachable. Downloading and preparing it from source ``` and no indication whatsoever of whether things work well or when it'll be done. It's important to have an estimated completion time for when doing slurm jobs since some instances have a cap on run-time. I think for this particular job it sat for 30min in total silence and then after 30min it started generating: ``` 897850 examples [07:24, 10286.71 examples/s] ``` which is already great! Request: 1. ETA - knowing how many hours to allocate for a slurm job 2. progress bar - helps to know things are working and aren't stuck and where we are at. Thank you! @lhoestq
open
https://github.com/huggingface/datasets/issues/2649
2021-07-14T17:34:39
2023-03-27T10:32:49
null
{ "login": "stas00", "id": 10676103, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
944,484,522
2,648
Add web_split dataset for Paraphase and Rephrase benchmark
## Describe:

For getting simple sentences from a complex sentence, there is already a dataset and task like wiki_split available in Hugging Face datasets. web_split is a very similar dataset. Some research papers state that if we train a model on a combination of these two datasets, it will yield better results on both test sets. This dataset is made from WebNLG data. All the dataset-related details are provided in the repository below.

Github link: https://github.com/shashiongithub/Split-and-Rephrase
open
https://github.com/huggingface/datasets/issues/2648
2021-07-14T14:24:36
2021-07-14T14:26:12
null
{ "login": "bhadreshpsavani", "id": 26653468, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
944,424,941
2,647
Fix anchor in README
I forgot to push this fix in #2611, so I'm sending it now.
closed
https://github.com/huggingface/datasets/pull/2647
2021-07-14T13:22:44
2021-07-18T11:41:18
2021-07-15T06:50:47
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
944,379,954
2,646
downloading of yahoo_answers_topics dataset failed
## Describe the bug

I get an error `datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files` when I try to download the yahoo_answers_topics dataset.

## Steps to reproduce the bug

```python
self.dataset = load_dataset(
    'yahoo_answers_topics',
    cache_dir=self.config['yahoo_cache_dir'],
    split='train[:90%]')
```

## Expected results

The dataset is downloaded without errors.

## Actual results

```
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files
```
closed
https://github.com/huggingface/datasets/issues/2646
2021-07-14T12:31:05
2022-08-04T08:28:24
2022-08-04T08:28:24
{ "login": "vikrant7k", "id": 66781249, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
944,374,284
2,645
load_dataset processing failed with OS error after downloading a dataset
## Describe the bug After downloading a dataset like opus100, there is a bug that OSError: Cannot find data file. Original error: dlopen: cannot load any more object with static TLS ## Steps to reproduce the bug ```python from datasets import load_dataset this_dataset = load_dataset('opus100', 'af-en') ``` ## Expected results there is no error when running load_dataset. ## Actual results Specify the actual results or traceback. Traceback (most recent call last): File "/home/anaconda3/lib/python3.6/site-packages/datasets/builder.py", line 652, in _download_and_prep self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/anaconda3/lib/python3.6/site-packages/datasets/builder.py", line 989, in _prepare_split example = self.info.features.encode_example(record) File "/home/anaconda3/lib/python3.6/site-packages/datasets/features.py", line 952, in encode_example example = cast_to_python_objects(example) File "/home/anaconda3/lib/python3.6/site-packages/datasets/features.py", line 219, in cast_to_python_ob return _cast_to_python_objects(obj)[0] File "/home/anaconda3/lib/python3.6/site-packages/datasets/features.py", line 165, in _cast_to_python_o import torch File "/home/anaconda3/lib/python3.6/site-packages/torch/__init__.py", line 188, in <module> _load_global_deps() File "/home/anaconda3/lib/python3.6/site-packages/torch/__init__.py", line 141, in _load_global_deps ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL) File "/home/anaconda3/lib/python3.6/ctypes/__init__.py", line 348, in __init__ self._handle = _dlopen(self._name, mode) OSError: dlopen: cannot load any more object with static TLS During handling of the above exception, another exception occurred: Traceback (most recent call last): File "download_hub_opus100.py", line 9, in <module> this_dataset = load_dataset('opus100', language_pair) File "/home/anaconda3/lib/python3.6/site-packages/datasets/load.py", line 748, in load_dataset use_auth_token=use_auth_token, File "/home/anaconda3/lib/python3.6/site-packages/datasets/builder.py", line 575, in download_and_prepa dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/anaconda3/lib/python3.6/site-packages/datasets/builder.py", line 658, in _download_and_prep + str(e) OSError: Cannot find data file. Original error: dlopen: cannot load any more object with static TLS ## Environment info - `datasets` version: 1.8.0 - Platform: Linux-3.13.0-32-generic-x86_64-with-debian-jessie-sid - Python version: 3.6.6 - PyArrow version: 3.0.0
closed
https://github.com/huggingface/datasets/issues/2645
2021-07-14T12:23:53
2021-07-15T09:34:02
2021-07-15T09:34:02
{ "login": "fake-warrior8", "id": 40395156, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
944,254,748
2,644
Batched `map` not allowed to return 0 items
## Describe the bug I'm trying to use `map` to filter a large dataset by selecting rows that match an expensive condition (files referenced by one of the columns need to exist in the filesystem, so we have to `stat` them). According to [the documentation](https://huggingface.co/docs/datasets/processing.html#augmenting-the-dataset), `a batch mapped function can take as input a batch of size N and return a batch of size M where M can be greater or less than N and can even be zero`. However, when the returned batch has a size of zero (neither item in the batch fulfilled the condition), we get an `index out of bounds` error. I think that `arrow_writer.py` is [trying to infer the returned types using the first element returned](https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_writer.py#L100), but no elements were returned in this case. For this error to happen, I'm returning a dictionary that contains empty lists for the keys I want to keep, see below. If I return an empty dictionary instead (no keys), then a different error eventually occurs. ## Steps to reproduce the bug ```python def select_rows(examples): # `key` is a column name that exists in the original dataset # The following line simulates no matches found, so we return an empty batch result = {'key': []} return result filtered_dataset = dataset.map( select_rows, remove_columns = dataset.column_names, batched = True, num_proc = 1, desc = "Selecting rows with images that exist" ) ``` The code above immediately triggers the exception. If we use the following instead: ```python def select_rows(examples): # `key` is a column name that exists in the original dataset result = {'key': []} # or defaultdict or whatever # code to check for condition and append elements to result # some_items_found will be set to True if there were any matching elements in the batch return result if some_items_found else {} ``` Then it _seems_ to work, but it eventually fails with some sort of schema error. I believe it may happen when an empty batch is followed by a non-empty one, but haven't set up a test to verify it. In my opinion, returning a dictionary with empty lists and valid column names should be accepted as a valid result with zero items. ## Expected results The dataset would be filtered and only the matching fields would be returned. ## Actual results An exception is encountered, as described. Using a workaround makes it fail further along the line. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.9.1.dev0 - Platform: Linux-5.4.0-53-generic-x86_64-with-glibc2.17 - Python version: 3.8.10 - PyArrow version: 4.0.1
closed
https://github.com/huggingface/datasets/issues/2644
2021-07-14T09:58:19
2021-07-26T14:55:15
2021-07-26T14:55:15
{ "login": "pcuenca", "id": 1177582, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
944,220,273
2,643
Enum used in map functions will raise a RecursionError with dill.
## Describe the bug Enums used in functions pass to `map` will fail at pickling with a maximum recursion exception as described here: https://github.com/uqfoundation/dill/issues/250#issuecomment-852566284 In my particular case, I use an enum to define an argument with fixed options using the `TraininigArguments` dataclass as base class and the `HfArgumentParser`. In the same file I use a `ds.map` that tries to pickle the content of the module including the definition of the enum that runs into the dill bug described above. ## Steps to reproduce the bug ```python from datasets import load_dataset from enum import Enum class A(Enum): a = 'a' def main(): a = A.a def f(x): return {} if a == a.a else x ds = load_dataset('cnn_dailymail', '3.0.0')['test'] ds = ds.map(f, num_proc=15) if __name__ == "__main__": main() ``` ## Expected results The known problem with dill could be prevented as explained in the link above (workaround.) Since `HFArgumentParser` nicely uses the enum class for choices it makes sense to also deal with this bug under the hood. ## Actual results ```python File "/home/xxxx/miniconda3/lib/python3.8/site-packages/dill/_dill.py", line 1373, in save_type pickler.save_reduce(_create_type, (type(obj), obj.__name__, File "/home/xxxx/miniconda3/lib/python3.8/pickle.py", line 690, in save_reduce save(args) File "/home/xxxx/miniconda3/lib/python3.8/pickle.py", line 558, in save f(self, obj) # Call unbound method with explicit self File "/home/xxxx/miniconda3/lib/python3.8/pickle.py", line 899, in save_tuple save(element) File "/home/xxxx/miniconda3/lib/python3.8/pickle.py", line 534, in save self.framer.commit_frame() File "/home/xxxx/miniconda3/lib/python3.8/pickle.py", line 220, in commit_frame if f.tell() >= self._FRAME_SIZE_TARGET or force: RecursionError: maximum recursion depth exceeded while calling a Python object ``` ## Environment info - `datasets` version: 1.8.0 - Platform: Linux-5.9.0-4-amd64-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyArrow version: 3.0.0
open
https://github.com/huggingface/datasets/issues/2643
2021-07-14T09:16:08
2021-11-02T09:51:11
null
{ "login": "jorgeecardona", "id": 100702, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
944,175,697
2,642
Support multi-worker with streaming dataset (IterableDataset).
**Is your feature request related to a problem? Please describe.**
The current `.map` does not support multi-processing; the CPU can become a bottleneck if the pre-processing is complex (e.g. t5 span masking).

**Describe the solution you'd like**
Ideally `.map` should support multi-worker like tfds, with `AUTOTUNE`.

**Describe alternatives you've considered**
A simpler solution is to shard the dataset and process it in parallel with the pytorch dataloader (a sketch of this follows this record). The shards do not need to be of equal size.
* https://pytorch.org/docs/stable/data.html#torch.utils.data.IterableDataset

**Additional context**
open
https://github.com/huggingface/datasets/issues/2642
2021-07-14T08:22:58
2024-05-03T10:11:04
null
{ "login": "changjonathanc", "id": 31893406, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
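The alternative described in the request above (sharding a streaming dataset across PyTorch DataLoader workers) can be sketched with a plain `torch.utils.data.IterableDataset` wrapper. The wrapper class, the `preprocess` function and the round-robin sharding rule are illustrative assumptions of this sketch, not the library's own API:

```python
from torch.utils.data import DataLoader, IterableDataset, get_worker_info

class ShardedStream(IterableDataset):
    """Wrap any iterable of examples and split it round-robin across DataLoader workers."""

    def __init__(self, examples, preprocess):
        self.examples = examples      # e.g. a datasets IterableDataset in streaming mode
        self.preprocess = preprocess  # the expensive per-example transform

    def __iter__(self):
        info = get_worker_info()
        worker_id, num_workers = (info.id, info.num_workers) if info else (0, 1)
        for i, example in enumerate(self.examples):
            if i % num_workers == worker_id:  # each worker keeps every num_workers-th example
                yield self.preprocess(example)

def double(x):
    # stand-in for a costly transform such as t5 span masking
    return {"value": x * 2}

if __name__ == "__main__":
    # Four worker processes run `double` in parallel over disjoint shards.
    loader = DataLoader(ShardedStream(range(100), double), num_workers=4, batch_size=8)
    for batch in loader:
        pass  # training step would go here
```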
943,838,085
2,641
load_dataset("financial_phrasebank") NonMatchingChecksumError
## Describe the bug Attempting to download the financial_phrasebank dataset results in a NonMatchingChecksumError ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("financial_phrasebank", 'sentences_allagree') ``` ## Expected results I expect to see the financial_phrasebank dataset downloaded successfully ## Actual results NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://www.researchgate.net/profile/Pekka_Malo/publication/251231364_FinancialPhraseBank-v10/data/0c96051eee4fb1d56e000000/FinancialPhraseBank-v10.zip'] ## Environment info - `datasets` version: 1.9.0 - Platform: Linux-4.14.232-177.418.amzn2.x86_64-x86_64-with-debian-10.6 - Python version: 3.7.10 - PyArrow version: 4.0.1
closed
https://github.com/huggingface/datasets/issues/2641
2021-07-13T21:21:49
2022-08-04T08:30:08
2022-08-04T08:30:08
{ "login": "courtmckay", "id": 13956255, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
943,591,055
2,640
Fix docstrings
Fix rendering of some docstrings.
closed
https://github.com/huggingface/datasets/pull/2640
2021-07-13T16:09:14
2021-07-15T06:51:01
2021-07-15T06:06:12
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
943,527,463
2,639
Refactor patching to specific submodule
Minor reorganization of the code, so that additional patching functions (not related to streaming) might be created. In relation with the initial approach followed in #2631.
closed
https://github.com/huggingface/datasets/pull/2639
2021-07-13T15:08:45
2021-07-13T16:52:49
2021-07-13T16:52:49
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
943,484,913
2,638
Streaming for the Json loader
It was not using `open` in the builder. Therefore `pyarrow.json.read_json` was downloading the full file to start yielding rows. Moreover, it appeared that `pyarrow.json.read_json` was not really suited for streaming as it was downloading too much data and failing if `block_size` was not properly configured (related to #2573). So I switched to using `open` which is extended to support reading from remote file progressively, and I removed the pyarrow json reader which was not practical. Instead, I'm using the classical `json.loads` from the standard library.
closed
https://github.com/huggingface/datasets/pull/2638
2021-07-13T14:37:06
2021-07-16T15:59:32
2021-07-16T15:59:31
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
943,044,514
2,636
Streaming for the Pandas loader
It was not using open in the builder. Therefore pd.read_pickle could fail when streaming from a private repo for example. Indeed, when streaming, open is extended to support reading from remote files and handles authentication to the HF Hub
closed
https://github.com/huggingface/datasets/pull/2636
2021-07-13T09:18:21
2021-07-13T14:37:24
2021-07-13T14:37:23
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
943,030,999
2,635
Streaming for the CSV loader
It was not using `open` in the builder. Therefore `pd.read_csv` was downloading the full file to start yielding rows. Indeed, when streaming, `open` is extended to support reading from remote file progressively.
closed
https://github.com/huggingface/datasets/pull/2635
2021-07-13T09:08:58
2021-07-13T15:19:38
2021-07-13T15:19:37
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
942,805,621
2,634
Inject ASR template for lj_speech dataset
Related to: #2565, #2633. cc: @lewtun
closed
https://github.com/huggingface/datasets/pull/2634
2021-07-13T06:04:54
2021-07-13T09:05:09
2021-07-13T09:05:09
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
942,396,414
2,633
Update ASR tags
This PR updates the ASR tags of the 5 datasets added in #2565 following the change of task categories in #2620
closed
https://github.com/huggingface/datasets/pull/2633
2021-07-12T19:58:31
2021-07-13T05:45:26
2021-07-13T05:45:13
{ "login": "lewtun", "id": 26859204, "type": "User" }
[]
true
[]
942,293,727
2,632
add image-classification task template
Snippet below is the tl;dr, but you can try it out directly here: [![Open In Collab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/nateraw/005c025d41f0e48ae3d4ee61c0f20b70/image-classification-task-template-demo.ipynb) ```python from datasets import load_dataset ds = load_dataset('nateraw/image-folder', data_files='PetImages/') # DatasetDict({ # train: Dataset({ # features: ['file', 'labels'], # num_rows: 23410 # }) # }) ds = ds.prepare_for_task('image-classification') # DatasetDict({ # train: Dataset({ # features: ['image_file_path', 'labels'], # num_rows: 23410 # }) # }) ```
closed
https://github.com/huggingface/datasets/pull/2632
2021-07-12T17:41:03
2021-07-13T15:44:28
2021-07-13T15:28:16
{ "login": "nateraw", "id": 32437151, "type": "User" }
[]
true
[]
942,242,271
2,631
Delete extracted files when loading dataset
Close #2481, close #2604, close #2591. cc: @stas00, @thomwolf, @BirgerMoell
closed
https://github.com/huggingface/datasets/pull/2631
2021-07-12T16:39:33
2021-07-19T09:08:19
2021-07-19T09:08:19
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
942,102,956
2,630
Progress bars are not properly rendered in Jupyter notebook
## Describe the bug

The progress bars are not Jupyter widgets; regular progress bars appear (like in a terminal).

## Steps to reproduce the bug

```python
ds.map(tokenize, num_proc=10)
```

## Expected results

Jupyter widgets displaying the progress bars.

## Actual results

Simple plain progress bars.

cc: Reported by @thomwolf
closed
https://github.com/huggingface/datasets/issues/2630
2021-07-12T14:07:13
2022-02-03T15:55:33
2022-02-03T15:55:33
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
941,819,205
2,629
Load datasets from the Hub without requiring a dataset script
As a user I would like to be able to upload my csv/json/text/parquet/etc. files in a dataset repository on the Hugging Face Hub and be able to load this dataset with `load_dataset` without having to implement a dataset script. Moreover I would like to be able to specify which file goes into which split using the `data_files` argument. This feature should be compatible with private repositories and dataset streaming. This can be implemented by checking the extension of the files in the dataset repository and then by using the right dataset builder that is already packaged in the library (csv/json/text/parquet/etc.)
closed
https://github.com/huggingface/datasets/issues/2629
2021-07-12T08:45:17
2021-08-25T14:18:08
2021-08-25T14:18:08
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
false
[]
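To illustrate the "which file goes into which split" part of the request above, here is a minimal usage sketch; the repository name and file names are made up for the example, and the feature itself is the one implemented in PR 2662 shown earlier in this dump:

```python
from datasets import load_dataset

# Hypothetical repo "username/my-csv-dataset" containing train.csv and test.csv,
# with no dataset script in the repository.
data_files = {"train": "train.csv", "test": "test.csv"}
ds = load_dataset("username/my-csv-dataset", data_files=data_files)
print(ds)  # DatasetDict with "train" and "test" splits
```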
941,676,404
2,628
Use ETag of remote data files
Use ETag of remote data files to create config ID. Related to #2616.
closed
https://github.com/huggingface/datasets/pull/2628
2021-07-12T05:10:10
2021-07-12T14:08:34
2021-07-12T08:40:07
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
941,503,349
2,627
Minor fix tests with Windows paths
Minor fix tests with Windows paths.
closed
https://github.com/huggingface/datasets/pull/2627
2021-07-11T17:55:48
2021-07-12T14:08:47
2021-07-12T08:34:50
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
941,497,830
2,626
Use correct logger in metrics.py
Fixes #2624
closed
https://github.com/huggingface/datasets/pull/2626
2021-07-11T17:22:30
2021-07-12T14:08:54
2021-07-12T05:54:29
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
941,439,922
2,625
βš›οΈπŸ˜‡βš™οΈπŸ”‘
closed
https://github.com/huggingface/datasets/issues/2625
2021-07-11T12:14:34
2021-07-12T05:55:59
2021-07-12T05:55:59
{ "login": "hustlen0mics", "id": 50596661, "type": "User" }
[]
false
[]
941,318,247
2,624
can't set verbosity for `metric.py`
## Describe the bug ``` [2021-07-10 20:13:11,528][datasets.utils.filelock][INFO] - Lock 139705371374976 acquired on /root/.cache/huggingface/metrics/seqeval/default/default_experiment-1-0.arrow.lock [2021-07-10 20:13:11,529][datasets.arrow_writer][INFO] - Done writing 32 examples in 6100 bytes /root/.cache/huggingface/metrics/seqeval/default/default_experiment-1-0.arrow. [2021-07-10 20:13:11,531][datasets.arrow_dataset][INFO] - Set __getitem__(key) output type to python objects for no columns (when key is int or slice) and don't output other (un-formatted) columns. [2021-07-10 20:13:11,543][/conda/envs/myenv/lib/python3.8/site-packages/datasets/metric.py][INFO] - Removing /root/.cache/huggingface/metrics/seqeval/default/default_experiment-1-0.arrow ``` As you can see, `datasets` logging come from different places. `filelock`, `arrow_writer` & `arrow_dataset` comes from `datasets.*` which are expected However, `metric.py` logging comes from `/conda/envs/myenv/lib/python3.8/site-packages/datasets/` So when setting `datasets.utils.logging.set_verbosity_error()`, it still logs the last message which is annoying during evaluation. I had to do ``` logging.getLogger("/conda/envs/myenv/lib/python3.8/site-packages/datasets/metric").setLevel(logging.ERROR) ``` to fully mute these messages ## Expected results it shouldn't log these messages when setting `datasets.utils.logging.set_verbosity_error()` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: tried both 1.8.0 & 1.9.0 - Platform: Ubuntu 18.04.5 LTS - Python version: 3.8.10 - PyArrow version: 3.0.0
closed
https://github.com/huggingface/datasets/issues/2624
2021-07-10T20:23:45
2021-07-12T05:54:29
2021-07-12T05:54:29
{ "login": "thomas-happify", "id": 66082334, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
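The root cause described above (and fixed in PR 2626, "Use correct logger in metrics.py", listed earlier in this dump) is the classic difference between naming a logger after the module's import path and naming it after its file path. A small stdlib-only illustration, independent of the datasets code itself:

```python
import logging

# Named after the module path: a child of the "datasets" logger, so setting the
# verbosity of the "datasets" library logger also silences it.
good_logger = logging.getLogger("datasets.metric")

# Named after the file path: NOT a child of "datasets", so the verbosity setting is ignored.
bad_logger = logging.getLogger("/conda/envs/myenv/lib/python3.8/site-packages/datasets/metric")

logging.getLogger("datasets").setLevel(logging.ERROR)
good_logger.info("hidden")          # suppressed via the inherited "datasets" level
bad_logger.warning("still shown")   # unaffected, ends up on stderr anyway
```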
941,265,342
2,623
[Metrics] added wiki_split metrics
Fixes: #2606 This pull request adds a combined metric for the wiki_split (English sentence split) task. Reviewer: @patrickvonplaten
closed
https://github.com/huggingface/datasets/pull/2623
2021-07-10T14:51:50
2021-07-14T14:28:13
2021-07-12T22:34:31
{ "login": "bhadreshpsavani", "id": 26653468, "type": "User" }
[]
true
[]
941,127,785
2,622
Integration with AugLy
**Is your feature request related to a problem? Please describe.**
Facebook recently launched a library, [AugLy](https://github.com/facebookresearch/AugLy), that has a unified API for augmentations for image, video and text. It would be pretty exciting to have it hooked up to HF libraries so that we can make NLP models robust to misspellings, punctuation, emojis, etc. Plus, with Transformers supporting more CV use cases, having augmentations support becomes crucial.

**Describe the solution you'd like**
The biggest difference between augmentations and preprocessing is that preprocessing happens only once, but augmentations run once per epoch. AugLy operates on text directly, so this breaks the typical workflow where we would run the tokenizer once, set the format to pt tensors and be ready for the Dataloader.

**Describe alternatives you've considered**
One possible way of implementing these is to make a custom Dataset class where `__getitem__(i)` runs the augmentation and the tokenizer every time (a sketch of this follows this record), though this would slow training down considerably given we wouldn't even run the tokenizer in batches.
closed
https://github.com/huggingface/datasets/issues/2622
2021-07-10T00:03:09
2023-07-20T13:18:48
2023-07-20T13:18:47
{ "login": "Darktex", "id": 890615, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
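A bare-bones version of the alternative described above: wrap the already-loaded dataset in a PyTorch `Dataset` whose `__getitem__` applies augmentation and tokenization on every access, so each epoch sees freshly augmented text. The `augment` and `tokenize` callables are placeholders for, e.g., an AugLy transform and a Hugging Face tokenizer; their exact signatures are assumptions of this sketch.

```python
from torch.utils.data import Dataset

class AugmentedTextDataset(Dataset):
    """Applies text augmentation + tokenization lazily, once per epoch per example."""

    def __init__(self, hf_dataset, augment, tokenize):
        self.hf_dataset = hf_dataset  # e.g. a datasets.Dataset with a "text" column
        self.augment = augment        # str -> str, re-applied on every access
        self.tokenize = tokenize      # str -> dict of tensors

    def __len__(self):
        return len(self.hf_dataset)

    def __getitem__(self, i):
        text = self.hf_dataset[i]["text"]
        return self.tokenize(self.augment(text))
```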
940,916,446
2,621
Use prefix to allow exceed Windows MAX_PATH
By using this prefix, you can exceed the Windows MAX_PATH limit. See: https://docs.microsoft.com/en-us/windows/win32/fileio/naming-a-file?redirectedfrom=MSDN#win32-file-namespaces Related to #2524, #2220.
closed
https://github.com/huggingface/datasets/pull/2621
2021-07-09T16:39:53
2021-07-16T15:28:12
2021-07-16T15:28:11
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
940,893,389
2,620
Add speech processing tasks
This PR replaces the `automatic-speech-recognition` task category with a broader `speech-processing` category. The tasks associated with this category are derived from the [SUPERB benchmark](https://arxiv.org/abs/2105.01051), and ASR is included in this set.
closed
https://github.com/huggingface/datasets/pull/2620
2021-07-09T16:07:29
2021-07-12T18:32:59
2021-07-12T17:32:02
{ "login": "lewtun", "id": 26859204, "type": "User" }
[]
true
[]
940,858,236
2,619
Add ASR task for SUPERB
This PR starts building up the SUPERB benchmark by including the ASR task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051) and `s3prl` [instructions](https://github.com/s3prl/s3prl/tree/v0.2.0/downstream#asr-automatic-speech-recognition). Usage: ```python from datasets import load_dataset asr = load_dataset("superb", "asr") # DatasetDict({ # train: Dataset({ # features: ['file', 'text', 'speaker_id', 'chapter_id', 'id'], # num_rows: 28539 # }) # validation: Dataset({ # features: ['file', 'text', 'speaker_id', 'chapter_id', 'id'], # num_rows: 2703 # }) # test: Dataset({ # features: ['file', 'text', 'speaker_id', 'chapter_id', 'id'], # num_rows: 2620 # }) # }) ``` I've used the GLUE benchmark as a guide for filling out the README. To move fast during the evaluation PoC I propose to merge one task at a time, so we can continue building the training / evaluation framework in parallel. Note: codewise this PR is ready for review - I'll add the missing YAML tags once #2620 is merged :)
closed
https://github.com/huggingface/datasets/pull/2619
2021-07-09T15:19:45
2021-07-15T08:55:58
2021-07-13T12:40:18
{ "login": "lewtun", "id": 26859204, "type": "User" }
[]
true
[]
940,852,640
2,618
`filelock.py` Error
## Describe the bug It seems that the `filelock.py` went error. ``` >>> ds=load_dataset('xsum') ^CTraceback (most recent call last): File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/utils/filelock.py", line 402, in _acquire fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB) OSError: [Errno 37] No locks available ``` According to error log, it is OSError, but there is an `except` in the `_acquire` function. ``` def _acquire(self): open_mode = os.O_WRONLY | os.O_CREAT | os.O_EXCL | os.O_TRUNC try: fd = os.open(self._lock_file, open_mode) except (IOError, OSError): pass else: self._lock_file_fd = fd return None ``` I don't know why it stucked rather than `pass` directly. I am not quite familiar with filelock operation, so any help is highly appriciated. ## Steps to reproduce the bug ```python ds = load_dataset('xsum') ``` ## Expected results A clear and concise description of the expected results. ## Actual results ``` >>> ds=load_dataset('xsum') ^CTraceback (most recent call last): File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/utils/filelock.py", line 402, in _acquire fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB) OSError: [Errno 37] No locks available During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/load.py", line 818, in load_dataset use_auth_token=use_auth_token, File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/load.py", line 470, in prepare_module with FileLock(lock_path): File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/utils/filelock.py", line 323, in __enter__ self.acquire() File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/utils/filelock.py", line 272, in acquire self._acquire() File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/utils/filelock.py", line 402, in _acquire fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB) KeyboardInterrupt ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.9.0 - Platform: Linux-4.15.0-135-generic-x86_64-with-debian-buster-sid - Python version: 3.6.13 - PyArrow version: 4.0.1
closed
https://github.com/huggingface/datasets/issues/2618
2021-07-09T15:12:49
2024-06-21T06:14:07
2023-11-23T19:06:19
{ "login": "liyucheng09", "id": 27999909, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
940,846,847
2,617
Fix missing EOL issue in to_json for old versions of pandas
Some versions of pandas don't add an EOL at the end of the output of `to_json`. Therefore users could end up having two samples in the same line Close https://github.com/huggingface/datasets/issues/2615
closed
https://github.com/huggingface/datasets/pull/2617
2021-07-09T15:05:45
2021-07-12T14:09:00
2021-07-09T15:28:33
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
940,799,038
2,616
Support remote data files
Add support for (streaming) remote data files: ```python data_files = f"https://huggingface.co/datasets/{repo_id}/resolve/main/{relative_file_path}" ds = load_dataset("json", split="train", data_files=data_files, streaming=True) ``` cc: @thomwolf
closed
https://github.com/huggingface/datasets/pull/2616
2021-07-09T14:07:38
2021-07-09T16:13:41
2021-07-09T16:13:41
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
true
[]
940,794,339
2,615
Jsonlines export error
## Describe the bug

When exporting large datasets in jsonlines (c4 in my case) the created file has an error every 9999 lines: the 9999th and 10000th lines are concatenated, thus breaking the jsonlines format. This sounds like it is related to batching, which is 10000 by default.

## Steps to reproduce the bug

This is what I'm running, in python:

```
from datasets import load_dataset
ptb = load_dataset("ptb_text_only")
ptb["train"].to_json("ptb.jsonl")
```

then out of python:

```
head -10000 ptb.jsonl
```

## Expected results

Properly separated lines

## Actual results

The last line is a concatenation of two lines

## Environment info

- `datasets` version: 1.9.1.dev0
- Platform: Linux-5.4.0-1046-gcp-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyArrow version: 4.0.1
closed
https://github.com/huggingface/datasets/issues/2615
2021-07-09T14:02:05
2021-07-09T15:29:07
2021-07-09T15:28:33
{ "login": "TevenLeScao", "id": 26709476, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
940,762,427
2,614
Convert numpy scalar to python float in Pearsonr output
Following of https://github.com/huggingface/datasets/pull/2612
closed
https://github.com/huggingface/datasets/pull/2614
2021-07-09T13:22:55
2021-07-12T14:13:02
2021-07-09T14:04:38
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
940,759,852
2,613
Use ndarray.item instead of ndarray.tolist
This PR follows up on #2612 to use `numpy.ndarray.item` instead of `numpy.ndarray.tolist` as the latter is somewhat confusing to the developer (even though it works). Judging from the `numpy` docs, `ndarray.item` is closer to what we want: https://numpy.org/doc/stable/reference/generated/numpy.ndarray.item.html#numpy-ndarray-item PS. Sorry for the duplicate work here. I should have read the numpy docs more carefully in #2612
closed
https://github.com/huggingface/datasets/pull/2613
2021-07-09T13:19:35
2021-07-12T14:12:57
2021-07-09T13:50:05
{ "login": "lewtun", "id": 26859204, "type": "User" }
[]
true
[]
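The distinction discussed in the two PRs above is easy to demonstrate with plain numpy: both calls turn a numpy scalar into a native Python object, but `item()` states that intent directly, while `tolist()` only happens to return a scalar for 0-d inputs.

```python
import numpy as np

score = np.float64(0.875)                # what sklearn metrics typically return
print(type(score.item()))                # <class 'float'> - explicit scalar conversion
print(type(score.tolist()))              # <class 'float'> - works, but the name suggests a list
print(type(np.array([1, 2]).tolist()))   # <class 'list'>  - tolist() is really meant for arrays
```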
940,604,512
2,612
Return Python float instead of numpy.float64 in sklearn metrics
This PR converts the return type of all `sklearn` metrics to be Python `float` instead of `numpy.float64`. The reason behind this is that our Hub evaluation framework relies on converting benchmark-specific metrics to YAML ([example](https://huggingface.co/datasets/autonlp/autonlp-benchmark-raft-neelalex__raft-test-neelalex__raft-predictions-3/blob/main/README.md#L11)) and the `numpy.float64` format produces garbage like: ```python import yaml from datasets import load_metric metric = load_metric("accuracy") score = metric.compute(predictions=[0,1], references=[0,1]) print(yaml.dump(score["accuracy"])) # output below # !!python/object/apply:numpy.core.multiarray.scalar # - !!python/object/apply:numpy.dtype # args: # - f8 # - false # - true # state: !!python/tuple # - 3 # - < # - null # - null # - null # - -1 # - -1 # - 0 # - !!binary | # AAAAAAAA8D8= ```
closed
https://github.com/huggingface/datasets/pull/2612
2021-07-09T09:48:09
2021-07-12T14:12:53
2021-07-09T13:03:54
{ "login": "lewtun", "id": 26859204, "type": "User" }
[]
true
[]
940,307,053
2,611
More consistent naming
As per @stas00's suggestion in #2500, this PR inserts a space between the logo and the lib name (`πŸ€—Datasets` -> `πŸ€— Datasets`) for consistency with the Transformers lib. Additionally, more consistent names are used for Datasets Hub, etc.
closed
https://github.com/huggingface/datasets/pull/2611
2021-07-09T00:09:17
2021-07-13T17:13:19
2021-07-13T16:08:30
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
939,899,829
2,610
Add missing WikiANN language tags
Add missing language tags for WikiANN datasets.
closed
https://github.com/huggingface/datasets/pull/2610
2021-07-08T14:08:01
2021-07-12T14:12:16
2021-07-08T15:44:04
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
939,616,682
2,609
Fix potential DuplicatedKeysError
Fix potential DuplicatedKeysError by ensuring keys are unique. We should promote it as a good practice that keys be programmatically generated as unique, instead of read from the data (which might not be unique).
closed
https://github.com/huggingface/datasets/pull/2609
2021-07-08T08:38:04
2021-07-12T14:13:16
2021-07-09T16:42:08
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
938,897,626
2,608
Support streaming JSON files
Use open in JSON dataset builder, so that it can be patched with xopen for streaming. Close #2607.
closed
https://github.com/huggingface/datasets/pull/2608
2021-07-07T13:30:22
2021-07-12T14:12:31
2021-07-08T16:08:41
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
938,796,902
2,607
Streaming local gzip compressed JSON line files is not working
## Describe the bug Using streaming to iterate on local gzip compressed JSON files raise a file not exist error ## Steps to reproduce the bug ```python from datasets import load_dataset streamed_dataset = load_dataset('json', split='train', data_files=data_files, streaming=True) next(iter(streamed_dataset)) ``` ## Actual results ``` FileNotFoundError Traceback (most recent call last) <ipython-input-6-27a664e29784> in <module> ----> 1 next(iter(streamed_dataset)) ~/Documents/GitHub/datasets/src/datasets/iterable_dataset.py in __iter__(self) 336 337 def __iter__(self): --> 338 for key, example in self._iter(): 339 if self.features: 340 # we encode the example for ClassLabel feature types for example ~/Documents/GitHub/datasets/src/datasets/iterable_dataset.py in _iter(self) 333 else: 334 ex_iterable = self._ex_iterable --> 335 yield from ex_iterable 336 337 def __iter__(self): ~/Documents/GitHub/datasets/src/datasets/iterable_dataset.py in __iter__(self) 76 77 def __iter__(self): ---> 78 for key, example in self.generate_examples_fn(**self.kwargs): 79 yield key, example 80 ~/Documents/GitHub/datasets/src/datasets/iterable_dataset.py in wrapper(**kwargs) 282 def wrapper(**kwargs): 283 python_formatter = PythonFormatter() --> 284 for key, table in generate_tables_fn(**kwargs): 285 batch = python_formatter.format_batch(table) 286 for i, example in enumerate(_batch_to_examples(batch)): ~/Documents/GitHub/datasets/src/datasets/packaged_modules/json/json.py in _generate_tables(self, files, original_files) 85 file, 86 read_options=self.config.pa_read_options, ---> 87 parse_options=self.config.pa_parse_options, 88 ) 89 except pa.ArrowInvalid as err: ~/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/_json.pyx in pyarrow._json.read_json() ~/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/_json.pyx in pyarrow._json._get_reader() ~/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.get_input_stream() ~/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.get_native_file() ~/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.OSFile.__cinit__() ~/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.OSFile._open_readable() ~/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() ~/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() FileNotFoundError: [Errno 2] Failed to open local file 'gzip://file-000000000000.json::/Users/thomwolf/github-dataset/file-000000000000.json.gz'. Detail: [errno 2] No such file or directory ``` ## Environment info - `datasets` version: 1.9.1.dev0 - Platform: Darwin-19.6.0-x86_64-i386-64bit - Python version: 3.7.7 - PyArrow version: 1.0.0
closed
https://github.com/huggingface/datasets/issues/2607
2021-07-07T11:36:33
2021-07-20T09:50:19
2021-07-08T16:08:41
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
938,763,684
2,606
[Metrics] addition of wiki_split metrics
**Is your feature request related to a problem? Please describe.**
While training a model on the English sentence-split task, we need to evaluate the trained model on `Exact Match`, `SARI` and `BLEU` scores like this:
![image](https://user-images.githubusercontent.com/26653468/124746876-ff5a3380-df3e-11eb-9a01-4b48db7a6694.png)
While training we require a metric which can give all of these outputs. Currently, we don't have an exact match for text-normalized data.

**Describe the solution you'd like**
A custom metric for wiki_split that can calculate these three values and provide them in the form of a single dictionary. For exact match, we can refer to [this](https://github.com/huggingface/transformers/blob/master/src/transformers/data/metrics/squad_metrics.py).

**Describe alternatives you've considered**
Two metrics are already present; one more can be added for exact match, then we can run all three metrics in the training script.

#self-assign
closed
https://github.com/huggingface/datasets/issues/2606
2021-07-07T10:56:04
2021-07-12T22:34:31
2021-07-12T22:34:31
{ "login": "bhadreshpsavani", "id": 26653468, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "metric request", "color": "d4c5f9" } ]
false
[]
938,648,164
2,605
Make any ClientError trigger retry in streaming mode (e.g. ClientOSError)
During the FLAX sprint some users have this error when streaming datasets: ```python aiohttp.client_exceptions.ClientOSError: [Errno 104] Connection reset by peer ``` This error must trigger a retry instead of directly crashing Therefore I extended the error type that triggers the retry to be the base aiohttp error type: `ClientError` In particular both `ClientOSError` and `ServerDisconnectedError` inherit from `ClientError`.
closed
https://github.com/huggingface/datasets/pull/2605
2021-07-07T08:47:23
2021-07-12T14:10:27
2021-07-07T08:59:13
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
938,602,237
2,604
Add option to delete temporary files (e.g. extracted files) when loading dataset
I'm loading a dataset constituted of 44 GB of compressed JSON files. When loading the dataset with the JSON script, extracting the files creates about 200 GB of uncompressed files before creating the 180 GB of arrow cache tables. Having a simple way to delete the extracted files after usage (or even better, to stream extraction/deletion) would be nice to avoid disk clutter. I can maybe tackle this one in the JSON script unless you want a more general solution.
closed
https://github.com/huggingface/datasets/issues/2604
2021-07-07T07:56:16
2021-07-19T09:08:18
2021-07-19T09:08:18
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
938,588,149
2,603
Fix DuplicatedKeysError in omp
Close #2598.
closed
https://github.com/huggingface/datasets/pull/2603
2021-07-07T07:38:32
2021-07-12T14:10:41
2021-07-07T12:56:35
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
938,555,712
2,602
Remove import of transformers
When pickling a tokenizer within multiprocessing, check that it is an instance of transformers' PreTrainedTokenizerBase without importing transformers. Related to huggingface/transformers#12549 and #502.
closed
https://github.com/huggingface/datasets/pull/2602
2021-07-07T06:58:18
2021-07-12T14:10:22
2021-07-07T08:28:51
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
938,096,396
2,601
Fix `filter` with multiprocessing in case all samples are discarded
Fixes #2600 Also I moved the check for `num_proc` larger than dataset size added in #2566 up so that multiprocessing is not used with one process.
closed
https://github.com/huggingface/datasets/pull/2601
2021-07-06T17:06:28
2021-07-12T14:10:35
2021-07-07T12:50:31
{ "login": "mxschmdt", "id": 4904985, "type": "User" }
[]
true
[]
938,086,745
2,600
Crash when using multiprocessing (`num_proc` > 1) on `filter` and all samples are discarded
## Describe the bug If `filter` is applied to a dataset using multiprocessing (`num_proc` > 1) and all sharded datasets are empty afterwards (due to all samples being discarded), the program crashes. ## Steps to reproduce the bug ```python from datasets import Dataset data = Dataset.from_dict({'id': [0,1]}) data.filter(lambda x: False, num_proc=2) ``` ## Expected results An empty table should be returned without crashing. ## Actual results ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/user/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/user/venv/lib/python3.8/site-packages/datasets/fingerprint.py", line 397, in wrapper out = func(self, *args, **kwargs) File "/home/user/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2143, in filter return self.map( File "/home/user/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1738, in map result = concatenate_datasets(transformed_shards) File "/home/user/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3267, in concatenate_datasets table = concat_tables(tables_to_concat, axis=axis) File "/home/user/venv/lib/python3.8/site-packages/datasets/table.py", line 853, in concat_tables return ConcatenationTable.from_tables(tables, axis=axis) File "/home/user/venv/lib/python3.8/site-packages/datasets/table.py", line 713, in from_tables blocks = to_blocks(tables[0]) IndexError: list index out of range ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.9.0 - Platform: Linux-5.12.11-300.fc34.x86_64-x86_64-with-glibc2.2.5 - Python version: 3.8.10 - PyArrow version: 3.0.0
closed
https://github.com/huggingface/datasets/issues/2600
2021-07-06T16:53:25
2021-07-07T12:50:31
2021-07-07T12:50:31
{ "login": "mxschmdt", "id": 4904985, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
937,980,229
2,599
Update processing.rst with other export formats
Add other supported export formats than CSV in the docs.
closed
https://github.com/huggingface/datasets/pull/2599
2021-07-06T14:50:38
2021-07-12T14:10:16
2021-07-07T08:05:48
{ "login": "TevenLeScao", "id": 26709476, "type": "User" }
[]
true
[]
937,930,632
2,598
Unable to download omp dataset
## Describe the bug The omp dataset cannot be downloaded because of a DuplicatedKeysError ## Steps to reproduce the bug from datasets import load_dataset omp = load_dataset('omp', 'posts_labeled') print(omp) ## Expected results This code should download the omp dataset and print the dictionary ## Actual results Downloading and preparing dataset omp/posts_labeled (download: 1.27 MiB, generated: 13.31 MiB, post-processed: Unknown size, total: 14.58 MiB) to /home/erika_distefano/.cache/huggingface/datasets/omp/posts_labeled/1.1.0/2fe5b067be3bff1d4588d5b0cbb9b5b22ae1b9d5b026a8ff572cd389f862735b... 0 examples [00:00, ? examples/s]2021-07-06 09:43:55.868815: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.11.0 Traceback (most recent call last): File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/builder.py", line 990, in _prepare_split writer.write(example, key) File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/arrow_writer.py", line 338, in write self.check_duplicate_keys() File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/arrow_writer.py", line 349, in check_duplicate_keys raise DuplicatedKeysError(key) datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET ! Found duplicate Key: 3326 Keys should be unique and deterministic in nature During handling of the above exception, another exception occurred: Traceback (most recent call last): File "hf_datasets.py", line 32, in <module> omp = load_dataset('omp', 'posts_labeled') File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/load.py", line 748, in load_dataset use_auth_token=use_auth_token, File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/builder.py", line 575, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/builder.py", line 652, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/builder.py", line 992, in _prepare_split num_examples, num_bytes = writer.finalize() File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/arrow_writer.py", line 409, in finalize self.check_duplicate_keys() File "/home/erika_distefano/.local/lib/python3.6/site-packages/datasets/arrow_writer.py", line 349, in check_duplicate_keys raise DuplicatedKeysError(key) datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET ! Found duplicate Key: 3326 Keys should be unique and deterministic in nature ## Environment info - `datasets` version: 1.8.0 - Platform: Ubuntu 18.04.4 LTS - Python version: 3.6.9 - PyArrow version: 3.0.0
closed
https://github.com/huggingface/datasets/issues/2598
2021-07-06T14:00:52
2021-07-07T12:56:35
2021-07-07T12:56:35
{ "login": "erikadistefano", "id": 25797960, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
937,917,770
2,597
Remove redundant prepare_module
I have noticed that after implementing `load_dataset_builder` (#2500), there is a redundant call to `prepare_module`.
closed
https://github.com/huggingface/datasets/pull/2597
2021-07-06T13:47:45
2021-07-12T14:10:52
2021-07-07T13:01:46
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "refactoring", "color": "B67A40" } ]
true
[]
937,598,914
2,596
Transformer Class on dataset
Just wondering if you have any intention to create a Transformer class: dataset --> dataset, that applies a deterministic transformation (i.e. not fit).
closed
https://github.com/huggingface/datasets/issues/2596
2021-07-06T07:27:15
2022-11-02T14:26:09
2022-11-02T14:26:09
{ "login": "arita37", "id": 18707623, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
937,483,120
2,595
ModuleNotFoundError: No module named 'datasets.tasks' while importing common voice datasets
Error traceback: --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) <ipython-input-8-a7b592d3bca0> in <module>() 1 from datasets import load_dataset, load_metric 2 ----> 3 common_voice_train = load_dataset("common_voice", "pa-IN", split="train+validation") 4 common_voice_test = load_dataset("common_voice", "pa-IN", split="test") 9 frames /root/.cache/huggingface/modules/datasets_modules/datasets/common_voice/078d412587e9efeb0ae2e574da99c31e18844c496008d53dc5c60f4159ed639b/common_voice.py in <module>() 19 20 import datasets ---> 21 from datasets.tasks import AutomaticSpeechRecognition 22 23 ModuleNotFoundError: No module named 'datasets.tasks'
closed
https://github.com/huggingface/datasets/issues/2595
2021-07-06T03:20:55
2021-07-06T05:59:49
2021-07-06T05:59:49
{ "login": "profsatwinder", "id": 41314912, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
937,294,772
2,594
Fix BibTeX entry
Fix BibTeX entry.
closed
https://github.com/huggingface/datasets/pull/2594
2021-07-05T18:24:10
2021-07-06T04:59:38
2021-07-06T04:59:38
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
937,242,137
2,593
Support pandas 1.3.0 read_csv
Workaround for this issue in pandas 1.3.0 : https://github.com/pandas-dev/pandas/issues/42387 The csv reader raises an error: ```python /usr/local/lib/python3.7/dist-packages/pandas/io/parsers/readers.py in _refine_defaults_read(dialect, delimiter, delim_whitespace, engine, sep, error_bad_lines, warn_bad_lines, on_bad_lines, names, prefix, defaults) 1304 1305 if names is not lib.no_default and prefix is not lib.no_default: -> 1306 raise ValueError("Specified named and prefix; you can only specify one.") 1307 1308 kwds["names"] = None if names is lib.no_default else names ValueError: Specified named and prefix; you can only specify one. ```
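For context, a minimal sketch of the workaround idea, assuming the fix simply avoids forwarding both `names` and `prefix` to `pandas.read_csv` at the same time (file name and column names below are made up for illustration):

```python
import pandas as pd

# pandas 1.3.0 raises "Specified named and prefix; you can only specify one."
# when both kwargs are passed, even if one of them is just a forwarded default.
read_csv_kwargs = {"names": ["text", "label"]}  # or {"prefix": "col_"}, but never both

df = pd.read_csv("data.csv", header=None, **read_csv_kwargs)
print(df.head())
```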
closed
https://github.com/huggingface/datasets/pull/2593
2021-07-05T16:40:04
2021-07-05T17:14:14
2021-07-05T17:14:14
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
937,060,559
2,592
Add c4.noclean infos
Adding the data file checksums and the dataset size of the c4.noclean configuration of the C4 dataset.
closed
https://github.com/huggingface/datasets/pull/2592
2021-07-05T12:51:40
2021-07-05T13:15:53
2021-07-05T13:15:52
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
936,957,975
2,591
Cached dataset overflowing disk space
I'm training a Swedish Wav2vec2 model on a Linux GPU and the Hugging Face cached dataset folder is completely filling up my disk space (I'm training on a dataset of around 500 GB). The cache folder is 500 GB (and now my disk space is full). Is there a way to toggle caching, or to store the cache on a different device? (I have another drive with 4 TB that could hold the cache files.) This might not technically be a bug, but I was unsure and felt that the bug label was the closest fit. Traceback (most recent call last): File "/home/birger/miniconda3/envs/wav2vec2/lib/python3.7/site-packages/multiprocess/pool.py", line 121, in worker result = (True, func(*args, **kwds)) File "/home/birger/miniconda3/envs/wav2vec2/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 186, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/birger/miniconda3/envs/wav2vec2/lib/python3.7/site-packages/datasets/fingerprint.py", line 397, in wrapper out = func(self, *args, **kwargs) File "/home/birger/miniconda3/envs/wav2vec2/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1983, in _map_single writer.finalize() File "/home/birger/miniconda3/envs/wav2vec2/lib/python3.7/site-packages/datasets/arrow_writer.py", line 418, in finalize self.pa_writer.close() File "pyarrow/ipc.pxi", line 402, in pyarrow.lib._CRecordBatchWriter.close File "pyarrow/error.pxi", line 97, in pyarrow.lib.check_status OSError: [Errno 28] Error writing bytes to file. Detail: [errno 28] No space left on device """ The above exception was the direct cause of the following exception:
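For reference, a minimal sketch of redirecting the cache to a bigger drive, using the standard `HF_DATASETS_CACHE` environment variable and the `cache_dir` argument of `load_dataset` (the paths below are placeholders):

```python
import os

# Option 1: point the whole datasets cache at another drive, before importing datasets.
os.environ["HF_DATASETS_CACHE"] = "/mnt/bigdrive/hf_datasets_cache"

from datasets import load_dataset

# Option 2: redirect the cache for a single dataset via cache_dir.
common_voice = load_dataset(
    "common_voice", "sv-SE", split="train", cache_dir="/mnt/bigdrive/hf_datasets_cache"
)
```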
closed
https://github.com/huggingface/datasets/issues/2591
2021-07-05T10:43:19
2021-07-19T09:08:19
2021-07-19T09:08:19
{ "login": "BirgerMoell", "id": 1704131, "type": "User" }
[]
false
[]
936,954,348
2,590
Add language tags
This PR adds some missing language tags needed for ASR datasets in #2565
closed
https://github.com/huggingface/datasets/pull/2590
2021-07-05T10:39:57
2021-07-05T10:58:48
2021-07-05T10:58:48
{ "login": "lewtun", "id": 26859204, "type": "User" }
[]
true
[]
936,825,060
2,589
Support multilabel metrics
Currently, multilabel metrics are not supported because `predictions` and `references` are defined as `Value("int32")`. This PR creates a new feature type `OptionalSequence` which can act as either `Value("int32")` or `Sequence(Value("int32"))`, depending on the data passed. Close #2554.
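As an illustration only, the kind of call shape this change is meant to allow, where each prediction/reference is a list of label ids rather than a single int (whether a specific metric script actually handles multilabel inputs is a separate question):

```python
from datasets import load_metric

metric = load_metric("f1")  # hypothetical multilabel usage, for illustration

# With OptionalSequence, these lists of lists would pass the features check
# instead of being rejected because the feature is declared as Value("int32").
metric.add_batch(
    predictions=[[0, 2], [1], [0, 1, 3]],
    references=[[0, 2], [1, 2], [0, 3]],
)
```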
closed
https://github.com/huggingface/datasets/pull/2589
2021-07-05T08:19:25
2022-07-29T10:56:25
2021-07-08T08:40:15
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
936,795,541
2,588
Fix test_is_small_dataset
Remove the environment variable fixture `env_max_in_memory_dataset_size`. This fixture does not work because the env variable is read in datasets.config when datasets is first loaded, and it is never reread during the tests.
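A rough sketch of the alternative that does work: patch the already-imported config value directly instead of the environment variable. The exact attribute name below is an assumption and may differ between versions.

```python
import datasets.config

def test_is_small_dataset(monkeypatch):
    # datasets.config reads the environment only once, at import time, so patch the
    # module attribute itself (attribute name is assumed here for illustration).
    monkeypatch.setattr(datasets.config, "MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES", 100)
    assert datasets.config.MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES == 100
```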
closed
https://github.com/huggingface/datasets/pull/2588
2021-07-05T07:46:26
2021-07-12T14:10:11
2021-07-06T17:09:30
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
936,771,339
2,587
Add aiohttp to tests extras require
Currently, none of the streaming tests are run within our CI test suite, because the streaming tests require aiohttp and this is missing from our tests extras_require dependencies. Our CI test suite should be exhaustive and test all the library functionalities.
closed
https://github.com/huggingface/datasets/pull/2587
2021-07-05T07:14:01
2021-07-05T09:04:38
2021-07-05T09:04:38
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
936,747,588
2,586
Fix misalignment in SQuAD
Fix the misalignment between the answer text and the answer_start within the context, by keeping the original leading blank spaces in the context. Fix #2585.
closed
https://github.com/huggingface/datasets/pull/2586
2021-07-05T06:42:20
2021-07-12T14:11:10
2021-07-07T13:18:51
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
936,484,419
2,585
sqaud_v2 dataset contains misalignment between the answer text and the context value at the answer index
## Describe the bug The built in huggingface squad_v2 dataset that you can access via datasets.load_dataset contains mis-alignment between the answers['text'] and the characters in the context at the location specified by answers['answer_start']. For example: id = '56d1f453e7d4791d009025bd' answers = {'text': ['Pure Land'], 'answer_start': [146]} However the actual text in context at location 146 is 'ure Land,' Which is an off-by-one error from the correct answer. ## Steps to reproduce the bug ```python import datasets def check_context_answer_alignment(example): for a_idx in range(len(example['answers']['text'])): # check raw dataset for answer consistency between context and answer answer_text = example['answers']['text'][a_idx] a_st_idx = example['answers']['answer_start'][a_idx] a_end_idx = a_st_idx + len(example['answers']['text'][a_idx]) answer_text_from_context = example['context'][a_st_idx:a_end_idx] if answer_text != answer_text_from_context: #print(example['id']) return False return True dataset = datasets.load_dataset('squad_v2', split='train', keep_in_memory=True) start_len = len(dataset) dataset = dataset.filter(check_context_answer_alignment, num_proc=1, keep_in_memory=True) end_len = len(dataset) print('{} instances contain mis-alignment between the answer text and answer index.'.format(start_len - end_len)) ``` ## Expected results This code should result in 0 rows being filtered out from the dataset. ## Actual results This filter command results in 258 rows being flagged as containing a discrepancy between the text contained within answers['text'] and the text in example['context'] at the answers['answer_start'] location. This code will reproduce the problem and produce the following count: "258 instances contain mis-alignment between the answer text and answer index." 
## Environment info Steps to rebuilt the Conda environment: ``` # create a virtual environment to stuff all these packages into conda create -n round8 python=3.8 -y # activate the virtual environment conda activate round8 # install pytorch (best done through conda to handle cuda dependencies) conda install pytorch torchvision torchtext cudatoolkit=11.1 -c pytorch-lts -c nvidia pip install jsonpickle transformers datasets matplotlib ``` OS: Ubuntu 20.04 Python 3.8 Result of `conda env export`: ``` name: round8 channels: - pytorch-lts - nvidia - defaults dependencies: - _libgcc_mutex=0.1=main - _openmp_mutex=4.5=1_gnu - blas=1.0=mkl - brotlipy=0.7.0=py38h27cfd23_1003 - bzip2=1.0.8=h7b6447c_0 - ca-certificates=2021.5.25=h06a4308_1 - certifi=2021.5.30=py38h06a4308_0 - cffi=1.14.5=py38h261ae71_0 - chardet=4.0.0=py38h06a4308_1003 - cryptography=3.4.7=py38hd23ed53_0 - cudatoolkit=11.1.74=h6bb024c_0 - ffmpeg=4.2.2=h20bf706_0 - freetype=2.10.4=h5ab3b9f_0 - gmp=6.2.1=h2531618_2 - gnutls=3.6.15=he1e5248_0 - idna=2.10=pyhd3eb1b0_0 - intel-openmp=2021.2.0=h06a4308_610 - jpeg=9b=h024ee3a_2 - lame=3.100=h7b6447c_0 - lcms2=2.12=h3be6417_0 - ld_impl_linux-64=2.35.1=h7274673_9 - libffi=3.3=he6710b0_2 - libgcc-ng=9.3.0=h5101ec6_17 - libgomp=9.3.0=h5101ec6_17 - libidn2=2.3.1=h27cfd23_0 - libopus=1.3.1=h7b6447c_0 - libpng=1.6.37=hbc83047_0 - libstdcxx-ng=9.3.0=hd4cf53a_17 - libtasn1=4.16.0=h27cfd23_0 - libtiff=4.2.0=h85742a9_0 - libunistring=0.9.10=h27cfd23_0 - libuv=1.40.0=h7b6447c_0 - libvpx=1.7.0=h439df22_0 - libwebp-base=1.2.0=h27cfd23_0 - lz4-c=1.9.3=h2531618_0 - mkl=2021.2.0=h06a4308_296 - mkl-service=2.3.0=py38h27cfd23_1 - mkl_fft=1.3.0=py38h42c9631_2 - mkl_random=1.2.1=py38ha9443f7_2 - ncurses=6.2=he6710b0_1 - nettle=3.7.3=hbbd107a_1 - ninja=1.10.2=hff7bd54_1 - numpy=1.20.2=py38h2d18471_0 - numpy-base=1.20.2=py38hfae3a4d_0 - olefile=0.46=py_0 - openh264=2.1.0=hd408876_0 - openssl=1.1.1k=h27cfd23_0 - pillow=8.2.0=py38he98fc37_0 - pip=21.1.2=py38h06a4308_0 - pycparser=2.20=py_2 - pyopenssl=20.0.1=pyhd3eb1b0_1 - pysocks=1.7.1=py38h06a4308_0 - python=3.8.10=h12debd9_8 - pytorch=1.8.1=py3.8_cuda11.1_cudnn8.0.5_0 - readline=8.1=h27cfd23_0 - requests=2.25.1=pyhd3eb1b0_0 - setuptools=52.0.0=py38h06a4308_0 - six=1.16.0=pyhd3eb1b0_0 - sqlite=3.35.4=hdfb4753_0 - tk=8.6.10=hbc83047_0 - torchtext=0.9.1=py38 - torchvision=0.9.1=py38_cu111 - typing_extensions=3.7.4.3=pyha847dfd_0 - urllib3=1.26.4=pyhd3eb1b0_0 - wheel=0.36.2=pyhd3eb1b0_0 - x264=1!157.20191217=h7b6447c_0 - xz=5.2.5=h7b6447c_0 - zlib=1.2.11=h7b6447c_3 - zstd=1.4.9=haebb681_0 - pip: - click==8.0.1 - cycler==0.10.0 - datasets==1.8.0 - dill==0.3.4 - filelock==3.0.12 - fsspec==2021.6.0 - huggingface-hub==0.0.8 - joblib==1.0.1 - jsonpickle==2.0.0 - kiwisolver==1.3.1 - matplotlib==3.4.2 - multiprocess==0.70.12.2 - packaging==20.9 - pandas==1.2.4 - pyarrow==3.0.0 - pyparsing==2.4.7 - python-dateutil==2.8.1 - pytz==2021.1 - regex==2021.4.4 - sacremoses==0.0.45 - tokenizers==0.10.3 - tqdm==4.49.0 - transformers==4.6.1 - xxhash==2.0.2 prefix: /home/mmajurski/anaconda3/envs/round8 ```
closed
https://github.com/huggingface/datasets/issues/2585
2021-07-04T15:39:49
2021-07-07T13:18:51
2021-07-07T13:18:51
{ "login": "mmajurski", "id": 9354454, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
936,049,736
2,584
wi_locness: reference latest leaderboard on codalab
The dataset's author asked me to put this codalab link into the dataset's README.
closed
https://github.com/huggingface/datasets/pull/2584
2021-07-02T20:26:22
2021-07-05T09:06:14
2021-07-05T09:06:14
{ "login": "aseifert", "id": 4944799, "type": "User" }
[]
true
[]
936,034,976
2,583
Error iteration over IterableDataset using Torch DataLoader
## Describe the bug I have an IterableDataset (created using streaming=True) and I am trying to create batches using Torch DataLoader class by passing this IterableDataset to it. This throws error which is pasted below. I can do the same by using Torch IterableDataset. One thing I noticed is that in the former case when I look at the dataloader.sampler class I get torch.utils.data.sampler.SequentialSampler while the latter one gives torch.utils.data.dataloader._InfiniteConstantSampler. I am not sure if this is how it is meant to be used, but that's what seemed reasonable to me. ## Steps to reproduce the bug 1. Does not work. ```python >>> from datasets import load_dataset >>> dataset = load_dataset('oscar', "unshuffled_deduplicated_en", split='train', streaming=True) >>> dataloader = torch.utils.data.DataLoader(dataset, batch_size=4) >>> dataloader.sampler <torch.utils.data.sampler.SequentialSampler object at 0x7f245a510208> >>> for batch in dataloader: ... print(batch) ``` 2. Works. ```python import torch from torch.utils.data import Dataset, IterableDataset, DataLoader class CustomIterableDataset(IterableDataset): 'Characterizes a dataset for PyTorch' def __init__(self, data): 'Initialization' self.data = data def __iter__(self): return iter(self.data) data = list(range(12)) dataset = CustomIterableDataset(data) dataloader = DataLoader(dataset, batch_size=4) print("dataloader: ", dataloader.sampler) for batch in dataloader: print(batch) ``` ## Expected results To get batches of data with the batch size as 4. Output from the latter one (2) though Datasource is different here so actual data is different. dataloader: <torch.utils.data.dataloader._InfiniteConstantSampler object at 0x7f1cc29e2c50> tensor([0, 1, 2, 3]) tensor([4, 5, 6, 7]) tensor([ 8, 9, 10, 11]) ## Actual results <torch.utils.data.sampler.SequentialSampler object at 0x7f245a510208> ... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/data/leshekha/lib/HFDatasets/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 435, in __next__ data = self._next_data() File "/data/leshekha/lib/HFDatasets/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 474, in _next_data index = self._next_index() # may raise StopIteration File "/data/leshekha/lib/HFDatasets/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 427, in _next_index return next(self._sampler_iter) # may raise StopIteration File "/data/leshekha/lib/HFDatasets/lib/python3.6/site-packages/torch/utils/data/sampler.py", line 227, in __iter__ for idx in self.sampler: File "/data/leshekha/lib/HFDatasets/lib/python3.6/site-packages/torch/utils/data/sampler.py", line 67, in __iter__ return iter(range(len(self.data_source))) TypeError: object of type 'IterableDataset' has no len() ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: '1.8.1.dev0' - Platform: Linux - Python version: Python 3.6.8 - PyArrow version: '3.0.0'
closed
https://github.com/huggingface/datasets/issues/2583
2021-07-02T19:55:58
2021-07-20T09:04:45
2021-07-05T23:48:23
{ "login": "LeenaShekhar", "id": 12227436, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
935,859,104
2,582
Add skip and take
As discussed in https://github.com/huggingface/datasets/pull/2375#discussion_r657084544 I added the `IterableDataset.skip` and `IterableDataset.take` methods that allow basic splitting of iterable datasets. You can create a new dataset with the first `n` examples using `IterableDataset.take()`, or you can get a dataset with the rest of the examples by skipping the first `n` examples with `IterableDataset.skip()`. One implementation detail: using `take` (or `skip`) prevents future dataset shuffling from shuffling the dataset shards, otherwise the taken examples could come from other shards. In this case it only uses the shuffle buffer. I would have loved to allow the shards of the taken examples to be shuffled anyway, but since we don't know the length of each shard in advance, we don't know which shards to take or skip. I think this is ok though, since users can shuffle before doing take or skip. I mentioned this in the documentation. cc @vblagoje @lewtun
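A short sketch of the intended usage, reusing the streaming OSCAR example that appears elsewhere in this repo (the split sizes are arbitrary):

```python
from datasets import load_dataset

oscar = load_dataset("oscar", "unshuffled_deduplicated_en", split="train", streaming=True)

# Shuffle first so the shards themselves can still be shuffled, then split.
shuffled = oscar.shuffle(seed=42, buffer_size=10_000)
validation_head = shuffled.take(1_000)  # first 1000 examples
train_rest = shuffled.skip(1_000)       # everything after the first 1000
```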
closed
https://github.com/huggingface/datasets/pull/2582
2021-07-02T15:10:19
2021-07-05T16:06:40
2021-07-05T16:06:39
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
935,783,588
2,581
Faster search_batch for ElasticsearchIndex due to threading
Hey, I think it makes sense to run search_batch threaded, so ES can perform the searches in parallel. Cheers!
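A minimal sketch of the idea (not the actual implementation in this PR), assuming an index object that exposes a per-query `search(query, k)` method:

```python
from concurrent.futures import ThreadPoolExecutor

def search_batch_threaded(index, queries, k=10):
    # Fire the per-query searches from a thread pool so Elasticsearch can serve
    # them concurrently instead of one after the other.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda query: index.search(query, k), queries))
```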
closed
https://github.com/huggingface/datasets/pull/2581
2021-07-02T13:42:07
2021-07-12T14:13:46
2021-07-12T09:52:51
{ "login": "mwrzalik", "id": 1376337, "type": "User" }
[]
true
[]
935,767,421
2,580
Fix Counter import
Import from `collections` instead of `typing`.
closed
https://github.com/huggingface/datasets/pull/2580
2021-07-02T13:21:48
2021-07-02T14:37:47
2021-07-02T14:37:46
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
935,486,894
2,579
Fix BibTeX entry
Add missing contributor to BibTeX entry. cc: @abhishekkrthakur @thomwolf
closed
https://github.com/huggingface/datasets/pull/2579
2021-07-02T07:10:40
2021-07-02T07:33:44
2021-07-02T07:33:44
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
935,187,497
2,578
Support Zstandard compressed files
Close #2572. cc: @thomwolf
closed
https://github.com/huggingface/datasets/pull/2578
2021-07-01T20:22:34
2021-08-11T14:46:24
2021-07-05T10:50:27
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
934,986,761
2,576
Add mC4
AllenAI is now hosting the processed C4 and mC4 datasets in this repo: https://huggingface.co/datasets/allenai/c4 Thanks a lot to them! In this PR I added the mC4 dataset builder. It supports 108 languages. You can load it with ```python from datasets import load_dataset en_mc4 = load_dataset("mc4", "en") fr_mc4 = load_dataset("mc4", "fr") en_and_fr_mc4 = load_dataset("mc4", languages=["en", "fr"]) ``` It also supports streaming, if you don't want to download hundreds of GB of data: ```python en_mc4 = load_dataset("mc4", "en", streaming=True) ``` Regarding the dataset_infos.json, I will add them once I have them. We can also work on the dataset card that will be at https://huggingface.co/datasets/mc4 For now I just added a link to https://huggingface.co/datasets/allenai/c4 as well as a few sections.
closed
https://github.com/huggingface/datasets/pull/2576
2021-07-01T15:51:25
2021-07-02T14:50:56
2021-07-02T14:50:55
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
934,876,496
2,575
Add C4
The old code for the C4 dataset was to generate the C4 with Apache Beam, as in Tensorflow Datasets. However AllenAI is now hosting the processed C4 dataset in this repo: https://huggingface.co/datasets/allenai/c4 Thanks a lot to them for their amazing work ! In this PR I changed the script to download and prepare the data directly from this repo. It has 4 variants: en, en.noblocklist, en.noclean, realnewslike You can load it with ```python from datasets import load_dataset c4 = load_dataset("c4", "en") ``` It also supports streaming, if you don't want to download hundreds of GB of data: ```python c4 = load_dataset("c4", "en", streaming=True) ``` Regarding the dataset_infos.json, I haven't added the infos for en.noclean. I will add them once I have them. Also we can work on the dataset card at https://huggingface.co/datasets/c4 For now I just added a link to https://huggingface.co/datasets/allenai/c4 as well as a few sections
closed
https://github.com/huggingface/datasets/pull/2575
2021-07-01T13:58:08
2021-07-02T14:50:23
2021-07-02T14:50:23
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
934,632,378
2,574
Add streaming in load a dataset docs
Mention dataset streaming on the "loading a dataset" page of the documentation
closed
https://github.com/huggingface/datasets/pull/2574
2021-07-01T09:32:53
2021-07-01T14:12:22
2021-07-01T14:12:21
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
934,584,745
2,573
Finding right block-size with JSON loading difficult for user
As reported by @thomwolf, while loading a JSON Lines file with the "json" loading script, he gets: > json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 383)
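For context, a hedged sketch of the kind of call where this bites, assuming the packaged "json" script exposes a pyarrow block-size knob (the exact parameter name is an assumption and depends on the `datasets` version; the file name is a placeholder):

```python
from datasets import load_dataset

# If a single JSON line is longer than the reader's block size, the read is split
# mid-object and decoding can fail with errors like the one quoted above, so the
# user has to guess a block size larger than the longest line in the file.
ds = load_dataset(
    "json",
    data_files="data.jsonl",
    block_size=10 << 20,  # assumed kwarg, for illustration only
)
```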
open
https://github.com/huggingface/datasets/issues/2573
2021-07-01T08:48:35
2021-07-01T19:10:53
null
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
934,573,767
2,572
Support Zstandard compressed files
Add support for Zstandard compressed files: https://facebook.github.io/zstd/
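For reference, a minimal sketch of reading such a file with the `zstandard` package, which support in `datasets` would likely build on (the file name is a placeholder):

```python
import io
import zstandard

with open("data.jsonl.zst", "rb") as compressed:
    # Wrap the compressed stream in a decompressing reader, then decode as text.
    reader = zstandard.ZstdDecompressor().stream_reader(compressed)
    text = io.TextIOWrapper(reader, encoding="utf-8")
    first_line = text.readline()
    print(first_line)
```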
closed
https://github.com/huggingface/datasets/issues/2572
2021-07-01T08:37:04
2023-01-03T15:34:01
2021-07-05T10:50:27
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
933,791,018
2,571
Filter expected warning log from transformers
Close #2569.
closed
https://github.com/huggingface/datasets/pull/2571
2021-06-30T14:48:19
2021-07-02T04:08:17
2021-07-02T04:08:17
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
933,402,521
2,570
Minor fix docs format for bertscore
Minor fix docs format for bertscore: - link to README - format of KWARGS_DESCRIPTION
closed
https://github.com/huggingface/datasets/pull/2570
2021-06-30T07:42:12
2021-06-30T15:31:01
2021-06-30T15:31:01
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
933,015,797
2,569
Weights of model checkpoint not initialized for RobertaModel for Bertscore
When applying bertscore out of the box, I get the warning: ```Some weights of the model checkpoint at roberta-large were not used when initializing RobertaModel: ['lm_head.decoder.weight', 'lm_head.bias', 'lm_head.dense.bias', 'lm_head.layer_norm.bias', 'lm_head.dense.weight', 'lm_head.layer_norm.weight']``` I am following the typical usage from https://huggingface.co/docs/datasets/loading_metrics.html ``` from datasets import load_metric metric = load_metric('bertscore') # Example of typical usage for batch in dataset: inputs, references = batch predictions = model(inputs) metric.add_batch(predictions=predictions, references=references) score = metric.compute(lang="en") #score = metric.compute(model_type="roberta-large") # gives the same error ``` I am concerned about this because my usage shouldn't require any further fine-tuning, and most people would expect to use BertScore out of the box. I realise the huggingface code is a wrapper around https://github.com/Tiiiger/bert_score, but I think this repo anyway relies on the model code and weights from the huggingface repo. ## Environment info - `datasets` version: 1.7.0 - Platform: Linux-5.4.0-1041-aws-x86_64-with-glibc2.27 - Python version: 3.9.5 - PyArrow version: 3.0.0
closed
https://github.com/huggingface/datasets/issues/2569
2021-06-29T18:55:23
2021-07-01T07:08:59
2021-06-30T07:35:49
{ "login": "suzyahyah", "id": 2980993, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
932,934,795
2,568
Add interleave_datasets for map-style datasets
### Add interleave_datasets for map-style datasets Add support for map-style datasets (i.e. `Dataset` objects) in `interleave_datasets`. Previously it only supported iterable datasets (i.e. `IterableDataset` objects). ### Implementation details It works by concatenating the datasets and then re-ordering the indices to make the new dataset. ### TODO - [x] tests - [x] docs Close #2563
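A small usage sketch of what this enables for map-style datasets (toy data, for illustration):

```python
from datasets import Dataset, interleave_datasets

d1 = Dataset.from_dict({"text": ["a", "b", "c"]})
d2 = Dataset.from_dict({"text": ["x", "y", "z"]})

# Alternate between the datasets in order...
alternating = interleave_datasets([d1, d2])
# ...or sample from them with user-specified probabilities.
sampled = interleave_datasets([d1, d2], probabilities=[0.7, 0.3], seed=42)
print(alternating["text"], sampled["text"])
```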
closed
https://github.com/huggingface/datasets/pull/2568
2021-06-29T17:19:24
2021-07-01T09:33:34
2021-07-01T09:33:33
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
932,933,536
2,567
Add ASR task and new languages to resources
This PR adds a new `automatic-speech-recognition` task to the list of supported tasks in `tasks.json` and also includes a few new languages missing from `common_voice`. Note: I used the [Papers with Code list](https://www.paperswithcode.com/area/speech/speech-recognition) as inspiration for the ASR subtasks
closed
https://github.com/huggingface/datasets/pull/2567
2021-06-29T17:18:01
2021-07-01T09:42:23
2021-07-01T09:42:09
{ "login": "lewtun", "id": 26859204, "type": "User" }
[]
true
[]
932,804,725
2,566
fix Dataset.map when num_procs > num rows
closes #2470 ## Testing notes To run updated tests: ```sh pytest tests/test_arrow_dataset.py -k "BaseDatasetTest and test_map_multiprocessing" -s ``` With Python code (to view warning): ```python from datasets import Dataset dataset = Dataset.from_dict({"x": ["sample"]}) print(len(dataset)) dataset.map(lambda x: x, num_proc=10) ```
closed
https://github.com/huggingface/datasets/pull/2566
2021-06-29T15:07:07
2021-07-01T09:11:13
2021-07-01T09:11:13
{ "login": "connor-mccarthy", "id": 55268212, "type": "User" }
[]
true
[]
932,445,439
2,565
Inject templates for ASR datasets
This PR adds ASR templates for 5 of the most common speech datasets on the Hub, where "common" is defined by the number of models trained on them. I also fixed a bunch of the tags in the READMEs 😎
closed
https://github.com/huggingface/datasets/pull/2565
2021-06-29T10:02:01
2021-07-05T14:26:26
2021-07-05T14:26:26
{ "login": "lewtun", "id": 26859204, "type": "User" }
[]
true
[]
932,389,639
2,564
concatenate_datasets for iterable datasets
Currently `concatenate_datasets` only works for map-style `Dataset`. It would be nice to have it work for `IterableDataset` objects as well. It would simply chain the iterables of the iterable datasets.
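A conceptual sketch of the requested behavior, not an actual `datasets` API: concatenating iterable datasets amounts to chaining their example iterators.

```python
from itertools import chain

def concatenate_iterables(*iterable_datasets):
    # Yield every example of the first dataset, then every example of the next, etc.
    return chain.from_iterable(iterable_datasets)

# Example with plain iterables standing in for IterableDataset objects:
combined = list(concatenate_iterables(range(3), range(10, 13)))
print(combined)  # [0, 1, 2, 10, 11, 12]
```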
closed
https://github.com/huggingface/datasets/issues/2564
2021-06-29T08:59:41
2022-06-28T21:15:04
2022-06-28T21:15:04
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
false
[]
932,387,639
2,563
interleave_datasets for map-style datasets
Currently the `interleave_datasets` function only works for `IterableDataset`. Let's make it work for map-style `Dataset` objects as well. It would work the same way: either alternate between the datasets in order, or sample from them randomly given probabilities specified by the user.
closed
https://github.com/huggingface/datasets/issues/2563
2021-06-29T08:57:24
2021-07-01T09:33:33
2021-07-01T09:33:33
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
false
[]
932,333,436
2,562
Minor fix in loading metrics docs
Make some minor fixes in "Loading metrics" docs.
closed
https://github.com/huggingface/datasets/pull/2562
2021-06-29T07:55:11
2021-06-29T17:21:22
2021-06-29T17:21:22
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
932,321,725
2,561
Existing cache for local dataset builder file updates is ignored with `ignore_verifications=True`
## Describe the bug If I have a local file defining a dataset builder class and I load it using `load_dataset`, the existing cache is ignored whenever the file is updated, even with `ignore_verifications=True`. This slows down debugging and cache generation for very large datasets. ## Steps to reproduce the bug - Create a local dataset builder class - Load the local builder class file using `load_dataset` and let the cache build - Update the file's content - The cache gets rebuilt. ## Expected results With `ignore_verifications=True`, `load_dataset` should pick up the existing cache. ## Actual results A new cache is created. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.8.0 - Platform: Linux-5.4.0-52-generic-x86_64-with-debian-bullseye-sid - Python version: 3.7.7 - PyArrow version: 3.0.0
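A minimal reproduction sketch, assuming a local script `my_dataset.py` that defines the builder (the file name is hypothetical):

```python
from datasets import load_dataset

# First call: builds and caches the dataset.
ds = load_dataset("./my_dataset.py", ignore_verifications=True)

# After editing my_dataset.py (even a comment-only change), the same call
# regenerates the cache instead of reusing it, despite ignore_verifications=True.
ds = load_dataset("./my_dataset.py", ignore_verifications=True)
```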
closed
https://github.com/huggingface/datasets/issues/2561
2021-06-29T07:43:03
2022-08-04T11:58:36
2022-08-04T11:58:36
{ "login": "apsdehal", "id": 3616806, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]