id
int64
599M
3.26B
number
int64
1
7.7k
title
stringlengths
1
290
body
stringlengths
0
228k
state
stringclasses
2 values
html_url
stringlengths
46
51
created_at
timestamp[s]date
2020-04-14 10:18:02
2025-07-23 08:04:53
updated_at
timestamp[s]date
2020-04-27 16:04:17
2025-07-23 18:53:44
closed_at
timestamp[s]date
2020-04-14 12:01:40
2025-07-23 16:44:42
user
dict
labels
listlengths
0
4
is_pull_request
bool
2 classes
comments
listlengths
0
0
1,007,340,089
2,970
Magnet’s
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
closed
https://github.com/huggingface/datasets/issues/2970
2021-09-26T09:50:29
2021-09-26T10:38:59
2021-09-26T10:38:59
{ "login": "rcacho172", "id": 90449239, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
1,007,217,867
2,969
medical-dialog error
## Describe the bug A clear and concise description of what the bug is. When I attempt to download the huggingface datatset medical_dialog it errors out midway through ## Steps to reproduce the bug ```python raw_datasets = load_dataset("medical_dialog", "en", split="train", download_mode="force_redownload", data_dir="./Medical-Dialogue-Dataset-English") ``` ## Expected results A clear and concise description of the expected results. No error ## Actual results ``` 3 frames /usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py in verify_splits(expected_splits, recorded_splits) 72 ] 73 if len(bad_splits) > 0: ---> 74 raise NonMatchingSplitsSizesError(str(bad_splits)) 75 logger.info("All the splits matched successfully.") 76 NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='medical_dialog'), 'recorded': SplitInfo(name='train', num_bytes=295097913, num_examples=229674, dataset_name='medical_dialog')}] ``` Specify the actual results or traceback. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.21.1 - Platform: colab - Python version: colab 3.7 - PyArrow version: N/A
closed
https://github.com/huggingface/datasets/issues/2969
2021-09-25T23:08:44
2024-01-08T09:55:12
2021-10-11T07:46:42
{ "login": "smeyerhot", "id": 43877130, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,007,209,488
2,968
`DatasetDict` cannot be exported to parquet if the splits have different features
## Describe the bug I'm trying to use parquet as a means of serialization for both `Dataset` and `DatasetDict` objects. Using `to_parquet` alongside `from_parquet` or `load_dataset` for a `Dataset` works perfectly. For `DatasetDict`, I use `to_parquet` on each split to save the parquet files in individual folders representing individual splits. This works too, as long as the splits have identical features. If a split has different features to neighboring splits, then loading the dataset will fail: a single schema is used to load both splits, resulting in a failure to load the second parquet file. ## Steps to reproduce the bug The following works as expected: ```python from datasets import load_dataset ds = load_dataset("lhoestq/custom_squad") ds['train'].to_parquet("./ds/train/split.parquet") ds['validation'].to_parquet("./ds/validation/split.parquet") brand_new_dataset = load_dataset("ds") ``` Modifying a single split to add a new feature ends up in a crash: ```python from datasets import load_dataset ds = load_dataset("lhoestq/custom_squad") def identical_answers(e): e['identical_answers'] = len(set(e['answers']['text'])) == 1 return e ds['validation'] = ds['validation'].map(identical_answers) ds['train'].to_parquet("./ds/train/split.parquet") ds['validation'].to_parquet("./ds/validation/split.parquet") brand_new_dataset = load_dataset("ds") ``` ``` File "/home/lysandre/.config/JetBrains/PyCharm2021.2/scratches/datasets/upload_dataset.py", line 26, in <module> brand_new_dataset = load_dataset("ds") File "/home/lysandre/Workspaces/Python/datasets/src/datasets/load.py", line 1151, in load_dataset builder_instance.download_and_prepare( File "/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py", line 642, in download_and_prepare self._download_and_prepare( File "/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py", line 732, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py", line 1194, in _prepare_split writer.write_table(table) File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_writer.py", line 428, in write_table pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema) File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_writer.py", line 428, in <listcomp> pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema) File "pyarrow/table.pxi", line 1257, in pyarrow.lib.Table.__getitem__ File "pyarrow/table.pxi", line 1833, in pyarrow.lib.Table.column File "pyarrow/table.pxi", line 1808, in pyarrow.lib.Table._ensure_integer_index KeyError: 'Field "identical_answers" does not exist in table schema' ``` It does work, however, to use the `save_to_disk` and `load_from_disk` methods: ```py from datasets import load_from_disk ds = load_dataset("lhoestq/custom_squad") def identical_answers(e): e['identical_answers'] = len(set(e['answers']['text'])) == 1 return e ds['validation'] = ds['validation'].map(identical_answers) ds.save_to_disk("local_path") brand_new_dataset = load_from_disk("local_path") ``` ## Expected results The saving works correctly - but the loading fails. I would expect either an error when saving or an error-less instantiation of the dataset through the parquet files. If it's helpful, I've traced a possible patch to the `write_table` method here: https://github.com/huggingface/datasets/blob/26ff41aa3a642e46489db9e95be1e9a8c4e64bea/src/datasets/arrow_writer.py#L424-L425 The writer is built only if the parquet writer is `None`, but I expect we would want to build a new writer as the table schema has changed. Furthermore, it relies on having the property `update_features` set to `True` in order to update the features: https://github.com/huggingface/datasets/blob/26ff41aa3a642e46489db9e95be1e9a8c4e64bea/src/datasets/arrow_writer.py#L254-L255 but the `ArrowWriter` is instantiated without that option in the `_prepare_split` method of the `ArrowBasedBuilder`: https://github.com/huggingface/datasets/blob/26ff41aa3a642e46489db9e95be1e9a8c4e64bea/src/datasets/builder.py#L1190 Updating these two parts to recreate a schema on each split results in an error that is, unfortunately, out of my expertise: ``` File "/home/lysandre/.config/JetBrains/PyCharm2021.2/scratches/datasets/upload_dataset.py", line 27, in <module> brand_new_dataset = load_dataset("ds") File "/home/lysandre/Workspaces/Python/datasets/src/datasets/load.py", line 1163, in load_dataset ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory) File "/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py", line 819, in as_dataset datasets = utils.map_nested( File "/home/lysandre/Workspaces/Python/datasets/src/datasets/utils/py_utils.py", line 207, in map_nested mapped = [ File "/home/lysandre/Workspaces/Python/datasets/src/datasets/utils/py_utils.py", line 208, in <listcomp> _single_map_nested((function, obj, types, None, True)) File "/home/lysandre/Workspaces/Python/datasets/src/datasets/utils/py_utils.py", line 143, in _single_map_nested return function(data_struct) File "/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py", line 850, in _build_single_dataset ds = self._as_dataset( File "/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py", line 920, in _as_dataset dataset_kwargs = ArrowReader(self._cache_dir, self.info).read( File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py", line 217, in read return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory) File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py", line 238, in read_files pa_table = self._read_files(files, in_memory=in_memory) File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py", line 173, in _read_files pa_table: Table = self._get_table_from_filename(f_dict, in_memory=in_memory) File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py", line 308, in _get_table_from_filename table = ArrowReader.read_table(filename, in_memory=in_memory) File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py", line 327, in read_table return table_cls.from_file(filename) File "/home/lysandre/Workspaces/Python/datasets/src/datasets/table.py", line 458, in from_file table = _memory_mapped_arrow_table_from_file(filename) File "/home/lysandre/Workspaces/Python/datasets/src/datasets/table.py", line 45, in _memory_mapped_arrow_table_from_file pa_table = opened_stream.read_all() File "pyarrow/ipc.pxi", line 563, in pyarrow.lib.RecordBatchReader.read_all File "pyarrow/error.pxi", line 114, in pyarrow.lib.check_status OSError: Header-type of flatbuffer-encoded Message is not RecordBatch. ``` ## Environment info - `datasets` version: 1.12.2.dev0 - Platform: Linux-5.14.7-arch1-1-x86_64-with-glibc2.33 - Python version: 3.9.7 - PyArrow version: 5.0.0
closed
https://github.com/huggingface/datasets/issues/2968
2021-09-25T22:18:39
2021-10-07T22:47:42
2021-10-07T22:47:26
{ "login": "LysandreJik", "id": 30755778, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,007,194,837
2,967
Adding vision-and-language datasets (e.g., VQA, VCR) to Datasets
**Is your feature request related to a problem? Please describe.** Would you like to add any vision-and-language datasets (e.g., VQA, VCR) to Huggingface Datasets? **Describe the solution you'd like** N/A **Describe alternatives you've considered** N/A **Additional context** This is Da Yin at UCLA. Recently, we have published an EMNLP 2021 paper about geo-diverse visual commonsense reasoning (https://arxiv.org/abs/2109.06860). We propose a new dataset called GD-VCR, a vision-and-language dataset to evaluate how well V&L models perform on scenarios involving geo-location-specific commonsense. We hope to have our V&L dataset incorporated into Huggingface to further promote our project, but I haven't seen much V&L datasets in the current package. Is it possible to add V&L datasets, and if so, how should we prepare for the loading? Thank you very much!
closed
https://github.com/huggingface/datasets/issues/2967
2021-09-25T20:58:15
2021-10-03T20:34:22
2021-10-03T20:34:22
{ "login": "WadeYin9712", "id": 42200725, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,007,142,233
2,966
Upload greek-legal-code dataset
null
closed
https://github.com/huggingface/datasets/pull/2966
2021-09-25T16:52:15
2021-10-13T13:37:30
2021-10-13T13:37:30
{ "login": "christospi", "id": 9130406, "type": "User" }
[]
true
[]
1,007,084,153
2,965
Invalid download URL of WMT17 `zh-en` data
## Describe the bug Partial data (wmt17 zh-en) cannot be downloaded due to an invalid URL. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('wmt17','zh-en') ``` ## Expected results ConnectionError: Couldn't reach ftp://cwmt-wmt:cwmt-wmt@datasets.nju.edu.cn/parallel/casia2015.zip
closed
https://github.com/huggingface/datasets/issues/2965
2021-09-25T13:17:32
2022-08-31T06:47:11
2022-08-31T06:47:10
{ "login": "Ririkoo", "id": 3339950, "type": "User" }
[ { "name": "bug", "color": "d73a4a" }, { "name": "dataset bug", "color": "2edb81" } ]
false
[]
1,006,605,904
2,964
Error when calculating Matthews Correlation Coefficient loaded with `load_metric`
## Describe the bug After loading the metric named "[Matthews Correlation Coefficient](https://huggingface.co/metrics/matthews_correlation)" from `🤗datasets`, the `.compute` method fails with the following exception `AttributeError: 'float' object has no attribute 'item'` (complete stack trace can be provided if required). ## Steps to reproduce the bug ```python import torch predictions = torch.ones((10,)) references = torch.zeros((10,)) from datasets import load_metric METRIC = load_metric("matthews_correlation") result = METRIC.compute(predictions=predictions, references=references) ``` ## Expected results We should expect a Python `dict` as it follows: ``` { "matthews_correlation": float() } ``` as defined in https://github.com/huggingface/datasets/blob/master/metrics/matthews_correlation/matthews_correlation.py, so the fix will imply removing `.item()`, since the value returned by the `scikit-learn` function is not a `torch.Tensor` but a `float`, which means that the `.item()` will fail. ## Actual results ``` Traceback (most recent call last): File "/home/alvaro.bartolome/XXX/xxx/cli.py", line 59, in main app() File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/typer/main.py", line 214, in __call__ return get_command(self)(*args, **kwargs) File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/click/core.py", line 1137, in __call__ return self.main(*args, **kwargs) File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/click/core.py", line 1062, in main rv = self.invoke(ctx) File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/click/core.py", line 1668, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/click/core.py", line 1404, in invoke return ctx.invoke(self.callback, **ctx.params) File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/click/core.py", line 763, in invoke return __callback(*args, **kwargs) File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/typer/main.py", line 500, in wrapper return callback(**use_params) # type: ignore File "/home/alvaro.bartolome/XXX/xxx/cli.py", line 43, in train metrics = trainer.evaluate() File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/transformers/trainer.py", line 2051, in evaluate output = eval_loop( File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/transformers/trainer.py", line 2292, in evaluation_loop metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels)) File "/home/alvaro.bartolome/XXX/xxx/metrics.py", line 20, in compute_metrics res = METRIC.compute(predictions=predictions, references=eval_preds.label_ids) File "/home/alvaro.bartolome/miniconda3/envs/lang/lib/python3.9/site-packages/datasets/metric.py", line 402, in compute output = self._compute(predictions=predictions, references=references, **kwargs) File "/home/alvaro.bartolome/.cache/huggingface/modules/datasets_modules/metrics/matthews_correlation/0275f1e9a4d318e3ea8cdd87547ee0d58d894966616052e3d18444ac8ddd2357/matthews_correlation.py", line 88, in _compute "matthews_correlation": matthews_corrcoef(references, predictions, sample_weight=sample_weight).item(), AttributeError: 'float' object has no attribute 'item' ``` ## Environment info - `datasets` version: 1.12.1 - Platform: Linux-4.15.0-1113-azure-x86_64-with-glibc2.23 - Python version: 3.9.7 - PyArrow version: 5.0.0
closed
https://github.com/huggingface/datasets/issues/2964
2021-09-24T15:55:21
2024-02-16T10:14:35
2021-09-25T08:06:07
{ "login": "alvarobartt", "id": 36760800, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,006,588,605
2,963
raise TypeError( TypeError: Provided `function` which is applied to all elements of table returns a variable of type <class 'list'>. Make sure provided `function` returns a variable of type `dict` to update the dataset or `None` if you are only interested in side effects.
## Describe the bug A clear and concise description of what the bug is. I am trying to use Dataset to load my file in order to use Bert embeddings model baut when I finished loading using dataset and I want to pass to the tokenizer using the function map; I get the following error : raise TypeError( TypeError: Provided `function` which is applied to all elements of table returns a variable of type <class 'list'>. Make sure provided `function` returns a variable of type `dict` to update the dataset or `None` if you are only interested in side effects. I was able to load my file using dataset before but since this morning , I keep getting this erreor. ## Steps to reproduce the bug ```python # Xtrain, ytrain, filename, len_labels = read_file_2(fic) # Xtrain, lge_size = get_flaubert_layer(Xtrain, path_to_model_lge) data_preprocessed = make_new_traindata(Xtrain) my_dict = {"verbatim": data_preprocessed[1], "label": ytrain} # lemme avec conjonction dataset = Dataset.from_dict(my_dict) ``` ## Expected results A clear and concise description of the expected results. ## Actual results Specify the actual results or traceback. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: - Python version: - PyArrow version:
closed
https://github.com/huggingface/datasets/issues/2963
2021-09-24T15:35:11
2021-09-24T15:38:24
2021-09-24T15:38:24
{ "login": "keloemma", "id": 40454218, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,006,557,666
2,962
Enable splits during streaming the dataset
## Describe the Problem I'd like to stream only a specific percentage or part of the dataset. I want to do splitting when I'm streaming dataset as well. ## Solution Enabling splits when `streaming = True` as well. `e.g. dataset = load_dataset('dataset', split='train[:100]', streaming = True)` ## Alternatives Below is the alternative of doing it. `dataset = load_dataset("dataset", split='train', streaming = True).take(100)`
open
https://github.com/huggingface/datasets/issues/2962
2021-09-24T15:01:29
2025-07-17T04:53:20
null
{ "login": "merveenoyan", "id": 53175384, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,006,453,781
2,961
Fix CI doc build
Pin `fsspec`. Before the issue: 'fsspec-2021.8.1', 's3fs-2021.8.1' Generating the issue: 'fsspec-2021.9.0', 's3fs-0.5.1'
closed
https://github.com/huggingface/datasets/pull/2961
2021-09-24T13:13:28
2021-09-24T13:18:07
2021-09-24T13:18:07
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,006,222,850
2,960
Support pandas 1.3 new `read_csv` parameters
Support two new arguments introduced in pandas v1.3.0: - `encoding_errors` - `on_bad_lines` `read_csv` reference: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html
closed
https://github.com/huggingface/datasets/pull/2960
2021-09-24T08:37:24
2021-09-24T11:22:31
2021-09-24T11:22:30
{ "login": "SBrandeis", "id": 33657802, "type": "User" }
[]
true
[]
1,005,547,632
2,959
Added computer vision tasks
Added various image processing/computer vision tasks.
closed
https://github.com/huggingface/datasets/pull/2959
2021-09-23T15:07:27
2022-03-01T17:41:51
2022-03-01T17:41:51
{ "login": "merveenoyan", "id": 53175384, "type": "User" }
[]
true
[]
1,005,144,601
2,958
Add security policy to the project
Add security policy to the project, as recommended by GitHub: https://docs.github.com/en/code-security/getting-started/adding-a-security-policy-to-your-repository Close #2953.
closed
https://github.com/huggingface/datasets/pull/2958
2021-09-23T08:20:55
2021-10-21T15:16:44
2021-10-21T15:16:43
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,004,868,337
2,957
MultiWOZ Dataset NonMatchingChecksumError
## Describe the bug The checksums for the downloaded MultiWOZ dataset and source MultiWOZ dataset aren't matching. ## Steps to reproduce the bug Both of the below dataset versions yield the checksum error: ```python from datasets import load_dataset dataset = load_dataset('multi_woz_v22', 'v2.2') dataset = load_dataset('multi_woz_v22', 'v2.2_active_only') ``` ## Expected results For the above calls to `load_dataset` to work. ## Actual results NonMatchingChecksumError. Traceback: > Traceback (most recent call last): File "/Users/brady/anaconda3/envs/elysium/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3441, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-15-4e91280e112e>", line 1, in <module> dataset = load_dataset('multi_woz_v22', 'v2.2') File "/Users/brady/anaconda3/envs/elysium/lib/python3.8/site-packages/datasets/load.py", line 847, in load_dataset builder_instance.download_and_prepare( File "/Users/brady/anaconda3/envs/elysium/lib/python3.8/site-packages/datasets/builder.py", line 615, in download_and_prepare self._download_and_prepare( File "/Users/brady/anaconda3/envs/elysium/lib/python3.8/site-packages/datasets/builder.py", line 675, in _download_and_prepare verify_checksums( File "/Users/brady/anaconda3/envs/elysium/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dialog_acts.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/test/dialogues_001.json'] ## Environment info - `datasets` version: 1.11.0 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.10 - PyArrow version: 5.0.0
closed
https://github.com/huggingface/datasets/issues/2957
2021-09-22T23:45:00
2022-03-15T16:07:02
2022-03-15T16:07:02
{ "login": "bradyneal", "id": 8754873, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,004,306,367
2,956
Cache problem in the `load_dataset` method for local compressed file(s)
## Describe the bug Cache problem in the `load_dataset` method: when modifying a compressed file in a local folder `load_dataset` doesn't detect the change and load the previous version. ## Steps to reproduce the bug To test it directly, I have prepared a [Google Colaboratory notebook](https://colab.research.google.com/drive/11Em_Amoc-aPGhSBIkSHU2AvEh24nVayy?usp=sharing) that shows this behavior. For this example, I have created a toy dataset at: https://huggingface.co/datasets/SaulLu/toy_struc_dataset This dataset is composed of two versions: - v1 on commit `a6beb46` which has a single example `{'id': 1, 'value': {'tag': 'a', 'value': 1}}` in file `train.jsonl.gz` - v2 on commit `e7935f4` (`main` head) which has a single example `{'attr': 1, 'id': 1, 'value': 'a'}` in file `train.jsonl.gz` With a terminal, we can start to get the v1 version of the dataset ```bash git lfs install git clone https://huggingface.co/datasets/SaulLu/toy_struc_dataset cd toy_struc_dataset git checkout a6beb46 ``` Then we can load it with python and look at the content: ```python from datasets import load_dataset path = "/content/toy_struc_dataset" dataset = load_dataset(path, data_files={"train": "*.jsonl.gz"}) print(dataset["train"][0]) ``` Output ``` {'id': 1, 'value': {'tag': 'a', 'value': 1}} # This is the example in v1 ``` With a terminal, we can now start to get the v1 version of the dataset ```bash git checkout main ``` Then we can load it with python and look at the content: ```python from datasets import load_dataset path = "/content/toy_struc_dataset" dataset = load_dataset(path, data_files={"train": "*.jsonl.gz"}) print(dataset["train"][0]) ``` Output ``` {'id': 1, 'value': {'tag': 'a', 'value': 1}} # This is the example in v1 (not v2) ``` ## Expected results The last output should have been ``` {"id":1, "value": "a", "attr": 1} # This is the example in v2 ``` ## Ideas As discussed offline with Quentin, if the cache hash was ever sensitive to changes in a compressed file we would probably not have the problem anymore. This situation leads me to suggest 2 other features: - to also have an `load_from_cache_file` argument in the "load_dataset" method - to reorganize the cache so that we can delete the caches related to a dataset (cf issue #ToBeFilledSoon) And thanks again for this great library :hugs: ## Environment info - `datasets` version: 1.12.1 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyArrow version: 3.0.0
open
https://github.com/huggingface/datasets/issues/2956
2021-09-22T13:34:32
2023-08-31T16:49:01
null
{ "login": "SaulLu", "id": 55560583, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,003,999,469
2,955
Update legacy Python image for CI tests in Linux
Instead of legacy, use next-generation convenience images, built from the ground up with CI, efficiency, and determinism in mind. Here are some of the highlights: - Faster spin-up time - In Docker terminology, these next-gen images will generally have fewer and smaller layers. Using these new images will lead to faster image downloads when a build starts, and a higher likelihood that the image is already cached on the host. - Improved reliability and stability - The existing legacy convenience images are rebuilt practically every day with potential changes from upstream that we cannot always test fast enough. This leads to frequent breaking changes, which is not the best environment for stable, deterministic builds. Next-gen images will only be rebuilt for security and critical-bugs, leading to more stable and deterministic images. More info: https://circleci.com/docs/2.0/circleci-images
closed
https://github.com/huggingface/datasets/pull/2955
2021-09-22T08:25:27
2021-09-24T10:36:05
2021-09-24T10:36:05
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,003,904,803
2,954
Run tests in parallel
Run CI tests in parallel to speed up the test suite. Speed up results: - Linux: from `7m 30s` to `5m 32s` - Windows: from `13m 52s` to `11m 10s`
closed
https://github.com/huggingface/datasets/pull/2954
2021-09-22T07:00:44
2021-09-28T06:55:51
2021-09-28T06:55:51
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,002,766,517
2,953
Trying to get in touch regarding a security issue
Hey there! I'd like to report a security issue but cannot find contact instructions on your repository. If not a hassle, might you kindly add a `SECURITY.md` file with an email, or another contact method? GitHub [recommends](https://docs.github.com/en/code-security/getting-started/adding-a-security-policy-to-your-repository) this best practice to ensure security issues are responsibly disclosed, and it would serve as a simple instruction for security researchers in the future. Thank you for your consideration, and I look forward to hearing from you! (cc @huntr-helper)
closed
https://github.com/huggingface/datasets/issues/2953
2021-09-21T15:58:13
2021-10-21T15:16:43
2021-10-21T15:16:43
{ "login": "JamieSlome", "id": 55323451, "type": "User" }
[]
false
[]
1,002,704,096
2,952
Fix missing conda deps
`aiohttp` was added as a dependency in #2662 but was missing for the conda build, which causes the 1.12.0 and 1.12.1 to fail. Fix #2932.
closed
https://github.com/huggingface/datasets/pull/2952
2021-09-21T15:23:01
2021-09-22T04:39:59
2021-09-21T15:30:44
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,001,267,888
2,951
Dummy labels no longer on by default in `to_tf_dataset`
After more experimentation, I think I have a way to do things that doesn't depend on adding `dummy_labels` - they were quite a hacky solution anyway!
closed
https://github.com/huggingface/datasets/pull/2951
2021-09-20T18:26:59
2021-09-21T14:00:57
2021-09-21T10:14:32
{ "login": "Rocketknight1", "id": 12866554, "type": "User" }
[]
true
[]
1,001,085,353
2,950
Fix fn kwargs in filter
#2836 broke the `fn_kwargs` parameter of `filter`, as mentioned in https://github.com/huggingface/datasets/issues/2927 I fixed that and added a test to make sure it doesn't happen again (for either map or filter) Fix #2927
closed
https://github.com/huggingface/datasets/pull/2950
2021-09-20T15:10:26
2021-09-20T16:22:59
2021-09-20T15:28:01
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,001,026,680
2,949
Introduce web and wiki config in triviaqa dataset
The TriviaQA paper suggests that the two subsets (Wikipedia and Web) should be treated differently. There are also different leaderboards for the two sets on CodaLab. For that reason, introduce additional builder configs in the trivia_qa dataset.
closed
https://github.com/huggingface/datasets/pull/2949
2021-09-20T14:17:23
2021-10-05T13:20:52
2021-10-01T15:39:29
{ "login": "shirte", "id": 1706443, "type": "User" }
[]
true
[]
1,000,844,077
2,948
Fix minor URL format in scitldr dataset
While investigating issue #2918, I found this minor format issues in the URLs (if runned in a Windows machine).
closed
https://github.com/huggingface/datasets/pull/2948
2021-09-20T11:11:32
2021-09-20T13:18:28
2021-09-20T13:18:28
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,000,798,338
2,947
Don't use old, incompatible cache for the new `filter`
#2836 changed `Dataset.filter` and the resulting data that are stored in the cache are different and incompatible with the ones of the previous `filter` implementation. However the caching mechanism wasn't able to differentiate between the old and the new implementation of filter (only the method name was taken into account). This is an issue because anyone that update `datasets` and re-runs some code that uses `filter` would see an error, because the cache would try to load an incompatible `filter` result. To fix this I added the notion of versioning for dataset transform in the caching mechanism, and bumped the version of the `filter` implementation to 2.0.0 This way the new `filter` outputs are now considered different from the old ones from the caching point of view. This should fix #2943 cc @anton-l
closed
https://github.com/huggingface/datasets/pull/2947
2021-09-20T10:18:59
2021-09-20T16:25:09
2021-09-20T13:43:02
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,000,754,824
2,946
Update meteor score from nltk update
It looks like there were issues in NLTK on the way the METEOR score was computed. A fix was added in NLTK at https://github.com/nltk/nltk/pull/2763, and therefore the scoring function no longer returns the same values. I updated the score of the example in the docs
closed
https://github.com/huggingface/datasets/pull/2946
2021-09-20T09:28:46
2021-09-20T09:35:59
2021-09-20T09:35:59
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,000,624,883
2,945
Protect master branch
After accidental merge commit (91c55355b634d0dc73350a7ddee1a6776dbbdd69) into `datasets` master branch, all commits present in the feature branch were permanently added to `datasets` master branch history, as e.g.: - 00cc036fea7c7745cfe722360036ed306796a3f2 - 13ae8c98602bbad8197de3b9b425f4c78f582af1 - ... I propose to protect our master branch, so that we avoid we can accidentally make this kind of mistakes in the future: - [x] For Pull Requests using GitHub, allow only squash merging, so that only a single commit per Pull Request is merged into the master branch - Currently, simple merge commits are already disabled - I propose to disable rebase merging as well - ~~Protect the master branch from direct pushes (to avoid accidentally pushing of merge commits)~~ - ~~This protection would reject direct pushes to master branch~~ - ~~If so, for each release (when we need to commit directly to the master branch), we should previously disable the protection and re-enable it again after the release~~ - [x] Protect the master branch only from direct pushing of **merge commits** - GitHub offers the possibility to protect the master branch only from merge commits (which are the ones that introduce all the commits from the feature branch into the master branch). - No need to disable/re-enable this protection on each release This purpose of this Issue is to open a discussion about this problem and to agree in a solution.
closed
https://github.com/huggingface/datasets/issues/2945
2021-09-20T06:47:01
2021-09-20T12:01:27
2021-09-20T12:00:16
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,000,544,370
2,944
Add `remove_columns` to `IterableDataset `
**Is your feature request related to a problem? Please describe.** A clear and concise description of what the problem is. ```python from datasets import load_dataset dataset = load_dataset("c4", 'realnewslike', streaming =True, split='train') dataset = dataset.remove_columns('url') ``` ``` AttributeError: 'IterableDataset' object has no attribute 'remove_columns' ``` **Describe the solution you'd like** It would be nice to have `.remove_columns()` to match the `Datasets` api. **Describe alternatives you've considered** This can be done with a single call to `.map()`, I can try to help add this. 🤗
closed
https://github.com/huggingface/datasets/issues/2944
2021-09-20T04:01:00
2021-10-08T15:31:53
2021-10-08T15:31:53
{ "login": "changjonathanc", "id": 31893406, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "good first issue", "color": "7057ff" } ]
false
[]
1,000,355,115
2,943
Backwards compatibility broken for cached datasets that use `.filter()`
## Describe the bug After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with `ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}` Related feature: https://github.com/huggingface/datasets/pull/2836 :question: This is probably a `wontfix` bug, since it can be solved by simply cleaning the related cache dirs, but the workaround could be useful for someone googling the error :) ## Workaround Remove the cache for the given dataset, e.g. `rm -rf ~/.cache/huggingface/datasets/librispeech_asr`. ## Steps to reproduce the bug 1. Delete `~/.cache/huggingface/datasets/librispeech_asr` if it exists. 2. `pip install datasets==1.11.0` and run the following snippet: ```python from datasets import load_dataset ids = ["1272-141231-0000"] ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") ds = ds.filter(lambda x: x["id"] in ids) ``` 3. `pip install datasets==1.12.1` and re-run the code again ## Expected results Same result as with the previous `datasets` version. ## Actual results ```bash Reusing dataset librispeech_asr (./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1) Loading cached processed dataset at ./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1/cache-cd1c29844fdbc87a.arrow Traceback (most recent call last): File "./repos/transformers/src/transformers/models/wav2vec2/try_dataset.py", line 5, in <module> ds = ds.filter(lambda x: x["id"] in ids) File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper out = func(self, *args, **kwargs) File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2169, in filter indices = self.map( File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1686, in map return self._map_single( File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper out = func(self, *args, **kwargs) File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1896, in _map_single return Dataset.from_file(cache_file_name, info=info, split=self.split) File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 343, in from_file return cls( File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 282, in __init__ self.info.features = self.info.features.reorder_fields_as(inferred_features) File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1151, in reorder_fields_as return Features(recursive_reorder(self, other)) File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1140, in recursive_reorder raise ValueError(f"Keys mismatch: between {source} and {target}" + stack_position) ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)} Process finished with exit code 1 ``` ## Environment info - `datasets` version: 1.12.1 - Platform: Linux-5.11.0-34-generic-x86_64-with-glibc2.17 - Python version: 3.8.10 - PyArrow version: 5.0.0
closed
https://github.com/huggingface/datasets/issues/2943
2021-09-19T16:16:37
2021-09-20T16:25:43
2021-09-20T16:25:42
{ "login": "anton-l", "id": 26864830, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,000,309,765
2,942
Add SEDE dataset
This PR adds the SEDE dataset for the task of realistic Text-to-SQL, following the instructions of how to add a database and a dataset card. Please see our paper for more details: https://arxiv.org/abs/2106.05006
closed
https://github.com/huggingface/datasets/pull/2942
2021-09-19T13:11:24
2021-09-24T10:39:55
2021-09-24T10:39:54
{ "login": "Hazoom", "id": 13545154, "type": "User" }
[]
true
[]
1,000,000,711
2,941
OSCAR unshuffled_original_ko: NonMatchingSplitsSizesError
## Describe the bug Cannot download OSCAR `unshuffled_original_ko` due to `NonMatchingSplitsSizesError`. ## Steps to reproduce the bug ```python >>> dataset = datasets.load_dataset('oscar', 'unshuffled_original_ko') NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=25292102197, num_examples=7345075, dataset_name='oscar'), 'recorded': SplitInfo(name='train', num_bytes=25284578514, num_examples=7344907, dataset_name='oscar')}] ``` ## Expected results Loading is successful. ## Actual results Loading throws above error. ## Environment info - `datasets` version: 1.12.1 - Platform: Linux-5.4.0-81-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 5.0.0
open
https://github.com/huggingface/datasets/issues/2941
2021-09-18T10:39:13
2022-01-19T14:10:07
null
{ "login": "ayaka14732", "id": 68557794, "type": "User" }
[ { "name": "bug", "color": "d73a4a" }, { "name": "dataset bug", "color": "2edb81" } ]
false
[]
999,680,796
2,940
add swedish_medical_ner dataset
Adding the Swedish Medical NER dataset, listed in "Biomedical Datasets - BigScience Workshop 2021"
closed
https://github.com/huggingface/datasets/pull/2940
2021-09-17T20:03:05
2021-10-05T12:13:34
2021-10-05T12:13:33
{ "login": "bwang482", "id": 6764450, "type": "User" }
[]
true
[]
999,639,630
2,939
MENYO-20k repo has moved, updating URL
Dataset repo moved to https://github.com/uds-lsv/menyo-20k_MT, now editing URL to match. https://github.com/uds-lsv/menyo-20k_MT/blob/master/data/train.tsv is the file we're looking for
closed
https://github.com/huggingface/datasets/pull/2939
2021-09-17T19:01:54
2021-09-21T15:31:37
2021-09-21T15:31:36
{ "login": "cdleong", "id": 4109253, "type": "User" }
[]
true
[]
999,552,263
2,938
Take namespace into account in caching
Loading a dataset "username/dataset_name" hosted by a user on the hub used to cache the dataset only taking into account the dataset name, and ignorign the username. Because of this, if a user later loads "dataset_name" without specifying the username, it would reload the dataset from the cache instead of failing. I changed the dataset cache and module cache mechanism to include the username in the name of the cache directory that is used: <s> `~/.cache/huggingface/datasets/username/dataset_name` for the data `~/.cache/huggingface/modules/datasets_modules/datasets/username/dataset_name` for the python files </s> EDIT: actually using three underscores: `~/.cache/huggingface/datasets/username___dataset_name` for the data `~/.cache/huggingface/modules/datasets_modules/datasets/username___dataset_name` for the python files This PR should fix the issue https://github.com/huggingface/datasets/issues/2842 cc @stas00
closed
https://github.com/huggingface/datasets/pull/2938
2021-09-17T16:57:33
2021-12-17T10:52:18
2021-09-29T13:01:31
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
999,548,277
2,937
load_dataset using default cache on Windows causes PermissionError: [WinError 5] Access is denied
## Describe the bug Standard process to download and load the wiki_bio dataset causes PermissionError in Windows 10 and 11. ## Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset('wiki_bio') ``` ## Expected results It is expected that the dataset downloads without any errors. ## Actual results PermissionError see trace below: ``` Using custom data configuration default Downloading and preparing dataset wiki_bio/default (download: 318.53 MiB, generated: 736.94 MiB, post-processed: Unknown size, total: 1.03 GiB) to C:\Users\username\.cache\huggingface\datasets\wiki_bio\default\1.1.0\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\load.py", line 1112, in load_dataset builder_instance.download_and_prepare( File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\builder.py", line 644, in download_and_prepare self._save_info() File "C:\Users\username\.conda\envs\hf\lib\contextlib.py", line 120, in __exit__ next(self.gen) File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\builder.py", line 598, in incomplete_dir os.rename(tmp_dir, dirname) PermissionError: [WinError 5] Access is denied: 'C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9.incomplete' -> 'C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9' ``` By commenting out the os.rename() [L604](https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L604) and the shutil.rmtree() [L607](https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L607) lines, in my virtual environment, I was able to get the load process to complete, rename the directory manually and then rerun the `load_dataset('wiki_bio')` to get what I needed. It seems that os.rename() in the `incomplete_dir` content manager is the culprit. Here's another project [Conan](https://github.com/conan-io/conan/issues/6560) with similar issue with os.rename() if it helps debug this issue. ## Environment info - `datasets` version: 1.12.1 - Platform: Windows-10-10.0.22449-SP0 - Python version: 3.8.12 - PyArrow version: 5.0.0
closed
https://github.com/huggingface/datasets/issues/2937
2021-09-17T16:52:10
2022-08-24T13:09:08
2022-08-24T13:09:08
{ "login": "daqieq", "id": 40532020, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
999,521,647
2,936
Check that array is not Float as nan != nan
The Exception wants to check for issues with StructArrays/ListArrays but catches FloatArrays with value nan as nan != nan. Pass on FloatArrays as we should not raise an Exception for them.
closed
https://github.com/huggingface/datasets/pull/2936
2021-09-17T16:16:41
2021-09-21T09:39:05
2021-09-21T09:39:04
{ "login": "Iwontbecreative", "id": 494951, "type": "User" }
[]
true
[]
999,518,469
2,935
Add Jigsaw unintended Bias
Hi, Here's a first attempt at this dataset. Would be great if it could be merged relatively quickly as it is needed for Bigscience-related stuff. This requires manual download, and I had some trouble generating dummy_data in this setting, so welcoming feedback there.
closed
https://github.com/huggingface/datasets/pull/2935
2021-09-17T16:12:31
2021-09-24T10:41:52
2021-09-24T10:41:52
{ "login": "Iwontbecreative", "id": 494951, "type": "User" }
[]
true
[]
999,477,413
2,934
to_tf_dataset keeps a reference to the open data somewhere, causing issues on windows
To reproduce: ```python import datasets as ds import weakref import gc d = ds.load_dataset("mnist", split="train") ref = weakref.ref(d._data.table) tfd = d.to_tf_dataset("image", batch_size=1, shuffle=False, label_cols="label") del tfd, d gc.collect() assert ref() is None, "Error: there is at least one reference left" ``` This causes issues because the table holds a reference to an open arrow file that should be closed. So on windows it's not possible to delete or move the arrow file afterwards. Moreover the CI test of the `to_tf_dataset` method isn't able to clean up the temporary arrow files because of this. cc @Rocketknight1
closed
https://github.com/huggingface/datasets/issues/2934
2021-09-17T15:26:53
2021-10-13T09:03:23
2021-10-13T09:03:23
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
999,392,566
2,933
Replace script_version with revision
As discussed in https://github.com/huggingface/datasets/pull/2718#discussion_r707013278, the parameter name `script_version` is no longer applicable to datasets without loading script (i.e., datasets only with raw data files). This PR replaces the parameter name `script_version` with `revision`. This way, we are also aligned with: - Transformers: `AutoTokenizer.from_pretrained(..., revision=...)` - Hub: `HfApi.dataset_info(..., revision=...)`, `HfApi.upload_file(..., revision=...)`
closed
https://github.com/huggingface/datasets/pull/2933
2021-09-17T14:04:39
2021-09-20T09:52:10
2021-09-20T09:52:10
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
999,317,750
2,932
Conda build fails
## Describe the bug Current `datasets` version in conda is 1.9 instead of 1.12. The build of the conda package fails.
closed
https://github.com/huggingface/datasets/issues/2932
2021-09-17T12:49:22
2021-09-21T15:31:10
2021-09-21T15:31:10
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
998,326,359
2,931
Fix bug in to_tf_dataset
Replace `set_format()` to `with_format()` so that we don't alter the original dataset in `to_tf_dataset()`
closed
https://github.com/huggingface/datasets/pull/2931
2021-09-16T15:08:03
2021-09-16T17:01:38
2021-09-16T17:01:37
{ "login": "Rocketknight1", "id": 12866554, "type": "User" }
[]
true
[]
998,154,311
2,930
Mutable columns argument breaks set_format
## Describe the bug If you pass a mutable list to the `columns` argument of `set_format` and then change the list afterwards, the returned columns also change. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("glue", "cola") column_list = ["idx", "label"] dataset.set_format("python", columns=column_list) column_list[1] = "foo" # Change the list after we call `set_format` dataset['train'][:4].keys() ``` ## Expected results ```python dict_keys(['idx', 'label']) ``` ## Actual results ```python dict_keys(['idx']) ```
closed
https://github.com/huggingface/datasets/issues/2930
2021-09-16T12:27:22
2021-09-16T13:50:53
2021-09-16T13:50:53
{ "login": "Rocketknight1", "id": 12866554, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
997,960,024
2,929
Add regression test for null Sequence
Relates to #2892 and #2900.
closed
https://github.com/huggingface/datasets/pull/2929
2021-09-16T08:58:33
2021-09-17T08:23:59
2021-09-17T08:23:59
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
997,941,506
2,928
Update BibTeX entry
Update BibTeX entry.
closed
https://github.com/huggingface/datasets/pull/2928
2021-09-16T08:39:20
2021-09-16T12:35:34
2021-09-16T12:35:34
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
997,654,680
2,927
Datasets 1.12 dataset.filter TypeError: get_indices_from_mask_function() got an unexpected keyword argument
## Describe the bug Upgrading to 1.12 caused `dataset.filter` call to fail with > get_indices_from_mask_function() got an unexpected keyword argument valid_rel_labels ## Steps to reproduce the bug ```pythondef filter_good_rows( ex: Dict, valid_rel_labels: Set[str], valid_ner_labels: Set[str], tokenizer: PreTrainedTokenizerFast, ) -> bool: """Get the good rows""" encoding = get_encoding_for_text(text=ex["text"], tokenizer=tokenizer) ex["encoding"] = encoding for relation in ex["relations"]: if not is_valid_relation(relation, valid_rel_labels): return False for span in ex["spans"]: if not is_valid_span(span, valid_ner_labels, encoding): return False return True def get_dataset(): loader_path = str(Path(__file__).parent / "prodigy_dataset_builder.py") ds = load_dataset( loader_path, name="prodigy-dataset", data_files=sorted(file_paths), cache_dir=cache_dir, )["train"] valid_ner_labels = set(vocab.ner_category) valid_relations = set(vocab.relation_types.keys()) ds = ds.filter( filter_good_rows, fn_kwargs=dict( valid_rel_labels=valid_relations, valid_ner_labels=valid_ner_labels, tokenizer=vocab.tokenizer, ), keep_in_memory=True, num_proc=num_proc, ) ``` `ds` is a `DatasetDict` produced by a jsonl dataset. This runs fine on 1.11 but fails on 1.12 **Stack Trace** ## Expected results I expect 1.12 datasets filter to filter the dataset without raising as it does on 1.11 ## Actual results ``` tf_ner_rel_lib/dataset.py:695: in load_prodigy_arrow_datasets_from_jsonl ds = ds.filter( ../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:185: in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) ../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/fingerprint.py:398: in wrapper out = func(self, *args, **kwargs) ../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:2169: in filter indices = self.map( ../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:1686: in map return self._map_single( ../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:185: in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) ../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/fingerprint.py:398: in wrapper out = func(self, *args, **kwargs) ../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:2048: in _map_single batch = apply_function_on_filtered_inputs( _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ inputs = {'_input_hash': [2108817714, 1477695082, -1021597032, 2130671338, -1260483858, -1203431639, ...], '_task_hash': [18070...ons', 'relations', 'relations', ...], 'answer': ['accept', 'accept', 'accept', 'accept', 'accept', 'accept', ...], ...} indices = [0, 1, 2, 3, 4, 5, ...], check_same_num_examples = False, offset = 0 def apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples=False, offset=0): """Utility to apply the function on a selection of columns.""" nonlocal update_data fn_args = [inputs] if input_columns is None else [inputs[col] for col in input_columns] if offset == 0: effective_indices = indices else: effective_indices = [i + offset for i in indices] if isinstance(indices, list) else indices + offset processed_inputs = ( > function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs) ) E TypeError: get_indices_from_mask_function() got an unexpected keyword argument 'valid_rel_labels' ../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:1939: TypeError ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.12.1 - Platform: Mac - Python version: 3.8.9 - PyArrow version: pyarrow==5.0.0
closed
https://github.com/huggingface/datasets/issues/2927
2021-09-16T01:14:02
2021-09-20T16:23:22
2021-09-20T16:23:21
{ "login": "timothyjlaurent", "id": 2000204, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
997,463,277
2,926
Error when downloading datasets to non-traditional cache directories
## Describe the bug When the cache directory is linked (soft link) to a directory on a NetApp device, the download fails. ## Steps to reproduce the bug ```bash ln -s /path/to/netapp/.cache ~/.cache ``` ```python load_dataset("imdb") ``` ## Expected results Successfully loading IMDB dataset ## Actual results ``` datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=33432835, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='imdb')}, {'expected': SplitInfo(name='test', num_bytes=32650697, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='test', num_bytes=659932, num_examples=503, dataset_name='imdb')}, {'expected': SplitInfo(name='unsupervised', num_bytes=67106814, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')}] ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.1.2 - Platform: Ubuntu - Python version: 3.8 ## Extra notes Stranger yet, trying to debug the phenomenon, I found the range of results to vary a lot without clear direction: - With `cache_dir="/path/to/netapp/.cache"` the same thing happens. - However, when linking `~/netapp/` to `/path/to/netapp` *and* setting `cache_dir="~/netapp/.cache/huggingface/datasets"` - it does work - On the other hand, when linking `~/.cache` to `~/netapp/.cache` without using `cache_dir`, it does work anymore. While I could test it only for a NetApp device, it might have to do with any other mounted FS. Thanks :)
open
https://github.com/huggingface/datasets/issues/2926
2021-09-15T19:59:46
2021-11-24T21:42:31
null
{ "login": "dar-tau", "id": 45885627, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
997,407,034
2,925
Add tutorial for no-code dataset upload
This PR is for a tutorial for uploading a dataset to the Hub. It relies on the Hub UI elements to upload a dataset, introduces the online tagging tool for creating tags, and the Dataset card template to get a head start on filling it out. The addition of this tutorial should make it easier for beginners to upload a dataset without accessing the terminal or knowing Git.
closed
https://github.com/huggingface/datasets/pull/2925
2021-09-15T18:54:42
2021-09-27T17:51:55
2021-09-27T17:51:55
{ "login": "stevhliu", "id": 59462357, "type": "User" }
[ { "name": "documentation", "color": "0075ca" } ]
true
[]
997,378,113
2,924
"File name too long" error for file locks
## Describe the bug Getting the following error when calling `load_dataset("gar1t/test")`: ``` OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock' ``` ## Steps to reproduce the bug Where the user cache dir (e.g. `~/.cache`) is on a file system that limits filenames to 255 chars (e.g. ext4): ```python from datasets import load_dataset load_dataset("gar1t/test") ``` ## Expected results Expect the function to return without an error. ## Actual results ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<python_venv>/lib/python3.9/site-packages/datasets/load.py", line 1112, in load_dataset builder_instance.download_and_prepare( File "<python_venv>/lib/python3.9/site-packages/datasets/builder.py", line 644, in download_and_prepare self._save_info() File "<python_venv>/lib/python3.9/site-packages/datasets/builder.py", line 765, in _save_info with FileLock(lock_path): File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 323, in __enter__ self.acquire() File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 272, in acquire self._acquire() File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 403, in _acquire fd = os.open(self._lock_file, open_mode) OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock' ``` ## Environment info - `datasets` version: 1.12.1 - Platform: Linux-5.11.0-27-generic-x86_64-with-glibc2.31 - Python version: 3.9.7 - PyArrow version: 5.0.0
closed
https://github.com/huggingface/datasets/issues/2924
2021-09-15T18:16:50
2023-12-08T13:39:51
2021-10-29T09:42:24
{ "login": "gar1t", "id": 184949, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
997,351,590
2,923
Loading an autonlp dataset raises in normal mode but not in streaming mode
## Describe the bug The same dataset (from autonlp) raises an error in normal mode, but does not raise in streaming mode ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("severo/autonlp-data-sentiment_detection-3c8bcd36", split="train", streaming=False) ## raises an error load_dataset("severo/autonlp-data-sentiment_detection-3c8bcd36", split="train", streaming=True) ## does not raise an error ``` ## Expected results Both calls should raise the same error ## Actual results Call with streaming=False: ``` 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 5825.42it/s] Using custom data configuration autonlp-data-sentiment_detection-3c8bcd36-fe30267462d1d42b Downloading and preparing dataset json/autonlp-data-sentiment_detection-3c8bcd36 to /home/slesage/.cache/huggingface/datasets/json/autonlp-data-sentiment_detection-3c8bcd36-fe30267462d1d42b/0.0.0/d75ead8d5cfcbe67495df0f89bd262f0023257fbbbd94a730313295f3d756d50... 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 15923.71it/s] 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 3346.88it/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset builder_instance.download_and_prepare( File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare self._download_and_prepare( File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 726, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 1187, in _prepare_split writer.write_table(table) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/arrow_writer.py", line 418, in write_table pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/arrow_writer.py", line 418, in <listcomp> pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema) File "pyarrow/table.pxi", line 1249, in pyarrow.lib.Table.__getitem__ File "pyarrow/table.pxi", line 1825, in pyarrow.lib.Table.column File "pyarrow/table.pxi", line 1800, in pyarrow.lib.Table._ensure_integer_index KeyError: 'Field "splits" does not exist in table schema' ``` Call with `streaming=False`: ``` 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 6000.43it/s] Using custom data configuration autonlp-data-sentiment_detection-3c8bcd36-fe30267462d1d42b 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 46916.15it/s] 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 148734.18it/s] ``` ## Environment info - `datasets` version: 1.12.1.dev0 - Platform: Linux-5.11.0-1017-aws-x86_64-with-glibc2.29 - Python version: 3.8.11 - PyArrow version: 4.0.1
closed
https://github.com/huggingface/datasets/issues/2923
2021-09-15T17:44:38
2022-04-12T10:09:40
2022-04-12T10:09:39
{ "login": "severo", "id": 1676121, "type": "User" }
[ { "name": "bug", "color": "d73a4a" }, { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
997,332,662
2,922
Fix conversion of multidim arrays in list to arrow
Arrow only supports 1-dim arrays. Previously we were converting all the numpy arrays to python list before instantiating arrow arrays to workaround this limitation. However in #2361 we started to keep numpy arrays in order to keep their dtypes. It works when we pass any multi-dim numpy array (the conversion to arrow has been added on our side), but not for lists of multi-dim numpy arrays. In this PR I added two strategies: - one that takes a list of multi-dim numpy arrays on returns an arrow array in an optimized way (more common case) - one that takes a list of possibly very nested data (lists, dicts, tuples) containing multi-dim arrays. This one is less optimized since it converts all the multi-dim numpy arrays into lists of 1-d arrays for compatibility with arrow. This strategy is simpler that just trying to create the arrow array from a possibly very nested data structure, but in the future we can improve it if needed. Fix https://github.com/huggingface/datasets/issues/2921
closed
https://github.com/huggingface/datasets/pull/2922
2021-09-15T17:21:36
2021-09-15T17:22:52
2021-09-15T17:21:45
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
997,325,424
2,921
Using a list of multi-dim numpy arrays raises an error "can only convert 1-dimensional array values"
This error has been introduced in https://github.com/huggingface/datasets/pull/2361 To reproduce: ```python import numpy as np from datasets import Dataset d = Dataset.from_dict({"a": [np.zeros((2, 2))]}) ``` raises ```python Traceback (most recent call last): File "playground/ttest.py", line 5, in <module> d = Dataset.from_dict({"a": [np.zeros((2, 2))]}).with_format("torch") File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/arrow_dataset.py", line 458, in from_dict pa_table = InMemoryTable.from_pydict(mapping=mapping) File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/table.py", line 365, in from_pydict return cls(pa.Table.from_pydict(*args, **kwargs)) File "pyarrow/table.pxi", line 1639, in pyarrow.lib.Table.from_pydict File "pyarrow/array.pxi", line 332, in pyarrow.lib.asarray File "pyarrow/array.pxi", line 223, in pyarrow.lib.array File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/arrow_writer.py", line 107, in __arrow_array__ out = pa.array(self.data, type=type) File "pyarrow/array.pxi", line 306, in pyarrow.lib.array File "pyarrow/array.pxi", line 39, in pyarrow.lib._sequence_to_array File "pyarrow/error.pxi", line 143, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Can only convert 1-dimensional array values
closed
https://github.com/huggingface/datasets/issues/2921
2021-09-15T17:12:11
2021-09-15T17:21:45
2021-09-15T17:21:45
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
false
[]
997,323,014
2,920
Fix unwanted tqdm bar when accessing examples
A change in #2814 added bad progress bars in `map_nested`. Now they're disabled by default Fix #2919
closed
https://github.com/huggingface/datasets/pull/2920
2021-09-15T17:09:11
2021-09-15T17:18:24
2021-09-15T17:18:24
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
997,127,487
2,919
Unwanted progress bars when accessing examples
When accessing examples from a dataset formatted for pytorch, some progress bars appear when accessing examples: ```python In [1]: import datasets as ds In [2]: d = ds.Dataset.from_dict({"a": [0, 1, 2]}).with_format("torch") In [3]: d[0] 100%|████████████████████████████████| 1/1 [00:00<00:00, 3172.70it/s] Out[3]: {'a': tensor(0)} ``` This is because the pytorch formatter calls `map_nested` that uses progress bars cc @sgugger
closed
https://github.com/huggingface/datasets/issues/2919
2021-09-15T14:05:10
2021-09-15T17:21:49
2021-09-15T17:18:23
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
false
[]
997,063,347
2,918
`Can not decode content-encoding: gzip` when loading `scitldr` dataset with streaming
## Describe the bug Trying to load the `"FullText"` config of the `"scitldr"` dataset with `streaming=True` raises an error from `aiohttp`: ```python ClientPayloadError: 400, message='Can not decode content-encoding: gzip' ``` cc @lhoestq ## Steps to reproduce the bug ```python from datasets import load_dataset iter_dset = iter( load_dataset("scitldr", name="FullText", split="test", streaming=True) ) next(iter_dset) ``` ## Expected results Returns the first sample of the dataset ## Actual results Calling `__next__` crashes with the following Traceback: ```python ----> 1 next(dset_iter) ~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in __iter__(self) 339 340 def __iter__(self): --> 341 for key, example in self._iter(): 342 if self.features: 343 # we encode the example for ClassLabel feature types for example ~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in _iter(self) 336 else: 337 ex_iterable = self._ex_iterable --> 338 yield from ex_iterable 339 340 def __iter__(self): ~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in __iter__(self) 76 77 def __iter__(self): ---> 78 for key, example in self.generate_examples_fn(**self.kwargs): 79 yield key, example 80 ~\.cache\huggingface\modules\datasets_modules\datasets\scitldr\72d6e2195786c57e1d343066fb2cc4f93ea39c5e381e53e6ae7c44bbfd1f05ef\scitldr.py in _generate_examples(self, filepath, split) 162 163 with open(filepath, encoding="utf-8") as f: --> 164 for id_, row in enumerate(f): 165 data = json.loads(row) 166 if self.config.name == "AIC": ~\miniconda3\envs\datasets\lib\site-packages\fsspec\implementations\http.py in read(self, length) 496 else: 497 length = min(self.size - self.loc, length) --> 498 return super().read(length) 499 500 async def async_fetch_all(self): ~\miniconda3\envs\datasets\lib\site-packages\fsspec\spec.py in read(self, length) 1481 # don't even bother calling fetch 1482 return b"" -> 1483 out = self.cache._fetch(self.loc, self.loc + length) 1484 self.loc += len(out) 1485 return out ~\miniconda3\envs\datasets\lib\site-packages\fsspec\caching.py in _fetch(self, start, end) 378 elif start < self.start: 379 if self.end - end > self.blocksize: --> 380 self.cache = self.fetcher(start, bend) 381 self.start = start 382 else: ~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in wrapper(*args, **kwargs) 86 def wrapper(*args, **kwargs): 87 self = obj or args[0] ---> 88 return sync(self.loop, func, *args, **kwargs) 89 90 return wrapper ~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in sync(loop, func, timeout, *args, **kwargs) 67 raise FSTimeoutError 68 if isinstance(result[0], BaseException): ---> 69 raise result[0] 70 return result[0] 71 ~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in _runner(event, coro, result, timeout) 23 coro = asyncio.wait_for(coro, timeout=timeout) 24 try: ---> 25 result[0] = await coro 26 except Exception as ex: 27 result[0] = ex ~\miniconda3\envs\datasets\lib\site-packages\fsspec\implementations\http.py in async_fetch_range(self, start, end) 538 if r.status == 206: 539 # partial content, as expected --> 540 out = await r.read() 541 elif "Content-Length" in r.headers: 542 cl = int(r.headers["Content-Length"]) ~\miniconda3\envs\datasets\lib\site-packages\aiohttp\client_reqrep.py in read(self) 1030 if self._body is None: 1031 try: -> 1032 self._body = await self.content.read() 1033 for trace in self._traces: 1034 await trace.send_response_chunk_received( ~\miniconda3\envs\datasets\lib\site-packages\aiohttp\streams.py in read(self, n) 342 async def read(self, n: int = -1) -> bytes: 343 if self._exception is not None: --> 344 raise self._exception 345 346 # migration problem; with DataQueue you have to catch ClientPayloadError: 400, message='Can not decode content-encoding: gzip' ``` ## Environment info - `datasets` version: 1.12.0 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.8.5 - PyArrow version: 2.0.0 - aiohttp version: 3.7.4.post0
closed
https://github.com/huggingface/datasets/issues/2918
2021-09-15T13:06:07
2021-12-01T08:15:00
2021-12-01T08:15:00
{ "login": "SBrandeis", "id": 33657802, "type": "User" }
[ { "name": "bug", "color": "d73a4a" }, { "name": "streaming", "color": "fef2c0" } ]
false
[]
997,041,658
2,917
windows download abnormal
## Describe the bug The script clearly exists (accessible from the browser), but the script download fails on windows. Then I tried it again and it can be downloaded normally on linux. why?? ## Steps to reproduce the bug ```python3.7 + windows ![image](https://user-images.githubusercontent.com/52347799/133436174-4303f847-55d5-434f-a749-08da3bb9b654.png) # Sample code to reproduce the bug ``` ## Expected results It can be downloaded normally. ## Actual results it cann't ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version:1.11.0 - Platform:windows - Python version:3.7 - PyArrow version:
closed
https://github.com/huggingface/datasets/issues/2917
2021-09-15T12:45:35
2021-09-16T17:17:48
2021-09-16T17:17:48
{ "login": "wei1826676931", "id": 52347799, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
997,003,661
2,916
Add OpenAI's pass@k code evaluation metric
This PR introduces the `code_eval` metric which implements [OpenAI's code evaluation harness](https://github.com/openai/human-eval) introduced in the [Codex paper](https://arxiv.org/abs/2107.03374). It is heavily based on the original implementation and just adapts the interface to follow the `predictions`/`references` convention. The addition of this metric should enable the evaluation against the code evaluation datasets added in #2897 and #2893. A few open questions: - The implementation makes heavy use of multiprocessing which this PR does not touch. Is this conflicting with multiprocessing natively integrated in `datasets`? - This metric executes generated Python code and as such it poses dangers of executing malicious code. OpenAI addresses this issue by 1) commenting the `exec` call in the code so the user has to actively uncomment it and read the warning and 2) suggests using a sandbox environment (gVisor container). Should we add a similar safeguard? E.g. a prompt that needs to be answered when initialising the metric? Or at least a warning message? - Naming: the implementation sticks to the `predictions`/`references` naming, however, the references are not reference solutions but unittest to test the solution. While reference solutions are also available they are not used. Should the naming be adapted?
closed
https://github.com/huggingface/datasets/pull/2916
2021-09-15T12:05:43
2021-11-12T14:19:51
2021-11-12T14:19:50
{ "login": "lvwerra", "id": 8264887, "type": "User" }
[]
true
[]
996,870,071
2,915
Fix fsspec AbstractFileSystem access
This addresses the issue from #2914 by changing the way fsspec's AbstractFileSystem is accessed.
closed
https://github.com/huggingface/datasets/pull/2915
2021-09-15T09:39:20
2021-09-15T11:35:24
2021-09-15T11:35:24
{ "login": "pierre-godard", "id": 3969168, "type": "User" }
[]
true
[]
996,770,168
2,914
Having a dependency defining fsspec entrypoint raises an AttributeError when importing datasets
## Describe the bug In one of my project, I defined a custom fsspec filesystem with an entrypoint. My guess is that by doing so, a variable named `spec` is created in the module `fsspec` (created by entering a for loop as there are entrypoints defined, see the loop in question [here](https://github.com/intake/filesystem_spec/blob/0589358d8a029ed6b60d031018f52be2eb721291/fsspec/__init__.py#L55)). So that `fsspec.spec`, that was previously referring to the `spec` submodule, is now referring to that `spec` variable. This make the import of datasets failing as it is using that `fsspec.spec`. ## Steps to reproduce the bug I could reproduce the bug with a dummy poetry project. Here is the pyproject.toml: ```toml [tool.poetry] name = "debug-datasets" version = "0.1.0" description = "" authors = ["Pierre Godard"] [tool.poetry.dependencies] python = "^3.8" datasets = "^1.11.0" [tool.poetry.dev-dependencies] [build-system] requires = ["poetry-core>=1.0.0"] build-backend = "poetry.core.masonry.api" [tool.poetry.plugins."fsspec.specs"] "file2" = "fsspec.implementations.local.LocalFileSystem" ``` The only other file being a `debug_datasets/__init__.py` empty file. The overall structure of the project is as follows: ``` . ├── pyproject.toml └── debug_datasets └── __init__.py ``` Then, within the project folder run: ``` poetry install poetry run python ``` And in the python interpreter, try to import `datasets`: ``` import datasets ``` ## Expected results The import should run successfully. ## Actual results Here is the trace of the error I get: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/godarpi/.cache/pypoetry/virtualenvs/debug-datasets-JuFzTKL--py3.8/lib/python3.8/site-packages/datasets/__init__.py", line 33, in <module> from .arrow_dataset import Dataset, concatenate_datasets File "/home/godarpi/.cache/pypoetry/virtualenvs/debug-datasets-JuFzTKL--py3.8/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 48, in <module> from .filesystems import extract_path_from_uri, is_remote_filesystem File "/home/godarpi/.cache/pypoetry/virtualenvs/debug-datasets-JuFzTKL--py3.8/lib/python3.8/site-packages/datasets/filesystems/__init__.py", line 30, in <module> def is_remote_filesystem(fs: fsspec.spec.AbstractFileSystem) -> bool: AttributeError: 'EntryPoint' object has no attribute 'AbstractFileSystem' ``` ## Suggested fix `datasets/filesystems/__init__.py`, line 30, replace: ``` def is_remote_filesystem(fs: fsspec.spec.AbstractFileSystem) -> bool: ``` by: ``` def is_remote_filesystem(fs: fsspec.AbstractFileSystem) -> bool: ``` I will come up with a PR soon if this effectively solves the issue. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.11.0 - Platform: WSL2 (Ubuntu 20.04.1 LTS) - Python version: 3.8.5 - PyArrow version: 5.0.0 - `fsspec` version: 2021.8.1
closed
https://github.com/huggingface/datasets/issues/2914
2021-09-15T07:54:06
2021-09-15T16:49:17
2021-09-15T16:49:16
{ "login": "pierre-godard", "id": 3969168, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
996,436,368
2,913
timit_asr dataset only includes one text phrase
## Describe the bug The dataset 'timit_asr' only includes one text phrase. It only includes the transcription "Would such an act of refusal be useful?" multiple times rather than different phrases. ## Steps to reproduce the bug Note: I am following the tutorial https://huggingface.co/blog/fine-tune-wav2vec2-english 1. Install the dataset and other packages ```python !pip install datasets>=1.5.0 !pip install transformers==4.4.0 !pip install soundfile !pip install jiwer ``` 2. Load the dataset ```python from datasets import load_dataset, load_metric timit = load_dataset("timit_asr") ``` 3. Remove columns that we don't want ```python timit = timit.remove_columns(["phonetic_detail", "word_detail", "dialect_region", "id", "sentence_type", "speaker_id"]) ``` 4. Write a short function to display some random samples of the dataset. ```python from datasets import ClassLabel import random import pandas as pd from IPython.display import display, HTML def show_random_elements(dataset, num_examples=10): assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset." picks = [] for _ in range(num_examples): pick = random.randint(0, len(dataset)-1) while pick in picks: pick = random.randint(0, len(dataset)-1) picks.append(pick) df = pd.DataFrame(dataset[picks]) display(HTML(df.to_html())) show_random_elements(timit["train"].remove_columns(["file"])) ``` ## Expected results 10 random different transcription phrases. ## Actual results 10 of the same transcription phrase "Would such an act of refusal be useful?" ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.4.1 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.5 - PyArrow version: not listed
closed
https://github.com/huggingface/datasets/issues/2913
2021-09-14T21:06:07
2021-09-15T08:05:19
2021-09-15T08:05:18
{ "login": "margotwagner", "id": 39107794, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
996,256,005
2,912
Update link to Blog in docs footer
Update link.
closed
https://github.com/huggingface/datasets/pull/2912
2021-09-14T17:23:14
2021-09-15T07:59:23
2021-09-15T07:59:23
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
996,202,598
2,911
Fix exception chaining
Fix exception chaining to avoid tracebacks with message: `During handling of the above exception, another exception occurred:`
closed
https://github.com/huggingface/datasets/pull/2911
2021-09-14T16:19:29
2021-09-16T15:04:44
2021-09-16T15:04:44
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
996,149,632
2,910
feat: 🎸 pass additional arguments to get private configs + info
`use_auth_token` can now be passed to the functions to get the configs or infos of private datasets on the hub
closed
https://github.com/huggingface/datasets/pull/2910
2021-09-14T15:24:19
2021-09-15T16:19:09
2021-09-15T16:19:06
{ "login": "severo", "id": 1676121, "type": "User" }
[]
true
[]
996,002,180
2,909
fix anli splits
I can't run the tests for dummy data, facing this error `ImportError while loading conftest '/home/zaid/tmp/fix_anli_splits/datasets/tests/conftest.py'. tests/conftest.py:10: in <module> from datasets import config E ImportError: cannot import name 'config' from 'datasets' (unknown location)`
closed
https://github.com/huggingface/datasets/pull/2909
2021-09-14T13:10:35
2021-10-13T11:27:49
2021-10-13T11:27:49
{ "login": "zaidalyafeai", "id": 15667714, "type": "User" }
[]
true
[]
995,970,612
2,908
Update Zenodo metadata with creator names and affiliation
This PR helps in prefilling author data when automatically generating the DOI after each release.
closed
https://github.com/huggingface/datasets/pull/2908
2021-09-14T12:39:37
2021-09-14T14:29:25
2021-09-14T14:29:25
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
995,968,152
2,907
add story_cloze dataset
@lhoestq I have spent some time but I still I can't succeed in correctly testing the dummy_data.
closed
https://github.com/huggingface/datasets/pull/2907
2021-09-14T12:36:53
2021-10-08T21:41:42
2021-10-08T21:41:41
{ "login": "zaidalyafeai", "id": 15667714, "type": "User" }
[]
true
[]
995,962,905
2,906
feat: 🎸 add a function to get a dataset config's split names
Also: pass additional arguments (use_auth_token) to get private configs + info of private datasets on the hub Questions: - [x] I'm not sure how the versions work: I changed 1.12.1.dev0 to 1.12.1.dev1, was it correct? -> no: reverted - [x] Should I add a section in https://github.com/huggingface/datasets/blob/master/docs/source/load_hub.rst? (there is no section for get_dataset_infos) -> yes: added
closed
https://github.com/huggingface/datasets/pull/2906
2021-09-14T12:31:22
2021-10-04T09:55:38
2021-10-04T09:55:37
{ "login": "severo", "id": 1676121, "type": "User" }
[]
true
[]
995,843,964
2,905
Update BibTeX entry
Update BibTeX entry.
closed
https://github.com/huggingface/datasets/pull/2905
2021-09-14T10:16:17
2021-09-14T12:25:37
2021-09-14T12:25:37
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
995,814,222
2,904
FORCE_REDOWNLOAD does not work
## Describe the bug With GenerateMode.FORCE_REDOWNLOAD, the documentation says +------------------------------------+-----------+---------+ | | Downloads | Dataset | +====================================+===========+=========+ | `REUSE_DATASET_IF_EXISTS` (default)| Reuse | Reuse | +------------------------------------+-----------+---------+ | `REUSE_CACHE_IF_EXISTS` | Reuse | Fresh | +------------------------------------+-----------+---------+ | `FORCE_REDOWNLOAD` | Fresh | Fresh | +------------------------------------+-----------+---------+ However, the old dataset is loaded even when FORCE_REDOWNLOAD is chosen. ## Steps to reproduce the bug ```python import pandas as pd from datasets import load_dataset, GenerateMode pd.DataFrame(range(5), columns=['numbers']).to_csv('/tmp/test.tsv.gz', index=False) ee = load_dataset('csv', data_files=['/tmp/test.tsv.gz'], delimiter='\t', split='train', download_mode=GenerateMode.FORCE_REDOWNLOAD) print(ee) pd.DataFrame(range(10), columns=['numerals']).to_csv('/tmp/test.tsv.gz', index=False) ee = load_dataset('csv', data_files=['/tmp/test.tsv.gz'], delimiter='\t', split='train', download_mode=GenerateMode.FORCE_REDOWNLOAD) print(ee) ``` ## Expected results Dataset({ features: ['numbers'], num_rows: 5 }) Dataset({ features: ['numerals'], num_rows: 10 }) ## Actual results Dataset({ features: ['numbers'], num_rows: 5 }) Dataset({ features: ['numbers'], num_rows: 5 }) ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.8.0 - Platform: Linux-4.14.181-108.257.amzn1.x86_64-x86_64-with-glibc2.10 - Python version: 3.7.10 - PyArrow version: 3.0.0
open
https://github.com/huggingface/datasets/issues/2904
2021-09-14T09:45:26
2021-10-06T09:37:19
null
{ "login": "anoopkatti", "id": 5278299, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
995,715,191
2,903
Fix xpathopen to accept positional arguments
Fix `xpathopen()` so that it also accepts positional arguments. Fix #2901.
closed
https://github.com/huggingface/datasets/pull/2903
2021-09-14T08:02:50
2021-09-14T08:51:21
2021-09-14T08:40:47
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
995,254,216
2,902
Add WIT Dataset
## Adding a Dataset - **Name:** *WIT* - **Description:** *Wikipedia-based Image Text Dataset* - **Paper:** *[WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning ](https://arxiv.org/abs/2103.01913)* - **Data:** *https://github.com/google-research-datasets/wit* - **Motivation:** (excerpt from their Github README.md) > - The largest multimodal dataset (publicly available at the time of this writing) by the number of image-text examples. > - A massively multilingual dataset (first of its kind) with coverage for over 100+ languages. > - A collection of diverse set of concepts and real world entities. > - Brings forth challenging real-world test sets. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
closed
https://github.com/huggingface/datasets/issues/2902
2021-09-13T19:38:49
2024-10-02T15:37:48
2022-06-01T17:28:40
{ "login": "nateraw", "id": 32437151, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
995,232,844
2,901
Incompatibility with pytest
## Describe the bug pytest complains about xpathopen / path.open("w") ## Steps to reproduce the bug Create a test file, `test.py`: ```python import datasets as ds def load_dataset(): ds.load_dataset("counter", split="train", streaming=True) ``` And launch it with pytest: ```bash python -m pytest test.py ``` ## Expected results It should give something like: ``` collected 1 item test.py . [100%] ======= 1 passed in 3.15s ======= ``` ## Actual results ``` ============================================================================================================================= test session starts ============================================================================================================================== platform linux -- Python 3.8.11, pytest-6.2.5, py-1.10.0, pluggy-1.0.0 rootdir: /home/slesage/hf/datasets-preview-backend, configfile: pyproject.toml plugins: anyio-3.3.1 collected 1 item tests/queries/test_rows.py . [100%]Traceback (most recent call last): File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/runpy.py", line 194, in _run_module_as_main return _run_code(code, main_globals, None, File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/runpy.py", line 87, in _run_code exec(code, run_globals) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pytest/__main__.py", line 5, in <module> raise SystemExit(pytest.console_main()) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/config/__init__.py", line 185, in console_main code = main() File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/config/__init__.py", line 162, in main ret: Union[ExitCode, int] = config.hook.pytest_cmdline_main( File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_hooks.py", line 265, in __call__ return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_manager.py", line 80, in _hookexec return self._inner_hookexec(hook_name, methods, kwargs, firstresult) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_callers.py", line 60, in _multicall return outcome.get_result() File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_result.py", line 60, in get_result raise ex[1].with_traceback(ex[2]) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_callers.py", line 39, in _multicall res = hook_impl.function(*args) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/main.py", line 316, in pytest_cmdline_main return wrap_session(config, _main) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/main.py", line 304, in wrap_session config.hook.pytest_sessionfinish( File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_hooks.py", line 265, in __call__ return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_manager.py", line 80, in _hookexec return self._inner_hookexec(hook_name, methods, kwargs, firstresult) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_callers.py", line 55, in _multicall gen.send(outcome) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/terminal.py", line 803, in pytest_sessionfinish outcome.get_result() File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_result.py", line 60, in get_result raise ex[1].with_traceback(ex[2]) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_callers.py", line 39, in _multicall res = hook_impl.function(*args) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/cacheprovider.py", line 428, in pytest_sessionfinish config.cache.set("cache/nodeids", sorted(self.cached_nodeids)) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/cacheprovider.py", line 188, in set f = path.open("w") TypeError: xpathopen() takes 1 positional argument but 2 were given ``` ## Environment info - `datasets` version: 1.12.0 - Platform: Linux-5.11.0-1017-aws-x86_64-with-glibc2.29 - Python version: 3.8.11 - PyArrow version: 4.0.1
closed
https://github.com/huggingface/datasets/issues/2901
2021-09-13T19:12:17
2021-09-14T08:40:47
2021-09-14T08:40:47
{ "login": "severo", "id": 1676121, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
994,922,580
2,900
Fix null sequence encoding
The Sequence feature encoding was failing when a `None` sequence was used in a dataset. Fix https://github.com/huggingface/datasets/issues/2892
closed
https://github.com/huggingface/datasets/pull/2900
2021-09-13T13:55:08
2021-09-13T14:17:43
2021-09-13T14:17:42
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
994,082,432
2,899
Dataset
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
closed
https://github.com/huggingface/datasets/issues/2899
2021-09-12T07:38:53
2021-09-12T16:12:15
2021-09-12T16:12:15
{ "login": "rcacho172", "id": 90449239, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
994,032,814
2,898
Hug emoji
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
closed
https://github.com/huggingface/datasets/issues/2898
2021-09-12T03:27:51
2021-09-12T16:13:13
2021-09-12T16:13:13
{ "login": "Jackg-08", "id": 90539794, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
993,798,386
2,897
Add OpenAI's HumanEval dataset
This PR adds OpenAI's [HumanEval](https://github.com/openai/human-eval) dataset. The dataset consists of 164 handcrafted programming problems with solutions and unittests to verify solution. This dataset is useful to evaluate code generation models.
closed
https://github.com/huggingface/datasets/pull/2897
2021-09-11T09:37:47
2021-09-16T15:02:11
2021-09-16T15:02:11
{ "login": "lvwerra", "id": 8264887, "type": "User" }
[]
true
[]
993,613,113
2,896
add multi-proc in `to_csv`
This PR extends the multi-proc method used in #2747 for`to_json` to `to_csv` as well. Results on my machine post benchmarking on `ascent_kb` dataset (giving ~45% improvement when compared to num_proc = 1): ``` Time taken on 1 num_proc, 10000 batch_size 674.2055702209473 Time taken on 4 num_proc, 10000 batch_size 425.6553490161896 Time taken on 1 num_proc, 50000 batch_size 623.5897650718689 Time taken on 4 num_proc, 50000 batch_size 380.0402421951294 Time taken on 4 num_proc, 100000 batch_size 361.7168130874634 ``` This is a WIP as writing tests is pending for this PR. I'm also exploring [this](https://arrow.apache.org/docs/python/csv.html#incremental-writing) approach for which I'm using `pyarrow-5.0.0`.
closed
https://github.com/huggingface/datasets/pull/2896
2021-09-10T21:35:09
2021-10-28T05:47:33
2021-10-26T16:00:42
{ "login": "bhavitvyamalik", "id": 19718818, "type": "User" }
[]
true
[]
993,462,274
2,895
Use pyarrow.Table.replace_schema_metadata instead of pyarrow.Table.cast
This PR partially addresses #2252. ``update_metadata_with_features`` uses ``Table.cast`` which slows down ``load_from_disk`` (and possibly other methods that use it) for very large datasets. Since ``update_metadata_with_features`` is only updating the schema metadata, it makes more sense to use ``pyarrow.Table.replace_schema_metadata`` which is much faster. This PR adds a ``replace_schema_metadata`` method to all table classes, and modifies ``update_metadata_with_features`` to use it instead of ``cast``.
closed
https://github.com/huggingface/datasets/pull/2895
2021-09-10T17:56:57
2021-09-21T22:50:01
2021-09-21T08:18:35
{ "login": "arsarabi", "id": 12345848, "type": "User" }
[]
true
[]
993,375,654
2,894
Fix COUNTER dataset
Fix filename generating `FileNotFoundError`. Related to #2866. CC: @severo.
closed
https://github.com/huggingface/datasets/pull/2894
2021-09-10T16:07:29
2021-09-10T16:27:45
2021-09-10T16:27:44
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
993,342,781
2,893
add mbpp dataset
This PR adds the mbpp dataset introduced by Google [here](https://github.com/google-research/google-research/tree/master/mbpp) as mentioned in #2816. The dataset contain two versions: a full and a sanitized one. They have a slightly different schema and it is current state the loading preserves the original schema. An open question is whether to harmonize the two schemas when loading the dataset or to preserve the original one. Since not all fields are overlapping the schema will not be exactly the same.
closed
https://github.com/huggingface/datasets/pull/2893
2021-09-10T15:27:30
2021-09-16T09:35:42
2021-09-16T09:35:42
{ "login": "lvwerra", "id": 8264887, "type": "User" }
[]
true
[]
993,274,572
2,892
Error when encoding a dataset with None objects with a Sequence feature
There is an error when encoding a dataset with None objects with a Sequence feature To reproduce: ```python from datasets import Dataset, Features, Value, Sequence data = {"a": [[0], None]} features = Features({"a": Sequence(Value("int32"))}) dataset = Dataset.from_dict(data, features=features) ``` raises ```python --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-24-40add67f8751> in <module> 2 data = {"a": [[0], None]} 3 features = Features({"a": Sequence(Value("int32"))}) ----> 4 dataset = Dataset.from_dict(data, features=features) [...] ~/datasets/features.py in encode_nested_example(schema, obj) 888 if isinstance(obj, str): # don't interpret a string as a list 889 raise ValueError("Got a string but expected a list instead: '{}'".format(obj)) --> 890 return [encode_nested_example(schema.feature, o) for o in obj] 891 # Object with special encoding: 892 # ClassLabel will convert from string to int, TranslationVariableLanguages does some checks TypeError: 'NoneType' object is not iterable ``` Instead, if should run without error, as if the `features` were not passed
closed
https://github.com/huggingface/datasets/issues/2892
2021-09-10T14:11:43
2021-09-13T14:18:13
2021-09-13T14:17:42
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
993,161,984
2,891
Allow dynamic first dimension for ArrayXD
Add support for dynamic first dimension for ArrayXD features. See issue [#887](https://github.com/huggingface/datasets/issues/887). Following changes allow for `to_pylist` method of `ArrayExtensionArray` to return a list of numpy arrays where fist dimension can vary. @lhoestq Could you suggest how you want to extend test suit. For now I added only very limited testing.
closed
https://github.com/huggingface/datasets/pull/2891
2021-09-10T11:52:52
2021-11-23T15:33:13
2021-10-29T09:37:17
{ "login": "rpowalski", "id": 10357417, "type": "User" }
[]
true
[]
993,074,102
2,890
0x290B112ED1280537B24Ee6C268a004994a16e6CE
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
closed
https://github.com/huggingface/datasets/issues/2890
2021-09-10T09:51:17
2021-09-10T11:45:29
2021-09-10T11:45:29
{ "login": "rcacho172", "id": 90449239, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
992,968,382
2,889
Coc
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
closed
https://github.com/huggingface/datasets/issues/2889
2021-09-10T07:32:07
2021-09-10T11:45:54
2021-09-10T11:45:54
{ "login": "Bwiggity", "id": 90444264, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
992,676,535
2,888
v1.11.1 release date
Hello, i need to use latest features in one of my packages but there have been no new datasets release since 2 months ago. When do you plan to publush v1.11.1 release?
closed
https://github.com/huggingface/datasets/issues/2888
2021-09-09T21:53:15
2021-09-12T20:18:35
2021-09-12T16:15:39
{ "login": "fcakyon", "id": 34196005, "type": "User" }
[ { "name": "question", "color": "d876e3" } ]
false
[]
992,576,305
2,887
#2837 Use cache folder for lockfile
Fixes #2837 Use a cache folder directory to store the FileLock. The issue was that the lock file was in a readonly folder.
closed
https://github.com/huggingface/datasets/pull/2887
2021-09-09T19:55:56
2021-10-05T17:58:22
2021-10-05T17:58:22
{ "login": "Dref360", "id": 8976546, "type": "User" }
[]
true
[]
992,534,632
2,886
Hj
null
closed
https://github.com/huggingface/datasets/issues/2886
2021-09-09T18:58:52
2021-09-10T11:46:29
2021-09-10T11:46:29
{ "login": "Noorasri", "id": 90416328, "type": "User" }
[]
false
[]
992,160,544
2,885
Adding an Elastic Search index to a Dataset
## Describe the bug When trying to index documents from the squad dataset, the connection to ElasticSearch seems to break: Reusing dataset squad (/Users/andreasmotz/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453) 90%|████████████████████████████████████████████▉ | 9501/10570 [00:01<00:00, 6335.61docs/s] No error is thrown, but the indexing breaks ~90%. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets import load_dataset from elasticsearch import Elasticsearch es = Elasticsearch() squad = load_dataset('squad', split='validation') index_name = "corpus" es_config = { "settings": { "number_of_shards": 1, "analysis": {"analyzer": {"stop_standard": {"type": "standard", " stopwords": "_english_"}}}, }, "mappings": { "properties": { "idx" : {"type" : "keyword"}, "title" : {"type" : "keyword"}, "text": { "type": "text", "analyzer": "standard", "similarity": "BM25" }, } }, } class IndexBuilder: """ Elastic search indexing of a corpus """ def __init__( self, *args, #corpus : None, dataset : squad, index_name = str, query = str, config = dict, **kwargs, ): #instantiate HuggingFace dataset self.dataset = dataset #instantiate ElasticSearch config self.config = config self.es = Elasticsearch() self.index_name = index_name self.query = query def elastic_index(self): print(self.es.info) self.es.indices.delete(index=self.index_name, ignore=[400, 404]) search_index = self.dataset.add_elasticsearch_index(column='context', host='localhost', port='9200', es_index_name=self.index_name, es_index_config=self.config) return search_index def exact_match_method(self, index): scores, retrieved_examples = index.get_nearest_examples('context', query=self.query, k=1) return scores, retrieved_examples if __name__ == "__main__": print(type(squad)) Index = IndexBuilder(dataset=squad, index_name='corpus_index', query='Where was Chopin born?', config=es_config) search_index = Index.elastic_index() scores, examples = Index.exact_match_method(search_index) print(scores, examples) for name in squad.column_names: print(type(squad[name])) ``` ## Environment info We run the code in Poetry. This might be the issue, since the script runs successfully in our local environment. Poetry: - Python version: 3.8 - PyArrow: 4.0.1 - Elasticsearch: 7.13.4 - datasets: 1.10.2 Local: - Python version: 3.8 - PyArrow: 3.0.0 - Elasticsearch: 7.7.1 - datasets: 1.7.0
open
https://github.com/huggingface/datasets/issues/2885
2021-09-09T12:21:39
2021-10-20T18:57:11
null
{ "login": "MotzWanted", "id": 36195371, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
992,135,698
2,884
Add IC, SI, ER tasks to SUPERB
This PR adds 3 additional classification tasks to SUPERB #### Intent Classification Dataset URL seems to be down at the moment :( See the note below. S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/fluent_commands/dataset.py Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#ic-intent-classification---fluent-speech-commands #### Speaker Identification Manual download script: ``` mkdir VoxCeleb1 cd VoxCeleb1 wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partaa wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partab wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partac wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partad cat vox1_dev* > vox1_dev_wav.zip unzip vox1_dev_wav.zip wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_test_wav.zip unzip vox1_test_wav.zip # download the official SUPERB train-dev-test split wget https://raw.githubusercontent.com/s3prl/s3prl/master/s3prl/downstream/voxceleb1/veri_test_class.txt ``` S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/voxceleb1/dataset.py Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sid-speaker-identification #### Intent Classification Manual download requires going through a slow application process, see the note below. S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/emotion/IEMOCAP_preprocess.py Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#er-emotion-recognition #### :warning: Note These datasets either require manual downloads or have broken/unstable links. You can get all necessary archives in this repo: https://huggingface.co/datasets/anton-l/superb_source_data_dumps/tree/main
closed
https://github.com/huggingface/datasets/pull/2884
2021-09-09T11:56:03
2021-09-20T09:17:58
2021-09-20T09:00:49
{ "login": "anton-l", "id": 26864830, "type": "User" }
[]
true
[]
991,969,875
2,883
Fix data URLs and metadata in DocRED dataset
The host of `docred` dataset has updated the `dev` data file. This PR: - Updates the dev URL - Updates dataset metadata This PR also fixes the URL of the `train_distant` split, which was wrong. Fix #2882.
closed
https://github.com/huggingface/datasets/pull/2883
2021-09-09T08:55:34
2021-09-13T11:24:31
2021-09-13T11:24:31
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
991,800,141
2,882
`load_dataset('docred')` results in a `NonMatchingChecksumError`
## Describe the bug I get consistent `NonMatchingChecksumError: Checksums didn't match for dataset source files` errors when trying to execute `datasets.load_dataset('docred')`. ## Steps to reproduce the bug It is quasi only this code: ```python import datasets data = datasets.load_dataset('docred') ``` ## Expected results The DocRED dataset should be loaded without any problems. ## Actual results ``` NonMatchingChecksumError Traceback (most recent call last) <ipython-input-4-b1b83f25a16c> in <module> ----> 1 d = datasets.load_dataset('docred') ~/anaconda3/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs) 845 846 # Download and prepare data --> 847 builder_instance.download_and_prepare( 848 download_config=download_config, 849 download_mode=download_mode, ~/anaconda3/lib/python3.8/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 613 logger.warning("HF google storage unreachable. Downloading and preparing it from source") 614 if not downloaded_from_gcs: --> 615 self._download_and_prepare( 616 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 617 ) ~/anaconda3/lib/python3.8/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 673 # Checksums verification 674 if verify_infos: --> 675 verify_checksums( 676 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files" 677 ) ~/anaconda3/lib/python3.8/site-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name) 38 if len(bad_urls) > 0: 39 error_msg = "Checksums didn't match" + for_verification_name + ":\n" ---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls)) 41 logger.info("All the checksums matched successfully" + for_verification_name) 42 NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/uc?export=download&id=1fDmfUUo5G7gfaoqWWvK81u08m71TK2g7'] ``` ## Environment info - `datasets` version: 1.11.0 - Platform: Linux-5.11.0-7633-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyArrow version: 5.0.0 This error also happened on my Windows-partition, after freshly installing python 3.9 and `datasets`. ## Remarks - I have already called `rm -rf /home/<user>/.cache/huggingface`, i.e., I have tried clearing the cache. - The problem does not exist for other datasets, i.e., it seems to be DocRED-specific.
closed
https://github.com/huggingface/datasets/issues/2882
2021-09-09T05:55:02
2021-09-13T11:24:30
2021-09-13T11:24:30
{ "login": "tmpr", "id": 51313597, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
991,639,142
2,881
Add BIOSSES dataset
Adding the biomedical semantic sentence similarity dataset, BIOSSES, listed in "Biomedical Datasets - BigScience Workshop 2021"
closed
https://github.com/huggingface/datasets/pull/2881
2021-09-09T00:35:36
2021-09-13T14:20:40
2021-09-13T14:20:40
{ "login": "bwang482", "id": 6764450, "type": "User" }
[]
true
[]
990,877,940
2,880
Extend support for streaming datasets that use pathlib.Path stem/suffix
This PR extends the support in streaming mode for datasets that use `pathlib`, by patching the properties `pathlib.Path.stem` and `pathlib.Path.suffix`. Related to #2876, #2874, #2866. CC: @severo
closed
https://github.com/huggingface/datasets/pull/2880
2021-09-08T08:42:43
2021-09-09T13:13:29
2021-09-09T13:13:29
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
990,257,404
2,879
In v1.4.1, all TIMIT train transcripts are "Would such an act of refusal be useful?"
## Describe the bug Using version 1.4.1 of `datasets`, TIMIT transcripts are all the same. ## Steps to reproduce the bug I was following this tutorial - https://huggingface.co/blog/fine-tune-wav2vec2-english But here's a distilled repro: ```python !pip install datasets==1.4.1 from datasets import load_dataset timit = load_dataset("timit_asr", cache_dir="./temp") unique_transcripts = set(timit["train"]["text"]) print(unique_transcripts) assert len(unique_transcripts) > 1 ``` ## Expected results Expected the correct TIMIT data. Or an error saying that this version of `datasets` can't produce it. ## Actual results Every train transcript was "Would such an act of refusal be useful?" Every test transcript was "The bungalow was pleasantly situated near the shore." ## Environment info - `datasets` version: 1.4.1 - Platform: Darwin-18.7.0-x86_64-i386-64bit - Python version: 3.7.9 - PyTorch version (GPU?): 1.9.0 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: tried both - Using distributed or parallel set-up in script?: no -
closed
https://github.com/huggingface/datasets/issues/2879
2021-09-07T18:53:45
2021-09-08T16:55:19
2021-09-08T09:12:28
{ "login": "rcgale", "id": 2279700, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
990,093,316
2,878
NotADirectoryError: [WinError 267] During load_from_disk
## Describe the bug Trying to load saved dataset or dataset directory from Amazon S3 on a Windows machine fails. Performing the same operation succeeds on non-windows environment (AWS Sagemaker). ## Steps to reproduce the bug ```python # Followed https://huggingface.co/docs/datasets/filesystems.html#loading-a-processed-dataset-from-s3 from datasets import load_from_disk from datasets.filesystems import S3FileSystem s3_file = "output of save_to_disk" s3_filesystem = S3FileSystem() load_from_disk(s3_file, fs=s3_filesystem) ``` ## Expected results load_from_disk succeeds without error ## Actual results Seems like it succeeds in pulling the file into a windows temp directory, as it exists in my system, but fails to process it. ``` Exception ignored in: <finalize object at 0x26409231ce0; dead> Traceback (most recent call last): File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\weakref.py", line 566, in __call__ return info.func(*info.args, **(info.kwargs or {})) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 817, in _cleanup cls._rmtree(name) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 813, in _rmtree _shutil.rmtree(name, onerror=onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 740, in rmtree return _rmtree_unsafe(path, onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 613, in _rmtree_unsafe _rmtree_unsafe(fullname, onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 613, in _rmtree_unsafe _rmtree_unsafe(fullname, onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 613, in _rmtree_unsafe _rmtree_unsafe(fullname, onerror) [Previous line repeated 2 more times] File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 618, in _rmtree_unsafe onerror(os.unlink, fullname, sys.exc_info()) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 805, in onerror cls._rmtree(path) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 813, in _rmtree _shutil.rmtree(name, onerror=onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 740, in rmtree return _rmtree_unsafe(path, onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 599, in _rmtree_unsafe onerror(os.scandir, path, sys.exc_info()) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 596, in _rmtree_unsafe with os.scandir(path) as scandir_it: NotADirectoryError: [WinError 267] The directory name is invalid: 'C:\\Users\\grassycup\\AppData\\Local\\Temp\\tmp45f_qbma\\tests3bucket\\output\\test_output\\train\\dataset.arrow' Exception ignored in: <finalize object at 0x264091c7880; dead> Traceback (most recent call last): File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\weakref.py", line 566, in __call__ return info.func(*info.args, **(info.kwargs or {})) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 817, in _cleanup cls._rmtree(name) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 813, in _rmtree _shutil.rmtree(name, onerror=onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 740, in rmtree return _rmtree_unsafe(path, onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 613, in _rmtree_unsafe _rmtree_unsafe(fullname, onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 613, in _rmtree_unsafe _rmtree_unsafe(fullname, onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 613, in _rmtree_unsafe _rmtree_unsafe(fullname, onerror) [Previous line repeated 2 more times] File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 618, in _rmtree_unsafe onerror(os.unlink, fullname, sys.exc_info()) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 805, in onerror cls._rmtree(path) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 813, in _rmtree _shutil.rmtree(name, onerror=onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 740, in rmtree return _rmtree_unsafe(path, onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 599, in _rmtree_unsafe onerror(os.scandir, path, sys.exc_info()) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 596, in _rmtree_unsafe with os.scandir(path) as scandir_it: NotADirectoryError: [WinError 267] The directory name is invalid: 'C:\\Users\\grassycup\\AppData\\Local\\Temp\\tmp45f_qbma\\tests3bucket\\output\\test_output\\train\\dataset.arrow' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.11.0 - Platform: Windows-10-10.0.19042-SP0 - Python version: 3.8.11 - PyArrow version: 3.0.0
open
https://github.com/huggingface/datasets/issues/2878
2021-09-07T15:15:05
2021-09-07T15:15:05
null
{ "login": "Grassycup", "id": 1875064, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
990,027,249
2,877
Don't keep the dummy data folder or dataset_infos.json when resolving data files
When there's no dataset script, all the data files of a folder or a repository on the Hub are loaded as data files. There are already a few exceptions: - files starting with "." are ignored - the dataset card "README.md" is ignored - any file named "config.json" is ignored (currently it isn't used anywhere, but it could be used in the future to define splits or configs for example, but not 100% sure) However any data files in a folder named "dummy" should be ignored as well as they should only be used to test the dataset. Same for "dataset_infos.json" which should only be used to get the `dataset.info`
closed
https://github.com/huggingface/datasets/issues/2877
2021-09-07T14:09:04
2021-09-29T09:05:38
2021-09-29T09:05:38
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
990,001,079
2,876
Extend support for streaming datasets that use pathlib.Path.glob
This PR extends the support in streaming mode for datasets that use `pathlib`, by patching the method `pathlib.Path.glob`. Related to #2874, #2866. CC: @severo
closed
https://github.com/huggingface/datasets/pull/2876
2021-09-07T13:43:45
2021-09-10T09:50:49
2021-09-10T09:50:48
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
989,919,398
2,875
Add Congolese Swahili speech datasets
## Adding a Dataset - **Name:** Congolese Swahili speech corpora - **Data:** https://gamayun.translatorswb.org/data/ Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). Also related: https://mobile.twitter.com/OktemAlp/status/1435196393631764482
open
https://github.com/huggingface/datasets/issues/2875
2021-09-07T12:13:50
2021-09-07T12:13:50
null
{ "login": "osanseviero", "id": 7246357, "type": "User" }
[ { "name": "dataset request", "color": "e99695" }, { "name": "speech", "color": "d93f0b" } ]
false
[]
989,685,328
2,874
Support streaming datasets that use pathlib
This PR extends the support in streaming mode for datasets that use `pathlib.Path`. Related to: #2866. CC: @severo
closed
https://github.com/huggingface/datasets/pull/2874
2021-09-07T07:35:49
2021-09-07T18:25:22
2021-09-07T11:41:15
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
989,587,695
2,873
adding swedish_medical_ner
Adding the Swedish Medical NER dataset, listed in "Biomedical Datasets - BigScience Workshop 2021" Code refactored
closed
https://github.com/huggingface/datasets/pull/2873
2021-09-07T04:44:53
2021-09-17T20:47:37
2021-09-17T20:47:37
{ "login": "bwang482", "id": 6764450, "type": "User" }
[]
true
[]
989,453,069
2,872
adding swedish_medical_ner
Adding the Swedish Medical NER dataset, listed in "Biomedical Datasets - BigScience Workshop 2021"
closed
https://github.com/huggingface/datasets/pull/2872
2021-09-06T22:00:52
2021-09-07T04:36:32
2021-09-07T04:36:32
{ "login": "bwang482", "id": 6764450, "type": "User" }
[]
true
[]
989,436,088
2,871
datasets.config.PYARROW_VERSION has no attribute 'major'
In the test_dataset_common.py script, line 288-289 ``` if datasets.config.PYARROW_VERSION.major < 3: packaged_datasets = [pd for pd in packaged_datasets if pd["dataset_name"] != "parquet"] ``` which throws the error below. `datasets.config.PYARROW_VERSION` itself return the string '4.0.1'. I have tested this on both datasets.__version_=='1.11.0' and '1.9.0'. I am using Mac OS. ``` import datasets datasets.config.PYARROW_VERSION.major --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) /var/folders/1f/0wqmlgp90qjd5mpj53fnjq440000gn/T/ipykernel_73361/2547517336.py in <module> 1 import datasets ----> 2 datasets.config.PYARROW_VERSION.major AttributeError: 'str' object has no attribute 'major' ``` ## Environment info - `datasets` version: 1.11.0 - Platform: Darwin-20.6.0-x86_64-i386-64bit - Python version: 3.7.11 - PyArrow version: 4.0.1
closed
https://github.com/huggingface/datasets/issues/2871
2021-09-06T21:06:57
2021-09-08T08:51:52
2021-09-08T08:51:52
{ "login": "bwang482", "id": 6764450, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]