id
int64
599M
3.26B
number
int64
1
7.7k
title
stringlengths
1
290
body
stringlengths
0
228k
state
stringclasses
2 values
html_url
stringlengths
46
51
created_at
timestamp[s]date
2020-04-14 10:18:02
2025-07-23 08:04:53
updated_at
timestamp[s]date
2020-04-27 16:04:17
2025-07-23 18:53:44
closed_at
timestamp[s]date
2020-04-14 12:01:40
2025-07-23 16:44:42
user
dict
labels
listlengths
0
4
is_pull_request
bool
2 classes
comments
listlengths
0
0
1,123,402,426
3,678
Add code example in wikipedia card
Close #3292.
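For context on the kind of snippet such a card change typically adds, here is a hedged sketch of loading the dataset; the snapshot/language config name is illustrative only and the real ones are listed on the card itself.

```python
from datasets import load_dataset

# Config name is illustrative; check the dataset card for the available snapshots/languages.
wiki = load_dataset("wikipedia", "20200501.en", split="train")
print(wiki[0]["title"])
print(wiki[0]["text"][:200])
```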
closed
https://github.com/huggingface/datasets/pull/3678
2022-02-03T18:09:02
2022-02-21T09:14:56
2022-02-04T13:21:39
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,123,192,866
3,677
Discovery cannot be streamed anymore
## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python from datasets import load_dataset iterable_dataset = load_dataset("discovery", name="discovery", split="train", streaming=True) list(iterable_dataset.take(1)) ``` ## Expected results The first row of the train split. ## Actual results ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 365, in __iter__ for key, example in self._iter(): File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 362, in _iter yield from ex_iterable File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 272, in __iter__ yield from islice(self.ex_iterable, self.n) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 79, in __iter__ yield from self.generate_examples_fn(**self.kwargs) File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/discovery/542fab7a9ddc1d9726160355f7baa06a1ccc44c40bc8e12c09e9bc743aca43a2/discovery.py", line 333, in _generate_examples with open(data_file, encoding="utf8") as f: File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/streaming.py", line 64, in wrapper return function(*args, use_auth_token=use_auth_token, **kwargs) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/utils/streaming_download_manager.py", line 369, in xopen file_obj = fsspec.open(file, mode=mode, *args, **kwargs).open() File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/core.py", line 456, in open return open_files( File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/core.py", line 288, in open_files fs, fs_token, paths = get_fs_token_paths( File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/core.py", line 611, in get_fs_token_paths fs = filesystem(protocol, **inkwargs) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/registry.py", line 253, in filesystem return cls(**storage_options) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/spec.py", line 68, in __call__ obj = super().__call__(*args, **kwargs) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/implementations/zip.py", line 57, in __init__ self.zip = zipfile.ZipFile(self.fo) File "/home/slesage/.pyenv/versions/3.9.6/lib/python3.9/zipfile.py", line 1257, in __init__ self._RealGetContents() File "/home/slesage/.pyenv/versions/3.9.6/lib/python3.9/zipfile.py", line 1320, in _RealGetContents endrec = _EndRecData(fp) File "/home/slesage/.pyenv/versions/3.9.6/lib/python3.9/zipfile.py", line 263, in _EndRecData fpin.seek(0, 2) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/implementations/http.py", line 676, in seek raise ValueError("Cannot seek streaming HTTP file") ValueError: Cannot seek streaming HTTP file ``` ## Environment info - `datasets` version: 1.18.3 - Platform: Linux-5.11.0-1027-aws-x86_64-with-glibc2.31 - Python version: 3.9.6 - PyArrow version: 6.0.1
closed
https://github.com/huggingface/datasets/issues/3677
2022-02-03T15:02:03
2022-02-10T16:51:24
2022-02-10T16:51:24
{ "login": "severo", "id": 1676121, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,123,096,362
3,676
`None` replaced by `[]` after first batch in map
Sometimes `None` can be replaced by `[]` when running map: ```python from datasets import Dataset ds = Dataset.from_dict({"a": range(4)}) ds = ds.map(lambda x: {"b": [[None, [0]]]}, batched=True, batch_size=1, remove_columns=["a"]) print(ds.to_pandas()) # b # 0 [None, [0]] # 1 [[], [0]] # 2 [[], [0]] # 3 [[], [0]] ``` This issue has been experienced when running the `run_qa.py` example from `transformers` (see issue https://github.com/huggingface/transformers/issues/15401) This can be due to a bug in when casting `None` in nested lists. Casting only happens after the first batch, since the first batch is used to infer the feature types. cc @sgugger
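To make the inference step mentioned above visible, here is a small check (reusing the snippet above) that prints the feature type `map` infers from the first batch; every later batch is cast to that type, which is where the `None` values can silently become `[]`.

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": range(4)})
ds = ds.map(lambda x: {"b": [[None, [0]]]}, batched=True, batch_size=1, remove_columns=["a"])

# The type of "b" is inferred from the first batch only; later batches are cast to it.
print(ds.features["b"])
```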
closed
https://github.com/huggingface/datasets/issues/3676
2022-02-03T13:36:48
2022-10-28T13:13:20
2022-10-28T13:13:20
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
false
[]
1,123,078,408
3,675
Add CodeContests dataset
## Adding a Dataset - **Name:** CodeContests - **Description:** CodeContests is a competitive programming dataset for machine learning. - **Paper:** - **Data:** https://github.com/deepmind/code_contests - **Motivation:** This dataset was used when training [AlphaCode](https://deepmind.com/blog/article/Competitive-programming-with-AlphaCode). Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
closed
https://github.com/huggingface/datasets/issues/3675
2022-02-03T13:20:00
2022-07-20T11:07:05
2022-07-20T11:07:05
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
1,123,027,874
3,674
Add FrugalScore metric
This pull request adds the FrugalScore metric for NLG system evaluation. FrugalScore is a reference-based metric for NLG model evaluation. It is based on a distillation approach that learns a fixed, low-cost version of any expensive NLG metric while retaining most of its original performance. Paper: https://arxiv.org/abs/2110.08559?context=cs Github: https://github.com/moussaKam/FrugalScore @lhoestq
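A hedged usage sketch for the metric added here, assuming it is exposed under the name `frugalscore` through the standard `load_metric` interface (the exact name and output fields are assumptions, not taken from this PR):

```python
from datasets import load_metric

# Assumed metric name; predictions/references follow the usual metric interface.
frugalscore = load_metric("frugalscore")
results = frugalscore.compute(
    predictions=["hello there general kenobi"],
    references=["hello there general kenobi"],
)
print(results)
```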
closed
https://github.com/huggingface/datasets/pull/3674
2022-02-03T12:28:52
2022-02-21T15:58:44
2022-02-21T15:58:44
{ "login": "moussaKam", "id": 28675016, "type": "User" }
[]
true
[]
1,123,010,520
3,673
`load_dataset("snli")` is different from dataset viewer
## Describe the bug The dataset that is downloaded from the Hub via `load_dataset("snli")` is different from what is available in the dataset viewer. In the viewer the labels are not encoded (i.e., "neutral", "entailment", "contradiction"), while the downloaded dataset shows the encoded labels (i.e., 0, 1, 2). Is this expected? ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: Ubuntu 20.4 - Python version: 3.7
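The encoded integers are what a `ClassLabel` column stores on disk, while the viewer renders the corresponding string names kept in the feature metadata. A small sketch of decoding them back:

```python
from datasets import load_dataset

snli = load_dataset("snli", split="validation")
label_feature = snli.features["label"]     # ClassLabel holding the string names

example = snli[0]
print(example["label"])                    # the integer stored on disk
if example["label"] != -1:                 # -1 marks examples without a gold label
    print(label_feature.int2str(example["label"]))
```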
closed
https://github.com/huggingface/datasets/issues/3673
2022-02-03T12:10:43
2022-02-16T11:22:31
2022-02-11T17:01:21
{ "login": "pietrolesci", "id": 61748653, "type": "User" }
[ { "name": "bug", "color": "d73a4a" }, { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,122,980,556
3,672
Prioritize `module.builder_kwargs` over defaults in `TestCommand`
This fixes a bug in the `TestCommand` where multiple kwargs for `name` were passed if it was set in both default and `module.builder_kwargs`. Example error: ```Python Traceback (most recent call last): File "create_metadata.py", line 96, in <module> main(**vars(args)) File "create_metadata.py", line 86, in main metadata_command.run() File "/opt/conda/lib/python3.7/site-packages/datasets/commands/test.py", line 144, in run for j, builder in enumerate(get_builders()): File "/opt/conda/lib/python3.7/site-packages/datasets/commands/test.py", line 141, in get_builders name=name, cache_dir=self._cache_dir, data_dir=self._data_dir, **module.builder_kwargs TypeError: type object got multiple values for keyword argument 'name' ``` Let me know what you think.
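A standalone sketch of the idea behind the fix (the names are illustrative, not the actual `TestCommand` code): build the default kwargs first and let `module.builder_kwargs` override them, so `name` is only ever passed once.

```python
def merge_builder_kwargs(defaults: dict, builder_kwargs: dict) -> dict:
    merged = dict(defaults)
    merged.update(builder_kwargs)   # module-provided kwargs take priority over defaults
    return merged

defaults = {"name": "default_config", "cache_dir": "/tmp/cache", "data_dir": None}
module_builder_kwargs = {"name": "my_config"}
print(merge_builder_kwargs(defaults, module_builder_kwargs))
# {'name': 'my_config', 'cache_dir': '/tmp/cache', 'data_dir': None}
```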
closed
https://github.com/huggingface/datasets/pull/3672
2022-02-03T11:38:42
2022-02-04T12:37:20
2022-02-04T12:37:19
{ "login": "lvwerra", "id": 8264887, "type": "User" }
[]
true
[]
1,122,864,253
3,671
Give an estimate of the dataset size in DatasetInfo
**Is your feature request related to a problem? Please describe.** Currently, only some of the datasets provide `dataset_size`, `download_size`, `size_in_bytes` (and `num_bytes` and `num_examples` inside `splits`). I would like to get this information, or an estimate, for all the datasets. **Describe the solution you'd like** - get access to the git information for the dataset files hosted on the hub - look at the [`Content-Length`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Length) header for the files served by HTTP
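A minimal sketch of the second suggestion, using a `HEAD` request to read `Content-Length` for a hosted file (the URL is a placeholder):

```python
import requests

# Placeholder URL; a real implementation would iterate over the repo's data files.
url = "https://huggingface.co/datasets/some_namespace/some_dataset/resolve/main/train.parquet"
response = requests.head(url, allow_redirects=True)
size_bytes = int(response.headers.get("Content-Length", 0))
print(f"estimated size: {size_bytes / 1024 ** 2:.1f} MiB")
```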
open
https://github.com/huggingface/datasets/issues/3671
2022-02-03T09:47:10
2022-02-03T09:47:10
null
{ "login": "severo", "id": 1676121, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,122,439,827
3,670
feat: 🎸 generate info if dataset_infos.json does not exist
in get_dataset_infos(). Also: add the `use_auth_token` parameter, and create get_dataset_config_info() ✅ Closes: #3013
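A short usage sketch of the public function this PR touches (the config and split names below are just the ones for `squad`):

```python
from datasets import get_dataset_infos

# With this change, infos are generated on the fly when the repo has no dataset_infos.json;
# use_auth_token is only needed for private or gated datasets.
infos = get_dataset_infos("squad")
print(list(infos))                 # config names, e.g. ['plain_text']
print(infos["plain_text"].splits)  # per-split num_examples / num_bytes
```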
closed
https://github.com/huggingface/datasets/pull/3670
2022-02-02T22:11:56
2022-02-21T15:57:11
2022-02-21T15:57:10
{ "login": "severo", "id": 1676121, "type": "User" }
[]
true
[]
1,122,335,622
3,669
Common voice validated partition
This patch adds access to the 'validated' partitions of CommonVoice datasets (provided by the dataset creators but not available in the HuggingFace interface yet). As 'validated' contains significantly more data than 'train' (although it contains both test and validation, so one needs to be careful there), it can be useful to train better models where no strict comparison with the previous work is intended.
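A hedged sketch of how the new partition would be requested once this is merged; the language config `"ab"` is only an example:

```python
from datasets import load_dataset

# 'validated' overlaps with train/dev/test, so be careful not to leak evaluation data.
validated = load_dataset("common_voice", "ab", split="validated")
print(validated)
```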
closed
https://github.com/huggingface/datasets/pull/3669
2022-02-02T20:04:43
2022-02-08T17:26:52
2022-02-08T17:23:12
{ "login": "shalymin-amzn", "id": 98762373, "type": "User" }
[]
true
[]
1,122,261,736
3,668
Couldn't cast array of type string error with cast_column
## Describe the bug In OVH cloud during Huggingface Robust-speech-recognition event on a AI training notebook instance using jupyter lab and running jupyter notebook When using the dataset.cast_column("audio",Audio(sampling_rate=16_000)) method I get error ![image](https://user-images.githubusercontent.com/25264037/152214027-9c42a71a-dd24-463c-a346-57e0287e5a8f.png) This was working with datasets version 1.17.1.dev0 but now with version 1.18.3 produces the error above. ## Steps to reproduce the bug load dataset: ![image](https://user-images.githubusercontent.com/25264037/152216145-159553b6-cddc-4f0b-8607-7e76b600e22a.png) remove columns: ![image](https://user-images.githubusercontent.com/25264037/152214707-7c7e89d1-87d8-4b4f-8cfc-5d7223d35644.png) run my fix_path function. This also creates the audio column that is referring to the absolute file path of the audio ![image](https://user-images.githubusercontent.com/25264037/152214773-51f71ccf-d31b-4449-b63a-1af56436e49f.png) Then I concatenate few other datasets and finally try the cast_column method ![image](https://user-images.githubusercontent.com/25264037/152215032-f341ec86-9d6d-48c9-943b-e2efe37a4d98.png) but get error: ![image](https://user-images.githubusercontent.com/25264037/152215073-b85bd057-98e8-413c-9b05-51e9805f2c24.png) ## Expected results A clear and concise description of the expected results. ## Actual results Specify the actual results or traceback. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: OVH Cloud, AI Training section, container for Huggingface Robust Speech Recognition event image(baaastijn/ovh_huggingface) ![image](https://user-images.githubusercontent.com/25264037/152215161-b4ff7bfb-2736-4afb-9223-761a3338d23c.png) - Python version: 3.8.8 - PyArrow version: ![image](https://user-images.githubusercontent.com/25264037/152215936-4d365760-557e-456b-b5eb-ad1d15cf5073.png)
closed
https://github.com/huggingface/datasets/issues/3668
2022-02-02T18:33:29
2022-07-19T13:36:24
2022-07-19T13:36:24
{ "login": "R4ZZ3", "id": 25264037, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,122,060,630
3,667
Process .opus files with torchaudio
@anton-l suggested processing .opus files with `torchaudio` instead of `soundfile` as it's faster: ![opus](https://user-images.githubusercontent.com/16348744/152177816-2df6076c-f28b-4aef-a08d-b499b921414d.png) (moreover, I didn't manage to load .opus files with `soundfile` / `librosa` locally on any of my machines anyway for some reason, even with `ffmpeg` installed). For now my current changes work with a locally stored file: ```python # download sample opus file (from MultilingualSpokenWords dataset) !wget https://huggingface.co/datasets/polinaeterna/test_opus/resolve/main/common_voice_tt_17737010.opus from datasets import Dataset, Audio audio_path = "common_voice_tt_17737010.opus" dataset = Dataset.from_dict({"audio": [audio_path]}).cast_column("audio", Audio(48000)) dataset[0] # {'audio': {'path': 'common_voice_tt_17737010.opus', # 'array': array([ 0.0000000e+00, 0.0000000e+00, 3.0517578e-05, ..., # -6.1035156e-05, 6.1035156e-05, 0.0000000e+00], dtype=float32), # 'sampling_rate': 48000}} ``` But it doesn't work when loading inside a dataset from bytes (I checked on [MultilingualSpokenWords](https://github.com/huggingface/datasets/pull/3666); the PR is a draft now, maybe the bug is somewhere there) ```python import torchaudio with open(audio_path, "rb") as b: print(torchaudio.load(b)) # RuntimeError: Error loading audio file: failed to open file <in memory buffer> ```
closed
https://github.com/huggingface/datasets/pull/3667
2022-02-02T15:23:14
2022-02-04T15:29:38
2022-02-04T15:29:38
{ "login": "polinaeterna", "id": 16348744, "type": "User" }
[]
true
[]
1,122,058,894
3,666
process .opus files (for Multilingual Spoken Words)
Opus files require `libsndfile>=1.0.30`. Add a check for this version and tests. **outdated:** Add [Multilingual Spoken Words dataset](https://mlcommons.org/en/multilingual-spoken-words/) You can specify multiple languages for downloading 😌: ```python ds = load_dataset("datasets/ml_spoken_words", languages=["ar", "tt"]) ``` 1. I didn't take into account that each time you pass a set of languages the data for a specific language is downloaded even if it was downloaded before (since these are custom configs like `ar+tt` and `ar+tt+br`). Maybe that wasn't a good idea? 2. The script will have to be slightly changed after merge of https://github.com/huggingface/datasets/pull/3664 3. Just can't figure out what's wrong with the dummy files... 😞 Maybe we should get rid of them at some point 😁
closed
https://github.com/huggingface/datasets/pull/3666
2022-02-02T15:21:48
2022-02-22T10:04:03
2022-02-22T10:03:53
{ "login": "polinaeterna", "id": 16348744, "type": "User" }
[]
true
[]
1,121,753,385
3,665
Fix MP3 resampling when a dataset's audio files have different sampling rates
The resampler needs to be updated if the `orig_freq` doesn't match the audio file sampling rate Fix https://github.com/huggingface/datasets/issues/3662
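A standalone sketch of the idea (not the actual `Audio` feature code): rebuild the `torchaudio` resampler whenever the source sampling rate changes instead of reusing the first one.

```python
import torchaudio

TARGET_SR = 48_000
_resampler = None
_resampler_orig_sr = None

def resample(waveform, orig_sr):
    """Resample to TARGET_SR, recreating the transform whenever orig_sr changes."""
    global _resampler, _resampler_orig_sr
    if _resampler is None or orig_sr != _resampler_orig_sr:
        _resampler = torchaudio.transforms.Resample(orig_sr, TARGET_SR)
        _resampler_orig_sr = orig_sr
    return _resampler(waveform)
```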
closed
https://github.com/huggingface/datasets/pull/3665
2022-02-02T10:31:45
2022-02-02T10:52:26
2022-02-02T10:52:26
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,121,233,301
3,664
[WIP] Return local paths to Common Voice
Fixes https://github.com/huggingface/datasets/issues/3663 This is a proposed way of returning the old local file-based generator while keeping the new streaming generator intact. TODO: - [ ] brainstorm a bit more on https://github.com/huggingface/datasets/issues/3663 to see if we can do better - [ ] refactor the heck out of this PR to avoid completely copying the logic between the two generators
closed
https://github.com/huggingface/datasets/pull/3664
2022-02-01T21:48:27
2022-02-22T09:14:06
2022-02-22T09:14:06
{ "login": "anton-l", "id": 26864830, "type": "User" }
[]
true
[]
1,121,067,647
3,663
[Audio] Path of Common Voice cannot be used for audio loading anymore
## Describe the bug ## Steps to reproduce the bug ```python from datasets import load_dataset from torchaudio import load ds = load_dataset("common_voice", "ab", split="train") # both of the following commands fail at the moment load(ds[0]["audio"]["path"]) load(ds[0]["path"]) ``` ## Expected results The path should be the complete absolute path to the downloaded audio file not some relative path. ## Actual results ```bash ~/hugging_face/venv_3.9/lib/python3.9/site-packages/torchaudio/backend/sox_io_backend.py in load(filepath, frame_offset, num_frames, normalize, channels_first, format) 150 filepath, frame_offset, num_frames, normalize, channels_first, format) 151 filepath = os.fspath(filepath) --> 152 return torch.ops.torchaudio.sox_io_load_audio_file( 153 filepath, frame_offset, num_frames, normalize, channels_first, format) 154 RuntimeError: Error loading audio file: failed to open file cv-corpus-6.1-2020-12-11/ab/clips/common_voice_ab_19904194.mp3 ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3.dev0 - Platform: Linux-5.4.0-96-generic-x86_64-with-glibc2.27 - Python version: 3.9.1 - PyArrow version: 3.0.0
closed
https://github.com/huggingface/datasets/issues/3663
2022-02-01T18:40:10
2022-09-21T15:03:09
2022-09-21T14:56:22
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,121,024,403
3,662
[Audio] MP3 resampling is incorrect when dataset's audio files have different sampling rates
The Audio feature resampler for MP3 gets stuck with the first original frequencies it meets, which leads to subsequent decoding to be incorrect. Here is a code to reproduce the issue: Let's first consider two audio files with different sampling rates 32000 and 16000: ```python # first download a mp3 file with sampling_rate=32000 !wget https://file-examples-com.github.io/uploads/2017/11/file_example_MP3_700KB.mp3 import torchaudio audio_path = "file_example_MP3_700KB.mp3" audio_path2 = audio_path.replace(".mp3", "_resampled.mp3") resample = torchaudio.transforms.Resample(32000, 16000) # create a new file with sampling_rate=16000 torchaudio.save(audio_path2, resample(torchaudio.load(audio_path)[0]), 16000) ``` Then we can see an issue here when decoding: ```python from datasets import Dataset, Audio dataset = Dataset.from_dict({"audio": [audio_path, audio_path2]}).cast_column("audio", Audio(48000)) dataset[0] # decode the first audio file sets the resampler orig_freq to 32000 print(dataset .features["audio"]._resampler.orig_freq) # 32000 print(dataset[0]["audio"]["array"].shape) # here decoding is fine # (1308096,) dataset = Dataset.from_dict({"audio": [audio_path, audio_path2]}).cast_column("audio", Audio(48000)) dataset[1] # decode the second audio file sets the resampler orig_freq to 16000 print(dataset .features["audio"]._resampler.orig_freq) # 16000 print(dataset[0]["audio"]["array"].shape) # here decoding uses orig_freq=16000 instead of 32000 # (2616192,) ``` The value of `orig_freq` doesn't change no matter what file needs to be decoded cc @patrickvonplaten @anton-l @cahya-wirawan @albertvillanova The issue seems to be here in `Audio.decode_mp3`: https://github.com/huggingface/datasets/blob/4c417d52def6e20359ca16c6723e0a2855e5c3fd/src/datasets/features/audio.py#L176-L180
closed
https://github.com/huggingface/datasets/issues/3662
2022-02-01T17:55:04
2022-02-02T10:52:25
2022-02-02T10:52:25
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
false
[]
1,121,000,251
3,661
Remove unnecessary 'r' arg in
Originally from #3489
closed
https://github.com/huggingface/datasets/pull/3661
2022-02-01T17:29:27
2022-02-07T16:57:27
2022-02-07T16:02:42
{ "login": "bryant1410", "id": 3905501, "type": "User" }
[]
true
[]
1,120,982,671
3,660
Change HTTP links to HTTPS
I tested the links. I also fixed some typos. Originally from #3489
open
https://github.com/huggingface/datasets/pull/3660
2022-02-01T17:12:51
2022-09-21T15:16:32
null
{ "login": "bryant1410", "id": 3905501, "type": "User" }
[]
true
[]
1,120,913,672
3,659
push_to_hub but preview not working
## Dataset viewer issue for '*happifyhealth/twitter_pnn*' **Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/happifyhealth/twitter_pnn)* I used ``` dataset.push_to_hub("happifyhealth/twitter_pnn") ``` but the preview is not working. Am I the one who added this dataset ? Yes
closed
https://github.com/huggingface/datasets/issues/3659
2022-02-01T16:23:57
2022-02-09T08:00:37
2022-02-09T08:00:37
{ "login": "thomas-happify", "id": 66082334, "type": "User" }
[ { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,120,880,395
3,658
Dataset viewer issue for *P3*
## Dataset viewer issue for '*P3*' **Link: https://huggingface.co/datasets/bigscience/P3** ``` Status code: 400 Exception: SplitsNotFoundError Message: The split names could not be parsed from the dataset config. ``` Am I the one who added this dataset ? No
closed
https://github.com/huggingface/datasets/issues/3658
2022-02-01T15:57:56
2023-09-25T12:16:21
2023-09-25T12:16:21
{ "login": "jeffistyping", "id": 22351555, "type": "User" }
[]
false
[]
1,120,602,620
3,657
Extend dataset builder for streaming in `get_dataset_split_names`
Currently, `get_dataset_split_names` doesn't extend a builder module to support streaming, even though it uses `StreamingDownloadManager` to download data. This PR fixes that. To test the change, run the following: ```bash pip install git+https://github.com/huggingface/datasets.git@fix-get_dataset_split_names-streaming python -c "from datasets import get_dataset_split_names; print(get_dataset_split_names('facebook/multilingual_librispeech', 'german', download_mode='force_redownload', revision='137923f945552c6afdd8b60e4a7b43e3088972c1'))" ```
closed
https://github.com/huggingface/datasets/pull/3657
2022-02-01T12:21:24
2022-02-03T22:49:06
2022-02-02T11:22:01
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,120,510,823
3,656
checksum error subjqa dataset
## Describe the bug I get a checksum error when loading the `subjqa` dataset (used in the transformers book). ## Steps to reproduce the bug ```python from datasets import load_dataset subjqa = load_dataset("subjqa","electronics") ``` ## Expected results Loading the dataset ## Actual results ``` --------------------------------------------------------------------------- NonMatchingChecksumError Traceback (most recent call last) <ipython-input-2-d2857d460155> in <module>() 2 from datasets import load_dataset 3 ----> 4 subjqa = load_dataset("subjqa","electronics") 3 frames /usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name) 38 if len(bad_urls) > 0: 39 error_msg = "Checksums didn't match" + for_verification_name + ":\n" ---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls)) 41 logger.info("All the checksums matched successfully" + for_verification_name) 42 NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://github.com/lewtun/SubjQA/archive/refs/heads/master.zip'] ``` ## Environment info Google colab - `datasets` version: 1.18.2 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyArrow version: 3.0.0
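A hedged workaround that was common for stale checksums on this `datasets` version (it skips verification rather than fixing the underlying metadata):

```python
from datasets import load_dataset

# ignore_verifications was the flag name in datasets 1.18.x
subjqa = load_dataset("subjqa", "electronics", ignore_verifications=True)
```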
closed
https://github.com/huggingface/datasets/issues/3656
2022-02-01T10:53:33
2022-02-10T10:56:59
2022-02-10T10:56:38
{ "login": "RensDimmendaal", "id": 9828683, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,119,801,077
3,655
Pubmed dataset not reachable
## Describe the bug Trying to use the `pubmed` dataset fails to reach / download the source files. ## Steps to reproduce the bug ```python pubmed_train = datasets.load_dataset('pubmed', split='train') ``` ## Expected results Should begin downloading the pubmed dataset. ## Actual results ``` ConnectionError: Couldn't reach ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/pubmed21n0865.xml.gz (InvalidSchema("No connection adapters were found for 'ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/pubmed21n0865.xml.gz'")) ``` ## Environment info - `datasets` version: 1.18.2 - Platform: macOS-11.4-x86_64-i386-64bit - Python version: 3.8.2 - PyArrow version: 6.0.0
closed
https://github.com/huggingface/datasets/issues/3655
2022-01-31T18:45:47
2022-12-19T19:18:10
2022-02-14T14:15:41
{ "login": "abhi-mosaic", "id": 77638579, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,119,717,475
3,654
Better TQDM output
This PR does the following: * if `dataset_infos.json` exists for a dataset, uses `num_examples` to print the total number of examples that needs to be generated (in `builder.py`) * fixes `tqdm` + multiprocessing in Jupyter Notebook/Colab (the issue stems from this commit in the `tqdm` repo: https://github.com/tqdm/tqdm/commit/f7722edecc3010cb35cc1c923ac4850a76336f82) * adds the missing `drop_last_batch` and `with_ranks` params to `DatasetDict.map` * correctly computes the number of iterations in `map` and the CSV/JSON loader when `batched=True` to fix `tqdm` progress bars * removes the `bool(logging.get_verbosity() == logging.NOTSET)` (or simplifies `bool(logging.get_verbosity() == logging.NOTSET) or not utils.is_progress_bar_enabled()` to `not utils.is_progress_bar_enabled()`) condition and uses `utils.is_progress_bar_enabled` to check if `tqdm` output is enabled (this comment from @stas00 explains why the `bool(logging.get_verbosity() == logging.NOTSET)` check is problematic: https://github.com/huggingface/transformers/issues/14889#issue-1087318463) Fix #2630
closed
https://github.com/huggingface/datasets/pull/3654
2022-01-31T17:22:43
2022-02-03T15:55:34
2022-02-03T15:55:33
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,119,186,952
3,653
`to_json` in multiprocessing fashion sometimes deadlock
## Describe the bug `to_json` in multiprocessing fashion sometimes deadlocks instead of raising exceptions. A temporary workaround is to notice the deadlock and then reduce the number of processes or the batch size in order to reduce the memory footprint. As @lhoestq pointed out, this might be related to https://bugs.python.org/issue22393#msg315684 where `multiprocessing` fails to raise the OOM exception. One suggested alternative is to use `concurrent.futures` instead. ## Steps to reproduce the bug ## Expected results The script fails when one worker hits OOM and raises an appropriate error. ## Actual results Deadlock ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.8.1 - Platform: Linux - Python version: 3.8 - PyArrow version: 6.0.1
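A minimal sketch of the temporary mitigation described above: shrink the per-process memory footprint by lowering `num_proc` and `batch_size` (the values here are arbitrary).

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["some text"] * 10_000})
# Fewer processes and smaller batches reduce the chance of a worker hitting OOM and deadlocking.
ds.to_json("output.jsonl", num_proc=2, batch_size=1_000)
```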
open
https://github.com/huggingface/datasets/issues/3653
2022-01-31T09:35:07
2022-01-31T09:35:07
null
{ "login": "thomasw21", "id": 24695242, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,118,808,738
3,652
sp. Columbia => Colombia
"Columbia" is various places in North America. The country is "Colombia".
closed
https://github.com/huggingface/datasets/pull/3652
2022-01-31T00:41:03
2022-02-09T16:55:25
2022-01-31T08:29:07
{ "login": "serapio", "id": 3781280, "type": "User" }
[]
true
[]
1,118,597,647
3,651
Update link in wiki_bio dataset
Fixes #3580 and makes the wiki_bio dataset work again. I changed the link and some documentation, and all the tests pass. Thanks @lhoestq for uploading the dataset to the HuggingFace data bucket. @lhoestq -- all the tests pass, but I'm still not able to import the dataset, as the old Google Drive link is cached somewhere: ```python >>> from datasets import load_dataset >>> load_dataset("wiki_bio") Using custom data configuration default Downloading and preparing dataset wiki_bio/default (download: 318.53 MiB, generated: 736.94 MiB, post-processed: Unknown size, total: 1.03 GiB) to /home/jxm3/.cache/huggingface/datasets/wiki_bio/default/1.1.0/5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9... Traceback (most recent call last): ... File "/home/jxm3/random/datasets/src/datasets/utils/file_utils.py", line 612, in get_from_cache raise FileNotFoundError(f"Couldn't find file at {url}") FileNotFoundError: Couldn't find file at https://drive.google.com/uc?export=download&id=1L7aoUXzHPzyzQ0ns4ApBbYepsjFOtXil ``` What do I have to do to invalidate the cache and actually import the dataset? It's clearly set up correctly, since the data is downloaded and processed by the tests. As an aside, this caching-loading-scripts behavior makes for a really bad developer experience. I just wasted an hour trying to figure out where the caching was happening and how to disable it, and I don't know. All I wanted to do was update the link and submit a pull request! I recommend that you all either change this behavior (i.e. updating the link to a dataset should "just work") or document it, since I couldn't find any information about this in the contributing.md or readme or anywhere else! Thanks!
closed
https://github.com/huggingface/datasets/pull/3651
2022-01-30T16:28:54
2022-01-31T14:50:48
2022-01-31T08:38:09
{ "login": "jxmorris12", "id": 13238952, "type": "User" }
[]
true
[]
1,118,537,429
3,650
Allow 'to_json' to run in unordered fashion in order to lower memory footprint
I'm using `to_json(..., num_proc=num_proc, compression='gzip')` with `num_proc>1`. I'm having an issue where things seem to deadlock at some point. Eventually I see OOM. I'm guessing it's an issue where one process starts to take a long time for a specific batch, and so the other processes keep accumulating their results in memory. In order to flush memory, I propose we optionally use `imap_unordered`. This will prevent one process from blocking the other ones. The reasoning is that indices are rarely relevant, and if one wants to keep an index, one can still create another column and reconstruct it from there.
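A standalone illustration (plain `multiprocessing`, unrelated to the `datasets` internals) of why `imap_unordered` helps: results are yielded as soon as any worker finishes, so one slow batch does not force the others' results to pile up in memory.

```python
import random
import time
from multiprocessing import Pool

def process_batch(i):
    time.sleep(random.random())   # simulate one batch taking longer than another
    return i

if __name__ == "__main__":
    with Pool(4) as pool:
        for result in pool.imap_unordered(process_batch, range(8)):
            print(result)          # arrival order is not guaranteed
```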
closed
https://github.com/huggingface/datasets/pull/3650
2022-01-30T13:23:19
2023-09-25T06:28:51
2023-09-24T16:45:48
{ "login": "thomasw21", "id": 24695242, "type": "User" }
[]
true
[]
1,117,502,250
3,649
Add IGLUE dataset
## Adding a Dataset - **Name:** IGLUE - **Description:** IGLUE brings together 4 vision-and-language tasks across 20 languages (Twitter [thread](https://twitter.com/ebugliarello/status/1487045497583976455?s=20&t=SB4LZGDhhkUW83ugcX_m5w)) - **Paper:** https://arxiv.org/abs/2201.11732 - **Data:** https://github.com/e-bug/iglue - **Motivation:** This dataset would provide a nice example of combining the text and image features of `datasets` together for multimodal applications. Note: the data / code are not yet visible on the GitHub repo, so I've pinged the authors for more information. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
open
https://github.com/huggingface/datasets/issues/3649
2022-01-28T14:59:41
2022-01-28T15:02:35
null
{ "login": "lewtun", "id": 26859204, "type": "User" }
[ { "name": "dataset request", "color": "e99695" }, { "name": "multimodal", "color": "19E633" } ]
false
[]
1,117,465,505
3,648
Fix Windows CI: bump python to 3.7
Python>=3.7 is needed to install `tokenizers` 0.11
closed
https://github.com/huggingface/datasets/pull/3648
2022-01-28T14:24:54
2022-01-28T14:40:39
2022-01-28T14:40:39
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,117,383,675
3,647
Fix `add_column` on datasets with indices mapping
My initial idea was to avoid the `flatten_indices` call and reorder a new column instead, but in the end I decided to follow `concatenate_datasets` and use `flatten_indices` to avoid padding when `dataset._indices.num_rows != dataset._data.num_rows`. Fix #3599
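A small sketch of the scenario this fixes: adding a column to a dataset that has an indices mapping (here created by `select`).

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": list(range(10))})
ds = ds.select([8, 6, 4])                  # creates an indices mapping
ds = ds.add_column("b", ["x", "y", "z"])   # with this fix, the new column lines up with the selected rows
print(ds.to_pandas())
```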
closed
https://github.com/huggingface/datasets/pull/3647
2022-01-28T13:06:29
2022-01-28T15:35:58
2022-01-28T15:35:58
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,116,544,627
3,646
Fix streaming datasets that are not reset correctly
Streaming datasets that use `StreamingDownloadManager.iter_archive` and `StreamingDownloadManager.iter_files` had some issues. Indeed, if you try to iterate over such a dataset twice, the second time it will be empty. This is because the two methods above are generator functions. I fixed this by making them return iterables that are reset properly instead. Close https://github.com/huggingface/datasets/issues/3645 cc @anton-l
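A standalone illustration of the difference this PR relies on: a generator object is exhausted after one pass, while an object whose `__iter__` builds a fresh generator can be iterated again.

```python
def gen():
    yield from range(3)

g = gen()
print(list(g))   # [0, 1, 2]
print(list(g))   # []  -- exhausted, like the old iter_archive/iter_files behaviour

class Resettable:
    def __iter__(self):
        yield from range(3)

r = Resettable()
print(list(r))   # [0, 1, 2]
print(list(r))   # [0, 1, 2]  -- a fresh iterator each time, like the fixed behaviour
```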
closed
https://github.com/huggingface/datasets/pull/3646
2022-01-27T17:21:02
2022-01-28T16:34:29
2022-01-28T16:34:28
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,116,541,298
3,645
Streaming dataset based on dl_manager.iter_archive/iter_files are not reset correctly
Hi ! When iterating over a streaming dataset once, it's not reset correctly because of some issues with `dl_manager.iter_archive` and `dl_manager.iter_files`. Indeed they are generator functions (so the iterator that is returned can be exhausted). They should be iterables instead, and be reset if we do a for loop again: ```python from datasets import load_dataset d = load_dataset("common_voice", "ab", split="test", streaming=True) i = 0 for i, _ in enumerate(d): pass print(i) # 8 # let's do it again i = 0 for i, _ in enumerate(d): pass print(i) # 0 ```
closed
https://github.com/huggingface/datasets/issues/3645
2022-01-27T17:17:41
2022-01-28T16:34:28
2022-01-28T16:34:28
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
false
[]
1,116,519,670
3,644
Add a GROUP BY operator
**Is your feature request related to a problem? Please describe.** Using batch mapping, we can easily split examples. However, we lack an appropriate option for merging them back together by some key. Consider this example: ```python # features: # { # "example_id": datasets.Value("int32"), # "text": datasets.Value("string") # } ds = datasets.Dataset() def split(examples): sentences = [text.split(".") for text in examples["text"]] return { "example_id": [ example_id for example_id, sents in zip(examples["example_id"], sentences) for _ in sents ], "sentence": [sent for sents in sentences for sent in sents], "sentence_id": [i for sents in sentences for i in range(len(sents))], } split_ds = ds.map(split, batched=True) def process(examples): outputs = some_neural_network_that_works_on_sentences(examples["sentence"]) return {"outputs": outputs} split_ds = split_ds.map(process, batched=True) ``` I have a dataset consisting of texts that I would like to process sentence by sentence in a batched way. Afterwards, I would like to put it back together as it was, merging the outputs together. **Describe the solution you'd like** Ideally, it would look something like this: ```python def join(examples): order = np.argsort(examples["sentence_id"]) text = ".".join(examples["text"][i] for i in order) outputs = [examples["outputs"][i] for i in order] return {"text": text, "outputs": outputs} ds = split_ds.group_by("example_id", join) ``` **Describe alternatives you've considered** Right now, we can do this: ```python def merge(example): meeting_id = example["example_id"] parts = split_ds.filter(lambda x: x["example_id"] == meeting_id).sort("segment_no") return {"outputs": list(parts["outputs"])} ds = ds.map(merge) ``` Of course, we could process the dataset like this: ```python def process(example): outputs = some_neural_network_that_works_on_sentences(example["text"].split(".")) return {"outputs": outputs} ds = ds.map(process, batched=True) ``` However, that does not allow using an arbitrary batch size and may lead to very inefficient use of resources if the batch size is much larger than the number of sentences in one example. I would very much appreciate some kind of group by operator to merge examples based on the value of one column.
open
https://github.com/huggingface/datasets/issues/3644
2022-01-27T16:57:54
2025-01-28T11:39:48
null
{ "login": "felix-schneider", "id": 208336, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,116,417,428
3,643
Fix sem_eval_2018_task_1 download location
As discussed with @lhoestq in https://github.com/huggingface/datasets/issues/3549#issuecomment-1020176931_ this is the new pull request to fix the download location.
closed
https://github.com/huggingface/datasets/pull/3643
2022-01-27T15:45:00
2022-02-04T15:15:26
2022-02-04T15:15:26
{ "login": "maxpel", "id": 31095360, "type": "User" }
[]
true
[]
1,116,306,986
3,642
Fix dataset slicing with negative bounds when indices mapping is not `None`
Fix #3611
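A short sketch of the scenario from #3611 that this fixes: slicing with negative bounds on a dataset that has an indices mapping.

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": list(range(10))})
ds = ds.select(range(5))   # introduces an indices mapping
# With the fix, negative bounds resolve against the mapped rows: {'a': [3, 4]}
print(ds[-2:])
```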
closed
https://github.com/huggingface/datasets/pull/3642
2022-01-27T14:45:53
2022-01-27T18:16:23
2022-01-27T18:16:22
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,116,284,268
3,641
Fix numpy rngs when seed is None
Fixes the NumPy RNG when `seed` is `None`. The problem becomes obvious after reading the NumPy notes on RNG (returned by `np.random.get_state()`): > The MT19937 state vector consists of a 624-element array of 32-bit unsigned integers plus a single integer value between 0 and 624 that indexes the current position within the main array. `The MT19937 state vector`: the seed which we currently index, but this value stays the same for multiple rounds. `plus a single integer value`: the `pos` value in this PR (is 624 if `seed` is set to a fixed value with `np.random.seed`, so we take the first value in the `seed` array returned by `np.random.get_state()`: https://stackoverflow.com/questions/32172054/how-can-i-retrieve-the-current-seed-of-numpys-random-number-generator) NumPy notes: https://numpy.org/doc/stable/reference/random/bit_generators/mt19937.html Fix #3634
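To make the state layout described above concrete, a small standalone check of what `np.random.get_state()` returns:

```python
import numpy as np

np.random.seed(42)
name, keys, pos, has_gauss, cached_gaussian = np.random.get_state()
print(name, len(keys), pos)   # 'MT19937', a 624-element key array, and pos == 624 right after seeding

np.random.random()            # drawing numbers advances pos while the key array stays fixed
print(np.random.get_state()[2])
```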
closed
https://github.com/huggingface/datasets/pull/3641
2022-01-27T14:29:09
2022-01-27T18:16:08
2022-01-27T18:16:07
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,116,133,769
3,640
Issues with custom dataset in Wav2Vec2
We are training Wav2Vec2 using the run_speech_recognition_ctc_bnb.py script. This works fine with Common Voice; however, using our custom dataset and data loader at [NbAiLab/NPSC]( https://huggingface.co/datasets/NbAiLab/NPSC) it crashes after roughly 1 epoch with the following stack trace: ![image](https://user-images.githubusercontent.com/9079808/151355893-6d5887cc-ca19-4b12-948a-124eb6dac372.png) We are able to work around the issue, for instance by adding this check at line #222 in transformers/models/wav2vec2/modeling_wav2vec2.py: ```python if input_length - (mask_length - 1) < num_masked_span: num_masked_span = input_length - (mask_length - 1) ``` Interestingly, these are the variable values before the adjustment: ``` input_length=10 mask_length=10 num_masked_span=2 ``` After adjusting num_masked_span to 1, the training script runs. The issue is also fixed by setting “replace=True” in the same function. Do you have any idea what is causing this, and how to fix this error permanently? If you do not think this is a Datasets issue, feel free to move the issue.
closed
https://github.com/huggingface/datasets/issues/3640
2022-01-27T12:09:05
2022-01-27T12:29:48
2022-01-27T12:29:48
{ "login": "peregilk", "id": 9079808, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,116,021,420
3,639
same value of precision, recall, f1 score at each epoch for classification task.
**1st Epoch:** 1/27/2022 09:30:48 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/f1/default/default_experiment-1-0.arrow.59it/s] 01/27/2022 09:30:48 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/precision/default/default_experiment-1-0.arrow 01/27/2022 09:30:49 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/recall/default/default_experiment-1-0.arrow PRECISION: {'precision': 0.7612903225806451} RECALL: {'recall': 0.7612903225806451} F1: {'f1': 0.7612903225806451} {'eval_loss': 1.4658324718475342, 'eval_accuracy': 0.7612903118133545, 'eval_runtime': 30.0054, 'eval_samples_per_second': 46.492, 'eval_steps_per_second': 46.492, 'epoch': 3.0} **4th Epoch:** 1/27/2022 09:56:55 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/f1/default/default_experiment-1-0.arrow.92it/s] 01/27/2022 09:56:56 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/precision/default/default_experiment-1-0.arrow 01/27/2022 09:56:56 - INFO - datasets.metric - Removing /home/ubuntu/.cache/huggingface/metrics/recall/default/default_experiment-1-0.arrow PRECISION: {'precision': 0.7698924731182796} RECALL: {'recall': 0.7698924731182796} F1: {'f1': 0.7698924731182796} ## Environment info !git clone https://github.com/huggingface/transformers %cd transformers !pip install . !pip install -r /content/transformers/examples/pytorch/token-classification/requirements.txt !pip install datasets
closed
https://github.com/huggingface/datasets/issues/3639
2022-01-27T10:14:16
2022-02-24T09:02:18
2022-02-24T09:02:17
{ "login": "Dhanachandra", "id": 10828657, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,115,725,703
3,638
AutoTokenizer hash value got change after datasets.map
## Describe the bug AutoTokenizer hash value got change after datasets.map ## Steps to reproduce the bug 1. trash huggingface datasets cache 2. run the following code: ```python from transformers import AutoTokenizer, BertTokenizer from datasets import load_dataset from datasets.fingerprint import Hasher tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased') def tokenize_function(example): return tokenizer(example["sentence1"], example["sentence2"], truncation=True) raw_datasets = load_dataset("glue", "mrpc") print(Hasher.hash(tokenize_function)) print(Hasher.hash(tokenizer)) tokenized_datasets = raw_datasets.map(tokenize_function, batched=True) print(Hasher.hash(tokenize_function)) print(Hasher.hash(tokenizer)) ``` got ``` Reusing dataset glue (/home1/wts/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad) 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1112.35it/s] f4976bb4694ebc51 3fca35a1fd4a1251 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 6.96ba/s] 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 15.25ba/s] 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 5.81ba/s] d32837619b7d7d01 5fd925c82edd62b6 ``` 3. run raw_datasets.map(tokenize_function, batched=True) again and see some dataset are not using cache. 
## Expected results `AutoTokenizer` work like specific Tokenizer (The hash value don't change after map): ```python from transformers import AutoTokenizer, BertTokenizer from datasets import load_dataset from datasets.fingerprint import Hasher tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') def tokenize_function(example): return tokenizer(example["sentence1"], example["sentence2"], truncation=True) raw_datasets = load_dataset("glue", "mrpc") print(Hasher.hash(tokenize_function)) print(Hasher.hash(tokenizer)) tokenized_datasets = raw_datasets.map(tokenize_function, batched=True) print(Hasher.hash(tokenize_function)) print(Hasher.hash(tokenizer)) ``` ``` Reusing dataset glue (/home1/wts/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad) 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1091.22it/s] 46d4b31f54153fc7 5b8771afd8d43888 Loading cached processed dataset at /home1/wts/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-6b07ff82ae9d5c51.arrow Loading cached processed dataset at /home1/wts/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-af738a6d84f3864b.arrow Loading cached processed dataset at /home1/wts/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-531d2a603ba713c1.arrow 46d4b31f54153fc7 5b8771afd8d43888 ``` ## Environment info - `datasets` version: 1.18.0 - Platform: Linux-5.4.0-91-generic-x86_64-with-glibc2.27 - Python version: 3.9.7 - PyArrow version: 6.0.1
open
https://github.com/huggingface/datasets/issues/3638
2022-01-27T03:19:03
2024-03-11T13:56:15
null
{ "login": "tshu-w", "id": 13161779, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,115,526,438
3,637
[TypeError: Couldn't cast array of type] Cannot load dataset in v1.18
## Describe the bug I am trying to load the [`GEM/RiSAWOZ` dataset](https://huggingface.co/datasets/GEM/RiSAWOZ) in `datasets` v1.18.1 and am running into a type error when casting the features. The strange thing is that I can load the dataset with v1.17.0. Note that the error is also present if I install from `master` too. As far as I can tell, the dataset loading script is correct and the problematic features [here](https://huggingface.co/datasets/GEM/RiSAWOZ/blob/main/RiSAWOZ.py#L237) also look fine to me. ## Steps to reproduce the bug ```python from datasets import load_dataset dset = load_dataset("GEM/RiSAWOZ") ``` ## Expected results I can load the dataset without error. ## Actual results <details><summary>Traceback</summary> ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/builder.py in _prepare_split(self, split_generator) 1083 example = self.info.features.encode_example(record) -> 1084 writer.write(example, key) 1085 finally: ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/arrow_writer.py in write(self, example, key, writer_batch_size) 445 --> 446 self.write_examples_on_file() 447 ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/arrow_writer.py in write_examples_on_file(self) 403 batch_examples[col] = [row[0][col] for row in self.current_examples] --> 404 self.write_batch(batch_examples=batch_examples) 405 self.current_examples = [] ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size) 496 typed_sequence = OptimizedTypedSequence(batch_examples[col], type=col_type, try_type=col_try_type, col=col) --> 497 arrays.append(pa.array(typed_sequence)) 498 inferred_features[col] = typed_sequence.get_inferred_type() ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/pyarrow/array.pxi in pyarrow.lib.array() ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/pyarrow/array.pxi in pyarrow.lib._handle_arrow_array_protocol() ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/arrow_writer.py in __arrow_array__(self, type) 204 # We only do it if trying_type is False - since this is what the user asks for. 
--> 205 out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type) 206 return out ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 943 array = _sanitize(array) --> 944 return func(array, *args, **kwargs) 945 ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 919 else: --> 920 return func(array, *args, **kwargs) 921 ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str) 1064 if isinstance(feature, list): -> 1065 return pa.ListArray.from_arrays(array.offsets, _c(array.values, feature[0])) 1066 elif isinstance(feature, Sequence): ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 943 array = _sanitize(array) --> 944 return func(array, *args, **kwargs) 945 ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 919 else: --> 920 return func(array, *args, **kwargs) 921 ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str) 1059 if isinstance(feature, dict) and set(field.name for field in array.type) == set(feature): -> 1060 arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()] 1061 return pa.StructArray.from_arrays(arrays, names=list(feature)) ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in <listcomp>(.0) 1059 if isinstance(feature, dict) and set(field.name for field in array.type) == set(feature): -> 1060 arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()] 1061 return pa.StructArray.from_arrays(arrays, names=list(feature)) ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 943 array = _sanitize(array) --> 944 return func(array, *args, **kwargs) 945 ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 919 else: --> 920 return func(array, *args, **kwargs) 921 ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str) 1059 if isinstance(feature, dict) and set(field.name for field in array.type) == set(feature): -> 1060 arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()] 1061 return pa.StructArray.from_arrays(arrays, names=list(feature)) ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in <listcomp>(.0) 1059 if isinstance(feature, dict) and set(field.name for field in array.type) == set(feature): -> 1060 arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()] 1061 return pa.StructArray.from_arrays(arrays, names=list(feature)) ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 943 array = _sanitize(array) --> 944 return func(array, *args, **kwargs) 945 ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 919 else: --> 920 return func(array, *args, **kwargs) 921 ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str) 1086 return array_cast(array, feature(), allow_number_to_str=allow_number_to_str) -> 1087 raise 
TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}") 1088 TypeError: Couldn't cast array of type struct<医院-3.0T MRI: string, 医院-CT: string, 医院-DSA: string, 医院-公交线路: string, 医院-区域: string, 医院-名称: string, 医院-地址: string, 医院-地铁可达: string, 医院-地铁线路: string, 医院-性质: string, 医院-挂号时间: string, 医院-电话: string, 医院-等级: string, 医院-类别: string, 医院-重点科室: string, 医院-门诊时间: string, 天气-城市: string, 天气-天气: string, 天气-日期: string, 天气-温度: string, 天气-紫外线强度: string, 天气-风力风向: string, 旅游景点-区域: string, 旅游景点-名称: string, 旅游景点-地址: string, 旅游景点-开放时间: string, 旅游景点-是否地铁直达: string, 旅游景点-景点类型: string, 旅游景点-最适合人群: string, 旅游景点-消费: string, 旅游景点-特点: string, 旅游景点-电话号码: string, 旅游景点-评分: string, 旅游景点-门票价格: string, 汽车-价格(万元): string, 汽车-倒车影像: string, 汽车-动力水平: string, 汽车-厂商: string, 汽车-发动机排量(L): string, 汽车-发动机马力(Ps): string, 汽车-名称: string, 汽车-定速巡航: string, 汽车-巡航系统: string, 汽车-座位数: string, 汽车-座椅加热: string, 汽车-座椅通风: string, 汽车-所属价格区间: string, 汽车-油耗水平: string, 汽车-环保标准: string, 汽车-级别: string, 汽车-综合油耗(L/100km): string, 汽车-能源类型: string, 汽车-车型: string, 汽车-车系: string, 汽车-车身尺寸(mm): string, 汽车-驱动方式: string, 汽车-驾驶辅助影像: string, 火车-出发地: string, 火车-出发时间: string, 火车-到达时间: string, 火车-坐席: string, 火车-日期: string, 火车-时长: string, 火车-目的地: string, 火车-票价: string, 火车-舱位档次: string, 火车-车型: string, 火车-车次信息: string, 电影-主演: string, 电影-主演名单: string, 电影-具体上映时间: string, 电影-制片国家/地区: string, 电影-导演: string, 电影-年代: string, 电影-片名: string, 电影-片长: string, 电影-类型: string, 电影-豆瓣评分: string, 电脑-CPU: string, 电脑-CPU型号: string, 电脑-产品类别: string, 电脑-价格: string, 电脑-价格区间: string, 电脑-内存容量: string, 电脑-分类: string, 电脑-品牌: string, 电脑-商品名称: string, 电脑-屏幕尺寸: string, 电脑-待机时长: string, 电脑-显卡型号: string, 电脑-显卡类别: string, 电脑-游戏性能: string, 电脑-特性: string, 电脑-硬盘容量: string, 电脑-系列: string, 电脑-系统: string, 电脑-色系: string, 电脑-裸机重量: string, 电视剧-主演: string, 电视剧-主演名单: string, 电视剧-制片国家/地区: string, 电视剧-单集片长: string, 电视剧-导演: string, 电视剧-年代: string, 电视剧-片名: string, 电视剧-类型: string, 电视剧-豆瓣评分: string, 电视剧-集数: string, 电视剧-首播时间: string, 辅导班-上课方式: string, 辅导班-上课时间: string, 辅导班-下课时间: string, 辅导班-价格: string, 辅导班-区域: string, 辅导班-年级: string, 辅导班-开始日期: string, 辅导班-教室地点: string, 辅导班-教师: string, 辅导班-教师网址: string, 辅导班-时段: string, 辅导班-校区: string, 辅导班-每周: string, 辅导班-班号: string, 辅导班-科目: string, 辅导班-结束日期: string, 辅导班-课时: string, 辅导班-课次: string, 辅导班-课程网址: string, 辅导班-难度: string, 通用-产品类别: string, 通用-价格区间: string, 通用-品牌: string, 通用-系列: string, 酒店-价位: string, 酒店-停车场: string, 酒店-区域: string, 酒店-名称: string, 酒店-地址: string, 酒店-房型: string, 酒店-房费: string, 酒店-星级: string, 酒店-电话号码: string, 酒店-评分: string, 酒店-酒店类型: string, 飞机-准点率: string, 飞机-出发地: string, 飞机-到达时间: string, 飞机-日期: string, 飞机-目的地: string, 飞机-票价: string, 飞机-航班信息: string, 飞机-舱位档次: string, 飞机-起飞时间: string, 餐厅-人均消费: string, 餐厅-价位: string, 餐厅-区域: string, 餐厅-名称: string, 餐厅-地址: string, 餐厅-推荐菜: string, 餐厅-是否地铁直达: string, 餐厅-电话号码: string, 餐厅-菜系: string, 餐厅-营业时间: string, 餐厅-评分: string> to {'旅游景点-名称': Value(dtype='string', id=None), '旅游景点-区域': Value(dtype='string', id=None), '旅游景点-景点类型': Value(dtype='string', id=None), '旅游景点-最适合人群': Value(dtype='string', id=None), '旅游景点-消费': Value(dtype='string', id=None), '旅游景点-是否地铁直达': Value(dtype='string', id=None), '旅游景点-门票价格': Value(dtype='string', id=None), '旅游景点-电话号码': Value(dtype='string', id=None), '旅游景点-地址': Value(dtype='string', id=None), '旅游景点-评分': Value(dtype='string', id=None), '旅游景点-开放时间': Value(dtype='string', id=None), '旅游景点-特点': Value(dtype='string', id=None), '餐厅-名称': Value(dtype='string', id=None), '餐厅-区域': Value(dtype='string', id=None), '餐厅-菜系': Value(dtype='string', id=None), '餐厅-价位': Value(dtype='string', id=None), 
'餐厅-是否地铁直达': Value(dtype='string', id=None), '餐厅-人均消费': Value(dtype='string', id=None), '餐厅-地址': Value(dtype='string', id=None), '餐厅-电话号码': Value(dtype='string', id=None), '餐厅-评分': Value(dtype='string', id=None), '餐厅-营业时间': Value(dtype='string', id=None), '餐厅-推荐菜': Value(dtype='string', id=None), '酒店-名称': Value(dtype='string', id=None), '酒店-区域': Value(dtype='string', id=None), '酒店-星级': Value(dtype='string', id=None), '酒店-价位': Value(dtype='string', id=None), '酒店-酒店类型': Value(dtype='string', id=None), '酒店-房型': Value(dtype='string', id=None), '酒店-停车场': Value(dtype='string', id=None), '酒店-房费': Value(dtype='string', id=None), '酒店-地址': Value(dtype='string', id=None), '酒店-电话号码': Value(dtype='string', id=None), '酒店-评分': Value(dtype='string', id=None), '电脑-品牌': Value(dtype='string', id=None), '电脑-产品类别': Value(dtype='string', id=None), '电脑-分类': Value(dtype='string', id=None), '电脑-内存容量': Value(dtype='string', id=None), '电脑-屏幕尺寸': Value(dtype='string', id=None), '电脑-CPU': Value(dtype='string', id=None), '电脑-价格区间': Value(dtype='string', id=None), '电脑-系列': Value(dtype='string', id=None), '电脑-商品名称': Value(dtype='string', id=None), '电脑-系统': Value(dtype='string', id=None), '电脑-游戏性能': Value(dtype='string', id=None), '电脑-CPU型号': Value(dtype='string', id=None), '电脑-裸机重量': Value(dtype='string', id=None), '电脑-显卡类别': Value(dtype='string', id=None), '电脑-显卡型号': Value(dtype='string', id=None), '电脑-特性': Value(dtype='string', id=None), '电脑-色系': Value(dtype='string', id=None), '电脑-待机时长': Value(dtype='string', id=None), '电脑-硬盘容量': Value(dtype='string', id=None), '电脑-价格': Value(dtype='string', id=None), '火车-出发地': Value(dtype='string', id=None), '火车-目的地': Value(dtype='string', id=None), '火车-日期': Value(dtype='string', id=None), '火车-车型': Value(dtype='string', id=None), '火车-坐席': Value(dtype='string', id=None), '火车-车次信息': Value(dtype='string', id=None), '火车-时长': Value(dtype='string', id=None), '火车-出发时间': Value(dtype='string', id=None), '火车-到达时间': Value(dtype='string', id=None), '火车-票价': Value(dtype='string', id=None), '飞机-出发地': Value(dtype='string', id=None), '飞机-目的地': Value(dtype='string', id=None), '飞机-日期': Value(dtype='string', id=None), '飞机-舱位档次': Value(dtype='string', id=None), '飞机-航班信息': Value(dtype='string', id=None), '飞机-起飞时间': Value(dtype='string', id=None), '飞机-到达时间': Value(dtype='string', id=None), '飞机-票价': Value(dtype='string', id=None), '飞机-准点率': Value(dtype='string', id=None), '天气-城市': Value(dtype='string', id=None), '天气-日期': Value(dtype='string', id=None), '天气-天气': Value(dtype='string', id=None), '天气-温度': Value(dtype='string', id=None), '天气-风力风向': Value(dtype='string', id=None), '天气-紫外线强度': Value(dtype='string', id=None), '电影-制片国家/地区': Value(dtype='string', id=None), '电影-类型': Value(dtype='string', id=None), '电影-年代': Value(dtype='string', id=None), '电影-主演': Value(dtype='string', id=None), '电影-导演': Value(dtype='string', id=None), '电影-片名': Value(dtype='string', id=None), '电影-主演名单': Value(dtype='string', id=None), '电影-具体上映时间': Value(dtype='string', id=None), '电影-片长': Value(dtype='string', id=None), '电影-豆瓣评分': Value(dtype='string', id=None), '电视剧-制片国家/地区': Value(dtype='string', id=None), '电视剧-类型': Value(dtype='string', id=None), '电视剧-年代': Value(dtype='string', id=None), '电视剧-主演': Value(dtype='string', id=None), '电视剧-导演': Value(dtype='string', id=None), '电视剧-片名': Value(dtype='string', id=None), '电视剧-主演名单': Value(dtype='string', id=None), '电视剧-首播时间': Value(dtype='string', id=None), '电视剧-集数': Value(dtype='string', id=None), '电视剧-单集片长': Value(dtype='string', id=None), '电视剧-豆瓣评分': Value(dtype='string', id=None), 
'辅导班-班号': Value(dtype='string', id=None), '辅导班-难度': Value(dtype='string', id=None), '辅导班-科目': Value(dtype='string', id=None), '辅导班-年级': Value(dtype='string', id=None), '辅导班-区域': Value(dtype='string', id=None), '辅导班-校区': Value(dtype='string', id=None), '辅导班-上课方式': Value(dtype='string', id=None), '辅导班-开始日期': Value(dtype='string', id=None), '辅导班-结束日期': Value(dtype='string', id=None), '辅导班-每周': Value(dtype='string', id=None), '辅导班-上课时间': Value(dtype='string', id=None), '辅导班-下课时间': Value(dtype='string', id=None), '辅导班-时段': Value(dtype='string', id=None), '辅导班-课次': Value(dtype='string', id=None), '辅导班-课时': Value(dtype='string', id=None), '辅导班-教室地点': Value(dtype='string', id=None), '辅导班-教师': Value(dtype='string', id=None), '辅导班-价格': Value(dtype='string', id=None), '辅导班-课程网址': Value(dtype='string', id=None), '辅导班-教师网址': Value(dtype='string', id=None), '汽车-名称': Value(dtype='string', id=None), '汽车-车型': Value(dtype='string', id=None), '汽车-级别': Value(dtype='string', id=None), '汽车-座位数': Value(dtype='string', id=None), '汽车-车身尺寸(mm)': Value(dtype='string', id=None), '汽车-厂商': Value(dtype='string', id=None), '汽车-能源类型': Value(dtype='string', id=None), '汽车-发动机排量(L)': Value(dtype='string', id=None), '汽车-发动机马力(Ps)': Value(dtype='string', id=None), '汽车-驱动方式': Value(dtype='string', id=None), '汽车-综合油耗(L/100km)': Value(dtype='string', id=None), '汽车-环保标准': Value(dtype='string', id=None), '汽车-驾驶辅助影像': Value(dtype='string', id=None), '汽车-巡航系统': Value(dtype='string', id=None), '汽车-价格(万元)': Value(dtype='string', id=None), '汽车-车系': Value(dtype='string', id=None), '汽车-动力水平': Value(dtype='string', id=None), '汽车-油耗水平': Value(dtype='string', id=None), '汽车-倒车影像': Value(dtype='string', id=None), '汽车-定速巡航': Value(dtype='string', id=None), '汽车-座椅加热': Value(dtype='string', id=None), '汽车-座椅通风': Value(dtype='string', id=None), '汽车-所属价格区间': Value(dtype='string', id=None), '医院-名称': Value(dtype='string', id=None), '医院-等级': Value(dtype='string', id=None), '医院-类别': Value(dtype='string', id=None), '医院-性质': Value(dtype='string', id=None), '医院-区域': Value(dtype='string', id=None), '医院-地址': Value(dtype='string', id=None), '医院-电话': Value(dtype='string', id=None), '医院-挂号时间': Value(dtype='string', id=None), '医院-门诊时间': Value(dtype='string', id=None), '医院-公交线路': Value(dtype='string', id=None), '医院-地铁可达': Value(dtype='string', id=None), '医院-地铁线路': Value(dtype='string', id=None), '医院-重点科室': Value(dtype='string', id=None), '医院-CT': Value(dtype='string', id=None), '医院-3.0T MRI': Value(dtype='string', id=None), '医院-DSA': Value(dtype='string', id=None)} During handling of the above exception, another exception occurred: TypeError Traceback (most recent call last) /var/folders/28/k4cy5q7s2hs92xq7_h89_vgm0000gn/T/ipykernel_44306/2896005239.py in <module> ----> 1 dset = load_dataset("GEM/RiSAWOZ") 2 dset ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs) 1692 1693 # Download and prepare data -> 1694 builder_instance.download_and_prepare( 1695 download_config=download_config, 1696 download_mode=download_mode, ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 593 logger.warning("HF google storage 
unreachable. Downloading and preparing it from source") 594 if not downloaded_from_gcs: --> 595 self._download_and_prepare( 596 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 597 ) ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 682 try: 683 # Prepare split will record examples associated to the split --> 684 self._prepare_split(split_generator, **prepare_split_kwargs) 685 except OSError as e: 686 raise OSError( ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/builder.py in _prepare_split(self, split_generator) 1084 writer.write(example, key) 1085 finally: -> 1086 num_examples, num_bytes = writer.finalize() 1087 1088 split_generator.split_info.num_examples = num_examples ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/arrow_writer.py in finalize(self, close_stream) 525 # Re-intializing to empty list for next batch 526 self.hkey_record = [] --> 527 self.write_examples_on_file() 528 if self.pa_writer is None: 529 if self.schema: ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/arrow_writer.py in write_examples_on_file(self) 402 # Since current_examples contains (example, key) tuples 403 batch_examples[col] = [row[0][col] for row in self.current_examples] --> 404 self.write_batch(batch_examples=batch_examples) 405 self.current_examples = [] 406 ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size) 495 col_try_type = try_features[col] if try_features is not None and col in try_features else None 496 typed_sequence = OptimizedTypedSequence(batch_examples[col], type=col_type, try_type=col_try_type, col=col) --> 497 arrays.append(pa.array(typed_sequence)) 498 inferred_features[col] = typed_sequence.get_inferred_type() 499 schema = inferred_features.arrow_schema if self.pa_writer is None else self.schema ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/pyarrow/array.pxi in pyarrow.lib.array() ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/pyarrow/array.pxi in pyarrow.lib._handle_arrow_array_protocol() ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/arrow_writer.py in __arrow_array__(self, type) 203 # Also, when trying type "string", we don't want to convert integers or floats to "string". 204 # We only do it if trying_type is False - since this is what the user asks for. 
--> 205 out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type) 206 return out 207 except (TypeError, pa.lib.ArrowInvalid) as e: # handle type errors and overflows ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 942 if pa.types.is_list(array.type) and config.PYARROW_VERSION < version.parse("4.0.0"): 943 array = _sanitize(array) --> 944 return func(array, *args, **kwargs) 945 946 return wrapper ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 918 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) 919 else: --> 920 return func(array, *args, **kwargs) 921 922 return wrapper ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str) 1063 # feature must be either [subfeature] or Sequence(subfeature) 1064 if isinstance(feature, list): -> 1065 return pa.ListArray.from_arrays(array.offsets, _c(array.values, feature[0])) 1066 elif isinstance(feature, Sequence): 1067 if feature.length > -1: ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 942 if pa.types.is_list(array.type) and config.PYARROW_VERSION < version.parse("4.0.0"): 943 array = _sanitize(array) --> 944 return func(array, *args, **kwargs) 945 946 return wrapper ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 918 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) 919 else: --> 920 return func(array, *args, **kwargs) 921 922 return wrapper ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str) 1058 } 1059 if isinstance(feature, dict) and set(field.name for field in array.type) == set(feature): -> 1060 arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()] 1061 return pa.StructArray.from_arrays(arrays, names=list(feature)) 1062 elif pa.types.is_list(array.type): ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in <listcomp>(.0) 1058 } 1059 if isinstance(feature, dict) and set(field.name for field in array.type) == set(feature): -> 1060 arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()] 1061 return pa.StructArray.from_arrays(arrays, names=list(feature)) 1062 elif pa.types.is_list(array.type): ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 942 if pa.types.is_list(array.type) and config.PYARROW_VERSION < version.parse("4.0.0"): 943 array = _sanitize(array) --> 944 return func(array, *args, **kwargs) 945 946 return wrapper ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 918 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) 919 else: --> 920 return func(array, *args, **kwargs) 921 922 return wrapper ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str) 1058 } 1059 if isinstance(feature, dict) and set(field.name for field in array.type) == set(feature): -> 1060 arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()] 1061 return pa.StructArray.from_arrays(arrays, names=list(feature)) 1062 elif 
pa.types.is_list(array.type): ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in <listcomp>(.0) 1058 } 1059 if isinstance(feature, dict) and set(field.name for field in array.type) == set(feature): -> 1060 arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()] 1061 return pa.StructArray.from_arrays(arrays, names=list(feature)) 1062 elif pa.types.is_list(array.type): ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 942 if pa.types.is_list(array.type) and config.PYARROW_VERSION < version.parse("4.0.0"): 943 array = _sanitize(array) --> 944 return func(array, *args, **kwargs) 945 946 return wrapper ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs) 918 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) 919 else: --> 920 return func(array, *args, **kwargs) 921 922 return wrapper ~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str) 1085 elif not isinstance(feature, (Sequence, dict, list, tuple)): 1086 return array_cast(array, feature(), allow_number_to_str=allow_number_to_str) -> 1087 raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}") 1088 1089 TypeError: Couldn't cast array of type struct<医院-3.0T MRI: string, 医院-CT: string, 医院-DSA: string, 医院-公交线路: string, 医院-区域: string, 医院-名称: string, 医院-地址: string, 医院-地铁可达: string, 医院-地铁线路: string, 医院-性质: string, 医院-挂号时间: string, 医院-电话: string, 医院-等级: string, 医院-类别: string, 医院-重点科室: string, 医院-门诊时间: string, 天气-城市: string, 天气-天气: string, 天气-日期: string, 天气-温度: string, 天气-紫外线强度: string, 天气-风力风向: string, 旅游景点-区域: string, 旅游景点-名称: string, 旅游景点-地址: string, 旅游景点-开放时间: string, 旅游景点-是否地铁直达: string, 旅游景点-景点类型: string, 旅游景点-最适合人群: string, 旅游景点-消费: string, 旅游景点-特点: string, 旅游景点-电话号码: string, 旅游景点-评分: string, 旅游景点-门票价格: string, 汽车-价格(万元): string, 汽车-倒车影像: string, 汽车-动力水平: string, 汽车-厂商: string, 汽车-发动机排量(L): string, 汽车-发动机马力(Ps): string, 汽车-名称: string, 汽车-定速巡航: string, 汽车-巡航系统: string, 汽车-座位数: string, 汽车-座椅加热: string, 汽车-座椅通风: string, 汽车-所属价格区间: string, 汽车-油耗水平: string, 汽车-环保标准: string, 汽车-级别: string, 汽车-综合油耗(L/100km): string, 汽车-能源类型: string, 汽车-车型: string, 汽车-车系: string, 汽车-车身尺寸(mm): string, 汽车-驱动方式: string, 汽车-驾驶辅助影像: string, 火车-出发地: string, 火车-出发时间: string, 火车-到达时间: string, 火车-坐席: string, 火车-日期: string, 火车-时长: string, 火车-目的地: string, 火车-票价: string, 火车-舱位档次: string, 火车-车型: string, 火车-车次信息: string, 电影-主演: string, 电影-主演名单: string, 电影-具体上映时间: string, 电影-制片国家/地区: string, 电影-导演: string, 电影-年代: string, 电影-片名: string, 电影-片长: string, 电影-类型: string, 电影-豆瓣评分: string, 电脑-CPU: string, 电脑-CPU型号: string, 电脑-产品类别: string, 电脑-价格: string, 电脑-价格区间: string, 电脑-内存容量: string, 电脑-分类: string, 电脑-品牌: string, 电脑-商品名称: string, 电脑-屏幕尺寸: string, 电脑-待机时长: string, 电脑-显卡型号: string, 电脑-显卡类别: string, 电脑-游戏性能: string, 电脑-特性: string, 电脑-硬盘容量: string, 电脑-系列: string, 电脑-系统: string, 电脑-色系: string, 电脑-裸机重量: string, 电视剧-主演: string, 电视剧-主演名单: string, 电视剧-制片国家/地区: string, 电视剧-单集片长: string, 电视剧-导演: string, 电视剧-年代: string, 电视剧-片名: string, 电视剧-类型: string, 电视剧-豆瓣评分: string, 电视剧-集数: string, 电视剧-首播时间: string, 辅导班-上课方式: string, 辅导班-上课时间: string, 辅导班-下课时间: string, 辅导班-价格: string, 辅导班-区域: string, 辅导班-年级: string, 辅导班-开始日期: string, 辅导班-教室地点: string, 辅导班-教师: string, 辅导班-教师网址: string, 辅导班-时段: string, 辅导班-校区: string, 辅导班-每周: string, 辅导班-班号: string, 辅导班-科目: string, 辅导班-结束日期: string, 辅导班-课时: string, 辅导班-课次: 
string, 辅导班-课程网址: string, 辅导班-难度: string, 通用-产品类别: string, 通用-价格区间: string, 通用-品牌: string, 通用-系列: string, 酒店-价位: string, 酒店-停车场: string, 酒店-区域: string, 酒店-名称: string, 酒店-地址: string, 酒店-房型: string, 酒店-房费: string, 酒店-星级: string, 酒店-电话号码: string, 酒店-评分: string, 酒店-酒店类型: string, 飞机-准点率: string, 飞机-出发地: string, 飞机-到达时间: string, 飞机-日期: string, 飞机-目的地: string, 飞机-票价: string, 飞机-航班信息: string, 飞机-舱位档次: string, 飞机-起飞时间: string, 餐厅-人均消费: string, 餐厅-价位: string, 餐厅-区域: string, 餐厅-名称: string, 餐厅-地址: string, 餐厅-推荐菜: string, 餐厅-是否地铁直达: string, 餐厅-电话号码: string, 餐厅-菜系: string, 餐厅-营业时间: string, 餐厅-评分: string> to {'旅游景点-名称': Value(dtype='string', id=None), '旅游景点-区域': Value(dtype='string', id=None), '旅游景点-景点类型': Value(dtype='string', id=None), '旅游景点-最适合人群': Value(dtype='string', id=None), '旅游景点-消费': Value(dtype='string', id=None), '旅游景点-是否地铁直达': Value(dtype='string', id=None), '旅游景点-门票价格': Value(dtype='string', id=None), '旅游景点-电话号码': Value(dtype='string', id=None), '旅游景点-地址': Value(dtype='string', id=None), '旅游景点-评分': Value(dtype='string', id=None), '旅游景点-开放时间': Value(dtype='string', id=None), '旅游景点-特点': Value(dtype='string', id=None), '餐厅-名称': Value(dtype='string', id=None), '餐厅-区域': Value(dtype='string', id=None), '餐厅-菜系': Value(dtype='string', id=None), '餐厅-价位': Value(dtype='string', id=None), '餐厅-是否地铁直达': Value(dtype='string', id=None), '餐厅-人均消费': Value(dtype='string', id=None), '餐厅-地址': Value(dtype='string', id=None), '餐厅-电话号码': Value(dtype='string', id=None), '餐厅-评分': Value(dtype='string', id=None), '餐厅-营业时间': Value(dtype='string', id=None), '餐厅-推荐菜': Value(dtype='string', id=None), '酒店-名称': Value(dtype='string', id=None), '酒店-区域': Value(dtype='string', id=None), '酒店-星级': Value(dtype='string', id=None), '酒店-价位': Value(dtype='string', id=None), '酒店-酒店类型': Value(dtype='string', id=None), '酒店-房型': Value(dtype='string', id=None), '酒店-停车场': Value(dtype='string', id=None), '酒店-房费': Value(dtype='string', id=None), '酒店-地址': Value(dtype='string', id=None), '酒店-电话号码': Value(dtype='string', id=None), '酒店-评分': Value(dtype='string', id=None), '电脑-品牌': Value(dtype='string', id=None), '电脑-产品类别': Value(dtype='string', id=None), '电脑-分类': Value(dtype='string', id=None), '电脑-内存容量': Value(dtype='string', id=None), '电脑-屏幕尺寸': Value(dtype='string', id=None), '电脑-CPU': Value(dtype='string', id=None), '电脑-价格区间': Value(dtype='string', id=None), '电脑-系列': Value(dtype='string', id=None), '电脑-商品名称': Value(dtype='string', id=None), '电脑-系统': Value(dtype='string', id=None), '电脑-游戏性能': Value(dtype='string', id=None), '电脑-CPU型号': Value(dtype='string', id=None), '电脑-裸机重量': Value(dtype='string', id=None), '电脑-显卡类别': Value(dtype='string', id=None), '电脑-显卡型号': Value(dtype='string', id=None), '电脑-特性': Value(dtype='string', id=None), '电脑-色系': Value(dtype='string', id=None), '电脑-待机时长': Value(dtype='string', id=None), '电脑-硬盘容量': Value(dtype='string', id=None), '电脑-价格': Value(dtype='string', id=None), '火车-出发地': Value(dtype='string', id=None), '火车-目的地': Value(dtype='string', id=None), '火车-日期': Value(dtype='string', id=None), '火车-车型': Value(dtype='string', id=None), '火车-坐席': Value(dtype='string', id=None), '火车-车次信息': Value(dtype='string', id=None), '火车-时长': Value(dtype='string', id=None), '火车-出发时间': Value(dtype='string', id=None), '火车-到达时间': Value(dtype='string', id=None), '火车-票价': Value(dtype='string', id=None), '飞机-出发地': Value(dtype='string', id=None), '飞机-目的地': Value(dtype='string', id=None), '飞机-日期': Value(dtype='string', id=None), '飞机-舱位档次': Value(dtype='string', id=None), '飞机-航班信息': Value(dtype='string', id=None), '飞机-起飞时间': 
Value(dtype='string', id=None), '飞机-到达时间': Value(dtype='string', id=None), '飞机-票价': Value(dtype='string', id=None), '飞机-准点率': Value(dtype='string', id=None), '天气-城市': Value(dtype='string', id=None), '天气-日期': Value(dtype='string', id=None), '天气-天气': Value(dtype='string', id=None), '天气-温度': Value(dtype='string', id=None), '天气-风力风向': Value(dtype='string', id=None), '天气-紫外线强度': Value(dtype='string', id=None), '电影-制片国家/地区': Value(dtype='string', id=None), '电影-类型': Value(dtype='string', id=None), '电影-年代': Value(dtype='string', id=None), '电影-主演': Value(dtype='string', id=None), '电影-导演': Value(dtype='string', id=None), '电影-片名': Value(dtype='string', id=None), '电影-主演名单': Value(dtype='string', id=None), '电影-具体上映时间': Value(dtype='string', id=None), '电影-片长': Value(dtype='string', id=None), '电影-豆瓣评分': Value(dtype='string', id=None), '电视剧-制片国家/地区': Value(dtype='string', id=None), '电视剧-类型': Value(dtype='string', id=None), '电视剧-年代': Value(dtype='string', id=None), '电视剧-主演': Value(dtype='string', id=None), '电视剧-导演': Value(dtype='string', id=None), '电视剧-片名': Value(dtype='string', id=None), '电视剧-主演名单': Value(dtype='string', id=None), '电视剧-首播时间': Value(dtype='string', id=None), '电视剧-集数': Value(dtype='string', id=None), '电视剧-单集片长': Value(dtype='string', id=None), '电视剧-豆瓣评分': Value(dtype='string', id=None), '辅导班-班号': Value(dtype='string', id=None), '辅导班-难度': Value(dtype='string', id=None), '辅导班-科目': Value(dtype='string', id=None), '辅导班-年级': Value(dtype='string', id=None), '辅导班-区域': Value(dtype='string', id=None), '辅导班-校区': Value(dtype='string', id=None), '辅导班-上课方式': Value(dtype='string', id=None), '辅导班-开始日期': Value(dtype='string', id=None), '辅导班-结束日期': Value(dtype='string', id=None), '辅导班-每周': Value(dtype='string', id=None), '辅导班-上课时间': Value(dtype='string', id=None), '辅导班-下课时间': Value(dtype='string', id=None), '辅导班-时段': Value(dtype='string', id=None), '辅导班-课次': Value(dtype='string', id=None), '辅导班-课时': Value(dtype='string', id=None), '辅导班-教室地点': Value(dtype='string', id=None), '辅导班-教师': Value(dtype='string', id=None), '辅导班-价格': Value(dtype='string', id=None), '辅导班-课程网址': Value(dtype='string', id=None), '辅导班-教师网址': Value(dtype='string', id=None), '汽车-名称': Value(dtype='string', id=None), '汽车-车型': Value(dtype='string', id=None), '汽车-级别': Value(dtype='string', id=None), '汽车-座位数': Value(dtype='string', id=None), '汽车-车身尺寸(mm)': Value(dtype='string', id=None), '汽车-厂商': Value(dtype='string', id=None), '汽车-能源类型': Value(dtype='string', id=None), '汽车-发动机排量(L)': Value(dtype='string', id=None), '汽车-发动机马力(Ps)': Value(dtype='string', id=None), '汽车-驱动方式': Value(dtype='string', id=None), '汽车-综合油耗(L/100km)': Value(dtype='string', id=None), '汽车-环保标准': Value(dtype='string', id=None), '汽车-驾驶辅助影像': Value(dtype='string', id=None), '汽车-巡航系统': Value(dtype='string', id=None), '汽车-价格(万元)': Value(dtype='string', id=None), '汽车-车系': Value(dtype='string', id=None), '汽车-动力水平': Value(dtype='string', id=None), '汽车-油耗水平': Value(dtype='string', id=None), '汽车-倒车影像': Value(dtype='string', id=None), '汽车-定速巡航': Value(dtype='string', id=None), '汽车-座椅加热': Value(dtype='string', id=None), '汽车-座椅通风': Value(dtype='string', id=None), '汽车-所属价格区间': Value(dtype='string', id=None), '医院-名称': Value(dtype='string', id=None), '医院-等级': Value(dtype='string', id=None), '医院-类别': Value(dtype='string', id=None), '医院-性质': Value(dtype='string', id=None), '医院-区域': Value(dtype='string', id=None), '医院-地址': Value(dtype='string', id=None), '医院-电话': Value(dtype='string', id=None), '医院-挂号时间': Value(dtype='string', id=None), '医院-门诊时间': Value(dtype='string', id=None), '医院-公交线路': 
Value(dtype='string', id=None), '医院-地铁可达': Value(dtype='string', id=None), '医院-地铁线路': Value(dtype='string', id=None), '医院-重点科室': Value(dtype='string', id=None), '医院-CT': Value(dtype='string', id=None), '医院-3.0T MRI': Value(dtype='string', id=None), '医院-DSA': Value(dtype='string', id=None)} ``` </details> ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.1 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.10 - PyArrow version: 3.0.0
closed
https://github.com/huggingface/datasets/issues/3637
2022-01-26T21:38:02
2022-02-09T16:15:53
2022-02-09T16:15:53
{ "login": "lewtun", "id": 26859204, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,115,362,702
3,636
Update index.rst
null
closed
https://github.com/huggingface/datasets/pull/3636
2022-01-26T18:43:09
2022-01-26T18:44:55
2022-01-26T18:44:54
{ "login": "VioletteLepercq", "id": 95622912, "type": "User" }
[]
true
[]
1,115,333,219
3,635
Make `ted_talks_iwslt` dataset streamable
null
closed
https://github.com/huggingface/datasets/pull/3635
2022-01-26T18:07:56
2022-10-04T09:36:23
2022-10-03T09:44:47
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
1,115,133,279
3,634
Dataset.shuffle(seed=None) gives fixed row permutation
## Describe the bug Repeated attempts to `shuffle` a dataset without specifying a seed give the same results. ## Steps to reproduce the bug ```python import datasets # Some toy example data = datasets.Dataset.from_dict( {"feature": [1, 2, 3, 4, 5], "label": ["a", "b", "c", "d", "e"]} ) # Doesn't work as expected print("Shuffle dataset") for _ in range(3): print(data.shuffle(seed=None)[:]) # This seems to work with pandas print("\nShuffle via pandas") for _ in range(3): df = data.to_pandas().sample(frac=1.0) print(datasets.Dataset.from_pandas(df, preserve_index=False)[:]) ``` ## Expected results I assumed that the default setting would initialize a new/random state of a `np.random.BitGenerator` (see [docs](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=shuffle#datasets.Dataset.shuffle)). Wouldn't that reshuffle the rows each time I call `data.shuffle()`? ## Actual results ```bash Shuffle dataset {'feature': [5, 1, 3, 2, 4], 'label': ['e', 'a', 'c', 'b', 'd']} {'feature': [5, 1, 3, 2, 4], 'label': ['e', 'a', 'c', 'b', 'd']} {'feature': [5, 1, 3, 2, 4], 'label': ['e', 'a', 'c', 'b', 'd']} Shuffle via pandas {'feature': [4, 2, 3, 1, 5], 'label': ['d', 'b', 'c', 'a', 'e']} {'feature': [2, 5, 3, 4, 1], 'label': ['b', 'e', 'c', 'd', 'a']} {'feature': [5, 2, 3, 1, 4], 'label': ['e', 'b', 'c', 'a', 'd']} ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.0 - Platform: Linux-5.13.0-27-generic-x86_64-with-glibc2.17 - Python version: 3.8.12 - PyArrow version: 6.0.1
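A workaround that avoids the fixed permutation is to pass a freshly constructed NumPy generator explicitly; a minimal sketch, assuming the `generator` argument of `Dataset.shuffle` is available in the installed version:

```python
import numpy as np
import datasets

data = datasets.Dataset.from_dict(
    {"feature": [1, 2, 3, 4, 5], "label": ["a", "b", "c", "d", "e"]}
)

# Each call constructs a new Generator seeded from OS entropy,
# so the row permutation should differ between calls.
for _ in range(3):
    print(data.shuffle(generator=np.random.default_rng())[:])
```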
closed
https://github.com/huggingface/datasets/issues/3634
2022-01-26T15:13:08
2022-01-27T18:16:07
2022-01-27T18:16:07
{ "login": "elisno", "id": 18127060, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,115,040,174
3,633
Mirror canonical datasets in prod
Push the datasets changes to the Hub in production by setting `HF_USE_PROD=1`. I also added a fix that makes the script ignore the json, csv, text, parquet and pandas dataset builders. cc @SBrandeis
closed
https://github.com/huggingface/datasets/pull/3633
2022-01-26T13:49:37
2022-01-26T13:56:21
2022-01-26T13:56:21
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,115,027,185
3,632
Adding CC-100: Monolingual Datasets from Web Crawl Data (Datasets links are invalid)
## Describe the bug The dataset links are no longer valid for CC-100. It seems that the website that was hosting these files is no longer accessible, and therefore this dataset has become unusable. Check out the dataset [homepage](http://data.statmt.org/cc-100/), which isn't accessible. The per-language dataset file URLs aren't accessible either: http://data.statmt.org/cc-100/<language code here>.txt.xz (language codes: am, sr, ka, etc.) ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("cc100", "ka") ``` It throws a 503 error. ## Expected results It should successfully download and load the dataset, but it throws an exception because the dataset files are no longer accessible. ## Environment info Run from Google Colab. Just installed the library using pip: ```!pip install -U datasets```
closed
https://github.com/huggingface/datasets/issues/3632
2022-01-26T13:35:37
2022-02-10T06:58:11
2022-02-10T06:58:11
{ "login": "AnzorGozalishvili", "id": 55232459, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,114,833,662
3,631
Labels conflict when loading a local CSV file.
## Describe the bug I am trying to load a local CSV file with a separate file containing label names. It is successfully loaded for the first time, but when I try to load it again, there is a conflict between provided labels and the cached dataset info. Disabling caching globally and/or using `download_mode="force_redownload"` did not help. ## Steps to reproduce the bug ```python load_dataset('csv', data_files='data/my_data.csv', features=Features(text=Value(dtype='string'), label=ClassLabel(names_file='data/my_data_labels.txt'))) ``` `my_data.csv` file has the following structure: ``` text,label "example1",0 "example2",1 ... ``` and the `my_data_labels.txt` looks like this: ``` label1 label2 ... ``` ## Expected results Successfully loaded dataset. ## Actual results ```python File "/usr/local/lib/python3.8/site-packages/datasets/load.py", line 1706, in load_dataset ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory) File "/usr/local/lib/python3.8/site-packages/datasets/builder.py", line 766, in as_dataset datasets = utils.map_nested( File "/usr/local/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 261, in map_nested mapped = [ File "/usr/local/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 262, in <listcomp> _single_map_nested((function, obj, types, None, True)) File "/usr/local/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 197, in _single_map_nested return function(data_struct) File "/usr/local/lib/python3.8/site-packages/datasets/builder.py", line 797, in _build_single_dataset ds = self._as_dataset( File "/usr/local/lib/python3.8/site-packages/datasets/builder.py", line 872, in _as_dataset return Dataset(fingerprint=fingerprint, **dataset_kwargs) File "/usr/local/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 638, in __init__ inferred_features = Features.from_arrow_schema(arrow_table.schema) File "/usr/local/lib/python3.8/site-packages/datasets/features/features.py", line 1242, in from_arrow_schema return Features.from_dict(metadata["info"]["features"]) File "/usr/local/lib/python3.8/site-packages/datasets/features/features.py", line 1271, in from_dict obj = generate_from_dict(dic) File "/usr/local/lib/python3.8/site-packages/datasets/features/features.py", line 1076, in generate_from_dict return {key: generate_from_dict(value) for key, value in obj.items()} File "/usr/local/lib/python3.8/site-packages/datasets/features/features.py", line 1076, in <dictcomp> return {key: generate_from_dict(value) for key, value in obj.items()} File "/usr/local/lib/python3.8/site-packages/datasets/features/features.py", line 1083, in generate_from_dict return class_type(**{k: v for k, v in obj.items() if k in field_names}) File "<string>", line 7, in __init__ File "/usr/local/lib/python3.8/site-packages/datasets/features/features.py", line 776, in __post_init__ raise ValueError("Please provide either names or names_file but not both.") ValueError: Please provide either names or names_file but not both. ``` ## Environment info - `datasets` version: 1.18.0 - Python version: 3.8.2
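One possible workaround while the cached info conflicts with `names_file` is to read the label file manually and pass explicit `names`, so the provided feature matches what gets cached; a sketch using the file names from the report above:

```python
from datasets import load_dataset, Features, Value, ClassLabel

# Read the label names ourselves instead of relying on names_file.
with open("data/my_data_labels.txt", encoding="utf-8") as f:
    label_names = [line.strip() for line in f if line.strip()]

ds = load_dataset(
    "csv",
    data_files="data/my_data.csv",
    features=Features(text=Value("string"), label=ClassLabel(names=label_names)),
)
```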
closed
https://github.com/huggingface/datasets/issues/3631
2022-01-26T10:00:33
2022-02-11T23:02:31
2022-02-11T23:02:31
{ "login": "pichljan", "id": 8571301, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,114,578,625
3,630
DuplicatedKeysError of NewsQA dataset
After processing the dataset following official [NewsQA](https://github.com/Maluuba/newsqa), I used datasets to load it: ``` a = load_dataset('newsqa', data_dir='news') ``` and the following error occurred: ``` Using custom data configuration default-data_dir=news Downloading and preparing dataset newsqa/default to /root/.cache/huggingface/datasets/newsqa/default-data_dir=news/1.0.0/b0b23e22d94a3d352ad9d75aff2b71375264a122fae301463079ee8595e05ab9... Traceback (most recent call last): File "/usr/local/lib/python3.8/dist-packages/datasets/builder.py", line 1084, in _prepare_split writer.write(example, key) File "/usr/local/lib/python3.8/dist-packages/datasets/arrow_writer.py", line 442, in write self.check_duplicate_keys() File "/usr/local/lib/python3.8/dist-packages/datasets/arrow_writer.py", line 453, in check_duplicate_keys raise DuplicatedKeysError(key) datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET ! Found duplicate Key: ./cnn/stories/6a0f9c8a5d0c6e8949b37924163c92923fe5770d.story Keys should be unique and deterministic in nature During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.8/dist-packages/datasets/load.py", line 1694, in load_dataset builder_instance.download_and_prepare( File "/usr/local/lib/python3.8/dist-packages/datasets/builder.py", line 595, in download_and_prepare self._download_and_prepare( File "/usr/local/lib/python3.8/dist-packages/datasets/builder.py", line 684, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/usr/local/lib/python3.8/dist-packages/datasets/builder.py", line 1086, in _prepare_split num_examples, num_bytes = writer.finalize() File "/usr/local/lib/python3.8/dist-packages/datasets/arrow_writer.py", line 524, in finalize self.check_duplicate_keys() File "/usr/local/lib/python3.8/dist-packages/datasets/arrow_writer.py", line 453, in check_duplicate_keys raise DuplicatedKeysError(key) datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET ! Found duplicate Key: ./cnn/stories/6a0f9c8a5d0c6e8949b37924163c92923fe5770d.story Keys should be unique and deterministic in nature ```
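For context, the error means the dataset script yields the same key (here a story path) for more than one example. A generic sketch of the usual remedy, building a unique key with `enumerate`; the names below are illustrative, not the actual newsqa script:

```python
# Two examples sharing the same story file would collide if the path alone were the key.
examples = [
    {"story_id": "./cnn/stories/6a0f9c8a.story", "question": "q1"},
    {"story_id": "./cnn/stories/6a0f9c8a.story", "question": "q2"},
]

def generate_examples(examples):
    # A composite key (story id + running index) stays unique and deterministic.
    for idx, example in enumerate(examples):
        yield f"{example['story_id']}_{idx}", example

for key, ex in generate_examples(examples):
    print(key)
```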
closed
https://github.com/huggingface/datasets/issues/3630
2022-01-26T03:05:49
2022-02-14T08:37:19
2022-02-14T08:37:19
{ "login": "StevenTang1998", "id": 37647985, "type": "User" }
[ { "name": "dataset bug", "color": "2edb81" } ]
false
[]
1,113,971,575
3,629
Fix Hub repos update when there's a new release
It was not listing the full list of datasets correctly (cc @SBrandeis), which is why it failed for 1.18.0. We should be good now!
closed
https://github.com/huggingface/datasets/pull/3629
2022-01-25T14:39:45
2022-01-25T14:55:46
2022-01-25T14:55:46
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,113,930,644
3,628
Dataset Card Creator drops information for "Additional Information" Section
First of all, the card creator is a great addition and really helpful for streamlining dataset cards! ## Describe the bug I encountered an inconvenient bug when entering "Additional Information" in the react app, which drops already entered text when switching to a previous section, and then back again to "Additional Information". I was able to reproduce the issue in both Firefox and Chrome, so I suspect a problem with the React logic that doesn't expect users to switch back in the final section. Edit: I'm also not sure whether this is the right place to open the bug report on, since it's not clear to me which particular project it belongs to, or where I could find associated source code. ## Steps to reproduce the bug 1. Navigate to the Section "Additional Information" in the [dataset card creator](https://huggingface.co/datasets/card-creator/) 2. Enter text in an arbitrary field, e.g., "Dataset Curators". 3. Switch back to a previous section, like "Dataset Creation". 4. When switching back again to "Additional Information", the text has been deleted. Notably, this behavior can be reproduced again and again, it's not just problematic for the first "switch-back" from Additional Information. ## Expected results For step 4, the previously entered information should still be present in the boxes, similar to the behavior to all other sections (switching back there works as expected) ## Actual results The text boxes are empty again, and previously entered text got deleted. ## Environment info - `datasets` version: N/A - Platform: Firefox 96.0 / Chrome 97.0 - Python version: N/A - PyArrow version: N/A
open
https://github.com/huggingface/datasets/issues/3628
2022-01-25T14:06:17
2022-01-25T14:09:01
null
{ "login": "dennlinger", "id": 26013491, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,113,556,837
3,627
Fix host URL in The Pile datasets
This PR fixes the host URL in The Pile datasets, now that they have mirrored their data on another server. Fix #3626.
closed
https://github.com/huggingface/datasets/pull/3627
2022-01-25T08:11:28
2022-07-20T20:54:42
2022-02-14T08:40:58
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,113,534,436
3,626
The Pile cannot connect to host
## Describe the bug The Pile had issues with its previous host server and has mirrored its content to another server. The host URL should be updated accordingly.
closed
https://github.com/huggingface/datasets/issues/3626
2022-01-25T07:43:33
2022-02-14T08:40:58
2022-02-14T08:40:58
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,113,017,522
3,625
Add a metadata field for when source data was produced
**Is your feature request related to a problem? Please describe.** The current problem is that information about when source data was produced is not easily visible. Though there are a variety of metadata fields available in the dataset viewer, time period information is not included. This feature request suggests making metadata relating to the time that the underlying *source* data was produced more prominent and outlines why this specific information is of particular importance, both in domain-specific historic research and more broadly. **Describe the solution you'd like** There are a variety of metadata fields exposed in the dataset viewer (license, task categories, etc.). These fields make this metadata more prominent both for human users and as potentially machine-actionable information (for example, through the API). I would propose to add a metadata field that says when some underlying data was produced. For example, a dataset would be labelled as being produced between `1800-1900`. **Describe alternatives you've considered** This information is sometimes available in the Datacard or a paper describing the dataset. However, it's often not that easy to identify or extract this information, particularly if you want to use this field as a filter to identify relevant datasets. **Additional context** I believe this feature is relevant for a number of reasons: - Increasingly, there is an interest in using historical data for training language models (for example, https://huggingface.co/dbmdz/bert-base-historic-dutch-cased), and datasets to support this task (for example, https://huggingface.co/datasets/bnl_newspapers). For these datasets, indicating the time periods covered is particularly relevant. - More broadly, time is likely a common source of domain drift. Datasets of movie reviews from the 90s may not work well for recent movie reviews. As the documentation and long-term management of ML data become more of a priority, quickly understanding when the underlying text (or other data types) was produced is arguably more important. - time-series data: datasets are adding more support for time series data. Again, the periods covered might be particularly relevant here. **open questions** - I think some of my points above apply not only to the underlying data but also to annotations. As a result, there could also be an argument for encoding this information somewhere. However, I would argue (but could be persuaded otherwise) that this is probably less important for filtering. This type of context is already addressed in the datasheets template and often requires more narrative to discuss. - what level of granularity would make sense for this? e.g. assigning a decade, century or year? - how to encode this information? What formatting makes sense - what specific time to encode; a date range? (mean, modal, min, max value?) This is a slightly amorphous feature request - I would be happy to discuss further/try and propose a more concrete solution if this seems like something that could be worth considering. I realise this might also touch on other parts of the 🤗 Hub ecosystem.
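Purely as an illustration of the open questions above (the field name, granularity and encoding are all hypothetical, not an existing card field), one possible shape for such metadata:

```python
# Hypothetical sketch of a "when was the source data produced" field,
# expressed as the dict that YAML front matter in a dataset card would map to.
proposed_metadata = {
    "time_period": {
        "start": 1800,          # earliest year of the source material
        "end": 1900,            # latest year of the source material
        "granularity": "year",  # could also be "decade" or "century"
    }
}
print(proposed_metadata["time_period"])
```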
open
https://github.com/huggingface/datasets/issues/3625
2022-01-24T18:52:39
2022-06-28T13:54:49
null
{ "login": "davanstrien", "id": 8995957, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,112,835,239
3,623
Extend support for streaming datasets that use os.path.relpath
This PR extends the support in streaming mode for datasets that use `os.path.relpath`, by patching that function. This feature will also be useful to yield the relative path of audio or image files, within an archive or parent dir. Close #3622.
closed
https://github.com/huggingface/datasets/pull/3623
2022-01-24T16:00:52
2022-02-04T14:03:55
2022-02-04T14:03:54
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,112,831,661
3,622
Extend support for streaming datasets that use os.path.relpath
Extend support for streaming datasets that use `os.path.relpath`. This feature will also be useful to yield the relative path of audio or image files.
closed
https://github.com/huggingface/datasets/issues/3622
2022-01-24T15:58:23
2022-02-04T14:03:54
2022-02-04T14:03:54
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,112,720,434
3,621
Consider adding `ipywidgets` as a dependency.
When I install `datasets` in a fresh virtualenv with jupyterlab I always see this error. ``` ImportError: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html ``` It's a bit of a nuisance, because I need to shut down the jupyterlab server in order to install the required dependency. Might it be an option to just include it as a dependency here?
closed
https://github.com/huggingface/datasets/issues/3621
2022-01-24T14:27:11
2022-02-24T09:04:36
2022-02-24T09:04:36
{ "login": "koaning", "id": 1019791, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,112,677,252
3,620
Add Fon language tag
Add Fon language tag to resources.
closed
https://github.com/huggingface/datasets/pull/3620
2022-01-24T13:52:26
2022-02-04T14:04:36
2022-02-04T14:04:35
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,112,611,415
3,619
fix meta in mls
`monolingual` value of `multilinguality` param in yaml meta was changed to `multilingual` :)
closed
https://github.com/huggingface/datasets/pull/3619
2022-01-24T12:54:38
2022-01-24T20:53:22
2022-01-24T20:53:22
{ "login": "polinaeterna", "id": 16348744, "type": "User" }
[]
true
[]
1,112,123,365
3,618
TIMIT Dataset not working with GPU
## Describe the bug I am working trying to use the TIMIT dataset in order to fine-tune Wav2Vec2 model and I am unable to load the "audio" column from the dataset when working with a GPU. I am working on Amazon Sagemaker Studio, on the Python 3 (PyTorch 1.8 Python 3.6 GPU Optimized) environment, with a single ml.g4dn.xlarge instance (corresponds to a Tesla T4 GPU). I don't know if the issue is GPU related or Python environment related because everything works when I work off of the CPU Optimized environment with a non-GPU instance. My code also works on Google Colab with a GPU instance. This issue is blocking because I cannot get the 'audio' column in any way due to this error, which means that I can't pass it to any functions. I later use the dataset.map function and that is where I originally noticed this error. ## Steps to reproduce the bug ```python from datasets import load_dataset timit_train = load_dataset('timit_asr', split='train') print(timit_train['audio']) ``` ## Expected results Expected to see inside the 'audio' column, which contains an 'array' nested field with the array data I actually need. ## Actual results Traceback ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-6-ceeac555e921> in <module> ----> 1 timit_train['audio'] /opt/conda/lib/python3.6/site-packages/datasets/arrow_dataset.py in __getitem__(self, key) 1917 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools).""" 1918 return self._getitem( -> 1919 key, 1920 ) 1921 /opt/conda/lib/python3.6/site-packages/datasets/arrow_dataset.py in _getitem(self, key, decoded, **kwargs) 1902 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None) 1903 formatted_output = format_table( -> 1904 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns 1905 ) 1906 return formatted_output /opt/conda/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_table(table, key, formatter, format_columns, output_all_columns) 529 python_formatter = PythonFormatter(features=None) 530 if format_columns is None: --> 531 return formatter(pa_table, query_type=query_type) 532 elif query_type == "column": 533 if key in format_columns: /opt/conda/lib/python3.6/site-packages/datasets/formatting/formatting.py in __call__(self, pa_table, query_type) 280 return self.format_row(pa_table) 281 elif query_type == "column": --> 282 return self.format_column(pa_table) 283 elif query_type == "batch": 284 return self.format_batch(pa_table) /opt/conda/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_column(self, pa_table) 315 column = self.python_arrow_extractor().extract_column(pa_table) 316 if self.decoded: --> 317 column = self.python_features_decoder.decode_column(column, pa_table.column_names[0]) 318 return column 319 /opt/conda/lib/python3.6/site-packages/datasets/formatting/formatting.py in decode_column(self, column, column_name) 221 222 def decode_column(self, column: list, column_name: str) -> list: --> 223 return self.features.decode_column(column, column_name) if self.features else column 224 225 def decode_batch(self, batch: dict) -> dict: /opt/conda/lib/python3.6/site-packages/datasets/features/features.py in decode_column(self, column, column_name) 1337 return ( 1338 [self[column_name].decode_example(value) if value is not None else None for value in column] -> 1339 if 
self._column_requires_decoding[column_name] 1340 else column 1341 ) /opt/conda/lib/python3.6/site-packages/datasets/features/features.py in <listcomp>(.0) 1336 """ 1337 return ( -> 1338 [self[column_name].decode_example(value) if value is not None else None for value in column] 1339 if self._column_requires_decoding[column_name] 1340 else column /opt/conda/lib/python3.6/site-packages/datasets/features/audio.py in decode_example(self, value) 85 dict 86 """ ---> 87 path, file = (value["path"], BytesIO(value["bytes"])) if value["bytes"] is not None else (value["path"], None) 88 if path is None and file is None: 89 raise ValueError(f"An audio sample should have one of 'path' or 'bytes' but both are None in {value}.") TypeError: string indices must be integers ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.0 - Platform: Linux-4.14.256-197.484.amzn2.x86_64-x86_64-with-debian-buster-sid - Python version: 3.6.13 - PyArrow version: 6.0.1
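One hedged troubleshooting step, not a confirmed fix: force a fresh download to rule out a cache written by a different environment or library version as the source of the decoding error.

```python
from datasets import load_dataset

# Bypass any previously cached copy of the dataset.
timit_train = load_dataset("timit_asr", split="train", download_mode="force_redownload")
print(timit_train.features["audio"])
```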
closed
https://github.com/huggingface/datasets/issues/3618
2022-01-24T03:26:03
2023-07-25T15:20:20
2023-07-25T15:20:20
{ "login": "TheSeamau5", "id": 3227869, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,111,938,691
3,617
PR for the CFPB Consumer Complaints dataset
I think I followed all the steps, but please let me know if anything needs changing or if there are any improvements I can make to the code quality.
closed
https://github.com/huggingface/datasets/pull/3617
2022-01-23T17:47:12
2022-02-07T21:08:31
2022-02-07T21:08:31
{ "login": "kayvane1", "id": 42403093, "type": "User" }
[]
true
[]
1,111,587,861
3,616
Make streamable the BnL Historical Newspapers dataset
I've refactored the code in order to make the dataset streamable and to avoid it taking too long: - I've used `iter_files` Close #3615
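A minimal sketch of the `iter_files` pattern referred to above (the class name, URL and features are placeholders, not the actual BnL script): the split generator hands an iterator over files to `_generate_examples`, which keeps streaming mode lazy instead of materialising the whole file listing.

```python
import datasets

class NewspapersSketch(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(features=datasets.Features({"path": datasets.Value("string")}))

    def _split_generators(self, dl_manager):
        # Placeholder URL; download_and_extract returns the local/extracted path.
        data_dir = dl_manager.download_and_extract("https://example.org/newspapers.zip")
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"files": dl_manager.iter_files(data_dir)},
            )
        ]

    def _generate_examples(self, files):
        for idx, path in enumerate(files):
            yield idx, {"path": path}
```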
closed
https://github.com/huggingface/datasets/pull/3616
2022-01-22T14:52:36
2022-02-04T14:05:23
2022-02-04T14:05:21
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,111,576,876
3,615
Dataset BnL Historical Newspapers does not work in streaming mode
## Describe the bug When trying to load in streaming mode, it "hangs"... ## Steps to reproduce the bug ```python ds = load_dataset("bnl_newspapers", split="train", streaming=True) ``` ## Expected results The code should be optimized, so that it works fast in streaming mode. CC: @davanstrien
closed
https://github.com/huggingface/datasets/issues/3615
2022-01-22T14:12:59
2022-02-04T14:05:21
2022-02-04T14:05:21
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,110,736,657
3,614
Minor fixes
This PR: * adds "desc" to the `ignore_kwargs` list in `Dataset.filter` * fixes the default value of `id` in `DatasetDict.prepare_for_task`
closed
https://github.com/huggingface/datasets/pull/3614
2022-01-21T17:48:44
2022-01-24T12:45:49
2022-01-24T12:45:49
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,110,684,015
3,613
Files not updating in dataset viewer
## Dataset viewer issue for '*name of the dataset*' **Link:** Some examples: * https://huggingface.co/datasets/abidlabs/crowdsourced-speech4 * https://huggingface.co/datasets/abidlabs/test-audio-13 *short description of the issue* It seems that the dataset viewer is reading a cached version of the dataset and it is not updating to reflect new files that are added to the dataset. I get this error: ![image](https://user-images.githubusercontent.com/1778297/150566660-30dc0dcd-18fd-4471-b70c-7c4bdc6a23c6.png) Am I the one who added this dataset? Yes
closed
https://github.com/huggingface/datasets/issues/3613
2022-01-21T16:47:20
2022-01-22T08:13:13
2022-01-22T08:13:13
{ "login": "abidlabs", "id": 1778297, "type": "User" }
[ { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,110,506,466
3,612
wikifix
This should get the wikipedia dataloading script back up and running - at least I hope so (tested with languages ff and ii)
closed
https://github.com/huggingface/datasets/pull/3612
2022-01-21T14:05:11
2022-02-03T17:58:16
2022-02-03T17:58:16
{ "login": "apergo-ai", "id": 68908804, "type": "User" }
[]
true
[]
1,110,399,096
3,611
Indexing bug after dataset.select()
## Describe the bug A clear and concise description of what the bug is. Dataset indexing is not working as expected after `dataset.select(range(100))` ## Steps to reproduce the bug ```python # Sample code to reproduce the bug import datasets task_to_keys = { "cola": ("sentence", None), "mnli": ("premise", "hypothesis"), "mrpc": ("sentence1", "sentence2"), "qnli": ("question", "sentence"), "qqp": ("question1", "question2"), "rte": ("sentence1", "sentence2"), "sst2": ("sentence", None), "stsb": ("sentence1", "sentence2"), "wnli": ("sentence1", "sentence2"), } task_name = "sst2" raw_datasets = datasets.load_dataset("glue", task_name) train_dataset = raw_datasets["train"] print("before select: ",train_dataset[-2:]) # before select: {'sentence': ['a patient viewer ', 'this new jangle of noise , mayhem and stupidity must be a serious contender for the title . '], 'label': [1, 0], 'idx': [67347, 67348]} train_dataset = train_dataset.select(range(100)) print("after select: ",train_dataset[-2:]) # after select: {'sentence': [], 'label': [], 'idx': []} ``` link to colab: https://colab.research.google.com/drive/1LngeRC9f0jE7eSQ4Kh1cIeb411lRXQD-?usp=sharing ## Expected results A clear and concise description of the expected results. showing 98, 99 index data ## Actual results Specify the actual results or traceback. empty ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.17.0 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyArrow version: 3.0.0
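Until the underlying indexing issue is fixed, two ways to read the tail of the selected subset appear to work (a sketch, assuming `flatten_indices` is available in the installed version):

```python
import datasets

raw_datasets = datasets.load_dataset("glue", "sst2")
train_dataset = raw_datasets["train"].select(range(100))

# Explicit positive indices avoid the negative-slice path entirely.
print(train_dataset[98:100])

# Alternatively, materialise the selection first, then slice from the end.
print(train_dataset.flatten_indices()[-2:])
```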
closed
https://github.com/huggingface/datasets/issues/3611
2022-01-21T12:09:30
2022-01-27T18:16:22
2022-01-27T18:16:22
{ "login": "kamalkraj", "id": 17096858, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,109,777,314
3,610
Checksum error when trying to load amazon_review dataset
## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug I am getting the issue when trying to load dataset using ``` dataset = load_dataset("amazon_polarity") ``` ## Expected results dataset loaded ## Actual results ``` --------------------------------------------------------------------------- NonMatchingChecksumError Traceback (most recent call last) <ipython-input-3-b4758ba980ae> in <module>() ----> 1 dataset = load_dataset("amazon_polarity") 2 dataset.set_format(type='pandas') 3 content_series = dataset['train']['content'] 4 label_series = dataset['train']['label'] 5 df = pd.concat([content_series, label_series], axis=1) 3 frames /usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name) 38 if len(bad_urls) > 0: 39 error_msg = "Checksums didn't match" + for_verification_name + ":\n" ---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls)) 41 logger.info("All the checksums matched successfully" + for_verification_name) 42 NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/u/0/uc?id=0Bz8a_Dbh9QhbaW12WVVZS2drcnM&export=download'] ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.17.0 - Platform: Google colab - Python version: 3.7.12
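A generic workaround that is sometimes suggested for `NonMatchingChecksumError` (a hedged sketch; it only helps if the hosted file is still intact and merely differs from the recorded checksum, and it does not address the underlying hosting issue):

```python
from datasets import load_dataset

# Skip checksum/size verification of the downloaded files.
dataset = load_dataset("amazon_polarity", ignore_verifications=True)
```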
closed
https://github.com/huggingface/datasets/issues/3610
2022-01-20T21:20:32
2022-01-21T13:22:31
2022-01-21T13:22:31
{ "login": "ghost", "id": 10137, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,109,579,112
3,609
Fixes to pubmed dataset download function
PubMed has updated its settings for 2022, and thus the existing download script does not work.
closed
https://github.com/huggingface/datasets/pull/3609
2022-01-20T17:31:35
2022-03-03T16:18:52
2022-03-03T14:23:35
{ "login": "spacemanidol", "id": 3886120, "type": "User" }
[]
true
[]
1,109,310,981
3,608
Add support for continuous metrics (RMSE, MAE)
**Is your feature request related to a problem? Please describe.** I am uploading our dataset and models for the "Constructing interval measures" method we've developed, which uses item response theory to convert multiple discrete labels into a continuous spectrum for hate speech. Once we have this outcome our NLP models conduct regression rather than classification, so binary metrics are not relevant. The only continuous metrics available at https://huggingface.co/metrics are pearson & spearman correlation, which don't ensure that the prediction is on the same scale as the outcome. **Describe the solution you'd like** I would like to be able to tag our models on the Hub with the following metrics: - RMSE - MAE **Describe alternatives you've considered** I don't know if there are any alternatives. **Additional context** Our preprint is available here: https://arxiv.org/abs/2009.10277 . We are making it available for use in Jigsaw's Toxic Severity Rating Kaggle competition: https://www.kaggle.com/c/jigsaw-toxic-severity-rating/overview . I have our first model uploaded to the Hub at https://huggingface.co/ucberkeley-dlab/hate-measure-roberta-large Thanks, Chris
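For reference, the two requested metrics are simple to state; a minimal sketch of the definitions being asked for (plain NumPy, not an existing `datasets` metric):

```python
import numpy as np

def rmse(predictions, references):
    predictions, references = np.asarray(predictions, dtype=float), np.asarray(references, dtype=float)
    return float(np.sqrt(np.mean((predictions - references) ** 2)))

def mae(predictions, references):
    predictions, references = np.asarray(predictions, dtype=float), np.asarray(references, dtype=float)
    return float(np.mean(np.abs(predictions - references)))

print(rmse([0.1, 0.9], [0.0, 1.0]), mae([0.1, 0.9], [0.0, 1.0]))  # 0.1 0.1
```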
closed
https://github.com/huggingface/datasets/issues/3608
2022-01-20T13:35:36
2022-03-09T17:18:20
2022-03-09T17:18:20
{ "login": "ck37", "id": 50770, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "good first issue", "color": "7057ff" } ]
false
[]
1,109,218,370
3,607
Add MIT Scene Parsing Benchmark
Add MIT Scene Parsing Benchmark (a subset of ADE20k). TODOs: * [x] add dummy data * [x] add dataset card * [x] generate `dataset_info.json`
closed
https://github.com/huggingface/datasets/pull/3607
2022-01-20T12:03:07
2022-02-18T12:51:01
2022-02-18T12:51:00
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,108,918,701
3,606
audio column not saved correctly after resampling
## Describe the bug After resampling the audio column, saving with save_to_disk doesn't seem to save with the correct type. ## Steps to reproduce the bug - load a subset of common voice dataset (48Khz) - resample audio column to 16Khz - save with save_to_disk() - load with load_from_disk() ## Expected results I expected that after saving the data, and then loading it back in, the audio column has the correct dataset.Audio type (i.e. same as before saving it) {'accent': Value(dtype='string', id=None), 'age': Value(dtype='string', id=None), 'audio': Audio(sampling_rate=16000, mono=True, _storage_dtype='string', id=None), 'client_id': Value(dtype='string', id=None), 'down_votes': Value(dtype='int64', id=None), 'gender': Value(dtype='string', id=None), 'locale': Value(dtype='string', id=None), 'path': Value(dtype='string', id=None), 'segment': Value(dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'up_votes': Value(dtype='int64', id=None)} ## Actual results Audio column does not have the right type {'accent': Value(dtype='string', id=None), 'age': Value(dtype='string', id=None), 'audio': {'bytes': Value(dtype='binary', id=None), 'path': Value(dtype='string', id=None)}, 'client_id': Value(dtype='string', id=None), 'down_votes': Value(dtype='int64', id=None), 'gender': Value(dtype='string', id=None), 'locale': Value(dtype='string', id=None), 'path': Value(dtype='string', id=None), 'segment': Value(dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'up_votes': Value(dtype='int64', id=None)} ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.17.0 - Platform: linux - Python version: - PyArrow version:
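A possible workaround while the feature type is not preserved across `save_to_disk`/`load_from_disk`: re-cast the column after loading (a sketch; the path is a placeholder for wherever the resampled dataset was saved):

```python
from datasets import load_from_disk, Audio

ds = load_from_disk("common_voice_16khz")  # placeholder path
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
print(ds.features["audio"])
```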
closed
https://github.com/huggingface/datasets/issues/3606
2022-01-20T06:37:10
2022-01-23T01:41:01
2022-01-23T01:24:14
{ "login": "laphang", "id": 24724502, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,108,738,561
3,605
Adding Turkic X-WMT evaluation set for machine translation
This dataset is a human-translated evaluation set for MT, crowdsourced and provided by the [Turkic Interlingua](turkic-interlingua.org) community. It contains eval sets for 8 Turkic languages covering 88 language directions. The languages covered are: Azerbaijani (az) Bashkir (ba) English (en) Karakalpak (kaa) Kazakh (kk) Kirghiz (ky) Russian (ru) Turkish (tr) Sakha (sah) Uzbek (uz) More info about the corpus is here: [https://github.com/turkic-interlingua/til-mt/tree/master/xwmt](https://github.com/turkic-interlingua/til-mt/tree/master/xwmt) A paper describing the test set is here: [https://arxiv.org/abs/2109.04593](https://arxiv.org/abs/2109.04593)
closed
https://github.com/huggingface/datasets/pull/3605
2022-01-20T01:40:29
2022-01-31T09:50:57
2022-01-31T09:50:57
{ "login": "mirzakhalov", "id": 26018417, "type": "User" }
[]
true
[]
1,108,477,316
3,604
Dataset Viewer not showing Previews for Private Datasets
## Dataset viewer issue for 'abidlabs/test-audio-13' It seems that the dataset viewer does not show previews for `private` datasets, even for the user who's private dataset it is. See [1] for example. If I change the visibility to public, then it does show, but it would be useful to have the viewer even for private datasets. ![image](https://user-images.githubusercontent.com/1778297/150200515-93ff1545-11fd-4793-be64-6bed3cd895e2.png) **Link:** [1] https://huggingface.co/datasets/abidlabs/test-audio-13 **Am I the one who added this dataset?** Yes
closed
https://github.com/huggingface/datasets/issues/3604
2022-01-19T19:29:26
2022-09-26T08:04:43
2022-09-26T08:04:43
{ "login": "abidlabs", "id": 1778297, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,108,392,141
3,603
Add British Library books dataset
This pull request adds a dataset of text from digitised (primarily 19th-century) books from the British Library. This collection has previously been used for training language models, e.g. https://github.com/dbmdz/clef-hipe/blob/main/hlms.md. It would be nice to make this dataset more accessible for others to use through `datasets`. This is still a WIP, but I wanted to get some initial feedback. In particular, I wanted to check: - whether I am handling the use of `iter_archive` correctly - I intend to ensure that `dl_manager.download` gets the complete list of URLs to download upfront, so the progress bar knows how much is left to download, and then to pass through `gen_kwargs` a list of downloaded zip archives wrapped in `iter_archive`; I am unsure if there is a more elegant approach for this - the number of configs: I have aimed to keep this limited - there are a lot of URLs covering the entire dataset, but I have tried to base the configs on what I believe the majority of people will want, so that they are not presented with too many options - I am happy to hear suggestions for changing this If there are other glaring omissions or mistakes, I'd be happy to hear them. If this approach seems sensible in general, I will finish all the remaining TODOs, generate dummy_data, etc.
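For reference, a rough sketch of the `download` + `iter_archive` pattern described above; the URLs, feature names, and text handling are placeholders, not the actual British Library configuration:

```python
import datasets

# Hypothetical archive URLs used purely for illustration
_URLS = [
    "https://example.org/blbooks_part1.zip",
    "https://example.org/blbooks_part2.zip",
]


class BLBooks(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        # Download every archive upfront so the progress bar knows the total,
        # then wrap each local archive in iter_archive for memory-friendly iteration.
        archive_paths = dl_manager.download(_URLS)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "archives": [dl_manager.iter_archive(p) for p in archive_paths]
                },
            )
        ]

    def _generate_examples(self, archives):
        key = 0
        for archive in archives:
            for path, file in archive:
                yield key, {"text": file.read().decode("utf-8")}
                key += 1
```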
closed
https://github.com/huggingface/datasets/pull/3603
2022-01-19T17:53:05
2022-01-31T17:22:51
2022-01-31T17:01:49
{ "login": "davanstrien", "id": 8995957, "type": "User" }
[]
true
[]
1,108,247,870
3,602
Update url for conll2003
Following https://github.com/huggingface/datasets/issues/3582 I'm changing the download URL of the conll2003 data files, since the previous host doesn't have the authorization to redistribute the data
closed
https://github.com/huggingface/datasets/pull/3602
2022-01-19T15:35:04
2022-01-20T16:23:03
2022-01-19T15:43:53
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,108,207,131
3,601
Add conll2003 licensing
Following https://github.com/huggingface/datasets/issues/3582, this PR updates the licensing section of the CoNLL2003 dataset.
closed
https://github.com/huggingface/datasets/pull/3601
2022-01-19T15:00:41
2022-01-19T17:17:28
2022-01-19T17:17:28
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,108,131,878
3,600
Use old url for conll2003
As reported in https://github.com/huggingface/datasets/issues/3582 the CoNLL2003 data files are not available in the master branch of the repo that used to host them. For now we can use the URL from an older commit to access the data files
closed
https://github.com/huggingface/datasets/pull/3600
2022-01-19T13:56:49
2022-01-19T14:16:28
2022-01-19T14:16:28
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,108,111,607
3,599
The `add_column()` method does not work if used on dataset sliced with `select()`
Hello, I posted this as a question on the forums ([here](https://discuss.huggingface.co/t/add-column-does-not-work-if-used-on-dataset-sliced-with-select/13893)): I have a dataset with 2000 entries > dataset = Dataset.from_dict({'colA': list(range(2000))}) and from which I want to extract the first one thousand rows, create a new dataset with these and also add a new column to it: > dataset2 = dataset.select(list(range(1000))) > final_dataset = dataset2.add_column('colB', list(range(1000))) This gives an error >ArrowInvalid: Added column's length must match table's length. Expected length 2000 but got length 1000 So it looks like even though it is a dataset with 1000 rows, it "remembers" the shape of the one it was sliced from. ## Actual results ``` ArrowInvalid Traceback (most recent call last) <ipython-input-138-e806860f3ce3> in <module> ----> 1 final_dataset = dataset2.add_column('colB', list(range(1000))) ~/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs) 468 } 469 # apply actual function --> 470 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 471 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 472 # re-apply format to the output ~/.local/lib/python3.8/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs) 404 # Call actual function 405 --> 406 out = func(self, *args, **kwargs) 407 408 # Update fingerprint of in-place transforms + update in-place history of transforms ~/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py in add_column(self, name, column, new_fingerprint) 3343 column_table = InMemoryTable.from_pydict({name: column}) 3344 # Concatenate tables horizontally -> 3345 table = ConcatenationTable.from_tables([self._data, column_table], axis=1) 3346 # Update features 3347 info = self.info.copy() ~/.local/lib/python3.8/site-packages/datasets/table.py in from_tables(cls, tables, axis) 729 table_blocks = to_blocks(table) 730 blocks = _extend_blocks(blocks, table_blocks, axis=axis) --> 731 return cls.from_blocks(blocks) 732 733 @property ~/.local/lib/python3.8/site-packages/datasets/table.py in from_blocks(cls, blocks) 668 @classmethod 669 def from_blocks(cls, blocks: TableBlockContainer) -> "ConcatenationTable": --> 670 blocks = cls._consolidate_blocks(blocks) 671 if isinstance(blocks, TableBlock): 672 table = blocks ~/.local/lib/python3.8/site-packages/datasets/table.py in _consolidate_blocks(cls, blocks) 664 return cls._merge_blocks(blocks, axis=0) 665 else: --> 666 return cls._merge_blocks(blocks) 667 668 @classmethod ~/.local/lib/python3.8/site-packages/datasets/table.py in _merge_blocks(cls, blocks, axis) 650 merged_blocks += list(block_group) 651 else: # both --> 652 merged_blocks = [cls._merge_blocks(row_block, axis=1) for row_block in blocks] 653 if all(len(row_block) == 1 for row_block in merged_blocks): 654 merged_blocks = cls._merge_blocks( ~/.local/lib/python3.8/site-packages/datasets/table.py in <listcomp>(.0) 650 merged_blocks += list(block_group) 651 else: # both --> 652 merged_blocks = [cls._merge_blocks(row_block, axis=1) for row_block in blocks] 653 if all(len(row_block) == 1 for row_block in merged_blocks): 654 merged_blocks = cls._merge_blocks( ~/.local/lib/python3.8/site-packages/datasets/table.py in _merge_blocks(cls, blocks, axis) 647 for is_in_memory, block_group in groupby(blocks, key=lambda x: isinstance(x, InMemoryTable)): 648 if is_in_memory: --> 649 block_group = [InMemoryTable(cls._concat_blocks(list(block_group), axis=axis))] 650 
merged_blocks += list(block_group) 651 else: # both ~/.local/lib/python3.8/site-packages/datasets/table.py in _concat_blocks(blocks, axis) 626 else: 627 for name, col in zip(table.column_names, table.columns): --> 628 pa_table = pa_table.append_column(name, col) 629 return pa_table 630 else: ~/.local/lib/python3.8/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.append_column() ~/.local/lib/python3.8/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.add_column() ~/.local/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() ~/.local/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() ArrowInvalid: Added column's length must match table's length. Expected length 2000 but got length 1000 ``` A solution provided by @mariosasko is to use `dataset2.flatten_indices()` after the `select()` and before attempting to add the new column: > dataset = Dataset.from_dict({'colA': list(range(2000))}) > dataset2 = dataset.select(list(range(1000))) > dataset2 = dataset2.flatten_indices() > final_dataset = dataset2.add_column('colB', list(range(1000))) which works. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.13.2 (note: also checked with version 1.17.0, still the same error) - Platform: Ubuntu 20.04.3 - Python version: 3.8.10 - PyArrow version: 6.0.0
closed
https://github.com/huggingface/datasets/issues/3599
2022-01-19T13:36:50
2022-01-28T15:35:57
2022-01-28T15:35:57
{ "login": "ThGouzias", "id": 59422506, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,108,107,199
3,598
Readme info not being parsed to show on Dataset card page
## Describe the bug The info contained in the README.md file is not being shown in the dataset main page. Basic info and table of contents are properly formatted in the README. ## Steps to reproduce the bug # Sample code to reproduce the bug The README file is this one: https://huggingface.co/datasets/softcatala/Tilde-MODEL-Catalan/blob/main/README.md ## Expected results README info should appear in the Dataset card page. ## Actual results Nothing is shown. However, labels are parsed and shown successfully.
closed
https://github.com/huggingface/datasets/issues/3598
2022-01-19T13:32:29
2022-01-21T10:20:01
2022-01-21T10:20:01
{ "login": "davidcanovas", "id": 79796807, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,108,092,864
3,597
ERROR: File "setup.py" or "setup.cfg" not found. Directory cannot be installed in editable mode: /content
## Bug Installing `datasets` from source with the streaming extra gives the following error. ## Steps to reproduce the bug ```bash ! git clone https://github.com/huggingface/datasets.git ! cd datasets ! pip install -e ".[streaming]" ``` ## Actual results Cloning into 'datasets'... remote: Enumerating objects: 50816, done. remote: Counting objects: 100% (2356/2356), done. remote: Compressing objects: 100% (1606/1606), done. remote: Total 50816 (delta 834), reused 1741 (delta 525), pack-reused 48460 Receiving objects: 100% (50816/50816), 72.47 MiB | 27.68 MiB/s, done. Resolving deltas: 100% (22541/22541), done. Checking out files: 100% (6722/6722), done. ERROR: File "setup.py" or "setup.cfg" not found. Directory cannot be installed in editable mode: /content
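A likely explanation (an assumption, since the snippet appears to be run in a notebook such as Colab, given the `/content` path): each `!`-prefixed command runs in its own subshell, so the `cd` does not persist and `pip install -e .` runs in `/content` instead of the cloned repo. One way around it is to run everything in a single shell invocation:

```bash
!git clone https://github.com/huggingface/datasets.git && cd datasets && pip install -e ".[streaming]"
```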
closed
https://github.com/huggingface/datasets/issues/3597
2022-01-19T13:19:28
2022-08-05T12:35:51
2022-02-14T08:46:34
{ "login": "amitkml", "id": 49492030, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,107,345,338
3,596
Loss of cast `Image` feature on certain dataset method
## Describe the bug When an a column is cast to an `Image` feature, the cast type appears to be lost during certain operations. I first noticed this when using the `push_to_hub` method on a dataset that contained urls pointing to images which had been cast to an `image`. This also happens when using select on a dataset which has had a column cast to an `Image`. I suspect this might be related to https://github.com/huggingface/datasets/pull/3556 but I don't believe that pull request fixes this issue. ## Steps to reproduce the bug An example of casting a url to an image followed by using the `select` method: ```python from datasets import Dataset from datasets import features url = "https://cf.ltkcdn.net/cats/images/std-lg/246866-1200x816-grey-white-kitten.webp" data_dict = {"url": [url]*2} dataset = Dataset.from_dict(data_dict) dataset = dataset.cast_column('url',features.Image()) sample = dataset.select([1]) ``` [example notebook](https://gist.github.com/davanstrien/06e53f4383c28ae77ce1b30d0eaf0d70#file-potential_casting_bug-ipynb) ## Expected results The cast value is maintained when further methods are applied to the dataset. ## Actual results ```python --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-12-47f393bc2d0d> in <module>() ----> 1 sample = dataset.select([1]) 4 frames /usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs) 487 } 488 # apply actual function --> 489 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 490 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 491 # re-apply format to the output /usr/local/lib/python3.7/dist-packages/datasets/fingerprint.py in wrapper(*args, **kwargs) 409 # Call actual function 410 --> 411 out = func(self, *args, **kwargs) 412 413 # Update fingerprint of in-place transforms + update in-place history of transforms /usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in select(self, indices, keep_in_memory, indices_cache_file_name, writer_batch_size, new_fingerprint) 2772 ) 2773 else: -> 2774 return self._new_dataset_with_indices(indices_buffer=buf_writer.getvalue(), fingerprint=new_fingerprint) 2775 2776 @transmit_format /usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in _new_dataset_with_indices(self, indices_cache_file_name, indices_buffer, fingerprint) 2688 split=self.split, 2689 indices_table=indices_table, -> 2690 fingerprint=fingerprint, 2691 ) 2692 /usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in __init__(self, arrow_table, info, split, indices_table, fingerprint) 664 if self.info.features.type != inferred_features.type: 665 raise ValueError( --> 666 f"External features info don't match the dataset:\nGot\n{self.info.features}\nwith type\n{self.info.features.type}\n\nbut expected something like\n{inferred_features}\nwith type\n{inferred_features.type}" 667 ) 668 ValueError: External features info don't match the dataset: Got {'url': Image(id=None)} with type struct<url: extension<arrow.py_extension_type<ImageExtensionType>>> but expected something like {'url': Value(dtype='string', id=None)} with type struct<url: string> ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.17.1.dev0 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyArrow version: 3.0.0
closed
https://github.com/huggingface/datasets/issues/3596
2022-01-18T20:44:01
2022-01-21T18:07:28
2022-01-21T18:07:28
{ "login": "davanstrien", "id": 8995957, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,107,260,527
3,595
Add ImageNet toy datasets from fastai
Adds the ImageNet toy datasets from FastAI: Imagenette, Imagewoof and Imagewang. TODOs: * [ ] add dummy data * [ ] add dataset card * [ ] generate `dataset_info.json`
closed
https://github.com/huggingface/datasets/pull/3595
2022-01-18T19:03:35
2023-09-24T09:39:07
2022-09-30T14:39:35
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
1,107,174,619
3,594
fix multiple language downloading in mC4
If we try to access multiple languages of the [mC4 dataset](https://github.com/huggingface/datasets/tree/master/datasets/mc4), it will throw an error. For example, if we do ```python mc4_subset_two_langs = load_dataset("mc4", languages=["st", "su"]) ``` we get ``` FileNotFoundError: Couldn't find file at https://huggingface.co/datasets/allenai/c4/resolve/1ddc917116b730e1859edef32896ec5c16be51d0/multilingual/c4-st+su.tfrecord-00000-of-00002.json.gz ``` With this fix it should work. Check it (from the root dir of the project): ```python mc4_subset_two_langs = load_dataset("./datasets/mc4/", languages=["st", "su"]) ```
closed
https://github.com/huggingface/datasets/pull/3594
2022-01-18T17:25:19
2022-01-19T11:22:57
2022-01-18T19:10:22
{ "login": "polinaeterna", "id": 16348744, "type": "User" }
[]
true
[]
1,107,070,852
3,593
Update README.md
Towards documenting the license of the Tweet Eval parts.
closed
https://github.com/huggingface/datasets/pull/3593
2022-01-18T15:52:16
2022-01-20T17:14:53
2022-01-20T17:14:53
{ "login": "borgr", "id": 6416600, "type": "User" }
[]
true
[]
1,107,026,723
3,592
Add QuickDraw dataset
Add the QuickDraw dataset. TODOs: * [x] add dummy data * [x] add dataset card * [x] generate `dataset_info.json`
closed
https://github.com/huggingface/datasets/pull/3592
2022-01-18T15:13:39
2022-06-09T10:04:54
2022-06-09T09:56:13
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,106,928,613
3,591
Add support for time, date, duration, and decimal dtypes
Add support for the pyarrow time (maps to `datetime.time` in Python), date (maps to `datetime.date` in Python), duration (maps to `datetime.timedelta` in Python), and decimal (maps to `decimal.Decimal` in Python) dtypes. This should be helpful when writing scripts for time-series datasets.
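A sketch of how these dtypes might be used once merged. The dtype strings follow pyarrow's naming (`time64[us]`, `date32`, `duration[s]`, `decimal128(10, 2)`) and are assumptions about the final `Value` syntax rather than confirmed API:

```python
import datetime
from decimal import Decimal

from datasets import Dataset, Features, Value

features = Features(
    {
        "time_of_day": Value("time64[us]"),
        "day": Value("date32"),
        "elapsed": Value("duration[s]"),
        "price": Value("decimal128(10, 2)"),
    }
)
ds = Dataset.from_dict(
    {
        "time_of_day": [datetime.time(12, 30)],
        "day": [datetime.date(2022, 1, 18)],
        "elapsed": [datetime.timedelta(seconds=42)],
        "price": [Decimal("19.99")],
    },
    features=features,
)
print(ds.features)
print(ds[0])
```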
closed
https://github.com/huggingface/datasets/pull/3591
2022-01-18T13:46:05
2022-01-31T18:29:34
2022-01-20T17:37:33
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,106,784,860
3,590
Update ANLI README.md
Update the license and a few other details concerning ANLI.
closed
https://github.com/huggingface/datasets/pull/3590
2022-01-18T11:22:53
2022-01-20T16:58:41
2022-01-20T16:58:41
{ "login": "borgr", "id": 6416600, "type": "User" }
[]
true
[]
1,106,766,114
3,589
Pin torchmetrics to fix the COMET test
Torchmetrics 0.7.0 got released and has issues with `transformers` (see https://github.com/PyTorchLightning/metrics/issues/770) I'm pinning it to 0.6.0 in the CI, since 0.7.0 makes the COMET metric test fail. COMET requires torchmetrics==0.6.0 anyway.
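For anyone hitting the same failure locally, the pin amounts to something like `pip install "torchmetrics==0.6.0"`; where exactly it is pinned in the CI configuration may differ.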
closed
https://github.com/huggingface/datasets/pull/3589
2022-01-18T11:03:49
2022-01-18T11:04:56
2022-01-18T11:04:55
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,106,749,000
3,588
Update HellaSwag README.md
Adding missing information from the Git repo and the paper.
closed
https://github.com/huggingface/datasets/pull/3588
2022-01-18T10:46:15
2022-01-20T16:57:43
2022-01-20T16:57:43
{ "login": "borgr", "id": 6416600, "type": "User" }
[]
true
[]
1,106,719,182
3,587
No module named 'fsspec.archive'
## Describe the bug Cannot import datasets after installation. ## Steps to reproduce the bug ```shell $ python Python 3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0] :: Anaconda, Inc. on linux Type "help", "copyright", "credits" or "license" for more information. >>> import datasets Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/__init__.py", line 34, in <module> from .arrow_dataset import Dataset, concatenate_datasets File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 61, in <module> from .arrow_writer import ArrowWriter, OptimizedTypedSequence File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/arrow_writer.py", line 28, in <module> from .features import ( File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/features/__init__.py", line 2, in <module> from .audio import Audio File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/features/audio.py", line 7, in <module> from ..utils.streaming_download_manager import xopen File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/utils/streaming_download_manager.py", line 18, in <module> from ..filesystems import COMPRESSION_FILESYSTEMS File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/filesystems/__init__.py", line 6, in <module> from . import compression File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/filesystems/compression.py", line 5, in <module> from fsspec.archive import AbstractArchiveFileSystem ModuleNotFoundError: No module named 'fsspec.archive' ```
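The `fsspec.archive` module only exists in relatively recent fsspec releases, so a plausible fix (an assumption, not confirmed from the report) is that the environment has an outdated fsspec and simply upgrading it, e.g. `pip install -U fsspec`, resolves the import error.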
closed
https://github.com/huggingface/datasets/issues/3587
2022-01-18T10:17:01
2022-08-11T09:57:54
2022-01-18T10:33:10
{ "login": "shuuchen", "id": 13246825, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,106,455,672
3,586
Revisit `enable/disable_` toggle function prefix
As discussed in https://github.com/huggingface/transformers/pull/15167, we should revisit the `enable/disable_` toggle function prefix, potentially in favor of `set_enabled_`. Concretely, this translates to - De-deprecating `disable_progress_bar()` - Adding `enable_progress_bar()` - On the caching side, adding `enable_caching` and `disable_caching` Additional decisions have to be made with regards to the existing `set_enabled_X` functions; that is, whether to keep them as is or deprecate them in favor of the aforementioned functions. cc @mariosasko @lhoestq
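A sketch of what the toggle-style surface under discussion would look like in user code; the function names mirror the proposal above and are not all available in a released version at the time of writing:

```python
import datasets

datasets.disable_progress_bar()  # silence tqdm bars during map/download
datasets.enable_progress_bar()   # turn them back on

datasets.disable_caching()       # map/filter results are no longer written to the cache
datasets.enable_caching()
```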
closed
https://github.com/huggingface/datasets/issues/3586
2022-01-18T04:09:55
2022-03-14T15:01:08
2022-03-14T15:01:08
{ "login": "jaketae", "id": 25360440, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,105,821,470
3,585
Datasets streaming + map doesn't work for `Audio`
## Describe the bug When using audio datasets in streaming mode, applying a `map(...)` before iterating leads to an error as the key `array` does not exist anymore. ## Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset("common_voice", "en", streaming=True, split="train") def map_fn(batch): print("audio keys", batch["audio"].keys()) batch["audio"] = batch["audio"]["array"][:100] return batch ds = ds.map(map_fn) sample = next(iter(ds)) ``` I think the audio is somehow decoded before `.map(...)` is actually called. ## Expected results IMO, the above code snippet should work. ## Actual results ```bash audio keys dict_keys(['path', 'bytes']) Traceback (most recent call last): File "./run_audio.py", line 15, in <module> sample = next(iter(ds)) File "/home/patrick/python_bin/datasets/iterable_dataset.py", line 341, in __iter__ for key, example in self._iter(): File "/home/patrick/python_bin/datasets/iterable_dataset.py", line 338, in _iter yield from ex_iterable File "/home/patrick/python_bin/datasets/iterable_dataset.py", line 192, in __iter__ yield key, self.function(example) File "./run_audio.py", line 9, in map_fn batch["input"] = batch["audio"]["array"][:100] KeyError: 'array' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.17.1.dev0 - Platform: Linux-5.3.0-64-generic-x86_64-with-glibc2.17 - Python version: 3.8.12 - PyArrow version: 6.0.1
closed
https://github.com/huggingface/datasets/issues/3585
2022-01-17T12:55:42
2022-01-20T13:28:00
2022-01-20T13:28:00
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[ { "name": "bug", "color": "d73a4a" }, { "name": "duplicate", "color": "cfd3d7" } ]
false
[]
1,105,231,768
3,584
https://huggingface.co/datasets/huggingface/transformers-metadata
## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
closed
https://github.com/huggingface/datasets/issues/3584
2022-01-17T00:18:14
2022-02-14T08:51:27
2022-02-14T08:51:27
{ "login": "ecankirkic", "id": 37082592, "type": "User" }
[ { "name": "wontfix", "color": "ffffff" }, { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,105,195,144
3,583
Add The Medical Segmentation Decathlon Dataset
## Adding a Dataset - **Name:** *The Medical Segmentation Decathlon Dataset* - **Description:** The underlying data set was designed to explore the axis of difficulties typically encountered when dealing with medical images, such as small data sets, unbalanced labels, multi-site data, and small objects. - **Paper:** [link to the dataset paper if available](https://arxiv.org/abs/2106.05735) - **Data:** http://medicaldecathlon.com/ - **Motivation:** Hugging Face seeks to democratize ML for society. One of the growing niches within ML is the ML + Medicine community. Key data sets will help increase the supply of HF resources for starting an initial community. (cc @osanseviero @abidlabs ) Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
open
https://github.com/huggingface/datasets/issues/3583
2022-01-16T21:42:25
2022-03-18T10:44:42
null
{ "login": "omarespejel", "id": 4755430, "type": "User" }
[ { "name": "dataset request", "color": "e99695" }, { "name": "vision", "color": "bfdadc" } ]
false
[]
1,104,877,303
3,582
conll 2003 dataset source url is no longer valid
## Describe the bug Loading `conll2003` dataset fails because it was removed (just yesterday 1/14/2022) from the location it is looking for. ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("conll2003") ``` ## Expected results The dataset should load. ## Actual results It is looking for the dataset at `https://github.com/davidsbatista/NER-datasets/raw/master/CONLL2003/train.txt` but it was removed from there yesterday (see [commit](https://github.com/davidsbatista/NER-datasets/commit/9d8f45cc7331569af8eb3422bbe1c97cbebd5690) that removed the file and related [issue](https://github.com/davidsbatista/NER-datasets/issues/8)). - We should replace this with an alternate valid location. - this is being referenced in the huggingface course chapter 7 [colab notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/chapter7/section2_pt.ipynb), which is also broken. ```python FileNotFoundError Traceback (most recent call last) <ipython-input-4-27c956bec93c> in <module>() 1 from datasets import load_dataset 2 ----> 3 raw_datasets = load_dataset("conll2003") 11 frames /usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token, ignore_url_params) 610 ) 611 elif response is not None and response.status_code == 404: --> 612 raise FileNotFoundError(f"Couldn't find file at {url}") 613 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}") 614 if head_error is not None: FileNotFoundError: Couldn't find file at https://github.com/davidsbatista/NER-datasets/raw/master/CONLL2003/train.txt ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: - Python version: - PyArrow version:
closed
https://github.com/huggingface/datasets/issues/3582
2022-01-15T23:04:17
2022-07-20T13:06:40
2022-01-21T16:57:32
{ "login": "rcanand", "id": 303900, "type": "User" }
[ { "name": "bug", "color": "d73a4a" }, { "name": "dataset bug", "color": "2edb81" } ]
false
[]
1,104,857,822
3,581
Unable to create a dataset from a parquet file in S3
## Describe the bug Trying to create a dataset from a parquet file in S3. ## Steps to reproduce the bug ```python import s3fs from datasets import Dataset s3 = s3fs.S3FileSystem(anon=False) with s3.open(PATH_LTR_TOY_CLEAN_DATASET, 'rb') as s3file: dataset = Dataset.from_parquet(s3file) ``` ## Expected results A new Dataset object ## Actual results ```AttributeError: 'S3File' object has no attribute 'decode'``` ``` AttributeError Traceback (most recent call last) <command-2452877612515691> in <module> 5 6 with s3.open(PATH_LTR_TOY_CLEAN_DATASET, 'rb') as s3file: ----> 7 dataset = Dataset.from_parquet(s3file) /databricks/python/lib/python3.8/site-packages/datasets/arrow_dataset.py in from_parquet(path_or_paths, split, features, cache_dir, keep_in_memory, columns, **kwargs) 907 from .io.parquet import ParquetDatasetReader 908 --> 909 return ParquetDatasetReader( 910 path_or_paths, 911 split=split, /databricks/python/lib/python3.8/site-packages/datasets/io/parquet.py in __init__(self, path_or_paths, split, features, cache_dir, keep_in_memory, **kwargs) 28 path_or_paths = path_or_paths if isinstance(path_or_paths, dict) else {self.split: path_or_paths} 29 hash = _PACKAGED_DATASETS_MODULES["parquet"][1] ---> 30 self.builder = Parquet( 31 cache_dir=cache_dir, 32 data_files=path_or_paths, /databricks/python/lib/python3.8/site-packages/datasets/builder.py in __init__(self, cache_dir, name, hash, base_path, info, features, use_auth_token, namespace, data_files, data_dir, **config_kwargs) 246 247 if data_files is not None and not isinstance(data_files, DataFilesDict): --> 248 data_files = DataFilesDict.from_local_or_remote( 249 sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token 250 ) /databricks/python/lib/python3.8/site-packages/datasets/data_files.py in from_local_or_remote(cls, patterns, base_path, allowed_extensions, use_auth_token) 576 for key, patterns_for_key in patterns.items(): 577 out[key] = ( --> 578 DataFilesList.from_local_or_remote( 579 patterns_for_key, 580 base_path=base_path, /databricks/python/lib/python3.8/site-packages/datasets/data_files.py in from_local_or_remote(cls, patterns, base_path, allowed_extensions, use_auth_token) 544 ) -> "DataFilesList": 545 base_path = base_path if base_path is not None else str(Path().resolve()) --> 546 data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions) 547 origin_metadata = _get_origin_metadata_locally_or_by_urls(data_files, use_auth_token=use_auth_token) 548 return cls(data_files, origin_metadata) /databricks/python/lib/python3.8/site-packages/datasets/data_files.py in resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions) 191 data_files = [] 192 for pattern in patterns: --> 193 if is_remote_url(pattern): 194 data_files.append(Url(pattern)) 195 else: /databricks/python/lib/python3.8/site-packages/datasets/utils/file_utils.py in is_remote_url(url_or_filename) 115 116 def is_remote_url(url_or_filename: str) -> bool: --> 117 parsed = urlparse(url_or_filename) 118 return parsed.scheme in ("http", "https", "s3", "gs", "hdfs", "ftp") 119 /usr/lib/python3.8/urllib/parse.py in urlparse(url, scheme, allow_fragments) 370 Note that we don't break the components up in smaller bits 371 (e.g. 
netloc is a single string) and we don't expand % escapes.""" --> 372 url, scheme, _coerce_result = _coerce_args(url, scheme) 373 splitresult = urlsplit(url, scheme, allow_fragments) 374 scheme, netloc, url, query, fragment = splitresult /usr/lib/python3.8/urllib/parse.py in _coerce_args(*args) 122 if str_input: 123 return args + (_noop,) --> 124 return _decode_args(args) + (_encode_result,) 125 126 # Result objects are more helpful than simple tuples /usr/lib/python3.8/urllib/parse.py in _decode_args(args, encoding, errors) 106 def _decode_args(args, encoding=_implicit_encoding, 107 errors=_implicit_errors): --> 108 return tuple(x.decode(encoding, errors) if x else '' for x in args) 109 110 def _coerce_args(*args): /usr/lib/python3.8/urllib/parse.py in <genexpr>(.0) 106 def _decode_args(args, encoding=_implicit_encoding, 107 errors=_implicit_errors): --> 108 return tuple(x.decode(encoding, errors) if x else '' for x in args) 109 110 def _coerce_args(*args): AttributeError: 'S3File' object has no attribute 'decode' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.17.0 - Platform: Ubuntu 20.04.3 LTS - Python version: 3.8.10 - PyArrow version: 6.0.1
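A possible workaround (an assumption, not a confirmed fix) is to read the parquet file through pandas first and build the `Dataset` from the resulting DataFrame, since `Dataset.from_parquet` appears to expect a path rather than an open file object. The bucket path below is a placeholder:

```python
import pandas as pd
import s3fs

from datasets import Dataset

s3 = s3fs.S3FileSystem(anon=False)
with s3.open("s3://my-bucket/toy_clean_dataset.parquet", "rb") as f:  # placeholder path
    df = pd.read_parquet(f)

dataset = Dataset.from_pandas(df)
```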
open
https://github.com/huggingface/datasets/issues/3581
2022-01-15T21:34:16
2022-02-14T08:52:57
null
{ "login": "regCode", "id": 18012903, "type": "User" }
[ { "name": "bug", "color": "d73a4a" }, { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,104,663,242
3,580
Bug in wiki bio load
wiki_bio is failing to load because of a broken Drive link. Can someone fix this? ![7E90023B-A3B1-4930-BA25-45CCCB4E1710](https://user-images.githubusercontent.com/3104771/149617870-5a32a2da-2c78-483b-bff6-d7534215a423.png) ![653C1C76-C725-4A04-A0D8-084373BA612F](https://user-images.githubusercontent.com/3104771/149617875-ef0e30b0-b76e-48cf-b3eb-93ba8e6e5465.png)
closed
https://github.com/huggingface/datasets/issues/3580
2022-01-15T10:04:33
2022-01-31T08:38:09
2022-01-31T08:38:09
{ "login": "tuhinjubcse", "id": 3104771, "type": "User" }
[ { "name": "dataset bug", "color": "2edb81" } ]
false
[]
1,103,451,118
3,579
Add Text2log Dataset
Adding the text2log dataset, used for training FOL (first-order logic) sentence translation models.
closed
https://github.com/huggingface/datasets/pull/3579
2022-01-14T10:45:01
2022-01-20T17:09:44
2022-01-20T17:09:44
{ "login": "apergo-ai", "id": 68908804, "type": "User" }
[]
true
[]
1,103,403,287
3,578
label information get lost after parquet serialization
## Describe the bug In the *dataset_info.json* file, the information about the label gets lost after dataset serialization. ## Steps to reproduce the bug ```python from datasets import load_dataset # normal save dataset = load_dataset('glue', 'sst2', split='train') dataset.save_to_disk("normal_save") # save after parquet serialization dataset.to_parquet("glue-sst2-train.parquet") dataset = load_dataset("parquet", data_files='glue-sst2-train.parquet') dataset.save_to_disk("save_after_parquet") ``` ## Expected results I expected to keep the label information in the *dataset_info.json* file even after parquet serialization. ## Actual results In the normal serialization I got ```json "label": { "num_classes": 2, "names": [ "negative", "positive" ], "names_file": null, "id": null, "_type": "ClassLabel" }, ``` And after parquet serialization I got ```json "label": { "dtype": "int64", "id": null, "_type": "Value" }, ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.0 - Platform: ubuntu 20.04 - Python version: 3.8.10 - PyArrow version: 6.0.1
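A possible way to keep the label information when reloading the parquet file (an assumption based on the `features` argument of `load_dataset`, not a confirmed fix for the underlying issue) is to pass the original features explicitly:

```python
from datasets import load_dataset

original = load_dataset("glue", "sst2", split="train")
original.to_parquet("glue-sst2-train.parquet")

# Reload while re-attaching the original feature types, including the ClassLabel
reloaded = load_dataset(
    "parquet",
    data_files="glue-sst2-train.parquet",
    features=original.features,
)
```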
closed
https://github.com/huggingface/datasets/issues/3578
2022-01-14T10:10:38
2023-07-25T15:44:53
2023-07-25T15:44:53
{ "login": "Tudyx", "id": 56633664, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]