id: int64 (599M to 3.26B)
number: int64 (1 to 7.7k)
title: string (lengths 1 to 290)
body: string (lengths 0 to 228k)
state: string (2 values)
html_url: string (lengths 46 to 51)
created_at: timestamp[s] (2020-04-14 10:18:02 to 2025-07-23 08:04:53)
updated_at: timestamp[s] (2020-04-27 16:04:17 to 2025-07-23 18:53:44)
closed_at: timestamp[s] (2020-04-14 12:01:40 to 2025-07-23 16:44:42)
user: dict
labels: list (lengths 0 to 4)
is_pull_request: bool (2 classes)
comments: list (lengths 0 to 0)
1,271,112,497
4,492
Pin the revision in imagenet download links
Use the commit sha in the data file URLs of the imagenet-1k download script, in case we want to restructure the data files in the future. For example, we may split them into many more shards for better parallelism (see the sketch after this record). cc @mariosasko
closed
https://github.com/huggingface/datasets/pull/4492
2022-06-14T17:15:17
2022-06-14T17:35:13
2022-06-14T17:25:45
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
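A minimal sketch of the pinning idea from PR 4492 above; the commit sha and file names below are placeholders for illustration, not the script's real constants:

```python
# Hypothetical illustration of pinning a revision in Hub data-file URLs.
_COMMIT_SHA = "0123456789abcdef0123456789abcdef01234567"  # placeholder, not a real sha

# Resolving files at a pinned commit keeps existing download scripts working
# even if the data files are later restructured into many more shards.
_BASE_URL = f"https://huggingface.co/datasets/imagenet-1k/resolve/{_COMMIT_SHA}/data"
_TRAIN_URL = f"{_BASE_URL}/train_images_0.tar.gz"  # file name assumed for illustration
```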
1,270,803,822
4,491
Dataset Viewer issue for Pavithree/test
### Link https://huggingface.co/datasets/Pavithree/test ### Description I have extracted a subset of the original eli5 dataset found on Hugging Face. However, while loading the dataset it throws `ArrowNotImplementedError: Unsupported cast from string to null using function cast_null`. Is there anything missing on my end? Kindly help. ### Owner _No response_
closed
https://github.com/huggingface/datasets/issues/4491
2022-06-14T13:23:10
2022-06-14T14:37:21
2022-06-14T14:34:33
{ "login": "Pavithree", "id": 23344465, "type": "User" }
[ { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,270,719,074
4,490
Use `torch.nested_tensor` for arrays of varying length in torch formatter
Use `torch.nested_tensor` for arrays of varying length in `TorchFormatter` (see the usage sketch after this record). The PyTorch nested tensor API is still in the prototype stage, so we should wait for it to become more mature.
open
https://github.com/huggingface/datasets/issues/4490
2022-06-14T12:19:40
2023-07-07T13:02:58
null
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
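A short usage sketch of the prototype API referenced in issue 4490 above; in recent PyTorch versions the constructor lives under `torch.nested`, and the API may change while in prototype:

```python
import torch

# Two sequences of different lengths, as a varying-length column would produce
a = torch.tensor([1.0, 2.0, 3.0])
b = torch.tensor([4.0, 5.0])

# Pack them without padding into a single nested tensor
nt = torch.nested.nested_tensor([a, b])
print(nt.is_nested)  # True

# Convert to a regular padded tensor when an op needs rectangular input
padded = nt.to_padded_tensor(padding=0.0)  # shape (2, 3)
```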
1,270,706,195
4,489
Add SV-Ident dataset
null
closed
https://github.com/huggingface/datasets/pull/4489
2022-06-14T12:09:00
2022-06-20T08:48:26
2022-06-20T08:37:27
{ "login": "e-tornike", "id": 20404466, "type": "User" }
[]
true
[]
1,270,613,857
4,488
Update PASS dataset version
Update the PASS dataset to version v3 (the newest one) from the [version history](https://github.com/yukimasano/PASS/blob/main/version_history.txt). PS: The older versions are not exposed as configs in the script because v1 was removed from Zenodo, and the same thing will probably happen to v2.
closed
https://github.com/huggingface/datasets/pull/4488
2022-06-14T10:47:14
2022-06-14T16:41:55
2022-06-14T16:32:28
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,270,525,163
4,487
Support streaming UDHR dataset
This PR: - Adds support for streaming the UDHR dataset - Adds the BCP 47 language code as a feature
closed
https://github.com/huggingface/datasets/pull/4487
2022-06-14T09:33:33
2022-06-15T05:09:22
2022-06-15T04:59:49
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,269,518,084
4,486
Add CCAgT dataset
As described in #4075, I could not generate the dummy data. Also, the data repository does not provide the split IDs, but I copied the functions that produce the correct data splits. In summary, to have a better distribution, the data in this dataset should be separated based on the number of NORs in each image.
closed
https://github.com/huggingface/datasets/pull/4486
2022-06-13T14:20:19
2022-07-04T14:37:03
2022-07-04T14:25:45
{ "login": "johnnv1", "id": 20444345, "type": "User" }
[]
true
[]
1,269,463,054
4,485
Fix cast to null
It currently fails with `ArrowNotImplementedError` instead of `TypeError` when one tries to cast an integer to the null type (see the reproduction sketch after this record). Because of this, type inference breaks when one replaces null values with integers in `map` (it first tries to cast to the previous type before inferring the new type). Fix https://github.com/huggingface/datasets/issues/4483
closed
https://github.com/huggingface/datasets/pull/4485
2022-06-13T13:44:32
2022-06-14T13:43:54
2022-06-14T13:34:14
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
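The failure mode fixed by PR 4485 above can be reproduced directly in PyArrow; a minimal sketch (the exact error text may vary across PyArrow versions):

```python
import pyarrow as pa

arr = pa.array([1, 2, 3], type=pa.int64())
try:
    # Arrow has no kernel for int64 -> null, so this raises
    # ArrowNotImplementedError rather than a Python TypeError
    arr.cast(pa.null())
except pa.ArrowNotImplementedError as e:
    print(e)  # Unsupported cast from int64 to null using function cast_null
```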
1,269,383,811
4,484
Better ImportError message when a dataset script dependency is missing
When a dependency is missing for a dataset script, an ImportError message is shown, with a tip to install the missing dependencies. This message is not ideal at the moment: it may show duplicate dependencies, and is not very readable. I improved it from ``` ImportError: To be able to use bigbench, you need to install the following dependencies['bigbench', 'bigbench', 'bigbench', 'bigbench'] using 'pip install "bigbench @ https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz" bigbench bigbench bigbench' for instance' ``` to ``` ImportError: To be able to use bigbench, you need to install the following dependency: bigbench. Please install it using 'pip install "bigbench @ https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz"' for instance' ```
closed
https://github.com/huggingface/datasets/pull/4484
2022-06-13T12:44:37
2022-07-08T14:30:44
2022-06-13T13:50:47
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,269,253,840
4,483
Dataset.map throws pyarrow.lib.ArrowNotImplementedError when converting from list of empty lists
## Describe the bug Dataset.map throws pyarrow.lib.ArrowNotImplementedError: Unsupported cast from int64 to null using function cast_null when converting from a type of 'empty lists' to 'lists with some type'. This appears to be due to the interaction of arrow internals and some assumptions made by datasets. The bug appeared when binarizing some labels and then adding a dataset which had all these labels absent (to force the model not to label such empty examples with anything). Particularly strange is the fact that this only happens in batched mode. ## Steps to reproduce the bug ```python import numpy as np from datasets import Dataset ds = Dataset.from_dict( { "text": ["the lazy dog jumps over the quick fox", "another sentence"], "label": [[], []], } ) def mapper(features): features['label'] = [ [0,0,0] for l in features['label'] ] return features ds_mapped = ds.map(mapper, batched=True) ``` ## Expected results Not crashing ## Actual results ``` ../.venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:2346: in map return self._map_single( ../.venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:532: in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) ../.venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:499: in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) ../.venv/lib/python3.8/site-packages/datasets/fingerprint.py:458: in wrapper out = func(self, *args, **kwargs) ../.venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:2751: in _map_single writer.write_batch(batch) ../.venv/lib/python3.8/site-packages/datasets/arrow_writer.py:503: in write_batch arrays.append(pa.array(typed_sequence)) pyarrow/array.pxi:230: in pyarrow.lib.array ??? pyarrow/array.pxi:110: in pyarrow.lib._handle_arrow_array_protocol ??? ../.venv/lib/python3.8/site-packages/datasets/arrow_writer.py:198: in __arrow_array__ out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type) ../.venv/lib/python3.8/site-packages/datasets/table.py:1675: in wrapper return func(array, *args, **kwargs) ../.venv/lib/python3.8/site-packages/datasets/table.py:1812: in cast_array_to_feature casted_values = _c(array.values, feature.feature) ../.venv/lib/python3.8/site-packages/datasets/table.py:1675: in wrapper return func(array, *args, **kwargs) ../.venv/lib/python3.8/site-packages/datasets/table.py:1843: in cast_array_to_feature return array_cast(array, feature(), allow_number_to_str=allow_number_to_str) ../.venv/lib/python3.8/site-packages/datasets/table.py:1675: in wrapper return func(array, *args, **kwargs) ../.venv/lib/python3.8/site-packages/datasets/table.py:1752: in array_cast return array.cast(pa_type) pyarrow/array.pxi:915: in pyarrow.lib.Array.cast ??? ../.venv/lib/python3.8/site-packages/pyarrow/compute.py:376: in cast return call_function("cast", [arr], options) pyarrow/_compute.pyx:542: in pyarrow._compute.call_function ??? pyarrow/_compute.pyx:341: in pyarrow._compute.Function.call ??? pyarrow/error.pxi:144: in pyarrow.lib.pyarrow_internal_check_status ??? _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > ??? 
E pyarrow.lib.ArrowNotImplementedError: Unsupported cast from int64 to null using function cast_null pyarrow/error.pxi:121: ArrowNotImplementedError ``` ## Workarounds * Not using batched=True * Using an np.array([],dtype=float) or similar instead of [] in the input * Naming the output column differently from the input column ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.2.2 - Platform: Ubuntu - Python version: 3.8 - PyArrow version: 8.0.0
closed
https://github.com/huggingface/datasets/issues/4483
2022-06-13T10:47:52
2022-06-14T13:34:14
2022-06-14T13:34:14
{ "login": "sanderland", "id": 48946947, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,269,237,447
4,482
Test that TensorFlow is not imported on startup
TF takes some time to be imported, and it also uses some GPU memory. I just added a test to make sure that in the future it's never imported by default when ```python import datasets ``` is called (see the sketch after this record). Right now this fails because `huggingface_hub` does import TensorFlow (though this is fixed now on their `main` branch). I'll mark this PR as ready for review once `huggingface_hub` has a new release.
closed
https://github.com/huggingface/datasets/pull/4482
2022-06-13T10:33:49
2023-10-12T06:31:39
2023-10-11T09:11:56
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
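A minimal sketch of how such a test can be written, using a fresh interpreter per check; this is an illustration, not necessarily the test added in PR 4482:

```python
import subprocess
import sys

def test_tensorflow_is_not_imported_on_startup():
    # Run in a fresh interpreter so modules imported by the test runner don't leak in
    code = "import datasets, sys; print('tensorflow' in sys.modules)"
    result = subprocess.run(
        [sys.executable, "-c", code], capture_output=True, text=True, check=True
    )
    assert result.stdout.strip() == "False"
```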
1,269,187,792
4,481
Fix iwslt2017
The files were moved to Google Drive, so I hosted them on the Hub instead (OK according to the license). I also updated the `dataset_infos.json`.
closed
https://github.com/huggingface/datasets/pull/4481
2022-06-13T09:51:21
2022-10-26T09:09:31
2022-06-13T10:40:18
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,268,921,567
4,480
Bigbench tensorflow GPU dependency
## Describe the bug Loading bigbench ```py from datasets import load_dataset dataset = load_dataset("bigbench","swedish_to_german_proverbs") ``` tries to use the GPU and fails with OOM, with the following error ``` Downloading and preparing dataset bigbench/swedish_to_german_proverbs (download: Unknown size, generated: 68.92 KiB, post-processed: Unknown size, total: 68.92 KiB) to /home/ceyda/.cache/huggingface/datasets/bigbench/swedish_to_german_proverbs/1.0.0/7d2f6e537fa937dfaac8b1c1df782f2055071d3fd8e4f4ae93d28012a354ced0... Generating default split: 0%| | 0/72 [00:00<?, ? examples/s]2022-06-13 14:11:04.154469: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2022-06-13 14:11:05.133600: F tensorflow/core/platform/statusor.cc:33] Attempting to fetch value instead of handling error INTERNAL: failed initializing StreamExecutor for CUDA device ordinal 3: INTERNAL: failed call to cuDevicePrimaryCtxRetain: CUDA_ERROR_OUT_OF_MEMORY: out of memory; total memory reported: 25396838400 Aborted (core dumped) ``` I think this is because the bigbench dependency (below) installs TensorFlow (GPU version) and data loading tries to use the GPU by default. `pip install bigbench@https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz` while just doing 'pip install bigbench' results in the following error ``` File "/home/ceyda/.local/lib/python3.7/site-packages/datasets/load.py", line 109, in import_main_class module = importlib.import_module(module_path) File "/usr/lib/python3.7/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1006, in _gcd_import File "<frozen importlib._bootstrap>", line 983, in _find_and_load File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 677, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 728, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/ceyda/.cache/huggingface/modules/datasets_modules/datasets/bigbench/7d2f6e537fa937dfaac8b1c1df782f2055071d3fd8e4f4ae93d28012a354ced0/bigbench.py", line 118, in <module> class Bigbench(datasets.GeneratorBasedBuilder): File "/home/ceyda/.cache/huggingface/modules/datasets_modules/datasets/bigbench/7d2f6e537fa937dfaac8b1c1df782f2055071d3fd8e4f4ae93d28012a354ced0/bigbench.py", line 127, in Bigbench BigBenchConfig(name=name, version=datasets.Version("1.0.0")) for name in bb_utils.get_all_json_task_names() AttributeError: module 'bigbench.api.util' has no attribute 'get_all_json_task_names' ``` ## Steps to avoid the bug Not ideal, but it can be solved with the following (since I don't really use TensorFlow elsewhere): `pip uninstall tensorflow` `pip install tensorflow-cpu` ## Environment info - datasets @ master - Python version: 3.7
closed
https://github.com/huggingface/datasets/issues/4480
2022-06-13T05:24:06
2022-06-14T19:45:24
2022-06-14T19:45:23
{ "login": "cceyda", "id": 15624271, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,268,558,237
4,479
Include entity positions as feature in ReCoRD
https://huggingface.co/datasets/super_glue/viewer/record/validation TLDR: We need to record entity positions, which are included in the source data but excluded by the loading script, to enable efficient and effective training for ReCoRD. Currently, the loading script ignores the entity positions ("entity_start", "entity_end") and only records entity text. This might be because the training method of the official baseline is to make n training instances from a data point by replacing "@placeholder" in the query with each entity individually. But that increases the already heavy computation severalfold. So DeBERTa uses a method that takes entity embeddings by their positions in the passage, and thus makes one training instance per data point. It is far more efficient and proved effective for the ReCoRD task. Can anybody help me with the dataset card rendering error? Maybe @lhoestq?
closed
https://github.com/huggingface/datasets/pull/4479
2022-06-12T11:56:28
2022-08-19T23:23:02
2022-08-19T13:23:48
{ "login": "richarddwang", "id": 17963619, "type": "User" }
[]
true
[]
1,268,358,213
4,478
Dataset slow during model training
## Describe the bug While migrating towards 🤗 Datasets, I encountered an odd performance degradation: training suddenly slows down dramatically. I train with an image dataset using Keras and execute a `to_tf_dataset` just before training. First, I have optimized my dataset following https://discuss.huggingface.co/t/solved-image-dataset-seems-slow-for-larger-image-size/10960/6, which actually improved the situation from what I had before but did not completely solve it. Second, I saved and loaded my dataset using `tf.data.experimental.save` and `tf.data.experimental.load` before training (for which I would have expected no performance change). However, I ended up with the performance I had before tinkering with 🤗 Datasets. Any idea what the reason for this is and how to speed up training with 🤗 Datasets? ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets import load_dataset import os dataset_dir = "./dataset" prep_dataset_dir = "./prepdataset" model_dir = "./model" # Load Data dataset = load_dataset("Lehrig/Monkey-Species-Collection", "downsized") def read_image_file(example): with open(example["image"].filename, "rb") as f: example["image"] = {"bytes": f.read()} return example dataset = dataset.map(read_image_file) dataset.save_to_disk(dataset_dir) # Preprocess from datasets import ( Array3D, DatasetDict, Features, load_from_disk, Sequence, Value ) import numpy as np from transformers import ImageFeatureExtractionMixin dataset = load_from_disk(dataset_dir) num_classes = dataset["train"].features["label"].num_classes one_hot_matrix = np.eye(num_classes) feature_extractor = ImageFeatureExtractionMixin() def to_pixels(image): image = feature_extractor.resize(image, size=size) image = feature_extractor.to_numpy_array(image, channel_first=False) image = image / 255.0 return image def process(examples): examples["pixel_values"] = [ to_pixels(image) for image in examples["image"] ] examples["label"] = [ one_hot_matrix[label] for label in examples["label"] ] return examples features = Features({ "pixel_values": Array3D(dtype="float32", shape=(size, size, 3)), "label": Sequence(feature=Value(dtype="int32"), length=num_classes) }) prep_dataset = dataset.map( process, remove_columns=["image"], batched=True, batch_size=batch_size, num_proc=2, features=features, ) prep_dataset = prep_dataset.with_format("numpy") # Split train_dev_dataset = prep_dataset['test'].train_test_split( test_size=test_size, shuffle=True, seed=seed ) train_dev_test_dataset = DatasetDict({ 'train': train_dev_dataset['train'], 'dev': train_dev_dataset['test'], 'test': prep_dataset['test'], }) train_dev_test_dataset.save_to_disk(prep_dataset_dir) # Train Model import datetime import tensorflow as tf from tensorflow.keras import Sequential from tensorflow.keras.applications import InceptionV3 from tensorflow.keras.layers import Dense, Dropout, GlobalAveragePooling2D, BatchNormalization from tensorflow.keras.callbacks import ReduceLROnPlateau, ModelCheckpoint, EarlyStopping from transformers import DefaultDataCollator dataset = load_from_disk(prep_dataset_dir) data_collator = DefaultDataCollator(return_tensors="tf") train_dataset = dataset["train"].to_tf_dataset( columns=['pixel_values'], label_cols=['label'], shuffle=True, batch_size=batch_size, collate_fn=data_collator ) validation_dataset = dataset["dev"].to_tf_dataset( columns=['pixel_values'], label_cols=['label'], shuffle=False, batch_size=batch_size, collate_fn=data_collator ) print(f'{datetime.datetime.now()} - Saving Data') 
tf.data.experimental.save(train_dataset, model_dir+"/train") tf.data.experimental.save(validation_dataset, model_dir+"/val") print(f'{datetime.datetime.now()} - Loading Data') train_dataset = tf.data.experimental.load(model_dir+"/train") validation_dataset = tf.data.experimental.load(model_dir+"/val") shape = np.shape(dataset["train"][0]["pixel_values"]) backbone = InceptionV3( include_top=False, weights='imagenet', input_shape=shape ) for layer in backbone.layers: layer.trainable = False model = Sequential() model.add(backbone) model.add(GlobalAveragePooling2D()) model.add(Dense(128, activation='relu')) model.add(BatchNormalization()) model.add(Dropout(0.3)) model.add(Dense(64, activation='relu')) model.add(BatchNormalization()) model.add(Dropout(0.3)) model.add(Dense(10, activation='softmax')) model.compile( optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'] ) print(model.summary()) earlyStopping = EarlyStopping( monitor='val_loss', patience=10, verbose=0, mode='min' ) mcp_save = ModelCheckpoint( f'{model_dir}/best_model.hdf5', save_best_only=True, monitor='val_loss', mode='min' ) reduce_lr_loss = ReduceLROnPlateau( monitor='val_loss', factor=0.1, patience=7, verbose=1, min_delta=0.0001, mode='min' ) hist = model.fit( train_dataset, epochs=epochs, validation_data=validation_dataset, callbacks=[earlyStopping, mcp_save, reduce_lr_loss] ) ``` ## Expected results Same performance when training without my "save/load hack" or a good explanation/recommendation about the issue. ## Actual results Performance slower without my "save/load hack". **Epoch Breakdown (without my "save/load hack"):** - Epoch 1/10 41s 2s/step - loss: 1.6302 - accuracy: 0.5048 - val_loss: 1.4713 - val_accuracy: 0.3273 - lr: 0.0010 - Epoch 2/10 32s 2s/step - loss: 0.5357 - accuracy: 0.8510 - val_loss: 1.0447 - val_accuracy: 0.5818 - lr: 0.0010 - Epoch 3/10 36s 3s/step - loss: 0.3547 - accuracy: 0.9231 - val_loss: 0.6245 - val_accuracy: 0.7091 - lr: 0.0010 - Epoch 4/10 36s 3s/step - loss: 0.2721 - accuracy: 0.9231 - val_loss: 0.3395 - val_accuracy: 0.9091 - lr: 0.0010 - Epoch 5/10 32s 2s/step - loss: 0.1676 - accuracy: 0.9856 - val_loss: 0.2187 - val_accuracy: 0.9636 - lr: 0.0010 - Epoch 6/10 42s 3s/step - loss: 0.2066 - accuracy: 0.9615 - val_loss: 0.1635 - val_accuracy: 0.9636 - lr: 0.0010 - Epoch 7/10 32s 2s/step - loss: 0.1814 - accuracy: 0.9423 - val_loss: 0.1418 - val_accuracy: 0.9636 - lr: 0.0010 - Epoch 8/10 32s 2s/step - loss: 0.1301 - accuracy: 0.9856 - val_loss: 0.1388 - val_accuracy: 0.9818 - lr: 0.0010 - Epoch 9/10 loss: 0.1102 - accuracy: 0.9856 - val_loss: 0.1185 - val_accuracy: 0.9818 - lr: 0.0010 - Epoch 10/10 32s 2s/step - loss: 0.1013 - accuracy: 0.9808 - val_loss: 0.0978 - val_accuracy: 0.9818 - lr: 0.0010 **Epoch Breakdown (with my "save/load hack"):** - Epoch 1/10 13s 625ms/step - loss: 3.0478 - accuracy: 0.1146 - val_loss: 2.3061 - val_accuracy: 0.0727 - lr: 0.0010 - Epoch 2/10 0s 80ms/step - loss: 2.3105 - accuracy: 0.2656 - val_loss: 2.3085 - val_accuracy: 0.0909 - lr: 0.0010 - Epoch 3/10 0s 77ms/step - loss: 1.8608 - accuracy: 0.3542 - val_loss: 2.3130 - val_accuracy: 0.0909 - lr: 0.0010 - Epoch 4/10 1s 98ms/step - loss: 1.8677 - accuracy: 0.3750 - val_loss: 2.3157 - val_accuracy: 0.0909 - lr: 0.0010 - Epoch 5/10 1s 204ms/step - loss: 1.5561 - accuracy: 0.4583 - val_loss: 2.3049 - val_accuracy: 0.0909 - lr: 0.0010 - Epoch 6/10 1s 210ms/step - loss: 1.4657 - accuracy: 0.4896 - val_loss: 2.2944 - val_accuracy: 0.0909 - lr: 0.0010 - Epoch 7/10 1s 205ms/step - loss: 1.4018 - 
accuracy: 0.5312 - val_loss: 2.2917 - val_accuracy: 0.0909 - lr: 0.0010 - Epoch 8/10 1s 207ms/step - loss: 1.2370 - accuracy: 0.5729 - val_loss: 2.2814 - val_accuracy: 0.0909 - lr: 0.0010 - Epoch 9/10 1s 214ms/step - loss: 1.1190 - accuracy: 0.6250 - val_loss: 2.2733 - val_accuracy: 0.0909 - lr: 0.0010 - Epoch 10/10 1s 207ms/step - loss: 1.1484 - accuracy: 0.6302 - val_loss: 2.2624 - val_accuracy: 0.0909 - lr: 0.0010 ## Environment info - `datasets` version: 2.2.2 - Platform: Linux-4.18.0-305.45.1.el8_4.ppc64le-ppc64le-with-glibc2.17 - Python version: 3.8.13 - PyArrow version: 7.0.0 - Pandas version: 1.4.2 - TensorFlow: 2.8.0 - GPU (used during training): Tesla V100-SXM2-32GB
open
https://github.com/huggingface/datasets/issues/4478
2022-06-11T19:40:19
2022-06-14T12:04:31
null
{ "login": "lehrig", "id": 9555494, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,268,308,986
4,477
Dataset Viewer issue for fgrezes/WIESP2022-NER
### Link _No response_ ### Description _No response_ ### Owner _No response_
closed
https://github.com/huggingface/datasets/issues/4477
2022-06-11T15:49:17
2022-07-18T13:07:33
2022-07-18T13:07:33
{ "login": "AshTayade", "id": 42551754, "type": "User" }
[]
false
[]
1,267,987,499
4,476
`to_pandas` doesn't take into account format.
**Is your feature request related to a problem? Please describe.** I have a large dataset, part of which I need to convert to pandas for some further analysis. Calling `to_pandas` directly on it is expensive. So I thought I could simply select the columns that I want and then call `to_pandas`. **Describe the solution you'd like** ```python from datasets import Dataset ds = Dataset.from_dict({'a': [1,2,3], 'b': [5,6,7], 'c': [8,9,10]}) pandas_df = ds.with_format(columns=['a', 'b']).to_pandas() # I would expect `pandas_df` to only include a, b as columns. ``` **Describe alternatives you've considered** I could remove all the columns that I don't want, but I don't know all of them in advance. **Additional context** I can probably make a PR with some pointers.
closed
https://github.com/huggingface/datasets/issues/4476
2022-06-10T20:25:31
2022-06-15T17:41:41
2022-06-15T17:41:41
{ "login": "Dref360", "id": 8976546, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
1,267,798,451
4,475
Improve error message for missing packages from inside dataset script
Improve the error message for missing packages from inside a dataset script: With this change, the error message for missing packages for `bigbench` looks as follows: ``` ImportError: To be able to use bigbench, you need to install the following dependencies: - 'bigbench' using 'pip install "bigbench @ https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz"' ``` And this is how it looked before: ``` ImportError: To be able to use bigbench, you need to install the following dependencies['bigbench', 'bigbench', 'bigbench', 'bigbench'] using 'pip install "bigbench @ https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz" bigbench bigbench bigbench' for instance' ```
closed
https://github.com/huggingface/datasets/pull/4475
2022-06-10T16:59:36
2022-10-06T13:46:26
2022-06-13T13:16:43
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,267,767,541
4,474
[Docs] How to use with PyTorch page
Currently the docs about PyTorch are scattered across different pages, and we were missing a place to explain in more depth how to use and optimize a dataset for PyTorch. This PR is related to #4457, which is the TF counterpart :) cc @Rocketknight1 we can try to align the contents of both documentations now, I think. cc @stevhliu let me know what you think!
closed
https://github.com/huggingface/datasets/pull/4474
2022-06-10T16:25:49
2022-06-14T14:40:32
2022-06-14T14:04:33
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[ { "name": "documentation", "color": "0075ca" } ]
true
[]
1,267,555,994
4,473
Add SST-2 dataset
Add the SST-2 dataset. Currently it is part of the GLUE benchmark; this PR adds it as a standalone dataset. CC: @julien-c
closed
https://github.com/huggingface/datasets/pull/4473
2022-06-10T13:37:26
2022-06-13T14:11:34
2022-06-13T14:01:09
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,267,488,523
4,472
Fix 401 error for unauthenticated requests to non-existing repos
The Hub now returns 401 instead of 404 for unauthenticated requests to non-existing repos. This PR adds support for the 401 error and fixes the CI failures on `master` (see the sketch after this record).
closed
https://github.com/huggingface/datasets/pull/4472
2022-06-10T12:38:11
2022-06-10T13:05:11
2022-06-10T12:55:57
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
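A hedged sketch of the error handling described in PR 4472 above; the function is illustrative and not the library's actual code:

```python
from typing import Optional

import requests

def repo_exists(api_url: str, token: Optional[str] = None) -> bool:
    # Illustrative only: the Hub returns 401 (not 404) for unauthenticated
    # requests to repos that don't exist or are private, so treat both as missing
    headers = {"authorization": f"Bearer {token}"} if token else {}
    response = requests.get(api_url, headers=headers)
    if response.status_code in (401, 404):
        return False
    response.raise_for_status()
    return True
```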
1,267,475,268
4,471
CI error with repo lhoestq/_dummy
## Describe the bug CI is failing because of repo "lhoestq/_dummy". See: https://app.circleci.com/pipelines/github/huggingface/datasets/12461/workflows/1b040b45-9578-4ab9-8c44-c643c4eb8691/jobs/74269 ``` requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/api/datasets/lhoestq/_dummy?full=true ``` The repo seems to no longer exist: https://huggingface.co/api/datasets/lhoestq/_dummy ``` error: "Repository not found" ``` CC: @lhoestq
closed
https://github.com/huggingface/datasets/issues/4471
2022-06-10T12:26:06
2022-06-10T13:24:53
2022-06-10T13:24:53
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,267,470,051
4,470
Reorder returned validation/test splits in script template
null
closed
https://github.com/huggingface/datasets/pull/4470
2022-06-10T12:21:13
2022-06-10T18:04:10
2022-06-10T17:54:50
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,267,213,849
4,469
Replace data URLs in wider_face dataset once hosted on the Hub
This PR replaces the Google Drive data file URLs with our Hub ones, now that the data owners have approved hosting their data on the Hub. They also informed us that their dataset is licensed under CC BY-NC-ND.
closed
https://github.com/huggingface/datasets/pull/4469
2022-06-10T08:13:25
2022-06-10T16:42:08
2022-06-10T16:32:46
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,266,715,742
4,468
Generalize tutorials for audio and vision
This PR updates the tutorials to be more generalizable to all modalities. After reading the tutorials, a user should be able to load any type of dataset, know how to index into and slice a dataset, and do the most basic/common type of preprocessing (tokenization, resampling, applying transforms) depending on their dataset. Other changes include: - Removed the sections about a dataset's metadata, features, and columns because we cover this in an earlier tutorial about inspecting the `DatasetInfo` through the dataset builder. - Separate the sharing dataset tutorial into two sections: (1) uploading via the web interface and (2) using the `huggingface_hub` library. - Renamed some tutorials in the TOC to be more clear and specific. - Added more text to nudge users towards joining the community and asking questions on the forums. - If it's okay with everyone, I'd also like to remove the section about loading and using metrics since we have the `evaluate` docs now.
closed
https://github.com/huggingface/datasets/pull/4468
2022-06-09T22:00:44
2022-06-14T16:22:02
2022-06-14T16:12:00
{ "login": "stevhliu", "id": 59462357, "type": "User" }
[ { "name": "documentation", "color": "0075ca" } ]
true
[]
1,266,218,358
4,467
Transcript string 'null' converted to [None] by load_dataset()
## Issue I am training a Luxembourgish speech-recognition model in Colab with a custom dataset, including a dictionary of Luxembourgish words, for example the spoken numbers 0 to 9. When preparing the dataset with the script `ds_train1 = mydataset.map(prepare_dataset)` the following error was issued: ``` ValueError Traceback (most recent call last) <ipython-input-69-1e8f2b37f5bc> in <module>() ----> 1 ds_train = mydataset_train.map(prepare_dataset) 11 frames /usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py in __call__(self, text, text_pair, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs) 2450 if not _is_valid_text_input(text): 2451 raise ValueError( -> 2452 "text input must of type str (single example), List[str] (batch or single pretokenized example) " 2453 "or List[List[str]] (batch of pretokenized examples)." 2454 ) ValueError: text input must of type str (single example), List[str] (batch or single pretokenized example) or List[List[str]] (batch of pretokenized examples). ``` Debugging this problem was not easy, as all transcriptions in the dataset are correct strings. Finally, I discovered that the transcription string 'null' is interpreted as [None] by the `load_dataset()` script. After deleting this row from the dataset, the training worked fine (see the workaround sketch after this record). ## Expected result: transcription 'null' interpreted as 'str' instead of 'None'. ## Reproduction Here is the code to reproduce the error with a one-row-dataset. ``` with open("null-test.csv") as f: reader = csv.reader(f) for row in reader: print(row) ``` ['wav_filename', 'wav_filesize', 'transcript'] ['wavs/female/NULL1.wav', '17530', 'null'] ``` dataset = load_dataset('csv', data_files={'train': 'null-test.csv'}) ``` Using custom data configuration default-81ac0c0e27af3514 Downloading and preparing dataset csv/default to /root/.cache/huggingface/datasets/csv/default-81ac0c0e27af3514/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519... Downloading data files: 100% 1/1 [00:00<00:00, 29.55it/s] Extracting data files: 100% 1/1 [00:00<00:00, 23.66it/s] Dataset csv downloaded and prepared to /root/.cache/huggingface/datasets/csv/default-81ac0c0e27af3514/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519. Subsequent calls will reuse this data. 100% 1/1 [00:00<00:00, 25.84it/s] ``` print(dataset['train']['transcript']) ``` [None] ## Environment info ``` !pip install datasets==2.2.2 !pip install transformers==4.19.2 ```
closed
https://github.com/huggingface/datasets/issues/4467
2022-06-09T14:26:00
2023-07-04T02:18:39
2022-06-09T16:29:02
{ "login": "mbarnig", "id": 1360633, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
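A possible workaround for issue 4467 above, assuming the `csv` builder forwards its keyword arguments to `pandas.read_csv` (which is how the loader works in recent `datasets` versions):

```python
from datasets import load_dataset

# keep_default_na=False is forwarded to pandas.read_csv, so literal strings
# like "null", "NA" or "NaN" are kept as-is instead of being parsed to None
dataset = load_dataset(
    "csv",
    data_files={"train": "null-test.csv"},
    keep_default_na=False,
)
print(dataset["train"]["transcript"])  # ['null']
```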
1,266,159,920
4,466
Optimize contiguous shard and select
Currently `.shard()` and `.select()` always create an indices mapping. However if the requested data are contiguous, it's much more optimized to simply slice the Arrow table instead of building an indices mapping. In particular: - the shard/select operation will be much faster - reading speed will be much faster in the resulting dataset, since it won't have to do a lookup step in the indices mapping Since `.shard()` is also used for `.map()` with `num_proc>1`, it will also significantly improve the reading speed of multiprocessed `.map()` operations Here is an example of speed-up: ```python >>> import io >>> import numpy as np >>> from datasets import Dataset >>> ds = Dataset.from_dict({"a": np.random.rand(10_000_000)}) >>> shard = ds.shard(num_shards=4, index=0, contiguous=True) # this calls `.select(range(2_500_000))` >>> buf = io.BytesIO() >>> %time shard.to_json(buf) Creating json from Arrow format: 100%|██████████████████| 100/100 [00:00<00:00, 376.17ba/s] CPU times: user 258 ms, sys: 9.06 ms, total: 267 ms Wall time: 266 ms ``` while previously it was ```python Creating json from Arrow format: 100%|███████████████████| 100/100 [00:03<00:00, 29.41ba/s] CPU times: user 3.33 s, sys: 69.1 ms, total: 3.39 s Wall time: 3.4 s ``` In this simple case the speed-up is x10, but @sayakpaul experienced a x100 speed-up on their data when exporting to JSON. ## Implementation details I mostly improved `.select()`: it now checks if the input corresponds to a contiguous chunk of data and then it slices the main Arrow table (or the indices mapping table if it exists). To check if the input indices are contiguous it checks two possibilities (a sketch of this check follows this record): - if the indices is of type `range`, it checks that start >= 0 and step = 1 - otherwise in the general case, it iterates over the indices. If all the indices are contiguous then we're good, otherwise we have to build an indices mapping. Having to iterate over the indices doesn't cause performance issues IMO because: - either they are contiguous and in this case the cost of iterating over the indices is much less than the cost of creating an indices mapping - or they are not contiguous, and then iterating generally stops quickly when it encounters the first index that is not contiguous.
closed
https://github.com/huggingface/datasets/pull/4466
2022-06-09T13:45:39
2022-06-14T16:04:30
2022-06-14T15:54:45
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
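A minimal sketch of the contiguity check described in PR 4466 above (not the PR's exact code):

```python
from typing import Iterable

def is_contiguous(indices: Iterable[int]) -> bool:
    # Fast path: a range with step 1 and a non-negative start is contiguous
    if isinstance(indices, range):
        return indices.start >= 0 and indices.step == 1
    # General case: non-contiguous inputs usually fail on an early index, and
    # for contiguous inputs a full pass is still cheaper than building an
    # indices mapping
    previous = None
    for index in indices:
        if previous is not None and index != previous + 1:
            return False
        previous = index
    return True
```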
1,265,754,479
4,465
Fix bigbench config names
Fix https://github.com/huggingface/datasets/issues/4462 in the case of bigbench
closed
https://github.com/huggingface/datasets/pull/4465
2022-06-09T08:06:19
2022-06-09T14:38:36
2022-06-09T14:29:19
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,265,682,931
4,464
Extend support for streaming datasets that use xml.dom.minidom.parse
This PR extends streaming-mode support to datasets that use `xml.dom.minidom.parse`, by patching that function (see the sketch after this record). It adds support for streaming datasets like "Yaxin/SemEval2015". Fix #4453.
closed
https://github.com/huggingface/datasets/pull/4464
2022-06-09T06:58:25
2022-06-09T08:43:24
2022-06-09T08:34:16
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
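A rough sketch of the patching idea behind PR 4464 above; the real implementation lives in the library's streaming logic, and `fsspec` is used here as a stand-in for whatever opener the library actually employs:

```python
import xml.dom.minidom

import fsspec

_original_parse = xml.dom.minidom.parse

def _patched_parse(file_or_path, *args, **kwargs):
    if isinstance(file_or_path, str) and "://" in file_or_path:
        # Remote path: open through a streaming-aware file system, then delegate
        with fsspec.open(file_or_path, "rb") as f:
            return _original_parse(f, *args, **kwargs)
    return _original_parse(file_or_path, *args, **kwargs)

# In streaming mode the library would swap in the patched function
xml.dom.minidom.parse = _patched_parse
```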
1,265,093,211
4,463
Use config_id to check split sizes instead of config name
Fix https://github.com/huggingface/datasets/issues/4462
closed
https://github.com/huggingface/datasets/pull/4463
2022-06-08T17:45:24
2023-09-24T10:03:00
2022-06-09T08:06:37
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,265,079,347
4,462
BigBench: NonMatchingSplitsSizesError when passing a dataset configuration parameter
As noticed in https://github.com/huggingface/datasets/pull/4125, when a dataset config class has a parameter that reduces the number of examples (e.g. named `max_examples`), loading the dataset and passing `max_examples` raises `NonMatchingSplitsSizesError`. This is because it checks the expected number of examples of the config with the same name, without taking the `max_examples` parameter into account. This can be fixed by checking the expected number of examples using the **config id** instead of the name. Indeed, the config id corresponds to the config name plus an optional suffix that depends on the config parameters (see the illustration after this record).
open
https://github.com/huggingface/datasets/issues/4462
2022-06-08T17:31:24
2022-07-05T07:39:55
null
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
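An illustration of the mismatch described in issue 4462 above, with hypothetical task and suffix names (the exact config-id format is an assumption):

```python
from datasets import load_dataset

# Hypothetical call: `max_examples` shrinks the generated splits
ds = load_dataset("bigbench", "some_task", max_examples=100)

# Expected split sizes are looked up by config *name*:  "some_task"
# but the builder actually ran with a config *id* like: "some_task-max_examples=100"
# so the recorded full-size split counts no longer match the generated ones,
# and NonMatchingSplitsSizesError is raised.
```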
1,264,800,451
4,461
AttributeError: module 'datasets' has no attribute 'load_dataset'
## Describe the bug I have pip-installed datasets, but the package doesn't have these attributes: load_dataset, load_metric (see the diagnostic sketch after this record). ## Environment info - `datasets` version: 1.9.0 - Platform: Linux-5.13.0-44-generic-x86_64-with-debian-bullseye-sid - Python version: 3.6.13 - PyArrow version: 6.0.1
closed
https://github.com/huggingface/datasets/issues/4461
2022-06-08T13:59:20
2024-03-25T12:58:29
2022-06-08T14:41:00
{ "login": "AlexNLP", "id": 59248970, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
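A common cause of this error, offered here as an assumption since the issue body doesn't say, is a local file or folder named `datasets` shadowing the installed package; a quick diagnostic:

```python
import datasets

# If this prints a path inside your project rather than site-packages,
# a local `datasets.py` or `datasets/` directory is shadowing the library
print(datasets.__file__)
print(getattr(datasets, "__version__", "no __version__ attribute"))
```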
1,264,644,205
4,460
Drop Python 3.6 support
Remove the fallback imports/checks in the code needed for Python 3.6 and update the requirements/CI files. Also, use Python types for the NumPy dtype wherever possible to avoid deprecation warnings in newer NumPy versions.
closed
https://github.com/huggingface/datasets/pull/4460
2022-06-08T12:10:18
2022-07-26T19:16:39
2022-07-26T19:04:21
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,264,636,481
4,459
Add and fix language tags for udhr dataset
Related to #4362.
closed
https://github.com/huggingface/datasets/pull/4459
2022-06-08T12:03:42
2022-06-08T12:36:24
2022-06-08T12:27:13
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,263,531,911
4,457
First draft of the docs for TF + Datasets
I might cc a few of the other TF people to take a look when this is closer to being finished, but it's still a draft for now.
closed
https://github.com/huggingface/datasets/pull/4457
2022-06-07T16:06:48
2022-06-14T16:08:41
2022-06-14T15:59:08
{ "login": "Rocketknight1", "id": 12866554, "type": "User" }
[ { "name": "documentation", "color": "0075ca" } ]
true
[]
1,263,241,449
4,456
Workflow for Tabular data
Tabular data are treated very differently from data for NLP, audio, vision, etc., and therefore the workflow for tabular data in `datasets` is not ideal. For example for tabular data, it is common to use pandas/spark/dask to process the data, and then load the data into X and y (X is an array of features and y an array of labels), then train_test_split and finally feed the data to a machine learning model. In `datasets` the workflow is different: we use load_dataset, then map, then train_test_split (if we only have a train split) and we end up with columnar dataset splits, not formatted as X and y (see the sketch after this record). Right now, it is already possible to convert a dataset from and to pandas, but there are still many things that could improve the workflow for tabular data: - be able to load the data into X and y - be able to load a dataset from the output of spark or dask (as far as I know it's usually csv or parquet files on S3/GCS/HDFS etc.) - support "unsplit" datasets explicitly, instead of putting everything in "train" by default cc @adrinjalali @merveenoyan feel free to complete/correct this :) Feel free to also share ideas of APIs that would be super intuitive in your opinion!
open
https://github.com/huggingface/datasets/issues/4456
2022-06-07T12:48:22
2023-03-06T08:53:55
null
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "generic discussion", "color": "c5def5" } ]
false
[]
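A minimal sketch of the X/y workflow issue 4456 above wants to streamline, done today by going through pandas; the column names are illustrative:

```python
from datasets import Dataset

ds = Dataset.from_dict(
    {"f1": [0.1, 0.2, 0.3], "f2": [1.0, 2.0, 3.0], "label": [0, 1, 0]}
)

# Today: round-trip through pandas to get the X/y arrays tabular tools expect
df = ds.to_pandas()
X = df.drop(columns=["label"]).to_numpy()
y = df["label"].to_numpy()
print(X.shape, y.shape)  # (3, 2) (3,)
```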
1,263,089,067
4,455
Update data URLs in fever dataset
As stated on their website, the data owners updated their URLs on 28/04/2022. This PR updates the data URLs. Fix #4452.
closed
https://github.com/huggingface/datasets/pull/4455
2022-06-07T10:40:54
2022-06-08T07:24:54
2022-06-08T07:16:17
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,262,674,973
4,454
Dataset Viewer issue for Yaxin/SemEval2015
### Link _No response_ ### Description The link cannot be visited. ### Owner _No response_
closed
https://github.com/huggingface/datasets/issues/4454
2022-06-07T03:31:46
2022-06-07T11:53:11
2022-06-07T11:53:11
{ "login": "WithYouTo", "id": 18160852, "type": "User" }
[ { "name": "duplicate", "color": "cfd3d7" }, { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,262,674,105
4,453
Dataset Viewer issue for Yaxin/SemEval2015
### Link _No response_ ### Description _No response_ ### Owner _No response_
closed
https://github.com/huggingface/datasets/issues/4453
2022-06-07T03:30:08
2022-06-09T08:34:16
2022-06-09T08:34:16
{ "login": "WithYouTo", "id": 18160852, "type": "User" }
[]
false
[]
1,262,529,654
4,452
Trying to load FEVER dataset results in NonMatchingChecksumError
## Describe the bug Trying to load the `fever` dataset fails with `datasets.utils.info_utils.NonMatchingChecksumError`. I tried with `download_mode="force_redownload"` but that did not fix the error. I also tried with `ignore_verification=True` but then that raised a `json.decoder.JSONDecodeError`. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('fever', 'v1.0') # Fails with NonMatchingChecksumError dataset = load_dataset('fever', 'v1.0', download_mode="force_redownload") # Fails with NonMatchingChecksumError dataset = load_dataset('fever', 'v1.0', ignore_verification=True) # Fails with JSONDecodeError ``` ## Expected results I expect this call to return with no error raised. ## Actual results With `ignore_verification=False`: ``` *** datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://s3-eu-west-1.amazonaws.com/fever.public/train.jsonl', 'https://s3-eu-west-1.amazonaws.com/fever.public/shared_task_dev.jsonl', 'https://s3-eu-west-1.amazonaws.com/fever.public/shared_task_dev_public.jsonl', 'https://s3-eu-west-1.amazonaws.com/fever.public/shared_task_test.jsonl', 'https://s3-eu-west-1.amazonaws.com/fever.public/paper_dev.jsonl', 'https://s3-eu-west-1.amazonaws.com/fever.public/paper_test.jsonl'] ``` With `ignore_verification=True`: ``` *** json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0) ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.2.3.dev0 - Platform: Linux-4.15.0-50-generic-x86_64-with-glibc2.10 - Python version: 3.8.13 - PyArrow version: 8.0.0 - Pandas version: 1.4.2
closed
https://github.com/huggingface/datasets/issues/4452
2022-06-06T23:13:15
2022-12-15T13:36:40
2022-06-08T07:16:16
{ "login": "santhnm2", "id": 5347982, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,262,103,323
4,451
Use newer version of multi-news with fixes
Closes #4430.
closed
https://github.com/huggingface/datasets/pull/4451
2022-06-06T16:57:08
2022-06-07T17:40:01
2022-06-07T17:14:44
{ "login": "JohnGiorgi", "id": 8917831, "type": "User" }
[]
true
[]
1,261,878,324
4,450
Update README.md of fquad
null
closed
https://github.com/huggingface/datasets/pull/4450
2022-06-06T13:52:41
2022-06-06T14:51:49
2022-06-06T14:43:03
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,261,262,326
4,449
Rj
import android.content.DialogInterface; import android.database.Cursor; import android.os.Bundle; import android.view.View; import android.widget.ArrayAdapter; import android.widget.Button; import android.widget.EditText; import android.widget.Toast; import androidx.appcompat.app.AlertDialog; import androidx.appcompat.app.AppCompatActivity; public class MainActivity extends AppCompatActivity { private EditText editTextID; private EditText editTextName; private EditText editTextNum; private String name; private int number; private String ID; private dbHelper db; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); db = new dbHelper(this); editTextID = findViewById(R.id.editText1); editTextName = findViewById(R.id.editText2); editTextNum = findViewById(R.id.editText3); Button buttonSave = findViewById(R.id.button); Button buttonRead = findViewById(R.id.button2); Button buttonUpdate = findViewById(R.id.button3); Button buttonDelete = findViewById(R.id.button4); Button buttonSearch = findViewById(R.id.button5); Button buttonDeleteAll = findViewById(R.id.button6); buttonSave.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { name = editTextName.getText().toString(); String num = editTextNum.getText().toString(); if (name.isEmpty() || num.isEmpty()) { Toast.makeText(MainActivity.this, "Cannot Submit Empty Fields", Toast.LENGTH_SHORT).show(); } else { number = Integer.parseInt(num); try { // Insert Data db.insertData(name, number); // Clear the fields editTextID.getText().clear(); editTextName.getText().clear(); editTextNum.getText().clear(); } catch (Exception e) { e.printStackTrace(); } } } }); buttonRead.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { final ArrayAdapter<String> adapter = new ArrayAdapter<>(MainActivity.this, android.R.layout.simple_list_item_1); String name; String num; String id; try { Cursor cursor = db.readData(); if (cursor != null && cursor.getCount() > 0) { while (cursor.moveToNext()) { id = cursor.getString(0); // get data in column index 0 name = cursor.getString(1); // get data in column index 1 num = cursor.getString(2); // get data in column index 2 // Add SQLite data to listView adapter.add("ID :- " + id + "\n" + "Name :- " + name + "\n" + "Number :- " + num + "\n\n"); } } else { adapter.add("No Data"); } cursor.close(); } catch (Exception e) { e.printStackTrace(); } // show the saved data in alertDialog AlertDialog.Builder builder = new AlertDialog.Builder(MainActivity.this); builder.setTitle("SQLite saved data"); builder.setIcon(R.mipmap.app_icon_foreground); builder.setAdapter(adapter, new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int which) { } }); builder.setPositiveButton("OK", new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int which) { dialog.cancel(); } }); AlertDialog dialog = builder.create(); dialog.show(); } }); buttonUpdate.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { name = editTextName.getText().toString(); String num = editTextNum.getText().toString(); ID = editTextID.getText().toString(); if (name.isEmpty() || num.isEmpty() || ID.isEmpty()) { Toast.makeText(MainActivity.this, "Cannot Submit Empty Fields", Toast.LENGTH_SHORT).show(); } else { number = Integer.parseInt(num); try { // Update Data db.updateData(ID, name, number); // Clear the fields 
editTextID.getText().clear(); editTextName.getText().clear(); editTextNum.getText().clear(); } catch (Exception e) { e.printStackTrace(); } } } }); buttonDelete.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { ID = editTextID.getText().toString(); if (ID.isEmpty()) { Toast.makeText(MainActivity.this, "Please enter the ID", Toast.LENGTH_SHORT).show(); } else { try { // Delete Data db.deleteData(ID); // Clear the fields editTextID.getText().clear(); editTextName.getText().clear(); editTextNum.getText().clear(); } catch (Exception e) { e.printStackTrace(); } } } }); buttonDeleteAll.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { // Delete all data // You can simply delete all the data by calling this method --> db.deleteAllData(); // You can try this also AlertDialog.Builder builder = new AlertDialog.Builder(MainActivity.this); builder.setIcon(R.mipmap.app_icon_foreground); builder.setTitle("Delete All Data"); builder.setCancelable(false); builder.setMessage("Do you really need to delete your all data ?"); builder.setPositiveButton("Yes", new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int which) { // User confirmed , now you can delete the data db.deleteAllData(); // Clear the fields editTextID.getText().clear(); editTextName.getText().clear(); editTextNum.getText().clear(); } }); builder.setNegativeButton("No", new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int which) { // user not confirmed dialog.cancel(); } }); AlertDialog dialog = builder.create(); dialog.show(); } }); buttonSearch.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { ID = editTextID.getText().toString(); if (ID.isEmpty()) { Toast.makeText(MainActivity.this, "Please enter the ID", Toast.LENGTH_SHORT).show(); } else { try { // Search data Cursor cursor = db.searchData(ID); if (cursor.moveToFirst()) { editTextName.setText(cursor.getString(1)); editTextNum.setText(cursor.getString(2)); Toast.makeText(MainActivity.this, "Data successfully searched", Toast.LENGTH_SHORT).show(); } else { Toast.makeText(MainActivity.this, "ID not found", Toast.LENGTH_SHORT).show(); editTextNum.setText("ID Not found"); editTextName.setText("ID not found"); } cursor.close(); } catch (Exception e) { e.printStackTrace(); } } } }); } }
closed
https://github.com/huggingface/datasets/issues/4449
2022-06-06T02:24:32
2022-06-06T15:44:50
2022-06-06T15:44:50
{ "login": "Aeckard45", "id": 87345839, "type": "User" }
[]
false
[]
1,260,966,129
4,448
New Preprocessing Feature - Deduplication [Request]
**Is your feature request related to a problem? Please describe.** Many large datasets are full of duplicates, and it has been shown that deduplicating datasets can lead to better performance during training and more truthful evaluation at test time. A feature that allows one to easily deduplicate a dataset would be cool! **Describe the solution you'd like** We can define a key function and keep only the first/last data point for each value yielded by this function (see the sketch after this record). **Describe alternatives you've considered** The clear alternative is to repeat the same boilerplate every time someone wants to deduplicate a dataset.
open
https://github.com/huggingface/datasets/issues/4448
2022-06-05T05:32:56
2023-12-12T07:52:40
null
{ "login": "yuvalkirstain", "id": 57996478, "type": "User" }
[ { "name": "duplicate", "color": "cfd3d7" }, { "name": "enhancement", "color": "a2eeef" } ]
false
[]
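A minimal sketch of the boilerplate issue 4448 above wants to avoid, assuming a user-supplied key function and single-process filtering (a stateful closure like this does not combine with `num_proc > 1`):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "a", "c", "b"]})

seen = set()

def keep_first_occurrence(example):
    key = example["text"]  # the key function; here just the "text" column
    if key in seen:
        return False
    seen.add(key)
    return True

deduped = ds.filter(keep_first_occurrence)
print(deduped["text"])  # ['a', 'b', 'c']
```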
1,260,041,805
4,447
Minor fixes/improvements in `scene_parse_150` card
Add `paperswithcode_id` and fix some links in the `scene_parse_150` card.
closed
https://github.com/huggingface/datasets/pull/4447
2022-06-03T15:22:34
2022-06-06T15:50:25
2022-06-06T15:41:37
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,260,028,995
4,446
Add missing kwargs to docstrings
null
closed
https://github.com/huggingface/datasets/pull/4446
2022-06-03T15:10:27
2022-06-03T16:10:09
2022-06-03T16:01:29
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,259,947,568
4,445
Fix missing args in docstring of load_dataset_builder
Currently, the docstring of `load_dataset_builder` only documents the first parameter, `path` (none of the others): - https://huggingface.co/docs/datasets/v2.2.1/en/package_reference/loading_methods#datasets.load_dataset_builder.path
closed
https://github.com/huggingface/datasets/pull/4445
2022-06-03T13:55:50
2022-06-03T14:35:32
2022-06-03T14:27:09
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,259,738,209
4,444
Fix kwargs in docstrings
To fix the rendering of `**kwargs` in docstrings, parentheses must be added afterwards. See: - huggingface/doc-builder/issues/235
closed
https://github.com/huggingface/datasets/pull/4444
2022-06-03T10:29:02
2022-06-03T11:01:28
2022-06-03T10:52:46
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,259,606,334
4,443
Dataset Viewer issue for openclimatefix/nimrod-uk-1km
### Link _No response_ ### Description _No response_ ### Owner _No response_
open
https://github.com/huggingface/datasets/issues/4443
2022-06-03T08:17:16
2023-09-25T12:15:08
null
{ "login": "ZYMXIXI", "id": 32382826, "type": "User" }
[]
false
[]
1,258,589,276
4,442
Dataset Viewer issue for amazon_polarity
### Link https://huggingface.co/datasets/amazon_polarity/viewer/amazon_polarity/test ### Description For some reason the train split is OK but the test split is not for this dataset: ``` Server error Status code: 400 Exception: FileNotFoundError Message: [Errno 2] No such file or directory: '/cache/modules/datasets_modules/datasets/amazon_polarity/__init__.py' ``` ### Owner No
closed
https://github.com/huggingface/datasets/issues/4442
2022-06-02T19:18:38
2022-06-07T18:50:37
2022-06-07T18:50:37
{ "login": "lewtun", "id": 26859204, "type": "User" }
[ { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,258,568,656
4,441
Dataset Viewer issue for aeslc
### Link https://huggingface.co/datasets/aeslc ### Description The dataset viewer can't find `dataset_infos.json` in its cache: ``` Server error Status code: 400 Exception: FileNotFoundError Message: [Errno 2] No such file or directory: '/cache/modules/datasets_modules/datasets/aeslc/eb8e30234cf984a58ebe9f205674597ac1db2ec91e7321cd7f36864f7e3671b8/dataset_infos.json' ``` ### Owner No
closed
https://github.com/huggingface/datasets/issues/4441
2022-06-02T18:57:12
2022-06-07T18:50:55
2022-06-07T18:50:55
{ "login": "lewtun", "id": 26859204, "type": "User" }
[ { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,258,494,469
4,440
Update docs around audio and vision
As part of the strategy to center the docs around the different modalities, this PR updates the quickstart to include audio and vision examples. This improves the developer experience by making audio and vision content more discoverable, enabling users working in these modalities to also quickly get started without digging too deeply into the docs. Other changes include: - Moved the installation guide to the Get Started section because it should be part of a user's onboarding to the library before exploring tutorials or how-to's. - Updated the native TF code at creating a `tf.data.Dataset` because it was throwing an error. The `to_tensor()` bit was redundant and removing it fixed the error (please double-check me here!). - Added some UI components to the quickstart so it's easier for users to navigate directly to the relevant section with context about what to expect. - Reverted to the code tabs for content that don't have any framework-specific text. I think this saves space compared to the code blocks. We'll still use the code blocks if the `torch` text is different from the `tf` text. Let me know what you think, especially if we should include some code samples for training a model in the audio/vision sections. I left this out since we already showed it in the NLP section. I want to keep the focus on using Datasets to load and process a dataset, and not so much the training part. Maybe we can add links to the Transformers docs instead?
closed
https://github.com/huggingface/datasets/pull/4440
2022-06-02T17:42:03
2022-06-23T16:33:19
2022-06-23T16:23:02
{ "login": "stevhliu", "id": 59462357, "type": "User" }
[ { "name": "documentation", "color": "0075ca" } ]
true
[]
1,258,434,111
4,439
TIMIT won't load after manual download: Errors about files that don't exist
## Describe the bug I get the message from HuggingFace that it must be downloaded manually. From the URL provided in the message, I got to the UPenn page for manual download. (UPenn apparently wants $250 for the dataset??) ...So, ok, I obtained a copy from a friend and also a smaller version from Kaggle. But in both cases the HF dataloader fails; it is looking for files that don't exist anywhere in the dataset: it is looking for files with lower-case letters like "**test*" (all the filenames in both my copies are uppercase) and for certain file extensions that exclude the .DOC files which are provided in TIMIT: ## Steps to reproduce the bug ```python data = load_dataset('timit_asr', 'clean')['train'] ``` ## Expected results The dataset should load with no errors. ## Actual results This error message: ``` File "/home/ubuntu/envs/data2vec/lib/python3.9/site-packages/datasets/data_files.py", line 201, in resolve_patterns_locally_or_by_urls raise FileNotFoundError(error_msg) FileNotFoundError: Unable to resolve any data file that matches '['**test*', '**eval*']' at /home/ubuntu/datasets/timit with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip'] ``` But this is a strange sort of error: why is it looking for lower-case file names when all the TIMIT dataset filenames are uppercase? Why does it exclude .DOC files when the only parts of the TIMIT data set with "TEST" in them have ".DOC" extensions? ...I wonder, how was anyone able to get this to work in the first place? The files in the dataset look like the following: ``` │ PHONCODE.DOC │ PROMPTS.TXT │ SPKRINFO.TXT │ SPKRSENT.TXT │ TESTSET.DOC ``` ...so why are these being excluded by the dataset loader? ## Environment info - `datasets` version: 2.2.2 - Platform: Linux-5.4.0-1060-aws-x86_64-with-glibc2.27 - Python version: 3.9.9 - PyArrow version: 8.0.0 - Pandas version: 1.4.2
closed
https://github.com/huggingface/datasets/issues/4439
2022-06-02T16:35:56
2022-06-03T08:44:17
2022-06-03T08:44:16
{ "login": "drscotthawley", "id": 13925685, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,258,255,394
4,438
Fix docstring of inspect_dataset
As pointed out by @sgugger: - huggingface/doc-builder/issues/235
closed
https://github.com/huggingface/datasets/pull/4438
2022-06-02T14:21:10
2022-06-02T16:40:55
2022-06-02T16:32:27
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,258,249,582
4,437
Add missing columns to `blended_skill_talk`
Adds the missing columns to `blended_skill_talk` to align the loading logic with [ParlAI](https://github.com/facebookresearch/ParlAI/blob/main/parlai/tasks/blended_skill_talk/build.py). Fix #4426
closed
https://github.com/huggingface/datasets/pull/4437
2022-06-02T14:16:26
2022-06-06T15:49:56
2022-06-06T15:41:25
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,257,758,834
4,436
Fix directory names for LDC data in timit_asr dataset
Related to: - #4422
closed
https://github.com/huggingface/datasets/pull/4436
2022-06-02T06:45:04
2022-06-02T09:32:56
2022-06-02T09:24:27
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,257,496,552
4,435
Load a local cached dataset that has been modified
## Describe the bug I have loaded a dataset as follows: ``` d = load_dataset("emotion", split="validation") ``` Afterwards I make some modifications to the dataset via a `map` call: ``` d.map(some_update_func, cache_file_name=modified_dataset) ``` This generates a cached version of the dataset on my local system in the same directory as the original download of the data (/path/to/cache). Running an `ls` returns: ``` modified_dataset dataset_info.json emotion-test.arrow emotion-train.arrow emotion-validation.arrow ``` as expected. However, when I try to load up the modified cached dataset via a call to ``` modified = load_dataset("emotion", split="validation", data_files="/path/to/cache/modified_dataset") ``` it simply redownloads a new version of the dataset and dumps it to a new cache rather than loading up the original modified dataset: ``` Using custom data configuration validation-cdbf51685638421b Downloading and preparing dataset emotion/validation to ... ``` How am I supposed to load the original modified local cache copy of the dataset? ## Environment info - `datasets` version: 2.2.2 - Platform: Linux-5.4.0-113-generic-x86_64-with-glibc2.17 - Python version: 3.8.13 - PyArrow version: 8.0.0 - Pandas version: 1.4.2
closed
https://github.com/huggingface/datasets/issues/4435
2022-06-02T01:51:49
2022-06-02T23:59:26
2022-06-02T23:59:18
{ "login": "mihail911", "id": 2789441, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
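One way to load that cached file directly, side-stepping `load_dataset` (a sketch assuming the file at that path is the single Arrow file written by `map(..., cache_file_name=...)`):

```python
from datasets import Dataset

# Load the Arrow cache file produced by `map` directly,
# instead of re-building the dataset via `load_dataset`.
modified = Dataset.from_file("/path/to/cache/modified_dataset")
print(modified)
```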
1,256,207,321
4,434
Fix dummy dataset generation script for handling nested types of _URLs
It seems that when users specify nested `_URLS` structures in their dataset script, an error is raised when generating the dummy data. I think the types of all elements in `dummy_data_dict.values()` should be checked because they may have different types. Linked to issue #4428 PS: I am not sure whether my code fixes this issue in a proper way.
closed
https://github.com/huggingface/datasets/pull/4434
2022-06-01T14:53:15
2022-06-07T12:08:28
2022-06-07T09:24:09
{ "login": "silverriver", "id": 2529049, "type": "User" }
[]
true
[]
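A sketch of the kind of guard the PR above describes: only build a `set` of the dummy-data values when they are all hashable strings (the helper name and sample values are made up for illustration):

```python
def has_duplicate_values(dummy_data_dict):
    """Safe version of `len(set(values)) < len(values)`: only build the set
    when every value is a hashable string (nested _URLS can yield lists)."""
    values = list(dummy_data_dict.values())
    if not all(isinstance(v, str) for v in values):
        return False  # mixed str/list values: skip the set-based check
    return len(set(values)) < len(values)

# A nested _URLS structure produces mixed-type values like this:
print(has_duplicate_values({"train": "a.gz", "dev": ["b.gz", "c.gz"]}))  # False
print(has_duplicate_values({"train": "a.gz", "dev": "a.gz"}))            # True
```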
1,255,830,758
4,433
Fix script fetching and local path handling in `inspect_dataset` and `inspect_metric`
Fix #4348
closed
https://github.com/huggingface/datasets/pull/4433
2022-06-01T12:09:56
2022-06-09T10:34:54
2022-06-09T10:26:07
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,255,523,720
4,432
Fix builder docstring
Currently, the args of `DatasetBuilder` do not appear in the docs: https://huggingface.co/docs/datasets/v2.1.0/en/package_reference/builder_classes#datasets.DatasetBuilder
closed
https://github.com/huggingface/datasets/pull/4432
2022-06-01T09:45:30
2022-06-02T17:43:47
2022-06-02T17:35:15
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,254,618,948
4,431
Add personaldialog datasets
It seems that all tests pass
closed
https://github.com/huggingface/datasets/pull/4431
2022-06-01T01:20:40
2022-06-11T12:40:23
2022-06-11T12:31:16
{ "login": "silverriver", "id": 2529049, "type": "User" }
[]
true
[]
1,254,412,591
4,430
Add ability to load newer, cleaner version of Multi-News
**Is your feature request related to a problem? Please describe.** The [Multi-News dataloader points to the original version of the Multi-News dataset](https://github.com/huggingface/datasets/blob/12540dd75015678ec6019f258d811ee107439a73/datasets/multi_news/multi_news.py#L47), but this has [known errors in it](https://github.com/Alex-Fabbri/Multi-News/issues/11). There exists a [newer version which fixes some of these issues](https://drive.google.com/open?id=1jwBzXBVv8sfnFrlzPnSUBHEEAbpIUnFq). Unfortunately I don't think you can just replace this old URL with the new one, otherwise this could lead to issues with reproducibility. **Describe the solution you'd like** Add a new version to the Multi-News dataloader that points to the updated dataset which has fixes for some known issues. **Describe alternatives you've considered** Replace the current URL for the original version of the dataset with the URL to the version with fixes. **Additional context** Would be happy to make a PR for this. Could someone maybe point me to another dataloader that has multiple versions so I can see how this is handled in `datasets`?
closed
https://github.com/huggingface/datasets/issues/4430
2022-05-31T21:00:44
2022-06-07T17:14:44
2022-06-07T17:14:44
{ "login": "JohnGiorgi", "id": 8917831, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
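For the contributor's last question above, one common pattern is to expose each data version as its own `BuilderConfig`. A sketch with made-up config names and placeholder URLs, abridged to the config-related parts:

```python
import datasets

class MultiNewsConfig(datasets.BuilderConfig):
    def __init__(self, data_url=None, **kwargs):
        super().__init__(**kwargs)
        self.data_url = data_url

class MultiNews(datasets.GeneratorBasedBuilder):
    # `_info`, `_split_generators` and `_generate_examples` omitted for brevity.
    BUILDER_CONFIGS = [
        MultiNewsConfig(name="original", version=datasets.Version("1.0.0"),
                        data_url="https://example.com/original.zip"),  # placeholder
        MultiNewsConfig(name="cleaned", version=datasets.Version("2.0.0"),
                        data_url="https://example.com/fixed.zip"),     # placeholder
    ]
    DEFAULT_CONFIG_NAME = "original"
```

Users could then opt into the fixed data with something like `load_dataset("multi_news", "cleaned")`, while the default keeps reproducibility with the original release.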
1,254,184,358
4,429
Update builder docstring for deprecated/added arguments
This PR updates the builder docstring with deprecated/added directives for arguments name/config_name. Follow up of: - #4414 - huggingface/doc-builder#233 First merge: - #4432
closed
https://github.com/huggingface/datasets/pull/4429
2022-05-31T17:37:25
2022-06-08T11:40:18
2022-06-08T11:31:45
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,254,092,818
4,428
Errors when building dummy data if you use nested _URLS
## Describe the bug When making dummy data with the `datasets-cli dummy_data` tool, an error will be raised if you use a nested _URLS in your dataset script. Traceback (most recent call last): File "/home/name/LCCC/datasets/src/datasets/commands/datasets_cli.py", line 43, in <module> main() File "/home/name/LCCC/datasets/src/datasets/commands/datasets_cli.py", line 39, in main service.run() File "/home/name/LCCC/datasets/src/datasets/commands/dummy_data.py", line 311, in run self._autogenerate_dummy_data( File "/home/name/LCCC/datasets/src/datasets/commands/dummy_data.py", line 337, in _autogenerate_dummy_data dataset_builder._split_generators(dl_manager) File "/home/name/.cache/huggingface/modules/datasets_modules/datasets/personal_dialog/559332bced5eeafa7f7efc2a7c10ce02cee2a8116bbab4611c35a50ba2715b77/personal_dialog.py", line 108, in _split_generators data_dir = dl_manager.download_and_extract(urls) File "/home/name/LCCC/datasets/src/datasets/commands/dummy_data.py", line 56, in download_and_extract dummy_output = self.mock_download_manager.download(url_or_urls) File "/home/name/LCCC/datasets/src/datasets/download/mock_download_manager.py", line 130, in download return self.download_and_extract(data_url) File "/home/name/LCCC/datasets/src/datasets/download/mock_download_manager.py", line 122, in download_and_extract return self.create_dummy_data_dict(dummy_file, data_url) File "/home/name/LCCC/datasets/src/datasets/download/mock_download_manager.py", line 165, in create_dummy_data_dict if isinstance(first_value, str) and len(set(dummy_data_dict.values())) < len(dummy_data_dict.values()): TypeError: unhashable type: 'list' ## Steps to reproduce the bug You can use my dataset script implemented here: https://github.com/silverriver/datasets/blob/2ecd36760c40b8e29b1137cd19b5bad0e19c76fd/datasets/personal_dialog/personal_dialog.py ```python datasets_cli dummy_data datasets/personal_dialog --auto_generate ``` You can change https://github.com/silverriver/datasets/blob/2ecd36760c40b8e29b1137cd19b5bad0e19c76fd/datasets/personal_dialog/personal_dialog.py#L54 to ``` "train": "https://huggingface.co/datasets/silver/personal_dialog/resolve/main/dev_random.jsonl.gz" ``` before runing the above script to avoid downloading a large training data. ## Expected results The dummy data should be generated ## Actual results An error is raised. It seems that in https://github.com/huggingface/datasets/blob/12540dd75015678ec6019f258d811ee107439a73/src/datasets/download/mock_download_manager.py#L165 We only check if the first item of dummy_data_dict.values() is str. However, dummy_data_dict.values() may have the type of [str, list, list]. A simple fix would be changing https://github.com/huggingface/datasets/blob/12540dd75015678ec6019f258d811ee107439a73/src/datasets/download/mock_download_manager.py#L165 to ```python if all([isinstance(value, str) for value in dummy_data_dict.values()]) and len(set(dummy_data_dict.values())) < len(dummy_data_dict.values()): ``` But I don't know if this kinds of change may bring any side effect since I am not sure about the detail logic here. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: Linux - Python version: Python 3.9.10 - PyArrow version: 7.0.0
closed
https://github.com/huggingface/datasets/issues/4428
2022-05-31T16:10:57
2022-06-07T09:24:09
2022-06-07T09:24:09
{ "login": "silverriver", "id": 2529049, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,253,959,313
4,427
Add HF.co for PRs/Issues for specific datasets
As in https://github.com/huggingface/transformers/pull/17485, issues and PR for datasets under a namespace have to be on the HF Hub
closed
https://github.com/huggingface/datasets/pull/4427
2022-05-31T14:31:21
2022-06-01T12:37:42
2022-06-01T12:29:02
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,253,887,311
4,426
Add loading variable number of columns for different splits
**Is your feature request related to a problem? Please describe.** The original dataset `blended_skill_talk` consists of different sets of columns for the different splits: the test/valid splits have an additional data column `label_candidates` that the train split doesn't have. When loading such data, an exception occurs at `table.py:cast_table_to_schema` because of the mismatched columns.
closed
https://github.com/huggingface/datasets/issues/4426
2022-05-31T13:40:16
2022-06-03T16:25:25
2022-06-03T16:25:25
{ "login": "DrMatters", "id": 22641583, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
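A possible client-side workaround (a sketch, not the merged fix): give the train split an empty `label_candidates` column so all splits share one schema. Note the empty lists are typed as lists of nulls by Arrow, so a later cast may still be needed:

```python
from datasets import load_dataset

dsets = load_dataset("blended_skill_talk")
if "label_candidates" not in dsets["train"].column_names:
    # Pad the train split with empty candidate lists to align the schemas.
    dsets["train"] = dsets["train"].add_column(
        "label_candidates", [[] for _ in range(len(dsets["train"]))]
    )
```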
1,253,641,604
4,425
Make extensions case-insensitive in timit_asr dataset
Related to #4422.
closed
https://github.com/huggingface/datasets/pull/4425
2022-05-31T10:10:04
2022-06-01T14:15:30
2022-06-01T14:06:51
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,253,542,488
4,424
Fix DuplicatedKeysError in timit_asr dataset
Fix #4422.
closed
https://github.com/huggingface/datasets/pull/4424
2022-05-31T08:47:45
2022-05-31T13:50:50
2022-05-31T13:42:31
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,253,326,023
4,423
Add new dataset MMChat
Hi, I am adding a new dataset MMChat. It seems that all tests pass
closed
https://github.com/huggingface/datasets/pull/4423
2022-05-31T04:45:07
2022-06-11T12:40:52
2022-06-11T12:31:42
{ "login": "silverriver", "id": 2529049, "type": "User" }
[]
true
[]
1,253,146,511
4,422
Cannot load timit_asr data set
## Describe the bug I am trying to load the timit_asr data set. I have tried with a copy from the LDC, and a copy from deepai. In both cases they fail with a "duplicate key" error. With the LDC version I have to convert the file extensions all to upper-case before I can load it at all. ## Steps to reproduce the bug ```python timit = datasets.load_dataset("timit_asr", data_dir = "/path/to/dataset") # Sample code to reproduce the bug ``` ## Expected results The data set should load without error. It worked for me before the LDC URL change. ## Actual results ``` datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET ! Found duplicate Key: SA1 Keys should be unique and deterministic in nature ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.2.2 - Platform: Linux-5.4.0-90-generic-x86_64-with-glibc2.17 - Python version: 3.8.12 - PyArrow version: 8.0.0 - Pandas version: 1.4.2
closed
https://github.com/huggingface/datasets/issues/4422
2022-05-30T22:00:22
2022-06-02T06:34:05
2022-05-31T13:42:31
{ "login": "bhaddow", "id": 992795, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,253,059,467
4,421
Add extractor for bzip2-compressed files
This change enables loading bzipped datasets, just like any other compressed dataset.
closed
https://github.com/huggingface/datasets/pull/4421
2022-05-30T19:19:40
2022-06-06T15:22:50
2022-06-06T15:22:50
{ "login": "osyvokon", "id": 2910707, "type": "User" }
[]
true
[]
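For context on what bzip2 extraction involves, a standalone sketch of the underlying mechanics (this is not the actual `datasets` extractor interface, just the standard-library calls it can build on):

```python
import bz2
import shutil

def is_bz2(path: str) -> bool:
    # bzip2 files start with the magic bytes b"BZh"
    with open(path, "rb") as f:
        return f.read(3) == b"BZh"

def extract_bz2(input_path: str, output_path: str) -> str:
    # Stream-decompress so large archives don't need to fit in memory.
    with bz2.open(input_path, "rb") as compressed, open(output_path, "wb") as out:
        shutil.copyfileobj(compressed, out)
    return output_path
```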
1,252,739,239
4,420
Metric evaluation problems in multi-node, shared file system
## Describe the bug Metric evaluation fails in multi-node within a shared file system, because the master process cannot find the lock files from other nodes. (This issue was originally mentioned in the transformers repo https://github.com/huggingface/transformers/issues/17412) ## Steps to reproduce the bug 1. clone [this huggingface model](https://huggingface.co/PereLluis13/wav2vec2-xls-r-300m-ca-lm) and replace the `run_speech_recognition_ctc.py` script with the version in the gist [here](https://gist.github.com/gullabi/3f66094caa8db1c1e615dd35bd67ec71#file-run_speech_recognition_ctc-py). 2. Setup the `venv` according to the requirements of the model file plus `datasets==2.0.0`, `transformers==4.18.0` and `torch==1.9.0` 3. Launch the runner in a distributed environment which has a shared file system for two nodes, preferably with SLURM. Example [here](https://gist.github.com/gullabi/3f66094caa8db1c1e615dd35bd67ec71) Specifically for the datasets, for the distributed setup the `load_metric` is called as: ``` process_id=int(os.environ["RANK"]) num_process=int(os.environ["WORLD_SIZE"]) eval_metrics = {metric: load_metric(metric, process_id=process_id, num_process=num_process, experiment_id="slurm") for metric in data_args.eval_metrics} ``` ## Expected results The training should not fail, due to the failure of the `Metric.compute()` step. ## Actual results For the test I am executing the world size is 4, with 2 GPUs in 2 nodes. However the process is not finding the necessary lock files ``` File "/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/run_speech_recognition_ctc.py", line 841, in <module> main() File "/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/run_speech_recognition_ctc.py", line 792, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/trainer.py", line 1497, in train self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval) File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/trainer.py", line 1624, in _maybe_log_save_evaluate metrics = self.evaluate(ignore_keys=ignore_keys_for_eval) File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/trainer.py", line 2291, in evaluate metric_key_prefix=metric_key_prefix, File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/trainer.py", line 2535, in evaluation_loop metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels)) File "/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/run_speech_recognition_ctc.py", line 742, in compute_metrics metrics = {k: v.compute(predictions=pred_str, references=label_str) for k, v in eval_metrics.items()} File "/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/run_speech_recognition_ctc.py", line 742, in <dictcomp> metrics = {k: v.compute(predictions=pred_str, references=label_str) for k, v in eval_metrics.items()} File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/datasets/metric.py", line 419, in compute self.add_batch(**inputs) File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/datasets/metric.py", line 465, in add_batch self._init_writer() File 
"/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/datasets/metric.py", line 552, in _init_writer self._check_rendez_vous() # wait for master to be ready and to let everyone go File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/datasets/metric.py", line 342, in _check_rendez_vous ) from None ValueError: Expected to find locked file /home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-0.arrow.lock from process 3 but it doesn't exist. ``` When I look at the cache directory, I can see all the lock files in principle: ``` /home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-0.arrow /home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-0.arrow.lock /home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-1.arrow /home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-1.arrow.lock /home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-2.arrow /home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-2.arrow.lock /home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-3.arrow /home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-3.arrow.lock /home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-rdv.lock ``` I see that there was another related issue here https://github.com/huggingface/datasets/issues/1942, but it seems to have resolved via https://github.com/huggingface/datasets/pull/1966. Let me know if there is problem with how I am calling the `load_metric` or whether I need to make changes to the `.compute()` steps. ## Environment info - `datasets` version: 2.0.0 - Platform: Linux-4.18.0-147.8.1.el8_1.x86_64-x86_64-with-centos-8.1.1911-Core - Python version: 3.7.4 - PyArrow version: 7.0.0 - Pandas version: 1.3.0
closed
https://github.com/huggingface/datasets/issues/4420
2022-05-30T13:24:05
2023-07-11T09:33:18
2023-07-11T09:33:17
{ "login": "gullabi", "id": 40303490, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,252,652,896
4,419
Update `unittest` assertions over tuples from `assertEqual` to `assertTupleEqual`
**Is your feature request related to a problem? Please describe.** This is more a readability improvement than a proposal: wouldn't it be better to use `assertTupleEqual` over tuples rather than `assertEqual`? `unittest` added that function in Python 3.1, as detailed at https://docs.python.org/3/library/unittest.html#unittest.TestCase.assertTupleEqual, so maybe it's worth updating. Find an example of an `assertEqual` over a tuple in the 🤗 `datasets` unit tests over an `ArrowDataset` at https://github.com/huggingface/datasets/blob/0bb47271910c8a0b628dba157988372307fca1d2/tests/test_arrow_dataset.py#L570 **Describe the solution you'd like** Start slowly replacing all the `assertEqual` statements with `assertTupleEqual` if the assertion is done over a Python tuple, as we're already doing for Python lists with `assertListEqual` rather than `assertEqual`. **Additional context** If so, please let me know and I'll try to go over the tests and create a PR if applicable; otherwise, if you consider this should stay as `assertEqual` rather than `assertTupleEqual`, feel free to close this issue! Thanks 🤗
closed
https://github.com/huggingface/datasets/issues/4419
2022-05-30T12:13:18
2022-09-30T16:01:37
2022-09-30T16:01:37
{ "login": "alvarobartt", "id": 36760800, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
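A small illustration of the proposed change (the shape value is made up for the example):

```python
import unittest

class ShapeTest(unittest.TestCase):
    def test_shape(self):
        dset_shape = (30, 3)  # e.g. the tuple returned by `dset.shape`
        # Before: generic equality assertion
        self.assertEqual(dset_shape, (30, 3))
        # After: tuple-specific assertion, with a type check and a
        # tuple-aware failure message on mismatch
        self.assertTupleEqual(dset_shape, (30, 3))

if __name__ == "__main__":
    unittest.main()
```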
1,252,506,268
4,418
Add dataset MMChat
null
closed
https://github.com/huggingface/datasets/pull/4418
2022-05-30T10:10:40
2022-05-30T14:58:18
2022-05-30T14:58:18
{ "login": "silverriver", "id": 2529049, "type": "User" }
[]
true
[]
1,251,933,091
4,417
how to convert a dict generator into a huggingface dataset.
### Link _No response_ ### Description Hey there, I have used seqio to get a well distributed mixture of samples from multiple dataset. However the resultant output from seqio is a python generator dict, which I cannot produce back into huggingface dataset. The generator contains all the samples needed for training the model but I cannot convert it into a huggingface dataset. The code looks like this: ``` for ex in seqio_data: print(ex[“text”]) ``` I need to convert the seqio_data (generator) into huggingface dataset. the complete seqio code goes here: ``` import functools import seqio import tensorflow as tf import t5.data from datasets import load_dataset from t5.data import postprocessors from t5.data import preprocessors from t5.evaluation import metrics from seqio import FunctionDataSource, utils TaskRegistry = seqio.TaskRegistry def gen_dataset(split, shuffle=False, seed=None, column="text", dataset_params=None): dataset = load_dataset(**dataset_params) if shuffle: if seed: dataset = dataset.shuffle(seed=seed) else: dataset = dataset.shuffle() while True: for item in dataset[str(split)]: yield item[column] def dataset_fn(split, shuffle_files, seed=None, dataset_params=None): return tf.data.Dataset.from_generator( functools.partial(gen_dataset, split, shuffle_files, seed, dataset_params=dataset_params), output_signature=tf.TensorSpec(shape=(), dtype=tf.string, name=dataset_name) ) @utils.map_over_dataset def target_to_key(x, key_map, target_key): """Assign the value from the dataset to target_key in key_map""" return {**key_map, target_key: x} dataset_name = 'oscar-corpus/OSCAR-2109' subset= 'mr' dataset_params = {"path": dataset_name, "language":subset, "use_auth_token":True} dataset_shapes = None TaskRegistry.add( "oscar_marathi_corpus", source=seqio.FunctionDataSource( dataset_fn=functools.partial(dataset_fn, dataset_params=dataset_params), splits=("train", "validation"), caching_permitted=False, num_input_examples=dataset_shapes, ), preprocessors=[ functools.partial( target_to_key, key_map={ "targets": None, }, target_key="targets")], output_features={"targets": seqio.Feature(vocabulary=seqio.PassThroughVocabulary, add_eos=False, dtype=tf.string, rank=0)}, metric_fns=[] ) dataset = seqio.get_mixture_or_task("oscar_marathi_corpus").get_dataset( sequence_length=None, split="train", shuffle=True, num_epochs=1, shard_info=seqio.ShardInfo(index=0, num_shards=10), use_cached=False, seed=42 ) for _, ex in zip(range(5), dataset): print(ex['targets'].numpy().decode()) ``` ### Owner _No response_
closed
https://github.com/huggingface/datasets/issues/4417
2022-05-29T16:28:27
2022-09-16T14:44:19
2022-09-16T14:44:19
{ "login": "StephennFernandes", "id": 32235549, "type": "User" }
[ { "name": "question", "color": "d876e3" } ]
false
[]
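For the question above: recent versions of `datasets` (newer than what was available when this was asked) provide `Dataset.from_generator`, which fits this use case. A runnable sketch with a stand-in generator:

```python
from datasets import Dataset

def gen():
    # Stand-in for the seqio mixture generator from the question;
    # yield plain dicts, one per example.
    for targets in [b"sample one", b"sample two"]:
        yield {"text": targets.decode()}

ds = Dataset.from_generator(gen)
print(ds[0])  # {'text': 'sample one'}
```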
1,251,875,763
4,416
Add LCCC dataset
Hi, I am trying to add a new dataset, LCCC. All tests pass.
closed
https://github.com/huggingface/datasets/pull/4416
2022-05-29T12:27:19
2022-06-15T10:28:59
2022-06-02T09:13:46
{ "login": "silverriver", "id": 2529049, "type": "User" }
[]
true
[]
1,251,002,981
4,415
Update `dataset_infos.json` with new split info in `Dataset.push_to_hub` to avoid verification error
Update `dataset_infos.json` when pushing splits one by one via `Dataset.push_to_hub` to avoid the splits verification error. TODO: ~~- [ ] handle token + `{Audio, Image}.embed_storage`~~ - [x] tests
closed
https://github.com/huggingface/datasets/pull/4415
2022-05-27T17:03:42
2022-06-07T12:42:25
2022-06-07T12:33:52
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,250,546,888
4,414
Rename DatasetBuilder config_name
This PR renames the DatasetBuilder keyword argument `name` to `config_name` so that: - it avoids confusion with the attribute `DatasetBuilder.name`, which is different - it aligns with the Dataset property name `config_name`, defined in `DatasetInfoMixin.config_name` Another, simpler possibility could be to rename it to just `config` instead. Please note I have only renamed this argument of DatasetBuilder because I think this refactoring has a low impact on users: we can assume this is not a public-facing parameter, but private or related to the inners of our library. It would have a major impact to also rename it in: - load_dataset - load_dataset_builder: although this could also be assumed as inners... - in our CLI commands Besides the naming of `name`, I also find the naming of `path` in `load_dataset` really confusing. IMHO, they should have a simpler and more precise meaning (currently, they are too vague). I would propose (maybe for the next major release): ``` load_dataset(dataset, config,... ``` instead of ``` load_dataset(path, name,... ```
closed
https://github.com/huggingface/datasets/pull/4414
2022-05-27T09:28:02
2022-05-31T15:07:21
2022-05-31T14:58:51
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,250,259,822
4,413
Dataset Viewer issue for ett
### Link https://huggingface.co/datasets/ett ### Description Timestamp is not JSON serializable. ``` Status code: 500 Exception: Status500Error Message: Type is not JSON serializable: Timestamp ``` ### Owner No
closed
https://github.com/huggingface/datasets/issues/4413
2022-05-27T02:12:35
2022-06-15T07:30:46
2022-06-15T07:30:46
{ "login": "dgcnz", "id": 24966039, "type": "User" }
[ { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,249,490,179
4,412
Skip hidden files/directories in data files resolution and `iter_files`
Fix #4115
closed
https://github.com/huggingface/datasets/pull/4412
2022-05-26T12:10:28
2022-06-15T17:11:25
2022-06-01T13:04:16
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,249,462,390
4,411
Update `_format_columns` in `remove_columns`
As explained in #4398, when calling `dataset.add_faiss_index` after a certain sequence of `cast_column`, `map`, and `remove_columns` operations, it fails because it tries to look for already-removed columns. After testing some possible fixes, setting the dataset format right after removing the columns seems to work fine, so I added a call to `.set_format` in the `remove_columns` function. Hope this helps!
closed
https://github.com/huggingface/datasets/pull/4411
2022-05-26T11:40:06
2022-06-14T19:05:37
2022-06-14T16:01:56
{ "login": "alvarobartt", "id": 36760800, "type": "User" }
[]
true
[]
1,249,148,457
4,410
Remove Google Drive URL in spider dataset
The `spider` dataset is distributed under the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode) license. Fix #4401.
closed
https://github.com/huggingface/datasets/pull/4410
2022-05-26T06:17:35
2022-05-26T06:48:42
2022-05-26T06:40:12
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,249,083,179
4,409
Update: add using pcm bytes (#4323)
First of all, please see #4323 for why I cannot use {"path", "array", "sampling_rate"}: `sf.write(format="wav")` and `sf.read(BytesIO)` change my PCM data values, probably because WAV files have a header while PCM files do not. Regarding variable naming: the PCM data has type "bytes", so I think the name "array" does not fit. So I use the scipy and numpy libraries (numpy is already a huggingface dependency) and, following @lhoestq's answer: 1. encode -> turn the sampling_rate and PCM bytes into WAV-style bytes (scipy.wavfile.write to bytes) 2. convert bytes using fairseq-style PCM audio reading, see [FileAudioDataset](https://github.com/facebookresearch/fairseq/blob/main/fairseq/data/audio/raw_audio_dataset.py) 3. decode -> read with wavfile.read This way my PCM bytes are not corrupted when converted to float data, and other audio types (WAV) remain safe. Please check!
closed
https://github.com/huggingface/datasets/pull/4409
2022-05-26T04:26:36
2022-07-07T13:27:29
2022-07-07T13:16:09
{ "login": "YooSungHyun", "id": 34292279, "type": "User" }
[]
true
[]
1,248,687,574
4,408
Update imagenet gate
null
closed
https://github.com/huggingface/datasets/pull/4408
2022-05-25T20:32:19
2022-05-25T20:45:11
2022-05-25T20:36:47
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
1,248,671,778
4,407
Dataset Viewer issue for conll2012_ontonotesv5
### Link https://huggingface.co/datasets/conll2012_ontonotesv5 ### Description Dataset viewer outage. ### Owner No
closed
https://github.com/huggingface/datasets/issues/4407
2022-05-25T20:18:33
2022-06-07T18:39:16
2022-06-07T18:39:16
{ "login": "jiangwangyi", "id": 39762734, "type": "User" }
[ { "name": "dataset-viewer", "color": "E5583E" } ]
false
[]
1,248,626,622
4,406
Improve language tag for PIAF dataset
Hi, As pointed out by @lhoestq in this discussion (https://huggingface.co/datasets/asi/wikitext_fr/discussions/1), it is not yet possible to edit datasets outside of a namespace with the Hub PR feature, so you have to go through GitHub. This modification should allow better referencing, since currently only the `xx` language tags are taken into account and not the `xx-xx` ones.
closed
https://github.com/huggingface/datasets/pull/4406
2022-05-25T19:41:55
2022-05-27T14:51:23
2022-05-27T14:51:23
{ "login": "lbourdois", "id": 58078086, "type": "User" }
[]
true
[]
1,248,574,087
4,405
[TypeError: Couldn't cast array of type] Cannot process dataset in v2.2.2
## Describe the bug I am trying to process the [conll2012_ontonotesv5](https://huggingface.co/datasets/conll2012_ontonotesv5) dataset in `datasets` v2.2.2 and am running into a type error when casting the features. ## Steps to reproduce the bug ```python import os from typing import ( List, Dict, ) from collections import ( defaultdict, ) from dataclasses import ( dataclass, ) from datasets import ( load_dataset, ) @dataclass class ConllConverter: path: str name: str cache_dir: str def __post_init__( self, ): self.dataset = load_dataset( path=self.path, name=self.name, cache_dir=self.cache_dir, ) def convert( self, ): class_label = self.dataset["train"].features["sentences"][0]["named_entities"].feature # label_set = list(set([ # label.split("-")[1] if label != "O" else label for label in class_label.names # ])) def prepare_chunk(token, entity): assert len(token) == len(entity) # Sequence length length = len(token) # Variable used entity_chunk = defaultdict(list) idx = flag = 0 # While loop while idx < length: if entity[idx] == "O": flag += 1 idx += 1 else: iob_tp, lab_tp = entity[idx].split("-") assert iob_tp == "B" idx += 1 while idx < length and entity[idx].startswith("I-"): idx += 1 entity_chunk[lab_tp].append(token[flag: idx]) flag = idx entity_chunk = dict(entity_chunk) # for label in label_set: # if label != "O" and label not in entity_chunk.keys(): # entity_chunk[label] = None return entity_chunk def prepare_features( batch: Dict[str, List], ) -> Dict[str, List]: sentence = [ sent for doc_sent in batch["sentences"] for sent in doc_sent ] feature = { "sentence": list(), } for sent in sentence: token = sent["words"] entity = class_label.int2str(sent["named_entities"]) entity_chunk = prepare_chunk(token, entity) sent_feat = { "token": token, "entity": entity, "entity_chunk": entity_chunk, } feature["sentence"].append(sent_feat) return feature column_names = self.dataset.column_names["train"] dataset = self.dataset.map( function=prepare_features, with_indices=False, batched=True, batch_size=3, remove_columns=column_names, num_proc=1, ) dataset.save_to_disk( dataset_dict_path=os.path.join("data", self.path, self.name) ) if __name__ == "__main__": converter = ConllConverter( path="conll2012_ontonotesv5", name="english_v4", cache_dir="cache", ) converter.convert() ``` ## Expected results I want to use the dataset to perform NER task and to change the label list into a {Entity Type: list of spans} format. 
## Actual results <details> <summary>Traceback</summary> ```python Traceback (most recent call last): | 0/81 [00:00<?, ?ba/s] File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/multiprocess/pool.py", line 125, in worker result = (True, func(*args, **kwds)) File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 532, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 499, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/fingerprint.py", line 458, in wrapper out = func(self, *args, **kwargs) File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2751, in _map_single writer.write_batch(batch) File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_writer.py", line 503, in write_batch arrays.append(pa.array(typed_sequence)) File "pyarrow/array.pxi", line 230, in pyarrow.lib.array File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_writer.py", line 198, in __arrow_array__ out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type) File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1675, in wrapper return func(array, *args, **kwargs) File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1793, in cast_array_to_feature arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()] File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1793, in <listcomp> arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()] File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1675, in wrapper return func(array, *args, **kwargs) File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1844, in cast_array_to_feature raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}") TypeError: Couldn't cast array of type struct<CARDINAL: list<item: list<item: string>>, DATE: list<item: list<item: string>>, EVENT: list<item: list<item: string>>, FAC: list<item: list<item: string>>, GPE: list<item: list<item: string>>, LANGUAGE: list<item: list<item: string>>, LAW: list<item: list<item: string>>, LOC: list<item: list<item: string>>, MONEY: list<item: list<item: string>>, NORP: list<item: list<item: string>>, ORDINAL: list<item: list<item: string>>, ORG: list<item: list<item: string>>, PERCENT: list<item: list<item: string>>, PERSON: list<item: list<item: string>>, QUANTITY: list<item: list<item: string>>, TIME: list<item: list<item: string>>, WORK_OF_ART: list<item: list<item: string>>> to {'CARDINAL': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'DATE': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'EVENT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'FAC': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'GPE': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), 
length=-1, id=None), length=-1, id=None), 'LAW': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'LOC': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'MONEY': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'NORP': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'ORDINAL': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'ORG': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PERCENT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PERSON': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PRODUCT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'QUANTITY': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'TIME': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'WORK_OF_ART': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None)} """ The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home2/jiangwangyi/workspace/work/Entity/dataconverter.py", line 110, in <module> converter.convert() File "/home2/jiangwangyi/workspace/work/Entity/dataconverter.py", line 91, in convert dataset = self.dataset.map( File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/dataset_dict.py", line 770, in map { File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/dataset_dict.py", line 771, in <dictcomp> k: dataset.map( File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2459, in map transformed_shards[index] = async_result.get() File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/multiprocess/pool.py", line 771, in get raise self._value TypeError: Couldn't cast array of type struct<CARDINAL: list<item: list<item: string>>, DATE: list<item: list<item: string>>, EVENT: list<item: list<item: string>>, FAC: list<item: list<item: string>>, GPE: list<item: list<item: string>>, LANGUAGE: list<item: list<item: string>>, LAW: list<item: list<item: string>>, LOC: list<item: list<item: string>>, MONEY: list<item: list<item: string>>, NORP: list<item: list<item: string>>, ORDINAL: list<item: list<item: string>>, ORG: list<item: list<item: string>>, PERCENT: list<item: list<item: string>>, PERSON: list<item: list<item: string>>, QUANTITY: list<item: list<item: string>>, TIME: list<item: list<item: string>>, WORK_OF_ART: list<item: list<item: string>>> to {'CARDINAL': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'DATE': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'EVENT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'FAC': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'GPE': Sequence(feature=Sequence(feature=Value(dtype='string', 
id=None), length=-1, id=None), length=-1, id=None), 'LAW': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'LOC': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'MONEY': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'NORP': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'ORDINAL': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'ORG': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PERCENT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PERSON': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PRODUCT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'QUANTITY': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'TIME': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'WORK_OF_ART': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None)} ``` </details> ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.2.2 - Platform: Ubuntu 18.04 - Python version: 3.9.7 - PyArrow version: 7.0.0
closed
https://github.com/huggingface/datasets/issues/4405
2022-05-25T18:56:43
2022-06-07T14:27:20
2022-06-07T14:27:20
{ "login": "jiangwangyi", "id": 39762734, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
1,248,572,899
4,404
Dataset should have a `.name` field
**Is your feature request related to a problem? Please describe.** If building pipelines that can evaluate on more than one dataset, it would be nice to be able to log results with messages like `Evaluating on {dataset.name}` or `results for {dataset.name} are: {results}`. Without some way of concisely identifying a dataset from the dataset object, tools which might run on more than one dataset must be passed the dataset object _and_ the name/id of the dataset being used. **Describe the solution you'd like** The DatasetInfo class should have a `name` field which is the name of a dataset. Then, for a given dataset, the `version` can be updated as it evolves over time, while its different versions remain identifiable by the unique `name`. The name could then be accessed via `dataset.name`. **Describe alternatives you've considered** For my own purposes I am considering making a `NamedDataset[Dataset]` where the subclass just has a `.name` field. **Additional context** My guess is that most use cases do not work with more than one dataset in a given pipeline, so a name is not really needed. This has surprised me though, as one of the advantages of a standard dataset interface is being able to build pipelines which can be passed a dataset, separating the responsibility of dataset loading from the train or eval pipeline.
closed
https://github.com/huggingface/datasets/issues/4404
2022-05-25T18:56:08
2022-09-13T15:09:30
2022-06-16T10:47:53
{ "login": "f4hy", "id": 36440, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
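A sketch of the wrapper alternative mentioned in the issue above, assuming simple attribute delegation is enough for the pipeline code (the class and field names are the requester's, the implementation is illustrative):

```python
from dataclasses import dataclass
from datasets import Dataset

@dataclass
class NamedDataset:
    """Pair a Dataset with a human-readable name for logging."""
    name: str
    dataset: Dataset

    def __getattr__(self, attr):
        # Fall back to the wrapped dataset for everything else.
        return getattr(self.dataset, attr)

# Usage:
# eval_set = NamedDataset("squad-validation", load_dataset("squad", split="validation"))
# print(f"Evaluating on {eval_set.name}")
```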
1,248,390,134
4,403
Uncomment logging deactivation for ArrowBasedBuilder
null
closed
https://github.com/huggingface/datasets/pull/4403
2022-05-25T16:46:15
2022-05-31T08:33:36
2022-05-31T08:25:02
{ "login": "thomasw21", "id": 24695242, "type": "User" }
[]
true
[]
1,248,078,067
4,402
Skip identical files in `push_to_hub` instead of overwriting
Skip identical files instead of overwriting them, to save bandwidth and to allow resuming an upload by repeating calls to `push_to_hub`. This circumvents the (user-side/server-side) errors that can arise when working with large datasets due to long-lasting HTTP connections. To be able to check if an upload can be resumed, this PR modifies the shard naming scheme from: ``` data/{split}-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9].parquet ``` to: ``` data/{split}-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]-<SHARD_FINGERPRINT>.parquet ``` cc @LysandreJik
closed
https://github.com/huggingface/datasets/pull/4402
2022-05-25T13:12:51
2022-05-25T15:16:36
2022-05-25T15:08:03
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
1,247,695,921
4,401
"NonMatchingChecksumError" when importing 'spider' dataset
## Describe the bug When importing 'spider' dataset [https://huggingface.co/datasets/spider] an error occurs ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('spider') ``` ## Expected results Dataset object ## Actual results NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/uc?export=download&id=1_AckYkinAnhqmRQtGsQgUKAnTHxxX5J0'] ## Environment info - `datasets` version: 2.2.2 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.7.11 - PyArrow version: 6.0.1 - Pandas version: 1.3.5
closed
https://github.com/huggingface/datasets/issues/4401
2022-05-25T07:45:07
2022-05-26T06:40:12
2022-05-26T06:40:12
{ "login": "OmarAlaaeldein", "id": 81417777, "type": "User" }
[ { "name": "hosted-on-google-drive", "color": "8B51EF" } ]
false
[]
1,247,404,237
4,400
load dataset wikitext-2-raw-v1 failed. Could not reach wikitext-2-raw-v1.py.
## Describe the bug Could not reach wikitext-2-raw-v1.py ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("wikitext-2-raw-v1") ``` ## Expected results Download `wikitext-2-raw-v1` dataset successfully. ## Actual results ``` File "load_datasets.py", line 13, in <module> load_dataset("wikitext-2-raw-v1") File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 1715, in load_dataset **config_kwargs, File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 1536, in load_dataset_builder data_files=data_files, File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 1282, in dataset_module_factory raise e1 from None File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 1224, in dataset_module_factory dynamic_modules_path=dynamic_modules_path, File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 559, in get_module local_path = self.download_loading_script(revision) File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 539, in download_loading_script return cached_path(file_path, download_config=download_config) File "/root/miniconda3/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 246, in cached_path download_desc=download_config.download_desc, File "/root/miniconda3/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 582, in get_from_cache raise ConnectionError(f"Couldn't reach {url} ({repr(head_error)})") ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.2.2/datasets/wikitext-2-raw-v1/wikitext-2-raw-v1.py (ReadTimeout(ReadTimeoutError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Read timed out. (read timeout=100)",),)) ``` I tried to download wikitext-2-raw-v1.py by chrome and got: ![image](https://user-images.githubusercontent.com/20658907/170171595-0ca9f1da-c05a-4b57-861e-9530bfa3bdb9.png) ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.2.2 - Platform: CentOS 7 - Python version: 3.6 - PyArrow version: 3.0.0
closed
https://github.com/huggingface/datasets/issues/4400
2022-05-25T03:10:44
2022-10-24T06:10:27
2022-05-25T03:26:36
{ "login": "cailun01", "id": 20658907, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
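Worth noting, independently of the connectivity error reported above: `wikitext-2-raw-v1` is a configuration of the `wikitext` dataset rather than a dataset name of its own, so the usual way to load it is:

```python
from datasets import load_dataset

# "wikitext" is the dataset; "wikitext-2-raw-v1" is one of its configs.
ds = load_dataset("wikitext", "wikitext-2-raw-v1")
```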
1,246,948,299
4,399
LocalDatasetModuleFactoryWithoutScript extracts invalid builder name
## Describe the bug Trying to load a local dataset raises an error indicating that the config builder has to have a name. No error should be reported, since the call is completely valid. ## Steps to reproduce the bug ```python load_dataset("./data/some-dataset/", name="some-name") ``` ## Expected results The dataset should be loaded. ## Actual results ``` Traceback (most recent call last): File "train_lquad.py", line 19, in <module> load(tokenize_target_function, tokenize_target_function, {}, tokenizer) File "train_lquad.py", line 14, in load dataset = load_dataset("./data/lquad/", name="lquad") File "/net/pr2/scratch/people/plgapohl/python-3.8.6/lib/python3.8/site-packages/datasets/load.py", line 1708, in load_dataset builder_instance = load_dataset_builder( File "/net/pr2/scratch/people/plgapohl/python-3.8.6/lib/python3.8/site-packages/datasets/load.py", line 1560, in load_dataset_builder builder_instance: DatasetBuilder = builder_cls( File "/net/pr2/scratch/people/plgapohl/python-3.8.6/lib/python3.8/site-packages/datasets/builder.py", line 269, in __init__ self.config, self.config_id = self._create_builder_config( File "/net/pr2/scratch/people/plgapohl/python-3.8.6/lib/python3.8/site-packages/datasets/builder.py", line 403, in _create_builder_config raise ValueError(f"BuilderConfig must have a name, got {builder_config.name}") ValueError: BuilderConfig must have a name, got ``` ## Environment info - `datasets` version: 2.2.2 - Platform: Linux-4.18.0-348.20.1.el8_5.x86_64-x86_64-with-glibc2.2.5 - Python version: 3.8.6 - PyArrow version: 8.0.0 - Pandas version: 1.4.2 The error is probably in line 795 in load.py: ``` builder_kwargs = { "hash": hash, "data_files": data_files, "name": os.path.basename(self.path), "base_path": self.path, **builder_kwargs, } ``` `os.path.basename` returns an empty string for a directory path with a trailing slash (as in the call above), rather than the name of the directory.
closed
https://github.com/huggingface/datasets/issues/4399
2022-05-24T18:03:01
2022-09-12T15:30:43
2022-09-12T15:30:43
{ "login": "apohllo", "id": 40543, "type": "User" }
[ { "name": "bug", "color": "d73a4a" }, { "name": "good first issue", "color": "7057ff" } ]
false
[]
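A quick demonstration of the `basename` behaviour pinpointed above, plus one possible fix via `normpath` (a sketch, not necessarily the fix that was merged):

```python
import os

path = "./data/some-dataset/"
print(os.path.basename(path))                     # "" (trailing slash)
print(os.path.basename(os.path.normpath(path)))   # "some-dataset"
```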
1,246,666,749
4,398
Calling `cast_column`/`remove_columns` and a sequence of `map` operations ends up making `faiss` fail with `ValueError`
First of all, sorry in advance for the unclear title, but this bug is weird to explain (at least for me), so I tried my best to summarize all the information in this issue. ## Describe the bug Calling a certain combination of operations over a 🤗 `Dataset` and then trying to calculate the `faiss` index with `.add_faiss_index` ends up throwing an exception while trying to set the format back for a previously removed column. But this just happens under certain conditions... I'll present some scenarios below! ## Steps to reproduce the bug Assuming the following dataset named `sample.csv` with some IMDb data: ```csv id,title,summary 1877830,"The Batman","When a sadistic serial killer begins murdering key political figures in Gotham, Batman is forced to investigate the city's hidden corruption and question his family's involvement." 9419884,"Doctor Strange in the Multiverse of Madness","Doctor Strange teams up with a mysterious teenage girl from his dreams who can travel across multiverses, to battle multiple threats, including other-universe versions of himself, which threaten to wipe out millions across the multiverse. They seek help from Wanda the Scarlet Witch, Wong and others." 11138512,"The Northman","From visionary director Robert Eggers comes The Northman, an action-filled epic that follows a young Viking prince on his quest to avenge his father's murder." 1745960,"Top Gun: Maverick","After more than thirty years of service as one of the Navy's top aviators, Pete Mitchell is where he belongs, pushing the envelope as a courageous test pilot and dodging the advancement in rank that would ground him." ``` We'll be able to reproduce the bug using the following piece of code: ```python # Sample code to reproduce the bug from transformers import DPRContextEncoder, DPRContextEncoderTokenizer import torch torch.set_grad_enabled(False) ctx_encoder = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base") ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base") from datasets import load_dataset, Value ds = load_dataset("csv", data_files=["sample.csv"], split="train") ds = ds.cast_column("id", Value("int32")) # from `int64` to `int32` ds = ds.map(lambda x: {"inputs": f"{ctx_tokenizer.sep_token}".join([x["title"], x["summary"]])}) # join the column values ds = ds.remove_columns(["title", "summary"]) def generate_embeddings(x): return {"embeddings": ctx_encoder(**ctx_tokenizer(x["inputs"], return_tensors="pt"))[0][0].numpy()} ds = ds.map(generate_embeddings) ds = ds.remove_columns("inputs") ds.add_faiss_index(column="embeddings") # It fails here! ``` The code above is an adaptation of https://huggingface.co/docs/datasets/faiss_es, for the sake of presenting the bug with a simple example. ## Expected results Ideally, the `faiss` index should be calculated over the 🤗 `Dataset` and no exception should be triggered. ## Actual results But what happens instead is that a `ValueError` is raised: `Columns ['inputs'] not in the dataset. Current columns in the dataset: ['id', 'embeddings']`, which makes no sense as that column has been previously dropped. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.2.2 - Platform: Linux-5.4.0-1074-azure-x86_64-with-glibc2.31 - Python version: 3.9.5 - PyArrow version: 8.0.0 - Pandas version: 1.4.2
closed
https://github.com/huggingface/datasets/issues/4398
2022-05-24T14:41:34
2022-06-14T16:01:56
2022-06-14T16:01:56
{ "login": "alvarobartt", "id": 36760800, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
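Continuing the repro script from the issue above, a possible workaround (a sketch, not the merged fix) is to drop the intermediate column inside the same `map` call, so the dataset's format state never references a removed column:

```python
# `ds` and `generate_embeddings` are the objects defined in the repro above.
ds = ds.map(generate_embeddings, remove_columns=["inputs"])
ds.add_faiss_index(column="embeddings")
```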
1,246,597,632
4,397
Fix dependency on dill version
We had to make a hotfix by pinning dill: - #4380 because from version 0.3.5, our custom `save_function` pickling function was raising an exception: - #4379 This PR fixes this by implementing our custom `save_function` depending on the version of dill. CC: @anivegesana This PR needs first being merged: - [x] #4384 - so that a circular import is fixed It is also convenient to merge first: - [x] #4385
closed
https://github.com/huggingface/datasets/pull/4397
2022-05-24T13:54:23
2022-10-26T08:45:37
2022-05-25T13:54:08
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,245,479,399
4,396
Fix URL in gem dataset for totto config
As commented in: - https://github.com/huggingface/datasets/issues/4386#issuecomment-1134902372 CC: @StevenTang1998
closed
https://github.com/huggingface/datasets/pull/4396
2022-05-23T17:16:12
2022-05-24T05:49:11
2022-05-24T05:41:00
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,245,436,486
4,395
Add Pascal VOC dataset
This PR adds the Pascal VOC dataset in the same way TFDS has it added. I believe we can iterate on this dataset and in future versions include more data, such as segmentation masks, but for now I think it is a good idea to just add it the same way as TFDS to get a solid first version out there.
closed
https://github.com/huggingface/datasets/pull/4395
2022-05-23T16:34:05
2023-09-24T09:37:05
2022-10-03T09:36:56
{ "login": "nateraw", "id": 32437151, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
1,245,221,657
4,394
trainer became extremely slow after reload dataset by `load_from_disk`
## Describe the bug Due to a memory problem, I need to save my tokenized datasets locally on CPU and reload them on multiple GPUs to run my training script. However, after I reload them with `load_from_disk` and start training, the speed is extremely slow. It says I need about 1500 hours with 8 A100 cards. Before this, I could run the whole script in one day with a single A100 card. Since I am trying to pre-train a BERT, **my dataset is very large (29,058,165 rows)** ## Steps to reproduce the bug ```python tokenized_datasets.save_to_disk( "/pathto/dataset" ) tokenized_datasets = load_from_disk( "/pathto/dataset" ) trainer = Trainer( model=model, args=training_args, train_dataset=tokenized_datasets["train"] if training_args.do_train else None, eval_dataset=tokenized_datasets["validation"] if training_args.do_eval else None, tokenizer=tokenizer, data_collator=data_collator, ) train_result = trainer.train(resume_from_checkpoint=checkpoint) ``` ## Expected results Without the save and reload process, I only need about one day to run the whole script with one A100 card. ## Actual results ``` [INFO|trainer.py:1290] 2022-05-23 22:49:46,266 >> ***** Running training ***** [INFO|trainer.py:1291] 2022-05-23 22:49:46,266 >> Num examples = 29058165 [INFO|trainer.py:1292] 2022-05-23 22:49:46,266 >> Num Epochs = 5 [INFO|trainer.py:1293] 2022-05-23 22:49:46,266 >> Instantaneous batch size per device = 16 [INFO|trainer.py:1294] 2022-05-23 22:49:46,266 >> Total train batch size (w. parallel, distributed & accumulation) = 256 [INFO|trainer.py:1295] 2022-05-23 22:49:46,266 >> Gradient Accumulation steps = 2 [INFO|trainer.py:1296] 2022-05-23 22:49:46,266 >> Total optimization steps = 567540 0%| | 1/567540 [00:09<1544:49:04, 9.80s/it] 0%| | 2/567540 [00:17<1320:00:17, 8.37s/it] 0%| | 3/567540 [00:26<1393:10:17, 8.84s/it] 0%| | 4/567540 [00:34<1344:56:33, 8.53s/it] 0%| | 5/567540 [00:43<1359:36:12, 8.62s/it] ``` ## Environment info ``` torch 1.11.0+cu113 torchaudio 0.11.0+cu113 torchvision 0.12.0+cu113 transformers 4.18.0 datasets 2.2.2 ```
open
https://github.com/huggingface/datasets/issues/4394
2022-05-23T14:04:37
2023-11-23T07:40:30
null
{ "login": "conan1024hao", "id": 50416856, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
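One thing worth trying for the slowdown described above (a sketch, not a guaranteed fix): load the saved dataset into RAM instead of memory-mapping it, if the machine has enough memory for ~29M rows:

```python
from datasets import load_from_disk

tokenized_datasets = load_from_disk(
    "/pathto/dataset",
    keep_in_memory=True,  # copy the Arrow data into RAM instead of memory-mapping it
)
```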
1,244,876,662
4,393
Update CI deprecated legacy image
Our CI still uses a deprecated legacy image: > You’re using a [deprecated Docker convenience image.](https://discuss.circleci.com/t/legacy-convenience-image-deprecation/41034) Upgrade to a next-gen Docker convenience image. This PR updates it to a next-generation convenience image. Related to: - #2955
closed
https://github.com/huggingface/datasets/pull/4393
2022-05-23T09:35:42
2022-05-23T10:08:28
2022-05-23T09:59:55
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
1,244,859,971
4,392
remove int documentation from logging docs
Removes the `int` documentation from the [logging section](https://huggingface.co/docs/datasets/package_reference/logging_methods#levels) of the docs.
closed
https://github.com/huggingface/datasets/pull/4392
2022-05-23T09:24:55
2022-05-23T15:16:55
2022-05-23T15:08:32
{ "login": "lvwerra", "id": 8264887, "type": "User" }
[]
true
[]