| Column | Type | Values / lengths |
|---|---|---|
| url | string | lengths 58–61 |
| repository_url | string | 1 class |
| labels_url | string | lengths 72–75 |
| comments_url | string | lengths 67–70 |
| events_url | string | lengths 65–68 |
| html_url | string | lengths 46–51 |
| id | int64 | 599M–1.83B |
| node_id | string | lengths 18–32 |
| number | int64 | 1–6.09k |
| title | string | lengths 1–290 |
| labels | list | |
| state | string | 2 classes |
| locked | bool | 1 class |
| milestone | dict | |
| comments | int64 | 0–54 |
| created_at | string | length 20 |
| updated_at | string | length 20 |
| closed_at | string | length 20 |
| active_lock_reason | null | |
| body | string | lengths 0–228k |
| reactions | dict | |
| timeline_url | string | lengths 67–70 |
| performed_via_github_app | null | |
| state_reason | string | 3 classes |
| draft | bool | 2 classes |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |
| comments_text | list | |

---

**Issue #3258: Reload dataset that was already downloaded with `load_from_disk` from cloud storage**

- URL: https://github.com/huggingface/datasets/issues/3258 · API: https://api.github.com/repos/huggingface/datasets/issues/3258
- ID: 1,052,188,195 · Node ID: `I_kwDODunzps4-tx4j`
- Labels: enhancement
- State: open · Locked: no · Comments: 0
- Created: 2021-11-12T17:14:59Z · Updated: 2021-11-12T17:14:59Z

Body:

`load_from_disk` downloads the dataset to a temporary directory without checking if the dataset has already been downloaded once. It would be nice to have some sort of caching for datasets downloaded this way. This could leverage the fingerprint of the dataset that was saved in the `state.json` file.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3258/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3258/timeline
null
null
null
null
false
[]
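
The caching requested here is not something `load_from_disk` does on its own; below is a minimal sketch of what fingerprint-keyed caching could look like, assuming a single `Dataset` (not a `DatasetDict`) saved with `save_to_disk`. The helper name, the cache layout, and reading `_fingerprint` out of `state.json` are illustrative assumptions, not `datasets` API.

```python
import json
import os

import fsspec
from datasets import load_from_disk

def cached_load_from_disk(remote_uri: str, cache_root: str = "~/.cache/ds_remote"):
    """Download a remotely saved dataset once, then reuse the local copy."""
    cache_root = os.path.expanduser(cache_root)
    fs, _, (path,) = fsspec.get_fs_token_paths(remote_uri)

    # The fingerprint recorded at save time identifies this dataset version.
    with fs.open(f"{path}/state.json") as f:
        fingerprint = json.load(f)["_fingerprint"]

    local_dir = os.path.join(cache_root, fingerprint)
    if not os.path.isdir(local_dir):      # first call: download once
        fs.get(path, local_dir, recursive=True)
    return load_from_disk(local_dir)      # later calls: reuse the copy
```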

---

**Issue #3655: Pubmed dataset not reachable**

- URL: https://github.com/huggingface/datasets/issues/3655 · API: https://api.github.com/repos/huggingface/datasets/issues/3655
- ID: 1,119,801,077 · Node ID: `I_kwDODunzps5Cvs71`
- Labels: bug
- State: closed · Locked: no · Comments: 6
- Created: 2022-01-31T18:45:47Z · Updated: 2022-12-19T19:18:10Z · Closed: 2022-02-14T14:15:41Z

Body:

## Describe the bug Trying to use the `pubmed` dataset fails to reach / download the source files. ## Steps to reproduce the bug ```python pubmed_train = datasets.load_dataset('pubmed', split='train') ``` ## Expected results Should begin downloading the pubmed dataset. ## Actual results ``` ConnectionError: Couldn't reach ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/pubmed21n0865.xml.gz (InvalidSchema("No connection adapters were found for 'ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/pubmed21n0865.xml.gz'")) ``` ## Environment info - `datasets` version: 1.18.2 - Platform: macOS-11.4-x86_64-i386-64bit - Python version: 3.8.2 - PyArrow version: 6.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3655/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3655/timeline
null
completed
null
null
false
[ "Hi @abhi-mosaic, thanks for reporting.\r\n\r\nI'm looking at it... ", "also hitting this issue", "Hey @albertvillanova, sorry to reopen this... I can confirm that on `master` branch the dataset is downloadable now but it is still broken in streaming mode:\r\n\r\n```python\r\n >>> import datasets\r\n >>> pubmed...

---

**Issue #4665: Unable to create dataset having Python dataset script only**

- URL: https://github.com/huggingface/datasets/issues/4665 · API: https://api.github.com/repos/huggingface/datasets/issues/4665
- ID: 1,299,652,638 · Node ID: `I_kwDODunzps5NdyAe`
- Labels: bug
- State: closed · Locked: no · Comments: 1
- Created: 2022-07-09T11:45:46Z · Updated: 2022-07-11T07:10:09Z · Closed: 2022-07-11T07:10:01Z

Body:

## Describe the bug Hi there, I'm trying to add the following dataset to Huggingface datasets: https://huggingface.co/datasets/Heriot-WattUniversity/dialog-babi/blob/ I'm trying to do so using the CLI commands but seems that this command generates the wrong `dataset_info.json` file (you can find it in the repo already): ``` datasets-cli test Heriot-WattUniversity/dialog-babi/dialog_babi.py --save_infos --all-configs ``` while it errors when I remove the python script: ``` datasets-cli test Heriot-WattUniversity/dialog-babi/ --save_infos --all-configs ``` The error message is the following: ``` FileNotFoundError: Unable to resolve any data file that matches '['**']' at /Users/as2180/workspace/Heriot-WattUniversity/dialog-babi with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip'] ``` ## Environment info - `datasets` version: 2.3.2 - Platform: macOS-12.4-arm64-arm-64bit - Python version: 3.9.9 - PyArrow version: 8.0.0 - Pandas version: 1.4.3
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4665/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4665/timeline
null
completed
null
null
false
[ "Hi @aleSuglia, thanks for reporting.\r\n\r\nWe are having a look at it. \r\n\r\nWe transfer this issue to the Community tab of the corresponding Hub dataset: https://huggingface.co/datasets/Heriot-WattUniversity/dialog-babi/discussions" ]

---

**Issue #4306: `load_dataset` does not work with certain filename.**

- URL: https://github.com/huggingface/datasets/issues/4306 · API: https://api.github.com/repos/huggingface/datasets/issues/4306
- ID: 1,231,137,204 · Node ID: `I_kwDODunzps5JYam0`
- Labels: bug
- State: closed · Locked: no · Comments: 1
- Created: 2022-05-10T13:14:04Z · Updated: 2022-05-10T18:58:36Z · Closed: 2022-05-10T18:58:09Z

Body:

## Describe the bug This is a weird bug that took me some time to find out. I have a JSON dataset that I want to load with `load_dataset` like this: ``` data_files = dict(train="train.json.zip", val="val.json.zip") dataset = load_dataset("json", data_files=data_files, field="data") ``` ## Expected results No error. ## Actual results The val file is loaded as expected, but the train file throws JSON decoding error: ``` ╭──────────────────────────── Traceback (most recent call last) ────────────────────────────╮ │ <ipython-input-74-97947e92c100>:5 in <module> │ │ │ │ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/datasets/load.py:1687 in │ │ load_dataset │ │ │ │ 1684 │ try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES │ │ 1685 │ │ │ 1686 │ # Download and prepare data │ │ ❱ 1687 │ builder_instance.download_and_prepare( │ │ 1688 │ │ download_config=download_config, │ │ 1689 │ │ download_mode=download_mode, │ │ 1690 │ │ ignore_verifications=ignore_verifications, │ │ │ │ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/datasets/builder.py:605 in │ │ download_and_prepare │ │ │ │ 602 │ │ │ │ │ │ except ConnectionError: │ │ 603 │ │ │ │ │ │ │ logger.warning("HF google storage unreachable. Downloa │ │ 604 │ │ │ │ │ if not downloaded_from_gcs: │ │ ❱ 605 │ │ │ │ │ │ self._download_and_prepare( │ │ 606 │ │ │ │ │ │ │ dl_manager=dl_manager, verify_infos=verify_infos, **do │ │ 607 │ │ │ │ │ │ ) │ │ 608 │ │ │ │ │ # Sync info │ │ │ │ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/datasets/builder.py:694 in │ │ _download_and_prepare │ │ │ │ 691 │ │ │ │ │ 692 │ │ │ try: │ │ 693 │ │ │ │ # Prepare split will record examples associated to the split │ │ ❱ 694 │ │ │ │ self._prepare_split(split_generator, **prepare_split_kwargs) │ │ 695 │ │ │ except OSError as e: │ │ 696 │ │ │ │ raise OSError( │ │ 697 │ │ │ │ │ "Cannot find data file. " │ │ │ │ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/datasets/builder.py:1151 in │ │ _prepare_split │ │ │ │ 1148 │ │ │ │ 1149 │ │ generator = self._generate_tables(**split_generator.gen_kwargs) │ │ 1150 │ │ with ArrowWriter(features=self.info.features, path=fpath) as writer: │ │ ❱ 1151 │ │ │ for key, table in logging.tqdm( │ │ 1152 │ │ │ │ generator, unit=" tables", leave=False, disable=True # not loggin │ │ 1153 │ │ │ ): │ │ 1154 │ │ │ │ writer.write_table(table) │ │ │ │ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/tqdm/notebook.py:257 in │ │ __iter__ │ │ │ │ 254 │ │ │ 255 │ def __iter__(self): │ │ 256 │ │ try: │ │ ❱ 257 │ │ │ for obj in super(tqdm_notebook, self).__iter__(): │ │ 258 │ │ │ │ # return super(tqdm...) will not catch exception │ │ 259 │ │ │ │ yield obj │ │ 260 │ │ # NB: except ... [ as ...] 
breaks IPython async KeyboardInterrupt │ │ │ │ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/tqdm/std.py:1183 in │ │ __iter__ │ │ │ │ 1180 │ │ # If the bar is disabled, then just walk the iterable │ │ 1181 │ │ # (note: keep this check outside the loop for performance) │ │ 1182 │ │ if self.disable: │ │ ❱ 1183 │ │ │ for obj in iterable: │ │ 1184 │ │ │ │ yield obj │ │ 1185 │ │ │ return │ │ │ │ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/datasets/packaged_modules/j │ │ son/json.py:90 in _generate_tables │ │ │ │ 87 │ │ │ # If the file is one json object and if we need to look at the list of │ │ 88 │ │ │ if self.config.field is not None: │ │ 89 │ │ │ │ with open(file, encoding="utf-8") as f: │ │ ❱ 90 │ │ │ │ │ dataset = json.load(f) │ │ 91 │ │ │ │ │ │ 92 │ │ │ │ # We keep only the field we are interested in │ │ 93 │ │ │ │ dataset = dataset[self.config.field] │ │ │ │ /home/tiankang/software/anaconda3/lib/python3.8/json/__init__.py:293 in load │ │ │ │ 290 │ To use a custom ``JSONDecoder`` subclass, specify it with the ``cls`` │ │ 291 │ kwarg; otherwise ``JSONDecoder`` is used. │ │ 292 │ """ │ │ ❱ 293 │ return loads(fp.read(), │ │ 294 │ │ cls=cls, object_hook=object_hook, │ │ 295 │ │ parse_float=parse_float, parse_int=parse_int, │ │ 296 │ │ parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw) │ │ │ │ /home/tiankang/software/anaconda3/lib/python3.8/json/__init__.py:357 in loads │ │ │ │ 354 │ if (cls is None and object_hook is None and │ │ 355 │ │ │ parse_int is None and parse_float is None and │ │ 356 │ │ │ parse_constant is None and object_pairs_hook is None and not kw): │ │ ❱ 357 │ │ return _default_decoder.decode(s) │ │ 358 │ if cls is None: │ │ 359 │ │ cls = JSONDecoder │ │ 360 │ if object_hook is not None: │ │ │ │ /home/tiankang/software/anaconda3/lib/python3.8/json/decoder.py:337 in decode │ │ │ │ 334 │ │ containing a JSON document). │ │ 335 │ │ │ │ 336 │ │ """ │ │ ❱ 337 │ │ obj, end = self.raw_decode(s, idx=_w(s, 0).end()) │ │ 338 │ │ end = _w(s, end).end() │ │ 339 │ │ if end != len(s): │ │ 340 │ │ │ raise JSONDecodeError("Extra data", s, end) │ │ │ │ /home/tiankang/software/anaconda3/lib/python3.8/json/decoder.py:353 in raw_decode │ │ │ │ 350 │ │ │ │ 351 │ │ """ │ │ 352 │ │ try: │ │ ❱ 353 │ │ │ obj, end = self.scan_once(s, idx) │ │ 354 │ │ except StopIteration as err: │ │ 355 │ │ │ raise JSONDecodeError("Expecting value", s, err.value) from None │ │ 356 │ │ return obj, end │ ╰───────────────────────────────────────────────────────────────────────────────────────────╯ JSONDecodeError: Unterminated string starting at: line 85 column 20 (char 60051) ``` However, when I rename the `train.json.zip` to other names (like `training.json.zip`, or even to `train.json`), everything works fine; when I unzip the file to `train.json`, it works as well. ## Environment info ``` - `datasets` version: 2.1.0 - Platform: Linux-4.4.0-131-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyArrow version: 7.0.0 - Pandas version: 1.4.2 ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4306/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4306/timeline
null
completed
null
null
false
[ "Never mind. It is because of the caching of datasets..." ]

---

**Issue #705: TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit'**

- URL: https://github.com/huggingface/datasets/issues/705 · API: https://api.github.com/repos/huggingface/datasets/issues/705
- ID: 713,709,100 · Node ID: `MDU6SXNzdWU3MTM3MDkxMDA=`
- Labels: none
- State: closed · Locked: no · Comments: 2
- Created: 2020-10-02T15:27:55Z · Updated: 2020-10-05T08:14:59Z · Closed: 2020-10-05T08:14:59Z

Body:

## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.3.1 (installed from master) - `datasets` version: 1.0.2 (installed as a dependency from transformers) - Platform: Linux-4.15.0-118-generic-x86_64-with-debian-stretch-sid - Python version: 3.7.9 I'm testing my own text classification dataset using [this example](https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-generic-text-classification-script-in-tensorflow) from transformers. The dataset is split into train / dev / test, and in csv format, containing just a text and a label columns, using comma as sep. Here's a sample: ``` text,label "Registra-se a presença do acadêmico <name> . <REL_SEP> Ao me deparar com a descrição de dois autores no polo ativo da ação junto ao PJe , margem esquerda foi informado pela procuradora do reclamante que se trata de uma reclamação trabalhista individual . <REL_SEP> Diante disso , face a ausência injustificada do autor <name> , determina-se o ARQUIVAMENTO do presente processo , com relação a este , nos termos do [[ art . 844 da CLT ]] . <REL_SEP> CUSTAS AUTOR - DISPENSADO <REL_SEP> Custas pelo autor no importe de R $326,82 , calculadas sobre R $16.341,03 , dispensadas na forma da lei , em virtude da concessão dos benefícios da Justiça Gratuita , ora deferida . <REL_SEP> Cientes os presentes . <REL_SEP> Audiência encerrada às 8h42min . <REL_SEP> <name> <REL_SEP> Juíza do Trabalho <REL_SEP> Ata redigida por << <name> >> , Secretário de Audiência .",NO_RELATION ``` However, @Santosh-Gupta reported in #7351 that he had the exact same problem using the ChemProt dataset. His colab notebook is referenced in the following section. ## To reproduce Steps to reproduce the behavior: 1. Created a new conda environment using conda env -n transformers python=3.7 2. Cloned transformers master, `cd` into it and installed using pip install --editable . -r examples/requirements.txt 3. Installed tensorflow with `pip install tensorflow` 3. Ran `run_tf_text_classification.py` with the following parameters: ``` --train_file <DATASET_PATH>/train.csv \ --dev_file <DATASET_PATH>/dev.csv \ --test_file <DATASET_PATH>/test.csv \ --label_column_id 1 \ --model_name_or_path neuralmind/bert-base-portuguese-cased \ --output_dir <OUTPUT_PATH> \ --num_train_epochs 4 \ --per_device_train_batch_size 4 \ --per_device_eval_batch_size 4 \ --do_train \ --do_eval \ --do_predict \ --logging_steps 1000 \ --evaluate_during_training \ --save_steps 1000 \ --overwrite_output_dir \ --overwrite_cache ``` I have also copied [@Santosh-Gupta 's colab notebook](https://colab.research.google.com/drive/11APei6GjphCZbH5wD9yVlfGvpIkh8pwr?usp=sharing) as a reference. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> Here is the stack trace: ``` 2020-10-02 07:33:41.622011: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 /media/discoD/repositorios/transformers_pedro/src/transformers/training_args.py:333: FutureWarning: The `evaluate_during_training` argument is deprecated in favor of `evaluation_strategy` (which has more options) FutureWarning, 2020-10-02 07:33:43.471648: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1 2020-10-02 07:33:43.471791: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.472664: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce GTX 1070 computeCapability: 6.1 coreClock: 1.7085GHz coreCount: 15 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 238.66GiB/s 2020-10-02 07:33:43.472684: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 2020-10-02 07:33:43.472765: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10 2020-10-02 07:33:43.472809: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10 2020-10-02 07:33:43.472848: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10 2020-10-02 07:33:43.474209: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10 2020-10-02 07:33:43.474276: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10 2020-10-02 07:33:43.561219: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7 2020-10-02 07:33:43.561397: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.562345: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.563219: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0 2020-10-02 07:33:43.563595: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2020-10-02 07:33:43.570091: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 3591830000 Hz 2020-10-02 07:33:43.570494: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x560842432400 initialized for platform Host (this does not guarantee that XLA will be used). 
Devices: 2020-10-02 07:33:43.570511: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version 2020-10-02 07:33:43.570702: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.571599: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce GTX 1070 computeCapability: 6.1 coreClock: 1.7085GHz coreCount: 15 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 238.66GiB/s 2020-10-02 07:33:43.571633: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 2020-10-02 07:33:43.571645: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10 2020-10-02 07:33:43.571654: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10 2020-10-02 07:33:43.571664: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10 2020-10-02 07:33:43.571691: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10 2020-10-02 07:33:43.571704: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10 2020-10-02 07:33:43.571718: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7 2020-10-02 07:33:43.571770: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.572641: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.573475: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0 2020-10-02 07:33:47.139227: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix: 2020-10-02 07:33:47.139265: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263] 0 2020-10-02 07:33:47.139272: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0: N 2020-10-02 07:33:47.140323: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:47.141248: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:47.142085: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:47.142854: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5371 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1) 2020-10-02 07:33:47.146317: I tensorflow/compiler/xla/service/service.cc:168] XLA 
service 0x5608b95dc5c0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices: 2020-10-02 07:33:47.146336: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): GeForce GTX 1070, Compute Capability 6.1 10/02/2020 07:33:47 - INFO - __main__ - n_replicas: 1, distributed training: False, 16-bits training: False 10/02/2020 07:33:47 - INFO - __main__ - Training/evaluation parameters TFTrainingArguments(output_dir='/media/discoD/models/datalawyer/pedidos/transformers_tf', overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=True, evaluate_during_training=True, evaluation_strategy=<EvaluationStrategy.STEPS: 'steps'>, prediction_loss_only=False, per_device_train_batch_size=4, per_device_eval_batch_size=4, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=4.0, max_steps=-1, warmup_steps=0, logging_dir='runs/Oct02_07-33-43_user-XPS-8700', logging_first_step=False, logging_steps=1000, save_steps=1000, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=1000, dataloader_num_workers=0, past_index=-1, run_name='/media/discoD/models/datalawyer/pedidos/transformers_tf', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=False, tpu_name=None, xla=False) 10/02/2020 07:33:53 - INFO - filelock - Lock 140407857405776 acquired on /home/user/.cache/huggingface/datasets/e0f1e9ed46db1e2429189f06b479cbd4075c0976104c1aacf8f77d9a53d2ad87.03756fef6da334f50a7ff73608e21b5018229944ca250416ce7352e25d84a552.py.lock 10/02/2020 07:33:53 - INFO - filelock - Lock 140407857405776 released on /home/user/.cache/huggingface/datasets/e0f1e9ed46db1e2429189f06b479cbd4075c0976104c1aacf8f77d9a53d2ad87.03756fef6da334f50a7ff73608e21b5018229944ca250416ce7352e25d84a552.py.lock Using custom data configuration default Traceback (most recent call last): File "run_tf_text_classification.py", line 283, in <module> main() File "run_tf_text_classification.py", line 222, in main max_seq_length=data_args.max_seq_length, File "run_tf_text_classification.py", line 43, in get_tfds ds = datasets.load_dataset("csv", data_files=files) File "/media/discoD/anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/load.py", line 604, in load_dataset **config_kwargs, File "/media/discoD/anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/builder.py", line 158, in __init__ **config_kwargs, File "/media/discoD/anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/builder.py", line 269, in _create_builder_config for key in sorted(data_files.keys()): TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit' ``` ## Expected behavior Should be able to run the text-classification example as described in [https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-generic-text-classification-script-in-tensorflow](https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-generic-text-classification-script-in-tensorflow) Originally opened this issue at transformers' repository: [https://github.com/huggingface/transformers/issues/7535](https://github.com/huggingface/transformers/issues/7535). 
@jplu instructed me to open here, since according to [this](https://github.com/huggingface/transformers/issues/7535#issuecomment-702778885) evidence, the problem is from datasets. Thanks!
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/705/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/705/timeline
null
completed
null
null
false
[ "Hi !\r\nThanks for reporting :) \r\nIndeed this is an issue on the `datasets` side.\r\nI'm creating a PR", "Thanks @lhoestq !" ]

---

**Pull request #1221: Add HKCanCor**

- URL: https://github.com/huggingface/datasets/pull/1221 · API: https://api.github.com/repos/huggingface/datasets/issues/1221
- ID: 758,016,032 · Node ID: `MDExOlB1bGxSZXF1ZXN0NTMzMjYxNjkw`
- Labels: none
- State: closed · Locked: no · Comments: 0
- Created: 2020-12-06T20:32:07Z · Updated: 2020-12-09T16:34:18Z · Closed: 2020-12-09T16:34:18Z

Body:

This PR adds the [Hong Kong Cantonese Corpus](http://compling.hss.ntu.edu.sg/hkcancor/), by [Luke and Wong 2015](http://compling.hss.ntu.edu.sg/hkcancor/data/LukeWong_Hong-Kong-Cantonese-Corpus.pdf). The dummy data included here was manually created, as the original dataset uses a xml-like format (see a copy hosted [here](https://github.com/fcbond/hkcancor/blob/master/sample/d1_v.txt) for example) that requires a few processing steps.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1221/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1221/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1221.diff", "html_url": "https://github.com/huggingface/datasets/pull/1221", "merged_at": "2020-12-09T16:34:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/1221.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1221" }
true
[]

---

**Pull request #1138: updated after the class name update**

- URL: https://github.com/huggingface/datasets/pull/1138 · API: https://api.github.com/repos/huggingface/datasets/issues/1138
- ID: 757,378,406 · Node ID: `MDExOlB1bGxSZXF1ZXN0NTMyNzY1NTI2`
- Labels: none
- State: closed · Locked: no · Comments: 0
- Created: 2020-12-04T20:19:43Z · Updated: 2020-12-05T15:43:32Z · Closed: 2020-12-05T15:43:32Z

Body:

@lhoestq <---
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1138/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1138/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1138.diff", "html_url": "https://github.com/huggingface/datasets/pull/1138", "merged_at": "2020-12-05T15:43:32Z", "patch_url": "https://github.com/huggingface/datasets/pull/1138.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1138" }
true
[]

---

**Issue #4596: Dataset Viewer issue for universal_dependencies**

- URL: https://github.com/huggingface/datasets/issues/4596 · API: https://api.github.com/repos/huggingface/datasets/issues/4596
- ID: 1,288,381,735 · Node ID: `I_kwDODunzps5MyyUn`
- Labels: dataset-viewer
- State: closed · Locked: no · Comments: 2
- Created: 2022-06-29T08:50:29Z · Updated: 2022-09-07T11:29:28Z · Closed: 2022-09-07T11:29:27Z

Body:

### Link https://huggingface.co/datasets/universal_dependencies ### Description invalid json response body at https://datasets-server.huggingface.co/splits?dataset=universal_dependencies reason: Unexpected token I in JSON at position 0 ### Owner _No response_
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/4596/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4596/timeline
null
completed
null
null
false
[ "Thanks, looking at it!", "Finally fixed! We updated the dataset viewer and it fixed the issue.\r\n\r\nhttps://huggingface.co/datasets/universal_dependencies/viewer/aqz_tudet/train\r\n\r\n<img width=\"1561\" alt=\"Capture d’écran 2022-09-07 à 13 29 18\" src=\"https://user-images.githubusercontent.com/1676121/18...

---

**Pull request #785: feat(aslg_pc12): add dev and test data splits**

- URL: https://github.com/huggingface/datasets/pull/785 · API: https://api.github.com/repos/huggingface/datasets/issues/785
- ID: 733,719,419 · Node ID: `MDExOlB1bGxSZXF1ZXN0NTEzNDMyNTM1`
- Labels: none
- State: closed · Locked: no · Comments: 2
- Created: 2020-10-31T13:25:38Z · Updated: 2020-11-10T15:29:30Z · Closed: 2020-11-10T15:29:30Z

Body:

For reproducibility's sake, it's best if there are defined dev and test splits. The original paper's author did not define splits, neither for the entire dataset nor for the sample loaded via this library, so I decided to define: 5/7 for train, 1/7 for dev, and 1/7 for test.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/785/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/785/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/785.diff", "html_url": "https://github.com/huggingface/datasets/pull/785", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/785.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/785" }
true
[ "Hi ! I'm not sure we should make this split decision arbitrarily on our side. Users can split it afterwards to whatever they want using `dataset.train_test_split` for example.\r\nMoreover it looks like there's already papers that use this dataset and propose their own splits ([here](http://xanthippi.ceid.upatras.g...

---

**Issue #3212: Sort files before loading**

- URL: https://github.com/huggingface/datasets/issues/3212 · API: https://api.github.com/repos/huggingface/datasets/issues/3212
- ID: 1,044,640,967 · Node ID: `I_kwDODunzps4-Q_TH`
- Labels: enhancement
- State: closed · Locked: no · Comments: 1
- Created: 2021-11-04T11:08:31Z · Updated: 2021-11-05T17:49:58Z · Closed: 2021-11-05T17:49:58Z

Body:

When loading a dataset that consists of several files (e.g. `my_data/data_001.json`, `my_data/data_002.json` etc.) they are not loaded in order when using `load_dataset("my_data")`. This could lead to counter-intuitive results if, for example, the data files are sorted by date or similar since they would appear in different order in the `Dataset`. The straightforward solution is to sort the list of files alphabetically before loading them. cc @lhoestq
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3212/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3212/timeline
null
completed
null
null
false
[ "This will be fixed by https://github.com/huggingface/datasets/pull/3221" ]

---

**Issue #161: Discussion on version identifier & MockDataLoaderManager for test data**

- URL: https://github.com/huggingface/datasets/issues/161 · API: https://api.github.com/repos/huggingface/datasets/issues/161
- ID: 620,487,535 · Node ID: `MDU6SXNzdWU2MjA0ODc1MzU=`
- Labels: generic discussion
- State: open · Locked: no · Comments: 12
- Created: 2020-05-18T20:31:30Z · Updated: 2020-05-24T18:10:03Z

Body:

Hi, I'm working on adding a dataset and ran into an error due to `download` not being defined on `MockDataLoaderManager`, but being defined in `nlp/utils/download_manager.py`. The readme step running this: `RUN_SLOW=1 pytest tests/test_dataset_common.py::DatasetTest::test_load_real_dataset_localmydatasetname` triggers the error. If I can get something to work, I can include it in my data PR once I'm done.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/161/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/161/timeline
null
null
null
null
false
[ "usually you can replace `download` in your dataset script with `download_and_prepare()` - could you share the code for your dataset here? :-) ", "I have an initial version here: https://github.com/EntilZha/nlp/tree/master/datasets/qanta Thats pretty close to what I'll do as a PR, but still want to do some more s...

---

**Issue #3847: Datasets' cache not re-used**

- URL: https://github.com/huggingface/datasets/issues/3847 · API: https://api.github.com/repos/huggingface/datasets/issues/3847
- ID: 1,161,856,417 · Node ID: `I_kwDODunzps5FQIWh`
- Labels: bug
- State: open · Locked: no · Comments: 13
- Created: 2022-03-07T19:55:15Z · Updated: 2023-02-02T23:35:45Z

Body:

## Describe the bug For most tokenizers I have tested (e.g. the RoBERTa tokenizer), the data preprocessing cache are not fully reused in the first few runs, although their `.arrow` cache files are in the cache directory. ## Steps to reproduce the bug Here is a reproducer. The GPT2 tokenizer works perfectly with caching, but not the RoBERTa tokenizer in this example. ```python from datasets import load_dataset from transformers import AutoTokenizer raw_datasets = load_dataset("wikitext", "wikitext-2-raw-v1") # tokenizer = AutoTokenizer.from_pretrained("gpt2") tokenizer = AutoTokenizer.from_pretrained("roberta-base") text_column_name = "text" column_names = raw_datasets["train"].column_names def tokenize_function(examples): return tokenizer(examples[text_column_name], return_special_tokens_mask=True) tokenized_datasets = raw_datasets.map( tokenize_function, batched=True, remove_columns=column_names, load_from_cache_file=True, desc="Running tokenizer on every text in dataset", ) ``` ## Expected results No tokenization would be required after the 1st run. Everything should be loaded from the cache. ## Actual results Tokenization for some subsets are repeated at the 2nd and 3rd run. Starting from the 4th run, everything are loaded from cache. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: Ubuntu 18.04.6 LTS - Python version: 3.6.9 - PyArrow version: 6.0.1
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3847/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3847/timeline
null
null
null
null
false
[ "<s>I think this is because the tokenizer is stateful and because the order in which the splits are processed is not deterministic. Because of that, the hash of the tokenizer may change for certain splits, which causes issues with caching.\r\n\r\nTo fix this we can try making the order of the splits deterministic f...

---

**Pull request #1800: Add DuoRC Dataset**

- URL: https://github.com/huggingface/datasets/pull/1800 · API: https://api.github.com/repos/huggingface/datasets/issues/1800
- ID: 797,798,689 · Node ID: `MDExOlB1bGxSZXF1ZXN0NTY0NzE5MjA3`
- Labels: none
- State: closed · Locked: no · Comments: 1
- Created: 2021-01-31T20:01:59Z · Updated: 2021-02-03T05:01:45Z · Closed: 2021-02-02T22:49:26Z

Body:

Hi, DuoRC SelfRC is one type of the [DuoRC Dataset](https://duorc.github.io/). DuoRC SelfRC is a crowdsourced Abstractive/Extractive Question-Answering dataset based on Wikipedia movie plots. It contains examples that may have answers in the movie plot, synthesized answers which are not present in the movie plot, or no answers. I have also added ParaphraseRC - the other type of DuoRC dataset where questions are based on Wikipedia movie plots and answers are based on corresponding IMDb movie plots. Paper : [https://arxiv.org/abs/1804.07927](https://arxiv.org/abs/1804.07927) I want to add this to 🤗 datasets to make it more accessible to the community. I have added all the details that I could find. Please let me know if anything else is needed from my end. Thanks, Gunjan
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1800/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1800/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1800.diff", "html_url": "https://github.com/huggingface/datasets/pull/1800", "merged_at": "2021-02-02T22:49:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/1800.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1800" }
true
[ "Thanks for approving @lhoestq!\r\nWill apply these changes for the other datasets I've added too." ]

---

**Issue #874: trec dataset unavailable**

- URL: https://github.com/huggingface/datasets/issues/874 · API: https://api.github.com/repos/huggingface/datasets/issues/874
- ID: 748,193,140 · Node ID: `MDU6SXNzdWU3NDgxOTMxNDA=`
- Labels: none
- State: closed · Locked: no · Comments: 2
- Created: 2020-11-22T08:09:36Z · Updated: 2020-11-27T13:56:42Z · Closed: 2020-11-27T13:56:42Z

Body:

Hi when I try to load the trec dataset I am getting these errors, thanks for your help `datasets.load_dataset("trec", split="train") ` ``` File "<stdin>", line 1, in <module> File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File " /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/trec/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7/trec.py", line 140, in _split_generators dl_files = dl_manager.download_and_extract(_URLs) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 477, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/874/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/874/timeline
null
completed
null
null
false
[ "This was fixed in #740 \r\nCould you try to update `datasets` and try again ?", "This has been fixed in datasets 1.1.3" ]

---

**Issue #3744: Better shards shuffling in streaming mode**

- URL: https://github.com/huggingface/datasets/issues/3744 · API: https://api.github.com/repos/huggingface/datasets/issues/3744
- ID: 1,141,461,165 · Node ID: `I_kwDODunzps5ECVCt`
- Labels: enhancement, …
- State: closed · Locked: no · Comments: 0
- Created: 2022-02-17T15:07:21Z · Updated: 2022-02-23T15:00:58Z · Closed: 2022-02-23T15:00:58Z

Body:

Sometimes a dataset script has a `_split_generators` that returns several files as well as the corresponding metadata of each file. It often happens that they end up in two separate lists in the `gen_kwargs`: ```python gen_kwargs = { "files": [os.path.join(data_dir, filename) for filename in all_files], "metadata_files": [all_metadata[filename] for filename in all_files], } ``` It happened for Multilingual Spoken Words for example in #3666 However currently **the two lists are shuffled independently** when shuffling the shards in streaming mode. This leads to `_generate_examples` not having the right metadata for each file. To prevent this issue I suggest that we always shuffle lists of the same length the exact same way to avoid such a big but silent issue. cc @polinaeterna
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3744/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3744/timeline
null
completed
null
null
false
[]
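
A script-side pattern that avoids the problem entirely, whatever the library's shuffling does: carry each file and its metadata as a single element of one list. A sketch (the `files_with_metadata` key name and the placeholder inputs are illustrative):

```python
import os

# Placeholder inputs standing in for the variables in the snippet above.
data_dir = "data"
all_files = ["shard-000.txt", "shard-001.txt"]
all_metadata = {f: {"num_examples": 100} for f in all_files}

# Pair each file with its metadata in one list, so shard shuffling in
# streaming mode can never decouple the two.
gen_kwargs = {
    "files_with_metadata": [
        (os.path.join(data_dir, filename), all_metadata[filename])
        for filename in all_files
    ],
}
```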

---

**Issue #5093: Mismatch between tutorial and doc**

- URL: https://github.com/huggingface/datasets/issues/5093 · API: https://api.github.com/repos/huggingface/datasets/issues/5093
- ID: 1,402,939,660 · Node ID: `I_kwDODunzps5TnykM`
- Labels: bug, …
- State: closed · Locked: no · Comments: 3
- Created: 2022-10-10T10:23:53Z · Updated: 2022-10-10T17:51:15Z · Closed: 2022-10-10T17:51:14Z

Body:

## Describe the bug In the "Process text data" tutorial, [`map` has `return_tensors` as kwarg](https://huggingface.co/docs/datasets/main/en/nlp_process#map). It does not seem to appear in the [function documentation](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.map), nor to work. ## Steps to reproduce the bug MWE: ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") from datasets import load_dataset dataset = load_dataset("lhoestq/demo1", split="train") dataset = dataset.map(lambda examples: tokenizer(examples["review"]), batched=True, return_tensors="pt") ``` ## Expected results return_tensors to be a valid kwarg :smiley: ## Actual results ```python >> TypeError: map() got an unexpected keyword argument 'return_tensors' ``` ## Environment info - `datasets` version: 2.3.2 - Platform: Linux-5.14.0-1052-oem-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 8.0.0 - Pandas version: 1.4.3
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5093/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5093/timeline
null
completed
null
null
false
[ "Hi, thanks for reporting! This line should be replaced with \r\n```python\r\ndataset = dataset.map(lambda examples: tokenizer(examples[\"text\"], return_tensors=\"np\"), batched=True)\r\n```\r\nfor it to work (the `return_tensors` part inside the `tokenizer` call).", "Can I work on this?", "Fixed in https://gi...

---

**Pull request #3037: SberQuad**

- URL: https://github.com/huggingface/datasets/pull/3037 · API: https://api.github.com/repos/huggingface/datasets/issues/3037
- ID: 1,018,091,919 · Node ID: `PR_kwDODunzps4syi15`
- Labels: none
- State: closed · Locked: no · Comments: 0
- Created: 2021-10-06T11:21:08Z · Updated: 2021-10-06T11:33:08Z · Closed: 2021-10-06T11:33:08Z
- Body: none

Reactions: 0 · Draft: no · Pull request: closed without merging · Comments: none

---

**Pull request #6074: Misc doc improvements**

- URL: https://github.com/huggingface/datasets/pull/6074 · API: https://api.github.com/repos/huggingface/datasets/issues/6074
- ID: 1,822,299,128 · Node ID: `PR_kwDODunzps5Wb8O_`
- Labels: none
- State: closed · Locked: no · Comments: 3
- Created: 2023-07-26T12:20:54Z · Updated: 2023-07-27T16:16:28Z · Closed: 2023-07-27T16:16:02Z

Body:

Removes the warning about requiring to write a dataset loading script to define multiple configurations, as the README YAML can be used instead (for simple cases). Also, deletes the section about using the `BatchSampler` in `torch<=1.12.1` to speed up loading, as `torch 1.12.1` is over a year old (and `torch 2.0` has been out for a while).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6074/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6074/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/6074.diff", "html_url": "https://github.com/huggingface/datasets/pull/6074", "merged_at": "2023-07-27T16:16:02Z", "patch_url": "https://github.com/huggingface/datasets/pull/6074.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6074" }
true
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...

---

**Issue #183: [Bug] labels of glue/ax are all -1**

- URL: https://github.com/huggingface/datasets/issues/183 · API: https://api.github.com/repos/huggingface/datasets/issues/183
- ID: 623,054,270 · Node ID: `MDU6SXNzdWU2MjMwNTQyNzA=`
- Labels: none
- State: closed · Locked: no · Comments: 2
- Created: 2020-05-22T08:43:36Z · Updated: 2020-05-22T22:14:05Z · Closed: 2020-05-22T22:14:05Z

Body:

``` ax = nlp.load_dataset('glue', 'ax') for i in range(30): print(ax['test'][i]['label'], end=', ') ``` ``` -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/183/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/183/timeline
null
completed
null
null
false
[ "This is the test set given by the Glue benchmark. The labels are not provided, and therefore set to -1.", "Ah, yeah. Why it didn’t occur to me. 😂\nThank you for your comment." ]

---

**Issue #4348: `inspect` functions can't fetch dataset script from the Hub**

- URL: https://github.com/huggingface/datasets/issues/4348 · API: https://api.github.com/repos/huggingface/datasets/issues/4348
- ID: 1,235,432,976 · Node ID: `I_kwDODunzps5JozYQ`
- Labels: bug
- State: closed · Locked: no · Comments: 2
- Created: 2022-05-13T16:08:26Z · Updated: 2022-06-09T10:26:06Z · Closed: 2022-06-09T10:26:06Z

Body:

The `inspect_dataset` and `inspect_metric` functions are unable to retrieve a dataset or metric script from the Hub and store it locally at the specified `local_path`: ```py >>> from datasets import inspect_dataset >>> inspect_dataset('rotten_tomatoes', local_path='path/to/my/local/folder') FileNotFoundError: Couldn't find a dataset script at /content/rotten_tomatoes/rotten_tomatoes.py or any data file in the same directory. ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4348/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4348/timeline
null
completed
null
null
false
[ "Hi, thanks for reporting! `git bisect` points to #2986 as the PR that introduced the bug. Since then, there have been some additional changes to the loading logic, and in the current state, `force_local_path` (set via `local_path`) forbids pulling a script from the internet instead of downloading it: https://githu...
https://api.github.com/repos/huggingface/datasets/issues/2651
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2651/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2651/comments
https://api.github.com/repos/huggingface/datasets/issues/2651/events
https://github.com/huggingface/datasets/issues/2651
944,796,961
MDU6SXNzdWU5NDQ3OTY5NjE=
2,651
Setting log level higher than warning does not suppress progress bar
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
7
2021-07-14T21:06:51Z
2022-07-08T14:51:57Z
2021-07-15T03:41:35Z
null
## Describe the bug I would like to disable progress bars for `.map` method (and other methods like `.filter` and `load_dataset` as well). According to #1627 one can suppress it by setting log level higher than `warning`, however doing so doesn't suppress it with version 1.9.0. I also tried to set `DATASETS_VERBOSITY` environment variable to `error` or `critical` but it also didn't work. ## Steps to reproduce the bug ```python import datasets from datasets.utils.logging import set_verbosity_error set_verbosity_error() def dummy_map(batch): return batch common_voice_train = datasets.load_dataset("common_voice", "de", split="train") common_voice_test = datasets.load_dataset("common_voice", "de", split="test") common_voice_train.map(dummy_map) ``` ## Expected results - The progress bar for `.map` call won't be shown ## Actual results - The progress bar for `.map` is still shown ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.9.0 - Platform: Linux-5.4.0-1045-aws-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.5 - PyArrow version: 4.0.1
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2651/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2651/timeline
null
completed
null
null
false
[ "Hi,\r\n\r\nyou can suppress progress bars by patching logging as follows:\r\n```python\r\nimport datasets\r\nimport logging\r\ndatasets.logging.get_verbosity = lambda: logging.NOTSET\r\n# map call ...\r\n```\r\nEDIT: now you have to use `disable_progress_bar `", "Thank you, it worked :)", "See https://github.c...
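On releases newer than the one in this report, the bars have a dedicated switch separate from log verbosity. A minimal sketch, assuming a `datasets` 2.x release where `disable_progress_bar` is exported at the top level:

```python
import datasets

datasets.logging.set_verbosity_error()  # silences log messages only
datasets.disable_progress_bar()         # silences the tqdm bars themselves

def dummy_map(batch):
    return batch

ds = datasets.Dataset.from_dict({"text": ["a", "b", "c"]})
ds = ds.map(dummy_map)  # runs without rendering a progress bar
```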
https://api.github.com/repos/huggingface/datasets/issues/5884
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5884/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5884/comments
https://api.github.com/repos/huggingface/datasets/issues/5884/events
https://github.com/huggingface/datasets/issues/5884
1,719,548,172
I_kwDODunzps5mfjkM
5,884
`Dataset.to_tf_dataset` fails when strings cannot be encoded as `np.bytes_`
[]
closed
false
null
2
2023-05-22T12:03:06Z
2023-06-09T16:04:56Z
2023-06-09T16:04:55Z
null
### Describe the bug When loading any dataset that contains a column with strings that are not ASCII-compatible, looping over those records raises the following exception, e.g. for the `é` character: `UnicodeEncodeError: 'ascii' codec can't encode character '\xe9' in position 0: ordinal not in range(128)`. ### Steps to reproduce the bug Running the following script will eventually fail, when reaching the batch that contains non-ASCII-compatible strings. ```python from datasets import load_dataset ds = load_dataset("imdb", split="train") tfds = ds.to_tf_dataset(batch_size=16) for batch in tfds: print(batch) >>> UnicodeEncodeError: 'ascii' codec can't encode character '\xe9' in position 0: ordinal not in range(128) ``` ### Expected behavior The script above should run properly, making sure that the strings are either `numpy.unicode_` or `numpy.string_` instead of `numpy.bytes_`, since some characters are not ASCII-compatible and that would lead to an issue when applying `map`. ```python from datasets import load_dataset ds = load_dataset("imdb", split="train") tfds = ds.to_tf_dataset(batch_size=16) for batch in tfds: print(batch) ``` ### Environment info - `datasets` version: 2.12.1.dev0 - Platform: macOS-13.3.1-arm64-arm-64bit - Python version: 3.10.11 - Huggingface_hub version: 0.14.1 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5884/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5884/timeline
null
completed
null
null
false
[ "May eventually be solved in #5883 ", "#self-assign" ]
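The underlying fix landed in the linked PR; for affected versions, one hedged workaround is to hand TensorFlow UTF-8 bytes yourself, so no implicit ASCII `np.bytes_` conversion is ever attempted. A sketch under that assumption (requires `tensorflow`; `text` is just an example column name, and this is untested against every affected release):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["café", "naïve"], "label": [0, 1]})

# Pre-encode the strings: the column becomes binary, so to_tf_dataset never
# needs to coerce unicode into np.bytes_ with the ASCII codec.
ds = ds.map(lambda ex: {"text": ex["text"].encode("utf-8")})

tfds = ds.to_tf_dataset(batch_size=2)
for batch in tfds:
    print([t.decode("utf-8") for t in batch["text"].numpy()])
```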
https://api.github.com/repos/huggingface/datasets/issues/5631
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5631/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5631/comments
https://api.github.com/repos/huggingface/datasets/issues/5631/events
https://github.com/huggingface/datasets/issues/5631
1,620,442,854
I_kwDODunzps5glf7m
5,631
Custom split names
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
1
2023-03-12T17:21:43Z
2023-03-24T14:13:00Z
2023-03-24T14:13:00Z
null
### Feature request Hi, I participated in multiple NLP tasks where there are more than just train, test, and validation splits; there could be multiple validation sets or test sets. But it seems that currently only those three splits are supported. It would be nice to have support for more splits on the Hub. (Currently I can have more splits when loading datasets from URLs, but not from the Hub.) ### Motivation Easier access to more splits ### Your contribution No
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5631/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5631/timeline
null
completed
null
null
false
[ "Hi!\r\n\r\nYou can also use names other than \"train\", \"validation\" and \"test\". As an example, check the [script](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/blob/e095840f23f3dffc1056c078c2f9320dad9ca74d/common_voice_11_0.py#L139) of the Common Voice 11 dataset. " ]
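To make the reply concrete without reproducing the Common Voice script, here is a minimal, hypothetical loading-script fragment with non-canonical split names; the class name, file names, and the `validated`/`other` splits are all illustrative:

```python
import datasets

class MyDataset(datasets.GeneratorBasedBuilder):
    VERSION = datasets.Version("1.0.0")

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        # Splits are just named generators; "validated" and "other" are as
        # legal as the canonical train/validation/test names.
        return [
            datasets.SplitGenerator(name="train", gen_kwargs={"path": "train.txt"}),
            datasets.SplitGenerator(name="validated", gen_kwargs={"path": "validated.txt"}),
            datasets.SplitGenerator(name="other", gen_kwargs={"path": "other.txt"}),
        ]

    def _generate_examples(self, path):
        with open(path, encoding="utf-8") as f:
            for idx, line in enumerate(f):
                yield idx, {"text": line.strip()}
```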
https://api.github.com/repos/huggingface/datasets/issues/2999
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2999/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2999/comments
https://api.github.com/repos/huggingface/datasets/issues/2999/events
https://github.com/huggingface/datasets/pull/2999
1,013,536,933
PR_kwDODunzps4skgCm
2,999
Set trivia_qa writer batch size
[]
closed
false
null
0
2021-10-01T16:23:26Z
2021-10-01T16:34:55Z
2021-10-01T16:34:55Z
null
Save some RAM when generating trivia_qa
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2999/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2999/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2999.diff", "html_url": "https://github.com/huggingface/datasets/pull/2999", "merged_at": "2021-10-01T16:34:55Z", "patch_url": "https://github.com/huggingface/datasets/pull/2999.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2999" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/3399
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3399/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3399/comments
https://api.github.com/repos/huggingface/datasets/issues/3399/events
https://github.com/huggingface/datasets/issues/3399
1,073,593,861
I_kwDODunzps4__b4F
3,399
Add Wikisource dataset
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
open
false
null
1
2021-12-07T17:21:31Z
2021-12-10T17:26:26Z
null
null
## Adding a Dataset - **Name:** *wikisource* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** Additional high-quality textual data, besides Wikipedia. Add the loading script as a "canonical" dataset (as is the case for "wikipedia"). Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). CC: @geohci, @yjernite
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3399/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3399/timeline
null
null
null
null
false
[ "See notebook by @geohci: https://public.paws.wmcloud.org/User:Isaac_(WMF)/HuggingFace%20Wikisource%20Processing.ipynb" ]
https://api.github.com/repos/huggingface/datasets/issues/4632
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4632/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4632/comments
https://api.github.com/repos/huggingface/datasets/issues/4632/events
https://github.com/huggingface/datasets/issues/4632
1,294,166,880
I_kwDODunzps5NI2tg
4,632
'sort' method sorts one column only
[]
closed
false
null
3
2022-07-05T11:25:26Z
2023-07-25T15:04:27Z
2023-07-25T15:04:27Z
null
The 'sort' method changes the order of one column only (the one defined by the argument 'column'), thus creating a mismatch between a sample's fields. I would expect it to change the order of the samples as a whole, based on the 'column' order.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4632/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4632/timeline
null
completed
null
null
false
[ "Hi ! `ds.sort()` does sort the full dataset, not just one column:\r\n```python\r\nfrom datasets import *\r\n\r\nds = Dataset.from_dict({\"foo\": [3, 2, 1], \"bar\": [\"c\", \"b\", \"a\"]})\r\nprint(d.sort(\"foo\").to_pandas()\r\n# foo bar\r\n# 0 1 a\r\n# 1 2 b\r\n# 2 3 c\r\n```\r\n\r\nWhat made y...
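For convenience, a corrected, runnable version of the snippet in the reply above (the original has a stray `d` where `ds` was meant and a missing closing parenthesis):

```python
from datasets import Dataset

ds = Dataset.from_dict({"foo": [3, 2, 1], "bar": ["c", "b", "a"]})
print(ds.sort("foo").to_pandas())
#    foo bar
# 0    1   a
# 1    2   b
# 2    3   c
```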
https://api.github.com/repos/huggingface/datasets/issues/302
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/302/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/302/comments
https://api.github.com/repos/huggingface/datasets/issues/302/events
https://github.com/huggingface/datasets/issues/302
643,910,418
MDU6SXNzdWU2NDM5MTA0MTg=
302
Question - Sign Language Datasets
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "c5def5", "default": fals...
closed
false
null
3
2020-06-23T14:53:40Z
2020-11-25T11:25:33Z
2020-11-25T11:25:33Z
null
An emerging field in NLP is SLP - sign language processing. I was wondering about adding datasets here, specifically because it's shaping up to be large and easily usable. The metrics for sign language to text translation are the same. So, what do you think about (me, or others) adding datasets here? An example dataset would be [RWTH-PHOENIX-Weather 2014 T](https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/) For every item in the dataset, the data object includes: 1. video_path - path to mp4 file 2. pose_path - a path to `.pose` file with human pose landmarks 3. openpose_path - a path to a `.json` file with human pose landmarks 4. gloss - string 5. text - string 6. video_metadata - height, width, frames, framerate ------ To make it a tad more complicated - what if sign language libraries add requirements to `nlp`? for example, sign language is commonly annotated using `ilex`, `eaf`, or `srt` files, which are all loadable as text, but there is no reason for the dataset to parse that file by itself, if libraries exist to do so.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/302/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/302/timeline
null
completed
null
null
false
[ "Even more complicating - \r\n\r\nAs I see it, datasets can have \"addons\".\r\nFor example, the WebNLG dataset is a dataset for data-to-text. However, a work of mine and other works enriched this dataset with text plans / underlying text structures. In that case, I see a need to load the dataset \"WebNLG\" with \"...
https://api.github.com/repos/huggingface/datasets/issues/1847
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1847/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1847/comments
https://api.github.com/repos/huggingface/datasets/issues/1847/events
https://github.com/huggingface/datasets/pull/1847
803,824,694
MDExOlB1bGxSZXF1ZXN0NTY5Njg4NDY0
1,847
[Metrics] Add word error rate metric
[]
closed
false
null
1
2021-02-08T18:41:15Z
2021-02-09T17:53:21Z
2021-02-09T17:53:21Z
null
This PR adds the word error rate metric to datasets. WER: https://en.wikipedia.org/wiki/Word_error_rate for speech recognition. WER is the main metric used in ASR. `jiwer` seems to be a solid library (see https://github.com/asteroid-team/asteroid/pull/329#discussion_r525158939)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1847/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1847/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1847.diff", "html_url": "https://github.com/huggingface/datasets/pull/1847", "merged_at": "2021-02-09T17:53:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/1847.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1847" }
true
[ "Feel free to merge once the CI is all green ;)" ]
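For context, a minimal sketch of the metric this PR wires in, computed directly with the `jiwer` backend the description points to:

```python
from jiwer import wer

references = ["the cat sat on the mat"]
hypotheses = ["the cat sit on the mat"]

# WER = (substitutions + deletions + insertions) / number of reference words.
print(wer(references, hypotheses))  # one substitution over six words ≈ 0.167
```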
https://api.github.com/repos/huggingface/datasets/issues/4545
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4545/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4545/comments
https://api.github.com/repos/huggingface/datasets/issues/4545/events
https://github.com/huggingface/datasets/pull/4545
1,280,899,028
PR_kwDODunzps46KV-y
4,545
Make DuplicateKeysError more user friendly [For Issue #2556]
[]
closed
false
null
2
2022-06-22T21:01:34Z
2022-06-28T09:37:06Z
2022-06-28T09:26:04Z
null
# What does this PR do? ## Summary *DuplicateKeysError error does not provide any information regarding the examples which have the same the key.* *This information is very helpful for debugging the dataset generator script.* ## Additions - ## Changes - Changed `DuplicateKeysError Class` in `src/datasets/keyhash.py` to add current index and duplicate_key_indices to error message. - Changed `check_duplicate_keys` function in `src/datasets/arrow_writer.py` to find indices of examples with duplicate hash if duplicate keys are found. ## Deletions - ## To do : - [x] Find way to find and print path `<Path to Dataset>` in Error message ## Issues Addressed : Fixes #2556
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4545/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4545/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4545.diff", "html_url": "https://github.com/huggingface/datasets/pull/4545", "merged_at": "2022-06-28T09:26:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/4545.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4545" }
true
[ "> Nice thanks !\r\n> \r\n> After your changes feel free to mark this PR as \"ready for review\" ;)\r\n\r\nMarking PR ready for review.\r\n\r\n@lhoestq Let me know if there is anything else required or if we are good to go ahead and merge.", "_The documentation is not available anymore as the PR was closed or mer...
https://api.github.com/repos/huggingface/datasets/issues/837
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/837/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/837/comments
https://api.github.com/repos/huggingface/datasets/issues/837/events
https://github.com/huggingface/datasets/pull/837
740,250,215
MDExOlB1bGxSZXF1ZXN0NTE4NzcwNDM5
837
AlloCiné dataset card
[]
closed
false
null
0
2020-11-10T21:19:53Z
2020-11-25T21:56:27Z
2020-11-25T21:56:27Z
null
Link to the card page: https://github.com/mcmillanmajora/datasets/blob/allocine_card/datasets/allocine/README.md There wasn't as much information available for this dataset, so I'm wondering what's the best way to address open questions about the dataset. For example, where did the list of films that the dataset creator used come from? I'm also wondering how best to go about talking about limitations when so little is known about the data.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/837/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/837/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/837.diff", "html_url": "https://github.com/huggingface/datasets/pull/837", "merged_at": "2020-11-25T21:56:27Z", "patch_url": "https://github.com/huggingface/datasets/pull/837.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/837" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/1249
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1249/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1249/comments
https://api.github.com/repos/huggingface/datasets/issues/1249/events
https://github.com/huggingface/datasets/pull/1249
758,472,863
MDExOlB1bGxSZXF1ZXN0NTMzNjQwNjA1
1,249
Add doc2dial dataset
[]
closed
false
null
2
2020-12-07T12:39:09Z
2020-12-14T16:17:14Z
2020-12-14T16:17:14Z
null
### Doc2dial: A Goal-Oriented Document-Grounded Dialogue Dataset v0.9 Once complete this will add the [Doc2dial](https://doc2dial.github.io/data.html) dataset from the generic data sets list.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1249/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1249/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1249.diff", "html_url": "https://github.com/huggingface/datasets/pull/1249", "merged_at": "2020-12-14T16:17:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/1249.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1249" }
true
[ "It not always practical to use nested `Sequence`. If you have troubles with sequence you can use lists instead. \r\n\r\nFor example\r\n```python\r\n\r\nfeatures=datasets.Features(\r\n {\r\n \"dial_id\": datasets.Value(\"string\"),\r\n \"doc_id\": datasets.Value(\"string\"),\r\n \"domain\": ...
https://api.github.com/repos/huggingface/datasets/issues/5576
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5576/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5576/comments
https://api.github.com/repos/huggingface/datasets/issues/5576/events
https://github.com/huggingface/datasets/issues/5576
1,598,582,744
I_kwDODunzps5fSG_Y
5,576
I was getting a similar error `pyarrow.lib.ArrowInvalid: Integer value 528 not in range: -128 to 127` - AFAICT, this is because the type specified for `reddit_scores` is `datasets.Sequence(datasets.Value("int8"))`, but the actual values can be well outside the max range for 8-bit integers.
[]
closed
false
null
1
2023-02-24T12:57:49Z
2023-02-24T12:58:31Z
2023-02-24T12:58:18Z
null
I was getting a similar error `pyarrow.lib.ArrowInvalid: Integer value 528 not in range: -128 to 127` - AFAICT, this is because the type specified for `reddit_scores` is `datasets.Sequence(datasets.Value("int8"))`, but the actual values can be well outside the max range for 8-bit integers. I worked around this by downloading `the_pile_openwebtext2.py` and editing it to use local files and drop reddit scores as a column (not needed for my purposes). _Originally posted by @tc-wolf in https://github.com/huggingface/datasets/issues/3053#issuecomment-1281392422_
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5576/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5576/timeline
null
not_planned
null
null
false
[ "Duplicated issue." ]
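A small, hedged reproduction of the failure class described above, together with the schema-level fix; the column name comes from the issue, the values are made up:

```python
import pyarrow as pa
from datasets import Dataset, Features, Sequence, Value

scores = [[528, -3, 12]]  # 528 does not fit in int8 (range -128..127)

# Declaring int8 reproduces the error from the title:
try:
    Dataset.from_dict(
        {"reddit_scores": scores},
        features=Features({"reddit_scores": Sequence(Value("int8"))}),
    )
except pa.ArrowInvalid as err:
    print(err)

# A wide enough integer type accepts the same values:
ds = Dataset.from_dict(
    {"reddit_scores": scores},
    features=Features({"reddit_scores": Sequence(Value("int32"))}),
)
print(ds.features)
```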
https://api.github.com/repos/huggingface/datasets/issues/3076
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3076/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3076/comments
https://api.github.com/repos/huggingface/datasets/issues/3076/events
https://github.com/huggingface/datasets/issues/3076
1,026,113,484
I_kwDODunzps49KT_M
3,076
Error when loading a metric
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
0
2021-10-14T08:29:27Z
2021-10-14T09:14:55Z
2021-10-14T09:14:55Z
null
## Describe the bug As reported by @sgugger, after last release, exception is thrown when loading a metric. ## Steps to reproduce the bug ```python from datasets import load_metric metric = load_metric("squad_v2") ``` ## Actual results ``` FileNotFoundError Traceback (most recent call last) <ipython-input-1-e612a8cab787> in <module> 1 from datasets import load_metric ----> 2 metric = load_metric("squad_v2") d:\projects\huggingface\datasets\src\datasets\load.py in load_metric(path, config_name, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, revision, script_version, **metric_init_kwargs) 1336 ) 1337 revision = script_version -> 1338 metric_module = metric_module_factory( 1339 path, revision=revision, download_config=download_config, download_mode=download_mode 1340 ).module_path d:\projects\huggingface\datasets\src\datasets\load.py in metric_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, **download_kwargs) 1237 if not isinstance(e1, FileNotFoundError): 1238 raise e1 from None -> 1239 raise FileNotFoundError( 1240 f"Couldn't find a metric script at {relative_to_absolute_path(combined_path)}. " 1241 f"Metric '{path}' doesn't exist on the Hugging Face Hub either." FileNotFoundError: Couldn't find a metric script at D:\projects\huggingface\datasets\squad_v2\squad_v2.py. Metric 'squad_v2' doesn't exist on the Hugging Face Hub either. ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3076/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3076/timeline
null
completed
null
null
false
[]
https://api.github.com/repos/huggingface/datasets/issues/385
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/385/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/385/comments
https://api.github.com/repos/huggingface/datasets/issues/385/events
https://github.com/huggingface/datasets/pull/385
655,663,997
MDExOlB1bGxSZXF1ZXN0NDQ4MTAzMjY5
385
Remove unnecessary nested dict
[]
closed
false
null
5
2020-07-13T08:46:23Z
2020-07-15T11:27:38Z
2020-07-15T10:03:53Z
null
This PR removes the unnecessary nested dictionaries used in some datasets. For now the following datasets are updated: - MLQA - RACE More will be added if necessary. #378
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/385/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/385/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/385.diff", "html_url": "https://github.com/huggingface/datasets/pull/385", "merged_at": "2020-07-15T10:03:53Z", "patch_url": "https://github.com/huggingface/datasets/pull/385.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/385" }
true
[ "We can probably scan the dataset scripts with a regexpr to try to identify this pattern cc @patrickvonplaten maybe", "@mariamabarham This script should work. I tested it for a couple of datasets. There might be exceptions where the script breaks - did not test everything.\r\n\r\n```python\r\n#!/usr/bin/env pytho...
https://api.github.com/repos/huggingface/datasets/issues/2769
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2769/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2769/comments
https://api.github.com/repos/huggingface/datasets/issues/2769/events
https://github.com/huggingface/datasets/pull/2769
963,240,802
MDExOlB1bGxSZXF1ZXN0NzA1ODk5MTYy
2,769
Allow PyArrow from source
[]
closed
false
null
0
2021-08-07T14:26:44Z
2021-08-09T15:38:39Z
2021-08-09T15:38:39Z
null
When installing pyarrow from source the version is: ```python >>> import pyarrow; pyarrow.__version__ '2.1.0.dev612' ``` -> however this breaks the install check at init of `datasets`. This PR makes sure that everything coming after the last `'.'` is removed.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2769/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2769/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2769.diff", "html_url": "https://github.com/huggingface/datasets/pull/2769", "merged_at": "2021-08-09T15:38:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/2769.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2769" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/2273
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2273/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2273/comments
https://api.github.com/repos/huggingface/datasets/issues/2273/events
https://github.com/huggingface/datasets/pull/2273
869,046,290
MDExOlB1bGxSZXF1ZXN0NjI0NDcxODc1
2,273
Added CUAD metrics
[]
closed
false
null
0
2021-04-27T16:49:12Z
2021-04-29T13:59:47Z
2021-04-29T13:59:47Z
null
`EM`, `F1`, `AUPR`, `Precision@80%Recall`, and `Precision@90%Recall` metrics supported for CUAD
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2273/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2273/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2273.diff", "html_url": "https://github.com/huggingface/datasets/pull/2273", "merged_at": "2021-04-29T13:59:47Z", "patch_url": "https://github.com/huggingface/datasets/pull/2273.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2273" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/948
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/948/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/948/comments
https://api.github.com/repos/huggingface/datasets/issues/948/events
https://github.com/huggingface/datasets/pull/948
754,306,260
MDExOlB1bGxSZXF1ZXN0NTMwMjI4NjQz
948
docs(ADD_NEW_DATASET): correct indentation for script
[]
closed
false
null
0
2020-12-01T11:17:38Z
2020-12-01T11:25:18Z
2020-12-01T11:25:18Z
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/948/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/948/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/948.diff", "html_url": "https://github.com/huggingface/datasets/pull/948", "merged_at": "2020-12-01T11:25:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/948.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/948" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/575
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/575/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/575/comments
https://api.github.com/repos/huggingface/datasets/issues/575/events
https://github.com/huggingface/datasets/issues/575
693,691,611
MDU6SXNzdWU2OTM2OTE2MTE=
575
Couldn't reach certain URLs and for the ones that can be reached, code just blocks after downloading.
[]
closed
false
null
6
2020-09-04T21:46:25Z
2020-09-22T10:41:36Z
2020-09-22T10:41:36Z
null
Hi, I'm following the [quick tour](https://huggingface.co/nlp/quicktour.html) and tried to load the glue dataset: ``` >>> from nlp import load_dataset >>> dataset = load_dataset('glue', 'mrpc', split='train') ``` However, this ran into a `ConnectionError` saying it could not reach the URL (just pasting the last few lines): ``` /net/vaosl01/opt/NFS/su0/miniconda3/envs/hf/lib/python3.7/site-packages/nlp/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only) 354 " to False." 355 ) --> 356 raise ConnectionError("Couldn't reach {}".format(url)) 357 358 # From now on, connected is True. ConnectionError: Couldn't reach https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2Fmrpc_dev_ids.tsv?alt=media&token=ec5c0836-31d5-48f4-b431-7480817f1adc ``` I tried glue with cola and sst2. I got the same error, just instead of mrpc in the URL, it was replaced with cola and sst2. Since this was not working, I thought I'll try another dataset. So I tried downloading the imdb dataset: ``` ds = load_dataset('imdb', split='train') ``` This downloads the data, but it just blocks after that: ``` Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4.56k/4.56k [00:00<00:00, 1.38MB/s] Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.07k/2.07k [00:00<00:00, 1.15MB/s] Downloading and preparing dataset imdb/plain_text (download: 80.23 MiB, generated: 127.06 MiB, post-processed: Unknown sizetotal: 207.28 MiB) to /net/vaosl01/opt/NFS/su0/huggingface/datasets/imdb/plain_text/1.0.0/76cdbd7249ea3548c928bbf304258dab44d09cd3638d9da8d42480d1d1be3743... Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 84.1M/84.1M [00:07<00:00, 11.1MB/s] ``` I checked the folder `$HF_HOME/datasets/downloads/extracted/<id>/aclImdb`. This folder is constantly growing in size. When I navigated to the train folder within, there was no file. However, the test folder seemed to be populating. The last time I checked it was 327M. I thought the Imdb dataset was smaller than that. My questions are: 1. Why is it still blocking? Is it still downloading? 2. I specified split as train, so why is the test folder being populated? 3. I read somewhere that after downloading, `nlp` converts the text files into some sort of `arrow` files, which will also take a while. Is this also happening here? Thanks.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/575/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/575/timeline
null
completed
null
null
false
[ "Update:\r\n\r\nThe imdb download completed after a long time (about 45 mins). Ofcourse once download loading was instantaneous. Also, the loaded object was of type `arrow_dataset`. \r\n\r\nThe urls for glue still doesn't work though.", "Thanks for the report, I'll give a look!", "I am also seeing a similar err...
https://api.github.com/repos/huggingface/datasets/issues/5492
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5492/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5492/comments
https://api.github.com/repos/huggingface/datasets/issues/5492/events
https://github.com/huggingface/datasets/issues/5492
1,566,604,216
I_kwDODunzps5dYHu4
5,492
Push_to_hub in a pull request
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "7057ff", "default": true...
open
false
null
2
2023-02-01T18:32:14Z
2023-02-14T22:16:40Z
null
null
Right now `ds.push_to_hub()` can push a dataset on `main` or on a new branch with `branch=`, but there is no way to open a pull request. Even passing `branch=refs/pr/x` doesn't seem to work: it tries to create a branch with that name cc @nateraw It should be possible to tweak the use of `huggingface_hub` in `push_to_hub` to make it open a PR or push to an existing PR
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5492/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5492/timeline
null
null
null
null
false
[ "Assigned to myself and will get to it in the next week, but if someone finds this issue annoying and wants to submit a PR before I do, just ping me here and I'll reassign :). ", "I would like to be assigned to this issue, @nateraw . #self-assign" ]
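Until that tweak lands, a hedged sketch of the `huggingface_hub` route the issue points at: write a shard locally and open a pull request with `create_pr=True`. The repo id and paths below are placeholders:

```python
from datasets import Dataset
from huggingface_hub import HfApi

ds = Dataset.from_dict({"a": [1, 2, 3]})
ds.to_parquet("train.parquet")  # materialize a shard locally

api = HfApi()
api.upload_file(
    path_or_fileobj="train.parquet",
    path_in_repo="data/train.parquet",
    repo_id="username/my-dataset",  # placeholder
    repo_type="dataset",
    create_pr=True,  # opens a pull request instead of pushing to main
)
```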
https://api.github.com/repos/huggingface/datasets/issues/3794
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3794/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3794/comments
https://api.github.com/repos/huggingface/datasets/issues/3794/events
https://github.com/huggingface/datasets/pull/3794
1,153,185,343
PR_kwDODunzps4zniT4
3,794
Add Mahalanobis distance metric
[]
closed
false
null
0
2022-02-27T10:56:31Z
2022-03-02T14:46:15Z
2022-03-02T14:46:15Z
null
Mahalanobis distance is a very useful metric to measure the distance from one datapoint X to a distribution P. In this PR I implement the metric in a simple way with the help of numpy only. Similar to the [MAUVE implementation](https://github.com/huggingface/datasets/blob/master/metrics/mauve/mauve.py), we can make this metric accept texts as input and encode them with a featurizer model, if that is desirable.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3794/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3794/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3794.diff", "html_url": "https://github.com/huggingface/datasets/pull/3794", "merged_at": "2022-03-02T14:46:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/3794.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3794" }
true
[]
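For readers unfamiliar with the metric, a self-contained numpy sketch of the underlying formula, independent of this PR's actual implementation:

```python
import numpy as np

# Mahalanobis distance of a point x from a distribution with mean mu and
# covariance S:  d(x) = sqrt((x - mu)^T S^{-1} (x - mu))
rng = np.random.default_rng(0)
P = rng.normal(size=(1000, 2))  # samples from the reference distribution
x = np.array([1.5, -0.5])       # query point

mu = P.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(P, rowvar=False))
delta = x - mu
print(np.sqrt(delta @ cov_inv @ delta))
```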
https://api.github.com/repos/huggingface/datasets/issues/563
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/563/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/563/comments
https://api.github.com/repos/huggingface/datasets/issues/563/events
https://github.com/huggingface/datasets/pull/563
690,908,674
MDExOlB1bGxSZXF1ZXN0NDc3NzI2MTEz
563
[Large datasets] Speed up download and processing
[]
closed
false
null
2
2020-09-02T10:31:54Z
2020-09-09T09:03:33Z
2020-09-09T09:03:32Z
null
Various improvements to speed-up creation and processing of large scale datasets. Currently: - distributed downloads - remove etag from datafiles hashes to spare a request when restarting a failed download
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/563/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/563/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/563.diff", "html_url": "https://github.com/huggingface/datasets/pull/563", "merged_at": "2020-09-09T09:03:32Z", "patch_url": "https://github.com/huggingface/datasets/pull/563.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/563" }
true
[ "Looks all good :)\r\nI rebased from master and added a test for parallel `map_nested`", "you're da best" ]
https://api.github.com/repos/huggingface/datasets/issues/5649
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5649/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5649/comments
https://api.github.com/repos/huggingface/datasets/issues/5649/events
https://github.com/huggingface/datasets/issues/5649
1,630,173,460
I_kwDODunzps5hKnkU
5,649
The index column created with .to_sql() is dependent on the batch_size when writing
[]
closed
false
null
2
2023-03-18T05:25:17Z
2023-06-17T07:01:57Z
2023-06-17T07:01:57Z
null
### Describe the bug It seems like the "index" column is designed to be unique? The values are only unique per batch. The SQL index is not a unique index. This can be a problem, for instance, when building a faiss index on a dataset and then trying to match up ids with a sql export. ### Steps to reproduce the bug ``` from datasets import Dataset import sqlite3 db = sqlite3.connect(":memory:") nice_numbers = Dataset.from_dict({"nice_number": range(101,106)}) nice_numbers.to_sql("nice1", db, batch_size=1) nice_numbers.to_sql("nice2", db, batch_size=2) print(db.execute("select * from nice1").fetchall()) # [(0, 101), (0, 102), (0, 103), (0, 104), (0, 105)] print(db.execute("select * from nice2").fetchall()) # [(0, 101), (1, 102), (0, 103), (1, 104), (0, 105)] ``` ### Expected behavior I expected the "index" column to be unique ### Environment info ``` % datasets-cli env Copy-and-paste the text below in your GitHub issue. - `datasets` version: 2.10.1 - Platform: macOS-13.2.1-arm64-arm-64bit - Python version: 3.9.6 - PyArrow version: 7.0.0 - Pandas version: 1.5.2 zsh: segmentation fault datasets-cli env ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5649/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5649/timeline
null
not_planned
null
null
false
[ "Thanks for reporting, @lsb. \r\n\r\nWe are investigating it.\r\n\r\nOn the other hand, please note that in the next `datasets` release, the index will not be created by default (see #5583). If you would like to have it, you will need to explicitly pass `index=True`. ", "I think this is low enough priority for me...
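Because `Dataset.to_sql` forwards extra keyword arguments to `pandas.DataFrame.to_sql`, a hedged workaround (matching the default announced for the next release) is to drop the per-batch index entirely:

```python
import sqlite3
from datasets import Dataset

db = sqlite3.connect(":memory:")
nice_numbers = Dataset.from_dict({"nice_number": range(101, 106)})

# index=False is forwarded to pandas.DataFrame.to_sql, so no per-batch
# "index" column is written, regardless of batch_size.
nice_numbers.to_sql("nice", db, batch_size=2, index=False)
print(db.execute("select * from nice").fetchall())
# [(101,), (102,), (103,), (104,), (105,)]
```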
https://api.github.com/repos/huggingface/datasets/issues/5830
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5830/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5830/comments
https://api.github.com/repos/huggingface/datasets/issues/5830/events
https://github.com/huggingface/datasets/pull/5830
1,701,451,399
PR_kwDODunzps5QEFEi
5,830
Debug windows #2
[]
closed
false
null
0
2023-05-09T06:40:34Z
2023-05-09T06:40:47Z
2023-05-09T06:40:47Z
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5830/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5830/timeline
null
null
true
{ "diff_url": "https://github.com/huggingface/datasets/pull/5830.diff", "html_url": "https://github.com/huggingface/datasets/pull/5830", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5830.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5830" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/356
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/356/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/356/comments
https://api.github.com/repos/huggingface/datasets/issues/356/events
https://github.com/huggingface/datasets/pull/356
653,537,388
MDExOlB1bGxSZXF1ZXN0NDQ2NDM3MDQ5
356
Add text dataset
[]
closed
false
null
0
2020-07-08T19:21:53Z
2020-07-10T14:19:03Z
2020-07-10T14:19:03Z
null
Usage: ```python from nlp import load_dataset dset = load_dataset("text", data_files="/path/to/file.txt")["train"] ``` I created a dummy_data.zip which contains three files: `train.txt`, `test.txt`, `dev.txt`. Each of these contains two lines. It passes ```bash RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_text ``` but I would like a second set of eyes to ensure I did it right.
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 3, "laugh": 0, "rocket": 0, "total_count": 6, "url": "https://api.github.com/repos/huggingface/datasets/issues/356/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/356/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/356.diff", "html_url": "https://github.com/huggingface/datasets/pull/356", "merged_at": "2020-07-10T14:19:03Z", "patch_url": "https://github.com/huggingface/datasets/pull/356.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/356" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/2732
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2732/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2732/comments
https://api.github.com/repos/huggingface/datasets/issues/2732/events
https://github.com/huggingface/datasets/pull/2732
956,676,360
MDExOlB1bGxSZXF1ZXN0NzAwMjMzMzQy
2,732
Updated TTC4900 Dataset
[]
closed
false
null
2
2021-07-30T11:52:14Z
2021-07-30T16:00:51Z
2021-07-30T15:58:14Z
null
- The source address of the TTC4900 dataset of [@savasy](https://github.com/savasy) has been updated for direct download. - Updated readme.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2732/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2732/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2732.diff", "html_url": "https://github.com/huggingface/datasets/pull/2732", "merged_at": "2021-07-30T15:58:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/2732.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2732" }
true
[ "@lhoestq, could you please review this PR?", "> Thanks ! This looks all good now :)\r\n\r\nThanks" ]
https://api.github.com/repos/huggingface/datasets/issues/5517
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5517/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5517/comments
https://api.github.com/repos/huggingface/datasets/issues/5517/events
https://github.com/huggingface/datasets/issues/5517
1,577,976,608
I_kwDODunzps5eDgMg
5,517
`with_format("numpy")` silently downcasts float64 to float32 features
[]
open
false
{ "closed_at": null, "closed_issues": 0, "created_at": "2023-02-13T16:22:42Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }, "description": "Next major release", "due_on": null, "html_url": "https://github.com/huggingface/datasets/milestone/10", "id": 9038583, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/10/labels", "node_id": "MI_kwDODunzps4Aier3", "number": 10, "open_issues": 3, "state": "open", "title": "3.0", "updated_at": "2023-04-12T17:00:57Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/10" }
10
2023-02-09T14:18:00Z
2023-02-14T15:38:54Z
null
null
### Describe the bug When I create a dataset with a `float64` feature, then apply numpy formatting the returned numpy arrays are silently downcasted to `float32`. ### Steps to reproduce the bug ```python import datasets dataset = datasets.Dataset.from_dict({'a': [1.0, 2.0, 3.0]}).with_format("numpy") print("feature dtype:", dataset.features['a'].dtype) print("array dtype:", dataset['a'].dtype) ``` output: ``` feature dtype: float64 array dtype: float32 ``` ### Expected behavior ``` feature dtype: float64 array dtype: float64 ``` ### Environment info - `datasets` version: 2.8.0 - Platform: Linux-5.4.0-135-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 10.0.1 - Pandas version: 1.4.4 ### Suggested Fix Changing [the `_tensorize` function of the numpy formatter](https://github.com/huggingface/datasets/blob/b065547654efa0ec633cf373ac1512884c68b2e1/src/datasets/formatting/np_formatter.py#L32) to ```python def _tensorize(self, value): if isinstance(value, (str, bytes, type(None))): return value elif isinstance(value, (np.character, np.ndarray)) and np.issubdtype(value.dtype, np.character): return value elif isinstance(value, np.number): return value return np.asarray(value, **self.np_array_kwargs) ``` fixes this particular issue for me. Not sure if this would break other tests. This should also avoid unnecessary copying of the array.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5517/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5517/timeline
null
null
null
null
false
[ "Hi! This behavior stems from these lines:\r\n\r\nhttps://github.com/huggingface/datasets/blob/b065547654efa0ec633cf373ac1512884c68b2e1/src/datasets/formatting/np_formatter.py#L45-L46\r\n\r\nI agree we should preserve the original type whenever possible and downcast explicitly with a warning.\r\n\r\n@lhoestq Do you...
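Until the suggested fix ships, a conservative workaround is to skip the NumPy formatter for the affected column and upcast explicitly; this is a sketch, not released `datasets` behavior:

```python
import numpy as np
import datasets

dataset = datasets.Dataset.from_dict({"a": [1.0, 2.0, 3.0]})

# Read the column through the default (python) format, then build the array
# with an explicit dtype instead of trusting the numpy formatter's default.
a = np.asarray(dataset["a"], dtype=np.float64)
print(a.dtype)  # float64
```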
https://api.github.com/repos/huggingface/datasets/issues/3940
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3940/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3940/comments
https://api.github.com/repos/huggingface/datasets/issues/3940/events
https://github.com/huggingface/datasets/pull/3940
1,171,106,853
PR_kwDODunzps40iYxr
3,940
Create CoVAL metric card
[]
closed
false
null
1
2022-03-16T14:31:49Z
2022-03-18T17:37:59Z
2022-03-18T17:35:14Z
null
Initial CoVAL metric card
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3940/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3940/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3940.diff", "html_url": "https://github.com/huggingface/datasets/pull/3940", "merged_at": "2022-03-18T17:35:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/3940.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3940" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/3342
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3342/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3342/comments
https://api.github.com/repos/huggingface/datasets/issues/3342/events
https://github.com/huggingface/datasets/pull/3342
1,067,481,390
PR_kwDODunzps4vM3wh
3,342
Fix ASSET dataset data URLs
[]
closed
false
null
1
2021-11-30T17:13:30Z
2021-12-14T14:50:00Z
2021-12-14T14:50:00Z
null
Change the branch name "master" to "main" in the data URLs, since facebookresearch has changed that.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3342/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3342/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3342.diff", "html_url": "https://github.com/huggingface/datasets/pull/3342", "merged_at": "2021-12-14T14:50:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/3342.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3342" }
true
[ "> Hi @tianjianjiang, thanks for the fix.\r\n> The links should also be updated in the `dataset_infos.json` file.\r\n> The failing tests are due to the missing tag in the header of the `README.md` file:\r\n\r\nHi @albertvillanova, thank you for the info! My apologies for the messy PR.\r\n" ]
https://api.github.com/repos/huggingface/datasets/issues/5844
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5844/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5844/comments
https://api.github.com/repos/huggingface/datasets/issues/5844/events
https://github.com/huggingface/datasets/issues/5844
1,705,907,812
I_kwDODunzps5lrhZk
5,844
TypeError: Couldn't cast array of type struct<answer: struct<unanswerable: bool, answerType: string, free_form_answer: string, evidence: list<item: string>, evidenceAnnotate: list<item: string>, highlighted_evidence: list<item: string>>> to ...
[]
open
false
null
0
2023-05-11T14:15:01Z
2023-05-11T14:15:01Z
null
null
### Describe the bug TypeError: Couldn't cast array of type struct<answer: struct<unanswerable: bool, answerType: string, free_form_answer: string, evidence: list<item: string>, evidenceAnnotate: list<item: string>, highlighted_evidence: list<item: string>>> to {'answer': {'unanswerable': Value(dtype='bool', id=None), 'answerType': Value(dtype='string', id=None), 'free_form_answer': Value(dtype='string', id=None), 'evidence': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'evidenceAnnotate': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'highlighted_evidence': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}, 'unanswerable': Value(dtype='bool', id=None), 'answerType': Value(dtype='string', id=None), 'free_form_answer': Value(dtype='string', id=None), 'evidence': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'evidenceAnnotate': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'highlighted_evidence': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)} When I use _load_dataset()_ I get the error `from datasets import load_dataset datafiles = {'train': './data/train.json', 'validation': './data/validation.json', 'test': './data/test.json'} raw_data = load_dataset("json", data_files=datafiles, cache_dir="./cache") ` Detailed error information is as follows: Traceback (most recent call last): File "C:/Users/CHENJIALEI/Desktop/NLPCC2023/NLPCC23_SciMRC-main/test2.py", line 9, in <module> raw_data = load_dataset("json", data_files=datafiles, cache_dir="./cache") File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\load.py", line 1747, in load_dataset builder_instance.download_and_prepare( File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\builder.py", line 814, in download_and_prepare self._download_and_prepare( File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\builder.py", line 905, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\builder.py", line 1521, in _prepare_split writer.write_table(table) File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\arrow_writer.py", line 540, in write_table pa_table = table_cast(pa_table, self._schema) File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 2069, in table_cast return cast_table_to_schema(table, schema) File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 2031, in cast_table_to_schema arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()] File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 2031, in <listcomp> arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()] File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1740, in wrapper return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1740, in <listcomp> return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1867, in cast_array_to_feature casted_values = _c(array.values, feature[0]) File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1742, in wrapper return func(array, *args, **kwargs) File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1862, in cast_array_to_feature arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()] File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1862, in <listcomp> arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()] File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1742, in wrapper return func(array, *args, **kwargs) File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1867, in cast_array_to_feature casted_values = _c(array.values, feature[0]) File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1742, in wrapper return func(array, *args, **kwargs) File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1913, in cast_array_to_feature raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}") It is successful when I load the data separately `raw_data = load_dataset("json", data_files="./data/train.json", cache_dir="./cache")` ### Steps to reproduce the bug 1.from datasets import load_dataset 2.datafiles = {'train': './data/train.json', 'validation': './data/validation.json', 'test': './data/test.json'} 3.raw_data = load_dataset("json", data_files=datafiles, cache_dir="./cache") ### Expected behavior Successfully load dataset ### Environment info datasets == 2.6.1 pyarrow == 8.0.0 python == 3.8 platform:windows11
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5844/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5844/timeline
null
null
null
null
false
[]
https://api.github.com/repos/huggingface/datasets/issues/2589
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2589/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2589/comments
https://api.github.com/repos/huggingface/datasets/issues/2589/events
https://github.com/huggingface/datasets/pull/2589
936,825,060
MDExOlB1bGxSZXF1ZXN0NjgzNDc0OTQ0
2,589
Support multilabel metrics
[]
closed
false
{ "closed_at": "2021-07-21T15:36:49Z", "closed_issues": 29, "created_at": "2021-06-08T18:48:33Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }, "description": "Next minor release", "due_on": "2021-08-05T07:00:00Z", "html_url": "https://github.com/huggingface/datasets/milestone/6", "id": 6836458, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels", "node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==", "number": 6, "open_issues": 0, "state": "closed", "title": "1.10", "updated_at": "2021-07-21T15:36:49Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/6" }
5
2021-07-05T08:19:25Z
2022-07-29T10:56:25Z
2021-07-08T08:40:15Z
null
Currently, multilabel metrics are not supported because `predictions` and `references` are defined as `Value("int32")`. This PR creates a new feature type `OptionalSequence` which can act as either `Value("int32")` or `Sequence(Value("int32"))`, depending on the data passed. Close #2554.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2589/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2589/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2589.diff", "html_url": "https://github.com/huggingface/datasets/pull/2589", "merged_at": "2021-07-08T08:40:15Z", "patch_url": "https://github.com/huggingface/datasets/pull/2589.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2589" }
true
[ "Hi ! Thanks for the fix :)\r\n\r\nIf I understand correctly, `OptionalSequence` doesn't have an associated arrow type that we know in advance unlike the other feature types, because it depends on the type of the examples.\r\n\r\nFor example, I tested this and it raises an error:\r\n```python\r\nimport datasets as ...
https://api.github.com/repos/huggingface/datasets/issues/5309
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5309/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5309/comments
https://api.github.com/repos/huggingface/datasets/issues/5309/events
https://github.com/huggingface/datasets/pull/5309
1,466,758,987
PR_kwDODunzps5D0g1y
5,309
Close stream in `ArrowWriter.finalize` before inference error
[]
closed
false
null
1
2022-11-28T16:59:39Z
2022-12-07T12:55:20Z
2022-12-07T12:52:15Z
null
Ensure the file stream is closed in `ArrowWriter.finalize` before raising the `SchemaInferenceError` to avoid the `PermissionError` on Windows in `incomplete_dir`'s `shutil.rmtree`.
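A minimal sketch of the pattern, with hypothetical names (`SchemaInferenceError` and `infer_schema` stand in for the real internals):

```python
class SchemaInferenceError(Exception):
    pass

def finalize(stream, infer_schema):
    try:
        return infer_schema()
    except Exception as err:
        # close the handle before raising; an open handle makes
        # shutil.rmtree fail with PermissionError on Windows
        stream.close()
        raise SchemaInferenceError("schema could not be inferred") from err
```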
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5309/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5309/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5309.diff", "html_url": "https://github.com/huggingface/datasets/pull/5309", "merged_at": "2022-12-07T12:52:15Z", "patch_url": "https://github.com/huggingface/datasets/pull/5309.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5309" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/5280
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5280/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5280/comments
https://api.github.com/repos/huggingface/datasets/issues/5280/events
https://github.com/huggingface/datasets/issues/5280
1,459,823,179
I_kwDODunzps5XAyJL
5,280
Import error
[]
closed
false
null
5
2022-11-22T12:56:43Z
2022-12-15T19:57:40Z
2022-12-15T19:57:40Z
null
https://github.com/huggingface/datasets/blob/cd3d8e637cfab62d352a3f4e5e60e96597b5f0e9/src/datasets/__init__.py#L28 Hi, I get an error at the above line. I have Python version 3.8.13; the message says I need python>=3.7, which is true, but I think the if statement is not working properly (or the message is wrong)
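A quick way to check what the interpreter actually reports, assuming the guard compares against `sys.version_info`:

```python
import platform
import sys

print(platform.python_version())   # e.g. '3.8.13'
print(sys.version_info >= (3, 7))  # should be True on 3.8.13
```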
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5280/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5280/timeline
null
completed
null
null
false
[ "Hi ! Can you \r\n```python\r\nimport platform\r\nprint(platform.python_version())\r\n```\r\nto see that it returns ?", "Hi,\n\n3.8.13\n\nGet Outlook for Android<https://aka.ms/AAb9ysg>\n________________________________\nFrom: Quentin Lhoest ***@***.***>\nSent: Tuesday, November 22, 2022 2:37:02 PM\nTo: huggingfa...
https://api.github.com/repos/huggingface/datasets/issues/2427
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2427/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2427/comments
https://api.github.com/repos/huggingface/datasets/issues/2427/events
https://github.com/huggingface/datasets/pull/2427
907,162,923
MDExOlB1bGxSZXF1ZXN0NjU4MDUwMjAx
2,427
Add copyright info to MLSUM dataset
[]
closed
false
null
2
2021-05-31T07:15:57Z
2021-06-04T09:53:50Z
2021-06-04T09:53:50Z
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2427/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2427/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2427.diff", "html_url": "https://github.com/huggingface/datasets/pull/2427", "merged_at": "2021-06-04T09:53:49Z", "patch_url": "https://github.com/huggingface/datasets/pull/2427.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2427" }
true
[ "Build fails but this change should not be the reason...", "rebased on master" ]
https://api.github.com/repos/huggingface/datasets/issues/579
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/579/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/579/comments
https://api.github.com/repos/huggingface/datasets/issues/579/events
https://github.com/huggingface/datasets/pull/579
694,947,599
MDExOlB1bGxSZXF1ZXN0NDgxMjU1OTI5
579
Doc metrics
[]
closed
false
null
0
2020-09-07T10:15:24Z
2020-09-10T13:06:11Z
2020-09-10T13:06:10Z
null
Adding documentation on metrics loading/using/sharing
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/579/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/579/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/579.diff", "html_url": "https://github.com/huggingface/datasets/pull/579", "merged_at": "2020-09-10T13:06:10Z", "patch_url": "https://github.com/huggingface/datasets/pull/579.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/579" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/556
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/556/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/556/comments
https://api.github.com/repos/huggingface/datasets/issues/556/events
https://github.com/huggingface/datasets/pull/556
690,218,423
MDExOlB1bGxSZXF1ZXN0NDc3MTQ0MTky
556
Add DailyDialog
[]
closed
false
null
0
2020-09-01T15:01:15Z
2020-09-03T15:42:03Z
2020-09-03T15:38:39Z
null
http://yanran.li/dailydialog.html https://arxiv.org/pdf/1710.03957.pdf
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/556/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/556/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/556.diff", "html_url": "https://github.com/huggingface/datasets/pull/556", "merged_at": "2020-09-03T15:38:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/556.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/556" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/4660
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4660/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4660/comments
https://api.github.com/repos/huggingface/datasets/issues/4660/events
https://github.com/huggingface/datasets/pull/4660
1,297,128,387
PR_kwDODunzps47AYDq
4,660
Fix _resolve_single_pattern_locally on Windows with multiple drives
[]
closed
false
null
2
2022-07-07T09:57:30Z
2022-07-07T17:03:36Z
2022-07-07T16:52:07Z
null
Currently, when `_resolve_single_pattern_locally` is called from a different drive than the one in `pattern`, it raises an exception: ``` _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ C:\hostedtoolcache\windows\Python\3.6.8\x64\lib\site-packages\datasets\io\parquet.py:35: in __init__ **kwargs, C:\hostedtoolcache\windows\Python\3.6.8\x64\lib\site-packages\datasets\builder.py:287: in __init__ sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token C:\hostedtoolcache\windows\Python\3.6.8\x64\lib\site-packages\datasets\data_files.py:761: in from_local_or_remote if not isinstance(patterns_for_key, DataFilesList) C:\hostedtoolcache\windows\Python\3.6.8\x64\lib\site-packages\datasets\data_files.py:723: in from_local_or_remote data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions) C:\hostedtoolcache\windows\Python\3.6.8\x64\lib\site-packages\datasets\data_files.py:321: in resolve_patterns_locally_or_by_urls for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions): C:\hostedtoolcache\windows\Python\3.6.8\x64\lib\site-packages\datasets\data_files.py:239: in _resolve_single_pattern_locally for filepath in glob_iter C:\hostedtoolcache\windows\Python\3.6.8\x64\lib\site-packages\datasets\data_files.py:242: in <listcomp> os.path.relpath(filepath, base_path), os.path.relpath(pattern, base_path) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ path = 'C:\\Users\\runneradmin\\AppData\\Local\\Temp\\pytest-of-runneradmin\\pytest-0\\popen-gw0\\data6\\dataset.parquet' start = '/' ... E ValueError: path is on mount 'C:', start on mount 'D:' ``` This PR makes sure that `base_path` is in the same drive as `pattern`.
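The failure is reproducible with `ntpath` alone (usable on any OS, since the drive check is pure string handling); the helper below sketches the idea of the fix with hypothetical names:

```python
import ntpath

try:
    ntpath.relpath("C:\\data\\file.parquet", "D:\\")
except ValueError as err:
    print(err)  # path is on mount 'C:', start on mount 'D:'

def base_path_on_pattern_drive(base_path: str, pattern: str) -> str:
    # fall back to the pattern's drive root when the drives differ
    if ntpath.splitdrive(base_path)[0] != ntpath.splitdrive(pattern)[0]:
        return ntpath.splitdrive(pattern)[0] + ntpath.sep
    return base_path
```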
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4660/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4660/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4660.diff", "html_url": "https://github.com/huggingface/datasets/pull/4660", "merged_at": "2022-07-07T16:52:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/4660.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4660" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "Good catch ! Sorry I forgot (again) about windows paths when writing this x)" ]
https://api.github.com/repos/huggingface/datasets/issues/4142
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4142/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4142/comments
https://api.github.com/repos/huggingface/datasets/issues/4142/events
https://github.com/huggingface/datasets/issues/4142
1,199,794,750
I_kwDODunzps5Hg2o-
4,142
Add ObjectFolder 2.0 dataset
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
open
false
null
1
2022-04-11T10:57:51Z
2022-10-05T10:30:49Z
null
null
## Adding a Dataset - **Name:** ObjectFolder 2.0 - **Description:** ObjectFolder 2.0 is a dataset of 1,000 objects in the form of implicit representations. It contains 1,000 Object Files, each containing the complete multisensory profile for an object instance. - **Paper:** https://arxiv.org/abs/2204.02389 - **Data:** https://github.com/rhgao/ObjectFolder Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4142/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4142/timeline
null
null
null
null
false
[ "Datasets are not tracked in this repository anymore." ]
https://api.github.com/repos/huggingface/datasets/issues/1888
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1888/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1888/comments
https://api.github.com/repos/huggingface/datasets/issues/1888/events
https://github.com/huggingface/datasets/pull/1888
809,241,123
MDExOlB1bGxSZXF1ZXN0NTc0MTM2MDU4
1,888
Docs for adding new column on formatted dataset
[]
closed
false
null
1
2021-02-16T11:45:00Z
2021-03-30T14:01:03Z
2021-02-16T11:58:57Z
null
As mentioned in #1872 we should add in the documentation how the format gets updated when new columns are added Close #1872
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1888/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1888/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1888.diff", "html_url": "https://github.com/huggingface/datasets/pull/1888", "merged_at": "2021-02-16T11:58:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/1888.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1888" }
true
[ "Close #1872" ]
https://api.github.com/repos/huggingface/datasets/issues/3646
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3646/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3646/comments
https://api.github.com/repos/huggingface/datasets/issues/3646/events
https://github.com/huggingface/datasets/pull/3646
1,116,544,627
PR_kwDODunzps4xsX66
3,646
Fix streaming datasets that are not reset correctly
[]
closed
false
null
1
2022-01-27T17:21:02Z
2022-01-28T16:34:29Z
2022-01-28T16:34:28Z
null
Streaming datasets that use `StreamingDownloadManager.iter_archive` and `StreamingDownloadManager.iter_files` had some issues. Indeed if you try to iterate over such dataset twice, then the second time it will be empty. This is because the two methods above are generator functions. I fixed this by making them return iterables that are reset properly instead. Close https://github.com/huggingface/datasets/issues/3645 cc @anton-l
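The underlying Python behavior, in a self-contained sketch: a generator is exhausted after one pass, while an object whose `__iter__` builds a fresh generator can be iterated repeatedly.

```python
def gen():
    yield from range(3)

g = gen()
print(list(g))  # [0, 1, 2]
print(list(g))  # [] -- the generator is exhausted

class ResettableIterable:
    def __iter__(self):
        # a new generator is created on every iteration
        yield from range(3)

r = ResettableIterable()
print(list(r))  # [0, 1, 2]
print(list(r))  # [0, 1, 2]
```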
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3646/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3646/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3646.diff", "html_url": "https://github.com/huggingface/datasets/pull/3646", "merged_at": "2022-01-28T16:34:28Z", "patch_url": "https://github.com/huggingface/datasets/pull/3646.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3646" }
true
[ "Works smoothly with the `transformers.Trainer` class now, thank you!" ]
https://api.github.com/repos/huggingface/datasets/issues/5469
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5469/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5469/comments
https://api.github.com/repos/huggingface/datasets/issues/5469/events
https://github.com/huggingface/datasets/pull/5469
1,558,346,906
PR_kwDODunzps5Imhk2
5,469
Remove deprecated `shard_size` arg from `.push_to_hub()`
[]
closed
false
null
2
2023-01-26T15:40:56Z
2023-01-26T17:37:51Z
2023-01-26T17:30:59Z
null
The docstrings say that it was supposed to be deprecated since version 2.4.0; can we remove it?
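For reference, the replacement argument is `max_shard_size` (the repo id below is hypothetical):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"]})
# before (deprecated): ds.push_to_hub("user/my_dataset", shard_size=500 << 20)
ds.push_to_hub("user/my_dataset", max_shard_size="500MB")
```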
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5469/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5469/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5469.diff", "html_url": "https://github.com/huggingface/datasets/pull/5469", "merged_at": "2023-01-26T17:30:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/5469.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5469" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
https://api.github.com/repos/huggingface/datasets/issues/6028
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6028/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6028/comments
https://api.github.com/repos/huggingface/datasets/issues/6028/events
https://github.com/huggingface/datasets/pull/6028
1,803,294,981
PR_kwDODunzps5Vb3LJ
6,028
Use new hffs
[]
closed
false
null
13
2023-07-13T15:41:44Z
2023-07-17T17:09:39Z
2023-07-17T17:01:00Z
null
Thanks to @janineguo's work in https://github.com/huggingface/datasets/pull/5919, which was needed to support HfFileSystem. Switching to `HfFileSystem` will help implement optimizations in data files resolution. ## Implementation details I replaced all the `from_hf_repo` and `from_local_or_remote` calls in data_files.py with a single new `from_patterns`, which works for any fsspec path, including hf:// paths, https:// URLs and local paths. This simplifies the codebase since there is no logic duplication anymore when it comes to data files resolution. I added `_prepare_path_and_storage_options`, which returns the right storage_options to use given a path and a `DownloadConfig`. This is the only place where the logic depends on the filesystem type that must be used. I also removed the `get_metadata_data_files_list` and `get_patterns_and_data_files` functions added recently, since data files resolution is now handled using a common interface. ## New features hf:// paths are now supported in data_files ## Breaking changes DataFilesList and DataFilesDict: - use `str` paths instead of `Union[Path, Url]` - require POSIX-style paths for Windows paths close https://github.com/huggingface/datasets/issues/6017
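A sketch of the new capability (the repo path is hypothetical):

```python
from datasets import load_dataset

# hf:// URIs are resolved through HfFileSystem, like https:// URLs and local paths
ds = load_dataset("parquet", data_files="hf://datasets/user/my_dataset/data/train-00000-of-00001.parquet")
```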
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6028/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6028/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/6028.diff", "html_url": "https://github.com/huggingface/datasets/pull/6028", "merged_at": "2023-07-17T17:01:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/6028.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6028" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
https://api.github.com/repos/huggingface/datasets/issues/1660
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1660/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1660/comments
https://api.github.com/repos/huggingface/datasets/issues/1660/events
https://github.com/huggingface/datasets/pull/1660
775,831,423
MDExOlB1bGxSZXF1ZXN0NTQ2NDM2MDg1
1,660
add dataset info
[]
closed
false
null
0
2020-12-29T10:58:19Z
2020-12-30T17:04:30Z
2020-12-30T17:04:30Z
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1660/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1660/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1660.diff", "html_url": "https://github.com/huggingface/datasets/pull/1660", "merged_at": "2020-12-30T17:04:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/1660.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1660" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/2572
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2572/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2572/comments
https://api.github.com/repos/huggingface/datasets/issues/2572/events
https://github.com/huggingface/datasets/issues/2572
934,573,767
MDU6SXNzdWU5MzQ1NzM3Njc=
2,572
Support Zstandard compressed files
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
5
2021-07-01T08:37:04Z
2023-01-03T15:34:01Z
2021-07-05T10:50:27Z
null
Add support for Zstandard compressed files: https://facebook.github.io/zstd/
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2572/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2572/timeline
null
completed
null
null
false
[ "I am trying to load a dataset using Hugging Face Datasets load_dataset method. I am getting the value error as show below. Can someone help with this? I am using Windows laptop and Google Colab notebook.\r\n\r\n```\r\n!pip install zstandard\r\nfrom datasets import load_dataset\r\n\r\nlds = load_dataset(\r\n \"j...
https://api.github.com/repos/huggingface/datasets/issues/4666
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4666/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4666/comments
https://api.github.com/repos/huggingface/datasets/issues/4666/events
https://github.com/huggingface/datasets/issues/4666
1,299,732,238
I_kwDODunzps5NeFcO
4,666
Issues with concatenating datasets
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
2
2022-07-09T17:45:14Z
2022-07-12T17:16:15Z
2022-07-12T17:16:14Z
null
## Describe the bug It is impossible to concatenate datasets if a feature is a sequence of dicts in one dataset and a dict of sequences in another. But based on the documentation, it should be automatically converted. > A [datasets.Sequence](https://huggingface.co/docs/datasets/v2.3.2/en/package_reference/main_classes#datasets.Sequence) with an internal dictionary feature will be automatically converted into a dictionary of lists. This behavior is implemented to have a compatibility layer with the TensorFlow Datasets library but may be unwanted in some cases. If you don’t want this behavior, you can use a python list instead of the [datasets.Sequence](https://huggingface.co/docs/datasets/v2.3.2/en/package_reference/main_classes#datasets.Sequence). ## Steps to reproduce the bug ```python from datasets import concatenate_datasets, load_dataset squad = load_dataset("squad_v2") squad["train"].to_json("output.jsonl", lines=True) temp = load_dataset("json", data_files={"train": "output.jsonl"}) concatenate_datasets([temp["train"], squad["train"]]) ``` ## Expected results No error executing that code ## Actual results ``` ValueError: The features can't be aligned because the key answers of features {'id': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'context': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None), 'answers': Sequence(feature={'text': Value(dtype='string', id=None), 'answer_start': Value(dtype='int32', id=None)}, length=-1, id=None)} has unexpected type - Sequence(feature={'text': Value(dtype='string', id=None), 'answer_start': Value(dtype='int32', id=None)}, length=-1, id=None) (expected either {'text': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'answer_start': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None)} or Value("null"). ``` ## Environment info - `datasets` version: 2.3.2 - Platform: macOS-12.4-arm64-arm-64bit - Python version: 3.8.11 - PyArrow version: 6.0.1 - Pandas version: 1.3.5
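One workaround, sketched on the assumption that the original split's features are the schema to keep: cast the reloaded split before concatenating (this also fixes the int64/int32 drift from the JSON round trip mentioned in the thread).

```python
from datasets import concatenate_datasets, load_dataset

squad = load_dataset("squad_v2")
temp = load_dataset("json", data_files={"train": "output.jsonl"})

# align the JSON-inferred schema (dict of lists, int64) with the original features
temp_train = temp["train"].cast(squad["train"].features)
combined = concatenate_datasets([temp_train, squad["train"]])
```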
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4666/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4666/timeline
null
completed
null
null
false
[ "Hi! I agree we should improve the features equality checks to account for this particular case. However, your code fails due to `answer_start` having the dtype `int64` instead of `int32` after loading from JSON (it's not possible to embed type precision info into a JSON file; `save_to_disk` does that for arrow fil...
https://api.github.com/repos/huggingface/datasets/issues/3816
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3816/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3816/comments
https://api.github.com/repos/huggingface/datasets/issues/3816/events
https://github.com/huggingface/datasets/pull/3816
1,158,589,913
PR_kwDODunzps4z5owP
3,816
Doc new UI test workflows2
[]
closed
false
null
1
2022-03-03T15:59:14Z
2022-10-04T09:35:53Z
2022-03-03T16:42:15Z
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3816/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3816/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3816.diff", "html_url": "https://github.com/huggingface/datasets/pull/3816", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3816.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3816" }
true
[ "<img src=\"https://www.bikevillastravel.com/cms/static/images/loading.gif\" alt=\"Girl in a jacket\" width=\"50\" >" ]
https://api.github.com/repos/huggingface/datasets/issues/1380
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1380/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1380/comments
https://api.github.com/repos/huggingface/datasets/issues/1380/events
https://github.com/huggingface/datasets/pull/1380
760,320,494
MDExOlB1bGxSZXF1ZXN0NTM1MTcxOTAw
1,380
Add Tatoeba Dataset
[]
closed
false
null
0
2020-12-09T13:16:04Z
2020-12-10T16:54:28Z
2020-12-10T16:54:27Z
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1380/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1380/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1380.diff", "html_url": "https://github.com/huggingface/datasets/pull/1380", "merged_at": "2020-12-10T16:54:27Z", "patch_url": "https://github.com/huggingface/datasets/pull/1380.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1380" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/1043
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1043/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1043/comments
https://api.github.com/repos/huggingface/datasets/issues/1043/events
https://github.com/huggingface/datasets/pull/1043
756,100,717
MDExOlB1bGxSZXF1ZXN0NTMxNzAwMDQ1
1,043
Add TSAC: Tunisian Sentiment Analysis Corpus
[]
closed
false
null
0
2020-12-03T11:12:35Z
2020-12-03T13:35:05Z
2020-12-03T13:32:24Z
null
github: https://github.com/fbougares/TSAC paper: https://www.aclweb.org/anthology/W17-1307/
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1043/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1043/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1043.diff", "html_url": "https://github.com/huggingface/datasets/pull/1043", "merged_at": "2020-12-03T13:32:24Z", "patch_url": "https://github.com/huggingface/datasets/pull/1043.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1043" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/5925
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5925/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5925/comments
https://api.github.com/repos/huggingface/datasets/issues/5925/events
https://github.com/huggingface/datasets/issues/5925
1,741,941,436
I_kwDODunzps5n0-q8
5,925
Breaking API change in datasets.list_datasets caused by change in HfApi.list_datasets
[]
closed
false
null
0
2023-06-05T14:46:04Z
2023-06-19T17:22:43Z
2023-06-19T17:22:43Z
null
### Describe the bug Hi all, after an update of the `datasets` library, we observed crashes in our code. We relied on `datasets.list_datasets` returning a `list`. Now, after the API of `HfApi.list_datasets` was changed and it returns a `list` instead of an `Iterable`, the `datasets.list_datasets` now sometimes returns a `list` and sometimes an `Iterable`. It would be helpful to indicate that by the return type of the `datasets.list_datasets` function. Thanks, Martin ### Steps to reproduce the bug Here, the code crashed after we updated the `datasets` library: ```python # list_datasets no longer returns a list, which leads to an error when one tries to slice it for datasets.list_datasets(with_details=True)[:limit]: ... ``` ### Expected behavior It would be helpful to indicate that by the return type of the `datasets.list_datasets` function. ### Environment info Ubuntu 22.04 datasets 2.12.0
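A defensive pattern that works whether the return value is a `list` or a lazy iterable (`limit` is a placeholder):

```python
from itertools import islice

from datasets import list_datasets

limit = 10
# islice never assumes the argument supports slicing
for info in islice(list_datasets(with_details=True), limit):
    ...
```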
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5925/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5925/timeline
null
completed
null
null
false
[]
https://api.github.com/repos/huggingface/datasets/issues/434
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/434/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/434/comments
https://api.github.com/repos/huggingface/datasets/issues/434/events
https://github.com/huggingface/datasets/pull/434
665,477,638
MDExOlB1bGxSZXF1ZXN0NDU2NTM3Njgz
434
Fixed check for pyarrow
[]
closed
false
null
1
2020-07-25T00:16:53Z
2020-07-25T06:36:34Z
2020-07-25T06:36:34Z
null
Fix the check for pyarrow in __init__.py. Previously it would raise an error for pyarrow >= 1.0.0.
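A robust way to write such a check, sketched with `packaging` (the minimum version shown is illustrative):

```python
from packaging import version

import pyarrow

MIN_PYARROW = "0.17.1"  # hypothetical minimum
if version.parse(pyarrow.__version__) < version.parse(MIN_PYARROW):
    raise ImportError(f"pyarrow>={MIN_PYARROW} is required, found {pyarrow.__version__}")
```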
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/434/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/434/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/434.diff", "html_url": "https://github.com/huggingface/datasets/pull/434", "merged_at": "2020-07-25T06:36:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/434.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/434" }
true
[ "Great, thanks!" ]
https://api.github.com/repos/huggingface/datasets/issues/323
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/323/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/323/comments
https://api.github.com/repos/huggingface/datasets/issues/323/events
https://github.com/huggingface/datasets/pull/323
647,521,308
MDExOlB1bGxSZXF1ZXN0NDQxNTMxOTY3
323
Add package path to sys when downloading package as github archive
[]
closed
false
null
2
2020-06-29T16:46:01Z
2020-07-30T14:00:23Z
2020-07-30T14:00:23Z
null
This fixes the `coval.py` metric so that imports within the downloaded module work correctly. We can use a similar trick to add the BLEURT metric (@ankparikh) @thomwolf not sure how you feel about adding to the `PYTHONPATH` from the script. This is the only way I could make it work with my understanding of `importlib` but there might be a more elegant method. This PR fixes https://github.com/huggingface/nlp/issues/305
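The trick in isolation, with a hypothetical download location:

```python
import importlib
import sys

# make imports *within* the downloaded package resolvable by
# putting its directory on the module search path
package_dir = "/tmp/downloaded_modules/coval-master"  # hypothetical path
if package_dir not in sys.path:
    sys.path.append(package_dir)

coval = importlib.import_module("coval")
```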
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/323/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/323/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/323.diff", "html_url": "https://github.com/huggingface/datasets/pull/323", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/323.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/323" }
true
[ "Sorry for the long diff, everything after the imports comes from `black` for code quality :/ ", " I think it's fine and I can't think of another way to make the import work anyways.\r\n\r\nMaybe we can have the `sys.path` behavior inside `prepare_module` instead ? Currently it seems to come out of nowhere in the...
https://api.github.com/repos/huggingface/datasets/issues/5437
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5437/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5437/comments
https://api.github.com/repos/huggingface/datasets/issues/5437/events
https://github.com/huggingface/datasets/issues/5437
1,536,837,144
I_kwDODunzps5bmkYY
5,437
Can't load png dataset with 4 channel (RGBA)
[]
closed
false
null
3
2023-01-17T18:22:27Z
2023-01-18T20:20:15Z
2023-01-18T20:20:15Z
null
I am trying to create a dataset containing about 9000 PNG images, 64x64 in size, all of them 4-channel (RGBA). When I try to use load_dataset(), a dataset is created from only 2 images. I cannot understand what exactly interferes.![Screenshot_20230117_212213.jpg](https://user-images.githubusercontent.com/41611046/212980147-9aa68e30-76e9-4b61-a937-c2fdabd56564.jpg)
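For context, a minimal `imagefolder` load, assuming a class-per-subdirectory layout (Pillow decodes RGBA PNGs, so the channel count itself should not be the blocker):

```python
from datasets import load_dataset

ds = load_dataset("imagefolder", data_dir="./images")  # hypothetical directory
print(ds["train"].num_rows)  # compare against the expected ~9000 files
print(ds["train"].features)
```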
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5437/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5437/timeline
null
completed
null
null
false
[ "Hi! Can you please share the directory structure of your image folder and the `load_dataset` call? We decode images with Pillow, and Pillow supports RGBA PNGs, so this shouldn't be a problem.\r\n\r\n", "> Hi! Can you please share the directory structure of your image folder and the `load_dataset` call? We decode...
https://api.github.com/repos/huggingface/datasets/issues/6009
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6009/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6009/comments
https://api.github.com/repos/huggingface/datasets/issues/6009/events
https://github.com/huggingface/datasets/pull/6009
1,792,059,808
PR_kwDODunzps5U1mus
6,009
Fix cast for dictionaries with no keys
[]
closed
false
null
3
2023-07-06T18:48:14Z
2023-07-07T14:13:00Z
2023-07-07T14:01:13Z
null
Fix #5677
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6009/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6009/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/6009.diff", "html_url": "https://github.com/huggingface/datasets/pull/6009", "merged_at": "2023-07-07T14:01:13Z", "patch_url": "https://github.com/huggingface/datasets/pull/6009.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6009" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
https://api.github.com/repos/huggingface/datasets/issues/685
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/685/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/685/comments
https://api.github.com/repos/huggingface/datasets/issues/685/events
https://github.com/huggingface/datasets/pull/685
711,182,185
MDExOlB1bGxSZXF1ZXN0NDk0ODg1NjIz
685
Add features parameter to CSV
[]
closed
false
null
0
2020-09-29T14:43:36Z
2020-09-30T08:39:56Z
2020-09-30T08:39:54Z
null
Add support for the `features` parameter when loading a csv dataset: ```python from datasets import load_dataset, Features features = Features({...}) csv_dataset = load_dataset("csv", data_files=["path/to/my/file.csv"], features=features) ``` I added tests to make sure that it is also compatible with the caching system Fix #623
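A concrete instance of the new parameter, with a hypothetical two-column schema:

```python
from datasets import ClassLabel, Features, Value, load_dataset

features = Features({
    "text": Value("string"),
    "label": ClassLabel(names=["neg", "pos"]),
})
csv_dataset = load_dataset("csv", data_files=["path/to/my/file.csv"], features=features)
```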
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/685/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/685/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/685.diff", "html_url": "https://github.com/huggingface/datasets/pull/685", "merged_at": "2020-09-30T08:39:54Z", "patch_url": "https://github.com/huggingface/datasets/pull/685.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/685" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/5442
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5442/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5442/comments
https://api.github.com/repos/huggingface/datasets/issues/5442/events
https://github.com/huggingface/datasets/issues/5442
1,550,084,450
I_kwDODunzps5cZGli
5,442
OneDrive Integrations with HF Datasets
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
2
2023-01-19T23:12:08Z
2023-02-24T16:17:51Z
2023-02-24T16:17:51Z
null
### Feature request First of all, I would like to thank the whole community who developed the Datasets storage and made it freely available. How can we integrate our OneDrive account, or any other cloud storage (like Google Drive, ...), with the **HF** datasets section? For example, if I have **50GB** on my **OneDrive** account, I want to move data between the drive and a Hugging Face repo, or vice versa. ### Motivation Make the dataset section more flexible with other storage options, like the integration between Google Colab and Google Drive storage. ### Your contribution This could be done using the Hugging Face CLI.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5442/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5442/timeline
null
completed
null
null
false
[ "Hi! \r\n\r\nWe use [`fsspec`](https://github.com/fsspec/filesystem_spec) to integrate with storage providers. You can find more info (and the usage examples) in [our docs](https://huggingface.co/docs/datasets/v2.8.0/filesystems#download-and-prepare-a-dataset-into-a-cloud-storage).\r\n\r\n[`gdrivefs`](https://githu...
https://api.github.com/repos/huggingface/datasets/issues/746
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/746/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/746/comments
https://api.github.com/repos/huggingface/datasets/issues/746/events
https://github.com/huggingface/datasets/pull/746
725,627,235
MDExOlB1bGxSZXF1ZXN0NTA2ODMzNDMw
746
dataset(ngt): add ngt dataset initial loading script
[]
closed
false
null
0
2020-10-20T14:04:58Z
2021-03-23T06:19:38Z
2021-03-23T06:19:38Z
null
Currently this only makes the paths to the annotation ELAN (eaf) files and videos available. This is the first accessible way to download this dataset that is not manual, file-by-file. Only the necessary files are downloaded: the annotation files are very small (20MB for all of them), but the video files are large (100GB in total), saved in `mpg` format. I do not intend to actually store these as an uncompressed array of frames, because it would be huge. Future updates may add pose estimation files for all videos, making it easier to work with this data.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/746/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/746/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/746.diff", "html_url": "https://github.com/huggingface/datasets/pull/746", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/746.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/746" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/4748
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4748/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4748/comments
https://api.github.com/repos/huggingface/datasets/issues/4748/events
https://github.com/huggingface/datasets/pull/4748
1,318,874,913
PR_kwDODunzps48JTEb
4,748
Add image classification processing guide
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
closed
false
null
1
2022-07-27T00:11:11Z
2022-07-27T17:28:21Z
2022-07-27T17:16:12Z
null
This PR follows up on #4710 to separate the object detection and image classification guides. It expands a little more on the original guide to include a more complete example of loading and transforming a whole dataset.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4748/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4748/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4748.diff", "html_url": "https://github.com/huggingface/datasets/pull/4748", "merged_at": "2022-07-27T17:16:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/4748.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4748" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/1994
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1994/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1994/comments
https://api.github.com/repos/huggingface/datasets/issues/1994/events
https://github.com/huggingface/datasets/issues/1994
822,871,238
MDU6SXNzdWU4MjI4NzEyMzg=
1,994
not being able to get wikipedia es language
[]
open
false
null
8
2021-03-05T08:31:48Z
2021-03-11T20:46:21Z
null
null
Hi I am trying to run a code with wikipedia of config 20200501.es, getting: Traceback (most recent call last): File "run_mlm_t5.py", line 608, in <module> main() File "run_mlm_t5.py", line 359, in main datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name) File "/dara/libs/anaconda3/envs/success432/lib/python3.7/site-packages/datasets-1.2.1-py3.7.egg/datasets/load.py", line 612, in load_dataset ignore_verifications=ignore_verifications, File "/dara/libs/anaconda3/envs/success432/lib/python3.7/site-packages/datasets-1.2.1-py3.7.egg/datasets/builder.py", line 527, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/dara/libs/anaconda3/envs/success432/lib/python3.7/site-packages/datasets-1.2.1-py3.7.egg/datasets/builder.py", line 1050, in _download_and_prepare "\n\t`{}`".format(usage_example) datasets.builder.MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/ If you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory). Example of usage: `load_dataset('wikipedia', '20200501.es', beam_runner='DirectRunner')` thanks @lhoestq for any suggestion/help
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1994/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1994/timeline
null
null
null
null
false
[ "@lhoestq I really appreciate if you could help me providiing processed datasets, I do not really have access to enough resources to run the apache-beam and need to run the codes on these datasets. Only en/de/fr currently works, but I need all the languages more or less. thanks ", "Hi @dorost1234, I think I can ...
https://api.github.com/repos/huggingface/datasets/issues/2979
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2979/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2979/comments
https://api.github.com/repos/huggingface/datasets/issues/2979/events
https://github.com/huggingface/datasets/issues/2979
1,009,634,147
I_kwDODunzps48Lctj
2,979
ValueError when computing f1 metric with average None
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
1
2021-09-28T11:34:53Z
2021-10-01T14:17:38Z
2021-10-01T14:17:38Z
null
## Describe the bug When I try to compute the f1 score for each class in a multiclass classification problem, I get a ValueError. The same happens with recall and precision. I traced the error to the `.item()` in these scripts, which is probably there for the other averages. E.g. from f1.py: ```python return { "f1": f1_score( references, predictions, labels=labels, pos_label=pos_label, average=average, sample_weight=sample_weight, ).item(), } ``` Since the result is an array with more than one item, the `.item()` throws the error. I didn't submit a PR because this might be needed for the other averages, I'm not very familiar with the library ## Steps to reproduce the bug ```python from datasets import load_metric metric = load_metric("f1") metric.add_batch(predictions=[2,34,1,34,1,2,3], references=[23,52,1,3,523,5,8]) metric.compute(average=None) ``` ## Expected results `array([0.66666667, 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ])` ## Actual results ValueError: can only convert an array of size 1 to a Python scalar ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.12.1 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.9.5 - PyArrow version: 5.0.0
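A sketch of the fix's idea: only call `.item()` when the result is a scalar.

```python
import numpy as np
from sklearn.metrics import f1_score

score = f1_score([23, 52, 1, 3, 523, 5, 8], [2, 34, 1, 34, 1, 2, 3], average=None)
# average=None yields one value per class, so .item() would raise ValueError
result = score.item() if isinstance(score, np.ndarray) and score.size == 1 else score.tolist()
```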
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2979/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2979/timeline
null
completed
null
null
false
[ "Hi @asofiaoliveira, thanks for reporting.\r\n\r\nI'm fixing it." ]
https://api.github.com/repos/huggingface/datasets/issues/1784
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1784/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1784/comments
https://api.github.com/repos/huggingface/datasets/issues/1784/events
https://github.com/huggingface/datasets/issues/1784
794,659,174
MDU6SXNzdWU3OTQ2NTkxNzQ=
1,784
JSONDecodeError on JSON with multiple lines
[]
closed
false
null
2
2021-01-27T00:19:22Z
2021-01-31T08:47:18Z
2021-01-31T08:47:18Z
null
Hello :), I have been trying to load data using a JSON file. Based on the [docs](https://huggingface.co/docs/datasets/loading_datasets.html#json-files), the following format is supported: ```json {"key1":11, "key2":12, "key3":13} {"key1":21, "key2":22, "key3":23} ``` But, when I try loading a dataset with the same format, I get a JSONDecodeError : `JSONDecodeError: Extra data: line 2 column 1 (char 7142)`. Now, this is expected when using `json` to load a JSON file. But I was wondering if there are any special arguments to pass when using `load_dataset` as the docs suggest that this format is supported. When I convert the JSON file to a list of dictionaries format, I get AttributeError: `AttributeError: 'list' object has no attribute 'keys'`. So, I can't convert them to list of dictionaries either. Please let me know :) Thanks, Gunjan
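For completeness, a minimal load of a JSON Lines file (the filename is hypothetical):

```python
from datasets import load_dataset

# each line is one JSON object; load_dataset("json", ...) handles this natively,
# so there is no need to parse the file with the json module first
ds = load_dataset("json", data_files="data.jsonl")
```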
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1784/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1784/timeline
null
completed
null
null
false
[ "Hi !\r\n\r\nThe `json` dataset script does support this format. For example loading a dataset with this format works on my side:\r\n```json\r\n{\"key1\":11, \"key2\":12, \"key3\":13}\r\n{\"key1\":21, \"key2\":22, \"key3\":23}\r\n```\r\n\r\nCan you show the full stacktrace please ? Also which version of datasets an...
https://api.github.com/repos/huggingface/datasets/issues/4007
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4007/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4007/comments
https://api.github.com/repos/huggingface/datasets/issues/4007/events
https://github.com/huggingface/datasets/issues/4007
1,179,381,021
I_kwDODunzps5GS-0d
4,007
set_format does not work with multi dimension tensor
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
4
2022-03-24T11:27:43Z
2022-03-30T07:28:57Z
2022-03-24T14:39:29Z
null
## Describe the bug set_format only transforms the last dimension of a multi-dimension list to tensor ## Steps to reproduce the bug ```python import torch from datasets import Dataset ds = Dataset.from_dict({"A": [torch.rand((2, 2))]}) # ds = Dataset.from_dict({"A": [np.random.rand(2, 2)]}) # => same result ds = ds.with_format("torch") print(ds[0]) ``` ## Expected results ``` {'A': [tensor([[0.6689, 0.1516], [0.1403, 0.5567]])]} ``` ## Actual results ``` {'A': [tensor([0.6689, 0.1516]), tensor([0.1403, 0.5567])]} ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - datasets version: 2.0.0 - Platform: Mac OSX - Python version: 3.8.12 - PyArrow version: 7.0.0
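The resolution from the thread, restated as a runnable sketch: declare the column as `Array2D` so the torch format keeps the full shape.

```python
from datasets import Array2D, Dataset, Features

ds = Dataset.from_dict(
    {"A": [[[0.1, 0.2], [0.3, 0.4]]]},
    features=Features({"A": Array2D(shape=(2, 2), dtype="float32")}),
)
ds = ds.with_format("torch")
print(ds[0]["A"])  # a single 2x2 tensor, not a list of 1-D tensors
```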
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4007/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4007/timeline
null
completed
null
null
false
[ "Hi! Use the `ArrayXD` feature type (where X is the number of dimensions) to get correctly formated tensors. So in your case, define the dataset as follows :\r\n```python\r\nds = Dataset.from_dict({\"A\": [torch.rand((2, 2))]}, features=Features({\"A\": Array2D(shape=(2, 2), dtype=\"float32\")}))\r\n```\r\n", "Hi...
https://api.github.com/repos/huggingface/datasets/issues/274
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/274/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/274/comments
https://api.github.com/repos/huggingface/datasets/issues/274/events
https://github.com/huggingface/datasets/issues/274
639,156,625
MDU6SXNzdWU2MzkxNTY2MjU=
274
PG-19
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
4
2020-06-15T21:02:26Z
2020-07-06T15:35:02Z
2020-07-06T15:35:02Z
null
Hi, and thanks for all your open-source work, as always! I was wondering if you would be open to adding PG-19 to your collection of datasets. https://github.com/deepmind/pg19 It is often used for benchmarking long-range language modeling.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/274/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/274/timeline
null
completed
null
null
false
[ "Sounds good! Do you want to give it a try?", "Ok, I'll see if I can figure it out tomorrow!", "Got around to this today, and so far so good, I'm able to download and load pg19 locally. However, I think there may be an issue with the dummy data, and testing in general.\r\n\r\nThe problem lies in the fact that e...
https://api.github.com/repos/huggingface/datasets/issues/1029
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1029/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1029/comments
https://api.github.com/repos/huggingface/datasets/issues/1029/events
https://github.com/huggingface/datasets/pull/1029
755,767,616
MDExOlB1bGxSZXF1ZXN0NTMxNDE2NzE4
1,029
Add PEC
[]
closed
false
null
5
2020-12-03T02:46:08Z
2020-12-04T10:58:19Z
2020-12-03T16:15:06Z
null
A persona-based empathetic conversation dataset.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1029/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1029/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1029.diff", "html_url": "https://github.com/huggingface/datasets/pull/1029", "merged_at": "2020-12-03T16:15:06Z", "patch_url": "https://github.com/huggingface/datasets/pull/1029.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1029" }
true
[ "I'm a bit frustrated now to get this right.", "Hey @zhongpeixiang!\r\nReally nice addition here!\r\n\r\nDid you officially joined the sprint by posting [on the forum thread](https://discuss.huggingface.co/t/open-to-the-community-one-week-team-effort-to-reach-v2-0-of-hf-datasets-library/2176) and joining our slac...
https://api.github.com/repos/huggingface/datasets/issues/3825
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3825/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3825/comments
https://api.github.com/repos/huggingface/datasets/issues/3825/events
https://github.com/huggingface/datasets/pull/3825
1,159,802,345
PR_kwDODunzps4z9p4b
3,825
Update version and date in Wikipedia dataset
[]
closed
false
null
1
2022-03-04T16:05:27Z
2022-03-04T17:24:37Z
2022-03-04T17:24:36Z
null
CC: @geohci
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3825/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3825/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3825.diff", "html_url": "https://github.com/huggingface/datasets/pull/3825", "merged_at": "2022-03-04T17:24:36Z", "patch_url": "https://github.com/huggingface/datasets/pull/3825.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3825" }
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3825). All of your documentation changes will be reflected on that endpoint." ]
https://api.github.com/repos/huggingface/datasets/issues/2705
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2705/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2705/comments
https://api.github.com/repos/huggingface/datasets/issues/2705/events
https://github.com/huggingface/datasets/issues/2705
950,488,583
MDU6SXNzdWU5NTA0ODg1ODM=
2,705
404 not found error on loading WIKIANN dataset
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
1
2021-07-22T09:55:50Z
2021-07-23T08:07:32Z
2021-07-23T08:07:32Z
null
## Describe the bug Unable to retrieve the wikiann English dataset ## Steps to reproduce the bug ```python from datasets import list_datasets, load_dataset, list_metrics, load_metric WIKIANN = load_dataset("wikiann", "en") ``` ## Expected results The Colab notebook should display a successful download status ## Actual results FileNotFoundError: Couldn't find file at https://www.dropbox.com/s/12h3qqog6q4bjve/panx_dataset.tar?dl=1 ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.10.1 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.11 - PyArrow version: 3.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2705/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2705/timeline
null
completed
null
null
false
[ "Hi @ronbutan, thanks for reporting.\r\n\r\nYou are right: we have recently found that the link to the original PAN-X dataset (also called WikiANN), hosted at Dropbox, is no longer working.\r\n\r\nWe have opened an issue in the GitHub repository of the original dataset (afshinrahimi/mmner#4) and we have also contac...
https://api.github.com/repos/huggingface/datasets/issues/5611
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5611/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5611/comments
https://api.github.com/repos/huggingface/datasets/issues/5611/events
https://github.com/huggingface/datasets/pull/5611
1,611,197,906
PR_kwDODunzps5LW2Lx
5,611
add Dataset.to_list
[]
closed
false
null
3
2023-03-06T11:21:57Z
2023-03-27T13:34:19Z
2023-03-27T13:26:38Z
null
close https://github.com/huggingface/datasets/issues/5606 This PR adds the `Dataset.to_list` method. Thank you in advance.
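A brief usage sketch of the added method (assuming the merged API; the row-dict output format is inferred from the linked issue):

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2], "b": ["x", "y"]})
# to_list returns the rows as a list of plain Python dicts.
print(ds.to_list())  # [{'a': 1, 'b': 'x'}, {'a': 2, 'b': 'y'}]
```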
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5611/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5611/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5611.diff", "html_url": "https://github.com/huggingface/datasets/pull/5611", "merged_at": "2023-03-27T13:26:38Z", "patch_url": "https://github.com/huggingface/datasets/pull/5611.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5611" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hi, thanks for working on this! `Table.to_pylist` requires PyArrow 7.0+, and our minimal version requirement is 6.0, so we need to bump the version requirement to avoid CI failure. I'll do this in a separate PR.", "<details>\n<summ...
https://api.github.com/repos/huggingface/datasets/issues/1
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1/comments
https://api.github.com/repos/huggingface/datasets/issues/1/events
https://github.com/huggingface/datasets/pull/1
599,457,467
MDExOlB1bGxSZXF1ZXN0NDAzMDk1NDYw
1
changing nlp.bool to nlp.bool_
[]
closed
false
null
0
2020-04-14T10:18:02Z
2022-10-04T09:31:40Z
2020-04-14T12:01:40Z
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1.diff", "html_url": "https://github.com/huggingface/datasets/pull/1", "merged_at": "2020-04-14T12:01:40Z", "patch_url": "https://github.com/huggingface/datasets/pull/1.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/5256
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5256/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5256/comments
https://api.github.com/repos/huggingface/datasets/issues/5256/events
https://github.com/huggingface/datasets/pull/5256
1,452,652,586
PR_kwDODunzps5DFDY0
5,256
fix wrong print
[]
closed
false
null
0
2022-11-17T03:54:26Z
2022-11-18T11:05:32Z
2022-11-18T11:05:32Z
null
Print `encoded_dataset.column_names`, not `dataset.column_names`.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5256/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5256/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5256.diff", "html_url": "https://github.com/huggingface/datasets/pull/5256", "merged_at": "2022-11-18T11:05:32Z", "patch_url": "https://github.com/huggingface/datasets/pull/5256.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5256" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/759
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/759/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/759/comments
https://api.github.com/repos/huggingface/datasets/issues/759/events
https://github.com/huggingface/datasets/issues/759
729,046,916
MDU6SXNzdWU3MjkwNDY5MTY=
759
(Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py
[]
closed
false
null
15
2020-10-25T15:34:57Z
2021-08-04T18:10:09Z
2021-08-04T18:10:09Z
null
Hey, I want to load the cnn_dailymail dataset for fine-tuning. I wrote the code like this: from datasets import load_dataset test_dataset = load_dataset("cnn_dailymail", "3.0.0", split="train") And I got the following errors: Traceback (most recent call last): File "test.py", line 7, in test_dataset = load_dataset("cnn_dailymail", "3.0.0", split="test") File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\load.py", line 589, in load_dataset module_path, hash = prepare_module( File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\load.py", line 268, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\utils\file_utils.py", line 300, in cached_path output_path = get_from_cache( File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\utils\file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py How can I fix this?
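A quick connectivity check along the lines suggested in the comments below — a 200 status code means the loading-script URL itself is reachable from the machine:

```python
import requests

url = "https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py"
# A 200 response means the machine can reach the raw script on GitHub;
# anything else points at a local network/proxy problem.
print(requests.head(url).status_code)
```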
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/759/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/759/timeline
null
completed
null
null
false
[ "Are you running the script on a machine with an internet connection ?", "Yes , I can browse the url through Google Chrome.", "Does this HEAD request return 200 on your machine ?\r\n```python\r\nimport requests ...
https://api.github.com/repos/huggingface/datasets/issues/1073
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1073/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1073/comments
https://api.github.com/repos/huggingface/datasets/issues/1073/events
https://github.com/huggingface/datasets/pull/1073
756,468,034
MDExOlB1bGxSZXF1ZXN0NTMyMDA4NjIw
1,073
Add DialogRE dataset
[]
closed
false
null
0
2020-12-03T18:56:40Z
2020-12-20T13:34:48Z
2020-12-04T13:41:51Z
null
Adding version 2 of the [DialogRE](https://github.com/nlpdata/dialogre) dataset. - All tests passed successfully.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1073/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1073/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1073.diff", "html_url": "https://github.com/huggingface/datasets/pull/1073", "merged_at": "2020-12-04T13:41:51Z", "patch_url": "https://github.com/huggingface/datasets/pull/1073.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1073" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/3473
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3473/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3473/comments
https://api.github.com/repos/huggingface/datasets/issues/3473/events
https://github.com/huggingface/datasets/issues/3473
1,086,937,610
I_kwDODunzps5AyVoK
3,473
Iterating over a vision dataset doesn't decode the images
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "bfdadc", "default": false, "descrip...
closed
false
null
9
2021-12-22T15:26:32Z
2021-12-27T14:13:21Z
2021-12-23T15:21:57Z
null
## Describe the bug If I load `mnist` and I iterate over the dataset, the images are not decoded, and the dictionary with the bytes is returned. ## Steps to reproduce the bug ```python from datasets import load_dataset import PIL mnist = load_dataset("mnist", split="train") first_image = mnist[0]["image"] assert isinstance(first_image, PIL.PngImagePlugin.PngImageFile) # passes first_image = next(iter(mnist))["image"] assert isinstance(first_image, PIL.PngImagePlugin.PngImageFile) # fails ``` ## Expected results The image should be decoded, as a PIL Image ## Actual results We get a dictionary ``` {'bytes': b'\x89PNG\r\n\x1a\n\x00..., 'path': None} ``` ## Environment info - `datasets` version: 1.17.1.dev0 - Platform: Darwin-20.6.0-x86_64-i386-64bit - Python version: 3.7.2 - PyArrow version: 6.0.0 The bug also exists in 1.17.0 ## Investigation I think the issue is that decoding is disabled in `__iter__`: https://github.com/huggingface/datasets/blob/dfe5b73387c5e27de6a16b0caeb39d3b9ded66d6/src/datasets/arrow_dataset.py#L1651-L1661 Do you remember why it was disabled in the first place @albertvillanova ? Also cc @mariosasko @NielsRogge
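A hedged workaround sketch while iteration returns raw bytes: the returned dict carries the PNG payload, which PIL can open directly (the guard keeps this working on fixed versions where the image is already decoded):

```python
import io

import PIL.Image
from datasets import load_dataset

mnist = load_dataset("mnist", split="train")
image = next(iter(mnist))["image"]
if isinstance(image, dict):  # decoding was skipped: we got {'bytes': ..., 'path': ...}
    image = PIL.Image.open(io.BytesIO(image["bytes"]))
assert isinstance(image, PIL.Image.Image)
```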
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3473/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3473/timeline
null
completed
null
null
false
[ "As discussed, I remember I set `decoded=False` here to avoid decoding just by iterating over examples of dataset. We wanted to decode only if the \"audio\" field (for Audio feature) was accessed.", "> I set decoded=False here to avoid decoding just by iterating over examples of dataset. We wanted to decode only ...
https://api.github.com/repos/huggingface/datasets/issues/1119
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1119/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1119/comments
https://api.github.com/repos/huggingface/datasets/issues/1119/events
https://github.com/huggingface/datasets/pull/1119
757,156,781
MDExOlB1bGxSZXF1ZXN0NTMyNTc5ODA5
1,119
Add Google Great Code Dataset
[]
closed
false
null
0
2020-12-04T14:46:28Z
2020-12-06T17:33:14Z
2020-12-06T17:33:13Z
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1119/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1119/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/1119.diff", "html_url": "https://github.com/huggingface/datasets/pull/1119", "merged_at": "2020-12-06T17:33:13Z", "patch_url": "https://github.com/huggingface/datasets/pull/1119.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1119" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/5020
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5020/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5020/comments
https://api.github.com/repos/huggingface/datasets/issues/5020/events
https://github.com/huggingface/datasets/pull/5020
1,384,684,078
PR_kwDODunzps4_istJ
5,020
Fix URLs of sbu_captions dataset
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
closed
false
null
1
2022-09-24T14:00:33Z
2022-09-28T07:20:20Z
2022-09-28T07:18:23Z
null
The previous URLs are no longer reachable; the server now returns a 403 error page: Forbidden You don't have permission to access /~vicente/sbucaptions/sbu-captions-all.tar.gz on this server. Additionally, a 403 Forbidden error was encountered while trying to use an ErrorDocument to handle the request. Apache/2.4.6 (Red Hat Enterprise Linux) OpenSSL/1.0.2k-fips PHP/5.4.16 mod_fcgid/2.3.9 mod_wsgi/3.4 Python/2.7.5 mod_perl/2.0.11 Perl/v5.16.3 Server at [www.cs.virginia.edu](mailto:csroot@virginia.edu) Port 443
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5020/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5020/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5020.diff", "html_url": "https://github.com/huggingface/datasets/pull/5020", "merged_at": "2022-09-28T07:18:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/5020.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5020" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/3934
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3934/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3934/comments
https://api.github.com/repos/huggingface/datasets/issues/3934/events
https://github.com/huggingface/datasets/pull/3934
1,170,292,492
PR_kwDODunzps40ftiC
3,934
Create MAUVE metric card
[]
closed
false
null
1
2022-03-15T21:36:07Z
2022-03-18T17:38:14Z
2022-03-18T17:34:13Z
null
Proposing a MAUVE metric card
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3934/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3934/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/3934.diff", "html_url": "https://github.com/huggingface/datasets/pull/3934", "merged_at": "2022-03-18T17:34:13Z", "patch_url": "https://github.com/huggingface/datasets/pull/3934.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3934" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/867
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/867/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/867/comments
https://api.github.com/repos/huggingface/datasets/issues/867/events
https://github.com/huggingface/datasets/pull/867
745,773,955
MDExOlB1bGxSZXF1ZXN0NTIzMjc4MjI4
867
Fix some metrics feature types
[]
closed
false
null
0
2020-11-18T15:46:11Z
2020-11-19T17:35:58Z
2020-11-19T17:35:57Z
null
Replace the `int` feature type with `int32`, since `int` is not a pyarrow dtype, in these metrics: - accuracy - precision - recall - f1 I also added the sklearn citation and used keyword arguments to remove future warnings.
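A minimal illustration of the distinction behind this fix (the feature names are illustrative, not the exact metric schemas): `"int"` is not a valid pyarrow dtype string, while `"int32"` is:

```python
from datasets import Features, Value

# Value("int") would raise, since "int" is not a pyarrow dtype string.
features = Features({"predictions": Value("int32"), "references": Value("int32")})
print(features["predictions"].dtype)  # int32
```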
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/867/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/867/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/867.diff", "html_url": "https://github.com/huggingface/datasets/pull/867", "merged_at": "2020-11-19T17:35:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/867.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/867" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/4971
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4971/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4971/comments
https://api.github.com/repos/huggingface/datasets/issues/4971/events
https://github.com/huggingface/datasets/pull/4971
1,370,319,516
PR_kwDODunzps4-zk3g
4,971
Preserve non-`input_colums` in `Dataset.map` if `input_columns` are specified
[]
closed
false
null
1
2022-09-12T18:08:24Z
2022-09-13T13:51:08Z
2022-09-13T13:48:45Z
null
Currently, if the `input_columns` list in `Dataset.map` is specified, the columns not in that list are dropped after the `map` transform. This makes the behavior inconsistent with `IterableDataset.map`. (It seems this issue was introduced by mistake in https://github.com/huggingface/datasets/pull/2246) Fix https://github.com/huggingface/datasets/issues/4858
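A small sketch of the expected behavior after this fix (column names are illustrative): columns outside `input_columns` should survive the transform, as they already do for `IterableDataset.map`:

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2], "b": ["x", "y"]})
# The mapped function only receives column "a"; column "b" should be kept.
mapped = ds.map(lambda a: {"a_plus_one": a + 1}, input_columns=["a"])
print(mapped.column_names)  # expected: ['a', 'b', 'a_plus_one']
```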
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4971/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4971/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/4971.diff", "html_url": "https://github.com/huggingface/datasets/pull/4971", "merged_at": "2022-09-13T13:48:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/4971.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4971" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/6068
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6068/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6068/comments
https://api.github.com/repos/huggingface/datasets/issues/6068/events
https://github.com/huggingface/datasets/pull/6068
1,820,106,952
PR_kwDODunzps5WUkZi
6,068
fix tqdm lock deletion
[]
closed
false
null
5
2023-07-25T11:17:25Z
2023-07-25T15:29:39Z
2023-07-25T15:17:50Z
null
related to https://github.com/huggingface/datasets/issues/6066
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6068/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6068/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/6068.diff", "html_url": "https://github.com/huggingface/datasets/pull/6068", "merged_at": "2023-07-25T15:17:50Z", "patch_url": "https://github.com/huggingface/datasets/pull/6068.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6068" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
https://api.github.com/repos/huggingface/datasets/issues/850
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/850/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/850/comments
https://api.github.com/repos/huggingface/datasets/issues/850/events
https://github.com/huggingface/datasets/pull/850
742,369,419
MDExOlB1bGxSZXF1ZXN0NTIwNTE0MDY3
850
Create ClassLabel for labelling tasks datasets
[]
closed
false
null
1
2020-11-13T11:07:22Z
2020-11-16T10:32:05Z
2020-11-16T10:31:58Z
null
This PR adds a specific `ClassLabel` for datasets that involve labelling tasks such as POS, NER, or chunking.
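A hedged sketch of how such token-level labels are commonly declared with features (the tag names are illustrative, not the exact schema added in this PR):

```python
from datasets import ClassLabel, Features, Sequence, Value

# One ClassLabel per token, wrapped in a Sequence for token-level tagging tasks.
features = Features(
    {
        "tokens": Sequence(Value("string")),
        "ner_tags": Sequence(ClassLabel(names=["O", "B-PER", "I-PER", "B-LOC", "I-LOC"])),
    }
)
print(features["ner_tags"].feature.int2str(1))  # B-PER
```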
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/850/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/850/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/850.diff", "html_url": "https://github.com/huggingface/datasets/pull/850", "merged_at": "2020-11-16T10:31:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/850.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/850" }
true
[ "@lhoestq Better?" ]
https://api.github.com/repos/huggingface/datasets/issues/1337
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1337/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1337/comments
https://api.github.com/repos/huggingface/datasets/issues/1337/events
https://github.com/huggingface/datasets/pull/1337
759,710,482
MDExOlB1bGxSZXF1ZXN0NTM0NjY3NDUz
1,337
Add spanish billion words
[]
closed
false
null
1
2020-12-08T19:18:02Z
2020-12-08T22:59:38Z
2020-12-08T21:15:27Z
null
Add an unannotated Spanish-language corpus of nearly 1.5 billion words, compiled from different resources from the web. The dataset needs 10 GB (download: 1.89 GiB, generated: 8.34 GiB, post-processed: Unknown size, total: 10.22 GiB). The tests using dummy data pass, but my laptop isn't able to run them on the real data (I left it running for over 8 hours and it didn't finish).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1337/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1337/timeline
null
null
true
{ "diff_url": "https://github.com/huggingface/datasets/pull/1337.diff", "html_url": "https://github.com/huggingface/datasets/pull/1337", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1337.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1337" }
true
[ "The tests failed because of ```RemoteDatasetTest``` so I tried ```git rebase``` and messed everything up. I've made a new clean PR (#1347)." ]
https://api.github.com/repos/huggingface/datasets/issues/5471
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5471/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5471/comments
https://api.github.com/repos/huggingface/datasets/issues/5471/events
https://github.com/huggingface/datasets/pull/5471
1,558,557,545
PR_kwDODunzps5InPA7
5,471
Add num_test_batches option
[]
closed
false
null
4
2023-01-26T18:09:40Z
2023-01-27T18:16:45Z
2023-01-27T18:08:36Z
null
`to_tf_dataset` calls can be very costly because of the number of test batches drawn during `_get_output_signature`. The test batches are drawn in order to estimate the shapes when creating the TensorFlow dataset. This is necessary when the shapes can be irregular, but not in cases when the tensor shapes are the same across all samples. This PR adds an option to change the number of batches drawn, so the user can speed this conversion up. Running the following while varying `num_test_batches` gives: ``` import time from datasets import load_dataset from transformers import DefaultDataCollator data_collator = DefaultDataCollator() dataset = load_dataset("beans") dataset = dataset["train"].with_format("np") start = time.time() dataset = dataset.to_tf_dataset( columns=["image"], label_cols=["label"], batch_size=8, collate_fn=data_collator, num_test_batches=NUM_TEST_BATCHES, ) end = time.time() print(end - start) ``` NUM_TEST_BATCHES=200: 0.8197s NUM_TEST_BATCHES=50: 0.3070s NUM_TEST_BATCHES=2: 0.1417s NUM_TEST_BATCHES=1: 0.1352s
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5471/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5471/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/5471.diff", "html_url": "https://github.com/huggingface/datasets/pull/5471", "merged_at": "2023-01-27T18:08:36Z", "patch_url": "https://github.com/huggingface/datasets/pull/5471.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5471" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "I thought this issue was resolved in my parallel `to_tf_dataset` PR! I changed the default `num_test_batches` in `_get_output_signature` to 20 and used a test batch size of 1 to maximize variance to detect shorter samples. I think it...
https://api.github.com/repos/huggingface/datasets/issues/2126
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2126/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2126/comments
https://api.github.com/repos/huggingface/datasets/issues/2126/events
https://github.com/huggingface/datasets/pull/2126
842,779,966
MDExOlB1bGxSZXF1ZXN0NjAyMjcyMjg4
2,126
Replace legacy torch.Tensor constructor with torch.tensor
[]
closed
false
null
0
2021-03-28T16:57:30Z
2021-03-29T09:27:14Z
2021-03-29T09:27:13Z
null
The title says it all (motivated by [this issue](https://github.com/pytorch/pytorch/issues/53146) in the pytorch repo).
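A minimal illustration of why the legacy constructor is discouraged: `torch.Tensor(2, 3)` treats its arguments as a shape and returns uninitialized floats, while `torch.tensor([2, 3])` copies the data and infers the dtype:

```python
import torch

legacy = torch.Tensor(2, 3)        # uninitialized 2x3 float32 tensor
print(legacy.shape, legacy.dtype)  # torch.Size([2, 3]) torch.float32

data = torch.tensor([2, 3])        # 1D tensor holding the values 2 and 3
print(data, data.dtype)            # tensor([2, 3]) torch.int64
```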
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2126/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2126/timeline
null
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/2126.diff", "html_url": "https://github.com/huggingface/datasets/pull/2126", "merged_at": "2021-03-29T09:27:13Z", "patch_url": "https://github.com/huggingface/datasets/pull/2126.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2126" }
true
[]