Schema: title (string, 1–290 chars) · body (string, 0–228k chars, nullable) · html_url (string, 46–51 chars) · comments (list) · pull_request (dict, nullable) · number (int64, 1–5.59k) · is_pull_request (bool)
[tiny] fix typo in stream docs
null
https://github.com/huggingface/datasets/pull/3246
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3246", "html_url": "https://github.com/huggingface/datasets/pull/3246", "diff_url": "https://github.com/huggingface/datasets/pull/3246.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3246.patch", "merged_at": "2021-11-10T11:10:39" }
3,246
true
Fix load_from_disk temporary directory
`load_from_disk` uses `tempfile.TemporaryDirectory()` instead of our `get_temporary_cache_files_directory()` function. This can cause the temporary directory to be deleted before the dataset object is garbage collected. In practice, it prevents anyone from using methods like `shuffle` on a dataset loaded this way, because it can't write the shuffled indices in a directory that doesn't exist anymore. In this PR I switch to using `get_temporary_cache_files_directory()` and I update the tests. cc @mariosasko since you worked on `get_temporary_cache_files_directory()`
https://github.com/huggingface/datasets/pull/3245
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3245", "html_url": "https://github.com/huggingface/datasets/pull/3245", "diff_url": "https://github.com/huggingface/datasets/pull/3245.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3245.patch", "merged_at": "2021-11-09T15:30:51" }
3,245
true
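A minimal sketch of the failure mode behind #3245 (a hypothetical repro, not the library's code): `tempfile.TemporaryDirectory` removes its directory as soon as the wrapper object is garbage collected, so anything that kept only the path is left pointing at nothing.

```python
import os
import tempfile

tmp = tempfile.TemporaryDirectory()
path = tmp.name                # a dataset object would keep only this path
del tmp                        # wrapper collected -> directory removed
print(os.path.exists(path))    # False: writing shuffle indices here now fails
```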
Fix filter method for batched=True
null
https://github.com/huggingface/datasets/pull/3244
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3244", "html_url": "https://github.com/huggingface/datasets/pull/3244", "diff_url": "https://github.com/huggingface/datasets/pull/3244.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3244.patch", "merged_at": "2021-11-09T15:52:57" }
3,244
true
Remove redundant isort module placement
`isort` can place modules by itself from [version 5.0.0](https://pycqa.github.io/isort/docs/upgrade_guides/5.0.0.html#module-placement-changes-known_third_party-known_first_party-default_section-etc) onwards, making the `known_first_party` and `known_third_party` fields in `setup.cfg` redundant (this is why our CI works, even though we haven't touched these options in a while).
https://github.com/huggingface/datasets/pull/3243
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3243", "html_url": "https://github.com/huggingface/datasets/pull/3243", "diff_url": "https://github.com/huggingface/datasets/pull/3243.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3243.patch", "merged_at": "2021-11-12T14:02:45" }
3,243
true
Adding ANERcorp-CAMeLLab dataset
null
https://github.com/huggingface/datasets/issues/3242
[ "Adding ANERcorp dataset\r\n\r\n## Adding a Dataset\r\n- **Name:** *ANERcorp-CAMeLLab*\r\n- **Description:** *Since its creation in 2008, the ANERcorp dataset (Benajiba & Rosso, 2008) has been a standard reference used by Arabic named entity recognition researchers around the world. However, over time, this dataset...
null
3,242
false
Swap descriptions of v1 and raw-v1 configs of WikiText dataset and fix metadata
Fix #3237, fix #795.
https://github.com/huggingface/datasets/pull/3241
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3241", "html_url": "https://github.com/huggingface/datasets/pull/3241", "diff_url": "https://github.com/huggingface/datasets/pull/3241.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3241.patch", "merged_at": "2021-11-09T13:49:28" }
3,241
true
Couldn't reach data file for disaster_response_messages
## Describe the bug The following command gives a ConnectionError. ## Steps to reproduce the bug ```python disaster = load_dataset('disaster_response_messages') ``` ## Error ``` ConnectionError: Couldn't reach https://datasets.appen.com/appen_datasets/disaster_response_data/disaster_response_messages_training.csv ``` ## Expected results It should load the dataset without an error. ## Environment info - `datasets` version: - Platform: Google Colab - Python version: 3.7 - PyArrow version:
https://github.com/huggingface/datasets/issues/3240
[ "It looks like the dataset isn't available anymore on appen.com\r\n\r\nThe CSV files appear to still be available at https://www.kaggle.com/landlord/multilingual-disaster-response-messages though. It says that the data are under the CC0 license so I guess we can host the dataset elsewhere instead ?" ]
null
3,240
false
Inconsistent performance of the "arabic_billion_words" dataset
## Describe the bug When downloaded from machine 1, the dataset is downloaded and parsed correctly. When downloaded from machine 2 (which has a different cache directory), the following script: import datasets from datasets import load_dataset raw_dataset_elkhair_1 = load_dataset('arabic_billion_words', 'Alittihad', split="train", download_mode='force_redownload') gives the following error: **Downloading and preparing dataset arabic_billion_words/Alittihad (download: 332.13 MiB, generated: 1.49 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/arabic_billion_words/Alittihad/1.1.0/687a1f963284c8a766558661375ea8f7ab3fa3633f8cd9c9f42a53ebe83bfe17... Downloading: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 348M/348M [00:24<00:00, 14.0MB/s] Traceback (most recent call last): File ".../why_mismatch.py", line 3, in <module> File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1632, in load_dataset builder_instance.download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 607, in download_and_prepare self._download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 709, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 74, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=1601790302, num_examples=349342, dataset_name='arabic_billion_words'), 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='arabic_billion_words')}]** Note that the package versions of datasets (1.15.1) and rarfile (4.0) are identical. ## Steps to reproduce the bug import datasets from datasets import load_dataset raw_dataset_elkhair_1 = load_dataset('arabic_billion_words', 'Alittihad', split="train", download_mode='force_redownload') ## Expected results Downloading and preparing dataset arabic_billion_words/Alittihad (download: 332.13 MiB, generated: 1.49 GiB, post-processed: Unknown size, total: 1.82 GiB) to .../.cache/huggingface/datasets/arabic_billion_words/Alittihad/1.1.0/687a1f963284c8a766558661375ea8f7ab3fa3633f8cd9c9f42a53ebe83bfe17... Downloading: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 348M/348M [00:22<00:00, 15.8MB/s] Dataset arabic_billion_words downloaded and prepared to .../.cache/huggingface/datasets/arabic_billion_words/Alittihad/1.1.0/687a1f963284c8a766558661375ea8f7ab3fa3633f8cd9c9f42a53ebe83bfe17. Subsequent calls will reuse this data. ## Environment info Machine 1: - `datasets` version: 1.15.1 - Platform: Linux-5.8.0-63-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 4.0.1 Machine 2 (the bugged one): - `datasets` version: 1.15.1 - Platform: Linux-4.4.0-210-generic-x86_64-with-glibc2.10 - Python version: 3.8.8 - PyArrow version: 6.0.0
https://github.com/huggingface/datasets/issues/3239
[]
null
3,239
false
Reuters21578 Couldn't reach
## Adding a Dataset - **Name:** *Reuters21578* - **Description:** *ConnectionError: Couldn't reach https://kdd.ics.uci.edu/databases/reuters21578/reuters21578.tar.gz* - **Data:** *https://huggingface.co/datasets/reuters21578* `from datasets import load_dataset` `dataset = load_dataset("reuters21578", 'ModLewis')` ConnectionError: Couldn't reach https://kdd.ics.uci.edu/databases/reuters21578/reuters21578.tar.gz And when I try to request the link as follows: `import requests` `requests.head('https://kdd.ics.uci.edu/databases/reuters21578/reuters21578.tar.gz')` SSLError: HTTPSConnectionPool(host='kdd.ics.uci.edu', port=443): Max retries exceeded with url: /databases/reuters21578/reuters21578.tar.gz (Caused by SSLError(SSLError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852)'),)) This problem is similar to #575. What should I do?
https://github.com/huggingface/datasets/issues/3238
[ "Hi ! The URL works fine on my side today, could you try again ?", "thank you @lhoestq \r\nit works" ]
null
3,238
false
wikitext description wrong
## Describe the bug Descriptions of the wikitext datasets are wrong. ## Steps to reproduce the bug Please see: https://github.com/huggingface/datasets/blob/f6dcafce996f39b6a4bbe3a9833287346f4a4b68/datasets/wikitext/wikitext.py#L50 ## Expected results The descriptions for raw-v1 and v1 should be switched.
https://github.com/huggingface/datasets/issues/3237
[ "Hi @hongyuanmei, thanks for reporting.\r\n\r\nI'm fixing it.", "Duplicate of:\r\n- #795" ]
null
3,237
false
Loading of datasets changed in #3110 returns no examples
## Describe the bug Loading of datasets changed in https://github.com/huggingface/datasets/pull/3110 returns no examples: ```python DatasetDict({ train: Dataset({ features: ['id', 'title', 'abstract', 'full_text', 'qas'], num_rows: 0 }) validation: Dataset({ features: ['id', 'title', 'abstract', 'full_text', 'qas'], num_rows: 0 }) }) ``` ## Steps to reproduce the bug Load any of the datasets that were changed in https://github.com/huggingface/datasets/pull/3110: ```python from datasets import load_dataset load_dataset("qasper") # The problem only started with the commit of #3110 load_dataset("qasper", revision="b6469baa22c174b3906c631802a7016fedea6780") ``` ## Expected results ```python DatasetDict({ train: Dataset({ features: ['id', 'title', 'abstract', 'full_text', 'qas'], num_rows: 888 }) validation: Dataset({ features: ['id', 'title', 'abstract', 'full_text', 'qas'], num_rows: 281 }) }) ``` Which can be received when specifying revision of the commit before https://github.com/huggingface/datasets/pull/3110: ```python from datasets import load_dataset load_dataset("qasper", revision="acfe2abda1ca79f0ce5c1896aa83b4b78af76b7d") ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.15.2.dev0 (master) - Python version: 3.8.10 - PyArrow version: 3.0.0
https://github.com/huggingface/datasets/issues/3236
[ "Hi @eladsegal, thanks for reporting.\r\n\r\nI am sorry, but I can't reproduce the bug:\r\n```\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"qasper\")\r\nDownloading: 5.11kB [00:00, ?B/s]\r\nDownloading and preparing dataset qasper/qasper (download: 9.88 MiB, generated: 35.11 MiB, ...
null
3,236
false
Add options to use updated BLEURT checkpoints
Adds options to use the newer recommended checkpoint (as of 2021/10/08), BLEURT-20, and its distilled versions. Updated checkpoints are described in https://github.com/google-research/bleurt/blob/master/checkpoints.md#the-recommended-checkpoint-bleurt-20 This change won't affect the default behavior of metrics/bleurt. It only adds the option to load newer checkpoints, as `datasets.load_metric('bleurt', 'bleurt-20')`. `bleurt-20` generates scores roughly between 0 and 1, which wasn't the case for the previous checkpoints.
https://github.com/huggingface/datasets/pull/3235
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3235", "html_url": "https://github.com/huggingface/datasets/pull/3235", "diff_url": "https://github.com/huggingface/datasets/pull/3235.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3235.patch", "merged_at": "2021-11-12T14:05:28" }
3,235
true
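A hedged usage sketch for the option added in #3235; it assumes the external `bleurt` package is installed, and the BLEURT-20 checkpoint is downloaded on first use.

```python
import datasets

# Load the newer recommended checkpoint instead of the default one.
bleurt = datasets.load_metric("bleurt", "bleurt-20")
result = bleurt.compute(
    predictions=["the cat sat on the mat"],
    references=["a cat was sitting on the mat"],
)
print(result["scores"])  # BLEURT-20 scores fall roughly between 0 and 1
```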
Avoid PyArrow type optimization if it fails
Adds a new variable, `DISABLE_PYARROW_TYPES_OPTIMIZATION`, to `config.py` for easier control of the Arrow type optimization. Fix #2206
https://github.com/huggingface/datasets/pull/3234
[ "That's good to have a way to disable this easily :)\r\nI just find it a bit unfortunate that users would have to experience the error once and then do `DISABLE_PYARROW_TYPES_OPTIMIZATION=1`. Do you know if there's a way to simply fallback on disabling it automatically when it fails ?", "@lhoestq Actually, I agre...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3234", "html_url": "https://github.com/huggingface/datasets/pull/3234", "diff_url": "https://github.com/huggingface/datasets/pull/3234.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3234.patch", "merged_at": "2021-11-10T12:04:28" }
3,234
true
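Based on the review thread of #3234, a sketch of how a user would disable the optimization; the assumption here is that the flag is read from the environment when the `datasets` config is imported.

```python
import os

# Must be set before importing datasets so config.py sees it (assumption).
os.environ["DISABLE_PYARROW_TYPES_OPTIMIZATION"] = "1"

import datasets  # noqa: E402  # imported after setting the flag
```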
Improve repository structure docs
Continuation of the documentation started in https://github.com/huggingface/datasets/pull/3221, taking into account @stevhliu 's comments
https://github.com/huggingface/datasets/pull/3233
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3233", "html_url": "https://github.com/huggingface/datasets/pull/3233", "diff_url": "https://github.com/huggingface/datasets/pull/3233.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3233.patch", "merged_at": "2021-11-09T10:02:17" }
3,233
true
The XSum dataset cannot be downloaded
## Describe the bug The download link for the XSum dataset provided in the repository is [Link](http://bollin.inf.ed.ac.uk/public/direct/XSUM-EMNLP18-Summary-Data-Original.tar.gz). It cannot be downloaded. ## Steps to reproduce the bug ```python load_dataset('xsum') ``` ## Actual results ```python raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://bollin.inf.ed.ac.uk/public/direct/XSUM-EMNLP18-Summary-Data-Original.tar.gz ```
https://github.com/huggingface/datasets/issues/3232
[ "Hi ! On my side the URL is working fine, could you try again ?", "> Hi ! On my side the URL is working fine, could you try again ?\r\n\r\nI try it again and cannot download the file (might because of my location). Could you please provide another download link(such as google drive)? :>", "I don't know other ...
null
3,232
false
Group tests in multiprocessing workers by test file
By grouping tests by test file, we make sure that all the tests in `test_load.py` are sent to the same worker. Therefore, the fixture `hf_token` will be called only once (and from the same worker). Related to: #3200. Fix #3219.
https://github.com/huggingface/datasets/pull/3231
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3231", "html_url": "https://github.com/huggingface/datasets/pull/3231", "diff_url": "https://github.com/huggingface/datasets/pull/3231.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3231.patch", "merged_at": "2021-11-08T08:59:43" }
3,231
true
Add full tagset to conll2003 README
Even though it is possible to manually get the tagset list with ```python dset.features[field_name].feature.names ``` I think it is useful to have an overview of the used tagset on the dataset card. This is particularly useful in light of the **dataset viewer**: the tags are encoded, so it is not immediately obvious what they are for a given sample. Adding a label-int mapping should make it easier for visitors to get a grasp of what they mean. From a user-experience perspective, I would urge that full tagsets always be available in the READMEs, but I understand that would probably take a lot of work. Perhaps it can be automated? closes #3189
https://github.com/huggingface/datasets/pull/3230
[ "I also added the missing `pretty_name` tag in the dataset card to fix the CI" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3230", "html_url": "https://github.com/huggingface/datasets/pull/3230", "diff_url": "https://github.com/huggingface/datasets/pull/3230.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3230.patch", "merged_at": "2021-11-09T10:40:58" }
3,230
true
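For reference, a short sketch of the programmatic route mentioned in #3230, using the `ner_tags` field of conll2003 (the field name and example labels are assumptions drawn from the dataset's published features):

```python
from datasets import load_dataset

ds = load_dataset("conll2003", split="train")
names = ds.features["ner_tags"].feature.names
# Build the label-int mapping that the README now lists explicitly.
print(dict(enumerate(names)))  # e.g. {0: 'O', 1: 'B-PER', 2: 'I-PER', ...}
```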
Fix URL in CITATION file
Currently the BibTeX citation parsed from the CITATION file has wrong URL (it shows the repo URL instead of the proceedings paper URL): ``` @inproceedings{Lhoest_Datasets_A_Community_2021, author = {Lhoest, Quentin and Villanova del Moral, Albert and von Platen, Patrick and Wolf, Thomas and Šaőko, Mario and Jernite, Yacine and Thakur, Abhishek and Tunstall, Lewis and Patil, Suraj and Drame, Mariama and Chaumond, Julien and Plu, Julien and Davison, Joe and Brandeis, Simon and Sanh, Victor and Le Scao, Teven and Canwen Xu, Kevin and Patry, Nicolas and Liu, Steven and McMillan-Major, Angelina and Schmid, Philipp and Gugger, Sylvain and Raw, Nathan and Lesage, Sylvain and Lozhkov, Anton and Carrigan, Matthew and Matussière, Théo and von Werra, Leandro and Debut, Lysandre and Bekman, Stas and Delangue, Clément}, booktitle = {Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations}, month = {11}, pages = {175--184}, publisher = {Association for Computational Linguistics}, title = {{Datasets: A Community Library for Natural Language Processing}}, url = {https://github.com/huggingface/datasets}, year = {2021} } ```
https://github.com/huggingface/datasets/pull/3229
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3229", "html_url": "https://github.com/huggingface/datasets/pull/3229", "diff_url": "https://github.com/huggingface/datasets/pull/3229.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3229.patch", "merged_at": "2021-11-07T10:04:45" }
3,229
true
Add CITATION file
Add CITATION file.
https://github.com/huggingface/datasets/pull/3228
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3228", "html_url": "https://github.com/huggingface/datasets/pull/3228", "diff_url": "https://github.com/huggingface/datasets/pull/3228.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3228.patch", "merged_at": "2021-11-07T09:51:46" }
3,228
true
Error in `Json(datasets.ArrowBasedBuilder)` class
## Describe the bug When a json file contains a `text` field that is larger than the block_size, the JSON dataset builder fails. ## Steps to reproduce the bug Create a folder that contains the following: ``` . β”œβ”€β”€ testdata β”‚Β Β  └── mydata.json └── test.py ``` Please download [this file](https://github.com/huggingface/datasets/files/7491797/mydata.txt) as `mydata.json`. (The error does not occur in JSON files with shorter text, but it is reproducible when the text is long, as in the file I provide.) :exclamation: :exclamation: GitHub doesn't allow me to upload JSON, so this file is a TXT and you should rename it to `.json`! `test.py` simply contains: ```python from datasets import load_dataset my_dataset = load_dataset("testdata") ``` To reproduce the error, simply run ``` python test.py ``` ## Expected results The data should load correctly without error. ## Actual results The dataset builder fails with: ``` Using custom data configuration testdata-d490389b8ab4fd82 Downloading and preparing dataset json/testdata to /home/junshern.chan/.cache/huggingface/datasets/json/testdata-d490389b8ab4fd82/0.0.0/3333a8af0db9764dfcff43a42ff26228f0f2e267f0d8a0a294452d188beadb34... 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:00<00:00, 2264.74it/s] 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:00<00:00, 447.01it/s] Failed to read file '/home/junshern.chan/hf-json-bug/testdata/mydata.json' with error <class 'pyarrow.lib.ArrowInvalid'>: JSON parse error: Missing a name for object member. in row 0 Traceback (most recent call last): File "test.py", line 28, in <module> my_dataset = load_dataset("testdata") File "/home/junshern.chan/.casio/miniconda/envs/hf-json-bug/lib/python3.8/site-packages/datasets/load.py", line 1632, in load_dataset builder_instance.download_and_prepare( File "/home/junshern.chan/.casio/miniconda/envs/hf-json-bug/lib/python3.8/site-packages/datasets/builder.py", line 607, in download_and_prepare self._download_and_prepare( File "/home/junshern.chan/.casio/miniconda/envs/hf-json-bug/lib/python3.8/site-packages/datasets/builder.py", line 697, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/junshern.chan/.casio/miniconda/envs/hf-json-bug/lib/python3.8/site-packages/datasets/builder.py", line 1156, in _prepare_split for key, table in utils.tqdm( File "/home/junshern.chan/.casio/miniconda/envs/hf-json-bug/lib/python3.8/site-packages/tqdm/std.py", line 1168, in __iter__ for obj in iterable: File "/home/junshern.chan/.casio/miniconda/envs/hf-json-bug/lib/python3.8/site-packages/datasets/packaged_modules/json/json.py", line 146, in _generate_tables raise ValueError( ValueError: Not able to read records in the JSON file at /home/junshern.chan/hf-json-bug/testdata/mydata.json. You should probably indicate the field of the JSON file containing your records. This JSON file contain the following fields: ['text']. Select the correct one and provide it as `field='XXX'` to the dataset loading method. ``` ## Environment info - `datasets` version: 1.15.1 - Platform: Linux-5.8.0-63-generic-x86_64-with-glibc2.17 - Python version: 3.8.12 - PyArrow version: 6.0.0
https://github.com/huggingface/datasets/issues/3227
[ "I have additionally identified the source of the error, being that [this condition](https://github.com/huggingface/datasets/blob/fc46bba66ba4f432cc10501c16a677112e13984c/src/datasets/packaged_modules/json/json.py#L124-L126) in the file\r\n`python3.8/site-packages/datasets/packaged_modules/json/json.py` is not bein...
null
3,227
false
Fix paper BibTeX citation with proceedings reference
Fix paper BibTeX citation with proceedings reference.
https://github.com/huggingface/datasets/pull/3226
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3226", "html_url": "https://github.com/huggingface/datasets/pull/3226", "diff_url": "https://github.com/huggingface/datasets/pull/3226.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3226.patch", "merged_at": "2021-11-07T07:05:27" }
3,226
true
Update tatoeba to v2021-07-22
Tatoeba's latest version is v2021-07-22
https://github.com/huggingface/datasets/pull/3225
[ "How about this? @lhoestq @abhishekkrthakur ", "Hi ! I think it would be nice if people could still be able to load the old version.\r\nMaybe this can be a parameter ? For example to load the old version they could do\r\n```python\r\nload_dataset(\"tatoeba\", lang1=\"en\", lang2=\"mr\", date=\"v2020-11-09\")\r\n`...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3225", "html_url": "https://github.com/huggingface/datasets/pull/3225", "diff_url": "https://github.com/huggingface/datasets/pull/3225.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3225.patch", "merged_at": "2021-11-12T11:13:13" }
3,225
true
User-pickling with dynamic sub-classing
This is a continuation of the now closed PR in https://github.com/huggingface/datasets/pull/3206. The discussion there has shaped a new approach to do this. In this PR, behavior of `pklregister` and `Pickler` is extended. Earlier, users were already able to register custom pickle functions. That is useful if they have objects that are not easily picklable with default methods. When one registers a custom function to a type, an object of that type will be pickled with the given function by `Pickler` which looks up the type in its `dispatch` table. The downside of this method, and of `pickle` in general, is that it is limited to direct type-matching and does not allow sub-classes. In many default cases that is not an issue. But when you are using external libraries where classes (e.g. parsers, models) are sub-classed, this is not ideal. ```python from datasets.fingerprint import Hasher from datasets.utils.py_utils import pklregister class BaseParser: pass class EnglishParser(BaseParser): pass @pklregister(BaseParser) def custom_pkl_func(pickler, obj): print(f"Called the custom pickle function for type {type(obj)}!") # do something with the obj and ultimately save with the pickler base = BaseParser() en = EnglishParser() # Hasher.hash uses the Pickler behind the scenes # `custom_pkl_func` called for base Hasher.hash(base) # `custom_pkl_func` not called for en :-( Hasher.hash(en) ``` In the example above we'd want the sub-class `EnglishParser` to be handled in the same way as its super-class `BaseParser`. This PR solves that by allowing for a keyword-argument `allow_subclasses` in `pklregister` (default: `False`). ```python @pklregister(BaseParser, allow_subclasses=True) ``` When this option is enabled, we not only save the function in `Pickler.dispatch` but also save it in a custom table `Pickler.subclass_dispatch` **which allows us to dynamically add sub-classes of that class to the real dispatch table**. Then, if we want to pickle an object `obj` with `Pickler.dump()` (which ultimately will call `Pickler.save()`) we _first_ check whether any of the object's super-classes exist in `Pickler.subclass_dispatch` and get the related custom pickle function. If we find one, we add the type of `obj` alongside the function to `Pickler.dispatch`. All of this happens at the start of the call to `Pickler.save()`. _Only then_ dill.Pickler's `save` will be called, which in turn will call `pickle._Pickler.save` which handles everything. Here, the `Pickler.dispatch` table will be used to look up custom pickler functions - and it now also includes the function for `obj`, which was copied from its super-class, which we added at the very start of our custom `Pickler.save()`. For edge cases and, especially, for testing, a contextmanager class `TempPickleRegistry` is included that resets the pickle registry on exit to its previous state. ```python with TempPickleRegistry(): @pklregister(MyObjClass) def pickle_registry_test_false(pickler, obj): pickler.save(obj.fancy_method()) some_obj = MyObjClass() dumps(some_obj) # `MyObjClass` is in Pickler.dispatch # ... `MyObjClass` is _not_ in Pickler.dispatch anymore ``` closes https://github.com/huggingface/datasets/issues/3178 To Do ==== - [x] Write tests - [ ] Write documentation/examples?
https://github.com/huggingface/datasets/pull/3224
[ "@lhoestq Feel free to have a look. The implementation is slightly different from what you suggested. I have opted to overwrite `save` instead of meddling with `save_global`. `save_global` is called very late down in dill/pickle so it is hard to control for what is happening there. I might be wrong. Pickling is mor...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3224", "html_url": "https://github.com/huggingface/datasets/pull/3224", "diff_url": "https://github.com/huggingface/datasets/pull/3224.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3224.patch", "merged_at": null }
3,224
true
Update BibTeX entry
Update BibTeX entry.
https://github.com/huggingface/datasets/pull/3223
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3223", "html_url": "https://github.com/huggingface/datasets/pull/3223", "diff_url": "https://github.com/huggingface/datasets/pull/3223.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3223.patch", "merged_at": "2021-11-06T07:06:38" }
3,223
true
Add docs for audio processing
This PR adds documentation for the `Audio` feature. It describes: - The difference between loading `path` and `audio`, as well as use-cases/best practices for each of them. - Resampling audio files with `cast_column`, and then calling `ds[0]["audio"]` to automatically decode and resample to the desired sampling rate. - Resampling with `map`. Preview [here](https://52969-250213286-gh.circle-artifacts.com/0/docs/_build/html/audio_process.html), let me know if I'm missing anything!
https://github.com/huggingface/datasets/pull/3222
[ "Nice ! love it this way. I guess you can set this PR to \"ready for review\" ?", "I guess we can merge this one now :)" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3222", "html_url": "https://github.com/huggingface/datasets/pull/3222", "diff_url": "https://github.com/huggingface/datasets/pull/3222.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3222.patch", "merged_at": "2021-11-24T15:35:52" }
3,222
true
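A hedged sketch of the resampling workflow that #3222 documents; the dataset name here is only an illustrative choice of an audio dataset, not one named in the PR.

```python
from datasets import Audio, load_dataset

ds = load_dataset("common_voice", "tr", split="train")
# Re-cast the column; decoding and resampling happen lazily on access.
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
sample = ds[0]["audio"]
print(sample["sampling_rate"])  # 16000
```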
Resolve data_files by split name
As discussed in https://github.com/huggingface/datasets/issues/3027 we should automatically infer what file is supposed to go to what split automatically, based on filenames. I added the support for different kinds of patterns, for both dataset repositories and local directories: ``` Input structure: my_dataset_repository/ β”œβ”€β”€ README.md └── dataset.csv Output patterns: {"train": ["*"]} ``` ``` Input structure: my_dataset_repository/ β”œβ”€β”€ README.md β”œβ”€β”€ train.csv └── test.csv my_dataset_repository/ β”œβ”€β”€ README.md └── data/ β”œβ”€β”€ train.csv └── test.csv my_dataset_repository/ β”œβ”€β”€ README.md β”œβ”€β”€ train_0.csv β”œβ”€β”€ train_1.csv β”œβ”€β”€ train_2.csv β”œβ”€β”€ train_3.csv β”œβ”€β”€ test_0.csv └── test_1.csv Output patterns: {"train": ["*train*"], "test": ["*test*"]} ``` ``` Input structure: my_dataset_repository/ β”œβ”€β”€ README.md └── data/ β”œβ”€β”€ train/ β”‚ β”œβ”€β”€ shard_0.csv β”‚ β”œβ”€β”€ shard_1.csv β”‚ β”œβ”€β”€ shard_2.csv β”‚ └── shard_3.csv └── test/ β”œβ”€β”€ shard_0.csv └── shard_1.csv Output patterns: {"train": ["*train*/*", "*train*/**/*"], "test": ["*test*/*", "*test*/**/*"]} ``` and also this pattern that allows to have custom split names, and that is the structure used by #3098 for `push_to_hub` (cc @LysandreJik ): ``` Input structure: my_dataset_repository/ β”œβ”€β”€ README.md └── data/ β”œβ”€β”€ train-00000-of-00003.csv β”œβ”€β”€ train-00001-of-00003.csv β”œβ”€β”€ train-00002-of-00003.csv β”œβ”€β”€ test-00000-of-00001.csv β”œβ”€β”€ random-00000-of-00003.csv β”œβ”€β”€ random-00001-of-00003.csv └── random-00002-of-00003.csv Output patterns: { "train": ["data/train-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9].*"], "test": ["data/test-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9].*"], "random": ["data/random-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9].*"], } ``` You can check the documentation about structuring your repository [here](https://52640-250213286-gh.circle-artifacts.com/0/docs/_build/html/repository_structure.html). cc @stevhliu Fix https://github.com/huggingface/datasets/issues/3027 Fix https://github.com/huggingface/datasets/issues/3212 In the future we can also add support for dataset configurations.
https://github.com/huggingface/datasets/pull/3221
[ "Really cool!\r\nWhen splitting by folder, what do we use for validation set (\"valid\", \"validation\" or both)?", "> When splitting by folder, what do we use for validation set (\"valid\", \"validation\" or both)?\r\n\r\nBoth are fine :) As soon as it has \"valid\" in it", "Merging for now, if you have commen...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3221", "html_url": "https://github.com/huggingface/datasets/pull/3221", "diff_url": "https://github.com/huggingface/datasets/pull/3221.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3221.patch", "merged_at": "2021-11-05T17:49:57" }
3,221
true
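For comparison with the patterns inferred by #3221, the explicit equivalent passes glob patterns per split; a sketch using the example file layout from the PR description above.

```python
from datasets import load_dataset

ds = load_dataset(
    "csv",
    data_files={"train": ["data/train-*.csv"], "test": ["data/test-*.csv"]},
)
print(ds)  # DatasetDict with 'train' and 'test' splits
```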
Add documentation about dataset viewer feature
Add to the docs more details about the dataset viewer feature in the Hub. CC: @julien-c
https://github.com/huggingface/datasets/issues/3220
[]
null
3,220
false
Eventual Invalid Token Error at setup of private datasets
## Describe the bug From time to time, there appear Invalid Token errors with private datasets: - https://app.circleci.com/pipelines/github/huggingface/datasets/8520/workflows/d44629f2-4749-40f8-a657-50931d0b3434/jobs/52534 ``` ____________ ERROR at setup of test_load_streaming_private_dataset _____________ ValueError: Invalid token passed! ____ ERROR at setup of test_load_streaming_private_dataset_with_zipped_data ____ ValueError: Invalid token passed! =========================== short test summary info ============================ ERROR tests/test_load.py::test_load_streaming_private_dataset - ValueError: I... ERROR tests/test_load.py::test_load_streaming_private_dataset_with_zipped_data ``` - https://app.circleci.com/pipelines/github/huggingface/datasets/8557/workflows/a8383181-ba6d-4487-9d0a-f750b6dcb936/jobs/52763 ``` ____ ERROR at setup of test_load_streaming_private_dataset_with_zipped_data ____ [gw1] linux -- Python 3.6.15 /home/circleci/.pyenv/versions/3.6.15/bin/python3.6 hf_api = <huggingface_hub.hf_api.HfApi object at 0x7f4899bab908> hf_token = 'vgNbyuaLNEBuGbgCEtSBCOcPjZnngJufHkTaZvHwkXKGkHpjBPwmLQuJVXRxBuaRzNlGjlMpYRPbthfHPFWXaaEDTLiqTTecYENxukRYVAAdpeApIUPxcgsowadkTkPj' zip_csv_path = PosixPath('/tmp/pytest-of-circleci/pytest-0/popen-gw1/data16/dataset.csv.zip') @pytest.fixture(scope="session") def hf_private_dataset_repo_zipped_txt_data_(hf_api: HfApi, hf_token, zip_csv_path): repo_name = "repo_zipped_txt_data-{}".format(int(time.time() * 10e3)) hf_api.create_repo(token=hf_token, name=repo_name, repo_type="dataset", private=True) repo_id = f"{USER}/{repo_name}" hf_api.upload_file( token=hf_token, path_or_fileobj=str(zip_csv_path), path_in_repo="data.zip", repo_id=repo_id, > repo_type="dataset", ) tests/hub_fixtures.py:68: ... ValueError: Invalid token passed! =========================== short test summary info ============================ ERROR tests/test_load.py::test_load_streaming_private_dataset_with_zipped_data ```
https://github.com/huggingface/datasets/issues/3219
[]
null
3,219
false
Fix code quality in riddle_sense dataset
Fix trailing whitespace. Fix #3217.
https://github.com/huggingface/datasets/pull/3218
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3218", "html_url": "https://github.com/huggingface/datasets/pull/3218", "diff_url": "https://github.com/huggingface/datasets/pull/3218.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3218.patch", "merged_at": "2021-11-04T17:50:02" }
3,218
true
Fix code quality bug in riddle_sense dataset
## Describe the bug ``` datasets/riddle_sense/riddle_sense.py:36:21: W291 trailing whitespace ```
https://github.com/huggingface/datasets/issues/3217
[ "To give more context: https://github.com/psf/black/issues/318. `black` doesn't treat this as a bug, but `flake8` does. \r\n" ]
null
3,217
false
Pin version exclusion for tensorflow incompatible with keras
Once `tensorflow` version 2.6.2 is released: - https://github.com/tensorflow/tensorflow/commit/c1867f3bfdd1042f694df7a9870be51ba80543cb - https://pypi.org/project/tensorflow/2.6.2/ with the patch: - tensorflow/tensorflow#52927 we can remove the temporary fix we introduced in: - #3208 Fix #3209.
https://github.com/huggingface/datasets/pull/3216
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3216", "html_url": "https://github.com/huggingface/datasets/pull/3216", "diff_url": "https://github.com/huggingface/datasets/pull/3216.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3216.patch", "merged_at": "2021-11-05T10:57:37" }
3,216
true
Small updates to to_tf_dataset documentation
I added a little more description about `to_tf_dataset` compared to just setting the format.
https://github.com/huggingface/datasets/pull/3215
[ "@stevhliu Accepted both suggestions, thanks for the review!" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3215", "html_url": "https://github.com/huggingface/datasets/pull/3215", "diff_url": "https://github.com/huggingface/datasets/pull/3215.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3215.patch", "merged_at": "2021-11-04T18:55:37" }
3,215
true
Add ACAV100M Dataset
## Adding a Dataset - **Name:** *ACAV100M* - **Description:** *contains 100 million videos with high audio-visual correspondence, ideal for self-supervised video representation learning.* - **Paper:** *https://arxiv.org/abs/2101.10803* - **Data:** *https://github.com/sangho-vision/acav100m* - **Motivation:** *The largest dataset (to date) for audio-visual learning.* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
https://github.com/huggingface/datasets/issues/3214
[]
null
3,214
false
Fix tuple_ie download url
Fix #3204
https://github.com/huggingface/datasets/pull/3213
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3213", "html_url": "https://github.com/huggingface/datasets/pull/3213", "diff_url": "https://github.com/huggingface/datasets/pull/3213.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3213.patch", "merged_at": "2021-11-05T14:16:05" }
3,213
true
Sort files before loading
When loading a dataset that consists of several files (e.g. `my_data/data_001.json`, `my_data/data_002.json`, etc.), they are not loaded in order when using `load_dataset("my_data")`. This could lead to counter-intuitive results if, for example, the data files are sorted by date or similar, since they would appear in a different order in the `Dataset`. The straightforward solution is to sort the list of files alphabetically before loading them; a sketch follows below. cc @lhoestq
https://github.com/huggingface/datasets/issues/3212
[ "This will be fixed by https://github.com/huggingface/datasets/pull/3221" ]
null
3,212
false
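A straightforward sketch of the workaround proposed in #3212 until a fix lands: sort the shard paths yourself and pass them explicitly.

```python
import glob

from datasets import load_dataset

files = sorted(glob.glob("my_data/data_*.json"))  # deterministic order
ds = load_dataset("json", data_files=files)
```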
Fix disable_nullable default value to False
Currently the `disable_nullable` parameter is not consistent across all dataset transforms. For example it is `False` in `map` but `True` in `flatten_indices`. This creates unexpected behaviors like this ```python from datasets import Dataset, concatenate_datasets d1 = Dataset.from_dict({"a": [0, 1, 2, 3]}) d2 = d1.filter(lambda x: x["a"] < 2).flatten_indices() d1.data.schema == d2.data.schema # False ``` This can cause issues when concatenating datasets for example. For consistency I set `disable_nullable` to `False` in `flatten_indices` and I fixed some docstrings cc @SBrandeis
https://github.com/huggingface/datasets/pull/3211
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3211", "html_url": "https://github.com/huggingface/datasets/pull/3211", "diff_url": "https://github.com/huggingface/datasets/pull/3211.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3211.patch", "merged_at": "2021-11-04T11:08:20" }
3,211
true
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.15.1/datasets/wmt16/wmt16.py
When I run `python examples/pytorch/translation/run_translation.py --model_name_or_path examples/pytorch/translation/opus-mt-en-ro --do_train --do_eval --source_lang en --target_lang ro --dataset_name wmt16 --dataset_config_name ro-en --output_dir /tmp/tst-translation --per_device_train_batch_size=4 --per_device_eval_batch_size=4 --overwrite_output_dir --predict_with_generate` to fine-tune a translation model on Hugging Face, I get the error "ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.15.1/datasets/wmt16/wmt16.py". But I can open https://raw.githubusercontent.com/huggingface/datasets/1.15.1/datasets/wmt16/wmt16.py in a browser. What should I do to solve the issue?
https://github.com/huggingface/datasets/issues/3210
[ "Hi ! Do you have some kind of proxy in your browser that gives you access to internet ?\r\n\r\nMaybe you're having this error because you don't have access to this URL from python ?", "Hi,do you fixed this error?\r\nI still have this issue when use \"use_auth_token=True\"", "You don't need authentication to ac...
null
3,210
false
Unpin keras once TF fixes its release
Related to: - #3208
https://github.com/huggingface/datasets/issues/3209
[]
null
3,209
false
Pin keras version until TF fixes its release
Fix #3207.
https://github.com/huggingface/datasets/pull/3208
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3208", "html_url": "https://github.com/huggingface/datasets/pull/3208", "diff_url": "https://github.com/huggingface/datasets/pull/3208.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3208.patch", "merged_at": "2021-11-04T09:30:54" }
3,208
true
CI error: Another metric with the same name already exists in Keras 2.7.0
## Describe the bug Release of TensorFlow 2.7.0 contains an incompatibility with Keras. See: - keras-team/keras#15579 This breaks our CI test suite: https://app.circleci.com/pipelines/github/huggingface/datasets/8493/workflows/055c7ae2-43bc-49b4-9f11-8fc71f35a25c/jobs/52363
https://github.com/huggingface/datasets/issues/3207
[]
null
3,207
false
[WIP] Allow user-defined hash functions via a registry
Inspired by the discussion on hashing in https://github.com/huggingface/datasets/issues/3178#issuecomment-959016329, @lhoestq suggested that it would be neat to allow users more control over the hashing process. Specifically, it would be great if users can specify specific hashing functions depending on the **class** of the object. As an example, we found in the linked topic that loaded spaCy models (`Language` objects) have different hashes when `dump`'d, but their byte representation with `Language.to_bytes()` _is_ deterministic. It would therefore be great if we could specify that for `Language` objects, the hasher should hash the object's `to_bytes()` return value instead of the object itself. This PR adds a new, but tiny, dependency to manage the registry, namely [`catalogue`](https://github.com/explosion/catalogue). Two files have been changed (apart from the added dependency in `setup.py`) and one file has been added. **utils.registry** (added) This file defines our custom Registry and builds a registry called "hashers". A Registry is basically dictionary from names (str) to functions. A function can be added to the registry by a decorator, e.g. ```python @hashers.register(spacy.Language) def hash_spacy_language(nlp): return Hasher.hash(nlp.to_bytes()) ``` You'll notice that `spacy.Language` is not a string, even though the registry holds a str->func mapping. To accomplish this with classes in a dynamic way, catalogue.Registry needed to be subclassed and modified as `DatasetsRegistry`. All methods that use a name as an input are now modified so that classes are deterministically converted into strings in such a way that we can later retrieve the actual class from the string (below). **utils.py_utils** (modified) Added two functions to deal with classes and their qualified names, that is, their full descriptive name including the module. On the one hand it allows us to retrieve a string from a given class, e.g. given the `Module` class, return the `torch.nn.Module` str. Conversely, a function is added to convert such a fully qualified name into a class. For instance, given the string `torch.nn.Module`, return the `Module` class. These straightforward methods allow us to interchangeably use classes and strings without any needed user interaction - they can just register a class, and behind the scenes `DatasetsRegistry` converts these to deterministic strings. **fingerprint** (modified) Updated Hasher.hash so that if the object to hash is an instance of a class in the registry, the registered function is used to hash the object instead of the default behavior. To do so we iterate over the registry `hashers` and convert its keys (strings) into classes, and then we can use `isinstance`. ```python # Check if the current object is an instance that is # applicable to the user-defined hashers. If so, hash # with the user-defined function for full_module_name, func in hashers.get_all().items(): registered_cls = get_cls_from_qualname(full_module_name) if isinstance(value, registered_cls): return func(value) ``` **Putting it all together** To test this, you can try the following example with spaCy. First install spaCy from source and check out a specific commit. ```shell git clone https://github.com/explosion/spaCy.git cd spaCy/ git checkout cab9209c3dfcd1b75dfe5657f10e52c4d847a3cf cd .. git clone https://github.com/BramVanroy/datasets.git cd datasets git checkout registry pip install -e . pip install ../spaCy spacy download en_core_web_sm ``` Now you can run the following script. 
By default it will use the custom hasher function for the Language object. You can enable the default behavior by commenting out `@hashers.register...`. ```python import spacy from datasets.fingerprint import Hasher from datasets.utils.registry import hashers # Register a function so that when the Hasher encounters a spacy.Language object # it uses this custom function to hash instead of the default @hashers.register(spacy.Language) def hash_spacy_language(nlp): return Hasher.hash(nlp.to_bytes()) def main(): print(hashers.get_all()) nlp = spacy.load("en_core_web_sm") dump1 = Hasher.hash(nlp) nlp = spacy.load("en_core_web_sm") dump2 = Hasher.hash(nlp) print(dump1) # succeeds when using the registered custom function # fails if using the default assert dump1 == dump2 if __name__ == '__main__': main() ``` To do ==== - The above is just a proof-of-concept. I am open to changes/suggestions - Tests still need to be written - We should consider whether we can make `DatasetsRegistry` very restrictive and ONLY allowing classes. That would make testing easier - otherwise we also need to test for other sorts of objects. - Maybe the `hashers` definition is better suited in `fingerprint`? - Documentation/examples need to be updated - Not sure why the logger is not working in `hash()` - `get_cls_from_qualname` might need a fail-safe: is it possible for a full_qualname to not have a module, and if so how do we deal with that?
https://github.com/huggingface/datasets/pull/3206
[ "Hi @BramVanroy, thanks for your PR.\r\n\r\nThere was a bug in TensorFlow/Keras. We have made a temporary fix in master branch. Please, merge master into your PR branch, so that the CI tests pass.\r\n\r\n```\r\ngit checkout registry\r\ngit fetch upstream master\r\ngit merge upstream/master\r\n```", "@albertvillan...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3206", "html_url": "https://github.com/huggingface/datasets/pull/3206", "diff_url": "https://github.com/huggingface/datasets/pull/3206.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3206.patch", "merged_at": null }
3,206
true
Add Multidoc2dial Dataset
This PR adds the MultiDoc2Dial dataset introduced in this [paper](https://arxiv.org/pdf/2109.12595v1.pdf).
https://github.com/huggingface/datasets/pull/3205
[ "@songfeng cc", "Hi @sivasankalpp, thanks for your PR.\r\n\r\nThere was a bug in TensorFlow/Keras. We have made a temporary fix in our master branch. Please, merge master into your PR branch, so that the CI tests pass.\r\n\r\n```\r\ngit checkout multidoc2dial\r\ngit fetch upstream master\r\ngit merge upstream/mas...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3205", "html_url": "https://github.com/huggingface/datasets/pull/3205", "diff_url": "https://github.com/huggingface/datasets/pull/3205.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3205.patch", "merged_at": "2021-11-24T16:55:08" }
3,205
true
FileNotFoundError for TupleIE dataset
Hi, `dataset = datasets.load_dataset('tuple_ie', 'all')` returns a FileNotFoundError. Is the data not available? Many thanks.
https://github.com/huggingface/datasets/issues/3204
[ "@mariosasko @lhoestq Could you give me an update on how to load the dataset after the fix?\r\nThanks.", "Hi @arda-vianai,\r\n\r\nfirst, you can try:\r\n```python\r\nimport datasets\r\ndataset = datasets.load_dataset('tuple_ie', 'all', revision=\"master\")\r\n```\r\nIf this doesn't work, your version of `datasets...
null
3,204
false
Updated: DaNE - updated URL for download
It seems that DaNLP has updated their download URLs, so the URL also needs to be updated here...
https://github.com/huggingface/datasets/pull/3203
[ "Actually it looks like the old URL is still working, and it's also the one that is mentioned in https://github.com/alexandrainst/danlp/blob/master/docs/docs/datasets.md\r\n\r\nWhat makes you think we should use the new URL ?", "@lhoestq Sorry! I might have jumped to conclusions a bit too fast here... \r\n\r\nI w...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3203", "html_url": "https://github.com/huggingface/datasets/pull/3203", "diff_url": "https://github.com/huggingface/datasets/pull/3203.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3203.patch", "merged_at": "2021-11-04T11:46:43" }
3,203
true
Add mIoU metric
**Is your feature request related to a problem? Please describe.** Recently, some semantic segmentation models were added to HuggingFace Transformers, including [SegFormer](https://huggingface.co/transformers/model_doc/segformer.html) and [BEiT](https://huggingface.co/transformers/model_doc/beit.html). Semantic segmentation (which is the task of labeling every pixel of an image with a corresponding class) is typically evaluated using the mean Intersection over Union (mIoU). Together with the upcoming Image Feature, adding this metric could be very handy when creating example scripts to fine-tune any Transformer-based model on a semantic segmentation dataset. An implementation can be found [here](https://github.com/open-mmlab/mmsegmentation/blob/504965184c3e6bc9ec43af54237129ef21981a5f/mmseg/core/evaluation/metrics.py#L132) for instance.
https://github.com/huggingface/datasets/issues/3202
[ "Resolved via https://github.com/huggingface/datasets/pull/3745." ]
null
3,202
false
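A toy per-class IoU averaged over the classes present (a sketch of the requested metric, not the mmsegmentation implementation linked in #3202):

```python
import numpy as np

def mean_iou(pred: np.ndarray, label: np.ndarray, num_classes: int) -> float:
    ious = []
    for c in range(num_classes):
        intersection = np.logical_and(pred == c, label == c).sum()
        union = np.logical_or(pred == c, label == c).sum()
        if union > 0:  # ignore classes absent from both maps
            ious.append(intersection / union)
    return float(np.mean(ious))

pred = np.array([[0, 0, 1], [1, 1, 2]])
label = np.array([[0, 0, 1], [1, 2, 2]])
print(mean_iou(pred, label, num_classes=3))  # ~0.72
```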
Add GSM8K dataset
## Adding a Dataset - **Name:** GSM8K (short for Grade School Math 8k) - **Description:** GSM8K is a dataset of 8.5K high quality linguistically diverse grade school math word problems created by human problem writers. - **Paper:** https://openai.com/blog/grade-school-math/ - **Data:** https://github.com/openai/grade-school-math - **Motivation:** The dataset is useful to investigate the reasoning abilities of large Transformer models, such as GPT-3. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
https://github.com/huggingface/datasets/issues/3201
[ "Closed via https://github.com/huggingface/datasets/pull/4103" ]
null
3,201
false
Catch token invalid error in CI
The staging backend sometimes returns invalid token errors when trying to delete a repo. I modified the fixture in the test that uses staging to ignore this error.
https://github.com/huggingface/datasets/pull/3200
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3200", "html_url": "https://github.com/huggingface/datasets/pull/3200", "diff_url": "https://github.com/huggingface/datasets/pull/3200.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3200.patch", "merged_at": "2021-11-03T09:41:08" }
3,200
true
Bump huggingface_hub
huggingface_hub just released its first minor version, so we need to update the dependency. It was supposed to be part of 1.15.0, but I'm adding it for 1.15.1.
https://github.com/huggingface/datasets/pull/3199
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3199", "html_url": "https://github.com/huggingface/datasets/pull/3199", "diff_url": "https://github.com/huggingface/datasets/pull/3199.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3199.patch", "merged_at": "2021-11-02T21:41:40" }
3,199
true
Add Multi-Lingual LibriSpeech
Add https://www.openslr.org/94/
https://github.com/huggingface/datasets/pull/3198
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3198", "html_url": "https://github.com/huggingface/datasets/pull/3198", "diff_url": "https://github.com/huggingface/datasets/pull/3198.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3198.patch", "merged_at": "2021-11-04T17:09:22" }
3,198
true
Fix optimized encoding for arrays
Hi ! #3124 introduced a regression that made the benchmarks CI fail because of a bad array comparison when checking the first encoded element. This PR fixes this by making sure that encoding is applied on all sequence types except lists. cc @eladsegal fyi (no big deal)
https://github.com/huggingface/datasets/pull/3197
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3197", "html_url": "https://github.com/huggingface/datasets/pull/3197", "diff_url": "https://github.com/huggingface/datasets/pull/3197.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3197.patch", "merged_at": "2021-11-02T19:12:23" }
3,197
true
QOL improvements: auto-flatten_indices and desc in map calls
This PR: * automatically calls `flatten_indices` where needed: in `unique` and `save_to_disk` to avoid saving the indices file * adds descriptions to the map calls Fix #3040
https://github.com/huggingface/datasets/pull/3196
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3196", "html_url": "https://github.com/huggingface/datasets/pull/3196", "diff_url": "https://github.com/huggingface/datasets/pull/3196.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3196.patch", "merged_at": "2021-11-02T15:41:08" }
3,196
true
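An illustration of the indices mapping that #3196 now flattens automatically in `unique` and `save_to_disk` (a sketch of the mechanics, not the PR's code):

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": list(range(10))})
filtered = ds.filter(lambda x: x["a"] % 2 == 0)  # keeps an indices mapping
flat = filtered.flatten_indices()                # materializes the selected rows
print(flat["a"])  # [0, 2, 4, 6, 8]
```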
More robust `None` handling
PyArrow has explicit support for `null` values, so it makes sense to support Nones on our side as well. [Colab Notebook with examples](https://colab.research.google.com/drive/1zcK8BnZYnRe3Ao2271u1T19ag9zLEiy3?usp=sharing) Changes: * allow None for the features types with special encoding (`ClassLabel, TranslationVariableLanguages, Value, _ArrayXD`) * handle None in `class_encode_column` (also there is an option to stringify Nones and treat them as a class) * support None sorting in `sort` (use pandas for that) * handle None in align_labels_with_mapping * support for None in ArrayXD (converts `None` to `np.nan` to align the behavior with PyArrow) * support for None in the Audio/Image feature * allow promotion when concatenating tables (`pa.concat_tables(table_list, promote=True)`) and `null` row/~~column~~ broadcasting similar to pandas Additional notes: * use `null` instead of `none` for function arguments for consistency with existing `disable_nullable` * fixes a bug with the `update_metadata_with_features` call in `Dataset.rename_columns` * had to update some tests, let me know if that's ok TODO: - [x] check how the Audio features behaves with Nones - [x] Better None handling in `concatenate_datasets`/`add_item` - [x] Fix formatting with Nones - [x] Add Colab with examples - [x] Tests TODOs for subsequent PRs: - Mention None handling in the docs - Add `drop_null`/`fill_null` to `Dataset`/`DatasetDict` Fix #3181 #3253
https://github.com/huggingface/datasets/pull/3195
[ "I also created a PR regarding `disable_nullable` that must be always `False` by default, in order to always allow None values\r\nhttps://github.com/huggingface/datasets/pull/3211", "@lhoestq I addressed your comments, added tests, did some refactoring to make the implementation cleaner and added support for `Non...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3195", "html_url": "https://github.com/huggingface/datasets/pull/3195", "diff_url": "https://github.com/huggingface/datasets/pull/3195.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3195.patch", "merged_at": "2021-12-09T14:26:57" }
3,195
true
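For context on the Arrow side of the PR above: `null` is a first-class value in PyArrow arrays, which is what makes native `None` support possible without sentinel values. A tiny illustration with standard `pyarrow` API (not code from the PR itself):

```python
import pyarrow as pa

arr = pa.array(["a", None, "b"])  # Arrow stores the null explicitly in a validity bitmap
print(arr.null_count)             # 1
print(arr.to_pylist())            # ['a', None, 'b'] -- round-trips back to Python None
```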
Update link to Datasets Tagging app in Spaces
Fix #3193.
https://github.com/huggingface/datasets/pull/3194
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3194", "html_url": "https://github.com/huggingface/datasets/pull/3194", "diff_url": "https://github.com/huggingface/datasets/pull/3194.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3194.patch", "merged_at": "2021-11-08T10:36:22" }
3,194
true
Update link to datasets-tagging app
Once datasets-tagging has been transferred to Spaces: - huggingface/datasets-tagging#22 We should update the link in Datasets.
https://github.com/huggingface/datasets/issues/3193
[]
null
3,193
false
Multiprocessing filter/map (tests) not working on Windows
While running the tests, I found that the multiprocessing examples fail on Windows, or rather they do not complete: they cause a deadlock. I haven't dug deep into it, but they do not seem to work as-is. I currently have no time to test this in detail, but at least the tests seem not to run correctly (deadlocking). ## Steps to reproduce the bug ```shell pytest tests/test_arrow_dataset.py -k "test_filter_multiprocessing" pytest tests/test_arrow_dataset.py -k "test_map_multiprocessing" ``` ## Expected results The functionality to work on all platforms. ## Actual results Deadlock. ## Environment info - `datasets` version: 1.14.1.dev0 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.9.2, also tested with 3.7.9 - PyArrow version: 4.0.1
https://github.com/huggingface/datasets/issues/3192
[]
null
3,192
false
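One detail worth checking when debugging hangs like the one above: Windows uses the `spawn` start method, so worker processes re-import the calling module, and any script using `num_proc > 1` must guard its entry point or it can deadlock or re-spawn itself. A generic sketch (the toy data and function are illustrative, not taken from the failing tests):

```python
from datasets import Dataset

def double(batch):
    return {"x": [v * 2 for v in batch["x"]]}

if __name__ == "__main__":  # required on Windows: children re-import this module
    ds = Dataset.from_dict({"x": list(range(100))})
    ds = ds.map(double, batched=True, num_proc=2)
    print(ds[:5])
```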
Dataset viewer issue for '*compguesswhat*'
## Dataset viewer issue for '*compguesswhat*' **Link:** https://huggingface.co/datasets/compguesswhat File not found Am I the one who added this dataset ? No
https://github.com/huggingface/datasets/issues/3191
[ "```python\r\n>>> import datasets\r\n>>> dataset = datasets.load_dataset('compguesswhat', name='compguesswhat-original',split='train', streaming=True)\r\n>>> next(iter(dataset))\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-preview-backend/.ve...
null
3,191
false
combination of shuffle and filter results in a bug
## Describe the bug Hi, I would like to shuffle a dataset, then filter it based on each existing label. However, the combination of `filter` and `shuffle` seems to result in a bug. In the minimal example below, as you can see in the filtered results, the filtered labels are not unique, meaning filter has not worked. Any suggestions for a temporary fix are appreciated @lhoestq. Thanks. Best regards Rabeeh ## Steps to reproduce the bug ```python import numpy as np import datasets datasets = datasets.load_dataset('super_glue', 'rte', script_version="master") shuffled_data = datasets["train"].shuffle(seed=42) for label in range(2): print("label ", label) data = shuffled_data.filter(lambda example: int(example['label']) == label) print("length ", len(data), np.unique(data['label'])) ``` ## Expected results Filtering per label should only return the data with that specific label. ## Actual results As you can see, the filtered data per label still contains both labels [0, 1] ``` label 0 length 1249 [0 1] label 1 length 1241 [0 1] ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.12.1 - Platform: linux - Python version: 3.7.11 - PyArrow version: 5.0.0
https://github.com/huggingface/datasets/issues/3190
[ "I cannot reproduce this on master and pyarrow==4.0.1.\r\n", "Hi ! There was a regression in `datasets` 1.12 that introduced this bug. It has been fixed in #3019 in 1.13\r\n\r\nCan you try to update `datasets` and try again ?", "Thanks a lot, fixes with 1.13" ]
null
3,190
false
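As noted in the comments, the regression was fixed in 1.13. For anyone stuck on an affected version, one plausible workaround is to materialize the shuffled order with `flatten_indices` before filtering, so that `filter` no longer reads through a stale indices mapping. This is a sketch of the idea, not a fix endorsed in the thread:

```python
import datasets

ds = datasets.load_dataset("super_glue", "rte", split="train")
shuffled = ds.shuffle(seed=42).flatten_indices()  # drop the indices mapping first

for label in range(2):
    subset = shuffled.filter(lambda example: int(example["label"]) == label)
    assert set(subset["label"]) == {label}  # each subset now holds a single label
```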
conll2003 incorrect label explanation
In the [conll2003](https://huggingface.co/datasets/conll2003#data-fields) README, the labels are described as follows: > - `id`: a `string` feature. > - `tokens`: a `list` of `string` features. > - `pos_tags`: a `list` of classification labels, with possible values including `"` (0), `''` (1), `#` (2), `$` (3), `(` (4). > - `chunk_tags`: a `list` of classification labels, with possible values including `O` (0), `B-ADJP` (1), `I-ADJP` (2), `B-ADVP` (3), `I-ADVP` (4). > - `ner_tags`: a `list` of classification labels, with possible values including `O` (0), `B-PER` (1), `I-PER` (2), `B-ORG` (3), `I-ORG` (4) `B-LOC` (5), `I-LOC` (6) `B-MISC` (7), `I-MISC` (8). First of all, it would be great if we could get a list of ALL possible pos_tags. Second, the chunk tag labels cannot be correct. The description says the values go from 0 to 4, whereas the data shows values from at least 11 to 21, as well as 0. EDIT: not really a bug, sorry for mistagging.
https://github.com/huggingface/datasets/issues/3189
[ "Hi @BramVanroy,\r\n\r\nsince these fields are of type `ClassLabel` (you can check this with `dset.features`), you can inspect the possible values with:\r\n```python\r\ndset.features[field_name].feature.names # .feature because it's a sequence of labels\r\n```\r\n\r\nand to find the mapping between names and integ...
null
3,189
false
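As the reply in the comments explains, the full tag inventories live on the `ClassLabel` features rather than in the README excerpt. A short sketch of how to list all possible values for each tag column (real `datasets` API):

```python
from datasets import load_dataset

ds = load_dataset("conll2003", split="train")
for field in ("pos_tags", "chunk_tags", "ner_tags"):
    names = ds.features[field].feature.names  # .feature: each column is a Sequence of labels
    print(field, len(names), names[:5])
```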
conll2002 issues
**Link:** https://huggingface.co/datasets/conll2002 The dataset viewer throws a server error when trying to preview the dataset. ``` Message: Extraction protocol 'train' for file at 'https://raw.githubusercontent.com/teropa/nlp/master/resources/corpora/conll2002/esp.train' is not implemented yet ``` In addition, the "point of contact" has encoding issues and does not work when clicked. Am I the one who added this dataset ? No, @lhoestq did
https://github.com/huggingface/datasets/issues/3188
[ "Hi ! Thanks for reporting :)\r\n\r\nThis is related to https://github.com/huggingface/datasets/issues/2742, I'm working on it. It should fix the viewer for around 80 datasets.\r\n", "Ah, hadn't seen that sorry.\r\n\r\nThe scrambled \"point of contact\" is a separate issue though, I think.", "@lhoestq The \"poi...
null
3,188
false
Add ChrF(++) (as implemented in sacrebleu)
Similar to my [PR for TER](https://github.com/huggingface/datasets/pull/3153), it feels only right to also include ChrF and friends. These are present in sacrebleu, so they are as straightforward to implement as TER was. I tested the implementation against sacrebleu's own tests to verify it. You can try this below for yourself: ```python import datasets EPSILON = 1e-4 chrf = datasets.load_metric(r"path\to\datasets\metrics\chrf") test_cases = [ (["abcdefg"], ["hijklmnop"], 0.0), (["a"], ["b"], 0.0), ([""], ["b"], 0.0), ([""], ["ref"], 0.0), ([""], ["reference"], 0.0), (["aa"], ["ab"], 8.3333), (["a", "b"], ["a", "c"], 8.3333), (["a"], ["a"], 16.6667), (["a b c"], ["a b c"], 50.0), (["a b c"], ["abc"], 50.0), ([" risk assessment must be made of those who are qualified and expertise in the sector - these are the scientists ."], ["risk assessment has to be undertaken by those who are qualified and expert in that area - that is the scientists ."], 63.361730), ([" Die Beziehung zwischen Obama und Netanjahu ist nicht gerade freundlich. "], ["Das Verhältnis zwischen Obama und Netanyahu ist nicht gerade freundschaftlich."], 64.1302698), (["Niemand hat die Absicht, eine Mauer zu errichten"], ["Niemand hat die Absicht, eine Mauer zu errichten"], 100.0), ] for hyp, ref, score in test_cases: # Note the reference transformation which is different from sacrebleu's input format results = chrf.compute(predictions=hyp, references=[[r] for r in ref], char_order=6, word_order=0, beta=3, eps_smoothing=True) if abs(score - results["score"]) > EPSILON: print(f"expected {score}, got {results['score']} for {hyp} - {ref}") test_cases_effective_order = [ (["a"], ["a"], 100.0), ([""], ["reference"], 0.0), (["a b c"], ["a b c"], 100.0), (["a b c"], ["abc"], 100.0), ([""], ["c"], 0.0), (["a", "b"], ["a", "c"], 50.0), (["aa"], ["ab"], 25.0), ] for hyp, ref, score in test_cases_effective_order: # Note the reference transformation which is different from sacrebleu's input format results = chrf.compute(predictions=hyp, references=[[r] for r in ref], char_order=6, word_order=0, beta=3, eps_smoothing=False) if abs(score - results["score"]) > EPSILON: print(f"expected {score}, got {results['score']} for {hyp} - {ref}") test_cases_keep_whitespace = [ ( ["Die Beziehung zwischen Obama und Netanjahu ist nicht gerade freundlich."], ["Das Verhältnis zwischen Obama und Netanyahu ist nicht gerade freundschaftlich."], 67.3481606, ), ( ["risk assessment must be made of those who are qualified and expertise in the sector - these are the scientists ."], ["risk assessment has to be undertaken by those who are qualified and expert in that area - that is the scientists ."], 65.2414427, ), ] for hyp, ref, score in test_cases_keep_whitespace: # Note the reference transformation which is different from sacrebleu's input format results = chrf.compute(predictions=hyp, references=[[r] for r in ref], char_order=6, word_order=0, beta=3, whitespace=True) if abs(score - results["score"]) > EPSILON: print(f"expected {score}, got {results['score']} for {hyp} - {ref}") predictions = ["The relationship between Obama and Netanyahu is not exactly friendly."] references = [["The ties between Obama and Netanyahu are not particularly friendly."]] print(chrf.compute(predictions=predictions, references=references)) ```
https://github.com/huggingface/datasets/pull/3187
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3187", "html_url": "https://github.com/huggingface/datasets/pull/3187", "diff_url": "https://github.com/huggingface/datasets/pull/3187.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3187.patch", "merged_at": "2021-11-02T14:31:26" }
3,187
true
Dataset viewer for nli_tr
## Dataset viewer issue for '*nli_tr*' **Link:** https://huggingface.co/datasets/nli_tr Hello, Thank you for the new dataset preview feature that will help the users to view the datasets online. We just noticed that the dataset viewer widget in the `nli_tr` dataset shows the error below. The error must be due to a temporary problem that may have blocked access to the dataset through the dataset viewer. But the dataset is currently accessible through the link in the error message. May we kindly ask if it would be possible to rerun the job so that it can access the dataset for the dataset viewer function? Thank you. Emrah ------------------------------------------ Server Error Status code: 404 Exception: FileNotFoundError Message: [Errno 2] No such file or directory: 'zip://snli_tr_1.0_train.jsonl::https://tabilab.cmpe.boun.edu.tr/datasets/nli_datasets/snli_tr_1.0.zip ------------------------------------------ Am I the one who added this dataset ? Yes
https://github.com/huggingface/datasets/issues/3186
[ "It's an issue with the streaming mode:\r\n\r\n```python\r\n>>> import datasets\r\n>>> dataset = datasets.load_dataset('nli_tr', name='snli_tr',split='test', streaming=True)\r\n>>> next(iter(dataset))\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datas...
null
3,186
false
7z dataset preview not implemented?
## Dataset viewer issue for dataset 'samsum' **Link:** https://huggingface.co/datasets/samsum Server Error Status code: 400 Exception: NotImplementedError Message: Extraction protocol '7z' for file at 'https://arxiv.org/src/1911.12237v2/anc/corpus.7z' is not implemented yet
https://github.com/huggingface/datasets/issues/3185
[ "It's a bug in the dataset viewer: the dataset cannot be downloaded in streaming mode, but since the dataset is relatively small, the dataset viewer should have fallback to normal mode. Working on a fix.", "Fixed. https://huggingface.co/datasets/samsum/viewer/samsum/train\r\n\r\n<img width=\"1563\" alt=\"Capture ...
null
3,185
false
RONEC v2
Hi, as we've recently finished the new RONEC (Romanian Named Entity Corpus), we'd like to update the dataset here as well. It's actually essential, as the links to v1 are no longer valid. In reality we'd like to completely replace v1, as v2 is a full re-annotation of v1 with additional data (up to 2x the size of v1). I've run `make style` and all the dummy and real data tests, and they passed. I hope it's okay to merge the new RONEC v2 into datasets. Thanks!
https://github.com/huggingface/datasets/pull/3184
[ "@lhoestq Thanks for the review. I totally understand what you are saying. Normally, I would definitely agree with you, but in this particular case, the quality of v1 is poor, and the dataset itself is small (at the time we created v1 it was the only RO NER dataset, and its size was limited by the available resourc...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3184", "html_url": "https://github.com/huggingface/datasets/pull/3184", "diff_url": "https://github.com/huggingface/datasets/pull/3184.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3184.patch", "merged_at": "2021-11-02T16:02:22" }
3,184
true
Add missing docstring to DownloadConfig
Document the `use_etag` and `num_proc` attributes in `DownloadConig`.
https://github.com/huggingface/datasets/pull/3183
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3183", "html_url": "https://github.com/huggingface/datasets/pull/3183", "diff_url": "https://github.com/huggingface/datasets/pull/3183.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3183.patch", "merged_at": "2021-11-02T10:25:37" }
3,183
true
Don't memoize strings when hashing since two identical strings may have different python ids
When hashing an object that contains the same string several times, the hashing could return a different hash depending on whether or not the identical strings share the same python `id()`. Here is some example code that shows how the issue can affect the caching: ```python import json import pyarrow as pa from datasets.features import Features from datasets.fingerprint import Hasher schema = pa.schema([pa.field("some_string", pa.string()), pa.field("another_string", pa.string())]) features_from_schema = Features.from_arrow_schema(schema) Hasher.hash(features_from_schema) # dffa9dca9a73fd8c features_dict = json.loads('{"some_string": {"dtype": "string", "id": null, "_type": "Value"}, "another_string": {"dtype": "string", "id": null, "_type": "Value"}}') features_from_json = Features.from_dict(features_dict) Hasher.hash(features_from_json) # 3812e76b15e6420e features_from_schema == features_from_json # True ``` This is because in `features_dict`, some strings like "dtype" are repeated but don't share the same id, contrary to the ones in `features_from_schema`. I fixed that by disabling memoization for strings. This could be optimized in the future by implementing a smarter memoization with special handling for strings.
https://github.com/huggingface/datasets/pull/3182
[ "This change slows down the hash computation a little bit but from my tests it doesn't look too impactful. So I think it's fine to merge this." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3182", "html_url": "https://github.com/huggingface/datasets/pull/3182", "diff_url": "https://github.com/huggingface/datasets/pull/3182.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3182.patch", "merged_at": "2021-11-02T09:35:37" }
3,182
true
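The underlying pitfall is easy to reproduce in plain Python, independent of `datasets`: equal strings are not guaranteed to be the same object, so a memo table keyed on `id()` can take different paths for identical values:

```python
a = "some_string"
b = "".join(["some", "_", "string"])  # equal value, constructed at runtime

print(a == b)        # True  -- the values are identical
print(a is b)        # usually False -- different objects, different id()
print(id(a), id(b))  # the keys an id()-based memoizer would (wrongly) distinguish
```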
`None` converted to `"None"` when loading a dataset
## Describe the bug When loading a dataset, `None` values of the type `NoneType` are converted to `'None'` of the type `str`. ## Steps to reproduce the bug ```python from datasets import load_dataset qasper = load_dataset("qasper", split="train", download_mode="reuse_cache_if_exists") print(qasper[60]["full_text"]["section_name"]) ``` When installing version 1.14.0, the output is `[None, 'Introduction', 'Benchmark Datasets', ...]` When installing from the master branch, the output is `['None', 'Introduction', 'Benchmark Datasets', ...]` Notice how the first element was changed from `NoneType` to `str`. ## Expected results `None` should stay as is. ## Actual results `None` is converted to a string. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: master - Platform: Linux-4.4.0-19041-Microsoft-x86_64-with-glibc2.17 - Python version: 3.8.10 - PyArrow version: 4.0.1
https://github.com/huggingface/datasets/issues/3181
[ "Hi @eladsegal, thanks for reporting.\r\n\r\n@mariosasko I saw you are already working on this, but maybe my comment will be useful to you.\r\n\r\nAll values are casted to their corresponding feature type (including `None` values). For example if the feature type is `Value(\"bool\")`, `None` is casted to `False`.\r...
null
3,181
false
fix label mapping
Fixing the label mapping for hlgd: 0 corresponds to same event and 1 corresponds to different event. <img width="642" alt="Capture d’écran 2021-10-29 à 10 39 58 AM" src="https://user-images.githubusercontent.com/16107619/139454810-1f225e3d-ad48-44a8-b8b1-9205c9533839.png"> <img width="638" alt="Capture d’écran 2021-10-29 à 10 40 09 AM" src="https://user-images.githubusercontent.com/16107619/139454813-93066a3c-7d33-4f56-b133-2f1a7661e438.png">
https://github.com/huggingface/datasets/pull/3180
[ "heck, test failings. moving to draft. will come back to this later today hopefully", "Thanks for fixing this :)\r\nI just updated the dataset_infos.json and added the missing `pretty_name` tag to the dataset card", "thank you @lhoestq! running around as always it felt through as a lower priority..." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3180", "html_url": "https://github.com/huggingface/datasets/pull/3180", "diff_url": "https://github.com/huggingface/datasets/pull/3180.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3180.patch", "merged_at": "2021-11-02T10:37:12" }
3,180
true
Cannot load dataset when the config name is "special"
## Describe the bug After https://github.com/huggingface/datasets/pull/3159, we can get the config name of "Check/region_1", which is "Check___region_1". But now we cannot load the dataset (not sure it's related to the above PR though). It's the case for all the similar datasets, listed in https://github.com/huggingface/datasets-preview-backend/issues/78 ## Steps to reproduce the bug ```python >>> from datasets import get_dataset_config_names >>> get_dataset_config_names("Check/region_1") ['Check___region_1'] >>> load_dataset("Check/region_1") Using custom data configuration Check___region_1-d2b3bc48f11c9be2 Downloading and preparing dataset json/Check___region_1 to /home/slesage/.cache/huggingface/datasets/json/Check___region_1-d2b3bc48f11c9be2/0.0.0/c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426... 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:00<00:00, 4443.12it/s] 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:00<00:00, 1277.19it/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/load.py", line 1632, in load_dataset builder_instance.download_and_prepare( File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py", line 607, in download_and_prepare self._download_and_prepare( File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py", line 697, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1159, in _prepare_split writer.write_table(table) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 442, in write_table pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 442, in <listcomp> pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema) File 
"pyarrow/table.pxi", line 1249, in pyarrow.lib.Table.__getitem__ File "pyarrow/table.pxi", line 1825, in pyarrow.lib.Table.column File "pyarrow/table.pxi", line 1800, in pyarrow.lib.Table._ensure_integer_index KeyError: 'Field "builder_name" does not exist in table schema' ``` Loading in streaming mode also returns something strange: ```python >>> list(load_dataset("Check/region_1", streaming=True, split="train")) Using custom data configuration Check___region_1-d2b3bc48f11c9be2 [{'builder_name': None, 'citation': '', 'config_name': None, 'dataset_size': None, 'description': '', 'download_checksums': None, 'download_size': None, 'features': {'speech': {'feature': {'dtype': 'float64', 'id': None, '_type': 'Value'}, 'length': -1, 'id': None, '_type': 'Sequence'}, 'sampling_rate': {'dtype': 'int64', 'id': None, '_type': 'Value'}, 'label': {'dtype': 'string', 'id': None, '_type': 'Value'}}, 'homepage': '', 'license': '', 'post_processed': None, 'post_processing_size': None, 'size_in_bytes': None, 'splits': None, 'supervised_keys': None, 'task_templates': None, 'version': None}, {'_data_files': [{'filename': 'dataset.arrow'}], '_fingerprint': 'f1702bb5533c549c', '_format_columns': ['speech', 'sampling_rate', 'label'], '_format_kwargs': {}, '_format_type': None, '_indexes': {}, '_indices_data_files': None, '_output_all_columns': False, '_split': None}] ``` ## Expected results The dataset should be loaded ## Actual results An error occurs ## Environment info - `datasets` version: 1.14.1.dev0 - Platform: Linux-5.11.0-1020-aws-x86_64-with-glibc2.31 - Python version: 3.9.6 - PyArrow version: 4.0.1
https://github.com/huggingface/datasets/issues/3179
[ "The issue is that the datasets are malformed. Not a bug with the datasets library" ]
null
3,179
false
"Property couldn't be hashed properly" even though fully picklable
## Describe the bug I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It should not be an issue either, since spaCy objects are picklable. ## Steps to reproduce the bug Here is a [colab](https://colab.research.google.com/drive/1gt75LCBIzsmBMvvipEOvWulvyZseBiA7?usp=sharing) but for some reason I cannot reproduce it there. That may have to do with logging/tqdm on Colab, or with running things in notebooks. I tried below code on Windows and Ubuntu as a Python script and getting the same issue (warning below). ```python import pickle from datasets import load_dataset import spacy class Processor: def __init__(self): self.nlp = spacy.load("en_core_web_sm", disable=["tagger", "parser", "ner", "lemmatizer"]) @staticmethod def collate(batch): return [d["en"] for d in batch] def parse(self, batch): batch = batch["translation"] return {"translation_tok": [{"en_tok": " ".join([t.text for t in doc])} for doc in self.nlp.pipe(self.collate(batch))]} def process(self): ds = load_dataset("wmt16", "de-en", split="train[:10%]") ds = ds.map(self.parse, batched=True, num_proc=6) if __name__ == '__main__': pr = Processor() # succeeds with open("temp.pkl", "wb") as f: pickle.dump(pr, f) print("Successfully pickled!") pr.process() ``` --- Here is a small change that includes `Hasher.hash` that shows that the hasher cannot seem to successfully pickle parts form the NLP object. ```python from datasets.fingerprint import Hasher import pickle from datasets import load_dataset import spacy class Processor: def __init__(self): self.nlp = spacy.load("en_core_web_sm", disable=["tagger", "parser", "ner", "lemmatizer"]) @staticmethod def collate(batch): return [d["en"] for d in batch] def parse(self, batch): batch = batch["translation"] return {"translation_tok": [{"en_tok": " ".join([t.text for t in doc])} for doc in self.nlp.pipe(self.collate(batch))]} def process(self): ds = load_dataset("wmt16", "de-en", split="train[:10]") return ds.map(self.parse, batched=True) if __name__ == '__main__': pr = Processor() # succeeds with open("temp.pkl", "wb") as f: pickle.dump(pr, f) print("Successfully pickled class instance!") # succeeds with open("temp.pkl", "wb") as f: pickle.dump(pr.nlp, f) print("Successfully pickled nlp!") # fails print(Hasher.hash(pr.nlp)) pr.process() ``` ## Expected results This to be picklable, working (fingerprinted), and no warning. ## Actual results In the first snippet, I get this warning Parameter 'function'=<function Processor.parse at 0x7f44982247a0> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed. In the second, I get this traceback which directs to the `Hasher.hash` line. 
``` Traceback (most recent call last): File " \Python\Python36\lib\pickle.py", line 918, in save_global obj2, parent = _getattribute(module, name) File " \Python\Python36\lib\pickle.py", line 266, in _getattribute .format(name, obj)) AttributeError: Can't get local attribute 'add_codes.<locals>.ErrorsWithCodes' on <function add_codes at 0x00000296FF606EA0> During handling of the above exception, another exception occurred: Traceback (most recent call last): File " scratch_4.py", line 40, in <module> print(Hasher.hash(pr.nlp)) File " \lib\site-packages\datasets\fingerprint.py", line 191, in hash return cls.hash_default(value) File " \lib\site-packages\datasets\fingerprint.py", line 184, in hash_default return cls.hash_bytes(dumps(value)) File " \lib\site-packages\datasets\utils\py_utils.py", line 345, in dumps dump(obj, file) File " \lib\site-packages\datasets\utils\py_utils.py", line 320, in dump Pickler(file, recurse=True).dump(obj) File " \lib\site-packages\dill\_dill.py", line 498, in dump StockPickler.dump(self, obj) File " \Python\Python36\lib\pickle.py", line 409, in dump self.save(obj) File " \Python\Python36\lib\pickle.py", line 521, in save self.save_reduce(obj=obj, *rv) File " \Python\Python36\lib\pickle.py", line 634, in save_reduce save(state) File " \Python\Python36\lib\pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File " \lib\site-packages\dill\_dill.py", line 990, in save_module_dict StockPickler.save_dict(pickler, obj) File " \Python\Python36\lib\pickle.py", line 821, in save_dict self._batch_setitems(obj.items()) File " \Python\Python36\lib\pickle.py", line 847, in _batch_setitems save(v) File " \Python\Python36\lib\pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File " \Python\Python36\lib\pickle.py", line 781, in save_list self._batch_appends(obj) File " \Python\Python36\lib\pickle.py", line 805, in _batch_appends save(x) File " \Python\Python36\lib\pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File " \Python\Python36\lib\pickle.py", line 736, in save_tuple save(element) File " \Python\Python36\lib\pickle.py", line 521, in save self.save_reduce(obj=obj, *rv) File " \Python\Python36\lib\pickle.py", line 634, in save_reduce save(state) File " \Python\Python36\lib\pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File " \Python\Python36\lib\pickle.py", line 736, in save_tuple save(element) File " \Python\Python36\lib\pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File " \lib\site-packages\dill\_dill.py", line 990, in save_module_dict StockPickler.save_dict(pickler, obj) File " \Python\Python36\lib\pickle.py", line 821, in save_dict self._batch_setitems(obj.items()) File " \Python\Python36\lib\pickle.py", line 847, in _batch_setitems save(v) File " \Python\Python36\lib\pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File " \lib\site-packages\dill\_dill.py", line 1176, in save_instancemethod0 pickler.save_reduce(MethodType, (obj.__func__, obj.__self__), obj=obj) File " \Python\Python36\lib\pickle.py", line 610, in save_reduce save(args) File " \Python\Python36\lib\pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File " \Python\Python36\lib\pickle.py", line 736, in save_tuple save(element) File " \Python\Python36\lib\pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File " 
\lib\site-packages\datasets\utils\py_utils.py", line 523, in save_function obj=obj, File " \Python\Python36\lib\pickle.py", line 610, in save_reduce save(args) File " \Python\Python36\lib\pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File " \Python\Python36\lib\pickle.py", line 751, in save_tuple save(element) File " \Python\Python36\lib\pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File " \lib\site-packages\dill\_dill.py", line 990, in save_module_dict StockPickler.save_dict(pickler, obj) File " \Python\Python36\lib\pickle.py", line 821, in save_dict self._batch_setitems(obj.items()) File " \Python\Python36\lib\pickle.py", line 847, in _batch_setitems save(v) File " \Python\Python36\lib\pickle.py", line 521, in save self.save_reduce(obj=obj, *rv) File " \Python\Python36\lib\pickle.py", line 605, in save_reduce save(cls) File " \Python\Python36\lib\pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File " \lib\site-packages\dill\_dill.py", line 1439, in save_type StockPickler.save_global(pickler, obj, name=name) File " \Python\Python36\lib\pickle.py", line 922, in save_global (obj, module_name, name)) _pickle.PicklingError: Can't pickle <class 'spacy.errors.add_codes.<locals>.ErrorsWithCodes'>: it's not found as spacy.errors.add_codes.<locals>.ErrorsWithCodes ``` ## Environment info Tried on both Linux and Windows - `datasets` version: 1.14.0 - Platform: Windows-10-10.0.19041-SP0 + Python 3.7.9; Linux-5.11.0-38-generic-x86_64-with-Ubuntu-20.04-focal + Python 3.7.12 - PyArrow version: 6.0.0
https://github.com/huggingface/datasets/issues/3178
[ "After some digging, I found that this is caused by `dill` and using `recurse=True)` when trying to dump the object. The problem also occurs without multiprocessing. I can only find [the following information](https://dill.readthedocs.io/en/latest/dill.html#dill._dill.dumps) about this:\r\n\r\n> If recurse=True, th...
null
3,178
false
More control over TQDM when using map/filter with multiple processes
It would help with the clutter in my terminal if tqdm were only shown for rank 0 when using `num_proc>1` in the map and filter methods of datasets. ```python dataset.map(lambda examples: tokenize(examples["text"]), batched=True, num_proc=6) ``` The above snippet leads to a lot of TQDM bars and, depending on your terminal, these will not overwrite each other but keep pushing each other down. ``` #0: 0%| | 0/13 [00:00<?, ?ba/s] #1: 0%| | 0/13 [00:00<?, ?ba/s] #2: 0%| | 0/13 [00:00<?, ?ba/s] #3: 0%| | 0/13 [00:00<?, ?ba/s] #4: 0%| | 0/13 [00:00<?, ?ba/s] #5: 0%| | 0/13 [00:00<?, ?ba/s] #0: 8%| | 1/13 [00:00<?, ?ba/s] #1: 8%| | 1/13 [00:00<?, ?ba/s] ... ``` Instead, it would be welcome if we had the option to only show the progress of rank 0.
https://github.com/huggingface/datasets/issues/3177
[ "Hi,\r\n\r\nIt's hard to provide an API that would cover all use-cases with tqdm in this project.\r\n\r\nHowever, you can make it work by defining a custom decorator (a bit hacky tho) as follows:\r\n```python\r\nimport datasets\r\n\r\ndef progress_only_on_rank_0(func):\r\n def wrapper(*args, **kwargs):\r\n ...
null
3,177
false
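Besides the decorator hack referenced in the comments, recent versions of `datasets` expose a coarse, all-or-nothing switch. It silences every bar rather than keeping only #0, but it is the closest built-in control; a sketch:

```python
import datasets

datasets.disable_progress_bar()  # silences all bars, including the per-process #0..#5 ones
ds = datasets.Dataset.from_dict({"x": list(range(1000))})
ds = ds.map(lambda example: example, num_proc=2)
datasets.enable_progress_bar()   # restore bars for later calls
```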
OpenSLR dataset: update generate_examples to properly extract data for SLR83
Fixed #3168. The SLR83 indices are CSV files, and there wasn't any code in openslr.py to process these files properly, so the end result was an empty table. I've added code to process these CSV files correctly.
https://github.com/huggingface/datasets/pull/3176
[ "Also fix #3125." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3176", "html_url": "https://github.com/huggingface/datasets/pull/3176", "diff_url": "https://github.com/huggingface/datasets/pull/3176.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3176.patch", "merged_at": "2021-10-29T10:04:09" }
3,176
true
Add docs for `to_tf_dataset`
This PR adds some documentation for new features released in v1.13.0, with the main addition being `to_tf_dataset`: - Show how to use `to_tf_dataset` in the tutorial, and move `set_format(type='tensorflow'...)` to the Process section (let me know if I'm missing anything @Rocketknight1 πŸ˜…). - Add an example for loading dataset from multiple zipped CSV files to the Load section. - Add an example for removing columns for an `IterableDataset`. - Add graphic for visualizing streaming.
https://github.com/huggingface/datasets/pull/3175
[ "This looks great, thank you!", "Thanks !\r\n\r\nFor some reason the new GIF is 6MB, which is a bit heavy for an image on a website. The previous one was around 200KB though which is perfect. For a good experience we usually expect images to be less than 500KB - otherwise for users with poor connection it takes t...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3175", "html_url": "https://github.com/huggingface/datasets/pull/3175", "diff_url": "https://github.com/huggingface/datasets/pull/3175.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3175.patch", "merged_at": "2021-11-03T10:07:23" }
3,175
true
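For readers who land on this record before the docs: a typical `to_tf_dataset` call looks roughly like the sketch below. `to_tf_dataset` and `DataCollatorWithPadding` are real APIs (exact signatures depend on your `datasets`/`transformers` versions); the dataset, checkpoint, and column names are placeholders:

```python
from datasets import load_dataset
from transformers import AutoTokenizer, DataCollatorWithPadding

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
ds = load_dataset("glue", "sst2", split="train")
ds = ds.map(lambda ex: tok(ex["sentence"], truncation=True), batched=True)

tf_ds = ds.to_tf_dataset(           # yields a batched, shuffled tf.data.Dataset
    columns=["input_ids", "attention_mask"],
    label_cols=["label"],
    batch_size=32,
    shuffle=True,
    collate_fn=DataCollatorWithPadding(tokenizer=tok, return_tensors="tf"),
)
```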
Asserts replaced by exceptions (huggingface#3171)
I've replaced two asserts with proper exceptions, following the guidelines described in issue #3171 and the contributing guidelines. PS: This is one of my first PRs, hoping I don't break anything!
https://github.com/huggingface/datasets/pull/3174
[ "Your first PR went smoothly, well done!\r\nYou are welcome to continue contributing to this project.\r\nGrΓ cies, @joseporiolayats! πŸ˜‰ " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3174", "html_url": "https://github.com/huggingface/datasets/pull/3174", "diff_url": "https://github.com/huggingface/datasets/pull/3174.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3174.patch", "merged_at": "2021-10-29T13:08:43" }
3,174
true
Fix issue with filelock filename being too long on encrypted filesystems
Infer max filename length in filelock on Unix-like systems. Should fix problems on encrypted filesystems such as eCryptfs. Fix #2924 cc: @lmmx
https://github.com/huggingface/datasets/pull/3173
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3173", "html_url": "https://github.com/huggingface/datasets/pull/3173", "diff_url": "https://github.com/huggingface/datasets/pull/3173.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3173.patch", "merged_at": "2021-10-29T09:42:24" }
3,173
true
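The mechanism behind the fix fits in a few lines: POSIX exposes the per-filesystem filename limit, so a lock name can be truncated to whatever the target directory actually supports (an eCryptfs home reports 143 instead of the usual 255). `os.pathconf` is standard library; the truncation scheme below is a simplified illustration, not the exact code merged here:

```python
import os

def max_filename_length(directory: str) -> int:
    try:
        return os.pathconf(directory, "PC_NAME_MAX")  # e.g. 255 on ext4, 143 on eCryptfs
    except (OSError, ValueError, AttributeError):     # non-POSIX platform or unsupported FS
        return 255

def lock_path(directory: str, name: str) -> str:
    limit = max_filename_length(directory)
    suffix = ".lock"
    return os.path.join(directory, name[: limit - len(suffix)] + suffix)

print(lock_path("/tmp", "a" * 300))  # filename stays within the filesystem's limit
```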
`SystemError 15` thrown in `Dataset.__del__` when using `Dataset.map()` with `num_proc>1`
## Describe the bug I use `datasets.map` to preprocess some data in my application. The error `SystemError 15` is thrown at the end of the execution of `Dataset.map()` (only with `num_proc>1`). Traceback included below. The exception is raised only when the code runs within a specific context. Despite ~10h spent investigating this issue, I have failed to isolate the bug, so let me describe my setup. In my project, `Dataset` is wrapped into a `LightningDataModule` and the data is preprocessed when calling `LightningDataModule.setup()`. Calling `.setup()` in an isolated script works fine (even when wrapped with `hydra.main()`). However, when calling `.setup()` within the experiment script (which depends on `pytorch_lightning`), the script crashes with `SystemError 15`. I could avoid throwing this error by modifying `Dataset.__del__()` (see below), but I believe this only moves the problem somewhere else. I am completely stuck with this issue, any hint would be welcome. ```python class Dataset: ... def __del__(self): if hasattr(self, "_data"): _ = self._data # <- ugly trick that allows avoiding the issue. del self._data if hasattr(self, "_indices"): del self._indices ``` ## Steps to reproduce the bug ```python # Unfortunately I couldn't isolate the bug. ``` ## Expected results Calling `Dataset.map()` without throwing an exception. Or at least raising a more detailed exception/traceback. ## Actual results ``` Exception ignored in: <function Dataset.__del__ at 0x7f7cec179160>██████████████████████████████████████████████████| 5/5 [00:05<00:00, 1.17ba/s] Traceback (most recent call last): File ".../python3.8/site-packages/datasets/arrow_dataset.py", line 906, in __del__ del self._data File ".../python3.8/site-packages/ray/worker.py", line 1033, in sigterm_handler sys.exit(signum) SystemExit: 15 ``` ## Environment info Tested on 2 environments: **Environment 1.** - `datasets` version: 1.14.0 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.8 - PyArrow version: 6.0.0 **Environment 2.** - `datasets` version: 1.14.0 - Platform: Linux-4.18.0-305.19.1.el8_4.x86_64-x86_64-with-glibc2.28 - Python version: 3.9.7 - PyArrow version: 6.0.0
https://github.com/huggingface/datasets/issues/3172
[ "NB: even if the error is raised, the dataset is successfully cached. So restarting the script after every `map()` allows to ultimately run the whole preprocessing. But this prevents to realistically run the code over multiple nodes.", "Hi,\r\n\r\nIt's not easy to debug the problem without the script. I may be wr...
null
3,172
false
Raise exceptions instead of using assertions for control flow
Motivated by https://github.com/huggingface/transformers/issues/12789 in Transformers, one welcome change would be replacing assertions with proper exceptions. The only type of assertions we should keep are those used as sanity checks. Currently, there is a total of 87 files with `assert` statements (located under `datasets` and `src/datasets`), so when working on this, to manage the PR size, only modify 4-5 files at most before submitting a PR.
https://github.com/huggingface/datasets/issues/3171
[ "Adding the remaining tasks for this issue to help new code contributors. \r\n$ cd src/datasets && ack assert -lc \r\n- [x] commands/convert.py:1\r\n- [x] arrow_reader.py:3\r\n- [x] load.py:7\r\n- [x] utils/py_utils.py:2\r\n- [x] features/features.py:9\r\n- [x] arrow_writer.py:7\r\n- [x] search.py:6\r\n- [x] table...
null
3,171
false
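The shape of each conversion is mechanical; a representative before/after (illustrative, not a specific diff from the linked PRs):

```python
predictions, references = [1, 0], [1]

# Before: a bare AssertionError, and the check disappears entirely under `python -O`
# assert len(predictions) == len(references), "Mismatched number of predictions/references"

# After: an explicit, typed exception that survives -O and is easier to catch
try:
    if len(predictions) != len(references):
        raise ValueError("Mismatched number of predictions/references")
except ValueError as err:
    print(err)
```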
Preserve ordering in `zip_dict`
Replace `set` with the `unique_values` generator in `zip_dict`. This PR fixes the problem with the different ordering of the example keys across different Python sessions caused by the `zip_dict` call in `Features.decode_example`.
https://github.com/huggingface/datasets/pull/3170
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3170", "html_url": "https://github.com/huggingface/datasets/pull/3170", "diff_url": "https://github.com/huggingface/datasets/pull/3170.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3170.patch", "merged_at": "2021-10-29T13:09:37" }
3,170
true
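The core idea of the fix in one sketch: a `set` offers no ordering guarantee across sessions, so deduplication has to remember first-seen order explicitly. The real helper in `datasets` may differ in detail:

```python
def unique_values(values):
    """Yield each value once, in first-seen order."""
    seen = set()
    for value in values:
        if value not in seen:
            seen.add(value)
            yield value

assert list(unique_values(["b", "a", "b", "c", "a"])) == ["b", "a", "c"]
```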
Configurable max filename length in file locks
Resolve #2924 (https://github.com/huggingface/datasets/issues/2924#issuecomment-952330956) wherein the assumption of file lock maximum filename length to be 255 raises an OSError on encrypted drives (ecryptFS on Linux uses part of the lower filename, reducing the maximum filename size to 143). Allowing this limit to be set in the config module allows this to be modified by users. Will not affect Windows users, as their class passes 255 on init explicitly. Reproduced with the following example ([the first few lines of a script from Lightning Flash](https://lightning-flash.readthedocs.io/en/latest/reference/speech_recognition.html), fine-tuning a HF model): ```py import torch import flash from flash.audio import SpeechRecognition, SpeechRecognitionData from flash.core.data.utils import download_data # 1. Create the DataModule download_data("https://pl-flash-data.s3.amazonaws.com/timit_data.zip", "./data") datamodule = SpeechRecognitionData.from_json( input_fields="file", target_fields="text", train_file="data/timit/train.json", test_file="data/timit/test.json", ) ``` Which gave this traceback: ```py Traceback (most recent call last): File "lf_ft.py", line 10, in <module> datamodule = SpeechRecognitionData.from_json( File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_module.py", line 1005, in from_json return cls.from_data_source( File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_module.py", line 571, in from_data_source train_dataset, val_dataset, test_dataset, predict_dataset = data_source.to_datasets( File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_source.py", line 307, in to_datasets train_dataset = self.generate_dataset(train_data, RunningStage.TRAINING) File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_source.py", line 344, in generate_dataset data = load_data(data, mock_dataset) File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/audio/speech_recognition/data.py", line 103, in load_data dataset_dict = load_dataset(self.filetype, data_files={stage: str(file)}) File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/load.py", line 1599, in load_dataset builder_instance = load_dataset_builder( File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/load.py", line 1457, in load_dataset_builder builder_instance: DatasetBuilder = builder_cls( File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/builder.py", line 285, in __init__ with FileLock(lock_path): File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/utils/filelock.py", line 323, in __enter__ self.acquire() File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/utils/filelock.py", line 272, in acquire self._acquire() File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/utils/filelock.py", line 403, in _acquire fd = os.open(self._lock_file, open_mode) OSError: [Errno 36] File name too long: '/home/louis/.cache/huggingface/datasets/_home_louis_.cache_huggingface_datasets_json_default-98e6813a547f72fa_0.0.0_c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426.lock' ``` Note the filename is 145 chars long: ``` >>> len("_home_louis_.cache_huggingface_datasets_json_default-98e6813a547f72fa_0.0.0_c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426.lock") 145 ``` After installing datasets as an editable local 
package and modifying the script I was running to first include: ```py import datasets datasets.config.MAX_DATASET_CONFIG_ID_READABLE_LENGTH = 143 ``` The error goes away. If I instead deliberately set the value incorrectly as 144, the OSError returns: ``` Traceback (most recent call last): File "lf_ft.py", line 14, in <module> datamodule = SpeechRecognitionData.from_json( File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_module.py", line 1005, in from_json return cls.from_data_source( File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_module.py", line 571, in from_data_source train_dataset, val_dataset, test_dataset, predict_dataset = data_source.to_datasets( File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_source.py", line 307, in to_datasets train_dataset = self.generate_dataset(train_data, RunningStage.TRAINING) File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_source.py", line 344, in generate_dataset data = load_data(data, mock_dataset) File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/audio/speech_recognition/data.py", line 103, in load_data dataset_dict = load_dataset(self.filetype, data_files={stage: str(file)}) File "/home/louis/dev/hf_datasets/src/datasets/load.py", line 1605, in load_dataset builder_instance = load_dataset_builder( File "/home/louis/dev/hf_datasets/src/datasets/load.py", line 1463, in load_dataset_builder builder_instance: DatasetBuilder = builder_cls( File "/home/louis/dev/hf_datasets/src/datasets/builder.py", line 285, in __init__ with FileLock(lock_path): File "/home/louis/dev/hf_datasets/src/datasets/utils/filelock.py", line 326, in __enter__ self.acquire() File "/home/louis/dev/hf_datasets/src/datasets/utils/filelock.py", line 275, in acquire self._acquire() File "/home/louis/dev/hf_datasets/src/datasets/utils/filelock.py", line 406, in _acquire fd = os.open(self._lock_file, open_mode) OSError: [Errno 36] File name too long: '/home/louis/.cache/huggingface/datasets/_home_louis_.cache_huggingface_datasets_json_default-32c812b5c1272d64_0.0.0_c2d554c3377ea79c7664b93dc65d0803b45e3279...-5794079643713042223.lock' ```
https://github.com/huggingface/datasets/pull/3169
[ "I've also added environment variable configuration so that this can be configured once per machine (e.g. in a `.bashrc` file), as is already done for a few other config variables here.", "Cancelling PR in favour of @mariosasko's in #3173" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3169", "html_url": "https://github.com/huggingface/datasets/pull/3169", "diff_url": "https://github.com/huggingface/datasets/pull/3169.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3169.patch", "merged_at": null }
3,169
true
OpenSLR/83 is empty
## Describe the bug As the summary says, openslr / SLR83 / train is empty. The dataset returned after loading indicates there are **zero** rows. The correct number should be **17877**. ## Steps to reproduce the bug ```python import datasets datasets.load_dataset('openslr', 'SLR83') ``` ## Expected results ``` DatasetDict({ train: Dataset({ features: ['path', 'audio', 'sentence'], num_rows: 17877 }) }) ``` ## Actual results ``` DatasetDict({ train: Dataset({ features: ['path', 'audio', 'sentence'], num_rows: 0 }) }) ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.14.1.dev0 (master HEAD) - Platform: Ubuntu 20.04 - Python version: 3.7.10 - PyArrow version: 3.0.0
https://github.com/huggingface/datasets/issues/3168
[ "Hi @tyrius02, thanks for reporting. I see you self-assigned this issue: are you working on this?", "@albertvillanova Yes. Figured I introduced the broken config, I should fix it too.\r\n\r\nI've got it working, but I'm struggling with one of the tests. I've started a PR so I/we can work through it.", "Looks li...
null
3,168
false
bookcorpusopen no longer works
## Describe the bug When using the latest version of datasets (1.14.0), I cannot use the `bookcorpusopen` dataset. The process always blocks around `9924 examples [00:06, 1439.61 examples/s]` when preparing the dataset. I also noticed that after half an hour the process is automatically killed because of the RAM usage (the machine has 1TB of RAM...). This did not happen with 1.4.1. I also tried `rm -rf ~/.cache/huggingface`, but it did not help. Changing the python version between 3.7, 3.8 and 3.9 did not help either. ## Steps to reproduce the bug ```python import datasets d = datasets.load_dataset('bookcorpusopen') ``` ## Expected results A clear and concise description of the expected results. ## Actual results Specify the actual results or traceback. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.14.0 - Platform: Linux-5.4.0-1054-aws-x86_64-with-glibc2.27 - Python version: 3.9.7 - PyArrow version: 4.0.1
https://github.com/huggingface/datasets/issues/3167
[ "Hi ! Thanks for reporting :) I think #3280 should fix this", "I tried with the latest changes from #3280 on google colab and it worked fine :)\r\nWe'll do a new release soon, in the meantime you can use the updated version with:\r\n```python\r\nload_dataset(\"bookcorpusopen\", revision=\"master\")\r\n```", "Fi...
null
3,167
false
Deprecate prepare_module
In version 1.13, `prepare_module` was deprecated. This PR adds a deprecation warning and removes its usage throughout the library, using `dataset_module_factory` or `metric_module_factory` instead. Fix #3165.
https://github.com/huggingface/datasets/pull/3166
[ "Sounds good, thanks !" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3166", "html_url": "https://github.com/huggingface/datasets/pull/3166", "diff_url": "https://github.com/huggingface/datasets/pull/3166.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3166.patch", "merged_at": "2021-11-05T09:27:36" }
3,166
true
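The deprecation follows the usual pattern: keep the old entry point as a thin shim that warns and forwards to the replacement. A generic sketch (the function names mirror the PR description; the body is illustrative):

```python
import warnings

def dataset_module_factory(*args, **kwargs):  # stand-in for the real replacement
    ...

def prepare_module(*args, **kwargs):
    warnings.warn(
        "prepare_module is deprecated; use dataset_module_factory "
        "or metric_module_factory instead.",
        FutureWarning,
    )
    return dataset_module_factory(*args, **kwargs)  # forward to the new API
```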
Deprecate prepare_module
In version 1.13, `prepare_module` was deprecated. Add a deprecation warning and remove its usage throughout the library.
https://github.com/huggingface/datasets/issues/3165
[]
null
3,165
false
Add raw data files to the Hub with GitHub LFS for canonical dataset
I'm interested in sharing the CaseHOLD dataset (https://arxiv.org/abs/2104.08671) as a canonical dataset on the HuggingFace Hub and would like to add the raw data files to the Hub with GitHub LFS, since it seems like a more sustainable long term storage solution, compared to other storage solutions available to my team. From what I can tell, this option is not immediately supported if one follows the sharing steps detailed here: [https://huggingface.co/docs/datasets/share_dataset.html#sharing-a-canonical-dataset](https://huggingface.co/docs/datasets/share_dataset.html#sharing-a-canonical-dataset), since GitHub LFS is not supported for public forks. Is there a way to request this? Thanks!
https://github.com/huggingface/datasets/issues/3164
[ "Hi @zlucia, I would actually suggest hosting the dataset as a huggingface.co-hosted dataset.\r\n\r\nThe only difference with a \"canonical\"/legacy dataset is that it's nested under an organization (here `stanford` or `stanfordnlp` for instance – completely up to you) but then you can upload your data using git-lf...
null
3,164
false
Add Image feature
Adds the Image feature. This feature is heavily inspired by the recently added Audio feature (#2324). Currently, this PR is pretty simple. Some considerations that need further discussion: * I've decided to use `Pillow`/`PIL` as the image decoding library. Another candidate I considered is `torchvision`, mostly because of its `accimage` backend, which should be faster for loading `jpeg` images than `Pillow`. However, `torchvision`'s io module only supports png and jpeg images, has `torch` as a hard dependency, and requires magic to work with image bytes ( `torch.ByteTensor(torch.ByteStorage.from_buffer(image_bytes)))`). * Currently, I'm converting `PIL`'s `Image` type to `np.ndarray`. The vision models in Transformers such as ViT prefer the raw `Image` type and not the decoded tensors, so there is a small overhead due to [this conversion](https://github.com/huggingface/transformers/blob/3e8761ab8077e3bb243fe2f78b2a682bd2257cf1/src/transformers/image_utils.py#L62-L73). IMO this is justified to keep this part aligned with the Audio feature, which also returns `np.ndarray`. What do you think? * Still have to work on the channel decoding logic: * PyTorch prefers the channel-first ordering (C, H, W); TF and Flax the channel-last ordering (H, W, C). One cool feature would be adjusting the channel order based on the selected formatter (`torch`, `tf`, `jax`). * By default, `Image.open` returns images of shape (H, W, C). However, ViT's feature extractor expects the format (C, H, W) if the image is passed as an array (explained [here](https://huggingface.co/transformers/model_doc/vit.html#transformers.ViTFeatureExtractor.__call__)), so I'm more inclined to the format (C, H, W). Which one do you prefer, (C, H, W) or (H, W, C)? * Are there any options you'd like to see? (the user could change those via `cast_column`, such as `sampling_rate` in the Audio feature) TODOs: * [x] tests * in subsequent PRs: * docs - a section in the docs, which gives some additional info on the Image and Audio feature and compares them to `ArrayND` * streaming (waiting for #3129 and #3133 to get merged first) * update the image tasks and the datasets to use the new feature * Image/Audio formatting [Colab Notebook](https://colab.research.google.com/drive/1mIrTnqTVkWLJWoBzT1ABSe-LFelIep1c?usp=sharing) where you can play with this feature. I'm also adding a link to the [Image](https://github.com/tensorflow/datasets/blob/7ac7d506488d46038a5854961d068926b3f93c7f/tensorflow_datasets/core/features/image_feature.py#L155) feature in TFDS because one of our goals is to parse TFDS scripts eventually, so our Image feature has to (at least) support all the formats theirs does. Feel free to cc anyone who might be interested. P.S. Please ignore the changes in the `datasets/**/*.py` files πŸ˜„.
https://github.com/huggingface/datasets/pull/3163
[ "Awesome, looking forward to using it :)", "Few additional comments:\r\n* the current API doesn't meet the requirements mentioned in #3145 (e.g. image mime-type). However, this will be doable soon as we also plan to store image bytes alongside paths in arrow files (see https://github.com/huggingface/datasets/pull...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3163", "html_url": "https://github.com/huggingface/datasets/pull/3163", "diff_url": "https://github.com/huggingface/datasets/pull/3163.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3163.patch", "merged_at": "2021-12-06T17:49:02" }
3,163
true
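On the open (C, H, W) vs. (H, W, C) question above: the two layouts are a single transpose apart, which is why the choice is mostly about which framework pays for the copy. A small illustration with standard `Pillow`/NumPy APIs, independent of the feature's final design:

```python
import numpy as np
from PIL import Image

img = Image.new("RGB", (640, 480))  # PIL sizes are (width, height)
arr = np.asarray(img)               # decodes channel-last: (H, W, C) == (480, 640, 3)

chw = arr.transpose(2, 0, 1)        # channel-first (C, H, W), as PyTorch models expect
hwc = chw.transpose(1, 2, 0)        # back to channel-last for TF/Flax

assert arr.shape == (480, 640, 3) and chw.shape == (3, 480, 640)
assert (hwc == arr).all()
```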
`datasets-cli test` should work with datasets without scripts
It would be really useful to be able to run `datasets-cli test` for datasets that don't have scripts attached to them (whether the datasets are private or not). I wasn't able to run the script for a private test dataset that I had created on the hub (https://huggingface.co/datasets/huggingface/DataMeasurementsTest/tree/main) -- although @lhoestq came to save the day!
https://github.com/huggingface/datasets/issues/3162
[ "> It would be really useful to be able to run `datasets-cli test`for datasets that don't have scripts attached to them (whether the datasets are private or not).\r\n> \r\n> I wasn't able to run the script for a private test dataset that I had created on the hub (https://huggingface.co/datasets/huggingface/DataMeas...
null
3,162
false
Add riddle_sense dataset
Adding a new dataset for QA with riddles. I'm confused about the tagging process: it looks like the streamlit app loads data from the current repo, so should the tagging be done after merging, or from my fork?
https://github.com/huggingface/datasets/pull/3161
[ "@lhoestq \r\nI address all the comments, I think. Thanks! \r\n", "The five test fails are unrelated to this PR and fixed on master so we can ignore them" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3161", "html_url": "https://github.com/huggingface/datasets/pull/3161", "diff_url": "https://github.com/huggingface/datasets/pull/3161.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3161.patch", "merged_at": "2021-11-04T14:01:14" }
3,161
true
Better error msg if `len(predictions)` doesn't match `len(references)` in metrics
Improve the error message in `Metric.add_batch` if `len(predictions)` doesn't match `len(references)`. cc: @BramVanroy (feel free to test this code on your examples and review this PR)
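For illustration, a minimal sketch of the kind of length check this PR describes; the function name and the exact error message are made up here and may differ from the actual implementation:

```python
def check_matching_lengths(predictions, references):
    # Fail early with an informative message instead of a generic format error.
    if len(predictions) != len(references):
        raise ValueError(
            f"Mismatch in the number of predictions ({len(predictions)}) "
            f"and references ({len(references)})"
        )


check_matching_lengths([0, 1, 1], [0, 1, 1])   # OK, lengths match
# check_matching_lengths([0, 1], [0, 1, 1])    # would raise ValueError with both counts
```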
https://github.com/huggingface/datasets/pull/3160
[ "Can't test this now but it may be a good improvement indeed.", "I added a function, but it only works with the `list` type. For arrays/tensors, we delegate formatting to the frameworks. " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3160", "html_url": "https://github.com/huggingface/datasets/pull/3160", "diff_url": "https://github.com/huggingface/datasets/pull/3160.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3160.patch", "merged_at": "2021-11-05T09:31:02" }
3,160
true
Make inspect.get_dataset_config_names always return a non-empty list
Treat all configs as named configs, so that no special unnamed config case needs to be handled differently. Fix #3135.
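As a usage sketch (assuming `get_dataset_config_names` is importable from the top-level `datasets` package; the names shown in the comments are illustrative, not taken from this PR):

```python
from datasets import get_dataset_config_names

# A dataset with explicitly named configs returns those names.
print(get_dataset_config_names("glue"))   # e.g. ['cola', 'sst2', ...]

# After this change, a dataset without named configs should also return a
# non-empty list (a single default config name) instead of an empty one.
print(get_dataset_config_names("squad"))  # e.g. ['plain_text'] or a default name
```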
https://github.com/huggingface/datasets/pull/3159
[ "This PR is already working (although not very beautiful; see below): the idea was to have the `DatasetModule.builder_kwargs` accessible from the `builder_cls`, so that this can generate the default builder config (at the class level, without requiring the builder to be instantiated).\r\n\r\nI have a plan for a fol...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3159", "html_url": "https://github.com/huggingface/datasets/pull/3159", "diff_url": "https://github.com/huggingface/datasets/pull/3159.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3159.patch", "merged_at": "2021-10-28T05:44:49" }
3,159
true
Fix string encoding for Value type
Some metrics have `string` features but currently it fails if users pass integers instead. Indeed, the feature encoding that handles the conversion of the user's objects to the right Python type is missing a case for `string`, while it already works as expected for integers, floats and booleans.

Here is example code that didn't work previously, but that works with this fix:
```python
import datasets

# Note that 'id' is an integer while the SQuAD metric uses strings
predictions = [{'prediction_text': '1976', 'id': 5}]
references = [{'answers': {'answer_start': [97], 'text': ['1976']}, 'id': 5}]

squad_metric = datasets.load_metric("squad")
squad_metric.add_batch(predictions=predictions, references=references)
results = squad_metric.compute()
# {'exact_match': 100.0, 'f1': 100.0}
```

cc @sgugger @philschmid
https://github.com/huggingface/datasets/pull/3158
[ "That was fast! \r\n" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3158", "html_url": "https://github.com/huggingface/datasets/pull/3158", "diff_url": "https://github.com/huggingface/datasets/pull/3158.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3158.patch", "merged_at": "2021-10-25T14:12:05" }
3,158
true
Fixed: duplicate parameter and missing parameter in docstring
Changing the duplicate parameter `data_files` in the `DatasetBuilder.__init__` docstring to the missing parameter `data_dir`.
https://github.com/huggingface/datasets/pull/3157
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3157", "html_url": "https://github.com/huggingface/datasets/pull/3157", "diff_url": "https://github.com/huggingface/datasets/pull/3157.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3157.patch", "merged_at": "2021-10-25T14:02:18" }
3,157
true
Illegal instruction (core dumped) at datasets import
## Describe the bug
I install datasets using conda and when I import datasets I get: "Illegal instruction (core dumped)"

## Steps to reproduce the bug
```
conda create --prefix path/to/env
conda activate path/to/env
conda install -c huggingface -c conda-forge datasets
# exits with output "Illegal instruction (core dumped)"
python -m datasets
```

## Environment info
When I run "datasets-cli env", I also get "Illegal instruction (core dumped)"

If I run the following commands:
```
conda create --prefix path/to/another/new/env
conda activate path/to/another/new/env
conda install -c huggingface transformers
transformers-cli env
```
Then I get:
- `transformers` version: 4.11.3
- Platform: Linux-5.4.0-67-generic-x86_64-with-glibc2.17
- Python version: 3.8.12
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No

Let me know what additional information you need in order to debug this issue. Thanks in advance!
https://github.com/huggingface/datasets/issues/3155
[ "It seems to be an issue with how conda-forge is building the binaries. It works on some machines, but not a machine with AMD Opteron 8384 processors." ]
null
3,155
false
Sacrebleu unexpected behaviour/requirement for data format
## Describe the bug
When comparing with the original `sacrebleu` implementation, the `datasets` implementation does some strange things that I do not quite understand. This issue was triggered when I was trying to implement TER and found the datasets implementation of BLEU [here](https://github.com/huggingface/datasets/pull/3153).

In the below snippet, the original sacrebleu snippet works just fine whereas the datasets implementation throws an error.

## Steps to reproduce the bug
```python
import sacrebleu
import datasets

refs = [
    ['The dog bit the man.', 'It was not unexpected.', 'The man bit him first.'],
    ['The dog had bit the man.', 'No one was surprised.', 'The man had bitten the dog.'],
]
hyps = ['The dog bit the man.', "It wasn't surprising.", 'The man had just bitten him.']

expected_bleu = 48.530827

ds_bleu = datasets.load_metric("sacrebleu")

bleu_score_sb = sacrebleu.corpus_bleu(hyps, refs).score
print(bleu_score_sb, expected_bleu)
# works: 48.5308...

bleu_score_ds = ds_bleu.compute(predictions=hyps, references=refs)["score"]
print(bleu_score_ds, expected_bleu)
# ValueError: Predictions and/or references don't match the expected format.
```

This seems to be related to how datasets forces the features format here:
https://github.com/huggingface/datasets/blob/87c71b9c29a40958973004910f97e4892559dfed/metrics/sacrebleu/sacrebleu.py#L94-L99
and then manipulates the references during the compute stage here
https://github.com/huggingface/datasets/blob/87c71b9c29a40958973004910f97e4892559dfed/metrics/sacrebleu/sacrebleu.py#L119-L122

I do not quite understand why that is required since sacrebleu handles argument parsing quite well [by itself](https://github.com/mjpost/sacrebleu/blob/2787185dd0f8d224c72ee5a831d163c2ac711a47/sacrebleu/metrics/base.py#L229).

## Actual results
Traceback (most recent call last):
  File "C:\Users\bramv\AppData\Roaming\JetBrains\PyCharm2020.3\scratches\scratch_23.py", line 23, in <module>
    bleu_score_ds = ds_bleu.compute(predictions=hyps, references=refs)["score"]
  File "C:\dev\python\datasets\src\datasets\metric.py", line 392, in compute
    self.add_batch(predictions=predictions, references=references)
  File "C:\dev\python\datasets\src\datasets\metric.py", line 439, in add_batch
    raise ValueError(
ValueError: Predictions and/or references don't match the expected format.
Expected format: {'predictions': Value(dtype='string', id='sequence'), 'references': Sequence(feature=Value(dtype='string', id='sequence'), length=-1, id='references')},
Input predictions: ['The dog bit the man.', "It wasn't surprising.", 'The man had just bitten him.'],
Input references: [['The dog bit the man.', 'It was not unexpected.', 'The man bit him first.'], ['The dog had bit the man.', 'No one was surprised.', 'The man had bitten the dog.']]

## Environment info
- `datasets` version: 1.14.1.dev0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.9.2
- PyArrow version: 4.0.1
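As an aside, a small sketch of how the same references could be reshaped into the per-prediction format that the error message above asks for (one list of reference strings per prediction). This only illustrates the transposition; it doesn't argue that the requirement is reasonable:

```python
# sacrebleu-style references: one list per reference *stream*
refs = [
    ['The dog bit the man.', 'It was not unexpected.', 'The man bit him first.'],
    ['The dog had bit the man.', 'No one was surprised.', 'The man had bitten the dog.'],
]

# per-prediction references, as the `datasets` sacrebleu metric expects them
refs_per_prediction = [list(r) for r in zip(*refs)]
# [['The dog bit the man.', 'The dog had bit the man.'], ...]
```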
https://github.com/huggingface/datasets/issues/3154
[ "Hi @BramVanroy!\r\n\r\nGood question. This project relies on PyArrow (tables) to store data too big to fit in RAM. In the case of metrics, this means that the number of predictions and references has to match to form a table.\r\n\r\nThat's why your example throws an error even though it matches the schema:\r\n```p...
null
3,154
false
Add TER (as implemented in sacrebleu)
Implements TER (Translation Edit Rate) as per its implementation in sacrebleu. Sacrebleu for BLEU scores is already implemented in `datasets` so I thought this would be a nice addition.

I started from the sacrebleu implementation, as the two metrics have a lot in common.

Verified with sacrebleu's [testing suite](https://github.com/mjpost/sacrebleu/blob/078c440168c6adc89ba75fe6d63f0d922d42bcfe/test/test_ter.py) that this indeed works as intended.

```python
import datasets

test_cases = [
    (['aaaa bbbb cccc dddd'], ['aaaa bbbb cccc dddd'], 0),  # perfect match
    (['dddd eeee ffff'], ['aaaa bbbb cccc'], 1),  # no overlap
    ([''], ['a'], 1),  # corner case, empty hypothesis
    (['d e f g h a b c'], ['a b c d e f g h'], 1 / 8),  # a single shift fixes MT
    (
        [
            'wählen Sie " Bild neu berechnen , " um beim Ändern der Bildgröße Pixel hinzuzufügen oder zu entfernen , damit das Bild ungefähr dieselbe Größe aufweist wie die andere Größe .',
            'wenn Sie alle Aufgaben im aktuellen Dokument aktualisieren möchten , wählen Sie im Menü des Aufgabenbedienfelds die Option " Alle Aufgaben aktualisieren . "',
            'klicken Sie auf der Registerkarte " Optionen " auf die Schaltfläche " Benutzerdefiniert " und geben Sie Werte für " Fehlerkorrektur-Level " und " Y / X-Verhältnis " ein .',
            'Sie können beispielsweise ein Dokument erstellen , das ein Auto über die Bühne enthält .',
            'wählen Sie im Dialogfeld " Neu aus Vorlage " eine Vorlage aus und klicken Sie auf " Neu . "',
        ],
        [
            'wählen Sie " Bild neu berechnen , " um beim Ändern der Bildgröße Pixel hinzuzufügen oder zu entfernen , damit die Darstellung des Bildes in einer anderen Größe beibehalten wird .',
            'wenn Sie alle Aufgaben im aktuellen Dokument aktualisieren möchten , wählen Sie im Menü des Aufgabenbedienfelds die Option " Alle Aufgaben aktualisieren . "',
            'klicken Sie auf der Registerkarte " Optionen " auf die Schaltfläche " Benutzerdefiniert " und geben Sie für " Fehlerkorrektur-Level " und " Y / X-Verhältnis " niedrigere Werte ein .',
            'Sie können beispielsweise ein Dokument erstellen , das ein Auto enthalt , das sich über die Bühne bewegt .',
            'wählen Sie im Dialogfeld " Neu aus Vorlage " eine Vorlage aus und klicken Sie auf " Neu . "',
        ],
        0.136  # realistic example from WMT dev data (2019)
    ),
]

ter = datasets.load_metric(r"path\to\datasets\metrics\ter")
predictions = ["hello there general kenobi", "foo bar foobar"]
references = [["hello there general kenobi", "hello there !"], ["foo bar foobar", "foo bar foobar"]]
print(ter.compute(predictions=predictions, references=references))

for hyp, ref, score in test_cases:
    # Note the reference transformation which is different from sacrebleu's input format
    results = ter.compute(predictions=hyp, references=[[r] for r in ref])
    assert 100 * score == results["score"], f"expected {100 * score}, got {results['score']}"
```
https://github.com/huggingface/datasets/pull/3153
[ "The problem appears to stem from the omission of the lines that you mentioned. If you add them back and try examples from [this](https://huggingface.co/docs/datasets/using_metrics.html) tutorial (sacrebleu metric example) the code you implemented works fine.\r\n\r\nI think the purpose of these lines is follows:\r\...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3153", "html_url": "https://github.com/huggingface/datasets/pull/3153", "diff_url": "https://github.com/huggingface/datasets/pull/3153.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3153.patch", "merged_at": "2021-11-02T11:04:11" }
3,153
true
Fix some typos in the documentation
null
https://github.com/huggingface/datasets/pull/3152
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3152", "html_url": "https://github.com/huggingface/datasets/pull/3152", "diff_url": "https://github.com/huggingface/datasets/pull/3152.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3152.patch", "merged_at": "2021-10-25T14:03:48" }
3,152
true
Re-add faiss to windows testing suite
In recent versions, `faiss-cpu` seems to be available for Windows as well. See the [PyPi page](https://pypi.org/project/faiss-cpu/#files) to confirm. We can therefore include it for Windows in the setup file.

At first, tests didn't pass due to permission problems caused by `NamedTemporaryFile` on Windows. This built-in library is notoriously poor at playing nice on Windows. The required change isn't pretty, but it works. First set `delete=False` to not automatically try to delete the file on `exit`. Then, manually delete the file with `unlink`. It's weird, I know, but it works.

```python
with tempfile.NamedTemporaryFile(delete=False) as tmp_file:
    # do stuff
os.unlink(tmp_file.name)
```

closes #3150
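A self-contained sketch of the `delete=False` + `os.unlink` pattern described above (illustrative only, not the exact test code touched by this PR):

```python
import os
import tempfile

# Create the file with delete=False so Windows doesn't fail when the file
# is re-opened or still referenced while the NamedTemporaryFile is open.
with tempfile.NamedTemporaryFile(delete=False) as tmp_file:
    tmp_file.write(b"some bytes")
    tmp_path = tmp_file.name

# ... use tmp_path here, e.g. re-open it and read it back ...
with open(tmp_path, "rb") as f:
    assert f.read() == b"some bytes"

# Clean up manually once nothing holds the file open anymore.
os.unlink(tmp_path)
```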
https://github.com/huggingface/datasets/pull/3151
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3151", "html_url": "https://github.com/huggingface/datasets/pull/3151", "diff_url": "https://github.com/huggingface/datasets/pull/3151.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3151.patch", "merged_at": "2021-11-02T10:06:03" }
3,151
true
Faiss _is_ available on Windows
In the setup file, I find the following: https://github.com/huggingface/datasets/blob/87c71b9c29a40958973004910f97e4892559dfed/setup.py#L171 However, FAISS does install perfectly fine on Windows on my system. You can also confirm this on the [PyPi page](https://pypi.org/project/faiss-cpu/#files), where Windows wheels are available. Maybe this was true for older versions? For current versions, this can be removed I think. (This isn't really a bug but didn't know how else to tag.) If you agree I can do a quick PR and remove that line.
https://github.com/huggingface/datasets/issues/3150
[ "Sure, feel free to open a PR." ]
null
3,150
false
Add CMU Hinglish DoG Dataset for MT
Address part of #2841

Added the CMU Hinglish DoG Dataset as in GLUECoS. Added it as a separate dataset since, unlike the other GLUECoS tasks, this can't be evaluated with a BERT-like model. It consists of a parallel dataset between Hinglish (Hindi-English) and English, and can be used for Machine Translation between the two.

The data processing part is inspired by the GLUECoS repo [here](https://github.com/microsoft/GLUECoS/blob/7fdc51653e37a32aee17505c47b7d1da364fa77e/Data/Preprocess_Scripts/preprocess_mt_en_hi.py).

The dummy data part is not working properly; it shows
```
UnboundLocalError: local variable 'generator_splits' referenced before assignment
```
when I run without `--auto_generate`.

Please let me know how I can fix that. Thanks
https://github.com/huggingface/datasets/pull/3149
[ "Hi @lhoestq, thanks a lot for the help. I have moved the part as suggested. \r\nAlthough still while running the dummy data script, I face this issue\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/ishan/anaconda3/bin/datasets-cli\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/home/...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3149", "html_url": "https://github.com/huggingface/datasets/pull/3149", "diff_url": "https://github.com/huggingface/datasets/pull/3149.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3149.patch", "merged_at": "2021-11-15T10:27:45" }
3,149
true
Streaming with num_workers != 0
## Describe the bug
When using dataset streaming with a pytorch DataLoader, setting num_workers to anything other than 0 causes the code to freeze forever before yielding the first batch.

The code owner is likely @lhoestq

## Steps to reproduce the bug
For your convenience, we've prepped a colab notebook that reproduces the bug
https://colab.research.google.com/drive/1Mgl0oTZSNIE3UeGl_oX9wPCOIxRg19h1?usp=sharing

```python
!pip install datasets==1.14.0

should_freeze_forever = True
# ^-- set this to True in order to freeze forever, set to False in order to work normally

import torch
from datasets import load_dataset

data = load_dataset("oscar", "unshuffled_deduplicated_bn", split="train", streaming=True)
data = data.map(lambda x: {"text": x["text"], "orig": f"oscar[{x['id']}]"}, batched=True)
data = data.shuffle(100, seed=1337)

data = data.with_format("torch")
loader = torch.utils.data.DataLoader(data, batch_size=2, num_workers=2 if should_freeze_forever else 0)

# v-- the code should freeze forever at this line
for i, row in enumerate(loader):
    print(row)
    if i > 10:
        break
print("DONE!")
```

## Expected results
The code should not freeze forever with num_workers=2

## Actual results
The code freezes forever with num_workers=2

## Environment info
- `datasets` version: 1.14.0 (also found in previous versions)
- Platform: google colab (also locally)
- Python version: 3.7, (also 3.8)
- PyArrow version: 3.0.0
https://github.com/huggingface/datasets/issues/3148
[ "I can confirm that I was able to reproduce the bug. This seems odd given that #3423 reports duplicate data retrieval when `num_workers` and `streaming` are used together, which is obviously different from what is reported here. ", "Any update? A possible solution is to have multiple arrow files as shards, and ha...
null
3,148
false
Fix CLI test to ignore verfications when saving infos
Fix #3146.
https://github.com/huggingface/datasets/pull/3147
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3147", "html_url": "https://github.com/huggingface/datasets/pull/3147", "diff_url": "https://github.com/huggingface/datasets/pull/3147.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3147.patch", "merged_at": "2021-10-27T08:01:49" }
3,147
true
CLI test command throws NonMatchingSplitsSizesError when saving infos
When trying to generate a dataset's JSON metadata, a `NonMatchingSplitsSizesError` is thrown:
```
$ datasets-cli test datasets/arabic_billion_words --save_infos --all_configs
Testing builder 'Alittihad' (1/10)
Downloading and preparing dataset arabic_billion_words/Alittihad (download: 332.13 MiB, generated: Unknown size, post-processed: Unknown size, total: 332.13 MiB) to .cache\arabic_billion_words\Alittihad\1.1.0\8175ff1c9714c6d5d15b1141b6042e5edf048276bb81a9c14e35e149a7a62ae4...
Traceback (most recent call last):
  File "path\huggingface\datasets\.venv\Scripts\datasets-cli-script.py", line 33, in <module>
    sys.exit(load_entry_point('datasets', 'console_scripts', 'datasets-cli')())
  File "path\huggingface\datasets\src\datasets\commands\datasets_cli.py", line 33, in main
    service.run()
  File "path\huggingface\datasets\src\datasets\commands\test.py", line 144, in run
    builder.download_and_prepare(
  File "path\huggingface\datasets\src\datasets\builder.py", line 607, in download_and_prepare
    self._download_and_prepare(
  File "path\huggingface\datasets\src\datasets\builder.py", line 709, in _download_and_prepare
    verify_splits(self.info.splits, split_dict)
  File "path\huggingface\datasets\src\datasets\utils\info_utils.py", line 74, in verify_splits
    raise NonMatchingSplitsSizesError(str(bad_splits))
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='arabic_billion_words'), 'recorded': SplitInfo(name='train', num_bytes=1601790302, num_examples=349342, dataset_name='arabic_billion_words')}]
```

This is because a previous run generated a wrong `dataset_info.json`.

This error can be avoided by passing `--ignore_verifications`, but I think this should be assumed when passing `--save_infos`.
https://github.com/huggingface/datasets/issues/3146
[]
null
3,146
false