url stringlengths 58 61 | repository_url stringclasses 1 value | labels_url stringlengths 72 75 | comments_url stringlengths 67 70 | events_url stringlengths 65 68 | html_url stringlengths 46 51 | id int64 599M 1.83B | node_id stringlengths 18 32 | number int64 1 6.09k | title stringlengths 1 290 | labels list | state stringclasses 2 values | locked bool 1 class | milestone dict | comments int64 0 54 | created_at stringlengths 20 20 | updated_at stringlengths 20 20 | closed_at stringlengths 20 20 ⌀ | active_lock_reason null | body stringlengths 0 228k ⌀ | reactions dict | timeline_url stringlengths 67 70 | performed_via_github_app null | state_reason stringclasses 3 values | draft bool 2 classes | pull_request dict | is_pull_request bool 2 classes | comments_text list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/721 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/721/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/721/comments | https://api.github.com/repos/huggingface/datasets/issues/721/events | https://github.com/huggingface/datasets/issues/721 | 718,647,147 | MDU6SXNzdWU3MTg2NDcxNDc= | 721 | feat(dl_manager): add support for ftp downloads | [] | closed | false | null | 11 | 2020-10-10T15:50:20Z | 2022-02-15T10:44:44Z | 2022-02-15T10:44:43Z | null | I am working on a new dataset (#302) and encountered a problem downloading it.
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz"
dl_manager.download_and_extract(_URL)
```
I get an error:
> ValueError: unable to parse ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz as a URL or as a local path
I checked, and indeed you don't consider `ftp` as a remote file.
https://github.com/huggingface/datasets/blob/4c2af707a6955cf4b45f83ac67990395327c5725/src/datasets/utils/file_utils.py#L188
Adding `ftp` to that list does not immediately solve the issue, so there probably needs to be some extra work.
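For context, a stdlib-only sketch showing that the FTP fetch itself is straightforward (an illustration of feasibility, not the `dl_manager` implementation):
```python
import shutil
import urllib.request

# urllib's default opener handles ftp:// URLs via its built-in FTP handler.
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz"
with urllib.request.urlopen(_URL) as response, open("phoenix-2014-T.v3.tar.gz", "wb") as out_file:
    shutil.copyfileobj(response, out_file)
```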
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/721/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/721/timeline | null | completed | null | null | false | [
"We only support http by default for downloading.\r\nIf you really need to use ftp, then feel free to use a library that allows to download through ftp in your dataset script (I see that you've started working on #722 , that's awesome !). The users will get a message to install the extra library when they load the ... |
https://api.github.com/repos/huggingface/datasets/issues/4612 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4612/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4612/comments | https://api.github.com/repos/huggingface/datasets/issues/4612/events | https://github.com/huggingface/datasets/issues/4612 | 1,290,984,660 | I_kwDODunzps5M8tzU | 4,612 | Release 2.3.0 broke custom iterable datasets | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 3 | 2022-07-01T06:46:07Z | 2022-07-05T15:08:21Z | 2022-07-05T15:08:21Z | null | ## Describe the bug
Trying to iterate over examples from a custom iterable dataset fails due to a bug introduced in `torch_iterable_dataset.py` in the 2.3.0 release.
## Steps to reproduce the bug
```python
next(iter(custom_iterable_dataset))
```
## Expected results
`next(iter(custom_iterable_dataset))` should return examples from the dataset
## Actual results
```
/usr/local/lib/python3.7/dist-packages/datasets/formatting/dataset_wrappers/torch_iterable_dataset.py in _set_fsspec_for_multiprocess()
16 See https://github.com/fsspec/gcsfs/issues/379
17 """
---> 18 fsspec.asyn.iothread[0] = None
19 fsspec.asyn.loop[0] = None
20
AttributeError: module 'fsspec' has no attribute 'asyn'
```
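A sketch of the fix along the lines later suggested in the comments below (import the submodule explicitly instead of relying on attribute access on the top-level package):
```python
from fsspec import asyn

def _set_fsspec_for_multiprocess() -> None:
    # Reset fsspec's shared event-loop state so each worker creates its own loop.
    asyn.iothread[0] = None
    asyn.loop[0] = None
```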
## Environment info
- `datasets` version: 2.3.0
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 8.0.0
- Pandas version: 1.3.5
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4612/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4612/timeline | null | completed | null | null | false | [
"Apparently, `fsspec` does not allow access to attribute-based modules anymore, such as `fsspec.async`.\r\n\r\nHowever, this is a fairly simple fix:\r\n- Change the import to: `from fsspec import asyn`;\r\n- Change line 18 to: `asyn.iothread[0] = None`;\r\n- Change line 19 to `asyn.loop[0] = None`.",
"Hi! I think... |
https://api.github.com/repos/huggingface/datasets/issues/675 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/675/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/675/comments | https://api.github.com/repos/huggingface/datasets/issues/675/events | https://github.com/huggingface/datasets/issues/675 | 709,818,725 | MDU6SXNzdWU3MDk4MTg3MjU= | 675 | Add custom dataset to NLP? | [] | closed | false | null | 2 | 2020-09-27T21:22:50Z | 2020-10-20T09:08:49Z | 2020-10-20T09:08:49Z | null | Is it possible to add a custom dataset such as a .csv to the NLP library?
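For reference, the loader pointed to in the replies below handles CSV files directly; a minimal sketch using the current `datasets` API (`my_dataset.csv` is a hypothetical local file):
```python
from datasets import load_dataset

# Column names are read from the CSV header row.
dataset = load_dataset("csv", data_files="my_dataset.csv")
```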
Thanks. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/675/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/675/timeline | null | completed | null | null | false | [
"Yes you can have a look here: https://huggingface.co/docs/datasets/loading_datasets.html#csv-files",
"No activity, closing"
] |
https://api.github.com/repos/huggingface/datasets/issues/2751 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2751/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2751/comments | https://api.github.com/repos/huggingface/datasets/issues/2751/events | https://github.com/huggingface/datasets/pull/2751 | 959,021,262 | MDExOlB1bGxSZXF1ZXN0NzAyMTk5MjA5 | 2,751 | Update metadata for wikihow dataset | [] | closed | false | null | 0 | 2021-08-03T11:31:57Z | 2021-08-03T15:52:09Z | 2021-08-03T15:52:09Z | null | Update metadata for wikihow dataset:
- Remove leading new line character in description and citation
- Update metadata JSON
- Remove no longer necessary `urls_checksums/checksums.txt` file
Related to #2748. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2751/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2751/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2751.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2751",
"merged_at": "2021-08-03T15:52:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2751.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2751"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1232 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1232/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1232/comments | https://api.github.com/repos/huggingface/datasets/issues/1232/events | https://github.com/huggingface/datasets/pull/1232 | 758,180,669 | MDExOlB1bGxSZXF1ZXN0NTMzMzkyNTc0 | 1,232 | Add Grail QA dataset | [] | closed | false | null | 0 | 2020-12-07T05:46:45Z | 2020-12-08T13:03:19Z | 2020-12-08T13:03:19Z | null | For more information: https://dki-lab.github.io/GrailQA/ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1232/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1232/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1232.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1232",
"merged_at": "2020-12-08T13:03:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1232.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1232"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1127 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1127/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1127/comments | https://api.github.com/repos/huggingface/datasets/issues/1127/events | https://github.com/huggingface/datasets/pull/1127 | 757,229,684 | MDExOlB1bGxSZXF1ZXN0NTMyNjQwMjMx | 1,127 | Add wikiqaar dataset | [] | closed | false | null | 0 | 2020-12-04T16:26:18Z | 2020-12-07T16:39:41Z | 2020-12-07T16:39:41Z | null | Arabic Wiki Question Answering Corpus. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1127/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1127/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1127.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1127",
"merged_at": "2020-12-07T16:39:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1127.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1127"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1848 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1848/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1848/comments | https://api.github.com/repos/huggingface/datasets/issues/1848/events | https://github.com/huggingface/datasets/pull/1848 | 803,826,506 | MDExOlB1bGxSZXF1ZXN0NTY5Njg5ODU1 | 1,848 | Refactoring: Create config module | [] | closed | false | null | 0 | 2021-02-08T18:43:51Z | 2021-02-10T12:29:35Z | 2021-02-10T12:29:35Z | null | Refactor configuration settings into their own module.
This could be seen as a Pythonic singleton-like approach. Eventually a config instance class might be created. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1848/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1848/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1848.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1848",
"merged_at": "2021-02-10T12:29:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1848.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1848"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5368 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5368/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5368/comments | https://api.github.com/repos/huggingface/datasets/issues/5368/events | https://github.com/huggingface/datasets/pull/5368 | 1,500,322,973 | PR_kwDODunzps5FpZyx | 5,368 | Align remove columns behavior and input dict mutation in `map` with previous behavior | [] | closed | false | null | 1 | 2022-12-16T14:28:47Z | 2022-12-16T16:28:08Z | 2022-12-16T16:25:12Z | null | Align the `remove_columns` behavior and input dict mutation in `map` with the behavior before https://github.com/huggingface/datasets/pull/5252. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5368/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5368/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5368.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5368",
"merged_at": "2022-12-16T16:25:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5368.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5368"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/1747 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1747/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1747/comments | https://api.github.com/repos/huggingface/datasets/issues/1747/events | https://github.com/huggingface/datasets/issues/1747 | 788,299,775 | MDU6SXNzdWU3ODgyOTk3NzU= | 1,747 | datasets slicing with seed | [] | closed | false | null | 2 | 2021-01-18T14:08:55Z | 2022-10-05T12:37:27Z | 2022-10-05T12:37:27Z | null | Hi
I need to slice a dataset with a random seed. I looked into the documentation here: https://huggingface.co/docs/datasets/splits.html
I could not find a seed option; could you please advise how I can get a slice for different seeds?
Thank you.
@lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1747/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1747/timeline | null | completed | null | null | false | [
"Hi :) \r\nThe slicing API from https://huggingface.co/docs/datasets/splits.html doesn't shuffle the data.\r\nYou can shuffle and then take a subset of your dataset with\r\n```python\r\n# shuffle and take the first 100 examples\r\ndataset = dataset.shuffle(seed=42).select(range(100))\r\n```\r\n\r\nYou can find more... |
https://api.github.com/repos/huggingface/datasets/issues/638 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/638/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/638/comments | https://api.github.com/repos/huggingface/datasets/issues/638/events | https://github.com/huggingface/datasets/issues/638 | 704,146,956 | MDU6SXNzdWU3MDQxNDY5NTY= | 638 | GLUE/QQP dataset: NonMatchingChecksumError | [] | closed | false | null | 1 | 2020-09-18T07:09:10Z | 2020-09-18T11:37:07Z | 2020-09-18T11:37:07Z | null | Hi @lhoestq, I know you are busy and there are also other important issues. But if this is easy to fix, I am shamelessly wondering if you can give me some help, so I can evaluate my models and restart my development cycle asap. 😚
datasets version: editable install of master at 9/17
`datasets.load_dataset('glue','qqp', cache_dir='./datasets')`
```
Downloading and preparing dataset glue/qqp (download: 57.73 MiB, generated: 107.02 MiB, post-processed: Unknown size, total: 164.75 MiB) to ./datasets/glue/qqp/1.0.0/7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4...
---------------------------------------------------------------------------
NonMatchingChecksumError Traceback (most recent call last)
<ipython-input-...> in <module>()
----> 1 datasets.load_dataset('glue','qqp', cache_dir='./datasets')
~/datasets/src/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)
609 download_config=download_config,
610 download_mode=download_mode,
--> 611 ignore_verifications=ignore_verifications,
612 )
613
~/datasets/src/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
467 if not downloaded_from_gcs:
468 self._download_and_prepare(
--> 469 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
470 )
471 # Sync info
~/datasets/src/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
527 if verify_infos:
528 verify_checksums(
--> 529 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
530 )
531
~/datasets/src/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
37 if len(bad_urls) > 0:
38 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 39 raise NonMatchingChecksumError(error_msg + str(bad_urls))
40 logger.info("All the checksums matched successfully" + for_verification_name)
41
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://dl.fbaipublicfiles.com/glue/data/QQP-clean.zip']
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/638/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/638/timeline | null | completed | null | null | false | [
"Hi ! Sure I'll take a look"
] |
https://api.github.com/repos/huggingface/datasets/issues/2158 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2158/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2158/comments | https://api.github.com/repos/huggingface/datasets/issues/2158/events | https://github.com/huggingface/datasets/issues/2158 | 848,506,746 | MDU6SXNzdWU4NDg1MDY3NDY= | 2,158 | viewer "fake_news_english" error | [
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
}
] | closed | false | null | 2 | 2021-04-01T14:13:20Z | 2022-10-05T13:22:02Z | 2022-10-05T13:22:02Z | null | When I visit the [Huggingface - viewer](https://huggingface.co/datasets/viewer/) web site, under the dataset "fake_news_english" I've got this error:
> ImportError: To be able to use this dataset, you need to install the following dependencies['openpyxl'] using 'pip install # noqa: requires this pandas optional dependency for reading xlsx files' for instance'
as well as the error Traceback.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2158/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2158/timeline | null | completed | null | null | false | [
"Thanks for reporting !\r\nThe viewer doesn't have all the dependencies of the datasets. We may add openpyxl to be able to show this dataset properly",
"This viewer tool is deprecated now and the new viewer at https://huggingface.co/datasets/fake_news_english works fine, so I'm closing this issue"
] |
https://api.github.com/repos/huggingface/datasets/issues/5252 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5252/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5252/comments | https://api.github.com/repos/huggingface/datasets/issues/5252/events | https://github.com/huggingface/datasets/pull/5252 | 1,451,765,838 | PR_kwDODunzps5DCI1U | 5,252 | Support for decoding Image/Audio types in map when format type is not default one | [] | closed | false | null | 6 | 2022-11-16T15:02:13Z | 2022-12-13T17:01:54Z | 2022-12-13T16:59:04Z | null | Add support for decoding the `Image`/`Audio` types in `map` for the formats (Numpy, TF, Jax, PyTorch) other than the default one (Python).
Additional improvements:
* make `Dataset`'s "iter" API cleaner by removing `_iter` and replacing `_iter_batches` with `iter(batch_size)` (also implemented for `IterableDataset`); see the usage sketch after this list
* iterate over arrow tables in `map` to avoid `_getitem` calls, which are much slower than `__iter__`/`iter(batch_size)` when the `format_type` is not Python
* fix `_iter_batches` (now named `iter`) when `drop_last_batch=True` and `pyarrow<=8.0.0` is installed
* lazily extract and decode arrow data in the default format
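A usage sketch of the new iteration API (the concrete values are assumptions for illustration):
```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(10))})
# In the default (python) format, each batch is a plain dict of lists.
for batch in ds.iter(batch_size=4):
    print(len(batch["x"]))  # 4, 4, 2 (the last, smaller batch is kept by default)
```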
TODO:
* [x] update the `iter` benchmark in the docs (the `BeamBuilder` cannot load the preprocessed datasets from our bucket, so wait for this to be fixed (cc @lhoestq))
Fix https://github.com/huggingface/datasets/issues/3992, fix https://github.com/huggingface/datasets/issues/3756 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5252/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5252/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5252.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5252",
"merged_at": "2022-12-13T16:59:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5252.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5252"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5252). All of your documentation changes will be reflected on that endpoint.",
"Yes, if the image column is the first in the batch keys, it will ... |
https://api.github.com/repos/huggingface/datasets/issues/1068 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1068/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1068/comments | https://api.github.com/repos/huggingface/datasets/issues/1068/events | https://github.com/huggingface/datasets/pull/1068 | 756,417,337 | MDExOlB1bGxSZXF1ZXN0NTMxOTY1MDk0 | 1,068 | Add Pubmed (citation + abstract) dataset (2020). | [] | closed | false | null | 4 | 2020-12-03T17:54:10Z | 2020-12-23T09:52:07Z | 2020-12-23T09:52:07Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1068/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1068/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1068.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1068",
"merged_at": "2020-12-23T09:52:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1068.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1068"
} | true | [
"LGTM! ftp addition looks fine but maybe have a look @thomwolf ?",
"It's not finished yet, I need to run the tests on the full dataset (it was running this weekend, there is an error somewhere deep)\r\n",
"@yjernite Ready for review !\r\n@thomwolf \r\n\r\nSo I tried to follow closely the original format that me... |
https://api.github.com/repos/huggingface/datasets/issues/831 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/831/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/831/comments | https://api.github.com/repos/huggingface/datasets/issues/831/events | https://github.com/huggingface/datasets/issues/831 | 740,071,697 | MDU6SXNzdWU3NDAwNzE2OTc= | 831 | [GEM] Add WebNLG dataset | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | 0 | 2020-11-10T16:46:48Z | 2020-12-03T13:38:01Z | 2020-12-03T13:38:01Z | null | ## Adding a Dataset
- **Name:** WebNLG
- **Description:** WebNLG consists of Data/Text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation of these triples (16,095 data inputs and 42,873 data-text pairs). The data is available in English and Russian
- **Paper:** https://www.aclweb.org/anthology/P17-1017.pdf
- **Data:** https://webnlg-challenge.loria.fr/download/
- **Motivation:** Included in the GEM shared task, multilingual
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/831/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/831/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/4797 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4797/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4797/comments | https://api.github.com/repos/huggingface/datasets/issues/4797/events | https://github.com/huggingface/datasets/pull/4797 | 1,330,000,998 | PR_kwDODunzps48uL-t | 4,797 | Torgo dataset creation | [] | closed | false | null | 1 | 2022-08-05T14:18:26Z | 2022-08-09T18:46:00Z | 2022-08-09T18:46:00Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4797/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4797/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4797.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4797",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4797.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4797"
} | true | [
"Hi @YingLi001, thanks for your proposal to add this dataset.\r\n\r\nHowever, now we add datasets directly to the Hub (instead of our GitHub repository). You have the instructions in our docs: \r\n- [Create a dataset loading script](https://huggingface.co/docs/datasets/dataset_script)\r\n- [Create a dataset card](h... |
https://api.github.com/repos/huggingface/datasets/issues/22 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/22/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/22/comments | https://api.github.com/repos/huggingface/datasets/issues/22/events | https://github.com/huggingface/datasets/pull/22 | 608,298,586 | MDExOlB1bGxSZXF1ZXN0NDEwMTAyMjU3 | 22 | adding bleu score code | [] | closed | false | null | 0 | 2020-04-28T13:00:50Z | 2020-04-28T17:48:20Z | 2020-04-28T17:48:08Z | null | this PR adds the BLEU score metric to the lib. It can be tested by running the following code:
```python
from nlp.metrics import bleu

hyp1 = "It is a guide to action which ensures that the military always obeys the commands of the party"
ref1a = "It is a guide to action that ensures that the military forces always being under the commands of the party "
ref1b = "It is the guiding principle which guarantees the military force always being under the command of the Party"
ref1c = "It is the practical guide for the army always to heed the directions of the party"

list_of_references = [[ref1a, ref1b, ref1c]]
hypotheses = [hyp1]
# Name the result differently to avoid shadowing the imported module
score = bleu.bleu_score(list_of_references, hypotheses, 4, smooth=True)
print(score)
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/22/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/22/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/22.diff",
"html_url": "https://github.com/huggingface/datasets/pull/22",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/22.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/22"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2891 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2891/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2891/comments | https://api.github.com/repos/huggingface/datasets/issues/2891/events | https://github.com/huggingface/datasets/pull/2891 | 993,161,984 | MDExOlB1bGxSZXF1ZXN0NzMxMzkwNjM2 | 2,891 | Allow dynamic first dimension for ArrayXD | [] | closed | false | null | 9 | 2021-09-10T11:52:52Z | 2021-11-23T15:33:13Z | 2021-10-29T09:37:17Z | null | Add support for dynamic first dimension for ArrayXD features. See issue [#887](https://github.com/huggingface/datasets/issues/887).
The following changes allow the `to_pylist` method of `ArrayExtensionArray` to return a list of numpy arrays whose first dimension can vary.
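A minimal usage sketch of what this enables; the construction details below are illustrative assumptions, not taken from the PR:
```python
import datasets

# Dynamic first dimension (None): each row may hold a different number of
# fixed-width vectors.
features = datasets.Features({"seq": datasets.Array2D(shape=(None, 2), dtype="float32")})
ds = datasets.Dataset.from_dict(
    {"seq": [[[0.0, 1.0]], [[2.0, 3.0], [4.0, 5.0]]]},  # first dimensions: 1 and 2
    features=features,
)
print([len(row) for row in ds["seq"]])  # [1, 2]
```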
@lhoestq Could you suggest how you want to extend the test suite? For now I added only very limited testing. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2891/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2891/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2891.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2891",
"merged_at": "2021-10-29T09:37:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2891.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2891"
} | true | [
"@lhoestq, thanks for your review.\r\n\r\nI added test for `to_pylist`, I didn't do that for `to_numpy` because this method shouldn't be called for dynamic dimension ArrayXD - this method will try to make a single numpy array for the whole column which cannot be done for dynamic arrays.\r\n\r\nI dig into `to_pandas... |
https://api.github.com/repos/huggingface/datasets/issues/4611 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4611/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4611/comments | https://api.github.com/repos/huggingface/datasets/issues/4611/events | https://github.com/huggingface/datasets/pull/4611 | 1,290,940,874 | PR_kwDODunzps46rxIX | 4,611 | Preserve member order by MockDownloadManager.iter_archive | [] | closed | false | null | 1 | 2022-07-01T05:48:20Z | 2022-07-01T16:59:11Z | 2022-07-01T16:48:28Z | null | Currently, `MockDownloadManager.iter_archive` yields paths to archive members in an order given by `path.rglob("*")`, which might not be the same order as in the original archive.
See issue in:
- https://github.com/huggingface/datasets/pull/4579#issuecomment-1172135027
This PR fixes the order of the members yielded by `MockDownloadManager.iter_archive` so that it is the same as in the original archive. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4611/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4611/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4611.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4611",
"merged_at": "2022-07-01T16:48:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4611.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4611"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/3500 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3500/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3500/comments | https://api.github.com/repos/huggingface/datasets/issues/3500/events | https://github.com/huggingface/datasets/pull/3500 | 1,090,406,133 | PR_kwDODunzps4wXLTB | 3,500 | Docs: Add VCTK dataset description | [] | closed | false | null | 0 | 2021-12-29T10:02:05Z | 2022-01-04T10:46:02Z | 2022-01-04T10:25:09Z | null | This PR is a very minor followup to #1837, with only docs changes (single comment string). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3500/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3500/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3500.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3500",
"merged_at": "2022-01-04T10:25:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3500.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3500"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/516 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/516/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/516/comments | https://api.github.com/repos/huggingface/datasets/issues/516/events | https://github.com/huggingface/datasets/pull/516 | 681,846,032 | MDExOlB1bGxSZXF1ZXN0NDcwMTY5NTA0 | 516 | [Breaking] Rename formated to formatted | [] | closed | false | null | 0 | 2020-08-19T13:35:23Z | 2020-08-20T08:41:17Z | 2020-08-20T08:41:16Z | null | `formated` is not correct but `formatted` is | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/516/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/516/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/516.diff",
"html_url": "https://github.com/huggingface/datasets/pull/516",
"merged_at": "2020-08-20T08:41:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/516.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/516"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/148 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/148/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/148/comments | https://api.github.com/repos/huggingface/datasets/issues/148/events | https://github.com/huggingface/datasets/issues/148 | 619,590,555 | MDU6SXNzdWU2MTk1OTA1NTU= | 148 | _download_and_prepare() got an unexpected keyword argument 'verify_infos' | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | 2 | 2020-05-17T01:48:53Z | 2020-05-18T07:38:33Z | 2020-05-18T07:38:33Z | null | # Reproduce
In Colab,
```
%pip install -q nlp
%pip install -q apache_beam mwparserfromhell
dataset = nlp.load_dataset('wikipedia')
```
I get:
```
Downloading and preparing dataset wikipedia/20200501.aa (download: Unknown size, generated: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/wikipedia/20200501.aa/1.0.0...
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-6-52471d2a0088> in <module>()
----> 1 dataset = nlp.load_dataset('wikipedia')
1 frames
/usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
515 download_mode=download_mode,
516 ignore_verifications=ignore_verifications,
--> 517 save_infos=save_infos,
518 )
519
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, dl_manager, **download_and_prepare_kwargs)
361 verify_infos = not save_infos and not ignore_verifications
362 self._download_and_prepare(
--> 363 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
364 )
365 # Sync info
TypeError: _download_and_prepare() got an unexpected keyword argument 'verify_infos'
``` | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/148/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/148/timeline | null | completed | null | null | false | [
"Same error for dataset 'wiki40b'",
"Should be fixed on master :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/4881 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4881/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4881/comments | https://api.github.com/repos/huggingface/datasets/issues/4881/events | https://github.com/huggingface/datasets/issues/4881 | 1,348,495,777 | I_kwDODunzps5QYGmh | 4,881 | Language names and language codes: connecting to a big database (rather than slow enrichment of custom list) | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 48 | 2022-08-23T20:14:24Z | 2023-01-03T08:32:35Z | null | null | **The problem:**
Language diversity is an important dimension of the diversity of datasets. To find one's way around datasets, being able to search by language name and by standardized codes appears crucial.
Currently the list of language codes is [here](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/resources/languages.json), right? At about 1,500 entries, it is roughly at 1/4th of the world's diversity of extant languages. (Probably less, as the list of 1,418 contains variants that are linguistically very close: 108 varieties of English, for instance.)
Looking forward to ever increasing coverage, how will the list of language names and language codes improve over time?
Enrichment of the custom list by HFT contributors (like [here](https://github.com/huggingface/datasets/pull/4880)) has several issues:
* progress is likely to be slow:

(input required from reviewers, etc.)
* the more contributors, the less consistency can be expected among contributions. No need to elaborate on how much confusion is likely to ensue as datasets accumulate.
* there is no information on which language relates with which: no encoding of the special closeness between the languages of the Northwestern Germanic branch (English+Dutch+German etc.), for instance. Information on phylogenetic closeness can be relevant to run experiments on transfer of technology from one language to its close relatives.
**A solution that seems desirable:**
Connecting to an established database that (i) aims at full coverage of the world's languages and (ii) has information on higher-level groupings, alternative names, etc.
It takes a lot of hard work to do such databases. Two important initiatives are [Ethnologue](https://www.ethnologue.com/) (ISO standard) and [Glottolog](https://glottolog.org/). Both have pros and cons. Glottolog contains references to Ethnologue identifiers, so adopting Glottolog entails getting the advantages of both sets of language codes.
Both seem technically accessible & 'developer-friendly'. Glottolog has a [GitHub repo](https://github.com/glottolog/glottolog). For Ethnologue, harvesting tools have been devised (see [here](https://github.com/lyy1994/ethnologue); I did not try it out).
In case a conversation with linguists seemed in order here, I'd be happy to participate ('pro bono', of course), & to rustle up more colleagues as useful, to help this useful development happen.
With appreciation of HFT, | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4881/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4881/timeline | null | null | null | null | false | [
"Thanks for opening this discussion, @alexis-michaud.\r\n\r\nAs the language validation procedure is shared with other Hugging Face projects, I'm tagging them as well.\r\n\r\nCC: @huggingface/moon-landing ",
"on the Hub side, there is not fine grained validation we just check that `language:` contains an array of... |
https://api.github.com/repos/huggingface/datasets/issues/3387 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3387/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3387/comments | https://api.github.com/repos/huggingface/datasets/issues/3387/events | https://github.com/huggingface/datasets/pull/3387 | 1,071,836,456 | PR_kwDODunzps4vbAyC | 3,387 | Create Language Modeling task | [] | closed | false | null | 0 | 2021-12-06T07:56:07Z | 2021-12-17T17:18:28Z | 2021-12-17T17:18:27Z | null | Create Language Modeling task to be able to specify the input "text" column in a dataset.
This can be useful for datasets which are not exclusively used for language modeling and have more than one column:
- for text classification datasets (with columns "review" and "rating", for example), the Language Modeling task can be used to specify the "text" column ("review" in this case).
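A hypothetical usage sketch based on this description; the import path and constructor signature are assumptions:
```python
from datasets.tasks import LanguageModeling

# Point generic language-modeling pipelines at the "review" column.
task = LanguageModeling(text_column="review")
```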
TODO:
- [ ] Add the LanguageModeling task to all dataset scripts which can be used for language modeling | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3387/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3387/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3387.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3387",
"merged_at": "2021-12-17T17:18:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3387.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3387"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/6082 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6082/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6082/comments | https://api.github.com/repos/huggingface/datasets/issues/6082/events | https://github.com/huggingface/datasets/pull/6082 | 1,824,819,672 | PR_kwDODunzps5WkdIn | 6,082 | Release: 2.14.1 | [] | closed | false | null | 4 | 2023-07-27T17:05:54Z | 2023-07-27T17:18:17Z | 2023-07-27T17:08:38Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6082/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6082/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6082.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6082",
"merged_at": "2023-07-27T17:08:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6082.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6082"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6082). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma... |
https://api.github.com/repos/huggingface/datasets/issues/5648 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5648/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5648/comments | https://api.github.com/repos/huggingface/datasets/issues/5648/events | https://github.com/huggingface/datasets/issues/5648 | 1,629,253,719 | I_kwDODunzps5hHHBX | 5,648 | flatten_indices doesn't work with pandas format | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 1 | 2023-03-17T12:44:25Z | 2023-03-21T13:12:03Z | null | null | ### Describe the bug
Hi,
I noticed that `flatten_indices` throws an error when the batch format is `pandas`. This is probably due to the fact that flatten_indices uses map internally which doesn't accept dataframes as the transformation function output
### Steps to reproduce the bug
```python
import numpy as np
import pandas as pd
import datasets

tabular_data = pd.DataFrame(np.random.randn(10, 10))
tabular_data = datasets.Dataset.from_pandas(tabular_data)
tabular_data.with_format("pandas").select([0, 1, 2, 3]).flatten_indices()  # throws the error
```
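A hypothetical workaround sketch, consistent with the fix direction described in the comment below:
```python
# Drop the pandas format, flatten, then restore the format afterwards.
flat = (
    tabular_data.with_format(None)
    .select([0, 1, 2, 3])
    .flatten_indices()
    .with_format("pandas")
)
```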
### Expected behavior
No error thrown
### Environment info
- `datasets` version: 2.10.1
- Python version: 3.9.5
- PyArrow version: 11.0.0
- Pandas version: 1.4.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5648/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5648/timeline | null | null | null | null | false | [
"Thanks for reporting! This can be fixed by setting the format to `arrow` in `flatten_indices` and restoring the original format after the flattening. I'm working on a PR that reduces the number of the `flatten_indices` calls in our codebase and makes `flatten_indices` a no-op when a dataset does not have an indice... |
https://api.github.com/repos/huggingface/datasets/issues/5537 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5537/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5537/comments | https://api.github.com/repos/huggingface/datasets/issues/5537/events | https://github.com/huggingface/datasets/issues/5537 | 1,587,567,464 | I_kwDODunzps5eoFto | 5,537 | Increase speed of data files resolution | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "BDE59C",
"default": fals... | open | false | null | 5 | 2023-02-16T12:11:45Z | 2023-04-07T17:32:45Z | null | null | Certain datasets like `bigcode/the-stack-dedup` have so many files that loading them takes forever right from the data files resolution step.
`datasets` uses file patterns to check the structure of the repository but it takes too much time to iterate over and over again on all the data files.
This comes from `resolve_patterns_in_dataset_repository` which calls `_resolve_single_pattern_in_dataset_repository`, which iterates on all the files at
```python
glob_iter = [PurePath(filepath) for filepath in fs.glob(PurePath(pattern).as_posix()) if fs.isfile(filepath)]
```
but calling `glob` on such a dataset is too expensive. Indeed it calls `ls()` in `hffilesystem.py` too many times.
Maybe `glob` can be more optimized in `hffilesystem.py`, or the data files resolution can directly be implemented in the filesystem by checking its `dir_cache` ? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5537/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5537/timeline | null | null | null | null | false | [
"#self-assign",
"You were right, if `self.dir_cache` is not None in glob, it is exactly the same as what is returned by find, at least for all the tests we have, and some extended evaluation I did across a random sample of about 1000 datasets. \r\n\r\nThanks for the nice hints, and let me know if this is not exac... |
https://api.github.com/repos/huggingface/datasets/issues/1102 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1102/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1102/comments | https://api.github.com/repos/huggingface/datasets/issues/1102/events | https://github.com/huggingface/datasets/issues/1102 | 757,016,515 | MDU6SXNzdWU3NTcwMTY1MTU= | 1,102 | Add retries to download manager | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 0 | 2020-12-04T11:08:11Z | 2020-12-22T15:34:06Z | 2020-12-22T15:34:06Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1102/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1102/timeline | null | completed | null | null | false | [] | |
https://api.github.com/repos/huggingface/datasets/issues/3727 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3727/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3727/comments | https://api.github.com/repos/huggingface/datasets/issues/3727/events | https://github.com/huggingface/datasets/pull/3727 | 1,138,979,732 | PR_kwDODunzps4y34JN | 3,727 | Patch all module attributes in its namespace | [] | closed | false | null | 0 | 2022-02-15T17:12:27Z | 2022-02-17T17:06:18Z | 2022-02-17T17:06:17Z | null | When patching module attributes, only those defined in its `__all__` variable were considered by default (only falling back to `__dict__` if `__all__` was None).
However, those are only a subset of all the module attributes in its namespace (the `__dict__` variable).
This PR fixes the problem for modules that have a non-None `__all__` variable when an attribute present in `__dict__` (but not in `__all__`) is accessed.
For example, `pandas` has attribute `__version__` only present in `__dict__`.
- Before version 1.4, pandas `__all__` was None, thus all attributes in `__dict__` were patched
- From version 1.4, pandas `__all__` is not None, thus attributes in `__dict__` not present in `__all__` are ignored
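A standalone sketch illustrating the distinction (not the PR's code):
```python
import pandas

# __version__ lives in the module namespace (__dict__) but is not exported
# via __all__ once pandas defines one (pandas >= 1.4).
exported = set(getattr(pandas, "__all__", []) or [])
namespace = set(vars(pandas))

assert "__version__" in namespace
assert "__version__" not in exported
```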
Fix #3724.
CC: @severo @lvwerra | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3727/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3727/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3727.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3727",
"merged_at": "2022-02-17T17:06:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3727.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3727"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/538 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/538/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/538/comments | https://api.github.com/repos/huggingface/datasets/issues/538/events | https://github.com/huggingface/datasets/pull/538 | 688,015,912 | MDExOlB1bGxSZXF1ZXN0NDc1MzU3MjY2 | 538 | [logging] Add centralized logging - Bump-up cache loads to warnings | [] | closed | false | null | 0 | 2020-08-28T11:42:29Z | 2020-08-31T11:42:51Z | 2020-08-31T11:42:51Z | null | Add a `nlp.logging` module to set the global logging level easily. The verbosity level also controls the tqdm bars (disabled when set higher than INFO).
You can use:
```
nlp.logging.set_verbosity(verbosity: int)
nlp.logging.set_verbosity_info()
nlp.logging.set_verbosity_warning()
nlp.logging.set_verbosity_debug()
nlp.logging.set_verbosity_error()
nlp.logging.get_verbosity() -> int
```
And use the levels:
```
nlp.logging.CRITICAL
nlp.logging.DEBUG
nlp.logging.ERROR
nlp.logging.FATAL
nlp.logging.INFO
nlp.logging.NOTSET
nlp.logging.WARN
nlp.logging.WARNING
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/538/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/538/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/538.diff",
"html_url": "https://github.com/huggingface/datasets/pull/538",
"merged_at": "2020-08-31T11:42:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/538.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/538"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5762 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5762/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5762/comments | https://api.github.com/repos/huggingface/datasets/issues/5762/events | https://github.com/huggingface/datasets/issues/5762 | 1,670,326,470 | I_kwDODunzps5jjyjG | 5,762 | Not able to load the pile | [] | closed | false | null | 1 | 2023-04-17T03:09:10Z | 2023-04-17T09:37:27Z | 2023-04-17T09:37:27Z | null | ### Describe the bug
Got this error when I am trying to load the pile dataset
```
TypeError: Couldn't cast array of type
struct<file: string, id: string>
to
{'id': Value(dtype='string', id=None)}
```
### Steps to reproduce the bug
Please visit the following sample notebook
https://colab.research.google.com/drive/1JHcjawcHL6QHhi5VcqYd07W2QCEj2nWK#scrollTo=ulJP3eJCI-tB
### Expected behavior
The Pile should load without errors.
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-5.10.147+-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.13.4
- PyArrow version: 9.0.0
- Pandas version: 1.5.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5762/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5762/timeline | null | completed | null | null | false | [
"Thanks for reporting, @surya-narayanan.\r\n\r\nI see you already started a discussion about this on the Community tab of the corresponding dataset: https://huggingface.co/datasets/EleutherAI/the_pile/discussions/10\r\nLet's continue the discussion there!"
] |
https://api.github.com/repos/huggingface/datasets/issues/3549 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3549/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3549/comments | https://api.github.com/repos/huggingface/datasets/issues/3549/events | https://github.com/huggingface/datasets/pull/3549 | 1,096,426,996 | PR_kwDODunzps4wqkGt | 3,549 | Fix sem_eval_2018_task_1 download location | [] | closed | false | null | 2 | 2022-01-07T15:37:52Z | 2022-01-27T15:52:03Z | 2022-01-27T15:52:03Z | null | This changes the download location of sem_eval_2018_task_1 files to include the test set labels as discussed in https://github.com/huggingface/datasets/issues/2745#issuecomment-954588500_ with @lhoestq. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3549/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3549/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3549.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3549",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3549.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3549"
} | true | [
"Hi ! Thanks for pushing this :)\r\n\r\nIt seems that you created this PR from an old version of `datasets` that didn't have the sem_eval_2018_task_1.py file.\r\n\r\nCan you try merging `master` into your branch ? Or re-create your PR from a branch that comes from a more recent version of `datasets` ?\r\n\r\nAnd so... |
https://api.github.com/repos/huggingface/datasets/issues/3125 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3125/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3125/comments | https://api.github.com/repos/huggingface/datasets/issues/3125/events | https://github.com/huggingface/datasets/pull/3125 | 1,032,046,666 | PR_kwDODunzps4teNPC | 3,125 | Add SLR83 to OpenSLR | [] | closed | false | null | 0 | 2021-10-21T04:26:00Z | 2021-10-22T20:10:05Z | 2021-10-22T08:30:22Z | null | The PR resolves #3119, adding SLR83 (UK and Ireland dialects) to the previously created OpenSLR dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3125/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3125/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3125.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3125",
"merged_at": "2021-10-22T08:30:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3125.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3125"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/310 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/310/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/310/comments | https://api.github.com/repos/huggingface/datasets/issues/310/events | https://github.com/huggingface/datasets/pull/310 | 644,806,720 | MDExOlB1bGxSZXF1ZXN0NDM5MzY1MDg5 | 310 | add wikisql | [] | closed | false | null | 1 | 2020-06-24T18:00:35Z | 2020-06-25T12:32:25Z | 2020-06-25T12:32:25Z | null | Adding the [WikiSQL](https://github.com/salesforce/WikiSQL) dataset.
Interesting things to note:
- I have copied the function (`_convert_to_human_readable`) which converts the SQL query to a human-readable string format, as this is what most people will want when actually using this dataset for NLP applications.
- `conds` was originally a tuple but is converted to a dictionary to support differing types.
It would be nice to add the logical_form metrics too at some point. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/310/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/310/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/310.diff",
"html_url": "https://github.com/huggingface/datasets/pull/310",
"merged_at": "2020-06-25T12:32:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/310.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/310"
} | true | [
"That's great work @ghomasHudson !"
] |
https://api.github.com/repos/huggingface/datasets/issues/5046 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5046/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5046/comments | https://api.github.com/repos/huggingface/datasets/issues/5046/events | https://github.com/huggingface/datasets/issues/5046 | 1,391,372,519 | I_kwDODunzps5S7qjn | 5,046 | Audiofolder creates empty Dataset if files same level as metadata | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "7057ff",
"default": true,
"descript... | closed | false | null | 5 | 2022-09-29T19:17:23Z | 2022-10-28T13:05:07Z | 2022-10-28T13:05:07Z | null | ## Describe the bug
When audio files are at the same level as the metadata (`metadata.csv` or `metadata.jsonl`), `load_dataset` returns a `DatasetDict` with no rows but the correct columns.
https://github.com/huggingface/datasets/blob/1ea4d091b7a4b83a85b2eeb8df65115d39af3766/docs/source/audio_dataset.mdx?plain=1#L88
## Steps to reproduce the bug
`metadata.csv`:
```csv
file_name,duration,transcription
./2063_fe9936e7-62b2-4e62-a276-acbd344480ce_1.wav,10.768,hello
```
```python
>>> audio_dataset = load_dataset("audiofolder", data_dir="/audio-data/")
>>> audio_dataset
DatasetDict({
train: Dataset({
features: ['audio', 'duration', 'transcription'],
num_rows: 0
})
validation: Dataset({
features: ['audio', 'duration', 'transcription'],
num_rows: 0
})
})
```
I've tried, with no success:
- setting `split` to something else so I don't get a `DatasetDict`,
- removing the `./`,
- using `.jsonl`.
## Expected results
```
Dataset({
features: ['audio', 'duration', 'transcription'],
num_rows: 1
})
```
## Actual results
```
DatasetDict({
train: Dataset({
features: ['audio', 'duration', 'transcription'],
num_rows: 0
})
validation: Dataset({
features: ['audio', 'duration', 'transcription'],
num_rows: 0
})
})
```
## Environment info
- `datasets` version: 2.5.1
- Platform: Linux-5.13.0-1025-aws-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 9.0.0
- Pandas version: 1.5.0
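For reference, a hedged fix sketch, assuming the loader matches `file_name` values literally so the `./` prefix must be dropped:
```python
from datasets import load_dataset

# metadata.csv, with the "./" prefix removed from file_name:
#   file_name,duration,transcription
#   2063_fe9936e7-62b2-4e62-a276-acbd344480ce_1.wav,10.768,hello
audio_dataset = load_dataset("audiofolder", data_dir="/audio-data/")
print(audio_dataset["train"].num_rows)  # expected: 1
```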
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5046/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5046/timeline | null | completed | null | null | false | [
"Hi! Unfortunately, I can't reproduce this behavior. Instead, I get `ValueError: audio at 2063_fe9936e7-62b2-4e62-a276-acbd344480ce_1.wav doesn't have metadata in /audio-data/metadata.csv`, which can be fixed by removing the `./` from the file name.\r\n\r\n(Link to a Colab that tries to reproduce this behavior: htt... |
https://api.github.com/repos/huggingface/datasets/issues/3690 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3690/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3690/comments | https://api.github.com/repos/huggingface/datasets/issues/3690/events | https://github.com/huggingface/datasets/pull/3690 | 1,127,493,538 | PR_kwDODunzps4yP2p5 | 3,690 | Update docs to new frontend/UI | [] | closed | false | null | 17 | 2022-02-08T16:38:09Z | 2022-03-03T20:04:21Z | 2022-03-03T20:04:20Z | null | ### TLDR: Update `datasets` `docs` to the new syntax (markdown and mdx files) & frontend (as how it looks on [hf.co/transformers](https://huggingface.co/docs/transformers/index))
| Light mode | Dark mode |
|-----------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------|
| <img width="400" alt="Screenshot 2022-02-17 at 14 15 34" src="https://user-images.githubusercontent.com/11827707/154489358-e2fb3708-8d72-4fb6-93f0-51d4880321c0.png"> | <img width="400" alt="Screenshot 2022-02-17 at 14 16 27" src="https://user-images.githubusercontent.com/11827707/154489596-c5a1311b-181c-4341-adb3-d60a7d3abe85.png"> |
## Checklist
- [x] update datasets docs to new syntax (should call `doc-builder convert`) (this PR)
- [x] discuss `@property` methods frontend https://github.com/huggingface/doc-builder/pull/87
- [x] discuss `inject_arrow_table_documentation` (this PR) https://github.com/huggingface/datasets/pull/3690#discussion_r801847860
- [x] update datasets docs path on moon-landing https://github.com/huggingface/moon-landing/pull/2089
- [x] convert pyarrow docstring from Numpydoc style to groups style https://github.com/huggingface/doc-builder/pull/89(https://stackoverflow.com/a/24385103/6558628)
- [x] handle `Raises` section on frontend and doc-builder https://github.com/huggingface/doc-builder/pull/86
- [x] check imgs path (this PR) (nothing to update here)
- [x] doc exaples block has to follow format `Examples::` https://github.com/huggingface/datasets/pull/3693
- [x] fix [this docstring](https://github.com/huggingface/datasets/blob/6ed6ac9448311930557810383d2cfd4fe6aae269/src/datasets/arrow_dataset.py#L3339) (causing svelte compilation error)
- [x] Delete sphinx related files
- [x] Delete sphinx CI
- [x] Update docs config in setup.py
- [x] add `versions.yml` in doc-build https://github.com/huggingface/doc-build/pull/1
- [x] add `versions.yml` in doc-build-dev https://github.com/huggingface/doc-build-dev/pull/1
- [x] https://github.com/huggingface/moon-landing/pull/2089
- [x] format docstrings for example `datasets.DatasetBuilder.download_and_prepare` args format look wrong
- [x] create new github actions. (can probably be in a separate PR) (see the transformers equivalents below)
1. [build_dev_documentation.yml](https://github.com/huggingface/transformers/blob/master/.github/workflows/build_dev_documentation.yml)
2. [build_documentation.yml](https://github.com/huggingface/transformers/blob/master/.github/workflows/build_documentation.yml)
3. [delete_dev_documentation.yml](https://github.com/huggingface/transformers/blob/master/.github/workflows/delete_dev_documentation.yml)
## Note to reviewers
The number of changed files is a lot (100+) because I've converted all `.rst` files to `.mdx` files & they are compiling fine on the svelte side (also, moved all the imgs to to [doc-imgs repo](https://huggingface.co/datasets/huggingface/documentation-images/tree/main/datasets)). Moreover, you should just review them on preprod and see if the rendering look fine.
_Therefore, I'd suggest to focus on the changed_ **`.py`** and **CI files** (github workflows, etc. you can use [this filter here](https://github.com/huggingface/datasets/pull/3690/files?file-filters%5B%5D=.py&file-filters%5B%5D=.yml&show-deleted-files=true&show-viewed-files=true)) during the review & ignore `.mdx` files. (if there's a bug in `.mdx` files, we can always handle it in a separate PR afterwards). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 4,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3690/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3690/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3690.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3690",
"merged_at": "2022-03-03T20:04:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3690.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3690"
} | true | [
"We can have the docstrings of the properties that are missing docstrings (from discussion [here](https://github.com/huggingface/doc-builder/pull/96)) here by using your new `inject_arrow_table_documentation` onthem as well ?",
"@sgugger & @lhoestq could you help me with what should the `docs` section in setup.py... |
https://api.github.com/repos/huggingface/datasets/issues/3936 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3936/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3936/comments | https://api.github.com/repos/huggingface/datasets/issues/3936/events | https://github.com/huggingface/datasets/pull/3936 | 1,170,713,473 | PR_kwDODunzps40hE-P | 3,936 | Fix Wikipedia version and re-add tests | [] | closed | false | null | 1 | 2022-03-16T08:48:04Z | 2022-03-16T17:04:07Z | 2022-03-16T17:04:05Z | null | To keep backward compatibility when loading using "wikipedia" dataset ID (https://huggingface.co/datasets/wikipedia), we have created the pre-processed data for the same languages we were offering before, but with updated date "20220301":
- de
- en
- fr
- frr
- it
- simple
These pre-processed data can be accessed, e.g.:
```python
ds = load_dataset("wikipedia", "20220301.frr", split="train")
```
The next step will be to offer the pre-processed data for many other languages, but when loading using "wikimedia/wikipedia": https://huggingface.co/datasets/wikimedia/wikipedia | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3936/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3936/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3936.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3936",
"merged_at": "2022-03-16T17:04:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3936.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3936"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3936). All of your documentation changes will be reflected on that endpoint."
] |
https://api.github.com/repos/huggingface/datasets/issues/2659 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2659/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2659/comments | https://api.github.com/repos/huggingface/datasets/issues/2659/events | https://github.com/huggingface/datasets/pull/2659 | 946,155,407 | MDExOlB1bGxSZXF1ZXN0NjkxMzcwNzU3 | 2,659 | Allow dataset config kwargs to be None | [] | closed | false | null | 0 | 2021-07-16T10:25:38Z | 2021-07-16T12:46:07Z | 2021-07-16T12:46:07Z | null | Close https://github.com/huggingface/datasets/issues/2658
The dataset config kwargs that were set to None were simply ignored.
This was an issue when None has some meaning for certain parameters of certain builders, like the `sep` parameter of the "csv" builder that allows the separator to be inferred.
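For illustration, a minimal sketch of what this enables (the file name is a placeholder; per the description above, `sep=None` lets the csv builder infer the separator):
```python
from datasets import load_dataset

# sep=None is now forwarded to the csv builder instead of being ignored,
# so the separator can be inferred from the file.
dataset = load_dataset("csv", data_files="my_file.csv", sep=None)
```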
cc @SBrandeis | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2659/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2659/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2659.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2659",
"merged_at": "2021-07-16T12:46:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2659.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2659"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1992 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1992/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1992/comments | https://api.github.com/repos/huggingface/datasets/issues/1992/events | https://github.com/huggingface/datasets/issues/1992 | 822,672,238 | MDU6SXNzdWU4MjI2NzIyMzg= | 1,992 | `datasets.map` multi processing much slower than single processing | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 13 | 2021-03-05T02:10:02Z | 2023-06-08T12:31:55Z | null | null | Hi, thank you for the great library.
I've been using datasets to pretrain language models, and it often involves datasets as large as ~70G.
My data preparation roughly involves two steps: `load_dataset`, which splits corpora into a table of sentences, and `map`, which converts a sentence into a list of integers using a tokenizer.
I noticed that the `map` function with `num_proc=mp.cpu_count() // 2` takes more than 20 hours to finish the job, whereas `num_proc=1` gets the job done in about 5 hours. The machine I used has 40 cores, with 126G of RAM. There were no other jobs on the machine while `map` was running.
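For reference, a minimal sketch of the two configurations being compared (the dataset, file name, and `tokenize` function here are placeholders, not the original setup):
```python
import multiprocessing as mp
from datasets import load_dataset

dataset = load_dataset("text", data_files="corpus.txt", split="train")

def tokenize(batch):
    # Placeholder for the real tokenizer call
    return {"input_ids": [[len(s)] for s in batch["text"]]}

# The slow configuration reported above vs. the fast one
multi = dataset.map(tokenize, batched=True, num_proc=mp.cpu_count() // 2)
single = dataset.map(tokenize, batched=True, num_proc=1)
```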
What could be the reason? I would be happy to provide any information necessary to track it down.
p.s. I was experiencing the imbalance issue mentioned in [here](https://github.com/huggingface/datasets/issues/610#issuecomment-705177036) when I was using multi processing.
p.s.2 When I run `map` with `num_proc=1`, I see one tqdm bar but all the cores are working. When `num_proc=20`, only 20 cores work.

| {
"+1": 4,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1992/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1992/timeline | null | null | null | null | false | [
"Hi @hwijeen, you might want to look at issues #1796 and #1949. I think it could be something related to the I/O operations being performed.",
"I see that many people are experiencing the same issue. Is this problem considered an \"official\" bug that is worth a closer look? @lhoestq",
"Yes this looks like a bu... |
https://api.github.com/repos/huggingface/datasets/issues/1330 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1330/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1330/comments | https://api.github.com/repos/huggingface/datasets/issues/1330/events | https://github.com/huggingface/datasets/pull/1330 | 759,657,324 | MDExOlB1bGxSZXF1ZXN0NTM0NjI0MzMx | 1,330 | added un_ga dataset | [] | closed | false | null | 2 | 2020-12-08T17:58:38Z | 2020-12-14T17:52:34Z | 2020-12-14T17:52:34Z | null | Hi :hugs:, This is a PR for [United nations general assembly resolutions: A six-language parallel corpus](http://opus.nlpl.eu/UN.php) dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1330/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1330/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1330.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1330",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1330.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1330"
} | true | [
"Looks like this PR includes changes about many other files than the ones for un_ga\r\n\r\nCan you create another branch an another PR please ?",
"@lhoestq, Thank you for suggestions. I have made the changes and raised the new PR https://github.com/huggingface/datasets/pull/1569. "
] |
https://api.github.com/repos/huggingface/datasets/issues/1786 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1786/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1786/comments | https://api.github.com/repos/huggingface/datasets/issues/1786/events | https://github.com/huggingface/datasets/issues/1786 | 795,462,816 | MDU6SXNzdWU3OTU0NjI4MTY= | 1,786 | How to use split dataset | [
{
"color": "d876e3",
"default": true,
"description": "Further information is requested",
"id": 1935892912,
"name": "question",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question"
}
] | closed | false | null | 2 | 2021-01-27T21:37:47Z | 2021-04-23T15:17:39Z | 2021-04-23T15:17:39Z | null | 
Hey,
I want to split the lambada dataset into corpus, test, train and valid txt files (like penn treebank), but I am not able to achieve this. What I am doing is executing the lambada.py file in my project, but it's not giving the desired results. Any help will be appreciated!
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1786/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1786/timeline | null | completed | null | null | false | [
"By default, all 3 splits will be loaded if you run the following:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"lambada\")\r\nprint(dataset[\"train\"])\r\nprint(dataset[\"valid\"])\r\n\r\n```\r\n\r\nIf you wanted to do load this manually, you could do this:\r\n\r\n```python\r\nf... |
https://api.github.com/repos/huggingface/datasets/issues/5313 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5313/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5313/comments | https://api.github.com/repos/huggingface/datasets/issues/5313/events | https://github.com/huggingface/datasets/pull/5313 | 1,468,484,136 | PR_kwDODunzps5D6Qfb | 5,313 | Fix description of streaming in the docs | [] | closed | false | null | 1 | 2022-11-29T18:00:28Z | 2022-12-01T14:55:30Z | 2022-12-01T14:00:34Z | null | We say that "the data is being downloaded progressively" which is not true, it's just streamed, so I fixed it. Probably I missed some other places where it is written?
Also changed docstrings for `StreamingDownloadManager`'s `download` and `extract` to reflect the same, as these docstrings are displayed in the documentation cc @lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5313/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5313/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5313.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5313",
"merged_at": "2022-12-01T14:00:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5313.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5313"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/1814 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1814/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1814/comments | https://api.github.com/repos/huggingface/datasets/issues/1814/events | https://github.com/huggingface/datasets/pull/1814 | 800,516,236 | MDExOlB1bGxSZXF1ZXN0NTY2OTg4NTI1 | 1,814 | Add Freebase QA Dataset | [] | closed | false | null | 1 | 2021-02-03T16:57:49Z | 2021-02-04T19:47:51Z | 2021-02-04T16:21:48Z | null | Closes PR #1435. Fixed issues with PR #1809.
Requesting @lhoestq to review. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1814/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1814/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1814.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1814",
"merged_at": "2021-02-04T16:21:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1814.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1814"
} | true | [
"Hi @lhoestq \r\n\r\nThanks for approving. Request you to close PR #1435 as well."
] |
https://api.github.com/repos/huggingface/datasets/issues/4587 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4587/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4587/comments | https://api.github.com/repos/huggingface/datasets/issues/4587/events | https://github.com/huggingface/datasets/pull/4587 | 1,287,291,494 | PR_kwDODunzps46flzR | 4,587 | Validate new_fingerprint passed by user | [] | closed | false | null | 1 | 2022-06-28T12:46:21Z | 2022-06-28T14:11:57Z | 2022-06-28T14:00:44Z | null | Users can pass the dataset fingerprint they want in `map` and other dataset transforms.
However the fingerprint is used to name cache files so we need to make sure it doesn't contain bad characters as mentioned in https://github.com/huggingface/datasets/issues/1718, and that it's not too long | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4587/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4587/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4587.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4587",
"merged_at": "2022-06-28T14:00:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4587.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4587"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2183 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2183/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2183/comments | https://api.github.com/repos/huggingface/datasets/issues/2183/events | https://github.com/huggingface/datasets/pull/2183 | 852,518,411 | MDExOlB1bGxSZXF1ZXN0NjEwNzU3MjUz | 2,183 | Fix s3fs tests for py36 and py37+ | [] | closed | false | null | 0 | 2021-04-07T15:17:11Z | 2021-04-08T08:54:45Z | 2021-04-08T08:54:44Z | null | Recently several changes happened:
1. latest versions of `fsspec` require python>3.7 for async features
2. `s3fs` added a dependency on `aiobotocore`, which is not compatible with the `moto` s3 mock context manager
This PR fixes both issues, by pinning `fsspec` and `s3fs` for python 3.6, and by using `moto` in server mode to support running the tests on python>=3.7 with the latest version of `fsspec` and `s3fs`.
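For illustration, a hedged sketch of pointing `s3fs` at a locally running moto server (the port and credentials are placeholders):
```python
import s3fs

# With moto running in server mode (e.g. `moto_server s3 -p 5000`),
# tests can target the mock S3 endpoint directly instead of using
# the incompatible context-manager mock.
fs = s3fs.S3FileSystem(
    key="fake-key",
    secret="fake-secret",
    client_kwargs={"endpoint_url": "http://127.0.0.1:5000"},
)
```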
cc @philschmid | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2183/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2183/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2183.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2183",
"merged_at": "2021-04-08T08:54:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2183.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2183"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3966 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3966/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3966/comments | https://api.github.com/repos/huggingface/datasets/issues/3966/events | https://github.com/huggingface/datasets/pull/3966 | 1,173,883,084 | PR_kwDODunzps40rBNE | 3,966 | Create metric card for BERTScore | [] | closed | false | null | 1 | 2022-03-18T18:21:56Z | 2022-03-22T13:35:28Z | 2022-03-22T13:30:56Z | null | Proposing a metric card for BERTScore | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3966/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3966/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3966.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3966",
"merged_at": "2022-03-22T13:30:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3966.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3966"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/1012 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1012/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1012/comments | https://api.github.com/repos/huggingface/datasets/issues/1012/events | https://github.com/huggingface/datasets/pull/1012 | 755,485,658 | MDExOlB1bGxSZXF1ZXN0NTMxMTg3MTI2 | 1,012 | Adding Evidence Inference Data: | [] | closed | false | null | 0 | 2020-12-02T17:51:35Z | 2020-12-03T15:04:46Z | 2020-12-03T15:04:46Z | null | http://evidence-inference.ebm-nlp.com/download/
https://arxiv.org/pdf/2005.04177.pdf | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1012/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1012/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1012.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1012",
"merged_at": "2020-12-03T15:04:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1012.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1012"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3220 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3220/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3220/comments | https://api.github.com/repos/huggingface/datasets/issues/3220/events | https://github.com/huggingface/datasets/issues/3220 | 1,045,549,029 | I_kwDODunzps4-Uc_l | 3,220 | Add documentation about dataset viewer feature | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 0 | 2021-11-05T08:11:19Z | 2021-11-05T08:11:19Z | null | null | Add to the docs more details about the dataset viewer feature in the Hub.
CC: @julien-c
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3220/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3220/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/3252 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3252/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3252/comments | https://api.github.com/repos/huggingface/datasets/issues/3252/events | https://github.com/huggingface/datasets/pull/3252 | 1,051,124,749 | PR_kwDODunzps4uagoy | 3,252 | Fix failing CER metric test in CI after update | [] | closed | false | null | 0 | 2021-11-11T15:57:16Z | 2021-11-12T14:06:44Z | 2021-11-12T14:06:43Z | null | Fixes the [failing CER metric test](https://app.circleci.com/pipelines/github/huggingface/datasets/8644/workflows/79816553-fa2f-4756-b022-d5937f00bf7b/jobs/53298) in CI by adding support for `jiwer==2.3.0`, which was released yesterday. Also, I verified that all the tests in `metrics/cer/test_cer.py` pass after the change, so the results should be the same irrespective of the `jiwer` version. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3252/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3252/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3252.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3252",
"merged_at": "2021-11-12T14:06:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3252.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3252"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5658 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5658/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5658/comments | https://api.github.com/repos/huggingface/datasets/issues/5658/events | https://github.com/huggingface/datasets/pull/5658 | 1,634,867,204 | PR_kwDODunzps5MmJe0 | 5,658 | docs: Update num_shards docs to mention num_proc on Dataset and DatasetDict | [] | closed | false | null | 2 | 2023-03-22T00:12:18Z | 2023-03-24T16:43:34Z | 2023-03-24T16:36:21Z | null | Closes #5653
@mariosasko | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5658/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5658/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5658.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5658",
"merged_at": "2023-03-24T16:36:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5658.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5658"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/3324 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3324/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3324/comments | https://api.github.com/repos/huggingface/datasets/issues/3324/events | https://github.com/huggingface/datasets/issues/3324 | 1,064,661,212 | I_kwDODunzps4_dXDc | 3,324 | Can't import `datasets` in python 3.10 | [] | closed | false | null | 0 | 2021-11-26T16:06:14Z | 2021-11-26T16:31:23Z | 2021-11-26T16:31:23Z | null | When importing `datasets` I'm getting this error in python 3.10:
```python
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/__init__.py", line 34, in <module>
from .arrow_dataset import Dataset, concatenate_datasets
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/arrow_dataset.py", line 47, in <module>
from .arrow_reader import ArrowReader
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/arrow_reader.py", line 33, in <module>
from .table import InMemoryTable, MemoryMappedTable, Table, concat_tables
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/table.py", line 334, in <module>
class InMemoryTable(TableBlock):
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/table.py", line 361, in InMemoryTable
def from_pandas(cls, *args, **kwargs):
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/table.py", line 24, in wrapper
out = wraps(arrow_table_method)(method)
File "/Users/quentinlhoest/.pyenv/versions/3.10.0/lib/python3.10/functools.py", line 61, in update_wrapper
wrapper.__wrapped__ = wrapped
AttributeError: readonly attribute
```
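For context, a hypothetical minimal reproduction, assuming the failure comes from applying `functools.wraps` to a `classmethod` object, whose `__wrapped__` attribute is a read-only descriptor on Python 3.10:
```python
from functools import wraps

def source():
    """Source docstring."""

# Raises AttributeError: readonly attribute on Python 3.10
failing = wraps(source)(classmethod(lambda cls: None))
```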
This makes the conda build fail.
I'm opening a PR to fix this and do a patch release 1.16.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3324/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3324/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/3982 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3982/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3982/comments | https://api.github.com/repos/huggingface/datasets/issues/3982/events | https://github.com/huggingface/datasets/pull/3982 | 1,175,478,099 | PR_kwDODunzps40vrR_ | 3,982 | Exclude Google Drive tests of the CI | [] | closed | false | null | 2 | 2022-03-21T14:34:16Z | 2022-03-31T16:38:02Z | 2022-03-21T14:51:35Z | null | These tests make the CI spam the Google Drive API, the CI now gets banned by Google Drive very often.
I think we can just skip these tests from the CI for now.
In the future we could have a CI job that runs only once a day or once a week for such cases
cc @albertvillanova @mariosasko @severo
Close #3415

| {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3982/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3982/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3982.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3982",
"merged_at": "2022-03-21T14:51:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3982.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3982"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I was thinking exactly the same: running unit tests that request continuously a third-party API is not a good idea."
] |
https://api.github.com/repos/huggingface/datasets/issues/3648 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3648/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3648/comments | https://api.github.com/repos/huggingface/datasets/issues/3648/events | https://github.com/huggingface/datasets/pull/3648 | 1,117,465,505 | PR_kwDODunzps4xvXig | 3,648 | Fix Windows CI: bump python to 3.7 | [] | closed | false | null | 0 | 2022-01-28T14:24:54Z | 2022-01-28T14:40:39Z | 2022-01-28T14:40:39Z | null | Python>=3.7 is needed to install `tokenizers` 0.11 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3648/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3648/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3648.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3648",
"merged_at": "2022-01-28T14:40:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3648.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3648"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1724 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1724/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1724/comments | https://api.github.com/repos/huggingface/datasets/issues/1724/events | https://github.com/huggingface/datasets/issues/1724 | 784,023,338 | MDU6SXNzdWU3ODQwMjMzMzg= | 1,724 | could not run models on a offline server successfully | [] | closed | false | null | 6 | 2021-01-12T06:08:06Z | 2022-10-05T12:39:07Z | 2022-10-05T12:39:07Z | null | Hi, I really need your help about this.
I am trying to fine-tuning a RoBERTa on a remote server, which is strictly banning internet. I try to install all the packages by hand and try to run run_mlm.py on the server. It works well on colab, but when I try to run it on this offline server, it shows:

is there anything I can do? Is it possible to download all the things in cache and upload it to the server? Please help me out... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1724/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1724/timeline | null | completed | null | null | false | [
"Transferred to `datasets` based on the stack trace.",
"Hi @lkcao !\r\nYour issue is indeed related to `datasets`. In addition to installing the package manually, you will need to download the `text.py` script on your server. You'll find it (under `datasets/datasets/text`: https://github.com/huggingface/datasets/... |
https://api.github.com/repos/huggingface/datasets/issues/383 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/383/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/383/comments | https://api.github.com/repos/huggingface/datasets/issues/383/events | https://github.com/huggingface/datasets/pull/383 | 655,291,201 | MDExOlB1bGxSZXF1ZXN0NDQ3ODI0OTky | 383 | Adding the Linguistic Code-switching Evaluation (LinCE) benchmark | [] | closed | false | null | 5 | 2020-07-11T22:35:20Z | 2020-07-16T16:19:46Z | 2020-07-16T16:19:46Z | null | Hi,
First of all, this library is really cool! Thanks for putting all of this together!
This PR contains the [Linguistic Code-switching Evaluation (LinCE) benchmark](https://ritual.uh.edu/lince). As described on the official website (FAQ):
> 1. Why do we need LinCE?
>LinCE brings 10 code-switching datasets together for 4 tasks and 4 language pairs with 5 leaderboards in a single evaluation platform. We examined each dataset and fixed major issues on the partitions (or even define official partitions) with a comprehensive stratification method (see our paper for more details).
>Besides, we believe that online benchmarks like LinCE bring steady research progress and allow to compare state-of-the-art models at the pace of the progress in NLP. We expect to benefit greatly the code-switching community with this benchmark.
The data comes from social media and here's the summary table of tasks per language pair:
| Language Pairs | LID | POS | NER | SA |
|----------------------------------------|-----|-----|-----|----|
| Spanish-English | ✅ | ✅ | ✅ | ✅ |
| Hindi-English | ✅ | ✅ | ✅ | |
| Modern Standard Arabic-Egyptian Arabic | ✅ | | ✅ | |
| Nepali-English | ✅ | | | |
The tasks are as follows:
* LID: token-level language identification
* POS: part-of-speech tagging
* NER: named entity recognition
* SA: sentiment analysis
With the exception of MSA-EA, the rest of the datasets contain token-level LID labels.
## Usage
For Spanish-English LID, we can load the data as follows:
```
import nlp
data = nlp.load_dataset('./datasets/lince/lince.py', 'lid_spaeng')
for split in data:
print(data[split])
```
Here's the output:
```
Dataset(schema: {'idx': 'int32', 'tokens': 'list<item: string>', 'lid': 'list<item: string>'}, num_rows: 21030)
Dataset(schema: {'idx': 'int32', 'tokens': 'list<item: string>', 'lid': 'list<item: string>'}, num_rows: 3332)
Dataset(schema: {'idx': 'int32', 'tokens': 'list<item: string>', 'lid': 'list<item: string>'}, num_rows: 8289)
```
Here's the list of shortcut names for every dataset available in LinCE:
* `lid_spaeng`
* `lid_hineng`
* `lid_nepeng`
* `lid_msaea`
* `pos_spaeng`
* `pos_hineng`
* `ner_spaeng`
* `ner_hineng`
* `ner_msaea`
* `sa_spaeng`
All the numbers match with Table 3 in the LinCE [paper](https://www.aclweb.org/anthology/2020.lrec-1.223.pdf). Also, note that the MSA-EA datasets use the Persian script while the other datasets use the Roman script.
## Features
Here is how the features look in the case of language identification (LID) tasks:
| LID Feature | Type | Description |
|----------------------|---------------|-------------------------------------------|
| `idx` | `int` | Dataset index of current sentence |
| `tokens` | `list<str>` | List of tokens (string) of a sentence |
| `lid` | `list<str>` | List of LID labels (string) of a sentence |
For part-of-speech (POS) tagging:
| POS Feature | Type | Description |
|----------------------|---------------|-------------------------------------------|
| `idx` | `int` | Dataset index of current sentence |
| `tokens` | `list<str>` | List of tokens (string) of a sentence |
| `lid` | `list<str>` | List of LID labels (string) of a sentence |
| `pos` | `list<str>` | List of POS tags (string) of a sentence |
For named entity recognition (NER):
| NER Feature | Type | Description |
|----------------------|---------------|-------------------------------------------|
| `idx` | `int` | Dataset index of current sentence |
| `tokens` | `list<str>` | List of tokens (string) of a sentence |
| `lid` | `list<str>` | List of LID labels (string) of a sentence |
| `ner` | `list<str>` | List of NER labels (string) of a sentence |
**NOTE**: the MSA-EA NER dataset does not contain the `lid` feature.
For sentiment analysis (SA):
| SA Feature | Type | Description |
|---------------------|-------------|-------------------------------------------|
| `idx` | `int` | Dataset index of current sentence |
| `tokens` | `list<str>` | List of tokens (string) of a sentence |
| `lid` | `list<str>` | List of LID labels (string) of a sentence |
| `sa` | `str` | Sentiment label (string) of a sentence |
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/383/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/383/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/383.diff",
"html_url": "https://github.com/huggingface/datasets/pull/383",
"merged_at": "2020-07-16T16:19:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/383.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/383"
} | true | [
"I am checking the details of the CI log for the failed test, but I don't see how the error relates to the code I added; the error is coming from a config builder different than the `LinceConfig`, and it crashes when `self.config.data_files` because is self.config is None. I would appreciate if someone could help m... |
https://api.github.com/repos/huggingface/datasets/issues/5324 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5324/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5324/comments | https://api.github.com/repos/huggingface/datasets/issues/5324/events | https://github.com/huggingface/datasets/issues/5324 | 1,471,524,512 | I_kwDODunzps5Xta6g | 5,324 | Fix docstrings and types in documentation that appears on the website | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | open | false | null | 2 | 2022-12-01T15:34:53Z | 2022-12-13T19:03:55Z | null | null | While I was working on https://github.com/huggingface/datasets/pull/5313 I've noticed that we have a mess in how we annotate types and format args and return values in the code. And some of it is displayed in the [Reference section](https://huggingface.co/docs/datasets/package_reference/builder_classes) of the documentation on the website.
Would be nice someday, maybe before releasing datasets 3.0.0, to unify it...... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5324/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5324/timeline | null | null | null | null | false | [
"I agree we have a mess with docstrings...",
"Ok, I believe we've cleaned up most of the old syntax we were using for the user-facing docs! There are still a couple of `:obj:`'s and `:class:` floating around in the docstrings we don't expose that I'll track down :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/3376 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3376/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3376/comments | https://api.github.com/repos/huggingface/datasets/issues/3376/events | https://github.com/huggingface/datasets/pull/3376 | 1,070,522,979 | PR_kwDODunzps4vW5sB | 3,376 | Update clue benchmark | [] | closed | false | null | 1 | 2021-12-03T12:06:01Z | 2021-12-08T14:14:42Z | 2021-12-08T14:14:41Z | null | Fix #3374 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3376/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3376/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3376.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3376",
"merged_at": "2021-12-08T14:14:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3376.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3376"
} | true | [
"The CI error is due to missing tags in the CLUE dataset card - merging !"
] |
https://api.github.com/repos/huggingface/datasets/issues/4887 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4887/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4887/comments | https://api.github.com/repos/huggingface/datasets/issues/4887/events | https://github.com/huggingface/datasets/pull/4887 | 1,349,426,693 | PR_kwDODunzps49t_PM | 4,887 | Add "cc-by-nc-sa-2.0" to list of licenses | [] | closed | false | null | 2 | 2022-08-24T13:11:49Z | 2022-08-26T10:31:32Z | 2022-08-26T10:29:20Z | null | Datasets side of https://github.com/huggingface/hub-docs/pull/285 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4887/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4887/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4887.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4887",
"merged_at": "2022-08-26T10:29:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4887.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4887"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Sorry for the issue @albertvillanova! I think it's now fixed! :heart: "
] |
https://api.github.com/repos/huggingface/datasets/issues/976 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/976/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/976/comments | https://api.github.com/repos/huggingface/datasets/issues/976/events | https://github.com/huggingface/datasets/pull/976 | 754,826,146 | MDExOlB1bGxSZXF1ZXN0NTMwNjU1NzM5 | 976 | Arabic pos dialect | [] | closed | false | null | 2 | 2020-12-02T00:21:13Z | 2020-12-09T17:30:32Z | 2020-12-09T17:30:32Z | null | A README.md and loading script for the Arabic POS Dialect dataset. The README is missing the sections on personal information, biases, and limitations, as it would probably be better for those to be filled by someone who can read the contents of the dataset and is familiar with Arabic NLP. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/976/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/976/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/976.diff",
"html_url": "https://github.com/huggingface/datasets/pull/976",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/976.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/976"
} | true | [
"looks like this PR includes changes about many other files than the oens for Araboc POS Dialect\r\n\r\nCan you create a another branch and another PR please ?",
"Sorry! I'm not sure how I managed to do that. I'll make a new branch."
] |
https://api.github.com/repos/huggingface/datasets/issues/5563 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5563/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5563/comments | https://api.github.com/repos/huggingface/datasets/issues/5563/events | https://github.com/huggingface/datasets/pull/5563 | 1,595,049,025 | PR_kwDODunzps5KgtbL | 5,563 | Release: 2.10.0 | [] | closed | false | null | 4 | 2023-02-22T12:48:52Z | 2023-02-22T13:05:55Z | 2023-02-22T12:56:48Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5563/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5563/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5563.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5563",
"merged_at": "2023-02-22T12:56:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5563.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5563"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/1726 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1726/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1726/comments | https://api.github.com/repos/huggingface/datasets/issues/1726/events | https://github.com/huggingface/datasets/pull/1726 | 784,336,370 | MDExOlB1bGxSZXF1ZXN0NTUzNTQ0ODg4 | 1,726 | Offline loading | [] | closed | false | null | 6 | 2021-01-12T15:21:57Z | 2022-02-15T10:32:10Z | 2021-01-19T16:42:32Z | null | As discussed in #824 it would be cool to make the library work in offline mode.
Currently, if there is no internet connection, modules (datasets or metrics) that have already been loaded in the past can't be loaded, and a ConnectionError is raised.
This is because `prepare_module` fetches the latest version of the module online.
To make it work in offline mode one suggestion was to reload the latest local version of the module.
I implemented that, and I also raise a warning saying that the module being loaded is the latest local version.
```python
logger.warning(
f"Using the latest cached version of the module from {cached_module_path} since it "
f"couldn't be found locally at {input_path} or remotely ({error_type_that_prevented_reaching_out_remote_stuff})."
)
```
I added tests to make sure it works as expected, and I needed to make a few changes in the code to be able to test things properly. In particular, I added a parameter `hf_modules_cache` to `init_dynamic_modules` for testing purposes. It makes it possible to have temporary module caches for testing.
I also added an `offline` context utility that makes it possible to test parts of the code by making all requests fail as if there were no internet connection.
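As an illustration, a hedged sketch of what such a context utility can look like (simplified; the actual implementation may differ):
```python
from contextlib import contextmanager
from unittest.mock import patch

import requests

@contextmanager
def offline():
    # Make every HTTP request fail as if there were no internet connection.
    error = requests.ConnectionError("Offline mode is enabled")
    with patch("requests.Session.request", side_effect=error):
        yield
```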
Close #824, close #761. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1726/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1726/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1726.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1726",
"merged_at": "2021-01-19T16:42:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1726.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1726"
} | true | [
"It's maybe a bit annoying to add but could we maybe have as well a version of the local data loading scripts in the package?\r\nThe `text`, `json`, `csv`. Thinking about people like in #1725 who are expecting to be able to work with local data without downloading anything.\r\n\r\nMaybe we can add them to package_d... |
https://api.github.com/repos/huggingface/datasets/issues/301 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/301/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/301/comments | https://api.github.com/repos/huggingface/datasets/issues/301/events | https://github.com/huggingface/datasets/issues/301 | 643,763,525 | MDU6SXNzdWU2NDM3NjM1MjU= | 301 | Setting cache_dir gives error on wikipedia download | [] | closed | false | null | 2 | 2020-06-23T11:31:44Z | 2020-06-24T07:05:07Z | 2020-06-24T07:05:07Z | null | First of all thank you for a super handy library! I'd like to download large files to a specific drive so I set `cache_dir=my_path`. This works fine with e.g. imdb and squad. But on wikipedia I get an error:
```
nlp.load_dataset('wikipedia', '20200501.de', split = 'train', cache_dir=my_path)
```
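The `OSError: [Errno 18] Invalid cross-device link` in the traceback below comes from `os.rename`, which cannot move a file across filesystems. A hedged sketch of a cross-device-safe alternative (illustrative only, not the actual fix that landed on master):
```python
import os
import shutil


def move_across_devices(src: str, dst_dir: str, filename: str) -> str:
    # Unlike os.rename, shutil.move falls back to copy-then-delete when
    # src and the destination directory live on different filesystems.
    dst = os.path.join(dst_dir, filename)
    shutil.move(src, dst)
    return dst
```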
```
OSError Traceback (most recent call last)
<ipython-input-2-23551344d7bc> in <module>
1 import nlp
----> 2 nlp.load_dataset('wikipedia', '20200501.de', split = 'train', cache_dir=path)
~/anaconda3/envs/fastai2/lib/python3.7/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
522 download_mode=download_mode,
523 ignore_verifications=ignore_verifications,
--> 524 save_infos=save_infos,
525 )
526
~/anaconda3/envs/fastai2/lib/python3.7/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
385 with utils.temporary_assignment(self, "_cache_dir", tmp_data_dir):
386 reader = ArrowReader(self._cache_dir, self.info)
--> 387 reader.download_from_hf_gcs(self._cache_dir, self._relative_data_dir(with_version=True))
388 downloaded_info = DatasetInfo.from_directory(self._cache_dir)
389 self.info.update(downloaded_info)
~/anaconda3/envs/fastai2/lib/python3.7/site-packages/nlp/arrow_reader.py in download_from_hf_gcs(self, cache_dir, relative_data_dir)
231 remote_dataset_info = os.path.join(remote_cache_dir, "dataset_info.json")
232 downloaded_dataset_info = cached_path(remote_dataset_info)
--> 233 os.rename(downloaded_dataset_info, os.path.join(cache_dir, "dataset_info.json"))
234 if self._info is not None:
235 self._info.update(self._info.from_directory(cache_dir))
OSError: [Errno 18] Invalid cross-device link: '/home/local/NTU/nn/.cache/huggingface/datasets/025fa4fd4f04aaafc9e939260fbc8f0bb190ce14c61310c8ae1ddd1dcb31f88c.9637f367b6711a79ca478be55fe6989b8aea4941b7ef7adc67b89ff403020947' -> '/data/nn/nlp/wikipedia/20200501.de/1.0.0.incomplete/dataset_info.json'
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/301/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/301/timeline | null | completed | null | null | false | [
"Whoops didn't mean to close this one.\r\nI did some changes, could you try to run it from the master branch ?",
"Now it works, thanks!"
] |
https://api.github.com/repos/huggingface/datasets/issues/3356 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3356/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3356/comments | https://api.github.com/repos/huggingface/datasets/issues/3356/events | https://github.com/huggingface/datasets/pull/3356 | 1,068,503,932 | PR_kwDODunzps4vQQLD | 3,356 | to_tf_dataset() refactor | [] | closed | false | null | 5 | 2021-12-01T14:54:30Z | 2021-12-09T10:26:53Z | 2021-12-09T10:26:53Z | null | This is the promised cleanup to `to_tf_dataset()` now that the course is out of the way! The main changes are:
- A collator is always required (there was way too much hackiness making things like labels work without it)
- Lots of cleanup and a lot of code moved to `_get_output_signature`
- Should now handle it gracefully when the data collator adds unexpected columns | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 3,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3356/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3356/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3356.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3356",
"merged_at": "2021-12-09T10:26:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3356.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3356"
} | true | [
"Also, please don't merge yet - I need to make sure all the code samples and notebooks have a collate_fn specified, since we're removing the ability for this method to work without one!",
"Hi @lhoestq @mariosasko, the other PRs this was depending on in Transformers and huggingface/notebooks are now merged, so thi... |
https://api.github.com/repos/huggingface/datasets/issues/4124 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4124/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4124/comments | https://api.github.com/repos/huggingface/datasets/issues/4124/events | https://github.com/huggingface/datasets/issues/4124 | 1,196,469,842 | I_kwDODunzps5HUK5S | 4,124 | Image decoding often fails when transforming Image datasets | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 7 | 2022-04-07T19:17:25Z | 2022-04-13T14:01:16Z | 2022-04-13T14:01:16Z | null | ## Describe the bug
When transforming/modifying images in an image dataset using the `map` function, the PIL images often fail to decode in time for the image transforms, causing errors.
Using a debugger, it is easy to see what the problem is: the Image decode invocation does not take place, and the image passed around is still raw bytes:
```
[{'bytes': b'\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x00 \x00\x00\x00 \x08\x02\x00\x00\x00\xfc\x18\xed\xa3\x00\x00\x08\x02IDATx\x9cEVIs[\xc7\x11\xeemf\xde\x82\x8d\x80\x08\x89"\xb5V\\\xb6\x94(\xe5\x9f\x90\xca5\x7f$\xa7T\xe5\x9f&9\xd9\x8a\\.\xdb\xa4$J\xa4\x00\x02x\xc0{\xb3t\xe7\x00\xca\x99\xd3\\f\xba\xba\xbf\xa5?|\xfa\xf4\xa2\xeb\xba\xedv\xa3f^\xf8\xd5\x0bY\xb6\x10\xb3\xaaDq\xcd\x83\x87\xdf5\xf3gZ\x1a\x04\x0f\xa0fp\xfa\xe0\xd4\x07?\x9dN\xc4\xb1\x99\xfd\xf2\xcb/\x97\x97\x97H\xa2\xaaf\x16\x82\xaf\xeb\xca{\xbf\xd9l.\xdf\x7f\xfa\xcb_\xff&\x88\x08\x00\x80H\xc0\x80@.;\x0f\x8c@#v\xe3\xe5\xfc\xd1\x9f\xee6q\xbf\xdf\xa6\x14\'\x93\xf1\xc3\xe5\xe3\xd1x\x14c\x8c1\xa5\x1c\x9dsM\xd3\xb4\xed\x08\x89SJ)\xa5\xedv\xbb^\xafNO\x97D\x84Hf ....
```
## Steps to reproduce the bug
```python
from datasets import load_dataset, Dataset
import numpy as np
# seeded NumPy random number generator for reproducible results.
rng = np.random.default_rng(seed=0)
test_dataset = load_dataset('cifar100', split="test")
def preprocess_data(dataset):
"""
Helper function to pre-process HuggingFace Cifar-100 Dataset to remove fine_label and coarse_label columns and
add is_flipped column
Args:
dataset: HuggingFace CIFAR-100 Dataset Object
Returns:
new_dataset: A Dataset object with "img" and "is_flipped" columns only
"""
# remove fine_label and coarse_label columns
new_dataset = dataset.remove_columns(['fine_label', 'coarse_label'])
# add the column for is_flipped
new_dataset = new_dataset.add_column(name="is_flipped", column=np.zeros((len(new_dataset)), dtype=np.uint8))
return new_dataset
def generate_flipped_data(example, p=0.5):
"""
    A Dataset mapping function that flips some of the images upside-down.
    If the probability value (p) is 0.5, approximately half the images will be flipped upside-down
Args:
example: An example from the dataset containing a Python dictionary with "img" and "is_flipped" key-value pair
p: the probability of flipping the image up-side-down, Default 0.5
Returns:
        example: The modified example dictionary
"""
# example['img'] = example['img']
    if rng.random() > p:  # flip the image and set the is_flipped column to 1
        example['img'] = example['img'].transpose(1)  # ImageOps.flip(example['img']) or example['img'].transpose(Image.FLIP_TOP_BOTTOM) are equivalent
example['is_flipped'] = 1
return example
my_test = preprocess_data(test_dataset)
my_test = my_test.map(generate_flipped_data)
```
## Expected results
The dataset should be transformed without problems.
## Actual results
```
/home/rafay/anaconda3/envs/pytorch_new/bin/python /home/rafay/Documents/you_only_live_once/upside_down_detector/create_dataset.py
Reusing dataset cifar100 (/home/rafay/.cache/huggingface/datasets/cifar100/cifar100/1.0.0/f365c8b725c23e8f0f8d725c3641234d9331cd2f62919d1381d1baa5b3ba3142)
Reusing dataset cifar100 (/home/rafay/.cache/huggingface/datasets/cifar100/cifar100/1.0.0/f365c8b725c23e8f0f8d725c3641234d9331cd2f62919d1381d1baa5b3ba3142)
20%|█▉ | 1999/10000 [00:00<00:01, 5560.44ex/s]
Traceback (most recent call last):
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2326, in _map_single
writer.write(example)
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 441, in write
self.write_examples_on_file()
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 399, in write_examples_on_file
self.write_batch(batch_examples=batch_examples)
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 492, in write_batch
arrays.append(pa.array(typed_sequence))
File "pyarrow/array.pxi", line 230, in pyarrow.lib.array
File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 185, in __arrow_array__
out = pa.array(cast_to_python_objects(data, only_1d_for_numpy=True))
File "pyarrow/array.pxi", line 316, in pyarrow.lib.array
File "pyarrow/array.pxi", line 39, in pyarrow.lib._sequence_to_array
File "pyarrow/error.pxi", line 143, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Could not convert <PIL.Image.Image image mode=RGB size=32x32 at 0x7F56AEE61DE0> with type Image: did not recognize Python value type when inferring an Arrow data type
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/rafay/Documents/you_only_live_once/upside_down_detector/create_dataset.py", line 55, in <module>
my_test = my_test.map(generate_flipped_data)
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 1953, in map
return self._map_single(
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 519, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 486, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/fingerprint.py", line 458, in wrapper
out = func(self, *args, **kwargs)
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2360, in _map_single
writer.finalize()
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 522, in finalize
self.write_examples_on_file()
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 399, in write_examples_on_file
self.write_batch(batch_examples=batch_examples)
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 492, in write_batch
arrays.append(pa.array(typed_sequence))
File "pyarrow/array.pxi", line 230, in pyarrow.lib.array
File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
File "/home/rafay/anaconda3/envs/pytorch_new/lib/python3.10/site-packages/datasets/arrow_writer.py", line 185, in __arrow_array__
out = pa.array(cast_to_python_objects(data, only_1d_for_numpy=True))
File "pyarrow/array.pxi", line 316, in pyarrow.lib.array
File "pyarrow/array.pxi", line 39, in pyarrow.lib._sequence_to_array
File "pyarrow/error.pxi", line 143, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Could not convert <PIL.Image.Image image mode=RGB size=32x32 at 0x7F56AEE61DE0> with type Image: did not recognize Python value type when inferring an Arrow data type
Process finished with exit code 1
```
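A workaround reported in the comments below is to touch the `img` field first so the `Image` feature decodes before the transform runs; a minimal sketch of the adjusted mapping function:
```python
def generate_flipped_data(example, p=0.5):
    example['img'] = example['img']  # force the Image feature to decode into a PIL image
    if rng.random() > p:
        example['img'] = example['img'].transpose(1)  # Image.FLIP_TOP_BOTTOM
        example['is_flipped'] = 1
    return example
```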
## Environment info
- `datasets` version: 2.0.0
- Platform: Linux(Fedora 35)
- Python version: 3.10
- PyArrow version: 7.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4124/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4124/timeline | null | completed | null | null | false | [
"A quick hack I have found is that we can call the image first before running the transforms and it makes sure the image is decoded before being passed on.\r\n\r\nFor this I just needed to add `example['img'] = example['img']` to the top of my `generate_flipped_data` function, defined above, so that image decode in... |
https://api.github.com/repos/huggingface/datasets/issues/1240 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1240/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1240/comments | https://api.github.com/repos/huggingface/datasets/issues/1240/events | https://github.com/huggingface/datasets/pull/1240 | 758,355,523 | MDExOlB1bGxSZXF1ZXN0NTMzNTQxNjk5 | 1,240 | Multi Domain Sentiment Analysis Dataset (MDSA) | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | 9 | 2020-12-07T09:57:15Z | 2022-10-03T09:39:43Z | 2022-10-03T09:39:43Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1240/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1240/timeline | null | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/1240.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1240",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1240.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1240"
} | true | [
"can you also run `make style` to format the code ?",
"I'll come back to this one in sometime :) @lhoestq ",
"Also if you would use `xml.etree.ElementTree` to parse the XML it would be awesome, because right now you're using an external dependency `xmltodict `",
"> Also if you would use xml.etree.ElementTree ... |
https://api.github.com/repos/huggingface/datasets/issues/206 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/206/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/206/comments | https://api.github.com/repos/huggingface/datasets/issues/206/events | https://github.com/huggingface/datasets/issues/206 | 625,842,989 | MDU6SXNzdWU2MjU4NDI5ODk= | 206 | [Question] Combine 2 datasets which have the same columns | [] | closed | false | null | 2 | 2020-05-27T16:25:52Z | 2020-06-10T09:11:14Z | 2020-06-10T09:11:14Z | null | Hi,
I am using ``nlp`` to load personal datasets. I created multilingual summarization datasets based on wikinews. I have one dataset for English and one for German (French is almost ready as well). I want to keep these datasets independent because they need different pre-processing (adding different task-specific prefixes for T5: *summarize:* for English and *zusammenfassen:* for German).
My issue is that I want to train T5 on the combined English and German datasets to see if it improves results. So I would like to combine the 2 datasets (which have the same columns) into one and train T5 on it. I was wondering if there is a proper way to do it? I assume it can be done by combining all examples of each dataset, but maybe you have a better solution.
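A minimal sketch of the kind of combination I'm after, assuming an API like today's `concatenate_datasets` (the dataset identifiers are hypothetical, and both datasets must share the same features):
```python
from datasets import concatenate_datasets, load_dataset

# Hypothetical dataset identifiers; replace with the real ones.
en = load_dataset("me/wikinews_summarization_en", split="train")
de = load_dataset("me/wikinews_summarization_de", split="train")

# Stack the rows of both datasets, then shuffle so the languages are mixed.
combined = concatenate_datasets([en, de]).shuffle(seed=42)
```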
Hoping this is clear enough,
Thanks a lot 😊
Best | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/206/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/206/timeline | null | completed | null | null | false | [
"We are thinking about ways to combine datasets for T5 in #217, feel free to share your thoughts about this.",
"Ok great! I will look at it. Thanks"
] |
https://api.github.com/repos/huggingface/datasets/issues/4700 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4700/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4700/comments | https://api.github.com/repos/huggingface/datasets/issues/4700/events | https://github.com/huggingface/datasets/pull/4700 | 1,307,599,161 | PR_kwDODunzps47jKNx | 4,700 | Support extract lz4 compressed data files | [] | closed | false | null | 1 | 2022-07-18T08:41:31Z | 2022-07-18T14:43:59Z | 2022-07-18T14:31:47Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4700/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4700/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4700.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4700",
"merged_at": "2022-07-18T14:31:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4700.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4700"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/964 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/964/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/964/comments | https://api.github.com/repos/huggingface/datasets/issues/964/events | https://github.com/huggingface/datasets/pull/964 | 754,474,660 | MDExOlB1bGxSZXF1ZXN0NTMwMzY4OTAy | 964 | Adding the WebNLG dataset | [] | closed | false | null | 1 | 2020-12-01T15:05:23Z | 2020-12-02T17:34:05Z | 2020-12-02T17:34:05Z | null | This PR adds data from the WebNLG challenge, with one configuration per release and challenge iteration.
More information can be found [here](https://webnlg-challenge.loria.fr/)
Unfortunately, the data itself comes from a pretty large number of small XML files, so the dummy data ends up being quite large (8.4 MB even keeping only one example per file). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/964/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/964/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/964.diff",
"html_url": "https://github.com/huggingface/datasets/pull/964",
"merged_at": "2020-12-02T17:34:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/964.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/964"
} | true | [
"This is task is part of the GEM suite so will actually need a more complete dataset card. I'm taking a break for now though and will get back to it before merging :) "
] |
https://api.github.com/repos/huggingface/datasets/issues/716 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/716/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/716/comments | https://api.github.com/repos/huggingface/datasets/issues/716/events | https://github.com/huggingface/datasets/pull/716 | 714,952,888 | MDExOlB1bGxSZXF1ZXN0NDk3OTQ1ODAw | 716 | Fixes #712 Attribute error in cell 3 of the overview notebook | [] | closed | false | null | 1 | 2020-10-05T15:42:09Z | 2020-10-05T15:46:38Z | 2020-10-05T15:46:32Z | null | Fixes the Attribute error in cell 3 of the overview notebook | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/716/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/716/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/716.diff",
"html_url": "https://github.com/huggingface/datasets/pull/716",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/716.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/716"
} | true | [
"Referencing the wrong issue # in the commit message. Closing this to fix it again."
] |
https://api.github.com/repos/huggingface/datasets/issues/1595 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1595/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1595/comments | https://api.github.com/repos/huggingface/datasets/issues/1595/events | https://github.com/huggingface/datasets/pull/1595 | 770,153,693 | MDExOlB1bGxSZXF1ZXN0NTQxOTUwNDk4 | 1,595 | Logiqa en | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | 8 | 2020-12-17T15:42:00Z | 2022-10-03T09:38:30Z | 2022-10-03T09:38:30Z | null | LogiQA in English. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1595/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1595/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1595.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1595",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1595.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1595"
} | true | [
"I'm getting an error when I try to create the dummy data:\r\n```python\r\naclifton@pop-os:~/data/hf_datasets_sprint/datasets$ python datasets-cli dummy_data ./datasets/logiqa_en/ --auto_generate \r\n2021-01-07 10:50:12.024791: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic l... |
https://api.github.com/repos/huggingface/datasets/issues/1493 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1493/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1493/comments | https://api.github.com/repos/huggingface/datasets/issues/1493/events | https://github.com/huggingface/datasets/pull/1493 | 762,979,415 | MDExOlB1bGxSZXF1ZXN0NTM3NDc0MDc1 | 1,493 | Added RONEC dataset. | [] | closed | false | null | 4 | 2020-12-11T22:14:50Z | 2020-12-21T14:48:56Z | 2020-12-21T14:48:56Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1493/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1493/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1493.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1493",
"merged_at": "2020-12-21T14:48:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1493.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1493"
} | true | [
"Thanks for the PR @iliemihai . \r\n\r\nFew comments - \r\n\r\nCan you run - \r\n`python datasets-cli dummy_data ./datasets/ronec --auto_generate` to generate dummy data.\r\n\r\nAlso, before committing files run : \r\n`make style`\r\n`flake8 datasets`\r\nthen you can add and commit files.",
"> Thanks for the PR @... | |
https://api.github.com/repos/huggingface/datasets/issues/5686 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5686/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5686/comments | https://api.github.com/repos/huggingface/datasets/issues/5686/events | https://github.com/huggingface/datasets/pull/5686 | 1,646,308,228 | PR_kwDODunzps5NMXdu | 5,686 | set dev version | [] | closed | false | null | 3 | 2023-03-29T18:24:13Z | 2023-03-29T18:33:49Z | 2023-03-29T18:24:22Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5686/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5686/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5686.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5686",
"merged_at": "2023-03-29T18:24:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5686.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5686"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5686). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma... |
https://api.github.com/repos/huggingface/datasets/issues/548 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/548/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/548/comments | https://api.github.com/repos/huggingface/datasets/issues/548/events | https://github.com/huggingface/datasets/pull/548 | 689,285,996 | MDExOlB1bGxSZXF1ZXN0NDc2MzYzMjU1 | 548 | [Breaking] Switch text loading to multi-threaded PyArrow loading | [] | closed | false | null | 5 | 2020-08-31T15:15:41Z | 2020-09-08T10:19:58Z | 2020-09-08T10:19:57Z | null | Test if we can get better performances for large-scale text datasets by using multi-threaded text file loading based on Apache Arrow multi-threaded CSV loader.
If it works ok, it would fix #546.
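A rough sketch of the idea, assuming we reuse PyArrow's multi-threaded CSV reader to load one text line per row (the option values are illustrative, not necessarily the exact ones used):
```python
import pyarrow as pa
from pyarrow import csv


def read_text_multithreaded(path: str) -> pa.Table:
    read_options = csv.ReadOptions(use_threads=True, column_names=["text"])
    # Pick a delimiter that cannot occur in normal text and disable quoting,
    # so each physical line ends up as a single cell in the "text" column.
    parse_options = csv.ParseOptions(delimiter="\x01", quote_char=False)
    return csv.read_csv(path, read_options=read_options, parse_options=parse_options)
```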
**Breaking change**:
The text lines now do not include final line-breaks anymore. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/548/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/548/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/548.diff",
"html_url": "https://github.com/huggingface/datasets/pull/548",
"merged_at": "2020-09-08T10:19:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/548.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/548"
} | true | [
"Awesome !\r\nAlso I was wondering if we should try to make the hashing of the `data_files` faster (it is used to build the cache directory of datasets like `text` or `json`). Right now it reads each file and hashes all of its data. We could simply hash the path and some metadata including the `time last modified` ... |
https://api.github.com/repos/huggingface/datasets/issues/4088 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4088/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4088/comments | https://api.github.com/repos/huggingface/datasets/issues/4088/events | https://github.com/huggingface/datasets/pull/4088 | 1,191,901,172 | PR_kwDODunzps41l4yE | 4,088 | Remove unused legacy Beam utils | [] | closed | false | null | 1 | 2022-04-04T14:43:51Z | 2022-04-05T15:23:27Z | 2022-04-05T15:17:41Z | null | This PR removes unused legacy custom `WriteToParquet`, once official Apache Beam includes the patch since version 2.22.0:
- Patch PR: https://github.com/apache/beam/pull/11699
- Issue: https://issues.apache.org/jira/browse/BEAM-10022
In relation with:
- #204 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4088/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4088/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4088.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4088",
"merged_at": "2022-04-05T15:17:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4088.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4088"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/4009 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4009/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4009/comments | https://api.github.com/repos/huggingface/datasets/issues/4009/events | https://github.com/huggingface/datasets/issues/4009 | 1,179,658,611 | I_kwDODunzps5GUClz | 4,009 | AMI load_dataset error: sndfile library not found | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2022-03-24T15:13:38Z | 2022-03-24T15:46:38Z | 2022-03-24T15:17:29Z | null | ## Describe the bug
Getting error message when loading AMI dataset.
## Steps to reproduce the bug
```
python3 -c "from datasets import load_dataset; print(load_dataset('ami', 'headset-single', split='validation')[0])"
```
## Expected results
The dataset loads and the first validation example is printed.
## Actual results
```
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/load.py", line 1707, in load_dataset
    use_auth_token=use_auth_token,
  File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py", line 595, in download_and_prepare
    dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
  File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py", line 690, in _download_and_prepare
    ) from None
OSError: Cannot find data file.
Original error:
sndfile library not found
```
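For reference, `sndfile library not found` usually means the system-wide `libsndfile` shared library is missing (the `soundfile` Python package only wraps it). A quick check, as a sketch:
```python
import ctypes.util

# Prints the resolved library name if libsndfile is installed system-wide, else None.
print(ctypes.util.find_library("sndfile"))
```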
## Environment info
- `datasets` version: 1.18.3
- Platform: Linux-4.19.0-18-cloud-amd64-x86_64-with-debian-10.11
- Python version: 3.7.3
- PyArrow version: 7.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4009/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4009/timeline | null | completed | null | null | false | [
"Issue unresolved, see [4000](https://github.com/huggingface/datasets/issues/4009#issue-1179658611)"
] |
https://api.github.com/repos/huggingface/datasets/issues/994 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/994/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/994/comments | https://api.github.com/repos/huggingface/datasets/issues/994/events | https://github.com/huggingface/datasets/pull/994 | 755,146,834 | MDExOlB1bGxSZXF1ZXN0NTMwOTE1MDc2 | 994 | Add Sepedi ner corpus | [] | closed | false | null | 2 | 2020-12-02T10:30:07Z | 2020-12-03T10:19:14Z | 2020-12-02T18:20:08Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/994/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/994/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/994.diff",
"html_url": "https://github.com/huggingface/datasets/pull/994",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/994.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/994"
} | true | [
"Looks like the PR includes commits about many other files.\r\nCould you create a clean branch from master, and create another PR ?",
"Sorry, will do that. "
] | |
https://api.github.com/repos/huggingface/datasets/issues/1158 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1158/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1158/comments | https://api.github.com/repos/huggingface/datasets/issues/1158/events | https://github.com/huggingface/datasets/pull/1158 | 757,658,926 | MDExOlB1bGxSZXF1ZXN0NTMzMDAxMjM0 | 1,158 | Add BBC Hindi NLI Dataset | [] | closed | false | null | 7 | 2020-12-05T11:25:34Z | 2021-02-05T09:48:31Z | 2021-02-05T09:48:31Z | null | # Dataset Card for BBC Hindi NLI Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- HomePage : https://github.com/midas-research/hindi-nli-data
- Paper : "https://www.aclweb.org/anthology/2020.aacl-main.71"
- Point of Contact : https://github.com/midas-research/hindi-nli-data
### Dataset Summary
- Dataset for Natural Language Inference in the Hindi language. The BBC Hindi dataset consists of textual-entailment pairs.
- Each row of the dataset is made up of 4 columns - Premise, Hypothesis, Label and Topic.
- The Premise and Hypothesis are written in Hindi while the Entailment_Label is in English.
- The Entailment_label is of 2 types - entailed and not-entailed.
- The dataset can be used to train models for Natural Language Inference tasks in the Hindi language.
### Supported Tasks and Leaderboards
- Natural Language Inference for Hindi
### Languages
Dataset is in Hindi
## Dataset Structure
- Data is structured in TSV format.
- Train and test splits are in separate files
### Data Instances
An example of 'train' looks as follows.
```
{'hypothesis': 'यह खबर की सूचना है|', 'label': 'entailed', 'premise': 'गोपनीयता की नीति', 'topic': '1'}
```
### Data Fields
- Each row contains 4 columns - Premise, Hypothesis, Label and Topic.
### Data Splits
- Train : 15553
- Valid : 2581
- Test : 2593
## Dataset Creation
- We employ a recasting technique from Poliak et al. (2018a,b) to convert publicly available BBC Hindi news text classification datasets in Hindi and pose them as TE problems
- In this recasting process, we build template hypotheses for each class in the label taxonomy
- Then, we pair the original annotated sentence with each of the template hypotheses to create TE samples.
- For more information on the recasting process, refer to the paper "https://www.aclweb.org/anthology/2020.aacl-main.71"; a minimal illustrative sketch follows.
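A hypothetical sketch of the recasting step (the template texts, class names and function names are illustrative, not the ones actually used):
```python
# Each classification example (sentence, gold_label) is paired with one template
# hypothesis per class; the pair is labeled "entailed" only for the gold class.
TEMPLATES = {
    "india": "यह खबर भारत के बारे में है|",
    "international": "यह खबर अंतरराष्ट्रीय है|",
}

def recast(sentence: str, gold_label: str):
    for label, hypothesis in TEMPLATES.items():
        yield {
            "premise": sentence,
            "hypothesis": hypothesis,
            "label": "entailed" if label == gold_label else "not-entailed",
        }
```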
### Source Data
Source Dataset for the recasting process is the BBC Hindi Headlines Dataset(https://github.com/NirantK/hindi2vec/releases/tag/bbc-hindi-v0.1)
#### Initial Data Collection and Normalization
- The BBC Hindi News Classification Dataset contains 4,335 Hindi news headlines tagged across 14 categories: India, Pakistan, news, International, entertainment, sport, science, China, learning english, social, southasia, business, institutional, multimedia
- We processed this dataset to combine two sets of relevant but low-prevalence classes.
- Namely, we merged the samples from Pakistan, China, international, and southasia as one class called international.
- Likewise, we also merged samples from news, business, social, learning english, and institutional as news.
- Lastly, we also removed the class multimedia because there were very few samples.
#### Who are the source language producers?
Please refer to this paper: "https://www.aclweb.org/anthology/2020.aacl-main.71"
### Annotations
#### Annotation process
The annotation process is described in the Dataset Creation section.
#### Who are the annotators?
Annotation is done automatically.
### Personal and Sensitive Information
No personal or sensitive information is mentioned in the dataset.
## Considerations for Using the Data
Please refer to this paper: https://www.aclweb.org/anthology/2020.aacl-main.71
### Discussion of Biases
Please refer to this paper: https://www.aclweb.org/anthology/2020.aacl-main.71
### Other Known Limitations
No other known limitations
## Additional Information
Please refer to this link: https://github.com/midas-research/hindi-nli-data
### Dataset Curators
It is written in the repo https://github.com/avinsit123/hindi-nli-data that
- This corpus can be used freely for research purposes.
- The paper listed below provide details of the creation and use of the corpus. If you use the corpus, then please cite the paper.
- If interested in commercial use of the corpus, send email to midas@iiitd.ac.in.
- If you use the corpus in a product or application, then please credit the authors and Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi appropriately. Also, if you send us an email, we will be thrilled to know about how you have used the corpus.
- Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi, India disclaims any responsibility for the use of the corpus and does not provide technical support. However, the contact listed above will be happy to respond to queries and clarifications.
- Rather than redistributing the corpus, please direct interested parties to this page
- Please feel free to send us an email:
- with feedback regarding the corpus.
- with information on how you have used the corpus.
- if interested in having us analyze your data for natural language inference.
- if interested in a collaborative research project.
### Licensing Information
Copyright (C) 2019 Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi (MIDAS, IIIT-Delhi).
Pls contact authors for any information on the dataset.
### Citation Information
```
@inproceedings{uppal-etal-2020-two,
title = "Two-Step Classification using Recasted Data for Low Resource Settings",
author = "Uppal, Shagun and
Gupta, Vivek and
Swaminathan, Avinash and
Zhang, Haimin and
Mahata, Debanjan and
Gosangi, Rakesh and
Shah, Rajiv Ratn and
Stent, Amanda",
booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing",
month = dec,
year = "2020",
address = "Suzhou, China",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.aacl-main.71",
pages = "706--719",
abstract = "An NLP model{'}s ability to reason should be independent of language. Previous works utilize Natural Language Inference (NLI) to understand the reasoning ability of models, mostly focusing on high resource languages like English. To address scarcity of data in low-resource languages such as Hindi, we use data recasting to create NLI datasets for four existing text classification datasets. Through experiments, we show that our recasted dataset is devoid of statistical irregularities and spurious patterns. We further study the consistency in predictions of the textual entailment models and propose a consistency regulariser to remove pairwise-inconsistencies in predictions. We propose a novel two-step classification method which uses textual-entailment predictions for classification task. We further improve the performance by using a joint-objective for classification and textual entailment. We therefore highlight the benefits of data recasting and improvements on classification performance using our approach with supporting experimental results.",
}
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1158/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1158/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1158.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1158",
"merged_at": "2021-02-05T09:48:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1158.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1158"
} | true | [
"Hi @avinsit123 !\r\nDid you manage to rename the dataset and apply the suggestion I mentioned for the data fields ?\r\nFeel free to ping me when you're ready for a review :) ",
"Hi @avinsit123 ! Have you had a chance to take a look at my suggestions ?\r\nLet me know if you have questions or if I can help",
"@l... |
https://api.github.com/repos/huggingface/datasets/issues/1683 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1683/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1683/comments | https://api.github.com/repos/huggingface/datasets/issues/1683/events | https://github.com/huggingface/datasets/issues/1683 | 778,287,612 | MDU6SXNzdWU3NzgyODc2MTI= | 1,683 | `ArrowInvalid` occurs while running `Dataset.map()` function for DPRContext | [] | closed | false | null | 2 | 2021-01-04T18:47:53Z | 2021-01-04T19:04:45Z | 2021-01-04T19:04:45Z | null | It seems to fail the final batch ):
steps to reproduce:
```
from datasets import load_dataset
from elasticsearch import Elasticsearch
import torch
from transformers import file_utils, set_seed
from transformers import DPRContextEncoder, DPRContextEncoderTokenizerFast
MAX_SEQ_LENGTH = 256
ctx_encoder = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base", cache_dir="../datasets/")
ctx_tokenizer = DPRContextEncoderTokenizerFast.from_pretrained(
"facebook/dpr-ctx_encoder-single-nq-base",
cache_dir="..datasets/"
)
dataset = load_dataset('text',
data_files='data/raw/ARC_Corpus.txt',
cache_dir='../datasets')
torch.set_grad_enabled(False)
ds_with_embeddings = dataset.map(
lambda example: {
'embeddings': ctx_encoder(
**ctx_tokenizer(
example["text"],
padding='max_length',
truncation=True,
max_length=MAX_SEQ_LENGTH,
return_tensors="pt"
)
)[0][0].numpy(),
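        # Hedged note: with batched=True, ctx_encoder(...)[0] has shape
        # (batch_size, 768); the extra [0] above keeps only the first row, so the
        # whole batch collapses to a single 768-dim vector. That is what triggers
        # the "expected length 768 but got length 1000" error below; dropping one
        # [0] (i.e. `[0].numpy()`) yields one embedding per example.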
},
batched=True,
load_from_cache_file=False,
batch_size=1000
)
```
ARC Corpus can be obtained from [here](https://ai2-datasets.s3-us-west-2.amazonaws.com/arc/ARC-V1-Feb2018.zip)
And then the error:
```
---------------------------------------------------------------------------
ArrowInvalid Traceback (most recent call last)
<ipython-input-13-67d139bb2ed3> in <module>
14 batched=True,
15 load_from_cache_file=False,
---> 16 batch_size=1000
17 )
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/dataset_dict.py in map(self, function, with_indices, input_columns, batched, batch_size, remove_columns, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc)
301 num_proc=num_proc,
302 )
--> 303 for k, dataset in self.items()
304 }
305 )
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/dataset_dict.py in <dictcomp>(.0)
301 num_proc=num_proc,
302 )
--> 303 for k, dataset in self.items()
304 }
305 )
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
1257 fn_kwargs=fn_kwargs,
1258 new_fingerprint=new_fingerprint,
-> 1259 update_data=update_data,
1260 )
1261 else:
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
155 }
156 # apply actual function
--> 157 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
158 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
159 # re-apply format to the output
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
161 # Call actual function
162
--> 163 out = func(self, *args, **kwargs)
164
165 # Update fingerprint of in-place transforms + update in-place history of transforms
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, update_data)
1526 if update_data:
1527 batch = cast_to_python_objects(batch)
-> 1528 writer.write_batch(batch)
1529 if update_data:
1530 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size)
276 typed_sequence = TypedSequence(batch_examples[col], type=col_type, try_type=col_try_type)
277 typed_sequence_examples[col] = typed_sequence
--> 278 pa_table = pa.Table.from_pydict(typed_sequence_examples)
279 self.write_table(pa_table)
280
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_pydict()
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_arrays()
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.validate()
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowInvalid: Column 1 named text expected length 768 but got length 1000
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1683/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1683/timeline | null | completed | null | null | false | [
"Looks like the mapping function returns a dictionary with a 768-dim array in the `embeddings` field. Since the map is batched, we actually expect the `embeddings` field to be an array of shape (batch_size, 768) to have one embedding per example in the batch.\r\n\r\nTo fix that can you try to remove one of the `[0]... |
https://api.github.com/repos/huggingface/datasets/issues/970 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/970/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/970/comments | https://api.github.com/repos/huggingface/datasets/issues/970/events | https://github.com/huggingface/datasets/pull/970 | 754,697,489 | MDExOlB1bGxSZXF1ZXN0NTMwNTUxNTkz | 970 | Add SWAG | [] | closed | false | null | 0 | 2020-12-01T20:21:05Z | 2020-12-02T09:55:16Z | 2020-12-02T09:55:15Z | null | Commonsense NLI -> https://rowanzellers.com/swag/ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/970/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/970/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/970.diff",
"html_url": "https://github.com/huggingface/datasets/pull/970",
"merged_at": "2020-12-02T09:55:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/970.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/970"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4769 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4769/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4769/comments | https://api.github.com/repos/huggingface/datasets/issues/4769/events | https://github.com/huggingface/datasets/issues/4769 | 1,322,121,554 | I_kwDODunzps5OzflS | 4,769 | Fail to process SQuADv1.1 datasets with max_seq_length=128, doc_stride=96. | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 0 | 2022-07-29T11:18:24Z | 2022-07-29T11:18:24Z | null | null | ## Describe the bug
`datasets` fails to process SQuADv1.1 with max_seq_length=128 and doc_stride=96 when calling datasets["train"].train_dataset.map().
## Steps to reproduce the bug
I used the Hugging Face [TF2 question-answering examples](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/question-answering), and my scripts are as follows:
```
python run_qa.py \
--model_name_or_path $BERT_DIR \
--dataset_name $SQUAD_DIR \
--do_train \
--do_eval \
--per_device_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 128 \
--doc_stride 96 \
--output_dir $OUTPUT \
--save_steps 10000 \
--overwrite_cache \
--overwrite_output_dir \
```
## Expected results
SQuADv1.1 is processed normally with max_seq_length=128 and doc_stride=96.
## Actual results
```
INFO:__main__:Padding all batches to max length because argument was set or we're on TPU.
WARNING:datasets.fingerprint:Parameter 'function'=<function main.<locals>.prepare_train_features at 0x7f15bc2d07a0> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.
0%| | 0/88 [00:00<?, ?ba/s]thread '<unnamed>' panicked at 'assertion failed: stride < max_len', /__w/tokenizers/tokenizers/tokenizers/src/tokenizer/encoding.rs:311:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
0%| | 0/88 [00:00<?, ?ba/s]
Traceback (most recent call last):
File "run_qa.py", line 743, in <module>
main()
File "run_qa.py", line 485, in main
load_from_cache_file=not data_args.overwrite_cache,
File "/anaconda3/envs/py37/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2394, in map
desc=desc,
File "/anaconda3/envs/py37/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 551, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/anaconda3/envs/py37/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 518, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/anaconda3/envs/py37/lib/python3.7/site-packages/datasets/fingerprint.py", line 458, in wrapper
out = func(self, *args, **kwargs)
File "anaconda3/envs/py37/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2768, in _map_single
offset=offset,
File "anaconda3/envs/py37/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2644, in apply_function_on_filtered_inputs
processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
File "anaconda3/envs/py37/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2336, in decorated
result = f(decorated_item, *args, **kwargs)
File "run_qa.py", line 410, in prepare_train_features
padding=padding,
File "anaconda3/envs/py37/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2512, in __call__
**kwargs,
File "anaconda3/envs/py37/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2703, in batch_encode_plus
**kwargs,
File "anaconda3/envs/py37/lib/python3.7/site-packages/transformers/tokenization_utils_fast.py", line 429, in _batch_encode_plus
is_pretokenized=is_split_into_words,
pyo3_runtime.PanicException: assertion failed: stride < max_len
Traceback (most recent call last):
File "./data/SQuADv1.1/evaluate-v1.1.py", line 92, in <module>
with open(args.prediction_file) as prediction_file:
FileNotFoundError: [Errno 2] No such file or directory: './output/bert_base_squadv1.1_tf2/eval_predictions.json'
```
## Environment info
- `datasets` version: 2.3.2
- Platform: Ubuntu, pytorch=1.11.0, tensorflow-gpu=2.9.1
- Python version: 2.7
- PyArrow version: 8.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4769/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4769/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/74 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/74/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/74/comments | https://api.github.com/repos/huggingface/datasets/issues/74/events | https://github.com/huggingface/datasets/pull/74 | 616,511,101 | MDExOlB1bGxSZXF1ZXN0NDE2NjA3MDcy | 74 | fix overflow check | [] | closed | false | null | 0 | 2020-05-12T09:38:01Z | 2020-05-12T10:04:39Z | 2020-05-12T10:04:38Z | null | I did some tests and unfortunately the test
```
pa_array.nbytes > MAX_BATCH_BYTES
```
doesn't work. Indeed, for a StructArray, `nbytes` can be less than 2GB even if there is an overflow (the value wraps around...).
I don't think we can do a proper overflow test for the limit of 2GB...
For now I replaced it with a sanity check on the first element. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/74/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/74/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/74.diff",
"html_url": "https://github.com/huggingface/datasets/pull/74",
"merged_at": "2020-05-12T10:04:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/74.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/74"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/325 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/325/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/325/comments | https://api.github.com/repos/huggingface/datasets/issues/325/events | https://github.com/huggingface/datasets/pull/325 | 647,601,592 | MDExOlB1bGxSZXF1ZXN0NDQxNTk3NTgw | 325 | Add SQuADShifts dataset | [] | closed | false | null | 1 | 2020-06-29T19:11:16Z | 2020-06-30T17:07:31Z | 2020-06-30T17:07:31Z | null | This PR adds the four new variants of the SQuAD dataset used in [The Effect of Natural Distribution Shift on Question Answering Models](https://arxiv.org/abs/2004.14444) to facilitate evaluating model robustness to distribution shift. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/325/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/325/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/325.diff",
"html_url": "https://github.com/huggingface/datasets/pull/325",
"merged_at": "2020-06-30T17:07:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/325.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/325"
} | true | [
"Very cool to have this dataset, thank you for adding it :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/176 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/176/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/176/comments | https://api.github.com/repos/huggingface/datasets/issues/176/events | https://github.com/huggingface/datasets/pull/176 | 621,934,638 | MDExOlB1bGxSZXF1ZXN0NDIwODkzNDky | 176 | [Tests] Refactor MockDownloadManager | [] | closed | false | null | 0 | 2020-05-20T17:07:36Z | 2020-05-20T18:17:19Z | 2020-05-20T18:17:18Z | null | Clean mock download manager class.
The print function was not of much help I think.
We should think about adding a command that creates the dummy folder structure for the user. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/176/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/176/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/176.diff",
"html_url": "https://github.com/huggingface/datasets/pull/176",
"merged_at": "2020-05-20T18:17:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/176.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/176"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5924 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5924/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5924/comments | https://api.github.com/repos/huggingface/datasets/issues/5924/events | https://github.com/huggingface/datasets/pull/5924 | 1,738,889,236 | PR_kwDODunzps5SCiFv | 5,924 | Add parallel module using joblib for Spark | [] | closed | false | null | 7 | 2023-06-02T22:25:25Z | 2023-06-14T10:25:10Z | 2023-06-14T10:15:46Z | null | Discussion in https://github.com/huggingface/datasets/issues/5798 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5924/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5924/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5924.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5924",
"merged_at": "2023-06-14T10:15:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5924.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5924"
} | true | [
"Hi @lhoestq, I added the `parallel` part according to the discussion we had. Could you take a look to see if this is aligned with your proposal?\r\n\r\nMeanwhile I'm working on adding a `parallel_backend` parameter to `load_datasets` so that it can be used like:\r\n```python\r\nwith parallel_backend('spark', steps... |
https://api.github.com/repos/huggingface/datasets/issues/5221 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5221/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5221/comments | https://api.github.com/repos/huggingface/datasets/issues/5221/events | https://github.com/huggingface/datasets/issues/5221 | 1,442,309,094 | I_kwDODunzps5V9-Pm | 5,221 | Cannot push | [] | closed | false | null | 2 | 2022-11-09T15:32:05Z | 2022-11-10T18:11:21Z | 2022-11-10T18:11:11Z | null | ### Describe the bug
I am facing an issue when I try to push a tar.gz file of around 11G to the Hub.
```
(venv) ╭─laptop@laptop ~/PersonalProjects/data/ulaanbal_v0 ‹main●›
╰─$ du -sh *
4.0K README.md
13G data
516K test.jsonl
18M train.jsonl
4.0K ulaanbal_v0.py
11G ulaanbal_v0.tar.gz
452K validation.jsonl
(venv) ╭─laptop@laptop~/PersonalProjects/data/ulaanbal_v0 ‹main●›
╰─$ git add ulaanbal_v0.tar.gz && git commit -m 'large version'
(venv) ╭─laptop@laptop ~/PersonalProjects/data/ulaanbal_v0 ‹main●›
╰─$ git push
EOFoading LFS objects: 0% (0/1), 0 B | 0 B/s
Uploading LFS objects: 0% (0/1), 0 B | 0 B/s, done.
error: failed to push some refs to 'https://huggingface.co/datasets/bayartsogt/ulaanbal_v0'
```
I have already tried pushing a small version of this and it worked fine, so my guess is that the problem is the big file.
I ran the following before the commit:
```
╰─$ git lfs install
╰─$ huggingface-cli lfs-enable-largefiles .
```
### Steps to reproduce the bug
Create a private dataset on the Hugging Face Hub and push a 12G tar.gz file.
### Expected behavior
To be pushed with no issue
### Environment info
- `datasets` version: 2.6.1
- Platform: Darwin-21.6.0-x86_64-i386-64bit
- Python version: 3.7.11
- PyArrow version: 10.0.0
- Pandas version: 1.3.5
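A hedged sketch of the sharding workaround suggested in the comments on this issue: splitting the `data` directory into several smaller archives instead of one 11G tarball (the shard size and paths are illustrative):

```python
import os
import tarfile

files = sorted(os.listdir("data"))
shard_size = 1000  # files per archive; tune to taste
for shard, start in enumerate(range(0, len(files), shard_size)):
    # Write one compressed shard per group of files.
    with tarfile.open(f"ulaanbal_v0-{shard:05d}.tar.gz", "w:gz") as tar:
        for name in files[start:start + shard_size]:
            tar.add(os.path.join("data", name))
```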
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5221/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5221/timeline | null | completed | null | null | false | [
"Did you run `huggingface-cli lfs-enable-largefiles` before committing or before adding ? Maybe you can try before adding\r\n\r\nAnyway I'd encourage you to split your data into several TAR archives if possible, this way the dataset can loaded faster using multiprocessing (by giving each process a subset of shards ... |
https://api.github.com/repos/huggingface/datasets/issues/2310 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2310/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2310/comments | https://api.github.com/repos/huggingface/datasets/issues/2310/events | https://github.com/huggingface/datasets/pull/2310 | 875,096,051 | MDExOlB1bGxSZXF1ZXN0NjI5NTEwNTg5 | 2,310 | Update README.md | [] | closed | false | null | 1 | 2021-05-04T04:38:01Z | 2022-07-06T15:19:58Z | 2022-07-06T15:19:58Z | null | Provides description of data instances and dataset features | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2310/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2310/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2310.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2310",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2310.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2310"
} | true | [
"Hi @cryoff, thanks for completing the dataset card.\r\n\r\nNow there is an automatic validation tool to assure that all dataset cards contain all the relevant information. This is the cause of the non-passing test on your Pull Request:\r\n```\r\nFound fields that are not non-empty list of strings: {'annotations_cr... |
https://api.github.com/repos/huggingface/datasets/issues/1980 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1980/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1980/comments | https://api.github.com/repos/huggingface/datasets/issues/1980/events | https://github.com/huggingface/datasets/pull/1980 | 821,312,810 | MDExOlB1bGxSZXF1ZXN0NTg0MTI1OTUy | 1,980 | Loading all answers from drop | [] | closed | false | null | 2 | 2021-03-03T17:13:07Z | 2021-03-15T11:27:26Z | 2021-03-15T11:27:26Z | null | Hello all,
I propose this change to the DROP loading script so that all answers are loaded, no matter their type. Currently, only "span" answers are loaded, which excludes a significant number of answers from DROP (i.e. "number" and "date").
I updated the script with the version I use for my work. However, I couldn't find a way to verify that everything works when integrated with the datasets repo, since `load_dataset` seems to always download the script from GitHub rather than using local files.
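(For the record, a hedged aside: `load_dataset` can also be pointed at a local script path instead of a hub name, which makes local testing possible; the path below is illustrative.)

```python
from datasets import load_dataset

# Load the modified DROP script from a local checkout rather than GitHub.
dataset = load_dataset("./datasets/drop/drop.py", split="validation")
```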
Note that 9 items from the train set have no answers, as well as 1 from the validation set. The script I propose simply does not load them.
Let me know if there is anything else I can do,
Clément | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1980/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1980/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1980.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1980",
"merged_at": "2021-03-15T11:27:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1980.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1980"
} | true | [
"Nice thanks for the change !\r\nThis looks all good to me\r\n\r\nBefore we merge can you just update the dataset_infos.json file of drop ? You can do it by running\r\n```\r\ndatasets-cli test ./datasets/drop --all_configs --save_infos --ignore_verifications\r\n```",
"Done!"
] |
https://api.github.com/repos/huggingface/datasets/issues/5540 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5540/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5540/comments | https://api.github.com/repos/huggingface/datasets/issues/5540/events | https://github.com/huggingface/datasets/pull/5540 | 1,588,438,344 | PR_kwDODunzps5KK5qz | 5,540 | Tutorial for creating a dataset | [] | closed | false | null | 2 | 2023-02-16T22:09:35Z | 2023-02-17T18:50:46Z | 2023-02-17T18:41:28Z | null | A tutorial for creating datasets based on the folder-based builders and `from_dict` and `from_generator` methods. I've also mentioned loading scripts as a next step, but I think we should keep the tutorial focused on the low-code methods. Let me know what you think! 🙂 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5540/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5540/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5540.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5540",
"merged_at": "2023-02-17T18:41:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5540.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5540"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/5814 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5814/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5814/comments | https://api.github.com/repos/huggingface/datasets/issues/5814/events | https://github.com/huggingface/datasets/pull/5814 | 1,693,216,778 | PR_kwDODunzps5PoOQ9 | 5,814 | Repro windows crash | [] | open | false | null | 1 | 2023-05-02T23:30:18Z | 2023-05-02T23:47:07Z | null | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5814/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5814/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5814.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5814",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5814.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5814"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5814). All of your documentation changes will be reflected on that endpoint."
] |
https://api.github.com/repos/huggingface/datasets/issues/5642 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5642/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5642/comments | https://api.github.com/repos/huggingface/datasets/issues/5642/events | https://github.com/huggingface/datasets/pull/5642 | 1,626,043,177 | PR_kwDODunzps5MIjw9 | 5,642 | Bump hfh to 0.11.0 | [] | closed | false | null | 6 | 2023-03-15T18:26:07Z | 2023-03-20T12:34:09Z | 2023-03-20T12:26:58Z | null | to fix errors like
```
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://hub-ci.huggingface.co/api/datasets/__DUMMY_TRANSFORMERS_USER__/...
```
(e.g. from this [failing CI](https://github.com/huggingface/datasets/actions/runs/4428956210/jobs/7769160997))
0.11.0 is the current minimum version in `transformers`
around 5% of users are currently using versions `<0.11.0` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5642/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5642/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5642.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5642",
"merged_at": "2023-03-20T12:26:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5642.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5642"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/1588 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1588/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1588/comments | https://api.github.com/repos/huggingface/datasets/issues/1588/events | https://github.com/huggingface/datasets/pull/1588 | 769,068,227 | MDExOlB1bGxSZXF1ZXN0NTQxMjg3OTcz | 1,588 | Modified hind encorp | [] | closed | false | null | 1 | 2020-12-16T16:28:14Z | 2020-12-16T22:41:53Z | 2020-12-16T17:20:28Z | null | description added, unnecessary comments removed from the .py file, and README.md reformatted
@lhoestq for #1584 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1588/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1588/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1588.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1588",
"merged_at": "2020-12-16T17:20:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1588.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1588"
} | true | [
"welcome, awesome "
] |
https://api.github.com/repos/huggingface/datasets/issues/2285 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2285/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2285/comments | https://api.github.com/repos/huggingface/datasets/issues/2285/events | https://github.com/huggingface/datasets/issues/2285 | 871,005,236 | MDU6SXNzdWU4NzEwMDUyMzY= | 2,285 | Help understanding how to build a dataset for language modeling as with the old TextDataset | [] | closed | false | null | 2 | 2021-04-29T13:16:45Z | 2021-05-19T07:22:45Z | 2021-05-19T07:22:39Z | null | Hello,
I am trying to load a custom dataset that I will then use for language modeling. The dataset consists of a text file with a whole document on each line, meaning that each line exceeds the usual 512-token limit of most tokenizers.
I would like to understand how to build a text dataset that first splits the documents into chunks of a "tokenizable" size and then tokenizes each chunk, as the old TextDataset class did. With TextDataset you only had to do the following, and a tokenized dataset without text loss was ready to pass to a DataCollator:
```
model_checkpoint = 'distilbert-base-uncased'
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
from transformers import TextDataset
dataset = TextDataset(
tokenizer=tokenizer,
file_path="path/to/text_file.txt",
block_size=512,
)
```
For now, what I have is the following, which, of course, throws an error because each line is longer than the maximum block size in the tokenizer:
```
import datasets
dataset = datasets.load_dataset('path/to/text_file.txt')
model_checkpoint = 'distilbert-base-uncased'
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
def tokenize_function(examples):
return tokenizer(examples["text"])
tokenized_datasets = dataset.map(tokenize_function, batched=True, num_proc=4, remove_columns=["text"])
tokenized_datasets
```
So what would be the "standard" way of creating a dataset in the way it was done before?
Thank you very much for the help :)) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2285/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2285/timeline | null | completed | null | null | false | [
"\r\nI received an answer for this question on the HuggingFace Datasets forum by @lhoestq\r\n\r\nHi !\r\n\r\nIf you want to tokenize line by line, you can use this:\r\n\r\n```\r\nmax_seq_length = 512\r\nnum_proc = 4\r\n\r\ndef tokenize_function(examples):\r\n# Remove empty lines\r\nexamples[\"text\"] = [line for li... |
https://api.github.com/repos/huggingface/datasets/issues/6025 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6025/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6025/comments | https://api.github.com/repos/huggingface/datasets/issues/6025/events | https://github.com/huggingface/datasets/issues/6025 | 1,801,852,601 | I_kwDODunzps5rZha5 | 6,025 | Using a dataset for a use other than it was intended for. | [] | closed | false | null | 1 | 2023-07-12T22:33:17Z | 2023-07-13T13:57:36Z | 2023-07-13T13:57:36Z | null | ### Describe the bug
Hi, I want to use the rotten tomatoes dataset for a task other than classification, but when I interleave it, it throws ```'ValueError: Column label is not present in features.'```. It seems that the label column must be present in the dataset for some reason?
Here is the full stacktrace
```
File "/home/suryahari/Vornoi/tryage-handoff-other-datasets.py", line 276, in create_dataloaders
dataset = interleave_datasets(dsfold, stopping_strategy="all_exhausted")
File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py", line 134, in interleave_datasets
return _interleave_iterable_datasets(
File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 1833, in _interleave_iterable_datasets
info = DatasetInfo.from_merge([d.info for d in datasets])
File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/info.py", line 275, in from_merge
dataset_infos = [dset_info.copy() for dset_info in dataset_infos if dset_info is not None]
File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/info.py", line 275, in <listcomp>
dataset_infos = [dset_info.copy() for dset_info in dataset_infos if dset_info is not None]
File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/info.py", line 378, in copy
return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})
File "<string>", line 20, in __init__
File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/info.py", line 208, in __post_init__
self.task_templates = [
File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/info.py", line 209, in <listcomp>
template.align_with_features(self.features) for template in (self.task_templates)
File "/home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/tasks/text_classification.py", line 20, in align_with_features
raise ValueError(f"Column {self.label_column} is not present in features.")
ValueError: Column label is not present in features.
```
### Steps to reproduce the bug
Delete the `label` column from the `rotten_tomatoes` dataset. Try to interleave it with other datasets.
### Expected behavior
Should let me use the dataset with just the `text` field
### Environment info
latest datasets library? I don't think this was an issue in earlier versions. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6025/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6025/timeline | null | completed | null | null | false | [
"I've opened a PR with a fix. In the meantime, you can avoid the error by deleting `task_templates` with `dataset.info.task_templates = None` before the `interleave_datasets` call.\r\n` "
] |
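A runnable hedged sketch of the workaround from the comment above (the column is named `label` in the rotten_tomatoes schema):

```python
from datasets import interleave_datasets, load_dataset

ds_a = load_dataset("rotten_tomatoes", split="train").remove_columns("label")
ds_b = load_dataset("rotten_tomatoes", split="test").remove_columns("label")
# Drop the stale text-classification task templates before merging.
ds_a.info.task_templates = None
ds_b.info.task_templates = None
mixed = interleave_datasets([ds_a, ds_b])
```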
https://api.github.com/repos/huggingface/datasets/issues/3763 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3763/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3763/comments | https://api.github.com/repos/huggingface/datasets/issues/3763/events | https://github.com/huggingface/datasets/issues/3763 | 1,145,099,878 | I_kwDODunzps5EQNZm | 3,763 | It's not possible download `20200501.pt` dataset | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2022-02-20T18:34:58Z | 2022-02-21T12:06:12Z | 2022-02-21T09:25:06Z | null | ## Describe the bug
The dataset `20200501.pt` is broken.
The available datasets: https://dumps.wikimedia.org/ptwiki/
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("wikipedia", "20200501.pt", beam_runner='DirectRunner')
```
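A hedged fix based on the comment below: pick a dump date that is still listed at https://dumps.wikimedia.org/ptwiki/ (the date here is illustrative and will itself rotate out eventually):

```python
from datasets import load_dataset

# Old dumps such as 20200501 are removed upstream; use a current date.
dataset = load_dataset(
    "wikipedia", language="pt", date="20220220", beam_runner="DirectRunner"
)
```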
## Expected results
I expect to download the dataset locally.
## Actual results
```
>>> from datasets import load_dataset
>>> dataset = load_dataset("wikipedia", "20200501.pt", beam_runner='DirectRunner')
Downloading and preparing dataset wikipedia/20200501.pt to /home/jvanz/.cache/huggingface/datasets/wikipedia/20200501.pt/1.0.0/009f923d9b6dd00c00c8cdc7f408f2b47f45dd4f5fb7982a21f9448f4afbe475...
/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/apache_beam/__init__.py:79: UserWarning: This version of Apache Beam has not been sufficiently tested on Python 3.9. You may encounter bugs or missing features.
warnings.warn(
0%| | 0/1 [00:00<?, ?it/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/load.py", line 1702, in load_dataset
builder_instance.download_and_prepare(
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/builder.py", line 594, in download_and_prepare
self._download_and_prepare(
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/builder.py", line 1245, in _download_and_prepare
super()._download_and_prepare(
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/builder.py", line 661, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/jvanz/.cache/huggingface/modules/datasets_modules/datasets/wikipedia/009f923d9b6dd00c00c8cdc7f408f2b47f45dd4f5fb7982a21f9448f4afbe475/wikipedia.py", line 420, in _split_generators
downloaded_files = dl_manager.download_and_extract({"info": info_url})
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 307, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 195, in download
downloaded_path_or_paths = map_nested(
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 260, in map_nested
mapped = [
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 261, in <listcomp>
_single_map_nested((function, obj, types, None, True))
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 196, in _single_map_nested
return function(data_struct)
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 216, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 298, in cached_path
output_path = get_from_cache(
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 612, in get_from_cache
raise FileNotFoundError(f"Couldn't find file at {url}")
FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/ptwiki/20200501/dumpstatus.json
```
## Environment info
```
- `datasets` version: 1.18.3
- Platform: Linux-5.3.18-150300.59.49-default-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 6.0.1
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3763/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3763/timeline | null | completed | null | null | false | [
"Hi @jvanz, thanks for reporting.\r\n\r\nPlease note that Wikimedia website does not longer host Wikipedia dumps for so old dates.\r\n\r\nFor a list of accessible dump dates of `pt` Wikipedia, please see: https://dumps.wikimedia.org/ptwiki/\r\n\r\nYou can load for example `20220220` `pt` Wikipedia:\r\n```python\r\n... |
https://api.github.com/repos/huggingface/datasets/issues/4683 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4683/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4683/comments | https://api.github.com/repos/huggingface/datasets/issues/4683/events | https://github.com/huggingface/datasets/pull/4683 | 1,305,443,253 | PR_kwDODunzps47cLkm | 4,683 | Update create dataset card docs | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | 1 | 2022-07-15T00:41:29Z | 2022-07-18T17:26:00Z | 2022-07-18T13:24:10Z | null | This PR proposes removing the [online dataset card creator](https://huggingface.co/datasets/card-creator/) in favor of simply copy/pasting a template and using the [Datasets Tagger app](https://huggingface.co/spaces/huggingface/datasets-tagging) to generate the tags. The Tagger app provides more guidance by showing all possible values a user can select in the dropdown menus, whereas the online dataset card creator doesn't, which can make it difficult to know what tag values to input.
Let me know what you think! :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4683/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4683/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4683.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4683",
"merged_at": "2022-07-18T13:24:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4683.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4683"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/5919 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5919/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5919/comments | https://api.github.com/repos/huggingface/datasets/issues/5919/events | https://github.com/huggingface/datasets/pull/5919 | 1,735,519,227 | PR_kwDODunzps5R2_EK | 5,919 | add support for storage_options for load_dataset API | [] | closed | false | null | 12 | 2023-06-01T05:52:32Z | 2023-07-18T06:14:32Z | 2023-07-17T17:02:00Z | null | to solve the issue in #5880
1. add s3 support in the link-check step; previously only `http` and `https` were checked,
2. change the `use_auth_token` parameter to `download_config` to support both `storage_options` and `use_auth_token` when handling (listing, opening, reading, etc.) remote files,
3. consolidate the duplicated code in the check step to make adding or removing other sources easier. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5919/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5919/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5919.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5919",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5919.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5919"
} | true | [
"hi @lhoestq,\r\nI saw some errors in my test and found all the failed reasons are `FileNotFoundError` about `test_load_streaming_private_dataset_with_zipped_data` and `test_load_dataset_private_zipped_images` in `test_load.py `, I run pytest on my own Wins and Ubuntu system all the test in `test_load.py ` are suc... |
https://api.github.com/repos/huggingface/datasets/issues/3470 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3470/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3470/comments | https://api.github.com/repos/huggingface/datasets/issues/3470/events | https://github.com/huggingface/datasets/pull/3470 | 1,086,049,888 | PR_kwDODunzps4wJO8t | 3,470 | Fix rendering of docs | [] | closed | false | null | 0 | 2021-12-21T17:17:01Z | 2021-12-22T09:23:47Z | 2021-12-22T09:23:47Z | null | Minor fix in docs.
Currently, the `ClassLabel` docstring is not rendered correctly. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3470/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3470/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3470.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3470",
"merged_at": "2021-12-22T09:23:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3470.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3470"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/860 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/860/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/860/comments | https://api.github.com/repos/huggingface/datasets/issues/860/events | https://github.com/huggingface/datasets/issues/860 | 744,750,691 | MDU6SXNzdWU3NDQ3NTA2OTE= | 860 | wmt16 cs-en does not donwload | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | 1 | 2020-11-17T13:45:35Z | 2022-10-05T12:27:00Z | 2022-10-05T12:26:59Z | null | Hi
I am trying with wmt16, cs-en pair, thanks for the help, perhaps similar to the ro-en issue. thanks
split="train", n_obs=data_args.n_train) for task in data_args.task}
File "finetune_t5_trainer.py", line 109, in <dictcomp>
split="train", n_obs=data_args.n_train) for task in data_args.task}
File "/home/rabeeh/internship/seq2seq/tasks/tasks.py", line 82, in get_dataset
dataset = load_dataset("wmt16", self.pair, split=split)
File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
ignore_verifications=ignore_verifications,
File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/rabeeh/.cache/huggingface/modules/datasets_modules/datasets/wmt16/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308/wmt_utils.py", line 755, in _split_generators
downloaded_files = dl_manager.download_and_extract(urls_to_download)
File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 179, in download
num_proc=download_config.num_proc,
File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 225, in map_nested
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp>
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 181, in _single_map_nested
mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 181, in <listcomp>
mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested
return function(data_struct)
File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path
use_etag=download_config.use_etag,
File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach http://www.statmt.org/wmt13/training-parallel-commoncrawl.tgz | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/860/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/860/timeline | null | completed | null | null | false | [
"We know host this file, so downloading should be more robust."
] |
https://api.github.com/repos/huggingface/datasets/issues/4771 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4771/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4771/comments | https://api.github.com/repos/huggingface/datasets/issues/4771/events | https://github.com/huggingface/datasets/pull/4771 | 1,322,600,725 | PR_kwDODunzps48VjWx | 4,771 | Remove dummy data generation docs | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | 1 | 2022-07-29T19:20:46Z | 2022-08-03T00:04:01Z | 2022-08-02T23:50:29Z | null | This PR removes instructions to generate dummy data since that is no longer necessary for datasets that are uploaded to the Hub instead of our GitHub repo.
Close #4744 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4771/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4771/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4771.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4771",
"merged_at": "2022-08-02T23:50:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4771.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4771"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2162 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2162/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2162/comments | https://api.github.com/repos/huggingface/datasets/issues/2162/events | https://github.com/huggingface/datasets/issues/2162 | 849,129,201 | MDU6SXNzdWU4NDkxMjkyMDE= | 2,162 | visualization for cc100 is broken | [
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
}
] | closed | false | null | 3 | 2021-04-02T10:11:13Z | 2022-10-05T13:20:24Z | 2022-10-05T13:20:24Z | null | Hi
visualization through dataset viewer for cc100 is broken
https://huggingface.co/datasets/viewer/
thanks a lot
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2162/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2162/timeline | null | completed | null | null | false | [
"This looks like an issue with the cc100 dataset itself but not sure\r\nDid you try loading cc100 on your machine ?",
"Hi\nloading works fine, but the viewer only is broken\nthanks\n\nOn Wed, Apr 7, 2021 at 12:17 PM Quentin Lhoest ***@***.***>\nwrote:\n\n> This looks like an issue with the cc100 dataset itself bu... |
https://api.github.com/repos/huggingface/datasets/issues/517 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/517/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/517/comments | https://api.github.com/repos/huggingface/datasets/issues/517/events | https://github.com/huggingface/datasets/issues/517 | 681,896,944 | MDU6SXNzdWU2ODE4OTY5NDQ= | 517 | add MLDoc dataset | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | open | false | null | 2 | 2020-08-19T14:41:59Z | 2021-08-03T05:59:33Z | null | null | Hi,
I am recommending that someone add MLDoc, a multilingual news topic classification dataset.
- Here's a link to the Github: https://github.com/facebookresearch/MLDoc
- and the paper: http://www.lrec-conf.org/proceedings/lrec2018/pdf/658.pdf
Looks like the dataset contains news stories in multiple languages that can be classified into four hierarchical groups: CCAT (Corporate/Industrial), ECAT (Economics), GCAT (Government/Social) and MCAT (Markets). There are 13 languages: Dutch, French, German, Chinese, Japanese, Russian, Portuguese, Spanish, Latin American Spanish, Italian, Danish, Norwegian, and Swedish | {
"+1": 4,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/517/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/517/timeline | null | null | null | null | false | [
"Any updates on this?",
"This request is still an open issue waiting to be addressed by any community member, @GuillemGSubies."
] |