url (string, lengths 58-61) | repository_url (string, 1 class) | labels_url (string, lengths 72-75) | comments_url (string, lengths 67-70) | events_url (string, lengths 65-68) | html_url (string, lengths 46-51) | id (int64, 599M-1.83B) | node_id (string, lengths 18-32) | number (int64, 1-6.09k) | title (string, lengths 1-290) | labels (list) | state (string, 2 classes) | locked (bool, 1 class) | milestone (dict) | comments (int64, 0-54) | created_at (string, length 20) | updated_at (string, length 20) | closed_at (string, length 20, nullable) | active_lock_reason (null) | body (string, lengths 0-228k, nullable) | reactions (dict) | timeline_url (string, lengths 67-70) | performed_via_github_app (null) | state_reason (string, 3 classes) | draft (bool, 2 classes) | pull_request (dict) | is_pull_request (bool, 2 classes) | comments_text (list)
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/2473 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2473/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2473/comments | https://api.github.com/repos/huggingface/datasets/issues/2473/events | https://github.com/huggingface/datasets/pull/2473 | 917,538,629 | MDExOlB1bGxSZXF1ZXN0NjY3MDU5MjI5 | 2,473 | Add Disfl-QA | [] | closed | false | null | 2 | 2021-06-10T16:18:00Z | 2021-07-29T11:56:19Z | 2021-07-29T11:56:18Z | null | Dataset: https://github.com/google-research-datasets/disfl-qa
To-Do: Update README.md and add YAML tags | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2473/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2473/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2473.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2473",
"merged_at": "2021-07-29T11:56:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2473.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2473"
} | true | [
"Sounds great! It'll make things easier for the user while accessing the dataset. I'll make some changes to the current file then.",
"I've updated with the suggested changes. Updated the README, YAML tags as well (not sure of Size category tag as I couldn't pass the path of `dataset_infos.json` for this dataset)\... |
https://api.github.com/repos/huggingface/datasets/issues/35 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/35/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/35/comments | https://api.github.com/repos/huggingface/datasets/issues/35/events | https://github.com/huggingface/datasets/pull/35 | 611,413,731 | MDExOlB1bGxSZXF1ZXN0NDEyNjAyMTc0 | 35 | [Tests] fix typo | [] | closed | false | null | 0 | 2020-05-03T13:23:49Z | 2020-05-03T13:24:21Z | 2020-05-03T13:24:20Z | null | @lhoestq - currently the slow test fails with:
```
_____________________________________________________________________________________ DatasetTest.test_load_real_dataset_xnli _____________________________________________________________________________________
self = <tests.test_dataset_common.DatasetTest testMethod=test_load_real_dataset_xnli>, dataset_name = 'xnli'
@slow
def test_load_real_dataset(self, dataset_name):
with tempfile.TemporaryDirectory() as temp_data_dir:
> dataset = load(dataset_name, data_dir=temp_data_dir)
tests/test_dataset_common.py:153:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../python_bin/nlp/load.py:497: in load
dbuilder.download_and_prepare(**download_and_prepare_kwargs)
../../python_bin/nlp/builder.py:383: in download_and_prepare
self._download_and_prepare(dl_manager=dl_manager, download_config=download_config)
../../python_bin/nlp/builder.py:627: in _download_and_prepare
dl_manager=dl_manager, max_examples_per_split=download_config.max_examples_per_split,
../../python_bin/nlp/builder.py:431: in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
../../python_bin/nlp/datasets/xnli/8bf4185a2da1ef2a523186dd660d9adcf0946189e7fa5942ea31c63c07b68a7f/xnli.py:95: in _split_generators
dl_dir = dl_manager.download_and_extract(_DATA_URL)
../../python_bin/nlp/utils/download_manager.py:246: in download_and_extract
return self.extract(self.download(url_or_urls))
../../python_bin/nlp/utils/download_manager.py:186: in download
self._record_sizes_checksums(url_or_urls, downloaded_path_or_paths)
../../python_bin/nlp/utils/download_manager.py:166: in _record_sizes_checksums
self._recorded_sizes_checksums[url] = get_size_checksum(path)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
path = ('', '/tmp/tmpkajlg9yc/downloads/c0f7773c480a3f2d85639d777e0e17e65527460310d80760fd3fc2b2f2960556.c952a63cb17d3d46e412ceb7dbcd656ce2b15cc9ef17f50c28f81c48a7c853b5')
def get_size_checksum(path: str) -> Tuple[int, str]:
"""Compute the file size and the sha256 checksum of a file"""
m = sha256()
> with open(path, "rb") as f:
E TypeError: expected str, bytes or os.PathLike object, not tuple
../../python_bin/nlp/utils/checksums_utils.py:81: TypeError
```
- the checksums probably need to be updated, no? And we should also think about how to write a test for the checksums. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/35/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/35/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/35.diff",
"html_url": "https://github.com/huggingface/datasets/pull/35",
"merged_at": "2020-05-03T13:24:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/35.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/35"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4979 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4979/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4979/comments | https://api.github.com/repos/huggingface/datasets/issues/4979/events | https://github.com/huggingface/datasets/pull/4979 | 1,374,820,758 | PR_kwDODunzps4_CouM | 4,979 | Fix missing tags in dataset cards | [] | closed | false | null | 1 | 2022-09-15T16:51:03Z | 2022-09-22T12:37:55Z | 2022-09-15T17:12:09Z | null | Fix missing tags in dataset cards:
- amazon_us_reviews
- art
- discofuse
- indic_glue
- ubuntu_dialogs_corpus
This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task.
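For context, these tags live in the YAML metadata block at the top of each dataset card (`README.md`). A rough sketch of such a header follows; the field values are illustrative and not taken from this PR's actual diff:
```yaml
---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
task_categories:
- text-classification
---
```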
Related to:
- #4833
- #4891
- #4896
- #4908
- #4921
- #4931 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4979/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4979/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4979.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4979",
"merged_at": "2022-09-15T17:12:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4979.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4979"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/5797 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5797/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5797/comments | https://api.github.com/repos/huggingface/datasets/issues/5797/events | https://github.com/huggingface/datasets/issues/5797 | 1,685,501,199 | I_kwDODunzps5kdrUP | 5,797 | load_dataset is case sensitive? | [] | open | false | null | 2 | 2023-04-26T18:19:04Z | 2023-04-27T11:56:58Z | null | null | ### Describe the bug
Is the `load_dataset()` function case sensitive?
### Steps to reproduce the bug
The following two calls get totally different behavior.
1. load_dataset('mbzuai/bactrian-x','en')
2. load_dataset('MBZUAI/Bactrian-X','en')
### Expected behavior
Compare 1 and 2.
1 will download all 52 subsets, shell output:
```Downloading and preparing dataset json/MBZUAI--bactrian-X to xxx```
2 will only download a single subset, shell output:
```Downloading and preparing dataset bactrian-x/en to xxx```
### Environment info
Python 3.10.11
datasets Version: 2.11.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5797/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5797/timeline | null | null | null | null | false | [
"Hi @haonan-li , thank you for the report! It seems to be a bug on the [`huggingface_hub`](https://github.com/huggingface/huggingface_hub) site, there is even no such dataset as `mbzuai/bactrian-x` on the Hub. I opened and [issue](https://github.com/huggingface/huggingface_hub/issues/1453) there.",
"I think `loa... |
https://api.github.com/repos/huggingface/datasets/issues/1775 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1775/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1775/comments | https://api.github.com/repos/huggingface/datasets/issues/1775/events | https://github.com/huggingface/datasets/issues/1775 | 792,742,120 | MDU6SXNzdWU3OTI3NDIxMjA= | 1,775 | Efficient ways to iterate the dataset | [] | closed | false | null | 2 | 2021-01-24T07:54:31Z | 2021-01-24T09:50:39Z | 2021-01-24T09:50:39Z | null | For a large dataset that does not fit in memory, how can I select only a subset of features from each example?
If I iterate over the dataset and then select the subset of features one by one, the resulting memory usage will be huge. Is there any way to solve this?
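A minimal sketch of one way to do this with `datasets` (my own suggestion, not part of the original question; the `imdb` dataset and the `text` column are stand-ins):
```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")

# Option 1: drop the unneeded columns up front
ds_text_only = ds.remove_columns([c for c in ds.column_names if c != "text"])

# Option 2: keep the dataset intact but only return the wanted columns
ds.set_format(type=None, columns=["text"])
for example in ds:
    text = example["text"]  # only the "text" field is returned per row
```
Since the underlying Arrow table is memory-mapped, iterating this way should not pull the full dataset into RAM.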
Thanks | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1775/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1775/timeline | null | completed | null | null | false | [
"It seems that selecting a subset of colums directly from the dataset, i.e., dataset[\"column\"], is slow.",
"I was wrong, ```dataset[\"column\"]``` is fast."
] |
https://api.github.com/repos/huggingface/datasets/issues/291 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/291/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/291/comments | https://api.github.com/repos/huggingface/datasets/issues/291/events | https://github.com/huggingface/datasets/pull/291 | 642,688,450 | MDExOlB1bGxSZXF1ZXN0NDM3NjM1NjMy | 291 | break statement not required | [] | closed | false | null | 3 | 2020-06-22T01:40:55Z | 2020-06-23T17:57:58Z | 2020-06-23T09:37:02Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/291/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/291/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/291.diff",
"html_url": "https://github.com/huggingface/datasets/pull/291",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/291.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/291"
} | true | [
"I guess,test failing due to connection error?",
"We just fixed the other dataset on master. Could you rebase from master and push to rerun the CI ?",
"If I'm not wrong this function returns None if no main class was found.\r\nI think it makes things less clear not to have a return at the end of the function.\r... | |
https://api.github.com/repos/huggingface/datasets/issues/3042 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3042/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3042/comments | https://api.github.com/repos/huggingface/datasets/issues/3042/events | https://github.com/huggingface/datasets/pull/3042 | 1,020,047,289 | PR_kwDODunzps4s5Lxo | 3,042 | Improving elasticsearch integration | [] | open | false | null | 1 | 2021-10-07T13:28:35Z | 2022-07-06T15:19:48Z | null | null | - adding murmurhash signature to sample in index
- adding optional credentials for remote elasticsearch server
- enabling sample update in index
- upgrading to the elasticsearch 7.10.1 Python client
- adding ElasticsearchBuilder to instantiate a dataset from an index and a filtering query | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3042/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3042/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3042.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3042",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3042.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3042"
} | true | [
"@lhoestq @albertvillanova Iwas trying to fix the failing tests in circleCI but is there a test elasticsearch instance somewhere? If not, can I launch a docker container to have one?"
] |
https://api.github.com/repos/huggingface/datasets/issues/5211 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5211/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5211/comments | https://api.github.com/repos/huggingface/datasets/issues/5211/events | https://github.com/huggingface/datasets/pull/5211 | 1,438,544,617 | PR_kwDODunzps5CVgBx | 5,211 | Update Overview.ipynb google colab | [] | closed | false | null | 3 | 2022-11-07T15:23:52Z | 2022-11-29T15:59:48Z | 2022-11-29T15:54:17Z | null | - removed metrics stuff
- added image example
- added audio example (with ffmpeg instructions)
- updated the "add a new dataset" section | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5211/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5211/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5211.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5211",
"merged_at": "2022-11-29T15:54:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5211.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5211"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"WDYT @albertvillanova ?",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5211). All of your documentation changes will be reflected on that endpoint."
] |
https://api.github.com/repos/huggingface/datasets/issues/3601 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3601/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3601/comments | https://api.github.com/repos/huggingface/datasets/issues/3601/events | https://github.com/huggingface/datasets/pull/3601 | 1,108,207,131 | PR_kwDODunzps4xROtF | 3,601 | Add conll2003 licensing | [] | closed | false | null | 0 | 2022-01-19T15:00:41Z | 2022-01-19T17:17:28Z | 2022-01-19T17:17:28Z | null | Following https://github.com/huggingface/datasets/issues/3582, this PR updates the licensing section of the CoNLL2003 dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3601/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3601/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3601.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3601",
"merged_at": "2022-01-19T17:17:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3601.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3601"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5740 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5740/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5740/comments | https://api.github.com/repos/huggingface/datasets/issues/5740/events | https://github.com/huggingface/datasets/pull/5740 | 1,664,132,130 | PR_kwDODunzps5OHI08 | 5,740 | Fix CI mock filesystem fixtures | [] | closed | false | null | 5 | 2023-04-12T08:52:35Z | 2023-04-13T11:01:24Z | 2023-04-13T10:54:13Z | null | This PR fixes the fixtures of our CI mock filesystems.
Before, we had to pass `clobber=True` to `fsspec.register_implementation` to overwrite the previously added, still-present "mock" filesystem. That meant the mock filesystem fixture was not working properly, because the previously added "mock" filesystem should have been deleted by the fixture.
This PR fixes the mock filesystem fixtures, so that the "mock" filesystem is properly deleted from the inner `fsspec` registry.
Tests were added to check the correct behavior of the mock filesystem fixtures.
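As a hedged sketch, a check along these lines is possible because `fsspec.register_implementation` raises when a different class is registered under an existing name without `clobber=True` (the names below are mine, not the PR's actual conftest):
```python
import fsspec
import pytest
from fsspec.implementations.memory import MemoryFileSystem


class MockFileSystem(MemoryFileSystem):
    protocol = "mock"


class OtherMockFileSystem(MemoryFileSystem):
    protocol = "mock"


def test_mock_fs_registration_is_clean():
    # Registering without clobber=True succeeds only if no stale
    # "mock" entry was left behind by another fixture or test.
    fsspec.register_implementation("mock", MockFileSystem)
    # Registering a *different* class under the same name without
    # clobber=True must fail, so nothing is silently overwritten.
    with pytest.raises(ValueError):
        fsspec.register_implementation("mock", OtherMockFileSystem, clobber=False)
```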
Related to:
- #5733 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5740/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5740/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5740.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5740",
"merged_at": "2023-04-13T10:54:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5740.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5740"
} | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
https://api.github.com/repos/huggingface/datasets/issues/1618 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1618/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1618/comments | https://api.github.com/repos/huggingface/datasets/issues/1618/events | https://github.com/huggingface/datasets/issues/1618 | 772,248,730 | MDU6SXNzdWU3NzIyNDg3MzA= | 1,618 | Can't filter language:EN on https://huggingface.co/datasets | [] | closed | false | null | 3 | 2020-12-21T15:23:23Z | 2020-12-22T17:17:00Z | 2020-12-22T17:16:09Z | null | When visiting https://huggingface.co/datasets, I don't see an obvious way to filter only English datasets. This is unexpected for me, am I missing something? I'd expect English to be selectable in the language widget. This problem reproduced on Mozilla Firefox and MS Edge:

| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1618/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1618/timeline | null | completed | null | null | false | [
"cc'ing @mapmeld ",
"Full language list is now deployed to https://huggingface.co/datasets ! Recommend close",
"Cool @mapmeld ! My 2 cents (for a next iteration), it would be cool to have a small search widget in the filter dropdown as you have a ton of languages now here! Closing this in the meantime."
] |
https://api.github.com/repos/huggingface/datasets/issues/3516 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3516/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3516/comments | https://api.github.com/repos/huggingface/datasets/issues/3516/events | https://github.com/huggingface/datasets/pull/3516 | 1,092,657,738 | PR_kwDODunzps4weYhE | 3,516 | dataset `asset` - change to raw.githubusercontent.com URLs | [] | closed | false | null | 0 | 2022-01-03T16:43:57Z | 2022-01-03T17:39:02Z | 2022-01-03T17:39:01Z | null | Changed the URLs to the ones the requests were automatically redirected to.
Before, the download was failing | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3516/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3516/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3516.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3516",
"merged_at": "2022-01-03T17:39:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3516.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3516"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5366 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5366/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5366/comments | https://api.github.com/repos/huggingface/datasets/issues/5366/events | https://github.com/huggingface/datasets/pull/5366 | 1,498,530,851 | PR_kwDODunzps5FjSFl | 5,366 | ExamplesIterable fixes | [] | closed | false | null | 1 | 2022-12-15T14:23:05Z | 2022-12-15T14:44:47Z | 2022-12-15T14:41:45Z | null | fix typing and ExamplesIterable.shard_data_sources | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5366/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5366/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5366.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5366",
"merged_at": "2022-12-15T14:41:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5366.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5366"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/3244 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3244/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3244/comments | https://api.github.com/repos/huggingface/datasets/issues/3244/events | https://github.com/huggingface/datasets/pull/3244 | 1,048,675,741 | PR_kwDODunzps4uSgG5 | 3,244 | Fix filter method for batched=True | [] | closed | false | null | 0 | 2021-11-09T14:30:59Z | 2021-11-09T15:52:58Z | 2021-11-09T15:52:57Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3244/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3244/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3244.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3244",
"merged_at": "2021-11-09T15:52:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3244.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3244"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2765 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2765/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2765/comments | https://api.github.com/repos/huggingface/datasets/issues/2765/events | https://github.com/huggingface/datasets/issues/2765 | 962,861,395 | MDU6SXNzdWU5NjI4NjEzOTU= | 2,765 | BERTScore Error | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-08-06T15:58:57Z | 2021-08-09T11:16:25Z | 2021-08-09T11:16:25Z | null | ## Describe the bug
A clear and concise description of what the bug is.
## Steps to reproduce the bug
```python
predictions = ["hello there", "general kenobi"]
references = ["hello there", "general kenobi"]
bert = load_metric('bertscore')
bert.compute(predictions=predictions, references=references,lang='en')
```
# Bug
`TypeError: get_hash() missing 1 required positional argument: 'use_fast_tokenizer'`
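A workaround suggested in the comments below is to pin `bert-score` to a version that predates the new `use_fast_tokenizer` argument until the metric script catches up:
```
pip uninstall bert-score
pip install "bert-score<0.3.10"
```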
## Environment info
- `datasets` version:
- Platform: Colab
- Python version:
- PyArrow version:
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2765/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2765/timeline | null | completed | null | null | false | [
"Hi,\r\n\r\nThe `use_fast_tokenizer` argument has been recently added to the bert-score lib. I've opened a PR with the fix. In the meantime, you can try to downgrade the version of bert-score with the following command to make the code work:\r\n```\r\npip uninstall bert-score\r\npip install \"bert-score<0.3.10\"\r\... |
https://api.github.com/repos/huggingface/datasets/issues/2955 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2955/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2955/comments | https://api.github.com/repos/huggingface/datasets/issues/2955/events | https://github.com/huggingface/datasets/pull/2955 | 1,003,999,469 | PR_kwDODunzps4sHuRu | 2,955 | Update legacy Python image for CI tests in Linux | [] | closed | false | null | 1 | 2021-09-22T08:25:27Z | 2021-09-24T10:36:05Z | 2021-09-24T10:36:05Z | null | Instead of legacy, use next-generation convenience images, built from the ground up with CI, efficiency, and determinism in mind. Here are some of the highlights:
- Faster spin-up time - In Docker terminology, these next-gen images will generally have fewer and smaller layers. Using these new images will lead to faster image downloads when a build starts, and a higher likelihood that the image is already cached on the host.
- Improved reliability and stability - The existing legacy convenience images are rebuilt practically every day with potential changes from upstream that we cannot always test fast enough. This leads to frequent breaking changes, which is not the best environment for stable, deterministic builds. Next-gen images will only be rebuilt for security and critical-bugs, leading to more stable and deterministic images.
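Concretely, the switch amounts to something like the following in `.circleci/config.yml`; this is a sketch only, and the exact image tags used by this PR are an assumption on my part:
```yaml
jobs:
  run_dataset_script_tests:
    docker:
      # legacy image (deprecated):
      # - image: circleci/python:3.6
      # next-gen convenience image:
      - image: cimg/python:3.6
    steps:
      - checkout
      - run: pip install .[tests]
```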
More info: https://circleci.com/docs/2.0/circleci-images | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2955/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2955/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2955.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2955",
"merged_at": "2021-09-24T10:36:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2955.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2955"
} | true | [
"There is an exception when running `pip install .[tests]`:\r\n```\r\nProcessing /home/circleci/datasets\r\nCollecting numpy>=1.17 (from datasets==1.12.2.dev0)\r\n Downloading https://files.pythonhosted.org/packages/45/b2/6c7545bb7a38754d63048c7696804a0d947328125d81bf12beaa692c3ae3/numpy-1.19.5-cp36-cp36m-manylinu... |
https://api.github.com/repos/huggingface/datasets/issues/157 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/157/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/157/comments | https://api.github.com/repos/huggingface/datasets/issues/157/events | https://github.com/huggingface/datasets/issues/157 | 620,356,542 | MDU6SXNzdWU2MjAzNTY1NDI= | 157 | nlp.load_dataset() gives "TypeError: list_() takes exactly one argument (2 given)" | [] | closed | false | null | 11 | 2020-05-18T16:46:38Z | 2020-06-05T08:08:58Z | 2020-06-05T08:08:58Z | null | I'm trying to load datasets from nlp but there seems to be an error saying
"TypeError: list_() takes exactly one argument (2 given)"
A gist can be found here:
https://gist.github.com/saahiluppal/c4b878f330b10b9ab9762bc0776c0a6a | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/157/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/157/timeline | null | completed | null | null | false | [
"You can just run: \r\n`val = nlp.load_dataset('squad')` \r\n\r\nif you want to have just the validation script you can also do:\r\n\r\n`val = nlp.load_dataset('squad', split=\"validation\")`",
"If you want to load a local dataset, make sure you include a `./` before the folder name. ",
"This happens by just do... |
https://api.github.com/repos/huggingface/datasets/issues/4833 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4833/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4833/comments | https://api.github.com/repos/huggingface/datasets/issues/4833/events | https://github.com/huggingface/datasets/pull/4833 | 1,336,946,965 | PR_kwDODunzps49E_Nk | 4,833 | Fix missing tags in dataset cards | [] | closed | false | null | 1 | 2022-08-12T09:04:52Z | 2022-09-22T14:41:23Z | 2022-08-12T09:45:55Z | null | Fix missing tags in dataset cards:
- boolq
- break_data
- definite_pronoun_resolution
- emo
- kor_nli
- pg19
- quartz
- sciq
- squad_es
- wmt14
- wmt15
- wmt16
- wmt17
- wmt18
- wmt19
- wmt_t2t
This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4833/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4833/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4833.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4833",
"merged_at": "2022-08-12T09:45:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4833.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4833"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2643 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2643/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2643/comments | https://api.github.com/repos/huggingface/datasets/issues/2643/events | https://github.com/huggingface/datasets/issues/2643 | 944,220,273 | MDU6SXNzdWU5NDQyMjAyNzM= | 2,643 | Enum used in map functions will raise a RecursionError with dill. | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 4 | 2021-07-14T09:16:08Z | 2021-11-02T09:51:11Z | null | null | ## Describe the bug
Enums used in functions passed to `map` will fail at pickling with a maximum recursion exception as described here: https://github.com/uqfoundation/dill/issues/250#issuecomment-852566284
In my particular case, I use an enum to define an argument with fixed options, using the `TrainingArguments` dataclass as the base class and the `HfArgumentParser`. In the same file I use a `ds.map` that tries to pickle the content of the module, including the definition of the enum, which runs into the dill bug described above.
## Steps to reproduce the bug
```python
from datasets import load_dataset
from enum import Enum
class A(Enum):
    a = 'a'

def main():
    a = A.a

    def f(x):
        return {} if a == a.a else x

    ds = load_dataset('cnn_dailymail', '3.0.0')['test']
    ds = ds.map(f, num_proc=15)

if __name__ == "__main__":
    main()
```
## Expected results
The known problem with dill could be prevented as explained in the link above (see the workaround there). Since `HfArgumentParser` nicely uses the enum class for choices, it makes sense to also deal with this bug under the hood.
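For illustration, a minimal sketch of such a workaround applied to the snippet above (my own rewrite, not a fix shipped in `datasets`): capture the enum member's plain value instead of the member itself, so dill never needs to pickle the `Enum` class.
```python
from datasets import load_dataset
from enum import Enum

class A(Enum):
    a = 'a'

def main():
    a_value = A.a.value  # a plain str, which pickles without recursion

    def f(x):
        return {} if a_value == 'a' else x

    ds = load_dataset('cnn_dailymail', '3.0.0')['test']
    ds = ds.map(f, num_proc=15)

if __name__ == "__main__":
    main()
```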
## Actual results
```python
File "/home/xxxx/miniconda3/lib/python3.8/site-packages/dill/_dill.py", line 1373, in save_type
pickler.save_reduce(_create_type, (type(obj), obj.__name__,
File "/home/xxxx/miniconda3/lib/python3.8/pickle.py", line 690, in save_reduce
save(args)
File "/home/xxxx/miniconda3/lib/python3.8/pickle.py", line 558, in save
f(self, obj) # Call unbound method with explicit self
File "/home/xxxx/miniconda3/lib/python3.8/pickle.py", line 899, in save_tuple
save(element)
File "/home/xxxx/miniconda3/lib/python3.8/pickle.py", line 534, in save
self.framer.commit_frame()
File "/home/xxxx/miniconda3/lib/python3.8/pickle.py", line 220, in commit_frame
if f.tell() >= self._FRAME_SIZE_TARGET or force:
RecursionError: maximum recursion depth exceeded while calling a Python object
```
## Environment info
- `datasets` version: 1.8.0
- Platform: Linux-5.9.0-4-amd64-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyArrow version: 3.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2643/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2643/timeline | null | null | null | null | false | [
"I'm running into this as well. (Thank you so much for reporting @jorgeecardona — was staring at this massive stack trace and unsure what exactly was wrong!)",
"Hi ! Thanks for reporting :)\r\n\r\nUntil this is fixed on `dill`'s side, we could implement a custom saving in our Pickler indefined in utils.py_utils.p... |
https://api.github.com/repos/huggingface/datasets/issues/1563 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1563/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1563/comments | https://api.github.com/repos/huggingface/datasets/issues/1563/events | https://github.com/huggingface/datasets/pull/1563 | 766,211,931 | MDExOlB1bGxSZXF1ZXN0NTM5MzA4Mzg4 | 1,563 | adding tmu-gfm-dataset | [] | closed | false | null | 2 | 2020-12-14T09:45:30Z | 2020-12-21T10:21:04Z | 2020-12-21T10:07:13Z | null | Adding TMU-GFM-Dataset for Grammatical Error Correction.
https://github.com/tmu-nlp/TMU-GFM-Dataset
A dataset for GEC metrics with manual evaluations of grammaticality, fluency, and meaning preservation for system outputs.
More detail about the creation of the dataset can be found in [Yoshimura et al. (2020)](https://www.aclweb.org/anthology/2020.coling-main.573.pdf). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1563/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1563/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1563.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1563",
"merged_at": "2020-12-21T10:07:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1563.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1563"
} | true | [
"@lhoestq Thank you for your code review! I think I could do the necessary corrections. Could you please check it again when you have time?",
"Thank you for merging!"
] |
https://api.github.com/repos/huggingface/datasets/issues/4891 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4891/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4891/comments | https://api.github.com/repos/huggingface/datasets/issues/4891/events | https://github.com/huggingface/datasets/pull/4891 | 1,350,589,813 | PR_kwDODunzps49x382 | 4,891 | Fix missing tags in dataset cards | [] | closed | false | null | 0 | 2022-08-25T09:14:17Z | 2022-09-22T14:39:02Z | 2022-08-25T13:43:34Z | null | Fix missing tags in dataset cards:
- aslg_pc12
- librispeech_lm
- mwsc
- opus100
- qasc
- quail
- squadshifts
- winograd_wsc
This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task.
Related to:
- #4833
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4891/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4891/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4891.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4891",
"merged_at": "2022-08-25T13:43:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4891.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4891"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2380 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2380/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2380/comments | https://api.github.com/repos/huggingface/datasets/issues/2380/events | https://github.com/huggingface/datasets/pull/2380 | 895,367,201 | MDExOlB1bGxSZXF1ZXN0NjQ3NTk3NTc3 | 2,380 | maintain YAML structure reading from README | [] | closed | false | null | 0 | 2021-05-19T12:12:07Z | 2021-05-19T13:08:38Z | 2021-05-19T13:08:38Z | null | How the YAML used to be loaded into the string earlier (this affected the YAML structure, and YAML for datasets with multiple configs was not being loaded correctly):
```
annotations_creators:
labeled_final:
- expert-generated
labeled_swap:
- expert-generated
unlabeled_final:
- machine-generated
language_creators:
- machine-generated
languages:
- en
licenses:
- other
multilinguality:
- monolingual
size_categories:
labeled_final:
- 10K<n<100K
labeled_swap:
- 10K<n<100K
unlabeled_final:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
- text-scoring
task_ids:
- semantic-similarity-classification
- semantic-similarity-scoring
- text-scoring-other-paraphrase-identification
```
How the YAML is loaded into the string now:
```
annotations_creators:
  labeled_final:
  - expert-generated
  labeled_swap:
  - expert-generated
  unlabeled_final:
  - machine-generated
language_creators:
- machine-generated
languages:
- en
licenses:
- other
multilinguality:
- monolingual
size_categories:
  labeled_final:
  - 10K<n<100K
  labeled_swap:
  - 10K<n<100K
  unlabeled_final:
  - 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
- text-scoring
task_ids:
- semantic-similarity-classification
- semantic-similarity-scoring
- text-scoring-other-paraphrase-identification
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2380/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2380/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2380.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2380",
"merged_at": "2021-05-19T13:08:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2380.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2380"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3054 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3054/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3054/comments | https://api.github.com/repos/huggingface/datasets/issues/3054/events | https://github.com/huggingface/datasets/pull/3054 | 1,022,108,186 | PR_kwDODunzps4s_TmE | 3,054 | Update Biosses | [] | closed | false | null | 0 | 2021-10-10T22:25:12Z | 2021-10-13T09:04:27Z | 2021-10-13T09:04:27Z | null | Fix variable naming | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3054/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3054/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3054.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3054",
"merged_at": "2021-10-13T09:04:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3054.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3054"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/882 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/882/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/882/comments | https://api.github.com/repos/huggingface/datasets/issues/882/events | https://github.com/huggingface/datasets/pull/882 | 749,662,188 | MDExOlB1bGxSZXF1ZXN0NTI2NDQyMjA2 | 882 | Update README.md | [] | closed | false | null | 0 | 2020-11-24T12:23:52Z | 2021-01-29T10:41:07Z | 2021-01-29T10:41:07Z | null | "no label" is "-" in the original dataset but "-1" in Huggingface distribution. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/882/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/882/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/882.diff",
"html_url": "https://github.com/huggingface/datasets/pull/882",
"merged_at": "2021-01-29T10:41:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/882.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/882"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5948 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5948/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5948/comments | https://api.github.com/repos/huggingface/datasets/issues/5948/events | https://github.com/huggingface/datasets/pull/5948 | 1,754,794,611 | PR_kwDODunzps5S4dUt | 5,948 | Fix sequence of array support for most dtype | [] | closed | false | null | 2 | 2023-06-13T12:38:59Z | 2023-06-14T15:11:55Z | 2023-06-14T15:03:33Z | null | Fixes #5936
Also, a related fix to #5927 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5948/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5948/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5948.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5948",
"merged_at": "2023-06-14T15:03:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5948.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5948"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/2622 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2622/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2622/comments | https://api.github.com/repos/huggingface/datasets/issues/2622/events | https://github.com/huggingface/datasets/issues/2622 | 941,127,785 | MDU6SXNzdWU5NDExMjc3ODU= | 2,622 | Integration with AugLy | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 2 | 2021-07-10T00:03:09Z | 2023-07-20T13:18:48Z | 2023-07-20T13:18:47Z | null | **Is your feature request related to a problem? Please describe.**
Facebook recently launched a library, [AugLy](https://github.com/facebookresearch/AugLy), that has a unified API for augmentations for image, video and text.
It would be pretty exciting to have it hooked up to HF libraries so that we can make NLP models robust to misspellings, punctuation changes, emojis, etc. Plus, with Transformers supporting more CV use cases, having augmentation support becomes crucial.
**Describe the solution you'd like**
The biggest difference between augmentations and preprocessing is that preprocessing happens only once, but you are running augmentations once per epoch. AugLy operates on text directly, so this breaks the typical workflow where we would run the tokenizer once, set format to pt tensors and be ready for the Dataloader.
**Describe alternatives you've considered**
One possible way of implementing these is to make a custom Dataset class where `__getitem__(i)` runs the augmentation and the tokenizer every time, though this would slow training down considerably given we wouldn't even run the tokenizer in batches.
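A hedged sketch of how on-the-fly augmentation could avoid a custom `Dataset` class, using `datasets.Dataset.set_transform` so the augmentation re-runs on every access (the `augly` call shown is an assumption about its text API):
```python
import augly.text as textaugs  # assumed import path
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def augment_and_tokenize(batch):
    # Applied lazily on each access, so every epoch sees fresh augmentations
    augmented = textaugs.simulate_typos(batch["text"])  # assumed AugLy function
    return tokenizer(augmented, truncation=True, padding="max_length")

ds = load_dataset("imdb", split="train")
ds.set_transform(augment_and_tokenize)
```
Because `set_transform` receives whole batches, the tokenizer still runs per batch rather than per example.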
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2622/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2622/timeline | null | not_planned | null | null | false | [
"Hi,\r\n\r\nyou can define your own custom formatting with `Dataset.set_transform()` and then run the tokenizer with the batches of augmented data as follows:\r\n```python\r\ndset = load_dataset(\"imdb\", split=\"train\") # Let's say we are working with the IMDB dataset\r\ndset.set_transform(lambda ex: {\"text\": ... |
https://api.github.com/repos/huggingface/datasets/issues/4589 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4589/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4589/comments | https://api.github.com/repos/huggingface/datasets/issues/4589/events | https://github.com/huggingface/datasets/issues/4589 | 1,287,600,029 | I_kwDODunzps5Mvzed | 4,589 | Permission denied: '/home/.cache' when load_dataset with local script | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 0 | 2022-06-28T16:26:03Z | 2022-06-29T06:26:28Z | 2022-06-29T06:25:08Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4589/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4589/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/4363 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4363/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4363/comments | https://api.github.com/repos/huggingface/datasets/issues/4363/events | https://github.com/huggingface/datasets/issues/4363 | 1,238,897,652 | I_kwDODunzps5J2BP0 | 4,363 | The dataset preview is not available for this split. | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | null | 7 | 2022-05-17T16:34:43Z | 2022-06-08T12:32:10Z | 2022-06-08T09:26:56Z | null | I have uploaded the corpus developed by our lab in the speech domain to huggingface [datasets](https://huggingface.co/datasets/Roh/ryanspeech). You can read about the companion paper accepted at Interspeech 2021 [here](https://arxiv.org/abs/2106.08468). The dataset works fine but I can't make the dataset preview work. It gives me the following error that I don't understand. Can you help me begin debugging it?
```
Status code: 400
Exception: AttributeError
Message: 'NoneType' object has no attribute 'split'
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4363/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4363/timeline | null | completed | null | null | false | [
"Hi! A dataset has to be streamable to work with the viewer. I did a quick test, and yours is, so this might be a bug in the viewer. cc @severo \r\n",
"Looking at it. The message is now:\r\n\r\n```\r\nMessage: cannot cache function '__shear_dense': no locator available for file '/src/services/worker/.venv/... |
https://api.github.com/repos/huggingface/datasets/issues/1978 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1978/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1978/comments | https://api.github.com/repos/huggingface/datasets/issues/1978/events | https://github.com/huggingface/datasets/pull/1978 | 820,956,806 | MDExOlB1bGxSZXF1ZXN0NTgzODI5Njgz | 1,978 | Adding ro sts dataset | [] | closed | false | null | 3 | 2021-03-03T10:08:53Z | 2021-03-05T10:00:14Z | 2021-03-05T09:33:55Z | null | Adding [RO-STS](https://github.com/dumitrescustefan/RO-STS) dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1978/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1978/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1978.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1978",
"merged_at": "2021-03-05T09:33:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1978.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1978"
} | true | [
"@lhoestq thank you very much for the quick review and useful comments! \r\n\r\nI have tried to address them all, and a few comments that you left for ro_sts I have applied to the ro_sts_parallel as well (in read-me: fixed source_datasets, links to homepage, repository, leaderboard, thanks to me message, in ro_sts_... |
https://api.github.com/repos/huggingface/datasets/issues/4393 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4393/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4393/comments | https://api.github.com/repos/huggingface/datasets/issues/4393/events | https://github.com/huggingface/datasets/pull/4393 | 1,244,876,662 | PR_kwDODunzps44RxWN | 4,393 | Update CI deprecated legacy image | [] | closed | false | null | 1 | 2022-05-23T09:35:42Z | 2022-05-23T10:08:28Z | 2022-05-23T09:59:55Z | null | Now our CI still uses a deprecated legacy image:
> You’re using a [deprecated Docker convenience image.](https://discuss.circleci.com/t/legacy-convenience-image-deprecation/41034) Upgrade to a next-gen Docker convenience image.
This PR updates to next-generation convenience image.
Related to:
- #2955 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4393/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4393/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4393.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4393",
"merged_at": "2022-05-23T09:59:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4393.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4393"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/4480 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4480/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4480/comments | https://api.github.com/repos/huggingface/datasets/issues/4480/events | https://github.com/huggingface/datasets/issues/4480 | 1,268,921,567 | I_kwDODunzps5LojTf | 4,480 | Bigbench tensorflow GPU dependency | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 3 | 2022-06-13T05:24:06Z | 2022-06-14T19:45:24Z | 2022-06-14T19:45:23Z | null | ## Describe the bug
Loading bigbech
```py
from datasets import load_dataset
dataset = load_dataset("bigbench","swedish_to_german_proverbs")
```
tries to use the GPU and fails with OOM, with the following error:
```
Downloading and preparing dataset bigbench/swedish_to_german_proverbs (download: Unknown size, generated: 68.92 KiB, post-processed: Unknown size, total: 68.92 KiB) to /home/ceyda/.cache/huggingface/datasets/bigbench/swedish_to_german_proverbs/1.0.0/7d2f6e537fa937dfaac8b1c1df782f2055071d3fd8e4f4ae93d28012a354ced0...
Generating default split: 0%| | 0/72 [00:00<?, ? examples/s]2022-06-13 14:11:04.154469: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-06-13 14:11:05.133600: F tensorflow/core/platform/statusor.cc:33] Attempting to fetch value instead of handling error INTERNAL: failed initializing StreamExecutor for CUDA device ordinal 3: INTERNAL: failed call to cuDevicePrimaryCtxRetain: CUDA_ERROR_OUT_OF_MEMORY: out of memory; total memory reported: 25396838400
Aborted (core dumped)
```
I think this is because the bigbench dependency (below) installs tensorflow (the GPU version) and data loading tries to use the GPU by default.
`pip install bigbench@https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz`
while just doing 'pip install bigbench' results in the following error:
```
File "/home/ceyda/.local/lib/python3.7/site-packages/datasets/load.py", line 109, in import_main_class
module = importlib.import_module(module_path)
File "/usr/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/ceyda/.cache/huggingface/modules/datasets_modules/datasets/bigbench/7d2f6e537fa937dfaac8b1c1df782f2055071d3fd8e4f4ae93d28012a354ced0/bigbench.py", line 118, in <module>
class Bigbench(datasets.GeneratorBasedBuilder):
File "/home/ceyda/.cache/huggingface/modules/datasets_modules/datasets/bigbench/7d2f6e537fa937dfaac8b1c1df782f2055071d3fd8e4f4ae93d28012a354ced0/bigbench.py", line 127, in Bigbench
BigBenchConfig(name=name, version=datasets.Version("1.0.0")) for name in bb_utils.get_all_json_task_names()
AttributeError: module 'bigbench.api.util' has no attribute 'get_all_json_task_names'
```
## Steps to avoid the bug
Not ideal, but it can be worked around as follows (since I don't really use TensorFlow elsewhere):
`pip uninstall tensorflow`
`pip install tensorflow-cpu`
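Alternatively (a sketch, assuming nothing else in the process needs CUDA), the GPU build can stay installed while the devices are hidden from TensorFlow before it gets imported:
```python
# hide all CUDA devices before the bigbench dependency imports TensorFlow
import os
os.environ["CUDA_VISIBLE_DEVICES"] = ""  # TensorFlow then initializes on CPU only

from datasets import load_dataset
dataset = load_dataset("bigbench", "swedish_to_german_proverbs")
```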
## Environment info
- datasets @ master
- Python version: 3.7
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4480/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4480/timeline | null | completed | null | null | false | [
"Thanks for reporting ! :) cc @andersjohanandreassen can you take a look at this ?\r\n\r\nAlso @cceyda feel free to open an issue at [BIG-Bench](https://github.com/google/BIG-bench) as well regarding the `AttributeError`",
"I'm on vacation for the next week, so won't be able to do much debugging at the moment. So... |
https://api.github.com/repos/huggingface/datasets/issues/3106 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3106/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3106/comments | https://api.github.com/repos/huggingface/datasets/issues/3106/events | https://github.com/huggingface/datasets/pull/3106 | 1,030,112,473 | PR_kwDODunzps4tYA6i | 3,106 | Fix URLs in blog_authorship_corpus dataset | [] | closed | false | null | 0 | 2021-10-19T10:06:05Z | 2021-10-19T12:50:40Z | 2021-10-19T12:50:39Z | null | After contacting the authors of the paper "Effects of Age and Gender on Blogging", they confirmed:
- the old URLs are no longer valid
- there are alternative host URLs
Fix #3091. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3106/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3106/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3106.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3106",
"merged_at": "2021-10-19T12:50:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3106.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3106"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/457 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/457/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/457/comments | https://api.github.com/repos/huggingface/datasets/issues/457/events | https://github.com/huggingface/datasets/pull/457 | 668,898,386 | MDExOlB1bGxSZXF1ZXN0NDU5MzMyOTM1 | 457 | add set_format to DatasetDict + tests | [] | closed | false | null | 0 | 2020-07-30T15:53:20Z | 2020-07-30T17:34:36Z | 2020-07-30T17:34:34Z | null | Add the `set_format`, `formatted_as`, and `reset_format` methods to `DatasetDict`.
Add tests for these methods on both `Dataset` and `DatasetDict`.
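A rough usage sketch (the dataset and columns are illustrative only):
```python
import nlp

dsets = nlp.load_dataset("glue", "mrpc")  # returns a DatasetDict
# one call now applies the format to every split in the dict
dsets.set_format(type="pandas", columns=["sentence1", "sentence2", "label"])
dsets.reset_format()  # back to python objects for all splits
```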
Fix some bugs uncovered by the tests for `pandas` formatting. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/457/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/457/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/457.diff",
"html_url": "https://github.com/huggingface/datasets/pull/457",
"merged_at": "2020-07-30T17:34:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/457.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/457"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3932 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3932/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3932/comments | https://api.github.com/repos/huggingface/datasets/issues/3932/events | https://github.com/huggingface/datasets/pull/3932 | 1,170,221,773 | PR_kwDODunzps40fd0T | 3,932 | Create SARI metric card | [] | closed | false | null | 1 | 2022-03-15T20:37:23Z | 2022-03-18T17:37:01Z | 2022-03-18T17:32:55Z | null | SARI metric card! (do we have an expert in text simplification to validate?.. :sweat_smile: ) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3932/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3932/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3932.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3932",
"merged_at": "2022-03-18T17:32:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3932.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3932"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2082 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2082/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2082/comments | https://api.github.com/repos/huggingface/datasets/issues/2082/events | https://github.com/huggingface/datasets/pull/2082 | 835,401,555 | MDExOlB1bGxSZXF1ZXN0NTk2MDY1NTM0 | 2,082 | Updated card using information from data statement and datasheet | [] | closed | false | null | 0 | 2021-03-19T00:39:38Z | 2021-03-19T14:29:09Z | 2021-03-19T14:29:09Z | null | I updated and clarified the REFreSD [data card](https://github.com/mcmillanmajora/datasets/blob/refresd_card/datasets/refresd/README.md) with information from Eleftheria's [website](https://elbria.github.io/post/refresd/). I added brief descriptions where the initial card referred to the paper, and I also recreated some of the tables in the paper to show relevant dataset statistics.
I'll email Eleftheria to see if she has any comments on the card. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2082/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2082/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2082.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2082",
"merged_at": "2021-03-19T14:29:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2082.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2082"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3350 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3350/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3350/comments | https://api.github.com/repos/huggingface/datasets/issues/3350/events | https://github.com/huggingface/datasets/pull/3350 | 1,068,078,160 | PR_kwDODunzps4vO1aj | 3,350 | Avoid content-encoding issue while streaming datasets | [] | closed | false | null | 0 | 2021-12-01T07:56:48Z | 2021-12-01T08:15:01Z | 2021-12-01T08:15:00Z | null | This PR will fix streaming of datasets served with gzip content-encoding:
```
ClientPayloadError: 400, message='Can not decode content-encoding: gzip'
```
Fix #2918.
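For context, a rough sketch of the kind of user-side workaround this makes unnecessary (assuming the fsspec HTTP filesystem forwards `client_kwargs` to `aiohttp.ClientSession`; this is not necessarily the exact change made in this PR):
```python
# ask the server for raw bytes instead of gzip-transcoded content
import fsspec

fs = fsspec.filesystem(
    "http",
    client_kwargs={"headers": {"Accept-Encoding": "identity"}},
)
with fs.open("https://example.com/data.json") as f:
    raw = f.read()
```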
CC: @severo | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3350/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3350/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3350.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3350",
"merged_at": "2021-12-01T08:15:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3350.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3350"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5759 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5759/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5759/comments | https://api.github.com/repos/huggingface/datasets/issues/5759/events | https://github.com/huggingface/datasets/issues/5759 | 1,669,977,848 | I_kwDODunzps5jidb4 | 5,759 | Can I load in list of list of dict format? | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 1 | 2023-04-16T13:50:14Z | 2023-04-19T12:04:36Z | null | null | ### Feature request
My JSONL dataset has the following format:
```
[{'input': xxx, 'output': xxx}, {'input': xxx, 'output': xxx}, ...]
[{'input': xxx, 'output': xxx}, {'input': xxx, 'output': xxx}, ...]
```
I tried to use `datasets.load_dataset('json', data_files=path)` or `datasets.Dataset.from_json`, but it raises:
```
File "site-packages/datasets/arrow_dataset.py", line 1078, in from_json
).read()
File "site-packages/datasets/io/json.py", line 59, in read
self.builder.download_and_prepare(
File "site-packages/datasets/builder.py", line 872, in download_and_prepare
self._download_and_prepare(
File "site-packages/datasets/builder.py", line 967, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "site-packages/datasets/builder.py", line 1749, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "site-packages/datasets/builder.py", line 1892, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
```
### Motivation
I want to use features like `Dataset.map` or `Dataset.shuffle`, so I need the in-memory dataset to be in `arrow_dataset.Dataset` format.
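For reference, a minimal workaround sketch under this assumption (flattening each line's list manually, then building the dataset in memory):
```python
# flatten the per-line lists of dicts, then build an in-memory Dataset
import json
from datasets import Dataset

rows = []
with open("data.jsonl") as f:
    for line in f:
        rows.extend(json.loads(line))  # each line is a JSON list of {'input', 'output'} dicts

ds = Dataset.from_list(rows)  # Dataset.map / Dataset.shuffle now work as usual
```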
### Your contribution
PR | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5759/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5759/timeline | null | null | null | null | false | [
"Thanks for reporting, @LZY-the-boys.\r\n\r\nCould you please give more details about what is your intended dataset structure? What are the names of the columns and the value of each row?\r\n\r\nCurrently, the JSON-Lines format is supported:\r\n- Each line correspond to one row of the dataset\r\n- Each line is comp... |
https://api.github.com/repos/huggingface/datasets/issues/6072 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6072/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6072/comments | https://api.github.com/repos/huggingface/datasets/issues/6072/events | https://github.com/huggingface/datasets/pull/6072 | 1,822,123,560 | PR_kwDODunzps5WbWFN | 6,072 | Fix fsspec storage_options from load_dataset | [] | closed | false | null | 6 | 2023-07-26T10:44:23Z | 2023-07-27T12:51:51Z | 2023-07-27T12:42:57Z | null | close https://github.com/huggingface/datasets/issues/6071 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6072/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6072/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6072.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6072",
"merged_at": "2023-07-27T12:42:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6072.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6072"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/5788 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5788/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5788/comments | https://api.github.com/repos/huggingface/datasets/issues/5788/events | https://github.com/huggingface/datasets/pull/5788 | 1,681,136,256 | PR_kwDODunzps5O_v4B | 5,788 | Prepare tests for hfh 0.14 | [] | closed | false | null | 6 | 2023-04-24T12:13:03Z | 2023-04-25T14:32:56Z | 2023-04-25T14:25:30Z | null | Related to the coming release of `huggingface_hub==0.14.0`. It will break some internal tests. The PR fixes these tests. Let's double-check the CI, but I expect the fixed tests to run fine with both `hfh<=0.13.4` and `hfh==0.14`. Worst case scenario, existing PRs will have to be rebased once this fix is merged.
See related [discussion](https://huggingface.slack.com/archives/C02V5EA0A95/p1682337463368609?thread_ts=1681994202.635609&cid=C02V5EA0A95) (private slack).
cc @lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5788/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5788/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5788.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5788",
"merged_at": "2023-04-25T14:25:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5788.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5788"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/2433 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2433/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2433/comments | https://api.github.com/repos/huggingface/datasets/issues/2433/events | https://github.com/huggingface/datasets/pull/2433 | 907,488,711 | MDExOlB1bGxSZXF1ZXN0NjU4MzI5MDQ4 | 2,433 | Fix DuplicatedKeysError in adversarial_qa | [] | closed | false | null | 0 | 2021-05-31T13:48:47Z | 2021-06-01T08:52:11Z | 2021-06-01T08:52:11Z | null | Fixes #2431 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2433/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2433/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2433.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2433",
"merged_at": "2021-06-01T08:52:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2433.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2433"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1827 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1827/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1827/comments | https://api.github.com/repos/huggingface/datasets/issues/1827/events | https://github.com/huggingface/datasets/issues/1827 | 802,353,974 | MDU6SXNzdWU4MDIzNTM5NzQ= | 1,827 | Regarding On-the-fly Data Loading | [] | closed | false | null | 4 | 2021-02-05T17:43:48Z | 2021-02-18T13:55:16Z | 2021-02-18T13:55:16Z | null | Hi,
I was wondering if it is possible to load images/texts as batches during the training process, without loading the entire dataset into RAM at any given point.
Thanks,
Gunjan | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1827/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1827/timeline | null | completed | null | null | false | [
"Possible duplicate\r\n\r\n#1776 https://github.com/huggingface/datasets/issues/\r\n\r\nreally looking PR for this feature",
"Hi @acul3 \r\n\r\nIssue #1776 talks about doing on-the-fly data pre-processing, which I think is solved in the next release as mentioned in the issue #1825. I also look forward to using t... |
https://api.github.com/repos/huggingface/datasets/issues/5126 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5126/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5126/comments | https://api.github.com/repos/huggingface/datasets/issues/5126/events | https://github.com/huggingface/datasets/pull/5126 | 1,411,757,124 | PR_kwDODunzps5A8Iw3 | 5,126 | Fix class name of symbolic link | [] | closed | false | null | 4 | 2022-10-17T15:11:02Z | 2022-11-14T14:40:18Z | 2022-11-14T14:40:18Z | null | Fix #5098 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5126/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5126/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5126.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5126",
"merged_at": "2022-11-14T14:40:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5126.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5126"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5126). All of your documentation changes will be reflected on that endpoint.",
"I have removed the reference to the Issue in the PR title, so that we avoid to have both references (to the issue and to the PR) in the merge commi... |
https://api.github.com/repos/huggingface/datasets/issues/4226 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4226/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4226/comments | https://api.github.com/repos/huggingface/datasets/issues/4226/events | https://github.com/huggingface/datasets/pull/4226 | 1,216,331,073 | PR_kwDODunzps420kAv | 4,226 | Add pearsonr mc, update functionality to match the original docs | [] | closed | false | null | 2 | 2022-04-26T18:30:46Z | 2022-05-03T17:09:24Z | 2022-05-03T17:02:28Z | null | - adds pearsonr metric card
- adds ability to return p-value
- p-value was mentioned in the original docs as a return value, but there was no option to return it. I updated the _compute function slightly to have an option to return the p-value. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4226/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4226/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4226.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4226",
"merged_at": "2022-05-03T17:02:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4226.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4226"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"thank you @lhoestq!! :hugs: "
] |
https://api.github.com/repos/huggingface/datasets/issues/5586 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5586/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5586/comments | https://api.github.com/repos/huggingface/datasets/issues/5586/events | https://github.com/huggingface/datasets/issues/5586 | 1,602,961,544 | I_kwDODunzps5fi0CI | 5,586 | .sort() is broken when used after .filter(), only in 2.10.0 | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2023-02-28T12:18:09Z | 2023-02-28T18:17:26Z | 2023-02-28T17:21:59Z | null | ### Describe the bug
Hi, thank you for your support!
It seems like the addition of multiple key sort (#5502) in 2.10.0 broke the `.sort()` method.
After filtering a dataset with `.filter()`, the `.sort()` seems to refer to the query_table index of the previous unfiltered dataset, resulting in an IndexError.
This only happens with the 2.10.0 release.
### Steps to reproduce the bug
```Python
from datasets import load_dataset
# dataset with length of 1104
ds = load_dataset('glue', 'ax')['test']
ds = ds.filter(lambda x: x['idx'] > 1100)
ds.sort('premise')
print('Done')
```
```
File "/home/dongkeun/datasets_test/test.py", line 5, in <module>
ds.sort('premise')
File "/home/dongkeun/miniconda3/envs/datasets_test/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 528, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/dongkeun/miniconda3/envs/datasets_test/lib/python3.9/site-packages/datasets/fingerprint.py", line 511, in wrapper
out = func(dataset, *args, **kwargs)
File "/home/dongkeun/miniconda3/envs/datasets_test/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3959, in sort
sort_table = query_table(
File "/home/dongkeun/miniconda3/envs/datasets_test/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 588, in query_table
_check_valid_index_key(key, size)
File "/home/dongkeun/miniconda3/envs/datasets_test/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 537, in _check_valid_index_key
_check_valid_index_key(max(key), size=size)
File "/home/dongkeun/miniconda3/envs/datasets_test/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 531, in _check_valid_index_key
raise IndexError(f"Invalid key: {key} is out of bounds for size {size}")
IndexError: Invalid key: 1103 is out of bounds for size 3
```
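A hedged interim workaround on 2.10.0 (assuming `flatten_indices` materializes the filtered rows and drops the stale indices mapping):
```Python
ds = ds.filter(lambda x: x['idx'] > 1100)
ds = ds.flatten_indices()  # rewrite the filtered rows so no stale indices mapping remains
ds.sort('premise')
```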
### Expected behavior
It should sort the dataset and print "Done", which it does on 2.9.0.
### Environment info
- `datasets` version: 2.10.0
- Platform: Linux-5.15.0-41-generic-x86_64-with-glibc2.31
- Python version: 3.9.16
- PyArrow version: 11.0.0
- Pandas version: 1.5.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5586/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5586/timeline | null | completed | null | null | false | [
"Thanks for reporting and thanks @mariosasko for fixing ! We just did a patch release `2.10.1` with the fix"
] |
https://api.github.com/repos/huggingface/datasets/issues/1263 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1263/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1263/comments | https://api.github.com/repos/huggingface/datasets/issues/1263/events | https://github.com/huggingface/datasets/pull/1263 | 758,663,787 | MDExOlB1bGxSZXF1ZXN0NTMzNzk5NzU5 | 1,263 | Added kannada news headlines classification dataset. | [] | closed | false | null | 1 | 2020-12-07T16:35:37Z | 2020-12-10T14:30:55Z | 2020-12-09T18:01:31Z | null | Manual download of a Kaggle dataset. Mostly followed the same process as ms_terms.
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1263/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1263/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1263.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1263",
"merged_at": "2020-12-09T18:01:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1263.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1263"
} | true | [
"Hi! Let me know if any more comments! Will fix it! :-)"
] |
https://api.github.com/repos/huggingface/datasets/issues/672 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/672/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/672/comments | https://api.github.com/repos/huggingface/datasets/issues/672/events | https://github.com/huggingface/datasets/issues/672 | 709,575,527 | MDU6SXNzdWU3MDk1NzU1Mjc= | 672 | Questions about XSUM | [] | closed | false | null | 14 | 2020-09-26T17:16:24Z | 2022-10-04T17:30:17Z | 2022-10-04T17:30:17Z | null | Hi there ✋
I'm looking into your `xsum` dataset and I have several questions about it.
So here is how I loaded the data:
```
>>> data = datasets.load_dataset('xsum', version='1.0.1')
>>> data['train']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, num_rows: 204017)
>>> data['test']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, num_rows: 11333)
```
The first issue is, the instance counts don’t match what I see on [the dataset's website](https://github.com/EdinburghNLP/XSum/tree/master/XSum-Dataset#what-builds-the-xsum-dataset) (11,333 vs 11,334 for test set; 204,017 vs 204,045 for training set)
```
… training (90%, 204,045), validation (5%, 11,332), and test (5%, 11,334) set.
```
Any thoughts why? Perhaps @mariamabarham could help here, since she recently had a PR on this dataset https://github.com/huggingface/datasets/pull/289 (reviewed by @patrickvonplaten)
Another issue is that the instances don't seem to have IDs. The original dataset provides IDs for the instances (https://github.com/EdinburghNLP/XSum/blob/master/XSum-Dataset/XSum-TRAINING-DEV-TEST-SPLIT-90-5-5.json), but to be able to use them, the dataset sizes need to match.
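For concreteness, a sketch of what attaching those IDs could look like once the sizes match (the split keys and ordering are assumptions about the official JSON, not verified here):
```python
import json
from datasets import load_dataset

data = load_dataset("xsum")
with open("XSum-TRAINING-DEV-TEST-SPLIT-90-5-5.json") as f:
    split_ids = json.load(f)

# assumes the loader preserves the official ordering, which only holds once the counts match
train = data["train"].map(lambda ex, i: {"id": split_ids["train"][i]}, with_indices=True)
```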
CC @jbragg
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/672/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/672/timeline | null | completed | null | null | false | [
"We should try to regenerate the data using the official script.\r\nBut iirc that's what we used in the first place, so not sure why it didn't match in the first place.\r\n\r\nI'll let you know when the dataset is updated",
"Thanks, looking forward to hearing your update on this thread. \r\n\r\nThis is a blocking... |
https://api.github.com/repos/huggingface/datasets/issues/4648 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4648/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4648/comments | https://api.github.com/repos/huggingface/datasets/issues/4648/events | https://github.com/huggingface/datasets/issues/4648 | 1,296,659,335 | I_kwDODunzps5NSXOH | 4,648 | Add WikiAnswers dataset | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | 1 | 2022-07-07T01:06:37Z | 2022-07-14T02:03:40Z | 2022-07-14T02:03:40Z | null | ## Adding a Dataset
- **Name:** *WikiAnswers*
- **Description:** *The WikiAnswers corpus contains clusters of questions tagged by WikiAnswers users as paraphrases. Each cluster optionally contains an answer provided by WikiAnswers users.*
- **Paper:** *https://dl.acm.org/doi/10.1145/2623330.2623677*
- **Data:** *https://github.com/afader/oqa#wikianswers-corpus*
- **Motivation:** *Dataset for training and evaluating models of conversational response*
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4648/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4648/timeline | null | completed | null | null | false | [
"uploaded dataset [here](https://huggingface.co/datasets/embedding-data/WikiAnswers)"
] |
https://api.github.com/repos/huggingface/datasets/issues/4726 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4726/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4726/comments | https://api.github.com/repos/huggingface/datasets/issues/4726/events | https://github.com/huggingface/datasets/pull/4726 | 1,312,082,175 | PR_kwDODunzps47ykPI | 4,726 | Fix broken link to the Hub | [] | closed | false | null | 1 | 2022-07-20T22:57:27Z | 2022-07-21T14:33:18Z | 2022-07-21T08:00:54Z | null | The Markdown link fails to render if it is in the same line as the `<span>`. This PR implements @mishig25's fix by using `<a href=" ">` instead.
 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4726/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4726/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4726.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4726",
"merged_at": "2022-07-21T08:00:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4726.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4726"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/530 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/530/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/530/comments | https://api.github.com/repos/huggingface/datasets/issues/530/events | https://github.com/huggingface/datasets/pull/530 | 684,825,612 | MDExOlB1bGxSZXF1ZXN0NDcyNjQ5NTk2 | 530 | use ragged tensor by default | [] | closed | false | null | 4 | 2020-08-24T17:06:15Z | 2021-10-22T19:38:40Z | 2020-08-24T19:22:25Z | null | I think it's better if it's clear whether the returned tensor is ragged or not when the type is set to tensorflow.
Previously it was a plain tensor (not ragged) if numpy could stack the output (which can change depending on the batch of examples you take), which makes things difficult to handle, as it may sometimes return a ragged tensor and sometimes not.
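For illustration (a toy sketch, not code from this PR):
```python
import tensorflow as tf

batch = [[1, 2, 3], [4, 5]]            # per-example lengths differ
ragged = tf.ragged.constant(batch)     # always well-defined
# tf.constant(batch) would raise here, so returning ragged tensors by default
# gives one predictable output type regardless of the batch taken
```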
Therefore I reverted this behavior to always return a ragged tensor as we used to do. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/530/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/530/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/530.diff",
"html_url": "https://github.com/huggingface/datasets/pull/530",
"merged_at": "2020-08-24T19:22:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/530.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/530"
} | true | [
"Yes I agree. Maybe something that lets specify different format depending on the column ? Especially to better control dtype and shape (and ragged for tf)\r\n\r\nOh and I forgot: this one should also fix the second issue found in #477 for the next release",
"I am running into the same issue with the error messag... |
https://api.github.com/repos/huggingface/datasets/issues/420 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/420/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/420/comments | https://api.github.com/repos/huggingface/datasets/issues/420/events | https://github.com/huggingface/datasets/pull/420 | 662,029,782 | MDExOlB1bGxSZXF1ZXN0NDUzNjI5OTk2 | 420 | Better handle nested features | [] | closed | false | null | 0 | 2020-07-20T16:44:13Z | 2020-07-21T08:20:49Z | 2020-07-21T08:09:52Z | null | Changes:
- added arrow schema to features conversion (it's going to be useful to fix #342 )
- make flatten handle deep features (useful for tfrecords conversion in #339 )
- add tests for flatten and features conversions
- the reader now returns the kwargs to instantiate a Dataset (fix circular dependencies) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/420/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/420/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/420.diff",
"html_url": "https://github.com/huggingface/datasets/pull/420",
"merged_at": "2020-07-21T08:09:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/420.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/420"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4676 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4676/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4676/comments | https://api.github.com/repos/huggingface/datasets/issues/4676/events | https://github.com/huggingface/datasets/issues/4676 | 1,302,202,028 | I_kwDODunzps5Nngas | 4,676 | Dataset.map gets stuck on _cast_to_python_objects | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "7057ff",
"default": true,
"descript... | closed | false | null | 9 | 2022-07-12T15:09:58Z | 2022-10-03T13:01:04Z | 2022-10-03T13:01:03Z | null | ## Describe the bug
`Dataset.map`, when fed a Huggingface Tokenizer as its map func, can sometimes spend huge amounts of time doing casts. A minimal example follows.
Not all usages suffer from this. For example, I profiled the preprocessor at https://github.com/huggingface/notebooks/blob/main/examples/question_answering.ipynb , and it did _not_ have this problem. However, I'm at a loss to figure out how it avoids it, as the example below is simple and minimal and still has this problem.
This casting, where it occurs, causes the `Dataset.map` to run approximately 7x slower than it runs for code which does not cause this casting.
This may be related to https://github.com/huggingface/datasets/issues/1046 . However, the tokenizer is _not_ set to return Tensors.
## Steps to reproduce the bug
A minimal, self-contained example to reproduce is below:
```python
import transformers
from transformers import AutoTokenizer
from datasets import load_dataset
import torch
import cProfile
pretrained = 'distilbert-base-uncased'
tokenizer = AutoTokenizer.from_pretrained(pretrained)
squad = load_dataset('squad')
squad_train = squad['train']
squad_tiny = squad_train.select(range(5000))
assert isinstance(tokenizer, transformers.PreTrainedTokenizerFast)
def tokenize(ds):
tokens = tokenizer(text=ds['question'],
text_pair=ds['context'],
add_special_tokens=True,
padding='max_length',
truncation='only_second',
max_length=160,
stride=32,
return_overflowing_tokens=True,
return_offsets_mapping=True,
)
return tokens
cmd = 'squad_tiny.map(tokenize, batched=True, remove_columns=squad_tiny.column_names)'
cProfile.run(cmd, sort='tottime')
```
## Actual results
The code works, but takes 10-25 sec per batch (about 7x slower than non-casting code), with the following profile. Note that `_cast_to_python_objects` is the culprit.
```
63524075 function calls (58206482 primitive calls) in 121.836 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
5274034/40 68.751 0.000 111.060 2.776 features.py:262(_cast_to_python_objects)
42223832 24.077 0.000 33.310 0.000 {built-in method builtins.isinstance}
16338/20 5.121 0.000 111.053 5.553 features.py:361(<listcomp>)
5274135 4.747 0.000 4.749 0.000 {built-in method _abc._abc_instancecheck}
80/40 4.731 0.059 116.292 2.907 {pyarrow.lib.array}
5274135 4.485 0.000 9.234 0.000 abc.py:96(__instancecheck__)
2661564/2645196 2.959 0.000 4.298 0.000 features.py:1081(_check_non_null_non_empty_recursive)
5 2.786 0.557 2.786 0.557 {method 'encode_batch' of 'tokenizers.Tokenizer' objects}
2668052 0.930 0.000 0.930 0.000 {built-in method builtins.len}
5000 0.930 0.000 0.938 0.000 tokenization_utils_fast.py:187(_convert_encoding)
5 0.750 0.150 0.808 0.162 {method 'to_pydict' of 'pyarrow.lib.Table' objects}
1 0.444 0.444 121.749 121.749 arrow_dataset.py:2501(_map_single)
40 0.375 0.009 116.291 2.907 arrow_writer.py:151(__arrow_array__)
10 0.066 0.007 0.066 0.007 {method 'write_batch' of 'pyarrow.lib._CRecordBatchWriter' objects}
1 0.060 0.060 121.835 121.835 fingerprint.py:409(wrapper)
11387/5715 0.049 0.000 0.175 0.000 {built-in method builtins.getattr}
36 0.049 0.001 0.049 0.001 {pyarrow._compute.call_function}
15000 0.040 0.000 0.040 0.000 _collections_abc.py:719(__iter__)
3 0.023 0.008 0.023 0.008 {built-in method _imp.create_dynamic}
77 0.020 0.000 0.020 0.000 {built-in method builtins.dir}
37 0.019 0.001 0.019 0.001 socket.py:543(send)
15 0.017 0.001 0.017 0.001 tokenization_utils_fast.py:460(<listcomp>)
432/421 0.015 0.000 0.024 0.000 traitlets.py:1388(_notify_observers)
5000 0.015 0.000 0.018 0.000 _collections_abc.py:672(keys)
51 0.014 0.000 0.042 0.001 traitlets.py:276(getmembers)
5 0.014 0.003 3.775 0.755 tokenization_utils_fast.py:392(_batch_encode_plus)
3/1 0.014 0.005 0.035 0.035 {built-in method _imp.exec_dynamic}
5 0.012 0.002 0.950 0.190 tokenization_utils_fast.py:438(<listcomp>)
31626 0.012 0.000 0.012 0.000 {method 'append' of 'list' objects}
1532/1001 0.011 0.000 0.189 0.000 traitlets.py:643(get)
5 0.009 0.002 3.796 0.759 arrow_dataset.py:2631(apply_function_on_filtered_inputs)
51 0.009 0.000 0.062 0.001 traitlets.py:1766(traits)
5 0.008 0.002 3.784 0.757 tokenization_utils_base.py:2632(batch_encode_plus)
368 0.007 0.000 0.044 0.000 traitlets.py:1715(_get_trait_default_generator)
26 0.007 0.000 0.022 0.001 traitlets.py:1186(setup_instance)
51 0.006 0.000 0.010 0.000 traitlets.py:1781(<listcomp>)
80/32 0.006 0.000 0.052 0.002 table.py:1758(cast_array_to_feature)
684 0.006 0.000 0.007 0.000 {method 'items' of 'dict' objects}
4344/1794 0.006 0.000 0.192 0.000 traitlets.py:675(__get__)
...
```
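As a hedged aside (a mitigation sketch, not a confirmed fix): fast tokenizers can return NumPy arrays, which should take a cheaper write path than nested Python lists in `Dataset.map`:
```python
def tokenize_np(ds):
    return tokenizer(
        text=ds['question'],
        text_pair=ds['context'],
        add_special_tokens=True,
        padding='max_length',
        truncation='only_second',
        max_length=160,
        stride=32,
        return_overflowing_tokens=True,
        return_offsets_mapping=True,
        return_tensors='np',  # NumPy output instead of Python lists
    )
```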
## Environment info
I observed this on both Google colab and my local workstation:
### Google colab
- `datasets` version: 2.3.2
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
### Local
- `datasets` version: 2.3.2
- Platform: Windows-7-6.1.7601-SP1
- Python version: 3.8.10
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4676/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4676/timeline | null | completed | null | null | false | [
"Are you able to reproduce this? My example is small enough that it should be easy to try.",
"Hi! Thanks for reporting and providing a reproducible example. Indeed, by default, `datasets` performs an expensive cast on the values returned by `map` to convert them to one of the types supported by PyArrow (the under... |
https://api.github.com/repos/huggingface/datasets/issues/2933 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2933/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2933/comments | https://api.github.com/repos/huggingface/datasets/issues/2933/events | https://github.com/huggingface/datasets/pull/2933 | 999,392,566 | PR_kwDODunzps4r5MHs | 2,933 | Replace script_version with revision | [] | closed | false | null | 1 | 2021-09-17T14:04:39Z | 2021-09-20T09:52:10Z | 2021-09-20T09:52:10Z | null | As discussed in https://github.com/huggingface/datasets/pull/2718#discussion_r707013278, the parameter name `script_version` is no longer applicable to datasets without loading script (i.e., datasets only with raw data files).
This PR replaces the parameter name `script_version` with `revision`.
This way, we are also aligned with:
- Transformers: `AutoTokenizer.from_pretrained(..., revision=...)`
- Hub: `HfApi.dataset_info(..., revision=...)`, `HfApi.upload_file(..., revision=...)` | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2933/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2933/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2933.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2933",
"merged_at": "2021-09-20T09:52:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2933.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2933"
} | true | [
"I'm also fine with the removal in 1.15"
] |
https://api.github.com/repos/huggingface/datasets/issues/2832 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2832/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2832/comments | https://api.github.com/repos/huggingface/datasets/issues/2832/events | https://github.com/huggingface/datasets/issues/2832 | 978,012,800 | MDU6SXNzdWU5NzgwMTI4MDA= | 2,832 | Logging levels not taken into account | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2021-08-24T11:50:41Z | 2023-07-12T17:19:30Z | 2023-07-12T17:19:29Z | null | ## Describe the bug
The `logging` module isn't working as intended with respect to the verbosity levels that are set.
## Steps to reproduce the bug
```python
from datasets import logging
logging.set_verbosity_debug()
logger = logging.get_logger()
logger.error("ERROR")
logger.warning("WARNING")
logger.info("INFO")
logger.debug("DEBUG")
```
## Expected results
I expect all logs to be output since I set the verbosity to the `debug` level.
## Actual results
Only the first two logs are output.
## Environment info
- `datasets` version: 1.11.0
- Platform: Linux-5.13.9-arch1-1-x86_64-with-glibc2.33
- Python version: 3.9.6
- PyArrow version: 5.0.0
## To go further
This logging issue appears in `datasets` but not in `transformers`. It happens because no handler is defined for the logger. When no handler is defined, the `logging` library falls back to a last-resort `StderrHandler` with level `WARNING`, so only records at `WARNING` and above reach stderr.
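A sketch of a user-side workaround given that explanation (the handler choice is illustrative):
```python
import logging as stdlib_logging
from datasets import logging

logging.set_verbosity_debug()
logger = logging.get_logger()
logger.addHandler(stdlib_logging.StreamHandler())  # bypass the WARNING-level last-resort handler
logger.debug("DEBUG")  # now emitted
```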
`transformers` sets a default `StreamHandler` [here](https://github.com/huggingface/transformers/blob/5c6eca71a983bae2589eed01e5c04fcf88ba5690/src/transformers/utils/logging.py#L86) | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2832/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2832/timeline | null | completed | null | null | false | [
"I just take a look at all the outputs produced by `datasets` using the different log-levels.\r\nAs far as i can tell using `datasets==1.17.0` they overall issue seems to be fixed.\r\n\r\nHowever, I noticed that there is one tqdm based progress indicator appearing on STDERR that I can simply not suppress.\r\n```\r\... |
https://api.github.com/repos/huggingface/datasets/issues/1184 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1184/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1184/comments | https://api.github.com/repos/huggingface/datasets/issues/1184/events | https://github.com/huggingface/datasets/pull/1184 | 757,807,583 | MDExOlB1bGxSZXF1ZXN0NTMzMTExNjk4 | 1,184 | Add Adversarial SQuAD dataset | [] | closed | false | null | 5 | 2020-12-05T23:51:57Z | 2020-12-16T16:12:58Z | 2020-12-16T16:12:58Z | null | # Adversarial SQuAD
Adding the Adversarial [SQuAD](https://github.com/robinjia/adversarial-squad) dataset as part of the sprint 🎉
This dataset adds adversarial sentences to a subset of the SQuAD dataset's dev examples. How to get the original SQuAD example id is explained in the README under Data Instances. The whole dataset is intended for evaluation (though it could of course also be used for training if one wants), so there is no classical train/val/test split, but rather a split based on the number of adversaries added.
There are 2 splits of this dataset:
- AddSent: Has up to five candidate adversarial sentences that don't answer the question, but have a lot of words in common with the question. This adversary does not query the model in any way.
- AddOneSent: Similar to AddSent, but just one candidate sentence was picked at random. This adversary does not query the model in any way.
(The AddAny and AddCommon datasets mentioned in the paper are dynamically generated based on the model's output distribution and thus are not included here.)
The failing test looks like some unrelated timeout issue; it will probably clear on a rerun.
- [x] All tests passed
- [x] Added dummy data
- [x] Added data card (as much as I could) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1184/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1184/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1184.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1184",
"merged_at": "2020-12-16T16:12:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1184.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1184"
} | true | [
"the CI error was just a connection error due to all the activity on the repo this week ^^'\r\nI re-ran it so it should be good now",
"I hadn't realized the problem with the dummies since it had passed without errors.\r\nSuggestion: maybe we can show the user a warning based on the generated dummy size.",
"Than... |
https://api.github.com/repos/huggingface/datasets/issues/362 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/362/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/362/comments | https://api.github.com/repos/huggingface/datasets/issues/362/events | https://github.com/huggingface/datasets/issues/362 | 653,766,245 | MDU6SXNzdWU2NTM3NjYyNDU= | 362 | [dataset subset missing] xtreme paws-x | [] | closed | false | null | 1 | 2020-07-09T05:04:54Z | 2020-07-09T12:38:42Z | 2020-07-09T12:38:42Z | null | I tried `nlp.load_dataset('xtreme', 'PAWS-X.es')` but got a ValueError.
It turns out that the subset for Spanish is missing
https://github.com/google-research-datasets/paws/tree/master/pawsx | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/362/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/362/timeline | null | completed | null | null | false | [
"You're right, thanks for pointing it out. We will update it "
] |
https://api.github.com/repos/huggingface/datasets/issues/5320 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5320/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5320/comments | https://api.github.com/repos/huggingface/datasets/issues/5320/events | https://github.com/huggingface/datasets/pull/5320 | 1,471,360,910 | PR_kwDODunzps5ED_UQ | 5,320 | [Extract] Place the lock file next to the destination directory | [] | closed | false | null | 1 | 2022-12-01T13:55:49Z | 2022-12-01T15:36:44Z | 2022-12-01T15:33:58Z | null | Previously it was placed next to the archive to extract, but the archive can be in a read-only directory as noticed in https://github.com/huggingface/datasets/issues/5295
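A minimal sketch of the idea (hypothetical helper, not the exact code in this PR):
```python
import os
from filelock import FileLock

def extract_with_lock(archive_path: str, output_path: str):
    os.makedirs(os.path.dirname(output_path), exist_ok=True)
    lock_path = output_path + ".lock"  # next to the writable destination, not the archive
    with FileLock(lock_path):
        ...  # extract archive_path into output_path
```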
Therefore I moved the lock location to be next to the destination directory, which is required to have write permissions | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5320/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5320/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5320.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5320",
"merged_at": "2022-12-01T15:33:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5320.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5320"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/525 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/525/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/525/comments | https://api.github.com/repos/huggingface/datasets/issues/525/events | https://github.com/huggingface/datasets/issues/525 | 683,875,483 | MDU6SXNzdWU2ODM4NzU0ODM= | 525 | wmt download speed example | [] | closed | false | null | 8 | 2020-08-21T23:29:06Z | 2022-10-04T17:45:39Z | 2022-10-04T17:45:39Z | null | Continuing from the Slack 1.0 roadmap thread with @lhoestq, I realized the slow downloads are only a thing sometimes. Here are a few examples; I suspect there are multiple issues. All commands were run from the same GCP us-central-1f machine.
```
import nlp
nlp.load_dataset('wmt16', 'de-en')
```
Downloads at 49.1 KB/s.
Whereas
```
pip install gdown # download from google drive
!gdown https://drive.google.com/uc?id=1iO7um-HWoNoRKDtw27YUSgyeubn9uXqj
```
Downloads at 127 MB/s. (The file is a copy of wmt-en-de raw).
```
nlp.load_dataset('wmt16', 'ro-en')
```
goes at 27 MB/s, much faster.
If we wget the same data from S3, we get the same download speed, but ¼ the file size:
```
wget https://s3.amazonaws.com/datasets.huggingface.co/translation/wmt_en_ro_packed_200_rand.tgz
```
Finally,
```
nlp.load_dataset('wmt19', 'zh-en')
```
Starts fast, but broken. (duplicate of #493 )
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/525/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/525/timeline | null | completed | null | null | false | [
"Thanks for creating the issue :)\r\nThe download link for wmt-en-de raw looks like a mirror. We should use that instead of the current url.\r\nIs this mirror official ?\r\n\r\nAlso it looks like for `ro-en` it tried to download other languages. If we manage to only download the one that is asked it'd be cool\r\n\r... |
https://api.github.com/repos/huggingface/datasets/issues/3236 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3236/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3236/comments | https://api.github.com/repos/huggingface/datasets/issues/3236/events | https://github.com/huggingface/datasets/issues/3236 | 1,048,026,358 | I_kwDODunzps4-d5z2 | 3,236 | Loading of datasets changed in #3110 returns no examples | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 7 | 2021-11-08T23:29:46Z | 2021-11-09T16:46:05Z | 2021-11-09T16:45:47Z | null | ## Describe the bug
Loading any of the datasets changed in https://github.com/huggingface/datasets/pull/3110 returns no examples:
```python
DatasetDict({
train: Dataset({
features: ['id', 'title', 'abstract', 'full_text', 'qas'],
num_rows: 0
})
validation: Dataset({
features: ['id', 'title', 'abstract', 'full_text', 'qas'],
num_rows: 0
})
})
```
## Steps to reproduce the bug
Load any of the datasets that were changed in https://github.com/huggingface/datasets/pull/3110:
```python
from datasets import load_dataset
load_dataset("qasper")
# The problem only started with the commit of #3110
load_dataset("qasper", revision="b6469baa22c174b3906c631802a7016fedea6780")
```
## Expected results
```python
DatasetDict({
train: Dataset({
features: ['id', 'title', 'abstract', 'full_text', 'qas'],
num_rows: 888
})
validation: Dataset({
features: ['id', 'title', 'abstract', 'full_text', 'qas'],
num_rows: 281
})
})
```
This expected output can be obtained by specifying the revision of the commit before https://github.com/huggingface/datasets/pull/3110:
```python
from datasets import load_dataset
load_dataset("qasper", revision="acfe2abda1ca79f0ce5c1896aa83b4b78af76b7d")
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.15.2.dev0 (master)
- Python version: 3.8.10
- PyArrow version: 3.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3236/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3236/timeline | null | completed | null | null | false | [
"Hi @eladsegal, thanks for reporting.\r\n\r\nI am sorry, but I can't reproduce the bug:\r\n```\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"qasper\")\r\nDownloading: 5.11kB [00:00, ?B/s]\r\nDownloading and preparing dataset qasper/qasper (download: 9.88 MiB, generated: 35.11 MiB, ... |
https://api.github.com/repos/huggingface/datasets/issues/2810 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2810/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2810/comments | https://api.github.com/repos/huggingface/datasets/issues/2810/events | https://github.com/huggingface/datasets/pull/2810 | 972,040,022 | MDExOlB1bGxSZXF1ZXN0NzEzNjkzMTI1 | 2,810 | Add WIT Dataset | [] | closed | false | null | 1 | 2021-08-16T19:34:09Z | 2022-05-06T12:27:29Z | 2022-05-06T12:26:16Z | null | Adds Google's [WIT](https://github.com/google-research-datasets/wit) dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2810/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2810/timeline | null | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/2810.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2810",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2810.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2810"
} | true | [
"Google's version of WIT is now available here: https://huggingface.co/datasets/google/wit"
] |
https://api.github.com/repos/huggingface/datasets/issues/1774 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1774/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1774/comments | https://api.github.com/repos/huggingface/datasets/issues/1774/events | https://github.com/huggingface/datasets/issues/1774 | 792,730,559 | MDU6SXNzdWU3OTI3MzA1NTk= | 1,774 | is it possible to make slice to be more compatible like python list and numpy? | [] | open | false | null | 2 | 2021-01-24T06:15:52Z | 2022-06-01T15:54:50Z | null | null | Hi,
see below error:
```
AssertionError: Requested slice [:10000000000000000] incompatible with 20 examples.
``` | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1774/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1774/timeline | null | null | null | null | false | [
"Hi ! Thanks for reporting.\r\nI am working on changes in the way data are sliced from arrow. I can probably fix your issue with the changes I'm doing.\r\nIf you have some code to reproduce the issue it would be nice so I can make sure that this case will be supported :)\r\nI'll make a PR in a few days ",
"Good i... |
https://api.github.com/repos/huggingface/datasets/issues/722 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/722/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/722/comments | https://api.github.com/repos/huggingface/datasets/issues/722/events | https://github.com/huggingface/datasets/pull/722 | 718,689,117 | MDExOlB1bGxSZXF1ZXN0NTAxMDI3NjAw | 722 | datasets(RWTH-PHOENIX-Weather 2014 T): add initial loading script | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | 3 | 2020-10-10T19:44:08Z | 2022-09-30T14:53:37Z | 2022-09-30T14:53:37Z | null | This is the first sign language dataset in this repo as far as I know.
Following an old issue I opened https://github.com/huggingface/datasets/issues/302.
I added the dataset's official README file, but I see it's not very standard, so it can be removed.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/722/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/722/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/722.diff",
"html_url": "https://github.com/huggingface/datasets/pull/722",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/722.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/722"
} | true | [
"This might be interesting to @kayoyin the author of https://github.com/kayoyin/transformer-slt – pinging you just in case :)",
"Thanks Amit, this is a great idea! I'm thinking of porting the SLT models from my paper here as well, having this dataset would be perfect for that :)",
"Thanks for your contribution,... |
https://api.github.com/repos/huggingface/datasets/issues/2272 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2272/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2272/comments | https://api.github.com/repos/huggingface/datasets/issues/2272/events | https://github.com/huggingface/datasets/issues/2272 | 869,017,977 | MDU6SXNzdWU4NjkwMTc5Nzc= | 2,272 | Bug in Dataset.class_encode_column | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-04-27T16:13:18Z | 2021-04-30T12:54:27Z | 2021-04-30T12:54:27Z | null | ## Describe the bug
All the rest of the columns except the one passed to `Dataset.class_encode_column` are discarded.
## Expected results
All the original columns should be kept.
This needs regression tests.
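A minimal regression-test sketch of the expected behavior (the column names are illustrative, not from the repo's test suite):
```python
from datasets import Dataset

ds = Dataset.from_dict({"label": ["pos", "neg"], "text": ["good", "bad"]})
encoded = ds.class_encode_column("label")

# All original columns should survive the encoding, not just the encoded one.
assert set(encoded.column_names) == {"label", "text"}
```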
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2272/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2272/timeline | null | completed | null | null | false | [
"This has been fixed in this commit: https://github.com/huggingface/datasets/pull/2254/commits/88676c930216cd4cc31741b99827b477d2b46cb6\r\n\r\nIt was introduced in #2246 : using map with `input_columns` doesn't return the other columns anymore"
] |
https://api.github.com/repos/huggingface/datasets/issues/4883 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4883/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4883/comments | https://api.github.com/repos/huggingface/datasets/issues/4883/events | https://github.com/huggingface/datasets/issues/4883 | 1,349,083,235 | I_kwDODunzps5QaWBj | 4,883 | With dataloader RSS memory consumed by HF datasets monotonically increases | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 40 | 2022-08-24T08:42:54Z | 2022-09-29T16:16:31Z | null | null | ## Describe the bug
When HF datasets is used in conjunction with a PyTorch DataLoader, the RSS memory of the process keeps increasing when it should stay constant.
## Steps to reproduce the bug
Run and observe the output of this snippet which logs RSS memory.
```python
import psutil
import os
from transformers import BertTokenizer
from datasets import load_dataset
from torch.utils.data import DataLoader
BATCH_SIZE = 32
NUM_TRIES = 10
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
def transform(x):
    # Replace the raw fields with tokenized tensors so only model inputs remain.
    x.update(tokenizer(x["text"], return_tensors="pt", max_length=64, padding="max_length", truncation=True))
    x.pop("text")
    x.pop("label")
    return x
dataset = load_dataset("imdb", split="train")
dataset.set_transform(transform)
train_loader = DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=True, num_workers=4)
mem_before = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)
count = 0
while count < NUM_TRIES:
for idx, batch in enumerate(train_loader):
mem_after = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)
print(count, idx, mem_after - mem_before)
count += 1
```
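As a sanity check on the measurement itself, forcing a garbage-collection pass before each reading rules out growth that is merely uncollected Python garbage rather than a genuine leak. A variant of the loop above (a sketch reusing the names already defined in the snippet):
```python
import gc

count = 0
while count < NUM_TRIES:
    for idx, batch in enumerate(train_loader):
        gc.collect()  # force deferred frees so RSS reflects live memory only
        mem_after = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)
        print(count, idx, mem_after - mem_before)
    count += 1
```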
## Expected results
Memory should not increase after initial setup and loading of the dataset
## Actual results
Memory continuously increases as can be seen in the log.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.2
- Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-glibc2.10
- Python version: 3.8.13
- PyArrow version: 7.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 3,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4883/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4883/timeline | null | null | null | null | false | [
"Are you sure there is a leak? How can I see it? You shared the script but not the output which you believe should indicate a leak.\r\n\r\nI modified your reproduction script to print only once per try as your original was printing too much info and you absolutely must add `gc.collect()` when doing any memory measu... |
https://api.github.com/repos/huggingface/datasets/issues/3433 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3433/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3433/comments | https://api.github.com/repos/huggingface/datasets/issues/3433/events | https://github.com/huggingface/datasets/issues/3433 | 1,080,910,724 | I_kwDODunzps5AbWOE | 3,433 | Add Multilingual Spoken Words dataset | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "d93f0b",... | closed | false | null | 0 | 2021-12-15T11:14:44Z | 2022-02-22T10:03:53Z | 2022-02-22T10:03:53Z | null | ## Adding a Dataset
- **Name:** Multilingual Spoken Words
- **Description:** Multilingual Spoken Words Corpus is a large and growing audio dataset of spoken words in 50 languages for academic research and commercial applications in keyword spotting and spoken term search, licensed under CC-BY 4.0. The dataset contains more than 340,000 keywords, totaling 23.4 million 1-second spoken examples (over 6,000 hours).
Read more: https://mlcommons.org/en/news/spoken-words-blog/
- **Paper:** https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/file/fe131d7f5a6b38b23cc967316c13dae2-Paper-round2.pdf
- **Data:** https://mlcommons.org/en/multilingual-spoken-words/
- **Motivation:**
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3433/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3433/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/3698 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3698/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3698/comments | https://api.github.com/repos/huggingface/datasets/issues/3698/events | https://github.com/huggingface/datasets/pull/3698 | 1,129,864,282 | PR_kwDODunzps4yXtyQ | 3,698 | Add finetune-data CodeFill | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | 1 | 2022-02-10T11:12:51Z | 2022-10-03T09:36:18Z | 2022-10-03T09:36:18Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3698/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3698/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3698.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3698",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3698.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3698"
} | true | [
"Thanks for your contribution, @rgismondi. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if... |
https://api.github.com/repos/huggingface/datasets/issues/3243 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3243/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3243/comments | https://api.github.com/repos/huggingface/datasets/issues/3243/events | https://github.com/huggingface/datasets/pull/3243 | 1,048,630,754 | PR_kwDODunzps4uSWtB | 3,243 | Remove redundant isort module placement | [] | closed | false | null | 0 | 2021-11-09T13:50:30Z | 2021-11-12T14:02:45Z | 2021-11-12T14:02:45Z | null | `isort` can place modules by itself from [version 5.0.0](https://pycqa.github.io/isort/docs/upgrade_guides/5.0.0.html#module-placement-changes-known_third_party-known_first_party-default_section-etc) onwards, making the `known_first_party` and `known_third_party` fields in `setup.cfg` redundant (this is why our CI works, even though we haven't touched these options in a while). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3243/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3243/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3243.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3243",
"merged_at": "2021-11-12T14:02:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3243.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3243"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/6007 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6007/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6007/comments | https://api.github.com/repos/huggingface/datasets/issues/6007/events | https://github.com/huggingface/datasets/issues/6007 | 1,789,782,693 | I_kwDODunzps5qreql | 6,007 | Get an error "OverflowError: Python int too large to convert to C long" when loading a large dataset | [
{
"color": "c2e0c6",
"default": false,
"description": "Related to Apache Arrow",
"id": 5705560427,
"name": "arrow",
"node_id": "LA_kwDODunzps8AAAABVBPxaw",
"url": "https://api.github.com/repos/huggingface/datasets/labels/arrow"
}
] | open | false | null | 7 | 2023-07-05T15:16:50Z | 2023-07-10T19:11:17Z | null | null | ### Describe the bug
When loading a large dataset with the following code:
```python
from datasets import load_dataset
dataset = load_dataset("liwu/MNBVC", 'news_peoples_daily', split='train')
```
We encountered the error: "OverflowError: Python int too large to convert to C long"
The error looks something like:
```
OverflowError: Python int too large to convert to C long
During handling of the above exception, another exception occurred:
OverflowError Traceback (most recent call last)
<ipython-input-7-0ed8700e662d> in <module>
----> 1 dataset = load_dataset("liwu/MNBVC", 'news_peoples_daily', split='train', cache_dir='/sfs/MNBVC/.cache/')
/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1749 ignore_verifications=ignore_verifications,
1750 try_from_hf_gcs=try_from_hf_gcs,
-> 1751 use_auth_token=use_auth_token,
1752 )
1753
/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
703 if not downloaded_from_gcs:
704 self._download_and_prepare(
--> 705 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
706 )
707 # Sync info
/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos)
1225
1226 def _download_and_prepare(self, dl_manager, verify_infos):
-> 1227 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
1228
1229 def _get_examples_iterable_for_split(self, split_generator: SplitGenerator) -> ExamplesIterable:
/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
791 try:
792 # Prepare split will record examples associated to the split
--> 793 self._prepare_split(split_generator, **prepare_split_kwargs)
794 except OSError as e:
795 raise OSError(
/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/builder.py in _prepare_split(self, split_generator, check_duplicate_keys)
1219 writer.write(example, key)
1220 finally:
-> 1221 num_examples, num_bytes = writer.finalize()
1222
1223 split_generator.split_info.num_examples = num_examples
/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/arrow_writer.py in finalize(self, close_stream)
536 # Re-intializing to empty list for next batch
537 self.hkey_record = []
--> 538 self.write_examples_on_file()
539 if self.pa_writer is None:
540 if self.schema:
/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/arrow_writer.py in write_examples_on_file(self)
407 # Since current_examples contains (example, key) tuples
408 batch_examples[col] = [row[0][col] for row in self.current_examples]
--> 409 self.write_batch(batch_examples=batch_examples)
410 self.current_examples = []
411
/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size)
506 col_try_type = try_features[col] if try_features is not None and col in try_features else None
507 typed_sequence = OptimizedTypedSequence(batch_examples[col], type=col_type, try_type=col_try_type, col=col)
--> 508 arrays.append(pa.array(typed_sequence))
509 inferred_features[col] = typed_sequence.get_inferred_type()
510 schema = inferred_features.arrow_schema if self.pa_writer is None else self.schema
/sfs/MNBVC/venv/lib64/python3.6/site-packages/pyarrow/array.pxi in pyarrow.lib.array()
/sfs/MNBVC/venv/lib64/python3.6/site-packages/pyarrow/array.pxi in pyarrow.lib._handle_arrow_array_protocol()
/sfs/MNBVC/venv/lib64/python3.6/site-packages/datasets/arrow_writer.py in __arrow_array__(self, type)
180 else:
181 trying_cast_to_python_objects = True
--> 182 out = pa.array(cast_to_python_objects(data, only_1d_for_numpy=True))
183 # use smaller integer precisions if possible
184 if self.trying_int_optimization:
/sfs/MNBVC/venv/lib64/python3.6/site-packages/pyarrow/array.pxi in pyarrow.lib.array()
/sfs/MNBVC/venv/lib64/python3.6/site-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array()
/sfs/MNBVC/venv/lib64/python3.6/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
OverflowError: Python int too large to convert to C long
```
However, that dataset can be loaded in a streaming manner:
```python
from datasets import load_dataset
dataset = load_dataset("liwu/MNBVC", 'news_peoples_daily', split='train', streaming=True)
for i in dataset:
    pass  # it works well
```
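Since streaming works, one way to hunt for the offending value is to scan the streamed rows for integers outside the 64-bit range, which is what "Python int too large to convert to C long" implies on a 64-bit platform. A flat-scan sketch (the MNBVC configs contain nested fields, so a complete check would need a recursive walk):
```python
from datasets import load_dataset

INT64_MIN, INT64_MAX = -(2**63), 2**63 - 1

ds = load_dataset("liwu/MNBVC", 'news_peoples_daily', split='train', streaming=True)
for i, row in enumerate(ds):
    for key, value in row.items():
        if isinstance(value, int) and not (INT64_MIN <= value <= INT64_MAX):
            print(i, key, value)
```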
Another issue is reported in our dataset hub:
https://huggingface.co/datasets/liwu/MNBVC/discussions/2
### Steps to reproduce the bug
from datasets import load_dataset
dataset = load_dataset("liwu/MNBVC", 'news_peoples_daily', split='train')
### Expected behavior
the dataset can be safely loaded
### Environment info
- `datasets` version: 2.4.0
- Platform: Linux-3.10.0-1160.an7.x86_64-x86_64-with-centos-7.9
- Python version: 3.6.8
- PyArrow version: 6.0.1
- Pandas version: 1.1.5 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6007/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6007/timeline | null | null | null | null | false | [
"This error means that one of the int32 (`Value(\"int32\")`) columns in the dataset has a value that is out of the valid (int32) range.\r\n\r\nI'll open a PR to print the name of a problematic column to make debugging such errors easier.",
"I am afraid int32 is not the reason for this error.\r\n\r\nI have submitt... |
https://api.github.com/repos/huggingface/datasets/issues/614 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/614/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/614/comments | https://api.github.com/repos/huggingface/datasets/issues/614/events | https://github.com/huggingface/datasets/pull/614 | 699,177,110 | MDExOlB1bGxSZXF1ZXN0NDg0OTQ2MzA1 | 614 | [doc] Update deploy.sh | [] | closed | false | null | 0 | 2020-09-11T11:06:13Z | 2020-09-14T08:49:19Z | 2020-09-14T08:49:17Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/614/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/614/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/614.diff",
"html_url": "https://github.com/huggingface/datasets/pull/614",
"merged_at": "2020-09-14T08:49:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/614.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/614"
} | true | [] | |
https://api.github.com/repos/huggingface/datasets/issues/3065 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3065/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3065/comments | https://api.github.com/repos/huggingface/datasets/issues/3065/events | https://github.com/huggingface/datasets/pull/3065 | 1,023,951,322 | PR_kwDODunzps4tFDjk | 3,065 | Fix test command after refac | [] | closed | false | null | 0 | 2021-10-12T15:23:30Z | 2021-10-12T15:28:47Z | 2021-10-12T15:28:46Z | null | Fix the `datasets-cli` test command after the `prepare_module` change in #2986 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3065/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3065/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3065.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3065",
"merged_at": "2021-10-12T15:28:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3065.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3065"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4751 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4751/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4751/comments | https://api.github.com/repos/huggingface/datasets/issues/4751/events | https://github.com/huggingface/datasets/pull/4751 | 1,319,440,903 | PR_kwDODunzps48LJ7U | 4,751 | Added dataset information in clinic oos dataset card | [] | closed | false | null | 1 | 2022-07-27T11:44:28Z | 2022-07-28T10:53:21Z | 2022-07-28T10:40:37Z | null | This PR aims to add relevant information like the Description, Language and citation information of the clinic oos dataset card. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4751/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4751/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4751.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4751",
"merged_at": "2022-07-28T10:40:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4751.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4751"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/4856 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4856/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4856/comments | https://api.github.com/repos/huggingface/datasets/issues/4856/events | https://github.com/huggingface/datasets/issues/4856 | 1,339,779,957 | I_kwDODunzps5P22t1 | 4,856 | file missing when load_dataset with openwebtext on windows | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2022-08-16T04:04:22Z | 2023-01-04T03:39:12Z | 2023-01-04T03:39:12Z | null | ## Describe the bug
0015896-b1054262f7da52a0518521e29c8e352c.txt is missing when I run run_mlm.py with openwebtext. I checked the cache_path and could not find 0015896-b1054262f7da52a0518521e29c8e352c.txt, but I can find this file inside 17ecf461bfccd469a1fbc264ccb03731f8606eea7b3e2e8b86e13d18040bf5b3/urlsf_subset00-16_data.xz with 7-Zip.
## Steps to reproduce the bug
```sh
python run_mlm.py --model_type roberta --tokenizer_name roberta-base --dataset_name openwebtext --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --do_train --do_eval --output_dir F:/model/roberta-base
```
or
```python
from datasets import load_dataset
load_dataset("openwebtext", None, cache_dir=None, use_auth_token=None)
```
## Expected results
Loading is successful
## Actual results
Traceback (most recent call last):
File "D:\Python\v3.8.5\lib\site-packages\datasets\builder.py", line 704, in download_and_prepare
self._download_and_prepare(
File "D:\Python\v3.8.5\lib\site-packages\datasets\builder.py", line 1227, in _download_and_prepare
super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
File "D:\Python\v3.8.5\lib\site-packages\datasets\builder.py", line 795, in _download_and_prepare
raise OSError(
OSError: Cannot find data file.
Original error:
[Errno 22] Invalid argument: 'F://huggingface/datasets/downloads/extracted/0901d27f43b7e9ac0577da0d0061c8c632ba0b70ecd1b4bfb21562d9b7486faa/0015896-b1054262f7da52a0518521e29c8e352c.txt'
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: windows
- Python version: 3.8.5
- PyArrow version: 9.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4856/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4856/timeline | null | completed | null | null | false | [
"I have tried to extract ```0015896-b1054262f7da52a0518521e29c8e352c.txt``` from ```17ecf461bfccd469a1fbc264ccb03731f8606eea7b3e2e8b86e13d18040bf5b3/urlsf_subset00-16_data.xz``` with 7-zip\r\nand put the file into cache_path ```F://huggingface/datasets/downloads/extracted/0901d27f43b7e9ac0577da0d0061c8c632ba0b70ecd... |
https://api.github.com/repos/huggingface/datasets/issues/2553 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2553/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2553/comments | https://api.github.com/repos/huggingface/datasets/issues/2553/events | https://github.com/huggingface/datasets/issues/2553 | 931,365,926 | MDU6SXNzdWU5MzEzNjU5MjY= | 2,553 | load_dataset("web_nlg") NonMatchingChecksumError | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2021-06-28T09:26:46Z | 2021-06-28T17:23:39Z | 2021-06-28T17:23:16Z | null | Hi! It seems the WebNLG dataset gives a NonMatchingChecksumError.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('web_nlg', name="release_v3.0_en", split="dev")
```
Gives
```
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://gitlab.com/shimorina/webnlg-dataset/-/archive/master/webnlg-dataset-master.zip']
```
## Environment info
- `datasets` version: 1.8.0
- Platform: macOS-11.3.1-x86_64-i386-64bit
- Python version: 3.9.4
- PyArrow version: 3.0.0
Also tested on Linux, with python 3.6.8 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2553/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2553/timeline | null | completed | null | null | false | [
"Hi ! Thanks for reporting. This is due to the WebNLG repository that got updated today.\r\nI just pushed a fix at #2558 - this shouldn't happen anymore in the future.",
"This is fixed on `master` now :)\r\nWe'll do a new release soon !"
] |
https://api.github.com/repos/huggingface/datasets/issues/1388 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1388/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1388/comments | https://api.github.com/repos/huggingface/datasets/issues/1388/events | https://github.com/huggingface/datasets/pull/1388 | 760,373,136 | MDExOlB1bGxSZXF1ZXN0NTM1MjE1Nzk2 | 1,388 | hind_encorp | [] | closed | false | null | 0 | 2020-12-09T14:22:59Z | 2020-12-09T14:46:51Z | 2020-12-09T14:46:37Z | null | resubmit of hind_encorp file changes | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1388/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1388/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1388.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1388",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1388.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1388"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5545 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5545/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5545/comments | https://api.github.com/repos/huggingface/datasets/issues/5545/events | https://github.com/huggingface/datasets/pull/5545 | 1,590,315,972 | PR_kwDODunzps5KRKct | 5,545 | Added return methods for URL-references to the pushed dataset | [] | open | false | null | 4 | 2023-02-18T11:26:25Z | 2023-02-21T14:17:28Z | null | null | Hi,
I was missing the ability to easily open the pushed dataset and it seemed like a quick fix.
Maybe we also want to log this info somewhere, but let me know if I need to add that too.
Cheers,
David | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5545/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5545/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5545.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5545",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5545.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5545"
} | true | [
"Hi ! Maybe we'd need to align with `transformers` and other libraries that implement `push_to_hub` to agree on what it should return.\r\n\r\ne.g. in `transformers` the typing says it returns a string, but in practice it returns a `CommitInfo`.\r\n\r\nTherefore I'd not add an output to `push_to_hub` here unless we ... |
https://api.github.com/repos/huggingface/datasets/issues/3617 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3617/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3617/comments | https://api.github.com/repos/huggingface/datasets/issues/3617/events | https://github.com/huggingface/datasets/pull/3617 | 1,111,938,691 | PR_kwDODunzps4xdb8K | 3,617 | PR for the CFPB Consumer Complaints dataset | [] | closed | false | null | 8 | 2022-01-23T17:47:12Z | 2022-02-07T21:08:31Z | 2022-02-07T21:08:31Z | null | Think I followed all the steps but please let me know if anything needs changing or any improvements I can make to the code quality | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3617/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3617/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3617.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3617",
"merged_at": "2022-02-07T21:08:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3617.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3617"
} | true | [
"> Nice ! Thanks for adding this dataset :)\n> \n> \n> \n> I left a few comments:\n\nThanks!\n\nI'd be interested in contributing to the core codebase - I had to go down the custom loading approach because I couldn't pull this dataset in using the load_dataset() method. Using either the json or csv files available ... |
https://api.github.com/repos/huggingface/datasets/issues/3762 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3762/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3762/comments | https://api.github.com/repos/huggingface/datasets/issues/3762/events | https://github.com/huggingface/datasets/issues/3762 | 1,144,849,557 | I_kwDODunzps5EPQSV | 3,762 | `Dataset.class_encode` should support custom class names | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 3 | 2022-02-19T21:21:45Z | 2022-02-21T12:16:35Z | 2022-02-21T12:16:35Z | null | I can make a PR, just wanted approval before starting.
**Is your feature request related to a problem? Please describe.**
It is often the case that classes are not ordered in alphabetical order. Current `class_encode_column` sort the classes before indexing.
https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_dataset.py#L1235
**Describe the solution you'd like**
I would like to add an **optional** parameter `class_names` to `class_encode_column` that would be used for the mapping instead of sorting the unique values.
**Describe alternatives you've considered**
One can use map instead. I find it harder to read.
```python
CLASS_NAMES = ['apple', 'orange', 'potato']
ds = ds.map(lambda item: {label_column: CLASS_NAMES.index(item[label_column])})
# Proposition
ds = ds.class_encode_column(label_column, CLASS_NAMES)
```
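As discussed in the comments, an even more flexible variant would accept a `ClassLabel` feature directly; a sketch of that hypothetical signature (not the current API):
```python
from datasets import ClassLabel

# Hypothetical: class_encode_column does not accept a ClassLabel today.
ds = ds.class_encode_column(label_column, ClassLabel(names=CLASS_NAMES))
```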
**Additional context**
I can make the PR if this feature is accepted.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3762/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3762/timeline | null | completed | null | null | false | [
"Hi @Dref360, thanks a lot for your proposal.\r\n\r\nIt totally makes sense to have more flexibility when class encoding, I agree.\r\n\r\nYou could even further customize the class encoding by passing an instance of `ClassLabel` itself (instead of replicating `ClassLabel` instantiation arguments as `Dataset.class_e... |
https://api.github.com/repos/huggingface/datasets/issues/4569 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4569/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4569/comments | https://api.github.com/repos/huggingface/datasets/issues/4569/events | https://github.com/huggingface/datasets/issues/4569 | 1,284,833,694 | I_kwDODunzps5MlQGe | 4,569 | Dataset Viewer issue for sst2 | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | null | 2 | 2022-06-26T07:32:54Z | 2022-06-27T06:37:48Z | 2022-06-27T06:37:48Z | null | ### Link
https://huggingface.co/datasets/sst2
### Description
Not sure what is causing this, however it seems that `load_dataset("sst2")` also hangs (even though it downloads the files without problem):
```
Status code: 400
Exception: Exception
Message: Give up after 5 attempts with ConnectionError
```
### Owner
No | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4569/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4569/timeline | null | completed | null | null | false | [
"Hi @lewtun, thanks for reporting.\r\n\r\nI have checked locally and refreshed the preview and it seems working smooth now:\r\n```python\r\nIn [8]: ds\r\nOut[8]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['idx', 'sentence', 'label'],\r\n num_rows: 67349\r\n })\r\n validation: Datas... |
https://api.github.com/repos/huggingface/datasets/issues/6004 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6004/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6004/comments | https://api.github.com/repos/huggingface/datasets/issues/6004/events | https://github.com/huggingface/datasets/pull/6004 | 1,786,636,368 | PR_kwDODunzps5UjN2h | 6,004 | Misc improvements | [] | closed | false | null | 4 | 2023-07-03T18:29:14Z | 2023-07-06T17:04:11Z | 2023-07-06T16:55:25Z | null | Contains the following improvements:
* fixes a "share dataset" link in README and modifies the "hosting" part in the disclaimer section
* updates `Makefile` to also run the style checks on `utils` and `setup.py`
* deletes a test for GH-hosted datasets (no longer supported)
* deletes `convert_dataset.sh` (outdated)
* aligns `utils/release.py` with `transformers` (the current version is outdated) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6004/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6004/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6004.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6004",
"merged_at": "2023-07-06T16:55:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6004.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6004"
} | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
https://api.github.com/repos/huggingface/datasets/issues/4404 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4404/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4404/comments | https://api.github.com/repos/huggingface/datasets/issues/4404/events | https://github.com/huggingface/datasets/issues/4404 | 1,248,572,899 | I_kwDODunzps5Ka7Xj | 4,404 | Dataset should have a `.name` field | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 2 | 2022-05-25T18:56:08Z | 2022-09-13T15:09:30Z | 2022-06-16T10:47:53Z | null | **Is your feature request related to a problem? Please describe.**
If building pipelines that can evaluate on more than one dataset, it would be nice to be able to log results of things like `Evaluating on {dataset.name}` or `results for {dataset.name} are: {results}`
Without some way of concisely identifying a dataset from the dataset object, tools which might run on more than one dataset must be passed the dataset object _and_ the name/id of the dataset being used.
**Describe the solution you'd like**
The DatasetInfo class should have a `name` field holding the name of the dataset. Then, for a given dataset that evolves over time, the `version` can be updated while the different versions remain versions of the same dataset under a unique `name`. The name could then be accessed via `dataset.name`.
**Describe alternatives you've considered**
For my own purposes I am considering making `NamedDataset[Dataset]` where the subclass just has a .name field.
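A minimal sketch of that wrapper (illustrative only, not part of the library):
```python
from dataclasses import dataclass
from datasets import Dataset

@dataclass
class NamedDataset:
    name: str
    dataset: Dataset

named = NamedDataset("my-eval-set", Dataset.from_dict({"text": ["a", "b"]}))
print(f"Evaluating on {named.name}")
```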
**Additional context**
My guess is that most use cases do not involve more than one dataset in a given pipeline, so a name is not really needed. This has surprised me, though, as one of the advantages of a standard dataset interface is being able to build pipelines that can be passed a dataset, separating the responsibility of dataset loading from the train or eval pipeline.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4404/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4404/timeline | null | completed | null | null | false | [
"Hi! You can already use `dset.builder_name` and `dset.config_name` for that purpose. And when it comes to versioning, it's better to use `dset._fingerprint` than the `version` attribute as the former represents a deterministic hash that encodes all the mutable ops executed on a dataset, and the latter stays the sa... |
https://api.github.com/repos/huggingface/datasets/issues/2912 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2912/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2912/comments | https://api.github.com/repos/huggingface/datasets/issues/2912/events | https://github.com/huggingface/datasets/pull/2912 | 996,256,005 | PR_kwDODunzps4rvhgp | 2,912 | Update link to Blog in docs footer | [] | closed | false | null | 0 | 2021-09-14T17:23:14Z | 2021-09-15T07:59:23Z | 2021-09-15T07:59:23Z | null | Update link. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2912/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2912/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2912.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2912",
"merged_at": "2021-09-15T07:59:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2912.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2912"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4219 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4219/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4219/comments | https://api.github.com/repos/huggingface/datasets/issues/4219/events | https://github.com/huggingface/datasets/pull/4219 | 1,214,934,025 | PR_kwDODunzps42v6rE | 4,219 | Add F1 Metric Card | [] | closed | false | null | 1 | 2022-04-25T19:14:56Z | 2022-04-26T20:44:18Z | 2022-04-26T20:37:46Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4219/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4219/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4219.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4219",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4219.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4219"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2118 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2118/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2118/comments | https://api.github.com/repos/huggingface/datasets/issues/2118/events | https://github.com/huggingface/datasets/pull/2118 | 841,563,329 | MDExOlB1bGxSZXF1ZXN0NjAxMjgzMDUx | 2,118 | Remove os.environ.copy in Dataset.map | [] | closed | false | null | 1 | 2021-03-26T03:48:17Z | 2021-03-26T12:03:23Z | 2021-03-26T12:00:05Z | null | Replace `os.environ.copy` with in-place modification
Fixes #2115 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2118/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2118/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2118.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2118",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2118.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2118"
} | true | [
"I thought deepcopy on `os.environ` is unsafe (see [this](https://stackoverflow.com/questions/13142972/using-copy-deepcopy-on-os-environ-in-python-appears-broken)), but I can't replicate the behavior described in the linked SO thread.\r\n\r\nClosing this one because #2119 has a much cleaner approach."
] |
https://api.github.com/repos/huggingface/datasets/issues/4207 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4207/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4207/comments | https://api.github.com/repos/huggingface/datasets/issues/4207/events | https://github.com/huggingface/datasets/pull/4207 | 1,213,604,615 | PR_kwDODunzps42rmbK | 4,207 | [Minor edit] Fix typo in class name | [] | closed | false | null | 0 | 2022-04-24T09:49:37Z | 2022-05-05T13:17:47Z | 2022-05-05T13:17:47Z | null | Typo: `datasets.DatsetDict` -> `datasets.DatasetDict` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4207/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4207/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4207.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4207",
"merged_at": "2022-05-05T13:17:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4207.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4207"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5750 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5750/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5750/comments | https://api.github.com/repos/huggingface/datasets/issues/5750/events | https://github.com/huggingface/datasets/issues/5750 | 1,668,289,067 | I_kwDODunzps5jcBIr | 5,750 | Fail to create datasets from a generator when using Google Big Query | [] | closed | false | null | 4 | 2023-04-14T13:50:59Z | 2023-04-17T12:20:43Z | 2023-04-17T12:20:43Z | null | ### Describe the bug
Creating a dataset from a generator using `Dataset.from_generator()` fails if the generator is the [Google Big Query Python client](https://cloud.google.com/python/docs/reference/bigquery/latest). The problem is that the Big Query client is not picklable, and the function `create_config_id` tries to get a hash of the generator by pickling it. So the following error is generated:
```
_pickle.PicklingError: Pickling client objects is explicitly not supported.
Clients have non-trivial state that is local and unpickleable.
```
### Steps to reproduce the bug
1. Install the big query client and datasets `pip install google-cloud-bigquery datasets`
2. Run the following code:
```py
from datasets import Dataset
from google.cloud import bigquery
client = bigquery.Client()
# Perform a query.
QUERY = (
'SELECT name FROM `bigquery-public-data.usa_names.usa_1910_2013` '
'WHERE state = "TX" '
'LIMIT 100')
query_job = client.query(QUERY) # API request
rows = query_job.result() # Waits for query to finish
ds = Dataset.from_generator(rows)
for r in ds:
print(r)
```
### Expected behavior
Two options:
1. Ignore the pickle errors when computing the hash
2. Provide an escape hatch so that we can avoid calculating the hash for the generator, for example by allowing the user to provide a hash.
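For reference, a workaround that sidesteps the hashing problem is to wrap the query in a generator *function*, so the unpicklable client object is created inside the function rather than hashed itself (a sketch):
```python
from datasets import Dataset
from google.cloud import bigquery

def gen():
    client = bigquery.Client()  # created inside the function, never pickled directly
    query_job = client.query(
        'SELECT name FROM `bigquery-public-data.usa_names.usa_1910_2013` '
        'WHERE state = "TX" LIMIT 100'
    )
    for row in query_job.result():
        yield dict(row)

ds = Dataset.from_generator(gen)
```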
### Environment info
python 3.9
google-cloud-bigquery 3.9.0
datasets 2.11.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5750/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5750/timeline | null | completed | null | null | false | [
"`from_generator` expects a generator function, not a generator object, so this should work:\r\n```python\r\nfrom datasets import Dataset\r\nfrom google.cloud import bigquery\r\n\r\nclient = bigquery.Client()\r\n\r\ndef gen()\r\n # Perform a query.\r\n QUERY = (\r\n 'SELECT name FROM `bigquery-public-d... |
https://api.github.com/repos/huggingface/datasets/issues/4610 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4610/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4610/comments | https://api.github.com/repos/huggingface/datasets/issues/4610/events | https://github.com/huggingface/datasets/issues/4610 | 1,290,603,827 | I_kwDODunzps5M7Q0z | 4,610 | codeparrot/github-code failing to load | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 8 | 2022-06-30T20:24:48Z | 2022-07-05T14:24:13Z | 2022-07-05T09:19:56Z | null | ## Describe the bug
codeparrot/github-code fails to load with a `TypeError: get_patterns_in_dataset_repository() missing 1 required positional argument: 'base_path'`
## Steps to reproduce the bug
```python
from datasets import load_dataset

dataset = load_dataset("codeparrot/github-code")
```
## Expected results
loaded dataset object
## Actual results
```python
[3]: dataset = load_dataset("codeparrot/github-code")
No config specified, defaulting to: github-code/all-all
Downloading and preparing dataset github-code/all-all to /home/bebr/.cache/huggingface/datasets/codeparrot___github-code/all-all/0.0.0/a55513bc0f81db773f9896c7aac225af0cff5b323bb9d2f68124f0a8cc3fb817...
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Input In [3], in <cell line: 1>()
----> 1 dataset = load_dataset("codeparrot/github-code")
File ~/miniconda3/envs/fastapi-kube/lib/python3.10/site-packages/datasets/load.py:1679, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1676 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES
1678 # Download and prepare data
-> 1679 builder_instance.download_and_prepare(
1680 download_config=download_config,
1681 download_mode=download_mode,
1682 ignore_verifications=ignore_verifications,
1683 try_from_hf_gcs=try_from_hf_gcs,
1684 use_auth_token=use_auth_token,
1685 )
1687 # Build dataset for splits
1688 keep_in_memory = (
1689 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
1690 )
File ~/miniconda3/envs/fastapi-kube/lib/python3.10/site-packages/datasets/builder.py:704, in DatasetBuilder.download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
702 logger.warning("HF google storage unreachable. Downloading and preparing it from source")
703 if not downloaded_from_gcs:
--> 704 self._download_and_prepare(
705 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
706 )
707 # Sync info
708 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~/miniconda3/envs/fastapi-kube/lib/python3.10/site-packages/datasets/builder.py:1221, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verify_infos)
1220 def _download_and_prepare(self, dl_manager, verify_infos):
-> 1221 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
File ~/miniconda3/envs/fastapi-kube/lib/python3.10/site-packages/datasets/builder.py:771, in DatasetBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
769 split_dict = SplitDict(dataset_name=self.name)
770 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 771 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
773 # Checksums verification
774 if verify_infos and dl_manager.record_checksums:
File ~/.cache/huggingface/modules/datasets_modules/datasets/codeparrot--github-code/a55513bc0f81db773f9896c7aac225af0cff5b323bb9d2f68124f0a8cc3fb817/github-code.py:169, in GithubCode._split_generators(self, dl_manager)
162 def _split_generators(self, dl_manager):
164 hfh_dataset_info = HfApi(datasets.config.HF_ENDPOINT).dataset_info(
165 _REPO_NAME,
166 timeout=100.0,
167 )
--> 169 patterns = datasets.data_files.get_patterns_in_dataset_repository(hfh_dataset_info)
170 data_files = datasets.data_files.DataFilesDict.from_hf_repo(
171 patterns,
172 dataset_info=hfh_dataset_info,
173 )
175 files = dl_manager.download_and_extract(data_files["train"])
TypeError: get_patterns_in_dataset_repository() missing 1 required positional argument: 'base_path'
```
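Per the updated function definition cited in the comments, the dataset script's call needs the new `base_path` argument; a one-line sketch of the fix in `github-code.py` (the value to pass is an assumption; it depends on the repo layout):
```python
# Sketch: base_path="" assumes the data files live at the root of the dataset repo.
patterns = datasets.data_files.get_patterns_in_dataset_repository(hfh_dataset_info, base_path="")
```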
## Environment info
- `datasets` version: 2.3.2
- Platform: Linux-5.18.7-arch1-1-x86_64-with-glibc2.35
- Python version: 3.10.5
- PyArrow version: 8.0.0
- Pandas version: 1.4.2 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4610/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4610/timeline | null | completed | null | null | false | [
"I believe the issue is in `codeparrot/github-code`. `base_path` param is missing - https://huggingface.co/datasets/codeparrot/github-code/blob/main/github-code.py#L169\r\n\r\nFunction definition has changed.\r\nhttps://github.com/huggingface/datasets/blob/0e1c629cfb9f9ba124537ba294a0ec451584da5f/src/datasets/data_... |
https://api.github.com/repos/huggingface/datasets/issues/683 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/683/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/683/comments | https://api.github.com/repos/huggingface/datasets/issues/683/events | https://github.com/huggingface/datasets/pull/683 | 710,942,704 | MDExOlB1bGxSZXF1ZXN0NDk0NzAwNzY1 | 683 | Fix wrong delimiter in text dataset | [] | closed | false | null | 0 | 2020-09-29T09:43:24Z | 2021-05-05T18:24:31Z | 2020-09-29T09:44:06Z | null | The delimiter is set to the bell character, as it is usually used nowhere in text files.
However, in the text dataset the delimiter was set to `\b`, which is backspace in Python, while the bell character is `\a`.
I replaced `\b` with `\a`.
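For reference, a quick check of the two escape codes (plain Python, nothing dataset-specific):
```python
# "\a" is BEL (the intended delimiter); "\b" is BS (what the code mistakenly used).
assert "\a" == chr(7)   # bell
assert "\b" == chr(8)   # backspace
```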
Hopefully it fixes issues mentioned by some users in #622 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/683/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/683/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/683.diff",
"html_url": "https://github.com/huggingface/datasets/pull/683",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/683.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/683"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2037 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2037/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2037/comments | https://api.github.com/repos/huggingface/datasets/issues/2037/events | https://github.com/huggingface/datasets/pull/2037 | 829,919,685 | MDExOlB1bGxSZXF1ZXN0NTkxNTA4MTQz | 2,037 | Fix: Wikipedia - save memory by replacing root.clear with elem.clear | [] | closed | false | null | 1 | 2021-03-12T09:22:00Z | 2021-03-23T06:08:16Z | 2021-03-16T11:01:22Z | null | see: https://github.com/huggingface/datasets/issues/2031
What I did:
- replace root.clear with elem.clear (see the sketch after this list)
- remove lines to get root element
- $ make style
- $ make test
- some tests required extra pip packages, so I installed them.
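For context, a minimal sketch of the memory-friendly pattern this PR moves to (assumptions: plain `xml.etree.ElementTree`, Python 3.8+ for the `{*}` namespace wildcard; this is not the actual `wikipedia.py` code):
```python
# Streaming a large XML dump: clear each finished element instead of
# keeping a root reference and calling root.clear() after every page.
import xml.etree.ElementTree as ET

def iter_page_titles(xml_path):
    for _event, elem in ET.iterparse(xml_path, events=("end",)):
        if elem.tag.endswith("page"):
            yield elem.findtext("{*}title")
            elem.clear()  # frees this <page>'s children, keeping memory bounded
```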
Test results on origin/master and on my branch are the same. I think this failure is not related to my modification, is it?
```
==================================================================================== short test summary info ====================================================================================
FAILED tests/test_arrow_writer.py::TypedSequenceTest::test_catch_overflow - AssertionError: OverflowError not raised
============================================================= 1 failed, 2332 passed, 5138 skipped, 70 warnings in 91.75s (0:01:31) ==============================================================
make: *** [Makefile:19: test] Error 1
```
Is there anything else I should do? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2037/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2037/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2037.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2037",
"merged_at": "2021-03-16T11:01:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2037.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2037"
} | true | [
"The error you got is minor and appeared in the last version of pyarrow, we'll fix the CI to take this into account. You can ignore it"
] |
https://api.github.com/repos/huggingface/datasets/issues/25 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/25/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/25/comments | https://api.github.com/repos/huggingface/datasets/issues/25/events | https://github.com/huggingface/datasets/pull/25 | 609,708,863 | MDExOlB1bGxSZXF1ZXN0NDExMjQ4Nzg2 | 25 | Add script csv datasets | [] | closed | false | null | 3 | 2020-04-30T08:28:08Z | 2022-10-04T09:32:13Z | 2020-05-07T21:14:49Z | null | This is a PR that allows creating datasets from local CSV files. Example usage:
```python
import nlp
ds = nlp.load(
path="csv",
name="bbc",
dataset_files={
nlp.Split.TRAIN: ["datasets/dummy_data/csv/train.csv"],
nlp.Split.TEST: ["datasets/dummy_data/csv/test.csv"]
},
csv_kwargs={
"skip_rows": 0,
"delimiter": ",",
"quote_char": "\"",
"header_as_column_names": True
}
)
```
```
Downloading and preparing dataset bbc/1.0.0 (download: Unknown size, generated: Unknown size, total: Unknown size) to /home/jplu/.cache/huggingface/datasets/bbc/1.0.0...
Dataset bbc downloaded and prepared to /home/jplu/.cache/huggingface/datasets/bbc/1.0.0. Subsequent calls will reuse this data.
{'test': Dataset(schema: {'category': 'string', 'text': 'string'}, num_rows: 49), 'train': Dataset(schema: {'category': 'string', 'text': 'string'}, num_rows: 99), 'validation': Dataset(schema: {'category': 'string', 'text': 'string'}, num_rows: 0)}
```
How it is read:
- `path`: the `csv` word means "I want to create a CSV dataset"
- `name`: the name of this dataset is `bbc`
- `dataset_files`: this is a dictionary mapping each split to its list of files.
- `csv_kwargs`: these are the keyword arguments that specify how to read the CSV files (they map roughly onto pyarrow's CSV options, as sketched below)
 * `skip_rows`: number of rows to skip, starting from the beginning of the file
 * `delimiter`: the delimiter used to separate the columns
 * `quote_char`: the quote character used to wrap column values in which the delimiter appears
 * `header_as_column_names`: uses the first row (header) of the file as the names of the features. Otherwise the names are automatically generated as `f1`, `f2`, etc. Applied after the `skip_rows` parameter.
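For context, a rough sketch of how these `csv_kwargs` could map onto pyarrow's CSV reader (a hedged illustration under assumptions, not necessarily this PR's actual implementation):
```python
# Sketch: reading one of the files above directly with pyarrow's CSV reader.
import pyarrow.csv as pac

table = pac.read_csv(
    "datasets/dummy_data/csv/train.csv",
    # skip_rows / header handling ~ ReadOptions
    read_options=pac.ReadOptions(skip_rows=0, autogenerate_column_names=False),
    # delimiter / quote_char ~ ParseOptions
    parse_options=pac.ParseOptions(delimiter=",", quote_char='"'),
)
print(table.schema)
```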
**TODO**: for now, the `csv.py` script is copied as `ds_name.py` each time we create a new dataset; this behavior will be changed so that `csv.py` is copied only once rather than once per CSV dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/25/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/25/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/25.diff",
"html_url": "https://github.com/huggingface/datasets/pull/25",
"merged_at": "2020-05-07T21:14:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/25.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/25"
} | true | [
"Very interesting thoughts, we should think deeper about all what you raised indeed.",
"Ok here is a proposal for a more general API and workflow.\r\n\r\n# New `ArrowBasedBuilder`\r\n\r\nFor all the formats that can be directly and efficiently loaded by Arrow (CSV, JSON, Parquet, Arrow), we don't really want to h... |
https://api.github.com/repos/huggingface/datasets/issues/909 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/909/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/909/comments | https://api.github.com/repos/huggingface/datasets/issues/909/events | https://github.com/huggingface/datasets/pull/909 | 752,508,299 | MDExOlB1bGxSZXF1ZXN0NTI4ODE1NDYz | 909 | Add FiNER dataset | [] | closed | false | null | 9 | 2020-11-27T23:54:20Z | 2020-12-07T16:56:23Z | 2020-12-07T16:56:23Z | null | Hi,
this PR adds "A Finnish News Corpus for Named Entity Recognition" as the new `finer` dataset.
The dataset is described in [this paper](https://arxiv.org/abs/1908.04212). The data is publicly available in [this GitHub repository](https://github.com/mpsilfve/finer-data).
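A hedged usage sketch once this PR is merged (the split name follows the notice below):
```python
from datasets import load_dataset

# "test_wikipedia" is the additional Wikipedia test split described below.
wiki_test = load_dataset("finer", split="test_wikipedia")
```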
Notice: they provide two test sets. The additional test set, taken from Wikipedia, is exposed as the "test_wikipedia" split. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/909/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/909/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/909.diff",
"html_url": "https://github.com/huggingface/datasets/pull/909",
"merged_at": "2020-12-07T16:56:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/909.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/909"
} | true | [
"> That's really cool thank you !\r\n> \r\n> Could you also add a dataset card ?\r\n> You can find a template here : https://github.com/huggingface/datasets/blob/master/templates/README.md\r\n\r\nThe full information for adding a dataset card can be found here :) \r\nhttps://github.com/huggingface/datasets/blob/mas... |
https://api.github.com/repos/huggingface/datasets/issues/1284 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1284/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1284/comments | https://api.github.com/repos/huggingface/datasets/issues/1284/events | https://github.com/huggingface/datasets/pull/1284 | 759,269,920 | MDExOlB1bGxSZXF1ZXN0NTM0MzAzNDk0 | 1,284 | Update coqa dataset url | [] | closed | false | null | 0 | 2020-12-08T09:16:38Z | 2020-12-08T18:19:09Z | 2020-12-08T18:19:09Z | null | `datasets.stanford.edu` is invalid. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1284/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1284/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1284.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1284",
"merged_at": "2020-12-08T18:19:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1284.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1284"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5921 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5921/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5921/comments | https://api.github.com/repos/huggingface/datasets/issues/5921/events | https://github.com/huggingface/datasets/pull/5921 | 1,736,563,023 | PR_kwDODunzps5R6j-y | 5,921 | Fix streaming parquet with image feature in schema | [] | closed | false | null | 4 | 2023-06-01T15:23:10Z | 2023-06-02T10:02:54Z | 2023-06-02T09:53:11Z | null | It was not reading the feature type from the parquet arrow schema | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5921/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5921/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5921.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5921",
"merged_at": "2023-06-02T09:53:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5921.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5921"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/4159 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4159/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4159/comments | https://api.github.com/repos/huggingface/datasets/issues/4159/events | https://github.com/huggingface/datasets/pull/4159 | 1,202,522,153 | PR_kwDODunzps42Izmd | 4,159 | Add `TruthfulQA` dataset | [] | closed | false | null | 2 | 2022-04-12T23:19:04Z | 2022-06-08T15:51:33Z | 2022-06-08T14:43:34Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4159/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4159/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4159.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4159",
"merged_at": "2022-06-08T14:43:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4159.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4159"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Bump. (I'm not sure which reviewer to `@` but, previously, @lhoestq has been very helpful 🤗 )"
] |
https://api.github.com/repos/huggingface/datasets/issues/3520 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3520/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3520/comments | https://api.github.com/repos/huggingface/datasets/issues/3520/events | https://github.com/huggingface/datasets/pull/3520 | 1,093,747,753 | PR_kwDODunzps4wh6oD | 3,520 | Audio datacard update - first pass | [] | closed | false | null | 2 | 2022-01-04T20:58:25Z | 2022-01-05T12:30:21Z | 2022-01-05T12:30:20Z | null | Filling out data card "Personal and Sensitive Information" for speech datasets to note PII concerns | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3520/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3520/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3520.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3520",
"merged_at": "2022-01-05T12:30:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3520.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3520"
} | true | [
"I'm not sure that we want to change the tags at the top of the cards by hand. Those are used to create the tags in the hub. Although looking at all the tags now, we might want to normalize the current tags again (hyphens or no, \".0\" or no). Maybe we could add a binary tag for public domain or not?",
"> \r\n\r\... |
https://api.github.com/repos/huggingface/datasets/issues/5876 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5876/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5876/comments | https://api.github.com/repos/huggingface/datasets/issues/5876/events | https://github.com/huggingface/datasets/issues/5876 | 1,717,978,985 | I_kwDODunzps5mZkdp | 5,876 | Incompatibility with DataLab | [
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] | closed | false | null | 2 | 2023-05-20T01:39:11Z | 2023-05-25T06:42:34Z | 2023-05-25T06:42:34Z | null | ### Describe the bug
Hello,
I am currently working on a project where both [DataLab](https://github.com/ExpressAI/DataLab) and [datasets](https://github.com/huggingface/datasets) are subdependencies.
I noticed that I cannot import both libraries, as they both register FileSystems in `fsspec`, expecting the FileSystems not being registered before.
When running the code below, I get the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\__init__.py", line 28, in <module>
from datalabs.arrow_dataset import concatenate_datasets, Dataset
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\arrow_dataset.py", line 60, in <module>
from datalabs.arrow_writer import ArrowWriter, OptimizedTypedSequence
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\arrow_writer.py", line 28, in <module>
from datalabs.features import (
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\features\__init__.py", line 2, in <module>
from datalabs.features.audio import Audio
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\features\audio.py", line 21, in <module>
from datalabs.utils.streaming_download_manager import xopen
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\utils\streaming_download_manager.py", line 16, in <module>
from datalabs.filesystems import COMPRESSION_FILESYSTEMS
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\filesystems\__init__.py", line 37, in <module>
fsspec.register_implementation(fs_class.protocol, fs_class)
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\fsspec\registry.py", line 51, in register_implementation
raise ValueError(
ValueError: Name (bz2) already in the registry and clobber is False
```
I think a simple solution would be to just set `clobber=True` in https://github.com/huggingface/datasets/blob/main/src/datasets/filesystems/__init__.py#L28. This lets the registry discard previous registrations. This should work, as the DataLab filesystems are copies of the `datasets` filesystems. However, I don't know whether this is guaranteed to be compatible with other libraries that might use the same protocols.
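A sketch of that proposed fix (assuming, as above, that both libraries register identical filesystem classes, so overwriting is safe):
```python
# clobber=True replaces an existing registration for the same protocol
# instead of raising "Name ... already in the registry".
import fsspec
from datasets.filesystems import COMPRESSION_FILESYSTEMS

for fs_class in COMPRESSION_FILESYSTEMS:
    fsspec.register_implementation(fs_class.protocol, fs_class, clobber=True)
```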
I am linking the symmetric issue on [DataLab](https://github.com/ExpressAI/DataLab/issues/425), as ideally the issue is solved the same way in both libraries. Otherwise, it could lead to different behaviors depending on which library is imported first.
### Steps to reproduce the bug
1. Run `pip install datalabs==0.4.15 datasets==2.12.0`
2. Run the following python code:
```
import datalabs
import datasets
```
### Expected behavior
It should be possible to import both libraries without getting a Value Error
### Environment info
datalabs==0.4.15
datasets==2.12.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5876/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5876/timeline | null | completed | null | null | false | [
"Indeed, `clobber=True` (with a warning if the existing protocol will be overwritten) should fix the issue, but maybe a better solution is to register our compression filesystem before the script is executed and unregister them afterward. WDYT @lhoestq @albertvillanova?",
"I think we should use clobber and show a... |
https://api.github.com/repos/huggingface/datasets/issues/2997 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2997/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2997/comments | https://api.github.com/repos/huggingface/datasets/issues/2997/events | https://github.com/huggingface/datasets/issues/2997 | 1,013,270,069 | I_kwDODunzps48ZUY1 | 2,997 | Dataset has incorrect labels | [] | closed | false | null | 3 | 2021-10-01T12:09:06Z | 2021-10-01T15:32:00Z | 2021-10-01T13:54:34Z | null | The dataset https://huggingface.co/datasets/turkish_product_reviews has incorrect labels - all reviews are labelled with "1" (positive sentiment). None of the reviews is labelled with "0". See screenshot attached:

| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2997/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2997/timeline | null | completed | null | null | false | [
"Hi @marshmellow77, thanks for reporting.\r\n\r\nThat issue is fixed since `datasets` version 1.9.0 (see 16bc665f2753677c765011ef79c84e55486d4347).\r\n\r\nPlease, update `datasets` with: `pip install -U datasets`",
"Thanks. Please note that the dataset explorer (https://huggingface.co/datasets/viewer/?dataset=tur... |
https://api.github.com/repos/huggingface/datasets/issues/3630 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3630/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3630/comments | https://api.github.com/repos/huggingface/datasets/issues/3630/events | https://github.com/huggingface/datasets/issues/3630 | 1,114,578,625 | I_kwDODunzps5Cbx7B | 3,630 | DuplicatedKeysError of NewsQA dataset | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | 1 | 2022-01-26T03:05:49Z | 2022-02-14T08:37:19Z | 2022-02-14T08:37:19Z | null | After processing the dataset following the official [NewsQA](https://github.com/Maluuba/newsqa) instructions, I used `datasets` to load it:
```
a = load_dataset('newsqa', data_dir='news')
```
and the following error occurred:
```
Using custom data configuration default-data_dir=news
Downloading and preparing dataset newsqa/default to /root/.cache/huggingface/datasets/newsqa/default-data_dir=news/1.0.0/b0b23e22d94a3d352ad9d75aff2b71375264a122fae301463079ee8595e05ab9...
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/datasets/builder.py", line 1084, in _prepare_split
writer.write(example, key)
File "/usr/local/lib/python3.8/dist-packages/datasets/arrow_writer.py", line 442, in write
self.check_duplicate_keys()
File "/usr/local/lib/python3.8/dist-packages/datasets/arrow_writer.py", line 453, in check_duplicate_keys
raise DuplicatedKeysError(key)
datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: ./cnn/stories/6a0f9c8a5d0c6e8949b37924163c92923fe5770d.story
Keys should be unique and deterministic in nature
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.8/dist-packages/datasets/load.py", line 1694, in load_dataset
builder_instance.download_and_prepare(
File "/usr/local/lib/python3.8/dist-packages/datasets/builder.py", line 595, in download_and_prepare
self._download_and_prepare(
File "/usr/local/lib/python3.8/dist-packages/datasets/builder.py", line 684, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/usr/local/lib/python3.8/dist-packages/datasets/builder.py", line 1086, in _prepare_split
num_examples, num_bytes = writer.finalize()
File "/usr/local/lib/python3.8/dist-packages/datasets/arrow_writer.py", line 524, in finalize
self.check_duplicate_keys()
File "/usr/local/lib/python3.8/dist-packages/datasets/arrow_writer.py", line 453, in check_duplicate_keys
raise DuplicatedKeysError(key)
datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: ./cnn/stories/6a0f9c8a5d0c6e8949b37924163c92923fe5770d.story
Keys should be unique and deterministic in nature
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3630/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3630/timeline | null | completed | null | null | false | [
"Thanks for reporting, @StevenTang1998.\r\n\r\nI'm fixing it. "
] |
https://api.github.com/repos/huggingface/datasets/issues/3057 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3057/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3057/comments | https://api.github.com/repos/huggingface/datasets/issues/3057/events | https://github.com/huggingface/datasets/issues/3057 | 1,022,508,315 | I_kwDODunzps488j0b | 3,057 | Error in per class precision computation | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-10-11T10:05:19Z | 2021-10-11T10:17:44Z | 2021-10-11T10:16:16Z | null | ## Describe the bug
When trying to get the per-class precision values by providing `average=None`, the following error is thrown: `ValueError: can only convert an array of size 1 to a Python scalar`
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
precision_metric = load_metric("precision")
predictions = [0, 2, 1, 0, 0, 1]
references = [0, 1, 2, 0, 1, 2]
results = precision_metric.compute(predictions=predictions, references=references, average=None)
```
## Expected results
` {'precision': array([0.66666667, 0. , 0. ])}`
as per https://github.com/huggingface/datasets/blob/master/metrics/precision/precision.py
## Actual results
```
output = self._compute(predictions=predictions, references=references, **kwargs)
File "~/.cache/huggingface/modules/datasets_modules/metrics/precision/94709a71c6fe37171ef49d3466fec24dee9a79846c9f176dff66a649e9811690/precision.py", line 110, in _compute
sample_weight=sample_weight,
ValueError: can only convert an array of size 1 to a Python scalar
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.1
- Platform: linux
- Python version: 3.6.9
- PyArrow version: 5.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3057/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3057/timeline | null | completed | null | null | false | [
"Hi @tidhamecha2, thanks for reporting.\r\n\r\nIndeed, we fixed this issue just one week ago: #3008\r\n\r\nThe fix will be included in our next version release.\r\n\r\nIn the meantime, you can incorporate the fix by installing `datasets` from the master branch:\r\n```\r\npip install -U git+ssh://git@github.com/hugg... |
https://api.github.com/repos/huggingface/datasets/issues/6006 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6006/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6006/comments | https://api.github.com/repos/huggingface/datasets/issues/6006/events | https://github.com/huggingface/datasets/issues/6006 | 1,788,855,582 | I_kwDODunzps5qn8Ue | 6,006 | NotADirectoryError when loading gigawords | [] | closed | false | null | 1 | 2023-07-05T06:23:41Z | 2023-07-05T06:31:02Z | 2023-07-05T06:31:01Z | null | ### Describe the bug
Got a `NotADirectoryError` when loading the gigaword dataset.
### Steps to reproduce the bug
When running
```
import datasets
datasets.load_dataset('gigaword')
```
Got the following exception:
```bash
Traceback (most recent call last):
  File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1629, in _prepare_split_single
    for key, record in generator:
  File "/home/x/.cache/huggingface/modules/datasets_modules/datasets/gigaword/ea83a8b819190acac5f2dae011fad51dccf269a0604ec5dd24795b64efb424b6/gigaword.py", line 115, in _generate_examples
    with open(src_path, encoding="utf-8") as f_d, open(tgt_path, encoding="utf-8") as f_s:
  File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/streaming.py", line 71, in wrapper
    return function(*args, use_auth_token=use_auth_token, **kwargs)
  File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/download/streaming_download_manager.py", line 493, in xopen
    return open(main_hop, mode, *args, **kwargs)
NotADirectoryError: [Errno 20] Not a directory: '/home/x/.cache/huggingface/datasets/downloads/6da52431bb5124d90cf51a0187d2dbee9046e89780c4be7599794a4f559048ec/org_data/train.src.txt'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "gigaword.py", line 38, in <module>
main()
File "gigaword.py", line 35, in main
train, dev, test = dataset.generate_k_shot_data(k=32, seed=seed, path="../data/")
File "/home/x/MICL/preprocess/fewshot_gym_dataset.py", line 199, in generate_k_shot_data
dataset = self.load_dataset()
File "gigaword.py", line 29, in load_dataset
return datasets.load_dataset('gigaword')
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/load.py", line 1809, in load_dataset
builder_instance.download_and_prepare(
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 909, in download_and_prepare
self._download_and_prepare(
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1670, in _download_and_prepare
super()._download_and_prepare(
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1004, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1508, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1665, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
```
### Expected behavior
Download and process the dataset successfully
### Environment info
- `datasets` version: 2.13.1
- Platform: Linux-5.0.0-1032-azure-x86_64-with-glibc2.10
- Python version: 3.8.0
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6006/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6006/timeline | null | completed | null | null | false | [
"issue due to corrupted download files. resolved after cleaning download cache. sorry for any inconvinence."
] |
https://api.github.com/repos/huggingface/datasets/issues/3341 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3341/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3341/comments | https://api.github.com/repos/huggingface/datasets/issues/3341/events | https://github.com/huggingface/datasets/issues/3341 | 1,067,449,569 | I_kwDODunzps4_n_zh | 3,341 | Mirror the canonical datasets to the Hugging Face Hub | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 2 | 2021-11-30T16:42:05Z | 2022-01-26T14:47:37Z | 2022-01-26T14:47:37Z | null | - [ ] create a repo on https://hf.co/datasets for every canonical dataset
- [ ] on every commit related to a dataset, update the hf.co repo
See https://github.com/huggingface/moon-landing/pull/1562
@SBrandeis: I let you edit this description if needed to clarify the intent. | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3341/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3341/timeline | null | completed | null | null | false | [
"I created a GitHub project to keep track of what needs to be done:\r\nhttps://github.com/huggingface/datasets/projects/3\r\n\r\nI also store my code in a (private for now) repository at https://github.com/huggingface/mirror_canonical_datasets_on_hub",
"I understand that the datasets are mirrored on the Hub now, ... |
https://api.github.com/repos/huggingface/datasets/issues/3828 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3828/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3828/comments | https://api.github.com/repos/huggingface/datasets/issues/3828/events | https://github.com/huggingface/datasets/issues/3828 | 1,160,064,029 | I_kwDODunzps5FJSwd | 3,828 | The Pile's _FEATURE spec seems to be incorrect | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2022-03-04T21:25:32Z | 2022-03-08T09:30:49Z | 2022-03-08T09:30:48Z | null | ## Describe the bug
If you look at https://huggingface.co/datasets/the_pile/blob/main/the_pile.py:
For "all"
* the pile_set_name is never set for data
* there's actually an id field inside of "meta"
For subcorpora pubmed_central and hacker_news:
* the meta is specified to be a string, but it's actually a dict with an id field inside.
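For illustration, a sketch of a feature spec that would match what the data apparently contains (an assumption inferred from this issue, not the actual `the_pile.py`):
```python
import datasets

features = datasets.Features(
    {
        "text": datasets.Value("string"),
        "meta": {"id": datasets.Value("string")},  # a dict with an id field, not a plain string
    }
)
```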
## Steps to reproduce the bug
## Expected results
The feature spec should match the data, I'd think?
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform:
- Python version:
- PyArrow version:
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3828/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3828/timeline | null | completed | null | null | false | [
"Hi @dlwh, thanks for reporting.\r\n\r\nPlease note, that the source data files for \"all\" config are different from the other configurations.\r\n\r\nThe \"all\" config contains the official Pile data files, from https://mystic.the-eye.eu/public/AI/pile/\r\nAll data examples contain a \"meta\" dict with a single \... |