| url (string) | repository_url (string) | labels_url (string) | comments_url (string) | events_url (string) | html_url (string) | id (int64) | node_id (string) | number (int64) | title (string) | user (dict) | labels (list) | state (string) | locked (bool) | assignee (dict) | assignees (list) | milestone (dict) | comments (list) | created_at (timestamp) | updated_at (timestamp) | closed_at (timestamp) | author_association (string) | active_lock_reason (float64) | draft (float64) | pull_request (dict) | body (string) | reactions (dict) | timeline_url (string) | performed_via_github_app (float64) | state_reason (string) | is_pull_request (bool) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/3566 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3566/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3566/comments | https://api.github.com/repos/huggingface/datasets/issues/3566/events | https://github.com/huggingface/datasets/pull/3566 | 1,100,155,902 | PR_kwDODunzps4w2Tcc | 3,566 | Add initial electricity time series dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4",
"events_url": "https://api.github.com/users/kashif/events{/privacy}",
"followers_url": "https://api.github.com/users/kashif/followers",
"following_url": "https://api.github.com/users/kashif/following{/other_user}",
"gists_url": "https://api.g... | [] | closed | false | null | [] | null | [
"@kashif Some commits on the PR branch are not authored by you, so could you please open a new PR and not use rebase this time :)? You can copy and paste the dataset dir to the new branch. \r\n\r\n",
"making a new PR"
] | 2022-01-12T10:21:32Z | 2022-02-15T13:31:48Z | 2022-02-15T13:31:48Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3566.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3566",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3566.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3566"
} | Here is an initial prototype time series dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3566/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3566/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/929 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/929/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/929/comments | https://api.github.com/repos/huggingface/datasets/issues/929/events | https://github.com/huggingface/datasets/pull/929 | 753,737,794 | MDExOlB1bGxSZXF1ZXN0NTI5NzU4NTU3 | 929 | Add weibo NER dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user... | [] | closed | false | null | [] | null | [] | 2020-11-30T19:22:47Z | 2020-12-03T13:36:55Z | 2020-12-03T13:36:54Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/929.diff",
"html_url": "https://github.com/huggingface/datasets/pull/929",
"merged_at": "2020-12-03T13:36:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/929.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/929... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/929/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/929/timeline | null | null | true | |
https://api.github.com/repos/huggingface/datasets/issues/5222 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5222/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5222/comments | https://api.github.com/repos/huggingface/datasets/issues/5222/events | https://github.com/huggingface/datasets/issues/5222 | 1,442,412,507 | I_kwDODunzps5V-Xfb | 5,222 | HuggingFace website is incorrectly reporting that my datasets are pickled | {
"avatar_url": "https://avatars.githubusercontent.com/u/10626398?v=4",
"events_url": "https://api.github.com/users/ProGamerGov/events{/privacy}",
"followers_url": "https://api.github.com/users/ProGamerGov/followers",
"following_url": "https://api.github.com/users/ProGamerGov/following{/other_user}",
"gists_u... | [] | closed | false | null | [] | null | [
"cc @McPatate maybe you know what's happening ?",
"Yes I think I know what is happening. We check in zips for pickles, and the UI must display the pickle jar when a scan has an associated list of imports, even when empty.\r\n~I'll fix ASAP !~",
"> I'll fix ASAP !\r\n\r\nActually I'd rather leave it like that f... | 2022-11-09T16:41:16Z | 2022-11-09T18:10:46Z | 2022-11-09T18:06:57Z | NONE | null | null | null | ### Describe the bug
HuggingFace is incorrectly reporting that my datasets are pickled. They are not pickled; they are simple ZIP files containing PNG images.
Hopefully this is the right location to report this bug.
### Steps to reproduce the bug
Inspect my dataset respository here: https://huggingface.co/datasets... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5222/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5222/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4840 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4840/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4840/comments | https://api.github.com/repos/huggingface/datasets/issues/4840/events | https://github.com/huggingface/datasets/issues/4840 | 1,337,342,672 | I_kwDODunzps5PtjrQ | 4,840 | Dataset Viewer issue for darragh/demo_data_raw3 | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://ap... | [] | open | false | null | [] | null | [
"do you have an idea of why it can occur @huggingface/datasets? The dataset consists of a single parquet file.",
"Thanks for reporting @severo.\r\n\r\nI'm not able to reproduce that error. I get instead:\r\n```\r\nFileNotFoundError: [Errno 2] No such file or directory: 'orix/data/ChiSig/唐合乐-9-3.jpg'\r\n```\r\n\r\... | 2022-08-12T15:22:58Z | 2022-09-08T07:55:44Z | null | CONTRIBUTOR | null | null | null | ### Link
https://huggingface.co/datasets/darragh/demo_data_raw3
### Description
```
Exception: ValueError
Message: Arrow type extension<arrow.py_extension_type<pyarrow.lib.UnknownExtensionType>> does not have a datasets dtype equivalent.
```
reported by @NielsRogge
### Owner
No | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4840/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4840/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3787 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3787/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3787/comments | https://api.github.com/repos/huggingface/datasets/issues/3787/events | https://github.com/huggingface/datasets/pull/3787 | 1,150,235,569 | PR_kwDODunzps4zdE7b | 3,787 | Fix Google Drive URL to avoid Virus scan warning | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | null | [
"Thanks for this @albertvillanova!",
"Once this PR merged into master and until our next `datasets` library release, you can get this fix by installing our library from the GitHub master branch:\r\n```shell\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\nThen, if you had previous... | 2022-02-25T09:35:12Z | 2022-03-04T20:43:32Z | 2022-02-25T11:56:35Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3787.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3787",
"merged_at": "2022-02-25T11:56:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3787.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | This PR fixes, in the datasets library instead of in every specific dataset, the issue of downloading the Virus scan warning page instead of the actual data file for Google Drive URLs.
Fix #3786, fix #3784. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 1,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3787/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3787/timeline | null | null | true |
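The bug fixed in the PR above (#3786, #3784) is that, for large files, Google Drive serves an HTML virus-scan interstitial instead of the requested file, so the downloader caches a warning page as if it were data. A minimal, hypothetical sniff for that failure mode (not the library's actual fix, which rewrites the download URL):

```python
def looks_like_gdrive_scan_warning(payload: bytes) -> bool:
    """Heuristic check: the interstitial is an HTML page, while the
    expected dataset payload (csv/json/zip) almost never starts as one."""
    head = payload[:512].lstrip().lower()
    return head.startswith(b"<!doctype html") or b"virus scan warning" in head
```

A check like this after download would turn a confusing parse error into an explicit "got the scan-warning page" failure.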
https://api.github.com/repos/huggingface/datasets/issues/1435 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1435/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1435/comments | https://api.github.com/repos/huggingface/datasets/issues/1435/events | https://github.com/huggingface/datasets/pull/1435 | 760,867,325 | MDExOlB1bGxSZXF1ZXN0NTM1NjIwODE4 | 1,435 | Add FreebaseQA dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/3663322?v=4",
"events_url": "https://api.github.com/users/anaerobeth/events{/privacy}",
"followers_url": "https://api.github.com/users/anaerobeth/followers",
"following_url": "https://api.github.com/users/anaerobeth/following{/other_user}",
"gists_url":... | [] | closed | false | null | [] | null | [
"@yjernite @lhoestq Any suggestions on how to get the dummy data generator to recognize the columns? The structure of the json is:\r\n```\r\n{\r\n \"Dataset\": \"FreebaseQA-eval\", \r\n \"Version\": \"1.0\", \r\n \"Questions\": [\r\n {\r\n \"Question-ID\": \"FreebaseQA-eval-0\", \r\n \"RawQuestion\"... | 2020-12-10T04:03:27Z | 2021-02-05T09:47:30Z | 2021-02-05T09:47:30Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1435.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1435",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1435.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1435"
} | This PR adds the FreebaseQA dataset: A Trivia-type QA Data Set over the Freebase Knowledge Graph
Repo: https://github.com/kelvin-jiang/FreebaseQA
Paper: https://www.aclweb.org/anthology/N19-1028.pdf
## TODO: create dummy data
Error encountered when running `python datasets-cli dummy_data datasets/freebase... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1435/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1435/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1600 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1600/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1600/comments | https://api.github.com/repos/huggingface/datasets/issues/1600/events | https://github.com/huggingface/datasets/issues/1600 | 770,582,960 | MDU6SXNzdWU3NzA1ODI5NjA= | 1,600 | AttributeError: 'DatasetDict' object has no attribute 'train_test_split' | {
"avatar_url": "https://avatars.githubusercontent.com/u/5028974?v=4",
"events_url": "https://api.github.com/users/david-waterworth/events{/privacy}",
"followers_url": "https://api.github.com/users/david-waterworth/followers",
"following_url": "https://api.github.com/users/david-waterworth/following{/other_user... | [
{
"color": "d876e3",
"default": true,
"description": "Further information is requested",
"id": 1935892912,
"name": "question",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question"
}
] | closed | false | null | [] | null | [
"Hi @david-waterworth!\r\n\r\nAs indicated in the error message, `load_dataset(\"csv\")` returns a `DatasetDict` object, which is mapping of `str` to `Dataset` objects. I believe in this case the behavior is to return a `train` split with all the data.\r\n`train_test_split` is a method of the `Dataset` object, so y... | 2020-12-18T05:37:10Z | 2023-05-03T04:22:55Z | 2020-12-21T07:38:58Z | NONE | null | null | null | The following code fails with "'DatasetDict' object has no attribute 'train_test_split'" - am I doing something wrong?
```
from datasets import load_dataset
dataset = load_dataset('csv', data_files='data.txt')
dataset = dataset.train_test_split(test_size=0.1)
```
> AttributeError: 'DatasetDict' object has no at... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1600/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1600/timeline | null | completed | false |
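The fix suggested in the thread above is to select the `train` split first, since `load_dataset("csv", ...)` returns a `DatasetDict` while `train_test_split` lives on `Dataset`: `dataset["train"].train_test_split(test_size=0.1)`. A self-contained stdlib sketch of what that split does (a hypothetical stand-in, not the `datasets` implementation):

```python
import random

def train_test_split_rows(rows, test_size=0.1, seed=0):
    """Hypothetical stand-in for datasets' Dataset.train_test_split:
    shuffle indices, carve off a test fraction, return both splits."""
    rng = random.Random(seed)
    idx = list(range(len(rows)))
    rng.shuffle(idx)
    n_test = max(1, int(len(rows) * test_size))
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    return {
        "train": [rows[i] for i in train_idx],
        "test": [rows[i] for i in test_idx],
    }

splits = train_test_split_rows(list(range(100)), test_size=0.1)
print(len(splits["train"]), len(splits["test"]))  # prints: 90 10
```

The real method additionally supports stratification and returns `Dataset` objects, but the index bookkeeping is the same idea.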
https://api.github.com/repos/huggingface/datasets/issues/103 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/103/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/103/comments | https://api.github.com/repos/huggingface/datasets/issues/103/events | https://github.com/huggingface/datasets/pull/103 | 618,233,637 | MDExOlB1bGxSZXF1ZXN0NDE3OTk5MDIy | 103 | [Manual downloads] add logic proposal for manual downloads and add wikihow | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_use... | [] | closed | false | null | [] | null | [
"> Wikihow is an example that needs to manually download two files as stated in: https://github.com/mahnazkoupaee/WikiHow-Dataset.\r\n> \r\n> The user can then store these files under a hard-coded name: `wikihowAll.csv` and `wikihowSep.csv` in this case in a directory of his choice, e.g. `~/wikihow/manual_dir`.\r\n... | 2020-05-14T13:30:36Z | 2020-05-14T14:27:41Z | 2020-05-14T14:27:40Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/103.diff",
"html_url": "https://github.com/huggingface/datasets/pull/103",
"merged_at": "2020-05-14T14:27:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/103.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/103... | Wikihow is an example that needs to manually download two files as stated in: https://github.com/mahnazkoupaee/WikiHow-Dataset.
The user can then store these files under a hard-coded name: `wikihowAll.csv` and `wikihowSep.csv` in this case in a directory of his choice, e.g. `~/wikihow/manual_dir`.
The dataset ca... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/103/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/103/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1331 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1331/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1331/comments | https://api.github.com/repos/huggingface/datasets/issues/1331/events | https://github.com/huggingface/datasets/pull/1331 | 759,677,189 | MDExOlB1bGxSZXF1ZXN0NTM0NjQwMzc5 | 1,331 | First version of the new dataset hausa_voa_topics | {
"avatar_url": "https://avatars.githubusercontent.com/u/1858628?v=4",
"events_url": "https://api.github.com/users/michael-aloys/events{/privacy}",
"followers_url": "https://api.github.com/users/michael-aloys/followers",
"following_url": "https://api.github.com/users/michael-aloys/following{/other_user}",
"gi... | [] | closed | false | null | [] | null | [] | 2020-12-08T18:28:52Z | 2020-12-10T11:09:53Z | 2020-12-10T11:09:53Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1331.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1331",
"merged_at": "2020-12-10T11:09:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1331.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Contains loading script as well as dataset card including YAML tags.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1331/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1331/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1499 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1499/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1499/comments | https://api.github.com/repos/huggingface/datasets/issues/1499/events | https://github.com/huggingface/datasets/pull/1499 | 763,464,693 | MDExOlB1bGxSZXF1ZXN0NTM3OTIyNjA3 | 1,499 | update the dataset id_newspapers_2018 | {
"avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4",
"events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}",
"followers_url": "https://api.github.com/users/cahya-wirawan/followers",
"following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}",
"gi... | [] | closed | false | null | [] | null | [] | 2020-12-12T08:47:12Z | 2020-12-14T15:28:07Z | 2020-12-14T15:28:07Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1499.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1499",
"merged_at": "2020-12-14T15:28:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1499.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Hi, I need to update the link to the dataset. The link in the previous PR was to a small test dataset. Thanks | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1499/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1499/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2854 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2854/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2854/comments | https://api.github.com/repos/huggingface/datasets/issues/2854/events | https://github.com/huggingface/datasets/pull/2854 | 983,726,084 | MDExOlB1bGxSZXF1ZXN0NzIzMjU3NDg5 | 2,854 | Fix caching when moving script | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [
"Merging since the CI failure is unrelated to this PR"
] | 2021-08-31T10:58:35Z | 2021-08-31T13:13:36Z | 2021-08-31T13:13:36Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2854.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2854",
"merged_at": "2021-08-31T13:13:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2854.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | When caching the result of a `map` function, the hash that is computed depends on many properties of this function, such as all the python objects it uses, its code and also the location of this code.
Using the full path of the python script for the location of the code makes the hash change if a script like `run_ml... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2854/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2854/timeline | null | null | true |
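The PR above changes the cache fingerprint so it no longer depends on the script's absolute path. A hypothetical sketch of path-independent function hashing (the real library hashes the pickled function state; this only illustrates why keying on code content rather than file location keeps the cache valid after a file move):

```python
import hashlib

def fingerprint(fn) -> str:
    """Key a cache entry on the function's bytecode and constants,
    not on where the defining file happens to live on disk."""
    code = fn.__code__
    payload = code.co_code + repr(code.co_consts).encode()
    return hashlib.sha256(payload).hexdigest()[:16]
```

Two byte-identical functions defined in different files (or the same file after a rename) then hash to the same key, so a moved script no longer invalidates every cached `map` result.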
https://api.github.com/repos/huggingface/datasets/issues/89 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/89/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/89/comments | https://api.github.com/repos/huggingface/datasets/issues/89/events | https://github.com/huggingface/datasets/pull/89 | 617,295,069 | MDExOlB1bGxSZXF1ZXN0NDE3MjM4MjU4 | 89 | Add list and inspect methods - cleanup hf_api | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "http... | [] | closed | false | null | [] | null | [] | 2020-05-13T09:30:15Z | 2020-05-13T14:05:00Z | 2020-05-13T09:33:10Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/89.diff",
"html_url": "https://github.com/huggingface/datasets/pull/89",
"merged_at": "2020-05-13T09:33:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/89.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/89"
} | Add a bunch of methods to easily list and inspect the processing scripts up-loaded on S3:
```python
nlp.list_datasets()
nlp.list_metrics()
# Copy and prepare the scripts at `local_path` for easy inspection/modification.
nlp.inspect_dataset(path, local_path)
# Copy and prepare the scripts at `local_path` for easy... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/89/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/89/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6465 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6465/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6465/comments | https://api.github.com/repos/huggingface/datasets/issues/6465/events | https://github.com/huggingface/datasets/issues/6465 | 2,022,212,468 | I_kwDODunzps54iIN0 | 6,465 | `load_dataset` uses out-of-date cache instead of re-downloading a changed dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/3391297?v=4",
"events_url": "https://api.github.com/users/mnoukhov/events{/privacy}",
"followers_url": "https://api.github.com/users/mnoukhov/followers",
"following_url": "https://api.github.com/users/mnoukhov/following{/other_user}",
"gists_url": "http... | [] | open | false | null | [] | null | [
"Hi, thanks for reporting! https://github.com/huggingface/datasets/pull/6459 will fix this."
] | 2023-12-02T21:35:17Z | 2023-12-04T16:13:10Z | null | NONE | null | null | null | ### Describe the bug
When a dataset is updated on the hub, using `load_dataset` will load the locally cached dataset instead of re-downloading the updated dataset
### Steps to reproduce the bug
Here is a minimal example script to
1. create an initial dataset and upload
2. download it so it is stored in cache
3. c... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6465/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6465/timeline | null | null | false |
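Until the linked fix (#6459) landed, the usual workaround for the issue above was `load_dataset(..., download_mode="force_redownload")`. A stdlib sketch of why a naive cache serves stale data and what the flag bypasses (a hypothetical helper, not the library's code):

```python
import os
import tempfile

def load_cached(fetch, cache_path, force_redownload=False):
    """Toy download cache: without any freshness check, an existing
    cache entry always wins, even if the remote data has changed."""
    if os.path.exists(cache_path) and not force_redownload:
        with open(cache_path, "rb") as f:
            return f.read()
    data = fetch()
    with open(cache_path, "wb") as f:
        f.write(data)
    return data

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "data.bin")
    v1 = load_cached(lambda: b"v1", path)               # downloads and caches b"v1"
    stale = load_cached(lambda: b"v2", path)            # remote changed; cache still served
    fresh = load_cached(lambda: b"v2", path, force_redownload=True)
    print(v1, stale, fresh)  # prints: b'v1' b'v1' b'v2'
```

The proper fix is to compare a remote revision identifier (for the Hub, the repo commit hash) against the cached one instead of trusting the cache unconditionally.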
https://api.github.com/repos/huggingface/datasets/issues/837 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/837/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/837/comments | https://api.github.com/repos/huggingface/datasets/issues/837/events | https://github.com/huggingface/datasets/pull/837 | 740,250,215 | MDExOlB1bGxSZXF1ZXN0NTE4NzcwNDM5 | 837 | AlloCiné dataset card | {
"avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4",
"events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}",
"followers_url": "https://api.github.com/users/mcmillanmajora/followers",
"following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}",
... | [] | closed | false | null | [] | null | [] | 2020-11-10T21:19:53Z | 2020-11-25T21:56:27Z | 2020-11-25T21:56:27Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/837.diff",
"html_url": "https://github.com/huggingface/datasets/pull/837",
"merged_at": "2020-11-25T21:56:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/837.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/837... | Link to the card page: https://github.com/mcmillanmajora/datasets/blob/allocine_card/datasets/allocine/README.md
There wasn't as much information available for this dataset, so I'm wondering what's the best way to address open questions about the dataset. For example, where did the list of films that the dataset creat... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/837/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/837/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3619 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3619/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3619/comments | https://api.github.com/repos/huggingface/datasets/issues/3619/events | https://github.com/huggingface/datasets/pull/3619 | 1,112,611,415 | PR_kwDODunzps4xfnCQ | 3,619 | fix meta in mls | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gist... | [] | closed | false | null | [] | null | [
"Feel free to merge @polinaeterna as soon as you got an approval from either @lhoestq , @albertvillanova or @mariosasko"
] | 2022-01-24T12:54:38Z | 2022-01-24T20:53:22Z | 2022-01-24T20:53:22Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3619.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3619",
"merged_at": "2022-01-24T20:53:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3619.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | `monolingual` value of `multilinguality` param in yaml meta was changed to `multilingual` :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3619/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3619/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2378 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2378/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2378/comments | https://api.github.com/repos/huggingface/datasets/issues/2378/events | https://github.com/huggingface/datasets/issues/2378 | 895,131,774 | MDU6SXNzdWU4OTUxMzE3NzQ= | 2,378 | Add missing dataset_infos.json files | {
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://a... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://a... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_ur... | null | [] | 2021-05-19T08:11:12Z | 2021-05-19T08:11:12Z | null | MEMBER | null | null | null | Some of the datasets in `datasets` are missing a `dataset_infos.json` file, e.g.
```
[PosixPath('datasets/chr_en/chr_en.py'), PosixPath('datasets/chr_en/README.md')]
[PosixPath('datasets/telugu_books/README.md'), PosixPath('datasets/telugu_books/telugu_books.py')]
[PosixPath('datasets/reclor/README.md'), PosixPat... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2378/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2378/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3758 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3758/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3758/comments | https://api.github.com/repos/huggingface/datasets/issues/3758/events | https://github.com/huggingface/datasets/issues/3758 | 1,143,366,393 | I_kwDODunzps5EJmL5 | 3,758 | head_qa file missing | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://ap... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"We usually find issues with files hosted at Google Drive...\r\n\r\nIn this case we download the Google Drive Virus scan warning instead of the data file.",
"Fixed: https://huggingface.co/datasets/head_qa/viewer/en/train. Thanks\r\n\r\n<img width=\"1551\" alt=\"Capture d’écran 2022-02-28 à 15 29 04\" src=\"http... | 2022-02-18T16:32:43Z | 2022-02-28T14:29:18Z | 2022-02-21T14:39:19Z | CONTRIBUTOR | null | null | null | ## Describe the bug
A file for the `head_qa` dataset is missing (https://drive.google.com/u/0/uc?export=download&id=1a_95N5zQQoUCq8IBNVZgziHbeM-QxG2t/HEAD_EN/train_HEAD_EN.json)
## Steps to reproduce the bug
```python
>>> from datasets import load_dataset
>>> load_dataset("head_qa", name="en")
```
## Expec... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3758/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3758/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5039 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5039/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5039/comments | https://api.github.com/repos/huggingface/datasets/issues/5039/events | https://github.com/huggingface/datasets/issues/5039 | 1,390,353,315 | I_kwDODunzps5S3xuj | 5,039 | Hendrycks Checksum | {
"avatar_url": "https://avatars.githubusercontent.com/u/9974388?v=4",
"events_url": "https://api.github.com/users/DanielHesslow/events{/privacy}",
"followers_url": "https://api.github.com/users/DanielHesslow/followers",
"following_url": "https://api.github.com/users/DanielHesslow/following{/other_user}",
"gi... | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"Thanks for reporting, @DanielHesslow. We are fixing it. ",
"@albertvillanova thanks for taking care of this so quickly!",
"The dataset metadata is fixed. You can download it normally."
] | 2022-09-29T06:56:20Z | 2022-09-29T10:23:30Z | 2022-09-29T10:04:20Z | NONE | null | null | null | Hi,
The checksum for [hendrycks_test](https://huggingface.co/datasets/hendrycks_test) does not compare correctly, I guess it has been updated on the remote.
```
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://people.eecs.berkeley.edu/~hendrycks/data.... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5039/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5039/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6329 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6329/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6329/comments | https://api.github.com/repos/huggingface/datasets/issues/6329/events | https://github.com/huggingface/datasets/issues/6329 | 1,955,858,020 | I_kwDODunzps50lAZk | 6,329 | Text-to-speech networks first convert the given text into an intermediate representation | {
"avatar_url": "https://avatars.githubusercontent.com/u/147399213?v=4",
"events_url": "https://api.github.com/users/shabnam706/events{/privacy}",
"followers_url": "https://api.github.com/users/shabnam706/followers",
"following_url": "https://api.github.com/users/shabnam706/following{/other_user}",
"gists_url... | [] | closed | false | null | [] | null | [] | 2023-10-22T11:07:46Z | 2023-10-23T09:22:58Z | 2023-10-23T09:22:58Z | NONE | null | null | null | Text-to-speech networks first convert the given text into an intermediate representation
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6329/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6329/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4257 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4257/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4257/comments | https://api.github.com/repos/huggingface/datasets/issues/4257/events | https://github.com/huggingface/datasets/pull/4257 | 1,221,393,137 | PR_kwDODunzps43GATC | 4,257 | Create metric card for Mahalanobis Distance | {
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "htt... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-29T18:37:27Z | 2022-05-02T14:50:18Z | 2022-05-02T14:43:24Z | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4257.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4257",
"merged_at": "2022-05-02T14:43:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4257.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | proposing a metric card to better explain how Mahalanobis distance works (last one for now :sweat_smile: | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4257/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4257/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1814 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1814/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1814/comments | https://api.github.com/repos/huggingface/datasets/issues/1814/events | https://github.com/huggingface/datasets/pull/1814 | 800,516,236 | MDExOlB1bGxSZXF1ZXN0NTY2OTg4NTI1 | 1,814 | Add Freebase QA Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [
"Hi @lhoestq \r\n\r\nThanks for approving. Request you to close PR #1435 as well."
] | 2021-02-03T16:57:49Z | 2021-02-04T19:47:51Z | 2021-02-04T16:21:48Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1814.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1814",
"merged_at": "2021-02-04T16:21:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1814.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Closes PR #1435. Fixed issues with PR #1809.
Requesting @lhoestq to review. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1814/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1814/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3038 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3038/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3038/comments | https://api.github.com/repos/huggingface/datasets/issues/3038/events | https://github.com/huggingface/datasets/pull/3038 | 1,018,113,499 | PR_kwDODunzps4syno_ | 3,038 | add sberquad dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/13781234?v=4",
"events_url": "https://api.github.com/users/Alenush/events{/privacy}",
"followers_url": "https://api.github.com/users/Alenush/followers",
"following_url": "https://api.github.com/users/Alenush/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [] | 2021-10-06T11:33:39Z | 2021-10-06T11:58:01Z | 2021-10-06T11:58:01Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3038.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3038",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3038.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3038"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3038/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3038/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5524 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5524/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5524/comments | https://api.github.com/repos/huggingface/datasets/issues/5524/events | https://github.com/huggingface/datasets/pull/5524 | 1,580,219,454 | PR_kwDODunzps5JvbMw | 5,524 | [INVALID PR] | {
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_u... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2023-02-10T19:35:50Z | 2023-02-10T19:51:45Z | 2023-02-10T19:49:12Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5524.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5524",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5524.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5524"
} | Hi to whoever is reading this! 🤗
## What's in this PR?
~~Basically, I've removed the 🤗`datasets` installation as `python -m pip install ".[quality]" in the `check_code_quality` job in `.github/workflows/ci.yaml`, as we don't need to install the whole package to run the CI, unless that's done on purpose e.g. to ... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5524/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5524/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4453 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4453/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4453/comments | https://api.github.com/repos/huggingface/datasets/issues/4453/events | https://github.com/huggingface/datasets/issues/4453 | 1,262,674,105 | I_kwDODunzps5LQuC5 | 4,453 | Dataset Viewer issue for Yaxin/SemEval2015 | {
"avatar_url": "https://avatars.githubusercontent.com/u/18160852?v=4",
"events_url": "https://api.github.com/users/WithYouTo/events{/privacy}",
"followers_url": "https://api.github.com/users/WithYouTo/followers",
"following_url": "https://api.github.com/users/WithYouTo/following{/other_user}",
"gists_url": "... | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"I understand that the issue is that a remote file (URL) is being loaded as a local file. Right @albertvillanova @lhoestq?\r\n\r\n```\r\nMessage: [Errno 2] No such file or directory: 'https://raw.githubusercontent.com/YaxinCui/ABSADataset/main/SemEval2015Task12Corrected/train/restaurants_train.xml'\r\n```",
... | 2022-06-07T03:30:08Z | 2022-06-09T08:34:16Z | 2022-06-09T08:34:16Z | NONE | null | null | null | ### Link
_No response_
### Description
_No response_
### Owner
_No response_ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4453/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4453/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/802 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/802/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/802/comments | https://api.github.com/repos/huggingface/datasets/issues/802/events | https://github.com/huggingface/datasets/pull/802 | 736,296,343 | MDExOlB1bGxSZXF1ZXN0NTE1NTM1MDI0 | 802 | Add XGlue | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_use... | [] | closed | false | null | [] | null | [
"Really cool to add XGlue, this will be a nice addition !\r\n\r\nSplits shouldn't depend on the language. There must be configurations for each language, as we're doing for xnli, xtreme, etc.\r\nFor example for XGlue we'll have these configurations: NER.de, NER.en etc."
] | 2020-11-04T17:29:54Z | 2022-04-28T08:15:36Z | 2020-12-01T15:58:27Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/802.diff",
"html_url": "https://github.com/huggingface/datasets/pull/802",
"merged_at": "2020-12-01T15:58:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/802.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/802... | Dataset is ready to merge. An important feature of this dataset is that for each config the train data is in English, while dev and test data are in multiple languages. Therefore, @lhoestq and I decided offline that we will give the dataset the following API, *e.g.* for
```python
load_dataset("xglue", "ner") # wo... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/802/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/802/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3686 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3686/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3686/comments | https://api.github.com/repos/huggingface/datasets/issues/3686/events | https://github.com/huggingface/datasets/issues/3686 | 1,127,137,290 | I_kwDODunzps5DLsAK | 3,686 | `Translation` features cannot be `flatten`ed | {
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
... | null | [
"Thanks for reporting, @SBrandeis! Some additional feature types that don't behave as expected when flattened: `Audio`, `Image` and `TranslationVariableLanguages`"
] | 2022-02-08T11:33:48Z | 2022-03-18T17:28:13Z | 2022-03-18T17:28:13Z | CONTRIBUTOR | null | null | null | ## Describe the bug
(`Dataset.flatten`)[https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_dataset.py#L1265] fails for columns with feature (`Translation`)[https://github.com/huggingface/datasets/blob/3edbeb0ec6519b79f1119adc251a1a6b379a2c12/src/datasets/features/translation.py#L8]
## Steps to... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3686/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3686/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2370 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2370/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2370/comments | https://api.github.com/repos/huggingface/datasets/issues/2370/events | https://github.com/huggingface/datasets/pull/2370 | 893,606,432 | MDExOlB1bGxSZXF1ZXN0NjQ2MDkyNDQy | 2,370 | Adding HendrycksTest dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/43451571?v=4",
"events_url": "https://api.github.com/users/andyzoujm/events{/privacy}",
"followers_url": "https://api.github.com/users/andyzoujm/followers",
"following_url": "https://api.github.com/users/andyzoujm/following{/other_user}",
"gists_url": "... | [] | closed | false | null | [] | null | [
"@lhoestq Thank you for the review. I've made the suggested changes. There still might be some problems with dummy data though due to some csv loading issues (which I haven't found the cause to).",
"I took a look at the dummy data and some csv lines were cropped. I fixed them :)",
"@andyzoujm Any reason why thi... | 2021-05-17T18:53:05Z | 2023-05-11T05:42:57Z | 2021-05-31T16:37:13Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2370.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2370",
"merged_at": "2021-05-31T16:37:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2370.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Adding Hendrycks test from https://arxiv.org/abs/2009.03300.
I'm having a bit of trouble with dummy data creation because some lines in the csv files aren't being loaded properly (only the first entry loaded in a row of length 6). The dataset is loading just fine. Hope you can kindly help!
Thank you! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2370/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2370/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2602 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2602/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2602/comments | https://api.github.com/repos/huggingface/datasets/issues/2602/events | https://github.com/huggingface/datasets/pull/2602 | 938,555,712 | MDExOlB1bGxSZXF1ZXN0Njg0OTE5MjMy | 2,602 | Remove import of transformers | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | {
"closed_at": "2021-07-21T15:36:49Z",
"closed_issues": 29,
"created_at": "2021-06-08T18:48:33Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/... | [] | 2021-07-07T06:58:18Z | 2021-07-12T14:10:22Z | 2021-07-07T08:28:51Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2602.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2602",
"merged_at": "2021-07-07T08:28:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2602.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | When pickling a tokenizer within multiprocessing, check that is instance of transformers PreTrainedTokenizerBase without importing transformers.
Related to huggingface/transformers#12549 and #502. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2602/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2602/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4782 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4782/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4782/comments | https://api.github.com/repos/huggingface/datasets/issues/4782/events | https://github.com/huggingface/datasets/issues/4782 | 1,326,247,158 | I_kwDODunzps5PDOz2 | 4,782 | pyarrow.lib.ArrowCapacityError: array cannot contain more than 2147483646 bytes, have 2147483648 | {
"avatar_url": "https://avatars.githubusercontent.com/u/25208228?v=4",
"events_url": "https://api.github.com/users/conceptofmind/events{/privacy}",
"followers_url": "https://api.github.com/users/conceptofmind/followers",
"following_url": "https://api.github.com/users/conceptofmind/following{/other_user}",
"g... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"Thanks for reporting @conceptofmind.\r\n\r\nCould you please give details about your environment? \r\n```\r\n## Environment info\r\n<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->\r\n- `datasets` version:\r\n- Platform:\r\n- Python version:\r\n- PyArrow version:\r\n```",
... | 2022-08-02T18:36:05Z | 2022-08-22T09:46:28Z | 2022-08-20T02:11:53Z | NONE | null | null | null | ## Describe the bug
Following the example in CodeParrot, I receive an array size limitation error when deduplicating larger datasets.
## Steps to reproduce the bug
```python
dataset_name = "the_pile"
ds = load_dataset(dataset_name, split="train")
ds = ds.map(preprocess, num_proc=num_workers)
uniques = set(ds.u... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4782/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4782/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/522 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/522/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/522/comments | https://api.github.com/repos/huggingface/datasets/issues/522/events | https://github.com/huggingface/datasets/issues/522 | 682,478,833 | MDU6SXNzdWU2ODI0Nzg4MzM= | 522 | dictionnary typo in docs | {
"avatar_url": "https://avatars.githubusercontent.com/u/4004127?v=4",
"events_url": "https://api.github.com/users/yonigottesman/events{/privacy}",
"followers_url": "https://api.github.com/users/yonigottesman/followers",
"following_url": "https://api.github.com/users/yonigottesman/following{/other_user}",
"gi... | [] | closed | false | null | [] | null | [
"Thanks!"
] | 2020-08-20T07:11:05Z | 2020-08-20T07:52:14Z | 2020-08-20T07:52:13Z | CONTRIBUTOR | null | null | null | Many places dictionary is spelled dictionnary, not sure if its on purpose or not.
Fixed in this pr:
https://github.com/huggingface/nlp/pull/521 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/522/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/522/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/781 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/781/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/781/comments | https://api.github.com/repos/huggingface/datasets/issues/781/events | https://github.com/huggingface/datasets/pull/781 | 733,168,609 | MDExOlB1bGxSZXF1ZXN0NTEyOTkyMzQw | 781 | Add XNLI train set | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [
"Hi! Thanks for adding the translated MNLI! Do you know what translations system / model you used when you created the datasets in the other languages?",
"According to the [paper](https://arxiv.org/pdf/1809.05053.pdf) it's the result of the work of professional translators ;)",
"Thanks for getting back to me.\n... | 2020-10-30T13:21:53Z | 2022-06-09T23:26:46Z | 2020-11-09T18:22:49Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/781.diff",
"html_url": "https://github.com/huggingface/datasets/pull/781",
"merged_at": "2020-11-09T18:22:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/781.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/781... | I added the train set that was built using the translated MNLI.
Now you can load the dataset specifying one language:
```python
from datasets import load_dataset
xnli_en = load_dataset("xnli", "en")
print(xnli_en["train"][0])
# {'hypothesis': 'Product and geography are what make cream skimming work .', 'label':... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 2,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/781/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/781/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4148 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4148/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4148/comments | https://api.github.com/repos/huggingface/datasets/issues/4148/events | https://github.com/huggingface/datasets/issues/4148 | 1,201,169,242 | I_kwDODunzps5HmGNa | 4,148 | fix confusing bleu metric example | {
"avatar_url": "https://avatars.githubusercontent.com/u/6253193?v=4",
"events_url": "https://api.github.com/users/aizawa-naoki/events{/privacy}",
"followers_url": "https://api.github.com/users/aizawa-naoki/followers",
"following_url": "https://api.github.com/users/aizawa-naoki/following{/other_user}",
"gists... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [] | 2022-04-12T06:18:26Z | 2022-04-13T14:16:34Z | 2022-04-13T14:16:34Z | NONE | null | null | null | **Is your feature request related to a problem? Please describe.**
I would like to see the example in "Metric Card for BLEU" changed.
The 0th element in the predictions list is not closed in square brackets, and the 1st list is missing a comma.
The BLEU score are calculated correctly, but it is difficult to understa... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4148/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4148/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/508 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/508/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/508/comments | https://api.github.com/repos/huggingface/datasets/issues/508/events | https://github.com/huggingface/datasets/issues/508 | 679,705,734 | MDU6SXNzdWU2Nzk3MDU3MzQ= | 508 | TypeError: Receiver() takes no arguments | {
"avatar_url": "https://avatars.githubusercontent.com/u/1225851?v=4",
"events_url": "https://api.github.com/users/sebastiantomac/events{/privacy}",
"followers_url": "https://api.github.com/users/sebastiantomac/followers",
"following_url": "https://api.github.com/users/sebastiantomac/following{/other_user}",
... | [] | closed | false | null | [] | null | [
"Which version of Apache Beam do you have (can you copy your full environment info here)?",
"apache-beam==2.23.0\r\nnlp==0.4.0\r\n\r\nFor me this was resolved by running the same python script on Linux (or really WSL). ",
"Do you manage to run a dummy beam pipeline with python on windows ? \r\nYou can test a du... | 2020-08-16T07:18:16Z | 2020-09-01T14:53:33Z | 2020-09-01T14:49:03Z | NONE | null | null | null | I am trying to load a wikipedia data set
```
import nlp
from nlp import load_dataset
dataset = load_dataset("wikipedia", "20200501.en", split="train", cache_dir=data_path, beam_runner='DirectRunner')
#dataset = load_dataset('wikipedia', '20200501.sv', cache_dir=data_path, beam_runner='DirectRunner')
```
Th... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/508/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/508/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/728 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/728/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/728/comments | https://api.github.com/repos/huggingface/datasets/issues/728/events | https://github.com/huggingface/datasets/issues/728 | 719,555,780 | MDU6SXNzdWU3MTk1NTU3ODA= | 728 | Passing `cache_dir` to a metric does not work | {
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [] | 2020-10-12T17:55:14Z | 2020-10-29T09:34:42Z | 2020-10-29T09:34:42Z | CONTRIBUTOR | null | null | null | When passing `cache_dir` to a custom metric, the folder is concatenated to itself at some point and this results in a FileNotFoundError:
## Reproducer
```python
import datasets
import torch
from datasets import Metric
class GatherMetric(Metric):
def _info(self):
return datasets.MetricInfo(
... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/728/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/728/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1706 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1706/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1706/comments | https://api.github.com/repos/huggingface/datasets/issues/1706/events | https://github.com/huggingface/datasets/issues/1706 | 781,494,476 | MDU6SXNzdWU3ODE0OTQ0NzY= | 1,706 | Error when downloading a large dataset on slow connection. | {
"avatar_url": "https://avatars.githubusercontent.com/u/23355969?v=4",
"events_url": "https://api.github.com/users/lucadiliello/events{/privacy}",
"followers_url": "https://api.github.com/users/lucadiliello/followers",
"following_url": "https://api.github.com/users/lucadiliello/following{/other_user}",
"gist... | [] | open | false | null | [] | null | [
"Hi ! Is this an issue you have with `openwebtext` specifically or also with other datasets ?\r\n\r\nIt looks like the downloaded file is corrupted and can't be extracted using `tarfile`.\r\nCould you try loading it again with \r\n```python\r\nimport datasets\r\ndatasets.load_dataset(\"openwebtext\", download_mode=... | 2021-01-07T17:48:15Z | 2021-01-13T10:35:02Z | null | CONTRIBUTOR | null | null | null | I receive the following error after about an hour trying to download the `openwebtext` dataset.
The code used is:
```python
import datasets
datasets.load_dataset("openwebtext")
```
> Traceback (most recent call last): ... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1706/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1706/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5367 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5367/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5367/comments | https://api.github.com/repos/huggingface/datasets/issues/5367/events | https://github.com/huggingface/datasets/pull/5367 | 1,499,174,749 | PR_kwDODunzps5FlevK | 5,367 | Fix remove columns from lazy dict | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-12-15T22:04:12Z | 2022-12-15T22:27:53Z | 2022-12-15T22:24:50Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5367.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5367",
"merged_at": "2022-12-15T22:24:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5367.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | This was introduced in https://github.com/huggingface/datasets/pull/5252 and causing the transformers CI to break: https://app.circleci.com/pipelines/github/huggingface/transformers/53886/workflows/522faf2e-a053-454c-94f8-a617fde33393/jobs/648597
Basically this code should return a dataset with only one column:
`... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5367/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5367/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1944 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1944/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1944/comments | https://api.github.com/repos/huggingface/datasets/issues/1944/events | https://github.com/huggingface/datasets/pull/1944 | 816,267,216 | MDExOlB1bGxSZXF1ZXN0NTc5OTU2Nzc3 | 1,944 | Add Turkish News Category Dataset (270K - Lite Version) | {
"avatar_url": "https://avatars.githubusercontent.com/u/5150963?v=4",
"events_url": "https://api.github.com/users/yavuzKomecoglu/events{/privacy}",
"followers_url": "https://api.github.com/users/yavuzKomecoglu/followers",
"following_url": "https://api.github.com/users/yavuzKomecoglu/following{/other_user}",
... | [] | closed | false | null | [] | null | [
"I updated your suggestions. Thank you very much for your support. @lhoestq ",
"> Thanks for changing to ClassLabel :)\r\n> This is all good now !\r\n> \r\n> However I can see changes in other files than the ones for interpress_news_category_tr_lite, can you please fix that ?\r\n> To do so you can create another ... | 2021-02-25T09:45:22Z | 2021-03-02T17:46:41Z | 2021-03-01T18:23:21Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1944.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1944",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1944.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1944"
} | This PR adds the Turkish News Categories Dataset (270K - Lite Version) dataset which is a text classification dataset by me, @basakbuluz and @serdarakyol.
This dataset contains the same news from the current [interpress_news_category_tr dataset](https://huggingface.co/datasets/interpress_news_category_tr) but contai... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1944/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1944/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3961 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3961/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3961/comments | https://api.github.com/repos/huggingface/datasets/issues/3961/events | https://github.com/huggingface/datasets/issues/3961 | 1,173,223,086 | I_kwDODunzps5F7fau | 3,961 | Scores from Index at extra positions are not filtered out | {
"avatar_url": "https://avatars.githubusercontent.com/u/36671559?v=4",
"events_url": "https://api.github.com/users/vishalsrao/events{/privacy}",
"followers_url": "https://api.github.com/users/vishalsrao/followers",
"following_url": "https://api.github.com/users/vishalsrao/following{/other_user}",
"gists_url"... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"Hi! Yes, that makes sense! Would you like to submit a PR to fix this?",
"Created PR https://github.com/huggingface/datasets/pull/3971"
] | 2022-03-18T06:13:23Z | 2022-04-12T14:41:58Z | 2022-04-12T14:41:58Z | CONTRIBUTOR | null | null | null | If a FAISS index has fewer records than the requested number of top results (k), then it returns -1 in indices for the additional positions. The get_nearest_examples method only filters out the extra results from the dataset samples. It would be better to filter out extra scores too.
Reference: https://github.com/hu... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3961/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3961/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3991 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3991/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3991/comments | https://api.github.com/repos/huggingface/datasets/issues/3991/events | https://github.com/huggingface/datasets/issues/3991 | 1,177,362,901 | I_kwDODunzps5GLSHV | 3,991 | Add Lung Image Database Consortium image collection (LIDC-IDRI) dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_ur... | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "bfdadc",... | open | false | null | [] | null | [] | 2022-03-22T22:16:05Z | 2022-03-23T12:57:16Z | null | NONE | null | null | null | ## Adding a Dataset
- **Name:** *Lung Image Database Consortium image collection (LIDC-IDRI)*
- **Description:** *Consists of diagnostic and lung cancer screening thoracic computed tomography (CT) scans with marked-up annotated lesions. It is a web-accessible international resource for development, training, and ev... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3991/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3991/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5530 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5530/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5530/comments | https://api.github.com/repos/huggingface/datasets/issues/5530/events | https://github.com/huggingface/datasets/pull/5530 | 1,582,938,241 | PR_kwDODunzps5J4W_4 | 5,530 | Add missing license in `NumpyFormatter` | {
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_u... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-02-13T19:33:23Z | 2023-02-14T14:40:41Z | 2023-02-14T12:23:58Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5530.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5530",
"merged_at": "2023-02-14T12:23:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5530.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | ## What's in this PR?
As discussed with @lhoestq in https://github.com/huggingface/datasets/pull/5522, the license for `NumpyFormatter` at `datasets/formatting/np_formatter.py` was missing, but present on the rest of the `formatting/*.py` files. So this PR is basically to include it there. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5530/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5530/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4824 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4824/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4824/comments | https://api.github.com/repos/huggingface/datasets/issues/4824/events | https://github.com/huggingface/datasets/pull/4824 | 1,335,826,639 | PR_kwDODunzps49BR5H | 4,824 | Fix titles in dataset cards | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The non-passing tests are caused by other missing information in the dataset cards."
] | 2022-08-11T11:27:48Z | 2022-08-11T13:46:11Z | 2022-08-11T12:56:49Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4824.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4824",
"merged_at": "2022-08-11T12:56:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4824.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Fix all the titles in the dataset cards, so that they conform to the required format. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4824/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4824/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6233 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6233/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6233/comments | https://api.github.com/repos/huggingface/datasets/issues/6233/events | https://github.com/huggingface/datasets/pull/6233 | 1,891,804,286 | PR_kwDODunzps5aF3kd | 6,233 | Update README.md | {
"avatar_url": "https://avatars.githubusercontent.com/u/95188570?v=4",
"events_url": "https://api.github.com/users/NinoRisteski/events{/privacy}",
"followers_url": "https://api.github.com/users/NinoRisteski/followers",
"following_url": "https://api.github.com/users/NinoRisteski/following{/other_user}",
"gist... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-09-12T06:53:06Z | 2023-09-13T18:20:50Z | 2023-09-13T18:10:04Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6233.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6233",
"merged_at": "2023-09-13T18:10:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6233.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | fixed a typo | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6233/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6233/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2343 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2343/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2343/comments | https://api.github.com/repos/huggingface/datasets/issues/2343/events | https://github.com/huggingface/datasets/issues/2343 | 883,208,539 | MDU6SXNzdWU4ODMyMDg1Mzk= | 2,343 | Columns are removed before or after map function applied? | {
"avatar_url": "https://avatars.githubusercontent.com/u/8199406?v=4",
"events_url": "https://api.github.com/users/taghizad3h/events{/privacy}",
"followers_url": "https://api.github.com/users/taghizad3h/followers",
"following_url": "https://api.github.com/users/taghizad3h/following{/other_user}",
"gists_url":... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | [] | null | [
"Hi! Columns are removed **after** applying the function and **before** updating the examples with the function's output (as per the docs [here](https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.Dataset.map.remove_columns)). I agree the docs on this should be more clear."
] | 2021-05-10T02:36:20Z | 2022-10-24T11:31:55Z | null | NONE | null | null | null | ## Describe the bug
According to the documentation when applying map function the [remove_columns ](https://huggingface.co/docs/datasets/processing.html#removing-columns) will be removed after they are passed to the function, but in the [source code](https://huggingface.co/docs/datasets/package_reference/main_classes.... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2343/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2343/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2752 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2752/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2752/comments | https://api.github.com/repos/huggingface/datasets/issues/2752/events | https://github.com/huggingface/datasets/pull/2752 | 959,023,608 | MDExOlB1bGxSZXF1ZXN0NzAyMjAxMjAy | 2,752 | Generate metadata JSON for lm1b dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | null | [] | 2021-08-03T11:34:56Z | 2021-08-04T06:40:40Z | 2021-08-04T06:40:39Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2752.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2752",
"merged_at": "2021-08-04T06:40:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2752.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Related to #2743. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2752/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2752/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6202 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6202/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6202/comments | https://api.github.com/repos/huggingface/datasets/issues/6202/events | https://github.com/huggingface/datasets/issues/6202 | 1,876,630,351 | I_kwDODunzps5v2xtP | 6,202 | avoid downgrading jax version | {
"avatar_url": "https://avatars.githubusercontent.com/u/1332458?v=4",
"events_url": "https://api.github.com/users/chrisflesher/events{/privacy}",
"followers_url": "https://api.github.com/users/chrisflesher/followers",
"following_url": "https://api.github.com/users/chrisflesher/following{/other_user}",
"gists... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"https://github.com/huggingface/datasets/blob/main/setup.py#L236\r\nCurrently has the highest version at 0.3.25; Not sure if there is any reason for this, other than that was the tested version?"
] | 2023-09-01T02:57:57Z | 2023-10-12T16:28:59Z | 2023-10-12T16:28:59Z | NONE | null | null | null | ### Feature request
Whenever I `pip install datasets[jax]` it downgrades jax to version 0.3.25. I seem to be able to install this library first then upgrade jax back to version 0.4.13.
### Motivation
It would be nice to not overwrite currently installed version of jax if possible.
### Your contribution
I... | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6202/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6202/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5328 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5328/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5328/comments | https://api.github.com/repos/huggingface/datasets/issues/5328/events | https://github.com/huggingface/datasets/pull/5328 | 1,471,661,437 | PR_kwDODunzps5EFAyT | 5,328 | Fix docs building for main | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"EDIT\r\nAt least the docs for ~~main~~ PR branch are now built:\r\n- https://github.com/huggingface/datasets/actions/runs/3594847760/jobs/6053620813",
"Build documentation for main branch was triggered after this PR being merged: h... | 2022-12-01T17:07:45Z | 2022-12-02T16:29:00Z | 2022-12-02T16:26:00Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5328.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5328",
"merged_at": "2022-12-02T16:26:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5328.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | This PR reverts the triggering event for building documentation introduced by:
- #5250
Fix #5326. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5328/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5328/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4470 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4470/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4470/comments | https://api.github.com/repos/huggingface/datasets/issues/4470/events | https://github.com/huggingface/datasets/pull/4470 | 1,267,470,051 | PR_kwDODunzps45dnYw | 4,470 | Reorder returned validation/test splits in script template | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-10T12:21:13Z | 2022-06-10T18:04:10Z | 2022-06-10T17:54:50Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4470.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4470",
"merged_at": "2022-06-10T17:54:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4470.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4470/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4470/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1810 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1810/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1810/comments | https://api.github.com/repos/huggingface/datasets/issues/1810/events | https://github.com/huggingface/datasets/issues/1810 | 799,168,650 | MDU6SXNzdWU3OTkxNjg2NTA= | 1,810 | Add Hateful Memes Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url"... | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "bfdadc",... | open | false | null | [] | null | [
"I am not sure, but would `datasets.Sequence(datasets.Sequence(datasets.Sequence(datasets.Value(\"int\")))` work?",
"Also, I found the information for loading only subsets of the data [here](https://github.com/huggingface/datasets/blob/master/docs/source/splits.rst).",
"Hi @lhoestq,\r\n\r\nRequest you to check ... | 2021-02-02T10:53:59Z | 2021-12-08T12:03:59Z | null | CONTRIBUTOR | null | null | null | ## Add Hateful Memes Dataset
- **Name:** Hateful Memes
- **Description:** [https://ai.facebook.com/blog/hateful-memes-challenge-and-data-set]( https://ai.facebook.com/blog/hateful-memes-challenge-and-data-set)
- **Paper:** [https://arxiv.org/pdf/2005.04790.pdf](https://arxiv.org/pdf/2005.04790.pdf)
- **Data:** [Thi... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1810/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1810/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5681 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5681/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5681/comments | https://api.github.com/repos/huggingface/datasets/issues/5681/events | https://github.com/huggingface/datasets/issues/5681 | 1,645,630,784 | I_kwDODunzps5iFlVA | 5,681 | Add information about patterns search order to the doc about structuring repo | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gist... | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "htt... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gi... | null | [
"Good idea, I think I've seen this a couple of times before too on the forums. I can work on this :)",
"Closed in #5693 "
] | 2023-03-29T11:44:49Z | 2023-04-03T18:31:11Z | 2023-04-03T18:31:11Z | CONTRIBUTOR | null | null | null | Following [this](https://github.com/huggingface/datasets/issues/5650) issue I think we should add a note about the order of patterns that is used to find splits, see [my comment](https://github.com/huggingface/datasets/issues/5650#issuecomment-1488412527). Also we should reference this page in pages about packaged load... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5681/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5681/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6359 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6359/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6359/comments | https://api.github.com/repos/huggingface/datasets/issues/6359/events | https://github.com/huggingface/datasets/issues/6359 | 1,965,378,583 | I_kwDODunzps51JUwX | 6,359 | Stuck in "Resolving data files..." | {
"avatar_url": "https://avatars.githubusercontent.com/u/20135317?v=4",
"events_url": "https://api.github.com/users/Luciennnnnnn/events{/privacy}",
"followers_url": "https://api.github.com/users/Luciennnnnnn/followers",
"following_url": "https://api.github.com/users/Luciennnnnnn/following{/other_user}",
"gist... | [] | open | false | null | [] | null | [
"Most likely, the data file inference logic is the problem here.\r\n\r\nYou can run the following code to verify this:\r\n```python\r\nimport time\r\nfrom datasets.data_files import get_data_patterns\r\nstart_time = time.time()\r\nget_data_patterns(\"/path/to/img_dir\")\r\nend_time = time.time()\r\nprint(f\"Elapsed... | 2023-10-27T12:01:51Z | 2023-10-28T01:38:21Z | null | NONE | null | null | null | ### Describe the bug
I have an image dataset with 300k images, the size of image is 768 * 768.
When I run `dataset = load_dataset("imagefolder", data_dir="/path/to/img_dir", split='train')` in second time, it takes 50 minutes to finish "Resolving data files" part, what's going on in this part?
From my understa... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6359/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6359/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2055 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2055/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2055/comments | https://api.github.com/repos/huggingface/datasets/issues/2055/events | https://github.com/huggingface/datasets/issues/2055 | 831,684,312 | MDU6SXNzdWU4MzE2ODQzMTI= | 2,055 | is there a way to override a dataset object saved with save_to_disk? | {
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "htt... | [] | closed | false | null | [] | null | [
"Hi\r\nYou can rename the arrow file and update the name in `state.json`",
"I tried this way, but when there is a mapping process to the dataset, it again uses a random cache name. atm, I am trying to use the following method by setting an exact cache file,\r\n\r\n```\r\n dataset_with_embedding =csv_da... | 2021-03-15T10:50:53Z | 2021-03-22T04:06:17Z | 2021-03-22T04:06:17Z | NONE | null | null | null | At the moment when I use save_to_disk, it uses the arbitrary name for the arrow file. Is there a way to override such an object? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2055/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2055/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/88 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/88/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/88/comments | https://api.github.com/repos/huggingface/datasets/issues/88/events | https://github.com/huggingface/datasets/pull/88 | 617,284,664 | MDExOlB1bGxSZXF1ZXN0NDE3MjI5ODQw | 88 | Add wiki40b | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [
"Looks good to me. I have not really looked too much into the Beam Datasets yet though - so I think you can merge whenever you think is good for Beam datasets :-) "
] | 2020-05-13T09:16:01Z | 2020-05-13T12:31:55Z | 2020-05-13T12:31:54Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/88.diff",
"html_url": "https://github.com/huggingface/datasets/pull/88",
"merged_at": "2020-05-13T12:31:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/88.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/88"
} | This one is a beam dataset that downloads files using tensorflow.
I tested it on a small config and it works fine.
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/88/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/88/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4194 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4194/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4194/comments | https://api.github.com/repos/huggingface/datasets/issues/4194/events | https://github.com/huggingface/datasets/pull/4194 | 1,210,958,602 | PR_kwDODunzps42jjD3 | 4,194 | Support lists of multi-dimensional numpy arrays | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-21T12:22:26Z | 2022-05-12T15:16:34Z | 2022-05-12T15:08:40Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4194.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4194",
"merged_at": "2022-05-12T15:08:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4194.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Fix #4191.
CC: @SaulLu | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4194/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4194/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5744 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5744/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5744/comments | https://api.github.com/repos/huggingface/datasets/issues/5744/events | https://github.com/huggingface/datasets/issues/5744 | 1,667,076,620 | I_kwDODunzps5jXZIM | 5,744 | [BUG] With Pandas 2.0.0, `load_dataset` raises `TypeError: read_csv() got an unexpected keyword argument 'mangle_dupe_cols'` | {
"avatar_url": "https://avatars.githubusercontent.com/u/15572698?v=4",
"events_url": "https://api.github.com/users/keyboardAnt/events{/privacy}",
"followers_url": "https://api.github.com/users/keyboardAnt/followers",
"following_url": "https://api.github.com/users/keyboardAnt/following{/other_user}",
"gists_u... | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"Thanks for reporting, @keyboardAnt.\r\n\r\nWe haven't noticed any crash in our CI tests. Could you please indicate specifically the `load_dataset` command that crashes in your side, so that we can reproduce it?",
"This has been fixed in `datasets` 2.11"
] | 2023-04-13T20:21:28Z | 2023-07-06T17:01:59Z | 2023-07-06T17:01:59Z | NONE | null | null | null | The `load_dataset` function with Pandas `1.5.3` has no issue (just a FutureWarning) but crashes with Pandas `2.0.0`.
For your convenience, I opened a draft Pull Request to fix it quickly: https://github.com/huggingface/datasets/pull/5745
---
* The FutureWarning mentioned above:
```
FutureWarning: the 'mangle_... | {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5744/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5744/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4431 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4431/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4431/comments | https://api.github.com/repos/huggingface/datasets/issues/4431/events | https://github.com/huggingface/datasets/pull/4431 | 1,254,618,948 | PR_kwDODunzps44x5aG | 4,431 | Add personaldialog datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4",
"events_url": "https://api.github.com/users/silverriver/events{/privacy}",
"followers_url": "https://api.github.com/users/silverriver/followers",
"following_url": "https://api.github.com/users/silverriver/following{/other_user}",
"gists_ur... | [] | closed | false | null | [] | null | [
"These test errors are related to issue #4428 \r\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"I only made a trivial modification in my commit https://github.com/huggingface/datasets/pull/4431/commits/402c893d35224d7828176717233909ac5f1e7b3e\r\n\r\nI have submitted a PR #4... | 2022-06-01T01:20:40Z | 2022-06-11T12:40:23Z | 2022-06-11T12:31:16Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4431.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4431",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4431.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4431"
} | It seems that all tests are passed | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4431/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4431/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/756 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/756/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/756/comments | https://api.github.com/repos/huggingface/datasets/issues/756/events | https://github.com/huggingface/datasets/pull/756 | 728,211,373 | MDExOlB1bGxSZXF1ZXN0NTA4OTYwNTc3 | 756 | Start community-provided dataset docs | {
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "h... | [] | closed | false | null | [] | null | [
"Oh, really cool @sshleifer!"
] | 2020-10-23T13:17:41Z | 2020-10-26T12:55:20Z | 2020-10-26T12:55:19Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/756.diff",
"html_url": "https://github.com/huggingface/datasets/pull/756",
"merged_at": "2020-10-26T12:55:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/756.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/756... | Continuation of #736 with clean fork.
#### Old description
This is what I did to get the pseudo-labels updated. Not sure if it generalizes, but I figured I would write it down. It was pretty easy because all I had to do was make properly formatted directories and change URLs.
In slack @thomwolf called it a user-... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/756/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/756/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2815 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2815/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2815/comments | https://api.github.com/repos/huggingface/datasets/issues/2815/events | https://github.com/huggingface/datasets/pull/2815 | 973,862,024 | MDExOlB1bGxSZXF1ZXN0NzE1MjUxNDQ5 | 2,815 | Tiny typo fixes of "fo" -> "of" | {
"avatar_url": "https://avatars.githubusercontent.com/u/9934829?v=4",
"events_url": "https://api.github.com/users/aronszanto/events{/privacy}",
"followers_url": "https://api.github.com/users/aronszanto/followers",
"following_url": "https://api.github.com/users/aronszanto/following{/other_user}",
"gists_url":... | [] | closed | false | null | [] | null | [] | 2021-08-18T16:36:11Z | 2021-08-19T08:03:02Z | 2021-08-19T08:03:02Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2815.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2815",
"merged_at": "2021-08-19T08:03:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2815.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Noticed a few of these when reading docs- feel free to ignore the PR and just fix on some main contributor branch if more helpful. Thanks for the great library! :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2815/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2815/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4644 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4644/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4644/comments | https://api.github.com/repos/huggingface/datasets/issues/4644/events | https://github.com/huggingface/datasets/pull/4644 | 1,296,018,052 | PR_kwDODunzps468mQb | 4,644 | [Minor fix] Typo correction | {
"avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4",
"events_url": "https://api.github.com/users/cakiki/events{/privacy}",
"followers_url": "https://api.github.com/users/cakiki/followers",
"following_url": "https://api.github.com/users/cakiki/following{/other_user}",
"gists_url": "https://ap... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-07-06T15:37:02Z | 2022-07-06T15:56:32Z | 2022-07-06T15:45:16Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4644.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4644",
"merged_at": "2022-07-06T15:45:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4644.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | recieve -> receive | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4644/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4644/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1393 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1393/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1393/comments | https://api.github.com/repos/huggingface/datasets/issues/1393/events | https://github.com/huggingface/datasets/pull/1393 | 760,436,267 | MDExOlB1bGxSZXF1ZXN0NTM1MjY4MjUx | 1,393 | Add script_version suggestion when dataset/metric not found | {
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https:/... | [] | closed | false | null | [] | null | [] | 2020-12-09T15:37:38Z | 2020-12-10T18:17:05Z | 2020-12-10T18:17:05Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1393.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1393",
"merged_at": "2020-12-10T18:17:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1393.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Adds a helpful prompt to the error message when a dataset/metric is not found, suggesting the user might need to pass `script_version="master"` if the dataset was added recently. The whole error looks like:
> Couldn't find file locally at blah/blah.py, or remotely at https://raw.githubusercontent.com/huggingface/dat... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1393/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1393/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6426 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6426/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6426/comments | https://api.github.com/repos/huggingface/datasets/issues/6426/events | https://github.com/huggingface/datasets/pull/6426 | 1,995,363,264 | PR_kwDODunzps5fjOEK | 6,426 | More robust temporary directory deletion | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6426). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma... | 2023-11-15T19:06:42Z | 2023-12-01T15:37:32Z | 2023-12-01T15:31:19Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6426.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6426",
"merged_at": "2023-12-01T15:31:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6426.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | While fixing the Windows errors in #6362, I noticed that `PermissionError` can still easily be thrown on the session exit by the temporary cache directory's finalizer (we would also have to keep track of intermediate datasets, copies, etc.). ~~Due to the low usage of `datasets` on Windows, this PR takes a simpler appro... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6426/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6426/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6349 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6349/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6349/comments | https://api.github.com/repos/huggingface/datasets/issues/6349/events | https://github.com/huggingface/datasets/issues/6349 | 1,961,435,673 | I_kwDODunzps506SIZ | 6,349 | Can't load ds = load_dataset("imdb") | {
"avatar_url": "https://avatars.githubusercontent.com/u/86415736?v=4",
"events_url": "https://api.github.com/users/vivianc2/events{/privacy}",
"followers_url": "https://api.github.com/users/vivianc2/followers",
"following_url": "https://api.github.com/users/vivianc2/following{/other_user}",
"gists_url": "htt... | [] | closed | false | null | [] | null | [
"I'm unable to reproduce this error. The server hosting the files may have been down temporarily, so try again."
] | 2023-10-25T13:29:51Z | 2023-10-31T19:59:35Z | 2023-10-31T19:59:35Z | NONE | null | null | null | ### Describe the bug
I did `from datasets import load_dataset, load_metric` and then `ds = load_dataset("imdb")` and it gave me the error:
ExpectedMoreDownloadedFiles: {'http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz'}
I tried doing `ds = load_dataset("imdb",download_mode="force_redownload")` as we... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6349/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6349/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1897 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1897/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1897/comments | https://api.github.com/repos/huggingface/datasets/issues/1897/events | https://github.com/huggingface/datasets/pull/1897 | 810,113,263 | MDExOlB1bGxSZXF1ZXN0NTc0ODU3MTIy | 1,897 | Fix PandasArrayExtensionArray conversion to native type | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [] | 2021-02-17T11:48:24Z | 2021-02-17T13:15:16Z | 2021-02-17T13:15:15Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1897.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1897",
"merged_at": "2021-02-17T13:15:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1897.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | To make the conversion to csv work in #1887 , we need PandasArrayExtensionArray used for multidimensional numpy arrays to be converted to pandas native types.
However previously pandas.core.internals.ExtensionBlock.to_native_types would fail with a PandasExtensionArray because
1. the PandasExtensionArray.isna metho... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1897/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1897/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/339 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/339/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/339/comments | https://api.github.com/repos/huggingface/datasets/issues/339/events | https://github.com/huggingface/datasets/pull/339 | 650,156,468 | MDExOlB1bGxSZXF1ZXN0NDQzNzAyNTcw | 339 | Add dataset.export() to TFRecords | {
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists... | [] | closed | false | null | [] | null | [
"Really cool @jarednielsen !\r\nDo you think we can make it work with dataset with nested features like `squad` ?\r\n\r\nI just did a PR to fix `.set_format` for datasets with nested features, but as soon as it's merged we could try to make the conversion work on a dataset like `squad`.",
"For datasets with neste... | 2020-07-02T19:26:27Z | 2020-07-22T09:16:12Z | 2020-07-22T09:16:12Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/339.diff",
"html_url": "https://github.com/huggingface/datasets/pull/339",
"merged_at": "2020-07-22T09:16:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/339.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/339... | Fixes https://github.com/huggingface/nlp/issues/337
Some design decisions:
- Simplified the function API to not handle sharding. It writes the entire dataset as a single TFRecord file. This simplifies the function logic and users can use other functions (`select`, `shard`, etc) to handle custom sharding or splitt... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 3,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/339/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/339/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1008 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1008/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1008/comments | https://api.github.com/repos/huggingface/datasets/issues/1008/events | https://github.com/huggingface/datasets/pull/1008 | 755,372,798 | MDExOlB1bGxSZXF1ZXN0NTMxMDk1ODQy | 1,008 | Adding C3 dataset: the first free-form multiple-Choice Chinese machine reading Comprehension dataset. https://github.com/nlpdata/c3 https://arxiv.org/abs/1904.09679 | {
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api... | [] | closed | false | null | [] | null | [
"Dupe of #1009 "
] | 2020-12-02T15:28:05Z | 2020-12-02T15:40:55Z | 2020-12-02T15:40:55Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1008.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1008",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1008.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1008"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1008/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1008/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/915 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/915/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/915/comments | https://api.github.com/repos/huggingface/datasets/issues/915/events | https://github.com/huggingface/datasets/issues/915 | 753,118,481 | MDU6SXNzdWU3NTMxMTg0ODE= | 915 | Shall we change the hashing to encoding to reduce potential replicated cache files? | {
"avatar_url": "https://avatars.githubusercontent.com/u/10428324?v=4",
"events_url": "https://api.github.com/users/zhuzilin/events{/privacy}",
"followers_url": "https://api.github.com/users/zhuzilin/followers",
"following_url": "https://api.github.com/users/zhuzilin/following{/other_user}",
"gists_url": "htt... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "c5def5",
"default": fals... | open | false | null | [] | null | [
"This is an interesting idea !\r\nDo you have ideas about how to approach the decoding and the normalization ?",
"@lhoestq\r\nI think we first need to save the transformation chain to a list in `self._fingerprint`. Then we can\r\n- decode all the current saved datasets to see if there is already one that is equiv... | 2020-11-30T03:50:46Z | 2020-12-24T05:11:49Z | null | NONE | null | null | null | Hi there. For now, we are using `xxhash` to hash the transformations to fingerprint and we will save a copy of the processed dataset to disk if there is a new hash value. However, there are some transformations that are idempotent or commutative to each other. I think that encoding the transformation chain as the finge... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/915/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/915/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6415 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6415/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6415/comments | https://api.github.com/repos/huggingface/datasets/issues/6415/events | https://github.com/huggingface/datasets/pull/6415 | 1,992,917,248 | PR_kwDODunzps5fa4n7 | 6,415 | Fix multi gpu map example | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | 2023-11-14T14:57:18Z | 2023-11-22T15:48:27Z | 2023-11-22T15:42:19Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6415.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6415",
"merged_at": "2023-11-22T15:42:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6415.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | - use `torch.cuda.set_device` instead of `CUDA_VISIBLE_DEVICES`
- add `if __name__ == "__main__"`
fix https://github.com/huggingface/datasets/issues/6186 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6415/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6415/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/758 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/758/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/758/comments | https://api.github.com/repos/huggingface/datasets/issues/758/events | https://github.com/huggingface/datasets/issues/758 | 728,638,559 | MDU6SXNzdWU3Mjg2Mzg1NTk= | 758 | Process 0 very slow when using num_procs with map to tokenizer | {
"avatar_url": "https://avatars.githubusercontent.com/u/17930170?v=4",
"events_url": "https://api.github.com/users/ksjae/events{/privacy}",
"followers_url": "https://api.github.com/users/ksjae/followers",
"following_url": "https://api.github.com/users/ksjae/following{/other_user}",
"gists_url": "https://api.... | [] | closed | false | null | [] | null | [
"Hi ! Thanks for reporting.\r\nIs the distribution of text length of your data evenly distributed across your dataset ? I mean, could it be because the examples in the first part of your dataset are slower to process ?\r\nAlso could how many CPUs can you use for multiprocessing ?\r\n```python\r\nimport multiprocess... | 2020-10-24T02:40:20Z | 2020-10-28T03:59:46Z | 2020-10-28T03:59:45Z | NONE | null | null | null | <img width="721" alt="image" src="https://user-images.githubusercontent.com/17930170/97066109-776d0d00-15ed-11eb-8bba-bb4d2e0fcc33.png">
The code I am using is
```
dataset = load_dataset("text", data_files=[file_path], split='train')
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_speci... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/758/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/758/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2754 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2754/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2754/comments | https://api.github.com/repos/huggingface/datasets/issues/2754/events | https://github.com/huggingface/datasets/pull/2754 | 959,105,577 | MDExOlB1bGxSZXF1ZXN0NzAyMjcxMjM4 | 2,754 | Generate metadata JSON for telugu_books dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | null | [] | 2021-08-03T13:14:52Z | 2021-08-04T08:49:02Z | 2021-08-04T08:49:02Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2754.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2754",
"merged_at": "2021-08-04T08:49:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2754.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Related to #2743. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2754/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2754/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5179 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5179/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5179/comments | https://api.github.com/repos/huggingface/datasets/issues/5179/events | https://github.com/huggingface/datasets/issues/5179 | 1,430,826,100 | I_kwDODunzps5VSKx0 | 5,179 | `map()` fails midway due to format incompatibility | {
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"Cc: @lhoestq ",
"You can end up with a list instead of a tensor if all the tensors inside the list can't be stacked together - can you make sure all your inputs are tensors with the same shape ?",
"Is there an easy way to ensure it?",
"You can make sure your `tokenize` function always return tensors of the s... | 2022-11-01T03:57:59Z | 2022-11-08T11:35:26Z | 2022-11-08T11:35:26Z | MEMBER | null | null | null | ### Describe the bug
I am using the `emotion` dataset from Hub for sequence classification. After training the model, I am using it to generate predictions for all the entries present in the `validation` split of the dataset.
```py
def get_test_accuracy(model):
def fn(batch):
inputs = {k:v.to(device... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5179/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5179/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2331 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2331/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2331/comments | https://api.github.com/repos/huggingface/datasets/issues/2331/events | https://github.com/huggingface/datasets/issues/2331 | 879,031,427 | MDU6SXNzdWU4NzkwMzE0Mjc= | 2,331 | Add Topical-Chat | {
"avatar_url": "https://avatars.githubusercontent.com/u/22266659?v=4",
"events_url": "https://api.github.com/users/ktangri/events{/privacy}",
"followers_url": "https://api.github.com/users/ktangri/followers",
"following_url": "https://api.github.com/users/ktangri/following{/other_user}",
"gists_url": "https:... | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | open | false | null | [] | null | [] | 2021-05-07T13:43:59Z | 2021-05-07T13:43:59Z | null | NONE | null | null | null | ## Adding a Dataset
- **Name:** Topical-Chat
- **Description:** a knowledge-grounded human-human conversation dataset where the underlying knowledge spans 8 broad topics and conversation partners don’t have explicitly defined roles
- **Paper:** https://www.isca-speech.org/archive/Interspeech_2019/pdfs/3079.pdf
- **... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2331/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2331/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5294 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5294/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5294/comments | https://api.github.com/repos/huggingface/datasets/issues/5294/events | https://github.com/huggingface/datasets/pull/5294 | 1,463,679,582 | PR_kwDODunzps5DqgLW | 5,294 | Support streaming datasets with pathlib.Path.with_suffix | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-24T18:04:38Z | 2022-11-29T07:09:08Z | 2022-11-29T07:06:32Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5294.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5294",
"merged_at": "2022-11-29T07:06:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5294.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | This PR extends the support in streaming mode for datasets that use `pathlib.Path.with_suffix`.
Fix #5293. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5294/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5294/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3842 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3842/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3842/comments | https://api.github.com/repos/huggingface/datasets/issues/3842/events | https://github.com/huggingface/datasets/pull/3842 | 1,161,336,483 | PR_kwDODunzps40CZvE | 3,842 | Align IterableDataset.shuffle with Dataset.shuffle | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3842). All of your documentation changes will be reflected on that endpoint.",
"We should also add `generator` as a param to `shuffle` to fully align the APIs, no?",
"I added the `generator` argument.\r\n\r\nI had to make a f... | 2022-03-07T12:10:46Z | 2022-03-07T19:03:43Z | 2022-03-07T19:03:42Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3842.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3842",
"merged_at": "2022-03-07T19:03:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3842.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | From #3444 , Dataset.shuffle can have the same API than IterableDataset.shuffle (i.e. in streaming mode).
Currently you can pass an optional seed to both if you want, BUT currently IterableDataset.shuffle always requires a buffer_size, used for approximate shuffling. I propose using a reasonable default value (maybe... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3842/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3842/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3894 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3894/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3894/comments | https://api.github.com/repos/huggingface/datasets/issues/3894/events | https://github.com/huggingface/datasets/pull/3894 | 1,166,611,270 | PR_kwDODunzps40TzXW | 3,894 | [docs] make dummy data creation optional | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3894). All of your documentation changes will be reflected on that endpoint.",
"The dev doc build rendering doesn't seem to be updated with my last commit for some reason",
"Merging it anyway since I'd like to share this page... | 2022-03-11T16:21:34Z | 2022-03-11T17:27:56Z | 2022-03-11T17:27:55Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3894.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3894",
"merged_at": "2022-03-11T17:27:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3894.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Related to #3507 : dummy data for datasets created on the Hugging Face Hub are optional.
We can discuss later to make them optional for datasets in this repository as well | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3894/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3894/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1537 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1537/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1537/comments | https://api.github.com/repos/huggingface/datasets/issues/1537/events | https://github.com/huggingface/datasets/pull/1537 | 765,095,210 | MDExOlB1bGxSZXF1ZXN0NTM4ODY1NzIz | 1,537 | added ohsumed | {
"avatar_url": "https://avatars.githubusercontent.com/u/9033954?v=4",
"events_url": "https://api.github.com/users/skyprince999/events{/privacy}",
"followers_url": "https://api.github.com/users/skyprince999/followers",
"following_url": "https://api.github.com/users/skyprince999/following{/other_user}",
"gists... | [] | closed | false | null | [] | null | [] | 2020-12-13T06:58:23Z | 2020-12-17T18:28:16Z | 2020-12-17T18:28:16Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1537.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1537",
"merged_at": "2020-12-17T18:28:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1537.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | UPDATE2: PR passed all tests. Now waiting for review.
UPDATE: pushed a new version. cross fingers that it should complete all the tests! :)
If it passes all tests then it's not a draft version.
This is a draft version | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1537/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1537/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2038 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2038/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2038/comments | https://api.github.com/repos/huggingface/datasets/issues/2038/events | https://github.com/huggingface/datasets/issues/2038 | 830,036,875 | MDU6SXNzdWU4MzAwMzY4NzU= | 2,038 | outdated dataset_infos.json might fail verifications | {
"avatar_url": "https://avatars.githubusercontent.com/u/2062185?v=4",
"events_url": "https://api.github.com/users/songfeng/events{/privacy}",
"followers_url": "https://api.github.com/users/songfeng/followers",
"following_url": "https://api.github.com/users/songfeng/following{/other_user}",
"gists_url": "http... | [] | closed | false | null | [] | null | [
"Hi ! Thanks for reporting.\r\n\r\nTo update the dataset_infos.json you can run:\r\n```\r\ndatasets-cli test ./datasets/doc2dial --all_configs --save_infos --ignore_verifications\r\n```",
"Fixed by #2041, thanks again @songfeng !"
] | 2021-03-12T11:41:54Z | 2021-03-16T16:27:40Z | 2021-03-16T16:27:40Z | CONTRIBUTOR | null | null | null | The [doc2dial/dataset_infos.json](https://github.com/huggingface/datasets/blob/master/datasets/doc2dial/dataset_infos.json) is outdated. It would fail data_loader when verifying download checksum etc..
Could you please update this file or point me how to update this file?
Thank you. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2038/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2038/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4752 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4752/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4752/comments | https://api.github.com/repos/huggingface/datasets/issues/4752/events | https://github.com/huggingface/datasets/issues/4752 | 1,319,464,409 | I_kwDODunzps5OpW3Z | 4,752 | DatasetInfo issue when testing multiple configs: mixed task_templates | {
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url":... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | [] | null | [
"I've narrowed down the issue to the `dataset_module_factory` which already creates a `dataset_infos.json` file down in the `.cache/modules/dataset_modules/..` folder. That JSON file already contains the wrong task_templates for `unfiltered`.",
"Ugh. Found the issue: apparently `datasets` was reusing the already ... | 2022-07-27T12:04:54Z | 2022-08-08T18:20:50Z | null | CONTRIBUTOR | null | null | null | ## Describe the bug
When running the `datasets-cli test` it would seem that some config properties in a DatasetInfo get mangled, leading to issues, e.g., about the ClassLabel.
## Steps to reproduce the bug
In summary, what I want to do is create three configs:
- unfiltered: no classlabel, no tasks. Gets data fr... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4752/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4752/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/305 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/305/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/305/comments | https://api.github.com/repos/huggingface/datasets/issues/305/events | https://github.com/huggingface/datasets/issues/305 | 644,148,149 | MDU6SXNzdWU2NDQxNDgxNDk= | 305 | Importing downloaded package repository fails | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "htt... | [
{
"color": "25b21e",
"default": false,
"description": "A bug in a metric script",
"id": 2067393914,
"name": "metric bug",
"node_id": "MDU6TGFiZWwyMDY3MzkzOTE0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug"
}
] | closed | false | null | [] | null | [] | 2020-06-23T21:09:05Z | 2020-07-30T16:44:23Z | 2020-07-30T16:44:23Z | MEMBER | null | null | null | The `get_imports` function in `src/nlp/load.py` has a feature to download a package as a zip archive of the github repository and import functions from the unpacked directory. This is used for example in the `metrics/coval.py` file, and would be useful to add BLEURT (@ankparikh).
Currently however, the code seems to... | {
"+1": 0,
"-1": 0,
"confused": 1,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/305/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/305/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5501 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5501/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5501/comments | https://api.github.com/repos/huggingface/datasets/issues/5501/events | https://github.com/huggingface/datasets/pull/5501 | 1,569,644,159 | PR_kwDODunzps5JMTn8 | 5,501 | Increase chunk size for speeding up file downloads | {
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api... | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5501). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma... | 2023-02-03T10:50:10Z | 2023-02-09T11:04:11Z | null | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5501.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5501",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5501.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5501"
} | Original fix: https://github.com/huggingface/huggingface_hub/pull/1267
Not sure this function is actually still called though.
I haven't done benches on this. Is there a dataset where files are hosted on the hub through cloudfront so we can have the same setup as in `hf_hub` ? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5501/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5501/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2003 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2003/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2003/comments | https://api.github.com/repos/huggingface/datasets/issues/2003/events | https://github.com/huggingface/datasets/issues/2003 | 824,034,678 | MDU6SXNzdWU4MjQwMzQ2Nzg= | 2,003 | Messages are being printed to the `stdout` | {
"avatar_url": "https://avatars.githubusercontent.com/u/1367529?v=4",
"events_url": "https://api.github.com/users/mahnerak/events{/privacy}",
"followers_url": "https://api.github.com/users/mahnerak/followers",
"following_url": "https://api.github.com/users/mahnerak/following{/other_user}",
"gists_url": "http... | [] | closed | false | null | [] | null | [
"This is expected to show this message to the user via stdout.\r\nThis way the users see it directly and can cancel the downloading if they want to.\r\nCould you elaborate why it would be better to have it in stderr instead of stdout ?",
"@lhoestq, sorry for the late reply\r\n\r\nI completely understand why you d... | 2021-03-07T22:09:34Z | 2023-07-25T16:35:21Z | 2023-07-25T16:35:21Z | NONE | null | null | null | In this code segment, we can see some messages are being printed to the `stdout`.
https://github.com/huggingface/datasets/blob/7e60bb509b595e8edc60a87f32b2bacfc065d607/src/datasets/builder.py#L545-L554
According to the comment, it is done intentionally, but I don't really understand why don't we log it with a higher ... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2003/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2003/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3987 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3987/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3987/comments | https://api.github.com/repos/huggingface/datasets/issues/3987/events | https://github.com/huggingface/datasets/pull/3987 | 1,176,481,659 | PR_kwDODunzps40zAxF | 3,987 | Fix Faiss custom_index device | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-03-22T09:11:24Z | 2022-03-24T12:18:59Z | 2022-03-24T12:14:12Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3987.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3987",
"merged_at": "2022-03-24T12:14:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3987.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Currently, if both `custom_index` and `device` are passed to `FaissIndex`, `device` is silently ignored.
This PR fixes this by raising a ValueError if both arguments are passed.
Alternatively, the `custom_index` could be transferred to the target `device`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3987/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3987/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/320 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/320/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/320/comments | https://api.github.com/repos/huggingface/datasets/issues/320/events | https://github.com/huggingface/datasets/issues/320 | 647,188,167 | MDU6SXNzdWU2NDcxODgxNjc= | 320 | Blog Authorship Corpus, Non Matching Splits Sizes Error, nlp viewer | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"g... | [
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
}
] | closed | false | null | [] | null | [
"I wonder if this means downloading failed? That corpus has a really slow server.",
"This dataset seems to have a decoding problem that results in inconsistencies in the number of generated examples.\r\nSee #215.\r\nThat's why we end up with a `NonMatchingSplitsSizesError `."
] | 2020-06-29T07:36:35Z | 2020-06-29T14:44:42Z | 2020-06-29T14:44:42Z | CONTRIBUTOR | null | null | null | Selecting `blog_authorship_corpus` in the nlp viewer throws the following error:
```
NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=610252351, num_examples=532812, dataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='train', num_bytes=614706451, num_examples=535568, dat... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/320/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/320/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/690 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/690/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/690/comments | https://api.github.com/repos/huggingface/datasets/issues/690/events | https://github.com/huggingface/datasets/issues/690 | 712,150,321 | MDU6SXNzdWU3MTIxNTAzMjE= | 690 | XNLI dataset: NonMatchingChecksumError | {
"avatar_url": "https://avatars.githubusercontent.com/u/13307358?v=4",
"events_url": "https://api.github.com/users/xiey1/events{/privacy}",
"followers_url": "https://api.github.com/users/xiey1/followers",
"following_url": "https://api.github.com/users/xiey1/following{/other_user}",
"gists_url": "https://api.... | [] | closed | false | null | [] | null | [
"Thanks for reporting.\r\nThe data file must have been updated by the host.\r\nI'll update the checksum with the new one.",
"Well actually it looks like the link isn't working anymore :(",
"The new link is https://cims.nyu.edu/~sbowman/xnli/XNLI-1.0.zip\r\nI'll update the dataset script",
"I'll do a release i... | 2020-09-30T17:50:03Z | 2020-10-01T17:15:08Z | 2020-10-01T14:01:14Z | NONE | null | null | null | Hi,
I tried to download "xnli" dataset in colab using
`xnli = load_dataset(path='xnli')`
but got 'NonMatchingChecksumError' error
`NonMatchingChecksumError Traceback (most recent call last)
<ipython-input-27-a87bedc82eeb> in <module>()
----> 1 xnli = load_dataset(path='xnli')
3 frames
/usr... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/690/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/690/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2615 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2615/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2615/comments | https://api.github.com/repos/huggingface/datasets/issues/2615/events | https://github.com/huggingface/datasets/issues/2615 | 940,794,339 | MDU6SXNzdWU5NDA3OTQzMzk= | 2,615 | Jsonlines export error | {
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_u... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"Thanks for reporting @TevenLeScao! I'm having a look...",
"(not sure what just happened on the assignations sorry)",
"For some reason this happens (both `datasets` version are on master) only on Python 3.6 and not Python 3.8.",
"@TevenLeScao we are using `pandas` to serialize the dataset to JSON Lines. So it... | 2021-07-09T14:02:05Z | 2021-07-09T15:29:07Z | 2021-07-09T15:28:33Z | CONTRIBUTOR | null | null | null | ## Describe the bug
When exporting large datasets in jsonlines (c4 in my case) the created file has an error every 9999 lines: the 9999th and 10000th are concatenated, thus breaking the jsonlines format. This sounds like it is related to batching, which is 10000 by default
## Steps to reproduce the bug
This wha... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2615/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2615/timeline | null | completed | false |
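The row above reports JSON Lines records being fused onto one line every 10,000 rows during batched export. As an illustrative aside (not part of the dataset rows, and not the library's own code), a stdlib-only sketch that detects such malformed lines:

```python
import json

def find_bad_jsonl_lines(path):
    """Return 1-based line numbers that are not a single valid JSON value."""
    bad = []
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            line = line.strip()
            if not line:
                continue
            try:
                json.loads(line)
            except json.JSONDecodeError:
                # Two records fused onto one line (as in the report) fail here,
                # because the parser hits extra data after the first object.
                bad.append(lineno)
    return bad
```

Running this over an exported `.jsonl` file would pinpoint exactly which lines were concatenated.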
https://api.github.com/repos/huggingface/datasets/issues/6180 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6180/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6180/comments | https://api.github.com/repos/huggingface/datasets/issues/6180/events | https://github.com/huggingface/datasets/pull/6180 | 1,867,032,578 | PR_kwDODunzps5Yy1r- | 6,180 | Use `hf-internal-testing` repos for hosting test dataset repos | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-08-25T13:10:26Z | 2023-08-25T16:58:02Z | 2023-08-25T16:46:22Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6180.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6180",
"merged_at": "2023-08-25T16:46:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6180.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Use `hf-internal-testing` for hosting instead of the maintainers' dataset repos. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6180/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6180/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3637 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3637/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3637/comments | https://api.github.com/repos/huggingface/datasets/issues/3637/events | https://github.com/huggingface/datasets/issues/3637 | 1,115,526,438 | I_kwDODunzps5CfZUm | 3,637 | [TypeError: Couldn't cast array of type] Cannot load dataset in v1.18 | {
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://a... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"Hi @lewtun!\r\n \r\nThis one was tricky to debug. Initially, I tought there is a bug in the recently-added (by @lhoestq ) `cast_array_to_feature` function because `git bisect` points to the https://github.com/huggingface/datasets/commit/6ca96c707502e0689f9b58d94f46d871fa5a3c9c commit. Then, I noticed that the feat... | 2022-01-26T21:38:02Z | 2022-02-09T16:15:53Z | 2022-02-09T16:15:53Z | MEMBER | null | null | null | ## Describe the bug
I am trying to load the [`GEM/RiSAWOZ` dataset](https://huggingface.co/datasets/GEM/RiSAWOZ) in `datasets` v1.18.1 and am running into a type error when casting the features. The strange thing is that I can load the dataset with v1.17.0. Note that the error is also present if I install from `master... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3637/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3637/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3201 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3201/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3201/comments | https://api.github.com/repos/huggingface/datasets/issues/3201/events | https://github.com/huggingface/datasets/issues/3201 | 1,043,209,142 | I_kwDODunzps4-Lhu2 | 3,201 | Add GSM8K dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url"... | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | [] | null | [
"Closed via https://github.com/huggingface/datasets/pull/4103"
] | 2021-11-03T08:36:44Z | 2022-04-13T11:56:12Z | 2022-04-13T11:56:11Z | CONTRIBUTOR | null | null | null | ## Adding a Dataset
- **Name:** GSM8K (short for Grade School Math 8k)
- **Description:** GSM8K is a dataset of 8.5K high quality linguistically diverse grade school math word problems created by human problem writers.
- **Paper:** https://openai.com/blog/grade-school-math/
- **Data:** https://github.com/openai/gra... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3201/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3201/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2189 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2189/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2189/comments | https://api.github.com/repos/huggingface/datasets/issues/2189/events | https://github.com/huggingface/datasets/issues/2189 | 853,052,891 | MDU6SXNzdWU4NTMwNTI4OTE= | 2,189 | save_to_disk doesn't work when we use concatenate_datasets function before creating the final dataset_object. | {
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "htt... | [] | closed | false | null | [] | null | [
"Hi ! We refactored save_to_disk in #2025 so this doesn't happen.\r\nFeel free to try it on master for now\r\nWe'll do a new release soon"
] | 2021-04-08T04:42:53Z | 2022-06-01T16:32:15Z | 2022-06-01T16:32:15Z | NONE | null | null | null | As you can see, it saves the entire dataset.
@lhoestq
You can check by going through the following example,
```
from datasets import load_from_disk,concatenate_datasets
loaded_data=load_from_disk('/home/gsir059/HNSW-ori/my_knowledge_dataset')
n=20
kb_list=[loaded_data.shard(n, i, contiguous=True) for i... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2189/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2189/timeline | null | completed | false |
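The snippet in the row above splits a dataset with `shard(n, i, contiguous=True)`. As a hedged, library-independent sketch of the index math contiguous sharding performs (illustrative only, not the actual `datasets` implementation):

```python
def contiguous_shard_bounds(dataset_len, num_shards, index):
    """Start/end offsets of shard `index` when splitting contiguously:
    the first (dataset_len % num_shards) shards get one extra row, so the
    shards cover the whole dataset without gaps or overlaps."""
    div, mod = divmod(dataset_len, num_shards)
    start = index * div + min(index, mod)
    end = start + div + (1 if index < mod else 0)
    return start, end
```

Concatenating all shards produced this way reconstructs the original row order, which is why saving each shard separately can be a workaround when saving one huge dataset is impractical.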
https://api.github.com/repos/huggingface/datasets/issues/124 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/124/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/124/comments | https://api.github.com/repos/huggingface/datasets/issues/124/events | https://github.com/huggingface/datasets/pull/124 | 618,864,284 | MDExOlB1bGxSZXF1ZXN0NDE4NTA3NDUx | 124 | Xsum, require manual download of some files | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"g... | [] | closed | false | null | [] | null | [] | 2020-05-15T10:26:13Z | 2020-05-15T11:04:48Z | 2020-05-15T11:04:46Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/124.diff",
"html_url": "https://github.com/huggingface/datasets/pull/124",
"merged_at": "2020-05-15T11:04:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/124.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/124... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/124/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/124/timeline | null | null | true | |
https://api.github.com/repos/huggingface/datasets/issues/1439 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1439/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1439/comments | https://api.github.com/repos/huggingface/datasets/issues/1439/events | https://github.com/huggingface/datasets/pull/1439 | 760,968,410 | MDExOlB1bGxSZXF1ZXN0NTM1NzA4NDU1 | 1,439 | Update README.md | {
"avatar_url": "https://avatars.githubusercontent.com/u/46425391?v=4",
"events_url": "https://api.github.com/users/tuner007/events{/privacy}",
"followers_url": "https://api.github.com/users/tuner007/followers",
"following_url": "https://api.github.com/users/tuner007/following{/other_user}",
"gists_url": "htt... | [] | closed | false | null | [] | null | [] | 2020-12-10T06:57:01Z | 2020-12-11T15:22:53Z | 2020-12-11T15:22:53Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1439.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1439",
"merged_at": "2020-12-11T15:22:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1439.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 1k-10k -> 1k-1M
3 separate configs are available with min. 1K and max. 211.3k examples | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1439/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1439/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6229 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6229/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6229/comments | https://api.github.com/repos/huggingface/datasets/issues/6229/events | https://github.com/huggingface/datasets/issues/6229 | 1,889,050,954 | I_kwDODunzps5wmKFK | 6,229 | Apply inference on all images in the dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4",
"events_url": "https://api.github.com/users/andysingal/events{/privacy}",
"followers_url": "https://api.github.com/users/andysingal/followers",
"following_url": "https://api.github.com/users/andysingal/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [
"From what I see, `MMSegInferencer` supports NumPy arrays, so replace the line `image_path = example['image']` with `image_path = np.array(example['image'])` to fix the issue (`example[\"image\"]` is a `PIL.Image` object). ",
"> From what I see, `MMSegInferencer` supports NumPy arrays, so replace the line `image_... | 2023-09-10T08:36:12Z | 2023-09-20T16:11:53Z | 2023-09-20T16:11:52Z | NONE | null | null | null | ### Describe the bug
```
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
Cell In[14], line 11
9 for idx, example in enumerate(dataset['train']):
10 image_path = example['image']
---> 11 mask... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6229/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6229/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/253 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/253/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/253/comments | https://api.github.com/repos/huggingface/datasets/issues/253/events | https://github.com/huggingface/datasets/pull/253 | 634,791,939 | MDExOlB1bGxSZXF1ZXN0NDMxMjgwOTYz | 253 | add flue dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"g... | [] | closed | false | null | [] | null | [
"The dummy data file was wrong. I only fixed it for the book config. Even though the tests are all green here, this should also be fixed for all other configs. Could you take a look there @mariamabarham ? ",
"Hi @mariamabarham \r\n\r\nFLUE can indeed become a very interesting benchmark for french NLP !\r\nUnfortu... | 2020-06-08T17:11:09Z | 2023-09-24T09:46:03Z | 2020-07-16T07:50:59Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/253.diff",
"html_url": "https://github.com/huggingface/datasets/pull/253",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/253.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/253"
} | This PR add the Flue dataset as requested in this issue #223 . @lbourdois made a detailed description in that issue.
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/253/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/253/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5233 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5233/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5233/comments | https://api.github.com/repos/huggingface/datasets/issues/5233/events | https://github.com/huggingface/datasets/pull/5233 | 1,447,906,868 | PR_kwDODunzps5C1JVh | 5,233 | Fix shards in IterableDataset.from_generator | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-14T11:42:09Z | 2022-11-14T14:16:03Z | 2022-11-14T14:13:22Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5233.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5233",
"merged_at": "2022-11-14T14:13:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5233.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Allow to define a sharded iterable dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5233/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5233/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3857 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3857/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3857/comments | https://api.github.com/repos/huggingface/datasets/issues/3857/events | https://github.com/huggingface/datasets/issues/3857 | 1,162,525,353 | I_kwDODunzps5FSrqp | 3,857 | Order of dataset changes due to glob.glob. | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_use... | [
{
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library",
"id": 2067400324,
"name": "generic discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion"
}
] | open | false | null | [] | null | [
"I agree using `glob.glob` alone is bad practice because it's not deterministic. Using `sorted` is a nice solution.\r\n\r\nNote that the `xglob` function you are referring to in the `streaming_download_manager.py` code just extends `glob.glob` for URLs - we don't change its behavior. That's why it has no `sorted()`... | 2022-03-08T11:10:30Z | 2022-03-14T11:08:22Z | null | MEMBER | null | null | null | ## Describe the bug
After discussion with @lhoestq, just want to mention here that `glob.glob(...)` should always be used in combination with `sorted(...)` to make sure the list of files returned by `glob.glob(...)` doesn't change depending on the OS system.
There are currently multiple datasets that use `glob.g... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3857/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3857/timeline | null | null | false |
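The issue above recommends always wrapping `glob.glob(...)` in `sorted(...)`, since the bare listing order is filesystem-dependent. A minimal, self-contained demonstration:

```python
import glob
import os
import tempfile

# Create a few files whose bare glob.glob order could differ across
# operating systems and filesystems.
tmp = tempfile.mkdtemp()
for name in ("part-2.txt", "part-0.txt", "part-1.txt"):
    open(os.path.join(tmp, name), "w").close()

# sorted() makes the listing deterministic regardless of the OS.
files = sorted(glob.glob(os.path.join(tmp, "part-*.txt")))
print([os.path.basename(f) for f in files])
```

The same pattern applies to URL globbing in streaming mode, where a non-deterministic order would silently change the dataset's row order.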
https://api.github.com/repos/huggingface/datasets/issues/2145 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2145/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2145/comments | https://api.github.com/repos/huggingface/datasets/issues/2145/events | https://github.com/huggingface/datasets/pull/2145 | 844,603,518 | MDExOlB1bGxSZXF1ZXN0NjAzODMxOTE2 | 2,145 | Implement Dataset add_column | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | {
"closed_at": "2021-05-31T16:20:53Z",
"closed_issues": 3,
"created_at": "2021-04-09T13:16:31Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/u... | [
"#2274 has been merged. You can now merge master into this branch and use `assert_arrow_metadata_are_synced_with_dataset_features(dset)` to make sure that the metadata are good :)"
] | 2021-03-30T14:02:14Z | 2021-04-29T14:50:44Z | 2021-04-29T14:50:43Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2145.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2145",
"merged_at": "2021-04-29T14:50:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2145.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Implement `Dataset.add_column`.
Close #1954. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2145/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2145/timeline | null | null | true |
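The PR above implements `Dataset.add_column`. As a hypothetical pure-Python sketch of its semantics on a dict-of-lists table (length must match the row count, no overwriting an existing column) — not the library's actual code:

```python
def add_column(table, name, values):
    """Add a column to a dict-of-lists table, mimicking the checks
    Dataset.add_column performs (illustrative sketch only)."""
    num_rows = len(next(iter(table.values()))) if table else len(values)
    if len(values) != num_rows:
        raise ValueError(
            f"column length {len(values)} does not match number of rows {num_rows}"
        )
    if name in table:
        raise ValueError(f"column {name!r} already exists")
    # Return a new table rather than mutating in place.
    return {**table, name: list(values)}
```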
https://api.github.com/repos/huggingface/datasets/issues/6267 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6267/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6267/comments | https://api.github.com/repos/huggingface/datasets/issues/6267/events | https://github.com/huggingface/datasets/issues/6267 | 1,916,443,262 | I_kwDODunzps5yOpp- | 6,267 | Multi label class encoding | {
"avatar_url": "https://avatars.githubusercontent.com/u/1000442?v=4",
"events_url": "https://api.github.com/users/jmif/events{/privacy}",
"followers_url": "https://api.github.com/users/jmif/followers",
"following_url": "https://api.github.com/users/jmif/following{/other_user}",
"gists_url": "https://api.gith... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"You can use a `Sequence(ClassLabel(...))` feature type to represent a list of labels, and `cast_column`/`cast` to perform the \"string to label\" conversion (`class_encode_column` does support nested fields), e.g., in your case:\r\n```python\r\nfrom datasets import Dataset, Sequence, ClassLabel\r\ndata = {\r\n ... | 2023-09-27T22:48:08Z | 2023-10-26T18:46:08Z | null | NONE | null | null | null | ### Feature request
I have a multi label dataset and I'd like to be able to class encode the column and store the mapping directly in the features just as I can with a single label column. `class_encode_column` currently does not support multi labels.
Here's an example of what I'd like to encode:
```
data = {
... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6267/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6267/timeline | null | null | false |
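The discussion above handles multi-label columns by casting to `Sequence(ClassLabel(...))`. A library-independent sketch of the string-to-id mapping such a cast effectively performs (assumed behavior for illustration; the real conversion lives inside `datasets`):

```python
def encode_multi_labels(rows):
    """Map lists of string labels to lists of integer ids.

    Builds a sorted label vocabulary from all rows, then encodes each
    row's labels against it — the same idea as Sequence(ClassLabel(...)).
    """
    names = sorted({label for labels in rows for label in labels})
    str2int = {name: i for i, name in enumerate(names)}
    encoded = [[str2int[label] for label in labels] for labels in rows]
    return names, encoded
```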
https://api.github.com/repos/huggingface/datasets/issues/5162 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5162/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5162/comments | https://api.github.com/repos/huggingface/datasets/issues/5162/events | https://github.com/huggingface/datasets/issues/5162 | 1,422,461,112 | I_kwDODunzps5UyQi4 | 5,162 | Pip-compile: Could not find a version that matches dill<0.3.6,>=0.3.6 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8604946?v=4",
"events_url": "https://api.github.com/users/Rijgersberg/events{/privacy}",
"followers_url": "https://api.github.com/users/Rijgersberg/followers",
"following_url": "https://api.github.com/users/Rijgersberg/following{/other_user}",
"gists_ur... | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"Thanks for reporting, @Rijgersberg.\r\n\r\nWe were waiting for the release of `dill` 0.3.6, that happened 2 days ago (24 Oct 2022): https://github.com/uqfoundation/dill/releases/tag/dill-0.3.6\r\n- See comment: https://github.com/huggingface/datasets/pull/4397#discussion_r880629543\r\n\r\nAlso `multiprocess` 0.70.... | 2022-10-25T13:23:50Z | 2022-11-14T08:25:37Z | 2022-10-28T05:38:15Z | NONE | null | null | null | ### Describe the bug
When using `pip-compile` (part of `pip-tools`) to generate a pinned requirements file that includes `datasets`, a version conflict of `dill` appears.
It is caused by a transitive dependency conflict between `datasets` and `multiprocess`.
### Steps to reproduce the bug
```bash
$ echo "dataset... | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5162/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5162/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4928 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4928/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4928/comments | https://api.github.com/repos/huggingface/datasets/issues/4928/events | https://github.com/huggingface/datasets/pull/4928 | 1,360,941,172 | PR_kwDODunzps4-Ubi4 | 4,928 | Add ability to read-write to SQL databases. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4",
"events_url": "https://api.github.com/users/Dref360/events{/privacy}",
"followers_url": "https://api.github.com/users/Dref360/followers",
"following_url": "https://api.github.com/users/Dref360/following{/other_user}",
"gists_url": "https:/... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Ah CI runs with `pandas=1.3.5` which doesn't return the number of row inserted.",
"wow this is super cool!",
"@lhoestq I'm getting error in integration tests, not sure if it's related to my PR. Any help would be appreciated :) \r... | 2022-09-03T19:09:08Z | 2022-10-03T16:34:36Z | 2022-10-03T16:32:28Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4928.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4928",
"merged_at": "2022-10-03T16:32:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4928.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Fixes #3094
Add ability to read/write to SQLite files and also read from any SQL database supported by SQLAlchemy.
I didn't add SQLAlchemy as a dependency, as it is fairly big and it remains optional.
I also recorded a Loom to showcase the feature.
https://www.loom.com/share/f0e602c2de8a46f58bca4b43333d541... | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 2,
"heart": 4,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 8,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4928/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4928/timeline | null | null | true |
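The PR above adds SQL read/write support (backed by pandas/SQLAlchemy per the description). A stdlib `sqlite3` sketch of the same roundtrip idea — writing rows to a table and reading them back — not the actual `to_sql`/`from_sql` API:

```python
import sqlite3

# In-memory SQLite roundtrip, analogous in spirit to exporting a dataset
# to SQL and loading it back (sketch only; the library itself goes
# through pandas / SQLAlchemy).
rows = [(1, "hello"), (2, "world")]
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE data (id INTEGER, text TEXT)")
con.executemany("INSERT INTO data VALUES (?, ?)", rows)
read_back = con.execute("SELECT id, text FROM data ORDER BY id").fetchall()
con.close()
```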
https://api.github.com/repos/huggingface/datasets/issues/1151 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1151/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1151/comments | https://api.github.com/repos/huggingface/datasets/issues/1151/events | https://github.com/huggingface/datasets/pull/1151 | 757,517,092 | MDExOlB1bGxSZXF1ZXN0NTMyODc5ODk4 | 1,151 | adding psc dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/1654113?v=4",
"events_url": "https://api.github.com/users/abecadel/events{/privacy}",
"followers_url": "https://api.github.com/users/abecadel/followers",
"following_url": "https://api.github.com/users/abecadel/following{/other_user}",
"gists_url": "http... | [] | closed | false | null | [] | null | [] | 2020-12-05T02:40:01Z | 2020-12-09T11:38:41Z | 2020-12-09T11:38:41Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1151.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1151",
"merged_at": "2020-12-09T11:38:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1151.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1151/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1151/timeline | null | null | true | |
https://api.github.com/repos/huggingface/datasets/issues/2688 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2688/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2688/comments | https://api.github.com/repos/huggingface/datasets/issues/2688/events | https://github.com/huggingface/datasets/issues/2688 | 949,182,074 | MDU6SXNzdWU5NDkxODIwNzQ= | 2,688 | hebrew language codes he and iw should be treated as aliases | {
"avatar_url": "https://avatars.githubusercontent.com/u/4436747?v=4",
"events_url": "https://api.github.com/users/eyaler/events{/privacy}",
"followers_url": "https://api.github.com/users/eyaler/followers",
"following_url": "https://api.github.com/users/eyaler/following{/other_user}",
"gists_url": "https://ap... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"Hi @eyaler, thanks for reporting.\r\n\r\nWhile you are true with respect the Hebrew language tag (\"iw\" is deprecated and \"he\" is the preferred value), in the \"mc4\" dataset (which is a derived dataset) we have kept the language tags present in the original dataset: [Google C4](https://www.tensorflow.org/datas... | 2021-07-20T23:13:52Z | 2021-07-21T16:34:53Z | 2021-07-21T16:34:53Z | NONE | null | null | null | https://huggingface.co/datasets/mc4 not listed when searching for hebrew datasets (he) as it uses the older language code iw, preventing discoverability. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2688/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2688/timeline | null | completed | false |
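The aliasing requested in the issue above (treating the deprecated ISO 639-1 code `iw` as equivalent to the preferred `he`) can be sketched as a small normalization step before tag matching. The alias table and function names below are illustrative, not part of the actual `datasets` search implementation; a complete solution would use a full registry such as the `langcodes` package.

```python
# Map deprecated ISO 639-1 language codes to their preferred replacements.
# These three aliases are well-known deprecations; a real implementation
# would consult the full IANA language subtag registry.
DEPRECATED_ALIASES = {
    "iw": "he",  # Hebrew
    "in": "id",  # Indonesian
    "ji": "yi",  # Yiddish
}

def normalize_lang(code: str) -> str:
    """Return the preferred language code, resolving deprecated aliases."""
    code = code.lower()
    return DEPRECATED_ALIASES.get(code, code)

def matches_language(dataset_tags, query: str) -> bool:
    """True if any of the dataset's language tags matches the query
    after both sides are normalized."""
    q = normalize_lang(query)
    return any(normalize_lang(t) == q for t in dataset_tags)

# Searching for "he" now also finds a dataset tagged with the legacy "iw":
print(matches_language(["iw"], "he"))  # True
```

With this normalization applied on both the stored tags and the search query, a dataset tagged `iw` surfaces in searches for `he` without rewriting the original metadata.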
https://api.github.com/repos/huggingface/datasets/issues/2089 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2089/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2089/comments | https://api.github.com/repos/huggingface/datasets/issues/2089/events | https://github.com/huggingface/datasets/issues/2089 | 836,788,019 | MDU6SXNzdWU4MzY3ODgwMTk= | 2,089 | Add documentaton for dataset README.md files | {
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "ht... | [] | closed | false | null | [] | null | [
"Hi ! We are using the [datasets-tagging app](https://github.com/huggingface/datasets-tagging) to select the tags to add.\r\n\r\nWe are also adding the full list of tags in #2107 \r\nThis covers multilinguality, language_creators, licenses, size_categories and task_categories.\r\n\r\nIn general if you want to add a... | 2021-03-20T11:44:38Z | 2023-07-25T16:45:38Z | 2023-07-25T16:45:37Z | CONTRIBUTOR | null | null | null | Hi,
the dataset README files have special headers.
However, documentation of the allowed values and tags is missing.
Could you add that?
Just to give some concrete questions that should be answered imo:
- which values can be passed to multilinguality?
- what should be passed to language_creators?
- which valu... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2089/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2089/timeline | null | completed | false |
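The kind of header validation the issue above asks to have documented can be sketched as a lookup against an allowed-values registry. The values shown here are a hypothetical subset for illustration, not the authoritative list maintained by the `datasets-tagging` app:

```python
# Hypothetical subset of allowed values for two dataset-card header fields.
# The real registry lives in the huggingface/datasets-tagging repository.
ALLOWED_VALUES = {
    "multilinguality": {"monolingual", "multilingual", "translation", "other"},
    "language_creators": {"found", "crowdsourced", "expert-generated",
                          "machine-generated", "other"},
}

def invalid_header_values(field: str, values: list) -> list:
    """Return the values not in the allowed set for `field`.

    Unknown fields have an empty allowed set, so all their values
    are reported as invalid.
    """
    allowed = ALLOWED_VALUES.get(field, set())
    return [v for v in values if v not in allowed]

print(invalid_header_values("multilinguality", ["monolingual"]))       # []
print(invalid_header_values("language_creators", ["invented-value"]))  # ['invented-value']
```

A check like this could run in CI over each dataset's README front matter, answering the "which values can be passed" question mechanically rather than by documentation alone.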