| id (int64, 953M–3.35B) | number (int64, 2.72k–7.75k) | title (string, 1–290 chars) | state (string, 2 classes) | created_at (timestamp[s], 2021-07-26 12:21:17 – 2025-08-23 00:18:43) | updated_at (timestamp[s], 2021-07-26 13:27:59 – 2025-08-23 12:34:39) | closed_at (timestamp[s], 2021-07-26 13:27:59 – 2025-08-20 16:35:55, nullable ⌀) | html_url (string, 49–51 chars) | pull_request (dict) | user_login (string, 3–26 chars) | is_pull_request (bool, 2 classes) | comments (list, 0–30 items) |
|---|---|---|---|---|---|---|---|---|---|---|---|
1,093,833,446
| 3,526
|
Update license to bookcorpus dataset card
|
closed
| 2022-01-04T23:25:23
| 2022-09-30T10:23:38
| 2022-09-30T10:21:20
|
https://github.com/huggingface/datasets/pull/3526
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3526",
"html_url": "https://github.com/huggingface/datasets/pull/3526",
"diff_url": "https://github.com/huggingface/datasets/pull/3526.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3526.patch",
"merged_at": "2022-09-30T10:21:20"
}
|
meg-huggingface
| true
|
[
"The smashwords ToS apply for this dataset, we did the same for https://github.com/huggingface/datasets/pull/3525",
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,093,831,268
| 3,525
|
Adding license information for Openbookcorpus
|
closed
| 2022-01-04T23:20:36
| 2022-04-20T09:54:30
| 2022-04-20T09:48:10
|
https://github.com/huggingface/datasets/pull/3525
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3525",
"html_url": "https://github.com/huggingface/datasets/pull/3525",
"diff_url": "https://github.com/huggingface/datasets/pull/3525.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3525.patch",
"merged_at": "2022-04-20T09:48:10"
}
|
meg-huggingface
| true
|
[
"The MIT license seems to be for the crawling code, no ? Then maybe we can also redirect users to the [terms of smashwords.com](https://www.smashwords.com/about/tos) regarding copyrights, in particular the paragraph 10 for end-users. In particular it seems that end users can download and use the content \"for their personal enjoyment in any reasonable non-commercial manner in compliance with copyright law\" and the smashwords end-users agreement.\r\n\r\nIt should be the same for https://github.com/huggingface/datasets/pull/3526 as well",
"May I merge this one ?",
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,093,826,723
| 3,524
|
Adding link to license.
|
closed
| 2022-01-04T23:11:48
| 2022-01-05T12:31:38
| 2022-01-05T12:31:37
|
https://github.com/huggingface/datasets/pull/3524
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3524",
"html_url": "https://github.com/huggingface/datasets/pull/3524",
"diff_url": "https://github.com/huggingface/datasets/pull/3524.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3524.patch",
"merged_at": "2022-01-05T12:31:37"
}
|
meg-huggingface
| true
|
[] |
1,093,819,227
| 3,523
|
Added links to licensing and PII message in vctk dataset
|
closed
| 2022-01-04T22:56:58
| 2022-01-06T19:33:50
| 2022-01-06T19:33:50
|
https://github.com/huggingface/datasets/pull/3523
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3523",
"html_url": "https://github.com/huggingface/datasets/pull/3523",
"diff_url": "https://github.com/huggingface/datasets/pull/3523.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3523.patch",
"merged_at": "2022-01-06T19:33:50"
}
|
mcmillanmajora
| true
|
[] |
1,093,807,586
| 3,522
|
wmt19 is broken (zh-en)
|
closed
| 2022-01-04T22:33:45
| 2022-05-06T16:27:37
| 2022-05-06T16:27:37
|
https://github.com/huggingface/datasets/issues/3522
| null |
AjayP13
| false
|
[
"This issue is not reproducible."
] |
1,093,797,947
| 3,521
|
Vivos license update
|
closed
| 2022-01-04T22:17:47
| 2022-01-04T22:18:16
| 2022-01-04T22:18:16
|
https://github.com/huggingface/datasets/pull/3521
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3521",
"html_url": "https://github.com/huggingface/datasets/pull/3521",
"diff_url": "https://github.com/huggingface/datasets/pull/3521.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3521.patch",
"merged_at": null
}
|
mcmillanmajora
| true
|
[] |
1,093,747,753
| 3,520
|
Audio datacard update - first pass
|
closed
| 2022-01-04T20:58:25
| 2022-01-05T12:30:21
| 2022-01-05T12:30:20
|
https://github.com/huggingface/datasets/pull/3520
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3520",
"html_url": "https://github.com/huggingface/datasets/pull/3520",
"diff_url": "https://github.com/huggingface/datasets/pull/3520.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3520.patch",
"merged_at": "2022-01-05T12:30:20"
}
|
meg-huggingface
| true
|
[
"I'm not sure that we want to change the tags at the top of the cards by hand. Those are used to create the tags in the hub. Although looking at all the tags now, we might want to normalize the current tags again (hyphens or no, \".0\" or no). Maybe we could add a binary tag for public domain or not?",
"> \r\n\r\nThat's a good point, I didn't realize these were auto-populated.\r\nAt the same time, some of them are wrong -- how/where are they auto-populated? Seems like we should fix it at that source for the future.\r\nIn the mean time, I see that \"cc0-1.0\" is the desired tag for public domain, so I will change that for now."
] |
1,093,655,205
| 3,519
|
CC100: Using HTTPS for the data source URL fixes load_dataset()
|
closed
| 2022-01-04T18:45:54
| 2022-01-05T17:28:34
| 2022-01-05T17:28:34
|
https://github.com/huggingface/datasets/pull/3519
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3519",
"html_url": "https://github.com/huggingface/datasets/pull/3519",
"diff_url": "https://github.com/huggingface/datasets/pull/3519.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3519.patch",
"merged_at": "2022-01-05T17:28:34"
}
|
aajanki
| true
|
[] |
1,093,063,455
| 3,518
|
Add PubMed Central Open Access dataset
|
closed
| 2022-01-04T06:54:35
| 2022-01-17T15:25:57
| 2022-01-17T15:25:57
|
https://github.com/huggingface/datasets/issues/3518
| null |
albertvillanova
| false
|
[
"In the framework of BigScience:\r\n- bigscience-workshop/data_tooling#121\r\n\r\nI have created this dataset as a community dataset: https://huggingface.co/datasets/albertvillanova/pmc_open_access\r\n\r\nHowever, I was wondering that it may be more appropriate to move it under an org namespace: `pubmed_central` or `pmc`\r\nThis way, we could add other datasets I'm also working on: Author Manuscript Dataset, Historical OCR Dataset, LitArch Open Access Subset.\r\n\r\nWhat do you think? @lhoestq @mariosasko ",
"Why not ! Having them under such namespaces would also help people searching for this kind of datasets.\r\nWe can also invite people from pubmed at one point",
"DONE: https://huggingface.co/datasets/pmc/open_access"
] |
1,092,726,651
| 3,517
|
Add CPPE-5 dataset
|
closed
| 2022-01-03T18:31:20
| 2022-01-19T02:23:37
| 2022-01-05T18:53:02
|
https://github.com/huggingface/datasets/pull/3517
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3517",
"html_url": "https://github.com/huggingface/datasets/pull/3517",
"diff_url": "https://github.com/huggingface/datasets/pull/3517.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3517.patch",
"merged_at": "2022-01-05T18:53:02"
}
|
mariosasko
| true
|
[
"Thanks so much, @mariosasko and @lhoestq , much appreciated!"
] |
1,092,657,738
| 3,516
|
dataset `asset` - change to raw.githubusercontent.com URLs
|
closed
| 2022-01-03T16:43:57
| 2022-01-03T17:39:02
| 2022-01-03T17:39:01
|
https://github.com/huggingface/datasets/pull/3516
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3516",
"html_url": "https://github.com/huggingface/datasets/pull/3516",
"diff_url": "https://github.com/huggingface/datasets/pull/3516.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3516.patch",
"merged_at": "2022-01-03T17:39:01"
}
|
VictorSanh
| true
|
[] |
1,092,624,695
| 3,515
|
`ExpectedMoreDownloadedFiles` for `evidence_infer_treatment`
|
closed
| 2022-01-03T15:58:38
| 2022-02-14T13:21:43
| 2022-02-14T13:21:43
|
https://github.com/huggingface/datasets/issues/3515
| null |
VictorSanh
| false
|
[
"Thanks for reporting @VictorSanh.\r\n\r\nI'm looking at it... "
] |
1,092,606,383
| 3,514
|
Fix to_tf_dataset references in docs
|
closed
| 2022-01-03T15:31:39
| 2022-01-05T18:52:48
| 2022-01-05T18:52:48
|
https://github.com/huggingface/datasets/pull/3514
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3514",
"html_url": "https://github.com/huggingface/datasets/pull/3514",
"diff_url": "https://github.com/huggingface/datasets/pull/3514.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3514.patch",
"merged_at": "2022-01-05T18:52:47"
}
|
mariosasko
| true
|
[
"The code snippet in [this section](https://huggingface.co/docs/datasets/master/use_dataset.html?highlight=to_tf_dataset#tensorflow) is missing an import (`DataCollatorWithPadding`) and doesn't initialize the TF model before the `model.fit` call."
] |
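The comment above names two fixes for the docs snippet: a missing `DataCollatorWithPadding` import and a TF model that is never created before `model.fit`. A minimal runnable sketch of the corrected flow; the checkpoint and the GLUE/SST-2 data are illustrative choices, not the ones from the docs:

```python
import tensorflow as tf
from datasets import load_dataset
from transformers import AutoTokenizer, DataCollatorWithPadding, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
raw = load_dataset("glue", "sst2", split="train[:64]")
tokenized = raw.map(lambda ex: tokenizer(ex["sentence"], truncation=True), batched=True)

# The import the docs snippet was missing:
collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf")
tf_dataset = tokenized.to_tf_dataset(
    columns=["input_ids", "token_type_ids", "attention_mask"],
    label_cols=["label"],
    batch_size=8,
    shuffle=True,
    collate_fn=collator,
)

# The model must be created and compiled before calling fit():
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)
model.compile(
    optimizer=tf.keras.optimizers.Adam(3e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(tf_dataset, epochs=1)
```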
1,092,569,802
| 3,513
|
Add desc parameter to filter
|
closed
| 2022-01-03T14:44:18
| 2022-01-05T18:31:25
| 2022-01-05T18:31:25
|
https://github.com/huggingface/datasets/pull/3513
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3513",
"html_url": "https://github.com/huggingface/datasets/pull/3513",
"diff_url": "https://github.com/huggingface/datasets/pull/3513.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3513.patch",
"merged_at": "2022-01-05T18:31:24"
}
|
mariosasko
| true
|
[] |
1,092,359,973
| 3,512
|
No Data format found
|
closed
| 2022-01-03T09:41:11
| 2022-01-17T13:26:05
| 2022-01-17T13:26:05
|
https://github.com/huggingface/datasets/issues/3512
| null |
shazzad47
| false
|
[
"Hi, which dataset is giving you an error?"
] |
1,092,170,411
| 3,511
|
Dataset
|
closed
| 2022-01-03T02:03:23
| 2022-01-03T08:41:26
| 2022-01-03T08:23:07
|
https://github.com/huggingface/datasets/issues/3511
| null |
MIKURI0114
| false
|
[
"Can you reopen with the correct dataset name (if relevant)?\r\n\r\nThanks",
"The dataset viewer was down tonight. It works again."
] |
1,091,997,004
| 3,510
|
`wiki_dpr` details for Open Domain Question Answering tasks
|
closed
| 2022-01-02T11:04:01
| 2022-02-17T13:46:20
| 2022-02-17T13:46:20
|
https://github.com/huggingface/datasets/issues/3510
| null |
pk1130
| false
|
[
"Hi ! According to the DPR paper, the wikipedia dump is the one from Dec. 20, 2018.\r\nEach instance contains a paragraph of at most 100 word, as well as the title of the wikipedia page it comes from and the DPR embedding (a 768-d vector).",
"Closed by:\r\n- #3534"
] |
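Per the answer above, each `wiki_dpr` instance pairs a passage of at most ~100 words with its Wikipedia page title and a 768-d DPR embedding. A hedged sketch for inspecting one record: the `psgs_w100.nq.no_index` config name comes from the dataset card, streaming is used only to avoid the full download (and may not work for every config), and recent `datasets` versions may require `trust_remote_code=True` for this script-based dataset:

```python
from datasets import load_dataset

ds = load_dataset("wiki_dpr", "psgs_w100.nq.no_index", split="train", streaming=True)
passage = next(iter(ds))

print(passage["title"])            # title of the source Wikipedia page
print(passage["text"])             # passage of at most ~100 words
print(len(passage["embeddings"]))  # 768, the DPR embedding dimension
```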
1,091,214,808
| 3,507
|
Discuss whether support canonical datasets w/o dataset_infos.json and/or dummy data
|
closed
| 2021-12-30T17:04:25
| 2022-11-04T15:31:38
| 2022-11-04T15:31:37
|
https://github.com/huggingface/datasets/issues/3507
| null |
albertvillanova
| false
|
[
"IMO, the data streaming test is good enough of a test that the dataset works correctly (assuming that we can more or less ensure that if streaming works then the non-streaming case will also work), so that for datasets that have a working dataset preview, we can remove the dummy data IMO. On the other hand, it seems like not all datasets have streaming enabled yet and for those datasets (if they are used a lot), I think it would be nice to continue testing some dummy data.\r\n\r\nI don't really have an opinion regarding the JSON metadata as I don't know enough about it.\r\n\r\n",
"I don't know all the details, but generally I'd be in favor of unifying the metadata formats into YAML inside .md (and so deprecating the dataset_infos.json) \r\n\r\n(Ultimately the CI can run on \"HuggingFace Actions\" instead of on GitHub)",
"The dataset_infos.json file currently has these useful infos for each dataset configuration, that I think can be moved to the dataset tags:\r\n- Size of the dataset in MB: download size, arrow file size, and total size (sum of download + arrow)\r\n- Size of each split in MB and number of examples. Again this can be moved to the dataset tags\r\n- Feature type of each column\r\n- supported task templates (it defines what columns correspond to the features and labels for example)\r\n\r\nBut it also has this, which I'm not sure if it should be in the tags or not:\r\n- Checksums of the downloaded files for integrity verifications\r\n\r\nSo ultimately this file could probably be deprecated in favor of having the infos in the tags.\r\n\r\n> Also note that for generating both (dataset_infos.json file and dummy data), the entire dataset needs being downloaded. This can be an issue for huge datasets (like WIT, with 400 GB of data).\r\n\r\nTo get the exact number of examples and size in MB of the dataset, one needs to download and generate it completely. IMO these infos are very important when someone considers using a dataset. Though using streaming we could do some extrapolation to have approximate values instead.\r\n\r\nFor the integrity verifications we also need the number of examples and the checksums of the downloaded files, so it requires the dataset to be fully downloaded once. This can be optional though.\r\n\r\n> IMO, the data streaming test is good enough of a test that the dataset works correctly (assuming that we can more or less ensure that if streaming works then the non-streaming case will also work)\r\n\r\nI agree with this. Usually if a dataset works in streaming mode, then it works in non-streaming mode (the other way around is not true though).\r\n\r\n> On the other hand, it seems like not all datasets have streaming enabled yet and for those datasets (if they are used a lot), I think it would be nice to continue testing some dummy data.\r\n\r\nYes indeed, or at least make sure that it was tested on the true data.",
"(note that if we wanted to display sizes, etc we could also pretty easily parse the `dataset_infos.json` on the hub side)",
"I agree that we can move the relevant parts of `dataset_infos.json` to the YAML tags.\r\n\r\n> On the other hand, it seems like not all datasets have streaming enabled yet and for those datasets (if they are used a lot), I think it would be nice to continue testing some dummy data. <\r\n> > Yes indeed, or at least make sure that it was tested on the true data.\r\n\r\nI like the idea of testing streaming and falling back to the dummy data test if streaming does not work. Generating dummy data can be very tedious, so this would be a nice incentive for the contributors to make their datasets streamable. ",
"CC: @severo ",
"About dummy data, please see e.g. this PR: https://github.com/huggingface/datasets/pull/3692/commits/62368daac0672041524a471386d5e78005cf357a\r\n- I updated the previous dummy data: I just had to rename the file and its directory\r\n - the dummy data zip contains only a single file: `pubmed22n0001.xml.gz`\r\n\r\nThen I discover it fails: https://app.circleci.com/pipelines/github/huggingface/datasets/9800/workflows/173a4433-8feb-4fc6-ab9e-59762084e3e1/jobs/60437\r\n```\r\nNo such file or directory: '.../dummy_data/pubmed22n0002.xml.gz'\r\n```\r\n- it needs dummy data for all the 1114 files: \r\n `_URLs = [f\"ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/pubmed22n{i:04d}.xml.gz\" for i in range(1, 1115)]`\r\n- this confirms me that it never passed the test: these dummy data files were not present before my PR\r\n- therefore, is it really useful the data test if we just ignore it when it does not pass?\r\n\r\nIn relation with JSON metadata, I'm generating the file for `pubmed` (see above) in a GCP instance: it's running for more than 3 hours and only 9 million examples generated so far (before my PR, it had 32 million, now it has more).",
"I mention in https://github.com/huggingface/datasets-server/wiki/Preliminary-design that the future \"datasets server\" could be in charge of generating both the dummy data and the dataset-info.json file if required (or their equivalent).",
"Hi ! I think dummy data generation is out of scope for the datasets server, since it's about generating the original data files.\r\n\r\nThat would be amazing to have it generate the dataset_infos.json though !",
"From some offline discussion with @mariosasko and especially for vision datasets, we'll probably not require dummy data anymore and use streaming instead :) This will make adding a new dataset much easier.\r\nThis should also make sure that streaming works as expected directly in the CI, without having to check the dataset viewer once the PR is merged",
"OK. I removed the \"dummy data\" item from the services of the dataset server",
"It seems that migration from dataset-info.json to dataset card YAML has been acted.\r\n\r\nProbably it's a good idea, but I didn't find the pros and cons of this decision, so I put some I could think of:\r\n\r\npros:\r\n- only one file to parse, share, sync\r\n- it gives a hint to the users that if you write your dataset card, you should also specify the metadata\r\n\r\ncons:\r\n- the metadata header might be very long, before reaching the start of the README/dataset card. It might be surprising when you edit the file because the metadata is not shown on top when the dataset card is rendered (dataset page). It also somewhat prevents including large strings like the checksums\r\n- YAML vs JSON: not sure which one is easier for users to fill and maintain\r\n- two concepts are mixed in the same file (metadata and documentation). This means that if you're interested only in one of them, you still have to know how to parse the whole file.\r\n- [low priority] besides the JSON file, we might want to support yaml or toml file if the user prefers (as [prettier](https://prettier.io/docs/en/configuration.html) and others do for their config files, for example). Inside the md, I understand that only YAML is allowed",
"> the metadata header might be very long, before reaching the start of the README/dataset card. It might be surprising when you edit the file because the metadata is not shown on top when the dataset card is rendered (dataset page). It also somewhat prevents including large strings like the checksums\r\n\r\nNote that we could simply not have the checksums in the YAML metadata at all, or maybe at one point have a pointer to another file instead.\r\n\r\nWe can also choose to hide (collapse) certain sections in the YAML by default when we open the dataset card editor.\r\n\r\n> two concepts are mixed in the same file (metadata and documentation). This means that if you're interested only in one of them, you still have to know how to parse the whole file.\r\n\r\nI think it's fine for now. Later if we really end up with too many YAML sections we can see if we need to tweak the API endpoints or the `datasets`/`huggingface_hub` tools\r\n\r\n> YAML vs JSON: not sure which one is easier for users to fill and maintain\r\n\r\nRegarding YAML vs JSON: I think YAML is easier to write by hand, and I also think that it's better for consistency - i.e. we're using more and more YAML to configure models/datasets/spaces",
"I didn't know the decision was already taken. Good to know. 😅",
"> the metadata header might be very long, before reaching the start of the README/dataset card. It might be surprising when you edit the file because the metadata is not shown on top when the dataset card is rendered (dataset page). It also somewhat prevents including large strings like the checksums\r\n\r\nWe can definitely work on this on the hub side to make the UX better",
"Tensorflow Datasets catalog includes a community catalog where you can find and use HF datasets (see [here](https://www.tensorflow.org/datasets/community_catalog/huggingface)).\r\n\r\nFYI I noticed today that they are using the exported dataset_infos.json files from github to get the metadata (see their code [here](https://github.com/tensorflow/datasets/blob/a482f01c036a10496f5e22e69a2ef81b707cc418/tensorflow_datasets/scripts/documentation/build_community_catalog.py#L261))",
"Metadata is now stored as YAML, and dummy data is deprecated, so I think we can close this issue."
] |
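The thread above ends with the metadata migrated from `dataset_infos.json` into the YAML header of the dataset card. A small sketch of reading that header programmatically, assuming `huggingface_hub`'s `DatasetCard` API; the dataset name is illustrative:

```python
from huggingface_hub import DatasetCard

card = DatasetCard.load("rotten_tomatoes")  # illustrative dataset
# card.data holds the parsed YAML header, including the dataset_info
# section that replaced dataset_infos.json.
print(card.data.to_dict().keys())
```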
1,091,166,595
| 3,506
|
Allows DatasetDict.filter to have batching option
|
closed
| 2021-12-30T15:22:22
| 2022-01-04T10:24:28
| 2022-01-04T10:24:27
|
https://github.com/huggingface/datasets/pull/3506
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3506",
"html_url": "https://github.com/huggingface/datasets/pull/3506",
"diff_url": "https://github.com/huggingface/datasets/pull/3506.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3506.patch",
"merged_at": "2022-01-04T10:24:27"
}
|
thomasw21
| true
|
[] |
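A minimal sketch of the batching option that #3506 adds to `DatasetDict.filter`: with `batched=True` the predicate receives a batch (a dict of lists) and must return a list of booleans. The dataset and predicate are illustrative:

```python
from datasets import load_dataset

dsets = load_dataset("rotten_tomatoes")  # a DatasetDict
filtered = dsets.filter(
    lambda batch: [len(text) > 100 for text in batch["text"]],
    batched=True,
    batch_size=1000,
)
print({split: len(ds) for split, ds in filtered.items()})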
1,091,150,820
| 3,505
|
cast_column function not working with map function in streaming mode for Audio features
|
closed
| 2021-12-30T14:52:01
| 2022-01-18T19:54:07
| 2022-01-18T19:54:07
|
https://github.com/huggingface/datasets/issues/3505
| null |
ashu5644
| false
|
[
"Hi! This is probably due to the fact that `IterableDataset.map` sets `features` to `None` before mapping examples. We can fix the issue by passing the old `features` dict to the map generator and performing encoding/decoding there (before calling the map transform function)."
] |
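For reference, the pattern this issue exercises: resampling a streamed audio column with `cast_column` before mapping. A hedged sketch; the dataset is an illustrative choice:

```python
from datasets import load_dataset, Audio

ds = load_dataset("PolyAI/minds14", "en-US", split="train", streaming=True)
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

sample = next(iter(ds))
print(sample["audio"]["sampling_rate"])  # 16000 once decoding applies the cast
```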
1,090,682,230
| 3,504
|
Unable to download PUBMED_title_abstracts_2019_baseline.jsonl.zst
|
closed
| 2021-12-29T18:23:20
| 2024-05-20T09:44:59
| 2022-02-17T15:04:25
|
https://github.com/huggingface/datasets/issues/3504
| null |
ToddMorrill
| false
|
[
"Hi @ToddMorrill, thanks for reporting.\r\n\r\nThree weeks ago I contacted the team who created the Pile dataset to report this issue with their data host server: https://the-eye.eu\r\n\r\nThey told me that unfortunately, the-eye was heavily affected by the recent tornado catastrophe in the US. They hope to have their data back online asap.",
"Hi @ToddMorrill, people from the Pile team have mirrored their data in a new host server: https://mystic.the-eye.eu\r\n\r\nSee:\r\n- #3627\r\n\r\nIt should work if you update your URL.\r\n\r\nWe should also update the URL in our course material.",
"The old URL is still present in the HuggingFace course here: \r\nhttps://huggingface.co/course/chapter5/4?fw=pt\r\n\r\nI have created a PR for the Notebook here: https://github.com/huggingface/notebooks/pull/148\r\nNot sure if the HTML is in a public repo. I wasn't able to find it. ",
"Fixed the other two URLs here: \r\nhttps://github.com/mwunderlich/notebooks/pull/1",
"Both URLs are broken now\r\n`HTTPError: 404 Client Error: Not Found for URL: https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst`\r\nAnd\r\n`ConnectTimeout: HTTPSConnectionPool(host='mystic.the-eye.eu', port=443): Max retries exceeded with url: /public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst (Caused by ConnectTimeoutError(, 'Connection to mystic.the-eye.eu timed out. (connect timeout=10.0)'))`\r\n\r\n\r\n",
"I was able to find a torrent with \"The Pile\" dataset here: [The Pile An 800GB Dataset of Diverse Text for Language Modeling ](https://academictorrents.com/details/0d366035664fdf51cfbe9f733953ba325776e667)\r\n\r\nThe complete dataset is huge, so I would suggest you to download only the \"PUBMED_title_abstracts_2019_baseline.jsonl.zst\" file, which is about 7GB. You can do this by using a torrent client of your choice (I typically utilize Transmission, which is pre-installed in Ubuntu distributions).\r\n\r\n",
"@albertvillanova another issue:\r\n```\r\n15 experiment_compute_diveristy_coeff_single_dataset_then_combined_datasets_with_domain_weights()\r\n16 File \"/lfs/ampere1/0/brando9/beyond-scale-language-data-diversity/src/diversity/div_coeff.py\", line 474, in experiment_compute_diveristy_coeff_single_dataset_then_combined_datasets_with_domain_weights\r\n17 column_names = next(iter(dataset)).keys()\r\n18 File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/datasets/iterable_dataset.py\", line 1353, in __iter__\r\n19 for key, example in ex_iterable:\r\n20 File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/datasets/iterable_dataset.py\", line 207, in __iter__\r\n21 yield from self.generate_examples_fn(**self.kwargs)\r\n22 File \"/lfs/ampere1/0/brando9/.cache/huggingface/modules/datasets_modules/datasets/EleutherAI--pile/ebea56d358e91cf4d37b0fde361d563bed1472fbd8221a21b38fc8bb4ba554fb/pile.py\", line 236, in _generate_examples\r\n23 with zstd.open(open(files[subset], \"rb\"), \"rt\", encoding=\"utf-8\") as f:\r\n24 File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/datasets/streaming.py\", line 74, in wrapper\r\n25 return function(*args, download_config=download_config, **kwargs)\r\n26 File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py\", line 496, in xopen\r\n27 file_obj = fsspec.open(file, mode=mode, *args, **kwargs).open()\r\n28 File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/core.py\", line 134, in open\r\n29 return self.__enter__()\r\n30 File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/core.py\", line 102, in __enter__\r\n31 f = self.fs.open(self.path, mode=mode)\r\n32 File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/spec.py\", line 1241, in open\r\n33 f = self._open(\r\n34 File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/implementations/http.py\", line 356, in _open\r\n35 size = size or self.info(path, **kwargs)[\"size\"]\r\n36 File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/asyn.py\", line 121, in wrapper\r\n37 return sync(self.loop, func, *args, **kwargs)\r\n38 File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/asyn.py\", line 106, in sync\r\n39 raise return_result\r\n40 File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/asyn.py\", line 61, in _runner\r\n41 result[0] = await coro\r\n42 File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/implementations/http.py\", line 430, in _info\r\n43 raise FileNotFoundError(url) from exc\r\n44 FileNotFoundError: https://the-eye.eu/public/AI/pile_preliminary_components/NIH_ExPORTER_awarded_grant_text.jsonl.zst\r\n```\r\n\r\nany suggestions?",
"related: https://github.com/huggingface/datasets/issues/6144",
"this seems to work but it's rather annoying.\r\n\r\nSummary of how to make it work:\r\n1. get urls to parquet files into a list\r\n2. load list to load_dataset via `load_dataset('parquet', data_files=urls)` (note api names to hf are really confusing sometimes)\r\n3. then it should work, print a batch of text.\r\n\r\npresudo code\r\n```python\r\nurls_hacker_news = [\r\n \"https://huggingface.co/datasets/EleutherAI/pile/resolve/refs%2Fconvert%2Fparquet/hacker_news/pile-train-00000-of-00004.parquet\",\r\n \"https://huggingface.co/datasets/EleutherAI/pile/resolve/refs%2Fconvert%2Fparquet/hacker_news/pile-train-00001-of-00004.parquet\",\r\n \"https://huggingface.co/datasets/EleutherAI/pile/resolve/refs%2Fconvert%2Fparquet/hacker_news/pile-train-00002-of-00004.parquet\",\r\n \"https://huggingface.co/datasets/EleutherAI/pile/resolve/refs%2Fconvert%2Fparquet/hacker_news/pile-train-00003-of-00004.parquet\"\r\n]\r\n\r\n...\r\n\r\n\r\n # streaming = False\r\n from diversity.pile_subset_urls import urls_hacker_news\r\n path, name, data_files = 'parquet', 'hacker_news', urls_hacker_news\r\n # not changing\r\n batch_size = 512\r\n today = datetime.datetime.now().strftime('%Y-m%m-d%d-t%Hh_%Mm_%Ss')\r\n run_name = f'{path} div_coeff_{num_batches=} ({today=} ({name=}) {data_mixture_name=} {probabilities=})'\r\n print(f'{run_name=}')\r\n\r\n # - Init wandb\r\n debug: bool = mode == 'dryrun'\r\n run = wandb.init(mode=mode, project=\"beyond-scale\", name=run_name, save_code=True)\r\n wandb.config.update({\"num_batches\": num_batches, \"path\": path, \"name\": name, \"today\": today, 'probabilities': probabilities, 'batch_size': batch_size, 'debug': debug, 'data_mixture_name': data_mixture_name, 'streaming': streaming, 'data_files': data_files})\r\n # run.notify_on_failure() # https://community.wandb.ai/t/how-do-i-set-the-wandb-alert-programatically-for-my-current-run/4891\r\n print(f'{debug=}')\r\n print(f'{wandb.config=}')\r\n\r\n # -- Get probe network\r\n from datasets import load_dataset\r\n import torch\r\n from transformers import GPT2Tokenizer, GPT2LMHeadModel\r\n\r\n tokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\")\r\n if tokenizer.pad_token_id is None:\r\n tokenizer.pad_token = tokenizer.eos_token\r\n probe_network = GPT2LMHeadModel.from_pretrained(\"gpt2\")\r\n device = torch.device(f\"cuda:{0}\" if torch.cuda.is_available() else \"cpu\")\r\n probe_network = probe_network.to(device)\r\n\r\n # -- Get data set\r\n def my_load_dataset(path, name):\r\n print(f'{path=} {name=} {streaming=}')\r\n if path == 'json' or path == 'bin' or path == 'csv':\r\n print(f'{data_files_prefix+name=}')\r\n return load_dataset(path, data_files=data_files_prefix+name, streaming=streaming, split=\"train\").with_format(\"torch\")\r\n elif path == 'parquet':\r\n print(f'{data_files=}')\r\n return load_dataset(path, data_files=data_files, streaming=streaming, split=\"train\").with_format(\"torch\")\r\n else:\r\n return load_dataset(path, name, streaming=streaming, split=\"train\").with_format(\"torch\")\r\n # - get data set for real now\r\n if isinstance(path, str):\r\n dataset = my_load_dataset(path, name)\r\n else:\r\n print('-- interleaving datasets')\r\n datasets = [my_load_dataset(path, name).with_format(\"torch\") for path, name in zip(path, name)]\r\n [print(f'{dataset.description=}') for dataset in datasets]\r\n dataset = interleave_datasets(datasets, probabilities)\r\n print(f'{dataset=}')\r\n batch = dataset.take(batch_size)\r\n print(f'{next(iter(batch))=}')\r\n column_names = next(iter(batch)).keys()\r\n 
print(f'{column_names=}')\r\n\r\n # - Prepare functions to tokenize batch\r\n def preprocess(examples):\r\n return tokenizer(examples[\"text\"], padding=\"max_length\", max_length=128, truncation=True, return_tensors=\"pt\")\r\n remove_columns = column_names # remove all keys that are not tensors to avoid bugs in collate function in task2vec's pytorch data loader\r\n def map(batch):\r\n return batch.map(preprocess, batched=True, remove_columns=remove_columns)\r\n tokenized_batch = map(batch)\r\n print(f'{next(iter(tokenized_batch))=}')\r\n```\r\n\r\nhttps://stackoverflow.com/questions/76891189/how-to-download-data-from-hugging-face-that-is-visible-on-the-data-viewer-but-th/76902681#76902681\r\n\r\nhttps://discuss.huggingface.co/t/how-to-download-data-from-hugging-face-that-is-visible-on-the-data-viewer-but-the-files-are-not-available/50555/5?u=severo",
"If some people stumble upon this thread and still have this problem, i reuploaded the dataset to HF [here](https://huggingface.co/datasets/casinca/PUBMED_title_abstracts_2019_baseline)\r\n\r\nIts the exact same dataset you just have to change the url from the course, for example:\r\n\r\n```python\r\nfrom datasets import load_dataset, DownloadConfig\r\n\r\ndata_files = \"https://huggingface.co/datasets/casinca/PUBMED_title_abstracts_2019_baseline/resolve/main/PUBMED_title_abstracts_2019_baseline.jsonl.zst\"\r\npubmed_dataset = load_dataset(\r\n \"json\",\r\n data_files=data_files,\r\n split=\"train\",\r\n download_config=DownloadConfig(delete_extracted=True), # optional argument\r\n)\r\n```"
] |
1,090,472,735
| 3,503
|
Batched in filter throws error
|
closed
| 2021-12-29T12:01:04
| 2022-01-04T10:24:27
| 2022-01-04T10:24:27
|
https://github.com/huggingface/datasets/issues/3503
| null |
gpucce
| false
|
[] |
1,090,438,558
| 3,502
|
Add QuALITY
|
closed
| 2021-12-29T10:58:46
| 2022-10-03T09:36:14
| 2022-10-03T09:36:14
|
https://github.com/huggingface/datasets/pull/3502
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3502",
"html_url": "https://github.com/huggingface/datasets/pull/3502",
"diff_url": "https://github.com/huggingface/datasets/pull/3502.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3502.patch",
"merged_at": null
}
|
jaketae
| true
|
[
"Thanks for your contribution, @jaketae. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if you need some help."
] |
1,090,413,758
| 3,501
|
Update pib dataset card
|
closed
| 2021-12-29T10:14:40
| 2021-12-29T11:13:21
| 2021-12-29T11:13:21
|
https://github.com/huggingface/datasets/pull/3501
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3501",
"html_url": "https://github.com/huggingface/datasets/pull/3501",
"diff_url": "https://github.com/huggingface/datasets/pull/3501.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3501.patch",
"merged_at": "2021-12-29T11:13:21"
}
|
albertvillanova
| true
|
[] |
1,090,406,133
| 3,500
|
Docs: Add VCTK dataset description
|
closed
| 2021-12-29T10:02:05
| 2022-01-04T10:46:02
| 2022-01-04T10:25:09
|
https://github.com/huggingface/datasets/pull/3500
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3500",
"html_url": "https://github.com/huggingface/datasets/pull/3500",
"diff_url": "https://github.com/huggingface/datasets/pull/3500.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3500.patch",
"merged_at": "2022-01-04T10:25:09"
}
|
jaketae
| true
|
[] |
1,090,132,618
| 3,499
|
Adjusting chunk size for streaming datasets
|
closed
| 2021-12-28T21:17:53
| 2022-05-06T16:29:05
| 2022-05-06T16:29:05
|
https://github.com/huggingface/datasets/issues/3499
| null |
JoelNiklaus
| false
|
[
"Hi ! Data streaming uses `fsspec` to read the data files progressively. IIRC the block size for buffering is 5MiB by default. So every time you finish iterating over a block, it downloads the next one. You can still try to increase the `fsspec` block size for buffering if it can help. To do so you just need to increase `fsspec.spec.AbstractBufferedFile.DEFAULT_BLOCK_SIZE `\r\n\r\nCurrently this is unfortunately done in a single thread, so it blocks the processing to download and uncompress the next block. At one point it would be nice to be able to do that in parallel !",
"Hi! Thanks for the help, I will try it :)"
] |
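A sketch of the workaround suggested in the first comment: raise `fsspec`'s buffering block size before streaming. The 32 MiB value and the dataset are illustrative; the default is about 5 MiB:

```python
import fsspec
from datasets import load_dataset

# Larger blocks mean fewer, bigger sequential downloads while iterating.
fsspec.spec.AbstractBufferedFile.DEFAULT_BLOCK_SIZE = 32 * 2**20

ds = load_dataset("allenai/c4", "en", split="train", streaming=True)
print(next(iter(ds))["text"][:80])
```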
1,090,096,332
| 3,498
|
update `pretty_name` for first 200 datasets
|
closed
| 2021-12-28T19:50:07
| 2022-07-10T14:36:53
| 2022-01-05T16:38:21
|
https://github.com/huggingface/datasets/pull/3498
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3498",
"html_url": "https://github.com/huggingface/datasets/pull/3498",
"diff_url": "https://github.com/huggingface/datasets/pull/3498.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3498.patch",
"merged_at": "2022-01-05T16:38:21"
}
|
bhavitvyamalik
| true
|
[] |
1,090,050,148
| 3,497
|
Changing sampling rate in audio dataset and subsequently mapping with `num_proc > 1` leads to weird bug
|
closed
| 2021-12-28T18:03:49
| 2022-01-21T13:22:27
| 2022-01-21T13:22:27
|
https://github.com/huggingface/datasets/issues/3497
| null |
patrickvonplaten
| false
|
[
"Same error occures when using max samples with https://github.com/huggingface/transformers/blob/master/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py",
"I'm seeing this too, when using preprocessing_num_workers with \r\nhttps://github.com/huggingface/transformers/blob/master/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py"
] |
1,089,989,155
| 3,496
|
Update version of pib dataset and make it streamable
|
closed
| 2021-12-28T16:01:55
| 2022-01-03T14:42:28
| 2021-12-29T08:42:57
|
https://github.com/huggingface/datasets/pull/3496
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3496",
"html_url": "https://github.com/huggingface/datasets/pull/3496",
"diff_url": "https://github.com/huggingface/datasets/pull/3496.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3496.patch",
"merged_at": "2021-12-29T08:42:57"
}
|
albertvillanova
| true
|
[
"It seems like there is still an error: `Message: 'TarContainedFile' object has no attribute 'readable'`\r\n\r\nhttps://huggingface.co/datasets/pib/viewer",
"@severo I was wondering about that...\r\n\r\nIt works fine when I run it in streaming mode in my terminal:\r\n```python\r\nIn [3]: from datasets import load_dataset; ds = load_dataset(\"pib\", \"gu-pa\", split=\"train\", streaming=True); item = next(iter(ds))\r\n\r\nIn [4]: item\r\nOut[4]: \r\n{'translation': {'gu': 'એવો નિર્ણય લેવાયો હતો કે ખંતપૂર્વકની કામગીરી હાથ ધરવા, કાયદેસર અને ટેકનિકલ મૂલ્યાંકન કરવા, વેન્ચર કેપિટલ ઇન્વેસ્ટમેન્ટ સમિતિની બેઠક યોજવા વગેરે એઆઇએફને કરવામાં આવેલ પ્રતિબદ્ધતાના 0.50 ટકા સુધી અને બાકીની રકમ એફએફએસને પૂર્ણ કરવામાં આવશે.',\r\n 'pa': 'ਇਹ ਵੀ ਫੈਸਲਾ ਕੀਤਾ ਗਿਆ ਕਿ ਐੱਫਆਈਆਈ ਅਤੇ ਬਕਾਏ ਲਈ ਕੀਤੀਆਂ ਗਈਆਂ ਵਚਨਬੱਧਤਾਵਾਂ ਦੇ 0.50 % ਦੀ ਸੀਮਾ ਤੱਕ ਐੱਫਈਐੱਸ ਨੂੰ ਮਿਲਿਆ ਜਾਏਗਾ, ਇਸ ਨਾਲ ਉੱਦਮ ਪੂੰਜੀ ਨਿਵੇਸ਼ ਕਮੇਟੀ ਦੀ ਬੈਠਕ ਦਾ ਆਯੋਜਨ ਉਚਿਤ ਸਾਵਧਾਨੀ, ਕਾਨੂੰਨੀ ਅਤੇ ਤਕਨੀਕੀ ਮੁੱਲਾਂਕਣ ਲਈ ਸੰਚਾਲਨ ਖਰਚ ਆਦਿ ਦੀ ਪੂਰਤੀ ਹੋਵੇਗੀ।'}}\r\n```",
"OK, it works now!\r\n\r\n<img width=\"794\" alt=\"Capture d’écran 2022-01-03 à 15 41 44\" src=\"https://user-images.githubusercontent.com/1676121/147943676-6199d1a9-f288-4350-af96-a7c297ebb743.png\">\r\n"
] |
1,089,983,632
| 3,495
|
Add VoxLingua107
|
open
| 2021-12-28T15:51:43
| 2021-12-28T15:51:43
| null |
https://github.com/huggingface/datasets/issues/3495
| null |
jaketae
| false
|
[] |
1,089,983,103
| 3,494
|
Clone full repo to detect new tags when mirroring datasets on the Hub
|
closed
| 2021-12-28T15:50:47
| 2021-12-28T16:07:21
| 2021-12-28T16:07:20
|
https://github.com/huggingface/datasets/pull/3494
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3494",
"html_url": "https://github.com/huggingface/datasets/pull/3494",
"diff_url": "https://github.com/huggingface/datasets/pull/3494.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3494.patch",
"merged_at": "2021-12-28T16:07:20"
}
|
lhoestq
| true
|
[
"Good catch !!",
"The CI fail is unrelated to this PR and fixed on master, merging :)"
] |
1,089,967,286
| 3,493
|
Fix VCTK encoding
|
closed
| 2021-12-28T15:23:36
| 2021-12-28T15:48:18
| 2021-12-28T15:48:17
|
https://github.com/huggingface/datasets/pull/3493
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3493",
"html_url": "https://github.com/huggingface/datasets/pull/3493",
"diff_url": "https://github.com/huggingface/datasets/pull/3493.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3493.patch",
"merged_at": "2021-12-28T15:48:17"
}
|
lhoestq
| true
|
[] |
1,089,952,943
| 3,492
|
Add `gzip` for `to_json`
|
closed
| 2021-12-28T15:01:11
| 2022-07-10T14:36:52
| 2022-01-05T13:03:36
|
https://github.com/huggingface/datasets/pull/3492
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3492",
"html_url": "https://github.com/huggingface/datasets/pull/3492",
"diff_url": "https://github.com/huggingface/datasets/pull/3492.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3492.patch",
"merged_at": "2022-01-05T13:03:35"
}
|
bhavitvyamalik
| true
|
[] |
1,089,918,018
| 3,491
|
Update version of pib dataset
|
closed
| 2021-12-28T14:03:58
| 2021-12-29T08:42:57
| 2021-12-29T08:42:57
|
https://github.com/huggingface/datasets/issues/3491
| null |
albertvillanova
| false
|
[] |
1,089,730,181
| 3,490
|
Does datasets support load text from HDFS?
|
open
| 2021-12-28T08:56:02
| 2022-02-14T14:00:51
| null |
https://github.com/huggingface/datasets/issues/3490
| null |
dancingpipi
| false
|
[
"Hi ! `datasets` currently supports reading local files or files over HTTP. We may add support for other filesystems (cloud storages, hdfs...) at one point though :)"
] |
1,089,401,926
| 3,489
|
Avoid unnecessary list creations
|
open
| 2021-12-27T18:20:56
| 2022-07-06T15:19:49
| null |
https://github.com/huggingface/datasets/pull/3489
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3489",
"html_url": "https://github.com/huggingface/datasets/pull/3489",
"diff_url": "https://github.com/huggingface/datasets/pull/3489.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3489.patch",
"merged_at": null
}
|
bryant1410
| true
|
[
"@bryant1410 Thanks for working on this. Could you please split the PR into 4 or 5 smaller PRs (ideally one PR for each bullet point from your description) because it's not practical to review such a large PR, especially if the changes are not interrelated?"
] |
1,089,345,653
| 3,488
|
URL query parameters are set as path in the compression hop for fsspec
|
open
| 2021-12-27T16:29:00
| 2022-01-05T15:15:25
| null |
https://github.com/huggingface/datasets/issues/3488
| null |
albertvillanova
| false
|
[
"I think the test passes because it simply ignore what's after `gzip://`.\r\n\r\nThe returned urlpath is expected to look like `gzip://filename::url`, and the filename is currently considered to be what's after the final `/`, hence the result.\r\n\r\nWe can decide to change this and simply have `gzip://::url`, this way we don't need to guess the filename, what do you think ?"
] |
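For context on the `gzip://filename::url` shape discussed above: in fsspec's chained URLs the segment before `::` addresses the compression hop and the segment after it is the remote file. A hedged sketch, assuming fsspec's compression-hop chaining is available; the URL is illustrative:

```python
import fsspec

# "gzip://data.jsonl" is the compression hop; the part after "::" is the real URL.
with fsspec.open("gzip://data.jsonl::https://example.com/data.jsonl.gz", "rt") as f:
    print(f.readline())
```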
1,089,209,031
| 3,487
|
Update ADD_NEW_DATASET.md
|
closed
| 2021-12-27T12:24:51
| 2021-12-27T15:00:45
| 2021-12-27T15:00:45
|
https://github.com/huggingface/datasets/pull/3487
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3487",
"html_url": "https://github.com/huggingface/datasets/pull/3487",
"diff_url": "https://github.com/huggingface/datasets/pull/3487.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3487.patch",
"merged_at": "2021-12-27T15:00:45"
}
|
apergo-ai
| true
|
[] |
1,089,171,551
| 3,486
|
Fix weird spacing in ManualDownloadError message
|
closed
| 2021-12-27T11:20:36
| 2021-12-28T09:03:26
| 2021-12-28T09:00:28
|
https://github.com/huggingface/datasets/pull/3486
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3486",
"html_url": "https://github.com/huggingface/datasets/pull/3486",
"diff_url": "https://github.com/huggingface/datasets/pull/3486.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3486.patch",
"merged_at": "2021-12-28T09:00:28"
}
|
bryant1410
| true
|
[] |
1,089,027,581
| 3,485
|
skip columns which cannot set to specific format when set_format
|
closed
| 2021-12-27T07:19:55
| 2021-12-27T09:07:07
| 2021-12-27T09:07:07
|
https://github.com/huggingface/datasets/issues/3485
| null |
tshu-w
| false
|
[
"You can add columns that you wish to set into `torch` format using `dataset.set_format(\"torch\", ['id', 'abc'])` so that input batch of the transform only contains those columns",
"Sorry, I miss `output_all_columns` args and thought after `dataset.set_format(\"torch\", columns=columns)` I can only get specific columns I assigned."
] |
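A small sketch of the two behaviours clarified above: `set_format` can restrict torch formatting to selected columns, and `output_all_columns=True` still returns the remaining columns unformatted:

```python
from datasets import Dataset

ds = Dataset.from_dict({"id": [1, 2], "text": ["a", "b"]})
ds.set_format("torch", columns=["id"], output_all_columns=True)

print(ds[0])  # {"id": tensor(1), "text": "a"}; "text" comes back unformatted
```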
1,088,910,402
| 3,484
|
make shape verification to use ArrayXD instead of nested lists for map
|
open
| 2021-12-27T02:16:02
| 2022-01-05T13:54:03
| null |
https://github.com/huggingface/datasets/issues/3484
| null |
tshu-w
| false
|
[
"Hi! \r\n\r\nYes, this makes sense for numeric values, but first I have to finish https://github.com/huggingface/datasets/pull/3336 because currently ArrayXD only allows the first dimension to be dynamic."
] |
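For reference, the ArrayXD behaviour the answer describes: until #3336 lands, only the first dimension may be dynamic (`None`). A minimal sketch:

```python
from datasets import Array2D, Dataset, Features

features = Features({"matrix": Array2D(shape=(None, 3), dtype="float32")})
ds = Dataset.from_dict(
    {"matrix": [[[1.0, 2.0, 3.0]], [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]]},  # ragged first dim
    features=features,
)
print(ds.features["matrix"])
```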
1,088,784,157
| 3,483
|
Remove unused phony rule from Makefile
|
closed
| 2021-12-26T14:37:13
| 2022-01-05T19:44:56
| 2022-01-05T16:34:12
|
https://github.com/huggingface/datasets/pull/3483
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3483",
"html_url": "https://github.com/huggingface/datasets/pull/3483",
"diff_url": "https://github.com/huggingface/datasets/pull/3483.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3483.patch",
"merged_at": "2022-01-05T16:34:12"
}
|
bryant1410
| true
|
[
"The CI failure is unrelated to this PR and fixed on master, merging !"
] |
1,088,317,921
| 3,482
|
Fix duplicate keys in NewsQA
|
closed
| 2021-12-24T11:01:59
| 2022-09-23T12:57:10
| 2022-09-23T12:57:10
|
https://github.com/huggingface/datasets/pull/3482
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3482",
"html_url": "https://github.com/huggingface/datasets/pull/3482",
"diff_url": "https://github.com/huggingface/datasets/pull/3482.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3482.patch",
"merged_at": null
}
|
bryant1410
| true
|
[
"Flaky tests?",
"Thanks for your contribution, @bryant1410.\r\n\r\nI think the fix of the duplicate key in this PR was superseded by:\r\n- #3696\r\n\r\nI'm closing this because we are moving all dataset scripts from GitHub to the Hugging Face Hub."
] |
1,088,308,343
| 3,481
|
Fix overriding of filesystem info
|
closed
| 2021-12-24T10:42:31
| 2021-12-24T11:08:59
| 2021-12-24T11:08:59
|
https://github.com/huggingface/datasets/pull/3481
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3481",
"html_url": "https://github.com/huggingface/datasets/pull/3481",
"diff_url": "https://github.com/huggingface/datasets/pull/3481.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3481.patch",
"merged_at": "2021-12-24T11:08:59"
}
|
albertvillanova
| true
|
[] |
1,088,267,110
| 3,480
|
the compression format requested when saving a dataset in json format is not respected
|
closed
| 2021-12-24T09:23:51
| 2022-01-05T13:03:35
| 2022-01-05T13:03:35
|
https://github.com/huggingface/datasets/issues/3480
| null |
SaulLu
| false
|
[
"Thanks for reporting @SaulLu.\r\n\r\nAt first sight I think the problem is caused because `pandas` only takes into account the `compression` parameter if called with a non-null file path or buffer. And in our implementation, we call pandas `to_json` with `None` `path_or_buf`.\r\n\r\nWe should fix this:\r\n- either handling directly the `compression` parameter ourselves\r\n- or refactoring to pass non-null path or buffer to pandas\r\n\r\nCC: @lhoestq",
"I was thinking if we can handle the `compression` parameter by ourselves? Compression types will be similar to what `pandas` offer. Initially, we can try this with 2-3 compression types and see how good/bad it is? Let me know if it sounds good, I can raise a PR for this next week",
"Hi ! Thanks for your help @bhavitvyamalik :)\r\nMaybe let's start with `gzip` ? I think it's the most common use case, then if we're fine with it we can add other compression methods"
] |
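A minimal sketch of the fix that #3492 eventually shipped, assuming the `compression` keyword it added to `to_json` (starting with gzip); the file name is illustrative:

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3]})
ds.to_json("data.jsonl.gz", compression="gzip")  # the output is now actually gzip-compressed
```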
1,088,232,880
| 3,479
|
Dataset preview is not available (I think for all Hugging Face datasets)
|
closed
| 2021-12-24T08:18:48
| 2021-12-24T14:27:46
| 2021-12-24T14:27:46
|
https://github.com/huggingface/datasets/issues/3479
| null |
Abirate
| false
|
[
"You're right, we have an issue today with the datasets preview. We're investigating.",
"It should be fixed now. Thanks for reporting.",
"Down again. ",
"Fixed for good."
] |
1,087,860,180
| 3,478
|
Extend support for streaming datasets that use os.walk
|
closed
| 2021-12-23T16:42:55
| 2021-12-24T10:50:20
| 2021-12-24T10:50:19
|
https://github.com/huggingface/datasets/pull/3478
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3478",
"html_url": "https://github.com/huggingface/datasets/pull/3478",
"diff_url": "https://github.com/huggingface/datasets/pull/3478.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3478.patch",
"merged_at": "2021-12-24T10:50:19"
}
|
albertvillanova
| true
|
[
"Nice. I'll update the dataset viewer once merged, and test on these four datasets"
] |
1,087,850,253
| 3,477
|
Use `iter_files` instead of `str(Path(...)` in image dataset
|
closed
| 2021-12-23T16:26:55
| 2021-12-28T15:15:02
| 2021-12-28T15:15:02
|
https://github.com/huggingface/datasets/pull/3477
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3477",
"html_url": "https://github.com/huggingface/datasets/pull/3477",
"diff_url": "https://github.com/huggingface/datasets/pull/3477.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3477.patch",
"merged_at": "2021-12-28T15:15:02"
}
|
mariosasko
| true
|
[
"`iter_archive` is about to support ZIP archives. I think we should use this no ?\r\n\r\nsee #3347 https://github.com/huggingface/datasets/pull/3379",
"I was interested in the support for isfile/dir in remote.\r\n\r\nAnyway, `iter_files` will be available for community users.",
"I'm not a big fan of having two functions that do the same thing. What do you think ?",
"They do not do the same thing:\r\n- One iterates over files in a directory\r\n- The other I guess will iterate over the members of an archive",
"Makes sense ! Sounds good then - sorry for my misunderstanding\r\n\r\nNote that `iter_archive` will be more performant for data streaming that `iter_files` thanks to the buffering so maybe in the future we can `iter_archive` for some of these datasets",
"Yes, @lhoestq I agree with you: once `iter_archive` supports zip files, it will be more suitable than `iter_files` for these 2 datasets.\r\n\r\nAnyway, this PR also implements `isfile`/`isdir` in streaming mode, besides fixing `iter_files`. And I'm interested in having those in master.\r\n\r\nMaybe, could we merge this PR into master and take note to refactor the datasets to use `iter_archive` once zip is supported?\r\nOther option could be to split this PR into 2..."
] |
1,087,622,872
| 3,476
|
Extend support for streaming datasets that use ET.parse
|
closed
| 2021-12-23T11:18:46
| 2021-12-23T15:34:30
| 2021-12-23T15:34:30
|
https://github.com/huggingface/datasets/pull/3476
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3476",
"html_url": "https://github.com/huggingface/datasets/pull/3476",
"diff_url": "https://github.com/huggingface/datasets/pull/3476.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3476.patch",
"merged_at": "2021-12-23T15:34:30"
}
|
albertvillanova
| true
|
[] |
1,087,352,041
| 3,475
|
The rotten_tomatoes dataset of movie reviews contains some reviews in Spanish
|
open
| 2021-12-23T03:56:43
| 2021-12-24T00:23:03
| null |
https://github.com/huggingface/datasets/issues/3475
| null |
puzzler10
| false
|
[
"Hi @puzzler10, thanks for reporting.\r\n\r\nPlease note this dataset is not hosted on Hugging Face Hub. See: \r\nhttps://github.com/huggingface/datasets/blob/c8f914473b041833fd47178fa4373cdcb56ac522/datasets/rotten_tomatoes/rotten_tomatoes.py#L42\r\n\r\nIf there are issues with the source data of a dataset, you should contact the data owners/creators instead. In the homepage associated with this dataset (http://www.cs.cornell.edu/people/pabo/movie-review-data/), you can find the authors of the dataset and how to contact them:\r\n> If you have any questions or comments regarding this site, please send email to Bo Pang or Lillian Lee.\r\n\r\nP.S.: Please also note that the example you gave of non-English review is in Portuguese (not Spanish). ;)",
"Maybe best to just put a quick sentence in the dataset description that highlights this? "
] |
1,086,945,384
| 3,474
|
Decode images when iterating
|
closed
| 2021-12-22T15:34:49
| 2023-09-24T09:54:04
| 2021-12-28T16:08:10
|
https://github.com/huggingface/datasets/pull/3474
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3474",
"html_url": "https://github.com/huggingface/datasets/pull/3474",
"diff_url": "https://github.com/huggingface/datasets/pull/3474.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3474.patch",
"merged_at": null
}
|
lhoestq
| true
|
[] |
1,086,937,610
| 3,473
|
Iterating over a vision dataset doesn't decode the images
|
closed
| 2021-12-22T15:26:32
| 2021-12-27T14:13:21
| 2021-12-23T15:21:57
|
https://github.com/huggingface/datasets/issues/3473
| null |
lhoestq
| false
|
[
"As discussed, I remember I set `decoded=False` here to avoid decoding just by iterating over examples of dataset. We wanted to decode only if the \"audio\" field (for Audio feature) was accessed.",
"> I set decoded=False here to avoid decoding just by iterating over examples of dataset. We wanted to decode only if the \"audio\" field (for Audio feature) was accessed\r\n\r\nhttps://github.com/huggingface/datasets/pull/3430 will add more control to decoding, so I think it's OK to enable decoding in `__iter__` for now. After we merge the linked PR, the user can easily disable it again.",
"@mariosasko I wonder why there is no issue in `Audio` feature with decoding disabled in `__iter__`, whereas there is in `Image` feature.\r\n\r\nEnabling decoding in `__iter__` will make fail Audio regressions tests: https://github.com/huggingface/datasets/runs/4608657230?check_suite_focus=true\r\n```\r\n=========================== short test summary info ============================\r\nFAILED tests/features/test_audio.py::test_dataset_with_audio_feature_map_is_not_decoded\r\nFAILED tests/features/test_audio.py::test_dataset_with_audio_feature_map_is_decoded\r\n========================= 2 failed, 15 passed in 8.37s =========================",
"Please also note that the regression tests were implemented in accordance with the specifications:\r\n- when doing a `map` (wich calls `__iter__`) of a function that doesn't access the audio field, the decoding should be disabled; this is why the decoding is disabled in `__iter__` (and only enabled in `__getitem__`).",
"> I wonder why there is no issue in Audio feature with decoding disabled in __iter__, whereas there is in Image feature.\r\n\r\n@albertvillanova Not sure if I understand this part. Currently, both the Image and the Audio feature don't decode data in `__iter__`, so their behavior is aligned there.\r\n",
"Therefore, this is not an issue, neither for Audio nor Image feature.\r\n\r\nCould you please elaborate more on the expected use case? @lhoestq @NielsRogge \r\n\r\nThe expected use cases (in accordance with the specs: see #2324):\r\n- decoding should be enabled when accessing a specific item (`__getitem__`)\r\n- decoding should be disabled while iterating (`__iter__`) to allow preprocessing of non-audio/image features (like label or text, for example) using `.map`\r\n- decoding should be enabled in a `.map` only if the `.map` function accesses the audio/image feature (implemented using `LazyDict`)",
"For me it's not an issue, actually. I just (mistakenly) tried to iterate over a PyTorch Dataset instead of a PyTorch DataLoader, \r\n\r\ni.e. I did this:\r\n\r\n`batch = next(iter(train_ds)) `\r\n\r\nwhereas I actually wanted to do\r\n\r\n`batch = next(iter(train_dataloader))`\r\n\r\nand then it turned out that in the first case, the image was a string of bytes rather than a Pillow image, hence Quentin opened an issue.",
"Thanks @NielsRogge for the context.\r\n\r\nSo IMO everything is working as expected.\r\n\r\nI'm closing this issue. Feel free to reopen it again if further changes of the specs should be addressed.",
"Thanks for the details :)\r\n\r\nI still think that it's unexpected to get different results when doing\r\n```python\r\nfor i in range(len(dataset)):\r\n sample = dataset[i]\r\n```\r\nand\r\n```python\r\nfor sample in dataset:\r\n pass\r\n```\r\neven though I understand that if you don't need to decode the data, then decoding image or audio data when iterating is a waste of time and resources.\r\n\r\nBut in this case users can still drop the column that need decoding to get the full speed back no ?"
] |
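The decoding control referenced above (#3430) lets users opt out of decoding explicitly instead of relying on `__iter__` vs `__getitem__`. A hedged sketch; the dataset is an illustrative choice:

```python
from datasets import Image, load_dataset

ds = load_dataset("beans", split="train")
print(type(ds[0]["image"]))  # PIL image: decoding is on by default

ds = ds.cast_column("image", Image(decode=False))
print(ds[0]["image"])        # {"bytes": ..., "path": ...}: raw data, no decoding
```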
1,086,908,508
| 3,472
|
Fix `str(Path(...))` conversion in streaming on Linux
|
closed
| 2021-12-22T15:06:03
| 2021-12-22T16:52:53
| 2021-12-22T16:52:52
|
https://github.com/huggingface/datasets/pull/3472
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3472",
"html_url": "https://github.com/huggingface/datasets/pull/3472",
"diff_url": "https://github.com/huggingface/datasets/pull/3472.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3472.patch",
"merged_at": "2021-12-22T16:52:52"
}
|
mariosasko
| true
|
[] |
1,086,588,074
| 3,471
|
Fix Tashkeela dataset to yield stripped text
|
closed
| 2021-12-22T08:41:30
| 2021-12-22T10:12:08
| 2021-12-22T10:12:07
|
https://github.com/huggingface/datasets/pull/3471
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3471",
"html_url": "https://github.com/huggingface/datasets/pull/3471",
"diff_url": "https://github.com/huggingface/datasets/pull/3471.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3471.patch",
"merged_at": "2021-12-22T10:12:07"
}
|
albertvillanova
| true
|
[] |
1,086,049,888
| 3,470
|
Fix rendering of docs
|
closed
| 2021-12-21T17:17:01
| 2021-12-22T09:23:47
| 2021-12-22T09:23:47
|
https://github.com/huggingface/datasets/pull/3470
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3470",
"html_url": "https://github.com/huggingface/datasets/pull/3470",
"diff_url": "https://github.com/huggingface/datasets/pull/3470.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3470.patch",
"merged_at": "2021-12-22T09:23:47"
}
|
albertvillanova
| true
|
[] |
1,085,882,664
| 3,469
|
Fix METEOR missing NLTK's omw-1.4
|
closed
| 2021-12-21T14:19:11
| 2021-12-21T14:52:28
| 2021-12-21T14:49:28
|
https://github.com/huggingface/datasets/pull/3469
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3469",
"html_url": "https://github.com/huggingface/datasets/pull/3469",
"diff_url": "https://github.com/huggingface/datasets/pull/3469.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3469.patch",
"merged_at": "2021-12-21T14:49:28"
}
|
lhoestq
| true
|
[
"I also modified the doctest call to raise the exception that doctest may catch, instead of `doctest.UnexpectedException`.\r\nThis will make debugging easier if it happens again"
] |
1,085,871,301
| 3,468
|
Add COCO dataset
|
closed
| 2021-12-21T14:07:50
| 2023-09-24T09:33:31
| 2022-10-03T09:36:08
|
https://github.com/huggingface/datasets/pull/3468
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3468",
"html_url": "https://github.com/huggingface/datasets/pull/3468",
"diff_url": "https://github.com/huggingface/datasets/pull/3468.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3468.patch",
"merged_at": null
}
|
mariosasko
| true
|
[
"The CI failures other than a missing dummy data file and missing fields in the card are unrelated to this PR. ",
"Thanks a lot for this great work and fixing TFDS based script @mariosasko 🤗 will generate the dummy dataset and write the model card tomorrow!",
"@mariosasko I added the dataset card, I'm on the dummy data rn. ",
"@merveenoyan Let me know if you need any help with the dummy data.\r\n\r\nI plan to split the current script/dataset into 4 smaller scripts/datasets to make sure they are properly indexed by Papers With Code later on. In this format:\r\n* the `*_image_captioning` configs will form the [COCO Captions](https://paperswithcode.com/sota/image-captioning-on-coco-captions) dataset (also present in TFDS, but only the 2017 version)\r\n* the `stuff_segmentation` config will form the [COCO Stuff](https://paperswithcode.com/dataset/coco-stuff) dataset\r\n* the `desnepose` config will form the [DensePose-COCO](https://paperswithcode.com/dataset/densepose) dataset\r\n* the rest will be [COCO](https://paperswithcode.com/dataset/coco) (+ will add the `minival` and the `valminusminival` splits to COCO 2014)\r\n\r\nAlso, if I find the time, I'll add preprocessing examples that rely on `pycocotools` to the README files.",
"@mariosasko I feel like we can just push main COCO and add Captions + Stuff later, WDYT?",
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for your contribution, @mariosasko and @merveenoyan. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if you need some help."
] |
1,085,870,665
| 3,467
|
Push dataset infos.json to Hub
|
closed
| 2021-12-21T14:07:13
| 2021-12-21T17:00:10
| 2021-12-21T17:00:09
|
https://github.com/huggingface/datasets/pull/3467
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3467",
"html_url": "https://github.com/huggingface/datasets/pull/3467",
"diff_url": "https://github.com/huggingface/datasets/pull/3467.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3467.patch",
"merged_at": "2021-12-21T17:00:09"
}
|
lhoestq
| true
|
[
"The change from `___` to `--` was allowed by https://github.com/huggingface/moon-landing/pull/1657"
] |
1,085,722,837
| 3,466
|
Add CRASS dataset
|
closed
| 2021-12-21T11:17:22
| 2022-10-03T09:37:06
| 2022-10-03T09:37:06
|
https://github.com/huggingface/datasets/pull/3466
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3466",
"html_url": "https://github.com/huggingface/datasets/pull/3466",
"diff_url": "https://github.com/huggingface/datasets/pull/3466.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3466.patch",
"merged_at": null
}
|
apergo-ai
| true
|
[
"Hi Albert,\r\nThank you for your comments.\r\nI hope I have uploaded my local git repo to include the dummy files and style reworkings.\r\nAdded YAML in Readme as well.\r\n\r\nPlease check again.\r\n\r\nHope it works now :)",
"Thanks for your contribution, @apergo-ai. \r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. It's OK for you? Please, feel free to tell us if you need some help."
] |
1,085,400,432
| 3,465
|
Unable to load 'cnn_dailymail' dataset
|
closed
| 2021-12-21T03:32:21
| 2024-06-12T14:41:17
| 2022-02-17T14:13:57
|
https://github.com/huggingface/datasets/issues/3465
| null |
talha1503
| false
|
[
"Hi @talha1503, thanks for reporting.\r\n\r\nIt seems there is an issue with one of the data files hosted at Google Drive:\r\n```\r\nGoogle Drive - Quota exceeded\r\n\r\nSorry, you can't view or download this file at this time.\r\n\r\nToo many users have viewed or downloaded this file recently. Please try accessing the file again later. If the file you are trying to access is particularly large or is shared with many people, it may take up to 24 hours to be able to view or download the file. If you still can't access a file after 24 hours, contact your domain administrator.\r\n```\r\n\r\nAs you probably know, Hugging Face does not host the data, and in this case the data owner decided to host their data at Google Drive, which has quota limits.\r\n\r\nIs there anything we could do, @lhoestq @mariosasko?",
"This looks related to https://github.com/huggingface/datasets/issues/996",
"It seems that [this](https://huggingface.co/datasets/ccdv/cnn_dailymail) copy of the dataset has fixed the problem",
"thank you @AyhamAlom ...\r\nit resolved the error"
] |
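For reference, loading the mirror mentioned in the comments would look something like this (the `"3.0.0"` config name is an assumption based on the original dataset's configs):

```python
from datasets import load_dataset

# Community copy hosted on the Hub, avoiding the Google Drive quota
dataset = load_dataset("ccdv/cnn_dailymail", "3.0.0")
```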
1,085,399,097
| 3,464
|
struct.error: 'i' format requires -2147483648 <= number <= 2147483647
|
open
| 2021-12-21T03:29:01
| 2022-11-21T19:55:11
| null |
https://github.com/huggingface/datasets/issues/3464
| null |
koukoulala
| false
|
[
"Hi ! Can you try setting `datasets.config.MAX_TABLE_NBYTES_FOR_PICKLING` to a smaller value than `4 << 30` (4GiB), for example `500 << 20` (500MiB) ? It should reduce the maximum size of the arrow table being pickled during multiprocessing.\r\n\r\nIf it fixes the issue, we can consider lowering the default value for everyone.",
"@lhoestq I tried that just now but didn't seem to help."
] |
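A sketch of the workaround suggested in the first comment; the attribute name and the 500 MiB value are taken directly from that comment:

```python
import datasets

# Lower the maximum arrow table size pickled during multiprocessing
# from the default 4 GiB (4 << 30) to the suggested 500 MiB
datasets.config.MAX_TABLE_NBYTES_FOR_PICKLING = 500 << 20
```

This should be set before calling `map` with `num_proc`, since the threshold is read when the table is handed to the worker processes.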
1,085,078,795
| 3,463
|
Update swahili_news dataset
|
closed
| 2021-12-20T18:20:20
| 2021-12-21T06:24:03
| 2021-12-21T06:24:02
|
https://github.com/huggingface/datasets/pull/3463
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3463",
"html_url": "https://github.com/huggingface/datasets/pull/3463",
"diff_url": "https://github.com/huggingface/datasets/pull/3463.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3463.patch",
"merged_at": "2021-12-21T06:24:01"
}
|
albertvillanova
| true
|
[] |
1,085,049,661
| 3,462
|
Update swahili_news dataset
|
closed
| 2021-12-20T17:44:01
| 2021-12-21T06:24:02
| 2021-12-21T06:24:01
|
https://github.com/huggingface/datasets/issues/3462
| null |
albertvillanova
| false
|
[] |
1,085,007,346
| 3,461
|
Fix links in metrics description
|
closed
| 2021-12-20T16:56:19
| 2021-12-20T17:14:52
| 2021-12-20T17:14:51
|
https://github.com/huggingface/datasets/pull/3461
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3461",
"html_url": "https://github.com/huggingface/datasets/pull/3461",
"diff_url": "https://github.com/huggingface/datasets/pull/3461.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3461.patch",
"merged_at": "2021-12-20T17:14:51"
}
|
albertvillanova
| true
|
[] |
1,085,002,469
| 3,460
|
Don't encode lists as strings when using `Value("string")`
|
closed
| 2021-12-20T16:50:49
| 2023-09-25T10:28:30
| 2023-09-25T09:20:28
|
https://github.com/huggingface/datasets/pull/3460
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3460",
"html_url": "https://github.com/huggingface/datasets/pull/3460",
"diff_url": "https://github.com/huggingface/datasets/pull/3460.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3460.patch",
"merged_at": null
}
|
lhoestq
| true
|
[
"Should we close this PR?",
"since the original issue has to do with metrics that have been moved to `evaludate` I think we can close this one",
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,084,969,672
| 3,459
|
dataset.filter overwriting previously set dataset._indices values, resulting in the wrong elements being selected.
|
closed
| 2021-12-20T16:16:49
| 2021-12-20T16:34:57
| 2021-12-20T16:34:57
|
https://github.com/huggingface/datasets/issues/3459
| null |
mmajurski
| false
|
[
"I think this is a duplicate of [#3190](https://github.com/huggingface/datasets/issues/3190)?",
"Upgrading the datasets version as per #3190 fixes this bug. \r\nI'm Marking as closed."
] |
1,084,926,025
| 3,458
|
Fix duplicated tag in wikicorpus dataset card
|
closed
| 2021-12-20T15:34:16
| 2021-12-20T16:03:25
| 2021-12-20T16:03:24
|
https://github.com/huggingface/datasets/pull/3458
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3458",
"html_url": "https://github.com/huggingface/datasets/pull/3458",
"diff_url": "https://github.com/huggingface/datasets/pull/3458.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3458.patch",
"merged_at": "2021-12-20T16:03:24"
}
|
lhoestq
| true
|
[
"CI is failing just because of empty sections - merging"
] |
1,084,862,121
| 3,457
|
Add CMU Graphics Lab Motion Capture dataset
|
open
| 2021-12-20T14:34:39
| 2022-03-16T16:53:09
| null |
https://github.com/huggingface/datasets/issues/3457
| null |
osanseviero
| false
|
[
"This dataset has files in ASF/AMC format. [ The skeleton file is the ASF file (Acclaim Skeleton File). The motion file is the AMC file (Acclaim Motion Capture data). ] \r\n\r\nSome questions : \r\n1. How do we go about representing these features using datasets.Features and generate examples ?\r\n2. The dataset download link for ASF/AMC files does not have metadata information, for eg : category and subcategory information. We will need to crawl the website for this information. The authors mention \"Please don't crawl this database for all motions.\" Can we mail the authors for this information ?\r\nThe dataset structure is as follows : \r\n```\r\nsubjects\r\n\t- 01\r\n\t\t- 01_01.amc\r\n\t\t- 01_02.amc\r\n\t\t.\r\n\t\t.\r\n\t\t.\r\n\t\t- 01.asf\r\n\t- 02\r\n\t\t- 02_01.amc\r\n\t\t- 02_02.amc\r\n\t\t.\r\n\t\t.\r\n\t\t.\r\n\t\t- 02.asf\r\n```\r\nThere is no metadata regarding the category, sub-category and motion description.\r\n\r\nNeed your inputs. @mariosasko / @lhoestq \r\nThank you.\r\n",
"Hi @dnaveenr! Thanks for working on this!\r\n\r\n1. We can use the `Sequence(Value(\"string\"))` feature type for the subject's AMC files and `Value(\"string\")` for the subject's ASF file (`Value(\"string\")` represents the file paths) + the types for categories/subcategories and descriptions.\r\n2. We can use this URL to download the motion descriptions: http://mocap.cs.cmu.edu/search.php?subjectnumber=<subject_number>&motion=%%%&maincat=%&subcat=%&subtext=yes where `subject_number` is the number between 1 and 144. And to get categories/subcategories, feel free to contact the authors (they state in the FAQ they are happy to help) and ask them if they can provide the mapping from categories/subcategories to the AMC files to avoid crawling. You can also mention that your goal is to make their dataset more accessible by adding its loading script to the Hub.\r\n\r\nThe AMC files are also available in the tvd, c3d, mpg and avi formats (the links are in the [FAQ](http://mocap.cs.cmu.edu/faqs.php) section), so it would be nice to have one config for each of these additional formats. \r\n\r\nAnd additionally, we can add a `Data Preprocessing` section to the card where we explain how to load/process the files. I can help with that.",
"Hi @mariosasko ,\r\n\r\n1. Thanks for this, so we can add the file paths.\r\n2. Yes, I had already mailed the authors a couple of days back actually, asking for the metadata details[ i.e category, sub-category and motion description] . They are yet to respond though, I will wait for a couple of days and try to follow up with them again. :) Else we can use the workaround solution.\r\n\r\nYes. Supporting all the formats would be helpful. \r\n\r\n> And additionally, we can add a Data Preprocessing section to the card where we explain how to load/process the files. I can help with that.\r\n\r\nOkay. Got it."
] |
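A sketch of the feature schema proposed in the discussion above (field names are illustrative, not a finalized design):

```python
from datasets import Features, Sequence, Value

# Paths are stored as plain strings; parsing the ASF/AMC files is
# left to the user, as discussed in the thread.
features = Features(
    {
        "subject_id": Value("string"),
        "asf_file": Value("string"),             # skeleton (ASF) path
        "amc_files": Sequence(Value("string")),  # motion (AMC) paths
        "categories": Sequence(Value("string")),
        "descriptions": Sequence(Value("string")),
    }
)
```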
1,084,687,973
| 3,456
|
[WER] Better error message for wer
|
closed
| 2021-12-20T11:38:40
| 2021-12-20T16:53:37
| 2021-12-20T16:53:36
|
https://github.com/huggingface/datasets/pull/3456
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3456",
"html_url": "https://github.com/huggingface/datasets/pull/3456",
"diff_url": "https://github.com/huggingface/datasets/pull/3456.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3456.patch",
"merged_at": null
}
|
patrickvonplaten
| true
|
[
"Hi ! I don't think this would solve this issue.\r\nCurrently it looks like there's a bug that converts the list `[\"hello it's nice\"]` to a string `'[\"hello it's nice\"]'` since this is what the metric expects as input. The conversion is done before the data are passed to `_compute()`.\r\n\r\nThis is `Value(\"string\").encode_example` that is called to do the conversion. Since `str()` encoding is too permissive we should consider raising an error if the example is not a string (even though it can be converted to string). ",
"> called\r\n\r\nAh yeah you're right",
"I just opened https://github.com/huggingface/datasets/pull/3460 to fix that. It now raises an error instead of computing the wrong WER",
"Thank you - that should be good enough!"
] |
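For context, a minimal example of the inputs the metric expects, flat lists of strings rather than a stringified list (using the `load_metric` API available at the time):

```python
from datasets import load_metric

wer = load_metric("wer")

# predictions/references must be flat lists of strings,
# one transcription per example
score = wer.compute(
    predictions=["hello it's nice"],
    references=["hello it is nice"],
)
```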
1,084,599,650
| 3,455
|
Easier information editing
|
closed
| 2021-12-20T10:10:43
| 2023-07-25T15:36:14
| 2023-07-25T15:36:14
|
https://github.com/huggingface/datasets/issues/3455
| null |
borgr
| false
|
[
"Hi ! I guess you are talking about the dataset cards that are in this repository on github ?\r\n\r\nI think github allows to submit a PR even for 1 line though the `Edit file` button on the page of the dataset card.\r\n\r\nMaybe let's mention this in `CONTRIBUTING.md` ?",
"We now host all the datasets on the HF Hub, where you can easily edit them through UI (for single file changes) or Git workflow (for single/multiple file changes)"
] |
1,084,519,107
| 3,454
|
Fix iter_archive generator
|
closed
| 2021-12-20T08:50:15
| 2021-12-20T10:05:00
| 2021-12-20T10:04:59
|
https://github.com/huggingface/datasets/pull/3454
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3454",
"html_url": "https://github.com/huggingface/datasets/pull/3454",
"diff_url": "https://github.com/huggingface/datasets/pull/3454.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3454.patch",
"merged_at": "2021-12-20T10:04:59"
}
|
albertvillanova
| true
|
[] |
1,084,515,911
| 3,453
|
ValueError while iter_archive
|
closed
| 2021-12-20T08:46:18
| 2021-12-20T10:04:59
| 2021-12-20T10:04:59
|
https://github.com/huggingface/datasets/issues/3453
| null |
albertvillanova
| false
|
[] |
1,083,803,178
| 3,452
|
why the stratify option is omitted from test_train_split function?
|
closed
| 2021-12-18T10:37:47
| 2022-05-25T20:43:51
| 2022-05-25T20:43:51
|
https://github.com/huggingface/datasets/issues/3452
| null |
j-sieger
| false
|
[
"Hi ! It's simply not added yet :)\r\n\r\nIf someone wants to contribute to add the `stratify` parameter I'd be happy to give some pointers.\r\n\r\nIn the meantime, I guess you can use `sklearn` or other tools to do a stratified train/test split over the **indices** of your dataset and then do\r\n```\r\ntrain_dataset = dataset.select(train_indices)\r\ntest_dataset = dataset.select(test_indices)\r\n```",
"Hi @lhoestq I would like to add `stratify` parameter, can you give me some pointers for adding the same ?",
"Hi ! Sure :)\r\n\r\nThe `train_test_split` method is defined here: \r\n\r\nhttps://github.com/huggingface/datasets/blob/dc62232fa1b3bcfe2fbddcb721f2d141f8908943/src/datasets/arrow_dataset.py#L3253-L3253\r\n\r\nand inside `train_test_split ` we need to create the right `train_indices` and `test_indices` that are passed here to `.select()`:\r\n\r\nhttps://github.com/huggingface/datasets/blob/dc62232fa1b3bcfe2fbddcb721f2d141f8908943/src/datasets/arrow_dataset.py#L3450-L3464\r\n\r\nFor example if your dataset is like\r\n| | label |\r\n|---:|--------:|\r\n| 0 | 1 |\r\n| 1 | 1 |\r\n| 2 | 0 |\r\n| 3 | 0 |\r\n\r\nand the user passes `stratify=dataset[\"label\"]`, then you should get indices that look like this\r\n```\r\ntrain_indices = [0, 2]\r\ntest_indices = [1, 3]\r\n```\r\n\r\nthese indices will be passed to `.select` to return the stratified train and test splits :)\r\n\r\nFeel free to îng me if you have any question !",
"@lhoestq \r\nI just added the implementation for `stratify` option here #4322 "
] |
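A sketch of the interim workaround from the first comment, stratifying over indices with scikit-learn and materializing the splits with `Dataset.select` (the `dataset` variable and its `label` column are assumptions):

```python
from sklearn.model_selection import train_test_split

# Stratify over row indices, then select the rows
indices = list(range(len(dataset)))
train_indices, test_indices = train_test_split(
    indices, test_size=0.2, stratify=dataset["label"], random_state=42
)
train_dataset = dataset.select(train_indices)
test_dataset = dataset.select(test_indices)
```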
1,083,459,137
| 3,451
|
[Staging] Update dataset repos automatically on the Hub
|
closed
| 2021-12-17T17:12:11
| 2021-12-21T10:25:46
| 2021-12-20T14:09:51
|
https://github.com/huggingface/datasets/pull/3451
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3451",
"html_url": "https://github.com/huggingface/datasets/pull/3451",
"diff_url": "https://github.com/huggingface/datasets/pull/3451.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3451.patch",
"merged_at": "2021-12-20T14:09:51"
}
|
lhoestq
| true
|
[
"do keep us updated on how it's going in staging! cc @SBrandeis ",
"Sure ! For now it works smoothly. We'll also do a new release today.\r\n\r\nI can send you some repos to explore on staging, in case you want to see how they look like after being updated.\r\nFor example [swahili_news](https://moon-staging.huggingface.co/datasets/swahili_news/tree/main)"
] |
1,083,450,158
| 3,450
|
Unexpected behavior doing Split + Filter
|
closed
| 2021-12-17T17:00:39
| 2023-07-25T15:38:47
| 2023-07-25T15:38:47
|
https://github.com/huggingface/datasets/issues/3450
| null |
jbrachat
| false
|
[
"Hi ! This is an issue with `datasets` 1.12. Sorry for the inconvenience. Can you update to `>=1.13` ?\r\nsee https://github.com/huggingface/datasets/issues/3190\r\n\r\nMaybe we should also backport the bug fix to `1.12` (in a new version `1.12.2`)"
] |
1,083,373,018
| 3,449
|
Add `__add__()`, `__iadd__()` and similar to `Dataset` class
|
closed
| 2021-12-17T15:29:11
| 2024-02-29T16:47:56
| 2023-07-25T15:33:56
|
https://github.com/huggingface/datasets/issues/3449
| null |
sgraaf
| false
|
[
"I was going through the codebase, and I believe the implementation of __add__() and __iadd__() will be similar to concatenate_datasets() after the elimination of code for arguments other than the list of datasets (info, split, axis). \r\n(Assuming elimination of axis means concatenating over axis 1.)",
"Most data frame libraries (Polars, Pandas, etc.) override `__add__` to perform (mathematical) summation, so having different behavior could lead to confusion."
] |
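For reference, the existing way to combine two datasets without operator overloading (`dataset_a`/`dataset_b` are placeholders):

```python
from datasets import concatenate_datasets

# What a hypothetical `dataset_a + dataset_b` would map to today
combined = concatenate_datasets([dataset_a, dataset_b])
```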
1,083,231,080
| 3,448
|
JSONDecodeError with HuggingFace dataset viewer
|
closed
| 2021-12-17T12:52:41
| 2022-02-24T09:10:26
| 2022-02-24T09:10:26
|
https://github.com/huggingface/datasets/issues/3448
| null |
kathrynchapman
| false
|
[
"Hi ! I think the issue comes from the dataset_infos.json file: it has the \"flat\" field twice.\r\n\r\nCan you try deleting this file and regenerating it please ?",
"Thanks! That fixed that, but now I am getting:\r\nServer Error\r\nStatus code: 400\r\nException: KeyError\r\nMessage: 'feature'\r\n\r\nI checked the dataset_infos.json and pubmed_neg.py script, I don't use 'feature' anywhere as a key. Is the dataset viewer expecting that I do?",
"It seems that the `feature` key is missing from some feature type definition in your dataset_infos.json:\r\n```json\r\n\t\t\t\"tokens\": {\r\n\t\t\t\t\"dtype\": \"list\",\r\n\t\t\t\t\"id\": null,\r\n\t\t\t\t\"_type\": \"Sequence\"\r\n\t\t\t},\r\n\t\t\t\"tags\": {\r\n\t\t\t\t\"dtype\": \"list\",\r\n\t\t\t\t\"id\": null,\r\n\t\t\t\t\"_type\": \"Sequence\"\r\n\t\t\t}\r\n```\r\nThey should be\r\n```json\r\n\t\t\t\"tokens\": {\r\n\t\t\t\t\"dtype\": \"list\",\r\n\t\t\t\t\"id\": null,\r\n\t\t\t\t\"_type\": \"Sequence\"\r\n \"feature\": {\"dtype\": \"string\", \"id\": null, \"_type\": \"Value\"}\r\n\t\t\t},\r\n\t\t\t\"tags\": {\r\n\t\t\t\t\"dtype\": \"list\",\r\n\t\t\t\t\"id\": null,\r\n\t\t\t\t\"_type\": \"Sequence\",\r\n \"feature\": {\"num_classes\": 5, \"names\": [\"-\", \"S\", \"H\", \"N\", \"C\"], \"names_file\": null, \"id\": null, \"_type\": \"ClassLabel\"}\r\n\t\t\t}\r\n```\r\n\r\nNote that you can generate the dataset_infos.json automatically to avoid mistakes:\r\n```bash\r\ndatasets-cli test ./path/to/dataset --save_infos\r\n```"
] |
1,082,539,790
| 3,447
|
HF_DATASETS_OFFLINE=1 didn't stop datasets.builder from downloading
|
closed
| 2021-12-16T18:51:13
| 2022-02-17T14:16:27
| 2022-02-17T14:16:27
|
https://github.com/huggingface/datasets/issues/3447
| null |
dunalduck0
| false
|
[
"Hi ! Indeed it says \"downloading and preparing\" but in your case it didn't need to download anything since you used local files (it would have thrown an error otherwise). I think we can improve the logging to make it clearer in this case",
"@lhoestq Thank you for explaining. I am sorry but I was not clear about my intention. I didn't want to kill internet traffic; I wanted to kill all write activity. In other words, you can imagine that my storage has only read access but crashes on write.\r\n\r\nWhen run_clm.py is invoked with the same parameters, the hash in the cache directory \"datacache/trainpy.v2/json/default-471372bed4b51b53/0.0.0/...\" doesn't change, and my job can load cached data properly. This is great.\r\n\r\nUnfortunately, when params change (which happens sometimes), the hash changes and the old cache is invalid. datasets builder would create a new cache directory with the new hash and create JSON builder there, even though every JSON builder is the same. I didn't find a way to avoid such behavior.\r\n\r\nThis problem can be resolved when using datasets.map() for tokenizing and grouping text. This function allows me to specify output filenames with --cache_file_names, so that the cached files are always valid.\r\n\r\nThis is the code that I used to freeze cache filenames for tokenization. I wish I could do the same to datasets.load_dataset()\r\n```\r\n tokenized_datasets = raw_datasets.map(\r\n tokenize_function,\r\n batched=True,\r\n num_proc=data_args.preprocessing_num_workers,\r\n remove_columns=column_names,\r\n load_from_cache_file=not data_args.overwrite_cache,\r\n desc=\"Running tokenizer on dataset\",\r\n cache_file_names={k: os.path.join(model_args.cache_dir, f'{k}-tokenized') for k in raw_datasets},\r\n )\r\n```",
"Hi ! `load_dataset` may re-generate your dataset if some parameters changed indeed. If you want to freeze a dataset loaded with `load_dataset`, I think the best solution is just to save it somewhere on your disk with `.save_to_disk(my_dataset_dir)` and reload it with `load_from_disk(my_dataset_dir)`. This way you will be able to reload the dataset without having to run `load_dataset`"
] |
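A minimal sketch of the freeze-and-reload workflow suggested in the last comment (file and directory names are illustrative):

```python
from datasets import load_dataset, load_from_disk

# First run: build the dataset once and freeze it on disk
ds = load_dataset("json", data_files="train.json")
ds.save_to_disk("my_dataset_dir")

# Later runs: reload the frozen copy without re-running load_dataset
ds = load_from_disk("my_dataset_dir")
```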
1,082,414,229
| 3,446
|
Remove redundant local path information in audio/image datasets
|
closed
| 2021-12-16T16:35:15
| 2023-09-24T10:09:30
| 2023-09-24T10:09:27
|
https://github.com/huggingface/datasets/pull/3446
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3446",
"html_url": "https://github.com/huggingface/datasets/pull/3446",
"diff_url": "https://github.com/huggingface/datasets/pull/3446.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3446.patch",
"merged_at": null
}
|
mariosasko
| true
|
[
"Cool, I'm in favor of this PR. Our official examples in speech already make use of `\"audio\"` so no need to change anything there. It would be great if we could prominently feature how one can get the audio path without decoding in the docs.",
"@patrickvonplaten Yes, I agree.\r\n\r\ncc @stevhliu we should add an example where decoding is disabled (to read paths) to [this section](https://github.com/huggingface/datasets/blob/master/docs/source/audio_process.rst#audio-datasets) in the docs and remove the mentions of `path`/`file` (if we merge this PR).",
"Thanks @mariosasko. Are you planning to finish this before we remove the dataset scripts from GitHub? "
] |
1,082,370,968
| 3,445
|
question
|
closed
| 2021-12-16T15:57:00
| 2022-01-03T10:09:00
| 2022-01-03T10:09:00
|
https://github.com/huggingface/datasets/issues/3445
| null |
BAKAYOKO0232
| false
|
[
"Hi ! What's your question ?"
] |
1,082,078,961
| 3,444
|
Align the Dataset and IterableDataset processing API
|
open
| 2021-12-16T11:26:11
| 2025-01-31T11:07:07
| null |
https://github.com/huggingface/datasets/issues/3444
| null |
lhoestq
| false
|
[
"Yes I agree, these should be as aligned as possible. Maybe we can also check the feedback in the survey at http://hf.co/oss-survey and see if people mentioned related things on the API (in particular if we go the breaking change way, it would be good to be sure we are taking the right direction for the community).",
"I like this proposal.\r\n\r\n> There is also an important difference in terms of behavior:\r\nDataset.map adds new columns (with dict.update)\r\nBUT\r\nIterableDataset discards previous columns (it overwrites the dict)\r\nIMO the two methods should have the same behavior. This would be an important breaking change though.\r\n\r\n> The main breaking change would be the change of behavior of IterableDataset.map, because currently it discards all the previous columns instead of keeping them.\r\n\r\nYes, this behavior of `IterableDataset.map` was surprising to me the first time I used it because I was expecting the same behavior as `Dataset.map`, so I'm OK with the breaking change here.\r\n\r\n> IterableDataset only supports \"torch\" (it misses tf, jax, pandas, arrow) and is missing the parameters: columns, output_all_columns and format_kwargs\r\n\r\n\\+ it's also missing the actual formatting code (we return unformatted tensors)\r\n> We could have a completely aligned map method if both methods were lazy by default, but this is a very big breaking change so I'm not sure we can consider doing that.\r\n\r\n> For information, TFDS does lazy map by default, and has an additional .cache() method.\r\n\r\nIf I understand this part correctly, the idea would be for `Dataset.map` to behave similarly to `Dataset.with_transform` (lazy processing) and to have an option to cache processed data (with `.cache()`). This idea is really nice because it can also be applied to `IterableDataset` to fix https://github.com/huggingface/datasets/issues/3142 (again we get the aligned APIs). However, this change would break a lot of things, so I'm still not sure if this is a step in the right direction (maybe it's OK for Datasets 2.0?) \r\n> If the two APIs are more aligned it would be awesome for the examples in transformers, and it would create a satisfactory experience for users that want to switch from one mode to the other.\r\n\r\nYes, it would be amazing to have an option to easily switch between these two modes.\r\n\r\nI agree with the rest.\r\n",
"> If I understand this part correctly, the idea would be for Dataset.map to behave similarly to Dataset.with_transform (lazy processing) and to have an option to cache processed data (with .cache()). This idea is really nice because it can also be applied to IterableDataset to fix #3142 (again we get the aligned APIs). However, this change would break a lot of things, so I'm still not sure if this is a step in the right direction (maybe it's OK for Datasets 2.0?)\r\n\r\nYea this is too big of a change in my opinion. Anyway it's fine as it is right now with streaming=lazy and regular=eager.",
"Hi, IterableDataset is also missing set_format.",
"Yes indeed, thanks. I added it to the list of methods to align in the first post",
"I just encountered the problem of the missing `fn_kwargs` parameter in the `map` method. I am commenting to give a workaround in case someone has the same problem and does not find a solution.\r\nYou can wrap your function call inside a class that contains the other parameters needed by the function called by map, like this:\r\n\r\n```python\r\ndef my_func(x, y, z):\r\n # Do things\r\n\r\nclass MyFuncWrapper:\r\n def __init__(self, y, z):\r\n self.y = y\r\n self.z = z\r\n\r\n def __call__(self, x):\r\n return my_func(x, self.y, self.z)\r\n```\r\n\r\nThen, give an instance of the `MyFuncWrapper` to the map function.",
"Any update on this? It's almost 2024😂 @lhoestq ",
"The main differences have been addressed (map, formatting) but there are still a few things to implement like Dataset.take, Dataset.skip, IterableDataset.set_format, IterableDataset.formatted_as, IterableDataset.reset_format.\r\n\r\nThe rest cannot be implemented for the general case. E.g. train_test_split and select can only work on an iterable dataset if the underlying dataset format allows it (we need to know the number of rows and have some sort of random access)",
"It appears `IterableDataset` now supports all the formats apart from `pandas` but the documentation doesn't have any mention of it yet. The docstring of `with_format` seems like it's even older incorrectly saying it only supports `arrow`. Are there any plans to update the documentation and have some guides on best practices?",
"Thanks, I updated the docstrings. Would be cool to have more examples in the docs though, if this is something you'd like to contribute ;)",
"Now both `Dataset` and `IterableDataset` support all formats including pandas, arrow, polars, torch, tf, numpy, jax :)\n\n```python\nfor df in ds.with_format(\"pandas\").iter(batch_size=100):\n ...\n```\nwill do a new release soon"
] |
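As a lighter alternative to the wrapper class shown in the thread, `functools.partial` achieves the same effect while `fn_kwargs` is missing (a sketch; `my_func`, `iterable_dataset`, and the bound values are placeholders):

```python
from functools import partial

def my_func(x, y, z):
    ...  # the function from the comment above

# Bind the extra arguments up front instead of writing a wrapper class
mapped = iterable_dataset.map(partial(my_func, y=1, z=2))
```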
1,082,052,833
| 3,443
|
Extend iter_archive to support file object input
|
closed
| 2021-12-16T10:59:14
| 2021-12-17T17:53:03
| 2021-12-17T17:53:02
|
https://github.com/huggingface/datasets/pull/3443
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3443",
"html_url": "https://github.com/huggingface/datasets/pull/3443",
"diff_url": "https://github.com/huggingface/datasets/pull/3443.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3443.patch",
"merged_at": "2021-12-17T17:53:02"
}
|
albertvillanova
| true
|
[] |
1,081,862,747
| 3,442
|
Extend text to support yielding lines, paragraphs or documents
|
closed
| 2021-12-16T07:33:17
| 2021-12-20T16:59:10
| 2021-12-20T16:39:18
|
https://github.com/huggingface/datasets/pull/3442
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3442",
"html_url": "https://github.com/huggingface/datasets/pull/3442",
"diff_url": "https://github.com/huggingface/datasets/pull/3442.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3442.patch",
"merged_at": "2021-12-20T16:39:18"
}
|
albertvillanova
| true
|
[
"The parameter can also be named `split_by` with values \"line\", \"paragraph\" or \"document\" (no 's' at the end)",
"> The parameter can also be named `split_by` with values \"line\", \"paragraph\" or \"document\" (no 's' at the end)\r\n\r\n@lhoestq @mariosasko I would avoid the term `split` in this context and keep it only for \"train\", \"validation\" and \"test\" splits.\r\n- https://huggingface.co/docs/datasets/process.html#split\r\n > datasets.Dataset.train_test_split() creates train and test splits, if your dataset doesn’t already have them.\r\n- https://huggingface.co/docs/datasets/process.html#process-multiple-splits\r\n > Many datasets have splits that you can process simultaneously with datasets.DatasetDict.map().\r\n\r\nPlease note that in the documentation, one of the terms more frequently used in this context is **\"row\"**:\r\n- https://huggingface.co/docs/datasets/access.html#features-and-columns\r\n > A dataset is a table of rows and typed columns.\r\n\r\n > Return the number of rows and columns with the following standard attributes:\r\n > dataset.num_columns\r\n > 4\r\n > dataset.num_rows\r\n > 3668\r\n\r\n- https://huggingface.co/docs/datasets/access.html#rows-slices-batches-and-columns\r\n > Get several rows of your dataset at a time with slice notation or a list of indices:\r\n- https://huggingface.co/docs/datasets/process.html#map\r\n > This function can even create new rows and columns.\r\n\r\nOther of the terms more frequently used in the docs (in the code as well) is **\"example\"**:\r\n- https://huggingface.co/docs/datasets/process.html#map\r\n > It allows you to apply a processing function to each example in a dataset, independently or in batches.\r\n- https://huggingface.co/docs/datasets/process.html#batch-processing\r\n > datasets.Dataset.map() also supports working with batches of examples.\r\n- https://huggingface.co/docs/datasets/process.html#split-long-examples\r\n > When your examples are too long, you may want to split them\r\n- https://huggingface.co/docs/datasets/process.html#data-augmentation\r\n > With batch processing, you can even augment your dataset with additional examples.\r\n\r\nLess frequently used: **\"item\"**:\r\n- https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.add_item\r\n > Add item to Dataset.\r\n\r\nOther term used in the docs (although less frequently) is **\"sample\"**. The advantage of this word is that it is also a verb, so we can use the parameter: \"sample_by\" (if you insist on using a verb instead of a noun).\r\n\r\nIn summary, these proposals:\r\n- config.row\r\n- config.example\r\n- config.item\r\n- config.sample\r\n- config.sample_by",
"I like `sample_by`. Another idea I had was `separate_by`.\r\n\r\nIt could also be `sampling`, `sampling_method`, `separation_method`.\r\n\r\nNot a big fan of the proposed nouns alone since they are very generic, that's why I tried to have something more specific.\r\n\r\nI also agree that we actually should avoid `split` to avoid any confusion",
"Thanks for the analysis of the used terms. I also like `sample_by` (`separate_by` is good too).",
"Thank you !! :D "
] |
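With the naming settled on in this thread, usage would look something like this (a sketch; the file name is illustrative):

```python
from datasets import load_dataset

# One example per paragraph instead of the default one per line
ds = load_dataset("text", data_files="corpus.txt", sample_by="paragraph")
```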
1,081,571,784
| 3,441
|
Add QuALITY dataset
|
open
| 2021-12-15T22:26:19
| 2021-12-28T15:17:05
| null |
https://github.com/huggingface/datasets/issues/3441
| null |
lewtun
| false
|
[
"I'll take this one if no one hasn't yet!"
] |
1,081,528,426
| 3,440
|
datasets keeps reading from cached files, although I disabled it
|
closed
| 2021-12-15T21:26:22
| 2022-02-24T09:12:22
| 2022-02-24T09:12:22
|
https://github.com/huggingface/datasets/issues/3440
| null |
dorost1234
| false
|
[
"Hi ! What version of `datasets` are you using ? Can you also provide the logs you get before it raises the error ?"
] |
1,081,389,723
| 3,439
|
Add `cast_column` to `IterableDataset`
|
closed
| 2021-12-15T19:00:45
| 2021-12-16T15:55:20
| 2021-12-16T15:55:19
|
https://github.com/huggingface/datasets/pull/3439
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3439",
"html_url": "https://github.com/huggingface/datasets/pull/3439",
"diff_url": "https://github.com/huggingface/datasets/pull/3439.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3439.patch",
"merged_at": "2021-12-16T15:55:19"
}
|
mariosasko
| true
|
[
"Awesome thanks a lot @mariosasko "
] |
1,081,302,203
| 3,438
|
Update supported versions of Python in setup.py
|
closed
| 2021-12-15T17:30:12
| 2021-12-20T14:22:13
| 2021-12-20T14:22:12
|
https://github.com/huggingface/datasets/pull/3438
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3438",
"html_url": "https://github.com/huggingface/datasets/pull/3438",
"diff_url": "https://github.com/huggingface/datasets/pull/3438.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3438.patch",
"merged_at": "2021-12-20T14:22:12"
}
|
mariosasko
| true
|
[] |
1,081,247,889
| 3,437
|
Update BLEURT hyperlink
|
closed
| 2021-12-15T16:34:47
| 2021-12-17T13:28:26
| 2021-12-17T13:28:25
|
https://github.com/huggingface/datasets/pull/3437
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3437",
"html_url": "https://github.com/huggingface/datasets/pull/3437",
"diff_url": "https://github.com/huggingface/datasets/pull/3437.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3437.patch",
"merged_at": "2021-12-17T13:28:25"
}
|
lewtun
| true
|
[
"seems like a very very low-prio improvement :)",
"@albertvillanova thanks for the feedback! I removed the formatting altogether since I think this is a bit simpler tor read than non-rendered Markdown"
] |
1,081,068,139
| 3,436
|
Add the OneStopQa dataset
|
closed
| 2021-12-15T13:53:31
| 2021-12-17T14:32:00
| 2021-12-17T13:25:29
|
https://github.com/huggingface/datasets/pull/3436
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3436",
"html_url": "https://github.com/huggingface/datasets/pull/3436",
"diff_url": "https://github.com/huggingface/datasets/pull/3436.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3436.patch",
"merged_at": "2021-12-17T13:25:29"
}
|
OmerShubi
| true
|
[] |
1,081,043,756
| 3,435
|
Improve Wikipedia Loading Script
|
closed
| 2021-12-15T13:30:06
| 2022-03-04T08:16:00
| 2022-03-04T08:16:00
|
https://github.com/huggingface/datasets/pull/3435
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3435",
"html_url": "https://github.com/huggingface/datasets/pull/3435",
"diff_url": "https://github.com/huggingface/datasets/pull/3435.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3435.patch",
"merged_at": "2022-03-04T08:16:00"
}
|
geohci
| true
|
[
"I wanted to flag a change from since we discussed this: I initially wrote a function for using the Wikimedia APIs to collect namespace aliases, but decided that adding in more http requests to the script wasn't a great idea so instead used that code to build a static list that I just added directly to the code.\r\n\r\nAlso, an FYI that python library dependencies weren't working on my local end so I wasn't able to directly test the code. I tested a copy with the problematic elements stripped (beam etc.) that worked fine, but someone with a working local copy may want to test just to make sure I didn't accidentally break anything.",
"Also, while I would argue more strongly for some of the changes in this code, they are five distinct changes so not so hard to remove one or two if other folks think they aren't worth the overhead etc.",
"I also add a comment by @geohci in the Issue page:\r\n> See https://public.paws.wmcloud.org/User:Isaac_(WMF)/HuggingFace%20Wikipedia%20Processing.ipynb for more implementation details / some data around the overhead induced by adding the extra preprocessing steps (stripping link prefixes and magic words)",
"Hi ! Thanks a lot, this is very cool ! Note that unfortunately if we change the processing right now, users won't be able to load the \"big\" languages like english anymore, because it requires an Apache Beam runtime to process them. Some Wikipedia dumps have been processed by Hugging Face so that users don't need to run Apache Beam stuff.\r\n\r\nTherefore, we can merge this change after we have processed dumps using this new processing, and host them on the Hugging Face google storage.\r\n\r\nI think we can take care of this and let you know once this is ready ? What do you think @albertvillanova ?\r\n\r\nThis is also an opportunity to have the latest dumps ready, the current ones are from 2020",
"Related PR on updating to the latest dates: https://github.com/huggingface/datasets/pull/3612",
"@lhoestq if the additional processing steps are validated, we could go on generating the processed datasets for the big languages.\r\n\r\nThe only thing before doing that is that we should also validate other change (so that we include it also in the processed datasets):\r\n- #3398 ",
"> @lhoestq if the additional processing steps are validated, we could go on generating the processed datasets for the big languages.\r\n\r\nCool ! Looking forward to it :)\r\n\r\n> The only thing before doing that is that we should also validate other change (so that we include it also in the processed datasets):\r\n> \r\n> https://github.com/huggingface/datasets/issues/3398\r\n\r\nSounds good ! We can definitely add the URL as asked by the Wikipedia to provide credits to the authors.",
"@geohci I do not have push rights to this PR. See: [Enabling repository maintainer permissions on existing pull requests](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/allowing-changes-to-a-pull-request-branch-created-from-a-fork#enabling-repository-maintainer-permissions-on-existing-pull-requests).\r\n\r\nI would like to merge the master branch so that all tests pass. Once done, I will be able approve this PR.",
"> @geohci I do not have push rights to this PR. See: [Enabling repository maintainer permissions on existing pull requests](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/allowing-changes-to-a-pull-request-branch-created-from-a-fork#enabling-repository-maintainer-permissions-on-existing-pull-requests).\r\n> \r\n> I would like to merge the master branch so that all tests pass. Once done, I will be able approve this PR.\r\n\r\n@albertvillanova the `Allow edits by maintainers` box was already checked (what your instructions indicated) and indicates `If checked, users with write access to huggingface/datasets can add new commits to your wikipedia-updates branch. You can always change this setting later.` so you should have permissions already. If there's something else I'm missing or can do, please let me know. If it's not easy to resolve, I am plenty comfortable with you creating a new PR with these changes under your account too."
] |
1,080,917,446
| 3,434
|
Add The People's Speech
|
closed
| 2021-12-15T11:21:21
| 2023-02-28T16:22:29
| 2023-02-28T16:22:28
|
https://github.com/huggingface/datasets/issues/3434
| null |
mariosasko
| false
|
[
"This dataset is now available on the Hub here: https://huggingface.co/datasets/MLCommons/peoples_speech"
] |
1,080,910,724
| 3,433
|
Add Multilingual Spoken Words dataset
|
closed
| 2021-12-15T11:14:44
| 2022-02-22T10:03:53
| 2022-02-22T10:03:53
|
https://github.com/huggingface/datasets/issues/3433
| null |
albertvillanova
| false
|
[] |
1,079,910,769
| 3,432
|
Correctly indent builder config in dataset script docs
|
closed
| 2021-12-14T15:39:47
| 2021-12-14T17:35:17
| 2021-12-14T17:35:17
|
https://github.com/huggingface/datasets/pull/3432
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3432",
"html_url": "https://github.com/huggingface/datasets/pull/3432",
"diff_url": "https://github.com/huggingface/datasets/pull/3432.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3432.patch",
"merged_at": "2021-12-14T17:35:17"
}
|
mariosasko
| true
|
[] |
1,079,866,083
| 3,431
|
Unable to resolve any data file after loading once
|
closed
| 2021-12-14T15:02:15
| 2022-12-11T10:53:04
| 2022-02-24T09:13:52
|
https://github.com/huggingface/datasets/issues/3431
| null |
LzyFischer
| false
|
[
"Hi ! `load_dataset` accepts as input either a local dataset directory or a dataset name from the Hugging Face Hub.\r\n\r\nSo here you are getting this error the second time because it tries to load the local `wiki_dpr` directory, instead of `wiki_dpr` from the Hub. It doesn't work since it's a **cache** directory, not a **dataset** directory in itself.\r\n\r\nTo fix that you can use another cache directory like `cache_dir=\"/data2/whr/lzy/open_domain_data/retrieval/cache\"`",
"thx a lot"
] |
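A sketch of the suggested fix, pointing `cache_dir` at a dedicated cache directory instead of reusing it as the dataset path (the config name is an assumption for illustration):

```python
from datasets import load_dataset

ds = load_dataset(
    "wiki_dpr",
    "psgs_w100.nq.exact",  # config name assumed for illustration
    cache_dir="/data2/whr/lzy/open_domain_data/retrieval/cache",
)
```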
1,079,811,124
| 3,430
|
Make decoding of Audio and Image feature optional
|
closed
| 2021-12-14T14:15:08
| 2022-01-25T18:57:52
| 2022-01-25T18:57:52
|
https://github.com/huggingface/datasets/pull/3430
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3430",
"html_url": "https://github.com/huggingface/datasets/pull/3430",
"diff_url": "https://github.com/huggingface/datasets/pull/3430.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3430.patch",
"merged_at": "2022-01-25T18:57:52"
}
|
mariosasko
| true
|
[
"Closing this PR for now due to https://github.com/huggingface/datasets/issues/3145#issuecomment-993664104.",
"Okay, after some more thinking, I'm re-opening this PR for three reasons:\r\n* This feature will allow us to remove the `image_file_path`/`audio_file_path` columns in our vision/audio datasets. Currently, it makes sense to keep those columns because it's not obvious how to access the underlying path information of the Image/Audio feature. However, if the user is not aware and does `dataset[0][\"image_file_path\"]` on our vision/audio datasets, this will be costly because the image/audio file data has to be decoded first (stored in `dataset[0][\"image\"]`/`dataset[0][\"audio\"]`)\r\n* In CV, we often work with the so-called \"half life\" datasets (RedCaps, WIT, ...) that only provide image URLs and not actual image data, and some of these image URLs even go down. Our solution to this problem is to give a note on how to efficiently download the image data using `map` in the datasets cards of such datasets. This feature will remove the need for a separate `image_url` column of type `Value(\"string\")` in such datasets. Instead, we will be able to use the `image` column of type `Image()` (the image feature knows how to decode image URLs using `xopen`), disable decoding and use `requests.get` for download, which I expect to be faster than `xopen`.\r\n* This feature should help us in implementing `push_to_hub` for the Image/Audio where we transfer actual image/audio data and not paths",
"> This feature will allow us to remove the image_file_path/audio_file_path columns in our vision/audio datasets. Currently, it makes sense to keep those columns because it's not obvious how to access the underlying path information of the Image/Audio feature. However, if the user is not aware and does dataset[0][\"image_file_path\"] on our vision/audio datasets, this will be costly because the image/audio file data has to be decoded first (stored in dataset[0][\"image\"]/dataset[0][\"audio\"])\r\n\r\nThat makes sense !\r\n\r\n> Instead, we will be able to use the image column of type Image() (the image feature knows how to decode image URLs using xopen), disable decoding and use requests.get for download, which I expect to be faster than xopen.\r\n\r\nI feel like it's a bit convoluted compared to having the `image_url` column as string, and say to users to `map` using `requests.get` with `image_url`.\r\n\r\nMoreover I'm not 100% sure that we should have `Image` features with both local paths and URLs, since this behavior is a bit hidden the users and they don't give the same performance at all.\r\n\r\n> This feature should help us in implementing push_to_hub for the Image/Audio where we transfer actual image/audio data and not paths\r\n\r\nCool !",
"Thanks, @lhoestq.\r\n\r\n> I feel like it's a bit convoluted compared to having the image_url column as string, and say to users to map using requests.get with image_url.\r\n\r\nYes, that makes sense. \r\n\r\n>Moreover I'm not 100% sure that we should have Image features with both local paths and URLs, since this behavior is a bit hidden the users and they don't give the same performance at all.\r\n\r\nDo you mean we should remove support for URLs in the Image feature? Because this is what we get for free by adding support for streaming (by using `xopen` instead of `open`) and this is also what the Audio feature does.",
"> Do you mean we should remove support for URLs in the Image feature? Because this is what we get for free by adding support for streaming (by using xopen instead of open) and this is also what the Audio feature does.\r\n\r\nI think it might not be ideal to have URLs in an `Image` type column for a dataset in **non-streaming** mode, since you'd expect to have everything locally. But for a streaming dataset it must use `xopen` indeed",
"Yes, I agree. Let's have the `image_url` columns as `Value(\"string\")` + a note with the map function to download images for local datasets and a note with `cast_column` (which is requested in https://github.com/huggingface/datasets/issues/3369) for streamed datasets (`ds.cast_column(\"image_url\", Image())`).",
"I fixed the merge conflicts and small bugs in nested decoding introduced by #3575. Additionally, I addressed https://github.com/huggingface/datasets/issues/3473 by adding the `_iter` method to `Dataset` (inspired by the `_getitem` method). For `iter(dset)` I set `_iter(dset, decoded=True)` to enable decoding and for `map` `_iter(dset, decoded=False)` to make it lazy."
] |
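Once this PR lands, disabling decoding would look something like this (a sketch based on the `decode` parameter introduced here; `ds` is a placeholder dataset with an `image` column):

```python
from datasets import Image

# Re-cast the column with decoding turned off; examples then expose
# the underlying path/bytes instead of a decoded PIL image.
ds = ds.cast_column("image", Image(decode=False))
print(ds[0]["image"])  # {"path": ..., "bytes": ...}
```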
1,078,902,390
| 3,429
|
Make cast cacheable (again) on Windows
|
closed
| 2021-12-13T19:32:02
| 2021-12-14T14:39:51
| 2021-12-14T14:39:50
|
https://github.com/huggingface/datasets/pull/3429
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3429",
"html_url": "https://github.com/huggingface/datasets/pull/3429",
"diff_url": "https://github.com/huggingface/datasets/pull/3429.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3429.patch",
"merged_at": "2021-12-14T14:39:50"
}
|
mariosasko
| true
|
[] |
1,078,863,468
| 3,428
|
Clean squad dummy data
|
closed
| 2021-12-13T18:46:29
| 2021-12-13T18:57:50
| 2021-12-13T18:57:50
|
https://github.com/huggingface/datasets/pull/3428
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3428",
"html_url": "https://github.com/huggingface/datasets/pull/3428",
"diff_url": "https://github.com/huggingface/datasets/pull/3428.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3428.patch",
"merged_at": "2021-12-13T18:57:50"
}
|
lhoestq
| true
|
[] |
1,078,782,159
| 3,427
|
Add The Pile Enron Emails subset
|
closed
| 2021-12-13T17:14:16
| 2021-12-14T17:30:59
| 2021-12-14T17:30:57
|
https://github.com/huggingface/datasets/pull/3427
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3427",
"html_url": "https://github.com/huggingface/datasets/pull/3427",
"diff_url": "https://github.com/huggingface/datasets/pull/3427.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3427.patch",
"merged_at": "2021-12-14T17:30:55"
}
|
albertvillanova
| true
|
[] |
1,078,670,031
| 3,426
|
Update disaster_response_messages download urls (+ add validation split)
|
closed
| 2021-12-13T15:30:12
| 2021-12-14T14:38:30
| 2021-12-14T14:38:29
|
https://github.com/huggingface/datasets/pull/3426
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3426",
"html_url": "https://github.com/huggingface/datasets/pull/3426",
"diff_url": "https://github.com/huggingface/datasets/pull/3426.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3426.patch",
"merged_at": "2021-12-14T14:38:29"
}
|
mariosasko
| true
|
[] |
1,078,598,140
| 3,425
|
Getting configs names takes too long
|
open
| 2021-12-13T14:27:57
| 2021-12-13T14:53:33
| null |
https://github.com/huggingface/datasets/issues/3425
| null |
severo
| false
|
[
"maybe related to https://github.com/huggingface/datasets/issues/2859\r\n",
"It looks like it's currently calling `HfFileSystem.ls()` ~8 times at the root and for each subdirectory:\r\n- \"\"\r\n- \"en.noblocklist\"\r\n- \"en.noclean\"\r\n- \"en\"\r\n- \"multilingual\"\r\n- \"realnewslike\"\r\n\r\nCurrently `ls` is slow because it iterates on all the files inside the repository.\r\n\r\nAn easy optimization would be to cache the result of each call to `ls`.\r\nWe can also optimize `ls` by using a tree structure per directory instead of a list of all the files.\r\n",
"ok\r\n"
] |
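A sketch of the caching optimization proposed above, memoizing `ls` results per path (purely illustrative; the real `HfFileSystem` integration would differ):

```python
from functools import lru_cache

class CachedListing:
    """Illustrative wrapper memoizing directory listings per path."""

    def __init__(self, fs):
        # Wrap the filesystem's ls so identical paths are listed once
        self._cached_ls = lru_cache(maxsize=None)(fs.ls)

    def ls(self, path):
        # Repeated calls for the same path hit the cache instead of
        # re-listing every file in the repository.
        return self._cached_ls(path)
```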