| id (int64, 953M to 3.35B) | number (int64, 2.72k to 7.75k) | title (string, 1 to 290 chars) | state (string, 2 classes) | created_at (timestamp[s], 2021-07-26 12:21:17 to 2025-08-23 00:18:43) | updated_at (timestamp[s], 2021-07-26 13:27:59 to 2025-08-23 12:34:39) | closed_at (timestamp[s], 2021-07-26 13:27:59 to 2025-08-20 16:35:55, nullable) | html_url (string, 49 to 51 chars) | pull_request (dict) | user_login (string, 3 to 26 chars) | is_pull_request (bool, 2 classes) | comments (list, 0 to 30 items) |
|---|---|---|---|---|---|---|---|---|---|---|---|
1,113,556,837
| 3,627
|
Fix host URL in The Pile datasets
|
closed
| 2022-01-25T08:11:28
| 2022-07-20T20:54:42
| 2022-02-14T08:40:58
|
https://github.com/huggingface/datasets/pull/3627
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3627",
"html_url": "https://github.com/huggingface/datasets/pull/3627",
"diff_url": "https://github.com/huggingface/datasets/pull/3627.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3627.patch",
"merged_at": "2022-02-14T08:40:58"
}
|
albertvillanova
| true
|
[
"We should also update the `bookcorpusopen` download url (see #3561) , no? ",
"For `the_pile_openwebtext2` and `the_pile_stack_exchange` I did not regenerate the JSON files, but instead I just changed the download_checksums URL. ",
"Seems like the mystic URL is now broken and the original should be used. ",
"Also if I git clone and edit the repo or reset it before this PR it is still trying to pull using mystic? Why is this? "
] |
| 1,113,534,436 | 3,626 | The Pile cannot connect to host | closed | 2022-01-25T07:43:33 | 2022-02-14T08:40:58 | 2022-02-14T08:40:58 | https://github.com/huggingface/datasets/issues/3626 | null | albertvillanova | false | [] |
1,113,017,522
| 3,625
|
Add a metadata field for when source data was produced
|
open
| 2022-01-24T18:52:39
| 2022-06-28T13:54:49
| null |
https://github.com/huggingface/datasets/issues/3625
| null |
davanstrien
| false
|
[
"A question to the datasets maintainers: is there a policy about how the set of allowed metadata fields is maintained and expanded?\r\n\r\nMetadata are very important, but defining the standard is always a struggle between allowing exhaustivity without being too complex. Archivists have Dublin Core, open data has https://frictionlessdata.io/, geo has ISO 19139 and INSPIRE, etc. and it's always a mess! I'm not sure we want to dig too much into it, but I'm curious to know if there has been some work on the metadata standard.",
"> Metadata are very important, but defining the standard is always a struggle between allowing exhaustivity without being too complex. Archivists have Dublin Core, open data has [frictionlessdata.io](https://frictionlessdata.io/), geo has ISO 19139 and INSPIRE, etc. and it's always a mess! I'm not sure we want to dig too much into it, but I'm curious to know if there has been some work on the metadata standard.\r\n\r\n\r\nI thought this is a potential issue with adding this field since it might be hard to define what is general enough to be useful for most data vs what becomes very domain-specific. Potentially adding one extra field leads to more and more fields in the future. \r\n\r\nAnother issue is that there are some metadata standards around data i.e. [datacite](https://schema.datacite.org/meta/kernel-4.4/), but not many aimed explicitly at ML data afaik. Some of the discussions around metadata for ML are also more focused on versioning/managing data in production environments. My thinking is that here, some reference to the time of production would also often be tracked/relevant, i.e. for triggering model training, so having this information available in the hub would also help address this use case. ",
"Adding a relevant paper related to this topic: [TimeLMs: Diachronic Language Models from Twitter](https://arxiv.org/abs/2202.03829)\r\n\r\n",
"Related: https://github.com/huggingface/datasets/issues/3877",
"Also related: the [Data Catalog Vocabulary - DCAT](https://www.w3.org/TR/vocab-dcat/) standard will be discussed in a new Working Group at the W3C: https://www.w3.org/2022/06/dx-wg-charter.html"
] |
1,112,835,239
| 3,623
|
Extend support for streaming datasets that use os.path.relpath
|
closed
| 2022-01-24T16:00:52
| 2022-02-04T14:03:55
| 2022-02-04T14:03:54
|
https://github.com/huggingface/datasets/pull/3623
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3623",
"html_url": "https://github.com/huggingface/datasets/pull/3623",
"diff_url": "https://github.com/huggingface/datasets/pull/3623.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3623.patch",
"merged_at": "2022-02-04T14:03:54"
}
|
albertvillanova
| true
|
[] |
| 1,112,831,661 | 3,622 | Extend support for streaming datasets that use os.path.relpath | closed | 2022-01-24T15:58:23 | 2022-02-04T14:03:54 | 2022-02-04T14:03:54 | https://github.com/huggingface/datasets/issues/3622 | null | albertvillanova | false | [] |
1,112,720,434
| 3,621
|
Consider adding `ipywidgets` as a dependency.
|
closed
| 2022-01-24T14:27:11
| 2022-02-24T09:04:36
| 2022-02-24T09:04:36
|
https://github.com/huggingface/datasets/issues/3621
| null |
koaning
| false
|
[
"Hi! We use `tqdm` to display progress bars, so I suggest you open this issue in their repo.",
"It depends on how you use `tqdm`, no? \r\n\r\nDoesn't this library import via; \r\n\r\n```\r\nfrom tqdm.notebook import tqdm\r\n```",
"Hi! Sorry for the late reply. We import `tqdm` as `from tqdm.auto import tqdm`, which should be equal to `from tqdm.notebook import tqdm` in Jupyter.",
"Any objection if I make a PR that checks if the widgets library is installed beforehand? "
] |
1,112,677,252
| 3,620
|
Add Fon language tag
|
closed
| 2022-01-24T13:52:26
| 2022-02-04T14:04:36
| 2022-02-04T14:04:35
|
https://github.com/huggingface/datasets/pull/3620
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3620",
"html_url": "https://github.com/huggingface/datasets/pull/3620",
"diff_url": "https://github.com/huggingface/datasets/pull/3620.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3620.patch",
"merged_at": "2022-02-04T14:04:35"
}
|
albertvillanova
| true
|
[] |
1,112,611,415
| 3,619
|
fix meta in mls
|
closed
| 2022-01-24T12:54:38
| 2022-01-24T20:53:22
| 2022-01-24T20:53:22
|
https://github.com/huggingface/datasets/pull/3619
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3619",
"html_url": "https://github.com/huggingface/datasets/pull/3619",
"diff_url": "https://github.com/huggingface/datasets/pull/3619.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3619.patch",
"merged_at": "2022-01-24T20:53:21"
}
|
polinaeterna
| true
|
[
"Feel free to merge @polinaeterna as soon as you got an approval from either @lhoestq , @albertvillanova or @mariosasko"
] |
1,112,123,365
| 3,618
|
TIMIT Dataset not working with GPU
|
closed
| 2022-01-24T03:26:03
| 2023-07-25T15:20:20
| 2023-07-25T15:20:20
|
https://github.com/huggingface/datasets/issues/3618
| null |
TheSeamau5
| false
|
[
"Hi ! I think you should avoid calling `timit_train['audio']`. Indeed by doing so you're **loading all the audio column in memory**. This is problematic in your case because the TIMIT dataset is huge.\r\n\r\nIf you want to access the audio data of some samples, you should do this instead `timit_train[:10][\"train\"]` for example.\r\n\r\nOther than that, I'm not sure why you get a `TypeError: string indices must be integers`, do you have a code snippet that reproduces the issue that you can share here ?",
"I get the same error when I try to do `timit_train[0]` or really any indexing into the whole thing. \r\n\r\nReally, that IS the code snippet that reproduces the issue. If you index into other fields like 'file' or whatever, it works. As soon as one of the fields you're looking into is 'audio', you get that issue. It's a weird issue and I suspect it's Sagemaker/environment related, maybe the mix of libraries and dependencies are not good. \r\n\r\n\r\nExample code snippet with issue. \r\n```python\r\nfrom datasets import load_dataset\r\n\r\ntimit_train = load_dataset('timit_asr', split='train')\r\nprint(timit_train[0])\r\n```",
"Ok I see ! From the error you got, it looks like the `value` encoded in the arrow file of the TIMIT dataset you loaded is a string instead of a dictionary with keys \"path\" and \"bytes\" but we don't support this since 1.18\r\n\r\nCan you try regenerating the dataset with `load_dataset('timit_asr', download_mode=\"force_redownload\")` please ? I think it should fix the issue."
] |
1,111,938,691
| 3,617
|
PR for the CFPB Consumer Complaints dataset
|
closed
| 2022-01-23T17:47:12
| 2022-02-07T21:08:31
| 2022-02-07T21:08:31
|
https://github.com/huggingface/datasets/pull/3617
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3617",
"html_url": "https://github.com/huggingface/datasets/pull/3617",
"diff_url": "https://github.com/huggingface/datasets/pull/3617.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3617.patch",
"merged_at": "2022-02-07T21:08:31"
}
|
kayvane1
| true
|
[
"> Nice ! Thanks for adding this dataset :)\n> \n> \n> \n> I left a few comments:\n\nThanks!\n\nI'd be interested in contributing to the core codebase - I had to go down the custom loading approach because I couldn't pull this dataset in using the load_dataset() method. Using either the json or csv files available for this dataset as it was erroring. \n\nI'll rerun it and share the errors and try debug",
"Hey @lhoestq ,\r\n\r\nWhen I use this dataset as part of my project, I'm using this method\r\n\r\n`text_dataset = text_dataset['train'].train_test_split(test_size=0.2)`\r\n\r\nto create a train and test split as this dataset doesn't have one. \r\n\r\nCan I add this directly in the script itself somehow, or is it better to give users the flexibility to slice and split their datasets after loading?",
"> I'd be interested in contributing to the core codebase - I had to go down the custom loading approach because I couldn't pull this dataset in using the load_dataset() method. Using either the json or csv files available for this dataset as it was erroring.\r\n>\r\n> I'll rerun it and share the errors and try debug\r\n\r\nCool ! Let me know if you have questions or if I can help :)\r\n\r\n> Can I add this directly in the script itself somehow, or is it better to give users the flexibility to slice and split their datasets after loading?\r\n\r\nUsually we let the users the flexibility to split the datasets themselves (unless the dataset is already split, or if there is already a standard way to split it in the papers that use it)",
"Thanks Quentin!\r\nAll okay to merge now?",
"Thanks for the feedback Quentin and Mario - implemented all changes :)\r\n\r\n",
"Hey @lhoestq / @mariosasko \r\nAny other changes required to merge? 🤗",
"Hi ! Thanks and sorry for the late response \r\n\r\nIt looks very good ! The CI is still failing because it can't file the dummy_data.zip file, you can fix that by moving `datasets/consumer-finance-complaints/dummy/1.0.0/dummy_data.zip` to `datasets/consumer-finance-complaints/dummy/0.0.0/dummy_data.zip` and it should be all good !",
"@lhoestq - hopefully that should do it!\r\n"
] |
1,111,587,861
| 3,616
|
Make streamable the BnL Historical Newspapers dataset
|
closed
| 2022-01-22T14:52:36
| 2022-02-04T14:05:23
| 2022-02-04T14:05:21
|
https://github.com/huggingface/datasets/pull/3616
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3616",
"html_url": "https://github.com/huggingface/datasets/pull/3616",
"diff_url": "https://github.com/huggingface/datasets/pull/3616.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3616.patch",
"merged_at": "2022-02-04T14:05:21"
}
|
albertvillanova
| true
|
[] |
1,111,576,876
| 3,615
|
Dataset BnL Historical Newspapers does not work in streaming mode
|
closed
| 2022-01-22T14:12:59
| 2022-02-04T14:05:21
| 2022-02-04T14:05:21
|
https://github.com/huggingface/datasets/issues/3615
| null |
albertvillanova
| false
|
[
"@albertvillanova let me know if there is anything I can do to help with this. I had a quick look at the code again and though I could try the following changes:\r\n- use `download` instead of `download_and_extract`\r\nhttps://github.com/huggingface/datasets/blob/d3d339fb86d378f4cb3c5d1de423315c07a466c6/datasets/bnl_newspapers/bnl_newspapers.py#L136\r\n- swith to using `iter_archive` to loop through downloaded data to replace\r\nhttps://github.com/huggingface/datasets/blob/d3d339fb86d378f4cb3c5d1de423315c07a466c6/datasets/bnl_newspapers/bnl_newspapers.py#L159\r\n\r\nLet me know if it's useful for me to try and make those changes. ",
"Thanks @davanstrien.\r\n\r\nI have already been working on it so that it can be used in the BigScience workshop.\r\n\r\nI agree that the `rglob()` is not efficient in this case.\r\n\r\nI tried different solutions without success:\r\n- `iter_archive` cannot be used in this case because it does not support ZIP files yet\r\n\r\nFinally I have used `iter_files()`.",
"I see this is fixed now 🙂. I also picked up a few other tips from your redactors so hopefully my next attempts will support streaming from the start. "
] |
1,110,736,657
| 3,614
|
Minor fixes
|
closed
| 2022-01-21T17:48:44
| 2022-01-24T12:45:49
| 2022-01-24T12:45:49
|
https://github.com/huggingface/datasets/pull/3614
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3614",
"html_url": "https://github.com/huggingface/datasets/pull/3614",
"diff_url": "https://github.com/huggingface/datasets/pull/3614.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3614.patch",
"merged_at": "2022-01-24T12:45:49"
}
|
mariosasko
| true
|
[] |
1,110,684,015
| 3,613
|
Files not updating in dataset viewer
|
closed
| 2022-01-21T16:47:20
| 2022-01-22T08:13:13
| 2022-01-22T08:13:13
|
https://github.com/huggingface/datasets/issues/3613
| null |
abidlabs
| false
|
[
"Yes. The jobs queue is full right now, following an upgrade... Back to normality in the next hours hopefully. I'll look at your datasets to be sure the dataset viewer works as expected on them.",
"Should have been fixed now."
] |
1,110,506,466
| 3,612
|
wikifix
|
closed
| 2022-01-21T14:05:11
| 2022-02-03T17:58:16
| 2022-02-03T17:58:16
|
https://github.com/huggingface/datasets/pull/3612
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3612",
"html_url": "https://github.com/huggingface/datasets/pull/3612",
"diff_url": "https://github.com/huggingface/datasets/pull/3612.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3612.patch",
"merged_at": null
}
|
apergo-ai
| true
|
[
"tests fail because of dataset_infos.json isn't updated. Unfortunately, I cannot get the datasets-cli locally to execute without error. Would need to troubleshoot, what's missing. Maybe someone else can pick up the stick. ",
"Hi ! If we change the default date to the latest one, users won't be able to load the \"big\" languages like english anymore, because it requires an Apache Beam runtime to process them. On the contrary, the old data 20200501 has been processed by Hugging Face so that users don't need to run Apache Beam stuff.\r\n\r\nTherefore I'm in favor of not changing the default date until we have processed the latest versions of wikipedia.\r\n\r\nUsers that want to load other languages or that can use Apache Beam can still pass the `language` and `date` parameter to `load_dataset` if they want anyway:\r\n```python\r\nload_dataset(\"wikipedia\", language=\"fr\", date=\"20220120\")\r\n```",
"in that case you can close the PR",
"Ok thanks !\r\n\r\n(oh I I just noticed that the dataset card is missing the documentation regarding the language and date parameters, let me add it)"
] |
1,110,399,096
| 3,611
|
Indexing bug after dataset.select()
|
closed
| 2022-01-21T12:09:30
| 2022-01-27T18:16:22
| 2022-01-27T18:16:22
|
https://github.com/huggingface/datasets/issues/3611
| null |
kamalkraj
| false
|
[
"Hi! Thanks for reporting! I've opened a PR with the fix."
] |
1,109,777,314
| 3,610
|
Checksum error when trying to load amazon_review dataset
|
closed
| 2022-01-20T21:20:32
| 2022-01-21T13:22:31
| 2022-01-21T13:22:31
|
https://github.com/huggingface/datasets/issues/3610
| null |
ghost
| false
|
[
"It is solved now"
] |
1,109,579,112
| 3,609
|
Fixes to pubmed dataset download function
|
closed
| 2022-01-20T17:31:35
| 2022-03-03T16:18:52
| 2022-03-03T14:23:35
|
https://github.com/huggingface/datasets/pull/3609
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3609",
"html_url": "https://github.com/huggingface/datasets/pull/3609",
"diff_url": "https://github.com/huggingface/datasets/pull/3609.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3609.patch",
"merged_at": null
}
|
spacemanidol
| true
|
[
"Hi ! I think we can simply add a new configuration for the 2022 data instead of replacing them.\r\nYou can add the new configuration here:\r\n```python\r\n BUILDER_CONFIGS = [\r\n datasets.BuilderConfig(name=\"2021\", description=\"The 2021 annual record\", version=datasets.Version(\"1.0.0\")),\r\n datasets.BuilderConfig(name=\"2022\", description=\"The 2022 annual record\", version=datasets.Version(\"1.0.0\")),\r\n ]\r\n```\r\n\r\nAnd we can have the URLs for these two versions this way:\r\n```python\r\n_URLs = {\r\n \"2021\": f\"ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/pubmed21n{i:04d}.xml.gz\" for i in range(1, 1063)],\r\n \"2022\": f\"ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/pubmed22n{i:04d}.xml.gz\" for i in range(1, 1114)]\r\n}\r\n```\r\nand depending on the configuration name (you can get it with `self.config.name`) we can pick the URLs of 2021 or the ones of 2022 and pass them to the `dl_manager` in `_split_generators`\r\n\r\nFeel free to ping me if you have questions or if I can help !",
"Hi @spacemanidol, thanks for your contribution.\r\n\r\nThe update of the PubMed dataset URL (besides the update of the corresponding metadata and the dummy data) was already merged to master branch in this other PR:\r\n- #3692 \r\n\r\nI'm closing this PR then.\r\n\r\n@lhoestq please take into account that 2021 data is no longer accessible: every year PubMed releases the baseline data (containing all previous data until that year) and from that on, they release daily updates. ",
"> @lhoestq please take into account that 2021 data is no longer accessible: every year PubMed releases the baseline data (containing all previous data until that year) and from that on, they release daily updates.\r\n\r\nOh ok I didn't know, thanks"
] |
1,109,310,981
| 3,608
|
Add support for continuous metrics (RMSE, MAE)
|
closed
| 2022-01-20T13:35:36
| 2022-03-09T17:18:20
| 2022-03-09T17:18:20
|
https://github.com/huggingface/datasets/issues/3608
| null |
ck37
| false
|
[
"Hey @ck37 \r\n\r\nYou can always use a custom metric as explained [in this guide from HF](https://huggingface.co/docs/datasets/master/loading_metrics.html#using-a-custom-metric-script).\r\n\r\nIf this issue needs to be contributed to (for enhancing the metric API) I think [this link](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_absolute_error.html) would be helpful for the `MAE` metric.",
"You can use a local metric script just by providing its path instead of the usual shortcut name ",
"#self-assign I have starting working on this issue to enhance the metric API."
] |
1,109,218,370
| 3,607
|
Add MIT Scene Parsing Benchmark
|
closed
| 2022-01-20T12:03:07
| 2022-02-18T12:51:01
| 2022-02-18T12:51:00
|
https://github.com/huggingface/datasets/pull/3607
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3607",
"html_url": "https://github.com/huggingface/datasets/pull/3607",
"diff_url": "https://github.com/huggingface/datasets/pull/3607.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3607.patch",
"merged_at": "2022-02-18T12:51:00"
}
|
mariosasko
| true
|
[] |
1,108,918,701
| 3,606
|
audio column not saved correctly after resampling
|
closed
| 2022-01-20T06:37:10
| 2022-01-23T01:41:01
| 2022-01-23T01:24:14
|
https://github.com/huggingface/datasets/issues/3606
| null |
laphang
| false
|
[
"Hi ! We just released a new version of `datasets` that should fix this.\r\n\r\nI tested resampling and using save/load_from_disk afterwards and it seems to be fixed now",
"Hi @lhoestq, \r\n\r\nJust tested the latest datasets version, and confirming that this is fixed for me. \r\n\r\nThanks!",
"Also, just an FYI, data that I had saved (with save_to_disk) previously from common voice using datasets==1.17.0 now give the error below when loading (with load_from disk) using datasets==1.18.0. \r\n\r\nHowever, when starting fresh using load_dataset, then doing the resampling, the save/load_from disk worked fine. \r\n\r\n```\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n<timed exec> in <module>\r\n\r\n/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_from_disk(dataset_path, fs, keep_in_memory)\r\n 1747 return Dataset.load_from_disk(dataset_path, fs, keep_in_memory=keep_in_memory)\r\n 1748 elif fs.isfile(Path(dest_dataset_path, config.DATASETDICT_JSON_FILENAME).as_posix()):\r\n-> 1749 return DatasetDict.load_from_disk(dataset_path, fs, keep_in_memory=keep_in_memory)\r\n 1750 else:\r\n 1751 raise FileNotFoundError(\r\n\r\n/opt/conda/lib/python3.7/site-packages/datasets/dataset_dict.py in load_from_disk(dataset_dict_path, fs, keep_in_memory)\r\n 769 else Path(dest_dataset_dict_path, k).as_posix()\r\n 770 )\r\n--> 771 dataset_dict[k] = Dataset.load_from_disk(dataset_dict_split_path, fs, keep_in_memory=keep_in_memory)\r\n 772 return dataset_dict\r\n 773 \r\n\r\n/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in load_from_disk(dataset_path, fs, keep_in_memory)\r\n 1118 info=dataset_info,\r\n 1119 split=split,\r\n-> 1120 fingerprint=state[\"_fingerprint\"],\r\n 1121 )\r\n 1122 \r\n\r\n/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in __init__(self, arrow_table, info, split, indices_table, fingerprint)\r\n 655 if self.info.features.type != inferred_features.type:\r\n 656 raise ValueError(\r\n--> 657 f\"External features info don't match the dataset:\\nGot\\n{self.info.features}\\nwith type\\n{self.info.features.type}\\n\\nbut expected something like\\n{inferred_features}\\nwith type\\n{inferred_features.type}\"\r\n 658 )\r\n 659 \r\n\r\nValueError: External features info don't match the dataset:\r\nGot\r\n{'accent': Value(dtype='string', id=None), 'age': Value(dtype='string', id=None), 'audio': Audio(sampling_rate=48000, mono=True, id=None), 'client_id': Value(dtype='string', id=None), 'down_votes': Value(dtype='int64', id=None), 'gender': Value(dtype='string', id=None), 'locale': Value(dtype='string', id=None), 'path': Value(dtype='string', id=None), 'segment': Value(dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'up_votes': Value(dtype='int64', id=None)}\r\nwith type\r\nstruct<accent: string, age: string, audio: struct<bytes: binary, path: string>, client_id: string, down_votes: int64, gender: string, locale: string, path: string, segment: string, sentence: string, up_votes: int64>\r\n\r\nbut expected something like\r\n{'accent': Value(dtype='string', id=None), 'age': Value(dtype='string', id=None), 'audio': {'path': Value(dtype='string', id=None), 'bytes': Value(dtype='binary', id=None)}, 'client_id': Value(dtype='string', id=None), 'down_votes': Value(dtype='int64', id=None), 'gender': Value(dtype='string', id=None), 'locale': Value(dtype='string', id=None), 'path': Value(dtype='string', id=None), 'segment': Value(dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'up_votes': Value(dtype='int64', id=None)}\r\nwith type\r\nstruct<accent: string, age: string, audio: struct<path: string, bytes: binary>, client_id: string, down_votes: int64, gender: string, locale: string, path: string, segment: string, sentence: string, 
up_votes: int64> \r\n```"
] |
1,108,738,561
| 3,605
|
Adding Turkic X-WMT evaluation set for machine translation
|
closed
| 2022-01-20T01:40:29
| 2022-01-31T09:50:57
| 2022-01-31T09:50:57
|
https://github.com/huggingface/datasets/pull/3605
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3605",
"html_url": "https://github.com/huggingface/datasets/pull/3605",
"diff_url": "https://github.com/huggingface/datasets/pull/3605.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3605.patch",
"merged_at": "2022-01-31T09:50:57"
}
|
mirzakhalov
| true
|
[
"hi! Thank you for all the comments! I believe I addressed them all. Let me know if there is anything else",
"Hi there! I was wondering if there is anything else to change before this can be merged",
"@lhoestq Hi! Just a gentle reminder about the steps to merge this one! ",
"Thanks for the heads up ! I think I fixed the last issue with the YAML tags",
"The CI failure is unrelated to this PR and fixed on master, let's merge :)\r\n\r\nThanks a lot !"
] |
1,108,477,316
| 3,604
|
Dataset Viewer not showing Previews for Private Datasets
|
closed
| 2022-01-19T19:29:26
| 2022-09-26T08:04:43
| 2022-09-26T08:04:43
|
https://github.com/huggingface/datasets/issues/3604
| null |
abidlabs
| false
|
[
"Sure, it's on the roadmap.",
"Closing in favor of https://github.com/huggingface/datasets-server/issues/39."
] |
1,108,392,141
| 3,603
|
Add British Library books dataset
|
closed
| 2022-01-19T17:53:05
| 2022-01-31T17:22:51
| 2022-01-31T17:01:49
|
https://github.com/huggingface/datasets/pull/3603
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3603",
"html_url": "https://github.com/huggingface/datasets/pull/3603",
"diff_url": "https://github.com/huggingface/datasets/pull/3603.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3603.patch",
"merged_at": "2022-01-31T17:01:49"
}
|
davanstrien
| true
|
[
"Thanks for all the help and suggestions\r\n\r\n> Since the dataset has a very specific structure it might not be that easy so feel free to ping me if you have questions or if I can help !\r\n\r\nI did get a little stuck here! So far I have created directories for each config i.e:\r\n\r\n`datasets/datasets/blbooks/dummy/1700_1799/1.0.2/dummy_data.zip` \r\n\r\nI have then added two examples of the `jsonl.gz` files that are in the underlying dataset to each dummy_data directory.This fails the test using local files. \r\n\r\nSince \r\n\r\n```python\r\ndef _generate_examples(self, data_dirs):\r\n```\r\n\r\ntakes as input `data_dirs` which is a list of `iter_dirs` do I need to put the dummy files inside another directory? i.e. \r\n\r\n`datasets/datasets/blbooks/dummy/1700_1799/1.0.2/dummy_data/1700/00.jsonl.gz` \r\n\r\n\r\n ",
"I think I managed to create the dummy data :)\r\n\r\nI think everything is good now, if you don't have other changes to do, please mark your PR as \"ready for review\" and ping me!",
"> I think I managed to create the dummy data :)\r\n\r\nThanks so much for that!\r\n\r\n> I think everything is good now, if you don't have other changes to do, please mark your PR as \"ready for review\" and ping me!\r\n\r\nThink it is ready to merge from my end @lhoestq. ",
"The CI failure on windows is unrelated to your PR and fixed on `master`, we can ignore it"
] |
1,108,247,870
| 3,602
|
Update url for conll2003
|
closed
| 2022-01-19T15:35:04
| 2022-01-20T16:23:03
| 2022-01-19T15:43:53
|
https://github.com/huggingface/datasets/pull/3602
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3602",
"html_url": "https://github.com/huggingface/datasets/pull/3602",
"diff_url": "https://github.com/huggingface/datasets/pull/3602.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3602.patch",
"merged_at": "2022-01-19T15:43:53"
}
|
lhoestq
| true
|
[
"Hi. lhoestq \r\n\r\n\r\nWhat is the solution for it?\r\nyou can see it is still doesn't work here.\r\nhttps://colab.research.google.com/drive/1l52FGWuSaOaGYchit4CbmtUSuzNDx_Ok?usp=sharing\r\nThank you.\r\n",
"For now you can specify `load_dataset(..., revision=\"master\")` to use the fix on `master`.\r\n\r\nWe'll also do a new release of `datasets` tomorrow I think"
] |
1,108,207,131
| 3,601
|
Add conll2003 licensing
|
closed
| 2022-01-19T15:00:41
| 2022-01-19T17:17:28
| 2022-01-19T17:17:28
|
https://github.com/huggingface/datasets/pull/3601
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3601",
"html_url": "https://github.com/huggingface/datasets/pull/3601",
"diff_url": "https://github.com/huggingface/datasets/pull/3601.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3601.patch",
"merged_at": "2022-01-19T17:17:28"
}
|
lhoestq
| true
|
[] |
1,108,131,878
| 3,600
|
Use old url for conll2003
|
closed
| 2022-01-19T13:56:49
| 2022-01-19T14:16:28
| 2022-01-19T14:16:28
|
https://github.com/huggingface/datasets/pull/3600
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3600",
"html_url": "https://github.com/huggingface/datasets/pull/3600",
"diff_url": "https://github.com/huggingface/datasets/pull/3600.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3600.patch",
"merged_at": "2022-01-19T14:16:28"
}
|
lhoestq
| true
|
[] |
1,108,111,607
| 3,599
|
The `add_column()` method does not work if used on dataset sliced with `select()`
|
closed
| 2022-01-19T13:36:50
| 2022-01-28T15:35:57
| 2022-01-28T15:35:57
|
https://github.com/huggingface/datasets/issues/3599
| null |
ThGouzias
| false
|
[
"similar #3611 "
] |
1,108,107,199
| 3,598
|
Readme info not being parsed to show on Dataset card page
|
closed
| 2022-01-19T13:32:29
| 2022-01-21T10:20:01
| 2022-01-21T10:20:01
|
https://github.com/huggingface/datasets/issues/3598
| null |
davidcanovas
| false
|
[
"i suspect a markdown parsing error, @severo do you want to take a quick look at it when you have some time?",
"# Problem\r\nThe issue seems to coming from the front matter of the README\r\n```---\r\nannotations_creators:\r\n- no-annotation\r\nlanguage_creators:\r\n- machine-generated\r\nlanguages:\r\n- 'ca'\r\n- 'de'\r\nlicenses:\r\n- cc-by-4.0\r\nmultilinguality:\r\n- translation\r\npretty_name: Catalan-German aligned corpora to train NMT systems.\r\nsize_categories:\r\n- \"1M<n<10M\" \r\nsource_datasets:\r\n- extended|tilde_model\r\ntask_categories:\r\n- machine-translation\r\ntask_ids:\r\n- machine-translation\r\n---\r\n``` \r\n# Solution\r\nThe fix is to correctly style the README as explained [here](https://huggingface.co/docs/datasets/v1.12.0/dataset_card.html). I have also correctly parsed the font matter as shown below:\r\n```\r\n---\r\nannotations_creators: []\r\nlanguage_creators: [machine-generated]\r\nlanguages: ['ca', 'de']\r\nlicenses: []\r\nmultilinguality:\r\n- multilingual\r\npretty_name: 'Catalan-German aligned corpora to train NMT systems.'\r\nsize_categories: \r\n- 1M<n<10M\r\nsource_datasets: ['extended|tilde_model']\r\ntask_categories: ['machine-translation']\r\ntask_ids: ['machine-translation']\r\n---\r\n```\r\nYou can find the README for a sample dataset [here](https://huggingface.co/datasets/ritwikraha/Test)",
"Thank you. It finally worked implementing your changes and leaving a white line between title and text in the description.",
"Thanks, if this solves your issue, can you please close it?"
] |
1,108,092,864
| 3,597
|
ERROR: File "setup.py" or "setup.cfg" not found. Directory cannot be installed in editable mode: /content
|
closed
| 2022-01-19T13:19:28
| 2022-08-05T12:35:51
| 2022-02-14T08:46:34
|
https://github.com/huggingface/datasets/issues/3597
| null |
amitkml
| false
|
[
"Hi! The `cd` command in Jupyer/Colab needs to start with `%`, so this should work:\r\n```\r\n!git clone https://github.com/huggingface/datasets.git\r\n%cd datasets\r\n!pip install -e \".[streaming]\"\r\n```",
"thanks @mariosasko i had the same mistake and your solution is what was needed"
] |
1,107,345,338
| 3,596
|
Loss of cast `Image` feature on certain dataset method
|
closed
| 2022-01-18T20:44:01
| 2022-01-21T18:07:28
| 2022-01-21T18:07:28
|
https://github.com/huggingface/datasets/issues/3596
| null |
davanstrien
| false
|
[
"Hi! Thanks for reporting! The issue with `cast_column` should be fixed by #3575 and after we merge that PR I'll start working on the `push_to_hub` support for the `Image`/`Audio` feature.",
"> Hi! Thanks for reporting! The issue with `cast_column` should be fixed by #3575 and after we merge that PR I'll start working on the `push_to_hub` support for the `Image`/`Audio` feature.\r\n\r\nThanks, I'll keep an eye out for #3575 getting merged. I managed to use `push_to_hub` sucesfully with images when they were loaded via `map` - something like `ds.map(lambda example: {\"img\": load_image_function(example['fname']})`, this only pushed the images to the hub if the `load_image_function` return a PIL Image without the filename attribute though. I guess this might often be the prefered behaviour though. \r\n",
"Hi ! We merged the PR and did a release of `datasets` that includes the changes. Can you try updating `datasets` and try again ?",
"> Hi ! We merged the PR and did a release of `datasets` that includes the changes. Can you try updating `datasets` and try again ?\r\n\r\nThanks for checking. There is no longer an error when calling `select` but it appears the cast value isn't preserved. Before `select`\r\n\r\n```python\r\ndataset.features\r\n{'url': Image(id=None)}\r\n```\r\n\r\nafter select:\r\n```\r\n{'url': Value(dtype='string', id=None)}\r\n```\r\n\r\nUpdated Colab example [here](https://colab.research.google.com/gist/davanstrien/4e88f55a3675c279b5c2f64299ae5c6f/potential_casting_bug.ipynb) ",
"Hmmm, if I re-run your google colab I'm getting the right type at the end:\r\n```\r\nsample.features\r\n# {'url': Image(id=None)}\r\n```",
"Appolgies - I've just run again and also got this output. I have also sucesfully used the `push_to_hub` method. I think this is fixed now so will close this issue. ",
"Fixed in #3575 "
] |
1,107,260,527
| 3,595
|
Add ImageNet toy datasets from fastai
|
closed
| 2022-01-18T19:03:35
| 2023-09-24T09:39:07
| 2022-09-30T14:39:35
|
https://github.com/huggingface/datasets/pull/3595
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3595",
"html_url": "https://github.com/huggingface/datasets/pull/3595",
"diff_url": "https://github.com/huggingface/datasets/pull/3595.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3595.patch",
"merged_at": null
}
|
mariosasko
| true
|
[
"Thanks for your contribution, @mariosasko. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if you need some help."
] |
1,107,174,619
| 3,594
|
fix multiple language downloading in mC4
|
closed
| 2022-01-18T17:25:19
| 2022-01-19T11:22:57
| 2022-01-18T19:10:22
|
https://github.com/huggingface/datasets/pull/3594
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3594",
"html_url": "https://github.com/huggingface/datasets/pull/3594",
"diff_url": "https://github.com/huggingface/datasets/pull/3594.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3594.patch",
"merged_at": "2022-01-18T19:10:22"
}
|
polinaeterna
| true
|
[
"The CI failure is unrelated to your PR and fixed on master, merging :)"
] |
1,107,070,852
| 3,593
|
Update README.md
|
closed
| 2022-01-18T15:52:16
| 2022-01-20T17:14:53
| 2022-01-20T17:14:53
|
https://github.com/huggingface/datasets/pull/3593
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3593",
"html_url": "https://github.com/huggingface/datasets/pull/3593",
"diff_url": "https://github.com/huggingface/datasets/pull/3593.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3593.patch",
"merged_at": "2022-01-20T17:14:52"
}
|
borgr
| true
|
[] |
1,107,026,723
| 3,592
|
Add QuickDraw dataset
|
closed
| 2022-01-18T15:13:39
| 2022-06-09T10:04:54
| 2022-06-09T09:56:13
|
https://github.com/huggingface/datasets/pull/3592
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3592",
"html_url": "https://github.com/huggingface/datasets/pull/3592",
"diff_url": "https://github.com/huggingface/datasets/pull/3592.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3592.patch",
"merged_at": "2022-06-09T09:56:13"
}
|
mariosasko
| true
|
[
"_The documentation is not available anymore as the PR was closed or merged._"
] |
1,106,928,613
| 3,591
|
Add support for time, date, duration, and decimal dtypes
|
closed
| 2022-01-18T13:46:05
| 2022-01-31T18:29:34
| 2022-01-20T17:37:33
|
https://github.com/huggingface/datasets/pull/3591
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3591",
"html_url": "https://github.com/huggingface/datasets/pull/3591",
"diff_url": "https://github.com/huggingface/datasets/pull/3591.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3591.patch",
"merged_at": "2022-01-20T17:37:33"
}
|
mariosasko
| true
|
[
"Is there a dataset which uses these four datatypes for tests purposes?\r\n",
"@severo Not yet. I'll let you know if that changes."
] |
1,106,784,860
| 3,590
|
Update ANLI README.md
|
closed
| 2022-01-18T11:22:53
| 2022-01-20T16:58:41
| 2022-01-20T16:58:41
|
https://github.com/huggingface/datasets/pull/3590
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3590",
"html_url": "https://github.com/huggingface/datasets/pull/3590",
"diff_url": "https://github.com/huggingface/datasets/pull/3590.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3590.patch",
"merged_at": "2022-01-20T16:58:41"
}
|
borgr
| true
|
[] |
1,106,766,114
| 3,589
|
Pin torchmetrics to fix the COMET test
|
closed
| 2022-01-18T11:03:49
| 2022-01-18T11:04:56
| 2022-01-18T11:04:55
|
https://github.com/huggingface/datasets/pull/3589
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3589",
"html_url": "https://github.com/huggingface/datasets/pull/3589",
"diff_url": "https://github.com/huggingface/datasets/pull/3589.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3589.patch",
"merged_at": "2022-01-18T11:04:55"
}
|
lhoestq
| true
|
[] |
1,106,749,000
| 3,588
|
Update HellaSwag README.md
|
closed
| 2022-01-18T10:46:15
| 2022-01-20T16:57:43
| 2022-01-20T16:57:43
|
https://github.com/huggingface/datasets/pull/3588
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3588",
"html_url": "https://github.com/huggingface/datasets/pull/3588",
"diff_url": "https://github.com/huggingface/datasets/pull/3588.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3588.patch",
"merged_at": "2022-01-20T16:57:43"
}
|
borgr
| true
|
[] |
| 1,106,719,182 | 3,587 | No module named 'fsspec.archive' | closed | 2022-01-18T10:17:01 | 2022-08-11T09:57:54 | 2022-01-18T10:33:10 | https://github.com/huggingface/datasets/issues/3587 | null | shuuchen | false | [] |
| 1,106,455,672 | 3,586 | Revisit `enable/disable_` toggle function prefix | closed | 2022-01-18T04:09:55 | 2022-03-14T15:01:08 | 2022-03-14T15:01:08 | https://github.com/huggingface/datasets/issues/3586 | null | jaketae | false | [] |
1,105,821,470
| 3,585
|
Datasets streaming + map doesn't work for `Audio`
|
closed
| 2022-01-17T12:55:42
| 2022-01-20T13:28:00
| 2022-01-20T13:28:00
|
https://github.com/huggingface/datasets/issues/3585
| null |
patrickvonplaten
| false
|
[
"This seems related to https://github.com/huggingface/datasets/issues/3505."
] |
| 1,105,231,768 | 3,584 | https://huggingface.co/datasets/huggingface/transformers-metadata | closed | 2022-01-17T00:18:14 | 2022-02-14T08:51:27 | 2022-02-14T08:51:27 | https://github.com/huggingface/datasets/issues/3584 | null | ecankirkic | false | [] |
1,105,195,144
| 3,583
|
Add The Medical Segmentation Decathlon Dataset
|
open
| 2022-01-16T21:42:25
| 2022-03-18T10:44:42
| null |
https://github.com/huggingface/datasets/issues/3583
| null |
omarespejel
| false
|
[
"Hello! I have recently been involved with a medical image segmentation project myself and was going through the `The Medical Segmentation Decathlon Dataset` as well. \r\nI haven't yet had experience adding datasets to this repository yet but would love to get started. Should I take this issue?\r\nIf yes, I've got two questions -\r\n1. There are 10 different datasets available, so are all datasets to be added in a single PR, or one at a time? \r\n2. Since it's a competition, masks for the test-set are not available. How is that to be tackled? Sorry if it's a silly question, I have recently started exploring `datasets`.",
"Hi! Sure, feel free to take this issue. You can self-assign the issue by commenting `#self-assign`.\r\n\r\nTo answer your questions:\r\n1. It makes the most sense to add each one as a separate config, so one dataset script with 10 configs in a single PR.\r\n2. Just set masks in the test set to `None`.\r\n\r\nNote that the images/masks in this dataset are in NIfTI format, which our `Image` feature currently doesn't support, so I think it's best to yield the paths to the images/masks in the script and add a preprocessing section to the card where we explain how to load/process the images/masks with `nibabel` (I can help with that). \r\n\r\n",
"> Note that the images/masks in this dataset are in NIfTI format, which our `Image` feature currently doesn't support, so I think it's best to yield the paths to the images/masks in the script and add a preprocessing section to the card where we explain how to load/process the images/masks with `nibabel` (I can help with that).\r\n\r\nGotcha, thanks. Will start working on the issue and let you know in case of any doubt.",
"#self-assign",
"This is great! There is a first model on the HUb that uses this dataset! https://huggingface.co/MONAI/example_spleen_segmentation"
] |
1,104,877,303
| 3,582
|
conll 2003 dataset source url is no longer valid
|
closed
| 2022-01-15T23:04:17
| 2022-07-20T13:06:40
| 2022-01-21T16:57:32
|
https://github.com/huggingface/datasets/issues/3582
| null |
rcanand
| false
|
[
"I came to open the same issue.",
"Thanks for reporting !\r\n\r\nI pushed a temporary fix on `master` that uses an URL from a previous commit to access the dataset for now, until we have a better solution",
"I changed the URL again to use another host, the fix is available on `master` and we'll probably do a new release of `datasets` tomorrow.\r\n\r\nIn the meantime, feel free to do `load_dataset(..., revision=\"master\")` to use the fixed script",
"We just released a new version of `datasets` with a working URL. Feel free to update `datasets` and try again :)",
"Hello! Unfortunately, this URL does not work for me. \r\nCould you please tell me how I can solve the problem?\r\n\r\n`>>> from datasets import load_dataset\r\n>>> dataset = load_dataset(\"conll2003\")\r\nDownloading and preparing dataset conll2003/conll2003 (download: 4.63 MiB, generated: 9.78 MiB, post-processed: Unknown size, total: 14.41 MiB) to /home/dafedo/.cache/huggingface/datasets/conll2003/conll2003/1.0.0/40e7cb6bcc374f7c349c83acd1e9352a4f09474eb691f64f364ee62eb65d0ca6...\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/dafedo/efficient-task-transfer/venv/lib/python3.8/site-packages/datasets/load.py\", line 745, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/dafedo/efficient-task-transfer/venv/lib/python3.8/site-packages/datasets/builder.py\", line 574, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/dafedo/efficient-task-transfer/venv/lib/python3.8/site-packages/datasets/builder.py\", line 630, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"/home/dafedo/.cache/huggingface/modules/datasets_modules/datasets/conll2003/40e7cb6bcc374f7c349c83acd1e9352a4f09474eb691f64f364ee62eb65d0ca6/conll2003.py\", line 196, in _split_generators\r\n downloaded_files = dl_manager.download_and_extract(urls_to_download)\r\n File \"/home/dafedo/efficient-task-transfer/venv/lib/python3.8/site-packages/datasets/utils/download_manager.py\", line 287, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"/home/dafedo/efficient-task-transfer/venv/lib/python3.8/site-packages/datasets/utils/download_manager.py\", line 195, in download\r\n downloaded_path_or_paths = map_nested(\r\n File \"/home/dafedo/efficient-task-transfer/venv/lib/python3.8/site-packages/datasets/utils/py_utils.py\", line 203, in map_nested\r\n mapped = [\r\n File \"/home/dafedo/efficient-task-transfer/venv/lib/python3.8/site-packages/datasets/utils/py_utils.py\", line 204, in <listcomp>\r\n _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)\r\n File \"/home/dafedo/efficient-task-transfer/venv/lib/python3.8/site-packages/datasets/utils/py_utils.py\", line 142, in _single_map_nested\r\n return function(data_struct)\r\n File \"/home/dafedo/efficient-task-transfer/venv/lib/python3.8/site-packages/datasets/utils/download_manager.py\", line 218, in _download\r\n return cached_path(url_or_filename, download_config=download_config)\r\n File \"/home/dafedo/efficient-task-transfer/venv/lib/python3.8/site-packages/datasets/utils/file_utils.py\", line 281, in cached_path\r\n output_path = get_from_cache(\r\n File \"/home/dafedo/efficient-task-transfer/venv/lib/python3.8/site-packages/datasets/utils/file_utils.py\", line 621, in get_from_cache\r\n raise FileNotFoundError(\"Couldn't find file at {}\".format(url))\r\nFileNotFoundError: Couldn't find file at https://github.com/davidsbatista/NER-datasets/raw/master/CONLL2003/train.txt\r\n`\r\n\r\nI receive the same error when I run \"itrain run_configs/conll2003.json\" from https://github.com/adapter-hub/efficient-task-transfer\r\n\r\nThank you very much in advance!\r\n\r\nRegards, \r\nDaria\r\n",
"Can you try updating `datasets` and try again ?\r\n```\r\npip install -U datasets\r\n```",
"@lhoestq Thank you very much for your answer! \r\n\r\nIt works this way, but for my research I need datasets==1.6.3 or closest to it because otherwise the other package would not work as it is built on this version.\r\nDo you have any other suggestion? I would really appreciate it. Maybe which version of the datasets is without hard-coded link but closest to 1.6.3\r\n",
"No problem, I have solved it. \r\nThank you anyway.",
"Out of curiosity, which package has the `datasets==1.6.3` requirement ?"
] |
1,104,857,822
| 3,581
|
Unable to create a dataset from a parquet file in S3
|
open
| 2022-01-15T21:34:16
| 2022-02-14T08:52:57
| null |
https://github.com/huggingface/datasets/issues/3581
| null |
regCode
| false
|
[
"Hi ! Currently it only works with local paths, file-like objects are not supported yet"
] |
1,104,663,242
| 3,580
|
Bug in wiki bio load
|
closed
| 2022-01-15T10:04:33
| 2022-01-31T08:38:09
| 2022-01-31T08:38:09
|
https://github.com/huggingface/datasets/issues/3580
| null |
tuhinjubcse
| false
|
[
"+1, here's the error I got: \r\n\r\n```\r\n>>> from datasets import load_dataset\r\n>>>\r\n>>> load_dataset(\"wiki_bio\")\r\nDownloading: 7.58kB [00:00, 4.42MB/s]\r\nDownloading: 2.71kB [00:00, 1.30MB/s]\r\nUsing custom data configuration default\r\nDownloading and preparing dataset wiki_bio/default (download: 318.53 MiB, generated: 736.94 MiB, post-processed: Unknown size, total: 1.03 GiB) to /home/jxm3/.cache/huggingface/datasets/wiki_bio/default/1.1.0/5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9...\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/jxm3/.conda/envs/torch/lib/python3.9/site-packages/datasets/load.py\", line 1694, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/jxm3/.conda/envs/torch/lib/python3.9/site-packages/datasets/builder.py\", line 595, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/jxm3/.conda/envs/torch/lib/python3.9/site-packages/datasets/builder.py\", line 662, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"/home/jxm3/.cache/huggingface/modules/datasets_modules/datasets/wiki_bio/5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9/wiki_bio.py\", line 125, in _split_generators\r\n data_dir = dl_manager.download_and_extract(my_urls)\r\n File \"/home/jxm3/.conda/envs/torch/lib/python3.9/site-packages/datasets/utils/download_manager.py\", line 308, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"/home/jxm3/.conda/envs/torch/lib/python3.9/site-packages/datasets/utils/download_manager.py\", line 196, in download\r\n downloaded_path_or_paths = map_nested(\r\n File \"/home/jxm3/.conda/envs/torch/lib/python3.9/site-packages/datasets/utils/py_utils.py\", line 251, in map_nested\r\n return function(data_struct)\r\n File \"/home/jxm3/.conda/envs/torch/lib/python3.9/site-packages/datasets/utils/download_manager.py\", line 217, in _download\r\n return cached_path(url_or_filename, download_config=download_config)\r\n File \"/home/jxm3/.conda/envs/torch/lib/python3.9/site-packages/datasets/utils/file_utils.py\", line 298, in cached_path\r\n output_path = get_from_cache(\r\n File \"/home/jxm3/.conda/envs/torch/lib/python3.9/site-packages/datasets/utils/file_utils.py\", line 612, in get_from_cache\r\n raise FileNotFoundError(f\"Couldn't find file at {url}\")\r\nFileNotFoundError: Couldn't find file at https://drive.google.com/uc?export=download&id=1L7aoUXzHPzyzQ0ns4ApBbYepsjFOtXil\r\n>>>\r\n```\r\n",
"@alejandrocros and @lhoestq - you added the wiki_bio dataset in #1173. It doesn't work anymore. Can you take a look at this?",
"And if something is wrong with Google Drive, you could try to download (and collate and unzip) from here: https://github.com/DavidGrangier/wikipedia-biography-dataset",
"Hi ! Thanks for reporting. I've downloaded the data and concatenated them into a zip file available here: https://huggingface.co/datasets/wiki_bio/tree/main/data\r\n\r\nI guess we can update the dataset script to use this zip file now :)"
] |
1,103,451,118
| 3,579
|
Add Text2log Dataset
|
closed
| 2022-01-14T10:45:01
| 2022-01-20T17:09:44
| 2022-01-20T17:09:44
|
https://github.com/huggingface/datasets/pull/3579
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3579",
"html_url": "https://github.com/huggingface/datasets/pull/3579",
"diff_url": "https://github.com/huggingface/datasets/pull/3579.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3579.patch",
"merged_at": "2022-01-20T17:09:44"
}
|
apergo-ai
| true
|
[
"The CI fails are unrelated to your PR and fixed on master, I think we can merge now !"
] |
1,103,403,287
| 3,578
|
label information get lost after parquet serialization
|
closed
| 2022-01-14T10:10:38
| 2023-07-25T15:44:53
| 2023-07-25T15:44:53
|
https://github.com/huggingface/datasets/issues/3578
| null |
Tudyx
| false
|
[
"Hi ! We did a release of `datasets` today that may fix this issue. Can you try updating `datasets` and trying again ?\r\n\r\nEDIT: the issue is still there actually\r\n\r\nI think we can fix that by storing the Features in the parquet schema metadata, and then reload them when loading the parquet file",
"This info is stored in the Parquet schema metadata as of https://github.com/huggingface/datasets/pull/5516"
] |
| 1,102,598,241 | 3,577 | Add The Mexican Emotional Speech Database (MESD) | open | 2022-01-13T23:49:36 | 2022-01-27T14:14:38 | null | https://github.com/huggingface/datasets/issues/3577 | null | omarespejel | false | [] |
1,102,059,651
| 3,576
|
Add PASS dataset
|
closed
| 2022-01-13T17:16:07
| 2022-01-20T16:50:48
| 2022-01-20T16:50:47
|
https://github.com/huggingface/datasets/pull/3576
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3576",
"html_url": "https://github.com/huggingface/datasets/pull/3576",
"diff_url": "https://github.com/huggingface/datasets/pull/3576.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3576.patch",
"merged_at": "2022-01-20T16:50:47"
}
|
mariosasko
| true
|
[] |
1,101,947,955
| 3,575
|
Add Arrow type casting to struct for Image and Audio + Support nested casting
|
closed
| 2022-01-13T15:36:59
| 2022-11-29T11:14:16
| 2022-01-21T13:22:27
|
https://github.com/huggingface/datasets/pull/3575
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3575",
"html_url": "https://github.com/huggingface/datasets/pull/3575",
"diff_url": "https://github.com/huggingface/datasets/pull/3575.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3575.patch",
"merged_at": "2022-01-21T13:22:27"
}
|
lhoestq
| true
|
[
"Regarding the tests I'm just missing the FixedSizeListType type casting for ListArray objects, will to it tomorrow as well as adding new tests + docstrings\r\n\r\nand also adding soundfile in the CI",
"While writing some tests I noticed that the ExtensionArray can't be directly concatenated - maybe we can get rid of the extension types/arrays and only keep their storages in native arrow types.\r\n\r\nIn this case the `cast_storage` functions should be the responsibility of the Image and Audio classes directly. And therefore we would need to never cast to a pyarrow type again but to a HF feature - since they'd end up being the one able to tell what's castable or not. This is fine in my opinion but let me know what you think. I can take care of this on monday I think",
"Alright I got rid of all the extension type stuff, I'm writing the new tests now :)",
"Tests are done, I'll finish the comments and docstrings tomorrow and set the PR on ready for review once it's done !",
"> While writing some tests I noticed that the ExtensionArray can't be directly concatenated - maybe we can get rid of the extension types/arrays and only keep their storages in native arrow types.\r\n>\r\n>In this case the cast_storage functions should be the responsibility of the Image and Audio classes directly. And therefore we would need two never cast to a pyarrow type again but to a HF feature - since they'd end up being the one able to tell what's castable or not. This is fine in my opinion but let me know what you think. I can take care of this on monday I think\r\n\r\nDoes this change affect performance?",
"> Does this change affect performance?\r\n\r\nIn general it shouldn't have a significant impact on performance since the structure of the features is rarely complex (in general we have <20 features and <4 levels of nesting)\r\n\r\nRegarding Audio and Image specifically, casting from a StringArray is a little bit more costly since it creates the \"bytes\" BinaryArray with `None` values with the same length as the \"path\" array. From the tests I did locally this is very fast though and shouldn't affect the user experience at the current scale of the audio/image datasets we have. It also requires a little bit of RAM though\r\n",
"Alright this is ready for review now ! Let me know if you have comments and/or improvements :)",
"I am facing the issue ArrowNotImplementedError but no solution is working. Please help me",
"Can you open an new issue and share the error message as well as the script you used ? We'd be happy to help :)"
] |
1,101,781,401
| 3,574
|
Fix qa4mre tags
|
closed
| 2022-01-13T13:56:59
| 2022-01-13T14:03:02
| 2022-01-13T14:03:01
|
https://github.com/huggingface/datasets/pull/3574
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3574",
"html_url": "https://github.com/huggingface/datasets/pull/3574",
"diff_url": "https://github.com/huggingface/datasets/pull/3574.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3574.patch",
"merged_at": "2022-01-13T14:03:01"
}
|
lhoestq
| true
|
[] |
1,101,157,676
| 3,573
|
Add Mauve metric
|
closed
| 2022-01-13T03:52:48
| 2022-01-20T15:00:08
| 2022-01-20T15:00:08
|
https://github.com/huggingface/datasets/pull/3573
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3573",
"html_url": "https://github.com/huggingface/datasets/pull/3573",
"diff_url": "https://github.com/huggingface/datasets/pull/3573.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3573.patch",
"merged_at": "2022-01-20T15:00:07"
}
|
jthickstun
| true
|
[
"Hi ! The CI was failing because `mauve-text` wasn't installed. I added it to the CI setup :)\r\n\r\nI also did some minor changes to the script itself, especially to remove `**kwargs` and explicitly mentioned all the supported arguments (this way if someone does a typo with some parameters they get an error)"
] |
1,100,634,244
| 3,572
|
ConnectionError in IndicGLUE dataset
|
closed
| 2022-01-12T17:59:36
| 2022-09-15T21:57:34
| 2022-09-15T21:57:34
|
https://github.com/huggingface/datasets/issues/3572
| null |
sahoodib
| false
|
[
"@sahoodib, thanks for reporting.\r\n\r\nIndeed, none of the data links appearing in the IndicGLUE website are working, e.g.: https://storage.googleapis.com/ai4bharat-public-indic-nlp-corpora/evaluations/soham-articles.tar.gz\r\n```\r\n<Error>\r\n<Code>UserProjectAccountProblem</Code>\r\n<Message>User project billing account not in good standing.</Message>\r\n<Details>\r\nThe billing account for the owning project is disabled in state delinquent\r\n</Details>\r\n</Error>\r\n```\r\n\r\nWe have contacted the data owners to inform them about their issue and ask them if they plan to fix it.",
"Yesterday I resent a reminder email with more AI4Bharat-related people in the loop.\r\n\r\nI also opened an issue in their repos:\r\n- https://github.com/AI4Bharat/indicnlp_corpus/issues/14\r\n- https://github.com/AI4Bharat/ai4bharat.org/issues/71",
"We have received a reply from the authors reporting they have updated the URLs of their data files and opened a PR. See:\r\n- #4978 "
] |
1,100,519,604
| 3,571
|
Add missing tasks to MuchoCine dataset
|
closed
| 2022-01-12T16:07:32
| 2022-01-20T16:51:08
| 2022-01-20T16:51:07
|
https://github.com/huggingface/datasets/pull/3571
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3571",
"html_url": "https://github.com/huggingface/datasets/pull/3571",
"diff_url": "https://github.com/huggingface/datasets/pull/3571.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3571.patch",
"merged_at": "2022-01-20T16:51:07"
}
|
mariosasko
| true
|
[] |
1,100,480,791
| 3,570
|
Add the KMWP dataset (extension of #3564)
|
closed
| 2022-01-12T15:33:08
| 2022-10-01T06:43:16
| 2022-10-01T06:43:16
|
https://github.com/huggingface/datasets/pull/3570
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3570",
"html_url": "https://github.com/huggingface/datasets/pull/3570",
"diff_url": "https://github.com/huggingface/datasets/pull/3570.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3570.patch",
"merged_at": null
}
|
sooftware
| true
|
[
"Sorry, I'm late to check! I'll send it to you soon!",
"Thanks for your contribution, @sooftware. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there, under this organization namespace: https://huggingface.co/tunib\r\n\r\nPlease, feel free to tell us if you need some help.",
"Close this PR. Thanks!"
] |
1,100,478,994
| 3,569
|
Add the DKTC dataset (Extension of #3564)
|
closed
| 2022-01-12T15:31:29
| 2022-10-01T06:43:05
| 2022-10-01T06:43:04
|
https://github.com/huggingface/datasets/pull/3569
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3569",
"html_url": "https://github.com/huggingface/datasets/pull/3569",
"diff_url": "https://github.com/huggingface/datasets/pull/3569.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3569.patch",
"merged_at": null
}
|
sooftware
| true
|
[
"I reflect your comment! @lhoestq ",
"Wait, the format of the data just changed, so I'll take it into consideration and commit it.",
"I update the code according to the dataset structure change.",
"Thanks ! I think the dummy data are not valid yet - the dummy train.csv file only contains a partial example (the quotes `\"` start but never end).",
"> Thanks ! I think the dummy data are not valid yet - the dummy train.csv file only contains a partial example (the quotes `\"` start but never end).\r\n\r\nHi! @lhoestq There is a problem. \r\n<img src=\"https://user-images.githubusercontent.com/42150335/149804142-3800e635-f5a0-44d9-9694-0c2b0c05f16b.png\" width=500>\r\n \r\nAs shown in the picture above, the conversation is divided into \"\\n\" in the \"conversion\" column. \r\nThat's why there's an error in the file path that only saved only five lines like below. \r\n\r\n```\r\n'idx', 'class', 'conversation'\r\n'0', '협박 대화', '\"지금 너 스스로를 죽여달라고 애원하는 것인가?'\r\n아닙니다. 죄송합니다.'\r\n죽을 거면 혼자 죽지 우리까지 사건에 휘말리게 해? 진짜 죽여버리고 싶게.'\r\n정말 잘못했습니다.\r\n```\r\n \r\nIn fact, these five lines are all one line. \r\n \r\n\r\n",
"Hi ! I see, in this case ca you make sure that the dummy data has a full sample ?\r\n\r\nFeel free to open the dummy train.csv in the dummy_data.zip file and add the missing lines",
"Sorry, I'm late to check! I'll send it to you soon!",
"Thanks for your contribution, @sooftware. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there, under this organization namespace: https://huggingface.co/tunib\r\n\r\nPlease, feel free to tell us if you need some help.",
"Close this PR. Thanks!"
] |
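The parsing pitfall discussed above (conversations with embedded "\n" inside quoted CSV fields) is avoided with a quote-aware reader. A minimal sketch, assuming the "idx"/"class"/"conversation" columns shown in the comment:

```python
import csv

# csv handles newlines embedded in quoted fields, unlike naive line splitting.
with open("train.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        turns = row["conversation"].split("\n")
        print(row["idx"], row["class"], f"{len(turns)} turns")
```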
1,100,380,631
| 3,568
|
Downloading Hugging Face Medical Dialog Dataset NonMatchingSplitsSizesError
|
closed
| 2022-01-12T14:03:44
| 2022-02-14T09:32:34
| 2022-02-14T09:32:34
|
https://github.com/huggingface/datasets/issues/3568
| null |
fabianslife
| false
|
[
"Hi @fabianslife, thanks for reporting.\r\n\r\nI think you were using an old version of `datasets` because this bug was already fixed in version `1.13.0` (13 Oct 2021):\r\n- Fix: 55fd140a63b8f03a0e72985647e498f1fc799d3f\r\n- PR: #3046\r\n- Issue: #2969 \r\n\r\nPlease, feel free to update the library: `pip install -U datasets`."
] |
1,100,296,696
| 3,567
|
Fix push to hub to allow individual split push
|
closed
| 2022-01-12T12:42:58
| 2023-09-24T09:54:19
| 2022-07-27T12:11:11
|
https://github.com/huggingface/datasets/pull/3567
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3567",
"html_url": "https://github.com/huggingface/datasets/pull/3567",
"diff_url": "https://github.com/huggingface/datasets/pull/3567.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3567.patch",
"merged_at": null
}
|
thomasw21
| true
|
[
"This has been addressed in https://github.com/huggingface/datasets/pull/4415. Closing."
] |
1,100,155,902
| 3,566
|
Add initial electricity time series dataset
|
closed
| 2022-01-12T10:21:32
| 2022-02-15T13:31:48
| 2022-02-15T13:31:48
|
https://github.com/huggingface/datasets/pull/3566
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3566",
"html_url": "https://github.com/huggingface/datasets/pull/3566",
"diff_url": "https://github.com/huggingface/datasets/pull/3566.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3566.patch",
"merged_at": null
}
|
kashif
| true
|
[
"@kashif Some commits on the PR branch are not authored by you, so could you please open a new PR and not use rebase this time :)? You can copy and paste the dataset dir to the new branch. \r\n\r\n",
"making a new PR"
] |
1,099,296,693
| 3,565
|
Add parameter `preserve_index` to `from_pandas`
|
closed
| 2022-01-11T15:26:37
| 2022-01-12T16:11:27
| 2022-01-12T16:11:27
|
https://github.com/huggingface/datasets/pull/3565
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3565",
"html_url": "https://github.com/huggingface/datasets/pull/3565",
"diff_url": "https://github.com/huggingface/datasets/pull/3565.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3565.patch",
"merged_at": "2022-01-12T16:11:26"
}
|
Sorrow321
| true
|
[
"> \r\n\r\nI did `make style` and it affected over 500 files\r\n\r\n```\r\nAll done! ✨ 🍰 ✨\r\n575 files reformatted, 372 files left unchanged.\r\nisort tests src benchmarks datasets/**/*.py metri\r\n```\r\n\r\n(result)\r\n\r\n",
"Nvm I was using wrong black version"
] |
1,099,214,403
| 3,564
|
Add the KMWP & DKTC dataset.
|
closed
| 2022-01-11T14:14:08
| 2022-01-12T15:33:49
| 2022-01-12T15:33:28
|
https://github.com/huggingface/datasets/pull/3564
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3564",
"html_url": "https://github.com/huggingface/datasets/pull/3564",
"diff_url": "https://github.com/huggingface/datasets/pull/3564.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3564.patch",
"merged_at": null
}
|
sooftware
| true
|
[
"I reflect your review. cc. @lhoestq ",
"Ah sorry, I missed KMWP comment, wait.",
"I request 2 new pull requests. #3569 #3570"
] |
1,099,070,368
| 3,563
|
Dataset.from_pandas preserves useless index
|
closed
| 2022-01-11T12:07:07
| 2022-01-12T16:11:27
| 2022-01-12T16:11:27
|
https://github.com/huggingface/datasets/issues/3563
| null |
Sorrow321
| false
|
[
"Hi! That makes sense. Sure, feel free to open a PR! Just a small suggestion: let's make `preserve_index` a parameter of `Dataset.from_pandas` (which we then pass to `InMemoryTable.from_pandas`) with `None` as a default value to not have this as a breaking change. "
] |
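The parameter suggested above was added in #3565; a minimal usage sketch:

```python
import pandas as pd
from datasets import Dataset

df = pd.DataFrame({"text": ["a", "b", "c"]}).sample(frac=1)  # non-trivial index
ds = Dataset.from_pandas(df, preserve_index=False)  # drop the useless index
print(ds.column_names)  # ['text'], no '__index_level_0__' column
```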
1,098,341,351
| 3,562
|
Allow multiple task templates of the same type
|
closed
| 2022-01-10T20:32:07
| 2022-01-11T14:16:47
| 2022-01-11T14:16:47
|
https://github.com/huggingface/datasets/pull/3562
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3562",
"html_url": "https://github.com/huggingface/datasets/pull/3562",
"diff_url": "https://github.com/huggingface/datasets/pull/3562.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3562.patch",
"merged_at": "2022-01-11T14:16:46"
}
|
mariosasko
| true
|
[] |
1,098,328,870
| 3,561
|
Cannot load ‘bookcorpusopen’
|
closed
| 2022-01-10T20:17:18
| 2022-02-14T09:19:27
| 2022-02-14T09:18:47
|
https://github.com/huggingface/datasets/issues/3561
| null |
HUIYINXUE
| false
|
[
"The host of this copy of the dataset (https://the-eye.eu) is down and has been down for a good amount of time ([potentially months](https://www.reddit.com/r/Roms/comments/q82s15/theeye_downdied/))\r\n\r\nFinding this dataset is a little esoteric, as the original authors took down the official BookCorpus dataset some time ago.\r\n\r\nThere are community-created versions of BookCorpus, such as the files hosted in the link below.\r\nhttps://battle.shawwn.com/sdb/bookcorpus/\r\n\r\nAnd more discussion here:\r\nhttps://github.com/soskek/bookcorpus\r\n\r\nDo we want to remove this dataset entirely? There's a fair argument for this, given that the official BookCorpus dataset was taken down by the authors. If not, perhaps can open a PR with the link to the community-created tar above and updated dataset description.",
"Hi! The `bookcorpusopen` dataset is not working for the same reason as explained in this comment: https://github.com/huggingface/datasets/issues/3504#issuecomment-1004564980",
"Hi @HUIYINXUE, it should work now that the data owners created a mirror server with all data, and we updated the URL in our library."
] |
1,098,280,652
| 3,560
|
Run pyupgrade for Python 3.6+
|
closed
| 2022-01-10T19:20:53
| 2022-01-31T13:38:49
| 2022-01-31T09:37:34
|
https://github.com/huggingface/datasets/pull/3560
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3560",
"html_url": "https://github.com/huggingface/datasets/pull/3560",
"diff_url": "https://github.com/huggingface/datasets/pull/3560.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3560.patch",
"merged_at": "2022-01-31T09:37:34"
}
|
bryant1410
| true
|
[
"Hi ! Thanks for the change :)\r\nCould it be possible to only run it for the code in `src/` ? We try to not change the code in the `datasets/` directory too often since it refreshes the users cache when they upgrade `datasets`.",
"> Hi ! Thanks for the change :)\r\n> Could it be possible to only run it for the code in `src/` ? We try to not change the code in the `datasets/` directory too often since it refreshes the users cache when they upgrade `datasets`.\r\n\r\nI reverted the changes in `datasets/` instead of changing only `src/`. Does it sound good?",
"I just resolved some conflicts with the master branch. If the CI is green we can merge :)"
] |
1,098,178,222
| 3,559
|
Fix `DuplicatedKeysError` and improve card in `tweet_qa`
|
closed
| 2022-01-10T17:27:40
| 2022-01-12T15:13:58
| 2022-01-12T15:13:57
|
https://github.com/huggingface/datasets/pull/3559
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3559",
"html_url": "https://github.com/huggingface/datasets/pull/3559",
"diff_url": "https://github.com/huggingface/datasets/pull/3559.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3559.patch",
"merged_at": "2022-01-12T15:13:56"
}
|
mariosasko
| true
|
[] |
1,098,025,866
| 3,558
|
Integrate Milvus (pymilvus) library
|
open
| 2022-01-10T15:20:29
| 2022-03-05T12:28:36
| null |
https://github.com/huggingface/datasets/issues/3558
| null |
mariosasko
| false
|
[
"Hi @mariosasko,Just search randomly and I found this issue~ I'm the tech lead of Milvus and we are looking forward to integrate milvus together with huggingface datasets.\r\n\r\nAny suggestion on how we could start?\r\n",
"Feel free to assign to me and we probably need some guide on it",
"@mariosasko any updates my man?\r\n",
"Hi! For starters, I suggest you take a look at this file: https://github.com/huggingface/datasets/blob/master/src/datasets/search.py, which contains all the code for Faiss/ElasticSearch support. We could set up a Slack channel for additional guidance. Let me know what you prefer.",
"> Hi! For starters, I suggest you take a look at this file: https://github.com/huggingface/datasets/blob/master/src/datasets/search.py, which contains all the code for Faiss/ElasticSearch support. We could set up a Slack channel for additional guidance. Let me know what you prefer.\r\n\r\nSure, we take a look and do some research"
] |
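For reference, a minimal sketch of the existing Faiss integration in `search.py` that a Milvus backend would mirror; random vectors stand in for real embeddings, and `faiss-cpu` must be installed:

```python
import numpy as np
from datasets import load_dataset

ds = load_dataset("rotten_tomatoes", split="train[:100]")
# Attach a toy embedding column (a real setup would use a model here).
ds = ds.map(lambda ex: {"embeddings": np.random.rand(8).astype("float32")})
ds.add_faiss_index(column="embeddings")
query = np.random.rand(8).astype("float32")
scores, examples = ds.get_nearest_examples("embeddings", query, k=3)
print(scores, examples["text"])
```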
1,097,946,034
| 3,557
|
Fix bug in `ImageClassifcation` task template
|
closed
| 2022-01-10T14:09:59
| 2022-01-11T15:47:52
| 2022-01-11T15:47:52
|
https://github.com/huggingface/datasets/pull/3557
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3557",
"html_url": "https://github.com/huggingface/datasets/pull/3557",
"diff_url": "https://github.com/huggingface/datasets/pull/3557.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3557.patch",
"merged_at": "2022-01-11T15:47:52"
}
|
mariosasko
| true
|
[
"The CI failures are unrelated to the changes in this PR.",
"> The CI failures are unrelated to the changes in this PR.\r\n\r\nIt seems that some of the failures are due to the tests on the dataset cards (e.g. CIFAR, MNIST, FASHION_MNIST). Perhaps it's worth addressing those in this PR to avoid confusing downstream developers who branch off `master` and suddenly have a failing CI?",
"@lewtun We only run these tests against the modified datasets on the PR branch, so this will not lead to errors after merging."
] |
1,097,907,724
| 3,556
|
Preserve encoding/decoding with features in `Iterable.map` call
|
closed
| 2022-01-10T13:32:20
| 2022-01-18T19:54:08
| 2022-01-18T19:54:07
|
https://github.com/huggingface/datasets/pull/3556
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3556",
"html_url": "https://github.com/huggingface/datasets/pull/3556",
"diff_url": "https://github.com/huggingface/datasets/pull/3556.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3556.patch",
"merged_at": "2022-01-18T19:54:07"
}
|
mariosasko
| true
|
[] |
1,097,736,982
| 3,555
|
DuplicatedKeysError when loading tweet_qa dataset
|
closed
| 2022-01-10T10:53:11
| 2022-01-12T15:17:33
| 2022-01-12T15:13:56
|
https://github.com/huggingface/datasets/issues/3555
| null |
LeonieWeissweiler
| false
|
[
"Hi, we've just merged the PR with the fix. The fixed version of the dataset can be downloaded as follows:\r\n```python\r\nimport datasets\r\ndset = datasets.load_dataset(\"tweet_qa\", revision=\"master\")\r\n```"
] |
1,097,711,367
| 3,554
|
ImportError: cannot import name 'is_valid_waiter_error'
|
closed
| 2022-01-10T10:32:04
| 2022-02-14T09:35:57
| 2022-02-14T09:35:57
|
https://github.com/huggingface/datasets/issues/3554
| null |
danielbellhv
| false
|
[
"Hi! I can't reproduce this error in Colab, but I'm assuming you are using Amazon SageMaker Studio Notebooks (you mention the `conda_pytorch_p36` kernel), so maybe @philschmid knows more about what might be causing this issue? ",
"Hey @mariosasko. Yes, I am using **Amazon SageMaker Studio Jupyter Labs**. However, I no longer need this notebook; but it would be nice to have this problem solved for others. So don't stress too much if you two can't reproduce error.",
"Hey @danielbellhv, \r\n\r\nThis issue might be related to Studio probably not having an up to date `botocore` and `boto3` version. I ran into this as well a while back. My workaround was \r\n```python\r\n# using older dataset due to incompatibility of sagemaker notebook & aws-cli with > s3fs and fsspec to >= 2021.10\r\n!pip install \"datasets==1.13\" --upgrade\r\n```\r\n\r\nIn `datasets` we use the latest `s3fs` and `fsspec` but aws-cli and notebook is not supporting this. You could also update the `aws-cli` and associated packages to get the latest `datasets` version\r\n"
] |
1,097,252,275
| 3,553
|
set_format("np") no longer works for Image data
|
closed
| 2022-01-09T17:18:13
| 2022-10-14T12:03:55
| 2022-10-14T12:03:54
|
https://github.com/huggingface/datasets/issues/3553
| null |
cgarciae
| false
|
[
"A quick fix for now is doing this:\r\n\r\n```python\r\nX_train = np.stack(dataset[\"train\"][\"image\"])[..., None]",
"This error also propagates to jax and is even trickier to fix, since `.with_format(type='jax')` will use numpy conversion internally (and fail). For a three line failure:\r\n\r\n```python\r\ndataset = datasets.load_dataset(\"mnist\")\r\ndataset.set_format(\"jax\")\r\nX_train = dataset[\"train\"][\"image\"]\r\n```",
"Hi! We've recently introduced a new Image feature that yields PIL Images (and caches transforms on them) instead of arrays.\r\n\r\nHowever, this feature requires a custom transform to yield np arrays directly:\r\n```python\r\nddict = datasets.load_dataset(\"mnist\")\r\n\r\ndef pil_image_to_array(batch):\r\n return {\"image\": [np.array(img) for img in batch[\"image\"]]} # or jnp.array(img) for Jax\r\n\r\nddict.set_transform(pil_image_to_array, columns=\"image\", output_all_columns=True)\r\n```\r\n\r\n[Docs](https://huggingface.co/docs/datasets/master/process.html#format-transform) on `set_transform`.\r\n\r\nAlso, the approach proposed by @cgarciae is not the best because it loads the entire column in memory.\r\n\r\n@albertvillanova @lhoestq WDYT? The Audio and the Image feature currently don't support the TF/Jax/PT Formatters, but for the Numpy Formatter maybe it makes more sense to return np arrays (and not a dict in the case of the Audio feature or a PIL Image object in the case of the Image feature).",
"Yes I agree it should return arrays and not a PIL image (and possible an array instead of a dict for audio data).\r\nI'm currently finishing some code refactoring of the image and audio and opening a PR today. Maybe we can look into that after the refactoring",
"This has been fixed in https://github.com/huggingface/datasets/pull/5072, which is included in the latest release of `datasets`."
] |
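Following the fix in #5072 mentioned above, the original three-line failure now works directly. A sketch assuming datasets >= 2.6:

```python
from datasets import load_dataset

# The numpy formatter now decodes Image columns to arrays.
ds = load_dataset("mnist", split="train").with_format("np")
image = ds[0]["image"]
print(type(image), image.shape)  # numpy.ndarray, (28, 28)
```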
1,096,985,204
| 3,552
|
Add the KMWP & DKTC dataset.
|
closed
| 2022-01-08T17:12:14
| 2022-01-11T14:13:30
| 2022-01-11T14:13:30
|
https://github.com/huggingface/datasets/pull/3552
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3552",
"html_url": "https://github.com/huggingface/datasets/pull/3552",
"diff_url": "https://github.com/huggingface/datasets/pull/3552.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3552.patch",
"merged_at": null
}
|
sooftware
| true
|
[] |
1,096,561,111
| 3,551
|
Add more compression types for `to_json`
|
closed
| 2022-01-07T18:25:02
| 2022-07-10T14:36:55
| 2022-02-21T15:58:15
|
https://github.com/huggingface/datasets/pull/3551
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3551",
"html_url": "https://github.com/huggingface/datasets/pull/3551",
"diff_url": "https://github.com/huggingface/datasets/pull/3551.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3551.patch",
"merged_at": "2022-02-21T15:58:15"
}
|
bhavitvyamalik
| true
|
[
"@lhoestq, I looked into how to compress with `zipfile` for which few methods exist, let me know which one looks good:\r\n1. create the file in normal `wb` mode and then zip it separately\r\n2. use `ZipFile.write_str` to write file into the archive. For this we'll need to change how we're writing files from `_write` method \r\n\r\nHow `pandas` handles it is that they have created a wrapper for standard library class `ZipFile` and allow the returned file-like handle to accept byte strings via `write` method instead of `write_str` (purpose was to change the name of function by creating that wrapper)",
"1. sounds not ideal since it creates an intermediary file.\r\nI like pandas' approach. Is it possible to implement 2. using the pandas class ? Or maybe we can have something similar ?",
"Definitely, @lhoestq! I've adapted that from original code and turns out it is faster than `gz` compression. Apart from that I've also added `infer` option to automatically infer compression type from `path_or_buf` given",
"One small thing, currently I'm assuming that user will provide compression extension in `path_or_buf`. Is it this also possible?\r\n`dataset.to_json(\"from_dataset.json\", compression=\"zip\")`? \r\nShould I put an `assert` to ensure the file name provided always has a compression extension?",
"Thanks !\r\n\r\n> One small thing, currently I'm assuming that user will provide compression extension in path_or_buf. Is it this also possible?\r\n>dataset.to_json(\"from_dataset.json\", compression=\"zip\")?\r\n>Should I put an assert to ensure the file name provided always has a compression extension?\r\n\r\nI think it's fine as it is right now :) No need to check the extension of the filename passed to `path_or_buf`.\r\n",
"> turns out it is faster than gz compression\r\n\r\nI think the default compression level of `gzip` is 9 in python, which is very slow. Maybe we can switch to compression level 6 instead which is faster, like the `gzip` command on unix",
"I found that `fsspec` has something that may interest you: [fsspec.open(..., compression=...)](https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.open). I don't remember if we've already mentioned it or not\r\n\r\nIt also has `zip` if I understand correctly ! see https://github.com/fsspec/filesystem_spec/blob/master/fsspec/compression.py#L70\r\n\r\nSince `fsspec` is a dependency of `datasets` we can use all this :)\r\n\r\nLet me know if you prefer using `fsspec` instead (I haven't tested this yet to write compressed files). IMO it sounds pretty easy to use and it would make the code base simpler",
"Just tried `fsspec` but I'm not able to write compressed `zip` files :/\r\n`gzip`, `xz`, `bz2` are all working fine and it's really simple (no need for `FileWriteHandler` now!)"
] |
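A minimal sketch of the `fsspec` approach discussed above; the codec can be passed explicitly or picked from the file extension with `compression="infer"`:

```python
import json
import fsspec

rows = [{"text": "hello"}, {"text": "world"}]
# fsspec wraps the file handle with the requested compression codec.
with fsspec.open("data.jsonl.gz", "wt", compression="gzip") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")
```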
1,096,522,377
| 3,550
|
Bug in `openbookqa` dataset
|
closed
| 2022-01-07T17:32:57
| 2022-05-04T06:33:00
| 2022-05-04T06:32:19
|
https://github.com/huggingface/datasets/issues/3550
| null |
lucadiliello
| false
|
[
"Closed by:\r\n- #4259"
] |
1,096,426,996
| 3,549
|
Fix sem_eval_2018_task_1 download location
|
closed
| 2022-01-07T15:37:52
| 2022-01-27T15:52:03
| 2022-01-27T15:52:03
|
https://github.com/huggingface/datasets/pull/3549
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3549",
"html_url": "https://github.com/huggingface/datasets/pull/3549",
"diff_url": "https://github.com/huggingface/datasets/pull/3549.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3549.patch",
"merged_at": null
}
|
maxpel
| true
|
[
"Hi ! Thanks for pushing this :)\r\n\r\nIt seems that you created this PR from an old version of `datasets` that didn't have the sem_eval_2018_task_1.py file.\r\n\r\nCan you try merging `master` into your branch ? Or re-create your PR from a branch that comes from a more recent version of `datasets` ?\r\n\r\nAnd sorry for the late response !",
"Hi! No problem! I made the new branch like you said and opened https://github.com/huggingface/datasets/pull/3643 for it. I will close this one."
] |
1,096,409,512
| 3,548
|
Specify the feature types of a dataset on the Hub without needing a dataset script
|
closed
| 2022-01-07T15:17:06
| 2022-01-20T14:48:38
| 2022-01-20T14:48:38
|
https://github.com/huggingface/datasets/issues/3548
| null |
lhoestq
| false
|
[
"After looking into this, discovered that this is already supported if the `dataset_infos.json` file is configured correctly! Here is a working example: https://huggingface.co/datasets/abidlabs/test-audio-13\r\n\r\nThis should be probably be documented, though. "
] |
1,096,405,515
| 3,547
|
Datasets created with `push_to_hub` can't be accessed in offline mode
|
closed
| 2022-01-07T15:12:25
| 2024-02-15T17:41:24
| 2023-12-21T15:13:12
|
https://github.com/huggingface/datasets/issues/3547
| null |
TevenLeScao
| false
|
[
"Thanks for reporting. I think this can be fixed by improving the `CachedDatasetModuleFactory` and making it look into the `parquet` cache directory (datasets from push_to_hub are loaded with the parquet dataset builder). I'll look into it",
"Hi, I'm having the same issue. Is there any update on this?",
"We haven't had a chance to fix this yet. If someone would like to give it a try I'd be happy to give some guidance",
"@lhoestq Do you have an idea of what changes need to be made to `CachedDatasetModuleFactory`? I would be willing to take a crack at it. Currently unable to train with datasets I have `push_to_hub` on a cluster whose compute nodes are not connected to the internet.\r\n\r\nIt looks like it might be this line:\r\n\r\nhttps://github.com/huggingface/datasets/blob/0c1d099f87a883e52c42d3fd1f1052ad3967e647/src/datasets/load.py#L994\r\n\r\nWhich wouldn't pick up the stuff saved under `\"datasets/allenai___parquet/*\"`. Additionally, the datasets saved under `\"datasets/allenai___parquet/*\"` appear to have hashes in their name, e.g. `\"datasets/allenai___parquet/my_dataset-def9ee5552a1043e\"`. This would not be detected by `CachedDatasetModuleFactory`, which currently looks for subdirectories\r\n\r\nhttps://github.com/huggingface/datasets/blob/0c1d099f87a883e52c42d3fd1f1052ad3967e647/src/datasets/load.py#L995-L999",
"`importable_directory_path` is used to find a **dataset script** that was previously downloaded and cached from the Hub\r\n\r\nHowever in your case there's no dataset script on the Hub, only parquet files. So the logic must be extended for this case.\r\n\r\nIn particular I think you can add a new logic in the case where `hashes is None` (i.e. if there's no dataset script associated to the dataset in the cache).\r\n\r\nIn this case you can check directly in the in the datasets cache for a directory named `<namespace>__parquet` and a subdirectory named `<config_id>`. The config_id must match `{self.name.replace(\"/\", \"--\")}-*`. \r\n\r\nIn your case those two directories correspond to `allenai___parquet` and then `allenai--my_dataset-def9ee5552a1043e`\r\n\r\nThen you can find the most recent version of the dataset in subdirectories (e.g. sorting using the last modified time of the `dataset_info.json` file).\r\n\r\nFinally, we will need return the module that is used to load the dataset from the cache. It is the same module than the one that would have been normally used if you had an internet connection.\r\n\r\nAt that point you can ping me, because we will need to pass all this:\r\n- `module_path = _PACKAGED_DATASETS_MODULES[\"parquet\"][0]`\r\n- `hash` it corresponds the name of the directory that contains the .arrow file, inside `<namespace>__parquet/<config_id>`\r\n- ` builder_kwargs = {\"hash\": hash, \"repo_id\": self.name, \"config_id\": config_id}`\r\nand currently `config_id` is not a valid argument for a `DatasetBuilder`\r\n\r\nI think in the future we want to change this caching logic completely, since I don't find it super easy to play with.",
"Hi! Is there a workaround for the time being?\r\nLike passing `data_dir` or something like that?\r\n\r\nI would like to use [this diffuser example](https://github.com/huggingface/diffusers/tree/main/examples/unconditional_image_generation) on my cluster whose nodes are not connected to the internet. I have downloaded the dataset online form the login node.",
"Hi ! Yes you can save your dataset locally with `my_dataset.save_to_disk(\"path/to/local\")` and reload it later with `load_from_disk(\"path/to/local\")`\r\n\r\n(removing myself from assignees since I'm currently not working on this right now)",
"Still not fixed? ......",
"Any idea @lhoestq who to tag to fix this ? This is a very annoying bug, which is becoming more and more present since the push_to_hub API is getting used more ?",
"Perhaps @mariosasko ? Thanks a lot for the great work on the lib !",
"It should be easier to implement now that we improved the caching of datasets from `push_to_hub`: each dataset has its own directory in the cache.\r\n\r\nThe cache structure has been improved in https://github.com/huggingface/datasets/pull/5331. Now the cache structure is `\"{namespace__}<dataset_name>/<config_name>/<version>/<hash>/\"` which contains the arrow files `\"<dataset_name>-<split>.arrow\"` and `\"dataset_info.json\"`. \r\n\r\nThe idea is to extend `CachedDatasetModuleFactory` to also check if this directory exists in the cache (in addition to the already existing cache check) and return the requested dataset module. The module name can be found in the JSON file in the `builder_name` field.",
"Any progress?",
"I started a PR to draft the logic to reload datasets from the cache fi they were created with push_to_hub: https://github.com/huggingface/datasets/pull/6459\r\n\r\nFeel free to try it out",
"It seems that this does not support dataset with uppercase name ",
"Which version of `datasets` are you using ? This issue has been fixed with `datasets` 2.16",
"I can confirm that this problem is still happening with `datasets` 2.17.0, installed from pip",
"Can you share a code or a dataset that reproduces the issue ? It seems to work fine on my side.",
"Yeah, \r\n```python\r\ndataset = load_dataset(\"roneneldan/TinyStories\")\r\n```\r\nI tried it with:\r\n```python\r\ndataset = load_dataset(\"roneneldan/tinystories\")\r\n```\r\nand it worked.\r\n\r\n> It seems that this does not support dataset with uppercase name\r\n\r\n@fecet was right, but if you just put the name lowercase, it works. "
] |
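A hypothetical sketch of the cache lookup described in the thread; the directory layout is taken from the comments above and changed in later releases (see #5331), so treat the paths and repo id as assumptions:

```python
import glob
import os

cache_dir = os.path.expanduser("~/.cache/huggingface/datasets")
namespace, name = "allenai/my_dataset".split("/")  # placeholder repo id
pattern = os.path.join(cache_dir, f"{namespace}___parquet", f"{namespace}--{name}-*")
candidates = glob.glob(pattern)
# Keep the most recently modified config dir, as suggested in the thread.
latest = max(candidates, key=os.path.getmtime) if candidates else None
print(latest)
```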
1,096,367,684
| 3,546
|
Remove print statements in datasets
|
closed
| 2022-01-07T14:30:24
| 2022-01-07T18:09:16
| 2022-01-07T18:09:15
|
https://github.com/huggingface/datasets/pull/3546
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3546",
"html_url": "https://github.com/huggingface/datasets/pull/3546",
"diff_url": "https://github.com/huggingface/datasets/pull/3546.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3546.patch",
"merged_at": "2022-01-07T18:09:15"
}
|
mariosasko
| true
|
[
"The CI failures are unrelated to the changes."
] |
1,096,189,889
| 3,545
|
fix: 🐛 pass token when retrieving the split names
|
closed
| 2022-01-07T10:29:22
| 2022-01-10T10:51:47
| 2022-01-10T10:51:46
|
https://github.com/huggingface/datasets/pull/3545
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3545",
"html_url": "https://github.com/huggingface/datasets/pull/3545",
"diff_url": "https://github.com/huggingface/datasets/pull/3545.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3545.patch",
"merged_at": "2022-01-10T10:51:46"
}
|
severo
| true
|
[
"Currently, it does not work with https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0/blob/main/common_voice_7_0.py#L146 (which was the goal), because `dl_manager.download_config.use_auth_token` is ignored, and the authentication is required to be use `huggingface-cli login`.\r\nIn my use case (dataset viewer), I'd prefer to use a specific \"User Token Access\", with only the \"read\" role (https://huggingface.co/settings/token).\r\n\r\nSee https://github.com/huggingface/datasets-preview-backend/issues/74#issuecomment-1007316853 for the context",
"> Simply passing download_config is ok :)\r\n\r\nhmm, I prefer only passing use_auth_token. But the question is more: is it correct, in the (convoluted) case if `download_config.use_auth_token` exists and is different from `use_auth_token`? Which one should be used?",
"If both are passed, `use_auth_token` should have the priority (more specific parameters have the higher priority)"
] |
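A sketch of the precedence rule stated in the last comment; the token value is a placeholder:

```python
from datasets import DownloadConfig, load_dataset

dl_config = DownloadConfig(use_auth_token="hf_read_only_token")  # placeholder token
# The more specific parameter wins over the one inside the DownloadConfig.
ds = load_dataset(
    "mozilla-foundation/common_voice_7_0", "ab",
    download_config=dl_config, use_auth_token=True,
)
```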
1,095,784,681
| 3,544
|
Ability to split a dataset in multiple files.
|
open
| 2022-01-06T23:02:25
| 2022-01-06T23:02:25
| null |
https://github.com/huggingface/datasets/issues/3544
| null |
Dref360
| false
|
[] |
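The feature requested above can be approximated in current versions with `Dataset.shard` plus `to_parquet`; a minimal sketch:

```python
from datasets import load_dataset

ds = load_dataset("rotten_tomatoes", split="train")
num_shards = 4
for index in range(num_shards):
    shard = ds.shard(num_shards=num_shards, index=index)
    shard.to_parquet(f"train-{index:05d}-of-{num_shards:05d}.parquet")
```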
1,095,226,438
| 3,543
|
Allow loading community metrics from the hub, just like datasets
|
closed
| 2022-01-06T11:26:26
| 2022-05-31T20:59:14
| 2022-05-31T20:53:37
|
https://github.com/huggingface/datasets/issues/3543
| null |
eladsegal
| false
|
[
"Hi ! Thanks for your message :) This is a great idea indeed. We haven't started working on this yet though. For now I guess you can host your metric on the Hub (either with your model or your dataset) and use `hf_hub_download` to download it (docs [here](https://github.com/huggingface/huggingface_hub/blob/main/docs/hub/how-to-downstream.md#cached_download))",
"This is a great solution in the meantime, thanks!",
"Here's the code I used, in case it can be of help to someone else:\r\n```python\r\nimport os, shutil\r\nfrom huggingface_hub import hf_hub_download\r\ndef download_metric(repo_id, file_path):\r\n # repo_id: for models \"username/model_name\", for datasets \"datasets/username/model_name\"\r\n local_metric_path = hf_hub_download(repo_id=repo_id, filename=file_path)\r\n updated_local_metric_path = (os.path.dirname(local_metric_path) + os.path.basename(local_metric_path).replace(\".\", \"_\") + \".py\")\r\n shutil.copy(local_metric_path, updated_local_metric_path)\r\n return updated_local_metric_path\r\n\r\nmetric = load_metric(download_metric(REPO_ID, FILE_PATH))\r\n```",
"Solved with https://github.com/huggingface/evaluate 🤗 ",
"Yay!! cc @lvwerra @sashavor @douwekiela \r\n\r\nPlease share your feedback @eladsegal =)"
] |
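Since the thread was resolved by the `evaluate` library, community metrics now load from the Hub by repo id; a sketch in which "username/my_metric" is a placeholder:

```python
import evaluate

metric = evaluate.load("username/my_metric")
print(metric.compute(predictions=[0, 1], references=[0, 1]))
```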
1,095,088,485
| 3,542
|
Update the CC-100 dataset card
|
closed
| 2022-01-06T08:35:18
| 2022-01-06T18:37:44
| 2022-01-06T18:37:44
|
https://github.com/huggingface/datasets/pull/3542
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3542",
"html_url": "https://github.com/huggingface/datasets/pull/3542",
"diff_url": "https://github.com/huggingface/datasets/pull/3542.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3542.patch",
"merged_at": "2022-01-06T18:37:44"
}
|
aajanki
| true
|
[] |
1,095,033,828
| 3,541
|
Support 7-zip compressed data files
|
open
| 2022-01-06T07:11:03
| 2022-07-19T10:18:30
| null |
https://github.com/huggingface/datasets/issues/3541
| null |
albertvillanova
| false
|
[
"This should also resolve: https://github.com/huggingface/datasets/issues/3185."
] |
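A sketch of what native 7-zip support would wrap, using `py7zr`, the library usually used for this format in Python:

```python
import py7zr

# Extract a .7z archive into a local directory.
with py7zr.SevenZipFile("data.7z", mode="r") as archive:
    archive.extractall(path="extracted")
```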
1,094,900,336
| 3,540
|
How to convert torch.utils.data.Dataset to datasets.arrow_dataset.Dataset?
|
open
| 2022-01-06T02:13:42
| 2022-01-06T02:17:39
| null |
https://github.com/huggingface/datasets/issues/3540
| null |
CindyTing
| false
|
[] |
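The issue above received no replies; one possible approach with current library versions is `Dataset.from_generator`, shown here as a sketch rather than an official answer from the thread:

```python
import torch
from datasets import Dataset

class ToyTorchDataset(torch.utils.data.Dataset):
    def __len__(self):
        return 3
    def __getitem__(self, i):
        return {"x": i, "y": i * 2}  # items must be dicts of plain values

torch_ds = ToyTorchDataset()
hf_ds = Dataset.from_generator(lambda: (torch_ds[i] for i in range(len(torch_ds))))
print(hf_ds[0])  # {'x': 0, 'y': 0}
```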
1,094,813,242
| 3,539
|
Research wording for nc licenses
|
closed
| 2022-01-05T23:01:38
| 2022-01-06T18:58:20
| 2022-01-06T18:58:19
|
https://github.com/huggingface/datasets/pull/3539
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3539",
"html_url": "https://github.com/huggingface/datasets/pull/3539",
"diff_url": "https://github.com/huggingface/datasets/pull/3539.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3539.patch",
"merged_at": "2022-01-06T18:58:19"
}
|
meg-huggingface
| true
|
[
"The CI failure is about some missing tags or sections in the dataset cards, and is unrelated to the part about non commercial use of this PR. Merging"
] |
1,094,756,755
| 3,538
|
Readme usage update
|
closed
| 2022-01-05T21:26:28
| 2022-01-05T23:34:25
| 2022-01-05T23:24:15
|
https://github.com/huggingface/datasets/pull/3538
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3538",
"html_url": "https://github.com/huggingface/datasets/pull/3538",
"diff_url": "https://github.com/huggingface/datasets/pull/3538.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3538.patch",
"merged_at": "2022-01-05T23:24:15"
}
|
meg-huggingface
| true
|
[] |
1,094,738,734
| 3,537
|
added PII statements and license links to data cards
|
closed
| 2022-01-05T20:59:21
| 2022-01-05T22:02:37
| 2022-01-05T22:02:37
|
https://github.com/huggingface/datasets/pull/3537
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3537",
"html_url": "https://github.com/huggingface/datasets/pull/3537",
"diff_url": "https://github.com/huggingface/datasets/pull/3537.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3537.patch",
"merged_at": "2022-01-05T22:02:37"
}
|
mcmillanmajora
| true
|
[] |
1,094,645,771
| 3,536
|
update `pretty_name` for all datasets
|
closed
| 2022-01-05T18:45:05
| 2022-07-10T14:36:54
| 2022-01-12T22:59:45
|
https://github.com/huggingface/datasets/pull/3536
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3536",
"html_url": "https://github.com/huggingface/datasets/pull/3536",
"diff_url": "https://github.com/huggingface/datasets/pull/3536.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3536.patch",
"merged_at": "2022-01-12T22:59:45"
}
|
bhavitvyamalik
| true
|
[
"Pushed the lastest changes!"
] |
1,094,633,214
| 3,535
|
Add SVHN dataset
|
closed
| 2022-01-05T18:29:09
| 2022-01-12T14:14:35
| 2022-01-12T14:14:35
|
https://github.com/huggingface/datasets/pull/3535
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3535",
"html_url": "https://github.com/huggingface/datasets/pull/3535",
"diff_url": "https://github.com/huggingface/datasets/pull/3535.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3535.patch",
"merged_at": "2022-01-12T14:14:35"
}
|
mariosasko
| true
|
[] |
1,094,352,449
| 3,534
|
Update wiki_dpr README.md
|
closed
| 2022-01-05T13:29:44
| 2022-02-17T13:45:56
| 2022-01-05T14:16:51
|
https://github.com/huggingface/datasets/pull/3534
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3534",
"html_url": "https://github.com/huggingface/datasets/pull/3534",
"diff_url": "https://github.com/huggingface/datasets/pull/3534.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3534.patch",
"merged_at": "2022-01-05T14:16:51"
}
|
lhoestq
| true
|
[] |
1,094,156,147
| 3,533
|
Task search function on hub not working correctly
|
open
| 2022-01-05T09:36:30
| 2022-05-12T14:45:57
| null |
https://github.com/huggingface/datasets/issues/3533
| null |
patrickvonplaten
| false
|
[
"known issue due to https://github.com/huggingface/datasets/pull/2362 (and [internal](https://github.com/huggingface/moon-landing/issues/946)) , will be solved soon",
"hmm actually i have no recollection of why I said that",
"Because it has dots in some YAML keys, it can't be parsed and indexed by the back-end"
] |
1,094,035,066
| 3,532
|
Give clearer instructions to add the YAML tags
|
closed
| 2022-01-05T06:47:52
| 2022-01-17T15:54:37
| 2022-01-17T15:54:36
|
https://github.com/huggingface/datasets/pull/3532
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3532",
"html_url": "https://github.com/huggingface/datasets/pull/3532",
"diff_url": "https://github.com/huggingface/datasets/pull/3532.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3532.patch",
"merged_at": "2022-01-17T15:54:36"
}
|
albertvillanova
| true
|
[
"this is great, maybe just put all of it in one line?\r\n\r\n> TODO: Add YAML tags here. Copy-paste the tags obtained with the online tagging app: https://huggingface.co/spaces/huggingface/datasets-tagging"
] |
1,094,033,280
| 3,531
|
Give clearer instructions to add the YAML tags
|
closed
| 2022-01-05T06:44:20
| 2022-01-17T15:54:36
| 2022-01-17T15:54:36
|
https://github.com/huggingface/datasets/issues/3531
| null |
albertvillanova
| false
|
[] |
1,093,894,732
| 3,530
|
Update README.md
|
closed
| 2022-01-05T01:32:07
| 2022-01-05T12:50:51
| 2022-01-05T12:50:50
|
https://github.com/huggingface/datasets/pull/3530
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3530",
"html_url": "https://github.com/huggingface/datasets/pull/3530",
"diff_url": "https://github.com/huggingface/datasets/pull/3530.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3530.patch",
"merged_at": "2022-01-05T12:50:50"
}
|
meg-huggingface
| true
|
[] |
1,093,846,356
| 3,529
|
Update README.md
|
closed
| 2022-01-04T23:52:47
| 2022-01-05T12:50:15
| 2022-01-05T12:50:14
|
https://github.com/huggingface/datasets/pull/3529
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3529",
"html_url": "https://github.com/huggingface/datasets/pull/3529",
"diff_url": "https://github.com/huggingface/datasets/pull/3529.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3529.patch",
"merged_at": "2022-01-05T12:50:14"
}
|
meg-huggingface
| true
|
[] |
1,093,844,616
| 3,528
|
Update README.md
|
closed
| 2022-01-04T23:48:11
| 2022-01-05T12:49:41
| 2022-01-05T12:49:40
|
https://github.com/huggingface/datasets/pull/3528
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3528",
"html_url": "https://github.com/huggingface/datasets/pull/3528",
"diff_url": "https://github.com/huggingface/datasets/pull/3528.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3528.patch",
"merged_at": "2022-01-05T12:49:40"
}
|
meg-huggingface
| true
|
[] |
1,093,840,707
| 3,527
|
Update README.md
|
closed
| 2022-01-04T23:39:41
| 2022-01-05T00:23:50
| 2022-01-05T00:23:50
|
https://github.com/huggingface/datasets/pull/3527
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3527",
"html_url": "https://github.com/huggingface/datasets/pull/3527",
"diff_url": "https://github.com/huggingface/datasets/pull/3527.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3527.patch",
"merged_at": "2022-01-05T00:23:50"
}
|
meg-huggingface
| true
|
[] |