Column schema (field types and observed value ranges):

- `id`: int64 (953M to 3.35B)
- `number`: int64 (2.72k to 7.75k)
- `title`: string (length 1 to 290)
- `state`: string (2 classes)
- `created_at`: timestamp[s] (2021-07-26 12:21:17 to 2025-08-23 00:18:43)
- `updated_at`: timestamp[s] (2021-07-26 13:27:59 to 2025-08-23 12:34:39)
- `closed_at`: timestamp[s] (2021-07-26 13:27:59 to 2025-08-20 16:35:55; nullable)
- `html_url`: string (length 49 to 51)
- `pull_request`: dict (null for plain issues)
- `user_login`: string (length 3 to 26)
- `is_pull_request`: bool (2 classes)
- `comments`: list (length 0 to 30)

| id | number | title | state | created_at | updated_at | closed_at | html_url | pull_request | user_login | is_pull_request | comments |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1,046,445,507 | 3,223 | Update BibTeX entry | closed | 2021-11-06T06:41:52 | 2021-11-06T07:06:38 | 2021-11-06T07:06:38 | https://github.com/huggingface/datasets/pull/3223 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/3223", "html_url": "https://github.com/huggingface/datasets/pull/3223", "diff_url": "https://github.com/huggingface/datasets/pull/3223.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3223.patch", "merged_at": "2021-11-06T07:06:38"} | albertvillanova | true | [] |
| 1,046,299,725 | 3,222 | Add docs for audio processing | closed | 2021-11-05T23:07:59 | 2021-11-24T16:32:08 | 2021-11-24T15:35:52 | https://github.com/huggingface/datasets/pull/3222 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/3222", "html_url": "https://github.com/huggingface/datasets/pull/3222", "diff_url": "https://github.com/huggingface/datasets/pull/3222.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3222.patch", "merged_at": "2021-11-24T15:35:52"} | stevhliu | true | ["Nice ! love it this way. I guess you can set this PR to \"ready for review\" ?", "I guess we can merge this one now :)"] |
| 1,045,890,512 | 3,221 | Resolve data_files by split name | closed | 2021-11-05T14:07:35 | 2021-11-08T13:52:20 | 2021-11-05T17:49:58 | https://github.com/huggingface/datasets/pull/3221 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/3221", "html_url": "https://github.com/huggingface/datasets/pull/3221", "diff_url": "https://github.com/huggingface/datasets/pull/3221.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3221.patch", "merged_at": "2021-11-05T17:49:57"} | lhoestq | true | ["Really cool!\r\nWhen splitting by folder, what do we use for validation set (\"valid\", \"validation\" or both)?", "> When splitting by folder, what do we use for validation set (\"valid\", \"validation\" or both)?\r\n\r\nBoth are fine :) As soon as it has \"valid\" in it", "Merging for now, if you have comments about the documentation we can address them in subsequent PRs :)", "Thanks for the comments @stevhliu :) I just opened https://github.com/huggingface/datasets/pull/3233 to take them into account"] |
| 1,045,549,029 | 3,220 | Add documentation about dataset viewer feature | open | 2021-11-05T08:11:19 | 2023-09-25T11:48:38 | null | https://github.com/huggingface/datasets/issues/3220 | null | albertvillanova | false | ["In particular, include this somewhere in the docs: https://huggingface.co/docs/hub/datasets-viewer#access-the-parquet-files\r\n\r\nSee https://github.com/huggingface/hub-docs/issues/563"] |
| 1,045,095,000 | 3,219 | Eventual Invalid Token Error at setup of private datasets | closed | 2021-11-04T18:50:45 | 2021-11-08T13:23:06 | 2021-11-08T08:59:43 | https://github.com/huggingface/datasets/issues/3219 | null | albertvillanova | false | [] |
| 1,045,032,313 | 3,218 | Fix code quality in riddle_sense dataset | closed | 2021-11-04T17:43:20 | 2021-11-04T17:50:03 | 2021-11-04T17:50:02 | https://github.com/huggingface/datasets/pull/3218 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/3218", "html_url": "https://github.com/huggingface/datasets/pull/3218", "diff_url": "https://github.com/huggingface/datasets/pull/3218.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3218.patch", "merged_at": "2021-11-04T17:50:02"} | albertvillanova | true | [] |
| 1,045,029,710 | 3,217 | Fix code quality bug in riddle_sense dataset | closed | 2021-11-04T17:40:32 | 2021-11-04T17:50:02 | 2021-11-04T17:50:02 | https://github.com/huggingface/datasets/issues/3217 | null | albertvillanova | false | ["To give more context: https://github.com/psf/black/issues/318. `black` doesn't treat this as a bug, but `flake8` does. \r\n"] |
| 1,045,027,733 | 3,216 | Pin version exclusion for tensorflow incompatible with keras | closed | 2021-11-04T17:38:06 | 2021-11-05T10:57:38 | 2021-11-05T10:57:37 | https://github.com/huggingface/datasets/pull/3216 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/3216", "html_url": "https://github.com/huggingface/datasets/pull/3216", "diff_url": "https://github.com/huggingface/datasets/pull/3216.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3216.patch", "merged_at": "2021-11-05T10:57:37"} | albertvillanova | true | [] |
| 1,045,011,207 | 3,215 | Small updates to to_tf_dataset documentation | closed | 2021-11-04T17:22:01 | 2021-11-04T18:55:38 | 2021-11-04T18:55:37 | https://github.com/huggingface/datasets/pull/3215 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/3215", "html_url": "https://github.com/huggingface/datasets/pull/3215", "diff_url": "https://github.com/huggingface/datasets/pull/3215.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3215.patch", "merged_at": "2021-11-04T18:55:37"} | Rocketknight1 | true | ["@stevhliu Accepted both suggestions, thanks for the review!"] |
| 1,044,924,050 | 3,214 | Add ACAV100M Dataset | open | 2021-11-04T15:59:58 | 2021-12-08T12:00:30 | null | https://github.com/huggingface/datasets/issues/3214 | null | nateraw | false | [] |
| 1,044,745,313 | 3,213 | Fix tuple_ie download url | closed | 2021-11-04T13:09:07 | 2021-11-05T14:16:06 | 2021-11-05T14:16:05 | https://github.com/huggingface/datasets/pull/3213 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/3213", "html_url": "https://github.com/huggingface/datasets/pull/3213", "diff_url": "https://github.com/huggingface/datasets/pull/3213.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3213.patch", "merged_at": "2021-11-05T14:16:05"} | mariosasko | true | [] |
| 1,044,640,967 | 3,212 | Sort files before loading | closed | 2021-11-04T11:08:31 | 2021-11-05T17:49:58 | 2021-11-05T17:49:58 | https://github.com/huggingface/datasets/issues/3212 | null | lvwerra | false | ["This will be fixed by https://github.com/huggingface/datasets/pull/3221"] |
| 1,044,617,913 | 3,211 | Fix disable_nullable default value to False | closed | 2021-11-04T10:52:06 | 2021-11-04T11:08:21 | 2021-11-04T11:08:20 | https://github.com/huggingface/datasets/pull/3211 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/3211", "html_url": "https://github.com/huggingface/datasets/pull/3211", "diff_url": "https://github.com/huggingface/datasets/pull/3211.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3211.patch", "merged_at": "2021-11-04T11:08:20"} | lhoestq | true | [] |
| 1,044,611,471 | 3,210 | ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.15.1/datasets/wmt16/wmt16.py | closed | 2021-11-04T10:47:26 | 2022-03-30T08:26:35 | 2022-03-30T08:26:35 | https://github.com/huggingface/datasets/issues/3210 | null | xiuzhilu | false | ["Hi ! Do you have some kind of proxy in your browser that gives you access to internet ?\r\n\r\nMaybe you're having this error because you don't have access to this URL from python ?", "Hi,do you fixed this error?\r\nI still have this issue when use \"use_auth_token=True\"", "You don't need authentication to access those github hosted files\r\nPlease check that you can access this URL from your browser and also from your terminal"] |
| 1,044,505,771 | 3,209 | Unpin keras once TF fixes its release | closed | 2021-11-04T09:15:32 | 2021-11-05T10:57:37 | 2021-11-05T10:57:37 | https://github.com/huggingface/datasets/issues/3209 | null | albertvillanova | false | [] |
| 1,044,504,093 | 3,208 | Pin keras version until TF fixes its release | closed | 2021-11-04T09:13:32 | 2021-11-04T09:30:55 | 2021-11-04T09:30:54 | https://github.com/huggingface/datasets/pull/3208 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/3208", "html_url": "https://github.com/huggingface/datasets/pull/3208", "diff_url": "https://github.com/huggingface/datasets/pull/3208.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3208.patch", "merged_at": "2021-11-04T09:30:54"} | albertvillanova | true | [] |
| 1,044,496,389 | 3,207 | CI error: Another metric with the same name already exists in Keras 2.7.0 | closed | 2021-11-04T09:04:11 | 2021-11-04T09:30:54 | 2021-11-04T09:30:54 | https://github.com/huggingface/datasets/issues/3207 | null | albertvillanova | false | [] |
| 1,044,216,270 | 3,206 | [WIP] Allow user-defined hash functions via a registry | closed | 2021-11-03T23:25:42 | 2021-11-05T12:38:11 | 2021-11-05T12:38:04 | https://github.com/huggingface/datasets/pull/3206 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/3206", "html_url": "https://github.com/huggingface/datasets/pull/3206", "diff_url": "https://github.com/huggingface/datasets/pull/3206.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3206.patch", "merged_at": null} | BramVanroy | true | ["Hi @BramVanroy, thanks for your PR.\r\n\r\nThere was a bug in TensorFlow/Keras. We have made a temporary fix in master branch. Please, merge master into your PR branch, so that the CI tests pass.\r\n\r\n```\r\ngit checkout registry\r\ngit fetch upstream master\r\ngit merge upstream/master\r\n```", "@albertvillanova Done. Although new tests will need to be added. I am looking for some feedback on my initial proposal in this PR. Reviews and ideas welcome!", "Hi ! Thanks for diving into this :)\r\n\r\nWith this approach you get the right hash when doing `Hasher.hash(nlp)` but if you try to hash an object that has `nlp` as one of its attributes for example you will get different hashes every time.\r\n\r\nThis is because `Hasher.hash` is not recursive itself. Indeed what happens when you try to hash an object is that:\r\n1. it is dumped with our custom `dill` pickler (which is recursive)\r\n2. the bytes of the dump are hashed\r\n\r\nTo fix this we must integrate the custom hashing as a custom pickler dumping instead.\r\n\r\nNote that we're only using the `pickler.dumps` method and not `pickler.loads` since we only use it to get hashes, so it doesn't matter if `loads` doesn't reconstruct the object exactly. What's important it only to capture all the necessary information that defines how the object transforms the data (here `nlp.to_bytes()` determines how the spacy pipeline transforms the text).\r\n\r\nOur pickler already has a registry and you can register new dump functions with:\r\n```python\r\nimport dill\r\nimport spacy\r\nfrom datasets.utils.py_utils import pklregister\r\n\r\n@pklregister(spacy.Language)\r\ndef _save_spacy_language(pickler, nlp):\r\n pickler.save_reduce(...) # I think we can use nlp.to_bytes() here\r\n dill._dill.log.info(...)\r\n```\r\n\r\nYou can find some examples of custom dump functions in `py_utils.py`", "Ah, darn it. Completely missed that register. Time wasted, unfortunately. \r\n\r\nTo better understand what you mean, I figured I'd try the basis of your snippet and I've noticed quite an annoying side-effect of how the pickle dispatch table seems to work. It explicitly uses an object's [`type()`](https://github.com/python/cpython/blob/87032cfa3dc975d7442fd57dea2c6a56d31c911a/Lib/pickle.py#L557-L558), which makes sense for pickling some (primitive) types it is not ideal for more complex ones, I think. `Hasher.hash` has the same issue as far as I can tell.\r\n\r\nhttps://github.com/huggingface/datasets/blob/d21ce54f2c2782f854f975eb1dc2be6f923b4314/src/datasets/fingerprint.py#L187-L191\r\n\r\nThis is very restrictive, and won't work for subclasses. In the case of spaCy, for instance, we register `Language`, but `nlp` is an instance of `English`, which is a _subclass_ of `Language`. These are different types, and so they will not match in the dispatch table. Maybe this is more general approach to cover such cases? Something like this is a start but too broad, but ideally a hierarchy is constructed and traversed of all classes in the table and the lowest class is selected to ensure that the most specific class function is dispatched.\r\n\r\n```python\r\n def hash(cls, value: Any) -> str:\r\n # Try to match the exact type\r\n if type(value) in cls.dispatch:\r\n return cls.dispatch[type(value)](cls, value)\r\n\r\n # Try to match instance (superclass)\r\n for type_cls, func in cls.dispatch.items():\r\n if isinstance(value, type_cls):\r\n return cls.dispatch[type_cls](cls, value)\r\n\r\n return cls.hash_default(value)\r\n```\r\n\r\nThis does not solve the problem for pickling, though. That is quite unfortunate IMO because that implies that users always have to specify the most specific class, which is not always obvious. (For instance, `spacy.load`'s signature returns `Language`, but as said before a subclass might be returned.)\r\n\r\nSecond, I am trying to understand `save_reduce` but I can find very little documentation about it, only the source code which is quite cryptic. Can you explain it a bit? The required arguments are not very clear to me and there is no docstring.\r\n\r\n```python\r\n def save_reduce(self, func, args, state=None, listitems=None, dictitems=None, obj=None):\r\n```", "Here is an example illustrating the problem with sub-classes.\r\n\r\n```python\r\nimport spacy\r\n\r\nfrom spacy import Language\r\nfrom spacy.lang.en import English\r\n\r\nfrom datasets.utils.py_utils import Pickler, pklregister\r\n\r\n# Only useful in the registry (matching with `nlp`)\r\n# if you swap it out for very specific `English`\r\n@pklregister(English)\r\ndef hash_spacy_language(pickler, nlp):\r\n pass\r\n\r\n\r\ndef main():\r\n print(Pickler.dispatch)\r\n nlp = spacy.load(\"en_core_web_sm\")\r\n print(f\"NLP type {type(nlp)} in dispatch table? \", type(nlp) in Pickler.dispatch)\r\n\r\n\r\nif __name__ == '__main__':\r\n main()\r\n```", "Indeed that's not ideal.\r\nMaybe we could integrate all the subclasses directly in `datasets`. That's simple to do but the catch is that if users have new subclasses of `Language` it won't work.\r\n\r\nOtherwise we can see how to make the API simpler for users by allowing subclasses\r\n```python\r\n# if you swap it out for very specific `English`\r\n@pklregister(Language, allow_subclasses=True)\r\ndef hash_spacy_language(pickler, nlp):\r\n pass\r\n```\r\n\r\nHere is an idea how to make this work, let me know what you think:\r\n\r\nWhen `Pickler.dumps` is called, it uses `Pickler.save_global` which is a method that is going to be called recursively on all the objects. We can customize this part, and make it work as we want when it encounters a subclass of `Language`.\r\n\r\nFor example when it encounters a subclass of `Language`, we can dynamically register the hashing function for the subclass (`English` for example) in `Pickler.save_global`, right before calling the actual `dill.Pickler.save_global(self, obj, name=name)`:\r\n```python\r\npklregister(type(obj))(hash_function_registered_for_parent_class)\r\ndill.Pickler.save_global(self, obj, name=name)\r\n```\r\n\r\nIn practice that means we can have an additional dispatch dictionary (similar to `Pickler.dispatch`) to store the hashing functions when `allow_subclasses=True`, and use this dictionary in `Pickler.save_global` to check if we need to use a hashing function registered with `allow_subclasses=True` and get `hash_function_registered_for_parent_class`.", "If I understood you correctly, I do not think that that is enough because you are only doing this for a type and its direct parent class. You could do this for all superclasses (so traverse all ancestors and find the registered function for the first that is encountered). I can work on that, if you agree. The one thing that I am not sure about is how you want to create the secondary dispatch table. An empty dict as class variable in Pickler? (It doesn't have to be a true dispatcher, I think.)\r\n\r\nI do not think that dynamic registration is the ideal situation (it feels a bit hacky). An alternative would be to subclass Pickle and Dill to make sure that instead of just type() checking in the dispatch table also superclasses are considered. But that is probably overkill.", "> You could do this for all superclasses (so traverse all ancestors and find the registered function for the first that is encountered)\r\n\r\nThat makes sense indeed !\r\n\r\n> The one thing that I am not sure about is how you want to create the secondary dispatch table. An empty dict as class variable in Pickler? (It doesn't have to be a true dispatcher, I think.)\r\n\r\nSure, let's try to not use too complicated stuff\r\n\r\n> I do not think that dynamic registration is the ideal situation (it feels a bit hacky). An alternative would be to subclass Pickle and Dill to make sure that instead of just type() checking in the dispatch table also superclasses are considered. But that is probably overkill.\r\n\r\nIndeed that would feel less hacky, but maybe it's too complex just for this. I feel like this part of the library is already hard to understand when you're not familiar with pickle. IMO having only a few changes that are simpler to understand is better than having a rewrite of `dill`'s core code.\r\n\r\nThanks a lot for your insights, it looks like we're going to have something that works well and that unlocks some nice flexibility for users :) Feel free to ping me anytime if I can help on this", "Sure, thanks for brainstorming! I'll try to work on it this weekend. Will also revert the current changes in this PR and rename it. ", "It seems like this is going in the right direction :). \r\n\r\n@BramVanroy Just one small suggestion for future contributions: instead of using `WIP` in the PR title, you can create a draft PR if you're still working on it.", "Maybe I should just create a new (draft) PR then, seeing that I'll have to rename and revert the changes anyway? I'll link to this PR so that the discussion is at least referenced.", "I can convert this PR to a draft PR. Let me know what would you prefer.", "I think reverting my previous commits would make for a dirty (or confusing) commit history, so I'll just create a new one. Thanks."] |
| 1,044,099,561 | 3,205 | Add Multidoc2dial Dataset | closed | 2021-11-03T20:48:31 | 2021-11-24T17:32:49 | 2021-11-24T16:55:08 | https://github.com/huggingface/datasets/pull/3205 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/3205", "html_url": "https://github.com/huggingface/datasets/pull/3205", "diff_url": "https://github.com/huggingface/datasets/pull/3205.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3205.patch", "merged_at": "2021-11-24T16:55:08"} | sivasankalpp | true | ["@songfeng cc", "Hi @sivasankalpp, thanks for your PR.\r\n\r\nThere was a bug in TensorFlow/Keras. We have made a temporary fix in our master branch. Please, merge master into your PR branch, so that the CI tests pass.\r\n\r\n```\r\ngit checkout multidoc2dial\r\ngit fetch upstream master\r\ngit merge upstream/master\r\n```", "Hi @albertvillanova, I have merged master into my PR branch. All tests are passing. \r\nPlease take a look when you get a chance, thanks! \r\n", "Thanks for your feedback @lhoestq. We addressed your comments in the latest commit. Let us know if everything looks okay :) "] |
| 1,043,707,307 | 3,204 | FileNotFoundError for TupleIE dataste | closed | 2021-11-03T14:56:55 | 2021-11-05T15:51:15 | 2021-11-05T14:16:05 | https://github.com/huggingface/datasets/issues/3204 | null | arda-vianai | false | ["@mariosasko @lhoestq Could you give me an update on how to load the dataset after the fix?\r\nThanks.", "Hi @arda-vianai,\r\n\r\nfirst, you can try:\r\n```python\r\nimport datasets\r\ndataset = datasets.load_dataset('tuple_ie', 'all', revision=\"master\")\r\n```\r\nIf this doesn't work, your version of `datasets` is missing some features that are required to run the dataset script, so install the master version with the following command:\r\n```\r\npip install git+https://github.com/huggingface/datasets.git\r\n```\r\nand then:\r\n```python\r\nimport datasets\r\ndataset = datasets.load_dataset('tuple_ie', 'all')\r\n```\r\nshould work (even without `revision`).", "@mariosasko \r\nThanks, it is working now. I actually did that before but I didn't restart the kernel. I restarted it and it works now. My bad!!!\r\nMany thanks and great job!\r\n-arda"] |
| 1,043,552,766 | 3,203 | Updated: DaNE - updated URL for download | closed | 2021-11-03T12:55:13 | 2021-11-04T13:14:36 | 2021-11-04T11:46:43 | https://github.com/huggingface/datasets/pull/3203 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/3203", "html_url": "https://github.com/huggingface/datasets/pull/3203", "diff_url": "https://github.com/huggingface/datasets/pull/3203.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3203.patch", "merged_at": "2021-11-04T11:46:43"} | MalteHB | true | ["Actually it looks like the old URL is still working, and it's also the one that is mentioned in https://github.com/alexandrainst/danlp/blob/master/docs/docs/datasets.md\r\n\r\nWhat makes you think we should use the new URL ?", "@lhoestq Sorry! I might have jumped to conclusions a bit too fast here... \r\n\r\nI was working in Google Colab and got an error that it was unable to use the URL. I then forked the project, updated the URL, ran it locally and it worked. I therefore assumed that my URL update fixed the issue, however, I see now that it might rather be a Google Colab issue... \r\n\r\nStill - this seems to be the official URL for downloading the dataset, and I think that it will be most beneficial to use. :-) ", "It looks like they're using these new urls for their new datasets. Maybe let's change to the new URL in case the old one stops working at one point. Thanks"] |
| 1,043,213,660 | 3,202 | Add mIoU metric | closed | 2021-11-03T08:42:32 | 2022-06-01T17:39:05 | 2022-06-01T17:39:04 | https://github.com/huggingface/datasets/issues/3202 | null | NielsRogge | false | ["Resolved via https://github.com/huggingface/datasets/pull/3745."] |
| 1,043,209,142 | 3,201 | Add GSM8K dataset | closed | 2021-11-03T08:36:44 | 2022-04-13T11:56:12 | 2022-04-13T11:56:11 | https://github.com/huggingface/datasets/issues/3201 | null | NielsRogge | false | ["Closed via https://github.com/huggingface/datasets/pull/4103"] |
| 1,042,887,291 | 3,200 | Catch token invalid error in CI | closed | 2021-11-02T21:56:26 | 2021-11-03T09:41:08 | 2021-11-03T09:41:08 | https://github.com/huggingface/datasets/pull/3200 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/3200", "html_url": "https://github.com/huggingface/datasets/pull/3200", "diff_url": "https://github.com/huggingface/datasets/pull/3200.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3200.patch", "merged_at": "2021-11-03T09:41:08"} | lhoestq | true | [] |
| 1,042,860,935 | 3,199 | Bump huggingface_hub | closed | 2021-11-02T21:29:10 | 2021-11-14T01:48:11 | 2021-11-02T21:41:40 | https://github.com/huggingface/datasets/pull/3199 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/3199", "html_url": "https://github.com/huggingface/datasets/pull/3199", "diff_url": "https://github.com/huggingface/datasets/pull/3199.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3199.patch", "merged_at": "2021-11-02T21:41:40"} | lhoestq | true | [] |
| 1,042,679,548 | 3,198 | Add Multi-Lingual LibriSpeech | closed | 2021-11-02T18:23:59 | 2021-11-04T17:09:22 | 2021-11-04T17:09:22 | https://github.com/huggingface/datasets/pull/3198 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/3198", "html_url": "https://github.com/huggingface/datasets/pull/3198", "diff_url": "https://github.com/huggingface/datasets/pull/3198.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3198.patch", "merged_at": "2021-11-04T17:09:22"} | patrickvonplaten | true | [] |
| 1,042,541,127 | 3,197 | Fix optimized encoding for arrays | closed | 2021-11-02T15:55:53 | 2021-11-02T19:12:24 | 2021-11-02T19:12:23 | https://github.com/huggingface/datasets/pull/3197 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/3197", "html_url": "https://github.com/huggingface/datasets/pull/3197", "diff_url": "https://github.com/huggingface/datasets/pull/3197.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3197.patch", "merged_at": "2021-11-02T19:12:23"} | lhoestq | true | [] |
| 1,042,223,913 | 3,196 | QOL improvements: auto-flatten_indices and desc in map calls | closed | 2021-11-02T11:28:50 | 2021-11-02T15:41:09 | 2021-11-02T15:41:08 | https://github.com/huggingface/datasets/pull/3196 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/3196", "html_url": "https://github.com/huggingface/datasets/pull/3196", "diff_url": "https://github.com/huggingface/datasets/pull/3196.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3196.patch", "merged_at": "2021-11-02T15:41:08"} | mariosasko | true | [] |
| 1,042,204,044 | 3,195 | More robust `None` handling | closed | 2021-11-02T11:15:10 | 2021-12-09T14:27:00 | 2021-12-09T14:26:58 | https://github.com/huggingface/datasets/pull/3195 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/3195", "html_url": "https://github.com/huggingface/datasets/pull/3195", "diff_url": "https://github.com/huggingface/datasets/pull/3195.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3195.patch", "merged_at": "2021-12-09T14:26:57"} | mariosasko | true | ["I also created a PR regarding `disable_nullable` that must be always `False` by default, in order to always allow None values\r\nhttps://github.com/huggingface/datasets/pull/3211", "@lhoestq I addressed your comments, added tests, did some refactoring to make the implementation cleaner and added support for `None` values in `map` transforms when the feature type is `ArrayXD` (previously, I only implemented `None` decoding).\r\n\r\nMy only concern is that during decoding `ArrayXD` arrays with `None` values will be auto-casted to `float64` to allow `np.nan` insertion and this might be unexpected if `dtype` is not `float`, so one option would be to allow `None` values only if the storage type is `float32` or `float64`. Let me know WDYT would be the most consistent behavior here.", "Cool ! :D\r\n> My only concern is that during decoding ArrayXD arrays with None values will be auto-casted to float64 to allow np.nan insertion and this might be unexpected if dtype is not float, so one option would be to allow None values only if the storage type is float32 or float64. Let me know WDYT would be the most consistent behavior here.\r\n\r\nYes that makes sense to only fill with nan if the type is compatible", "After some more experimenting, I think we can keep auto-cast to float because PyArrow also does it:\r\n```python\r\nimport pyarrow as pa\r\narr = pa.array([1, 2, 3, 4, None], type=pa.int32()).to_numpy(zero_copy_only=False) # None present - int32 -> float64\r\nassert arr.dtype == np.float64\r\n```\r\nAdditional changes:\r\n* fixes a bug in the `_is_zero_copy_only` implementation for the ArraXD types. Previously, `_is_zero_copy_only` would always return False for these types. Still have to see if it's possible to optimize copying of the non-extension types (`Sequence`, ...), but I plan to work on that in a separate PR.\r\n* https://github.com/huggingface/datasets/pull/2891 introduced a bug where the dtype of `ArrayXD` wouldn't be preserved due to `to_pylist` call in NumPy Formatter (`np.array(np.array(..).tolist())` doesn't necessarily preserve dtype of the initial array), so I'm also fixing that. ", "The CI fail for windows is unrelated to this PR, merging"] |
| 1,041,999,535 | 3,194 | Update link to Datasets Tagging app in Spaces | closed | 2021-11-02T08:13:50 | 2021-11-08T10:36:23 | 2021-11-08T10:36:22 | https://github.com/huggingface/datasets/pull/3194 | {"url": "https://api.github.com/repos/huggingface/datasets/pulls/3194", "html_url": "https://github.com/huggingface/datasets/pull/3194", "diff_url": "https://github.com/huggingface/datasets/pull/3194.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3194.patch", "merged_at": "2021-11-08T10:36:22"} | albertvillanova | true | [] |
| 1,041,971,117 | 3,193 | Update link to datasets-tagging app | closed | 2021-11-02T07:39:59 | 2021-11-08T10:36:22 | 2021-11-08T10:36:22 | https://github.com/huggingface/datasets/issues/3193 | null | albertvillanova | false | [] |
| 1,041,308,086 | 3,192 | Multiprocessing filter/map (tests) not working on Windows | open | 2021-11-01T15:36:08 | 2021-11-01T15:57:03 | null | https://github.com/huggingface/datasets/issues/3192 | null | BramVanroy | false | [] |
| 1,041,225,111 | 3,191 | Dataset viewer issue for '*compguesswhat*' | closed | 2021-11-01T14:16:49 | 2022-09-12T08:02:29 | 2022-09-12T08:02:29 | https://github.com/huggingface/datasets/issues/3191 | null | benotti | false | ["```python\r\n>>> import datasets\r\n>>> dataset = datasets.load_dataset('compguesswhat', name='compguesswhat-original',split='train', streaming=True)\r\n>>> next(iter(dataset))\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 497, in __iter__\r\n for key, example in self._iter():\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 494, in _iter\r\n yield from ex_iterable\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 87, in __iter__\r\n yield from self.generate_examples_fn(**self.kwargs)\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/compguesswhat/4d08b9e0a8d1cf036c9626c93be4a759fdd9fcce050ea503ea14b075e830c799/compguesswhat.py\", line 251, in _generate_examples\r\n with gzip.open(filepath) as in_file:\r\n File \"/home/slesage/.pyenv/versions/3.9.6/lib/python3.9/gzip.py\", line 58, in open\r\n binary_file = GzipFile(filename, gz_mode, compresslevel)\r\n File \"/home/slesage/.pyenv/versions/3.9.6/lib/python3.9/gzip.py\", line 173, in __init__\r\n fileobj = self.myfileobj = builtins.open(filename, mode or 'rb')\r\nFileNotFoundError: [Errno 2] No such file or directory: 'zip://compguesswhat-original/0.2.0/compguesswhat.train.jsonl.gz::https://www.dropbox.com/s/l0nc13udml6vs0w/compguesswhat-original.zip?dl=1'\r\n```\r\n\r\nIt's an issue with the streaming mode. Note that normal mode is used by the dataset viewer when streaming is failing, but only for the smallest datasets. This dataset is above the limit, hence the error.\r\n\r\nSame case as https://github.com/huggingface/datasets/issues/3186#issuecomment-1096549774.", "cc @huggingface/datasets ", "There is an issue with the URLs of their data files: https://www.dropbox.com/s/l0nc13udml6vs0w/compguesswhat-original.zip?dl=1\r\n> Dropbox Error: That didn't work for some reason\r\n\r\nError reported to their repo:\r\n- https://github.com/CompGuessWhat/compguesswhat.github.io/issues/1", "Closed by:\r\n- #4968"] |
1,041,153,631
| 3,190
|
combination of shuffle and filter results in a bug
|
closed
| 2021-11-01T13:07:29
| 2021-11-02T10:50:49
| 2021-11-02T10:50:49
|
https://github.com/huggingface/datasets/issues/3190
| null |
rabeehk
| false
|
[
"I cannot reproduce this on master and pyarrow==4.0.1.\r\n",
"Hi ! There was a regression in `datasets` 1.12 that introduced this bug. It has been fixed in #3019 in 1.13\r\n\r\nCan you try to update `datasets` and try again ?",
"Thanks a lot, fixes with 1.13"
] |
1,041,044,986
| 3,189
|
conll2003 incorrect label explanation
|
closed
| 2021-11-01T11:03:30
| 2021-11-09T10:40:58
| 2021-11-09T10:40:58
|
https://github.com/huggingface/datasets/issues/3189
| null |
BramVanroy
| false
|
[
"Hi @BramVanroy,\r\n\r\nsince these fields are of type `ClassLabel` (you can check this with `dset.features`), you can inspect the possible values with:\r\n```python\r\ndset.features[field_name].feature.names # .feature because it's a sequence of labels\r\n```\r\n\r\nand to find the mapping between names and integers, use: \r\n```python\r\ndset.features[field_name].feature.int2str(value_or_values_list) # map integer value to string value\r\n# or\r\ndset.features[field_name].feature.str2int(value_or_values_list) # map string value to integer value\r\n```\r\n\r\n"
] |
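The `int2str`/`str2int` mapping described in the comment above can be sketched with plain Python. `SimpleClassLabel` below is a hypothetical stand-in for `datasets.ClassLabel`, showing only the label↔integer behaviour discussed in the thread, not the library's actual implementation:

```python
# Minimal sketch of the label <-> id mapping described above.
# SimpleClassLabel is illustrative only, not datasets.ClassLabel.
class SimpleClassLabel:
    def __init__(self, names):
        self.names = list(names)
        self._str2int = {name: i for i, name in enumerate(self.names)}

    def int2str(self, values):
        # Accept a single id or a list of ids, as the comment describes.
        if isinstance(values, int):
            return self.names[values]
        return [self.names[v] for v in values]

    def str2int(self, values):
        if isinstance(values, str):
            return self._str2int[values]
        return [self._str2int[v] for v in values]


labels = SimpleClassLabel(["O", "B-PER", "I-PER"])
print(labels.int2str(1))                # B-PER
print(labels.str2int(["O", "I-PER"]))   # [0, 2]
```

With a real dataset, the equivalent calls would go through `dset.features[field_name].feature`, as shown in the quoted comment.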
1,040,980,712
| 3,188
|
conll2002 issues
|
closed
| 2021-11-01T09:49:24
| 2021-11-15T13:50:59
| 2021-11-12T17:18:11
|
https://github.com/huggingface/datasets/issues/3188
| null |
BramVanroy
| false
|
[
"Hi ! Thanks for reporting :)\r\n\r\nThis is related to https://github.com/huggingface/datasets/issues/2742, I'm working on it. It should fix the viewer for around 80 datasets.\r\n",
"Ah, hadn't seen that sorry.\r\n\r\nThe scrambled \"point of contact\" is a separate issue though, I think.",
"@lhoestq The \"point of contact\" is still an issue.",
"It will be fixed in https://github.com/huggingface/datasets/pull/3274, thanks"
] |
1,040,412,869
| 3,187
|
Add ChrF(++) (as implemented in sacrebleu)
|
closed
| 2021-10-31T08:53:58
| 2021-11-02T14:50:50
| 2021-11-02T14:31:26
|
https://github.com/huggingface/datasets/pull/3187
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3187",
"html_url": "https://github.com/huggingface/datasets/pull/3187",
"diff_url": "https://github.com/huggingface/datasets/pull/3187.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3187.patch",
"merged_at": "2021-11-02T14:31:26"
}
|
BramVanroy
| true
|
[] |
1,040,369,397
| 3,186
|
Dataset viewer for nli_tr
|
closed
| 2021-10-31T03:56:33
| 2022-09-12T09:15:34
| 2022-09-12T08:43:09
|
https://github.com/huggingface/datasets/issues/3186
| null |
e-budur
| false
|
[
"It's an issue with the streaming mode:\r\n\r\n```python\r\n>>> import datasets\r\n>>> dataset = datasets.load_dataset('nli_tr', name='snli_tr',split='test', streaming=True)\r\n>>> next(iter(dataset))\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 497, in __iter__\r\n for key, example in self._iter():\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 494, in _iter\r\n yield from ex_iterable\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 87, in __iter__\r\n yield from self.generate_examples_fn(**self.kwargs)\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/nli_tr/c2ddd0c0a70caddac6a81c2dae5ca7939f00060d517d08f1983927818dba6521/nli_tr.py\", line 155, in _generate_examples\r\n with codecs.open(filepath, encoding=\"utf-8\") as f:\r\n File \"/home/slesage/.pyenv/versions/3.9.6/lib/python3.9/codecs.py\", line 905, in open\r\n file = builtins.open(filename, mode, buffering)\r\nFileNotFoundError: [Errno 2] No such file or directory: 'zip://snli_tr_1.0_test.jsonl::https://tabilab.cmpe.boun.edu.tr/datasets/nli_datasets/snli_tr_1.0.zip'\r\n```\r\n\r\nNote that normal mode is used by the dataset viewer when streaming is failing, but only for the smallest datasets. `nli_tr` is above the limit, hence the error.",
"cc @huggingface/datasets ",
"Apparently there is an issue with the data source URLs: Server Not Found\r\n- https://tabilab.cmpe.boun.edu.tr/datasets/nli_datasets/snli_tr_1.0.zip\r\n\r\nWe are contacting the authors to ask them: \r\n@e-budur you are one of the authors: are you aware of the issue with the URLs of your data ?",
"Reported to their repo:\r\n- https://github.com/boun-tabi/NLI-TR/issues/9",
"The server issue was temporary and is now resolved.",
"Once we have implemented support for streaming, the viewer works: https://huggingface.co/datasets/nli_tr"
] |
1,040,291,961
| 3,185
|
7z dataset preview not implemented?
|
closed
| 2021-10-30T20:18:27
| 2022-04-12T11:48:16
| 2022-04-12T11:48:07
|
https://github.com/huggingface/datasets/issues/3185
| null |
Kirili4ik
| false
|
[
"It's a bug in the dataset viewer: the dataset cannot be downloaded in streaming mode, but since the dataset is relatively small, the dataset viewer should have fallback to normal mode. Working on a fix.",
"Fixed. https://huggingface.co/datasets/samsum/viewer/samsum/train\r\n\r\n<img width=\"1563\" alt=\"Capture d’écran 2022-04-12 à 13 47 45\" src=\"https://user-images.githubusercontent.com/1676121/162953339-cd8922d7-9037-408b-b896-eac1af0bb54f.png\">\r\n\r\nThanks for reporting!"
] |
1,040,114,102
| 3,184
|
RONEC v2
|
closed
| 2021-10-30T10:50:03
| 2021-11-02T16:02:23
| 2021-11-02T16:02:22
|
https://github.com/huggingface/datasets/pull/3184
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3184",
"html_url": "https://github.com/huggingface/datasets/pull/3184",
"diff_url": "https://github.com/huggingface/datasets/pull/3184.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3184.patch",
"merged_at": "2021-11-02T16:02:22"
}
|
dumitrescustefan
| true
|
[
"@lhoestq Thanks for the review. I totally understand what you are saying. Normally, I would definitely agree with you, but in this particular case, the quality of v1 is poor, and the dataset itself is small (at the time we created v1 it was the only RO NER dataset, and its size was limited by the available resources). \r\n\r\nThis is why we worked to build a larger one, with much better inter-annotator agreement. Fact is, models trained on v1 will be of very low quality and I would not recommend to anybody to use/do that. That's why I'd strongly suggest we replace v1 with v2, and kindof make v1 vanish :) \r\n\r\nWhat do you think? If you insist on having v1 accessible, I'll add the required code. Thanks!\r\n\r\n",
"Ok I see ! I think it's fine then, no need to re-add V1"
] |
1,039,761,120
| 3,183
|
Add missing docstring to DownloadConfig
|
closed
| 2021-10-29T16:56:35
| 2021-11-02T10:25:38
| 2021-11-02T10:25:37
|
https://github.com/huggingface/datasets/pull/3183
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3183",
"html_url": "https://github.com/huggingface/datasets/pull/3183",
"diff_url": "https://github.com/huggingface/datasets/pull/3183.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3183.patch",
"merged_at": "2021-11-02T10:25:37"
}
|
mariosasko
| true
|
[] |
1,039,739,606
| 3,182
|
Don't memoize strings when hashing since two identical strings may have different python ids
|
closed
| 2021-10-29T16:26:17
| 2021-11-02T09:35:38
| 2021-11-02T09:35:37
|
https://github.com/huggingface/datasets/pull/3182
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3182",
"html_url": "https://github.com/huggingface/datasets/pull/3182",
"diff_url": "https://github.com/huggingface/datasets/pull/3182.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3182.patch",
"merged_at": "2021-11-02T09:35:37"
}
|
lhoestq
| true
|
[
"This change slows down the hash computation a little bit but from my tests it doesn't look too impactful. So I think it's fine to merge this."
] |
1,039,682,097
| 3,181
|
`None` converted to `"None"` when loading a dataset
|
closed
| 2021-10-29T15:23:53
| 2021-12-11T01:16:40
| 2021-12-09T14:26:57
|
https://github.com/huggingface/datasets/issues/3181
| null |
eladsegal
| false
|
[
"Hi @eladsegal, thanks for reporting.\r\n\r\n@mariosasko I saw you are already working on this, but maybe my comment will be useful to you.\r\n\r\nAll values are casted to their corresponding feature type (including `None` values). For example if the feature type is `Value(\"bool\")`, `None` is casted to `False`.\r\n\r\nIt is true that strings were an exception, but this was recently fixed by @lhoestq (see #3158).",
"Thanks for reporting.\r\n\r\nThis is actually a breaking change that I think can cause issues when users preprocess their data. String columns used to be nullable. Maybe we can correct https://github.com/huggingface/datasets/pull/3158 to keep the None values and avoid this breaking change ?\r\n\r\nEDIT: the other types (bool, int, etc) can also become nullable IMO",
"So what would be the best way to handle a feature that can have a null value in some of the instances? So far I used `None`.\r\nUsing the empty string won't be a good option, as it can be an actual value in the data and is not the same as not having a value at all.",
"Hi @eladsegal,\r\n\r\nUse `None`. As @albertvillanova correctly pointed out, this change in conversion was introduced (by mistake) in #3158. To avoid it, install the earlier revision with:\r\n```\r\npip install git+https://github.com/huggingface/datasets.git@8107844ec0e7add005db0585c772ee20adc01a5e\r\n```\r\n\r\nI'm making all the feature types nullable as we speak, and the fix will be merged probably early next week.",
"Hi @mariosasko, is there an estimation as to when this issue will be fixed?",
"https://github.com/huggingface/datasets/pull/3195 fixed it, we'll do a new release soon :)\r\n\r\nFor now feel free to install `datasets` from the master branch",
"Thanks, but unfortunately looks like it isn't fixed yet 😢 \r\n[notebook for 1.14.0](https://colab.research.google.com/drive/1SV3sFXPJMWSQgbm4pr9Y1Q8OJ4JYKcDo?usp=sharing)\r\n[notebook for master](https://colab.research.google.com/drive/145wDpuO74MmsuI0SVLcI1IswG6aHpyhi?usp=sharing)",
"Oh, sorry. I deleted the fix by accident when I was resolving a merge conflict. Let me fix this real quick.",
"Thank you, it works! 🎊 "
] |
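The nullable-casting behaviour debated in the thread above can be illustrated without `datasets`. `cast_value` below is a hypothetical sketch of the desired rule — a null stays a null instead of being coerced to `"None"` (or `False` for booleans) — and is not the library's actual casting code:

```python
def cast_value(value, dtype):
    # Keep None as a genuine null instead of coercing it to the
    # target type (the bug discussed above turned None into "None").
    if value is None:
        return None
    return dtype(value)


assert cast_value(None, str) is None    # stays null
assert cast_value(3, str) == "3"        # real values still cast
assert str(None) == "None"              # the problematic coercion
```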
1,039,641,316
| 3,180
|
fix label mapping
|
closed
| 2021-10-29T14:42:24
| 2021-11-02T13:41:07
| 2021-11-02T10:37:12
|
https://github.com/huggingface/datasets/pull/3180
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3180",
"html_url": "https://github.com/huggingface/datasets/pull/3180",
"diff_url": "https://github.com/huggingface/datasets/pull/3180.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3180.patch",
"merged_at": "2021-11-02T10:37:12"
}
|
VictorSanh
| true
|
[
"heck, test failings. moving to draft. will come back to this later today hopefully",
"Thanks for fixing this :)\r\nI just updated the dataset_infos.json and added the missing `pretty_name` tag to the dataset card",
"thank you @lhoestq! running around as always it felt through as a lower priority..."
] |
1,039,571,928
| 3,179
|
Cannot load dataset when the config name is "special"
|
closed
| 2021-10-29T13:30:47
| 2021-10-29T13:35:21
| 2021-10-29T13:35:21
|
https://github.com/huggingface/datasets/issues/3179
| null |
severo
| false
|
[
"The issue is that the datasets are malformed. Not a bug with the datasets library"
] |
1,039,539,076
| 3,178
|
"Property couldn't be hashed properly" even though fully picklable
|
closed
| 2021-10-29T12:56:09
| 2024-08-19T13:03:49
| 2022-11-02T17:18:43
|
https://github.com/huggingface/datasets/issues/3178
| null |
BramVanroy
| false
|
[
"After some digging, I found that this is caused by `dill` and using `recurse=True)` when trying to dump the object. The problem also occurs without multiprocessing. I can only find [the following information](https://dill.readthedocs.io/en/latest/dill.html#dill._dill.dumps) about this:\r\n\r\n> If recurse=True, then objects referred to in the global dictionary are recursively traced and pickled, instead of the default behavior of attempting to store the entire global dictionary. This is needed for functions defined via exec().\r\n\r\nIn the utils, this is explicitly enabled\r\n\r\nhttps://github.com/huggingface/datasets/blob/df63614223bf1dd1feb267d39d741bada613352c/src/datasets/utils/py_utils.py#L327-L330\r\n\r\nIs this really necessary? Is there a way around it? Also pinging the spaCy team in case this is easy to solve on their end. (I hope so.)",
"Hi ! Thanks for reporting\r\n\r\nYes `recurse=True` is necessary to be able to hash all the objects that are passed to the `map` function\r\n\r\nEDIT: hopefully this object can be serializable soon, but otherwise we can consider adding more control to the user on how to hash objects that are not serializable (as mentioned in https://github.com/huggingface/datasets/issues/3044#issuecomment-948818210)",
"I submitted a PR to spacy that should fix this issue (linked above). I'll leave this open until that PR is merged. ",
"@lhoestq After some testing I find that even with the updated spaCy, no cache files are used. I do not get any warnings though, but I can see that map is run every time I run the code. Do you have thoughts about why? If you want to try the tests below, make sure to install spaCy from [here](https://github.com/BramVanroy/spaCy) and installing the base model with `python -m spacy download en_core_web_sm`.\r\n\r\n```python\r\nfrom functools import partial\r\nfrom pathlib import Path\r\n\r\nimport spacy\r\nfrom datasets import Dataset\r\nimport datasets\r\ndatasets.logging.set_verbosity_debug()\r\n\r\ndef tokenize(nlp, l):\r\n return {\"tok\": [t.text for t in nlp(l[\"text\"])]}\r\n\r\ndef main():\r\n fin = r\"some/file/with/many/lines\"\r\n lines = Path(fin).read_text(encoding=\"utf-8\").splitlines()\r\n nlp = spacy.load(\"en_core_web_sm\")\r\n ds = Dataset.from_dict({\"text\": lines, \"text_id\": list(range(len(lines)))})\r\n tok = partial(tokenize, nlp)\r\n ds = ds.map(tok, load_from_cache_file=True)\r\n print(ds[0:2])\r\n\r\nif __name__ == '__main__':\r\n main()\r\n```\r\n\r\n... or with load_dataset (here I get the message that `load_dataset` can reuse the dataset, but still I see all samples being processed via the tqdm progressbar):\r\n\r\n```python\r\nfrom functools import partial\r\n\r\nimport spacy\r\nfrom datasets import load_dataset\r\nimport datasets\r\ndatasets.logging.set_verbosity_debug()\r\n\r\ndef tokenize(nlp, sample):\r\n return {\"tok\": [t.text for t in nlp(sample[\"text\"])]}\r\n\r\ndef main():\r\n fin = r\"some/file/with/many/lines\"\r\n nlp = spacy.load(\"en_core_web_sm\")\r\n tok_func = partial(tokenize, nlp)\r\n ds = load_dataset('text', data_files=fin)\r\n ds = ds[\"train\"].map(tok_func)\r\n print(ds[0:2])\r\n\r\nif __name__ == '__main__':\r\n main()\r\n```",
"It looks like every time you load `en_core_web_sm` you get a different python object:\r\n```python\r\nimport spacy\r\nfrom datasets.fingerprint import Hasher\r\n\r\nnlp1 = spacy.load(\"en_core_web_sm\")\r\nnlp2 = spacy.load(\"en_core_web_sm\")\r\nHasher.hash(nlp1), Hasher.hash(nlp2)\r\n# ('f6196a33882fea3b', 'a4c676a071f266ff')\r\n```\r\nHere is a list of attributes that have different hashes for `nlp1` and `nlp2`:\r\n- tagger\r\n- parser\r\n- entity\r\n- pipeline (it's the list of the three attributes above)\r\n\r\nI just took a look at the tagger for example and I found subtle differences (there may be other differences though):\r\n```python\r\nnlp1.tagger.model.tok2vec.embed.id, nlp2.tagger.model.tok2vec.embed.id\r\n# (1721, 2243)\r\n```\r\n\r\nWe can try to find all the differences and find the best way to hash those objects properly",
"Thanks for searching! I went looking, and found that this is an implementation detail of thinc\r\n\r\nhttps://github.com/explosion/thinc/blob/68691e303ae68cae4bc803299016f1fc064328bf/thinc/model.py#L96-L98\r\n\r\nPresumably (?) exactly to distinguish between different parts in memory when multiple models are loaded. Do not think that this can be changed on their end - but I will ask what exactly it is for (I'm curious).\r\n\r\nDo you think it is overkill to write something into the hasher explicitly to deal with spaCy models? It seems like something that is beneficial to many, but I do not know if you are open to adding third-party-specific ways to deal with this. If you are, I can have a look for this specific case how we can ignore `thinc.Model.id` from the hasher.",
"It can be even simpler to hash the bytes of the pipeline instead\r\n```python\r\nnlp1.to_bytes() == nlp2.to_bytes() # True\r\n```\r\n\r\nIMO we should integrate the custom hashing for spacy models into `datasets` (we use a custom Pickler for that).\r\nWhat could be done on Spacy's side instead (if they think it's nice to have) is to implement a custom pickling for these classes using `to_bytes`/`from_bytes` to have deterministic pickle dumps.\r\n\r\nFinally I think it would be nice in the future to add an API to let `datasets` users control this kind of things. Something like being able to define your own hashing if you use complex objects.\r\n```python\r\n@datasets.register_hash(spacy.language.Language)\r\ndef hash_spacy_language(nlp):\r\n return Hasher.hash(nlp.to_bytes())\r\n```",
"I do not quite understand what you mean. as far as I can tell, using `to_bytes` does a pickle dump behind the scene (with `srsly`), recursively using `to_bytes` on the required objects. Therefore, the result of `to_bytes` is a deterministic pickle dump AFAICT. Or do you mean that you wish that using your own pickler and running `dumps(nlp)` should also be deterministic? I guess that would require `__setstate__` and `__getstate__` methods on all the objects that have to/from_bytes. I'll have a listen over at spaCy what they think, and if that would solve the issue. I'll try this locally first, if I find the time.\r\n\r\nI agree that having the option to use a custom hasher would be useful. I like your suggestion!\r\n\r\nEDIT: after trying some things and reading through their API, it seems that they explicitly do not want this. https://spacy.io/usage/saving-loading#pipeline\r\n\r\n> When serializing the pipeline, keep in mind that this will only save out the binary data for the individual components to allow spaCy to restore them – not the entire objects. This is a good thing, because it makes serialization safe. But it also means that you have to take care of storing the config, which contains the pipeline configuration and all the relevant settings.\r\n\r\nBest way forward therefore seems to implement the ability to specify a hasher depending on the objects that are pickled, as you suggested. I can work on this if that is useful. I could use some pointers as to how you would like to implement the `register_hash` functionality though. I assume using `catalogue` over at Explosion might be a good starting point.\r\n\r\n",
"Interestingly, my PR does not solve the issue discussed above. The `tokenize` function hash is different on every run, because for some reason `nlp.__call__` has a different hash every time. The issue therefore seems to run much deeper than I thought. If you have any ideas, I'm all ears.\r\n\r\n```shell\r\ngit clone https://github.com/explosion/spaCy.git\r\ncd spaCy/\r\ngit checkout cab9209c3dfcd1b75dfe5657f10e52c4d847a3cf\r\ncd ..\r\n\r\ngit clone https://github.com/BramVanroy/datasets.git\r\ncd datasets\r\ngit checkout registry\r\npip install -e .\r\npip install ../spaCy\r\nspacy download en_core_web_sm\r\n```\r\n\r\n```python\r\nimport spacy\r\n\r\nfrom datasets import load_dataset\r\nfrom datasets.fingerprint import Hasher\r\nfrom datasets.utils.registry import hashers\r\n\r\n@hashers.register(spacy.Language)\r\ndef hash_spacy_language(nlp):\r\n return Hasher.hash(nlp.to_bytes())\r\n\r\ndef main():\r\n fin = r\"your/large/file\"\r\n nlp = spacy.load(\"en_core_web_sm\")\r\n # This is now always the same yay!\r\n print(Hasher.hash(nlp))\r\n\r\n def tokenize(l):\r\n return {\"tok\": [t.text for t in nlp(l[\"text\"])]}\r\n\r\n ds = load_dataset(\"text\", data_files=fin)\r\n # But this is not...\r\n print(Hasher.hash(tokenize))\r\n # ... because of this\r\n print(Hasher.hash(nlp.__call__))\r\n ds = ds[\"train\"].map(tokenize)\r\n print(ds[0:2])\r\n\r\n\r\nif __name__ == '__main__':\r\n main()\r\n```",
"Hi ! I just answered in your PR :) In order for your custom hashing to be used for nested objects, you must integrate it into our recursive pickler that we use for hashing.",
"I don't quite understand the design constraints of `datasets` or the script that you're running, but my usual advice is to avoid using pickle unless you _absolutely_ have to. So for instance instead of doing your `partial` over the `nlp` object itself, can you just pass the string `en_core_web_sm` in? This will mean calling `spacy.load()` inside the work function, but this is no worse than having to call `pickle.load()` on the contents of the NLP object anyway -- in fact you'll generally find `spacy.load()` faster, apart from the disk read.\r\n\r\nIf you need to pass in the bytes data and don't want to read from disk, you could do something like this:\r\n\r\n```\r\nmsg = (nlp.lang, nlp.to_bytes())\r\n\r\ndef unpack(lang, bytes_data):\r\n return spacy.blank(lang).from_bytes(bytes_data)\r\n```\r\n\r\nI think that should probably work: the Thinc `model.to_dict()` method (which is used by the `model.to_bytes()` method) doesn't pack the model's ID into the message, so the `nlp.to_bytes()` that you get shouldn't be affected by the global IDs. So you should get a clean message from `nlp.to_bytes()` that doesn't depend on the global state.",
"Hi Matthew, thanks for chiming in! We are currently implementing exactly what you suggest: `to_bytes()` as a default before pickling - but we may prefer `to_dict` to avoid double dumping.\r\n\r\n`datasets` uses pickle dumps (actually dill) to get unique representations of processing steps (a \"fingerprint\" or hash). So it never needs to re-load that dump - it just needs its value to create a hash. If a fingerprint is identical to a cached fingerprint, then the result can be retrieved from the on-disk cache. (@lhoestq or @mariosasko can correct me if I'm wrong.)\r\n\r\nI was experiencing the issue that parsing with spaCy gave me a different fingerprint on every run of the script and thus it could never load the processed dataset from cache. At first I thought the reason was that spaCy Language objects were not picklable with recursive dill, but even after [adjusting for that](https://github.com/explosion/spaCy/pull/9593) the issue persisted. @lhoestq found that this is due to the changing `id`, which you discussed [here](https://github.com/explosion/spaCy/discussions/9609#discussioncomment-1661081). So yes, you are right. On the surface there simply seems to be an incompatibility between `datasets` default caching functionality as it is currently implemented and `spacy.Language`.\r\n\r\nThe [linked PR](https://github.com/huggingface/datasets/pull/3224) aims to remedy that, though. Up to now I have put some effort into making it easier to define your own \"pickling\" function for a given type (and optionally any of its subclasses). That allows us to tell `datasets` that instead of doing `dill.save(nlp)` (non-deterministic), to use `dill.save(nlp.to_bytes())` (deterministic). When I find some more time, the PR [will be expanded](https://github.com/huggingface/datasets/pull/3224#issuecomment-968958528) to improve the user-experience a bit and add a built-in function to pickle `spacy.Language` as one of the defaults (using `to_bytes()`).",
"Is there a workaround for this? maybe by explicitly requesting datasets to cache the result of `.map()`?",
"Hi ! If your function is not picklable, then the fingerprint of the resulting dataset can't be computed. The fingerprint is a hash that is used by the cache to reload previously computed datasets: the dataset file is named `cache-<fingerprint>.arrow` in your dataset's cache directory.\r\n\r\nAs a workaround you can set the fingerprint that is going to be used by the cache:\r\n```python\r\nresult = my_dataset.map(func, new_fingerprint=new_fingerprint)\r\n```\r\nAny future call to `map` with the same `new_fingerprint` will reload the result from the cache.\r\n\r\n**Be careful using this though: if you change your `func`, be sure to change the `new_fingerprint` as well.**",
"I've been having an issue that might be related to this when trying to pre-tokenize a corpus and caching it for using it later in the pre-training of a RoBERTa model. I always get the following warning:\r\n\r\n```\r\nDataset text downloaded and prepared to /gpfswork/rech/project/user/.cache/hf-datasets/text/default-1850886023af0077/0.0.0/acc32f2f2ef863c93c2f30c52f7df6cc9053a1c2230b8d7da0d210404683ca08. Subsequent calls will reuse this data.\r\nParameter 'function'=<function encode_dataset.<locals>.<lambda> at 0x14a92157b280> of the transform datasets.arrow_dataset.Dataset.filter@2.0.1 couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.\r\n```\r\n\r\nAnd when I launch the pre-training the pre-tokenized corpus is not found and it is tokenized again, which makes me waste precious GPU hours.\r\n\r\nFor me, the workaround was downgrading `dill` and `multiprocess` to the following versions:\r\n\r\n```\r\ndill 0.3.4\r\nmultiprocess 0.70.12.2 \r\n```",
"> Hi ! If your function is not picklable, then the fingerprint of the resulting dataset can't be computed. The fingerprint is a hash that is used by the cache to reload previously computed datasets: the dataset file is named `cache-<fingerprint>.arrow` in your dataset's cache directory.\r\n> \r\n> As a workaround you can set the fingerprint that is going to be used by the cache:\r\n> \r\n> ```python\r\n> result = my_dataset.map(func, new_fingerprint=new_fingerprint)\r\n> ```\r\n> \r\n> Any future call to `map` with the same `new_fingerprint` will reload the result from the cache.\r\n> \r\n> **Be careful using this though: if you change your `func`, be sure to change the `new_fingerprint` as well.**\r\n\r\nIs the argument `new_fingerprint` available for datasetDict ? I can only use it on arrow datasets but might be useful to generalize it to DatasetDict's map as well ? @lhoestq ",
"> I've been having an issue that might be related to this when trying to pre-tokenize a corpus and caching it for using it later in the pre-training of a RoBERTa model. I always get the following warning:\r\n> \r\n> ```\r\n> Dataset text downloaded and prepared to /gpfswork/rech/project/user/.cache/hf-datasets/text/default-1850886023af0077/0.0.0/acc32f2f2ef863c93c2f30c52f7df6cc9053a1c2230b8d7da0d210404683ca08. Subsequent calls will reuse this data.\r\n> Parameter 'function'=<function encode_dataset.<locals>.<lambda> at 0x14a92157b280> of the transform datasets.arrow_dataset.Dataset.filter@2.0.1 couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.\r\n> ```\r\n> \r\n> And when I launch the pre-training the pre-tokenized corpus is not found and it is tokenized again, which makes me waste precious GPU hours.\r\n> \r\n> For me, the workaround was downgrading `dill` and `multiprocess` to the following versions:\r\n> \r\n> ```\r\n> dill 0.3.4\r\n> multiprocess 0.70.12.2 \r\n> ```\r\n\r\nThis worked for me - thanks!",
"I see this has just been closed - it seems quite relevant to another tokenizer I have been trying to use, the `vinai/phobert` family of tokenizers\r\n\r\nhttps://huggingface.co/vinai/phobert-base\r\nhttps://huggingface.co/vinai/phobert-large\r\n\r\nI ran into an issue where a large dataset took several hours to tokenize, the process hung, and I was unable to use the cached version of the tokenized data:\r\n\r\nhttps://discuss.huggingface.co/t/cache-parallelize-long-tokenization-step/25791/3\r\n\r\nI don't see any way to specify the hash of the tokenizer or the fingerprint of the tokenized data to use, so is the tokenized dataset basically lost at this point? Is there a good way to avoid this happening again if I retokenize the data?\r\n",
"In your case it looks like the job failed before caching the data - maybe one of the processes crashed",
"Interesting. Thanks for the observation. Any suggestions on how to start tracking that down? Perhaps run it singlethreaded and see if it crashes?",
"You can monitor your RAM and disk space in case a process dies from OOM or disk full, and when it hangs you can check how many processes are running. IIRC there are other start methods for multiprocessing in python that may show an error message if a process dies.\r\n\r\nRunning on a single process can also help debugging this indeed",
"https://github.com/huggingface/datasets/issues/3178#issuecomment-1189435462\r\n\r\nThe solution does not solve for using commonvoice dataset (\"mozilla-foundation/common_voice_11_0\")",
"Hi @tung-msol could you open a new issue and share the error you got and the map function you used ?",
"I faced the same problem even after using these versions for python 3.10:\r\n\r\n> dill 0.3.4\r\n> multiprocess 0.70.12.2 \r\n\r\nHowever, doing `result = my_dataset.map(func, new_fingerprint=new_fingerprint)` worked!",
"> dataset.map(func, new_fingerprint=new_fingerprint)\r\n\r\ndid you mean `new_fingerprint=\"new_fingerprint\"`, or did you define before this call?",
"You should define your fingerprint as a string that identifies your dataset. By default it's a hash computed from the data files and on the applied transformations."
] |
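The `to_bytes()` idea discussed at length above — hash a deterministic byte serialization of the object rather than pickling the live object, whose process-specific ids differ between runs — can be sketched with the standard library alone. `Model` and `stable_hash` are illustrative names, not the spaCy/thinc or `datasets` APIs:

```python
import hashlib
import json


class Model:
    # Hypothetical object whose pickle would differ between runs
    # because it stores a process-specific id (as thinc models do).
    def __init__(self, weights):
        self.id = id(self)  # non-deterministic across objects/runs
        self.weights = weights

    def to_bytes(self):
        # Deterministic serialization of only the meaningful state.
        return json.dumps({"weights": self.weights}, sort_keys=True).encode()


def stable_hash(obj):
    # Hash the deterministic bytes, not the in-memory object.
    return hashlib.sha256(obj.to_bytes()).hexdigest()


m1, m2 = Model([1, 2, 3]), Model([1, 2, 3])
assert m1.id != m2.id                       # objects differ in memory
assert stable_hash(m1) == stable_hash(m2)   # but hash identically
```

This is the same principle as registering a custom hasher that does `Hasher.hash(nlp.to_bytes())`, as proposed in the thread.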
1,039,487,780
| 3,177
|
More control over TQDM when using map/filter with multiple processes
|
closed
| 2021-10-29T11:56:16
| 2023-02-13T20:16:40
| 2023-02-13T20:16:40
|
https://github.com/huggingface/datasets/issues/3177
| null |
BramVanroy
| false
|
[
"Hi,\r\n\r\nIt's hard to provide an API that would cover all use-cases with tqdm in this project.\r\n\r\nHowever, you can make it work by defining a custom decorator (a bit hacky tho) as follows:\r\n```python\r\nimport datasets\r\n\r\ndef progress_only_on_rank_0(func):\r\n def wrapper(*args, **kwargs):\r\n rank = kwargs.get(\"rank\")\r\n disable_tqdm = kwargs.get(\"disable_tqdm\", False)\r\n disable_tqdm = True if rank is not None and rank > 0 else disable_tqdm\r\n kwargs[\"disable_tqdm\"] = disable_tqdm\r\n return func(*args, **kwargs)\r\n return wrapper\r\n \r\ndatasets.Dataset._map_single = progress_only_on_rank_0(datasets.Dataset._map_single)\r\n``` \r\n\r\nEDIT: Ups, closed by accident.\r\n\r\nThanks for the provided links. `Trainer` requires this for training in multi-node distributed setting. However, `Dataset.map` doesn't support that yet.\r\n\r\nDo you have an API for this in mind? `Dataset.map` is already bloated with the arguments, so IMO it's not a good idea to add a new arg there.\r\n\r\n",
"Inspiration may be found at `transformers`.\r\n\r\nhttps://github.com/huggingface/transformers/blob/4a394cf53f05e73ab9bbb4b179a40236a5ffe45a/src/transformers/trainer.py#L1231-L1233\r\n\r\nTo get unique IDs for each worker, see https://stackoverflow.com/a/10192611/1150683"
] |
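The decorator workaround quoted above can be exercised without `datasets`. Below is a self-contained sketch of the same pattern; `fake_map` is a hypothetical stand-in for `Dataset._map_single`, used only to show that the progress bar is suppressed for every rank except 0:

```python
import functools


def progress_only_on_rank_0(func):
    # Force the tqdm bar off for every worker except rank 0,
    # mirroring the decorator in the comment above.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        rank = kwargs.get("rank")
        if rank is not None and rank > 0:
            kwargs["disable_tqdm"] = True
        return func(*args, **kwargs)
    return wrapper


@progress_only_on_rank_0
def fake_map(data, rank=None, disable_tqdm=False):
    # Stand-in for Dataset._map_single: report whether tqdm would show.
    return {"rank": rank, "tqdm_shown": not disable_tqdm}


print(fake_map([1, 2], rank=0))  # bar shown on rank 0
print(fake_map([1, 2], rank=3))  # bar suppressed on other ranks
```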
1,039,068,312
| 3,176
|
OpenSLR dataset: update generate_examples to properly extract data for SLR83
|
closed
| 2021-10-29T00:59:27
| 2021-11-04T16:20:45
| 2021-10-29T10:04:09
|
https://github.com/huggingface/datasets/pull/3176
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3176",
"html_url": "https://github.com/huggingface/datasets/pull/3176",
"diff_url": "https://github.com/huggingface/datasets/pull/3176.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3176.patch",
"merged_at": "2021-10-29T10:04:09"
}
|
tyrius02
| true
|
[
"Also fix #3125."
] |
1,038,945,271
| 3,175
|
Add docs for `to_tf_dataset`
|
closed
| 2021-10-28T20:55:22
| 2021-11-03T15:39:36
| 2021-11-03T10:07:23
|
https://github.com/huggingface/datasets/pull/3175
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3175",
"html_url": "https://github.com/huggingface/datasets/pull/3175",
"diff_url": "https://github.com/huggingface/datasets/pull/3175.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3175.patch",
"merged_at": "2021-11-03T10:07:23"
}
|
stevhliu
| true
|
[
"This looks great, thank you!",
"Thanks !\r\n\r\nFor some reason the new GIF is 6MB, which is a bit heavy for an image on a website. The previous one was around 200KB though which is perfect. For a good experience we usually expect images to be less than 500KB - otherwise for users with poor connection it takes too long to load. Could you try to reduce its size ? Than I think we can merge :)"
] |
1,038,427,245
| 3,174
|
Asserts replaced by exceptions (huggingface#3171)
|
closed
| 2021-10-28T11:55:45
| 2021-11-06T06:35:32
| 2021-10-29T13:08:43
|
https://github.com/huggingface/datasets/pull/3174
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3174",
"html_url": "https://github.com/huggingface/datasets/pull/3174",
"diff_url": "https://github.com/huggingface/datasets/pull/3174.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3174.patch",
"merged_at": "2021-10-29T13:08:43"
}
|
joseporiolayats
| true
|
[
"Your first PR went smoothly, well done!\r\nYou are welcome to continue contributing to this project.\r\nGràcies, @joseporiolayats! 😉 "
] |
1,038,404,300
| 3,173
|
Fix issue with filelock filename being too long on encrypted filesystems
|
closed
| 2021-10-28T11:28:57
| 2021-10-29T09:42:24
| 2021-10-29T09:42:24
|
https://github.com/huggingface/datasets/pull/3173
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3173",
"html_url": "https://github.com/huggingface/datasets/pull/3173",
"diff_url": "https://github.com/huggingface/datasets/pull/3173.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3173.patch",
"merged_at": "2021-10-29T09:42:24"
}
|
mariosasko
| true
|
[] |
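A common way to handle over-long lock-file names like the ones this PR addresses is to hash the overflow down to a fixed length. The sketch below is illustrative only (it is not the code merged in either this PR or #3173); the limit of 143 characters in the example reflects the tighter filename cap typical of encrypted filesystems such as eCryptfs.

```python
import hashlib
import os

def shorten_lock_name(lock_path: str, max_filename_len: int = 255) -> str:
    """If the lock file's name exceeds the filesystem limit, replace the
    overflow with a hex digest so the result stays short yet unique."""
    directory, name = os.path.split(lock_path)
    if len(name) <= max_filename_len:
        return lock_path
    digest = hashlib.sha256(name.encode("utf-8")).hexdigest()  # 64 hex chars
    # Keep a recognizable prefix, then the digest, then the ".lock" suffix.
    prefix_len = max_filename_len - len(digest) - len(".lock")
    short_name = name[:prefix_len] + digest + ".lock"
    return os.path.join(directory, short_name)

print(shorten_lock_name("/tmp/" + "x" * 1000 + ".lock", max_filename_len=143))
```

Short names pass through untouched, so existing lock paths keep working.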
1,038,351,587
| 3,172
|
`SystemError 15` thrown in `Dataset.__del__` when using `Dataset.map()` with `num_proc>1`
|
closed
| 2021-10-28T10:29:00
| 2024-04-02T18:13:21
| 2021-11-03T11:26:10
|
https://github.com/huggingface/datasets/issues/3172
| null |
vlievin
| false
|
[
"NB: even if the error is raised, the dataset is successfully cached. So restarting the script after every `map()` allows to ultimately run the whole preprocessing. But this prevents to realistically run the code over multiple nodes.",
"Hi,\r\n\r\nIt's not easy to debug the problem without the script. I may be wrong since I'm not very familiar with PyTorch Lightning, but shouldn't you preprocess the data in the `prepare_data` function of `LightningDataModule` and not in the `setup` function.\r\nAs you can't modify the module state in `prepare_data` (according to the docs), use the `cache_file_name` argument in `Dataset.map` there, and reload the processed data in `setup` with `Dataset.from_file(cache_file_name)`. If `num_proc>1`, check the docs on the `suffix_template` argument of `Dataset.map` to get an idea what the final `cache_file_names` are going to be.\r\n\r\nLet me know if this helps.",
"Hi @mariosasko, thank you for the hint, that helped me to move forward with that issue. \r\n\r\nI did a major refactoring of my project to disentangle my `LightningDataModule` and `Dataset`. Just FYI, it looks like:\r\n\r\n```python\r\nclass Builder():\r\n def __call__() -> DatasetDict:\r\n # load and preprocess the data\r\n return dataset\r\n\r\nclass DataModule(LightningDataModule):\r\n def prepare_data():\r\n self.builder()\r\n def setup():\r\n self.dataset = self.builder()\r\n```\r\n\r\nUnfortunately, the entanglement between `LightningDataModule` and `Dataset` was not the issue.\r\n\r\nThe culprit was `hydra` and a slight adjustment of the structure of my project solved this issue. The problematic project structure was:\r\n\r\n```\r\nsrc/\r\n | - cli.py\r\n | - training/\r\n | -experiment.py\r\n\r\n# code in experiment.py\r\ndef run_experiment(config):\r\n # preprocess data and run\r\n \r\n# code in cli.py\r\n@hydra.main(...)\r\ndef run(config):\r\n return run_experiment(config)\r\n```\r\n\r\nMoving `run()` from `clip.py` to `training.experiment.py` solved the issue with `SystemError 15`. No idea why. \r\n\r\nEven if the traceback was referring to `Dataset.__del__`, the problem does not seem to be primarily related to `datasets`, so I will close this issue. Thank you for your help!",
"Please allow me to revive this discussion, as I have an extremely similar issue. Instead of an error, my datasets functions simply aren't caching properly. My setup is almost the same as yours, with hydra to configure my experiment parameters.\r\n\r\n@vlievin Could you confirm if your code correctly loads the cache? If so, do you have any public code that I can reference for comparison?\r\n\r\nI will post a full example with hydra that illustrates this problem in a little bit, probably on another thread.",
"Hello @mariomeissner, very sorry for the late reply, I hope you have found a solution to your problem!\r\n\r\nI don't have public code at the moment. I have not experienced any other issue with hydra, even if I don't understand why changing the location of the definition of `run()` fixed the problem. \r\n\r\nOverall, I don't have issue with caching anymore, even when \r\n1. using custom fingerprints using the argument `new_fingerprint \r\n2. when using `num_proc>1`",
"I solved my issue by turning the map callable into a class static method, like they do in `lightning-transformers`. Very strange...",
"I have this issue with datasets v2.5.2 with Python 3.8.10 on Ubuntu 20.04.4 LTS. It does not occur when num_proc=1. When num_proc>1, it intermittently occurs and will cause process to hang. As previously mentioned, it occurs even when datasets have been previously cached. I have tried wrapping logic in a static class as suggested with @mariomeissner with no improvement.",
"@philipchung hello ,i have the same issue like yours,did you solve it?",
"No. I was not able to get num_proc>1 to work.",
"same problem here. It randomly occurs...",
"Can someone provide a reproducer to help us debug this (e.g., a `hydra` repo with dummy model and data)?",
"Hi, similarly here, is there any update for this issue? Particularly having the exact same message as in #6393, not sure this is a datasets or llm-foundry issue"
] |
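One workaround reported in the thread above is turning the `map` callable into a class static method so it pickles cleanly when `num_proc>1` spawns worker processes. The class below is purely illustrative (it is not from any of the projects discussed); the point is that a module-level static method survives a pickle round trip, while a lambda or locally defined closure would not.

```python
import pickle

class Preprocessor:
    @staticmethod
    def tokenize(example):
        # A static method defined at module level can be pickled by reference
        # and shipped to multiprocessing workers.
        example["n_words"] = len(example["text"].split())
        return example

# Round-trip the callable the way a multiprocessing pool would.
restored = pickle.loads(pickle.dumps(Preprocessor.tokenize))
print(restored({"text": "hello world"}))
```

By contrast, `pickle.dumps(lambda ex: ex)` raises an error, which is one reason locally defined map functions misbehave in multi-process setups.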
1,037,728,059
| 3,171
|
Raise exceptions instead of using assertions for control flow
|
closed
| 2021-10-27T18:26:52
| 2021-12-23T16:40:37
| 2021-12-23T16:40:37
|
https://github.com/huggingface/datasets/issues/3171
| null |
mariosasko
| false
|
[
"Adding the remaining tasks for this issue to help new code contributors. \r\n$ cd src/datasets && ack assert -lc \r\n- [x] commands/convert.py:1\r\n- [x] arrow_reader.py:3\r\n- [x] load.py:7\r\n- [x] utils/py_utils.py:2\r\n- [x] features/features.py:9\r\n- [x] arrow_writer.py:7\r\n- [x] search.py:6\r\n- [x] table.py:1\r\n- [x] metric.py:3\r\n- [x] tasks/image_classification.py:1\r\n- [x] arrow_dataset.py:17\r\n- [x] fingerprint.py:6\r\n- [x] io/json.py:1\r\n- [x] io/csv.py:1",
"Hi all,\r\nI am interested in taking up `fingerprint.py`, `search.py`, `arrow_writer.py` and `metric.py`. Will raise a PR soon!",
"Let me look into `arrow_dataset.py`, `table.py`, `data_files.py` & `features.py` ",
"All the tasks are completed for this issue. This can be closed. "
] |
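The refactoring pattern tracked by this issue, replacing control-flow `assert` statements with typed exceptions, can be illustrated with a before/after pair. The example is hypothetical, not code from the library: bare asserts raise an unhelpful `AssertionError` and vanish entirely under `python -O`.

```python
# Before: fails with a bare AssertionError, and is skipped under `python -O`.
def concatenate_before(columns):
    assert len(columns) > 0, "need at least one column"
    return sum(columns, [])

# After: raises a descriptive, typed exception that callers can catch.
def concatenate(columns):
    if len(columns) == 0:
        raise ValueError("Expected at least one column to concatenate, got 0.")
    return sum(columns, [])

try:
    concatenate([])
except ValueError as err:
    print(err)
```

Callers can now distinguish bad input (`ValueError`) from genuine internal bugs.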
1,037,601,926
| 3,170
|
Preserve ordering in `zip_dict`
|
closed
| 2021-10-27T16:07:30
| 2021-10-29T13:09:37
| 2021-10-29T13:09:37
|
https://github.com/huggingface/datasets/pull/3170
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3170",
"html_url": "https://github.com/huggingface/datasets/pull/3170",
"diff_url": "https://github.com/huggingface/datasets/pull/3170.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3170.patch",
"merged_at": "2021-10-29T13:09:37"
}
|
mariosasko
| true
|
[] |
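The utility this PR touches zips several dicts key by key. A minimal sketch of such a helper (illustrative; the implementation merged in the repository may differ) relies on plain dicts preserving insertion order, which is guaranteed since Python 3.7:

```python
def zip_dict(*dicts):
    """Iterate over the keys of the first dict, yielding each key together
    with the tuple of corresponding values from every dict, in insertion order."""
    for key in dicts[0]:
        yield key, tuple(d[key] for d in dicts)

d1 = {"a": 1, "b": 2}
d2 = {"a": 10, "b": 20}
print(list(zip_dict(d1, d2)))
```

Because iteration follows the first dict's insertion order, the output order is deterministic rather than dependent on hashing.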
1,036,773,357
| 3,169
|
Configurable max filename length in file locks
|
closed
| 2021-10-26T21:52:55
| 2021-10-28T16:14:14
| 2021-10-28T16:14:13
|
https://github.com/huggingface/datasets/pull/3169
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3169",
"html_url": "https://github.com/huggingface/datasets/pull/3169",
"diff_url": "https://github.com/huggingface/datasets/pull/3169.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3169.patch",
"merged_at": null
}
|
lmmx
| true
|
[
"I've also added environment variable configuration so that this can be configured once per machine (e.g. in a `.bashrc` file), as is already done for a few other config variables here.",
"Cancelling PR in favour of @mariosasko's in #3173"
] |
1,036,673,263
| 3,168
|
OpenSLR/83 is empty
|
closed
| 2021-10-26T19:42:21
| 2021-10-29T10:04:09
| 2021-10-29T10:04:09
|
https://github.com/huggingface/datasets/issues/3168
| null |
tyrius02
| false
|
[
"Hi @tyrius02, thanks for reporting. I see you self-assigned this issue: are you working on this?",
"@albertvillanova Yes. Figured I introduced the broken config, I should fix it too.\r\n\r\nI've got it working, but I'm struggling with one of the tests. I've started a PR so I/we can work through it.",
"Looks like the tests all passed on the PR."
] |
1,036,488,992
| 3,167
|
bookcorpusopen no longer works
|
closed
| 2021-10-26T16:06:15
| 2021-11-17T15:53:46
| 2021-11-17T15:53:46
|
https://github.com/huggingface/datasets/issues/3167
| null |
lucadiliello
| false
|
[
"Hi ! Thanks for reporting :) I think #3280 should fix this",
"I tried with the latest changes from #3280 on google colab and it worked fine :)\r\nWe'll do a new release soon, in the meantime you can use the updated version with:\r\n```python\r\nload_dataset(\"bookcorpusopen\", revision=\"master\")\r\n```",
"Fixed by #3280."
] |
1,036,450,283
| 3,166
|
Deprecate prepare_module
|
closed
| 2021-10-26T15:28:24
| 2021-11-05T09:27:37
| 2021-11-05T09:27:36
|
https://github.com/huggingface/datasets/pull/3166
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3166",
"html_url": "https://github.com/huggingface/datasets/pull/3166",
"diff_url": "https://github.com/huggingface/datasets/pull/3166.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3166.patch",
"merged_at": "2021-11-05T09:27:36"
}
|
albertvillanova
| true
|
[
"Sounds good, thanks !"
] |
1,036,448,998
| 3,165
|
Deprecate prepare_module
|
closed
| 2021-10-26T15:27:15
| 2021-11-05T09:27:36
| 2021-11-05T09:27:36
|
https://github.com/huggingface/datasets/issues/3165
| null |
albertvillanova
| false
|
[] |
1,035,662,830
| 3,164
|
Add raw data files to the Hub with GitHub LFS for canonical dataset
|
closed
| 2021-10-25T23:28:21
| 2021-10-30T19:54:51
| 2021-10-30T19:54:51
|
https://github.com/huggingface/datasets/issues/3164
| null |
zlucia
| false
|
[
"Hi @zlucia, I would actually suggest hosting the dataset as a huggingface.co-hosted dataset.\r\n\r\nThe only difference with a \"canonical\"/legacy dataset is that it's nested under an organization (here `stanford` or `stanfordnlp` for instance – completely up to you) but then you can upload your data using git-lfs (unlike \"canonical\" datasets where we don't host the data)\r\n\r\nLet me know if this fits your use case!\r\n\r\ncc'ing @osanseviero @lhoestq and rest of the team 🤗",
"Hi @zlucia,\r\n\r\nAs @julien-c pointed out, the way to store/host raw data files in our Hub is by using what we call \"community\" datasets:\r\n- either at your personal namespace: `load_dataset(\"zlucia/casehold\")`\r\n- or at an organization namespace: for example, if you create the organization `reglab`, then `load_dataset(\"reglab/casehold\")`\r\n\r\nPlease note that \"canonical\" datasets do not normally store/host their raw data at our Hub, but in a third-party server. For \"canonical\" datasets, we just host the \"loading script\", that is, a Python script that downloads the raw data from a third-party server, creates the HuggingFace dataset from it and caches it locally.\r\n\r\nIn order to create an organization namespace in our Hub, please follow this link: https://huggingface.co/organizations/new\r\n\r\nThere are already many organizations at our Hub (complete list here: https://huggingface.co/organizations), such as:\r\n- Stanford CRFM: https://huggingface.co/stanford-crfm\r\n- Stanford NLP: https://huggingface.co/stanfordnlp\r\n- Stanford CS329S: Machine Learning Systems Design: https://huggingface.co/stanford-cs329s\r\n\r\nAlso note that you in your organization namespace:\r\n- you can add any number of members\r\n- you can store both raw datasets and models, and those can be immediately accessed using `datasets` and `transformers`\r\n\r\nOnce you have created an organization, these are the steps to upload/host a raw dataset: \r\n- The no-code procedure: https://huggingface.co/docs/datasets/upload_dataset.html\r\n- Using the command line (terminal): https://huggingface.co/docs/datasets/share.html#add-a-community-dataset\r\n\r\nPlease, feel free to ping me if you have any further questions or need help.\r\n",
"Ah I see, I think I was unclear whether there were benefits to uploading a canonical dataset vs. a community provided dataset. Thanks for clarifying. I'll see if we want to create an organization namespace and otherwise, will upload the dataset under my personal namespace."
] |
1,035,475,061
| 3,163
|
Add Image feature
|
closed
| 2021-10-25T19:07:48
| 2021-12-30T06:37:21
| 2021-12-06T17:49:02
|
https://github.com/huggingface/datasets/pull/3163
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3163",
"html_url": "https://github.com/huggingface/datasets/pull/3163",
"diff_url": "https://github.com/huggingface/datasets/pull/3163.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3163.patch",
"merged_at": "2021-12-06T17:49:02"
}
|
mariosasko
| true
|
[
"Awesome, looking forward to using it :)",
"Few additional comments:\r\n* the current API doesn't meet the requirements mentioned in #3145 (e.g. image mime-type). However, this will be doable soon as we also plan to store image bytes alongside paths in arrow files (see https://github.com/huggingface/datasets/pull/3129#discussion_r738426187). Then, PIL can return the correct mime-type: \r\n ```python\r\n from PIL import Image\r\n import io\r\n\r\n mimetype = Image.open(io.BytesIO(image_bytes)).get_format_mimetype()\r\n ``` \r\n I plan to add this change in a separate PR.\r\n* currently, I'm returning an `np.ndarray` object after decoding for consistency with the Audio feature. However, the vision models from Transformers prefer an `Image` object to avoid the `Image.fromarray` call in the corresponding feature extractors (see [this warning](https://huggingface.co/transformers/master/model_doc/vit.html#transformers.ViTFeatureExtractor.__call__) in the Transformers docs) cc @NielsRogge \r\n\r\nSo I'm not entirely sure whether to return only a NumPy array, only a PIL Image, or both when decoding. The last point worries me because we shouldn't provide an API that leads to a warning in Transformers (in the docs, not in code :)). At the same time, it makes sense to preserve consistency with the Audio feature and return a NumPy array. \r\n\r\nThat's why I would appreciate your opinions on this.",
"That is a good question. Also pinging @nateraw .\r\n\r\nCurrently we only support returning numpy arrays because of numpy/tf/torch/jax formatting features that we have, and to keep things simple. See the [set_format docs](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.set_format) for more info",
"I don't think centering the discussion on what ViT expects is good, as the vision Transformers model are still in an experimental stage and we can adapt those depending on what you do here :-).\r\n\r\nIMO, the discussion should revolve on what a user will want to do with a vision dataset, and they will want to:\r\n- lazily decode their images\r\n- maybe apply data augmentation (for the training set)\r\n- resize to a fixed shape for batching\r\n\r\nThe libraries that provide step 2 and 3 either use PIL (thinking torchvision) or cv2 (thinking albumentations). NumPy does not have any function to resize an image or do basic data augmentation (like a rotate) so I think it shouldn't be the default format for an image dataset, PIL or cv2 (in an ideal world with the ability to switch between the two depending on what the users prefer) would be better.\r\n\r\nSide note: I will work on the vision integration in Transformers with Niels next month so please keep me in the loop for those awesome new vision features!",
"@sgugger I completely agree with you, especially after trying to convert the `run_image_classification` script from Transformers to use this feature. The current API doesn't seem intuitive there due to the torchvision transforms, which, as you say, prefer PIL over NumPy arrays. \r\n\r\nSo the default format would return `Image` (PIL) / `np.ndarray` (cv2) and `set_format(numpy/tf/pt)` would return image tensors if I understand you correctly. IMO this makes a lot more sense (and flexibility) than the current API.",
"Also, one additional library worth mentioning here is AugLy which supports image file paths and `PIL.Image.Image` objects.",
"That's so nice !\r\n\r\nAlso I couldn't help myself so I've played with it already ^^\r\nI was agreeably surprised that with minor additions I managed to even allow this, which I find very satisfactory:\r\n```python\r\nimport PIL.Image\r\nfrom datasets import Dataset\r\n\r\npath = \"docs/source/imgs/datasets_logo_name.jpg\"\r\n\r\ndataset = Dataset.from_dict({\"img\": [PIL.Image.open(path)]})\r\nprint(dataset.features)\r\n# {'img': Image(id=None)}\r\nprint(dataset[0][\"img\"])\r\n# <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=1200x300 at 0x129DE4AC8>\r\n```\r\n\r\nLet me know if that's a behavior you'd also like to see \r\n\r\nEDIT: just pushed my changes on a branch, you can see the diff [here](https://github.com/mariosasko/datasets-1/compare/add-image-feature...huggingface:image-type-inference) if you want",
"Thanks, @lhoestq! I like your change. Very elegant indeed.\r\n\r\nP.S. I have to write a big comment that explains all the changes/things left to consider. Will do that in the next few days!",
"I'm marking this PR as ready for review.\r\n\r\nThanks to @sgugger's comment, the API is much more flexible now as it decodes images (lazily) as `PIL.Image.Image` objects and supports transforms directly on them.\r\n\r\nAlso, we no longer return paths explicitly (previously, we would return `{\"path\": image_path, \"image\": pil_image}`) for the following reasons:\r\n* what to return when reading an image from an URL or a NumPy array. We could set `path` to `None` in these situations, but IMO we should avoid redundant information.\r\n* returning a dict doesn't match nicely with the requirement of supporting image modifications - what to do if the user modifies both the image path and the image\r\n\r\n(Btw, for the images stored locally, you can access their paths with `dset[idx][\"image\"].filename`, or by avoiding decoding with `paths = [ex[\"path\"] for ex in dset]`. @lhoestq @albertvillanova WDYT about having an option to skip decoding for complex features, e. g. `Audio(decode=False)`? 
This way, the user can easily access the underlying data.)\r\n\r\nExamples of what you can do:\r\n```python\r\n# load local images\r\ndset = Dataset.from_dict(\"image\": [local_image_path], features=Features({\"images\": Image()}))\r\n# load remote images (we got this for free by adding support for streaming)\r\ndset = Dataset.from_dict(\"image\": [image_url], features=Features({\"images\": Image()}))\r\n# from np.ndarray\r\ndset = Dataset.from_dict({\"image\": [np.array(...)]}, features=Features({\"images\": Image()}))\r\n# cast column\r\ndset = Dataset.from_dict({\"image\": [local_image_path]})\r\ndset.cast_column(\"image\", Image())\r\n\r\n# automatic type inference\r\ndset = Dataset.from_dict({\"image\": [PIL.Image.open(local_image_path)]})\r\n\r\n# transforms\r\ndef img_transform(example):\r\n ...\r\n example[\"image\"] = transformed_pil_image_or_np_ndarray\r\n return example\r\ndset.map(img_trnasform)\r\n\r\n# transform that adds a new column with images (automatic inference of the feature type)\r\ndset.map(lambda ex: {\"image_resized\": ex[\"image\"].resize((100, 100))})\r\nprint(dset.features[\"image_resized\"]) # will print Image()\r\n```\r\n\r\nSome more cool features:\r\n* We store the image filename (`pil_image.filename`) whenever possible to avoid costly conversion to bytes\r\n* if possible, we use native compression when encoding images. Otherwise, we fall back to the lossless PNG format (e.g. after image ops or when storing NumPy arrays)\r\n\r\nHints to make reviewing easier:\r\n* feel free to ignore the extension type part because it's related to PyArrow internals.\r\n* also, let me know if we are too strict/ too flexible in terms of types the Image feature can encode/decode. Hints:\r\n * `encode_example` handles encoding during dataset generation (you can think of it as `yield key, features.encode_example(example)`)\r\n * `objects_to_list_of_image_dicts` handles encoding of returned examples in `map`\r\n\r\nP.S. 
I'll fork the PR branch and start adding the Image feature to the existing image datasets (will also update the `ImageClassification` template while doing that).",
"> WDYT about having an option to skip decoding for complex features, e. g. Audio(decode=False)?\r\n\r\nYes definitely, also I think it could be useful for the dataset viewer to not decode the data but instead return either the bytes or the (possibly chained) URL. cc @severo ",
"We want to merge this today/tomorrow, so I'd really appreciate your reviews @sgugger @nateraw.\r\n\r\nAlso, you can test this feature on the existing image datasets (MNIST, beans, food101, ...) by installing `datasets` from the PR branch:\r\n```\r\npip install git+https://github.com/huggingface/datasets.git@adapt-image-datasets\r\n```\r\n",
"Thanks for the review @nateraw!\r\n\r\n1. This is a copy of your notebook with the fixed map call: https://colab.research.google.com/gist/mariosasko/e351a717682a9392ca03908e65a2600e/image-feature-demo.ipynb\r\n (Sorry for misleading you with the map call in my un-updated notebook)\r\n Also, we can avoid this cast by trying to infer the type of the column (`\"pixel_values\"`) returned by the image feature extractor (we are already doing something similar for the columns with names: `\"attention_mask\"`, `\"input_ids\"`, ...). I plan to add this QOL improvement soon. \r\n2. It should work OK even without updating Pillow and PyArrow (these two libraries are pre-installed in Colab, so updating them requires a restart of the runtime). \r\n > I noticed an error that I'm guessing you ran into when I tried using the older version\r\n\r\n Do you recall which type of error it was because everything works fine on my side if I run the notebooks with the lowest supported version of Pillow (`6.2.1`)?",
"Thanks for playing with it @nateraw and for sharing your notebook, this is useful :)\r\n\r\nI think this is ready now, congrats @mariosasko !",
"Love this feature and hope to release soon!"
] |
1,035,462,136
| 3,162
|
`datasets-cli test` should work with datasets without scripts
|
open
| 2021-10-25T18:52:30
| 2021-11-25T16:04:29
| null |
https://github.com/huggingface/datasets/issues/3162
| null |
sashavor
| false
|
[
"> It would be really useful to be able to run `datasets-cli test`for datasets that don't have scripts attached to them (whether the datasets are private or not).\r\n> \r\n> I wasn't able to run the script for a private test dataset that I had created on the hub (https://huggingface.co/datasets/huggingface/DataMeasurementsTest/tree/main) -- although @lhoestq came to save the day!\r\n\r\nwhy don't you try to share that info with people, so you can also save some days.",
"Hi ! You can run the command if you download the repository\r\n```\r\ngit clone https://huggingface.co/datasets/huggingface/DataMeasurementsTest\r\n```\r\nand run the command\r\n```\r\ndatasets-cli test DataMeasurementsTest/DataMeasurementsTest.py\r\n```\r\n\r\n(though on my side it doesn't manage to download the data since the dataset is private ^^)",
"> Hi ! You can run the command if you download the repository\r\n> \r\n> ```\r\n> git clone https://huggingface.co/datasets/huggingface/DataMeasurementsTest\r\n> ```\r\n> \r\n> and run the command\r\n> \r\n> ```\r\n> datasets-cli test DataMeasurementsTest/DataMeasurementsTest.py\r\n> ```\r\n> \r\n> (though on my side it doesn't manage to download the data since the dataset is private ^^)\r\n\r\nHi! Thanks for the info. \r\ngit cannot find the repository. Do you know if they have depreciated these tests and created a new one?",
"I think it's become private, but feel free to try with any other dataset like `lhoestq/test` for example at `https://huggingface.co/datasets/lhoestq/test`",
"> I think it's become private, but feel free to try with any other dataset like `lhoestq/test` for example at `https://huggingface.co/datasets/lhoestq/test`\r\n\r\nyour example repo and this page `https://huggingface.co/docs/datasets/add_dataset.html` helped me to solve.. thanks a lot"
] |
1,035,444,292
| 3,161
|
Add riddle_sense dataset
|
closed
| 2021-10-25T18:30:56
| 2021-11-04T14:01:15
| 2021-11-04T14:01:15
|
https://github.com/huggingface/datasets/pull/3161
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3161",
"html_url": "https://github.com/huggingface/datasets/pull/3161",
"diff_url": "https://github.com/huggingface/datasets/pull/3161.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3161.patch",
"merged_at": "2021-11-04T14:01:14"
}
|
ziyiwu9494
| true
|
[
"@lhoestq \r\nI address all the comments, I think. Thanks! \r\n",
"The five test fails are unrelated to this PR and fixed on master so we can ignore them"
] |
1,035,274,640
| 3,160
|
Better error msg if `len(predictions)` doesn't match `len(references)` in metrics
|
closed
| 2021-10-25T15:25:05
| 2021-11-05T11:44:59
| 2021-11-05T09:31:02
|
https://github.com/huggingface/datasets/pull/3160
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3160",
"html_url": "https://github.com/huggingface/datasets/pull/3160",
"diff_url": "https://github.com/huggingface/datasets/pull/3160.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3160.patch",
"merged_at": "2021-11-05T09:31:02"
}
|
mariosasko
| true
|
[
"Can't test this now but it may be a good improvement indeed.",
"I added a function, but it only works with the `list` type. For arrays/tensors, we delegate formatting to the frameworks. "
] |
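The kind of validation this PR describes, checking that predictions and references line up before computing a metric and reporting both lengths in the error, can be sketched as follows (a hypothetical helper, not the exact code merged):

```python
def check_matching_lengths(predictions, references):
    """Raise a descriptive error if the two inputs cannot form aligned pairs."""
    if len(predictions) != len(references):
        raise ValueError(
            f"Mismatch in the number of predictions ({len(predictions)}) "
            f"and references ({len(references)})"
        )

check_matching_lengths(["a", "b"], ["x", "y"])  # passes silently
```

Including both counts in the message makes the off-by-one or transposed-input mistakes much faster to spot than a generic failure deep inside the metric.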
1,035,174,560
| 3,159
|
Make inspect.get_dataset_config_names always return a non-empty list
|
closed
| 2021-10-25T13:59:43
| 2021-10-29T13:14:37
| 2021-10-28T05:44:49
|
https://github.com/huggingface/datasets/pull/3159
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3159",
"html_url": "https://github.com/huggingface/datasets/pull/3159",
"diff_url": "https://github.com/huggingface/datasets/pull/3159.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3159.patch",
"merged_at": "2021-10-28T05:44:49"
}
|
albertvillanova
| true
|
[
"This PR is already working (although not very beautiful; see below): the idea was to have the `DatasetModule.builder_kwargs` accessible from the `builder_cls`, so that this can generate the default builder config (at the class level, without requiring the builder to be instantiated).\r\n\r\nI have a plan for a follow-up refactoring (same functionality, better implementation, much nicer), but I think we could already merge this, so that @severo can test it in the datasets previewer and report any potential issues.",
"Yes @lhoestq you are completely right. Indeed I was exclusively using `builder_cls.kwargs` to get the community dataset `name` (nothing else): \"lhoestq___demo1\"\r\n\r\nSee et: https://github.com/huggingface/datasets/pull/3159/files#diff-f933ce41f71c6c0d1ce658e27de62cbe0b45d777e9e68056dd012ac3eb9324f7R413-R415\r\n\r\nIn your example, the `name` I was getting from `builder_cls.kwargs` was:\r\n```python\r\n{\"name\": \"lhoestq___demo1\",...}\r\n```\r\n\r\nI'm going to refactor all the approach... as I only need the name for this specific case ;)",
"I think this makes more sense now, @lhoestq @severo 😅 ",
"It works well, thanks!"
] |
1,035,158,070
| 3,158
|
Fix string encoding for Value type
|
closed
| 2021-10-25T13:44:13
| 2021-10-25T14:12:06
| 2021-10-25T14:12:05
|
https://github.com/huggingface/datasets/pull/3158
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3158",
"html_url": "https://github.com/huggingface/datasets/pull/3158",
"diff_url": "https://github.com/huggingface/datasets/pull/3158.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3158.patch",
"merged_at": "2021-10-25T14:12:05"
}
|
lhoestq
| true
|
[
"That was fast! \r\n"
] |
1,034,775,165
| 3,157
|
Fixed: duplicate parameter and missing parameter in docstring
|
closed
| 2021-10-25T07:26:00
| 2021-10-25T14:02:19
| 2021-10-25T14:02:19
|
https://github.com/huggingface/datasets/pull/3157
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3157",
"html_url": "https://github.com/huggingface/datasets/pull/3157",
"diff_url": "https://github.com/huggingface/datasets/pull/3157.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3157.patch",
"merged_at": "2021-10-25T14:02:18"
}
|
PanQiWei
| true
|
[] |
1,034,468,757
| 3,155
|
Illegal instruction (core dumped) at datasets import
|
closed
| 2021-10-24T17:21:36
| 2021-11-18T19:07:04
| 2021-11-18T19:07:03
|
https://github.com/huggingface/datasets/issues/3155
| null |
hacobe
| false
|
[
"It seems to be an issue with how conda-forge is building the binaries. It works on some machines, but not a machine with AMD Opteron 8384 processors."
] |
1,034,361,806
| 3,154
|
Sacrebleu unexpected behaviour/requirement for data format
|
closed
| 2021-10-24T08:55:33
| 2021-10-31T09:08:32
| 2021-10-31T09:08:31
|
https://github.com/huggingface/datasets/issues/3154
| null |
BramVanroy
| false
|
[
"Hi @BramVanroy!\r\n\r\nGood question. This project relies on PyArrow (tables) to store data too big to fit in RAM. In the case of metrics, this means that the number of predictions and references has to match to form a table.\r\n\r\nThat's why your example throws an error even though it matches the schema:\r\n```python\r\nrefs = [\r\n ['The dog bit the man.', 'It was not unexpected.', 'The man bit him first.'],\r\n ['The dog had bit the man.', 'No one was surprised.', 'The man had bitten the dog.'],\r\n] # len(refs) = 2\r\n\r\nhyps = ['The dog bit the man.', \"It wasn't surprising.\", 'The man had just bitten him.'] # len(hyps) = 3\r\n```\r\n\r\nInstead, it should be:\r\n```python\r\nrefs = [\r\n ['The dog bit the man.', 'The dog had bit the man.'],\r\n ['It was not unexpected.', 'No one was surprised.'],\r\n ['The man bit him first.', 'The man had bitten the dog.'], \r\n] # len(refs) = 3\r\n\r\nhyps = ['The dog bit the man.', \"It wasn't surprising.\", 'The man had just bitten him.'] # len(hyps) = 3\r\n```\r\n\r\nHowever, `sacreblue` works with the format that's described in your example, hence this part:\r\nhttps://github.com/huggingface/datasets/blob/87c71b9c29a40958973004910f97e4892559dfed/metrics/sacrebleu/sacrebleu.py#L94-L99\r\n\r\nHope you get an idea!",
"Thanks, that makes sense. It is a bit unfortunate because it may be confusing to users since the input format is suddenly different than what they may expect from the underlying library/metric. But it is understandable due to how `datasets` works!"
] |
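The format conversion discussed in this thread, from "one list of references per hypothesis" to sacrebleu's "one list per reference position", is a simple transpose. A sketch of that transformation (illustrative, mirroring the check described in the thread that every hypothesis must have the same number of references):

```python
def transpose_references(references):
    """[[ref1a, ref1b], [ref2a, ref2b], ...] ->
    [[ref1a, ref2a, ...], [ref1b, ref2b, ...]]"""
    if len({len(refs) for refs in references}) != 1:
        raise ValueError("Sacrebleu requires the same number of references for each prediction")
    return [list(position) for position in zip(*references)]

refs_per_hyp = [
    ["The dog bit the man.", "The dog had bit the man."],
    ["It was not unexpected.", "No one was surprised."],
    ["The man bit him first.", "The man had bitten the dog."],
]
print(transpose_references(refs_per_hyp))
```

After the transpose, the outer list length equals the number of references per hypothesis, and each inner list lines up index-for-index with the hypotheses, which is the shape sacrebleu expects.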
1,034,179,198
| 3,153
|
Add TER (as implemented in sacrebleu)
|
closed
| 2021-10-23T14:26:45
| 2021-11-02T11:04:11
| 2021-11-02T11:04:11
|
https://github.com/huggingface/datasets/pull/3153
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3153",
"html_url": "https://github.com/huggingface/datasets/pull/3153",
"diff_url": "https://github.com/huggingface/datasets/pull/3153.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3153.patch",
"merged_at": "2021-11-02T11:04:11"
}
|
BramVanroy
| true
|
[
"The problem appears to stem from the omission of the lines that you mentioned. If you add them back and try examples from [this](https://huggingface.co/docs/datasets/using_metrics.html) tutorial (sacrebleu metric example) the code you implemented works fine.\r\n\r\nI think the purpose of these lines is follows:\r\n\r\n1. Sacrebleu metrics confusingly expect a nested list of strings when you have just one reference for each hypothesis (i.e. `[[\"example1\", \"example2\", \"example3]]`), while for cases with more than one reference a _nested list of lists of strings_ (i.e. `[[\"ref1a\", \"ref1b\"], [\"ref2a\", \"ref2b\"], [\"ref3a\", \"ref3b\"]]`) is expected instead. So `transformed_references` line outputs the required single reference format for sacrebleu's ter implementation which you can't pass directly to `compute`.\r\n2. I'm assuming that an additional check is also related to that confusing format with one/many references, because it's really difficult to tell what exactly you're doing wrong if you're not aware of that issue."
] |
1,034,039,379
| 3,152
|
Fix some typos in the documentation
|
closed
| 2021-10-23T01:38:35
| 2021-10-25T14:27:36
| 2021-10-25T14:03:48
|
https://github.com/huggingface/datasets/pull/3152
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3152",
"html_url": "https://github.com/huggingface/datasets/pull/3152",
"diff_url": "https://github.com/huggingface/datasets/pull/3152.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3152.patch",
"merged_at": "2021-10-25T14:03:48"
}
|
h4iku
| true
|
[] |
1,033,890,501
| 3,151
|
Re-add faiss to windows testing suite
|
closed
| 2021-10-22T19:34:29
| 2021-11-02T10:47:34
| 2021-11-02T10:06:03
|
https://github.com/huggingface/datasets/pull/3151
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3151",
"html_url": "https://github.com/huggingface/datasets/pull/3151",
"diff_url": "https://github.com/huggingface/datasets/pull/3151.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3151.patch",
"merged_at": "2021-11-02T10:06:03"
}
|
BramVanroy
| true
|
[] |
1,033,831,530
| 3,150
|
Faiss _is_ available on Windows
|
closed
| 2021-10-22T18:07:16
| 2021-11-02T10:06:03
| 2021-11-02T10:06:03
|
https://github.com/huggingface/datasets/issues/3150
| null |
BramVanroy
| false
|
[
"Sure, feel free to open a PR."
] |
1,033,747,625
| 3,149
|
Add CMU Hinglish DoG Dataset for MT
|
closed
| 2021-10-22T16:17:25
| 2021-11-15T11:36:42
| 2021-11-15T10:27:45
|
https://github.com/huggingface/datasets/pull/3149
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3149",
"html_url": "https://github.com/huggingface/datasets/pull/3149",
"diff_url": "https://github.com/huggingface/datasets/pull/3149.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3149.patch",
"merged_at": "2021-11-15T10:27:45"
}
|
Ishan-Kumar2
| true
|
[
"Hi @lhoestq, thanks a lot for the help. I have moved the part as suggested. \r\nAlthough still while running the dummy data script, I face this issue\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/ishan/anaconda3/bin/datasets-cli\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/home/ishan/anaconda3/lib/python3.8/site-packages/datasets/commands/datasets_cli.py\", line 33, in main\r\n service.run()\r\n File \"/home/ishan/anaconda3/lib/python3.8/site-packages/datasets/commands/dummy_data.py\", line 318, in run\r\n self._autogenerate_dummy_data(\r\n File \"/home/ishan/anaconda3/lib/python3.8/site-packages/datasets/commands/dummy_data.py\", line 363, in _autogenerate_dummy_data\r\n dataset_builder._prepare_split(split_generator)\r\n File \"/home/ishan/anaconda3/lib/python3.8/site-packages/datasets/builder.py\", line 1103, in _prepare_split\r\n example = self.info.features.encode_example(record)\r\n File \"/home/ishan/anaconda3/lib/python3.8/site-packages/datasets/features/features.py\", line 981, in encode_example\r\n return encode_nested_example(self, example)\r\n File \"/home/ishan/anaconda3/lib/python3.8/site-packages/datasets/features/features.py\", line 775, in encode_nested_example\r\n return {\r\n File \"/home/ishan/anaconda3/lib/python3.8/site-packages/datasets/features/features.py\", line 775, in <dictcomp>\r\n return {\r\n File \"/home/ishan/anaconda3/lib/python3.8/site-packages/datasets/utils/py_utils.py\", line 99, in zip_dict\r\n yield key, tuple(d[key] for d in dicts)\r\n File \"/home/ishan/anaconda3/lib/python3.8/site-packages/datasets/utils/py_utils.py\", line 99, in <genexpr>\r\n yield key, tuple(d[key] for d in dicts)\r\nKeyError: 'status'\r\n```\r\nThis KeyError is at times different from 'status' also.\r\nwhen I run \r\n```\r\ndatasets-cli dummy_data datasets/cmu_hinglish_dog --auto_generate --json_field='history'\r\n```\r\nI have tried removing unnecessary feature type definition, but that didn't help.
Please let me know if I am missing something, thanks!",
"The CI fail is unrelated to this PR and fixed on master. Merging !"
] |
1,033,685,208
| 3,148
|
Streaming with num_workers != 0
|
closed
| 2021-10-22T15:07:17
| 2022-07-04T12:14:58
| 2022-07-04T12:14:58
|
https://github.com/huggingface/datasets/issues/3148
| null |
justheuristic
| false
|
[
"I can confirm that I was able to reproduce the bug. This seems odd given that #3423 reports duplicate data retrieval when `num_workers` and `streaming` are used together, which is obviously different from what is reported here. ",
"Any update? A possible solution is to have multiple arrow files as shards, and handle them like what webdatasets does.\r\n\r\n\r\nPytorch's new dataset RFC is supporting sharding now, which may helps avoid duplicate data under streaming mode. (https://github.com/pytorch/pytorch/blob/master/torch/utils/data/datapipes/iter/grouping.py#L13)\r\n",
"Hi ! Thanks for the insights :) Note that in streaming mode there're usually no arrow files. The data are streamed from TAR, ZIP, text, etc. files directly from the web. Though for sharded datasets we can definitely adopt a similar strategy !",
"fixed by #4375 "
] |
1,033,607,659
| 3,147
|
Fix CLI test to ignore verfications when saving infos
|
closed
| 2021-10-22T13:52:46
| 2021-10-27T08:01:50
| 2021-10-27T08:01:49
|
https://github.com/huggingface/datasets/pull/3147
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3147",
"html_url": "https://github.com/huggingface/datasets/pull/3147",
"diff_url": "https://github.com/huggingface/datasets/pull/3147.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3147.patch",
"merged_at": "2021-10-27T08:01:49"
}
|
albertvillanova
| true
|
[] |
1,033,605,947
| 3,146
|
CLI test command throws NonMatchingSplitsSizesError when saving infos
|
closed
| 2021-10-22T13:50:53
| 2021-10-27T08:01:49
| 2021-10-27T08:01:49
|
https://github.com/huggingface/datasets/issues/3146
| null |
albertvillanova
| false
|
[] |
1,033,580,009
| 3,145
|
[when Image type will exist] provide a way to get the data as binary + filename
|
closed
| 2021-10-22T13:23:49
| 2021-12-22T11:05:37
| 2021-12-22T11:05:36
|
https://github.com/huggingface/datasets/issues/3145
| null |
severo
| false
|
[
"@severo, maybe somehow related to this PR ?\r\n- #3129",
"@severo I'll keep that in mind.\r\n\r\nYou can track progress on the Image feature in #3163 (still in the early stage). ",
"Hi ! As discussed with @severo offline it looks like the dataset viewer already supports reading PIL images, so maybe the dataset viewer doesn't need to disable decoding after all",
"Fixed with https://github.com/huggingface/datasets/pull/3163"
] |
1,033,573,760
| 3,144
|
Infer the features if missing
|
closed
| 2021-10-22T13:17:33
| 2022-09-08T08:23:10
| 2022-09-08T08:23:10
|
https://github.com/huggingface/datasets/issues/3144
| null |
severo
| false
|
[
"Done by @lhoestq here: https://github.com/huggingface/datasets/pull/4500 (https://github.com/huggingface/datasets/pull/4500/files#diff-02930e1d966f4b41f9ddf15d961f16f5466d9bee583138657018c7329f71aa43R1255 in particular)\r\n"
] |
1,033,569,655
| 3,143
|
Provide a way to check if the features (in info) match with the data of a split
|
open
| 2021-10-22T13:13:36
| 2021-10-22T13:17:56
| null |
https://github.com/huggingface/datasets/issues/3143
| null |
severo
| false
|
[
"Related: #3144 "
] |
1,033,566,034
| 3,142
|
Provide a way to write a streamed dataset to the disk
|
open
| 2021-10-22T13:09:53
| 2024-01-12T07:26:43
| null |
https://github.com/huggingface/datasets/issues/3142
| null |
severo
| false
|
[
"Yes, I agree this feature is much needed. We could do something similar to what TF does (https://www.tensorflow.org/api_docs/python/tf/data/Dataset#cache). \r\n\r\nIdeally, if the entire streamed dataset is consumed/cached, the generated cache should be reusable for the Arrow dataset.",
"@mariosasko Hi big brother,any update on this? It's 2024 with large streamed dataset loading consume too much time(exp. 2day..), really need this feature for what TF does"
] |
1,033,555,910
| 3,141
|
Fix caching bugs
|
closed
| 2021-10-22T12:59:25
| 2021-10-22T20:52:08
| 2021-10-22T13:47:05
|
https://github.com/huggingface/datasets/pull/3141
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3141",
"html_url": "https://github.com/huggingface/datasets/pull/3141",
"diff_url": "https://github.com/huggingface/datasets/pull/3141.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3141.patch",
"merged_at": "2021-10-22T13:47:04"
}
|
mariosasko
| true
|
[] |
1,033,524,079
| 3,139
|
Fix file/directory deletion on Windows
|
open
| 2021-10-22T12:22:08
| 2021-10-22T12:22:08
| null |
https://github.com/huggingface/datasets/issues/3139
| null |
mariosasko
| false
|
[] |
1,033,379,997
| 3,138
|
More fine-grained taxonomy of error types
|
open
| 2021-10-22T09:35:29
| 2022-09-20T13:04:42
| null |
https://github.com/huggingface/datasets/issues/3138
| null |
severo
| false
|
[
"related: #4995\r\n"
] |
1,033,363,652
| 3,137
|
Fix numpy deprecation warning for ragged tensors
|
closed
| 2021-10-22T09:17:46
| 2021-10-22T16:04:15
| 2021-10-22T16:04:14
|
https://github.com/huggingface/datasets/pull/3137
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3137",
"html_url": "https://github.com/huggingface/datasets/pull/3137",
"diff_url": "https://github.com/huggingface/datasets/pull/3137.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3137.patch",
"merged_at": "2021-10-22T16:04:14"
}
|
lhoestq
| true
|
[
"This'll be a really helpful fix, thank you!"
] |
1,033,360,396
| 3,136
|
Fix script of Arabic Billion Words dataset to return all data
|
closed
| 2021-10-22T09:14:24
| 2021-10-22T13:28:41
| 2021-10-22T13:28:40
|
https://github.com/huggingface/datasets/pull/3136
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3136",
"html_url": "https://github.com/huggingface/datasets/pull/3136",
"diff_url": "https://github.com/huggingface/datasets/pull/3136.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3136.patch",
"merged_at": "2021-10-22T13:28:39"
}
|
albertvillanova
| true
|
[] |
1,033,294,299
| 3,135
|
Make inspect.get_dataset_config_names always return a non-empty list of configs
|
closed
| 2021-10-22T08:02:50
| 2021-10-28T05:44:49
| 2021-10-28T05:44:49
|
https://github.com/huggingface/datasets/issues/3135
| null |
severo
| false
|
[
"Hi @severo, I guess this issue requests not only to be able to access the configuration name (by using `inspect.get_dataset_config_names`), but the configuration itself as well (I mean you use the name to get the configuration afterwards, maybe using `builder_cls.builder_configs`), is this right?",
"Yes, maybe the issue could be reformulated. As a user, I want to avoid having to manage special cases:\r\n- I want to be able to get the names of a dataset's configs, and use them in the rest of the API (get the data, get the split names, etc).\r\n- I don't want to have to manage datasets with named configs (`glue`) differently from datasets without named configs (`acronym_identification`, `Check/region_1`)"
] |
1,033,251,755
| 3,134
|
Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/rouge/rouge.py
|
closed
| 2021-10-22T07:07:52
| 2023-09-14T01:19:45
| 2022-01-19T14:02:31
|
https://github.com/huggingface/datasets/issues/3134
| null |
yanan1116
| false
|
[
"Hi,\r\n\r\nDid you try to run the code multiple times (GitHub URLs can be down sometimes for various reasons)? I can access `https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/rouge/rouge.py`, so this code is working without an error on my side. \r\n\r\nAdditionally, can you please run the `datasets-cli env` command because it seems to me that you are using the `datasets` version different from `1.12.1`?",
"Same issue when running `metric = datasets.load_metric(\"accuracy\")`.\r\nError info is:\r\n```\r\nmetric = datasets.load_metric(\"accuracy\")\r\nTraceback (most recent call last):\r\n\r\n File \"<ipython-input-2-d25db38b26c5>\", line 1, in <module>\r\n metric = datasets.load_metric(\"accuracy\")\r\n\r\n File \"D:\\anaconda3\\lib\\site-packages\\datasets\\load.py\", line 610, in load_metric\r\n module_path, _ = prepare_module(\r\n\r\n File \"D:\\anaconda3\\lib\\site-packages\\datasets\\load.py\", line 330, in prepare_module\r\n local_path = cached_path(file_path, download_config=download_config)\r\n\r\n File \"D:\\anaconda3\\lib\\site-packages\\datasets\\utils\\file_utils.py\", line 288, in cached_path\r\n output_path = get_from_cache(\r\n\r\n File \"D:\\anaconda3\\lib\\site-packages\\datasets\\utils\\file_utils.py\", line 605, in get_from_cache\r\n raise ConnectionError(\"Couldn't reach {}\".format(url))\r\nConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/accuracy/accuracy.py\r\n```\r\n\r\n\r\n My `datasets-cli env` result is as follows:\r\n- `datasets` version: 1.11.0\r\n- Platform: Windows-10-10.0.19041-SP0\r\n- Python version: 3.8.8\r\n- PyArrow version: 6.0.0\r\n\r\n@yananchen1989 did you find a way to solve this?",
"It seems to be able to solve this issue by adding the equivalent `accuracy.py` locally. \r\nchange `metric = datasets.load_metric(\"accuracy\")` to `metric = datasets.load_metric(path = \"./accuracy.py\")`.\r\nCopy `accuracy.py` from browser at [accuracy.py](https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/accuracy/accuracy.py)",
"> It seems to be able to solve this issue by adding the equivalent `accuracy.py` locally. change `metric = datasets.load_metric(\"accuracy\")` to `metric = datasets.load_metric(path = \"./accuracy.py\")`. Copy `accuracy.py` from browser at [accuracy.py](https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/accuracy/accuracy.py)\r\n\r\nThis is really a good way"
] |
1,032,511,710
| 3,133
|
Support Audio feature in streaming mode
|
closed
| 2021-10-21T13:37:57
| 2021-11-12T14:13:05
| 2021-11-12T14:13:04
|
https://github.com/huggingface/datasets/pull/3133
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3133",
"html_url": "https://github.com/huggingface/datasets/pull/3133",
"diff_url": "https://github.com/huggingface/datasets/pull/3133.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3133.patch",
"merged_at": "2021-11-12T14:13:04"
}
|
albertvillanova
| true
|
[] |
1,032,505,430
| 3,132
|
Support Audio feature in streaming mode
|
closed
| 2021-10-21T13:32:18
| 2021-11-12T14:13:04
| 2021-11-12T14:13:04
|
https://github.com/huggingface/datasets/issues/3132
| null |
albertvillanova
| false
|
[] |
1,032,309,865
| 3,131
|
Add ADE20k
|
closed
| 2021-10-21T10:13:09
| 2023-01-27T14:40:20
| 2023-01-27T14:40:20
|
https://github.com/huggingface/datasets/issues/3131
| null |
NielsRogge
| false
|
[
"I think we can close this issue since PR [#3607](https://github.com/huggingface/datasets/pull/3607) solves this."
] |
1,032,299,417
| 3,130
|
Create SECURITY.md
|
closed
| 2021-10-21T10:03:03
| 2021-10-21T14:33:28
| 2021-10-21T14:31:50
|
https://github.com/huggingface/datasets/pull/3130
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3130",
"html_url": "https://github.com/huggingface/datasets/pull/3130",
"diff_url": "https://github.com/huggingface/datasets/pull/3130.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3130.patch",
"merged_at": null
}
|
zidingz
| true
|
[
"Hi @zidingz, thanks for your contribution.\r\n\r\nHowever I am closing it because it is a duplicate of a previous PR:\r\n - #2958\r\n\r\n"
] |
1,032,234,167
| 3,129
|
Support Audio feature for TAR archives in sequential access
|
closed
| 2021-10-21T08:56:51
| 2021-11-17T17:42:08
| 2021-11-17T17:42:07
|
https://github.com/huggingface/datasets/pull/3129
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3129",
"html_url": "https://github.com/huggingface/datasets/pull/3129",
"diff_url": "https://github.com/huggingface/datasets/pull/3129.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3129.patch",
"merged_at": "2021-11-17T17:42:07"
}
|
albertvillanova
| true
|
[
"Also do you think we can adapt `cast_column` to keep the same value for this new parameter when the user only wants to change the sampling rate ?",
"Thanks for your comments, @lhoestq, I will address them afterwards.\r\n\r\nBut, I think it is more important/urgent first address the current blocking non-passing test: https://github.com/huggingface/datasets/runs/4143579241?check_suite_focus=true\r\n- I am thinking of a way of solving it, but if you have any hint, it will be more than welcome! 😅 \r\n\r\nBasically:\r\n```\r\n{'audio': '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_dataset_with_audio_featur1/data/test_audio_44100.wav'}\r\n``` \r\nbecomes\r\n```\r\n{'audio': {'bytes': None, 'path': '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_dataset_with_audio_featur1/data/test_audio_44100.wav'}}\r\n```\r\nafter a `map`, which is what was stored in the Arrow file. However we expect it remains invariant after this `map`.",
"@lhoestq, @mariosasko I finally proposed another implementation different from my last one:\r\n- Before: store Audio always a struct<path: string, bytes: binary>, where bytes can be None\r\n- Now, depending on the examples, either store Audio as a struct (as before), or as a string.\r\n\r\nPlease note that the main motivation for this change was the issue mentioned above: https://github.com/huggingface/datasets/pull/3129#issuecomment-964347056\r\n",
"Until here we had the assumption that a Features object always has an associated, deterministic, pyarrow schema. This is useful to ensure that we are able to concatenate two datasets that have the same features for example.\r\n\r\nBy breaking this assumption for the Audio type, how can we ensure that we can concatenate two audio datasets if one has Audio as a struct and the other a string ?",
"Oh I noticed that the Audio feature type has a private attribute `_storage_dtype`, so the assumption still holds, since they are now different feature types depending on the this attribute :)\r\n(i mean different from the python equal operator point of view)",
"I think this PR is ready, @lhoestq, @mariosasko. ",
"Nit: We should also mention the new storage structure in the `Features` docstring [here](https://github.com/huggingface/datasets/blob/b29fb550c31de337b952035a7584147e0f18c0cf/src/datasets/features/features.py#L966) for users to know what type of value to return in their dataset scripts (we also have a link to that docstring in the `ADD_NEW_DATASET` template)."
] |
1,032,201,870
| 3,128
|
Support Audio feature for TAR archives in sequential access
|
closed
| 2021-10-21T08:23:01
| 2021-11-17T17:42:07
| 2021-11-17T17:42:07
|
https://github.com/huggingface/datasets/issues/3128
| null |
albertvillanova
| false
|
[] |
1,032,100,613
| 3,127
|
datasets-cli: convertion of a tfds dataset to a huggingface one.
|
open
| 2021-10-21T06:14:27
| 2021-10-27T11:36:05
| null |
https://github.com/huggingface/datasets/issues/3127
| null |
vitalyshalumov
| false
|
[
"Hi,\r\n\r\nthe MNIST dataset is already available on the Hub. You can use it as follows:\r\n```python\r\nimport datasets\r\ndataset_dict = datasets.load_dataset(\"mnist\")\r\n```\r\n\r\nAs for the conversion of TFDS datasets to HF datasets, we will be working on it in the coming months, so stay tuned."
] |
1,032,093,055
| 3,126
|
"arabic_billion_words" dataset does not create the full dataset
|
closed
| 2021-10-21T06:02:38
| 2021-10-22T13:28:40
| 2021-10-22T13:28:40
|
https://github.com/huggingface/datasets/issues/3126
| null |
vitalyshalumov
| false
|
[
"Thanks for reporting, @vitalyshalumov.\r\n\r\nApparently the script to parse the data has a bug, and does not generate the entire dataset.\r\n\r\nI'm fixing it."
] |
1,032,046,666
| 3,125
|
Add SLR83 to OpenSLR
|
closed
| 2021-10-21T04:26:00
| 2021-10-22T20:10:05
| 2021-10-22T08:30:22
|
https://github.com/huggingface/datasets/pull/3125
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3125",
"html_url": "https://github.com/huggingface/datasets/pull/3125",
"diff_url": "https://github.com/huggingface/datasets/pull/3125.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3125.patch",
"merged_at": "2021-10-22T08:30:22"
}
|
tyrius02
| true
|
[] |
1,031,976,286
| 3,124
|
More efficient nested features encoding
|
closed
| 2021-10-21T01:55:31
| 2021-11-02T15:07:13
| 2021-11-02T11:04:04
|
https://github.com/huggingface/datasets/pull/3124
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3124",
"html_url": "https://github.com/huggingface/datasets/pull/3124",
"diff_url": "https://github.com/huggingface/datasets/pull/3124.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3124.patch",
"merged_at": "2021-11-02T11:04:04"
}
|
eladsegal
| true
|
[
"@lhoestq @albertvillanova @mariosasko\r\nCan you please check this out?",
"Thanks, done!"
] |
1,031,793,207
| 3,123
|
Segmentation fault when loading datasets from file
|
closed
| 2021-10-20T20:16:11
| 2021-11-02T14:57:07
| 2021-11-02T14:57:07
|
https://github.com/huggingface/datasets/issues/3123
| null |
TevenLeScao
| false
|
[
"Hi ! I created an issue on Arrow's JIRA after making a minimum reproducible example\r\n\r\nhttps://issues.apache.org/jira/browse/ARROW-14439\r\n\r\n```python\r\nimport io\r\n\r\nimport pyarrow.json as paj\r\n\r\nbatch = b'{\"a\": [], \"b\": 1}\\n{\"b\": 1}'\r\nblock_size = 12\r\n\r\npaj.read_json(\r\n io.BytesIO(batch), read_options=paj.ReadOptions(block_size=block_size)\r\n)\r\n```\r\n\r\nI don't see a way to workaround this properly now without hurting the performance of the JSON loader significantly though",
"The issue has been fixed in pyarrow 6.0.0, please update pyarrow :)\r\n\r\nThe issue was due to missing fields in the JSON data of type list. Now it's working fine and missing list fields are replaced with empty lists"
] |
1,031,787,509
| 3,122
|
OSError with a custom dataset loading script
|
closed
| 2021-10-20T20:08:39
| 2021-11-23T09:55:38
| 2021-11-23T09:55:38
|
https://github.com/huggingface/datasets/issues/3122
| null |
suzanab
| false
|
[
"Hi,\r\n\r\nthere is a difference in how the `data_dir` is zipped between the `classla/janes_tag` and the `classla/reldi_hr` dataset. After unzipping, for the former, the data files (`*.conllup`) are in the root directory (root -> data files), and for the latter, they are inside the `data` directory (root -> `data` -> data files).\r\n\r\nThis can be fixed by removing the `os.path.join` call in https://huggingface.co/datasets/classla/janes_tag/blob/main/janes_tag.py#L86\r\n\r\nLet me know if this works for you.",
"Hi Mario,\r\n\r\nI had already tried that before, but it didn't work. I have now recreated the `classla/janes_tag` zip file so that it also contains the `data` directory, but I am still getting the same error.",
"Hi,\r\n\r\nI just tried to download the `classla/janes_tag` dataset, and this time the zip file is extracted correctly. However, the script is now throwing the IndexError, probably due to a bug in the `_generate_examples`.\r\n\r\nLet me know if you are still getting the same error.",
"I am still getting the same error.",
"Hi, \r\n\r\ncould you try to download the dataset with a different `cache_dir` like so:\r\n```python\r\nimport datasets\r\ndataset = datasets.load_dataset('classla/janes_tag', split='validation', cache_dir=\"path/to/different/cache/dir\")\r\n```\r\nIf this works, then most likely the cached extracted data is causing issues. This data is stored at `~/.cache/huggingface/datasets/downloads/extracted` and needs to be deleted, and then it should work (you can easily locate the directory with the path given in the `OSError` message). Additionally, I'd suggest you to update `datasets` to the newest version with:\r\n```\r\npip install -U datasets\r\n```",
"Thank you, deleting the `~/.cache/huggingface/datasets/downloads/extracted` directory helped. However, I am still having problems.\r\n\r\nThere was indeed a bug in the script that was throwing an `IndexError`, which I have now corrected (added the condition to skip the lines starting with '# text') and it is working locally, but still throws an error when I try to load the dataset from HuggingFace. I literally copied and pasted the `_generate_examples` function and ran it on the `dev_all.conllup` file, which I even re-downloaded from the repository to be certain that the files are exactly the same. I also deleted everything again just in case, but it didn't help. The code works locally, but throws an `IndexError` when loading from `datasets.`",
"Hi,\r\n\r\nDid some investigation.\r\n\r\nTo fix the dataset script on the Hub, append the following labels to the `names` list of the `upos_tags` field:\r\n```'INTJ NOUN', 'AUX PRON', 'PART ADV', 'PRON ADP', 'INTJ INTJ', 'VERB NOUN', 'NOUN AUX'```.\r\n\r\nThis step is required to avoid an error due to missing labels in the following step which is:\r\n```python\r\nload_dataset(\"classla/janes_tag\", split=\"validation\", download_mode=\"force_redownload\")\r\n```\r\nThis will generate and cache the dataset, so specifying `download_mode` will not be required anymore unless you update the script/data on the Hub.",
"It works now, thank you!"
] |