| url<br>stringlengths 58–61 | repository_url<br>stringclasses 1 value | labels_url<br>stringlengths 72–75 | comments_url<br>stringlengths 67–70 | events_url<br>stringlengths 65–68 | html_url<br>stringlengths 46–51 | id<br>int64 600M–2.05B | node_id<br>stringlengths 18–32 | number<br>int64 2–6.51k | title<br>stringlengths 1–290 | user<br>dict | labels<br>listlengths 0–4 | state<br>stringclasses 2 values | locked<br>bool 1 class | assignee<br>dict | assignees<br>listlengths 0–4 | milestone<br>dict | comments<br>listlengths 0–30 | created_at<br>timestamp[ns, tz=UTC] | updated_at<br>timestamp[ns, tz=UTC] | closed_at<br>timestamp[ns, tz=UTC] | author_association<br>stringclasses 3 values | active_lock_reason<br>float64 | draft<br>float64 0, 1, ⌀ | pull_request<br>dict | body<br>stringlengths 0–228k ⌀ | reactions<br>dict | timeline_url<br>stringlengths 67–70 | performed_via_github_app<br>float64 | state_reason<br>stringclasses 3 values | is_pull_request<br>bool 2 classes |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/3650
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3650/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3650/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3650/events
|
https://github.com/huggingface/datasets/pull/3650
| 1,118,537,429
|
PR_kwDODunzps4xyr2o
| 3,650
|
Allow 'to_json' to run in unordered fashion in order to lower memory footprint
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomasw21",
"id": 24695242,
"login": "thomasw21",
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomasw21"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi @thomasw21, I remember suggesting `imap_unordered` to @lhoestq at that time to speed up `to_json` further but after trying `pool_imap` on multiple datasets (>9GB) , memory utilisation was almost constant and we decided to go ahead with that only. \r\n\r\n1. Did you try this without `gzip`? Because `gzip` feature was introduced recently and I didn't check multi_proc thing with `gzip`. One thing I know is that `gzip` is slow in our implementation than `zip` (it's a WIP #3551) \r\n2. You can try reducing your batch size, this can also help in avoiding OOM errors!",
"Thanks @bhavitvyamalik ! I see. I'm not sure this PR actually fixes things for me either (I ended up reducing the num_proc/batch_size to lower it). It does allow the process to run for longer, but I think the reason why it was waiting is that one of the process crashes .... Unfortunately I was working on a setup with a low RAM/cpu core ratio. I'm actually very surprised that it doesn't change memory utilization, otherwise I don't see the purpose of `imap_unordered` existing. I think it's main purpose are when you have high variance in samples (in terms of bytes), which causes unecessary accumulation in `imap`\r\n 1. Did not try without `gzip`\r\n 2. Yeah or `num_proc`",
"Can you please try without `gzip` to see how it performs? If it works fine then we can improve `gzip` from our side (I'm already working on it)",
"I'll be busy for next few weeks on another project, will do as soon as I have some bandwidth.\r\n",
"Should we close this PR?",
"Yes we can close this PR if considered unneeded."
] | 2022-01-30T13:23:19Z
| 2023-09-25T06:28:51Z
| 2023-09-24T16:45:48Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3650.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3650",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3650.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3650"
}
|
I'm using `to_json(..., num_proc=num_proc, compression='gzip')` with `num_proc>1`. I'm having an issue where things seem to deadlock at some point. Eventually I see OOM. I'm guessing it's an issue where one process starts to take a long time on a specific batch, and so the other processes keep accumulating their results in memory.
In order to flush memory, I propose we optionally use `imap_unordered`. This will prevent one process from blocking the others. The reasoning is that indices are rarely relevant, and if one wants to keep an index, one can still create another column and reconstruct it from there.
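A minimal sketch of the idea, not the actual `to_json` implementation (the batch contents and output file are made up): with `imap`, results come back in input order, so one slow worker makes the finished batches of the other workers pile up in memory, while `imap_unordered` lets each batch be written out as soon as it is ready.
```python
import json
import multiprocessing

def encode_batch(batch):
    # Serialize one batch of rows to JSON Lines.
    return "".join(json.dumps(row) + "\n" for row in batch)

if __name__ == "__main__":
    batches = [[{"idx": i, "value": i * i} for i in range(start, start + 100)]
               for start in range(0, 1000, 100)]
    with multiprocessing.Pool(4) as pool, open("out.jsonl", "w") as f:
        # imap would block on the slowest pending batch; imap_unordered
        # flushes each result as soon as it arrives, at the cost of row order.
        for chunk in pool.imap_unordered(encode_batch, batches):
            f.write(chunk)
```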
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3650/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3650/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2782
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2782/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2782/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2782/events
|
https://github.com/huggingface/datasets/pull/2782
| 964,858,439
|
MDExOlB1bGxSZXF1ZXN0NzA3MjQ5NDE5
| 2,782
|
Fix renaming of corpus_bleu args
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-08-10T11:02:34Z
| 2021-08-10T11:16:07Z
| 2021-08-10T11:16:07Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2782.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2782",
"merged_at": "2021-08-10T11:16:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2782.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2782"
}
|
The latest `sacrebleu` release (v2.0.0) renamed the `sacrebleu.corpus_bleu` args from `(sys_stream, ref_streams)` to `(hypotheses, references)`: https://github.com/mjpost/sacrebleu/pull/152/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15
This PR passes the args without parameter names, so that it is valid for all versions of `sacrebleu`.
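A minimal sketch of the resulting call (the example sentences are hypothetical); because no keyword names are used, it works both before and after the rename:
```python
import sacrebleu

hypotheses = ["the cat sat on the mat"]
references = [["the cat is on the mat"]]  # one stream of references, aligned with the hypotheses

# Positional call: valid for sacrebleu both <2.0.0 and >=2.0.0.
score = sacrebleu.corpus_bleu(hypotheses, references)
print(score.score)
```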
This is a partial hotfix of #2781.
Close #2781.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2782/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2782/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3692
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3692/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3692/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3692/events
|
https://github.com/huggingface/datasets/pull/3692
| 1,128,320,004
|
PR_kwDODunzps4yShiu
| 3,692
|
Update data URL in pubmed dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"- I updated the previous dummy data: I just had to rename the file and its directory\r\n - the dummy data zip contains only a single file: `pubmed22n0001.xml.gz`\r\n\r\nThen I discover it fails: https://app.circleci.com/pipelines/github/huggingface/datasets/9800/workflows/173a4433-8feb-4fc6-ab9e-59762084e3e1/jobs/60437\r\n```\r\nNo such file or directory: '.../dummy_data/pubmed22n0002.xml.gz'\r\n```\r\n- it needs dummy data for all the 1114 files: \r\n `_URLs = [f\"ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/pubmed22n{i:04d}.xml.gz\" for i in range(1, 1115)]`\r\n- this confirms me that it never passed the test: these dummy data files were not present before my PR\r\n- therefore, is it really useful the data test if we just ignore it when it does not pass?\r\n\r\nIn relation with JSON metadata, I was generating the file for `pubmed` (see above) in a GCP instance: after running during ~12h without finishing, I decided to stop the process.",
"Hi ! Yes I remembered we hardcoded an exception for this one:\r\nhttps://github.com/huggingface/datasets/blob/36db39c75179a0a491c69a4491f7ae7e4615e66f/src/datasets/utils/mock_download_manager.py#L174-L176\r\n\r\nThe exception was used to only require one dummy data file, feel free to update it if you want"
] | 2022-02-09T10:06:21Z
| 2022-02-14T14:15:42Z
| 2022-02-14T14:15:41Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3692.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3692",
"merged_at": "2022-02-14T14:15:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3692.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3692"
}
|
Fix #3655.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3692/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3692/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3206
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3206/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3206/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3206/events
|
https://github.com/huggingface/datasets/pull/3206
| 1,044,216,270
|
PR_kwDODunzps4uEZJe
| 3,206
|
[WIP] Allow user-defined hash functions via a registry
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/BramVanroy",
"id": 2779410,
"login": "BramVanroy",
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/BramVanroy"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi @BramVanroy, thanks for your PR.\r\n\r\nThere was a bug in TensorFlow/Keras. We have made a temporary fix in master branch. Please, merge master into your PR branch, so that the CI tests pass.\r\n\r\n```\r\ngit checkout registry\r\ngit fetch upstream master\r\ngit merge upstream/master\r\n```",
"@albertvillanova Done. Although new tests will need to be added. I am looking for some feedback on my initial proposal in this PR. Reviews and ideas welcome!",
"Hi ! Thanks for diving into this :)\r\n\r\nWith this approach you get the right hash when doing `Hasher.hash(nlp)` but if you try to hash an object that has `nlp` as one of its attributes for example you will get different hashes every time.\r\n\r\nThis is because `Hasher.hash` is not recursive itself. Indeed what happens when you try to hash an object is that:\r\n1. it is dumped with our custom `dill` pickler (which is recursive)\r\n2. the bytes of the dump are hashed\r\n\r\nTo fix this we must integrate the custom hashing as a custom pickler dumping instead.\r\n\r\nNote that we're only using the `pickler.dumps` method and not `pickler.loads` since we only use it to get hashes, so it doesn't matter if `loads` doesn't reconstruct the object exactly. What's important it only to capture all the necessary information that defines how the object transforms the data (here `nlp.to_bytes()` determines how the spacy pipeline transforms the text).\r\n\r\nOur pickler already has a registry and you can register new dump functions with:\r\n```python\r\nimport dill\r\nimport spacy\r\nfrom datasets.utils.py_utils import pklregister\r\n\r\n@pklregister(spacy.Language)\r\ndef _save_spacy_language(pickler, nlp):\r\n pickler.save_reduce(...) # I think we can use nlp.to_bytes() here\r\n dill._dill.log.info(...)\r\n```\r\n\r\nYou can find some examples of custom dump functions in `py_utils.py`",
"Ah, darn it. Completely missed that register. Time wasted, unfortunately. \r\n\r\nTo better understand what you mean, I figured I'd try the basis of your snippet and I've noticed quite an annoying side-effect of how the pickle dispatch table seems to work. It explicitly uses an object's [`type()`](https://github.com/python/cpython/blob/87032cfa3dc975d7442fd57dea2c6a56d31c911a/Lib/pickle.py#L557-L558), which makes sense for pickling some (primitive) types it is not ideal for more complex ones, I think. `Hasher.hash` has the same issue as far as I can tell.\r\n\r\nhttps://github.com/huggingface/datasets/blob/d21ce54f2c2782f854f975eb1dc2be6f923b4314/src/datasets/fingerprint.py#L187-L191\r\n\r\nThis is very restrictive, and won't work for subclasses. In the case of spaCy, for instance, we register `Language`, but `nlp` is an instance of `English`, which is a _subclass_ of `Language`. These are different types, and so they will not match in the dispatch table. Maybe this is more general approach to cover such cases? Something like this is a start but too broad, but ideally a hierarchy is constructed and traversed of all classes in the table and the lowest class is selected to ensure that the most specific class function is dispatched.\r\n\r\n```python\r\n def hash(cls, value: Any) -> str:\r\n # Try to match the exact type\r\n if type(value) in cls.dispatch:\r\n return cls.dispatch[type(value)](cls, value)\r\n\r\n # Try to match instance (superclass)\r\n for type_cls, func in cls.dispatch.items():\r\n if isinstance(value, type_cls):\r\n return cls.dispatch[type_cls](cls, value)\r\n\r\n return cls.hash_default(value)\r\n```\r\n\r\nThis does not solve the problem for pickling, though. That is quite unfortunate IMO because that implies that users always have to specify the most specific class, which is not always obvious. (For instance, `spacy.load`'s signature returns `Language`, but as said before a subclass might be returned.)\r\n\r\nSecond, I am trying to understand `save_reduce` but I can find very little documentation about it, only the source code which is quite cryptic. Can you explain it a bit? The required arguments are not very clear to me and there is no docstring.\r\n\r\n```python\r\n def save_reduce(self, func, args, state=None, listitems=None, dictitems=None, obj=None):\r\n```",
"Here is an example illustrating the problem with sub-classes.\r\n\r\n```python\r\nimport spacy\r\n\r\nfrom spacy import Language\r\nfrom spacy.lang.en import English\r\n\r\nfrom datasets.utils.py_utils import Pickler, pklregister\r\n\r\n# Only useful in the registry (matching with `nlp`)\r\n# if you swap it out for very specific `English`\r\n@pklregister(English)\r\ndef hash_spacy_language(pickler, nlp):\r\n pass\r\n\r\n\r\ndef main():\r\n print(Pickler.dispatch)\r\n nlp = spacy.load(\"en_core_web_sm\")\r\n print(f\"NLP type {type(nlp)} in dispatch table? \", type(nlp) in Pickler.dispatch)\r\n\r\n\r\nif __name__ == '__main__':\r\n main()\r\n```",
"Indeed that's not ideal.\r\nMaybe we could integrate all the subclasses directly in `datasets`. That's simple to do but the catch is that if users have new subclasses of `Language` it won't work.\r\n\r\nOtherwise we can see how to make the API simpler for users by allowing subclasses\r\n```python\r\n# if you swap it out for very specific `English`\r\n@pklregister(Language, allow_subclasses=True)\r\ndef hash_spacy_language(pickler, nlp):\r\n pass\r\n```\r\n\r\nHere is an idea how to make this work, let me know what you think:\r\n\r\nWhen `Pickler.dumps` is called, it uses `Pickler.save_global` which is a method that is going to be called recursively on all the objects. We can customize this part, and make it work as we want when it encounters a subclass of `Language`.\r\n\r\nFor example when it encounters a subclass of `Language`, we can dynamically register the hashing function for the subclass (`English` for example) in `Pickler.save_global`, right before calling the actual `dill.Pickler.save_global(self, obj, name=name)`:\r\n```python\r\npklregister(type(obj))(hash_function_registered_for_parent_class)\r\ndill.Pickler.save_global(self, obj, name=name)\r\n```\r\n\r\nIn practice that means we can have an additional dispatch dictionary (similar to `Pickler.dispatch`) to store the hashing functions when `allow_subclasses=True`, and use this dictionary in `Pickler.save_global` to check if we need to use a hashing function registered with `allow_subclasses=True` and get `hash_function_registered_for_parent_class`.",
"If I understood you correctly, I do not think that that is enough because you are only doing this for a type and its direct parent class. You could do this for all superclasses (so traverse all ancestors and find the registered function for the first that is encountered). I can work on that, if you agree. The one thing that I am not sure about is how you want to create the secondary dispatch table. An empty dict as class variable in Pickler? (It doesn't have to be a true dispatcher, I think.)\r\n\r\nI do not think that dynamic registration is the ideal situation (it feels a bit hacky). An alternative would be to subclass Pickle and Dill to make sure that instead of just type() checking in the dispatch table also superclasses are considered. But that is probably overkill.",
"> You could do this for all superclasses (so traverse all ancestors and find the registered function for the first that is encountered)\r\n\r\nThat makes sense indeed !\r\n\r\n> The one thing that I am not sure about is how you want to create the secondary dispatch table. An empty dict as class variable in Pickler? (It doesn't have to be a true dispatcher, I think.)\r\n\r\nSure, let's try to not use too complicated stuff\r\n\r\n> I do not think that dynamic registration is the ideal situation (it feels a bit hacky). An alternative would be to subclass Pickle and Dill to make sure that instead of just type() checking in the dispatch table also superclasses are considered. But that is probably overkill.\r\n\r\nIndeed that would feel less hacky, but maybe it's too complex just for this. I feel like this part of the library is already hard to understand when you're not familiar with pickle. IMO having only a few changes that are simpler to understand is better than having a rewrite of `dill`'s core code.\r\n\r\nThanks a lot for your insights, it looks like we're going to have something that works well and that unlocks some nice flexibility for users :) Feel free to ping me anytime if I can help on this",
"Sure, thanks for brainstorming! I'll try to work on it this weekend. Will also revert the current changes in this PR and rename it. ",
"It seems like this is going in the right direction :). \r\n\r\n@BramVanroy Just one small suggestion for future contributions: instead of using `WIP` in the PR title, you can create a draft PR if you're still working on it.",
"Maybe I should just create a new (draft) PR then, seeing that I'll have to rename and revert the changes anyway? I'll link to this PR so that the discussion is at least referenced.",
"I can convert this PR to a draft PR. Let me know what would you prefer.",
"I think reverting my previous commits would make for a dirty (or confusing) commit history, so I'll just create a new one. Thanks."
] | 2021-11-03T23:25:42Z
| 2021-11-05T12:38:11Z
| 2021-11-05T12:38:04Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3206.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3206",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3206.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3206"
}
|
Inspired by the discussion on hashing in https://github.com/huggingface/datasets/issues/3178#issuecomment-959016329, @lhoestq suggested that it would be neat to allow users more control over the hashing process. Specifically, it would be great if users could specify hashing functions depending on the **class** of the object.
As an example, we found in the linked topic that loaded spaCy models (`Language` objects) have different hashes when `dump`'d, but their byte representation with `Language.to_bytes()` _is_ deterministic. It would therefore be great if we could specify that for `Language` objects, the hasher should hash the object's `to_bytes()` return value instead of the object itself.
This PR adds a new, but tiny, dependency to manage the registry, namely [`catalogue`](https://github.com/explosion/catalogue).
Two files have been changed (apart from the added dependency in `setup.py`) and one file has been added.
**utils.registry** (added)
This file defines our custom Registry and builds a registry called "hashers". A Registry is basically a dictionary from names (str) to functions. A function can be added to the registry with a decorator, e.g.
```python
@hashers.register(spacy.Language)
def hash_spacy_language(nlp):
return Hasher.hash(nlp.to_bytes())
```
You'll notice that `spacy.Language` is not a string, even though the registry holds a str->func mapping. To accomplish this with classes in a dynamic way, catalogue.Registry needed to be subclassed and modified as `DatasetsRegistry`. All methods that use a name as an input are now modified so that classes are deterministically converted into strings in such a way that we can later retrieve the actual class from the string (below).
**utils.py_utils** (modified)
Added two functions to deal with classes and their qualified names, that is, their full descriptive names including the module. On the one hand, this allows us to retrieve a string from a given class, e.g. given the `Module` class, return the `torch.nn.Module` str. Conversely, a function is added to convert such a fully qualified name into a class: for instance, given the string `torch.nn.Module`, return the `Module` class. These straightforward methods allow us to use classes and strings interchangeably without any user interaction needed - they can just register a class, and behind the scenes `DatasetsRegistry` converts these to deterministic strings.
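A minimal sketch of what these helpers could look like; `get_cls_from_qualname` is named in this PR, while the name and exact signature of the converse helper are assumptions:
```python
import importlib

def get_qualname_from_cls(cls: type) -> str:
    # e.g. the Module class -> "torch.nn.Module"
    return f"{cls.__module__}.{cls.__qualname__}"

def get_cls_from_qualname(qualname: str) -> type:
    # e.g. "torch.nn.Module" -> the Module class
    module_name, _, cls_name = qualname.rpartition(".")
    return getattr(importlib.import_module(module_name), cls_name)
```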
**fingerprint** (modified)
Updated Hasher.hash so that if the object to hash is an instance of a class in the registry, the registered function is used to hash the object instead of the default behavior. To do so we iterate over the registry `hashers` and convert its keys (strings) into classes, and then we can use `isinstance`.
```python
# Check if the current object is an instance that is
# applicable to the user-defined hashers. If so, hash
# with the user-defined function
for full_module_name, func in hashers.get_all().items():
registered_cls = get_cls_from_qualname(full_module_name)
if isinstance(value, registered_cls):
return func(value)
```
**Putting it all together**
To test this, you can try the following example with spaCy. First install spaCy from source and check out a specific commit.
```shell
git clone https://github.com/explosion/spaCy.git
cd spaCy/
git checkout cab9209c3dfcd1b75dfe5657f10e52c4d847a3cf
cd ..
git clone https://github.com/BramVanroy/datasets.git
cd datasets
git checkout registry
pip install -e .
pip install ../spaCy
spacy download en_core_web_sm
```
Now you can run the following script. By default it will use the custom hasher function for the Language object. You can enable the default behavior by commenting out `@hashers.register...`.
```python
import spacy
from datasets.fingerprint import Hasher
from datasets.utils.registry import hashers
# Register a function so that when the Hasher encounters a spacy.Language object
# it uses this custom function to hash instead of the default
@hashers.register(spacy.Language)
def hash_spacy_language(nlp):
return Hasher.hash(nlp.to_bytes())
def main():
print(hashers.get_all())
nlp = spacy.load("en_core_web_sm")
dump1 = Hasher.hash(nlp)
nlp = spacy.load("en_core_web_sm")
dump2 = Hasher.hash(nlp)
print(dump1)
# succeeds when using the registered custom function
# fails if using the default
assert dump1 == dump2
if __name__ == '__main__':
main()
```
To do
====
- The above is just a proof-of-concept. I am open to changes/suggestions
- Tests still need to be written
- We should consider whether we can make `DatasetsRegistry` very restrictive and ONLY allow classes. That would make testing easier - otherwise we also need to test for other sorts of objects.
- Maybe the `hashers` definition is better suited in `fingerprint`?
- Documentation/examples need to be updated
- Not sure why the logger is not working in `hash()`
- `get_cls_from_qualname` might need a fail-safe: is it possible for a full_qualname to not have a module, and if so how do we deal with that?
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3206/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3206/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1604
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1604/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1604/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1604/events
|
https://github.com/huggingface/datasets/issues/1604
| 770,862,112
|
MDU6SXNzdWU3NzA4NjIxMTI=
| 1,604
|
Add tests for the download functions?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SBrandeis",
"id": 33657802,
"login": "SBrandeis",
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SBrandeis"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] | null |
[
"We have some tests now for it under `tests/test_download_manager.py`."
] | 2020-12-18T12:49:25Z
| 2022-10-05T13:04:24Z
| 2022-10-05T13:04:24Z
|
CONTRIBUTOR
| null | null | null |
AFAIK the download functions in `DownloadManager` are not tested yet. It could be good to add some tests to ensure the behavior is as expected.
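A minimal sketch of such a test, assuming only that `DownloadManager.download` returns a readable local path when given an existing local file (the file name is made up):
```python
from datasets import DownloadManager

def test_download_local_file(tmp_path):
    src = tmp_path / "data.txt"
    src.write_text("hello")
    dl_manager = DownloadManager()
    # For an existing local file, download() should hand back a usable local path.
    out = dl_manager.download(str(src))
    with open(out) as f:
        assert f.read() == "hello"
```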
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1604/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1604/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6106
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6106/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6106/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6106/events
|
https://github.com/huggingface/datasets/issues/6106
| 1,829,131,223
|
I_kwDODunzps5tBlPX
| 6,106
|
load local json_file as dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/39040787?v=4",
"events_url": "https://api.github.com/users/CiaoHe/events{/privacy}",
"followers_url": "https://api.github.com/users/CiaoHe/followers",
"following_url": "https://api.github.com/users/CiaoHe/following{/other_user}",
"gists_url": "https://api.github.com/users/CiaoHe/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/CiaoHe",
"id": 39040787,
"login": "CiaoHe",
"node_id": "MDQ6VXNlcjM5MDQwNzg3",
"organizations_url": "https://api.github.com/users/CiaoHe/orgs",
"received_events_url": "https://api.github.com/users/CiaoHe/received_events",
"repos_url": "https://api.github.com/users/CiaoHe/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/CiaoHe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CiaoHe/subscriptions",
"type": "User",
"url": "https://api.github.com/users/CiaoHe"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi! We use PyArrow to read JSON files, and PyArrow doesn't allow different value types in the same column. #5776 should address this.\r\n\r\nIn the meantime, you can combine `Dataset.from_generator` with the above code to cast the values to the same type. ",
"Thanks for your help!"
] | 2023-07-31T12:53:49Z
| 2023-08-18T01:46:35Z
| 2023-08-18T01:46:35Z
|
NONE
| null | null | null |
### Describe the bug
I tried to load a local JSON file as a dataset but failed to parse the JSON file because some columns are of 'float' type.
### Steps to reproduce the bug
1. Load a JSON file in which certain columns are of 'float' type, for example `data = load_dataset("json", data_files=JSON_PATH)`.
2. Then an error like `ArrowInvalid: Could not convert '-0.2253' with type str: tried to convert to double` is triggered.
### Expected behavior
It should allow some columns to be of 'float' type, or at least convert those columns to str type.
I tried to avoid the error by naively converting the float items to str:
```python
# `dataset` here is the list of rows read from the JSON file and `keys` its column names
# if a column's type is not str, convert it to str
mapping = {}
for col in keys:
    if isinstance(dataset[0][col], str):
        mapping[col] = [row.get(col) for row in dataset]
    else:
        mapping[col] = [str(row.get(col)) for row in dataset]
```
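A hedged sketch of the workaround suggested in the comments above, combining this casting with `Dataset.from_generator` (the file name is hypothetical):
```python
import json

from datasets import Dataset

def gen():
    with open("data.json") as f:
        for row in json.load(f):
            # Cast every value to str so PyArrow sees a single type per column.
            yield {col: str(val) for col, val in row.items()}

ds = Dataset.from_generator(gen)
```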
### Environment info
- `datasets` version: 2.14.2
- Platform: Linux-5.4.0-52-generic-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.0
- Pandas version: 2.0.1
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6106/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6106/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6489
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6489/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6489/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6489/events
|
https://github.com/huggingface/datasets/issues/6489
| 2,036,743,777
|
I_kwDODunzps55Zj5h
| 6,489
|
load_dataset imagefolder for AWS S3 path
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/9353106?v=4",
"events_url": "https://api.github.com/users/segalinc/events{/privacy}",
"followers_url": "https://api.github.com/users/segalinc/followers",
"following_url": "https://api.github.com/users/segalinc/following{/other_user}",
"gists_url": "https://api.github.com/users/segalinc/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/segalinc",
"id": 9353106,
"login": "segalinc",
"node_id": "MDQ6VXNlcjkzNTMxMDY=",
"organizations_url": "https://api.github.com/users/segalinc/orgs",
"received_events_url": "https://api.github.com/users/segalinc/received_events",
"repos_url": "https://api.github.com/users/segalinc/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/segalinc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/segalinc/subscriptions",
"type": "User",
"url": "https://api.github.com/users/segalinc"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[] | 2023-12-12T00:08:43Z
| 2023-12-12T00:09:27Z
| null |
NONE
| null | null | null |
### Feature request
I would like to load a dataset from S3 using the imagefolder option, something like:
`dataset = datasets.load_dataset('imagefolder', data_dir='s3://.../lsun/train/bedroom', fs=S3FileSystem(), streaming=True) `
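Until something like this is supported, a hedged workaround sketch: mirror the S3 prefix to a local directory with `s3fs`, then point `imagefolder` at the local copy (the bucket and paths are hypothetical):
```python
import s3fs
from datasets import load_dataset

fs = s3fs.S3FileSystem()
# Copy the S3 prefix to a local directory first.
fs.get("s3://my-bucket/lsun/train/bedroom", "bedroom_local", recursive=True)

dataset = load_dataset("imagefolder", data_dir="bedroom_local")
```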
### Motivation
no need for `data_files`
### Your contribution
no experience with this
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6489/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6489/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/5390
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5390/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5390/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5390/events
|
https://github.com/huggingface/datasets/issues/5390
| 1,509,357,553
|
I_kwDODunzps5Z9vfx
| 5,390
|
Error when pushing to the CI hub
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hmmm, git bisect tells me that the behavior is the same since https://github.com/huggingface/datasets/commit/67e65c90e9490810b89ee140da11fdd13c356c9c (3 Oct), i.e. https://github.com/huggingface/datasets/pull/4926",
"Maybe related to the discussions in https://github.com/huggingface/datasets/pull/5196",
"Maybe the current version of moonlanding in Hub CI is the issue.\r\n\r\nI relaunched tests that were working two days ago: now they are failing. https://github.com/huggingface/datasets-server/commit/746414449cae4b311733f8a76e5b3b4ca73b38a9 for example\r\n\r\ncc @huggingface/moon-landing ",
"Hi! I don't think this has anything to do with `datasets`. Hub CI seems to be the culprit - the identical failure can be found in [this](https://github.com/huggingface/datasets/pull/5389) PR (with unrelated changes) opened today.",
"OK! Thanks for looking at it. Closing then."
] | 2022-12-23T13:36:37Z
| 2022-12-23T20:29:02Z
| 2022-12-23T20:29:02Z
|
CONTRIBUTOR
| null | null | null |
### Describe the bug
Note that this is a special case where the Hub URL is "https://hub-ci.huggingface.co"; the error does not appear if we do the same on the Hub (https://huggingface.co).
The call to `dataset.push_to_hub(...)` fails:
```
Pushing dataset shards to the dataset hub: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:01<00:00, 1.93s/it]
Traceback (most recent call last):
File "reproduce_hubci.py", line 16, in <module>
dataset.push_to_hub(repo_id=repo_id, private=False, token=USER_TOKEN, embed_external_files=True)
File "/home/slesage/hf/datasets/src/datasets/arrow_dataset.py", line 5025, in push_to_hub
HfApi(endpoint=config.HF_ENDPOINT).upload_file(
File "/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 1346, in upload_file
raise err
File "/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 1337, in upload_file
r.raise_for_status()
File "/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/requests/models.py", line 953, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://hub-ci.huggingface.co/api/datasets/__DUMMY_DATASETS_SERVER_USER__/bug-16718047265472/upload/main/README.md
```
### Steps to reproduce the bug
```python
# reproduce.py
from datasets import Dataset
import time
USER = "__DUMMY_DATASETS_SERVER_USER__"
USER_TOKEN = "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"
dataset = Dataset.from_dict({"a": [1, 2, 3]})
repo_id = f"{USER}/bug-{int(time.time() * 10e3)}"
dataset.push_to_hub(repo_id=repo_id, private=False, token=USER_TOKEN, embed_external_files=True)
```
```bash
$ HF_ENDPOINT="https://hub-ci.huggingface.co" python reproduce.py
```
### Expected behavior
No error, and the dataset should be uploaded to the Hub along with the README file (whose upload generates the error).
### Environment info
- `datasets` version: 2.8.0
- Platform: Linux-5.15.0-1026-aws-x86_64-with-glibc2.35
- Python version: 3.9.15
- PyArrow version: 7.0.0
- Pandas version: 1.5.2
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5390/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5390/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5440
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5440/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5440/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5440/events
|
https://github.com/huggingface/datasets/pull/5440
| 1,538,361,143
|
PR_kwDODunzps5HpRbF
| 5,440
|
Fix documentation about batch samplers
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomasw21",
"id": 24695242,
"login": "thomasw21",
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomasw21"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008874 / 0.011353 (-0.002479) | 0.004685 / 0.011008 (-0.006323) | 0.101478 / 0.038508 (0.062970) | 0.031409 / 0.023109 (0.008300) | 0.305429 / 0.275898 (0.029531) | 0.371777 / 0.323480 (0.048297) | 0.007282 / 0.007986 (-0.000704) | 0.005545 / 0.004328 (0.001217) | 0.078583 / 0.004250 (0.074333) | 0.037171 / 0.037052 (0.000118) | 0.320186 / 0.258489 (0.061696) | 0.347881 / 0.293841 (0.054040) | 0.034005 / 0.128546 (-0.094541) | 0.011534 / 0.075646 (-0.064113) | 0.326079 / 0.419271 (-0.093193) | 0.040856 / 0.043533 (-0.002677) | 0.307327 / 0.255139 (0.052188) | 0.323521 / 0.283200 (0.040321) | 0.090407 / 0.141683 (-0.051276) | 1.481994 / 1.452155 (0.029840) | 1.490372 / 1.492716 (-0.002345) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.175161 / 0.018006 (0.157155) | 0.447009 / 0.000490 (0.446519) | 0.003570 / 0.000200 (0.003370) | 0.000072 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023868 / 0.037411 (-0.013543) | 0.100791 / 0.014526 (0.086265) | 0.108131 / 0.176557 (-0.068425) | 0.147993 / 0.737135 (-0.589142) | 0.111205 / 0.296338 (-0.185133) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.425369 / 0.215209 (0.210160) | 4.241694 / 2.077655 (2.164040) | 2.145403 / 1.504120 (0.641283) | 1.913517 / 1.541195 (0.372322) | 1.887307 / 1.468490 
(0.418817) | 0.691615 / 4.584777 (-3.893162) | 3.402233 / 3.745712 (-0.343480) | 1.992532 / 5.269862 (-3.277330) | 1.322292 / 4.565676 (-3.243385) | 0.082862 / 0.424275 (-0.341413) | 0.012595 / 0.007607 (0.004988) | 0.528490 / 0.226044 (0.302445) | 5.313338 / 2.268929 (3.044409) | 2.645037 / 55.444624 (-52.799587) | 2.326279 / 6.876477 (-4.550198) | 2.396955 / 2.142072 (0.254883) | 0.819354 / 4.805227 (-3.985873) | 0.150889 / 6.500664 (-6.349775) | 0.066517 / 0.075469 (-0.008952) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.233673 / 1.841788 (-0.608114) | 14.563293 / 8.074308 (6.488985) | 14.317989 / 10.191392 (4.126597) | 0.150767 / 0.680424 (-0.529657) | 0.028972 / 0.534201 (-0.505229) | 0.400547 / 0.579283 (-0.178736) | 0.402267 / 0.434364 (-0.032097) | 0.459375 / 0.540337 (-0.080962) | 0.544419 / 1.386936 (-0.842517) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006817 / 0.011353 (-0.004536) | 0.004588 / 0.011008 (-0.006421) | 0.099224 / 0.038508 (0.060716) | 0.027730 / 0.023109 (0.004621) | 0.412310 / 0.275898 (0.136412) | 0.445731 / 0.323480 (0.122252) | 0.005197 / 0.007986 (-0.002788) | 0.003601 / 0.004328 (-0.000728) | 0.076200 / 0.004250 (0.071950) | 0.041813 / 0.037052 (0.004761) | 0.415282 / 0.258489 (0.156793) | 0.457182 / 0.293841 (0.163341) | 0.031920 / 0.128546 (-0.096626) | 0.011712 / 0.075646 (-0.063934) | 0.320859 / 0.419271 (-0.098412) | 0.041466 / 0.043533 (-0.002067) | 0.418156 / 0.255139 (0.163017) | 0.435501 / 0.283200 (0.152302) | 0.090727 / 0.141683 (-0.050955) | 1.484014 / 1.452155 (0.031859) | 1.568072 / 1.492716 (0.075356) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.263356 / 0.018006 (0.245350) | 0.410768 / 0.000490 (0.410278) | 0.015983 / 0.000200 (0.015783) | 0.000301 / 0.000054 (0.000246) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024522 / 0.037411 (-0.012889) | 0.103986 / 0.014526 (0.089460) | 0.109253 / 0.176557 (-0.067303) | 0.142308 / 0.737135 (-0.594827) | 0.114037 / 0.296338 (-0.182302) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.452617 / 0.215209 (0.237407) | 4.505215 / 2.077655 (2.427560) | 2.185546 / 1.504120 (0.681426) | 1.995540 / 1.541195 (0.454345) | 1.962875 / 1.468490 (0.494385) | 0.690237 / 4.584777 (-3.894540) | 3.448311 / 3.745712 (-0.297401) | 1.901572 / 5.269862 (-3.368289) | 1.170832 / 4.565676 (-3.394844) | 0.082333 / 0.424275 (-0.341942) | 0.012569 / 0.007607 (0.004962) | 0.547822 / 0.226044 (0.321778) | 5.504180 / 2.268929 (3.235251) | 2.693981 / 55.444624 (-52.750644) | 2.320710 / 6.876477 (-4.555767) | 2.270508 / 2.142072 (0.128435) | 0.803145 / 4.805227 (-4.002083) | 0.152168 / 6.500664 (-6.348496) | 0.067408 / 0.075469 (-0.008061) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.260689 / 1.841788 (-0.581099) | 14.281112 / 8.074308 (6.206804) | 14.549742 / 10.191392 (4.358350) | 0.129337 / 0.680424 (-0.551087) | 0.017181 / 0.534201 (-0.517020) | 0.380473 / 0.579283 (-0.198810) | 0.387689 / 0.434364 (-0.046675) | 0.446734 / 0.540337 (-0.093603) | 0.532479 / 1.386936 (-0.854457) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008953 / 0.011353 (-0.002400) | 0.004917 / 0.011008 (-0.006091) | 0.098699 / 0.038508 (0.060191) | 0.034460 / 0.023109 (0.011351) | 0.294604 / 0.275898 (0.018706) | 0.322709 / 0.323480 (-0.000770) | 0.007780 / 0.007986 (-0.000206) | 0.004061 / 0.004328 (-0.000267) | 0.076134 / 0.004250 (0.071883) | 0.043786 / 0.037052 (0.006734) | 0.302155 / 0.258489 (0.043666) | 0.339779 / 0.293841 (0.045938) | 0.038305 / 0.128546 (-0.090241) | 0.012131 / 0.075646 (-0.063515) | 0.332656 / 0.419271 (-0.086615) | 0.048029 / 0.043533 (0.004496) | 0.303859 / 0.255139 (0.048720) | 0.315861 / 0.283200 (0.032662) | 0.100758 / 0.141683 (-0.040925) | 1.468072 / 1.452155 (0.015918) | 1.521325 / 1.492716 (0.028609) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.244975 / 0.018006 (0.226969) | 0.524392 / 0.000490 (0.523902) | 0.003720 / 0.000200 (0.003520) | 0.000087 / 0.000054 (0.000032) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027704 / 0.037411 (-0.009707) | 0.109048 / 0.014526 (0.094522) | 0.118298 / 0.176557 (-0.058259) | 0.158748 / 0.737135 (-0.578388) | 0.125654 / 0.296338 (-0.170684) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.406973 / 0.215209 (0.191764) | 4.057502 / 2.077655 (1.979847) | 1.939847 / 1.504120 (0.435727) | 1.746457 / 1.541195 (0.205262) | 1.698866 / 1.468490 
(0.230376) | 0.692884 / 4.584777 (-3.891893) | 3.736988 / 3.745712 (-0.008724) | 2.050122 / 5.269862 (-3.219740) | 1.299808 / 4.565676 (-3.265868) | 0.085285 / 0.424275 (-0.338990) | 0.012768 / 0.007607 (0.005161) | 0.510814 / 0.226044 (0.284770) | 5.105319 / 2.268929 (2.836391) | 2.304003 / 55.444624 (-53.140621) | 1.951123 / 6.876477 (-4.925354) | 1.998504 / 2.142072 (-0.143568) | 0.840235 / 4.805227 (-3.964993) | 0.164521 / 6.500664 (-6.336143) | 0.064215 / 0.075469 (-0.011254) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.272520 / 1.841788 (-0.569268) | 14.648110 / 8.074308 (6.573802) | 14.573754 / 10.191392 (4.382362) | 0.170053 / 0.680424 (-0.510371) | 0.029389 / 0.534201 (-0.504811) | 0.438924 / 0.579283 (-0.140359) | 0.433572 / 0.434364 (-0.000792) | 0.517702 / 0.540337 (-0.022635) | 0.600389 / 1.386936 (-0.786547) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007362 / 0.011353 (-0.003991) | 0.005451 / 0.011008 (-0.005557) | 0.099336 / 0.038508 (0.060828) | 0.033284 / 0.023109 (0.010174) | 0.377143 / 0.275898 (0.101245) | 0.423724 / 0.323480 (0.100244) | 0.006194 / 0.007986 (-0.001792) | 0.004208 / 0.004328 (-0.000121) | 0.074473 / 0.004250 (0.070223) | 0.049874 / 0.037052 (0.012821) | 0.376012 / 0.258489 (0.117523) | 0.439942 / 0.293841 (0.146101) | 0.037860 / 0.128546 (-0.090686) | 0.012546 / 0.075646 (-0.063100) | 0.349123 / 0.419271 (-0.070148) | 0.048980 / 0.043533 (0.005447) | 0.391205 / 0.255139 (0.136066) | 0.396474 / 0.283200 (0.113274) | 0.105846 / 0.141683 (-0.035836) | 1.502475 / 1.452155 (0.050321) | 1.612303 / 1.492716 (0.119587) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.300815 / 0.018006 (0.282809) | 0.542171 / 0.000490 (0.541681) | 0.005465 / 0.000200 (0.005265) | 0.000094 / 0.000054 (0.000039) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028904 / 0.037411 (-0.008508) | 0.110352 / 0.014526 (0.095827) | 0.123275 / 0.176557 (-0.053282) | 0.161958 / 0.737135 (-0.575178) | 0.133595 / 0.296338 (-0.162743) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.438724 / 0.215209 (0.223515) | 4.373633 / 2.077655 (2.295979) | 2.178981 / 1.504120 (0.674861) | 1.992442 / 1.541195 (0.451247) | 2.063149 / 1.468490 (0.594659) | 0.696688 / 4.584777 (-3.888089) | 3.849370 / 3.745712 (0.103658) | 3.509495 / 5.269862 (-1.760367) | 1.923320 / 4.565676 (-2.642356) | 0.085554 / 0.424275 (-0.338721) | 0.012510 / 0.007607 (0.004903) | 0.535953 / 0.226044 (0.309909) | 5.365684 / 2.268929 (3.096755) | 2.686902 / 55.444624 (-52.757723) | 2.330922 / 6.876477 (-4.545554) | 2.353445 / 2.142072 (0.211373) | 0.878336 / 4.805227 (-3.926891) | 0.167296 / 6.500664 (-6.333368) | 0.064564 / 0.075469 (-0.010905) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.244696 / 1.841788 (-0.597091) | 15.027981 / 8.074308 (6.953673) | 14.545797 / 10.191392 (4.354405) | 0.147229 / 0.680424 (-0.533194) | 0.018007 / 0.534201 (-0.516194) | 0.446196 / 0.579283 (-0.133087) | 0.437418 / 0.434364 (0.003054) | 0.510732 / 0.540337 (-0.029606) | 0.594814 / 1.386936 (-0.792122) |\n\n</details>\n</details>\n\n\n"
] | 2023-01-18T17:04:27Z
| 2023-01-18T17:57:29Z
| 2023-01-18T17:50:04Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5440.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5440",
"merged_at": "2023-01-18T17:50:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5440.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5440"
}
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5440/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5440/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4662
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4662/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4662/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4662/events
|
https://github.com/huggingface/datasets/pull/4662
| 1,298,845,369
|
PR_kwDODunzps47GTEc
| 4,662
|
Fix: conll2003 - fix empty example
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-07-08T10:49:13Z
| 2022-07-08T14:14:53Z
| 2022-07-08T14:02:42Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4662.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4662",
"merged_at": "2022-07-08T14:02:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4662.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4662"
}
|
As reported in https://huggingface.co/datasets/conll2003/discussions/2#62c45a14f93fc97e8260532f, there was an extra empty example at the end of the dataset
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4662/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4662/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/5240
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5240/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5240/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5240/events
|
https://github.com/huggingface/datasets/pull/5240
| 1,448,478,617
|
PR_kwDODunzps5C3Fe6
| 5,240
|
Cleaner error tracebacks for dataset script errors
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq Good catch! This currently leads to an AttributeError (due to `writer` being None) on this line:\r\nhttps://github.com/huggingface/datasets/blob/fed1628d49a91f9ae259ddf6edbb252c7972d9a3/src/datasets/builder.py#L1552\r\n"
] | 2022-11-14T17:42:02Z
| 2022-11-15T18:26:48Z
| 2022-11-15T18:24:38Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5240.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5240",
"merged_at": "2022-11-15T18:24:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5240.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5240"
}
|
Make the traceback of the errors raised in `_generate_examples` cleaner for easier debugging. Additionally, initialize the `writer` in the for-loop to avoid the `ValueError` from `ArrowWriter.finalize` raised in the `finally` block when no examples are yielded before the `_generate_examples` error.
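For reference, here is a minimal, self-contained sketch of the lazy-initialization pattern described above (the names are illustrative stand-ins, not the actual `datasets` internals):
```python
def write_examples(generator, path):
    writer = None  # illustrative stand-in for ArrowWriter
    try:
        for line in generator:
            if writer is None:
                # Open the output only once the first record arrives, so an
                # error raised before any yield leaves nothing to finalize
                writer = open(path, "w")
            writer.write(line + "\n")
    finally:
        if writer is not None:
            writer.close()

def failing_generator():
    raise ImportError("Using URI string without sqlalchemy installed.")
    yield  # makes this a generator function; never reached

try:
    write_examples(failing_generator(), "/tmp/out.txt")
except ImportError:
    print("failed before the first record: no writer to clean up")
```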
<details>
<summary>
The full traceback of the "SQLAlchemy ImportError" error that gets printed with these changes:
</summary>
```bash
ImportError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _prepare_split_single(self, arg)
1759 _time = time.time()
-> 1760 for _, table in generator:
1761 # Only initialize the writer when we have the first record (to avoid having to do the clean-up if an error occurs before that)
9 frames
/usr/local/lib/python3.7/dist-packages/datasets/packaged_modules/sql/sql.py in _generate_tables(self)
112 sql_reader = pd.read_sql(
--> 113 self.config.sql, self.config.con, chunksize=chunksize, **self.config.pd_read_sql_kwargs
114 )
/usr/local/lib/python3.7/dist-packages/pandas/io/sql.py in read_sql(sql, con, index_col, coerce_float, params, parse_dates, columns, chunksize)
598 """
--> 599 pandas_sql = pandasSQL_builder(con)
600
/usr/local/lib/python3.7/dist-packages/pandas/io/sql.py in pandasSQL_builder(con, schema, meta, is_cursor)
789 elif isinstance(con, str):
--> 790 raise ImportError("Using URI string without sqlalchemy installed.")
791 else:
ImportError: Using URI string without sqlalchemy installed.
The above exception was the direct cause of the following exception:
DatasetGenerationError Traceback (most recent call last)
<ipython-input-4-5af11af4737b> in <module>
----> 1 ds = Dataset.from_sql('''SELECT * from states WHERE state=="New York";''', "sqlite:///us_covid_data.db")
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in from_sql(sql, con, features, cache_dir, keep_in_memory, **kwargs)
1152 cache_dir=cache_dir,
1153 keep_in_memory=keep_in_memory,
-> 1154 **kwargs,
1155 ).read()
1156
/usr/local/lib/python3.7/dist-packages/datasets/io/sql.py in read(self)
47 # try_from_hf_gcs=try_from_hf_gcs,
48 base_path=base_path,
---> 49 use_auth_token=use_auth_token,
50 )
51
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
825 verify_infos=verify_infos,
826 **prepare_split_kwargs,
--> 827 **download_and_prepare_kwargs,
828 )
829 # Sync info
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
912 try:
913 # Prepare split will record examples associated to the split
--> 914 self._prepare_split(split_generator, **prepare_split_kwargs)
915 except OSError as e:
916 raise OSError(
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _prepare_split(self, split_generator, file_format, num_proc, max_shard_size)
1652 job_id = 0
1653 for job_id, done, content in self._prepare_split_single(
-> 1654 {"gen_kwargs": gen_kwargs, "job_id": job_id, **_prepare_split_args}
1655 ):
1656 if done:
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _prepare_split_single(self, arg)
1789 raise DatasetGenerationError(
1790 f"An error occured while generating the dataset"
-> 1791 ) from e
1792 finally:
1793 yield job_id, False, num_examples_progress_update
DatasetGenerationError: An error occurred while generating the dataset
```
</details>
PS: I've also considered raising the error as follows:
```python
tb = sys.exc_info()[2]
raise DatasetGenerationError(f"An error occurred while generating the dataset: {type(e).__name__}: {e}").with_traceback(tb) from None # this raises the DatasetGenerationError with "e"'s traceback
```
But it seems like "from e" is now the [preferred](https://docs.python.org/3/library/exceptions.html#BaseException.with_traceback) way to chain exceptions.
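For completeness, a runnable standalone illustration of the `from e` style (a sketch, not the builder code itself):
```python
class DatasetGenerationError(Exception):
    pass

try:
    try:
        raise ImportError("Using URI string without sqlalchemy installed.")
    except ImportError as e:
        # "from e" stores the original error as __cause__, so Python prints
        # both tracebacks joined by "The above exception was the direct
        # cause of the following exception:"
        raise DatasetGenerationError("An error occurred while generating the dataset") from e
except DatasetGenerationError as err:
    print(type(err.__cause__).__name__)  # -> ImportError
```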
Fix https://github.com/huggingface/datasets/issues/5186
cc @nateraw
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5240/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5240/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3878
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3878/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3878/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3878/events
|
https://github.com/huggingface/datasets/pull/3878
| 1,164,305,335
|
PR_kwDODunzps40MOpn
| 3,878
|
Update cats_vs_dogs size
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3878). All of your documentation changes will be reflected on that endpoint.",
"Maybe `NonMatchingSplitsSizesError` errors should also tell the user to try using a more recent version of the dataset to get the fixes ?",
"@lhoestq Good idea. Will open a new PR to improve the error messages of NonMatchingSplitsSizesError, NonMatchingChecksumsError, ...",
"It seems there is still a problem. I am using datasets version 2.5.1. \r\nI just typed `ds = load_dataset(\"cats_vs_dogs\")` and get the error below.\r\n\r\n```\r\nNonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=3893603, num_examples=23422, dataset_name='cats_vs_dogs'), 'recorded': SplitInfo(name='train', num_bytes=3891612, num_examples=23410, dataset_name='cats_vs_dogs')}]\r\n```\r\nIt looks like the dataset still only has 23,410 examples....\r\n",
"Thanks for reporting, I opened https://github.com/huggingface/datasets/pull/5047"
] | 2022-03-09T18:40:56Z
| 2022-09-30T08:47:43Z
| 2022-03-10T14:21:23Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3878.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3878",
"merged_at": "2022-03-10T14:21:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3878.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3878"
}
|
It seems like 12 new examples have been added to the `cats_vs_dogs` dataset. This PR updates the size in the card and in the info file to avoid a verification error (reported by @stevhliu).
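Until updated metadata reaches a release, the verification can also be skipped on the user side; this is a version-dependent sketch (older releases take `ignore_verifications=True`, newer ones `verification_mode="no_checks"`):
```python
from datasets import load_dataset

# Version-dependent workaround: skip the split-size verification entirely
ds = load_dataset("cats_vs_dogs", verification_mode="no_checks")
print(ds["train"].num_rows)
```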
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3878/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3878/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1551
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1551/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1551/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1551/events
|
https://github.com/huggingface/datasets/pull/1551
| 765,621,879
|
MDExOlB1bGxSZXF1ZXN0NTM5MDEwNDAy
| 1,551
|
Monero
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2815308?v=4",
"events_url": "https://api.github.com/users/iliemihai/events{/privacy}",
"followers_url": "https://api.github.com/users/iliemihai/followers",
"following_url": "https://api.github.com/users/iliemihai/following{/other_user}",
"gists_url": "https://api.github.com/users/iliemihai/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/iliemihai",
"id": 2815308,
"login": "iliemihai",
"node_id": "MDQ6VXNlcjI4MTUzMDg=",
"organizations_url": "https://api.github.com/users/iliemihai/orgs",
"received_events_url": "https://api.github.com/users/iliemihai/received_events",
"repos_url": "https://api.github.com/users/iliemihai/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/iliemihai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iliemihai/subscriptions",
"type": "User",
"url": "https://api.github.com/users/iliemihai"
}
|
[
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] |
closed
| false
| null |
[] | null |
[
"Hi @iliemihai - you need to add the Readme file! Otherwise seems good. \r\nAlso don't forget to run `make style` & `flake8 datasets` locally, from the datasets folder",
"@skyprince999 I will add the README.d for it. Thank you :D ",
"Thanks for your contribution, @iliemihai. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if you need some help."
] | 2020-12-13T19:56:48Z
| 2022-10-03T09:38:35Z
| 2022-10-03T09:38:35Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1551.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1551",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1551.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1551"
}
|
Biomedical Romanian dataset :)
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1551/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1551/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2057
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2057/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2057/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2057/events
|
https://github.com/huggingface/datasets/pull/2057
| 832,120,522
|
MDExOlB1bGxSZXF1ZXN0NTkzMzMzMjM0
| 2,057
|
update link to ZEST dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/619844?v=4",
"events_url": "https://api.github.com/users/matt-peters/events{/privacy}",
"followers_url": "https://api.github.com/users/matt-peters/followers",
"following_url": "https://api.github.com/users/matt-peters/following{/other_user}",
"gists_url": "https://api.github.com/users/matt-peters/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/matt-peters",
"id": 619844,
"login": "matt-peters",
"node_id": "MDQ6VXNlcjYxOTg0NA==",
"organizations_url": "https://api.github.com/users/matt-peters/orgs",
"received_events_url": "https://api.github.com/users/matt-peters/received_events",
"repos_url": "https://api.github.com/users/matt-peters/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/matt-peters/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/matt-peters/subscriptions",
"type": "User",
"url": "https://api.github.com/users/matt-peters"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-15T19:22:57Z
| 2021-03-16T17:06:28Z
| 2021-03-16T17:06:28Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2057.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2057",
"merged_at": "2021-03-16T17:06:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2057.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2057"
}
|
Updating the link as the original one is no longer working.
|
{
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2057/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2057/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1979
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1979/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1979/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1979/events
|
https://github.com/huggingface/datasets/pull/1979
| 820,977,853
|
MDExOlB1bGxSZXF1ZXN0NTgzODQ3MTk3
| 1,979
|
Add article_id and process test set template for semeval 2020 task 11…
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8195444?v=4",
"events_url": "https://api.github.com/users/hemildesai/events{/privacy}",
"followers_url": "https://api.github.com/users/hemildesai/followers",
"following_url": "https://api.github.com/users/hemildesai/following{/other_user}",
"gists_url": "https://api.github.com/users/hemildesai/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hemildesai",
"id": 8195444,
"login": "hemildesai",
"node_id": "MDQ6VXNlcjgxOTU0NDQ=",
"organizations_url": "https://api.github.com/users/hemildesai/orgs",
"received_events_url": "https://api.github.com/users/hemildesai/received_events",
"repos_url": "https://api.github.com/users/hemildesai/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hemildesai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hemildesai/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hemildesai"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Thanks !\r\nNow to fix the CI the only thing left is to add a dummy `test-task-tc-template.out` file inside the `dummy_data.zip` at `./datasets/sem_eval_2020_task_11/dummy/1.1.0`\r\nIt must contain the labels template for each dummy article of the test set included in `dummy_data.zip`\r\n\r\nAfter that we should be good to merge this one :)",
"@lhoestq Made the changes! The failure now seems to be unrelated to the changes. Any idea what's going on?",
"This is a bug on master that we're investigating. You can ignore it"
] | 2021-03-03T10:34:32Z
| 2021-03-13T10:59:40Z
| 2021-03-12T13:10:50Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1979.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1979",
"merged_at": "2021-03-12T13:10:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1979.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1979"
}
|
… dataset
- `article_id` is needed to create the submission file for the task at https://propaganda.qcri.org/semeval2020-task11/
- The `technique classification` task provides the span indices in a template for the test set that is necessary to complete the task. This PR implements processing of that template for the dataset.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1979/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1979/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1914
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1914/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1914/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1914/events
|
https://github.com/huggingface/datasets/pull/1914
| 812,149,201
|
MDExOlB1bGxSZXF1ZXN0NTc2NTYyNTkz
| 1,914
|
Fix logging imports and make all datasets use library logger
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-02-19T16:12:34Z
| 2021-02-21T19:48:03Z
| 2021-02-21T19:48:03Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1914.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1914",
"merged_at": "2021-02-21T19:48:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1914.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1914"
}
|
Fix the library's relative logging imports and make all dataset scripts use the library logger.
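A minimal sketch of the convention this enforces, assuming the public `datasets.logging` helpers:
```python
from datasets import logging

# Child of the "datasets" library logger, so dataset scripts inherit the
# library-wide verbosity instead of configuring logging themselves
logger = logging.get_logger(__name__)

logging.set_verbosity_info()
logger.info("Generating split...")
```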
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1914/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1914/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/5553
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5553/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5553/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5553/events
|
https://github.com/huggingface/datasets/pull/5553
| 1,592,236,998
|
PR_kwDODunzps5KXXUq
| 5,553
|
Improved error message for row formatting
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26489385?v=4",
"events_url": "https://api.github.com/users/Plutone11011/events{/privacy}",
"followers_url": "https://api.github.com/users/Plutone11011/followers",
"following_url": "https://api.github.com/users/Plutone11011/following{/other_user}",
"gists_url": "https://api.github.com/users/Plutone11011/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Plutone11011",
"id": 26489385,
"login": "Plutone11011",
"node_id": "MDQ6VXNlcjI2NDg5Mzg1",
"organizations_url": "https://api.github.com/users/Plutone11011/orgs",
"received_events_url": "https://api.github.com/users/Plutone11011/received_events",
"repos_url": "https://api.github.com/users/Plutone11011/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Plutone11011/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Plutone11011/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Plutone11011"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.014953 / 0.011353 (0.003600) | 0.006936 / 0.011008 (-0.004072) | 0.144039 / 0.038508 (0.105531) | 0.046719 / 0.023109 (0.023610) | 0.408832 / 0.275898 (0.132934) | 0.501419 / 0.323480 (0.177939) | 0.010190 / 0.007986 (0.002204) | 0.007618 / 0.004328 (0.003290) | 0.108553 / 0.004250 (0.104303) | 0.048484 / 0.037052 (0.011432) | 0.451586 / 0.258489 (0.193097) | 0.469864 / 0.293841 (0.176023) | 0.062159 / 0.128546 (-0.066387) | 0.019937 / 0.075646 (-0.055710) | 0.473718 / 0.419271 (0.054446) | 0.064777 / 0.043533 (0.021244) | 0.428675 / 0.255139 (0.173536) | 0.467665 / 0.283200 (0.184465) | 0.133528 / 0.141683 (-0.008155) | 1.978084 / 1.452155 (0.525930) | 1.965878 / 1.492716 (0.473162) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.290112 / 0.018006 (0.272106) | 0.629481 / 0.000490 (0.628992) | 0.003600 / 0.000200 (0.003400) | 0.000144 / 0.000054 (0.000089) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030806 / 0.037411 (-0.006605) | 0.142376 / 0.014526 (0.127850) | 0.150020 / 0.176557 (-0.026537) | 0.193679 / 0.737135 (-0.543457) | 0.151329 / 0.296338 (-0.145009) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.629725 / 0.215209 (0.414516) | 6.656313 / 2.077655 (4.578659) | 2.712160 / 1.504120 (1.208041) | 2.328461 / 1.541195 (0.787266) | 2.452502 / 1.468490 
(0.984012) | 1.353183 / 4.584777 (-3.231594) | 5.981521 / 3.745712 (2.235809) | 3.707186 / 5.269862 (-1.562676) | 2.460583 / 4.565676 (-2.105094) | 0.178300 / 0.424275 (-0.245975) | 0.020357 / 0.007607 (0.012750) | 0.813564 / 0.226044 (0.587520) | 8.465600 / 2.268929 (6.196671) | 3.491507 / 55.444624 (-51.953117) | 2.810781 / 6.876477 (-4.065695) | 3.100182 / 2.142072 (0.958110) | 1.539321 / 4.805227 (-3.265906) | 0.257735 / 6.500664 (-6.242929) | 0.082785 / 0.075469 (0.007316) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.766586 / 1.841788 (-0.075201) | 20.534638 / 8.074308 (12.460330) | 24.066176 / 10.191392 (13.874784) | 0.272419 / 0.680424 (-0.408005) | 0.048940 / 0.534201 (-0.485261) | 0.606004 / 0.579283 (0.026721) | 0.669684 / 0.434364 (0.235320) | 0.716858 / 0.540337 (0.176521) | 0.949394 / 1.386936 (-0.437542) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010865 / 0.011353 (-0.000488) | 0.009855 / 0.011008 (-0.001153) | 0.105973 / 0.038508 (0.067465) | 0.039818 / 0.023109 (0.016709) | 0.544505 / 0.275898 (0.268607) | 0.511253 / 0.323480 (0.187773) | 0.007350 / 0.007986 (-0.000635) | 0.006950 / 0.004328 (0.002622) | 0.106548 / 0.004250 (0.102298) | 0.062740 / 0.037052 (0.025688) | 0.465881 / 0.258489 (0.207392) | 0.524426 / 0.293841 (0.230585) | 0.056052 / 0.128546 (-0.072495) | 0.020906 / 0.075646 (-0.054741) | 0.125337 / 0.419271 (-0.293935) | 0.064689 / 0.043533 (0.021156) | 0.483055 / 0.255139 (0.227916) | 0.518878 / 0.283200 (0.235678) | 0.127288 / 0.141683 (-0.014394) | 1.936246 / 1.452155 (0.484092) | 2.162532 / 1.492716 (0.669816) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.253691 / 0.018006 (0.235685) | 0.606244 / 0.000490 (0.605754) | 0.004251 / 0.000200 (0.004051) | 0.000126 / 0.000054 (0.000071) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.038356 / 0.037411 (0.000944) | 0.146690 / 0.014526 (0.132164) | 0.146545 / 0.176557 (-0.030012) | 0.218452 / 0.737135 (-0.518684) | 0.165314 / 0.296338 (-0.131025) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.645768 / 0.215209 (0.430559) | 7.229186 / 2.077655 (5.151531) | 3.484778 / 1.504120 (1.980658) | 2.585310 / 1.541195 (1.044116) | 2.727670 / 1.468490 (1.259180) | 1.393416 / 4.584777 (-3.191361) | 6.448707 / 3.745712 (2.702995) | 3.433652 / 5.269862 (-1.836209) | 2.106450 / 4.565676 (-2.459226) | 0.143899 / 0.424275 (-0.280376) | 0.015097 / 0.007607 (0.007490) | 0.860960 / 0.226044 (0.634916) | 9.509725 / 2.268929 (7.240797) | 3.881601 / 55.444624 (-51.563024) | 3.156018 / 6.876477 (-3.720459) | 3.556330 / 2.142072 (1.414257) | 1.525940 / 4.805227 (-3.279287) | 0.264588 / 6.500664 (-6.236076) | 0.090327 / 0.075469 (0.014858) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.829761 / 1.841788 (-0.012027) | 21.037774 / 8.074308 (12.963466) | 24.464737 / 10.191392 (14.273345) | 0.394165 / 0.680424 (-0.286259) | 0.039286 / 0.534201 (-0.494915) | 0.546412 / 0.579283 (-0.032871) | 0.741760 / 0.434364 (0.307396) | 0.683969 / 0.540337 (0.143632) | 0.831392 / 1.386936 (-0.555544) |\n\n</details>\n</details>\n\n\n"
] | 2023-02-20T17:29:14Z
| 2023-02-21T13:08:25Z
| 2023-02-21T12:58:12Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5553.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5553",
"merged_at": "2023-02-21T12:58:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5553.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5553"
}
|
Solves #5539
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5553/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5553/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1798
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1798/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1798/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1798/events
|
https://github.com/huggingface/datasets/pull/1798
| 797,766,818
|
MDExOlB1bGxSZXF1ZXN0NTY0Njk2NjE1
| 1,798
|
Add Arabic sarcasm dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/643918?v=4",
"events_url": "https://api.github.com/users/mapmeld/events{/privacy}",
"followers_url": "https://api.github.com/users/mapmeld/followers",
"following_url": "https://api.github.com/users/mapmeld/following{/other_user}",
"gists_url": "https://api.github.com/users/mapmeld/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mapmeld",
"id": 643918,
"login": "mapmeld",
"node_id": "MDQ6VXNlcjY0MzkxOA==",
"organizations_url": "https://api.github.com/users/mapmeld/orgs",
"received_events_url": "https://api.github.com/users/mapmeld/received_events",
"repos_url": "https://api.github.com/users/mapmeld/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mapmeld/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mapmeld/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mapmeld"
}
|
[] |
closed
| false
| null |
[] | null |
[
"@lhoestq thanks for the comments - I believe these are now addressed. I re-generated the datasets_info.json and dummy data"
] | 2021-01-31T17:38:55Z
| 2021-02-10T20:39:13Z
| 2021-02-03T10:35:54Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1798.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1798",
"merged_at": "2021-02-03T10:35:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1798.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1798"
}
|
This MIT-licensed dataset: https://github.com/iabufarha/ArSarcasm
Via https://sites.google.com/view/ar-sarcasm-sentiment-detection/
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1798/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1798/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4672
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4672/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4672/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4672/events
|
https://github.com/huggingface/datasets/pull/4672
| 1,300,911,467
|
PR_kwDODunzps47NEfV
| 4,672
|
Support extract 7-zip compressed data files
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Cool! Can you please remove `Fix #3541` from the description as this PR doesn't add support for streaming/`iter_archive`, so it only partially addresses the issue?\r\n\r\nSide note:\r\nI think we can use `libarchive` (`libarchive-c` is a Python package with the bindings) for streaming 7z archives. The only issue with this lib is that it's tricky to install on Windows/Mac."
] | 2022-07-11T15:56:51Z
| 2022-07-15T13:14:27Z
| 2022-07-15T13:02:07Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4672.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4672",
"merged_at": "2022-07-15T13:02:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4672.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4672"
}
|
Partially fix #3541; fix #4670.
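A minimal sketch of 7-zip extraction with `py7zr`; whether the extractor added here uses exactly this call is an assumption, not confirmed by the diff:
```python
import py7zr

def extract_7z(archive_path: str, output_dir: str) -> str:
    # py7zr handles the 7-zip container format that zipfile/tarfile cannot
    with py7zr.SevenZipFile(archive_path, mode="r") as archive:
        archive.extractall(path=output_dir)
    return output_dir

# extract_7z("data.7z", "./extracted")  # paths are illustrative
```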
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4672/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4672/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2212
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2212/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2212/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2212/events
|
https://github.com/huggingface/datasets/issues/2212
| 855,999,133
|
MDU6SXNzdWU4NTU5OTkxMzM=
| 2,212
|
Can't reach "https://storage.googleapis.com/illuin/fquad/train.json.zip" when trying to load fquad dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/21348833?v=4",
"events_url": "https://api.github.com/users/hanss0n/events{/privacy}",
"followers_url": "https://api.github.com/users/hanss0n/followers",
"following_url": "https://api.github.com/users/hanss0n/following{/other_user}",
"gists_url": "https://api.github.com/users/hanss0n/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hanss0n",
"id": 21348833,
"login": "hanss0n",
"node_id": "MDQ6VXNlcjIxMzQ4ODMz",
"organizations_url": "https://api.github.com/users/hanss0n/orgs",
"received_events_url": "https://api.github.com/users/hanss0n/received_events",
"repos_url": "https://api.github.com/users/hanss0n/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hanss0n/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hanss0n/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hanss0n"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi ! Apparently the data are not available from this url anymore. We'll replace it with the new url when it's available",
"I saw this on their website when we request to download the dataset:\r\n\r\n\r\nCan we still request them link for the dataset and make a PR? @lhoestq @yjernite ",
"I've contacted Martin (first author of the fquad paper) regarding a possible new url. Hopefully we can get one soon !",
"They now made a website to force people who want to use the dataset for commercial purposes to seek a commercial license from them ...",
"The script has been adopted to support manual download from the website, so I'm closing this issue."
] | 2021-04-12T13:49:56Z
| 2023-10-03T16:09:19Z
| 2023-10-03T16:09:18Z
|
NONE
| null | null | null |
I'm trying to load the [fquad dataset](https://huggingface.co/datasets/fquad) by running:
```Python
fquad = load_dataset("fquad")
```
which produces the following error:
```
Using custom data configuration default
Downloading and preparing dataset fquad/default (download: 3.14 MiB, generated: 6.62 MiB, post-processed: Unknown size, total: 9.76 MiB) to /root/.cache/huggingface/datasets/fquad/default/0.1.0/778dc2c85813d05ddd0c17087294d5f8f24820752340958070876b677af9f061...
---------------------------------------------------------------------------
ConnectionError Traceback (most recent call last)
<ipython-input-48-a2721797e23b> in <module>()
----> 1 fquad = load_dataset("fquad")
11 frames
/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token)
614 raise FileNotFoundError("Couldn't find file at {}".format(url))
615 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}")
--> 616 raise ConnectionError("Couldn't reach {}".format(url))
617
618 # Try a second time
ConnectionError: Couldn't reach https://storage.googleapis.com/illuin/fquad/train.json.zip
```
Does anyone know why that is and how to fix it?
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2212/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2212/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4078
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4078/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4078/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4078/events
|
https://github.com/huggingface/datasets/pull/4078
| 1,189,513,572
|
PR_kwDODunzps41eWnl
| 4,078
|
Fix GithubMetricModuleFactory instantiation with None download_config
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-01T09:26:58Z
| 2022-04-01T14:44:51Z
| 2022-04-01T14:39:27Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4078.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4078",
"merged_at": "2022-04-01T14:39:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4078.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4078"
}
|
Recent PR:
- #4063
introduced a potential bug if `GithubMetricModuleFactory` is instantiated with a None `download_config`.
This PR adds instantiation tests and fixes that potential issue.
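For context, a self-contained sketch of this bug class and the usual fix (illustrative names, not the factory's real code):
```python
class DownloadConfig:
    def __init__(self, max_retries: int = 1):
        self.max_retries = max_retries

class MetricModuleFactory:
    def __init__(self, name: str, download_config: DownloadConfig = None):
        # Fall back to a fresh config instead of dereferencing None later
        self.name = name
        self.download_config = download_config or DownloadConfig()

factory = MetricModuleFactory("accuracy", download_config=None)
print(factory.download_config.max_retries)  # 1, instead of an AttributeError
```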
CC: @lhoestq
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4078/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4078/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2164
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2164/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2164/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2164/events
|
https://github.com/huggingface/datasets/pull/2164
| 849,739,759
|
MDExOlB1bGxSZXF1ZXN0NjA4NDQ0MTE3
| 2,164
|
Replace assertTrue(isinstance with assertIsInstance in tests
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-04-03T21:07:02Z
| 2021-04-06T14:41:09Z
| 2021-04-06T14:41:08Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2164.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2164",
"merged_at": "2021-04-06T14:41:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2164.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2164"
}
|
Replaces all the occurrences of the `assertTrue(isinstance(` pattern with `assertIsInstance`.
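A small before/after using the standard `unittest` API:
```python
import unittest

class ExampleTest(unittest.TestCase):
    def test_type(self):
        value = {"a": 1}
        # Before: a failure only reports "False is not true"
        self.assertTrue(isinstance(value, dict))
        # After: a failure reports the actual type of `value`
        self.assertIsInstance(value, dict)

if __name__ == "__main__":
    unittest.main()
```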
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2164/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2164/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3783
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3783/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3783/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3783/events
|
https://github.com/huggingface/datasets/pull/3783
| 1,149,256,744
|
PR_kwDODunzps4zZ1jR
| 3,783
|
Support passing str to iter_files
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"@mariosasko it was indeed while reading that PR, that I remembered this change I wanted to do long ago... 😉"
] | 2022-02-24T12:58:15Z
| 2022-02-24T16:01:40Z
| 2022-02-24T16:01:40Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3783.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3783",
"merged_at": "2022-02-24T16:01:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3783.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3783"
}
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3783/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3783/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2058
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2058/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2058/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2058/events
|
https://github.com/huggingface/datasets/issues/2058
| 832,159,844
|
MDU6SXNzdWU4MzIxNTk4NDQ=
| 2,058
|
Is it possible to convert a `tfds` to HuggingFace `dataset`?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6608232?v=4",
"events_url": "https://api.github.com/users/abarbosa94/events{/privacy}",
"followers_url": "https://api.github.com/users/abarbosa94/followers",
"following_url": "https://api.github.com/users/abarbosa94/following{/other_user}",
"gists_url": "https://api.github.com/users/abarbosa94/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/abarbosa94",
"id": 6608232,
"login": "abarbosa94",
"node_id": "MDQ6VXNlcjY2MDgyMzI=",
"organizations_url": "https://api.github.com/users/abarbosa94/orgs",
"received_events_url": "https://api.github.com/users/abarbosa94/received_events",
"repos_url": "https://api.github.com/users/abarbosa94/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/abarbosa94/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abarbosa94/subscriptions",
"type": "User",
"url": "https://api.github.com/users/abarbosa94"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi! You can either save the TF dataset to one of the formats supported by datasets (`parquet`, `csv`, `json`, ...) or pass a generator function to `Dataset.from_generator` that yields its examples."
] | 2021-03-15T20:18:47Z
| 2023-07-25T16:47:40Z
| 2023-07-25T16:47:40Z
|
CONTRIBUTOR
| null | null | null |
I was having some weird bugs with the HuggingFace version of the `C4` dataset, so I decided to try to download `C4` from `tfds`. I would like to know if it is possible to convert a tfds dataset to the HuggingFace dataset format :)
I can also open a new issue reporting the bug I'm receiving with `datasets.load_dataset('c4','en')` in the future if you think that it would be useful.
Thanks!
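For anyone landing here, a sketch of the generator route suggested in the comment above; it assumes a `datasets` release that ships `Dataset.from_generator`, with MNIST as a small stand-in for C4:
```python
import tensorflow_datasets as tfds
from datasets import Dataset

def gen():
    # tfds.as_numpy yields plain numpy examples instead of tf.Tensors
    for example in tfds.as_numpy(tfds.load("mnist", split="train")):
        yield {"image": example["image"], "label": int(example["label"])}

ds = Dataset.from_generator(gen)
print(ds)
```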
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2058/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2058/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/1468
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1468/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1468/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1468/events
|
https://github.com/huggingface/datasets/pull/1468
| 761,607,531
|
MDExOlB1bGxSZXF1ZXN0NTM2MjQ5OTg0
| 1,468
|
add Indonesian newspapers (id_newspapers_2018)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4",
"events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}",
"followers_url": "https://api.github.com/users/cahya-wirawan/followers",
"following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}",
"gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cahya-wirawan",
"id": 7669893,
"login": "cahya-wirawan",
"node_id": "MDQ6VXNlcjc2Njk4OTM=",
"organizations_url": "https://api.github.com/users/cahya-wirawan/orgs",
"received_events_url": "https://api.github.com/users/cahya-wirawan/received_events",
"repos_url": "https://api.github.com/users/cahya-wirawan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cahya-wirawan"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Looks like there's a `Path` issue on windows. Could you try switching to\r\n`glob.glob(os.path.join(article_dir, \"*.json\"))`",
"> Looks like there's a `Path` issue on windows. Could you try switching to\r\n> `glob.glob(os.path.join(article_dir, \"*.json\"))`\r\n\r\nThanks, I replaced it with glob. Let's see if it solves the issue. Anyway, the main directory has a space, could it make the issue on windows? the test on linux don't have this problem.",
"It seems glob doesn't help also. Btw, one of the failing test tried to connect aws which failed:\r\n```\r\nC:\\tools\\miniconda3\\lib\\site-packages\\urllib3\\connection.py:160: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n\r\naddress = ('s3.amazonaws.com', 443), timeout = 10, source_address = None\r\nsocket_options = [(6, 1, 1)]\r\n\r\n```\r\nWhy did it try to connect to aws? I don't use it.",
"It seems that the circleci make a test for whole datasets repository, that means if only one of the dataset in the official repository has a download issue, this will also affect the test of a new dataset like mine, isn't it?\r\nI changed the url to my newspaper dataset which contains only few simple json files and simple directory structure. But it still failed. And it failed not only on windows test. This is one of the error message:\r\n```\r\n-- Docs: https://docs.pytest.org/en/stable/warnings.html\r\n=========================== short test summary info ============================\r\nFAILED tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_ajgt_twitter_ar\r\nFAILED tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_chr_en\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_ajgt_twitter_ar\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_chr_en\r\n===== 4 failed, 2667 passed, 2052 skipped, 4 warnings in 432.05s (0:07:12) =====\r\n\r\nExited with code exit status 1\r\nCircleCI received exit code 1\r\n```\r\nThe test failed on twitter dataset even my dataset has nothing to do with twitter? ",
"merging since the CI is fixed on master",
"Hi, thanks for merging the dataset. I create a new PR (#1499) since I need to update the link to the dataset. "
] | 2020-12-10T20:54:12Z
| 2020-12-12T08:50:51Z
| 2020-12-11T17:04:41Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1468.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1468",
"merged_at": "2020-12-11T17:04:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1468.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1468"
}
|
The dataset contains around 500K articles (136M words) from 7 Indonesian newspapers. Uncompressed, the 500K json files (distributed as newspapers-json.tgz) total around 2.2GB.
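For reference, a hedged sketch of the portable file listing suggested in the review comments; `article_dir` below is a placeholder path:
```python
# Hedged sketch of the glob-based listing suggested in the review:
# os.path.join + glob works on both Linux and Windows, unlike a
# hand-built Path pattern.
import glob
import os

article_dir = "newspapers-json/newspaper_a"  # placeholder path
json_files = sorted(glob.glob(os.path.join(article_dir, "*.json")))
```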
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1468/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1468/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1850
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1850/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1850/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1850/events
|
https://github.com/huggingface/datasets/pull/1850
| 804,412,249
|
MDExOlB1bGxSZXF1ZXN0NTcwMTg0MDAx
| 1,850
|
Add cord 19 dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5583410?v=4",
"events_url": "https://api.github.com/users/ggdupont/events{/privacy}",
"followers_url": "https://api.github.com/users/ggdupont/followers",
"following_url": "https://api.github.com/users/ggdupont/following{/other_user}",
"gists_url": "https://api.github.com/users/ggdupont/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ggdupont",
"id": 5583410,
"login": "ggdupont",
"node_id": "MDQ6VXNlcjU1ODM0MTA=",
"organizations_url": "https://api.github.com/users/ggdupont/orgs",
"received_events_url": "https://api.github.com/users/ggdupont/received_events",
"repos_url": "https://api.github.com/users/ggdupont/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ggdupont/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ggdupont/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ggdupont"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Cleaned-up version of previous PR: https://github.com/huggingface/datasets/pull/1129",
"@lhoestq FYI",
"Before merging I might tweak a little bit the dummy data to avoid having to check if the `document_parses` and `embeddings` directories exist or not. I'll do that later today",
"Looks all good now ! Thanks a lot @ggdupont :)\r\nMerging"
] | 2021-02-09T10:22:08Z
| 2021-02-09T15:16:26Z
| 2021-02-09T15:16:26Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1850.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1850",
"merged_at": "2021-02-09T15:16:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1850.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1850"
}
|
Initial version only reading the metadata in CSV.
### Checklist:
- [x] Create the dataset script /datasets/my_dataset/my_dataset.py using the template
- [x] Fill the _DESCRIPTION and _CITATION variables
- [x] Implement _infos(), _split_generators() and _generate_examples()
- [x] Make sure that the BUILDER_CONFIGS class attribute is filled with the different configurations of the dataset and that the BUILDER_CONFIG_CLASS is specified if there is a custom config class.
- [x] Generate the metadata file dataset_infos.json for all configurations
- [x] Generate the dummy data dummy_data.zip files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card README.md using the template and at least fill the tags
- [x] Both tests for the real data and the dummy data pass.
### Extras:
- [x] add more metadata
- [x] add full text
- [x] add pre-computed document embedding
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1850/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1850/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3675
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3675/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3675/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3675/events
|
https://github.com/huggingface/datasets/issues/3675
| 1,123,078,408
|
I_kwDODunzps5C8NEI
| 3,675
|
Add CodeContests dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] |
closed
| false
| null |
[] | null |
[
"@mariosasko Can I take this up?",
"This dataset is now available here: https://huggingface.co/datasets/deepmind/code_contests."
] | 2022-02-03T13:20:00Z
| 2022-07-20T11:07:05Z
| 2022-07-20T11:07:05Z
|
CONTRIBUTOR
| null | null | null |
## Adding a Dataset
- **Name:** CodeContests
- **Description:** CodeContests is a competitive programming dataset for machine-learning.
- **Paper:**
- **Data:** https://github.com/deepmind/code_contests
- **Motivation:** This dataset was used when training [AlphaCode](https://deepmind.com/blog/article/Competitive-programming-with-AlphaCode).
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
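As noted in the closing comment, the dataset is now hosted on the Hub and can be loaded directly (split name assumed):
```python
from datasets import load_dataset

# Hub repo taken from the closing comment; the split name is assumed.
ds = load_dataset("deepmind/code_contests", split="train")
```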
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3675/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3675/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/3059
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3059/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3059/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3059/events
|
https://github.com/huggingface/datasets/pull/3059
| 1,022,620,057
|
PR_kwDODunzps4tA54w
| 3,059
|
Fix task reloading from cache
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-10-11T12:03:04Z
| 2021-10-11T12:23:39Z
| 2021-10-11T12:23:39Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3059.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3059",
"merged_at": "2021-10-11T12:23:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3059.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3059"
}
|
When reloading a dataset from the cache when doing `map`, the task templates were kept as-is instead of being updated according to the output of the `map` function. This is an issue because we drop the task templates that are no longer compatible after `map`, for example if a column of the template was removed.
This PR fixes this and, for convenience, introduces a decorator `@transmit_tasks` that takes care of this verification, similar to the `@transmit_format` decorator.
This should fix issue https://github.com/huggingface/datasets/issues/3047 cc @sgugger
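For illustration only, a rough sketch of the idea behind such a decorator; the `column_mapping` attribute and the overall shape are assumptions, not the actual implementation in this PR:
```python
# Illustrative sketch only: after a transform, keep just the task
# templates whose mapped columns still exist in the output dataset.
# `column_mapping` is assumed here; the real decorator lives in `datasets`.
import functools

def transmit_tasks(func):
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        out = func(self, *args, **kwargs)
        templates = out.info.task_templates or []
        # Drop templates referring to columns that no longer exist.
        out.info.task_templates = [
            t for t in templates
            if all(col in out.column_names for col in t.column_mapping)
        ]
        return out
    return wrapper
```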
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 2,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3059/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3059/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1241
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1241/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1241/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1241/events
|
https://github.com/huggingface/datasets/pull/1241
| 758,360,643
|
MDExOlB1bGxSZXF1ZXN0NTMzNTQ1OTQ0
| 1,241
|
Opus elhuyar dataset for MT task having languages pair in Spanish to Basque
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6419011?v=4",
"events_url": "https://api.github.com/users/spatil6/events{/privacy}",
"followers_url": "https://api.github.com/users/spatil6/followers",
"following_url": "https://api.github.com/users/spatil6/following{/other_user}",
"gists_url": "https://api.github.com/users/spatil6/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/spatil6",
"id": 6419011,
"login": "spatil6",
"node_id": "MDQ6VXNlcjY0MTkwMTE=",
"organizations_url": "https://api.github.com/users/spatil6/orgs",
"received_events_url": "https://api.github.com/users/spatil6/received_events",
"repos_url": "https://api.github.com/users/spatil6/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/spatil6/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/spatil6/subscriptions",
"type": "User",
"url": "https://api.github.com/users/spatil6"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-12-07T10:03:34Z
| 2020-12-19T14:55:12Z
| 2020-12-09T15:12:48Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1241.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1241",
"merged_at": "2020-12-09T15:12:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1241.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1241"
}
|
Opus elhuyar dataset for MT task having languages pair in Spanish to Basque
More info : http://opus.nlpl.eu/Elhuyar.php
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1241/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1241/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/5828
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5828/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5828/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5828/events
|
https://github.com/huggingface/datasets/issues/5828
| 1,699,235,739
|
I_kwDODunzps5lSEeb
| 5,828
|
Stream data concatenation issue
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/48817796?v=4",
"events_url": "https://api.github.com/users/krishnapriya-18/events{/privacy}",
"followers_url": "https://api.github.com/users/krishnapriya-18/followers",
"following_url": "https://api.github.com/users/krishnapriya-18/following{/other_user}",
"gists_url": "https://api.github.com/users/krishnapriya-18/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/krishnapriya-18",
"id": 48817796,
"login": "krishnapriya-18",
"node_id": "MDQ6VXNlcjQ4ODE3Nzk2",
"organizations_url": "https://api.github.com/users/krishnapriya-18/orgs",
"received_events_url": "https://api.github.com/users/krishnapriya-18/received_events",
"repos_url": "https://api.github.com/users/krishnapriya-18/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/krishnapriya-18/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/krishnapriya-18/subscriptions",
"type": "User",
"url": "https://api.github.com/users/krishnapriya-18"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi! \r\n\r\nYou can call `map` as follows to avoid the error:\r\n```python\r\naugmented_dataset_cln = dataset_cln['train'].map(augment_dataset, features=dataset_cln['train'].features)\r\n```",
"Thanks it is solved",
"Hi! \r\nI have run into the same problem with you. Could you please let me know how you solve it? Thanks!"
] | 2023-05-07T21:02:54Z
| 2023-06-29T20:07:56Z
| 2023-05-10T05:05:47Z
|
NONE
| null | null | null |
### Describe the bug
I am not able to concatenate the augmented version of a streamed dataset with the original. I am using the latest version of `datasets`.
ValueError: The features can't be aligned because the key audio of features {'audio_id': Value(dtype='string', id=None), 'audio': {'array': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), 'path': Value(dtype='null', id=None), 'sampling_rate': Value(dtype='int64', id=None)}, 'transcript': Value(dtype='string', id=None)} has unexpected type - {'array': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), 'path': Value(dtype='null', id=None), 'sampling_rate': Value(dtype='int64', id=None)} (expected either Audio(sampling_rate=16000, mono=True, decode=True, id=None) or Value("null")).
### Steps to reproduce the bug
from datasets import Audio, interleave_datasets, load_dataset

dataset = load_dataset("tobiolatunji/afrispeech-200", "all", streaming=True).shuffle(seed=42)
dataset_cln = dataset.remove_columns(['speaker_id', 'path', 'age_group', 'gender', 'accent', 'domain', 'country', 'duration'])
dataset_cln = dataset_cln.cast_column("audio", Audio(sampling_rate=16000))
from audiomentations import AddGaussianNoise,Compose,Gain,OneOf,PitchShift,PolarityInversion,TimeStretch
augmentation = Compose([
AddGaussianNoise(min_amplitude=0.005, max_amplitude=0.015, p=0.2)
])
def augment_dataset(batch):
audio = batch["audio"]
audio["array"] = augmentation(audio["array"], sample_rate=audio["sampling_rate"])
return batch
augmented_dataset_cln = dataset_cln['train'].map(augment_dataset)
dataset_cln['train'] = interleave_datasets([dataset_cln['train'], augmented_dataset_cln])
dataset_cln['train'] = dataset_cln['train'].shuffle(seed=42)
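For reference, the fix suggested in the comments passes the original features to `map` so the `audio` column keeps its `Audio` type:
```python
# Fix from the comments: pass the original features to map() so the
# "audio" column stays an Audio feature and the datasets can be aligned.
augmented_dataset_cln = dataset_cln["train"].map(
    augment_dataset, features=dataset_cln["train"].features
)
```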
### Expected behavior
I should be able to merge as sampling rate is same.
### Environment info
import datasets
import transformers
import accelerate
import torch
import evaluate
print(datasets.__version__)
print(transformers.__version__)
print(torch.__version__)
print(evaluate.__version__)
print(accelerate.__version__)
2.12.0
4.28.1
2.0.0
0.4.0
0.18.0
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5828/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5828/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/1647
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1647/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1647/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1647/events
|
https://github.com/huggingface/datasets/issues/1647
| 775,525,799
|
MDU6SXNzdWU3NzU1MjU3OTk=
| 1,647
|
NarrativeQA fails to load with `load_dataset`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/56408839?v=4",
"events_url": "https://api.github.com/users/eric-mitchell/events{/privacy}",
"followers_url": "https://api.github.com/users/eric-mitchell/followers",
"following_url": "https://api.github.com/users/eric-mitchell/following{/other_user}",
"gists_url": "https://api.github.com/users/eric-mitchell/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/eric-mitchell",
"id": 56408839,
"login": "eric-mitchell",
"node_id": "MDQ6VXNlcjU2NDA4ODM5",
"organizations_url": "https://api.github.com/users/eric-mitchell/orgs",
"received_events_url": "https://api.github.com/users/eric-mitchell/received_events",
"repos_url": "https://api.github.com/users/eric-mitchell/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/eric-mitchell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eric-mitchell/subscriptions",
"type": "User",
"url": "https://api.github.com/users/eric-mitchell"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi @eric-mitchell,\r\nI think the issue might be that this dataset was added during the community sprint and has not been released yet. It will be available with the v2 of `datasets`.\r\nFor now, you should be able to load the datasets after installing the latest (master) version of `datasets` using pip:\r\n`pip install git+https://github.com/huggingface/datasets.git@master`",
"@bhavitvyamalik Great, thanks for this! Confirmed that the problem is resolved on master at [cbbda53](https://github.com/huggingface/datasets/commit/cbbda53ac1520b01f0f67ed6017003936c41ec59).",
"Update: HuggingFace did an intermediate release yesterday just before the v2.0.\r\n\r\nTo load it you can just update `datasets`\r\n\r\n`pip install --upgrade datasets`"
] | 2020-12-28T18:16:09Z
| 2021-01-05T12:05:08Z
| 2021-01-03T17:58:05Z
|
NONE
| null | null | null |
When loading the NarrativeQA dataset with `load_dataset('narrativeqa')` as given in the documentation [here](https://huggingface.co/datasets/narrativeqa), I receive a cascade of exceptions, ending with
FileNotFoundError: Couldn't find file locally at narrativeqa/narrativeqa.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/narrativeqa/narrativeqa.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/narrativeqa/narrativeqa.py
Workaround: if I manually copy the `narrativeqa.py` builder into my local directory with
curl https://raw.githubusercontent.com/huggingface/datasets/master/datasets/narrativeqa/narrativeqa.py -o narrativeqa.py
and load the dataset as `load_dataset('narrativeqa.py')`, everything works fine. I'm on datasets v1.1.3 using Python 3.6.10.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1647/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1647/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/1943
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1943/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1943/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1943/events
|
https://github.com/huggingface/datasets/pull/1943
| 816,160,453
|
MDExOlB1bGxSZXF1ZXN0NTc5ODY5NTk0
| 1,943
|
Implement Dataset from JSON and JSON Lines
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Thanks @lhoestq. I was trying to follow @thomwolf suggestion about integrating that script but as `from_json` method...\r\n> Note that I don't think this is necessary a breaking change, we can still keep the old scripts around\r\n\r\nDo you think there is a better way of doing it?\r\n\r\nI was trying to implement more or less the same logic as in the script, but I confess I assumed the target was in-memory only...",
"Basically, I was trying to reimplement `Json(datasets.ArrowBasedBuilder)._generate_tables`, and no writing to arrow file (I assumed only in-memory usage). I started with the first \"else\" clause... \r\n\r\nI was planning to remove my `_cast_table_to_info_features` and use `paj.read_json(parse_options=...)` instead (like in the script).",
"@lhoestq I am wondering why `keep_in_memory` has no effect for JSON...",
"What's the issue exactly ? Apparently it's correctly passed to as_dataset so I don't find the issue",
"Nevermind @lhoestq, I found where the problem was in my code... I push!",
"<s>merging master into this branch should fix the CI issue :)</s>\r\n\r\nOops I didn't refresh the page sorry ^^'\r\n\r\nLooks all good !",
"Good job ! I think we can merge after the last changes regarding the error message and the docstring above :)",
"@lhoestq Done! And I have also added some tests for the `field` parameter.",
"Let me add some more tests for dict of lists JSON file, please.",
"@lhoestq done! ;)",
"We can merge. Additional work will be done in another PR. ;)"
] | 2021-02-25T07:17:33Z
| 2021-03-18T09:42:08Z
| 2021-03-18T09:42:08Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1943.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1943",
"merged_at": "2021-03-18T09:42:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1943.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1943"
}
|
Implement `Dataset.from_jsonl`.
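For reference, a minimal usage sketch of the API added here; the merged method is `Dataset.from_json`, covering both JSON and JSON Lines, and the file names below are placeholders:
```python
from datasets import Dataset

# JSON Lines: one object per line (placeholder file name).
ds = Dataset.from_json("data.jsonl")

# Regular JSON where the records live under a top-level key,
# selected with the `field` parameter discussed in the comments.
ds = Dataset.from_json("data.json", field="data")
```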
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1943/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1943/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1934
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1934/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1934/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1934/events
|
https://github.com/huggingface/datasets/issues/1934
| 814,437,190
|
MDU6SXNzdWU4MTQ0MzcxOTA=
| 1,934
|
Add Stanford Sentiment Treebank (SST)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/15801338?v=4",
"events_url": "https://api.github.com/users/patpizio/events{/privacy}",
"followers_url": "https://api.github.com/users/patpizio/followers",
"following_url": "https://api.github.com/users/patpizio/following{/other_user}",
"gists_url": "https://api.github.com/users/patpizio/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patpizio",
"id": 15801338,
"login": "patpizio",
"node_id": "MDQ6VXNlcjE1ODAxMzM4",
"organizations_url": "https://api.github.com/users/patpizio/orgs",
"received_events_url": "https://api.github.com/users/patpizio/received_events",
"repos_url": "https://api.github.com/users/patpizio/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patpizio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patpizio/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patpizio"
}
|
[
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] |
closed
| false
| null |
[] | null |
[
"Dataset added in release [1.5.0](https://github.com/huggingface/datasets/releases/tag/1.5.0), I think I can close this."
] | 2021-02-23T12:53:16Z
| 2021-03-18T17:51:44Z
| 2021-03-18T17:51:44Z
|
CONTRIBUTOR
| null | null | null |
I am going to add SST:
- **Name:** The Stanford Sentiment Treebank
- **Description:** The first corpus with fully labeled parse trees that allows for a complete analysis of the compositional effects of sentiment in language
- **Paper:** [Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank](https://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf)
- **Data:** https://nlp.stanford.edu/sentiment/index.html
- **Motivation:** Already requested in #353, SST is a popular dataset for Sentiment Classification
What's the difference with the [_SST-2_](https://huggingface.co/datasets/viewer/?dataset=glue&config=sst2) dataset included in GLUE? Essentially, SST-2 is a version of SST where:
- the labels were mapped from real numbers in [0.0, 1.0] to a binary label: {0, 1}
- the labels of the *sub-sentences* were included only in the training set
- the labels in the test set are obfuscated
So there is a lot more information in the original SST. The tricky bit is that the data is scattered across many text files and, for one in particular, I couldn't find the original encoding ([*but I'm not the only one*](https://groups.google.com/g/word2vec-toolkit/c/QIUjLw6RqFk/m/_iEeyt428wkJ) 🎵). The only solution I found was to manually replace all the è, ë, ç and so on in a `utf-8` copy of the text file. I uploaded the result to my Dropbox and I am using that as the main repo for the dataset.
Also, the _sub-sentences_ are built at run-time from the information encoded in several text files, so generating the examples is a bit more cumbersome than usual. Luckily, the dataset is not enormous.
I plan to divide the dataset into 2 configs: one with just whole sentences with their labels, the other with sentences _and their sub-sentences_ with their labels. Each config will be split into train, validation and test. Hopefully this makes sense; we may discuss it in the PR I'm going to submit.
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1934/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1934/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4803
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4803/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4803/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4803/events
|
https://github.com/huggingface/datasets/issues/4803
| 1,332,079,562
|
I_kwDODunzps5PZevK
| 4,803
|
Support `pipeline` argument in inspect.py functions
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[
"Now: the preview (first-rows) works, but not the conversion to parquet. See https://huggingface.co/datasets/wikipedia/viewer/20220301.de/train\r\n\r\n```\r\n_split_generators() missing 1 required positional argument: 'pipeline'\r\n\r\nError code: UnexpectedError\r\n```"
] | 2022-08-08T16:01:24Z
| 2023-09-25T12:21:35Z
| null |
CONTRIBUTOR
| null | null | null |
**Is your feature request related to a problem? Please describe.**
The `wikipedia` dataset requires a `pipeline` argument to build the list of splits:
https://huggingface.co/datasets/wikipedia/blob/main/wikipedia.py#L937
But this is currently not supported in `get_dataset_config_info`:
https://github.com/huggingface/datasets/blob/main/src/datasets/inspect.py#L373-L375
which is called by other functions, e.g. `get_dataset_split_names`.
**Additional context**
The dataset viewer is not working out-of-the-box on `wikipedia` for this reason:
https://huggingface.co/datasets/wikipedia/viewer
<img width="637" alt="Capture d’écran 2022-08-08 à 12 01 16" src="https://user-images.githubusercontent.com/1676121/183461838-5330783b-0269-4ba7-a999-314cde2023d8.png">
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4803/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4803/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4895
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4895/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4895/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4895/events
|
https://github.com/huggingface/datasets/issues/4895
| 1,350,798,527
|
I_kwDODunzps5Qg4y_
| 4,895
|
load_dataset method returns Unknown split "validation" even if this dir exists
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/13418507?v=4",
"events_url": "https://api.github.com/users/SamSamhuns/events{/privacy}",
"followers_url": "https://api.github.com/users/SamSamhuns/followers",
"following_url": "https://api.github.com/users/SamSamhuns/following{/other_user}",
"gists_url": "https://api.github.com/users/SamSamhuns/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SamSamhuns",
"id": 13418507,
"login": "SamSamhuns",
"node_id": "MDQ6VXNlcjEzNDE4NTA3",
"organizations_url": "https://api.github.com/users/SamSamhuns/orgs",
"received_events_url": "https://api.github.com/users/SamSamhuns/received_events",
"repos_url": "https://api.github.com/users/SamSamhuns/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SamSamhuns/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SamSamhuns/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SamSamhuns"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[
"I don't know the main problem but it looks like, it is ignoring the last directory in your case. So, create a directory called 'zzz' in the same folder as train, validation and test. if it doesn't work, create a directory called \"aaa\". It worked for me.\r\n",
"@SamSamhuns could you please try to load it with the current main-branch version of `datasets`? I suppose the problem is that it tries to get splits names from filenames in this case, ignoring directories names, but `val` wasn't in keywords at that time, but it was fixed recently in this PR https://github.com/huggingface/datasets/pull/4844. ",
"I have a similar problem.\r\nWhen I try to create `data_infos.json` using `datasets-cli test Peter.py --save_infos --all_configs` I get an error:\r\n`ValueError: Unknown split \"test\". Should be one of ['train'].`\r\n\r\nThe `data_infos.json` is created perfectly fine when I use only one split - `datasets.Split.TRAIN`\r\n\r\n@polinaeterna Could you help here please?\r\n\r\nYou can find the code here: https://huggingface.co/datasets/sberbank-ai/Peter/tree/add_splits (add_splits branch)",
"@skalinin It seems the `dataset_infos.json` of your dataset is missing the info on the test split (and `datasets-cli` doesn't ignore the cached infos at the moment, which is a known bug), so your issue is not related to this one. I think you can fix your issue by deleting all the cached `dataset_infos.json` (in the local repo and in `~/.cache/huggingface/modules`) before running the `datasets-cli test` command. Let us know if that doesn't help, and I can try to generate it myself.",
"This code indeed behaves as expected on `main`. But suppose the `val_234.png` is renamed to some other value not containing one of [these](https://github.com/huggingface/datasets/blob/38c8c725f3996ff1ff03f6fd461aa6d645321034/src/datasets/data_files.py#L31) keywords, in that case, this issue becomes relevant again because the real cause of it is the order in which we check the predefined split patterns to assign data files to each split - first we assign data files based on filenames, and only if this fails meaning not a single split found (`val` is not recognized here in the older versions of `datasets`, which results in an empty `validation` split), do we assign based on directory names.\r\n\r\n@polinaeterna @lhoestq Perhaps one way to fix this would be to swap the [order](https://github.com/huggingface/datasets/blob/38c8c725f3996ff1ff03f6fd461aa6d645321034/src/datasets/data_files.py#L78-L79) of the patterns if `data_dir` is specified (or if `load_dataset(data_dir)` is called)? ",
"> @polinaeterna @lhoestq Perhaps one way to fix this would be to swap the [order](https://github.com/huggingface/datasets/blob/38c8c725f3996ff1ff03f6fd461aa6d645321034/src/datasets/data_files.py#L78-L79) of the patterns if data_dir is specified (or if load_dataset(data_dir) is called)?\r\n\r\nyes that makes sense !",
"Looks like the `val/validation` dir name issue is fixed with the current main-branch version of the `datasets` repository. \r\n\r\n> @polinaeterna @lhoestq Perhaps one way to fix this would be to swap the [order](https://github.com/huggingface/datasets/blob/38c8c725f3996ff1ff03f6fd461aa6d645321034/src/datasets/data_files.py#L78-L79) of the patterns if data_dir is specified (or if load_dataset(data_dir) is called)?\r\n\r\nI agree with this as well. I would expect higher precedence to the directory name over the file name. Right now if I place a single file named `train_00001.jpg` under the `validation` directory, `load_dataset` cannot find the validation split.",
"Thanks for the reply\r\n\r\nI've created a separate [issue](https://github.com/huggingface/datasets/issues/4982#issue-1375604693) for my problem.",
"> @polinaeterna @lhoestq Perhaps one way to fix this would be to swap the [order](https://github.com/huggingface/datasets/blob/38c8c725f3996ff1ff03f6fd461aa6d645321034/src/datasets/data_files.py#L78-L79) of the patterns if data_dir is specified (or if load_dataset(data_dir) is called)?\r\n\r\nSounds good to me! opened a PR: https://github.com/huggingface/datasets/pull/4985",
"Hi there @polinaeterna @mariosasko ! I have installed 5.2.3.dev0, which should have this fix. Unfortunately, I am still getting the error:\r\n`ValueError: Unknown split \"validation\". Should be one of ['train'].` When I call `load_dataset(\"csv\", data_files=files, split=split)`\r\n\r\nAny help would be greatly appreciated!",
"hi @shaneacton ! could you please show your dataset structure?",
"Hi there @polinaeterna . My local CSV files are stored as follows:\r\nbinding:\r\n---------- tune.csv\r\n---------- public_data:\r\n--------------------------- train.csv\r\n\r\n`self.list_shards(split)` sucessfully finds the relevant data files",
"@shaneacton do you have `validation.csv`/`val.csv`/`valid.csv`/`dev.csv` file in your data folder? I can't find it in the structure you provided",
"@polinaeterna no, does the name of the split need to match the name of the file exactly?\r\n\r\nBut my train file is not actually named 'train.py' its called 'XXXXXXXXX_train_XXXXXXXX.csv'\r\nAnd the code works fine for train, but fails for validation.\r\nDoes the file name need to _contain_ the split name?",
"@shaneacton what files do you expect to be included in \"validation\" split? yes, you should somehow indicate that a file belongs to a certain split - either by including split name in a filename or by putting it into a folder with split name, you can also check out [this documentation page](https://huggingface.co/docs/datasets/main/en/repository_structure) :)\r\nby default all the data goes to a single `train` split",
"@polinaeterna I have specified my train/test/tune files via the `split_to_filepattern` argument when initialising my `FileDataSource` class. This is how `list_shards` is able to find the right files.\r\nAfter your last message, I have tried renaminig my data files to simply `train.csv` and `validation.csv`, however I am still getting the same error: `Unknown split \"validation\". Should be one of ['train']`",
"@polinaeterna I have solved the issue. The solution was to call:\r\n`load_dataset(\"csv\", data_files={split: files}, split=split)`"
] | 2022-08-25T12:11:00Z
| 2022-10-06T17:49:28Z
| 2022-09-29T08:07:50Z
|
NONE
| null | null | null |
## Describe the bug
The `datasets.load_dataset` function returns a `ValueError: Unknown split "validation". Should be one of ['train', 'test'].` when running `load_dataset(local_data_dir_path, split="validation")`, even though the `validation` sub-directory exists in the local data path.
The data directories are as follows and attached to this issue:
```
test_data1
|_ train
|_ 1012.png
|_ metadata.jsonl
...
|_ test
...
|_ validation
|_ 234.png
|_ metadata.jsonl
...
test_data2
|_ train
|_ train_1012.png
|_ metadata.jsonl
...
|_ test
...
|_ validation
|_ val_234.png
|_ metadata.jsonl
...
```
They contain the same image files and `metadata.jsonl`, but the images in `test_data2` have the split names prepended, i.e. `train_1012.png, val_234.png`, while the images in `test_data1` do not, i.e. `1012.png, 234.png`.
I actually saw in another issue that `val` was not recognized as a split name, but here I would expect the files to take their split from the parent directory name, i.e. `val_234.png` should become part of the validation split.
## Steps to reproduce the bug
```python
import datasets
datasets.logging.set_verbosity_error()
from datasets import load_dataset, get_dataset_split_names
# the following only finds train, validation and test splits correctly
path = "./test_data1"
print("######################", get_dataset_split_names(path), "######################")
dataset_list = []
for spt in ["train", "test", "validation"]:
dataset = load_dataset(path, split=spt)
dataset_list.append(dataset)
# the following only finds train and test splits
path = "./test_data2"
print("######################", get_dataset_split_names(path), "######################")
dataset_list = []
for spt in ["train", "test", "validation"]:
dataset = load_dataset(path, split=spt)
dataset_list.append(dataset)
```
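For reference, the workaround from the final comment in the thread maps each split to its files explicitly, so no filename- or directory-based inference is needed (the builder and paths below are placeholders):
```python
from datasets import load_dataset

# Workaround from the comments: explicit split-to-files mapping
# (builder name and glob pattern are placeholders).
files = {"validation": ["path/to/validation/*.csv"]}
ds = load_dataset("csv", data_files=files, split="validation")
```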
## Expected results
```
###################### ['train', 'test', 'validation'] ######################
###################### ['train', 'test', 'validation'] ######################
```
## Actual results
```
Traceback (most recent call last):
File "test_data_loader.py", line 11, in <module>
dataset = load_dataset(path, split=spt)
File "/home/venv/lib/python3.8/site-packages/datasets/load.py", line 1758, in load_dataset
ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)
File "/home/venv/lib/python3.8/site-packages/datasets/builder.py", line 893, in as_dataset
datasets = map_nested(
File "/home/venv/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 385, in map_nested
return function(data_struct)
File "/home/venv/lib/python3.8/site-packages/datasets/builder.py", line 924, in _build_single_dataset
ds = self._as_dataset(
File "/home/venv/lib/python3.8/site-packages/datasets/builder.py", line 993, in _as_dataset
dataset_kwargs = ArrowReader(self._cache_dir, self.info).read(
File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 211, in read
files = self.get_file_instructions(name, instructions, split_infos)
File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 184, in get_file_instructions
file_instructions = make_file_instructions(
File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 107, in make_file_instructions
absolute_instructions = instruction.to_absolute(name2len)
File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 616, in to_absolute
return [_rel_to_abs_instr(rel_instr, name2len) for rel_instr in self._relative_instructions]
File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 616, in <listcomp>
return [_rel_to_abs_instr(rel_instr, name2len) for rel_instr in self._relative_instructions]
File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 433, in _rel_to_abs_instr
raise ValueError(f'Unknown split "{split}". Should be one of {list(name2len)}.')
ValueError: Unknown split "validation". Should be one of ['train', 'test'].
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform: Linux Ubuntu 18.04
- Python version: 3.8.12
- PyArrow version: 9.0.0
Data files
[test_data1.zip](https://github.com/huggingface/datasets/files/9424463/test_data1.zip)
[test_data2.zip](https://github.com/huggingface/datasets/files/9424468/test_data2.zip)
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4895/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4895/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6166
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6166/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6166/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6166/events
|
https://github.com/huggingface/datasets/pull/6166
| 1,861,259,055
|
PR_kwDODunzps5YfOt0
| 6,166
|
Document BUILDER_CONFIG_CLASS
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009036 / 0.011353 (-0.002317) | 0.004564 / 0.011008 (-0.006444) | 0.114958 / 0.038508 (0.076449) | 0.087329 / 0.023109 (0.064220) | 0.440111 / 0.275898 (0.164213) | 0.486056 / 0.323480 (0.162576) | 0.006580 / 0.007986 (-0.001406) | 0.004257 / 0.004328 (-0.000072) | 0.093458 / 0.004250 (0.089208) | 0.063380 / 0.037052 (0.026328) | 0.469455 / 0.258489 (0.210966) | 0.521630 / 0.293841 (0.227790) | 0.053496 / 0.128546 (-0.075050) | 0.013466 / 0.075646 (-0.062181) | 0.361629 / 0.419271 (-0.057642) | 0.068095 / 0.043533 (0.024562) | 0.472440 / 0.255139 (0.217301) | 0.508682 / 0.283200 (0.225483) | 0.034648 / 0.141683 (-0.107035) | 1.820117 / 1.452155 (0.367962) | 1.933448 / 1.492716 (0.440732) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.276543 / 0.018006 (0.258537) | 0.563380 / 0.000490 (0.562890) | 0.005345 / 0.000200 (0.005146) | 0.000107 / 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029230 / 0.037411 (-0.008181) | 0.095613 / 0.014526 (0.081087) | 0.106178 / 0.176557 (-0.070378) | 0.181095 / 0.737135 (-0.556040) | 0.107789 / 0.296338 (-0.188550) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.612051 / 0.215209 (0.396842) | 6.065008 / 2.077655 (3.987353) | 2.720911 / 1.504120 (1.216791) | 2.495218 / 1.541195 (0.954023) | 2.423351 / 1.468490 
(0.954860) | 0.835571 / 4.584777 (-3.749205) | 5.438230 / 3.745712 (1.692518) | 4.550301 / 5.269862 (-0.719561) | 2.919889 / 4.565676 (-1.645788) | 0.097748 / 0.424275 (-0.326527) | 0.009285 / 0.007607 (0.001678) | 0.741968 / 0.226044 (0.515923) | 7.285394 / 2.268929 (5.016466) | 3.433634 / 55.444624 (-52.010991) | 2.680823 / 6.876477 (-4.195654) | 2.931149 / 2.142072 (0.789076) | 1.012852 / 4.805227 (-3.792375) | 0.224899 / 6.500664 (-6.275765) | 0.089411 / 0.075469 (0.013942) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.622759 / 1.841788 (-0.219029) | 23.690030 / 8.074308 (15.615721) | 21.034451 / 10.191392 (10.843059) | 0.241504 / 0.680424 (-0.438920) | 0.030109 / 0.534201 (-0.504092) | 0.472536 / 0.579283 (-0.106747) | 0.631396 / 0.434364 (0.197032) | 0.598997 / 0.540337 (0.058659) | 0.798680 / 1.386936 (-0.588256) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008696 / 0.011353 (-0.002657) | 0.005032 / 0.011008 (-0.005977) | 0.087369 / 0.038508 (0.048861) | 0.078105 / 0.023109 (0.054996) | 0.464861 / 0.275898 (0.188963) | 0.509620 / 0.323480 (0.186140) | 0.006399 / 0.007986 (-0.001587) | 0.004276 / 0.004328 (-0.000052) | 0.081643 / 0.004250 (0.077393) | 0.062560 / 0.037052 (0.025508) | 0.495377 / 0.258489 (0.236888) | 0.484885 / 0.293841 (0.191044) | 0.054354 / 0.128546 (-0.074193) | 0.013851 / 0.075646 (-0.061795) | 0.089531 / 0.419271 (-0.329740) | 0.068732 / 0.043533 (0.025199) | 0.455842 / 0.255139 (0.200703) | 0.528775 / 0.283200 (0.245575) | 0.039646 / 0.141683 (-0.102037) | 1.733600 / 1.452155 (0.281445) | 1.879074 / 1.492716 (0.386358) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.369616 / 0.018006 (0.351610) | 0.607426 / 0.000490 (0.606936) | 0.055540 / 0.000200 (0.055341) | 0.000543 / 0.000054 (0.000488) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036026 / 0.037411 (-0.001385) | 0.103968 / 0.014526 (0.089442) | 0.114852 / 0.176557 (-0.061705) | 0.187313 / 0.737135 (-0.549822) | 0.116839 / 0.296338 (-0.179500) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.614018 / 0.215209 (0.398809) | 6.139914 / 2.077655 (4.062259) | 2.826246 / 1.504120 (1.322126) | 2.524133 / 1.541195 (0.982938) | 2.606981 / 1.468490 (1.138491) | 0.844604 / 4.584777 (-3.740173) | 5.537178 / 3.745712 (1.791465) | 4.594624 / 5.269862 (-0.675237) | 3.032145 / 4.565676 (-1.533532) | 0.094771 / 0.424275 (-0.329504) | 0.008132 / 0.007607 (0.000525) | 0.714287 / 0.226044 (0.488242) | 7.296733 / 2.268929 (5.027804) | 3.698066 / 55.444624 (-51.746558) | 2.862781 / 6.876477 (-4.013696) | 3.114502 / 2.142072 (0.972429) | 0.986612 / 4.805227 (-3.818616) | 0.214438 / 6.500664 (-6.286226) | 0.076201 / 0.075469 (0.000732) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.747728 / 1.841788 (-0.094060) | 24.159845 / 8.074308 (16.085537) | 23.553485 / 10.191392 (13.362093) | 0.248387 / 0.680424 (-0.432037) | 0.029850 / 0.534201 (-0.504351) | 0.526416 / 0.579283 (-0.052867) | 0.625681 / 0.434364 (0.191317) | 0.619690 / 0.540337 (0.079352) | 0.827485 / 1.386936 (-0.559451) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006728 / 0.011353 (-0.004625) | 0.003960 / 0.011008 (-0.007048) | 0.085569 / 0.038508 (0.047061) | 0.077463 / 0.023109 (0.054354) | 0.343112 / 0.275898 (0.067214) | 0.379128 / 0.323480 (0.055648) | 0.004087 / 0.007986 (-0.003899) | 0.003357 / 0.004328 (-0.000972) | 0.065570 / 0.004250 (0.061320) | 0.056259 / 0.037052 (0.019207) | 0.368595 / 0.258489 (0.110106) | 0.402672 / 0.293841 (0.108831) | 0.030946 / 0.128546 (-0.097600) | 0.008509 / 0.075646 (-0.067137) | 0.288552 / 0.419271 (-0.130719) | 0.052134 / 0.043533 (0.008601) | 0.344653 / 0.255139 (0.089514) | 0.374199 / 0.283200 (0.090999) | 0.026251 / 0.141683 (-0.115432) | 1.488258 / 1.452155 (0.036103) | 1.567119 / 1.492716 (0.074402) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.218740 / 0.018006 (0.200734) | 0.465483 / 0.000490 (0.464994) | 0.003959 / 0.000200 (0.003759) | 0.000083 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029860 / 0.037411 (-0.007551) | 0.087968 / 0.014526 (0.073442) | 0.098257 / 0.176557 (-0.078299) | 0.155478 / 0.737135 (-0.581657) | 0.100696 / 0.296338 (-0.195642) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.384642 / 0.215209 (0.169432) | 3.821692 / 2.077655 (1.744038) | 1.838012 / 1.504120 (0.333892) | 1.677554 / 1.541195 (0.136360) | 1.764284 / 1.468490 
(0.295794) | 0.487512 / 4.584777 (-4.097265) | 3.614572 / 3.745712 (-0.131141) | 3.300740 / 5.269862 (-1.969122) | 2.079044 / 4.565676 (-2.486632) | 0.057392 / 0.424275 (-0.366883) | 0.007642 / 0.007607 (0.000035) | 0.456161 / 0.226044 (0.230117) | 4.554124 / 2.268929 (2.285196) | 2.319288 / 55.444624 (-53.125336) | 1.972024 / 6.876477 (-4.904452) | 2.210598 / 2.142072 (0.068526) | 0.588442 / 4.805227 (-4.216785) | 0.134474 / 6.500664 (-6.366191) | 0.062682 / 0.075469 (-0.012787) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.243548 / 1.841788 (-0.598239) | 20.267230 / 8.074308 (12.192922) | 14.872096 / 10.191392 (4.680704) | 0.165164 / 0.680424 (-0.515260) | 0.018985 / 0.534201 (-0.515216) | 0.394526 / 0.579283 (-0.184757) | 0.413918 / 0.434364 (-0.020446) | 0.467130 / 0.540337 (-0.073208) | 0.627055 / 1.386936 (-0.759881) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006940 / 0.011353 (-0.004412) | 0.004203 / 0.011008 (-0.006805) | 0.065828 / 0.038508 (0.027320) | 0.076604 / 0.023109 (0.053495) | 0.401781 / 0.275898 (0.125883) | 0.434838 / 0.323480 (0.111358) | 0.005626 / 0.007986 (-0.002359) | 0.003409 / 0.004328 (-0.000920) | 0.064702 / 0.004250 (0.060452) | 0.057525 / 0.037052 (0.020473) | 0.405032 / 0.258489 (0.146543) | 0.440906 / 0.293841 (0.147065) | 0.032713 / 0.128546 (-0.095833) | 0.008723 / 0.075646 (-0.066923) | 0.071448 / 0.419271 (-0.347823) | 0.048186 / 0.043533 (0.004653) | 0.403950 / 0.255139 (0.148811) | 0.419506 / 0.283200 (0.136307) | 0.023532 / 0.141683 (-0.118150) | 1.496435 / 1.452155 (0.044280) | 1.567236 / 1.492716 (0.074519) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229194 / 0.018006 (0.211188) | 0.451363 / 0.000490 (0.450873) | 0.003651 / 0.000200 (0.003451) | 0.000108 / 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033674 / 0.037411 (-0.003737) | 0.097521 / 0.014526 (0.082995) | 0.108806 / 0.176557 (-0.067751) | 0.161002 / 0.737135 (-0.576133) | 0.108594 / 0.296338 (-0.187745) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436638 / 0.215209 (0.221429) | 4.348844 / 2.077655 (2.271189) | 2.341737 / 1.504120 (0.837617) | 2.195850 / 1.541195 (0.654656) | 2.332147 / 1.468490 (0.863657) | 0.496180 / 4.584777 (-4.088597) | 3.680987 / 3.745712 (-0.064725) | 3.332203 / 5.269862 (-1.937659) | 2.099541 / 4.565676 (-2.466136) | 0.058629 / 0.424275 (-0.365646) | 0.007363 / 0.007607 (-0.000245) | 0.517658 / 0.226044 (0.291614) | 5.175321 / 2.268929 (2.906392) | 2.858660 / 55.444624 (-52.585964) | 2.540557 / 6.876477 (-4.335920) | 2.755360 / 2.142072 (0.613288) | 0.595488 / 4.805227 (-4.209739) | 0.134265 / 6.500664 (-6.366399) | 0.062033 / 0.075469 (-0.013436) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.389950 / 1.841788 (-0.451838) | 20.800274 / 8.074308 (12.725966) | 15.314531 / 10.191392 (5.123139) | 0.166822 / 0.680424 (-0.513602) | 0.021099 / 0.534201 (-0.513102) | 0.400388 / 0.579283 (-0.178895) | 0.419981 / 0.434364 (-0.014383) | 0.474259 / 0.540337 (-0.066078) | 0.731678 / 1.386936 (-0.655258) |\n\n</details>\n</details>\n\n\n"
] | 2023-08-22T11:27:41Z
| 2023-08-23T14:01:25Z
| 2023-08-23T13:52:36Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6166.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6166",
"merged_at": "2023-08-23T13:52:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6166.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6166"
}
|
Related to https://github.com/huggingface/datasets/issues/6130
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6166/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6166/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4914
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4914/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4914/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4914/events
|
https://github.com/huggingface/datasets/pull/4914
| 1,355,482,624
|
PR_kwDODunzps4-CFyN
| 4,914
|
Support streaming swda dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-08-30T09:46:28Z
| 2022-08-30T11:16:33Z
| 2022-08-30T11:14:16Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4914.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4914",
"merged_at": "2022-08-30T11:14:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4914.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4914"
}
|
Support streaming swda dataset.
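For context, a minimal sketch of what streaming support enables, assuming the canonical `swda` dataset script on the Hub:

```python
from datasets import load_dataset

# stream the dataset instead of downloading it fully; this is what
# streaming support makes possible for swda
ds = load_dataset("swda", split="train", streaming=True)
print(next(iter(ds)))
```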
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4914/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4914/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/5511
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5511/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5511/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5511/events
|
https://github.com/huggingface/datasets/issues/5511
| 1,575,851,768
|
I_kwDODunzps5d7Zb4
| 5,511
|
Creating a dummy dataset from a bigger one
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Update `datasets` or downgrade `huggingface-hub` ;)\r\n\r\nThe `huggingface-hub` lib did a breaking change a few months ago, and you're using an old version of `datasets` that does't support it",
"Awesome thanks a lot! Everything works just fine with `datasets==2.9.0` :-) ",
"Getting same error with latest versions.\r\n\r\n\r\n```shell\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\nCell In[99], line 1\r\n----> 1 dataset.push_to_hub(\"mirfan899/kids_phoneme_asr\")\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:3538, in Dataset.push_to_hub(self, repo_id, split, private, token, branch, shard_size, embed_external_files)\r\n 3493 def push_to_hub(\r\n 3494 self,\r\n 3495 repo_id: str,\r\n (...)\r\n 3501 embed_external_files: bool = True,\r\n 3502 ):\r\n 3503 \"\"\"Pushes the dataset to the hub.\r\n 3504 The dataset is pushed using HTTP requests and does not need to have neither git or git-lfs installed.\r\n 3505 \r\n (...)\r\n 3536 ```\r\n 3537 \"\"\"\r\n-> 3538 repo_id, split, uploaded_size, dataset_nbytes = self._push_parquet_shards_to_hub(\r\n 3539 repo_id=repo_id,\r\n 3540 split=split,\r\n 3541 private=private,\r\n 3542 token=token,\r\n 3543 branch=branch,\r\n 3544 shard_size=shard_size,\r\n 3545 embed_external_files=embed_external_files,\r\n 3546 )\r\n 3547 organization, dataset_name = repo_id.split(\"/\")\r\n 3548 info_to_dump = self.info.copy()\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:3474, in Dataset._push_parquet_shards_to_hub(self, repo_id, split, private, token, branch, shard_size, embed_external_files)\r\n 3472 shard.to_parquet(buffer)\r\n 3473 uploaded_size += buffer.tell()\r\n-> 3474 _retry(\r\n 3475 api.upload_file,\r\n 3476 func_kwargs=dict(\r\n 3477 path_or_fileobj=buffer.getvalue(),\r\n 3478 path_in_repo=path_in_repo(index),\r\n 3479 repo_id=repo_id,\r\n 3480 token=token,\r\n 3481 repo_type=\"dataset\",\r\n 3482 revision=branch,\r\n 3483 identical_ok=True,\r\n 3484 ),\r\n 3485 exceptions=HTTPError,\r\n 3486 status_codes=[504],\r\n 3487 base_wait_time=2.0,\r\n 3488 max_retries=5,\r\n 3489 max_wait_time=20.0,\r\n 3490 )\r\n 3491 return repo_id, split, uploaded_size, dataset_nbytes\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/datasets/utils/file_utils.py:330, in _retry(func, func_args, func_kwargs, exceptions, status_codes, max_retries, base_wait_time, max_wait_time)\r\n 328 while True:\r\n 329 try:\r\n--> 330 return func(*func_args, **func_kwargs)\r\n 331 except exceptions as err:\r\n 332 if retry >= max_retries or (status_codes and err.response.status_code not in status_codes):\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:120, in validate_hf_hub_args.<locals>._inner_fn(*args, **kwargs)\r\n 117 if check_use_auth_token:\r\n 118 kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs)\r\n--> 120 return fn(*args, **kwargs)\r\n\r\nTypeError: HfApi.upload_file() got an unexpected keyword argument 'identical_ok'\r\n```",
"Feel free to update `datasets` and `huggingface-hub`, it should fix it :)",
"I went ahead and upgraded both datasets and hub and still getting the same error\r\n",
"Which version do you have ? It's been a while since it has been fixed",
"huggingface 0.0.1\r\nhuggingface-hub 0.17.1\r\ndatasets 2.14.5\r\n\r\nstill has the issue!!"
] | 2023-02-08T10:18:41Z
| 2023-09-14T11:10:59Z
| 2023-02-08T10:35:48Z
|
MEMBER
| null | null | null |
### Describe the bug
I often want to create a dummy dataset from a bigger dataset for fast iteration when training. However, I'm having a hard time doing this, especially when trying to upload the dataset to the Hub.
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("lambdalabs/pokemon-blip-captions")
dataset["train"] = dataset["train"].select(range(20))
dataset.push_to_hub("patrickvonplaten/dummy_image_data")
```
gives:
```
~/python_bin/datasets/arrow_dataset.py in _push_parquet_shards_to_hub(self, repo_id, split, private, token, branch, max_shard_size, embed_external_files)
4003 base_wait_time=2.0,
4004 max_retries=5,
-> 4005 max_wait_time=20.0,
4006 )
4007 return repo_id, split, uploaded_size, dataset_nbytes
~/python_bin/datasets/utils/file_utils.py in _retry(func, func_args, func_kwargs, exceptions, status_codes, max_retries, base_wait_time, max_wait_time)
328 while True:
329 try:
--> 330 return func(*func_args, **func_kwargs)
331 except exceptions as err:
332 if retry >= max_retries or (status_codes and err.response.status_code not in status_codes):
~/hf/lib/python3.7/site-packages/huggingface_hub/utils/_validators.py in _inner_fn(*args, **kwargs)
122 )
123
--> 124 return fn(*args, **kwargs)
125
126 return _inner_fn # type: ignore
TypeError: upload_file() got an unexpected keyword argument 'identical_ok'
In [2]:
```
### Expected behavior
I would have expected this to work. For me, it's the most intuitive way of creating a dummy dataset.
### Environment info
```
- `datasets` version: 2.1.1.dev0
- Platform: Linux-4.19.0-22-cloud-amd64-x86_64-with-debian-10.13
- Python version: 3.7.3
- PyArrow version: 11.0.0
- Pandas version: 1.3.5
```
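For reference, a minimal sketch of the intended workflow, assuming the environment fix suggested in the comments (upgrading `datasets` to 2.9.0 or later, so that `push_to_hub` no longer passes the removed `identical_ok` argument to `huggingface-hub`):

```python
# assumes datasets>=2.9.0 and a recent huggingface-hub
from datasets import load_dataset

dataset = load_dataset("lambdalabs/pokemon-blip-captions")
# keep only a handful of training examples for fast iteration
dataset["train"] = dataset["train"].select(range(20))
# with a recent `datasets`, this push succeeds
dataset.push_to_hub("patrickvonplaten/dummy_image_data")
```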
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5511/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5511/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/2414
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2414/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2414/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2414/events
|
https://github.com/huggingface/datasets/pull/2414
| 903,877,096
|
MDExOlB1bGxSZXF1ZXN0NjU1MDg5OTIw
| 2,414
|
Update README.md
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/15029054?v=4",
"events_url": "https://api.github.com/users/cryoff/events{/privacy}",
"followers_url": "https://api.github.com/users/cryoff/followers",
"following_url": "https://api.github.com/users/cryoff/following{/other_user}",
"gists_url": "https://api.github.com/users/cryoff/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cryoff",
"id": 15029054,
"login": "cryoff",
"node_id": "MDQ6VXNlcjE1MDI5MDU0",
"organizations_url": "https://api.github.com/users/cryoff/orgs",
"received_events_url": "https://api.github.com/users/cryoff/received_events",
"repos_url": "https://api.github.com/users/cryoff/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cryoff/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cryoff/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cryoff"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Merging since the CI error is unrelated to this PR and has been fixed on master",
"Thank you for taking a look at the CI error - I was a bit confused with that. Thanks!"
] | 2021-05-27T14:53:19Z
| 2021-06-28T13:46:14Z
| 2021-06-28T13:04:56Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2414.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2414",
"merged_at": "2021-06-28T13:04:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2414.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2414"
}
|
Provides a description of the data instances and dataset features.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2414/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2414/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3081
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3081/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3081/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3081/events
|
https://github.com/huggingface/datasets/pull/3081
| 1,026,383,749
|
PR_kwDODunzps4tM1Gy
| 3,081
|
[Audio datasets] Adapting all audio datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
}
|
[] |
closed
| false
| null |
[] | null |
[
"@lhoestq - are there other important speech datasets that I'm forgetting here? \r\n\r\nThink PR is good to go otherwise",
"@lhoestq @albertvillanova - how can we make an exception for the AMI README so that the test doesn't fail? The dataset card definitely should have a data preprocessing section",
"Hi @patrickvonplaten ,\r\n\r\nthe data preprocessing section is not defined as a valid section in the readme validation file. After this line:\r\nhttps://github.com/huggingface/datasets/blob/568db594d51110da9e23d224abded2a976b3c8c7/src/datasets/utils/resources/readme_structure.yaml#L20\r\nfeel free to insert (correctly indented of course):\r\n```python\r\n- name: \"Dataset Preprocessing\"\r\n allow_empty: true\r\n allow_empty_text: true\r\n subsections: null\r\n```\r\nand then the tests should pass.",
"Thanks a lot @albertvillanova - I've added the feature to all audio datasets and corrected the task of `covost2`"
] | 2021-10-14T13:13:45Z
| 2021-10-15T12:52:03Z
| 2021-10-15T12:22:33Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3081.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3081",
"merged_at": "2021-10-15T12:22:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3081.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3081"
}
|
This PR adds the new `Audio(...)` feature (see https://github.com/huggingface/datasets/pull/2324) to the most important audio datasets:
- Librispeech
- Timit
- Common Voice
- AMI
- ... (others I'm forgetting now)
The PR is currently blocked because the following leads to a problem:
```python
from datasets import load_dataset
# load first time works
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean")
# load from cache breaks
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean")
```
As soon as it's unblocked, I'll adapt the other audio datasets as well.
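For illustration, a minimal sketch of what the `Audio` feature provides once a dataset is adapted; the dataset and column names below are placeholders, not taken from this PR:

```python
from datasets import load_dataset, Audio

# "some_org/some_audio_dataset" and the "audio" column are hypothetical
ds = load_dataset("some_org/some_audio_dataset", split="train")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
# each example's "audio" field now decodes to a dict with "array"
# and "sampling_rate" instead of a bare file path
print(ds[0]["audio"]["sampling_rate"])  # 16000
```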
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3081/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3081/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/6344
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6344/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6344/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6344/events
|
https://github.com/huggingface/datasets/pull/6344
| 1,957,412,169
|
PR_kwDODunzps5diyd5
| 6,344
|
set dev version
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6344). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008237 / 0.011353 (-0.003116) | 0.004658 / 0.011008 (-0.006351) | 0.105902 / 0.038508 (0.067394) | 0.082690 / 0.023109 (0.059581) | 0.471745 / 0.275898 (0.195847) | 0.464772 / 0.323480 (0.141292) | 0.006373 / 0.007986 (-0.001613) | 0.003823 / 0.004328 (-0.000505) | 0.077721 / 0.004250 (0.073471) | 0.068371 / 0.037052 (0.031318) | 0.457004 / 0.258489 (0.198515) | 0.500989 / 0.293841 (0.207148) | 0.036688 / 0.128546 (-0.091858) | 0.010004 / 0.075646 (-0.065643) | 0.363398 / 0.419271 (-0.055874) | 0.065354 / 0.043533 (0.021821) | 0.440326 / 0.255139 (0.185187) | 0.475314 / 0.283200 (0.192115) | 0.029024 / 0.141683 (-0.112659) | 1.851005 / 1.452155 (0.398851) | 1.939997 / 1.492716 (0.447281) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.269739 / 0.018006 (0.251732) | 0.510411 / 0.000490 (0.509922) | 0.013423 / 0.000200 (0.013223) | 0.000513 / 0.000054 (0.000458) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032912 / 0.037411 (-0.004499) | 0.097497 / 0.014526 (0.082971) | 0.111945 / 0.176557 (-0.064612) | 0.179264 / 0.737135 (-0.557871) | 0.111901 / 0.296338 (-0.184437) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.480994 / 0.215209 (0.265785) | 4.800969 / 2.077655 (2.723314) | 2.467390 / 1.504120 (0.963270) | 2.283219 / 1.541195 (0.742024) | 2.407735 / 1.468490 
(0.939245) | 0.573862 / 4.584777 (-4.010915) | 4.213394 / 3.745712 (0.467682) | 4.120092 / 5.269862 (-1.149770) | 2.479549 / 4.565676 (-2.086128) | 0.077204 / 0.424275 (-0.347071) | 0.009165 / 0.007607 (0.001558) | 0.583887 / 0.226044 (0.357842) | 5.760759 / 2.268929 (3.491830) | 3.089220 / 55.444624 (-52.355404) | 2.652330 / 6.876477 (-4.224146) | 2.746255 / 2.142072 (0.604182) | 0.689010 / 4.805227 (-4.116217) | 0.158042 / 6.500664 (-6.342622) | 0.072789 / 0.075469 (-0.002680) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.658877 / 1.841788 (-0.182911) | 22.928756 / 8.074308 (14.854448) | 17.231823 / 10.191392 (7.040431) | 0.201475 / 0.680424 (-0.478949) | 0.025533 / 0.534201 (-0.508668) | 0.467023 / 0.579283 (-0.112260) | 0.470779 / 0.434364 (0.036415) | 0.643192 / 0.540337 (0.102855) | 0.822006 / 1.386936 (-0.564930) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008096 / 0.011353 (-0.003257) | 0.004708 / 0.011008 (-0.006300) | 0.076607 / 0.038508 (0.038099) | 0.086278 / 0.023109 (0.063168) | 0.478027 / 0.275898 (0.202129) | 0.533121 / 0.323480 (0.209641) | 0.006331 / 0.007986 (-0.001654) | 0.004005 / 0.004328 (-0.000324) | 0.076018 / 0.004250 (0.071767) | 0.067240 / 0.037052 (0.030188) | 0.484882 / 0.258489 (0.226393) | 0.536924 / 0.293841 (0.243083) | 0.045064 / 0.128546 (-0.083482) | 0.010071 / 0.075646 (-0.065575) | 0.084319 / 0.419271 (-0.334953) | 0.066267 / 0.043533 (0.022734) | 0.479283 / 0.255139 (0.224144) | 0.507832 / 0.283200 (0.224633) | 0.026436 / 0.141683 (-0.115247) | 1.820043 / 1.452155 (0.367889) | 1.954663 / 1.492716 (0.461947) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.292672 / 0.018006 (0.274666) | 0.495523 / 0.000490 (0.495033) | 0.020836 / 0.000200 (0.020636) | 0.000143 / 0.000054 (0.000088) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.038326 / 0.037411 (0.000915) | 0.114629 / 0.014526 (0.100103) | 0.126036 / 0.176557 (-0.050521) | 0.191498 / 0.737135 (-0.545638) | 0.128763 / 0.296338 (-0.167575) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.507657 / 0.215209 (0.292448) | 5.062056 / 2.077655 (2.984401) | 2.765895 / 1.504120 (1.261775) | 2.590335 / 1.541195 (1.049141) | 2.790912 / 1.468490 (1.322422) | 0.582819 / 4.584777 (-4.001958) | 4.350034 / 3.745712 (0.604322) | 3.899466 / 5.269862 (-1.370396) | 2.499655 / 4.565676 (-2.066021) | 0.068909 / 0.424275 (-0.355366) | 0.008633 / 0.007607 (0.001026) | 0.593597 / 0.226044 (0.367553) | 5.934398 / 2.268929 (3.665470) | 3.358549 / 55.444624 (-52.086075) | 3.145686 / 6.876477 (-3.730791) | 3.232153 / 2.142072 (1.090080) | 0.753039 / 4.805227 (-4.052188) | 0.164043 / 6.500664 (-6.336621) | 0.072084 / 0.075469 (-0.003385) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.632702 / 1.841788 (-0.209086) | 23.411084 / 8.074308 (15.336776) | 17.035726 / 10.191392 (6.844334) | 0.223460 / 0.680424 (-0.456964) | 0.023723 / 0.534201 (-0.510478) | 0.474160 / 0.579283 (-0.105124) | 0.538638 / 0.434364 (0.104274) | 0.595591 / 0.540337 (0.055254) | 0.803324 / 1.386936 (-0.583612) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008300 / 0.011353 (-0.003053) | 0.004667 / 0.011008 (-0.006341) | 0.101028 / 0.038508 (0.062520) | 0.100269 / 0.023109 (0.077160) | 0.418651 / 0.275898 (0.142752) | 0.459061 / 0.323480 (0.135581) | 0.006786 / 0.007986 (-0.001199) | 0.003926 / 0.004328 (-0.000403) | 0.076682 / 0.004250 (0.072432) | 0.066173 / 0.037052 (0.029120) | 0.430644 / 0.258489 (0.172155) | 0.466244 / 0.293841 (0.172403) | 0.040601 / 0.128546 (-0.087946) | 0.009856 / 0.075646 (-0.065790) | 0.351467 / 0.419271 (-0.067805) | 0.068727 / 0.043533 (0.025194) | 0.419527 / 0.255139 (0.164388) | 0.431245 / 0.283200 (0.148045) | 0.028933 / 0.141683 (-0.112750) | 1.749540 / 1.452155 (0.297386) | 1.829076 / 1.492716 (0.336360) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.282248 / 0.018006 (0.264242) | 0.587293 / 0.000490 (0.586803) | 0.014497 / 0.000200 (0.014297) | 0.000383 / 0.000054 (0.000329) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031861 / 0.037411 (-0.005550) | 0.097395 / 0.014526 (0.082869) | 0.113610 / 0.176557 (-0.062946) | 0.181208 / 0.737135 (-0.555927) | 0.115340 / 0.296338 (-0.180999) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.459746 / 0.215209 (0.244537) | 4.582387 / 2.077655 (2.504733) | 2.247968 / 1.504120 (0.743848) | 2.032340 / 1.541195 (0.491145) | 2.151766 / 1.468490 
(0.683276) | 0.567664 / 4.584777 (-4.017113) | 4.491732 / 3.745712 (0.746020) | 4.000651 / 5.269862 (-1.269211) | 2.429113 / 4.565676 (-2.136564) | 0.067052 / 0.424275 (-0.357223) | 0.009095 / 0.007607 (0.001488) | 0.546461 / 0.226044 (0.320417) | 5.473524 / 2.268929 (3.204595) | 2.902091 / 55.444624 (-52.542533) | 2.517510 / 6.876477 (-4.358966) | 2.572537 / 2.142072 (0.430464) | 0.683499 / 4.805227 (-4.121728) | 0.154863 / 6.500664 (-6.345801) | 0.071298 / 0.075469 (-0.004171) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.625236 / 1.841788 (-0.216552) | 23.531541 / 8.074308 (15.457233) | 16.762514 / 10.191392 (6.571122) | 0.215922 / 0.680424 (-0.464502) | 0.021928 / 0.534201 (-0.512273) | 0.466055 / 0.579283 (-0.113228) | 0.553036 / 0.434364 (0.118672) | 0.590063 / 0.540337 (0.049725) | 0.789959 / 1.386936 (-0.596977) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008240 / 0.011353 (-0.003113) | 0.004151 / 0.011008 (-0.006858) | 0.077988 / 0.038508 (0.039479) | 0.092865 / 0.023109 (0.069756) | 0.468238 / 0.275898 (0.192340) | 0.512882 / 0.323480 (0.189402) | 0.006632 / 0.007986 (-0.001354) | 0.003879 / 0.004328 (-0.000450) | 0.076238 / 0.004250 (0.071988) | 0.069372 / 0.037052 (0.032319) | 0.481040 / 0.258489 (0.222550) | 0.526332 / 0.293841 (0.232491) | 0.036768 / 0.128546 (-0.091778) | 0.009891 / 0.075646 (-0.065756) | 0.084426 / 0.419271 (-0.334846) | 0.062382 / 0.043533 (0.018849) | 0.480667 / 0.255139 (0.225528) | 0.509001 / 0.283200 (0.225802) | 0.029215 / 0.141683 (-0.112468) | 1.776075 / 1.452155 (0.323920) | 1.948558 / 1.492716 (0.455841) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.257879 / 0.018006 (0.239873) | 0.471038 / 0.000490 (0.470548) | 0.009273 / 0.000200 (0.009073) | 0.000208 / 0.000054 (0.000154) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.039249 / 0.037411 (0.001838) | 0.133281 / 0.014526 (0.118755) | 0.138261 / 0.176557 (-0.038296) | 0.191051 / 0.737135 (-0.546084) | 0.134493 / 0.296338 (-0.161845) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.507165 / 0.215209 (0.291955) | 5.081018 / 2.077655 (3.003364) | 2.747633 / 1.504120 (1.243513) | 2.558265 / 1.541195 (1.017070) | 2.710839 / 1.468490 (1.242348) | 0.579913 / 4.584777 (-4.004864) | 4.843657 / 3.745712 (1.097945) | 3.942503 / 5.269862 (-1.327358) | 2.529641 / 4.565676 (-2.036036) | 0.068826 / 0.424275 (-0.355449) | 0.008847 / 0.007607 (0.001240) | 0.605332 / 0.226044 (0.379287) | 6.039574 / 2.268929 (3.770646) | 3.437291 / 55.444624 (-52.007333) | 3.086631 / 6.876477 (-3.789846) | 3.189340 / 2.142072 (1.047267) | 0.702650 / 4.805227 (-4.102578) | 0.157403 / 6.500664 (-6.343261) | 0.074637 / 0.075469 (-0.000832) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.816532 / 1.841788 (-0.025256) | 24.526675 / 8.074308 (16.452367) | 17.371691 / 10.191392 (7.180299) | 0.236044 / 0.680424 (-0.444380) | 0.024759 / 0.534201 (-0.509442) | 0.530578 / 0.579283 (-0.048705) | 0.527424 / 0.434364 (0.093060) | 0.620267 / 0.540337 (0.079929) | 0.791159 / 1.386936 (-0.595777) |\n\n</details>\n</details>\n\n\n"
] | 2023-10-23T15:13:28Z
| 2023-10-23T15:24:31Z
| 2023-10-23T15:13:38Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6344.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6344",
"merged_at": "2023-10-23T15:13:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6344.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6344"
}
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6344/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6344/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4506
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4506/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4506/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4506/events
|
https://github.com/huggingface/datasets/issues/4506
| 1,272,516,895
|
I_kwDODunzps5L2REf
| 4,506
|
Failure to hash (and cache) a `.map(...)` (almost always) - using this method can produce incorrect results
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/22641583?v=4",
"events_url": "https://api.github.com/users/DrMatters/events{/privacy}",
"followers_url": "https://api.github.com/users/DrMatters/followers",
"following_url": "https://api.github.com/users/DrMatters/following{/other_user}",
"gists_url": "https://api.github.com/users/DrMatters/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/DrMatters",
"id": 22641583,
"login": "DrMatters",
"node_id": "MDQ6VXNlcjIyNjQxNTgz",
"organizations_url": "https://api.github.com/users/DrMatters/orgs",
"received_events_url": "https://api.github.com/users/DrMatters/received_events",
"repos_url": "https://api.github.com/users/DrMatters/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/DrMatters/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DrMatters/subscriptions",
"type": "User",
"url": "https://api.github.com/users/DrMatters"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null |
[
"Important info:\r\n\r\nAs hashes are generated randomly for functions, it leads to **false identifying some results as already hashed** (mapping function is not executed after a method update) when there's a `pytorch_lightning.seed_everything(123)`",
"@lhoestq\r\nseems like quite critical stuff for me, if I'm not making a mistake",
"Hi ! Thanks for reporting. This bug seems to appear in python 3.9 using dill 3.5.1\r\n\r\nAs a workaround you can use an older version of dill:\r\n```\r\npip install \"dill<0.3.5\"\r\n```",
"installing `dill<0.3.5` after installing `datasets` by pip results in dependency conflict with the version required for `multiprocess`. It can be solved by installing `pip install datasets \"dill<0.3.5\"` (simultaneously) on a clean environment",
"This has been fixed in https://github.com/huggingface/datasets/pull/4516, we will do a new release soon to include the fix :)"
] | 2022-06-15T17:11:31Z
| 2023-02-16T03:14:32Z
| 2022-06-28T13:23:05Z
|
NONE
| null | null | null |
## Describe the bug
Sometimes I get messages about not being able to hash a method:
`Parameter 'function'=<function StupidDataModule._separate_speaker_id_from_dialogue at 0x7f1b27180d30> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.`
Whilst the function looks like this:
```python
@staticmethod
def _separate_speaker_id_from_dialogue(example: arrow_dataset.Example):
speaker_id, dialogue = tuple(zip(*(example["dialogue"])))
example["speaker_id"] = speaker_id
example["dialogue"] = dialogue
return example
```
This is the first step in my preprocessing pipeline, but sometimes the message about the hashing failure does not appear on the first step and instead appears on a later one.
This error sometimes causes cached data not to be used, so all steps are re-run.
## Steps to reproduce the bug
```python
import copy
import datasets
from datasets import arrow_dataset
def main():
dataset = datasets.load_dataset("blended_skill_talk")
res = dataset.map(method)
print(res)
def method(example: arrow_dataset.Example):
example['previous_utterance_copy'] = copy.deepcopy(example['previous_utterance'])
return example
if __name__ == '__main__':
main()
```
Run with:
```
python -m reproduce_error
```
## Expected results
Dataset is mapped and cached correctly.
## Actual results
The code outputs this at some point:
`Parameter 'function'=<function method at 0x7faa83d2a160> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.`
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform: Ubuntu 20.04.3
- Python version: 3.9.12
- PyArrow version: 8.0.0
- Datasets version: 2.3.1
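As a diagnostic, here is a small sketch for checking whether a function hashes deterministically. `Hasher` lives in `datasets.fingerprint` and is an internal helper, so treat this as an illustration rather than a stable API:

```python
from datasets.fingerprint import Hasher

def method(example):
    example["previous_utterance_copy"] = example["previous_utterance"]
    return example

# run this script twice: with a working dill/datasets combination
# (e.g. dill<0.3.5 as suggested in the comments) the printed hash is
# identical across runs, so `.map` results can be reused from the cache
print(Hasher.hash(method))
```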
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4506/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4506/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5030
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5030/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5030/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5030/events
|
https://github.com/huggingface/datasets/pull/5030
| 1,388,061,340
|
PR_kwDODunzps4_tfBO
| 5,030
|
Fast dataset iter
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"I ran some benchmarks (focused on the data fetching part of `__iter__`) and it seems like the combination `table.to_reader(batch_size)` + `RecordBatch.slice` performs the best ([script](https://gist.github.com/mariosasko/0248288a2e3a7556873969717c1fe52b) with the results). I think we can choose (implicit) `batch_size=10` in the final implementation to avoid having problems with fetching large examples."
] | 2022-09-27T16:44:51Z
| 2022-09-29T15:50:44Z
| 2022-09-29T15:48:17Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5030.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5030",
"merged_at": "2022-09-29T15:48:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5030.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5030"
}
|
Use `pa.Table.to_reader` to make iteration over examples/batches faster in `Dataset.{__iter__, map}`
TODO:
* [x] benchmarking (the only benchmark for now - iterating over (single) examples of `bookcorpus` (75 mil examples) in Colab is approx. 2.3x faster)
* [x] check if iterating over bigger chunks + slicing to fetch individual examples in `_iter` yields better performance
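For reference, a minimal sketch of the fetching pattern benchmarked above (plain PyArrow, not the PR's actual code; the table contents and chunk size are illustrative):
```python
import pyarrow as pa

table = pa.table({"text": [f"example {i}" for i in range(1_000)]})

# Stream the table as small record batches instead of slicing the full
# table once per example; slicing a RecordBatch is cheap.
for batch in table.to_reader(max_chunksize=10):
    for i in range(batch.num_rows):
        example = batch.slice(i, 1).to_pylist()[0]
```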
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5030/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5030/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/338
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/338/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/338/comments
|
https://api.github.com/repos/huggingface/datasets/issues/338/events
|
https://github.com/huggingface/datasets/pull/338
| 650,057,253
|
MDExOlB1bGxSZXF1ZXN0NDQzNjIxMTEx
| 338
|
Run `make style`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jarednielsen",
"id": 4564897,
"login": "jarednielsen",
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jarednielsen"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-07-02T16:19:47Z
| 2020-07-02T18:03:10Z
| 2020-07-02T18:03:10Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/338.diff",
"html_url": "https://github.com/huggingface/datasets/pull/338",
"merged_at": "2020-07-02T18:03:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/338.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/338"
}
|
These files get changed when I run `make style` on an unrelated PR. Upstreaming these changes so that development on other branches is easier.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/338/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/338/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1115
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1115/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1115/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1115/events
|
https://github.com/huggingface/datasets/issues/1115
| 757,127,527
|
MDU6SXNzdWU3NTcxMjc1Mjc=
| 1,115
|
Incorrect URL for MRQA SQuAD train subset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6259768?v=4",
"events_url": "https://api.github.com/users/yuxiang-wu/events{/privacy}",
"followers_url": "https://api.github.com/users/yuxiang-wu/followers",
"following_url": "https://api.github.com/users/yuxiang-wu/following{/other_user}",
"gists_url": "https://api.github.com/users/yuxiang-wu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yuxiang-wu",
"id": 6259768,
"login": "yuxiang-wu",
"node_id": "MDQ6VXNlcjYyNTk3Njg=",
"organizations_url": "https://api.github.com/users/yuxiang-wu/orgs",
"received_events_url": "https://api.github.com/users/yuxiang-wu/received_events",
"repos_url": "https://api.github.com/users/yuxiang-wu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yuxiang-wu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuxiang-wu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yuxiang-wu"
}
|
[] |
closed
| false
| null |
[] | null |
[
"good catch !"
] | 2020-12-04T14:05:24Z
| 2020-12-06T17:14:22Z
| 2020-12-06T17:14:22Z
|
CONTRIBUTOR
| null | null | null |
https://github.com/huggingface/datasets/blob/4ef4c8f8b7a60e35c6fa21115fca9faae91c9f74/datasets/mrqa/mrqa.py#L53
The URL for `train+SQuAD` subset of MRQA points to the dev set instead of train set. It should be `https://s3.us-east-2.amazonaws.com/mrqa/release/v2/train/SQuAD.jsonl.gz`.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1115/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1115/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/2314
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2314/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2314/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2314/events
|
https://github.com/huggingface/datasets/pull/2314
| 875,729,271
|
MDExOlB1bGxSZXF1ZXN0NjMwMDExODc4
| 2,314
|
Minor refactor prepare_module
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"@lhoestq this is the PR that I mentioned to you, which can be considered as a first step in refactoring `prepare_module`.",
"closing in favor of #2986 "
] | 2021-05-04T18:37:26Z
| 2021-10-13T09:07:34Z
| 2021-10-13T09:07:34Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2314.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2314",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2314.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2314"
}
|
Start to refactor `prepare_module` to try to decouple functionality.
This PR does:
- extract function `_initialize_dynamic_modules_namespace_package`
- extract function `_find_module_in_github_or_s3`
- some renaming of variables
- use of f-strings
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2314/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2314/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1060
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1060/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1060/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1060/events
|
https://github.com/huggingface/datasets/pull/1060
| 756,349,001
|
MDExOlB1bGxSZXF1ZXN0NTMxOTA4MTgx
| 1,060
|
Fix squad V2 metric script
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sgugger",
"id": 35901082,
"login": "sgugger",
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"repos_url": "https://api.github.com/users/sgugger/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sgugger"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The script with changes is used and tested in [#8924](https://github.com/huggingface/transformers/pull/8924). It gives the same results as the old `evaluate_squad` function when used on the same predictions.",
"merging since the CI is fixed on master"
] | 2020-12-03T16:23:32Z
| 2020-12-22T15:02:20Z
| 2020-12-22T15:02:19Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1060.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1060",
"merged_at": "2020-12-22T15:02:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1060.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1060"
}
|
The current squad v2 metric doesn't work with the squad (v1 or v2) datasets. The script is copied from `squad_evaluate` in transformers, which requires the labels (with multiple answers) to look like this:
```
references = [{'id': 'a', 'answers': [
{'text': 'Denver Broncos', 'answer_start': 177},
{'text': 'Denver Broncos', 'answer_start': 177}
]}]
```
while the dataset has references like this:
```
references = [{'id': 'a', 'answers':
{'text': ['Denver Broncos', 'Denver Broncos'], 'answer_start': [177, 177]}
}]
```
Using one or the other format fails with the current squad v2 metric:
```
from datasets import load_metric
metric = load_metric("squad_v2")
predictions = [{'id': 'a', 'prediction_text': 'Denver Broncos', 'no_answer_probability': 0.0}]
references = [{'id': 'a', 'answers': [
{'text': 'Denver Broncos', 'answer_start': 177},
{'text': 'Denver Broncos', 'answer_start': 177}
]}]
metric.compute(predictions=predictions, references=references)
```
fails as well as
```
from datasets import load_metric
metric = load_metric("squad_v2")
predictions = [{'id': 'a', 'prediction_text': 'Denver Broncos', 'no_answer_probability': 0.0}]
references = [{'id': 'a', 'answers':
{'text': ['Denver Broncos', 'Denver Broncos'], 'answer_start': [177, 177]}
}]
metric.compute(predictions=predictions, references=references)
```
This is because arrow reformats the references behind the scenes.
With this PR (tested locally), both the snippets up there work and return proper results.
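For illustration, a hypothetical helper (not part of the metric script) showing the conversion between the two reference formats above — Arrow stores the answers as a dict of lists:
```python
def answers_to_arrow_format(answers):
    # list-of-dicts -> dict-of-lists, mirroring how Arrow reformats references
    return {
        "text": [a["text"] for a in answers],
        "answer_start": [a["answer_start"] for a in answers],
    }

answers = [
    {"text": "Denver Broncos", "answer_start": 177},
    {"text": "Denver Broncos", "answer_start": 177},
]
print(answers_to_arrow_format(answers))
# {'text': ['Denver Broncos', 'Denver Broncos'], 'answer_start': [177, 177]}
```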
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1060/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1060/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3151
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3151/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3151/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3151/events
|
https://github.com/huggingface/datasets/pull/3151
| 1,033,890,501
|
PR_kwDODunzps4tkL7t
| 3,151
|
Re-add faiss to windows testing suite
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/BramVanroy",
"id": 2779410,
"login": "BramVanroy",
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/BramVanroy"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-10-22T19:34:29Z
| 2021-11-02T10:47:34Z
| 2021-11-02T10:06:03Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3151.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3151",
"merged_at": "2021-11-02T10:06:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3151.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3151"
}
|
In recent versions, `faiss-cpu` seems to be available for Windows as well. See the [PyPi page](https://pypi.org/project/faiss-cpu/#files) to confirm. We can therefore include it for Windows in the setup file.
At first tests didn't pass due to permission problems caused by `NamedTemporaryFile` on Windows. This built-in library is notoriously bad at playing nice on Windows. The required change isn't pretty, but it works: first set `delete=False` so the file is not automatically deleted on context exit; then manually delete the file with `unlink`. It's weird, I know, but it works.
```python
import os
import tempfile

with tempfile.NamedTemporaryFile(delete=False) as tmp_file:
    pass  # do stuff
os.unlink(tmp_file.name)  # delete manually once the handle is closed
```
closes #3150
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3151/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3151/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4092
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4092/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4092/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4092/events
|
https://github.com/huggingface/datasets/pull/4092
| 1,192,499,903
|
PR_kwDODunzps41n40R
| 4,092
|
Fix dataset `amazon_us_reviews` metadata - 4/4/2022
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/191985?v=4",
"events_url": "https://api.github.com/users/trentonstrong/events{/privacy}",
"followers_url": "https://api.github.com/users/trentonstrong/followers",
"following_url": "https://api.github.com/users/trentonstrong/following{/other_user}",
"gists_url": "https://api.github.com/users/trentonstrong/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/trentonstrong",
"id": 191985,
"login": "trentonstrong",
"node_id": "MDQ6VXNlcjE5MTk4NQ==",
"organizations_url": "https://api.github.com/users/trentonstrong/orgs",
"received_events_url": "https://api.github.com/users/trentonstrong/received_events",
"repos_url": "https://api.github.com/users/trentonstrong/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/trentonstrong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/trentonstrong/subscriptions",
"type": "User",
"url": "https://api.github.com/users/trentonstrong"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc: @albertvillanova just FYI"
] | 2022-04-05T01:39:45Z
| 2022-04-08T12:35:41Z
| 2022-04-08T12:29:31Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4092.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4092",
"merged_at": "2022-04-08T12:29:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4092.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4092"
}
|
Fixes #4048 by running `datasets-cli test` to reprocess the data and regenerate the metadata. Additionally I've updated the README to include up-to-date counts for the subsets.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4092/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4092/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4256
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4256/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4256/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4256/events
|
https://github.com/huggingface/datasets/pull/4256
| 1,221,379,625
|
PR_kwDODunzps43F9Zw
| 4,256
|
Create metric card for MSE
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sashavor",
"id": 14205986,
"login": "sashavor",
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"repos_url": "https://api.github.com/users/sashavor/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sashavor"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-29T18:21:22Z
| 2022-05-02T14:55:42Z
| 2022-05-02T14:48:47Z
|
NONE
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4256.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4256",
"merged_at": "2022-05-02T14:48:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4256.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4256"
}
|
Proposing a metric card for Mean Squared Error
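For reference, the metric card documents the mean squared error over $n$ predictions $\hat{y}_i$ against references $y_i$:

$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2$$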
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4256/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4256/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1681
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1681/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1681/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1681/events
|
https://github.com/huggingface/datasets/issues/1681
| 777,644,163
|
MDU6SXNzdWU3Nzc2NDQxNjM=
| 1,681
|
Dataset "dane" missing
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/23721977?v=4",
"events_url": "https://api.github.com/users/KennethEnevoldsen/events{/privacy}",
"followers_url": "https://api.github.com/users/KennethEnevoldsen/followers",
"following_url": "https://api.github.com/users/KennethEnevoldsen/following{/other_user}",
"gists_url": "https://api.github.com/users/KennethEnevoldsen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/KennethEnevoldsen",
"id": 23721977,
"login": "KennethEnevoldsen",
"node_id": "MDQ6VXNlcjIzNzIxOTc3",
"organizations_url": "https://api.github.com/users/KennethEnevoldsen/orgs",
"received_events_url": "https://api.github.com/users/KennethEnevoldsen/received_events",
"repos_url": "https://api.github.com/users/KennethEnevoldsen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/KennethEnevoldsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KennethEnevoldsen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/KennethEnevoldsen"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi @KennethEnevoldsen ,\r\nI think the issue might be that this dataset was added during the community sprint and has not been released yet. It will be available with the v2 of datasets.\r\nFor now, you should be able to load the datasets after installing the latest (master) version of datasets using pip:\r\npip install git+https://github.com/huggingface/datasets.git@master",
"The `dane` dataset was added recently, that's why it wasn't available yet. We did an intermediate release today just before the v2.0.\r\n\r\nTo load it you can just update `datasets`\r\n```\r\npip install --upgrade datasets\r\n```\r\n\r\nand then you can load `dane` with\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"dane\")\r\n```",
"Thanks. Solved the problem."
] | 2021-01-03T14:03:03Z
| 2021-01-05T08:35:35Z
| 2021-01-05T08:35:13Z
|
CONTRIBUTOR
| null | null | null |
the `dane` dataset appears to be missing in the latest version (1.1.3).
```python
>>> import datasets
>>> datasets.__version__
'1.1.3'
>>> "dane" in datasets.list_datasets()
True
```
As we can see it should be present, yet it cannot be found when using `load_dataset`.
```python
>>> datasets.load_dataset("dane")
Traceback (most recent call last):
File "/home/kenneth/.Envs/EDP/lib/python3.8/site-packages/datasets/load.py", line 267, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/home/kenneth/.Envs/EDP/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 300, in cached_path
output_path = get_from_cache(
File "/home/kenneth/.Envs/EDP/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 486, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/dane/dane.py
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/kenneth/.Envs/EDP/lib/python3.8/site-packages/datasets/load.py", line 278, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/home/kenneth/.Envs/EDP/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 300, in cached_path
output_path = get_from_cache(
File "/home/kenneth/.Envs/EDP/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 486, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/dane/dane.py
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/kenneth/.Envs/EDP/lib/python3.8/site-packages/datasets/load.py", line 588, in load_dataset
module_path, hash = prepare_module(
File "/home/kenneth/.Envs/EDP/lib/python3.8/site-packages/datasets/load.py", line 280, in prepare_module
raise FileNotFoundError(
FileNotFoundError: Couldn't find file locally at dane/dane.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/dane/dane.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/dane/dane.py
```
This issue might be relevant to @ophelielacroix from the Alexandra Institute, who created the data.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1681/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1681/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/2350
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2350/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2350/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2350/events
|
https://github.com/huggingface/datasets/issues/2350
| 889,580,247
|
MDU6SXNzdWU4ODk1ODAyNDc=
| 2,350
|
`FaissIndex.save` throws error on GPU
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2821124?v=4",
"events_url": "https://api.github.com/users/Guitaricet/events{/privacy}",
"followers_url": "https://api.github.com/users/Guitaricet/followers",
"following_url": "https://api.github.com/users/Guitaricet/following{/other_user}",
"gists_url": "https://api.github.com/users/Guitaricet/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Guitaricet",
"id": 2821124,
"login": "Guitaricet",
"node_id": "MDQ6VXNlcjI4MjExMjQ=",
"organizations_url": "https://api.github.com/users/Guitaricet/orgs",
"received_events_url": "https://api.github.com/users/Guitaricet/received_events",
"repos_url": "https://api.github.com/users/Guitaricet/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Guitaricet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Guitaricet/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Guitaricet"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[
"Just in case, this is a workaround that I use in my code and it seems to do the job.\r\n\r\n```python\r\nif use_gpu_index:\r\n data[\"train\"]._indexes[\"text_emb\"].faiss_index = faiss.index_gpu_to_cpu(data[\"train\"]._indexes[\"text_emb\"].faiss_index)\r\n```"
] | 2021-05-12T03:41:56Z
| 2021-05-17T13:41:41Z
| 2021-05-17T13:41:41Z
|
CONTRIBUTOR
| null | null | null |
## Describe the bug
After training an index with a factory string `OPQ16_128,IVF512,PQ32` on GPU, `.save_faiss_index` throws this error.
```
File "index_wikipedia.py", line 119, in <module>
data["train"].save_faiss_index("text_emb", index_save_path)
File "/home/vlialin/miniconda3/envs/cat/lib/python3.8/site-packages/datasets/search.py", line 470, in save_faiss_index
index.save(file)
File "/home/vlialin/miniconda3/envs/cat/lib/python3.8/site-packages/datasets/search.py", line 334, in save
faiss.write_index(index, str(file))
File "/home/vlialin/miniconda3/envs/cat/lib/python3.8/site-packages/faiss/swigfaiss_avx2.py", line 5654, in write_index
return _swigfaiss.write_index(*args)
RuntimeError: Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) at /root/miniconda3/conda-bld/faiss-pkg_1613235005464/work/faiss/impl/index_write.cpp:453: don't know how to serialize this type of index
```
## Steps to reproduce the bug
Any dataset will do, I just selected a familiar one.
```python
import numpy as np
import datasets
INDEX_STR = "OPQ16_128,IVF512,PQ32"
INDEX_SAVE_PATH = "will_not_save.faiss"
data = datasets.load_dataset("Fraser/news-category-dataset", split=f"train[:10000]")
def encode(item):
return {"text_emb": np.random.randn(768).astype(np.float32)}
data = data.map(encode)
data.add_faiss_index(column="text_emb", string_factory=INDEX_STR, train_size=10_000, device=0)
data.save_faiss_index("text_emb", INDEX_SAVE_PATH)
```
## Expected results
Saving the index
## Actual results
Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) ... don't know how to serialize this type of index
## Environment info
- `datasets` version: 1.6.2
- Platform: Linux-4.15.0-142-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
I will be proposing a fix in a couple of minutes
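For reference, a minimal sketch of the workaround mentioned in the comments (it relies on the private `_indexes` attribute, so treat it as a stopgap rather than a supported API):
```python
import faiss

# Move the trained index back to CPU before serializing; faiss cannot
# write GPU-resident indexes to disk.
gpu_index = data._indexes["text_emb"].faiss_index
data._indexes["text_emb"].faiss_index = faiss.index_gpu_to_cpu(gpu_index)
data.save_faiss_index("text_emb", INDEX_SAVE_PATH)
```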
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2350/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2350/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5922
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5922/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5922/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5922/events
|
https://github.com/huggingface/datasets/issues/5922
| 1,736,898,953
|
I_kwDODunzps5nhvmJ
| 5,922
|
Length of table does not accurately reflect the split
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8068268?v=4",
"events_url": "https://api.github.com/users/amogkam/events{/privacy}",
"followers_url": "https://api.github.com/users/amogkam/followers",
"following_url": "https://api.github.com/users/amogkam/following{/other_user}",
"gists_url": "https://api.github.com/users/amogkam/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/amogkam",
"id": 8068268,
"login": "amogkam",
"node_id": "MDQ6VXNlcjgwNjgyNjg=",
"organizations_url": "https://api.github.com/users/amogkam/orgs",
"received_events_url": "https://api.github.com/users/amogkam/received_events",
"repos_url": "https://api.github.com/users/amogkam/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/amogkam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amogkam/subscriptions",
"type": "User",
"url": "https://api.github.com/users/amogkam"
}
|
[
{
"color": "ffffff",
"default": true,
"description": "This will not be worked on",
"id": 1935892913,
"name": "wontfix",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEz",
"url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix"
}
] |
closed
| false
| null |
[] | null |
[
"As already replied by @lhoestq (private channel):\r\n> `.train_test_split` (as well as `.shard`, `.select`) doesn't create a new arrow table to save time and disk space. Instead, it uses an indices mapping on top of the table that locate which examples are part of train or test.",
"This is an optimization that we don't plan to \"fix\", so I'm closing this issue."
] | 2023-06-01T18:56:26Z
| 2023-06-02T16:13:31Z
| 2023-06-02T16:13:31Z
|
NONE
| null | null | null |
### Describe the bug
I load a Huggingface Dataset and do `train_test_split`. I'm expecting the underlying table for the dataset to also be split, but it's not.
### Steps to reproduce the bug

### Expected behavior
The expected behavior is that `len(hf_dataset["train"].data)` matches the length of the train split, not the entire unsplit dataset.
### Environment info
datasets 2.10.1
python 3.10.11
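A minimal sketch of the behavior explained in the comments (the split is an indices mapping over the original table, and `flatten_indices` materializes it; values are illustrative):
```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(100))})
splits = ds.train_test_split(test_size=0.2, seed=0)

print(len(splits["train"]))       # 80 (length respects the split)
print(len(splits["train"].data))  # 100 (underlying table is unsplit)

# flatten_indices() writes a new table containing only the split's rows.
train_only = splits["train"].flatten_indices()
print(len(train_only.data))       # 80
```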
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5922/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5922/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/4374
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4374/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4374/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4374/events
|
https://github.com/huggingface/datasets/issues/4374
| 1,241,860,535
|
I_kwDODunzps5KBUm3
| 4,374
|
extremely slow processing when using a custom dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/32235549?v=4",
"events_url": "https://api.github.com/users/StephennFernandes/events{/privacy}",
"followers_url": "https://api.github.com/users/StephennFernandes/followers",
"following_url": "https://api.github.com/users/StephennFernandes/following{/other_user}",
"gists_url": "https://api.github.com/users/StephennFernandes/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/StephennFernandes",
"id": 32235549,
"login": "StephennFernandes",
"node_id": "MDQ6VXNlcjMyMjM1NTQ5",
"organizations_url": "https://api.github.com/users/StephennFernandes/orgs",
"received_events_url": "https://api.github.com/users/StephennFernandes/received_events",
"repos_url": "https://api.github.com/users/StephennFernandes/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/StephennFernandes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StephennFernandes/subscriptions",
"type": "User",
"url": "https://api.github.com/users/StephennFernandes"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "d876e3",
"default": true,
"description": "Further information is requested",
"id": 1935892912,
"name": "question",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question"
}
] |
closed
| false
| null |
[] | null |
[
"Hi !\r\n\r\nMy guess is that some examples in your dataset are bigger than your RAM, and therefore loading them in RAM to pass them to `remove_non_indic_sentences` takes forever because it might use SWAP memory.\r\n\r\nMaybe several examples in your dataset are grouped together, can you check `len(lang_dataset[\"train\"])` and `lang_dataset[\"train\"].data.nbytes` of both datasets please ? It can also be helpful to check the distribution of lengths of each examples in your dataset.",
"Closing due to inactivity"
] | 2022-05-19T14:18:05Z
| 2023-07-25T15:07:17Z
| 2023-07-25T15:07:16Z
|
NONE
| null | null | null |
## Processing a custom dataset loaded as a .txt file is extremely slow compared to a dataset of similar volume from the hub
I have a large 22 GB .txt file which I load into an HF dataset:
`lang_dataset = datasets.load_dataset("text", data_files="hi.txt")`
Then I use a pre-processing function to clean the dataset:
`lang_dataset["train"] = lang_dataset["train"].map(
remove_non_indic_sentences, num_proc=12, batched=True, batch_size=64, remove_columns=lang_dataset['train'].column_names)`
This preprocessing takes an astronomically long time while hogging all the RAM.
A similar dataset of the same size from the huggingface hub works completely fine, running the same processing function on the same amount of data:
`lang_dataset = datasets.load_dataset("oscar-corpus/OSCAR-2109", "hi", use_auth_token=True)`
The predicted preprocessing times are as follows:
huggingface hub dataset: 6.5 hrs
custom loaded dataset: 7000 hrs
Note: both datasets are essentially the same, just provided by different sources with +/- some samples; only one is hosted on the HF hub while the other is downloaded in text format.
## Steps to reproduce the bug
```
import datasets
import psutil
import sys
import glob
from fastcore.utils import listify
import re
import gc
def remove_non_indic_sentences(example):
tmp_ls = []
eng_regex = r'[. a-zA-Z0-9ÖÄÅöäå _.,!"\'\/$]*'
for e in listify(example['text']):
matches = re.findall(eng_regex, e)
for match in (str(match).strip() for match in matches if match not in [""," ", " ", ",", " ,", ", ", " , "]):
if len(list(match.split(" "))) > 2:
e = re.sub(match," ",e,count=1)
tmp_ls.append(e)
gc.collect()
example['clean_text'] = tmp_ls
return example
lang_dataset = datasets.load_dataset("text", data_files="hi.txt")
lang_dataset["train"] = lang_dataset["train"].map(
remove_non_indic_sentences, num_proc=12, batched=True, remove_columns=lang_dataset['train'].column_names), batch_size=64)
## the same thing works much faster when loading a similar dataset from the hub
lang_dataset = datasets.load_dataset("oscar-corpus/OSCAR-2109", "hi", use_auth_token=True)
lang_dataset["train"] = lang_dataset["train"].map(
    remove_non_indic_sentences, num_proc=12, batched=True, batch_size=64, remove_columns=lang_dataset['train'].column_names)
```
## Actual results
A similar dataset of the same size from the huggingface hub works completely fine, running the same processing function on the same amount of data:
`lang_dataset = datasets.load_dataset("oscar-corpus/OSCAR-2109", "hi", use_auth_token=True)`
**The predicted preprocessing times are as follows:**
huggingface hub dataset: 6.5 hrs
custom loaded dataset: 7000 hrs
**I even tried the following:**
- sharding the large 22 GB text file into smaller files and loading those
- saving the file to disk and then loading it
- using a lower `num_proc`
- using a smaller batch size
- processing without batches, i.e. without `batched=True`
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.2.2.dev0
- Platform: Ubuntu 20.04 LTS
- Python version: 3.9.7
- PyArrow version: 8.0.0
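A minimal diagnostic sketch along the lines of the suggestion in the comments, comparing row count, table size, and a sampled per-example length distribution (run on both datasets):
```python
import numpy as np

ds = lang_dataset["train"]
print(len(ds), ds.data.nbytes)  # row count and arrow table size in bytes

# Sample a few thousand examples to estimate per-example text lengths
# without loading the whole 22 GB column into RAM.
step = max(1, len(ds) // 5_000)
lengths = [len(t) for t in ds.select(range(0, len(ds), step))["text"]]
print(np.percentile(lengths, [50, 90, 99, 100]))
```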
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4374/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4374/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/3986
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3986/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3986/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3986/events
|
https://github.com/huggingface/datasets/issues/3986
| 1,176,429,565
|
I_kwDODunzps5GHuP9
| 3,986
|
Dataset loads indefinitely after modifying default cache path (~/.cache/huggingface)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10686779?v=4",
"events_url": "https://api.github.com/users/kelvinAI/events{/privacy}",
"followers_url": "https://api.github.com/users/kelvinAI/followers",
"following_url": "https://api.github.com/users/kelvinAI/following{/other_user}",
"gists_url": "https://api.github.com/users/kelvinAI/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kelvinAI",
"id": 10686779,
"login": "kelvinAI",
"node_id": "MDQ6VXNlcjEwNjg2Nzc5",
"organizations_url": "https://api.github.com/users/kelvinAI/orgs",
"received_events_url": "https://api.github.com/users/kelvinAI/received_events",
"repos_url": "https://api.github.com/users/kelvinAI/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kelvinAI/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kelvinAI/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kelvinAI"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
open
| false
| null |
[] | null |
[
"Hi ! I didn't managed to reproduce the issue. When you kill the process, is there any stacktrace that shows at what point in the code python is hanging ?",
"Hi @lhoestq , I've traced the issue back to file locking. It's similar to this thread, using Lustre filesystem as well. https://github.com/huggingface/datasets/issues/329 . In this case the user was able to modify and add -o flock option while mounting and it solved the problem. \r\nHowever in other cases such as mine, we do not have the permissions to modify the commands while mounting. I'm still trying to figure out a workaround. Any ideas how can we use a mounted Lustre filesystem with no flock option?\r\n",
"Hi @kelvinAI , I've had this issue on our institution's system which uses Lustre (in addition to our compute nodes being siloed off from external network access). The workaround I made for downloading/loading datasets was to set the `$HFHOME` environment variable to a location on the node's local storage (SSD), effectively a location that gets cleared regularly and sometimes gets used for temporary or cached files which is pretty common, e.g. \"scratch\" storage. Maybe your sysadmins, if you have them, could point you to subdirectories on a node that aren't linked to the Lustre filesystem. After downloading to scratch I found that the transformers, modules, and metrics cached folders were fine to move to my user drives on the Lustre filesystem but cached datasets that had fingerprints still had some issues with filelock, so it would help to use the function `my_dataset.save_to_disk('path/on/lustre_fs')` and static class function `Dataset.load_from_disk('path/on/lustre_fs')`. In rough steps:\r\n\r\n1. Initially download to scratch storage with `ds = datasets.load_dataset(dataset_name)`\r\n2. Call `ds.save_to_disk(my_path_on_lustre)` with a path in your user space on the Lustre filesystem\r\n3. Load datasets with `from datasets import Dataset; new_ds = Dataset.load_from_disk(my_path_on_lustre)`\r\n\r\nObviously this hinges on there existing scratch storage on the nodes you're using. Fingers crossed.",
"Hi @jpmcd , thanks for sharing your experience. For my case, the Lustre filesystem (with more storage space) is the scratch storage like the one you've mentioned. We have a local storage for each user but unfortunately there's not enough space in it to 'cache' huge datasets, hence that is why I tried changing HF_HOME to point to the scratch disk with more space and encountered the flock issue. Unfortunately I'm not aware of any viable solution to this for now so I simply fall back to using torch dataset. ",
"@jpmcd your comment saved me from pulling my hair out in frustration. Setting `HF_HOME` to a directory that's not on Lustre works like a charm. ✨ "
] | 2022-03-22T08:23:21Z
| 2023-03-06T16:55:04Z
| null |
NONE
| null | null | null |
## Describe the bug
Dataset loads indefinitely after modifying cache path (~/.cache/huggingface)
If none of the environment variables are set, this custom dataset loads fine ( json-based dataset with custom dataset load script)
** Update: Transformers modules face the same issue during loading
## A clear and concise description of what the bug is.
Issue:
- Dataset loading stalls / freezes indefinitely when HF_HOME is changed to a custom directory
- No error code, had to terminate the process
- There are some files created in the cache directory:
```
custom_cache_dir
| -- modules
| -- __init__.py
| -- datasets_modules
| -- __init__.py
| -- datasets
| -- __init__.py
| -- script.py (Dataset loading script)
| -- script.lock
```
There's no error nor any logs thrown, so I'm out of ideas on how to debug this. The custom dataset works fine if the default ~/.cache dir is used, but unfortunately it's out of space and we do not have permissions to modify the disk.
## Steps to reproduce the bug
What I've tried:
- Modifying HF_HOME (https://github.com/huggingface/transformers/issues/8703)
- Modifying HF_DATASETS_CACHE (https://huggingface.co/docs/datasets/v1.12.0/cache.html)
- Modifying cache_dir param during runtime
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset('test_dataset', cache_dir='/path/to/new/cache')
```
- Disabling dataset cache
```python
>>> from datasets import set_caching_enabled
>>> set_caching_enabled(False)
```
## Expected results
Datasets should load / cache as usual with the only exception that cache directory is different
## Actual results
Any actions taken above to change the cache directory results in loading indefinitely without terminating.
## Environment info
- `transformers` version: 4.18.0.dev0
- Platform: Linux-4.15.0-54-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- Huggingface_hub version: 0.4.0
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): 2.4.1 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
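A sketch of the workaround described in the comments; the scratch and Lustre paths below are placeholders for your own system:
```python
import os

# Point the HF cache at node-local (non-Lustre) scratch storage *before*
# importing datasets, so file locks land on a filesystem that supports flock.
os.environ["HF_HOME"] = "/local/scratch/hf_home"  # placeholder path

import datasets

ds = datasets.load_dataset("test_dataset")    # downloads/caches on local scratch
ds.save_to_disk("/lustre/user/test_dataset")  # persist to the Lustre filesystem
reloaded = datasets.load_from_disk("/lustre/user/test_dataset")
```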
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3986/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3986/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/2421
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2421/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2421/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2421/events
|
https://github.com/huggingface/datasets/pull/2421
| 905,549,756
|
MDExOlB1bGxSZXF1ZXN0NjU2NjIwMTM3
| 2,421
|
doc: fix typo HF_MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4",
"events_url": "https://api.github.com/users/borisdayma/events{/privacy}",
"followers_url": "https://api.github.com/users/borisdayma/followers",
"following_url": "https://api.github.com/users/borisdayma/following{/other_user}",
"gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/borisdayma",
"id": 715491,
"login": "borisdayma",
"node_id": "MDQ6VXNlcjcxNTQ5MQ==",
"organizations_url": "https://api.github.com/users/borisdayma/orgs",
"received_events_url": "https://api.github.com/users/borisdayma/received_events",
"repos_url": "https://api.github.com/users/borisdayma/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions",
"type": "User",
"url": "https://api.github.com/users/borisdayma"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-05-28T14:52:10Z
| 2021-06-04T09:52:45Z
| 2021-06-04T09:52:45Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2421.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2421",
"merged_at": "2021-06-04T09:52:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2421.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2421"
}
|
MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES should be HF_MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES
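Assuming the variable is read at import time like other `datasets` config options, usage would look roughly like this (the 100 MiB value is arbitrary):
```python
import os

os.environ["HF_MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES"] = str(100 * 2**20)

import datasets  # picks up the threshold when its config module loads
```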
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2421/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2421/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2137
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2137/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2137/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2137/events
|
https://github.com/huggingface/datasets/pull/2137
| 843,502,835
|
MDExOlB1bGxSZXF1ZXN0NjAyODc0MDYw
| 2,137
|
Fix missing infos from concurrent dataset loading
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-29T15:46:12Z
| 2021-03-31T10:35:56Z
| 2021-03-31T10:35:55Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2137.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2137",
"merged_at": "2021-03-31T10:35:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2137.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2137"
}
|
This should fix issue #2131
When calling `load_dataset` at the same time from 2 workers, one of the workers could have missing split infos when reloading the dataset from the cache.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2137/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2137/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4649
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4649/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4649/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4649/events
|
https://github.com/huggingface/datasets/issues/4649
| 1,296,673,712
|
I_kwDODunzps5NSauw
| 4,649
|
Add PAQ dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/omarespejel",
"id": 4755430,
"login": "omarespejel",
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"type": "User",
"url": "https://api.github.com/users/omarespejel"
}
|
[
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] |
closed
| false
| null |
[] | null |
[
"uploaded dataset [here](https://huggingface.co/datasets/embedding-data/PAQ_pairs)"
] | 2022-07-07T01:29:42Z
| 2022-07-14T02:06:27Z
| 2022-07-14T02:06:27Z
|
NONE
| null | null | null |
## Adding a Dataset
- **Name:** *PAQ*
- **Description:** *This repository contains code and models to support the research paper PAQ: 65 Million Probably-Asked Questions and What You Can Do With Them*
- **Paper:** *https://arxiv.org/abs/2102.07033*
- **Data:** *https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/PAQ_pairs.jsonl.gz*
- **Motivation:** *Dataset for training and evaluating models of conversational response*
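Once hosted on the Hub (see the repo linked in the comment above), loading it would look roughly like:
```python
from datasets import load_dataset

ds = load_dataset("embedding-data/PAQ_pairs")  # repo name taken from the comment
```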
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4649/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4649/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/1884
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1884/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1884/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1884/events
|
https://github.com/huggingface/datasets/pull/1884
| 808,755,894
|
MDExOlB1bGxSZXF1ZXN0NTczNzQwNzI5
| 1,884
|
dtype fix when using numpy arrays
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bhavitvyamalik",
"id": 19718818,
"login": "bhavitvyamalik",
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bhavitvyamalik"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-02-15T18:55:25Z
| 2021-07-30T11:01:18Z
| 2021-07-30T11:01:18Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1884.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1884",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1884.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1884"
}
|
As discussed in #625, this fix lets the user preserve the dtype of a numpy array when converting it to a pyarrow array; the dtype was previously lost due to the numpy array -> list -> pyarrow array conversion.
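A minimal illustration of the dtype loss this addresses (not the PR's implementation):
```python
import numpy as np
import pyarrow as pa

arr = np.array([1, 2, 3], dtype=np.uint8)

print(pa.array(arr.tolist()).type)  # int64 -- dtype lost via the list round-trip
print(pa.array(arr).type)           # uint8 -- dtype preserved from numpy
```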
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1884/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1884/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/5569
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5569/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5569/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5569/events
|
https://github.com/huggingface/datasets/pull/5569
| 1,597,132,383
|
PR_kwDODunzps5KnwHD
| 5,569
|
pass the dataset features to the IterableDataset.from_generator function
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/48770768?v=4",
"events_url": "https://api.github.com/users/Hubert-Bonisseur/events{/privacy}",
"followers_url": "https://api.github.com/users/Hubert-Bonisseur/followers",
"following_url": "https://api.github.com/users/Hubert-Bonisseur/following{/other_user}",
"gists_url": "https://api.github.com/users/Hubert-Bonisseur/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Hubert-Bonisseur",
"id": 48770768,
"login": "Hubert-Bonisseur",
"node_id": "MDQ6VXNlcjQ4NzcwNzY4",
"organizations_url": "https://api.github.com/users/Hubert-Bonisseur/orgs",
"received_events_url": "https://api.github.com/users/Hubert-Bonisseur/received_events",
"repos_url": "https://api.github.com/users/Hubert-Bonisseur/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Hubert-Bonisseur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hubert-Bonisseur/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Hubert-Bonisseur"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008753 / 0.011353 (-0.002600) | 0.004877 / 0.011008 (-0.006131) | 0.098320 / 0.038508 (0.059812) | 0.034123 / 0.023109 (0.011014) | 0.289539 / 0.275898 (0.013641) | 0.323584 / 0.323480 (0.000104) | 0.007455 / 0.007986 (-0.000531) | 0.004763 / 0.004328 (0.000434) | 0.074350 / 0.004250 (0.070100) | 0.039018 / 0.037052 (0.001966) | 0.294319 / 0.258489 (0.035830) | 0.348686 / 0.293841 (0.054845) | 0.037814 / 0.128546 (-0.090732) | 0.011808 / 0.075646 (-0.063838) | 0.333808 / 0.419271 (-0.085464) | 0.047758 / 0.043533 (0.004225) | 0.298533 / 0.255139 (0.043394) | 0.320790 / 0.283200 (0.037590) | 0.095909 / 0.141683 (-0.045774) | 1.434422 / 1.452155 (-0.017732) | 1.509703 / 1.492716 (0.016987) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.201728 / 0.018006 (0.183722) | 0.432243 / 0.000490 (0.431753) | 0.002760 / 0.000200 (0.002560) | 0.000080 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026090 / 0.037411 (-0.011321) | 0.105914 / 0.014526 (0.091388) | 0.115869 / 0.176557 (-0.060688) | 0.178291 / 0.737135 (-0.558844) | 0.121435 / 0.296338 (-0.174904) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.402304 / 0.215209 (0.187095) | 3.995183 / 2.077655 (1.917529) | 1.794548 / 1.504120 (0.290428) | 1.603034 / 1.541195 (0.061839) | 1.643836 / 1.468490 
(0.175346) | 0.694934 / 4.584777 (-3.889843) | 3.695128 / 3.745712 (-0.050584) | 2.018582 / 5.269862 (-3.251279) | 1.294315 / 4.565676 (-3.271362) | 0.085346 / 0.424275 (-0.338929) | 0.012201 / 0.007607 (0.004594) | 0.510057 / 0.226044 (0.284012) | 5.123404 / 2.268929 (2.854476) | 2.319089 / 55.444624 (-53.125535) | 1.930935 / 6.876477 (-4.945542) | 1.939700 / 2.142072 (-0.202372) | 0.848282 / 4.805227 (-3.956945) | 0.165561 / 6.500664 (-6.335103) | 0.062028 / 0.075469 (-0.013441) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.220576 / 1.841788 (-0.621212) | 14.413853 / 8.074308 (6.339544) | 14.027156 / 10.191392 (3.835764) | 0.170109 / 0.680424 (-0.510315) | 0.029412 / 0.534201 (-0.504789) | 0.443898 / 0.579283 (-0.135386) | 0.433059 / 0.434364 (-0.001305) | 0.533465 / 0.540337 (-0.006872) | 0.626562 / 1.386936 (-0.760374) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007148 / 0.011353 (-0.004205) | 0.005019 / 0.011008 (-0.005989) | 0.073132 / 0.038508 (0.034624) | 0.032763 / 0.023109 (0.009654) | 0.329309 / 0.275898 (0.053411) | 0.361658 / 0.323480 (0.038178) | 0.005683 / 0.007986 (-0.002302) | 0.003793 / 0.004328 (-0.000536) | 0.071858 / 0.004250 (0.067608) | 0.045160 / 0.037052 (0.008107) | 0.335852 / 0.258489 (0.077363) | 0.384274 / 0.293841 (0.090433) | 0.036647 / 0.128546 (-0.091899) | 0.012217 / 0.075646 (-0.063430) | 0.086265 / 0.419271 (-0.333007) | 0.049223 / 0.043533 (0.005690) | 0.331460 / 0.255139 (0.076321) | 0.353175 / 0.283200 (0.069975) | 0.102214 / 0.141683 (-0.039469) | 1.491451 / 1.452155 (0.039296) | 1.553702 / 1.492716 (0.060985) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222972 / 0.018006 (0.204966) | 0.432862 / 0.000490 (0.432372) | 0.000421 / 0.000200 (0.000221) | 0.000059 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028401 / 0.037411 (-0.009010) | 0.109331 / 0.014526 (0.094805) | 0.119246 / 0.176557 (-0.057311) | 0.187997 / 0.737135 (-0.549138) | 0.124212 / 0.296338 (-0.172127) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.427240 / 0.215209 (0.212031) | 4.271619 / 2.077655 (2.193964) | 2.104948 / 1.504120 (0.600828) | 1.910624 / 1.541195 (0.369430) | 1.943812 / 1.468490 (0.475322) | 0.711466 / 4.584777 (-3.873311) | 3.778987 / 3.745712 (0.033275) | 2.976258 / 5.269862 (-2.293604) | 1.807591 / 4.565676 (-2.758086) | 0.088286 / 0.424275 (-0.335989) | 0.012461 / 0.007607 (0.004854) | 0.527554 / 0.226044 (0.301509) | 5.279461 / 2.268929 (3.010532) | 2.517911 / 55.444624 (-52.926713) | 2.176557 / 6.876477 (-4.699920) | 2.205322 / 2.142072 (0.063249) | 0.855012 / 4.805227 (-3.950215) | 0.170007 / 6.500664 (-6.330658) | 0.065273 / 0.075469 (-0.010196) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.282785 / 1.841788 (-0.559003) | 14.819500 / 8.074308 (6.745192) | 13.282211 / 10.191392 (3.090819) | 0.161804 / 0.680424 (-0.518620) | 0.017615 / 0.534201 (-0.516586) | 0.420159 / 0.579283 (-0.159124) | 0.441304 / 0.434364 (0.006940) | 0.531704 / 0.540337 (-0.008634) | 0.627477 / 1.386936 (-0.759459) |\n\n</details>\n</details>\n\n\n",
"Hmm I think we need to add more tests. Not sure what would happen with :\r\n- decodable features that may end up decoded twice \r\n- formatted datasets \r\n\r\nI'd be in favor of reverting this until we checked those"
] | 2023-02-23T16:06:04Z
| 2023-02-24T14:06:37Z
| 2023-02-23T18:15:16Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5569.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5569",
"merged_at": "2023-02-23T18:15:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5569.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5569"
}
|
[5568](https://github.com/huggingface/datasets/issues/5568)
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5569/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5569/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/5708
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5708/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5708/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5708/events
|
https://github.com/huggingface/datasets/issues/5708
| 1,655,023,642
|
I_kwDODunzps5ipaga
| 5,708
|
Dataset sizes are in MiB instead of MB in dataset cards
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] |
open
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null |
[
"Example of bulk edit: https://huggingface.co/datasets/aeslc/discussions/5",
"looks great! \r\n\r\nDo you encode the fact that you've already converted a dataset? (to not convert it twice) or do you base yourself on the info contained in `dataset_info`",
"I am only looping trough the dataset cards, assuming that all of them were created with MiB.\r\n\r\nI agree we should only run the bulk edit once for all canonical datasets: I'm using a for-loop over canonical datasets.",
"yes, worst case, we have this in structured data:\r\n\r\n<img width=\"337\" alt=\"image\" src=\"https://user-images.githubusercontent.com/326577/230037051-06caddcb-08c8-4953-a710-f3d122917db3.png\">\r\n",
"I have just included as well the conversion from MB to GB if necessary. See: \r\n- https://huggingface.co/datasets/bookcorpus/discussions/2/files\r\n- https://huggingface.co/datasets/asnq/discussions/2/files",
"Nice. Is it another loop? Because in https://huggingface.co/datasets/amazon_us_reviews/discussions/2/files we have `32377.29 MB` for example",
"First, I tested some batches to check the changes made. Then I incorporated the MB to GB conversion. Now I'm running the rest.",
"The bulk edit parsed 751 canonical datasets and updated 166.",
"Thanks a lot!\r\n\r\nThe sizes now match as expected!\r\n\r\n<img width=\"1446\" alt=\"Capture d’écran 2023-04-05 à 16 10 15\" src=\"https://user-images.githubusercontent.com/1676121/230107044-ac2a76ea-a4fe-4e81-a925-f464b85f5edd.png\">\r\n",
"I made another bulk edit of ancient canonical datasets that were moved to community organization. I have parsed 11 datasets and opened a PR on 3 of them:\r\n- [ ] \"allenai/scicite\": https://huggingface.co/datasets/allenai/scicite/discussions/3\r\n- [ ] \"allenai/scifact\": https://huggingface.co/datasets/allenai/scifact/discussions/2\r\n- [x] \"dair-ai/emotion\": https://huggingface.co/datasets/dair-ai/emotion/discussions/6",
"should we force merge the PR and close this issue?"
] | 2023-04-05T06:36:03Z
| 2023-09-25T12:07:09Z
| null |
MEMBER
| null | null | null |
As @severo reported in an internal discussion (https://github.com/huggingface/moon-landing/issues/5929):
Now we show the dataset size:
- from the dataset card (in the side column)
- from the datasets-server (in the viewer)
But even when the size is the same, we see a mismatch because the viewer shows MB, while the info from the README is generally in MiB (even though it is labeled MB -> https://huggingface.co/datasets/blimp/blob/main/README.md?code=true#L1932)
<img width="664" alt="Capture d’écran 2023-04-04 à 10 16 01" src="https://user-images.githubusercontent.com/1676121/229730887-0bd8fa6e-9462-46c6-bd4e-4d2c5784cabb.png">
TODO: Values to be fixed in: `Size of downloaded dataset files:`, `Size of the generated dataset:` and `Total amount of disk used:`
- [x] Bulk edit on the Hub to fix this in all canonical datasets
- [x] Bulk PR on the Hub to fix ancient canonical datasets that were moved to organizations
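For reference, a minimal sketch of the unit conversion involved (plain Python; 1 MiB = 2^20 bytes while 1 MB = 10^6 bytes, so the same size is a smaller number in MiB than in MB):
```python
def mib_to_mb(size_mib: float) -> float:
    """Convert mebibytes (MiB) to megabytes (MB)."""
    return size_mib * 2**20 / 10**6

print(mib_to_mb(100))  # 104.8576 -> 100 MiB is ~104.86 MB
```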
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5708/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5708/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/2991
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2991/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2991/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2991/events
|
https://github.com/huggingface/datasets/issues/2991
| 1,012,174,823
|
I_kwDODunzps48VI_n
| 2,991
|
add documentation for the `Unix style pattern` matching feature that can be leveraged for `data_files` in `load_dataset`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/55560583?v=4",
"events_url": "https://api.github.com/users/SaulLu/events{/privacy}",
"followers_url": "https://api.github.com/users/SaulLu/followers",
"following_url": "https://api.github.com/users/SaulLu/following{/other_user}",
"gists_url": "https://api.github.com/users/SaulLu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SaulLu",
"id": 55560583,
"login": "SaulLu",
"node_id": "MDQ6VXNlcjU1NTYwNTgz",
"organizations_url": "https://api.github.com/users/SaulLu/orgs",
"received_events_url": "https://api.github.com/users/SaulLu/received_events",
"repos_url": "https://api.github.com/users/SaulLu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SaulLu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SaulLu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SaulLu"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[] | 2021-09-30T13:22:01Z
| 2021-09-30T13:22:01Z
| null |
CONTRIBUTOR
| null | null | null |
Unless I'm mistaken, it seems that in the new documentation it is no longer mentioned that you can use Unix style pattern matching in the `data_files` argument of the `load_dataset` method.
This feature was mentioned [here](https://huggingface.co/docs/datasets/loading_datasets.html#from-a-community-dataset-on-the-hugging-face-hub) in the previous documentation.
I'd love to hear your opinion @lhoestq , @albertvillanova and @stevhliu
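For context, a minimal example of the feature in question (the file pattern is illustrative):
```python
from datasets import load_dataset

# Unix-style patterns expand to all matching files, so shards don't
# have to be listed one by one
dataset = load_dataset("json", data_files={"train": "data/train-*.json"})
```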
|
{
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2991/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2991/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/654
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/654/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/654/comments
|
https://api.github.com/repos/huggingface/datasets/issues/654/events
|
https://github.com/huggingface/datasets/pull/654
| 705,511,058
|
MDExOlB1bGxSZXF1ZXN0NDkwMjI1Nzk3
| 654
|
Allow empty inputs in metrics
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-09-21T11:26:36Z
| 2020-10-06T03:51:48Z
| 2020-09-21T16:13:38Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/654.diff",
"html_url": "https://github.com/huggingface/datasets/pull/654",
"merged_at": "2020-09-21T16:13:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/654.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/654"
}
|
There was an Arrow error when trying to compute a metric with empty inputs. The error occurred when reading the Arrow file, before `metric._compute` was called.
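For context, a minimal sketch of the scenario this PR fixes (using the metric API as it existed at the time):
```python
from datasets import load_metric

metric = load_metric("accuracy")
# before this fix, an Arrow error was raised here while reading the
# cached predictions/references file, before metric._compute ran;
# what _compute returns for empty inputs is metric-dependent
score = metric.compute(predictions=[], references=[])
```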
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/654/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/654/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/6021
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6021/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6021/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6021/events
|
https://github.com/huggingface/datasets/pull/6021
| 1,799,785,904
|
PR_kwDODunzps5VP11Q
| 6,021
|
[docs] Update return statement of index search
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stevhliu",
"id": 59462357,
"login": "stevhliu",
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stevhliu"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007697 / 0.011353 (-0.003656) | 0.004233 / 0.011008 (-0.006776) | 0.087890 / 0.038508 (0.049382) | 0.065305 / 0.023109 (0.042196) | 0.366919 / 0.275898 (0.091020) | 0.399656 / 0.323480 (0.076176) | 0.006753 / 0.007986 (-0.001232) | 0.003428 / 0.004328 (-0.000900) | 0.070180 / 0.004250 (0.065930) | 0.054164 / 0.037052 (0.017112) | 0.377130 / 0.258489 (0.118641) | 0.403456 / 0.293841 (0.109615) | 0.042639 / 0.128546 (-0.085907) | 0.012396 / 0.075646 (-0.063250) | 0.314235 / 0.419271 (-0.105036) | 0.061976 / 0.043533 (0.018443) | 0.376959 / 0.255139 (0.121820) | 0.433313 / 0.283200 (0.150113) | 0.031253 / 0.141683 (-0.110430) | 1.555749 / 1.452155 (0.103594) | 1.643905 / 1.492716 (0.151189) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208630 / 0.018006 (0.190624) | 0.519532 / 0.000490 (0.519042) | 0.003719 / 0.000200 (0.003519) | 0.000099 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027403 / 0.037411 (-0.010008) | 0.080990 / 0.014526 (0.066464) | 0.090424 / 0.176557 (-0.086133) | 0.153922 / 0.737135 (-0.583213) | 0.098156 / 0.296338 (-0.198183) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.519453 / 0.215209 (0.304244) | 5.100089 / 2.077655 (3.022434) | 2.212165 / 1.504120 (0.708045) | 1.894405 / 1.541195 (0.353210) | 1.922914 / 1.468490 
(0.454424) | 0.762443 / 4.584777 (-3.822334) | 4.669214 / 3.745712 (0.923502) | 5.016066 / 5.269862 (-0.253796) | 3.128821 / 4.565676 (-1.436856) | 0.091541 / 0.424275 (-0.332734) | 0.007582 / 0.007607 (-0.000026) | 0.652753 / 0.226044 (0.426709) | 6.601375 / 2.268929 (4.332446) | 3.076948 / 55.444624 (-52.367677) | 2.250544 / 6.876477 (-4.625933) | 2.404059 / 2.142072 (0.261987) | 0.994917 / 4.805227 (-3.810311) | 0.200318 / 6.500664 (-6.300346) | 0.069354 / 0.075469 (-0.006115) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.482559 / 1.841788 (-0.359229) | 20.722092 / 8.074308 (12.647784) | 17.703217 / 10.191392 (7.511825) | 0.215370 / 0.680424 (-0.465053) | 0.028208 / 0.534201 (-0.505993) | 0.425992 / 0.579283 (-0.153291) | 0.492785 / 0.434364 (0.058421) | 0.474154 / 0.540337 (-0.066183) | 0.644599 / 1.386936 (-0.742337) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008372 / 0.011353 (-0.002981) | 0.004543 / 0.011008 (-0.006465) | 0.070564 / 0.038508 (0.032056) | 0.066855 / 0.023109 (0.043746) | 0.386724 / 0.275898 (0.110826) | 0.432184 / 0.323480 (0.108704) | 0.005250 / 0.007986 (-0.002736) | 0.003630 / 0.004328 (-0.000698) | 0.069310 / 0.004250 (0.065060) | 0.055759 / 0.037052 (0.018707) | 0.375789 / 0.258489 (0.117299) | 0.417335 / 0.293841 (0.123494) | 0.043424 / 0.128546 (-0.085122) | 0.013106 / 0.075646 (-0.062541) | 0.087836 / 0.419271 (-0.331436) | 0.057770 / 0.043533 (0.014237) | 0.396694 / 0.255139 (0.141555) | 0.439350 / 0.283200 (0.156150) | 0.031660 / 0.141683 (-0.110023) | 1.571339 / 1.452155 (0.119185) | 1.667169 / 1.492716 (0.174452) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.180534 / 0.018006 (0.162528) | 0.540027 / 0.000490 (0.539537) | 0.003573 / 0.000200 (0.003373) | 0.000141 / 0.000054 (0.000086) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031380 / 0.037411 (-0.006032) | 0.083762 / 0.014526 (0.069236) | 0.098166 / 0.176557 (-0.078390) | 0.160761 / 0.737135 (-0.576374) | 0.097683 / 0.296338 (-0.198656) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.568074 / 0.215209 (0.352865) | 5.660544 / 2.077655 (3.582889) | 2.416698 / 1.504120 (0.912578) | 2.177096 / 1.541195 (0.635901) | 2.206178 / 1.468490 (0.737688) | 0.844864 / 4.584777 (-3.739912) | 4.793636 / 3.745712 (1.047923) | 7.062387 / 5.269862 (1.792525) | 4.201228 / 4.565676 (-0.364449) | 0.091997 / 0.424275 (-0.332279) | 0.007881 / 0.007607 (0.000274) | 0.679466 / 0.226044 (0.453422) | 6.580268 / 2.268929 (4.311340) | 3.229907 / 55.444624 (-52.214717) | 2.524877 / 6.876477 (-4.351600) | 2.463796 / 2.142072 (0.321723) | 0.975627 / 4.805227 (-3.829600) | 0.186670 / 6.500664 (-6.313994) | 0.065307 / 0.075469 (-0.010163) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.501447 / 1.841788 (-0.340340) | 21.231037 / 8.074308 (13.156729) | 17.591671 / 10.191392 (7.400279) | 0.212745 / 0.680424 (-0.467679) | 0.026100 / 0.534201 (-0.508101) | 0.428391 / 0.579283 (-0.150892) | 0.535268 / 0.434364 (0.100904) | 0.506733 / 0.540337 (-0.033604) | 0.660832 / 1.386936 (-0.726104) |\n\n</details>\n</details>\n\n\n"
] | 2023-07-11T21:33:32Z
| 2023-07-12T17:13:02Z
| 2023-07-12T17:03:00Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6021.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6021",
"merged_at": "2023-07-12T17:03:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6021.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6021"
}
|
Clarifies in the return statement of the docstring that the retrieval score comes from `IndexFlatL2` by default (see [PR](https://github.com/huggingface/transformers/issues/24739) and internal Slack [convo](https://huggingface.slack.com/archives/C01229B19EX/p1689105179711689)), and fixes the formatting, because multiple return values are not supported.
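For context, a minimal sketch of the search call whose docstring this updates (data and query values are illustrative):
```python
import numpy as np
from datasets import Dataset

ds = Dataset.from_dict({"embeddings": np.random.rand(100, 8).tolist()})
ds.add_faiss_index(column="embeddings")  # IndexFlatL2 by default

query = np.random.rand(8).astype(np.float32)
scores, examples = ds.get_nearest_examples("embeddings", query, k=5)
# with the default IndexFlatL2, scores are L2 distances: lower is closer
```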
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6021/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6021/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4942
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4942/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4942/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4942/events
|
https://github.com/huggingface/datasets/issues/4942
| 1,363,869,421
|
I_kwDODunzps5RSv7t
| 4,942
|
Trec Dataset has incorrect labels
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6539145?v=4",
"events_url": "https://api.github.com/users/wmpauli/events{/privacy}",
"followers_url": "https://api.github.com/users/wmpauli/followers",
"following_url": "https://api.github.com/users/wmpauli/following{/other_user}",
"gists_url": "https://api.github.com/users/wmpauli/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wmpauli",
"id": 6539145,
"login": "wmpauli",
"node_id": "MDQ6VXNlcjY1MzkxNDU=",
"organizations_url": "https://api.github.com/users/wmpauli/orgs",
"received_events_url": "https://api.github.com/users/wmpauli/received_events",
"repos_url": "https://api.github.com/users/wmpauli/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wmpauli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wmpauli/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wmpauli"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null |
[
"Thanks for reporting, @wmpauli. \r\n\r\nIndeed we recently fixed this issue:\r\n- #4801 \r\n\r\nThe fix will be accessible after our next library release. In the meantime, you can have it by passing `revision=\"main\"` to `load_dataset`."
] | 2022-09-06T22:13:40Z
| 2022-09-08T11:12:03Z
| 2022-09-08T11:12:03Z
|
NONE
| null | null | null |
## Describe the bug
Both the coarse and fine labels appear to be mismatched with the documented label mapping.
## Steps to reproduce the bug
```python
import pandas as pd
from datasets import load_dataset
dataset = "trec"
raw_datasets = load_dataset(dataset)
df = pd.DataFrame(raw_datasets["test"])
df.head()
```
## Expected results
text (string) | coarse_label (class label) | fine_label (class label)
-- | -- | --
How far is it from Denver to Aspen ? | 5 (NUM) | 40 (NUM:dist)
What county is Modesto , California in ? | 4 (LOC) | 32 (LOC:city)
Who was Galileo ? | 3 (HUM) | 31 (HUM:desc)
What is an atom ? | 2 (DESC) | 24 (DESC:def)
When did Hawaii become a state ? | 5 (NUM) | 39 (NUM:date)
## Actual results
index | label-coarse |label-fine | text
-- |-- | -- | --
0 | 4 | 40 | How far is it from Denver to Aspen ?
1 | 5 | 21 | What county is Modesto , California in ?
2 | 3 | 12 | Who was Galileo ?
3 | 0 | 7 | What is an atom ?
4 | 4 | 8 | When did Hawaii become a state ?
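To inspect the integer-to-name mapping directly, the `ClassLabel` features can be queried (a quick sketch; column names differ between dataset versions, e.g. `label-coarse` vs `coarse_label`):
```python
features = raw_datasets["test"].features
# each ClassLabel carries the integer -> name mapping
print(features["coarse_label"].names)
print(features["fine_label"].names)
```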
## Environment info
- `datasets` version: 2.4.0
- Platform: Linux-5.4.0-1086-azure-x86_64-with-glibc2.27
- Python version: 3.9.13
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4942/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4942/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/2476
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2476/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2476/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2476/events
|
https://github.com/huggingface/datasets/pull/2476
| 917,686,662
|
MDExOlB1bGxSZXF1ZXN0NjY3MTg3OTk1
| 2,476
|
Add TimeDial
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bhavitvyamalik",
"id": 19718818,
"login": "bhavitvyamalik",
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bhavitvyamalik"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi @lhoestq,\r\nI've pushed the updated README and tags. Let me know if anything is missing/needs some improvement!\r\n\r\n~PS. I don't know why it's not triggering the build~"
] | 2021-06-10T18:33:07Z
| 2021-07-30T12:57:54Z
| 2021-07-30T12:57:54Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2476.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2476",
"merged_at": "2021-07-30T12:57:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2476.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2476"
}
|
Dataset: https://github.com/google-research-datasets/TimeDial
To-Do: Update README.md and add YAML tags
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2476/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2476/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/5085
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5085/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5085/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5085/events
|
https://github.com/huggingface/datasets/issues/5085
| 1,400,113,569
|
I_kwDODunzps5TdAmh
| 5,085
|
Filtering on an empty dataset returns a corrupted dataset.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/36087158?v=4",
"events_url": "https://api.github.com/users/gabegma/events{/privacy}",
"followers_url": "https://api.github.com/users/gabegma/followers",
"following_url": "https://api.github.com/users/gabegma/following{/other_user}",
"gists_url": "https://api.github.com/users/gabegma/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gabegma",
"id": 36087158,
"login": "gabegma",
"node_id": "MDQ6VXNlcjM2MDg3MTU4",
"organizations_url": "https://api.github.com/users/gabegma/orgs",
"received_events_url": "https://api.github.com/users/gabegma/received_events",
"repos_url": "https://api.github.com/users/gabegma/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gabegma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gabegma/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gabegma"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "DF8D62",
"default": false,
"description": "",
"id": 4614514401,
"name": "hacktoberfest",
"node_id": "LA_kwDODunzps8AAAABEwvm4Q",
"url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/23029765?v=4",
"events_url": "https://api.github.com/users/Mouhanedg56/events{/privacy}",
"followers_url": "https://api.github.com/users/Mouhanedg56/followers",
"following_url": "https://api.github.com/users/Mouhanedg56/following{/other_user}",
"gists_url": "https://api.github.com/users/Mouhanedg56/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Mouhanedg56",
"id": 23029765,
"login": "Mouhanedg56",
"node_id": "MDQ6VXNlcjIzMDI5NzY1",
"organizations_url": "https://api.github.com/users/Mouhanedg56/orgs",
"received_events_url": "https://api.github.com/users/Mouhanedg56/received_events",
"repos_url": "https://api.github.com/users/Mouhanedg56/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Mouhanedg56/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mouhanedg56/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Mouhanedg56"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/23029765?v=4",
"events_url": "https://api.github.com/users/Mouhanedg56/events{/privacy}",
"followers_url": "https://api.github.com/users/Mouhanedg56/followers",
"following_url": "https://api.github.com/users/Mouhanedg56/following{/other_user}",
"gists_url": "https://api.github.com/users/Mouhanedg56/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Mouhanedg56",
"id": 23029765,
"login": "Mouhanedg56",
"node_id": "MDQ6VXNlcjIzMDI5NzY1",
"organizations_url": "https://api.github.com/users/Mouhanedg56/orgs",
"received_events_url": "https://api.github.com/users/Mouhanedg56/received_events",
"repos_url": "https://api.github.com/users/Mouhanedg56/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Mouhanedg56/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mouhanedg56/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Mouhanedg56"
}
] | null |
[
"~~It seems like #5043 fix (merged recently) is the root cause of such behaviour. When we empty indices mapping (because the dataset length equals to zero), we can no longer get column item like: `ds_filter_2['sentence']` which uses\r\n`ds_filter_1._indices.column(0)`~~\r\n\r\n**UPDATE:**\r\nEmpty datasets are returned without going through partial function on `map` method, which will not work to get indices for `filter`: we need to run `get_indices_from_mask_function` partial function on the dataset to get output = `{\"indices\": []}`. But this is complicated since functions used in args, in particular `get_indices_from_mask_function`, do not support empty datasets.\r\nWe can just handle empty datasets aside on filter method.",
"#self-assign",
"Thank you for solving this amazingly quickly!"
] | 2022-10-06T18:18:49Z
| 2022-10-07T19:06:02Z
| 2022-10-07T18:40:26Z
|
NONE
| null | null | null |
## Describe the bug
When filtering a dataset twice, where the first result is an empty dataset, the second dataset seems corrupted.
## Steps to reproduce the bug
```python
from datasets import load_dataset

datasets = load_dataset("glue", "sst2")
dataset_split = datasets['validation']
ds_filter_1 = dataset_split.filter(lambda x: False) # Some filtering condition that leads to an empty dataset
assert ds_filter_1.num_rows == 0
sentences = ds_filter_1['sentence']
assert len(sentences) == 0
ds_filter_2 = ds_filter_1.filter(lambda x: False) # Some other filtering condition
assert ds_filter_2.num_rows == 0
assert 'sentence' in ds_filter_2.column_names
sentences = ds_filter_2['sentence']
```
## Expected results
The last line should be returning an empty list, same as 4 lines above.
## Actual results
The last line currently raises `IndexError: index out of bounds`.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.5.2
- Platform: macOS-11.6.6-x86_64-i386-64bit
- Python version: 3.9.11
- PyArrow version: 7.0.0
- Pandas version: 1.4.1
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 3,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5085/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5085/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/1806
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1806/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1806/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1806/events
|
https://github.com/huggingface/datasets/pull/1806
| 798,607,869
|
MDExOlB1bGxSZXF1ZXN0NTY1Mzk0ODIz
| 1,806
|
Update details of the MLSUM dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/15138872?v=4",
"events_url": "https://api.github.com/users/padipadou/events{/privacy}",
"followers_url": "https://api.github.com/users/padipadou/followers",
"following_url": "https://api.github.com/users/padipadou/following{/other_user}",
"gists_url": "https://api.github.com/users/padipadou/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/padipadou",
"id": 15138872,
"login": "padipadou",
"node_id": "MDQ6VXNlcjE1MTM4ODcy",
"organizations_url": "https://api.github.com/users/padipadou/orgs",
"received_events_url": "https://api.github.com/users/padipadou/received_events",
"repos_url": "https://api.github.com/users/padipadou/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/padipadou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/padipadou/subscriptions",
"type": "User",
"url": "https://api.github.com/users/padipadou"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Thanks!"
] | 2021-02-01T18:35:12Z
| 2021-02-01T18:46:28Z
| 2021-02-01T18:46:21Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1806.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1806",
"merged_at": "2021-02-01T18:46:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1806.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1806"
}
|
Update details of the MLSUM dataset
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1806/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1806/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3015
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3015/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3015/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3015/events
|
https://github.com/huggingface/datasets/pull/3015
| 1,015,130,845
|
PR_kwDODunzps4so0GX
| 3,015
|
Extend support for streaming datasets that use glob.glob
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-10-04T12:42:37Z
| 2021-10-05T13:46:39Z
| 2021-10-05T13:46:38Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3015.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3015",
"merged_at": "2021-10-05T13:46:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3015.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3015"
}
|
This PR extends support in streaming mode for datasets that use `glob`, by patching the function `glob.glob`.
Related to #2880, #2876, #2874
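As a rough illustration of the idea (not the actual implementation; names here are assumptions), the patched function expands patterns through `fsspec` so remote URLs can be globbed too:
```python
import fsspec

def xglob(pattern: str):
    # expand the pattern with fsspec: plain local paths use the local
    # filesystem, while URLs such as "https://..." or "s3://..." are
    # resolved through the matching fsspec filesystem
    fs, _, paths = fsspec.get_fs_token_paths(pattern)
    return paths

# in streaming mode, `glob.glob` inside the dataset script would then be
# monkeypatched to point at a function like xglob
```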
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3015/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3015/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3833
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3833/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3833/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3833/events
|
https://github.com/huggingface/datasets/pull/3833
| 1,160,543,713
|
PR_kwDODunzps4z_99t
| 3,833
|
Small typos in How-to-train tutorial.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/12573521?v=4",
"events_url": "https://api.github.com/users/lkhphuc/events{/privacy}",
"followers_url": "https://api.github.com/users/lkhphuc/followers",
"following_url": "https://api.github.com/users/lkhphuc/following{/other_user}",
"gists_url": "https://api.github.com/users/lkhphuc/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lkhphuc",
"id": 12573521,
"login": "lkhphuc",
"node_id": "MDQ6VXNlcjEyNTczNTIx",
"organizations_url": "https://api.github.com/users/lkhphuc/orgs",
"received_events_url": "https://api.github.com/users/lkhphuc/received_events",
"repos_url": "https://api.github.com/users/lkhphuc/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lkhphuc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lkhphuc/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lkhphuc"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-03-06T07:49:49Z
| 2022-03-07T12:35:33Z
| 2022-03-07T12:13:17Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3833.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3833",
"merged_at": "2022-03-07T12:13:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3833.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3833"
}
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3833/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3833/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4216
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4216/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4216/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4216/events
|
https://github.com/huggingface/datasets/pull/4216
| 1,214,614,029
|
PR_kwDODunzps42u1_w
| 4,216
|
Avoid recursion error in map if example is returned as dict value
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-25T14:40:32Z
| 2022-05-04T17:20:06Z
| 2022-05-04T17:12:52Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4216.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4216",
"merged_at": "2022-05-04T17:12:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4216.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4216"
}
|
I noticed this bug while answering [this question](https://discuss.huggingface.co/t/correct-way-to-create-a-dataset-from-a-csv-file/15686/11?u=mariosasko).
This code replicates the bug:
```python
from datasets import Dataset
dset = Dataset.from_dict({"en": ["aa", "bb"], "fr": ["cc", "dd"]})
dset.map(lambda ex: {"translation": ex})
```
and this is the fix for it (before this PR):
```python
from datasets import Dataset
dset = Dataset.from_dict({"en": ["aa", "bb"], "fr": ["cc", "dd"]})
dset.map(lambda ex: {"translation": dict(ex)})
```
Internally, this can be fixed by merging the two dicts via dict unpacking (instead of `dict.update`) in `Dataset.map`, which avoids creating recursive dictionaries.
P.S. `{**a, **b}` is slightly more performant than `a.update(b)` in my benchmarks.
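A minimal illustration of the difference, independent of `datasets`:
```python
a = {"en": "aa"}
b = {"translation": a}

# dict unpacking builds a fresh dict; neither input is mutated
merged = {**a, **b}
assert merged["translation"] is a

# dict.update mutates `a` in place, so `a` ends up containing itself
a.update(b)
assert a["translation"] is a  # recursive dictionary
```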
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4216/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4216/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1352
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1352/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1352/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1352/events
|
https://github.com/huggingface/datasets/pull/1352
| 759,978,543
|
MDExOlB1bGxSZXF1ZXN0NTM0ODg0ODg4
| 1,352
|
change url for prachathai67k to internet archive
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4",
"events_url": "https://api.github.com/users/cstorm125/events{/privacy}",
"followers_url": "https://api.github.com/users/cstorm125/followers",
"following_url": "https://api.github.com/users/cstorm125/following{/other_user}",
"gists_url": "https://api.github.com/users/cstorm125/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cstorm125",
"id": 15519308,
"login": "cstorm125",
"node_id": "MDQ6VXNlcjE1NTE5MzA4",
"organizations_url": "https://api.github.com/users/cstorm125/orgs",
"received_events_url": "https://api.github.com/users/cstorm125/received_events",
"repos_url": "https://api.github.com/users/cstorm125/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cstorm125/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cstorm125/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cstorm125"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-12-09T04:20:37Z
| 2020-12-10T13:42:17Z
| 2020-12-10T13:42:17Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1352.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1352",
"merged_at": "2020-12-10T13:42:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1352.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1352"
}
|
`prachathai67k` is currently downloaded via Git LFS from the PyThaiNLP GitHub repository. Since the file is quite large (~250MB), I moved the URL to archive.org in order to prevent rate-limit issues.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1352/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1352/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3459
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3459/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3459/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3459/events
|
https://github.com/huggingface/datasets/issues/3459
| 1,084,969,672
|
I_kwDODunzps5Aq1LI
| 3,459
|
dataset.filter overwriting previously set dataset._indices values, resulting in the wrong elements being selected.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/9354454?v=4",
"events_url": "https://api.github.com/users/mmajurski/events{/privacy}",
"followers_url": "https://api.github.com/users/mmajurski/followers",
"following_url": "https://api.github.com/users/mmajurski/following{/other_user}",
"gists_url": "https://api.github.com/users/mmajurski/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mmajurski",
"id": 9354454,
"login": "mmajurski",
"node_id": "MDQ6VXNlcjkzNTQ0NTQ=",
"organizations_url": "https://api.github.com/users/mmajurski/orgs",
"received_events_url": "https://api.github.com/users/mmajurski/received_events",
"repos_url": "https://api.github.com/users/mmajurski/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mmajurski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mmajurski/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mmajurski"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[
"I think this is a duplicate of [#3190](https://github.com/huggingface/datasets/issues/3190)?",
"Upgrading the datasets version as per #3190 fixes this bug. \r\nI'm Marking as closed."
] | 2021-12-20T16:16:49Z
| 2021-12-20T16:34:57Z
| 2021-12-20T16:34:57Z
|
NONE
| null | null | null |
## Describe the bug
When using dataset.select to select a subset of a dataset, dataset._indices are set to indicate which elements are now considered in the dataset.
The same thing happens when you shuffle the dataset; dataset._indices are set to indicate what the new order of the data is.
However, if you then use a dataset.filter, that filter interacts with those dataset._indices values in a non-intuitive manner.
https://huggingface.co/docs/datasets/_modules/datasets/arrow_dataset.html#Dataset.filter
Effectively, it looks like the original set of _indices was discarded and overwritten by the set created during the filter operation.
I think this is actually an issue with how the map function handles dataset._indices. Ideally it should use the _indices it gets passed, and then return an updated _indices which reflects the map transformation applied to the starting _indices.
## Steps to reproduce the bug
```python
from datasets import load_dataset

dataset = load_dataset('imdb', split='train', keep_in_memory=True)
dataset = dataset.shuffle(keep_in_memory=True)
dataset = dataset.select(range(0, 10), keep_in_memory=True)
print("initial 10 elements")
print(dataset['label']) # -> [1, 1, 0, 1, 0, 0, 0, 1, 0, 0]
dataset = dataset.filter(lambda x: x['label'] == 0, keep_in_memory=True)
print("filtered 10 elements looking for label 0")
print(dataset['label']) # -> [1, 1, 1, 1, 1, 1]
```
## Actual results
```
$ python indices_bug.py
initial 10 elements
[1, 1, 0, 1, 0, 0, 0, 1, 0, 0]
filtered 10 elements looking for label 0
[1, 1, 1, 1, 1, 1]
```
This code block first shuffles the dataset (to get a mix of label 0 and label 1).
Then it selects just the first 10 elements (the number of elements does not matter, 10 is just easy to visualize). The important part is that you select some subset of the dataset.
Finally, a filter is applied to pull out just the elements with `label == 0`.
The bug is that you cannot combine any dataset operation which sets dataset._indices with filter.
In this case I have two: shuffle and select.
If you use just a single dataset._indices operation (in this case shuffle), the bug still shows up.
The shuffle sets the dataset._indices and then filter uses those indices in the map, then overwrites dataset._indices with the filter results.
```python
dataset = load_dataset('imdb', split='train', keep_in_memory=True)
dataset = dataset.shuffle(keep_in_memory=True)
dataset = dataset.filter(lambda x: x['label'] == 0, keep_in_memory=True)
dataset = dataset.select(range(0, 10), keep_in_memory=True)
print(dataset['label']) # -> [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
```
## Expected results
In an ideal world, the dataset filter would respect any dataset._indices values which had previously been set.
If you use dataset.filter with the base dataset (where dataset._indices has not been set) then the filter command works as expected.
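A possible workaround on affected versions (a sketch, not from the original report): `Dataset.flatten_indices` rewrites the current selection to a new table and clears `_indices`, so a subsequent filter starts from a clean state.
```python
# Sketch: materialize the selection before filtering. flatten_indices is a
# real datasets API; that it sidesteps this bug on old versions is an assumption.
from datasets import load_dataset

dataset = load_dataset('imdb', split='train', keep_in_memory=True)
dataset = dataset.shuffle(keep_in_memory=True).select(range(0, 10), keep_in_memory=True)
dataset = dataset.flatten_indices()  # drops _indices by rewriting the selected rows
dataset = dataset.filter(lambda x: x['label'] == 0, keep_in_memory=True)
```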
## Environment info
Here are the commands required to rebuild the conda environment from scratch.
```
# create a virtual environment
conda create -n dataset_indices python=3.8 -y
# activate the virtual environment
conda activate dataset_indices
# install huggingface datasets
conda install datasets
```
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.1
- Platform: Linux-5.11.0-41-generic-x86_64-with-glibc2.17
- Python version: 3.8.12
- PyArrow version: 3.0.0
### Full Conda Environment
```
$ conda env export
name: dataset_indices
channels:
- defaults
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=4.5=1_gnu
- abseil-cpp=20210324.2=h2531618_0
- aiohttp=3.8.1=py38h7f8727e_0
- aiosignal=1.2.0=pyhd3eb1b0_0
- arrow-cpp=3.0.0=py38h6b21186_4
- attrs=21.2.0=pyhd3eb1b0_0
- aws-c-common=0.4.57=he6710b0_1
- aws-c-event-stream=0.1.6=h2531618_5
- aws-checksums=0.1.9=he6710b0_0
- aws-sdk-cpp=1.8.185=hce553d0_0
- bcj-cffi=0.5.1=py38h295c915_0
- blas=1.0=mkl
- boost-cpp=1.73.0=h27cfd23_11
- bottleneck=1.3.2=py38heb32a55_1
- brotli=1.0.9=he6710b0_2
- brotli-python=1.0.9=py38heb0550a_2
- brotlicffi=1.0.9.2=py38h295c915_0
- brotlipy=0.7.0=py38h27cfd23_1003
- bzip2=1.0.8=h7b6447c_0
- c-ares=1.17.1=h27cfd23_0
- ca-certificates=2021.10.26=h06a4308_2
- certifi=2021.10.8=py38h06a4308_0
- cffi=1.14.6=py38h400218f_0
- conllu=4.4.1=pyhd3eb1b0_0
- cryptography=36.0.0=py38h9ce1e76_0
- dataclasses=0.8=pyh6d0b6a4_7
- dill=0.3.4=pyhd3eb1b0_0
- double-conversion=3.1.5=he6710b0_1
- et_xmlfile=1.1.0=py38h06a4308_0
- filelock=3.4.0=pyhd3eb1b0_0
- frozenlist=1.2.0=py38h7f8727e_0
- gflags=2.2.2=he6710b0_0
- glog=0.5.0=h2531618_0
- gmp=6.2.1=h2531618_2
- grpc-cpp=1.39.0=hae934f6_5
- huggingface_hub=0.0.17=pyhd3eb1b0_0
- icu=58.2=he6710b0_3
- idna=3.3=pyhd3eb1b0_0
- importlib-metadata=4.8.2=py38h06a4308_0
- importlib_metadata=4.8.2=hd3eb1b0_0
- intel-openmp=2021.4.0=h06a4308_3561
- krb5=1.19.2=hac12032_0
- ld_impl_linux-64=2.35.1=h7274673_9
- libboost=1.73.0=h3ff78a5_11
- libcurl=7.80.0=h0b77cf5_0
- libedit=3.1.20210910=h7f8727e_0
- libev=4.33=h7f8727e_1
- libevent=2.1.8=h1ba5d50_1
- libffi=3.3=he6710b0_2
- libgcc-ng=9.3.0=h5101ec6_17
- libgomp=9.3.0=h5101ec6_17
- libnghttp2=1.46.0=hce63b2e_0
- libprotobuf=3.17.2=h4ff587b_1
- libssh2=1.9.0=h1ba5d50_1
- libstdcxx-ng=9.3.0=hd4cf53a_17
- libthrift=0.14.2=hcc01f38_0
- libxml2=2.9.12=h03d6c58_0
- libxslt=1.1.34=hc22bd24_0
- lxml=4.6.3=py38h9120a33_0
- lz4-c=1.9.3=h295c915_1
- mkl=2021.4.0=h06a4308_640
- mkl-service=2.4.0=py38h7f8727e_0
- mkl_fft=1.3.1=py38hd3c417c_0
- mkl_random=1.2.2=py38h51133e4_0
- multiprocess=0.70.12.2=py38h7f8727e_0
- multivolumefile=0.2.3=pyhd3eb1b0_0
- ncurses=6.3=h7f8727e_2
- numexpr=2.7.3=py38h22e1b3c_1
- numpy=1.21.2=py38h20f2e39_0
- numpy-base=1.21.2=py38h79a1101_0
- openpyxl=3.0.9=pyhd3eb1b0_0
- openssl=1.1.1l=h7f8727e_0
- orc=1.6.9=ha97a36c_3
- packaging=21.3=pyhd3eb1b0_0
- pip=21.2.4=py38h06a4308_0
- py7zr=0.16.1=pyhd3eb1b0_1
- pycparser=2.21=pyhd3eb1b0_0
- pycryptodomex=3.10.1=py38h27cfd23_1
- pyopenssl=21.0.0=pyhd3eb1b0_1
- pyparsing=3.0.4=pyhd3eb1b0_0
- pyppmd=0.16.1=py38h295c915_0
- pysocks=1.7.1=py38h06a4308_0
- python=3.8.12=h12debd9_0
- python-dateutil=2.8.2=pyhd3eb1b0_0
- python-xxhash=2.0.2=py38h7f8727e_0
- pyzstd=0.14.4=py38h7f8727e_3
- re2=2020.11.01=h2531618_1
- readline=8.1=h27cfd23_0
- requests=2.26.0=pyhd3eb1b0_0
- setuptools=58.0.4=py38h06a4308_0
- six=1.16.0=pyhd3eb1b0_0
- snappy=1.1.8=he6710b0_0
- sqlite=3.36.0=hc218d9a_0
- texttable=1.6.4=pyhd3eb1b0_0
- tk=8.6.11=h1ccaba5_0
- typing_extensions=3.10.0.2=pyh06a4308_0
- uriparser=0.9.3=he6710b0_1
- utf8proc=2.6.1=h27cfd23_0
- wheel=0.37.0=pyhd3eb1b0_1
- xxhash=0.8.0=h7f8727e_3
- xz=5.2.5=h7b6447c_0
- zipp=3.6.0=pyhd3eb1b0_0
- zlib=1.2.11=h7f8727e_4
- zstd=1.4.9=haebb681_0
- pip:
- async-timeout==4.0.2
- charset-normalizer==2.0.9
- datasets==1.16.1
- fsspec==2021.11.1
- huggingface-hub==0.2.1
- multidict==5.2.0
- pandas==1.3.5
- pyarrow==6.0.1
- pytz==2021.3
- pyyaml==6.0
- tqdm==4.62.3
- typing-extensions==4.0.1
- urllib3==1.26.7
- yarl==1.7.2
```
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3459/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3459/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/3540
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3540/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3540/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3540/events
|
https://github.com/huggingface/datasets/issues/3540
| 1,094,900,336
|
I_kwDODunzps5BQtpw
| 3,540
|
How to convert torch.utils.data.Dataset to datasets.arrow_dataset.Dataset?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/35062414?v=4",
"events_url": "https://api.github.com/users/CindyTing/events{/privacy}",
"followers_url": "https://api.github.com/users/CindyTing/followers",
"following_url": "https://api.github.com/users/CindyTing/following{/other_user}",
"gists_url": "https://api.github.com/users/CindyTing/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/CindyTing",
"id": 35062414,
"login": "CindyTing",
"node_id": "MDQ6VXNlcjM1MDYyNDE0",
"organizations_url": "https://api.github.com/users/CindyTing/orgs",
"received_events_url": "https://api.github.com/users/CindyTing/received_events",
"repos_url": "https://api.github.com/users/CindyTing/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/CindyTing/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CindyTing/subscriptions",
"type": "User",
"url": "https://api.github.com/users/CindyTing"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[] | 2022-01-06T02:13:42Z
| 2022-01-06T02:17:39Z
| null |
NONE
| null | null | null |
Hi,
I use torch.utils.data.Dataset to define my own data, but I need to use the 'map' function of datasets.arrow_dataset.Dataset later, so I hope to convert torch.utils.data.Dataset to datasets.arrow_dataset.Dataset.
Here is an example.
```
from torch.utils.data import Dataset
from transformers import AutoTokenizer
from datasets.arrow_dataset import Dataset as HFDataset

class ADataset(Dataset):
    def __init__(self, data):
        super().__init__()
        self.data = data

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return len(self.data)

class MDataset():
    def __init__(self, tokenizer: AutoTokenizer, data_args, training_args):
        self.train_dataset = ADataset(data_args)
        self.tokenizer = tokenizer
        self.data_args = data_args
        self.train_dataset = self.train_dataset.map(
            self.process_function,
            batched=True,
            remove_columns=column_names,
            load_from_cache_file=True,
            desc="Running tokenizer on train dataset",
        )

    def process_function(self, examples):
        sentences = [" ".join(sample[0][3]) for sample in examples]
        tokenized = self.tokenizer(
            sentences,
            max_length=self.max_seq_len,
            padding=self.padding,
            truncation=True)
```
But it raises an error: `AttributeError: 'ADataset' object has no attribute 'map'`.
so how to convert torch.utils.data.Dataset to datasets.arrow_dataset.Dataset?
Thanks in advance!
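One possible approach (a sketch, not from the original thread; the sample data is hypothetical and `Dataset.from_list` requires a reasonably recent `datasets` version): materialize the torch dataset into a list of dicts and build an Arrow-backed dataset from it, which then supports `map`.
```python
from datasets import Dataset as HFDataset

# Reusing the ADataset class from above with hypothetical dict items:
torch_ds = ADataset([{"text": "hello"}, {"text": "world"}])
hf_ds = HFDataset.from_list([torch_ds[i] for i in range(len(torch_ds))])
hf_ds = hf_ds.map(lambda ex: {"n_chars": len(ex["text"])})
print(hf_ds.column_names)  # ['text', 'n_chars']
```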
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3540/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3540/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/2627
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2627/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2627/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2627/events
|
https://github.com/huggingface/datasets/pull/2627
| 941,503,349
|
MDExOlB1bGxSZXF1ZXN0Njg3MzczMDg1
| 2,627
|
Minor fix tests with Windows paths
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] |
{
"closed_at": "2021-07-21T15:36:49Z",
"closed_issues": 29,
"created_at": "2021-06-08T18:48:33Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-08-05T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/6",
"id": 6836458,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels",
"node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==",
"number": 6,
"open_issues": 0,
"state": "closed",
"title": "1.10",
"updated_at": "2021-07-21T15:36:49Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/6"
}
|
[] | 2021-07-11T17:55:48Z
| 2021-07-12T14:08:47Z
| 2021-07-12T08:34:50Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2627.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2627",
"merged_at": "2021-07-12T08:34:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2627.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2627"
}
|
Minor fix tests with Windows paths.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2627/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2627/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2824
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2824/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2824/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2824/events
|
https://github.com/huggingface/datasets/pull/2824
| 976,394,721
|
MDExOlB1bGxSZXF1ZXN0NzE3MzIyMzY5
| 2,824
|
Fix defaults in cache_dir docstring in load.py
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-08-22T14:48:37Z
| 2021-08-26T13:23:32Z
| 2021-08-26T11:55:16Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2824.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2824",
"merged_at": "2021-08-26T11:55:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2824.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2824"
}
|
Fix defaults in the `cache_dir` docstring.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2824/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2824/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1893
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1893/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1893/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1893/events
|
https://github.com/huggingface/datasets/issues/1893
| 809,556,503
|
MDU6SXNzdWU4MDk1NTY1MDM=
| 1,893
|
wmt19 is broken
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00"
}
|
[
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null |
[
"This was also mentioned in https://github.com/huggingface/datasets/issues/488 \r\n\r\nThe bucket where is data was stored seems to be unavailable now. Maybe we can change the URL to the ones in https://conferences.unite.un.org/uncorpus/en/downloadoverview ?",
"Closing since this has been fixed by #1912"
] | 2021-02-16T18:39:58Z
| 2021-03-03T17:42:02Z
| 2021-03-03T17:42:02Z
|
CONTRIBUTOR
| null | null | null |
1. Check which lang pairs we have: `--dataset_name wmt19`:
Please pick one among the available configs: ['cs-en', 'de-en', 'fi-en', 'gu-en', 'kk-en', 'lt-en', 'ru-en', 'zh-en', 'fr-de']
2. OK, let's pick `ru-en`:
`--dataset_name wmt19 --dataset_config "ru-en"`
no cookies:
```
Traceback (most recent call last):
File "./run_seq2seq.py", line 661, in <module>
main()
File "./run_seq2seq.py", line 317, in main
datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name)
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 740, in load_dataset
builder_instance.download_and_prepare(
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/builder.py", line 572, in download_and_prepare
self._download_and_prepare(
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/builder.py", line 628, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/stas/.cache/huggingface/modules/datasets_modules/datasets/wmt19/436092de5f3faaf0fc28bc84875475b384e90a5470fa6afaee11039ceddc5052/wmt_utils.py", line 755, in _split_generators
downloaded_files = dl_manager.download_and_extract(urls_to_download)
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/download_manager.py", line 276, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/download_manager.py", line 191, in download
downloaded_path_or_paths = map_nested(
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 233, in map_nested
mapped = [
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 234, in <listcomp>
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 190, in _single_map_nested
mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 190, in <listcomp>
mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 172, in _single_map_nested
return function(data_struct)
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/download_manager.py", line 211, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 274, in cached_path
output_path = get_from_cache(
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 584, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://storage.googleapis.com/tfdataset-data/downloadataset/uncorpus/UNv1.0.en-ru.tar.gz
```
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1893/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1893/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/1584
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1584/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1584/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1584/events
|
https://github.com/huggingface/datasets/pull/1584
| 768,820,406
|
MDExOlB1bGxSZXF1ZXN0NTQxMTM2OTQ5
| 1,584
|
Load hind encorp
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/56379013?v=4",
"events_url": "https://api.github.com/users/rahul-art/events{/privacy}",
"followers_url": "https://api.github.com/users/rahul-art/followers",
"following_url": "https://api.github.com/users/rahul-art/following{/other_user}",
"gists_url": "https://api.github.com/users/rahul-art/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rahul-art",
"id": 56379013,
"login": "rahul-art",
"node_id": "MDQ6VXNlcjU2Mzc5MDEz",
"organizations_url": "https://api.github.com/users/rahul-art/orgs",
"received_events_url": "https://api.github.com/users/rahul-art/received_events",
"repos_url": "https://api.github.com/users/rahul-art/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rahul-art/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rahul-art/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rahul-art"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-12-16T12:38:38Z
| 2020-12-18T02:27:24Z
| 2020-12-18T02:27:24Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1584.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1584",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1584.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1584"
}
|
Reformatted and well documented; YAML tags added; code updated.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1584/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1584/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3316
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3316/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3316/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3316/events
|
https://github.com/huggingface/datasets/issues/3316
| 1,062,185,822
|
I_kwDODunzps4_T6te
| 3,316
|
Add RedCaps dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "bfdadc",
"default": false,
"description": "Vision datasets",
"id": 3608941089,
"name": "vision",
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
] | null |
[] | 2021-11-24T09:23:02Z
| 2022-01-12T14:13:15Z
| 2022-01-12T14:13:15Z
|
MEMBER
| null | null | null |
## Adding a Dataset
- **Name:** RedCaps
- **Description:** Web-curated image-text data created by the people, for the people
- **Paper:** https://arxiv.org/abs/2111.11431
- **Data:** https://redcaps.xyz/
- **Motivation:** Multimodal image-text dataset: 12M+ Image-text pairs
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
Proposed by @patil-suraj
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3316/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3316/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/6442
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6442/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6442/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6442/events
|
https://github.com/huggingface/datasets/issues/6442
| 2,006,086,907
|
I_kwDODunzps53knT7
| 6,442
|
Trouble loading image folder with additional features - metadata file ignored
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/57615435?v=4",
"events_url": "https://api.github.com/users/linoytsaban/events{/privacy}",
"followers_url": "https://api.github.com/users/linoytsaban/followers",
"following_url": "https://api.github.com/users/linoytsaban/following{/other_user}",
"gists_url": "https://api.github.com/users/linoytsaban/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/linoytsaban",
"id": 57615435,
"login": "linoytsaban",
"node_id": "MDQ6VXNlcjU3NjE1NDM1",
"organizations_url": "https://api.github.com/users/linoytsaban/orgs",
"received_events_url": "https://api.github.com/users/linoytsaban/received_events",
"repos_url": "https://api.github.com/users/linoytsaban/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/linoytsaban/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/linoytsaban/subscriptions",
"type": "User",
"url": "https://api.github.com/users/linoytsaban"
}
|
[] |
closed
| false
| null |
[] | null |
[
"I reproduced too:\r\n- root: metadata file is ignored (https://huggingface.co/datasets/severo/doc-image-3)\r\n- data/ dir: metadata file is ignored (https://huggingface.co/datasets/severo/doc-image-4)\r\n- train/ dir: works (https://huggingface.co/datasets/severo/doc-image-5)"
] | 2023-11-22T11:01:35Z
| 2023-11-24T17:13:03Z
| 2023-11-24T17:13:03Z
|
NONE
| null | null | null |
### Describe the bug
Loading an image folder with a caption column using `load_dataset(<image_folder_path>)` doesn't load the captions.
When loading a local image folder with captions using `datasets==2.13.0`
```
from datasets import load_dataset
data = load_dataset(<image_folder_path>)
data.column_names
```
yields
`{'train': ['image', 'prompt']}`
but when using `datasets==2.15.0`
yields
`{'train': ['image']}`
Putting the images and `metadata.jsonl` file into a nested `train` folder **or** loading with `load_dataset("imagefolder", data_dir=<image_folder_path>)` solves the issue and
yields
`{'train': ['image', 'prompt']}`
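For example, a nested layout like the following (illustrative file names) makes the metadata file get picked up:
```
my_image_folder/
└── train/
    ├── metadata.jsonl
    ├── image_0.png
    └── image_1.png
```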
### Steps to reproduce the bug
1. create a folder `<image_folder_path>` that contains images and a metadata file with additional features- e.g. "prompt"
2. run:
```
from datasets import load_dataset
data = load_dataset("<image_folder_path>")
data.column_names
```
### Expected behavior
`{'train': ['image', 'prompt']}`
### Environment info
- `datasets` version: 2.15.0
- Platform: Linux-5.15.120+-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.19.4
- PyArrow version: 9.0.0
- Pandas version: 1.5.3
- `fsspec` version: 2023.6.0
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6442/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6442/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/3918
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3918/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3918/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3918/events
|
https://github.com/huggingface/datasets/issues/3918
| 1,169,366,117
|
I_kwDODunzps5Fsxxl
| 3,918
|
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/51409295?v=4",
"events_url": "https://api.github.com/users/willowdong/events{/privacy}",
"followers_url": "https://api.github.com/users/willowdong/followers",
"following_url": "https://api.github.com/users/willowdong/following{/other_user}",
"gists_url": "https://api.github.com/users/willowdong/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/willowdong",
"id": 51409295,
"login": "willowdong",
"node_id": "MDQ6VXNlcjUxNDA5Mjk1",
"organizations_url": "https://api.github.com/users/willowdong/orgs",
"received_events_url": "https://api.github.com/users/willowdong/received_events",
"repos_url": "https://api.github.com/users/willowdong/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/willowdong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/willowdong/subscriptions",
"type": "User",
"url": "https://api.github.com/users/willowdong"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
}
] |
closed
| false
| null |
[] | null |
[
"Hi @willowdong! These issues were fixed on master. We will have a new release of `datasets` later today. In the meantime, you can avoid these issues by installing `datasets` from master as follows:\r\n```bash\r\npip install git+https://github.com/huggingface/datasets.git\r\n```",
"You should force redownload:\r\n```python\r\ndataset = load_dataset(\"multi_news\", download_mode=\"force_redownload\")\r\ndataset_2 = load_dataset(\"reddit_tifu\", \"long\", download_mode=\"force_redownload\")",
"Fixed by:\r\n- #3787 \r\n- #3843"
] | 2022-03-15T08:53:45Z
| 2022-03-16T15:36:58Z
| 2022-03-15T14:01:25Z
|
NONE
| null | null | null |
## Describe the bug
Can't load the dataset
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset

dataset = load_dataset('multi_news')
dataset_2 = load_dataset("reddit_tifu", "long")
```
## Actual results
```
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=1ffWfITKFMJeqjT8loC8aiCLRNJpc_XnF']
```
## Environment info
- `datasets` version: 1.18.4
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.0
- PyArrow version: 6.0.1
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3918/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3918/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/1301
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1301/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1301/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1301/events
|
https://github.com/huggingface/datasets/pull/1301
| 759,419,945
|
MDExOlB1bGxSZXF1ZXN0NTM0NDI5MjAy
| 1,301
|
arxiv dataset added
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/33005287?v=4",
"events_url": "https://api.github.com/users/tanmoyio/events{/privacy}",
"followers_url": "https://api.github.com/users/tanmoyio/followers",
"following_url": "https://api.github.com/users/tanmoyio/following{/other_user}",
"gists_url": "https://api.github.com/users/tanmoyio/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tanmoyio",
"id": 33005287,
"login": "tanmoyio",
"node_id": "MDQ6VXNlcjMzMDA1Mjg3",
"organizations_url": "https://api.github.com/users/tanmoyio/orgs",
"received_events_url": "https://api.github.com/users/tanmoyio/received_events",
"repos_url": "https://api.github.com/users/tanmoyio/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tanmoyio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tanmoyio/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tanmoyio"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Readme added\r\n",
"@lhoestq is it looking alright ? "
] | 2020-12-08T12:50:51Z
| 2020-12-09T18:05:16Z
| 2020-12-09T18:05:16Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1301.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1301",
"merged_at": "2020-12-09T18:05:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1301.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1301"
}
|
**adding arXiv dataset**: arXiv dataset and metadata of 1.7M+ scholarly papers across STEM
dataset link: https://www.kaggle.com/Cornell-University/arxiv
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1301/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1301/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/111
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/111/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/111/comments
|
https://api.github.com/repos/huggingface/datasets/issues/111/events
|
https://github.com/huggingface/datasets/pull/111
| 618,528,060
|
MDExOlB1bGxSZXF1ZXN0NDE4MjQwMjMy
| 111
|
[Clean-up] remove under construction datastes
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-05-14T20:52:13Z
| 2020-05-14T20:52:23Z
| 2020-05-14T20:52:22Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/111.diff",
"html_url": "https://github.com/huggingface/datasets/pull/111",
"merged_at": "2020-05-14T20:52:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/111.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/111"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/111/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/111/timeline
| null | null | true
|
|
https://api.github.com/repos/huggingface/datasets/issues/6043
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6043/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6043/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6043/events
|
https://github.com/huggingface/datasets/issues/6043
| 1,807,771,750
|
I_kwDODunzps5rwGhm
| 6,043
|
Compression kwargs have no effect when saving datasets as csv
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/128361578?v=4",
"events_url": "https://api.github.com/users/exs-avianello/events{/privacy}",
"followers_url": "https://api.github.com/users/exs-avianello/followers",
"following_url": "https://api.github.com/users/exs-avianello/following{/other_user}",
"gists_url": "https://api.github.com/users/exs-avianello/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/exs-avianello",
"id": 128361578,
"login": "exs-avianello",
"node_id": "U_kgDOB6akag",
"organizations_url": "https://api.github.com/users/exs-avianello/orgs",
"received_events_url": "https://api.github.com/users/exs-avianello/received_events",
"repos_url": "https://api.github.com/users/exs-avianello/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/exs-avianello/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/exs-avianello/subscriptions",
"type": "User",
"url": "https://api.github.com/users/exs-avianello"
}
|
[] |
open
| false
| null |
[] | null |
[
"Hello @exs-avianello, I have reproduced the bug successfully and have understood the problem. But I am confused regarding this part of the statement, \"`pandas.DataFrame.to_csv` is always called with a buf-like `path_or_buf`\".\r\n\r\nCan you please elaborate on it?\r\n\r\nThanks!",
"Hi @aryanxk02 ! Sure, what I actually meant is that when passing a path-like `path_or_buf` here\r\n\r\nhttps://github.com/huggingface/datasets/blob/14f6edd9222e577dccb962ed5338b79b73502fa5/src/datasets/arrow_dataset.py#L4708-L4714 \r\n\r\nit gets converted to a file object behind the scenes here\r\n\r\nhttps://github.com/huggingface/datasets/blob/14f6edd9222e577dccb962ed5338b79b73502fa5/src/datasets/io/csv.py#L92-L94\r\n\r\nand the eventual pandas `.to_csv()` calls that write to it always get `path_or_buf=None`, making pandas ignore the `compression` kwarg in the `to_csv_kwargs`\r\n\r\nhttps://github.com/huggingface/datasets/blob/14f6edd9222e577dccb962ed5338b79b73502fa5/src/datasets/io/csv.py#L107-L109",
"@exs-avianello When `path_or_buf` is set to None, the `to_csv()` method will return the CSV data as a string instead of saving it to a file. Hence the compression doesn't take place. I think setting `path_or_buf=self.path_or_buf` should work. What you say?"
] | 2023-07-17T13:19:21Z
| 2023-07-22T17:34:18Z
| null |
NONE
| null | null | null |
### Describe the bug
When attempting to save a dataset as a compressed csv file, the compression kwargs provided to `.to_csv()` (which get piped to `pandas.DataFrame.to_csv`) have no effect - the dataset does not get compressed.
A warning is raised if explicitly providing a `compression` kwarg, but no warnings are raised if relying on the defaults. This can lead to datasets silently not getting compressed for users expecting the behaviour to match pandas' `.to_csv()`, where the compression format is automatically inferred from the destination path suffix.
### Steps to reproduce the bug
```python
# dataset is not compressed (but at least a warning is emitted)
import os

import datasets
dataset = datasets.load_dataset("rotten_tomatoes", split="train")
dataset.to_csv("uncompressed.csv")
print(os.path.getsize("uncompressed.csv")) # 1008607
dataset.to_csv("compressed.csv.gz", compression={'method': 'gzip', 'compresslevel': 1, 'mtime': 1})
print(os.path.getsize("compressed.csv.gz")) # 1008607
```
```shell
>>>
RuntimeWarning: compression has no effect when passing a non-binary object as input.
csv_str = batch.to_pandas().to_csv(
```
```python
# dataset is not compressed and no warnings are emitted
dataset.to_csv("compressed.csv.gz")
print(os.path.getsize("compressed.csv.gz")) # 1008607
# compare with
dataset.to_pandas().to_csv("pandas.csv.gz")
print(os.path.getsize("pandas.csv.gz")) # 418561
```
---
I think that this is because behind the scenes `pandas.DataFrame.to_csv` is always called with a buf-like `path_or_buf`, but users that are providing a path-like to `datasets.Dataset.to_csv` are likely not to expect / know that - leading to a mismatch in their understanding of the expected behaviour of the `compression` kwarg.
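A possible interim workaround (a sketch, not from the original report; it relies on `to_csv` accepting a binary file object, which its `path_or_buf` signature suggests): handle the compression on the caller's side.
```python
# Sketch: wrap the destination in gzip ourselves so the bytes written by
# to_csv actually end up compressed, independently of the pandas kwargs.
import gzip

import datasets

dataset = datasets.load_dataset("rotten_tomatoes", split="train")
with gzip.open("compressed.csv.gz", "wb") as f:
    dataset.to_csv(f)
```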
### Expected behavior
The dataset to be saved as a compressed csv file when providing a `compression` kwarg, or when relying on the default `compression='infer'`
### Environment info
`datasets == 2.13.1`
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6043/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6043/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/1212
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1212/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1212/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1212/events
|
https://github.com/huggingface/datasets/pull/1212
| 757,978,795
|
MDExOlB1bGxSZXF1ZXN0NTMzMjM1MTky
| 1,212
|
Add Sanskrit Classic texts in datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/9317265?v=4",
"events_url": "https://api.github.com/users/parmarsuraj99/events{/privacy}",
"followers_url": "https://api.github.com/users/parmarsuraj99/followers",
"following_url": "https://api.github.com/users/parmarsuraj99/following{/other_user}",
"gists_url": "https://api.github.com/users/parmarsuraj99/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/parmarsuraj99",
"id": 9317265,
"login": "parmarsuraj99",
"node_id": "MDQ6VXNlcjkzMTcyNjU=",
"organizations_url": "https://api.github.com/users/parmarsuraj99/orgs",
"received_events_url": "https://api.github.com/users/parmarsuraj99/received_events",
"repos_url": "https://api.github.com/users/parmarsuraj99/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/parmarsuraj99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/parmarsuraj99/subscriptions",
"type": "User",
"url": "https://api.github.com/users/parmarsuraj99"
}
|
[] |
closed
| false
| null |
[] | null |
[
"merging since the CI is fixed on master"
] | 2020-12-06T17:31:31Z
| 2020-12-07T19:04:08Z
| 2020-12-07T19:04:08Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1212.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1212",
"merged_at": "2020-12-07T19:04:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1212.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1212"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1212/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1212/timeline
| null | null | true
|
|
https://api.github.com/repos/huggingface/datasets/issues/2413
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2413/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2413/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2413/events
|
https://github.com/huggingface/datasets/issues/2413
| 903,777,557
|
MDU6SXNzdWU5MDM3Nzc1NTc=
| 2,413
|
AttributeError: 'DatasetInfo' object has no attribute 'task_templates'
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/53588015?v=4",
"events_url": "https://api.github.com/users/jungwhank/events{/privacy}",
"followers_url": "https://api.github.com/users/jungwhank/followers",
"following_url": "https://api.github.com/users/jungwhank/following{/other_user}",
"gists_url": "https://api.github.com/users/jungwhank/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jungwhank",
"id": 53588015,
"login": "jungwhank",
"node_id": "MDQ6VXNlcjUzNTg4MDE1",
"organizations_url": "https://api.github.com/users/jungwhank/orgs",
"received_events_url": "https://api.github.com/users/jungwhank/received_events",
"repos_url": "https://api.github.com/users/jungwhank/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jungwhank/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jungwhank/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jungwhank"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[
"Hi ! Can you try using a more up-to-date version ? We added the task_templates in `datasets` 1.7.0.\r\n\r\nIdeally when you're working on new datasets, you should install and use the local version of your fork of `datasets`. Here I think you tried to run the 1.7.0 tests with the 1.6.2 code"
] | 2021-05-27T13:44:28Z
| 2021-06-01T01:05:47Z
| 2021-06-01T01:05:47Z
|
CONTRIBUTOR
| null | null | null |
## Describe the bug
Hello,
I'm trying to add a dataset and contribute, but the test keeps failing with the CLI command below.
` RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_<my_dataset>`
## Steps to reproduce the bug
It seems like a bug, since I see the error with existing datasets as well, not only the dataset I'm trying to add.
` RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_<any_dataset>`
## Expected results
All test passed
## Actual results
```
# check that dataset is not empty
self.parent.assertListEqual(sorted(dataset_builder.info.splits.keys()), sorted(dataset))
for split in dataset_builder.info.splits.keys():
# check that loaded datset is not empty
self.parent.assertTrue(len(dataset[split]) > 0)
# check that we can cast features for each task template
> task_templates = dataset_builder.info.task_templates
E AttributeError: 'DatasetInfo' object has no attribute 'task_templates'
tests/test_dataset_common.py:175: AttributeError
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.6.2
- Platform: Darwin-20.4.0-x86_64-i386-64bit
- Python version: 3.7.7
- PyTorch version (GPU?): 1.7.0 (False)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2413/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2413/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/2354
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2354/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2354/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2354/events
|
https://github.com/huggingface/datasets/issues/2354
| 890,439,523
|
MDU6SXNzdWU4OTA0Mzk1MjM=
| 2,354
|
Document DatasetInfo attributes
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
}
] | null |
[] | 2021-05-12T20:01:29Z
| 2021-05-22T09:26:14Z
| 2021-05-22T09:26:14Z
|
MEMBER
| null | null | null |
**Is your feature request related to a problem? Please describe.**
As noted in PR #2255, the attributes of `DatasetInfo` are not documented in the [docs](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=datasetinfo#datasetinfo). It would be nice to do so :)
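For context, here is a short sketch of the attributes in question (output values depend on the dataset; the dataset name is only an example):
```python
# Inspect a few DatasetInfo attributes that the docs would need to cover.
from datasets import load_dataset

ds = load_dataset("glue", "mrpc", split="train")
info = ds.info  # a DatasetInfo instance

print(info.description)
print(info.features)
print(info.splits)
print(info.citation)
```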
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2354/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2354/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/5130
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5130/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5130/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5130/events
|
https://github.com/huggingface/datasets/pull/5130
| 1,413,435,000
|
PR_kwDODunzps5BBxXX
| 5,130
|
Avoid extra cast in `class_encode_column`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-18T15:31:24Z
| 2022-10-19T11:53:02Z
| 2022-10-19T11:50:46Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5130.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5130",
"merged_at": "2022-10-19T11:50:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5130.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5130"
}
|
Pass the updated features to `map` to avoid the `cast` in `class_encode_column`.
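A minimal sketch of the idea on a toy dataset (the encoding function is illustrative, not the exact internal code): declaring the target features in the `map` call means the resulting table already has the `ClassLabel` type, so no separate `cast` pass is needed.
```python
from datasets import ClassLabel, Dataset, Features

ds = Dataset.from_dict({"label": ["pos", "neg", "pos"]})
names = sorted(set(ds["label"]))
features = Features({"label": ClassLabel(names=names)})

# Encode the strings and declare the ClassLabel feature in a single pass.
ds = ds.map(
    lambda batch: {"label": [names.index(x) for x in batch["label"]]},
    batched=True,
    features=features,
)
print(ds.features["label"])  # ClassLabel(names=['neg', 'pos'])
```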
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5130/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5130/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4780
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4780/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4780/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4780/events
|
https://github.com/huggingface/datasets/pull/4780
| 1,326,034,767
|
PR_kwDODunzps48g9oA
| 4,780
|
Remove apache_beam import from module level in natural_questions dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-08-02T15:34:54Z
| 2022-08-02T16:16:33Z
| 2022-08-02T16:03:17Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4780.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4780",
"merged_at": "2022-08-02T16:03:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4780.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4780"
}
|
Instead of importing `apache_beam` at the module level, import it in the method `_build_pcollection`.
Fix #4779.
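A minimal sketch of the deferred-import pattern (the class and method names mirror the Beam-based dataset script, but the body is illustrative rather than the exact diff):
```python
import datasets


class NaturalQuestions(datasets.BeamBasedBuilder):
    def _build_pcollection(self, pipeline, filepaths):
        # Importing apache_beam here instead of at module level means that
        # merely importing the dataset script no longer requires Beam.
        import apache_beam as beam

        return pipeline | beam.Create(filepaths)
```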
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4780/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4780/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2886
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2886/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2886/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2886/events
|
https://github.com/huggingface/datasets/issues/2886
| 992,534,632
|
MDU6SXNzdWU5OTI1MzQ2MzI=
| 2,886
|
Hj
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/90416328?v=4",
"events_url": "https://api.github.com/users/Noorasri/events{/privacy}",
"followers_url": "https://api.github.com/users/Noorasri/followers",
"following_url": "https://api.github.com/users/Noorasri/following{/other_user}",
"gists_url": "https://api.github.com/users/Noorasri/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Noorasri",
"id": 90416328,
"login": "Noorasri",
"node_id": "MDQ6VXNlcjkwNDE2MzI4",
"organizations_url": "https://api.github.com/users/Noorasri/orgs",
"received_events_url": "https://api.github.com/users/Noorasri/received_events",
"repos_url": "https://api.github.com/users/Noorasri/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Noorasri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Noorasri/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Noorasri"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-09-09T18:58:52Z
| 2021-09-10T11:46:29Z
| 2021-09-10T11:46:29Z
|
NONE
| null | null | null | null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2886/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2886/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/datasets/issues/1735
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1735/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1735/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1735/events
|
https://github.com/huggingface/datasets/pull/1735
| 785,184,740
|
MDExOlB1bGxSZXF1ZXN0NTU0MjUzMDcw
| 1,735
|
Update add new dataset template
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sgugger",
"id": 35901082,
"login": "sgugger",
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"repos_url": "https://api.github.com/users/sgugger/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sgugger"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Add new \"dataset\"? ;)",
"Lol, too used to Transformers ;-)"
] | 2021-01-13T15:08:09Z
| 2021-01-14T15:16:01Z
| 2021-01-14T15:16:00Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1735.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1735",
"merged_at": "2021-01-14T15:16:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1735.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1735"
}
|
This PR fixes a few typos in the "Add new dataset template" and clarifies a bit what to do for the dummy data creation when the `auto_generate` flag can't work.
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1735/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1735/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4066
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4066/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4066/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4066/events
|
https://github.com/huggingface/datasets/pull/4066
| 1,186,728,104
|
PR_kwDODunzps41U63x
| 4,066
|
Tasks alignment with models
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Yay! This is exciting! Note that we would probably be able to generate this JSON directly from `huggingface/hub-docs`' `Types.ts` file (cc @osanseviero)",
"The following issue should make this much easier :smile: https://github.com/huggingface/hub-docs/issues/83",
"So far I think I've addressed all the comments that I got on slack, but feel free to do a review @osanseviero and let me know if it sounds good to you",
"It just occurred to me that we should probably restart the `datasets-tagging` space once this is merged to update all the task categories there: https://huggingface.co/spaces/huggingface/datasets-tagging",
"Yes, let me update it now",
"Updated: https://huggingface.co/spaces/huggingface/datasets-tagging",
"current automated export is visible at #4154"
] | 2022-03-30T16:45:56Z
| 2022-04-13T13:12:52Z
| 2022-04-08T12:20:00Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4066.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4066",
"merged_at": "2022-04-08T12:20:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4066.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4066"
}
|
I updated our `tasks.json` file with the new task taxonomy that is aligned with models.
The rule that defines a task is the following:
**Two tasks are different if and only if the steps of their pipelines are different**, i.e. if they can’t reasonably be implemented using the same coherent code (the level of granularity/complexity of the code is to be defined - ideally I’d like to say “HF user’s level”) - this is the same definition as in `transformers`
I will update the tags of all the datasets in this repository [in another PR](https://github.com/huggingface/datasets/pull/4067) for readability.
Main changes:
- conditional-text-generation is split into summarization, translation, text-generation and text2text-generation
- speech-processing is split into automatic-speech-recognition, audio-classification, etc.
- structure-prediction is renamed token-classification
- abstractive-qa now belongs to text2text-generation
Here is just a simplified YAML dump of `tasks.json`:
```yaml
audio-classification:
- keyword-spotting
- speaker-identification
- speaker-intent-classification
- emotion-recognition
- speaker-language-identification
audio-to-audio: []
automatic-speech-recognition: []
conversational:
- dialogue-generation
feature-extraction: []
fill-mask:
- slot-filling
- masked-language-modeling
image-classification:
- multi-label-image-classification
- multi-class-image-classification
image-segmentation:
- instance-segmentation
- semantic-segmentation
- panoptic-segmentation
image-to-text:
- image-captioning
multiple-choice:
- multiple-choice-qa
- multiple-choice-coreference-resolution
object-detection:
- face-detection
- vehicle-detection
question-answering:
- extractive-qa
- open-domain-qa
- closed-domain-qa
sentence-similarity: []
tabular-classification: []
tabular-to-text:
- rdf-to-text
summarization:
- news-articles-summarization
- news-articles-headline-generation
table-to-text: []
table-question-answering: []
text-classification:
- acceptability-classification
- entity-linking-classification
- fact-checking
- intent-classification
- multi-class-classification
- multi-label-classification
- natural-language-inference
- semantic-similarity-classification
- sentiment-classification
- topic-classification
- semantic-similarity-scoring
- sentiment-scoring
- sentiment-analysis
- hate-speech-detection
- text-scoring
text-generation:
- dialogue-modeling
- language-modeling
text-retrieval:
- document-retrieval
- utterance-retrieval
- entity-linking-retrieval
- fact-checking-retrieval
text-to-image: []
text-to-tabular:
- relation-extraction
- semantic-role-labeling
text-to-speech: []
text2text-generation:
- text-simplification
- explanation-generation
- abstractive-qa
- open-domain-abstractive-qa
- closed-domain-qa
- open-book-qa
- closed-book-qa
time-series-forecasting:
- univariate-time-series-forecasting
- multivariate-time-series-forecasting
token-classification:
- named-entity-recognition
- part-of-speech-tagging
- parsing
- lemmatization
- word-sense-disambiguation
- coreference-resolution
translation: []
visual-question-answering: []
voice-activity-detection: []
zero-shot-classification: []
zero-shot-image-classification: []
reinforcement-learning: []
other: []
```
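As a hedged sketch of how the taxonomy can be consumed (assuming the simplified category -> sub-task mapping shown above; the real `tasks.json` may carry extra fields such as display labels, and the path is illustrative):
```python
import json

with open("tasks.json") as f:  # illustrative path
    tasks = json.load(f)

def is_valid_tag(tag: str) -> bool:
    # A tag is valid if it is a category or one of its sub-tasks.
    return tag in tasks or any(tag in subtasks for subtasks in tasks.values())

print(is_valid_tag("named-entity-recognition"))  # True under this taxonomy
```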
Feel free to comment and give suggestions, especially if you think we can also align this list with other projects
cc @julien-c @osanseviero @severo @lewtun @yjernite @albertvillanova @mariosasko @polinaeterna
|
{
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 5,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 7,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4066/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4066/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2693
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2693/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2693/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2693/events
|
https://github.com/huggingface/datasets/pull/2693
| 949,797,014
|
MDExOlB1bGxSZXF1ZXN0Njk0NDQ1ODAz
| 2,693
|
Fix OSCAR Esperanto
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-07-21T14:43:50Z
| 2021-07-21T14:53:52Z
| 2021-07-21T14:53:51Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2693.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2693",
"merged_at": "2021-07-21T14:53:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2693.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2693"
}
|
The Esperanto part (original) of OSCAR has the wrong number of examples:
```python
from datasets import load_dataset
raw_datasets = load_dataset("oscar", "unshuffled_original_eo")
```
raises
```python
NonMatchingSplitsSizesError:
[{'expected': SplitInfo(name='train', num_bytes=314188336, num_examples=121171, dataset_name='oscar'),
'recorded': SplitInfo(name='train', num_bytes=314064514, num_examples=121168, dataset_name='oscar')}]
```
I updated the number of expected examples in `dataset_infos.json`.
cc @sgugger
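For anyone hitting this before the updated metadata is in use, a common workaround at the time was to skip the split-size verification (`ignore_verifications` was the flag then; more recent releases expose `verification_mode` instead):
```python
from datasets import load_dataset

# Bypasses the NonMatchingSplitsSizesError by skipping the size checks.
raw_datasets = load_dataset(
    "oscar", "unshuffled_original_eo", ignore_verifications=True
)
```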
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 1,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2693/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2693/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/6461
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6461/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6461/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6461/events
|
https://github.com/huggingface/datasets/pull/6461
| 2,018,850,731
|
PR_kwDODunzps5gykvO
| 6,461
|
Fix shard retry mechanism in `push_to_hub`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
|
[] |
closed
| false
| null |
[] | null |
[
"@Wauplin Maybe `504` should be added to the `retry_on_status_codes` tuple [here](https://github.com/huggingface/huggingface_hub/blob/5eefebee2c150a2df950ab710db350e96c711433/src/huggingface_hub/lfs.py#L300) to guard against https://github.com/huggingface/datasets/issues/3872",
"We could but I'm not sure to have witness a 504 on S3 before. The issue reported in https://github.com/huggingface/datasets/issues/3872 is a 504 on the `/upload` endpoint on the Hub and this is not an endpoint that is retried on [this line](https://github.com/huggingface/huggingface_hub/blob/5eefebee2c150a2df950ab710db350e96c711433/src/huggingface_hub/lfs.py#L300).",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005110 / 0.011353 (-0.006243) | 0.003307 / 0.011008 (-0.007701) | 0.062601 / 0.038508 (0.024093) | 0.049644 / 0.023109 (0.026534) | 0.243195 / 0.275898 (-0.032703) | 0.273543 / 0.323480 (-0.049936) | 0.003862 / 0.007986 (-0.004123) | 0.002624 / 0.004328 (-0.001705) | 0.048273 / 0.004250 (0.044023) | 0.037820 / 0.037052 (0.000768) | 0.249134 / 0.258489 (-0.009355) | 0.319359 / 0.293841 (0.025518) | 0.027816 / 0.128546 (-0.100730) | 0.010422 / 0.075646 (-0.065225) | 0.206607 / 0.419271 (-0.212665) | 0.035719 / 0.043533 (-0.007814) | 0.250300 / 0.255139 (-0.004839) | 0.290377 / 0.283200 (0.007177) | 0.018459 / 0.141683 (-0.123224) | 1.114664 / 1.452155 (-0.337490) | 1.171429 / 1.492716 (-0.321288) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091483 / 0.018006 (0.073477) | 0.302770 / 0.000490 (0.302281) | 0.000203 / 0.000200 (0.000003) | 0.000047 / 0.000054 (-0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018870 / 0.037411 (-0.018541) | 0.062692 / 0.014526 (0.048166) | 0.075381 / 0.176557 (-0.101176) | 0.122338 / 0.737135 (-0.614797) | 0.075608 / 0.296338 (-0.220730) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288115 / 0.215209 (0.072906) | 2.816183 / 2.077655 (0.738528) | 1.535601 / 1.504120 (0.031481) | 1.409546 / 1.541195 (-0.131648) | 1.438569 / 
1.468490 (-0.029921) | 0.561797 / 4.584777 (-4.022980) | 2.373921 / 3.745712 (-1.371791) | 2.739437 / 5.269862 (-2.530424) | 1.750921 / 4.565676 (-2.814755) | 0.062114 / 0.424275 (-0.362161) | 0.004965 / 0.007607 (-0.002642) | 0.348614 / 0.226044 (0.122569) | 3.519631 / 2.268929 (1.250703) | 1.910797 / 55.444624 (-53.533827) | 1.610541 / 6.876477 (-5.265936) | 1.617972 / 2.142072 (-0.524100) | 0.639421 / 4.805227 (-4.165806) | 0.117371 / 6.500664 (-6.383293) | 0.041851 / 0.075469 (-0.033618) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.945563 / 1.841788 (-0.896224) | 11.362399 / 8.074308 (3.288090) | 10.468468 / 10.191392 (0.277075) | 0.128925 / 0.680424 (-0.551499) | 0.013892 / 0.534201 (-0.520309) | 0.285487 / 0.579283 (-0.293796) | 0.269295 / 0.434364 (-0.165069) | 0.324843 / 0.540337 (-0.215495) | 0.438452 / 1.386936 (-0.948484) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005303 / 0.011353 (-0.006050) | 0.003162 / 0.011008 (-0.007846) | 0.048177 / 0.038508 (0.009669) | 0.048708 / 0.023109 (0.025599) | 0.271663 / 0.275898 (-0.004235) | 0.289948 / 0.323480 (-0.033532) | 0.003955 / 0.007986 (-0.004030) | 0.002616 / 0.004328 (-0.001713) | 0.047510 / 0.004250 (0.043260) | 0.039938 / 0.037052 (0.002886) | 0.277449 / 0.258489 (0.018960) | 0.300315 / 0.293841 (0.006474) | 0.029263 / 0.128546 (-0.099283) | 0.010403 / 0.075646 (-0.065244) | 0.056682 / 0.419271 (-0.362590) | 0.032757 / 0.043533 (-0.010776) | 0.273291 / 0.255139 (0.018152) | 0.289023 / 0.283200 (0.005824) | 0.017843 / 0.141683 (-0.123840) | 1.124762 / 1.452155 (-0.327393) | 1.176646 / 1.492716 (-0.316070) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.004568 / 0.018006 (-0.013438) | 0.300715 / 0.000490 (0.300225) | 0.000212 / 0.000200 (0.000012) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021528 / 0.037411 (-0.015883) | 0.068317 / 0.014526 (0.053792) | 0.081358 / 0.176557 (-0.095199) | 0.119297 / 0.737135 (-0.617838) | 0.082445 / 0.296338 (-0.213893) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289681 / 0.215209 (0.074472) | 2.843862 / 2.077655 (0.766208) | 1.574257 / 1.504120 (0.070137) | 1.454026 / 1.541195 (-0.087169) | 1.478379 / 1.468490 (0.009889) | 0.558259 / 4.584777 (-4.026518) | 2.513261 / 3.745712 (-1.232451) | 2.759751 / 5.269862 (-2.510111) | 1.730335 / 4.565676 (-2.835341) | 0.063805 / 0.424275 (-0.360470) | 0.004991 / 0.007607 (-0.002616) | 0.346586 / 0.226044 (0.120542) | 3.369163 / 2.268929 (1.100234) | 1.934734 / 55.444624 (-53.509890) | 1.658864 / 6.876477 (-5.217613) | 1.645621 / 2.142072 (-0.496452) | 0.636633 / 4.805227 (-4.168594) | 0.116839 / 6.500664 (-6.383825) | 0.040863 / 0.075469 (-0.034606) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.960925 / 1.841788 (-0.880863) | 11.769189 / 8.074308 (3.694881) | 10.713662 / 10.191392 (0.522270) | 0.140510 / 0.680424 (-0.539914) | 0.015424 / 0.534201 (-0.518777) | 0.288039 / 0.579283 (-0.291244) | 0.277623 / 0.434364 (-0.156741) | 0.322622 / 0.540337 (-0.217716) | 0.539805 / 1.386936 (-0.847131) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005501 / 0.011353 (-0.005852) | 0.003754 / 0.011008 (-0.007254) | 0.062628 / 0.038508 (0.024120) | 0.059951 / 0.023109 (0.036842) | 0.254851 / 0.275898 (-0.021047) | 0.272133 / 0.323480 (-0.051347) | 0.003962 / 0.007986 (-0.004024) | 0.002759 / 0.004328 (-0.001569) | 0.048412 / 0.004250 (0.044161) | 0.039349 / 0.037052 (0.002297) | 0.253093 / 0.258489 (-0.005397) | 0.287048 / 0.293841 (-0.006793) | 0.027197 / 0.128546 (-0.101349) | 0.010828 / 0.075646 (-0.064819) | 0.206371 / 0.419271 (-0.212901) | 0.035881 / 0.043533 (-0.007652) | 0.254905 / 0.255139 (-0.000234) | 0.273819 / 0.283200 (-0.009381) | 0.018041 / 0.141683 (-0.123642) | 1.103970 / 1.452155 (-0.348185) | 1.166340 / 1.492716 (-0.326377) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093196 / 0.018006 (0.075190) | 0.302690 / 0.000490 (0.302200) | 0.000219 / 0.000200 (0.000019) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019552 / 0.037411 (-0.017860) | 0.062337 / 0.014526 (0.047811) | 0.074070 / 0.176557 (-0.102486) | 0.120998 / 0.737135 (-0.616137) | 0.076265 / 0.296338 (-0.220074) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.272637 / 0.215209 (0.057427) | 2.693350 / 2.077655 (0.615696) | 1.398020 / 1.504120 (-0.106100) | 1.285706 / 1.541195 (-0.255488) | 1.342810 / 
1.468490 (-0.125680) | 0.565378 / 4.584777 (-4.019399) | 2.390131 / 3.745712 (-1.355581) | 2.892137 / 5.269862 (-2.377725) | 1.819840 / 4.565676 (-2.745836) | 0.062789 / 0.424275 (-0.361486) | 0.004920 / 0.007607 (-0.002687) | 0.329281 / 0.226044 (0.103237) | 3.261664 / 2.268929 (0.992735) | 1.775102 / 55.444624 (-53.669523) | 1.514341 / 6.876477 (-5.362136) | 1.530805 / 2.142072 (-0.611267) | 0.641009 / 4.805227 (-4.164218) | 0.118626 / 6.500664 (-6.382038) | 0.042732 / 0.075469 (-0.032737) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.933179 / 1.841788 (-0.908609) | 12.085247 / 8.074308 (4.010939) | 10.541596 / 10.191392 (0.350204) | 0.140141 / 0.680424 (-0.540283) | 0.014646 / 0.534201 (-0.519555) | 0.289640 / 0.579283 (-0.289643) | 0.281042 / 0.434364 (-0.153322) | 0.326462 / 0.540337 (-0.213876) | 0.441981 / 1.386936 (-0.944955) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005259 / 0.011353 (-0.006094) | 0.003766 / 0.011008 (-0.007242) | 0.048782 / 0.038508 (0.010273) | 0.064946 / 0.023109 (0.041836) | 0.264529 / 0.275898 (-0.011369) | 0.289675 / 0.323480 (-0.033805) | 0.004057 / 0.007986 (-0.003928) | 0.002805 / 0.004328 (-0.001523) | 0.047709 / 0.004250 (0.043459) | 0.041149 / 0.037052 (0.004096) | 0.271254 / 0.258489 (0.012765) | 0.296685 / 0.293841 (0.002844) | 0.029486 / 0.128546 (-0.099060) | 0.010608 / 0.075646 (-0.065038) | 0.056392 / 0.419271 (-0.362879) | 0.033181 / 0.043533 (-0.010352) | 0.267029 / 0.255139 (0.011890) | 0.284987 / 0.283200 (0.001787) | 0.018045 / 0.141683 (-0.123637) | 1.137358 / 1.452155 (-0.314796) | 1.184007 / 1.492716 (-0.308709) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.004603 / 0.018006 (-0.013403) | 0.303901 / 0.000490 (0.303411) | 0.000225 / 0.000200 (0.000025) | 0.000055 / 0.000054 (0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021957 / 0.037411 (-0.015454) | 0.069427 / 0.014526 (0.054901) | 0.082394 / 0.176557 (-0.094163) | 0.120745 / 0.737135 (-0.616390) | 0.084571 / 0.296338 (-0.211767) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292832 / 0.215209 (0.077623) | 2.824295 / 2.077655 (0.746640) | 1.563273 / 1.504120 (0.059153) | 1.440202 / 1.541195 (-0.100992) | 1.489810 / 1.468490 (0.021320) | 0.561120 / 4.584777 (-4.023657) | 2.439045 / 3.745712 (-1.306667) | 2.867139 / 5.269862 (-2.402722) | 1.793812 / 4.565676 (-2.771865) | 0.062797 / 0.424275 (-0.361478) | 0.005033 / 0.007607 (-0.002574) | 0.343648 / 0.226044 (0.117604) | 3.432285 / 2.268929 (1.163357) | 1.918175 / 55.444624 (-53.526449) | 1.637245 / 6.876477 (-5.239232) | 1.709246 / 2.142072 (-0.432826) | 0.634744 / 4.805227 (-4.170483) | 0.115782 / 6.500664 (-6.384882) | 0.041228 / 0.075469 (-0.034241) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.962369 / 1.841788 (-0.879418) | 12.750819 / 8.074308 (4.676511) | 10.927356 / 10.191392 (0.735964) | 0.143454 / 0.680424 (-0.536970) | 0.015348 / 0.534201 (-0.518853) | 0.291207 / 0.579283 (-0.288076) | 0.276924 / 0.434364 (-0.157440) | 0.327287 / 0.540337 (-0.213050) | 0.577439 / 1.386936 (-0.809497) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005070 / 0.011353 (-0.006283) | 0.003475 / 0.011008 (-0.007533) | 0.061985 / 0.038508 (0.023477) | 0.048539 / 0.023109 (0.025430) | 0.229935 / 0.275898 (-0.045963) | 0.255247 / 0.323480 (-0.068233) | 0.003919 / 0.007986 (-0.004066) | 0.002664 / 0.004328 (-0.001664) | 0.048892 / 0.004250 (0.044642) | 0.037381 / 0.037052 (0.000328) | 0.238517 / 0.258489 (-0.019972) | 0.284069 / 0.293841 (-0.009772) | 0.027513 / 0.128546 (-0.101033) | 0.010778 / 0.075646 (-0.064868) | 0.205004 / 0.419271 (-0.214268) | 0.035553 / 0.043533 (-0.007980) | 0.230117 / 0.255139 (-0.025022) | 0.251150 / 0.283200 (-0.032050) | 0.017951 / 0.141683 (-0.123732) | 1.145548 / 1.452155 (-0.306607) | 1.191659 / 1.492716 (-0.301057) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092335 / 0.018006 (0.074329) | 0.300264 / 0.000490 (0.299774) | 0.000206 / 0.000200 (0.000006) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018608 / 0.037411 (-0.018804) | 0.060376 / 0.014526 (0.045850) | 0.073551 / 0.176557 (-0.103006) | 0.118840 / 0.737135 (-0.618295) | 0.074447 / 0.296338 (-0.221892) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287033 / 0.215209 (0.071824) | 2.770958 / 2.077655 (0.693303) | 1.443986 / 1.504120 (-0.060134) | 1.314627 / 1.541195 (-0.226567) | 1.342287 / 
1.468490 (-0.126203) | 0.559607 / 4.584777 (-4.025170) | 2.409678 / 3.745712 (-1.336034) | 2.772566 / 5.269862 (-2.497295) | 1.743511 / 4.565676 (-2.822165) | 0.062277 / 0.424275 (-0.361998) | 0.004952 / 0.007607 (-0.002655) | 0.330581 / 0.226044 (0.104537) | 3.280385 / 2.268929 (1.011456) | 1.809599 / 55.444624 (-53.635025) | 1.532186 / 6.876477 (-5.344290) | 1.529689 / 2.142072 (-0.612383) | 0.645213 / 4.805227 (-4.160014) | 0.117564 / 6.500664 (-6.383100) | 0.041657 / 0.075469 (-0.033812) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.943912 / 1.841788 (-0.897876) | 11.414317 / 8.074308 (3.340009) | 10.394915 / 10.191392 (0.203523) | 0.129271 / 0.680424 (-0.551153) | 0.013934 / 0.534201 (-0.520267) | 0.288217 / 0.579283 (-0.291066) | 0.267171 / 0.434364 (-0.167193) | 0.327112 / 0.540337 (-0.213225) | 0.446680 / 1.386936 (-0.940256) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005200 / 0.011353 (-0.006152) | 0.003453 / 0.011008 (-0.007555) | 0.048736 / 0.038508 (0.010228) | 0.051073 / 0.023109 (0.027964) | 0.276591 / 0.275898 (0.000693) | 0.294495 / 0.323480 (-0.028985) | 0.004069 / 0.007986 (-0.003917) | 0.002945 / 0.004328 (-0.001383) | 0.047090 / 0.004250 (0.042839) | 0.040445 / 0.037052 (0.003393) | 0.278464 / 0.258489 (0.019975) | 0.304020 / 0.293841 (0.010179) | 0.028811 / 0.128546 (-0.099736) | 0.010388 / 0.075646 (-0.065259) | 0.057214 / 0.419271 (-0.362057) | 0.032588 / 0.043533 (-0.010945) | 0.277694 / 0.255139 (0.022555) | 0.294979 / 0.283200 (0.011779) | 0.018384 / 0.141683 (-0.123299) | 1.162332 / 1.452155 (-0.289822) | 1.188355 / 1.492716 (-0.304361) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090501 / 0.018006 (0.072495) | 0.303122 / 0.000490 (0.302632) | 0.000222 / 0.000200 (0.000022) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022536 / 0.037411 (-0.014876) | 0.068452 / 0.014526 (0.053926) | 0.080932 / 0.176557 (-0.095625) | 0.119185 / 0.737135 (-0.617950) | 0.081513 / 0.296338 (-0.214825) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291522 / 0.215209 (0.076313) | 2.849467 / 2.077655 (0.771812) | 1.597395 / 1.504120 (0.093275) | 1.512872 / 1.541195 (-0.028323) | 1.488144 / 1.468490 (0.019654) | 0.572436 / 4.584777 (-4.012341) | 2.440129 / 3.745712 (-1.305583) | 2.788045 / 5.269862 (-2.481817) | 1.754246 / 4.565676 (-2.811430) | 0.066706 / 0.424275 (-0.357569) | 0.005035 / 0.007607 (-0.002573) | 0.336621 / 0.226044 (0.110576) | 3.322820 / 2.268929 (1.053891) | 1.940494 / 55.444624 (-53.504130) | 1.670022 / 6.876477 (-5.206454) | 1.666353 / 2.142072 (-0.475720) | 0.646180 / 4.805227 (-4.159047) | 0.116676 / 6.500664 (-6.383988) | 0.040559 / 0.075469 (-0.034910) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.971396 / 1.841788 (-0.870392) | 11.782426 / 8.074308 (3.708118) | 10.672034 / 10.191392 (0.480642) | 0.137658 / 0.680424 (-0.542766) | 0.016210 / 0.534201 (-0.517991) | 0.288302 / 0.579283 (-0.290981) | 0.280775 / 0.434364 (-0.153589) | 0.326962 / 0.540337 (-0.213375) | 0.558511 / 1.386936 (-0.828425) |\n\n</details>\n</details>\n\n\n"
] | 2023-11-30T14:57:14Z
| 2023-12-01T17:57:39Z
| 2023-12-01T17:51:33Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6461.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6461",
"merged_at": "2023-12-01T17:51:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6461.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6461"
}
|
When it fails, `preupload_lfs_files` throws a [`RuntimeError`](https://github.com/huggingface/huggingface_hub/blob/5eefebee2c150a2df950ab710db350e96c711433/src/huggingface_hub/_commit_api.py#L402) error and chains the original HTTP error. This PR modifies the retry mechanism's error handling to account for that.
Fix https://github.com/huggingface/datasets/issues/6392
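A minimal sketch of the fixed handling, with illustrative names (not the exact `datasets` code): the retry loop unwraps the chained `__cause__` of the `RuntimeError` to inspect the original HTTP error before deciding whether to retry.
```python
import time

def push_shard_with_retries(upload_fn, max_retries=5, base_wait=1.0):
    for attempt in range(max_retries):
        try:
            return upload_fn()
        except RuntimeError as err:
            # huggingface_hub chains the original HTTP error as __cause__.
            cause = err.__cause__
            status = getattr(getattr(cause, "response", None), "status_code", None)
            if status not in (500, 502, 503, 504) or attempt == max_retries - 1:
                raise
            time.sleep(base_wait * 2 ** attempt)  # exponential backoff
```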
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6461/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6461/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3344
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3344/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3344/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3344/events
|
https://github.com/huggingface/datasets/pull/3344
| 1,067,567,603
|
PR_kwDODunzps4vNJwd
| 3,344
|
Add ArrayXD docs
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stevhliu",
"id": 59462357,
"login": "stevhliu",
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stevhliu"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-11-30T18:53:31Z
| 2021-12-01T20:16:03Z
| 2021-12-01T19:35:32Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3344.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3344",
"merged_at": "2021-12-01T19:35:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3344.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3344"
}
|
Documents support for a dynamic first dimension in `ArrayXD` from #2891, and explains the `ArrayXD` feature in general.
Let me know if I'm missing anything @lhoestq :)
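A small sketch of the dynamic first dimension being documented (data values are illustrative): passing `None` as the first dimension of an `Array2D` lets each example carry a different number of rows.
```python
from datasets import Array2D, Dataset, Features

features = Features({"matrix": Array2D(shape=(None, 3), dtype="float32")})
data = {
    "matrix": [
        [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]],               # 2 x 3
        [[0.0] * 3, [0.0] * 3, [0.0] * 3],                 # 3 x 3
    ]
}
ds = Dataset.from_dict(data, features=features)
print(len(ds[0]["matrix"]), len(ds[1]["matrix"]))  # 2 3
```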
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3344/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3344/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/2928
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2928/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2928/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2928/events
|
https://github.com/huggingface/datasets/pull/2928
| 997,941,506
|
PR_kwDODunzps4r0yUb
| 2,928
|
Update BibTeX entry
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-09-16T08:39:20Z
| 2021-09-16T12:35:34Z
| 2021-09-16T12:35:34Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/2928.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2928",
"merged_at": "2021-09-16T12:35:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2928.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2928"
}
|
Update BibTeX entry.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2928/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/2928/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/5327
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5327/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5327/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5327/events
|
https://github.com/huggingface/datasets/pull/5327
| 1,471,657,247
|
PR_kwDODunzps5EE_3Q
| 5,327
|
Avoid unwanted behaviour when splits from script and metadata are not matching because of outdated metadata
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
}
|
[] |
open
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
}
] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5327). All of your documentation changes will be reflected on that endpoint."
] | 2022-12-01T17:05:23Z
| 2023-01-23T12:48:29Z
| null |
CONTRIBUTOR
| null | 1
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5327.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5327",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5327.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5327"
}
|
Will fix #5315.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5327/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5327/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/1211
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1211/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1211/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1211/events
|
https://github.com/huggingface/datasets/pull/1211
| 757,973,719
|
MDExOlB1bGxSZXF1ZXN0NTMzMjMxNDY3
| 1,211
|
Add large spanish corpus
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2020-12-06T17:06:50Z
| 2020-12-09T13:36:36Z
| 2020-12-09T13:36:36Z
|
MEMBER
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/1211.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1211",
"merged_at": "2020-12-09T13:36:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1211.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1211"
}
|
Adds a collection of Spanish corpora that can be useful for pretraining language models.
Following a nice suggestion from @yjernite, we provide the user with three main ways to preprocess / load the data:
* the whole corpus (17GB!)
* one specific sub-corpus
* the whole corpus, but returned as a single split. This is useful if you want to cache the whole preprocessing step once and then interact with individual sub-corpora (a minimal loading sketch follows below).
See the dataset card for more details.
Ready for review!
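A minimal sketch of the three loading modes described above (the config names `combined` and `all_wikis` are assumptions for illustration and may not match the final script):

```python
from datasets import load_dataset

# 1) the whole corpus (17GB!) -- config name "combined" is an assumption
full = load_dataset("large_spanish_corpus", "combined")

# 2) one specific sub-corpus -- config name "all_wikis" is an assumption
wikis = load_dataset("large_spanish_corpus", "all_wikis")

# 3) the whole corpus as a single split, so the preprocessing is cached once
train = load_dataset("large_spanish_corpus", "combined", split="train")
```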
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1211/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/1211/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/4950
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4950/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4950/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4950/events
|
https://github.com/huggingface/datasets/pull/4950
| 1,365,458,633
|
PR_kwDODunzps4-jWZ1
| 4,950
|
Update Enwik8 broken link and information
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/54819091?v=4",
"events_url": "https://api.github.com/users/mtanghu/events{/privacy}",
"followers_url": "https://api.github.com/users/mtanghu/followers",
"following_url": "https://api.github.com/users/mtanghu/following{/other_user}",
"gists_url": "https://api.github.com/users/mtanghu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mtanghu",
"id": 54819091,
"login": "mtanghu",
"node_id": "MDQ6VXNlcjU0ODE5MDkx",
"organizations_url": "https://api.github.com/users/mtanghu/orgs",
"received_events_url": "https://api.github.com/users/mtanghu/received_events",
"repos_url": "https://api.github.com/users/mtanghu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mtanghu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mtanghu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mtanghu"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-08T03:15:00Z
| 2022-09-24T22:14:35Z
| 2022-09-08T14:51:00Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4950.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4950",
"merged_at": "2022-09-08T14:51:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4950.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4950"
}
|
The current enwik8 dataset link gives a 502 Bad Gateway error, which can be viewed on https://huggingface.co/datasets/enwik8 (click the dropdown to see the dataset preview; it will show the error). This PR corrects the links and JSON metadata, and adds a little more information about enwik8.
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4950/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4950/timeline
| null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/3542
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/3542/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/3542/comments
|
https://api.github.com/repos/huggingface/datasets/issues/3542/events
|
https://github.com/huggingface/datasets/pull/3542
| 1,095,088,485
|
PR_kwDODunzps4wmPIP
| 3,542
|
Update the CC-100 dataset card
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/353043?v=4",
"events_url": "https://api.github.com/users/aajanki/events{/privacy}",
"followers_url": "https://api.github.com/users/aajanki/followers",
"following_url": "https://api.github.com/users/aajanki/following{/other_user}",
"gists_url": "https://api.github.com/users/aajanki/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/aajanki",
"id": 353043,
"login": "aajanki",
"node_id": "MDQ6VXNlcjM1MzA0Mw==",
"organizations_url": "https://api.github.com/users/aajanki/orgs",
"received_events_url": "https://api.github.com/users/aajanki/received_events",
"repos_url": "https://api.github.com/users/aajanki/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/aajanki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aajanki/subscriptions",
"type": "User",
"url": "https://api.github.com/users/aajanki"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-01-06T08:35:18Z
| 2022-01-06T18:37:44Z
| 2022-01-06T18:37:44Z
|
CONTRIBUTOR
| null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/3542.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3542",
"merged_at": "2022-01-06T18:37:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3542.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3542"
}
|
* summary from the dataset homepage
* more details about the data structure (see the sketch below)
* this dataset does not contain annotations
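A minimal sketch of loading one language and inspecting the structure described in the card (the `lang` parameter, the `am` language code, and the `id`/`text` fields are assumptions based on the card and may differ in other versions):

```python
from datasets import load_dataset

# "am" (Amharic) is used here only because it is one of the smaller
# languages; the `lang` parameter is how the script selects a sub-corpus.
ds = load_dataset("cc100", lang="am", split="train")

print(ds.features)    # expected: an `id` and a `text` field, no labels
print(ds[0]["text"])  # one line of raw monolingual text
```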
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3542/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/3542/timeline
| null | null | true
|