| url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/730 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/730/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/730/comments | https://api.github.com/repos/huggingface/datasets/issues/730/events | https://github.com/huggingface/datasets/issues/730 | 721,073,812 | MDU6SXNzdWU3MjEwNzM4MTI= | 730 | Possible caching bug | {
"avatar_url": "https://avatars.githubusercontent.com/u/3375489?v=4",
"events_url": "https://api.github.com/users/ArneBinder/events{/privacy}",
"followers_url": "https://api.github.com/users/ArneBinder/followers",
"following_url": "https://api.github.com/users/ArneBinder/following{/other_user}",
"gists_url":... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"Thanks for reporting. That's a bug indeed.\r\nApparently only the `data_files` parameter is taken into account right now in `DatasetBuilder._create_builder_config` but it should also be the case for `config_kwargs` (or at least the instantiated `builder_config`)",
"Hi, has this bug been fixed? When I load JSON fi... | 2020-10-14T02:02:34Z | 2022-11-22T01:45:54Z | 2020-10-29T09:36:01Z | NONE | null | null | null | The following code with `test1.txt` containing just "🤗🤗🤗":
```python
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1")
print(dataset[0])
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8")
print(dataset[0])
```
produc... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/730/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/730/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2191 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2191/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2191/comments | https://api.github.com/repos/huggingface/datasets/issues/2191/events | https://github.com/huggingface/datasets/pull/2191 | 853,364,204 | MDExOlB1bGxSZXF1ZXN0NjExNDY1Nzc0 | 2,191 | Refactorize tests to use Dataset as context manager | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"color": "B67A40",
"default": false,
"description": "Restructuring existing code without changing its external behavior",
"id": 2851292821,
"name": "refactoring",
"node_id": "MDU6TGFiZWwyODUxMjkyODIx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/refactoring"
}
] | closed | false | null | [] | {
"closed_at": "2021-04-20T16:50:46Z",
"closed_issues": 4,
"created_at": "2021-04-09T13:07:51Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/u... | [
"I find very interesting that idea of using a fixture instead!\r\n\r\nLet me rework a little bit this PR, @lhoestq.",
"@lhoestq, as this is a big refactoring, I had many problems to solve the conflicts with the master branch...\r\n\r\nTherefore, I think it is better to merge this as it is, and then to make other ... | 2021-04-08T11:21:04Z | 2021-04-19T07:53:11Z | 2021-04-19T07:53:10Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2191.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2191",
"merged_at": "2021-04-19T07:53:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2191.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Refactorize Dataset tests to use Dataset as context manager. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2191/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2191/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2034 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2034/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2034/comments | https://api.github.com/repos/huggingface/datasets/issues/2034/events | https://github.com/huggingface/datasets/pull/2034 | 829,381,388 | MDExOlB1bGxSZXF1ZXN0NTkxMDU2MTEw | 2,034 | Fix typo | {
"avatar_url": "https://avatars.githubusercontent.com/u/3413464?v=4",
"events_url": "https://api.github.com/users/pcyin/events{/privacy}",
"followers_url": "https://api.github.com/users/pcyin/followers",
"following_url": "https://api.github.com/users/pcyin/following{/other_user}",
"gists_url": "https://api.g... | [] | closed | false | null | [] | null | [] | 2021-03-11T17:46:13Z | 2021-03-11T18:06:25Z | 2021-03-11T18:06:25Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2034.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2034",
"merged_at": "2021-03-11T18:06:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2034.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Change `ENV_XDG_CACHE_HOME ` to `XDG_CACHE_HOME ` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2034/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2034/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/280 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/280/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/280/comments | https://api.github.com/repos/huggingface/datasets/issues/280/events | https://github.com/huggingface/datasets/issues/280 | 640,677,615 | MDU6SXNzdWU2NDA2Nzc2MTU= | 280 | Error with SquadV2 Metrics | {
"avatar_url": "https://avatars.githubusercontent.com/u/32203792?v=4",
"events_url": "https://api.github.com/users/avinregmi/events{/privacy}",
"followers_url": "https://api.github.com/users/avinregmi/followers",
"following_url": "https://api.github.com/users/avinregmi/following{/other_user}",
"gists_url": "... | [] | closed | false | null | [] | null | [] | 2020-06-17T19:10:54Z | 2020-06-19T08:33:41Z | 2020-06-19T08:33:41Z | NONE | null | null | null | I can't seem to import squad v2 metrics.
**squad_metric = nlp.load_metric('squad_v2')**
**This throws me an error.:**
```
ImportError Traceback (most recent call last)
<ipython-input-8-170b6a170555> in <module>
----> 1 squad_metric = nlp.load_metric('squad_v2')
~/env/lib6... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/280/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/280/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3571 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3571/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3571/comments | https://api.github.com/repos/huggingface/datasets/issues/3571/events | https://github.com/huggingface/datasets/pull/3571 | 1,100,519,604 | PR_kwDODunzps4w3fVQ | 3,571 | Add missing tasks to MuchoCine dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [] | 2022-01-12T16:07:32Z | 2022-01-20T16:51:08Z | 2022-01-20T16:51:07Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3571.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3571",
"merged_at": "2022-01-20T16:51:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3571.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Addresses the 2nd bullet point in #2520.
I'm also removing the licensing information, because I couldn't verify that it is correct. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3571/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3571/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2975 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2975/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2975/comments | https://api.github.com/repos/huggingface/datasets/issues/2975/events | https://github.com/huggingface/datasets/pull/2975 | 1,008,444,654 | PR_kwDODunzps4sVAOt | 2,975 | ignore dummy folder and dataset_infos.json | {
"avatar_url": "https://avatars.githubusercontent.com/u/46553104?v=4",
"events_url": "https://api.github.com/users/Ishan-Kumar2/events{/privacy}",
"followers_url": "https://api.github.com/users/Ishan-Kumar2/followers",
"following_url": "https://api.github.com/users/Ishan-Kumar2/following{/other_user}",
"gist... | [] | closed | false | null | [] | null | [] | 2021-09-27T18:09:03Z | 2021-09-29T09:45:38Z | 2021-09-29T09:05:38Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2975.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2975",
"merged_at": "2021-09-29T09:05:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2975.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Fixes #2877
Added `dataset_infos.json` to the ignored files list and also added a check to ignore files whose parent directory is `dummy`.
Let me know if it is correct. Thanks :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2975/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2975/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4087 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4087/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4087/comments | https://api.github.com/repos/huggingface/datasets/issues/4087/events | https://github.com/huggingface/datasets/pull/4087 | 1,191,819,805 | PR_kwDODunzps41lnfO | 4,087 | Fix BeamWriter output Parquet file | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-04T13:46:50Z | 2022-04-05T15:00:40Z | 2022-04-05T14:54:48Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4087.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4087",
"merged_at": "2022-04-05T14:54:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4087.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Until now, the `BeamWriter` saved a Parquet file with a simplified schema, where each field value was serialized to JSON. That resulted in Parquet files larger than Arrow files.
This PR:
- writes the Parquet file preserving the original schema and without serialization, thus avoiding serialization overhead and resulting in...
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4087/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4087/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4981 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4981/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4981/comments | https://api.github.com/repos/huggingface/datasets/issues/4981/events | https://github.com/huggingface/datasets/issues/4981 | 1,375,086,773 | I_kwDODunzps5R9ii1 | 4,981 | Can't create a dataset with `float16` features | {
"avatar_url": "https://avatars.githubusercontent.com/u/15098095?v=4",
"events_url": "https://api.github.com/users/dconathan/events{/privacy}",
"followers_url": "https://api.github.com/users/dconathan/followers",
"following_url": "https://api.github.com/users/dconathan/following{/other_user}",
"gists_url": "... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | [] | null | [
"Hi @dconathan, thanks for reporting.\r\n\r\nWe rely on Arrow as a backend, and as far as I know currently support for `float16` in Arrow is not fully implemented in Python (C++), hence the `ArrowNotImplementedError` you get.\r\n\r\nSee, e.g.: https://arrow.apache.org/docs/status.html?highlight=float16#data-types",... | 2022-09-15T21:03:24Z | 2023-03-22T21:40:09Z | null | CONTRIBUTOR | null | null | null | ## Describe the bug
I can't create a dataset with `float16` features.
I understand from the traceback that this is a `pyarrow` error, but I don't see anywhere in the `datasets` documentation about how to successfully do this. Is it actually supported? I've tried older versions of `pyarrow` as well with the same e... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4981/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4981/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3899 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3899/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3899/comments | https://api.github.com/repos/huggingface/datasets/issues/3899/events | https://github.com/huggingface/datasets/pull/3899 | 1,166,931,812 | PR_kwDODunzps40UzR3 | 3,899 | Add exact match metric | {
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-03-11T22:21:40Z | 2022-03-21T16:10:03Z | 2022-03-21T16:05:35Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3899.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3899",
"merged_at": "2022-03-21T16:05:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3899.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Adding the exact match metric and its metric card.
Note: Some of the tests have failed, but I wanted to make a PR anyway so that the rest of the code can be reviewed if anyone has time. I'll look into + work on fixing the failed tests when I'm back online after the weekend | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3899/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3899/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1192 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1192/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1192/comments | https://api.github.com/repos/huggingface/datasets/issues/1192/events | https://github.com/huggingface/datasets/pull/1192 | 757,839,671 | MDExOlB1bGxSZXF1ZXN0NTMzMTM0NjI3 | 1,192 | Add NewsPH_NLI dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/3663322?v=4",
"events_url": "https://api.github.com/users/anaerobeth/events{/privacy}",
"followers_url": "https://api.github.com/users/anaerobeth/followers",
"following_url": "https://api.github.com/users/anaerobeth/following{/other_user}",
"gists_url":... | [] | closed | false | null | [] | null | [] | 2020-12-06T04:00:31Z | 2020-12-07T15:39:43Z | 2020-12-07T15:39:43Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1192.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1192",
"merged_at": "2020-12-07T15:39:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1192.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | This PR adds the NewsPH-NLI Dataset, the first benchmark dataset for sentence entailment in the low-resource Filipino language. It was constructed by exploiting the structure of news articles, and contains 600,000 premise-hypothesis pairs in a 70-15-15 split for training, validation, and testing.
Link to the paper: https://... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1192/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1192/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5695 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5695/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5695/comments | https://api.github.com/repos/huggingface/datasets/issues/5695/events | https://github.com/huggingface/datasets/issues/5695 | 1,650,974,156 | I_kwDODunzps5iZ93M | 5,695 | Loading big dataset raises pyarrow.lib.ArrowNotImplementedError | {
"avatar_url": "https://avatars.githubusercontent.com/u/32778667?v=4",
"events_url": "https://api.github.com/users/amariucaitheodor/events{/privacy}",
"followers_url": "https://api.github.com/users/amariucaitheodor/followers",
"following_url": "https://api.github.com/users/amariucaitheodor/following{/other_use... | [] | closed | false | null | [] | null | [
"Hi ! It looks like an issue with PyArrow: https://issues.apache.org/jira/browse/ARROW-5030\r\n\r\nIt appears it can happen when you have parquet files with row groups larger than 2GB.\r\nI can see that your parquet files are around 10GB. It is usually advised to keep a value around the default value 500MB to avoid... | 2023-04-02T14:42:44Z | 2023-04-11T09:17:54Z | 2023-04-10T08:04:04Z | NONE | null | null | null | ### Describe the bug
Calling `datasets.load_dataset` to load the (publicly available) dataset `theodor1289/wit` fails with `pyarrow.lib.ArrowNotImplementedError`.
### Steps to reproduce the bug
Steps to reproduce this behavior:
1. `!pip install datasets`
2. `!huggingface-cli login`
3. This step will throw the e... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5695/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5695/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3504 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3504/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3504/comments | https://api.github.com/repos/huggingface/datasets/issues/3504/events | https://github.com/huggingface/datasets/issues/3504 | 1,090,682,230 | I_kwDODunzps5BAn12 | 3,504 | Unable to download PUBMED_title_abstracts_2019_baseline.jsonl.zst | {
"avatar_url": "https://avatars.githubusercontent.com/u/12600692?v=4",
"events_url": "https://api.github.com/users/ToddMorrill/events{/privacy}",
"followers_url": "https://api.github.com/users/ToddMorrill/followers",
"following_url": "https://api.github.com/users/ToddMorrill/following{/other_user}",
"gists_u... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "2edb81",
"default": false,
"descrip... | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"Hi @ToddMorrill, thanks for reporting.\r\n\r\nThree weeks ago I contacted the team who created the Pile dataset to report this issue with their data host server: https://the-eye.eu\r\n\r\nThey told me that unfortunately, the-eye was heavily affected by the recent tornado catastrophe in the US. They hope to have th... | 2021-12-29T18:23:20Z | 2023-08-14T23:28:48Z | 2022-02-17T15:04:25Z | NONE | null | null | null | ## Describe the bug
I am unable to download the PubMed dataset from the link provided in the [Hugging Face Course (Chapter 5 Section 4)](https://huggingface.co/course/chapter5/4?fw=pt).
https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst
## Steps to reproduce ... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3504/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3504/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1579 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1579/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1579/comments | https://api.github.com/repos/huggingface/datasets/issues/1579/events | https://github.com/huggingface/datasets/pull/1579 | 767,808,465 | MDExOlB1bGxSZXF1ZXN0NTQwMzk5OTY5 | 1,579 | Adding CLIMATE-FEVER dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/1658969?v=4",
"events_url": "https://api.github.com/users/tdiggelm/events{/privacy}",
"followers_url": "https://api.github.com/users/tdiggelm/followers",
"following_url": "https://api.github.com/users/tdiggelm/following{/other_user}",
"gists_url": "http... | [] | closed | false | null | [] | null | [
"I `git rebase`ed my branch to `upstream/master` as suggested in point 7 of <https://huggingface.co/docs/datasets/share_dataset.html> and subsequently used `git pull` to be able to push to my remote branch. However, I think this messed up the history.\r\n\r\nPlease let me know if I should create a clean new PR with... | 2020-12-15T16:49:22Z | 2020-12-22T13:43:16Z | 2020-12-22T13:43:15Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1579.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1579",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1579.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1579"
} | This PR request the addition of the CLIMATE-FEVER dataset:
A dataset adopting the FEVER methodology that consists of 1,535 real-world claims regarding climate change collected on the internet. Each claim is accompanied by five manually annotated evidence sentences retrieved from the English Wikipedia that support, ref...
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1579/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1579/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2299 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2299/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2299/comments | https://api.github.com/repos/huggingface/datasets/issues/2299/events | https://github.com/huggingface/datasets/issues/2299 | 873,914,717 | MDU6SXNzdWU4NzM5MTQ3MTc= | 2,299 | My iPhone | {
"avatar_url": "https://avatars.githubusercontent.com/u/82856229?v=4",
"events_url": "https://api.github.com/users/Jasonbuchanan1983/events{/privacy}",
"followers_url": "https://api.github.com/users/Jasonbuchanan1983/followers",
"following_url": "https://api.github.com/users/Jasonbuchanan1983/following{/other_... | [] | closed | false | null | [] | null | [] | 2021-05-02T11:11:11Z | 2021-07-23T09:24:16Z | 2021-05-03T08:17:38Z | NONE | null | null | null | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons t... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2299/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2299/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2167 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2167/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2167/comments | https://api.github.com/repos/huggingface/datasets/issues/2167/events | https://github.com/huggingface/datasets/issues/2167 | 849,944,891 | MDU6SXNzdWU4NDk5NDQ4OTE= | 2,167 | Split type not preserved when reloading the dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [] | 2021-04-04T19:29:54Z | 2021-04-19T09:08:55Z | 2021-04-19T09:08:55Z | CONTRIBUTOR | null | null | null | A minimal reproducible example:
```python
>>> from datasets import load_dataset, Dataset
>>> dset = load_dataset("sst", split="train")
>>> dset.save_to_disk("sst")
>>> type(dset.split)
<class 'datasets.splits.NamedSplit'>
>>> dset = Dataset.load_from_disk("sst")
>>> type(dset.split) # NamedSplit expected
<cla... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2167/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2167/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1438 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1438/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1438/comments | https://api.github.com/repos/huggingface/datasets/issues/1438/events | https://github.com/huggingface/datasets/pull/1438 | 760,962,193 | MDExOlB1bGxSZXF1ZXN0NTM1NzAzMTEw | 1,438 | A descriptive name for my changes | {
"avatar_url": "https://avatars.githubusercontent.com/u/56379013?v=4",
"events_url": "https://api.github.com/users/rahul-art/events{/privacy}",
"followers_url": "https://api.github.com/users/rahul-art/followers",
"following_url": "https://api.github.com/users/rahul-art/following{/other_user}",
"gists_url": "... | [] | closed | false | null | [] | null | [
"I have noticed that the master branch of your fork has diverged from the one of the repo. This is probably what causes the mess in the github diff \"Files changed\".\r\n\r\nI would suggest to re-fork the `datasets` repo and recreate a new branch and a new PR. ",
"You're pretty close to having all things ready to... | 2020-12-10T06:47:24Z | 2020-12-15T10:36:27Z | 2020-12-15T10:36:26Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1438.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1438",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1438.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1438"
} | HindEnCorp resubmitted | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1438/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1438/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5369 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5369/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5369/comments | https://api.github.com/repos/huggingface/datasets/issues/5369/events | https://github.com/huggingface/datasets/pull/5369 | 1,500,622,276 | PR_kwDODunzps5Fqaj- | 5,369 | Distributed support | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Alright all the tests are passing - this is ready for review",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n... | 2022-12-16T17:43:47Z | 2023-07-25T12:00:31Z | 2023-01-16T13:33:32Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5369.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5369",
"merged_at": "2023-01-16T13:33:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5369.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | To split your dataset across your training nodes, you can use the new [`datasets.distributed.split_dataset_by_node`]:
```python
import os
from datasets.distributed import split_dataset_by_node
ds = split_dataset_by_node(ds, rank=int(os.environ["RANK"]), world_size=int(os.environ["WORLD_SIZE"]))
```
This wor... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5369/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5369/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6006 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6006/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6006/comments | https://api.github.com/repos/huggingface/datasets/issues/6006/events | https://github.com/huggingface/datasets/issues/6006 | 1,788,855,582 | I_kwDODunzps5qn8Ue | 6,006 | NotADirectoryError when loading gigawords | {
"avatar_url": "https://avatars.githubusercontent.com/u/115634163?v=4",
"events_url": "https://api.github.com/users/xipq/events{/privacy}",
"followers_url": "https://api.github.com/users/xipq/followers",
"following_url": "https://api.github.com/users/xipq/following{/other_user}",
"gists_url": "https://api.gi... | [] | closed | false | null | [] | null | [
"issue due to corrupted download files. resolved after cleaning download cache. sorry for any inconvenience."
] | 2023-07-05T06:23:41Z | 2023-07-05T06:31:02Z | 2023-07-05T06:31:01Z | NONE | null | null | null | ### Describe the bug
got `NotADirectoryError` when loading the gigaword dataset
### Steps to reproduce the bug
When running
```
import datasets
datasets.load_dataset('gigaword')
```
Got the following exception:
```bash
Traceback (most recent call last): ... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6006/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6006/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6128 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6128/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6128/comments | https://api.github.com/repos/huggingface/datasets/issues/6128/events | https://github.com/huggingface/datasets/issues/6128 | 1,841,545,493 | I_kwDODunzps5tw8EV | 6,128 | IndexError: Invalid key: 88 is out of bounds for size 0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/38727343?v=4",
"events_url": "https://api.github.com/users/TomasAndersonFang/events{/privacy}",
"followers_url": "https://api.github.com/users/TomasAndersonFang/followers",
"following_url": "https://api.github.com/users/TomasAndersonFang/following{/other_... | [] | closed | false | null | [] | null | [
"Hi @TomasAndersonFang,\r\n\r\nHave you tried instead to use `torch_compile` in `transformers.TrainingArguments`? https://huggingface.co/docs/transformers/v4.31.0/en/main_classes/trainer#transformers.TrainingArguments.torch_compile",
"> \r\n\r\nI tried this and got the following error:\r\n\r\n```\r\nTraceback (mo... | 2023-08-08T15:32:08Z | 2023-08-11T13:35:09Z | 2023-08-11T13:35:09Z | NONE | null | null | null | ### Describe the bug
This bug generates when I use torch.compile(model) in my code, which seems to raise an error in datasets lib.
### Steps to reproduce the bug
I use the following code to fine-tune Falcon on my private dataset.
```python
import transformers
from transformers import (
AutoModelForCausalLM... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6128/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6128/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6110 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6110/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6110/comments | https://api.github.com/repos/huggingface/datasets/issues/6110/events | https://github.com/huggingface/datasets/issues/6110 | 1,831,110,633 | I_kwDODunzps5tJIfp | 6,110 | [BUG] Dataset initialized from in-memory data does not create cache. | {
"avatar_url": "https://avatars.githubusercontent.com/u/57797966?v=4",
"events_url": "https://api.github.com/users/MattYoon/events{/privacy}",
"followers_url": "https://api.github.com/users/MattYoon/followers",
"following_url": "https://api.github.com/users/MattYoon/following{/other_user}",
"gists_url": "htt... | [] | closed | false | null | [] | null | [
"This is expected behavior. You must provide `cache_file_name` when performing `.map` on an in-memory dataset for the result to be cached."
] | 2023-08-01T11:58:58Z | 2023-08-17T14:03:01Z | 2023-08-17T14:03:00Z | NONE | null | null | null | ### Describe the bug
`Dataset` initialized from in-memory data (dictionary in my case, haven't tested with other types) does not create cache when processed with the `map` method, unlike `Dataset` initialized by other methods such as `load_dataset`.
### Steps to reproduce the bug
```python
# below code was ru... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6110/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6110/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1198 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1198/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1198/comments | https://api.github.com/repos/huggingface/datasets/issues/1198/events | https://github.com/huggingface/datasets/pull/1198 | 757,903,453 | MDExOlB1bGxSZXF1ZXN0NTMzMTgwNjAz | 1,198 | Add ALT | {
"avatar_url": "https://avatars.githubusercontent.com/u/6429850?v=4",
"events_url": "https://api.github.com/users/chameleonTK/events{/privacy}",
"followers_url": "https://api.github.com/users/chameleonTK/followers",
"following_url": "https://api.github.com/users/chameleonTK/following{/other_user}",
"gists_ur... | [] | closed | false | null | [] | null | [
"the `RemoteDatasetTest ` erros in the CI are fixed on master so it's fine",
"used `Translation ` feature type and fixed few typos as you suggested.",
"Sorry, I made a mistake. please see new PR here. https://github.com/huggingface/datasets/pull/1436"
] | 2020-12-06T11:25:30Z | 2020-12-10T04:18:12Z | 2020-12-10T04:18:12Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1198.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1198",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1198.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1198"
} | ALT dataset -- https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1198/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1198/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1851 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1851/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1851/comments | https://api.github.com/repos/huggingface/datasets/issues/1851/events | https://github.com/huggingface/datasets/pull/1851 | 804,523,174 | MDExOlB1bGxSZXF1ZXN0NTcwMjc2MTk5 | 1,851 | set bert_score version dependency | {
"avatar_url": "https://avatars.githubusercontent.com/u/3596?v=4",
"events_url": "https://api.github.com/users/pvl/events{/privacy}",
"followers_url": "https://api.github.com/users/pvl/followers",
"following_url": "https://api.github.com/users/pvl/following{/other_user}",
"gists_url": "https://api.github.com... | [] | closed | false | null | [] | null | [] | 2021-02-09T12:51:07Z | 2021-02-09T14:21:48Z | 2021-02-09T14:21:48Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1851.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1851",
"merged_at": "2021-02-09T14:21:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1851.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Set the bert_score version in requirements since previous versions of bert_score will fail with datasets (closes #843) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1851/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1851/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4193 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4193/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4193/comments | https://api.github.com/repos/huggingface/datasets/issues/4193/events | https://github.com/huggingface/datasets/pull/4193 | 1,210,734,701 | PR_kwDODunzps42izQG | 4,193 | Document save_to_disk and push_to_hub on images and audio files | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Good catch, I updated the docstrings"
] | 2022-04-21T09:04:36Z | 2022-04-22T09:55:55Z | 2022-04-22T09:49:31Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4193.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4193",
"merged_at": "2022-04-22T09:49:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4193.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Following https://github.com/huggingface/datasets/pull/4187, I explained in the documentation of `save_to_disk` and `push_to_hub` how they handle image and audio data. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4193/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4193/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1270 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1270/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1270/comments | https://api.github.com/repos/huggingface/datasets/issues/1270/events | https://github.com/huggingface/datasets/pull/1270 | 758,917,216 | MDExOlB1bGxSZXF1ZXN0NTM0MDAyODIz | 1,270 | add DFKI SmartData Corpus | {
"avatar_url": "https://avatars.githubusercontent.com/u/4944799?v=4",
"events_url": "https://api.github.com/users/aseifert/events{/privacy}",
"followers_url": "https://api.github.com/users/aseifert/followers",
"following_url": "https://api.github.com/users/aseifert/following{/other_user}",
"gists_url": "http... | [] | closed | false | null | [] | null | [] | 2020-12-07T23:03:48Z | 2020-12-08T17:41:23Z | 2020-12-08T17:41:23Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1270.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1270",
"merged_at": "2020-12-08T17:41:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1270.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | - **Name:** DFKI SmartData Corpus
- **Description:** DFKI SmartData Corpus is a dataset of 2598 German-language documents which has been annotated with fine-grained geo-entities, such as streets, stops and routes, as well as standard named entity types.
- **Paper:** https://www.dfki.de/fileadmin/user_upload/import/94... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1270/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1270/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5436 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5436/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5436/comments | https://api.github.com/repos/huggingface/datasets/issues/5436/events | https://github.com/huggingface/datasets/pull/5436 | 1,536,633,173 | PR_kwDODunzps5Hjh4v | 5,436 | Revert container image pin in CI benchmarks | {
"avatar_url": "https://avatars.githubusercontent.com/u/11387611?v=4",
"events_url": "https://api.github.com/users/0x2b3bfa0/events{/privacy}",
"followers_url": "https://api.github.com/users/0x2b3bfa0/followers",
"following_url": "https://api.github.com/users/0x2b3bfa0/following{/other_user}",
"gists_url": "... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-01-17T15:59:50Z | 2023-01-18T09:05:49Z | 2023-01-18T06:29:06Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5436.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5436",
"merged_at": "2023-01-18T06:29:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5436.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Closes #5433, reverts #5432, and also:
* Uses [ghcr.io container images](https://cml.dev/doc/self-hosted-runners/#docker-images) for extra speed
* Updates `actions/checkout` to `v3` (note that `v2` is [deprecated](https://github.blog/changelog/2022-09-22-github-actions-all-actions-will-begin-running-on-node16-instead... | {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5436/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5436/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2503 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2503/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2503/comments | https://api.github.com/repos/huggingface/datasets/issues/2503/events | https://github.com/huggingface/datasets/issues/2503 | 920,636,186 | MDU6SXNzdWU5MjA2MzYxODY= | 2,503 | SubjQA wrong boolean values in entries | {
"avatar_url": "https://avatars.githubusercontent.com/u/26485052?v=4",
"events_url": "https://api.github.com/users/arnaudstiegler/events{/privacy}",
"followers_url": "https://api.github.com/users/arnaudstiegler/followers",
"following_url": "https://api.github.com/users/arnaudstiegler/following{/other_user}",
... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"Hi @arnaudstiegler, thanks for reporting. I'm investigating it.",
"@arnaudstiegler I have just checked that these mismatches are already present in the original dataset: https://github.com/megagonlabs/SubjQA\r\n\r\nWe are going to contact the dataset owners to report this.",
"I have:\r\n- opened an issue in th... | 2021-06-14T17:42:46Z | 2021-08-25T03:52:06Z | null | NONE | null | null | null | ## Describe the bug
SubjQA seems to have a boolean that's consistently wrong.
It defines:
- question_subj_level: The subjectivity level of the question (on a 1 to 5 scale with 1 being the most subjective).
- is_ques_subjective: A boolean subjectivity label derived from question_subj_level (i.e., scores below 4 are... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2503/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2503/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2650 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2650/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2650/comments | https://api.github.com/repos/huggingface/datasets/issues/2650/events | https://github.com/huggingface/datasets/issues/2650 | 944,672,565 | MDU6SXNzdWU5NDQ2NzI1NjU= | 2,650 | [load_dataset] shard and parallelize the process | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://a... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_u... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}"... | null | [
"I need the same feature for distributed training",
"I think @TevenLeScao is exploring adding multiprocessing in `GeneratorBasedBuilder._prepare_split` - feel free to post updates here :)",
"Posted a PR to address the building side, still needs something to load sharded arrow files + tests",
"Closing as this ... | 2021-07-14T18:04:58Z | 2023-11-28T19:11:41Z | 2023-11-28T19:11:40Z | CONTRIBUTOR | null | null | null | - Some huge datasets take forever to build the first time. (e.g. oscar/en) as it's done in a single cpu core.
- If the build crashes, everything done up to that point gets lost
Request: Shard the build over multiple arrow files, which would enable:
- much faster build by parallelizing the build process
- if the p... | {
"+1": 5,
"-1": 0,
"confused": 0,
"eyes": 3,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 2,
"total_count": 10,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2650/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2650/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1478 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1478/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1478/comments | https://api.github.com/repos/huggingface/datasets/issues/1478/events | https://github.com/huggingface/datasets/issues/1478 | 762,293,076 | MDU6SXNzdWU3NjIyOTMwNzY= | 1,478 | Inconsistent argument names. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8402500?v=4",
"events_url": "https://api.github.com/users/Fraser-Greenlee/events{/privacy}",
"followers_url": "https://api.github.com/users/Fraser-Greenlee/followers",
"following_url": "https://api.github.com/users/Fraser-Greenlee/following{/other_user}",... | [] | closed | false | null | [] | null | [
"Also for the `Accuracy` metric the `accuracy_score` method should have its args in the opposite order so `accuracy_score(predictions, references,,,)`.",
"Thanks for pointing this out ! 🕵🏻 \r\nPredictions and references should indeed be swapped in the docstring.\r\nHowever, the call to `accuracy_score` should n... | 2020-12-11T12:19:38Z | 2020-12-19T15:03:39Z | 2020-12-19T15:03:39Z | CONTRIBUTOR | null | null | null | Just find it a wee bit odd that in the transformers library `predictions` are those made by the model:
https://github.com/huggingface/transformers/blob/master/src/transformers/trainer_utils.py#L51-L61
While in many datasets metrics they are the ground truth labels:
https://github.com/huggingface/datasets/blob/c3f5... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1478/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1478/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5950 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5950/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5950/comments | https://api.github.com/repos/huggingface/datasets/issues/5950/events | https://github.com/huggingface/datasets/issues/5950 | 1,755,197,946 | I_kwDODunzps5onjH6 | 5,950 | Support for data with instance-wise dictionary as features | {
"avatar_url": "https://avatars.githubusercontent.com/u/33274336?v=4",
"events_url": "https://api.github.com/users/richardwth/events{/privacy}",
"followers_url": "https://api.github.com/users/richardwth/followers",
"following_url": "https://api.github.com/users/richardwth/following{/other_user}",
"gists_url"... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"Hi ! We use the Arrow columnar format under the hood, which doesn't support such dictionaries: each field must have a fixed type and exist in each sample.\r\n\r\nInstead you can restructure your data like\r\n```\r\n{\r\n \"index\": 0,\r\n \"keys\": [\"2 * x + y >= 3\"],\r\n \"values\": [[\"2 * x + y >= 3\... | 2023-06-13T15:49:00Z | 2023-06-14T12:13:38Z | null | NONE | null | null | null | ### Feature request
I notice that when loading data instances with feature type of python dictionary, the dictionary keys would be broadcast so that every instance has the same set of keys. Please see an example in the Motivation section.
It is possible to avoid this behavior, i.e., load dictionary features as it i... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5950/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5950/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2958 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2958/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2958/comments | https://api.github.com/repos/huggingface/datasets/issues/2958/events | https://github.com/huggingface/datasets/pull/2958 | 1,005,144,601 | PR_kwDODunzps4sLTaB | 2,958 | Add security policy to the project | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | null | [] | 2021-09-23T08:20:55Z | 2021-10-21T15:16:44Z | 2021-10-21T15:16:43Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2958.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2958",
"merged_at": "2021-10-21T15:16:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2958.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Add security policy to the project, as recommended by GitHub: https://docs.github.com/en/code-security/getting-started/adding-a-security-policy-to-your-repository
Close #2953. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2958/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2958/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5428 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5428/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5428/comments | https://api.github.com/repos/huggingface/datasets/issues/5428/events | https://github.com/huggingface/datasets/issues/5428 | 1,535,166,139 | I_kwDODunzps5bgMa7 | 5,428 | Load/Save FAISS index using fsspec | {
"avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4",
"events_url": "https://api.github.com/users/Dref360/events{/privacy}",
"followers_url": "https://api.github.com/users/Dref360/followers",
"following_url": "https://api.github.com/users/Dref360/following{/other_user}",
"gists_url": "https:/... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"Hi! Sure, feel free to submit a PR. Maybe if we want to be consistent with the existing API, it would be cleaner to directly add support for `fsspec` paths in `Dataset.load_faiss_index`/`Dataset.save_faiss_index` in the same manner as it was done in `Dataset.load_from_disk`/`Dataset.save_to_disk`.",
"That's a gr... | 2023-01-16T16:08:12Z | 2023-03-27T15:18:22Z | 2023-03-27T15:18:22Z | CONTRIBUTOR | null | null | null | ### Feature request
From what I understand `faiss` already support this [link](https://github.com/facebookresearch/faiss/wiki/Index-IO,-cloning-and-hyper-parameter-tuning#generic-io-support)
I would like to use a stream as input to `Dataset.load_faiss_index` and `Dataset.save_faiss_index`.
### Motivation
In... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5428/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5428/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1071 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1071/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1071/comments | https://api.github.com/repos/huggingface/datasets/issues/1071/events | https://github.com/huggingface/datasets/pull/1071 | 756,447,296 | MDExOlB1bGxSZXF1ZXN0NTMxOTkwNzY1 | 1,071 | add xlrd to test package requirements | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "htt... | [] | closed | false | null | [] | null | [] | 2020-12-03T18:32:47Z | 2020-12-03T18:47:16Z | 2020-12-03T18:47:16Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1071.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1071",
"merged_at": "2020-12-03T18:47:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1071.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Adds `xlrd` package to the test requirements to handle scripts that use `pandas` to load excel files | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1071/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1071/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4722 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4722/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4722/comments | https://api.github.com/repos/huggingface/datasets/issues/4722/events | https://github.com/huggingface/datasets/pull/4722 | 1,310,785,916 | PR_kwDODunzps47t_HJ | 4,722 | Docs: Fix same-page haslinks | {
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "htt... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-07-20T10:04:37Z | 2022-07-20T17:02:33Z | 2022-07-20T16:49:36Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4722.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4722",
"merged_at": "2022-07-20T16:49:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4722.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | `href="/docs/datasets/quickstart#audio"` implicitly goes to `href="/docs/datasets/{$LATEST_STABLE_VERSION}/quickstart#audio"`. Therefore, https://huggingface.co/docs/datasets/quickstart#audio #audio hashlink does not work since the new docs were not added to v2.3.2 (LATEST_STABLE_VERSION)
to preserve the version, it... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4722/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4722/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5705 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5705/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5705/comments | https://api.github.com/repos/huggingface/datasets/issues/5705/events | https://github.com/huggingface/datasets/issues/5705 | 1,653,500,383 | I_kwDODunzps5ijmnf | 5,705 | Getting next item from IterableDataset took forever. | {
"avatar_url": "https://avatars.githubusercontent.com/u/16588434?v=4",
"events_url": "https://api.github.com/users/HongtaoYang/events{/privacy}",
"followers_url": "https://api.github.com/users/HongtaoYang/followers",
"following_url": "https://api.github.com/users/HongtaoYang/following{/other_user}",
"gists_u... | [] | closed | false | null | [] | null | [
"Hi! It can take some time to iterate over Parquet files as big as yours, convert the samples to Python, and find the first one that matches a filter predicate before yielding it...",
"Thanks @mariosasko, I figured it was the filter operation. I'm closing this issue because it is not a bug, it is the expected beh... | 2023-04-04T09:16:17Z | 2023-04-05T23:35:41Z | 2023-04-05T23:35:41Z | NONE | null | null | null | ### Describe the bug
I have a large dataset, about 500GB. The format of the dataset is parquet.
I then load the dataset and try to get the first item
```python
def get_one_item():
dataset = load_dataset("path/to/datafiles", split="train", cache_dir=".", streaming=True)
dataset = dataset.filter(lambda... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5705/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5705/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1673 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1673/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1673/comments | https://api.github.com/repos/huggingface/datasets/issues/1673/events | https://github.com/huggingface/datasets/issues/1673 | 777,263,651 | MDU6SXNzdWU3NzcyNjM2NTE= | 1,673 | Unable to Download Hindi Wikipedia Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/30871963?v=4",
"events_url": "https://api.github.com/users/aditya3498/events{/privacy}",
"followers_url": "https://api.github.com/users/aditya3498/followers",
"following_url": "https://api.github.com/users/aditya3498/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [
"Currently this dataset is only available when the library is installed from source since it was added after the last release.\r\n\r\nWe pin the dataset version with the library version so that people can have a reproducible dataset and processing when pinning the library.\r\n\r\nWe'll see if we can provide access ... | 2021-01-01T10:52:53Z | 2021-01-05T10:22:12Z | 2021-01-05T10:22:12Z | NONE | null | null | null | I used the Dataset Library in Python to load the wikipedia dataset with the Hindi Config 20200501.hi along with something called beam_runner='DirectRunner' and it keeps giving me the error that the file is not found. I have attached the screenshot of the error and the code both. Please help me to understand how to reso... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1673/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1673/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2180 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2180/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2180/comments | https://api.github.com/repos/huggingface/datasets/issues/2180/events | https://github.com/huggingface/datasets/pull/2180 | 852,258,635 | MDExOlB1bGxSZXF1ZXN0NjEwNTQxOTA2 | 2,180 | Add tel to xtreme tatoeba | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [] | 2021-04-07T10:23:15Z | 2021-04-07T15:50:35Z | 2021-04-07T15:50:34Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2180.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2180",
"merged_at": "2021-04-07T15:50:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2180.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | This should fix issue #2149 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2180/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2180/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/604 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/604/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/604/comments | https://api.github.com/repos/huggingface/datasets/issues/604/events | https://github.com/huggingface/datasets/pull/604 | 697,774,581 | MDExOlB1bGxSZXF1ZXN0NDgzNjgxNTc0 | 604 | Update bucket prefix | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [] | 2020-09-10T11:01:13Z | 2020-09-10T12:45:33Z | 2020-09-10T12:45:32Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/604.diff",
"html_url": "https://github.com/huggingface/datasets/pull/604",
"merged_at": "2020-09-10T12:45:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/604.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/604... | cc @julien-c | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/604/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/604/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/845 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/845/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/845/comments | https://api.github.com/repos/huggingface/datasets/issues/845/events | https://github.com/huggingface/datasets/pull/845 | 741,841,350 | MDExOlB1bGxSZXF1ZXN0NTIwMDg1NDMy | 845 | amazon description fields as bullets | {
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https:/... | [] | closed | false | null | [] | null | [] | 2020-11-12T18:50:41Z | 2020-11-12T18:50:54Z | 2020-11-12T18:50:54Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/845.diff",
"html_url": "https://github.com/huggingface/datasets/pull/845",
"merged_at": "2020-11-12T18:50:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/845.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/845... | One more minor formatting change to amazon reviews's description (in addition to #844). Just reformatting the fields to display as a bulleted list in markdown. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/845/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/845/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/471 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/471/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/471/comments | https://api.github.com/repos/huggingface/datasets/issues/471/events | https://github.com/huggingface/datasets/pull/471 | 671,996,423 | MDExOlB1bGxSZXF1ZXN0NDYyMTExNTU1 | 471 | add reuters21578 dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"g... | [] | closed | false | null | [] | null | [] | 2020-08-03T11:07:14Z | 2022-08-04T08:39:11Z | 2020-09-03T09:58:50Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/471.diff",
"html_url": "https://github.com/huggingface/datasets/pull/471",
"merged_at": "2020-09-03T09:58:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/471.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/471... | new PR to add the reuters21578 dataset and fix the circle CI problems.
Fix partially:
- #353
Subsequent PR after:
- #449 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/471/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/471/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6112 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6112/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6112/comments | https://api.github.com/repos/huggingface/datasets/issues/6112/events | https://github.com/huggingface/datasets/issues/6112 | 1,833,693,299 | I_kwDODunzps5tS_Bz | 6,112 | yaml error using push_to_hub with generated README.md | {
"avatar_url": "https://avatars.githubusercontent.com/u/1643887?v=4",
"events_url": "https://api.github.com/users/kevintee/events{/privacy}",
"followers_url": "https://api.github.com/users/kevintee/followers",
"following_url": "https://api.github.com/users/kevintee/following{/other_user}",
"gists_url": "http... | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
... | null | [
"Thanks for reporting! This is a bug in converting the `ArrayXD` types to YAML. It will be fixed soon."
] | 2023-08-02T18:21:21Z | 2023-12-12T15:00:44Z | 2023-12-12T15:00:44Z | NONE | null | null | null | ### Describe the bug
When I construct a dataset with the following features:
```
features = Features(
{
"pixel_values": Array3D(dtype="float64", shape=(3, 224, 224)),
"input_ids": Sequence(feature=Value(dtype="int64")),
"attention_mask": Sequence(Value(dtype="int64")),
"token... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6112/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6112/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3815 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3815/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3815/comments | https://api.github.com/repos/huggingface/datasets/issues/3815/events | https://github.com/huggingface/datasets/pull/3815 | 1,158,589,512 | PR_kwDODunzps4z5oq- | 3,815 | Fix iter_archive getting reset | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [] | 2022-03-03T15:58:52Z | 2022-03-03T18:06:37Z | 2022-03-03T18:06:13Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3815.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3815",
"merged_at": "2022-03-03T18:06:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3815.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | The `DownloadManager.iter_archive` method currently returns an iterator - which is **empty** once you iter over it once. This means you can't pass the same archive iterator to several splits.
To fix that, I changed the ouput of `DownloadManager.iter_archive` to be an iterable that you can iterate over several times... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3815/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3815/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/599 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/599/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/599/comments | https://api.github.com/repos/huggingface/datasets/issues/599/events | https://github.com/huggingface/datasets/pull/599 | 697,377,786 | MDExOlB1bGxSZXF1ZXN0NDgzMzI3ODQ5 | 599 | Add MATINF dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"events_url": "https://api.github.com/users/JetRunner/events{/privacy}",
"followers_url": "https://api.github.com/users/JetRunner/followers",
"following_url": "https://api.github.com/users/JetRunner/following{/other_user}",
"gists_url": "... | [] | closed | false | null | [] | null | [
"Hi ! sorry for the late response\r\n\r\nCould you try to rebase from master ? We changed the named of the library last week so you have to include this change in your code.\r\n\r\nCan you give me more details about the error you get when running the cli command ?\r\n\r\nNote that in case of a manual download you h... | 2020-09-10T03:31:09Z | 2023-09-24T09:50:08Z | 2020-09-17T12:17:25Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/599.diff",
"html_url": "https://github.com/huggingface/datasets/pull/599",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/599.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/599"
} | @lhoestq The command to create metadata failed. I guess it's because the zip is not downloaded from a remote address? How to solve that? Also the CI fails and I don't know how to fix that :( | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/599/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/599/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5023 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5023/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5023/comments | https://api.github.com/repos/huggingface/datasets/issues/5023/events | https://github.com/huggingface/datasets/issues/5023 | 1,385,881,112 | I_kwDODunzps5Smt4Y | 5,023 | Text strings are split into lists of characters in xcsr dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [] | 2022-09-26T11:11:50Z | 2022-09-28T07:54:20Z | 2022-09-28T07:54:20Z | MEMBER | null | null | null | ## Describe the bug
Text strings are split into lists of characters.
Example for "X-CSQA-en":
```
{'id': 'd3845adc08414fda',
'lang': 'en',
'question': {'stem': ['T',
'h',
'e',
' ',
'd',
'e',
'n',
't',
'a',
'l',
' ',
'o',
'f',
'f',
'i',
'c',
'e',
... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5023/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5023/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2336 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2336/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2336/comments | https://api.github.com/repos/huggingface/datasets/issues/2336/events | https://github.com/huggingface/datasets/pull/2336 | 881,298,783 | MDExOlB1bGxSZXF1ZXN0NjM0ODk1OTU5 | 2,336 | Fix overflow issue in interpolation search | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [
"~~Seems like the CI failure is unrelated to this PR~~ (fixed with the merge). \r\n\r\n@lhoestq Can you please verify that everything is OK in terms of speed? Another solution is to change the offsets array dtype to np.int64 (but this doesn't scale in theory compared to Python integer which is unbound). I'm not sur... | 2021-05-08T20:51:36Z | 2021-05-10T13:29:07Z | 2021-05-10T13:26:12Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2336.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2336",
"merged_at": "2021-05-10T13:26:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2336.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Fixes #2335
More info about this error can be found [here](https://stackoverflow.com/questions/53239890/why-do-i-keep-getting-this-error-runtimewarning-overflow-encountered-in-int-sc/53240100). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2336/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2336/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3278 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3278/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3278/comments | https://api.github.com/repos/huggingface/datasets/issues/3278/events | https://github.com/huggingface/datasets/pull/3278 | 1,054,249,463 | PR_kwDODunzps4uj2EQ | 3,278 | Proposed update to the documentation for WER | {
"avatar_url": "https://avatars.githubusercontent.com/u/2111202?v=4",
"events_url": "https://api.github.com/users/wooters/events{/privacy}",
"followers_url": "https://api.github.com/users/wooters/followers",
"following_url": "https://api.github.com/users/wooters/following{/other_user}",
"gists_url": "https:/... | [] | closed | false | null | [] | null | [] | 2021-11-15T23:28:31Z | 2021-11-16T11:19:37Z | 2021-11-16T11:19:37Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3278.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3278",
"merged_at": "2021-11-16T11:19:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3278.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | I wanted to submit a minor update to the description of WER for your consideration.
Because of the possibility of insertions, the numerator in the WER formula can be larger than N, so the value of WER can be greater than 1.0:
```
>>> from datasets import load_metric
>>> metric = load_metric("wer")
>>> metric.... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3278/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3278/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4720 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4720/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4720/comments | https://api.github.com/repos/huggingface/datasets/issues/4720/events | https://github.com/huggingface/datasets/issues/4720 | 1,309,980,195 | I_kwDODunzps5OFLYj | 4,720 | Dataset Viewer issue for shamikbose89/lancaster_newsbooks | {
"avatar_url": "https://avatars.githubusercontent.com/u/50837285?v=4",
"events_url": "https://api.github.com/users/shamikbose/events{/privacy}",
"followers_url": "https://api.github.com/users/shamikbose/followers",
"following_url": "https://api.github.com/users/shamikbose/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [
"It seems like the list of splits could not be obtained:\r\n\r\n```python\r\n>>> from datasets import get_dataset_split_names\r\n>>> get_dataset_split_names(\"shamikbose89/lancaster_newsbooks\", \"default\")\r\nUsing custom data configuration default\r\nTraceback (most recent call last):\r\n File \"/home/slesage/h... | 2022-07-19T20:00:07Z | 2022-09-08T16:47:21Z | 2022-09-08T16:47:21Z | NONE | null | null | null | ### Link
https://huggingface.co/datasets/shamikbose89/lancaster_newsbooks
### Description
Status code: 400
Exception: ValueError
Message: Cannot seek streaming HTTP file
I am able to use the dataset loading script locally and it also runs when I'm using the one from the hub, but the viewer sti... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4720/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4720/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3192 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3192/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3192/comments | https://api.github.com/repos/huggingface/datasets/issues/3192/events | https://github.com/huggingface/datasets/issues/3192 | 1,041,308,086 | I_kwDODunzps4-ERm2 | 3,192 | Multiprocessing filter/map (tests) not working on Windows | {
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url":... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | [] | null | [] | 2021-11-01T15:36:08Z | 2021-11-01T15:57:03Z | null | CONTRIBUTOR | null | null | null | While running the tests, I found that the multiprocessing examples fail on Windows, or rather they do not complete: they cause a deadlock. I haven't dug deep into it, but they do not seem to work as-is. I currently have no time to tests this in detail but at least the tests seem not to run correctly (deadlocking).
#... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3192/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3192/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3110 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3110/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3110/comments | https://api.github.com/repos/huggingface/datasets/issues/3110/events | https://github.com/huggingface/datasets/pull/3110 | 1,030,558,484 | PR_kwDODunzps4tZakS | 3,110 | Stream TAR-based dataset using iter_archive | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [
"I'm creating a new branch `stream-tar-audio` just for the audio datasets since they need https://github.com/huggingface/datasets/pull/3129 to be merged first",
"The CI fails are only related to missing sections or tags in the dataset cards - which is unrelated to this PR"
] | 2021-10-19T17:16:24Z | 2021-11-05T17:48:49Z | 2021-11-05T17:48:48Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3110.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3110",
"merged_at": "2021-11-05T17:48:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3110.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | I converted all the dataset based on TAR archive to use iter_archive instead, so that they can be streamable.
It means that around 80 datasets become streamable :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3110/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3110/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4317 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4317/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4317/comments | https://api.github.com/repos/huggingface/datasets/issues/4317/events | https://github.com/huggingface/datasets/pull/4317 | 1,232,737,401 | PR_kwDODunzps43qBzh | 4,317 | Fix cnn_dailymail (dm stories were ignored) | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-11T14:25:25Z | 2022-05-11T16:00:09Z | 2022-05-11T15:52:37Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4317.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4317",
"merged_at": "2022-05-11T15:52:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4317.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | https://github.com/huggingface/datasets/pull/4188 introduced a bug in `datasets` 2.2.0: DailyMail stories are ignored when generating the dataset.
I fixed that, and removed the google drive link (it has annoying quota limitations issues)
We can do a patch release after this is merged | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4317/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4317/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1165 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1165/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1165/comments | https://api.github.com/repos/huggingface/datasets/issues/1165/events | https://github.com/huggingface/datasets/pull/1165 | 757,720,226 | MDExOlB1bGxSZXF1ZXN0NTMzMDQ0NzEy | 1,165 | Add ar rest reviews | {
"avatar_url": "https://avatars.githubusercontent.com/u/28743265?v=4",
"events_url": "https://api.github.com/users/abdulelahsm/events{/privacy}",
"followers_url": "https://api.github.com/users/abdulelahsm/followers",
"following_url": "https://api.github.com/users/abdulelahsm/following{/other_user}",
"gists_u... | [] | closed | false | null | [] | null | [
"Copy-pasted from the Slack discussion:\r\nthe annotation and language creators should be found , not unknown\r\nthe example should go under the \"Data Instances\" paragraph, not \"Data fields\"\r\ncan you remove the abstract from the citation and add it to the dataset description? More people will see that",
"@y... | 2020-12-05T16:56:42Z | 2020-12-21T17:06:23Z | 2020-12-21T17:06:23Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1165.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1165",
"merged_at": "2020-12-21T17:06:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1165.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | added restaurants reviews in Arabic for sentiment analysis tasks | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1165/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1165/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6010 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6010/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6010/comments | https://api.github.com/repos/huggingface/datasets/issues/6010/events | https://github.com/huggingface/datasets/issues/6010 | 1,793,838,152 | I_kwDODunzps5q68xI | 6,010 | Improve `Dataset`'s string representation | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"I want to take a shot at this if possible ",
"Yes, feel free to work on this.\r\n\r\nYou can check the PyArrow Table `__repr__` and Polars DataFrame `__repr__`/`_repr_html_` implementations for some pointers/ideas.",
"@mariosasko are there any other similar issues that I could work on? I see this has been alr... | 2023-07-07T16:38:03Z | 2023-09-01T03:45:07Z | null | CONTRIBUTOR | null | null | null | Currently, `Dataset.__repr__` outputs a dataset's column names and the number of rows. We could improve it by printing its features and the first few rows.
We should also implement `_repr_html_` to have a rich HTML representation in notebooks/Streamlit. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6010/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6010/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5188 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5188/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5188/comments | https://api.github.com/repos/huggingface/datasets/issues/5188/events | https://github.com/huggingface/datasets/pull/5188 | 1,432,477,139 | PR_kwDODunzps5CBaoQ | 5,188 | add: segmentation guide. | {
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "... | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
... | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks @osanseviero. Am I good to merge? ",
"I would wait for a second approval just in case :) ",
"Sure :) ",
"Merging since the images have been pushed as LFS files ([PR](https://huggingface.co/datasets/huggingface/documentat... | 2022-11-02T04:34:36Z | 2022-11-04T18:25:57Z | 2022-11-04T18:23:34Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5188.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5188",
"merged_at": "2022-11-04T18:23:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5188.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Closes #5181
I have opened a PR on Hub (https://huggingface.co/datasets/huggingface/documentation-images/discussions/5) to include the images in our central Hub repository. Once the PR is merged I will edit the image links.
I have also prepared a [Colab Notebook](https://colab.research.google.com/drive/1BMDCfOT... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5188/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5188/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6252 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6252/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6252/comments | https://api.github.com/repos/huggingface/datasets/issues/6252/events | https://github.com/huggingface/datasets/issues/6252 | 1,906,375,378 | I_kwDODunzps5xoPrS | 6,252 | exif_transpose not done to Image (PIL problem) | {
"avatar_url": "https://avatars.githubusercontent.com/u/108274349?v=4",
"events_url": "https://api.github.com/users/rhajou/events{/privacy}",
"followers_url": "https://api.github.com/users/rhajou/followers",
"following_url": "https://api.github.com/users/rhajou/following{/other_user}",
"gists_url": "https://... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | {
"closed_at": null,
"closed_issues": 0,
"created_at": "2023-02-13T16:22:42Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/follow... | [
"Indeed, it makes sense to do this by default. \r\n\r\nIn the meantime, you can use `.with_transform` to transpose the images when accessing them:\r\n\r\n```python\r\nimport PIL.ImageOps\r\n\r\ndef exif_transpose_transform(batch):\r\n batch[\"image\"] = [PIL.ImageOps.exif_transpose(image) for image in batch[\"imag... | 2023-09-21T08:11:46Z | 2023-09-22T14:07:52Z | null | NONE | null | null | null | ### Feature request
I noticed that some of my images loaded using PIL have some metadata related to exif that can rotate them when loading.
Since dataset.features.Image uses PIL for loading, the loaded image may be rotated (width and height will be inverted); thus, for tasks such as object detection and LayoutLM this ca...
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6252/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6252/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/112 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/112/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/112/comments | https://api.github.com/repos/huggingface/datasets/issues/112/events | https://github.com/huggingface/datasets/pull/112 | 618,569,195 | MDExOlB1bGxSZXF1ZXN0NDE4Mjc0MTU4 | 112 | Qa4mre - add dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_use... | [] | closed | false | null | [] | null | [] | 2020-05-14T22:17:51Z | 2020-05-15T09:16:43Z | 2020-05-15T09:16:42Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/112.diff",
"html_url": "https://github.com/huggingface/datasets/pull/112",
"merged_at": "2020-05-15T09:16:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/112.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/112... | Added dummy data test only for the first config. Will do the rest later.
I had to add some minor hacks to an important function to make it work.
There might be a cleaner way to handle it - can you take a look @thomwolf ? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/112/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/112/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5590 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5590/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5590/comments | https://api.github.com/repos/huggingface/datasets/issues/5590/events | https://github.com/huggingface/datasets/pull/5590 | 1,603,549,504 | PR_kwDODunzps5K9N_H | 5,590 | Release: 2.10.1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | 2023-02-28T17:58:11Z | 2023-02-28T18:16:27Z | 2023-02-28T18:06:08Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5590.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5590",
"merged_at": "2023-02-28T18:06:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5590.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5590/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5590/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1355 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1355/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1355/comments | https://api.github.com/repos/huggingface/datasets/issues/1355/events | https://github.com/huggingface/datasets/pull/1355 | 759,994,208 | MDExOlB1bGxSZXF1ZXN0NTM0ODk3NzQw | 1,355 | Addition of py_ast dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/36307201?v=4",
"events_url": "https://api.github.com/users/reshinthadithyan/events{/privacy}",
"followers_url": "https://api.github.com/users/reshinthadithyan/followers",
"following_url": "https://api.github.com/users/reshinthadithyan/following{/other_use... | [] | closed | false | null | [] | null | [] | 2020-12-09T04:59:17Z | 2020-12-09T16:19:49Z | 2020-12-09T16:19:48Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1355.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1355",
"merged_at": "2020-12-09T16:19:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1355.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | @lhoestq as discussed in PR #1195 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1355/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1355/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1363 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1363/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1363/comments | https://api.github.com/repos/huggingface/datasets/issues/1363/events | https://github.com/huggingface/datasets/pull/1363 | 760,160,944 | MDExOlB1bGxSZXF1ZXN0NTM1MDM4NjM0 | 1,363 | Adding OPUS MultiUN | {
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_u... | [] | closed | false | null | [] | null | [] | 2020-12-09T09:29:01Z | 2020-12-09T17:54:20Z | 2020-12-09T17:54:20Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1363.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1363",
"merged_at": "2020-12-09T17:54:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1363.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Adding UnMulti
http://www.euromatrixplus.net/multi-un/ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1363/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1363/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2679 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2679/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2679/comments | https://api.github.com/repos/huggingface/datasets/issues/2679/events | https://github.com/huggingface/datasets/issues/2679 | 948,506,638 | MDU6SXNzdWU5NDg1MDY2Mzg= | 2,679 | Cannot load the blog_authorship_corpus due to codec errors | {
"avatar_url": "https://avatars.githubusercontent.com/u/38069449?v=4",
"events_url": "https://api.github.com/users/izaskr/events{/privacy}",
"followers_url": "https://api.github.com/users/izaskr/followers",
"following_url": "https://api.github.com/users/izaskr/following{/other_user}",
"gists_url": "https://a... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"Hi @izaskr, thanks for reporting.\r\n\r\nHowever the traceback you joined does not correspond to the codec error message: it is about other error `NonMatchingSplitsSizesError`. Maybe you missed some important part of your traceback...\r\n\r\nI'm going to have a look at the dataset anyway...",
"Hi @izaskr, thanks... | 2021-07-20T10:13:20Z | 2021-07-21T17:02:21Z | 2021-07-21T13:11:58Z | NONE | null | null | null | ## Describe the bug
A codec error is raised while loading the blog_authorship_corpus.
## Steps to reproduce the bug
```
from datasets import load_dataset
raw_datasets = load_dataset("blog_authorship_corpus")
```
## Expected results
Loading the dataset without errors.
## Actual results
An error simila... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2679/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2679/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4381 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4381/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4381/comments | https://api.github.com/repos/huggingface/datasets/issues/4381/events | https://github.com/huggingface/datasets/issues/4381 | 1,243,478,863 | I_kwDODunzps5KHftP | 4,381 | Bug in caching 2 datasets both with the same builder class name | {
"avatar_url": "https://avatars.githubusercontent.com/u/29777165?v=4",
"events_url": "https://api.github.com/users/NouamaneTazi/events{/privacy}",
"followers_url": "https://api.github.com/users/NouamaneTazi/followers",
"following_url": "https://api.github.com/users/NouamaneTazi/following{/other_user}",
"gist... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"Hi @NouamaneTazi, thanks for reporting.\r\n\r\nPlease note that both datasets are cached in the same directory because their loading builder classes have the same name: `class MTOP(datasets.GeneratorBasedBuilder)`.\r\n\r\nYou should name their builder classes differently, e.g.:\r\n- `MtopDomain`\r\n- `MtopIntent`"... | 2022-05-20T18:18:03Z | 2022-06-02T08:18:37Z | 2022-05-25T05:16:15Z | MEMBER | null | null | null | ## Describe the bug
The two datasets `mteb/mtop_intent` and `mteb/mtop_domain` both use the same cache folder `.cache/huggingface/datasets/mteb___mtop`. So if you first load `mteb/mtop_intent`, then datasets will not load `mteb/mtop_domain`.
If you delete this cache folder and flip the order in which you load the two datas... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4381/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4381/timeline | null | completed | false |
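The caching collision reported for `mteb/mtop_intent` and `mteb/mtop_domain` above comes from deriving the cache directory from the loading builder class name alone. A minimal stdlib-only sketch of the failure mode (the cache keying here is a deliberately simplified, hypothetical stand-in, not the actual `datasets` internals):

```python
# Hypothetical illustration of a cache keyed only by the builder class
# name, as described in the issue. Not the real `datasets` implementation.

class MTOPIntent:
    data = "intent rows"

class MTOPDomain:
    data = "domain rows"

# Simulate both loading scripts defining `class MTOP(...)`:
MTOPIntent.__name__ = "MTOP"
MTOPDomain.__name__ = "MTOP"

cache = {}

def load(builder_cls):
    key = builder_cls.__name__  # cache dir derived from class name only
    if key not in cache:
        cache[key] = builder_cls.data  # "download and prepare"
    return cache[key]  # a hit here can serve the *other* dataset's data

first = load(MTOPIntent)
second = load(MTOPDomain)  # silently returns the intent data
```

Renaming the builder classes distinctly (e.g. `MtopIntent` / `MtopDomain`, as suggested in the reply) makes the keys differ and removes the collision.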
https://api.github.com/repos/huggingface/datasets/issues/3616 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3616/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3616/comments | https://api.github.com/repos/huggingface/datasets/issues/3616/events | https://github.com/huggingface/datasets/pull/3616 | 1,111,587,861 | PR_kwDODunzps4xcZMD | 3,616 | Make streamable the BnL Historical Newspapers dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | null | [] | 2022-01-22T14:52:36Z | 2022-02-04T14:05:23Z | 2022-02-04T14:05:21Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3616.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3616",
"merged_at": "2022-02-04T14:05:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3616.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | I've refactored the code in order to make the dataset streamable and to avoid it takes too long:
- I've used `iter_files`
Close #3615 | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3616/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3616/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6271 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6271/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6271/comments | https://api.github.com/repos/huggingface/datasets/issues/6271/events | https://github.com/huggingface/datasets/issues/6271 | 1,920,420,295 | I_kwDODunzps5yd0nH | 6,271 | Overwriting Split overwrites data but not metadata, corrupting dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/13859249?v=4",
"events_url": "https://api.github.com/users/govindrai/events{/privacy}",
"followers_url": "https://api.github.com/users/govindrai/followers",
"following_url": "https://api.github.com/users/govindrai/following{/other_user}",
"gists_url": "... | [] | closed | false | null | [] | null | [] | 2023-09-30T22:37:31Z | 2023-10-16T13:30:50Z | 2023-10-16T13:30:50Z | NONE | null | null | null | ### Describe the bug
I want to be able to overwrite/update/delete splits in my dataset. Currently the only way to do so is to manually go into the dataset and delete the split. If I try to overwrite programmatically, I end up in an error state and (somewhat) corrupt the dataset. Read below.
**Current Behavior**
Whe... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6271/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6271/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5074 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5074/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5074/comments | https://api.github.com/repos/huggingface/datasets/issues/5074/events | https://github.com/huggingface/datasets/issues/5074 | 1,397,850,352 | I_kwDODunzps5TUYDw | 5,074 | Replace AssertionErrors with more meaningful errors | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
},
{
"color": "DF8D62",
"defa... | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/20004072?v=4",
"events_url": "https://api.github.com/users/galbwe/events{/privacy}",
"followers_url": "https://api.github.com/users/galbwe/followers",
"following_url": "https://api.github.com/users/galbwe/following{/other_user}",
"gists_url": "https://a... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/20004072?v=4",
"events_url": "https://api.github.com/users/galbwe/events{/privacy}",
"followers_url": "https://api.github.com/users/galbwe/followers",
"following_url": "https://api.github.com/users/galbwe/following{/other_user}",
"gists_ur... | null | [
"Hi, can I pick up this issue?",
"#self-assign",
"Looks like the top-level `datasource` directory was removed when https://github.com/huggingface/datasets/pull/4974 was merged, so there are 3 source files to fix."
] | 2022-10-05T14:03:55Z | 2022-10-07T14:33:11Z | 2022-10-07T14:33:11Z | CONTRIBUTOR | null | null | null | Replace the AssertionErrors with more meaningful errors such as ValueError, TypeError, etc.
The files with AssertionErrors that need to be replaced:
```
src/datasets/arrow_reader.py
src/datasets/builder.py
src/datasets/utils/version.py
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5074/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5074/timeline | null | completed | false |
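The AssertionError-replacement issue above asks for specific exception types in `arrow_reader.py`, `builder.py`, and `utils/version.py`. A hedged before/after sketch of the kind of change requested (illustrative only; the function name and check are made up, not actual call sites from those files):

```python
# Before: an assert raises a generic AssertionError, and is skipped
# entirely when Python runs with optimizations (`python -O`).
def set_split_before(split):
    assert isinstance(split, str), "split must be a string"
    return split

# After: raise a specific, meaningful exception instead.
def set_split_after(split):
    if not isinstance(split, str):
        raise TypeError(f"Expected str for split, got {type(split).__name__}")
    return split
```

`TypeError` fits wrong-type checks; `ValueError` would be the analogous choice for a well-typed but invalid value.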
https://api.github.com/repos/huggingface/datasets/issues/5944 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5944/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5944/comments | https://api.github.com/repos/huggingface/datasets/issues/5944/events | https://github.com/huggingface/datasets/pull/5944 | 1,752,882,200 | PR_kwDODunzps5Sx7O4 | 5,944 | Arrow dataset builder to be able to load and stream Arrow datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/10278877?v=4",
"events_url": "https://api.github.com/users/mariusz-jachimowicz-83/events{/privacy}",
"followers_url": "https://api.github.com/users/mariusz-jachimowicz-83/followers",
"following_url": "https://api.github.com/users/mariusz-jachimowicz-83/fo... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq tips applied. Thanks for a review. :smile: It's a lot of fun to improve this project. ",
"Let's add some documentation in a subsequent PR :)\r\n\r\nIn particular @mariosasko and I think it's important to note to users tha... | 2023-06-12T14:21:49Z | 2023-06-13T17:36:02Z | 2023-06-13T17:29:01Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5944.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5944",
"merged_at": "2023-06-13T17:29:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5944.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | This adds a Arrow dataset builder to be able to load and stream from already preprocessed Arrow files.
It's related to https://github.com/huggingface/datasets/issues/3035 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5944/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5944/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3600 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3600/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3600/comments | https://api.github.com/repos/huggingface/datasets/issues/3600/events | https://github.com/huggingface/datasets/pull/3600 | 1,108,131,878 | PR_kwDODunzps4xQ-vt | 3,600 | Use old url for conll2003 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [] | 2022-01-19T13:56:49Z | 2022-01-19T14:16:28Z | 2022-01-19T14:16:28Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3600.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3600",
"merged_at": "2022-01-19T14:16:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3600.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | As reported in https://github.com/huggingface/datasets/issues/3582 the CoNLL2003 data files are not available in the master branch of the repo that used to host them.
For now we can use the URL from an older commit to access the data files | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3600/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3600/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3499 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3499/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3499/comments | https://api.github.com/repos/huggingface/datasets/issues/3499/events | https://github.com/huggingface/datasets/issues/3499 | 1,090,132,618 | I_kwDODunzps5A-hqK | 3,499 | Adjusting chunk size for streaming datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/3775944?v=4",
"events_url": "https://api.github.com/users/JoelNiklaus/events{/privacy}",
"followers_url": "https://api.github.com/users/JoelNiklaus/followers",
"following_url": "https://api.github.com/users/JoelNiklaus/following{/other_user}",
"gists_ur... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"Hi ! Data streaming uses `fsspec` to read the data files progressively. IIRC the block size for buffering is 5MiB by default. So every time you finish iterating over a block, it downloads the next one. You can still try to increase the `fsspec` block size for buffering if it can help. To do so you just need to inc... | 2021-12-28T21:17:53Z | 2022-05-06T16:29:05Z | 2022-05-06T16:29:05Z | CONTRIBUTOR | null | null | null | **Is your feature request related to a problem? Please describe.**
I want to use mc4 which I cannot save locally, so I stream it. However, I want to process the entire dataset and filter some documents from it. With the current chunk size of around 1000 documents (right?) I hit a performance bottleneck because of the ... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3499/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3499/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2806 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2806/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2806/comments | https://api.github.com/repos/huggingface/datasets/issues/2806/events | https://github.com/huggingface/datasets/pull/2806 | 971,625,449 | MDExOlB1bGxSZXF1ZXN0NzEzMzM5NDUw | 2,806 | Fix streaming tar files from canonical datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | null | [
"In case it's relevant for this PR, I'm finding that I cannot stream the `bookcorpus` dataset (using the `master` branch of `datasets`), which is a `.tar.bz2` file:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nbooks_dataset_streamed = load_dataset(\"bookcorpus\", split=\"train\", streaming=True)\r\n... | 2021-08-16T11:10:28Z | 2021-10-13T09:04:03Z | 2021-10-13T09:04:02Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2806.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2806",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2806.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2806"
} | Previous PR #2800 implemented support to stream remote tar files when passing the parameter `data_files`: they required a glob string `"*"`.
However, this glob string creates an error when streaming canonical datasets (with a `join` after the `open`).
This PR fixes this issue and allows streaming tar files both f... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2806/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2806/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4017 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4017/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4017/comments | https://api.github.com/repos/huggingface/datasets/issues/4017/events | https://github.com/huggingface/datasets/pull/4017 | 1,180,595,160 | PR_kwDODunzps41Ad_L | 4,017 | Support streaming scan dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-03-25T10:11:28Z | 2022-03-25T12:08:55Z | 2022-03-25T12:03:52Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4017.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4017",
"merged_at": "2022-03-25T12:03:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4017.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4017/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4017/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1052 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1052/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1052/comments | https://api.github.com/repos/huggingface/datasets/issues/1052/events | https://github.com/huggingface/datasets/pull/1052 | 756,171,798 | MDExOlB1bGxSZXF1ZXN0NTMxNzU5MjA0 | 1,052 | add sharc dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_u... | [] | closed | false | null | [] | null | [] | 2020-12-03T12:57:23Z | 2020-12-03T16:44:21Z | 2020-12-03T14:09:54Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1052.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1052",
"merged_at": "2020-12-03T14:09:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1052.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | This PR adds the ShARC dataset.
More info:
https://sharc-data.github.io/index.html | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1052/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1052/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5796 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5796/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5796/comments | https://api.github.com/repos/huggingface/datasets/issues/5796/events | https://github.com/huggingface/datasets/pull/5796 | 1,685,451,919 | PR_kwDODunzps5PORm- | 5,796 | Spark docs | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-04-26T17:39:43Z | 2023-04-27T16:41:50Z | 2023-04-27T16:34:45Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5796.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5796",
"merged_at": "2023-04-27T16:34:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5796.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Added a "Use with Spark" doc page to document `Dataset.from_spark` following https://github.com/huggingface/datasets/pull/5701
cc @maddiedawson | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5796/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5796/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4241 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4241/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4241/comments | https://api.github.com/repos/huggingface/datasets/issues/4241/events | https://github.com/huggingface/datasets/issues/4241 | 1,217,423,686 | I_kwDODunzps5IkGlG | 4,241 | NonMatchingChecksumError when attempting to download GLUE | {
"avatar_url": "https://avatars.githubusercontent.com/u/9650729?v=4",
"events_url": "https://api.github.com/users/drussellmrichie/events{/privacy}",
"followers_url": "https://api.github.com/users/drussellmrichie/followers",
"following_url": "https://api.github.com/users/drussellmrichie/following{/other_user}",... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"Hi :)\r\n\r\nI think your issue may be related to the older `nlp` library. I was able to download `glue` with the latest version of `datasets`. Can you try updating with:\r\n\r\n```py\r\npip install -U datasets\r\n```\r\n\r\nThen you can download:\r\n\r\n```py\r\nfrom datasets import load_dataset\r\nds = load_data... | 2022-04-27T14:14:21Z | 2022-04-28T07:45:27Z | 2022-04-28T07:45:27Z | NONE | null | null | null | ## Describe the bug
I am trying to download the GLUE dataset from the NLP module but get an error (see below).
## Steps to reproduce the bug
```python
import nlp
nlp.__version__ # '0.2.0'
nlp.load_dataset('glue', name="rte", download_mode="force_redownload")
```
## Expected results
I expect the dataset to ... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4241/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4241/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5002 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5002/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5002/comments | https://api.github.com/repos/huggingface/datasets/issues/5002/events | https://github.com/huggingface/datasets/issues/5002 | 1,380,589,402 | I_kwDODunzps5SSh9a | 5,002 | Dataset Viewer issue for loubnabnl/humaneval-x | {
"avatar_url": "https://avatars.githubusercontent.com/u/44069155?v=4",
"events_url": "https://api.github.com/users/loubnabnl/events{/privacy}",
"followers_url": "https://api.github.com/users/loubnabnl/followers",
"following_url": "https://api.github.com/users/loubnabnl/following{/other_user}",
"gists_url": "... | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://ap... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url... | null | [
"It's a bug! Thanks for reporting, I'm looking at it",
"Fixed."
] | 2022-09-21T09:06:17Z | 2022-09-21T11:49:49Z | 2022-09-21T11:49:49Z | NONE | null | null | null | ### Link
https://huggingface.co/datasets/loubnabnl/humaneval-x/viewer/
### Description
The dataset has subsets but the viewer gets stuck in the default subset even when I select another one (the data loading of the subsets works fine)
### Owner
Yes | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5002/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5002/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2885 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2885/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2885/comments | https://api.github.com/repos/huggingface/datasets/issues/2885/events | https://github.com/huggingface/datasets/issues/2885 | 992,160,544 | MDU6SXNzdWU5OTIxNjA1NDQ= | 2,885 | Adding an Elastic Search index to a Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/36195371?v=4",
"events_url": "https://api.github.com/users/MotzWanted/events{/privacy}",
"followers_url": "https://api.github.com/users/MotzWanted/followers",
"following_url": "https://api.github.com/users/MotzWanted/following{/other_user}",
"gists_url"... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | [] | null | [
"Hi, is this bug deterministic in your poetry env ? I mean, does it always stop at 90% or is it random ?\r\n\r\nAlso, can you try using another version of Elasticsearch ? Maybe there's an issue with the one of you poetry env",
"I face similar issue with oscar dataset on remote ealsticsearch instance. It was mainl... | 2021-09-09T12:21:39Z | 2021-10-20T18:57:11Z | null | NONE | null | null | null | ## Describe the bug
When trying to index documents from the squad dataset, the connection to ElasticSearch seems to break:
Reusing dataset squad (/Users/andreasmotz/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453)
90%|████████████████████████████... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2885/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2885/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6412 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6412/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6412/comments | https://api.github.com/repos/huggingface/datasets/issues/6412/events | https://github.com/huggingface/datasets/issues/6412 | 1,992,401,594 | I_kwDODunzps52waK6 | 6,412 | User token is printed out! | {
"avatar_url": "https://avatars.githubusercontent.com/u/25702692?v=4",
"events_url": "https://api.github.com/users/mohsen-goodarzi/events{/privacy}",
"followers_url": "https://api.github.com/users/mohsen-goodarzi/followers",
"following_url": "https://api.github.com/users/mohsen-goodarzi/following{/other_user}"... | [] | closed | false | null | [] | null | [
"Indeed, this is not a good practice. I've opened a PR that removes the token value from the (deprecation) warning."
] | 2023-11-14T10:01:34Z | 2023-11-14T22:19:46Z | 2023-11-14T22:19:46Z | NONE | null | null | null | This line prints user token on command line! Is it safe?
https://github.com/huggingface/datasets/blob/12ebe695b4748c5a26e08b44ed51955f74f5801d/src/datasets/load.py#L2091 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6412/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6412/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6168 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6168/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6168/comments | https://api.github.com/repos/huggingface/datasets/issues/6168/events | https://github.com/huggingface/datasets/pull/6168 | 1,861,867,274 | PR_kwDODunzps5YhT7Y | 6,168 | Fix ArrayXD YAML conversion | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6168). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma... | 2023-08-22T17:02:54Z | 2023-12-12T15:06:59Z | 2023-12-12T15:00:43Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6168.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6168",
"merged_at": "2023-12-12T15:00:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6168.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Replace the `shape` tuple with a list in the `ArrayXD` YAML conversion.
Fix #6112 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6168/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6168/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/153 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/153/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/153/comments | https://api.github.com/repos/huggingface/datasets/issues/153/events | https://github.com/huggingface/datasets/issues/153 | 619,972,246 | MDU6SXNzdWU2MTk5NzIyNDY= | 153 | Meta-datasets (GLUE/XTREME/...) – Special care to attributions and citations | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "http... | [
{
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library",
"id": 2067400324,
"name": "generic discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion"
}
] | open | false | null | [] | null | [
"As @yoavgo suggested, there should be the possibility to call a function like nlp.bib that outputs all bibtex ref from the datasets and models actually used and eventually nlp.bib.forreadme that would output the same info + versions numbers so they can be included in a readme.md file.",
"Actually, double checki... | 2020-05-18T07:24:22Z | 2020-05-18T21:18:16Z | null | MEMBER | null | null | null | Meta-datasets are interesting in terms of standardized benchmarks but they also have specific behaviors, in particular in terms of attribution and authorship. It's very important that each specific dataset inside a meta dataset is properly referenced and the citation/specific homepage/etc are very visible and accessibl... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/153/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/153/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/913 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/913/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/913/comments | https://api.github.com/repos/huggingface/datasets/issues/913/events | https://github.com/huggingface/datasets/pull/913 | 752,892,020 | MDExOlB1bGxSZXF1ZXN0NTI5MDkyOTc3 | 913 | My new dataset PEC | {
"avatar_url": "https://avatars.githubusercontent.com/u/11826803?v=4",
"events_url": "https://api.github.com/users/zhongpeixiang/events{/privacy}",
"followers_url": "https://api.github.com/users/zhongpeixiang/followers",
"following_url": "https://api.github.com/users/zhongpeixiang/following{/other_user}",
"g... | [] | closed | false | null | [] | null | [
"How to resolve these failed checks?",
"Thanks for adding this one :) \r\n\r\nTo fix the check_code_quality, please run `make style` with the latest version of black, isort, flake8\r\nTo fix the test_no_encoding_on_file_open, make sure to specify the encoding each time you call `open()` on a text file.\r\nFor exa... | 2020-11-29T11:10:37Z | 2020-12-01T10:41:53Z | 2020-12-01T10:41:53Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/913.diff",
"html_url": "https://github.com/huggingface/datasets/pull/913",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/913.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/913"
} | A new dataset PEC published in EMNLP 2020. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/913/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/913/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3767 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3767/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3767/comments | https://api.github.com/repos/huggingface/datasets/issues/3767/events | https://github.com/huggingface/datasets/pull/3767 | 1,146,036,648 | PR_kwDODunzps4zPahh | 3,767 | Expose method and fix param | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://ap... | [] | closed | false | null | [] | null | [] | 2022-02-21T16:57:47Z | 2022-02-22T08:35:03Z | 2022-02-22T08:35:02Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3767.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3767",
"merged_at": "2022-02-22T08:35:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3767.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | A fix + expose a new method, following https://github.com/huggingface/datasets/pull/3670 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3767/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3767/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1989 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1989/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1989/comments | https://api.github.com/repos/huggingface/datasets/issues/1989/events | https://github.com/huggingface/datasets/issues/1989 | 822,328,147 | MDU6SXNzdWU4MjIzMjgxNDc= | 1,989 | Question/problem with dataset labels | {
"avatar_url": "https://avatars.githubusercontent.com/u/17202292?v=4",
"events_url": "https://api.github.com/users/ioana-blue/events{/privacy}",
"followers_url": "https://api.github.com/users/ioana-blue/followers",
"following_url": "https://api.github.com/users/ioana-blue/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [
"It seems that I get parsing errors for various fields in my data. For example now I get this:\r\n```\r\n File \"../../../models/tr-4.3.2/run_puppets.py\", line 523, in <module>\r\n main()\r\n File \"../../../models/tr-4.3.2/run_puppets.py\", line 249, in main\r\n datasets = load_dataset(\"csv\", data_files... | 2021-03-04T17:06:53Z | 2023-07-24T14:39:33Z | 2023-07-24T14:39:33Z | NONE | null | null | null | Hi, I'm using a dataset with two labels "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon".
This is the trace I get:
```
File "../../../models/tr-4.3.2/run_... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1989/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1989/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/17 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/17/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/17/comments | https://api.github.com/repos/huggingface/datasets/issues/17/events | https://github.com/huggingface/datasets/pull/17 | 605,753,027 | MDExOlB1bGxSZXF1ZXN0NDA4MDk3NjM0 | 17 | Add Pandas as format type | {
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.githu... | [] | closed | false | null | [] | null | [] | 2020-04-23T18:20:14Z | 2020-04-27T18:07:50Z | 2020-04-27T18:07:48Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/17.diff",
"html_url": "https://github.com/huggingface/datasets/pull/17",
"merged_at": "2020-04-27T18:07:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/17.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/17"
} | As detailed in the title ^^ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/17/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/17/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2428 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2428/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2428/comments | https://api.github.com/repos/huggingface/datasets/issues/2428/events | https://github.com/huggingface/datasets/pull/2428 | 907,169,746 | MDExOlB1bGxSZXF1ZXN0NjU4MDU2MjI3 | 2,428 | Add copyright info for wiki_lingua dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "ht... | [] | closed | false | null | [] | null | [
"Build fails but this change should not be the reason...",
"rebased on master",
"rebased on master"
] | 2021-05-31T07:22:52Z | 2021-06-04T10:22:33Z | 2021-06-04T10:22:33Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2428.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2428",
"merged_at": "2021-06-04T10:22:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2428.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2428/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2428/timeline | null | null | true | |
https://api.github.com/repos/huggingface/datasets/issues/4069 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4069/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4069/comments | https://api.github.com/repos/huggingface/datasets/issues/4069/events | https://github.com/huggingface/datasets/pull/4069 | 1,186,790,578 | PR_kwDODunzps41VIMJ | 4,069 | Add support for metadata files to `imagefolder` | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Love it !\r\n\r\n+1 to using JSON Lines rather than CSV. I've also seen image datasets for which JSON Lines was used.\r\n\r\nA `file_name` column sounds good as well, and it means we could reuse the same name for audio. And ok to che... | 2022-03-30T17:47:51Z | 2022-05-03T12:49:00Z | 2022-05-03T12:42:16Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4069.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4069",
"merged_at": "2022-05-03T12:42:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4069.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | This PR adds support for metadata files to `imagefolder` to add an ability to specify image fields other than `image` and `label`, which are inferred from the directory structure in the loaded dataset.
To be parsed as an image metadata file, a file should be named `"info.csv"` and should have the following structure... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4069/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4069/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4579 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4579/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4579/comments | https://api.github.com/repos/huggingface/datasets/issues/4579/events | https://github.com/huggingface/datasets/pull/4579 | 1,286,106,285 | PR_kwDODunzps46bo2h | 4,579 | Support streaming cfq dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq I've been refactoring a little the code:\r\n- Use less RAM by loading only the required samples: only if its index is in the splits file\r\n- Start yielding \"earlier\" in streaming mode: for each `split_idx`:\r\n - either ... | 2022-06-27T17:11:23Z | 2022-07-04T19:35:01Z | 2022-07-04T19:23:57Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4579.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4579",
"merged_at": "2022-07-04T19:23:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4579.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Support streaming cfq dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4579/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4579/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/77 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/77/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/77/comments | https://api.github.com/repos/huggingface/datasets/issues/77/events | https://github.com/huggingface/datasets/pull/77 | 616,674,601 | MDExOlB1bGxSZXF1ZXN0NDE2NzQwMjAz | 77 | New datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"g... | [] | closed | false | null | [] | null | [] | 2020-05-12T13:51:59Z | 2020-05-12T14:02:16Z | 2020-05-12T14:02:15Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/77.diff",
"html_url": "https://github.com/huggingface/datasets/pull/77",
"merged_at": "2020-05-12T14:02:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/77.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/77"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/77/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/77/timeline | null | null | true | |
https://api.github.com/repos/huggingface/datasets/issues/5603 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5603/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5603/comments | https://api.github.com/repos/huggingface/datasets/issues/5603/events | https://github.com/huggingface/datasets/pull/5603 | 1,607,143,509 | PR_kwDODunzps5LJZzG | 5,603 | Don't compute checksums if not necessary in `datasets-cli test` | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-03-02T16:42:39Z | 2023-03-03T15:45:32Z | 2023-03-03T15:38:28Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5603.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5603",
"merged_at": "2023-03-03T15:38:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5603.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | we only need them if there exists a `dataset_infos.json` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5603/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5603/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6032 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6032/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6032/comments | https://api.github.com/repos/huggingface/datasets/issues/6032/events | https://github.com/huggingface/datasets/issues/6032 | 1,804,358,679 | I_kwDODunzps5rjFQX | 6,032 | DownloadConfig.proxies not work when load_dataset_builder calling HfApi.dataset_info | {
"avatar_url": "https://avatars.githubusercontent.com/u/138426806?v=4",
"events_url": "https://api.github.com/users/codingl2k1/events{/privacy}",
"followers_url": "https://api.github.com/users/codingl2k1/followers",
"following_url": "https://api.github.com/users/codingl2k1/following{/other_user}",
"gists_url... | [] | open | false | null | [] | null | [
"`HfApi` comes from the `huggingface_hub` package. You can use [this](https://huggingface.co/docs/huggingface_hub/v0.16.3/en/package_reference/utilities#huggingface_hub.configure_http_backend) utility to change the `huggingface_hub`'s `Session` proxies (see the example).\r\n\r\nWe plan to implement https://github.c... | 2023-07-14T07:22:55Z | 2023-09-11T13:50:41Z | null | NONE | null | null | null | ### Describe the bug
```python
download_config = DownloadConfig(proxies={'https': '<my proxy>'})
builder = load_dataset_builder(..., download_config=download_config)
```
But, when getting the dataset_info from HfApi, the http requests not using the proxies.
### Steps to reproduce the bug
1. Setup proxies i... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6032/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6032/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/825 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/825/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/825/comments | https://api.github.com/repos/huggingface/datasets/issues/825/events | https://github.com/huggingface/datasets/pull/825 | 739,925,960 | MDExOlB1bGxSZXF1ZXN0NTE4NTAyNjgx | 825 | Add accuracy, precision, recall and F1 metrics | {
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.githu... | [] | closed | false | null | [] | null | [] | 2020-11-10T13:50:35Z | 2020-11-11T19:23:48Z | 2020-11-11T19:23:43Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/825.diff",
"html_url": "https://github.com/huggingface/datasets/pull/825",
"merged_at": "2020-11-11T19:23:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/825.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/825... | This PR adds several single metrics, namely:
- Accuracy
- Precision
- Recall
- F1
They all uses under the hood the sklearn metrics of the same name. They allow different useful features when training a multilabel/multiclass model:
- have a macro/micro/per label/weighted/binary/per sample score
- score only t... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/825/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/825/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/570 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/570/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/570/comments | https://api.github.com/repos/huggingface/datasets/issues/570/events | https://github.com/huggingface/datasets/pull/570 | 691,846,397 | MDExOlB1bGxSZXF1ZXN0NDc4NTI3OTQz | 570 | add reuters21578 dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.githu... | [] | closed | false | null | [] | null | [] | 2020-09-03T10:25:47Z | 2020-09-03T10:46:52Z | 2020-09-03T10:46:51Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/570.diff",
"html_url": "https://github.com/huggingface/datasets/pull/570",
"merged_at": "2020-09-03T10:46:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/570.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/570... | Reopen a PR this the merge. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/570/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/570/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6464 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6464/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6464/comments | https://api.github.com/repos/huggingface/datasets/issues/6464/events | https://github.com/huggingface/datasets/pull/6464 | 2,020,860,462 | PR_kwDODunzps5g5djo | 6,464 | Add concurrent loading of shards to datasets.load_from_disk | {
"avatar_url": "https://avatars.githubusercontent.com/u/51880718?v=4",
"events_url": "https://api.github.com/users/kkoutini/events{/privacy}",
"followers_url": "https://api.github.com/users/kkoutini/followers",
"following_url": "https://api.github.com/users/kkoutini/following{/other_user}",
"gists_url": "htt... | [] | open | false | null | [] | null | [
"If we use multithreading no need to ask for `num_proc`. And maybe we the same numbers of threads as tqdm by default (IIRC it's `max(32, cpu_count() + 4)`) - you can even use `tqdm.contrib.concurrent.thread_map` directly to simplify the code\r\n\r\nAlso you can ignore the `IN_MEMORY_MAX_SIZE` config for this. This ... | 2023-12-01T13:13:53Z | 2023-12-07T12:47:02Z | null | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6464.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6464",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6464.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6464"
} | In some file systems (like luster), memory mapping arrow files takes time. This can be accelerated by performing the mmap in parallel on processes or threads.
- Threads seem to be faster than processes when gathering the list of tables from the workers (see https://github.com/huggingface/datasets/issues/2252).
- I'... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6464/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6464/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6442 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6442/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6442/comments | https://api.github.com/repos/huggingface/datasets/issues/6442/events | https://github.com/huggingface/datasets/issues/6442 | 2,006,086,907 | I_kwDODunzps53knT7 | 6,442 | Trouble loading image folder with additional features - metadata file ignored | {
"avatar_url": "https://avatars.githubusercontent.com/u/57615435?v=4",
"events_url": "https://api.github.com/users/linoytsaban/events{/privacy}",
"followers_url": "https://api.github.com/users/linoytsaban/followers",
"following_url": "https://api.github.com/users/linoytsaban/following{/other_user}",
"gists_u... | [] | closed | false | null | [] | null | [
"I reproduced too:\r\n- root: metadata file is ignored (https://huggingface.co/datasets/severo/doc-image-3)\r\n- data/ dir: metadata file is ignored (https://huggingface.co/datasets/severo/doc-image-4)\r\n- train/ dir: works (https://huggingface.co/datasets/severo/doc-image-5)"
] | 2023-11-22T11:01:35Z | 2023-11-24T17:13:03Z | 2023-11-24T17:13:03Z | NONE | null | null | null | ### Describe the bug
Loading image folder with a caption column using `load_dataset(<image_folder_path>)` doesn't load the captions.
When loading a local image folder with captions using `datasets==2.13.0`
```
from datasets import load_dataset
data = load_dataset(<image_folder_path>)
data.column_names
```
... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6442/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6442/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6292 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6292/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6292/comments | https://api.github.com/repos/huggingface/datasets/issues/6292/events | https://github.com/huggingface/datasets/issues/6292 | 1,937,050,470 | I_kwDODunzps5zdQtm | 6,292 | how to load the image of dtype float32 or float64 | {
"avatar_url": "https://avatars.githubusercontent.com/u/26437644?v=4",
"events_url": "https://api.github.com/users/wanglaofei/events{/privacy}",
"followers_url": "https://api.github.com/users/wanglaofei/followers",
"following_url": "https://api.github.com/users/wanglaofei/following{/other_user}",
"gists_url"... | [] | open | false | null | [] | null | [
"Hi! Can you provide a code that reproduces the issue?\r\n\r\nAlso, which version of `datasets` are you using? You can check this by running `python -c \"import datasets; print(datasets.__version__)\"` inside the env. We added support for \"float images\" in `datasets 2.9`."
] | 2023-10-11T07:27:16Z | 2023-10-11T13:19:11Z | null | NONE | null | null | null | _FEATURES = datasets.Features(
{
"image": datasets.Image(),
"text": datasets.Value("string"),
},
)
The datasets builder seems only support the unit8 data. How to load the float dtype data? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6292/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6292/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/658 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/658/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/658/comments | https://api.github.com/repos/huggingface/datasets/issues/658/events | https://github.com/huggingface/datasets/pull/658 | 706,206,247 | MDExOlB1bGxSZXF1ZXN0NDkwNzk4MDc0 | 658 | Fix squad metric's Features | {
"avatar_url": "https://avatars.githubusercontent.com/u/8372098?v=4",
"events_url": "https://api.github.com/users/tshrjn/events{/privacy}",
"followers_url": "https://api.github.com/users/tshrjn/followers",
"following_url": "https://api.github.com/users/tshrjn/following{/other_user}",
"gists_url": "https://ap... | [] | closed | false | null | [] | null | [
"Closing this one in favor of #670 \r\n\r\nThanks again for reporting the issue and proposing this fix !\r\nLet me know if you have other remarks"
] | 2020-09-22T09:09:52Z | 2020-09-29T15:58:30Z | 2020-09-29T15:58:30Z | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/658.diff",
"html_url": "https://github.com/huggingface/datasets/pull/658",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/658.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/658"
} | Resolves issue [657](https://github.com/huggingface/datasets/issues/657). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/658/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/658/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2271 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2271/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2271/comments | https://api.github.com/repos/huggingface/datasets/issues/2271/events | https://github.com/huggingface/datasets/issues/2271 | 869,002,141 | MDU6SXNzdWU4NjkwMDIxNDE= | 2,271 | Synchronize table metadata with features | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"See PR #2274 "
] | 2021-04-27T15:55:13Z | 2022-06-01T17:13:21Z | 2022-06-01T17:13:21Z | MEMBER | null | null | null | **Is your feature request related to a problem? Please describe.**
As pointed out in this [comment](https://github.com/huggingface/datasets/pull/2145#discussion_r621326767):
> Metadata stored in the schema is just a redundant information regarding the feature types.
It is used when calling Dataset.from_file to kno... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2271/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2271/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1978 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1978/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1978/comments | https://api.github.com/repos/huggingface/datasets/issues/1978/events | https://github.com/huggingface/datasets/pull/1978 | 820,956,806 | MDExOlB1bGxSZXF1ZXN0NTgzODI5Njgz | 1,978 | Adding ro sts dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/36982089?v=4",
"events_url": "https://api.github.com/users/lorinczb/events{/privacy}",
"followers_url": "https://api.github.com/users/lorinczb/followers",
"following_url": "https://api.github.com/users/lorinczb/following{/other_user}",
"gists_url": "htt... | [] | closed | false | null | [] | null | [
"@lhoestq thank you very much for the quick review and useful comments! \r\n\r\nI have tried to address them all, and a few comments that you left for ro_sts I have applied to the ro_sts_parallel as well (in read-me: fixed source_datasets, links to homepage, repository, leaderboard, thanks to me message, in ro_sts_... | 2021-03-03T10:08:53Z | 2021-03-05T10:00:14Z | 2021-03-05T09:33:55Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1978.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1978",
"merged_at": "2021-03-05T09:33:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1978.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Adding [RO-STS](https://github.com/dumitrescustefan/RO-STS) dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1978/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1978/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/434 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/434/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/434/comments | https://api.github.com/repos/huggingface/datasets/issues/434/events | https://github.com/huggingface/datasets/pull/434 | 665,477,638 | MDExOlB1bGxSZXF1ZXN0NDU2NTM3Njgz | 434 | Fixed check for pyarrow | {
"avatar_url": "https://avatars.githubusercontent.com/u/58701810?v=4",
"events_url": "https://api.github.com/users/nadahlberg/events{/privacy}",
"followers_url": "https://api.github.com/users/nadahlberg/followers",
"following_url": "https://api.github.com/users/nadahlberg/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [
"Great, thanks!"
] | 2020-07-25T00:16:53Z | 2020-07-25T06:36:34Z | 2020-07-25T06:36:34Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/434.diff",
"html_url": "https://github.com/huggingface/datasets/pull/434",
"merged_at": "2020-07-25T06:36:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/434.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/434... | Fix check for pyarrow in __init__.py. Previously would raise an error for pyarrow >= 1.0.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/434/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/434/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/541 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/541/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/541/comments | https://api.github.com/repos/huggingface/datasets/issues/541/events | https://github.com/huggingface/datasets/issues/541 | 688,521,224 | MDU6SXNzdWU2ODg1MjEyMjQ= | 541 | Best practices for training tokenizers with nlp | {
"avatar_url": "https://avatars.githubusercontent.com/u/11806234?v=4",
"events_url": "https://api.github.com/users/moskomule/events{/privacy}",
"followers_url": "https://api.github.com/users/moskomule/followers",
"following_url": "https://api.github.com/users/moskomule/following{/other_user}",
"gists_url": "... | [] | closed | false | null | [] | null | [
"Docs that explain how to train a tokenizer with `datasets` are available here: https://huggingface.co/docs/tokenizers/training_from_memory#using-the-datasets-library"
] | 2020-08-29T12:06:49Z | 2022-10-04T17:28:04Z | 2022-10-04T17:28:04Z | NONE | null | null | null | Hi, thank you for developing this library.
What do you think are the best practices for training tokenizers using `nlp`? In the document and examples, I could only find pre-trained tokenizers used. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/541/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/541/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/790 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/790/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/790/comments | https://api.github.com/repos/huggingface/datasets/issues/790/events | https://github.com/huggingface/datasets/issues/790 | 734,470,197 | MDU6SXNzdWU3MzQ0NzAxOTc= | 790 | Error running pip install -e ".[dev]" on MacOS 10.13.6: faiss/python does not exist | {
"avatar_url": "https://avatars.githubusercontent.com/u/59632?v=4",
"events_url": "https://api.github.com/users/shawwn/events{/privacy}",
"followers_url": "https://api.github.com/users/shawwn/followers",
"following_url": "https://api.github.com/users/shawwn/following{/other_user}",
"gists_url": "https://api.... | [] | closed | false | null | [] | null | [
"I saw that `faiss-cpu` 1.6.4.post2 was released recently to fix the installation on macos. It should work now",
"Closing this one.\r\nFeel free to re-open if you still have issues"
] | 2020-11-02T12:36:35Z | 2020-11-10T14:05:02Z | 2020-11-10T14:05:02Z | NONE | null | null | null | I was following along with https://huggingface.co/docs/datasets/share_dataset.html#adding-tests-and-metadata-to-the-dataset when I ran into this error.
```sh
git clone https://github.com/huggingface/datasets
cd datasets
virtualenv venv -p python3 --system-site-packages
source venv/bin/activate
pip install -e ".... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/790/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/790/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/579 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/579/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/579/comments | https://api.github.com/repos/huggingface/datasets/issues/579/events | https://github.com/huggingface/datasets/pull/579 | 694,947,599 | MDExOlB1bGxSZXF1ZXN0NDgxMjU1OTI5 | 579 | Doc metrics | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "http... | [] | closed | false | null | [] | null | [] | 2020-09-07T10:15:24Z | 2020-09-10T13:06:11Z | 2020-09-10T13:06:10Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/579.diff",
"html_url": "https://github.com/huggingface/datasets/pull/579",
"merged_at": "2020-09-10T13:06:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/579.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/579... | Adding documentation on metrics loading/using/sharing | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/579/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/579/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3448 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3448/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3448/comments | https://api.github.com/repos/huggingface/datasets/issues/3448/events | https://github.com/huggingface/datasets/issues/3448 | 1,083,231,080 | I_kwDODunzps5AkMto | 3,448 | JSONDecodeError with HuggingFace dataset viewer | {
"avatar_url": "https://avatars.githubusercontent.com/u/57716109?v=4",
"events_url": "https://api.github.com/users/kathrynchapman/events{/privacy}",
"followers_url": "https://api.github.com/users/kathrynchapman/followers",
"following_url": "https://api.github.com/users/kathrynchapman/following{/other_user}",
... | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | null | [] | null | [
"Hi ! I think the issue comes from the dataset_infos.json file: it has the \"flat\" field twice.\r\n\r\nCan you try deleting this file and regenerating it please ?",
"Thanks! That fixed that, but now I am getting:\r\nServer Error\r\nStatus code: 400\r\nException: KeyError\r\nMessage: 'feature'\r\n\r\n... | 2021-12-17T12:52:41Z | 2022-02-24T09:10:26Z | 2022-02-24T09:10:26Z | NONE | null | null | null | ## Dataset viewer issue for 'pubmed_neg'
**Link:** https://huggingface.co/datasets/IGESML/pubmed_neg
I am getting the error:
Status code: 400
Exception: JSONDecodeError
Message: Expecting property name enclosed in double quotes: line 61 column 2 (char 1202)
I have checked all files - I am not u... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3448/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3448/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3503 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3503/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3503/comments | https://api.github.com/repos/huggingface/datasets/issues/3503/events | https://github.com/huggingface/datasets/issues/3503 | 1,090,472,735 | I_kwDODunzps5A_0sf | 3,503 | Batched in filter throws error | {
"avatar_url": "https://avatars.githubusercontent.com/u/32967787?v=4",
"events_url": "https://api.github.com/users/gpucce/events{/privacy}",
"followers_url": "https://api.github.com/users/gpucce/followers",
"following_url": "https://api.github.com/users/gpucce/following{/other_user}",
"gists_url": "https://a... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
... | null | [] | 2021-12-29T12:01:04Z | 2022-01-04T10:24:27Z | 2022-01-04T10:24:27Z | CONTRIBUTOR | null | null | null | I hope this is really a bug, I could not find it among the open issues
## Describe the bug
using `batched=False` in DataSet.filter throws error
```python
TypeError: filter() got an unexpected keyword argument 'batched'
```
but in the docs it is lister as an argument.
## Steps to reproduce the bug
```python
... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3503/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3503/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4841 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4841/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4841/comments | https://api.github.com/repos/huggingface/datasets/issues/4841/events | https://github.com/huggingface/datasets/pull/4841 | 1,337,401,243 | PR_kwDODunzps49Gf0I | 4,841 | Update ted_talks_iwslt license to include ND | {
"avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4",
"events_url": "https://api.github.com/users/cakiki/events{/privacy}",
"followers_url": "https://api.github.com/users/cakiki/followers",
"following_url": "https://api.github.com/users/cakiki/following{/other_user}",
"gists_url": "https://ap... | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-08-12T16:14:52Z | 2022-08-14T11:15:22Z | 2022-08-14T11:00:22Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4841.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4841",
"merged_at": "2022-08-14T11:00:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4841.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | Excerpt from the paper's abstract: "Aside from its cultural and social relevance, this content, which is published under the Creative Commons BY-NC-ND license, also represents a precious language resource for the machine translation research community" | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4841/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4841/timeline | null | null | true |