Column types (condensed from the viewer statistics): strings for the `*_url`, `*_at`, `node_id`, `title`, and `body` fields (`closed_at`, `state_reason`, and `body` are nullable); `state` has 2 distinct values, `state_reason` and `author_association` have 3; `number` (1.61k–6.73k) and `id` (771M–2.18B) are int64; `locked`, `draft`, and `is_pull_request` are booleans; `assignees`, `labels` (0–4 items), and `comments` (0–30 items) are lists; `user`, `milestone`, `assignee`, `pull_request`, and `reactions` are dicts; `performed_via_github_app` and `active_lock_reason` are always null; `repository_url` has a single value.

| comments_url | timeline_url | closed_at | performed_via_github_app | state_reason | node_id | state | assignees | draft | number | user | title | events_url | milestone | labels_url | created_at | active_lock_reason | locked | assignee | pull_request | id | labels | url | comments | repository_url | author_association | body | updated_at | html_url | reactions | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/1809/comments | https://api.github.com/repos/huggingface/datasets/issues/1809/timeline | 2021-02-03T16:43:06Z | null | null | MDExOlB1bGxSZXF1ZXN0NTY1NzY4ODQz | closed | [] | false | 1,809 | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url"... | Add FreebaseQA dataset | https://api.github.com/repos/huggingface/datasets/issues/1809/events | null | https://api.github.com/repos/huggingface/datasets/issues/1809/labels{/name} | 2021-02-02T08:35:53Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1809.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1809",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1809.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1809"
} | 799,059,141 | [] | https://api.github.com/repos/huggingface/datasets/issues/1809 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | Adding FreebaseQA dataset suggested in PR #1435 with minor edits. Also closes that PR.
Requesting @lhoestq to review. | 2021-02-03T17:15:05Z | https://github.com/huggingface/datasets/pull/1809 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1809/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1808/comments | https://api.github.com/repos/huggingface/datasets/issues/1808/timeline | 2022-06-01T15:38:13Z | null | completed | MDU6SXNzdWU3OTg4NzkxODA= | closed | [] | null | 1,808 | {
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.git... | writing Datasets in a human readable format | https://api.github.com/repos/huggingface/datasets/issues/1808/events | null | https://api.github.com/repos/huggingface/datasets/issues/1808/labels{/name} | 2021-02-02T02:55:40Z | null | false | null | null | 798,879,180 | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "d876e3",
"default": true... | https://api.github.com/repos/huggingface/datasets/issues/1808 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | NONE | Hi
I see there is a save_to_disk function to save data, but this is not human readable format, is there a way I could save a Dataset object in a human readable format to a file like json? thanks @lhoestq | 2022-06-01T15:38:13Z | https://github.com/huggingface/datasets/issues/1808 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1808/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1807/comments | https://api.github.com/repos/huggingface/datasets/issues/1807/timeline | 2021-02-02T18:06:58Z | null | null | MDExOlB1bGxSZXF1ZXN0NTY1NTczNzU5 | closed | [] | false | 1,807 | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "htt... | Adding an aggregated dataset for the GEM benchmark | https://api.github.com/repos/huggingface/datasets/issues/1807/events | null | https://api.github.com/repos/huggingface/datasets/issues/1807/labels{/name} | 2021-02-02T00:39:53Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1807.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1807",
"merged_at": "2021-02-02T18:06:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1807.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 798,823,591 | [] | https://api.github.com/repos/huggingface/datasets/issues/1807 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | MEMBER | This dataset gathers modified versions of several other conditional text generation datasets which together make up the shared task for the Generation Evaluation and Metrics workshop (think GLUE for text generation)
The changes from the original datasets are detailed in the Dataset Cards on the GEM website, which ar... | 2021-02-02T22:48:41Z | https://github.com/huggingface/datasets/pull/1807 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1807/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1806/comments | https://api.github.com/repos/huggingface/datasets/issues/1806/timeline | 2021-02-01T18:46:21Z | null | null | MDExOlB1bGxSZXF1ZXN0NTY1Mzk0ODIz | closed | [] | false | 1,806 | {
"avatar_url": "https://avatars.githubusercontent.com/u/15138872?v=4",
"events_url": "https://api.github.com/users/padipadou/events{/privacy}",
"followers_url": "https://api.github.com/users/padipadou/followers",
"following_url": "https://api.github.com/users/padipadou/following{/other_user}",
"gists_url": "... | Update details to MLSUM dataset | https://api.github.com/repos/huggingface/datasets/issues/1806/events | null | https://api.github.com/repos/huggingface/datasets/issues/1806/labels{/name} | 2021-02-01T18:35:12Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1806.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1806",
"merged_at": "2021-02-01T18:46:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1806.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 798,607,869 | [] | https://api.github.com/repos/huggingface/datasets/issues/1806 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | Update details to MLSUM dataset | 2021-02-01T18:46:28Z | https://github.com/huggingface/datasets/pull/1806 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1806/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1805/comments | https://api.github.com/repos/huggingface/datasets/issues/1805/timeline | 2021-03-06T14:32:46Z | null | completed | MDU6SXNzdWU3OTg0OTgwNTM= | closed | [] | null | 1,805 | {
"avatar_url": "https://avatars.githubusercontent.com/u/6608232?v=4",
"events_url": "https://api.github.com/users/abarbosa94/events{/privacy}",
"followers_url": "https://api.github.com/users/abarbosa94/followers",
"following_url": "https://api.github.com/users/abarbosa94/following{/other_user}",
"gists_url":... | can't pickle SwigPyObject objects when calling dataset.get_nearest_examples from FAISS index | https://api.github.com/repos/huggingface/datasets/issues/1805/events | null | https://api.github.com/repos/huggingface/datasets/issues/1805/labels{/name} | 2021-02-01T16:14:17Z | null | false | null | null | 798,498,053 | [] | https://api.github.com/repos/huggingface/datasets/issues/1805 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | So, I have the following instances in my dataset
```
{'question': 'An astronomer observes that a planet rotates faster after a meteorite impact. Which is the most likely effect of
this increase in rotation?',
'answer': 'C',
'example_id': 'ARCCH_Mercury_7175875',
'options':[{'option_context': 'One effect of ... | 2021-03-06T14:32:46Z | https://github.com/huggingface/datasets/issues/1805 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1805/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1804/comments | https://api.github.com/repos/huggingface/datasets/issues/1804/timeline | 2021-02-05T15:49:25Z | null | null | MDExOlB1bGxSZXF1ZXN0NTY1MjkzMTc3 | closed | [] | false | 1,804 | {
"avatar_url": "https://avatars.githubusercontent.com/u/36051308?v=4",
"events_url": "https://api.github.com/users/calpt/events{/privacy}",
"followers_url": "https://api.github.com/users/calpt/followers",
"following_url": "https://api.github.com/users/calpt/following{/other_user}",
"gists_url": "https://api.... | Add SICK dataset | https://api.github.com/repos/huggingface/datasets/issues/1804/events | null | https://api.github.com/repos/huggingface/datasets/issues/1804/labels{/name} | 2021-02-01T15:57:44Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1804.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1804",
"merged_at": "2021-02-05T15:49:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1804.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 798,483,881 | [] | https://api.github.com/repos/huggingface/datasets/issues/1804 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | Adds the SICK dataset (http://marcobaroni.org/composes/sick.html).
Closes #1772.
Edit: also closes #1632, which is the original issue requesting the dataset. The newer one is a duplicate. | 2021-02-05T17:46:28Z | https://github.com/huggingface/datasets/pull/1804 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1804/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1803/comments | https://api.github.com/repos/huggingface/datasets/issues/1803/timeline | 2021-08-04T18:10:42Z | null | completed | MDU6SXNzdWU3OTgyNDM5MDQ= | closed | [] | null | 1,803 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | Querying examples from big datasets is slower than small datasets | https://api.github.com/repos/huggingface/datasets/issues/1803/events | null | https://api.github.com/repos/huggingface/datasets/issues/1803/labels{/name} | 2021-02-01T11:08:23Z | null | false | null | null | 798,243,904 | [] | https://api.github.com/repos/huggingface/datasets/issues/1803 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | MEMBER | After some experiments with bookcorpus I noticed that querying examples from big datasets is slower than small datasets.
For example
```python
from datasets import load_dataset
b1 = load_dataset("bookcorpus", split="train[:1%]")
b50 = load_dataset("bookcorpus", split="train[:50%]")
b100 = load_dataset("bookcorp... | 2021-08-04T18:11:01Z | https://github.com/huggingface/datasets/issues/1803 | {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1803/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1802/comments | https://api.github.com/repos/huggingface/datasets/issues/1802/timeline | 2021-02-03T10:06:30Z | null | null | MDExOlB1bGxSZXF1ZXN0NTY0ODE4NDIy | closed | [] | false | 1,802 | {
"avatar_url": "https://avatars.githubusercontent.com/u/53136577?v=4",
"events_url": "https://api.github.com/users/thevasudevgupta/events{/privacy}",
"followers_url": "https://api.github.com/users/thevasudevgupta/followers",
"following_url": "https://api.github.com/users/thevasudevgupta/following{/other_user}"... | add github of contributors | https://api.github.com/repos/huggingface/datasets/issues/1802/events | null | https://api.github.com/repos/huggingface/datasets/issues/1802/labels{/name} | 2021-02-01T03:49:19Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1802.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1802",
"merged_at": "2021-02-03T10:06:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1802.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 797,924,468 | [] | https://api.github.com/repos/huggingface/datasets/issues/1802 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | This PR will add contributors GitHub id at the end of every dataset cards. | 2021-02-03T10:09:52Z | https://github.com/huggingface/datasets/pull/1802 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1802/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1801/comments | https://api.github.com/repos/huggingface/datasets/issues/1801/timeline | 2021-02-02T13:17:28Z | null | null | MDExOlB1bGxSZXF1ZXN0NTY0NzMwODYw | closed | [] | false | 1,801 | {
"avatar_url": "https://avatars.githubusercontent.com/u/11708999?v=4",
"events_url": "https://api.github.com/users/mounicam/events{/privacy}",
"followers_url": "https://api.github.com/users/mounicam/followers",
"following_url": "https://api.github.com/users/mounicam/following{/other_user}",
"gists_url": "htt... | [GEM] Updated the source link of the data to update correct tokenized version. | https://api.github.com/repos/huggingface/datasets/issues/1801/events | null | https://api.github.com/repos/huggingface/datasets/issues/1801/labels{/name} | 2021-01-31T21:17:19Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1801.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1801",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1801.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1801"
} | 797,814,275 | [] | https://api.github.com/repos/huggingface/datasets/issues/1801 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | 2021-02-02T13:17:38Z | https://github.com/huggingface/datasets/pull/1801 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1801/reactions"
} | true | |
https://api.github.com/repos/huggingface/datasets/issues/1800/comments | https://api.github.com/repos/huggingface/datasets/issues/1800/timeline | 2021-02-02T22:49:26Z | null | null | MDExOlB1bGxSZXF1ZXN0NTY0NzE5MjA3 | closed | [] | false | 1,800 | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url"... | Add DuoRC Dataset | https://api.github.com/repos/huggingface/datasets/issues/1800/events | null | https://api.github.com/repos/huggingface/datasets/issues/1800/labels{/name} | 2021-01-31T20:01:59Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1800.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1800",
"merged_at": "2021-02-02T22:49:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1800.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 797,798,689 | [] | https://api.github.com/repos/huggingface/datasets/issues/1800 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | Hi,
DuoRC SelfRC is one type of the [DuoRC Dataset](https://duorc.github.io/). DuoRC SelfRC is a crowdsourced Abstractive/Extractive Question-Answering dataset based on Wikipedia movie plots. It contains examples that may have answers in the movie plot, synthesized answers which are not present in the movie plot, or... | 2021-02-03T05:01:45Z | https://github.com/huggingface/datasets/pull/1800 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1800/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1799/comments | https://api.github.com/repos/huggingface/datasets/issues/1799/timeline | 2021-02-09T15:49:58Z | null | null | MDExOlB1bGxSZXF1ZXN0NTY0NzEyMzUy | closed | [] | false | 1,799 | {
"avatar_url": "https://avatars.githubusercontent.com/u/22454783?v=4",
"events_url": "https://api.github.com/users/gmihaila/events{/privacy}",
"followers_url": "https://api.github.com/users/gmihaila/followers",
"following_url": "https://api.github.com/users/gmihaila/following{/other_user}",
"gists_url": "htt... | Update: SWDA - Fixed code to use all metadata features. Added comments and cleaned c… | https://api.github.com/repos/huggingface/datasets/issues/1799/events | null | https://api.github.com/repos/huggingface/datasets/issues/1799/labels{/name} | 2021-01-31T19:18:55Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1799.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1799",
"merged_at": "2021-02-09T15:49:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1799.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 797,789,439 | [] | https://api.github.com/repos/huggingface/datasets/issues/1799 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | This is a dataset I currently use my research and I realized some features are not being returned.
Previous code was not using all available metadata and was kind of messy
I fixed code to use all metadata and made some modification to be more efficient and better formatted.
Please let me know if I need to ma... | 2021-02-09T22:06:13Z | https://github.com/huggingface/datasets/pull/1799 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1799/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1798/comments | https://api.github.com/repos/huggingface/datasets/issues/1798/timeline | 2021-02-03T10:35:54Z | null | null | MDExOlB1bGxSZXF1ZXN0NTY0Njk2NjE1 | closed | [] | false | 1,798 | {
"avatar_url": "https://avatars.githubusercontent.com/u/643918?v=4",
"events_url": "https://api.github.com/users/mapmeld/events{/privacy}",
"followers_url": "https://api.github.com/users/mapmeld/followers",
"following_url": "https://api.github.com/users/mapmeld/following{/other_user}",
"gists_url": "https://... | Add Arabic sarcasm dataset | https://api.github.com/repos/huggingface/datasets/issues/1798/events | null | https://api.github.com/repos/huggingface/datasets/issues/1798/labels{/name} | 2021-01-31T17:38:55Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1798.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1798",
"merged_at": "2021-02-03T10:35:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1798.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 797,766,818 | [] | https://api.github.com/repos/huggingface/datasets/issues/1798 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | This MIT license dataset: https://github.com/iabufarha/ArSarcasm
Via https://sites.google.com/view/ar-sarcasm-sentiment-detection/ | 2021-02-10T20:39:13Z | https://github.com/huggingface/datasets/pull/1798 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1798/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1797/comments | https://api.github.com/repos/huggingface/datasets/issues/1797/timeline | 2021-08-04T18:09:37Z | null | completed | MDU6SXNzdWU3OTczNTc5MDE= | closed | [] | null | 1,797 | {
"avatar_url": "https://avatars.githubusercontent.com/u/46243662?v=4",
"events_url": "https://api.github.com/users/smile0925/events{/privacy}",
"followers_url": "https://api.github.com/users/smile0925/followers",
"following_url": "https://api.github.com/users/smile0925/following{/other_user}",
"gists_url": "... | Connection error | https://api.github.com/repos/huggingface/datasets/issues/1797/events | null | https://api.github.com/repos/huggingface/datasets/issues/1797/labels{/name} | 2021-01-30T07:32:45Z | null | false | null | null | 797,357,901 | [] | https://api.github.com/repos/huggingface/datasets/issues/1797 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | NONE | Hi
I am hitting to the error, help me and thanks.
`train_data = datasets.load_dataset("xsum", split="train")`
`ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.0.2/datasets/xsum/xsum.py` | 2021-08-04T18:09:37Z | https://github.com/huggingface/datasets/issues/1797 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1797/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1796/comments | https://api.github.com/repos/huggingface/datasets/issues/1796/timeline | null | null | null | MDU6SXNzdWU3OTczMjk5MDU= | open | [] | null | 1,796 | {
"avatar_url": "https://avatars.githubusercontent.com/u/20911334?v=4",
"events_url": "https://api.github.com/users/ayubSubhaniya/events{/privacy}",
"followers_url": "https://api.github.com/users/ayubSubhaniya/followers",
"following_url": "https://api.github.com/users/ayubSubhaniya/following{/other_user}",
"g... | Filter on dataset too much slowww | https://api.github.com/repos/huggingface/datasets/issues/1796/events | null | https://api.github.com/repos/huggingface/datasets/issues/1796/labels{/name} | 2021-01-30T04:09:19Z | null | false | null | null | 797,329,905 | [] | https://api.github.com/repos/huggingface/datasets/issues/1796 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | NONE | I have a dataset with 50M rows.
For pre-processing, I need to tokenize this and filter rows with the large sequence.
My tokenization took roughly 12mins. I used `map()` with batch size 1024 and multi-process with 96 processes.
When I applied the `filter()` function it is taking too much time. I need to filter se... | 2024-01-19T13:25:21Z | https://github.com/huggingface/datasets/issues/1796 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1796/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1795/comments | https://api.github.com/repos/huggingface/datasets/issues/1795/timeline | 2021-02-05T09:54:06Z | null | null | MDExOlB1bGxSZXF1ZXN0NTY0MDk5OTUz | closed | [] | false | 1,795 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | Custom formatting for lazy map + arrow data extraction refactor | https://api.github.com/repos/huggingface/datasets/issues/1795/events | null | https://api.github.com/repos/huggingface/datasets/issues/1795/labels{/name} | 2021-01-29T16:35:53Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1795.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1795",
"merged_at": "2021-02-05T09:54:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1795.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 797,021,730 | [] | https://api.github.com/repos/huggingface/datasets/issues/1795 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | MEMBER | Hi !
This PR refactors the way data are extracted from pyarrow tables to extend it to the use of custom formatting functions.
While the internal storage of the dataset is always the Apache Arrow format, by setting a specific format on a dataset, you can cast the output of `datasets.Dataset.__getitem__` in NumPy/p... | 2022-07-30T09:50:11Z | https://github.com/huggingface/datasets/pull/1795 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 3,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1795/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1794/comments | https://api.github.com/repos/huggingface/datasets/issues/1794/timeline | 2021-01-29T16:31:38Z | null | null | MDExOlB1bGxSZXF1ZXN0NTY0MDYyMTkw | closed | [] | false | 1,794 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | Move silicone directory | https://api.github.com/repos/huggingface/datasets/issues/1794/events | null | https://api.github.com/repos/huggingface/datasets/issues/1794/labels{/name} | 2021-01-29T15:33:15Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1794.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1794",
"merged_at": "2021-01-29T16:31:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1794.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 796,975,588 | [] | https://api.github.com/repos/huggingface/datasets/issues/1794 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | MEMBER | The dataset was added in #1761 but not in the right directory. I'm moving it to /datasets | 2021-01-29T16:31:39Z | https://github.com/huggingface/datasets/pull/1794 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1794/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1793/comments | https://api.github.com/repos/huggingface/datasets/issues/1793/timeline | 2021-01-29T16:53:32Z | null | null | MDExOlB1bGxSZXF1ZXN0NTY0MDMzMjk0 | closed | [] | false | 1,793 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | Minor fix the docstring of load_metric | https://api.github.com/repos/huggingface/datasets/issues/1793/events | null | https://api.github.com/repos/huggingface/datasets/issues/1793/labels{/name} | 2021-01-29T14:47:35Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1793.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1793",
"merged_at": "2021-01-29T16:53:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1793.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 796,940,299 | [] | https://api.github.com/repos/huggingface/datasets/issues/1793 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | MEMBER | Minor fix:
- duplicated attributes
- format fix | 2021-01-29T16:53:32Z | https://github.com/huggingface/datasets/pull/1793 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1793/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1792/comments | https://api.github.com/repos/huggingface/datasets/issues/1792/timeline | 2021-02-12T14:13:28Z | null | null | MDExOlB1bGxSZXF1ZXN0NTY0MDI4NTk1 | closed | [] | false | 1,792 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | Allow loading dataset in-memory | https://api.github.com/repos/huggingface/datasets/issues/1792/events | null | https://api.github.com/repos/huggingface/datasets/issues/1792/labels{/name} | 2021-01-29T14:39:50Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1792.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1792",
"merged_at": "2021-02-12T14:13:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1792.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 796,934,627 | [] | https://api.github.com/repos/huggingface/datasets/issues/1792 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | MEMBER | Allow loading datasets either from:
- memory-mapped file (current implementation)
- from file descriptor, copying data to physical memory
Close #708 | 2021-02-12T14:13:28Z | https://github.com/huggingface/datasets/pull/1792 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1792/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1791/comments | https://api.github.com/repos/huggingface/datasets/issues/1791/timeline | 2021-01-29T17:05:07Z | null | null | MDExOlB1bGxSZXF1ZXN0NTY0MDE5OTk3 | closed | [] | false | 1,791 | {
"avatar_url": "https://avatars.githubusercontent.com/u/7549587?v=4",
"events_url": "https://api.github.com/users/TezRomacH/events{/privacy}",
"followers_url": "https://api.github.com/users/TezRomacH/followers",
"following_url": "https://api.github.com/users/TezRomacH/following{/other_user}",
"gists_url": "h... | Small fix with corrected logging of train vectors | https://api.github.com/repos/huggingface/datasets/issues/1791/events | null | https://api.github.com/repos/huggingface/datasets/issues/1791/labels{/name} | 2021-01-29T14:26:06Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1791.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1791",
"merged_at": "2021-01-29T17:05:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1791.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 796,924,519 | [] | https://api.github.com/repos/huggingface/datasets/issues/1791 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | Now you can set `train_size` to the whole dataset size via `train_size = -1`, and the log no longer writes `Training the index with the first -1 vectors` but (for example) `Training the index with the first 16123 vectors`, possibly more than the dataset length. The logging will be correct | 2021-01-29T18:51:10Z | https://github.com/huggingface/datasets/pull/1791 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1791/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1790/comments | https://api.github.com/repos/huggingface/datasets/issues/1790/timeline | null | null | null | MDU6SXNzdWU3OTY2NzgxNTc= | open | [] | null | 1,790 | {
"avatar_url": "https://avatars.githubusercontent.com/u/6331508?v=4",
"events_url": "https://api.github.com/users/miyamonz/events{/privacy}",
"followers_url": "https://api.github.com/users/miyamonz/followers",
"following_url": "https://api.github.com/users/miyamonz/following{/other_user}",
"gists_url": "http... | ModuleNotFoundError: No module named 'apache_beam', when specific languages. | https://api.github.com/repos/huggingface/datasets/issues/1790/events | null | https://api.github.com/repos/huggingface/datasets/issues/1790/labels{/name} | 2021-01-29T08:17:24Z | null | false | null | null | 796,678,157 | [] | https://api.github.com/repos/huggingface/datasets/issues/1790 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | ```py
import datasets
wiki = datasets.load_dataset('wikipedia', '20200501.ja', cache_dir='./datasets')
```
then `ModuleNotFoundError: No module named 'apache_beam'` happened.
The error doesn't appear when it's '20200501.en'.
I don't know Apache Beam, but according to #498 it isn't necessary when it's saved to lo... | 2021-03-25T12:10:51Z | https://github.com/huggingface/datasets/issues/1790 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1790/reactions"
} | false |
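The missing-dependency failure in issue 1790 can be probed for up front. This is a hedged sketch of the idea in plain Python; the helper name and message are illustrative, not the library's actual check:

```python
import importlib.util

def require_module(name: str, hint: str) -> None:
    """Raise a helpful ModuleNotFoundError if an optional dependency is absent."""
    if importlib.util.find_spec(name) is None:
        raise ModuleNotFoundError(f"No module named {name!r}. {hint}")

# Guard a Beam-based dataset config before trying to load it.
beam_error = ""
try:
    require_module("apache_beam", "Install it with: pip install apache-beam")
except ModuleNotFoundError as err:
    beam_error = str(err)
```

If `apache_beam` is installed, `beam_error` stays empty; otherwise it carries the install hint instead of a bare import crash.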
https://api.github.com/repos/huggingface/datasets/issues/1789/comments | https://api.github.com/repos/huggingface/datasets/issues/1789/timeline | 2021-01-28T18:13:56Z | null | null | MDExOlB1bGxSZXF1ZXN0NTYzNDQyMTc2 | closed | [] | false | 1,789 | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "htt... | [BUG FIX] typo in the import path for metrics | https://api.github.com/repos/huggingface/datasets/issues/1789/events | null | https://api.github.com/repos/huggingface/datasets/issues/1789/labels{/name} | 2021-01-28T18:01:37Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1789.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1789",
"merged_at": "2021-01-28T18:13:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1789.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 796,229,721 | [] | https://api.github.com/repos/huggingface/datasets/issues/1789 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | MEMBER | This tiny PR fixes a typo introduced in https://github.com/huggingface/datasets/pull/1726 which prevents loading new metrics | 2021-01-28T18:13:56Z | https://github.com/huggingface/datasets/pull/1789 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1789/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1788/comments | https://api.github.com/repos/huggingface/datasets/issues/1788/timeline | 2021-01-28T18:46:13Z | null | null | MDExOlB1bGxSZXF1ZXN0NTYyODc1NzA2 | closed | [] | true | 1,788 | {
"avatar_url": "https://avatars.githubusercontent.com/u/2062185?v=4",
"events_url": "https://api.github.com/users/songfeng/events{/privacy}",
"followers_url": "https://api.github.com/users/songfeng/followers",
"following_url": "https://api.github.com/users/songfeng/following{/other_user}",
"gists_url": "http... | Doc2dial rc | https://api.github.com/repos/huggingface/datasets/issues/1788/events | null | https://api.github.com/repos/huggingface/datasets/issues/1788/labels{/name} | 2021-01-27T23:51:00Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1788.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1788",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1788.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1788"
} | 795,544,422 | [] | https://api.github.com/repos/huggingface/datasets/issues/1788 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | 2021-01-28T18:46:13Z | https://github.com/huggingface/datasets/pull/1788 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1788/reactions"
} | true | |
https://api.github.com/repos/huggingface/datasets/issues/1787/comments | https://api.github.com/repos/huggingface/datasets/issues/1787/timeline | 2021-01-28T13:56:29Z | null | null | MDExOlB1bGxSZXF1ZXN0NTYyODI1NTI3 | closed | [] | false | 1,787 | {
"avatar_url": "https://avatars.githubusercontent.com/u/10104354?v=4",
"events_url": "https://api.github.com/users/yuchenlin/events{/privacy}",
"followers_url": "https://api.github.com/users/yuchenlin/followers",
"following_url": "https://api.github.com/users/yuchenlin/following{/other_user}",
"gists_url": "... | Update the CommonGen citation information | https://api.github.com/repos/huggingface/datasets/issues/1787/events | null | https://api.github.com/repos/huggingface/datasets/issues/1787/labels{/name} | 2021-01-27T22:12:47Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1787.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1787",
"merged_at": "2021-01-28T13:56:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1787.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 795,485,842 | [] | https://api.github.com/repos/huggingface/datasets/issues/1787 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | 2021-01-28T13:56:29Z | https://github.com/huggingface/datasets/pull/1787 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1787/reactions"
} | true | |
https://api.github.com/repos/huggingface/datasets/issues/1786/comments | https://api.github.com/repos/huggingface/datasets/issues/1786/timeline | 2021-04-23T15:17:39Z | null | completed | MDU6SXNzdWU3OTU0NjI4MTY= | closed | [] | null | 1,786 | {
"avatar_url": "https://avatars.githubusercontent.com/u/78090287?v=4",
"events_url": "https://api.github.com/users/kkhan188/events{/privacy}",
"followers_url": "https://api.github.com/users/kkhan188/followers",
"following_url": "https://api.github.com/users/kkhan188/following{/other_user}",
"gists_url": "htt... | How to use split dataset | https://api.github.com/repos/huggingface/datasets/issues/1786/events | null | https://api.github.com/repos/huggingface/datasets/issues/1786/labels{/name} | 2021-01-27T21:37:47Z | null | false | null | null | 795,462,816 | [
{
"color": "d876e3",
"default": true,
"description": "Further information is requested",
"id": 1935892912,
"name": "question",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question"
}
] | https://api.github.com/repos/huggingface/datasets/issues/1786 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | NONE | 
Hey,
I want to split the lambada dataset into corpus, test, train and valid txt files (like penn treebank) but I am not able to achieve this. What I am doing is executing the lambada.py file in my pro... | 2021-04-23T15:17:39Z | https://github.com/huggingface/datasets/issues/1786 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1786/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1785/comments | https://api.github.com/repos/huggingface/datasets/issues/1785/timeline | 2021-01-30T01:07:56Z | null | completed | MDU6SXNzdWU3OTU0NTg4NTY= | closed | [] | null | 1,785 | {
"avatar_url": "https://avatars.githubusercontent.com/u/4341867?v=4",
"events_url": "https://api.github.com/users/olinguyen/events{/privacy}",
"followers_url": "https://api.github.com/users/olinguyen/followers",
"following_url": "https://api.github.com/users/olinguyen/following{/other_user}",
"gists_url": "h... | Not enough disk space (Needed: Unknown size) when caching on a cluster | https://api.github.com/repos/huggingface/datasets/issues/1785/events | null | https://api.github.com/repos/huggingface/datasets/issues/1785/labels{/name} | 2021-01-27T21:30:59Z | null | false | null | null | 795,458,856 | [] | https://api.github.com/repos/huggingface/datasets/issues/1785 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | I'm running some experiments where I'm caching datasets on a cluster and accessing them through multiple compute nodes. However, I get an error when loading the cached dataset from the shared disk.
The exact error thrown:
```bash
>>> load_dataset(dataset, cache_dir="/path/to/cluster/shared/path")
OSError: Not eno... | 2022-11-07T16:33:03Z | https://github.com/huggingface/datasets/issues/1785 | {
"+1": 5,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 5,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1785/reactions"
} | false |
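The free-space check behind the error in issue 1785 can be illustrated with the standard library. This is a sketch of the general idea, assuming a plain local filesystem; it is not the library's exact logic, and on shared cluster filesystems the reported free space may itself be misleading:

```python
import shutil

def has_enough_disk_space(path: str, needed_bytes: int) -> bool:
    """Return True if the filesystem holding `path` reports enough free space."""
    free = shutil.disk_usage(path).free
    return free >= needed_bytes

# A requirement of zero bytes always passes, whatever the filesystem reports.
ok = has_enough_disk_space(".", 0)
```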
https://api.github.com/repos/huggingface/datasets/issues/1784/comments | https://api.github.com/repos/huggingface/datasets/issues/1784/timeline | 2021-01-31T08:47:18Z | null | completed | MDU6SXNzdWU3OTQ2NTkxNzQ= | closed | [] | null | 1,784 | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url"... | JSONDecodeError on JSON with multiple lines | https://api.github.com/repos/huggingface/datasets/issues/1784/events | null | https://api.github.com/repos/huggingface/datasets/issues/1784/labels{/name} | 2021-01-27T00:19:22Z | null | false | null | null | 794,659,174 | [] | https://api.github.com/repos/huggingface/datasets/issues/1784 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | Hello :),
I have been trying to load data using a JSON file. Based on the [docs](https://huggingface.co/docs/datasets/loading_datasets.html#json-files), the following format is supported:
```json
{"key1":11, "key2":12, "key3":13}
{"key1":21, "key2":22, "key3":23}
```
But, when I try loading a dataset with th... | 2021-01-31T08:47:18Z | https://github.com/huggingface/datasets/issues/1784 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1784/reactions"
} | false |
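The format quoted in issue 1784 is JSON Lines: each line is a standalone JSON object, so the file as a whole is deliberately not one valid JSON document. A stdlib sketch of that distinction:

```python
import json

jsonl_text = '{"key1": 11, "key2": 12}\n{"key1": 21, "key2": 22}\n'

# json.loads on the whole text fails: two top-level values is not valid JSON.
try:
    json.loads(jsonl_text)
    whole_parse_ok = True
except json.JSONDecodeError:
    whole_parse_ok = False

# Parsing line by line works: each line is an independent JSON object.
records = [json.loads(line) for line in jsonl_text.splitlines() if line.strip()]
```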
https://api.github.com/repos/huggingface/datasets/issues/1783/comments | https://api.github.com/repos/huggingface/datasets/issues/1783/timeline | 2021-02-01T13:58:44Z | null | completed | MDU6SXNzdWU3OTQ1NDQ0OTU= | closed | [] | null | 1,783 | {
"avatar_url": "https://avatars.githubusercontent.com/u/30875246?v=4",
"events_url": "https://api.github.com/users/ChewKokWah/events{/privacy}",
"followers_url": "https://api.github.com/users/ChewKokWah/followers",
"following_url": "https://api.github.com/users/ChewKokWah/following{/other_user}",
"gists_url"... | Dataset Examples Explorer | https://api.github.com/repos/huggingface/datasets/issues/1783/events | null | https://api.github.com/repos/huggingface/datasets/issues/1783/labels{/name} | 2021-01-26T20:39:02Z | null | false | null | null | 794,544,495 | [] | https://api.github.com/repos/huggingface/datasets/issues/1783 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | NONE | In the older version of Datasets, there was a useful Dataset Explorer that allowed users to visualize the examples (training, test and validation) of a particular dataset; it is no longer there in the current version.
Hope HuggingFace can re-enable the feature that at least allows viewing of the first 20 examples of a ... | 2021-02-01T13:58:44Z | https://github.com/huggingface/datasets/issues/1783 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1783/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1782/comments | https://api.github.com/repos/huggingface/datasets/issues/1782/timeline | 2021-01-26T13:50:49Z | null | null | MDExOlB1bGxSZXF1ZXN0NTYxNzI5OTc3 | closed | [] | false | 1,782 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | Update pyarrow import warning | https://api.github.com/repos/huggingface/datasets/issues/1782/events | null | https://api.github.com/repos/huggingface/datasets/issues/1782/labels{/name} | 2021-01-26T11:47:11Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1782.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1782",
"merged_at": "2021-01-26T13:50:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1782.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 794,167,920 | [] | https://api.github.com/repos/huggingface/datasets/issues/1782 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | MEMBER | Update the minimum version to >=0.17.1 in the pyarrow version check and update the message.
I also moved the check to the top of `__init__.py` | 2021-01-26T13:50:50Z | https://github.com/huggingface/datasets/pull/1782 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1782/reactions"
} | true |
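The version gate described in PR 1782 boils down to a tuple comparison. A minimal sketch, assuming plain `major.minor.patch` version strings; the library's actual implementation raises with a message rather than returning a boolean:

```python
def version_tuple(version: str) -> tuple:
    """Parse 'major.minor.patch' into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split(".")[:3])

def pyarrow_is_recent_enough(installed: str, minimum: str = "0.17.1") -> bool:
    """Mirror the >=0.17.1 check: compare versions component by component."""
    return version_tuple(installed) >= version_tuple(minimum)

ok_new = pyarrow_is_recent_enough("1.0.0")
ok_old = pyarrow_is_recent_enough("0.16.0")
```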
https://api.github.com/repos/huggingface/datasets/issues/1781/comments | https://api.github.com/repos/huggingface/datasets/issues/1781/timeline | 2022-10-05T12:37:06Z | null | completed | MDU6SXNzdWU3OTM5MTQ1NTY= | closed | [] | null | 1,781 | {
"avatar_url": "https://avatars.githubusercontent.com/u/45964869?v=4",
"events_url": "https://api.github.com/users/PalaashAgrawal/events{/privacy}",
"followers_url": "https://api.github.com/users/PalaashAgrawal/followers",
"following_url": "https://api.github.com/users/PalaashAgrawal/following{/other_user}",
... | AttributeError: module 'pyarrow' has no attribute 'PyExtensionType' during import | https://api.github.com/repos/huggingface/datasets/issues/1781/events | null | https://api.github.com/repos/huggingface/datasets/issues/1781/labels{/name} | 2021-01-26T04:18:35Z | null | false | null | null | 793,914,556 | [] | https://api.github.com/repos/huggingface/datasets/issues/1781 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | NONE | I'm using Colab, and suddenly this morning this error appeared. Have a look below!

| 2022-10-05T12:37:06Z | https://github.com/huggingface/datasets/issues/1781 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1781/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1780/comments | https://api.github.com/repos/huggingface/datasets/issues/1780/timeline | 2021-01-28T10:19:45Z | null | null | MDExOlB1bGxSZXF1ZXN0NTYxNDkxNTgy | closed | [] | false | 1,780 | {
"avatar_url": "https://avatars.githubusercontent.com/u/3091916?v=4",
"events_url": "https://api.github.com/users/dwadden/events{/privacy}",
"followers_url": "https://api.github.com/users/dwadden/followers",
"following_url": "https://api.github.com/users/dwadden/following{/other_user}",
"gists_url": "https:/... | Update SciFact URL | https://api.github.com/repos/huggingface/datasets/issues/1780/events | null | https://api.github.com/repos/huggingface/datasets/issues/1780/labels{/name} | 2021-01-26T02:49:06Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1780.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1780",
"merged_at": "2021-01-28T10:19:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1780.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 793,882,132 | [] | https://api.github.com/repos/huggingface/datasets/issues/1780 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | Hi,
I'm following up this [issue](https://github.com/huggingface/datasets/issues/1717). I'm the SciFact dataset creator, and I'm trying to update the SciFact data url in your repo. Thanks again for adding the dataset!
Basically, I'd just like to change the `_URL` to `"https://scifact.s3-us-west-2.amazonaws.com/re... | 2021-01-28T18:48:00Z | https://github.com/huggingface/datasets/pull/1780 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1780/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1779/comments | https://api.github.com/repos/huggingface/datasets/issues/1779/timeline | 2021-01-26T10:20:19Z | null | null | MDExOlB1bGxSZXF1ZXN0NTYxMjEwNjI5 | closed | [] | false | 1,779 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | Ignore definition line number of functions for caching | https://api.github.com/repos/huggingface/datasets/issues/1779/events | null | https://api.github.com/repos/huggingface/datasets/issues/1779/labels{/name} | 2021-01-25T16:42:29Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1779.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1779",
"merged_at": "2021-01-26T10:20:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1779.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 793,539,703 | [] | https://api.github.com/repos/huggingface/datasets/issues/1779 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | MEMBER | As noticed in #1718, when a function used for processing with `map` is moved inside its Python file, the change of line number causes the caching mechanism to consider it a different function. Therefore in this case, it recomputes everything.
This is because we were not ignoring the line number definition f... | 2021-01-26T10:20:20Z | https://github.com/huggingface/datasets/pull/1779 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1779/reactions"
} | true |
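The fix in PR 1779 hinges on hashing a function without its definition line. A toy sketch of the idea, assuming bytecode plus constants are enough to identify the body; the real hashing in `datasets` is more thorough:

```python
import hashlib

def code_fingerprint(fn) -> str:
    """Hash a function's bytecode and constants, ignoring its definition line.

    Moving `fn` around in its file changes co_firstlineno but not this hash,
    which is the property the caching fix relies on.
    """
    code = fn.__code__
    payload = code.co_code + repr(code.co_consts).encode()
    return hashlib.sha256(payload).hexdigest()

def f(x):
    return x + 1

def g(x):  # same body as f, defined a few lines lower
    return x + 1

same = code_fingerprint(f) == code_fingerprint(g)
```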
https://api.github.com/repos/huggingface/datasets/issues/1778/comments | https://api.github.com/repos/huggingface/datasets/issues/1778/timeline | 2021-01-29T09:34:51Z | null | null | MDExOlB1bGxSZXF1ZXN0NTYxMTU2Mzk1 | closed | [] | false | 1,778 | {
"avatar_url": "https://avatars.githubusercontent.com/u/18527321?v=4",
"events_url": "https://api.github.com/users/rsanjaykamath/events{/privacy}",
"followers_url": "https://api.github.com/users/rsanjaykamath/followers",
"following_url": "https://api.github.com/users/rsanjaykamath/following{/other_user}",
"g... | Narrative QA Manual | https://api.github.com/repos/huggingface/datasets/issues/1778/events | null | https://api.github.com/repos/huggingface/datasets/issues/1778/labels{/name} | 2021-01-25T15:22:31Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1778.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1778",
"merged_at": "2021-01-29T09:34:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1778.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 793,474,507 | [] | https://api.github.com/repos/huggingface/datasets/issues/1778 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | Submitting the manual version of Narrative QA script which requires a manual download from the original repository | 2021-01-29T09:35:14Z | https://github.com/huggingface/datasets/pull/1778 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1778/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1777/comments | https://api.github.com/repos/huggingface/datasets/issues/1777/timeline | 2021-01-25T11:12:53Z | null | completed | MDU6SXNzdWU3OTMyNzM3NzA= | closed | [] | null | 1,777 | {
"avatar_url": "https://avatars.githubusercontent.com/u/76427077?v=4",
"events_url": "https://api.github.com/users/nlp-student/events{/privacy}",
"followers_url": "https://api.github.com/users/nlp-student/followers",
"following_url": "https://api.github.com/users/nlp-student/following{/other_user}",
"gists_u... | GPT2 MNLI training using run_glue.py | https://api.github.com/repos/huggingface/datasets/issues/1777/events | null | https://api.github.com/repos/huggingface/datasets/issues/1777/labels{/name} | 2021-01-25T10:53:52Z | null | false | null | null | 793,273,770 | [] | https://api.github.com/repos/huggingface/datasets/issues/1777 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | NONE | Edit: I'm closing this because I actually meant to post this in `transformers`, not `datasets`
Running this on Google Colab,
```
!python run_glue.py \
--model_name_or_path gpt2 \
--task_name mnli \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_gpu_train_batch_size 10 \
--gradient_accu... | 2021-01-25T11:12:53Z | https://github.com/huggingface/datasets/issues/1777 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1777/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1776/comments | https://api.github.com/repos/huggingface/datasets/issues/1776/timeline | 2021-05-20T04:15:58Z | null | completed | MDU6SXNzdWU3OTI3NTUyNDk= | closed | [] | null | 1,776 | {
"avatar_url": "https://avatars.githubusercontent.com/u/14048129?v=4",
"events_url": "https://api.github.com/users/shuaihuaiyi/events{/privacy}",
"followers_url": "https://api.github.com/users/shuaihuaiyi/followers",
"following_url": "https://api.github.com/users/shuaihuaiyi/following{/other_user}",
"gists_u... | [Question & Bug Report] Can we preprocess a dataset on the fly? | https://api.github.com/repos/huggingface/datasets/issues/1776/events | null | https://api.github.com/repos/huggingface/datasets/issues/1776/labels{/name} | 2021-01-24T09:28:24Z | null | false | null | null | 792,755,249 | [] | https://api.github.com/repos/huggingface/datasets/issues/1776 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | NONE | I know we can use `Datasets.map` to preprocess a dataset, but I'm using it with a very large corpus which generates a huge cache file (several TB of cache from a 400 GB text file). I have no disk large enough to save it. Can we preprocess a dataset on the fly without generating a cache?
BTW, I tried raising `writer_batch_si... | 2021-05-20T04:15:58Z | https://github.com/huggingface/datasets/issues/1776 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1776/reactions"
} | false |
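The on-the-fly preprocessing asked for in issue 1776 can be sketched with a plain generator, so only one example is ever materialized and nothing is written to a cache file. This is the general pattern, not the `datasets` API itself:

```python
def lazy_map(examples, fn):
    """Apply fn lazily: one example lives in memory at a time, no cache file."""
    for example in examples:
        yield fn(example)

# A (hypothetical) large corpus, itself produced lazily.
corpus = ({"text": f"line {i}"} for i in range(1_000_000))
tokenized = lazy_map(corpus, lambda ex: {"tokens": ex["text"].split()})
first = next(tokenized)
```

(Later versions of `datasets` expose a similar idea through `set_transform`, which applies a function lazily at access time.)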
https://api.github.com/repos/huggingface/datasets/issues/1775/comments | https://api.github.com/repos/huggingface/datasets/issues/1775/timeline | 2021-01-24T09:50:39Z | null | completed | MDU6SXNzdWU3OTI3NDIxMjA= | closed | [] | null | 1,775 | {
"avatar_url": "https://avatars.githubusercontent.com/u/11826803?v=4",
"events_url": "https://api.github.com/users/zhongpeixiang/events{/privacy}",
"followers_url": "https://api.github.com/users/zhongpeixiang/followers",
"following_url": "https://api.github.com/users/zhongpeixiang/following{/other_user}",
"g... | Efficient ways to iterate the dataset | https://api.github.com/repos/huggingface/datasets/issues/1775/events | null | https://api.github.com/repos/huggingface/datasets/issues/1775/labels{/name} | 2021-01-24T07:54:31Z | null | false | null | null | 792,742,120 | [] | https://api.github.com/repos/huggingface/datasets/issues/1775 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | For a large dataset that does not fit in memory, how can I select only a subset of features from each example?
If I iterate over the dataset and then select the subset of features one by one, the resulting memory usage will be huge. Any ways to solve this?
Thanks | 2021-01-24T09:50:39Z | https://github.com/huggingface/datasets/issues/1775 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1775/reactions"
} | false |
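The column-selection question in issue 1775 has a simple streaming shape: keep only the requested keys as you iterate, so the dropped features never accumulate. A plain-Python sketch with an illustrative row layout, not the library's API:

```python
def select_columns(rows, columns):
    """Yield each row reduced to the requested columns."""
    for row in rows:
        yield {key: row[key] for key in columns}

rows = [{"id": i, "text": "t" * i, "embedding": [0.0] * 512} for i in range(3)]
slim = list(select_columns(rows, ["id", "text"]))
```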
https://api.github.com/repos/huggingface/datasets/issues/1774/comments | https://api.github.com/repos/huggingface/datasets/issues/1774/timeline | 2024-01-31T15:54:18Z | null | completed | MDU6SXNzdWU3OTI3MzA1NTk= | closed | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
... | null | 1,774 | {
"avatar_url": "https://avatars.githubusercontent.com/u/7607120?v=4",
"events_url": "https://api.github.com/users/world2vec/events{/privacy}",
"followers_url": "https://api.github.com/users/world2vec/followers",
"following_url": "https://api.github.com/users/world2vec/following{/other_user}",
"gists_url": "h... | is it possible to make slice to be more compatible like python list and numpy? | https://api.github.com/repos/huggingface/datasets/issues/1774/events | null | https://api.github.com/repos/huggingface/datasets/issues/1774/labels{/name} | 2021-01-24T06:15:52Z | null | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | null | 792,730,559 | [] | https://api.github.com/repos/huggingface/datasets/issues/1774 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | NONE | Hi,
see below error:
```
AssertionError: Requested slice [:10000000000000000] incompatible with 20 examples.
``` | 2024-01-31T15:54:18Z | https://github.com/huggingface/datasets/issues/1774 | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1774/reactions"
} | false |
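For contrast with the `AssertionError` in issue 1774, Python lists (and NumPy arrays) clamp out-of-range slice bounds instead of raising, which is the behavior the issue requests:

```python
data = list(range(20))

# A bound far past the end is silently clamped to len(data).
clamped = data[:10_000_000_000_000_000]

# An empty range (start past stop) is also tolerated, yielding an empty list.
empty = data[50:10]
```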
https://api.github.com/repos/huggingface/datasets/issues/1773/comments | https://api.github.com/repos/huggingface/datasets/issues/1773/timeline | 2021-08-04T18:13:01Z | null | completed | MDU6SXNzdWU3OTI3MDgxNjA= | closed | [] | null | 1,773 | {
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.git... | bug in loading datasets | https://api.github.com/repos/huggingface/datasets/issues/1773/events | null | https://api.github.com/repos/huggingface/datasets/issues/1773/labels{/name} | 2021-01-24T02:53:45Z | null | false | null | null | 792,708,160 | [] | https://api.github.com/repos/huggingface/datasets/issues/1773 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | NONE | Hi,
I need to load a dataset, I use these commands:
```
from datasets import load_dataset
dataset = load_dataset('csv', data_files={'train': 'sick/train.csv',
'test': 'sick/test.csv',
'validation': 'sick/validation.csv'})
prin... | 2021-09-06T08:54:46Z | https://github.com/huggingface/datasets/issues/1773 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1773/reactions"
} | false |
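The failing `load_dataset('csv', ...)` call in issue 1773 can be sanity-checked by parsing the files with the standard library first. A hypothetical sketch; the column names below are illustrative, not taken from the actual SICK files:

```python
import csv
import io

sample_csv = (
    "sentence_A,sentence_B,label\n"
    "A man is walking,A person walks,entailment\n"
    "A dog runs,The sky is blue,neutral\n"
)

def load_csv_split(text: str) -> list:
    """Parse CSV text into a list of row dicts, one per data line."""
    return list(csv.DictReader(io.StringIO(text)))

train = load_csv_split(sample_csv)
```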
https://api.github.com/repos/huggingface/datasets/issues/1772/comments | https://api.github.com/repos/huggingface/datasets/issues/1772/timeline | 2021-02-05T15:49:25Z | null | completed | MDU6SXNzdWU3OTI3MDM3OTc= | closed | [] | null | 1,772 | {
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.git... | Adding SICK dataset | https://api.github.com/repos/huggingface/datasets/issues/1772/events | null | https://api.github.com/repos/huggingface/datasets/issues/1772/labels{/name} | 2021-01-24T02:15:31Z | null | false | null | null | 792,703,797 | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | https://api.github.com/repos/huggingface/datasets/issues/1772 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | NONE | Hi
It would be great to include the SICK dataset.
## Adding a Dataset
- **Name:** SICK
- **Description:** a well known entailment dataset
- **Paper:** http://marcobaroni.org/composes/sick.html
- **Data:** http://marcobaroni.org/composes/sick.html
- **Motivation:** this is an important NLI benchmark
Instruction... | 2021-02-05T15:49:25Z | https://github.com/huggingface/datasets/issues/1772 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1772/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1771/comments | https://api.github.com/repos/huggingface/datasets/issues/1771/timeline | 2021-01-24T23:06:29Z | null | completed | MDU6SXNzdWU3OTI3MDEyNzY= | closed | [] | null | 1,771 | {
"avatar_url": "https://avatars.githubusercontent.com/u/7607120?v=4",
"events_url": "https://api.github.com/users/world2vec/events{/privacy}",
"followers_url": "https://api.github.com/users/world2vec/followers",
"following_url": "https://api.github.com/users/world2vec/following{/other_user}",
"gists_url": "h... | Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.2.1/datasets/csv/csv.py | https://api.github.com/repos/huggingface/datasets/issues/1771/events | null | https://api.github.com/repos/huggingface/datasets/issues/1771/labels{/name} | 2021-01-24T01:53:52Z | null | false | null | null | 792,701,276 | [] | https://api.github.com/repos/huggingface/datasets/issues/1771 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | NONE | Hi,
When I call load_dataset on local csv files, the error below happened; it looks like raw.githubusercontent.com is blocked by the Chinese government. But why does it need to download csv.py? Shouldn't it be included when pip-installing the datasets package?
```
Traceback (most recent call last):
File "/home/tom/pyenv/pystory/lib/python3.6/site-p... | 2021-01-24T23:06:29Z | https://github.com/huggingface/datasets/issues/1771 | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1771/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1770/comments | https://api.github.com/repos/huggingface/datasets/issues/1770/timeline | 2022-06-01T15:43:15Z | null | completed | MDU6SXNzdWU3OTI2OTgxNDg= | closed | [] | null | 1,770 | {
"avatar_url": "https://avatars.githubusercontent.com/u/7607120?v=4",
"events_url": "https://api.github.com/users/world2vec/events{/privacy}",
"followers_url": "https://api.github.com/users/world2vec/followers",
"following_url": "https://api.github.com/users/world2vec/following{/other_user}",
"gists_url": "h... | how can I combine 2 dataset with different/same features? | https://api.github.com/repos/huggingface/datasets/issues/1770/events | null | https://api.github.com/repos/huggingface/datasets/issues/1770/labels{/name} | 2021-01-24T01:26:06Z | null | false | null | null | 792,698,148 | [] | https://api.github.com/repos/huggingface/datasets/issues/1770 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | NONE | How can I combine 2 datasets with a one-to-one mapping, like ds = zip(ds1, ds2)?
ds1: {'text'}, ds2: {'text'}, combine ds:{'src', 'tgt'}
or with different features:
ds1: {'src'}, ds2: {'tgt'}, combine ds:{'src', 'tgt'} | 2022-06-01T15:43:15Z | https://github.com/huggingface/datasets/issues/1770 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1770/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1769/comments | https://api.github.com/repos/huggingface/datasets/issues/1769/timeline | 2022-10-05T12:38:51Z | null | completed | MDU6SXNzdWU3OTI1MjMyODQ= | closed | [] | null | 1,769 | {
"avatar_url": "https://avatars.githubusercontent.com/u/14048129?v=4",
"events_url": "https://api.github.com/users/shuaihuaiyi/events{/privacy}",
"followers_url": "https://api.github.com/users/shuaihuaiyi/followers",
"following_url": "https://api.github.com/users/shuaihuaiyi/following{/other_user}",
"gists_u... | _pickle.PicklingError: Can't pickle typing.Union[str, NoneType]: it's not the same object as typing.Union when calling datasets.map with num_proc=2 | https://api.github.com/repos/huggingface/datasets/issues/1769/events | null | https://api.github.com/repos/huggingface/datasets/issues/1769/labels{/name} | 2021-01-23T10:13:00Z | null | false | null | null | 792,523,284 | [] | https://api.github.com/repos/huggingface/datasets/issues/1769 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | NONE | It may be a bug in multiprocessing with Datasets; when I disable multiprocessing by setting num_proc to None, everything works fine.
The script I use is https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm_wwm.py
Script args:
```
--model_name_or_path
../../../model/chine... | 2022-10-05T12:38:51Z | https://github.com/huggingface/datasets/issues/1769 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1769/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1768/comments | https://api.github.com/repos/huggingface/datasets/issues/1768/timeline | 2021-01-25T09:14:59Z | null | null | MDExOlB1bGxSZXF1ZXN0NTYwMDgyNzIx | closed | [] | false | 1,768 | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url"... | Mention kwargs in the Dataset Formatting docs | https://api.github.com/repos/huggingface/datasets/issues/1768/events | null | https://api.github.com/repos/huggingface/datasets/issues/1768/labels{/name} | 2021-01-22T16:43:20Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1768.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1768",
"merged_at": "2021-01-25T09:14:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1768.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 792,150,745 | [] | https://api.github.com/repos/huggingface/datasets/issues/1768 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | Hi,
This was discussed in Issue #1762 where the docs didn't mention that keyword arguments to `datasets.Dataset.set_format()` are allowed.
To prevent people from having to check the code/method docs, I just added a couple of lines in the docs.
Please let me know your thoughts on this.
Thanks,
Gunjan
@lho... | 2021-01-31T12:33:10Z | https://github.com/huggingface/datasets/pull/1768 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1768/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1767/comments | https://api.github.com/repos/huggingface/datasets/issues/1767/timeline | 2021-01-25T20:37:42Z | null | null | MDExOlB1bGxSZXF1ZXN0NTYwMDE2MzE2 | closed | [] | false | 1,767 | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_use... | Add Librispeech ASR | https://api.github.com/repos/huggingface/datasets/issues/1767/events | null | https://api.github.com/repos/huggingface/datasets/issues/1767/labels{/name} | 2021-01-22T14:54:37Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1767.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1767",
"merged_at": "2021-01-25T20:37:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1767.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 792,068,497 | [] | https://api.github.com/repos/huggingface/datasets/issues/1767 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | This PR adds the librispeech asr dataset: https://www.tensorflow.org/datasets/catalog/librispeech
There are 2 configs: "clean" and "other", and there are two "train" datasets for "clean", hence the names "train.100" and "train.360".
As suggested by @lhoestq, due to the enormous size of the dataset in `.arrow` f... | 2021-01-25T20:38:07Z | https://github.com/huggingface/datasets/pull/1767 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1767/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1766/comments | https://api.github.com/repos/huggingface/datasets/issues/1766/timeline | 2021-02-02T10:38:06Z | null | completed | MDU6SXNzdWU3OTIwNDQxMDU= | closed | [] | null | 1,766 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8089862?v=4",
"events_url": "https://api.github.com/users/lamthuy/events{/privacy}",
"followers_url": "https://api.github.com/users/lamthuy/followers",
"following_url": "https://api.github.com/users/lamthuy/following{/other_user}",
"gists_url": "https:/... | Issues when run two programs compute the same metrics | https://api.github.com/repos/huggingface/datasets/issues/1766/events | null | https://api.github.com/repos/huggingface/datasets/issues/1766/labels{/name} | 2021-01-22T14:22:55Z | null | false | null | null | 792,044,105 | [] | https://api.github.com/repos/huggingface/datasets/issues/1766 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | NONE | I got the following error when running two different programs that both compute sacreblue metrics. It seems that both read/and/write to the same location (.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow) where it caches the batches:
```
File "train_matching_min.py", line 160, in <module>ch... | 2021-02-02T10:38:06Z | https://github.com/huggingface/datasets/issues/1766 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1766/reactions"
} | false |
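The clash reported above comes from both runs defaulting to the same `default_experiment` cache file. The usual remedy is to give each run its own experiment id (in the library this corresponds to the `experiment_id` argument of `load_metric`; that mapping is an assumption here, check your version's docs). The underlying idea of deriving a collision-free cache path per run can be sketched with the standard library:

```python
import os
import uuid
from typing import Optional

def metric_cache_path(cache_dir: str, metric: str,
                      experiment_id: Optional[str] = None) -> str:
    """Build a per-run cache file path so concurrent runs don't clash.

    With no explicit experiment id, fall back to a unique one (pid plus a
    random suffix) instead of a shared "default_experiment" name.
    """
    if experiment_id is None:
        experiment_id = "exp-{}-{}".format(os.getpid(), uuid.uuid4().hex[:8])
    return os.path.join(cache_dir, metric, "{}-1-0.arrow".format(experiment_id))

p1 = metric_cache_path("/tmp/metrics", "sacrebleu")
p2 = metric_cache_path("/tmp/metrics", "sacrebleu")
print(p1 != p2)  # True: two concurrent runs get distinct cache files
```

Passing a stable, per-program experiment id keeps caching deterministic within one run while isolating runs from each other.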
https://api.github.com/repos/huggingface/datasets/issues/1765/comments | https://api.github.com/repos/huggingface/datasets/issues/1765/timeline | 2021-01-23T03:44:14Z | null | completed | MDU6SXNzdWU3OTE1NTMwNjU= | closed | [] | null | 1,765 | {
"avatar_url": "https://avatars.githubusercontent.com/u/1295082?v=4",
"events_url": "https://api.github.com/users/EvanZ/events{/privacy}",
"followers_url": "https://api.github.com/users/EvanZ/followers",
"following_url": "https://api.github.com/users/EvanZ/following{/other_user}",
"gists_url": "https://api.g... | Error iterating over Dataset with DataLoader | https://api.github.com/repos/huggingface/datasets/issues/1765/events | null | https://api.github.com/repos/huggingface/datasets/issues/1765/labels{/name} | 2021-01-21T22:56:45Z | null | false | null | null | 791,553,065 | [] | https://api.github.com/repos/huggingface/datasets/issues/1765 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | NONE | I have a Dataset that I've mapped a tokenizer over:
```
encoded_dataset.set_format(type='torch',columns=['attention_mask','input_ids','token_type_ids'])
encoded_dataset[:1]
```
```
{'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]),
'input_ids': tensor([[ 101, 178, 1198, 1400, 1714, 22233, 2... | 2022-10-28T02:16:38Z | https://github.com/huggingface/datasets/issues/1765 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1765/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1764/comments | https://api.github.com/repos/huggingface/datasets/issues/1764/timeline | 2021-01-21T21:00:02Z | null | completed | MDU6SXNzdWU3OTE0ODY4NjA= | closed | [] | null | 1,764 | {
"avatar_url": "https://avatars.githubusercontent.com/u/12455298?v=4",
"events_url": "https://api.github.com/users/SaeedNajafi/events{/privacy}",
"followers_url": "https://api.github.com/users/SaeedNajafi/followers",
"following_url": "https://api.github.com/users/SaeedNajafi/following{/other_user}",
"gists_u... | Connection Issues | https://api.github.com/repos/huggingface/datasets/issues/1764/events | null | https://api.github.com/repos/huggingface/datasets/issues/1764/labels{/name} | 2021-01-21T20:56:09Z | null | false | null | null | 791,486,860 | [] | https://api.github.com/repos/huggingface/datasets/issues/1764 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | NONE | Today, I am getting connection issues while loading a dataset and the metric.
```
Traceback (most recent call last):
File "src/train.py", line 180, in <module>
train_dataset, dev_dataset, test_dataset = create_race_dataset()
File "src/train.py", line 130, in create_race_dataset
train_dataset = load_da... | 2021-01-21T21:00:19Z | https://github.com/huggingface/datasets/issues/1764 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1764/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1763/comments | https://api.github.com/repos/huggingface/datasets/issues/1763/timeline | 2021-01-22T10:13:45Z | null | null | MDExOlB1bGxSZXF1ZXN0NTU5NDU3MTY1 | closed | [] | false | 1,763 | {
"avatar_url": "https://avatars.githubusercontent.com/u/9641196?v=4",
"events_url": "https://api.github.com/users/gowtham1997/events{/privacy}",
"followers_url": "https://api.github.com/users/gowtham1997/followers",
"following_url": "https://api.github.com/users/gowtham1997/following{/other_user}",
"gists_ur... | PAWS-X: Fix csv Dictreader splitting data on quotes | https://api.github.com/repos/huggingface/datasets/issues/1763/events | null | https://api.github.com/repos/huggingface/datasets/issues/1763/labels{/name} | 2021-01-21T18:21:01Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1763.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1763",
"merged_at": "2021-01-22T10:13:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1763.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 791,389,763 | [] | https://api.github.com/repos/huggingface/datasets/issues/1763 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR |
```python
from datasets import load_dataset
# load english paws-x dataset
datasets = load_dataset('paws-x', 'en')
print(len(datasets['train'])) # outputs 49202 but official dataset has 49401 pairs
print(datasets['train'].unique('label')) # outputs [1, 0, -1] but labels are binary [0,1]
... | 2021-01-22T10:14:33Z | https://github.com/huggingface/datasets/pull/1763 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1763/reactions"
} | true |
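The root cause fixed in the PR above is a standard-library pitfall: `csv.reader`/`csv.DictReader` treat `"` as a quote character by default, so a tab-separated field that happens to start with a stray double quote swallows the following delimiter. Disabling quoting restores the intended split. A minimal stdlib reproduction (toy data, not the actual PAWS-X rows):

```python
import csv
import io

# A TSV row whose second field begins with a stray double quote.
raw = '1\t"He said hello\tworld"\n'

default_fields = next(csv.reader(io.StringIO(raw), delimiter="\t"))
no_quote_fields = next(csv.reader(io.StringIO(raw), delimiter="\t",
                                  quoting=csv.QUOTE_NONE))

print(default_fields)   # ['1', 'He said hello\tworld']  -- tab swallowed into one field
print(no_quote_fields)  # ['1', '"He said hello', 'world"']  -- three fields, as intended
```

Passing `quoting=csv.QUOTE_NONE` (optionally with `quotechar=None`) is the usual fix when parsing TSV files whose text may contain unbalanced quotes.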
https://api.github.com/repos/huggingface/datasets/issues/1762/comments | https://api.github.com/repos/huggingface/datasets/issues/1762/timeline | 2021-02-02T07:13:22Z | null | completed | MDU6SXNzdWU3OTEyMjYwMDc= | closed | [] | null | 1,762 | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url"... | Unable to format dataset to CUDA Tensors | https://api.github.com/repos/huggingface/datasets/issues/1762/events | null | https://api.github.com/repos/huggingface/datasets/issues/1762/labels{/name} | 2021-01-21T15:31:23Z | null | false | null | null | 791,226,007 | [] | https://api.github.com/repos/huggingface/datasets/issues/1762 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | Hi,
I came across this [link](https://huggingface.co/docs/datasets/torch_tensorflow.html) where the docs show how to convert a dataset to a particular format. I see that there is an option to convert it to tensors, but I don't see any option to convert it to CUDA tensors.
I tried this, but Dataset doesn't suppor... | 2021-02-02T07:13:22Z | https://github.com/huggingface/datasets/issues/1762 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1762/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1761/comments | https://api.github.com/repos/huggingface/datasets/issues/1761/timeline | 2021-01-26T13:50:31Z | null | null | MDExOlB1bGxSZXF1ZXN0NTU5MjUyMzEw | closed | [] | false | 1,761 | {
"avatar_url": "https://avatars.githubusercontent.com/u/1551356?v=4",
"events_url": "https://api.github.com/users/eusip/events{/privacy}",
"followers_url": "https://api.github.com/users/eusip/followers",
"following_url": "https://api.github.com/users/eusip/following{/other_user}",
"gists_url": "https://api.g... | Add SILICONE benchmark | https://api.github.com/repos/huggingface/datasets/issues/1761/events | null | https://api.github.com/repos/huggingface/datasets/issues/1761/labels{/name} | 2021-01-21T14:29:12Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1761.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1761",
"merged_at": "2021-01-26T13:50:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1761.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 791,150,858 | [] | https://api.github.com/repos/huggingface/datasets/issues/1761 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | My collaborators and I within the Affective Computing team at Telecom Paris would like to re-submit our spoken dialogue dataset for publication.
This is a new pull request relative to the [previously closed request](https://github.com/huggingface/datasets/pull/1712) which was reviewed by @lhoestq.
| 2021-02-04T14:32:48Z | https://github.com/huggingface/datasets/pull/1761 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1761/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1760/comments | https://api.github.com/repos/huggingface/datasets/issues/1760/timeline | 2021-01-22T09:40:00Z | null | null | MDExOlB1bGxSZXF1ZXN0NTU5MjE3MjY0 | closed | [] | false | 1,760 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | More tags | https://api.github.com/repos/huggingface/datasets/issues/1760/events | null | https://api.github.com/repos/huggingface/datasets/issues/1760/labels{/name} | 2021-01-21T13:50:10Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1760.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1760",
"merged_at": "2021-01-22T09:40:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1760.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 791,110,857 | [] | https://api.github.com/repos/huggingface/datasets/issues/1760 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | MEMBER | Since the hub v2 is going to be released soon I figured it would be great to add the missing tags at least for some of the datasets of reference listed [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#write-the-loadingprocessing-code) | 2021-01-22T09:40:01Z | https://github.com/huggingface/datasets/pull/1760 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1760/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1759/comments | https://api.github.com/repos/huggingface/datasets/issues/1759/timeline | 2021-01-21T17:21:06Z | null | completed | MDU6SXNzdWU3OTA5OTIyMjY= | closed | [] | null | 1,759 | {
"avatar_url": "https://avatars.githubusercontent.com/u/19912393?v=4",
"events_url": "https://api.github.com/users/ChrisDelClea/events{/privacy}",
"followers_url": "https://api.github.com/users/ChrisDelClea/followers",
"following_url": "https://api.github.com/users/ChrisDelClea/following{/other_user}",
"gist... | wikipedia dataset incomplete | https://api.github.com/repos/huggingface/datasets/issues/1759/events | null | https://api.github.com/repos/huggingface/datasets/issues/1759/labels{/name} | 2021-01-21T11:47:15Z | null | false | null | null | 790,992,226 | [] | https://api.github.com/repos/huggingface/datasets/issues/1759 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | NONE | Hey guys,
I am using the https://github.com/huggingface/datasets/tree/master/datasets/wikipedia dataset.
Unfortunately, I found out that the German dataset is incomplete.
For reasons unknown to me, the number of inhabitants has been removed from many pages:
Thorey-sur-Ouche has 128 inhabitants a... | 2021-01-21T17:22:11Z | https://github.com/huggingface/datasets/issues/1759 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1759/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1758/comments | https://api.github.com/repos/huggingface/datasets/issues/1758/timeline | 2021-01-22T00:25:50Z | null | completed | MDU6SXNzdWU3OTA2MjYxMTY= | closed | [] | null | 1,758 | {
"avatar_url": "https://avatars.githubusercontent.com/u/49048309?v=4",
"events_url": "https://api.github.com/users/afogarty85/events{/privacy}",
"followers_url": "https://api.github.com/users/afogarty85/followers",
"following_url": "https://api.github.com/users/afogarty85/following{/other_user}",
"gists_url"... | dataset.search() (elastic) cannot reliably retrieve search results | https://api.github.com/repos/huggingface/datasets/issues/1758/events | null | https://api.github.com/repos/huggingface/datasets/issues/1758/labels{/name} | 2021-01-21T02:26:37Z | null | false | null | null | 790,626,116 | [] | https://api.github.com/repos/huggingface/datasets/issues/1758 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | NONE | I am trying to use elastic search to retrieve the indices of items in the dataset in their precise order, given shuffled training indices.
The problem I have is that I cannot retrieve reliable results with my data on my first search. I have to run the search **twice** to get the right answer.
I am indexing data t... | 2021-01-22T00:25:50Z | https://github.com/huggingface/datasets/issues/1758 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1758/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1757/comments | https://api.github.com/repos/huggingface/datasets/issues/1757/timeline | 2021-03-08T14:34:52Z | null | completed | MDU6SXNzdWU3OTA0NjY1MDk= | closed | [] | null | 1,757 | {
"avatar_url": "https://avatars.githubusercontent.com/u/6183050?v=4",
"events_url": "https://api.github.com/users/dspoka/events{/privacy}",
"followers_url": "https://api.github.com/users/dspoka/followers",
"following_url": "https://api.github.com/users/dspoka/following{/other_user}",
"gists_url": "https://ap... | FewRel | https://api.github.com/repos/huggingface/datasets/issues/1757/events | null | https://api.github.com/repos/huggingface/datasets/issues/1757/labels{/name} | 2021-01-20T23:56:03Z | null | false | null | null | 790,466,509 | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | https://api.github.com/repos/huggingface/datasets/issues/1757 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | NONE | ## Adding a Dataset
- **Name:** FewRel
- **Description:** Large-Scale Supervised Few-Shot Relation Classification Dataset
- **Paper:** @inproceedings{han2018fewrel,
title={FewRel:A Large-Scale Supervised Few-Shot Relation Classification Dataset with State-of-the-Art Evaluation},
auth... | 2021-03-09T02:52:05Z | https://github.com/huggingface/datasets/issues/1757 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1757/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1756/comments | https://api.github.com/repos/huggingface/datasets/issues/1756/timeline | 2021-03-01T10:36:21Z | null | completed | MDU6SXNzdWU3OTAzODAwMjg= | closed | [] | null | 1,756 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4",
"events_url": "https://api.github.com/users/flozi00/events{/privacy}",
"followers_url": "https://api.github.com/users/flozi00/followers",
"following_url": "https://api.github.com/users/flozi00/following{/other_user}",
"gists_url": "https:... | Ccaligned multilingual translation dataset | https://api.github.com/repos/huggingface/datasets/issues/1756/events | null | https://api.github.com/repos/huggingface/datasets/issues/1756/labels{/name} | 2021-01-20T22:18:44Z | null | false | null | null | 790,380,028 | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | https://api.github.com/repos/huggingface/datasets/issues/1756 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- CCAligned consists of parallel or comparable web-document pairs in 137 languages aligned with English. These web-document pairs were constructed by performing language ... | 2021-03-01T10:36:21Z | https://github.com/huggingface/datasets/issues/1756 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1756/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1755/comments | https://api.github.com/repos/huggingface/datasets/issues/1755/timeline | 2021-01-20T22:03:39Z | null | completed | MDU6SXNzdWU3OTAzMjQ3MzQ= | closed | [] | null | 1,755 | {
"avatar_url": "https://avatars.githubusercontent.com/u/49048309?v=4",
"events_url": "https://api.github.com/users/afogarty85/events{/privacy}",
"followers_url": "https://api.github.com/users/afogarty85/followers",
"following_url": "https://api.github.com/users/afogarty85/following{/other_user}",
"gists_url"... | Using select/reordering datasets slows operations down immensely | https://api.github.com/repos/huggingface/datasets/issues/1755/events | null | https://api.github.com/repos/huggingface/datasets/issues/1755/labels{/name} | 2021-01-20T21:12:12Z | null | false | null | null | 790,324,734 | [] | https://api.github.com/repos/huggingface/datasets/issues/1755 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | NONE | I am using portions of HF's helpful work in preparing / scoring the SQuAD 2.0 data. The problem I have is that after using `select` to re-order the dataset, computations slow down immensely: the total scoring process on 131k training examples, which would take maybe 3 minutes, now takes over an hour.
The below examp... | 2021-01-20T22:03:39Z | https://github.com/huggingface/datasets/issues/1755 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1755/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1754/comments | https://api.github.com/repos/huggingface/datasets/issues/1754/timeline | 2021-01-25T09:12:06Z | null | null | MDExOlB1bGxSZXF1ZXN0NTU4MTU5NjEw | closed | [] | false | 1,754 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | Use a config id in the cache directory names for custom configs | https://api.github.com/repos/huggingface/datasets/issues/1754/events | null | https://api.github.com/repos/huggingface/datasets/issues/1754/labels{/name} | 2021-01-20T11:11:00Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1754.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1754",
"merged_at": "2021-01-25T09:12:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1754.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 789,881,730 | [] | https://api.github.com/repos/huggingface/datasets/issues/1754 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | MEMBER | As noticed by @JetRunner there was some issues when trying to generate a dataset using a custom config that is based on an existing config.
For example in the following code the `mnli_custom` would reuse the cache used to create `mnli` instead of generating a new dataset with the new label classes:
```python
from ... | 2021-01-25T09:12:07Z | https://github.com/huggingface/datasets/pull/1754 | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1754/reactions"
} | true |
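The fix in the PR above gives each custom config its own cache directory name by incorporating an id derived from the config's parameters, so `mnli_custom` no longer reuses the `mnli` cache. The general idea can be sketched with the standard library (the exact hashing scheme the library uses is an assumption here):

```python
import hashlib
import json

def config_cache_id(name: str, **custom_kwargs) -> str:
    """Derive a cache-directory id: the config name, plus a short hash of any
    non-default parameters so different customizations get different caches."""
    if not custom_kwargs:
        return name
    payload = json.dumps(custom_kwargs, sort_keys=True).encode("utf-8")
    return "{}-{}".format(name, hashlib.sha256(payload).hexdigest()[:16])

print(config_cache_id("mnli"))                          # 'mnli' (default config, stable path)
print(config_cache_id("mnli", label_classes=["bad"]))   # 'mnli-<hash>', a distinct cache dir
```

Sorting the keys before hashing keeps the id deterministic regardless of the order the kwargs were passed in.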
https://api.github.com/repos/huggingface/datasets/issues/1753/comments | https://api.github.com/repos/huggingface/datasets/issues/1753/timeline | 2021-01-20T14:39:30Z | null | null | MDExOlB1bGxSZXF1ZXN0NTU4MTQ3Njkx | closed | [] | false | 1,753 | {
"avatar_url": "https://avatars.githubusercontent.com/u/17256847?v=4",
"events_url": "https://api.github.com/users/ricardorei/events{/privacy}",
"followers_url": "https://api.github.com/users/ricardorei/followers",
"following_url": "https://api.github.com/users/ricardorei/following{/other_user}",
"gists_url"... | fix comet citations | https://api.github.com/repos/huggingface/datasets/issues/1753/events | null | https://api.github.com/repos/huggingface/datasets/issues/1753/labels{/name} | 2021-01-20T10:52:38Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1753.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1753",
"merged_at": "2021-01-20T14:39:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1753.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 789,867,685 | [] | https://api.github.com/repos/huggingface/datasets/issues/1753 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | I realized COMET citations were not showing in the hugging face metrics page:
<img width="814" alt="Screenshot 2021-01-20 at 09 48 44" src="https://user-images.githubusercontent.com/17256847/105164848-8b9da900-5b0d-11eb-9e20-a38f559d2037.png">
This pull request is intended to fix that.
Thanks! | 2021-01-20T14:39:30Z | https://github.com/huggingface/datasets/pull/1753 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1753/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1752/comments | https://api.github.com/repos/huggingface/datasets/issues/1752/timeline | 2021-01-20T10:25:02Z | null | null | MDExOlB1bGxSZXF1ZXN0NTU4MTA5NTA5 | closed | [] | false | 1,752 | {
"avatar_url": "https://avatars.githubusercontent.com/u/17256847?v=4",
"events_url": "https://api.github.com/users/ricardorei/events{/privacy}",
"followers_url": "https://api.github.com/users/ricardorei/followers",
"following_url": "https://api.github.com/users/ricardorei/following{/other_user}",
"gists_url"... | COMET metric citation | https://api.github.com/repos/huggingface/datasets/issues/1752/events | null | https://api.github.com/repos/huggingface/datasets/issues/1752/labels{/name} | 2021-01-20T09:54:43Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1752.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1752",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1752.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1752"
} | 789,822,459 | [] | https://api.github.com/repos/huggingface/datasets/issues/1752 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | In my last pull request to add the COMET metric, the citations were not following the usual "format". Because of that, they were not correctly displayed on the website:
<img width="814" alt="Screenshot 2021-01-20 at 09 48 44" src="https://user-images.githubusercontent.com/17256847/105158000-686efb80-5b05-11eb-8bb0-9c8... | 2021-01-20T10:27:07Z | https://github.com/huggingface/datasets/pull/1752 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1752/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1751/comments | https://api.github.com/repos/huggingface/datasets/issues/1751/timeline | 2021-01-20T14:56:52Z | null | null | MDExOlB1bGxSZXF1ZXN0NTU3NjA1ODE2 | closed | [] | false | 1,751 | {
"avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4",
"events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}",
"followers_url": "https://api.github.com/users/mcmillanmajora/followers",
"following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}",
... | Updated README for the Social Bias Frames dataset | https://api.github.com/repos/huggingface/datasets/issues/1751/events | null | https://api.github.com/repos/huggingface/datasets/issues/1751/labels{/name} | 2021-01-19T17:53:00Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1751.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1751",
"merged_at": "2021-01-20T14:56:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1751.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 789,232,980 | [] | https://api.github.com/repos/huggingface/datasets/issues/1751 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | See the updated card at https://github.com/mcmillanmajora/datasets/tree/add-SBIC-card/datasets/social_bias_frames. I incorporated information from the [SBIC data statement](https://homes.cs.washington.edu/~msap/social-bias-frames/DATASTATEMENT.html), paper, and the corpus README file included with the dataset download. | 2021-01-20T14:56:52Z | https://github.com/huggingface/datasets/pull/1751 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1751/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1750/comments | https://api.github.com/repos/huggingface/datasets/issues/1750/timeline | 2021-01-19T09:48:43Z | null | null | MDExOlB1bGxSZXF1ZXN0NTU3MTM1MzM1 | closed | [] | false | 1,750 | {
"avatar_url": "https://avatars.githubusercontent.com/u/2755894?v=4",
"events_url": "https://api.github.com/users/forest1988/events{/privacy}",
"followers_url": "https://api.github.com/users/forest1988/followers",
"following_url": "https://api.github.com/users/forest1988/following{/other_user}",
"gists_url":... | Fix typo in README.md of cnn_dailymail | https://api.github.com/repos/huggingface/datasets/issues/1750/events | null | https://api.github.com/repos/huggingface/datasets/issues/1750/labels{/name} | 2021-01-19T03:06:05Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1750.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1750",
"merged_at": "2021-01-19T09:48:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1750.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 788,668,085 | [] | https://api.github.com/repos/huggingface/datasets/issues/1750 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | When I read the README.md of `CNN/DailyMail Dataset`, there seems to be a typo `CCN`.
I am afraid this is a trivial matter, but I would like to make a suggestion for revision. | 2021-01-19T11:07:29Z | https://github.com/huggingface/datasets/pull/1750 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1750/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1749/comments | https://api.github.com/repos/huggingface/datasets/issues/1749/timeline | 2021-01-29T18:38:08Z | null | null | MDExOlB1bGxSZXF1ZXN0NTU2OTgxMDc5 | closed | [] | false | 1,749 | {
"avatar_url": "https://avatars.githubusercontent.com/u/22454783?v=4",
"events_url": "https://api.github.com/users/gmihaila/events{/privacy}",
"followers_url": "https://api.github.com/users/gmihaila/followers",
"following_url": "https://api.github.com/users/gmihaila/following{/other_user}",
"gists_url": "htt... | Added metadata and correct splits for swda. | https://api.github.com/repos/huggingface/datasets/issues/1749/events | null | https://api.github.com/repos/huggingface/datasets/issues/1749/labels{/name} | 2021-01-18T18:36:32Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1749.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1749",
"merged_at": "2021-01-29T18:38:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1749.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 788,476,639 | [] | https://api.github.com/repos/huggingface/datasets/issues/1749 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | Switchboard Dialog Act Corpus
I made some changes following @bhavitvyamalik's recommendation in #1678:
* Contains all metadata.
* Used official implementation from the [/swda](https://github.com/cgpotts/swda) repo.
* Add official train and test splits used in [Stolcke et al. (2000)](https://web.stanford.edu/~jur... | 2021-01-29T19:35:52Z | https://github.com/huggingface/datasets/pull/1749 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 2,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1749/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1748/comments | https://api.github.com/repos/huggingface/datasets/issues/1748/timeline | 2021-01-19T11:26:58Z | null | null | MDExOlB1bGxSZXF1ZXN0NTU2OTQ0NDEx | closed | [] | false | 1,748 | {
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "htt... | add Stuctured Argument Extraction for Korean dataset | https://api.github.com/repos/huggingface/datasets/issues/1748/events | null | https://api.github.com/repos/huggingface/datasets/issues/1748/labels{/name} | 2021-01-18T17:14:19Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1748.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1748",
"merged_at": "2021-01-19T11:26:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1748.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 788,431,642 | [] | https://api.github.com/repos/huggingface/datasets/issues/1748 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | MEMBER | 2021-09-17T16:53:18Z | https://github.com/huggingface/datasets/pull/1748 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1748/reactions"
} | true | |
https://api.github.com/repos/huggingface/datasets/issues/1747/comments | https://api.github.com/repos/huggingface/datasets/issues/1747/timeline | 2022-10-05T12:37:27Z | null | completed | MDU6SXNzdWU3ODgyOTk3NzU= | closed | [] | null | 1,747 | {
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.git... | datasets slicing with seed | https://api.github.com/repos/huggingface/datasets/issues/1747/events | null | https://api.github.com/repos/huggingface/datasets/issues/1747/labels{/name} | 2021-01-18T14:08:55Z | null | false | null | null | 788,299,775 | [] | https://api.github.com/repos/huggingface/datasets/issues/1747 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | NONE | Hi
I need to slice a dataset with a random seed. I looked into the documentation here: https://huggingface.co/docs/datasets/splits.html
but I could not find a seed option. Could you please advise on how I can get a slice for different seeds?
thank you.
@lhoestq | 2022-10-05T12:37:27Z | https://github.com/huggingface/datasets/issues/1747 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1747/reactions"
} | false |
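The seeded-slicing question above is not covered by the `splits` string syntax; with the `datasets` library the usual route is to shuffle with a seed and then select a range (e.g. `dataset.shuffle(seed=42).select(range(100))`). The underlying idea — same seed, same slice — can be sketched with the standard library alone (the helper `seeded_slice` below is illustrative, not part of any library):

```python
import random


def seeded_slice(items, size, seed):
    """Deterministically pick `size` items: the same seed always yields the same slice."""
    rng = random.Random(seed)   # independent RNG; does not touch global random state
    shuffled = list(items)
    rng.shuffle(shuffled)
    return shuffled[:size]


data = list(range(100))
# Reproducible: identical seeds give identical slices
assert seeded_slice(data, 10, seed=0) == seeded_slice(data, 10, seed=0)
# Different seeds give (almost surely) different slices
assert seeded_slice(data, 10, seed=0) != seeded_slice(data, 10, seed=1)
```

The same shuffle-then-slice pattern is what a seeded `shuffle(...)` followed by `select(...)` performs on an actual `Dataset` object.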
https://api.github.com/repos/huggingface/datasets/issues/1746/comments | https://api.github.com/repos/huggingface/datasets/issues/1746/timeline | 2021-01-18T11:31:23Z | null | null | MDExOlB1bGxSZXF1ZXN0NTU2NzQxMjIw | closed | [] | false | 1,746 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | Fix release conda worflow | https://api.github.com/repos/huggingface/datasets/issues/1746/events | null | https://api.github.com/repos/huggingface/datasets/issues/1746/labels{/name} | 2021-01-18T11:29:10Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1746.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1746",
"merged_at": "2021-01-18T11:31:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1746.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 788,188,184 | [] | https://api.github.com/repos/huggingface/datasets/issues/1746 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | MEMBER | The current workflow yaml file is not valid according to https://github.com/huggingface/datasets/actions/runs/487638110 | 2021-01-18T11:31:24Z | https://github.com/huggingface/datasets/pull/1746 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1746/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1745/comments | https://api.github.com/repos/huggingface/datasets/issues/1745/timeline | 2021-01-18T00:59:34Z | null | completed | MDU6SXNzdWU3ODc4MzgyNTY= | closed | [] | null | 1,745 | {
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.git... | difference between wsc and wsc.fixed for superglue | https://api.github.com/repos/huggingface/datasets/issues/1745/events | null | https://api.github.com/repos/huggingface/datasets/issues/1745/labels{/name} | 2021-01-18T00:50:19Z | null | false | null | null | 787,838,256 | [] | https://api.github.com/repos/huggingface/datasets/issues/1745 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | NONE | Hi
I see two versions of wsc in superglue, and I am not sure what the differences are or which one is the original. Could you help clarify the differences? Thanks @lhoestq
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1745/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1744/comments | https://api.github.com/repos/huggingface/datasets/issues/1744/timeline | 2021-01-18T11:26:09Z | null | null | MDExOlB1bGxSZXF1ZXN0NTU2MzA0MjU4 | closed | [] | false | 1,744 | {
"avatar_url": "https://avatars.githubusercontent.com/u/2238344?v=4",
"events_url": "https://api.github.com/users/jbragg/events{/privacy}",
"followers_url": "https://api.github.com/users/jbragg/followers",
"following_url": "https://api.github.com/users/jbragg/following{/other_user}",
"gists_url": "https://ap... | Add missing "brief" entries to reuters | https://api.github.com/repos/huggingface/datasets/issues/1744/events | null | https://api.github.com/repos/huggingface/datasets/issues/1744/labels{/name} | 2021-01-17T07:58:49Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1744.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1744",
"merged_at": "2021-01-18T11:26:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1744.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 787,649,811 | [] | https://api.github.com/repos/huggingface/datasets/issues/1744 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | This brings the number of examples for ModApte to match the stated `Training set (9,603 docs)...Test Set (3,299 docs)` | 2021-01-18T11:26:09Z | https://github.com/huggingface/datasets/pull/1744 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1744/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1743/comments | https://api.github.com/repos/huggingface/datasets/issues/1743/timeline | 2022-06-01T15:49:34Z | null | completed | MDU6SXNzdWU3ODc2MzE0MTI= | closed | [] | null | 1,743 | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url"... | Issue while Creating Custom Metric | https://api.github.com/repos/huggingface/datasets/issues/1743/events | null | https://api.github.com/repos/huggingface/datasets/issues/1743/labels{/name} | 2021-01-17T07:01:14Z | null | false | null | null | 787,631,412 | [] | https://api.github.com/repos/huggingface/datasets/issues/1743 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | Hi Team,
I am trying to create a custom metric for my training as follows, where f1 is my own metric:
```python
def _info(self):
# TODO: Specifies the datasets.MetricInfo object
return datasets.MetricInfo(
# This is the description that will appear on the metrics page.
... | 2022-06-01T15:49:34Z | https://github.com/huggingface/datasets/issues/1743 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1743/reactions"
} | false |
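The custom-metric thread above refers to an `f1` function of the author's own; as a point of comparison, a minimal binary-F1 computation of the kind one might call from a metric's `_compute` method can be written with no dependencies (the name `f1_binary` is illustrative only):

```python
def f1_binary(predictions, references):
    """Binary F1: harmonic mean of precision and recall over 0/1 labels."""
    tp = sum(1 for p, r in zip(predictions, references) if p == 1 and r == 1)
    fp = sum(1 for p, r in zip(predictions, references) if p == 1 and r == 0)
    fn = sum(1 for p, r in zip(predictions, references) if p == 0 and r == 1)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


# One true positive, one false positive, one false negative -> P = R = F1 = 0.5
assert f1_binary([1, 1, 0, 0], [1, 0, 1, 0]) == 0.5
```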
https://api.github.com/repos/huggingface/datasets/issues/1742/comments | https://api.github.com/repos/huggingface/datasets/issues/1742/timeline | 2021-03-29T12:43:30Z | null | null | MDExOlB1bGxSZXF1ZXN0NTU2MjgyMDYw | closed | [] | false | 1,742 | {
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"events_url": "https://api.github.com/users/JetRunner/events{/privacy}",
"followers_url": "https://api.github.com/users/JetRunner/followers",
"following_url": "https://api.github.com/users/JetRunner/following{/other_user}",
"gists_url": "... | Add GLUE Compat (compatible with transformers<3.5.0) | https://api.github.com/repos/huggingface/datasets/issues/1742/events | null | https://api.github.com/repos/huggingface/datasets/issues/1742/labels{/name} | 2021-01-17T05:54:25Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1742.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1742",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1742.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1742"
} | 787,623,640 | [] | https://api.github.com/repos/huggingface/datasets/issues/1742 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | Link to our discussion on Slack (HF internal)
https://huggingface.slack.com/archives/C014N4749J9/p1609668119337400
The next step is to add a compatible option in the new `run_glue.py`
I duplicated `glue` and made the following changes:
1. Change the name to `glue_compat`.
2. Change the label assignments for MN... | 2023-09-24T09:52:12Z | https://github.com/huggingface/datasets/pull/1742 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1742/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1741/comments | https://api.github.com/repos/huggingface/datasets/issues/1741/timeline | 2021-01-16T02:39:18Z | null | completed | MDU6SXNzdWU3ODczMjcwNjA= | closed | [] | null | 1,741 | {
"avatar_url": "https://avatars.githubusercontent.com/u/43234824?v=4",
"events_url": "https://api.github.com/users/XiaoYang66/events{/privacy}",
"followers_url": "https://api.github.com/users/XiaoYang66/followers",
"following_url": "https://api.github.com/users/XiaoYang66/following{/other_user}",
"gists_url"... | error when run fine_tuning on text_classification | https://api.github.com/repos/huggingface/datasets/issues/1741/events | null | https://api.github.com/repos/huggingface/datasets/issues/1741/labels{/name} | 2021-01-16T02:23:19Z | null | false | null | null | 787,327,060 | [] | https://api.github.com/repos/huggingface/datasets/issues/1741 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | NONE | dataset:sem_eval_2014_task_1
pretrained_model:bert-base-uncased
error description:
when I use these resources to fine-tune a text classification model on sem_eval_2014_task_1, there is always some problem (the error also occurs when I use other datasets). And I followed the colab code (url:https://colab.researc... | 2021-01-16T02:39:28Z | https://github.com/huggingface/datasets/issues/1741 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1741/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1740/comments | https://api.github.com/repos/huggingface/datasets/issues/1740/timeline | 2021-01-20T13:41:26Z | null | null | MDExOlB1bGxSZXF1ZXN0NTU2MDA5NjM1 | closed | [] | false | 1,740 | {
"avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4",
"events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}",
"followers_url": "https://api.github.com/users/cahya-wirawan/followers",
"following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}",
"gi... | add id_liputan6 dataset | https://api.github.com/repos/huggingface/datasets/issues/1740/events | null | https://api.github.com/repos/huggingface/datasets/issues/1740/labels{/name} | 2021-01-15T22:58:34Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1740.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1740",
"merged_at": "2021-01-20T13:41:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1740.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 787,264,605 | [] | https://api.github.com/repos/huggingface/datasets/issues/1740 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | id_liputan6 is a large-scale Indonesian summarization dataset. The articles were harvested from an online news portal, and obtain 215,827 document-summary pairs: https://arxiv.org/abs/2011.00679 | 2021-01-20T13:41:26Z | https://github.com/huggingface/datasets/pull/1740 | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1740/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1739/comments | https://api.github.com/repos/huggingface/datasets/issues/1739/timeline | 2021-01-29T10:53:03Z | null | null | MDExOlB1bGxSZXF1ZXN0NTU1OTY5Njgx | closed | [] | false | 1,739 | {
"avatar_url": "https://avatars.githubusercontent.com/u/9607332?v=4",
"events_url": "https://api.github.com/users/Shimorina/events{/privacy}",
"followers_url": "https://api.github.com/users/Shimorina/followers",
"following_url": "https://api.github.com/users/Shimorina/following{/other_user}",
"gists_url": "h... | fixes and improvements for the WebNLG loader | https://api.github.com/repos/huggingface/datasets/issues/1739/events | null | https://api.github.com/repos/huggingface/datasets/issues/1739/labels{/name} | 2021-01-15T21:45:23Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1739.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1739",
"merged_at": "2021-01-29T10:53:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1739.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 787,219,138 | [] | https://api.github.com/repos/huggingface/datasets/issues/1739 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | - fixes test sets loading in v3.0
- adds additional fields for v3.0_ru
- adds info to the WebNLG data card | 2021-01-29T14:34:06Z | https://github.com/huggingface/datasets/pull/1739 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1739/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1738/comments | https://api.github.com/repos/huggingface/datasets/issues/1738/timeline | 2021-01-15T10:08:19Z | null | null | MDExOlB1bGxSZXF1ZXN0NTU0OTk2NDU4 | closed | [] | false | 1,738 | {
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_u... | Conda support | https://api.github.com/repos/huggingface/datasets/issues/1738/events | null | https://api.github.com/repos/huggingface/datasets/issues/1738/labels{/name} | 2021-01-14T15:11:25Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1738.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1738",
"merged_at": "2021-01-15T10:08:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1738.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 786,068,440 | [] | https://api.github.com/repos/huggingface/datasets/issues/1738 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | MEMBER | Will push a new version on anaconda cloud every time a tag starting with `v` is pushed (like `v1.2.2`).
Will appear here: https://anaconda.org/huggingface/datasets
Depends on `conda-forge` for now, so the following is required for installation:
```
conda install -c huggingface -c conda-forge datasets
``` | 2021-01-15T10:08:20Z | https://github.com/huggingface/datasets/pull/1738 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 4,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1738/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1737/comments | https://api.github.com/repos/huggingface/datasets/issues/1737/timeline | 2021-01-14T10:25:24Z | null | null | MDExOlB1bGxSZXF1ZXN0NTU0NjA2ODg5 | closed | [] | false | 1,737 | {
"avatar_url": "https://avatars.githubusercontent.com/u/6429850?v=4",
"events_url": "https://api.github.com/users/chameleonTK/events{/privacy}",
"followers_url": "https://api.github.com/users/chameleonTK/followers",
"following_url": "https://api.github.com/users/chameleonTK/following{/other_user}",
"gists_ur... | update link in TLC to be github links | https://api.github.com/repos/huggingface/datasets/issues/1737/events | null | https://api.github.com/repos/huggingface/datasets/issues/1737/labels{/name} | 2021-01-14T02:49:21Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1737.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1737",
"merged_at": "2021-01-14T10:25:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1737.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 785,606,286 | [] | https://api.github.com/repos/huggingface/datasets/issues/1737 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | Based on this issue https://github.com/huggingface/datasets/issues/1064, I can now use the official links.
| 2021-01-14T10:25:24Z | https://github.com/huggingface/datasets/pull/1737 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1737/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1736/comments | https://api.github.com/repos/huggingface/datasets/issues/1736/timeline | 2021-01-14T10:29:38Z | null | null | MDExOlB1bGxSZXF1ZXN0NTU0NDYyNjYw | closed | [] | false | 1,736 | {
"avatar_url": "https://avatars.githubusercontent.com/u/5097052?v=4",
"events_url": "https://api.github.com/users/jonatasgrosman/events{/privacy}",
"followers_url": "https://api.github.com/users/jonatasgrosman/followers",
"following_url": "https://api.github.com/users/jonatasgrosman/following{/other_user}",
... | Adjust BrWaC dataset features name | https://api.github.com/repos/huggingface/datasets/issues/1736/events | null | https://api.github.com/repos/huggingface/datasets/issues/1736/labels{/name} | 2021-01-13T20:39:04Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1736.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1736",
"merged_at": "2021-01-14T10:29:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1736.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 785,433,854 | [] | https://api.github.com/repos/huggingface/datasets/issues/1736 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | I added this dataset some days ago, and today I used it to train some models and realized that the names of the features aren't so good.
Looking at the current features hierarchy, we have "paragraphs" with a list of "sentences" with a list of "sentences?!". But the actual hierarchy is a "text" with a list of "paragr... | 2021-01-14T10:29:38Z | https://github.com/huggingface/datasets/pull/1736 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1736/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1735/comments | https://api.github.com/repos/huggingface/datasets/issues/1735/timeline | 2021-01-14T15:16:00Z | null | null | MDExOlB1bGxSZXF1ZXN0NTU0MjUzMDcw | closed | [] | false | 1,735 | {
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https:... | Update add new dataset template | https://api.github.com/repos/huggingface/datasets/issues/1735/events | null | https://api.github.com/repos/huggingface/datasets/issues/1735/labels{/name} | 2021-01-13T15:08:09Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1735.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1735",
"merged_at": "2021-01-14T15:16:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1735.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 785,184,740 | [] | https://api.github.com/repos/huggingface/datasets/issues/1735 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | This PR fixes a few typos in the "Add new dataset template" and clarifies a bit what to do for the dummy data creation when the `auto_generate` flag can't work. | 2021-01-14T15:16:01Z | https://github.com/huggingface/datasets/pull/1735 | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1735/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1734/comments | https://api.github.com/repos/huggingface/datasets/issues/1734/timeline | 2021-01-14T10:42:18Z | null | null | MDExOlB1bGxSZXF1ZXN0NTU0MDYxMzMz | closed | [] | false | 1,734 | {
"avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4",
"events_url": "https://api.github.com/users/cstorm125/events{/privacy}",
"followers_url": "https://api.github.com/users/cstorm125/followers",
"following_url": "https://api.github.com/users/cstorm125/following{/other_user}",
"gists_url": "... | Fix empty token bug for `thainer` and `lst20` | https://api.github.com/repos/huggingface/datasets/issues/1734/events | null | https://api.github.com/repos/huggingface/datasets/issues/1734/labels{/name} | 2021-01-13T09:55:09Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1734.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1734",
"merged_at": "2021-01-14T10:42:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1734.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 784,956,707 | [] | https://api.github.com/repos/huggingface/datasets/issues/1734 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | add a condition to check if tokens exist before yielding in `thainer` and `lst20` | 2021-01-14T10:42:18Z | https://github.com/huggingface/datasets/pull/1734 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1734/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1733/comments | https://api.github.com/repos/huggingface/datasets/issues/1733/timeline | 2021-08-04T18:13:55Z | null | completed | MDU6SXNzdWU3ODQ5MDMwMDI= | closed | [] | null | 1,733 | {
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.git... | connection issue with glue, what is the data url for glue? | https://api.github.com/repos/huggingface/datasets/issues/1733/events | null | https://api.github.com/repos/huggingface/datasets/issues/1733/labels{/name} | 2021-01-13T08:37:40Z | null | false | null | null | 784,903,002 | [] | https://api.github.com/repos/huggingface/datasets/issues/1733 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | NONE | Hi
my code sometimes fails due to a connection issue with GLUE. Could you tell me which URL the datasets library tries to read GLUE from, so I can test whether the issue is on the side of the machines I am working on or not
thanks | 2021-08-04T18:13:55Z | https://github.com/huggingface/datasets/issues/1733 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1733/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1732/comments | https://api.github.com/repos/huggingface/datasets/issues/1732/timeline | 2021-01-14T10:19:41Z | null | null | MDExOlB1bGxSZXF1ZXN0NTUzOTkzNTAx | closed | [] | false | 1,732 | {
"avatar_url": "https://avatars.githubusercontent.com/u/11708999?v=4",
"events_url": "https://api.github.com/users/mounicam/events{/privacy}",
"followers_url": "https://api.github.com/users/mounicam/followers",
"following_url": "https://api.github.com/users/mounicam/following{/other_user}",
"gists_url": "htt... | [GEM Dataset] Added TurkCorpus, an evaluation dataset for sentence simplification. | https://api.github.com/repos/huggingface/datasets/issues/1732/events | null | https://api.github.com/repos/huggingface/datasets/issues/1732/labels{/name} | 2021-01-13T07:50:19Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1732.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1732",
"merged_at": "2021-01-14T10:19:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1732.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 784,874,490 | [] | https://api.github.com/repos/huggingface/datasets/issues/1732 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | We want to use TurkCorpus for validation and testing of the sentence simplification task. | 2021-01-14T10:19:41Z | https://github.com/huggingface/datasets/pull/1732 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1732/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1731/comments | https://api.github.com/repos/huggingface/datasets/issues/1731/timeline | 2021-01-13T11:17:40Z | null | completed | MDU6SXNzdWU3ODQ3NDQ2NzQ= | closed | [] | null | 1,731 | {
"avatar_url": "https://avatars.githubusercontent.com/u/13365326?v=4",
"events_url": "https://api.github.com/users/yangp725/events{/privacy}",
"followers_url": "https://api.github.com/users/yangp725/followers",
"following_url": "https://api.github.com/users/yangp725/following{/other_user}",
"gists_url": "htt... | Couldn't reach swda.py | https://api.github.com/repos/huggingface/datasets/issues/1731/events | null | https://api.github.com/repos/huggingface/datasets/issues/1731/labels{/name} | 2021-01-13T02:57:40Z | null | false | null | null | 784,744,674 | [] | https://api.github.com/repos/huggingface/datasets/issues/1731 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | NONE | ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.2.0/datasets/swda/swda.py
| 2021-01-13T11:17:40Z | https://github.com/huggingface/datasets/issues/1731 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1731/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1730/comments | https://api.github.com/repos/huggingface/datasets/issues/1730/timeline | 2021-01-13T10:19:46Z | null | null | MDExOlB1bGxSZXF1ZXN0NTUzNzgxMDY0 | closed | [] | false | 1,730 | {
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https:... | Add MNIST dataset | https://api.github.com/repos/huggingface/datasets/issues/1730/events | null | https://api.github.com/repos/huggingface/datasets/issues/1730/labels{/name} | 2021-01-12T21:48:02Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1730.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1730",
"merged_at": "2021-01-13T10:19:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1730.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 784,617,525 | [] | https://api.github.com/repos/huggingface/datasets/issues/1730 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | This PR adds the MNIST dataset to the library. | 2021-01-13T10:19:47Z | https://github.com/huggingface/datasets/pull/1730 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1730/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1729/comments | https://api.github.com/repos/huggingface/datasets/issues/1729/timeline | 2021-03-31T04:24:07Z | null | completed | MDU6SXNzdWU3ODQ1NjU4OTg= | closed | [] | null | 1,729 | {
"avatar_url": "https://avatars.githubusercontent.com/u/28235457?v=4",
"events_url": "https://api.github.com/users/pablodz/events{/privacy}",
"followers_url": "https://api.github.com/users/pablodz/followers",
"following_url": "https://api.github.com/users/pablodz/following{/other_user}",
"gists_url": "https:... | Is there support for Deep learning datasets? | https://api.github.com/repos/huggingface/datasets/issues/1729/events | null | https://api.github.com/repos/huggingface/datasets/issues/1729/labels{/name} | 2021-01-12T20:22:41Z | null | false | null | null | 784,565,898 | [] | https://api.github.com/repos/huggingface/datasets/issues/1729 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | NONE | I looked around this repository and looking the datasets I think that there's no support for images-datasets. Or am I missing something? For example to add a repo like this https://github.com/DZPeru/fish-datasets | 2021-03-31T04:24:07Z | https://github.com/huggingface/datasets/issues/1729 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1729/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1728/comments | https://api.github.com/repos/huggingface/datasets/issues/1728/timeline | 2021-01-18T19:15:32Z | null | completed | MDU6SXNzdWU3ODQ0NTgzNDI= | closed | [] | null | 1,728 | {
"avatar_url": "https://avatars.githubusercontent.com/u/18645407?v=4",
"events_url": "https://api.github.com/users/ameet-1997/events{/privacy}",
"followers_url": "https://api.github.com/users/ameet-1997/followers",
"following_url": "https://api.github.com/users/ameet-1997/following{/other_user}",
"gists_url"... | Add an entry to an arrow dataset | https://api.github.com/repos/huggingface/datasets/issues/1728/events | null | https://api.github.com/repos/huggingface/datasets/issues/1728/labels{/name} | 2021-01-12T18:01:47Z | null | false | null | null | 784,458,342 | [] | https://api.github.com/repos/huggingface/datasets/issues/1728 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | NONE | Is it possible to add an entry to a dataset object?
**Motivation: I want to transform the sentences in the dataset and add them to the original dataset**
For example, say we have the following code:
``` python
from datasets import load_dataset
# Load a dataset and print the first examples in the training s... | 2021-01-18T19:15:32Z | https://github.com/huggingface/datasets/issues/1728 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1728/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1727/comments | https://api.github.com/repos/huggingface/datasets/issues/1727/timeline | 2022-06-01T16:06:02Z | null | completed | MDU6SXNzdWU3ODQ0MzUxMzE= | closed | [] | null | 1,727 | {
"avatar_url": "https://avatars.githubusercontent.com/u/6603920?v=4",
"events_url": "https://api.github.com/users/nadavo/events{/privacy}",
"followers_url": "https://api.github.com/users/nadavo/followers",
"following_url": "https://api.github.com/users/nadavo/following{/other_user}",
"gists_url": "https://ap... | BLEURT score calculation raises UnrecognizedFlagError | https://api.github.com/repos/huggingface/datasets/issues/1727/events | null | https://api.github.com/repos/huggingface/datasets/issues/1727/labels{/name} | 2021-01-12T17:27:02Z | null | false | null | null | 784,435,131 | [] | https://api.github.com/repos/huggingface/datasets/issues/1727 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | NONE | Calling the `compute` method for **bleurt** metric fails with an `UnrecognizedFlagError` for `FLAGS.bleurt_batch_size`.
My environment:
```
python==3.8.5
datasets==1.2.0
tensorflow==2.3.1
cudatoolkit==11.0.221
```
Test code for reproducing the error:
```
from datasets import load_metric
bleurt = load_me... | 2022-06-01T16:06:02Z | https://github.com/huggingface/datasets/issues/1727 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1727/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1726/comments | https://api.github.com/repos/huggingface/datasets/issues/1726/timeline | 2021-01-19T16:42:32Z | null | null | MDExOlB1bGxSZXF1ZXN0NTUzNTQ0ODg4 | closed | [] | false | 1,726 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | Offline loading | https://api.github.com/repos/huggingface/datasets/issues/1726/events | null | https://api.github.com/repos/huggingface/datasets/issues/1726/labels{/name} | 2021-01-12T15:21:57Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1726.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1726",
"merged_at": "2021-01-19T16:42:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1726.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 784,336,370 | [] | https://api.github.com/repos/huggingface/datasets/issues/1726 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | MEMBER | As discussed in #824 it would be cool to make the library work in offline mode.
Currently, if there is no internet connection, modules (datasets or metrics) that have already been loaded in the past can't be loaded, and a ConnectionError is raised.
This is because `prepare_module` fetches online for the latest vers... | 2022-02-15T10:32:10Z | https://github.com/huggingface/datasets/pull/1726 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1726/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1725/comments | https://api.github.com/repos/huggingface/datasets/issues/1725/timeline | 2022-06-01T16:00:59Z | null | completed | MDU6SXNzdWU3ODQxODIyNzM= | closed | [] | null | 1,725 | {
"avatar_url": "https://avatars.githubusercontent.com/u/41193842?v=4",
"events_url": "https://api.github.com/users/xinjicong/events{/privacy}",
"followers_url": "https://api.github.com/users/xinjicong/followers",
"following_url": "https://api.github.com/users/xinjicong/following{/other_user}",
"gists_url": "... | load the local dataset | https://api.github.com/repos/huggingface/datasets/issues/1725/events | null | https://api.github.com/repos/huggingface/datasets/issues/1725/labels{/name} | 2021-01-12T12:12:55Z | null | false | null | null | 784,182,273 | [] | https://api.github.com/repos/huggingface/datasets/issues/1725 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | NONE | your guidebook's example is like
>>> from datasets import load_dataset
>>> dataset = load_dataset('json', data_files='my_file.json')
but the first arg is a path...
so what should I do if I want to load a local dataset for model training?
I will be grateful if you can help me handle this problem!
thanks a lot! | 2022-06-01T16:00:59Z | https://github.com/huggingface/datasets/issues/1725 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1725/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1723/comments | https://api.github.com/repos/huggingface/datasets/issues/1723/timeline | 2021-01-26T17:02:08Z | null | null | MDExOlB1bGxSZXF1ZXN0NTUzMjQ4MzU1 | closed | [] | false | 1,723 | {
"avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4",
"events_url": "https://api.github.com/users/philschmid/events{/privacy}",
"followers_url": "https://api.github.com/users/philschmid/followers",
"following_url": "https://api.github.com/users/philschmid/following{/other_user}",
"gists_url"... | ADD S3 support for downloading and uploading processed datasets | https://api.github.com/repos/huggingface/datasets/issues/1723/events | null | https://api.github.com/repos/huggingface/datasets/issues/1723/labels{/name} | 2021-01-12T07:17:34Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1723.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1723",
"merged_at": "2021-01-26T17:02:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1723.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 783,982,100 | [] | https://api.github.com/repos/huggingface/datasets/issues/1723 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | MEMBER | # What does this PR do?
This PR adds the functionality to load and save `datasets` from and to s3.
You can save `datasets` with either `Dataset.save_to_disk()` or `DatasetDict.save_to_disk`.
You can load `datasets` with either `load_from_disk` or `Dataset.load_from_disk()`, `DatasetDict.load_from_disk()`.
Lo... | 2021-01-26T17:02:08Z | https://github.com/huggingface/datasets/pull/1723 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 3,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1723/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1724/comments | https://api.github.com/repos/huggingface/datasets/issues/1724/timeline | 2022-10-05T12:39:07Z | null | completed | MDU6SXNzdWU3ODQwMjMzMzg= | closed | [] | null | 1,724 | {
"avatar_url": "https://avatars.githubusercontent.com/u/49967236?v=4",
"events_url": "https://api.github.com/users/lkcao/events{/privacy}",
"followers_url": "https://api.github.com/users/lkcao/followers",
"following_url": "https://api.github.com/users/lkcao/following{/other_user}",
"gists_url": "https://api.... | could not run models on a offline server successfully | https://api.github.com/repos/huggingface/datasets/issues/1724/events | null | https://api.github.com/repos/huggingface/datasets/issues/1724/labels{/name} | 2021-01-12T06:08:06Z | null | false | null | null | 784,023,338 | [] | https://api.github.com/repos/huggingface/datasets/issues/1724 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | NONE | Hi, I really need your help about this.
I am trying to fine-tune a RoBERTa model on a remote server that strictly bans internet access. I tried to install all the packages by hand and to run run_mlm.py on the server. It works well on Colab, but when I try to run it on this offline server, it shows:
, the corpus contains around 600,000 tokens and annotates a rich set of entity types including persons, or... | 2021-03-31T14:23:49Z | https://github.com/huggingface/datasets/pull/1720 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1720/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1719/comments | https://api.github.com/repos/huggingface/datasets/issues/1719/timeline | 2021-01-11T18:45:02Z | null | null | MDExOlB1bGxSZXF1ZXN0NTUyODk3MzY4 | closed | [] | false | 1,719 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | Fix column list comparison in transmit format | https://api.github.com/repos/huggingface/datasets/issues/1719/events | null | https://api.github.com/repos/huggingface/datasets/issues/1719/labels{/name} | 2021-01-11T17:23:56Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1719.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1719",
"merged_at": "2021-01-11T18:45:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1719.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 783,557,542 | [] | https://api.github.com/repos/huggingface/datasets/issues/1719 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | MEMBER | As noticed in #1718 the cache might not reload the cache files when new columns were added.
This is due to an issue in `transmit_format`, where the column list comparison fails because the ordering is not deterministic. This causes `transmit_format` to apply an unnecessary `set_format` transform with shuffled col...
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1719/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1718/comments | https://api.github.com/repos/huggingface/datasets/issues/1718/timeline | 2021-01-26T02:47:59Z | null | completed | MDU6SXNzdWU3ODM0NzQ3NTM= | closed | [] | null | 1,718 | {
"avatar_url": "https://avatars.githubusercontent.com/u/18296312?v=4",
"events_url": "https://api.github.com/users/ofirzaf/events{/privacy}",
"followers_url": "https://api.github.com/users/ofirzaf/followers",
"following_url": "https://api.github.com/users/ofirzaf/following{/other_user}",
"gists_url": "https:... | Possible cache miss in datasets | https://api.github.com/repos/huggingface/datasets/issues/1718/events | null | https://api.github.com/repos/huggingface/datasets/issues/1718/labels{/name} | 2021-01-11T15:37:31Z | null | false | null | null | 783,474,753 | [] | https://api.github.com/repos/huggingface/datasets/issues/1718 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | NONE | Hi,
I am using the datasets package, and even though I run the same data processing functions, datasets always recomputes the function instead of using the cache.
I have attached an example script that, for me, reproduces the problem.
In the attached example the second map function always recomputes instead of loading fr... | 2022-06-29T14:54:42Z | https://github.com/huggingface/datasets/issues/1718 | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1718/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1717/comments | https://api.github.com/repos/huggingface/datasets/issues/1717/timeline | 2021-01-26T02:52:17Z | null | completed | MDU6SXNzdWU3ODMwNzQyNTU= | closed | [] | null | 1,717 | {
"avatar_url": "https://avatars.githubusercontent.com/u/3091916?v=4",
"events_url": "https://api.github.com/users/dwadden/events{/privacy}",
"followers_url": "https://api.github.com/users/dwadden/followers",
"following_url": "https://api.github.com/users/dwadden/following{/other_user}",
"gists_url": "https:/... | SciFact dataset - minor changes | https://api.github.com/repos/huggingface/datasets/issues/1717/events | null | https://api.github.com/repos/huggingface/datasets/issues/1717/labels{/name} | 2021-01-11T05:26:40Z | null | false | null | null | 783,074,255 | [] | https://api.github.com/repos/huggingface/datasets/issues/1717 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | Hi,
SciFact dataset creator here. First of all, thanks for adding the dataset to Huggingface, much appreciated!
I'd like to make a few minor changes, including the citation information and the `_URL` from which to download the dataset. Can I submit a PR for this?
It also looks like the dataset is being downloa... | 2021-01-26T02:52:17Z | https://github.com/huggingface/datasets/issues/1717 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1717/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1716/comments | https://api.github.com/repos/huggingface/datasets/issues/1716/timeline | 2021-01-18T14:21:42Z | null | null | MDExOlB1bGxSZXF1ZXN0NTUyMjgzNzE5 | closed | [] | false | 1,716 | {
"avatar_url": "https://avatars.githubusercontent.com/u/48222101?v=4",
"events_url": "https://api.github.com/users/kushal2000/events{/privacy}",
"followers_url": "https://api.github.com/users/kushal2000/followers",
"following_url": "https://api.github.com/users/kushal2000/following{/other_user}",
"gists_url"... | Add Hatexplain Dataset | https://api.github.com/repos/huggingface/datasets/issues/1716/events | null | https://api.github.com/repos/huggingface/datasets/issues/1716/labels{/name} | 2021-01-10T13:30:01Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1716.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1716",
"merged_at": "2021-01-18T14:21:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1716.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 782,819,006 | [] | https://api.github.com/repos/huggingface/datasets/issues/1716 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | Adding Hatexplain - the first benchmark hate speech dataset covering multiple aspects of the issue | 2021-01-18T14:21:42Z | https://github.com/huggingface/datasets/pull/1716 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1716/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1715/comments | https://api.github.com/repos/huggingface/datasets/issues/1715/timeline | 2021-01-12T17:14:33Z | null | null | MDExOlB1bGxSZXF1ZXN0NTUyMjM2NDA5 | closed | [] | false | 1,715 | {
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "htt... | add Korean intonation-aided intention identification dataset | https://api.github.com/repos/huggingface/datasets/issues/1715/events | null | https://api.github.com/repos/huggingface/datasets/issues/1715/labels{/name} | 2021-01-10T06:29:04Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1715.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1715",
"merged_at": "2021-01-12T17:14:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1715.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 782,754,441 | [] | https://api.github.com/repos/huggingface/datasets/issues/1715 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | MEMBER | 2021-09-17T16:54:13Z | https://github.com/huggingface/datasets/pull/1715 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1715/reactions"
} | true | |
https://api.github.com/repos/huggingface/datasets/issues/1714/comments | https://api.github.com/repos/huggingface/datasets/issues/1714/timeline | 2021-01-13T16:05:24Z | null | null | MDExOlB1bGxSZXF1ZXN0NTUxOTc3MDA0 | closed | [] | false | 1,714 | {
"avatar_url": "https://avatars.githubusercontent.com/u/15869827?v=4",
"events_url": "https://api.github.com/users/maxbartolo/events{/privacy}",
"followers_url": "https://api.github.com/users/maxbartolo/followers",
"following_url": "https://api.github.com/users/maxbartolo/following{/other_user}",
"gists_url"... | Adding adversarialQA dataset | https://api.github.com/repos/huggingface/datasets/issues/1714/events | null | https://api.github.com/repos/huggingface/datasets/issues/1714/labels{/name} | 2021-01-08T21:46:09Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1714.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1714",
"merged_at": "2021-01-13T16:05:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1714.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 782,416,276 | [] | https://api.github.com/repos/huggingface/datasets/issues/1714 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | Adding the adversarialQA dataset (https://adversarialqa.github.io/) from Beat the AI (https://arxiv.org/abs/2002.00293) | 2021-01-13T16:05:24Z | https://github.com/huggingface/datasets/pull/1714 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1714/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1713/comments | https://api.github.com/repos/huggingface/datasets/issues/1713/timeline | 2021-09-17T12:47:40Z | null | completed | MDU6SXNzdWU3ODIzMzc3MjM= | closed | [] | null | 1,713 | {
"avatar_url": "https://avatars.githubusercontent.com/u/9393002?v=4",
"events_url": "https://api.github.com/users/pranav-s/events{/privacy}",
"followers_url": "https://api.github.com/users/pranav-s/followers",
"following_url": "https://api.github.com/users/pranav-s/following{/other_user}",
"gists_url": "http... | Installation using conda | https://api.github.com/repos/huggingface/datasets/issues/1713/events | null | https://api.github.com/repos/huggingface/datasets/issues/1713/labels{/name} | 2021-01-08T19:12:15Z | null | false | null | null | 782,337,723 | [] | https://api.github.com/repos/huggingface/datasets/issues/1713 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | NONE | Will a conda package for installing datasets be added to the huggingface conda channel? I have installed transformers using conda and would like to use the datasets library to use some of the scripts in the transformers/examples folder but am unable to do so at the moment as datasets can only be installed using pip and... | 2021-09-17T12:47:40Z | https://github.com/huggingface/datasets/issues/1713 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1713/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1712/comments | https://api.github.com/repos/huggingface/datasets/issues/1712/timeline | 2021-01-21T10:31:11Z | null | null | MDExOlB1bGxSZXF1ZXN0NTUxODkxMDk4 | closed | [] | false | 1,712 | {
"avatar_url": "https://avatars.githubusercontent.com/u/1551356?v=4",
"events_url": "https://api.github.com/users/eusip/events{/privacy}",
"followers_url": "https://api.github.com/users/eusip/followers",
"following_url": "https://api.github.com/users/eusip/following{/other_user}",
"gists_url": "https://api.g... | Silicone | https://api.github.com/repos/huggingface/datasets/issues/1712/events | null | https://api.github.com/repos/huggingface/datasets/issues/1712/labels{/name} | 2021-01-08T18:24:18Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1712.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1712",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1712.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1712"
} | 782,313,097 | [] | https://api.github.com/repos/huggingface/datasets/issues/1712 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | My collaborators and I within the Affective Computing team at Telecom Paris would like to push our spoken dialogue dataset for publication. | 2021-01-21T14:12:37Z | https://github.com/huggingface/datasets/pull/1712 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1712/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1711/comments | https://api.github.com/repos/huggingface/datasets/issues/1711/timeline | 2021-01-11T09:23:19Z | null | null | MDExOlB1bGxSZXF1ZXN0NTUxNzQxODA2 | closed | [] | false | 1,711 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | Fix windows path scheme in cached path | https://api.github.com/repos/huggingface/datasets/issues/1711/events | null | https://api.github.com/repos/huggingface/datasets/issues/1711/labels{/name} | 2021-01-08T13:45:56Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1711.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1711",
"merged_at": "2021-01-11T09:23:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1711.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 782,129,083 | [] | https://api.github.com/repos/huggingface/datasets/issues/1711 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | MEMBER | As noticed in #807, there's currently an issue with `cached_path` not raising `FileNotFoundError` on Windows for absolute paths. This is due to the way we check whether a path is local or not. The check on the scheme using urlparse was incomplete.
I fixed this and added tests. | 2021-01-11T09:23:20Z | https://github.com/huggingface/datasets/pull/1711 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1711/reactions"
} | true |
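The PR body for #1711 above says the local-vs-remote check on the urlparse scheme was incomplete on Windows. A minimal sketch of the underlying pitfall, using a hypothetical `is_local_path` helper (not the actual `datasets` implementation): on Windows, `urlparse` treats the drive letter of an absolute path as a URL scheme, so a naive "has a scheme means remote" check misclassifies such paths.

```python
from urllib.parse import urlparse

def is_local_path(url_or_filename: str) -> bool:
    # urlparse(r"C:\Users\me\data.txt").scheme is "c" (lowercased drive
    # letter), so checking only `scheme != ""` would wrongly flag Windows
    # absolute paths as remote URLs. One fix: treat any single-letter
    # scheme as a Windows drive letter, i.e. a local path.
    scheme = urlparse(url_or_filename).scheme
    return scheme == "" or len(scheme) == 1

print(is_local_path(r"C:\Users\me\data.txt"))   # True: drive letter, not a scheme
print(is_local_path("/home/me/data.txt"))       # True: no scheme at all
print(is_local_path("https://example.com/f"))   # False: genuine URL
```

With a check like this, a non-existent Windows absolute path falls through to the local-file branch, where a `FileNotFoundError` can be raised as expected.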
https://api.github.com/repos/huggingface/datasets/issues/1710/comments | https://api.github.com/repos/huggingface/datasets/issues/1710/timeline | 2022-08-04T11:55:04Z | null | completed | MDU6SXNzdWU3ODE5MTQ5NTE= | closed | [] | null | 1,710 | {
"avatar_url": "https://avatars.githubusercontent.com/u/5771366?v=4",
"events_url": "https://api.github.com/users/fredriko/events{/privacy}",
"followers_url": "https://api.github.com/users/fredriko/followers",
"following_url": "https://api.github.com/users/fredriko/following{/other_user}",
"gists_url": "http... | IsADirectoryError when trying to download C4 | https://api.github.com/repos/huggingface/datasets/issues/1710/events | null | https://api.github.com/repos/huggingface/datasets/issues/1710/labels{/name} | 2021-01-08T07:31:30Z | null | false | null | null | 781,914,951 | [] | https://api.github.com/repos/huggingface/datasets/issues/1710 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | NONE | **TLDR**:
Downloading C4 fails, and the stack trace ends in an `IsADirectoryError`.
How can the problem be fixed?
**VERBOSE**:
I use Python version 3.7 and have the following dependencies listed in my project:
```
datasets==1.2.0
apache-beam==2.26.0
```
When runn... | 2022-08-04T11:56:10Z | https://github.com/huggingface/datasets/issues/1710 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1710/reactions"
} | false |