| url (string) | repository_url (string) | labels_url (string) | comments_url (string) | events_url (string) | html_url (string) | id (int64) | node_id (string) | number (int64) | title (string) | user (dict) | labels (list) | state (string) | locked (bool) | assignee (dict) | assignees (list) | milestone (dict) | comments (list) | created_at (timestamp[us, tz=UTC]) | updated_at (timestamp[us, tz=UTC]) | closed_at (timestamp[us, tz=UTC]) | author_association (string) | type (null) | active_lock_reason (null) | sub_issues_summary (dict) | body (string) | closed_by (dict) | reactions (dict) | timeline_url (string) | performed_via_github_app (null) | state_reason (string) | draft (float64) | pull_request (dict) | created_at_dt (timestamp[us, tz=UTC]) | closed_at_dt (timestamp[us, tz=UTC]) | time_to_close (duration[us]) | is_pull_request (bool) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/1849 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1849/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1849/comments | https://api.github.com/repos/huggingface/datasets/issues/1849/events | https://github.com/huggingface/datasets/issues/1849 | 804,292,971 | MDU6SXNzdWU4MDQyOTI5NzE= | 1,849 | Add TIMIT | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_use... | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "d93f0b",... | closed | false | null | [] | null | [
{
"author_association": "CONTRIBUTOR",
"body": "@patrickvonplaten Could you please help me with how the output text has to be represented in the data? TIMIT has Words, Phonemes and texts. Also has lot on info on the speaker and the dialect. Could you please help me? An example of how to arrange it would be ... | 2021-02-09T07:29:41Z | 2021-03-15T05:59:37Z | 2021-03-15T05:59:37Z | CONTRIBUTOR | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ## Adding a Dataset
- **Name:** *TIMIT*
- **Description:** *The TIMIT corpus of read speech has been designed to provide speech data for the acquisition of acoustic-phonetic knowledge and for the development and evaluation of automatic speech recognition systems*
- **Paper:** *Homepage*: http://groups.inf.ed.ac.uk... | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_use... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1849/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1849/timeline | null | completed | null | null | 2021-02-09T07:29:41Z | 2021-03-15T05:59:37Z | 33 days, 22:29:56 | false |
https://api.github.com/repos/huggingface/datasets/issues/1848 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1848/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1848/comments | https://api.github.com/repos/huggingface/datasets/issues/1848/events | https://github.com/huggingface/datasets/pull/1848 | 803,826,506 | MDExOlB1bGxSZXF1ZXN0NTY5Njg5ODU1 | 1,848 | Refactoring: Create config module | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | null | [] | 2021-02-08T18:43:51Z | 2021-02-10T12:29:35Z | 2021-02-10T12:29:35Z | MEMBER | null | null | null | Refactorize configuration settings into their own module.
This could be seen as a Pythonic singleton-like approach. Eventually a config instance class might be created. | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1848/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1848/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1848.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1848",
"merged_at": "2021-02-10T12:29:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1848.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 2021-02-08T18:43:51Z | 2021-02-10T12:29:35Z | 1 day, 17:45:44 | true |
https://api.github.com/repos/huggingface/datasets/issues/1846 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1846/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1846/comments | https://api.github.com/repos/huggingface/datasets/issues/1846/events | https://github.com/huggingface/datasets/pull/1846 | 803,806,380 | MDExOlB1bGxSZXF1ZXN0NTY5NjczMzcy | 1,846 | Make DownloadManager downloaded/extracted paths accessible | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | null | [
{
"author_association": "MEMBER",
"body": "First I was thinking of the dict, which makes sense for .download, mapping URL to downloaded path. However does this make sense for .extract, mapping the downloaded path to the extracted path? I ask this because the user did not chose the downloaded path, so this i... | 2021-02-08T18:14:42Z | 2021-02-25T14:10:18Z | 2021-02-25T14:10:18Z | MEMBER | null | null | null | Make accessible the file paths downloaded/extracted by DownloadManager.
Close #1831.
The approach:
- I set these paths as DownloadManager attributes: these are DownloadManager's concerns
- To access to these from DatasetBuilder, I set the DownloadManager instance as DatasetBuilder attribute: object composition | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1846/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1846/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1846.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1846",
"merged_at": "2021-02-25T14:10:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1846.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 2021-02-08T18:14:42Z | 2021-02-25T14:10:18Z | 16 days, 19:55:36 | true |
https://api.github.com/repos/huggingface/datasets/issues/1847 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1847/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1847/comments | https://api.github.com/repos/huggingface/datasets/issues/1847/events | https://github.com/huggingface/datasets/pull/1847 | 803,824,694 | MDExOlB1bGxSZXF1ZXN0NTY5Njg4NDY0 | 1,847 | [Metrics] Add word error metric metric | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_use... | [] | closed | false | null | [] | null | [
{
"author_association": "MEMBER",
"body": "Feel free to merge once the CI is all green ;)",
"created_at": "2021-02-09T17:40:46Z",
"html_url": "https://github.com/huggingface/datasets/pull/1847#issuecomment-776114811",
"id": 776114811,
"issue_url": "https://api.github.com/repos/huggingface/da... | 2021-02-08T18:41:15Z | 2021-02-09T17:53:21Z | 2021-02-09T17:53:21Z | CONTRIBUTOR | null | null | null | This PR adds the word error rate metric to datasets.
WER: https://en.wikipedia.org/wiki/Word_error_rate
for speech recognition. WER is the main metric used in ASR.
`jiwer` seems to be a solid library (see https://github.com/asteroid-team/asteroid/pull/329#discussion_r525158939) | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_use... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1847/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1847/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1847.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1847",
"merged_at": "2021-02-09T17:53:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1847.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 2021-02-08T18:41:15Z | 2021-02-09T17:53:21Z | 23:12:06 | true |
https://api.github.com/repos/huggingface/datasets/issues/1845 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1845/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1845/comments | https://api.github.com/repos/huggingface/datasets/issues/1845/events | https://github.com/huggingface/datasets/pull/1845 | 803,714,493 | MDExOlB1bGxSZXF1ZXN0NTY5NTk2MTIz | 1,845 | Enable logging propagation and remove logging handler | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [
{
"author_association": "MEMBER",
"body": "Thank you @lhoestq. This logging configuration makes more sense to me.\r\n\r\nOnce propagation is allowed, the end-user can customize logging behavior and add custom handlers to the proper top logger in the hierarchy.\r\n\r\nAnd I also agree with following the best... | 2021-02-08T16:22:13Z | 2021-02-09T14:22:38Z | 2021-02-09T14:22:37Z | MEMBER | null | null | null | We used to have logging propagation disabled because of this issue: https://github.com/tensorflow/tensorflow/issues/26691
But since it's now fixed we should re-enable it. This is important to keep the default logging behavior for users, and propagation is also needed for pytest fixtures as asked in #1826
I also re... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1845/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1845/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1845.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1845",
"merged_at": "2021-02-09T14:22:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1845.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 2021-02-08T16:22:13Z | 2021-02-09T14:22:37Z | 22:00:24 | true |
https://api.github.com/repos/huggingface/datasets/issues/1844 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1844/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1844/comments | https://api.github.com/repos/huggingface/datasets/issues/1844/events | https://github.com/huggingface/datasets/issues/1844 | 803,588,125 | MDU6SXNzdWU4MDM1ODgxMjU= | 1,844 | Update Open Subtitles corpus with original sentence IDs | {
"avatar_url": "https://avatars.githubusercontent.com/u/19476123?v=4",
"events_url": "https://api.github.com/users/Valahaar/events{/privacy}",
"followers_url": "https://api.github.com/users/Valahaar/followers",
"following_url": "https://api.github.com/users/Valahaar/following{/other_user}",
"gists_url": "htt... | [
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] | closed | false | null | [] | null | [
{
"author_association": "MEMBER",
"body": "Hi ! You're right this can can useful.\r\nThis should be easy to add, so feel free to give it a try if you want to contribute :)\r\nI think we just need to add it to the _generate_examples method of the OpenSubtitles dataset builder [here](https://github.com/huggin... | 2021-02-08T13:55:13Z | 2021-02-12T17:38:58Z | 2021-02-12T17:38:58Z | CONTRIBUTOR | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | Hi! It would be great if you could add the original sentence ids to [Open Subtitles](https://huggingface.co/datasets/open_subtitles).
I can think of two reasons: first, it's possible to gather sentences for an entire document (the original ids contain media id, subtitle file id and sentence id), therefore somewhat a... | {
"avatar_url": "https://avatars.githubusercontent.com/u/19476123?v=4",
"events_url": "https://api.github.com/users/Valahaar/events{/privacy}",
"followers_url": "https://api.github.com/users/Valahaar/followers",
"following_url": "https://api.github.com/users/Valahaar/following{/other_user}",
"gists_url": "htt... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1844/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1844/timeline | null | completed | null | null | 2021-02-08T13:55:13Z | 2021-02-12T17:38:58Z | 4 days, 3:43:45 | false |
https://api.github.com/repos/huggingface/datasets/issues/1842 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1842/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1842/comments | https://api.github.com/repos/huggingface/datasets/issues/1842/events | https://github.com/huggingface/datasets/issues/1842 | 803,563,149 | MDU6SXNzdWU4MDM1NjMxNDk= | 1,842 | Add AMI Corpus | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_use... | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "d93f0b",... | closed | false | null | [] | null | [
{
"author_association": "COLLABORATOR",
"body": "Available here: ~https://huggingface.co/datasets/ami~ https://huggingface.co/datasets/edinburghcstr/ami",
"created_at": "2022-10-05T12:45:43Z",
"html_url": "https://github.com/huggingface/datasets/issues/1842#issuecomment-1268388393",
"id": 126838... | 2021-02-08T13:25:00Z | 2023-02-28T16:29:22Z | 2023-02-28T16:29:22Z | CONTRIBUTOR | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ## Adding a Dataset
- **Name:** *AMI*
- **Description:** *The AMI Meeting Corpus is a multi-modal data set consisting of 100 hours of meeting recordings. For a gentle introduction to the corpus, see the corpus overview. To access the data, follow the directions given there. Around two-thirds of the data has been elic... | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1842/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1842/timeline | null | completed | null | null | 2021-02-08T13:25:00Z | 2023-02-28T16:29:22Z | 750 days, 3:04:22 | false |
https://api.github.com/repos/huggingface/datasets/issues/1843 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1843/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1843/comments | https://api.github.com/repos/huggingface/datasets/issues/1843/events | https://github.com/huggingface/datasets/issues/1843 | 803,565,393 | MDU6SXNzdWU4MDM1NjUzOTM= | 1,843 | MustC Speech Translation | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_use... | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "d93f0b",... | open | false | null | [] | null | [
{
"author_association": "CONTRIBUTOR",
"body": "Hi @patrickvonplaten I would like to work on this dataset. \r\n\r\nThanks! ",
"created_at": "2021-02-09T04:21:11Z",
"html_url": "https://github.com/huggingface/datasets/issues/1843#issuecomment-775652224",
"id": 775652224,
"issue_url": "https:... | 2021-02-08T13:27:45Z | 2021-05-14T14:53:34Z | null | CONTRIBUTOR | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ## Adding a Dataset
- **Name:** *IWSLT19*
- **Description:** *The Speech Translation Task addresses the translation of English audio into German and Portuguese text.*
- **Hompage:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation*
- **Data:** *https://sites.google.com/view/iwslt-evaluation-2... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1843/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1843/timeline | null | null | null | null | 2021-02-08T13:27:45Z | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1841 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1841/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1841/comments | https://api.github.com/repos/huggingface/datasets/issues/1841/events | https://github.com/huggingface/datasets/issues/1841 | 803,561,123 | MDU6SXNzdWU4MDM1NjExMjM= | 1,841 | Add ljspeech | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_use... | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "d93f0b",... | closed | false | null | [] | null | [] | 2021-02-08T13:22:26Z | 2021-03-15T05:59:02Z | 2021-03-15T05:59:02Z | CONTRIBUTOR | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ## Adding a Dataset
- **Name:** *ljspeech*
- **Description:** *This is a public domain speech dataset consisting of 13,100 short audio clips of a single speaker reading passages from 7 non-fiction books. A transcription is provided for each clip. Clips vary in length from 1 to 10 seconds and have a total length of ap... | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_use... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1841/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1841/timeline | null | completed | null | null | 2021-02-08T13:22:26Z | 2021-03-15T05:59:02Z | 34 days, 16:36:36 | false |
https://api.github.com/repos/huggingface/datasets/issues/1839 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1839/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1839/comments | https://api.github.com/repos/huggingface/datasets/issues/1839/events | https://github.com/huggingface/datasets/issues/1839 | 803,559,164 | MDU6SXNzdWU4MDM1NTkxNjQ= | 1,839 | Add Voxforge | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_use... | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "d93f0b",... | open | false | null | [] | null | [] | 2021-02-08T13:19:56Z | 2021-02-08T13:28:31Z | null | CONTRIBUTOR | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ## Adding a Dataset
- **Name:** *voxforge*
- **Description:** *VoxForge is a language classification dataset. It consists of user submitted audio clips submitted to the website. In this release, data from 6 languages is collected - English, Spanish, French, German, Russian, and Italian. Since the website is constant... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1839/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1839/timeline | null | null | null | null | 2021-02-08T13:19:56Z | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1840 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1840/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1840/comments | https://api.github.com/repos/huggingface/datasets/issues/1840/events | https://github.com/huggingface/datasets/issues/1840 | 803,560,039 | MDU6SXNzdWU4MDM1NjAwMzk= | 1,840 | Add common voice | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_use... | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "d93f0b",... | closed | false | null | [] | null | [
{
"author_association": "CONTRIBUTOR",
"body": "I have started working on adding this dataset.",
"created_at": "2021-02-12T08:09:25Z",
"html_url": "https://github.com/huggingface/datasets/issues/1840#issuecomment-778045229",
"id": 778045229,
"issue_url": "https://api.github.com/repos/hugging... | 2021-02-08T13:21:05Z | 2022-03-20T15:23:40Z | 2021-03-15T05:56:21Z | CONTRIBUTOR | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ## Adding a Dataset
- **Name:** *common voice*
- **Description:** *Mozilla Common Voice Dataset*
- **Paper:** Homepage: https://voice.mozilla.org/en/datasets
- **Data:** https://voice.mozilla.org/en/datasets
- **Motivation:** Important speech dataset
- **TFDatasets Implementation**: https://www.tensorflow.org/dat... | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_use... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1840/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1840/timeline | null | completed | null | null | 2021-02-08T13:21:05Z | 2021-03-15T05:56:21Z | 34 days, 16:35:16 | false |
https://api.github.com/repos/huggingface/datasets/issues/1836 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1836/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1836/comments | https://api.github.com/repos/huggingface/datasets/issues/1836/events | https://github.com/huggingface/datasets/issues/1836 | 803,531,837 | MDU6SXNzdWU4MDM1MzE4Mzc= | 1,836 | test.json has been removed from the limit dataset repo (breaks dataset) | {
"avatar_url": "https://avatars.githubusercontent.com/u/237550?v=4",
"events_url": "https://api.github.com/users/Paethon/events{/privacy}",
"followers_url": "https://api.github.com/users/Paethon/followers",
"following_url": "https://api.github.com/users/Paethon/following{/other_user}",
"gists_url": "https://... | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | [] | null | [
{
"author_association": "MEMBER",
"body": "Thanks for the heads up ! I'm opening a PR to fix that",
"created_at": "2021-02-10T15:39:32Z",
"html_url": "https://github.com/huggingface/datasets/issues/1836#issuecomment-776799708",
"id": 776799708,
"issue_url": "https://api.github.com/repos/hugg... | 2021-02-08T12:45:53Z | 2021-02-10T16:14:58Z | 2021-02-10T16:14:58Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | https://github.com/huggingface/datasets/blob/16042b233dbff2a7585110134e969204c69322c3/datasets/limit/limit.py#L51
The URL is not valid anymore since test.json has been removed in master for some reason. Directly referencing the last commit works:
`https://raw.githubusercontent.com/ilmgut/limit_dataset/0707d3989cd... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1836/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1836/timeline | null | completed | null | null | 2021-02-08T12:45:53Z | 2021-02-10T16:14:58Z | 2 days, 3:29:05 | false |
https://api.github.com/repos/huggingface/datasets/issues/1838 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1838/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1838/comments | https://api.github.com/repos/huggingface/datasets/issues/1838/events | https://github.com/huggingface/datasets/issues/1838 | 803,557,521 | MDU6SXNzdWU4MDM1NTc1MjE= | 1,838 | Add tedlium | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_use... | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "d93f0b",... | closed | false | null | [] | null | [
{
"author_association": "CONTRIBUTOR",
"body": "Hi @patrickvonplaten \r\nI can have a look to this dataset later since I am trying to add the OpenSLR dataset https://github.com/huggingface/datasets/pull/2173\r\nHopefully I have enough space since the compressed file is 21GB. The release 3 is even bigger: 54... | 2021-02-08T13:17:52Z | 2022-10-04T14:34:12Z | 2022-10-04T14:34:12Z | CONTRIBUTOR | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ## Adding a Dataset
- **Name:** *tedlium*
- **Description:** *The TED-LIUM 1-3 corpus consists of English-language TED talks, with transcriptions, sampled at 16kHz. It contains about 118 hours of speech.*
- **Paper:** Homepage: http://www.openslr.org/7/, https://lium.univ-lemans.fr/en/ted-lium2/ and https://www.openslr.org/51...
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1838/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1838/timeline | null | completed | null | null | 2021-02-08T13:17:52Z | 2022-10-04T14:34:12Z | 603 days, 1:16:20 | false |
https://api.github.com/repos/huggingface/datasets/issues/1837 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1837/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1837/comments | https://api.github.com/repos/huggingface/datasets/issues/1837/events | https://github.com/huggingface/datasets/issues/1837 | 803,555,650 | MDU6SXNzdWU4MDM1NTU2NTA= | 1,837 | Add VCTK | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_use... | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "d93f0b",... | closed | false | null | [] | null | [
{
"author_association": "CONTRIBUTOR",
"body": "@patrickvonplaten I'd like to take this, if nobody has already done it. I have added datasets before through the datasets sprint, but I feel rusty on the details, so I'll look at the guide as well as similar audio PRs (#1878 in particular comes to mind). If th... | 2021-02-08T13:15:28Z | 2021-12-28T15:05:08Z | 2021-12-28T15:05:08Z | CONTRIBUTOR | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ## Adding a Dataset
- **Name:** *VCTK*
- **Description:** *This CSTR VCTK Corpus includes speech data uttered by 110 English speakers with various accents. Each speaker reads out about 400 sentences, which were selected from a newspaper, the rainbow passage and an elicitation paragraph used for the speech accent arch... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1837/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1837/timeline | null | completed | null | null | 2021-02-08T13:15:28Z | 2021-12-28T15:05:08Z | 323 days, 1:49:40 | false |
https://api.github.com/repos/huggingface/datasets/issues/1835 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1835/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1835/comments | https://api.github.com/repos/huggingface/datasets/issues/1835/events | https://github.com/huggingface/datasets/issues/1835 | 803,524,790 | MDU6SXNzdWU4MDM1MjQ3OTA= | 1,835 | Add CHiME4 dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_use... | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "d93f0b",... | open | false | null | [] | null | [
{
"author_association": "NONE",
"body": "@patrickvonplaten not sure whether it is still needed, but willing to tackle this issue",
"created_at": "2023-12-26T18:31:40Z",
"html_url": "https://github.com/huggingface/datasets/issues/1835#issuecomment-1869707805",
"id": 1869707805,
"issue_url": "... | 2021-02-08T12:36:38Z | 2025-01-26T16:18:59Z | null | CONTRIBUTOR | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ## Adding a Dataset
- **Name:** Chime4
- **Description:** Chime4 is a dataset for automatic speech recognition. It is especially useful for evaluating models in a noisy environment and for multi-channel ASR.
- **Paper:** Dataset comes from a challenge: http://spandh.dcs.shef.ac.uk/chime_challenge/CHiME4/ . Results pape... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1835/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1835/timeline | null | null | null | null | 2021-02-08T12:36:38Z | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1834 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1834/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1834/comments | https://api.github.com/repos/huggingface/datasets/issues/1834/events | https://github.com/huggingface/datasets/pull/1834 | 803,517,094 | MDExOlB1bGxSZXF1ZXN0NTY5NDMzNDA4 | 1,834 | Fixes base_url of limit dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/237550?v=4",
"events_url": "https://api.github.com/users/Paethon/events{/privacy}",
"followers_url": "https://api.github.com/users/Paethon/followers",
"following_url": "https://api.github.com/users/Paethon/following{/other_user}",
"gists_url": "https://... | [] | closed | false | null | [] | null | [
{
"author_association": "NONE",
"body": "OK, apparently it is a lot more complicated than simply changing the URL? Going to make an issue.",
"created_at": "2021-02-08T12:42:50Z",
"html_url": "https://github.com/huggingface/datasets/pull/1834#issuecomment-775121567",
"id": 775121567,
"issue_u... | 2021-02-08T12:26:35Z | 2021-02-08T12:42:50Z | 2021-02-08T12:42:50Z | NONE | null | null | null | `test.json` is not available in the master branch of the repository anymore. Linking to a specific commit. | {
"avatar_url": "https://avatars.githubusercontent.com/u/237550?v=4",
"events_url": "https://api.github.com/users/Paethon/events{/privacy}",
"followers_url": "https://api.github.com/users/Paethon/followers",
"following_url": "https://api.github.com/users/Paethon/following{/other_user}",
"gists_url": "https://... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1834/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1834/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1834.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1834",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1834.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1834"
} | 2021-02-08T12:26:35Z | 2021-02-08T12:42:50Z | 0:16:15 | true |
https://api.github.com/repos/huggingface/datasets/issues/1833 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1833/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1833/comments | https://api.github.com/repos/huggingface/datasets/issues/1833/events | https://github.com/huggingface/datasets/pull/1833 | 803,120,978 | MDExOlB1bGxSZXF1ZXN0NTY5MDk5MTUx | 1,833 | Add OSCAR dataset card | {
"avatar_url": "https://avatars.githubusercontent.com/u/635220?v=4",
"events_url": "https://api.github.com/users/pjox/events{/privacy}",
"followers_url": "https://api.github.com/users/pjox/followers",
"following_url": "https://api.github.com/users/pjox/following{/other_user}",
"gists_url": "https://api.githu... | [] | closed | false | null | [] | null | [
{
"author_association": "CONTRIBUTOR",
"body": "@lhoestq Thanks for the suggestions! I agree with all of them. Should I accept them one by one or can I accept them all at once? When I try to load the whole diff GitHub is complaining and it does no render them well (probably my browser?) 😅 ",
"created_a... | 2021-02-08T01:39:49Z | 2021-02-12T14:09:25Z | 2021-02-12T14:08:24Z | CONTRIBUTOR | null | null | null | I added more information and completed the dataset card for OSCAR which was started by @lhoestq in his previous [PR](https://github.com/huggingface/datasets/pull/1824). | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1833/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1833/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1833.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1833",
"merged_at": "2021-02-12T14:08:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1833.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 2021-02-08T01:39:49Z | 2021-02-12T14:08:24Z | 4 days, 12:28:35 | true |
https://api.github.com/repos/huggingface/datasets/issues/1832 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1832/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1832/comments | https://api.github.com/repos/huggingface/datasets/issues/1832/events | https://github.com/huggingface/datasets/issues/1832 | 802,880,897 | MDU6SXNzdWU4MDI4ODA4OTc= | 1,832 | Looks like nokogumbo is up-to-date now, so this is no longer needed. | {
"avatar_url": "https://avatars.githubusercontent.com/u/68724553?v=4",
"events_url": "https://api.github.com/users/JimmyJim1/events{/privacy}",
"followers_url": "https://api.github.com/users/JimmyJim1/followers",
"following_url": "https://api.github.com/users/JimmyJim1/following{/other_user}",
"gists_url": "... | [] | closed | false | null | [] | null | [] | 2021-02-07T06:52:07Z | 2021-02-08T17:27:29Z | 2021-02-08T17:27:29Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | Looks like nokogumbo is up-to-date now, so this is no longer needed.
__Originally posted by @dependabot in https://github.com/discourse/discourse/pull/11373#issuecomment-738993432__ | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1832/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1832/timeline | null | completed | null | null | 2021-02-07T06:52:07Z | 2021-02-08T17:27:29Z | 1 day, 10:35:22 | false |
https://api.github.com/repos/huggingface/datasets/issues/1831 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1831/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1831/comments | https://api.github.com/repos/huggingface/datasets/issues/1831/events | https://github.com/huggingface/datasets/issues/1831 | 802,868,854 | MDU6SXNzdWU4MDI4Njg4NTQ= | 1,831 | Some question about raw dataset download info in the project . | {
"avatar_url": "https://avatars.githubusercontent.com/u/27874014?v=4",
"events_url": "https://api.github.com/users/svjack/events{/privacy}",
"followers_url": "https://api.github.com/users/svjack/followers",
"following_url": "https://api.github.com/users/svjack/following{/other_user}",
"gists_url": "https://a... | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
{
"author_association": "MEMBER",
"body": "Hi ! The `dl_manager` is a `DownloadManager` object and is responsible for downloading the raw data files.\r\nIt is used by dataset builders in their `_split_generators` method to download the raw data files that are necessary to build the datasets splits.\r\n\r\nT... | 2021-02-07T05:33:36Z | 2021-02-25T14:10:18Z | 2021-02-25T14:10:18Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | Hi, I reviewed the code in
https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py
The _split_generators function contains the actual logic for downloading the raw datasets with dl_manager,
and the Conll2003 class is used via import_main_class in the load_dataset function.
My question is that, with this logic i... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1831/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1831/timeline | null | completed | null | null | 2021-02-07T05:33:36Z | 2021-02-25T14:10:18Z | 18 days, 8:36:42 | false |
https://api.github.com/repos/huggingface/datasets/issues/1829 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1829/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1829/comments | https://api.github.com/repos/huggingface/datasets/issues/1829/events | https://github.com/huggingface/datasets/pull/1829 | 802,693,600 | MDExOlB1bGxSZXF1ZXN0NTY4NzgzNjA5 | 1,829 | Add Tweet Eval Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [] | 2021-02-06T12:36:25Z | 2021-02-08T13:17:54Z | 2021-02-08T13:17:53Z | CONTRIBUTOR | null | null | null | Closes Draft PR #1407.
Notes:
1. I have excluded `mapping.txt` from the dataset as it only contained the name mappings, which are already present in the ClassLabels.
2. I have also excluded the textual names for the emojis mentioned in the [mapping](https://github.com/cardiffnlp/tweeteval/blob/main/datasets/emoji/... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1829/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1829/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1829.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1829",
"merged_at": "2021-02-08T13:17:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1829.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 2021-02-06T12:36:25Z | 2021-02-08T13:17:53Z | 2 days, 0:41:28 | true |
https://api.github.com/repos/huggingface/datasets/issues/1830 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1830/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1830/comments | https://api.github.com/repos/huggingface/datasets/issues/1830/events | https://github.com/huggingface/datasets/issues/1830 | 802,790,075 | MDU6SXNzdWU4MDI3OTAwNzU= | 1,830 | using map on loaded Tokenizer 10x - 100x slower than default Tokenizer? | {
"avatar_url": "https://avatars.githubusercontent.com/u/7662740?v=4",
"events_url": "https://api.github.com/users/wumpusman/events{/privacy}",
"followers_url": "https://api.github.com/users/wumpusman/followers",
"following_url": "https://api.github.com/users/wumpusman/following{/other_user}",
"gists_url": "h... | [] | open | false | null | [] | null | [
{
"author_association": "MEMBER",
"body": "Hi @wumpusman \r\n`datasets` has a caching mechanism that allows to cache the results of `.map` so that when you want to re-run it later it doesn't recompute it again.\r\nSo when you do `.map`, what actually happens is:\r\n1. compute the hash used to identify your ... | 2021-02-06T21:00:26Z | 2021-02-24T21:56:14Z | null | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | This could totally relate to me misunderstanding particular call functions, but I added words to a GPT2Tokenizer and saved it to disk (note I'm only showing snippets but I can share more), and the map function ran much slower:
````
def save_tokenizer(original_tokenizer,text,path="simpledata/tokenizer"):
words_u... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1830/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1830/timeline | null | null | null | null | 2021-02-06T21:00:26Z | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1827 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1827/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1827/comments | https://api.github.com/repos/huggingface/datasets/issues/1827/events | https://github.com/huggingface/datasets/issues/1827 | 802,353,974 | MDU6SXNzdWU4MDIzNTM5NzQ= | 1,827 | Regarding On-the-fly Data Loading | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [
{
"author_association": "NONE",
"body": "Possible duplicate\r\n\r\n#1776 https://github.com/huggingface/datasets/issues/\r\n\r\nreally looking PR for this feature",
"created_at": "2021-02-06T01:43:56Z",
"html_url": "https://github.com/huggingface/datasets/issues/1827#issuecomment-774374760",
"id... | 2021-02-05T17:43:48Z | 2021-02-18T13:55:16Z | 2021-02-18T13:55:16Z | CONTRIBUTOR | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | Hi,
I was wondering if it is possible to load images/texts as a batch during the training process, without loading the entire dataset into RAM at any given point.
Thanks,
Gunjan | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url"... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1827/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1827/timeline | null | completed | null | null | 2021-02-05T17:43:48Z | 2021-02-18T13:55:16Z | 12 days, 20:11:28 | false |
https://api.github.com/repos/huggingface/datasets/issues/1828 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1828/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1828/comments | https://api.github.com/repos/huggingface/datasets/issues/1828/events | https://github.com/huggingface/datasets/pull/1828 | 802,449,234 | MDExOlB1bGxSZXF1ZXN0NTY4NTkwNDM2 | 1,828 | Add CelebA Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [
{
"author_association": "MEMBER",
"body": "Hi @gchhablani! Thanks for all the contributions! We definitely want more image datasets, but Face datasets are tricky in general, in this one includes predicting attributes such as Attractiveness, Gender, or Race, which can be pretty problematic.\r\n\r\nWould you ... | 2021-02-05T20:20:55Z | 2021-02-18T14:17:07Z | 2021-02-18T14:17:07Z | CONTRIBUTOR | null | null | null | Trying to add CelebA Dataset.
Need help with testing. Loading examples takes a lot of time so I am unable to generate the `dataset_infos.json` and unable to test. Also, need help with creating `dummy_data.zip`.
Additionally, trying to load a few examples using `load_dataset('./datasets/celeb_a',split='train[10:20]... | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url"... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1828/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1828/timeline | null | null | 1 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1828.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1828",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1828.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1828"
} | 2021-02-05T20:20:55Z | 2021-02-18T14:17:07Z | 12 days, 17:56:12 | true |
https://api.github.com/repos/huggingface/datasets/issues/1826 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1826/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1826/comments | https://api.github.com/repos/huggingface/datasets/issues/1826/events | https://github.com/huggingface/datasets/pull/1826 | 802,074,744 | MDExOlB1bGxSZXF1ZXN0NTY4Mjc4OTI2 | 1,826 | Print error message with filename when malformed CSV | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | null | [] | 2021-02-05T11:07:59Z | 2021-02-09T17:39:27Z | 2021-02-09T17:39:27Z | MEMBER | null | null | null | Print error message specifying filename when malformed CSV file.
Close #1821 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1826/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1826/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1826.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1826",
"merged_at": "2021-02-09T17:39:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1826.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 2021-02-05T11:07:59Z | 2021-02-09T17:39:27Z | 4 days, 6:31:28 | true |
https://api.github.com/repos/huggingface/datasets/issues/1825 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1825/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1825/comments | https://api.github.com/repos/huggingface/datasets/issues/1825/events | https://github.com/huggingface/datasets/issues/1825 | 802,073,925 | MDU6SXNzdWU4MDIwNzM5MjU= | 1,825 | Datasets library not suitable for huge text datasets. | {
"avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4",
"events_url": "https://api.github.com/users/avacaondata/events{/privacy}",
"followers_url": "https://api.github.com/users/avacaondata/followers",
"following_url": "https://api.github.com/users/avacaondata/following{/other_user}",
"gists_u... | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
{
"author_association": "MEMBER",
"body": "Hi ! Looks related to #861 \r\n\r\nYou are right: tokenizing a dataset using map takes a lot of space since it can store `input_ids` but also `token_type_ids`, `attention_mask` and `special_tokens_mask`. Moreover if your tokenization function returns python integer... | 2021-02-05T11:06:50Z | 2021-03-30T14:04:01Z | 2021-03-16T09:44:00Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | Hi,
I'm trying to use the datasets library to load a 187GB dataset of pure text, with the intention of building a Language Model. The problem is that the 187GB grows to several TB when processed by Datasets. First of all, I think the pre-tokenizing step (with tokenizer.map()) is not really designed for datasets this ... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1825/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1825/timeline | null | completed | null | null | 2021-02-05T11:06:50Z | 2021-03-16T09:44:00Z | 38 days, 22:37:10 | false |
https://api.github.com/repos/huggingface/datasets/issues/1824 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1824/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1824/comments | https://api.github.com/repos/huggingface/datasets/issues/1824/events | https://github.com/huggingface/datasets/pull/1824 | 802,048,281 | MDExOlB1bGxSZXF1ZXN0NTY4MjU3MTU3 | 1,824 | Add OSCAR dataset card | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [
{
"author_association": "CONTRIBUTOR",
"body": "Hi @lhoestq! When are you planning to release the version with this dataset?\r\n\r\nBTW: What a huge README file :astonished:",
"created_at": "2021-02-05T22:54:25Z",
"html_url": "https://github.com/huggingface/datasets/pull/1824#issuecomment-774331791"... | 2021-02-05T10:30:26Z | 2021-05-05T18:24:14Z | 2021-02-08T11:30:33Z | MEMBER | null | null | null | I started adding the dataset card for OSCAR !
For now it's just basic info for all the different configurations in `Dataset Structure`.
In particular, the Data Splits section tells how many samples there are for each config. The Data Instances section shows an example for each config, and it also shows the size in MB.... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 1,
"laugh": 0,
"rocket": 0,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1824/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1824/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1824.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1824",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1824.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1824"
} | 2021-02-05T10:30:26Z | 2021-02-08T11:30:33Z | 3 days, 1:00:07 | true |
https://api.github.com/repos/huggingface/datasets/issues/1822 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1822/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1822/comments | https://api.github.com/repos/huggingface/datasets/issues/1822/events | https://github.com/huggingface/datasets/pull/1822 | 802,003,835 | MDExOlB1bGxSZXF1ZXN0NTY4MjIxMzIz | 1,822 | Add Hindi Discourse Analysis Natural Language Inference Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/33565881?v=4",
"events_url": "https://api.github.com/users/avinsit123/events{/privacy}",
"followers_url": "https://api.github.com/users/avinsit123/followers",
"following_url": "https://api.github.com/users/avinsit123/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [
{
"author_association": "MEMBER",
"body": "Could you also run `make style` to fix the CI check on code formatting ?",
"created_at": "2021-02-10T14:51:32Z",
"html_url": "https://github.com/huggingface/datasets/pull/1822#issuecomment-776760493",
"id": 776760493,
"issue_url": "https://api.githu... | 2021-02-05T09:30:54Z | 2021-02-15T09:57:39Z | 2021-02-15T09:57:39Z | CONTRIBUTOR | null | null | null | # Dataset Card for Hindi Discourse Analysis Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#dat... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1822/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1822/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1822.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1822",
"merged_at": "2021-02-15T09:57:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1822.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 2021-02-05T09:30:54Z | 2021-02-15T09:57:39Z | 10 days, 0:26:45 | true |
https://api.github.com/repos/huggingface/datasets/issues/1823 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1823/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1823/comments | https://api.github.com/repos/huggingface/datasets/issues/1823/events | https://github.com/huggingface/datasets/pull/1823 | 802,042,181 | MDExOlB1bGxSZXF1ZXN0NTY4MjUyMjIx | 1,823 | Add FewRel Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [
{
"author_association": "CONTRIBUTOR",
"body": "Hi @lhoestq,\r\n\r\nSorry for the late response. What do you mean when you say \"adding names to default config\"? Should I handle \"pid2name\" in the same config as \"default\"?",
"created_at": "2021-02-18T14:00:16Z",
"html_url": "https://github.com/h... | 2021-02-05T10:22:03Z | 2021-03-01T11:56:20Z | 2021-03-01T10:21:39Z | CONTRIBUTOR | null | null | null | Hi,
This PR closes this [Card](https://github.com/huggingface/datasets/projects/1#card-53285184) and Issue #1757.
I wasn't sure how to add `pid2name` along with the dataset so I added it as a separate configuration. For each (head, tail, tokens) triplet, I have created one example. I have added the dictionary key... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1823/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1823/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1823.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1823",
"merged_at": "2021-03-01T10:21:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1823.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 2021-02-05T10:22:03Z | 2021-03-01T10:21:39Z | 23 days, 23:59:36 | true |
https://api.github.com/repos/huggingface/datasets/issues/1821 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1821/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1821/comments | https://api.github.com/repos/huggingface/datasets/issues/1821/events | https://github.com/huggingface/datasets/issues/1821 | 801,747,647 | MDU6SXNzdWU4MDE3NDc2NDc= | 1,821 | Provide better exception message when one of many files results in an exception | {
"avatar_url": "https://avatars.githubusercontent.com/u/5028974?v=4",
"events_url": "https://api.github.com/users/david-waterworth/events{/privacy}",
"followers_url": "https://api.github.com/users/david-waterworth/followers",
"following_url": "https://api.github.com/users/david-waterworth/following{/other_user... | [] | closed | false | null | [] | null | [
{
"author_association": "MEMBER",
    "body": "Hi!\r\n\r\nThank you for reporting this issue. I agree that the information about the exception should be more clear and explicit.\r\n\r\nI could take on this issue.\r\n\r\nIn the meantime, as you can see from the exception stack trace, HF Datasets uses pandas to r... | 2021-02-05T00:49:03Z | 2021-02-09T17:39:27Z | 2021-02-09T17:39:27Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | I find when I process many files, i.e.
```
train_files = glob.glob('train*.csv')
validation_files = glob.glob('validation*.csv')
datasets = load_dataset("csv", data_files=dict(train=train_files, validation=validation_files))
```
I sometimes encounter an error due to one of the files being malformed (i.e. no dat... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1821/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1821/timeline | null | completed | null | null | 2021-02-05T00:49:03Z | 2021-02-09T17:39:27Z | 4 days, 16:50:24 | false |
https://api.github.com/repos/huggingface/datasets/issues/1820 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1820/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1820/comments | https://api.github.com/repos/huggingface/datasets/issues/1820/events | https://github.com/huggingface/datasets/pull/1820 | 801,529,936 | MDExOlB1bGxSZXF1ZXN0NTY3ODI4OTg1 | 1,820 | Add metrics usage examples and tests | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [] | 2021-02-04T18:23:50Z | 2021-02-05T14:00:01Z | 2021-02-05T14:00:00Z | MEMBER | null | null | null | All metrics finally have usage examples and proper fast + slow tests :)
I added examples of usage for every metric, and I use doctest to make sure they all work as expected.
For "slow" metrics such as bert_score or bleurt, which require downloading + running a transformer model, the download + forward pass are only do... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1820/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1820/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1820.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1820",
"merged_at": "2021-02-05T14:00:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1820.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 2021-02-04T18:23:50Z | 2021-02-05T14:00:00Z | 19:36:10 | true |
https://api.github.com/repos/huggingface/datasets/issues/1819 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1819/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1819/comments | https://api.github.com/repos/huggingface/datasets/issues/1819/events | https://github.com/huggingface/datasets/pull/1819 | 801,448,670 | MDExOlB1bGxSZXF1ZXN0NTY3NzYyMzI2 | 1,819 | Fixed spelling `S3Fileystem` to `S3FileSystem` | {
"avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4",
"events_url": "https://api.github.com/users/philschmid/events{/privacy}",
"followers_url": "https://api.github.com/users/philschmid/followers",
"following_url": "https://api.github.com/users/philschmid/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [] | 2021-02-04T16:36:46Z | 2021-02-04T16:52:27Z | 2021-02-04T16:52:26Z | CONTRIBUTOR | null | null | null | Fixed documentation spelling errors.
Wrong `S3Fileystem`
Right `S3FileSystem` | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1819/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1819/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1819.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1819",
"merged_at": "2021-02-04T16:52:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1819.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 2021-02-04T16:36:46Z | 2021-02-04T16:52:26Z | 0:15:40 | true |
https://api.github.com/repos/huggingface/datasets/issues/1816 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1816/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1816/comments | https://api.github.com/repos/huggingface/datasets/issues/1816/events | https://github.com/huggingface/datasets/pull/1816 | 800,660,995 | MDExOlB1bGxSZXF1ZXN0NTY3MTExMjEx | 1,816 | Doc2dial rc update to latest version | {
"avatar_url": "https://avatars.githubusercontent.com/u/2062185?v=4",
"events_url": "https://api.github.com/users/songfeng/events{/privacy}",
"followers_url": "https://api.github.com/users/songfeng/followers",
"following_url": "https://api.github.com/users/songfeng/following{/other_user}",
"gists_url": "http... | [] | closed | false | null | [] | null | [
{
"author_association": "CONTRIBUTOR",
"body": "- update data loader and readme for latest version 1.0.1",
"created_at": "2021-02-03T20:52:44Z",
"html_url": "https://github.com/huggingface/datasets/pull/1816#issuecomment-772815257",
"id": 772815257,
"issue_url": "https://api.github.com/repos... | 2021-02-03T20:08:54Z | 2021-02-15T15:15:24Z | 2021-02-15T15:04:33Z | CONTRIBUTOR | null | null | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1816/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1816/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1816.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1816",
"merged_at": "2021-02-15T15:04:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1816.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 2021-02-03T20:08:54Z | 2021-02-15T15:04:33Z | 11 days, 18:55:39 | true | |
https://api.github.com/repos/huggingface/datasets/issues/1817 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1817/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1817/comments | https://api.github.com/repos/huggingface/datasets/issues/1817/events | https://github.com/huggingface/datasets/issues/1817 | 800,870,652 | MDU6SXNzdWU4MDA4NzA2NTI= | 1,817 | pyarrow.lib.ArrowInvalid: Column 1 named input_ids expected length 599 but got length 1500 | {
"avatar_url": "https://avatars.githubusercontent.com/u/9610770?v=4",
"events_url": "https://api.github.com/users/LuCeHe/events{/privacy}",
"followers_url": "https://api.github.com/users/LuCeHe/followers",
"following_url": "https://api.github.com/users/LuCeHe/following{/other_user}",
"gists_url": "https://ap... | [] | closed | false | null | [] | null | [
{
"author_association": "MEMBER",
"body": "Hi !\r\nThe error you have is due to the `input_ids` column not having the same number of examples as the other columns.\r\nIndeed you're concatenating the `input_ids` at this line:\r\n\r\nhttps://github.com/LuCeHe/GenericTools/blob/431835d8e13ec24dceb5ee4dc4ae58f0... | 2021-02-04T02:30:23Z | 2022-10-05T12:42:57Z | 2022-10-05T12:42:57Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | I am trying to preprocess any dataset in this package with GPT-2 tokenizer, so I need to structure the datasets as long sequences of text without padding. I've been following a couple of your tutorials and here you can find the script that is failing right at the end
https://github.com/LuCeHe/GenericTools/blob/maste... | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1817/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1817/timeline | null | completed | null | null | 2021-02-04T02:30:23Z | 2022-10-05T12:42:57Z | 608 days, 10:12:34 | false |
https://api.github.com/repos/huggingface/datasets/issues/1818 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1818/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1818/comments | https://api.github.com/repos/huggingface/datasets/issues/1818/events | https://github.com/huggingface/datasets/issues/1818 | 800,958,776 | MDU6SXNzdWU4MDA5NTg3NzY= | 1,818 | Loading local dataset raise requests.exceptions.ConnectTimeout | {
"avatar_url": "https://avatars.githubusercontent.com/u/15032072?v=4",
"events_url": "https://api.github.com/users/Alxe1/events{/privacy}",
"followers_url": "https://api.github.com/users/Alxe1/followers",
"following_url": "https://api.github.com/users/Alxe1/following{/other_user}",
"gists_url": "https://api.... | [] | closed | false | null | [] | null | [
{
"author_association": "MEMBER",
"body": "Hi ! Thanks for reporting. This was indeed a bug introduced when we moved the `json` dataset loader inside the `datasets` package (before that, the `json` loader was fetched online, as all the other dataset scripts).\r\n\r\nThis should be fixed on master now. Feel ... | 2021-02-04T05:55:23Z | 2022-06-01T15:38:42Z | 2022-06-01T15:38:42Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | Load local dataset:
```
dataset = load_dataset('json', data_files=["../../data/json.json"])
train = dataset["train"]
print(train.features)
train1 = train.map(lambda x: {"labels": 1})
print(train1[:2])
```
but it raised requests.exceptions.ConnectTimeout:
```
/Users/littlely/myvirtual/tf2/bin/python3.7 /Us... | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1818/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1818/timeline | null | completed | null | null | 2021-02-04T05:55:23Z | 2022-06-01T15:38:42Z | 482 days, 9:43:19 | false |
https://api.github.com/repos/huggingface/datasets/issues/1814 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1814/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1814/comments | https://api.github.com/repos/huggingface/datasets/issues/1814/events | https://github.com/huggingface/datasets/pull/1814 | 800,516,236 | MDExOlB1bGxSZXF1ZXN0NTY2OTg4NTI1 | 1,814 | Add Freebase QA Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [
{
"author_association": "CONTRIBUTOR",
"body": "Hi @lhoestq \r\n\r\nThanks for approving. Request you to close PR #1435 as well.",
"created_at": "2021-02-04T19:47:51Z",
"html_url": "https://github.com/huggingface/datasets/pull/1814#issuecomment-773561367",
"id": 773561367,
"issue_url": "http... | 2021-02-03T16:57:49Z | 2021-02-04T19:47:51Z | 2021-02-04T16:21:48Z | CONTRIBUTOR | null | null | null | Closes PR #1435. Fixed issues with PR #1809.
Requesting @lhoestq to review. | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1814/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1814/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1814.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1814",
"merged_at": "2021-02-04T16:21:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1814.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 2021-02-03T16:57:49Z | 2021-02-04T16:21:48Z | 23:23:59 | true |
https://api.github.com/repos/huggingface/datasets/issues/1815 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1815/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1815/comments | https://api.github.com/repos/huggingface/datasets/issues/1815/events | https://github.com/huggingface/datasets/pull/1815 | 800,610,017 | MDExOlB1bGxSZXF1ZXN0NTY3MDY3NjU1 | 1,815 | Add CCAligned Multilingual Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [
{
"author_association": "MEMBER",
"body": "Hi !\r\n\r\nWe already have some datasets that can have many many configurations possible.\r\nTo be able to support that, we allow to subclass BuilderConfig to add as many additional parameters as you may need.\r\nThis way users can load any language they want. For... | 2021-02-03T18:59:52Z | 2021-03-01T12:33:03Z | 2021-03-01T10:36:21Z | CONTRIBUTOR | null | null | null | Hello,
I'm trying to add [CCAligned Multilingual Dataset](http://www.statmt.org/cc-aligned/). This has the potential to close #1756.
This dataset has two types - Document-Pairs, and Sentence-Pairs.
The datasets are huge, so I won't be able to test all of them. At the same time, a user might only want to downlo... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1815/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1815/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1815.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1815",
"merged_at": "2021-03-01T10:36:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1815.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 2021-02-03T18:59:52Z | 2021-03-01T10:36:21Z | 25 days, 15:36:29 | true |
https://api.github.com/repos/huggingface/datasets/issues/1812 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1812/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1812/comments | https://api.github.com/repos/huggingface/datasets/issues/1812/events | https://github.com/huggingface/datasets/pull/1812 | 799,379,178 | MDExOlB1bGxSZXF1ZXN0NTY2MDMxODIy | 1,812 | Add CIFAR-100 Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [
{
"author_association": "CONTRIBUTOR",
"body": "Hi @lhoestq,\r\nI have updated with the changes from the review.",
"created_at": "2021-02-05T20:42:31Z",
"html_url": "https://github.com/huggingface/datasets/pull/1812#issuecomment-774277831",
"id": 774277831,
"issue_url": "https://api.github.c... | 2021-02-02T15:22:59Z | 2021-02-08T11:10:18Z | 2021-02-08T10:39:06Z | CONTRIBUTOR | null | null | null | Adding CIFAR-100 Dataset. | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1812/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1812/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1812.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1812",
"merged_at": "2021-02-08T10:39:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1812.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 2021-02-02T15:22:59Z | 2021-02-08T10:39:06Z | 5 days, 19:16:07 | true |
https://api.github.com/repos/huggingface/datasets/issues/1813 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1813/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1813/comments | https://api.github.com/repos/huggingface/datasets/issues/1813/events | https://github.com/huggingface/datasets/pull/1813 | 800,435,973 | MDExOlB1bGxSZXF1ZXN0NTY2OTIxNDcz | 1,813 | Support future datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [] | 2021-02-03T15:26:49Z | 2021-02-05T10:33:48Z | 2021-02-05T10:33:47Z | MEMBER | null | null | null | If a dataset is available at the version of the local installation of `datasets` (e.g. 1.2.0), then loading this dataset means loading the script at this version.
However when trying to load a dataset that is only available on master, currently users have to specify `script_version="master"` in `load_dataset` to mak... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1813/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1813/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1813.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1813",
"merged_at": "2021-02-05T10:33:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1813.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 2021-02-03T15:26:49Z | 2021-02-05T10:33:47Z | 1 day, 19:06:58 | true |
https://api.github.com/repos/huggingface/datasets/issues/1811 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1811/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1811/comments | https://api.github.com/repos/huggingface/datasets/issues/1811/events | https://github.com/huggingface/datasets/issues/1811 | 799,211,060 | MDU6SXNzdWU3OTkyMTEwNjA= | 1,811 | Unable to add Multi-label Datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [
{
"author_association": "MEMBER",
"body": "Thanks for adding this dataset! As far as I know `supervised_keys` is mostly a holdover from TFDS, but isn't really used, so feel free to drop it (@lhoestq or @thomwolf correct me if I'm wrong). It definitely shouldn't be blocking :) ",
"created_at": "2021-02-... | 2021-02-02T11:50:56Z | 2021-02-18T14:16:31Z | 2021-02-18T14:16:31Z | CONTRIBUTOR | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | I am trying to add [CIFAR-100](https://www.cs.toronto.edu/~kriz/cifar.html) dataset. The dataset contains two labels per image - `fine label` and `coarse label`. Using just one label in supervised keys as
`supervised_keys=("img", "fine_label")` raises no issue. But trying `supervised_keys=("img", "fine_label","coarse... | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url"... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1811/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1811/timeline | null | completed | null | null | 2021-02-02T11:50:56Z | 2021-02-18T14:16:31Z | 16 days, 2:25:35 | false |
https://api.github.com/repos/huggingface/datasets/issues/1810 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1810/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1810/comments | https://api.github.com/repos/huggingface/datasets/issues/1810/events | https://github.com/huggingface/datasets/issues/1810 | 799,168,650 | MDU6SXNzdWU3OTkxNjg2NTA= | 1,810 | Add Hateful Memes Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url"... | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "bfdadc",... | open | false | null | [] | null | [
{
"author_association": "CONTRIBUTOR",
"body": "I am not sure, but would `datasets.Sequence(datasets.Sequence(datasets.Sequence(datasets.Value(\"int\")))` work?",
"created_at": "2021-02-04T19:43:38Z",
"html_url": "https://github.com/huggingface/datasets/issues/1810#issuecomment-773558898",
"id":... | 2021-02-02T10:53:59Z | 2021-12-08T12:03:59Z | null | CONTRIBUTOR | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ## Add Hateful Memes Dataset
- **Name:** Hateful Memes
- **Description:** [https://ai.facebook.com/blog/hateful-memes-challenge-and-data-set]( https://ai.facebook.com/blog/hateful-memes-challenge-and-data-set)
- **Paper:** [https://arxiv.org/pdf/2005.04790.pdf](https://arxiv.org/pdf/2005.04790.pdf)
- **Data:** [Thi... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1810/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1810/timeline | null | null | null | null | 2021-02-02T10:53:59Z | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1809 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1809/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1809/comments | https://api.github.com/repos/huggingface/datasets/issues/1809/events | https://github.com/huggingface/datasets/pull/1809 | 799,059,141 | MDExOlB1bGxSZXF1ZXN0NTY1NzY4ODQz | 1,809 | Add FreebaseQA dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [
{
"author_association": "MEMBER",
"body": "Hi ! It looks like this PR contains changes about other datasets than freebase_qa such as DuoRC.\r\n\r\nCan you remove these changes please ?",
"created_at": "2021-02-03T09:56:35Z",
"html_url": "https://github.com/huggingface/datasets/pull/1809#issuecomment... | 2021-02-02T08:35:53Z | 2021-02-03T17:15:05Z | 2021-02-03T16:43:06Z | CONTRIBUTOR | null | null | null | Adding FreebaseQA dataset suggested in PR #1435 with minor edits. Also closes that PR.
Requesting @lhoestq to review. | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url"... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1809/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1809/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1809.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1809",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1809.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1809"
} | 2021-02-02T08:35:53Z | 2021-02-03T16:43:06Z | 1 day, 8:07:13 | true |
https://api.github.com/repos/huggingface/datasets/issues/1806 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1806/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1806/comments | https://api.github.com/repos/huggingface/datasets/issues/1806/events | https://github.com/huggingface/datasets/pull/1806 | 798,607,869 | MDExOlB1bGxSZXF1ZXN0NTY1Mzk0ODIz | 1,806 | Update details to MLSUM dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/15138872?v=4",
"events_url": "https://api.github.com/users/padipadou/events{/privacy}",
"followers_url": "https://api.github.com/users/padipadou/followers",
"following_url": "https://api.github.com/users/padipadou/following{/other_user}",
"gists_url": "... | [] | closed | false | null | [] | null | [
{
"author_association": "MEMBER",
"body": "Thanks!",
"created_at": "2021-02-01T18:46:28Z",
"html_url": "https://github.com/huggingface/datasets/pull/1806#issuecomment-771073990",
"id": 771073990,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/1806",
"node_id": "MD... | 2021-02-01T18:35:12Z | 2021-02-01T18:46:28Z | 2021-02-01T18:46:21Z | CONTRIBUTOR | null | null | null | Update details to MLSUM dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "htt... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1806/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1806/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1806.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1806",
"merged_at": "2021-02-01T18:46:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1806.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 2021-02-01T18:35:12Z | 2021-02-01T18:46:21Z | 0:11:09 | true |
https://api.github.com/repos/huggingface/datasets/issues/1808 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1808/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1808/comments | https://api.github.com/repos/huggingface/datasets/issues/1808/events | https://github.com/huggingface/datasets/issues/1808 | 798,879,180 | MDU6SXNzdWU3OTg4NzkxODA= | 1,808 | writing Datasets in a human readable format | {
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.git... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "d876e3",
"default": true... | closed | false | null | [] | null | [
{
"author_association": "CONTRIBUTOR",
"body": "AFAIK, there is currently no built-in method on the `Dataset` object to do this.\r\nHowever, a workaround is to directly use the Arrow table backing the dataset, **but it implies loading the whole dataset in memory** (correct me if I'm mistaken @lhoestq).\r\n\... | 2021-02-02T02:55:40Z | 2022-06-01T15:38:13Z | 2022-06-01T15:38:13Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | Hi
I see there is a save_to_disk function to save data, but this is not a human-readable format. Is there a way I could save a Dataset object in a human-readable format, such as JSON, to a file? Thanks @lhoestq | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1808/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1808/timeline | null | completed | null | null | 2021-02-02T02:55:40Z | 2022-06-01T15:38:13Z | 484 days, 12:42:33 | false |
https://api.github.com/repos/huggingface/datasets/issues/1804 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1804/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1804/comments | https://api.github.com/repos/huggingface/datasets/issues/1804/events | https://github.com/huggingface/datasets/pull/1804 | 798,483,881 | MDExOlB1bGxSZXF1ZXN0NTY1MjkzMTc3 | 1,804 | Add SICK dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/36051308?v=4",
"events_url": "https://api.github.com/users/calpt/events{/privacy}",
"followers_url": "https://api.github.com/users/calpt/followers",
"following_url": "https://api.github.com/users/calpt/following{/other_user}",
"gists_url": "https://api.... | [] | closed | false | null | [] | null | [] | 2021-02-01T15:57:44Z | 2021-02-05T17:46:28Z | 2021-02-05T15:49:25Z | CONTRIBUTOR | null | null | null | Adds the SICK dataset (http://marcobaroni.org/composes/sick.html).
Closes #1772.
Edit: also closes #1632, which is the original issue requesting the dataset. The newer one is a duplicate. | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1804/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1804/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1804.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1804",
"merged_at": "2021-02-05T15:49:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1804.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 2021-02-01T15:57:44Z | 2021-02-05T15:49:25Z | 3 days, 23:51:41 | true |
https://api.github.com/repos/huggingface/datasets/issues/1807 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1807/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1807/comments | https://api.github.com/repos/huggingface/datasets/issues/1807/events | https://github.com/huggingface/datasets/pull/1807 | 798,823,591 | MDExOlB1bGxSZXF1ZXN0NTY1NTczNzU5 | 1,807 | Adding an aggregated dataset for the GEM benchmark | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "htt... | [] | closed | false | null | [] | null | [
{
"author_association": "MEMBER",
"body": "Nice !",
"created_at": "2021-02-02T22:48:41Z",
"html_url": "https://github.com/huggingface/datasets/pull/1807#issuecomment-772067202",
"id": 772067202,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/1807",
"node_id": "MDE... | 2021-02-02T00:39:53Z | 2021-02-02T22:48:41Z | 2021-02-02T18:06:58Z | MEMBER | null | null | null | This dataset gathers modified versions of several other conditional text generation datasets which together make up the shared task for the Generation Evaluation and Metrics workshop (think GLUE for text generation)
The changes from the original datasets are detailed in the Dataset Cards on the GEM website, which ar... | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "htt... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1807/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1807/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1807.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1807",
"merged_at": "2021-02-02T18:06:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1807.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 2021-02-02T00:39:53Z | 2021-02-02T18:06:58Z | 17:27:05 | true |
https://api.github.com/repos/huggingface/datasets/issues/1805 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1805/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1805/comments | https://api.github.com/repos/huggingface/datasets/issues/1805/events | https://github.com/huggingface/datasets/issues/1805 | 798,498,053 | MDU6SXNzdWU3OTg0OTgwNTM= | 1,805 | can't pickle SwigPyObject objects when calling dataset.get_nearest_examples from FAISS index | {
"avatar_url": "https://avatars.githubusercontent.com/u/6608232?v=4",
"events_url": "https://api.github.com/users/abarbosa94/events{/privacy}",
"followers_url": "https://api.github.com/users/abarbosa94/followers",
"following_url": "https://api.github.com/users/abarbosa94/following{/other_user}",
"gists_url":... | [] | closed | false | null | [] | null | [
{
"author_association": "MEMBER",
"body": "Hi ! Indeed we used to require mapping functions to be picklable with `pickle` or `dill` in order to cache the resulting datasets. And FAISS indexes are not picklable unfortunately.\r\n\r\nBut since #1703 this is no longer required (the caching will simply be disab... | 2021-02-01T16:14:17Z | 2021-03-06T14:32:46Z | 2021-03-06T14:32:46Z | CONTRIBUTOR | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | So, I have the following instances in my dataset
```
{'question': 'An astronomer observes that a planet rotates faster after a meteorite impact. Which is the most likely effect of
this increase in rotation?',
'answer': 'C',
'example_id': 'ARCCH_Mercury_7175875',
'options':[{'option_context': 'One effect of ... | {
"avatar_url": "https://avatars.githubusercontent.com/u/6608232?v=4",
"events_url": "https://api.github.com/users/abarbosa94/events{/privacy}",
"followers_url": "https://api.github.com/users/abarbosa94/followers",
"following_url": "https://api.github.com/users/abarbosa94/following{/other_user}",
"gists_url":... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1805/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1805/timeline | null | completed | null | null | 2021-02-01T16:14:17Z | 2021-03-06T14:32:46Z | 32 days, 22:18:29 | false |
https://api.github.com/repos/huggingface/datasets/issues/1801 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1801/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1801/comments | https://api.github.com/repos/huggingface/datasets/issues/1801/events | https://github.com/huggingface/datasets/pull/1801 | 797,814,275 | MDExOlB1bGxSZXF1ZXN0NTY0NzMwODYw | 1,801 | [GEM] Updated the source link of the data to update correct tokenized version. | {
"avatar_url": "https://avatars.githubusercontent.com/u/11708999?v=4",
"events_url": "https://api.github.com/users/mounicam/events{/privacy}",
"followers_url": "https://api.github.com/users/mounicam/followers",
"following_url": "https://api.github.com/users/mounicam/following{/other_user}",
"gists_url": "htt... | [] | closed | false | null | [] | null | [
{
"author_association": "MEMBER",
"body": "@mounicam we'll keep the original version in the Turk dataset proper, and use the updated file in the GEM aggregated dataset which I'll add later today\r\n\r\n@lhoestq do not merge, I'll close when I've submitted the GEM dataset PR :) ",
"created_at": "2021-01-... | 2021-01-31T21:17:19Z | 2021-02-02T13:17:38Z | 2021-02-02T13:17:28Z | CONTRIBUTOR | null | null | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "htt... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1801/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1801/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1801.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1801",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1801.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1801"
} | 2021-01-31T21:17:19Z | 2021-02-02T13:17:28Z | 1 day, 16:00:09 | true | |
https://api.github.com/repos/huggingface/datasets/issues/1802 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1802/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1802/comments | https://api.github.com/repos/huggingface/datasets/issues/1802/events | https://github.com/huggingface/datasets/pull/1802 | 797,924,468 | MDExOlB1bGxSZXF1ZXN0NTY0ODE4NDIy | 1,802 | add github of contributors | {
"avatar_url": "https://avatars.githubusercontent.com/u/53136577?v=4",
"events_url": "https://api.github.com/users/thevasudevgupta/events{/privacy}",
"followers_url": "https://api.github.com/users/thevasudevgupta/followers",
"following_url": "https://api.github.com/users/thevasudevgupta/following{/other_user}"... | [] | closed | false | null | [] | null | [
{
"author_association": "CONTRIBUTOR",
"body": "@lhoestq Can you confirm if this format is fine? I will update cards based on your feedback.",
"created_at": "2021-02-01T03:52:55Z",
"html_url": "https://github.com/huggingface/datasets/pull/1802#issuecomment-770545792",
"id": 770545792,
"issue... | 2021-02-01T03:49:19Z | 2021-02-03T10:09:52Z | 2021-02-03T10:06:30Z | CONTRIBUTOR | null | null | null | This PR will add contributors GitHub id at the end of every dataset cards. | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1802/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1802/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1802.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1802",
"merged_at": "2021-02-03T10:06:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1802.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 2021-02-01T03:49:19Z | 2021-02-03T10:06:30Z | 2 days, 6:17:11 | true |
https://api.github.com/repos/huggingface/datasets/issues/1803 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1803/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1803/comments | https://api.github.com/repos/huggingface/datasets/issues/1803/events | https://github.com/huggingface/datasets/issues/1803 | 798,243,904 | MDU6SXNzdWU3OTgyNDM5MDQ= | 1,803 | Querying examples from big datasets is slower than small datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [
{
"author_association": "NONE",
"body": "Hello, @lhoestq / @gaceladri : We have been seeing similar behavior with bigger datasets, where querying time increases. Are you folks aware of any solution that fixes this problem yet? ",
"created_at": "2021-02-09T22:42:41Z",
"html_url": "https://github.co... | 2021-02-01T11:08:23Z | 2021-08-04T18:11:01Z | 2021-08-04T18:10:42Z | MEMBER | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | After some experiments with bookcorpus, I noticed that querying examples from big datasets is slower than from small datasets.
For example
```python
from datasets import load_dataset
b1 = load_dataset("bookcorpus", split="train[:1%]")
b50 = load_dataset("bookcorpus", split="train[:50%]")
b100 = load_dataset("bookcorp... | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1803/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1803/timeline | null | completed | null | null | 2021-02-01T11:08:23Z | 2021-08-04T18:10:42Z | 184 days, 7:02:19 | false |
https://api.github.com/repos/huggingface/datasets/issues/1800 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1800/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1800/comments | https://api.github.com/repos/huggingface/datasets/issues/1800/events | https://github.com/huggingface/datasets/pull/1800 | 797,798,689 | MDExOlB1bGxSZXF1ZXN0NTY0NzE5MjA3 | 1,800 | Add DuoRC Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [
{
"author_association": "CONTRIBUTOR",
"body": "Thanks for approving @lhoestq!\r\nWill apply these changes for the other datasets I've added too.",
"created_at": "2021-02-03T05:01:44Z",
"html_url": "https://github.com/huggingface/datasets/pull/1800#issuecomment-772230386",
"id": 772230386,
"... | 2021-01-31T20:01:59Z | 2021-02-03T05:01:45Z | 2021-02-02T22:49:26Z | CONTRIBUTOR | null | null | null | Hi,
DuoRC SelfRC is one type of the [DuoRC Dataset](https://duorc.github.io/). DuoRC SelfRC is a crowdsourced Abstractive/Extractive Question-Answering dataset based on Wikipedia movie plots. It contains examples that may have answers in the movie plot, synthesized answers which are not present in the movie plot, or... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1800/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1800/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1800.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1800",
"merged_at": "2021-02-02T22:49:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1800.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 2021-01-31T20:01:59Z | 2021-02-02T22:49:26Z | 2 days, 2:47:27 | true |
https://api.github.com/repos/huggingface/datasets/issues/1799 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1799/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1799/comments | https://api.github.com/repos/huggingface/datasets/issues/1799/events | https://github.com/huggingface/datasets/pull/1799 | 797,789,439 | MDExOlB1bGxSZXF1ZXN0NTY0NzEyMzUy | 1,799 | Update: SWDA - Fixed code to use all metadata features. Added comments and cleaned c… | {
"avatar_url": "https://avatars.githubusercontent.com/u/22454783?v=4",
"events_url": "https://api.github.com/users/gmihaila/events{/privacy}",
"followers_url": "https://api.github.com/users/gmihaila/followers",
"following_url": "https://api.github.com/users/gmihaila/following{/other_user}",
"gists_url": "htt... | [] | closed | false | null | [] | null | [
{
"author_association": "CONTRIBUTOR",
"body": "@yjernite Pushed all the changes you recommended. Thank you for your help!",
"created_at": "2021-02-05T22:01:23Z",
"html_url": "https://github.com/huggingface/datasets/pull/1799#issuecomment-774311885",
"id": 774311885,
"issue_url": "https://ap... | 2021-01-31T19:18:55Z | 2021-02-09T22:06:13Z | 2021-02-09T15:49:58Z | CONTRIBUTOR | null | null | null | This is a dataset I currently use in my research, and I realized some features are not being returned.
The previous code was not using all of the available metadata and was somewhat messy.
I fixed the code to use all of the metadata and made some modifications to be more efficient and better formatted.
Please let me know if I need to ma... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1799/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1799/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1799.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1799",
"merged_at": "2021-02-09T15:49:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1799.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 2021-01-31T19:18:55Z | 2021-02-09T15:49:58Z | 8 days, 20:31:03 | true |
https://api.github.com/repos/huggingface/datasets/issues/1797 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1797/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1797/comments | https://api.github.com/repos/huggingface/datasets/issues/1797/events | https://github.com/huggingface/datasets/issues/1797 | 797,357,901 | MDU6SXNzdWU3OTczNTc5MDE= | 1,797 | Connection error | {
"avatar_url": "https://avatars.githubusercontent.com/u/46243662?v=4",
"events_url": "https://api.github.com/users/smile0925/events{/privacy}",
"followers_url": "https://api.github.com/users/smile0925/followers",
"following_url": "https://api.github.com/users/smile0925/following{/other_user}",
"gists_url": "... | [] | closed | false | null | [] | null | [
{
"author_association": "MEMBER",
"body": "Hi ! For future references let me add a link to our discussion here : https://github.com/huggingface/datasets/issues/759#issuecomment-770684693\r\n\r\nLet me know if you manage to fix your proxy issue or if we can do something on our end to help you :)",
"creat... | 2021-01-30T07:32:45Z | 2021-08-04T18:09:37Z | 2021-08-04T18:09:37Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | Hi
I am hitting the error below; any help would be appreciated, thanks.
`train_data = datasets.load_dataset("xsum", split="train")`
`ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.0.2/datasets/xsum/xsum.py` | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1797/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1797/timeline | null | completed | null | null | 2021-01-30T07:32:45Z | 2021-08-04T18:09:37Z | 186 days, 10:36:52 | false |
https://api.github.com/repos/huggingface/datasets/issues/1798 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1798/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1798/comments | https://api.github.com/repos/huggingface/datasets/issues/1798/events | https://github.com/huggingface/datasets/pull/1798 | 797,766,818 | MDExOlB1bGxSZXF1ZXN0NTY0Njk2NjE1 | 1,798 | Add Arabic sarcasm dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/643918?v=4",
"events_url": "https://api.github.com/users/mapmeld/events{/privacy}",
"followers_url": "https://api.github.com/users/mapmeld/followers",
"following_url": "https://api.github.com/users/mapmeld/following{/other_user}",
"gists_url": "https://... | [] | closed | false | null | [] | null | [
{
"author_association": "CONTRIBUTOR",
"body": "@lhoestq thanks for the comments - I believe these are now addressed. I re-generated the datasets_info.json and dummy data",
"created_at": "2021-02-02T03:34:52Z",
"html_url": "https://github.com/huggingface/datasets/pull/1798#issuecomment-771329962",
... | 2021-01-31T17:38:55Z | 2021-02-10T20:39:13Z | 2021-02-03T10:35:54Z | CONTRIBUTOR | null | null | null | This MIT license dataset: https://github.com/iabufarha/ArSarcasm
Via https://sites.google.com/view/ar-sarcasm-sentiment-detection/ | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1798/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1798/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1798.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1798",
"merged_at": "2021-02-03T10:35:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1798.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 2021-01-31T17:38:55Z | 2021-02-03T10:35:54Z | 2 days, 16:56:59 | true |
https://api.github.com/repos/huggingface/datasets/issues/1796 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1796/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1796/comments | https://api.github.com/repos/huggingface/datasets/issues/1796/events | https://github.com/huggingface/datasets/issues/1796 | 797,329,905 | MDU6SXNzdWU3OTczMjk5MDU= | 1,796 | Filter on dataset too much slowww | {
"avatar_url": "https://avatars.githubusercontent.com/u/20911334?v=4",
"events_url": "https://api.github.com/users/ayubSubhaniya/events{/privacy}",
"followers_url": "https://api.github.com/users/ayubSubhaniya/followers",
"following_url": "https://api.github.com/users/ayubSubhaniya/following{/other_user}",
"g... | [] | open | false | null | [] | null | [
{
"author_association": "NONE",
"body": "When I use the filter on the arrow table directly, it works like butter. But I can't find a way to update the table in `Dataset` object.\r\n\r\n```\r\nds_table = dataset.data.filter(mask=dataset['flag'])\r\n```",
"created_at": "2021-01-30T04:13:39Z",
"html_ur... | 2021-01-30T04:09:19Z | 2025-05-15T13:19:55Z | null | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | I have a dataset with 50M rows.
For pre-processing, I need to tokenize this and filter out rows with overly long sequences.
My tokenization took roughly 12 mins. I used `map()` with batch size 1024 and multi-processing with 96 processes.
When I applied the `filter()` function it is taking too much time. I need to filter se... | null | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1796/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1796/timeline | null | null | null | null | 2021-01-30T04:09:19Z | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1794 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1794/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1794/comments | https://api.github.com/repos/huggingface/datasets/issues/1794/events | https://github.com/huggingface/datasets/pull/1794 | 796,975,588 | MDExOlB1bGxSZXF1ZXN0NTY0MDYyMTkw | 1,794 | Move silicone directory | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [] | 2021-01-29T15:33:15Z | 2021-01-29T16:31:39Z | 2021-01-29T16:31:38Z | MEMBER | null | null | null | The dataset was added in #1761 but not in the right directory. I'm moving it to /datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1794/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1794/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1794.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1794",
"merged_at": "2021-01-29T16:31:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1794.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 2021-01-29T15:33:15Z | 2021-01-29T16:31:38Z | 0:58:23 | true |
https://api.github.com/repos/huggingface/datasets/issues/1795 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1795/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1795/comments | https://api.github.com/repos/huggingface/datasets/issues/1795/events | https://github.com/huggingface/datasets/pull/1795 | 797,021,730 | MDExOlB1bGxSZXF1ZXN0NTY0MDk5OTUz | 1,795 | Custom formatting for lazy map + arrow data extraction refactor | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [
{
"author_association": "MEMBER",
"body": "This PR is amazing!!!\r\n\r\nI only looked at `arrow_dataset.py` and `formatting/formatting.py` but those look good to me.\r\n\r\nMy only (tiny) concern is the name of the function: I don't think it's self-evident that `set_format` applies a generic transformation,... | 2021-01-29T16:35:53Z | 2022-07-30T09:50:11Z | 2021-02-05T09:54:06Z | MEMBER | null | null | null | Hi !
This PR refactors the way data are extracted from pyarrow tables to extend it to the use of custom formatting functions.
While the internal storage of the dataset is always the Apache Arrow format, by setting a specific format on a dataset, you can cast the output of `datasets.Dataset.__getitem__` in NumPy/p... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 3,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1795/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1795/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1795.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1795",
"merged_at": "2021-02-05T09:54:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1795.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 2021-01-29T16:35:53Z | 2021-02-05T09:54:06Z | 6 days, 17:18:13 | true |
https://api.github.com/repos/huggingface/datasets/issues/1793 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1793/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1793/comments | https://api.github.com/repos/huggingface/datasets/issues/1793/events | https://github.com/huggingface/datasets/pull/1793 | 796,940,299 | MDExOlB1bGxSZXF1ZXN0NTY0MDMzMjk0 | 1,793 | Minor fix the docstring of load_metric | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | null | [] | 2021-01-29T14:47:35Z | 2021-01-29T16:53:32Z | 2021-01-29T16:53:32Z | MEMBER | null | null | null | Minor fix:
- duplicated attributes
- format fix | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1793/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1793/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1793.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1793",
"merged_at": "2021-01-29T16:53:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1793.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 2021-01-29T14:47:35Z | 2021-01-29T16:53:32Z | 2:05:57 | true |
https://api.github.com/repos/huggingface/datasets/issues/1791 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1791/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1791/comments | https://api.github.com/repos/huggingface/datasets/issues/1791/events | https://github.com/huggingface/datasets/pull/1791 | 796,924,519 | MDExOlB1bGxSZXF1ZXN0NTY0MDE5OTk3 | 1,791 | Small fix with corrected logging of train vectors | {
"avatar_url": "https://avatars.githubusercontent.com/u/7549587?v=4",
"events_url": "https://api.github.com/users/TezRomacH/events{/privacy}",
"followers_url": "https://api.github.com/users/TezRomacH/followers",
"following_url": "https://api.github.com/users/TezRomacH/following{/other_user}",
"gists_url": "h... | [] | closed | false | null | [] | null | [] | 2021-01-29T14:26:06Z | 2021-01-29T18:51:10Z | 2021-01-29T17:05:07Z | CONTRIBUTOR | null | null | null | Now you can set `train_size` to the whole dataset size via `train_size = -1`, and the log writes, for example, `Training the index with the first 16123 vectors` instead of `Training the index with the first -1 vectors`. The logging is also correct when `train_size` exceeds the dataset length. | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1791/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1791/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1791.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1791",
"merged_at": "2021-01-29T17:05:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1791.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 2021-01-29T14:26:06Z | 2021-01-29T17:05:07Z | 2:39:01 | true |
https://api.github.com/repos/huggingface/datasets/issues/1792 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1792/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1792/comments | https://api.github.com/repos/huggingface/datasets/issues/1792/events | https://github.com/huggingface/datasets/pull/1792 | 796,934,627 | MDExOlB1bGxSZXF1ZXN0NTY0MDI4NTk1 | 1,792 | Allow loading dataset in-memory | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | [] | closed | false | null | [] | null | [
{
"author_association": "MEMBER",
"body": "I am wondering how to test their difference...",
"created_at": "2021-01-29T15:18:05Z",
"html_url": "https://github.com/huggingface/datasets/pull/1792#issuecomment-769866757",
"id": 769866757,
"issue_url": "https://api.github.com/repos/huggingface/da... | 2021-01-29T14:39:50Z | 2021-02-12T14:13:28Z | 2021-02-12T14:13:28Z | MEMBER | null | null | null | Allow loading datasets either from:
- a memory-mapped file (current implementation)
- a file descriptor, copying the data into physical memory
Close #708 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1792/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1792/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1792.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1792",
"merged_at": "2021-02-12T14:13:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1792.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 2021-01-29T14:39:50Z | 2021-02-12T14:13:28Z | 13 days, 23:33:38 | true |
https://api.github.com/repos/huggingface/datasets/issues/1790 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1790/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1790/comments | https://api.github.com/repos/huggingface/datasets/issues/1790/events | https://github.com/huggingface/datasets/issues/1790 | 796,678,157 | MDU6SXNzdWU3OTY2NzgxNTc= | 1,790 | ModuleNotFoundError: No module named 'apache_beam', when specific languages. | {
"avatar_url": "https://avatars.githubusercontent.com/u/6331508?v=4",
"events_url": "https://api.github.com/users/miyamonz/events{/privacy}",
"followers_url": "https://api.github.com/users/miyamonz/followers",
"following_url": "https://api.github.com/users/miyamonz/following{/other_user}",
"gists_url": "http... | [] | open | false | null | [] | null | [
{
"author_association": "MEMBER",
"body": "Hi !\r\n\r\nApache Beam is a framework used to define data transformation pipelines. These pipeline can then be run in many runtimes: DataFlow, Spark, Flink, etc. There also exist a local runner called the DirectRunner.\r\nWikipedia is a dataset that requires some ... | 2021-01-29T08:17:24Z | 2021-03-25T12:10:51Z | null | CONTRIBUTOR | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ```py
import datasets
wiki = datasets.load_dataset('wikipedia', '20200501.ja', cache_dir='./datasets')
```
then `ModuleNotFoundError: No module named 'apache_beam'` happened.
The error doesn't appear when it's '20200501.en'.
I don't know Apache Beam, but according to #498 it isn't necessary when it's saved to lo... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1790/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1790/timeline | null | null | null | null | 2021-01-29T08:17:24Z | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1789 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1789/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1789/comments | https://api.github.com/repos/huggingface/datasets/issues/1789/events | https://github.com/huggingface/datasets/pull/1789 | 796,229,721 | MDExOlB1bGxSZXF1ZXN0NTYzNDQyMTc2 | 1,789 | [BUG FIX] typo in the import path for metrics | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "htt... | [] | closed | false | null | [] | null | [] | 2021-01-28T18:01:37Z | 2021-01-28T18:13:56Z | 2021-01-28T18:13:56Z | MEMBER | null | null | null | This tiny PR fixes a typo introduced in https://github.com/huggingface/datasets/pull/1726 which prevents loading new metrics | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "htt... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1789/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1789/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1789.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1789",
"merged_at": "2021-01-28T18:13:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1789.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 2021-01-28T18:01:37Z | 2021-01-28T18:13:56Z | 0:12:19 | true |
https://api.github.com/repos/huggingface/datasets/issues/1787 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1787/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1787/comments | https://api.github.com/repos/huggingface/datasets/issues/1787/events | https://github.com/huggingface/datasets/pull/1787 | 795,485,842 | MDExOlB1bGxSZXF1ZXN0NTYyODI1NTI3 | 1,787 | Update the CommonGen citation information | {
"avatar_url": "https://avatars.githubusercontent.com/u/10104354?v=4",
"events_url": "https://api.github.com/users/yuchenlin/events{/privacy}",
"followers_url": "https://api.github.com/users/yuchenlin/followers",
"following_url": "https://api.github.com/users/yuchenlin/following{/other_user}",
"gists_url": "... | [] | closed | false | null | [] | null | [] | 2021-01-27T22:12:47Z | 2021-01-28T13:56:29Z | 2021-01-28T13:56:29Z | CONTRIBUTOR | null | null | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1787/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1787/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1787.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1787",
"merged_at": "2021-01-28T13:56:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1787.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 2021-01-27T22:12:47Z | 2021-01-28T13:56:29Z | 15:43:42 | true | |
https://api.github.com/repos/huggingface/datasets/issues/1788 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1788/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1788/comments | https://api.github.com/repos/huggingface/datasets/issues/1788/events | https://github.com/huggingface/datasets/pull/1788 | 795,544,422 | MDExOlB1bGxSZXF1ZXN0NTYyODc1NzA2 | 1,788 | Doc2dial rc | {
"avatar_url": "https://avatars.githubusercontent.com/u/2062185?v=4",
"events_url": "https://api.github.com/users/songfeng/events{/privacy}",
"followers_url": "https://api.github.com/users/songfeng/followers",
"following_url": "https://api.github.com/users/songfeng/following{/other_user}",
"gists_url": "http... | [] | closed | false | null | [] | null | [] | 2021-01-27T23:51:00Z | 2021-01-28T18:46:13Z | 2021-01-28T18:46:13Z | CONTRIBUTOR | null | null | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/2062185?v=4",
"events_url": "https://api.github.com/users/songfeng/events{/privacy}",
"followers_url": "https://api.github.com/users/songfeng/followers",
"following_url": "https://api.github.com/users/songfeng/following{/other_user}",
"gists_url": "http... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1788/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1788/timeline | null | null | 1 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1788.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1788",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1788.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1788"
} | 2021-01-27T23:51:00Z | 2021-01-28T18:46:13Z | 18:55:13 | true | |
https://api.github.com/repos/huggingface/datasets/issues/1786 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1786/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1786/comments | https://api.github.com/repos/huggingface/datasets/issues/1786/events | https://github.com/huggingface/datasets/issues/1786 | 795,462,816 | MDU6SXNzdWU3OTU0NjI4MTY= | 1,786 | How to use split dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/78090287?v=4",
"events_url": "https://api.github.com/users/kkhan188/events{/privacy}",
"followers_url": "https://api.github.com/users/kkhan188/followers",
"following_url": "https://api.github.com/users/kkhan188/following{/other_user}",
"gists_url": "htt... | [
{
"color": "d876e3",
"default": true,
"description": "Further information is requested",
"id": 1935892912,
"name": "question",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question"
}
] | closed | false | null | [] | null | [
{
"author_association": "CONTRIBUTOR",
"body": "By default, all 3 splits will be loaded if you run the following:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"lambada\")\r\nprint(dataset[\"train\"])\r\nprint(dataset[\"valid\"])\r\n\r\n```\r\n\r\nIf you wanted to do load ... | 2021-01-27T21:37:47Z | 2021-04-23T15:17:39Z | 2021-04-23T15:17:39Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | 
Hey,
I want to split the lambada dataset into corpus, test, train and valid txt files (like penn treebank) but I am not able to achieve this. What I am doing is, executing the lambada.py file in my pro... | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1786/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1786/timeline | null | completed | null | null | 2021-01-27T21:37:47Z | 2021-04-23T15:17:39Z | 85 days, 17:39:52 | false |
https://api.github.com/repos/huggingface/datasets/issues/1784 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1784/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1784/comments | https://api.github.com/repos/huggingface/datasets/issues/1784/events | https://github.com/huggingface/datasets/issues/1784 | 794,659,174 | MDU6SXNzdWU3OTQ2NTkxNzQ= | 1,784 | JSONDecodeError on JSON with multiple lines | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [
{
"author_association": "MEMBER",
"body": "Hi !\r\n\r\nThe `json` dataset script does support this format. For example loading a dataset with this format works on my side:\r\n```json\r\n{\"key1\":11, \"key2\":12, \"key3\":13}\r\n{\"key1\":21, \"key2\":22, \"key3\":23}\r\n```\r\n\r\nCan you show the full sta... | 2021-01-27T00:19:22Z | 2021-01-31T08:47:18Z | 2021-01-31T08:47:18Z | CONTRIBUTOR | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | Hello :),
I have been trying to load data using a JSON file. Based on the [docs](https://huggingface.co/docs/datasets/loading_datasets.html#json-files), the following format is supported:
```json
{"key1":11, "key2":12, "key3":13}
{"key1":21, "key2":22, "key3":23}
```
But, when I try loading a dataset with th... | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url"... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1784/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1784/timeline | null | completed | null | null | 2021-01-27T00:19:22Z | 2021-01-31T08:47:18Z | 4 days, 8:27:56 | false |
https://api.github.com/repos/huggingface/datasets/issues/1785 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1785/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1785/comments | https://api.github.com/repos/huggingface/datasets/issues/1785/events | https://github.com/huggingface/datasets/issues/1785 | 795,458,856 | MDU6SXNzdWU3OTU0NTg4NTY= | 1,785 | Not enough disk space (Needed: Unknown size) when caching on a cluster | {
"avatar_url": "https://avatars.githubusercontent.com/u/4341867?v=4",
"events_url": "https://api.github.com/users/olinguyen/events{/privacy}",
"followers_url": "https://api.github.com/users/olinguyen/followers",
"following_url": "https://api.github.com/users/olinguyen/following{/other_user}",
"gists_url": "h... | [] | closed | false | null | [] | null | [
{
"author_association": "MEMBER",
"body": "Hi ! \r\n\r\nWhat do you mean by \"disk_usage(\".\").free` can't compute on the cluster's shared disk\" exactly ?\r\nDoes it return 0 ?",
"created_at": "2021-01-28T13:32:50Z",
"html_url": "https://github.com/huggingface/datasets/issues/1785#issuecomment-769... | 2021-01-27T21:30:59Z | 2024-12-04T02:57:00Z | 2021-01-30T01:07:56Z | CONTRIBUTOR | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | I'm running some experiments where I'm caching datasets on a cluster and accessing it through multiple compute nodes. However, I get an error when loading the cached dataset from the shared disk.
The exact error thrown:
```bash
>>> load_dataset(dataset, cache_dir="/path/to/cluster/shared/path")
OSError: Not eno... | {
"avatar_url": "https://avatars.githubusercontent.com/u/4341867?v=4",
"events_url": "https://api.github.com/users/olinguyen/events{/privacy}",
"followers_url": "https://api.github.com/users/olinguyen/followers",
"following_url": "https://api.github.com/users/olinguyen/following{/other_user}",
"gists_url": "h... | {
"+1": 6,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 6,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1785/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1785/timeline | null | completed | null | null | 2021-01-27T21:30:59Z | 2021-01-30T01:07:56Z | 2 days, 3:36:57 | false |
https://api.github.com/repos/huggingface/datasets/issues/1783 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1783/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1783/comments | https://api.github.com/repos/huggingface/datasets/issues/1783/events | https://github.com/huggingface/datasets/issues/1783 | 794,544,495 | MDU6SXNzdWU3OTQ1NDQ0OTU= | 1,783 | Dataset Examples Explorer | {
"avatar_url": "https://avatars.githubusercontent.com/u/30875246?v=4",
"events_url": "https://api.github.com/users/ChewKokWah/events{/privacy}",
"followers_url": "https://api.github.com/users/ChewKokWah/followers",
"following_url": "https://api.github.com/users/ChewKokWah/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [
{
"author_association": "MEMBER",
"body": "Hi @ChewKokWah,\r\n\r\nWe're working on it! In the meantime, you can still find the dataset explorer at the following URL: https://huggingface.co/datasets/viewer/",
"created_at": "2021-01-27T02:30:14Z",
"html_url": "https://github.com/huggingface/datasets/i... | 2021-01-26T20:39:02Z | 2021-02-01T13:58:44Z | 2021-02-01T13:58:44Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | In the older version of Datasets, there was a useful Dataset Explorer that allowed users to visualize the examples (training, test and validation) of a particular dataset; it is no longer there in the current version.
Hope HuggingFace can re-enable the feature that at least allows viewing of the first 20 examples of a ... | {
"avatar_url": "https://avatars.githubusercontent.com/u/30875246?v=4",
"events_url": "https://api.github.com/users/ChewKokWah/events{/privacy}",
"followers_url": "https://api.github.com/users/ChewKokWah/followers",
"following_url": "https://api.github.com/users/ChewKokWah/following{/other_user}",
"gists_url"... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1783/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1783/timeline | null | completed | null | null | 2021-01-26T20:39:02Z | 2021-02-01T13:58:44Z | 5 days, 17:19:42 | false |
https://api.github.com/repos/huggingface/datasets/issues/1782 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1782/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1782/comments | https://api.github.com/repos/huggingface/datasets/issues/1782/events | https://github.com/huggingface/datasets/pull/1782 | 794,167,920 | MDExOlB1bGxSZXF1ZXN0NTYxNzI5OTc3 | 1,782 | Update pyarrow import warning | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [] | 2021-01-26T11:47:11Z | 2021-01-26T13:50:50Z | 2021-01-26T13:50:49Z | MEMBER | null | null | null | Update the minimum version to >=0.17.1 in the pyarrow version check and update the message.
I also moved the check to the top of `__init__.py`. | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1782/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1782/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1782.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1782",
"merged_at": "2021-01-26T13:50:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1782.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 2021-01-26T11:47:11Z | 2021-01-26T13:50:49Z | 2:03:38 | true |
https://api.github.com/repos/huggingface/datasets/issues/1781 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1781/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1781/comments | https://api.github.com/repos/huggingface/datasets/issues/1781/events | https://github.com/huggingface/datasets/issues/1781 | 793,914,556 | MDU6SXNzdWU3OTM5MTQ1NTY= | 1,781 | AttributeError: module 'pyarrow' has no attribute 'PyExtensionType' during import | {
"avatar_url": "https://avatars.githubusercontent.com/u/45964869?v=4",
"events_url": "https://api.github.com/users/PalaashAgrawal/events{/privacy}",
"followers_url": "https://api.github.com/users/PalaashAgrawal/followers",
"following_url": "https://api.github.com/users/PalaashAgrawal/following{/other_user}",
... | [] | closed | false | null | [] | null | [
{
"author_association": "MEMBER",
"body": "Hi ! I'm not able to reproduce the issue. Can you try restarting your runtime ?\r\n\r\nThe PyExtensionType is available in pyarrow starting 0.17.1 iirc. If restarting your runtime doesn't fix this, can you try updating pyarrow ?\r\n```\r\npip install pyarrow --upgr... | 2021-01-26T04:18:35Z | 2024-07-07T17:55:12Z | 2022-10-05T12:37:06Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | I'm using Colab, and suddenly this morning there is this error. Have a look below!

| {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1781/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1781/timeline | null | completed | null | null | 2021-01-26T04:18:35Z | 2022-10-05T12:37:06Z | 617 days, 8:18:31 | false |
https://api.github.com/repos/huggingface/datasets/issues/1780 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1780/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1780/comments | https://api.github.com/repos/huggingface/datasets/issues/1780/events | https://github.com/huggingface/datasets/pull/1780 | 793,882,132 | MDExOlB1bGxSZXF1ZXN0NTYxNDkxNTgy | 1,780 | Update SciFact URL | {
"avatar_url": "https://avatars.githubusercontent.com/u/3091916?v=4",
"events_url": "https://api.github.com/users/dwadden/events{/privacy}",
"followers_url": "https://api.github.com/users/dwadden/followers",
"following_url": "https://api.github.com/users/dwadden/following{/other_user}",
"gists_url": "https:/... | [] | closed | false | null | [] | null | [
{
"author_association": "MEMBER",
"body": "Hi ! The error you get is the result of some verifications the library is doing when loading a dataset that already has some metadata in the dataset_infos.json. You can ignore the verifications with \r\n```\r\npython datasets-cli test datasets/scifact --save_infos ... | 2021-01-26T02:49:06Z | 2021-01-28T18:48:00Z | 2021-01-28T10:19:45Z | CONTRIBUTOR | null | null | null | Hi,
I'm following up on this [issue](https://github.com/huggingface/datasets/issues/1717). I'm the SciFact dataset creator, and I'm trying to update the SciFact data URL in your repo. Thanks again for adding the dataset!
Basically, I'd just like to change the `_URL` to `"https://scifact.s3-us-west-2.amazonaws.com/re... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1780/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1780/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1780.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1780",
"merged_at": "2021-01-28T10:19:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1780.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 2021-01-26T02:49:06Z | 2021-01-28T10:19:45Z | 2 days, 7:30:39 | true |
https://api.github.com/repos/huggingface/datasets/issues/1779 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1779/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1779/comments | https://api.github.com/repos/huggingface/datasets/issues/1779/events | https://github.com/huggingface/datasets/pull/1779 | 793,539,703 | MDExOlB1bGxSZXF1ZXN0NTYxMjEwNjI5 | 1,779 | Ignore definition line number of functions for caching | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [] | 2021-01-25T16:42:29Z | 2021-01-26T10:20:20Z | 2021-01-26T10:20:19Z | MEMBER | null | null | null | As noticed in #1718, when a function used for processing with `map` is moved inside its Python file, the change of line number causes the caching mechanism to consider it as a different function. Therefore, in this case, it recomputes everything.
This is because we were not ignoring the line number definition f... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1779/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1779/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1779.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1779",
"merged_at": "2021-01-26T10:20:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1779.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 2021-01-25T16:42:29Z | 2021-01-26T10:20:19Z | 17:37:50 | true |
https://api.github.com/repos/huggingface/datasets/issues/1777 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1777/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1777/comments | https://api.github.com/repos/huggingface/datasets/issues/1777/events | https://github.com/huggingface/datasets/issues/1777 | 793,273,770 | MDU6SXNzdWU3OTMyNzM3NzA= | 1,777 | GPT2 MNLI training using run_glue.py | {
"avatar_url": "https://avatars.githubusercontent.com/u/76427077?v=4",
"events_url": "https://api.github.com/users/nlp-student/events{/privacy}",
"followers_url": "https://api.github.com/users/nlp-student/followers",
"following_url": "https://api.github.com/users/nlp-student/following{/other_user}",
"gists_u... | [] | closed | false | null | [] | null | [] | 2021-01-25T10:53:52Z | 2021-01-25T11:12:53Z | 2021-01-25T11:12:53Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | Edit: I'm closing this because I actually meant to post this in `transformers`, not `datasets`
Running this on Google Colab,
```
!python run_glue.py \
--model_name_or_path gpt2 \
--task_name mnli \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_gpu_train_batch_size 10 \
--gradient_accu... | {
"avatar_url": "https://avatars.githubusercontent.com/u/76427077?v=4",
"events_url": "https://api.github.com/users/nlp-student/events{/privacy}",
"followers_url": "https://api.github.com/users/nlp-student/followers",
"following_url": "https://api.github.com/users/nlp-student/following{/other_user}",
"gists_u... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1777/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1777/timeline | null | completed | null | null | 2021-01-25T10:53:52Z | 2021-01-25T11:12:53Z | 0:19:01 | false |
https://api.github.com/repos/huggingface/datasets/issues/1778 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1778/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1778/comments | https://api.github.com/repos/huggingface/datasets/issues/1778/events | https://github.com/huggingface/datasets/pull/1778 | 793,474,507 | MDExOlB1bGxSZXF1ZXN0NTYxMTU2Mzk1 | 1,778 | Narrative QA Manual | {
"avatar_url": "https://avatars.githubusercontent.com/u/18527321?v=4",
"events_url": "https://api.github.com/users/rsanjaykamath/events{/privacy}",
"followers_url": "https://api.github.com/users/rsanjaykamath/followers",
"following_url": "https://api.github.com/users/rsanjaykamath/following{/other_user}",
"g... | [] | closed | false | null | [] | null | [
{
"author_association": "CONTRIBUTOR",
"body": "@lhoestq sorry I opened a new pull request because of some issues with the previous code base. This pull request is originally from #1364",
"created_at": "2021-01-25T15:33:45Z",
"html_url": "https://github.com/huggingface/datasets/pull/1778#issuecommen... | 2021-01-25T15:22:31Z | 2021-01-29T09:35:14Z | 2021-01-29T09:34:51Z | CONTRIBUTOR | null | null | null | Submitting the manual version of the Narrative QA script, which requires a manual download from the original repository | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1778/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1778/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1778.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1778",
"merged_at": "2021-01-29T09:34:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1778.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 2021-01-25T15:22:31Z | 2021-01-29T09:34:51Z | 3 days, 18:12:20 | true |
https://api.github.com/repos/huggingface/datasets/issues/1776 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1776/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1776/comments | https://api.github.com/repos/huggingface/datasets/issues/1776/events | https://github.com/huggingface/datasets/issues/1776 | 792,755,249 | MDU6SXNzdWU3OTI3NTUyNDk= | 1,776 | [Question & Bug Report] Can we preprocess a dataset on the fly? | {
"avatar_url": "https://avatars.githubusercontent.com/u/14048129?v=4",
"events_url": "https://api.github.com/users/shuaihuaiyi/events{/privacy}",
"followers_url": "https://api.github.com/users/shuaihuaiyi/followers",
"following_url": "https://api.github.com/users/shuaihuaiyi/following{/other_user}",
"gists_u... | [] | closed | false | null | [] | null | [
{
"author_association": "MEMBER",
"body": "We are very actively working on this. How does your dataset look like in practice (number/size/type of files)?",
"created_at": "2021-01-24T10:13:14Z",
"html_url": "https://github.com/huggingface/datasets/issues/1776#issuecomment-766322913",
"id": 766322... | 2021-01-24T09:28:24Z | 2021-05-20T04:15:58Z | 2021-05-20T04:15:58Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | I know we can use `Datasets.map` to preprocess a dataset, but I'm using it with a very large corpus, which generates a huge cache file (several TB of cache from a 400 GB text file). I have no disk large enough to save it. Can we preprocess a dataset on the fly without generating a cache?
BTW, I tried raising `writer_batch_si... | {
"avatar_url": "https://avatars.githubusercontent.com/u/14048129?v=4",
"events_url": "https://api.github.com/users/shuaihuaiyi/events{/privacy}",
"followers_url": "https://api.github.com/users/shuaihuaiyi/followers",
"following_url": "https://api.github.com/users/shuaihuaiyi/following{/other_user}",
"gists_u... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1776/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1776/timeline | null | completed | null | null | 2021-01-24T09:28:24Z | 2021-05-20T04:15:58Z | 115 days, 18:47:34 | false |
https://api.github.com/repos/huggingface/datasets/issues/1775 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1775/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1775/comments | https://api.github.com/repos/huggingface/datasets/issues/1775/events | https://github.com/huggingface/datasets/issues/1775 | 792,742,120 | MDU6SXNzdWU3OTI3NDIxMjA= | 1,775 | Efficient ways to iterate the dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/11826803?v=4",
"events_url": "https://api.github.com/users/zhongpeixiang/events{/privacy}",
"followers_url": "https://api.github.com/users/zhongpeixiang/followers",
"following_url": "https://api.github.com/users/zhongpeixiang/following{/other_user}",
"g... | [] | closed | false | null | [] | null | [
{
"author_association": "CONTRIBUTOR",
"body": "It seems that selecting a subset of columns directly from the dataset, i.e., dataset[\"column\"], is slow.",
"created_at": "2021-01-24T09:26:21Z",
"html_url": "https://github.com/huggingface/datasets/issues/1775#issuecomment-766317289",
"id": 766317... | 2021-01-24T07:54:31Z | 2021-01-24T09:50:39Z | 2021-01-24T09:50:39Z | CONTRIBUTOR | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | For a large dataset that does not fit in memory, how can I select only a subset of features from each example?
If I iterate over the dataset and then select the subset of features one by one, the resulting memory usage will be huge. Are there any ways to solve this?
Thanks | {
"avatar_url": "https://avatars.githubusercontent.com/u/11826803?v=4",
"events_url": "https://api.github.com/users/zhongpeixiang/events{/privacy}",
"followers_url": "https://api.github.com/users/zhongpeixiang/followers",
"following_url": "https://api.github.com/users/zhongpeixiang/following{/other_user}",
"g... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1775/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1775/timeline | null | completed | null | null | 2021-01-24T07:54:31Z | 2021-01-24T09:50:39Z | 1:56:08 | false |
https://api.github.com/repos/huggingface/datasets/issues/1774 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1774/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1774/comments | https://api.github.com/repos/huggingface/datasets/issues/1774/events | https://github.com/huggingface/datasets/issues/1774 | 792,730,559 | MDU6SXNzdWU3OTI3MzA1NTk= | 1,774 | is it possible to make slice to be more compatible like python list and numpy? | {
"avatar_url": "https://avatars.githubusercontent.com/u/7607120?v=4",
"events_url": "https://api.github.com/users/world2vec/events{/privacy}",
"followers_url": "https://api.github.com/users/world2vec/followers",
"following_url": "https://api.github.com/users/world2vec/following{/other_user}",
"gists_url": "h... | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
... | null | [
{
"author_association": "MEMBER",
"body": "Hi ! Thanks for reporting.\r\nI am working on changes in the way data are sliced from arrow. I can probably fix your issue with the changes I'm doing.\r\nIf you have some code to reproduce the issue it would be nice so I can make sure that this case will be support... | 2021-01-24T06:15:52Z | 2024-01-31T15:54:18Z | 2024-01-31T15:54:18Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | Hi,
see the error below:
```
AssertionError: Requested slice [:10000000000000000] incompatible with 20 examples.
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1774/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1774/timeline | null | completed | null | null | 2021-01-24T06:15:52Z | 2024-01-31T15:54:18Z | 1102 days, 9:38:26 | false |
https://api.github.com/repos/huggingface/datasets/issues/1773 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1773/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1773/comments | https://api.github.com/repos/huggingface/datasets/issues/1773/events | https://github.com/huggingface/datasets/issues/1773 | 792,708,160 | MDU6SXNzdWU3OTI3MDgxNjA= | 1,773 | bug in loading datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.git... | [] | closed | false | null | [] | null | [
{
"author_association": "MEMBER",
"body": "Looks like an issue with your csv file. Did you use the right delimiter ?\r\nApparently at line 37 the CSV reader from pandas reads 2 fields instead of 1.",
"created_at": "2021-01-24T15:11:37Z",
"html_url": "https://github.com/huggingface/datasets/issues/17... | 2021-01-24T02:53:45Z | 2021-09-06T08:54:46Z | 2021-08-04T18:13:01Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | Hi,
I need to load a dataset, and I use these commands:
```
from datasets import load_dataset
dataset = load_dataset('csv', data_files={'train': 'sick/train.csv',
'test': 'sick/test.csv',
'validation': 'sick/validation.csv'})
prin... | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1773/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1773/timeline | null | completed | null | null | 2021-01-24T02:53:45Z | 2021-08-04T18:13:01Z | 192 days, 15:19:16 | false |
https://api.github.com/repos/huggingface/datasets/issues/1772 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1772/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1772/comments | https://api.github.com/repos/huggingface/datasets/issues/1772/events | https://github.com/huggingface/datasets/issues/1772 | 792,703,797 | MDU6SXNzdWU3OTI3MDM3OTc= | 1,772 | Adding SICK dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.git... | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | [] | null | [] | 2021-01-24T02:15:31Z | 2021-02-05T15:49:25Z | 2021-02-05T15:49:25Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | Hi
It would be great to include the SICK dataset.
## Adding a Dataset
- **Name:** SICK
- **Description:** a well known entailment dataset
- **Paper:** http://marcobaroni.org/composes/sick.html
- **Data:** http://marcobaroni.org/composes/sick.html
- **Motivation:** this is an important NLI benchmark
Instruction... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1772/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1772/timeline | null | completed | null | null | 2021-01-24T02:15:31Z | 2021-02-05T15:49:25Z | 12 days, 13:33:54 | false |
https://api.github.com/repos/huggingface/datasets/issues/1771 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1771/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1771/comments | https://api.github.com/repos/huggingface/datasets/issues/1771/events | https://github.com/huggingface/datasets/issues/1771 | 792,701,276 | MDU6SXNzdWU3OTI3MDEyNzY= | 1,771 | Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.2.1/datasets/csv/csv.py | {
"avatar_url": "https://avatars.githubusercontent.com/u/7607120?v=4",
"events_url": "https://api.github.com/users/world2vec/events{/privacy}",
"followers_url": "https://api.github.com/users/world2vec/followers",
"following_url": "https://api.github.com/users/world2vec/following{/other_user}",
"gists_url": "h... | [] | closed | false | null | [] | null | [
{
"author_association": "NONE",
"body": "I temporary manually download csv.py as custom dataset loading script",
"created_at": "2021-01-24T02:24:04Z",
"html_url": "https://github.com/huggingface/datasets/issues/1771#issuecomment-766278609",
"id": 766278609,
"issue_url": "https://api.github.c... | 2021-01-24T01:53:52Z | 2021-01-24T23:06:29Z | 2021-01-24T23:06:29Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | Hi,
When I load_dataset from local csv files, the error below happened; it looks like raw.githubusercontent.com was blocked by the Chinese government. But why does it need to download csv.py? Shouldn't it be included when pip installing the dataset?
```
Traceback (most recent call last):
File "/home/tom/pyenv/pystory/lib/python3.6/site-p... | {
"avatar_url": "https://avatars.githubusercontent.com/u/7607120?v=4",
"events_url": "https://api.github.com/users/world2vec/events{/privacy}",
"followers_url": "https://api.github.com/users/world2vec/followers",
"following_url": "https://api.github.com/users/world2vec/following{/other_user}",
"gists_url": "h... | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1771/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1771/timeline | null | completed | null | null | 2021-01-24T01:53:52Z | 2021-01-24T23:06:29Z | 21:12:37 | false |
https://api.github.com/repos/huggingface/datasets/issues/1770 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1770/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1770/comments | https://api.github.com/repos/huggingface/datasets/issues/1770/events | https://github.com/huggingface/datasets/issues/1770 | 792,698,148 | MDU6SXNzdWU3OTI2OTgxNDg= | 1,770 | how can I combine 2 dataset with different/same features? | {
"avatar_url": "https://avatars.githubusercontent.com/u/7607120?v=4",
"events_url": "https://api.github.com/users/world2vec/events{/privacy}",
"followers_url": "https://api.github.com/users/world2vec/followers",
"following_url": "https://api.github.com/users/world2vec/following{/other_user}",
"gists_url": "h... | [] | closed | false | null | [] | null | [
{
"author_association": "MEMBER",
"body": "Hi ! Currently we don't have a way to `zip` datasets but we plan to add this soon :)\r\nFor now you'll need to use `map` to add the fields from one dataset to the other. See the comment here for more info : https://github.com/huggingface/datasets/issues/853#issueco... | 2021-01-24T01:26:06Z | 2022-06-01T15:43:15Z | 2022-06-01T15:43:15Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | To combine 2 datasets with a one-to-one mapping, like ds = zip(ds1, ds2):
ds1: {'text'}, ds2: {'text'}, combined ds: {'src', 'tgt'}
or with different features:
ds1: {'src'}, ds2: {'tgt'}, combined ds: {'src', 'tgt'} | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1770/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1770/timeline | null | completed | null | null | 2021-01-24T01:26:06Z | 2022-06-01T15:43:15Z | 493 days, 14:17:09 | false |
https://api.github.com/repos/huggingface/datasets/issues/1769 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1769/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1769/comments | https://api.github.com/repos/huggingface/datasets/issues/1769/events | https://github.com/huggingface/datasets/issues/1769 | 792,523,284 | MDU6SXNzdWU3OTI1MjMyODQ= | 1,769 | _pickle.PicklingError: Can't pickle typing.Union[str, NoneType]: it's not the same object as typing.Union when calling datasets.map with num_proc=2 | {
"avatar_url": "https://avatars.githubusercontent.com/u/14048129?v=4",
"events_url": "https://api.github.com/users/shuaihuaiyi/events{/privacy}",
"followers_url": "https://api.github.com/users/shuaihuaiyi/followers",
"following_url": "https://api.github.com/users/shuaihuaiyi/following{/other_user}",
"gists_u... | [] | closed | false | null | [] | null | [
{
"author_association": "NONE",
"body": "More information: `run_mlm.py` will raise same error when `data_args.line_by_line==True`\r\n\r\nhttps://github.com/huggingface/transformers/blob/9152f16023b59d262b51573714b40325c8e49370/examples/language-modeling/run_mlm.py#L300\r\n",
"created_at": "2021-01-23T10... | 2021-01-23T10:13:00Z | 2022-10-05T12:38:51Z | 2022-10-05T12:38:51Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | It may be a bug in multiprocessing with Datasets; when I disable multiprocessing by setting num_proc to None, everything works fine.
The script I use is https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm_wwm.py
Script args:
```
--model_name_or_path
../../../model/chine... | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1769/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1769/timeline | null | completed | null | null | 2021-01-23T10:13:00Z | 2022-10-05T12:38:51Z | 620 days, 2:25:51 | false |
https://api.github.com/repos/huggingface/datasets/issues/1768 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1768/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1768/comments | https://api.github.com/repos/huggingface/datasets/issues/1768/events | https://github.com/huggingface/datasets/pull/1768 | 792,150,745 | MDExOlB1bGxSZXF1ZXN0NTYwMDgyNzIx | 1,768 | Mention kwargs in the Dataset Formatting docs | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [] | 2021-01-22T16:43:20Z | 2021-01-31T12:33:10Z | 2021-01-25T09:14:59Z | CONTRIBUTOR | null | null | null | Hi,
This was discussed in Issue #1762 where the docs didn't mention that keyword arguments to `datasets.Dataset.set_format()` are allowed.
To prevent people from having to check the code/method docs, I just added a couple of lines in the docs.
Please let me know your thoughts on this.
Thanks,
Gunjan
@lho... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1768/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1768/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1768.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1768",
"merged_at": "2021-01-25T09:14:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1768.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 2021-01-22T16:43:20Z | 2021-01-25T09:14:59Z | 2 days, 16:31:39 | true |
https://api.github.com/repos/huggingface/datasets/issues/1767 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1767/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1767/comments | https://api.github.com/repos/huggingface/datasets/issues/1767/events | https://github.com/huggingface/datasets/pull/1767 | 792,068,497 | MDExOlB1bGxSZXF1ZXN0NTYwMDE2MzE2 | 1,767 | Add Librispeech ASR | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_use... | [] | closed | false | null | [] | null | [
{
"author_association": "CONTRIBUTOR",
"body": "> Awesome thank you !\r\n> \r\n> The dummy data are quite big but it was expected given that the raw files are flac files.\r\n> Given that the script doesn't even read the flac files I think we can remove them. Or maybe use empty flac files (see [here](https:/... | 2021-01-22T14:54:37Z | 2021-01-25T20:38:07Z | 2021-01-25T20:37:42Z | CONTRIBUTOR | null | null | null | This PR adds the librispeech asr dataset: https://www.tensorflow.org/datasets/catalog/librispeech
There are 2 configs: "clean" and "other". The "clean" config has two "train" datasets, hence the names "train.100" and "train.360".
As suggested by @lhoestq, due to the enormous size of the dataset in `.arrow` f... | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_use... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1767/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1767/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1767.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1767",
"merged_at": "2021-01-25T20:37:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1767.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 2021-01-22T14:54:37Z | 2021-01-25T20:37:42Z | 3 days, 5:43:05 | true |
https://api.github.com/repos/huggingface/datasets/issues/1764 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1764/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1764/comments | https://api.github.com/repos/huggingface/datasets/issues/1764/events | https://github.com/huggingface/datasets/issues/1764 | 791,486,860 | MDU6SXNzdWU3OTE0ODY4NjA= | 1,764 | Connection Issues | {
"avatar_url": "https://avatars.githubusercontent.com/u/12455298?v=4",
"events_url": "https://api.github.com/users/SaeedNajafi/events{/privacy}",
"followers_url": "https://api.github.com/users/SaeedNajafi/followers",
"following_url": "https://api.github.com/users/SaeedNajafi/following{/other_user}",
"gists_u... | [] | closed | false | null | [] | null | [
{
"author_association": "NONE",
"body": "Academic WIFI was blocking.",
"created_at": "2021-01-21T21:00:19Z",
"html_url": "https://github.com/huggingface/datasets/issues/1764#issuecomment-764938716",
"id": 764938716,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/1764"... | 2021-01-21T20:56:09Z | 2021-01-21T21:00:19Z | 2021-01-21T21:00:02Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | Today, I am getting connection issues while loading a dataset and the metric.
```
Traceback (most recent call last):
File "src/train.py", line 180, in <module>
train_dataset, dev_dataset, test_dataset = create_race_dataset()
File "src/train.py", line 130, in create_race_dataset
train_dataset = load_da... | {
"avatar_url": "https://avatars.githubusercontent.com/u/12455298?v=4",
"events_url": "https://api.github.com/users/SaeedNajafi/events{/privacy}",
"followers_url": "https://api.github.com/users/SaeedNajafi/followers",
"following_url": "https://api.github.com/users/SaeedNajafi/following{/other_user}",
"gists_u... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1764/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1764/timeline | null | completed | null | null | 2021-01-21T20:56:09Z | 2021-01-21T21:00:02Z | 0:03:53 | false |
https://api.github.com/repos/huggingface/datasets/issues/1766 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1766/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1766/comments | https://api.github.com/repos/huggingface/datasets/issues/1766/events | https://github.com/huggingface/datasets/issues/1766 | 792,044,105 | MDU6SXNzdWU3OTIwNDQxMDU= | 1,766 | Issues when run two programs compute the same metrics | {
"avatar_url": "https://avatars.githubusercontent.com/u/8089862?v=4",
"events_url": "https://api.github.com/users/lamthuy/events{/privacy}",
"followers_url": "https://api.github.com/users/lamthuy/followers",
"following_url": "https://api.github.com/users/lamthuy/following{/other_user}",
"gists_url": "https:/... | [] | closed | false | null | [] | null | [
{
"author_association": "MEMBER",
"body": "Hi ! To avoid collisions you can specify a `experiment_id` when instantiating your metric using `load_metric`. It will replace \"default_experiment\" with the experiment id that you provide in the arrow filename. \r\n\r\nAlso when two `experiment_id` collide we're ... | 2021-01-22T14:22:55Z | 2021-02-02T10:38:06Z | 2021-02-02T10:38:06Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | I got the following error when running two different programs that both compute sacrebleu metrics. It seems that both read and write to the same location (.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow) where it caches the batches:
```
File "train_matching_min.py", line 160, in <module>ch... | {
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1766/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1766/timeline | null | completed | null | null | 2021-01-22T14:22:55Z | 2021-02-02T10:38:06Z | 10 days, 20:15:11 | false |
https://api.github.com/repos/huggingface/datasets/issues/1765 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1765/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1765/comments | https://api.github.com/repos/huggingface/datasets/issues/1765/events | https://github.com/huggingface/datasets/issues/1765 | 791,553,065 | MDU6SXNzdWU3OTE1NTMwNjU= | 1,765 | Error iterating over Dataset with DataLoader | {
"avatar_url": "https://avatars.githubusercontent.com/u/1295082?v=4",
"events_url": "https://api.github.com/users/EvanZ/events{/privacy}",
"followers_url": "https://api.github.com/users/EvanZ/followers",
"following_url": "https://api.github.com/users/EvanZ/following{/other_user}",
"gists_url": "https://api.g... | [] | closed | false | null | [] | null | [
{
"author_association": "COLLABORATOR",
"body": "Instead of:\r\n```python\r\ndataloader = torch.utils.data.DataLoader(encoded_dataset, batch_sampler=32)\r\n```\r\nIt should be:\r\n```python\r\ndataloader = torch.utils.data.DataLoader(encoded_dataset, batch_size=32)\r\n```\r\n\r\n`batch_sampler` accepts a Sa... | 2021-01-21T22:56:45Z | 2022-10-28T02:16:38Z | 2021-01-23T03:44:14Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | I have a Dataset that I've mapped a tokenizer over:
```
encoded_dataset.set_format(type='torch',columns=['attention_mask','input_ids','token_type_ids'])
encoded_dataset[:1]
```
```
{'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]),
'input_ids': tensor([[ 101, 178, 1198, 1400, 1714, 22233, 2... | {
"avatar_url": "https://avatars.githubusercontent.com/u/1295082?v=4",
"events_url": "https://api.github.com/users/EvanZ/events{/privacy}",
"followers_url": "https://api.github.com/users/EvanZ/followers",
"following_url": "https://api.github.com/users/EvanZ/following{/other_user}",
"gists_url": "https://api.g... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1765/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1765/timeline | null | completed | null | null | 2021-01-21T22:56:45Z | 2021-01-23T03:44:14Z | 1 day, 4:47:29 | false |
https://api.github.com/repos/huggingface/datasets/issues/1763 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1763/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1763/comments | https://api.github.com/repos/huggingface/datasets/issues/1763/events | https://github.com/huggingface/datasets/pull/1763 | 791,389,763 | MDExOlB1bGxSZXF1ZXN0NTU5NDU3MTY1 | 1,763 | PAWS-X: Fix csv Dictreader splitting data on quotes | {
"avatar_url": "https://avatars.githubusercontent.com/u/9641196?v=4",
"events_url": "https://api.github.com/users/gowtham1997/events{/privacy}",
"followers_url": "https://api.github.com/users/gowtham1997/followers",
"following_url": "https://api.github.com/users/gowtham1997/following{/other_user}",
"gists_ur... | [] | closed | false | null | [] | null | [] | 2021-01-21T18:21:01Z | 2021-01-22T10:14:33Z | 2021-01-22T10:13:45Z | CONTRIBUTOR | null | null | null |
```python
from datasets import load_dataset
# load english paws-x dataset
datasets = load_dataset('paws-x', 'en')
print(len(datasets['train'])) # outputs 49202 but official dataset has 49401 pairs
print(datasets['train'].unique('label')) # outputs [1, 0, -1] but labels are binary [0,1]
... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1763/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1763/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1763.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1763",
"merged_at": "2021-01-22T10:13:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1763.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 2021-01-21T18:21:01Z | 2021-01-22T10:13:45Z | 15:52:44 | true |
https://api.github.com/repos/huggingface/datasets/issues/1762 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1762/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1762/comments | https://api.github.com/repos/huggingface/datasets/issues/1762/events | https://github.com/huggingface/datasets/issues/1762 | 791,226,007 | MDU6SXNzdWU3OTEyMjYwMDc= | 1,762 | Unable to format dataset to CUDA Tensors | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [
{
"author_association": "MEMBER",
"body": "Hi ! You can get CUDA tensors with\r\n\r\n```python\r\ndataset.set_format(\"torch\", columns=columns, device=\"cuda\")\r\n```\r\n\r\nIndeed `set_format` passes the `**kwargs` to `torch.tensor`",
"created_at": "2021-01-21T15:41:55Z",
"html_url": "https://git... | 2021-01-21T15:31:23Z | 2021-02-02T07:13:22Z | 2021-02-02T07:13:22Z | CONTRIBUTOR | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | Hi,
I came across this [link](https://huggingface.co/docs/datasets/torch_tensorflow.html) where the docs show how to convert a dataset to a particular format. I see that there is an option to convert it to tensors, but I don't see any option to convert it to CUDA tensors.
I tried this, but Dataset doesn't suppor... | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url"... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1762/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1762/timeline | null | completed | null | null | 2021-01-21T15:31:23Z | 2021-02-02T07:13:22Z | 11 days, 15:41:59 | false |
https://api.github.com/repos/huggingface/datasets/issues/1761 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1761/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1761/comments | https://api.github.com/repos/huggingface/datasets/issues/1761/events | https://github.com/huggingface/datasets/pull/1761 | 791,150,858 | MDExOlB1bGxSZXF1ZXN0NTU5MjUyMzEw | 1,761 | Add SILICONE benchmark | {
"avatar_url": "https://avatars.githubusercontent.com/u/1551356?v=4",
"events_url": "https://api.github.com/users/eusip/events{/privacy}",
"followers_url": "https://api.github.com/users/eusip/followers",
"following_url": "https://api.github.com/users/eusip/following{/other_user}",
"gists_url": "https://api.g... | [] | closed | false | null | [] | null | [
{
"author_association": "CONTRIBUTOR",
"body": "Thanks for the feedback. All your comments have been addressed!",
"created_at": "2021-01-26T12:22:53Z",
"html_url": "https://github.com/huggingface/datasets/pull/1761#issuecomment-767508192",
"id": 767508192,
"issue_url": "https://api.github.co... | 2021-01-21T14:29:12Z | 2021-02-04T14:32:48Z | 2021-01-26T13:50:31Z | CONTRIBUTOR | null | null | null | My collaborators and I within the Affective Computing team at Telecom Paris would like to re-submit our spoken dialogue dataset for publication.
This is a new pull request relative to the [previously closed request](https://github.com/huggingface/datasets/pull/1712) which was reviewed by @lhoestq.
| {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1761/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1761/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1761.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1761",
"merged_at": "2021-01-26T13:50:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1761.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 2021-01-21T14:29:12Z | 2021-01-26T13:50:31Z | 4 days, 23:21:19 | true |
https://api.github.com/repos/huggingface/datasets/issues/1760 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1760/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1760/comments | https://api.github.com/repos/huggingface/datasets/issues/1760/events | https://github.com/huggingface/datasets/pull/1760 | 791,110,857 | MDExOlB1bGxSZXF1ZXN0NTU5MjE3MjY0 | 1,760 | More tags | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [
{
"author_association": "MEMBER",
"body": "Conll has `multilingual` but is only tagged as `en`",
"created_at": "2021-01-21T15:20:29Z",
"html_url": "https://github.com/huggingface/datasets/pull/1760#issuecomment-764716388",
"id": 764716388,
"issue_url": "https://api.github.com/repos/huggingfa... | 2021-01-21T13:50:10Z | 2021-01-22T09:40:01Z | 2021-01-22T09:40:00Z | MEMBER | null | null | null | Since the hub v2 is going to be released soon I figured it would be great to add the missing tags at least for some of the datasets of reference listed [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#write-the-loadingprocessing-code) | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1760/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1760/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1760.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1760",
"merged_at": "2021-01-22T09:40:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1760.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 2021-01-21T13:50:10Z | 2021-01-22T09:40:00Z | 19:49:50 | true |
https://api.github.com/repos/huggingface/datasets/issues/1759 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1759/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1759/comments | https://api.github.com/repos/huggingface/datasets/issues/1759/events | https://github.com/huggingface/datasets/issues/1759 | 790,992,226 | MDU6SXNzdWU3OTA5OTIyMjY= | 1,759 | wikipedia dataset incomplete | {
"avatar_url": "https://avatars.githubusercontent.com/u/19912393?v=4",
"events_url": "https://api.github.com/users/ChrisDelClea/events{/privacy}",
"followers_url": "https://api.github.com/users/ChrisDelClea/followers",
"following_url": "https://api.github.com/users/ChrisDelClea/following{/other_user}",
"gist... | [] | closed | false | null | [] | null | [
{
"author_association": "MEMBER",
"body": "Hi !\r\nFrom what pickle file do you get this ?\r\nI guess you mean the dataset loaded using `load_dataset` ?",
"created_at": "2021-01-21T15:27:30Z",
"html_url": "https://github.com/huggingface/datasets/issues/1759#issuecomment-764721531",
"id": 7647215... | 2021-01-21T11:47:15Z | 2021-01-21T17:22:11Z | 2021-01-21T17:21:06Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | Hey guys,
I am using the https://github.com/huggingface/datasets/tree/master/datasets/wikipedia dataset.
Unfortunately, I found out that there is an incompleteness for the German dataset.
For reasons unknown to me, the number of inhabitants has been removed from many pages:
Thorey-sur-Ouche has 128 inhabitants a... | {
"avatar_url": "https://avatars.githubusercontent.com/u/19912393?v=4",
"events_url": "https://api.github.com/users/ChrisDelClea/events{/privacy}",
"followers_url": "https://api.github.com/users/ChrisDelClea/followers",
"following_url": "https://api.github.com/users/ChrisDelClea/following{/other_user}",
"gist... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1759/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1759/timeline | null | completed | null | null | 2021-01-21T11:47:15Z | 2021-01-21T17:21:06Z | 5:33:51 | false |
https://api.github.com/repos/huggingface/datasets/issues/1758 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1758/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1758/comments | https://api.github.com/repos/huggingface/datasets/issues/1758/events | https://github.com/huggingface/datasets/issues/1758 | 790,626,116 | MDU6SXNzdWU3OTA2MjYxMTY= | 1,758 | dataset.search() (elastic) cannot reliably retrieve search results | {
"avatar_url": "https://avatars.githubusercontent.com/u/49048309?v=4",
"events_url": "https://api.github.com/users/afogarty85/events{/privacy}",
"followers_url": "https://api.github.com/users/afogarty85/followers",
"following_url": "https://api.github.com/users/afogarty85/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [
{
"author_association": "MEMBER",
"body": "Hi !\r\nI tried your code on my side and I was able to workaround this issue by waiting a few seconds before querying the index.\r\nMaybe this is because the index is not updated yet on the ElasticSearch side ?",
"created_at": "2021-01-21T16:06:24Z",
"html_... | 2021-01-21T02:26:37Z | 2021-01-22T00:25:50Z | 2021-01-22T00:25:50Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | I am trying to use Elasticsearch to retrieve the indices of items in the dataset in their precise order, given shuffled training indices.
The problem I have is that I cannot retrieve reliable results with my data on my first search. I have to run the search **twice** to get the right answer.
I am indexing data t... | {
"avatar_url": "https://avatars.githubusercontent.com/u/49048309?v=4",
"events_url": "https://api.github.com/users/afogarty85/events{/privacy}",
"followers_url": "https://api.github.com/users/afogarty85/followers",
"following_url": "https://api.github.com/users/afogarty85/following{/other_user}",
"gists_url"... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1758/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1758/timeline | null | completed | null | null | 2021-01-21T02:26:37Z | 2021-01-22T00:25:50Z | 21:59:13 | false |
https://api.github.com/repos/huggingface/datasets/issues/1757 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1757/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1757/comments | https://api.github.com/repos/huggingface/datasets/issues/1757/events | https://github.com/huggingface/datasets/issues/1757 | 790,466,509 | MDU6SXNzdWU3OTA0NjY1MDk= | 1,757 | FewRel | {
"avatar_url": "https://avatars.githubusercontent.com/u/6183050?v=4",
"events_url": "https://api.github.com/users/dspoka/events{/privacy}",
"followers_url": "https://api.github.com/users/dspoka/followers",
"following_url": "https://api.github.com/users/dspoka/following{/other_user}",
"gists_url": "https://ap... | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | [] | null | [
{
"author_association": "NONE",
"body": "+1",
"created_at": "2021-01-21T11:40:06Z",
"html_url": "https://github.com/huggingface/datasets/issues/1757#issuecomment-764584875",
"id": 764584875,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/1757",
"node_id": "MDEyOkl... | 2021-01-20T23:56:03Z | 2021-03-09T02:52:05Z | 2021-03-08T14:34:52Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ## Adding a Dataset
- **Name:** FewRel
- **Description:** Large-Scale Supervised Few-Shot Relation Classification Dataset
- **Paper:** @inproceedings{han2018fewrel,
title={FewRel: A Large-Scale Supervised Few-Shot Relation Classification Dataset with State-of-the-Art Evaluation},
auth... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1757/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1757/timeline | null | completed | null | null | 2021-01-20T23:56:03Z | 2021-03-08T14:34:52Z | 46 days, 14:38:49 | false |
https://api.github.com/repos/huggingface/datasets/issues/1756 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1756/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1756/comments | https://api.github.com/repos/huggingface/datasets/issues/1756/events | https://github.com/huggingface/datasets/issues/1756 | 790,380,028 | MDU6SXNzdWU3OTAzODAwMjg= | 1,756 | Ccaligned multilingual translation dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4",
"events_url": "https://api.github.com/users/flozi00/events{/privacy}",
"followers_url": "https://api.github.com/users/flozi00/followers",
"following_url": "https://api.github.com/users/flozi00/following{/other_user}",
"gists_url": "https:... | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | [] | null | [] | 2021-01-20T22:18:44Z | 2021-03-01T10:36:21Z | 2021-03-01T10:36:21Z | CONTRIBUTOR | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- CCAligned consists of parallel or comparable web-document pairs in 137 languages aligned with English. These web-document pairs were constructed by performing language ... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1756/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1756/timeline | null | completed | null | null | 2021-01-20T22:18:44Z | 2021-03-01T10:36:21Z | 39 days, 12:17:37 | false |
https://api.github.com/repos/huggingface/datasets/issues/1755 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1755/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1755/comments | https://api.github.com/repos/huggingface/datasets/issues/1755/events | https://github.com/huggingface/datasets/issues/1755 | 790,324,734 | MDU6SXNzdWU3OTAzMjQ3MzQ= | 1,755 | Using select/reordering datasets slows operations down immensely | {
"avatar_url": "https://avatars.githubusercontent.com/u/49048309?v=4",
"events_url": "https://api.github.com/users/afogarty85/events{/privacy}",
"followers_url": "https://api.github.com/users/afogarty85/followers",
"following_url": "https://api.github.com/users/afogarty85/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [
{
"author_association": "MEMBER",
"body": "You can use `Dataset.flatten_indices()` to make it fast after a select or shuffle.",
"created_at": "2021-01-20T21:39:53Z",
"html_url": "https://github.com/huggingface/datasets/issues/1755#issuecomment-763966244",
"id": 763966244,
"issue_url": "https... | 2021-01-20T21:12:12Z | 2021-01-20T22:03:39Z | 2021-01-20T22:03:39Z | NONE | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | I am using portions of HF's helpful work in preparing / scoring the SQuAD 2.0 data. The problem I have is that after using `select` to re-order the dataset, computations slow down immensely: the total scoring process on 131k training examples that would take maybe 3 minutes now takes over an hour.
The below examp... | {
"avatar_url": "https://avatars.githubusercontent.com/u/49048309?v=4",
"events_url": "https://api.github.com/users/afogarty85/events{/privacy}",
"followers_url": "https://api.github.com/users/afogarty85/followers",
"following_url": "https://api.github.com/users/afogarty85/following{/other_user}",
"gists_url"... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1755/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1755/timeline | null | completed | null | null | 2021-01-20T21:12:12Z | 2021-01-20T22:03:39Z | 0:51:27 | false |
https://api.github.com/repos/huggingface/datasets/issues/1754 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1754/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1754/comments | https://api.github.com/repos/huggingface/datasets/issues/1754/events | https://github.com/huggingface/datasets/pull/1754 | 789,881,730 | MDExOlB1bGxSZXF1ZXN0NTU4MTU5NjEw | 1,754 | Use a config id in the cache directory names for custom configs | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | [] | 2021-01-20T11:11:00Z | 2021-01-25T09:12:07Z | 2021-01-25T09:12:06Z | MEMBER | null | null | null | As noticed by @JetRunner there were some issues when trying to generate a dataset using a custom config that is based on an existing config.
For example in the following code the `mnli_custom` would reuse the cache used to create `mnli` instead of generating a new dataset with the new label classes:
```python
from ... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1754/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1754/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1754.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1754",
"merged_at": "2021-01-25T09:12:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1754.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 2021-01-20T11:11:00Z | 2021-01-25T09:12:06Z | 4 days, 22:01:06 | true |
https://api.github.com/repos/huggingface/datasets/issues/1753 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1753/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1753/comments | https://api.github.com/repos/huggingface/datasets/issues/1753/events | https://github.com/huggingface/datasets/pull/1753 | 789,867,685 | MDExOlB1bGxSZXF1ZXN0NTU4MTQ3Njkx | 1,753 | fix comet citations | {
"avatar_url": "https://avatars.githubusercontent.com/u/17256847?v=4",
"events_url": "https://api.github.com/users/ricardorei/events{/privacy}",
"followers_url": "https://api.github.com/users/ricardorei/followers",
"following_url": "https://api.github.com/users/ricardorei/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [] | 2021-01-20T10:52:38Z | 2021-01-20T14:39:30Z | 2021-01-20T14:39:30Z | NONE | null | null | null | I realized COMET citations were not showing in the hugging face metrics page:
<img width="814" alt="Screenshot 2021-01-20 at 09 48 44" src="https://user-images.githubusercontent.com/17256847/105164848-8b9da900-5b0d-11eb-9e20-a38f559d2037.png">
This pull request is intended to fix that.
Thanks! | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1753/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1753/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1753.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1753",
"merged_at": "2021-01-20T14:39:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1753.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 2021-01-20T10:52:38Z | 2021-01-20T14:39:30Z | 3:46:52 | true |
https://api.github.com/repos/huggingface/datasets/issues/1752 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1752/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1752/comments | https://api.github.com/repos/huggingface/datasets/issues/1752/events | https://github.com/huggingface/datasets/pull/1752 | 789,822,459 | MDExOlB1bGxSZXF1ZXN0NTU4MTA5NTA5 | 1,752 | COMET metric citation | {
"avatar_url": "https://avatars.githubusercontent.com/u/17256847?v=4",
"events_url": "https://api.github.com/users/ricardorei/events{/privacy}",
"followers_url": "https://api.github.com/users/ricardorei/followers",
"following_url": "https://api.github.com/users/ricardorei/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | [
{
"author_association": "NONE",
"body": "I think its better to create a new branch with this fix. I forgot I was still using the old branch.",
"created_at": "2021-01-20T10:27:07Z",
"html_url": "https://github.com/huggingface/datasets/pull/1752#issuecomment-763503821",
"id": 763503821,
"issue... | 2021-01-20T09:54:43Z | 2021-01-20T10:27:07Z | 2021-01-20T10:25:02Z | NONE | null | null | null | In my last pull request to add COMET metric, the citations where not following the usual "format". Because of that they where not correctly displayed on the website:
<img width="814" alt="Screenshot 2021-01-20 at 09 48 44" src="https://user-images.githubusercontent.com/17256847/105158000-686efb80-5b05-11eb-8bb0-9c8... | {
"avatar_url": "https://avatars.githubusercontent.com/u/17256847?v=4",
"events_url": "https://api.github.com/users/ricardorei/events{/privacy}",
"followers_url": "https://api.github.com/users/ricardorei/followers",
"following_url": "https://api.github.com/users/ricardorei/following{/other_user}",
"gists_url"... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1752/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1752/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1752.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1752",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1752.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1752"
} | 2021-01-20T09:54:43Z | 2021-01-20T10:25:02Z | 0:30:19 | true |
https://api.github.com/repos/huggingface/datasets/issues/1751 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1751/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1751/comments | https://api.github.com/repos/huggingface/datasets/issues/1751/events | https://github.com/huggingface/datasets/pull/1751 | 789,232,980 | MDExOlB1bGxSZXF1ZXN0NTU3NjA1ODE2 | 1,751 | Updated README for the Social Bias Frames dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4",
"events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}",
"followers_url": "https://api.github.com/users/mcmillanmajora/followers",
"following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}",
... | [] | closed | false | null | [] | null | [] | 2021-01-19T17:53:00Z | 2021-01-20T14:56:52Z | 2021-01-20T14:56:52Z | CONTRIBUTOR | null | null | null | See the updated card at https://github.com/mcmillanmajora/datasets/tree/add-SBIC-card/datasets/social_bias_frames. I incorporated information from the [SBIC data statement](https://homes.cs.washington.edu/~msap/social-bias-frames/DATASTATEMENT.html), paper, and the corpus README file included with the dataset download. | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "htt... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1751/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1751/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1751.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1751",
"merged_at": "2021-01-20T14:56:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1751.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 2021-01-19T17:53:00Z | 2021-01-20T14:56:52Z | 21:03:52 | true |
https://api.github.com/repos/huggingface/datasets/issues/1750 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1750/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1750/comments | https://api.github.com/repos/huggingface/datasets/issues/1750/events | https://github.com/huggingface/datasets/pull/1750 | 788,668,085 | MDExOlB1bGxSZXF1ZXN0NTU3MTM1MzM1 | 1,750 | Fix typo in README.md of cnn_dailymail | {
"avatar_url": "https://avatars.githubusercontent.com/u/2755894?v=4",
"events_url": "https://api.github.com/users/forest1988/events{/privacy}",
"followers_url": "https://api.github.com/users/forest1988/followers",
"following_url": "https://api.github.com/users/forest1988/following{/other_user}",
"gists_url":... | [] | closed | false | null | [] | null | [
{
"author_association": "CONTRIBUTOR",
"body": "Good catch, thanks!",
"created_at": "2021-01-19T09:48:20Z",
"html_url": "https://github.com/huggingface/datasets/pull/1750#issuecomment-762728197",
"id": 762728197,
"issue_url": "https://api.github.com/repos/huggingface/datasets/issues/1750",
... | 2021-01-19T03:06:05Z | 2021-01-19T11:07:29Z | 2021-01-19T09:48:43Z | CONTRIBUTOR | null | null | null | When I read the README.md of `CNN/DailyMail Dataset`, there seems to be a typo `CCN`.
I am afraid this is a trivial matter, but I would like to make a suggestion for revision. | {
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1750/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1750/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1750.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1750",
"merged_at": "2021-01-19T09:48:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1750.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 2021-01-19T03:06:05Z | 2021-01-19T09:48:43Z | 6:42:38 | true |